Shouldn't the reflection at the boundary interfere with the original wave to give no wave at all?
Question: I have been taking a waves course, and one thing that I don't understand is how reflection even allows normal modes of vibration. I'll try to explain my confusion: suppose we have a string which is fixed at one of its ends, and I set up a travelling wave of some frequency on the string. This wave travels along the string in the positive x direction and then it reaches the fixed end. Upon reflection, a reflected wave with phase shifted by π travels in the negative x direction, and so this means the two destructively interfere and give no wave. In all the books I have read (French, Crawford, Howard...) they say that setting up a travelling wave with a certain specific frequency establishes a standing wave on the string, but shouldn't the original and reflected waves, no matter what the frequency, destructively interfere and give no wave? So my questions are: Does reflection of a wave always produce a π phase-shifted reflected wave? If it does, then how do the reflected wave and the original wave interfere to give a standing wave? I do understand how mathematically solving the wave equation with the proper boundary conditions gives the normal mode / stationary wave solution, and I also know how the method of images is incorporated, but I still don't see why the virtual or image pulse should have the same phase while the reflected wave (hence the virtual wave) must be π shifted in the case of rigid fixed ends. Thank you for your time and help. Answer: You're right: the reflected wave, being shifted by $\pi$, interferes destructively with the incoming wave and gives a zero total disturbance at the end of the string. If you think about it, the $\pi$ phase shift actually derives from the fact that you want a zero disturbance at the boundary: you indeed derive it by applying the boundary condition to the solution of the wave equation. You can think of it as being an "ad hoc" shift to fit the boundary.
The reason why they only interfere destructively at the boundary is that the incoming and reflected waves travel in different directions, so at every point of the string the total disturbance (the sum of the two waves, and therefore their interference effect) is time dependent. The only points at which the total disturbance is constant are the boundary, because "you wanted it to be that way" upon applying the boundary condition, and the nodes of the stationary wave, because they are the particular points of the string at which the contribution of each wave to the total disturbance changes with time, but the two balance out at every instant to give zero overall displacement. Interestingly enough, if the reflection actually caused destructive interference everywhere, you would only need a mirror to eliminate every electromagnetic field arriving at it at normal incidence (with the other appropriate boundary conditions to obtain standing waves, of course).
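The superposition can also be checked numerically. Here is a minimal sketch (the amplitude A, wavenumber k and angular frequency w are arbitrary illustrative values; the fixed end is placed at x = 0):

```python
import numpy as np

A, k, w = 1.0, 2.0, 3.0
x = np.linspace(0.0, np.pi, 201)  # fixed end at x = 0

def total_disturbance(x, t):
    incident = A * np.sin(k * x - w * t)   # travels in +x
    reflected = A * np.sin(k * x + w * t)  # pi-shifted copy, travels in -x
    return incident + reflected

for t in (0.0, 0.3, 0.6):
    y = total_disturbance(x, t)
    # zero disturbance at the fixed end at every instant...
    assert abs(y[0]) < 1e-12
    # ...but a standing wave 2A sin(kx) cos(wt) everywhere else
    assert np.allclose(y, 2 * A * np.sin(k * x) * np.cos(w * t))
print("destructive interference only at the nodes, not everywhere")
```

The two waves cancel permanently only at the nodes; elsewhere their sum oscillates in time, which is exactly the standing-wave pattern.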
{ "domain": "physics.stackexchange", "id": 56891, "tags": "newtonian-mechanics, waves, vibrations" }
Confusion between precision and recall
Question: I have a machine learning model that tries to fingerprint the functions in a binary file against a corpus. The final output, upon inputting a binary file, is a table with a one-to-one mapping between the binary functions and the corpus functions, as follows. As you can see from the names, some of the functions are correct while the others are incorrect. Is there a way to calculate precision and recall for the above result? I understand that precision and recall make sense if I am doing other ML tasks such as image classification. Using a confusion matrix will help to calculate both metrics easily. However, I am confused and feel that I cannot use such measures, as it is just a one-to-one mapping which is either true or false. If precision and recall do not make sense, are there any other metrics I could use to evaluate the model? Thank you! Answer: First of all, precision and recall are not specific to image classification; they are relevant wherever there are two distinct "positive" and "negative" classes (for example, when you test an e-mail for "spam/not-spam", or a blood sample for "has virus/does not have virus"). You can read more on this question on Cross Validated, but to sum it up - precision is the probability that a sample is positive if the test said it is, and recall is the probability that a positive sample will be reported as positive by the test. False positives mess up your precision, and false negatives mess up your recall. Now, your task appears to be one of multi-class classification - with at least 17 classes, from your example. I wouldn't go with precision/recall for this - you can only do it pair-wise for pairs of classes. You can, however, plot a CxC confusion matrix (where C is the number of classes) and investigate where your model tends to miss. There's an implementation in SKLearn (link). If you need a single-number metric, I'd start with just accuracy (and develop from there).
Following Nuclear Wang's comment, I'd also suggest looking at Cohen's Kappa (see the explanation on Cross Validated) to better account for class imbalance. To read more on multi-class classification, see this question. I'd also recommend this blog post on Towards Data Science.
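To make the multi-class view concrete, here is a minimal sketch in plain Python; the function names in `results` are hypothetical stand-ins for the asker's mapping table:

```python
from collections import Counter

# Hypothetical (true_function, predicted_function) pairs from the mapping table
results = [
    ("memcpy", "memcpy"), ("strlen", "strlen"), ("qsort", "bsearch"),
    ("strlen", "memcpy"), ("qsort", "qsort"), ("memcpy", "memcpy"),
]

confusion = Counter(results)  # confusion[(true, pred)] = count
accuracy = sum(t == p for t, p in results) / len(results)
print(f"accuracy = {accuracy:.2f}")  # 4 of 6 mappings correct

# One-vs-rest precision and recall per class
labels = sorted({t for t, _ in results} | {p for _, p in results})
for c in labels:
    tp = confusion[(c, c)]
    predicted = sum(n for (t, p), n in confusion.items() if p == c)
    actual = sum(n for (t, p), n in confusion.items() if t == c)
    precision = tp / predicted if predicted else 0.0
    recall = tp / actual if actual else 0.0
    print(f"{c}: precision={precision:.2f} recall={recall:.2f}")
```

The single-number accuracy is just the fraction of correct rows in the table, while the loop gives the pair-wise, per-class view the answer mentions.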
{ "domain": "datascience.stackexchange", "id": 7904, "tags": "classification, multiclass-classification, metric" }
Feynman finite state machine
Question: In his lectures on computer science, Feynman talks about finite state machines; he presents a simple delay finite state machine: "Let me now give a specific example of an FSM that actually does something, albeit something pretty trivial - a delay machine. You feed it a stimulus and, after a pause, it responds with the same stimulus. That's all it does. Figure 3.4 shows the 'state diagram' of such a delay machine." I just don't see the utility of using two states here; why not just use a machine with one state? Answer: Your machine outputs the character it just received; Feynman's outputs the previous one. More specifically, when Feynman's automaton receives the sequence of stimuli $s_1, s_2, \dots$, its output is $x, s_1, s_2, \dots$, where $x=0$ if the automaton starts in state $1$ and $x=1$ otherwise. Your automaton outputs $s_1, s_2, \dots$ given the same sequence of stimuli.
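The one-step delay behaviour the answer describes can be simulated directly. A minimal sketch (encoding each state simply as the remembered bit is my own illustrative choice):

```python
def delay_machine(stimuli, start_state=0):
    """Feynman-style delay FSM: the state remembers the previous stimulus."""
    state, output = start_state, []
    for s in stimuli:
        output.append(state)  # emit the previous stimulus (or x on the first step)
        state = s             # transition: remember the current stimulus
    return output

print(delay_machine([1, 0, 1, 1]))  # -> [0, 1, 0, 1]
```

The output sequence is the input shifted right by one step, with the start state supplying the very first symbol; a one-state machine has nothing to remember, so it can only echo the current input.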
{ "domain": "cs.stackexchange", "id": 4425, "tags": "automata, finite-automata, computation-models" }
localization: WARNING: could not obtain transform from map to odom. Error was Unable to lookup. Could not transform measurement into odom. Ignoring
Question: hello, I have a nomadic scout 2-wheel robot with a kinect sensor that is able to navigate inside a map by means of the ros (groovy) navigation stack. An odometry topic is provided by the robot, and the /odom --> /base_link transform too. I want to feed the ekf_localization node with the position the robot provided by our Vicon MoCap system, in order to fuse it with the odometry information. I have a map, provided by a map server, and the robot pose coming from Vicon is published as a PoseWithCovarianceStamped topic, with ref_frame_id = 'map'. So I want to use ekf_localization node in the second mode, with world_frame = 'map', and I expect the /map --> /odom tf from ekf_localization node. What I see is that ekf_localization node is publishing /odom --> /base_link transform too, overwriting the odometry-generated transform each other. So the ekf_localization node seems working in the first mode, and it does not sense the /odom --> /base_link tf, already published by the robot. this is my ekf launch file: <launch> <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true"> <param name="frequency" value="30"/> <param name="sensor_timeout" value="0.1"/> <param name="two_d_mode" value="true"/> <param name="pose0" value="/scoutPose"/> <param name="odom0" value="/odom"/> <rosparam param="pose0_config">[true, true, false, true, true, true, false, false, false, false, false, false]</rosparam> <rosparam param="odom0_config">[true, true, false, true, true, true, false, false, false, false, false, false]</rosparam> <param name="pose0_differential" value="true"/> <param name="odom0_differential" value="false"/> <param name="debug" value="true"/> <param name="debug_out_file" value="/home/robouser/debug_ekf_localization.txt"/> <param name="map_frame" value="map"/> <param name="world_frame" value="map"/> <param name="odom_frame" value="odom"/> <param name="base_link_frame" value="base_link"/> <rosparam 
param="process_noise_covariance">[0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.002, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.002, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.004]</rosparam> </node> </launch> and this is the vicon launch file: <launch> <node pkg="vicon_bridge" type="vicon_bridge" name="vicon" output="screen"> <param name="stream_mode" value="ClientPull" type="str" /> <param name="datastream_hostport" value="ivy:801" type="str" /> <param name="tf_ref_frame_id" value="/map" type="str" /> </node> </launch> the debug file shows the warning message shown in the subject. What could be the problem? thank you in advance. Roberto Originally posted by roberto colella on ROS Answers with karma: 1 on 2015-09-17 Post score: 0 Original comments Comment by Tom Moore on 2015-09-18: Please post a sample message for every input topic. Also, how did you check that the EKF is also publishing the odom->base_link transform? Are you inferring that from the warning, or did you use another method? Comment by Tom Moore on 2015-09-22: I believe that the transform was being published, but what I'm not sure of is that the EKF was publishing it. Your odometry source will often broadcast that transform as well. If your world_frame is set to map, the EKF will not broadcast a transform from odom->base_link. 
Comment by roberto colella on 2015-09-22: With world_frame = map the ekf should publish the map --> odom transform. What happens is that it publishes the odom --> base_link transform instead. I am sure that the EKF was publishing the odom --> base_link transform; I checked with tf_monitor, rviz, rostopic echo... Answer: Adding this as an answer, as a comment isn't long enough. Re: which transform the EKF is publishing, a few things: First, see the code here. The filter should always use your world_frame parameter as the parent frame for its tf broadcast. You're using the groovy version, and perhaps there's something going on there that I don't recall, but I'm pretty sure it works as well. The tools you listed tell you what transforms are being broadcast, but only tf_monitor can tell you which node is broadcasting them. Can you please post the output of tf_monitor? If for whatever reason the EKF is publishing that transform (as determined by the tf_monitor output and the test I suggested above), then please post your full launch file for every node you're running. EDIT: Removing things you've already investigated, sorry. Originally posted by Tom Moore with karma: 13689 on 2015-09-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 22655, "tags": "navigation, robot-localization" }
Independent sources of information on radiation pollution in Europe
Question: Bear with me if this is the wrong place to ask this question. I asked this question on Physics at StackExchange and was told it would be more on topic here. Recent news shows a fire has started close to the former nuclear plant in Chernobyl because someone was burning grass. The Ukrainian State Scientific and Technical Center for Nuclear and Radiation Safety explained: the fire in the exclusion zone has been extinguished. [...] The updated modeling results indicate that the potential maximum concentrations of Cs-137 in the surface layer in Kyiv were lower than 0.1 mBq/m3. The forecasted concentrations are less than those observed previously and more than 1000 times lower than the acceptable levels established by NRBU-97. They do not pose any harm to the health of the population. The information has been confirmed (translation mine) by the Polish National Atomic Energy Agency: With regard to false information appearing in public space about the alleged occurrence of radioactive contamination in Poland, the National Atomic Energy Agency strongly denies this type of information. The radiation situation in the territory of the Republic of Poland remains normal. There is no threat to the health and life of the population in the country. On the other hand, we have received contradicting information from several friends, family and "trusted" sources that there was a threat to our safety and we should stay at home for two days and close the doors and windows to avoid contamination. I consider this fake news, but I am also aware that sometimes governmental agencies delay notifying the public for political or other reasons. For instance, when there was the Chernobyl disaster in 1986, some scientists at one of the Polish physics institutes had access to a Geiger counter. They started to carry out their own measurements of contamination, but they were told to stop doing that by the institute authorities, who were connected to the Communist Party at that time.
There used to be a similar problem with smog air pollution in my country and other European countries. There were some governmental measurements of air pollution, but there was also some controversy about how they measured air pollution, how frequently they reported it and how they defined safety thresholds. In response to that problem, multiple companies started to produce low-cost devices that you can install at your home to measure air pollution. This provides a source of information independent from governmental agencies. Are there similar initiatives, agencies, companies, etc. that measure radiation pollution in Europe? Answer: Two websites I've come across that might be of interest are: Radioactivity Environmental Monitoring by the Joint Research Centre, for Europe. Radmon.org, which provides radiation information from a number of global stations.
{ "domain": "earthscience.stackexchange", "id": 2048, "tags": "air-pollution, pollution, radioactivity" }
Writing a Simple Image Publisher (C++) tutorial Segmentation Fault
Question: I am running the ROS Kinetic full desktop install on a Raspberry Pi 3B running Debian Jessie. These are fresh installs with nothing else added. I followed the Simple Image Publisher/Subscriber tutorials and can successfully compile. When I run my_publisher (C++) I get a Segmentation Fault. If I start up python on the command line and do "import cv2" I get a Segmentation Fault. I suspect the two are related. I have all the dependencies, cv_bridge, vision_opencv. I also see cv2.so in /opt/ros/kinetic/lib/python2.7/dist-packages. I am unclear if I need to install opencv on my own or if it is part of the ROS install. But if it isn't part of the install, why do I have a cv2.so in my ros directory? I have done a complete sd card reformat and fresh install of Jessie and then the ROS desktop install multiple times and I keep getting the same result. I have tried catkin packages and ros-packages. Can anyone give me some pointers? Originally posted by FAD0 on ROS Answers with karma: 16 on 2016-11-27 Post score: 0 Original comments Comment by gvdhoorn on 2016-11-27: Any time you run into SEGFAULTs, run the binary (preferably a version with debug symbols) in gdb and, after getting the SEGFAULT, get gdb to print a backtrace for you (bt). Please add the code of the tutorial C++ node you mentioned. Answer: I figured out the problem. I was publishing a jpeg file but had not installed the image_transport_plugins, so it couldn't handle the jpeg format. Once I installed the plugins it worked. Originally posted by FAD0 with karma: 16 on 2016-11-29 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2016-11-29: Not having some plugins around on the system should not result in SEGFAULTs, warnings / errors at most. Could you report this over at the image_transport issue tracker? Comment by FAD0 on 2016-12-01: gvdhoorn: I am very new to ROS.
There were too many possible things I could have been doing wrong in my environment to know if it was the lack of the proper plugins that was causing the segfault. So I consider this user error. Comment by gvdhoorn on 2016-12-01: You may be new, but we should make sure that plugin discovery / loading code does not cause crashes when no plugins are present on a system. That's why I feel it's important to report this.
{ "domain": "robotics.stackexchange", "id": 26345, "tags": "ros, opencv, image, publisher" }
Python Prime Numbers Code Problem
Question: I was trying to write my own code for primes in Python. I know that code already exists, but I am doing this to challenge my knowledge and make my own solution. I was wondering if any of you guys would tell me what's wrong with it.

factor_list = [ ]

def integer_test(x):
    if type(x) == int:
        return True
    else:
        return False

def factor_checker(x):
    count = 0
    y = x
    while count <= y:
        print(count)
        count += 1
        if integer_test(x / count) == True:
            factor_list.append(count)

factor_checker(45)
print(factor_list)

When I try to input a number for factor_checker and then look at the factor list, I just get an empty list. What is wrong? Thanks. Answer: You don't get an empty list. See this. However you won't get the desired results, because in Python 2 an int divided by an int is always an int. You may want to try this:

import math

factor_list = [ ]

def integer_test(x):
    if math.floor(x) == math.ceil(x):
        return True
    else:
        return False

def factor_checker(x):
    count = 0
    y = x
    while count <= y:
        print(count)
        count += 1
        if integer_test(1.0 * x / count) == True:
            factor_list.append(count)

factor_checker(45)
print(factor_list)

In this code x / count is a float, so integer_test only returns True if its ceiling and floor are the same number, i.e. the value is a whole number.
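An alternative (my own suggestion, not from the answer) is to avoid floating point altogether and test divisibility with the modulo operator, which behaves the same in Python 2 and 3:

```python
def factors(x):
    """Return all positive divisors of x, using integer modulo."""
    return [d for d in range(1, x + 1) if x % d == 0]

print(factors(45))  # [1, 3, 5, 9, 15, 45]
```

A number is then prime exactly when its factor list is [1, x].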
{ "domain": "cs.stackexchange", "id": 2870, "tags": "primes" }
Parity violating metrics
Question: Is there an example of a parity-violating metric? If so, what do they look like? Are the Einstein equations parity invariant? What does it mean for a manifold or a metric to be parity invariant? Answer: To prove parity invariance of a set of equations: First decide how each term in each equation would change under parity inversion. E.g. vectors change sign, pseudo-vectors do not, etc. Replace each term by what it would be under parity inversion. Find out if the resulting equations are precisely the same as the ones you started with. Tensor notation makes this pretty quick in the case of GR. You mainly need to think about the stress-energy tensor.
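As an illustrative sketch (my own, not from the answer) of the "replace each term" step applied to a metric: under a parity inversion $x^i \to -x^i$ we have $dt \to dt$ and $dx^i \to -dx^i$, so a generic line element transforms as $$ds^2 = g_{00}\,dt^2 + 2g_{0i}\,dt\,dx^i + g_{ij}\,dx^i dx^j \;\longrightarrow\; g_{00}\,dt^2 - 2g_{0i}\,dt\,dx^i + g_{ij}\,dx^i dx^j,$$ with the components now evaluated at the inverted point. The purely temporal and purely spatial terms are even, while each mixed term picks up a sign, so the metric is parity invariant only if the transformed line element equals the original, e.g. the mixed components must satisfy $g_{0i}(t,-x) = -g_{0i}(t,x)$.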
{ "domain": "physics.stackexchange", "id": 76759, "tags": "general-relativity, metric-tensor, parity" }
Could someone explain to me what the parentheses with tensors mean?
Question: The symmetric part of a tensor is denoted using parentheses as: $$T_{(ab)}=\dfrac{1}{2}(T_{ab}+T_{ba})$$ And the antisymmetric part as: $$T_{[ab]}=\dfrac{1}{2}(T_{ab}-T_{ba})$$ With that in mind, here is a tensor equation obtained from General Relativity and Gravitation: $$ \dfrac{1}{2} R_{abcd}u^d=\omega_{c[a;b]}+\sigma_{c[a;b]}+\dfrac{1}{3}h_{c[a}\Theta_{,b]} -\dot{u}_{c;[b}u_{a]}+....$$ Note that the last term of the equation, $\dot{u}_{c;[b}u_{a]}$, can't be explained with those definitions in mind. Is there something basic I'm missing? Answer: Define $T_{cba} = \dot{u}_{c;b}u_{a}$; then this is simply $T_{c[ba]}$, which is to say $(T_{cba}-T_{cab})/2$, which is $(\dot{u}_{c;b}u_{a} - \dot{u}_{c;a}u_{b})/2$.
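The index manipulation can be sanity-checked numerically. A minimal sketch with a random rank-3 array standing in for $T_{cba} = \dot{u}_{c;b}u_a$ (the numbers themselves are arbitrary; only the index pattern matters):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))  # stand-in for T_{cba}

# Antisymmetrize over the last two indices: T_{c[ba]} = (T_{cba} - T_{cab}) / 2
T_anti = 0.5 * (T - np.swapaxes(T, 1, 2))

# Spot-check one component against the definition
c, b, a = 0, 1, 2
assert np.isclose(T_anti[c, b, a], 0.5 * (T[c, b, a] - T[c, a, b]))
# The result is antisymmetric in the bracketed index pair
assert np.allclose(T_anti, -np.swapaxes(T_anti, 1, 2))
print("T_{c[ba]} matches (T_{cba} - T_{cab}) / 2")
```

Swapping the two trailing axes implements exchanging the bracketed indices, so the brackets act only on the pair they enclose while the free index c goes along for the ride.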
{ "domain": "physics.stackexchange", "id": 44700, "tags": "general-relativity, differential-geometry, tensor-calculus, notation" }
Event scheduler in C
Question: One of my university assignments asked us to create a program using struct in order to create a simple event scheduler. This program is for a single day only, not multiple days. Sample Input / Output For a few reasons, I've opted to create a .GIF to display the sample input / output: An example save file: 8 30 dentist_appointment 14 0 pickup_friend_from_airport 17 0 buisness_meeting_at_the_office 20 30 dinner_reservation Disclaimer / Notice: The following code is not to be copied without proper credit. It is licensed under cc by-sa 3.0 with attribution required. #include <stdio.h> #include <string.h> #include <stdbool.h> #include <stdlib.h> #include <ctype.h> #define _MAX_EVENTS 10 // 10 Events Max #define _MAX_DESCRIPTION 101 // 100 Character Description Max typedef struct { // typedef a struct called event int hour; // Store the hour / HH int minute; // Store the minute / MM char description[_MAX_DESCRIPTION]; // Store the event description } event; // Print the menu selection void printMenu() { puts("+------ SCHEDULER ------+\n" "| 1. New Event |\n" "| 2. Delete Event |\n" "| 3. Display Schedule |\n" "| 4. Save Schedule |\n" "| 5. Load Schedule |\n" "| 6. 
Exit |\n" "+-----------------------+\n"); } // Return true if an event is NULL, false otherwise bool isNull(const event *e) { return e == NULL; } // Allocate memory for and initialize an event event *initEvent() { event *e = (event*)malloc(sizeof(event)); e->hour = 0; e->minute = 0; strcpy(e->description, ""); return e; } // Take user input until value is between min and max inclusive, return the input int inputRange(const int min, const int max) { int input = 0; char temp[21]; char *prompt = "| Enter a number between %d and %d: "; printf(prompt, min, max); fgets(temp, 21, stdin); input = atoi(temp); while (input > max || input < min) { // Data validation printf(prompt, min, max); fgets(temp, 21, stdin); input = atoi(temp); } return input; } // Setup a new event with user input and return a pointer to the same event event* newEvent(event *e) { if (isNull(e)) { // If e is NULL e = initEvent(); // Initialize it } char *seperator = "+--------------------------------+"; printf("\n%s\n| NEW EVENT |\n%s\n\n", seperator, seperator); puts("+---------- EVENT TIME ----------+"); e->hour = inputRange(0, 23); e->minute = inputRange(0, 59); puts(seperator); puts("\n+--- EVENT DESCRIPTION ---+"); printf("%s", "| Enter a description: "); fgets(e->description, _MAX_DESCRIPTION, stdin); puts("+-------------------------+\n"); puts("| Event successfully added.\n"); return e; } // Add an event to an event list at a specified index void addEventAtIndex(event list[], const event e, const int i) { if (isNull(&e)) { // if our event is NULL, return return; } list[i].hour = e.hour; list[i].minute = e.minute; strcpy(list[i].description, e.description); } // Insertion sort by swapping struct members void sort(event list[], const int size) { for (int i = 1; i < size; i++) { for (int j = i; j > 0 && (list[j - 1].hour > list[j].hour || (list[j - 1].hour == list[j].hour && list[j - 1].minute > list[j].minute)); j--) { int hourJ = list[j].hour; int minuteJ = list[j].minute; char 
descriptionJ[_MAX_DESCRIPTION]; strcpy(descriptionJ, list[j].description); int hourJMinus1 = list[j - 1].hour; int minuteJMinus1 = list[j - 1].minute; char descriptionJMinus1[_MAX_DESCRIPTION]; strcpy(descriptionJMinus1, list[j - 1].description); list[j].hour = hourJMinus1; list[j].minute = minuteJMinus1; strcpy(list[j].description, descriptionJMinus1); list[j - 1].hour = hourJ; list[j - 1].minute = minuteJ; strcpy(list[j - 1].description, descriptionJ); } } } // Add an event to an event list by sorting it into position void sortInsert(event list[], int *size, event e) { addEventAtIndex(list, e, *size); // Add event to the end of the list (*size)++; // Increment size // Insertion Sort sort(list, *size); } // Display an event in a readable format: [ID] HH:MM - DESCRIPTION void printEvent(const event e) { char h1 = { (e.hour / 10) + '0' }; // Extract the first digit and convert to char (if any, else 0) char h2 = { (e.hour - (e.hour / 10) * 10) + '0' }; // Extract the second digit and convert to char char m1 = { (e.minute / 10) + '0' }; char m2 = { (e.minute - (e.minute / 10) * 10) + '0' }; printf("%c%c:%c%c - %s", h1, h2, m1, m2, e.description); } // Display all events in an event list void printEventList(const event list[], const int size) { if (size == 0) { puts("\n| You have no events scheduled!\n"); return; } char *seperator = "+--------------------------------+"; printf("\n%s\n| MY SCHEDULE |\n%s\n\n", seperator, seperator); for (int i = 0; i < size; i++) { printf("| [%d] ", i); printEvent(list[i]); } putchar('\n'); } // Delete an event from an event list void deleteEvent(event list[], int *size) { if (*size == 0) { // If list is empty puts("\n| Event list already empty.\n"); return; } char temp[21]; int id; char *seperator = "\n+--------------------------------+"; printf("%s\n| DELETE EVENT |%s\n\n", seperator, seperator); for (int i = 0; i < *size; i++) { // Display the event list so the user can see which event to delete printf("| [%d] ", i); 
printEvent(list[i]); } printf("%s", "\n| Enter the ID of an event to delete: "); fgets(temp, 21, stdin); id = atoi(temp); if (id > *size - 1) { printf("\n| No event located at %d\n", id); return; } printf("| Event [%d] deleted successfully.\n\n", id); // Set hour and minute to some trivially large value for sorting purposes list[id].hour = 99; list[id].minute = 99; strcpy(list[id].description, ""); if (id != (*size - 1)) { // If the event to remove is already last, there's no need to sort it to last sort(list, *size); } (*size)--; // Decrement the size of the list } // Replace all spaces in a string with an underscore char *encode(char *s) { for (int i = 0; i < strlen(s); i++) { if (s[i] == ' ') { s[i] = '_'; } } return s; } // Replace all underscores in a string with an spaces char *decode(char *s) { for (int i = 0; i < strlen(s); i++) { if (s[i] == '_') { s[i] = ' '; } } return s; } // Save an event list to file void saveEventList(char *filename, event list[], int size) { FILE *f = fopen(filename, "w"); if (f == NULL) { // If our file is NULL, return return; } for (int i = 0; i < size; i++) { fprintf(f, "%d %d %s", list[i].hour, list[i].minute, encode(list[i].description)); // Encode the description (replace spaces with underscores) before saving it into the file } printf("\n| %d %s successfully saved into \"%s\".\n\n", size, (size == 1) ? 
"event" : "events", filename); // Tenary expression to make sure we're grammatically correct fclose(f); } // Load an event list from file void loadEventList(char *filename, event list[], int *size) { FILE *f = fopen(filename, "r"); char temp[6 + _MAX_DESCRIPTION]; // ## ## MAX_DESCRIPTION_LENGTH if (f == NULL) { printf("\n| File \"%s\" not found.\n\n", filename); return; } *size = 0; // Set size to 0 while (fgets(temp, sizeof(temp), f)) { char *word = strtok(temp, " "); // Use space as the token delimiter, get the first token (hour) list[*size].hour = atoi(word); // Store the token into the list word = strtok(NULL, " "); // Get the second token (minute) list[*size].minute = atoi(word); word = strtok(NULL, " "); // Get the third token (description) strcpy(list[*size].description, decode(word)); // Decode our word before copying it (remove underscores) (*size)++; // Increment size with each line (event) added } printf("\n| %d %s successfully loaded from \"%s\".\n", *size, (*size == 1) ? "event" : "events", filename); printEventList(list, *size); // Display the event list when finished, show the user what's been loaded } int main() { event list[_MAX_EVENTS]; int index = 0; // Number of elements in list int selection = 0; char file[FILENAME_MAX]; char response = 'Y'; char temp[21]; while (selection != 6) { printMenu(); // Print the menu printf("%s", "| Please select an option: "); // Prompt for input fgets(temp, 21, stdin); selection = atoi(temp); // Convert string input to int switch (selection) { case 1: // New Event if (index + 1 > _MAX_EVENTS) { printf("| You can only have %d active events at one time!\n\n", index); break; } sortInsert(list, &index, *newEvent(&list[index])); break; case 2: // Delete Event deleteEvent(list, &index); break; case 3: // Display Schedule printEventList(list, index); break; case 4: // Save Schedule if (index == 0) { // No events, don't save anything puts("| You have no events in your schedule!\n"); } else { printf("%s", "| Please enter a 
\"filename.txt\": "); fgets(file, FILENAME_MAX, stdin); strtok(file, "\n"); // Strip newline from filename saveEventList(file, list, index); } break; case 5: // Load Schedule if (index > 0) { printf("%s", "| Are you sure you want to discard your current schedule? (Y/N): "); response = toupper(getc(stdin)); char c; while (((c = getchar()) != '\n') && (c != EOF)); // Clear buffer, from getc(); } if (response == 'Y') { printf("%s", "| Please enter a \"filename.txt\": "); fgets(file, FILENAME_MAX, stdin); strtok(file, "\n"); // Strip newline from filename loadEventList(file, list, &index); } break; case 6: // Exit Program puts("\n| Thank you!\n"); break; default: // Error puts("\n| Error in selection\n"); break; } } } Questions / Concerns Hopefully I've improved since my last major code review. I've tried to keep the comments useful, so tell me if I'm still being redundant or need to tone that down. There's a few things I have kept in mind while writing this, including not repeating myself as best as possible. Was the ASCII formatting when printing a good idea? Or should that just be left when showing the menu? In what ways can I improve, what shouldn't I be doing, and what should I be doing? Answer: First of all: Nice piece of code you delivered here. Good start! I think you did quite well - the overall structure is solid and I think the intent of most of the variables are clear to the reader. Here are some things I noticed, while looking at your code in no particular order: Method isNull: I think the method isNull isn't really necessary, since the only thing it does is check, whether something is equal to NULL. I think if (somePointer == NULL) is just as clear as if (isNull(somePointer)). You could even use if (!somePtr), which does exactly the same and I think is clear enough as well. I know this piece of software is not about performance, but you could definitely save the extra overhead of a function call here. You even did it in saveEventList - if (f != NULL). 
do-while in inputRange: This part of your code: printf(prompt, min, max); fgets(temp, 21, stdin); input = atoi(temp); while (input > max || input < min) { // Data validation printf(prompt, min, max); fgets(temp, 21, stdin); input = atoi(temp); } can be written a little shorter by using a do-while loop. Something like this: do { printf(prompt, min, max); fgets(temp, 21, stdin); input = atoi(temp); } while (input > max || input < min); No need for casting malloc: You do this for example in initEvent and there is no need for explicit casting. Actually, using an explicit cast is discouraged, as described here. Check malloc for NULL: One thing you should definitely do is check the return value of malloc. When there is an allocation problem, malloc will return NULL and you should be able to handle it. In such a case, this: event *e = (event*)malloc(sizeof(event)); // e is NULL e->hour = 0; e->minute = 0; strcpy(e->description, ""); would end in undefined behavior, since you try to dereference NULL. In such a case initEvent() could return NULL as well and the caller would have to handle it (print a warning that the event couldn't get created). event *e = (event*)malloc(sizeof(event)); if (!e) { return NULL; } ... Minor: Print what is about to get read in: I think you should print an initial message before expecting input from the user (instead of printing just the range, print what is about to get read in - hour, minute). e->hour = inputRange(0, 23); e->minute = inputRange(0, 59); Why bother checking NULL on a non-pointer?: I'm confident that this part in addEventAtIndex() is unnecessary: if (isNull(&e)) { // if our event is NULL, return return; } since you passed the actual structure to the function. You would already get an error when trying to dereference NULL (there is no way to accidentally pass a struct with the address NULL).
Swap method: The swapping part of your sort function should be in an extra function for clarity, and I also think that you can shorten it quite a bit: void swapEvents(event list[], int index1, int index2) { int tmpHour = list[index1].hour; int tmpMinute = list[index1].minute; char tmpDescription[_MAX_DESCRIPTION]; strcpy(tmpDescription, list[index1].description); list[index1].hour = list[index2].hour; list[index1].minute = list[index2].minute; strcpy(list[index1].description, list[index2].description); list[index2].hour = tmpHour; list[index2].minute = tmpMinute; strcpy(list[index2].description, tmpDescription); } This would essentially replace your loop-body in the sorting algorithm. Good use of variables in printEvent: I think it's a good thing that you used extra variables in printEvent instead of putting all the arithmetic as printf-arguments. It strongly supports readability. Final thoughts: This was probably a really good exercise for a beginner. Nice job you did there. You could think about using an array of pointers to events instead of events. When doing so, sorting will significantly improve, since you don't have to copy whole structs back and forth, but only swap pointers - lightning fast.
{ "domain": "codereview.stackexchange", "id": 19196, "tags": "beginner, c, datetime, database, to-do-list" }
Deriving the equivalent capacitance in a series circuit formula
Question: When we derive the formula for the effective capacitance in series, we say: $$Q/C_{eqv} = Q/C_1 + Q/C_2 + Q/C_3$$ (if there were 3 capacitors in this case). We would then cancel $Q$ to obtain the formula. I understand why each capacitor has the same charge, but why does the effective capacitor have the same charge as each individual capacitor? I'd expect the effective capacitor to store a total charge of 3Q (in the given example), not Q? When the capacitor discharges, would the overall amount of charge released not be 3Q (i.e. the overall charge of the capacitors)? I saw a similar question on here, and it was answered by explaining that the 'inner capacitors' are isolated from the rest of the circuit, and the +Q and -Q charges cancel? But even so, the isolated charges can trigger electron flow from the 'outer capacitors' during discharge. If anyone can clear up these doubts, I would be grateful. Answer: Let's imagine a series of three $0.6F$ capacitors, being charged by a $120V$ battery. Each capacitor will end up with $40V$ across it and a charge of $24C$, from $Q=CV$. This can happen by charge (electrons) leaving one capacitor, e.g. the right plate of the left capacitor, and ending up on the left plate of the second capacitor (similarly for the other capacitor) - but only $24C$ has flowed through the battery. (big numbers, but it's just an example) The battery has charged the combination with $24C$ using $120V$ and so the effective capacitance must be $0.2F$. Also if the combination were to discharge through a resistor, only $24C$ would flow through it. Charge cannot flow through a capacitor, and the only charge flowing through the resistor would be due to electrons leaving the left plate of the left capacitor and a similar number in the wire moving onto the right plate of the right capacitor.
{ "domain": "physics.stackexchange", "id": 81936, "tags": "electric-circuits, voltage, capacitance" }
Why can't a superconductor make a DC motor self-sustaining?
Question: Superconducting wire can host a low current magnetic field. I do not know if it supports a corresponding electrical field. Can a superconducting wire that sustains a current accelerate a DC motor? Where is the resistance in a superconducting homopolar motor? Please explain if I am way off of target. What am I getting wrong? Answer: There is no reason why you couldn't build a motor using superconducting magnets, or build a simpler homopolar motor using a length of superconducting wire. There would be no heat dissipated due to electrical resistance, but of course there would still be mechanical resistance due to friction on the moving parts. The energy needed to overcome this, and to drive any load to which the motor is attached, would come from the power source, just as in any other electrical motor. Thermodynamically, what must happen is that the current drops to zero as the motor accelerates, as the energy in the circulating current gets converted into kinetic energy. Presumably this is because the acceleration causes the charges to experience a force component opposite to their direction of travel through the wire. My electromagnetics is too rusty for me to be confident about the mechanism that causes this - but the key point is that what happens is that the force that stops the current is due to the fact that the charges in the wire are moving through a magnetic field, which is different from electrical resistance.
{ "domain": "physics.stackexchange", "id": 5746, "tags": "electromagnetism, energy-conservation, electric-circuits, superconductivity, perpetual-motion" }
Does gravity increase the closer to the core you get?
Question: Or does the mantle and crust above you counteract the increase at one point and it actually decreases? Answer: The figure below, taken from Wikipedia, shows a model of the free fall acceleration, i.e., 'gravity'. The left-most point corresponds to the center of the Earth; further right, at about $6300$ km, you are at the Earth's surface; and further out still you move into space. You can follow the blue line for PREM to get an idea of the average (expected) gravity. As you see, the gravity actually increases slightly within the Earth (reaching a maximum at the core-mantle boundary), but tapers down within the core. To make this kind of calculation, you must think of the Earth like an onion: made up of many concentric spherical shells. By the shell theorem, the layers above you exert no net pull, so whenever you move a bit deeper into the Earth, you effectively strip off all the layers you've crossed. As you get closer to the center of the Earth, there are fewer and fewer layers left below you, and eventually, there's nothing left at the center! The reason why gravity goes up ever so slightly within the Earth is that you get close to the much denser core material. If the density of the Earth were constant (per the green 'constant density' line), the gravity would just decrease linearly with depth. See the other answer and the discussion below for some more details on the math and procedures required to make these calculations.
{ "domain": "earthscience.stackexchange", "id": 1984, "tags": "geology, geophysics, earth-observation, gravity" }
Is there any format for official Physiological/Medicine answers?
Question: Assume you have an exam which has 5 extensive questions and 60 minutes. You do not have time to cover most of it if you write everything in essay format. If you start to write essays, you do not really have time to cover all the mechanisms and so on. I just heard that one student did the exam in list format, for instance sitting -> venous return down & skeletal muscles not working correctly -> thrombus. and then describing relevant mechanisms similarly and briefly combining their roles. Another example, the morphology of H. pylori: Curved (spiral shape), Gram-negative, 2-4 micrometer long, coccoid forms in older cultures. Is there any official standard or unofficially accepted way to answer questions in Physiology/Medicine? Answer: Given the highly variable nature of such processes and systems, it's unlikely that a common standard is feasible or sensible. In medical questions, a bottom-up order like in your example usually makes sense, starting with the most specific known cause and deriving from there until you arrive at the symptoms. Treatments can branch off or into the order at the point where they apply logically. If your exam instructions ask for flowing text, your question is already answered - but if you believe you have the choice, use a note format that is appropriate by common sense. You are essentially creating a diagram, and like any diagram you have to ensure that it is self-contained and comprehensive. You can shorten the amount of time it takes to write sentences by reducing the literary quality. Of course a well-written text is more pleasing to read and mark, but the main emphasis in your grading will usually be content, meaning that you can afford to form sentences with repetitive structure, or lacking a logical connection if it is difficult to figure out quickly whether there is one or how to express it, etc.
{ "domain": "biology.stackexchange", "id": 1800, "tags": "physiology, homework, microbiology" }
Is there a theoretical limit on telescope's resolution?
Question: Is there a theoretical limit on what we can see with a telescope of fixed size based on Earth's orbit or the Moon, that is, in Earth's region of the Solar system outside the Earth's atmosphere? Why not, instead of sending probes to Pluto and Ceres, just invest efforts into building better telescopes, so as to photograph the surfaces of these bodies in greater detail? I suspect that all the necessary information in fact reaches Earth's region; it is just a matter of sensitivity and exposure to capture it, am I right? Answer: The absolute limit of a telescope's resolution is given by diffraction. No matter how perfectly built and aligned a telescope is, you cannot resolve angles smaller than $$\theta \propto \frac{\lambda}{D}$$ where $\lambda$ is the wavelength of interest and $D$ is the diameter of the telescope. This is why a number of millimeter and radio telescopes (and also some antennas) are huge. The first figure here shows the basic principle. Imagine that you can divide what you observe into tiny squares and consider each one as a point-like source. Each one will generate a diffraction pattern when passing into the telescope, and if two points are too near, you cannot distinguish between them. Wisely you put your telescope in space: turbulence in the atmosphere degrades the signal, and the best/biggest ground telescopes have a hard time going below $0.5$ arcseconds. Interferometry comes to the rescue, increasing $D$ from the telescope size to the distance between two (or more) telescopes (called the baseline). Interferometry has been used in radio astronomy for decades. From what I hear, optical interferometry is much more complicated, and as far as I know the only large scale attempt is the VLTI project. So you could imagine having a constellation of relatively small telescopes spread over hundreds of thousands or millions of kilometers.
But this has the huge problem that you have to know the position and timing of every one of them with an impressive precision (my guess is that the precision in position must be of the order of $\lambda$). The other problem is light collection. If you want to see something very faint you have two options: 1) you observe for a long time, or 2) you build a bigger telescope (6 to 40 meters in diameter). And here even the largest-baseline interferometers cannot do much, as the amount of light that they collect is just the sum of the light collected by the single telescopes. To conclude: to observe Pluto or Ceres with high enough accuracy you would need a large number of large space telescopes very far away from one another with perfect telemetry. It's far easier and cheaper to go there to take pictures.
{ "domain": "astronomy.stackexchange", "id": 546, "tags": "telescope" }
Are all models of ocean planets theoretically cloud covered?
Question: Take for example an ocean planet slightly smaller than the Earth. Let's say it has a planet-wide ocean that is on the average 10 miles deep. Say for argument sake, that there is a molten, rotating outer core producing a magnetic field and that there are plate tectonics (although not to the same degree as Earth). Also, imagine that there are volcanoes that reach as high as 5 miles below sea level. The planet's effective temperature is similar to Earth (not counting greenhouse). Would such a world without land always be covered with clouds? Would it necessarily have a runaway greenhouse? How about such a world with no vulcanism? Answer: Having given this some thought, it's a bit of a monster question with a few variables that you don't mention. Planetary tilt, which causes seasons, is one. Length of day is another: for example, a rotation like Venus' 116 days would be quite different than an Earth rotation of 24 hours, and a tidally locked planet would also be different. Atmospheric pressure is another. I'm assuming Earth-like tilt, a 24-hour day and 1 atm since you didn't mention them, but if and when we finally get a good look at Earth-like exo-planets, which, granted, might be several decades away, I think we'll see a few unexpected things. OK, moving on. Earth likely got much of its atmosphere from comets, which suggests healthy amounts of CO2, CH4 and NH3, maybe N2, some from the NO family, maybe some helium, argon and neon (those 3 I think we can mostly ignore), some hydrogen perhaps from solar flares, which might get stripped along with the helium from the planet over time. The total amount of comet and asteroid impacts and the ratio of gasses add some variability, and a really important factor is whether or not the planet has life and undergoes photosynthesis.
Photosynthesis would put O2 into the atmosphere and reduce CO2 and would also have the effect of, over time, reacting with and eliminating virtually all of the CH4 from the air and any dissolved iron in the oceans, turning the oceans from a muddy brownish/red to clear blue. Photosynthesis likely played a key role in turning the Earth from a hot planet to a snowball. So, lots of variables, but it's a fun speculative question, so I'll give it a shot. How much smaller than Earth? I'm going to go with Venus size but if you have something different in mind, let me know. 0.815 Earth masses, 6,052 km in radius and gravity 0.905 of Earth's. Source. 10-mile-deep oceans (16.09 km), surface area 4 Pi R^2: about 7.4 billion cubic km of water. (Earth, by comparison, about 1.37 billion cubic km) Source. Your planet has about 5.4 times as much water as Earth (not counting water in the crust, but you didn't mention that, so let's not go there) - and planet size doesn't affect the predictions much. I wonder how much outgassed CO2 etc. would stay dissolved in the ocean and how much would accumulate in the atmosphere? Nitrogen? I'm going to assume that the theory is correct that Earth (and your water world) gets most of its atmosphere and water from asteroids and comets Source and Source, though the 2nd source suggests some could have come from the primordial material of the Earth. If your water-world has a lot of gas trapped below its crust in its primordial material, then out-gassing from volcanism becomes a bigger factor and the volcanoes, not the atmosphere above, would keep the oceans saturated with gas. That's possible, but I'm going to assume that most of the gas is already in the atmosphere following a late heavy bombardment period. I'll touch a bit more on tectonics later. Titan probably got its thick atmosphere from outgassing Source and perhaps Venus from a recent large outgassing event too - just throwing that out there.
I'm also assuming that your planet wouldn't have permanent ice over its oceanic poles due to the warmish temperature you implied, and likely oceanic circulation not blocked by land-masses would prevent ice formation. Would it necessarily have a runaway greenhouse? Greenhouse gases: CO2, CH4, the NO family (H2O, more indirectly). If your planet has a lot of CO2/CH4, in amounts comparable to the amount of H2O, then it's unlikely the oceans could begin to dissolve enough gas, and this would lead to a runaway greenhouse, almost without question. If your planet has more Earth-like CO2/CH4 levels, perhaps driven by photosynthesis capturing carbon and released oxygen chemically reacting with the CH4, then the runaway greenhouse can be avoided and CH4 kept at very low concentration. CH4 on Earth is currently about 1.8 PPM, and prior to farming and livestock and oil drilling & fracking, which can release some CH4, it was probably less than half that much, less than 1 PPM. In a water world CH4 would probably be even less as there's no biodegrading or digestion of plant-mass releasing CH4. CO2 is more complicated and the amount dissolved in the ocean depends on the total amount available. If there's not too much CO2 to saturate the oceans, then you have an ocean/air equilibrium which is also affected by temperature. For a more detailed answer on this, look into Henry's law, but I'm going to cheat and do a quick and dirty calculation rather than use Henry's. On Earth, the oceans contain about 50 times the CO2 as the atmosphere: Source. And the oceans weigh about 265-270 times more than the atmosphere. (ocean mass given above, atmosphere mass here.) So if we estimate this as parts per million (ppm), the CO2 concentration in Earth's atmosphere is about 5 times the concentration in Earth's oceans; at higher temperatures that ratio goes up, at lower temperatures it goes down, but it never gets to the point that there's no CO2 in the air, because that's not possible in an equilibrium.
With 10-mile-deep oceans, we can assume more CO2 in the oceans, maybe a 99% vs about 98% ocean-to-air ratio currently on Earth, but the 1% in the air would depend on the total mass of CO2 in the water world's ecosystem, ignoring anything permanently trapped deep on the ocean floor, such as sea-shells. Would such a world without land always be covered with clouds? Let's say you have an Earth-like oxygen/nitrogen atmosphere, 1 bar, a planet very similar to Earth but just oceans. With similar weather (as a starting point), this planet should have more clouds, as clouds are more common over the oceans, source, and you would have no low humidity/dry pockets of land, but I would think, probably not 100% clouds, just more clouds. The cloud effect on temperature has some uncertainty to it, as clouds both reflect sunlight, making the planet colder during the day, and trap heat at night, so they play for both the warming team and the cooling team, but from what I've read, the overall effect is pretty small. Days could be colder, nights warmer. Also, evaporation tends to cool the air at the surface, which is why islands surrounded by ocean don't ever get the scorching hot temperatures you get in Death Valley for example, even if they're closer to the equator, so you wouldn't get heatwaves, but you wouldn't get freezing cold either. (I would think). Finally, oceans have lower albedo and a water-world Earth would likely have no ice-caps, so, overall, I think it's likely a water-world Earth would be warmer, primarily due to lower albedo, and this would increase atmospheric water-vapor (not clouds but transparent water vapor, which is a greenhouse gas), so a water-Earth would probably be several degrees warmer on average with no ice-caps. How much warmer - I have no idea, but I don't think a water-Earth would be a runaway greenhouse.
A water world (Earth) could also have less CO2 in the atmosphere due to more dissolved in water and probably less CH4 from decaying matter on land, so it might actually be colder, but this would more likely depend on the life cycle's O2 to CO2 ratio rather than oceanic absorption of CO2. Too many unknowns to say for sure. Another curious effect of a water-world Earth is you might get some hurricanes that last for weeks, maybe even months or years in the right conditions, kind of like mini versions of Jupiter's Great Red Spot. All you need for a hurricane to gain strength is warm water and cold air, and with no land for the hurricane to lose strength on, you could get some real jumbos. Maybe category 7s, perhaps 8s. That would be awesome. Now (and I think this is the gist of your question), what happens if you decrease the atmospheric pressure by significantly lowering the amount of O2/N2 in the atmosphere (or raising it)? Lower atmospheric pressure lowers the boiling point of water, and if you lower the atmospheric pressure enough, then water starts to boil and you get a partial water vapor atmosphere as a result of low atmospheric pressure and lots of water. I suspect this is unlikely to actually happen, as a planet should always have enough other types of gas to prevent this unlikely outcome, but a water vapor atmosphere can be approached logically, even if it's not going to really happen. Now, this isn't just high humidity, this is an actual water vapor atmosphere. I imagine a water-vapor-rich atmosphere would form droplets and rain fairly regularly and be permanent or nearly permanent cloud cover, but that's just a guess; it would probably appear more like low-to-the-ground smog than cloud cover, as the clouds might hover much closer to the surface. If you increase atmospheric pressure, then water has a higher boiling point and takes more energy to vaporize. A denser atmosphere might hold heat better and be a bit warmer.
It might also have fewer clouds due to more air and perhaps less variation in temperature and circulation, as it's circulation and warm air cooling off that's the primary driver in cloud formation. Water vapor would still form under a higher pressure atmosphere by wind and by photons. Sunlight plays an important role in evaporation, perhaps even more than temperature, based on pan evaporation studies. If you have a very thin atmosphere made almost entirely of water (getting back to our improbable example), you'd drown if you tried to breathe it, but as far as the temperature of such a planet, at least with Earth temperatures, water stays liquid even at quite low atmospheric pressure. At 1/2 PSI (1/29th of 1 atm) the boiling temperature of water is 79.6 degrees F (about 26 degrees C). Source. I'm not sure how much heat 1/29th of an atmosphere could trap even if it was mostly water, which is a greenhouse gas, so this one is hard to predict. You'd likely see wild night-to-daytime temperature swings, frequent boiling of the oceans during the day and very heavy rainfall at night, much more rain than we ever see on Earth, as 1/29th of an atm of water vapor is several times more water than is in the Earth's atmosphere at any given time. If a water-vapor atmosphere planet (unlikely) got enough heat from its sun, it could certainly turn into a runaway greenhouse, but I'm not sure if that happens at the irradiance Earth gets. But a planet like this would probably be subject to bigger temperature swings than we see on Earth and very fast wind. How about such a world with no vulcanism? (Vulcanism - as in Spock... sorry). We think of plate tectonics and volcanism as a process by which gas is given to the atmosphere, and that's true to an extent. Certainly any gas trapped inside a planet at formation can be outgassed through volcanism, but perhaps a more important aspect of plate tectonics isn't the release of gas but the absorption of gas.
Oxygen is very reactive and it combines with basaltic rock to make lighter/stronger rock like granite and bedrock, which leads to continents and mountain ranges and all that good stuff. Nitrogen can bind with basalt too, but oxygen, I think, binds more readily. Without photosynthesis and the creation of oxygen, not just life on Earth but the continents would look very different and they'd probably be much less permanent. So if all the volcanoes are under water, that limits the atmospheric absorption of oxygen and other gases to just what's dissolved in the oceans, which could be a lot of NH3, but relatively lower amounts of other gasses. Another effect of all tectonic activity and volcanoes being under water is that volcanic gases like sulfates, and tiny dust particles which can cool the planet after a large eruption, are unlikely to reach the atmosphere in any significant volume, so you're likely to avoid any volcanic cooling like you get on occasion on Earth. Finally, you might think that volcanoes under oceans would warm the oceans, and they would locally, but not much overall. A significant percentage of Earth's volcanoes are under water, but because ocean water circulates, most of the deep oceans stay at a chilly 4 degrees C, as opposed to land, which gets warmer as you dig into the ground. Convection circulates heat much faster than conduction. Undersea volcanoes in your water world could be very helpful in the continuation and maintenance of extremophile life due to replenishment of nutrients, but not much in the way of temperature change or cloud formation. Finally - a fun bit worth mentioning: NH3, which is a very common gas/ice in the solar system. It's in all the outer planets and comets and outer moons.
The neat thing about NH3 is it's water soluble and it's an energy source to primitive life, so simply by having oceans, stinky but useful NH3 is fairly quickly, mostly removed from the atmosphere and dissolved in liquid oceans and if there's life, it can be consumed, some released as N2 or NO-family and some can be built into amino acids. Some nitrogen can bind with basaltic rock and magma from undersea volcanoes forming different types of rock, but I'm no geologist, so I'm not sure how much that would happen. There's no one simple answer to this, but that's my best guess as a layman. I enjoy thinking about stuff like this.
{ "domain": "astronomy.stackexchange", "id": 1140, "tags": "exoplanet" }
Platform support in the Crystal Clemmys release
Question: Which platforms (OS versions + CPU architectures) are planned to be supported in the ROS2 'Crystal Clemmys' release? Originally posted by leej on ROS Answers with karma: 1 on 2018-08-23 Post score: 0 Answer: At some point the REP 2000 will be updated to define these. For now I would expect that Crystal will target the same platforms as Bounce since there are no new platforms (not considering non-LTS Ubuntu distros) - see http://www.ros.org/reps/rep-2000.html#bouncy-bolson-june-2018-june-2019. The exact minimum versions for dependencies will likely be updated for the rolling platforms like macOS and Windows. Originally posted by Dirk Thomas with karma: 16276 on 2018-08-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31629, "tags": "ros2" }
Integral measure in (nonstandard) Quantum Mechanics
Question: Context: Studying a paper exploring possible consequences of a fundamental length scale in Nature, particularly in the context of one-dimensional quantum mechanics [1], the authors argue for a modification of the $\hat{x}$ operator in momentum-space representation: $\hat{x}=i\hbar(1+\beta p^2)\frac{\partial}{\partial p}$, with $\beta$ a positive constant. The momentum operator is as usual in this case, $\hat{p}=p$. It is then discussed that $\hat{x}$ is no longer Hermitian, but still symmetric nevertheless, i.e., $\langle\psi|\hat{x}|\phi\rangle=\langle\phi|\hat{x}|\psi\rangle^*$, as long as the scalar product is defined as $$ \langle\psi|\phi\rangle= \int \frac{dp}{1+\beta p^2} \psi^*(p) \phi(p). $$ This is very reasonable because the factor $1/(1+\beta p^2)$ exactly cancels the factor $(1+\beta p^2)$ in $\hat{x}$ and one is left with verifying that $\hat{x}$ is symmetric by a simple integration by parts. In a sense, the extra factor in the definition above could be guessed without much effort. On the other hand, in the context of three-dimensional quantum mechanics this is not so clear to me. For instance, in [2] the authors propose the following representation of $\hat{x}_i$: $$ \hat{x}_i = i\hbar(1+\beta p^2)\frac{\partial}{\partial p_i} + i\hbar \beta' p_i p_j \frac{\partial}{\partial p_j} + i\hbar \gamma p_i, $$ with $p\equiv|\vec{p}|$ and where $\beta$, $\beta'$, and $\gamma$ are constants. Similar to the previous case, this operator is not Hermitian, but it is symmetric under the definition $$ \langle\psi|\phi\rangle= \int \frac{d^3p}{[1+(\beta+\beta')p^2]^{1-\alpha}} \psi^*(\vec{p}) \phi(\vec{p}), $$ where $\alpha$ depends on our choice of $\gamma$ as $$ \alpha = \frac{\gamma-\beta'}{\beta+\beta'}. $$ I can derive this extra factor not by a simple guess as before, but by pattern recognition after studying other simpler cases, some trial and error, and so on. At the end of the day, I cannot derive it from some basic understanding.
The question: How can we derive the extra factor in the definition of the scalar product, other than by somehow guessing it? My efforts: I still don't have the mathematical background to be comfortable with the topic, but some research indicates to me that the extra factors in the integrals above are different choices of "integral measure". I found mathematical texts on that, but they were too technical for me to extract a useful understanding. A pointer to any physical approach to this topic, or at least one not too technical but sufficient for basic application, would be really appreciated. [1] A. Kempf, G. Mangano, and R. B. Mann, Phys. Rev. D 52, 1108 (1995). [2] R. Akhoury and Y.-P. Yao, Phys. Lett. B 572, 37 (2003). Answer: The term you're looking for is "weighted inner product", which is usually first taught in the context of Sturm-Liouville operators. The basic idea, in one dimension, is that given a non-negative function $w(x) \geq 0$, one can define an inner product via $$\langle\psi|\phi\rangle = \int dx \, w(x) \psi^*(x)\phi(x)\tag{1}$$ where $w(x)$ is the "weight" of the inner product. The generalisation to higher dimensions is obvious. Then one can, for example, define a notion of "orthogonality" with respect to this inner product. If $w(x)\equiv 1$, then $(1)$ is the "usual" inner product, but from a purely mathematical standpoint $w \neq 1$ is an equally good choice and may be useful if you consider, like here, nonstandard quantum mechanics. To really see that this is nothing weird, consider for example spherical coordinates in three dimensions. Then we can write the inner product in cartesian coordinates as usual, $$\langle\psi|\phi\rangle = \int dx \,dy\,dz\, \psi^*(\vec{x})\phi(\vec{x})$$ However, we can also go to spherical coordinates, for which $$\langle\psi|\phi\rangle = \int dr \,d\theta\,d\varphi\, r^2 \sin\theta\, \psi^*(\vec{x})\phi(\vec{x})$$ Here a "weight" $r^2\sin\theta$ has appeared.
So as you can see it is natural to introduce weighted inner products when changing coordinate systems. So, as remarked before, one could in principle also have a weight in cartesian coordinates. Sometimes the whole object $dx\,w(x)$ is referred to as "the measure", for example $dr \,d\theta\,d\varphi\, r^2 \sin\theta$ is the "integral measure" in spherical coordinates. However note that "measure theory" in mathematics, while related to this, is a very complicated subject that is not useful in this case. What the authors of the papers are trying to do is to find an appropriate weighted inner product where their modified operator $\hat x$ is still symmetric. To do so, they define the weighted inner product $$\langle\psi|\phi\rangle = \int d^3 \vec{p} \, w(\vec{p}) \psi^*(\vec{p})\phi(\vec{p})\tag{2}$$ where $w(\vec{p})$ is an appropriate weight so that the required condition ($\hat{x}$ symmetric) is satisfied. Apart from this, the choice of $w$ is completely arbitrary. In order to find $w$, you write the condition that $\hat{x}$ is symmetric, i.e. $$\langle\psi|\hat{x}|\phi\rangle=\langle\phi|\hat{x}|\psi\rangle^*\tag{3}$$ for generic $w$ with the weighted inner product $(2)$. This will give you a condition for $w$, which you can solve for. It should be emphasised that any $w$ that makes $(3)$ true will work, you don't need "the most general" $w$ or any "unique" $w$. As such, to make your life easier, you may assume for example that $w(\vec{p}) = w(p^2)$. Mathematically this doesn't have to be the case, but it's simple, and if you can find a $w$ with this property, you might as well go with it.
{ "domain": "physics.stackexchange", "id": 79759, "tags": "quantum-mechanics, operators, integration" }
A naive question about topologically ordered wavefunction?
Question: Topological entanglement entropy (TEE, proposed by Levin, Wen, Kitaev, and Preskill) is a direct characterization of the topological order encoded in a wavefunction. Here I have some confusions, and let's take the spin-1/2 Kitaev model on the honeycomb lattice as an example. The ground-state entanglement entropy of the Kitaev model can be calculated exactly, and the TEE $=-\ln 2$ for both the gapped phase and the gapless phase. This is consistent with the 4-fold ground-state degeneracy on a torus for both gapped and gapless phases. [Although the ground-state degeneracy may not be well defined in the gapless phase.] Question: The nonzero TEE of the gapless ground-state says that the gapless state has "topological order", but "topological order" is only defined for a gapped phase. How should I understand this paradox? Remarks: I personally think that the concept of "topological order" for a gapped Hamiltonian and for a wavefunction may be different. A related question is whether a given state $\psi$ is gapped or not. One possible definition may be: If there exists a gapped Hamiltonian whose ground-state is $\psi$, then we say $\psi$ is a gapped state. But this definition seems to be not well defined, since there may exist another gapless Hamiltonian whose ground-state is also $\psi$. A simple example is a free fermion Hamiltonian $H(u)=\sum_k(k^2+u)C_k^\dagger C_k$, where the vacuum state $\left | 0 \right \rangle $ is a gapless ground-state of $H(u=0)$ while $\left | 0 \right \rangle $ is a gapped ground-state of $H(u>0)$, so whether a given state (here $\left | 0 \right \rangle$) is gapped may be ambiguous. So I personally think that the gapped and gapless ground-states in the Kitaev model are both topologically ordered wavefunctions (from the nonzero TEE), but only the gapped Kitaev Hamiltonian (rather than the gapless Kitaev Hamiltonian) has a well-defined topological order. Thanks in advance! Answer: A very good question.
First of all, topological order strictly speaking is only defined for gapped states. But to some extent it can coexist with gapless degrees of freedom. A rather trivial example is just adding something gapless decoupled from the topological order (e.g., phonons). The example of the Kitaev model is quite different though, since the gapless part consists of fermionic spinons and the gapped part of visons ($Z_2$ gauge fields). The TEE says that the wavefunction of the $Z_2$ gauge field has non-local constraints (i.e., electric field lines must be closed, or only end on the spinons), which is also reflected in the mutual braiding statistics between spinons and visons. But on the other hand, the gapless spinons do have a significant effect: for example, if one asks for the topological degeneracy (on top of the $1/L$ low-energy spinon excitations) on a torus, my thought is that the ground state degeneracy is reduced from $4$ to $1$, because spinons going around the large cycles can measure the vison fluxes and such a process has an amplitude $\sim 1/L$. So indeed the distinction you draw is an important one. On the other hand, this gapless state is not robust: there is nothing preventing the spinons from opening a mass gap unless additional symmetries are imposed. So near the gapless phase, there are gapped A and B phases, both of which have the same value of TEE. So it can be considered as the "critical point" between the A and B phases. I recommend a very insightful paper by Bonderson and Nayak, http://arxiv.org/abs/1212.6395, which discusses in great depth how one can define topological order in the presence of gapless degrees of freedom, and how ground state degeneracy and braiding statistics are affected. For the second question, a gapped state should probably be defined as a state where all correlation functions (of physical, local operators) are short-ranged.
It seems to me that if a state is gapless, then some correlation function should detect the gaplessness: for example, in your example of the Kitaev honeycomb model, although the spin correlation functions are short-ranged, the bond energy correlations are algebraic. I have not seen a counterexample to this criterion, but I don't think it has been proved rigorously either. You can look at http://arxiv.org/abs/math-ph/0507008 for a proof of a spectral gap implying short-ranged correlations (one also needs to be careful about whether one should use connected correlation functions, so many subtle details...).
{ "domain": "physics.stackexchange", "id": 20446, "tags": "condensed-matter, quantum-entanglement, topological-order, topological-phase, topological-entropy" }
Enthalpy of vaporization
Question: I've been thinking about refrigeration technology and am a bit confused about two common answers. Specifically, the part where the expansion valve releases the pressurized fluid and stuff gets real cold. One is that refrigeration works by lower pressure = lower temperature. This makes sense to me because if there is lower pressure, I can imagine it as the opposite of higher pressure = higher temperature. Less pressure means particles are freer to move apart and thus eventually boil and lose energy as they travel further away from each other. The other is the enthalpy of vaporization, which I understand as meaning that some amount of energy is required for a phase change. This also makes sense to me: when the refrigerant enters the low pressure side of the valve, the particles are more free to spread apart, begin to boil, and thus suck up surrounding energy to break apart from each other. Although, it seems a bit more magical than the lower pressure = lower temperature explanation (it seems odd that already hot liquid particles "suck up" more heat). Could someone please help me understand this better? Thanks! NOTE: As you can see, I think of this in very layman terms and am currently reading books like Feynman's lectures. My background is engineering, not physics, so I tend to understand things best in a much more physical-visualization, implementation-details kind of way. Answer: Enthalpy (H) is the internal energy (U) of a material PLUS the product of pressure (P) and volume (V). $H = U + PV$ by definition When something boils, the gas phase takes up more volume than the liquid phase. So unless the boiling is in a vacuum, work is being done by expanding against a pressure, such as atmospheric pressure. This represents a change in the PV term of enthalpy. At a given temperature and pressure (say boiling water at atmospheric pressure), PV is greater for the gas phase than the liquid phase. 
Additionally, intermolecular forces hold molecules of liquids together. For example, molecules of a particular compound (say fluoromethane) may have a permanent net dipole moment, and the positive end of one molecule is attracted to the negative end of the other. There are other types of intermolecular forces such as hydrogen bonding and London forces. It takes energy to pull the molecules apart against such forces. This represents a change in the internal energy (U) term of enthalpy. At a given temperature and pressure (say boiling water at atmospheric pressure), U is greater for the gas phase than the liquid phase. In summary, a desired region is cooled because some of its energy is transferred to the refrigerant to increase its enthalpy. Part of the energy goes to expanding against a pressure (the PV term) and part goes to the increase in internal energy (the U term).
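To make the split between the $PV$ and $U$ terms concrete, here is a rough back-of-the-envelope sketch (my own addition, not from the original answer) for boiling one mole of water at atmospheric pressure, treating the vapor as an ideal gas and neglecting the liquid's volume; the latent-heat value of about 40.7 kJ/mol is a standard tabulated figure.

```python
# Rough split of the enthalpy of vaporization of water at 100 C, 1 atm.
# Assumptions: vapor behaves as an ideal gas, liquid volume is negligible.
R = 8.314          # gas constant, J/(mol K)
T = 373.15         # boiling point of water at 1 atm, K
H_vap = 40700.0    # tabulated enthalpy of vaporization, J/mol

# Expansion work done against the atmosphere: P * dV ~ R T per mole of gas
pv_work = R * T                    # ~3.1 kJ/mol

# The remainder goes into pulling molecules apart (the internal energy term)
delta_U = H_vap - pv_work          # ~37.6 kJ/mol

print(f"PV term: {pv_work/1000:.1f} kJ/mol")
print(f"U term:  {delta_U/1000:.1f} kJ/mol")
```

So only roughly 8% of the heat drawn from the surroundings goes into expansion work; the bulk goes into overcoming intermolecular forces.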
{ "domain": "physics.stackexchange", "id": 12609, "tags": "temperature, phase-transition" }
Does ROS support the tensorflow-gpu version?
Question: And where can I get the 'ros tensorflow-gpu' related docs? Originally posted by qrf on ROS Answers with karma: 1 on 2018-04-12 Post score: 0 Answer: ROS and tensorflow are just coupled over the API. You can implement something with ROS and tensorflow if you want. As both tensorflow and ROS contain APIs for C++ and Python, you should be able to implement something within ROS which uses tensorflow with GPU support. Originally posted by mgruhler with karma: 12390 on 2018-04-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2018-04-12: There's already quite some activity around ROS + Tensorflow. Perhaps the OP could start there? See: google.com/search?q=ros+tensorflow. Comment by qrf on 2018-04-13: Thanks. That helps a lot!
{ "domain": "robotics.stackexchange", "id": 30622, "tags": "ros" }
Lennard-Jones potential, distance $r$ for minimum energy
Question: I'm sorry if the question seems stupid. I found (Wikipedia) that the Lennard-Jones potential has its minimum at a distance of $$r = 2^{\frac{1}{6}}\sigma.$$ If $U(r)_{min} = -\epsilon$, $$U(r) = 4\epsilon\left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right].$$ $$-\frac{1}{4} = \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}$$ Answer: The minimum is found by differentiating and setting the derivative to zero. $$ \frac{dU}{dr}= -4\epsilon \left[12 \left(\frac{\sigma}{r}\right)^{12}\frac{1}{r}- 6 \left(\frac{\sigma}{r}\right)^{6}\frac{1}{r}\right] $$ Setting the term in the square bracket to zero yields the correct expression for the position of the minimum: $$ \left(\frac{\sigma}{r}\right)^{6}=\frac{1}{2} $$ or, $$ r=2^{\frac{1}{6}}\sigma. $$ It is possible that you didn't do the derivative correctly.
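As a quick numerical sanity check (my own addition, not part of the original answer), one can evaluate the potential on a fine grid and confirm both that the minimum sits at $r = 2^{1/6}\sigma$ and that $U(r_{min}) = -\epsilon$:

```python
# Numerical check of the Lennard-Jones minimum (sigma = epsilon = 1)
sigma, epsilon = 1.0, 1.0

def U(r):
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

r_min_analytic = 2 ** (1 / 6) * sigma

# Scan a fine grid around the expected minimum and pick the lowest point
rs = [0.9 + i * 1e-5 for i in range(40000)]   # r in [0.9, 1.3)
r_best = min(rs, key=U)

print(r_best, r_min_analytic)   # both close to 1.12246...
print(U(r_min_analytic))        # -1.0, i.e. -epsilon
```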
{ "domain": "physics.stackexchange", "id": 75591, "tags": "homework-and-exercises, potential-energy, differentiation, calculus" }
How durable is a silanized surface?
Question: I have two questions regarding silanization: 1) How durable is a silanized glass surface? (i.e., how long can a silanized surface be kept without any change in its wettability or contact angle?) 2) What are the optimum conditions for keeping a silanized surface while it's idle and dry? (e.g., maintenance temperature, humidity, etc. Should it be kept in a desiccator or not necessarily?) Answer: If done properly, reactions with silanes or siloxanes are effectively permanent, unless removed by serious mechanical abrasion (e.g., scraping the material below the coating) or some sort of etching underneath. The silanization process involves creating new covalent $\ce{Si-O}$ bonds, which are quite strong. The silane surface itself should be durable. However, the contact angle may change as material adsorbs on top of it. For example, a colleague has shown that hydrocarbon contaminants change the wettability of many surfaces. I don't think any particular storage is necessary. We store in a vacuum desiccator to minimize the effect of contamination, but as long as you clean the surface, the properties should be consistent.
{ "domain": "chemistry.stackexchange", "id": 4812, "tags": "organic-chemistry, experimental-chemistry, surface-chemistry" }
What is the space complexity of iterative deepening search?
Question: When using iterative deepening, is the space complexity $O(bd)$, where $b$ is the branching factor and $d$ the length of the optimal path (assuming that there is indeed one)? Answer: As stated in my other answers here and here, the space complexity of these search algorithms is calculated by looking at the largest possible number of nodes that you may need to save in the frontier during the search. Iterative deepening search (IDS) is a search algorithm that iteratively applies depth-first search (DFS), which has a space complexity of $O(bm)$ where $m$ is the maximum depth, with progressively larger depth limits, and it turns out that its space complexity is indeed $O(bd)$. (Note that there is also an informed version of IDS, known as iterative deepening A*, which uses the informed search algorithm A* rather than DFS.)
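The key point, that only the current DFS path is ever kept in memory while the deeper frontier lives implicitly in the recursion, can be seen in a minimal sketch of IDS (my own illustration; the names and the toy tree are made up):

```python
def depth_limited_dfs(successors, node, goal, limit, path):
    """DFS that refuses to descend below `limit`; memory used is just `path`."""
    path.append(node)
    if node == goal:
        return list(path)
    if limit > 0:
        for child in successors(node):
            found = depth_limited_dfs(successors, child, goal, limit - 1, path)
            if found:
                return found
    path.pop()
    return None

def iterative_deepening_search(successors, start, goal, max_depth=50):
    """Run DFS with depth limits 0, 1, 2, ...; space stays O(b*d)."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(successors, start, goal, limit, [])
        if result:
            return result
    return None

# Tiny example tree: 'A' -> 'B','C'; 'B' -> 'D'; 'C' -> 'E'
tree = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E']}
print(iterative_deepening_search(lambda n: tree.get(n, []), 'A', 'E'))
# ['A', 'C', 'E']
```

Each restart discards the previous iteration's state, so the peak memory is set by the deepest single path of length $d$ plus the $O(b)$ siblings examined at each level.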
{ "domain": "ai.stackexchange", "id": 2414, "tags": "search, branching-factors, homework, space-complexity, iddfs" }
Sorting and summation of a spreadsheet
Question: I am working on a sorted data file, so there does not need to be any sorting logic. I find unique values, feed them into an array, then use WorksheetFunction.Sum to get totals for a field. Is this as efficient as I could be, or are there some things I'm not seeing and should? The "Items" sheet is where a number of accounts are stored (in the thousands). The user cuts and pastes the data from another spreadsheet. It is sorted by account, and on each line all the columns are identical except for the totals and the first column, which contains the account number. I collect an array of account numbers, then cycle through the array to find the first instance, and continue down until I find the last instance. I populate a second array with the contents of all cells in that row (up to the last column with a header), then get a sum of the totals and overwrite that element of the array. When completed, I drop the entire array made from the Items sheet into the summary sheet.

Option Explicit

Sub makeSummary()
    Dim inws As Worksheet
    Dim outws As Worksheet
    Dim fndRange As Range
    Dim zell As Range
    Dim firstRow As Integer
    Dim lastRow As Integer
    Dim colctr As Integer
    Dim totalCol As Integer
    Dim LastCol As Integer
    Dim ctr As Long
    Dim arrBound As Long
    Dim distVals() As String
    Dim newRows() As String

    ' Initialize variables
    Set inws = Sheets("Items")
    Set outws = Sheets("Summary_Sheet")
    outws.Cells.Clear
    Set fndRange = Range(GetLast(3, inws.Cells))
    LastCol = fndRange.Column
    lastRow = fndRange.Row
    Set fndRange = Nothing

    'populate the header columns in the output worksheet.
    For ctr = 1 To LastCol
        outws.Cells(1, ctr) = inws.Cells(1, ctr).Value
    Next ctr

    ' redim array, and populate with unique SFC values
    ReDim distVals(1)
    distVals(1) = inws.Cells(2, 1)
    For ctr = 2 To lastRow
        If inws.Cells(ctr, 1).Value <> distVals(UBound(distVals)) Then
            ReDim Preserve distVals(UBound(distVals) + 1)
            distVals(UBound(distVals)) = inws.Cells(ctr, 1).Value
        End If
    Next ctr

    'Get upper bound of search array and use it to set max row value of newrows array
    arrBound = UBound(distVals)
    ReDim newRows(1 To arrBound, 1 To LastCol)

    'build array,
    For ctr = 1 To arrBound
        Set fndRange = inws.Cells.Find(distVals(ctr), lookat:=xlPart, LookIn:=xlFormulas)
        firstRow = fndRange.Row
        lastRow = fndRange.Row
        Do Until inws.Cells(lastRow + 1, 1) <> distVals(ctr)
            lastRow = lastRow + 1
        Loop
        'fill row
        For colctr = 1 To LastCol
            newRows(ctr, colctr) = inws.Cells(firstRow, colctr)
        Next colctr
        'Get total of totals in SFC
        newRows(ctr, 5) = WorksheetFunction.Sum(Range(inws.Cells(firstRow, 5), inws.Cells(lastRow, 5)))
    Next ctr

    'clean up of destination sheet
    With outws
        .Columns("E").NumberFormat = "_($* #,##0.00_);_($* (#,##0.00);_($* ""-""??_);_(@_)"
        .Range(.Cells(2, 1).Address, .Cells(arrBound + 1, LastCol).Address) = newRows
        'excel doesn't recognize the numbers as numbers unless you multiply by 1 and drop the value back down.
        For Each zell In .Range(.Cells(2, 5).Address, .Cells(arrBound + 1, 5).Address)
            zell.Value = zell.Value * 1
        Next zell
        .Calculate
    End With
End Sub

Function GetLast(choice As Long, rng As Range)
    ' 1 = GetLast row
    ' 2 = GetLast column
    ' 3 = GetLast cell
    Dim ReturnRng As Range
    Set ReturnRng = rng.Find(What:="*", After:=rng.Cells(1), lookat:=xlPart, LookIn:=xlFormulas, _
        SearchOrder:=xlByRows, SearchDirection:=xlPrevious, MatchCase:=False)
    If Not ReturnRng Is Nothing Then
        With ReturnRng
            Select Case choice
                Case 1
                    GetLast = .Row
                Case 2
                    GetLast = .Column
                Case 3
                    GetLast = .Address
                Case Else
            End Select
        End With
    End If
End Function

Answer: Quick Things

First, it's good practice to indent all of your code; that way labels will stick out as obvious. Second, worksheets have a CodeName property - view the Properties window (F4) and the (Name) field (the one at the top) can be used as the worksheet name. This way you can avoid Dim inws As Worksheet, Set inws = Sheets("Items") and instead just use Items. Comments - "code tells you how, comments tell you why". The code should speak for itself; if it needs a comment, it might need to be made more clear. If not, the comment should describe why you're doing something rather than how you're doing it. Here are a few reasons to avoid comments altogether. Integers - integers are obsolete. According to MSDN, VBA silently converts all integers to Long.

Naming

Your variable names leave a lot to be desired. It's pretty easy to use descriptive names and characters are mostly free, so things like ctr could be columnCounter or rowCounter or even just counter. Also, standard VBA naming conventions have camelCase for local variables and PascalCase for other variables and names. So inws should be inWS or inWorksheet or sourceWorksheet. Right now things like zell and distVals don't really tell me anything at all about what I should be expecting them to do.

Function

Function GetLast - it's your prerogative, but I think it's unneeded.
If you did want to keep it, it would be better as

Private Function GetLast(ByVal choice As Long, ByVal rng As Range) As Long

Right now you're passing ByRef, which isn't best practice, and you don't have a return type defined. You can return a row number, column number or address (string?). That can lead to errors down the line if you change what you're looking for and expect the wrong type back. Personally,

Dim lastRow As Long
lastRow = Sheet.Cells(Rows.Count, 1).End(xlUp).Row
Dim lastColumn As Long
lastColumn = Sheet.Cells(1, Columns.Count).End(xlToLeft).Column

would work fine for me. And you could eliminate all of

Set fndRange = Range(GetLast(3, inws.Cells))
LastCol = fndRange.Column
lastRow = fndRange.Row
Set fndRange = Nothing

Arrays

Dim distVals() As String
Dim newRows() As String

ReDim distVals(1)
distVals(1) = inws.Cells(2, 1)
For ctr = 2 To lastRow
    If inws.Cells(ctr, 1).Value <> distVals(UBound(distVals)) Then
        ReDim Preserve distVals(UBound(distVals) + 1)
        distVals(UBound(distVals)) = inws.Cells(ctr, 1).Value
    End If
Next ctr

'Get upper bound of search array and use it to set max row value of newrows array
arrBound = UBound(distVals)
ReDim newRows(1 To arrBound, 1 To LastCol)

I think what's happening here is that you initialize the array, then populate it one cell at a time, assuming that your data is in some sort of order, because you're checking only the last element for duplicates, right? That is a lot of redimming and isn't needed. A dictionary could help with that by eliminating duplicates for you:

Dim valueDictionary As Object
Set valueDictionary = CreateObject("Scripting.Dictionary")
For counter = 2 To lastRow
    valueDictionary(Cells(counter, 2).Value) = 1
Next

Now arrBound = UBound(distVals) is just valueDictionary.Count. Now, your newRows array is a little confusing to me.
Why not just build an array, maybe like this:

Dim mySummedArray As Variant
ReDim mySummedArray(1 To valueDictionary.Count, 1 To 2)

counter = 1
Dim key As Variant
For Each key In valueDictionary.Keys
    mySummedArray(counter, 1) = key
    counter = counter + 1
Next

Dim indexValue As String
Dim arrayIndex As Long
For counter = 1 To lastRow
    indexValue = Cells(counter, 1)
    arrayIndex = FindInArray(indexValue, mySummedArray)
    mySummedArray(arrayIndex, 2) = mySummedArray(arrayIndex, 2) + Cells(counter, 2)
Next

Using

Private Function FindInArray(ByVal indexValue As String, ByVal arrayToSearch As Variant) As Long
    Dim i As Long
    For i = LBound(arrayToSearch) To UBound(arrayToSearch)
        If StrComp(indexValue, arrayToSearch(i, 1), vbTextCompare) = 0 Then
            FindInArray = i
            Exit For
        End If
    Next
End Function

Essentially you end up with a 2D array of unique values paired with their sum. But maybe that's not what you're doing? If not, sorry, but maybe the approach is still useful.
{ "domain": "codereview.stackexchange", "id": 22280, "tags": "performance, vba, excel" }
CUDA-Kernel for a Dense-Sparse matrix multiplication
Question: I have been working on a very big project for some time. Within this project, I wrote my own CUDA kernels to do various operations. One of them is to perform a sparse affine transformation on a list of sparse inputs. Basically my input is a list of sparse vectors whose entries are always either 1 or 0. I know for a fact that I can have at most 32 ones in a single vector.

v1 = [0, 0, 0, 1, 0, 0, 1, ...]
v2 = [1, 0, 0, 0, 0, 0, 1, ...]
...

My idea now was to wrap all these vectors into a sparse format like:

3 2 4 2 ...
-----------
1 2 3 2
5 4 4 4
9 . 5 .
. . 8 .

It's basically a matrix. The first row corresponds to the number of non-zero entries. The values below are the indices of the non-zero entries. Now when performing the matrix-vector multiplication, all I have to do for each output element is look at the input matrix, get the weights at the given indices and add them up. So far so good. I wrote the following kernel:

__global__ void sparse_affine_kernel(
    const float* __restrict__ mat,
    const unsigned int* __restrict__ inp_col_indices,
    const unsigned int inp_col_max_entries,
    const float* __restrict__ bia,
    float* __restrict__ res,
    const unsigned int m,
    const unsigned int n,
    const unsigned int lda,
    const unsigned int ldc){
    // clang-format on
    // compute which output value we are looking at
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;

    // skip out of bounds
    if (col >= n || row >= m) return;

    // get the offset at which we look into our sparse input
    int offset = col * (inp_col_max_entries + 1);
    // check how many values we are going to read
    int count = inp_col_indices[offset];

    // track the sum
    float sum = bia[row];

    // start at offset + 1 (offset contains the amount of values to read)
    for (int i = offset + 1; i < offset + 1 + count; i++) {
        // get the sparse index (set row of the input)
        auto b_row = inp_col_indices[i];
        // get the corresponding weight
        auto wgt = mat[MATRIX_INDEX(lda, row, b_row)];
        sum += wgt;
    }
    res[MATRIX_INDEX(ldc, row, col)] = sum;
};

Now the code should be somewhat straightforward. I'd like to know the following things: Do you see any concrete way of improving this fairly straightforward operation? Is there anything directly related to CUDA which I could use to improve the performance of this code? Maybe using shared memory? I tried using some shared memory some time ago and simply remember that I wasn't able to improve the performance of the code. I am very happy for a review and optimization ideas for my code :) Greetings, Finn

Answer: Thank you for offering this for review. I understand you're primarily interested in performance. But I confess I found the code a little on the opaque side and not quite ready to invite lots of folks to collaborate on it. Your introductory paragraphs, outside of the code artifact, were very clear and helpful. I am reading the signature. It could be improved, starting with a URL pointing to data structure documentation, similar to your opening paragraphs. The code artifact should be self-describing. The terse nomenclature of the signature leaves me with several questions. I do not know what "bia", "lda", & "ldc" mean. For example, when reading them, should I mentally pronounce it "bias"? Linear discriminant array? Linked data column? IDK. Google offered no relevant abbreviation expansions. Please spell the 5th argument result. Abbreviating its single use helps no one. It was clear enough, but identifiers in a public API have a higher documentation burden than locals. Consider adhering to the convention where input args tend to appear near the beginning of the signature and outputs near the end. The first two lines of code use lovely identifiers and are wonderfully clear, thank you. I am skeptical about the out-of-bounds return. Maybe it is conventional and the right thing to do. In other languages I would expect an exception to be raised. Here, I don't see so much as an errno or error counter being affected.
We consult a pair of block globals and a thread global. I am concerned that a subset of threads will win, higher threads will lose, and we've just offered the app developer the gift of a silent Heisenbug. In particular, from a DbC perspective, it does not appear to me that "caller was incorrect" if we're out of bounds. So responsibility is still with the library routine to fulfill the contract. That might be "set an error flag or side-effect the matrix", but that's not what we see implemented. Consider eliding the track the sum comment, as it doesn't add anything beyond what the well-chosen identifier is telling us. I found b_row slightly puzzling. Maybe it could be bit_row? But it seems to be used where a column might be expected. Rather than wgt, please just call it weight, and then we probably don't need the comment to explain it. I imagine that MATRIX_INDEX is a macro with a few adds and multiplies (or shifts), but you did not include it. I was hoping it would help me to better understand the lda / ldc distinction. Overall? This is simple enough code, but it's not a code base I would want to assign or accept maintenance tasks for, not yet.
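For readers puzzled by the same points: MATRIX_INDEX is not shown in the question, but `lda`/`ldc` read like the BLAS convention of a "leading dimension", which suggests the macro is something like the column-major indexing sketched below. This is a guess at a plausible definition, not the author's actual code:

```c
#include <assert.h>

/* Plausible column-major indexing macro: `ld` is the leading dimension,
 * i.e. the allocated height of the matrix (>= number of rows), so element
 * (row, col) lives at col * ld + row. This is the BLAS layout; the
 * question's actual MATRIX_INDEX may differ. */
#define MATRIX_INDEX(ld, row, col) ((col) * (ld) + (row))

/* Helper demonstrating a lookup in a padded column-major buffer. */
static float mat_get(const float *buf, int ld, int row, int col) {
    return buf[MATRIX_INDEX(ld, row, col)];
}
```

Under this reading, a 3x2 matrix stored with leading dimension 4 has one padding slot at the bottom of each column, and `mat[MATRIX_INDEX(lda, row, b_row)]` in the kernel fetches the weight at (row, b_row) of the dense matrix.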
{ "domain": "codereview.stackexchange", "id": 44338, "tags": "performance, cuda" }
How to correctly publish and receive cv::Mat using cv_bridge? (C++)
Question: Hey everyone, I am having trouble converting a cv::Mat to a sensor_msgs/Image, publishing it and doing the reverse operation to regain the original cv::Mat. The imshow() function shows me the correct image (a white-gray-black image with numbers ranging from 0 to 1) before sending, but shows a seemingly random black-and-white (binary) image after receiving it. Normalizing the matrix with cv::normalize() before displaying it does not do the trick either. So I would be happy if you could check out my code below or give me a working sample from one of your projects :). There is the fear that the mistake lies outside the shown code, but I don't think you want to read all my nodes.

Extract publisher code:

cv::Mat out_mat;
// fill mat....
std::cout << "out_mat.type() = " << out_mat.type() << "\n"; // prints out_mat.type() = 5
cv::namedWindow("out_mat", CV_WINDOW_NORMAL);
cv::resizeWindow("out_mat", 500, 500);
cv::imshow("out_mat", out_mat); // shows correct image
cv::waitKey(1);
cv_bridge::CvImage cvi_mat;
cvi_mat.encoding = sensor_msgs::image_encodings::MONO8;
cvi_mat.image = out_mat;
cvi_mat.toImageMsg(container); // container of type sensor_msgs/Image
pub_peaks_.publish(container);

Extract subscriber code:

// convert image msg to cv mat
cv_bridge::CvImagePtr img;
img = cv_bridge::toCvCopy(container, sensor_msgs::image_encodings::MONO8); // container of type sensor_msgs/Image
cv::Mat mat_received = img->image;
cv::Mat mat;
mat_received.convertTo(mat, 5);
std::cout << "mat type: " << mat.type() << "\n"; // prints mat_c type: 5
cv::namedWindow("Grid", CV_WINDOW_NORMAL);
cv::resizeWindow("Grid", 500, 500);
cv::imshow("Grid", mat); // shows false/random (binary black-white) image
cv::waitKey(1);

Any help is appreciated.
Originally posted by anonymous28046 on ROS Answers with karma: 65 on 2017-02-06 Post score: 0 Original comments Comment by afranceson on 2017-02-07: Not sure, but maybe the problem is in the matrix type conversion (I suggest using the CV type definition instead of the int value!). Did you try to show your mat_received image instead of the converted one? Answer: For anyone wondering: as I only needed to send the matrix, I created a different rosmsg containing an array into which I wrote the values of the matrix. After receiving the array, I convert it back to a matrix. Therefore I don't have to deal with encodings and stuff. Originally posted by anonymous28046 with karma: 65 on 2017-02-06 This answer was ACCEPTED on the original site Post score: 0
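The accepted workaround (serialize the matrix into a plain array message) can be sketched language-agnostically. Here is a minimal Python illustration with a made-up message layout of rows, cols, and a flat data array, standing in for a custom rosmsg; it is not actual ROS code:

```python
def mat_to_msg(mat):
    """Flatten a 2D list into (rows, cols, data) - stand-in for a custom rosmsg."""
    rows, cols = len(mat), len(mat[0])
    data = [v for row in mat for v in row]
    return {"rows": rows, "cols": cols, "data": data}

def msg_to_mat(msg):
    """Rebuild the 2D matrix from the flat array."""
    r, c, d = msg["rows"], msg["cols"], msg["data"]
    return [d[i * c:(i + 1) * c] for i in range(r)]

m = [[0.0, 0.5], [0.75, 1.0]]
assert msg_to_mat(mat_to_msg(m)) == m  # round-trip preserves the float values exactly
```

Because the raw float values travel in the array untouched, no image encoding is ever involved, which is exactly why this sidesteps the original MONO8-vs-float mismatch.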
{ "domain": "robotics.stackexchange", "id": 26935, "tags": "ros, c++, image" }
If photons have no mass, why do they gain mass in photon entanglement?
Question: Photons are massless particles. However, this article states that photons can gain mass when they become entangled. How can this happen? From the article: Physicists create new form of light Photons, the elementary particles that make up light, are known to be fast, weightless and to not interact with each other. But in new experiments, physicists at MIT and Harvard have now created a new form of light, demonstrating that groups of photons can be made to interact with each other, slow down and gain mass. Answer: I will give you my "understanding" of how this does not contradict what is known about photons at the elementary particle level. The publication here describes a complicated system of guiding photons through a special medium: "We search for a photonic trimer using an ultracold atomic gas as a quantum nonlinear medium. This medium is experimentally realized by coupling photons to highly excited atomic Rydberg states by means of electromagnetically induced transparency (EIT)." So this is a system with which photons interact and display three-photon correlations, which induce a mass on the photon following the mathematical model that fits the data, in which the effective photon mass, the scattering length $a$, the control laser Rabi frequency $\Omega_c$, and the one-photon detuning $\Delta$ appear. So it is within a complicated mathematical model that the photons acquire mass. To get a perspective on this I think of the $\pi^0$: it decays into two photons with an angle between them, and this angle is defined by the mass of the pion. The four-vectors of the two photons added will still have the mass of the $\pi^0$, the momentum and the energy. If one timed the photons, it would seem that they moved with a velocity less than c, but in effect they traverse a longer path than the $\pi^0$ path, because of the angle between the two photons. This helps me realize that sums of four-vectors are not intuitive.
The four vectors involved in these photonic states are more complicated, as they are defined by interactions with a system of particles ( the cold gas). This ties up with this popularized explanation: So why are the normally lone-ranging photons suddenly interacting with each other? The team's hypothesis is that as photons bump into the rubidium atoms, they form polaritons – quantum particles that are part-light and part-matter. Polaritons have mass, which is how they can bind to other polaritons. Once they leave the cloud, the atoms they've picked up stay behind, but the photons remain bound together. They are not bound, they are correlated, the way the two photons of the $π^0$ are correlated.
{ "domain": "physics.stackexchange", "id": 55025, "tags": "quantum-mechanics, photons, mass, quantum-entanglement" }
why nominal unification is a first-order unification?
Question: In my understanding, if a unification solves equations of terms that are not higher-order terms, then it is a first-order unification. If a unification solves equations of terms that are higher-order terms, then it is a higher-order unification. Therefore a unification that solves equations of $\lambda$-terms should be a higher-order unification. That is what I thought. But reading about unification, the paper on nominal unification says that nominal unification is a first-order unification, even though it solves equations of $\lambda$-terms. Why is nominal unification a first-order unification while it is clearly solving equations of higher-order terms? Also, that paper shows an example of unification $$ \lambda a.\lambda b.(b \, M_6) =?= \lambda a.\lambda a.(a \, M_7) $$ On the right side there is $\lambda a.\lambda a.(a \, M_7)$; what is that? Why do two abstractions have the same bound variable $a$? I am confused; could anyone clarify it for me? Thanks! Answer: Most experience people have with unification (if any) is usually unification modulo syntactic equality: two terms unify if there is a substitution for unification variables that makes the terms syntactically identical. However, you can consider other base equivalence relations that are coarser than syntactic equality. For example, there's associative, commutative equality that lets you unify two bags (aka multisets) of terms, e.g. the equation $\{\!\!\{\mathtt{x},\mathtt{y}\}\!\!\} = \{\!\!\{\mathtt{y},\mathtt{x}\}\!\!\}$ where I'm using $\{\!\!\{ \ldots \}\!\!\}$ to represent a bag. Unification with a base equivalence like this, coarser than syntactic equality, comes up, for example, in checking record types. You can imagine you have a bag of record labels and you want two record types to be the same regardless of how you wrote down the labels. So, in this context, higher-order unification is unification modulo $\alpha\beta\eta$-equivalence. The base equivalence is what makes it higher-order unification, not the terms it operates on.
I can apply normal, syntactic, first-order unification to lambda terms too. Unification modulo $\alpha\beta\eta$-equivalence means being able to unify expressions like $E(3) = (3,3)$ via a substitution such as $[E \mapsto \lambda x.(x,x)]$ as these are $\beta$-equivalent. (Following Prolog, I'll use capital letters to indicate unification variables. Also, note $[E\mapsto\lambda x.(3,3)]$ is another non-equivalent substitution which is why higher-order unification produces extra non-determinism.) Nominal unification does not unify these terms. $(\lambda x.(x,x))(3)$ is not $\alpha$-equivalent to $(3,3)$. An application will never be $\alpha$-equivalent to a pair constructor. (However, if the notation was implicitly in curried-style, i.e. $(3,3)$ meant $((,)(3))(3)$, these would syntactically unify. Generally, though, multi-ary application is never $\alpha$-equivalent to unary application.) Nominal unification is unification modulo $\alpha$-equivalence. At a raw syntax level $\lambda x.x$ is syntactically different from $\lambda y.y$ because $x$ is different syntax than $y$. This is undesirable as we want to treat these terms as completely identical. We don't want the choice of variable name to matter to, e.g., type checking. This problem is clearly solvable because we could use deBruijn notation and the only term corresponding to the two above lambda terms would be something like $\lambda\mathbf{1}$. The power of deBruijn notation is that syntactic equality is $\alpha$-equality. Nevertheless, it would be nice not to use deBruijn notation. So one way to implement nominal unification would be to simply convert each term to deBruijn notation and unify. This would be fine if all we needed to do was answer "yay" or "nay", but we also need to produce a substitution. If we convert $\lambda x.x$ and $\lambda y.E$ to $\lambda\mathbf{1}$ and $\lambda E$ respectively, then unify them, we get $[E \mapsto \mathbf{1}]$. But obviously $\lambda x.x \neq (\lambda y.E)[E\mapsto x]$. 
Basically, we need to keep track of what variable corresponds to $\mathbf{1}$ on each side of the equation. This gives us two bijections (for any particular unification variable, i.e. locally) that map variables to deBruijn indices, here $[x \mapsto \mathbf{1}]$ and $[y\mapsto\mathbf{1}]$. We can compose these bijections to get a bijection (i.e. permutation) between variables on each side. This leads to $[E \mapsto (x\ y)\bullet x]$, in their notation, which simplifies to $[E\mapsto y]$, the desired substitution. There are some extra concerns about capturing variables that further complicate the picture. Summarizing, higher-order unification means unification modulo $\alpha\beta\eta$-equivalence. Nominal unification means unification modulo only $\alpha$-equivalence. This can be reduced to syntactic unification of deBruijn terms, but if we want to produce substitutions on non-deBruijn terms we need to keep track of some extra information. Doing a similar reduction for higher-order unification would involve reducing to terms in $\beta\eta$-normal form, which thus requires such normal forms to exist, hence the focus on the typed case, and leads to creating terms out of nowhere which themselves require unification.
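As a rough sketch of the deBruijn reduction described above (my own illustration, not from the post; the tuple representation of terms is an assumption): convert named lambda terms to deBruijn indices so that $\alpha$-equivalence becomes plain syntactic equality.

```python
# Terms are tuples: ('var', name), ('lam', name, body), ('app', fun, arg).

def to_debruijn(term, env=()):
    """Replace bound variables by 1-based deBruijn indices ('ix', n)."""
    kind = term[0]
    if kind == 'var':
        name = term[1]
        # Innermost binder first, since binders are prepended to env below;
        # free variables keep their name.
        return ('ix', env.index(name) + 1) if name in env else term
    if kind == 'lam':
        _, name, body = term
        return ('lam', to_debruijn(body, (name,) + env))
    _, fun, arg = term
    return ('app', to_debruijn(fun, env), to_debruijn(arg, env))

# lambda x. x and lambda y. y both become the same nameless term, lambda 1:
tx = ('lam', 'x', ('var', 'x'))
ty = ('lam', 'y', ('var', 'y'))
assert to_debruijn(tx) == to_debruijn(ty) == ('lam', ('ix', 1))
```

As the post notes, unifying the converted terms is only half the story: to report a substitution on the original named terms, one must also remember which name each index came from on each side.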
{ "domain": "cs.stackexchange", "id": 7683, "tags": "lambda-calculus, unification" }
Operator expectation value for system of non-interacting particles (Fermions)
Question: I am reading the book "Electronic Structure" by Richard Martin which poses the following problem: Show that the expectation value of an operator $\hat O$ in a system of identical, non-interacting Fermions (i.e. electrons with the independent particle approximation) has the following form: $$ \left<\hat O\right> = \sum_{i,\sigma} f_i^\sigma \left<\psi_i^\sigma|\hat O|\psi_i^\sigma\right>$$ Where, $f_i^\sigma = \frac{1}{e^{\beta(\epsilon_i^\sigma - \mu)} + 1}$ and $\psi_i^\sigma$ is the eigenvector of the ith single particle state with spin $\sigma$. I have begun the derivation thusly (j is the jth eigenstate of the multibody system): $$ \left<\hat O\right> = Tr\left(\hat \rho \hat O \right) = \sum_j \left<\Psi_j|\hat \rho \hat O |\Psi_j \right>$$ Since $\hat \rho$ is Hermitian: $$ \sum_j \left<\Psi_j|\hat \rho \hat O |\Psi_j \right> = \sum_j \left<\hat \rho\Psi_j| \hat O |\Psi_j \right> $$ In the Grand Canonical Ensemble: $\hat \rho = \frac{1}{Z}e^{-\beta(\hat H - \mu\hat N)} $ Further, considering that the jth eigenstate of the multibody system will be composed of N single particle wave functions, one can write: $$ \sum_j \left<\hat \rho\Psi_j| \hat O |\Psi_j \right> = \sum_{\{n_i,\sigma_i\}} \frac{1}{Z}e^{-\beta(\sum_i n_i\epsilon_i^{\sigma_i}-\mu \sum_i n_i)}\left<\Psi_j| \hat O |\Psi_j \right> $$ Based on my understanding of the other quantum mechanics reference I have been using, if $\hat O$ was simply the operator which returns the number of particles in the ith single particle state with spin $\sigma$, the sum reduces to $f_i^\sigma$. So my guess as to how to obtain the general result is to suppose that $\hat O$ is the combination of N single particle operators $\hat O = \sum_i \hat O_i$. Hence, $$\left<\Psi_j| \hat O |\Psi_j \right> = \left<\Psi_j| \sum_i \hat O_i |\Psi_j \right> = \sum_i \left<\psi_i^\sigma | \hat O_i | \psi_i^\sigma \right>$$ After this point, I am stuck and I do not see how this could reduce down to the given result.
Further, it would seem to me that the author should have placed a restriction on $\hat O$ to be only a single particle operator on the rhs of the target result. Answer: I think this is a situation where it is much easier to derive the desired result using second quantization. To start, let us consider a system of identical particles and let $O = \sum\limits_{k} o_k $ denote a generic one-body operator on the respective Fock space. In the language of second quantization, we can express this operator as $$ O = \sum\limits_{ij} \langle i|o|j\rangle \,a_i^\dagger a_j \quad . $$ The expectation value of a (one-body) operator in the state $\rho$ is defined by $\langle O \rangle_\rho \equiv \mathrm{Tr} \, \rho \, O$, where the trace is performed on the Fock space. By defining the elements of the one-body reduced density matrix in the state $\rho$ as $$\gamma_{ij} \equiv \mathrm{Tr}\, \rho\, a_j^\dagger a_i \quad ,$$ we see that we can write the expectation value of $O$ as $$\langle O \rangle _\rho = \mathrm{tr}\, \gamma \,o \quad , $$ where now the trace is performed on the single-particle Hilbert space. In the following, $i,j$ denote elements of the basis in which the single-particle Hamiltonian is diagonal. For a system of non-interacting fermions and in equilibrium in the grand canonical ensemble, it follows that $$ \gamma_{ij} = \delta_{ij} \,\langle n_i \rangle_\rho \quad . $$ This can be derived by e.g. applying Wick's theorem. Here, $$\langle n_i \rangle_\rho = \frac{1}{e^{(\epsilon_i - \mu)/{k_{\mathrm{B}}T}}+1} $$ is the well-known expression for the average occupation number of the single-particle state $i$. Finally, this shows that indeed $$ \langle O \rangle_\rho = \sum\limits_i \frac{1}{e^{(\epsilon_i - \mu)/{k_{\mathrm{B}}T}}+1}\, \langle i|o|i\rangle \quad \quad ,$$ for a system of non-interacting fermions in grand canonical equilibrium. As a last point, note that an analogous result holds for the case of non-interacting identical bosons.
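To connect the answer back to the point where the question got stuck, the key trace step can be spelled out in the same notation (this is my own filling-in of the intermediate step, not part of the original answer):

```latex
% Insert the second-quantized form of O and use \gamma_{ij} \equiv \mathrm{Tr}\,\rho\, a_j^\dagger a_i:
\langle O \rangle_\rho
  = \mathrm{Tr}\,\rho \sum_{ij} \langle i|o|j\rangle\, a_i^\dagger a_j
  = \sum_{ij} \langle i|o|j\rangle\, \gamma_{ji}
  = \mathrm{tr}\,\gamma\, o .
% With \gamma_{ji} = \delta_{ji}\,\langle n_i \rangle_\rho the double sum collapses
% to the diagonal terms \sum_i \langle n_i \rangle_\rho\, \langle i|o|i\rangle,
% which is the target result.
```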
{ "domain": "physics.stackexchange", "id": 81288, "tags": "quantum-mechanics, homework-and-exercises, condensed-matter, density-operator, wick-theorem" }
Why does a chiral carbon affect some hydrogen's hydrogen environment?
Question: In the attached screenshot, taken from Khan Academy's video (https://www.youtube.com/watch?v=p9B4s0N5yk8&list=PL0pJUwkI0YCZRYrgKqHXZ0XCIu11afbJ8&index=3), the thick pink dot refers to a chiral carbon atom. Why is it that the two hydrogens circled red and pink that are bonded to a carbon which is then bonded to a chiral carbon atom are in different hydrogen environments? However, the three green circled hydrogens that are bonded to a carbon which is then bonded to a chiral carbon atom are in the same hydrogen environment? From my understanding of hydrogen environment, a hydrogen environment is characterised by the functional group(s) that is attached to that hydrogen. Therefore, I think that the two hydrogens circled red and pink are in the same hydrogen environment, just like the three green circled hydrogens are in the same chemical environment. Therefore, my question is why does a chiral carbon atom affect the hydrogen environment of a hydrogen? And especially from Khan Academy's example, only some hydrogens are affected by the chiral carbon and some aren't affected by it. Is there some loophole in my understanding of hydrogen environment? Answer: There is a big difference between the two hydrogens and the hydrogens in the two methyl groups. The issue is that the methyl hydrogens are part of a group that can rotate quickly and so they see an averaged environment which is the same for all the hydrogens. If the bonds were "stuck" and the methyl groups could not rotate, your argument might be correct, but that is not the case. The two hydrogens on the carbon next to the chiral centre are also connected to bonds that can rotate. But they never see the same environment even on average because they will always have a different relationship to the hydrogen and hydroxyl on the next carbon. Whatever way you rotate the bond connecting their carbon, the two hydrogens will be in a different environment, so they appear with distinct shifts.
{ "domain": "chemistry.stackexchange", "id": 10787, "tags": "organic-chemistry, nmr-spectroscopy, chirality" }
Can the expansion of space-time reverse itself and contract the same way?
Question: If there's a mechanism for space-time expanding faster than the speed of light, is there an example of it contracting in the same manner? If whatever mechanism is causing it to expand, can the underlying mechanism reverse itself? Answer: The equations of motion for cosmology from general relativity have a time reversal symmetry, even with a positive cosmological constant to explain dark energy. So any solution of the equations that features an expansion from a big bang can be reversed to give another solution in which the universe contracts towards a big crunch. In this solution the cosmological constant would slow down the rate of contraction instead of accelerating the expansion. There are also solutions of general relativity in which the universe starts with an expansion and ends with a contraction after a moment when the expansion stops and goes into reverse. This is true even with a positive cosmological constant and even when space is flattened by the effects of cosmic inflation. The ultimate fate of the universe's evolution depends on the size of the cosmological constant versus the rate of expansion after inflation. If the constant is too small or negative, or the rate of expansion starts out too slow, then the expansion would stop at some point and the universe would contract. However, observations indicate that this is not the case. The universe has already passed the point where the cosmological constant dominates, and the expansion will continue at an accelerating rate forever. Of course this assumes that dark energy is correctly explained by a cosmological constant rather than a quantity that changes with time. All relevant observations so far are nicely consistent with this view.
{ "domain": "physics.stackexchange", "id": 2310, "tags": "cosmology, speed-of-light, spacetime, faster-than-light" }
numpy performance of norm calculation using variable dimension
Question: I am looking for advice to see if the following code performance could be further improved. This is an example using a 4x3 numpy 2d array: import numpy as np x = np.arange(12).reshape((4,3)) n, m = x.shape y = np.zeros((n, m)) for j in range(m): x_j = x[:, :j+1] y[:,j] = np.linalg.norm(x_j, axis=1) print(x) print(y) Which is printing [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11]] [[ 0. 1. 2.23606798] [ 3. 5. 7.07106781] [ 6. 9.21954446 12.20655562] [ 9. 13.45362405 17.3781472 ]] As you can see the code is computing the norms of the vectors considering an increasing number of columns, so that y[i,j] represents the norm of the vector x[i,:j+1]. I couldn't find if this operation has a name and if it is possible to vectorize the process further and get rid of the for loop. I only found in this post that using np.sqrt(np.einsum('ij,ij->i', x_j, x_j)) is a bit faster than using np.linalg.norm(x_j, axis=1). Answer: Indeed, there is a better way. Exponentiation aside, we can see that this operation is equivalent to multiplication by an upper triangular matrix of ones. The latter is about 100x faster. Here is the code (Run it online !): Source from time import time as time import numpy as np n = 1000 m = 500 size = n * m triu = np.triu(np.ones((m, m))) x = np.arange(size).reshape((n, m)) y = np.zeros((n, m)) # Your implementation tic = time() for j in range(m): x_j = x[:, :j + 1] y[:, j] = np.linalg.norm(x_j, axis=1) tac = time() print('Operation took {} ms'.format((tac - tic) * 1e3)) # Optimized implementation tic = time() y1 = np.sqrt(np.dot(x**2, triu)) tac = time() print('Operation took {} ms'.format((tac - tic) * 1e3)) # Optimized implementation using cumsum method tic = time() y2 = np.sqrt(np.cumsum(np.square(x), axis=1)) tac = time() print('Operation took {} ms'.format((tac - tic) * 1e3)) Output Operation took 1690.1559829711914 ms Operation took 18.942832946777344 ms Operation took 6.124973297119141 ms
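A small sanity check (assuming NumPy is available) that the cumulative-sum formulation from the answer reproduces the per-column loop from the question:

```python
import numpy as np

x = np.arange(12).reshape((4, 3)).astype(float)

# The question's loop, column by column:
loop = np.column_stack(
    [np.linalg.norm(x[:, :j + 1], axis=1) for j in range(x.shape[1])]
)

# The vectorized version: y[i, j] = sqrt(sum of squares of x[i, :j+1]).
fast = np.sqrt(np.cumsum(np.square(x), axis=1))

assert np.allclose(loop, fast)
```

The cumsum form also makes clear why the loop is wasteful: each column's sum of squares is recomputed from scratch, while cumsum reuses the previous column's partial sum.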
{ "domain": "codereview.stackexchange", "id": 38024, "tags": "python, performance, matrix, numpy" }
What is the correct representation of Master Theorem?
Question: What I'm taught in my class - $T(n)=aT(\frac{n}{b})+\theta(n^k\log^pn)$ where $a\geq1$, $b>1$, $k\geq1$ and $p$ is a real number. if $a>b^k$ then, $T(n)=\theta(n^{\log_ab})$ if $a=b^k$ then, if $p>-1$, then $T(n)=\theta(n^{\log_ab}\log^{p+1}n)$ if $p = -1$, then $T(n)=\theta(n^{\log_ab}\log\log n)$ if $p<-1$, then $T(n)=\theta(n^{\log_ab})$ if $a<b^k$, then if $p\geq 0$, then $T(n)=\theta(n^k\log^pn)$ if $p<0$, then $T(n)=O(n^k)$ On certain websites, the above representation for Master Theorem is slightly different as follows - $T(n)=aT(\frac{n}{b})+\theta(n^k(\log n)^p)$ This ambiguity creates a great confusion while solving problems such as : What is the value of the recurrence : $T(n)=T(\sqrt{n})+\theta(\log\log n)$ Substituting $n=2^m$, we get a new expression : $S(m)=S(\frac{m}{2})+\theta(\log m)$ Here $a=1$, $b=2$, $k=0$ and $p=1$ If I apply Master Theorem as per the way I am taught in class, the result I get is : $T(n)=\theta(\log\log\log n)$ and if I solve using the other formula, I get the result as : $T(n)=\theta((\log\log n)^2)$ Which is the correct one? This confusion is making me question every problem that I've solved. Answer: By convention, the notation $\log^p(x)$ is defined to be $(\log(x))^p$, not $p$ iterations of the $\log$ function. This is similar to the trigonometric functions, which give us identities like $\sin^2(x)+\cos^2(x)=1$. So the two representations are the same theorem, and with $p=1$ the term $\log^{p+1}n$ means $(\log n)^2$: the correct result is $T(n)=\theta((\log\log n)^2)$.
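As a cross-check (my own unrolling, not part of the original answer), expanding the substituted recurrence level by level confirms this reading:

```latex
% Unrolling S(m) = S(m/2) + \Theta(\log m) over its \log_2 m levels:
S(m) \;=\; \sum_{i=0}^{\log_2 m} \Theta\!\left(\log \frac{m}{2^i}\right)
     \;=\; \Theta\!\left(\log^2 m\right),
% and substituting back m = \log n gives
T(n) \;=\; \Theta\!\left((\log\log n)^2\right).
```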
{ "domain": "cs.stackexchange", "id": 3851, "tags": "time-complexity, asymptotics, runtime-analysis, recurrence-relation" }
Need some advice on approach to select only the informative emojis from the data set?
Question: I have a giant data set from a local election, which contains hashtags, emojis, and comments. I wanted to make a network analysis using only emojis. So far I have a network analysis graph made in R which looks like this: Sorry, you may have to zoom in to see the nodes. So, basically my goal is to see what people are talking about as a whole group. Currently there are a lot of nodes which don't really say anything concrete about the main context or the hook, while also creating clutter. I had extracted the data with political hashtags. Therefore, nodes such as milk bottle, cows, joy faces, toffees, flags, etc. don't really give me anything between people's comments and the context. My goal is to see a collective sentiment of people via their use of emojis in their context. I don't know if I make sense. I don't know how I should approach the problem of only selecting informative emojis. Should I focus on the hashtags, and make a list of hashtags I am interested in, and only extract their associated comments and emojis? Or should I look at the sentimental values associated with emojis, and focus on the extreme positive and negative ones only? I am pretty lost; any direction on which method/algorithm I should use to de-clutter this graph a bit yet also keep it in the context of politics & elections? Answer: One option is to reframe it as a word embedding problem. Emojis can be embedded in a vector space along with comments and hashtags. Then distance measures and clustering can be used to find the emojis that are associated with different sentiments.
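As a toy illustration of the embedding idea (every emoji, hashtag, and co-occurrence here is hypothetical): represent each emoji by the hashtags it co-occurs with, then compare emojis with cosine similarity.

```python
import math
from collections import Counter

# Hypothetical observations: (emoji, hashtags it appeared next to).
observations = [
    ("😀", {"#win", "#vote"}),
    ("🎉", {"#win", "#results"}),
    ("😡", {"#fraud", "#recount"}),
]
vecs = {emoji: Counter(tags) for emoji, tags in observations}

def cosine(a, b):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)   # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# The two "positive" emojis share a hashtag, so they sit closer together:
assert cosine(vecs["😀"], vecs["🎉"]) > cosine(vecs["😀"], vecs["😡"])
```

On real data one would use trained embeddings rather than raw co-occurrence counts, but the distance/clustering step the answer suggests is the same.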
{ "domain": "datascience.stackexchange", "id": 7778, "tags": "data-mining, data-cleaning, visualization, algorithms, social-network-analysis" }
Can NaH open the epoxide ring to form alcohol? If so, how?
Question: I was looking at the organic chemistry 8th Ed textbook by Paula Yurkanis Bruice. On page 509, I found this: I thought about it for a while and I suspect that it is wrong. I believe the hydride ion itself cannot act as a nucleophile for SN2. (not really sure, but I guess it is too small and too basic to work as a nucleophile; it should be much easier for a hydride ion to attack hydrogen rather than carbon) Is this reaction possible with $\ce{NaH}$ in solvents other than water and alcohol? If so, how does the hydride ion open the epoxide ring? (hoping for an answer with mechanism) And I am specifically talking about hydride ion (or hydrogen anion). Not sodium borohydride $\ce{NaBH4}$, lithium aluminum hydride $\ce{LiAlH4}$, or other metal hydrides. Answer: Further to my comments above, I am going to offer my opinion that the book is wrong and the epoxide is not opened by sodium hydride as depicted. I have failed to find any literature examples of such a reaction and, in 40+ years as a synthetic chemist, I can recall no reactions in which $\ce{NaH}$, unmodified by other reagents, acts as a source of nucleophilic $\ce{H-}$. Can anyone produce a literature example?
{ "domain": "chemistry.stackexchange", "id": 15943, "tags": "organic-chemistry, reaction-mechanism, nucleophilic-substitution" }
What is the symbol for angular frequency?
Question: I am reading the book Signals and Systems Laboratory with MATLAB by Alex Palamides and Anastasia Veloni. Going through chapter 6 (Fourier transform), I came across something confusing: the symbol used for angular frequency is Ω, upper-case omega, which in other books usually denotes the unit of resistance (ohm), while lower-case omega ω is usually used for angular frequency. So did the authors forget this fact? Answer: Authors of signal processing books usually don't write about resistance, so the symbol $\Omega$ can be used for other purposes. One common purpose is to denote angular frequency in continuous time. In this way one can make a distinction between angular frequency in continuous time and normalized angular frequency in discrete time. I would guess that for the latter the authors use $\omega$. In that case we have $$\begin{align}\Omega&=2\pi f\\\omega&=2\pi f/f_\mathrm{s}=2\pi fT\end{align}$$ where $f_\mathrm{s}$ is the sampling frequency, and $T=1/f_\mathrm{s}$ is the sampling interval. Furthermore, $$\omega=\Omega T$$ A famous book which uses this notational convention is Oppenheim and Schafer's Discrete-Time Signal Processing. So the authors of the book you're reading are in good company.
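The two conventions can be illustrated numerically (the tone and sampling rate below are arbitrary values, not from the book):

```python
import math

# A 100 Hz tone sampled at 1 kHz.
f, fs = 100.0, 1000.0
T = 1.0 / fs                   # sampling interval, seconds

Omega = 2 * math.pi * f        # continuous-time angular frequency, rad/s
omega = 2 * math.pi * f / fs   # normalized angular frequency, rad/sample

# The relation from the answer: omega = Omega * T.
assert math.isclose(omega, Omega * T)
```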
{ "domain": "dsp.stackexchange", "id": 12005, "tags": "fourier-transform, frequency, fourier, frequency-domain, terminology" }
Optimization of Hibernate DAO in desktop application
Question: I'm working on a medium-sized desktop application (with around 100 tables in the database). For the persistence layer I decided to use Hibernate for the first time, to avoid massive redundancy in the persistence-layer code. This is the final implementation of the DAO interfaces: public class BaseDAOImpl<E extends Serializable> implements BaseDAO<E> { @Override public void insert(E t) { Session session = HibernateUtil.openSession(); try { session.beginTransaction(); session.save(t); session.getTransaction().commit(); } catch (Exception e) { e.printStackTrace(); session.getTransaction().rollback(); } finally { session.close(); } } @Override public void delete(Class<E> e, int id) { Session session = HibernateUtil.openSession(); try { session.beginTransaction(); Object object = get(id, e); if (object != null) { session.delete(object); } session.getTransaction().commit(); } catch (Exception ex) { ex.printStackTrace(); session.getTransaction().rollback(); } finally { session.close(); } } @Override @SuppressWarnings("unchecked") public List<E> getAll(Class<E> c) { Session session = HibernateUtil.openSession(); List<E> t = null; try { session.beginTransaction(); t = session.createCriteria(c).list(); session.getTransaction().commit(); } catch (Exception e) { e.printStackTrace(); session.getTransaction().rollback(); } finally { session.close(); } return t; } @Override public void update(E t) { Session session = HibernateUtil.openSession(); try { session.beginTransaction(); session.merge(t); session.getTransaction().commit(); } catch (Exception e) { e.printStackTrace(); session.getTransaction().rollback(); } finally { session.close(); } } @SuppressWarnings("unchecked") @Override public E get(int id, Class<E> c) { Session session = HibernateUtil.openSession(); E t = null; try { session.beginTransaction(); t = (E) session.get(c, id); session.getTransaction().commit(); } catch (Exception e) { e.printStackTrace(); session.getTransaction().rollback(); } finally { session.close(); } return
t; } } Now, I'm a bit suspicious about session management. Is this a proper way to manage them (open session, commit changes, close session), or should I use some other approach? And what are the other suggestions to improve my code? Answer: Wrong Transaction Management Transactions are not a Data Access Layer concern, they are a Business Logic Layer concern. Consider how you would delete an entity (any operation updating the database is similar): public class WidgetManager { public void deleteWidget(int widgetId) throws BusinessException{ Widget widget = dao.get(Widget.class, widgetId); if (widget == null) throw new BusinessException("WidgetNotFound"); if (!widget.isDeletable()) throw new BusinessException("WidgetCannotBeDeleted"); dao.delete(widget); } } this method should be run in a transaction. I would suggest you use Spring for transaction management, as @ohiocowboy suggested. (But using Hibernate API or Hibernate through JPA API (for example EntityManager) is not relevant to your current question.) Repetitive Code The following snippet is repeated in each method: Session session = HibernateUtil.openSession(); try { session.beginTransaction(); // do something with `session` session.getTransaction().commit(); } catch (Exception e) { e.printStackTrace(); session.getTransaction().rollback(); } finally { session.close(); } This situation arises quite frequently, mostly when you deal with resources such as Hibernate sessions, DB transactions, connections, resultsets, files, sockets, ... You can use the Execute Around Idiom to alleviate this problem.
Common code can be factored thus: interface Function<T, R> { R apply(T t); } <R> R execute(final Function<Session, R> func) { R result = null; Session session = HibernateUtil.openSession(); try { session.beginTransaction(); result = func.apply(session); session.getTransaction().commit(); } catch (Exception e) { e.printStackTrace(); session.getTransaction().rollback(); } finally { session.close(); } return result; } Now get reads like this [Annotations omitted for clarity]: public E get(final Class<E> c, final int id) { return execute(new Function<Session, E>() { public E apply(Session session) { return (E) session.get(c, id); } }); } If you have Java 8, you already have Function interface. And above can be written even more concisely as follows: public E get(final Class<E> c, final int id) { return execute(session -> (E) session.get(c, id)); } For methods that return void, you can avoid unnecessary return null; statements with: interface Consumer<T> { void accept(T t); } void execute(final Consumer<Session> cons) { execute(new Function<Session, Object>() { @Override public Object apply(Session t) { cons.accept(t); return null; } }); } Now insert reads like this: public void insert(final E t) { execute(new Consumer<Session>() { public void accept(Session session) { session.save(t); } }); } If you have Java 8, you already have Consumer interface. And above can be written even more concisely as follows: public void insert(final E t) { execute(session -> {session.save(t);}); } Evil Singleton HibernateUtil is either a singleton or, less likely and much worse, a real utility class that creates a SessionFactory that loads Hibernate mappings on each call to openSession. A utility class should not have any state, such as a SessionFactory. EDIT: What to do about HibernateUtil singleton? Let me tell you first how I would fix it: I would delete the static modifier from the openSession method and fix all the compile errors. 
But whether this style works without getting out of hand depends on how familiar you are with the compiler, your IDE and the code base you are currently working on. However you do it, in the end HibernateUtil and BaseDAOImpl should look similar to these: public class HibernateUtil { private SessionFactory sessionFactory; public HibernateUtil (SessionFactory sessionFactory, ...) { this.sessionFactory = sessionFactory; } public Session openSession() { // <----- not static // The contents should not change much, if at all } } public class BaseDAOImpl .... { private HibernateUtil hibernateUtil; public BaseDAOImpl(HibernateUtil hibernateUtil) { this.hibernateUtil = hibernateUtil; } // Later use `hibernateUtil` instance to call `openSession` Session session = hibernateUtil.openSession(); If you are using a DI framework, you make HibernateUtil another component and inject it into DAO instances. If, instead, you are bootstrapping your application from main or some init method: void initMyApp(.....) { // create SessionFactory once SessionFactory sessionFactory = ....... // create HibernateUtil once HibernateUtil hibernateUtil = new HibernateUtil(sessionFactory) ....... // Use the same `hibernateUtil` many times FooDAO fooDAO = new FooDAOImpl(hibernateUtil); BarDAO barDAO = new BarDAOImpl(hibernateUtil); BazDAO bazDAO = new BazDAOImpl(hibernateUtil); }
{ "domain": "codereview.stackexchange", "id": 9385, "tags": "java, optimization, hibernate" }
What is the role of permanent magnets in a Tokamak reactor?
Question: Maybe I'm wrong, but to my understanding, electromagnets can only be made with a conductive coil and a ferromagnetic piece, not permanent magnets. I first thought that electromagnets were the only source of magnetic field inside the toroidal structure, so I got confused about why you would need permanent magnets if there isn't any moving part (unlike electric motors). Answer: The magnetic field has two sources, one external source created by the coils (mainly toroidal: tangential to the large circle of the torus) and one internal one (mainly poloidal: tangential to the small circle of the torus) generated by the plasma current.
{ "domain": "physics.stackexchange", "id": 90699, "tags": "electromagnetism, magnetic-fields, fusion" }
Motivation of Grassmann fields in the Faddeev-Popov method for free Gluon fields
Question: The Faddeev-Popov approach to make the generating functional corresponding to free gluon fields well defined introduces two independent Grassmann fields. Since these are scalar, their quanta can be interpreted as spin-0 fermions, which means they cannot correspond to physical states. The complete derivation of the method is at the moment a bit outside of my mathematical ability, but I am still curious about the motivation for using Grassmann fields in this procedure. Is a similar approach with commuting fields (as opposed to the anti-commuting properties of the Grassmann fields) impossible or very difficult? On the surface of it, such an approach would make more sense to me. Are results that incorporate fields such as these annoying to theorists? Or is there no reason to care as long as it does not impact the model's ability to make predictions of observables? Answer: Well, one argument goes as follows: The path integral $Z$ for a gauge theory needs gauge-fixing, typically a product of Dirac delta distributions: $\prod_{x,a} \delta(G^a(x))$. To make the path integral independent of gauge-fixing, there must also be a Faddeev-Popov (FP) determinant. We would like the FP determinant to be the result of Gaussian integrations. Unfortunately Grassmann-even Gaussian integrations produce inverse/reciprocal determinants. However Grassmann-odd Gaussian integrations do the job. References: M. Srednicki, QFT, 2007; Chapter 71. A prepublication draft PDF file is available here.
{ "domain": "physics.stackexchange", "id": 90950, "tags": "quantum-field-theory, gauge-theory, path-integral, grassmann-numbers, ghosts" }
Horizontal acceleration on an inclined plane
Question: Assume the inclined plane to be fixed. Taking components of gravitational acceleration along and perpendicular to the inclined plane, and further taking a component of the component along the plane in the horizontal direction, we can see that there is an acceleration in the horizontal direction due to gravity. But this shouldn't be possible. So what causes the horizontal acceleration? Also, why does taking the component of the acceleration along the incline give the correct results anyway? [Edit] The horizontal acceleration is caused by the normal reaction according to one of the answers. But even if that's the case, why is taking horizontal components of $g\sin A$ valid? Answer: [Edit] The horizontal acceleration is caused by the normal reaction according to one of the answers. But even if that's the case, why is taking horizontal components of $g\sin A$ valid? Resolving a vector into its components is always valid. That includes further resolving the component of a vector into its components. So the acceleration down the plane is: $a = g\sin A$. The horizontal component with respect to the ground is: $a\cos A = g\sin A\cos A$. The vertical component with respect to the ground is: $a\sin A = g\sin A\sin A = g\sin^2 A$. At $A=45^\circ$ the horizontal and vertical components of the acceleration with respect to the ground are the same. Intuitively you can see that as $A$ increases above $45^\circ$ the horizontal component of the acceleration with respect to the ground decreases and the vertical component increases, with the difference between the two widening the greater $A$ becomes. When $A$ reaches $90^\circ$ the horizontal component with respect to the ground is zero and the vertical component is $g$, i.e., the block is freely falling. Going in the other direction with $A$ less than $45^\circ$ the acceleration down the plane is decreasing, and the opposite occurs.
Both components of the acceleration with respect to the ground are decreasing, but now the horizontal component is greater than the vertical component, with the vertical component decreasing at a faster rate than the horizontal component. Eventually at $A = 0^\circ$ both components are zero. Hope this helps.
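The component formulas above can be checked numerically; a minimal sketch (taking g = 9.8 as an assumed value, frictionless incline as in the answer):

```python
import math

g = 9.8

def components(A):
    """Horizontal and vertical components (w.r.t. the ground) of the
    acceleration a = g*sin(A) directed down the incline of angle A (radians)."""
    a = g * math.sin(A)
    return a * math.cos(A), a * math.sin(A)

# At A = 45 degrees the two components are equal,
h45, v45 = components(math.radians(45))
assert math.isclose(h45, v45)

# and at A = 90 degrees the block is in free fall: no horizontal part, vertical = g.
h90, v90 = components(math.radians(90))
assert math.isclose(h90, 0.0, abs_tol=1e-9) and math.isclose(v90, g)
```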
{ "domain": "physics.stackexchange", "id": 64412, "tags": "homework-and-exercises, newtonian-mechanics, forces, acceleration, free-body-diagram" }
Why does a black hole suck in nearly everything, but emit gravitons?
Question: What is so inherently different about gravitons compared to other particles, that a black hole draws in everything possible, including massless photons, but emits more gravitons the stronger it grows? If this is something obvious like "^%#^& magnets, how do they work?" please close the question, but at least point me to the direction in which I should look. Answer: The photons and gravitons involved in static fields are not causal, they don't propagate along light cones. They are acausal things in a Feynman framework. Any gauge charge is visible outside the black hole, this is because gauge fields are determined by a Gauss law at infinity. A good classical picture is that a charged black hole has a charge-per-unit-area on the horizon, which is considered as a GR version of a charged plate, the charge per unit area is the electric field density on the horizon, while the horizon always carries mass per unit area away from extremality, and this is by the surface gravity of the black hole. These classical pictures don't refer to particles, only to fields. The duality between particle and field description is subtle, and should not be used for casual arguments like this.
{ "domain": "physics.stackexchange", "id": 4536, "tags": "gravity, black-holes" }
Is the historical method of teaching physics a "legitimate, sure and fruitful method of preparing a student to receive a physical hypothesis"?
Question: The French physicist, historian, and philosopher of physics, Pierre Duhem, wrote: "The legitimate, sure and fruitful method of preparing a student to receive a physical hypothesis is the historical method. To retrace the transformations through which the empirical matter accrued while the theoretical form was first sketched; to describe the long collaboration by means of which common sense and deductive logic analyzed this matter and modeled that form until one was exactly adapted to the other: that is the best way, surely even the only way, to give to those studying physics a correct and clear view of the very complex and living organization of this science" [emphasis added] (Duhem, P.: 1905/1954, ‘The Aim and Structure of Physical Theory’, Princeton University Press, Princeton, NJ, p. 268). Ernst Mach also advocated the history and philosophy of science (HPS) teaching method: "A person who has read and understood the Greek and Roman authors has felt and experienced more than one who is restricted to the impression of the present. He sees how men, placed in different circumstances, judge quite differently of the same things from what we do today. His own judgments will be rendered thus more independent" (Mach, E.: 1886/1986, ‘On Instruction in the Classics and the Sciences’, In: ‘Popular Scientific Lectures’, Open Court Publishing Company, La Salle, IL., p. 347). Igal Galili, author of "Experts’ Views on Using History and Philosophy of Science in the Practice of Physics Instruction," wrote to me that the historical method "presents a great controversy in university physics textbooks of physics." Arnold B. Arons, in his Teaching Introductory Physics, opposes the HPS teaching method; he thinks something "goes wrong if you suspend judgment (page I-229) in order to retrace historical footsteps." Why is this? Is the historical method a "legitimate, sure and fruitful method of preparing a student to receive a physical hypothesis"?
Answer: The historical method is necessary for several reasons: The historical literature shows you how the correct arguments won out over time. This is pretty much the only way to persuade yourself that the results we have are correct, since when you are learning something you are recapitulating the historical process, and all the confusions people had in the past will be in your head at some point. It is very difficult to get rid of the bad ideas by using textbooks, because the textbooks are written by people who already believe the result. So if you want to know how come we believe quantum mechanics works, or how come we believe GR, you need to read the historical literature. The arguments in textbooks are only persuasive to the converted. The historical literature is unfortunately not preserved well in most textbooks. The authors of textbooks do not paraphrase the original articles, but try to present their own arguments. These arguments don't need to persuade anyone, since they are preaching to the choir. So the arguments tend to decay over time. There are rare textbooks, like Landau and Lifschitz, or Feynman, where the arguments are generally better, but still, even in the best books, you don't get universally correct arguments. It is amazing how many times someone comes to you with "The argument in this and so textbook is wrong, therefore relativity is wrong". Usually, the argument is wrong, but the result is perfectly fine. The original author is usually better, since this author had to at least persuade the referee and the editor. There are forgotten methods and ideas: if you don't read Gell-Mann, you don't learn current commutators (at least not well, or outside of 2d). 
If you don't read Kadanoff and Polyakov, you don't learn about OPE closure in higher dimensions determining the anomalous dimensions/critical-exponents (only the most fruitful 2d version, where the OPE is constrained by conformal invariance, has been preserved in textbooks) If you don't read Schwinger and Feynman, you don't understand the particle picture (this is particularly unforgivable). If you don't read Mandelstam, you don't learn S-matrix methods (this is a crime against a whole generation). This is a disease of transmission--- the authors simply wish to transmit the shortest path to the most famous calculation, and the ideas get trod on shamelessly, and the textbooks are nearly always terrible caricatures. There are exceptions, like Weinberg's field theory books, but this is because the author in this case is steeped in the literature, is personally responsible for a large chunk of it, and has his own take on many of the essential results. By reading good physicists of the past, you learn what a good physics argument reads like. This is important too. There is no better guide to scientific style than Einstein. The old literature will introduce you to many problems which used to be considered extremely important, but then were put on the back-burner. Usually this is because 10 years go by or more, and nobody has any ideas. There are always many of these, and they were inspirations to many great developments. For example: Tony Skyrme was directly inspired by the then-ancient and completely discredited ideas of Lord Kelvin on topological ether-knot atoms to make his famous model of the proton as a topological defect in the recently discovered pion condensate. Balachandran, Nair, and Rajeev revived this idea in the context of large N, and Witten made it stick, and this was also an exercise in resuscitating a dead idea. 
Gutzwiller was completing an old problem, left unsolved in the early days of quantum mechanics, of finding the semiclassical quantization of chaotic systems. Einstein was already wondering about this in the 1910s; the Bohr-Sommerfeld quantization assumed the classical system was integrable. The Gutzwiller trace formula is one of the central discoveries of the late 20th century, and it was musty stuff when it was formulated. Schwarz and Scherk revived the ideas of Kaluza and Klein in the late 1970s to make sense of string theory. The unified field theory ideas were, by 1970, firmly in the dustbin. Kolmogorov attacked the ancient problem of solar-system stability, and showed that perturbation theory can be summed away from resonances. This was turned into the KAM theorem over several decades. Widom recently considered the thermodynamics of three-phase interfaces, a problem which was considered in the 19th century but then neglected, until he revived it. This is a seriously truncated list, because so many developments are resuscitated old problems. Most of the literature is expanding on an old problem which was left unsolved for some reason or another. Unfortunately, many of the problems considered unsolved in earlier eras were just solved later, and you have to know the later developments to know which are still open. Here is a short, woefully incomplete list of dead problems which were never solved and are no longer on anyone's mind as far as I know: Mandelstam's double dispersion relations: How do these work? What the heck are they? What do they mean? This was left unsolved and nobody works on it. Regge degeneracies: I asked about this here: why are the even and odd trajectories degenerate in QCD? This was a pressing issue in the 1960s. Now, nobody knows and nobody cares (hopefully this will change). What is the sigma f(660)? Is it a real particle? Again, this is an off-and-on question. What is going on with the idea of pomeron quark coupling? Is this for real?
Is there such a thing as a pomeron quark vertex? This seems like nonsense, but it predicts that pion-proton total cross sections are 2/3 the proton-proton cross section, and this is experimentally more or less true. But the idea contradicts Gribov's idea of a universal coefficient. Is Gribov's idea true? Or is there a pomeron quark coupling? Nobody knows and nobody cares. There are tons more. These old questions are sources of modern ideas, as the ideas come back in and out of fashion. You can't do anything without knowing all of these. It is also a terrible disservice to previous generations to ignore what they wrote. Unfortunately, time is limited, and the old papers can have suboptimal notation and confusing presentation, due to the dead weight of the era's conventions. This is why I think it is useful to make quick summaries of the entire content of the old papers, using modernized notation and methods, while keeping all the ideas. This is a difficult art, modernizing the papers, but it can be done. In principle this is the job of textbooks, but textbook authors are never going to do this, so one should do it as well as possible. The other reason to read the old literature is that there are certain things that pop into one's head that are old, which you won't realize were the same things that popped into the original author's head. For example, when I was learning statistical mechanics, I was disappointed that the statistical description was on phase space. I thought "why don't you just make a probability distribution for the particle to be at position x and have velocity v and find the equation for the distribution". Of course, this was the original idea of Boltzmann, and the modern theory evolved from this starting point, not the other way around. But I couldn't learn statistical mechanics until I saw somebody do the obvious thing (the Boltzmann equation) first.
There are countless other retreads of old territory: Every decade or so, somebody rediscovers the fact that classical EM can be made symmetric between E and B by introducing a second vector potential. The result is that the photon is doubled, and there are two separate U(1) gauge symmetries, and this is not the right way to do monopoles (it isn't topological and it doesn't give Dirac quantization). This thing is published again and again, and it is massively annoying. For 3 decades after 1957, people kept on rediscovering the Everett interpretation. Each person was sure their own rediscovery was different, because the Everett interpretation was misrepresented in the literature until the internet came along. So you get "many minds" and "decoherence" and "consistent histories" (although Gell-Mann always cites Everett), and so on, all describing what is essentially the same idea. I met a grad student once who had decided to recast all of quantum mechanics in information-theoretic terms, and had come up with a new interpretation of quantum mechanics. A quick reference to Everett showed him that his ideas were all essentially present in Everett's thesis. There are downsides to studying too much history. The biggest downside is that it kills the motivation for trying out new ideas, because the good historical stuff seems so different from the shoddy stuff it seems you always think of (of course, this is hindsight working). It is also good to keep in touch with what other people are doing, since the community generally has an intelligence regarding what the fruitful things to study are, an intelligence which is greater than any individual. But the community can also be persuaded to run after nonsense too, like large extra dimensions, so one has to be careful. History makes you brave in this regard, since you can see when arguments are wrong, because when something is wrong, seeing that it is wrong takes only an old argument.
Of course, when something is right, it is always completely different from what came before. Anyway, I am on the side of "yes". You should know the history.
{ "domain": "physics.stackexchange", "id": 4213, "tags": "education, history" }
Why is the Future 'done callback' order reversed?
Question: I'd like to attach multiple callbacks to a Future provided by a ClientGoalHandle. However, when I do this I find that the callbacks are being called in reverse order. I'm a little stumped as to why this is happening. When I look through the code for rclpy.executor or rclpy.task I can only see for callback in callbacks. Here's a snippet from my code: @property def logger(self): return self._node.get_logger() def foo(self, *args, **kwargs): self.logger.info('foo') def bar(self, *args, **kwargs): self.logger.info('bar') def baz(self, *args, **kwargs): self.logger.info('baz') def qux(self, *args, **kwargs): self.logger.info('qux') def next(self) -> Optional[Future]: self._clear_cache() if self.goal_handle is not None: self.logger.info("Cancelling current guidance.") future = self.goal_handle.cancel_goal_async() future.add_done_callback(self.foo) future.add_done_callback(self.bar) self._goal_future.add_done_callback(self.baz) self._goal_future.add_done_callback(self.qux) return self._goal_future else: return None Here is the console readout in which you can see bar is called before foo and qux is called before baz: [simple_state_machine-2] [INFO] [1607502712.297026172] [state]: [GoalState] Cancelling current guidance. [velocity_controller-3] [INFO] [1607502712.301784151] [vel_ctrl]: Received cancel request. [simple_state_machine-2] [INFO] [1607502712.302841057] [state]: [GoalState] bar [simple_state_machine-2] [INFO] [1607502712.303296070] [state]: [GoalState] foo [velocity_controller-3] [INFO] [1607502712.397141220] [vel_ctrl]: Guidance cancelled. [velocity_controller-3] [INFO] [1607502712.398128136] [vel_ctrl]: Goal status: 5. [simple_state_machine-2] [INFO] [1607502712.399765926] [state]: [GoalState] qux [simple_state_machine-2] [INFO] [1607502712.400109978] [state]: [GoalState] baz [simple_state_machine-2] [WARN] [1607502712.400789882] [state]: [GoalState] Navigation ended with goal status code: 5 and guidance status code: 0. 
I can 'fix' this by reversing the future callback order: with future._lock: future._callbacks.reverse() But it feels like a hack to mess around with private variables within the future class. Any ideas? Originally posted by SmallJoeMan on ROS Answers with karma: 116 on 2020-12-09 Post score: 1 Answer: There are two modes in which future callbacks are called. Either they're called directly by the future, or they're scheduled with an executor. It's up to the executor to decide what order to call them in. This example creates a future without an executor. from functools import partial from rclpy.task import Future def callback(name, future): print(f"I am callback {name}") f = Future() f.add_done_callback(partial(callback, 'alice')) f.add_done_callback(partial(callback, 'bob')) f.add_done_callback(partial(callback, 'charlie')) f.set_result("foobar") It calls done callbacks in the order they're added: I am callback alice I am callback bob I am callback charlie This future schedules callbacks with a SingleThreadedExecutor. import rclpy from rclpy.executors import SingleThreadedExecutor rclpy.init() s = SingleThreadedExecutor() f = Future(executor=s) f.add_done_callback(partial(callback, 'amy')) f.add_done_callback(partial(callback, 'brice')) f.add_done_callback(partial(callback, 'cole')) f.set_result("foobar") s.spin_once() s.spin_once() s.spin_once() s.spin_once() rclpy.shutdown() It happens to call them in reverse. I am callback cole I am callback brice I am callback amy They're reversed because the executor checks which tasks are ready in reverse. Reversing the order means if an old task has yield'd or await'd because it's waiting for something to happen, then a new task (appended to the end of the task list) gets to run first and potentially unblock the old task. That doesn't mean an executor is guaranteed to execute the tasks in reverse though. The order of callbacks in a MultiThreadedExecutor depends on the order the operating system schedules threads.
import rclpy from rclpy.executors import MultiThreadedExecutor rclpy.init() s = MultiThreadedExecutor() f = Future(executor=s) for i in range(100): f.add_done_callback(partial(callback, i)) f.set_result("foobar") for i in range(100): s.spin_once() rclpy.shutdown() It tends towards reverse order but is still pretty random ... I am callback 95 I am callback 93 I am callback 92 I am callback 94 I am callback 89 I am callback 88 I am callback 90 ... Originally posted by sloretz with karma: 3061 on 2020-12-18 This answer was ACCEPTED on the original site Post score: 1
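If all you need is a guaranteed ordering among your own callbacks, a pattern that avoids touching the private `_callbacks` list is to register a single composite callback. A minimal plain-Python sketch of the idea (no rclpy here: `StubFuture` and `add_ordered_callbacks` are illustrative names, not rclpy API; the stub deliberately fires callbacks in reverse to mimic the executor behaviour seen in the question):

```python
# Sketch: one registered callback means the executor's scheduling order
# cannot interleave your callbacks; you control the inner order yourself.

class StubFuture:
    """Minimal stand-in for a future that fires callbacks in REVERSE order."""
    def __init__(self):
        self._callbacks = []

    def add_done_callback(self, cb):
        self._callbacks.append(cb)

    def set_result(self, result):
        self._result = result
        for cb in reversed(self._callbacks):  # mimic the reversed executor
            cb(self)

def add_ordered_callbacks(future, callbacks):
    # Hypothetical helper: wrap many callbacks into one composite callback.
    def composite(fut):
        for cb in callbacks:  # always runs in the order you listed them
            cb(fut)
    future.add_done_callback(composite)

calls = []
f = StubFuture()
add_ordered_callbacks(f, [lambda fut: calls.append('foo'),
                          lambda fut: calls.append('bar')])
f.set_result('done')
print(calls)  # ['foo', 'bar'] even though StubFuture fires in reverse
```

Since only one callback is ever registered with the future, this should also hold under rclpy executors, whatever order they pick internally.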
{ "domain": "robotics.stackexchange", "id": 35852, "tags": "ros, ros2, rclpy" }
Calculate Hamiltonian from Lagrangian for electromagnetic field
Question: I am unable to derive the Hamiltonian for the electromagnetic field, starting out with the Lagrangian $$ \mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2}\partial_\nu A^\nu \partial_\mu A^\mu $$ I found: $$ \pi^\mu=F^{\mu 0}-g^{\mu 0}\partial_\nu A^\nu $$ Now $$ \mathcal{H}=\pi^\mu\partial_0 A_\mu-\mathcal{L} $$ Computing this, I arrive at: $$ \mathcal{H}=-\frac{1}{2}\left[\partial_0 A_\mu\partial_0 A^\mu+\partial_i A_\mu\partial_i A^\mu\right]+\frac{1}{2}\left[\partial_i A_i\partial_j A_j-\partial_j A_i\partial_i A_j\right] $$ The right answer, according to my exercise sheet, would be the first two terms. Unfortunately the last two terms do not cancel. I have spent hours on this exercise and I am pretty sure that I did not commit any mistakes arriving at this result, as I double-checked several times. My question now is: did I start out right, and am I using the right scheme? Is this in principle the way to derive the Hamiltonian, or is it easier to start out with a different Lagrangian, maybe using a different gauge? Any other tips are of course also welcome. Maybe the last two terms do actually cancel and I simply don't realize it. Texing my full calculation would take a very long time so I am not going to post it, but as I said, it should be correct. But if everything hints at me having committed a mistake there, I will try again. Edit: After reading and thinking through Stephen Blake's answer, I realized that one can get rid of the last two terms in $H$, even though they do not vanish in $\mathcal{H}$. This is done by integrating the last term by parts and dropping the surface term, leaving $A_i\partial_j\partial_i A_j$. One can now proceed to combine the last two terms: $$ \partial_i A_i\partial_j A_j+A_i\partial_j\partial_i A_j=\partial_i(A_i\partial_j A_j) $$ This can be converted into a surface integral in $H$ which can be assumed to vanish, leaving us with the desired "effective" $\mathcal{H}$.
Answer: There doesn't appear to be anything wrong with user35915's calculation. However, in order to get the desired answer, the canonical momenta need to be different. Starting from user35915's action, $$ S=\int d^{4} x\left( -\frac{1}{4}F_{\mu\lambda}F^{\mu\lambda}-\frac{1}{2}A^{\mu}_{,\mu}A^{\lambda}_{,\lambda}\right) $$ change the second term by integrating by parts and chuck the surface term away to get, $$ S=-\frac{1}{4}\int d^{4} x( F_{\mu\lambda}F^{\mu\lambda}-2A^{\mu}A^{\lambda}_{,\lambda\mu}) \ . $$ Now expand the electromagnetic field tensor $F_{\mu\lambda}=A_{\lambda,\mu}-A_{\mu,\lambda}$ and do a bit of swapping dummy indices to get, $$ S=-\frac{1}{2}\int d^{4}x \eta^{\mu\rho}\eta^{\lambda\sigma}(A_{\lambda,\mu}A_{\sigma,\rho}-A_{\lambda,\mu}A_{\rho,\sigma}-A_{\rho}A_{\lambda,\sigma\mu})\ . $$ The last two terms can be combined into a surface integral which vanishes at infinity, and the final form of the action is only the first term in the last line. The Lagrangian is now, $$ L=-\frac{1}{2}\int d^{3}x \eta^{\mu\rho}\eta^{\lambda\sigma}A_{\lambda,\mu}A_{\sigma,\rho}=-\frac{1}{2}\int d^{3}x A_{\mu,\lambda}A^{\mu,\lambda}\ . $$ The reason for getting the Lagrangian in this form is that it looks like the Lagrangian for four scalar fields. The canonical momenta are now, $$ \pi^{\mu}=-A^{\mu}_{,0}=-\frac{\partial A^{\mu}}{\partial t} $$ which look like the momenta for four scalar fields. Now it's straightforward to go over to the desired Hamiltonian, $$ H=-\frac{1}{2}\int d^{3}x (\pi^{\mu}\pi_{\mu}+A^{\mu}_{,r}A_{\mu,r}) $$
{ "domain": "physics.stackexchange", "id": 11146, "tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, hamiltonian-formalism, constrained-dynamics" }
reference gazebo-plugin in urdf-description
Question: Hi, I'm writing my first description of a robot using URDF. To control the joints, I tried to write a plugin for Gazebo following the tutorials (simulator_gazebo/Tutorials/GazeboPluginIntro and plugins/model_manipulation_plugin). The problem is that Gazebo doesn't seem to load my plugin. I think it's because of my URDF file. It looks like this: <?xml version="1.0"?> <robot name="pnp-robot"> <link name="base"> . . . <!--some link-definitions--> . . . <joint name="base_turntable" type="continuous"> . . <!--some joint-definitions--> . . <!--reference the Model-Plugin--> <gazebo> <plugin filename="lib/libpnp_robot.so" name="Controll_Plugin" /> <base_turntable>base_turntable</base_turntable> <y_first_motor_first_arm>y_first_motor_first_arm</y_first_motor_first_arm> <first_arm_y_sec_motor>first_arm_y_sec_motor</first_arm_y_sec_motor> </gazebo> </robot> When I start Gazebo and spawn the model, everything is displayed, but the links should be in a different position if the plugin is running. There should also be output on my console, but there is nothing. Does someone have an idea? Thanks a lot! Georg Originally posted by Gjaeger on ROS Answers with karma: 36 on 2012-11-30 Post score: 1 Answer: I think I solved the problem; I just changed the plugin statement to: <gazebo> <controller:pnp-robot name="controller" plugin="/home/user/workspace_ros/pick_place/pnp_robot/lib/libpnp_robot.so"> <base_turntable>base_turntable</base_turntable> <y_first_motor_first_arm>y_first_motor_first_arm</y_first_motor_first_arm> <first_arm_y_sec_motor>first_arm_y_sec_motor</first_arm_y_sec_motor> </controller:pnp-robot> </gazebo> At least I got some output, even if it doesn't change the position of my joints.
So I think there must be another problem in my plugin-code: #include "gazebo.hh" #include "boost/bind.hpp" #include "common/common.hh" #include "physics/physics.hh" #include <stdio.h> namespace gazebo { class Controll_Plugin : public ModelPlugin { private: physics::ModelPtr model; private: event::ConnectionPtr updateConnection; private: physics::JointPtr turntable; private: physics::JointPtr first_arm; private: physics::JointPtr sec_arm; public : void Load(physics::ModelPtr model, sdf::ElementPtr sdf) { this->model = model; if(this->LoadParams(sdf)) { this->updateConnection = event::Events::ConnectWorldUpdateStart(boost::bind(&Controll_Plugin::OnUpdate, this)); } } public: bool LoadParams(sdf::ElementPtr sdf) { if(this->findJointByParam(sdf, this->turntable, "base_turntable") && this->findJointByParam(sdf, this->first_arm, "y_first_motor_first_arm") && this->findJointByParam(sdf,this->sec_arm, "first_arm_y_sec_motor")) { return true; } else return false; } public: bool findJointByParam(sdf::ElementPtr sdf, physics::JointPtr &joint, std::string param) { std::cout << "findJointByParam" << std::endl; if(!sdf->HasElement(param)) { return false; } else { joint = this->model->GetJoint(sdf->GetElement(param)->GetValueString()); if(!joint) return false; } return true; } public: void OnUpdate() { std::cout << "onUpdate" << std::endl; this->turntable->SetAngle(2, 120); this->first_arm->SetAngle(0, 120); this->sec_arm->SetAngle(2, 120); } }; GZ_REGISTER_MODEL_PLUGIN(Controll_Plugin) } It would be great, if anyone can help me! Thanks a lot Georg Originally posted by Gjaeger with karma: 36 on 2012-12-01 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Lorenz on 2012-12-01: Please read the support guidelines before posting. Do not create answers for comments, discussion or updates. Instead, either edit your original post or use the comment functionality.
{ "domain": "robotics.stackexchange", "id": 11947, "tags": "ros, gazebo, urdf, gazebo-plugins" }
Line integral in Peierls substitution
Question: I'm trying to understand the reasoning behind Peierls substitution. The final result seems to be simply replacing the hopping elements $$t_{ij} \to t_{ij} e^{i \frac{q}{\hbar} \int_i^j \vec{A} \cdot d \vec{r}}$$ The line integral in the exponent is taken along the straight line connecting the two sites. Aside from having the shortest Euclidean distance, I do not see what is so special about this path. From my understanding, the difference of the line integrals along two different paths is proportional to the magnetic flux enclosed by the two paths. $$\oint \vec{A} \cdot d\vec{r} = \iint \vec{B} \cdot d\vec{S}$$ So it seems to me the choice of path is actually quite important. Answer: In the tight-binding method, the finite difference approximation $\psi'(x) \approx \frac{\psi(x+a/2) - \psi(x-a/2)}{a}$ is made, which means that the momentum operator can be approximated in the following way. $$ p \approx \frac{2 \hbar}{a} \sin\left(\frac{a}{2\hbar} p \right) + \mathcal{O}(a^2) $$ Using the double-angle formula for the cosine, the tight-binding Hamiltonian for a nearest-neighbor hopping on a cubic lattice is recovered. $$ H = \sum_{\mu}^{x,y,z} \frac{p_\mu^2}{2m} \approx \sum_{\mu} \frac{\hbar^2}{m a^2} \left[1-\cos\left(\frac{a}{\hbar} p_\mu \right)\right] $$ This is perhaps more clear when we identify the hopping constant as $t = \frac{\hbar^2}{2ma^2}$ and $k_\mu = \frac{p_\mu}{\hbar}$. Usually, the constant identity term is also neglected for simplicity. Then, $$ H = -2t \sum_{\mu} \cos k_\mu a $$ The electromagnetic interaction in the continuum model can be incorporated by the minimal coupling substitution $\vec{p} \to \vec{p} - q\vec{A}$, so that the single particle Hamiltonian for a charged particle reads $$ H = \sum_{\mu} \frac{(p_\mu - q A_\mu)^2}{2m} + q \phi $$ The claim now is that the line integral form of the Peierls substitution (with the straight-line path) can be obtained by just performing the minimal coupling substitution in momentum space.
But first, some mathematical results need to be stated. In the theory of first-order differential equations, the integrating factor method is often used to factorize the linear differential operator $$ e^{-f(x)} \partial_x e^{f(x)} = f'(x) + \partial_x $$ which allows us to take the exponential easily $$ e^{-f(x)} e^{a \partial_x} e^{f(x)} = e^{a (f'(x) + \partial_x)} $$ But the understanding of the operator identity is that there is always a test function on the right. $$ e^{a (f'(x) + \partial_x)} \psi(x) = e^{\int_x^{x+a} f'(t) d t} \psi(x+a) $$ And the form of the Peierls substitution is apparent, with the integral occurring in the exponent. Therefore, the action of the Hamiltonian on a wavefunction is $$ H \psi(\vec{x}) = -t \sum_{\mu} e^{- i \frac{q}{\hbar} \int_{\vec{x}}^{\vec{x}+\vec{a}_\mu} A_\mu d x_\mu} \psi(\vec{x}+\vec{a}_\mu) + e^{-i \frac{q}{\hbar} \int_{\vec{x}}^{\vec{x}-\vec{a}_\mu} A_\mu d x_\mu} \psi(\vec{x}-\vec{a}_\mu) \\ = -t \sum_{\mu} e^{- i \frac{q}{\hbar} \int_{\vec{x}}^{\vec{x}+\vec{a}_\mu} \vec{A} \cdot d \vec{r}} \psi(\vec{x}+\vec{a}_\mu) + e^{-i \frac{q}{\hbar} \int_{\vec{x}}^{\vec{x}-\vec{a}_\mu} \vec{A} \cdot d \vec{r}} \psi(\vec{x}-\vec{a}_\mu) $$ $\vec{a}_\mu = a \hat{e}_\mu$ is a shorthand, and the integral as seen should be taken along the straight-line path. Under a gauge transformation $\vec{A} \to \vec{A}' = \vec{A} + \nabla \Lambda$, it can be shown that the transformed Hamiltonian $H'$ satisfies $$ H'(e^{iq\Lambda/\hbar} \psi) = e^{iq\Lambda/\hbar} H \psi $$
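The flux relation raised in the question can be checked concretely with a minimal numeric sketch (assumptions: square lattice, unit lattice constant, Landau gauge $\vec{A}=(0,Bx,0)$, and `phi` denoting the flux per plaquette in units of the flux quantum; `peierls_phase` is an illustrative helper, not a library function). Summing the straight-line Peierls phase angles around one plaquette yields $2\pi$ times the enclosed flux, which is exactly why only fluxes, not individual path choices, are physical:

```python
import math

# In the Landau gauge A = (0, B*x, 0), the straight-line line integral
# (q/hbar) * ∫ A·dr along a +y bond at column x is 2*pi*phi*x, and 0 on x bonds.

def peierls_phase(x, phi, direction):
    """Phase angle accumulated hopping one site in `direction` from column x."""
    if direction == '+y':
        return 2 * math.pi * phi * x
    if direction == '-y':
        return -2 * math.pi * phi * x
    return 0.0  # '+x' or '-x': this gauge has no x-component

phi, x = 0.1, 3  # illustrative values

# Traverse one plaquette counterclockwise and add up the bond phases.
loop = (peierls_phase(x, phi, '+x')        # (x, y)     -> (x+1, y)
        + peierls_phase(x + 1, phi, '+y')  # (x+1, y)   -> (x+1, y+1)
        + peierls_phase(x + 1, phi, '-x')  # (x+1, y+1) -> (x, y+1)
        + peierls_phase(x, phi, '-y'))     # (x, y+1)   -> (x, y)

print(loop, 2 * math.pi * phi)  # loop sum = 2*pi*(enclosed flux in flux quanta)
```

The individual bond phases depend on the gauge and on the path, but the plaquette sum does not, mirroring the gauge-covariance statement above.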
{ "domain": "physics.stackexchange", "id": 100158, "tags": "condensed-matter, magnetic-fields, hamiltonian, tight-binding" }
Math quiz for teachers & students - follow-up
Question: I have been programming a maths quiz that can be used by teachers, and I have been trying to make the code as short as possible so it is easier to understand. If you have seen my previous post you may be wondering why I am re-posting, as I already received feedback; when trying to act on that feedback I got stuck again. In my code, there's a lot of prompting the user for an input and then checking to see if it's valid - so I have been trying to turn this into a function. Now some of you may be confused as to why I am unable to do this, as I have already done this with the function get_bool_input. However, the desired function needs to be used in a different context, which is what I am unable to do. import sys import random def get_bool_input(prompt=''): while True: val = input(prompt).lower() if val == 'yes': return True elif val == 'no': return False else: sys.exit("Not a valid input (yes/no is expected) please try again") status = input("Are you a teacher or student?
Press 1 if you are a student or 2 if you are a teacher") if status == "1": score=0 name=input("What is your name?") print ("Alright",name,"welcome to your maths quiz") level_of_difficulty = int(input(("What level of difficulty are you working at?\n" "Press 1 for low, 2 for intermediate " "or 3 for high\n"))) if level_of_difficulty not in (1,2,3): sys.exit("That is not a valid level of difficulty, please try again") if level_of_difficulty == 3: ops = ['+', '-', '*', '/'] else: ops = ['+', '-', '*'] for question_num in range(1, 11): if level_of_difficulty == 1: number_1 = random.randrange(1, 10) number_2 = random.randrange(1, 10) else: number_1 = random.randrange(1, 20) number_2 = random.randrange(1, 20) operation = random.choice(ops) maths = round(eval(str(number_1) + operation + str(number_2)),5) print('\nQuestion number: {}'.format(question_num)) print ("The question is",number_1,operation,number_2) answer = float(input("What is your answer: ")) if answer == maths: print("Correct") score = score + 1 else: print ("Incorrect. The actual answer is",maths) if score >5: print("Well done you scored",score,"out of 10") else: print("Unfortunately you only scored",score,"out of 10. Better luck next time") class_number = input("Before your score is saved ,are you in class 1, 2 or 3? Press the matching number") while class_number not in ("1","2","3"): print("That is not a valid class, unfortunately your score cannot be saved, please try again") class_number = input("Before your score is saved ,are you in class 1, 2 or 3? 
Press the matching number") # I realised that previously if the user entered an invalid class they would have to start the quiz again from the beginning which is quite else: filename = (class_number + "txt") with open(filename, 'a') as f: f.write("\n" + str(name) + " scored " + str(score) + " on difficulty level " + str(level_of_difficulty)) with open(filename, 'a') as f: f = open(filename, "r") lines = [line for line in f if line.strip()] f.close() lines.sort() if get_bool_input("Do you wish to view previous results for your class"): for line in lines: print (line) else: sys.exit("Thanks for taking part in the quiz, your teacher should discuss your score with you later") if status == "2": class_number = input("Which classes scores would you like to see? Press 1 for class 1, 2 for class 2 or 3 for class 3") if class_number not in (1,2,3): sys.exit("That is not a valid class") filename = (class_number + "txt") with open(filename, 'a') as f: f = open(filename, "r") lines = [line for line in f if line.strip()] f.close() lines.sort() for line in lines: print (line) Answer: Helper functions to prompt Your get_bool_input is not very friendly: it exits on invalid input. It would be better to let the user retry. For that, simply replace the sys.exit with print. So it seems you need a lot of prompting, often for integer. 
So add another helper function: def prompt_int(prompt=''): while True: val = input(prompt) try: return int(val) except ValueError: print("Not a valid number, please try again") Or if you want to limit input to specific choices you can do something like: def prompt_choice(prompt, choices): while True: val = input(prompt) if val in choices: return val else: print("Not a valid choice, please try again") Avoid duplicated code You have some duplicated logic, in many places, for example: if level_of_difficulty == 1: number_1 = random.randrange(1, 10) number_2 = random.randrange(1, 10) else: number_1 = random.randrange(1, 20) number_2 = random.randrange(1, 20) When you see repeated code like that, separate in your mind the parts that are the same and the parts that are different, and try to rewrite in a way that the same part is only written once, and replace the changing part with a variable. Like this: if level_of_difficulty == 1: max_number = 10 else: max_number = 20 number_1 = random.randrange(1, max_number) number_2 = random.randrange(1, max_number) Apply this technique everywhere in your code. Working with files What's going on here: with open(filename, 'a') as f: f = open(filename, "r") lines = [line for line in f if line.strip()] f.close() lines.sort() Opening a file to append, then inside the with block opening the same file for reading? That doesn't make any sense. With the nonsense removed: with open(filename) as f: lines = [line for line in f if line.strip()] lines.sort() Poor naming Many variable names don't describe their purpose well. For example "status" is hardly a good name for the teacher-student choice. Perhaps "role" would be better.
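One further point beyond the review above: the quiz computes each answer with eval(str(number_1) + operation + str(number_2)), which works but is fragile and a known bad habit. A small sketch of the usual alternative, a dispatch table of functions from the operator module (apply_op is an illustrative name, not part of the original quiz):

```python
import operator

# Map the operator symbols the quiz already uses onto plain functions,
# so no string is ever executed as code.
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def apply_op(a, op, b):
    # Same rounding as the quiz: round(eval(...), 5)
    return round(OPS[op](a, b), 5)

print(apply_op(7, '*', 6))   # 42
print(apply_op(1, '/', 3))   # 0.33333
```

A lookup like OPS[op] also fails loudly (KeyError) on an unexpected symbol, whereas eval would happily run whatever string it was handed.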
{ "domain": "codereview.stackexchange", "id": 14878, "tags": "python, quiz" }
C++ File handling, inserts ÿ character while copying contents
Question: [Please don't comment on using Turbo C++. I know that it's obsolete but we are taught this way only.] #include<fstream.h> #include<conio.h> void main() { clrscr(); char ch; ifstream read; read.open("Employee.txt"); ofstream write; write.open("Another.txt"); while(!read.eof()) { read.get(ch); //Also when I use, write<<read.get(ch) is it writes some kind of write<<ch; //address in the file. Please tell me about that too why it happens. } read.close(); write.close(); getch(); } The problem I'm facing is that it appends a ÿ character at the end of the 'Another' file. Answer: This is a deprecated header (standard C++ headers don't have .h on the end) #include<fstream.h> // Should be #include<fstream> OK your Turbo stuff: #include<conio.h> This is not a valid main() declaration: void main() There are two valid main() declarations accepted by the standard: int main() int main(int argc, char* argv[]) Declare variables as close to the point of use as you can: char ch; // You don't use this till the other end of the function. // How am I supposed to remember its type. Oh yes, you // seem to have just shortened the name of the type. // More meaningful names are also useful (this is not a // terseness competition). May as well use the constructor that opens them. ifstream read; read.open("Employee.txt"); // Much easier to read and write as ifstream read("Employee.txt"); Same again ofstream write; write.open("Another.txt"); Note: if you had included the correct header file, these would both be in the standard namespace. This is nearly always the wrong way to write a loop while(!read.eof()) The problem is that the eof flag is not set until you read past the end of file. But the last successful read will read up to the end of file (not past). So even though there is absolutely no data left on the stream the eof flag is still false. It is not until you try and read a character that is not there that the flag is set. To get around this you can write the loop like this.
while(!read.eof()) { read.get(ch); if (read.eof()) { break;} // PS. This is why you are seeing the funny character in your output: // you read past the end of file, and even though the stream told // you it had failed (by setting the eof flag), you failed to // check the state after the read and thus printed some random // garbage to your output (as ch held no newly read character). // STUFF } BUT wait. There are more problems with the above. What happens if eof is never set? You may say that can never happen. Actually it can (and is very common in normal use). If any of the other failure bits is set on the stream, you now get stuck in an infinite loop, because once a failure bit is set no more data is read from the stream, so it never advances to reach the end (luckily for you get() is not going to do that, but other standard read operations will). But what happens if the file does not exist? It is not eof(), but a failure bit is set. So if the file does not exist your loop becomes an infinite loop. while(!read.eof()) { int x; read >> x; if (read.eof()) { break;} // STUFF } Here if the input does not contain an int you have an infinite loop. You can fix it by testing for read.bad(), but now things are getting more complex. An easier way to write this is (the correct way): while(read.get(ch)) { // STUFF } Your loop is only entered if the read worked. This works because the result of get() is a reference to the stream (i.e. a reference to read). When a stream is used in a boolean context (like a while loop test) it is converted into a boolean-like type. The value of this boolean-like type is based on the state of the stream: if you can still read from it (i.e. the last read worked) then it is true, otherwise false. As noted above you should declare variables as close to the point of use as possible. Because you declared ch at the top but only use it inside the loop, you are polluting the rest of the code with an unneeded value. 
This can lead to errors where you accidentally use the value. Also when I use, write<<read.get(ch) is it writes some kind of address in the file. Please tell me about that too why it happens. This happens because the result of get() returns a reference to the stream. When used on the right hand side of operator<< there is a conversion into a type that can be written to an output stream. As it happens it is the same conversion that happens above (as explained for while). The boolean-like type in C++03 is a pointer (a NULL pointer is false, all other pointers are true). In C++11 this was fixed and the boolean-like type is actually a bool. I would not read too much into the value of the pointer, only whether it is (or is not) NULL. Don't manually close the streams. This is what RAII is for: see My C++ code involving an fstream failed review read.close(); write.close(); You can use standard functions to achieve the same effect as pausing for the human. getch(); // standard compliant way. std::cin.clear(); std::string line; std::getline(std::cin, line);
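Putting the pieces together, a minimal standard-compliant version of the copy might look like this (a sketch only; the copying is pulled into a hypothetical copyFile() helper rather than living in main()):

```cpp
#include <fstream>
#include <string>

// Copy one file to another character by character, testing the read
// in the loop condition so the loop body only runs on a successful read.
// Returns false if the source file could not be opened (so a missing
// file fails cleanly instead of spinning in a loop).
bool copyFile(const std::string& from, const std::string& to)
{
    std::ifstream read(from);   // the constructor opens the stream
    if (!read) {
        return false;           // source missing: fail instead of looping
    }
    std::ofstream write(to);

    char ch;
    while (read.get(ch)) {      // true only while reads keep succeeding
        write << ch;
    }
    return true;
}
```

Because the final get() that hits end of file fails the loop test, no garbage character ever reaches the output, which is exactly the stray ÿ the question was asking about.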
{ "domain": "codereview.stackexchange", "id": 4858, "tags": "c++, file" }
What defines which universe I will end up in?
Question: Following this question about the Many Worlds Interpretation of QM in which MW is stated to be deterministic: What "chooses" which universe I will be in, i.e., which outcome I will see? I actually think that this is random, and if it is: Why was it stated in that question that MW is deterministic? Answer: You will see all outcomes, because "you" are part of the universe. The idea that it is random is actually more in line with the Copenhagen interpretation than MWI. The real issue is a philosophical one. You are used to defining the concept of "you" in a way that makes sense in a classical world. The "you" which makes sense in a MWI point of view must adapt to fit the model it is being used in. After an event happens, you could choose to define a new observer in a random world, and declare that it embodies the perdurable concept of "you" from that point on. However, the randomness was your own doing, not MWI. From a physics perspective, there's just a bag of particles which are in different states in different worlds. There's no metaphysical "you" that is observing the world from inside that bag. How you adapt this metaphysical concept to fit within MWI is beyond the scope of physics.
{ "domain": "physics.stackexchange", "id": 40732, "tags": "quantum-mechanics, quantum-interpretations" }
Tidal effects in Global Positioning System?
Question: Are tidal effects from the Sun and/or Moon taken into account in GPS systems? More broadly, is the heliocentric system used somehow in GPS? Answer: Are tidal effects from the Sun and/or Moon taken into account in GPS systems? For calculating the orbits of the satellites, yes. The satellites orbit high enough that accurately modeling their orbits mandates accounting for third-body perturbations from the Moon, the Sun, and the planets. The orbits of the satellites are calculated from the perspective of a geocentric frame; the relative positions of the perturbing bodies are calculated from the perspective of a solar system barycentric frame (which answers your second question). On the timing, no. Imagine a GPS satellite in opposition (on the line between the Sun and the Earth, on the opposite side of the Earth from the Sun) and a person on the Earth who sees the Sun as being straight overhead. The effect of the Sun on the tick rates of the satellite's clock and the person's clock is very small, less than 0.2 microseconds/day. The satellite doesn't hang suspended at that position. It orbits the Earth. Half an orbit later, the satellite will be close to being in conjunction (where its clock will tick a tiny bit slower). That already small error of 0.2 microseconds/day will average out to near nothing as the satellite orbits the Earth.
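The "less than 0.2 microseconds/day" figure can be sanity-checked with a rough estimate: take the Sun's potential difference over one GPS orbital radius along the Sun-Earth line (the solar gravitational acceleration at Earth times the separation) and divide by c². This is only an order-of-magnitude sketch using standard approximate constants, not anything GPS software actually computes:

```cpp
// Rough upper bound on the daily clock-rate offset from the Sun's
// potential difference between the ground and a GPS satellite in
// opposition: (GM_sun / R^2) * r / c^2, accumulated over one day.
double solarClockOffsetPerDay()
{
    const double GM_sun = 1.327e20;  // gravitational parameter of the Sun, m^3/s^2
    const double R      = 1.496e11;  // Earth-Sun distance, m
    const double r      = 2.656e7;   // GPS orbital radius, m
    const double c      = 2.998e8;   // speed of light, m/s
    const double secondsPerDay = 86400.0;

    const double fractionalRate = (GM_sun / (R * R)) * r / (c * c);
    return fractionalRate * secondsPerDay;  // seconds of offset per day
}
```

This comes out around 0.15 microseconds/day, consistent with the answer's bound; and as the answer notes, the sign flips every half orbit, so the effect largely averages out.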
{ "domain": "physics.stackexchange", "id": 24171, "tags": "reference-frames, tidal-effect, gps" }
Stock control system
Question: I have programmed this simple Stock Control System using JavaFX for GUI and PostgreSQL for a database. Please go through my code and point out flaws, inefficiencies, and better ways of doing things. I'd also like to be made aware of best practices which I am not following. The last two files are external links because I exceed the character limit (but don't review them). com.HassanAlthaf.StockControlSystem.Main.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem; import javafx.application.Application; import javafx.fxml.FXMLLoader; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.stage.Stage; public class Main extends Application { @Override public void start(Stage stage) throws Exception { Parent root = FXMLLoader.load(getClass().getResource("View/LoginWindow.fxml")); Scene scene = new Scene(root); stage.setScene(scene); stage.setTitle("Authentication Required"); stage.setResizable(false); stage.show(); } public static void main(String[] args) { launch(args); } } com.HassanAlthaf.StockControlSystem.Database.DatabaseAdapter.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. 
* Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.Database; import java.io.File; import java.io.FileReader; import java.sql.Connection; import java.sql.DriverManager; import java.util.Properties; public class DatabaseAdapter { private Connection connection; private String port; private String databaseName; private String username; private String password; private String host; public DatabaseAdapter() { this.fetchAndSetConnectionDetails(); } public Connection getConnection() { try { Class.forName("org.postgresql.Driver"); this.connection = DriverManager.getConnection("jdbc:postgresql://" + this.host + ":" + this.port + "/" + this.databaseName, this.username, this.password); } catch(Exception exception) { exception.printStackTrace(); System.exit(0); } return this.connection; } public void fetchAndSetConnectionDetails() { File databaseSettings = new File("databaseSettings.properties"); try { FileReader reader = new FileReader(databaseSettings); Properties properties = new Properties(); properties.load(reader); this.host = properties.getProperty("host"); this.username = properties.getProperty("username"); this.password = properties.getProperty("password"); this.port = properties.getProperty("port"); this.databaseName = properties.getProperty("databaseName"); } catch(Exception exception) { exception.printStackTrace(); } } } com.HassanAlthaf.StockControlSystem.Stocks.StockController.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. 
* Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.Stocks; import java.sql.SQLException; import java.util.ArrayList; public class StockController { private final StockRepository stockRepository; private ArrayList<String> returnMessages; public StockController() { this.stockRepository = new StockRepository(); } public boolean addStockItem(String productName, String availableQuantity, String unitPrice, String reorderLevel) throws SQLException { ArrayList<String> returnMessages = this.validateStockItemData(productName, availableQuantity, unitPrice, reorderLevel); int numericalAvailableQuantity = 0; int numericalReorderLevel = 0; double numericalUnitPrice = 0; try { numericalAvailableQuantity = Integer.parseInt(availableQuantity); numericalReorderLevel = Integer.parseInt(reorderLevel); } catch (NumberFormatException exception) { returnMessages.add("Available quantity and re-order level may only be whole numbers!"); } try { numericalUnitPrice = Double.parseDouble(unitPrice); } catch (NumberFormatException exception) { returnMessages.add("The unit price may only contain numbers and a decimal point!"); } if(returnMessages.isEmpty()) { StockItem stockItem = new StockItem(productName, numericalAvailableQuantity, numericalUnitPrice, numericalReorderLevel); this.stockRepository.addStockItem(stockItem); return true; } this.returnMessages = returnMessages; return false; } public ArrayList<String> validateStockItemData(String productName, String availableQuantity, String unitPrice, String reorderLevel) { ArrayList<String> returnMessages = new ArrayList<>(); if(productName.length() < 3) { returnMessages.add("The name of your product must be at least 3 characters long!"); } if(availableQuantity.length() == 0) { returnMessages.add("Please enter the available quantity!"); } if(unitPrice.length() == 0) { returnMessages.add("Please enter a unit price!"); } if(reorderLevel.length() == 0) { returnMessages.add("Please enter a re-order level!"); } return 
returnMessages; } public ArrayList<String> getReturnMessages() { ArrayList<String> temporary = this.returnMessages; this.returnMessages = new ArrayList(); return temporary; } public boolean saveChanges(int id, String productName, String availableQuantity, String unitPrice, String reorderLevel) throws SQLException { ArrayList<String> returnMessages = this.validateStockItemData(productName, availableQuantity, unitPrice, reorderLevel); int numericalAvailableQuantity = 0; int numericalReorderLevel = 0; double numericalUnitPrice = 0; try { numericalAvailableQuantity = Integer.parseInt(availableQuantity); numericalReorderLevel = Integer.parseInt(reorderLevel); } catch (NumberFormatException exception) { returnMessages.add("Available quantity and re-order level may only be whole numbers!"); } try { numericalUnitPrice = Double.parseDouble(unitPrice); } catch (NumberFormatException exception) { returnMessages.add("The unit price may only contain numbers and a decimal point!"); } if(returnMessages.isEmpty()) { StockItem stockItem = new StockItem(id, productName, numericalAvailableQuantity, numericalUnitPrice, numericalReorderLevel); this.stockRepository.updateStockItem(stockItem); return true; } this.returnMessages = returnMessages; return false; } public ArrayList<StockItem> fetchAllStockItems() throws SQLException { return this.stockRepository.fetchAllStockItems(); } public void removeStockItem(int id) throws SQLException { this.stockRepository.removeStockItem(id); } } com.HassanAlthaf.StockControlSystem.Stocks.StockItem.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. 
* Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.Stocks; import javafx.beans.property.DoubleProperty; import javafx.beans.property.IntegerProperty; import javafx.beans.property.SimpleDoubleProperty; import javafx.beans.property.SimpleIntegerProperty; import javafx.beans.property.SimpleStringProperty; import javafx.beans.property.StringProperty; public class StockItem { private IntegerProperty id; private StringProperty productName; private IntegerProperty availableQuantity; private DoubleProperty unitPrice; private IntegerProperty reorderLevel; private DoubleProperty totalValue; public StockItem(String productName, int availableQuantity, double unitPrice, int reorderLevel) { this.id = new SimpleIntegerProperty(0); this.productName = new SimpleStringProperty(productName); this.availableQuantity = new SimpleIntegerProperty(availableQuantity); this.unitPrice = new SimpleDoubleProperty(unitPrice); this.reorderLevel = new SimpleIntegerProperty(reorderLevel); double totalValue = unitPrice * availableQuantity; this.totalValue = new SimpleDoubleProperty(totalValue); } public StockItem(int id, String productName, int availableQuantity, double unitPrice, int reorderLevel) { this.id = new SimpleIntegerProperty(id); this.productName = new SimpleStringProperty(productName); this.availableQuantity = new SimpleIntegerProperty(availableQuantity); this.unitPrice = new SimpleDoubleProperty(unitPrice); this.reorderLevel = new SimpleIntegerProperty(reorderLevel); double totalValue = unitPrice * availableQuantity; this.totalValue = new SimpleDoubleProperty(totalValue); } public void setID(int id) { this.id = new SimpleIntegerProperty(id); } public int getID() { return this.id.get(); } public void setProductName(String productName) { this.productName = new SimpleStringProperty(productName); } public String getProductName() { return this.productName.get(); } public void setAvailableQuantity(int availableQuantity) { this.availableQuantity = new 
SimpleIntegerProperty(availableQuantity); } public int getAvailableQuantity() { return this.availableQuantity.get(); } public void setUnitPrice(double unitPrice) { this.unitPrice = new SimpleDoubleProperty(unitPrice); } public double getUnitPrice() { return this.unitPrice.get(); } public void setReorderLevel(int reorderLevel) { this.reorderLevel = new SimpleIntegerProperty(reorderLevel); } public int getReorderLevel() { return this.reorderLevel.get(); } public String getTotalValue() { return String.format("%.2f", this.totalValue.get()); } } com.HassanAlthaf.StockControlSystem.Stocks.StockRepository.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.Stocks; import com.HassanAlthaf.StockControlSystem.Database.DatabaseAdapter; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.util.ArrayList; public class StockRepository { private final DatabaseAdapter databaseAdapter; public StockRepository() { this.databaseAdapter = new DatabaseAdapter(); } public void addStockItem(StockItem stockItem) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("INSERT INTO \"public\".\"stockItems\" ( \"productName\", \"availableQuantity\", \"unitPrice\", \"reorderLevel\") VALUES(?, ?, ?, ?)"); statement.setString(1, stockItem.getProductName()); statement.setInt(2, stockItem.getAvailableQuantity()); statement.setDouble(3, stockItem.getUnitPrice()); statement.setInt(4, stockItem.getReorderLevel()); statement.execute(); statement.close(); connection.close(); } public void removeStockItem(int id) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("DELETE FROM \"public\".\"stockItems\" WHERE \"id\" = ?"); statement.setInt(1, 
id); statement.execute(); statement.close(); connection.close(); } public void fetchStockItem(int id) { } public ArrayList<StockItem> fetchAllStockItems() throws SQLException { ArrayList<StockItem> stockItems = new ArrayList<>(); Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("SELECT * FROM \"public\".\"stockItems\""); ResultSet resultSet = statement.executeQuery(); while(resultSet.next()) { StockItem stockItem = new StockItem( resultSet.getInt("id"), resultSet.getString("productName"), resultSet.getInt("availableQuantity"), resultSet.getDouble("unitPrice"), resultSet.getInt("reorderLevel") ); stockItems.add(stockItem); } resultSet.close(); statement.close(); connection.close(); return stockItems; } public boolean doesStockItemExist(int id) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("SELECT * FROM \"public\".\"stockItems\" WHERE \"id\" = ?"); statement.setInt(1, id); ResultSet resultSet = statement.executeQuery(); int count = 0; while(resultSet.next()) { count++; } resultSet.close(); statement.close(); connection.close(); if(count == 0) { return false; } else { return true; } } public void updateStockItem(StockItem stockItem) throws SQLException { if(this.doesStockItemExist(stockItem.getID())) { System.out.println(stockItem.getProductName() + " exists!"); Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareCall("UPDATE \"public\".\"stockItems\" SET \"productName\" = ?, \"availableQuantity\" = ?, \"unitPrice\" = ?, \"reorderLevel\" = ? 
WHERE \"id\" = ?"); statement.setString(1, stockItem.getProductName()); statement.setInt(2, stockItem.getAvailableQuantity()); statement.setDouble(3, stockItem.getUnitPrice()); statement.setInt(4, stockItem.getReorderLevel()); statement.setInt(5, stockItem.getID()); statement.execute(); statement.close(); connection.close(); } } } com.HassanAlthaf.StockControlSystem.Users.User.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.Users; import javafx.beans.property.IntegerProperty; import javafx.beans.property.SimpleIntegerProperty; import javafx.beans.property.StringProperty; import javafx.beans.property.SimpleStringProperty; public class User { private IntegerProperty id; private StringProperty username; private StringProperty password; private IntegerProperty rank; public User() { this.id = new SimpleIntegerProperty(0); this.username = new SimpleStringProperty(""); this.password = new SimpleStringProperty(""); this.rank = new SimpleIntegerProperty(1); } public User(int id, String username, String password, int rank) { this.id = new SimpleIntegerProperty(id); this.username = new SimpleStringProperty(username); this.password = new SimpleStringProperty(password); this.rank = new SimpleIntegerProperty(rank); } public User(String username, String password, int rank) { this.username = new SimpleStringProperty(username); this.password = new SimpleStringProperty(password); this.rank = new SimpleIntegerProperty(rank); } public int getID() { return this.id.get(); } public void setID(int id) { this.id = new SimpleIntegerProperty(id); } public String getUsername() { return this.username.get(); } public void setUsername(String username) { this.username = new SimpleStringProperty(username); } public String getPassword() { return this.password.get(); } public void setPassword(String password) { this.password = new SimpleStringProperty(password); } public int getRank() { 
return this.rank.get(); } public void setRank(int rank) { this.rank = new SimpleIntegerProperty(rank); } private StringProperty SimpleStringProperty(String string) { throw new UnsupportedOperationException("Not supported yet."); } private IntegerProperty SimpleIntegerProperty(int i) { throw new UnsupportedOperationException("Not supported yet."); } } com.HassanAlthaf.StockControlSystem.Users.UserController.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.Users; import java.sql.SQLException; import java.util.ArrayList; import java.util.regex.Matcher; import java.util.regex.Pattern; import org.mindrot.jbcrypt.BCrypt; public class UserController { private boolean isUserLoggedIn = false; private User loggedInUser; private UserRepository userRepository; public final static int USER_INVALID_CREDENTIALS = 0; public final static int USER_VALID_CREDENTIALS = 1; public final static int USER_ACCESS_DENIED = 2; public final static int RANK_DISABLED_USER = 0; public final static int RANK_REGULAR_USER = 1; public final static int RANK_ADMINISTRATOR = 2; private ArrayList<String> validationErrors; public UserController() { this.userRepository = new UserRepository(); } public int validateCredentials(String username, String password) throws SQLException { if(this.userRepository.isUsernameTaken(username)) { User user = this.userRepository.fetchUserByUsername(username); if(BCrypt.checkpw(password, user.getPassword())) { if(user.getRank() == UserController.RANK_DISABLED_USER) { return UserController.USER_ACCESS_DENIED; } return UserController.USER_VALID_CREDENTIALS; } } return UserController.USER_INVALID_CREDENTIALS; } public int login(String username, String password) throws SQLException { int result = this.validateCredentials(username, password); if(result == UserController.USER_VALID_CREDENTIALS) { this.isUserLoggedIn = true; this.loggedInUser = 
this.userRepository.fetchUserByUsername(username); } return result; } public void logout() { this.isUserLoggedIn = false; this.loggedInUser = new User(); } public User getLoggedInUser() { return this.loggedInUser; } public void updateLoggedInUser() throws SQLException { int id = this.loggedInUser.getID(); this.loggedInUser = this.userRepository.fetchUser(id); } public boolean userIsDisabled() throws SQLException { this.updateLoggedInUser(); if(this.loggedInUser.getRank() == UserController.RANK_DISABLED_USER) { return true; } return false; } public boolean isAdmin() { if(this.loggedInUser.getRank() == UserController.RANK_ADMINISTRATOR) { return true; } return false; } public boolean createUser(String username, String password, int rank) throws SQLException { this.validationErrors = new ArrayList<>(); this.validationErrors.addAll(this.validateUsername(username)); this.validationErrors.addAll(this.validatePassword(password)); this.validationErrors.addAll(this.validateRank(rank)); if(this.validationErrors.isEmpty()) { password = BCrypt.hashpw(password, BCrypt.gensalt()); User user = new User(username, password, rank); this.userRepository.createUser(user); return true; } return false; } public ArrayList<String> returnAllValidationErrors() { ArrayList<String> returnData = new ArrayList<>(); returnData.addAll(this.validationErrors); this.validationErrors.clear(); return returnData; } public ArrayList<String> validateUsername(String username) throws SQLException { ArrayList<String> validationErrors = new ArrayList<>(); if(username.equals("")) { validationErrors.add("The username field is required to be filled!"); } if(username.length() < 3 || username.length() > 20) { validationErrors.add("The username must be at least 3 characters in length and cannot go over 20."); } Pattern pattern = Pattern.compile("^[a-zA-Z0-9]*$"); Matcher matcher = pattern.matcher(username); if(!matcher.find()) { validationErrors.add("The username may only contain lowercase and uppercase alphabets 
and numbers."); } if(this.userRepository.isUsernameTaken(username)) { validationErrors.add("This username is already taken! Please try another one!"); } return validationErrors; } public ArrayList<String> validatePassword(String password) { ArrayList<String> validationErrors = new ArrayList<>(); if(password.equals("")) { validationErrors.add("The password field is required to be filled!"); } if(password.length() < 5 || password.length() > 72) { validationErrors.add("The password must be at least 5 characters in length and cannot go over 72."); } return validationErrors; } public ArrayList<String> validateRank(int rank) { ArrayList<String> validationErrors = new ArrayList<>(); if(rank != UserController.RANK_ADMINISTRATOR && rank != UserController.RANK_DISABLED_USER && rank != UserController.RANK_REGULAR_USER) { validationErrors.add("Invalid user rank specified."); } return validationErrors; } public ArrayList<User> fetchAllUsers() throws SQLException { return this.userRepository.fetchAllUsers(); } public boolean editUser(String username, String password, int rank, User user) throws SQLException { this.validationErrors = new ArrayList<>(); if(!username.equals(user.getUsername())) { this.validationErrors.addAll(this.validateUsername(username)); user.setUsername(username); } if(!password.equals("")) { this.validationErrors.addAll(this.validatePassword(password)); user.setPassword(BCrypt.hashpw(password, BCrypt.gensalt())); } if(rank != user.getRank()) { this.validationErrors.addAll(this.validateRank(rank)); user.setRank(rank); } if(this.validationErrors.isEmpty()) { this.userRepository.updateUser(user); return true; } return false; } public void deleteUser(int id) throws SQLException { if(this.userRepository.doesUserExist(id) && this.loggedInUser.getID() != id) { this.userRepository.deleteUser(id); } } } com.HassanAlthaf.StockControlSystem.Users.UserRepository.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. 
* Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.Users; import com.HassanAlthaf.StockControlSystem.Database.DatabaseAdapter; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.util.ArrayList; public class UserRepository { private DatabaseAdapter databaseAdapter; public UserRepository() { this.databaseAdapter = new DatabaseAdapter(); } public void createUser(User user) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("INSERT INTO \"public\".\"users\" (\"username\", \"password\", \"rank\") VALUES(?, ?, ?)"); statement.setString(1, user.getUsername()); statement.setString(2, user.getPassword()); statement.setInt(3, user.getRank()); statement.execute(); statement.close(); connection.close(); } public void updateUser(User user) throws SQLException { if(this.doesUserExist(user.getID())) { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("UPDATE \"public\".\"users\" SET \"username\" = ?, \"password\" = ?, \"rank\" = ? 
WHERE \"id\" = ?"); statement.setString(1, user.getUsername()); statement.setString(2, user.getPassword()); statement.setInt(3, user.getRank()); statement.setInt(4, user.getID()); statement.execute(); statement.close(); connection.close(); } } public boolean doesUserExist(int id) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("SELECT COUNT(*) FROM \"public\".\"users\" WHERE \"id\" = ?"); statement.setInt(1, id); ResultSet resultSet = statement.executeQuery(); resultSet.next(); int count = resultSet.getInt(1); resultSet.close(); statement.close(); connection.close(); if(count == 0) { return false; } return true; } public boolean isUsernameTaken(String username) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("SELECT COUNT(*) FROM \"public\".\"users\" WHERE \"username\" = ?"); statement.setString(1, username); ResultSet resultSet = statement.executeQuery(); resultSet.next(); int count = resultSet.getInt(1); if(count == 0) { return false; } return true; } public User fetchUser(int id) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("SELECT * FROM \"public\".\"users\" WHERE \"id\" = ?"); statement.setInt(1, id); ResultSet resultSet = statement.executeQuery(); resultSet.next(); User user = new User(); user.setID(id); user.setUsername(resultSet.getString("username")); user.setPassword(resultSet.getString("password")); user.setRank(resultSet.getInt("rank")); return user; } public User fetchUserByUsername(String username) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("SELECT * FROM \"public\".\"users\" WHERE \"username\" = ?"); statement.setString(1, username); ResultSet resultSet = 
statement.executeQuery(); resultSet.next(); User user = new User(); user.setID(resultSet.getInt("id")); user.setUsername(username); user.setPassword(resultSet.getString("password")); user.setRank(resultSet.getInt("rank")); return user; } public ArrayList<User> fetchAllUsers() throws SQLException { ArrayList<User> users = new ArrayList<>(); Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("SELECT * FROM \"public\".\"users\""); ResultSet resultSet = statement.executeQuery(); while(resultSet.next()) { User user = new User(); user.setID(resultSet.getInt("id")); user.setUsername(resultSet.getString("username")); user.setPassword(resultSet.getString("password")); user.setRank(resultSet.getInt("rank")); users.add(user); } return users; } public void deleteUser(int id) throws SQLException { Connection connection = this.databaseAdapter.getConnection(); PreparedStatement statement = connection.prepareStatement("DELETE FROM \"public\".\"users\" WHERE \"id\" = ?"); statement.setInt(1, id); statement.execute(); statement.close(); connection.close(); } } com.HassanAlthaf.StockControlSystem.View.AddUserDialog.fxml <?xml version="1.0" encoding="UTF-8"?> <?import javafx.scene.text.*?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <AnchorPane id="AnchorPane" fx:id="addUserDialog" maxHeight="400.0" maxWidth="350.0" minHeight="400.0" minWidth="350.0" prefHeight="400.0" prefWidth="350.0" xmlns:fx="http://javafx.com/fxml/1" xmlns="http://javafx.com/javafx/8" fx:controller="com.HassanAlthaf.StockControlSystem.View.AddUserDialogView"> <children> <Label layoutX="146.0" layoutY="32.0" text="Add User" /> <TextField fx:id="usernameField" layoutX="169.0" layoutY="73.0" /> <ChoiceBox fx:id="rankChoiceList" layoutX="169.0" layoutY="148.0" prefHeight="27.0" prefWidth="167.0" /> <Label layoutX="14.0" layoutY="78.0" text="Username: " /> 
<Label layoutX="14.0" layoutY="115.0" text="Password:" /> <Label layoutX="14.0" layoutY="153.0" text="Rank:" /> <Button layoutX="293.0" layoutY="359.0" mnemonicParsing="false" onAction="#addUser" text="Add" /> <Button layoutX="14.0" layoutY="359.0" mnemonicParsing="false" onAction="#closeWindow" text="Close" /> <Text fx:id="addUserValidationErrors" layoutX="14.0" layoutY="205.0" strokeType="OUTSIDE" strokeWidth="0.0" wrappingWidth="322.0" /> <TextField fx:id="passwordField" layoutX="169.0" layoutY="110.0" /> </children> </AnchorPane> com.HassanAlthaf.StockControlSystem.View.AddUserDialogView.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.View; import com.HassanAlthaf.StockControlSystem.Users.UserController; import java.io.IOException; import java.net.URL; import java.sql.SQLException; import java.util.ArrayList; import java.util.ResourceBundle; import javafx.collections.FXCollections; import javafx.event.ActionEvent; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.scene.control.ChoiceBox; import javafx.scene.control.TextField; import javafx.scene.text.Text; import javafx.stage.Stage; public class AddUserDialogView implements Initializable { private MainView mainView; private UserController userController; private Stage stage; @FXML private Parent addUserDialog; @FXML private Text addUserValidationErrors; @FXML private TextField usernameField; @FXML private TextField passwordField; @FXML private ChoiceBox rankChoiceList; public void setMainView(MainView mainView) { this.mainView = mainView; } public void setUserController(UserController userController) { this.userController = userController; } public void show() { Scene scene = new Scene(this.addUserDialog); this.stage = new Stage(); this.stage.setScene(scene); this.stage.setTitle("Add User"); 
this.stage.setResizable(false); this.populateChoiceBox(); this.stage.show(); } public void populateChoiceBox() { this.rankChoiceList.setItems(FXCollections.observableArrayList( "Disabled Account", "Regular User", "Administrator" ) ); } public void addUser(ActionEvent event) throws SQLException, IOException { boolean result = this.userController.createUser( this.usernameField.getText(), this.passwordField.getText(), this.rankChoiceList.getSelectionModel().getSelectedIndex() ); if(result) { this.addUserValidationErrors.setText("Successfully created user!"); this.mainView.populateUsersList(); } else { ArrayList<String> errors = this.userController.returnAllValidationErrors(); String text = ""; for(String line : errors) { text = text + line + "\n"; } text = text.substring(0, (text.length() - 1)); this.addUserValidationErrors.setText(text); } } public void closeWindow(ActionEvent event) { this.stage.close(); } @Override public void initialize(URL url, ResourceBundle rb) { } } com.HassanAlthaf.StockControlSystem.View.EditStockItemDialog.fxml <?xml version="1.0" encoding="UTF-8"?> <?import javafx.scene.text.*?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <AnchorPane id="AnchorPane" fx:id="editStockItemDialog" maxHeight="400.0" maxWidth="350.0" minHeight="400.0" minWidth="350.0" prefHeight="400.0" prefWidth="350.0" xmlns="http://javafx.com/javafx/8" xmlns:fx="http://javafx.com/fxml/1" fx:controller="com.HassanAlthaf.StockControlSystem.View.EditStockItemDialogView"> <children> <Label layoutX="94.0" layoutY="28.0" text="Editting Stock Item"> <font> <Font name="Lato-Light" size="18.0" /> </font> </Label> <Label layoutX="25.0" layoutY="73.0" text="Product Name:" /> <Label layoutX="25.0" layoutY="110.0" text="Available Quantity:" /> <TextField fx:id="productNameField" layoutX="158.0" layoutY="68.0" /> <TextField fx:id="availableQuantityField" layoutX="158.0" layoutY="105.0" /> 
<TextField fx:id="unitPriceField" layoutX="158.0" layoutY="142.0" /> <TextField fx:id="reorderLevelField" layoutX="158.0" layoutY="179.0" /> <Text layoutX="25.0" layoutY="160.0" strokeType="OUTSIDE" strokeWidth="0.0" text="Unit Price:" /> <Text layoutX="25.0" layoutY="197.0" strokeType="OUTSIDE" strokeWidth="0.0" text="Reorder Level:" /> <Button layoutX="277.0" layoutY="350.0" mnemonicParsing="false" onAction="#saveEdit" text="Save" /> <Button layoutX="25.0" layoutY="350.0" mnemonicParsing="false" onAction="#closeEditting" text="Close" /> <Text fx:id="errorsArea" layoutX="25.0" layoutY="234.0" lineSpacing="3.0" strokeType="OUTSIDE" strokeWidth="0.0" wrappingWidth="300.0" /> </children> </AnchorPane> com.HassanAlthaf.StockControlSystem.View.EditStockItemDialogView.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.View; import com.HassanAlthaf.StockControlSystem.Stocks.StockController; import com.HassanAlthaf.StockControlSystem.Stocks.StockItem; import java.io.IOException; import java.net.URL; import java.sql.SQLException; import java.util.ArrayList; import java.util.ResourceBundle; import javafx.event.ActionEvent; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.scene.control.TextField; import javafx.scene.text.Text; import javafx.stage.Stage; public class EditStockItemDialogView implements Initializable { private StockItem stockItem; private StockController stockController; @FXML private Parent editStockItemDialog; @FXML private Text errorsArea; @FXML private TextField productNameField; @FXML private TextField availableQuantityField; @FXML private TextField unitPriceField; @FXML private TextField reorderLevelField; private Stage stage; private MainView mainView; public void setStockItemObject(StockItem stockItem) { this.stockItem = stockItem; } public void 
setStockController(StockController stockController) { this.stockController = stockController; } public void setMainView(MainView mainView) { this.mainView = mainView; } public void show() { Scene scene = new Scene(this.editStockItemDialog); this.stage = new Stage(); this.stage.setScene(scene); this.stage.setTitle("Edit Stock Item"); this.stage.setResizable(false); this.setData(); this.stage.show(); } public void setData() { this.productNameField.setText(this.stockItem.getProductName()); this.availableQuantityField.setText(String.valueOf(this.stockItem.getAvailableQuantity())); this.unitPriceField.setText(String.valueOf(this.stockItem.getUnitPrice())); this.reorderLevelField.setText(String.valueOf(this.stockItem.getReorderLevel())); } public void closeEditting(ActionEvent event) { this.stage.close(); } public void saveEdit(ActionEvent event) throws SQLException, IOException { boolean response = this.stockController.saveChanges( this.stockItem.getID(), this.productNameField.getText(), this.availableQuantityField.getText(), this.unitPriceField.getText(), this.reorderLevelField.getText() ); if(response == false) { ArrayList<String> errors = this.stockController.getReturnMessages(); String text = ""; for(String line : errors) { text = text + line + "\n"; } text = text.substring(0, text.length() - 1); this.errorsArea.setText(text); } else { this.errorsArea.setText("Successfully editted!"); this.mainView.initializeStats(); this.mainView.populateStocksList(); } } @Override public void initialize(URL url, ResourceBundle rb) { // TODO } } com.HassanAlthaf.StockControlSystem.View.EditUserDialog.fxml <?xml version="1.0" encoding="UTF-8"?> <?import javafx.scene.text.*?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <AnchorPane id="AnchorPane" fx:id="editUserDialog" maxHeight="400.0" maxWidth="350.0" minHeight="400.0" minWidth="350.0" prefHeight="400.0" prefWidth="350.0" 
xmlns:fx="http://javafx.com/fxml/1" xmlns="http://javafx.com/javafx/8" fx:controller="com.HassanAlthaf.StockControlSystem.View.EditUserDialogView"> <children> <Label layoutX="147.0" layoutY="32.0" text="Edit User" /> <TextField fx:id="usernameField" layoutX="169.0" layoutY="82.0" /> <TextField fx:id="passwordField" layoutX="169.0" layoutY="123.0" /> <ChoiceBox fx:id="rankListChoiceBox" layoutX="169.0" layoutY="160.0" prefHeight="27.0" prefWidth="167.0" /> <Label layoutX="14.0" layoutY="87.0" text="Username:" /> <Label layoutX="14.0" layoutY="128.0" text="Password:" /> <Label layoutX="14.0" layoutY="165.0" text="Rank:" /> <Button layoutX="294.0" layoutY="359.0" mnemonicParsing="false" onAction="#editUser" text="Edit" /> <Button layoutX="14.0" layoutY="359.0" mnemonicParsing="false" onAction="#closeWindow" text="Close" /> <Text fx:id="messagesArea" layoutX="15.0" layoutY="213.0" strokeType="OUTSIDE" strokeWidth="0.0" wrappingWidth="322.0" /> </children> </AnchorPane> com.HassanAlthaf.StockControlSystem.View.EditUserDialogView.java /* * Program developed by Hassan Althaf. * Copyright © 2015, Hassan Althaf. 
* Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.View; import com.HassanAlthaf.StockControlSystem.Users.User; import com.HassanAlthaf.StockControlSystem.Users.UserController; import java.io.IOException; import java.net.URL; import java.sql.SQLException; import java.util.ArrayList; import java.util.ResourceBundle; import javafx.collections.FXCollections; import javafx.event.ActionEvent; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.scene.control.ChoiceBox; import javafx.scene.control.TextField; import javafx.scene.text.Text; import javafx.stage.Stage; public class EditUserDialogView implements Initializable { @FXML private Parent editUserDialog; private MainView mainView; private UserController userController; private Stage stage; private User user; @FXML private Text messagesArea; @FXML private TextField usernameField; @FXML private TextField passwordField; @FXML private ChoiceBox rankListChoiceBox; public void setMainView(MainView mainView) { this.mainView = mainView; } public void setUserController(UserController userController) { this.userController = userController; } public void setUserEntity(User user) { this.user = user; } public void show() { Scene scene = new Scene(this.editUserDialog); this.stage = new Stage(); this.stage.setScene(scene); this.stage.setTitle("Edit User"); this.stage.setResizable(false); this.populateRankChoiceList(); this.populateFields(); this.stage.show(); } public void populateRankChoiceList() { this.rankListChoiceBox.setItems(FXCollections.observableArrayList( "Disabled Account", "Regular User", "Administrator" ) ); } public void populateFields() { this.usernameField.setText(this.user.getUsername()); this.rankListChoiceBox.getSelectionModel().select(this.user.getRank()); } public void editUser(ActionEvent event) throws SQLException, IOException { boolean result = this.userController.editUser( this.usernameField.getText(), 
this.passwordField.getText(), this.rankListChoiceBox.getSelectionModel().getSelectedIndex(), this.user ); if(result) { this.messagesArea.setText("Successfully updated user!"); mainView.populateUsersList(); } else { ArrayList<String> errors = this.userController.returnAllValidationErrors(); String text = ""; for(String line : errors) { text = text + line + "\n"; } text = text.substring(0, (text.length() - 1)); this.messagesArea.setText(text); } } public void closeWindow(ActionEvent event) { this.stage.close(); } @Override public void initialize(URL url, ResourceBundle rb) { } } com.HassanAlthaf.StockControlSystem.View.LoginView.fxml <?xml version="1.0" encoding="UTF-8"?> <?import javafx.scene.text.*?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <AnchorPane id="AnchorPane" fx:id="loginWindow" maxHeight="400.0" maxWidth="300.0" minHeight="400.0" minWidth="300.0" prefHeight="400.0" prefWidth="300.0" xmlns="http://javafx.com/javafx/8" xmlns:fx="http://javafx.com/fxml/1" fx:controller="com.HassanAlthaf.StockControlSystem.View.LoginWindowView"> <children> <Button layoutX="124.0" layoutY="290.0" mnemonicParsing="false" onAction="#login" text="Login" /> <TextField fx:id="usernameField" layoutX="67.0" layoutY="165.0" promptText="Username" /> <Label layoutX="56.0" layoutY="40.0" text="Stock Control System" textFill="#404040"> <font> <Font name="Lato Regular" size="20.0" /> </font> </Label> <Label layoutX="63.0" layoutY="350.0" text="Developed by Hassan Althaf" /> <PasswordField fx:id="passwordField" layoutX="67.0" layoutY="208.0" promptText="Password" /> <Text fx:id="messageField" fill="#b20000" layoutX="26.0" layoutY="101.0" strokeType="OUTSIDE" strokeWidth="0.0" textAlignment="CENTER" wrappingWidth="250.0"> <font> <Font name="Lato-Light" size="16.0" /> </font> </Text> </children> </AnchorPane> com.HassanAlthaf.StockControlSystem.View.LoginWindowView.java /* * Program developed by 
Hassan Althaf. * Copyright © 2015, Hassan Althaf. * Website: http://hassanalthaf.com */ package com.HassanAlthaf.StockControlSystem.View; import com.HassanAlthaf.StockControlSystem.Users.UserController; import java.net.URL; import java.util.ResourceBundle; import javafx.event.ActionEvent; import javafx.fxml.FXML; import javafx.fxml.FXMLLoader; import javafx.fxml.Initializable; import javafx.scene.Parent; import javafx.scene.Scene; import javafx.scene.control.PasswordField; import javafx.scene.control.TextField; import javafx.scene.text.Text; import javafx.stage.Stage; public class LoginWindowView implements Initializable { @FXML private Parent loginWindow; @FXML private TextField usernameField; @FXML private PasswordField passwordField; @FXML private Text messageField; @FXML public void login(ActionEvent event) throws Exception { String username = this.usernameField.getText(); String password = this.passwordField.getText(); UserController userController = new UserController(); int result = userController.login(username, password); if(result == UserController.USER_VALID_CREDENTIALS) { FXMLLoader loader = new FXMLLoader(getClass().getResource("MainWindow.fxml")); Parent mainWindow = loader.load(); MainView mainView = loader.getController(); mainView.setUserController(userController); mainView.show(this.loginWindow); } else if (result == UserController.USER_ACCESS_DENIED) { this.messageField.setText("Your account has been disabled. 
Please contact an Administrator for more information."); } else if (result == UserController.USER_INVALID_CREDENTIALS) { this.messageField.setText("Sorry, but the details you provided are invalid."); } } public void show(Parent mainWindow) { Scene scene = new Scene(this.loginWindow); Stage stage = new Stage(); stage.setScene(scene); stage.setTitle("Authentication Required"); stage.setResizable(false); stage.show(); Stage mainWindowStage = (Stage) mainWindow.getScene().getWindow(); mainWindowStage.close(); } @Override public void initialize(URL url, ResourceBundle rb) { } } com.HassanAlthaf.StockControlSystem.View.MainView.java com.HassanAlthaf.StockControlSystem.View.MainWindow.fxml Answer: Start using New IO The java.io packages have been superseded for a reason. They don't integrate well into what the language tries to become and are at best unwieldy. Instead you should rely on the new IO or nio packages, which work on Paths instead of Strings and have some other advantages. Loading your properties can be simplified to: try { Properties properties = new Properties(); properties.load(Files.newInputStream(Paths.get(PROPERTIES_PATH))); // ... (Note: Files.newInputStream only supports read-type open options; passing StandardOpenOption.WRITE to it would throw an UnsupportedOperationException.) Program against interfaces It's not really good style to have code like: private ArrayList<String> returnMessages; instead you should declare fields as their interface where possible: private List<String> returnMessages; Don't expose methods unnecessarily StockController's validateStockItemData is not used in any class besides the StockController itself. Making it public is a big nono. You're violating the principle of information hiding. I was too lazy to copy over your project into an IDE, but I'm sure there are a lot of similar methods that don't need to be exposed. This btw. also relates to the fact that you return an ArrayList (program against interfaces!) from a should-be-private method (information hiding!)
to write it into a private field of a class instead of directly operating on the field. Don't overcomplicate it for yourself! Know when to use Wildcard-Imports import javafx.beans.property.DoubleProperty; import javafx.beans.property.IntegerProperty; import javafx.beans.property.SimpleDoubleProperty; import javafx.beans.property.SimpleIntegerProperty; import javafx.beans.property.SimpleStringProperty; import javafx.beans.property.StringProperty; That's a lot of text for here and also in your class. Any IDE worth its salt will hide this, but why make it hard? I personally think that 5 to 7 imports is the maximum you should call in from one package if you can avoid it. Then you can still grab a wildcard-import: import javafx.beans.property.*; Stay away from integer flags Integer flags are a remnant of Java from versions before Java 5. STOP USING THEM. "State of the art" (read: reasonable) is use of an enum to provide typesafety: public final static int USER_INVALID_CREDENTIALS = 0; public final static int USER_VALID_CREDENTIALS = 1; public final static int USER_ACCESS_DENIED = 2; public final static int RANK_DISABLED_USER = 0; public final static int RANK_REGULAR_USER = 1; public final static int RANK_ADMINISTRATOR = 2; This irresponsible fallback to the past is replaced by: public enum LoginResult { ACCESS_DENIED, INVALID_CREDENTIALS, SUCCESS } public enum Rank { DISABLED, REGULAR, ADMINISTRATOR } Did I mention you can use Enums for switch-statements much more easily than integer results? switch (result) { case SUCCESS: // show main window break; case ACCESS_DENIED: // account disabled break; case INVALID_CREDENTIALS: // error message break; } Other notes Precompile patterns into a constant and don't recompile them each run. Patterns are inherently reusable. Don't make your validation routine jump through hoops. Consider using a database access framework (JOOQ, Hibernate, ...) instead of writing the access layer yourself.
Manual error checking by asking the Controller for messages is so C / C++ style. Use exceptions for error messages where applicable. doesUserExist is probably better off as simply exists or maybe even userExists
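To illustrate the note about precompiling patterns: the sketch below is a hypothetical validator (the class name, regex, and length rules are assumptions for the sake of the example, not taken from the project) showing a Pattern compiled once into a constant and reused on every call:

```java
import java.util.regex.Pattern;

class UserValidator {
    // Compiled once when the class is loaded, then reused for every check.
    private static final Pattern USERNAME_PATTERN =
            Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    static boolean isValidUsername(String username) {
        // matcher(...) is cheap; the expensive compile step never repeats.
        return USERNAME_PATTERN.matcher(username).matches();
    }
}
```

Compare that with calling Pattern.compile(...) inside the validation method itself, which re-parses the same regex on every invocation for no benefit.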
{ "domain": "codereview.stackexchange", "id": 16475, "tags": "java, xml, finance, javafx" }
Is the language of all TMs accepting all strings starting with 010 decidable?
Question: I am trying to figure out if this language is decidable: $$ \{ \langle M \rangle \mid \text{$M$ accepts all strings starting with 010}\}. $$ My intuition is that it is. Whatever string $w$ starts with 010 it accepts, and if it doesn't it rejects. Answer: According to Rice's theorem, $L = \{ \langle M \rangle \mid L(M) \in P \}$ is undecidable if $P$ is a non-trivial semantic property of $L(M)$, where $P$ is a set of languages that satisfy a particular property. If the following two properties hold, the language is undecidable: Property 1 (Semantic): If $M_1$ and $M_2$ recognize the same language, then either $\langle M_1 \rangle ,\langle M_2\rangle \in L$ or $\langle M_1 \rangle ,\langle M_2\rangle \notin L$. Property 2 (Non-trivial): There exist $M_1$ and $M_2$ such that $\langle M_1 \rangle \in L$ and $\langle M_2 \rangle \notin L$. Now, 1) For any two TMs $M_1$ and $M_2$ with $L(M_1) = L(M_2)$, either both $M_1$ and $M_2$ accept all strings starting with 010 or neither does. 2) There exist TMs that accept all strings starting with 010, and TMs that do not. Therefore, the property "the language of the TM contains all strings starting with 010" is semantic and non-trivial. Hence, according to Rice's theorem, the given language is undecidable. Note that $L$ is not even recognizable: to accept $\langle M \rangle$, a recognizer would have to confirm that $M$ accepts every one of the infinitely many strings starting with 010, which cannot be done in finite time. The complement of $L$ is not recognizable either, since detecting that $M$ fails to accept some particular string would require detecting non-termination. (Recall that both $L$ and its complement being recognizable would mean that $L$ is decidable. Why?)
{ "domain": "cs.stackexchange", "id": 12549, "tags": "turing-machines, computability, undecidability" }
Srednicki's QFT: Why $\langle p|\phi(0)|0\rangle$ in the interacting theory is Lorentz invariant?
Question: I am reading Srednicki's QFT and I have run into a problem. In section 5, around (5.18), after deducing the LSZ formula, in order to check whether his supposition "that the creation operators of free field theory would work comparably in the interacting theory" is reasonable, the author considered the number $\langle p|\phi(x)|0\rangle$, where $|p\rangle$ is the one-particle state of momentum $p$ and $\phi$ is a field operator in the interacting theory. Using the unitary representation of translations we get $$\langle p|\phi(x)|0\rangle=\langle p|e^{-iPx}\phi(0)e^{iPx}|0\rangle=\langle p|\phi(0)|0\rangle e^{-ipx},$$ then the author states that since $\langle p|\phi(0)|0\rangle$ is a Lorentz invariant number, and the only Lorentz invariant number that we can construct from $p$ is $p^2=-m^2$, we can divide $\phi$ by a constant to get the same normalization for the one-particle momentum state as in the free field theory. My question is, how can we know that $\langle p|\phi(0)|0\rangle$ is a Lorentz invariant number? I guess (not sure) maybe that's because under a Lorentz transformation $\Lambda$, $\phi(x)$ becomes $U(\Lambda)^{-1}\phi(x)U(\Lambda)=\phi(\Lambda^{-1}x)$ and $\Lambda^{-1}0=0$, so $\phi(0)$ does not change, thus $\langle p|\phi(0)|0\rangle$ does not change. However, I ran into a contradiction. Since both $\langle p|\phi(0)|0\rangle$ and $e^{-ipx}$ are Lorentz invariant, $\langle p|\phi(x)|0\rangle$ should be Lorentz invariant, too. But under the transformation, $\langle p|\phi(x)|0\rangle$ becomes $\langle p|\phi(\Lambda^{-1}x)|0\rangle \neq \langle p|\phi(x)|0\rangle$, and I don't understand where I made a mistake. How do we represent a Lorentz transformation in the Heisenberg picture? Should the state vector $|p\rangle$ change under the transformation?
Consider the function $$ f(p) = \langle p| \phi(0) | 0 \rangle = \langle p| U(\Lambda)^{-1} U(\Lambda) \phi(0) U(\Lambda)^{-1} U(\Lambda) | 0 \rangle $$ where in the last equality, we simply introduced factors of $1= U(\Lambda)^{-1} U(\Lambda)$. We now use the fact that $| 0 \rangle $ is Lorentz invariant, $U(\Lambda) | 0 \rangle = | 0 \rangle $ and $\langle p| $ transforms as $$ U(\Lambda) | p \rangle = | \Lambda p \rangle \quad \implies \quad \langle p |U(\Lambda)^{-1} = \langle \Lambda p | . $$ Finally $\phi(x)$ is a scalar field so $$ U(\Lambda) \phi(x) U(\Lambda)^{-1} = \phi(\Lambda x) \quad \implies \quad U(\Lambda) \phi(0) U(\Lambda)^{-1} = \phi(0) . $$ Using all of this, we find $$ f(p) = \langle \Lambda p| \phi(0) | 0 \rangle = f(\Lambda p) $$ Consequently, $f(p)$ is a Lorentz invariant function.
{ "domain": "physics.stackexchange", "id": 89806, "tags": "quantum-field-theory, special-relativity, hilbert-space, operators, invariants" }
Do the Casimir operators for $su(3)$ algebra of particle physics carry any physical meaning?
Question: For the $su(2) $ algebra of angular momentum, the eigenvalues of the Casimir operator, $\hat{J^2}$, represents the square of the total angular momentum of a system. The $su(3)$ algebra has rank 2 and therefore, it has two Casimir operators given by $$ \hat{C}_1=\hat{T}_i \hat{T}_i, \quad \hat{C}_2 =d_{ijk}\hat{T}_i\hat{T}_j\hat{T}_k, $$ where $T_i$'s are the generator and the sum over the repeated indices is implied. The group $SU(3)$ arises in particle physics, in the context of strong interactions and also in approximate flavor $SU(3)$ symmetry. Other than characterizing different multiplets, do the eigenvalues of $\hat{C}_1$ and $\hat{C}_2$ have any physical meaning? Are they measurable like $\hat{J^2}$ is? Answer: As group invariants, they serve to provide interaction terms in SU(3)-symmetric hamiltonians. SU(3) is also the symmetry group of 3-d quantum oscillators, and the quadratic Casimir is, of course, the quadrupole-quadrupole interaction term. I copy the basic formulas from WP above, specifying their eigenvalues for the arbitrary irrep $D(p,q)$ (with p "quarks" and q "antiquarks"), of dimension $d(p,q)=\frac{1}{2}(p+1)(q+1)(p+q+2)$, $$ {C}_1 = (p^2+q^2+3p+3q+pq)/3 \\ {C}_2 =(p-q)(3+p+2q)(3+q+2p)/18 . $$ Note the first is symmetric in p,q interchange, and the second one antisymmetric. The latter is the triangle anomaly coefficient of QFT, and visibly vanishes for real, p=q, irreps!
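As a quick numerical check of the two eigenvalue formulas above (plain Python, no physics library needed), one can evaluate them for a couple of familiar irreps: the octet $D(1,1)$ should give $C_1=3$ and $C_2=0$, while the decuplet $D(3,0)$ gives $C_1=6$ and $C_2=9$:

```python
def dim(p, q):
    """Dimension of the SU(3) irrep D(p, q)."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

def c1(p, q):
    """Eigenvalue of the quadratic Casimir on D(p, q)."""
    return (p * p + q * q + 3 * p + 3 * q + p * q) / 3

def c2(p, q):
    """Eigenvalue of the cubic Casimir on D(p, q)."""
    return (p - q) * (3 + p + 2 * q) * (3 + q + 2 * p) / 18

# Octet (adjoint): 8 states, C1 = 3, C2 = 0
print(dim(1, 1), c1(1, 1), c2(1, 1))   # 8 3.0 0.0
# Decuplet: 10 states, C1 = 6, C2 = 9
print(dim(3, 0), c1(3, 0), c2(3, 0))   # 10 6.0 9.0
```

The antisymmetry under $p \leftrightarrow q$ is also easy to confirm numerically: `c2(p, q) == -c2(q, p)`, so any real ($p=q$) irrep has a vanishing anomaly coefficient, as stated above.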
{ "domain": "physics.stackexchange", "id": 96598, "tags": "particle-physics, standard-model, group-theory, invariants" }
What is the inert pair effect?
Question: I was reading about the p-block elements and found that the inert pair effect is mentioned everywhere in this topic. However, the book does not explain it very well. So, what is the inert pair effect? Please give a full explanation (and an example would be great!). Answer: The inert pair effect describes the preference of late p-block elements (elements of the 3rd to 6th main group, starting from the 4th period but getting really important for elements from the 6th period onward) to form ions whose oxidation state is 2 less than the group valency. So much for the phenomenological part. But what's the reason for this preference? The 1s electrons of heavier elements have such high momenta that they move at speeds close to the speed of light which means relativistic corrections become important. This leads to an increase of the electron mass. Since it's known from the quantum mechanical calculations of the hydrogen atom that the electron mass is inversely proportional to the orbital radius, this results in a contraction of the 1s orbital. Now, this contraction of the 1s orbital leads to a decreased degree of shielding for the outer s electrons (the reason for this is a decreased "core repulsion" whose origin is explained in this answer of mine, see the part titled "Why do states with the same $n$ but lower $\ell$ values have lower energy eigenvalues?") which in turn leads to a cascade of contractions of those outer s orbitals. The result of this relativistic contraction of the s orbitals is that the valence s electrons behave less like valence electrons and more like core electrons, i.e. they are less likely to take part in chemical reactions and they are harder to remove via ionization, because the s orbitals' decreased size lessens the orbital overlap with potential reaction partners' orbitals and leads to a lower energy. 
So, while lighter p-block elements (like $\ce{Al}$) usually "give away" their s and p electrons when they form chemical compounds, heavier p-block elements (like $\ce{Tl}$) tend to "give away" their p electrons but keep their s electrons. That's the reason why for example $\ce{Al(III)}$ is preferred over $\ce{Al(I)}$ but $\ce{Tl(I)}$ is preferred over $\ce{Tl(III)}$.
{ "domain": "chemistry.stackexchange", "id": 1743, "tags": "inorganic-chemistry, quantum-chemistry, orbitals, periodic-trends, relativistic" }
Using Grand Central Dispatch to fetch data from Parse.com and update UI asynchronously
Question: This works, and the UI is snappy in the simulator, but since this is my first time really using GCD, I'd just like this code to be reviewed before I start using it everywhere. Note that this is inside a PFQueryTableViewController. My function: func tableRefresh() { // get quality of service (high level) let qos = Int(QOS_CLASS_USER_INITIATED.value) // get global queue dispatch_async(dispatch_get_global_queue(qos, 0)) { () -> Void in //execute slow task self.queryForTable() // get main queue, do UI update dispatch_async(dispatch_get_main_queue()) { self.loadObjects() } } } My function in action: @IBAction func done(segue: UIStoryboardSegue) { let addPersonViewController = segue.sourceViewController as! AddPersonViewController // If I get a person back from APVC if let person = addPersonViewController.person { // If that person has a name if let name = person.name { // Note: the reference to current user creates the pointer let newPerson = Person(name: name, user: PFUser.currentUser()!) // Save newPerson.saveInBackgroundWithBlock() { succeeded, error in if succeeded { println("\(newPerson) was Saved") self.tableRefresh() } else { if let errorMessage = error?.userInfo?["error"] as? String { self.showErrorView(error!) } } } } } } Answer: First off, I don't know Swift. These are generic issues that should be addressed in all written code, and I'll leave the Swift-specific stuff to the experts. Comments //execute slow task Comments should say why the code is the way it is, not a generic comment that doesn't tell us anything constructive. If there is an issue with the code being slow, document why it is slow, and maybe state why you haven't been able to fix it for future reference. Spacing } } We definitely don't need that much whitespace hogging our screens. It forces code off the screen, it increases scrolling, and generally slows us down when reading and understanding the code. Indentation } else { if let errorMessage = error?.userInfo?["error"] as? 
String { self.showErrorView(error!) } } Your indentation is off there. Each closing brace should have the same indentation level as its opening brace, unless they are on the same line, which is rare. Naming Your names should state what the function does or variable is, and only that, and the variable/function shouldn't be/do anything but what the name says it is/does. done() tells me nothing about what the function does. qos doesn't tell me what the variable does, what it contains, or what it is used for. self.loadObjects() appears to do UI updates, according to the comment. Loading an object is not the same as displaying it, and loading and displaying from a single method violates the Single Responsibility Principle.
{ "domain": "codereview.stackexchange", "id": 15002, "tags": "swift, asynchronous, ios, parse-platform, grand-central-dispatch" }
Particle in a double delta potential, scatter states
Question: I was studying the scattering states of a particle in a double delta potential given in this link: Double delta function well – scattering states But I don't understand how equations (20) and (21) were obtained; either one of them is sufficient, actually, since the other can be derived from it. Any help appreciated! Answer: The solution considers waves propagating in the positive $x$ direction, since for $x>a$ the wave function is $\psi(x) = F e^{ikx}$. Accordingly, the internal transmission coefficient for the segment $-a \le x \le a$ is given by the relative amplitude of the $e^{ikx}$ component, the one "transmitted" in the same direction as the outgoing wave. Similarly, the reflection coefficient is given by the "reflected" $e^{-ikx}$ component propagating in the opposite direction. Since in the inner region $\psi(x) = C e^{ikx} + D e^{-ikx}$, where both $C$ and $D$ are proportional to $A$, this gives $T_i = |C|^2/|A|^2$ and $R_i = |D|^2/|A|^2$, with $C$ and $D$ as in Eqs. (11), (12).
{ "domain": "physics.stackexchange", "id": 25707, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, scattering" }
calculating estimate of a state for a system with two observations of the state from different times
Question: I'm struggling with a problem that I just can't seem to get a grasp of. I'm supposed to calculate an estimate for the state $x(k)$ at times $k=1,2,3$ from the state-space model $$ \begin{aligned} x(k+1) &= A x(k) + B u(k) + w(k) \\ y_1(k) &= x(k) + v_1(k) \\ y_2(k) &= x(k-1) + v_2(k) \end{aligned} $$ where the covariances of the noises are given as $E[w(k)w(k)']$, $E[v_1(k)v_1(k)']$, $E[v_2(k)v_2(k)']$, and $u(k)$ is given for $k=1,2,3$; $y_1(k)$ is given for $k=1,2,3$; $y_2(k)$ is given for $k=2,3$. What kind of technique am I supposed to use? Information Kalman filter? Answer: HINT $$ \begin{bmatrix} A & 0 \\ 0 & 1 \end{bmatrix} $$
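One common way to read the hint is state augmentation: stack the delayed value into the state, $z(k) = [x(k)\; x(k-1)]'$, so that both measurements become functions of the current augmented state and a standard Kalman filter applies. A minimal numerical sketch follows (the scalar $A=0.9$, the initial values, and the exact block layout are illustrative assumptions, not part of the original problem):

```python
import numpy as np

A = np.array([[0.9]])            # assumed scalar dynamics for illustration
n = A.shape[0]
I, Z = np.eye(n), np.zeros((n, n))

# Augmented state z(k) = [x(k); x(k-1)]
F = np.block([[A, Z],            # x(k+1) = A x(k)
              [I, Z]])           # "remember" x(k) as the delayed component
H1 = np.hstack([I, Z])           # y1(k) = x(k)   + v1(k)
H2 = np.hstack([Z, I])           # y2(k) = x(k-1) + v2(k)

z = np.array([[2.0], [1.5]])     # x(0) = 2, x(-1) = 1.5
z_next = F @ z                   # noise-free, input-free propagation
print(z_next.ravel())            # [1.8 2. ]  ->  [A*x(0), x(0)]
```

With $F$, $H_1$ and $H_2$ in hand, the process noise enters only the first block, each measurement gets its own covariance, and the filter itself is then the ordinary (or information-form) Kalman recursion on $z(k)$.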
{ "domain": "dsp.stackexchange", "id": 5628, "tags": "kalman-filters, homework, estimation" }
How does defrosting your freezer save energy?
Question: I've been told I should defrost my freezer to save energy, wiki, here and here for example, but none of the linked sites is a peer-reviewed paper explaining why (the wiki article doesn't even have references), and I don't find it obvious. I don't understand how the mechanism works, and I ask you for a good paper read on the subject or an explanation. Answer: Refrigerators and freezers work by running a really cold liquid through cooling pipes fitted in the cavity to be cooled. This flow is switched off (the compressor stops) when the set temperature is reached; the faster the set temperature is reached, the less energy the appliance uses. Cold fluid at $T_c$ runs through the cooling pipes. The cavity to be cooled is at $T_f$. Now let's look at a small area $A$ on the surface of a cooling pipe. When the cooling pipe is clean (not frosted over) then Newton's cooling law tells us that the heat flux (amount of heat removed per unit of time) $\dot{q}$ through $A$ is: $$\dot{q}_\textrm{clean}=hA(T_f-T_c)$$ Where $h$ is the heat transfer coefficient. But when the surface is frosted over with porous ice, then: $$\dot{q}_\textrm{frosted}=uA(T_f-T_c)$$ It can be shown that: $$\frac{1}{u}=\frac{1}{h}+\frac{\theta}{k}\implies u=\frac{hk}{k+h\theta}$$ Where $\theta$ is the thickness of the frosty material and $k$ the thermal conductivity of the frosty material. Because the frosty material is a poor conductor of heat ($k$ has a low value): $$h>\frac{hk}{k+h\theta}$$ (Note that the frosty material isn't pure ice, it's highly porous ice that contains much entrapped air, thereby further lowering the $k$ value of the frost). And this means that, all other things being equal: $$\dot{q}_\textrm{clean}>\dot{q}_\textrm{frosted}$$ Multiply this, of course, by the total surface area of the cooling pipes. So clean cooling pipes carry away the heat more quickly, resulting in the compressor running for shorter times to reach the set temperature.
This saves energy. Note also that freezers that have frosted over more (greater frost thickness $\theta$) perform worse. A slightly more detailed approach: $$\dot{q}_\textrm{clean}=u_1A(T_f-T_c)$$ $$\dot{q}_\textrm{frosted}=u_2A(T_f-T_c)$$ Here it can be shown that: $$\frac{1}{u_1}=\frac{1}{h_1}+\frac{\theta_1}{k_1}+\frac{1}{h_2}$$ And: $$\frac{1}{u_2}=\frac{1}{h_1}+\frac{\theta_1}{k_1}+\frac{\theta_2}{k_2}+\frac{1}{h_3}$$ But here too, because the frost conducts heat poorly ($k_2$ is small): $$u_1>u_2$$ So clean pipes carry off the heat more quickly, all other things being equal. Symbols used in this section: $h_1$: convection heat transfer coefficient, cooling fluid to metal. $h_2$: convection heat transfer coefficient, metal to air. $h_3$: convection heat transfer coefficient, frost to air. $k_1$: thermal conductivity, metal. $k_2$: thermal conductivity, frost. $\theta_1$: thickness, metal. $\theta_2$: thickness, frost.
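As a rough numerical sketch of the simple (one-layer) formula above, the effective coefficient $u$ and the resulting flux reduction can be computed directly. The values of $h$, $k$ and $\theta$ below are illustrative assumptions of mine, not taken from the answer:

```python
def u_frosted(h, k, theta):
    """Effective heat transfer coefficient with a frost layer:
    1/u = 1/h + theta/k  =>  u = h*k / (k + h*theta)."""
    return h * k / (k + h * theta)

# Assumed illustrative values
h = 25.0       # W/(m^2 K), convection coefficient of the clean pipe surface
k = 0.2        # W/(m K), thermal conductivity of porous frost
theta = 0.005  # m, frost thickness (5 mm)

u = u_frosted(h, k, theta)
print(u)       # effective coefficient, clearly lower than h
print(u / h)   # fraction of the clean heat flux that remains
```

With these numbers, a 5 mm frost layer cuts the heat flux to roughly 60% of its clean-pipe value, and setting $\theta = 0$ recovers $u = h$.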
{ "domain": "physics.stackexchange", "id": 33165, "tags": "thermodynamics, temperature, everyday-life, freezing" }
Calculus - Equation for rocket max height
Question: I made myself a question involving calculus and physics just for fun but can't figure it all out. A rocket with mass $1000kg$ launches with a net force of $5000N$ for $5s$. Find the maximum height and the time taken to reach the peak. First I found the velocity by integrating: $$v = \int \frac{F}{m} \;\mathrm{dt}$$ Which yields: $$v=\frac{Ft}{m}$$ and substituting everything in I find my maximum velocity: $$25ms^{-1}$$ Now once the velocity reaches its maximum, the acceleration will become: $$-10ms^{-2}$$ and this is where my further calculations turn into magic. I figure it should be a piecewise function but can't piece it together. I tried integrating the new acceleration and adding my maximum velocity as $c$, but using graphing tools I see it doesn't make sense. I reasoned that the graph should look like a parabola with the left "leg" being constant up to $25ms^{-1}$ followed by a normal "parabolic drop". Answer: Your solution needs two steps. Find height and velocity after the burn (time $t_1$) The acceleration needs to account for thrust and gravity $a(t) = F(t)/m - g$ $$\begin{aligned} v(t) & = \int a(t)\,{\rm d}t=\int \limits_0^{t} \left( \frac{F}{m}-g \right) \,{\rm d}t = \left( \frac{F}{m}- g \right) t\\ & v_1 = \left( \frac{F}{m}- g \right) t_1 \end{aligned} $$ and $$ \begin{aligned} h(t) &= \int v(t)\,{\rm d}t =\int \limits_0^{t} \left( \frac{F}{m}-g \right) t\, {\rm d} t = \frac{t^2}{2}\left( \frac{F}{m}-g \right) \\ & h_1 = \frac{t_1^2}{2}\left( \frac{F}{m}-g \right) \end{aligned} $$ Find maximum height during free fall (even though it is going upwards) $$ \begin{aligned} v(t) & = v_1 + \int \limits_{t_1}^t (-g)\, {\rm d}t = v_1 - g (t-t_1) \\ & v(t)=0 \left. 
\vphantom{\int } \right\} t_2 = t_1 + \frac{v_1}{g} \end{aligned} $$ and $$ \begin{aligned} h(t) &= h_1 + \int v(t)\,{\rm d}t = h_1 + \int \limits_{t_1}^t \left( v_1- g ( t-t_1) \right)\,{\rm d}t \\ & h_2 = h(t_2) = h_1 + \frac{v_1^2}{2 g} = \frac{t_1^2}{2 g} \frac{F}{m} \left( \frac{F}{m}-g \right) \end{aligned} $$
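As a quick numerical check of these formulas (a sketch of mine, not part of the original answer), one can plug in the question's numbers. The question's 5000 N is taken as the net force, so the burn-phase acceleration is simply $a = F/m = 5\,\mathrm{m/s^2}$, with $g = 10\,\mathrm{m/s^2}$ acting afterwards:

```python
g = 10.0   # m/s^2, gravitational acceleration after burnout
t1 = 5.0   # s, burn time
a = 5.0    # m/s^2, acceleration during the burn (5000 N net force / 1000 kg)

v1 = a * t1                # velocity at burnout
h1 = 0.5 * a * t1**2       # height at burnout
t2 = t1 + v1 / g           # time of the peak, where v(t) = 0
h2 = h1 + v1**2 / (2 * g)  # maximum height

print(v1, h1, t2, h2)      # 25.0 62.5 7.5 93.75
```

So the rocket reaches its peak of 93.75 m at 7.5 s, consistent with the 25 m/s burnout velocity the asker already found.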
{ "domain": "physics.stackexchange", "id": 52430, "tags": "homework-and-exercises, newtonian-mechanics, projectile" }
Is hydroboration a pericyclic reaction?
Question: According to Ahn.N.T "Frontier Orbitals" hydroboration isn't a pericyclic reaction because boron uses two AOs and not one. The same applies to cheletropic reactions, which aren't pericyclic either. But the IUPAC goldbook says otherwise: http://goldbook.iupac.org/P04491.html - that cheletropic reactions are pericyclic. So is hydroboration, too? Answer: Based on the IUPAC Goldbook definition, the only two requirements for a pericyclic reaction are: Concerted mechanism Cyclic transition state Hydroboration reactions (and similar reactions) are pericyclic reactions by these criteria, as are all cheletropic reactions. The language about fully conjugated transition states can be used to explain some pericyclic reactions, but not all, and thus IUPAC uses the phrasing may be viewed as.... Hydride shifts (and all sigmatropic rearrangements and ene reactions) involve non-$\pi$ orbitals (even so, the transition states do tend to follow the Hückel rule).
{ "domain": "chemistry.stackexchange", "id": 3957, "tags": "organic-chemistry, reaction-mechanism" }
Trends in atomic radii across a period
Question: I am a 12th grader. Recently, while revising the Periodic Table, I came across the statement: As the effective nuclear charge increases across a period, the atomic radius of the elements decreases on moving from left to right in a period. For some reason, I decided to fact-check this statement and compiled the atomic radii of the elements and graphed it. Unfortunately, I compiled the van der Waals' radii of the elements. Thus, while the graph I obtained did show the expected decreasing trend, it also showed sudden, unexpected fluctuations in the atomic radii (eg: boron, silicon). My question is: why does the van der Waals' radius not obey the expected decreasing trend? Also, when we speak about the atomic radius, which is a better representative for this particular case: the covalent radius, or the van der Waals' radius? I am attaching the graph I obtained for your reference: (van der Waals' radii of the elements (in Å)) Answer: Atomic radii are measured in many different ways; van der Waals radii do not capture precisely the same trends, as they are calculated primarily (although not exclusively) by applying the van der Waals equation to gaseous systems and approximating the molecules as spheres. Spikes when moving from the s-block to the p-block are expected (e.g. Be $\rightarrow$ B) even in standard atomic radii maps because the p-orbital is large in size and repels away from the shielding electrons. However, when you define atomic radii based instead on the intensity of the electron cloud at certain distances from the nucleus, the trend with effective nuclear charge is maintained (i.e. the silicon issue goes away even if its van der Waals radius is anomalous).
{ "domain": "chemistry.stackexchange", "id": 16826, "tags": "periodic-trends" }
How to make a robot arm move in gazebo with ROS 1?
Question: Hi, For a school project I want to move this robot arm in Gazebo with ROS. I can already load the robot arm model with roslaunch, start Gazebo, and move the joints in Gazebo, but I don't know exactly how I can move this robot arm with ROS using C++ code. I also can't really find any good tutorials. Can somebody please help me? If you know some good tutorials let me know. I use Ubuntu 20.04 LTS, ROS 1 Noetic, and Gazebo 11.9.1. Originally posted by abracadabra on ROS Answers with karma: 1 on 2021-12-24 Post score: 0 Answer: Hi @abracadabra, I suggest this book: Mastering ROS for Robotics Programming Third Edition by Lentin Joseph and Jonathan Cacace. Chapters 3 and 4 are a step-by-step guide to what you are looking for; they set up the robot arm in Gazebo first, but it looks like you have already done this step. However, it's critical, as the other steps build on it. Check this tutorial to connect to a robot arm with C++: https://community.arm.com/arm-research/b/articles/posts/do-you-want-to-build-a-robot I also suggest you look into MoveIt, as it has many great examples, and there is a tutorial on how to integrate it with Gazebo: https://ros-planning.github.io/moveit_tutorials/doc/gazebo_simulation/gazebo_simulation.html Originally posted by osilva with karma: 1650 on 2021-12-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 37287, "tags": "ros" }
Is it true that the particles from the transmitting antenna (conductor) flying or travelling in the air?
Question: Is it correct that the varying stream of electrons or photons or wavicles etc. in the transmitting antenna (i.e. the complex analog signal) is literally flying or traveling in air or vacuum and then gets inside the receiving antenna to form back the transmitted information? If not, then what is the best explanation of how the information reaches the receiving end? Answer: Basic phenomena of electromagnetic radiation EM radiation happens through the emission of photons from excited subatomic particles. The excitation of subatomic particles happens through the absorption of photons (1. and 2. are not the full picture, see below). Photons are indivisible quanta of EM radiation. Powerful excitations of subatomic particles lead to powerful EM radiation in the range of X-ray and gamma-ray emission, soft excitations to infrared emission. Visible light is in between (and we call it visible because we adapted to it during our evolution). Applications of EM radiation by humans (according to your question) Electrons accelerated across a gap in a conducting circuit produce EM radiation in a wide range from visible radiation (sparks) to X-rays. See the spark-gap transmitter of Heinrich Hertz. Electrons moving non-parallel to an external magnetic field undergo a deflection perpendicular to the plane between the direction of movement and the external magnetic field. Take the thumb and the second and third fingers of your hand to visualize this! It's called the Lorentz force. During the deflection the electron loses kinetic energy (gets decelerated) and emits EM radiation (and comes to a standstill after exhausting its kinetic energy in the form of the emitted radiation). 
Radio waves In an antenna rod - which is an open electric circuit - electrons get accelerated and decelerated back and forth by an antenna generator (AC current): Courtesy of Wikipedia, see the animation here. The synchronous and periodic acceleration of the electrons leads to a synchronous and periodic emission of photons. In the near field of the antenna - moving with the radiation (at the speed of light) in the direction of radiation (line of observation perpendicular to the antenna rod) - the radio wave looks like the graphics in this explanation of the photon's electric and magnetic field components. Your correct thoughts Is it correct that the varying stream of electrons ... in the transmitting antenna ... (induces photons which) ... are literally flying or traveling in air or vacuum and then get inside the receiving antenna to form back the transmitted information? Yes, an accelerated stream of electrons induces a stream of photons, flying away in empty space until hitting subatomic particles (the receiving antenna). By modulating the emitted EM radiation one can transmit information: in the simplest case by interrupting the radiation (the Morse signal mentioned by Anna), by varying the frequency of the radio wave, or by varying the intensity of the radiation.
{ "domain": "physics.stackexchange", "id": 39616, "tags": "electromagnetism, electromagnetic-radiation, antennas" }
openni_launch fails to start on ROS hydro
Question: I'm using ROS Hydro on Ubuntu 12.04 on a Roomba 4400 base. I've been using roslaunch openni_launch openni.launch to launch openni. The last time i successfully launched openni was today( a few hours before this post). But now i receive the following error when i try to launch openni or any other command which uses kinect ( e.g. roslaunch turtlebot_follower follower.launch) : terminate called after throwing an instance of 'openni_wrapper::OpenNIException' what(): unsigned int openni_wrapper::OpenNIDriver::updateDeviceList() @ /tmp/buildd/ros-hydro-openni-camera-1.9.2-0precise-20150515-0253/src/openni_driver.cpp @ 125 : enumerating image nodes failed. Reason: One or more of the following nodes could not be enumerated: Image: PrimeSense/SensorV2/5.1.0.41: Got a timeout while waiting for a network command to complete! [FATAL] [1462534504.287888196]: Service call failed! [FATAL] [1462534504.288046458]: Service call failed! [FATAL] [1462534504.288046461]: Service call failed! [FATAL] [1462534504.288079748]: Service call failed! [FATAL] [1462534504.288330007]: Service call failed! [FATAL] [1462534504.288357209]: Service call failed! [FATAL] [1462534504.288403796]: Service call failed! [FATAL] [1462534504.288477609]: Service call failed! [FATAL] [1462534504.288604595]: Service call failed! [FATAL] [1462534504.288768839]: Service call failed! [FATAL] [1462534504.288772040]: Service call failed! [FATAL] [1462534504.288905728]: Service call failed! [FATAL] [1462534504.288905732]: Service call failed! [FATAL] [1462534504.289059881]: Service call failed! [FATAL] [1462534504.289899319]: Service call failed! [FATAL] [1462534504.290088786]: Service call failed! [camera/camera_nodelet_manager-2] process has died [pid 6531, exit code -6, cmd /opt/ros/hydro/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-camera_nodelet_manager-2.log]. 
log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-camera_nodelet_manager-2*.log [camera/debayer-4] process has died [pid 6569, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load image_proc/debayer camera_nodelet_manager --no-bond image_raw:=rgb/image_raw image_mono:=rgb/image_mono image_color:=rgb/image_color __name:=debayer __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-debayer-4.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-debayer-4*.log [camera/rectify_mono-5] process has died [pid 6583, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=rgb/image_mono image_rect:=rgb/image_rect_mono __name:=rectify_mono __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-rectify_mono-5.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-rectify_mono-5*.log [camera/rectify_color-6] process has died [pid 6597, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=rgb/image_color image_rect:=rgb/image_rect_color __name:=rectify_color __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-rectify_color-6.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-rectify_color-6*.log [camera/depth_metric_rect-9] process has died [pid 6639, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/convert_metric camera_nodelet_manager --no-bond image_raw:=depth/image_rect_raw image:=depth/image_rect __name:=depth_metric_rect __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_metric_rect-9.log]. 
log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_metric_rect-9*.log [camera/points_xyzrgb_sw_registered-13] process has died [pid 6695, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyzrgb camera_nodelet_manager --no-bond rgb/image_rect_color:=rgb/image_rect_color rgb/camera_info:=rgb/camera_info depth_registered/image_rect:=depth_registered/sw_registered/image_rect_raw depth_registered/points:=depth_registered/points __name:=points_xyzrgb_sw_registered __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-points_xyzrgb_sw_registered-13.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-points_xyzrgb_sw_registered-13*.log [camera/depth_registered_rectify_depth-14] process has died [pid 6709, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=depth_registered/image_raw image_rect:=depth_registered/hw_registered/image_rect_raw __name:=depth_registered_rectify_depth __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_registered_rectify_depth-14.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_registered_rectify_depth-14*.log [camera/disparity_registered_sw-17] process has died [pid 6751, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/disparity camera_nodelet_manager --no-bond left/image_rect:=depth_registered/sw_registered/image_rect_raw right:=projector left/disparity:=depth_registered/disparity __name:=disparity_registered_sw __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-disparity_registered_sw-17.log]. 
log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-disparity_registered_sw-17*.log [camera/driver-3] process has died [pid 6538, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load openni_camera/driver camera_nodelet_manager --no-bond ir:=ir rgb:=rgb depth:=depth depth_registered:=depth_registered projector:=projector __name:=driver __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-driver-3.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-driver-3*.log [camera/rectify_ir-7] process has died [pid 6611, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=ir/image_raw image_rect:=ir/image_rect_ir __name:=rectify_ir __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-rectify_ir-7.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-rectify_ir-7*.log [camera/depth_rectify_depth-8] process has died [pid 6625, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=depth/image_raw image_rect:=depth/image_rect_raw __name:=depth_rectify_depth __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_rectify_depth-8.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_rectify_depth-8*.log [camera/depth_metric-10] process has died [pid 6653, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/convert_metric camera_nodelet_manager --no-bond image_raw:=depth/image_raw image:=depth/image __name:=depth_metric __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_metric-10.log]. 
log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_metric-10*.log [camera/depth_points-11] process has died [pid 6667, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyz camera_nodelet_manager --no-bond image_rect:=depth/image_rect_raw points:=depth/points __name:=depth_points __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_points-11.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-depth_points-11*.log [camera/register_depth_rgb-12] process has died [pid 6681, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/register camera_nodelet_manager --no-bond rgb/camera_info:=rgb/camera_info depth/camera_info:=depth/camera_info depth/image_rect:=depth/image_rect_raw depth_registered/image_rect:=depth_registered/sw_registered/image_rect_raw __name:=register_depth_rgb __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-register_depth_rgb-12.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-register_depth_rgb-12*.log [camera/points_xyzrgb_hw_registered-15] process has died [pid 6723, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyzrgb camera_nodelet_manager --no-bond rgb/image_rect_color:=rgb/image_rect_color rgb/camera_info:=rgb/camera_info depth_registered/image_rect:=depth_registered/hw_registered/image_rect_raw depth_registered/points:=depth_registered/points __name:=points_xyzrgb_hw_registered __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-points_xyzrgb_hw_registered-15.log]. 
log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-points_xyzrgb_hw_registered-15*.log [camera/disparity_depth-16] process has died [pid 6737, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/disparity camera_nodelet_manager --no-bond left/image_rect:=depth/image_rect_raw right:=projector left/disparity:=depth/disparity __name:=disparity_depth __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-disparity_depth-16.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-disparity_depth-16*.log [camera/disparity_registered_hw-18] process has died [pid 6766, exit code 255, cmd /opt/ros/hydro/lib/nodelet/nodelet load depth_image_proc/disparity camera_nodelet_manager --no-bond left/image_rect:=depth_registered/hw_registered/image_rect_raw right:=projector left/disparity:=depth_registered/disparity __name:=disparity_registered_hw __log:=/home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-disparity_registered_hw-18.log]. log file: /home/asaad/.ros/log/8593b2ce-137e-11e6-b5d3-90004ecd8dc9/camera-disparity_registered_hw-18*.log Originally posted by Asaad Irfan on ROS Answers with karma: 62 on 2016-05-06 Post score: 0 Answer: I reinstalled OpenNI 1.5.4 and NITE 1.5.2, and the mask image is now shown. My problem is solved. The link I used to install these was [http://www.20papercups.net/programming/kinect-on-ubuntu-with-openni/] Originally posted by Asaad Irfan with karma: 62 on 2016-05-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24576, "tags": "kinect, openni" }
Cosmology: collisionless vs collisional fluids?
Question: I am trying to understand the difference between collisionless and collisional fluids in cosmology. My first question is the following. In the context of FLRW cosmology, we suppose that the Universe can be described in terms of a mix of fluids with: $T_{\mu\nu}=\left(\rho c^2 +P\right)u_{\mu}u_{\nu}+Pg_{\mu\nu}$ When we write that, do we suppose a collisionless or collisional nature of the fluids? If this description corresponds to collisional fluids, why are cosmological simulations N-body simulations (collisionless) rather than simply based on hydrodynamics? Can we solve the equations of a collisionless system without using particles (just cells with physical properties, as in the collisional case)? At very large scales (the scale of homogeneity), when we are not interested in the formation of local structures (like galaxies and superclusters), is the collisionless/collisional description important? Answer: Q: When we write that, do we suppose a collisionless or collisional nature of the fluids? A: It's the energy-momentum tensor for a perfect fluid Chapter 2.26 Q: If this description corresponds to collisional fluids, why cosmological simulations are N-body simulations (collisionless) and are not simply based on hydrodynamics? A: Cosmological simulations are not always N-body simulations. Physicists often assume dark matter is nearly collisionless. This assumption is justified by looking at e.g. the bullet cluster, where galaxies collide and dark matter does not interact with itself or baryonic matter: Calculations on dark matter self-interaction from the bullet cluster. Baryonic matter, on the other hand, may be assumed to be collisional, but often it can be neglected. Q: Can we solve equations of a collisionless system without using particles (and just cells with physical properties like in the collisional case)? A: Yes, that's possible. 
Collisionless Boltzmann equation describes collisionless matter (Section 30.5), and it can be discretized onto a mesh grid (into cells). One would also need to have the phase-space for the three velocity coordinates; these can also be discretized. Additionally, one would need to solve the relativistic Poisson equation which would describe the interaction with gravity. Q: At very large scale (scale of homogeneity), when we are not interested in the formation of local structures (like galaxies and superclusters), does the collisionless/collisional description become important? A: That depends on at least: a) whether the matter is highly collisional or not, and b) what kind of physics we are interested in. As an example, dark matter is usually assumed to be highly non-collisional, yet near black holes the chance for collisions is increased. Additional notes: A good summary of the topic
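To make the perfect-fluid tensor from the question concrete, here is a small numerical sketch (my own illustration, not from the answer; it assumes units with $c = 1$, a flat Minkowski metric with signature $(-,+,+,+)$, a comoving observer, and arbitrary values for $\rho$ and $P$):

```python
import numpy as np

rho, P = 1.0, 0.2                    # assumed illustrative energy density and pressure
g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
u = np.array([-1.0, 0.0, 0.0, 0.0])  # covariant 4-velocity u_mu of a comoving observer

# T_{mu nu} = (rho + P) u_mu u_nu + P g_{mu nu}   (c = 1)
T = (rho + P) * np.outer(u, u) + P * g

print(T[0, 0])          # = rho: energy density seen by the comoving observer
print(np.diag(T)[1:])   # = (P, P, P): isotropic pressure, no shear or heat flux
```

The vanishing off-diagonal components are what make this a perfect fluid: no viscosity and no heat conduction, regardless of whether the underlying matter is collisional or collisionless.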
{ "domain": "physics.stackexchange", "id": 23391, "tags": "general-relativity, fluid-dynamics, cosmology, collision, magnetohydrodynamics" }
How to label overlapping objects for deep learning model training
Question: I am training yolov3 to detect a custom object (chickens). In a lot of my training images I have overlapping chickens (can only see a partial chicken etc). Is there a common practice for how to label the data (bounding box) in these cases? Should you only label the portion of the image which you can see? Answer: There is no common practice in labeling the bounding boxes. It is always problem dependent. For example, if you want to count the chickens then you should label the whole chicken as one instance of a chicken. If you simply want to detect whether there is a chicken in the picture, you should label the unoccluded part. You have to think about your problem. What is the goal of the algorithm? Could a human do the task without imagining where the rest of the object is? You should also consider the pixel imbalance for your problem. In general, the first method is a harder task than the second method because even humans have problems in labeling the bounding box for occluded objects. Hence, you will have a lot of variance due to this factor. If you label only what you see, the bounding box labeling will be more reliable. As far as I know, the PASCAL Visual Object Classes data set, which was used in the YOLO publication, only labeled what you can see and not what is occluded. BTW I hope your task aims to improve the quality of life of the chickens. It would be a shame if machine learning were used to harm them.
{ "domain": "datascience.stackexchange", "id": 4860, "tags": "deep-learning, labels, yolo" }
Meaning of "harmonic"
Question: I'm trying to understand the meaning of the term "harmonic", i.e., as it appears in the following sentence of Fluctuation-dissipation relations for stochastic gradient descent The second relation (FDR2) further helps us determine the properties of the loss function landscape, including the strength of its Hessian and the degree of anharmonicity, i.e., the deviation from the idealized harmonic limit of a quadratic loss surface and a constant noise matrix. What is the meaning of "harmonic limit" here? Also, is it appropriate to use the term "harmonic approximation" to refer to a method which uses a Gaussian-like approximation? (i.e., assumes that cumulants of rank 3 and above are zero) Answer: If you make the equivalence between the loss surface (or loss function) and a physical potential, then the "harmonic limit" here is that of Brownian evolution in a harmonic (or quadratic) potential. That means stochastic gradient descent, viewed as a stochastic process, is the same as the evolution of a particle driven by thermal noise in a quadratic potential. As the solutions of the Fokker-Planck equation associated with such an evolution are actually Gaussian, it could be thought of as a Gaussian-like approximation. But the approximation here is really on the loss function.
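The harmonic limit can be made tangible with a short Langevin simulation (a sketch of my own; the spring constant $k$, temperature $T$, time step, and particle count below are arbitrary assumptions): overdamped Brownian motion in the quadratic potential $V(x) = kx^2/2$ relaxes to a Gaussian stationary distribution with variance $T/k$, which is the Gaussian behaviour the answer refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
k, T = 1.0, 0.5                   # spring constant and "temperature" (assumed)
dt, n_steps, n_part = 0.01, 2000, 5000

x = np.zeros(n_part)              # start all walkers at the bottom of the potential
for _ in range(n_steps):
    # Euler-Maruyama step for the Langevin equation dx = -k x dt + sqrt(2T) dW
    x += -k * x * dt + np.sqrt(2 * T * dt) * rng.standard_normal(n_part)

print(x.var())                    # close to the Gaussian stationary variance T/k = 0.5
```

On a non-quadratic (anharmonic) loss, the stationary distribution would no longer be Gaussian, which is exactly the deviation FDR2 is meant to quantify.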
{ "domain": "physics.stackexchange", "id": 63091, "tags": "thermodynamics, statistical-mechanics, harmonic-oscillator, terminology, anharmonic-oscillators" }
How does a nerve cell adjust if O2 diffusion is interrupted?
Question: What effects would it have on a nerve if the oxygen supply is cut off? Is there any data on this? Does the nerve conduction velocity increase? What about the amplitude and receptor channels on/in the nerve? Funnily enough it's insanely hard to come by an answer to this question, or maybe I'm just searching in the wrong places.... Answer: If oxygen diffusion is interrupted, there is a serious problem. Neurons inside the brain slowly start to die, and many other things, like a change in personality or the inability to process pain impulses, occur if the brain is deprived of oxygen. If this is not fixed within 15 minutes it is impossible to survive. The actual effects on the neurons are as follows: As in most cells, the first solution to this kind of circumstance is anaerobic metabolism; the same happens in neurons, but it does not last long and is inefficient. As far as I know, nerve conduction decreases in this kind of scenario and the receptors shrivel. If the receptors are shriveling, they shouldn't be able to transfer impulses, meaning that nerve conduction should stop. Here is a link to an article on the effects of oxygen deprivation on the brain: https://www.livestrong.com/article/106179-effects-lack-oxygen-brain/
{ "domain": "biology.stackexchange", "id": 9834, "tags": "physiology" }
Resampling for imbalaced datasets: should testing set also be resampled?
Question: Apologies for what is probably a basic question but I have not been able to find a definitive answer either in the literature or on the Internet. When dealing with an imbalanced dataset one possible strategy is to resample either the minority or the majority class to artificially generate a balanced training set that can be used to train a machine learning model. My doubt stems from the fact that a testing set is supposed to be a representation of what real-world data is going to look like. Under this assumption, my understanding is that the testing set is not to be resampled, unlike the training set, since the imbalance we are trying to deal with is present in the live data in the first place. Could someone please clarify if this intuition is right? Answer: The resampling of the training data is done to better represent the minority class, so your classifier has more samples to learn from (oversampling), or fewer samples so it can better differentiate your minority class samples from the rest (undersampling). Not only must your test data be untouched during oversampling or undersampling, but also your validation data. One logical argument that prevents you from touching your test data is that in a real-world scenario, you wouldn't have access to the target variable (that's what you want to predict), and in order to perform resampling, you need to know which class a sample belongs to in order to remove it (undersampling) or find its nearest neighbor(s) (oversampling). An example of oversampling during cross-validation is just below. What I'm basically doing here, to avoid leaking information from the train set to the test set (and validation set), is that at every iteration, at each fold, I oversample the remaining folds, train a model with the oversampled new train set, get my predictions, and iterate over and over again. Each time I get a new fold for validation, I oversample all the others and get predictions for that validation fold. 
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

for ind, (ind_train, ind_val) in enumerate(kfolds.split(X, y)):  # Stratified KFold
    X_train, X_val = X.iloc[ind_train], X.iloc[ind_val]
    y_train, y_val = y.iloc[ind_train], y.iloc[ind_val]
    sm = SMOTE(random_state=12, ratio=1.0)
    X_train_res, y_train_res = sm.fit_sample(X_train, y_train)  # oversampled train set
    xgb = XGBClassifier(max_depth=5, colsample_bytree=0.9, min_child_weight=2,
                        learning_rate=0.09, objective="binary:logistic",
                        n_estimators=148)
    xgb.fit(X_train_res, y_train_res)
    val_pred = xgb.predict(X_val)      # out-of-fold predictions on the validation fold
    train_pred = xgb.predict(X_train)  # predictions on the (non-oversampled) train set
    test_pred = xgb.predict(X_test)    # predictions on the whole test set
{ "domain": "datascience.stackexchange", "id": 5865, "tags": "training, class-imbalance" }
Image Processing
Question: I have an image and I am trying to find its rows and columns.

A = imread('lena.jpg');
nrows = size(A,1)
ncols = size(A,2)

This is a 3-dimensional matrix, and the output of this is 512 and 512. I know A is the matrix, but how can I know the value of, for example, A(211,312)? I want to apply a convolution filter to the image, obtain a new matrix, and turn it into a picture like "lena with blur". Answer: You can iterate over each channel:

data = double(imread('lena.jpg'));
data = data/255; % Potentially optional
dataFilt = zeros(size(data));
nChan = size(data,3);
kernelFilter = ones(11,11)/121; % 11x11 averaging (box blur) kernel
for iChan = 1:nChan
    dataFilt(:,:,iChan) = filter2(kernelFilter,data(:,:,iChan));
end
subplot(1,2,1)
imagesc(data)
xlabel('Picture')
subplot(1,2,2)
imagesc(dataFilt)
xlabel('Picture with blur')
{ "domain": "dsp.stackexchange", "id": 8896, "tags": "matlab, convolution, image-processing, blur" }
Probability on the same wavefront
Question: My question is as follows: is the probability of finding the particle the same everywhere on a given wavefront of its de Broglie wave? Answer: No, it is not. It is perfectly possible to produce particle wavepackets that have a varying particle probability density $|\Psi(\vec r)|^2$ along their wavefronts. The simplest example is probably a gaussian wavepacket moving along the $z$ axis, $$ \Psi(\vec r) = N \exp\left(- \frac{r^2}{2\sigma^2}+ i kz\right), $$ for which the wavefronts are the planes $z=\rm const$, along which the particle probability density $$ |\Psi(x,y,z)|^2 = N_z^2 \exp\left(-\frac{x^2+y^2}{\sigma^2}\right) $$ depends on $x$ and $y$.
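As a quick numerical illustration of this answer (the parameter values below are arbitrary), evaluating $|\Psi|^2$ at two points on the same wavefront $z=0$ shows that the probability density is not constant along it:

```python
import numpy as np

sigma, k, N = 1.0, 5.0, 1.0  # illustrative parameters

def psi(x, y, z):
    """Gaussian wavepacket N * exp(-r^2 / (2 sigma^2) + i k z)."""
    r2 = x**2 + y**2 + z**2
    return N * np.exp(-r2 / (2 * sigma**2) + 1j * k * z)

# Two points on the same wavefront z = 0
p_on_axis = abs(psi(0.0, 0.0, 0.0))**2   # 1.0 on the axis
p_off_axis = abs(psi(1.0, 0.0, 0.0))**2  # exp(-1), smaller off the axis

print(p_on_axis, p_off_axis)  # the two densities differ
```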
{ "domain": "physics.stackexchange", "id": 50000, "tags": "quantum-mechanics, wave-particle-duality" }
How does rosbridge find custom messages?
Question: I want to subscribe to a custom message from a rosbridge server. First I created a custom package and message by following the official ROS tutorial, then built with the catkin tool using catkin_make install. After sourcing the .bash file I could successfully see my .msg through rosmsg show and find my package through rospack find. However, when I use my client to request my custom message, it shows: subscribe: Unable to import my_pack.msg from package my_pack. Caused by: No module named my_pack.msg Note: I am using roslibpy as the client side to connect to the rosbridge server, on Ubuntu 16.04 and ROS Kinetic. I think my question would be: how does rosbridge locate ROS packages? Should I add an additional env var so that rosbridge can find my package? Originally posted by wthwlh on ROS Answers with karma: 26 on 2019-03-05 Post score: 0 Original comments Comment by gvdhoorn on 2019-03-06: Should I add additional env var such that ROS bridge could find my package? No, that should not be necessary. Are you starting the rosapi_node? That is required for rosbridge to understand what msgs/srvs/actions exist. Which launch file do you use to start rosbridge? Answer: @gvdhoorn Thanks for the reply, I have figured that out, it turns out I didn't source my setup.bash in that rosbridge server window... That is stupid... Originally posted by wthwlh with karma: 26 on 2019-03-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2019-03-06: Well .. that would certainly be required, yes.
{ "domain": "robotics.stackexchange", "id": 32594, "tags": "ros-kinetic, rosbridge" }
References to programming languages based on conditional logics
Question: Conditional logics are logics which augment traditional logical implication with modal operators corresponding to other notions of condition (for example, the causal conditional $A\; \square\!\!\!\!\to B$ reads "$A$ causes "B", or probabilistic conditioning "$A|B$", which reads "$A$ given $B$"). Typically these logics are studied model-theoretically, but I've wondered about their applications to programming language design (for example, to type imperative actions). I'd appreciate references to their proof theory (ie, sequent calculus/natural deduction), or to programming languages with types based on these kinds of modal operators. Thanks! EDIT: The Stanford Encyclopedia of Philosophy has a nice introduction to the subject. Answer: Check these references: Programming languages CondLP and CondLP+: Gabbay, Giordano, Martelli, Olivetti, Sapino, Conditional reasoning in logic programming, Journal of Logic Programming, Volume 44, Issues 1-3, 1 July 2000, Pages 37-74 Claudia, Oliveira, The implementation of CondLP, Lecture Notes in Computer Science, 1996, Volume 1085/1996, 713-715 Gabbay, Giordano, Martelli, Olivetti, Conditional logic programming, Proc. 11th Int. Conf. on Logic Programming, Santa Margherita Ligure, pages 272–289, 1994. References to proof theory: Olivetti, Pozzato, Schwind, A sequent calculus and a theorem prover for standard conditional logics, Journal ACM Transactions on Computational Logic (TOCL), Volume 8 Issue 4, August 2007
{ "domain": "cstheory.stackexchange", "id": 799, "tags": "reference-request, lo.logic, pl.programming-languages, type-theory, proof-theory" }
Which of the following isomers of 2,3‐dihydroxy‐4‐methoxy‐4‐oxobutanoic acid are identical?
Question: Which of the following are identical?

A and B are identical
A and B are diastereomers
A and C are enantiomers
A and B are enantiomers

What I know is that when we rotate a Fischer projection by 180°, we get identical compounds (I might be wrong). But how can I tell whether the compounds are enantiomers of each other or not? For that, how do I get the same group, i.e. $\ce{-COOH}$ in this case, at the top to compare the compounds? Also, is there any trick to tell whether they are diastereomers, enantiomers or identical from the hydroxyl group positions in Fischer projections? Answer: On rotating compound B by 180°, we get the two -OH groups on the left side and the two H atoms on the right. In other words, B would then be an enantiomer of A. Note: on rotating a Fischer projection by 180°, the compound remains the same; it's just that all the atoms have to be rotated, i.e. the atoms on C3 in this case would go to C2 and vice versa.
{ "domain": "chemistry.stackexchange", "id": 12838, "tags": "stereochemistry, carbonyl-compounds, isomers, optical-properties" }
A database of books, part 1
Question: Backstory I collect books. A lot of books. Whitepapers, (programming) cookbooks, transcripts, important letters, overviews, you name it. Usually in PDF, otherwise my house would be too small. To keep track of those books, I use an arcane collection system based on notes referring to notes. This obviously has some scalability problems, so for the past year I've been trying to build a proper, relational database for it. It still isn't finished... However, it's a project in parts and the first part has been ready many moons ago. It also hasn't changed much over that time, so I guess it's ready for review. The problem To have a database of books, you want at least 3 things:

1. Generate a list from a set of books. A directory goes in, a text file containing all books comes out. Eventually, a program will have to read it to extract keywords from it. I'll save that for a later version. The pre-loader.
2. Insert or update the list into a database. Create the database if it doesn't exist yet. The CRUDder.
3. Produce a view into the database for the user. The viewer.

So, I decided to split those 3 things and give each their own program. Eventually it may end up in one, but with explicit separation. Later on it may also need an automated maintenance program as a 4th module. The database (3NF) should at least contain the name, authors and publishing year of each book. Later versions may include the size in pages, ISBN and keywords. This program does not yet support this, but it should've been written with those features in mind. The pre-loader The pre-loader will take a directory of PDFs. The test set is 222 items large, the final dataset much larger. For now, only PDFs are supported.
If I'd ever want to change that, all I'd have to do is change the following:

if file.lower().endswith(FILE_EXTENSION):

into:

for file_extension in FILE_EXTENSIONS:
    if file.lower().endswith(file_extension):

and break out of the for after handling the first successful hit, but since I'm not sure I'll ever do that, the current, simple version will suffice. All PDFs should be in the following format:

String_String_4digits.pdf

An example book would look like this:

SomeLongBookTitle_AuthorName1&AuthorName2_2017.pdf

Where first there's the title of the book, second the authors of the book and last the publishing year. Author names should be separated by an ampersand; any other character except an underscore will be considered part of an author's name. If the author of a book is unknown, it will be UNK. If a publishing year is unknown, it will be 0000. Books published before the year 1 anno Domini are not supported. If the file is not a PDF, it will be unidentified. Such files may or may not be a problem, but a list is kept. For example, a link may have been created instead of a copy, resulting in a .lnk instead of a .pdf. File names can be very long. Example:

FullyIntegratedMultipurposeLow-CostK-BandFMCWRadarModuleWithSub-milimeterMeasurementPrecision_PiotrKaminski&IzabelaSlomian&KrzysztofWincza&SlawomirGruszczynski_2015.pdf

Book name: 93 characters. File name (incl. extension): 168 characters.

EvaluationOfBatteryAndBatteryChargerShort-CircuitCurrentContributionsToAFaultOnTheDcDistributionSystemAtANuclearPowerPlant_BrookhavenNationalLaboratory_2015.pdf

Book name: 122 characters. File name (incl. extension): 160 characters. There can be a lot of authors:

TROPOMIATBDOfTheTotalAndTroposphericNO2DataProducts_JHGMVanGefffen&KFBoersma&HJEskes&JDMaasakkers&JPVeefkind_2016.pdf

5 authors. (That, by the way, is an unfortunate example of improper naming. An artefact of being inconsistent and lacking a thorough naming scheme, but irrelevant for the database.)
The current idea is the program will deliver a list formatted like this:

EffectiveSTL ScottMeyers 0000
ElectricalEngineersReferenceBook MALaughton&DFWarne 2003
ElectroMagneticComplianceWhatIsEFT Microchip 2004

A better approach would perhaps include splitting the authors already so the next step doesn't have to do it. Seems the right thing to do. Perhaps even turning the whole list into a JSON file instead of a text file. Code

import os

TARGET_DIRECTORY = "Books"
FILE_EXTENSION = ".pdf"
PRINT_UNIDENTIFIED_LIST = True


def print_result(good_count, bad_count, unidentified_list):
    length_unidentified_list = len(unidentified_list)
    print("Succesful books: {0}".format(good_count))
    print("Bad books: {0}".format(bad_count))
    print("Unidentified files: {0}".format(length_unidentified_list))
    if PRINT_UNIDENTIFIED_LIST and (length_unidentified_list > 0):
        print(length_unidentified_list)
        print(unidentified_list)


def make_list():
    with open("list.txt", 'w') as output_file:
        good_count, bad_count = 0, 0
        unidentified_list = []
        for file in os.listdir(TARGET_DIRECTORY):
            # Check against .lower() to also catch
            # accidental uppercased file extension
            if file.lower().endswith(FILE_EXTENSION):
                try:
                    book_name, author_name, publishing_year = \
                        file[:-len(FILE_EXTENSION)].split("_")
                    #print(book_name, author_name, publishing_year)
                    output_file.write(
                        "{0} {1} {2}\n".format(
                            book_name, author_name, publishing_year
                        )
                    )
                    good_count += 1
                except ValueError:
                    # Added indentation for improved UX
                    print(" Could not handle the following file: {0}".format(file))
                    print(" Check whether it's name is properly formatted.")
                    bad_count += 1
            else:
                unidentified_list += [file]
    print_result(good_count, bad_count, unidentified_list)


def main():
    make_list()


if __name__ == '__main__':
    main()

Yes, I know 1 of my lines is slightly too long according to PEP8. That's the result of the make_list function failing the Single Responsibility Principle and lack of helper functions.
If that gets fixed, the offending line will probably be fixed in the process. That's also the reason there are no docstrings yet. The function structure as-is is flawed, even though the current function names are very descriptive. I've tried putting the results in a dictionary, like this:

result = {
    'good_count': 0,
    'bad_count': 0,
    'unidentified_list': []
}

But it didn't improve readability a bit. I've also tried putting everything in a module and using more specialized functions. That got very messy, very fast. Anything and everything is up for review, including the design of the whole process. The current execution time of the program is low enough that I'm not worried, even for larger amounts of data. Answer: I would:

- Change make_list to return good_count, bad_count and unidentified_list, rather than call print_result. This allows you to change the way that they are displayed at a later date.
- If you invert file.lower().endswith(FILE_EXTENSION), you can use a guard clause, reducing indentation and improving readability.
- You don't need to use the try. Your format can be replaced with ' '.join(), with an addition of '\n'. You can check if file[:-len(FILE_EXTENSION)].split("_") has a length of three.
- You should stick to one apostrophe type, unless switching to another makes the string easier to read. Take open("list.txt", 'w'), as opposed to 'it' vs. "it's".
- If you don't need to handle nested file types, such as .tar.gz, then you can use os.path.splitext to simplify getting the name and extension.
Creating:

import os

TARGET_DIRECTORY = "Books"
FILE_EXTENSION = ".pdf"
PRINT_UNIDENTIFIED_LIST = True


def print_result(good_count, bad_count, unidentified_list):
    print("Succesful books: {0}".format(good_count))
    print("Bad books: {0}".format(bad_count))
    print("Unidentified files: {0}".format(len(unidentified_list)))
    if PRINT_UNIDENTIFIED_LIST and unidentified_list:
        print(len(unidentified_list))
        print(unidentified_list)


def make_list():
    with open("list.txt", "w") as output_file:
        good_count, bad_count = 0, 0
        unidentified_list = []
        for file in os.listdir(TARGET_DIRECTORY):
            name, ext = os.path.splitext(file)
            if not ext.lower() == FILE_EXTENSION:
                unidentified_list.append(file)
                continue
            values = name.split("_")
            if len(values) == 3:
                output_file.write(" ".join(values) + "\n")
                good_count += 1
            else:
                # Added indentation for improved UX
                print(" Could not handle the following file: {0}".format(file))
                print(" Check whether it's name is properly formatted.")
                bad_count += 1
    return good_count, bad_count, unidentified_list


def main():
    good_count, bad_count, unidentified_list = make_list()
    print_result(good_count, bad_count, unidentified_list)


if __name__ == '__main__':
    main()

I had a project where I was using the same sort of workflow as you: I had a list of input, and needed to split the input into different handlers. And so I started to use coroutines.
I found them very helpful in these situations, and so an example of a basic usage of these in your application could be:

import os
from functools import wraps

TARGET_DIRECTORY = "Books"
FILE_EXTENSION = ".pdf"
PRINT_UNIDENTIFIED_LIST = True


def coroutine(fn):
    @wraps(fn)
    def inner(*args, **kwargs):
        cr = fn(*args, **kwargs)
        next(cr)
        return cr
    return inner


@coroutine
def good_files():
    count = 0
    try:
        with open("list.txt", "w") as output_file:
            while True:
                _, values = yield
                output_file.write(" ".join(values) + "\n")
                count += 1
    except GeneratorExit:
        print("Succesful books: {0}".format(count))


@coroutine
def bad_files():
    count = 0
    try:
        while True:
            file, _ = yield
            print(" Could not handle the following file: {0}".format(file))
            print(" Check whether it's name is properly formatted.")
            count += 1
    except GeneratorExit:
        print("Bad books: {0}".format(count))


@coroutine
def unidentified_files():
    files = []
    try:
        while True:
            file, _ = yield
            files.append(file)
    except GeneratorExit:
        print("Unidentified files: {0}".format(len(files)))
        if PRINT_UNIDENTIFIED_LIST and files:
            print(len(files))
            print(files)


def make_list(good_files, bad_files, unidentified_files):
    for file in os.listdir(TARGET_DIRECTORY):
        name, ext = os.path.splitext(file)
        values = name.split("_")
        handler = unidentified_files
        if ext.lower() == FILE_EXTENSION:
            handler = (good_files if len(values) == 3 else bad_files)
        handler.send((file, values))
    for handler in (good_files, bad_files, unidentified_files):
        handler.close()


def main():
    make_list(good_files(), bad_files(), unidentified_files())


if __name__ == '__main__':
    main()
{ "domain": "codereview.stackexchange", "id": 28546, "tags": "python, strings, python-2.x, formatting, iteration" }
Refutation of Darwin's Random Evolution Theory
Question: I saw this refutation online of Darwin's Random Evolution Theory and cannot see any holes in the logic. Can anyone crack this simple refutation? Refutation of the Theory of Random Evolution As for the theory of evolution, which says that living things evolved progressively from mud - first organism - bacteria - fish - animals - humans through tiny random mutations which were advantageous and naturally selected; there's a lot to say on this. All currently living life forms appear to be highly related, sharing the same DNA system and cell structure. This would suggest a common first ancestor as the theory suggests (or better yet - one Designer); however, the most obvious flaw with the theory is that the first organism must have had highly sophisticated intelligent design. There is a minimum requirement for even the most primitive possible life form, without which it could not possibly survive. Minimum Requirements for First Organism The first organism must have a system of producing and/or sourcing energy along with subsystems of distribution and management of that energy which interact and work together, otherwise it cannot power critical tasks such as reproduction. It must have a system of reproduction, which necessitates pre-existing subsystems of information storage (DNA), information copying, and information reading/processing which interact with each other and work together. This reproductive system is dependent on a power source, so it must be coordinated with the power system. The reproductive system must also copy/rebuild all critical infrastructure such as the power system and the reproduction system, along with the "circuitry" and feedback mechanisms between them, otherwise the child organism will be dead. It must have a growth system, otherwise the organism will reduce itself every time it reproduces and vanish after a few generations.
This growth system necessitates subsystems of ingestion of materials from the outside world, processing of those materials, distribution, and absorption of those materials to the proper place, building the right thing at the right place and in the right amount. It must also have an expulsion system for waste materials. The growth system must also be coordinated with the reproduction system. Otherwise, if the reproduction trigger happens faster than the growth, it will reduce size faster than it grows in size and vanish after a few generations. The growth system also requires connection to the power infrastructure to perform its tasks. All the "circuitry", signaling, and feedback infrastructure which allows the different systems and subsystems to coordinate together and work together must be in place before the organism can "come alive". The reproduction system won't work without coordination with the growth and power systems. Likewise, the power system by itself is useless without the growth and reproduction systems and cannot survive. Only when all the "circuitry", etc. is in place and the power is turned on is there hope for the hundreds of interdependent tasks to start working together. Otherwise, it is like turning on a computer which has no interconnections between the power supply, CPU, memory, hard drive, video, operating system, etc - nothing to write home about. We assume it originated in water since gas is too unstable and solid is too static. If so, the organism must be contained by some kind of membrane otherwise its precious contents will drift away in the water due to natural diffusion or drifting of water due to temperature variations in the water from sunlight, etc. or from heat generated through its own power, or wind, moon, etc. If so, this makes the assembly of such an organism more problematic, since it would need to be closed shut before it can build itself in a stable way. 
Yet, to build itself it would need to be open for a long time until all systems are built and interconnected. From the above minimum requirements it is clear that the simplest possible surviving organism is by no means simple. You would need thousands of different proteins/lipids etc., in the right proportions, all intricately folded and actively interacting with each other and with sophisticated organelles. Contemplate this and you will see the necessary complexity of this primitive organism is far more sophisticated than anything modern technology has ever produced. Even the most sophisticated Intel CPU is mere child's play compared with the design of such an organism. Answer: I like @5th's answer but I thought it might be worthwhile to clarify some points and pull the logic out a bit more. First, there is some contesting of the overall logic: it assumes all these qualities of life show up at once. If they had to, it probably is true that life could not evolve, but the general assumption is that there is a path to do so. Let me try to be convincing of this. Dawkins's theory of the replicator is axiomatic and not entirely helpful at this point, but it does have a lot to say. Once you have self-replicating systems, the rest of biology, evolution and selection does seem reasonable, even with the emergence of complicated structures like the eye, the flagellar motor and other seemingly implausible multi-component systems, which appear to magically come together and do something new in the course of evolution. Dawkins wrote a book on this topic, "Climbing Mount Improbable", which tries to answer this question for several cases: to outline why biological systems can and do seem to leap to new abilities, organs and unprecedented assemblies. In fact they do not, but there are so many possible trials (quintillions?) over the billions of years that eventually some adaptive path to these features exists, which has usually been found when we go looking for it.
As such there is no clear reason why new cases of this argument need to carry extra weight. The argument from statistically improbable combinations has been tried several times and remains unconvincing with respect to evolution. After this, I think it would be fair to say that the origins of life, although becoming clearer, still have many details to be resolved, but progress is being made. Precambrian life was very different, as seen in fossil records. We can see that at one time there were only bacteria and single-celled organisms. There is evidence that at one time there appear to have been only simple, very large single-celled organisms that lay in shallow water and soaked up the sunlight, growing and growing. We can see that going back about 1.2-2 billion years, life shows a pretty clear progression where selection and adaptation create complexity in living things and transition life from exclusively chemotrophs (metabolizing sulfur from geological processes) to anoxic (no-oxygen environment) photosynthesis, and then to oxygen-breathing organisms. In the precellular world, the RNA World hypothesis describes how all that life needed was one sort of molecule - RNA. That is chemically a very simple replicator. Evidence is pretty strong that the RNA world was possible. Many chemists are interested in proving that RNA can be created more or less spontaneously from early-earth chemical environments, and recent experiments show that spontaneous soups containing RNA are a reasonable picture of earth at one point. All this being said, it's probably true that exactly what happened and how it happened will never be 100% known, and the argument can be made that something unnatural happened at some point. But I hope you can also see that while the argument may still hold some water, the need for any magical interventions in the action of life is retreating to a point of origin - the beginning of time.
Scientific inquiry is very much focused on filling in the picture of the origins of life on earth, though, and given the success of finding probable paths to the origins of life, the odds actually are in favor of making a statement of the physical mechanisms of life's origins. Just a couple of side notes: (1) biologists are not stuck on everything being random; there is lots of work on mechanisms evolved to adapt in non-random ways. (2) Darwin is not a sacred cow - just about any working biologist would love to show Darwin was wrong even in a little way. The same is true for Einstein, Newton and the rest. After a few years of trying, though, what one usually finds is that it's kinda difficult. It's not a matter of who makes the argument or what it means; it has to be convincing, and that turns out to be difficult.
{ "domain": "biology.stackexchange", "id": 791, "tags": "evolution" }
Ising Ferromagnet: Spontaneous symmetry breaking or not?
Question: In explaining/introducing second-order phase transitions using the Ising system as an example, it is shown via mean-field theory that there are two magnetized phases below the critical temperature. This derivation is done for zero external magnetic field $B=0$ and termed spontaneous symmetry breaking. The magnetic field is then called the symmetry-breaking field. But if the symmetry breaking occurs "spontaneously" at zero external field, why do we need to call the external magnetic field the symmetry-breaking field? I am confused by the terminology. Answer: This is mostly a question of definitions: Spontaneous symmetry breaking occurs when the underlying laws of a physical system have a symmetry, but the ground state does not. For an Ising system with $B=0$, $$H = \sum_{i,j} J_{ij} s_i s_j$$ we can see explicitly that the energy of a state $\{s_i\}$ is precisely the same as the energy of the state with every spin flipped, $\{-s_i\}$. Nevertheless, the ground state does not have this symmetry - all of the spins are either up or down! There are a lot of subtleties to this idea, especially in how it relates to the limit of infinite system size - I really recommend reading Goldenfeld's Lectures on Phase Transitions and the Renormalization Group to understand this more deeply. By contrast, with $B \neq 0$, the symmetry is explicitly broken - the Hamiltonian $$H = \sum_{i,j} J_{ij} s_i s_j + B \sum_i s_i$$ does not have the $s\to-s$ symmetry. These are two different ideas.
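A small numerical check of this (the couplings and the spin configuration below are made up for illustration, using the answer's sign conventions): with $B=0$ the energy is invariant under a global spin flip, while a nonzero field term changes it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
J = rng.normal(size=(n, n))                 # arbitrary couplings (illustrative)
s = np.array([1, 1, -1, 1, -1, 1], float)   # configuration with nonzero magnetization

def energy(s, B=0.0):
    """H = sum_ij J_ij s_i s_j + B sum_i s_i, as in the answer."""
    return float(s @ J @ s + B * s.sum())

# With B = 0, flipping every spin leaves the energy unchanged ...
assert energy(s, 0.0) == energy(-s, 0.0)
# ... but the field term B * sum(s_i) breaks that symmetry
# (the two energies differ by 2 * B * sum(s_i)).
assert energy(s, 1.0) != energy(-s, 1.0)
```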
{ "domain": "physics.stackexchange", "id": 7496, "tags": "phase-transition, symmetry-breaking, ising-model" }
Which quantum reality will be seen in a far away galaxy
Question: I recently watched Brian Greene documentaries on parallel universes and am trying to wrap my brain around multiple quantum universes that form at every event, and Einstein's relativity, in which past, present and future coexist in space-time, so we can observe the future of a far away galaxy from Earth by moving. https://www.youtube.com/watch?v=MO_Q_f1WgQI - Around the 7th minute - explains how an alien moving towards Earth has a "now slice" cutting into Earth's future. My question is which quantum universe will be seen, because future events in that galaxy would create multiple such universes. My question is regarding the many-worlds interpretation of quantum mechanics, according to which multiple alternate futures are possible: https://en.m.wikipedia.org/wiki/Many-worlds_interpretation The question has been answered by James K below. Answer: So, the presenter explains that two observers who are not moving relative to one another will agree on the meaning of "now". However if one observer is moving, then they will no longer agree, and what one considers to be now will be in the past or future of the other. He observes that a sufficiently distant observer, in another galaxy 100 million light years away, moving at "bicycle" speed relative to us, would have a 200-year discrepancy in their understanding of "now". But this alien would have absolutely no way of observing our universe now. They could not see us in our "now", nor in their "now" of 200 years ago, because they are 100 million light years away. They can only observe how our galaxy was 100 million years ago. They can certainly not observe our galaxy in the future, even if they travel towards us at speeds approaching the speed of light. So the explanation of different "now" is about agreement on clocks, not about actual observations. I think the program then errs by treating Special Relativity as a full description of nature, rather than as a model.
Relativity is a classical theory, in the sense that if you know the initial state, you can predict the future state. There is no uncertainty principle in general relativity. It is correct to say that in Relativity the future is as real as the past. It is unclear if the same is true in Quantum Mechanics; it depends on how you interpret the maths. Whether the future exists or not cannot be answered by theory. Since different interpretations of QM are based on the same maths, there is no way of testing which interpretation of quantum mechanics is "correct", or even whether calling an interpretation "correct" has any scientific meaning. In the "many worlds" interpretation, the notion of a single space-time which you can take slices of is lost completely. I might note that I'm not a great fan of the explanations in that series, which seem to me to be confusing on a number of levels, and are more aimed at creating "wow" than being clear. For example, the mixed metaphor of "slices of bread and aliens riding bikes" seems to have confused you into thinking that there are aliens who are already seeing how our planet will be 200 years from now.
{ "domain": "astronomy.stackexchange", "id": 2731, "tags": "space-time, multiverse, quantum-mechanics" }
How to control cartesian velocity?
Question: I'm using a Fanuc M10iA robot from the ROS-Industrial package and I would like to generate a trajectory for this robot from a cartesian path (i.e. a list of cartesian positions and velocities). Currently I use MoveIt! and move_group->computeCartesianPath() to generate the robot trajectory. The robot moves, but I don't know how to specify the velocities for the waypoints. I want to move the robot with a given cartesian velocity and change this velocity at any point in the trajectory. How can I do this with ROS? This is the code that I use to move the robot with MoveIt (example package: https://gitlab.com/InstitutMaupertuis/fanuc_motion/tree/ros_answer_question):

#include <vector>

#include <ros/ros.h>
#include <eigen_conversions/eigen_msg.h>
#include <moveit/move_group_interface/move_group_interface.h>
#include <moveit_msgs/ExecuteKnownTrajectory.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "robot_trajectory");
  ros::NodeHandle node;
  ros::AsyncSpinner spinner(1);
  spinner.start();

  moveit::planning_interface::MoveGroupInterface group("manipulator");
  group.setPoseReferenceFrame("/base");
  group.setPlanningTime(1);

  // Create a trajectory (vector of Eigen poses)
  std::vector<Eigen::Affine3d, Eigen::aligned_allocator<Eigen::Affine3d> > way_points_vector;
  Eigen::Affine3d pose(Eigen::Affine3d::Identity());

  // Square belonging to XY plane
  pose.linear() << 1, 0, 0,
                   0, -1, 0,
                   0, 0, -1;
  pose.translation() << 0.8, 0.1, -0.1;
  way_points_vector.push_back(pose);
  pose.translation() << 1.0, 0.1, -0.1;
  way_points_vector.push_back(pose);
  pose.translation() << 1.0, 0.3, -0.1;
  way_points_vector.push_back(pose);
  pose.translation() << 0.8, 0.3, -0.1;
  way_points_vector.push_back(pose);
  way_points_vector.push_back(way_points_vector[0]);

  // Copy the vector of Eigen poses into a vector of ROS poses
  std::vector<geometry_msgs::Pose> way_points_msg;
  way_points_msg.resize(way_points_vector.size());
  for (size_t i = 0; i < way_points_msg.size(); i++)
    tf::poseEigenToMsg(way_points_vector[i], way_points_msg[i]);

  moveit_msgs::RobotTrajectory robot_trajectory;
  group.computeCartesianPath(way_points_msg, 0.05, 15, robot_trajectory);

  // Execute trajectory
  ros::ServiceClient executeKnownTrajectoryServiceClient =
      node.serviceClient<moveit_msgs::ExecuteKnownTrajectory>("/execute_kinematic_path");
  moveit_msgs::ExecuteKnownTrajectory srv;
  srv.request.wait_for_execution = true;
  srv.request.trajectory = robot_trajectory;
  executeKnownTrajectoryServiceClient.call(srv);

  ros::waitForShutdown();
  spinner.stop();
  return 0;
}

Originally posted by AndresCampos on ROS Answers with karma: 21 on 2017-10-06 Post score: 2 Original comments Comment by sharath on 2018-05-22: Hi, I was looking for something similar to what you have asked for controlling the cartesian velocity of the waypoints. I searched but it seems its not yet done. I want to know if you were able to solve it? Answer: MoveIt currently does not provide a method to request specific end-effector velocities. We would welcome such an addition and it is clearly feasible. You can generate a valid time-parameterization for your cartesian trajectory using the moveit_core/trajectory_processing module. These methods generate velocities according to the maxima specified for each joint. If you want to attain a constant end-effector velocity you can rescale the timing relative to the slowest segment. Originally posted by v4hn with karma: 2950 on 2017-10-10 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by kwh5484 on 2019-07-11: Is there still interest in adding this to the MoveIt source code? If so, how would you like this functionality to be added to the overall MoveIt architecture? I've added a function to moveit_core/trajectory_processing/iterative_time_parameterization.cpp that rescales the timing based on a desired cartesian velocity for a specified end effector link.
I was planning on using the MoveGroupInterface to pass the desired velocity and end effector link to the trajectory. I am not a MoveIt expert by any means, so if there is an alternate way you would like this implemented please let me know. Alternatively, I could implement it separately from any time parameterization after the trajectory has been generated from the move_group.plan function call. This way, no existing data structures would need to be changed. Comment by gvdhoorn on 2019-07-11: Speaking for myself: yes, there would be interest. But a comment on an already answered ROS Answers question is not necessarily the best place to discuss/announce this. I would suggest posting an issue on the MoveIt issue tracker on GitHub.
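The rescaling idea from the accepted answer can be sketched outside of MoveIt. The helper below is hypothetical (not a MoveIt API): it assigns time stamps to Cartesian waypoints so that a piecewise-linear path is traversed at a chosen constant speed. A real implementation would still need to re-check joint velocity/acceleration limits afterwards, e.g. via the moveit_core/trajectory_processing time parameterization mentioned in the answer.

```python
import math

def retime_for_constant_speed(points, speed):
    """Assign time stamps so the end effector moves at a constant Cartesian
    speed along a piecewise-linear path. `points` is a list of (x, y, z)
    waypoints; returns matching time_from_start values in seconds.
    (Sketch only: joint limits are deliberately ignored here.)"""
    times = [0.0]
    for p0, p1 in zip(points, points[1:]):
        seg = math.dist(p0, p1)            # straight-line segment length
        times.append(times[-1] + seg / speed)
    return times

# The 0.2 m square from the question, traversed at 0.1 m/s:
square = [(0.8, 0.1, -0.1), (1.0, 0.1, -0.1), (1.0, 0.3, -0.1),
          (0.8, 0.3, -0.1), (0.8, 0.1, -0.1)]
ts = retime_for_constant_speed(square, 0.1)
print([round(t, 6) for t in ts])  # [0.0, 2.0, 4.0, 6.0, 8.0]
```

Changing the commanded Cartesian velocity at a given waypoint then amounts to using a different `speed` for the segments after that waypoint.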
{ "domain": "robotics.stackexchange", "id": 29021, "tags": "ros, moveit, linear-velocity, fanuc" }
Can I use IIR or FIR to get a sliding mean?
Question: My output signal shall be computed by (meta code):

o = 0  # or some arbitrary initial value
for i in input:
    o = (o * 99 + i) / 100
    print o

I call this a "sliding mean", but maybe another term is established for this. (If so, mentioning this could help me research this better ;-) Is there a way to achieve this using the a / b coefficient arrays of IIRs or FIRs? What would they look like? I'm aiming at a solution in Python / scipy.signal. Or is there another established way to achieve this which I can find in libraries like numpy, scipy, pandas, etc.? Answer: It corresponds to an exponentially weighted moving average (EWMA) filter with parameter $\alpha = 0.01$. The difference equation can be expressed as: $$y(n)-(1-\alpha)y(n-1)=\alpha x(n)$$ $$y(n)-0.99y(n-1)=0.01 x(n)$$ Thus it's an IIR filter. You can easily get the coefficients from the difference equation. $$Y(z)-0.99z^{-1}Y(z)=0.01X(z)$$ $$\frac{Y(z)}{X(z)}=\frac{0.01}{1-0.99z^{-1}}$$ Therefore, a = [1 ; -0.99] and b = 0.01.
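To make the correspondence concrete, here is a quick check (plain Python; the input values are arbitrary) that the loop from the question computes exactly the IIR difference equation $y(n) = 0.01\,x(n) + 0.99\,y(n-1)$, i.e. what scipy.signal.lfilter([0.01], [1, -0.99], x) would return:

```python
import random

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(500)]  # arbitrary test input

alpha = 0.01

# The loop from the question: o = (o*99 + i)/100
o, loop_out = 0.0, []
for i in x:
    o = (o * 99 + i) / 100
    loop_out.append(o)

# The IIR difference equation y[n] = alpha*x[n] + (1-alpha)*y[n-1],
# i.e. b = [0.01], a = [1, -0.99] as derived in the answer.
y, iir_out = 0.0, []
for xn in x:
    y = alpha * xn + (1 - alpha) * y
    iir_out.append(y)

assert max(abs(u - v) for u, v in zip(loop_out, iir_out)) < 1e-9
print("loop output matches the IIR difference equation")
```

In pandas the same filter is available as an exponentially weighted mean, which is where the "EWMA" name comes from.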
{ "domain": "dsp.stackexchange", "id": 6007, "tags": "discrete-signals, filter-design, lowpass-filter, infinite-impulse-response, finite-impulse-response" }
Where does the energy of the electric cable come from?
Question: I'm stuck on one really simple example, I can't figure out what's happening to energy here... (This is not homework.) Let's consider an uncharged electric cable; we'll model it by an infinite cylinder on the axis $(Oz)$ with radius $a$ and conductivity $\gamma$ with uniform, constant current $I$, and we'll obviously use cylindrical coordinates $(\vec{e_r},\vec{e_\theta},\vec{e_z})$. If I haven't made any mistake, we should have the electric and magnetic fields $\vec{E},\vec{B}$ as follows: $\vec{E}=\begin{cases}\frac{1}{\gamma}\frac{I}{\pi a^2}\vec{e_z}&r<a\\\vec{0}& r>a\end{cases}$, $\vec{B}=\begin{cases}\frac{\mu_0I}{2\pi}\frac{r}{a^2}\vec{e_\theta}&r\le a\\\frac{\mu_0I}{2\pi}\frac{1}{r}\vec{e_\theta}&r\ge a\end{cases}$ Thus the Poynting vector is $\vec{\Pi}=\begin{cases}\frac{-r}{2\pi^2\gamma a^4}I^2\vec{e_r}&r< a\\\vec{0}&r> a\end{cases}$ Hence, if we consider a lateral surface $(\mathcal{S})$ oriented towards the inside and a height $h$ of cable, $$\iint_{(\mathcal{S})}\vec{\Pi}\cdot\vec{dS}=\begin{cases}\frac{h}{\pi\gamma a^2}I^2&r< a\\0&r> a\end{cases}$$ That expression is most puzzling. The energy comes from the side, however there is no energy outside... where does the energy come from? Something weird is going on here. At first I thought it was because we had made the hypothesis of an infinite cable, but it doesn't seem to be related at all. To allow the energy flux, there would need to be charges inside the cable, which is initially not charged. Thus I thought there might be some effect like the Hall effect with local charges appearing, but once again I do not see why that would be. Answer: There is a mistake in your working.
The surface integral of the Poynting vector is $$ \int \vec{\Pi} \cdot d\vec{S} = \Pi\ 2\pi a h = \frac{a I^2}{2\pi^2 \gamma a^4} 2\pi a h = \frac{I^2 h}{\pi \gamma a^2}$$ As pwf correctly points out, this power matches the power dissipated as Ohmic heating, $VI$, where $$V I = E h I = \frac{I^2 h}{\pi \gamma a^2}$$ You say "this power does not appear to me to come into the wire from outside". The Poynting vector immediately outside the wire is zero, as is the flux into the wire from outside. You also say there is "no energy outside... [the wire]". This is not relevant because there is no energy flux into the wire. It is also not true, because the energy density outside the wire is given by $B^2/2\mu_0$. I agree that it requires some thought to work out physically why the Poynting vector must be pointed inwards, but there is no conservation of energy problem. The energy dissipated per unit volume is a constant; as we consider cylindrical shells moving out from the centre, each shell has a volume proportional to the radius, $dV = 2\pi rh\ dr$. Thus as we move into the wire from the outside, there needs to be a gradient in the flux deposited by the Poynting vector, such that more energy needs to be deposited in the outer shells. This tells you that the radial component of the Poynting vector must depend linearly on $r$. It is inwards because the Poynting theorem tells you that the divergence of the Poynting vector, in cases where the energy density of the fields is constant (true for static fields), will be given by minus $\vec{J} \cdot \vec{E}$, so will always be negative for static fields in a standard conductor.
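As a sanity check of the bookkeeping above, the snippet below (illustrative SI numbers; a copper-like conductivity is an assumption, not from the original post) evaluates the inward Poynting flux through the lateral surface at $r=a$ and compares it with the Ohmic dissipation $I^2 h/(\pi\gamma a^2)$:

```python
import math

mu0   = 4 * math.pi * 1e-7   # vacuum permeability, H/m
I     = 3.0                  # current, A (arbitrary)
a     = 1e-3                 # wire radius, m (arbitrary)
gamma = 5.96e7               # conductivity, S/m (roughly copper)
h     = 0.5                  # length of wire considered, m

E = I / (gamma * math.pi * a**2)   # axial E-field inside the wire
B = mu0 * I / (2 * math.pi * a)    # azimuthal B-field at the surface r = a
S = E * B / mu0                    # Poynting magnitude at the surface (radially inward)

flux_in = S * 2 * math.pi * a * h              # flux through the lateral surface
ohmic   = I**2 * h / (gamma * math.pi * a**2)  # V*I dissipated in a length h

print(math.isclose(flux_in, ohmic))  # True: the inward flux equals the Ohmic heating
```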
{ "domain": "physics.stackexchange", "id": 19607, "tags": "electromagnetism, energy, electric-current" }
How does a weight connected to a string over a pulley pulling a cart apply force onto the cart?
Question: How does the mass hanging down on the bottom (assuming a frictionless environment) apply a pull force to the cart? How does the weight of the object transfer to the string (tension force and maybe the pulley does something?) which pulls the cart? Answer: I wonder if the following diagram is helpful: There is a force from the pulley on the string - that is what allows the tension to "turn the corner" so the force from the weight (which is downwards) is turned into a horizontal force (tension that can pull the cart).
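A minimal force-balance sketch of the setup (hypothetical masses; a massless, inextensible string and an ideal pulley are assumed, none of which appear in the original post): the tension is the same on both sides of the pulley, which is how the hanging weight ends up pulling the cart horizontally.

```python
g = 9.81   # m/s^2
M = 2.0    # cart mass, kg (arbitrary)
m = 0.5    # hanging mass, kg (arbitrary)

# Newton's second law for each body, coupled by the string tension T:
#   cart (horizontal):       M*a = T
#   hanging mass (vertical): m*a = m*g - T
# Solving the pair gives:
a = m * g / (M + m)       # common acceleration of cart and hanging mass
T = M * m * g / (M + m)   # string tension

print(round(a, 3), round(T, 3))  # 1.962 3.924
```

Note that $T < mg$: the hanging mass accelerates downward, so the string transmits slightly less than the full weight to the cart.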
{ "domain": "physics.stackexchange", "id": 44168, "tags": "newtonian-mechanics, forces, newtonian-gravity, free-body-diagram, string" }
Iterating 2D array in Swift
Question: I have a situation where I needed to convert an NSNumber 2D array to a Float 2D array, so I did it like this:

var numArr = [[NSNumber]]() // fill values
let floatArr = numArr.map { $0.map { $0.floatValue } }

It's working fine, no issues! But I'm wondering if there's any better way to handle this? Answer: Your code is fine, an equivalent method in Swift 3 would be

let floatArr = numArr.map { $0.map { Float($0) } }

You get the same result with less code by using the bridging cast from NSNumber to Float, which works for (nested) arrays as well:

if let floatArr = numArr as? [[Float]] { ... }

Actually I don't know of any case (in Swift 3) where this cast can fail. Things change a bit in Swift 4, as a consequence of SE 0170 - NSNumber bridging and Numeric types. Your code still works fine, and is equivalent to

let floatArr = numArr.map { $0.map { Float(truncating: $0) } }

The truncating initializers were introduced because not every NSNumber can be converted to a scalar value without losing information; there are also failable init?(exactly:) initializers. Now the bridged cast can fail for numbers which are not exactly representable as a Float, such as Double(1.1), or for numbers which exceed the range of Float. (The first case is currently discussed at https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20170612/037499.html and may change in the future.) Example:

let numArr = [
    [ NSNumber(value: 1.1), NSNumber(value: 2.0) ],
    [ NSNumber(value: Double.greatestFiniteMagnitude) ],
]
if let floatArr = numArr as? [[Float]] {
    print(floatArr)
} else {
    print("failed")
}

This produces [[1.10000002, 2.0], [inf]] in Swift 3, but fails in Swift 4. To summarize: Your code is fine. If you don't care about "losing precision" or overflows then there is no need to change anything. In Swift 4 you can write it as Float(truncating: $0) to emphasize that the result might not be exactly identical to the given numbers.
If you care then you can use the new "exactly" initializers, or the bridged cast as? [[Float]] in Swift 4.
{ "domain": "codereview.stackexchange", "id": 26115, "tags": "array, swift, iteration" }
Conservation of Electromagnetic Energy-Momentum Tensor in GR
Question: In the following, Planck units are used. The Electromagnetic Energy-Momentum Tensor is $T^{\mu\nu} = \frac{1}{4 \pi} \left[ F^{\mu \alpha}F^\nu{}_{\alpha} - \frac{1}{4} g^{\mu\nu}F_{\alpha\beta} F^{\alpha\beta}\right]$ where $F^{\mu \nu}$ denotes the Faraday Tensor. From the second Bianchi identity we know that the 4-divergence of the Einstein Tensor $G^{\mu \nu}$ is null, i.e. $\nabla_{\mu} G^{\mu \nu}=0$. Putting this in the Einstein Equations $G^{\mu \nu}=8\pi T^{\mu \nu}$ we also have the conservation of the energy-momentum tensor: $\nabla_{\mu} T^{\mu \nu}=0$. Now, the problem is that for the Electromagnetic Energy-Momentum Tensor the 4-divergence is not null, but we have instead $\nabla_{\mu} T^{\mu \nu}=F^{\nu\mu}j_{\mu}$, with $j^{\nu}$ the 4-current. So is it not compatible with the Einstein Equations? Am I missing something? Answer: As soon as you have a non-zero current $J$, the EM field is not isolated and it is not the complete source of the gravitational field. On the right-hand side of the Einstein equations you should insert the whole stress-energy tensor and not only the EM one. With the added part you find a consistent system of equations.
{ "domain": "physics.stackexchange", "id": 70777, "tags": "electromagnetism, general-relativity, conservation-laws, stress-energy-momentum-tensor" }
Chiral Condensate and Parity
Question: Chiral condensation is known to be characterized by the nontrivial expectation value of $$\langle \chi_{ai}^{\alpha} \xi_{\alpha j}^a\rangle= U_{ij}$$ where $\chi$ is the left-handed Weyl fermion and $\xi$ is the right-handed Weyl fermion. $\alpha=1,...,N$ is the color index, $a=1,2$ is the spinor index, and $i,j=1,...,N_f$ is the flavor index. I used the notation from Mark Srednicki: https://web.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf. However, under parity $P$, using $$P^{-1} \chi_{a}P= i (\xi^{\dagger})^{\dot a}$$ $$P^{-1} \xi_{a}P= i (\chi^{\dagger})^{\dot a}$$ it is straightforward to check that parity $P$ maps $U$ to $U^\dagger$. So the condensate seems to spontaneously break the parity symmetry unless $U$ is Hermitian. So my question is: shall we treat $U$ as the chiral condensate? Or shall we just treat the Hermitian part of $U$, i.e., $U+ U^\dagger$, as the chiral condensate? Answer: Let us first write down the condensate in its simpler version (or perhaps more common presentation) $$\langle \bar{q}q\rangle = \langle \bar{u}u + \bar{d}d + \bar{s}s\rangle$$ As you can see it is a hermitian operator from the start, which makes sense because it is an expectation value we could observe. The standard theory uses this expectation value as an order parameter for the spontaneous breaking of chiral symmetry [see [1,2]]. You can see this by performing a chiral transformation on the quarks involved, i.e. $$\psi \rightarrow e^{i\alpha\gamma_5 T_a}\psi \qquad \text{and}\qquad \bar{\psi}\rightarrow \bar{\psi}e^{i\alpha\gamma_5 T_a}$$ or alternatively $$\psi_L \rightarrow e^{i\alpha T_a}\psi_L\qquad \text{and}\qquad \psi_R\rightarrow e^{-i\alpha T_a}\psi_R.$$ I believe Srednicki is just writing the same thing in a matrix version such that one can compare generators directly; he then motivates the pion effective Lagrangian, where he later adds a general mass (Srednicki Eq. 83.14).
One does need the hermitian conjugate, so that we associate a real parameter to it (namely masses). [1] S. Scherer, A Primer for Chiral Perturbation Theory, (2011), Springer. [2] J. Donoghue, Dynamics of the Standard Model, (2014), Cambridge University Press.
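As a numerical illustration of the order-parameter statement above (a sketch, not from the original post; explicit 4x4 gamma matrices in the chiral basis are an assumed convention): with $U_5=e^{i\alpha\gamma_5}=\cos\alpha\,\mathbb{1}+i\sin\alpha\,\gamma_5$, one can check that $U_5^\dagger\gamma^0 U_5=\gamma^0 e^{2i\alpha\gamma_5}\neq\gamma^0$, so the bilinear $\bar\psi\psi$ is not invariant under a chiral rotation:

```python
import math

N = 4

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def dagger(A):
    return [[complex(A[j][i]).conjugate() for j in range(N)] for i in range(N)]

# Chiral (Weyl) basis: gamma5 = diag(-1,-1,+1,+1), gamma0 swaps the two 2-blocks
g5 = [[(-1 if i < 2 else 1) if i == j else 0 for j in range(N)] for i in range(N)]
g0 = [[1 if (i + 2) % N == j else 0 for j in range(N)] for i in range(N)]

alpha = 0.3  # arbitrary chiral rotation angle

def exp_ig5(a):
    # exp(i*a*gamma5) = cos(a)*1 + i*sin(a)*gamma5, since gamma5^2 = 1
    return [[math.cos(a) * (i == j) + 1j * math.sin(a) * g5[i][j]
             for j in range(N)] for i in range(N)]

U5 = exp_ig5(alpha)

# psi -> U5 psi  implies  psi-bar psi -> psi_dagger (U5_dagger g0 U5) psi
M = matmul(dagger(U5), matmul(g0, U5))
expected = matmul(g0, exp_ig5(2 * alpha))  # g0 * exp(2i*alpha*gamma5)

assert all(abs(M[i][j] - expected[i][j]) < 1e-12 for i in range(N) for j in range(N))
# ...and M != g0, so <psi-bar psi> is not chirally invariant (order parameter):
assert any(abs(M[i][j] - g0[i][j]) > 1e-6 for i in range(N) for j in range(N))
print("U5+ g0 U5 = g0 exp(2i a g5): psi-bar psi transforms nontrivially")
```

A nonzero vacuum value of $\langle\bar q q\rangle$ therefore singles out $\alpha=0$, which is exactly the spontaneous breaking discussed in [1,2].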
{ "domain": "physics.stackexchange", "id": 67832, "tags": "quantum-field-theory, symmetry, symmetry-breaking, parity" }
A doubt regarding Modelling physical phenomena and position uncertainty
Question: For example, in velocity, when we say $v=\frac{dx}{dt}$, there is no proof for it. It's almost like an axiom. Something taken to be true, without a proof. How do I know that for every $x=f(t)$, $v=f'(t)$? Also, how can I say that the position of an object as a function of time, say an electron, is $x=f(t)$ (this comes when I'm studying electromagnetism and we have to chart the course that a charged particle takes in a constant electric field) when QM says that an object's position cannot be known without uncertainty? Answer: As for your first question, velocity is defined to be the first derivative of position with respect to time. It must be true because it is defined to be true. Likewise, acceleration is defined to be the second derivative of position. Those need no proof. One never needs a proof for a definition. Now where it starts to get interesting is when we add $F=ma$ into the mix. This equation brings in some physical implications. It's possible to create tests to see whether $F=ma$ is consistent with the real world around us (and, with little surprise, we find that it does indeed work. There's a reason it's a famous way of modeling the world!) As for quantum uncertainty, there's an important phrase to help with that which I wish was taught earlier in our education: All models are wrong. Some are useful. That quote, attributed to George Box, is the key to understanding why science's models are applicable to the real world. Every one of them is wrong. They all miss some detail somewhere. There are even some fun theories out there which suggest this must always be true. There must always be something missing. However, when we start talking practically, these models have amazing predictive power. You can look at a situation, analyze it, and predict with great certainty what will happen. In the case of calculating positions and QM uncertainty, we have to apply statistics.
While a particle's position is uncertain, the expectation of an object's position (read: the average position if you measured it) is relatively constant, and its standard deviation is very small. You might measure one box as being $1.347m$ away from another box. That's a pretty good measurement. Now due to uncertainty, we know that there's some statistical error to worry about, but it will be less than $0.00000000000000001m$ (I'm pretty sure it should actually be more than 30 digits long, but that depends on your measurement, so I'm giving it a lot of benefit of the doubt, assuming the effects of uncertainty are even larger than they should be). Now practically speaking, you are not going to succeed at measuring these objects to an accuracy where you can observe this quantum fiddly-ness. So we model it without such uncertainty. We know this model is wrong, because it doesn't include uncertainty, but it is useful. For all except particle-physics level predictions, it's a useful way to predict what happens. Your question is specifically about electrons, which are certainly small. In practice, you'll have to rely on your teachers to help you understand when it is reasonable to model electrons as simple classical particles and when you need to bring in quantum mechanics. As an example, you'll learn about a diode, which lets electrons flow in only one direction. In most of its operating regime, classical electromagnetism is sufficient to explain its properties. However, if you reverse-bias it (putting a strong voltage in the direction it doesn't flow), a strange effect called "avalanche breakdown" occurs, letting current flow when the classical models say the current can't flow. The reason? At high enough voltages, the layer which prevents electrons from flowing gets thin enough that quantum mechanical effects start to matter -- we start seeing electrons tunnel through the barrier. How do you know when you can use one model vs the other?
Well, the end-all answer is empirical testing -- go try it and see. But your teacher should help you understand equations which show what voltages are needed to start having tunneling currents that are not ignorable. In your career, you'll make more assumptions. And sometimes the results won't be useful. A classic example is collisions. When you start modeling collisions, you'll assume they happen instantaneously. If you start trying to do things like design protective gear to save your life in a collision, you'll start to find that model isn't useful. In reality, collisions are complex beasts that take time. If you're designing safety gear, you need better models. Another example is aircraft flight. Typically you get to assume that air moves like a wave. However, as you get close to supersonic speeds, that assumption starts to fall apart. The way it falls apart looks like a shock wave. So expect your career to be full of models that are wrong. But from experience, many of these models are useful. You merely have to know when it is reasonable to apply them.
{ "domain": "physics.stackexchange", "id": 60031, "tags": "kinematics, velocity, definition, differentiation, models" }
Local speed of light and accelerated observers in general relativity
Question: Background: By the equivalence principle, an observer (time-like geodesic + orthonormal frame (3 spacelike, 1 timelike) which serves as their measurement standard) in gravitational free fall sees physics which is locally indistinguishable from what they would see in an inertial frame in flat spacetime. As we well know, measurements in general relativity can only be made locally due to the anholonomy of curved spacetime (e.g. we can project the tangent vector of another particle flying by into the observer's physical space subspace of the tangent space at a point. But how do we make a sensible definition of the speed of a particle whose four-velocity is in another tangent space?). Thus by Einstein's postulate of Special Relativity, the speed of light measured by any observer (time-like geodesic) is $c$ as their measurements are local, and so they might as well be in an inertial frame in Minkowski space. Mathematically: the curved spacetime's metric can always be made close to the Minkowski metric at a point, and so geodesics will locally behave in that famous congruence of inertial observers way - like Einstein and his books in free fall in an elevator. A lemma that needs to be proven: I've read that measuring a passing worldline's speed (spatial projection of four-velocity) as $c$ literally means its tangent is null. (Why?) The spatial velocity the observer gets for a passing object is defined as $$v = \varepsilon^{\alpha} (v_{\delta,\delta(t_1)}) e_{\alpha}$$ where $e_{\alpha}$ is the observer's orthonormal basis for the "space" directions $\alpha = 1,2,3$ and $\varepsilon^{\alpha}$ are the pointwise covector counterparts to these vectors. The tangent vector being acted on is that of the beam of light whose world line I've called $\delta$. How do we prove $v = c \implies g(v_{\delta,\delta(t_1)},v_{\delta,\delta(t_1)}) = 0$, where $t_1$ is the time of the observer's and light beam's worldlines crossing so the measurement can be done.
By the geodesic postulate then, the beam continues on through spacetime after its encounter with you always with a null tangent (parallel transport preserves the length of the tangent). This is one way to see why light beams follow null geodesics in GR. So here's my question: curved spacetime has geometry pointwise always sufficiently similar to Minkowski, but one needn't be in gravitational free fall. So an accelerated (wrt free fall) observer with $\nabla_X X \neq 0$ shouldn't necessarily measure a local speed of light equal to $c$. Yet light beams are traveling on null geodesics one way or another, and so as long as the observer's tangent is time-like, the measurement should come out to $c$ (once we've proven the above lemma). What is going on here? Even forgetting curved spacetime, applying this logic to a Rindler observer makes this all the more clear: local measurements of the speed of light are always going to be $c$. Answer: All timelike observers measure the local speed of light as $c$ - even accelerating observers! Two useful concepts here are local flatness and the construction of momentarily comoving reference frames (MCRFs). Local flatness ensures at any point $P$ in the manifold, there exist coordinates such that the spacetime locally looks like that of special relativity: Minkowski space! At point $P$, you may actually find such coordinates with 6 degrees of freedom left over. These correspond to the 3 boosts and 3 rotations you may perform on your coordinate system. Now there's obviously no inertial frame in which an accelerating observer is always at rest. However there is an inertial frame which momentarily has the same velocity as the observer. This is the MCRF. Of course, a moment later this frame is no longer comoving with the observer. Now consider an event $P$ in an arbitrary curved spacetime where an accelerating observer passes right by a light beam. From local flatness we know we may find coordinates about $P$ where the spacetime looks Minkowskian.
Moreover, we may use one of our six degrees of freedom to boost into the MCRF of the accelerating observer at this point. The MCRF is locally an inertial frame which momentarily describes the accelerating observer at $P$. So via the postulates of special relativity, we are guaranteed this accelerating observer measures the speed of light as $c$ at point $P$! Here we've had to use “local” both in the sense of space - the light beam passes right by the observer, and in time - our MCRF represents the observer only for a moment (but that's all we need). Concerning the lemma, the four-velocity of a photon is not well-defined. Typically, the components of the four-velocity for some observer $\mathcal{O}$ with coordinates $(t, x, y, z)$ are $$\vec{u}\rightarrow \left(\frac{dt}{d\tau}, \frac{dx}{d\tau}, \frac{dy}{d\tau}, \frac{dz}{d\tau}\right)$$ where $\tau$ is the proper time of observer $\mathcal{O}$. The issue arises because there is no inertial frame where light is at rest, and hence proper time is ill-defined for a photon. You might get around this using what's known as an affine parameter $\lambda$. However I believe it's simpler to use the four-momentum which is well-defined for a photon. Suppose in the MCRF of the accelerating observer described above with coordinates $(t, x, y, z)$ the light beam propagates in the $x$-direction. The energy, $E$, and momentum, $p$, of a photon are related by $p = E/c$ and in units where $c = 1$: $p = E$. Then the components of the four-momentum in the MCRF are $$\vec{p}\rightarrow (E, p, 0, 0) = (E, E, 0, 0)$$ and as the MCRF is locally inertial, we may use the metric for Minkowski space to compute $$\mathbf{g}(\vec{p},\vec{p}) = \eta_{\mu\nu}p^\mu p^\nu = -E^2 + E^2 = 0$$ which highlights the null trajectory of the photon in this frame. Lastly, there are many popular systems where accelerating observers or observers in curved spacetimes will perceive light traveling slower or faster than $c$ at a distance (e.g. 
Rindler coordinates for Minkowski space, recessional velocities of distant galaxies in FRW cosmologies, etc). Ultimately, this comes down to the fact that it is impossible to cover these spacetimes with global coordinates that represent an inertial frame.
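The MCRF computation $\eta_{\mu\nu}p^\mu p^\nu=0$ from the answer can be spelled out numerically (a sketch; the photon energy is arbitrary, units with $c=1$):

```python
# Minkowski metric with signature (-,+,+,+), as in the answer's MCRF computation
eta = [[-1, 0, 0, 0],
       [ 0, 1, 0, 0],
       [ 0, 0, 1, 0],
       [ 0, 0, 0, 1]]

def g(u, v, metric=eta):
    """Contract the metric with two four-vectors: g_{mu nu} u^mu v^nu."""
    return sum(metric[m][n] * u[m] * v[n] for m in range(4) for n in range(4))

E = 2.5                          # arbitrary photon energy
p_photon = [E, E, 0.0, 0.0]      # four-momentum of a beam along +x: (E, p, 0, 0) with p = E
u_obs    = [1.0, 0.0, 0.0, 0.0]  # observer at rest in the MCRF

print(g(p_photon, p_photon))  # 0.0  -> null trajectory
print(g(u_obs, u_obs))        # -1.0 -> timelike, normalized four-velocity
```

The same contraction with a massive particle's four-momentum would give $-m^2 < 0$, which is why "measuring speed $c$" and "null tangent" pick out the same worldlines.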
{ "domain": "physics.stackexchange", "id": 94640, "tags": "general-relativity, special-relativity, differential-geometry, speed-of-light" }
Can I set a constraint on the first tap of an FIR filter such that its inverse is stable?
Question: Let's say I have the following FIR filter $h[n]$, so the output $y[n]$ for an input $x[n]$ is $$ y[n] = \sum_{k=0}^{m-1}x[n-k]h[k] $$ The inverse of this filter is given by the IIR difference equation $$ y[n] = \frac{1}{h[0]}\bigg(x[n] - \sum_{k=1}^{m-1}y[n-k]h[k]\bigg) $$ Is there some constraint I can put on the filter taps such that the inverse is stable? I know that if I keep the zeros of the FIR filter inside the unit circle, then the poles of the inverse filter will also be inside the unit circle, implying stability. However, are there simpler constraints I can make on $h[n]$ to ensure the stability of the inverse? I was thinking something like $h[0] > \sum_{k\neq0} h[k]$, but I can't prove stability of that. Answer: The Invertible FIR Filter
A constraint based on the first coefficient alone is developed as follows: From Cauchy's argument principle any FIR filter that meets the following constraint will be invertible (including marginal invertibility; change $\le$ to $<$ otherwise): $$\max\left(\arg \left( H(e^{j\omega}) \right)\right)-\min\left(\arg \left( H(e^{j\omega}) \right)\right) \le \pi\tag{1}\label{1}, \space\space \omega \in [\omega_o, \omega_o+\pi) $$ Where $H(e^{j\omega}) = \displaystyle\sum_{n=0}^{N-1}h_ne^{-j\omega n}$ is the frequency response of the FIR filter and $\arg()$ is the unwrapped phase of $H(e^{j\omega})$. Note: In an earlier version of this post, I had a simpler subset of the above that constrained the absolute value of the phase to not exceed $\pi$, but consider, given any such solution, that we can rotate the filter response by a fixed angle without affecting invertibility. Thus it is more generally constrained to be that for an FIR with real coefficients, the difference of the unwrapped phase for $\omega \in [0, \pi)$ cannot exceed $\pi$.
Extending this to an FIR with complex coefficients, and given that we rotate the filter response (which shifts the frequency response by a fixed amount), results in the generalized constraint above applicable to any interval over $\pi$ in frequency. Thus to constrain the first coefficient $h_0$ we can derive from $\ref{1}$: $$\arg( h_0) + \max\left(\arg\left(\sum_{n=1}^{N-1}h_ne^{-j\omega n} \right)\right)- \min\left(\arg\left(\sum_{n=1}^{N-1}h_ne^{-j\omega n} \right)\right) \le \pi\tag{2}\label{2}$$ Which shows the complexity of a constraint based on the first coefficient alone, but that it can exist. It is equivalent from this, and simpler, to state that if all the zeros are inside the unit circle (an invertible FIR filter), then the plot of the frequency response on a complex plane (as we sweep $\omega$ from $0$ to $2\pi$) cannot encircle the origin. Further details below: An invertible FIR filter is a minimum phase filter, since all zeros must be inside the unit circle (or on the unit circle for marginal stability). However a subset of all possible minimum phase filters given a first tap would be the set of decreasing coefficients as the OP has hypothesized, as they would meet the constraint given below under "by inspection". However other minimum phase filters also exist where subsequent taps are larger than the first. The easiest constraint I can think of that would be on all coefficients specifically, beyond solving for the roots, is given by Cauchy's argument principle: The frequency response as given by the coefficients cannot encircle the origin for an invertible FIR filter. With the frequency response given as: $$H(z) \bigg|_{z=e^{j\omega}} = \sum_{n=0}^{N-1}h_n e^{-j\omega n}$$ Further details below: The invertible filter must have all zeros inside the unit circle, since zeros become poles once the filter is inverted, and any poles outside the unit circle mean instability for causal systems.
(For marginal stability consideration, meaning a system that neither grows nor decays, the zero can be on the unit circle.) An FIR filter with all zeros inside the unit circle is a minimum phase filter. Any other FIR filters would not be invertible. This includes maximum phase filters, which have all zeros outside the unit circle, and mixed-phase filters with both minimum and maximum phase components (linear phase filters are mixed phase). So the constraint is the filter must be a minimum phase filter to be invertible. Below I list 4 tests for detecting a minimum phase filter, with Cauchy's argument principle being the closest to providing a simple coefficient constraint rule. By Inspection: A tell-tale sign of a minimum phase filter is the concentration of coefficients toward the start of the filter. Given this, you can rule out many filters by simple inspection if the coefficients are either concentrated in the center or end of the filter. Specifically, when considering all filters with the same magnitude response, the coefficients for the minimum phase filter (which are the filter's impulse response) will decay the fastest in time. A detailed proof of this property specific to minimum phase polynomials is given in Oppenheim and Schafer's Digital Signal Processing book and is summarized as: $$\sum_{n=0}^N|h[n]|^2\ge \sum_{n=0}^N|g[n]|^2$$ Where $h[n]$ is a minimum phase filter and $g[n]$ is any other filter with the same magnitude response, and $N$ can be any positive integer. This does not mean all coefficients are in decreasing order; for example [5 6 3 2 1] is minimum phase, while [5 8 3 2 1] is not, so this is not necessarily a simple constraint that can be applied but can certainly identify obvious non-minimum phase solutions.
Cauchy's Argument Principle: A very simple approach to test for this condition applicable to FIR filters is to use Cauchy's argument principle (see Nyquist's Stability Criterion and Cauchy's Argument Principle) by plotting the frequency response on a complex plane. For a causal FIR filter, the number of clockwise encirclements of the origin will be equal to the number of zeros outside the unit circle. If all the zeros are inside the unit circle, there will be no encirclements of the origin (I show an example below). For an FIR filter, there cannot be any counter-clockwise encirclements (as all poles are at the origin), so if any encirclements do happen, they will be clockwise only. Solving for the Roots: Confirm that the magnitudes of all roots of the polynomial given by the coefficients of the filter satisfy $|z|\le 1$. Hilbert Transform: Compare the magnitude and phase response for the filter. Since there is a unique relationship between the magnitude response and phase response for minimum phase filters, the two can be compared to determine if the filter in question is indeed the minimum phase solution for that magnitude response. This is detailed further by PeterK in this post: Derive minimum phase from magnitude, with the relationship copied below: $$ \theta(\omega) = - {\scr H}\left[ \ln(G(\omega)) \right] $$ where ${\scr H}$ is the Hilbert transform and $G(\omega)$ is the absolute value of the frequency response (magnitude). Every magnitude response has a minimum phase solution, therefore every FIR filter can be decomposed into a minimum phase filter (invertible) cascaded with an all-pass filter (constant magnitude response with phase change only over frequency, and not invertible). These concepts are demonstrated with a simple example of a 2-tap FIR filter with coefficients [1 0.5] and its reverse [0.5 1].
In the first case the filter is minimum phase with a transfer function $1+0.5z^{-1}$ and the second filter is the reverse, which is a maximum phase filter with the transfer function $0.5+z^{-1}$. The magnitude response of both filters is identical but the phase response is very different, as given by the vector diagram showing phase versus frequency for both filters. (This diagram is created by simply replacing $z^{-1}$ with the phasor $e^{-j\omega}$, which is exactly how to get the frequency response from the z-transform.) It is a bit of an optical illusion, but the resulting phasor given by the sum $\sum_n h[n]e^{-jn \omega}$ does have the exact same magnitude for both filters at every frequency as we sweep the frequency $\omega$ from $0$ to $2\pi$. However notice how constrained the angle will be in the diagram on the left, compared to the one on the right! Minimum phase versus Maximum phase. This plot also demonstrates Cauchy's argument principle, showing how the frequency response of a minimum phase filter when plotted on the complex plane will not encircle the origin. Below is a plot of the magnitude and phase response for the two example filters above (with the same magnitude response for either). Since group delay is $-d\phi/d\omega$, the minimum phase filter will have the lowest delay while the maximum phase filter will have the largest delay, which makes sense when you consider the placement of the largest tap in the FIR filter, which is a series of summed and weighted delays (the energy will emerge from the minimum phase filter sooner).
This is for the fourth-order filter with coefficients [1 -3 -3 2 5], where we see all four zeros are outside the unit circle as we have four encirclements of the origin. (The easy way to count encirclements is to note a direction on the frequency response with a forward direction consistent with increasing $\omega$, then draw a vector from the origin out toward infinity at any angle and count the crossings of the frequency response: if the crossing is in the forward direction the count increases, and if in the negative direction the count decreases.) And here is another simple example, with the FIR filter given by coefficients [1, 1.5, 0.6], showing how the frequency response of an FIR filter with a real positive first coefficient can enter the left half-plane and still be a minimum phase filter. Specifically, it is the origin that is not encircled, consistent with Cauchy's Argument Principle. With that constraint we see that if the origin is ever encircled, the phase response will exceed $\pm \pi$. Below the plot is the standard magnitude and phase frequency response as two separate plots. Note, however, that it is possible for the phase response to exceed $\pm \pi$ but still not encircle the origin (in a general frequency response), so the converse may not be true. I haven't worked out an actual minimum phase FIR filter transfer function that would create this, nor have I been able to prove that an FIR filter could never have this response, but I have sketched the frequency response on the complex plane that would fit this case:
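The root-solving test above can be scripted directly. Below is a minimal sketch of a hypothetical helper (assuming NumPy is available); strictly minimum phase means all zeros lie strictly inside the unit circle:

```python
import numpy as np

def is_minimum_phase(h):
    """True if every zero of the FIR transfer function lies strictly inside the unit circle."""
    zeros = np.roots(h)  # roots of h[0]*z^(N-1) + h[1]*z^(N-2) + ... + h[N-1]
    return bool(np.all(np.abs(zeros) < 1.0))

print(is_minimum_phase([1, 0.5]))           # True  (zero at z = -0.5)
print(is_minimum_phase([0.5, 1]))           # False (zero at z = -2)
print(is_minimum_phase([1, -3, -3, 2, 5]))  # False (all four zeros outside)
```

For the two-tap pair above, this confirms that [1 0.5] is the minimum phase arrangement and its reverse is not, matching the Cauchy-argument-principle plots.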
{ "domain": "dsp.stackexchange", "id": 10650, "tags": "finite-impulse-response, infinite-impulse-response, digital-filters, stability, inverse" }
What is a "quasi-local" charge?
Question: Could someone please tell me what a quasi-local charge is? For instance, why are Brown-York charges called quasi-local? Answer: The value of a local quantity at a particular point in spacetime can only depend on the values of other quantities at that same point in spacetime. It cannot depend on the values of those quantities at other points in spacetime. In contrast, the value of a nonlocal quantity at a particular point in spacetime can depend on the values of other quantities at other points in spacetime. There is nothing in principle preventing a nonlocal quantity from depending on the value of other quantities at every point in spacetime, out to infinity. Quasi-local quantities forbid some types of nonlocal behavior. In particular, a quasi-local quantity is only allowed to depend on the values of other quantities in a finite region of spacetime around that point.
{ "domain": "physics.stackexchange", "id": 67561, "tags": "general-relativity, conservation-laws, field-theory, definition, locality" }
Order of reactivity of nucleophilic addition at carbonyl group with Grignard reagent
Question: I thought the order would be $1 > 3 > 2$, as 1 is an aldehyde and 2 and 3 are ketones. Since aldehydes are generally more reactive, I placed it first. I chose 3 over 2 because two alkyl groups would make cleaving the $\ce{C=O}$ bond difficult in 2. But the answer is given as $1 > 2 > 3$, even though phenyl is a bulky group and alkyl groups are electron releasing. How should I approach these types of questions? Answer: All the compounds are planar; steric hindrance would have played a more significant role if the attack of the nucleophile were in the plane. Nucleophilic addition at the carbonyl group takes place out of the plane, at the Bürgi-Dunitz angle of about 107°. Addition of Grignard reagents to the carbonyl group is primarily governed by electronic effects: the greater the positive charge at the carbonyl carbon, the faster the reaction. The two phenyl groups reduce the $\delta +$ most significantly by resonance; next is the dimethyl ketone, whose two methyl groups reduce the positive charge by hyperconjugation and inductive effects. The order can be finalized as: $$1>2>3$$ References Clayden, J.; Greeves, N.; Warren, S. Organic Chemistry, 2nd ed.; Oxford UP: Oxford, U.K., 2012; p. 130.
{ "domain": "chemistry.stackexchange", "id": 11185, "tags": "nucleophilic-substitution" }
How do I solve this projectile motion problem?
Question: Every time I approach a projectile motion/kinematics problem, I get confused. I don't know how to translate the problem into an operational method, and every time I complete a problem, the next one is a new mystery to me. How should I tackle this issue and master this problem type? Answer: All projectile motion questions are the same. Why do the kinematics equations make sense? We have two key equations in kinematics, and they stem from the mathematical relationship between position, velocity, and acceleration. Velocity is defined as $v = \frac{dx}{dt}$. Acceleration is defined as $a = \frac{dv}{dt}$. Importantly, differentiation discards information, so integrating back introduces an arbitrary constant. Consider the equation $v(t) = 2t$; then the position is $$x(t) = \int v(t)dt = \int 2t\, dt = t^2 + c.$$ Now, if we know that the particle started at the origin, we know that $x(t = 0) = 0$. We can use this to find c! $$x(0) = 0^2 + c = 0 \rightarrow c = 0$$ Thus, our initial position and velocity are actually integration constants. This physically explains $$v(t) = v_0 + at$$ $$x(t) = x_0 + v_0t + \frac{1}{2} at^2$$ These are the only unique pieces of information. You can plug these in in crafty ways to solve the problem at hand. Operational Approach: How do I solve these problems systematically? With that under our belt, we can ask "how do I solve my projectile motion problem?". The following approach will always work for ANY projectile motion problem. (1) Identify what the question is asking for. (2) Identify, mathematically, what you know. Is the object initially at rest? Then $v(0) = 0$ m/s. Is your particle moving only in the vertical direction? Then $a_x = 0$ m/s². Is acceleration due to gravity? Then $a_y = g$. Is the height of the cliff 5 m? Then $y(0) = 5$ m. And the list goes on. (3) Write down the kinematics equations (if you don't want to memorize them, rederive them according to the logic above; otherwise, just remember them). You should now have a list of quantities and equations.
Your task is purely to rearrange the equations you have written in (3) to solve for the quantities you identified in (1). If you have done 1-3 correctly, you will be able to achieve this goal.
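As a worked instance of the steps above — here a hypothetical ball dropped from rest off a 5 m cliff, solving for the fall time and impact speed — the bookkeeping can be sketched as:

```python
import math

g = 9.81  # m/s^2, magnitude of the acceleration due to gravity

# What we know, written mathematically (hypothetical example):
y0, v0 = 5.0, 0.0  # initial height (m) and initial vertical speed (m/s)

# Kinematics: at impact, 0 = y0 + v0*t - (1/2)*g*t^2
# Rearrange for the unknowns (quadratic formula for t, then v(t)):
t = (v0 + math.sqrt(v0**2 + 2 * g * y0)) / g
v_impact = v0 + g * t

print(round(t, 3))         # time of flight, ~1.01 s
print(round(v_impact, 2))  # impact speed, ~9.9 m/s
```

The same rearrangement works for any variant: change which quantities are knowns and which are unknowns, then solve the same two equations.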
{ "domain": "physics.stackexchange", "id": 97887, "tags": "newtonian-mechanics, kinematics, projectile" }
What does $\partial_ν/\partial^2$ mean?
Question: I found such notation in this article link, equations 24-25. I know that $\partial_\mu$ is the four-gradient, but it does not contain second-order derivatives; only the d'Alembert operator $\partial^\mu\partial_\mu$ does. Answer: It's fundamentally notation with the following meaning: $1/\partial^2$ is the inverse operator of $\partial^2$. And what does this imply? Basically, that under a Fourier transform, if $\partial^2$ goes to $p^2$, then $1/\partial^2$ goes to $1/p^2$. In effective field theory, with this reasoning, you can expand in powers of $m^2/\partial^2$, and if the momenta are large compared to the mass you can drop such terms.
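A quick numerical sanity check of this Fourier-space reasoning (a sketch, assuming a single mode $f(x)=\sin(px)$, for which $\partial^2 f = -p^2 f$ and hence $(1/\partial^2)f = -f/p^2$):

```python
import math

p, dx = 3.0, 1e-4

def f(x):
    # a single Fourier mode, chosen purely for illustration
    return math.sin(p * x)

x = 0.7
# central finite difference approximating the second derivative
d2f = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2

print(round(d2f / f(x), 4))  # close to -p**2 = -9.0
```

On a single mode the operator $\partial^2$ is just multiplication by $-p^2$, so its inverse is multiplication by $-1/p^2$, which is all the notation $1/\partial^2$ asserts mode by mode.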
{ "domain": "physics.stackexchange", "id": 93034, "tags": "definition, differentiation, notation" }
Spinor field path quantization
Question: Although I have asked a similar question here, here, I find that I don't totally understand it, so I arrange my new ideas in this post. Begin with the Berezin integral: $$\left(\prod_i \int d \theta_i^* d \theta_i\right) e^{-\theta_i^* B_{i j} \theta_j}=\operatorname{det} B \tag{9.69}. $$ Now I want to calculate: $$\int \mathcal{D} \bar{\psi} \mathcal{D} \psi \exp \left[i \int d^4 x \bar{\psi}(i \not \partial-m) \psi\right], \tag{A}$$ The book says that $A=\text{det}(i\not\partial-m)$, but that's not clear to me. First, we need to note that the indices $i,j$ in (9.69) are the discretization of all spacetime, and $i$, $j$ are independent. But now our situation is $\bar{\psi}(x)(i \not \partial-m)_x \psi(x)$; we only have one set of spacetime points (in Hilbert space), so I want to expand the operator $(i\not\partial-m)_x$ in a matrix representation. Then, if we define the Green function of $(i\not\partial-m)_x$: $$(i\not\partial-m)_x S_F(x-y)=i\delta^4(x-y), \tag{B}$$ we can represent (see here) $$(i\not\partial-m)_x \psi(x)=\int d^4 y (-i S_F(x-y)^{-1})\psi(y), \tag{C} $$ Now we can write A as: $$ \begin{aligned} A&=\int \mathcal{D} \bar{\psi} \mathcal{D} \psi \exp \left[i \int d^4 x d^4 y\bar{\psi}(x)[-i S_F(x-y)^{-1}] \psi(y)\right]\\ & = \Pi_i \int d\bar{\psi}_i d\psi_i \exp \left[- \bar{\psi}_i[- S_F(x-y)^{-1}]_{ij} \psi_j\right], \end{aligned} \tag{D}$$ where in the second line we use a spacetime discretization, and $i$, $j$ are summed separately inside the exponent. So from (D) and (9.69), it seems that $$\int \mathcal{D} \bar{\psi} \mathcal{D} \psi \exp \left[i \int d^4 x \bar{\psi}(i \not \partial-m) \psi\right]=\text{det}(- S_F(x-y)^{-1}). \tag{E}$$ So would $$\text{det}(- S_F(x-y)^{-1})=\text{det}(i\not\partial-m)~?\tag{F}$$ Or where is my problem?
Answer: It seems OP's problem is not really about Grassmann variables and spinors per se, but rather how to identify the matrix entries$^1$ $$M(x,y)~=~(-i)(i \not\partial-m)\delta^4(x-y)\tag{i}$$ of an infinite-dimensional matrix $$M(x,y)_{x,y\in\mathbb{R}^4}\tag{ii}$$ in the functional determinant eq. (A). Here the spacetime coordinates $x$ and $y$ are the continuous indices of the infinite-dimensional matrix. So one should strictly speaking write the functional determinant as $$\det \left(M(x,y)_{x,y\in\mathbb{R}^4}\right).\tag{iii}$$ Eq. (iii) should be compared to the book's (conceptually confusing, but commonly used) shorthand notation, where the 4D Dirac delta distribution is left implicit. -- $^1$ The factor $(-i)$ in eq. (i) is determined by comparing eqs. (9.69) & (A).
{ "domain": "physics.stackexchange", "id": 92120, "tags": "quantum-field-theory, path-integral, dirac-delta-distributions, grassmann-numbers, functional-determinants" }
A linear algebra exercise from Griffiths "Introduction to quantum mechanics"
Question: (Edited so that it obeys the rules of homework questions) I am stuck on this linear algebra problem from Griffiths's "Introduction to quantum mechanics". Can somebody give me some guidance? (I am interested in the sentence that is highlighted in red.) Answer: Let's assume you start with an orthonormal basis $|e_{i}\rangle \, , \, i = 1, ...,N$, where $N$ is the dimensionality of your vector space. Then we would like the new basis elements to be defined as: \begin{equation} |\tilde{e}_{i}\rangle = S|e_{i}\rangle \end{equation} EDIT: Since it's not allowed to provide a complete proof and I have been compelled to only provide a guideline here, what I can say is that you should then consider what kind of mathematical condition orthonormality implies. That of course should apply to both the old and the new basis. You should also think about how the Hermitian conjugate of such a matrix appears when dealing with a complex vector space such as this. From these it's fairly easy to deduce the unitarity of $S$.
{ "domain": "physics.stackexchange", "id": 89127, "tags": "quantum-mechanics, homework-and-exercises, operators, linear-algebra" }
Bose-Einstein Grand Canonical partition function derivation step
Question: The total grand canonical partition function is $$\mathcal{Z} = \sum_{all\ states}{e^{-\beta(E-N\mu)}} = \sum_{N=0}^\infty\sum_{\{E\}}{e^{-\beta(E-N\mu)}}$$ For Bose-Einstein or Fermi-Dirac statistics, the energy eigenstates are countable since $E=\sum_{\epsilon}{n_\epsilon \epsilon}$; then $$\mathcal{Z} = \sum_{N=0}^{\infty}\sum_{\{n_\epsilon\}}e^{-\beta\sum_{\epsilon}{n_\epsilon(\epsilon-\mu)}} = \sum_{N=0}^{\infty}\sum_{\{n_\epsilon\}}\prod_\epsilon e^{-\beta n_\epsilon(\epsilon-\mu)}= (\sum_{n_0}e^{-\beta{n_0(\epsilon_0-\mu)}})(\sum_{n_1}e^{-\beta{n_1(\epsilon_1-\mu)}})...$$ This method is used in Pathria's Statistical Mechanics book (page 133). I am having trouble following the argument behind the last step. The book states the following: "The double summation (first over the numbers $n_\epsilon$ constrained by a fixed value of the total number N, and then over all possible values of N) is equivalent to a summation over all possible values of the numbers $n_\epsilon$ independent of one another" Why is this true? I have been stuck on this argument for hours... Answer: "The double summation (first over the numbers $n_\epsilon$ constrained by a fixed value of the total number N, and then over all possible values of N) is equivalent to a summation over all possible values of the numbers $n_\epsilon$ independent of one another" Why is this true? I have been stuck on this argument for hours... The term $$ \sum_N \sum_{\{n_\epsilon\}}\left(\ldots\right) $$ means a sum over all N and, for each fixed N, a sum over all $n_\epsilon$ consistent with the fixed value of N. But this is the same as summing over all $n_\epsilon$ regardless of the value of N (i.e., unconstrained summing over $n_\epsilon$). I.e., $$ \sum_N \sum_{\{n_\epsilon\}}\left(\ldots\right) = \sum_{n_1}\sum_{n_2}\sum_{n_3}\cdots\left(\ldots\right)\;, $$ where the sums over $n_1$, $n_2$, $n_3$, etc. are unconstrained. Consider this fermionic example.
Suppose there are only three single-particle states ($\epsilon$ can be: 1, 2, or 3). Then the different constrained values of $n_\epsilon$ for each value of N can be decomposed as: $$ N=0:\qquad n_1=0,n_2=0,n_3=0 $$ $$ N=1:\qquad n_1=0,n_2=0,n_3=1 $$ $$ N=1:\qquad n_1=0,n_2=1,n_3=0 $$ $$ N=1:\qquad n_1=1,n_2=0,n_3=0 $$ $$ N=2:\qquad n_1=0,n_2=1,n_3=1 $$ $$ N=2:\qquad n_1=1,n_2=0,n_3=1 $$ $$ N=2:\qquad n_1=1,n_2=1,n_3=0 $$ $$ N=3:\qquad n_1=1,n_2=1,n_3=1 $$ And you see there are 8=$2^3$ terms, because this is the same as summing over each of the two possible values of $n_1$, $n_2$, and $n_3$ independently (for example, regardless of the values I pick for $n_1$ and $n_2$, I can find one state with $n_3=0$ and one state with $n_3=1$). This is true in general, not just for the specific example above.
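The factorization can be checked by brute force for the three-state fermionic example above (the values of $\beta$, $\mu$, and the energies below are arbitrary illustrative choices):

```python
import math
from itertools import product

beta, mu = 1.0, -0.5
eps = [1.0, 2.0, 3.0]  # three single-particle energies (illustrative)

# Brute force: sum over all 2^3 occupation tuples (n1, n2, n3), each 0 or 1,
# which automatically sweeps every N = 0, 1, 2, 3 sector exactly once.
Z_sum = sum(
    math.exp(-beta * sum(n * (e - mu) for n, e in zip(ns, eps)))
    for ns in product([0, 1], repeat=len(eps))
)

# Factorized form: a product over single-particle states.
Z_prod = 1.0
for e in eps:
    Z_prod *= 1 + math.exp(-beta * (e - mu))

print(abs(Z_sum - Z_prod) < 1e-12)  # True
```

The two agree because iterating over all occupation tuples is exactly the same set of terms as first fixing N and then summing the constrained configurations.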
{ "domain": "physics.stackexchange", "id": 21158, "tags": "statistical-mechanics" }
CSS Grid and Flexbox use
Question: I have coded up the layout using the following Dribbble shot: https://dribbble.com/shots/5382121-Nike-Future To help get better at layouts. This is using a mix of Grid and Flexbox, but an area where I think I may need to revisit is the play button, which is using the old method of position: absolute to vertically align it to the middle. The ideal solution would be to have the play button positioned on the middle grid line. If people have optimisations I would be happy to hear them. :root { --main-orange: #ff4644; --main-blue: #2f333e; } * { box-sizing: border-box; } body { margin: 0; font-family: sans-serif; } .wrapper { display: grid; height: 100vh; min-height: 800px; grid-template-columns: 50fr 40fr 10fr; grid-template-rows: 50fr 50fr; position: relative; } .main-area { background-color: #fff; grid-column: 1 / 2; grid-row: 1 / 3; display: flex; flex-direction: column; justify-content: space-between; } .play-btn { width: 75px; height: 75px; background-color: var(--main-orange); border-radius: 40px; position: absolute; left: 0; right: 0; margin-left: auto; margin-right: auto; top: 50%; transform: translateY(-50%); display: flex; flex-direction: column; justify-content: center; text-align: center; color: var(--main-blue); text-decoration: none; transition: 0.3s; } .play-btn:hover { box-shadow: 0px 30px 30px 0px rgba(255, 100, 68, 0.4); color: #fff; } .main-nav { width: 100%; display: flex; justify-content: flex-end; padding: 60px 75px; } .main-nav a { color: var(--main-blue); text-decoration: none; text-transform: uppercase; } .logo-area { margin-right: auto; } .logo-area img { width: 75px; } .menu-item { padding-left: 15px; padding-right: 15px; letter-spacing: 7.5px; font-size: 0.75rem; transition: 0.3s; } .menu-item:hover { color: var(--main-orange); } .menu-item:last-child { padding-right: 0; } .nike-box { width: 180px; word-break: break-all; margin-left: 75px; justify-self: flex-end; } .nike-box h2 { font-family: 'Roboto Mono', monospace; text-transform: 
uppercase; letter-spacing: 3rem; font-size: 4rem; font-weight: 300; } .img-area { grid-column-start: 2; grid-column-end: 3; grid-row-start: 1; grid-row-end: 3; background-image: url('https://images.pexels.com/photos/733505/pexels-photo-733505.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=750&w=1260'); background-size: cover; background-position: top center; background-repeat: no-repeat; } .search-area { grid-column: 3 / 4; grid-row: 1 / 2; background-color: var(--main-blue); } .social-area { grid-column: 3 / 4; grid-row: 2 / 3; background-color: var(--main-orange); color: #fff; display: flex; } .search-icon-container { display: flex; flex-direction: column; width: 100%; align-items: center; padding-top: 60px; color: #fff; } .social-links { padding: 0; list-style-type: none; display: flex; flex-direction: column; align-self: flex-end; width: 100%; align-items: center; padding-bottom: 60px } .social-links li { transform: rotate(-90deg); margin-bottom: 30px; font-size: 0.75rem; } .social-links li:last-child { margin-bottom: 0; } .social-links li a { color: #fff; text-decoration: none; } <head> <link href="https://fonts.googleapis.com/css?family=Roboto+Mono:100,300,400" rel="stylesheet"> <script defer src="https://use.fontawesome.com/releases/v5.6.3/js/all.js" integrity="sha384-EIHISlAOj4zgYieurP0SdoiBYfGJKkgWedPHH4jCzpCXLmzVsw1ouK59MuUtP4a1" crossorigin="anonymous"></script> </head> <div class="wrapper"> <a href="#" class="play-btn">&#9654;</a> <section class="main-area"> <nav class="main-nav"> <a href="#" class="logo-area"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a6/Logo_NIKE.svg/400px-Logo_NIKE.svg.png" alt=""></a> <a href="#" class="menu-item">Mens</a><a href="#" class="menu-item">Womens</a> </nav> <div class="nike-box"> <h2>Nike</h2> </div> </section> <section class="img-area"></section> <section class="search-area"> <div class="search-icon-container"> <i class="fas fa-search"></i> </div> </section> <section class="social-area"> <ul 
class="social-links"> <li><a href="#">Fb</a></li> <li><a href="#">In</a></li> <li><a href="#">Tw</a></li> </ul> </section> </div> Answer: I would move the play button inside the image section. It makes more sense, also from a semantic point of view, to host it in a container that is not empty. By setting the section to display: flex; it will be pretty easy to position the button. Of course, you'll need to adapt the design to smaller devices. See the code snippet for details. Finally, even if it "sounds" correct, I'd use div in place of section for non-semantic (i.e., no header, no footer, no content) portions of your page. I hope this helps. :root { --main-orange: #ff4644; --main-blue: #2f333e; } * { box-sizing: border-box; } body { margin: 0; font-family: sans-serif; } .wrapper { display: grid; height: 100vh; min-height: 800px; grid-template-columns: 50fr 40fr 10fr; grid-template-rows: 50fr 50fr; position: relative; } .main-area { background-color: #fff; grid-column: 1 / 2; grid-row: 1 / 3; display: flex; flex-direction: column; justify-content: space-between; } .play-btn { width: 75px; height: 75px; background-color: var(--main-orange); border-radius: 40px; position: relative; left: -37px; display: flex; flex-direction: column; justify-content: center; text-align: center; color: var(--main-blue); text-decoration: none; transition: 0.3s; } .play-btn:hover { box-shadow: 0px 30px 30px 0px rgba(255, 100, 68, 0.4); color: #fff; } .main-nav { width: 100%; display: flex; justify-content: flex-end; padding: 60px 75px; } .main-nav a { color: var(--main-blue); text-decoration: none; text-transform: uppercase; } .logo-area { margin-right: auto; } .logo-area img { width: 75px; } .menu-item { padding-left: 15px; padding-right: 15px; letter-spacing: 7.5px; font-size: 0.75rem; transition: 0.3s; } .menu-item:hover { color: var(--main-orange); } .menu-item:last-child { padding-right: 0; } .nike-box { width: 180px; word-break: break-all; margin-left: 75px; justify-self: flex-end; } .nike-box h2 { 
font-family: 'Roboto Mono', monospace; text-transform: uppercase; letter-spacing: 3rem; font-size: 4rem; font-weight: 300; } .img-area { grid-column-start: 2; grid-column-end: 3; grid-row-start: 1; grid-row-end: 3; background-image: url('https://images.pexels.com/photos/733505/pexels-photo-733505.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=750&w=1260'); background-size: cover; background-position: top center; background-repeat: no-repeat; display: flex; align-items: center; } .search-area { grid-column: 3 / 4; grid-row: 1 / 2; background-color: var(--main-blue); } .social-area { grid-column: 3 / 4; grid-row: 2 / 3; background-color: var(--main-orange); color: #fff; display: flex; } .search-icon-container { display: flex; flex-direction: column; width: 100%; align-items: center; padding-top: 60px; color: #fff; } .social-links { padding: 0; list-style-type: none; display: flex; flex-direction: column; align-self: flex-end; width: 100%; align-items: center; padding-bottom: 60px } .social-links li { transform: rotate(-90deg); margin-bottom: 30px; font-size: 0.75rem; } .social-links li:last-child { margin-bottom: 0; } .social-links li a { color: #fff; text-decoration: none; } <head> <link href="https://fonts.googleapis.com/css?family=Roboto+Mono:100,300,400" rel="stylesheet"> <script defer src="https://use.fontawesome.com/releases/v5.6.3/js/all.js" integrity="sha384-EIHISlAOj4zgYieurP0SdoiBYfGJKkgWedPHH4jCzpCXLmzVsw1ouK59MuUtP4a1" crossorigin="anonymous"></script> </head> <div class="wrapper"> <section class="main-area"> <nav class="main-nav"> <a href="#" class="logo-area"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a6/Logo_NIKE.svg/400px-Logo_NIKE.svg.png" alt=""></a> <a href="#" class="menu-item">Mens</a><a href="#" class="menu-item">Womens</a> </nav> <div class="nike-box"> <h2>Nike</h2> </div> </section> <section class="img-area"> <a href="#" class="play-btn">&#9654;</a> </section> <section class="search-area"> <div class="search-icon-container"> <i 
class="fas fa-search"></i> </div> </section> <section class="social-area"> <ul class="social-links"> <li><a href="#">Fb</a></li> <li><a href="#">In</a></li> <li><a href="#">Tw</a></li> </ul> </section> </div>
{ "domain": "codereview.stackexchange", "id": 33384, "tags": "css" }
If Bernoulli's Principle can only be applied to a streamline, how does it justify the pressure difference that causes lift?
Question: According to Bernoulli's principle, an increase in the speed of a fluid causes a decrease in pressure. In the case of an aerofoil or aircraft wing, the speed of the fluid (air) above the wing is greater than the speed below it. But since Bernoulli's Principle can be applied ONLY to points along the SAME streamline, how does the speed difference justify the pressure difference above and below the wing? Answer: Yes, Bernoulli's Theorem applies only along one streamline. It is actually a corollary of Bernoulli's theorem (or a shortcut!) that Bernoulli's Equation holds between points C and D that are not along the same streamline, subject of course to several conditions. $$P_C+\frac12\rho v_C^2=P_D+\frac12\rho v_D^2$$ (neglecting the term $\rho gh$) The proof is as below: Consider the figure below. We apply Bernoulli's Theorem to streamlines AC and BD separately. This gives $$P_A+\frac12\rho v_A^2=P_C+\frac12\rho v_C^2\tag{1}$$ $$P_B+\frac12\rho v_B^2=P_D+\frac12\rho v_D^2\tag{2}$$ When we define points A and B to be far away from the airfoil, we can assume that $v_A=v_B$ and $P_A=P_B$. Equating equations (1) and (2) gives the initial equation. Indeed, it's a useful corollary. The above is a basic description of Bernoulli's theorem and the airfoil, but the physics of an airfoil is more complicated. See this for more info.
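Plugging numbers into the corollary shows how the two streamline equations combine (all speeds and pressures below are made-up illustrative values, not data from a real airfoil):

```python
rho = 1.225  # kg/m^3, density of air at sea level

# Hypothetical common far-field state and local speeds above/below the wing
P_far, v_far = 101325.0, 50.0  # Pa, m/s
v_C, v_D = 60.0, 52.0          # m/s (point C above, point D below)

# Bernoulli along each streamline from the shared far-field state
P_C = P_far + 0.5 * rho * (v_far**2 - v_C**2)
P_D = P_far + 0.5 * rho * (v_far**2 - v_D**2)

print(round(P_D - P_C, 1))  # pressure excess below the wing, in Pa
```

The far-field terms cancel when the two equations are subtracted, which is exactly why only the local speed difference matters in the final pressure difference.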
{ "domain": "physics.stackexchange", "id": 83205, "tags": "fluid-dynamics, pressure, flow, bernoulli-equation, lift" }
How are conditionals implemented at hardware level?
Question: Can anyone explain how conditionals are implemented in the CPU? Is special circuitry used? Answer: The semantics of any non-jump CPU instruction is to increment the program counter PC (a.k.a. instruction pointer) so that the subsequent instruction is executed next. Unconditional jumps overwrite PC with a new, fixed value written in the instruction. (Or, for relative jumps, add said value to PC.) Conditional jumps instead consider two possible values for PC: one pointing to the next instruction, another written inside the instruction (possibly as a PC increment, for relative jumps). Then, they test some given register, obtaining a bit. For instance, the register might be a flag, which is taken as is. Otherwise, the register might be tested for parity, sign, zero, etc., so as to obtain a single bit. This bit controls which of the two PC values is written into the PC register. Usually, a hardware 2-to-1 multiplexer is responsible for choosing between the two, exploiting the control bit.
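A toy software sketch of this PC-update path (the instruction encoding and names here are invented for illustration, not any real ISA):

```python
def mux(select, a, b):
    """2-to-1 multiplexer: returns a when select is 0, b when select is 1."""
    return b if select else a

def next_pc(pc, instr, flag):
    """PC update for a toy ISA.

    instr is ("NOP",), ("JMP", target), or ("JZ", target); flag is the zero flag.
    """
    op = instr[0]
    if op == "JMP":            # unconditional: overwrite PC with the target
        return instr[1]
    if op == "JZ":             # conditional: mux between PC+1 and the target
        return mux(1 if flag else 0, pc + 1, instr[1])
    return pc + 1              # ordinary instruction: fall through

print(next_pc(10, ("NOP",), False))     # 11
print(next_pc(10, ("JMP", 40), False))  # 40
print(next_pc(10, ("JZ", 40), True))    # 40 (flag set -> jump taken)
print(next_pc(10, ("JZ", 40), False))   # 11 (flag clear -> fall through)
```

The single `if op == "JZ"` branch mirrors the hardware: the tested bit is the multiplexer's select line, and the two candidate PC values are its data inputs.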
{ "domain": "cs.stackexchange", "id": 10082, "tags": "computer-architecture, cpu" }