rtabmap obstacle_detection error: lookup would require extrapolation into the future when looking up transform
Question: Hi everyone, I have a problem using the rtabmap obstacle_detection nodelet. I get the following error when trying to read the published topic (in my case called /pointCloud_obstacles):

```
[ERROR] Lookup would require extrapolation -0.120000000s into the future.
Requested time 1027.127000000 but the latest data is at time 1027.007000000,
when looking up transform from frame [base_link] to frame [map]
```

The difference between the two times is always about 0.1 (seconds?), and it's also interesting that it says it requires extrapolation by a *negative* time into the future, so it seems more like the past than the future. I don't get any error if I do not access the topic (e.g. using rostopic hz or another node). This is the relevant part of the tf tree. This is my launch file, mainly taken from the rtabmap outdoor mapping tutorial and the rtabmap outdoor navigation tutorial:

```xml
<param name="use_sim_time" type="bool" value="true"/>

<!-- Stereo Odometry -->
<node pkg="rtabmap_ros" type="stereo_odometry" name="stereo_odometry" output="screen">
  <remap from="left/image_rect" to="/zed2/left/image_rect_color"/>
  <remap from="right/image_rect" to="/zed2/right/image_rect_color"/>
  <remap from="left/camera_info" to="/zed2/left/camera_info"/>
  <remap from="right/camera_info" to="/zed2/right/camera_info"/>
  <param name="frame_id" type="string" value="base_footprint"/>
  <param name="queue_size" type="int" value="20"/>
  <param name="odom_frame_id" type="string" value="odom"/>
  <param name="wait_for_transform" type="bool" value="true"/>
  <param name="Odom/Strategy" type="string" value="0"/> <!-- 0=BOW, 1=OpticalFlow -->
  <param name="Odom/EstimationType" type="string" value="1"/> <!-- 3D->2D (PnP) -->
  <param name="Odom/MinInliers" type="string" value="10"/>
  <param name="Odom/RoiRatios" type="string" value="0.03 0.03 0.04 0.04"/>
  <param name="Odom/MaxDepth" type="string" value="10"/>
  <param name="OdomBow/NNDR" type="string" value="0.8"/>
  <param name="Odom/MaxFeatures" type="string" value="1000"/>
  <param name="Odom/FillInfoData" type="string" value="$(arg rtabmapviz)"/>
  <param name="GFTT/MinDistance" type="string" value="10"/>
  <param name="GFTT/QualityLevel" type="string" value="0.00001"/>
</node>

<group ns="rtabmap">
  <!-- Visual SLAM: args: "delete_db_on_start" and "udebug" -->
  <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="--delete_db_on_start">
    <param name="frame_id" type="string" value="zed2_camera_center"/>
    <param name="subscribe_stereo" type="bool" value="true"/>
    <param name="subscribe_depth" type="bool" value="true"/>
    <remap from="odom" to="/odom"/>
    <param name="queue_size" type="int" value="30"/>
    <remap from="left/image_rect" to="/zed2/left/image_rect_color"/>
    <remap from="right/image_rect" to="/zed2/right/image_rect_color"/>
    <remap from="left/camera_info" to="/zed2/left/camera_info"/>
    <remap from="right/camera_info" to="/zed2/right/camera_info"/>
    <!-- RTAB-Map's parameters -->
    <param name="Rtabmap/TimeThr" type="string" value="700"/>
    <param name="Rtabmap/DetectionRate" type="string" value="1"/>
    <param name="Kp/MaxFeatures" type="string" value="200"/>
    <param name="Kp/RoiRatios" type="string" value="0.03 0.03 0.04 0.04"/>
    <param name="Kp/DetectorStrategy" type="string" value="0"/> <!-- use SURF -->
    <param name="Kp/NNStrategy" type="string" value="1"/> <!-- kdTree -->
    <param name="SURF/HessianThreshold" type="string" value="1000"/>
    <param name="Vis/MinInliers" type="string" value="10"/>
    <param name="Vis/EstimationType" type="string" value="1"/> <!-- 3D->2D (PnP) -->
    <param name="RGBD/LoopClosureReextractFeatures" type="string" value="true"/>
    <param name="Vis/MaxWords" type="string" value="500"/>
    <param name="Vis/MaxDepth" type="string" value="10"/>
    <param name="approx_sync" type="bool" value="true"/>
    <param name="RGBD/CreateOccupancyGrid" value="false"/>
  </node>

  <!-- nodelet manager -->
  <node pkg="nodelet" type="nodelet" name="stereo_nodelet" args="manager"/>

  <!-- downsample pointcloud -->
  <node pkg="nodelet" type="nodelet" name="points_xyz" args="standalone rtabmap_ros/point_cloud_xyz">
    <remap from="depth/image" to="/zed2/depth/depth_registered"/>
    <remap from="depth/camera_info" to="/zed2/depth/camera_info"/>
    <remap from="cloud" to="/pointCloud_downsample"/>
    <param name="decimation" type="double" value="4"/>
    <param name="voxel_size" type="double" value="0.05"/>
    <param name="approx_sync" type="bool" value="true"/>
  </node>

  <!-- THIS NODE CREATES THE PROBLEM -->
  <node pkg="nodelet" type="nodelet" name="obstacles_detection" args="load rtabmap_ros/obstacles_detection stereo_nodelet">
    <remap from="cloud" to="/pointCloud_downsample"/>
    <remap from="obstacles" to="/pointCloud_obstacles"/>
    <param name="use_sim_time" type="bool" value="true"/>
    <param name="frame_id" type="string" value="base_footprint"/>
    <param name="map_frame_id" type="string" value="map"/>
    <param name="Grid/MinClusterSize" type="int" value="5"/>
    <param name="Grid/ClusterRadius" type="double" value="0.1"/>
    <param name="Grid/MaxObstaclesHeight" type="double" value="0.5"/>
  </node>
```

All the code runs on the same machine, so it should not be a synchronization issue. I found online that the problem could be due to the simulation time; I tried to set the parameter sim_time to true, but I don't know if I did it in the correct way.

Originally posted by AlessioParmeggiani on ROS Answers with karma: 165 on 2022-02-06. Post score: 2

Original comments:

Comment by osilva on 2022-02-07: Hi @parmex, please take a look at this previous question #q357836 and please share the things that you have discarded. Please check both answers.

Comment by AlessioParmeggiani on 2022-02-08: Hi @osilva, thank you for the suggestion! I already looked at that question and found some hints on what could cause the problem, but sadly not how to solve it. Anyway, I marked as accepted the answer below, as it solves my issue :)

Answer: It looks like waitForTransform is false by default here.
Try adding the wait_for_transform parameter to the obstacles_detection nodelet:

```xml
<node pkg="nodelet" type="nodelet" name="obstacles_detection" args="load rtabmap_ros/obstacles_detection stereo_nodelet">
  <remap from="cloud" to="/pointCloud_downsample"/>
  <remap from="obstacles" to="/pointCloud_obstacles"/>
  <param name="use_sim_time" type="bool" value="true"/>
  <param name="frame_id" type="string" value="base_footprint"/>
  <param name="map_frame_id" type="string" value="map"/>
  <param name="wait_for_transform" type="bool" value="true"/>
  <param name="Grid/MinClusterSize" type="int" value="5"/>
  <param name="Grid/ClusterRadius" type="double" value="0.1"/>
  <param name="Grid/MaxObstaclesHeight" type="double" value="0.5"/>
</node>
```

The error happens because the nodelet receives the point cloud before odometry has been completely computed, so it has to wait a little to get TF at that stamp.

Originally posted by matlabbe with karma: 6409 on 2022-02-07. This answer was ACCEPTED on the original site. Post score: 1

Original comments:

Comment by AlessioParmeggiani on 2022-02-08: Thank you so much! This solved the problem! :)
{ "domain": "robotics.stackexchange", "id": 37421, "tags": "slam, navigation, pointcloud, rtabmap" }
Do I use the mean vector from my training set to center my testing set when dimension reducing for classification?
Question: Please let me know if this is the right place to ask this (or if any of my tags are wrong) or if I need to write this any differently. Do I use the mean vector from my training set to center my testing set when dimension reducing for classification? I am using the principal component analysis procedure to reduce the dimensions of the training set. I build the classifier. Then, before I classify the feature vectors from the test set, during the centering part of the dimension reduction, do I (a) use the same mean vector from the training set, (b) take the mean vector of the testing set and subtract that from the test set, or (c) take the mean vector of the union of the training and test sets and subtract that from the test set? If the third option, does that mean I was also supposed to use the union of the training and testing sets to center the training set as well? No (for the sake of generalizing to other testing sets), right? Also, even though I am pretty sure the answer will be the same as above, can you please let me know if the same is true for using the covariance matrix from the training set to get an eigenvector matrix and multiplying the inverse (transverse) of it times the test set to reduce it? Or do we use the testing set, or the union of the two, to get the covariance and then the eigenvector matrix to multiply times the testing set? Please let me know if any of the premises are wrong. This is my first time. Answer: "Do I use the mean vector from my training set to center my testing set when dimension reducing for classification?": Yes. The test set must not be combined with the training set in any step of calculating the reduced-dimension space. The characteristics of the final space are determined by the training set, and the test set just follows it, i.e. the mean-adjusting step uses the training mean.
You just calculate the final eigenvector matrix $E$ (whose dimension is $d\times d$ at the beginning, where $d$ is the dimensionality of the data, and becomes $d_{reduced}\times d$ after choosing the top vectors), and then your test data $D$ ($n\times d$) is simply multiplied by that matrix; you get the test data in the reduced space ($D^{'}$): $$D_{n\times d}\times E^{T} = D^{'}_{n\times d_{reduced}}$$ where the dimensionality of $E^{T}$ is $d\times d_{reduced}$, as $T$ denotes matrix transpose (you mentioned inverse, which is wrong). NOTE: Depending on how you arrange samples in your data matrix, the matrix product will look totally different. Do not get confused if you see different things in the literature. The standard form of data is usually $n_{samples}\times n_{features}$, which was assumed above as well: each row is a sample and each column is a dimension. I hope this helped. You can comment if you have any questions.
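As a concrete sketch of this workflow (the variable names and the use of NumPy are my own choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))   # 100 training samples, 5 features
X_test = rng.normal(size=(20, 5))     # 20 test samples

# Center with the TRAINING mean only
mu = X_train.mean(axis=0)
X_train_c = X_train - mu

# Covariance and eigenvectors also come from the training set only
C = np.cov(X_train_c, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)               # eigh returns ascending order
E = eigvecs[:, np.argsort(eigvals)[::-1][:2]].T    # top-2 components: shape (2, 5)

# Both sets are projected with the SAME mean and the SAME eigenvectors
train_reduced = X_train_c @ E.T       # shape (100, 2)
test_reduced = (X_test - mu) @ E.T    # shape (20, 2) -- training mean reused
```

Note that the test set never contributes to `mu`, `C`, or `E`; it is only transformed by them.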
{ "domain": "datascience.stackexchange", "id": 7207, "tags": "machine-learning, classification, machine-learning-model, pca, dimensionality-reduction" }
Importing custom messages from other packages in Python
Question: Hi all, I've been on this for hours now and I can't figure it out. It seems as if neither this forum nor the tutorials have a solution for this simple question. I have downloaded the Vesc stack here, without the vesc_ackerman package. It all builds nicely with the addition of the serial library, and even connects to the device and publishes the topics stated in vesc_msgs. You can rostopic echo them, you can rosmsg list them (VescState and VescStateStamped). Now, what I am trying to do is to use these messages in another package, so that I can manipulate them. I have a standard Python node whose code starts with import rospy, then import VescState.msg (or from VescState import XXX), and continues on. (If anyone can point me to how to paste code here without having only the first line in code format and everything else outside, plus hash-tags making the font huge, I'd love that...) I could go on, but it doesn't really matter, because all I get when I try to run the code is:

```
Traceback (most recent call last):
  File "/home/tim/catkin_ws/src/vesc_comm/src/vesc_comm.py", line 5, in <module>
    from VescState.msg import *
ImportError: No module named VescState.msg
```

I have tried every conceivable way of doing this. I have tried renaming the message files and adding them as native messages to my package, including all the package.xml and CMakeLists.txt additions, just as a sanity check. I tried catkin_make install. None of it worked. I actually do not want to make these messages native anyway. What I do want is to get my vesc_comm.py script to recognize the VescState.msg file from the other package (again - the messages work with the vesc_driver node and are listed in rosmsg!). Why can I use every single message offered by ROS (std, geometry, pointcloud2...) but get the error stated above when I try to use a custom message? My ROS is just not recognizing it. What am I doing wrong?
Thanks in advance, Steve

Originally posted by StevenCoral on ROS Answers with karma: 167 on 2017-09-09. Post score: 1

Original comments:

Comment by gvdhoorn on 2017-09-09: "If anyone can point me to how to paste code here": copy-paste the code, select the lines, and press Ctrl+K or click the Preformatted Code button (the one with 101010 on it).

Comment by StevenCoral on 2017-09-10: Thank you! I was clicking the icon first and then tried to paste the code instead of "enter code here"; that wasn't working.

Answer:

```python
import rospy
import VescState.msg  # (or from VescState import XXX)
```

In rospy, the ROS package name is the module name, and the message filename is the class name. As a package can include both messages and services, either a msg or srv submodule needs to be added. So the .msg is not the .msg extension of the filename, but a module. For the package you link (vesc_msgs), the import statement should probably be:

```python
from vesc_msgs.msg import VescState
```

or, if you just want to import all messages:

```python
from vesc_msgs.msg import *
```

"It seems as if not this forum or the tutorials have a solution for this simple question" - this is always difficult, but I have the impression (though I've obviously been doing this for so long that it's easy for me to find) that most tutorials and example code show examples of this. See wiki/ROS/tutorials/Defining Custom Messages - Including or Importing Messages - Python, for instance.

Originally posted by gvdhoorn with karma: 86574 on 2017-09-09. This answer was ACCEPTED on the original site. Post score: 3

Original comments:

Comment by StevenCoral on 2017-09-10: Yeah, that's exactly what I thought! Knowing Python, it felt weird to use the .msg for the filename while periods indicate modules. Therefore I completely understood the error message, I just didn't know why it worked for std_msgs.msg (that file exists). Thanks a lot! I'll try this.

Comment by StevenCoral on 2017-09-10: BTW, I also tried 'from vesc_msgs import VescState'; it didn't work. In my own corrupt way I worked around this issue by copying "_VescState.py" into my src folder and using 'from _vescState import VescState'; that did the trick. Thanks again for the explanation as well!

Comment by gvdhoorn on 2017-09-10: I'm a bit unsure what to make of your last comments. Just make sure to use the from package_name.msg import X approach. That should work. If it doesn't, something is wrong, but not with the import statement.

Comment by StevenCoral on 2017-09-10: Update: it works, thanks a lot. I was just mentioning the bad way I got it to work (using the actual Python build files from the build directory), but I'd rather do it the right way (especially for future work).
{ "domain": "robotics.stackexchange", "id": 28809, "tags": "ros, python, messages" }
What countries are leading this "Global Quantum Computing Race"?
Question: The terms "Quantum Computing Race" and "Global Quantum Computing Race" have been used in the press and research communities lately to describe countries making investments in a "battle" to create the first universal quantum computer. What countries are leading this "Global Quantum Computing Race"? Answer: There are several countries actively participating in the "Quantum Race", most of which are making significant investments. The estimated annual spending on non-classified quantum-technology research in 2015 broke down like this:

- United States (360 €m)
- China (220 €m)
- Germany (120 €m)
- Britain (105 €m)
- Canada (100 €m)
- Australia (75 €m)
- Switzerland (67 €m)
- Japan (63 €m)
- France (52 €m)
- Singapore (44 €m)
- Italy (36 €m)
- Austria (35 €m)
- Russia (30 €m)
- Netherlands (27 €m)
- Spain (25 €m)
- Denmark (22 €m)
- Sweden (15 €m)
- South Korea (13 €m)
- Finland (12 €m)
- Poland (12 €m)

If you chart that out, it looks something like this: As you can see, the European Union invested a combined 550 €m. It's also interesting to see how this investment correlates with patent applications from each of these countries. Since 2015, interest from countries in quantum computing has grown significantly; worldwide investment has now broken 2,000 €m. The biggest increase has been in China's spending, which encompasses quantum computing but includes quantum information systems as well. There are several examples of countries investing in quantum computing in the past year that might be of particular interest. These include:

China - In 2017, China announced that it was set to open a National Laboratory for Quantum Information Sciences by 2020, including a 92-acre, $10 billion quantum research centre.

Japan - In 2017, Nippon Telegraph and Telephone (NTT) shared over the internet, for public use, a prototype quantum computer produced in a joint, state-sponsored research project with Japan's National Institute of Informatics and the University of Tokyo.
Sweden - In 2017, Sweden invested 1 billion Swedish krona (roughly $118 million or 100 €m) into a research initiative with the goal of developing a "robust quantum computer".

Note that the United States has been seen as not investing enough in quantum computing, and over the summer academia and industry testified before the U.S. House Subcommittees on Research & Technology and Energy in an effort to spur investment in this space. Most notably, Dr. Christopher Monroe was quoted as saying that "U.S. leadership in quantum technology will be critical to our national security, and will open new doors for private industry and academia while ensuring America's role as a global technology leader in the 21st century."
{ "domain": "quantumcomputing.stackexchange", "id": 513, "tags": "experimental-realization, quantum-advantage" }
What are these special strings called?
Question: Is there any specific name for strings of data that have a well-defined format? For example, URLs, domain names, IP addresses, email addresses, file paths, etc. all have well-defined delimiters and data formats. What are these special strings called in general? Answer: Any set of strings is a (formal) language. That's probably not what you mean, since you have quite specific restrictions in mind. Many standardised string formats are defined by regular expressions, which makes the respective sets of strings they admit regular languages. Note that some examples are actually finite languages (IPs). In the end, it depends on what you mean by "well defined". Any set of strings is well-defined. Anything more will depend on the kind of specification you give, and then the language will (sort of) inherit the specification's "name".
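For instance, dotted-quad IPv4 addresses form a finite (and hence regular) language; a rough Python sketch of a defining regular expression (simplified, my own construction):

```python
import re

# Each octet is 0-255: 25[0-5] | 2[0-4][0-9] | [01]?[0-9][0-9]?
octet = r"(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
ipv4 = re.compile(rf"^{octet}(\.{octet}){{3}}$")

print(bool(ipv4.match("192.168.0.1")))   # True
print(bool(ipv4.match("256.1.1.1")))     # False: 256 is not a valid octet
```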
{ "domain": "cs.stackexchange", "id": 2683, "tags": "terminology, strings" }
Count objects in the image
Question: How can an algorithm be developed to count the nuts and bolts in the image below? Answer: I will be using MATLAB as part of my solution. The basic algorithm will be this:

1. Read in the image and convert to black and white.
2. Invert the intensities and fill in the holes. There are speckles in the image where, when you convert the image to black and white, the objects are not solid. We want to ensure these are solid.
3. Count how many unique objects there are using bwlabel.

You'll need the Image Processing Toolbox for this algorithm. If you don't have this and are using MATLAB, let me know and I'll edit my post. Without further ado:

```matlab
% Read in image, convert to black and white - link comes from your image posted here
im = imread('https://i.stack.imgur.com/lBGU1.png');
imBW = im2bw(im, 0.3); %// Specify manual threshold of 0.3

% Invert intensities and fill in holes
imBWFilled = imfill(~imBW, 'holes');

% Count how many unique objects there are
[L,num] = bwlabel(imBWFilled);

% Show final image and display number of objects counted in the title
imshow(imBWFilled);
title(['Total number of objects: ' num2str(num)]);
```

L contains a map where each pixel holds the ID of the object it belongs to. 0 means that the pixel belongs to the background, while any value greater than or equal to 1 means that the pixel belongs to the particular object with that ID number. num gives you the total number of objects seen in the image. The output of this code thus gives: If you don't have MATLAB and want to compute this in another language, let me know. You can find the number of objects by considering the image as a connected graph. If you don't have bwlabel, you can use any graph-searching algorithm (breadth-first search, depth-first search, etc.) to compute this. Start with any pixel that belongs to an object, and perform BFS / DFS to visit the pixels that belong to that object. You then set all of these pixels to belong to an ID number.
When the queue / stack is empty, you then select another pixel that belongs to an object and repeat the algorithm. You stop when you have visited all pixels that belong to objects. The total number of IDs you have issued will essentially be how many unique objects you have in your image.

Edit - July 14th, 2014

Seeing your comments, you wish to be able to distinguish between what is a nut and what is a bolt. As such, we can simply add on top of the current code. The algorithm that I have developed is loosely based on circularity:

1. For each object, find the centre of mass. This is simply taking all of the X and Y co-ordinates and averaging them.
2. For each object, find its boundary / perimeter.
3. For each centre of mass, calculate the distance between this point and all of the object's boundary points. Find the difference between the maximum and minimum of these distances; denote this the range.
4. We will have N ranges for N objects. Simply take a look at these ranges and threshold the array.

If you take a look at the bolts, they have a longer vertical direction than horizontal direction. For the nuts, the vertical and horizontal directions are pretty much the same. As such, the range should give us an idea of what is a nut and what is a bolt, because the difference between the maximum and minimum distances to the centre should be very large for a bolt in comparison to a nut. As such, for each object we have, check whether the range is below a certain value (nut) or above a certain value (bolt). Without further ado, here is the code:

```matlab
%%% For each object in the image, find the centre of mass
centres = zeros(num,2);

% Cycle through each unique object label and extract (X,Y) co-ordinates
% that belong to each object. Compute centre of mass for each.
for n = 1 : num
    bmap = L == n;
    [rows,cols] = find(bmap == 1);
    centres(n,:) = [mean(cols) mean(rows)];
end

% Find boundaries of all objects
bwBound = bwperim(imBWFilled, 8);

% For each object, find the distances between the centre of mass and all
% of the pixels along the boundary of that object. Find the range (max - min).
ranges = zeros(num,1);
for n = 1 : num
    bmap = L == n; % Obtain all pixels for an object

    boundPix = bwBound & bmap; % Logical AND with boundaries map to extract
                               % only those pixels around the perimeter
    [rows,cols] = find(boundPix == 1); % Find these locations

    % Compute the distances between the centre of mass and these points
    dists = sqrt((cols - centres(n,1)).^2 + (rows - centres(n,2)).^2);

    % Find the difference between the maximum and minimum distances
    ranges(n) = max(dists(:)) - min(dists(:));
end
```

This isn't complete yet. I stopped here so you can see what the ranges for all of the objects look like:

```
ranges =

   37.9615
   63.9613
   54.9266
    5.0716
    4.1578
    6.3114
    7.2356
   41.6381
   10.3123
   34.0938
    5.0021
   67.3290
```

As you can see, some ranges are quite small ($< 10$) while others are large. To be safe, let's choose a threshold of 15. As such, objects with ranges less than 15 are classified as nuts, while those that are larger are classified as bolts. When we're done, we simply count how many fall on each side of the threshold, and those are the numbers of nuts and bolts we have. Let's continue with the algorithm:

```matlab
% Object IDs with a range of less than 15 are the nuts; the rest are bolts
indNuts = find(ranges < 15);
indBolts = find(ranges >= 15);

% Total number of nuts and bolts
numNuts = numel(indNuts);
numBolts = numel(indBolts);
```

This will give you 6 and 6 respectively, as you expected. Now to finish up the algorithm, I'm going to give you an added bonus. For each unique object we have, I'm going to colour its interior a certain shade of gray. The background will be black, while a nut is gray and a bolt is white. The code to do this is:

```matlab
finalMap = uint8(zeros(size(imBW)));

for n = 1 : numNuts
    finalMap(L == indNuts(n)) = 128;
end
for n = 1 : numBolts
    finalMap(L == indBolts(n)) = 255;
end

figure;
imshow(finalMap);
title(['Number of Nuts: ' num2str(numNuts) ', Number of Bolts: ' num2str(numBolts)]);
```

The final image we get is:
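The graph-search fallback sketched in the answer (for readers without bwlabel) might look like this in Python; the grid and function name are illustrative, not from the original:

```python
from collections import deque

def count_objects(grid):
    """Count 8-connected foreground components in a binary grid via BFS labelling."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    num = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                num += 1                      # new object: issue the next ID
                q = deque([(r, c)])
                labels[r][c] = num
                while q:                      # BFS over the object's pixels
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and grid[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = num
                                q.append((ny, nx))
    return labels, num

# Two separate blobs (8-connectivity joins the diagonal pixels):
grid = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
_, n = count_objects(grid)
print(n)  # 2
```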
{ "domain": "dsp.stackexchange", "id": 3078, "tags": "image-processing" }
Diameter Shifting Capillary Tubes
Question: Is there any known material of which to construct a capillary tube that would make it possible to compress the diameter of the capillary tube while maintaining the internal shape? Intent being that one could dynamically manipulate the height of the fluid in the capillary by compressing and subsequently decompressing the tube? Answer: Perhaps you may consider a three-layer sandwich tube: the outer layer is a rigid tube, the medium layer is a thick-walled tube made of open-cell porous rubber, the inner layer is a thin stretchable capillary (say, made of rubber). The outer layer is bonded to the medium layer, and the inner layer is bonded to the medium layer. Changing the pressure of gas inside the porous rubber of the medium layer you can change the diameter of the inner layer.
{ "domain": "physics.stackexchange", "id": 35536, "tags": "electrostatics, surface-tension, capillary-action, adhesion" }
What is the result of taking the real part and imaginary part of a complex signal in the frequency domain?
Question: Suppose that $g(t)$ is a lowpass complex signal with magnitude (solid line) and phase (dashed line) To modulate $g(t)$ into a bandpass equivalent signal $f(t)$ with center frequency $f_c$, we compute the in-phase and quadrature components of $g(t)$ as \begin{align} g_I(t) &= \text{Re}\{g(t)\} \\ g_Q(t) &= \text{Im}\{g(t)\} \end{align} and then compute $$ f(t) = g_I(t)\cos(2\pi f_c t) - g_Q(t) \sin(2\pi f_c t) \tag{1} \label{eq1} $$ To make sense of \eqref{eq1}, I am curious to know what $g_I(t)$ and $g_Q(t)$ would look like in the frequency domain as a function of $G(f)$, the Fourier transform of $g(t)$ shown in the picture above. In other words, what does taking the real part and imaginary part do to what is shown in the picture above? Answer: Since $$g_I(t)=\frac12\big[g(t)+g^*(t)\big]$$ and $$g_Q(t)=\frac{1}{2j}\big[g(t)-g^*(t)\big]$$ the corresponding Fourier transforms are $$G_I(f)=\frac12\big[G(f)+G^*(-f)\big]$$ which is the even part of $G(f)$, and $$G_Q(f)=\frac{1}{2j}\big[G(f)-G^*(-f)\big]$$ which is the odd part of $G(f)$ (times $1/j$). It is easier to visualize what's happening if you write the bandpass signal as $$f(t)=\textrm{Re}\left\{g(t)e^{j2\pi f_ct}\right\}\tag{1}$$ From $(1)$ you can see that you just shift the spectrum of the complex baseband signal to the center frequency $f_c$, and by taking the real part, you just get a mirror-image copy at $-f_c$, because - as explained above - the real part in the time domain corresponds to the even part in the frequency domain.
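These identities can be checked numerically on a random complex signal; a sketch (the `np.roll` implements the DFT's modular frequency reversal $G^*(-f)$):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=64) + 1j * rng.normal(size=64)   # complex baseband signal

G = np.fft.fft(g)
G_I = np.fft.fft(g.real)     # spectrum of the in-phase (real) part
G_Q = np.fft.fft(g.imag)     # spectrum of the quadrature (imaginary) part

# G*(-f): conjugate G and reverse the frequency axis; the roll keeps the
# DC bin in place after flipping (index -k maps to N-k modulo N).
G_rev_conj = np.conj(np.roll(G[::-1], 1))

print(np.allclose(G_I, 0.5 * (G + G_rev_conj)))    # even part -> True
print(np.allclose(G_Q, (G - G_rev_conj) / 2j))     # odd part  -> True
```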
{ "domain": "dsp.stackexchange", "id": 10841, "tags": "digital-communications, frequency-spectrum, modulation" }
Checking referential integrity of a database schema
Question: I am using PHP PDOs to parse the results I am receiving from MySQL queries against a database. I am now running into an issue with running out of allocated memory. Are there any suggestions on improving the efficiency of my code, or another, more efficient way to handle query results other than parsing PDOs? This will return whether or not a DB has referential integrity:

```php
function generateDSN($host, $dbname) {
    return 'mysql:host=' . $host . ';dbname=' . $dbname;
}

$dbName = $argv[2]; // Used for query construction
$dsn = generateDSN($argv[1], $argv[2]); // Create DSN

// Connect to the database: PDO($dsn, username, password)
$link = new PDO($dsn, "username", "password");
if ($link->connect_error) {
    die("Connection failed: " . $link->connect_error);
} else {
    echo "Connected successfully\n";
} // Validate database connection

// Create query to return the list of tables with foreign keys.
// Query return format:
// +------------+-------------+-----------------------+------------------------+
// | TABLE_NAME | COLUMN_NAME | REFERENCED_TABLE_NAME | REFERENCED_COLUMN_NAME |
// +------------+-------------+-----------------------+------------------------+
$query = "SELECT TABLE_CONSTRAINTS.TABLE_NAME,
    KEY_COLUMN_USAGE.COLUMN_NAME,
    KEY_COLUMN_USAGE.REFERENCED_TABLE_NAME,
    KEY_COLUMN_USAGE.REFERENCED_COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
RIGHT OUTER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE
    ON INFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_NAME = INFORMATION_SCHEMA.KEY_COLUMN_USAGE.CONSTRAINT_NAME
WHERE INFORMATION_SCHEMA.KEY_COLUMN_USAGE.CONSTRAINT_NAME <> 'PRIMARY'
    AND INFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_TYPE = 'FOREIGN KEY'
    AND INFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_SCHEMA = '" . $dbName . "';";

$result = $link->query($query); // Get query results

$x = 0;
$tableA = [];
$tableB = []; // Create arrays to hold data from FK queries

while ($tables = $result->fetch(PDO::FETCH_ASSOC)) {
    $test[] = $tables;
    $assoc = $test[$x]; // Assign assoc[] the value of $query
                        // (will iterate through the query table row by row)

    // Get data values for each column
    $tableName = $assoc['TABLE_NAME'];
    $columnName = $assoc['COLUMN_NAME'];
    $refTableName = $assoc['REFERENCED_TABLE_NAME'];
    $refColumnName = $assoc['REFERENCED_COLUMN_NAME'];

    // A -- Table with the column that is the foreign key
    // B -- Table with the column that the foreign key references
    $fkQueryA = "SELECT DISTINCT " . $columnName . " FROM " . $dbName . "." . $tableName . " ORDER BY " . $columnName . " ;";
    $fkQueryB = "SELECT DISTINCT " . $refColumnName . " FROM " . $dbName . "." . $refTableName . " ORDER BY " . $refColumnName . " ;";

    // Get query results
    $resultA = $link->query($fkQueryA);
    $resultB = $link->query($fkQueryB);

    // Push query results to the tables
    while ($var = $resultA->fetchColumn()) {
        $tableA[] = $var;
    }
    while ($vari = $resultB->fetchColumn()) {
        $tableB[] = $vari;
    }

    $x++; // Increment counter to move through $tables
}

// Return an array with all values of A that are not in B
$resultCompAB = array_diff($tableA, $tableB);

// Print the results
if (empty($resultCompAB)) {
    echo "Database " . $dbName . " has referential integrity.";
} else {
    echo "Orphan values in database " . $dbName . ": \n";
    array_diff($tableB, $resultCompAB);
}
```

Answer: This code does not work properly. Basically you build two enormous arrays containing all values of foreign keys in your database. That's probably a lot.

1. The `$x` and `$test` variables serve no purpose and should be removed.
2. The comparison of tables A and B should be moved inside the `while()` loop.

I also suggest some `unset()`s and closures of resources that are no longer needed.
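To illustrate the reviewer's second point: checked per constraint, the orphan test boils down to a set difference between the FK column's values and the referenced column's values. A hypothetical sketch in Python (table and column data invented for illustration):

```python
def find_orphans(child_values, parent_values):
    """Values in the FK column that have no match in the referenced column."""
    return sorted(set(child_values) - set(parent_values))

# Hypothetical data for ONE foreign-key constraint (checked per constraint,
# inside the loop, rather than pooled across all constraints):
order_customer_ids = [1, 2, 2, 5, 7]   # e.g. orders.customer_id (FK column)
customer_ids = [1, 2, 3, 4, 5]         # e.g. customers.id (referenced column)

print(find_orphans(order_customer_ids, customer_ids))  # [7]
```

Pooling every constraint's values into two global arrays, as the original code does, can report false positives (a value missing from one parent table may happen to exist in another) as well as exhausting memory.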
{ "domain": "codereview.stackexchange", "id": 12675, "tags": "php, mysql, parsing, pdo" }
Why is the Yarkant River braided in the Pamir mountains?
Question: The formation section of the braided river Wikipedia article says that very erodible soil causes braided rivers. Intuitively, a rising mountain range like the Pamirs would not be easily erodible. The Yarkant changes so abruptly that I'm curious what special geological feature is at these straight/meandering-to-braided junctions.

> The critical factor that determines whether a stream will meander or braid is bank erodibility. A stream with cohesive banks that are resistant to erosion will form narrow, deep, meandering channels, whereas a stream with highly erodible banks will form wide, shallow channels, inhibiting helical flow and resulting in the formation of braided channels.

- Braid 1: near the edge of the Pamir range.
- Braid 2: another braided section in the Pamirs, far upstream.
- A source: there appears to be a glacier at a source of the Yarkant in Central Karakoram Park. If glaciers (present or past) explain braiding, why is it intermittent?
- Minimal braiding in the Rockies: mountainous braided rivers seem rare/mild elsewhere, like here in the Rockies near Missoula.

Answer: Braiding will occur in any river where two conditions are met: 1) a very high sediment load must be available, as is usually the case in a peri-glacial or post-glacial environment, and 2) there is an abrupt change from high-energy rivers (steep hydraulic gradient) to low-energy deposition with flat space on either side of the river. Go for a hike in any glacial or post-glacial mountains and you will see that rock hardness has little to do with sediment availability - there is always more sediment than the river can mobilize at any one time, even in extreme floods. The classic example is the Canterbury Plains on the east side of South Island, New Zealand. Braiding is an inherently unstable and transient river configuration. Meandering is the river's minimum-energy configuration. It is another chaotic process, achieved through helicoidal flow and superelevation on the convex side of a bend.
It will occur anywhere the river is laterally unconstrained, and where the rotational kinetic energy, perpendicular to the direction of flow, is a significant fraction of the forward kinetic energy.
{ "domain": "earthscience.stackexchange", "id": 590, "tags": "mountains, rivers" }
Converting bytes to an escaped hexadecimal string
Question: I wrote this function to convert an array of bytes to a C/C++ string representation using hexadecimal escape codes (\xhh). Any suggestions are welcome: std::string toEscapedHexaString(const std::uint8_t * data, const int dataSizeBytes, const int maxCols = 80, const int padding = 0) { assert(data != nullptr); assert((maxCols % 4) == 0); assert((padding % 2) == 0); int column = 0; char hexaStr[64] = {'\0'}; std::string result = "\""; for (int i = 0; i < dataSizeBytes; ++i, ++data) { std::snprintf(hexaStr, sizeof(hexaStr), "\\x%02X", static_cast<unsigned>(*data)); result += hexaStr; column += 4; if (column >= maxCols) { if (i != (dataSizeBytes - 1)) // If not the last iteration { result += "\"\n\""; } column = 0; } } // Add zero padding at the end to ensure the data size // is evenly divisible by the given padding value. if (padding > 0) { for (int i = dataSizeBytes; (i % padding) != 0; ++i) { result += "\\x00"; column += 4; if (column >= maxCols) { if ((i + 1) % padding) // If not the last iteration { result += "\"\n\""; } column = 0; } } } result += "\""; return result; } You can control the number of columns in the output, so for example, converting Hello World!\0 with 20 columns max (not counting the quotes): const unsigned char str[] = "Hello World!"; auto result = toEscapedHexaString(str, sizeof(str), 20); /* result = "\x48\x65\x6C\x6C\x6F" "\x20\x57\x6F\x72\x6C" "\x64\x21\x00" */ Answer: Use of assert Some of the uses of assert here look semi-broken, at least to me. Part of the definition of assert is that when you compile with NDEBUG defined, assert becomes a nop. When it is enabled, a failed assertion immediately aborts the program. For testing inputs to a function, neither of those is generally desirable. For such purposes, I usually define an assure that always does its thing, and throws an exception when the test fails. 
snprintf At the very least, I'd try to wrap the ugliness of using snprintf up into a reasonably neat little function like to_hex that just converts a number into its hex representation: std::string to_hex(int in) { // snprintf ugliness here return std::string(buffer); } Alternatively, since you're converting inputs to strings, and accumulating those together into a buffer anyway, consider using an std::ostringstream. Logic In a few places, the logic seems...less than pleasing, at least to me. For example: if (column >= maxCols) You've previously tested that maxCols % 4 == 0, and (assuming things work correctly) each item you add should be 4 more characters, so you should never see a result that's actually greater than maxCols--only less than maxCols, then equal to maxCols, then back to 0. This is harmless, and if you might want to support a maxCols that was not a multiple of 4, then it might even make good sense--but as things stand right now, it seems like there's a degree of...uncertainty about the real intent. Reorganization Right now you have code to ensure the maximum line length sprinkled liberally throughout the rest of the code. I'd be tempted to define a type that takes that as its single responsibility: when you create it, specify a maximum length. When you add characters to it, it tracks the length of the current "line", and when the maximum is reached (or exceeded) it adds a delimiter, and starts over. With this, the rest of the code gets a lot simpler in a hurry, and each piece of code has a much simpler, clearer purpose.
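The reviewer's reorganization idea (separate the line-wrapping responsibility from the escaping) is easy to prototype. Here is a hypothetical, language-agnostic sketch of the same algorithm in Python, not the OP's C++, with the padding feature omitted; the function name is mine:

```python
# Hypothetical Python sketch of the reviewer's suggestion: compute the
# escapes first, then let a single slicing step own the line-wrapping.
def to_escaped_hex(data: bytes, max_cols: int = 80) -> str:
    """Render bytes as adjacent C string literals of \\xHH escapes."""
    escapes = ["\\x%02X" % b for b in data]   # each escape is 4 characters
    per_line = max(1, max_cols // 4)          # escapes that fit on one line
    lines = ["".join(escapes[i:i + per_line])
             for i in range(0, len(escapes), per_line)]
    return "\n".join('"%s"' % line for line in lines)

# Reproduces the example from the question (13 bytes, 20 columns):
print(to_escaped_hex(b"Hello World!\x00", 20))
```

With the wrapping isolated in one place, the main loop has no column counter to keep in sync, which is the "single responsibility" point the review makes.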
{ "domain": "codereview.stackexchange", "id": 18576, "tags": "c++, strings, c++14, escaping" }
Mean of integers, ignoring the largest and smallest values
Question: The code returns the mean of a list of integers, except ignoring the largest and smallest values in the list. If there are multiple copies of the smallest value, ignore just one copy, and likewise for the largest value. Assume that the list is length 3 or more. For this problem, as part of my learning, I’ve defined my own functions. Please comment on my variable names, docstrings, readability of code etc. and let me know how you would do it. from typing import Sequence def find_length(seq: Sequence) -> int: """ Return the number of elements in a sequence. >>> numbers = [-10, -4, -2, -4, -2, 0] >>> find_length(numbers) 6 """ count = 0 for _ in seq: count += 1 return count def find_min(numbers: Sequence) -> int: """ Return the smallest number in a sequence. >>> numbers = [-10, -4, -2, -4, -2, 0] >>> find_min(numbers) -10 """ min_number = numbers[0] for number in numbers: if number < min_number: min_number = number return min_number def find_max(numbers: Sequence) -> int: """ Return the biggest number in a sequence. >>> numbers = [-10, -4, -2, -4, -2, 0] >>> find_max(numbers) 0 """ max_number = numbers[0] for number in numbers: if number > max_number: max_number = number return max_number def find_sum(numbers: Sequence) -> int: """ Return sum of the numbers in a sequence. >>> numbers = [-10, -4, -2, -4, -2, 0] >>> find_sum(numbers) -22 """ total = 0 for number in numbers: total += number return total def find_centered_average(numbers: Sequence) -> int: """ Return the centered average of a list of numbers. 
>>> numbers = [-10, -4, -2, -4, -2, 0] >>> find_centered_average(numbers) -3 """ max_number = find_max(numbers) min_number = find_min(numbers) total = find_sum(numbers) - max_number - min_number centered_average = total // (find_length(numbers) - 2) return centered_average All of that can be accomplished with either: (sum(numbers) - max(numbers) - min(numbers)) // (len(numbers) - 2) Or, centered_numbers = sorted(numbers)[1:-1] sum(centered_numbers) // len(centered_numbers)) There must be a number of other ways to do it. Answer: Well, whenever you want to know which function is better, you should test them. First, your custom functions are just re-inventing the wheel, so you should only use them if you have to (either because it is mandated or they are actually faster, more secure, more memory-friendly, more whatever). When running your functions with growing number of numbers, this is what I get: Here, Input is the length of the numbers list, generated with numpy.random.random_integers(-10, 10, size=n), with \$n\in [10, 10000]\$. So, for larger lists, the difference between your own function and the sorting function is negligible. The sorting should be \$\mathcal{O}(n\log n)\$, but I don't see why your function seems to show a similar behavior. The sum/min/max function should be \$\mathcal{O}(3n)\$, so wins for large lists. For smaller sizes, the two simple implementations perform similarly and your functions slightly slower: This is for numpy.random.random_integers(-100, 100, size=n), with \$n\in [10, 100]\$. All measurements were done in Python 2.7.13. I put your last two snippets into functions: def centered_average_sum(numbers): return (sum(numbers) - max(numbers) - min(numbers)) // (len(numbers) - 2) def centered_average_sort(numbers): centered_numbers = sorted(numbers)[1:-1] return sum(centered_numbers) // len(centered_numbers) Note that I fixed a superfluous closing bracket on the second function.
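One behavioral detail worth pinning down with a quick check: with duplicated extremes, exactly one copy of each is dropped, as the problem statement requires. A small sketch (the helper name is mine):

```python
def centered_average(numbers):
    # Remove exactly one smallest and one largest value, then average.
    trimmed = sorted(numbers)[1:-1]
    return sum(trimmed) // len(trimmed)

# One copy of each extreme is ignored, even when duplicated:
assert centered_average([1, 1, 9, 9, 5]) == (1 + 5 + 9) // 3  # == 5
assert centered_average([-10, -4, -2, -4, -2, 0]) == -3
```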
{ "domain": "codereview.stackexchange", "id": 27806, "tags": "python, python-3.x, reinventing-the-wheel, statistics" }
How AC circuit "Knows" how to divide the voltage?
Question: Let's say we have an AC circuit with EMF source of $12V$ and a resistor of $3 \Omega$. Then a voltage of $12V$ falls on the resistor. Now if we add another resistor of $3 \Omega$, the circuit "knows" to drop only $6V$ on each resistor. I heard the fluid in pipes analogy but I'm looking for an electrodynamic explanation. Answer: For my explanation, I will assume a DC circuit to have defined polarities to talk about. For an AC current, the explanation will hold true in any small time interval where the voltage is nearly constant. So we have a conductor on the potential 0V, which is connected to two resistors $R_1$ and $R_2$ of the same resistance in series, and on the far side of $R_2$ the potential is 12V. Let's assume the conductor between $R_1$ and $R_2$ was on the potential 7V (or any other voltage $U$ with 6V $<U \leq $ 12V), so the voltage drops by 7V on $R_1$ and by 5V on $R_2$. This means that the current through $R_1$ is higher, so more electrons flow into the conductor between the two resistors through $R_1$ than leave through $R_2$, and negative charge accumulates there. As a consequence, its potential drops until it reaches the equilibrium point of 6V.
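The feedback argument in the answer can be checked with a toy numerical model: give the node between the resistors a tiny stray capacitance and let the imbalance of currents charge it. All component values below are illustrative, not from the question:

```python
# Toy relaxation of the answer's argument: the node between R1 and R2
# has a tiny stray capacitance C, so C dV/dt = (12 - V)/R - V/R.
R, C, dt = 3.0, 1e-9, 1e-11   # ohms, farads, seconds -- made-up values
V = 7.0                        # start off-equilibrium, as in the answer
for _ in range(10_000):
    i_in = (12.0 - V) / R      # current arriving through R2
    i_out = V / R              # current leaving through R1
    V += dt * (i_in - i_out) / C
# V has relaxed to the voltage-divider value of 6 V
```

Starting from 7 V the outflow exceeds the inflow, so the node's potential falls and the imbalance shrinks until both currents match at 6 V, exactly the equal split the circuit "knows".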
{ "domain": "physics.stackexchange", "id": 82384, "tags": "electric-circuits, electrical-resistance, voltage" }
Skobelev graviton-photon cross section diverges
Question: Skobelev calculated in 1975 the cross-section of graviton+photon to graviton+photon and the graviton+graviton to photon+photon. For the latter, he gave the integrated cross-section, but for the first, I suppose, he only provided the differential cross section: $$ \dfrac{d\sigma}{d(\cos \theta)}=\dfrac{k^4 \omega^2}{64\pi}\dfrac{1+\cos^8(\theta/2)}{\sin^4(\theta/2)}$$ Obviously, this cross-section DIVERGES at angles zero or pi, so, how could this cross-section be understood? Is there any method that allowed us to give a finite value? How could we regularize this quantity? Maybe by residues in complex variable? Remark: It is curious for me that gg to photon+photon does not diverge but changing to graviton+photon to graviton+photon diverges. Answer: You don’t need gravitons. Even simple Rutherford scattering has a similar “infrared” divergence. It is a hallmark of long-range forces. See scattering singularity. As a comment by @dmckee elaborating on his answer explains, “$\theta$ only equals zero in the limit where $b\rightarrow\infty$. The beam isn't that big (or if it is the scattering that you want to measure only dominates in a finite region), so the divergence can't be measured. To put it another way, to directly measure the forward cross-section you use a downstream detector which has a finite size which limits the range of $b$ (impact parameter) over which you integrate. The signal is always finite.”
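The Rutherford-like character of the forward divergence can be seen numerically: integrating the quoted angular factor with all prefactors dropped, the total grows like $1/\varepsilon$ as the small-angle cutoff $\varepsilon$ shrinks. This sketch is my own, not from the paper:

```python
# Rough numerical check (all prefactors dropped) of the forward divergence:
# integrate (1 + cos^8(theta/2)) / sin^4(theta/2) in u = cos(theta).
def integrand(u):
    c2 = (1 + u) / 2             # cos^2(theta/2) in terms of u = cos(theta)
    s2 = (1 - u) / 2             # sin^2(theta/2)
    return (1 + c2**4) / s2**2

def sigma(eps, n=200_000):
    # Trapezoidal rule over u in [-1, 1 - eps]; eps cuts off theta = 0.
    a, b = -1.0, 1.0 - eps
    h = (b - a) / n
    total = 0.5 * (integrand(a) + integrand(b))
    for i in range(1, n):
        total += integrand(a + i * h)
    return total * h

# Halving the angular cutoff roughly doubles the total: a 1/eps divergence,
# just like the Coulomb case, removed in practice by finite detector size.
ratio = sigma(5e-4) / sigma(1e-3)
```

Note that at $\theta=\pi$ the quoted expression is actually finite ($\sin^4(\pi/2)=1$); only the forward direction $\theta\to 0$ diverges, which is what the detector-size argument regulates.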
{ "domain": "physics.stackexchange", "id": 58390, "tags": "gravity, scattering-cross-section, quantum-gravity" }
Contradicting Basic and Force definitions of types of Equilibrium
Question: This question is neither a check-my-work question nor a homework question, it is a conceptual doubt in a question I found and I have attached a solution just for reference to show how I drew my conclusions. All my previous posts containing this question have been closed due to some reason like this one or "lacks clarity" (please comment if you cannot understand my post and do not vote to close it, I will update the post ASAP because I am apparently at the risk of being banned from asking questions). So I have read that a particle is in Stable Equilibrium when the potential energy is minimum, Unstable Equilibrium when the potential energy is maximum and Neutral Equilibrium when the potential energy is neither maximum nor minimum. All of these definitions have the condition that net force on the particle = $0$ (or the particle is at equilibrium, i.e. $\frac {dU}{dr} = 0$). Some extensions of these concepts or different definitions of Equilibrium are: "A particle is said to be in Stable Equilibrium when a slight displacement of the particle from equilibrium position makes it oscillate about that position (which means it returns back to the mean position) or that the forces acting on the particle are in the opposite direction to the displacement. The particle is in Unstable Equilibrium when a slight displacement of the particle from equilibrium position makes it go farther away from the position (which means it cannot come back to the mean position) or the forces acting on the particle are in the direction of displacement." So the question that I have my doubt in is as follows: "Two positive charges + q each are fixed at points (-a, 0) and (a, 0). A third charge + Q and mass m is placed at origin. What is the type of equilibrium when a charge Q is given a small displacement along the direction of the line x = y?" 
The answer given is "no type of equilibrium". Firstly, I don't understand if the problem is stating whether the particle is constrained to move in the direction of the line x = y only (in which case the equilibrium is surely stable) or just the displacement is in the direction of the line x = y and the particle is free to move after that (let us assume the latter case). I know there has to be some type of equilibrium because the particle does have net force = $0$ at the origin (or $\frac {dU}{dr} = 0$ at the origin) (any equilibrium position has an equilibrium type associated with it). Here is my working: Let charge Q be displaced to the point (x,x), with x = A. $F_1$ (the force on charge Q due to the charge q at $(a,0)$) = $\frac {KQq}{((a-x)^2 + y^2)^{3/2}} ((x-a)\hat i + y\hat j)$ $F_2$ (the force on charge Q due to the charge q at $(-a,0)$) = $\frac {KQq}{((x+a)^2 + y^2)^{3/2}} ((x+a)\hat i + y\hat j)$ For small y we can neglect $y^2$ in comparison to $(a-x)^2$ $\therefore F_1 = \frac {KQq}{((a-x)^2)^{3/2}} ((x-a)\hat i + y\hat j) = \frac {KQq}{((a-x)^3)} ((x-a)\hat i + y\hat j)$ $\therefore F_2 = \frac {KQq}{((x+a)^2)^{3/2}} ((x+a)\hat i + y\hat j) = \frac {KQq}{((x+a)^3)} ((x+a)\hat i + y\hat j)$ Taking a common in the denominator we get $F_1 = \frac {KQq}{a^3((1-\frac {x}{a})^3)} ((x-a)\hat i + y\hat j) = \frac {KQq}{a^3}{((1-\frac {x}{a})^{-3})} ((x-a)\hat i + y\hat j)$ $F_2 = \frac {KQq}{a^3((1+\frac {x}{a})^3)} ((x+a)\hat i + y\hat j) = \frac {KQq}{a^3}{((1+\frac {x}{a})^{-3})} ((x+a)\hat i + y\hat j)$ Using the binomial approximation $(1+x)^n = 1 + nx$ for $x<<1$ in $(1-\frac {x}{a})^{-3}$ to get $(1+3\frac{x}{a})$ and in $(1+\frac {x}{a})^{-3}$ to get $(1-3\frac{x}{a})$ So we get $F_1 = \frac {KQq}{a^3}(1+3\frac{x}{a})((x-a)\hat i + y\hat j) = \frac {KQq}{a^3}((x-a+3\frac{x^2}{a}-3x)\hat i + (y+3\frac{xy}{a})\hat j)$ $F_2 = \frac {KQq}{a^3}(1-3\frac{x}{a})((x+a)\hat i + y\hat j) = \frac {KQq}{a^3}((x+a-3\frac{x^2}{a}-3x)\hat i + (y-3\frac{xy}{a})\hat j)$ Let F = $F_{net} = 
F_1 + F_2 = \frac {KQq}{a^3}((x-a+3\frac{x^2}{a}-3x)\hat i + (y+3\frac{xy}{a})\hat j) + \frac {KQq}{a^3}((x+a-3\frac{x^2}{a}-3x)\hat i + (y-3\frac{xy}{a})\hat j) = \frac {KQq}{a^3}(-4x\hat i + 2y\hat j)$ Let $F_x$ = component of F in the x direction and $F_y$ = component of F in the y direction. $F_x = \frac {-4KQqx}{a^3}$ and $F_y = \frac {2KQqy}{a^3}$ $F_x = ma_x = m\frac {d^2x}{dt^2} = \frac {-4KQqx}{a^3}$ and $F_y = ma_y = m\frac {d^2y}{dt^2} = \frac {2KQqy}{a^3}$ $\therefore \frac {d^2x}{dt^2} = -2w^2x$ and $\frac {d^2y}{dt^2} = w^2y$ $\therefore x = A\cos(\sqrt2 wt)$ and $y = Ae^{wt}$ (because x and y are both equal to A at time $t = 0$) If we go by the definition that the force is in or opposite to the direction of displacement, then we get that it does not come under any type of equilibrium (because the force comes out to be $\frac {2KQqx}{a^3}(-2\hat i + \hat j)$ for a small displacement of the particle from $(0,0)$ to (x,x). This is neither along nor opposite to the displacement from the mean position.) Now, if we go by the definition that the particle is in Unstable Equilibrium if it cannot return to the mean position after a small displacement, then the charge Q is in Unstable Equilibrium. (For a small displacement the (x,y) coordinates of the particle as a function of time are $(A\cos(\sqrt 2 wt), Ae^{wt})$ where $w = \sqrt \frac {2KQq}{ma^3}$.) So the particle can never return to the mean position because the y coordinate increases exponentially, and when the small-displacement assumption is not considered, the y coordinate still keeps increasing, making it impossible for the particle to return (the force in the y direction is in the direction of the displacement in y). Answer: I think a good way of answering your question is to look at a potential diagram, which will be three dimensional. 
Using your symbols with the two $q=1$ fixed charges at positions $(-1,0)$ and $(+1,0)$, and setting the constant in Coulomb's law equal to one, the potential at a position $(x,y)$ is $V= \dfrac{1}{\sqrt{y^2+(1+x)^2}}+\dfrac{1}{\sqrt{y^2+(1-x)^2}}$ and you can think of this as a plot of the potential energy, $U$, if the third charge $Q=+1$. WolframAlpha can be used to do the plotting. You can now see that if the third positive charge is placed anywhere other than position $(0,0)$ it will "roll down a hill" and be in a position of unstable equilibrium. The exception at position $(0,0)$ is a saddle point where movement only along the $y=0$ line from position $(0,0)$ increases the potential energy and so the restoring force brings the positive charge back to position $(0,0)$, which along that direction is a position of stable (static) equilibrium. However, any movement off the $y=0$ line will result in the charge "rolling down the hill", which is a characteristic of unstable equilibrium. So what happens if the third charge is negative? The potential energy plot becomes negative as shown below using WolframAlpha. Again all positions except $(0,0)$ are unstable with a saddle point at position $(0,0)$, with motion only along the $x=0$ line showing a position of stable equilibrium at $(0,0)$.
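The saddle point can also be verified numerically instead of by plotting. A small sketch with unit charges and Coulomb constant 1 (my normalization, matching the answer's formula for V):

```python
# Numeric check of the saddle described in the answer: unit charges at
# (+1, 0) and (-1, 0), k = 1, third charge +1, so U(x, y) equals V(x, y).
def U(x, y):
    return 1 / (y**2 + (1 + x)**2) ** 0.5 + 1 / (y**2 + (1 - x)**2) ** 0.5

d = 1e-3  # small finite-difference step
uxx = (U(d, 0) - 2 * U(0, 0) + U(-d, 0)) / d**2   # curvature along y = 0
uyy = (U(0, d) - 2 * U(0, 0) + U(0, -d)) / d**2   # curvature along x = 0
# uxx > 0 (restoring along the line of charges) while uyy < 0 (downhill):
# opposite-sign curvatures are exactly the saddle-point signature.
```

The values come out close to +4 and -2, which also match the force components $-4x$ and $+2y$ found in the question's own expansion (the y-direction force pushes the charge away, so the overall equilibrium is unstable).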
{ "domain": "physics.stackexchange", "id": 98276, "tags": "newtonian-mechanics, forces, electrostatics, vectors, equilibrium" }
Navigation: Robot cannot reach goal
Question: Hello everyone, I'm using the navigation stack on a TurtleBot-based robot. When launching move_base.launch, I can send a 2D nav goal in RViz but the robot cannot find its way to the goal; it just makes small movements. It only reaches the goal when the change is in orientation, not in position. I looked for errors with roswtf and I get: ERROR The following nodes should be connected but aren't: * /move_base->/move_base (/move_base/global_costmap/footprint) * /move_base->/move_base (/move_base/local_costmap/footprint) How can I fix it? Originally posted by alejandrolri on ROS Answers with karma: 16 on 2021-09-15 Post score: 0 Original comments Comment by Mike Scheutzow on 2021-09-15: Typing turtlebot footprint into the search bar at the top of this page gets 259 hits. Please review some of those existing answers. If you figure it out, and it's different from what @osilva said, you can help out by answering your own question. Answer: Thank you for your help. There wasn't any problem with topics. Tuning navigation parameters solved my issue. Originally posted by alejandrolri with karma: 16 on 2021-09-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by osilva on 2021-09-20: Glad you found a solution
{ "domain": "robotics.stackexchange", "id": 36916, "tags": "ros, navigation, ros-melodic, turtlebot" }
Shouldn't the "even parity" function map 1101 to 0?
Question: From the book Computer organization and design by Patterson & Hennessy: Parity is a function in which the output depends on the number of 1s in the input. For an even parity function, the output is 1 if the input has an even number of ones. Suppose a ROM is used to implement an even parity function with a 4-bit input. Then the contents of the ROM are $$\text{Address} \ 0 : \ 0 \\ \text{Address} \ 1: \ 1 \\ \text{Address} \ 2 : \ 0 \\ \text{Address} \ 3 : \ 1 \\ \vdots \\ \text{Address} \ 13 : \ 1 \\ \text{Address} \ 14 : \ 0 \\ \text{Address} \ 15 : \ 1$$ As per my understanding, a ROM which implements the even parity function should store 0 at both the Address 1 and the Address 2, 1 at the Address 3, ... 0 at both the Address 13 and 14, then 1 at the Address 15, for the Address $k$ to represent the map-value of $(k)_{\text{base}2}$. Going by this, the concept defined above is not clear enough. Can someone clarify the doubt? Answer: In this context, implementing something as a ROM just means a look-up table. If you want to know the parity of $x$, you put the binary coding of $x$ on the ROM's address wires and the value you read out is the value stored at that memory location within the ROM, which will be either 0 or 1. And, yes, the contents of the ROM that you've quoted are wrong: they seem to be implementing parity in the sense that the output is 1 if, and only if, the input is an odd number, instead of implementing the even parity function.
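The answer's diagnosis is easy to cross-check: the book's table coincides with "is the input an odd number" (the low address bit), not with the even-parity-of-ones function it defines.

```python
# Correct even-parity table versus what the book tabulated.
even_parity = [1 if bin(a).count("1") % 2 == 0 else 0 for a in range(16)]
low_bit = [a & 1 for a in range(16)]  # 0,1,0,1,... matches the book's listing

assert even_parity[13] == 0  # 13 = 0b1101 has three 1s: odd count, so 0
assert low_bit[13] == 1      # yet the book stores 1 at address 13
```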
{ "domain": "cs.stackexchange", "id": 9671, "tags": "logic" }
Basic user interface, run nodes,
Question: Is it possible for me to create an User Interface which allow me to run some nodes by just clicking buttons without typing codes in the terminal? Originally posted by mree on ROS Answers with karma: 41 on 2015-01-02 Post score: 0 Answer: There is a package called node_manager_fkie for managing nodes, topics and so on with a GUI. You can use launch files as well as the buttons in the GUI to launch the nodes that are not running or crashed. Originally posted by emreay with karma: 90 on 2015-01-02 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by mree on 2015-01-14: thanks for the reply. It helps me in a certain way. However, I failed to run some of the nodes. The error "usr/bin/screen is missing' appears. How can I fix this error? Does this have something to do with the OS that I am using?
{ "domain": "robotics.stackexchange", "id": 20467, "tags": "ros, nodes, create, gui" }
Effective use of multiple streams
Question: I am experimenting with streams and lambdas in Java 8. This is my first serious foray using functional programming concepts; I'd like a critique on this code. This code finds the Cartesian product of three sets of ints. My questions/complaints about this snippet are: Are nested flatMaps really the best (most understandable, most CPU/memory efficient) way to do an arbitrary Cartesian product? Or should I stick to imperative style for Cartesian products? Is there a better way to format nested lambdas (see, e.g., my use of flatMap)? I just learned about Collectors; those might be what I need. I'm still learning how to best combine these things. long count = IntStream.rangeClosed(0,9) /* 0 <= A <= 9 */ .parallel() .mapToObj(Integer::valueOf) .flatMap(a -> IntStream.rangeClosed(0,9) /* 0 <= B <= 9 */ .mapToObj(Integer::valueOf) .flatMap(b -> IntStream.rangeClosed(0,9) /* 0 <= C <= 9 */ .mapToObj(c -> (new Product()).A(a).B(b).C(c)) ) ).count(); System.out.println("Enjoy your Cartesian product." + count); UPDATE: I deliberately omitted boilerplate code and class/method definitions, for brevity. The only important thing about Product is that it stores three ints. Answer: Your code is odd in the sense that it is going to a lot of effort to calculate that 10³ is 1000. I understand why you are doing it, but I took the liberty of changing the count() terminating function and replacing it with: StringBuilder sb = new StringBuilder(); ....... .forEachOrdered(prod -> sb.append(prod.toString()).append("\n")); This way we can store each result, instead of just counting it. There are a few things to go through here. Product This class looks awkward: .mapToObj(c -> (new Product()).A(a).B(b).C(c)) Why not just have a useful constructor on Product like: public Product(int a, int b, int c) { super(); this.a = a; this.b = b; this.c = c; } Then, your mapping would look like: .mapToObj(c -> new Product(a, b, c)) Magic Numbers You have 0 and 9 as magic numbers in multiple places. 
These should be declared as (effective) constants, or calculated somehow. With the 9 value especially, if you want to change the size of the product you have to change it in three places. int vs. Integer You start off with an IntStream, but then convert it to a Stream<Integer>. Why? If you can leave things as primitives, you should. The actual stream The most concerning aspect is the stream itself. The flatMap operation is not doing what I think you think it does.... Let's consider the contents of the stream as it goes through things: 'a' range from 0-9 IntStream convert each int to an Integer for each Integer, flatMap that Integer the flatMap creates an IntStream which it converts to a Stream even though we are flatMapping the 'a' value, we are not using the 'a' value in the mapping. All we are really doing is creating a new 'b' value, and who cares that the stream now has the b values streaming through.... with these 'b' values on the stream, we then flatMap again.... again, we ignore what the actual 'b' value is, but we generate a new 'c' value in a third nested stream, and then we combine the 'a' and 'b' values which are 'in scope', with the 'c' value from the stream, and we make a Product. The point of what I am saying is that you actually are only using the Stream in the most inside IntStream, the 'c' stream. All the other streams are just 'tokens' that you need to count the events you are doing. A for-loop is the right thing for this construct (if it was not for the parallelism). If you include the parallelism, then a fork-join process is right. Not a stream. 
My Attempt I am not claiming this is right, or best practice, but consider this attempt to get essentially the same result as you: /** Append an int to an array after the last array item **/ public static final int[] append(int[] base, int val) { int[] ret = Arrays.copyOf(base, base.length + 1); ret[base.length] = val; return ret; } /** Stream an int range and map to an appended array */ private static final Stream<int[]> appendInts(int[] base, int size) { return IntStream.rangeClosed(0, size).mapToObj(a -> Cartesian.append(base, a)); } public static void main(String[] args) { final int last = 9; final StringBuilder sb = new StringBuilder(); IntStream.rangeClosed(0,last) /* 0 <= A <= 9 */ .parallel() .mapToObj(a -> new int[]{a}) // stream is int[1] array .flatMap(ab -> appendInts(ab, last)) // stream is int[2] array .flatMap(abc -> appendInts(abc, last)) // stream is int[3] array .forEachOrdered(res -> sb.append(Arrays.toString(res)).append("\n")); System.out.println("Enjoy your Cartesian product." + sb.toString()); } Update I have rewritten the OP's code to extract the stream preparations. Consider the following code: // convert two input values to a stream consisting of an arrays of int BiFunction<Integer, Integer, Stream<int[]>> stage3 = (a,b) -> { return IntStream.rangeClosed(0, last).mapToObj(c -> new int[]{a, b, c}); }; // convert an input value to a stream consisting of an arrays of int Function<Integer, Stream<int[]>> stage2 = a -> { Stream<Integer> s = IntStream.rangeClosed(0, last).boxed(); Stream<int[]> ret = s.flatMap(b -> stage3.apply(a, b)); return ret; }; IntStream.rangeClosed(0,last) /* 0 <= A <= 9 */ .parallel() .boxed() .flatMap(a -> stage2.apply(a)) .map(r -> Arrays.toString(r)) .forEachOrdered(System.out::println); Which is essentially what the OP's code does.
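For comparison outside Java, it may be worth noting how small the same Cartesian product is in a language with a library primitive for it; in Python the whole nested-flatMap structure collapses into a single itertools call:

```python
# The same 10 x 10 x 10 Cartesian product, via the standard library.
from itertools import product

triples = list(product(range(10), repeat=3))  # every (a, b, c) with 0..9
assert len(triples) == 1000                   # the count the stream computed
assert triples[0] == (0, 0, 0) and triples[-1] == (9, 9, 9)
```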
{ "domain": "codereview.stackexchange", "id": 6732, "tags": "java, functional-programming, stream, lambda" }
What's the difference between PointCloudConstPtr and PointCloud::ConstPtr
Question: In subscriber callbacks our software stack uses a mixture of two function signatures. In the case of subscribing to a sensor_msgs::PointCloud, we use the following two signatures: void callback(const sensor_msgs::PointCloud::ConstPtr& msg) void callback(const sensor_msgs::PointCloudConstPtr& msg) These are both defined in the autogenerated header PointCloud.h template <class ContainerAllocator> struct PointCloud_ { // ---- Snip ---- typedef boost::shared_ptr< ::sensor_msgs::PointCloud_<ContainerAllocator> const> ConstPtr; }; typedef ::sensor_msgs::PointCloud_<std::allocator<void> > PointCloud; typedef boost::shared_ptr< ::sensor_msgs::PointCloud > PointCloudPtr; typedef boost::shared_ptr< ::sensor_msgs::PointCloud const> PointCloudConstPtr; Expanding the typedefs myself, I get the following sensor_msgs::PointCloud::ConstPtr boost::shared_ptr< ::sensor_msgs::PointCloud_<ContainerAllocator> const> sensor_msgs::PointCloudConstPtr boost::shared_ptr< ::sensor_msgs::PointCloud_<std::allocator<void> > const> It's not clear to me what ContainerAllocator gets set to in case 1, and I'm also not sure what the purpose of ContainerAllocator is. Why would I choose one incantation over the other? Originally posted by vpradeep on ROS Answers with karma: 760 on 2018-06-07 Post score: 3 Answer: These are the same. PointCloud_::ConstPtr is templated on the ContainerAllocator, but the PointCloud typedef specifies a default container allocator of std::allocator<void>, so they're the same. I'm not sure why there are two aliases for the same symbol. Originally posted by ahendrix with karma: 47576 on 2018-06-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by vpradeep on 2018-06-07: Makes sense. You're right that PointCloud::ConstPtr uses std::allocator<void>, and hence they're the same. I guess I just tripped myself up when attempting to expand the typedefs by hand. Thanks for the clarification.
{ "domain": "robotics.stackexchange", "id": 30985, "tags": "ros-kinetic, roscpp" }
Two way communication using asymmetric encryption
Question: According to image 1, for two way communication, each person needs to generate a pair of keys. However, according to image 2, only one pair of keys is required for two way communication. Thanks to anyone who can settle my confusion. Source 1 If a two-way communication is required between all five workers, then they all need to generate their own matching public and private keys. Once this is done, all users then need to swap public keys so that they can send encrypted documents, files or messages between each other. Each worker will then use their own private key to decrypt information being sent to them. Source 2 the other key is known as the private key. Data encrypted with the public key can only be decrypted with the private key, and data encrypted with the private key can only be decrypted with the public key. Public key encryption is also known as asymmetric encryption. It is widely used, especially for TLS/SSL, which makes HTTPS possible. Answer: On Source 1: We don't use public-private keys to encrypt files or documents. We use public-key cryptography for hybrid encryption or digital signatures. An example of hybrid encryption: RSA as a Key Encapsulation Mechanism (KEM) and AES-GCM as a Data Encapsulation Mechanism (DEM). This composition of a KEM and a DEM (an authenticated cipher serves as the DEM) provides the standard of IND-CCA2/NM-CCA2: ciphertext indistinguishability and nonmalleability under adaptive chosen-ciphertext attack. That is the minimum requirement for modern cryptography. Similarly, one can achieve this with a Diffie-Hellman key exchange (DHKE). The two parties exchange keys via DHKE and generate the session key using a Key Derivation Function. With the derived key, we use a modern mode of operation like AES-GCM or ChaCha20-Poly1305. Both are Authenticated Encryption modes that provide us with Confidentiality, Integrity, and Authentication. 
With libsodium you can start to use crypto_box_curve25519xsalsa20poly1305, based on elliptic-curve DHKE (Curve X25519), encryption with the XSalsa20 stream cipher, and authentication with the Poly1305 MAC. Digital signatures: here I only warn about a common mistake with RSA. RSA decryption is not a signature. An RSA signature requires special padding (RSA-PSS), and using the same RSA setup for both encryption and signatures is not advised. The sources tell almost nothing about how TLS actually works. Source 2 only talks about public-private key usage. However, there is a misconception there. You don't encrypt with your private key. The sole purpose of encryption is confidentiality. Your public key is public, so everybody can use your public key to decrypt a message encrypted with your private key! To send an encrypted message to you, you either share a common secret key with the other party - which we don't prefer - or have a private and public key pair. With this pair, once you publish your public key, everybody can send you a message, even anonymously. If you don't want anonymous messages, then the other side must have a public-private key pair whose public key you know belongs to them. For the rest, use a hybrid cryptosystem as above. In effect, both sources describe the same scheme: each user has at least one public-private key pair and distributes their public key. Of course, you may need a Certificate Authority to prove ownership of the public key.
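To make the DHKE → KDF → symmetric-key flow concrete, here is a deliberately toy, stdlib-only Python sketch. The small classical group stands in for X25519 and a plain hash stands in for a real KDF; all parameters here are illustrative and INSECURE, chosen only to show the shape of the protocol:

```python
# Toy Diffie-Hellman key agreement followed by key derivation.
# Do NOT use in practice: real code should use X25519 + an AEAD
# (e.g. libsodium's crypto_box), not a hand-rolled group.
import hashlib
import secrets

p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a 64-bit prime (far too small for real use)
g = 5                    # generator, illustration only

a = secrets.randbelow(p - 2) + 2   # Alice's private value
b = secrets.randbelow(p - 2) + 2   # Bob's private value
A = pow(g, a, p)                   # Alice's public value (sent to Bob)
B = pow(g, b, p)                   # Bob's public value (sent to Alice)

shared_alice = pow(B, a, p)        # both sides compute g**(a*b) mod p
shared_bob = pow(A, b, p)

def kdf(shared: int) -> bytes:
    # Stand-in key derivation: hash the shared secret to a 256-bit key.
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

key = kdf(shared_alice)            # feed this to an AEAD such as AES-GCM
```

Each side only ever transmits its public value, yet both derive the same session key, which is then used with an authenticated cipher exactly as the answer describes.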
{ "domain": "cs.stackexchange", "id": 15493, "tags": "security, encryption" }
Identifying the gas given ratio of its specific heats
Question: The ratio of the heat capacities $\frac{C_\mathrm p}{C_\mathrm v}$ for one mole of a gas is $1.67$. The gas is: a) $\ce{He}$ b) $\ce{H2}$ c) $\ce{CO2}$ d) $\ce{CH4}$ I know how to answer this by analysing the degrees of freedom of the gas molecule. Is there any other way to derive the ratio? Answer: I'd go about such a question looking at the easiest thing first: ideal gases. So, what do you know about the relations between $C_{p}$ and $C_{v}$ (both being the molar heat capacities at constant pressure and constant volume, respectively) for ideal gases? For one thing, it is well known that \begin{equation} C_{p} - C_{v} = R \end{equation} where $R$ is the universal gas constant. You can rearrange this equation for the ratio you are after: \begin{equation} \frac{C_{p} - C_{v}}{C_{v}} = \frac{R}{C_{v}} \\ \frac{C_{p}}{C_{v}} - 1 = \frac{R}{C_{v}} \\ \frac{C_{p}}{C_{v}} = \frac{R}{C_{v}} + 1 \ . \end{equation} Now, you need to find $C_{v}$. Here, the degrees of freedom that you mentioned come into the game, since the equation that helps us out is the equipartition law, stating that each excited quadratic degree of freedom contributes $\frac{1}{2} R T$ to the molar internal energy $U$: \begin{equation} U = \frac{1}{2} R T (f_{\text{trans}} + f_{\text{rot}} + 2 f_{\text{vib}}) \end{equation} with $f_{\text{trans}}$, $f_{\text{rot}}$ and $f_{\text{vib}}$ being the quadratic degrees of freedom for translation, rotation and vibration, respectively (each vibration counts twice because it has both a kinetic and a potential energy term). It is important to note that only excited degrees of freedom contribute to $U$. Which ones are active depends heavily on temperature. Vibrations are usually not excited at room temperature (there are exceptions, e.g. iodine); they usually need temperatures on the order of $1000 \, \text{K}$, but that varies greatly. Rotations are much easier to excite; I know of no example where they aren't excited at room temperature. Translations take very little energy to excite and can always be considered active.
If all degrees of freedom are active you get: For linear molecules $f_{\text{trans}} = 3$, $f_{\text{rot}} = 2$ and $f_{\text{vib}} = 3N - 5$, where $N$ is the number of atoms in the molecule. For non-linear molecules $f_{\text{trans}} = 3$, $f_{\text{rot}} = 3$ and $f_{\text{vib}} = 3N - 6$. Since $C_{v} = \frac{dU}{dT}$ you get \begin{equation} C_{v} = \frac{1}{2} R (f_{\text{trans}} + f_{\text{rot}} + 2 f_{\text{vib}}) \ . \end{equation} Substituting this into the equation for the ratio between $C_{p}$ and $C_{v}$ yields \begin{equation} \frac{C_{p}}{C_{v}} = \frac{2}{(f_{\text{trans}} + f_{\text{rot}} + 2 f_{\text{vib}})} + 1 \ . \end{equation} Now, you only have to know how many degrees of freedom each of the molecules has and plug them into the equation.
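The bookkeeping above can be turned into a few lines of code. This is a sketch that, as the answer argues, assumes room temperature, i.e. vibrations frozen out:

```python
# Cp/Cv from equipartition: Cv = (R/2) * (f_trans + f_rot + 2*f_vib),
# hence Cp/Cv = 1 + 2 / (f_trans + f_rot + 2*f_vib).
def gamma(f_trans, f_rot, f_vib=0):
    return 1 + 2 / (f_trans + f_rot + 2 * f_vib)

# Room temperature: vibrational modes assumed frozen out.
gases = {
    "He":  gamma(3, 0),  # monatomic: translation only
    "H2":  gamma(3, 2),  # linear diatomic
    "CO2": gamma(3, 2),  # linear triatomic
    "CH4": gamma(3, 3),  # non-linear
}
# gamma = 1.67 = 5/3 singles out the monatomic gas, He.
```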
{ "domain": "chemistry.stackexchange", "id": 582, "tags": "physical-chemistry, kinetic-theory-of-gases" }
urdf_parser check_urdf in groovy
Question: I can't find any urdf_parser binaries in the Groovy release. Is it hidden somewhere, or didn't it compile for this release? Edit 1: Is there a tool like urdf_parser in Groovy? Originally posted by nunojpg on ROS Answers with karma: 9 on 2013-01-08 Post score: 0 Answer: This package is deprecated. You are supposed to use the urdf-dom dependency directly instead of this package. As far as I know, this is the source code the binaries are compiled from: https://github.com/ros-gbp/robot_model-release/tree/release/urdf_parser So I guess there are no binaries in Groovy. Originally posted by kalectro with karma: 1554 on 2013-01-08 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12308, "tags": "ros" }
Cached empty collections
Question: I often need to return empty collections. One day I wrote the following to return a cached instance: public static class Array<T> { // As a static field, it gets created only once for every type. public static readonly T[] Empty = new T[0]; } I didn't know about Enumerable.Empty<T>(); maybe it didn't exist back then. Although I know about it now, I still use this one. There are still many functions in the BCL that need an array instead of IEnumerable<T>, IList<T> or IReadOnlyList<T>. An array implements all of these, so it can be used anywhere. // All these variables share the same array's reference. string[] empty1 = Array<string>.Empty; IEnumerable empty2 = Array<string>.Empty; IEnumerable<string> empty3 = Array<string>.Empty; What do you think about this class? Can you see any other advantages/disadvantages of it over Enumerable.Empty<T>()? And about the implementation: do you think caching using a static field would cause any problem? Edit: the Array class now has a static, generic Array.Empty<T>() method, essentially deprecating this implementation. Answer: Your solution is absolutely correct and practical. In fact Enumerable.Empty<T>() also returns an empty array under the hood, just in a slightly different way (they have a separate instance-holder class that is lazily initialized).
{ "domain": "codereview.stackexchange", "id": 26733, "tags": "c#, .net, array, collections" }
Photoelectric Current
Question: I recently learnt that intensity = nhν, where n is the number of photons per unit area. My textbook specifies that the photoelectric current depends only on the number of photons striking per unit time and area. So why does the photoelectric current remain constant when the frequency is changed keeping the intensity constant? I.e., if intensity is kept constant and frequency is doubled, shouldn't the photoelectric current become half its value? Edit: I understand how frequency affects intensity; I just want to know what exactly controls the photoelectric current: intensity or the number of electrons. Answer: You have not quite grasped the concept. If we increase the frequency, we are increasing the energy of an individual photon; we are not increasing the number of photons. Imagine it this way. We have all seen cartoons at some stage of our lives where a character throws balls of energy. If the character faces a powerful villain, it has to send a more powerful ball of energy to defeat the villain compared with the normal minions. Now relate that here: you increased the energy of the photon, which means the electrons will come out with more kinetic energy, but the number of photons colliding with an electron is still one, so the speed with which an electron comes out changes, but not the current. Now let's come to intensity. Say the hero sent hundreds of balls of energy in the form of a beam towards the villain, but all of them have the same energy. Relate that here: you increased the intensity. This means the number of photons in the beam has increased, but they are still of the same energy level, i.e.
hν, and we know that one photon strikes one electron, which is responsible for the ejection of that electron. So the more photons there are, the more collisions of photons with loosely bound electrons, and the more electrons come out. Increasing the frequency only increases the energy of each photon, but each photon will still collide with only one electron. So, to increase the photoelectric current, we need more electrons, since the flow of electrons constitutes the current, and the number of electrons coming out increases only if we increase the number of photons falling on the plate. Hope you found it helpful.
{ "domain": "physics.stackexchange", "id": 55807, "tags": "electrons, electric-current, frequency, intensity" }
Does a non-terminal alkyne react with sodamide?
Question: Hydrocarbon (A), $\ce{C6H10}$, on treatment with $\ce{H2/Ni}$, $\ce{H2}$/Lindlar catalyst and $\ce{Na}$/liquid $\ce{NH3}$ forms three different reduction products (B), (C) and (D) respectively. (A) does not form any salt with ammoniacal $\ce{AgNO3}$ solution but forms a salt (E) on heating with $\ce{NaNH2}$ in an inert solvent. Compound (E) reacts with $\ce{CH3I}$ to give (F). Compound (D) on oxidative ozonolysis gives n-butanoic acid along with other products. Give the structures of (A) to (F) with proper reasoning. From the formula of (A) and the details given, it is clear that (A) is an alkyne. I could make out that (B) would be an alkane while (C) and (D) would be the cis and trans alkenes respectively. Now, as (A) does not react with ammoniacal $\ce{AgNO3}$ solution, it is not a terminal alkyne. But then I got confused, as the question says (A) reacts with $\ce{NaNH2}$ to give a salt. Until now, I knew that only terminal alkynes react with $\ce{NaNH2}$. What is it that I am missing? Answer: From the given conditions you should easily guess that (A) is $\ce{hex-2-yne}$. When it is reacted with $\ce{NaNH_2}$ in the presence of heat, it can isomerise to the terminal alkyne by following this mechanism, and then the $\ce{CH_3I}$ addition will simply follow an $\ce{S_N2}$ mechanism. Secondly, ozonolysis will yield an aldehyde, which gets oxidised to the acid in an oxidative medium.
{ "domain": "chemistry.stackexchange", "id": 9705, "tags": "organic-chemistry, organic-reduction" }
Where can I get historical data of tropical longitude of Delta Cancri
Question: I need historical data for the tropical longitude of Delta Cancri. Where can I get it? As of now it is approximately 128 deg from the vernal equinox. Where can I get its historical data, say from 5000 BC to 3000 AD? Answer: Any planetarium software that simulates precession will tell you this, if you know what to ask it. Delta Cancri has a right ascension of 8h 45min (or about 131 degrees). It has an ecliptic longitude of about 129 degrees. These are angles measured from the vernal equinox parallel to the equator and the ecliptic, respectively. I'm not sure how you get 84 degrees; that value is not correct. Note that Delta Cancri was still in close to the same position relative to distant stars 5000 years ago, and will be 1000 years in the future. It has a slow proper motion. But if you mean the ecliptic longitude relative to a moving equinox, then planetarium software (e.g. Stellarium) tells me it was +32 degrees in 5000 BC and will be 143 degrees in 3000 AD. You can then interpolate between those values. Linear interpolation would be reasonable, as the rate of precession has changed only a little over that time period. Planetarium software could be used to get a more precise value. Year | Ecl. Long.
-5000  32.44
-4900  33.79411111
-4800  35.14883333
-4700  36.50416667
-4600  37.86011111
-4500  39.21666667
-4400  40.57383333
-4300  41.93161111
-4200  43.29
-4100  44.649
-4000  46.00861111
-3900  47.36883333
-3800  48.72966667
-3700  50.09111111
-3600  51.45316667
-3500  52.81583333
-3400  54.17911111
-3300  55.543
-3200  56.9075
-3100  58.27261111
-3000  59.63833333
-2900  61.00466667
-2800  62.37161111
-2700  63.73916667
-2600  65.10733333
-2500  66.47611111
-2400  67.8455
-2300  69.2155
-2200  70.58611111
-2100  71.95733333
-2000  73.32916667
-1900  74.70161111
-1800  76.07466667
-1700  77.44833333
-1600  78.82261111
-1500  80.1975
-1400  81.573
-1300  82.94911111
-1200  84.32583333
-1100  85.70316667
-1000  87.08111111
-900  88.45966667
-800  89.83883333
-700  91.21861111
-600  92.599
-500  93.98
-400  95.36161111
-300  96.74383333
-200  98.12666667
-100  99.51011111
0 (= 1 BC)  100.8941667
100  102.2788333
200  103.6641111
300  105.05
400  106.4365
500  107.8236111
600  109.2113333
700  110.5996667
800  111.9886111
900  113.3781667
1000  114.7683333
1100  116.1591111
1200  117.5505
1300  118.9425
1400  120.3351111
1500  121.7283333
1600  123.1221667
1700  124.5166111
1800  125.9116667
1900  127.3073333
2000  128.7036111
2100  130.1005
2200  131.498
2300  132.8961111
2400  134.2948333
2500  135.6941667
2600  137.0941111
2700  138.4946667
2800  139.8958333
2900  141.2976111
3000  142.7
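The linear interpolation suggested in the answer can be sketched as follows. The endpoint values are taken from the table above; the true curve deviates from the straight line by up to about half a degree over the span.

```python
# Tropical (ecliptic) longitude of Delta Cancri by linear interpolation
# between the endpoints quoted in the answer (the precession rate is nearly
# constant over this span, so a straight line is a fair approximation).
def longitude(year, y0=-5000, l0=32.44, y1=3000, l1=142.7):
    return l0 + (l1 - l0) * (year - y0) / (y1 - y0)
```

For example, `longitude(2000)` lands within a few tenths of a degree of the tabulated 128.70.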
{ "domain": "astronomy.stackexchange", "id": 7194, "tags": "star, history, ephemeris, equinox" }
Basic vector addition problem
Question: This is the entire problem: A student adds two vectors with magnitudes of 200 and 40. Taking into account significant figures, which is the only possible choice for the magnitude of the resultant? a. 160 b. 240 c. 200 d. 300 The answer is supposedly C, but it doesn't make sense to me given that 200 + 40 = 240. Could you please explain how I am wrong in my conclusion? Answer: It's a stupid question, but the answer is c) because c) has 1 significant figure. The reason is that 200 and 40 each have one significant figure, and therefore when added together the answer must also have one significant figure only. 200 + 40 = 240, and when rounded to 1 significant figure is 200.
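The rounding step can be sketched with a small helper (`round_sig` is a hypothetical name; this is a minimal sketch of rounding to significant figures):

```python
import math

def round_sig(x, sig=1):
    """Round x to `sig` significant figures (hypothetical helper)."""
    if x == 0:
        return 0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, -exponent + sig - 1)

# 200 + 40 = 240, but to one significant figure the sum reports as 200.
assert round_sig(200 + 40) == 200
```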
{ "domain": "physics.stackexchange", "id": 10073, "tags": "homework-and-exercises, vectors, error-analysis" }
Brzozowski's algorithm for Kleene star NFA
Question: I have some trouble understanding Brzozowski's algorithm for the NFA to minimized-DFA transformation. Or maybe I'm making a mistake in the NFA-to-DFA transformation steps. The algorithm, as described in Wikipedia, is as follows: Reverse the NFA. Transform to DFA. Reverse the DFA. Transform to DFA again. Let's assume we have the following language: A* (zero or more repetitions of A). For this language we may have the following NFA (over-complicated on purpose): I reverse it, then turn the reversed NFA into a DFA using the powerset approach. The epsilon-closure of the initial state 3 is {1, 2, 3}. Closure {1, 2, 3} transits by A to {1, 2}. Closure {1, 2} transits by A to {1, 2}. So we have two new states for these two new closures, and transitions between them as follows: All states are final, as their closures contain state 1. State {1, 2, 3} is a start state, as it contains state 3. This is clearly a DFA that reads the reversed language, which in this case equals the original one, A*. There are redundant states, but I assume that does not contradict the algorithm's requirements. I reverse it again, and turn it into a DFA again. The epsilon-closure of the initial state 3 is {1, 2, 3}. Closure {1, 2, 3} transits by A to {1, 2}. Closure {1, 2} transits by A to {1, 2}. The new closure-states and the topology of the final automaton are the same as in step 2. But this DFA is not minimal. Can you explain to me, please, where I made a mistake? Thanks in advance! Ilya. Answer: I realized this question is a duplicate. Basically, as @Pseudonym mentioned above, when reversing the DFA you should avoid introducing a new starting state for the NFA, even if there were multiple final states in the DFA. Instead, just merge the epsilon closures of each reversed-NFA start state into a single closure and treat it as the start closure during the powerset construction.
For those of you who face this suboptimality issue after reading the Wikipedia article's note ("or add an extra state with ε-transitions to all the initial states, and make only this new state initial"): please be aware that this note is simply wrong.
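The corrected pipeline can be sketched as follows (hypothetical helper names; epsilon transitions are omitted for brevity). Note that `determinize` starts from the merged set of all start states, which is exactly the fix described above, rather than adding an extra ε-start state.

```python
def determinize(alphabet, delta, starts, finals):
    """Subset construction. The start state is the merged set of ALL start
    states -- no extra epsilon-start state is introduced."""
    start = frozenset(starts)
    trans, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q2 for q in S for q2 in delta.get((q, a), ()))
            trans[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return seen, trans, start, {S for S in seen if S & finals}

def reverse(delta, starts, finals):
    """Reverse every transition and swap the start/final sets."""
    rdelta = {}
    for (q, a), targets in delta.items():
        for t in targets:
            rdelta.setdefault((t, a), set()).add(q)
    return rdelta, set(finals), set(starts)

def brzozowski(alphabet, delta, starts, finals):
    """Minimal DFA = determinize(reverse(determinize(reverse(NFA))))."""
    states = trans = start = None
    for _ in range(2):
        delta, starts, finals = reverse(delta, starts, finals)
        states, trans, start, finals = determinize(alphabet, delta, starts, finals)
        # Re-pack the DFA as an NFA for the next round:
        delta, starts = {k: {v} for k, v in trans.items()}, {start}
    return states, trans, start, finals

# NFA for "strings over {a,b} ending in a": the minimal DFA has 2 states.
nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
states, trans, start, finals = brzozowski("ab", nfa, {0}, {1})
```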
{ "domain": "cs.stackexchange", "id": 21107, "tags": "finite-automata" }
Why are only unitary operations allowed in quantum information theory?
Question: In the quantum information theory, any operations for a quantum state have to be unitary operations. Why is this restriction needed? Can't we make a non-unitary operation to a state? I know that a unitary operation is a reversible operation, but I don't even know why it is necessary. Answer: Why is this restriction needed? Here's a pretty simple way to convince yourself this is true: Applying any physical action (up to measurement) on a state essentially means exposing it to a specific Hamiltonian $H$ and using that Hamiltonian to evolve that state in time (the only way to evolve a state in time nontrivially is through the Hamiltonian) through the Schrodinger equation $$H(t)|\psi(t)\rangle = i\partial_t|\psi(t)\rangle.$$ The solution to this equation is given by $$|\psi(t)\rangle=U(t,0)|\psi(0)\rangle,$$ where $U(t,0)$ is an operator that satisfies $$H(t)U(t,0)=i\partial_tU(t,0).$$ The solution of this equation is well known, and is given by the so-called Dyson series $$U(t,0)=\text{T}\exp\left(-i\int_{0}^{t}H(t')\mathrm{d}t'\right),$$ where the $\text{T}\exp$ represents the time-ordered exponential (see wikipedia for the full discussion). The important thing to note is that the Hamiltonian $H$ is a Hermitian operator (by the axioms of quantum mechanics), and consequently $U(t,0)$ is a Unitary operator. Since any physical operation on a state really corresponds to altering the Hamiltonian and using that to evolve the state, we see that any operation applicable to quantum information must be unitary. Can't we make a non-unitary operation to a state? The answer is technically no but realistically yes. As I just demonstrated, any operation on a state that evolves it in time must be unitary. However, that is only if the Hamiltonian is Hermitian. So what would it mean to have a non-Hermitian Hamiltonian? Well, consider dividing our system (Hilbert space) into two subsystems, the internal system $\mathcal{H}_{i}$ and the environment $\mathcal{H}_{e}$. 
That is, $\mathcal{H}=\mathcal{H}_{i}\otimes\mathcal{H}_{e}$. Then we know that $H$ is Hermitian when acting on the entire system $\mathcal{H}$, but we aren't guaranteed that $H$ is Hermitian when acting on just the internal system $\mathcal{H}_i$. In particular, if there is some interaction between the internal system and the environment, states in the internal system can "bleed" into the environment. This effectively leads to a non-Hermitian form of the internal Hamiltonian, with eigenvalues $$E_n-i\Gamma_n,$$ leading to the time evolution $$|n(t)\rangle=e^{-iE_nt}e^{-\Gamma_n t}|n(0)\rangle.$$ That is, the eigenstates in the internal system decay into the environment over time. This effectively leads to a non-unitary time evolution in the internal system. A great reference for this type of "non-Hermitian quantum mechanics" is this book by Nimrod Moiseyev. This effect on non-isolated systems is actually very important in quantum information when you want to actually build a quantum computer. Isolating the environment to maintain unitarity becomes a huge challenge that is still being solved. I hope this helped! If anything was unclear, just leave a comment and I'll try to explain it in more detail.
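The claim that a Hermitian Hamiltonian generates unitary evolution can be spot-checked numerically. For the time-independent single-qubit example $H=\sigma_x$, the evolution operator is $U(t)=e^{-iHt}=\cos t\,I - i\sin t\,\sigma_x$, and $UU^\dagger$ should come out as the identity (a stdlib-only sketch):

```python
import math

# For the Hermitian Hamiltonian H = sigma_x, U(t) = cos(t) I - i sin(t) sigma_x.
def U(t):
    c, s = math.cos(t), math.sin(t)
    return [[complex(c), -1j * s],
            [-1j * s, complex(c)]]

def dagger(M):
    """Conjugate transpose of a 2x2 matrix."""
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.73
prod = matmul(U(t), dagger(U(t)))  # numerically the 2x2 identity
```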
{ "domain": "physics.stackexchange", "id": 57315, "tags": "quantum-information, quantum-error-correction" }
Hydrostatic force on spillway gate?
Question: I'm designing a spillway gate that opens purely mechanically as the water exceeds a specific height; that is, the hydrostatic force compensates for the weight of the gate. As you can see, the gate is hinged at point $A$. When I try to find out how high the water can reach before the gate opens, I use two methods: 1) Write the torque equilibrium; clearly this method gives the right answer. 2) Use the force balance: when the normal component of the hydrostatic force equals the weight of the gate, the gate should move upward and water will flow under it. This method is false, and I have spent an awful amount of time trying to figure out why, but so far I can't find any explanation. Answer: Imagine that the water height is at a point where the force at B is exactly zero but the gate has not yet moved upward. Part of the weight of the gate will now be supported by the hydrostatic force, and part of the weight will be supported by the hinge at point A. In other words, the hydrostatic force must only lift a portion of the total gate weight. Alternatively, if your gate's CG is far enough left, then the hinge at point A will be pushing the gate down.
{ "domain": "physics.stackexchange", "id": 52280, "tags": "homework-and-exercises, fluid-statics" }
Tip Calculator in javascript
Question: Update: revised code. I work at a Starbucks and one of the things that screams at me week after week for automation is the tedious calculation of the tip distribution. So here is my attempt to provide a simple tool for this. (Previous attempt required special software and special instruction to explain how it works.) It has fields for inputting the number of hours for each partner and the amount of money to be dispersed. Clicking "calculate" produces the output with blank lines to help orient the output with the input rows. As a javascript noob, I feel certain I've done some stupids in here I can learn from. <html> <head> <title>Tip Calculator</title> </head> <body onload="loaded()"> <h3>Tip Calculator</h3> Hours:<br> <table id=inputs> </table> <table id=controls> <tr> <td><input type=button value="add row" onclick="add_row()"> <td>Dividend: <input id=dividend type=text size=5 maxlength=5> <td><input type=button value="calculate" onclick="calc()"> </tr> </table> <span id=output colspan=3>&nbsp;</span> </body> <script language="JavaScript"> <!-- var num_inputs = 6; function loaded() { add_row(); } function add_row(){ var table = E('inputs'); var new_row = table.insertRow( table.rows.length ); for( var i=0; i<6; i++) { new_row.insertCell(i).innerHTML = "<input type=text size=5 maxlength=5>"; } } function calc(){ var sum = 0; var inputs = E('inputs').getElementsByTagName('input'); for( var i=0; i < inputs.length; i++ ){ sum += Number(inputs[i].value); } rate = E('dividend').value / sum; E('output').innerHTML = "Total Hours: " + sum + "<br>"; E('output').innerHTML += "Rate (Dividend/Total Hours): " + rate + "<br>"; for( var i=0; i < inputs.length; i++ ){ if( (i % 6) == 0 ) E('output').innerHTML += "<br>"; share = Number( inputs[i].value ) * rate; E('output').innerHTML += inputs[i].value + " &times; " + rate + " = " + share + "<br>"; } } function E( id ){ return document.getElementById( id ); } //--> </script> </html> Answer: What you could do first is to 
separate out the logic that does the calculation from the logic that handles the UI. This makes it easier to update the math. Also, this makes the logic portable, should you scrap the UI code for something else, like if you move to use a framework. Also, break up the logic to small functions that accept a predictable input. For instance, calculating shares is simply multiplying your inputs by your dividend - an array of input values and a number. // In goes an array of hours and the dividend, out comes an array of shares function getShares(hours, dividend){ const sum = hours.reduce((c, v) => c + v); const rate = dividend / sum; const shares = hours.map(hour => hour * rate); return shares; } I also notice this strange E function which is just short for document.getElementById. There is document.querySelector and document.querySelectorAll that allows you to use CSS selectors to get a reference of your elements. Also, I'd avoid IDs since IDs should only appear once on the page. You can't use it for multiple elements of the same kind, for instance your inputs. Use classes instead. For this case, I would tolerate the innerHTML usage. Otherwise, construct elements using document.createElement. Now to create repeated HTML, you can use array.map and array.join with template literals. function renderResults(results){ return results.map(result => ` <div class="result">${result.hours} x ${result.dividend} = ${result.share}</div> `).join(''); } For your HTML, while there are cases where the quotes on attributes are optional, it's best you put them in for consistency.
{ "domain": "codereview.stackexchange", "id": 25593, "tags": "javascript, calculator" }
What if we all together push walls in same direction?
Question: There is a question, Contradiction between law of conservation of energy and law of conservation of momentum? The answers say that the Earth does gain some velocity, but that it is negligible. So I ask: what if we all push a wall in the same direction? And if we have been pushing walls for so many years, has the velocity of the Earth increased? Answer: As hinted at in the comment to your question, one must ask: what are you pushing against? That is, whatever force your hands apply to the wall, your feet must apply to the floor in the opposite direction. The end result is that your efforts have no effect. (Well, if you push hard enough, you can knock the wall down, or perhaps just slide across the floor, depending on the coefficient of friction, but you won't cause any net motion of the Earth.)
{ "domain": "physics.stackexchange", "id": 43680, "tags": "newtonian-mechanics, forces, momentum, conservation-laws, free-body-diagram" }
Can someone explain how Tong got this equation in his QFT notes
Question: Can someone explain how D. Tong got equation 2.18 in chapter 2 of his QFT notes? I am lost from equation 2.5 onwards. Link to notes: http://www.damtp.cam.ac.uk/user/tong/qft.html Why exactly are we doing the Fourier transform in equation 2.5, and what does it mean to choose coordinates in which the degrees of freedom decouple? And from the annihilation and creation operators of quantum mechanics, how do we get equation 2.18? Answer: Why exactly are we doing the Fourier transform in equation 2.5, and what does it mean to choose coordinates in which the degrees of freedom decouple? Let's think of a much simpler problem: two masses, connected by springs to the walls and to one another. Such a system satisfies the following set of differential equations $$\left\{\begin{matrix}m\dfrac{{\rm d}^{2}x_{1}}{{\rm d}t^{2}}&=&-k x_{1}-k\left(x_{1}-x_{2}\right)\\m\dfrac{{\rm d}^{2}x_{2}}{{\rm d}t^{2}}&=&-k x_{2}-k\left(x_{2}-x_{1}\right)\end{matrix}\right.$$ or, in a more natural matrix form, $$\dfrac{{\rm d}^{2}\boldsymbol{x}}{{\rm d}t^{2}}=A\boldsymbol{x}$$ with $$A=\frac{k}{m}\left(\begin{matrix}-2&1\\1&-2\end{matrix}\right)$$ You can clearly see that our matrix is not diagonal. This means that $x_{1}$ affects $x_{2}$ and vice versa. A more natural choice of coordinates is $y_{\pm}=\frac{1}{\sqrt{2}}\left(x_{1}\pm x_{2}\right)$. In this new coordinate system we can write $$\left\{\begin{matrix}\dfrac{{\rm d}^{2}y_{+}}{{\rm d}t^{2}}&=&-\frac{k}{m} y_{+}\\ \dfrac{{\rm d}^{2}y_{-}}{{\rm d}t^{2}}&=&-\frac{3k}{m}y_{-}\end{matrix}\right.$$ As you can see, this time one equation does not depend on the other. This means that we have now decoupled our degrees of freedom. We have found eigenmodes: two types of combined movement of the two masses, each with a dynamic identical to that of a single effective particle, and each with a different frequency.
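The decoupling can be verified numerically: $y_\pm$ are eigenvectors of the coupling matrix $A$. A stdlib-only sketch, with the illustrative choice $k = m = 1$:

```python
# Check that y± = (x1 ± x2)/sqrt(2) are eigenvectors of the coupling matrix
# A = (k/m) * [[-2, 1], [1, -2]], with eigenvalues -k/m and -3k/m.
k, m = 1.0, 1.0
A = [[-2 * k / m, k / m],
     [k / m, -2 * k / m]]

def apply(M, v):
    """Matrix-vector product for a 2x2 matrix."""
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

s = 2 ** -0.5
y_plus, y_minus = [s, s], [s, -s]
# apply(A, y_plus) equals -(k/m) * y_plus, and
# apply(A, y_minus) equals -(3k/m) * y_minus.
```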
You are probably familiar with these type of questions from early classes on the mechanics of oscillations. Your problem is not different. Let's, for simplicity, speak of a very similar equation so we can grasp the mathematics more easily $$\dfrac{\partial^{2}\phi(x,t)}{\partial t^{2}}-\dfrac{\partial^{2}\phi(x,t)}{\partial x^{2}}+m^{2}\phi(x,t)=0$$ You can think of $\phi(x,t)$ as the vector $\left(\dots,\phi(-{\rm d}x,t),\phi(0,t),\phi({\rm d}x,t),\dots\right)^{T}$. Then the spatial second derivative is just a linear operator, or a matrix, since $$\dfrac{\partial^{2} \phi(x,t)}{\partial x^{2}}\approx\frac{\phi(x+{\rm d}x,t)-2\phi(x,t)+\phi(x-{\rm d}x,t)}{{\rm d}x^{2}}$$ Therefore, returning to your equation, you have $$\dfrac{\partial^{2} \phi(\boldsymbol{x},t)}{\partial t^{2}}=L\phi(\boldsymbol{x},t)$$ where $L$ is the operator $L\equiv\partial_{j}\partial^{j}-m^{2}$. As I showed above, this $L$ is not diagonal so it couples different $\phi(\boldsymbol{x},t)$ together. To decouple the system, you look at a very common linear combination of such $\phi(\boldsymbol{x},t)$'s - the Fourier Transform $$\phi(\boldsymbol{p},t)=\int{\rm d}^3x e^{i\boldsymbol{p}\cdot\boldsymbol{x}}\phi(\boldsymbol{x},t)$$ The integral is nothing but the continuum version of the discrete summation $\Sigma$ and $e^{i\boldsymbol{p}\cdot\boldsymbol{x}}$ are the coefficients. As David Tong notes, these linear combinations satisfy a much simpler set of equations. For every $\boldsymbol{p}$ you have $$\dfrac{\partial^{2}\phi(\boldsymbol{p},t)}{\partial t^{2}}+(p^2+m^2)\phi(\boldsymbol{p},t)=0$$ Now $\phi(\boldsymbol{p}_{1},t)$ does not depend on $\phi(\boldsymbol{p}_{2},t)$, whenever $\boldsymbol{p}_{1}\neq\boldsymbol{p}_{2}$. And that's it! We have found the eigenmodes of this equation, i.e. we have decoupled the system. From annihilation and creation operators in quantum mechanics, how do we get equation 2.18? 
Now let's define $\omega_{\boldsymbol{p}}\equiv\sqrt{p^2+m^2}$ such that $$\dfrac{\partial^{2}\phi(\boldsymbol{p},t)}{\partial t^{2}}=-\omega_{\boldsymbol{p}}^{2}\phi(\boldsymbol{p},t)$$ That's exactly the equation of an harmonic oscillator, which is quantized in the Heisenberg picture of quantum mechanics in the form $$\phi(\boldsymbol{p},t)=\frac{1}{\sqrt{2\omega_{\boldsymbol{p}}}}\left[\hat{a}_{\boldsymbol{p}}e^{i(\boldsymbol{p}\cdot\boldsymbol{x}-\omega_{\boldsymbol{p}}t)}+\hat{a}_{\boldsymbol{p}}^{\dagger}e^{-i(\boldsymbol{p}\cdot\boldsymbol{x}-\omega_{\boldsymbol{p}}t)}\right]$$ Then you can return to the coupled basis by an inverse Fourier Transform.
{ "domain": "physics.stackexchange", "id": 54318, "tags": "quantum-field-theory, field-theory, fourier-transform, klein-gordon-equation" }
Cost function in linear regression
Question: Can anyone help me with the cost function in linear regression? From the plot below we have actual values and predicted values; I assumed the answer was zero, but it is actually 14/6. Can anyone help out, please? Answer: $h_\theta$ implies that you're trying to model the relation between $y$ and $x$ with a straight line through the origin $(0,0)$. The parameter $\theta_1$ is the slope of this line. Evaluating $J(\theta_1=0) = J(0)$ implies that $h_\theta(x_i)=0$ whatever the value of $x_i$ is. Since $m=3$, and the labels are $y_1=1, y_2=2, y_3=3$: $J(0) = \frac{1}{6}[(0 -1)^2 + (0 - 2)^2 + (0-3)^2] = \frac{14}{6}$. The value of $\theta_1$ for which $J(\theta_1) = 0$ is obviously $1$.
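The computation can be sketched in a few lines. This assumes, as the answer does, the training points $(1,1)$, $(2,2)$, $(3,3)$ and the hypothesis $h_\theta(x)=\theta_1 x$:

```python
# J(theta1) = 1/(2m) * sum_i (h(x_i) - y_i)^2 with h(x) = theta1 * x.
def cost(theta1, xs, ys):
    m = len(xs)
    return sum((theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs, ys = [1, 2, 3], [1, 2, 3]  # the three training points from the plot
# cost(0, xs, ys) gives 14/6; cost(1, xs, ys) gives 0.
```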
{ "domain": "datascience.stackexchange", "id": 4237, "tags": "machine-learning, linear-regression" }
Language of equal numbers of as, bs, cs in any order not context-sensitive?
Question: In his book "Foundations of Computing", professor Allison shows an example of "language of equal numbers of as, bs, and cs, but in any order", formally: $L = \{ w \in \{a,b,c\}^*\ |\ |w|_a=|w|_b=|w|_c \}$, where $|w|_a$ denotes the number of $a$s in a word $w$. This language is a variant of the language $L_1 = \{ a^nb^nc^n |\ n>0 \}$ which is context-sensitive. Later, he says that this language is not context-sensitive because "the working strings grow beyond a constant times the size of the initial input before shrinking down to the final output." However, I can think of a Turing machine that first sorts the input (in a linear space n + c, where n is the input size and c is a constant such as 1) and then accepts/rejects in the very same manner as the machine accepting the language $L_1$. Both subroutines are also linear bounded and thus the machine itself is a linear bounded automaton (LBA). Context-sensitive languages are accepted by LBAs, therefore the language $L$ is context-sensitive. Where am I mistaken? Answer: It seems the author of the book is a little careless at this point. His definition of context-sensitive is that "the right-hand side of a rule must never be shorter than the left-hand side" together with a specific assumption that enables us to generate the empty string. He states "The grammars in Examples 9–1 and 9–2 are not context-sensitive. In both cases, the working strings grow beyond a constant times the size of the initial input before shrinking down to the final output." This is confusing. In his Example 9-1 for equal numbers of $a$, $b$ and $c$'s the only contracting rule is $S\to \lambda$ which can be easily transformed into a grammar where $S$ does not appear to the right of a rule by introducing a new copy of $S$. Then you will have a proper non-contracting grammar for the language you are interested in. The other Example 9-2 for $\{ a^{2^n}\mid n\ge 0\}$ has several rules that are contracting, like $GR\to R$ and $XA\to X$. 
In this case there seems no fast solution, but with proper care also here one can design a noncontracting grammar for the language. Then, I quote, "The languages generated by those grammars are not context-sensitive". This is not true as you suspected. Both languages can be recognized by a linear space Turing machine, or generated by noncontracting grammar. (I prefer noncontracting here, as context-sensitive is usually a more restrictive type of grammar, which nevertheless is equally powerful.) PS. For the second example several solutions can be found at Context sensitive grammar for $\{a^{2^n}\mid n\ge 0\}$.
{ "domain": "cs.stackexchange", "id": 21420, "tags": "formal-languages, formal-grammars, context-sensitive, linear-bounded-automata" }
pr2_dashboard installation
Question: Hey, I had problems with the PR2 before, and now I need to install the PR2_Dashboard... can someone help me? Another thing: I saw that the installation starts like this: sudo apt-get install XXXXXXXXXXX. How can I know which package to install? Is there a list of the packages to install, and where can I get it? That way I don't have to ask you every time... thank you!! Originally posted by joseescobar60 on ROS Answers with karma: 172 on 2012-11-05 Post score: 0 Answer: sudo apt-get install ros-fuerte-pr2-gui The name of the debian package to install is always: ros-<distro>-<stack> Note that underscores are replaced by dashes in the stack name, i.e. the debian package for the stack pr2_gui is ros-fuerte-pr2-gui. To find the stack of a package, have a look at the wiki. In your case: http://ros.org/wiki/pr2_dashboard Originally posted by Lorenz with karma: 22731 on 2012-11-05 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by joseescobar60 on 2012-11-05: thank you, your help is good.... really thanks...
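The naming rule in the answer is purely mechanical, so it can be sketched as a tiny helper (the function name is mine, not part of any ROS tooling):

```python
def ros_debian_package(distro, stack):
    # Debian package name for a ROS stack: ros-<distro>-<stack>,
    # with every underscore in the stack name replaced by a dash
    return "ros-{}-{}".format(distro, stack.replace("_", "-"))

print(ros_debian_package("fuerte", "pr2_gui"))  # ros-fuerte-pr2-gui
```

The same rule still applies in later ROS 1 distros, e.g. `rtabmap_ros` on noetic becomes `ros-noetic-rtabmap-ros`.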
{ "domain": "robotics.stackexchange", "id": 11624, "tags": "ros" }
How to distinguish female and male voices via Fourier analysis?
Question: What makes one able, without looking, to identify the talker as male or female? I mean, if we Fourier-analysed male and female voices, how would the two spectra differ to account for that distinction in sound? Answer: This has been extensively studied in linguistics and acoustics. Humans and other primates predict speaker gender through a combination of fundamental frequency $F_0$ ("pitch") and vocal-tract-length estimates ($VTL$), which are a proxy for body size. Sometimes "formant dispersion" is used for $VTL$. It is usually defined as $$\frac{\sum_{i=1}^{n-1}(F_{i+1}-F_i)}{n-1}$$ where $F_i$ is the $i$th formant frequency and $n$ is the number of formants measured. However, this measure is problematic and does not capture information about midrange formants or about formant positioning. See Masculine voices signal men's threat potential in forager and industrial societies. An alternative $VTL$ measure is 'formant position', defined as: $$\frac{\sum_{i=1}^nF'_i}{n}$$ where $F'_i$ is the $i$th formant standardized across the population measured. However, the usual finding is that a combination of pitch and estimates of vocal tract length gives us information about speaker gender and sexual maturity. Looking at male vs. female spectra, on average you'd see male voices lower-pitched and with more closely spaced formants. Acoustic correlates of talker sex and individual talker identity are present in a short vowel segment produced in running speech; Vocal Tract Length Perception and the Evolution of Language; Vocal tract length and formant frequency dispersion correlate with body size in rhesus macaques; but see Formant frequencies and body size of speaker: a weak relationship in adult humans
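The formant-dispersion measure above is just the mean spacing between successive formants; a short sketch (the formant values are illustrative, not real measurements) also shows why it misses midrange information — the sum telescopes to $(F_n - F_1)/(n-1)$, so only the first and last formants matter:

```python
def formant_dispersion(formants):
    # mean spacing between successive formant frequencies (Hz);
    # algebraically equal to (F_n - F_1) / (n - 1) by telescoping
    n = len(formants)
    return sum(formants[i + 1] - formants[i] for i in range(n - 1)) / (n - 1)

# rough, textbook-style formant values (Hz) for illustration only
print(formant_dispersion([500, 1500, 2500, 3500]))  # 1000.0
```

Note that moving $F_2$ and $F_3$ around without touching the endpoints leaves the dispersion unchanged, which is exactly the criticism raised in the answer.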
{ "domain": "physics.stackexchange", "id": 2167, "tags": "acoustics, fourier-transform, signal-processing, biology" }
Highlighting and copying spreadsheet rows that match a criterion
Question: I use the code (provided below) to check for certain criteria in a row. In this case, if cell F in worksheet "Swivel" contains "After Dispute For SBU" then that row needs to be highlighted in yellow and the text changed to red, then the row needs to be copied to another sheet named "Disputed" that is within the same workbook. I have added some code to remove the highlight and the font color change if the row is removed from the workbook manually and then the code is triggered again. This particular code is run from a menu system that I created (code not included). The additional code is also used to protect the color of the header row. This updated code is causing erratic behavior of the cursor and is also running a little longer than it was before the changes, even though the amount of data did not change. Is there a way to improve the code? or is this normal behavior that I need to live with? The code is completing the tasks as expected. Sub Highlight_Copy_Disputed() Application.ScreenUpdating = False ' This part highlights all rows that are Disputed Dim row As Range For Each row In ActiveSheet.UsedRange.Rows If row.Cells(1, "F").Value = "After Dispute For SBU" Then row.Interior.ColorIndex = 6 row.Font.Color = RGB(255, 0, 0) ElseIf row.Cells(1, "F").Value = "Impact Status" Then row.Interior.Color = RGB(197, 190, 151) row.Font.Color = RGB(0, 0, 0) Else row.Interior.ColorIndex = xlNone row.Font.Color = RGB(0, 0, 0) End If Next row ' This part clears the Disputed worksheet and copies all disputed rows to the sheet With ThisWorkbook.Worksheets("Disputed") Range(.Range("A2"), .UsedRange.Offset(1, 0)).EntireRow.Delete End With Dim LR As Long, lr2 As Long, r As Long LR = Sheets("Swivel").Cells(Rows.Count, "A").End(xlUp).row lr2 = Sheets("Disputed").Cells(Rows.Count, "A").End(xlUp).row For r = LR To 2 Step -1 If Range("F" & r).Value = "After Dispute For SBU" Then Rows(r).Copy Destination:=Sheets("Disputed").Range("A" & lr2 + 1) lr2 = 
Sheets("Disputed").Cells(Rows.Count, "A").End(xlUp).row End If Range("A2").Select Next r Application.ScreenUpdating = True Range("C" & Rows.Count).End(xlUp).Offset(1).Select End Sub Answer: I don't see you using Option Explicit which is always a good idea. But you are defining your variables - which is good. Using variable names like Row is generally a bad idea - Row already means something in excel. The other variables LR, lr2 and r are bad names - what do they do? Why do you need two LR? I'd used SwivelLastRow and DisputedLastRow for the LR variables. The r is just a counter, so why not give it a better name? CurrentRow maybe? I might also break the two procedures up into different subs, maybe Highlight_Disputed and CopyDisputed. You're also using ActiveSheet.UsedRange and select which is generally something to be avoided. The first procedure could be something like this - Dim wsSwivel As Worksheet Set wsSwivel = Sheets("Swivel") Dim TestCell As Range Dim LastRow As Long LastRow = wsSwivel.Cells(Rows.Count, "F").End(xlUp).row Dim TestArea As Range Set TestArea = Sheets("Swivel").Range("F1:F" & LastRow) Dim AfterDispute As String AfterDispute = "After Dispute For SBU" Dim ImpactStatus As String ImpactStatus = "Impact Status" Dim AfterDisputeHighlight As Long AfterDisputeHighlight = 6 For Each TestCell In TestArea If TestCell = AfterDispute Then TestCell.Rows.Interior.ColorIndex = AfterDisputeHighlight TestCell.Rows.Font.Color = RGB(255, 0, 0) ElseIf TestCell = ImpactStatus Then TestCell.Rows.Interior.Color = RGB(197, 190, 151) TestCell.Rows.Font.Color = RGB(0, 0, 0) Else: TestCell.Rows.Interior.ColorIndex = xlNone TestCell.Rows.Font.Color = RGB(0, 0, 0) End If Next I'd also find a way to assign the RGB colors to variables so it looks cleaner. After that you can exit sub or just CopyDisputed. Since CopyDisputed isn't returning a value, it's not a function. 
And since you already have some needs defined you can put in parameters: Sub CopyDisputed(ByVal FromSwivel As Long, ByVal DisputedText As String) And just use what you have - CopyDisputed LastRow, AfterDispute This eliminates the LR and one of the For Loops. Sub CopyDisputed(ByVal FromSwivel As Long, ByVal DisputedText As String) Dim wsDisputed As Worksheet Set wsDisputed = Sheets("Disputed") Dim CopyRow As Long Dim DisputedRow As Long DisputedRow = 1 wsDisputed.Range("B:Z").ClearContents For CopyRow = FromSwivel To 1 Step -1 If Cells(CopyRow, "F") = DisputedText Then Rows(CopyRow).EntireRow.Copy Destination:=wsDisputed.Rows(DisputedRow) 'Whoops you don't want to delete, but then why are you step -1? 'Rows(CopyRow).EntireRow.Delete shift:=xlUp DisputedRow = DisputedRow + 1 End If Next End Sub I'm assuming I understand your Disputed ws copying to be on the entire row, unless you're trying to keep column A intact? Breaking it into two procedures makes it easier to gather what each code block is doing. But, of course, you can avoid this if you do the copying to Disputed when you do the highlighting of the AfterDispute rows. 
This is something that would increase performance so you don't need to loop through column F twice: Option Explicit Sub Highlight_Copy_Disputed() Application.ScreenUpdating = False Dim wsSwivel As Worksheet Set wsSwivel = Sheets("Swivel") Dim wsDisputed As Worksheet Set wsDisputed = Sheets("Disputed") Dim TestCell As Range Dim LastRow As Long LastRow = wsSwivel.Cells(Rows.Count, "F").End(xlUp).row Dim DisputedRow As Long DisputedRow = 1 Dim AfterDisputeHighlight As Long AfterDisputeHighlight = 6 Dim TestArea As Range Set TestArea = Sheets("Swivel").Range("F1:F" & LastRow) Dim AfterDispute As String AfterDispute = "After Dispute For SBU" Dim ImpactStatus As String ImpactStatus = "Impact Status" wsDisputed.Range("B:Z").ClearContents For Each TestCell In TestArea If TestCell = AfterDispute Then TestCell.Rows.Interior.ColorIndex = AfterDisputeHighlight TestCell.Rows.Font.Color = RGB(255, 0, 0) TestCell.EntireRow.Copy Destination:=wsDisputed.Rows(DisputedRow) DisputedRow = DisputedRow + 1 ElseIf TestCell = ImpactStatus Then TestCell.Rows.Interior.Color = RGB(197, 190, 151) TestCell.Rows.Font.Color = RGB(0, 0, 0) Else: TestCell.Rows.Interior.ColorIndex = xlNone TestCell.Rows.Font.Color = RGB(0, 0, 0) End If Next Application.ScreenUpdating = True End Sub
{ "domain": "codereview.stackexchange", "id": 18171, "tags": "performance, vba, excel" }
How does batch normalisation actually work?
Question: I went through Keras' batch normalization tutorial, and the description there puzzled me even more. Here are some facts about batch normalization that I read recently and would like a deep explanation of. If you freeze all layers of a neural network at their randomly initialized weights, except for the batch normalization layers, you can still get 83% accuracy on CIFAR10. When setting the trainable attribute of a batch normalization layer to false, it will run in inference mode and will not update its mean and variance statistics. Answer: I'm not sure how, by training just the batch normalisation layers, you can get an accuracy of 83%. The batch normalisation layer parameters $\gamma^{(k)}$ and $\beta^{(k)}$ are used to scale and shift the normalised batch outputs. These parameters are learnt during the back-propagation step. For the $k$th layer, $$y^{(k)} = \gamma^{(k)}\hat{x}^{(k)} + \beta^{(k)}$$ The scaling and shifting are done to ensure a non-linear activation is output by each layer. Because batch normalisation squashes its outputs into a narrow range around zero (zero mean, unit variance), some activation functions are approximately linear within that range (e.g. $\tanh$ and sigmoid). Regarding the second fact, however, the difference between training and inference mode is this. In training mode, the statistics of each batch norm layer, $\mu_B$ and $\sigma^2_B$, are computed. These statistics are used to scale and normalise the outputs of the batch norm layer to have zero mean and unit variance. At the same time, the current batch statistics are also used to update the running mean and running variance of the population. $\mu_B[t]$ represents the current batch mean and $\sigma^2_B[t]$ the current batch variance, while $\mu'_B[t]$ and $\sigma'^2_B[t]$ represent the means and variances accumulated from the previous batches.
The running mean and variance of the population is then updated as $$\mu'_B[t]=\mu'_B[t]× momentum+ \mu_B[t]×(1−momentum)$$ $$\sigma'^2_B[t]=\sigma'^2_B[t] × momentum + \sigma^2_B[t]×(1−momentum)$$ In inference mode, the batch normalisation uses the running mean and variance computed during training mode to scale and normalise inputs in the batch norm layer instead of the current batch mean and variance.
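The running-statistics update above is just an exponential moving average of the per-batch statistics; a minimal sketch (the momentum default is illustrative — frameworks choose different values and some define momentum with the opposite convention):

```python
def update_running_stats(run_mean, run_var, batch_mean, batch_var, momentum=0.9):
    # EMA update that accumulates population statistics during training;
    # the accumulated values are what inference mode uses in place of the
    # current batch's mean and variance
    new_mean = run_mean * momentum + batch_mean * (1.0 - momentum)
    new_var = run_var * momentum + batch_var * (1.0 - momentum)
    return new_mean, new_var
```

At inference time the layer then normalises with the frozen `new_mean`/`new_var` pair instead of per-batch statistics, which is why single-sample prediction is well-defined.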
{ "domain": "ai.stackexchange", "id": 2165, "tags": "machine-learning, transfer-learning, batch-normalization" }
Training, Testing and Validation Dataset
Question: I'm training a Unet model for tumor segmentation. I have a dataset of 400 patients for that. The images used are CT scans (3D images) that I divide into 2D images (for a total of 30k 2D images). I am currently splitting the dataset into 10% test data, 18% validation data, and 72% actual training data. I'm dividing the test and training data over patients (i.e. the patients used for testing are not the same as the ones used for training). Afterwards, I shuffle the 2D images and split them into training/validation datasets (i.e. the same patients can be found in the training and validation datasets, but not the same stack images). I have two questions: Should I split the train/validation dataset according to patients too? Are the division percentages in train/test/validation suited to my problem? Answer: Generally, the numbers (percentages) do not matter. What matters is that your splitting (train/test/validation) does two things: represents the real-world situation, and makes sure the model can generalise, given that it is evaluated on the holdout sets. So what does that mean here exactly? You have 30k images and 400 patients. Most likely the patients' scans will differ from each other, so you should also split according to patients, to make sure the model can generalise to slightly different distributions of images. And regarding percentages: you need to make sure that the things you find in train, test and validation represent your problem. This can mean splitting by patient, splitting by some other feature, checking the distribution of the data, etc. What it does not mean is that just because you have 12% in one set, you are safe. What does that mean? Let's say you have 1000 rows of data and you split 90%/10%, so the holdout has 100 data points. But in the train set, the majority of the 900 points are very similar to each other, and they differ from the 100 points in the holdout. Is this a good split? Obviously not, because your model is learning nothing that generalises.
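The "split by patient, not by slice" advice can be sketched directly (names and fractions are mine, matching the question's 10/18/72 split; scikit-learn's `GroupShuffleSplit`/`GroupKFold` do the same job more robustly):

```python
import random

def split_by_patient(patient_ids, test_frac=0.10, val_frac=0.18, seed=0):
    # assign whole patients to train/val/test so no patient's 2D slices
    # can leak across sets
    patients = sorted(set(patient_ids))
    random.Random(seed).shuffle(patients)
    n_test = int(len(patients) * test_frac)
    n_val = int(len(patients) * val_frac)
    test = set(patients[:n_test])
    val = set(patients[n_test:n_test + n_val])
    train = set(patients[n_test + n_val:])
    return train, val, test
```

Each 2D slice then goes to whichever set its patient ID belongs to, so adjacent (near-duplicate) slices from one scan can never straddle the train/validation boundary.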
{ "domain": "datascience.stackexchange", "id": 8575, "tags": "dataset, training, computer-vision" }
Why is nickel-56 not a stable nuclide?
Question: Why is $\ce{_{28}^{56}Ni}$ not a stable nuclide? As far as I studied radiochemistry, magic numbers of protons and neutrons correspond to the more stable nuclides, due to their perfectly spherical form: $2,$ $8,$ $20,$ $28,$ $50,$ $82$ and $126.$ Doubly magic nuclei are those with both $Z$ and $A - Z$ magic, like $\ce{_2^4He},$ $\ce{_8^{16}O},$ $\ce{_{82}^{208}Pb}.$ All of these are end points of decay series and very stable nuclides. However, $\ce{_{28}^{58}Ni}$ seems to be favored, alongside $\ce{_{28}^{61}Ni}$, instead of the more obvious choice $\ce{_{28}^{56}Ni}$. What is the reason? Answer: Not all stable nuclei with a "magic" number of protons or neutrons are "doubly magic". Perhaps they would be (with the same number of protons and neutrons) if only internuclear forces were acting within the nuclei, but in practice electromagnetic forces become increasingly important as we move to heavier nuclei. The competition between electromagnetic and internuclear forces leads to stable isotopes generally not being the most "magical" with respect to internuclear forces alone. For example, with 50 as the magic number: Zirconium-90, the most abundant isotope of that element, has 50 neutrons but only 40 protons. Tin forms several relatively abundant (>10%) isotopes with 50 protons but 66, 68 or 70 neutrons. Given this circumstance, the fact that lead-208 is doubly magic is something of a coincidence. This point is emphasized by the familiar uranium-238 decay chain ending with lead-206, which is also stable (the alpha and beta decay mechanisms commonly seen in radioactive decay of heavy elements favor the mass number remaining congruent modulo 4). It becomes evident that "doubly magic" has only limited utility in deciding nuclear stability. Scientists who are seeking such nuclei among superheavy isotopes are (or at least should be) cautious about predicting their actual stability against radioactive decay.
{ "domain": "chemistry.stackexchange", "id": 17820, "tags": "radioactivity, nuclear-chemistry, radiochemistry" }
Amine vs Amide Solubility
Question: I had a question regarding the solubility of amines and amides. I was looking into the solubility of butanamide and n-Butylamine, and it turns out that whilst butylamine is miscible in water, the solubility of butanamide is 163 g/L. This would imply that amines have a greater solubility than amides. My understanding is that the carbonyl group would add two lone pairs, which act as hydrogen bond acceptors and thus should mean amides are more soluble (more hydrogen bonds). It'd be much appreciated if someone could point me in the right direction or let me know how my reasoning is flawed. (I have viewed this link but the answer given isn't too helpful: Solubility of Amides) Thanks in advance! Sources for Solubility Values https://en.wikipedia.org/wiki/N-Butylamine https://chem.libretexts.org/Under_Construction/Walker/Chemicals/Substance%3AB/Butanamide Answer: The mixing of two compounds is a process which requires consideration of three types of interactions: solute-solvent, solvent-solvent and solute-solute. You make a good argument for the solute-solvent interaction being stronger in mixtures of butanamide and water, compared to butylamine and water. However, as you say, this argument alone would predict a higher solubility for butanamide, which goes against the observed data. Also, in both cases the solvent-solvent interaction is the same (water with water), so this can't really be the source of an explanation. By exclusion, we're guided towards considering the solute-solute interactions. At first glance, both butylamine and butanamide display a combination of dipole-dipole and hydrogen bonding interactions, making it tricky to compare how their relative strengths change between compounds. To make things easier, we can refer to additional experimental data. A nice way to gauge the strength of intermolecular interactions in a substance is to look at its melting and boiling points.
According to Wikipedia, the melting and boiling points of butylamine are approximately -49 °C and 78 °C respectively. Meanwhile, for butanamide, the values are a whopping 115 °C and 216 °C! Clearly the mutual interactions between butanamide molecules are much stronger than those between butylamine molecules. And there is our answer. You are correct that, compared to a butylamine molecule, a butanamide molecule should interact more strongly with water molecules. However, it turns out that butanamide molecules also interact very strongly with themselves, far more so than butylamine molecules. Taking everything into account, the effect of the solute-solute interactions is more prevalent; the water molecules have more difficulty keeping butanamide molecules apart, to the point that after a critical value of 163 grams of butanamide in a liter of aqueous solution (at 15 °C), the water molecules just can't stop the butanamide molecules from meeting and packing into a solid. Even if you add more butanamide, it doesn't visibly dissolve. Some butanamide molecules in the solid continuously manage to break loose from each other and get pulled into the aqueous solution, but they are replaced just as fast by molecules of butanamide already in solution which get too close and pack into the solid state. You will find that this can be made into a very rough rule of thumb: when comparing the solubility of two roughly similar substances in a same solvent, if one of the substances has significantly higher melting and boiling points, they are likely to be less soluble. Note that I have discussed only the influence of the enthalpic contributions to mixing. An entropic contribution also exists (favouring dissolution for both compounds), but in this case it can be safely neglected due to its comparatively small effect.
{ "domain": "chemistry.stackexchange", "id": 14202, "tags": "organic-chemistry, water, solubility, amines, amides" }
Question on negative energy solutions to the photoelectric effect
Question: The photoelectric effect was explained by Einstein as $$\frac12mv^2=\hbar\omega - W$$ where $W$ is the binding energy of the atom that the electron is in, and $\hbar\omega$ is the energy of the photon that strikes said electron. I know that in order for the electron to leave its atom, $\hbar\omega>W$, and if $\hbar\omega=W$ then the kinetic energy is zero and thus there is no motion. But if $\hbar\omega<W$ we get a negative solution for the kinetic energy. How can sense be made of this? Answer: If $\hbar\omega<W$, then no matter how intensely you shine light at the metal, no current will flow. In other words, no electrons are ejected. This was one of the key experimental observations of the photoelectric effect.
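Numerically, the "negative kinetic energy" branch just maps to "no emission". A sketch of the energy balance (working in eV, with $\hbar$ rounded; the function name and the `None`-for-no-emission convention are mine):

```python
HBAR_EV = 6.582e-16  # reduced Planck constant, eV*s (rounded)

def photoelectron_ke(omega, work_function_ev):
    # kinetic energy (eV) of the ejected electron, or None when the photon
    # energy is below the work function: the formally negative KE simply
    # means no electron leaves the metal
    e_photon = HBAR_EV * omega
    if e_photon < work_function_ev:
        return None
    return e_photon - work_function_ev
```

For example, a photon with $\omega = 10^{16}\ \mathrm{rad/s}$ (about 6.6 eV) ejects electrons from a 2 eV surface with roughly 4.6 eV of kinetic energy, while $\omega = 10^{15}\ \mathrm{rad/s}$ ejects none at all, however bright the beam.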
{ "domain": "physics.stackexchange", "id": 83446, "tags": "quantum-mechanics, photons, electrons, photoelectric-effect" }
Gathering data from database
Question: The following code has one of the most confusing lines I've ever wrote. I can imagine ten other way to write it but I do not know which else could be any better. Note: Db() is very similar to PDO(), it just extends it adding few features I'm not using here. Post::addExtra() add abstract datas elaborating database data. For example he created a $data[13] = $data['from db1'] .' with '. $data['from db2']. These because they are going to be passed to the template. $db = new Db(); $s = new Session(); # Default statement and parameters $stmt = "SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, ( SELECT COUNT(*) FROM Flags as f JOIN Posts as p1 ON p1.PostPID = f.FlagPID WHERE p1.PostPID = p.PostPID ) as PostFlags FROM Posts AS p JOIN Users AS u ON p.PostUID = u.UserUID ORDER BY PostTime DESC LIMIT 0, 30"; $par = array(); # We change the statement if the tab is selected if ($tab = get('tab')) { switch ($tab) { case 'admin': $stmt = "SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, ( SELECT COUNT(*) FROM Flags as f JOIN Posts as p1 ON p1.PostPID = f.FlagPID WHERE p1.PostPID = p.PostPID ) as PostFlags FROM Posts AS p JOIN Users AS u ON p.PostUID = u.UserUID WHERE p.PostUID = 1 ORDER BY PostTime DESC LIMIT 0, 30"; break; case 'trusted': if ($s->isLogged()) { $stmt = "SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, ( SELECT COUNT(*) FROM Flags as f JOIN Posts as p1 ON p1.PostPID = f.FlagPID WHERE p1.PostPID = p.PostPID ) as PostFlags FROM Posts AS p JOIN Users AS u ON p.PostUID = u.UserUID WHERE p.PostUID IN ( SELECT TrustedUID FROM Trust WHERE TrusterUID = :uid ) ORDER BY PostTime DESC LIMIT 0, 30"; $par = array('uid' => $s->getUID()); } else { $stmt = ''; } break; case 'favorite': if ($s->isLogged()) { $stmt = "SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, ( 
SELECT COUNT(*) FROM Flags as f JOIN Posts as p1 ON p1.PostPID = f.FlagPID WHERE p1.PostPID = p.PostPID ) as PostFlags FROM Posts AS p JOIN Users AS u ON p.PostUID = u.UserUID WHERE p.PostPID IN ( SELECT FavoritePID FROM Favorites WHERE FavoriteUID = :uid ) ORDER BY PostTime DESC LIMIT 0, 30"; $par = array('uid' => $s->getUID()); } else { $stmt = ''; } break; case 'top': $weekAgo = time() - week; $monthAgo = time() - month; $stmt = "SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, ( SELECT COUNT(*) FROM Flags as f JOIN Posts as p1 ON p1.PostPID = f.FlagPID WHERE p1.PostPID = p.PostPID ) as PostFlags FROM Posts AS p JOIN Users AS u ON p.PostUID = u.UserUID WHERE p.PostTime > $monthAgo LIMIT 0, 3 UNION SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, ( SELECT COUNT(*) FROM Flags as f JOIN Posts as p1 ON p1.PostPID = f.FlagPID WHERE p1.PostPID = p.PostPID ) as PostFlags FROM Posts AS p JOIN Users AS u ON p.PostUID = u.UserUID WHERE p.PostTime > $weekAgo ORDER BY PostFlags DESC LIMIT 0, 30"; break; case 'recent': default: break; } } # Loading posts try { $sql = $db->prepare($stmt); $sql->execute($par); $posts['Data'] = $sql->fetchAll(); } catch (PDOException $e) { throw new MyEx($e->getMessage()); } if (count($posts['Data']) > 0) { foreach ($posts['Data'] as &$post) { $post = Post::addExtra($post); } } Answer: General SQL-related advice: I would factor most of these queries into a single view; you can then add WHERE parameters when selecting from the view. You can also remove the few instances of variable expansion inside the queries, and replace them with named parameters throughout (you have :uid already). 
For example: CREATE OR REPLACE VIEW PostsAnnotated AS SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, ( SELECT COUNT(*) FROM Flags as f JOIN Posts as p1 ON p1.PostPID = f.FlagPID WHERE p1.PostPID = p.PostPID ) as PostFlags FROM Posts AS p JOIN Users AS u ON p.PostUID = u.UserUID ORDER BY PostTime DESC; SELECT * FROM PostsAnnotated WHERE PostUID = 1 LIMIT 30; SELECT * FROM PostsAnnotated WHERE PostUID IN ( SELECT TrustedUID FROM Trust WHERE TrusterUID = :uid) LIMIT 30; As far as the code around addExtra: I wouldn't set $posts['Data'] only to overwrite it immediately. Instead I would loop over the SQL results and append to $posts['Data'] (IIRC the syntax is $posts['Data'][] = $next_elem).
{ "domain": "codereview.stackexchange", "id": 183, "tags": "php, mysql" }
Is the coefficient of restitution equation valid for collisions involving more than 2 bodies?
Question: I couldn't solve a question, and the only equation I was missing was the coefficient of restitution, i.e. $e=\frac{v_{final2} - v_{final1}}{u_{initial1} - u_{initial2}}$. But the collision was among 3 objects. This was the figure given in the question: (elasticity given was 1, fwiw) Won't the third body or the tension impact the velocities? Is the coefficient of restitution equation generally valid when a collision among 3 bodies takes place? Like, would the equation be applicable when the masses are aligned this way: Edit: the aforementioned question Edit 2: I have elaborated on my exact doubt in the comment section of 147875 sir's answer. Answer: The coefficient of restitution ($e$) is always between bodies (i.e., two bodies) rather than among bodies (i.e., more than 2 bodies). If you want to deal with more than two bodies, always take two at a time along with the appropriate components. The coefficient of restitution is defined as $$ e = \frac{\text{Relative velocity after collision}}{\text{Relative velocity before collision}} $$ If the initial velocity of the center sphere is $u$, let $u'$ be its velocity after the collision and $v$ be the velocity of the suspended spheres, which are constrained to move in the horizontal direction. Then the coefficient of restitution is given by $$ e = \frac{v\cos(60^\circ) - u'\cos(30^\circ)}{u\cos(30^\circ)} \tag{1} $$ For an elastic collision, $e = 1$; substituting into (1) gives $$ \sqrt{3}(u + u') = v \tag{2} $$ In deriving the result above, we took the center sphere along with one of the bottom spheres. We can use equation (2) along with the impulse equation to solve the problem. Edit 1: Really thanks, but I am still confused. What I really want to know is: the external force and the extra body would impact the velocities just after the impact of the two bodies, but the equation still gives correct results. Why didn't the equation fail in this situation? What principle is it really based on?
Like, momentum conservation is not applicable here, but the equation is still valid. I have phrased the question really badly. When solving the problem, we only considered the primary collision; the subsequent collisions are not considered. Does the third body have any effect on the velocity of the other two? Yes, it does; that's the reason we took components only along one body and derived results (1) and (2). Is momentum conservation applicable here? All the laws of elastic collision? Yes, they are. We derive this result below. Conservation of kinetic energy gives $$ \begin{align} \frac{1}{2}mu^2 &= \frac{1}{2}m{u'}^2 + 2 \times \frac{1}{2}m'v^2 \\ \implies m(u^2 - {u'}^2) &= m'v^2 \tag{3} \end{align} $$ Conservation of momentum, along the $x$-axis, gives $$ 0 = m'v\sin(30^\circ) - m'v\sin(30^\circ) \tag{4} $$ and along the $y$-axis, $$ \begin{align} mu & = mu' + 2 \times m'v\cos(30^\circ) \\ \implies m(u - u') &= \sqrt{3}m'v \tag{5} \end{align} $$ Dividing Eq. (3) by Eq. (5) gives $$ (u + u') = \frac{v}{\sqrt{3}} \tag{6} $$ Now, Eq. (2) and Eq. (6) are the same. Hence the rules of elastic collision are applicable.
{ "domain": "physics.stackexchange", "id": 82644, "tags": "classical-mechanics" }
Is the test time the phase when the model's accuracy is calculated with test data set?
Question: When papers talk about the "test time", does this mean the phase when the model is passed new data instances to derive the accuracy on the test data set? Or is "test time" the phase when the model is fully trained and launched on real-world input data? Answer: If it is not defined otherwise, testing is the phase where the model is passed new data instances to derive the score on the test set. It should not be confused with the validation set. A validation dataset is a sample of data held back from training your model that is used to give an estimate of model skill while tuning the model's hyperparameters during training. There are many validation methods, with k-fold cross-validation being one of the most popular. In k-fold cross-validation, the original training set is randomly partitioned into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for validating the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimate. The advantage of this method over repeated random sub-sampling is that all observations are used for both training and validation, and each observation is used for validation exactly once. The validation dataset is different from the test set, which is also held back from the training of the model, but is instead used to give an unbiased estimate of the skill of the final tuned model when comparing or selecting between final models.
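The k-fold partitioning described above can be sketched in a few lines (contiguous folds for simplicity; real libraries such as scikit-learn's `KFold` shuffle and balance folds more carefully):

```python
def k_fold_splits(n_samples, k):
    # yield (train_idx, val_idx) pairs; every sample appears as
    # validation data in exactly one of the k splits
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        end = start + fold_size if i < k - 1 else n_samples  # last fold absorbs the remainder
        val = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, val
```

Averaging a metric over the k validation folds then gives the single skill estimate the answer mentions.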
{ "domain": "ai.stackexchange", "id": 1977, "tags": "machine-learning, training, definitions, testing" }
Basic C# REPL
Question: So I was thinking of making a C# REPL, as I didn't find any viable solution that would work for me. I made a simple, basic one, but I am sure it can be improved: it seems that once the REPL class is initialized it's hard to add an event handler to it or access the logs, so I am looking for a way to improve it. Here's the code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace CS_REPL { public class REPL { List<string> inputLog; public delegate void InputReceived(string input); public event InputReceived InputHandler; private bool loopEnabled = true; public REPL(InputReceived handler) { inputLog = new List<string>(); InputHandler += handler; while (loopEnabled) { string input = ReadLine(); inputLog.Add(input); InputHandler?.Invoke(input); } } public string ReadLine() { PrintShellIndicator(); return ReadInput(); } public void PrintShellIndicator() { Console.Write("> "); } public string ReadInput() { return Console.ReadLine(); } public void setRunning(bool flag) { loopEnabled = flag; } } } Answer: "once the REPL class is initialized": You cannot initialize, or rather create an instance of, REPL because there is an infinite loop in the constructor - by default you set loopEnabled = true;. Is this intended? "it's hard to add an event handler to it": It's hard because there is no object to work with - new REPL(...) will never return, so you can only add the handler once, via the constructor. Code: The delegate shouldn't be nested inside the REPL class. It also shouldn't be a custom delegate if used as an event. By convention we use the EventHandler delegate for this. As you want to pass the input to the event handler, you'll need to create a custom EventArgs to make it possible. There is a generic EventHandler<TEventArgs> where you can use the new class.
public class InputReceivedEventArgs : EventArgs { public InputReceivedEventArgs(string input) { Input = input; } public string Input { get; } } The final REPL class with the new event could look like this: public class REPL { private List<string> _inputLog; // Initialize with an empty event handler. public event EventHandler<InputReceivedEventArgs> InputReceived = delegate { }; public REPL() { _inputLog = new List<string>(); } // Starts the REPL public void Start() { while (true) { string input = ReadLine(); _inputLog.Add(input); OnInput(input); } } // --- all other methods should be private private string ReadLine() { PrintShellIndicator(); return ReadInput(); } private void PrintShellIndicator() { Console.Write("> "); } private string ReadInput() { return Console.ReadLine(); } // Invokes the InputReceived event private void OnInput(string input) { InputReceived.Invoke(this, new InputReceivedEventArgs(input)); } }
{ "domain": "codereview.stackexchange", "id": 22164, "tags": "c#, console, event-handling" }
What is the best XOR neural network configuration out there in terms of low error?
Question: I'm trying to understand what would be the best neural network for implementing an XOR gate. I'm considering a neural network to be good if it can produce all the expected outcomes with the lowest possible error. It looks like my initial choice of random weights has a big impact on my end result after training. The accuracy (i.e. error) of my neural net is varying a lot depending on my initial choice of random weights. I'm starting with a 2 x 2 x 1 neural net, with a bias in the input and hidden layers, using the sigmoid activation function, with a learning rate of 0.5. Below my initial setup, with weights chosen randomly: The initial performance is bad, as one would expect: Input | Output | Expected | Error (0,0) 0.8845 0 39.117% (1,1) 0.1134 0 0.643% (1,0) 0.7057 1 4.3306% (0,1) 0.1757 1 33.9735% Then I proceed to train my network through backpropagation, feeding the XOR training set 100,000 times. After training is complete, my new weights are: And the performance improved to: Input | Output | Expected | Error (0,0) 0.0103 0 0.0053% (1,1) 0.0151 0 0.0114% (1,0) 0.9838 1 0.0131% (0,1) 0.9899 1 0.0051% So my questions are: Has anyone figured out the best weights for a XOR neural network with that configuration (i.e. 2 x 2 x 1 with bias) ? Why my initial choice of random weights make a big difference to my end result? I was lucky on the example above but depending on my initial choice of random weights I get, after training, errors as big as 50%, which is very bad. Am I doing anything wrong or making any wrong assumptions? So below is an example of weights I cannot train, for some unknown reason. I think I might be doing my backpropagation training incorrectly. I'm not using batches and I'm updating my weights on each data point solved from my training set. Weights: ((-9.2782, -.4981, -9.4674, 4.4052, 2.8539, 3.395), (1.2108, -7.934, -2.7631)) Answer: The initialization of the weights has a big impact on the results. 
I'm not sure specifically for the XOR gate, but the error can have a local minimum that the network can get "stuck" in during training. Using stochastic gradient descent can help give some randomness that gets the error out of these pits. Also, for the sigmoid function, weights should be initialized so that the input to the activation is close to the region where its derivative is largest, so that training makes faster progress.
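The sensitivity to initialization that the answer describes is easy to reproduce with a from-scratch sketch in plain Python: a 2 x 2 x 1 sigmoid network with biases trained by online backpropagation, mirroring the question's setup (learning rate 0.5; the epoch count and uniform(-1, 1) initialization range are illustrative choices, not taken from the question):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def worst_error(w_h, w_o):
    """Largest absolute output error over the four XOR patterns."""
    errs = []
    for (x1, x2), t in XOR:
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
        errs.append(abs(y - t))
    return max(errs)

def train(seed, epochs=10000, lr=0.5):
    """2 x 2 x 1 sigmoid net with biases, online backpropagation."""
    rng = random.Random(seed)
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden: [w_x1, w_x2, bias]
    w_o = [rng.uniform(-1, 1) for _ in range(3)]                      # output: [w_h0, w_h1, bias]
    for _ in range(epochs):
        for (x1, x2), t in XOR:  # update after every data point, no batching
            h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
            y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
            d_y = (y - t) * y * (1 - y)  # dE/dz at the output for E = (y - t)^2 / 2
            for j in range(2):
                d_h = d_y * w_o[j] * h[j] * (1 - h[j])  # uses w_o before it is updated
                w_h[j][0] -= lr * d_h * x1
                w_h[j][1] -= lr * d_h * x2
                w_h[j][2] -= lr * d_h
            w_o[0] -= lr * d_y * h[0]
            w_o[1] -= lr * d_y * h[1]
            w_o[2] -= lr * d_y
    return worst_error(w_h, w_o)

# Same data, same architecture, different random seeds -> final errors vary by seed:
results = [train(seed) for seed in range(5)]
print(["%.4f" % e for e in results])
```

Running this for a handful of seeds typically shows exactly the spread the question reports: some initializations converge to near-zero error while others stall, which is the local-minimum effect described above.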
{ "domain": "ai.stackexchange", "id": 506, "tags": "neural-networks, training, backpropagation, xor-problem" }
Young's double slit and determinism
Question: In Cohen-Tannoudji's QM book, pg. 6, the following was said about the Young's double slit experiment: Moreover, as the photons arrive one by one, their impacts on the screen gradually build up the interference pattern. This implies that, for a particular photon, we are not certain in advance where it will strike the screen. Now these photons are emitted under the same conditions. Thus another classical idea has been destroyed: that the initial conditions completely determine the subsequent motion of a particle. I don't think I agree with what the author has said here. Since we can't perform a measurement on a particular emitted photon without disturbing it, there isn't a way we can measure the initial conditions of the photon after it leaves the source. Although photons are emitted under the same conditions, i.e. from the same source, each photon may travel in different directions and with different velocities after leaving the source. Thus each photon, although from the same source, can have differing initial conditions, which in turn affect where they strike the screen. The reason why we can't predict where the photon strikes the screen may be because we don't know the initial condition of the photon. What do you think about my argument? Answer: You are right up to a point - when the authors state that the photons are created under the same initial conditions, that is clearly an imprecise statement, as the starting position and direction of the individual photons are subject to a degree of uncertainty. However, that uncertainty is not enough to account for the results of the interference effects. Our theories suggest that even if multiple photons were to be presented to the slits in exactly the same way, they would still be detected on the screen in a dispersed diffraction pattern.
You could test the extent to which the variation in initial conditions contributed to the uncertainty in the final detected positions of the photons by moving the position of the source slightly when performing the experiment. You would find that tiny movements of the source apparatus relative to the screen would not result in major changes to the diffraction pattern, and therefore could not be the cause of the wide spread of locations at which the photons are found on the detector beyond the slits.
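The answer's central point - identically prepared photons still arrive dispersed across the whole pattern - can be illustrated with a toy Monte Carlo that draws single-photon arrival positions from a double-slit intensity profile (the cos² profile, interval, and fringe scale here are arbitrary illustrative choices, not taken from the book):

```python
import math
import random

def sample_arrivals(n, seed=0):
    """Draw n single-photon arrival positions from I(x) proportional to cos²(πx)
    on [-2, 2] by rejection sampling. Every photon is 'prepared' identically,
    yet the detected positions are spread across the interference pattern."""
    rng = random.Random(seed)
    hits = []
    while len(hits) < n:
        x = rng.uniform(-2.0, 2.0)
        if rng.random() < math.cos(math.pi * x) ** 2:
            hits.append(x)
    return hits

hits = sample_arrivals(2000)
assert all(-2.0 <= x <= 2.0 for x in hits)
assert len(set(round(x, 3) for x in hits)) > 100  # dispersed pattern, not one deterministic spot
```

Histogramming `hits` reproduces the fringe pattern building up one impact at a time, which is precisely the quantum-mechanical claim: identical preparation fixes only the probability distribution, not the individual arrival point.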
{ "domain": "physics.stackexchange", "id": 85516, "tags": "quantum-mechanics, photons, double-slit-experiment" }
Understanding Killing vectors of FLRW metric
Question: I am trying to understand how to calculate the Killing vectors of the FLRW metric \begin{equation} ds^2 = dt^2 - R(t)^2\left( \frac{dr^2}{1 - k r^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2\right). \end{equation} I'm following this article, where they explicitly use Killing's equation \begin{equation} \xi_{\mu; \nu} + \xi_{\nu; \mu} = 0. \end{equation} However, I get lost when they state the results of solving Killing's equation for the $t=\text{constant}$ submanifold \begin{aligned} \xi^{t} &=0 \\ \xi^{r} &=\sqrt{1-k r^{2}}\left(\sin \theta\left(\cos \phi \delta a_{x}+\sin \phi \delta a_{y}\right)+\cos \theta \delta a_{z}\right) \\ \xi^{\theta} &=\frac{\sqrt{1-k r^{2}}}{r}\left[\cos \theta\left(\cos \phi \delta a_{x}+\sin \phi \delta a_{y}\right)-\sin \theta \delta a_{z}\right]+\left(\sin \phi \delta b_{x}-\cos \phi \delta b_{y}\right) \\ \xi^{\phi} &=\frac{\sqrt{1-k r^{2}}}{r}\left[\frac{1}{\sin \theta}\left(\cos \phi \delta a_{y}-\sin \phi \delta a_{x}\right)\right]+\cot \theta\left(\cos \phi \delta b_{x}+\sin \phi \delta b_{y}\right)-\delta b_{z} \end{aligned} They state that there are 6 Killing vector fields, as required by the maximal symmetry of the 3-manifold. Firstly, Killing's equation only runs over the indices $\mu = 0,1,2,3$. Therefore, I suppose that in reality, we have something like $\{\xi^{(i)}_\mu\}$, where the index $i$ refers to the different Killing vectors, and, in the previous case, it would run $i = 1, \dots, 6$ and $\mu$ is the component of the $i$-th Killing vector. Secondly, I don't understand the result given in the mentioned article. I suppose $\delta a_x, \delta a_y, ...$ are the Killing vectors because there are 6 of them, but I don't get why they are arranged in this manner, nor what the $\delta$ means. Therefore I conclude that I don't properly understand Killing vectors and how to calculate them. Could someone help me see what I am missing, or point me to any book where it is adequately explained?
(I have already looked at MTW and Carroll) Thanks. Answer: I agree that the notation there is not a model of clarity, but I believe what they are doing is parametrizing the six independent Killing vector fields in the one set of equations (5.1). In other words, the components of Killing's equation (4.11–20) determine the components of $\xi^\mu$ up to a set of six arbitrary constants, and the authors have lumped these six constants into two three-component vectors $\delta \vec{a}$ and $\delta \vec{b}$. Different choices of these constants lead to different Killing vector fields (KVFs); $\delta a_x \neq 0$ (and all others zero) gives you one KVF, $\delta a_y \neq 0$ (and all others zero) gives you another one, and so on. Note that a linear combination of any two KVFs is itself a KVF, so this notation emphasizes that this manifold has a six-dimensional "space" of Killing vector fields. The distinction between $\delta \vec{a}$ and $\delta \vec{b}$ is that KVFs with $\delta \vec{a} = 0$ leave the constant-$r$ spheres invariant as well. (All of these KVFs leave the constant-$t$ hypersurfaces invariant, by construction.) I have no idea why they use the $\delta$'s in the notation for these vectors either. It might be explained later in the paper but I did not see it at a cursory glance.
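The remark that a linear combination of any two KVFs is itself a KVF is immediate from the linearity of Killing's equation - for two Killing fields $\xi$, $\eta$ and constants $a$, $b$:

```latex
% Killing's equation is linear, so for Killing fields \xi, \eta and constants a, b:
\xi_{\mu;\nu} + \xi_{\nu;\mu} = 0, \qquad \eta_{\mu;\nu} + \eta_{\nu;\mu} = 0
\;\Longrightarrow\;
\left(a\xi + b\eta\right)_{\mu;\nu} + \left(a\xi + b\eta\right)_{\nu;\mu}
= a\left(\xi_{\mu;\nu} + \xi_{\nu;\mu}\right) + b\left(\eta_{\mu;\nu} + \eta_{\nu;\mu}\right) = 0.
```

This is exactly why the six constants $\delta a_x, \dots, \delta b_z$ can be bundled into one expression: the general solution is a point in a six-dimensional vector space of KVFs, and each constant selects one basis element.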
{ "domain": "physics.stackexchange", "id": 88575, "tags": "general-relativity, cosmology, metric-tensor, symmetry, vector-fields" }
Interview coding test: Searcher
Question: I did a task for an interview and the solution was not accepted. The task was to implement the class search function by name. The number of classes in the input data from 0 to 100000. Class names are no longer than 32 characters, contains only letters and numbers, are unique. The application starts with flags -Xmx64m -Xms64m -Xss64m. Expected, when the project is opened first time the data is indexed, then searches are performed quickly. Can you take a look at this and let me know where I went wrong and if is possible to somehow refactor, speed up this implementation (except the normal implementation of cache). import java.text.Collator; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; public class Searcher implements ISearcher { private Map<Character, List<Entity>> storage = new HashMap<>(); private Map<String, String[]> cache; public Searcher() { } /** * Refreshes internal data structures for fast search * * @param classNames class names in project * @param modificationDates class modification date in ms */ @Override public void refresh(String[] classNames, long[] modificationDates) { cache = new HashMap<>(); for (int i = 0; i < classNames.length; i++) { char startChar = classNames[i].charAt(0); if (!storage.containsKey(startChar)) { List<Entity> list = new ArrayList<>(); list.add(new Entity(classNames[i], modificationDates[i])); storage.put(startChar, list); } else { storage.get(startChar).add(new Entity(classNames[i], modificationDates[i])); } } } /** * Looking for a suitable class names starting with start * * @param start beginning of a class name * @return an array of length 0 to 12, class names, ordered by modification date * and lexicographically */ @Override public String[] guess(String start) { if (!storage.containsKey(start.charAt(0))) return new String[0]; if (cache.containsKey(start)) return cache.get(start); Collator collator = Collator.getInstance(); String[] result = 
storage.get(start.charAt(0)).stream() .filter(entity -> entity.name.startsWith(start)) .sorted((entity1, entity2) -> { int value = entity2.time.compareTo(entity1.time); return value == 0 ? collator.compare(entity1.name, entity2.name) : value; }) .limit(12) .map(e -> e.name) .toArray(String[]::new); cache.put(start, result); return result; } private static class Entity { String name; Long time; Entity(String name, long time) { this.name = name; this.time = time; } } } TestDataGenerator: import java.util.Date; import java.util.Random; public class TestDataGenerator { private static final char[] LOWER_CHARS = "abcdefghijklmnopqrstuvwxyz".toCharArray(); private static final char[] UPPER_CHARS = "ABCDEFGHIJKLMNOPQESTUVWXYZ".toCharArray(); private static final char[] NUMBERS = "1234567890".toCharArray(); private String[] names; private long[] modificationDates; private String[] masks; public TestDataGenerator(int numberClasses) { initMasks(); init(numberClasses); } private void initMasks() { masks = new String[10]; StringBuilder sb; Random random = new Random(); for (int i = 0; i < masks.length; i++) { sb = new StringBuilder(); for (int k = 0; k < 4; k++) { if (k % 2 == 0) { sb.append(LOWER_CHARS[random.nextInt(LOWER_CHARS.length - 1)]); } else { sb.append(UPPER_CHARS[random.nextInt(UPPER_CHARS.length - 1)]); } } masks[i] = sb.toString(); } } private void init(int numberClasses) { Random random = new Random(); names = new String[numberClasses]; modificationDates = new long[numberClasses]; for (int i = 0; i < numberClasses; i++) { names[i] = getName(random); modificationDates[i] = getDate(random); } } private String getName(Random random) { StringBuilder sb = new StringBuilder(); int temp; sb.append(masks[random.nextInt(masks.length - 1)]); for (int i = 0; i < 32; i++) { temp = random.nextInt(100); if (temp % 2 == 0) { sb.append(LOWER_CHARS[random.nextInt(LOWER_CHARS.length - 1)]); } else if (temp % 3 == 0) { sb.append(NUMBERS[random.nextInt(NUMBERS.length - 1)]); } else 
{ sb.append(UPPER_CHARS[random.nextInt(UPPER_CHARS.length - 1)]); } } return sb.toString(); } private long getDate(Random random) { long max = System.currentTimeMillis(); long min = new Date(2015, 11, 11).getTime(); return min + (long) (random.nextDouble() * (max - min)); } public String[] getNames() { return names; } public long[] getModificationDates() { return modificationDates; } } SearcherTest: import java.io.File; import java.io.FileWriter; import java.io.IOException; import java.util.ArrayList; import java.util.List; import java.util.Random; import java.util.stream.Collectors; public class SearcherTest { public static void main(String[] args) { TestDataGenerator generator = new TestDataGenerator(1000); Searcher searcher = new Searcher(); Random random = new Random(); String[] searchBy = getSearchBy(generator.getNames(), random); searcher.refresh(generator.getNames(), generator.getModificationDates()); List<Long> timeGuess = new ArrayList<>(); long start, end; try { File file = new File("/users/kubreg/result.txt"); if (!file.exists()) { file.createNewFile(); } FileWriter fw = new FileWriter(file); fw.write("GENERATED DATA, ELEMENTS - " + generator.getNames().length + "\r\n"); for (int i = 0; i < generator.getNames().length; i++) { fw.write(generator.getNames()[i] + " " + generator.getModificationDates()[i] + "\r\n"); } fw.write("\r\n"); String search; String[] result; for (int i = 0; i < 10000; i++) { start = System.currentTimeMillis(); search = searchBy[random.nextInt(searchBy.length - 1)]; result = searcher.guess(search); end = System.currentTimeMillis(); timeGuess.add(end - start); fw.write(search + "\r\n"); fw.write(result.length + "\r\n"); for (String aResult : result) { fw.write(aResult + "\r\n"); } fw.write("\r\n"); } System.out.println("Average time GUESS - " + timeGuess.stream().collect(Collectors.averagingLong(l -> l - 1))); fw.close(); } catch (IOException e) { e.printStackTrace(); } } public static String[] getSearchBy(String[] names, Random 
random) { int range = random.nextInt(names.length - 1); String[] result = new String[range]; for (int i = 0; i < range; i++) { String tmp = names[random.nextInt(names.length - 1)]; result[i] = tmp.substring(0, random.nextInt(16) + 1); } return result; } } Answer: Java 8 Map Since you're already on Java 8, you can use Map.computeIfAbsent(K, Function) to put a List into your Map if none exists: storage.computeIfAbsent(startChar, k -> new ArrayList<>()) .add(new Entity(classNames[i], modificationDates[i])); Java 8 Comparator You can also use a number of Comparator methods, starting with Comparator.comparing(Function), to replace the in-line lambda you are using to sort your Stream<Entity>: // assuming getters are given as such Comparator<Entity> COMPARATOR = Comparator.comparing(Entity::getTime).reversed() .thenComparing(Entity::getName, Collator.getInstance()); According to your implementation, you want to compare entity2.time - entity1.time first, so you have to reverse the Comparator, then comparing on the name. Immutable classes and getters As hinted above, providing getter methods lets you use method references for your Entity class, and you can make it immutable too by final-ing the fields. Deprecated Date constructor new Date(2015, 11, 11).getTime() This constructor is deprecated, and in Java 8, you should consider using the new java.time.* APIs: long min = ZonedDateTime.of(LocalDate.of(2015, 11, 11), LocalTime.MIDNIGHT, ZoneOffset.UTC).toEpochSecond() * 1000; try-with-resources and hard-coding file paths You should use try-with-resources on your FileWriter for safe and efficient handling of the underlying I/O resource. Is there any special reason why you have went with a decidedly non-Unix line-separator (\r\n) while writing to a seemingly Unix-based filesystem? Also, do consider not hard-coding the file path and take that as an input from String[] args instead. 
edit: Since you do intend to write to a file using OS-dependent line separator, consider using a PrintWriter to do so too... File outputFile = getOutputFile(args); // construct from program inputs try (PrintWriter printWriter = new PrintWriter(outputFile)) { // ... printWriter.println(); // ... }
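Stepping back from the Java idioms, the core guess logic being reviewed - bucket by first character, filter by prefix, order by modification date (newest first) with lexicographic tie-breaking, cap at 12 - is compact enough to sketch language-neutrally. A hypothetical Python rendering with made-up sample data (not part of the review itself):

```python
from collections import defaultdict

def build_index(class_names, modification_dates):
    """Bucket (name, date) entries by first character, like the storage map."""
    index = defaultdict(list)
    for name, date in zip(class_names, modification_dates):
        index[name[0]].append((name, date))
    return index

def guess(index, start, limit=12):
    """Prefix matches, newest modification date first, ties broken lexicographically."""
    bucket = index.get(start[0], [])
    matches = [entry for entry in bucket if entry[0].startswith(start)]
    matches.sort(key=lambda entry: (-entry[1], entry[0]))  # date descending, then name ascending
    return [name for name, _ in matches[:limit]]

idx = build_index(["Foo", "FooBar", "Fizz", "Bar"], [3, 5, 1, 2])
assert guess(idx, "Foo") == ["FooBar", "Foo"]  # FooBar is newer, so it sorts first
assert guess(idx, "Ba") == ["Bar"]
assert guess(idx, "Qu") == []
```

The two-part sort key here plays the same role as the `Comparator.comparing(...).reversed().thenComparing(...)` chain suggested above.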
{ "domain": "codereview.stackexchange", "id": 19814, "tags": "java, performance, interview-questions, search" }
Is that true that plant stem cells can be used in humans?
Question: I was reading an article (which seems very fake to me) on sensitive topics, but there was one astonishing statement: Stem cells are obtained from certain plants that grow all over the world. Once the stem cells have been obtained, the doctor will inject them on the target organ... I want to ask specialists if this particular statement can be true. If yes, does it imply nucleus replacement in stem cells, or anything like that? Sorry guys, for the stupid question. Answer: https://stemcells.nih.gov/info/basics/6.htm ... Viruses are currently used to introduce the reprogramming factors into adult cells, and this process must be carefully controlled and tested before the technique can lead to useful treatment for humans. In animal studies, the virus used to introduce the stem cell factors sometimes causes cancers. Researchers are currently investigating non-viral delivery strategies. In any case, this breakthrough discovery has created a powerful new way to "de-differentiate" cells whose developmental fates had been previously assumed to be determined. In addition, tissues derived from iPSCs will be a nearly identical match to the cell donor and thus probably avoid rejection by the immune system. The iPSC strategy creates pluripotent stem cells that, together with studies of other types of pluripotent stem cells, will help researchers learn how to reprogram cells to repair damaged tissues in the human body. So as that all points out, no, the genetics of it will cause a plant stem cell to be genetically not a match, where it might do something for a little while, but upon that cells first interactions, it will stimulate the immune system to get rid of it, rather than incorporate it. Anymore, I want to know about how Bone Morphinogenic Proteins (BMP-4 or above) can be injected into an organ, and if that will help stem cells for reviving an organ at all.
{ "domain": "biology.stackexchange", "id": 8989, "tags": "human-biology, plant-physiology, stem-cells" }
Naming secondary amine IUPAC system
Question: $\ce{CH3NHCH(CH3)CH2CH3}$ Should this be named as 2-(N-methyl)butanamine or 2-(methylamino)butane? In other words, do we name the compound as 'alkanamine' or treat amino group as a substituent of butane? Answer: It should be N-methyl-2-butanamine. Amine is treated as a functional group attached. So we write it as alkanamine as it is the only functional group here. Image source: ChemSpider
{ "domain": "chemistry.stackexchange", "id": 9819, "tags": "organic-chemistry, nomenclature" }
Calculation of the Field Created by a Spherically Symmetrical Charge Distribution
Question: Gauss's Law I am trying to do this calculation using Gauss's Law. $$ \Phi_E = \int \vec E \cdot d \vec A = \frac{Q_{in}}{\varepsilon_0}$$ One of the ways to derive this law is to consider a spherical surface at the centre of which there is a point charge. Because of the symmetry the field has the same value at every point of the Gaussian surface. $$ \int E\hat r \cdot dA \hat r= \int EdA =EA = \frac{q}{\varepsilon_0} $$ If I want to calculate it by solving the surface integral, would I differentiate the surface of the sphere? $$dA=d(4\pi r^2)=8\pi r\,dr $$ $$ \int E\,dA = \int \frac{k_eQ_{in}}{r^2} 8\pi r\,dr = \int \frac{2Q_{in}}{\varepsilon_0 r}dr$$ But this is not the right outcome. Is it maybe because the $dr$ indicates a change only in the radial direction? Calculation of the spherical distribution Since the distribution is spherically symmetrical the charge density is $\rho=\rho(r)$, right? In the case, for example, when we want to calculate the field at a point inside the sphere: $$ E= \frac{Q_{in}}{4\pi a^2\varepsilon_0} = \frac{ \int^a_0 \rho(r)\,dV}{4\pi a^2\varepsilon_0}=\frac{ \int^a_0 \rho(r)4\pi r^2\,dr}{4\pi a^2\varepsilon_0}$$ Where $ a \le R$ End So I guess what I am asking is: can I solve these integrals by further differentiating the differentials? i.e. $dA \to 8\pi r\,dr$ or $dV \to 4\pi r^2\,dr $ Answer: The short answer to your question about the manipulation $dA \to 8\pi r\,dr$ is that no, you cannot perform this manipulation to do the surface integral since the integral is being taken over a surface of constant radius, and this manipulation would have you integrate with respect to the radius. Note that the expression $8\pi r\,dr$ is a literal interpretation of how the surface area of the ball changes with small changes of the radius $dr$.
If you want to evaluate the integral $\iint_S d\vec A\cdot \vec E$, you will have to note that the differential area element $d\vec A$ is $( r^2\sin\theta \,d\theta\,d\phi)\,\hat r$, and that the integral is being taken over the range $\theta \in [0,\pi]$, $\phi \in [0,2\pi)$, not over $r$ since we are evaluating the integral at constant $r$. Formally, you are parametrizing the surface of the ball of radius $r$: $$ S(\phi, \theta) = (x(\phi,\theta),y(\phi,\theta),z(\phi,\theta)), $$ and computing the surface integral using the Jacobian factor $r^2\sin\theta\,d\theta\,d\phi$. Hence, $$ \iint_S d\vec A\cdot \vec E = \frac{Q_{\text{in}}}{\varepsilon_0}\iint_S d\phi\,d\theta\,\frac{r^2\sin\theta}{4\pi r^2} = \frac{Q_{\text{in}}}{\varepsilon_0}\frac{4\pi}{4\pi} = \frac{Q_{\text{in}}}{\varepsilon_0}. $$ Since $\iint_S d\phi\,d\theta\,\sin\theta = 4\pi$. As for the electric field a distance $a$ from the center of a spherically-symmetric charge distribution $\rho(r)$, yes you still pull out Gauss's law: $$ \iint_{S_a} d\vec A\cdot \vec E = \frac{1}{\varepsilon_0}\iiint_B dV\,\rho(r) \implies \vec E(a) = \hat r\frac{1}{4\pi\varepsilon_0 a^2} \iiint_B dV\,\rho( r). $$ In this second case, the volume $V$ of the ball is given by $4\pi r^3/3$, and the manipulation $dV = 4\pi r^2\,dr$ is valid since in this case, we are integrating over the volume of the ball, not just a surface of constant radius. In each case, pay attention to your region of integration. If the region is at a constant radius, you will not integrate with respect to the radius. If the region is a volume integral, you will integrate with respect to the radius (so long as the region has spherical symmetry).
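As a numerical cross-check of the surface integral above (a small sketch using a midpoint Riemann sum over θ; the grid resolution n is an arbitrary choice), the flux of the point-charge field over a sphere comes out to $Q_{\text{in}}/\varepsilon_0$ for any radius:

```python
import math

def flux_over_sphere(r, q_over_eps0=1.0, n=400):
    """Midpoint Riemann sum of the closed surface integral of E·dA over a sphere of
    radius r for the point-charge field E = (q/ε0)/(4π r²) r̂. Only θ and φ are
    integration variables — r is held fixed, as stressed in the answer."""
    E = q_over_eps0 / (4.0 * math.pi * r * r)
    dtheta = math.pi / n
    dphi = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        # dA = r² sinθ dθ dφ; the integrand is φ-independent, so the φ sum is n equal terms
        total += n * E * r * r * math.sin(theta) * dtheta * dphi
    return total

# The r² in E cancels the r² in dA: the flux equals q/ε0 for every radius.
for r in (0.5, 1.0, 3.0):
    assert abs(flux_over_sphere(r) - 1.0) < 1e-3
```

Note what the code does not do: it never integrates over r, which is exactly the mistake behind the $dA \to 8\pi r\,dr$ manipulation in the question.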
{ "domain": "physics.stackexchange", "id": 33432, "tags": "homework-and-exercises, electrostatics" }
Does treewidth $k$ imply the existence of a $K_{1,k}$ minor?
Question: Let $k$ be fixed, and let $G$ be a (connected) graph. If I'm not mistaken, it follows from the work of Bodlaender [1, Theorem 3.11] that if the treewidth of $G$ is roughly at least $2k^3$, then $G$ contains a star $K_{1,k}$ as a minor. Can we make the term $2k^3$ smaller? That is, does say treewidth at least $k$ already imply the existence of a $K_{1,k}$-minor? Is there a proof somewhere? [1] Bodlaender, H. L. (1993). On linear time minor tests with depth-first search. Journal of Algorithms, 14(1), 1-23. Answer: It is indeed true that every graph $G$ with no $K_{1,k}$ minor has treewidth at most $k-1$. We prove this below, first a few definitions: Let $tw(G)$ be the treewidth of $G$ and $\omega(G)$ be the maximum size of a clique in $G$. A graph $H$ is a triangulation of $G$ if $G$ is a subgraph of $H$ and $H$ is chordal (i.e has no induced cycles on at least $4$ vertices). A triangulation $H$ of $G$ is a minimal triangulation if no proper subgraph of $H$ is also a triangulation of $G$. A subset $X$ of vertices of $G$ is a potential maximal clique if there exists a minimal triangulation $H$ of $G$ such that $X$ is a maximal clique of $H$. It is well known that $$tw(G) = \min_{H} \omega(H) - 1$$ Here, the minimum is taken over all minimal triangulations $H$ of $G$. The above formula implies that to prove that $tw(G) \leq k-1$ it is sufficient to prove that all potential maximal cliques of $G$ have size at most $k$. We now prove this. Let $X$ be a potential maximal clique of $G$, and suppose that $|X| \geq k+1$. We will use the following characterization of potential maximal cliques: a vertex set $X$ is a potential maximal clique in $G$ if, and only if, for every pair $u$, $v$ of non-adjacent (distinct) vertices in $X$ there is a path $P_{u,v}$ from $u$ to $v$ in $G$ with all its internal vertices outside of $X$. This characterization can be found in the paper Treewidth and Minimum Fill-in: Grouping the Minimal Separators by Bouchitte and Todinca. 
With this characterization it is easy to derive a $K_{1,k}$ minor from $X$. Let $u \in X$. For every vertex $v \in X \setminus \{u\}$, either $uv$ is an edge of $G$ or there is a path $P_{u,v}$ from $u$ to $v$ with all internal vertices outside $X$. For all $v \in X$ that are non-adjacent to $u$ contract all the internal vertices of $P_{u,v}$ into $u$. We end up with a minor of $G$ in which $u$ is adjacent to all of $X$, and $|X| \geq k+1$. So the degree of $u$ in this minor is at least $k$, completing the proof.
{ "domain": "cstheory.stackexchange", "id": 3290, "tags": "graph-theory, treewidth, graph-minor" }
Refraction of light but slightly twisted
Question: This is the question: (I haven't bothered to type it because anyway I needed to put the picture of the circles.) So, now what I did first was basic stuff and found that the first angle of refraction was $30^\circ$. After that, some math showed that the line was tangent to the inner sphere. After this I can actually resort to lengthy non-physics related, completely mathematical stuff wherein I will need to take help of equation of tangents and stuff but that would take up a lot of time and is not possible in the exam hall. This is one of the best(intriguing) questions I have seen so far, in ray optics. Any help on how to solve it (non-mathematically)? Answer: In real life the ray will hardly enter the sphere because most of it will be reflected. In this case the answer would be trivially 0°. But I suppose the authors still meant the ray to be traced through the spheres, so I proceed with this assumption. This problem is actually quite possible to solve without special physical insight. The numbers given here are "nice" enough that it'd be unwise not to use this. So, you just need Snell's law and a bit of trigonometry. Here's a sketch of the solution: 1 . Use Snell's law to find out that the angle of refraction. Notice that it's 30°, and the ratio of the outer and the inner radii is 2. As this ratio is the sine of the angle of refraction, the ray is tangent to the inner sphere. Draw the lines in the figure to see this. 2 . Do the same as in step 1, but for the inner sphere. You'll find the angle of refraction is once again 30°, so the ray is tangent to an auxiliary circle of radius $R/2$. 3a . Use symmetry with respect to the line that crosses the tangent point found in step 2 and the origin. You'll find that the ray exits the inner sphere becoming tangent to it, then exits the outer sphere becoming tangent to it. This symmetry argument is justified by Helmholtz reciprocity principle. 3b . 
Alternatively, if symmetry doesn't convince you enough, continue the ray to the point of exit from the inner sphere and prove that its incidence angle is 30°. Proceed accordingly. 4 . Carefully calculate the accumulated angles of deviation, match to the answers. The diagram will look something like this (red lines are auxiliary, green is the ray, blue denotes the angles):
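Since the original figure and numbers are omitted from the question, here is a sketch of steps 1 and 2 with Snell's law, assuming the values the answer implies: 60° incidence at the outer sphere with relative index √3, and grazing (90°) incidence at the inner sphere with relative index 2. These specific values are inferred from the stated 30° refraction angles and the 2:1 radius ratio, not given in the excerpt:

```python
import math

def refraction_angle(theta_i_deg, n_rel):
    """Snell's law n1 sin θi = n2 sin θr, with n_rel = n2/n1; returns θr in degrees."""
    s = math.sin(math.radians(theta_i_deg)) / n_rel
    if s > 1.0:
        raise ValueError("total internal reflection")
    return math.degrees(math.asin(s))

# Step 1: 60° incidence on the outer sphere with relative index √3 refracts to 30°;
# sin 30° = 1/2 matches the inner/outer radius ratio, so the ray is tangent to the inner sphere.
r1 = refraction_angle(60, math.sqrt(3))
assert abs(r1 - 30) < 1e-9
assert abs(math.sin(math.radians(r1)) - 0.5) < 1e-9  # ratio of the radii is 2

# Step 2: grazing incidence on the inner sphere with relative index 2 again gives 30°,
# so inside it the ray is tangent to an auxiliary circle at half the inner radius.
r2 = refraction_angle(90, 2)
assert abs(r2 - 30) < 1e-9
```

Steps 3 and 4 then follow by the symmetry argument, so no further Snell computation is needed to finish the trace.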
{ "domain": "physics.stackexchange", "id": 80415, "tags": "homework-and-exercises, refraction, geometric-optics" }
What are the conditions necessary for a programming language to have no undefined behavior?
Question: For context, yesterday I posted Does the first incompleteness theorem imply that any Turing complete programming language must have undefined behavior?. Part of what prompted me to ask that question in the first place is that, awhile ago, someone on the learnprogramming subreddit told me something about the reason C++ in particular having so much undefined behavior is because, for it to not have undefined behavior, it would have to use a much more restrictive language model, but they didn't explain what that means exactly. I had also asked on Quora awhile ago about why C++ compilers don't always throw errors when a program contains undefined behavior and at least one answer mentioned something about it being fundamentally impossible to always detect undefined behavior at compile time and that this was related to the halting problem being undecidable. Those two things combined have me wondering about models of computing more generally -- my understanding is that all/most popular programming languages, including C++, are Turing complete, and since I was told the problem of detecting UB in C++ is fundamental and related to the halting problem, I thought that perhaps all Turing complete programming languages must have undefined behavior and C++ is just worse at hiding it than others. But judging from the answers to my above-linked question, I was mistaken about that. So my question now is, what conditions need to be imposed on a Turing complete language in order to guarantee that all possible programs written in the language will have fully defined behavior determined by the language specification? And, on a side note, does the answer have anything to do with the incompleteness theorems? I ask the latter question because the idea of defining a language for which all possible programs have fully defined behavior seems quite similar to the idea of defining an axiom system for which all possible theorems are provable/disprovable. 
Answer: The problem of statically detecting undefined behavior has nothing to do with undefinedness as such. It's just impossible to prove in general that programs in a Turing-complete language will do anything (Rice's theorem). For example, if your main function looks like int main() { do_something(); cout << "Done" << endl; } then for any algorithm attempting to determine whether the program halts, there is some definition of do_something that will fool it. For the same reason, for any algorithm attempting to determine whether the program displays Done, there is a definition of do_something that will fool it. For the same reason, if you add "42"[42]; at the end, then for any compiler that tries to warn you about undefined behavior, there is a definition of do_something that will fool it. Whether a program displays Done is decidable for a Turing-complete language that has no way to display text. Likewise, whether a program has undefined behavior is decidable for a Turing-complete language that completely defines the behavior of every program (as Turing's original computing model did, for instance). It is possible to detect and warn about undefined behavior in C++ in many cases, and popular compilers could do a better job of it than they do. someone [...] told me [...] for [C++] to not have undefined behavior it would have to use a much more restrictive language model, but they didn't explain what that meant exactly. They probably meant that defining the behavior of every program makes optimization more difficult. For example, if the effect of an out-of-bounds array access is defined, then every array access has to be compiled into code that checks whether the index is out of bounds so it can do the mandated thing (unless the compiler can prove that that code is dead, which in many cases is not possible). If the effect is not defined then the compiler can just generate a single memory-access instruction. 
It may crash the program, or overwrite some other variable causing weird, hard-to-debug behavior down the line, but that's okay, because that can only happen when the index was out of bounds, and the spec says anything goes then.
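The "fool the analyzer" construction the answer relies on is the classic diagonal argument, and it can be sketched directly in code. Below is an illustrative Python sketch (Python rather than C++ for brevity; the analyzer `always_yes` and the helper names are made up for the example). Note that we only reason about the constructed function rather than calling it, since by construction it would loop forever:

```python
# Sketch of the diagonalization: given ANY claimed analyzer
# halts(f) -> bool, we can define a do_something that defeats it.
def make_do_something(halts):
    def do_something():
        if halts(do_something):   # analyzer predicts "halts"?
            while True:           # ...then loop forever,
                pass
        # analyzer predicts "loops forever"? ...then return at once.
    return do_something

# A concrete (deliberately useless) analyzer that always answers "halts":
def always_yes(f):
    return True

troublemaker = make_do_something(always_yes)

# always_yes claims troublemaker halts, but by construction troublemaker
# would loop forever -- the verdict is wrong. The same recipe defeats
# any halts() you hand in, which is the heart of the undecidability.
assert always_yes(troublemaker) is True
print("analyzer verdict:", always_yes(troublemaker))
```

The same recipe applies to analyzers for "displays Done" or "triggers undefined behavior": whatever the analyzer predicts, `do_something` does the opposite.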
{ "domain": "cs.stackexchange", "id": 21417, "tags": "programming-languages, undecidability, mathematical-foundations, decidability, incompleteness" }
Is isospin magnitude $I$ conserved?
Question: Here is a table of isospin conservation in certain reactions. It is often loosely stated that 'isospin is always conserved in strong interactions', but it is never clear whether they mean total isospin I or its component I3. Also in the table below for the middle reaction it appears that sometimes a superposition of I=0,1 can react into just I=1, suggesting I may not be conserved. In this question isospin conservation for total isospin or third component of isospin? it confirms that I3 is conserved but just says 'I is an approximate symmetry', not being clear to whether that means I itself is conserved or not while another answer says it is conserved but is downvoted. Answer: Your table is invisible, but I'll answer by explaining the "why", from which all else follows. If you switched off all interactions, and set the masses of the up and down quarks equal, you'd have a perfect su(2) symmetry of your Hamiltonian, so 3 generators conserved, and hence both the Casimir and the $I_3$ generator conserved. But this is not a fact of nature, as $m_d=4.8MeV \approx 2\times m_u$, where $m_u=2.3MeV$, so, quite badly broken. How does coupling to the strong interactions make it an approximate, pretty good, symmetry? Unlike electromagnetism and the weak interactions, the strong interactions couple "blindly" to the up and down quarks, and so treat them both equally. The strong force breaks chiral symmetry dynamically and converts the above current quark masses to constituent quark masses, both about 300MeV, a third of the nucleon mass, as the strong interactions are characterized by a much higher scale, $\Lambda_{QCD}\approx 200MeV$, w.r.t. which the above mass difference is insignificant, $\kappa = (m_d-m_u)/\Lambda_{QCD}\approx$ 1%. So the strong interactions do their thing and they barely notice the 1% isospin breaking. Within such small breaking slop, Isospin and G-parity, predicated on it, are excellent approximate symmetries. 
This means that Isospin, all 3 generators, and hence their Casimir, I(I+1), are essentially conserved in the strong interactions. You use this conservation in all Clebsching problems in all reactions, as elementary particle texts instruct you to, with lots of problems relying on it. $I_3$ is actually better protected against the explicit breaking of the other two generators, because it does not change quark flavor, so it just counts the up and down content difference of your process, and depends less on masses. Adding the strange quark, whose mass is very different, has trained people to systematically account for this flavor su(3) breaking, the technology of the celebrated Eightfold Way: there is beautiful method in its madness. When you switch on the electromagnetic and weak interactions, isospin is broken by subleading corrections, comparable to or bigger than the mass breaking $\kappa$ above, and you need sensible perturbative ways to account for it. But the takeaway is that the entire isospin su(2) is conserved in the strong interactions, and you should utilize its conservation the way you would spin. In the context of your specific question with the emergent table, because of isospin conservation, you know that only the isotriplet combination of your reactants goes to the uniquely isotriplet final state, and the isosinglet combination channel does not connect to the reaction!
{ "domain": "physics.stackexchange", "id": 73911, "tags": "particle-physics, standard-model, strong-force, isospin-symmetry" }
Represent a real number without loss of precision
Question: Current floating point (ANSI C float, double) allows you to represent an approximation of a real number. Is there any way to represent real numbers without errors? Here's an idea I had, which is anything but perfect. For example, 1/3 is 0.33333333... (base 10) or 0.01010101... (base 2), but also 0.1 (base 3). Is it a good idea to implement this "structure": base, mantissa, exponent, so 1/3 could be 3^-1 {[11] = base 3, [1.0] mantissa, [-1] exponent}? Any other ideas? Answer: It all depends what you want to do. For example, what you show is a great way of representing rational numbers. But it still can't represent something like $\pi$ or $e$ perfectly. In fact, many languages such as Haskell and Scheme have built-in support for rational numbers, storing them in the form $\frac{a}{b}$ where $a,b$ are integers. The main reason that these aren't widely used is performance. Floating point numbers are a bit imprecise, but their operations are implemented in hardware. Your proposed system allows for greater precision, but requires several steps to implement, as opposed to a single operation that can be performed in hardware. It's known that some real numbers are uncomputable, such as the halting numbers. There is no algorithm enumerating their digits, unlike $\pi$, where we can calculate the $n$th digit as long as we wait long enough. If you want real precision for things like irrational or transcendental numbers, you'd likely need to use some sort of system of symbolic algebra, then get a final answer in symbolic form, which you could approximate to any number of digits. However, because of the undecidability problems outlined above, this approach is necessarily limited. It is still good for things like approximating integrals or infinite series.
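The exact rational arithmetic described in the answer is available in Python's standard library. A minimal sketch using `fractions.Fraction`, which stores exactly the $\frac{a}{b}$ integer pair mentioned above:

```python
from fractions import Fraction

# Binary floating point cannot store 0.1 or 0.2 exactly, so rounding leaks out:
assert 0.1 + 0.2 != 0.3

# Exact rational arithmetic: each value is a pair of integers a/b.
a = Fraction(1, 10) + Fraction(2, 10)
assert a == Fraction(3, 10)

# 1/3 is exact too -- no base-dependent repeating expansion ever gets truncated.
third = Fraction(1, 3)
assert third + third + third == 1

print(a)  # 3/10
```

This also illustrates the performance point: every `Fraction` operation is several big-integer multiplications plus a gcd reduction in software, versus a single hardware instruction for a float.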
{ "domain": "cs.stackexchange", "id": 3090, "tags": "binary-arithmetic, arithmetic, floating-point, real-numbers, number-formats" }
Does differentiation and (conformal) normal ordering commute?
Question: In Polchinski's String theory Volume I, chapter 2, he writes down the following OPE, $$ X^\mu (z_1,\bar{z}_1)X^\nu(z_2,\bar{z}_2) = -\frac{\alpha'}{2}\eta^{\mu\nu}\ln|z_{12}|^2 + :X^\nu X^\mu(z_2,\bar{z}_2): + \sum_{k=1}^{\infty}\frac{1}{k!} \bigg[ (z_{12})^k :X^\nu\partial^kX^\mu(z_2,\bar{z}_2): + \ (\bar{z}_{12})^k :X^\nu\bar{\partial}^kX^\mu(z_2,\bar{z}_2): \bigg] $$ which he gets by taylor expanding $:X^\mu(z_1,\bar{z}_1)X^\nu(z_2,\bar{z}_2): $ since it can be written as a sum of holomorphic and antiholomorphic functions (since it is harmonic in $(z_1,\bar{z}_1)$). But if we call the harmonic function $\ f(z_1,\bar{z}_1;z_2,\bar{z}_2) = :X^\mu(z_1,\bar{z}_1)X^\nu(z_2,\bar{z}_2):$, and taylor expand $\ f$ in $(z_1,\bar{z}_1)$ around $(z_1,\bar{z}_1) = (z_2,\bar{z}_2)$, then we get terms that go like ~ $\partial^k f$ ~ $\partial^k :X^\mu(z_1,\bar{z}_1)X^\nu(z_2,\bar{z}_2):$ evaluated at $(z_2,\bar{z}_2)$. Here the derivatives are outside the normal ordered expression while in Polchinski's expression, they are inside the normal ordered expression. Why are they equal? What am I misunderstanding? Reference for the expression : Eq. 2.2.4 Pg 38, Polchinski Chapter 2 (hardcover) (2004 reprint I think). Edit 1: $z_{12} = z_1 - z_2$ Edit 2: All partial derivatives are with respect to $z_1,\bar{z}_1$. Answer: Yes, differentiation and normal ordering commute. You can prove this using the definition (2.2.7) of the normal ordering: $$:\mathcal{F}: = \exp \left( \frac{\alpha '}{4} \int d^2z_1 d^2z_2 \ln |z_{12}|^2 \frac{\delta}{\delta X^{\mu}(z_1 , \bar{z}_1)} \frac{\delta}{\delta X^{\nu}(z_2 , \bar{z}_2)} \right) \mathcal{F}$$ Using this expression, the commutativity of the normal order and differentiation boils down to commutativity of functional derivative and differentiation. Let us consider any functional $F$ of $X^{\mu}(z )$. 
Then $$\frac{\delta F[X^{\mu}(z )]}{\delta X^{\mu}(z_1)} = \lim\limits_{\epsilon \rightarrow 0} \frac{F[X^{\mu}(z) + \epsilon \delta(z-z_1) ]-F[X^{\mu}(z)]}{\epsilon} $$ (see for instance eq A.28 here). Then apply this for $F[X^{\mu}(z , \bar{z})] = \partial X^{\mu}(z)$: $$\frac{\delta \partial X^{\mu}(z)}{\delta X^{\mu}(z_1)} = \delta ' (z-z_1)$$ On the other hand, $$\partial \frac{\delta X^{\mu}(z)}{\delta X^{\mu}(z_1)} = \delta ' (z-z_1)$$ since $\frac{\delta X^{\mu}(z)}{\delta X^{\mu}(z_1)} = \delta (z-z_1)$. The two expressions are equal, which proves the claim.
{ "domain": "physics.stackexchange", "id": 38746, "tags": "string-theory, conformal-field-theory" }
Is autoclaving sucrose solution necessary?
Question: I use 10% sucrose solution to feed lab mosquitoes. Until now, I mix sucrose in autoclaved water and use it directly for feeding mosquitoes. Is it necessary to autoclave the sucrose solution itself before use? Answer: I found a standard protocol for the same here : http://vosshall.rockefeller.edu/assets/file/Vosshall%20Lab%20Mosquito%20Rearing%20SOP%20DEC%2012-2014.pdf Take a look at page no.8. They suggest to autoclave the sucrose solution.
{ "domain": "biology.stackexchange", "id": 7694, "tags": "zoology, materials" }
Sign problem and stoquastic Hamiltonians
Question: What is the sign problem in quantum simulations and how do stoquastic Hamiltonians solve it? I tried searching for a good reference that explains this but explanations regarding what the sign problem is are very hand-wavy. A related question, for stoquastic Hamiltonians are only off-diagonal terms zero or non-positive or are diagonal terms also zero and non-positive? Slide 2 here suggests all matrix terms are non-positive, but that means the diagonals have to all be zero, as a Hamiltonian is positive semi-definite and positive semi-definite matrices have non-negative diagonal entries. Answer: Stoquastic Hamiltonians do not suffer from the "sign problem" since for any observable A $ \langle A \rangle = \frac{1}{Z} \cdot \text{Tr } Ae^{-\beta H} = \frac{1}{Z} \cdot \sum_c A(c)p(c) $ and all weights $ p(c) \geq 0 $. A simple proof: Define $ G = d I - H $, where $ d = \text{max}_i H_{ii} $. All matrix elements of $ G $ are non-negative and so this holds for $G^n, \forall n$. \begin{align*} \langle A \rangle & = \frac{1}{Z} \cdot \text{Tr } A e^{-\beta H} \\ &= \frac{1}{Z} \cdot \text{Tr } A e^{-\beta (dI - G)} \\ &= \frac{e^{-\beta d}}{Z} \cdot \text{Tr } A e^{\beta G} \\ &= \frac{e^{-\beta d}}{Z} \cdot \sum_n \frac{\beta^n}{n!} \text{Tr }A G^n \\ &= \frac{e^{-\beta d}}{Z} \cdot \sum_{n} \sum_{x, y} \frac{\beta^n}{n!} \cdot \langle x|A|y \rangle \langle y|G^n|x \rangle \end{align*} and all weights $ e^{-\beta d} \cdot \frac{\beta^n}{n!} \cdot \langle y|G^n|x \rangle $ are non-negative.
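The key step of the proof -- all matrix elements of $G = dI - H$, and hence of every $G^n$, are non-negative -- is easy to check numerically. A minimal Python sketch (the 3x3 matrix `H` below is a made-up example, chosen only to satisfy the stoquastic condition of real diagonal and non-positive off-diagonal entries):

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A made-up stoquastic Hamiltonian: off-diagonal elements are non-positive.
H = [[ 1.0, -0.5,  0.0],
     [-0.5,  2.0, -0.3],
     [ 0.0, -0.3,  0.5]]

d = max(H[i][i] for i in range(3))            # d = max_i H_ii
G = [[(d if i == j else 0.0) - H[i][j]        # G = d*I - H
      for j in range(3)] for i in range(3)]

# Every entry of G is >= 0, hence so is every entry of G^n, and therefore
# every weight <y|G^n|x> appearing in the expansion of Tr A e^{beta G}.
P = [row[:] for row in G]
for n in range(1, 8):
    assert all(P[i][j] >= 0 for i in range(3) for j in range(3))
    P = matmul(P, G)
print("all sampled powers of G are entrywise non-negative")
```

Entrywise non-negativity is closed under matrix multiplication, which is why checking `G` itself already guarantees the property for all powers; the loop just makes that concrete.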
{ "domain": "quantumcomputing.stackexchange", "id": 2310, "tags": "hamiltonian-simulation" }
More recent data and simulations of "Milkomeda", the collision of the Milky Way and Andromeda galaxies?
Question: The Space.com headline Hubble Telescope Spots Two Galaxies in a Doomed (but Dazzling) Dance; The galaxies will ultimately crash into each other was probably overstated, as seems to be policy in some popular press sites. The galaxies are not "doomed". However the paragraph below is interesting: Our Milky Way, for example, is on an inevitable collision course with the neighboring behemoth galaxy — Andromeda. Individual star systems like ours will likely be largely undisrupted, but distant observers will see the two galaxies gradually become one in some four billion years. ESA nicknames this new merged galaxy "Milkomeda." Question: How much more is predicted now about the "upcoming" collision and possible merger of the Milky Way and Andromeda galaxies than has already been described in this answer's citation of a 2012 Astrobites review The Fate of the Milky Way? What measurements (if any) have contributed to this additional level of prediction? The current data could be improved by future additional HST observations. Also, soon it will be possible to compare them with independent water maser measurements for individual sources in M31 (see this Letter), which might allow measurement of other cool effects, such as the M31 proper motion rotation, and the increase in Andromeda's apparent size due to its motion towards us. Answer: Since the second data release (DR2) of the European Space Agency's Gaia mission there has been a revolution in astrometry, including measuring the motion of the Andromeda Galaxy. In February of this year, van der Marel et al. (also on arXiv) published interesting results on that matter by using Gaia's DR2 measurements. The results reveal that the collision is going to happen 600 million years later than the previous estimate (in 4.5 Gyr instead of 3.9 Gyr). Also, Andromeda appears to have more tangential motion than previously thought and thus "the galaxy is likely to deliver more of a glancing blow to the Milky Way than a head-on collision".
Also interesting, M33 (Triangulum galaxy) is going to make its first infall towards Andromeda and might interact gravitationally quite a lot with it. The absence of stellar streams between Triangulum and Andromeda show that this is the first time they are going to meet each other. Thanks to Gaia the measurements are now so precise that for the first time we are able to notice even the minute rotation of both Triangulum and Andromeda galaxies astrometrically (and not only by using doppler shifts).
{ "domain": "astronomy.stackexchange", "id": 3927, "tags": "observational-astronomy, milky-way, galactic-dynamics, n-body-simulations" }
Does resampling cause phase shifts?
Question: I have a signal sampled at 40MHz, and I would like to resample it to 37MHz. The signal is not periodic. I did the resampling with the Matlab resample function and it doesn't cause a phase shift (as far as I understood). Matlab applies an anti-aliasing FIR filter and compensates for the delay introduced by the process. I want to do the same process in Python. I know there is a resample function in scipy.signal but the documentation is not clear to me. Does the resample function introduce phase shifts? If so, there is also a decimate function in scipy which applies a similar FIR filter. Should I use the decimate function over resample? Answer: Unfortunately, although scipy.signal.decimate has a zero phase shift argument, the decimation factor can only be an integer so you won't be able to downsample from 40 MHz to 37 MHz. scipy.signal.resample, on the other hand, can do the resampling you want but may (and most likely will) introduce phase shifts in your signal. There are other options (e.g. sklearn.resample), so have a look at them as well.
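Another option worth checking is scipy.signal.resample_poly, which resamples by a rational factor (here 37/40) with a linear-phase FIR filter whose group delay it compensates, much like Matlab's resample. The same idea in a dependency-free sketch -- upsample by L, filter with a symmetric (hence linear-phase) windowed-sinc lowpass whose constant delay is subtracted out, then downsample by M. The tap count and cutoff here are illustrative choices, not tuned values:

```python
import math

def resample_rational(x, L, M, taps=101):
    """Resample x by L/M with a symmetric FIR (linear phase): the filter's
    constant group delay of (taps-1)/2 high-rate samples is compensated
    explicitly, so no net phase shift is introduced."""
    fc = 0.5 / max(L, M)                      # cutoff, cycles/sample at the high rate
    mid = (taps - 1) // 2
    h = []
    for n in range(taps):
        k = n - mid
        s = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (taps - 1))   # Hamming window
        h.append(L * s * w)                   # gain L makes up for zero-stuffing loss
    up = [0.0] * (len(x) * L)                 # upsample: insert L-1 zeros
    for i, v in enumerate(x):
        up[i * L] = v
    out = []
    for n in range(0, len(up), M):            # downsample while convolving
        acc = 0.0
        for k, hk in enumerate(h):
            idx = n + mid - k                 # "+ mid" removes the filter delay
            if 0 <= idx < len(up):
                acc += hk * up[idx]
        out.append(acc)
    return out

sig = [math.sin(2 * math.pi * 0.01 * n) for n in range(200)]
res = resample_rational(sig, 37, 40)
print(len(sig), len(res))   # 200 -> 185 samples (200 * 37/40)
```

Because the filter taps are symmetric about their center, the phase response is linear (a pure delay), and shifting the output by `mid` samples cancels that delay exactly.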
{ "domain": "dsp.stackexchange", "id": 8736, "tags": "resampling, decimation, anti-aliasing-filter" }
How I get webcam to my browser using rosbridge
Question: I want to show the webcam of my laptop in the browser using ROS. I saw a better discussion here. I am using ros-fuerte. The problem is that I can't install gscam or mjpeg_server on my system. How can I fix this on ROS Fuerte? Please help me if you know any good tutorial. Originally posted by unais on ROS Answers with karma: 313 on 2013-04-15 Post score: 0 Answer: Hi geek, you can download the source of the mjpg-streamer. Look here: http://sourceforge.net/projects/mjpg-streamer/develop or here: http://sourceforge.net/projects/mjpg-streamer/ Download and make it. Then you can start the mjpg_streamer via shell so that it provides an Apache server and a www directory, where it uploads the webcam pictures. Then you are able to include the created html or php file in your own website. Maybe this helps you. I also use the mjpg_streamer and view the live stream with VLC or the VLC plugin. Best regards, zumili Originally posted by zumili with karma: 141 on 2013-04-15 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 13816, "tags": "mjpeg-server, gscam, rosbridge, ros-fuerte, webcam" }
Square root of NOT as a time-dependent unitary matrix
Question: I want to express the square root of NOT as a time-dependent unitary matrix such that each $n$ units of time, the square root of NOT is produced. More precisely, I want to find a $U(t_0,t_1)$ such that $U(t_0,t_1) = \sqrt{\text{NOT}}$, if $t_1-t_0=n$ for some $n$. One possible solution is to express $\sqrt{\text{NOT}}$ as a product of rotation matrices, and then, parametrize the angles in a clever way to depend on the time. But I do not know how to express $\sqrt{\text{NOT}}$ as a product of rotation matrices. Any help? Answer: $$ \sqrt{NOT} = e^{(\frac{i \pi}{4} I_2 - \frac{i \pi}{4} \sigma_x)}\\ U(t) = e^{\frac{t-t_0}{t_1 - t_0} (\frac{i \pi}{4} I_2 - \frac{i \pi}{4} \sigma_x)} $$
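Since $I_2$ and $\sigma_x$ commute, the exponential factors as $e^{i\pi s/4}\,[\cos(\pi s/4)\, I - i \sin(\pi s/4)\,\sigma_x]$ with $s=(t-t_0)/(t_1-t_0)$. A quick numerical sanity check in plain Python that this closed form squares to NOT at $s=1$ (and reaches NOT directly at $s=2$):

```python
import cmath
import math

def U(s):
    """exp(s * (i*pi/4) * (I - sigma_x)), evaluated via the commuting
    factorisation e^{ia} * (cos(a) I - i sin(a) sigma_x), a = pi*s/4."""
    a = math.pi * s / 4
    ph = cmath.exp(1j * a)
    c, si = math.cos(a), math.sin(a)
    return [[ph * c, -1j * ph * si],
            [-1j * ph * si, ph * c]]

def matmul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

NOT = [[0, 1], [1, 0]]

sq = matmul(U(1.0), U(1.0))   # sqrt(NOT) applied twice gives NOT
assert all(abs(sq[i][j] - NOT[i][j]) < 1e-12 for i in range(2) for j in range(2))

u2 = U(2.0)                   # evolving for twice the interval gives NOT directly
assert all(abs(u2[i][j] - NOT[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

At $s=1$ the matrix is $\frac{1}{2}\begin{pmatrix}1+i & 1-i\\ 1-i & 1+i\end{pmatrix}$, the familiar form of $\sqrt{\text{NOT}}$.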
{ "domain": "quantumcomputing.stackexchange", "id": 263, "tags": "quantum-gate, gate-synthesis" }
Big-O and not little-o implies theta?
Question: If $f(n)$ is in $O(g(n))$ but not in $o(g(n))$, is it true that $f(n)$ is in $\Theta(g(n))$? Similarly, $f(n)$ is $\Omega(g(n))$ but not in $\omega(g(n))$ implies $f(n)$ is in $\Theta(g(n))$? If not, can you provide an explanation/counter-example, please? Answer: Let's start with the simple case $g = 1$, and $f$ having positive values only (that's all we care about with functions that represent complexity). $f \in O(1)$ means that $f$ is bounded: there exists $B$ such that $f(n) \le B$ (for sufficiently large values of $n$). $f \in o(1)$ means that $\lim_{n\to\infty} f(n) = 0$. $f \in \Theta(1)$ means that $f$ is bounded above and below: there exists $A \gt 0$ and $B$ such that $A \le f(n) \le B$ (for sufficiently large values of $n$). Note that $f \in \Theta(1)$ requires a positive (nonzero) lower bound for $f$: for sufficiently large $n$, $f(n) \ge A$. If $f \in o(1)$ then for sufficiently large $n$, $f(n) \lt A$. It is possible for neither of these to hold if $f$ oscillates between “large” (bounded below) and “small” (converging to zero) values, for example $$f(n) = \begin{cases} 1 & \text{if \(n\) is even} \\ 1/n & \text{if \(n\) is odd} \\ \end{cases}$$ Informally speaking, half of $f$ is $\Theta(1)$ (the even values) and half is $o(1)$ (the odd values), so $f$ is neither $\Theta(1)$ nor $o(1)$, despite being $O(1)$. With $g = 1$, there are regularity conditions on $f$ that's sufficient to make $O$ and not $o$ imply $\Theta$: this does hold if $f$ is monotonic; more generally, it holds if $f$ has a limit at $\infty$. With arbitrary positive $g$, these sufficient conditions translate to conditions on the quotient $f/g$ being monotonic (because $f \in O(g)$ iff $f/g \in O(1)$, etc.). It isn't enough for $f$ and $g$ to be both increasing or any such condition from real analysis. You can take any $g$ and multiply it by the $f$ above to get a counterexample to your conjecture. 
With algebraic conditions on $f$ and $g$, if they are taken from sufficiently restricted sets of functions, the conjecture may hold. For example, it holds if they're both polynomials (for polynomials, $f \in O(g)$ if $\deg(f) \le \deg(g)$, $f \in o(g)$ if $\deg(f) \lt \deg(g)$ and $f \in \Theta(g)$ if $\deg(f) = \deg(g)$). But as soon as you add “perturbations” to $f$ and $g$, all bets are off.
{ "domain": "cs.stackexchange", "id": 6728, "tags": "asymptotics, landau-notation" }
Errors importing old projects in eclipse
Question: Hi, I followed this wiki http://www.ros.org/wiki/IDEs to start using eclipse while developing my ros nodes. I started importing an old node I developed a couple of weeks ago but it seems that eclipse cannot resolve subscribe. Here's my code that fails: cmd_vel_sub_ = node_.subscribe <geometry_msgs::Twist>(sub_cmd_vel, 1, boost::bind(&RosListenNode::twist_pub, this, _1)); and here's the error I get: Description Resource Path Location Type Invalid arguments ' Candidates are: ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (#1::*)(#0), #1 *, const ros::TransportHints &) ros::Subscriber subscribe(ros::SubscribeOptions &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (#1::*)(#0)const, #1 *, const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (#1::*)(const boost::shared_ptr<const #0> &), #1 *, const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (#1::*)(const boost::shared_ptr<const #0> &)const, #1 *, const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (#1::*)(#0), const boost::shared_ptr<#1> &, const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (#1::*)(#0)const, const boost::shared_ptr<#1> &, const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (#1::*)(const boost::shared_ptr<const #0> &), const boost::shared_ptr<#1> &, const ros::TransportHints &) ros::Subscriber subscribe(const 
std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (#1::*)(const boost::shared_ptr<const #0> &)const, const boost::shared_ptr<#1> &, const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (*)(#0), const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, void (*)(const boost::shared_ptr<const #0> &), const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, const boost::function<void (const boost::shared_ptr<const #0> &)> &, const boost::shared_ptr<const void> &, const ros::TransportHints &) ros::Subscriber subscribe(const std::basic_string<char,std::char_traits<char>,std::allocator<char>> &, unsigned int, const boost::function<void (#1)> &, const boost::shared_ptr<const void> &, const ros::TransportHints &) ' bridge.cpp /RosToPlatform-RelWithDebInfo@RosToPlatform/src line 134 Semantic Error Any idea on how to solve this problem? Originally posted by hisdudeness on ROS Answers with karma: 51 on 2012-11-05 Post score: 0 Original comments Comment by hisdudeness on 2012-11-06: A weird thing, if I remove the boost:bind call and substitute it with NULL I get no errors. So I thought it was a problem with boost but if I add boost::bind(&RosListenNode::twist_pub, this, _1); it doesn't result as an error.It seems like eclipse is having problems with the templates Answer: Problem solved! It was Eclipse Juno's fault. I downloaded Helios and it works just fine. I also tried Indigo but I get the same error. Originally posted by hisdudeness with karma: 51 on 2012-11-06 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 11628, "tags": "eclipse" }
Canvas color detection
Question: I am coding a game, and I need to detect the color of a rectangles on a canvas, by moving a character and touching them, so that a message will be displayed "this is magenta" and so on. Please find below the game and my coding, so that you will better understand me: jsFiddle var canvas = document.getElementById("canvas"); var context = canvas.getContext("2d"); /*moving grey character*/ var xPos = 0; var yPos = 0; var bucketWidth = 100; var bucketHeight = 10; context.fillStyle = "grey"; context.fillRect(xPos, yPos, bucketWidth, bucketHeight); context.strokeRect(xPos, yPos, bucketWidth, bucketHeight); /*static red character*/ var xPos2 = canvas.width / 2; var yPos2 = canvas.height / 2; var bucketWidth2 = 100; var bucketHeight2 = 10; context.fillStyle = "red"; context.fillRect(xPos2, yPos2, bucketWidth2, bucketHeight2); /*static magenta character*/ var bucketWidth3 = 100; var bucketHeight3 = 10; var xPos3 = canvas.width / 2; var yPos3 = - bucketHeight3 - bucketHeight3 + canvas.height / 2; context.fillStyle = "magenta"; context.fillRect(xPos3, yPos3, bucketWidth3, bucketHeight3); /*Function to move the grey character from left to right and from top to button*/ function move(e){ if (e.keyCode === 37 && xPos > 0 && xPos <= canvas.width-bucketWidth){ xPos -= bucketWidth; } else if (e.keyCode === 39 && xPos >= 0 && xPos < canvas.width-bucketWidth){ xPos += bucketWidth; } else if (e.keyCode === 40 && yPos >= 0 && yPos < canvas.height-bucketHeight){/*down arrow*/ yPos += bucketHeight; } else if (e.keyCode === 38 && yPos > 0 && yPos <= canvas.height-bucketHeight){ /*up arrow*/ yPos -= bucketHeight; } canvas.width= canvas.width; context.fillStyle = "grey"; context.fillRect(xPos, yPos, bucketWidth, bucketHeight); context.strokeRect(xPos, yPos, bucketWidth, bucketHeight); context.fillStyle = "red"; context.fillRect(xPos2, yPos2, bucketWidth2, bucketHeight2); context.fillStyle = "magenta"; context.fillRect(xPos3, yPos3, bucketWidth3, bucketHeight3); } document.onkeydown 
= move; By the way, I was told to use the function getImageData(), but I could not apply the function by moving the character. Somebody send me an example, but it is working by using the mouse and jQuery was used, but I am not allowed to use jQuery. Please see this link. Answer: Reading the color on the canvas is not the right way to proceed, for several reasons : You want the color string (expl : 'red'), not the r,g,b values provided by getImageData (expl : 255, 0, 0) . You can't draw the buckets with gradient or color animation with such a method. Anyway you are the one handling the buckets, so what about keeping them all within a nice Object ? And what about going Object for the buckets also ? You'll have great flexibility to change anything to your buckets later. Below I define a BucketGrid class that will handle the buckets, and a Bucket class that holds data relative to a bucket. http://jsfiddle.net/gamealchemist/ydtkbzoc/6/ BucketGrid Class : function BucketGrid(columnCount, rowCount, bucketWidth, bucketHeight) { var buckets = []; this.columnCount = columnCount; this.rowCount = rowCount; this.bucketWidth = bucketWidth; this.bucketHeight = bucketHeight; this.getBucket = function (column, row) { return buckets[column + row * columnCount]; } this.setBucket = function (column, row, bucket) { bucket.column = column; bucket.row = row; buckets[column + row * columnCount] = bucket; } this.insertNewBucket = function (column, row, color) { this.setBucket(column, row, new Bucket(this, color)); } this.isValidPosition = function (column, row) { return ((column >= 0) && (column < this.columnCount) && (row >= 0) && (row < this.rowCount)); } this.draw = function (context) { for (var i = 0; i < buckets.length; i++) { var thisBucket = buckets[i]; if (thisBucket) thisBucket.draw(context); } } } Bucket Class : function Bucket(owner, color) { this.owner = owner; this.column = 0; this.row = 0; this.color = color; this.draw = function (context) { var owner = this.owner, bw = 
owner.bucketWidth, bh = owner.bucketHeight; context.fillStyle = this.color; context.strokeStyle = '#000'; context.save(); context.translate(this.column * bw, this.row * bh); context.fillRect(0, 0, bw, bh); context.strokeRect(0, 0, bw, bh); context.restore(); } } Setup : var buckets = new BucketGrid(4, 38, 100, 10); buckets.insertNewBucket(2, 20, 'red'); buckets.insertNewBucket(2, 21, 'magenta'); /*moving grey character*/ var hero = new Bucket(buckets, 'grey'); Handlers : /*Function to move the grey character from left to right and from top to button*/ function move(e) { var keyCode = e.keyCode; switch (keyCode) { case 37: if (buckets.isValidPosition(hero.column - 1, hero.row)) hero.column--; break; case 39: if (buckets.isValidPosition(hero.column + 1, hero.row)) hero.column++; break; case 40: /*down arrow*/ if (buckets.isValidPosition(hero.column, hero.row + 1)) hero.row++; break; case 38: /*up arrow*/ if (buckets.isValidPosition(hero.column, hero.row - 1)) hero.row--; break; } var hovered = buckets.getBucket(hero.column, hero.row); colorDiv.innerHTML = hovered ? hovered.color : 'none'; drawScene(); } function drawScene() { context.clearRect(0, 0, context.canvas.width, context.canvas.height); buckets.draw(context); hero.draw(context); } launch the game : var canvas = document.getElementById("canvas"); var colorDiv = document.getElementById('whichColor'); var context = canvas.getContext("2d"); document.onkeydown = move; drawScene();
{ "domain": "codereview.stackexchange", "id": 11286, "tags": "javascript, html5, canvas" }
Time-dependent perturbation theory: assigning the order of expansion to squares of the solution
Question: What is the square of a solution from time-dependent perturbation theory? Assume we have found the corrections up to second order such that $$ |\psi(t)\rangle \approx |\psi^0(t)\rangle + |\psi^1(t)\rangle +|\psi^2(t)\rangle $$ The population of an eigenstate of the unperturbed system after time $t$ is then $P_a\equiv |\langle \phi_a|\psi(t)\rangle|^2$, where the $|\phi_a\rangle$ are given as eigenstates of the unperturbed time-independent system. Defining the expansion coefficients as $$ c^m_a(t)=\langle \phi_a|\psi^m(t)\rangle, \ \tilde c^m_a(t)=\langle \psi^m(t)|\phi_a\rangle, $$ one obtains the following expression for the population of state $a$: $$ P_a \approx |c_a^0|^2+|c_a^1|^2+|c_a^2|^2+2\Re(\tilde c_a^0c_a^1 + \tilde c_a^0c_a^2+ \tilde c_a^1c_a^2) $$ Now we assume that our system was in eigenstate $\phi_b$ of the unperturbed system at the beginning, from which follows $c_a^0=0$. The expression for the population simplifies under this assumption to $$P_a \approx |c_a^1|^2+|c_a^2|^2+2\Re(\tilde c_a^1c_a^2)$$ Does the last term $2\Re(\tilde c_a^1c_a^2)$ vanish for some reason, or is this term a part of the solution up to second order? Also, is there a way to assign orders of expansion to the population, or in general to squares of the solution built from all correction terms? I.e., what would the following be? $$P_a^0 = ? \\ P_a^1 =? \\ P_a^m=?\\ $$ Answer: Usually the property calculated via wavefunctions obtained by first-order time-dependent perturbation theory is also called first order. For example the population $$ P^{(1)}=\langle \Psi^{(1)} |\Psi^{(1)}\rangle $$ Generally $P^{(n)}$ is not equal to the "proper" expansion in the perturbation parameter, which shall be denoted $\tilde P^{(n)}$. The difference is obvious when both objects are expanded in the occurring orders of the perturbation.
The following definitions are used: $\Psi^{(n)}=\sum^n_{m=0}\Psi^m$, where $\Psi^n$ is the contribution/correction of order $n$ and $\Psi^{(n)}$ is the sum of all terms and therefore the full wavefunction within time-dependent perturbation theory up to order $n$. The "proper" population can also be written as a sum of corrections, $$\tilde P^{(n)}=\sum_{m=0}^n\tilde P^m = \sum_{m=0}^n\sum^m_{i=0}\sum_{j=0}^i\langle \Psi^j|\Psi^{m-j}\rangle $$ which allows the identification of $\tilde P^m=\sum^m_{i=0}\sum_{j=0}^i\langle \Psi^j|\Psi^{m-j}\rangle$. Compare this with the population obtained by simply plugging the wavefunction of order $n$ into the calculation of the population, $$ P^{(n)}=\langle \Psi^{(n)}|\Psi^{(n)} \rangle =\sum_{i=0}^n \sum_{j=0}^n \langle \Psi^i | \Psi^j\rangle $$ This expression can also be expanded in powers of the perturbation, but now we need two parameters to specify what we are looking at: first the order $n$ of the wavefunction which was plugged in, and then a second parameter $k$ for the order in the perturbation. $$ P^{(n)}=\sum_{k=0}^{2n}P^{n,k}=\sum^n_{m=0} \tilde P^m + \sum^{2n}_{l=n+1}\sum^l_{h=n+1}\langle \Psi^h| \Psi^{l-h}\rangle $$ Recall that $k$ stands for the order of the perturbation parameter, while $n$ stands for the order of the wavefunction that is plugged in. We see that we need to define $P^{n,k}$ via cases. Looking at the sums we can derive $$ P^{n,k} =\tilde P^k \quad \forall \ k \leq n$$ and $$ P^{n,k} = \sum^{2n}_{j=n+1}\langle \Psi^j|\Psi^{k-j}\rangle \quad \forall \ n+1 < k \leq 2n $$ We see that the expansions are identical when comparing the contributions at a given order of the perturbation parameter up to the order of the wavefunction that was plugged in. The higher orders in the perturbation parameter differ, and one must make clear whether the population was obtained by plugging in a wavefunction of order $n$ or whether the population was directly expanded.
The problem is often not addressed, as the expansions can be identical depending on the choice of the zeroth-order term $\Psi^0$. This can cause the first-order expansion of the population via wavefunctions to be identical to the "proper expansion" up to second order in the perturbation parameter. One case where this happens is the derivation of Fermi's golden rule in spectroscopy.
{ "domain": "physics.stackexchange", "id": 61161, "tags": "quantum-mechanics, hilbert-space, time, perturbation-theory, time-evolution" }
Showing that $\lg(n!)$ is or is not $o(\lg(n^n))$ and $\omega(\lg(n^n))$
Question: My instructor assigned a problem that asks us to determine which asymptotic bounds apply to a certain $f(n)$ for a certain $g(n)$, in my case $f(n) = \lg(n!)$ and $g(n) = \lg(n^n)$. For clarity, the convention we use in our class is that $\lg = \log_2$, the "binary logarithm". I know that by Stirling's approximation, $\lg(n!)$ grows in $O(n\lg(n))$, and evaluating the limit $\lim_{n \to \infty} \frac{n\lg(n)}{n\lg(n)} = C$, some constant > 0, and so $\lg(n!)$ is in $\Theta(\lg n^n)$. $\Theta$ also means that my $f(n)$ is in $O(g(n))$ and $\Omega(g(n))$, but this does not mean that my $f(n)$ is in $o(g(n))$ or $\omega(g(n))$. For that, I believe I would need to evaluate $\lim_{n \to \infty} \frac{\lg(n!)}{\lg(n^n)}$, but I am not certain. What strategy would I use to show that $f(n)$ is in $o(g(n))$ or $\omega(g(n))$? Would I evaluate $\lim_{n \to \infty} \frac{\lg(n!)}{\lg(n^n)}$? Answer: You seem to be trying to prove something that is false. If $f=O(g)$ then $\liminf_{n\to\infty}g/f > 0$, so $f\neq \omega(g)$. Similarly, if $f=\Omega(g)$ then $f\neq o(g)$. Since you already have that $\lg n! = \Theta(\lg n^n)$, that gives you big-$O$ and big-$\Omega$, which preclude little-$\omega$ and little-$o$, respectively.
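As a quick numerical sanity check (not part of the original answer), the ratio $\lg(n!)/\lg(n^n)$ tends to 1, consistent with the $\Theta$ bound and ruling out both little-$o$ (which needs the ratio to go to 0) and little-$\omega$ (which needs it to diverge):

```python
import math

def lg_factorial(n):
    # lg(n!) computed stably via the log-gamma function: ln(n!) = lgamma(n + 1)
    return math.lgamma(n + 1) / math.log(2.0)

def lg_n_pow_n(n):
    return n * math.log2(n)

ratios = [lg_factorial(n) / lg_n_pow_n(n) for n in (10, 1000, 100000)]
# Stirling gives ratio ~ 1 - 1/ln(n), so convergence to 1 is slow but monotone.
assert ratios == sorted(ratios) and all(0 < r < 1 for r in ratios)
```

The slow $1 - 1/\ln n$ convergence is exactly why a limit argument alone needs care here, and why the closure properties in the answer settle the question more cleanly.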
{ "domain": "cs.stackexchange", "id": 13379, "tags": "asymptotics" }
Do neurons with dopamine inside only send signals to another neuron with dopamine?
Question: A nerve cell with dopamine receptors gets an action potential and releases dopamine to other neurons. Does this nerve cell only release to cells with dopamine inside? Because what if a neuron has a dopamine receptor but serotonin as a transmitter? Answer: No, neurons with all sorts of neurotransmitters frequently synapse onto neurons that release other types of neurotransmitters. In fact, aside from the most common neurotransmitters like glutamate and GABA, this is the rule rather than the exception. You mention specifically dopamine, and this is a perfect example--the medium spiny neurons in the striatum that are the most "famous" dopamine-receiving cells are NOT dopaminergic themselves--they are all GABAergic. The other aminergic neurons are similar; the noradrenergic locus ceruleus neurons and the serotonergic neurons of the raphe nuclei project to much of the forebrain, which contains few noradrenergic or serotonergic neurons of its own. Neurons frequently have receptors for neurotransmitters that they do not produce, and these are commonly known as heteroreceptors, as opposed to autoreceptors, which respond to the same transmitter that the cell possessing the receptor releases. Autoreceptors commonly exist more for the purpose of negative feedback ("I've already released enough of my transmitter, therefore I need to stop so I don't overload the cells I synapse onto") than for neurons releasing the same transmitter to activate one another.
{ "domain": "biology.stackexchange", "id": 10920, "tags": "brain, neuron" }
Problems that are Cook-reducible to a problem in NP $\cap$ co-NP
Question: Let $\mathcal{A}$ be a problem in $\text{NP} \cap \text{co}$-$\text{NP}$. Now assume we can reduce another problem $\mathcal{B}$ to it using a Cook reduction. What conclusions can we draw about $\mathcal{B}$? Does this question even make sense? I'm asking because from what I understand Cook reductions differ from Karp reductions (for example, under Cook reductions $\text{NP}$ cannot be distinguished from $\text{co}$-$\text{NP}$). I'm pretty confused and can't seem to really understand the properties of Cook reductions. Any good reference about the topic would also be appreciated! I hope this question is not too basic, but I was not able to find anything about it. Answer: Nice question/homework! Stated in another way: $\mathsf{NP}$ is not closed under Cook reductions (assuming $\mathsf{P}\neq \mathsf{NP}$). How about $\mathsf{NP}\cap\mathsf{coNP}$? Is $\mathsf{NP}\cap\mathsf{coNP}$ closed under Cook reductions? Is the only reason that $\mathsf{NP}$ is not closed under Cook reductions that it is not closed under complement? If we take those $\mathsf{NP}$ problems whose complement is also in $\mathsf{NP}$, do we get around the problem? For example, if we have a Cook reduction from a problem to Factoring, would that mean that the problem is in $\mathsf{NP}\cap\mathsf{coNP}$? An oracle doesn't just allow us to ask about membership and non-membership in the oracle; it allows us to ask a very complicated list of questions where each question can depend on the answer to the previous one. Let's look at a problem $Q \in \mathsf{P^{NP \cap coNP}}$. We know that there is a polynomial-time algorithm $M$ and a set $A\in \mathsf{NP \cap coNP}$ such that $M^A$ solves the problem $Q$. Is $Q$ in $\mathsf{NP}$? Hint: when does $M^A$ accept an input $x$? Don't read the answer below if this is an assignment; the answer is not difficult, you should be able to solve it if you spend a few hours on it, and spending those hours is what makes you learn.
Let's look at the execution of $M^A$ on an input $x$. $M$ will make a number of queries to $A$ during its computation and will receive YES and NO answers, and finally will accept or reject. If we could compute the answers to the queries in polynomial time we would have shown that the problem is in $\mathsf{P}$: we would simulate the algorithm $M$ and whenever it asked a query to $A$ we would compute the answer, give it to $M$, and continue with its simulation. But $A\in\mathsf{NP \cap coNP}$ and we don't know how to compute the answers to the queries in polynomial time. But we don't need to do this in polynomial time! We can just guess the answers to the questions and verify our guesses in polynomial time. To be able to verify both positive and negative answers we need $A\in\mathsf{NP\cap coNP}$; that is why this does not work for $A \in\mathsf{NP}$, which would allow verifying only positive answers. Let $V^{YES}$ and $V^{NO}$ be two polynomial-time verifiers for membership and non-membership in $A$. Consider the verifier $V(x,y)$ which works as follows: I. check that $y$ consists of 1. a string $c$ (computation of $M$ on $x$), 2. a list of strings $q_1,\ldots,q_m$ (queries to the oracle in $c$), 3. a list of strings $a_1,\ldots,a_m$ (answers to queries from the oracle), 4. a list of strings $w_1,\ldots,w_m$ (certificates/proofs/witnesses for the correctness of the query answers). II. check that the list of queries $q_1,\ldots,q_m$ contains all oracle queries in $c$, III. check that the computation $c$ is an accepting computation of $M$ on $x$ if the answers to the queries are $a_1,\ldots,a_m$. IV. for all $1\leq i \leq m$, check that if $a_i=YES$ then $V^{YES}$ accepts $(q_i,w_i)$ and if $a_i=NO$ then $V^{NO}$ accepts $(q_i,w_i)$. All of these steps can be checked in polynomial time. So we have a verifier for YES answers of $M^A$.
Furthermore, note that if $M^A$ accepts $x$, then there is a $y$ satisfying these conditions which has polynomial size: the computation of a polynomial-time machine is of polynomial size, and the number of queries and the size of all queries are also polynomial in the input. Moreover, the size of the certificates for the queries is also bounded by some polynomial in the size of the queries, so it is again polynomial in the size of the input. In short, we have a polynomial-time verifier with polynomial-size certificates for $M^A$. This completes the proof that $Q \in \mathsf{NP}$. A similar argument shows that $Q\in\mathsf{coNP}$. So $Q\in\mathsf{NP\cap coNP}$. In other words, $\mathsf{P^{NP\cap coNP}} \subseteq \mathsf{NP\cap coNP}$.
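To make steps I-IV concrete, here is a toy instantiation in Python (the language $A$, the machine $M$, and the witness scheme are all invented for illustration and are far simpler than the general construction):

```python
# Toy oracle language A = even numbers, trivially in "NP cap coNP":
# witness for YES membership: integer w with q == 2w; for NO: q == 2w + 1.
def V_yes(q, w):
    return isinstance(w, int) and q == 2 * w

def V_no(q, w):
    return isinstance(w, int) and q == 2 * w + 1

def M(x, answer):
    """Toy oracle machine: accepts iff the oracle says YES to both x and x // 2.
    `answer(q)` plays the role of the oracle call; returns (accept, queries, answers)."""
    qs, ans = [], []
    for q in (x, x // 2):
        qs.append(q)
        a = answer(q)
        ans.append(a)
        if not a:
            return False, qs, ans
    return True, qs, ans

def verifier(x, cert):
    """NP-style verifier: the certificate guesses the oracle answers and
    supplies a witness for each guessed answer, as in the steps above."""
    answers, witnesses = cert
    it = iter(answers)
    accept, qs, ans = M(x, lambda q: next(it))   # re-run M on the guessed answers
    for q, a, w in zip(qs, ans, witnesses):      # check each guess's witness
        if not (V_yes(q, w) if a else V_no(q, w)):
            return False
    return accept
```

Here `verifier(8, ([True, True], [4, 2]))` accepts, while no certificate makes it accept 6: acceptance of $x$ coincides with the existence of a valid polynomial-size certificate, which is exactly the NP characterization used in the proof.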
{ "domain": "cs.stackexchange", "id": 1005, "tags": "complexity-theory, reductions, np" }
Why is the raised cosine pulse for digital communications so popular?
Question: The raised cosine pulse is essentially a windowed sinc function, designed with some roll-off in mind. Why do we use this particular window, as opposed to other windows such as Hann, Hamming, Kaiser, etc? All of these have zero ISI when applied to a sinc function, and have tunable frequency characteristics just like the raised cosine. Is the raised cosine optimal for digital comms in any sense? Answer: There is no good reason, except that it's familiar and "good enough" for many applications. Many other better pulses have been proposed; see for instance "Nyquist filters in non-ISI transmission", by Ping-Kuen Lam, E.W. McCune and M.A. Soderstrand, DOI:10.1109/MWSCAS.1997.666198.
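The zero-ISI property the question takes for granted can be checked directly. The sketch below (symbol period $T$ and roll-off $\beta$ chosen arbitrarily; the pulse is normalized so that $h(0)=1$) evaluates the standard time-domain raised-cosine impulse response at the symbol instants:

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi x) / (pi x), with the removable zero handled.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def raised_cosine(t, T=1.0, beta=0.35):
    x = t / T
    denom = 1.0 - (2.0 * beta * x) ** 2
    if abs(denom) < 1e-12:
        # Removable singularity at |t| = T / (2 beta)
        return (math.pi / 4.0) * sinc(1.0 / (2.0 * beta))
    return sinc(x) * math.cos(math.pi * beta * x) / denom

# h(0) = 1 and h(kT) = 0 at every other symbol instant: sampling at
# multiples of T sees no intersymbol interference.
assert abs(raised_cosine(0.0) - 1.0) < 1e-9
for k in (-3, -2, -1, 1, 2, 3):
    assert abs(raised_cosine(k * 1.0)) < 1e-9
```

The same check passes for the windowed-sinc pulses the question lists, which is the point: zero ISI alone does not single out the raised cosine.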
{ "domain": "dsp.stackexchange", "id": 6249, "tags": "digital-communications, window-functions" }
Unable to move the arm through Moveit...Error:Maybe failed to update robot state, time diff: 0,822s
Question: Hi, I am trying to control gazebo simulated arm mounted on mobile base through moveit. When I try to execute the trajectories after setting starting and goal positions, I am getting "Maybe failed to update robot state, time diff: 0,822s"in my terminal. When I remove robot_state_publisher from my launch file trajectories are being executed. But i need tf tree to visualize octomap. my launch files: <launch> <arg name="paused" default="false" /> <arg name="use_sim_time" default="true" /> <arg name="gui" default="true" /> <arg name="headless" default="false" /> <arg name="debug" default="false" /> <arg name="model" default="$(find kate_jaco)/urdf/kate_jaco.xacro"/> <arg name="rvizconfig" default="$(find kate_jaco)/rviz/urdf.rviz" /> <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched --> <include file="$(find gazebo_ros)/launch/empty_world.launch"> <arg name="world_name" value="/home/iki/iki.world"/> <arg name="debug" value="$(arg debug)" /> <arg name="gui" value="$(arg gui)" /> <arg name="paused" value="$(arg paused)" /> <arg name="use_sim_time" value="$(arg use_sim_time)" /> <arg name="headless" value="$(arg headless)" /> </include> <!-- Load the URDF into the ROS Parameter Server --> <param name="robot_description" command="$(find xacro)/xacro.py '$(find kate_jaco)/urdf/kate_jaco.xacro'"/> <param name="use_gui" value="$(arg gui)"/> <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" /> <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" respawn="false" output="screen"> </node> <rosparam file="$(find kate_jaco)/config/kate_jaco.yaml" command="load"/> <rosparam file="$(find kate_jaco)/config/jaco.yaml" command="load"/> <node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false" output="screen" ns="/kate_jaco" args="joint1_position_controller joint2_position_controller joint_state_controller 
jaco2_controller jaco2_gripper_controller "> </node> <!-- Run a python script to the send a service call to gazebo_ros to spawn a URDF robot --> <node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen" args="-urdf -model kate_jaco -param robot_description"/> </launch> my moveit launch file: <launch> <rosparam command="load" file="$(find kate_jaco_moveit)/config/joint_names.yaml"/> <include file="$(find kate_jaco_moveit)/launch/planning_context.launch" > <arg name="load_robot_description" value="true" /> </include> <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher"> <param name="/use_gui" value="false"/> <rosparam param="/source_list">[/kate_jaco/joint_states]</rosparam> </node> <include file="$(find kate_jaco_moveit)/launch/move_group.launch"> <arg name="publish_monitored_planning_scene" value="true" /> </include> <include file="$(find kate_jaco_moveit)/launch/moveit_rviz.launch"> <arg name="config" value="true"/> </include> </launch> PS: I strongly feel there is something wrong with the robot_state_publisher and the tf_static topic published by it. Originally posted by ARB on ROS Answers with karma: 51 on 2018-03-26 Post score: 0 Answer: Was able to solve this: there was a problem with the octomap update. Due to heavy point cloud data, move_group was unable to update the robot state quickly. Just reduce the rate at which the point cloud is being published. Originally posted by ARB with karma: 51 on 2018-03-26 This answer was ACCEPTED on the original site Post score: 0
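For reference, one hedged way to implement that fix without touching the camera driver is a `topic_tools` throttle node; the topic names below are hypothetical, so substitute the cloud topic your sensor actually publishes and point the MoveIt sensor plugin's `point_cloud_topic` at the throttled output:

```xml
<!-- Hypothetical topic names; limits the cloud used for octomap updates
     to 2 messages per second. -->
<node name="cloud_throttle" pkg="topic_tools" type="throttle"
      args="messages /camera/depth/points 2.0 /camera/depth/points_throttled" />
```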
{ "domain": "robotics.stackexchange", "id": 30445, "tags": "ros, robot-state-publisher, tf-static, ros-kinetic" }
Using Concurrent Dictionary and Lazy to cache expensive query results and only run query once in a threadsafe manner
Question: Ok, so I'm querying a webservice. This webservice is slow with multiple o's. I want to cache the results of the query, because I only want to query on a given set of parameters once during my transaction (In addition to the query being slow, it's also rate limited, so I don't want to perform unnecessary queries). Finally, to speed things up, I want to be able to run queries in parallel, as latency is my biggest problem here. With all of that, I think I need a thread-safe way to cache data and only run the queries I actually need to run. I think I've managed to cobble that together with Concurrent Dictionary and Lazy private static ConcurrentDictionary<KeyObject, Lazy<IEnumerable<ValueObject>>> DataCache = new ConcurrentDictionary<KeyObject, Lazy<IEnumerable<ValueObject>>>(); public static IEnumerable<ValueObject> GetData(KeyObject key, QueryObject query) { var value = new Lazy<IEnumerable<ValueObject>>(() => { return query.Run(); }); DataCache.TryAdd(key, value); return DataCache[key].Value; } } If I'm correct in understanding how all the moving parts work, it should go something like this. Generate dictionary key and Lazy initializer for value Try and add lazy initializer to dictionary with key. If another thread has already added key, fail and continue. Try and get value of lazy initializer on record in the dictionary. If another thread is already getting value, block until other thread has retrieved value. Does this sound correct? Answer: What you have will work, but there are a few incremental improvements. Rather than using TryAdd you can use GetOrAdd which returns the value for that key. This will save you a lookup (and also remove a race condition where the key is removed between calls by another thread). Also note that, currently, you're caching any errors that you get, and continuing to serve them out. If you're okay with that, or aren't worried about it, then okay.
You should do some tests to be sure, but it may be faster to use the overload of GetOrAdd that accepts a function, rather than a value. This will allow you to construct the Lazy only when a new one is needed, rather than on every call. Constructing a Lazy isn't that expensive (it doesn't need to actually compute the value after all) but this is likely still a win in most situations.
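A rough Python analogue of the reviewed pattern (names invented; this is a sketch of the idea, not the original C# code) may make the GetOrAdd-with-factory point concrete:

```python
import threading

class Lazy(object):
    """Compute-once holder, a rough analogue of .NET's thread-safe Lazy<T>."""
    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._done = False
        self._value = None

    @property
    def value(self):
        if not self._done:                      # double-checked locking
            with self._lock:
                if not self._done:
                    self._value = self._factory()
                    self._done = True
        return self._value

class QueryCache(object):
    """Each key's query runs at most once, even under concurrent access."""
    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}

    def get(self, key, query):
        with self._lock:                        # plays the role of GetOrAdd(key, factory)
            lazy = self._cache.get(key)
            if lazy is None:                    # build the Lazy only when missing,
                lazy = Lazy(query)              # mirroring the valueFactory overload
                self._cache[key] = lazy
        return lazy.value                       # expensive work runs outside the dict lock
```

The design mirrors both review points: the holder is constructed only when the key is absent, and the slow query executes outside the dictionary lock so concurrent lookups of different keys never serialize on each other.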
{ "domain": "codereview.stackexchange", "id": 23194, "tags": "c#, multithreading, cache, lazy" }
Confusion of Schrödinger equation and complex conjugates
Question: I have a similar question that was asked in the following link: (Schrödinger's Equation and its complex conjugate). But I find both the question and answers not specific enough. So let me rephrase the question. The Schrödinger equation for $\psi$ is given by $$-\frac{\hbar^2 }{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\psi = i \hbar \frac{\partial \psi}{\partial t}$$ So it is clear when one takes the complex conjugation of the above equation, it becomes $$-\frac{\hbar^2 }{2m}\frac{\partial^2\psi^*}{\partial x^2} + V(x)\psi^* = -i \hbar \frac{\partial \psi^*}{\partial t}$$ Therefore the Schrödinger equation for $\psi^*$ has the minus sign in front of the time derivative term. However, when one treats $\psi$ in the first equation as a placeholder, or a dummy variable, and replace it with $\psi^*$, the equation becomes $$-\frac{\hbar^2 }{2m}\frac{\partial^2\psi^*}{\partial x^2} + V(x)\psi^* = i \hbar \frac{\partial \psi^*}{\partial t}$$ which cannot be right. My question then is why one cannot treat $\psi$ in the first equation as a placeholder? Where is the logical pitfall in replacing $\psi$ with $\psi^*$? Answer: You can always change the symbol that stands for a dummy variable, but you can't change its interpretation. Your mistake is tantamount to starting from the equation $$x + 1 = 2$$ which has solution $x = 1$, then declaring $x$ is a dummy variable and replacing it with $-x$, for $$-x + 1 = 2$$ which has solution $x = -1$. This is totally valid, but these two $x$'s don't mean the same thing.
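The answer's point can be checked numerically with a free-particle plane wave (a sketch I'm adding, not from the original post; $V=0$ and all constants are set to arbitrary values): $\psi$ satisfies the equation with $+i\hbar\partial_t$, while $\psi^*$ satisfies only the sign-flipped equation.

```python
import cmath

# Plane wave psi(x, t) = exp(i(kx - wt)) with dispersion w = hbar k^2 / (2m).
hbar, m, k = 1.0, 1.0, 2.0
omega = hbar * k**2 / (2.0 * m)

def psi(x, t):
    return cmath.exp(1j * (k * x - omega * t))

def psi_star(x, t):
    return psi(x, t).conjugate()

def d_dt(f, x, t, h=1e-6):
    return (f(x, t + h) - f(x, t - h)) / (2.0 * h)

def d2_dx2(f, x, t, h=1e-4):
    return (f(x + h, t) - 2.0 * f(x, t) + f(x - h, t)) / h**2

x0, t0 = 0.3, 0.7
H = lambda f: -hbar**2 / (2.0 * m) * d2_dx2(f, x0, t0)   # Hamiltonian with V = 0

# psi satisfies  H psi = i hbar d/dt psi ...
assert abs(H(psi) - 1j * hbar * d_dt(psi, x0, t0)) < 1e-5
# ... but psi* does not; it satisfies the conjugated equation with -i:
assert abs(H(psi_star) - 1j * hbar * d_dt(psi_star, x0, t0)) > 1e-3
assert abs(H(psi_star) + 1j * hbar * d_dt(psi_star, x0, t0)) < 1e-5
```

This is the x + 1 = 2 analogy in action: the symbol can be renamed, but $\psi^*$ is a different object and solves a different (conjugated) equation.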
{ "domain": "physics.stackexchange", "id": 52122, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, complex-numbers" }
Free quantum field theories as fixed points of Wilsonian RG
Question: Consider Euclidean Klein Gordon quantum field theory on the toroidal spacetime $X\simeq S^1\times \cdots\times S^1$, with action $$S(\varphi) = \int_X \varphi(\Delta+m^2)\varphi$$ and scalar field $\varphi\in C^\infty_X$, the space of smooth functions on $X$. Here, $\Delta$ is the Laplace operator corresponding to the standard flat metric on the torus. The claim is that this theory is invariant under the Wilsonian renormalization semigroup. To formulate this statement precisely, let $C^\infty_{(a,b)}$ denote the linear span of smooth functions with eigenvalue in $(a,b)$ under application of $\Delta$, and decompose the space of smooth functions according to the eigenvalues of the Laplace operator: $$C^\infty_{[0,\Lambda)} \simeq C^\infty_{[0, \Lambda')}\oplus C^\infty_{[\Lambda',\Lambda)}.$$ Using the formula for the action of the Wilsonian renormalization semigroup $S[\Lambda]\to S[\Lambda']$ on the space of quantum field theories as given in Renormalization and Effective Field Theory, we have, for $\varphi\in C^\infty_{[0, \Lambda')}$, $$S[\Lambda'](\varphi)=\frac{\hbar}{i}\log \bigg(\int_{\varphi^\perp \in C^\infty_{[\Lambda',\Lambda)}}d\mu^\perp \exp(iS[\Lambda](\varphi+\varphi^\perp)/\hbar)\bigg),$$ where we have factored the low-energy Feynman measure out from the higher-energy Feynman measure via $d\mu_\Lambda\equiv d\mu_{\Lambda'}\wedge d\mu^\perp$, where the Feynman measure $d\mu_\lambda$ at a general energy scale $\lambda$ is defined by the condition $$\int_{\varphi\in C^\infty_{[0,\lambda)}}d\mu_\lambda \exp(iS[\lambda](\varphi)/\hbar)\equiv 1. $$ First note that the operator $(\Delta+m^2)$ lies in the algebra generated by the operators $1,\Delta$, and therefore trivially respects the eigenspace decomposition of $\Delta$.
Therefore, the modes of the scalar field are uncoupled, and the action factorizes as $$S[\Lambda](\varphi+\varphi^\perp)=S[\Lambda](\varphi)+S[\Lambda](\varphi^\perp),$$ and therefore the functional integral also factorizes: $$S[\Lambda'](\varphi)=S[\Lambda](\varphi)+\frac{\hbar}{i}\log \bigg(\int_{\varphi^\perp \in C^\infty_{[\Lambda',\Lambda)}}d\mu^\perp \exp(iS[\Lambda](\varphi^\perp)/\hbar)\bigg),$$ and therefore the action gains at most a constant offset, which nonetheless leaves the overall theory invariant. Therefore massive Klein Gordon theory, and, as it seems, any other free theory which does not couple low- and high-energy degrees of freedom, must be conformally invariant, that is, invariant under Wilson's renormalization semigroup. This statement, however, seems to contradict the literature, so I guess I am looking for the source of my misunderstanding. Answer: The source of your misunderstanding is that you are using the "wrong" RG. I explained this at length in my answer to Wilsonian definition of renormalizability. You are using what I called the nonautonomous version of the RG, whereas to see fixed points etc. it's better to work with the autonomous version, which involves integration over fast modes followed by a rescaling that restores the original UV cutoff. This is most intuitive in the lattice block spin approach where you have a random field on $\mathbb{Z}^d$, then you make a new field of block averages where the blocks have linear size say $L$. You need to rescale, i.e., shrink your new lattice ($(L\mathbb{Z})^d$) by a factor of $L$ so the RG becomes an evolution over the fixed space of unit lattice theories. Unless your RG is a "time"-independent dynamical system on a fixed space, it is hard to talk about a strict notion of fixed points. If you do this change of coordinates to the autonomous setting you will see that the mass term will destroy the fixed point property of the massless Gaussian.
Namely, the mass will grow according to $m^2\rightarrow L^2 m^2$ at each RG iteration. Fixed points only correspond to $m^2=0$ (or $m^2=\infty$).
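A toy iteration (numbers invented for illustration) of the scaling quoted above makes the fixed-point statement concrete: under the autonomous RG step the dimensionless mass runs as $m^2 \to L^2 m^2$, so $m^2 = 0$ is preserved while any nonzero mass flows away from the massless Gaussian fixed point.

```python
L = 2.0                          # block / rescaling factor

def rg_flow(m2, steps):
    # Iterate the relevant-direction scaling m^2 -> L^2 m^2.
    traj = [m2]
    for _ in range(steps):
        m2 *= L**2
        traj.append(m2)
    return traj

assert rg_flow(0.0, 20)[-1] == 0.0   # the massless theory is a fixed point
grown = rg_flow(1e-8, 20)
assert grown[-1] > 1.0               # a tiny mass is a relevant perturbation and grows
```

This is exactly why the nonautonomous picture in the question hides the flow: without the rescaling step, the growth of the mass relative to the cutoff is invisible.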
{ "domain": "physics.stackexchange", "id": 45724, "tags": "quantum-field-theory, lagrangian-formalism, renormalization, action, regularization" }
Building a robotic clamp
Question: If I had a single stepper motor, how could I use it to create a robotic clamp that could simply grab hold of something like a plank of wood and release it? Are there any standard parts that I could use for this? I'm having trouble finding out what the names of the parts would be. Answer: Look for "parallel jaw gripper kit." Ignoring the pneumatic designs, you will find a typical parallel jaw gripper has a set of 1:1 gears which ensure the two jaws travel at equal speeds in opposite directions, connected to parallelogram 4-bar linkages which keep the clamping jaws parallel to each other. You can find them with or without the actuator.
{ "domain": "robotics.stackexchange", "id": 1010, "tags": "robotic-arm" }
How to obtain the four-velocity of a fluid from the metric?
Question: Let's say I'm working with a metric tensor for some spacetime with components $g_{\alpha \beta}$ relative to some coordinates $(\tau, x, y, z)$. Is there a general way of obtaining the four-velocity of the fluid $u^{\alpha}$? I'm not sure how to go about doing this. EDIT: if this is not possible, would it help if the spacetime is a vacuum spacetime so that the stress-energy tensor is zero? Answer: Use $g_{\alpha\beta}$ to compute $R_{\alpha\beta\mu\nu}$. Use $R_{\alpha\beta\mu\nu}$ to compute $R_{\alpha\beta}$ and $R$. Use $R_{\alpha\beta}$ and $R$ to compute $G_{\alpha\beta}$. Use $G_{\alpha\beta}$ to compute $T_{\alpha\beta}$. Use an unknown $\rho$ and $p$ to express the stress tensor in terms of $U_0,$ $U_1,$ $U_2,$ $U_3,$ $\rho,$ $p,$ and the metric. You have six unknowns; the trace $T=T^\alpha{}_{\alpha}$ tells you how $\rho$ and $p$ are related. So write everything in terms of $U_0,$ $U_1,$ $U_2,$ $U_3,$ $\rho,$ $T,$ and the metric. So only five unknowns left. You can look at $T_{00}$ to learn $U_{0}$ in terms of $T_{00},$ $\rho,$ $T,$ and the metric. You can look at $T_{11}$ to learn $U_{1}$ in terms of $T_{11},$ $\rho,$ $T,$ and the metric. You can look at $T_{22}$ to learn $U_{2}$ in terms of $T_{22},$ $\rho,$ $T,$ and the metric. You can look at $T_{33}$ to learn $U_{3}$ in terms of $T_{33},$ $\rho,$ $T,$ and the metric. So now, since we know $T$ and the metric, there is really only one unknown, $\rho$. Now. If $\rho$ and $T$ are both zero, it is all hopeless since the fluid has no gravitational effects (it has no energy and no pressure), so it could have any four-velocity whatsoever and you wouldn't know it. If $\rho$ is nonzero then you can consider that it was $\rho$ and $p/\rho$ that were the initial unknowns you didn't care about. And $\rho$ just affects the overall size of the stress energy tensor, so when it gets bigger (for fixed $p/\rho$) every component of the stress energy tensor just gets bigger by the same factor.
So you can't always find a four-velocity from the metric, but hopefully you can find it in your situation. Don't forget that the four-velocity is a unit vector, so you can (using the metric) get an equation relating the components to each other, and hence get another equation involving the density $\rho.$
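As a supplementary check (this perfect-fluid eigenvector trick is an addition, not part of the recipe above): once the mixed components $T^{\mu}{}_{\nu}=(\rho+p)u^{\mu}u_{\nu}+p\,\delta^{\mu}{}_{\nu}$ are known, $u^{\mu}$ is the timelike eigenvector of $T^{\mu}{}_{\nu}$ with eigenvalue $-\rho$ in the $(-,+,+,+)$ signature. A flat-spacetime sketch with arbitrary illustrative numbers:

```python
import math

# Minkowski metric, signature (-,+,+,+); rho, p, and the boost are arbitrary.
eta = [[-1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
rho, p = 2.0, 0.5

v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v * v)
u = [gamma, gamma * v, 0.0, 0.0]                      # u^mu, normalized so u.u = -1
u_low = [sum(eta[a][b] * u[b] for b in range(4)) for a in range(4)]  # u_mu

# Mixed stress tensor T^mu_nu = (rho + p) u^mu u_nu + p delta^mu_nu
T = [[(rho + p) * u[a] * u_low[b] + (p if a == b else 0.0)
      for b in range(4)] for a in range(4)]

# u^mu is an eigenvector of T^mu_nu with eigenvalue -rho:
Tu = [sum(T[a][b] * u[b] for b in range(4)) for a in range(4)]
assert all(abs(Tu[a] + rho * u[a]) < 1e-9 for a in range(4))

# and the trace fixes the rho-p combination: T^mu_mu = -rho + 3p
trace = sum(T[a][a] for a in range(4))
assert abs(trace - (-rho + 3.0 * p)) < 1e-9
```

The eigenvalue relation is why the component-by-component extraction in the answer works: each $T_{\mu\mu}$ equation is one projection of the same eigenvector condition.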
{ "domain": "physics.stackexchange", "id": 26950, "tags": "general-relativity, metric-tensor" }
Moving the player across an ASCII art "world"
Question: Wow, this one is definitely going to need some improvement. So, just for fun, I decided to make a program where the player moves across a 2-dimensional ASCII-art map. If the player types something such as down, the on screen representation moves down and "loads" a bit more of the map. from console import clear class Move(object): up_down = 1 right_left = 1 class Actions(object): action = { "down": 1, "up": -1, "left": -1, "right": 1 } def set_move_values(action): if action == "down": Move.up_down += Actions.action["down"] elif action == "up": Move.up_down += Actions.action["up"] elif action == "left": Move.right_left += Actions.action["left"] elif action == "right": Move.right_left += Actions.action["right"] def render_map(x, y): for _ in range(y): print(''.join(['.' for _ in range(x + 4)])) print(''.join(['.' for _ in range(x + 2)])), '@', print(''.join(['.' for _ in range(1)])) for _ in range(1): print(''.join(['.' for _ in range(x + 4)])) def run_program(): while True: render_map(Move.right_left, Move.up_down) set_move_values(raw_input('> ')) clear() if __name__ == "__main__": run_program() All I'm really looking for is, what general stuff can I improve? Also, if possible, I'd like something like right 5 to work too. Note: This is Python 2.7, I just like Python 3's print function better. Also, console is an iOS module, as I wrote this script on iOS. Answer: A few comments: You already did a good job with PEP8. The only error I get back is that it expects two blank lines between functions. Regarding PEP257, please add some docstrings and comments to make the code's goal clear to the potential reader. I find the Move class confusing. The up_down and right_left class attributes seem to be coordinates. I'd prefer to have a class named, for example, Player with x and y instance attributes.
Note the difference between class and instance attributes if you want to have multiple players in the future. The translation between actions and movements isn't very effective because you need to hardcode all four cases and use up/down/left/right multiple times. If iteration is all that is needed, I recommend using xrange instead of range. If just one iteration is needed (range(1)), just drop the loop for better readability. Use the multiplication operator to get long strings of the same character instead of loops ('.' * n). The code I'd have written would be more or less as follows (more comments below, console library not used): """ASCII world game.""" class Player(object): """Player object with some actions (just move for now).""" def __init__(self): """Initialize coordinates.""" self.x = 1 self.y = 1 def move(self, dx, dy): """Move player to some position. :param dx: How much to move in the x axis :type dx: int :param dy: How much to move in the y axis :type dy: int """ # Ignore invalid movements if (self.x + dx) < 1 or (self.y + dy) < 1: return self.x += dx self.y += dy class GameMap(object): """GameMap that shows where the player is.""" X_OFFSET = 4 def render(self, player): """Render map and player position.""" def render_empty_row(): """Render row in which there is no player.""" print('.' * (player.x + self.X_OFFSET)) for _ in xrange(player.y): render_empty_row() print('{} @ {}' .format('.' * (player.x - 1), '.' * (self.X_OFFSET / 2))) render_empty_row() class Command(object): """Command reader from stdin.""" # Map text command to player movement CMD_TO_MOVE = { 'u': (0, -1), 'up': (0, -1), 'd': (0, 1), 'down': (0, 1), 'l': (-1, 0), 'left': (-1, 0), 'r': (1, 0), 'right': (1, 0), } # Default value for unknown command NO_MOVE = (0, 0) def read(self): """Read command (up, down, left, right) from stdin. The command is converted to a movement for the player using the same coordinate system.
:returns: Player movement :rtype: tuple(int, int) """ command_str = raw_input('> ') return self.CMD_TO_MOVE.get(command_str, self.NO_MOVE) class Game(object): """Game object to keep state and read command for next turn.""" def __init__(self): """Get all the objects needed to run the game.""" self.player = Player() self.game_map = GameMap() self.command = Command() def run(self): """Render map and read next command from stdin.""" while True: self.game_map.render(self.player) self.player.move(*self.command.read()) if __name__ == "__main__": game = Game() game.run() What I tried to show: Object-oriented design: use the language of the problem space to define a class for each type of object in the problem. In general, when you think about the problem, if it's a name it should be a class, if it's a verb it should be a method. Use docstrings for every class and method (use sphinx format for arguments and return values if possible). Make a distinction between what is a command (word) and what is a movement (delta in the coordinate system). Use constants instead of hardcoded values (X_OFFSET). Note that not everything should be a class. The command reader could be a function, but I preferred to use a class in this case for consistency. I hope this helps.
{ "domain": "codereview.stackexchange", "id": 8648, "tags": "python, game, python-2.x" }
Interpreting quantitative outputs from maximum likelihood phylogenetic trees
Question: I ran a calculation in RAxML to determine the majority consensus phylogeny of a maximum likelihood bootstrap (How to show bootstrap values on a phylogenetic tree constructed with RAxML), and I got three output files: RAxML_bipartitions.output_bootstrap.tre RAxML_bipartitionsBranchLabels.output_bootstrap.tre RAxML_info.output_bootstrap.tre What is the difference between the RAxML_bipartitions.output_bootstrap.tre file and RAxML_bipartitionsBranchLabels.output_bootstrap.tre? Answer: In summary, RAxML_bipartitions.output_bootstrap.tre is the only file of interest. The reason this is true in this context is really complicated, and you have to understand the statistics of likelihood and how they are interpreted within phylogeny to understand why. This file is simply the final output of a non-parametric bootstrap analysis performed by maximum likelihood. What on earth is a non-parametric bootstrap? A non-parametric bootstrap is resampling each alignment position with replacement. Thus if we have alignment positions 1,2,3,4,5 A bootstrap resample for 2 replicates might be, Replicate 1 1,1,3,5,2 Replicate 2 4,2,5,2,1 The ML algorithm will make trees of replicates 1 and 2 and find the consensus between them. If you think about it in any other context a bootstrap replicate is pretty meaningless because it no longer reflects the true biological sequence. Thus details of how the consensus was derived are not really of interest to us, providing we are confident this has been done correctly, viz. RAxML_info.output_bootstrap.tre and RAxML_bipartitionsBranchLabels.output_bootstrap.tre. So why is this output of limited use? There are situations where this information is useful to some investigators, but to assess the robustness of a tree topology it's not needed. The only thing we want is a phylogram (bestTree) with the bootstrap values superimposed on it.
We really don't need complicated stuff such as the tree to be represented for example as a polytomy (non-bifurcating tree), because we can just read the bootstraps to make that deduction (values >> 75%). In addition, there is no perfect consensus on what bootstrap value constitutes robustness, but generally most agree >80% is robust. What output files have useful information in them? The important information is in the files associated with "bestTree", i.e. the single maximum likelihood tree computed on the intact native sequence. The "info" file for this contains 3 really important parameters: -lnL ... very important!! Gamma distribution parameter "alpha", PINVAR, proportion of invariant sites. -lnL is the highest log-likelihood (probability) of the phylogeny. It is usually a very small number, over which there is an enormous amount of theory. Alpha parameter of the gamma distribution: this is the shape parameter of the mutation rate; if it is very low (<1) the distribution of mutations across the alignment is very tightly clustered and approximates to a negative binomial distribution. This means some sites don't mutate at all and a small number of sites mutate a lot. If it is very large >200 (which is never observed) it approximates to the Poisson distribution, meaning the mutation distribution is randomised across the alignment. PINVAR: this is a straight percentage/frequency and simply means the fraction of sites that don't mutate. How are they calculated? PINVAR and alpha are not empirically calculated, i.e. if you look at an alignment and say 'no mutations at that position', PINVAR would of course agree but may consider other sites invariant depending on the phylogeny. These parameters are calculated by maximum likelihood, and you can begin to see why the calculation takes so long ... alpha and PINVAR affect the tree topology (which affects -lnL), but the topology affects alpha and PINVAR. Thus, it is a multidimensional search of tree and parameter space.
So what stuff do I report in my Results? Reporting -lnL is good technique and shows the reader you've done maximum likelihood; citing PINVAR and alpha from the gamma distribution helps ('Methods': parameters were calculated reiteratively under maximum likelihood). This is only useful for bestTree. The -lnL, PINVAR and gamma's alpha are also calculated for every single bootstrap replicate, but these values are of limited use, because we have resampled the data; only the consensus tree counts... Obviously presenting the bootstrapped phylogram is extremely important. Welcome to the technical world of phylogeny! The amino acid matrix you used, BTW .. LG is in vogue right now. How do I do it? When I do this stuff it's via Biopython and ETE3; I capture the values within the pipeline and don't examine the output files of RAxML because I generate my own.
{ "domain": "bioinformatics.stackexchange", "id": 1201, "tags": "phylogenetics, phylogeny, software-usage" }
Finding the most frequent element assuming $\Theta(n)$ frequency
Question: We know [Ben-Or 1983] that deciding whether all elements in an array are distinct requires $\Theta(n \log(n))$ time; and this problem reduces to finding the most frequent element, so it takes $\Theta(n \log(n))$ time to find the most frequent element (assuming the domain of the array elements is not small). But what happens when you know that there's an element with frequency at least $\alpha \cdot n$? Can you then decide the problem, or determine what the element is, in linear time (in $n$, not necessarily in $1/\alpha$) and deterministically? Answer: Here is an algorithm for all $0<\alpha\leq 1$. I'm assuming your data can be ordered and that comparing two elements is done in constant time. Run a few levels of the quick-sort recursion (choosing the pivot optimally in linear time with the Median of Medians algorithm) until you have partitioned the elements into "buckets" $B_1,\ldots, B_m$ each of size $\frac{\alpha n}{4} \leq |B_i| \leq \frac{\alpha n}{2}$, where all elements in $B_i$ are smaller or equal to all elements in $B_{i+1}$. This will take $O(n\log(1/\alpha))$ time. Now notice that because the relative majority element $e$ is present at least $\alpha n$ times and each bucket has at most $\frac{\alpha n}{2}$ elements, the majority element needs to fill at least one of the buckets completely. Thus $e$ is also the first element in some bucket. Notice also that there are at most $4/\alpha$ buckets as each bucket contains at least $\frac{\alpha n}{4}$ elements. Thus you can pick the first element in each bucket, and choose the element with maximum frequency among those in $O(n/\alpha)$ time. Thus, you can find that relative majority element $e$ in $O(n\log(1/\alpha) + n/\alpha) = O(n/\alpha)$ time.
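A minimal sketch of the candidate-selection step, under one simplifying assumption: the buckets here are formed by fully sorting, so this illustration runs in O(n log n) rather than the O(n/α) obtained by truncating the quick-sort recursion with median-of-medians pivots. The key invariant is the same: the frequent element must completely fill some bucket, hence be the first element of at least one bucket.

```python
from collections import Counter

def frequent_element(arr, alpha):
    """Return an element occurring at least alpha * len(arr) times.

    Buckets are formed by full sorting (illustration only); each bucket
    holds at most alpha*n/2 elements, so the frequent element, spanning
    a contiguous run of >= alpha*n sorted positions, starts some bucket.
    """
    n = len(arr)
    data = sorted(arr)
    bucket = max(1, int(alpha * n / 2))
    # first element of every bucket is a candidate
    candidates = {data[i] for i in range(0, n, bucket)}
    counts = Counter(arr)  # O(n) frequency table
    return max(candidates, key=lambda c: counts[c])

print(frequent_element([7, 1, 7, 3, 7, 2, 7, 5, 7, 7, 4, 7], 0.5))
```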
{ "domain": "cs.stackexchange", "id": 16905, "tags": "algorithms, arrays, space-complexity, linear-complexity" }
Does the carbocation rearrangement in a SN1 reaction specify a change in degree?
Question: I am asked to determine whether the following process is likely to involve a carbocation rearrangement or not: I think it is possible through the rearrangement given above (though the rearrangement cannot become a secondary or primary carbocation). Can someone please clarify whether the carbocation rearrangement specifies a change in degree? Answer: The most common carbocation rearrangements involve [1,2] hydride or alkyl shifts. As a typical result, a secondary carbocation is converted to a more stabilized tertiary carbocation. In the course of these reactions, the migrating group and the positive charge change places; they move in opposite directions. This is not the case in your example. I fail to see how your transformation can be described as a cationic rearrangement!
{ "domain": "chemistry.stackexchange", "id": 3434, "tags": "organic-chemistry, carbocation" }
Reusing UMAP transformation
Question: I have this use case: I want to apply dimension reduction with UMAP to an initial dataset of high-dimensional vectors (100d), and later have the opportunity to add new data points from the original space (so 100d vectors) that will be transformed according to the first transformation (i.e. without recalculating the UMAP vectors for the first N vectors). Is it possible? Does UMAP have a transformation matrix (or similar)? Answer: UMAP has a transform method that can achieve this. Note that this is a somewhat expensive computation, and not almost instant as it can be with, for example, PCA. See the UMAP documentation for more details. If you need a very fast transform method for new data, then I would encourage you to look at Parametric UMAP, which uses a neural network to learn a direct mapping from input space to the embedding space and provides very efficient transforms.
{ "domain": "datascience.stackexchange", "id": 10356, "tags": "dimensionality-reduction" }
Rebase values in an array to match the chart scale
Question: I have 2 long[24] arrays. Both of them contain values for each hour of a day. One of them has hit counts, the other queue times. Now queue times in ms are 5 digit numbers, counts are 2 digit numbers. I want to show them in one chart, because queue times correspond to some degree with hits, but the scale is totally different. So what I came up with is that I take the max value from one list and recalculate all the values from the second array using that maximum. var count = new long[24]; // values similar to [0, 9, 25, 65 ..] var waitings = new long[24]; // values similar to [34111, 65321, 5003 ..] count = NormalizeDataset(count, waitings.Max()); // ... skipped @functions { private long[] NormalizeDataset(long[] count, long max) { long countMax = count.Max(); var returnable = new long[24]; for (int i = 0; i < count.Length; i++) { var l = (count[i] * max) / countMax; returnable[i] = l; } return returnable; } } Does doing it like this make sense? How is this problem usually solved? Answer: Even though there is not much code to review, a few remarks: returnable has a hard-coded length. Should your input array ever need to be larger, you'd have to remember to make this larger as well. A simple var returnable = new long[count.Length]; would eliminate any need to worry about this ever again. Given the current data constraints it's not really a problem, but multiplying by max first could result in an overflow. It might pay off to store the scaling factor as double instead (in which case it can also be pre-calculated). Using LINQ would make the code a bit more concise. Refactored method: private long[] NormalizeDataset(long[] count, long max) { double scale = max / (double)count.Max(); return count.Select(x => (long)(x * scale)).ToArray(); } Given that scale is probably around the order of 100 and that the scaling is more or less an arbitrary choice, you could just not use double for the scale.
{ "domain": "codereview.stackexchange", "id": 12023, "tags": "c#, array" }
ROS Answers SE migration: vslam future
Question: Hi! As I could see in SVN, vslam's latest edit was made 5 months ago. Are there any plans to continue development of this package? Thanks! Originally posted by noonv on ROS Answers with karma: 471 on 2011-11-17 Post score: 1 Answer: No. The red-boxed warning on the wikipage is a pretty good clue. Of course, patches are accepted, if you'd like to continue development. VSLAM has always seen a lot of interest; it just turns out that getting it to work reliably, and in general, is extremely difficult. Originally posted by Mac with karma: 4119 on 2011-11-17 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 7334, "tags": "ros, vslam" }
Employee database
Question: To keep in practice with good techniques of Java programming, I've decided to write a database. All it does is store employees, allow logged-in users to get/set employees, and provide a login mechanism that prevents any methods from being used if you are not logged in. I'm looking for feedback in the following areas: Structure Is the overall structure of the program good? Are there any improvements that can be made to make the structure of the program more efficient/compact? Login Mechanism Is the way I implemented this system okay? Is there a better way I can implement this login system? Exceptions I created two custom exceptions for this program. Is the way I coded these exceptions acceptable? Should I have had a CustomException class that they could inherit from, so I can just have the message in printErrorMessage as a parameter/hard coded? I used Eclipse while writing these exceptions, so the serialVersionUID is auto-generated. Efficiency/Compactness Is there any way this program can be made more efficient? Such as the getHighestSalary method in the Database class. Any and all suggestions are invited and appreciated. Style and other neatness tips are encouraged as well.
Database.java package database; import java.util.ArrayList; import java.util.Collections; import java.util.List; public class Database { private ArrayList<Employee> employees; private final String username; private final String password; private boolean loggedIn; public Database() { this.employees = new ArrayList<Employee>(); this.username = generateUsername(); this.password = generatePassword(); this.loggedIn = false; populateEmployees(); } public void addEmployee(Employee employee) throws LoginException { if(loggedIn) { this.employees.add(employee); sortEmployees(); return; } throw new LoginException("Not Logged In!"); } public Employee getHighestSalary() throws LoginException { if(loggedIn) { Employee highest = this.employees.get(0); for(Employee employee : this.employees) { if(employee.getSalary() > highest.getSalary()) { highest = employee; } } return highest; } throw new LoginException("Not Logged In!"); } @SuppressWarnings("unchecked") public void sortEmployees() throws LoginException { if(loggedIn) { Collections.sort((List)this.employees); return; } throw new LoginException("Not Logged In!"); } public Employee getEmployee(String name) throws EmployeeNotFoundException, LoginException { if(loggedIn) { for(Employee employee : this.employees) { if(employee.getName().equals(name)) { return employee; } } throw new EmployeeNotFoundException("Employee Not Found!"); } throw new LoginException("Not Logged In!"); } //Filler for tester class private void populateEmployees() { for(int i = 0; i < 10; i++) { this.employees.add(new Employee("Employee" + i)); } } public void login(String username, String password) { if(this.username.equals(username) && this.password.equals(password)) { this.loggedIn = true; } } public ArrayList<Employee> getEmployees() throws LoginException { if(loggedIn) { return this.employees; } throw new LoginException("Not Logged In!"); } //Used for testing private String generateUsername() { return "username123"; } //Used for testing private String 
generatePassword() { return "password123"; } } Employee.java package database; public class Employee { private final String name; private int age; private int salary; public Employee(String name) { this.name = name; } public Employee(String name, int age) { this.name = name; this.age = age; } public Employee(String name, int age, int salary) { this.name = name; this.age = age; this.salary = salary; } public String getName() { return this.name; } public int getAge() { return this.age; } public int getSalary() { return this.salary; } public String toString() { return "Name: " + this.name; } } LoginException.java package database; public class LoginException extends Exception { private static final long serialVersionUID = 1L; public LoginException(String message) { super(message); } public void printErrorMessage() { System.out.println("LoginException: Not logged in!"); } } EmployeeNotFoundException.java package database; public class EmployeeNotFoundException extends Exception { private static final long serialVersionUID = 1L; public EmployeeNotFoundException(String message) { super(message); } public void printErrorMessage() { System.out.println("EmployeNotFoundException: The employee you are searching for could not be found!"); } } Tester.java package database; public class Tester { @SuppressWarnings("unused") public static void main(String[] args) { Database database = new Database(); database.login("username1234", "password123"); //Should throw `LoginException` try { for(Employee employee : database.getEmployees()) { System.out.println(employee); } } catch (LoginException e) { e.printErrorMessage(); } //Should throw `EmployeeNotFoundException` try { Employee test = database.getEmployee("Ben"); } catch (EmployeeNotFoundException e) { e.printErrorMessage(); } catch (LoginException e) { e.printErrorMessage(); } } } Answer: Structure Database does not adhere to the concept of Single Responsibility. Ignoring the test code, there are some considerations to make. 
I would only keep the authentication methods and perhaps add DBMS methods as transaction functionality in this class. All ORM-related methods should be put in separate Repository classes. public class EmployeeRepository extends Repository<Employee> { public void save(Employee employee) { /* .. */ } public void delete(int employeeId) { /* .. */ } public Employee get(int employeeId) { /* .. */ } // and so on .. } Employee is an entity, but does not have a primary key. I would add an int employeeId;. Some fields are final, others aren't, but only getters are available. You are inconsistent. Always override equals and hashCode for entities, in order to distinguish them from other instances. Login Mechanism You check a plain password against a stored plain password. This is as bad a practice as I can think of. Fortunately for you, many companies wouldn't even mind this flow ;-) You are much better off hashing the password, using a salt, perhaps some pepper, and key stretching. And please don't try to implement this yourself; use existing APIs. Exceptions Your *Exception classes provide utility methods that write to the console. I can understand why you add them for a trivial example like this, but they wouldn't make sense otherwise. There already is the message. I don't understand why you don't print the message, but a hardcoded string instead. public void printErrorMessage() { System.out.println("LoginException: Not logged in!"); } public void printErrorMessage() { System.out.println(getMessage()); } I used Eclipse while writing these exceptions, so the serialVersionUID is auto-generated. If your IDE generates code for you, do you want to know why this was done? Efficiency/Compactness Your loops can be refactored. Let's take one example.
Employee highest = this.employees.get(0); for(Employee employee : this.employees) { if(employee.getSalary() > highest.getSalary()) { highest = employee; } } Employee highest = Collections.max(employees, Comparator.comparing(c -> c.getSalary())); How efficient is it to sort entities on the repository? This is something calling code should bother with, not the repository. public void sortEmployees() { /* .. */ }
{ "domain": "codereview.stackexchange", "id": 34959, "tags": "java, reinventing-the-wheel, database, authentication, exception" }
What does "hit or miss" in morphology do?
Question: I searched for hit-or-miss on the internet and in Gonzalez's book, but there is just a formula and an example. In that example the hit-or-miss just finds a pixel, with no further explanation. I want to know, conceptually, what the use and purpose of the hit-or-miss transform is. Answer: The Hit-or-Miss transform, as the name suggests, uses 2 structuring elements (SE) to identify structures which are specific to the foreground (first SE) and background (second SE). Here, you have a good example for corner detection. The only difference is that the two SE are merged into 1: the 1s are part of the foreground SE, in order to determine a specific shape in the pattern, and the 0s are part of the background SE. Consequently, the algorithm hits only if all the conditions on the 1s and 0s are respected; otherwise it's a miss. This principle is mainly used for skeleton computation, but it can be extended to build all the basic mathematical morphology operations. Here are more explanations: - A complete video of a class. - Many more examples in this complete lesson. - StackOverflow
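As a concrete illustration, SciPy's ndimage module exposes this transform directly; here the two structuring elements are chosen to hit isolated foreground pixels (the example image is my own, not from the original post):

```python
import numpy as np
from scipy import ndimage

# Binary image: one isolated pixel at (1, 1) and a 2x2 blob.
img = np.array([[0, 0, 0, 0, 0],
                [0, 1, 0, 0, 0],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 0, 0]], dtype=bool)

# First SE: conditions on the foreground (the pixel itself must be 1).
hit = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]], dtype=bool)
# Second SE: conditions on the background (all 8 neighbours must be 0).
miss = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]], dtype=bool)

# A pixel "hits" only where BOTH sets of conditions are respected,
# i.e. only at isolated foreground pixels; the blob pixels all miss.
isolated = ndimage.binary_hit_or_miss(img, structure1=hit, structure2=miss)
print(np.argwhere(isolated))
```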
{ "domain": "dsp.stackexchange", "id": 3595, "tags": "image-processing, morphology" }
In the case of multiple sclerosis (MS), can too high an osmotic pressure in the nerve lead to a high intracellular concentration of potassium?
Question: In the case of multiple sclerosis (MS), can too high an osmotic pressure in the nerve lead to a high intracellular concentration of potassium, and also to a 'pumping up' of nerve cells, whereby, due to the strongly increased internal pressure, the insulating myelin sheath is pressed until it breaks, resulting in a lesion or a strongly reduced transmission of stimuli? If that were to happen, one could expect cytokines in the cerebrospinal fluid, which are normally also a component of the nerve cell contents, and this could wrongly be interpreted as an inflammatory reaction to a potential autoimmune disease. Is there research on this? Answer: Firstly, demyelination has got nothing to do with membrane integrity of the neuron. Demyelination causes the current to "leak" through the ion-channels, thus causing loss of signal. There is an old article which reports a study on membrane integrity of RBCs of MS patients, but there seems to be no connection. It has been proven that neurons do secrete some cytokines such as IFN-gamma, but the signaling is mostly para-/autocrine and I don't think the cytokines will accumulate in the CSF. MS is identified by the presence of anti-myelin antibodies, which might be the exact biomarker for the disease. A general neuroinflammation is not concluded to be MS. So, to answer your final question: Is there research on this? There is none that exactly talks about neuron membrane dynamics in MS. The most parsimonious assumption is that there is no effect, but any hypothesis has to be proven. It might be experimentally difficult to address this question, and how to design the experiment for this study would be an interesting question.
{ "domain": "biology.stackexchange", "id": 937, "tags": "cell-biology, cell-membrane" }
Merging predicted values into measured values
Question: I created the following function below to merge the real values with the predicted values (when real are absent) in a new column in data.frame. The function actually works, but I would like to optimize it, because with the dataset that I work with, the function takes about two hours to run. The function seems to be slowed down by the loop. p <- function(object, newdata = NULL, type = c("link", "response", "terms"), rse.fit = FALSE, dispersion = NULL, terms = NULL, na.action = na.pass, ...) { { pred <- predict (object,newdata) } vetor1 <- (newdata$ALT) # Creates a column vector from the actual heights of the data.frame vetor1[is.na(vetor1)] <- 0 # Replaces the NA's present in the vector created above the numeric value 0 vetor2 <- c(pred) # Creates a vector from the predicted data for(i in 1:length(vetor1)){ # The loop is executed until all values vector1 pass the following condition if(vetor1[i]==0.00){ # If a value of the first vector has the value 0, ie, if it is absent vetor1[i]=vetor2[i] # Then the predicted value will replace the missing value newdata$ALTMISTA <- vetor1 # The vector1, already possessing the actual values and the predicted values merged into the same vector goes on to become a new column in data.frame, this column is called a ALTMISTA } } return (newdata) } Answer: I am assuming that the call to predict is not what is taking most of the time, so I'm not including that in my analysis. One thing that is slowing down your code is that you are assigning vetor1 to newdata$ALTMISTA every iteration of the loop (well, every iteration that is 0). That could be pulled out of the loop since it only needs to be done once. 
vetor1 <- (newdata$ALT) # Creates a column vector from the actual heights of the data.frame vetor1[is.na(vetor1)] <- 0 # Replaces the NA's present in the vector created above the numeric value 0 vetor2 <- c(pred) # Creates a vector from the predicted data for(i in 1:length(vetor1)){ # The loop is executed until all values vector1 pass the following condition if(vetor1[i]==0.00){ # If a value of the first vector has the value 0, ie, if it is absent vetor1[i]=vetor2[i] # Then the predicted value will replace the missing value } } newdata$ALTMISTA <- vetor1 # The vector1, already possessing the actual values and the predicted values merged into the same vector goes on to become a new column in data.frame, this column is called a ALTMISTA return (newdata) Second, you replace NA with 0, and then test against 0. That means you are replacing both NA's and any 0's in the data. If you only want to replace NA's, just replace those without recoding them. vetor1 <- (newdata$ALT) # Creates a column vector from the actual heights of the data.frame vetor2 <- c(pred) # Creates a vector from the predicted data for(i in 1:length(vetor1)){ # The loop is executed until all values vector1 pass the following condition if(is.na(vetor1[i])){ # If a value of the first vector is NA, ie, if it is absent vetor1[i]=vetor2[i] # Then the predicted value will replace the missing value } } newdata$ALTMISTA <- vetor1 # The vector1, already possessing the actual values and the predicted values merged into the same vector goes on to become a new column in data.frame, this column is called a ALTMISTA return (newdata) Now the explicit loop can be replaced by vectorized functions, in this case ifelse vetor1 <- (newdata$ALT) # Creates a column vector from the actual heights of the data.frame vetor2 <- c(pred) # Creates a vector from the predicted data vetor1 <- ifelse(is.na(vetor1), vetor2, vetor1) # replace elements of vetor1 that are NA with corresponding elements of vetor2 newdata$ALTMISTA <- vetor1 # 
The vector1, already possessing the actual values and the predicted values merged into the same vector goes on to become a new column in data.frame, this column is called a ALTMISTA return (newdata) Finally you can eliminate some single use intermediate variables. This may not appreciably speed up the function, but it does make for cleaner code. vetor1 <- (newdata$ALT) newdata$ALTMISTA <- ifelse(is.na(vetor1), pred, vetor1) return (newdata) As you did not provide example data, I was not able to benchmark these alternatives (nor, for that matter, even run them).
{ "domain": "codereview.stackexchange", "id": 10702, "tags": "performance, r" }
What is the effect of an increase in pressure on latent heat of vaporization?
Question: What is latent heat of vaporization ($L_v$) in the first place? Wikipedia seems to indicate that it is the energy used in overcoming intermolecular interactions, without taking into account at all any work done to push back the atmosphere to allow for an increase in volume when a liquid boils. If that is so, then would it be correct that $L_v$ decreases as boiling point rises, because at the higher boiling point, less energy is required to overcome the weaker intermolecular interactions? Otherwise, would it then be correct to say that $L_v$ increases as boiling point rises, assuming constant-volume? Let's say there is a beaker containing 1 kg of liquid water at the boiling point, and an identical beaker, containing an identical amount of water at the same temperature. However, this second beaker is perfectly sealed such that the volume of its contents, both liquid and gaseous, will not change. Since both liquids are at the boiling point, applying heat should cause boiling to occur. Compared to the first beaker, then, would the second beaker (in theory) require more or less heat for its 1 kg of liquid water to completely boil? Thanks! Answer: The name of the property is itself a clue here : enthalpy of vaporization. By nature, enthalpy does take into account the work required to push against the atmosphere. You can see the impact of increasing the pressure on the enthalpy of vaporization on a Mollier diagram. Increasing the pressure has the overall effect of reducing the enthalpy of vaporization, until it becomes zero at the critical point. At this stage, there is no longer a phase change associated with vaporization.
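To make the bookkeeping explicit (a standard first-law decomposition, stated here as an aside rather than taken from the original answer): writing the enthalpy of vaporization as $$\Delta H_{vap} = \Delta U_{vap} + p\,\Delta V_{vap},$$ the $\Delta U_{vap}$ term is the energy spent overcoming intermolecular interactions and $p\,\Delta V_{vap}$ is the work done pushing back the surroundings. At constant pressure the heat supplied equals $\Delta H_{vap}$; at constant volume no expansion work is done, so the heat supplied equals $\Delta U_{vap}$ alone (ignoring the pressure rise as vapour accumulates in the sealed beaker, which would itself shift the boiling conditions).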
{ "domain": "physics.stackexchange", "id": 84752, "tags": "thermodynamics, temperature, pressure" }
premiss of reduction rule (abst) of pure type systems
Question: $$(abst) \:\frac{\Gamma, x: t_1 \vdash t_2: t_3 \quad \Gamma \vdash (x: t_1) \to t_3: s}{\Gamma \vdash (x: t_1. t_2): (x: t_1) \to t_3}$$ In this rule, why is $(x: t_1) \to t_3$ required to be an inhabitant of some sort? Isn't assuming $t_1: s$ enough? Answer: There are two issues here. The first most obvious one, is that if there is no rule $(s_1,s_2,s)$ in the pure type system, then there is no way to get from the hypotheses $$ \Gamma \vdash t_1:s_1$$ and $$ \Gamma, x:t_1\vdash t_3:s_2$$ to the conclusion $$ \Gamma\vdash (x:t_1)\rightarrow t_3 : s$$ You can see that if there is no such rule, then your hypothesis is insufficient for forming the product type, and therefore you shouldn't be able to form terms of that type. Note that in the Calculus of Constructions, any $s_1, s_2$ has a corresponding $s$ ($=s_2$), so the premisses are sufficient for the conclusion in all cases. Another, more subtle issue is that even the fact $\Gamma, x:t_1\vdash t_3:s_2$ for some $s_2$ is not obvious from the fact $$\Gamma, x:t_1\vdash t_2:t_3 $$ While actually true, this fact can be subtle to prove depending on how you set up your rules. The rule you give minimizes the hassle, as it gives you directly the hypothesis that the product is well-formed without having to prove it after the fact.
{ "domain": "cs.stackexchange", "id": 6147, "tags": "lambda-calculus, type-theory, semantics" }
What happens with the Fock state after the Schwinger boson transformation?
Question: Let us imagine that initially, I had the following Hamiltonian. $$ \hat{H} = \frac{\alpha}{4} (a^\dagger b - b^\dagger a )^2 + \frac{\beta}{2}(a^\dagger a - b^\dagger b), \quad [a,a^\dagger] = 1, \quad [b, b^\dagger] = 1, $$ where $\alpha$ and $\beta$ are some real constants. And the initial state was $|\psi_0 \rangle = | n,0 \rangle = \frac{(a^\dagger)^n}{\sqrt{n!}}|0,0\rangle $. After applying the Schwinger transformation ($L_z \rightarrow \frac{1}{2}(a^\dagger a - b^\dagger b)$, $L_{+} \rightarrow a^\dagger b$, $L_{-} \rightarrow a b^\dagger$) the Hamiltonian transforms into $$ \hat{H} = \alpha L_x^2 + \beta L_z, $$ where $L_{x} = \frac{1}{2}\left( L_{+} + L_{-} \right)$. What happens to the initial state under this transform? Answer: I assume you're looking for the components of the state in the angular momentum representation? The state itself is technically the same, but you want it in a different basis. We need to think about the actions of $L^2$ and $L_z$ in the Fock space. Consider $$L_z(a^{\dagger})^n |0\rangle = \frac{n}{2} (a^{\dagger})^n |0\rangle$$ $L_z=\frac{1}{2}(N_a-N_b)$ is just two number operators, so this is easy to evaluate. $L^2$ takes a bit of work; first, you should define $L_y=\frac{i}{2}(L_--L_+)$.
Doing some operator math, you should get that $$L^2 = L_x^2+L_y^2+L_z^2 = L_z^2-\frac{1}{4}(L_--L_+)^2+\frac{1}{4}(L_-+L_+)^2 = L_z^2+\frac{1}{2}(L_+L_-+L_-L_+) =\frac{1}{4}(a^\dagger a a^\dagger a+b^\dagger b b^\dagger b-2a^\dagger a b^\dagger b + 2a^\dagger b a b^\dagger+2a b^\dagger a^\dagger b ) =\frac{1}{4}(N_a^2+N_b^2+2N_aN_b+2N_a+2N_b )=\frac{1}{4}(N_a+N_b)(N_a+N_b+2)$$ In particular, $$L^2(a^{\dagger})^n |0\rangle = \left(\frac{n}{2}\right)\left(\frac{n}{2}+1\right) (a^{\dagger})^n |0\rangle$$ This is really nice, since this tells you that $(a^{\dagger})^n |0\rangle$ is an eigenstate of both $L_z$ and $L^2$ with quantum numbers $m_l=\frac{n}{2}$ and $l=\frac{n}{2}$, or just $$\left|n,0\right\rangle \propto (a^{\dagger})^n |0\rangle \propto \left|\frac{n}{2},\frac{n}{2}\right\rangle_l,$$ up to some normalization constant that can be computed using the original state.
{ "domain": "physics.stackexchange", "id": 78143, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, operators, angular-momentum" }
Algebraic construction of $\varepsilon$-biased sets
Question: Let $\ell> 1$ be an integer and consider the mapping $\text{Tr}:\mathbb{F}_{2^\ell}\to\mathbb{F}_{2^\ell}$ defined by $$\text{Tr}(x)=x^{2^0}+x^{2^{1}}+\cdots+x^{2^{\ell-1}}$$ It is then possible to show the following $\text{Tr}$ maps $\mathbb{F}_{2^\ell}$ into $\mathbb{F}_2$. If $a\in\mathbb{F}_{2^\ell}$ is non-zero, then the mapping $f_a:\mathbb{F}_{2^\ell}\to\mathbb{F}_{2}$ defined by $f_a(x)=\text{Tr}(a\cdot x)$ is $\mathbb{F}_2$-linear and $\mathbb{E}_{x\sim\mathbb{F}_{2^\ell}}[f(x)]=\frac{1}{2}$. Now, we consider the set $S=\{s(x,y,z):x,y,z\in\mathbb{F}_{2^\ell}\}$ such that we index the entries of $s(x,y,z)$ by $0\leq i,j$ such that $i+j\leq c\sqrt{n}$ ($c$ is a constant so that there are exactly $n$ entries). For such $x,y,z$ and $i,j$ we set $s(x,y,z)_{i,j}=\text{Tr}(x^iy^jz)$. I want to show that for an appropriate choice of $\ell$, the set $S$ described above is an $\varepsilon$-biased set of size $O(n\sqrt{n}/\varepsilon^3)$. Fix $\vec{0}\neq \tau\in\{0,1\}^n$, what we need to show is that (under good choice of $\ell$) $$\bigg|\mathbb{E}_{s\in S}\Big[(-1)^{\langle s,\tau\rangle}\Big]\bigg|\leq \varepsilon$$ Let $x,y,z\in\mathbb{F}_{2^\ell}$ and consider $\langle s(x,y,z),\tau\rangle$, I managed to show that (from $\mathbb{F}_2$-linearity above while indexing $\tau$ as we index $s(x,y,z)$) $$\langle s(x,y,z),\tau\rangle=\cdots=f_z\Big(\sum_{i,j}x^iy^j\tau_{i,j}\Big)$$ Finally, I thought of defining the bi-variate polynomial $p_\tau(x,y)=\sum\limits_{i,j}x^iy^j\tau_{i,j}$ and saying that since it is a non-zero polynomial of low degree at most $c\sqrt{n}$ it attains each value of $\mathbb{F}_{2^\ell}$ with multiplicity at most $c\sqrt{n}2^\ell$ (from Schwartz-Zippel), so $$\forall\alpha\in\mathbb{F}_{2^\ell}:\Pr\limits_{x,y\in\mathbb{F}_{2^\ell}}[p_\tau(x,y)=\alpha]\leq c\sqrt{n}/2^\ell$$ I want to use it but I am stuck..., maybe we can say that the distribution of $p_\tau(x,y)$ is close enough to $U_{\mathbb{F}_{2^\ell}}$ in statistical distance 
in order to infer that the expected value of $f_z(p_\tau(x,y))$ is close enough to $1/2$? Answer: Recall that $$\langle s(x,y,z),\tau\rangle=\cdots=f_z\Big(\sum_{i,j}x^iy^j\tau_{i,j}\Big)$$ If we define $p_\tau(x,y)=\sum\limits_{i,j}x^iy^j\tau_{i,j}$, we have $$\langle s(x,y,z),\tau\rangle=f_{p_\tau(x,y)}(z)$$ So, observe that whenever $p_\tau(x,y)\neq 0$ we win, as $z$ is uniform and the expected value of $f_{p_\tau(x,y)}(z)$ is also $1/2$, which means that the contribution to the bias of these $x,y$ is zero. Again from Schwartz-Zippel, only an $O(\sqrt{n}/2^\ell)$ fraction of the pairs $(x,y)$ are zeros of $p_\tau$; on them we lose the entire bias. So, the total bias is at most $O(\sqrt{n}/2^\ell)\stackrel{?}{\leq}\varepsilon$. Choosing $\ell=\Omega(\log(\sqrt{n}/\varepsilon))$ finishes the construction.
{ "domain": "cstheory.stackexchange", "id": 4615, "tags": "derandomization, pseudorandomness" }
Why is the Planck length considered fundamental, but not the Planck mass?
Question: The planck length is considered by many to be a lower bound of the scale where new physics should appear to account for quantum gravity. The reasoning behind, as far as I understand, is that $l_{P}=\sqrt{\dfrac{\hbar G}{c^3}}$ consists of the fundamental constants of gravity and relativistic quantum mechanics. By the same argument $m_{P}=\sqrt{\dfrac{\hbar c}{G}}$ should be equally important, no? What am I missing? Answer: From the perspective of particle physics, you are correct, the Planck length and Planck mass are essentially equivalent concepts: the Planck mass describes a (very high) energy scale ($\sim 10^{19}$ GeV) at which new physics must emerge, just as the Planck length entails a (very short) length scale beyond which we need a new description. If we set $\hbar=c=1$ (which are really just conversion factors between units) we see that they are inverses of each other, $m_P=1/l_P$. More precisely, if we take the Einstein-Hilbert action for gravity and expand around a flat metric $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$, where we can interpret $h_{\mu\nu}$ as the graviton field, the resulting action will have an infinite number of higher order terms suppressed by powers of the Planck mass. Roughly, we have $$\mathcal{L}_{EH} \sim \frac{1}{2} \partial h\partial h+ \frac{1}{m_P} h\partial h \partial h + \frac{1}{m_P^2} h^2\partial h \partial h + \ldots $$ (as well as terms from higher derivative corrections, which are also higher order in $1/m_P$). So we have predictive control at energies scales much less than $m_P$, where the infinite number of higher order terms can be ignored. But once we reach the Planck scale (i.e. energy scales of $m_P$ or length scales of $l_P$) the non-renormalizable effects become important and all the quantum corrections and higher order terms render the above Lagrangian equation useless, and we require a new description.
{ "domain": "physics.stackexchange", "id": 69762, "tags": "physical-constants" }
Simpson's method for numerically computing the integral of a function
Question: I have implemented Simpson's rule for numerical integration. Check this video for the implemented function.

namespace Simpsons_method_of_integration
{
    //https://www.youtube.com/watch?v=ns3k-Lz7qWU
    using System;

    public class Simpson
    {
        private double Function(double x)
        {
            return 1.0 / (1.0 + Math.Pow(x, 5)); //Define the function f(x)
        }

        public double Compute(double a, double b, int n)
        {
            double[] x = new double[n + 1];
            double delta_x = (b - a) / n;
            x[0] = a;
            for (int j = 1; j <= n; j++) //calculate the values of x1, x2, ..., xn
            {
                x[j] = a + delta_x * j;
            }

            double sum = Function(x[0]);
            for (int j = 1; j < n; j++)
            {
                if (j % 2 != 0)
                {
                    sum += 4 * Function(x[j]);
                }
                else
                {
                    sum += 2 * Function(x[j]);
                }
            }
            sum += Function(x[n]);

            double integration = sum * delta_x / 3;
            return integration;
        }
    }

    public class MainClass
    {
        public static void Main()
        {
            Simpson simpson = new Simpson();
            double a = 0d; //lower limit a
            double b = 3d; //upper limit b
            int n = 6;     //number of subintervals; must be even
            if (n % 2 == 0)
            {
                Console.WriteLine(simpson.Compute(a, b, n));
            }
            else
            {
                Console.WriteLine("n should be an even number");
            }
            Console.ReadLine();
        }
    }
}

Output: 1.07491527775614

How can I make this source code more efficient?

Answer: This:

if (j % 2 != 0)
{
    sum += 4 * Function(x[j]);
}
else
{
    sum += 2 * Function(x[j]);
}

can be expressed as:

sum += (2 << (j % 2)) * Function(x[j]);

(here 2 << (j % 2) evaluates to 2 when j is even and 4 when j is odd).

In order to make your algorithm more useful, you should inject the Function as a delegate parameter to the method, and also make it static:

public static double Compute(Func<double, double> fx, double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = fx(a) + fx(b);
    double x = a;
    for (int j = 1; j < n; j++)
    {
        x += h;
        sum += (2 << (j % 2)) * fx(x);
    }
    return sum * h / 3;
}

Used as:

Simpson.Compute(x => 1 / (1 + Math.Pow(x, 5)), a, b, n);
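As a sanity check, a quick Python port of the same composite Simpson's rule (an illustration added here, not part of the original post) reproduces the quoted output:

```python
def simpson(fx, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n must be even)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    # Endpoints get weight 1; interior points alternate weights 4, 2, 4, ...
    total = fx(a) + fx(b)
    for j in range(1, n):
        total += (4 if j % 2 else 2) * fx(a + j * h)
    return total * h / 3

result = simpson(lambda x: 1.0 / (1.0 + x**5), 0.0, 3.0, 6)
# result ≈ 1.0749152778, matching the C# output above
```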
{ "domain": "codereview.stackexchange", "id": 37938, "tags": "c#, numerical-methods" }
Problem with libhdf5 dependency
Question: Recently I had an issue updating my Fuerte packages. When I ran $ sudo apt-get upgrade I received the error "E: Unable to correct problems, you have held broken packages." for 12 packages. After digging into the issue, I found that ros-fuerte-perception-pcl requires libhdf5-serial-dev. However, I had installed Paraview (which has a PCL viewer plugin) and Paraview requires libhdf5-openmpi-dev. It appears these two packages are mutually exclusive. Because I haven't been using Paraview lately, I decided to uninstall it so I could update my ROS Fuerte install. However, since it was coexisting peacefully before, I feel something must have changed. During the uninstall of Paraview and update of perception-pcl, I noticed that libnetcdf6 can use either libhdf5-serial or -openmpi:

$ apt-cache depends libnetcdf6
libnetcdf6
  Depends: libc6
  Depends: libcurl3-gnutls
  Depends: libgcc1
  Depends: libgfortran3
 |Depends: libhdf5-serial-1.8.4
  Depends: <libhdf5-1.8.4>
    libhdf5-lam-1.8.4
    libhdf5-mpich-1.8.4
    libhdf5-openmpi-1.8.4
    libhdf5-serial-1.8.4
  Depends: libstdc++6
  Conflicts: libnetcdf6:i386

Could perception-pcl be modified to accept either also? This would allow Paraview installation again. Yes, I'm also going to query the Ubuntu package maintainers about doing the same with Paraview. Originally posted by dougbot01 on ROS Answers with karma: 342 on 2013-07-14 Post score: 2 Answer: The problem can be overcome by a little hack. Because the libhdf5-serial package contains all the files Paraview needs, it is sufficient to tell apt-get that the package libhdf5-openmpi has already been installed. You can download a "fake" package from https://app.box.com/s/1atz60r66dwke4xmsx01 .
Originally posted by peci1 with karma: 1366 on 2013-10-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by peci1 on 2013-10-14: If you don't trust that file, you can create your own using the tutorial at http://ubuntuforums.org/showthread.php?t=726317 (just the main-package part, without dealing with the .conf file). Comment by peci1 on 2013-10-14: To install the downloaded .deb file just double-click it and install it. If that doesn't work for you, you can always do sudo dpkg -i name_of_file.deb .
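The "fake package" trick in the answer can also be reproduced with Debian's equivs tool (run equivs-control, edit the generated file, then equivs-build it and install the resulting .deb with dpkg -i). A hypothetical control file might look like the sketch below; the Version string is an assumption, only the Package name matters for satisfying the dependency:

```text
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libhdf5-openmpi-dev
Version: 1.8.4-fake1
Description: Empty placeholder satisfying the libhdf5-openmpi-dev dependency
 Built with equivs so apt considers the dependency installed
 while the libhdf5-serial files actually provide the libraries.
```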
{ "domain": "robotics.stackexchange", "id": 14915, "tags": "ros, ros-fuerte, perception-pcl" }
How can I calculate the relative change in precipitation using CMIP models without producing unrealistic results in dry areas?
Question: I am calculating changes in precipitation using the Delta method, wherein the relative changes are calculated thus: Delta change = (modelled future climatology (2050s) - modelled historical climatology (1981-2010)) / modelled historical climatology (1981-2010). This Delta (or anomaly) data is added to the actual observations to produce a final bias corrected future projection for precipitation using: Future projection = Observed climatology (1981-2010) * (1 + Delta change). My problem is that there are some very large numbers being produced in the dataset (presumably due to dry areas having unrealistically large relative differences calculated between future and present). How can I account for this? They mention it in this paper: https://www.nature.com/articles/s41597-019-0343-8 but I don't follow exactly what they did: We note that in very dry areas (i.e. monthly historical precipitation close to zero) relative changes could produce unreasonably large relative precipitation increases (e.g. Sahara Desert). To avoid this, we made two adjustments: (1) we set a threshold of 0.1 mm month^-1 both for current and future GCM values, which prevents indetermination in Eq. 2; and (2) we truncate the top 2% of anomaly values to the 98th percentile value in the empirical probability distribution for each anomaly gridded dataset Answer: (1) we set a threshold of 0.1 mm month^-1 both for current and future GCM values = they masked out all grid cells with less than 0.1 mm month^-1 precipitation. That way they're avoiding dividing by very small numbers in what you correctly identified as dry areas and (2) we truncate the top 2% of anomaly values to the 98th percentile value in the empirical probability distribution for each anomaly gridded dataset = for each delta dataset (or anomaly gridded dataset) they set all grid cells with anomalies greater than the 98th percentile to the value of the 98th percentile.
Presumably to catch any outliers that were not addressed with the masking of dry areas.
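The two guards can be sketched in plain Python (the function name, the small percentile helper, and the toy arrays are illustrative, not from the paper; real work would use gridded arrays, e.g. with numpy/xarray):

```python
def percentile(values, pct):
    """Linear-interpolation percentile (same convention as numpy's default)."""
    s = sorted(values)
    pos = (pct / 100.0) * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def delta_projection(obs, hist, fut, wet_threshold=0.1, cap_pct=98):
    # (1) floor both GCM climatologies at 0.1 mm/month so the ratio
    #     never divides by a near-zero historical value
    hist = [max(v, wet_threshold) for v in hist]
    fut = [max(v, wet_threshold) for v in fut]
    deltas = [(f - h) / h for f, h in zip(fut, hist)]
    # (2) truncate the largest anomalies to the 98th-percentile value
    cap = percentile(deltas, cap_pct)
    deltas = [min(d, cap) for d in deltas]
    # Future projection = Observed * (1 + Delta change)
    return [o * (1.0 + d) for o, d in zip(obs, deltas)]

# Toy climatologies in mm/month; the first cell is a near-zero "Sahara" cell.
hist = [0.01, 50.0, 100.0, 200.0]
fut  = [5.0, 60.0, 90.0, 220.0]
obs  = [0.5, 55.0, 110.0, 180.0]
proj = delta_projection(obs, hist, fut)
```

Without the threshold the first cell's delta would be (5 - 0.01)/0.01 ≈ 499; flooring the historical value at 0.1 keeps it to a much smaller (if still large) anomaly, and the percentile cap then trims any remaining extremes.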
{ "domain": "earthscience.stackexchange", "id": 2331, "tags": "climate-change, precipitation, climatology" }