Bishop Diagonal
Question: I have solved this kata on Codewars.com and it passes all tests. I am looking for refactoring tips and proper coding tips as well. Also I am a beginner that is trying to Improve. I would like to shorten my code and clean it up. Task In the Land Of Chess, bishops don't really like each other. In fact, when two bishops happen to stand on the same diagonal, they immediately rush towards the opposite ends of that same diagonal. Given the initial positions (in chess notation) of two bishops, bishop1 and bishop2, calculate their future positions. Keep in mind that bishops won't move unless they see each other along the same diagonal. Example For bishop1 = "d7" and bishop2 = "f5", the output should be ["c8", "h3"]. For bishop1 = "d8" and bishop2 = "b5", the output should be ["b5", "d8"]. The bishops don't belong to the same diagonal, so they don't move. Input/Output [input] string bishop1 Coordinates of the first bishop in chess notation. [input] string bishop2 Coordinates of the second bishop in the same notation. [output] a string array Coordinates of the bishops in lexicographical order after they check the diagonals they stand on. My solution that passes all tests on Codewars.com and that I want refactored is below class Kata { public bool ChessBoardCellColor(string cell1, string cell2) { string x; string y; var xaxis = new Dictionary<char, int>() { { 'a', 1 }, { 'b', 2 }, { 'c', 3 }, { 'd', 4 }, { 'e', 5 }, { 'f', 6 }, { 'g', 7 }, { 'h', 8 } }; var input1 = cell1.ToCharArray(); int j; xaxis.TryGetValue(input1[0], out j); x = ((j % 2 == 1 && input1[1] % 2 == 1) || (input1[1] % 2 == 0 && j % 2 == 0)) ? "black" : "white"; var input2 = cell2.ToCharArray(); int k; xaxis.TryGetValue(input2[0], out k); y = ((k % 2 == 1 && input2[1] % 2 == 1) || (input2[1] % 2 == 0 && k % 2 == 0)) ? "black" : "white"; return x == y ? 
true : false; } public Tuple<bool, string> BishopDia(string bishop1, string bishop2) { if (ChessBoardCellColor(bishop1, bishop2)) { int add = 1; string BishopNextDiagonal; string BishopPreviousDiagonal; string BishopNextLeftDiagonal; string BishopPreviousRightDiagonal; while (add <= 8) { var x = Convert.ToChar(bishop1[0] + add); var value = Char.GetNumericValue(bishop1[1]) + add; BishopNextDiagonal = x + "" + value; var boolean = BishopNextDiagonal == bishop2; if (boolean) return new Tuple<bool, string>(true, "Backwards-Left"); var x1 = Convert.ToChar(bishop1[0] - add); var value1 = Char.GetNumericValue(bishop1[1]) - add; BishopPreviousDiagonal = x1 + "" + value1; var boolean1 = BishopPreviousDiagonal == bishop2; if (boolean1) return new Tuple<bool, string> (true, "Forwards-Right"); var x2 = Convert.ToChar(bishop1[0] - add); var value2 = Char.GetNumericValue(bishop1[1]) + add; BishopNextLeftDiagonal = x2 + "" + value2; var boolean2 = BishopNextLeftDiagonal == bishop2; if (boolean2) return new Tuple<bool, string>(true, "Backwards-Right"); var x3 = Convert.ToChar(bishop1[0] + add); var value3 = Char.GetNumericValue(bishop1[1]) - add; BishopPreviousRightDiagonal = x3 + "" + value3; var boolean3 = BishopPreviousRightDiagonal == bishop2; if (boolean3) return new Tuple<bool, string>(true, "Forwards-Left"); add++; } return new Tuple<bool, string> (false, ""); } else return new Tuple<bool, string> (false, ""); } public string[] Figure(string bishop1, string bishop2, Tuple<Func<int, bool>, Func<int, bool>> b1, Tuple< Func<int, bool> , Func<int, bool>> b2, Tuple<Func<int, int>, Func<int, int>> operation1, Tuple<Func<int, int>,Func<int,int>> operation2) { var count = 1; var bishopcoordinateX = ' '; var bishopcoordinateX1 = ' '; while (count < 8) { count++; var value = Char.GetNumericValue(bishop2[1]); if (b2.Item1((bishop2[0] - 0)) && b2.Item2((int)value)) { bishopcoordinateX = Convert.ToChar(operation2.Item1(bishop2[0])); value = operation2.Item2((int)value); bishop2 = 
bishopcoordinateX + "" + value; } var value1 = Char.GetNumericValue(bishop1[1]); if (b1.Item1((bishop1[0] - 0)) && b1.Item2((int)value1)) { bishopcoordinateX1 = Convert.ToChar(operation1.Item1(bishop1[0])); value1 = operation1.Item2((int)value1); bishop1 = bishopcoordinateX1 + "" + value1; } } return new string[] { bishop1, bishop2 }.OrderBy(c => c).ToArray(); } public string[] BishopDiagonal(string bishop1, string bishop2) { var x = BishopDia(bishop1, bishop2); if (x.Item1) { int Max = 'h' - 0; int Min = 'a' - 0; if (x.Item2 == "Backwards-Left") { return Figure(bishop1, bishop2, new Tuple<Func<int, bool>, Func<int, bool>>(z => z > Min, z => z > 1 ), new Tuple<Func<int, bool>, Func<int, bool>>(z => z < Max, z => z < 8 ), new Tuple<Func<int, int>, Func<int, int>>(z => z - 1, z => z - 1 ), new Tuple<Func<int, int>, Func<int, int>>(z => z + 1, z => z + 1 )); } else if(x.Item2 == "Forwards-Right") { return Figure(bishop1, bishop2, new Tuple<Func<int, bool>, Func<int, bool>>(z => z < Max, z => z < 8), new Tuple<Func<int, bool>, Func<int, bool>>(z => z > Min, z => z > 1), new Tuple<Func<int, int>, Func<int, int>>(z => z + 1, z => z + 1), new Tuple<Func<int, int>, Func<int, int>>(z => z - 1, z => z - 1)); } else if(x.Item2 == "Backwards-Right") { return Figure(bishop1, bishop2, new Tuple<Func<int, bool>, Func<int, bool>>(z => z < Max, z => z > 1), new Tuple<Func<int, bool>, Func<int, bool>>(z => z > Min, z => z < 8), new Tuple<Func<int, int>, Func<int, int>>(z => z + 1, z => z - 1), new Tuple<Func<int, int>, Func<int, int>>(z => z - 1, z => z + 1)); } else if (x.Item2 == "Forwards-Left") { return Figure(bishop1, bishop2, new Tuple<Func<int, bool>, Func<int, bool>>(z => z > Min, z => z < 8), new Tuple<Func<int, bool>, Func<int, bool>>(z => z < Max, z => z > 1), new Tuple<Func<int, int>, Func<int, int>>(z => z - 1, z => z + 1), new Tuple<Func<int, int>, Func<int, int>>(z => z + 1, z => z - 1)); } else return new string[] { bishop1, bishop2 }.OrderBy(c => c).ToArray(); } 
else return new string[] { bishop1, bishop2 }.OrderBy(c => c).ToArray(); } } Answer: First of all I'll upvote the question for the effort. You split the problem into subproblems and you show good understanding of C# idiomatic as well as general programming skills. On the other hand I think you overcomplicate the solution, because your general analysis of the problem is a little too "chessish". Instead I would try with some kind of "mathematical" model: A chessboard is an 8x8 matrix or coordinate system. A chess field can be converted into a matrix coordinate set (x, y) by: field = "f4" => (x, y) = (field[0] - 'a', field[1] - '1') = (5, 3) (zero based) Two fields are on the same "diagonal" if their offset has equal length in x and y: abs(field1.X - field2.X) == abs(field1.Y - field2.Y) From this mathematical model and analysis it is very simple to calculate the result. Some comments on the code: var boolean = BishopNextDiagonal == bishop2; if (boolean) return new Tuple<bool, string>(true, "Backwards-Left"); Because you only use the boolean variable once then just do: if (BishopNextDiagonal == bishop2) return new Tuple<bool, string>(true, "Backwards-Left"); This is rather complicated to understand: public string[] Figure(string bishop1, string bishop2, Tuple<Func<int, bool>, Func<int, bool>> b1, Tuple<Func<int, bool>, Func<int, bool>> b2, Tuple<Func<int, int>, Func<int, int>> operation1, Tuple<Func<int, int>, Func<int, int>> operation2) {...} What are b1, b2, operation1/2 doing? A better naming would be suitable. Tuples are nice objects for holding (temporary) data, but I wouldn't use them in the above situation. Instead I would make a class or find another approach.
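The "mathematical" model suggested in the answer translates almost directly into code. Below is a minimal sketch (in Python rather than the question's C#; the function and helper names are my own), using the coordinate conversion and the equal-offset diagonal test described above:

```python
def bishop_diagonal(bishop1, bishop2):
    # Convert chess notation ("d7") to zero-based coordinates (3, 6).
    def to_xy(field):
        return (ord(field[0]) - ord('a'), ord(field[1]) - ord('1'))

    def to_field(x, y):
        return chr(x + ord('a')) + chr(y + ord('1'))

    (x1, y1), (x2, y2) = to_xy(bishop1), to_xy(bishop2)

    # Same diagonal <=> the offsets have equal length in x and y.
    if abs(x1 - x2) != abs(y1 - y2) or (x1, y1) == (x2, y2):
        return sorted([bishop1, bishop2])

    # Step away from the other bishop until the next step leaves the board.
    def flee(x, y, dx, dy):
        while 0 <= x + dx <= 7 and 0 <= y + dy <= 7:
            x, y = x + dx, y + dy
        return to_field(x, y)

    sign = lambda v: (v > 0) - (v < 0)
    dx, dy = sign(x1 - x2), sign(y1 - y2)   # direction from bishop2 to bishop1
    return sorted([flee(x1, y1, dx, dy), flee(x2, y2, -dx, -dy)])
```

With the examples from the task, `bishop_diagonal("d7", "f5")` gives `["c8", "h3"]` and `bishop_diagonal("d8", "b5")` gives `["b5", "d8"]`.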
{ "domain": "codereview.stackexchange", "id": 25227, "tags": "c#, beginner, programming-challenge, chess" }
NFA not accepting a certain string
Question: I am trying to make a non-deterministic NFA that does not contain a string "101". How do I make my NFA so that it does not have this string? My attempt: Answer: I'll give you a recipe for constructing automata for $\Sigma^* \setminus F$ for any regular $F \subseteq \Sigma^*$. Construct a (complete) DFA $A_F$ for $F$. Construct the complement automaton $\overline{A_F}$. Optional: minimise. The first step is particularly easy (if maybe time consuming) for finite $F$. The second one is standard (flip accepting and non-accepting states) and should be covered by any textbook on this subject. Of course, step 1 may blow up the size of the automaton (going from an easy-to-design NFA to DFA); this can in particular be the case for finite sets. In your particular case, though, you never get more than five states.
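The recipe can be illustrated concretely for the single string $F = \{101\}$. Below is a minimal Python sketch (the transition-table encoding is my own); note it uses exactly the five states mentioned above, including a rejecting sink, and the complement is obtained by flipping the accepting set:

```python
# Complete DFA A_F for F = {"101"} over the alphabet {0, 1}.
# States 0..3 count the matched prefix of "101"; state 4 is a rejecting sink.
DELTA = {
    (0, '1'): 1, (0, '0'): 4,
    (1, '0'): 2, (1, '1'): 4,
    (2, '1'): 3, (2, '0'): 4,
    (3, '0'): 4, (3, '1'): 4,
    (4, '0'): 4, (4, '1'): 4,
}
ACCEPTING = {3}            # A_F accepts exactly "101"
COMPLEMENT = {0, 1, 2, 4}  # complement automaton: flip accepting states

def run(word, accepting):
    """Return True iff the (complete) DFA accepts the word."""
    state = 0
    for symbol in word:
        state = DELTA[(state, symbol)]
    return state in accepting
```

`run(w, COMPLEMENT)` then accepts every word except "101", which is the language $\Sigma^* \setminus F$.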
{ "domain": "cs.stackexchange", "id": 4438, "tags": "automata, finite-automata" }
What are the ethical considerations when preparing a bacterial culture for testing antibiotic resistance?
Question: I cannot think of any ethical issues associated with preparing a bacterial culture to test antibiotic resistance. The bacteria I am using is Bacillus Megaterium so its not pathogenic. Thanks in advance for your help! Answer: While I think the question is off-topic, I could not help but trying to answer it! It is easy to find ethical issues in every activity. Potential ethical issues You are killing bacteria Your research has an environmental cost. You consume loads of plastic thingies for example. Your research costs money as well. You consume tax payer money (depending on your source of funds) You are selecting for anti-biotic resistance and might, by mistake, release them in nature (of even they are not pathogens, they might become, they might transfer their resistance to pathogenic bacteria as well or your experience may have all sorts of unexpected consequences) Are you sure it is not pathogenic? We are never sure of anything, so just don't take risks. Don't play with nature, you are not God! Are you helping those nasty big pharmaceutical companies, huh? You are a clever person. There are so many other things you could do for the world that would be so much more beneficial. Your time could be used for better purposes. You could work on improving sanitation conditions in India, or improve women's right around the world, or help children involved in the current civil war in Somalia for examples. Basics of philosophy of ethics For any question of ethics, one has to define his/her own values and from this point this person can formulate an opinion on a particular case. Of course, I personally do not feel bad about killing bacteria and consider antibiotic resistance a very important field of study to improve our everyday well-being but other people may disagree based on their own values. I am not formally trained as a philosopher. 
So please read the Wikipedia articles on ethics and values I linked above, and if you have any other questions about philosophy, you can ask them on Philosophy.SE.
{ "domain": "biology.stackexchange", "id": 7224, "tags": "microbiology, bacteriology" }
rosrun python executable
Question: How do you set which python executable rosrun uses? I am on Arch linux and I assume rosrun defaults to python 3 while trying to interpret python 2.7 source, which of course fails. EDIT: The issue was in the package scripts. They used "#!/usr/bin/env python" which defaults to python3 on Archlinux. I changed it to "#!/usr/bin/python2". Originally posted by clauniel on ROS Answers with karma: 23 on 2015-01-09 Post score: 2 Answer: On Arch, since /usr/bin/python points to Python 3, some steps need to be taken when using ROS: Force the use of Python 2 by providing Python 2 paths/executables to catkin/CMake (as explained in the wiki), Patch shebang lines of Python scripts to ensure that Python 2 is used. All the ROS packages available in the AUR are fixed automatically thanks to a script available in ros-build-tools: # Fix Python2/Python3 conflicts /usr/share/ros-build-tools/fix-python-scripts.sh -v 2 /path/to/src/dir Originally posted by bchr with karma: 596 on 2015-02-20 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 20527, "tags": "ros, python, archlinux, rosrun" }
Time evolution of expectation value of an operator
Question: I'm studying QM from Sakurai, and I have a doubt regarding the proof given that in the case of time independent Hamiltonian the expectation value of an observable doesn't change with time. The argument goes as follows: At any time the expectation value of an observable $B$ is given by: $$<\alpha_0|U^{\dagger}BU|\alpha_0>$$ Where $U$ is the time evolution operator and $|\alpha_0>$ is the ket in our initial state. Suppose that $|\alpha_0>$ is also an eigenket of some other operator $A$ (which commutes with the Hamiltonian operator), that the Hilbert space is finite-dimensional and that the eigenkets of $A$ form a basis of the Hilbert space. We can use the explicit form of the time evolution operator: $$U=\exp{\frac{-iHt}{\hbar}}$$ And we know that it acts on the eigenkets of $A$ by multiplication by a phase. Sakurai then writes this: $$<\alpha_0|\exp{\frac{iE_{\alpha_0}t}{\hbar}}B\exp{\frac{-iE_{\alpha_0}t}{\hbar}}|\alpha_0>$$ My doubt is: the observable $B$ could send the ket $|\alpha_0>$ in a superposition of eigenkets, and then the simple multiplication by $\exp{\frac{iE_{\alpha_0}t}{\hbar}}$ wouldn't hold anymore. What am I missing? Answer: The left exponential evolves the $\langle \alpha_0 \lvert$ on the left. This is one of the pitfalls of Dirac notation, it would be unambiguous to write $$ (\mathrm{e}^{-\mathrm{i}E_{\alpha_0} t} \lvert \alpha_0 \rangle,B \mathrm{e}^{-\mathrm{i}E_{\alpha_0} t} \lvert \alpha_0 \rangle)$$ where $(\dot{},\dot{})$ denotes the inner product on the Hilbert space, i.e. $(\lvert\psi\rangle,\lvert\phi\rangle) = \langle \psi\vert\phi \rangle$.
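Spelled out, both exponentials are c-numbers, so they can be pulled out of the bra and the ket and cancelled before $B$ ever acts:

$$<\alpha_0|U^{\dagger}BU|\alpha_0> = e^{+iE_{\alpha_0}t/\hbar}\,e^{-iE_{\alpha_0}t/\hbar}<\alpha_0|B|\alpha_0> = <\alpha_0|B|\alpha_0>$$

Whatever superposition $B|\alpha_0>$ produces is irrelevant: by the time $B$ acts, the phases are already extracted as scalar factors, and they cancel.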
{ "domain": "physics.stackexchange", "id": 24341, "tags": "quantum-mechanics, notation, time-evolution" }
Scikit-Image, Numpy, and Selecting Colors (python)
Question: I'm trying to mask colored features from a photograph so that I can do some other processing on them. I've played with a few packages (scikit-image, mahotas, and openCV) and have settled on Scikit-image because it plays well with scikit-learn. I'd like to accomplish two things, eventually: (1) get colored features so that I can compute things like length, and (2) remove colored features from the image while retaining the white spidering-looking veins (the lattice-looking stuff around the margin of the image will need to go, but that can be a later endeavour). I'm new to image processing in Python, and I think I've mislead myself by using strange color spaces. Perhaps it would have been better to define an upper and lower boundary for each colour, and mask the image array? I'm definitely open to any flexible solution to the problem. Eventually I'll be applying this to hundreds of photos.. Original Photo Code from skimage import io, img_as_float, color, exposure img = img_as_float(io.imread('./images/testimage2.JPG')) # Isolate paint marks # Put image into LAB colour space image_lab = color.rgb2lab(img) img = exposure.rescale_intensity(img) # Colours of interest color_array = np.array([ [[[255, 255, 0.]]], # Yellow stuff [[[255, 190, 200.]]], # Pink stuff [[[255, 165, 0.]]], # Orange stuff [[[255, 0, 0.]]], # Red stuff ]) # Loop through the color array and pick out the colored features for i in range(0,color_array.ndim): # Compute distance between the color of interest and the actual image # http://scikit-image.org/docs/dev/api/skimage.color.html#skimage.color.deltaE_cmc # "The usual values are kL=2, kC=1 for “acceptability”" distance_color = color.deltaE_ciede2000(color_array[i], image_lab, kL=2, kC=1, kH=0.5) # Normalise distance distance_color = exposure.rescale_intensity(distance_color) # Mask image image_filtered = img.copy() image_filtered[distance_color > 0.5] = 0 # Plot it up print ("Filtered to: ", color_array[i]) f, (ax0, ax1, ax2) = 
plt.subplots(1, 3, figsize=(20, 10)) ax0.imshow(img) ax1.imshow(distance_color, cmap='gray') ax2.imshow(image_filtered) plt.show() Results Answer: Thanks to Maximilian Matthé, I have an answer. Below is his OpenCV code translated to Scikit-image. Note that I changed some parameters (e.g., reducing the size of the morphological disk) to replicate his results. from skimage import color from skimage.morphology import disk, square, opening, dilation img_hsv = color.rgb2hsv(img) # Image into HSV colorspace h = img_hsv[:,:,0] # Hue s = img_hsv[:,:,1] # Saturation v = img_hsv[:,:,2] # Value aka Lightness plt.figure(1, figsize=(15, 15)) plt.subplot(4,2,1); plt.imshow(h, cmap='gray'); plt.title('Hue') plt.subplot(4,2,2); plt.imshow(s, cmap='gray'); plt.title('Saturation') plt.subplot(4,2,3); plt.imshow(v, cmap='gray'); plt.title('Value') plt.tight_layout() mask = (s > 0.35).astype(np.uint8); # Thresholding in the Saturation-channel plt.subplot(4,2,4); plt.imshow(mask); plt.title('mask') disk_elem = disk(1) # Remove small regions opened = opening(mask, selem=disk_elem) plt.subplot(4,2,5); plt.imshow(opened); plt.title('Opened mask') square_elem = square(2) # rejoin colored features dilated = dilation(opened, selem=square_elem) plt.subplot(4,2,6); plt.imshow(dilated); plt.title('Dilated mask') img2 = img.copy() img2[dilated.astype(bool), :] = 0; # Set the pixels to zero where the mask is set plt.subplot(4,2,7); plt.imshow(img2); plt.title('Final Image')
{ "domain": "dsp.stackexchange", "id": 4565, "tags": "image-processing, python, image-segmentation" }
How to calibrate a qubit other than qubit 0?
Question: I replicated successfully the qiskit tutorial (in a jupyter notebook using VSCode) found here: https://learn.qiskit.org/course/quantum-hardware-pulses/calibrating-qubits-using-qiskit-pulse But when I tried to calibrate a different qubit (changing the line qubit = 0 to, say, qubit = 1, I get only background noise (ranging from -6.15 to -5.90 a.u.) instead of a nice peak (reaching around 2 a.u.) when plotting sweep_values. By the way, I don't know if this is important, I had to also change the line sweep_values.append(res[qubit]) to sweep_values.append(res[0]) because it was giving me an error (res is of size 1). This was on ibm_hanoi. Why do I get only noise when switching to another qubit? Answer: Did you modify this cell in section 2? sweep_gate = Gate("sweep", 1, [freq]) qc_sweep = QuantumCircuit(1, 1) qc_sweep.append(sweep_gate, [0]) qc_sweep.measure(0, 0) qc_sweep.add_calibration(sweep_gate, (0,), sweep_sched, [freq]) # Create the frequency settings for the sweep (MUST BE IN HZ) frequencies_Hz = frequencies_GHz*GHz exp_sweep_circs = [qc_sweep.assign_parameters({freq: f}, inplace=False) for f in frequencies_Hz] Try replacing it with this: n_bits = backend_config.n_qubits sweep_gate = Gate("sweep", 1, [freq]) qc_sweep = QuantumCircuit(n_bits, 1) qc_sweep.append(sweep_gate, [qubit]) qc_sweep.measure(qubit, 0) qc_sweep.add_calibration(sweep_gate, (qubit,), sweep_sched, [freq]) # Create the frequency settings for the sweep (MUST BE IN HZ) frequencies_Hz = frequencies_GHz*GHz exp_sweep_circs = [qc_sweep.assign_parameters({freq: f}, inplace=False) for f in frequencies_Hz] I would recommend looking through all the cells to see if there are other places where they also assumed qubit = 0.
{ "domain": "quantumcomputing.stackexchange", "id": 4824, "tags": "qiskit, openpulse" }
ROS PCL: Help with Moving Least Squares filter
Question: I'm trying to implement a simple ROS node to perform Moving Least Squares filtering on a sensor_msgs/PointCloud2 topic. I'm following this PCL tutorial, which uses the pcl/surface/mls.h file. My code is at this GitHub page, but replicated below; #include <ros/ros.h> // PCL specific includes #include <sensor_msgs/PointCloud2.h> #include <pcl_conversions/pcl_conversions.h> #include <pcl/point_cloud.h> #include <pcl/point_types.h> #include <pcl/io/pcd_io.h> #include <pcl/kdtree/kdtree_flann.h> #include <pcl/surface/mls.h> /** * Simple class to allow appling a Moving Least Squares smoothing filter */ class MovingLeastSquares { private: double _search_radius; public: MovingLeastSquares(double search_radius = 0.03) : _search_radius(search_radius) { // Pass }; ros::Subscriber sub; ros::Publisher pub; void cloudCallback (const sensor_msgs::PointCloud2ConstPtr& cloud_msg); }; /** * Callback that performs the Point Cloud downsapling */ void MovingLeastSquares::cloudCallback (const sensor_msgs::PointCloud2ConstPtr& cloud_msg) { // Container for original & filtered data pcl::PCLPointCloud2 cloud; // Convert to PCL data type pcl_conversions::toPCL(*cloud_msg, cloud); // Convert to dumbcloud pcl::PointCloud<pcl::PointXYZ>::Ptr dumb_cloud (new pcl::PointCloud<pcl::PointXYZ> ()); //pcl::MsgFieldMap field_map; //pcl::createMapping<pcl::PointXYZ>(cloud_msg->fields, field_map); //pcl::fromPCLPointCloud2<pcl::PointXYZ>(cloud, *dumb_cloud); pcl::fromPCLPointCloud2<pcl::PointXYZ>(cloud, *dumb_cloud); // Create a KD-Tree pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>); // Output has the PointNormal type in order to store the normals calculated by MLS pcl::PointCloud<pcl::PointNormal> mls_points; // Init object (second point type is for the normals, even if unused) pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls; mls.setComputeNormals (true); // Set parameters mls.setInputCloud (dumb_cloud); mls.setPolynomialFit (true); 
mls.setSearchMethod (tree); mls.setSearchRadius (_search_radius); // Reconstruct mls.process (mls_points); // Convert from dumbcloud to cloud pcl::PCLPointCloud2 cloud_filtered; pcl::toPCLPointCloud2(mls_points, cloud_filtered); // Convert to ROS data type sensor_msgs::PointCloud2 output; pcl_conversions::moveFromPCL(cloud_filtered, output); // Publish the data pub.publish (output); } /** * Main */ int main (int argc, char** argv) { // Initialize ROS ros::init (argc, argv, "pcl_mls"); ros::NodeHandle nh("~"); // Read optional leaf_size argument double search_radius = 0.03; if (nh.hasParam("search_radius")) { nh.getParam("search_radius", search_radius); ROS_INFO("Using %0.4f as search radius", search_radius); } // Create our filter MovingLeastSquares MovingLeastSquaresObj(search_radius); const boost::function< void(const sensor_msgs::PointCloud2ConstPtr &)> boundCloudCallback = boost::bind(&MovingLeastSquares::cloudCallback, &MovingLeastSquaresObj, _1); // Create a ROS subscriber for the input point cloud MovingLeastSquaresObj.sub = nh.subscribe<sensor_msgs::PointCloud2> ("/input", 10, boundCloudCallback); // Create a ROS publisher for the output point cloud MovingLeastSquaresObj.pub = nh.advertise<sensor_msgs::PointCloud2> ("/output", 10); // Spin ros::spin (); } This compiles fine for me, but I don't get any points coming through the output topic. Can anyone tell me what I'm doing wrong? I suspect it is something to do with the conversions at line 46 or line 74. Thank you! Answer: Update: I got it working. Thanks for the feedback everyone. To address some of the comments: I wasn't aware that 'filtering' has a specific meaning in the context of PCL. My code wasn't trying to implement an actual filter. I was able to compile and run the PCL tutorial no problems Sure enough, my problem was in the conversion from ROS message types to PCL point cloud types. I was essentially missing a call to pcl_conversions::toPCL(). The working code can be seen here.
{ "domain": "robotics.stackexchange", "id": 1600, "tags": "ros, point-cloud" }
How to assign a new level to many levels of a categorical variable
Question: I have a model which has many categorical variables. For each categorical variable there are many levels, like 50~. But not all of them have significant counts. I got these counts using the function value_counts() in Python: A 50 B 38 C 26 D 18 E 10 ... T 1 X 1 Z 1 How can I change the levels with count (say) less than 5 to a new level "others"? for x in data.class: if x.value_counts() <30: x = "others" Answer: Using the notation you gave (i.e. data is the dataframe and data.class is the categorical variable you want to process), this can be done this way: def cut_levels(x, threshold, new_value): value_counts = x.value_counts() labels = value_counts.index[value_counts < threshold] x[np.in1d(x, labels)] = new_value cut_levels(data.class, 30, 'others') Note that this modifies the original data, thus if you want to keep it, do a copy of it before or change the code to this: def cut_levels(x, threshold, new_value): x = x.copy() value_counts = x.value_counts() labels = value_counts.index[value_counts < threshold] x[np.in1d(x, labels)] = new_value return x new_class = cut_levels(data.class, 30, 'others')
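As an aside, `class` is a reserved word in Python, so in practice the column has to be accessed as `data['class']` rather than `data.class`, and the answer's code also needs `import numpy as np` for `np.in1d`. A slightly more idiomatic pandas variant of the same idea is sketched below (the sample data is made up; `Series.where` keeps values where the condition holds and substitutes the replacement elsewhere):

```python
import pandas as pd

def cut_levels(series, threshold, new_value="others"):
    # Collapse every level whose count falls below the threshold into new_value.
    counts = series.value_counts()
    rare = counts.index[counts < threshold]
    return series.where(~series.isin(rare), new_value)

# Hypothetical example: levels with fewer than 2 occurrences become "others".
s = pd.Series(list("AAAABBBCZ"))
collapsed = cut_levels(s, 2)
```

This version returns a new Series instead of mutating the input, which avoids the copy-vs-in-place question raised in the answer.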
{ "domain": "datascience.stackexchange", "id": 1060, "tags": "python, categorical-data" }
Forecasting sequence data with intermittent peaks
Question: I'm trying to forecast a sequence that looks like below: I know ARIMA, INAR, GLM, etc. but none of these works for this data. Algorithms I found for intermittent time series (ADIDA, Croston, etc.) only output a constant expected value as forecast. After looking at Cryo's comment, I plotted the 2 graphs he mentioned. heights of the peaks (I took the values >=600 as peaks): time intervals between the peaks: These graphs do seem a lot more "normal" for any model to handle. But I see no obvious trend/seasonality in either of the graphs, and their ACF and PACFs are quite small as well. So far, I tried the GLM models (negative binomial, binomial, poisson, etc.,) which all gave me predictions of too small variance, and ARIMA (even though the data is integer) which gives me something like below (this one is the "time intervals between peaks" series): What are some better models/ideas to try to forecast these 2 series? And what level of prediction accuracy should I expect in the end? Is it always possible to make relatively accurate forecasts for a time series? I'm a newbie in series forecasting, thanks a lot for your help...! Answer: Transformed series, delay until next peak ($\tau_i$) and next peak height ($h_i$) also look better to me. If the correlation between subsequent points is low you may struggle in predicting. Are there any external variables you can bring in? Also, can you pick up any patterns if you plot your plots in a 2d plot, with $\tau_i$ on horizontal axis as $h_i$ on vertical? I would try to predict $\tau_i,\,h_i$ in a single model. Your height is integer valued, as I understood, and your delay would be strictly non-negative, might as well assume it is integer-valued too. $$ \begin{align} \tau_i&\sim NB\left(r^{(\tau)}_{i},\,p^{(\tau)}_{i}\right) \\ h_i&\sim NB\left(r^{(h)}_{i},\,p^{(h)}_{i}\right) \end{align} $$ Where NB is negative binomial.This may require some more refinement, e.g. 
if your variance is smaller than your mean, you may need a different distribution, or may need to write it in terms of thinning operators (more like INAR). Then you will need to map your prior data points into $r$ and $p$. Might be something like (and similar for $h$): $$ \begin{align} r^{(\tau)}_i&=\exp\left(\sum_{j=i-1-k}^{i-1}\alpha^{(r,\tau)}_j\cdot\tau_j+\sum_{j=i-1-k}^{i-1}\beta^{(r,\tau)}_j\cdot h_j\right) \\ p^{(\tau)}_i&=expit\left(\sum_{j=i-1-k}^{i-1}\alpha^{(p,\tau)}_j\cdot\tau_j+\sum_{j=i-1-k}^{i-1}\beta^{(p,\tau)}_j\cdot h_j\right) \end{align} $$ In the end you write down the likelihood and optimize to find $\alpha$-s and $\beta$-s (for some chosen look-back $k$). Clearly, a numerical solver will be necessary. I am not sure whether any package already does this; perhaps some packages aimed at modelling integer-valued sequences do. In my recent experience optax works well for these optimization tasks, but it is very much a general-purpose optimizer, so some manual work will be required.
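The preprocessing step described in the question, splitting the raw series into peak heights $h_i$ and inter-peak delays $\tau_i$, can be sketched as follows. This is a minimal Python sketch assuming, as in the question, a fixed threshold of 600; it treats each above-threshold sample as its own peak, so runs of consecutive high samples would need extra merging logic:

```python
import numpy as np

def peaks_and_gaps(y, threshold=600):
    """Split an intermittent series into peak heights h_i and inter-peak gaps tau_i."""
    arr = np.asarray(y)
    idx = np.flatnonzero(arr >= threshold)  # sample indices of the peaks
    heights = arr[idx]                      # h_i: height of each peak
    gaps = np.diff(idx)                     # tau_i: steps between successive peaks
    return heights, gaps

# Made-up example series with three peaks.
h, g = peaks_and_gaps([0, 700, 0, 0, 650, 0, 800])
```

The two returned arrays are exactly the series plotted in the question and modelled jointly in the answer above.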
{ "domain": "datascience.stackexchange", "id": 11953, "tags": "forecasting" }
"error: expected ‘)’ before ‘Project’" in /opt/ros/groovy/include/ros/publisher.h during catkin_make
Question: I go into those file and see neither syntax error or the string 'Project' Do anyone knows what happen? https://github.com/ros/ros_comm/blob/indigo-devel/clients/roscpp/include/ros/publication.h In file included from /opt/ros/groovy/include/ros/node_handle.h:32:0, from /opt/ros/groovy/include/ros/ros.h:45, from /home/ardrone/catkin_ws/src/droneproject/src/Main.cpp:4: /opt/ros/groovy/include/ros/publisher.h: In member function ‘void ros::Publisher::publish(const boost::shared_ptr<T>&) const’: /opt/ros/groovy/include/ros/publisher.h:69:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:69:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:69:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:75:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:75:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:75:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:79:7: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:79:7: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:79:7: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h: In member function ‘void ros::Publisher::publish(const M&) const’: /opt/ros/groovy/include/ros/publisher.h:102:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:102:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:102:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:108:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:108:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:108:11: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:112:7: error: expected ‘)’ before ‘Project’ 
/opt/ros/groovy/include/ros/publisher.h:112:7: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/publisher.h:112:7: error: expected ‘)’ before ‘Project’ In file included from /opt/ros/groovy/include/ros/parameter_adapter.h:32:0, from /opt/ros/groovy/include/ros/subscription_callback_helper.h:35, from /opt/ros/groovy/include/ros/subscriber.h:33, from /opt/ros/groovy/include/ros/node_handle.h:33, from /opt/ros/groovy/include/ros/ros.h:45, from /home/ardrone/catkin_ws/src/droneproject/src/Main.cpp:4: /opt/ros/groovy/include/ros/message_event.h: In member function ‘typename boost::disable_if<boost::is_void<M2>, boost::shared_ptr<T> >::type ros::MessageEvent<M>::copyMessageIfNecessary() const’: /opt/ros/groovy/include/ros/message_event.h:228:5: error: expected ‘)’ before ‘Project’ In file included from /opt/ros/groovy/include/ros/subscriber.h:33:0, from /opt/ros/groovy/include/ros/node_handle.h:33, from /opt/ros/groovy/include/ros/ros.h:45, from /home/ardrone/catkin_ws/src/droneproject/src/Main.cpp:4: /opt/ros/groovy/include/ros/subscription_callback_helper.h: In member function ‘virtual ros::VoidConstPtr ros::SubscriptionCallbackHelperT<P, Enabled>::deserialize(const ros::SubscriptionCallbackHelperDeserializeParams&)’: /opt/ros/groovy/include/ros/subscription_callback_helper.h:160:7: error: expected ‘)’ before ‘Project’ In file included from /opt/ros/groovy/include/ros/node_handle.h:35:0, from /opt/ros/groovy/include/ros/ros.h:45, from /home/ardrone/catkin_ws/src/droneproject/src/Main.cpp:4: /opt/ros/groovy/include/ros/service_client.h: In member function ‘bool ros::ServiceClient::call(MReq&, MRes&)’: /opt/ros/groovy/include/ros/service_client.h:66:7: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/service_client.h: In member function ‘void ros::ServiceClient::deserializeFailed(const std::exception&)’: /opt/ros/groovy/include/ros/service_client.h:180:5: error: expected ‘)’ before ‘Project’ In file included from 
/opt/ros/groovy/include/ros/node_handle.h:40:0, from /opt/ros/groovy/include/ros/ros.h:45, from /home/ardrone/catkin_ws/src/droneproject/src/Main.cpp:4: /opt/ros/groovy/include/ros/advertise_service_options.h: In member function ‘void ros::AdvertiseServiceOptions::init(const string&, const boost::function<bool(MReq&, MRes&)>&)’: /opt/ros/groovy/include/ros/advertise_service_options.h:62:7: error: expected ‘)’ before ‘Project’ /opt/ros/groovy/include/ros/advertise_service_options.h:67:7: error: expected ‘)’ before ‘Project’ droneproject/CMakeFiles/DroneProject.dir/build.make:54: recipe for target 'droneproject/CMakeFiles/DroneProject.dir/src/Main.o' failed make[2]: *** [droneproject/CMakeFiles/DroneProject.dir/src/Main.o] Error 1 CMakeFiles/Makefile2:476: recipe for target 'droneproject/CMakeFiles/DroneProject.dir/all' failed make[1]: *** [droneproject/CMakeFiles/DroneProject.dir/all] Error 2 Makefile:113: recipe for target 'all' failed make: *** [all] Error 2 Originally posted by Alvar on ROS Answers with karma: 1 on 2014-12-23 Post score: 0 Answer: This is most likely a syntax error in your own code. It would help if you'd add the relevant source code to your OP (Main.cpp?). Originally posted by gvdhoorn with karma: 86574 on 2014-12-24 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Alvar on 2014-12-24: I found out that it is the ROS_INFO, ROS_ASSERT macro causing the error though the reason is yet unknown Comment by gvdhoorn on 2014-12-24: Again, this is most likely a syntax error in your own code. We can only help you if you show us what you are doing exactly. The ROS_INFO and ROS_ASSERT macros have been used in hundreds, if not thousands of packages already. I'd be very surprised if they are the cause here.
{ "domain": "robotics.stackexchange", "id": 20422, "tags": "ros, ros-groovy" }
What is the spectrum of energies for the potential $ a^{x} $?
Question: Given a certain potential $ a^{x} $ with positive non-zero 'a', is there a discrete spectrum of energy states for the Schrödinger equation $$ \frac{- \hbar ^{2}}{2m} \frac{d^{2}}{dx^{2}}f(x)+a^{x}f(x)=E_{n}f(x)$$ Is there an example of this potential in physics? EDIT: what would happen if we put instead $ a^{|x|} $ so the potential is EVEN and tends to infinity as $ |x| \to \infty $ Answer: I) OP's potential $$V(x)~=~a^x~=~e^{bx}, \qquad b~:=~\ln a ~\in~ \mathbb{R}, $$ is the so-called Liouville potential. There are no (discrete) bound states. In scattering theory, an incoming wave at $x=-\infty$ gets reflected by the so-called "Liouville wall", and returns to $x=-\infty$. This potential is used in e.g. Liouville theory, which is important in dilaton gravity theories and string theory. II) On the other hand, even potentials $V(x)=V(-x)$ of e.g. the form $$V(x)~=~e^{b|x|}$$ or $$V(x)~=~\cosh(bx)$$ have discrete spectra. III) Finally, let us mention that double Liouville potentials $$V(x)~=~A_1e^{b_1 x} + A_2e^{b_2 x} $$ (and multiple Liouville potentials) have also been studied in the literature. See also Toda field theory.
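As a cross-check of the "no bound states" claim (the substitution below is my own addition, not part of the answer): with $V(x)=e^{bx}$ and the change of variable $z\propto e^{bx/2}$, the Schrödinger equation turns into the modified Bessel equation of imaginary order,

```latex
z ~=~ \frac{2\sqrt{2m}}{\hbar b}\, e^{bx/2}
\qquad\Longrightarrow\qquad
z^{2}\frac{d^{2}f}{dz^{2}} \;+\; z\frac{df}{dz}
\;-\;\Bigl(z^{2}-\frac{8mE}{\hbar^{2}b^{2}}\Bigr)f ~=~ 0,
\qquad
f(z)~\propto~ K_{i\kappa}(z), \quad \kappa~=~\frac{\sqrt{8mE}}{\hbar b}.
```

For every $E>0$ the Macdonald function $K_{i\kappa}$ decays under the exponential wall and oscillates as $x\to-\infty$, so every positive energy is an allowed scattering state and the spectrum is continuous; this is the reflecting "Liouville wall" of the answer.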
{ "domain": "physics.stackexchange", "id": 5281, "tags": "schroedinger-equation, semiclassical" }
How to set the visual and collision at once using sdf to use the same geometry and pose
Question: Hi I'd like to define the visual and collision appearance of a link at once. Is there a way to define variables? Or is there a way to say visual == collision? The example shows a link which has the same collision and visual dimensions; is it possible to write that shorter: <link name='Wall_01'> <pose>0 0 0 0 0 0</pose> <collision name='collision_01'> <pose>0 0.05 1 0 -0 0</pose> <geometry> <box> <size>5 0.1 2</size> </box> </geometry> </collision> <visual name='visual_01'> <pose>0 0.05 1 0 -0 0</pose> <geometry> <box> <size>5 0.1 2</size> </box> </geometry> <material> <script> <uri>file://media/materials/scripts/gazebo.material</uri> <name>Gazebo/Grey</name> </script> </material> </visual> </link> Thanks Originally posted by Markus Bader on Gazebo Answers with karma: 61 on 2013-10-07 Post score: 1 Answer: There have been discussions about this, but no pull requests. One proposal is to use xacro (gazebo issue #210), which could be used to insert blocks of repeated sdf elements. This would require xacro to be packaged independently from ROS to avoid a gazebo dependency on ROS. There were some alternative syntax proposals in gazebo issue #427. One of the concerns with expanding the syntax is overwhelming complexity. Feel free to chime in with either or both of these feature requests. Originally posted by scpeters with karma: 2861 on 2013-10-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Markus Bader on 2013-11-04: Using xacro was my final solution
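The xacro route the answer mentions would let the wall above be written once and expanded mechanically. A sketch (the macro name and parameters are mine, and the material block is omitted for brevity; xacro is preprocessed into plain SDF before Gazebo ever sees the file):

```xml
<sdf version="1.4" xmlns:xacro="http://www.ros.org/wiki/xacro">
  <!-- declare the shared geometry/pose once -->
  <xacro:macro name="box_link" params="name size pose">
    <link name="${name}">
      <collision name="${name}_collision">
        <pose>${pose}</pose>
        <geometry><box><size>${size}</size></box></geometry>
      </collision>
      <visual name="${name}_visual">
        <pose>${pose}</pose>
        <geometry><box><size>${size}</size></box></geometry>
      </visual>
    </link>
  </xacro:macro>

  <!-- expand it for the wall from the question -->
  <xacro:box_link name="Wall_01" size="5 0.1 2" pose="0 0.05 1 0 -0 0"/>
</sdf>
```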
{ "domain": "robotics.stackexchange", "id": 3483, "tags": "gazebo-model, collision, sdformat" }
Monte Carlo coin flip simulation
Question: I've been learning about Monte Carlo simulations on MIT's intro to programming class, and I'm trying to implement one that calculates the probability of flipping a coin heads side up 4 times in a row out of ten flips. Basically, I calculate if the current flip in a 10 flip session is equal to the prior flip, and if it is, I increment a counter. Once that counter has reached 3, I exit the loop even if I haven't done all 10 coin flips since subsequent flips have no bearing on the probability. Then I increment a counter counting the number of flip sessions that successfully had 4 consecutive heads in a row. At the end, I divide the number of successful sessions by the total number of trials. The simulation runs 10,000 trials. def simThrows(numFlips): consecSuccess = 0 ## number of trials where 4 heads were flipped consecutively coin = 0 ## to be assigned to a number between 0 and 1 numTrials = 10000 for i in range(numTrials): consecCount = 0 currentFlip = "" priorFlip = "" while consecCount <= 3: for flip in range(numFlips): coin = random.random() if coin < .5: currentFlip = "Heads" if currentFlip == priorFlip: consecCount += 1 priorFlip = "Heads" else: consecCount = 0 priorFlip = "Heads" elif coin > .5: currentFlip = "Tails" if currentFlip == priorFlip: consecCount = 0 priorFlip = "Tails" else: consecCount = 0 priorFlip = "Tails" break if consecCount >= 3: print("Success!") consecSuccess += 1 print("Probability of getting 4 or more heads in a row: " + str(consecSuccess/numTrials)) simThrows(10) The code seems to work (gives me consistent results each time that seem to line up to the analytic probability), but it's a bit verbose for such a simple task. Does anyone see where my code can get better? Answer: When you're writing code for educational purposes (or sometimes other purposes), verbose is good because it helps you understand what's really going on. So making the code shorter or snappier or whatever is not necessarily going to make it better. 
With that disclaimer out of the way: one of the most common ways to condense Python code is to use list comprehensions or generators instead of loops. A list comprehension is what you use when you're constructing a list element by element: in its simplest form, instead of this, the_list = [] for something in something_else: the_list.append(func(something)) you write this: the_list = [func(something) for something in something_else] If you're doing something else instead of creating a list, you can have Python create an object that generates the elements on demand, rather than actually creating a list out of them. An object of that sort is called a generator and you can create one like this: the_generator = (func(something) for something in something_else) You can omit the parentheses when the generator is passed to another function as an argument, though. the_sum = sum(func(something) for something in something_else) would be equivalent to, but better than, the_sum = 0 for something in something_else: the_sum += func(something) There are a lot of functions in Python that take iterables (list, generators, etc.) and "condense" them into one value using some sort of operation. You can also create your own, corresponding to whatever you would be doing to the result of the loop. You can convert most loops into generator expressions this way. So let's investigate how you could use a generator to represent the sequences of consecutive throws in each trial. You can create a generator that produces 10 random numbers easily: (random.random() for i in xrange(10)) (this is for Python 2.x; xrange was renamed to range for Python 3). Or you can create a generator that produces 10 random values which are either 0 or 1: (random.randint(0,1) for i in xrange(10)) That saves you from having to check each random number against 0.5.
In fact, you could produce a generator that produces 10 randomly chosen words, "Heads" or "Tails", like so: (random.choice(("Heads","Tails")) for i in xrange(10)) but it'll be easier to stick with numbers. (It's usually better to represent things with numbers or objects than with strings.) But perhaps you're thinking, "why are you telling me to make 10 numbers when I only have to check until I find a group of four consecutive heads?" For one thing, if you're just flipping 10 coins each time, it really doesn't matter because you'll make the computer flip at most 6, and on average 3, extra coins in each trial. That doesn't take very long - it'll extend the runtime of this part of your program by 50%, but we're talking 50% of a fraction of a second. It's not worth the effort to figure out how to do it for such a small number of flips. But if each trial had, say, a billion flips, then you would definitely want to stop early. Fortunately, a generator can do this for you! Since generators produce their elements only on demand, you can stop taking elements from it once you get what you want, and not waste much of any computation. I'll address this more later. Anyway, suppose we have our generator that produces 10 binary values 0 (tails) or 1 (heads). Is there a way to go through this and check to see whether there is a sequence of four or more heads? It turns out that just such a function is provided in itertools.groupby, which takes any iterable (list, generator, etc.) and groups consecutive identical elements. An example of its usage is for k, g in itertools.groupby([1,0,0,1,1,1,1,0,0,0]): print k, list(g) and this would print out something like 1 [1] 0 [0,0] 1 [1,1,1,1] 0 [0,0,0] So you can check for four or more consecutive heads by just looking at the length of the group and whether the key is heads or tails. for k, g in itertools.groupby(random.randint(0,1) for i in xrange(10)): if k and len(list(g)) >= 4: # got a run of 4 or more consecutive heads! # wait, what now?
(In Python, 1 is true and 0 is false in a boolean context, so if k is equivalent to if k == 1. Note also that the group g is itself an iterator, not a list, so it has to be materialized with list(g) before taking its length.) OK, what shall we do with our run of 4 or more consecutive heads? Well, you're trying to find the number of trials in which this occurs. So it probably makes sense to set a success flag if this happens. success = False for k, g in itertools.groupby(random.randint(0,1) for i in xrange(10)): if k and len(list(g)) >= 4: success = True break # this stops asking the generator for new values But wait! This is starting to look a lot like the kind of loop that can be converted to a generator expression, isn't it? The only catch is that we're not adding anything up or constructing a list. But there is another function, any, that will go through a generator until it finds an element which matches a condition, and that's just what this for loop does. So you could write this as success = any(k and len(list(g)) >= 4 for k, g in itertools.groupby(random.randint(0,1) for i in xrange(10))) Now finally, you'll want to count how many times this happens over, say, 10000 trials. So you might write that as something like this: successes = 0 for i in xrange(10000): if any(k and len(list(g)) >= 4 for k, g in itertools.groupby(random.randint(0,1) for i in xrange(10))): successes += 1 But of course, we can also convert this to a generator, since you're just adding up numbers: successes = sum(1 for i in xrange(10000) if any(k and len(list(g)) >= 4 for k, g in itertools.groupby(random.randint(0,1) for i in xrange(10)))) The generator produces a 1 each time it finds a group of 4 consecutive 1s among the 10 random numbers generated. The last thing you'd want to do is divide by the total number of trials. Well, actually, what you really want to do is calculate the average instead of the sum, and in some places you can find a function mean which is kind of like sum except that it calculates the mean instead of the total. You could use such a function if you had it.
But I don't know that one is in the Python standard library, so you can just do the division: probability = sum(1 for i in xrange(10000) if any(k and len(list(g)) >= 4 for k, g in itertools.groupby(random.randint(0,1) for i in xrange(10)))) / 10000 So the task you're trying to accomplish can actually be written in one line of Python. But it's a rather complicated line, and I wouldn't necessarily recommend actually doing this. Sometimes it's good to use a good old fashioned for loop to keep the code clear. More often, though, it's better to split your code up into modular pieces that are more useful than just what you're using them for.
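Putting the pieces together, here is a runnable Python 3 version of that final expression (range instead of xrange, and len(list(g)) because groupby yields each group as an iterator):

```python
import random
from itertools import groupby

def sim_throws(num_flips=10, num_trials=10000):
    """Estimate P(at least 4 consecutive heads in num_flips fair flips)."""
    successes = sum(
        1 for _ in range(num_trials)
        if any(k == 1 and len(list(g)) >= 4
               for k, g in groupby(random.randint(0, 1)
                                   for _ in range(num_flips)))
    )
    return successes / num_trials

random.seed(0)  # fixed seed so the estimate is reproducible
print(sim_throws())
```

The exact answer (by counting the 10-bit strings containing a run of four or more 1s) is 251/1024 ≈ 0.245, so a 10000-trial estimate should land within a couple of standard errors, roughly ±0.01, of that value.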
{ "domain": "codereview.stackexchange", "id": 12593, "tags": "python, simulation, statistics" }
Function and method debugging decorator - Part 2
Question: This is a follow-up to this question. I've refactored my previous debugging decorator, and added a couple new features, and changed a few things. Here's a complete list of things that have changed: There is only one decorator, Debug, and it now supports functions, and class methods. Each debug message is prefixed with [debug] to help distinguish it from normal output. The output now tells you what it's outputting, rather than just outputting unreadable data. The decorator will now output local variables names, along with argument and keyword argument names as well. I'm wondering the following: Is there a way to get the values of local variables in the function, or is that just not possible? Is there a shorter way to get the names of local variables than function.__code__.co_varnames? Is it a good idea to create an empty string, and then add to, and re-assign it to build an output string? Is this Python 3, and Python 2.7 compatible? How's my documentation? Is this code "pythonic"? debug.py from pprint import pformat from inspect import getargspec class Debug(object): """Decorator for debugging functions. This decorator is used to debug a function, or a class method. If this is applied to a normal function, it will print out the arguments of Keyword arguments: debug -- Whether or not you want to output debug info. Generally, a global DEBUG variable is passed in here. """ def __init__(self, debug=True): self.debug = debug def __format_debug_string(self, function, *args, **kwargs): """Return a formatted debug string. This is a small private helper function that will return a string value with certain debug information. Keyword arguments: function -- The function to debug. *args -- The normal arguments of the function. **kwargs -- The keyword arguments of the function. 
""" debug_string = "" debug_string += "[debug] {}\n".format(pformat(function)) debug_string += "[debug] Passed args: {}\n".format(pformat(args)) debug_string += "[debug] Passed kwargs: {}\n".format(pformat(kwargs)) debug_string += "[debug] Locals: {}".format(pformat(function.__code__.co_varnames)) return debug_string def __call__(self, function): def wrapper(*args, **kwargs): if self.debug: if getargspec(function).args[0] != "self": print(self.__format_debug_string(function, *args, **kwargs)) else: print(self.__format_debug_string(function, *args, **kwargs)) print("[debug] Parent attributes: {}".format(pformat(args[0].__dict__))) return function(*args, **kwargs) return wrapper Here are a few small, albeit unreadable tests, but it's good enough to get the point across: from debug import Debug @Debug(debug=True) def a(a, b): d = 10 return a * b print(a(10, 10)) class B(object): def __init__(self, a, b): self.a = a self.b = b @Debug(debug=True) def e(self, c): return self.a * self.b * c c = B(10, 10) print(c.e(10)) Here's the output of these tests: [debug] <function a at 0x1bf9d38> [debug] Passed args: (10, 10) [debug] Passed kwargs: {} [debug] Locals: ('a', 'b', 'd') 100 [debug] <function B.e at 0x1944ce8> [debug] Passed args: (<B object at 0x1bfc838>, 10) [debug] Passed kwargs: {} [debug] Locals: ('self', 'c') [debug] Parent attributes: {'a': 10, 'b': 10} 1000 Answer: You can improve the following: if getargspec(function).args[0] != "self": print(self.__format_debug_string(function, *args, **kwargs)) else: print(self.__format_debug_string(function, *args, **kwargs)) print("[debug] Parent attributes: {}".format(pformat(args[0].__dict__))) If the code is executed no matter the statement, and it always goes first, move it above the condition: (and don't forget to reverse the condition) print(self.__format_debug_string(function, *args, **kwargs)) if getargspec(function).args[0] == "self": print("[debug] Parent attributes: {}".format(pformat(args[0].__dict__))) As for 
this: debug_string = "" debug_string += "[debug] {}\n".format(pformat(function)) debug_string += "[debug] Passed args: {}\n".format(pformat(args)) debug_string += "[debug] Passed kwargs: {}\n".format(pformat(kwargs)) debug_string += "[debug] Locals: {}".format(pformat(function.__code__.co_varnames)) return debug_string You can remove the = "" entirely: debug_string = "[debug] {}\n".format(pformat(function)) debug_string += "[debug] Passed args: {}\n".format(pformat(args)) debug_string += "[debug] Passed kwargs: {}\n".format(pformat(kwargs)) return debug_string + "[debug] Locals: {}".format(pformat(function.__code__.co_varnames)) It may not look as visually stimulating, but, it's not as redundant. Is it a good idea to create an empty string, and then add to, and re-assign it to build an output string? If you were directly printing these then it would be a bad idea, but in this case, not really. However, I suppose you could move them to an object, or an array and return the result of a join function. You could even return it as an array, and print each [debug] result. Which would remove the need for the \ns at the end, and DRY up the [debug] at the beginning of the strings (put it in the loop, not altogether) You've got a few too long lines, by PEP8 standard: debug -- Whether or not you want to output debug info. Generally, a global DEBUG variable is passed in here. debug_string += "[debug] Locals: {}".format(pformat(function.__code__.co_varnames)) print(self.__format_debug_string(function, *args, **kwargs)) print("[debug] Parent attributes: {}".format(pformat(args[0].__dict__))) As for your documentation: function, it will print out the arguments of Keyword arguments: I'm a bit confused by that, grammatically. __call__ is a more complex function (in my mind, at least) than __format_debug_string, but it has no documentation. Is this Python 3, and Python 2.7 compatible? It ran fine when I tested it in Python 2.7.9 and 3.1.1
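To make the join suggestion concrete, here is one possible shape for the helper (a standalone sketch with my own function names, not the OP's exact API): collect the raw pieces first, then prefix and join once, which removes both the repeated "[debug] " literals and the trailing "\n"s.

```python
from pprint import pformat

def format_debug_lines(function, *args, **kwargs):
    # gather the un-prefixed pieces first...
    pieces = [
        pformat(function),
        "Passed args: {}".format(pformat(args)),
        "Passed kwargs: {}".format(pformat(kwargs)),
        "Locals: {}".format(pformat(function.__code__.co_varnames)),
    ]
    # ...then apply the prefix in one place and join once
    return "\n".join("[debug] " + piece for piece in pieces)

def sample(a, b):
    d = 10
    return a * b

print(format_debug_lines(sample, 10, 10))
```

Adding a new debug field now means appending one string to `pieces` rather than repeating the prefix and newline boilerplate.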
{ "domain": "codereview.stackexchange", "id": 15654, "tags": "python" }
Gas in movement
Question: Suppose I have a gas contained in a solid box and I drop it from a certain height, is the temperature of the gas going to change because of the velocity that it acquires during the fall? If so, by how much? Answer: The temperature of the gas will go up, but only after the box hits the ground and the falling gas swirls around and randomizes its velocity. The lost potential energy of the gas has to show up somewhere...
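To put a number on "by how much" (my own estimate, assuming a rigid, thermally insulating box of air and an assumed 10 m drop, with all of the kinetic energy thermalizing after impact):

```python
g = 9.81     # gravitational acceleration, m/s^2
h = 10.0     # assumed drop height, m
c_v = 718.0  # specific heat of air at constant volume, J/(kg*K)

# energy balance per unit mass of gas: the potential energy g*h
# ends up as internal energy c_v * dT once the swirling dies down
dT = g * h / c_v
print(dT)  # roughly 0.14 K
```

About a seventh of a kelvin for a ten-meter drop, which is why the effect goes unnoticed in everyday life.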
{ "domain": "physics.stackexchange", "id": 8249, "tags": "thermodynamics" }
The sum of multi-class prediction is not 1 using tensorflow and keras?
Question: I am studying how to do text classification with multiple labels using tensorflow. Let's say my model is like: model = tf.keras.Sequential([ tf.keras.layers.Embedding(vocab_size, 50, weights=[embedding_matrix], trainable=False), tf.keras.layers.LSTM(128), tf.keras.layers.Dense(4, activation='sigmoid')]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=tf.metrics.categorical_accuracy) I have 4 classes, and the prediction function gives the following results: pred=model.predict(X_test) pred array([[0.915674 , 0.4272042 , 0.69613266, 0.3520468 ], [0.915674 , 0.42720422, 0.69613266, 0.35204676], [0.915674 , 0.4272042 , 0.69613266, 0.3520468 ], [0.9156739 , 0.42720422, 0.69613266, 0.3520468 ], ...... You can see that every sample has 4 prediction values. But their sum is not 1, which I do not understand. Did I do something wrong, or how should I interpret this? Answer: To get the summation in the last layer to be one, you need to use Softmax activation, not Sigmoid. Sigmoid is often used in the special case of a binary classification problem.
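A stdlib-only sketch of why the two activations behave differently (the logits here are made up; in the Keras model above the change would be activation='softmax' on the final Dense layer): sigmoid squashes each logit independently, while softmax normalizes across the whole vector.

```python
import math

def softmax(zs):
    m = max(zs)  # shift by the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

logits = [2.0, -0.3, 0.8, -0.6]          # hypothetical last-layer logits
print(sum(sigmoid(z) for z in logits))   # per-class sigmoids: sum is arbitrary
print(sum(softmax(logits)))              # softmax: sums to 1 by construction
```

Independent sigmoids are the right choice only for multi-label problems (each class decided on its own, with a binary-crossentropy-style loss); for mutually exclusive classes with categorical_crossentropy, softmax is the matching activation.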
{ "domain": "datascience.stackexchange", "id": 9290, "tags": "classification, keras, tensorflow, predict, sigmoid" }
Is it possible for a Turing machine to be able to reduce a grammar and tell where it fits in chomsky hierarchy?
Question: For example: This looks like a context free grammar: → → | → | → | c but it can be reduced to this regular language: → | → | → | c I want to know if it is possible or not and why Answer: It is not possible: it is undecidable whether a context-free grammar describes a regular language. For a proof, see e.g. Undecidable Problems for Context-free Grammars by Hendrik-Jan Hoogeboom.
{ "domain": "cs.stackexchange", "id": 11722, "tags": "turing-machines, formal-grammars, chomsky-hierarchy" }
How can I leverage my experience with the JVM and Android to do some hobby-level building and controlling robots at home?
Question: I'm looking for a product for home hobby-level robotics. Something that has modular robot hardware allowing me to design the robot itself, but lets me write the logic in a JVM or Android language. I have searched several times so I'm not expecting a big list, but it could be designed any of three ways that I can think of: connects to a configurable API endpoint, OR conversely, publishes an HTTP or RPC API for a controlling client to connect to, or supports running a jar or apk on the robot itself, or plugs into and carries around an Android device or small computer with some standard language-agnostic way of communicating with control APIs locally. I'd like to dabble in robotics. The thing I know best is the JVM. For better or worse, I've worked almost exclusively with the JVM for over 10 years and know several JVM-based languages. Call me specialized, but I'm not investing full work days in developing a new skill, and if I can't lean on my familiarity with known tech, I would anticipate dipping a toe into robotics to be more frustrating than fun. I know the JVM is heavy for the low-cost, bare-metal architecture choices that are typically made in the field of robotics; let's not talk about that here. Extra points if a kit comes with a Kotlin library. Answer: Have you taken a look at LeJOS? It's a port of the Java VM and SDK to the various Lego MINDSTORMS robot kits. The Lego kits themselves are quite capable as hobbyist robotics kits go, I bought the NXT version years ago and had a lot of fun with it. The newest EV3 kit has a powerful ARM CPU and an SD card reader for loading software, it looks like a good option if you don't want to spend much time fiddling around with firmware installation.
{ "domain": "robotics.stackexchange", "id": 2203, "tags": "software, wifi, platform" }
Simplest "physical" photon generating Feynman diagram
Question: I'm halfway through the excellent "Student Friendly Quantum Field Theory" and I read that single vertex Feynman diagrams in QED are "not physical" because their corresponding amplitudes are zero. For example the diagram $$ e_{\mathbf{p_1}}^- + e_{\mathbf{p_2}}^+ \to \gamma_{\mathbf{k_1}} $$ has a probability amplitude that (when you calculate it) includes a factor of $\delta^{(4)}(k_1 - p_1 - p_2)$, where the boldface p's and k are 3-momenta and the normal typeface are 4-momenta. The argument goes that since the photon is massless we must have $k_{1\mu}k_1^{\mu} = 0$, but if you work out $(p_1+p_2)_{\mu}(p_1 + p_2)^{\mu}$ it turns out non-zero, therefore we can't find a real photon momentum that makes the Dirac delta, and consequently the amplitude $\langle\gamma_{\mathbf{k_1}}\lvert e_{\mathbf{p_1}}^- e_{\mathbf{p_2}}^+\rangle$, nonzero. A similar reasoning shows that every other single vertex Feynman diagram (e.g. $e_{\mathbf{p_1}}^-\to \gamma_{\mathbf{k_1}} + e_{\mathbf{p_2}}^-$) is also "non-physical". So my questions are: If these diagrams are non-physical, what's the simplest diagram that generates a photon that is physical? Or, "where do all the photons come from?" Are there any interpretations (for the fact that the amplitudes for single vertex diagrams are zero) other than that they are "non-physical"? For example, perhaps photons with $k^2 \ne 0$ are possible, but live too short a time to be observed. Please be gentle, I'm not actually a student, just an enthusiast and this is my lockdown reading! Answer: The reason you state for why the amplitude $e^+e^-\to\gamma$ vanishes is correct. But I would like to simplify it a bit. Mainly because it is not a consequence of QFT but of Special Relativity. Suppose you are in the center-of-mass frame of the electron-positron pair. Momentum conservation tells you that in this frame the resulting particle will be produced at rest and will have a mass of $M^2 = (p_1+p_2)^2 = (E_1+E_2)^2>4m_e^2$.
The photon is never at rest and its mass squared is zero. So there is no way to conserve momentum. Note that there simply cannot exist a photon with $k^2\neq 0$. Such a photon would not travel at the speed of light, so it's inherently a contradiction. Also note that, by the same argument as above done for the opposite process, we can show that photons cannot decay. And the same conclusions hold for all particles with zero mass. The simplest process that gives photons is $e^+e^-\to2\gamma$. The presence of two photons makes it kinematically viable because we can have a nonzero invariant mass in the center-of-mass frame of the two.
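Spelling out the answer's center-of-mass argument in formulas (units with $c=1$, mostly-minus metric):

```latex
\mathbf{p}_2 = -\mathbf{p}_1
\quad\Longrightarrow\quad
(p_1+p_2)^2 \;=\; (E_1+E_2)^2 - |\mathbf{p}_1+\mathbf{p}_2|^2
\;=\; (E_1+E_2)^2 \;\ge\; (2m_e)^2 \;>\; 0,
\qquad \text{while} \qquad k_1^2 \;=\; 0 .
```

Hence the support of $\delta^{(4)}(k_1-p_1-p_2)$ is empty for any real photon momentum, which is exactly why the single-vertex amplitude vanishes.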
{ "domain": "physics.stackexchange", "id": 67005, "tags": "quantum-field-theory, feynman-diagrams" }
Multiple matching in Maximum Flow problem?
Question: I'm sorry if this has already been asked before, but I couldn't find any similar questions. The situation is as such: Assume there are x restaurants, each with a capacity q, and y people, each of which can only eat from one restaurant. Each restaurant can only serve the people within radius r of its location. Try and find the maximum number of people that can be served by a restaurant and what restaurant each one must eat from. My approach was to split it up into something similar to the bipartite matching problem. Have the x restaurants on one side and y people on the other. Then, I'd have a start node which connects to every restaurant using a directed edge of capacity q. Then, I'd connect all the y people with a directed edge of capacity 1 to a sink node, t. Lastly, I'd connect every restaurant to all the people within radius r using a directed edge of capacity 1. Running the Ford-Fulkerson algorithm on this, however, might result in one restaurant serving fewer people than it potentially could (for example, let's say we have two restaurants each with capacity 2, 4 people, and the first restaurant connects to the first three people and the second connects to the last two; then, if the path from the first restaurant to the third is augmented in the algorithm, we can only serve 3 people instead of an optimum of 4). My workaround to this is to maybe augment an s-t path with the least number of incoming edges to the person (so a situation like the above can be avoided), but this clearly slows down the efficiency of the algorithm significantly, as well as making it much more difficult to provide an argument saying it's optimal. Is there any literature concerning problems like these, or any suggestions on how I can improve my approach? Thanks! Answer: Your approach works. The situation you're worried about can't happen. It appears you haven't quite fully grasped how the Ford-Fulkerson algorithm works yet, so focus on getting more familiar with it.
It always finds the max-flow, and the max-flow always solves your problem. The key misunderstanding is where you wrote: Running the Ford-Fulkerson algorithm on this, however, might result in one restaurant serving less than they potentially could This is false. That actually can't happen. Work through the proof of correctness for the Ford-Fulkerson algorithm to understand why not: if there's a larger flow, then there will always be an augmenting path you can find to increase the size of the flow. This was presumably an exercise to help you get acquainted with network flow algorithms, and it worked: it uncovered a gap in your understanding. This is great -- it means you have something yet to learn about how the algorithms work. What a great opportunity! Now you get to spend some time communing with a textbook learning this lovely subject. Have fun!
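To see the claim in action, here is a minimal Edmonds-Karp (Ford-Fulkerson with BFS path selection) run on the exact example from the question: two restaurants of capacity 2, where r1 sees the first three people and r2 sees the last two. The node names and the dict-of-dicts residual-graph encoding are mine:

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp on a residual-capacity dict of dicts (mutated in place)."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow          # no augmenting path left: flow is maximum
        # collect the path edges and their bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        # augment: shrink forward capacities, grow reverse (residual) ones
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push
        flow += push

cap = defaultdict(lambda: defaultdict(int))
cap['s']['r1'] = cap['s']['r2'] = 2                       # capacities q = 2
for r, people in (('r1', ('p1', 'p2', 'p3')), ('r2', ('p3', 'p4'))):
    for p in people:
        cap[r][p] = 1                                     # within radius r
for p in ('p1', 'p2', 'p3', 'p4'):
    cap[p]['t'] = 1                                       # one meal per person
print(max_flow(cap, 's', 't'))
```

Even if an early augmenting path sends one of r1's units to p3, the residual edge p3 → r1 lets a later path reroute that unit, so all 4 people end up served; that rerouting is precisely the mechanism behind the proof of correctness the answer points to.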
{ "domain": "cs.stackexchange", "id": 6495, "tags": "optimization, bipartite-matching, max-flow, ford-fulkerson" }
Dimensionless Constants in Physics
Question: Forgive me if this topic is too much in the realm of philosophy. John Baez has an interesting perspective on the relative importance of dimensionless constants, which he calls fundamental like alpha, versus dimensioned constants like $G$ or $c$ [ http://math.ucr.edu/home/baez/constants.html ]. What is the relative importance or significance of one class versus the other, and is this an area where physicists have real concerns or expend significant research effort? Answer: first of all, the question you are asking is very important and you may master it completely. Dimensionful constants are those that have units - like $c, \hbar, G$, or even $k_{\rm Boltzmann}$ or $\epsilon_0$ in SI. The units - such as meter; kilogram; second; Ampere; kelvin - have been chosen partially arbitrarily. They're results of random cultural accidents in the history of mankind. A second was originally chosen as 1/86,400 of a solar day, one meter as 1/40,000,000 of the average meridian, one kilogram as the mass of 1/1,000 cubic meters (liter) of water or later the mass of a randomly chosen prototype, one Ampere so that $4\pi \epsilon_0 c^2$ is a simple power of 10 in SI units, one Kelvin as 1/100 of the difference between the melting and boiling points of water.
Exactly the products or ratios of powers of fundamental constants that are dimensionless are those that don't have any units, by definition, which means that they are independent of all the random cultural choices of the units. So all civilizations in the Universe - despite the absence of any interactions between them in the past - will agree about the numerical value of the proton-electron mass ratio - which is about $6\pi^5=1836.15$ (the formula is just a teaser I noticed when I was 10!) - and about the fine-structure constant, $\alpha\sim 1/137.036$, and so on. In the Standard Model of particle physics, there are about 19 such dimensionless parameters that "really" determine the character of physics; all other constants such as $\hbar,c,G,k_{\rm Boltzmann}, \epsilon_0$ depend on the choice of units, and the number of independent units (meter, kilogram, second, Ampere, Kelvin) is actually exactly large enough that all those constants, $\hbar,c,G,k_{\rm Boltzmann},\epsilon_0$, may be set equal to one which simplifies all fundamental equations in physics where these fundamental constants appear frequently. By changing the value of $c$, one only changes social conventions (what the units mean), not the laws of physics. The units where all these constants are numerically equal to 1 are called the Planck units or natural units, and Max Planck understood that this was the most natural choice already 100 years ago. 
$c=1$ is being set in any "mature" analysis that involves special relativity; $\hbar=1$ is used everywhere in "adult" quantum mechanics; $G=1$ or $8\pi G=1$ is sometimes used in the research of gravity; $k_{\rm Boltzmann}=1$ is used whenever thermal phenomena are studied microscopically, at a professional level; $4\pi\epsilon_0$ is just an annoying factor that may be set to one (and in Gaussian 19th century units, such things are actually set to one, with a different treatment of the $4\pi$ factor); instead of one mole in chemistry, physicists (researchers in a more fundamental discipline) simply count the molecules or atoms and they know that a mole is just a package of $6.022\times 10^{23}$ atoms or molecules. The 19 (or 20?) actual dimensionless parameters of the Standard Model may be classified as the three fine-structure constants $g_1,g_2,g_3$ of the $U(1)\times SU(2)\times SU(3)$ gauge group; Higgs vacuum expectation value divided by the Planck mass (the only thing that brings a mass scale, and this mass scale only distinguishes different theories once we also take gravity into account); the Yukawa couplings with the Higgs that determine the quarks and fermion masses and their mixing. One should also consider the strong CP-angle of QCD and a few others. Once you choose a modified Standard Model that appreciates that the neutrinos are massive and oscillate, 19 is lifted to about 30. New physics of course inflates the number. SUSY described by soft SUSY breaking has about 105 parameters in the minimal model. The original 19 parameters of the Standard Model may be expressed in terms of more "fundamental" parameters. For example, $\alpha$ of electromagnetism is not terribly fundamental in high-energy physics because electromagnetism and weak interactions get unified at higher energies, so it's more natural to calculate $\alpha$ from $g_1,g_2$ of the $U(1)\times SU(2)$ gauge group. 
Also, these couplings $g_1,g_2$ and $g_3$ run - depend on the energy scale approximately logarithmically. The values such as $1/137$ for the fine-structure constant are the low-energy values, but the high-energy values are actually more fundamental because the fundamental laws of physics are those that describe very short-distance physics while long-distance (low-energy) physics is derived from that. I mentioned that the number of dimensionless parameters increases if you add new physics such as SUSY with soft breaking. However, more complete, unifying theories - such as grand unified theories and especially string theory - also imply various relations between the previously independent constants, so they reduce the number of independent dimensionless parameters of the Universe. Grand unified theories basically set $g_1=g_2=g_3$ (with the right factor of $\sqrt{3/5}$ added to $g_1$) at their characteristic "GUT" energy scale; they may also relate certain Yukawa couplings. String theory is perfectionist in this job. In principle, all dimensionless continuous constants may be calculated from any stabilized string vacuum - so all continuous uncertainty may be removed by string theory; one may actually prove that it is the case. There is nothing to continuously adjust in string theory. However, string theory comes with a large discrete class of stabilized vacua - which is at most countable and possibly finite but large. Still, if there are $10^{500}$ stabilized semi-realistic stringy vacua, there are only 500 digits to adjust (and then you may predict everything with any accuracy, in principle) - while the Standard Model with its 19 continuous parameters has 19 times infinity of digits to adjust according to experiments.
{ "domain": "physics.stackexchange", "id": 15536, "tags": "dimensional-analysis, physical-constants" }
Create distortion from basic linear (and non-linear if necessary) DSP elements
Question: I'm studying mechatronics and I'm interested in DSP basics. My lecturer said that there are four basic linear DSP elements: Adder (and other mathematical operations) Amplification (shown on diagrams as a triangle) Delay ($z^{-1}$) Junction I know from electronics that I can create distortion using, for example, op-amps and clipping circuits - and I can do it practically. But is there a way to create distortion using the four DSP elements theoretically? I know distortion is non-linear - so if not these four elements, what kind of elements from DSP theory should I use? Answer: Using those four basic elements will allow you to implement linear systems, which can change the magnitude and phase of the input signal, but which will not add the harmonics that are expected from a distortion effect. In order to create distortion in that sense (i.e., non-linear distortion) you will need some non-linear element. The most basic implementation would use a (soft) clipping function, possibly followed by some linear filter to shape the spectrum. A very accessible reference is the DAFX (Digital Audio Effects) book edited by Udo Zölzer. It has a chapter on non-linear processing, including distortion and overdrive. Note that I found an error in one of the equations for producing distortion (Eq. (5.9)). You can find the corrected formula in this answer to a related question.
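To illustrate the answer's point, here is a minimal sketch (NumPy, with an arbitrary drive level) showing that a memoryless non-linearity such as tanh soft clipping creates harmonics that no combination of the four linear elements can produce:

```python
import numpy as np

N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 8 * n / N)  # pure tone: exactly 8 cycles in the window

y = np.tanh(3.0 * x)               # memoryless soft clipper (the non-linear element)

X = np.abs(np.fft.rfft(x))         # input spectrum: a single line at bin 8
Y = np.abs(np.fft.rfft(y))         # output spectrum: odd harmonics appear
```

The input spectrum has a single line at bin 8; the clipped output picks up energy at bins 24, 40, ... (odd harmonics only, since tanh is an odd function). A linear filter after the clipper would then shape this spectrum, as the answer suggests.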
{ "domain": "dsp.stackexchange", "id": 4891, "tags": "linear-systems, non-linear, distortion" }
Modelling an inelastic, rough, constrained collision
Question: Understood situations: a) Inelastic, rough collision of free spheres In such a collision, two coefficients are used. The coefficient of restitution in the normal direction (the ratio $c_N$ of relative normal velocities before and after, between $0$ and $1$) determines elasticity, with $1$ being perfectly elastic and $0$ perfectly inelastic. The coefficient of restitution in the tangential direction ($c_T$, the same for tangential velocities) determines smoothness, with $1$ being perfectly smooth and $-1$ perfectly rough. Both may be somewhat dependent on the impact parameters (and not intrinsic to the spheres), but they (together with momentum and angular momentum conservation laws) define the collision result. A further discussion can be found in this PDF from page 14 onwards. Important source for understanding perfect roughness and the relation of roughness to energy conservation. b) Elastic, smooth collision with constraints One of the spheres is constrained to move along a circle and simplified to a point mass. The other sphere is free and impacts it at an oblique angle. We have three degrees of freedom and three conservation laws (two for angular momentum and one for energy). From these, we can determine the post-collision state of the system. The problem: Generalizing situation b) to inelasticity and roughness cannot be done through coefficients of restitution. A perfectly smooth collision need not always have $c_T = 1$ and a perfectly elastic collision need not have $c_N = -1$. A simple counterexample is a perfectly elastic and perfectly smooth collision of a very light orbiting point mass and a stationary free sphere. The point mass will bounce back with little change to the speed of the free sphere, effectively a $c_T$ of $-1$, even though the collision is smooth. How can such a collision be characterised instead? Insights: There are two effects at hand: The normal bounce and the tangent bounce. 
If we disregard one, the other behaves in line with coefficients of restitution (i.e. a COR of $-1$ is a perfect bounce and a COR of $1$ is nothing changing). In the general situation, we have four unknowns: The two speeds of the free marble (the most convenient coordinate system here being the speed in the normal direction and the speed in the tangent direction), the angular velocity of the point, and the rotational angular velocity of the marble. Angular momentum around the center of orbit (of the constrained point) is conserved and all forces act through the contact point, so the angular momentum of just the free marble around the contact point is also conserved. This gives us two equations. There are three known solutions that conserve energy: A "collision" where all parameters stay the same (total pierce), a perfectly smooth collision, and a perfectly rough collision where the normal velocity of the free marble stays the same (just the tangent component of the collision). By analogy with nonconstrained collisions, there should be a perfectly rough, perfectly elastic collision that conserves energy, but I cannot find it. Simply finding the differences of pre- and post-collision speeds and rotation speeds for both $c_T = -1$ and $c_N = -1$ and adding them together with the original values (adding the impulses) leads to the total energy changing (in either direction depending on the setup). $c_T$ and $c_N$ are also the subject of a Q&A I posted with a simpler setup and more details on the results. Answer: Sigh, I wrote this answer when there were more details in the question... I agree with JAlex that the simplest way to account for conservation of momentum is to use a single impulse $J$ to represent the collision.
$$\hat{v_t} = \frac{J_t}{m_2} + v_t$$ $$\hat{v_n} = \frac{J_n}{m_2} + v_n$$ $$\hat{\omega_S} = -\frac{J_t}{L} + \omega_S$$ $$\hat{\omega} = \frac{J \bullet r_1}{m_1(r_1 \bullet r_1)} + \omega$$ As a slight change in notation, $L$ is the angular inertia of the sphere. To ensure that the collision doesn't result in penetration: $$\hat{v_n} \leq -\hat{\omega} \, \sin(\alpha)$$ This results in a deformed half plane (topologically) constraint on the impulse. To ensure that energy is conserved: $$m_2(\hat{v_t}^2+\hat{v_n}^2)+L\hat{\omega_S}^2 + m_1 r_1^2 \hat{\omega}^2 \leq m_2(v_t^2+v_n^2)+L\,\omega_S^2 + m_1 r_1^2 \omega^2 $$ This is a deformed disk constraint. The intersection of these two constraints defines the area of valid collisions. All the stuff about slip velocity and friction is really just to get a better estimate of what the tangential impulse will be, but unusual internal geometry / structure or external constraints can violate those rules. In particular, because of the constraint on the point, you can have the relative slip-velocity double, or double in the opposite direction, even with no tangential impulse (frictionless). If you want to model very high friction but elastic (aka super ball) collisions, then you need to define the deformation model you're going to use in order to get a single defined answer, rather than a valid range. If you want to assume that the slip/sliding velocity will reduce from a non-zero value to zero during the collision then it doesn't make sense to try to conserve energy, as there must have been rubbing to reduce that slip velocity which must result in frictional losses. Deformation Modeling One possible deformation model is fully elastic: $$ F_t = -k_t \, x_t $$ $$ F_n = -k_n \, x_n $$ Where $x$ is the contact displacement, and $k$ represents the stiffness of the material.
Geometric constraints: $$\frac{d \, x_n}{d\,t} = v_n + \omega\, r_1 \, \sin(\alpha) $$ $$\frac{d \, x_t}{d\,t} = v_t - \omega\, r_1 \, \cos(\alpha) - r_2 \, \omega_S $$ Equations of motion: $$\frac{d \, v_t}{d\,t} = \frac{F_t}{m_2} $$ $$\frac{d \, v_n}{d\,t} = \frac{F_n}{m_2} $$ $$\frac{d \, \omega_S}{d\,t} = \frac{F_t}{L} $$ $$\frac{d \, \omega}{d \, t} = \frac{-F_n \sin(\alpha)-F_t \cos(\alpha)}{m_1 \, r_1}$$ Then if we initialize $x$ to zero we can integrate until $x_n$ is once again zero, and at that point we'll have our new velocities. Note that even though there's no damping in these equations it still doesn't guarantee that there isn't energy lost. If $x_t$ doesn't reach zero at the same time that $x_n$ does then there will be energy "lost" that's stored in the tangential stiffness when the collision ends. This same integration could be done with damping terms added to the force equations to model less elastic collisions.
{ "domain": "physics.stackexchange", "id": 70515, "tags": "newtonian-mechanics, conservation-laws, collision" }
if P = NP, does it mean that P = NP = NP-complete?
Question: Let's assume P = NP, so all problems in NP are decidable in polynomial time. Therefore I can solve all problems in NP in polynomial time, claiming P = NP = NPC. But then, how come $\Sigma^*$ belongs to P = NPC, given that I can't reduce the even-length string language (as an example of a language that is not trivial) to $\Sigma^*$? One of the steps I made must be incorrect (I sure hope so) - please help me find it. Answer: No. Even assuming $\mathsf{P}=\mathsf{NP}$, it is not true that all the languages therein are $\mathsf{NP}$-complete. An example of a language that is in $\mathsf{P}$ but is not $\mathsf{NP}$-complete (regardless of the $\mathsf{P}$ vs $\mathsf{NP}$ matter) is $\Sigma^*$, as you noticed. Indeed, there is no way to reduce any language $L \in \mathsf{NP} \setminus \{ \Sigma^* \}$ to $\Sigma^*$ since, given $x \not\in L$, there is no function $f$ such that $f(x) \not\in \Sigma^*$. A similar argument shows that $\emptyset$ is not $\mathsf{NP}$-complete. However, if $\mathsf{P}=\mathsf{NP}$, then it is true that all languages in $\mathsf{NP} \setminus \{\emptyset, \Sigma^*\}$ are $\mathsf{NP}$-complete. To see this, let $A \in \mathsf{NP} \setminus \{\emptyset, \Sigma^*\}$ and pick any $L \in \mathsf{NP}$. Since there are $y,z$ such that $y \in A$ and $z \not\in A$, a valid Karp reduction from $L$ to $A$ is the following: $$f(x) = \begin{cases} y & \text{if } x \in L \\ z & \text{if } x \not\in L \end{cases}. $$ Notice that $f$ can be computed in polynomial time since $L \in \mathsf{NP} = \mathsf{P}$ by hypothesis (and hence it is possible to check whether $x \in L$ in polynomial time).
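The reduction in the answer is simple enough to sketch directly. Here decide_L stands in for the hypothetical polynomial-time decider that $\mathsf{P}=\mathsf{NP}$ would grant us, and the concrete languages below are toy stand-ins chosen only to make the code runnable:

```python
def karp_reduction(decide_L, y, z):
    """Build the reduction f from L to A, given a poly-time decider for L
    and fixed witnesses y in A, z not in A."""
    def f(x):
        return y if decide_L(x) else z
    return f

# toy instances: L = even-length strings, A = strings containing the letter 'a'
f = karp_reduction(lambda s: len(s) % 2 == 0, "a", "b")
```

f maps members of L to the fixed member y of A and non-members to z, so x is in L iff f(x) is in A - a valid Karp reduction precisely because deciding L is (by hypothesis) polynomial.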
{ "domain": "cs.stackexchange", "id": 21249, "tags": "complexity-theory, np-complete, p-vs-np, check-my-answer" }
Can I research in web technologies with an academic approach?
Question: I'm an undergraduate computer engineering student. I know that I would like to become a researcher in my field in the future. I also work as a junior web developer at a small start-up, and I think I really like the web and web technologies. Can I do research in this field, and are there problems about web technologies that can be solved by academic research? If so, where should I start? What should I study in order to be prepared to work on these problems? How can I find open problems about the web? Answer: Being a web developer, I am sure you realize that most large-scale websites contain at least one of: databases, high-availability servers, front-end design, an algorithm of some sort, etc. Each of these areas has an active research community. Database researchers mostly study data structures and algorithms that speed up database operations. UI/UX researchers study human-computer interaction, to find better designs and UIs for users to consume. The algorithms community studies the characterisation of various algorithmic problems and finds fast algorithms for them. Distributed systems researchers study how to build high-functioning and available servers. This is of course in no way a summary, but it should give you an idea. So to answer your question, web research is a very generic field that involves and combines many different disciplines. You'll have to narrow down exactly what you're interested in within web development before participating in the respective research communities.
{ "domain": "cstheory.stackexchange", "id": 5329, "tags": "reference-request" }
Does all slowed light become circularly polarized?
Question: When un-polarized light goes through a quarter-wave plate, one of the polarization components is retarded by a 90° phase, resulting in circularly polarized light. Does it mean that all circularly polarized light travels at less than the speed of light in vacuum? Answer: Imagine you want to describe a linearly polarized plane wave propagating in the $z$ direction. $$u(z,t)=\vec p e^{i(kz-\omega t)}$$, where $\vec p$ is a polarization vector (Jones vector). Using this formalism we can describe a quarter-wave plate using the following matrix ($i$ stands for a retardation of 90°). $$W=\left(\begin{array}{cc} 1 & 0\\ 0 & i \end{array}\right)$$ Let's have a look at a linearly polarized state. $$\vec p=\frac{1}{\sqrt 2}\left(\begin{array}{c}a\\b\end{array}\right)$$ The quarter-wave plate now acts on this state. $$\vec p'=W\vec p=\frac{1}{\sqrt 2}\left(\begin{array}{c}a\\ib\end{array}\right)$$ This is circularly polarized light. It can be rewritten in the following way. $$\vec p'=\frac{1}{\sqrt 2}\left(\begin{array}{c}a\\0\end{array}\right)+\frac{1}{\sqrt 2}\left(\begin{array}{c}0\\b\end{array}\right)e^{i\frac{\pi}{2}}$$ $$u'(z,t)=\vec p' e^{i(kz-\omega t)}$$ So it turns out that the circularly polarized light is just a superposition of two linearly polarized states (as you mentioned already) with a phase-shift of 90°. There is no mechanism involved changing the speed of light in the vacuum whatsoever. Hence the answer is: No, circularly polarized light propagates at the same speed in vacuum as un-polarized light, namely at the speed of light.
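The Jones-calculus computation in the answer can be checked numerically in a few lines (NumPy; the 45° input polarization is chosen for illustration):

```python
import numpy as np

W = np.array([[1, 0], [0, 1j]])    # quarter-wave plate: retard one component by 90 deg
p = np.array([1, 1]) / np.sqrt(2)  # linear polarization at 45 degrees

p2 = W @ p                         # equal amplitudes, 90 deg relative phase: circular

dphi = np.angle(p2[1]) - np.angle(p2[0])
```

The output components keep equal magnitudes and acquire a π/2 relative phase - circular polarization - while the norm of the Jones vector is unchanged; nothing about the propagation speed enters anywhere.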
{ "domain": "physics.stackexchange", "id": 33816, "tags": "electromagnetic-radiation, speed-of-light, polarization" }
Please help me with this doubt from waves
Question: What is phase difference and how do I visualize it? I am able to understand it pretty well for sinusoidal waves, but please tell me what it is for other types of waves, like plane waves, spherical waves, etc. Answer: Edit: I think you'll find all the details you need at this question. As Asher commented, when a wave is described as sinusoidal, or triangular, or square, that's its amplitude profile. When a wave is described as plane or spherical, that's the spatial profile perpendicular to the direction of propagation. For example, a plane wave of sinusoidal amplitude will have the same phase, i.e. amplitude, at all $(x,y)$ coordinates for a given $z$ (direction of propagation). If the amplitude is a maximum at $z_0$, then the amplitude will be zero at $z_1$, which is one-quarter wavelength away. Similarly, for a spherical wave, the phase is the same at all points on a spherical surface (common radius from the origin).
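For the sinusoidal case the phase difference can also be read off numerically - a small sketch (values arbitrary) extracting the relative phase of two 50 Hz tones from their DFT coefficients:

```python
import numpy as np

fs, f, N = 1000, 50, 200                   # 0.2 s window: exactly 10 cycles of 50 Hz
t = np.arange(N) / fs
a = np.sin(2 * np.pi * f * t)
b = np.sin(2 * np.pi * f * t + np.pi / 3)  # b leads a by 60 degrees

k = f * N // fs                            # DFT bin holding the 50 Hz line (k = 10)
dphi = np.angle(np.fft.rfft(b)[k]) - np.angle(np.fft.rfft(a)[k])
```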
{ "domain": "physics.stackexchange", "id": 28054, "tags": "waves, acoustics" }
Maths of chords
Question: The following question was migrated from the Math Stack Exchange site, as the math community suggested I ask physicists: I have a few naive questions on music theory. Let us assume that I have two pitches A and C with certain frequencies. Then the corresponding sound waves are pure sinusoidal waves. But what happens if I sound the chord A-C? To get the resulting wave, should I simply sum up mathematically the corresponding sine waves? Conversely, if someone gives me a wave, how can I tell that it is, for instance, an A-C chord? I am looking for an oversimplified model to see the essentials of the theory. Any remarks welcomed. Answer: What you suggest is essentially correct. If you have a standing wave (say, a vibrating piano string) vibrating at 440 Hz, you will create the note A (above middle C). You could write the displacement of the string as $$y(x,t)=y_0 \cos(\omega t-kx)$$ where $x$ is the position along the string, $\omega=2\pi f$ (for frequency $f$) and $k=2\pi /\lambda$ is the wave number. This string will produce a sound equivalent to a single-frequency tone, the $f=440$ Hz A. You also have the relationship $\lambda f=v$ for $v$ the speed of the wave on the string. If you had a second string, vibrating at whatever the frequency of C is, you would have the same expression for the second string with the correct frequency of C. The result at your ear (position $\vec{r}$) would be something like a pressure (sound) wave that obeys the principle of superposition, $$P(\vec{r},t)=P_A \cos(\omega_A t +\vec{k}_A\cdot \vec{r})+P_C \cos(\omega_C t +\vec{k}_C\cdot \vec{r})$$ The slight complication here is that the relationship between $\lambda$ and $f$ will no longer involve the speed on the string, but rather the speed of sound in air. But it's roughly equivalent - you can add the two waves together. That's the simple, essentially correct, picture. The complication brought up by CDCM is very real, however - notes from musical instruments are never "pure tones".
Even the vibrating strings of the piano produce higher harmonics (at the octave, 5th above that, 4th above that, etc etc etc). So in reality the pressure wave from your "single note A" looks more like $$P(\vec{r},t)=\sum P_i\cos(\omega_i t-\vec{k}_i\cdot \vec{r})$$ for a set of frequencies $f_i$ that is highly dependent on the instrument used. That set of frequencies is not only why a guitar sounds different from a flute, but why 300 year old Italian string instruments are more valuable than the modern equivalents.
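Both directions of the question - summing the sines, and then recognizing the chord from the resulting wave - can be sketched in a few lines (NumPy; equal-temperament frequencies typed in by hand):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                 # one second of "audio"
f_A, f_C = 440.0, 523.25               # A4 and C5, equal temperament

# forward direction: the chord is just the superposition of the two sines
chord = np.sin(2 * np.pi * f_A * t) + np.sin(2 * np.pi * f_C * t)

# reverse direction: identify the chord from the peaks of its spectrum
spec = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(fs, 1 / fs)
top_two = np.sort(freqs[np.argsort(spec)[-2:]])
```

The two largest spectral peaks land at (approximately) 440 Hz and 523 Hz, recovering the A-C chord. For a real instrument you would instead see the whole comb of harmonics the answer mentions.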
{ "domain": "physics.stackexchange", "id": 45148, "tags": "waves, acoustics, fourier-transform, frequency, superposition" }
cmd_vel or commands/velocity?
Question: I'm trying to use the existing amcl/gmapping and create my own wheel motor drive actuator. It seems that the output from navigation (amcl/gmapping) is fed into move_base http://wiki.ros.org/move_base The above wiki shows that move_base dumps the velocity commands to /cmd_vel, but on the turtlebot simulator, I see the commands finally go to ***/commands/velocity; in fact the /cmd_vel topic doesn't exist at all. So which one is it, really? I just need to know the correct place for my actuator to get its commands from. Also, gmapping does SLAM, so supposedly it can also do localization - what is the reason we use a separate localization package (amcl)? Thanks, Yang Originally posted by teddyyyy123 on ROS Answers with karma: 1 on 2015-12-22 Post score: 0 Original comments Comment by Procópio on 2015-12-22: please, create another question about your gmapping and amcl doubts, as it relates to a different subject. Answer: I am not familiar with the turtlebot simulator, but what matters is the type of your topic. Velocity messages, such as the /cmd_vel produced by move_base, have the type Twist. So, as long as you have the same topic type, you can always rename the topic to match the one expected by your subscriber, using the remap command. Originally posted by Procópio with karma: 4402 on 2015-12-22 This answer was ACCEPTED on the original site Post score: 0
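A hedged sketch of the remap the answer refers to (the package, node, and target topic names below are placeholders - substitute whatever rostopic list shows on your robot):

```xml
<!-- inside a launch file: the actuator node keeps subscribing to cmd_vel,
     but the data actually flows on the simulator's velocity topic -->
<node pkg="my_actuator_pkg" type="wheel_driver" name="wheel_driver">
  <remap from="cmd_vel" to="/your_robot/commands/velocity"/>
</node>
```

The same remap works on the command line as rosrun my_actuator_pkg wheel_driver cmd_vel:=/your_robot/commands/velocity; it is valid as long as both topics carry the same message type (geometry_msgs/Twist here).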
{ "domain": "robotics.stackexchange", "id": 23280, "tags": "navigation, move-base, turtlebot" }
Is anti-unitary quantum mechanics possible?
Question: According to Wigner's theorem, every symmetry operation must be represented in quantum mechanics by a unitary or an anti-unitary operator. To see this, note that given any two states $|\psi\rangle$ and $|\psi'\rangle$, you would like to preserve $$|\langle\psi|\psi'\rangle|^2=|\langle O\psi|O\psi' \rangle|^2$$ under some transformation $O$. If $O$ is unitary, that works. Yet, we can verify that an anti-unitary operator $A$ such that $$\langle A\psi| A\psi'\rangle= \langle\psi|\psi'\rangle^* ,$$ works too; where ${}^*$ is the complex conjugate. Note that I cannot write it as $\color{red}{\langle \psi| A^\dagger A|\psi'\rangle}$ as $A$ does not behave as usual unitary operators do; it is only defined on kets, $|A\psi\rangle=A|\psi\rangle$, not bras. Could one build a version of quantum mechanics using only anti-unitary operations? Any anti-unitary operation is of the form $A=UK$, where $U$ is a unitary operator and $K$ is the complex conjugation operator (which is itself anti-unitary). For a state $|\psi\rangle=\sum_\lambda c_\lambda |\lambda\rangle$ in some basis $\{|\lambda\rangle\}$, then $K|\psi\rangle = \sum_\lambda c_\lambda^* |\lambda\rangle$. A common example of an anti-unitary operator is the time-reversal operator. Assumption It seems to me that I can plug $K$ all over quantum mechanics to make any unitary transformation anti-unitary and get back the same results, as it seems to preserve the actual probabilities that can be measured. However, part of this question came from an earlier Quantum Computing Stack Exchange question of mine, where the answer showed that if anti-unitary operations existed, you could build faster-than-light messaging devices. So what is wrong with my assumption? Is anti-unitary quantum mechanics problematic? Answer: You can't make continuous transformations anti-unitary because the product of two anti-unitary transformations must be unitary.
As a special case, you cannot make transformations that are continuously connected to unity anti-unitary because an identity operator is not anti-unitary. Thus, the only transformations that can be represented by anti-unitary operators are discrete. And indeed, this does show up in quantum mechanics. For example, the time-reversal operation is represented by an anti-unitary matrix.
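The key fact - that composing two anti-unitaries gives a unitary, i.e. linear, map - is easy to verify numerically. A sketch with random $2\times 2$ unitaries (NumPy; $A=UK$ as in the question):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary Q
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

U, V = random_unitary(2), random_unitary(2)

# anti-unitary A = U K: complex-conjugate the amplitudes, then apply the unitary
A = lambda psi: U @ np.conj(psi)
B = lambda psi: V @ np.conj(psi)

psi = rng.normal(size=2) + 1j * rng.normal(size=2)

# the composition acts linearly: A(B(psi)) = (U conj(V)) psi, with U conj(V) unitary
M = U @ np.conj(V)
```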
{ "domain": "physics.stackexchange", "id": 81886, "tags": "quantum-mechanics, hilbert-space, operators, unitarity, time-reversal-symmetry" }
Computing a transitive completion / path existence oracle
Question: There have been a few questions (1, 2, 3) about transitive completion here that made me think if something like this is possible: Assume we get an input directed graph $G$ and would like to answer queries of the type "$(u,v)\in G^+$?", i.e. asking if there exists an edge between two vertices in the transitive completion of the graph $G$ (equivalently, "is there a path from $u$ to $v$ in $G$?"). Assume after being given $G$ you are allowed to run preprocessing in time $f(n,m)$ and then required to answer queries in time $g(n,m)$. Obviously, if $f=0$ (i.e. no preprocessing is allowed), the best you can do is answer a query in time $g(n,m)=\Omega(n+m)$ (run DFS from $u$ to $v$ and return true if there exists a path). Another trivial result is that if $f=\Omega(\min\{n\cdot m,n^\omega\})$, you can compute the transitive closure and then answer queries in $O(1)$. What about something in the middle? If you are allowed, say, $f=n^2$ preprocessing time, can you answer queries faster than $O(m+n)$? Maybe improve it to $O(n)$? Another variation is: assume you have $poly(n,m)$ preprocessing time, but only $o(n^2)$ space; can you use the preprocessing to answer queries more efficiently than $O(n+m)$? Can we say anything in general about the $f,g$ tradeoff that allows answering such queries? A somewhat similar tradeoff structure is considered in GPS systems, where holding a complete routing table of all pairwise distances between locations is infeasible, so they use the idea of distance oracles, which store a partial table but allow a significant query speedup over computing the distance on the whole graph (usually yielding only an approximate distance between points). Answer: Compact reachability oracles exist for planar graphs, Mikkel Thorup: Compact oracles for reachability and approximate distances in planar digraphs. J. ACM 51(6): 993-1024 (2004) but are "hard" for general graphs (even sparse graphs) Mihai Patrascu: Unifying the Landscape of Cell-Probe Lower Bounds. SIAM J. Comput.
40(3): 827-847 (2011) Nevertheless, there is an algorithm that can compute a close-to-optimal reachability labeling Edith Cohen, Eran Halperin, Haim Kaplan, Uri Zwick: Reachability and Distance Queries via 2-Hop Labels. SIAM J. Comput. 32(5): 1338-1355 (2003) Maxim A. Babenko, Andrew V. Goldberg, Anupam Gupta, Viswanath Nagarajan: Algorithms for Hub Label Optimization. ICALP 2013: 69-80 Building on the work of Cohen et al. and others, there is quite a bit of applied research (database community) see e.g. Ruoming Jin, Guan Wang: Simple, Fast, and Scalable Reachability Oracle. PVLDB 6(14): 1978-1989 (2013) Yosuke Yano, Takuya Akiba, Yoichi Iwata, Yuichi Yoshida: Fast and scalable reachability queries on graphs by pruned labeling with landmarks and paths. CIKM 2013: 1601-1606
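The two trivial endpoints of the tradeoff from the question can be sketched directly (plain-Python adjacency lists; the interesting regime is of course everything these two extremes bracket):

```python
from collections import deque

def reachable(adj, u, v):
    # f = 0, g = O(n + m): one BFS per query, no preprocessing
    seen, q = {u}, deque([u])
    while q:
        x = q.popleft()
        if x == v:
            return True
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                q.append(y)
    return False

def closure_oracle(adj, nodes):
    # f = O(n(n + m)) preprocessing: BFS from every node, then O(1) queries
    table = {u: {v for v in nodes if reachable(adj, u, v)} for u in nodes}
    return lambda u, v: v in table[u]
```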
{ "domain": "cstheory.stackexchange", "id": 2807, "tags": "graph-theory, graph-algorithms, space-time-tradeoff, transitive-closure" }
How can I calculate the measuring probabilities of a two qubit state along a certain axis?
Question: If I have an arbitrary two-qubit state (in this example given from spins), and I want to measure the state of the first spin along a direction $\vec{n}$. This vector $\vec{n}$ has an angle $\theta$ with respect to the $z$-axis. Now I want to know the possible measurement results and their probabilities, for a given $\theta$. Unfortunately, I am not sure how to start with this, as I would usually use Born's rule to get the probability of measuring a certain defined state from the existing superposition. But here, I don't know what the state on the bra side of Born's rule would be. Therefore, I would be really grateful for any hints on how to get from measuring along a certain direction to calculating the probability. (If necessary, I can use a certain value for $\theta$.) Answer: First, you need to construct the measurement projection operators. For measuring spin along $\vec{n}$, the two projection operators will be given by $$ P_{\pm} = \frac{I \pm \vec{n}\cdot\vec{\sigma}}{2}\,.$$ Since you are only measuring the first qubit, your projection measurement operators $\{M_k\}$ will be $$\{M_k\} \equiv \{ P_+\otimes I, P_- \otimes I \}\,.$$ So, if your initial two-qubit state is $|\psi\rangle$, then you can have two outcomes, $+1$ and $-1$, and the probabilities will be given by $$p(+1) = \langle\psi|(P_+\otimes I)|\psi\rangle\,,$$ $$p(-1) = \langle\psi|(P_-\otimes I)|\psi\rangle\,.$$
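A numerical sketch of the recipe (NumPy; the Bell state and the value of $\theta$ are arbitrary choices for illustration, with $\vec{n}$ taken in the $xz$-plane):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = 0.7                               # angle of n w.r.t. the z-axis
n = np.array([np.sin(theta), 0.0, np.cos(theta)])
n_sigma = n[0] * sx + n[1] * sy + n[2] * sz

P_plus, P_minus = (I2 + n_sigma) / 2, (I2 - n_sigma) / 2

# example state: the Bell state (|00> + |11>)/sqrt(2)
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

M_plus = np.kron(P_plus, I2)              # measure only the first qubit
M_minus = np.kron(P_minus, I2)

p_plus = np.real(psi.conj() @ M_plus @ psi)
p_minus = np.real(psi.conj() @ M_minus @ psi)
```

For this maximally entangled state the first qubit is maximally mixed, so both outcomes come out with probability 1/2 for every $\theta$ - a useful sanity check; a product state would instead show the $\theta$-dependence explicitly.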
{ "domain": "quantumcomputing.stackexchange", "id": 5511, "tags": "measurement" }
Time-reversibility symmetry in classical mechanics
Question: Newton's laws are invariant under the time-reversal transformation $$ t \longrightarrow -t $$ for time-independent potentials. But the Hamilton-Jacobi equation is also an equivalent description of classical mechanics, and it is not invariant under this transformation because it is first order in time. Where is the fallacy?
{ "domain": "physics.stackexchange", "id": 60191, "tags": "classical-mechanics, hamiltonian-formalism, time-reversal-symmetry" }
Deriving the first moment of Collisionless Boltzmann Equation in Spherical Polar Coordinates
Question: I am following these notes: Dynamics and Astrophysics of Galaxies. After equation 6.37, we have: \begin{equation*} p_r\,\frac{\partial f}{\partial r} + \frac{p_\theta}{r^2}\,\frac{\partial f}{\partial \theta} + \frac{p_\phi}{r^2\,\sin^2\theta}\,\frac{\partial f}{\partial \phi} -\left(\frac{\mathrm{d} \Phi}{\mathrm{d} r}-\frac{p_\theta^2}{r^3}-\frac{p_\phi^2}{r^3\,\sin^2\theta}\right)\,\frac{\partial f}{\partial p_r} +\frac{p_\phi^2\,\cos\theta}{r^2\,\sin^3\theta}\,\frac{\partial f}{\partial p_\theta} = 0\,.\\ \end{equation*} This is the Collisionless Boltzmann Equation in Spherical Polar Coordinates. Then: We now multiply this by $p_r$ and integrate over all $(p_r,p_{\phi},p_{\theta})$ using that $\mathrm{d}p_r\,\mathrm{d}p_\phi\,\mathrm{d}p_\theta = r^2\,\sin\theta\,\mathrm{d}v_r\, \mathrm{d}v_\phi\,\mathrm{d}v_\theta$ and using partial integration to deal with the derivatives of $f$ with respect to the momenta: \begin{align}\label{eq-spher-jeans-penult} \frac{\partial (r^2\,\sin\theta\,\nu\,\overline{v^2_r})}{\partial r} + \frac{\partial (\sin\theta\,\nu\,\overline{v_r\,v_\theta})}{\partial \theta} & + \frac{\partial (\nu\,\overline{v_r\,v_\phi}/\sin\theta)}{\partial \phi}\\ & +r^2\,\sin\theta\,\nu\,\left(\frac{\mathrm{d} \Phi}{\mathrm{d} r}-\frac{\overline{v_\theta^2}}{r}-\frac{\overline{v_\phi^2}}{r}\right) = 0\nonumber\,. \end{align} I would like to arrive at this result myself.
Multiplying the SPC CBE by $p_r$ and going ahead with the suggested volume element, considering the first term in the above equation, not being worried about the integration limits, gives us: \begin{equation} \int p_r \frac{\partial f}{\partial r} p_r r^2 \sin\theta \mathrm{d}v_r \mathrm{d}v_{\theta} \mathrm{d}v_{\phi} =\int v_r^2 \frac{\partial f}{\partial r} r^2 \sin \theta \mathrm{d}v_r \mathrm{d}v_{\theta} \mathrm{d}v_{\phi} =r^2 \sin \theta \int v_r^2 \frac{\partial f}{\partial r} \mathrm{d}v_r \mathrm{d}v_{\theta} \mathrm{d}v_{\phi} \end{equation} Using 6.32 from the mentioned notes, \begin{equation} \frac{\partial (r^2\,\sin\theta\,\nu\,\overline{v^2_r})}{\partial r} = \frac{\partial}{\partial r} \left( r^2 \sin\theta \int v^2_r f \mathrm{d}v_r \mathrm{d}v_{\theta} \mathrm{d}v_{\phi} \right) \end{equation} Which is not equal to what I have found just a line above. What am I doing wrong? Same question on Math.SE & Physics.SE.
Writing the steps emphasizing this statement above: $$\int d\vec{p} p_r \frac{\partial f}{\partial r} p_r \Bigr|_{\theta, \phi, p_r, p_{\theta}, p_{\phi}}$$ Writing out explicitly what we mean by $d\vec{p}$: $$=\int \mathrm{d}(p_r,p_{\theta},p_{\phi}) p_r \frac{\partial f}{\partial r} p_r \Bigr|_{\theta, \phi, p_r, p_{\theta}, p_{\phi}}$$ We can move out the partial differentiation because every term other than $\frac{\partial f}{\partial r}$ is held constant: [THIS IS THE MAIN STEP] $$=\frac{\partial}{\partial r}\int \mathrm{d}(p_r,p_{\theta},p_{\phi}) p_r^2 f \Bigr|_{\theta, \phi, p_r, p_{\theta}, p_{\phi}}$$ Changing from the $p$-nomenclature to the variables they represent: $$\frac{\partial}{\partial r} \int \mathrm{d}(\dot{r}, r^2\dot{\theta}, r^2\sin^2\theta\dot{\phi}) v_r^2 f \Bigr|_{\theta, \phi, \dot{r}, r^2 \sin^2 \theta \dot{\phi}, r^2\dot{\theta}}$$ i.e. $$\frac{\partial}{\partial r} \int \mathrm{d}(v_r, r \sin\theta v_{\phi}, r v_{\theta}) v_r^2 f \Bigr|_{\theta, \phi, \dot{r}, r^2 \sin^2 \theta \dot{\phi}, r^2\dot{\theta}}$$ I am unsure about this step (i.e., why I don't have a $dr$ and $d\theta$ term, for example), but we say: $$\mathrm{d}(v_r, r \sin\theta v_{\phi}, r v_{\theta}) = r^2 \sin\theta \mathrm{d}v_r \mathrm{d}v_{\theta} \mathrm{d}v_{\phi}$$ Rewriting the integral: $$\frac{\partial}{\partial r} \int r^2 \sin\theta \mathrm{d}v_r \mathrm{d}v_{\theta} \mathrm{d}v_{\phi} v_r^2 f$$ $$=\frac{\partial}{\partial r} r^2 \sin\theta \int \mathrm{d}v_r \mathrm{d}v_{\theta} \mathrm{d}v_{\phi} v_r^2 f$$ $$=\frac{\partial}{\partial r} \left( r^2 \sin\theta \nu \bar{v_r^2} \right)$$ As claimed by the notes. Same answer on Math.SE & Physics.SE.
{ "domain": "astronomy.stackexchange", "id": 5178, "tags": "galactic-dynamics, stellar-dynamics" }
Failed Packet Flags
Question: Ok... More issues are occurring... Now, the code is publishing [I use the spinOnce() to correct the sync error], but the terminal gives me a new error:

allenh1@muri-pc7:~$ rosrun rosserial_python serial_node.py /dev/ttyACM0
[INFO] [WallTime: 1332971976.596828] ROS Serial Python Node
[INFO] [WallTime: 1332971976.601747] Connected on /dev/ttyACM0 at 57600 baud
[INFO] [WallTime: 1332971978.757289] Note: publish buffer size is 512 bytes
[INFO] [WallTime: 1332971978.757710] Setup publisher on imu [sensor_msgs/Imu]
[INFO] [WallTime: 1332971995.901270] Failed Packet Flags
[INFO] [WallTime: 1332971998.228596] Failed Packet Flags
[INFO] [WallTime: 1332971999.715138] Failed Packet Flags
[INFO] [WallTime: 1332972000.021163] Failed Packet Flags
[INFO] [WallTime: 1332972001.216033] Failed Packet Flags
[INFO] [WallTime: 1332972001.883928] Failed Packet Flags
[INFO] [WallTime: 1332972002.676350] Failed Packet Flags
[INFO] [WallTime: 1332972003.079734] Failed Packet Flags
[INFO] [WallTime: 1332972003.143579] Failed Packet Flags
[INFO] [WallTime: 1332972003.897713] Packet Failed : Failed to read msg data
[INFO] [WallTime: 1332972005.732057] Failed Packet Flags
[INFO] [WallTime: 1332972006.467836] Failed Packet Flags
[INFO] [WallTime: 1332972006.535782] Failed Packet Flags
[INFO] [WallTime: 1332972007.064265] Failed Packet Flags
[INFO] [WallTime: 1332972007.592927] Failed Packet Flags
[INFO] [WallTime: 1332972007.859641] Failed Packet Flags
[INFO] [WallTime: 1332972011.647770] Failed Packet Flags
[INFO] [WallTime: 1332972012.508285] Failed Packet Flags
[INFO] [WallTime: 1332972013.304900] Failed Packet Flags

What could be the cause of this error? Is this a code issue?
Here's my arduino code:

#include <ros.h>
#include <std_msgs/String.h>
#include <std_msgs/Float64.h>
#include <sensor_msgs/Imu.h>

ros::NodeHandle nh;
sensor_msgs::Imu data;
ros::Publisher p("imu", &data);
char frame[] = "/imu";

int xPin = A0;
int yPin = A1;      // analog input pins for the accelerometer axes
double xValue = 0;
int xValueRaw = 0;
double yValue = 0;
int yValueRaw = 0;  // variables to store the values coming from the sensor

void setup()
{
  nh.initNode();
  nh.advertise(p);
  nh.spinOnce();
}

void loop()
{
  // read the value from the sensor:
  xValueRaw = analogRead(xPin);
  xValue = (((xValueRaw * 0.0049) - 1.66) / (0.312)) * 9.80665; // raw -> volts -> g -> m/s^2
  yValueRaw = analogRead(yPin);
  yValue = (((yValueRaw * 0.0049) - 1.66) / (0.312)) * 9.80665;
  nh.spinOnce();

  // now, we publish the m/s^2 acceleration to ROS.
  data.header.frame_id = frame;
  data.orientation_covariance[0] = -1;      // tells ROS to ignore the orientation of the data (not provided)
  data.angular_velocity_covariance[0] = -1; // tells ROS to ignore the angular velocity of the data (not provided)
  data.linear_acceleration.x = xValue;
  data.linear_acceleration.y = yValue;
  data.linear_acceleration.z = 0;
  data.header.stamp = nh.now();
  p.publish(&data);
  nh.spinOnce();
  delay(10);
}

Originally posted by allenh1 on ROS Answers with karma: 3055 on 2012-03-28
Post score: 1

Answer: It seems to get better when I lower the use of nh.spinOnce(). Is there some ratio I should set up for this?

Originally posted by allenh1 with karma: 3055 on 2012-03-30
This answer was ACCEPTED on the original site
Post score: 1
{ "domain": "robotics.stackexchange", "id": 8778, "tags": "ros, arduino, imu, rosserial, rosserial-python" }
Bash pass multiple commands to SSH and make readable
Question: I have a script that sends multiple commands after logging in to a remote machine via SSH. The problem is, it's barely readable. I've tried quoting everything with "", but it's terrible (everything appears in pink), and here documents are just as bad (everything appears in gray). I'm using Gedit as an editor, but I tried Emacs as well. Can I make the editor parse the remotely executed code as code?

#!/bin/bash
function remoteExec() {
    echo "remote execution starting"
    ssh -n "$1" << 'EOF'
#the following code is _not_ highlighted by real editors
currIt=\"$2\" #I hope this parameter is passed correctly
while [ "$currIt" -lt 5 ]; do
    echo "hello"
    currIt=$(($currIt + 1))
done
EOF
}

I'd rather not create a separate script just for these lines, because then I'd have to copy it on the remote machine, or pipe it, and I have arguments too. EDIT: as Glenn noticed, as far as highlighting goes, there isn't much to do. I'm open to solutions putting the code in a separate script. I need to transfer the code to the remote host then, and pass it the necessary arguments.

Answer: You're quoting the heredoc terminator, so $2 will certainly not be expanded. When you want to pass multiple commands to ssh, wrap them as a single script to an interpreter, so that ssh sees one single command. Lots of whitespace for readability. And yes, not much opportunity for syntax highlighting. Since you need variable expansion within the script, you need to worry about quoting variables you want to be expanded on the remote host. I changed the while loop to a for loop to minimize the number of $-signs to escape. If the script you want to send is more complex, be careful about how you use single and double quotes.
function remoteExec() {
    echo "remote execution starting"
    ssh -n "$1" << EOF
bash -c '
    for (( currIt="$2"; currIt < 5; currIt++ )); do
        echo "hello"
    done
'
EOF
}

Given an arbitrary number of args:

function remoteExec() {
    local host=$1
    local startIt=$2
    local quotedArgs
    shift 2
    for arg in "$@"; do
        quotedArgs+="\"$arg\" "
    done
    echo "remote execution starting"
    ssh -n "$host" << EOF
bash -c '
    for (( currIt="$startIt"; currIt < 5; currIt++ )); do
        echo "hello"
    done
    args=( $quotedArgs )
    printf "%s\n" "\${args[@]}"   # escape this sigil
'
EOF
}

remoteExec host 3 arg1 arg2 "this is arg3"
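A related pattern, not from the answer above but common in practice, is to let the shell build the quoting itself with printf %q and then feed the script to the remote interpreter via `bash -s` so it can live unquoted in a heredoc. A sketch (the ssh step is only shown in a comment; the example simulates the remote evaluation locally so it is self-contained):

```shell
#!/bin/bash
# quote_args serializes an arbitrary argument list into one string that
# survives a second round of shell evaluation: printf %q escapes spaces
# and metacharacters.
quote_args() {
    local out="" arg
    for arg in "$@"; do
        out+="$(printf '%q' "$arg") "
    done
    printf '%s' "$out"
}

quoted=$(quote_args 3 "this is arg3" '$HOME; rm -rf /')

# Locally simulate what the remote shell would do with the string; with
# ssh this would be roughly:  ssh "$host" bash -s -- $quoted <<'EOF' ... EOF
eval "set -- $quoted"
echo "argc=$#"
echo "arg3=$3"
```

Because the dangerous third argument round-trips verbatim (it is printed, not executed), this avoids the hand-rolled `\"$arg\"` quoting in the answer, which breaks on arguments containing double quotes.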
{ "domain": "codereview.stackexchange", "id": 6406, "tags": "bash" }
Dense Spherical Black Hole Shell with a Region Inside
Question: I'm going to propose a thought experiment, based on two ideas. One: A uniform spherical shell, by the Shell Theorem, does not exert any gravitational force on objects existing in the interior of the shell. Two: A black hole, created by matter dense enough to lie within its Schwarzschild radius, is inescapable, even by light. Suppose (don't ask how) we create a uniform spherical shell composed entirely of matter dense enough to form a black hole, such that we have a continuous event horizon that appears as two concentric spherical regions. One can think of this as a spherical shell with an infinite number of infinitesimally sized black holes (at least, from a macroscopic point of view, to avoid Pauli's Exclusion Principle), or simply with a very large mass density across the surface (which is allowed some thickness in the radial direction, to remain a three-dimensional construct). The overlapping event horizons make this system appear as a single black hole from the outside. Obviously, this system is unstable, and will collapse into a messy crunch fairly quickly, but before it does, its properties seem contradictory. So, what happens on the inside? This question should probably be addressed for completely overlapping interior event horizons (such that no region inside the shell sits outside of the collective event horizon, and all of the black hole "cores" sit inside of the event horizon of every other black hole), and for some space existing between the event horizons of black holes on opposite sides of the shell, so that an event horizon free region exists within the shell. Will objects on the inside feel the effects of the gravitational force, or will it be a happy island of no external gravity (that is, simply flat space)? Does the answer vary based on what major theory is used to address it? Answer: The shell theorem holds in full general relativity, a result known as Birkhoff's theorem.
Nothing special happens on the inside until the matter collapses to the radius of your observer, or unless your observer is trying to see through the shell of matter to the outside world. This, of course, won't work. But that isn't a local measurement.
{ "domain": "physics.stackexchange", "id": 6175, "tags": "general-relativity, gravity, black-holes" }
Does the neural network calculate different relations between inputs automatically?
Question: Suppose you want to predict the price of some stock. Let's say you use the following features. OpenPrice HighPrice LowPrice ClosePrice Is it useful to create new features like the following ones? BodySize = ClosePrice - OpenPrice or the size of the tail TailUp = HighPrice - Max(OpenPrice, ClosePrice) Or do we not need to do that, because we would just be adding noise and the neural network is going to compute those values internally? The case of the body size is maybe a bit different from the tail, because for the tail we need to use a non-linear function (the max operation). So maybe it is important to add an input when it is a non-linear function of the other inputs, but not when it is linear? Another example. Consider a box, with height $X$, width $Y$ and length $Z$. And suppose the really important input is the volume: will the neural network discover that the relevant combination is $X * Y * Z$? Or do we need to provide the volume as an input too? Sorry if it's a dumb question, but I'm trying to understand what the neural network is doing internally with the inputs: whether it is (somehow) finding all the mathematically possible relations between the inputs, or whether we need to specify the relations between the inputs that we consider important (heuristically) for the problem to solve. Answer: On paper, one expects a complex enough network to determine any complicated function of a limited number of inputs, given a large enough dataset. But in practice, there is no limit to the possible difficulty of the function to be learnt, and the datasets can be relatively small on occasion. In such cases - or arguably in general - it is definitely a good idea to define some combinations of the inputs depending on some heuristics, as you suggested. If you think some combination of inputs is an important variable by itself, you definitely should include it in your inputs. We can visualize this situation in TensorFlow playground. Consider the circular pattern dataset on the top left corner with some noise.
You can use the default setting: $x_1$ and $x_2$ as inputs with 2 hidden layers with 4 and 2 neurons respectively. It should learn the pattern in less than 100 epochs. But if you reduce the number of neurons in the second layer to 2, it is not going to get as good as before. So, you are making the model more complicated to get the correct answer. You can experiment and see that one needs at least one 3 neuron layer to get the correct classification from just $x_1$ and $x_2$. Now, if we examine the dataset, we see the circles so we know that instead of $x_1$ and $x_2$, we can try $x_1^2$ and $x_2^2$. This will learn perfectly without any hidden layers as the function is linear in these parameters. The lesson to be learnt here is that, our prior knowledge of the circle ($x_1^2 + x_2^2 = r^2$) and familiarity with the data helped us in getting a good result with a simpler model (smaller number of neurons), by using derived inputs. Take the spiral data at the lower right corner for a more challenging problem. For this one, if you do not use any derived features, it is not likely to give you the correct result, even with several hidden layers. Keep in mind that every extra neuron is a potential source of overfitting, on top of being a computational burden. Of course the problem here is overly simplified but I expect the situation to be more or less the same for any complicated problem. In practice, we do not have infinite datasets or infinite compute times and the model complexity is always a restriction, so if you have any reason to think some relation between your inputs is relevant for your final result, you definitely should include it by hand at the beginning.
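The circular-dataset point can be reproduced without the playground. Below is a minimal sketch (pure Python, with a made-up noise-free dataset) showing that the derived feature $x_1^2 + x_2^2$ makes the circle data separable by a single threshold, i.e. the problem becomes linear in the derived inputs, while no linear function of the raw $x_1, x_2$ can separate a ring from a disk:

```python
import math
import random

random.seed(0)

# Hypothetical circle dataset: class 0 inside radius 1, class 1 in an
# annulus between radii 1.5 and 2.5 (mimicking the playground's circles).
def sample(n):
    points = []
    for _ in range(n):
        if random.random() < 0.5:
            r, label = random.uniform(0.0, 1.0), 0
        else:
            r, label = random.uniform(1.5, 2.5), 1
        theta = random.uniform(0.0, 2.0 * math.pi)
        points.append((r * math.cos(theta), r * math.sin(theta), label))
    return points

data = sample(1000)

# Derived feature: squared radius.  A single threshold on it classifies
# the rings perfectly -- no hidden layers needed.
def classify(x1, x2, threshold=1.5625):   # 1.25^2, between the two rings
    return 1 if x1 * x1 + x2 * x2 > threshold else 0

accuracy = sum(classify(x1, x2) == y for x1, x2, y in data) / len(data)
print(accuracy)  # 1.0 on this noise-free sketch
```

The same model restricted to the raw coordinates is a half-plane decision rule and cannot reach this accuracy, which is the playground lesson in miniature.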
{ "domain": "ai.stackexchange", "id": 1564, "tags": "neural-networks" }
Resources for interesting facts about biology
Question: Are there resources for interesting or fun facts about biology, for example: fastest animal/bird/insect largest organ in human body largest virus largest genome largest de novo synthesized genome largest protein longest experiment done so far most expensive experiment done so far biggest discovery by luck ... You get the idea. It would be nice if these is a curated "database" for all these interesting facts that might be fascinating to lay people as well. Answer: Whether you find the content interesting or not I can't say, but you might want to check out the BioNumbers database. It has a category for "Amazing bionumbers" and a "BioNumber of the Month" section.
{ "domain": "biology.stackexchange", "id": 1926, "tags": "general-biology" }
$2$-sorted array. How to sort it in a minimal number of comparisons?
Question: We are given a $2$-sorted array $a[1..n]$. $2$-sorted means that $a[1]\le a[3]\le \cdots$ and $a[2]\le a[4]\le \cdots$ Obviously we may split the array into two sorted arrays and then merge the two arrays - this requires $n-2$ comparisons. However, I am thinking about the lower bound. I believe that $n-2$ is a lower bound on the number of comparisons, but I can't see a way to prove it. Can you give me a clue? Answer: Hint: Show that the algorithm must compare the following pairs: $$ (a[1],a[2]), (a[2],a[3]), (a[3],a[4]), \ldots, (a[n-1],a[n]). $$ For each comparison $(a[i],a[i+1])$, assume that you haven't compared $a[i]$ to $a[i+1]$ but you have done all other comparisons. Show that you still don't know the correct sorted order of the array.
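For the upper-bound side, the split-and-merge algorithm from the question can be written out directly with an explicit comparison counter. A sketch (0-based indices, unlike the question; this naive merge performs at most $n-1$ comparisons, and tightening it further is left aside):

```python
def merge_2sorted(a):
    """Merge the two interleaved sorted subsequences of a 2-sorted list.

    Returns (sorted_list, number_of_comparisons_performed).
    """
    evens, odds = a[0::2], a[1::2]   # both sorted by the 2-sorted assumption
    out, comparisons = [], 0
    i = j = 0
    while i < len(evens) and j < len(odds):
        comparisons += 1
        if evens[i] <= odds[j]:
            out.append(evens[i]); i += 1
        else:
            out.append(odds[j]); j += 1
    out.extend(evens[i:])            # at most one of these is non-empty
    out.extend(odds[j:])
    return out, comparisons

a = [1, 2, 4, 3, 5, 7, 8, 9]         # evens: 1,4,5,8   odds: 2,3,7,9
merged, comps = merge_2sorted(a)
print(merged, comps)
```

Counting comparisons this way is also a convenient harness for experimenting with the lower-bound argument in the hint: any input forcing all $n-1$ adjacent comparisons must interleave the two subsequences.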
{ "domain": "cs.stackexchange", "id": 5575, "tags": "algorithms, sorting, lower-bounds" }
How does the quantization error generate noise?
Question: I'm learning about sampling and DSP on my own. I have a hard time understanding how the quantization error results in noise. I think I'm missing something fundamental but can't tell what it is. So how does the quantization error generate noise?

Answer: Suppose I have a multitone signal (six carriers, at ±1/1000, ±2/1000 and ±7/1000 of the sampling frequency)

x = (1:1000);
wave = sin(x/1000*2*pi) + sin(x/1000*2*pi*2) + sin(x/1000*2*pi*7);

which is quantized using a 14-bit ADC

wave_quant = round(wave * 16384) / 16384;

The difference

wave_qnoise = wave_quant - wave;

gives the quantization error. The corresponding spectrum

wave_qnoise_freq = mag(fftshift(fft(wave_qnoise)) / sqrt(1000));

shows the generated noise floor across the entire spectrum. This assumes that the quantization error does not introduce a bias. If the ADC always chooses the lower value

wave_quant_biased = floor(wave * 16384) / 16384;

we get a quantization error that is no longer centered around zero

wave_qnoise_biased = wave_quant_biased - wave;

which has a definite spike in the FFT in the DC bin

wave_qnoise_biased_freq = mag(fftshift(fft(wave_qnoise_biased)) / sqrt(1000));

This becomes a real problem with e.g. Quadrature Amplitude Modulation, where a DC offset in the demodulated signal corresponds to a sine wave at the demodulation frequency.
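The "quantization error behaves like noise" picture can be checked numerically. A sketch in Python rather than the answer's MATLAB; the $\Delta/\sqrt{12}$ figure used below is the standard RMS of noise uniformly distributed over one quantization step, which is the usual model for a high-resolution quantizer, not something derived in the answer:

```python
import math

N = 100_000
LEVELS = 16384                  # 14-bit quantizer, as in the answer
delta = 1.0 / LEVELS            # quantization step size

# Test sine and its rounded (unbiased) quantization.
wave = [math.sin(2 * math.pi * n / 1000) for n in range(N)]
quant = [round(w * LEVELS) / LEVELS for w in wave]
err = [q - w for q, w in zip(quant, wave)]

# For round(): mean error ~ 0 (no DC bias) and RMS close to
# delta / sqrt(12), the RMS of a uniform distribution of width delta.
mean = sum(err) / N
rms = math.sqrt(sum(e * e for e in err) / N)
print(mean, rms, delta / math.sqrt(12))
```

Replacing round() with math.floor(w * LEVELS) / LEVELS reproduces the biased case: the mean error shifts to about $-\Delta/2$, which is exactly the DC spike the answer points out.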
{ "domain": "dsp.stackexchange", "id": 4931, "tags": "noise, sampling" }
Single-qubit rotations on a subspace within two-qubit unitary
Question: I would like to implement the operation $$ U(a,b) = \exp\left(i \frac{a}{2} (XX + YY) + i \frac{b}{2} (XY - YX) \right) $$ ($a,b \in \mathbb{R}$) without using Baker-Campbell-Hausdorf expansion, which at first seems necessary since $[(XY - YX), (XX + YY)] \neq 0$. My intuition is that this can be done in the same way that $\exp(i(aX + bY))$ does not require a BCH expansion to implement. The above operation is generated by these two matrices: \begin{align} i \frac{a}{2} (XX + YY)\rightarrow i a \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \\ i \frac{b}{2} (XY - YX)\rightarrow i b\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & \text{-}i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \end{align} Since sum of these matrices is proportional to the operator $(aX - bY)$ in the $(|01\rangle,|10\rangle)$ subspace it seems possible that the operation can be done with a general single-qubit rotation $\text{R}_\hat{n}$ in that subspace. Taking the (unnormalized) unit vector $\hat{n} = a\hat{x} - b\hat{y}$ this rotation is given by $$ \text{R}_\hat{n} (\theta) = \cos\frac{\theta}{2} + i \sin \frac{\theta}{2} (a X - b Y) $$ so that the operation can be implemented as $$ U(a, b) = \text{CNOT}^{2\rightarrow 1} \text{CR}_{\hat{n}}(\theta)^{1\rightarrow 2} \text{CNOT}^{2\rightarrow 1} $$ where $\text{CR}_{\hat{n}}(\theta)$ is a controlled version of $\text{R}_\hat{n}$ and $i\rightarrow j$ indicates an operation on qubit $j$ controlled by qubit $i$. My main concern is that since neither $(XY - YX)$ nor $(XX + YY)$ has support in the $|00\rangle, |11\rangle$ subspace that there's something missing or wrong in this process. My question is, is this a valid decomposition for $U(a, b)$ or is there something wrong in the above reasoning? Answer: Your approach is correct. 
In particular, sandwiching a controlled rotation between two CNOT gates is a common technique for implementing rotations on the $|01\rangle, |10\rangle$ subspace on hardware that does not implement it natively. We can justify your approach using the fact that if $A$ has eigendecomposition $$ A = \sum_i \lambda_i|i\rangle\langle i| $$ then $e^A$ has eigendecomposition $$ e^A = \sum_i e^{\lambda_i}|i\rangle\langle i|. $$ Consequently, if $A$ is block diagonal $$ A = \begin{pmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_k \\ \end{pmatrix} $$ then so is $e^A$ $$ e^A = \begin{pmatrix} e^{A_1} & & & \\ & e^{A_2} & & \\ & & \ddots & \\ & & & e^{A_k} \\ \end{pmatrix}. $$ In the present case $$ \frac{a}{2}(XX + YY) + \frac{b}{2}(XY - YX) = \begin{pmatrix} 0 & & \\ & aX - bY & \\ & & 0 \end{pmatrix} $$ so $$ \exp\left(i\frac{a}{2}(XX + YY) + i\frac{b}{2}(XY - YX)\right) = \begin{pmatrix} e^0 & & \\ & e^{iaX-ibY} & \\ & & e^0 \end{pmatrix} = \begin{pmatrix} 1 & & \\ & R_{\hat n}(2t) & \\ & & 1 \end{pmatrix} $$ where $n$ is the normalized real 3-vector $(-a, b, 0) / t = (-\alpha, \beta, 0)$, $\hat n = \beta Y-\alpha X$ and $t = \|n\|_2 = \sqrt{a^2 + b^2}$. In particular, we do not need to be concerned about the subspace $|00\rangle, |11\rangle$ because it is the eigenspace of the operator in the exponent associated with eigenvalue zero which means that $U(a, b)$ acts on it as identity. Note that in practice it is sometimes possible to avoid using Baker-Campbell-Hausdorff by bringing the terms of the exponent into a form in which they commute, e.g. by regrouping the terms $$ \begin{align} U(a,b) &= \exp\left(i \frac{a}{2} (XX + YY) + i \frac{b}{2} (XY - YX) \right) \\ &= \exp\left(\frac{i}{2} X\otimes(aX + bY) + \frac{i}{2} Y\otimes(aY - bX) \right) \\ \end{align} $$ where $X\otimes(aX+bY)$ and $Y\otimes(aY-bX)$ commute. 
Therefore, $$ \begin{align} U(a, b) &= \exp\left(\frac{i}{2} X\otimes(aX + bY)\right) \exp\left(\frac{i}{2} Y \otimes(aY - bX)\right) \\ &= \exp\left(\frac{it}{2} X\otimes(\alpha X + \beta Y)\right) \exp\left(\frac{it}{2} Y\otimes(\alpha Y - \beta X)\right) \end{align} $$ where $t = \sqrt{a^2 + b^2}, \alpha = \frac{a}{t},\beta = \frac{b}{t}$ as before. Notice that $[X\otimes(\alpha X + \beta Y)]^2 = I$ and $[Y\otimes(\alpha Y - \beta X)]^2 = I$ so $$ \exp\left(\frac{it}{2} X\otimes(\alpha X + \beta Y)\right) = I \cos\frac{t}{2} + i X\otimes(\alpha X + \beta Y) \sin\frac{t}{2} \\ \exp\left(\frac{it}{2} Y\otimes(\alpha Y - \beta X)\right) = I \cos\frac{t}{2} + i Y\otimes(\alpha Y - \beta X) \sin\frac{t}{2} $$ (c.f. equation $(4.7)$ on p.175 in Nielsen & Chuang). Thus, $$ U(a, b) = II \cos^2\frac{t}{2} + ZZ\sin^2\frac{t}{2} + i\left(\frac{\alpha}{2}(XX + YY) + \frac{\beta}{2}(XY - YX)\right)\sin t. $$ In matrix representation $$ U(a, b) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos t & (-\beta + i\alpha)\sin t & 0 \\ 0 & (\beta + i\alpha)\sin t &\cos t & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $$ where we recognize the middle $2\times 2$ block as $R_{\hat n}(2t)$ with $\hat n = \beta Y-\alpha X$ as before.
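The block-diagonal claim is easy to verify numerically. A sketch (NumPy; the matrix exponential is taken via eigendecomposition since the exponent is $i$ times a Hermitian matrix, and the Kronecker-product qubit ordering is a convention choice that does not affect the block structure):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

a, b = 0.7, -0.3
t = np.hypot(a, b)

# Exponent H, with U(a, b) = exp(iH).
H = (a / 2) * (np.kron(X, X) + np.kron(Y, Y)) \
  + (b / 2) * (np.kron(X, Y) - np.kron(Y, X))

# H is Hermitian, so exp(iH) = V diag(exp(i w)) V^dagger.
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T

# Claimed form: identity on span{|00>, |11>}, and on the middle block the
# rotation cos(t) I + i sin(t) H_mid / t, which follows from H_mid^2 = t^2 I.
H_mid = H[1:3, 1:3]
R = np.cos(t) * np.eye(2) + 1j * (np.sin(t) / t) * H_mid

print(np.round(U, 6))
```

Checking `U[0]` and `U[3]` against the computational-basis rows confirms that $U(a,b)$ acts as identity on the $|00\rangle, |11\rangle$ subspace, and the middle block matches `R`.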
{ "domain": "quantumcomputing.stackexchange", "id": 2323, "tags": "quantum-gate, gate-synthesis, matrix-representation, pauli-gates" }
Convolution of $\sin(\omega t)$ and $\cos(\omega t)$?
Question: Let $$x(t)=\sin\left(\frac{\pi t}{4}\right)$$ $$y(t)=\cos\left(\frac{\pi t}{4}\right)$$ I need to find the convolution $$z(t)=x(t) * y(t) \tag{1}$$ Expanding $(1)$ gives $$\begin{align} z(t) &=\int_{-\infty}^{\infty}x(\tau)y(t-\tau)d\tau \\ \\ &=\int_{-\infty}^{\infty}\sin\left(\frac{\pi \tau}{4}\right)\cos\left(\frac{\pi (t-\tau)}{4}\right)d\tau \\ \\ &= \frac{1}{2}\cos\left(\frac{\pi t}{4}\right)\int_{-\infty}^{\infty}\sin\left(\frac{\pi \tau}{2}\right)d\tau+\sin\left(\frac{\pi t}{4}\right)\int_{-\infty}^{\infty}\sin^2\left(\frac{\pi \tau}{4}\right)d\tau \\ \end{align}$$ The first integral equals $0$ because the integrand is odd, and the second integral equals $\infty$, so it is undefined. That means convolution is not possible for these signals. But if I do it by Fourier transform then I get $$\begin{align} Z(j\omega) &= X(j\omega)Y(j\omega) \\ &= \frac{\pi^2}{j}( \delta(\omega- \tfrac{\pi}{4})-\delta(\omega+\tfrac{\pi}{4}) ) \\ \end{align}$$ whose inverse transform would be $2\pi^2\sin \left(\frac{\pi t}{4} \right)$. Why are those two methods giving different results? However the answer given is $4\sin\left(\frac{\pi t}{4}\right)$ Where did I make a mistake? Answer: This is a theoretical question without much practical interest, but still it can be nice to check the results and investigate if intuition still holds. First of all, if the convolution is regarded as an LTI operation between an input $x(t)=\cos(\omega_0 t)$ and a system $y(t)=h(t)=\sin(\omega_0 t)$ then it's immediately obvious that since $h(t)$ is an unstable system, the output can be unbounded even for a bounded input signal. Furthermore, assuming that the Fourier transforms of the input, the system and the output exist (the integrals converge!) then we can assert the property that: $$ x(t) \star y(t) \longleftrightarrow X(\omega)Y(\omega) $$ When the sinusoidal signals $x(t)=\cos(\omega_0 t)$ and $y(t)=\sin(\omega_0 t)$ are considered, it's obvious that their Fourier integrals do not converge.
Hence their formal Fourier transforms do not exist. The solution is the acceptance of the generalised impulse function to represent the Fourier transforms of the sinusoidal signals; i.e., $$ x(t)=\cos(\omega_0 t) \longleftrightarrow X(\omega) = \pi [\delta(\omega-\omega_0) + \delta(\omega+\omega_0)]$$ and $$ y(t)=\sin(\omega_0 t) \longleftrightarrow Y(\omega) = \frac{\pi}{j} [\delta(\omega-\omega_0) - \delta(\omega+\omega_0)]$$ Then we consider that the above theorem still holds for the impulse functions as well and apply it here: $$ z(t)=\sin(\omega_0 t) \star \cos(\omega_0 t) \longleftrightarrow Z(\omega)=\frac{\pi}{j}[\delta(\omega-\omega_0)-\delta(\omega+\omega_0)] \cdot \pi[\delta(\omega-\omega_0)+\delta(\omega+\omega_0)] $$ Using the impulse sifting property; $$f(x)\delta(x-a) = f(a)\delta(x-a)$$ (NOTE-WARNING: the sifting property of $\delta(x-a)$ strictly requires that the function $f(x)$ be sufficiently smooth around the discontinuity implied by the sifting function $\delta(x-a)$. Hence, in this application the property cannot strictly be applied, as assuming that $f(x)=\delta(x)$ is a sufficiently smooth function would then contradict the use of $\delta(x-a)$ as a sifting function in its own sifting property, and thus the rest of these manipulations is just an algebraically consistent illusion from a mathematical point of view...)
we shall perform the multiplications as follows: $$ Z(\omega)= \frac{\pi^2}{j}[ \delta(\omega-\omega_0) \delta(\omega-\omega_0) + \delta(\omega-\omega_0) \delta(\omega+\omega_0) - \delta(\omega+\omega_0) \delta(\omega-\omega_0) - \delta(\omega+\omega_0) \delta(\omega+\omega_0)] $$ $$ Z(\omega)= \frac{\pi^2}{j}[ \delta(0) \delta(\omega-\omega_0) + \delta(-2\omega_0) \delta(\omega+\omega_0) - \delta(2\omega_0) \delta(\omega-\omega_0) - \delta(0) \delta(\omega+\omega_0)] $$ Now noting that $\delta(2\omega_0)=\delta(-2\omega_0)= 0$, those terms vanish and the $\delta(0)=\infty$ terms remain, hence $$ Z(\omega)= \frac{\pi^2}{j}[ \delta(0) \delta(\omega-\omega_0) - \delta(0) \delta(\omega+\omega_0)] $$ $$ Z(\omega)= \pi \delta(0) \frac{\pi}{j} [ \delta(\omega-\omega_0) - \delta(\omega+\omega_0)] $$ which is recognized as the Fourier transform of an infinite amplitude sine wave: $$ \boxed{ z(t) = \pi \delta(0) \sin(\omega_0 t) } $$ The conclusion is that the convolution between $x(t)=\sin(\omega_0 t)$ and $y(t)=\cos(\omega_0 t)$ produces an infinite amplitude sinusoidal wave of the same frequency $\omega_0$.
The time domain verification is as follows: $$x(t) \star y(t) = \int_{-\infty}^{\infty} x(\tau)y(t-\tau) d\tau \leftrightarrow z(t) = \int_{-\infty}^{\infty} \cos(\omega_0(t-\tau)) \sin(\omega_0 \tau) d\tau$$ Using the trigonometric identity $$\cos(x)\sin(y) = 0.5[\sin(y+x) + \sin(y-x)]$$ we can break the integral into two, where $x = \omega_0(t-\tau)$ and $y=\omega_0\tau$, hence $$z(t) = 0.5 \int_{-\infty}^{\infty} \sin(\omega_0(t-\tau)+\omega_0 \tau) + \sin(\omega_0 \tau - \omega_0(t-\tau) ) d\tau$$ $$z(t) = 0.5 \int_{-\infty}^{\infty} \sin(\omega_0 t)d\tau + 0.5 \int_{-\infty}^{\infty} \sin(2\omega_0 \tau - \omega_0 t) d\tau$$ Now the first integral becomes $$ 0.5 \sin(\omega_0 t) \int_{-\infty}^{\infty} d\tau$$ while the second integral can be shown to be zero after a suitable change of variables: assuming for a given (fixed) $t$, $\phi = 2\omega_0 \tau - \omega_0 t$ and $d\phi = 2\omega_0 d\tau$, the second integral becomes $ \frac{1}{2\omega_0} \int_{-\infty}^{\infty} \sin(\phi) d\phi = 0$. Therefore the result of the convolution is: $$z(t) = \left( 0.5 \int_{-\infty}^{\infty} 1 d\tau \right)\sin(\omega_0 t) $$ Note that the integral that weights the sine wave has infinite value; moreover, by the forward and inverse Fourier transform pairs one can recognize the integral as the forward Fourier transform (evaluated at $\omega=0$) of the constant signal $1$, which is: $$\mathcal{F} \{ 1\} = \int_{-\infty}^{\infty} 1 e^{-j \omega t} dt \equiv 2\pi \delta(\omega) $$ and therefore setting $\omega=0$ yields $$\int_{-\infty}^{\infty} 1 dt \equiv 2\pi \delta(0) $$ Finally, plugging this into the result yields $$\boxed{ z(t) = \pi \delta(0) \sin(\omega_0 t) }$$ which is the same as the result obtained from the Fourier method.
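The divergence can also be seen numerically: truncating the convolution integral to $[-T, T]$ gives exactly $T\sin(\omega_0 t)$, an amplitude that grows without bound as $T \to \infty$. A sketch (midpoint Riemann-sum approximation; $T$ is chosen as a whole number of half-periods of $\sin(2\omega_0\tau)$ so the second integral in the derivation vanishes exactly):

```python
import math

w0 = 1.0
T = 20 * math.pi       # 2*w0*T is a multiple of 2*pi
M = 100_000            # Riemann-sum resolution
d = 2 * T / M

def z_T(t):
    """Truncated convolution: integral over [-T, T] of cos(w0(t-tau)) sin(w0 tau)."""
    s = 0.0
    for k in range(M):
        tau = -T + (k + 0.5) * d
        s += math.cos(w0 * (t - tau)) * math.sin(w0 * tau) * d
    return s

for t in (0.3, 1.0, 2.5):
    print(t, z_T(t), T * math.sin(w0 * t))
```

Doubling T doubles the output amplitude while the shape stays $\sin(\omega_0 t)$, which is the finite-window picture behind the formal $\pi\delta(0)\sin(\omega_0 t)$ result.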
{ "domain": "dsp.stackexchange", "id": 5597, "tags": "convolution, linear-systems" }
Why does packet delivery time only count transmission time once?
Question: Why is packet delivery time equal to transmission time + propagation delay, and not equal to 2*(transmission time) + propagation delay? At the end of packet delivery time, doesn't that mean that those bits haven't been received? Answer: After the transmission time has passed, the last bit of the packet has just been put on the wire. That bit (just like all the others) will take an amount of time equal to the propagation delay to reach the receiver. When the last bit reaches the receiver, all the earlier bits will have reached it. So the total time taken is the transmission time plus the propagation delay. It might be easier to visualize what's going on if you think about a train instead of a data packet. Imagine that we're standing by the track, some distance apart. We want to know how long it is from when the front of the train passes me (the time at which "I start sending the train to you") until the whole train has passed you (the time at which "you finish receiving the train"). This is equal to the time it takes the whole train to pass me (transmission time), plus the time it takes the back of the train to reach you (propagation delay). Or, if you prefer, it's equal to the time it takes the front of the train to reach you (propagation delay) plus the time it takes the rest of the train to pass you (transmission time).
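A quick worked example with made-up link parameters (the formula is the one from the answer; the specific numbers are illustrative):

```python
# Delivery time = transmission time + propagation delay.
packet_bits = 1000 * 8          # a hypothetical 1000-byte packet
bandwidth = 1e6                 # 1 Mb/s link
distance = 2_000_000            # 2000 km of cable
speed = 2e8                     # ~2/3 c, typical propagation speed in copper/fiber

transmission = packet_bits / bandwidth   # time to push all bits onto the wire
propagation = distance / speed           # time for one bit to cross the wire
delivery = transmission + propagation

print(transmission, propagation, delivery)  # 0.008 + 0.01 = 0.018 s
```

Note the transmission time is counted once: while the last bit is being put on the wire, the earlier bits are already in flight, which is why the naive 2*(transmission time) double-counts.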
{ "domain": "cs.stackexchange", "id": 13100, "tags": "computer-networks" }
First non-repeated character in a string in c
Question: Write an efficient function to find the first non-repeated character in a string. For example, the first non-repeated character in "total" is 'o' and the first non-repeated character in "teeter" is 'r'. Please see my solution and give me some feedback and suggestions for improving and optimizing it if needed.

#include <stdio.h>

#define SIZE 100 /* max size of input string */
#define LIM 5 /* max number of inputs */

char repeat_ch(const char *); /* return first non-repeated character */

int main(void)
{
    char line[SIZE];
    int i = 0;
    char ch;

    while(i < LIM && gets(line))
    {
        ch = repeat_ch(line);
        if(ch != NULL)
            printf("1st non-repeated character: %c\n", ch);
        else
            printf("There is no unique character in a string: %s\n", line);
        i++;
    }
}

char repeat_ch(const char *string)
{
    char array[130] = {0};
    char *p = string;

    /* store each character in array, use ascii code as an index for character
     * increment each time the same character appears in a string */
    while(*p) // stop when '\0' encountered
    {
        array[*p]+=1;
        p++;
    }
    while(*string)
    {
        if(array[*string] == 1)
            return *string; // stop when unique character found
        string++;
    }
    return *string;
}

Answer: Alex, it looks quite efficient. There are some issues that I see:

- gets should never be used. Use fgets (but note that you will have to strip the trailing \n)
- define main at the end to avoid the need for a prototype for repeat_ch
- declare repeat_ch as static
- your limit does not work as i is not incremented. But why not stop on reading an empty string rather than limit the number of loops?
- NULL is normally defined as (void *) 0 so comparing a char with NULL is wrong. The compiler will warn you of that. Just use 0 or better '\0'

In repeat_ch:

- function name is inaccurate - function looks for a non-repeated char.
- the two // comments are noisy (ie. don't tell reader anything)
- array would be better sized 256
- p should be const
- for-loops would be better:

for (const char *p=string; *p; ++p) {
    array[*p] += 1;
}

for (const char *p=string; *p; ++p) {
    if (array[*p] == 1) {
        return *p;
    }
}

return '\0';
{ "domain": "codereview.stackexchange", "id": 19850, "tags": "optimization, c, strings, array" }
The difference between a bit and a Qubit
Question: Ok, I have done a lot of research on quantum computers. I understand that they are possibly the future of computers and may be commonplace in approximately 30-50 years' time. I know that a bit is either 0 or 1, but a qubit can be both 0 and 1. But what I don't understand is how it can be anything other than 0 or 1. Surely a computer can only understand on and off, no matter how fast it may be? Answer: Probably the easiest analogy is to probabilities. If your computer can flip fair coins, you can think of each coin being in state Tails with probability 1/2 and in state Heads with probability 1/2. So it's appropriate to think of a coin not as a bit but as a vector of two elements $(p_T, p_H)$, where $p_T$ is the probability of tails, $p_H$ is the probability of heads, and we have that $p_T \geq 0$, $p_H \geq 0$, and $p_T + p_H = 1$. Once we sample a coin its state "collapses" to either tails or heads. Just about the same happens with qubits. You can also think of a qubit as being "in between" two states. It's easiest to think of a qubit not as a bit but as a vector of two elements $(q_0, q_1)$. Now, however, we allow $q_0$ and $q_1$ to be negative, and even further, to be any two complex numbers, as long as $|q_0|^2 + |q_1|^2 = 1$. Just as with sampling coins, once you measure a qubit, it collapses to either $0$ or $1$, and in fact you get $0$ with probability $|q_0|^2$ and $1$ with probability $|q_1|^2$. However, and this is the crucial part, these quantum amplitudes can be made to cancel each other because they can take negative values; this cannot happen with classical probability. This is the power of qubits. BTW these complex valued quantities describe amplitudes and are related to the wave-particle duality, i.e. the discovery that particles in fact behave as waves, until you measure them, and then they behave as particles. The cancellation I am talking about is in fact interference between waves.
So another answer to your question is that a qubit is not a discrete object that's either in one state or not; it's a wave... until you measure it.
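To make the cancellation concrete, here is a tiny dependency-free Python sketch of the amplitude picture (the 2×2 matrix is the standard Hadamard transform; everything else is plain arithmetic). Applying it once to a definite 0 gives a state that measures exactly like a fair coin; applying it again makes the negative amplitude cancel the 1-branch entirely, which no classical coin-flipping procedure can do.

```python
import math

# A qubit as a 2-vector of amplitudes (q0, q1) with |q0|^2 + |q1|^2 = 1.
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]  # Hadamard transform: the textbook 2x2 example

def apply(M, q):
    """Multiply a 2x2 matrix by an amplitude vector."""
    return [M[0][0] * q[0] + M[0][1] * q[1],
            M[1][0] * q[0] + M[1][1] * q[1]]

q = [1.0, 0.0]                                # definitely in state 0
q = apply(H, q)                               # now (1/sqrt(2), 1/sqrt(2))
probs = [round(abs(a) ** 2, 10) for a in q]   # measurement probabilities
print(probs)   # [0.5, 0.5] -- indistinguishable from a fair coin

q = apply(H, q)                               # the negative entry cancels the 1-branch
probs2 = [round(abs(a) ** 2, 10) for a in q]
print(probs2)  # [1.0, 0.0] -- interference restored a definite 0
```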
{ "domain": "cs.stackexchange", "id": 1323, "tags": "terminology, quantum-computing" }
What kind of answer does TCS want to the question "Why do neural networks work so well?"
Question: My Ph.D. is in pure mathematics, and I admit I don't know much (i.e. anything) about theoretical CS. However, I have started exploring non-academic options for my career and in introducing myself to machine learning, stumbled across statements such as "No one understands why neural networks work well," which I found interesting. My question, essentially, is what kinds of answers do researchers want? Here's what I've found in my brief search on the topic: The algorithms implementing simple neural networks are pretty straightforward. The process of SGD is well-understood mathematically, as is the statistical theory. The universal approximation theorem is powerful and proven. There's a nice recent paper https://arxiv.org/abs/1608.08225 which essentially gives the answer that universal approximation is much more than we actually need in practice because we can make strong simplifying assumptions about the functions we are trying to model with the neural network. In the aforementioned paper, they state (paraphrasing) "GOFAI algorithms are fully understood analytically, but many ANN algorithms are only heuristically understood." Convergence theorems for the implemented algorithms are an example of analytic understanding that it seems we DO have about neural networks, so a statement at this level of generality doesn't tell me much about what's known vs. unknown or what would be considered "an answer." The authors do suggest in the conclusion that questions such as effective bounds on the size of the neural network needed to approximate a given polynomial are open and interesting. What are other examples of mathematically specific analytical questions that would need to be answered to say that we "understand" neural networks? Are there questions that may be answered in more pure mathematical language? (I am specifically thinking of methods in representation theory due to the use of physics in this paper --- and, selfishly, because it is my field of study. 
However, I can also imagine areas such as combinatorics/graph theory, algebraic geometry, and topology providing viable tools.) Answer: There are a bunch of "no free lunch" theorems in machine learning, roughly stating that there can be no one master learning algorithm that performs uniformly better than all other algorithms (see, e.g., here http://www.no-free-lunch.org/ ). Sure enough, deep learning can be "broken" without much difficulty: http://www.evolvingai.org/fooling Hence, to be provably effective, a learner needs inductive bias --- i.e., some prior assumptions about the data. Examples of inductive bias include assumptions of data sparsity, or low dimensionality, or that the distribution factorizes nicely, or has a large margin, etc. Various successful learning algorithms exploit these assumptions to prove generalization guarantees. For example, (linear) SVM works well when the data is well-separated in space; otherwise -- not so much. I think the main challenge with deep learning is to understand what its inductive bias is. In other words, it is to prove theorems of the type: If the training data satisfies these assumptions, then I can guarantee something about the generalization performance. (Otherwise, all bets are off.) Update (Sep-2019): In the two years since my posted answer, there has been a great deal of progress in understanding the inductive bias implicit in various DL and related algorithms. One of the key insights is that the actual optimization algorithm being used is important, since uniform convergence cannot explain why a massively over-parametrized system such as a large ANN manages to learn at all. It turns out that the various optimization methods (such as SGD) are implicitly regularizing with respect to various norms (such as $\ell_2$). See this excellent lecture for other examples and much more: https://www.youtube.com/watch?v=zK84N6ST9sM
{ "domain": "cstheory.stackexchange", "id": 4174, "tags": "machine-learning" }
Subspace-evasive set performance in the random case
Question: A subspace-evasive set is defined as a large subset of a vector space which has small intersection with any $k$-dimensional affine space. That is, it "evades" all affine subspaces of small enough dimension. Formally, for parameters $k$ and $\epsilon > 0$, a $(k,c)$-subspace evasive set $S \subseteq \mathbb{F}_q^n$ is such that $|S| > |\mathbb{F}_q^{n}|^{1-\epsilon}$ and $|S \cap H| < c$ for all $k$-dimensional affine subspaces $H \subseteq \mathbb{F}_q^n$. The goal is to make the intersection $c$ as small as possible. It is claimed (as trivial in most references) that a random set $S \subseteq \mathbb{F}_q^n$ of size $|\mathbb{F}_q^{n}|^{1-\epsilon}$ has (whp) intersection $c = O(k/\epsilon)$. Is there a way of showing this? It is supposed to be a simple application of the probabilistic method, but I am stuck and clueless. Answer: Here's a quick calculation. For a fixed affine subspace $H$ of dimension $k$, the probability that a random point falls in $H$ is $1/q^{n-k}$. The probability that at least $c$ points in $S$ fall in $H$ is at most: $${q^{(1-\epsilon)n} \choose c} \cdot \frac{1}{q^{(n-k)c}}\leq \frac{q^{(n-\epsilon n) c}}{q^{(n-k)c}} = \frac{1}{q^{(\epsilon n -k)c}}$$ A trivial bound on the number of subspaces of dimension $k$ is $q^{n(k+1)} < q^{2kn}$. So, by the union bound, the probability that there exists a subspace $H$ containing at least $c$ points of $S$ is at most: $$q^{2nk} \cdot \frac{1}{q^{(\epsilon n -k)c}} < 1$$ if $c = 4k/\epsilon$ and $k<\epsilon n/3$.
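For concreteness, here is a quick numerical sanity check of the final union bound with illustrative parameters I picked ($q = 2$, $n = 30$, $\epsilon = 1/2$, $k = 4$, so that $k < \epsilon n/3$ and $c = 4k/\epsilon = 32$); exact rational arithmetic confirms the failure probability is below 1.

```python
from fractions import Fraction

q, n, k = 2, 30, 4
eps = Fraction(1, 2)
assert k < eps * n / 3              # hypothesis of the claim
c = int(4 * k / eps)                # c = 4k/eps = 32

# Union bound: (number of subspaces) * (prob. one subspace catches >= c points)
subspace_exp = 2 * n * k            # exponent in q^{2kn}
event_exp = int((eps * n - k) * c)  # exponent in q^{(eps*n - k)c}
failure_prob = Fraction(q ** subspace_exp, q ** event_exp)

print(failure_prob < 1)  # True: whp no subspace catches >= c points
```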
{ "domain": "cstheory.stackexchange", "id": 2771, "tags": "pr.probability, linear-algebra, coding-theory" }
What are these (eggs?) ? ( Location - India )
Question: The picture can be zoomed in on clicking. Description: There are 24 of them, each ovoid, 1 mm long and attached to the switch's frame by suction cups connected by tubes. They are secured to the base and couldn't be brushed off with a stick. Found them in the east of India. UPDATE The ovoid heads have reduced in number (17) and have turned brownish (10.04.2017). The number of ovoid heads is 11 now, and two of them have a black spot at the proximal end (could be the eye spots of developing embryos) (13.04.2017). The bunch has 4 of them now (24.04.2017). Research & Question: Assuming those are eggs (they resemble those of the green lacewing, and with the changes observed it is evident), what groups of arthropods have such stalked eggs other than the green lacewing? Answer: The group of insects called Neuropterans includes several species that lay eggs on stalks. This group includes lacewings, owlflies and antlions; however, stalked eggs are not a defining trait of the group. In addition to the green lacewing, the blue-eyed lacewing from Australia lays stalked eggs, while owlflies do not. Mantidflies, which are also members of the order Neuroptera, have stalked eggs too, but the stalks are relatively short compared to lacewings'. This thread provides some interesting discussion. P.S. While not an arthropod, at least one genus of molluscs (Nucella) lays stalked eggs too. These are certainly eggs from this order (Neuroptera). Incubation time for these types of eggs can be anywhere from 2-7 days, so you may already be having hatching if there are fewer eggs than were there initially.
{ "domain": "biology.stackexchange", "id": 6940, "tags": "species-identification, eggs, arthropod" }
What's the best way to contribute back stacks and packages?
Question: We use ROS extensively where I work, and some of what we've written would be useful to the community and the engineers here would love to give back. Given that: We have code that would generally be useful (e.g. Bumblebee server, Gstreamer videonodelet) We don't have any company sponsored external repositories. We don't necessarily have funding to provide more than minimal support. (though we might sometimes where current projects are actively using said code) Should we still try to publish this to the community? What would be the best way to publish? Is there a set of "best practices"? Is there a preferred or ROS official VCS host that we should/could be using? What are the licence requirements/guidelines? BSD/GPL/Mozilla/Apache? Should we just edit the wiki to include links to our stuff, or is that frowned upon? Originally posted by Asomerville on ROS Answers with karma: 2743 on 2011-06-21 Post score: 5 Answer: Should we still try to publish this to the community Yes! What would be the best way to publish? Whatever way is best for you, it's your code. To avoid having to set up and maintain your own external facing servers there are lots of options. Having tried a number I ultimately settled on hg hosted on bitbucket and I've been happy with that, but YMMV. You'll have to pick what works best for you. Other options are git+github, svn/hg+googlecode, bzr+launchpad and I'm sure many more. The ros indexers seem to play nice with all the above mentioned VCS's which is nice. Merging kwc's answers too: http://www.ros.org/wiki/Get%20Involved As for the rest of the bullets: * Is there a preferred or ROS official VCS host that we should/could be using? We prefer hosts with the best performance and uptimes, which are probably the same criteria you would use as well. * What are the licence requirements/guidelines? BSD/GPL/Mozilla/Apache? BSD and Apache are generally the friendliest licenses if you wish to encourage use. 
We have had to do additional work on occasion to remove GPL code from libraries. * Should we just edit the wiki to include links to our stuff, or is that frowned upon? You are welcome to create wiki pages for your own packages/stacks. You should use discretion when editing wiki pages that aren't yours. Originally posted by Patrick Bouffard with karma: 2264 on 2011-06-21 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by Asomerville on 2011-06-22: I'll edit to include this, but I was wondering if there's a set of best practices for doing so: Preferred or ROS official VCS host? Better to try and submit as a replacement to the "official" package? License requirements? Etc.
{ "domain": "robotics.stackexchange", "id": 5919, "tags": "ros" }
Compute Taylor expansion of a vectorial function
Question: I'm not sure this question even deserves to be posted on this great forum, but I've been stuck the past 2 hours on a relatively easy thing. Context: (in case it helps) I'm trying to study the stability of Lagrangian points in a 3-body problem: In the picture $S$ denotes the Sun, $J$ Jupiter and $L_4$ the equilibrium solution for an asteroid with relatively small mass. $R$, defined as $d(S,J)$, is the length of the edges of the equilateral triangle. Now the idea is to perturb the position of $\vec{y_4}=L_4$, hence we set $\vec{y}=\vec{y_4}+\vec{z}$ for a "small perturbation" $\vec{z}$. This is what follows ($d_i:=||\vec{y_4}-\vec{y_i}||$, for $i=S,J$): My interpretation: I believe this is the Taylor expansion of the function $f(\vec{y})=\frac{\vec{y}-\vec{y_i}}{||\vec{y}-\vec{y_i}||^3}$ around $\vec{y_4}$ evaluated at $\vec{y_4}+\vec{z}$. My calculations: $ f'(\vec{y})=\frac{1}{||\vec{y}-\vec{y_i}||^3}-3\frac{\vec{y}-\vec{y_i}}{||\vec{y}-\vec{y_i}||^4} \\ f(\vec{y_4}+\vec{z})=f(\vec{y_4})+f'(\vec{y_4})\cdot \vec{z}+\mathcal{O}(\vec{z}^2) $ Clearly there are problems with my derivative (the first term is a scalar unless I interpret $\vec{1}=(1,1,1)$). If I evaluate the derivative at $\vec{y_4}$ I get an expression very similar to the one in the image, but still with mixed scalars and vectors. I'm pretty sure there is some theory on how to handle these functions, but I have never really handled these expressions in 2 years of mathematics and I have no clue on how to fix my lack of knowledge in this context. Answer: The function concerned: $$ \vec{s}_i = \frac{\vec{y}-\vec{y}_i}{\Vert \vec{y}-\vec{y}_i \Vert^3} $$ where $\vec{y} =\vec{y}_4 + \vec{z}$, and $\vec{z}$ is of small magnitude, so we expand to first order in $z$.
$$\tag{1} \vec{s}_i = \frac{\vec{y}_4 +\vec{z} -\vec{y}_i}{\Vert \vec{y}-\vec{y}_i \Vert^3} = \frac{\vec{y}_4 -\vec{y}_i}{\Vert \vec{y}-\vec{y}_i \Vert^3} + \frac{\vec{z} }{\Vert \vec{y}-\vec{y}_i \Vert^3} \approx \frac{\vec{y}_4 -\vec{y}_i}{\Vert \vec{y}-\vec{y}_i \Vert^3} + \frac{\vec{z} }{\Vert \vec{y}_4-\vec{y}_i \Vert^3} $$ The second term is already of order $z$; therefore in its denominator $\vec{y}$ can be replaced by $\vec{y}_4$. Denote the distance $d_i \equiv \Vert\vec{y}_4 - \vec{y}_i\Vert$. Now we are concerned with the Taylor expansion of the denominator as a series in the small $\vec{z}$: $$ f(\vec{y}_4 + \vec{z}) = f(\vec{y}_4) + \vec{\nabla}_z f\Big\vert_{\vec{z}=0} \cdot \vec{z} + \sum_{i,j} \frac{1}{2} \frac{\partial^2 f}{\partial z_i \partial z_j}\Big\vert_{\vec{z}=0} z_i z_j + ... $$ Apply the Taylor expansion to the denominator: $$\tag{2} \frac{1}{\Vert \vec{y}_4-\vec{y}_i+\vec{z} \Vert^3} = \frac{1}{\Vert \vec{y}_4-\vec{y}_i \Vert^3} + \vec{\nabla}_z \left[\frac{1}{\Vert \vec{y}_4-\vec{y}_i+\vec{z} \Vert^3}\right]_{\vec{z}=0} \cdot \vec{z} $$ Calculate the gradient term: $$\tag{3} \vec{\nabla}_z \left[\frac{1}{\Vert \vec{y}_4-\vec{y}_i+\vec{z} \Vert^3}\right] =-3 \left[\frac{1}{\Vert \vec{y}_4-\vec{y}_i+\vec{z} \Vert^4}\right] \vec{\nabla}_z \Vert \vec{y}_4-\vec{y}_i+\vec{z} \Vert \\ = -3\left[\frac{1}{\Vert \vec{y}_4-\vec{y}_i+\vec{z} \Vert^4}\right]\frac{\vec{y}_4-\vec{y}_i+\vec{z}}{\Vert\vec{y}_4-\vec{y}_i+\vec{z}\Vert}=-3\frac{\vec{y}_4-\vec{y}_i+\vec{z}}{\Vert \vec{y}_4-\vec{y}_i+\vec{z} \Vert^5} $$ Substitute Eq.(3) with $\vec{z}= 0$ into Eq.(2): $$\tag{4} \frac{1}{\Vert \vec{y}_4-\vec{y}_i+\vec{z} \Vert^3} = \frac{1}{\Vert \vec{y}_4-\vec{y}_i \Vert^3} - 3 \frac{\left(\vec{y}_4-\vec{y}_i\right) \cdot \vec{z}}{\Vert \vec{y}_4-\vec{y}_i\Vert^5} = \frac{1}{d_i^3} - 3 \frac{\left(\vec{y}_4-\vec{y}_i\right) \cdot \vec{z}}{d_i^5} $$ Plug the result of Eq.(4) into Eq.(1): $$\tag{5} \vec{s}_i \approx
\frac{1}{d_i^3}\left(\vec{y}_4-\vec{y}_i\right) - 3 \frac{\left(\vec{y}_4-\vec{y}_i\right) \cdot \vec{z}}{d_i^5} \left(\vec{y}_4-\vec{y}_i\right)+ \frac{\vec{z} }{d_i^3} $$
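As a sanity check of the final result, the following numpy snippet (with arbitrary illustrative vectors, not the actual Lagrange-point geometry) compares the exact $\vec{s}_i$ with the first-order expansion and confirms the error is $\mathcal{O}(|\vec{z}|^2)$: shrinking $|\vec{z}|$ tenfold shrinks the error roughly a hundredfold.

```python
import numpy as np

y4 = np.array([1.0, 2.0, 0.5])    # illustrative stand-ins for y_4 and y_i
yi = np.array([-0.3, 0.7, 1.1])
r0 = y4 - yi
d = np.linalg.norm(r0)            # d_i

def s_exact(z):
    r = r0 + z
    return r / np.linalg.norm(r) ** 3

def s_first_order(z):
    # Eq. (1) with the result of Eq. (4) substituted in
    return r0 / d**3 - 3 * (r0 @ z) * r0 / d**5 + z / d**3

z = 1e-3 * np.array([0.3, -0.2, 0.1])
err = np.linalg.norm(s_exact(z) - s_first_order(z))
err_small = np.linalg.norm(s_exact(z / 10) - s_first_order(z / 10))
print(err / err_small)  # ~100: the residual scales quadratically in |z|
```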
{ "domain": "physics.stackexchange", "id": 77958, "tags": "classical-mechanics, three-body-problem" }
Can any object pass into the event horizon of a black hole and then escape?
Question: With the exception of Hawking radiation - which (as I understand) is the capture of one member of a virtual pair of photons that appears at the event horizon by the black hole with the "escape" of the other photon into the surroundings - I find no conjectures that any particle of mass could cross a black hole's event horizon and then pass back across the event horizon - essentially "escaping" from the black hole in the sense that it avoids falling into the singularity. I am not claiming others have not addressed this conjecture, just that I have not found any in my literature search. Certainly there are many published explanations that say "no", but consider the following scenario: Although it appears that an electrically charged black hole may not ever occur naturally, there are two general-relativity solutions that do allow for the possibility of an electrically charged black hole (the Reissner-Nordstrom and the Kerr-Newman black holes). If a charged particle's velocity trajectory intersected the event horizon of a charged black hole at the limit of a tangent trajectory (an infinitesimal fraction of a degree below the tangent, and therefore crossing the event horizon), could there be a case where the magnitude of the electric charge of the black hole and the opposite charge of the particle, combined with a sufficiently high velocity of the particle, would allow the particle to then recross the event horizon into normal space? Essentially, could the coulombic repulsive force between the particle and the singularity diminish the net force of the singularity acting on the mass of the particle sufficiently for this hypothesized escape trajectory to be possible? Answer: The answer is no. But you are not too far from something that is possible: extracting energy, charge and mass from a BH without extracting any actual particles.
Two reasons, besides the obvious one that no particle escapes the horizon. First, why can't what you say happen? The repulsive electric Coulomb-like field from the BH is already accounted for in whether the charged particle can get close enough to the horizon. A BH with a certain mass and angular momentum J can only exist up to a certain amount of charge. It's called an extremal (Kerr-Newman) BH. If a charge tries to join it, even at the speed of light, it won't be able to. The BH is already at that point holding as much charge as possible to remain a BH; any more and the self-repulsion would repel its own mass. It's been tried in simulations and numerical solutions. The same is true for a maximum J: any more and the centrifugal-like effect would force matter out. How is that relevant? A charged BH has its own mechanism for repelling charge approaching it. If it turns out the charge can pass the horizon, it's because the BH was not extremal and could support more charge. Once inside, it stays there. But there is a second reason, a phenomenon that elucidates even more, and it's close to what you were saying, but without going inside the horizon: the Penrose process. Penrose did it first for J. He found that if you have a particle rotating around the BH in the ergosphere (if I remember right, I'm going a little fuzzy here, but the effect is real, and I have below a reference, and its references, where you can get more on this), and its J is aligned with (again, not sure if anti-aligned) the J of the BH, it splits into a particle and a virtual particle, with the virtual particle falling inside the BH horizon, and the real particle can escape from the ergosphere (which is outside the horizon) and carry with it more J and actually more energy than what it came in with. The virtual particle goes in with negative mass and opposite J, and the net effect is that the BH loses some mass and J, and gives it to the outgoing particle. It's called extracting energy from the BH.
The BH gets a lower mass and J, but loses no real particle. The same is true for extracting charge: having a charged particle go in can extract charge from the BH, and mass. All the laws of BH thermodynamics (such as total entropy increasing, with BH entropy given by its horizon area) still hold. The process can extract a large percentage of the energy of the BH, and could be the basis for energetic jets. It does require a particle getting close to the BH, with some of the right directions of motion. The effect has not been specifically observed yet. See the wiki article below about the Penrose process for J. It is also true for charge; see Ref. 5 in the wiki article. There have been many other papers; there's no controversy, it's accepted. See https://en.m.wikipedia.org/wiki/Penrose_process
{ "domain": "physics.stackexchange", "id": 36971, "tags": "electromagnetism, black-holes, event-horizon, coulombs-law" }
What is exactly meant by neural network that can take different types of input?
Question: There is a scientific document that implements a convolutional neural network to classify 3 different types of data, although how exactly is unknown to me. Here's the explanation of the network architecture: This section describes the architecture of our neural net, which is depicted in Fig. 3. Our network has three types of inputs: Screenshot (we use an upper crop of the page with dimensions 1280 × 1280; however, this net can work with arbitrarily sized pages), TextMaps (a tensor with dimensions $128 \times 160 \times 160$) and Candidate boxes (a list of box coordinates of arbitrary length). A screenshot is processed by three convolutional layers (the first two layers are initialized with pretrained weights from BVLC AlexNet). TextMaps are processed with one convolutional layer with kernel size $1 \times 1$, and thus its features capture various combinations of words. These two layers are then concatenated and processed by a final convolutional layer. What exactly is implied by "Our network has three types of inputs" above? Is it possible for a convolutional neural network to process different types of inputs differently? From my understanding, the neural network for the Screenshot input would be created like this: def CNN(features, labels, mode): input_layer = tf.reshape(features, [-1, 1280, 1280, 1]) # Conv+ReLU conv_relu_1 = tf.layers.conv2d( inputs=input_layer, filters=96, kernel_size=[11, 11], padding="same", activation=tf.nn.relu) # MaxPool pool1 = tf.layers.max_pooling2d(inputs=conv_relu_1, pool_size=[3, 3], strides=2) # Conv + ReLU ... So let's say this is the first neural network; should I then create another neural network for TextMaps and concatenate the results? Or does all the magic happen in a single neural network? In short, can I create a neural network that takes different types of input individually, or do I use different neural networks for each of them and then group their outputs? Thank you!
Answer: In short, can I create a neural network that takes different types of input individually, or do I use different neural networks for each of them and then group their outputs? Yes, you can. Check the Functional API of Keras to see how to define multi-input/output networks. You can then create different models for the processing of each input and fuse them together into a single multi-input model using the keras.models.Model() class. In the following example, you can see that the main_input is processed differently than the aux_input, and both are thereafter merged together to be propagated through the rest of the layers of the network.
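If Keras isn't at hand, the branch-and-merge idea itself is framework-independent. Here is a minimal numpy sketch (shapes and weights are made up for illustration, far smaller than the paper's): each input type gets its own layer, and only the merged representation feeds the final layer — the "concatenate then process" pattern described in the architecture quote.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two input types with different shapes (toy stand-ins for the real tensors)
screenshot_feats = rng.normal(size=(1, 16))  # e.g. flattened conv features
textmap_feats = rng.normal(size=(1, 8))      # e.g. 1x1-conv word features

# Each branch has its own weights; the final layer is shared
W_img = rng.normal(size=(16, 4))
W_txt = rng.normal(size=(8, 4))
W_final = rng.normal(size=(8, 2))

h_img = np.maximum(screenshot_feats @ W_img, 0)   # branch 1 (ReLU)
h_txt = np.maximum(textmap_feats @ W_txt, 0)      # branch 2 (ReLU)
merged = np.concatenate([h_img, h_txt], axis=1)   # fuse the branches
out = merged @ W_final                            # one network, one output
print(out.shape)  # (1, 2)
```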
{ "domain": "datascience.stackexchange", "id": 4449, "tags": "python, neural-network, tensorflow, convolutional-neural-network" }
timeConversion - C function to convert 12-hour AM/PM format into military time
Question: I have written a program in C, which given a time in 12-hour AM/PM format, converts it to military time (24 hours). Function Description The timeConversion function should return a new string representing the input time in 24-hour format. timeConversion has the following parameter(s): string s: a time in 12 hour format Returns string: the time in 24 hour format Input Format A single string s that represents a time in 12-hour clock format (i.e.:hh:mm:ssAM or hh:mm:ssPM) 1) Sample Input 07:05:45PM Sample Output 19:05:45 2) Sample Input 12:01:00PM Sample Output 12:01:00 3) Sample Input 12:01:00AM Sample Output 00:01:00 Code: #include <asm-generic/errno-base.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stddef.h> #include <sys/types.h> #include <errno.h> #include <err.h> #include <stdbool.h> static char *timeConversion(char *restrict); int main(void) { char *s = NULL; size_t n = 0; getline(&s, &n, stdin); char *res = timeConversion(s); (void)fprintf(stdout, "%s\n", res); exit(EXIT_SUCCESS); } static char * timeConversion(char *restrict s) { int h1 = (int)s[0] - '0'; int h2 = (int)s[1] - '0'; int HH = h1 * 10 + h2 % 10; char t[] = { s[8], s[9] }, *fmt = s; char *out = malloc(BUFSIZ * sizeof(char)); if (out == NULL) errx(EXIT_FAILURE, "%s", strerror(ENOMEM)); memmove(fmt, fmt+2, strlen(fmt)); fmt[strcspn(fmt, "\r\t\n")] = 0; fmt[strlen(fmt) - 2] = '\0'; _Bool status = strcmp(t, "AM"); if (!status) { if (HH == 12) HH = 0; } else { HH += 12; if (HH == 24) HH -= 12; } sprintf(out, "%02d%s", HH, fmt); return (char *)(out); } I know this may not be the best solution, but it was more or less what came my mind. And well, I wrote it on a Linux Machine (Gentoo), and I see that it works as expected, with the inputs above. But when I tried to test it on another machine (Mac OS), the results were: Sample Input 07:05:45AM Sample Output 19:05:45 I'd like to know if there is any way to improve it or a solution to "fix" the behavior of this program? 
Answer: There is a problem comparing a literal string "AM" with a char array that is not terminated with '\0' ==> t[] char t[] = { s[8], s[9] } //int strcmp(const char *str1, const char *str2) _Bool status = strcmp(t, "AM"); If you want to compare char t[] with "AM", you need to add an extra char at the end to specify that the string is terminated. char t[] = { s[8], s[9], '\0'}; Note: you can see your current t[] values just by printing them, and you will see the extra characters that strcmp is using when comparing with "AM". sprintf(out, "output: %s", t); output: AM╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠╠└ñÌ☺
{ "domain": "codereview.stackexchange", "id": 43638, "tags": "c, datetime" }
Does the magnetic field, circulating the moving uniformly charged sphere, exert force/tension on it? If so, how is it interpreted in the charge frame?
Question: I am aware of this SE question; however, it does not solve my problem. Assume that a bulk, uniformly charged, nonconductive sphere is set in motion along the $x$-axis in the lab frame of reference. Since the electric field complies with the inverse-square law everywhere outside the sphere, some magnetic fields are anticipated to be induced, circulating the $x$-axis. However, if we consider specific lines of this magnetic field very close to the surface of the sphere, it seems that every infinitesimally small area element undergoes a Lorentz force $F$ (the red arrows) shown in the figure. If such forces exist, they exert tension on the sphere, tending to change the shape of the sphere in the lab frame. However, how is this tension interpreted in the charge's reference frame, as there is no Lorentz force in the latter?! Recall that when $v$ approaches the speed of light, these forces can probably smash the sphere into pieces! Answer: Yes, test charges on the surface of the moving charged sphere feel this force. Yes, it affects the strain of the sphere of charge in the lab frame. But don't forget that some external forces are needed to keep a charged sphere intact in the first place. Even in its rest frame there are electric fields pointing outward, trying to make the charged sphere expand. Note that $(1/4\pi\epsilon_0)q\hat{r}/r^2$ is not a correct expression for the electric field of a moving charged sphere. That is only correct for the electric field of a motionless charged sphere. For a moving charged sphere, some of the electric field is transformed into magnetic field. I've seen this derived for a moving point charge by taking the electric field of a point charge and applying a Lorentz boost to the electromagnetic tensor $F_{\mu\nu}$. The internal strains of the charged sphere have to be the same in both the lab frame and the co-moving frame (assuming $v\ll c$ so we can ignore time dilation).
In the co-moving frame, all of the strain comes from the electric field. In the lab frame, some of the electric field is transformed into magnetic field, and some of the strain comes from the charged sphere's motion through its magnetic field, while some still comes from the electric field.
{ "domain": "physics.stackexchange", "id": 97610, "tags": "electromagnetism, special-relativity, forces, reference-frames, inertial-frames" }
Beginner's Knight's tour in Python (BFS)
Question: I wanted to implement this as short and effectively as I could without using anything that is not basics, in order to improve my skills. I would appreciate input about things like memory leaks I missed, simpler ways of doing things instead of reinventing the wheel, methods I should have used / implemented / not implemented (could have used built-ins and defaults...) and styling. In general, does this implementation follow common best practices? import random as r import itertools as it import numpy as np class P: up = 8 down = 0 def __init__(self,x=0,y=0,parent=None, random = False): self.x = x if not random else r.randint(P.down,P.up-1) self.y = y if not random else r.randint(P.down,P.up-1) self.parent = parent def __str__(self): return '({},{})'.format(self.x,self.y) def __repr__(self): return '({},{} [{},{}])'.format(self.x,self.y,self.parent.x if self.parent else '-',self.parent.y if self.parent else '-') def __eq__(self,other): return self.x == other.x and self.y == other.y # for np.unique def __gt__(self,other): return self.x + self.y > other.x + other.y # for np.unique def __lt__(self,other): return self.x + self.y < other.x + other.y def __hash__(self): return hash((self.x, self.y)) def valid_moves(self): def valid(*num): # a better way to check if all number in num return True? # this seems nice and short to me, but I guess there is a better way return sum([n >= P.down and n < P.up for n in num]) // len(num) a,b = [1,-1,],[2,-2] # is there a way to shorten this list construction?
t = [P(self.x + i, self.y + j, self) for i in a for j in b if valid(self.x+i,self.y+j)] + [P(self.x + i, self.y + j, self) for i in b for j in a if valid(self.x+i,self.y+j)] return np.unique(t) s = P() e = P(random = True) print('finding shortest path from {} to {}'.format(s,e)) ls = s.valid_moves() while True: if e in ls or e == s: print('found end node...') #find the end node that has parents - not the #original "e" that has no parents and therefore #cannot be used to generate path curr = ls.pop(ls.index(e)) path = [curr] print('generating path...') while curr != s: curr = curr.parent path += ['->'] path += [curr] print('path: ',path) break tmp = [p.valid_moves() for p in ls] # flatten tmp into 1-d list ls = list(it.chain(*tmp)) # Do I have any memory leaks? del tmp Answer: For a beginner this is pretty good; there are some serious styling issues, though, which I would like to point out. Review imports. Only import what you need, ==> from random import randint Doing import random as r does not help readability IMO Constants should be UPPERCASE, rename UP and DOWN accordingly See PEP8#constants def valid(*num): This could be made simpler with the all() built-in return all(n in range(P.DOWN, P.UP) for n in num) Here all() checks if all points are in range and will return True if they are, else it will return False t = [P(self.x + i, self.y + j, self) ... This list comprehension is way too long, and you actually repeat yourself A few fixes are needed It is good to split up your lines for better readability like this: [P(...) for i in a for j in b ...] You loop over [-1, 1], [2, -2] with 2 concatenated list comprehensions; you could do this in one turn, for instance by having a constant of all possible directions, which will make it look a little better.
I end up with this: DIRECTIONS = ((1, 2), (1, -2), (-1, 2), (-1, -2), (2, -1), (2, 1), (-2, 1), (-2, -1)) paths = [P(self.rank + delta_rank, self.file + delta_file, self) for delta_rank, delta_file in P.DIRECTIONS if valid(self.rank + delta_rank, self.file + delta_file)] Always guard your code! There is the issue of naming. You have many unmeaningful names class P() What is P? Point? a, b This doesn't say much e, s I would rename to end, start etc (t, ls, ...) In chess x is called a RANK and y is called a FILE When writing good code, it is helpful to give your variables and instances good names, where it is clear on first sight what they do. So when you revisit your code, or let other people use your code, they know what is what. Revised code from random import randint from itertools import chain import numpy as np class P: UP = 8 DOWN = 0 DIRECTIONS = ((1, 2), (1, -2), (-1, 2), (-1, -2), (2, -1), (2, 1), (-2, 1), (-2, -1)) def __init__(self, rank=0, file=0, parent=None, random=False): self.rank = rank if not random else randint(P.DOWN, P.UP-1) self.file = file if not random else randint(P.DOWN, P.UP-1) self.parent = parent def __str__(self): return '({},{})'.format(self.rank, self.file) def __repr__(self): return '({},{} [{},{}])'.format(self.rank, self.file, self.parent.rank if self.parent else '-', self.parent.file if self.parent else '-') def __eq__(self, other): return self.rank == other.rank and self.file == other.file def __gt__(self, other): return self.rank + self.file > other.rank + other.file def __lt__(self, other): return self.rank + self.file < other.rank + other.file def __hash__(self): return hash((self.rank, self.file)) def valid_moves(self): def valid(*num): return all(n in range(P.DOWN, P.UP) for n in num) paths = [P(self.rank + delta_rank, self.file + delta_file, self) for delta_rank, delta_file in P.DIRECTIONS if valid(self.rank + delta_rank, self.file + delta_file)] return np.unique(paths) def find_path(start, end): valid = start.valid_moves()
while True: if end in valid or end == start: curr = valid.pop(valid.index(end)) path = [curr] while curr != start: curr = curr.parent path += ['->'] path += [curr] return path tmp = [p.valid_moves() for p in valid] valid = list(chain(*tmp)) if __name__ == '__main__': start, end = P(), P(random=True) print('Finding shortest path from {} to {}'.format(start, end)) print('Path ', find_path(start, end))
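As a standalone sanity check of the direction-table approach recommended above (plain Python; the board bounds and offsets are the standard 8x8 knight values, not taken from the post):

```python
UP, DOWN = 8, 0
DIRECTIONS = ((1, 2), (1, -2), (-1, 2), (-1, -2),
              (2, -1), (2, 1), (-2, 1), (-2, -1))  # all 8 knight offsets

def valid(*nums):
    # the all()-based range check suggested in the review
    return all(DOWN <= n < UP for n in nums)

center = [(4 + dr, 4 + df) for dr, df in DIRECTIONS if valid(4 + dr, 4 + df)]
corner = [(0 + dr, 0 + df) for dr, df in DIRECTIONS if valid(0 + dr, 0 + df)]
print(len(center), len(corner))  # prints "8 2"
```

A quick way to catch a duplicated or missing offset in such a tuple is `len(set(DIRECTIONS)) == 8`.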
{ "domain": "codereview.stackexchange", "id": 28807, "tags": "python, beginner, python-3.x, breadth-first-search, chess" }
Time for black body to cool to a given temperature
Question: I'm trying to figure out the time required for a blackbody to cool assuming it only loses heat via radiation. I can estimate the mass, specific heat, surface area, emissivity, initial temperature, final temperature, etc. I know the amount of heat which has to be lost is $Q = mc\Delta T$, where $m$ is mass, $c$ is specific heat, and $\Delta T$ is the temperature change. I know the rate of heat loss is $P = \epsilon \sigma S T^4$, where $\sigma$ is the Stefan–Boltzmann constant, $T$ is the body temperature, $S$ is the surface area, and $\epsilon$ is the emissivity. The body temperature changes as the blackbody cools, so I cannot simply use the initial or final temperature or I get very different results. I think I have to somehow integrate $P$ from $t = 0$ (when the body is at the initial temperature) to when the body is at the final temperature. Any help would be greatly appreciated. Answer: Let us assume the temperature of the black body is $T_1$ and that of the surroundings is $T_2$, with $T_1>T_2$, at time $t = 0$. The heat lost by the body at any instant is $F = \epsilon\sigma S_{area}(T^4 - T_2^4)$, where $T$ is the temperature at that instant, $\sigma$ is the Stefan–Boltzmann constant, and $S_{area}$ is the surface area of the body. This lost heat can also be written as $F = -ms \frac{dT}{dt}$, with $dT$ a small change in temperature and $dt$ a small change in time, so $$-ms \frac{dT}{dt} = \epsilon\sigma(T^4 - T_2^4)S_{area}.$$ Integrate this expression, put in the limits, and solve.
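The final integration is straightforward to do numerically. Below is a minimal plain-Python sketch of $t = \frac{mc}{\epsilon \sigma S}\int_{T_f}^{T_i} \frac{dT}{T^4 - T_2^4}$ using the trapezoid rule; the material numbers at the bottom are made-up illustrative values, not anything given in the question.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def cooling_time(m, c, eps, S, T_i, T_f, T_env, n=20000):
    """Seconds to cool radiatively from T_i to T_f (kelvin) against
    surroundings at T_env, via t = (m c / (eps sigma S)) * integral."""
    f = lambda T: 1.0 / (T ** 4 - T_env ** 4)
    h = (T_i - T_f) / n
    acc = 0.5 * (f(T_i) + f(T_f)) + sum(f(T_f + k * h) for k in range(1, n))
    return m * c / (eps * SIGMA * S) * acc * h

# illustrative numbers: roughly 1 kg of iron with ~0.03 m^2 of surface
t = cooling_time(m=1.0, c=450.0, eps=0.9, S=0.03,
                 T_i=800.0, T_f=400.0, T_env=300.0)
```

The integral also has a closed form via partial fractions (a log and an arctan term), which is a handy cross-check on the numerics.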
{ "domain": "physics.stackexchange", "id": 55087, "tags": "thermodynamics, temperature, thermal-radiation, estimation" }
Is Density Matrix simulation same as Tensor Network simulation?
Question: I read that there are two major ways of simulating quantum circuits - State Vector Simulation and Tensor Network Simulation. But Qiskit provides a backend to simulate state-vectors and a backend for density matrix simulations. Is there a difference between density matrix simulations and tensor network simulations? Do density matrix simulations also scale polynomially with the number of qubits? Answer: A state vector on $n$ qubits is a vector of $2^n$ complex numbers representing a system in a pure state. A density matrix on $n$ qubits is a $2^n$ by $2^n$ matrix representing a system in a mixed state. Mixed states are the more general description of a quantum system; they handle situations where, for example, you have a classical statistical ensemble of pure states. Often, however, you only need to deal with pure states, for example if you have a closed system and don't do any measurements on the system. Simulators differ on whether they handle pure states, mixed states, or both. Because mixed states are more general, there are places where you have to use mixed state simulation, but, for example, for simulating a quantum circuit without the measurement, you really just need the pure state. Tensor networks are a different concept altogether. Tensor networks are different ways to represent objects like pure states (a $2^n$ vector of complex numbers) or mixed states (a $2^n$ by $2^n$ matrix of complex numbers). Tensor networks can also be used to represent objects besides states (pure or mixed); for example, they can be used to represent unitary evolution operators, aka quantum gates. What are these tensor networks? A tensor is a generalization of a vector or a matrix to an object that is indexed by $r$ different indices (so a vector has one index $v_i$, a matrix has two $m_{i,j}$, and a tensor may have more indices like $t_{i,j,k,l}$).
Tensor networks are a set of tensors along with a way to "contract" the tensors. It's easiest to just write a simple example. A tensor network representation of a matrix $m_{i,j}$ might be $$m_{i,j} = \sum_{k,l} a_{i,k} b_{k,l} c_{l,j}$$ The thing on the left is a matrix, and we are expressing it as a sum (contracting indices) over a product of three tensors. Tensor network methods use this sort of thing to represent states and unitaries and other quantum linear algebraic structures. When people talk about a pure state simulator, they tend to be referring to a simulator that just keeps the pure state in memory. When people talk about a tensor network simulator, it could be a pure or mixed state simulator; the term just refers to the internal representation used for the states. As for scaling, none of these simulations scales polynomially with $n$ in general. Tensor network methods, however, often have some tunable dimension (the dimension of that index that was summed over above), and for constant tunable dimension the scaling can be polynomial. However, this often only works for simulating some particular states and not all states (so great when this works, not so great when it doesn't).
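The displayed contraction can be spelled out in a few lines of plain Python (no quantum library assumed); with the middle tensor $b$ set to the identity, contracting out $k$ and $l$ must reproduce the ordinary matrix product $ac$:

```python
def contract(a, b, c):
    """m[i][j] = sum over k, l of a[i][k] * b[k][l] * c[l][j]."""
    rows, cols = len(a), len(c[0])
    ks, ls = range(len(b)), range(len(b[0]))
    return [[sum(a[i][k] * b[k][l] * c[l][j] for k in ks for l in ls)
             for j in range(cols)] for i in range(rows)]

a = [[1, 2], [3, 4]]
b = [[1, 0], [0, 1]]   # identity on the internal "bond" indices
c = [[5, 6], [7, 8]]
m = contract(a, b, c)  # equals the matrix product a·c: [[19, 22], [43, 50]]
```

The cost of the contraction is governed by the size of the summed ("bond") indices $k, l$, which is exactly the tunable dimension mentioned at the end of the answer.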
{ "domain": "quantumcomputing.stackexchange", "id": 3950, "tags": "simulation, density-matrix, tensor-networks" }
Find the time between two events by customer id
Question: I need to find when a customer has bought P1, and after how many days the customer buys P2. I am unable to find the days between an order of P1 and the next order of P2 by the same customer. I have data as shown below.

Customer ID  Order_Date  Product
C-87  11/20/2018  P2
C-87  7/25/2018   P1
C-87  7/19/2019   P1
C-87  8/2/2018    P2
C-87  12/9/2019   P1
...   ...         ...
C-22  9/22/2018   P2
C-22  9/4/2018    P2
C-22  1/15/2018   P1
C-22  9/5/2019    P2
C-22  3/20/2018   P1

Answer: You can first split it into two dataframes, one containing only the P1 orders and one containing only the P2 orders. Plain boolean masks are enough here: df1 = df[df['Product'] == 'P1'] df2 = df[df['Product'] == 'P2'] Note that merging on both Customer ID and Product cannot work: df1 only contains P1 rows and df2 only P2 rows, so no row would ever match. Merge on Customer ID alone and keep the two dates apart with suffixes: merged = df1.merge(df2, how='inner', on='Customer ID', suffixes=('_P1', '_P2')) After converting Order_Date with pd.to_datetime, the gap in days is (merged['Order_Date_P2'] - merged['Order_Date_P1']).dt.days; keep the non-negative gaps and take the minimum per P1 order to get the next P2.
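If pandas is not a hard requirement, the same "next P2 after each P1" logic can be sketched with the standard library alone; the rows below are a subset of the question's sample data:

```python
from collections import defaultdict
from datetime import datetime

rows = [("C-87", "7/25/2018", "P1"), ("C-87", "8/2/2018", "P2"),
        ("C-87", "11/20/2018", "P2"), ("C-22", "1/15/2018", "P1"),
        ("C-22", "9/4/2018", "P2")]

orders = defaultdict(list)
for cust, date, prod in rows:
    orders[cust].append((datetime.strptime(date, "%m/%d/%Y"), prod))

def days_p1_to_next_p2(history):
    """For each P1 order, days until the customer's next P2 order."""
    history = sorted(history)
    gaps = []
    for i, (d1, prod) in enumerate(history):
        if prod == "P1":
            nxt = next((d2 for d2, p in history[i + 1:] if p == "P2"), None)
            if nxt is not None:
                gaps.append((nxt - d1).days)
    return gaps

print(days_p1_to_next_p2(orders["C-87"]))  # prints "[8]"
```

Sorting each customer's history once and scanning forward is O(n log n) per customer, which is fine at this scale.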
{ "domain": "datascience.stackexchange", "id": 7422, "tags": "machine-learning, python, data-mining, pandas, pattern-recognition" }
Question from electrostatic, smaller sphere carved out of larger sphere both of uniform charge density
Question: The sphere of radius a was filled with positive charge at uniform density $\rho$. Then a smaller sphere of radius $\frac{a}{2}$ was carved out, as shown in the figure, and left empty. What are the direction and magnitude of the electric field at A? At B? How do I solve this question? The hint given for this question tells us to suppose the uniform charge density of the inner off-center sphere to be $-\rho$. How does that make sense? Isn't it necessary not to "suppose" things which can't actually be possible? Answer: The great thing about the equations that govern electrostatics is that as long as the boundary conditions are the same, it doesn't matter what you "suppose". This is in fact why the method of images is so useful at solving certain problems, even if the method isn't describing what is actually happening. In your case here, yes, in reality there is just a hole being carved out. But the equations describing this can't tell the difference between that scenario and the scenario where the hole is actually an equal combination of positive and negative charges. What makes the latter method nicer is that you already know how the electric fields of balls of charge behave, and you know that electric fields follow the law of superposition. So which would you rather do: integrate vectors over a nontrivial volume (reality), or simpler addition (a mathematical/physical trick that gives the same answer)?
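The superposition hint is easy to verify numerically. A plain-Python sketch with assumed illustrative values ($\epsilon_0 = \rho = 1$, big sphere of radius 1 at the origin, cavity of radius 1/2 centred at $\vec d = (0, 0, 0.5)$): the field everywhere inside the cavity comes out uniform and equal to $\rho \vec d / 3\epsilon_0$.

```python
RHO, EPS0 = 1.0, 1.0

def field_uniform_sphere(rho, R, center, p):
    """E-field at p of a uniformly charged sphere (inside and outside)."""
    r_vec = [pi - ci for pi, ci in zip(p, center)]
    r = sum(comp * comp for comp in r_vec) ** 0.5
    scale = rho / (3 * EPS0) if r <= R else rho * R ** 3 / (3 * EPS0 * r ** 3)
    return [scale * comp for comp in r_vec]

def field_with_cavity(p, d=(0.0, 0.0, 0.5)):
    # superpose: full sphere of density rho + small sphere of density -rho
    big = field_uniform_sphere(RHO, 1.0, (0.0, 0.0, 0.0), p)
    small = field_uniform_sphere(-RHO, 0.5, d, p)
    return [b + s for b, s in zip(big, small)]

E1 = field_with_cavity((0.0, 0.0, 0.5))   # cavity centre
E2 = field_with_cavity((0.1, 0.05, 0.6))  # another interior point
```

Both points give the same field vector $(0, 0, 1/6)$, confirming the well-known uniform-field-in-the-cavity result.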
{ "domain": "physics.stackexchange", "id": 71143, "tags": "homework-and-exercises, electrostatics" }
Band limited signals that are sparse
Question: I am looking for examples of signals that are band limited but also sparse. For instance, the spiking of neurons can be modeled as band limited in the frequency domain and sparse in the time domain. What other examples exist for this type of signal? Answer: If a signal is sparse in the time domain, it has infinite support in the frequency domain, and thus is not band-limited in the mathematical sense. However, it might be so close to band-limited that the out-of-band spectrum disappears under the quantization or other noise floor.
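The answer's first sentence has a neat finite-length analogue that can be checked with a naive DFT in plain Python: the sparsest possible signal, a single impulse, has a perfectly flat magnitude spectrum, i.e. it occupies every frequency bin.

```python
import cmath

def dft_mags(x):
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

spike = [0.0] * 16
spike[3] = 1.0            # maximally sparse in time...
mags = dft_mags(spike)    # ...maximally spread in frequency (all magnitudes 1)
```

This is the discrete counterpart of the time–frequency trade-off the answer appeals to.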
{ "domain": "dsp.stackexchange", "id": 6634, "tags": "discrete-signals, signal-analysis, bandpass" }
Expectation value on coherent states of $(\hat{a}+\hat{a}^\dagger)^n$
Question: I need to evaluate the following expectation value $$ \langle \alpha \vert (\hat{a}+\hat{a}^\dagger)^n \vert \alpha\rangle $$ The formulation is very easy, but I can not tackle the problem. Any hint? Answer: A general parameter $n$ involves hypergeometric functions. For that expression, we make one observation that in natural unit, the position operator $\hat x$ can be written as follows $$\hat x = \frac{1}{{\sqrt 2 }}\left( {\hat a + {{\hat a}^\dagger }} \right) \tag{1}\label{posop}.$$ We also need the position-representation of a general coherent state $|\alpha\rangle$ $$\alpha(x) = \left\langle {x} \mathrel{\left | {~ \alpha } \right. } \right\rangle = {\pi ^{ - 1/4}}\exp \left[ { - \frac{{{{\left( {x - \sqrt 2 {\alpha _1}} \right)}^2}}}{2} + ix\sqrt 2 {\alpha _2}} \right],$$ with the complex number $\alpha = \alpha_1+i\alpha_2$. Now we have all the ingredients for the solution. The quantity we are after reads $$\mathcal{I} = \left\langle \alpha \right|{\left( {\hat a + {{\hat a}^\dagger }} \right)^n}\left| \alpha \right\rangle = {2^{n/2}}\left\langle \alpha \right|{{\hat x}^n}\left| \alpha \right\rangle \tag{2}\label{ans},$$ where we have used Eq. \eqref{posop} for the position operator. Now we insert an identity inside Eq. \eqref{ans} as follows \begin{align} \mathcal{I} &= \int {dx \cdot } \left\langle \alpha \right|{\left( {\hat a + {{\hat a}^\dagger }} \right)^n}\left| x \right\rangle \left\langle {x} \mathrel{\left | {\vphantom {x \alpha }} \right.} {\alpha } \right\rangle , \\ & = {2^{n/2}} \cdot { } \int {dx \cdot } \left\langle \alpha \right|{{\hat x}^n}\left| x \right\rangle \alpha \left( x \right), \\ & = {2^{n/2}} \cdot { } \int {dx \cdot } {x^n}\left\langle {\alpha } \mathrel{\left | \right. 
} {x} \right\rangle \alpha \left( x \right), \\ & = {2^{n/2}} \int {dx \cdot } {x^n}\cdot{\alpha ^*}\left( x \right)\alpha \left( x \right), \\ & = \frac{2^{\frac{n}{2} - 1}}{\sqrt \pi }\left[ {\left( {{{\left( { - 1} \right)}^n} + 1} \right)\Gamma \left( {\frac{{n + 1}}{2}} \right){}_1{F_1}\left( { - \frac{n}{2};\frac{1}{2}; - 2{\alpha _1}^2} \right) - \sqrt 2 {\alpha _1}\left( {{{\left( { - 1} \right)}^n} - 1} \right)n\Gamma \left( {\frac{n}{2}} \right){}_1{F_1}\left( { - \frac{n}{2} + \frac{1}{2};\frac{3}{2}; - 2{\alpha _1}^2} \right)} \right], \end{align} where $_1{F_1}$ is the confluent hypergeometric function (the factor $\pi^{-1/2}$ comes from the squared normalization of $\alpha(x)$). For the first few values of $n$ ($n = 0, 1, 2, 3, 4$), the expression reads $1$, $2\alpha_1$, $4\alpha_1^2 + 1$, $8\alpha_1^3 + 6\alpha_1$, and $16\alpha_1^4 + 24\alpha_1^2 + 3$. Some properties of the final expression of $\mathcal{I}$ in Eq. \eqref{ans} are listed here. It is always real, since the operator $({\hat a + {{\hat a}^\dagger }})^n$ is Hermitian for any $n$. The final expression does not depend on the imaginary part of $\alpha$, since the position and momentum coordinates are decoupled in an idealized coherent state. An interesting thing would be to couple these two degrees of freedom (position and momentum), e.g., via spin-orbit interaction, and look at the moments of the respective operators. For $\alpha_1=0$, i.e., a coherent state at the origin of phase space, the expression is tractable. It becomes zero for odd values of $n$, and for even values of $n$ it reads $\mathcal{I} = (n-1)!!$.
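The expectation value is easy to cross-check numerically by truncating the Fock space. A plain-Python sketch for real $\alpha = \alpha_1$ (the cutoff `dim=40` is an assumption, ample for small $\alpha$):

```python
import math

def moment(alpha, n, dim=40):
    """<alpha|(a + a^dagger)^n|alpha> in a truncated Fock basis, real alpha."""
    c = [math.exp(-alpha ** 2 / 2) * alpha ** k / math.sqrt(math.factorial(k))
         for k in range(dim)]
    v = c[:]
    for _ in range(n):                      # apply (a + a^dagger) n times
        w = [0.0] * dim
        for k in range(dim):
            if k + 1 < dim:
                w[k] += math.sqrt(k + 1) * v[k + 1]   # a lowers: sqrt(k+1)
            if k >= 1:
                w[k] += math.sqrt(k) * v[k - 1]       # a^dagger raises: sqrt(k)
        v = w
    return sum(ck * vk for ck, vk in zip(c, v))
```

For instance, `moment(0.7, 2)` matches $4\alpha_1^2 + 1 = 2.96$ to machine precision (one can check $\langle(\hat a+\hat a^\dagger)^2\rangle = 4\alpha_1^2+1$ directly by normal ordering), and `moment(0.0, 4)` returns $(4-1)!! = 3$.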
{ "domain": "physics.stackexchange", "id": 77167, "tags": "quantum-mechanics, homework-and-exercises, coherent-states" }
Pauli equation: hermite adjoint when deriving probability density
Question: When trying to derive the probability density from the Pauli equation, I face a problem. Starting from the Pauli equation $$ i\hbar \frac{\partial \Psi}{\partial t}=\hat H_0 \Psi +\mu_B \ \hat \sigma \cdot \mathbf{B} \Psi, $$ I need to take its adjoint: $$ -i\hbar \frac{\partial \Psi^+ }{\partial t}=\hat H_0^* \Psi^+ +\mu_B \ \left( \hat \sigma \cdot \mathbf{B} \Psi \right)^+. $$ So, I'm trying to calculate the multiplication in brackets, using the property of Hermitian conjugation $(AB)^+=B^+A^+$: $$ \left( \hat \sigma \cdot \mathbf{B} \Psi \right)^+ \equiv \bigg( \left( \hat \sigma \cdot \mathbf{B} \right) \Psi \bigg)^+= \Psi^+ \left( \hat \sigma \cdot \mathbf{B} \right)^+ = \Psi^+ \mathbf{B}^+ \hat \sigma^+= \Psi^+ \mathbf{B}^T \hat \sigma. $$ Here I've used the facts that the magnetic field is real $(\mathbf{B}^+=\mathbf{B}^T)$ and the Pauli matrices are Hermitian $(\hat \sigma^+ =\hat \sigma)$. However, in the book (Greiner, Quantum Mechanics: An Introduction) there is another answer: $$ \left( \hat \sigma \cdot \mathbf{B} \Psi \right)^+ = \Psi^+ \hat \sigma \cdot \mathbf{B}. $$ Where is the mistake? Thanks in advance. P.S. I understand that $\hat \sigma$ is an operator, and so it must act on some function, but... I still don't see my mistake. Answer: Your textbook is right. In fact, the error in your third equation is not even a well-formed mistake: you omitted a dot, so, as it stands, your final expression is meaningless. What you need to understand is the Hermitian 2×2 matrix $$ \left( \hat \sigma \cdot \mathbf{B} \right )= \left( \hat \sigma \cdot \mathbf{B} \right )^\dagger; $$ it is the sum of three Pauli matrices, with real coefficients, the components of the real magnetic field. So a good Hermitian piece of the Hamiltonian. Its Hermitian conjugate is just itself. You may think of the vector consisting of the three Pauli matrices and that of the magnetic field, but the two are dotted, so you have a scalar, ignorant of transpositions.
The only Hermitian conjugation involved is that of each and all Pauli matrices, so the book answer is trivially right. Try an explicit example.
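Taking up the closing suggestion, an explicit example is a short check in plain Python (the field components below are arbitrary made-up reals):

```python
def dagger(M):
    """Conjugate transpose of a 2x2 matrix given as nested lists."""
    return [[complex(M[j][i]).conjugate() for j in range(2)] for i in range(2)]

def sigma_dot_B(B):
    sx = [[0, 1], [1, 0]]
    sy = [[0, -1j], [1j, 0]]
    sz = [[1, 0], [0, -1]]
    return [[B[0] * sx[i][j] + B[1] * sy[i][j] + B[2] * sz[i][j]
             for j in range(2)] for i in range(2)]

H = sigma_dot_B((0.3, -1.2, 0.5))  # real B, so H should be Hermitian
assert H == dagger(H)
```

The sum of Pauli matrices with real coefficients is Hermitian term by term, so the equality holds for any real field vector.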
{ "domain": "physics.stackexchange", "id": 71191, "tags": "quantum-mechanics, operators" }
Perfect Rectangle checker
Question: While submitting this question I found that someone had already posed this question in Python; here is my Java implementation of the Perfect Rectangle challenge: Given an array rectangles where rectangles[i] = [xi, yi, ai, bi] represents an axis-aligned rectangle. The bottom-left point of the rectangle is (xi, yi) and the top-right point of it is (ai, bi). Return true if all the rectangles together form an exact cover of a rectangular region. (for more details follow the link above) Note: my implementation requires at least Java 11 entry class Solution (provided by leetcode): /** * https://leetcode.com/problems/perfect-rectangle/ */ public class Solution { //Assessment: given method within given class from leetcode - this interface may not be modified public boolean isRectangleCover(int[][] input) { InputProvider inputProvider = new InputProvider(); inputProvider.handle(input); ArrayDeque<Rectangle> rectangles = inputProvider.getRectangles(); PerfectRectangleChecker perfectRectangleChecker = new PerfectRectangleChecker(inputProvider.getBounds()); return perfectRectangleChecker.check(rectangles); } class Point public class Point { public final int x; public final int y; public Point(int x, int y){ this.x = x; this.y = y; } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; Point point = (Point) o; return x == point.x && y == point.y; } @Override public int hashCode() { return Objects.hash(x, y); } } class Rectangle public class Rectangle { public final Point[] points; public final int area; public Rectangle(int x0, int y0, int x1, int y1) { points = new Point[4]; points[0] = new Point(x0,y0); points[1] = new Point(x1,y0); points[2] = new Point(x0,y1); points[3] = new Point(x1,y1); area = (x1-x0)*(y1-y0); } public Rectangle(int[] input) { this(input[0],input[1],input[2],input[3]); } } class InputProvider public class InputProvider { private final ArrayDeque<Rectangle>
rectangles = new ArrayDeque<>(); private int boundsX0 = Integer.MAX_VALUE; private int boundsY0 = Integer.MAX_VALUE; private int boundsX1 = Integer.MIN_VALUE; private int boundsY1 = Integer.MIN_VALUE; public void handle(int[][] input) { Arrays.stream(input).forEach(this::processInput); } public void processInput(int[] input){ rectangles.add(new Rectangle(input)); updateBounds(input); } public ArrayDeque<Rectangle> getRectangles() { return rectangles; } public Rectangle getBounds() { return new Rectangle(boundsX0, boundsY0, boundsX1, boundsY1); } private void updateBounds(int[] input) { boundsX0 = Math.min(input[0], boundsX0); boundsY0 = Math.min(input[1], boundsY0); boundsX1 = Math.max(input[2], boundsX1); boundsY1 = Math.max(input[3], boundsY1); } } class PerfectRectangleChecker public class PerfectRectangleChecker { private final HashSet<Point> disjunctiveCorners = new HashSet<>(); private final Rectangle bounds; private int area; public PerfectRectangleChecker(Rectangle bounds) { this.bounds = bounds; } public boolean check(ArrayDeque<Rectangle> rectangles) { for (Rectangle r : rectangles) { processRectangles(r); } if (isAreaMismatching()){ return false; } if (boundsMismatchDisjunctivePoints()){ return false; } if(disjunctiveCornersMismatchAmount()){ return false; } //not simplified return statement to emphasize the three checks performed return true; } private boolean disjunctiveCornersMismatchAmount() { return disjunctiveCorners.size() != 4; } private boolean boundsMismatchDisjunctivePoints() { return Arrays.stream(bounds.points).anyMatch(Predicate.not(disjunctiveCorners::contains)); } private boolean isAreaMismatching() { return area != bounds.area; } private void processRectangles(Rectangle r) { area = area + r.area; Arrays.stream(r.points).forEach(this::processDisjunctiveCorners); } private void processDisjunctiveCorners(Point p) { if (disjunctiveCorners.contains(p)) { disjunctiveCorners.remove(p); } else { disjunctiveCorners.add(p); } } } Tests: public 
class SolutionTest { final static int[][] VALID_DATA = {{1, 1, 3, 3}, {3, 1, 4, 2}, {3, 2, 4, 4}, {1, 3, 2, 4}, {2, 3, 3, 4}}; final static int[][] INVALID_DATA = {{0,0,1,1},{0,0,2,1},{1,0,2,1},{0,2,2,3}}; @Test public void testValidInput() { Solution solution = new Solution(); Assert.assertTrue( solution.isRectangleCover(VALID_DATA) ); } @Test public void testInvalidInput() { Solution solution = new Solution(); Assert.assertFalse(solution.isRectangleCover(INVALID_DATA) ); } //more test after more data is provided } Answer: Some minor changes could be applied to the code, for example, the following lines: ArrayDeque<Rectangle> rectangles = inputProvider.getRectangles(); public ArrayDeque<Rectangle> getRectangles() { ... } private final HashSet<Point> disjunctiveCorners = new HashSet<>(); public boolean check(ArrayDeque<Rectangle> rectangles) { ... } These could be rewritten using the Deque interface and the Set interface: Deque<Rectangle> rectangles = inputProvider.getRectangles(); public Deque<Rectangle> getRectangles() { ... } private Set<Point> disjunctiveCorners = new HashSet<>(); public boolean check(Deque<Rectangle> rectangles) { ... } There is little else I would change in your code. One reflection about the algorithm: it seems to me that, to see whether all the rectangles together form an exact cover of a rectangular region, you could add the areas of the rectangles and check whether the total equals the area of the rectangle spanned by the most south-west corner and the most north-east corner among all rectangles. In case of a gap between two rectangles the total area would be smaller than the perfect rectangle area; in case of intersection the total area would be greater.
Following this idea I rewrote the algorithm in this way: public class Solution { private static int calculateArea(int[] rectangle) { return (rectangle[2] - rectangle[0]) * (rectangle[3] -rectangle[1]); } public static boolean isRectangleCover(int[][] rectangles) { int x0 = Integer.MAX_VALUE; int y0 = Integer.MAX_VALUE; int x1 = Integer.MIN_VALUE; int y1 = Integer.MIN_VALUE; int totalArea = 0; for (int[] rectangle : rectangles) { x0 = Math.min(rectangle[0], x0); y0 = Math.min(rectangle[1], y0); x1 = Math.max(rectangle[2], x1); y1 = Math.max(rectangle[3], y1); totalArea += calculateArea(rectangle); } return totalArea == calculateArea(new int[]{x0, y0, x1, y1}); } } I have updated the test class with cases from leetcode: public class SolutionTest { //[[1,1,3,3],[3,1,4,2],[3,2,4,4],[1,3,2,4],[2,3,3,4]] @Test public void test1() { int[][] rectangles = {{1, 1, 3, 3}, {3, 1, 4, 2}, {3, 2, 4, 4}, {1, 3, 2, 4}, {2, 3, 3, 4}}; assertTrue(Solution.isRectangleCover(rectangles)); } //[[1,1,2,3],[1,3,2,4],[3,1,4,2],[3,2,4,4]] @Test public void test2() { int[][] rectangles = {{1, 1, 2, 3}, {1, 3, 2, 4}, {3, 1, 4, 2}, {3, 2, 4, 4}}; assertFalse(Solution.isRectangleCover(rectangles)); } //[[1,1,3,3],[3,1,4,2],[1,3,2,4],[3,2,4,4]] @Test public void test3() { int[][] rectangles = {{1, 1, 3, 3}, {3, 1, 4, 2}, {1, 3, 2, 4}, {3, 2, 4, 4}}; assertFalse(Solution.isRectangleCover(rectangles)); } //[[1,1,3,3],[3,1,4,2],[1,3,2,4],[2,2,4,4]] @Test public void test4() { int[][] rectangles = {{1, 1, 3, 3}, {3, 1, 4, 2}, {1, 3, 2, 4}, {2, 2, 4, 4}}; assertFalse(Solution.isRectangleCover(rectangles)); } } Update: Thanks to Martin's comments I saw that the above solution works just for the cases available on the leetcode site without login, so the additional test cases fail. To pass all the test cases it is necessary to use the property that in a perfect rectangle the four corner vertices are present in just one rectangle each, while the other vertices are shared by 2 or 4 rectangles.
If a vertex (x, y) is represented as a List<Integer> with two elements, it is possible to define a custom Comparator that orders by x and then y, like the one below, and a dedicated TreeSet: Comparator<List<Integer>> comp = (l1, l2) -> { int diff = l1.get(0) - l2.get(0); if (diff == 0) { return l1.get(1) - l2.get(1); } return diff; }; SortedSet<List<Integer>> set = new TreeSet<>(comp); int totalArea = 0; Once it is defined, it is possible to calculate the sum of all rectangle areas and to build the set of the vertices that are owned by just one rectangle: for (int[] rectangle : rectangles) { totalArea += calculateArea(rectangle); int x0 = rectangle[0]; int y0 = rectangle[1]; int x1 = rectangle[2]; int y1 = rectangle[3]; List<List<Integer>> list = List.of(List.of(x0, y0), List.of(x0, y1), List.of(x1, y0), List.of(x1, y1)); for (List<Integer> pointRep : list) { if (set.contains(pointRep)) { set.remove(pointRep); } else { set.add(pointRep); } } } Because the set is ordered, if the cardinality is 4 it is possible to compare the total area with the area of the perfect rectangle and check the result: if (set.size() != 4) { return false; } int[] sw = set.first().stream().mapToInt(i->i).toArray(); int[] ne = set.last().stream().mapToInt(i->i).toArray(); return totalArea == calculateArea(new int[] {sw[0], sw[1], ne[0], ne[1] }); Combining all the code lines together, below is my updated solution: class Solution { public static boolean isRectangleCover(int[][] rectangles) { Comparator<List<Integer>> comp = (l1, l2) -> { int diff = l1.get(0) - l2.get(0); if (diff == 0) { return l1.get(1) - l2.get(1); } return diff; }; SortedSet<List<Integer>> set = new TreeSet<>(comp); int totalArea = 0; for (int[] rectangle : rectangles) { totalArea += calculateArea(rectangle); int x0 = rectangle[0]; int y0 = rectangle[1]; int x1 = rectangle[2]; int y1 = rectangle[3]; List<List<Integer>> list = List.of(List.of(x0, y0), List.of(x0, y1), List.of(x1, y0), List.of(x1, y1)); for (List<Integer> pointRep : list) { if
(set.contains(pointRep)) { set.remove(pointRep); } else { set.add(pointRep); } } } if (set.size() != 4) { return false; } int[] sw = set.first().stream().mapToInt(i->i).toArray(); int[] ne = set.last().stream().mapToInt(i->i).toArray(); return totalArea == calculateArea(new int[] {sw[0], sw[1], ne[0], ne[1] }); } private static int calculateArea(int[] rectangle) { return (rectangle[2] - rectangle[0]) * (rectangle[3] -rectangle[1]); } }
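For readers who prefer it compact, the corner-parity algorithm above fits in a few lines of Python (a sketch, not part of the reviewed Java): toggle every corner in a set; a perfect cover leaves exactly the four bounding-box corners, and the summed area must equal the bounding area.

```python
def is_rectangle_cover(rectangles):
    area, corners = 0, set()
    for x0, y0, x1, y1 in rectangles:
        area += (x1 - x0) * (y1 - y0)
        for p in ((x0, y0), (x0, y1), (x1, y0), (x1, y1)):
            corners ^= {p}          # toggle: shared corners cancel in pairs
    if len(corners) != 4:
        return False
    x0, y0 = min(corners)           # tuple order compares x first, then y,
    x1, y1 = max(corners)           # mirroring the Java Comparator
    return area == (x1 - x0) * (y1 - y0)

print(is_rectangle_cover([[1, 1, 3, 3], [3, 1, 4, 2], [3, 2, 4, 4],
                          [1, 3, 2, 4], [2, 3, 3, 4]]))  # prints "True"
```

`corners ^= {p}` is the set symmetric difference, i.e. the same contains/remove/add toggle the Java loop performs.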
{ "domain": "codereview.stackexchange", "id": 41764, "tags": "java, object-oriented" }
How much of a battery needs to be replaced for it to be a fresh battery?
Question: If I take a battery that is "dead" and completely replace the electrolyte and maybe the terminals, would it be a "fresh" battery? So instead of pulling up to a charging station in your electric car, plugging in, and waiting half an hour or more to be able to continue, could you pull up, suction the electrolyte out, pump some fresh stuff in, and be back up to a fully charged state? Would the electrolyte have to be charged before pumping it in? Answer: One plate is lead, one plate is lead dioxide. Hand-waving a bit: sulfuric acid dissociates in water, making a lot of hydrogen sulfate ions. The lead plate reacts with the hydrogen sulfate to make lead sulfate, one hydrogen, and two electrons. The lead oxide plate takes a hydrogen sulfate, three hydrogens, and one electron to (also) make lead sulfate. This means there are three lead reactions required to make sufficient hydrogens for the lead oxide reaction, but the lead oxide reaction only consumes one of the six electrons generated. That is, the process releases 5 electrons. The process also generates a considerable amount of lead sulfate. This forms as a powdery coating over all the plates in the battery. Interesting tidbit: any mechanical shock (bumps in the road, etc.) can knock that powdery coating off. As the coating is no longer attached to a particular plate, it's not going to get "energized" when the charge reaction happens, and thus will not convert from lead sulfate back to lead or lead oxide. Eventually, enough of this powder accumulates on the bottom of the battery and will short two (or more) plates, resulting in a higher self-discharge rate. Eventually, the self-discharge rate gets high enough that the battery won't "hold a charge" and then it gets replaced. So, anyways, if all you do is pump out the electrolyte, then you haven't done anything to address the fact that the lead plates in the battery have corroded away (to form lead sulfate).
Replacing just the electrolyte doesn't charge a battery. You would have to replace the electrolyte AND the lead plates. But, if you take a dead battery, and then replace the electrolyte and charge plates, then it's a new battery. The easier thing to do than cracking open the battery and replacing all the internals would be to just have an exchange program, like propane tank exchanges in the US. Users are charged a fee which equates to the energy (gas, or in this example electricity) added to the "vessel" as well as a small service fee to clean/inspect/recondition the "vessel." Personally, I think that's the way of the future for electric vehicles. It's just too hard to provide the electric power required to charge a battery at anywhere close to the rate gas vehicles refuel, which "charge" at a rate of megawatts - check energy density of gasoline and multiply it by 10 gallons per minute.
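The closing back-of-envelope is worth writing out. Using the commonly quoted ~34 MJ/L chemical energy density of gasoline (an approximate textbook figure, not given in the answer) and the answer's 10 US gallons per minute:

```python
GASOLINE_MJ_PER_L = 34.2   # approximate chemical energy density of gasoline
L_PER_US_GAL = 3.785
flow_gal_per_min = 10.0

power_w = GASOLINE_MJ_PER_L * 1e6 * flow_gal_per_min * L_PER_US_GAL / 60.0
print(f"{power_w / 1e6:.1f} MW")  # prints "21.6 MW"
```

So a gas pump "charges" a car at roughly 20 MW of chemical power, which is the scale an electrical connector would have to match for comparable refueling times.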
{ "domain": "engineering.stackexchange", "id": 1153, "tags": "battery" }
Multi-channel audio upsampling interpolation
Question: I have a four-channel audio signal from a microphone tetrahedral array. I wish to upsample it from 48 kHz to 240 kHz. Is there a preferred interpolation method for audio? Does cubic interpolation (or any other) have any advantages over linear for the specific case of audio? Assuming I am using cubic interpolation, do I interpolate each channel separately or is there any benefit in using a bicubic interpolation over all four channels? Answer: Does cubic interpolation (or any other) have any advantages over linear for the specific case of audio? You'd use neither for audio. The reason is simple: The signal models you typically assume for audio signals are very "Fourier-y", to say, they assume that sound is composed of weighted harmonic oscillations, and bandlimited in its nature. Neither linear interpolation nor cubic interpolation respect that. Instead, you'd use a resampler with a anti-imaging / anti-aliasing filter that is a good low-pass filter. Let's take a step back: When we have a signal that is discrete in time, i.e. has been sampled at a regular lattice of time instants, its spectrum is periodic – it repeats every $f_s$ (sampling freq.). Now, of course, we rarely look at it this way, because we know that our sampling can only represent a bandwidth of $f_s/2$, we typically only draw the spectrum from 0 to $f_s/2$, for example: S(f) ^ |--- | \ | \ --- | --/ \ | \------\ +----------------------'---> f 0 f_s/2 Now, the reality of it is that in fact, we know that for real-valued signals, the spectrum is symmetrical to $f=0$: S(f) ^ ---|--- / | \ --- / | \ --- / \-- | --/ \ /------/ | \------\ ---'----------------------+----------------------'---> -f_s2/2 0 f_s/2 But, due to the periodic nature of the spectrum of something that got multiplied with a "sampling instance impulse train", that thing repeats to both sides infinitely, but we only typically "see" the 1. 
Nyquist zone (marked by :) : S(f) : : ^ : : ---|--- : ------- … : / | \ : / \ … : --- / | \ --- : --- / \ --- : / \-- | --/ \ : / \-- --/ \ : /------/ | \------\ : /------/ \------\ -------'----------------------+----------------------'---------------------------------------------'--> -f_s/2 0 f_s/2 f_s When we increase the sample rate, we "just" increase the observational width. Just a random example: S(f) ^ ---|--- :------ … / | \ /: \ … --- / | \ --- --- / : \ --- / \-- | --/ \ / \-- : --/ \ /------/ | \------\ /------/ : \------\ -------'----------------------+----------------------'---------------------------------------------'--> -f_s/2 0 f_s/2 new f_s/2 f_s Try that! Take an audio file, let the tool of your liking show you its spectrum. Then, just insert a $0$ after every sample, save as a new audio file (python works very well for such experiments), and display its spectrum. You'll see the original audio (positive half of the) spectrum on the left side, and its mirror image on the right! Now, to get rid of these images, you'd just low-pass filter to your original Nyquist bandwidth. And that's really all a resampler does: change the sampling rate, and make sure repetitions and foldovers (aliases) don't appear in the output signal. If you're upsampling by an integer factor $N$ (say, 48 kHz -> 192 kHz), then you just insert $N-1$ zeros after every input sample and then low-pass filter; it's really that simple. In the ideal case, that filter would be a rectangle: Let through the original bandwidth unaltered, suppress everything not from there. A filter with a rectangular spectral shape has (infinite!) sinc shape in time domain, so that's what sinc interpolation is (and why it's pretty much as perfect as it gets). Since that sinc is infinitely long, and your signal isn't, well, that's not really realizable. You can have a truncated sinc interpolation, however. 
As a matter of fact, even that would be overkill: your original audio has low-pass characteristics, anyway! (simply because of the anti-aliasing filters that you invariably need before sampling the analog audio source; not to mention that high frequencies are inaudible, anyways.) So, you'd simply go with a "good enough" low pass filter after inserting these zeros. That keeps the computational effort at bay, and also might be even better than the truncation of the sinc. Now, what if your problem is decidedly not an integer interpolation? For example, 240000 / 44800 is definitely not an integer. So, what to do? In this relatively benign case, I'd go for a rational resampler: First, we go up by an integer factor $N$, so that the resulting sampling rate is a multiple of the target sampling rate. We'd do the low-pass filtering as explained above, limiting the resulting signal to its original 44.8 kHz/2 bandwidth, and then apply a downsampling by $M$, i.e. anti-aliasing filtering it to the target 240 kHz/2 bandwidth, and then throwing out $M-1$ of $M$ samples. It's really that easy! In fact, we can simplify further: since the anti-imaging filter cuts off at 22.4 kHz, and the anti-aliasing filter only after 120 kHz, the latter is redundant, and can be eliminated, so that the overall structure of a rational resampler becomes: Upsampling -> core filter -> downsampling (in fact, we can even apply multirate processing and flip the order, greatly reducing effort, but that'd lead too far here.) So, what are your rates here? For 44800 Hz in, 240000 Hz out, the least common multiple is 3360000 Hz = 3360 kHz, that's up by a factor of 75, low pass filter, and then down by 14. So, you'd need a 1/75 band lowpass filter. It's easy to design one using python or octave!
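The "insert a zero after every sample" experiment suggested above can be run in a dozen lines of plain Python with a naive DFT: a pure tone in bin 3 of a 32-sample record reappears, after zero-stuffing to 64 samples, as images mirrored around the old Nyquist frequency.

```python
import math, cmath

def peak_bins(x, rel=0.9):
    """Indices of DFT bins whose magnitude is within rel of the maximum."""
    N = len(x)
    mags = [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) for k in range(N)]
    top = max(mags)
    return [k for k, m in enumerate(mags) if m > rel * top]

N = 32
tone = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
stuffed = [v for s in tone for v in (s, 0.0)]   # zero after every sample

print(peak_bins(tone))      # prints "[3, 29]"
print(peak_bins(stuffed))   # prints "[3, 29, 35, 61]"
```

The extra peaks at bins 35 and 61 (3 + 32 and 29 + 32) are exactly the spectral images the answer describes; a low-pass filter cutting off below the old Nyquist bin removes them, which is all a resampler's anti-imaging filter does.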
{ "domain": "dsp.stackexchange", "id": 7479, "tags": "interpolation, audio-processing, array-signal-processing" }
How are time translations understood in non-relativistic quantum mechanics?
Question: Let us consider nonrelativistic quantum mechanics, wherein Galilean relativity reigns supreme. This question is motivated by what I think is a misunderstanding I'm having in reading Fonda's Symmetry Principles in Quantum Physics. Consider two different observers $\overline{O}$ and $O$ of a given quantum system $S$. These observers differ only in their time coordinate. Their time coordinates are related by $\overline{t} = t - \tau$; that is, $\overline{O}$'s time coordinate is delayed with respect to $O$'s. Now Fonda suggests that in terms of how the two observers view the quantum system, the "translation" between the two frames of reference in their vectors (neglecting the ray technicality) used to describe a given state must obey $$|(\phi_{\overline{O}}(\overline{t}),\psi_{\overline{O}}(\overline{t}))|^2=|(\phi_{O}(t),\psi_{O}(t))|^2.$$ My question is how do we justify this requirement on the translation map? Do we argue that since $\overline{t} = t - \tau$ (since these are the same absolute times), we must have $\psi_{\overline{O}}(\overline{t}) = \psi_{O}(t-\tau)$ and so, trivially, $$|(\phi_{\overline{O}}(\overline{t}),\psi_{\overline{O}}(\overline{t}))|^2 = |(\phi_{O}(t-\tau),\psi_{O}(t-\tau))|^2 .$$ Then by the unitary evolution of non-relativistic quantum mechanics, one has (if we evolve forward in time by $\tau$) $$|(\phi_{O}(t-\tau),\psi_{O}(t-\tau))|^2 = |(\phi_{O}(t),\psi_{O}(t))|^2$$ and so we arrive at the required form. The reason I am not sure about this is that Fonda says "In fact, the state of the system $S$ that $\overline{O}$ is observing at his time $\overline{t}$ is the evolved, through the time interval $\tau$, of the state observed by $O$ at his own time $t$", whereas my interpretation of what we have done is just the opposite!
I seem to have concluded that the state of the system $S$ that $\overline{O}$ is observing at his time $\overline{t}$ is the evolved backwards, through the time interval $\tau$, of the state observed by $O$ at his own time $t$ (since $\psi_{\overline{O}}(\overline{t}) = \psi_{O}(t-\tau)$). The discussion in question is related to what Fonda discusses near equation 1.9 below. Eq (1.3) alluded to is about the unitarity of time evolution (a hypothesis in nonrelativistic QM, I suppose). Answer: I believe you're using the wrong sign in the time-translation. If we write $$ \phi'(t) = \phi(t-\tau) $$ then $\phi'$ is translated forward in time with respect to $\phi$. This is because it's a passive transformation. Thus, in the notation of the book (which only uses a single time coordinate) we have $$ \mathbf{T}\mathbf{\Phi}_O(t) = \mathbf{\Phi}_{\overline{O}}(t) = \mathbf{\Phi}_O(t+\tau) $$ and $$ \left|\bigl( \phi_{\overline{O}}(t),\psi_{\overline{O}}(t)\bigr) \right| = \left| \bigl( \phi_O(t+\tau),\psi_O(t+\tau) \bigr)\right| $$ which is consistent with the statement in the book. Note that it works the same for other coordinate translations. For example, for a wavefunction $\psi(x)$ we have the position basis state $$ |\psi(x)\rangle = \int \psi(x) |x\rangle dx. $$ Let $T|x\rangle = |x+a\rangle$. Then $$ T|\psi(x)\rangle = \int \psi(x) |x + a\rangle dx. $$ The displaced wavefunction is $$ \langle x' | T|\psi(x)\rangle = \int \psi(x) \langle x'|x + a\rangle dx = \int \psi(x) \delta(x+a-x') dx = \psi(x'-a) $$
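The step from $|(\phi_O(t-\tau),\psi_O(t-\tau))|^2$ to $|(\phi_O(t),\psi_O(t))|^2$ rests only on the unitarity of time evolution, and that invariance is easy to check numerically in a finite-dimensional toy model. A small sketch (the 2x2 unitary below is an arbitrary stand-in for the evolution operator, not anything from Fonda):

```python
import cmath
import math

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inner(u, v):
    # (u, v) = sum_i conj(u_i) v_i
    return sum(x.conjugate() * y for x, y in zip(u, v))

# An explicitly unitary 2x2 matrix (a rotation dressed with phases),
# standing in for the time-evolution operator over the interval tau
theta, a, b = 0.7, 1.1, -0.4
U = [[cmath.exp(1j * a) * math.cos(theta), -cmath.exp(1j * b) * math.sin(theta)],
     [cmath.exp(-1j * b) * math.sin(theta), cmath.exp(-1j * a) * math.cos(theta)]]

phi = [0.3 + 0.2j, -0.8 + 0.1j]   # two arbitrary state vectors
psi = [1.0 - 0.5j, 0.4 + 0.9j]

before = abs(inner(phi, psi)) ** 2                         # |(phi, psi)|^2
after = abs(inner(mat_vec(U, phi), mat_vec(U, psi))) ** 2  # |(U phi, U psi)|^2
```

Since $(U\phi, U\psi) = (\phi, U^\dagger U \psi) = (\phi, \psi)$, the two moduli agree to machine precision.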
{ "domain": "physics.stackexchange", "id": 96937, "tags": "quantum-mechanics, symmetry" }
simple_navigation_goals no such file or directory
Question: Hello, I am new to ROS. I am using Ubuntu 16.04 and am trying to follow this tutorial. I followed all the steps; however, when I run ./bin/simple_navigation_goals in the end, I get bash: ./bin/simple_navigation_goals: No such file or directory Can someone let me know why this is? I have already tried to do source devel/setup.bash after I did catkin_make Thanks, Aaron Originally posted by aarontan on ROS Answers with karma: 135 on 2018-06-06 Post score: 0 Original comments Comment by jayess on 2018-06-06: I see that's the way the tutorial that you linked to tells you to run the node, which I find strange. Usually, you'd run the node with rosrun <package-name> <node-name> Can you try running the node that way and see what happens? Answer: Figured it out: you have to cd into /catkin_ws/devel/lib/simple_navigation_goals and then run ./simple_navigation_goals for it to work... Originally posted by aarontan with karma: 135 on 2018-06-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by jayess on 2018-06-06: How did you compile the node and how are you trying to run it? You should be able to use rosrun from anywhere in your terminal (i.e., not having to be in /catkin_ws/devel/lib/simple_navigation_goals ) Comment by aarontan on 2018-06-08: yes, you are correct. thank you!
{ "domain": "robotics.stackexchange", "id": 30980, "tags": "ros, ros-kinetic, bash" }
Wavefronts, refraction, and the marching soldiers analogy
Question: I am not a physicist, but rather a middle school science teacher. Please be gentle. The marching soldiers analogy has been a really good one for explaining why a change of direction is caused by hitting the boundary of a new medium at a non-normal angle. In a pre-Covid time, you could actually have students do this in the classroom. Very fun. Except... what does the pole locking the marchers together represent? Yes, it's a wavefront, but what makes that wavefront stay together in shape? I've never seen that explained. I think I understand that being in the same phase is what makes that surface of points a wavefront, but is this a descriptive definition or a prescriptive one? What keeps a wavefront together in the same shape, in perpetuity? Suppose we are in a universe where change in direction does not happen passing through a new medium. We start the marching soldier analogy: Alice is connected to Bob by a pole, and Alice hits the slower medium first because they are approaching the medium-boundary at a non-normal angle. We would then say that holding the same pole as Alice makes Bob slow down too even though he's still in the faster medium because that is the only way to preserve direction— the thing we're observing in this supposed universe. We would just say that of the analogy: the pole does it, they're connected (sharing information about velocity?), so Bob slows down too. But why? Maybe the pole has some property that keeps it from turning, so Bob has to slow down too. Maybe in this universe, the definition of a wavefront— or whatever this universe calls the pole— is the set of all points where the wave stays in the same direction. That does not happen in our universe, so we don't have to answer that. In our universe, it is still the analogical pole that does it. (Is it possible that saying the "pole does it" is doing too much work in my understanding? Where are the cause and effect happening in the real world as opposed to the analogy?)
In our universe, a wavefront is the set of all points in a wave that are at the same phase. Okay. Alice slows down in the new medium, but Bob at the other end of the pole in the faster medium does not, so the pole turns. But why is there a pole? Why is the wavefront staying in phase through its shape when that's just the definition of what the wavefront is? It feels a bit like circular reasoning to me: the wavefront is changing direction because the wavefront... is a set of points in a wave that we observe to be changing direction? Does this question make any sense?? It is obviously something I don't understand about what a wavefront is... can anybody explain it? Why does being in the same phase lock you in shape? It seems like information has to be shared between photons for this to work, so that's my hint that the answer is getting quantum. But please explain it? I am comfortable with pop-sci explanations of quantum electrodynamics, what's the leap I need? Or maybe it's not quantum and it's actually much simpler than I make it out to be? Essentially, what makes the wavefront a prescriptive thing? The wavefront is a set of points in the same phase, what makes them stay that shape (we also just accept the pole doesn't shrink, bend, or deform in any way) even when a part of it starts changing speed? Answer: As you know, the peaks and troughs of water waves are examples of wavefronts. Wavefronts are lines or surfaces along which particles are oscillating in phase, so asking why a wavefront stays in phase would indeed invite the response 'by definition'. But it would not be illogical to ask why a straight or plane wavefront stays this shape when the wave passes into a new medium. When we are presented with a good picture of water waves entering shallower water from deeper, across a straight interface, it's easy enough to explain the change in angle of the wavefronts and their getting closer together, in terms of the speed change.
It seems perfectly reasonable, given straightness of both the incident waves and the interface, that the refracted wavefronts should also be straight. You can back this up, by noting that the extra time spent in the 'slower' medium is proportional to the distance along the wavefront. This is one sort of 'explanation'. What my last paragraph has avoided is explaining how incident wavefronts actually travel and get to be the refracted wavefronts. This is more difficult, and invites the general question: how do waves travel, and specifically, how do straight wavefronts travel (in a quasi two dimensional medium)? Answer (a) We represent the wave mathematically and solve the equations that result. Answer (b) (less rigorous) We use Huygens' Principle, which starts by regarding all points on a straight wavefront as acting like 'point sources' and producing circular wavefronts. We then draw an 'envelope' around these 'wavelets'. This is not difficult to apply to refraction; it used to be on A-level syllabuses in the UK. But I'm afraid that a pole that takes an active part, as opposed to merely showing the position of the wavefront, has no analogue in the propagation of real waves, as far as I know.
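The 'extra time spent in the slower medium is proportional to the distance along the wavefront' argument above can be turned into a tiny Huygens-style computation: trace the two ends of a straight wavefront to the interface, grow a circular wavelet from each crossing point, and read the refracted angle off their common tangent. A sketch with arbitrary speeds and angle (all numbers are made up for illustration):

```python
import math

v1, v2 = 3.0, 2.0            # wave speeds in the upper (fast) and lower (slow) media
theta1 = math.radians(35.0)  # angle of incidence, measured from the normal
d = 1.0                      # spacing of the two "marchers" along the wavefront

# Incident travel direction (downwards, towards the interface y = 0)
u = (math.sin(theta1), -math.cos(theta1))
# Two points on one incident wavefront (the wavefront is perpendicular to u)
P = (0.0, 2.0)
Q = (P[0] + d * math.cos(theta1), P[1] + d * math.sin(theta1))

def crossing(p):
    """Where and when a marcher starting at p reaches the interface y = 0."""
    t = p[1] / (v1 * math.cos(theta1))   # time to cover the vertical distance
    return p[0] + v1 * t * u[0], t

xP, tP = crossing(P)
xQ, tQ = crossing(Q)

# Huygens: at a later common time, each crossing point has emitted a circular
# wavelet of radius v2 * (T - t_i); the refracted wavefront is their common
# tangent, whose tilt gives sin(theta2) = (radius difference) / (center spacing)
sin_theta2 = v2 * (tQ - tP) / abs(xQ - xP)

# Snell's law prediction: sin(theta1) / v1 == sin(theta2) / v2
snell = v2 * math.sin(theta1) / v1
```

The constructed wavefront angle matches Snell's law, and since v2 < v1 the wave bends toward the normal, with no pole needed: the tilt comes entirely from the head start of the wavelet that entered the slow medium first.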
{ "domain": "physics.stackexchange", "id": 73523, "tags": "optics, waves, electromagnetic-radiation, photons, refraction" }
Writing an expression as divergence (vector algebra)
Question: Let $\Psi(\mathbf{r})\equiv \Psi$ is a Fermionic field operator. And there is an expression: $$ \mathbf{\nabla} \cdot \mathbf{J_{r}} = (\nabla^2 \Psi^\dagger)(\nabla\Psi\cdot\mathbf{A}) + (\nabla \Psi^\dagger)\cdot(\nabla^2\Psi\mathbf{A}) $$ here $\mathbf{A}$ is a vector field. Is there any way to write the expression on right side as a divergence of some other expression to find value of $\mathbf{J_r}$? Answer: Our equation is \begin{equation} \boldsymbol{\nabla\cdot}\mathbf J_{\mathbf r}\boldsymbol=\left(\nabla^2 \Psi^\dagger\right)\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\mathbf A\vphantom{\nabla^2 \Psi^\dagger}\right)\boldsymbol+\left(\boldsymbol\nabla\Psi^\dagger\right)\boldsymbol\cdot\left(\mathbf A\nabla^2 \Psi\right) \tag{01}\label{01} \end{equation} The right hand side $\:\texttt{RHS}\:$ is \begin{equation} \texttt{RHS}\boldsymbol=\underbrace{\left(\nabla^2 \Psi^\dagger\boldsymbol\nabla\Psi\boldsymbol+\boldsymbol\nabla\Psi^\dagger\nabla^2 \Psi\right)}_{\boxed{1}}\boldsymbol\cdot\mathbf A \tag{02}\label{02} \end{equation} But \begin{equation} \boxed{1}\boldsymbol=\left(\nabla^2 \Psi^\dagger\boldsymbol\nabla\Psi\boldsymbol+\boldsymbol\nabla\Psi^\dagger\nabla^2 \Psi\right)\boldsymbol=\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\vphantom{\nabla^2 \Psi^\dagger}\right)\boldsymbol\nabla\Psi^\dagger\boldsymbol+\left(\boldsymbol\nabla\Psi^\dagger\boldsymbol\cdot\boldsymbol\nabla\vphantom{\nabla^2 \Psi^\dagger}\right)\boldsymbol\nabla\Psi \tag{03}\label{03} \end{equation} Using the following vector formula \begin{equation} \boldsymbol\nabla\left(\mathbf a\boldsymbol\cdot\mathbf b\right) \boldsymbol= \left(\mathbf a\boldsymbol\cdot\boldsymbol\nabla\right)\mathbf b\boldsymbol+\left(\mathbf b\boldsymbol\cdot\boldsymbol\nabla\right)\mathbf a\boldsymbol+\mathbf a\boldsymbol\times\left(\boldsymbol\nabla\boldsymbol\times\mathbf b\right)\boldsymbol+\mathbf b\boldsymbol\times\left(\boldsymbol\nabla\boldsymbol\times\mathbf a\right) 
\tag{04}\label{04} \end{equation} with $\:\mathbf a\boldsymbol\equiv\boldsymbol\nabla\Psi\:$ and $\:\mathbf b\boldsymbol\equiv\boldsymbol\nabla\Psi^\dagger\:$ we have \begin{equation} \boxed{1}\boldsymbol=\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\vphantom{\nabla^2 \Psi^\dagger}\right)\boldsymbol\nabla\Psi^\dagger\boldsymbol+\left(\boldsymbol\nabla\Psi^\dagger\boldsymbol\cdot\boldsymbol\nabla\vphantom{\nabla^2 \Psi^\dagger}\right)\boldsymbol\nabla\Psi\boldsymbol=\boldsymbol\nabla\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\Psi^\dagger\right) \tag{05}\label{05} \end{equation} since $\:\boldsymbol{\nabla\times\nabla}\Psi\boldsymbol=\boldsymbol 0\boldsymbol=\boldsymbol{\nabla\times\nabla}\Psi^\dagger\:$. From equations \eqref{02},\eqref{05} we have \begin{equation} \texttt{RHS}\boldsymbol=\boldsymbol\nabla\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\Psi^\dagger\right)\boldsymbol\cdot\mathbf A \tag{06}\label{06} \end{equation} Using the following vector formula \begin{equation} \boldsymbol\nabla\boldsymbol\cdot\left(\psi\mathbf a\right) \boldsymbol= \mathbf a\boldsymbol\cdot\boldsymbol\nabla\psi\boldsymbol +\psi\boldsymbol\nabla\boldsymbol\cdot\mathbf a \tag{07}\label{07} \end{equation} with $\:\mathbf a\boldsymbol\equiv\mathbf A\:$ and $\:\psi\boldsymbol\equiv\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\Psi^\dagger\right)\:$ we have \begin{equation} \texttt{RHS}\boldsymbol=\boldsymbol\nabla\boldsymbol\cdot\left[\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\Psi^\dagger\right)\mathbf A\right]\boldsymbol-\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\Psi^\dagger\right)\left(\boldsymbol\nabla\boldsymbol\cdot\mathbf A\vphantom{\nabla^2 \Psi^\dagger}\right) \tag{08}\label{08} \end{equation} that is finally \begin{equation} \boxed{\:\:\left(\nabla^2 \Psi^\dagger\right)\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\mathbf A\vphantom{\nabla^2 
\Psi^\dagger}\right)\boldsymbol+\left(\boldsymbol\nabla\Psi^\dagger\right)\boldsymbol\cdot\left(\mathbf A\nabla^2 \Psi\right)\boldsymbol=\boldsymbol\nabla\boldsymbol\cdot\left[\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\Psi^\dagger\right)\mathbf A\right]\boldsymbol-\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\Psi^\dagger\right)\left(\boldsymbol\nabla\boldsymbol\cdot\mathbf A\vphantom{\nabla^2 \Psi^\dagger}\right)\vphantom{\dfrac{\dfrac{a}{b}}{\dfrac{a}{b}}}\:\:} \tag{09}\label{09} \end{equation} So note that this expression could be a divergence if $\:\boldsymbol\nabla\boldsymbol\cdot\mathbf A\boldsymbol=0$ (this reminds us the Coulomb gauge). Also if $\:\left(\boldsymbol\nabla\Psi\boldsymbol\cdot\boldsymbol\nabla\Psi^\dagger\right)\boldsymbol=\texttt{constant}\:$ then this is identically zero.
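The standard vector formula (07), $\nabla\cdot(\psi\mathbf a) = \mathbf a\cdot\nabla\psi + \psi\,\nabla\cdot\mathbf a$, used in the last step can be spot-checked numerically with central finite differences; the polynomial test fields below are arbitrary choices made only for the check:

```python
# Spot-check of formula (07): div(psi * A) = A . grad(psi) + psi * div(A),
# using central finite differences on arbitrary polynomial test fields.

def psi(p):
    x, y, z = p
    return x * x * y + y * z * z + 3.0 * z

def A(p):
    x, y, z = p
    return (x * z, y * y, x * y + z)

h = 1e-5

def partial(f, p, i):
    """Central-difference partial derivative of scalar f along axis i."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(q1) - f(q2)) / (2 * h)

def divergence(F, p):
    return sum(partial(lambda q, j=j: F(q)[j], p, j) for j in range(3))

p0 = (0.7, -1.2, 0.5)

lhs = divergence(lambda q: tuple(psi(q) * c for c in A(q)), p0)
grad_psi = [partial(psi, p0, i) for i in range(3)]
rhs = sum(a * g for a, g in zip(A(p0), grad_psi)) + psi(p0) * divergence(A, p0)
```

For smooth polynomial fields the two sides agree to within the finite-difference truncation error, well below 1e-6 here.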
{ "domain": "physics.stackexchange", "id": 83319, "tags": "field-theory, vector-fields" }
Rewriting Scala code in object-oriented style to reduce repetitive use of similar functions
Question: I need help in rewriting my code to be less repetitive. I am used to procedural, not object-oriented, coding. My Scala program is for Databricks. How would you combine cmd 3 and 5? Does this involve using polymorphism? My notebook first imports parquet staging files in parallel. Then it runs notebooks in parallel. I am repeating my parallel function, tryNotebookRun, twice, but for different scenarios. ////cmd 1 // Set Environment var client = "client" var storageAccount = "storageaccount" var container = client + "-dl" // Connect to Azure DataLake spark.conf.set( "fs.azure.account.key." + storageAccount + ".dfs.core.windows.net", dbutils.secrets.get(scope = storageAccount, key = storageAccount) ) // Set database spark.sql("USE " + client) ////cmd 2 //import needed packages import scala.concurrent.duration._ import scala.concurrent.{Future, blocking, Await} import scala.concurrent.ExecutionContext import scala.language.postfixOps import scala.util.control.NonFatal import scala.util.{Try, Success, Failure} import java.util.concurrent.Executors import com.databricks.WorkflowException import collection.mutable._ import scala.collection.mutable.Map ////cmd 3 ///this part set up functions and class for importing stg parquet files as spark tables in parallel // the next two functions are for retry purpose.
if running a process fail, it will retry def tryRun (path: String, schema: String, table: String): Try[Any] = { Try{ dbutils.fs.rm(s"dbfs:/user/hive/warehouse/$client.db/$schema$table", true) spark.sql(s"DROP TABLE IF EXISTS $schema$table") var df = sqlContext.read.parquet(s"$path/$schema$table/*.parquet") df.write.saveAsTable(schema + table) } } def runWithRetry(path: String, schema: String, table: String, maxRetries: Int = 3) = { var numRetries = 0 while (numRetries < maxRetries){ tryRun(path, schema, table) match { case Success(_) => numRetries = maxRetries case Failure(_) => numRetries = numRetries + 1 } } } case class tableInfo(path: String, schema: String, table: String) def parallelRuns(tableList: scala.collection.mutable.MutableList[tableInfo]): Future[Seq[Any]] = { val numRunsInParallel = 5 // If you create too many notebooks in parallel the driver may crash when you submit all of the jobs at once. // This code limits the number of parallel notebooks. implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(numRunsInParallel)) Future.sequence( tableList.map { item => Future { runWithRetry(item.path, item.schema, item.table) } .recover { case NonFatal(e) => s"ERROR: ${e.getMessage}" } } ) } ////cmd 4 ///Load STG data in the format of Parquet files from Data Lake to Databrick //Variables val schema = "STG" val dataFolder = List(schema) var tableCollection = MutableList[tableInfo]() //List of data to be added val tableList = List( "AdverseEvents", "Allergies" ) for (table <- tableList){ for (folder <- dataFolder){ var path = s"abfss://$container@$storageAccount.dfs.core.windows.net/$folder" var a = tableInfo(path, schema, table) tableCollection += a } } val res = parallelRuns(tableCollection) Await.result(res, 3000000 seconds) // this is a blocking call. res.value ////cmd 5 ///this part set up functions and class for running cdm notebooks in parallel /// the next two functions are for retry purpose. 
if running a process fail, it will retry def tryNotebookRun (path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String]): Try[Any] = { Try( if (parameters.nonEmpty){ dbutils.notebook.run(path, timeout, parameters) } else{ dbutils.notebook.run(path, timeout) } ) } def runWithRetry(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String], maxRetries: Int = 3) = { var numRetries = 0 while (numRetries < maxRetries){ tryNotebookRun(path, timeout, parameters) match { case Success(_) => numRetries = maxRetries case Failure(_) => numRetries = numRetries + 1 } } } case class NotebookData(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String]) def parallelNotebooks(notebooks: Seq[NotebookData]): Future[Seq[Any]] = { val numNotebooksInParallel = 5 // If you create too many notebooks in parallel the driver may crash when you submit all of the jobs at once. // This code limits the number of parallel notebooks. implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(numNotebooksInParallel)) val ctx = dbutils.notebook.getContext() Future.sequence( notebooks.map { notebook => Future { dbutils.notebook.setContext(ctx) runWithRetry(notebook.path, notebook.timeout, notebook.parameters) } .recover { case NonFatal(e) => s"ERROR: ${e.getMessage}" } } ) } ////cmd 6 //run notebooks in parallel val notebooks = Seq( NotebookData("AUDAdverseEvents", 0, Map("client"->client)), NotebookData("AUDAllergies", 0, Map("client"->client)) ) val res = parallelNotebooks(notebooks) Await.result(res, 3000000 seconds) // this is a blocking call. res.value Answer: in tryRun()/runWithRetry(): Naming is extremely poor. The function name should show what it does. "Run" what? The comment doesn't even help -- it just says "a process". What does it do?
- There is no need to use a Try/Success/Failure construct as you are capturing neither the result of the process on success nor the exception on failure. You can just use the standard try/catch keywords.
- You don't specify the return type of runWithRetry() - what's your intention? How is the caller supposed to know if it succeeded or failed?
- Why don't you use the case class tableInfo as a parameter to these?
- More importantly from an OO point of view, why aren't these methods defined inside the class?

in parallelRuns():
- Why are you using (explicitly, in fact) a MutableList? The list is not mutated, and seems to have no need to be mutated.
- Why are you explicitly defining the parallelism instead of just using .par to get a Parallel Collection?
- What is the comment about "notebooks" referring to?
- What is the .recover {} clause supposed to do? The Future, as written, can't fail.

in cmd 4:
- Please use yield inside the for to create a list, instead of building up the list item by item.

in cmd 5:
- Create a class that does the necessary steps, and then have two things that inherit from it, specializing as necessary.
{ "domain": "codereview.stackexchange", "id": 40076, "tags": "object-oriented, scala, apache-spark" }
Expected behaviour when cmd_vel == 0
Question: I've just finished writing a hardware interface for my custom robot. It finally performs as hoped, but I'm not sure what cmd_vel == 0 m/s should actually cause. Does it mean:
1. Stop powering the robot's wheels, allowing it to coast downhill (i.e. the robot is not powered), or
2. Hold the robot still, as if it were being actively braked (i.e. robot velocity = 0 m/s)?
Maybe it's just personal preference, but I am very interested in hearing what the ROS official convention is. Originally posted by georgeknowlden on ROS Answers with karma: 13 on 2019-09-09 Post score: 0 Original comments Comment by pasindu_sandima on 2020-11-30: Hi, I have the same question. I used the diff drive controller in my robot using the velocity joint interface. When I place the robot on a ramp and apply 0 cmd_vel, it coasts down the ramp. But I want to achieve the other behavior, in which it brakes itself so it won't coast down. Any help on how to achieve that? Comment by gvdhoorn on 2020-12-01: Please do not post follow-up questions as an answer to already answered questions. Your question will have very low visibility. I'd suggest to post a new question, clearly describing your desired behaviour and referring to this one. Edit: seems you already did: #q366698. Answer: For all robots I've worked with, wrote my own hw interfaces for or have used hw interfaces of others with, cmd_vel was always used to dictate the state in which the system should be (ie: messages encode the desired state or set point). For those robots a command of 0 m/s meant: motors powered, but not rotating. This has made sense to me, as by convention cmd_vel carries setpoints in the form of geometry_msgs/Twist, which contains the following comment: This expresses velocity in free space broken into its linear and angular parts. So, Twists sent to a mobile base platform encode a body relative set of linear and angular velocities.
These are then typically mapped onto joint space velocities for wheels in case of a wheeled mobile base such that 'the robot' (or mobile base) attains the desired attitude. Following this, a Twist carrying only zeroes would encode for a 0 m/s linear and 0 rad/s angular state in the body local reference frame, or in other words: a non-moving robot. Your other option (unpowered or backdrivable actuators) would lead to a non-zero state in case of "coast[ing] downhill" (as the encoders, which will probably be present to support velocity control, will register a non-zero displacement). That would lead to a velocity error, which a controller would probably try to rectify (by braking or applying a corrective velocity). Edit: I don't believe there is a REP that documents or standardises this particular aspect (as in: in a robot-agnostic manner), but there is REP 119: Specification for TurtleBot Compatible Platforms, which in the section called TurtleBot Node Core API writes: Subscribed Topics cmd_vel (geometry_msgs/Twist) The desired velocity of the robot. The type of this message is determined by the drive_mode parameter. Default is geometry_msgs/Twist. which seems to confirm my experience and intuition. As almost all use of cmd_vel seems to follow this convention (as authors of early nodes looked to existing implementations to match their own against, and TurtleBot and PR2 were the most prominent ones), I believe only your second alternative would be the correct one. Edit 2: there is also REP 147: A Standard interface for Aerial Vehicles, which doesn't directly deal with cmd_vel or its semantics, but discusses something similar in the Rate Interface section: The command is a body relative set of accelerations in linear and angular space. Note that this talks about 'accelerations', but the idea and use is similar to cmd_vel and velocities for wheeled robots.
Originally posted by gvdhoorn with karma: 86574 on 2019-09-09 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by georgeknowlden on 2019-09-09: Thank you for such a detailed response. This is exactly the information I was looking for, and it makes perfect sense. Comment by gvdhoorn on 2019-09-09: I believe that if you'd actually want to achieve something like backdrivability, stopping any active controller could be used to achieve that (in combination with some support in your hardware_interface). You can stop and start controllers with ros_control by calling the appropriate service on your instance of the ControllerManager. A combination of prepareSwitch(..) and doSwitch(..) in your hardware_interface could probably be used to determine whether there is any active controller, and when there isn't, you could enable some special mode in your hardware that allows for coasting or backdrivability.
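The 'controller rectifies the velocity error' behaviour described above is easy to see in a toy simulation: put a 1-D robot on a slope and compare pure coasting against a velocity loop tracking a cmd_vel setpoint of 0. The PI loop and all the numbers below are arbitrary illustration values, not ros_control code:

```python
# Toy 1-D robot on a slope: gravity pulls it downhill; a PI velocity loop
# tracks the cmd_vel setpoint. All numbers are arbitrary illustration values.
dt = 0.01
g_slope = 1.5        # m/s^2, gravity component along the slope
setpoint = 0.0       # cmd_vel == 0 m/s

# (a) unpowered / backdrivable: the robot just coasts downhill
v_coast = 0.0
for _ in range(5000):
    v_coast += g_slope * dt            # 50 s of free acceleration

# (b) powered, with an active velocity controller
kp, ki = 8.0, 20.0
v, integral = 0.0, 0.0
for _ in range(5000):
    err = setpoint - v                 # non-zero whenever the robot moves
    integral += err * dt
    effort = kp * err + ki * integral  # braking effort whenever v > setpoint
    v += (g_slope + effort) * dt
```

After 50 s the coasting robot has picked up tens of m/s, while the controlled robot sits at essentially zero velocity: the zero setpoint is actively enforced, which is the second of the two alternatives in the question.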
{ "domain": "robotics.stackexchange", "id": 33742, "tags": "navigation, ros-melodic" }
Combinatorial sum in a problem with a Fermi gas
Question: I'm solving a problem involving a Fermi gas. There is a specific sum I cannot figure my way around. A set of equidistant levels, indexed by $m=0,1,2 \ldots$, is populated by spinless fermions with population numbers $\nu_m = 0$ or $1$. I need to compute the following sum over the set of all possible configurations $\{ \nu_l \}$: $Q(\beta,\beta_c) = \sum_{\{ \nu_l \}} \sum_{l} \prod_m \exp({\beta_c \, l \, \nu_l}-{ [ \beta \, m + i \phi] \, \nu_m} )$. Any hints on how to deal with this are appreciated. This is not homework, it is a research problem. It is known that $\beta >0$, $\beta_c>0$, and $\phi \in [0; 2 \pi ]$. EDIT: corrected with the complex phase (the sum is coming from a generating function) Answer: This is not an answer, just some thoughts from playing with the expression. I've read the question before you included the phase, so for now let $\phi = 0$ (sorry if this makes my response useless for you). I'll simply write $Z$ instead of your $Q(\beta, \beta_c)$ and also drop the arguments where obvious. Denote by $Z_{abc\dots}$ the partition function where we do not include the sites at $a, b, c, \dots$ in the problem. Also denote $f_k = 1 + \exp(-\beta k)$ and $g_k = 1 + \exp((\beta_c - \beta) k)$. Now (unless I screwed up), by summing over the site at $k$ we can get the relation $$Z = f_kZ_k + g_k \sum_{\nu \setminus k} \prod_{m \neq k} \exp(-\beta m \nu_m) $$ and iterating it $$Z = \left( \prod_{m \in abc\dots z} f_m \right) Z_{abc\dots z} + $$ $$ \left(g_a f_b \dots f_z + f_a g_b \dots f_z + \cdots + f_a f_b \dots g_z \right) \sum_{\nu \setminus abc\dots z} \prod_{m \neq abc \dots z} \exp(-\beta m \nu_m).$$ It is a simple observation that for the reduced system consisting of a single level $a$ we get $Z_{bc \ldots z} = g_a$, so the first term above gives a similar contribution to the other terms (all but one of the factors are $f$ and one of them is $g$).
Therefore, we can write $$Z = \left( \prod_{m} f_m \right) \left( \sum_k \frac{g_k}{f_k} \right).$$ These expressions are exact in case we have a finite number of states. Otherwise they are just formal and are to be understood as limits only if everything converges.
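With the interpretation used above ($\phi = 0$, and the product over $m$ acting only on the $e^{-\beta m \nu_m}$ factors), the closed form $Z = \left(\prod_m f_m\right)\sum_k g_k/f_k$ can be checked against brute-force enumeration over all occupation configurations of a small finite set of levels:

```python
import math
from itertools import product

beta, beta_c = 0.7, 0.3
levels = range(4)          # a small finite set of levels m = 0..3

# Brute force: sum over all occupation configurations and over l
brute = 0.0
for nu in product((0, 1), repeat=len(levels)):
    boltzmann = math.exp(-beta * sum(m * nu[m] for m in levels))
    for l in levels:
        brute += math.exp(beta_c * l * nu[l]) * boltzmann

# Closed form from the answer: Z = (prod_m f_m) * (sum_k g_k / f_k)
f = [1 + math.exp(-beta * m) for m in levels]
g = [1 + math.exp((beta_c - beta) * m) for m in levels]
closed = math.prod(f) * sum(gk / fk for gk, fk in zip(g, f))
```

For a fixed $l$, the sum over configurations factorizes into $g_l$ at level $l$ and $f_m$ elsewhere, which is exactly how the closed form arises; the two numbers agree to machine precision.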
{ "domain": "physics.stackexchange", "id": 1474, "tags": "mathematical-physics, mathematics, fermions" }
Which of Rosenblatt's papers describes the perceptron training algorithm?
Question: I struggle to find Rosenblatt's perceptron training algorithm in any of his publications from 1957 - 1961, namely: Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms The perceptron: A probabilistic model for information storage and organization in the brain The Perceptron — A Perceiving and Recognizing Automaton Does anyone know where to find the original learning formula? Answer: The paper (or report) that formally introduced the perceptron is The Perceptron — A Perceiving and Recognizing Automaton (1957) by Frank Rosenblatt. If you read the first page of this paper, you can immediately understand that's the case. In particular, at some point (page 2, which corresponds to page 5 of the pdf), he writes Recent theoretical studies by this writer indicate that it should be feasible to construct an electronic or electromechanical system which will learn to recognize similarities or identities between patterns of optical, electrical, or tonal information, in a manner which may be closely analogous to the perceptual processes of a biological brain. The proposed system depends on probabilistic rather than deterministic principles for its operation, and gains its reliability from the properties of statistical measurements obtained from large populations of elements. A system which operates according to these principles will be called a perceptron. See also Appendix I (page 19, which corresponds to page 22 of the pdf). The paper The perceptron: A probabilistic model for information storage and organization in the brain (1958) by F. Rosenblatt is apparently an updated and nicer version of the original report. A more accessible (although not the most intuitive) description of the perceptron model and its learning algorithms can be found in the famous book Perceptrons: An Introduction to Computational Geometry (expanded edition, third printing, 1988) by Minsky and Papert (from page 161 onwards).
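For readers hunting for the rule itself: the update commonly taught as Rosenblatt's perceptron algorithm adjusts the weights only on a misclassification, by (learning rate) x (target - prediction) x input. The sketch below is that textbook formulation in modern notation, not a transcription of the 1957 report:

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation > 0 else 0
            # the update fires only on a misclassification (err is -1, 0 or +1)
            err = target - predicted
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# A linearly separable toy task: the OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

On linearly separable data such as this, the perceptron convergence theorem guarantees the loop stops making mistakes after finitely many updates.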
{ "domain": "ai.stackexchange", "id": 1783, "tags": "neural-networks, machine-learning, reference-request, perceptron" }
Which distribution of ROS do I have?
Question: Which distribution of ROS do I have? I ask because, when installing the lightweight simulator from the tutorial Understanding ROS nodes, I could not install the package: my distribution is supposedly hydro, but it says there is no such package or file. I'm working with Ubuntu 12.04. Thanks for your help and attention. Error message: $ sudo apt-get install ros-<hydro>-ros-tutorials Error bash: hydro: No such file or directory Originally posted by felipe on ROS Answers with karma: 1 on 2014-01-30 Post score: -2 Original comments Comment by ahendrix on 2014-01-30: Can you update your question with the exact command and error message that you're seeing? Comment by felipe on 2014-01-30: command sudo apt-get install ros--ros-tutorials Error bash: hydro: No such file or directory Comment by gustavo.velascoh on 2014-01-30: Did you install ROS following the instructions in http://wiki.ros.org/hydro/Installation/Ubuntu? Comment by gustavo.velascoh on 2014-01-30: The command should be `$ sudo apt-get install ros-hydro-ros-tutorials` Comment by felipe on 2014-01-30: gustavo.velascoh thank you very much for your great help and I could really solve my problem with the help of gustavo.velascoh you are all very friendly. good evening Answer: When replacing <distro> with the name of your distribution, the angle brackets typically delimit the thing to be replaced, and should be removed in the final command. Thus, you should be running: sudo apt-get install ros-hydro-ros-tutorials Originally posted by ahendrix with karma: 47576 on 2014-01-30 This answer was ACCEPTED on the original site Post score: 8
{ "domain": "robotics.stackexchange", "id": 16836, "tags": "ros" }
Integer Quantum Hall effect, scattering
Question: I'm confused about the scattering mechanism in the Integer Quantum Hall effect. I often read the statement that on a Hall plateau the particles can't scatter, since an integer number of Landau levels is occupied, so the particles have no available state to scatter into. So then I don't understand the origin of the Hall resistivity. Why do we always have non-zero Hall resistivity, even if no scattering is allowed? The origin of the Hall resistivity is scattering, right? Greetings Answer: In short The origin of the Hall resistivity is scattering, right? No. The origin of the plateaux in the Hall resistivity/conductance is due to the lack of scattering because of dissipationless flow. Otherwise you'd have the normal (non quantum) Hall effect. $R_H$ just arises from a transverse voltage because of charge accumulation in the direction $\perp$ to the current flow. Why do we always have non-zero Hall resistivity, even if no scattering is allowed? Because the Hall resistance is defined as $R_H = V_H/I$, where $V_H$ is the transverse voltage but $I$ is the "normal" current. Hall resistance along $y$ does not impede the current flowing along $x$. In depth As a reminder, here is the plot of the Hall (transverse) resistance $R_H$ and the longitudinal resistance $R_x$. The battery leads are along the $x$ (longitudinal) direction. So if there are free carriers, the current $I$ will be along $x$. The flow is said to be dissipationless if $R_x = 0$. The transverse resistance $R_H$ is just defined as $V_H/I$, where $V_H$ is the transverse voltage. The larger the current, the more charge accumulation in the transverse direction and hence the larger $V_H$, which in turn increases the value of $R_H$. At the plateaux of $R_H$, the Landau levels are filled. There are no available states for particles to scatter into. They can only do the skipping orbit at the edge, which is hence dissipationless, which is why $R_x = 0$. At the jump, Landau levels are not full.
More final states are available for the electrons, so they can scatter in random directions. This introduces dissipation in the "usual" ohmic way. That is why $R_x \neq 0$ at the jumps.
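To put numbers on the plateaux (standard values, not specific to this answer): on the $\nu$-th plateau $R_H = h/(\nu e^2)$, i.e. integer fractions of the von Klitzing constant:

```python
# Quantized Hall resistance R_H = h / (nu * e^2) on the nu-th plateau.
h = 6.62607015e-34   # Planck constant, J s (exact in the 2019 SI)
e = 1.602176634e-19  # elementary charge, C (exact in the 2019 SI)

R_K = h / e**2       # von Klitzing constant, ~25812.807 ohm
for nu in range(1, 5):
    print(f"nu = {nu}: R_H = {R_K / nu:.3f} ohm")
```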
{ "domain": "physics.stackexchange", "id": 66957, "tags": "quantum-mechanics, solid-state-physics, quantum-hall-effect" }
Need help in understanding these relational algebra queries
Question: I just finished reading about operators in relational algebra and tried to solve this problem. But I don't even understand what the first statement is doing. From what I know, $π$ (project) is a unary operator that takes a single relation and selects some attributes from all of the attributes (columns) of that relation. Then what does $\pi_{R-S,S}(r)$ mean? Also, why have they used $r$ as the argument to the project operator? Shouldn't it be $R$ as that is the name of the relation given in the problem? Let R and S be relational schemes such that $R={a,b,c}$ and $S={c}$. Now consider the following queries on the database: $\pi_{R-S}(r) - \pi_{R-S} \left (\pi_{R-S} (r) \times s - \pi_{R-S,S}(r)\right )$ $\left\{t \mid t \in \pi_{R-S} (r) \wedge \forall u \in s \left(\exists v \in r \left(u = v[S] \wedge t = v\left[R-S\right]\right )\right )\right\}$ $\left\{t \mid t \in \pi_{R-S} (r) \wedge \forall v \in r \left(\exists u \in s \left(u = v[S] \wedge t = v\left[R-S\right]\right )\right ) \right\}$ Select R.a,R.b From R,S Where R.c = S.c Which of the above queries are equivalent? 1 and 2 1 and 3 2 and 4 3 and 4 Answer: Recall the definition of schema: The name of a relation and the set of attributes for a relation is called schema. An example of schema of a Movies relation is: Movies(title, year, length, genre) In the question, $r$, $s$ represent relation names; $R$, $S$ represent schema. Thus, we have $R = r(a,b,c)$ and $S = s(c)$. A schema consists of a set of attributes. Notation $R - S$ means the set difference between $R$'s set of attributes and $S$'s set of attributes. Thus, we have $R - S = \{a,b\}$. Now we have $\Pi_{R-S,S}(r) = \Pi_{a,b,c}(r)$. Back to the question: let's work with the following example. Suppose $r$ has two tuples, $(1,3,5)$ and $(2,4,8)$, with the first value corresponding to $a$, the second to $b$, and the third to $c$. $s$ has two tuples: $(5)$, $(5)$.
Note we use bag semantics, which aligns with SQL semantics. Let's consider option 1 first. $$ \begin{align*} \Pi_{R-S}(r) - \Pi_{R-S}(\Pi_{R-S}(r)\times s - \Pi_{R-S,S}(r)) &= \Pi_{a,b}(r) - \Pi_{a,b}(\Pi_{a,b}(r)\times s - \Pi_{a,b,c}(r)) \\ &= \Pi_{a,b}(r) - \Pi_{a,b}(\Pi_{a,b}(r)\times s - r) \end{align*} $$ Evaluating $\Pi_{a,b}(r)\times s$ leads to the tuples: $(1,3,5)$, $(1,3,5)$, $(2,4,5)$, $(2,4,5)$. Then, $\Pi_{a,b}(r)\times s - r$ means take out all the tuples that show up in $r$, which leads to $\{(2,4,5),(2,4,5)\}$. Then $\Pi_{a,b}(\Pi_{a,b}(r)\times s - r)$ leads to $\{(2,4),(2,4)\}$. $\Pi_{a,b}(r)$ is $\{(1,3),(2,4)\}$. The set difference between $\{(1,3),(2,4)\}$ and $\{(2,4),(2,4)\}$ is $\{(1,3)\}$, which is the result after evaluating option 1. Now, let's consider option 2. $$ \{t | t \in \Pi_{R-S}(r) \land \forall u \in s (\exists v \in r (u = v[S] \land t = v[R-S]))\} $$ We use the following notation: $v$ is a tuple in $r$; $t \in \Pi_{a,b}(r)$ means that $t$ can be $(1,3)$ or $(2,4)$; $u$ is a tuple in $s$. Let's focus on the $\forall u \in s (\exists v \in r (u = v[S] \land t = v[R-S]))$ part. Because the condition ranges over every $u$: when $u = (5)$, there is a $v$: $(1,3,5)$ that satisfies the requirement: $v[S] = v[c] = (5) = u$ and $v[R-S] = v[\{a,b\}] = (1,3) = t$ where $t = (1,3) \in \Pi_{R-S}(r)$. Thus $(1,3)$ belongs to the result set of option 2. Now we look at the second $u = (5)$ in $s$ and by the same analysis above, we see $(1,3)$ belongs to the result set of option 2. Thus, the result set of option 2 is $\{(1,3),(1,3)\}$. Let's consider option 3. $$ \{t | t \in \Pi_{R-S}(r) \land \forall v \in r (\exists u \in s (u = v[S] \land t = v[R-S]))\} $$ Option 3 has the same set of notation as option 2. Let's focus on $\forall v \in r (\exists u \in s (u = v[S] \land t = v[R-S]))$: for $v = (1,3,5)$, there is a $u = (5)$ such that the requirement is satisfied: $v[S] = v[c] = (5) = u$ and $v[R-S] = (1,3) = t$.
Thus, $(1,3)$ belongs to the final result set of option 3. For $v = (2,4,8)$, there is no such $u$ satisfying the constraint. Thus, the final result set of option 3 is $\{(1,3)\}$. Let's consider option 4. The SQL is evaluated to $\{(1,3),(1,3)\}$. From the above example, we can see that option 1 and option 2 are not equivalent, and option 2 and option 3 are not equivalent. Now, let's consider another example where $r$ has tuples $\{(1,3,5),(2,4,6)\}$, with the first value corresponding to $a$, the second to $b$, and the third to $c$, and $s$ has tuples $\{(5),(6),(7)\}$. By repeating the same evaluation as we did with the first example, we can see option 1 is evaluated to $\emptyset$ whereas option 3 is evaluated to $\{(1,3),(2,4)\}$. Thus, option 1 and option 3 are not equivalent. By elimination, option 2 and option 4 are equivalent.
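The hand evaluation above can be mechanized. Below is a small Python sketch of my own, using lists as bags; the `minus` helper drops every copy of a tuple occurring in the second bag, matching the reading used in this answer:

```python
from itertools import product

# Example relations from the answer: r over (a, b, c), s over (c,).
r = [(1, 3, 5), (2, 4, 8)]
s = [(5,), (5,)]

def project_ab(rel):          # pi_{a,b}
    return [(t[0], t[1]) for t in rel]

def times(rel1, rel2):        # cross product under bag semantics
    return [t1 + t2 for t1, t2 in product(rel1, rel2)]

def minus(rel1, rel2):        # drop every copy of any tuple occurring in rel2
    drop = set(rel2)
    return [t for t in rel1 if t not in drop]

# Option 1: pi_ab(r) - pi_ab( pi_ab(r) x s - r )
opt1 = minus(project_ab(r), project_ab(minus(times(project_ab(r), s), r)))

# Option 2, counted once per u in s, as in the evaluation above
opt2 = [(v[0], v[1]) for u in s for v in r if v[2] == u[0]]

# Option 3: one tuple per v in r that matches some u in s
opt3 = [(v[0], v[1]) for v in r if any(v[2] == u[0] for u in s)]

print(opt1, opt2, opt3)  # -> [(1, 3)] [(1, 3), (1, 3)] [(1, 3)]
```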
{ "domain": "cs.stackexchange", "id": 16644, "tags": "database-theory, relational-algebra" }
can't locate node [my_imu] in package [imu_localisation]
Question: I've been trying to make a simple cpp code that does the double integration for an IMU to return the estimated position. The package imu_localisation is listed in rospack list, and roslaunch finds the package and the launch file as well. However when I try to launch it I am hit with: ERROR: cannot launch node of type [imu_localisation/my_imu]: can't locate node [my_imu] in package [imu_localisation] I've tried catkin_make and source devel/setup.bash in that order multiple times to no avail. The error occurs when I run roslaunch imu_localisation localise.launch The following is the my_imu.cpp code. #include "ros/ros.h" #include "std_msgs/String.h" #include "sensor_msgs/Imu.h" #include "geometry_msgs/Vector3.h" #include "nav_msgs/Odometry.h" sensor_msgs::Imu prev_imu; void imuCallback(const sensor_msgs::Imu::ConstPtr& imu){ prev_imu.linear_acceleration.x = imu->linear_acceleration.x; prev_imu.linear_acceleration.y = imu->linear_acceleration.y; prev_imu.linear_acceleration.z = imu->linear_acceleration.z; } int main(int argc, char **argv){ ros::init(argc, argv, "my_imu"); double samplePeriod = 0.02; ros::NodeHandle nh; geometry_msgs::Vector3 vel,prev_vel,pos,prev_pos; ros::Publisher pos_pub=nh.advertise<nav_msgs::Odometry>("/odom",1); ros::Subscriber imu_sub=nh.subscribe("/imu/data" , 50, imuCallback); ros::Rate loop_rate(50); while(ros::ok()){ std_msgs::String msgs; if(prev_vel.x == 0){ prev_vel.x = prev_imu.linear_acceleration.x * samplePeriod; prev_vel.y = prev_imu.linear_acceleration.y * samplePeriod; prev_vel.z = prev_imu.linear_acceleration.z * samplePeriod; } // vel.x = prev_vel.x + prev_imu.linear_acceleration.x * samplePeriod; vel.y = prev_vel.y + prev_imu.linear_acceleration.y * samplePeriod; vel.z = prev_vel.z + prev_imu.linear_acceleration.z * samplePeriod; ROS_INFO("vel x : %f", vel.x); pos.x = prev_pos.x + vel.x * samplePeriod; pos.y = prev_pos.y + vel.y * samplePeriod; pos.z = prev_pos.z + vel.z * samplePeriod; ROS_INFO("pos x : %f", pos.x); 
prev_vel = vel; prev_pos = pos; loop_rate.sleep(); } } And here is my launch file localise.launch <launch> <node pkg="imu_localisation" name="my_imu" type="my_imu" output="screen"> </node> </launch> And the CMakeLists.txt cmake_minimum_required(VERSION 2.8.3) project(imu_localisation) find_package(catkin REQUIRED COMPONENTS geometry_msgs roscpp rospy sensor_msgs std_msgs ) catkin_package() include_directories( ${catkin_INCLUDE_DIRS} ) add_executable(imu_localisation src/my_imu.cpp) add_dependencies(imu_localisation ${imu_localisation_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS}) target_link_libraries(imu_localisation ${catkin_LIBRARIES} ) Any pointers? I'm sure I'm missing something dead simple here. Originally posted by Aerugo2272 on ROS Answers with karma: 3 on 2019-07-26 Post score: 0 Answer: There is a small error in your CMakeLists.txt file: ## Declare a C++ executable ## With catkin_make all packages are built within a single CMake context ## The recommended prefix ensures that target names across packages don't collide add_executable(imu_localisation src/my_imu.cpp) This section is telling catkin_make to build the src/my_imu.cpp code into a node called imu_localisation; your launch file is looking for a node called my_imu, which is never created. If you update this line as shown below and rebuild your package this should start working. add_executable(my_imu src/my_imu.cpp) Hope this helps. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2019-07-27 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Aerugo2272 on 2019-07-29: Thanks that worked! Small note, had to change the name in "add_dependencies" and "target_link_libraries" as well to my node name rather than the package name catkin_create_package gives it by default. Comment by PeteBlackerThe3rd on 2019-07-30: That is very true, I missed that one. Glad it's fixed.
{ "domain": "robotics.stackexchange", "id": 33529, "tags": "ros-melodic, roslaunch, catkin-make" }
Multi-channel mean field theory
Question: I have always been confused about the theoretical foundation of the mean field approximation. Below I follow the book Many-body Quantum Theory in Condensed Matter Physics by Bruus and Flensberg, Chapter 4. For an interaction term $AB$ in a Hamiltonian, the mean field decoupling is done by ignoring higher order fluctuations of $A, B$ around some mean field average: $$ \begin{align*} AB &= [\underbrace{ (A - \langle A \rangle) }_{\text{fluctuation}} + \langle A \rangle] [\underbrace{ (B - \langle B \rangle) }_{\text{fluctuation}} + \langle B \rangle] \\ &\approx (A - \langle A \rangle) \langle B \rangle + \langle A \rangle (B - \langle B \rangle) + \langle A \rangle \langle B \rangle \\ &= A \langle B \rangle + \langle A \rangle B - \langle A \rangle \langle B \rangle \tag{4.7} \end{align*} $$ However, in practice, we are usually decoupling 4 operators; thus we have multiple ways (called channels by physicists) to group them into two pairs, and perform the above decoupling. For example, consider the Hartree-Fock approximation for a system of fermions $$ \begin{align*} H &= H_0 + V_{\text{int}} \tag{4.18a} \\ H_0 &= \sum_\nu \xi_\nu c^\dagger_\nu c_\nu \tag{4.18b} \\ V_{\text{int}} &= \frac{1}{2} \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'} c^\dagger_\nu c^\dagger_\mu c_{\mu'} c_{\nu'} \tag{4.18c} \end{align*} $$ The Hartree mean field decoupling of $V_{\text{int}}$ is (I am being sloppy about possible $\{c, c^\dagger\} = 1$ when commuting the $c, c^\dagger$ operators) $$ \begin{align*} V^H_{\text{int}} &= \frac{1}{2} \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'} (c^\dagger_\nu c_{\nu'}) (c^\dagger_\mu c_{\mu'}) \\ &= \frac{1}{2} \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'}\Big[ c^\dagger_\nu c_{\nu'} \langle c^\dagger_\mu c_{\mu'} \rangle + \langle c^\dagger_\nu c_{\nu'} \rangle c^\dagger_\mu c_{\mu'} - \langle c^\dagger_\nu c_{\nu'} \rangle \langle c^\dagger_\mu c_{\mu'} \rangle \Big] \tag{4.22} \end{align*} $$ The Fock mean field decoupling of
$V_{\text{int}}$ is $$ \begin{align*} V^F_{\text{int}} &= -\frac{1}{2} \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'} (c^\dagger_\nu c_{\mu'}) (c^\dagger_\mu c_{\nu'}) \\ &= -\frac{1}{2} \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'}\Big[ c^\dagger_\nu c_{\mu'} \langle c^\dagger_\mu c_{\nu'} \rangle + \langle c^\dagger_\nu c_{\mu'} \rangle c^\dagger_\mu c_{\nu'} - \langle c^\dagger_\nu c_{\mu'} \rangle \langle c^\dagger_\mu c_{\nu'} \rangle \Big] \tag{4.23} \end{align*} $$ In principle, we also have a third way of decoupling in the Cooper pairing channel, but it is often simply ignored when one does not care about superconductivity: $$ \begin{align*} V^C_{\text{int}} &= \frac{1}{2} \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'} (c^\dagger_\nu c^\dagger_\mu) (c_{\mu'} c_{\nu'}) \\ &= \frac{1}{2} \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'}\Big[ c^\dagger_\nu c^\dagger_\mu \langle c_{\mu'} c_{\nu'} \rangle + \langle c^\dagger_\nu c^\dagger_\mu \rangle c_{\mu'} c_{\nu'} - \langle c^\dagger_\nu c^\dagger_\mu \rangle \langle c_{\mu'} c_{\nu'} \rangle \Big] \end{align*} $$ Then the Hartree-Fock mean field Hamiltonian, with $V^C_{\text{int}}$ omitted, is constructed as $$ H_{\text{HF}} = H_0 + V_{\text{int}}^H + V_{\text{int}}^F \tag{4.24} $$ Question 1: Why do we directly add $V_{\text{int}}^H$ and $V_{\text{int}}^F$? Why is this not a double-counting? Say, why not use $(V_{\text{int}}^H + V_{\text{int}}^F)/2$? Or even more radically, $\alpha V_{\text{int}}^H + \beta V_{\text{int}}^F$ with $\alpha + \beta = 1$?
Question 2: By Wick's theorem, we can decompose $$ \begin{align*} & c^\dagger_\nu c^\dagger_\mu c_{\mu'} c_{\nu'} \\ &= N[c^\dagger_\nu c^\dagger_\mu c_{\mu'} c_{\nu'}] \\ &\quad + N[c^\dagger_\nu c_{\nu'}] c^{\dagger \bullet}_\mu c^\bullet_{\mu'} + c^{\dagger \bullet}_\nu c^\bullet_{\nu'} N[c^\dagger_\mu c_{\mu'}] + c^{\dagger \bullet}_\nu c^\bullet_{\nu'} c^{\dagger \circ}_\mu c^\circ_{\mu'} && \text{(Hartree)} \\ &\quad - N[c^\dagger_\nu c_{\mu'}] c^{\dagger \bullet}_\mu c^\bullet_{\nu'} - c^{\dagger \bullet}_\nu c^\bullet_{\mu'} N[c^\dagger_\mu c_{\nu'}] - c^{\dagger \bullet}_\nu c^\bullet_{\mu'} c^{\dagger \circ}_\mu c^\circ_{\nu'} && \text{(Fock)} \\ &\quad + N[c^\dagger_\nu c^\dagger_\mu] c^\bullet_{\mu'} c^\bullet_{\nu'} + c^{\dagger\bullet}_\nu c^{\dagger\bullet}_\mu N[c_{\mu'} c_{\nu'}] + c^{\dagger\bullet}_\nu c^{\dagger\bullet}_\mu c^\circ_{\mu'} c^\circ_{\nu'} && \text{(Cooper)} \end{align*} $$ Here $N$ is the normal-ordering symbol (all operators are at equal time, so time-ordering is omitted); contracted pairs of operators are indicated by bullets (following the notation in Wikipedia). Different ways to contract the operators correspond to the 3 channels described above. Can we rigorously relate the mean field decoupling to Wick's theorem? References that establish the relation between them are also welcome. Answer: While decoupling is physically intuitive, I find that the variational approach to mean field the most consistent mathematically. It relies on Bogoliubov's inequality and is even presented in wikipedia Mean Field. 
The idea is that when you want to calculate the canonical ensemble of Hamiltonian $H$ at temperature $T$, you can use its defining variational principle, namely that it is the ensemble minimising the free energy: $$ F = \langle H\rangle-TS $$ In QM, the ensemble is typically represented by a density matrix $\rho$, so: $$ \langle H\rangle = Tr (\rho H) \\ S = -Tr(\rho\ln\rho) $$ and you can check that the minimum is indeed given by: $$ \rho = \frac{1}{Z}e^{-\beta H} \\ Z = Tr(e^{-\beta H}) $$ The idea is that since you cannot do the calculation for the original $H$, you restrict yourself to a family of parametrised ensembles, and find the parameters minimising the free energy. The condition of minimising $F$ (or practically being a stationary point) is precisely the self-consistent equation of mean field. This is the finite temperature version of the variational method for estimating the ground state of a quantum system. In practice, these families of ensembles are canonical ensembles of quadratic Hamiltonians, which you can easily solve and use Wick's theorem (this is where it comes into play) to calculate any expected values. Note that since only $\rho$ is relevant for the variational principle, adding an overall constant to the Hamiltonian ansatz is irrelevant. Indeed, it gets cancelled out by dividing by the partition function. Say in general that you are interested in the Hamiltonian: $$ H = H_0+V_{int} $$ and you replace $V_{int}$ by a simpler term $V_{mf}$ depending on various parameters $\lambda$ to be optimised using the variational principle. In the case of the form $V_{mf}=\lambda X$, the self consistent equation gives the intuitive result: $$ \lambda = \frac{\partial \langle V_{int}\rangle_{mf} }{\partial \langle X\rangle_{mf}} \tag{1} $$ with the average $\langle ...\rangle_{mf}$ taken using the canonical ensemble of $H_{mf} = H_0+V_{mf}$. Question 1 This is not double counting.
The Hartree mean field is about using the ansatz: $$ V_{mf} = \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'}\left[\lambda_{\mu\mu'}^Hc^\dagger_\nu c_{\nu'} + \lambda_{\nu\nu'}^Hc^\dagger_\mu c_{\mu'}\right] $$ with the $\lambda_{\mu\mu'}^H,\lambda_{\nu\nu'}^H$ freely varying parameters, while the Fock mean field is about using the ansatz: $$ V_{mf} = \sum_{\nu,\nu'} \sum_{\mu,\mu'} V_{\nu\mu\mu'\nu'}\left[\lambda_{\mu'\nu}^Fc^\dagger_\mu c_{\nu'} + \lambda_{\mu\nu'}^Fc^\dagger_\nu c_{\mu'}\right] $$ with the $\lambda_{\mu\mu'}^F,\lambda_{\nu\nu'}^F$ freely varying parameters. For a general $V_{\nu\mu\mu'\nu'}$, these two ansätze are different, so it is not redundant. Question 2 Wick's theorem now comes from the fact that you want to calculate $\langle V_{int}\rangle_{mf}$. The average is obtained by doing all contractions possible, theoretically giving you all three channels. However, depending on your mean field ansatz, some of them will trivially vanish, which is why oftentimes not all of them are accounted for. Note that calculating the average using Wick's theorem and applying the self consistent equation $(1)$, you obtain the correct values of the parameters, namely the corresponding expected value. Hope this helps. Example Including hopping and interactions, consider the following Hamiltonian: $$ H = H_0+V_{int} \\ H_0 = \sum_{x,y} h_{xy}c_x^\dagger c_y \\ V_{int} = \frac{1}{2}\sum_{x,y} V_{xy}c^\dagger_xc^\dagger_yc_yc_x $$ with $h_{xy}$ real symmetric and $V_{xy}=V_{yx}$ real. Say you want to approximate by the mean field Hamiltonian: $$ H_{mf} = H_0+V^H_{mf}+V^F_{mf} \\ V^H_{mf} = \sum_x \lambda_x c_x^\dagger c_x \\ V^F_{mf} = \sum_{x\neq y} \mu_{xy}c_x^\dagger c_y $$ with $\lambda$ real parameters and $\mu$ complex parameters satisfying $\mu_{xy} = \mu_{yx}^*$, to be determined by the variational principle.
You then have: $$ \begin{align} \langle V_{int}\rangle_{mf} &= \frac{1}{2}\sum_{x,y} V_{xy}\langle c^\dagger_xc^\dagger_y c_yc_x\rangle_{mf} \\ &= \frac{1}{2}\sum_{x,y} V_{xy}\left[\langle c^\dagger_xc_x\rangle_{mf} \langle c^\dagger_y c_y\rangle_{mf} -\langle c^\dagger_xc_y\rangle_{mf} \langle c^\dagger_y c_x\rangle_{mf}\right] \\ \end{align} $$ From $(1)$ I obtain: $$ \begin{align} \lambda_x &= \sum_{y\neq x}V_{xy}\langle c^\dagger_yc_y\rangle_{mf} \\ \mu_{xy} &= -V_{xy}\langle c^\dagger_yc_x\rangle_{mf} \end{align} $$ so the mean field Hamiltonian is: $$ H_{mf} = H_0+\sum_{x\neq y}\left[V_{xy}\langle c^\dagger_yc_y\rangle_{mf} c^\dagger_xc_x-V_{xy}\langle c^\dagger_yc_x\rangle_{mf} c_x^\dagger c_y\right] $$ which is equivalent to the decoupling method. Note that since $H_{mf}$ is defined up to an irrelevant additive constant, you don't need to add the fully contracted term.
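To see the self-consistency in action, here is a minimal numerical sketch (my own toy model, not from the book or the answer above): a two-site spinless-fermion problem with hopping $t$ and repulsion $V$, iterating the Hartree field $\lambda_x = V\langle n_y\rangle$ and the Fock field $\mu_{xy} = -V\langle c_y^\dagger c_x\rangle$ to a fixed point:

```python
import numpy as np

# Toy model (an illustrative assumption): 2 sites, 1 spinless fermion,
# H0 = -t (c1^dag c2 + h.c.), V_int = V n1 n2.
t, V = 1.0, 2.0
n = np.array([0.7, 0.3])   # initial guess for the densities <n_x>
chi = 0.0                  # initial guess for <c2^dag c1> (real here)

for _ in range(200):
    # Mean-field single-particle Hamiltonian: Hartree shift on the
    # diagonal, hopping renormalized by the Fock term off the diagonal.
    h_mf = np.array([[V * n[1], -t - V * chi],
                     [-t - V * chi, V * n[0]]])
    w, U = np.linalg.eigh(h_mf)
    psi = U[:, 0]                 # occupy the lowest single-particle level
    n_new = psi**2
    chi_new = psi[1] * psi[0]     # <c2^dag c1> in this state
    if np.allclose([*n_new, chi_new], [*n, chi], rtol=0, atol=1e-10):
        break                     # self-consistency reached
    n, chi = n_new, chi_new

print(n.round(6))  # -> [0.5 0.5]: the symmetric self-consistent solution
```

For these parameters the symmetric solution is stable and the loop converges in a few dozen iterations; stronger repulsion can in principle favour a symmetry-broken fixed point instead.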
{ "domain": "physics.stackexchange", "id": 92869, "tags": "quantum-field-theory, many-body, wick-theorem, mean-field-theory" }
Proof of decidability of type checking of calculus of (co)inductive constructions?
Question: I often see it asserted that type checking is decidable for CIC, but I haven't seen it proven. Is there a good paper (or simple demonstration) of this? Answer: I found another reference that goes through a detailed proof of the decidability of typechecking for systems of dependent types up to the CIC: Chapter 2 of Advanced Topics in Types and Programming Languages: Dependent Types, David Aspinall & Martin Hofmann. As you probably know, the proof of decidability is conditional on decidability of $\beta$-equality, which itself is implied by the normalization of the calculus. The proof of that statement is significantly more difficult, partly because it implies consistency of the logical system.
{ "domain": "cstheory.stackexchange", "id": 3772, "tags": "type-theory, decidability, calculus-of-constructions" }
Types which correspond to sets of cardinality of continuum
Question: Are types which correspond to sets with the cardinality of the continuum possible in MLTT (or in any other constructive theory)? At first sight, they aren't, since elements of types are terms and we have only a countable number of terms. Does it mean that sets with the cardinality of the continuum really don't exist and are just a nice abstraction which simplifies mathematics a lot? Answer: You must be careful here. You are using set-theoretic concepts (cardinal, continuum) outside set theory. There is potential for confusion. Your question can be understood in several ways. Maybe you are asking whether there can be uncountably many terms of a given type. The answer is: obviously not since there are only countably many finite strings, and each term is a string (or a finite tree if we think of abstract syntax). You might be asking this question because you claim that "the elements of types are terms". This, in my opinion, is a very damaging view of types. It is like saying that the elements of $\mathbb{R}$ are only certain expressions which denote real numbers. Another possibility is that you are asking whether there is a model of type theory in which some of the types are interpreted as uncountable sets. The answer is yes, for example the set-theoretic type model in which types are sets. In this case $\mathtt{nat} \to \mathtt{bool}$ has the power of continuum because it is the set of all infinite boolean sequences. You could be asking whether inside type theory we can prove that there are types of the cardinality of continuum. In this case the question does not make sense because the notion "cardinality of continuum" is something that only makes sense in set theory. You need to rephrase it so that it makes sense in type theory, but there are complications. Cardinality just does not behave the same way in type theory as it does in set theory. For example, you cannot show that cardinals (whatever you think they are) are linearly ordered.
But we can still define special cases. Thus we can define the notion of an uncountable type: Definition uncountable (A : Type) := forall f : nat -> A, exists x : A, forall n : nat, ~ (f n = x). In words, $A$ is uncountable if for every sequence $f : \mathbb{N} \to A$ there exists $x : A$ such that $x$ is not in the image of $f$. The Baire space $\mathbb{N}^\mathbb{N}$ is uncountable: Theorem baire_uncountable : uncountable (nat -> nat). Proof. intro f. exists (fun n => S (f n n)). intro n. intro E. absurd (f n n = S (f n n)). - auto. - pattern (f n) at 1. rewrite E. reflexivity. Qed. So, classically this would mean that the cardinality of the Baire space is larger than $\aleph_0$, but as I said, cardinalities inside type theory, and in constructive mathematics in general, behave a lot less nicely than in set theory.
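The computational content of the Coq proof above is just Cantor's diagonal argument; here is a plain Python sketch of the same construction:

```python
# Given any enumeration f : nat -> (nat -> nat), the diagonal function
# n |-> f(n)(n) + 1 differs from f(n) at input n, so f misses it.
def diagonal(f):
    return lambda n: f(n)(n) + 1

# Example enumeration: f(n) is the constant function with value n.
f = lambda n: (lambda m: n)
g = diagonal(f)

# g escapes the enumeration: g(n) != f(n)(n) for every n checked.
assert all(g(n) != f(n)(n) for n in range(100))
```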
{ "domain": "cstheory.stackexchange", "id": 2695, "tags": "lo.logic, type-theory, constructive-mathematics" }
rosmsg tutoral error
Question: I was working through the tutorials on rosmsg and encountered this error. I went back and followed the instructions on the previous tutorials, but cannot seem to find the problem. At the end it says that the manifest must not contain exec_depend, but the tutorial told me to include it. What am I doing wrong? ddimassa@ddimassa-ThinkPad-T470:~/catkin_ws/src/beginner_tutorials$ rosmsg show beginner_tutorials/Num Traceback (most recent call last): File "/opt/ros/lunar/bin/rosmsg", line 35, in <module> rosmsg.rosmsgmain() File "/opt/ros/lunar/lib/python2.7/dist-packages/rosmsg/__init__.py", line 754, in rosmsgmain sys.exit(rosmsg_cmd_show(ext, full, command)) File "/opt/ros/lunar/lib/python2.7/dist-packages/rosmsg/__init__.py", line 619, in rosmsg_cmd_show rosmsg_debug(rospack, mode, arg, options.raw) File "/opt/ros/lunar/lib/python2.7/dist-packages/rosmsg/__init__.py", line 450, in rosmsg_debug print(get_msg_text(type_, raw=raw, rospack=rospack)) File "/opt/ros/lunar/lib/python2.7/dist-packages/rosmsg/__init__.py", line 427, in get_msg_text package_paths = _get_package_paths(p, rospack) File "/opt/ros/lunar/lib/python2.7/dist-packages/rosmsg/__init__.py", line 554, in _get_package_paths results = find_in_workspaces(search_dirs=['share'], project=pkgname, first_match_only=True, workspace_to_source_spaces=_catkin_workspace_to_source_spaces, source_path_to_packages=_catkin_source_path_to_packages) File "/opt/ros/lunar/lib/python2.7/dist-packages/catkin/find_in_workspaces.py", line 143, in find_in_workspaces source_path_to_packages[source_path] = find_packages(source_path) File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 86, in find_packages packages = find_packages_allowing_duplicates(basepath, exclude_paths=exclude_paths, exclude_subspaces=exclude_subspaces, warnings=warnings) File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 146, in find_packages_allowing_duplicates xml, filename=filename, warnings=warnings) File 
"/usr/lib/python2.7/dist-packages/catkin_pkg/package.py", line 587, in parse_package_string raise InvalidPackage('Error(s) in %s:%s' % (filename, ''.join(['\n- %s' % e for e in errors]))) catkin_pkg.package.InvalidPackage: Error(s) in /home/ddimassa/catkin_ws/src/beginner_tutorials/package.xml: - The manifest (with format version 1) must not contain the following tags: exec_depend Originally posted by ddimassa on ROS Answers with karma: 1 on 2017-12-07 Post score: 0 Answer: This is due to the package.xml being version 1 (as it says in the last line), and not version 2. I don't have lunar running, but catkin_create_pkg in kinetic automatically creates a Version 2 package.xml. If you followed the tutorials previous to rosmsg, it should have been created automatically and correctly (at least in kinetic). Change the <package> tag (line 1 or 2 of the package.xml) to <package format="2"> and you should be good to go. Originally posted by mgruhler with karma: 12390 on 2017-12-08 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 29553, "tags": "ros, tutorial, rosmsg" }
Why is Sodium Transported Across the Cation Exchange Membrane in the Chlor Alkali Process?
Question: In the Chlor Alkali Process, a sodium ion is transported across a cation exchange membrane to create NaOH in the cathode chamber: The half-reactions are: 2Cl− → Cl2 + 2e− and 2H2O + 2e− → H2 + 2OH− and the full reaction is: 2NaCl + 2H2O → Cl2 + H2 + 2NaOH Why does the sodium ion get transported across the membrane? Sodium isn't involved in the anode/cathode reactions, so there's no concentration gradient driving it. I understand that the cathode reaction produces OH-, and that there is now a charge imbalance so that the sodium ion crosses in order to maintain electroneutrality. But how can this be described mathematically? I don't see how the standard mass conservation/Nernst-Planck equations can describe this motion. Answer: There is a rather interesting explanation as to why only the sodium ion can pass through a cation exchange membrane while the chloride ion is repelled by it. Let us recall some properties: a) Ion exchangers are very good electrical conductors. This was well studied in the 1940s. The current is carried, not by electrons, but by mobile ions in ion exchange resins. In a cation exchanger, current is carried by mobile cations. b) Imagine that the cation exchanger consists of a sulfonated polymer. Let us call that R-SO3H, where H is mobile and the R-SO3(-) unit is immobile. c) Ignore the flow of the chlor-alkali cell for the time being. d) The punch line is that the bulk solution has to be electrically neutral all the time. Now consider an electrolytic cell which consists of NaCl solution. There is a partition of a cation exchange membrane in between. On the left there is an anode where chloride ions are being oxidized, and on the right, water is being reduced to hydrogen. Right hand side: When water is being reduced, neutral hydrogen gas is leaving the solution, leaving behind negatively charged hydroxide ions. There is a charge imbalance in the bulk which cannot exist. What has to happen?
Na(+) ions must cross the cation exchanger from the left hand side to ensure there is no bulk charge imbalance. Now chloride cannot pass through the ion exchange membrane because the R-SO3(-) group has a negative charge. All it can do is repel the negatively charged chloride ion. You can read more about the Donnan effect and the Donnan potential. Left hand side: You are losing negatively charged chloride ions as neutral chlorine gas. There is an excess of positive Na(+) ions there. The positive ions have to cross the membrane in order to maintain electrical neutrality in the bulk and eventually join the free hydroxide ions. As a combined effect, the right hand side becomes a solution of NaOH (contaminated with NaCl) and the concentration of NaCl on the left hand side depletes.
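The charge bookkeeping behind this argument can be made quantitative with Faraday's law: one mole of electrons passed corresponds to one mole of Na(+) crossing the membrane and one mole of OH(-) produced at the cathode. A small sketch (the charge value is just an illustrative number):

```python
# Faraday bookkeeping for the chlor-alkali cell.
F = 96485.0      # Faraday constant, C/mol
Q = 2 * F        # charge passed; 2 mol of electrons as an example

n_e    = Q / F   # moles of electrons
n_Cl2  = n_e / 2 # 2 Cl- -> Cl2 + 2 e-
n_H2   = n_e / 2 # 2 H2O + 2 e- -> H2 + 2 OH-
n_NaOH = n_e     # one OH- formed per electron ...
n_Na   = n_e     # ... and one Na+ crossing to keep both sides neutral

print(n_Cl2, n_H2, n_NaOH, n_Na)  # -> 1.0 1.0 2.0 2.0
```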
{ "domain": "chemistry.stackexchange", "id": 14496, "tags": "electrochemistry, electrolysis" }
Example of time-dependent constant of motion in classical mechanics
Question: In classical mechanics texts, when learning about Poisson brackets, one gets $\frac{df}{dt} = \{f,H\} +\frac{\partial f}{\partial t}$, where $H$ is the Hamiltonian of the system, and for $\frac{df}{dt}=0$, $f$ is a constant of motion. It is taught that if there is no explicit time-dependence in $f$, then $\{f,H\}=0$. However, I am just wondering if there is any obvious example for which $f$ has an explicit time-dependence. Answer: Example: a free particle in 1D: $$H=\frac{p^2}{2m}.$$ Two constants of motion are $$p\quad\text{ and }\quad x-\frac{p}{m}t.$$ The latter depends explicitly on $t$.
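The claim that $x - pt/m$ is conserved is easy to verify symbolically; a quick sketch with sympy:

```python
import sympy as sp

x, p, t, m = sp.symbols('x p t m')
H = p**2 / (2 * m)      # free-particle Hamiltonian
f = x - p * t / m       # candidate constant of motion

# Poisson bracket {f, H} = (df/dx)(dH/dp) - (df/dp)(dH/dx)
pb = sp.diff(f, x) * sp.diff(H, p) - sp.diff(f, p) * sp.diff(H, x)
total = sp.simplify(pb + sp.diff(f, t))   # df/dt = {f, H} + partial_t f
print(total)  # -> 0
```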
{ "domain": "physics.stackexchange", "id": 73503, "tags": "classical-mechanics, conservation-laws, hamiltonian-formalism, poisson-brackets, integrals-of-motion" }
Is there an analogue of Maxwell's equations in $2+2$ dimensions?
Question: I'm quite familiar with Maxwell's equations in the context of real Lorentzian manifolds in $1+2$ and $1+3$ dimensions. But then is there an analogue of Maxwell's equations in $2+2$ dimensions? How about for real pseudo-Riemannian manifolds of dimension $p+q$, i.e. of signature $(p,q)$? Somehow I think the answer is no because maybe Lorentz invariance gets destroyed, though I'm not certain. Answer: The Maxwell equation can be fairly simply written as $$\nabla_{\mu} F^{\mu\nu} = 0$$ This expression does not depend in any way on the metric signature, and as such can be used with any metric signature. In particular, the gauge fixing of the equation does not depend on the signature, as far as I know, so that we can rewrite it as $$\Box A_\mu - {R^\nu}_{\rho} A^\rho = 0$$ Of course, Lorentz invariance will be lost, and replaced with invariance under the group $O(p,q)$. Let's see what this implies : $p = 1$, $q = n-1$ This is the classical case of the Lorentzian manifold. In this case, the principal part of the PDE is a hyperbolic system. This can be generally shown to have a single solution for initial $A_\mu$, $\partial_\nu A_\mu$, if the spacetime is globally hyperbolic and everything is regular enough. $p = n - 1$, $q = 1$ The opposite case, for which we can show easily that it's equal to $$- \Box A_\mu + {R^\nu}_{\rho} A^\rho = 0$$ with the same solutions. $p = 0$, $q = n$ The Riemannian case, in which case we just end up with the Laplace equation. $$\Delta_g A_\mu - {R^\nu}_{\rho} A^\rho = 0$$ with $\Delta_g$ the Laplace-Beltrami operator. It is fairly well known that Cauchy-type initial conditions are generally too constraining to solve elliptic equations, hence they do not really correspond to what we typically expect of physics. 
An interesting treatment of physics in the Riemannian case can be found here : https://web.archive.org/web/20170318151343/http://www.gregegan.net/ORTHOGONAL/ORTHOGONAL.html Note that in general, it's always possible to have closed timelike curves in a Riemannian space (you can just turn around), as can be shown by the fact that it is invariant under the rotation group $O(n)$. Hence we can "boost" to the frame $(x,t) \to (-x, -t)$. This makes for pretty bad things. The same is true for the case $p = n$, $q = 0$, simply by taking the opposite sign. $p > 1$, $q > 1$ This is the ultrahyperbolic case, with more than one timelike dimension. Much like in the Riemannian case, since any timelike plane forms a Riemannian submanifold, there are always closed timelike curves in it. The ultrahyperbolic wave equation tends to either have no solution or non-unique solutions, and they are in general unstable. A good review on the topic can be found here : http://rspa.royalsocietypublishing.org/content/465/2110/3023
{ "domain": "physics.stackexchange", "id": 42132, "tags": "electromagnetism, maxwell-equations, spacetime-dimensions" }
Publicly available genome sequence database for viruses?
Question: As a small introductory project, I want to compare genome sequences of different strains of influenza virus. What are the publicly available databases of influenza virus gene/genome sequences? Answer: There are a few different influenza virus database resources: The Influenza Research Database (IRD) (a.k.a. FluDB - based upon URL) A NIAID Bioinformatics Resource Center or BRC which highly curates the data brought in and integrates it with numerous other relevant data types The NCBI Influenza Virus Resource A sub-project of the NCBI with data curated over and above the GenBank data that is part of the NCBI The GISAID EpiFlu Database A database of sequences from the Global Initiative on Sharing All Influenza Data. Has unique data from many countries but requires users to agree to a data sharing policy. The OpenFluDB Former GISAID database that contains some sequence data that GenBank does not have. For those who also may be interested in other virus databases, there are: Virus Pathogen Resource (VIPR) A companion portal to the IRD, which hosts curated and integrated data for most other NIAID A-C virus pathogens including (but not limited to) Ebola, Zika, Dengue, Enterovirus, and Hepatitis C LANL HIV database Los Alamos National Laboratory HIV database with HIV data and many useful tools for all virus bioinformatics PaVE: Papilloma virus genome database (from quintik comment) NIAID developed and maintained Papilloma virus bioinformatics portal Disclaimer: I used to work for the IRD / VIPR and currently work for NIAID.
{ "domain": "bioinformatics.stackexchange", "id": 66, "tags": "database, public-databases, covid-19, genome-sequencing, sars-cov-2" }
What model is suitable for classification of a small data set?
Question: I have a dataset that consists of 365 records, and I want to apply a classification model to it (binary classification). As an output, in addition to the classification labels, I want to retrieve the classification confidence for each instance. I don't know how to deal with such a case. Can I use, for example, linear classifiers (SVM, logistic regression) with this small dataset? Because I want to retrieve the classification confidence as well. I read that decision trees can be a good classifier for small datasets, but how can I retrieve the classification confidence with them? The dataset consists of tweets, each classified as positive or negative (from a sentiment perspective), and my feature vector consists of 2400 features (a combination of word2vec embeddings and other features). Also, do you recommend using word2vec embeddings with such a small dataset? I think the classifier can't learn anything from them using a small dataset. Answer: The question whether to use a linear classifier depends less on the number of samples you have in your dataset and more on whether your dataset is linearly separable (by the way, SVMs can be non-linear with the kernel trick). Now with regards to confidence in the classification: in SVMs there is a method that calculates the probability that a given sample belongs to a particular class using Platt scaling ("Original Paper"). This is the approach that is used in sklearn's SVM confidence implementation. You can read more about it in the following link: How To Compute Confidence Measure For SVM Classifiers In both SVMs and logistic regression models you can calculate the distance of a sample from the decision boundary and treat it as a confidence measurement (but it is not exactly that).
With decision trees I'm not an expert but a similar question was posted and answered in the following link: Decision tree, how to understand or calculate the probability/confidence of prediction result I would strongly recommend using some known embedding method like the word2vec, since as you mentioned, your dataset is too small for your model to be able to properly learn an encoding of context and vocabulary from.
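To make the Platt-scaling idea concrete, here is a minimal pure-Python sketch (my illustration, not the linked sklearn implementation; with sklearn you would simply use SVC(probability=True) and predict_proba). It fits a sigmoid P(y=1|s) = sigmoid(a*s + b) to decision scores s by gradient descent on the log loss, turning raw distances from the boundary into probability-like confidences:

```python
import math

def fit_platt(scores, labels, lr=0.5, iters=2000):
    """Fit P(y=1 | s) = sigmoid(a*s + b) to decision scores by batch
    gradient descent on the logistic log loss. This is the core idea of
    Platt scaling (the real method adds target smoothing and uses a
    second-order optimizer)."""
    a, b = 0.0, 0.0
    n = len(scores)
    for _ in range(iters):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s   # d(log loss)/da
            gb += (p - y)       # d(log loss)/db
        a -= lr * ga / n
        b -= lr * gb / n
    return lambda s: 1.0 / (1.0 + math.exp(-(a * s + b)))

# Toy decision scores (think: signed distances from an SVM hyperplane),
# with one overlapping pair so the data is not perfectly separable.
scores = [-3.0, -2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 3.0]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
prob = fit_platt(scores, labels)
# Far from the boundary the mapped confidence is high; near it, about 0.5.
assert prob(3.0) > 0.9 and prob(-3.0) < 0.1
assert abs(prob(0.0) - 0.5) < 0.1
```

The same sigmoid mapping is what "distance from the border as a confidence" amounts to, once calibrated on held-out scores.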
{ "domain": "datascience.stackexchange", "id": 4675, "tags": "classification, multiclass-classification" }
Using a loose array just to specify the object properties
Question: I'm using this array to specify the properties I want to operate on. I didn't use for..in because there's a property that doesn't follow the same pattern. const data = {}; [ 'teaching_levels', 'axes', 'accessibility_resources', 'contents', 'resources', ].forEach((name) => { data[name] = { options: results[name], values: req.query[name], }; }); // This property is different data.licenses = results.licenses; Is it alright to have an array like that? Answer: I would say, yes, definitely. That saves a lot of manual code writing and makes it easier to change in the event you need to change all of them. The only difference I would probably make is, add your different properties when you create the object, that will not only slightly improve speed, but will also make the code a tiny bit smaller. const data = { licenses: results.licenses }; [ 'teaching_levels', 'axes', 'accessibility_resources', 'contents', 'resources', ].forEach(name => data[name] = { options: results[name], values: req.query[name], });
{ "domain": "codereview.stackexchange", "id": 28650, "tags": "javascript" }
Flickering of map in rviz (gmapping)
Question: I am doing navigation on ROS, and the map published by gmapping is flickering. Please check the given link so you will get a rough idea. Is there any TF issue? Originally posted by rohanmore26 on ROS Answers with karma: 11 on 2018-02-13 Post score: 1 Answer: It looks like you have multiple nodes publishing conflicting transforms between the odometry and map frames. You should run roswtf and rosrun tf view_frames, and see if either tool complains about multiple publishers for the same transform. Originally posted by ahendrix with karma: 47576 on 2018-02-13 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 30030, "tags": "ros-indigo" }
Editing Answers on answers.ros.org
Question: [originally asked to Tully via email] What strategies are used for editing answers on answers.ros.org? I've seen a number of edited answers (like this one) and I've never understood the reasoning behind it. Is it just to consolidate the best info in the best answer, even if the same information is just a little further down the page? Originally posted by David Lu on ROS Answers with karma: 10932 on 2011-06-14 Post score: 1 Answer: Yes, the goal of answers.ros.org is to get people the best answers. Overall the website is designed to be editable by the active members of the community. And unlike a mailing list archive it is easy and desirable to update everything, as if it were a wiki. The presentation of the site is a little different from wikis in that it's structured around the question and answer format, which is good for debugging and troubleshooting, and searching, but not for general documentation. As a moderator I usually edit questions to make them more accurately phrased. Keep the formatting readable and keep the questions consistent. (For example people tend to ask questions for which the title is completely unrelated.) And also I update tags regularly, to keep them standardized, and also sometimes new associations are made in the answers. The most important part as a moderator/member is to build it up as a knowledge base so that if someone else comes to it a month later they get the best answer as quickly as possible. So occasionally I'll update answers too; generally the answers are from more experienced members so they usually need less editing. Originally posted by tfoote with karma: 58457 on 2011-06-14 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by KoenBuys on 2011-06-16: I try to edit answers/questions as the code layout makes it hard to read Comment by kk2105 on 2021-06-13: How can we get the option to "Edit" answers or questions?
Comment by tfoote on 2021-06-14: You can edit your own questions and answers. Editing others posts is reserved for moderators and others who have reached very high karma levels. Comment by kk2105 on 2021-06-14: Thanks for the answer.
{ "domain": "robotics.stackexchange", "id": 5840, "tags": "ros, answers.ros.org" }
Can a large drop in the PSD indicate the presence of a periodic noise?
Question: I was wondering if a large drop in the plot of the PSD of a sensor measurement could indicate the presence of periodic noise. Here is my PSD: Thank you. Answer: No, that's not an inference you can make. Anything periodic would, on the contrary, lead to spikes in the PSD.
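A quick numerical sanity check of that claim (pure Python, illustrative only, not tied to the asker's sensor data): a sinusoidal component concentrates power into a narrow spike at its frequency bin, it does not carve out a drop.

```python
import math

def periodogram(x):
    """Naive one-sided DFT power spectral density estimate (O(n^2),
    illustration only -- use an FFT-based routine for real data)."""
    n = len(x)
    psd = []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        psd.append((re * re + im * im) / n)
    return psd

n = 128
# A unit sinusoid at bin 10 plus a much smaller sinusoid at bin 37.
x = [math.sin(2 * math.pi * 10 * t / n)
     + 0.05 * math.cos(2 * math.pi * 37 * t / n + 1.0) for t in range(n)]
psd = periodogram(x)
peak_bin = max(range(len(psd)), key=lambda k: psd[k])
# The periodic component shows up as a spike at its own bin, not a drop.
assert peak_bin == 10
```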
{ "domain": "dsp.stackexchange", "id": 11647, "tags": "fourier-transform, noise, power-spectral-density, fourier" }
Challenges when trying to build a telescope using two lenses
Question: I have some lenses - 10 cm and 7 cm convex lenses and 4 cm and 1 cm concave lenses. I am trying to make a telescope by combining one of the convex and one of the concave lenses, but I am struggling to do so. The picture that is created using these two lenses doesn't show anything other than what can be seen with the naked eye. Is it possible to make a simple telescope using these lenses? PS: All of the numbers are lens diameters. Answer: Diameter is important, but if you're trying to get any magnification out of that system, you need to look at the focal length. If you have a very distant light source, like the Sun, focused through a convex lens, until the image of the source is as small and as clear as possible, then the focal length is the distance between the lens and the image you're creating. Like when you burn paper with the lens, focal length is the lens-paper distance. For concave lenses focal length is a little more tricky to define, but they do have a focal length anyway. If F is the focal length of the forward lens (the objective), and f is the focal length of the lens near your eye (the ocular, or the eyepiece), then the magnification of the instrument is: M = F / f So you need an objective with a long focal length, and an eyepiece with a short focal length. Both the objective and the eyepiece could be convex, actually. It's just that the image will be reversed. If the eyepiece is concave, the image will be straight-up, but the field of view of the telescope will be narrow (like looking through a peephole). If you use two convex lenses, the distance between them needs to be close to the sum of their focal lengths, F + f. Start there and adjust it slightly for best results. If you use a concave lens for the eyepiece, then the distance between lenses needs to be the difference of their focal lengths, F - f. Both lenses need to be perpendicular to their common axis, to minimize aberrations.
It helps if they are mounted in a tube, to reduce glare, but it's not absolutely necessary. They could simply be attached to a long rod or something - but you'll get a lot of glare into the eyepiece that way. Do not necessarily try to get very high magnification. If the diameter of the objective is D, measured in mm, then the maximum usable magnification is 2D. So an objective 100 mm in diameter could never provide more than 200x magnification - I mean it could, but the image would be blurry. Even then, this is a theoretical limit, and in practice the image will get pretty bad long before you reach the limit, unless you use high quality lenses designed specifically for telescopes, etc. Also, at high magnification it's difficult to hold the scope steady, and the image is jumping all over the place. Try and make a 5x ... 8x instrument first, see how that goes. A lot of binoculars are in this range. I've built several refractor telescopes from lenses when I was in high school. It's totally doable. It just takes some patience to figure out how to solve all problems.
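Putting numbers on the answer's formulas helps when planning a build. The focal lengths below are hypothetical assumptions for illustration only; the question gives diameters, not focal lengths:

```python
# Hypothetical focal lengths in mm (illustrative assumptions, not measured).
F = 500.0   # objective focal length (long)
f = 25.0    # eyepiece focal length (short)

magnification = F / f           # M = F / f
two_convex_gap = F + f          # two convex lenses (Keplerian): inverted image
concave_eyepiece_gap = F - f    # concave eyepiece (Galilean): upright image

# Rule of thumb from the answer: maximum usable magnification is about 2*D
# for an objective D mm in diameter.
D = 70.0
max_useful = 2 * D

print(magnification, two_convex_gap, concave_eyepiece_gap, max_useful)
# 20.0 525.0 475.0 140.0
```

So this hypothetical 500/25 pairing gives 20x, which is already past the relaxed 5x-8x starting range the answer recommends.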
{ "domain": "astronomy.stackexchange", "id": 2372, "tags": "telescope, amateur-observing, diy" }
Simple lower bounds against AC0
Question: It is known that $Parity \notin AC^0$ (nonuniform), but the proof is rather involved and combinatorial. Are there simpler, but weaker lower bounds, say for $NP \not \subseteq AC^0$ or $NEXP \not \subseteq AC^0$? For example, can nontrivial simplifications be obtained in the proof of $NEXP \not \subseteq ACC^0$ to deal only with the special case of $AC^0$? Answer: I guess it depends on your point of view, but the proof via approximating polynomials (along the lines of Razborov-Smolensky) that Parity isn't in AC0 is not so involved... The natural way in which one would modify the proof that "NEXP is not in ACC0" to yield "NEXP not in AC0" would be to give a SAT algorithm for AC0 circuits that beats exhaustive search. However, all known SAT algorithms of this kind actually use the same or similar techniques as the "Parity not in AC0" lower bounds, so the proof would not get any simpler. (It would be interesting to find an AC0 SAT algorithm where this is not the case.)
{ "domain": "cs.stackexchange", "id": 1036, "tags": "complexity-theory, lower-bounds, circuits" }
Can we access HDFS file system and YARN scheduler in Apache Spark?
Question: We can access the HDFS file system and the YARN scheduler in Apache Hadoop. But Spark works at a higher level of abstraction. Is it possible to access HDFS and YARN in Apache Spark too? Thanks Answer: Yes. There are examples in the official Spark documentation: https://spark.apache.org/examples.html Just put your HDFS file URI in your input file path as below (Scala syntax). val file = spark.textFile("hdfs://train_data")
{ "domain": "datascience.stackexchange", "id": 206, "tags": "bigdata, apache-hadoop" }
XML schema parsing and XML creation from flat files
Question: I am new to Python and had to create a schema parser to pull information on attributes and complex types, etc. and then convert data from flat files into the proper XML format. We are processing a lot of data and I've run into some issues once we pass around 1 million processed records. I'm looking for any suggestions on code style, best Python practices, etc. that will clean this up a bit and increase efficiency. Here is the part that pulls in the schema: import csv, ConfigParser, re, datetime, time from lxml import etree elemdict = {} parser = etree.XMLParser() data = etree.parse(open("testschema.xsd"),parser) root = data.getroot() version = root.get("version") rootelement = root[0] elements = rootelement[0][0].getchildren() for e in elements: ename = e.get("name") elemdict[ename] = [] subelements = e[0][0].getchildren() for se in subelements: elemdict[ename].append(se.attrib) specials = root.getchildren()[1:] specialtypes = {} for sp in specials: sname = sp.get("name") specialtypes[sname] = {} specialtypes[sname]["type"] = sp.tag.split('}')[-1] #removes namespace to get either complex or simple type, another option here would be to use xpath('local-name()') or remove the first 34 characters from the tag, none of them really clean options typeelements = sp.getchildren()[0].getchildren() specialtypes[sname]["details"] = [] if specialtypes[sname]["type"] == "complexType": specialtypes[sname]["requireds"] = [] for t in typeelements: specialtypes[sname]["details"].append(t.get("name")) if (not "minOccurs" in t.attrib) or int(t.get("minOccurs"))>0: specialtypes[sname]["requireds"].append(t.get("name")) else: for t in typeelements: specialtypes[sname]["details"].append(t.get("value")) That pulls all information into two dictionaries, elemdict and specialtypes. Special types includes both enums and complex types. 
Here is the code that creates the XML elements and returns, where d is the root element, datainput is the line we are obtained from the file, name is the element name, and requireds is a list of required fields for the element: def addData(d, datainput, name, requireds): try: datavals = dict((k.lower().replace(" ","_"),v) for k,v in datainput.iteritems()) #lower case all the keys for consistency except AttributeError: print "Invalid row, skipping: " + str(datainput) return for i in range(len(requireds)): rname = requireds[i]["name"] rtype = requireds[i]["type"] if rtype[0:3] != "xs:": #if no xs: prefix, element is a complex element of one of our types for j in range(len(specialtypes[rtype]["requireds"])): spname = specialtypes[rtype]["requireds"][j] if not((rname + "_" + spname) in datavals) or len(datavals[(rname + "_" + spname)])<=0: print "missing attributes for complex element " + rname + " in values " + str(datavals) + ", returning" return elif not(rname in datavals) or len(datavals[rname])<=0: print "no " + rname + " in values " + str(datavals) + ", returning" return element = etree.SubElement(d, name) element.set("id", datavals["id"]) for i in range(len(elemdict[name])): cname = elemdict[name][i]["name"] ctype = elemdict[name][i]["type"] if ctype[0:3] != "xs:" and specialtypes[ctype]["type"] == "complexType": #if the type does not start with and xml type and is a complex type specified by us validComplex = True for j in range(len(specialtypes[ctype]["requireds"])): spname = specialtypes[ctype]["requireds"][j] if not((cname + "_" + spname) in datavals) or len(datavals[(cname + "_" + spname)])<=0: validComplex = False break if validComplex: temp = etree.SubElement(element, cname) for d in specialtypes[ctype]["details"]: if (cname + "_" + d) in datavals and len(datavals[(cname + "_" + d)]) > 0: tempsubelem = etree.SubElement(temp, d) tempsubelem.text = datavals[(cname + "_" + d)] elif cname in datavals and len(datavals[cname]) > 0: temp = 
etree.SubElement(element, cname) try: if ctype == "xs:date": temp.text = str(datetime.date(*time.strptime(datavals[cname],dateformat)[0:3])) else: temp.text = datavals[cname] except ValueError: temp.text = removeNonAscii(datavals[cname]) Answer: You might want to rethink your approach in a fundamental way if you want to create millions of rows of XML. Define Python classes to contain your actual data. These must be absolutely correct, based on ordinary Python processing. No XSD-based lookup or validation or range checking or anything. Just Python. Your application data must be in Plain Old Python Objects (POPO). Write a "serializer" that creates XML from your Plain Old Python Objects. Since the Python objects are already absolutely correct, the output XML will be absolutely correct. Or locate a serializer. http://coder.cl/products/pyxser/ Django has one. https://stackoverflow.com/questions/1500575/python-xml-serializers Once you have this infrastructure of Python classes that produce XML in place, then you do the following. Rewrite your XSD parser so that it creates your Plain Old Python Object class definitions from the XSD. It's an XSD->Python translator. Now you do the bulk of your work in Python. Millions of rows in simple Python class definitions is easy. XML is merely serialized Python objects. Since the XSD is not used, this, too, is easy. Further (and most importantly) you do a one-time-only conversion of XSD to Python. Look at this Stackoverflow question for a direction to take. https://stackoverflow.com/questions/1072853/convert-xsd-to-python-class
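The POPO-to-XML direction the answer recommends can be sketched with the standard library's xml.etree.ElementTree. The Client class and its fields below are hypothetical, chosen only to echo the question's data:

```python
import xml.etree.ElementTree as ET

class Client:
    """Plain Old Python Object; the field names here are hypothetical."""
    def __init__(self, id, name, income):
        self.id = id
        self.name = name
        self.income = income

def to_xml(obj, tag, fields):
    """Serialize one POPO: an 'id' attribute plus one child per field."""
    elem = ET.Element(tag, id=str(obj.id))
    for field in fields:
        child = ET.SubElement(elem, field)
        child.text = str(getattr(obj, field))
    return elem

xml_bytes = ET.tostring(to_xml(Client(1, "Acme", 1200), "client",
                               ["name", "income"]))
# xml_bytes == b'<client id="1"><name>Acme</name><income>1200</income></client>'
```

For millions of rows you would serialize one object at a time and write each fragment to the output stream incrementally, rather than building one huge tree in memory.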
{ "domain": "codereview.stackexchange", "id": 501, "tags": "python, beginner, parsing, xml" }
Custom sorting an Excel Data Table (+ visual formatting)
Question: I'm (re)building a data table to track our clients that receive regular income payments. Specifically, I need to pull this data into other workbooks for other reports, and since I was here anyway I decided to upgrade it and anticipate its future growth. A sample of the data table (minus sensitive data): My code finds the Table Range, determines the location of the sort columns, sorts the table using a 2-level custom sort (then A-Z by name), and then does some visual formatting. There is a sheet for every year and a button on each sheet, all linking to the same Macro, which operates on the Active Sheet. These days, my main focus is on Maintainability (by me or someone else). In essence, if you got hired and were handed this as a thing to maintain, what would you be thinking as you read through it? (There are a few standard methods not included. You may safely assume they do what they say they do) Module "A1_Public_Variables" Option Explicit Public Const TOP_LEFT_CELL_STRING As String = "Client Name" Public Const CLIENT_NAME_HEADER As String = "Client Name" Public Const INCOME_AMOUNT_HEADER As String = "Income" Public Const PAYMENT_FREQUENCY_HEADER As String = "Frequency" Public Const PAYMENT_DAY_HEADER As String = "Date Paid" Public Const BASE_MONTH_HEADER As String = "Base Month" Public Const ASCENTRIC_WRAPPER_HEADER As String = "Wrapper" Public Const ASCENTRIC_ACCOUNT_NUMBER_HEADER As String = "Ascentric Acc #" Public Const ACCOUNT_TO_PAY_FROM_HEADER As String = "Account to pay from?" Module "B1_Sort_Button_Click" Option Explicit Sub BtnSort_Click() '/==================================================================================================== '/ Description: '/ For the active sheet, finds the data Table and sortKey columns using headers. '/ Sorts clients based on payment frequency, then payment day, then Client Name. '/ Colours rows depending on their payment frequency. 
'/==================================================================================================== StoreApplicationSettings DisableApplicationSettings '/ set Worksheet Dim ws_this As Worksheet Set ws_this = ActiveSheet '/ Get table Range Dim tableRange As Range Set tableRange = GetTableRange(ws_this) '/ Validate Column Headers ValidateTableHeaders ws_this, tableRange '/ Get sort columns Dim paymentFrequencyColNum As Long Dim paymentDayColNum As Long Dim clientNameColNum As Long FindColumnIndexes ws_this, tableRange, paymentFrequencyColNum, paymentDayColNum, clientNameColNum '/ Sort Table SortTableRange ws_this, tableRange, paymentFrequencyColNum, paymentDayColNum, clientNameColNum '/ Visual Formatting FormatTableRange ws_this, tableRange, paymentFrequencyColNum RestoreApplicationSettings End Sub Module "B2_Get_Table" Option Explicit Public Function GetTableRange(ByRef ws_this As Worksheet) As Range '/ Finds the top left cell in the table by its' text. Determines the bounds of the table and returns it as a range object. '/ Find top left cell of table Dim searchRange As Range Set searchRange = ws_this.Range(Cells(1, 1), Cells(10, 10)) Dim topLeftCell As Range Set topLeftCell = CellContainingStringInRange(searchRange, TOP_LEFT_CELL_STRING) '/ Find table range Dim finalRow As Long, finalCol As Long Dim row As Long, col As Long row = topLeftCell.row col = topLeftCell.Column finalRow = Cells(Rows.Count, col).End(xlUp).row finalCol = Cells(row, Columns.Count).End(xlToLeft).Column Set GetTableRange = Range(topLeftCell, Cells(finalRow, finalCol)) End Function Module "B3_Validate_Table_headers" Option Explicit Public Sub ValidateTableHeaders(ByRef ws_this As Worksheet, ByRef tableRange As Range) '/ Checks for the existence of all expected headers. 
ws_this.Activate '/ Get Expected Headers Dim passedValidation As Boolean Dim strErrorMessage As String Dim expectedHeaders(1 To 21) As String '/ 9 headers + 12 months ThisWorkbook.GetDataTableHeaders expectedHeaders(1), expectedHeaders(2), expectedHeaders(3), expectedHeaders(4), expectedHeaders(5) _ , expectedHeaders(6), expectedHeaders(7), expectedHeaders(8), expectedHeaders(9) Dim i As Long For i = (UBound(expectedHeaders) - 11) To UBound(expectedHeaders) expectedHeaders(i) = MonthName(i - UBound(expectedHeaders) + 12) Next i '/ Get Header Row Dim arrHeaderRow As Variant arrHeaderRow = Array() Dim row As Long, col As Long Dim firstCol As Long, finalCol As Long row = tableRange.row firstCol = tableRange.Column finalCol = firstCol + (tableRange.Columns.Count - 1) ReDim arrHeaderRow(firstCol To finalCol) For col = firstCol To finalCol arrHeaderRow(col) = Cells(row, col).Text Next col '/ Search header row for all expected Headers Dim LB1 As Long, UB1 As Long AssignArrayBounds expectedHeaders, LB1, UB1 Dim ix As Variant Dim searchString As String passedValidation = True For i = LB1 To UB1 searchString = expectedHeaders(i) ix = IndexInArray1d(arrHeaderRow, searchString) If IsError(ix) Then passedValidation = False strErrorMessage = strErrorMessage & "Could not find header """ & searchString & """ (non-case sensitive)" End If Next i '/ If applicable, show error message and stop execution If Not passedValidation Then PrintErrorMessage strErrorMessage, endExecution:=True End Sub Module "B4_Get_Column_Indexes" Option Explicit Public Sub FindColumnIndexes(ByRef ws_this As Worksheet, ByRef tableRange As Range, ByRef paymentFrequencyColNum As Long, ByRef paymentDayColNum As Long, ByRef clientNameColNum As Long) '/ Pulls out the header row as an array. Search for specific headers and returns their column numbers. 
ws_this.Activate '/ Get Header Row as range Dim rngHeaderRow As Range Dim lngHeaderRow As Long Dim firstCol As Long, finalCol As Long firstCol = tableRange.Column finalCol = firstCol + (tableRange.Columns.Count - 1) lngHeaderRow = tableRange.row Set rngHeaderRow = Range(Cells(lngHeaderRow, firstCol), Cells(lngHeaderRow, finalCol)) '/ Read Header Row to Array Dim arrHeaderRow As Variant arrHeaderRow = Array() Dim col As Long, i As Long ReDim arrHeaderRow(1 To tableRange.Columns.Count) For col = firstCol To finalCol i = (col - firstCol) + 1 arrHeaderRow(i) = Cells(lngHeaderRow, col).Text Next col '/ Find column numbers paymentFrequencyColNum = IndexInArray1d(arrHeaderRow, PAYMENT_FREQUENCY_HEADER) + (firstCol - 1) paymentDayColNum = IndexInArray1d(arrHeaderRow, PAYMENT_DAY_HEADER) + (firstCol - 1) clientNameColNum = IndexInArray1d(arrHeaderRow, CLIENT_NAME_HEADER) + (firstCol - 1) End Sub Module "B5_Sort_Table" Option Explicit Public Sub SortTableRange(ByRef ws_this As Worksheet, ByRef tableRange As Range, ByVal paymentFrequencyColNum As Long, ByVal paymentDayColNum As Long, ByVal clientNameColNum As Long) '/ Sorts range based on payment frequency, then payment day, then Client Name, using custom sort lists for the first 2. 
ws_this.Activate '/ Get Custom sort list for payment frequency Dim paymentFrequencySortList As Variant paymentFrequencySortList = GetpaymentFrequencySortList() Dim strPaymentFrequencySortList As String strPaymentFrequencySortList = Join(paymentFrequencySortList, ",") '/ Get Custom sort list for payment day Dim paymentDaySortList As Variant paymentDaySortList = GetPaymentDaySortList() Dim strPaymentDaySortList As String strPaymentDaySortList = Join(paymentDaySortList, ",") '/ Get first/last rows Dim firstRow As Long, finalRow As Long firstRow = tableRange.row finalRow = firstRow + (tableRange.Rows.Count - 1) '/ get column ranges Dim rngPaymentFrequencyCol As Range, rngPaymentDayCol As Range, rngClientNameCol As Range Set rngPaymentFrequencyCol = Range(Cells(firstRow, paymentFrequencyColNum), Cells(finalRow, paymentFrequencyColNum)) Set rngPaymentDayCol = Range(Cells(firstRow, paymentDayColNum), Cells(finalRow, paymentDayColNum)) Set rngClientNameCol = Range(Cells(firstRow, clientNameColNum), Cells(finalRow, clientNameColNum)) '/ Sort Range With ws_this.Sort .SortFields.Clear .SortFields.Add key:=rngPaymentFrequencyCol, SortOn:=xlSortOnValues, Order:=xlAscending, CustomOrder:=CVar(strPaymentFrequencySortList) '/ CVar is necessary to get VBA to accept the string. No idea why. 
.SortFields.Add key:=rngPaymentDayCol, SortOn:=xlSortOnValues, Order:=xlAscending, CustomOrder:=CVar(strPaymentDaySortList) .SortFields.Add key:=rngClientNameCol, SortOn:=xlSortOnValues, Order:=xlAscending .SetRange tableRange .Header = xlYes .MatchCase = False .SortMethod = xlPinYin .Apply End With End Sub Public Function GetpaymentFrequencySortList() As Variant Dim arr As Variant arr = Array() ReDim arr(1 To 3) arr(1) = "Monthly" '/ "Low" item arr(2) = "Quarterly" arr(3) = "Annually" '/ "High" item GetpaymentFrequencySortList = arr End Function Public Function GetPaymentDaySortList() As Variant Dim arr As Variant arr = Array() ReDim arr(1 To 31) arr(1) = "1st" '/ "Low" Item arr(2) = "2nd" arr(3) = "3rd" arr(4) = "4th" arr(5) = "5th" arr(6) = "6th" arr(7) = "7th" arr(8) = "8th" arr(9) = "9th" arr(10) = "10th" arr(11) = "11th" arr(12) = "12th" arr(13) = "13th" arr(14) = "14th" arr(15) = "15th" arr(16) = "16th" arr(17) = "17th" arr(18) = "18th" arr(19) = "19th" arr(20) = "20th" arr(21) = "21st" arr(22) = "22nd" arr(23) = "23rd" arr(24) = "24th" arr(25) = "25th" arr(26) = "26th" arr(27) = "27th" arr(28) = "28th" arr(29) = "29th" arr(30) = "30th" arr(31) = "31st" '/ "High" Item GetPaymentDaySortList = arr End Function Module "B6_Format_Table" Option Explicit Public Sub FormatTableRange(ByRef ws_this As Worksheet, ByRef tableRange As Range, ByVal paymentFrequencyColNum As Long) '/ Colour rows based on Payment frequency, add cell borders, autofit columns and then set the "Cash Made Available?" columns to fixed-width. ws_this.Activate '/ Set fixed width for "Cash Made Available?" 
columns Dim colWidthCashAvailable As Long colWidthCashAvailable = 10 '/ Set Range bounds of table Dim firstRow As Long, firstCol As Long Dim finalRow As Long, finalCol As Long Dim topLeftCell As Range Set topLeftCell = Cells(tableRange.row, tableRange.Column) AssignRangeBoundsOfData topLeftCell, firstRow, finalRow, firstCol, finalCol, False Dim firstCashAvailableCol As Long firstCashAvailableCol = finalCol - (12 - 1) '/ 12 months '/ Colour rows based on payment frequency ws_this.Cells.Interior.Color = xlNone Dim row As Long, col As Long Dim paymentFrequency As String Dim strColour As String, dblColourShade As Double Dim rngRow As Range For row = firstRow + 1 To finalRow '/ +1 for headers '/ Set strColour inside conditions in case we want to use different colours for each in the future paymentFrequency = Cells(row, paymentFrequencyColNum).Text Set rngRow = Range(Cells(row, firstCol), Cells(row, finalCol)) Select Case paymentFrequency Case Is = "Monthly" strColour = "Lumin Tourquoise" dblColourShade = DblBaseShade - (DblShadeIncrement * -6) ColourFill rngRow, strColour, dblColourShade Case Is = "Quarterly" strColour = "Lumin Tourquoise" dblColourShade = DblBaseShade - (DblShadeIncrement * -4) ColourFill rngRow, strColour, dblColourShade Case Is = "Annually" strColour = "Lumin Tourquoise" dblColourShade = DblBaseShade - (DblShadeIncrement * -2) ColourFill rngRow, strColour, dblColourShade Case Else ErrorMessage "Couldn't identify frequency """ & paymentFrequency & """ on row " & row & ". Please check that it is entered correctly." 
End Select Next row '/ Set Borders Dim rngCell As Range ws_this.Cells.Borders.LineStyle = xlNone For row = firstRow + 1 To finalRow '/ +1 for headers Set rngRow = Range(Cells(row, firstCol), Cells(row, finalCol)) For Each rngCell In rngRow rngCell.BorderAround xlContinuous, xlThin, xlColorIndexAutomatic Next rngCell Next row '/ Set Header Borders Dim rngHeaderRow As Range Set rngHeaderRow = Range(Cells(firstRow, firstCol), Cells(firstRow, finalCol)) For Each rngCell In rngHeaderRow rngCell.BorderAround xlContinuous, xlMedium, xlColorIndexAutomatic Next rngCell Set rngCell = Range(Cells(firstRow - 1, firstCashAvailableCol), Cells(firstRow - 1, finalCol)) '/ The extra "Cash made available" Header Cell rngCell.BorderAround xlContinuous, xlMedium, xlColorIndexAutomatic '/ Set column widths ws_this.Columns.AutoFit For col = firstCashAvailableCol To finalCol Columns(col).ColumnWidth = colWidthCashAvailable Next col End Sub Answer: All right, I'll try to give some feedback where I can. My comments are mostly inside the code. I wouldn't use variable names with underscores, but you need to do what you need to do with your constants. I would recommend changing this_ws to CurrentWS I have no idea what storeapplicationsettings, disableapplicationsettings or restoreapplicationsettings do. What if there's an error? Will the settings all remain disabled? In fact, I don't see any error handling at all. As for the comments like '/ Get sort columns why not say something like Call FindColumnIndexes to obtain sort columns. Sub BtnSort_Click() '/==================================================================================================== '/ Description: '/ For the active sheet, finds the data Table and sortKey columns using headers. '/ Sorts clients based on payment frequency, then payment day, then Client Name. '/ Colours rows depending on their payment frequency. '/==================================================================================================== '? 
StoreApplicationSettings '? DisableApplicationSettings '/ set Worksheet Dim CurrentWS As Worksheet Set CurrentWS = ActiveSheet '/ Call Function GetTableRange to obtain the table's range Dim tableRange As Range Set tableRange = GetTableRange(CurrentWS) '/ Call Sub ValidateTableHeaders to check for existence of expected headers ValidateTableHeaders CurrentWS, tableRange '/ Call Sub FindColumnIndexes to check for headers and obtain column numbers Dim paymentFrequencyColNum As Long Dim paymentDayColNum As Long Dim clientNameColNum As Long FindColumnIndexes CurrentWS, tableRange, paymentFrequencyColNum, paymentDayColNum, clientNameColNum '/ Call Sub SortTableRange to apply sort defined within that sub SortTableRange CurrentWS, tableRange, paymentFrequencyColNum, paymentDayColNum, clientNameColNum '/ Call Sub FormatTableRange for Visual Formatting FormatTableRange CurrentWS, tableRange, paymentFrequencyColNum 'If we don't get here, what happens? RestoreApplicationSettings End Sub Okay, that was pretty simple and explains to any future readers what you're doing and why you're doing it. If they want to see how it's done, they can check that process out. Public Function GetTableRange(ByRef CurrentWS As Worksheet) As Range '/ Finds the top left cell in the table by its' text. Determines the bounds of the table and returns it as a range object. '/ Find top left cell of table 'Why were those cells picked? How is this working? Dim searchRange As Range Set searchRange = CurrentWS.Range(Cells(1, 1), Cells(10, 10)) Dim topLeftCell As Range '? I assume this finds a range Set topLeftCell = CellContainingStringInRange(searchRange, TOP_LEFT_CELL_STRING) '/ Find table range 'Why only give a full name to half of these? 
Dim FinalRow As Long, FinalCol As Long Dim StartRow As Long, StartCol As Long StartRow = topLeftCell.row StartCol = topLeftCell.Column FinalRow = Cells(Rows.Count, col).End(xlUp).row FinalCol = Cells(row, Columns.Count).End(xlToLeft).Column Set GetTableRange = Range(topLeftCell, Cells(FinalRow, FinalCol)) End Function Not too much confusion on this one, except using functions that aren't supplied. Public Sub ValidateTableHeaders(ByRef CurrentWS As Worksheet, ByRef tableRange As Range) '/ Checks for the existence of all expected headers. ' no need to activate anything, we haven't moved as we passed CurrentWS in here via argument 'CurrentWS.Activate '/ Get Expected Headers Dim passedValidation As Boolean Dim strErrorMessage As String 'Will this always be 1 to 21? Dim expectedHeaders(1 To 21) As String '/ 9 headers + 12 months 'Again, I'm not sure what this is doing, but all right ThisWorkbook.GetDataTableHeaders expectedHeaders(1), expectedHeaders(2), expectedHeaders(3), expectedHeaders(4), expectedHeaders(5) _ , expectedHeaders(6), expectedHeaders(7), expectedHeaders(8), expectedHeaders(9) Dim i As Long 'Do you need to use this notation if you will always have 1 to 21 and look for 9? Why is the one above 'Explicitly defined and looks for what is expected, but then this one seems lost and needs to check? For i = (UBound(expectedHeaders) - 11) To UBound(expectedHeaders) expectedHeaders(i) = MonthName(i - UBound(expectedHeaders) + 12) Next i '/ Get Header Row Dim arrHeaderRow As Variant 'why are you setting this? arrHeaderRow = Array() 'Remind me what tableRange is - I know it's a range, but if it's the entire table, how are you using 'tablerange.column and tablerange.row? 
Dim TblRow As Long, TblCol As Long Dim FirstCol As Long, FinalCol As Long TblRow = tableRange.row FirstCol = tableRange.Column FinalCol = FirstCol + (tableRange.Columns.Count - 1) ReDim arrHeaderRow(FirstCol To FinalCol) For TblCol = FirstCol To FinalCol arrHeaderRow(TblCol) = Cells(TblRow, TblCol).Text Next TblCol '/ Search header row for all expected Headers 'There has to be a better name for these, I can take a guess but I don't know what that function is doing 'If you find yourself using numbers in variable names, you either have too many variables or your variables 'aren't descriptive enough in their name Dim LB1 As Long, UB1 As Long '? AssignArrayBounds expectedHeaders, LB1, UB1 'Why ix? For Index? Dim ix As Variant Dim searchString As String passedValidation = True For i = LB1 To UB1 searchString = expectedHeaders(i) '? What's this function do? ix = IndexInArray1d(arrHeaderRow, searchString) If IsError(ix) Then passedValidation = False strErrorMessage = strErrorMessage & "Could not find header """ & searchString & """ (non-case sensitive)" End If Next i '/ If applicable, show error message and stop execution If Not passedValidation Then PrintErrorMessage strErrorMessage, endExecution:=True End Sub Same as before, some names changed, other need better names. More functions that are mysterious. I did have questions about your arrays. Public Sub FindColumnIndexes(ByRef CurrentWS As Worksheet, ByRef tableRange As Range, ByRef paymentFrequencyColNum As Long, ByRef paymentDayColNum As Long, ByRef clientNameColNum As Long) '/ Pulls out the header row as an array. Search for specific headers and returns their column numbers. ' no need to activate anything, we haven't moved as we passed CurrentWS in here via argument 'CurrentWS.Activate '/ Get Header Row as range Dim rngHeaderRow As Range Dim lngHeaderRow As Long Dim FirstCol As Long, FinalCol As Long 'I'm still confused if tablerange is a large range, what column is it picking? 
FirstCol = tableRange.Column FinalCol = FirstCol + (tableRange.Columns.Count - 1) 'same here lngHeaderRow = tableRange.row Set rngHeaderRow = Range(Cells(lngHeaderRow, FirstCol), Cells(lngHeaderRow, FinalCol)) '/ Read Header Row to Array ' why not Dim arrheaderrow() As Variant Dim arrheaderrow As Variant 'What's going on here? arrheaderrow = Array() 'Not a fan of these variables, not descriptive at all Dim col As Long, i As Long ReDim arrheaderrow(1 To tableRange.Columns.Count) For col = FirstCol To FinalCol i = (col - FirstCol) + 1 arrheaderrow(i) = Cells(lngHeaderRow, col).Text Next col '/ Find column numbers 'I have no idea what happens here paymentFrequencyColNum = IndexInArray1d(arrheaderrow, PAYMENT_FREQUENCY_HEADER) + (FirstCol - 1) paymentDayColNum = IndexInArray1d(arrheaderrow, PAYMENT_DAY_HEADER) + (FirstCol - 1) clientNameColNum = IndexInArray1d(arrheaderrow, CLIENT_NAME_HEADER) + (FirstCol - 1) End Sub Nothing new here. Public Sub SortTableRange(ByRef CurrentWS As Worksheet, ByRef tableRange As Range, ByVal paymentFrequencyColNum As Long, ByVal paymentDayColNum As Long, ByVal clientNameColNum As Long) '/ Sorts range based on payment frequency, then payment day, then Client Name, using custom sort lists for the first 2. ' no need to activate anything, we haven't moved as we passed CurrentWS in here via argument 'CurrentWS.Activate '/ Get Custom sort list for payment frequency Dim paymentFrequencySortList As Variant 'Why are you calling this to populate your array?
It looks like it could be a constant paymentFrequencySortList = GetpaymentFrequencySortList() Dim strPaymentFrequencySortList As String strPaymentFrequencySortList = Join(paymentFrequencySortList, ",") '/ Get Custom sort list for payment day 'Same question here Dim paymentDaySortList As Variant paymentDaySortList = GetPaymentDaySortList() Dim strPaymentDaySortList As String strPaymentDaySortList = Join(paymentDaySortList, ",") '/ Get first/last rows 'One is capital the other isn't, I'd stick with capitals Dim firstRow As Long, FinalRow As Long firstRow = tableRange.row FinalRow = firstRow + (tableRange.Rows.Count - 1) '/ get column ranges 'This would be a great place to explain how you're getting this information 'and why you're doing it that way Dim rngPaymentFrequencyCol As Range, rngPaymentDayCol As Range, rngClientNameCol As Range Set rngPaymentFrequencyCol = Range(Cells(firstRow, paymentFrequencyColNum), Cells(FinalRow, paymentFrequencyColNum)) Set rngPaymentDayCol = Range(Cells(firstRow, paymentDayColNum), Cells(FinalRow, paymentDayColNum)) Set rngClientNameCol = Range(Cells(firstRow, clientNameColNum), Cells(FinalRow, clientNameColNum)) '/ Sort Range 'Is this a standard sort that should never change? If so, indicate that With CurrentWS.Sort .SortFields.Clear .SortFields.Add Key:=rngPaymentFrequencyCol, SortOn:=xlSortOnValues, Order:=xlAscending, CustomOrder:=CVar(strPaymentFrequencySortList) '/ CVar is necessary to get VBA to accept the string. No idea why. .SortFields.Add Key:=rngPaymentDayCol, SortOn:=xlSortOnValues, Order:=xlAscending, CustomOrder:=CVar(strPaymentDaySortList) .SortFields.Add Key:=rngClientNameCol, SortOn:=xlSortOnValues, Order:=xlAscending .SetRange tableRange .Header = xlYes .MatchCase = False .SortMethod = xlPinYin .Apply End With End Sub This one has a great opportunity for comments explaining why you call functions and how you determined methods. 
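As an aside for readers unfamiliar with Excel's CustomOrder mechanism: the three-key sort above (custom frequency order, then payment day, then client name) can be expressed language-agnostically. This is a hypothetical Python sketch of the same idea — the record shape and sample data are invented for illustration, not taken from the workbook:

```python
# Sort by payment frequency (custom order), then payment day, then client name.
# FREQUENCY_ORDER plays the role of Excel's CustomOrder list.
FREQUENCY_ORDER = {"Monthly": 0, "Quarterly": 1, "Annually": 2}

def sort_clients(rows):
    # Each row is (client_name, frequency, payment_day); sorted() is stable,
    # so the tuple key gives the same precedence as chained SortFields.
    return sorted(rows, key=lambda r: (FREQUENCY_ORDER[r[1]], r[2], r[0]))

rows = [
    ("Acme", "Annually", 3),
    ("Zenith", "Monthly", 15),
    ("Brill", "Monthly", 1),
]
print(sort_clients(rows))  # Monthly rows first, ordered by day, then name
```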
Public Sub FormatTableRange(ByRef CurrentWS As Worksheet, ByRef tableRange As Range, ByVal paymentFrequencyColNum As Long) '/ Colour rows based on Payment frequency, add cell borders, autofit columns and then set the "Cash Made Available?" columns to fixed-width. ' no need to activate anything, we haven't moved as we passed CurrentWS in here via argument 'CurrentWS.Activate '/ Set fixed width for "Cash Made Available?" columns Dim colWidthCashAvailable As Long colWidthCashAvailable = 10 '/ Set Range bounds of table 'poor firstrow, the only lowercase Dim firstRow As Long, FirstCol As Long Dim FinalRow As Long, FinalCol As Long Dim topLeftCell As Range Set topLeftCell = Cells(tableRange.row, tableRange.Column) '? AssignRangeBoundsOfData topLeftCell, firstRow, FinalRow, FirstCol, FinalCol, False Dim firstCashAvailableCol As Long firstCashAvailableCol = FinalCol - (12 - 1) '/ 12 months '/ Colour rows based on payment frequency CurrentWS.Cells.Interior.Color = xlNone 'These are good variable names, but we run into row and col again Dim row As Long, col As Long Dim paymentFrequency As String Dim strColour As String, dblColourShade As Double Dim rngRow As Range For row = firstRow + 1 To FinalRow '/ +1 for headers '/ Set strColour inside conditions in case we want to use different colours for each in the future paymentFrequency = Cells(row, paymentFrequencyColNum).Text Set rngRow = Range(Cells(row, FirstCol), Cells(row, FinalCol)) 'You might be better off making strColour a constant - it does the same thing each case? 
Select Case paymentFrequency Case Is = "Monthly" strColour = "Lumin Tourquoise" dblColourShade = DblBaseShade - (DblShadeIncrement * -6) ColourFill rngRow, strColour, dblColourShade Case Is = "Quarterly" strColour = "Lumin Tourquoise" dblColourShade = DblBaseShade - (DblShadeIncrement * -4) ColourFill rngRow, strColour, dblColourShade Case Is = "Annually" strColour = "Lumin Tourquoise" dblColourShade = DblBaseShade - (DblShadeIncrement * -2) ColourFill rngRow, strColour, dblColourShade Case Else ErrorMessage "Couldn't identify frequency """ & paymentFrequency & """ on row " & row & ". Please check that it is entered correctly." End Select Next row '/ Set Borders Dim rngCell As Range CurrentWS.Cells.Borders.LineStyle = xlNone For row = firstRow + 1 To FinalRow '/ +1 for headers Set rngRow = Range(Cells(row, FirstCol), Cells(row, FinalCol)) For Each rngCell In rngRow rngCell.BorderAround xlContinuous, xlThin, xlColorIndexAutomatic Next rngCell Next row '/ Set Header Borders Dim rngHeaderRow As Range Set rngHeaderRow = Range(Cells(firstRow, FirstCol), Cells(firstRow, FinalCol)) For Each rngCell In rngHeaderRow rngCell.BorderAround xlContinuous, xlMedium, xlColorIndexAutomatic Next rngCell Set rngCell = Range(Cells(firstRow - 1, firstCashAvailableCol), Cells(firstRow - 1, FinalCol)) '/ The extra "Cash made available" Header Cell rngCell.BorderAround xlContinuous, xlMedium, xlColorIndexAutomatic '/ Set column widths CurrentWS.Columns.AutoFit 'This is that 10 from the very beginning, right? For col = firstCashAvailableCol To FinalCol Columns(col).ColumnWidth = colWidthCashAvailable Next col End Sub Overall, it's mostly cleaning up the variable names, putting in meaningful and descriptive comments and being consistent. I didn't see any methods that need improving, no extra loops or anything. I did wonder why the day and frequency sort lists had their own functions that seem static.
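On those static sort lists: the 31-entry GetPaymentDaySortList could be generated instead of hardcoded. A Python sketch of the ordinal-suffix rule (the same logic ports straight to a VBA loop, and this is an illustration rather than a drop-in replacement):

```python
def ordinal(n):
    # 11th, 12th, 13th are exceptions to the 1st/2nd/3rd rule.
    if 11 <= n % 100 <= 13:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

# Equivalent of GetPaymentDaySortList: "1st" through "31st".
payment_day_sort_list = [ordinal(n) for n in range(1, 32)]
print(payment_day_sort_list[0], payment_day_sort_list[-1])  # 1st 31st
```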
{ "domain": "codereview.stackexchange", "id": 17285, "tags": "vba" }
What effect do the diameter and length of a wire have on the force and extension?
Question: For example, suppose wire $A$ has diameter $D$ and length $L$, and when a force of $4N$ is applied it extends by, let's say, $0.8mm$. Another wire $B$ has length $2L$ and diameter $2D$. Now, will the extension be greater or smaller? By the way, both wires obey Hooke's law and are identical except for their dimensions (i.e. same material, same Young's modulus, etc.). Please help. Answer: Since it has been specified that Hooke's law is applicable, we use the formula $Y=\frac{F}{A}\frac{l}{\Delta l}$. Substituting each wire's values into this formula and equating $Y$ for the two wires yields the extension of wire $B$. Please remember that Physics Stack Exchange does not allow exact solutions to homework questions, and any answer which provides one is likely to be flagged and removed.
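To see the general scaling the formula implies (without plugging in the question's specific numbers), note that rearranging gives $\Delta l = \frac{Fl}{AY}$, and for a round wire $A \propto D^2$. A small Python sketch of the resulting ratio, assuming both wires share $F$ and $Y$:

```python
# Delta_l = F * L / (A * Y), with A proportional to D**2 for a round wire.
# Ratio of extensions when length and diameter are each scaled by a factor:
def extension_ratio(length_factor, diameter_factor):
    return length_factor / diameter_factor**2

print(extension_ratio(2, 2))  # doubling both L and D halves the extension
```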
{ "domain": "physics.stackexchange", "id": 72623, "tags": "homework-and-exercises, length" }
Dispersion in a crystal for an arbitrary direction
Question: Let's say one gets the dispersion of electrons or phonons from this website: http://www.matprop.ru Usually the dispersion is drawn along particular directions: for a wurtzite crystal it is A to L, L to U and to M, M to Г, and so on... The question is how to get the dispersion in any direction, knowing the previous ones (see http://www.matprop.ru/GaN_bandstr)
{ "domain": "physics.stackexchange", "id": 23243, "tags": "solid-state-physics, crystals" }
environment variable 'ROS_ENV_LOADER' is not set
Question: Hello all, When I tried to launch "roslaunch pr2_gripper_sensor_action pr2_gripper_sensor_actions.launch", I got the following error: while processing /opt/ros/fuerte/stacks/pr2_common/pr2_machine/pr2.machine: environment variable 'ROS_ENV_LOADER' is not set. Machine xml is <machine address="c2" env-loader="$(env ROS_ENV_LOADER)" name="c2"/> I have no idea how to deal with that problem. Could anyone help me out? Thank you. Originally posted by Xiaolong on ROS Answers with karma: 66 on 2012-12-17 Post score: 0 Answer: Please read the support guidelines before posting here. Always tag your questions. You are trying to use a PR2-specific launch file, probably on a computer that's not on the PR2. Normally, the machine definitions are controlled using the ROBOT environment variable. Try setting ROBOT to sim: export ROBOT=sim Originally posted by Lorenz with karma: 22731 on 2012-12-17 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Xiaolong on 2012-12-20: Hi Lorenz, thank you for your reply. I will note what you said in the future. However, after I set ROBOT to sim, the problem is not solved. I still got that error. Can you figure out what's happening there? Comment by Xiaolong on 2012-12-20: I tried other packages such as "pr2_interactive_gripper_pose_action" in the pr2_object_manipulation folder. They work fine. Could it be something wrong in the package "pr2_gripper_sensor_action" itself? Comment by Lorenz on 2012-12-20: My guess is that the package is really PR2-specific and does not work in simulation. That may be why it does not use the ROBOT environment variable to select the correct machines to launch the nodes on. I'm not sure about that though.
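For readers hitting the same error: a minimal sketch combining both workarounds discussed in this thread. The env.sh path below is an assumption for a stock Fuerte install at /opt/ros/fuerte — check where your own installation's environment-loader script lives:

```shell
# Point the env loader at the install's env.sh and select the simulated robot.
# Both values are assumptions for a standard Fuerte setup; adjust to your install.
export ROS_ENV_LOADER=/opt/ros/fuerte/env.sh
export ROBOT=sim
echo "ROS_ENV_LOADER=$ROS_ENV_LOADER ROBOT=$ROBOT"
```

With both variables set, re-run the roslaunch command; the machine tag can then resolve `$(env ROS_ENV_LOADER)` instead of failing.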
{ "domain": "robotics.stackexchange", "id": 12137, "tags": "ros" }