N Queens Problem - Number of Possible Placements
Question: I have been looking into the famous backtracking problem called N Queens. The problem is essentially to find the number of ways you can place n queens on an n×n chess board such that no queen is in an attacking position relative to any other queen. I came across the following statement on Wikipedia: “The problem of finding all solutions to the 8-queens problem can be quite computationally expensive, as there are 4,426,165,368 possible arrangements of eight queens on an 8×8 board,[a] but only 92 solutions.” The article is here: https://en.wikipedia.org/wiki/Eight_queens_puzzle I was confused, since it states there are 4,426,165,368 possible arrangements of the queens on an 8×8 board. According to my calculations, the number of possible arrangements of n queens on an n×n board is $\frac{(n^2)!}{(n^2-n)!}$ (the logic is the same as for the k-permutations formula). So, in the 8×8 case the number of possible arrangements would be 64 · 63 · 62 · 61 · 60 · 59 · 58 · 57, which is equal to 178,462,987,637,760. What is the reason for the difference in our calculations? Answer: 4,426,165,368 is the number of ways to place 8 identical queens on 64 squares, i.e., the number of combinations. "Identical" means two placements are considered equivalent if they differ only by swapping one queen with another. Each combination corresponds to $8!$ ordered placements if the queens are distinguishable. Therefore, $$ \frac{64!}{(64-8)!\,8!} = \frac{178462987637760}{40320} = 4426165368. $$
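Both counts are easy to check with Python's math module (a quick sanity check added here, not part of the original posts):

```python
from math import comb, perm

# Ordered placements: 8 distinguishable queens on 64 squares.
ordered = perm(64, 8)      # 64 * 63 * ... * 57
# Unordered placements: 8 identical queens (divide out the 8! orderings).
unordered = comb(64, 8)

print(ordered)    # 178462987637760
print(unordered)  # 4426165368
```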
{ "domain": "cs.stackexchange", "id": 21354, "tags": "combinatorics" }
Variant of bipartite matching, with real capacities from source and to sink, all others unlimited
Question: I've got a variant of bipartite graph matching and I can't find any literature about it. We have a bipartite graph with real-capacity edges from the source to the left vertices (the sum of which is 1), real-capacity edges from the right vertices to the sink (also summing to 1), and unlimited-capacity edges between left and right. What is the fastest max-flow algorithm here? To phrase it more colorfully: you have N types of dog food, totaling 1 kilogram, and N dogs, each able to eat a maximum quantity of food, but in total they can eat 1 kilogram. Not all dogs will eat all types of dog food, but they will eat up to their "capacity" of any combination of the types that they do like. How do you get the largest quantity of dog food fed to your dogs? So far it looks like Edmonds-Karp is actually faster than push-relabel. I've been using the networkx Python package's implementations of both of these algorithms, but now I need to optimize for speed. I will be implementing a solution in C or Cython, but I'm concerned about the algorithm itself. It feels like there should be a specialized solution that works faster. Does anyone have any ideas? Answer: Looks like what I was looking for was something like Dinitz's algorithm with dynamic trees. There are some implementation optimizations that can be added since we know it's a bipartite graph.
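Since the asker mentions Edmonds-Karp, here is a minimal stdlib-only sketch of it applied to a toy dog-food instance (the graph shape, node names, and numbers are hypothetical; the "unlimited" middle edges are modeled with a capacity larger than the total supply):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along shortest paths (BFS)."""
    # Build the residual graph, adding zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(res):
        for v in res[u]:
            res.setdefault(v, {}).setdefault(u, 0.0)
    flow = 0.0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # no augmenting path left
            return flow
        path, v = [], t              # walk the path back from t to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

INF = 10.0   # larger than the 1 kg total supply, so effectively unlimited
graph = {
    "s":     {"food0": 0.4, "food1": 0.6},   # kg of each food type
    "food0": {"dog0": INF},                  # which dogs eat which food
    "food1": {"dog0": INF, "dog1": INF},
    "dog0":  {"t": 0.5},                     # each dog's eating capacity
    "dog1":  {"t": 0.5},
}
value = max_flow(graph, "s", "t")
print(value)   # close to 1.0: all the food can be fed
```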
{ "domain": "cs.stackexchange", "id": 9325, "tags": "algorithms, graphs, network-flow, bipartite-matching" }
My CNN model accuracy doesn't increase (high loss and low accuracy)
Question: I need a CNN to classify whether an image belongs to one class or another, but my model returns high losses (6 to 8) and low accuracies (0.50 at best). I tried including more layers and changing my activation functions, and nothing works. My dataset is 142 .jpg images (71 per class). This is my code: OLD CODE def ReadImages(Path): LabelList = list() ImageCV = list() classes = ["nonPdr", "pdr"] # Get all subdirectories FolderList = [f for f in os.listdir(Path) if not f.startswith('.')] print(FolderList) # Loop over each directory for File in FolderList: for index, Image in enumerate(os.listdir(os.path.join(Path, File))): # Convert the path into a file ImageCV.append(cv2.resize(cv2.imread(os.path.join(Path, File) + os.path.sep + Image), (600,700))) LabelList.append(classes.index(os.path.splitext(File)[0])) return ImageCV, LabelList model = Sequential() model.add(Conv2D(64, kernel_size=(3,3), padding="same",activation="relu", input_shape=(700,600,3))) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, kernel_size=(4,4), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy']) data, labels = ReadImages(TRAIN_DIR) model.fit(np.array(data), np.array(labels), epochs=10, batch_size=20) model.save('model.h5') What can I do to improve my model? I appreciate your help! 
UPDATE I tried to do what Shubham Panchal said, but it didn't resolve the problem. THINGS THAT I TRIED - Reduce image size - lr=0.0001 - optimizers: adam, sgd, rmsprop - more layers - a dropout layer - normalizing the data with np.array(data) / 255.0 - increasing the data (1400 images total, 700 per class) My code: model = Sequential() model.add(Conv2D(64, kernel_size=(3,3), padding="same",activation="relu", input_shape=(150,150,3))) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(64, kernel_size=(3,3), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, kernel_size=(3,3), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, kernel_size=(3,3), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(1, activation='softmax')) opt = SGD(lr=0.0001, momentum=0.9) model.compile(optimizer = opt, loss="binary_crossentropy", metrics=['accuracy']) #model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) data, labels = ReadImages(TRAIN_DIR) model.fit(np.array(data) / 255.0, np.array(labels), epochs=10, batch_size=16) My console: Epoch 1/10 1400/1400 [==============================] - 58s 42ms/step - loss: 7.9712 - acc: 0.5000 Epoch 2/10 1400/1400 [==============================] - 59s 42ms/step - loss: 7.9712 - acc: 0.5000 Epoch 3/10 1400/1400 [==============================] - 59s 42ms/step - loss: 7.9712 - acc: 0.5000 ... Does anyone have any idea what I can do? Answer: Here are some hacks you can use to improve the model. The dataset seems to be inadequate. Try image augmentation. Image augmentation applies different transformations to your images: rotation, scaling, color shifts, whitening, etc. It helps the model generalise better on images. See here. Use a softmax classification function. Instead of sigmoid, try a softmax activation function, since you are working on a classification task. 
Sigmoid is mostly reserved for binary classification tasks with a single output unit; softmax needs one unit per class, so use model.add(Dense(2, activation='softmax')) (with a single output unit, softmax always outputs 1.0, which would pin the accuracy at 50%). Add more convolution layers. Consider adding more Conv2D layers to the model, because the image size is quite large. The more layers, the better the feature extraction. With too few layers, your model cannot extract the smaller features that may be required for proper classification. Tips: Try the adam optimizer instead of rmsprop. Restrict the kernel size to (3, 3). Use a smaller batch size. Your batch size is 20 for 142 images, which makes only ~7 batches; lower it to a number like 6 or 10. Use Dropout layers between the Dense layers. A smaller learning rate, like 0.001 or 0.0001, often helps.
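Framework aside, the core augmentation transforms are simple array operations. A stdlib-only sketch with hypothetical helper names (in practice Keras's ImageDataGenerator provides these and more):

```python
def hflip(img):
    """Mirror each row (horizontal flip)."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate 90 degrees clockwise: reverse the rows, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

def brighten(img, factor):
    """Scale pixel intensities, clamped to the 0-255 range."""
    return [[min(255, int(p * factor)) for p in row] for row in img]

# A tiny 2x2 "grayscale image" just to show the transforms.
img = [[10, 20],
       [30, 40]]
augmented = [img, hflip(img), rot90(img), brighten(img, 1.5)]
```

Each transform yields a new training example from the same labeled image, which is how augmentation stretches a small dataset.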
{ "domain": "datascience.stackexchange", "id": 5970, "tags": "machine-learning, python, cnn, image-classification, image-recognition" }
Geosynchronous satellites with polar orbit?
Question: I know that satellites can fly in polar orbits, but as far as I have read, this is only done by LEO satellites. I think it is possible to have geosynchronous satellites in (near-)polar orbits, but how feasible is it? Am I wrong? PS: English is not my native language. Answer: Geosynchronous, yes. Geostationary, no. "Geosynchronous" means that the orbital period is the same as Earth's rotational period. "Geostationary" means that the satellite always stays directly above the same spot on Earth's surface. You can have a geosynchronous orbit in any plane and with any eccentricity, but a geostationary orbit is only possible if the orbit is circular and in the plane of the Earth's equator.
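The period constraint fixes the orbit's semi-major axis regardless of inclination, which is why a polar geosynchronous orbit is possible; a quick check via Kepler's third law (standard constants, added here as a sanity check):

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.0905           # one sidereal day, s

# Kepler's third law solved for the semi-major axis:
# a = (GM * T^2 / (4 * pi^2))^(1/3) -- no inclination term appears.
a = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"{a / 1000:.0f} km")   # ~42164 km for any orbital plane
```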
{ "domain": "physics.stackexchange", "id": 60888, "tags": "orbital-motion, satellites" }
Live migration (to another host) of ROS 2 nodes: supported?
Question: Is there a way to identify a ROS 2 component (node), e.g. by PID, and migrate it to a server or computer running another ROS 2 environment, using built-in functionality or a library that can be included/imported? I want to implement live migration of ROS 2 nodes, but I would like to do it with functionality provided by ROS 2 as much as possible. I'm using Foxy in my environment, so please let me know if you have a similar implementation in a similar version. Originally posted by hamstick on ROS Answers with karma: 3 on 2020-11-10 Post score: 0 Answer: I want to implement live migration of ROS 2 nodes, but I would like to do it with functionality provided by ROS 2 as much as possible. This is (currently) not supported. And personally, I'm also not sure whether this should be something ROS should do at all. Managing state like this and migrating running processes to other hosts is quite complex. There are solutions to this in the form of technology stacks like k8s and similar platforms, and we should probably use those instead of integrating something like them into ROS proper. It's not migration yet, but there are some people looking into deploying ROS 2 applications onto/using Kubernetes. See ROS 2 on Kubernetes and the earlier Robotics Distributed System based on Kubernetes on ROS Discourse. Originally posted by gvdhoorn with karma: 86574 on 2020-11-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by hamstick on 2020-11-15: Thank you so much. A light web search turns up many introductions to the ROS 2 + Kubernetes configuration and how to set it up. I would like to take a closer look at the implementation.
{ "domain": "robotics.stackexchange", "id": 35742, "tags": "ros2" }
Proportionality of the rate of decay to the number of nuclei present in the substance
Question: Why is the rate of decay of a substance directly proportional to the number of nuclei present in the substance? I don't know much about this topic; my teacher introduced us to this concept in class today, and I couldn't wrap my head around it because it felt really absurd. One phenomenon predicted by this law is that the substance may never decay completely, only keep approaching zero. Answer: I'd like to start by saying well done. It's really good that you started examining the model as soon as you were introduced to it, and thought about the behaviours it predicts. Nice job. Anyway, as I've alluded to, what you're describing is a model. Rather than the atoms colluding and agreeing that in a given time span a certain percentage of them will decay, what happens is that any given atom has a certain probability of decaying in that time span. (Let's say 5%, just for example.) If there is a statistically large number of atoms (and there will be in any sample you can handle on a lab bench), then almost exactly 5% of the atoms will have decayed. However, as the number of atoms decreases, this is less likely to hold true. As an example, if you were to roll 600 dice, you would expect close to 100 of them to show a six. If you were to roll only 6 dice, you wouldn't be surprised if none of them came up six. The same thing happens with the atoms. Measurements of the decay rate get less and less likely to match the statistical ideal as the population of undecayed atoms drops, until eventually there is only a single atom, at which point the only possible outcomes are that it decays or it doesn't.
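The dice analogy can be simulated directly (a sketch with arbitrary numbers, not part of the original answer): with many atoms the count tracks the per-step ideal closely, and a tail of survivors lingers for a long time.

```python
import random

random.seed(42)        # reproducible run
p = 0.05               # each atom's chance of decaying per time step
atoms = 10_000
history = [atoms]
for _ in range(50):
    # Every surviving atom independently "rolls the dice".
    atoms = sum(1 for _ in range(atoms) if random.random() > p)
    history.append(atoms)

# After one step the count is very close to the 5% ideal (about 9500),
# and after 50 steps it has decayed toward zero without reaching it.
print(history[1], history[-1])
```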
{ "domain": "physics.stackexchange", "id": 53227, "tags": "nuclear-physics, radiation, half-life" }
Kinetic Energy of a Body about Instantaneous Axis of Rotation
Question: When we write the expression for the kinetic energy of a rotating body, we write it as $$E = \frac{I_{\text{CM}}{\omega}^2}{2} + \frac{M{V_{\text{CM}}}^2}{2}$$ So, basically, we wrote the expression as $$E_{\text{body/ground}} = E_{\text{body/CM}} + E_{\text{CM/ground}}$$ However, I want to consider a case where I write the expression about a point which is at rest. For example, if I take a cylinder rolling without slipping on a surface, can I write it as $$E= \frac{I_{\text{IAOR}}{\omega}^2}{2}$$ where IAOR is the instantaneous axis of rotation, because it is at rest? Answer: It is simple to see how the equation holds mathematically: $$E = K.E._{\text{trans}} + K.E._{\text{rot}} \\ \implies E = \frac{I_{\text{CM}}{\omega}^2}{2} + \frac{M{V_{\text{CM}}}^2}{2} \\ \implies E = \frac{(I_{\text{CM}} + MR_{\text{CM}}^2){\omega}^2}{2} \\ \implies E = \frac{I_{\text{IAOR}}\omega^2}{2}$$ since, about the IAOR, $V_{\text{CM}} = R_{\text{CM}}\omega$ and $I_{\text{IAOR}} = I_{\text{CM}} + MR_{\text{CM}}^2$, where $R_{\text{CM}}$ is the distance of the center of mass from the instantaneous axis of rotation. You could also say that the motion is purely rotational, so you can use $E=\frac{I\omega^2}{2}$, but you cannot say the body is at rest.
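The equivalence is easy to check numerically for a uniform cylinder rolling without slipping, where $I_{\text{CM}} = \frac{1}{2}MR^2$ and $V_{\text{CM}} = R\omega$ (the numbers below are arbitrary):

```python
M, R, omega = 2.0, 0.5, 3.0          # arbitrary mass (kg), radius (m), angular speed (rad/s)

I_cm = 0.5 * M * R**2                # solid cylinder about its own axis
V_cm = R * omega                     # rolling-without-slipping constraint

# CM decomposition: rotation about the CM plus translation of the CM.
E_split = 0.5 * I_cm * omega**2 + 0.5 * M * V_cm**2
# Pure rotation about the IAOR, via the parallel-axis theorem.
I_iaor = I_cm + M * R**2
E_iaor = 0.5 * I_iaor * omega**2

print(E_split, E_iaor)               # both 3.375 J
```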
{ "domain": "physics.stackexchange", "id": 80818, "tags": "newtonian-mechanics, rotational-dynamics, energy-conservation, moment-of-inertia" }
Why are polar seas rich with nutrients and plankton?
Question: It is well known that baleen whales migrate to the polar seas to feed on plankton and krill (marine crustaceans). The question is: why are the polar seas rich in krill and plankton? If the answer is "because they are rich in nutrients," then I ask: why are they rich in nutrients? Answer: Water density and temperature are very important for understanding water ecosystems. Water is most dense a little above its freezing temperature; when surface water is warmer than that, it's less dense and sits on top of cooler water. When surface water is chilled to freezing and increases in density, it can sink and mix with deeper water. Photosynthesis can only occur at the top of the water column, because light doesn't penetrate very deep. When the surface water stays warm and doesn't mix, that top layer gets depleted of nutrients. Mixing brings up anything that has settled and allows life to use the "full column" of nutrients. See for example https://www.nature.com/scitable/knowledge/library/the-biological-productivity-of-the-ocean-section-70631438/ : In the high-latitude ocean, surface water is cold and therefore the vertical density gradient is weak, which allows for vertical mixing of water to depths much greater than the sunlit "euphotic zone"; as a result, the nutrient supply is greater than the phytoplankton can consume, given the available light.
{ "domain": "biology.stackexchange", "id": 12456, "tags": "zoology, ecology, marine-biology, biogeography" }
Information to Energy equations?
Question: Are there any known formulas or equations that can calculate information from energy, or energy from information? One of them, I believe, is the Bekenstein bound, which is the maximum information that can be contained in a given region of space before it collapses into a black hole. https://en.wikipedia.org/wiki/Bekenstein_bound The equations involved can be converted for both mass and energy. So I am wondering if there are other equations involving this kind of conversion between energy and information. Answer: The key bound is the Landauer bound: a process that erases 1 bit of information has to spend $k_B T \ln(2)$ joules of energy as waste heat to carry away the entropy. Besides the Landauer bound and the Bekenstein bound, the other main links between energy and information are the various bounds on how fast quantum states can change, such as the Margolus-Levitin bound (an "operation" must take at least time $\pi\hbar/(2E)$, where $E$ is the system energy), the Mandelstam-Tamm bound (the time is at least $\pi\hbar/(2\sigma)$, where $\sigma^2$ is the variance of the system energy), and their many relatives. The speed bounds and the Bekenstein bound are related (see the 12-step argument in this essay on what is going on). While the speed and Bekenstein bounds deal with how information-storing fields can change (and gravitate), the Landauer bound links information processing to thermodynamics.
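Both bounds are easy to evaluate numerically (a sketch added here, using standard constants; the temperature is an arbitrary choice):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact in the 2019 SI)
hbar = 1.054571817e-34      # reduced Planck constant, J*s
T = 300.0                   # room temperature, K

# Landauer bound: minimum heat dissipated to erase one bit at temperature T.
E_landauer = k_B * T * math.log(2)
print(f"{E_landauer:.3e} J per bit")      # ~2.87e-21 J

# Margolus-Levitin bound for a system with that much energy:
# an "operation" takes at least pi*hbar/(2E).
t_min = math.pi * hbar / (2 * E_landauer)
print(f"{t_min:.3e} s")
```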
{ "domain": "physics.stackexchange", "id": 52195, "tags": "thermodynamics, energy, statistical-mechanics, entropy, information" }
Is there an official name for a notion of "reusably universal"?
Question: There are several different (probably inequivalent) notions of computational universality (see for example the last couple of pages of http://www.dna.caltech.edu/~woods/download/WoodsNearyTCS07-DRAFT.pdf), and there is no consensus among experts about which notions are most correct (see for example http://cs.nyu.edu/pipermail/fom/2007-October/012148.html). I'm trying to say something about a particular model of biomolecular computation. I'd like to argue that it's "more universal" or "more usefully universal" than some other models, because you can construct a universal machine that runs a program, deletes the input at the end, and is then ready to run another program. Contrast this with, say, cellular automata, which can emulate any Turing machine, but at the end of the computation you've got a final, unchangeable configuration. To emulate another TM, you need to define a completely separate CA. So I'd like to say something is "reusably universal" if it behaves like your desktop, not like a CA (i.e., it can execute multiple programs without needing to recreate the universe). Has this notion been formalized anywhere? Answer: As you mention in Automata Theory / Formal Language Thesis Topic, my supervisors have at least some of the same intuition about "reusable universality" being "better" than CA-style universality. I'm not sure that a name has been given to it, though: http://www.diku.dk/~neil/blobentcs.pdf I haven't focused much on that part, but as I see it, going through the biocomputing literature, the major difference lies in the meaning of the word "programming/programmable", i.e., what is it, in fact, that is programmable? That, and the "stored-program" part as well. I appreciate the nuance posed by your question, but I've got no readily available answer to what it is called.
{ "domain": "cstheory.stackexchange", "id": 94, "tags": "universal-computation" }
DLL for basic math operations
Question: I've written a simple C# class to use as a DLL for simple math calculations: addition, subtraction, division, and multiplication. How can I improve this code, specifically for addition and multiplication? I've tested it with a C# GUI and a VB.NET GUI. I've tried it with VS 2005, 2008, 2010, 2012, and the Community Editions as well. Class1.cs using System; using System.Collections.Generic; using System.Text; namespace my_class { public class Class1 { public static double addfunc(double num1, double num2) { return (num1 + num2); } public static double subfunc(double num1, double num2) { return (num1 - num2); } public static double divfunc(double num1, double num2) { return num1/num2; } public static double mulfunc(double num1, double num2) { return num1*num2; } } } Here is the code and a screenshot of the working example: Form1.cs using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; namespace test_dll { public partial class Form1 : Form { public Form1() { InitializeComponent(); } double num1, num2; double add; double sub; double multi = 1.0 ; double div = 1.0 ; private void button1_Click(object sender, EventArgs e) { num1 = double.Parse(textBox1.Text); num2 = double.Parse(textBox2.Text); add = my_class.Class1.addfunc(num1,num2); MessageBox.Show(add.ToString()); } private void button2_Click(object sender, EventArgs e) { num1 = double.Parse(textBox1.Text); num2 = double.Parse(textBox2.Text); sub = my_class.Class1.subfunc(num1,num2); MessageBox.Show(sub.ToString()); } private void button3_Click(object sender, EventArgs e) { num1 = double.Parse(textBox1.Text); num2 = double.Parse(textBox2.Text); multi = my_class.Class1.mulfunc(num1,num2); MessageBox.Show(multi.ToString()); } private void button4_Click(object sender, EventArgs e) { num1 = double.Parse(textBox1.Text); num2 = double.Parse(textBox2.Text); div = my_class.Class1.divfunc(num1, num2); 
MessageBox.Show(div.ToString()); } } } Answer: Please be consistent: public static double addfunc(double num1, double num2) { return (num1 + num2); } public static double mulfunc(double num1, double num2) { return num1*num2; } In one method, you use parentheses around the expression, and in another, you don't. In one method, you use spaces around the operator; in the other, you don't. These methods could also be improved by using C# naming conventions: addfunc should be named with PascalCase as AddFunc at a minimum--writing out the full word "function" wouldn't hurt (although technically, this is called a "method" in C#). Your indentation could also be improved. Use a consistent 4 spaces (or 2 spaces, or 1 tab, if you prefer--4 spaces is the international default, however): using System.Windows.Forms; namespace test_dll { public partial class Form1 : Form { public Form1() { InitializeComponent(); } ... } } Also, name your namespace and form something descriptive, rather than test_dll and Form1. More inconsistency here: double num1, num2; double add; double sub; double multi = 1.0 ; double div = 1.0 ; Why do you assign some of the values, but not others? Why do you have spaces before some of your semi-colons? Why aren't these explicitly stated to have private scope, instead of trusting that the maintainer knows this? I do like how you separated the values for the end result from the two input values, however--that shows more clearly that these values are not inputs. Now this is a much more serious problem: private void button1_Click(object sender, EventArgs e) { num1 = double.Parse(textBox1.Text); num2 = double.Parse(textBox2.Text); add = my_class.Class1.addfunc(num1,num2); MessageBox.Show(add.ToString()); } First of all, why do you have add as a private field? You never use the value anywhere other than in this method. You should scope it tighter to show exactly where it is used. 
Also, even if you did use it somewhere else, the value is only refreshed when you click the add button, so you have the potential to be working with outdated data. There is a similar problem with num1 and num2: even though you update these every time a button is pressed, you only use the values in the method for that button and nowhere else; these too should be local variables. Second, guess what happens if I type "test" in the textbox? double.Parse will crash. You should use double.TryParse instead. Thirdly (naming again), how do you expect me to know that this method is for the add function without reading it? Suppose you just hired me, and I was handling a bug report (coming up in just a bit) in the division method. I now have to spend four times as much time finding and fixing the bug if I start at the top of the file and work my way down, instead of just reading the method signatures. Fourthly, your button handlers are all incredibly WET. You should consider doing something like this: private void AddClick(object sender, EventArgs e) { var result = GetResult(MathHelpers.AddFunction); MessageBox.Show(result.ToString()); } private double GetResult(Func<double, double, double> computingFunction) { var num1 = double.Parse(textBox1.Text); var num2 = double.Parse(textBox2.Text); return computingFunction(num1, num2); } Using a delegate for the one step that changes makes this code much cleaner to read and maintain; if, for example, you had a bug in the code that reads the values from the textboxes, you would otherwise have to fix it in 8 separate places: once for each textbox for each operation. Finally, a value divided by 0 is most definitely not infinity. It is undefined. Consider, for example, if 1 / 0 == Infinity and 2 / 0 == Infinity. Then we could say that 1 / 0 == 2 / 0, and that 1 == 2. Clearly not true.
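The delegate idea carries over to any language with first-class functions; a hypothetical Python analogue of the same refactor (parse once, pass the operation in, and handle bad input without crashing):

```python
import operator

def get_result(text1, text2, operation):
    """Parse both inputs, then apply the supplied operation.
    Returns None on unparseable input -- the analogue of double.TryParse failing."""
    try:
        a, b = float(text1), float(text2)
    except ValueError:
        return None
    return operation(a, b)

print(get_result("3", "4", operator.add))     # 7.0
print(get_result("oops", "4", operator.mul))  # None, instead of crashing
```

Each "button handler" now shrinks to a single call with a different operation argument, so the parsing logic lives in exactly one place.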
{ "domain": "codereview.stackexchange", "id": 21544, "tags": "c#, beginner, .net, library" }
Flattening an API response and renaming props: is there a solution by iterating over key/value pairs?
Question: I want to sum up and flatten the structure of an API response. Since I didn't succeed by iterating over the key/value pairs of input.statistic, I came up with another solution which does the job, but I'm not happy with it. What I receive is: input = { "data": some unimportant stuff, "statistic": [ { "valueId": 111, "statistic": { "min": 0, "max": 0, "average": 0.12 } }, { "valueId": 222, "statistic": { "min": 0, "max": 1, "average": 0.14 } } ] } At this point I'm only interested in the statistic data, and I want to change it to something like this: { "stat111": [ {name: "min", value: 0}, {name: "max", value: 0}, {name: "average", value: 0.12} ], "stat222": [ {name: "min", value: 0}, {name: "max", value: 1}, {name: "average", value: 0.14} ] } What I do is: const statDat = {}; for (const item of input.statistic) { const nameTop = 'stat' + item.valueId.toString(); const props = []; for (let key in item.statistic) { props.push({name: key, value: item.statistic[key]}); } statDat[nameTop] = props; } So statDat looks the way I want it, but once more I'm sure that there is a better and newer way of flattening and renaming the structure. Any hints? Answer: If you're looking for "newer", then I would recommend familiarizing yourself with a good JavaScript API reference, such as the Mozilla Developer Network's. That has such entries as array's forEach, Object.entries, destructuring assignments, and arrow functions. Those (and the new object notation) combine to change your code to const statDat = {}; input.statistic.forEach( (item) => { const nameTop = 'stat' + item.valueId; // toString() unnecessary statDat[nameTop] = Object.entries(item.statistic).map( ([name,value]) => ({name,value}) ); }); We can also use Object.fromEntries as in Jonah's answer to get these five lines down to one command (which I break into 4 lines for readability). 
const statDat2 = Object.fromEntries( input.statistic.map( ({valueId,statistic}) => [ 'stat'+valueId, Object.entries(statistic).map( ([name,value]) => ({name,value}) ) ] ) ); const input = { "data": "some unimportant stuff", "statistic": [ { "valueId": 111, "statistic": { "min": 0, "max": 0, "average": 0.12 } }, { "valueId": 222, "statistic": { "min": 0, "max": 1, "average": 0.14 } } ] }; const statDat = {}; input.statistic.forEach( (item) => { const nameTop = 'stat' + item.valueId; statDat[nameTop] = Object.entries(item.statistic).map( ([name,value]) => ({name,value}) ); }); console.log(statDat); // or const statDat2 = Object.fromEntries( input.statistic.map( ({valueId,statistic}) => [ 'stat'+valueId, Object.entries(statistic).map( ([name,value]) => ({name,value}) ) ] ) ); console.log(statDat2); Compressing five lines of code into one usually makes that one line a bit harder to understand than any of the five original lines, but much easier to understand overall.
{ "domain": "codereview.stackexchange", "id": 43204, "tags": "javascript, typescript, iteration" }
What to do with stale centroids in K-means
Question: When I run k-means on my dataset, I notice that some centroids become stale, in the sense that they are no longer the closest centroid to any point after some iteration. Right now I am skipping these stale centroids in the next iteration, because I think they no longer represent any useful subset of the data, but I wanted to know if there are other reasonable ways to deal with them. Answer: k-means finds only a local optimum. Thus a wrong number of clusters, or simply some random state of equilibrium in the attracting forces, can lead to empty clusters. Technically k-means does not provide a procedure for that, but you can enrich the algorithm with no problem. There are two approaches which I have found useful: remove the stale cluster, choose a random instance from your data set, and create a new cluster with its centroid at the chosen random point; or remove the stale cluster, choose the point farthest from all other centroids, and create a new cluster with its centroid at that point. Both procedures can in principle lead to indefinite running time, but if the number of such adjustments is finite (and usually it is), the algorithm will converge with no problem. To guard against infinite running time you can set an upper bound on the number of adjustments. The procedure is not practical if you have a huge data set and a large number of clusters; the running time can become prohibitive. Another way to decrease the chances of this happening is to use a better initialization procedure, like k-means++. In fact, the second suggestion is an idea from k-means++. There are no guarantees, however. Finally, a note regarding implementation: if you can't change the code of the algorithm to make these adjustments on the fly, the only option that comes to my mind is to start a new clustering run where you initialize the centroid positions of the non-stale clusters and apply one of the procedures above to the stale ones.
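A minimal 1-D sketch of the first reseeding strategy (function names and data are hypothetical; real implementations work on vectors, and this omits convergence checks):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """1-D k-means that reseeds any stale centroid (one that owns no
    points) at a random data point, per strategy 1 in the answer."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step, with the empty-cluster fix.
        for i in range(k):
            if clusters[i]:
                centroids[i] = sum(clusters[i]) / len(clusters[i])
            else:
                centroids[i] = rng.choice(points)   # stale: reseed
    return sorted(centroids)

data = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
print(kmeans(data, 2))   # centroids near 1.0 and 5.0
```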
{ "domain": "datascience.stackexchange", "id": 630, "tags": "clustering, k-means, unsupervised-learning" }
segmentation fault in robot_self_filter_color
Question: Hi, I'm seeing a segmentation fault running robot_self_filter_color in ROS Fuerte using the default installation of PCL, which I believe is PCL 1.5. I am using the following launch file to start the Kinect and filter out the PR2 using the self filter <launch> <arg name="kinect_frame_prefix" default="/head_mount_kinect" /> <arg name="kinect_camera_name" default="head_mount_kinect" /> <arg name="high_res" default="false" /> <!-- separate self filter Kinect points for creating object models with higher resolution--> <node pkg="robot_self_filter_color" type="self_filter_color" respawn="true" name="object_modeling_kinect_self_filter" output="screen" launch-prefix="gdb -ex run --args"> <remap from="cloud_in" to="/$(arg kinect_camera_name)/depth_registered/points" /> <remap from="cloud_out" to="/$(arg kinect_camera_name)/rgb/object_modeling_points_filtered"/> <param name="sensor_frame" type="string" value="$(arg kinect_frame_prefix)_rgb_optical_frame" /> <param name="subsample_value" type="double" value=".005"/> <rosparam command="load" file="$(find pr2_object_manipulation_launch)/config/object_modeling_self_filter.yaml" /> </node> <!-- start the Kinect --> <include file="$(find rgbd_assembler)/launch/openni_node.launch"> <arg name="kinect_frame_prefix" value="$(arg kinect_frame_prefix)"/> <arg name="kinect_camera_name" value="$(arg kinect_camera_name)"/> <arg name="high_res" value="$(arg high_res)"/> </include> </launch> The segmentation fault is in PCL called from filter on line 105 of self_filter_color.cpp. The backtrace from GDB is: (Unfortunately, I don't have a specially compiled version of PCL and the released one doesn't include debug tags.) 
#0 0x00007ffff6ac8c4f in void pcl::getMinMax3D(pcl::PointCloud const&, Eigen::Matrix&, Eigen::Matrix&) () from /opt/ros/fuerte/lib/libpcl_filters.so.1.5 #1 0x00007ffff6ad3cda in pcl::VoxelGrid::applyFilter(pcl::PointCloud&) () from /opt/ros/fuerte/lib/libpcl_filters.so.1.5 #2 0x0000000000434867 in filter (output=..., this=0x7fffffffda70) at /opt/ros/fuerte/include/pcl-1.5/pcl/filters/filter.h:117 #3 SelfFilter::cloudCallback (this=0x7fffffffd560, cloud2=...) at /tmp/buildd/ros-fuerte-pr2-object-manipulation-0.6.7/debian/ros-fuerte-pr2-object-manipulation/opt/ros/fuerte/stacks/pr2_object_manipulation/perception/robot_self_filter_color/src/self_filter_color.cpp:105 Playing with the self_filter_color.cpp file has revealed very little. Any hints would be great. Thanks, Jenny Originally posted by jbarry on ROS Answers with karma: 280 on 2013-05-31 Post score: 0 Answer: Hi Jenny, take a look at this ticket on Github: https://github.com/ros-perception/perception_pcl/issues/10 - I had a problem with robot_self_filter too, resulting in an error. After updating my pcl lib with the ones mentioned in the ticket, I had no problems. Originally posted by alex_rockt with karma: 76 on 2013-06-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14383, "tags": "ros" }
Simplify C# Code
Question: Is there a better way to write this in vs2010 C#? public bool IsAccept() { //check the status is accept if (Status == null) return false; return Status.ToLower() == "accept"; } public bool IsRefer() { //check the status is refer if (Status == null) return false; return Status.ToLower() == "refer"; } public bool IsAnyReviewState() { if (IsAccept() || IsRefer()) return true; return false; } Maybe a simplified way in C# 4 which I'm still learning. Answer: A simpler version of your code (will work in all .NET versions down to 2.0) is the following: public bool IsAccept { get { return string.Equals("accept", Status, StringComparison.OrdinalIgnoreCase); } } public bool IsRefer { get { return string.Equals("refer", Status, StringComparison.OrdinalIgnoreCase); } } public bool IsAnyReviewState { get { return IsAccept || IsRefer; } } But a more appropriate solution would be to avoid string literals where an enumeration fits. If you receive the value from an external source, you should parse it into an enum as soon as possible, so that rest of the code deals with typed data only.
{ "domain": "codereview.stackexchange", "id": 3374, "tags": "c#, asp.net" }
What effect does vortexing have on a fluid sample that simple mechanical shaking does not?
Question: Some protocols call for fluid samples to be mixed with a "vortexer" on the high setting. What effect does the vortexing have on fluid samples that mechanical shaking does not? Does it shear long molecules like DNA, for instance? A friend is setting up a new lab and asked if vortexers were strictly necessary. I am aware they are often called for in various protocols, but I actually don't know what specific effect the vortexing has that makes it better than manual shaking. What do you think? Answer: In my experience, shaking and mixing have different "dead spaces". Suppose you had an Eppendorf tube and you stirred it around with a pipet tip for thirty minutes. You would have great convective mixing in the radial direction but virtually no mixing in the Z-direction. Suppose you had a very viscous fluid like PEG. If you set the tube on a shaker for an hour it will virtually not mix. The vortexer provides a very different profile of mixing. Instances where I wouldn't use the vortexer would be for sensitive objects like DNA and cells. (Update) We had some interesting suggestions from the FDA (since the FDA pays attention to these types of discussions). To get proper mixing, it is often not considered enough to merely swirl a tube around a few times. Their actual recommendation is to train technicians to perform a 90 degree arc with their forearm to generate proper force in mixing.
{ "domain": "biology.stackexchange", "id": 233, "tags": "lab-techniques" }
Relating Fock states to eigenfunctions in space domain
Question: How can I relate the eigenvalues of $H=\hbar\omega(a^\dagger a+1/2)$ to the eigenfunctions of $H=\frac{p^2}{2m}+\frac{1}{2}m\omega^2 x^2$, with $p=-i\hbar\nabla$? I mean, how can the analytical approach to the solution of this problem be related to the algebraic one, obtaining a 1:1 correspondence between the solutions? Answer: The ladder operators $a$ and $a^\dagger$ can perfectly well be defined as differential operators. One starts off from the Hamiltonian $$H = -\frac{1}{2}\frac{d^2}{dx^2} + \frac{1}{2} x^2$$ that for simplicity is normalized in units of $\hbar\omega$ and whose units were moreover chosen so that $\sqrt{\frac{\hbar}{m\omega}}=1$. Then the eigenfunctions of the Schroedinger equation of the harmonic oscillator $H\psi_n =E_n \psi_n$ are: $$\psi_n(x) = \langle x|n\rangle = \frac{\pi^{-1/4}}{\sqrt{2^n n!}}\exp(-x^2/2) H_n(x) \quad \quad \text{(1)}$$ with the corresponding eigenvalues $E_n = n+1/2$. $H_n(x)$ is an abbreviation for the Hermite polynomials ($H_0(x)=1,\, H_1(x)=2x\,$ etc.). Then the ladder operators: $$ a =\frac{1}{\sqrt{2}}\left( x + \frac{d}{dx}\right) \equiv \frac{1}{\sqrt{2}}\left( x +\frac{i}{\hbar} p\right) $$ and $$ a^\dagger =\frac{1}{\sqrt{2}}\left( x - \frac{d}{dx}\right) \equiv \frac{1}{\sqrt{2}}\left( x -\frac{i}{\hbar} p\right) $$ can be defined.
They do exactly the job that you expect from them: First of all the Hamiltonian written in $a^\dagger$ and $a$ is written as expected: $$ H\psi = (a^\dagger a + \frac{1}{2})\psi \equiv ( -\frac{1}{2}\frac{d^2}{dx^2} + \frac{1}{2} x^2)\psi$$ Furthermore the annihilation operator acting on the ground state should give zero: $\langle x| a |0\rangle =0$ leads to the following differential equation $$\left( x + \frac{d}{dx}\right)\psi_0(x) =0$$ that once solved yields the wave function of the ground state (apart from the normalization): $$\psi_0(x) = C e^{-\frac{x^2}{2}}$$ Furthermore one can find (look up $\psi_0$ and $\psi_1$ in formula (1)): $$\langle x| a^\dagger |0\rangle =\sqrt{1}\langle x | 1\rangle =\psi_1(x)$$ or the same in differential language (knowing the correct normalization constant $C =\pi^{-1/4}$) $$\frac{1}{\sqrt{2}}\left( x - \frac{d}{dx}\right) \psi_0(x) \equiv\frac{1}{\sqrt{2}}\left( x - \frac{d}{dx}\right) \pi^{-1/4}e^{-x^2/2} = \frac{\pi^{-1/4} }{\sqrt{2}} 2x e^{-x^2/2}= \psi_1(x) $$ and so on. As is well known, the ladder operators fulfill: $$a|n\rangle =\sqrt{n}|n-1\rangle \quad \text{and} \quad a^\dagger|n\rangle =\sqrt{n+1}|n+1\rangle$$ These relations can be checked with their differential form given above in the following way: $$\langle x | a| n\rangle =\sqrt{n}\langle x|n-1\rangle \quad \text{respectively}\quad a \psi_n(x) =\sqrt{n}\psi_{n-1}(x) $$ and $$\langle x | a^\dagger| n\rangle =\sqrt{n+1}\langle x|n+1\rangle \quad \text{respectively}\quad a^\dagger \psi_n(x) = \sqrt{n+1}\psi_{n+1}(x) $$ where the $\psi_n(x)$ are given by the formula (1) indicated above, respectively the $\psi_n$ are the eigenfunctions of the differential operator $H = -\frac{1}{2}\frac{d^2}{dx^2} + \frac{1}{2} x^2$. Actually, the eigenvalues of the Hamiltonian of the quantum harmonic oscillator can be determined completely algebraically.
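As a quick sanity check on this correspondence, the ladder relation $a^\dagger \psi_n = \sqrt{n+1}\,\psi_{n+1}$ can be verified symbolically. The following is my own illustrative sketch (not part of the original answer) using SymPy: it builds $\psi_n$ from formula (1) and applies the differential form of $a^\dagger$:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# psi_n from formula (1): pi^(-1/4)/sqrt(2^n n!) * exp(-x^2/2) * H_n(x)
def psi(n):
    return (sp.pi**sp.Rational(-1, 4) / sp.sqrt(2**n * sp.factorial(n))
            * sp.exp(-x**2 / 2) * sp.hermite(n, x))

# a^dagger = (x - d/dx)/sqrt(2), acting as a differential operator
def a_dag(f):
    return (x * f - sp.diff(f, x)) / sp.sqrt(2)

# check a^dagger psi_n = sqrt(n+1) psi_{n+1} for the first few n
for n in range(4):
    assert sp.simplify(a_dag(psi(n)) - sp.sqrt(n + 1) * psi(n + 1)) == 0
```

The same pattern with `a(f) = (x*f + sp.diff(f, x))/sp.sqrt(2)` verifies the lowering relation, and `a(psi(0))` simplifies to zero, reproducing the ground-state condition above.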
{ "domain": "physics.stackexchange", "id": 70658, "tags": "quantum-mechanics, hilbert-space, wavefunction, harmonic-oscillator, quantum-states" }
Energy of resultant photons from meson decay
Question: I am a little unsure how to answer the following question: Find the energies of two photons emitted in opposite directions along the pion's original line of motion if the pion has a rest mass energy of 500 MeV and is moving with a kinetic energy of 0.8 GeV. I was planning to use the equation $$ 4E_1E_2 = (mc^2)^2$$ where $m$ is the rest mass of the pion. My issue is: how do I account for the kinetic energy of the pion? Answer: All you need to do is conserve energy and momentum in the lab frame. Firstly you conserve energy in the lab frame: \begin{equation} E_{\gamma 1} + E_{\gamma2} = E_{\pi} = 1.3\ \mathrm{GeV} \end{equation} Then you work out what the pion's momentum was (still in the lab frame) using the mass-energy-momentum relation, where $E_\pi$ is the total kinetic and mass energy: \begin{equation} E_{\pi}^2 = m_{\pi}^2c^4 + p_{\pi}^2c^2 \end{equation} Then you apply the relation connecting the photon energy and momentum: \begin{equation} p_{\gamma} = \frac{E_{\gamma}}{c} \end{equation} And conserve momentum (still in the lab frame): \begin{equation} \vec{p}_{\gamma1} + \vec{p}_{\gamma2} = \vec{p}_\pi \end{equation} Note that $\vec{p}_{\gamma1}$ and $\vec{p}_{\gamma2}$ are going to be in opposite directions and one of them will be in the same direction as $\vec{p}_\pi$, so you can relate the moduli of the vectors like so (if you choose $\gamma_1$ to be the one going "forward"): \begin{equation} p_{\gamma1} - p_{\gamma2} = p_\pi \end{equation} Between these four equations you can solve for $E_{\gamma1}$ and $E_{\gamma2}$.
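Plugging the question's numbers into those four relations gives the answer directly. This is my own numerical sketch, not part of the original answer:

```python
import math

m_pi = 0.5   # pion rest mass energy, GeV (from the question)
T = 0.8      # pion kinetic energy, GeV

E_pi = m_pi + T                        # total energy: 1.3 GeV
p_pi = math.sqrt(E_pi**2 - m_pi**2)    # momentum times c: 1.2 GeV

# E1 + E2 = E_pi and E1 - E2 = p_pi*c, so:
E1 = (E_pi + p_pi) / 2   # forward photon: 1.25 GeV
E2 = (E_pi - p_pi) / 2   # backward photon: 0.05 GeV
```

As a consistency check, $4E_1E_2 = 4 \cdot 1.25 \cdot 0.05 = 0.25\ \mathrm{GeV}^2 = (0.5\ \mathrm{GeV})^2$, which matches the invariant-mass formula the questioner wanted to use.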
{ "domain": "physics.stackexchange", "id": 18363, "tags": "special-relativity, particle-physics, radiation, pions" }
Python solution to Code Jam's 'Rounding Error'
Question: The "Rounding Error" problem of Round 1B of Google Code Jam 2018 is as follows: Problem To finally settle the age-old question of which programming language is the best, you are asking a total of N people to tell you their favorite language. This is an open-ended question: each person is free to name any language, and there are infinitely many languages in the world. Some people have already responded, and you have gathered this information as a list of counts. For example, 1 2 means that you have asked 3 people so far, and one picked a particular language, and the other two picked some other language. You plan to publish the results in a table listing each language and the percentage of people who picked it. You will round each percentage to the nearest integer, rounding up any percentage with a decimal part equal to or greater than 0.5. So, for example, 12.5% would round up to 13%, 99.5% would round up to 100%, and 12.4999% would round down to 12%. In surveys like this, sometimes the rounded percentages do not add up to exactly 100. After you are done surveying the remaining people, what is the largest value that the rounded percentages could possibly add up to? Input The first line of the input gives the number of test cases, T. T test cases follow; each consists of two lines. The first line consists of two integers N and L: the total number of people in the survey, and the total number of different languages represented among the people who have already responded. The second line consists of L integers Ci; the ith of these is the number of people who said that the ith of the represented languages was their favorite. Output For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is the largest value that the percentages could possibly add up to, as described above. Limits 1 ≤ T ≤ 100. 1 ≤ L < N. 1 ≤ Ci, for all i. The sum of all Ci values < N. Time limit: 10 seconds per test set. 
Memory limit: 1GB. This is my Python solution. However, I get the 'Time Limit Exceeded' result even for the smallest test case. How can I speed up this solution? from functools import reduce def main(): def gen_sums(total, freq, remaining): """Generate percentages' sums""" if not remaining: yield reduce(lambda x, y: x + (y*100+total//2)//total, freq, 0) else: seen = set() for i in range(len(freq)): if not freq[i] in seen: yield from gen_sums(total, freq[:i] + [freq[i]+1] + freq[i+1:], remaining-1) seen.add(freq[i]) yield from gen_sums(total, freq+[1], remaining-1) T = int(input()) for i in range(1, T+1): total_people, num_languages = map(int, input().split()) languages_frequency = [int(x) for x in input().split()] if not 100 % total_people: print('Case #{}: {}'.format(i, 100)) continue not_responded = total_people - sum(languages_frequency) max_percentage = max(gen_sums(total_people, languages_frequency, not_responded)) print('Case #{}: {}'.format(i, max_percentage)) main() Answer: 1. Review It is hard to test this code because it gets its data from standard input. This means that in order to test it or measure its performance you have to write the test case to a file and then run the code with standard input redirected from that file. 
It would be easier to test if the code were refactored so that the input-reading part of the code was in a separate function from the problem-solving part, like this: from functools import reduce def gen_sums(total, freq, remaining): """Generate percentages' sums""" if not remaining: yield reduce(lambda x, y: x + (y*100+total//2)//total, freq, 0) else: seen = set() for i in range(len(freq)): if not freq[i] in seen: yield from gen_sums(total, freq[:i] + [freq[i]+1] + freq[i+1:], remaining-1) seen.add(freq[i]) yield from gen_sums(total, freq+[1], remaining-1) def max_percentage(total_people, languages_frequency): if not 100 % total_people: return 100 not_responded = total_people - sum(languages_frequency) return max(gen_sums(total_people, languages_frequency, not_responded)) def main(): T = int(input()) for i in range(1, T+1): total_people, num_languages = map(int, input().split()) languages_frequency = [int(x) for x in input().split()] result = max_percentage(total_people, languages_frequency) print('Case #{}: {}'.format(i, result)) Now it's easy to test the code from the interactive interpreter or from a unit test, for example: >>> max_percentage(11, [1, 2, 3, 4]) 99 and easy to measure its performance: >>> from timeit import timeit >>> timeit(lambda:max_percentage(14, []), number=1) 8.237278351996792 2. Performance As demonstrated above, the code in the post takes more than 8 seconds to figure out that when there are 14 voters (and no votes cast yet), the maximum rounded percentage is 101%. Why does the code in the post take so long to solve this problem? Well, it's because it carries out a search over all possible assignments of votes. But if there are \$n\$ voters then there are of the order of $$\exp{\pi\sqrt {2n \over 3}}$$ possible assignments of votes (see the asymptotic formula for the partition numbers) and so the runtime is exponential in the number of votes. But in fact the problem is easy to solve by hand with a little bit of mathematics. 
If there are 14 voters, then each vote is worth \${100 \over 14} = 7{1\over7}\$ percent, and so four votes are worth \$28{4\over7}\$ percent, which rounds up to 29. Fewer than four votes will round down, and more than four will be wasteful since the extra votes over four could be used to contribute to another block of four votes. So the maximum rounded percentage is found when we group as many of the votes as possible into blocks of four votes. In this case there are three such blocks, leaving two votes over, giving a rounded total of \$29·3 + 14 = 101\$. Similar analysis shows that if there are 19 voters, then each voter contributes \${100\over19} = 5{5\over19}\$ percent, and so it is most efficient to put voters in pairs, because two votes contribute \$10{10\over19}\$ percent, which rounds up to 11. So we have nine pairs of votes and one left over, giving a rounded total of \$11·9 + 5 = 104\$. But it will be a very long time before max_percentage(19, []) returns the result. So searching over all possible assignments of votes is not going to work even for moderately large problem sizes. Instead, you need to program the kind of mathematical analysis that I carried out in the examples above. I'm not going to spoil the problem for you by giving away my solution, but I'll just show one performance measurement: >>> timeit(lambda:max_percentage2(14, []), number=1) 0.00011452200124040246 This is more than 70,000 times faster than the code in the post.
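Without spoiling the full algorithm, the rounding rule itself is easy to express exactly with rational arithmetic, which makes the hand calculations above easy to double-check. This is my own hypothetical helper, not the answerer's solution:

```python
import math
from fractions import Fraction

def rounded_pct(votes, n):
    """Percentage contributed by `votes` of `n` voters, with a decimal
    part of exactly 0.5 or more rounding up (as the problem requires)."""
    return math.floor(Fraction(votes * 100, n) + Fraction(1, 2))

# 14 voters: three blocks of 4 votes (29% each) plus 2 left over -> 101
assert 3 * rounded_pct(4, 14) + rounded_pct(2, 14) == 101
# 19 voters: nine pairs (11% each) plus 1 left over -> 104
assert 9 * rounded_pct(2, 19) + rounded_pct(1, 19) == 104
```

Using `Fraction` rather than floats avoids the 12.4999-vs-12.5 boundary issues the problem statement warns about.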
{ "domain": "codereview.stackexchange", "id": 30523, "tags": "python, python-3.x, programming-challenge, time-limit-exceeded, combinatorics" }
Proving that the average case complexity of binary search is O(log n)
Question: I know that both the average and worst case complexity of binary search is O(log n), and I know how to prove the worst case complexity is O(log n) using recurrence relations. But how would I go about proving that the average case complexity of binary search is O(log n)? Answer: I think most textbooks will provide a good proof. For my part, I can show the average case complexity as follows. Assume a uniform distribution of the position of the value that one wants to find in an array of size $n$. For the case of 1 read, the position must be in the middle, so there is a probability of $\frac{1}{n}$ for this case. For the case of 2 reads, one will read the middle position and then 1 of the 2 middle positions of the 2 sub-arrays. This probability is $\frac{2}{n}$. For the case of 3 reads, there are $2*2$ positions which result in this cost, as you go into the 4 sub-arrays of the first 2 sub-arrays. The probability for this cost is $\frac{2^2}{n}$ ... For the case of $x$ reads, the probability for this case is $\frac{2^{x-1}}{n}$. For the average case, the number of reads will be $\sum\limits_{i=1}^{\log(n)} \frac{i2^{i-1}}{n} = \frac{1}{n} \sum\limits_{i=1}^{\log(n)} i2^{i-1}$ Now you can approximate the summation by an integral. Note that $\int\limits_{1}^{\log(n)} x 2^x dx$ can be calculated and bounded by $\log(n)*2^{\log(n)} = n\log(n)$, so the summation is $O(n\log(n))$; multiplying by the $\frac{1}{n}$ in front gives the average as $O(\log(n))$. This is a very good method that applies to many cases. Another way to see it is $i2^{i-1} < \log(n) * 2^{i-1}$. Then the formula above is bounded by $\frac{\log(n)}{n} \sum\limits_{i=1}^{\log(n)} 2^{i-1}$ The summation part is actually $\frac{1 - 2^{\log(n)}}{1 - 2} = 2^{\log(n)} - 1 = n - 1$, which is definitely less than $n$; multiplying this by $\frac{\log(n)}{n}$ gives you what you want, $\log(n)$. So you get the bound you want, $O(\log(n))$.
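The sum in the answer can also be checked numerically. For a perfectly balanced array of $n = 2^k - 1$ elements the probabilities $2^{i-1}/n$ sum to exactly 1, and the closed form $\sum_{i=1}^{k} i\,2^{i-1} = (k-1)2^k + 1$ puts the average within about 1 of $\log_2 n$. This is my own illustration, not part of the original answer:

```python
import math

def avg_reads(k):
    """Exact expected number of reads for a balanced array of n = 2**k - 1
    elements, assuming a uniformly random target position."""
    n = 2**k - 1
    return sum(i * 2**(i - 1) for i in range(1, k + 1)) / n

for k in (15, 20):
    n = 2**k - 1
    # average is very close to log2(n+1) - 1, i.e. Theta(log n)
    assert abs(avg_reads(k) - (math.log2(n + 1) - 1)) < 1e-3
```

So the average is roughly one comparison fewer than the worst case, consistent with the $O(\log n)$ bound derived above.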
{ "domain": "cs.stackexchange", "id": 3458, "tags": "algorithms, time-complexity, average-case, binary-search" }
Cross-validating Fixed-effect Model
Question: Code Summary The following code is a data science script I've been working on that cross-validates a fixed effect model. I'm moving from R to Python and would appreciate feedback on the code below. The code does the following: Split data into train and test using a custom function that groups/clusters the data Estimate a linear fixed effect model with train and test data Calculate RMSE and tstat to verify independence of residuals Prints RMSE, SE, and tstat from cross-validation exercise. Note: the code downloads a remote data set, so the code can be run on its own. Code from urllib import request from scipy import stats import pandas as pd import numpy as np import statsmodels.api as sm print("Defining functions......") def main(): """ Estimate baseline and degree day regression. Returns: data.frame with RMSE, SE, and tstats """ # Download remote from github print("Downloading custom data set from: ") print("https://github.com/johnwoodill/corn_yield_pred/raw/master/data/full_data.pickle") file_url = "https://github.com/johnwoodill/corn_yield_pred/raw/master/data/full_data.pickle" request.urlretrieve(file_url, "full_data.pickle") cropdat = pd.read_pickle("full_data.pickle") # Baseline WLS Regression Cross-Validation with FE and trends print("Estimating Baseline Regression") basedat = cropdat[['ln_corn_yield', 'trend', 'trend_sq', 'corn_acres']] fe_group = pd.get_dummies(cropdat.fips) regdat = pd.concat([basedat, fe_group], axis=1) base_rmse, base_se, base_tstat = felm_cv(regdat, cropdat['trend']) # Degree Day Regression Cross-Validation print("Estimating Degree Day Regression") dddat = cropdat[['ln_corn_yield', 'dday0_10C', 'dday10_30C', 'dday30C', 'prec', 'prec_sq', 'trend', 'trend_sq', 'corn_acres']] fe_group = pd.get_dummies(cropdat.fips) regdat = pd.concat([dddat, fe_group], axis=1) ddreg_rmse, ddreg_se, ddreg_tstat = felm_cv(regdat, cropdat['trend']) # Get results as data.frame fdat = {'Regression': ['Baseline', 'Degree Day',], 'RMSE': [base_rmse, 
ddreg_rmse], 'se': [base_se, ddreg_se], 't-stat': [base_tstat, ddreg_tstat]} fdat = pd.DataFrame(fdat, columns=['Regression', 'RMSE', 'se', 't-stat']) # Calculate percentage change fdat['change'] = (fdat['RMSE'] - fdat['RMSE'].iloc[0])/fdat['RMSE'].iloc[0] return fdat def felm_rmse(y_train, x_train, weights, y_test, x_test): """ Estimate WLS from y_train, x_train, predict using x_test, calculate RMSE, and test whether residuals are independent. Arguments: y_train: Dep variable - Full or training data x_train: Covariates - Full or training data weights: Weights for WLS y_test: Dep variable - test data x_test: Covariates - test data Returns: Returns tuple with RMSE and tstat from ttest """ # Fit model and get predicted values of test data mod = sm.WLS(y_train, x_train, weights=weights).fit() pred = mod.predict(x_test) #Get residuals from test data res = (y_test[:] - pred.values) # Calculate ttest to check residuals from test and train are independent t_stat = stats.ttest_ind(mod.resid, res, equal_var=False)[0] # Return RMSE and t-stat from ttest return (np.sqrt(np.mean(res**2)), t_stat) def gc_kfold_cv(data, group, begin, end): """ Custom group/cluster data split for cross-validation of panel data. 
(Ensure groups are clustered and train and test residuals are independent) Arguments: data: data to filter with 'trend' group: group to cluster begin: start of cluster end: end of cluster Return: Return test and train data for Group-by-Cluster Cross-validation method """ # Get group data data = data.assign(group=group.values) # Filter test and train based on begin and end test = data[data['group'].isin(range(begin, end))] train = data[~data['group'].isin(range(begin, end))] # Return train and test dfs = {} tsets = [train, test] # Combine train and test to return dfs for i, val in enumerate([1, 2]): dfs[val] = tsets[i] return dfs def felm_cv(regdata, group): """ Cross-validate WLS FE model Arguments: regdata: regression data group: group fixed effect Returns: return mean RMSE, standard error, and mean tstat from ttest """ # Loop through 1-31 years with 5 groups in test set and 26 train set #i = 1 #j = False retrmse = [] rettstat = [] #for j, val in enumerate([1, 27]): for j in range(1, 28): # Get test and training data tset = gc_kfold_cv(regdata, group, j, j + 4) # Separate y_train, x_train, y_test, x_test, and weights y_train = tset[1].ln_corn_yield x_train = tset[1].drop(['ln_corn_yield', 'corn_acres'], 1) weights = tset[1].corn_acres y_test = tset[2].ln_corn_yield x_test = tset[2].drop(['ln_corn_yield', 'corn_acres'], 1) # Get RMSE and tstat from train and test data inrmse, t_stat = felm_rmse(y_train, x_train, weights, y_test, x_test) # Append RMSE and tstats to return retrmse.append(inrmse) rettstat.append(t_stat) # If end of loop return mean RMSE, s.e., and tstat if j == 27: return (np.mean(retrmse), np.std(retrmse), np.mean(t_stat)) if __name__ == "__main__": RDAT = main() print(RDAT) # print results print("---Results--------------------------------------------") print("Baseline: ", round(RDAT.iloc[0, 1], 2), "(RMSE)", round(RDAT.iloc[0, 2], 2), "(se)", round(RDAT.iloc[0, 1], 3), "(t-stat)") print("Degree Day: ", round(RDAT.iloc[1, 1], 2), "(RMSE)", 
round(RDAT.iloc[0, 2], 2), "(se)", round(RDAT.iloc[1, 3], 2), "(t-stat)") print("------------------------------------------------------") print("% Change from Baseline: ", round(RDAT.iloc[1, 4], 4)*100, "%") print("------------------------------------------------------") Answer: A first analysis. Once I have more time, I'll try to look into what happens with the data exactly; here are some remarks about the general code quality: pickle: from the Python documentation: Warning The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source. If this is partially processed data, another intermediary format like feather or parquet might be more appropriate. Functions: I would make even more functions, instead of cramming everything into main. Actually, almost everywhere you do a print('<doing this>'), I would make a separate function: `fetch_data`, `baseline_regression`, `degree_day`, ... caching: Instead of downloading the file each time you do the analysis, you might try to cache it. dataframe indexing: sometimes you use df['<key>'], sometimes df.<key>. Try to be consistent. return values: in gc_kfold_cv you use 5 lines to put the result in a dictionary, with 1 and 2 as keys. Simpler would be to return a tuple test, train or a dict {'test': test, 'train': train}. keyword arguments: in tset[2].drop(['ln_corn_yield', 'corn_acres'], 1), the significance of the 1 is unclear, so better use axis=1 or axis='columns'.
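For the caching remark, here is a minimal sketch of what that could look like (the helper name `fetch_data` and its behavior are my own illustration, not the reviewer's code): download the pickle only if a local copy does not already exist.

```python
import os
from urllib import request

DATA_URL = ("https://github.com/johnwoodill/corn_yield_pred/"
            "raw/master/data/full_data.pickle")

def fetch_data(path="full_data.pickle", url=DATA_URL):
    """Download the data file once; reuse the local copy on later runs."""
    if not os.path.exists(path):
        request.urlretrieve(url, path)
    return path
```

`main()` would then call `cropdat = pd.read_pickle(fetch_data())`, and repeated runs of the analysis skip the network entirely.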
{ "domain": "codereview.stackexchange", "id": 31831, "tags": "python, statistics" }
Why do children prefer sweeter foods?
Question: As we get older, we tend to lose our sweet tooth and become more tolerant of bitter foods, like vegetables. However, I never understood how this works. Why is it that children prefer sweeter foods, even some that adults may consider "too sweet"? In fact, is there any reason they would also dislike bitter foods, even when they can be beneficial to their health? It just seems bizarre to me that the body would start out craving sweets and lose this later on. Are only humans like this? Is there anything suggesting that younger animals prefer sweet foods too, but like them less as they get older? Does this have any biological advantage or is it just random? Answer: @Colombo explains one reason that I think is obvious. However, there has been some research done on this. Another reason is that it would provide an evolutionary advantage in environments where calories are scarce. Also, sugar actually acts like a pain reliever: studies show that sugar given to babies and children during surgery acts as an analgesic. The sweet tooth could be controlled by hormones secreted from the growing bones. Some common hormones like insulin also affect sensory centers in the brain. This may explain why the sweet tooth goes away in adulthood. Reference NPR - http://www.npr.org/sections/thesalt/2011/09/26/140753048/kids-sugar-cravings-might-be-biological
{ "domain": "biology.stackexchange", "id": 5060, "tags": "food, psychology, health, taste, children" }
How was this result on discrete Fourier series achieved?
Question: I was trying to do the question 10, part b of the following document (https://ocw.mit.edu/resources/res-6-007-signals-and-systems-spring-2011/assignments/MITRES_6_007S11_hw10.pdf) I was going through the solution(https://ocw.mit.edu/resources/res-6-007-signals-and-systems-spring-2011/assignments/MITRES_6_007S11_hw10_sol.pdf). Can someone explain to me how they reached from step 3 to answer to part b) of the 10th question. I understood till step-3 of the solution. Answer: Note that $x[n]$ has period $N$ and the sum you refer to has $NM$ terms, so the sum has the form $$\sum_{n=0}^{NM-1}x[n]f[n]=x[0]f[0]+x[1]f[1]+\ldots+x[N-1]f[N-1]+x[0]f[N]+\ldots +x[N-1]f[2N-1]+x[0]f[2N]+\ldots=\\=x[0]\big(f[0]+f[N]+f[2N]+\ldots+f[MN-N]\big)+x[1]\big(f[1]+f[N+1]+\ldots+f[MN-N+1]\big)+\ldots+x[N-1]\big(f[N-1]+f[2N-1]+\ldots+f[MN-1]\big)=\\=\sum_{n=0}^{N-1}x[n]\sum_{l=0}^{M-1}f[n+lN]$$ With $f[n]=e^{-jk2\pi n/NM}$ we obtain $$\frac{1}{NM}\sum_{n=0}^{NM-1}x[n]e^{-jk2\pi n/NM}=\frac{1}{NM}\sum_{n=0}^{N-1}x[n]\sum_{l=0}^{M-1}e^{-jk2\pi (n+lN)/NM}\tag{1}$$ For the other sum over $y[n]$ you can do exactly the same. The final result is obtained as follows. The right-hand side of Eq. $(1)$ can be written as $$\frac{1}{NM}\sum_{n=0}^{N-1}x[n]\sum_{l=0}^{M-1}e^{-jk2\pi (n+lN)/NM}=\frac{1}{NM}\sum_{n=0}^{N-1}x[n]e^{-j(k/M)2\pi n/N}\sum_{l=0}^{M-1}e^{-jk2\pi l/M}\tag{2}$$ The last sum over $l$ in Eq. $(2)$ is given by $$\sum_{l=0}^{M-1}e^{-jk2\pi l/M}=\begin{cases}M,&k=mM,\;m\in\mathbb{Z}\\0,&\text{otherwise}\end{cases}\tag{3}$$ Consequently, for $k=mM$, Eq. $(2)$ evaluates to $$\frac{1}{N}\sum_{n=0}^{N-1}x[n]e^{-j(k/M)2\pi n/N}=\frac{1}{N}a_{k/M}\tag{4}$$ where $a_k$ are the discrete Fourier series (DFS) coefficients of $x[n]$. For other values of $k$ the expression in Eq. $(2)$ equals zero. Exactly the same manipulation can be done with the sum over $y[n]$.
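The final relation — the $NM$-point coefficients of the period-$N$ signal vanish except at multiples of $M$, where they equal $a_{k/M}$ — can be verified numerically. This is my own check (not part of the original solution), using NumPy's FFT with the same $1/N$ normalization as the DFS definition:

```python
import numpy as np

N, M = 4, 3
rng = np.random.default_rng(0)
x = rng.standard_normal(N)        # one period of x[n]
xt = np.tile(x, M)                # the same signal viewed with period N*M

a = np.fft.fft(x) / N             # DFS coefficients a_k of x[n] (period N)
b = np.fft.fft(xt) / (N * M)      # DFS coefficients over the N*M-point period

for k in range(N * M):
    if k % M == 0:
        assert np.allclose(b[k], a[k // M])   # b_{mM} = a_m, per Eq. (4)
    else:
        assert np.isclose(b[k], 0)            # zero otherwise, per Eq. (3)
```

This matches the derivation: the geometric sum over $l$ in Eq. (3) kills every coefficient whose index is not a multiple of $M$, and rescales the surviving ones back to the period-$N$ DFS values.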
{ "domain": "dsp.stackexchange", "id": 6800, "tags": "discrete-signals, fourier-series" }
Temperature converter using "switch" statement
Question: I wrote a program to convert temperature into different units, where the users have to put a space between temperature and unit: 23 f, 34 c, 45 k using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace PracticeSet1 { class Program { public static float temp; public static char tempUnit; static void Main(string[] args) { //Getting user input Console.WriteLine("Enter Temprature in to convert it into other i.e 30 k, 45 f, 50 c *Put space between value and unit* "); string[] tempInput = Console.ReadLine().Split(); //parse element 0 temp = float.Parse(tempInput[0]); //assinging tempUnit tempUnit = char.Parse(tempInput[1]); switch (tempUnit) { //Converting temp to F and K if tempUnit == c case 'c': Console.WriteLine("Celsius To Farhenheit and Kelvin"); convertCelsiusToFarhenheit(); convertCelsiusToKelvin(); break; //Converting temp to C and F if tempUnit == K case 'k': Console.WriteLine("Kelvin"); convertKelvinToCelsius(); convertKelvinToFarhenheit(); break; //Converting temp to C and K if tempUnit == F case 'f': Console.WriteLine("Farhenheit to Celsius and kelvin"); convertFarhenheitToCelsius(); convertFarhenheitToKelvin(); break; } } static void convertFarhenheitToCelsius() { Console.WriteLine((temp - 32) * 0.5556 + "°C"); } static void convertFarhenheitToKelvin() { Console.WriteLine((temp + 459.67) * 5 / 9 + "°K"); } static void convertCelsiusToFarhenheit() { Console.WriteLine((temp * 1.8) + 32 + "°F"); } static void convertCelsiusToKelvin() { Console.WriteLine(temp + 273.15 + "°K"); } static void convertKelvinToCelsius() { Console.WriteLine(temp - 273.15 + "°C"); } static void convertKelvinToFarhenheit() { Console.WriteLine(temp - 459.67 + "°F"); } } } Answer: public static float temp; public static char tempUnit; There's no reason to make these into global variables. 
It's better to use local variables: static void Main(string[] args) { float temp; char tempUnit; } If you want a first step into OOP, try wrapping the temp and tempUnit into a class (or struct) of their own. While Console.ReadLine().Split() is essentially the same as Console.ReadLine().Split(' '), I would suggest putting the character in there. It's a matter of readability. Initially, I assumed it would split on newlines, not spaces, until I remembered. Or, alternatively, add a comment that mentions you're splitting on spaces. You're not handling garbage values. What happens if the user enters 123f or batman or k 123? Try to handle invalid data and show the user an error message that describes the problem. if (!float.TryParse(tempInput[0], out temp)) { Console.WriteLine($"Cannot parse the value \"{tempInput[0]}\""); //Wait until user presses enter and close the application Console.ReadLine(); return; } The subsequent code is the same, as TryParse will still put a usable value in the temp variable. Console.WriteLine(temp + 273.15 + "°K"); While this currently works, I highly suggest not doing this. It's very hard to spot that you're first doing a mathematical operation (number + number), and then a string concatenation (number + string, which is handled as string + string). Suppose you're in a culture where the unit comes before the value (e.g. °K 123). You can't just simply invert the code ("°K" + temp + 273.15); because if temp equals 123, your result will be °K123273.15 because it is now concatenating three string values. While you could work around this issue with parentheses, it's still not a good fix. There's no reason to inline all statements. Especially when you're a beginner, but this applies to experts as well. Spreading the same operations over multiple lines does not affect application performance, while inlining everything dramatically lowers readability.
var convertedValue = temp + 273.15; var convertedValueString = convertedValue + "°K"; Notice that I didn't use Console.WriteLine() yet. You've had to copy paste that in every method, which is not a reusable pattern. It would be better to have the method return a string: static string convertCelsiusToKelvin() { var convertedValue = temp + 273.15; var convertedValueString = convertedValue + "°K"; return convertedValueString; } And wrap the method call in the Main() method: switch (tempUnit) { //Converting temp to F and K if tempUnit == c case 'c': Console.WriteLine("Celsius To Farhenheit and Kelvin"); Console.WriteLine(convertCelsiusToFarhenheit()); Console.WriteLine(convertCelsiusToKelvin()); Note: This can be optimized further, but this is already a good first step. It teaches you to separate the calculation from the presentation of the calculated value. Try to separate the logic into methods more. Your Main() method currently is doing many things that don't really have anything to do with each other: Read the values Decide which conversion calculations to do (As per my suggestion) Printing the converted values. A minor redesign to show you what I mean. 
First, let's define a struct for our values: public struct Temperature { public float Value; public char Unit; } You could redesign your main method to be much more abstracted: static void Main(string[] args) { Temperature input = GetInputFromUser(); Temperature valueKelvin = CalculateKelvin(input); Print(valueKelvin); Temperature valueFahrenheit = CalculateFahrenheit(input); Print(valueFahrenheit); Temperature valueCelsius = CalculateCelsius(input); Print(valueCelsius); Console.ReadLine(); } Some explanatory examples: public static Temperature CalculateKelvin(Temperature inputValue) { switch(inputValue.Unit) { case 'K': //Same unit, so no conversion needed return inputValue; case 'F': // F => K return convertFahrenheitToKelvin(inputValue); case 'C': // C => K return convertCelsiusToKelvin(inputValue); default: throw new ArgumentException("Unknown unit"); } } public static void Print(Temperature value) { Console.WriteLine($"{value.Value} °{value.Unit}"); }
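The conversion formulas the methods above are assumed to implement can be sanity-checked independently; here is a quick sketch in Python (the function names are illustrative, not from the original code):

```python
# Standard temperature conversions; the C# methods above are assumed to
# implement the same formulas.
def c_to_k(c):
    return c + 273.15

def c_to_f(c):
    return c * 9 / 5 + 32

assert c_to_k(0) == 273.15      # freezing point of water
assert c_to_f(100) == 212.0     # boiling point of water
assert c_to_f(-40) == -40.0     # the two scales cross at -40
```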
{ "domain": "codereview.stackexchange", "id": 30302, "tags": "c#, beginner, unit-conversion" }
DIY CO2 sensor calibration
Question: Summary: Are there any worthwhile methods for rough calibration of CO2 sensors without proper chemistry lab equipment, calibration gas, or another known-good sensor available? It would only be used as an air quality indicator. Full story: I have a couple of SEN0159 analog CO2 modules based on the MG-811 sensor. According to the manufacturer, calibration has to be done manually for each individual sensor, i.e. to determine the parameters for the conversion function from voltage to concentration percentage. There are two sets of parameters mentioned as examples in the documentation, but these differ wildly from one another, so indeed these particular values cannot be trusted. For the calibration, one would need to expose the sensor to at least two samples of known CO2 concentration. One of these would obviously be the baseline value, in fresh outdoor air, which can be treated as 400 ppm (or nearby meteorological station data could be consulted). For other types of sensors, a zero ppm calibration point would be available to use with e.g. pure nitrogen gas. This particular sensor has a measurement range of 400 to 10000 ppm though, meaning that some other point has to be used within those bounds. So the challenge is to obtain an approximately known CO2 concentration in some container (with the sensor in it). I am asking in Engineering in the hope of getting some creative answers using common workshop/household tools, cheap material, and widely available CO2 sources like a soda siphon, carbonated water, burning something, etc., maybe even simply exhaled air. Answer: If you're really only looking for a rough calibration, you may be able to set up a dilution system. 10,000 ppm is 1%, so you'd be aiming for a gas mixture of 1% CO2 and 99% air. Use an air compressor and small flowmeter to get the air at some known flowrate, say 100 ml/minute. Then put some dry ice in a mostly-sealed box. As it sublimates it will displace the air, and the box will eventually be filled with pure CO2. 
Now you need 1 ml/min drawn out of the box and combined with the air flow. You'll need a little pressure to drive it, so you'd need a small pump to draw the CO2 out and push it through the flowmeter into the combined stream. Acquiring all those bits and pieces may well cost more than buying or renting a calibrated reference sensor.
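The dilution arithmetic is worth writing out: with the flow rates suggested above (both assumed values), the mixture comes out just shy of the 10,000 ppm target. A quick check in Python:

```python
# Assumed flow rates from the answer: 100 ml/min of air, 1 ml/min of CO2.
air_flow = 100.0  # ml/min
co2_flow = 1.0    # ml/min

ppm = co2_flow / (air_flow + co2_flow) * 1e6
# 1 part CO2 in 101 parts total is ~9901 ppm, close enough for a rough calibration
assert 9800 < ppm < 10000
```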
{ "domain": "engineering.stackexchange", "id": 4531, "tags": "measurements, sensors" }
How to solve signal MFSK or FHSS question (received signal + noise + jamming)
Question: I'm trying to solve the following: \begin{equation} A \cos(2\pi f t + \theta_1) + B \cos(2\pi f t + \theta_2) = D\cos(2\pi f t + \theta_3) \end{equation} I just need to know the correct value of D; the exact values of the frequency and phase are not important since detection is non-coherent. Answer: $$A\cos(2\pi f t+\theta_1)+B\cos(2\pi f t+\theta_2)=C\cos(2\pi f t+\theta_3)$$ where $$C=|u|\quad\textrm{and}\quad \theta_3=\arg\{u\}$$ with $$u=Ae^{j\theta_1}+Be^{j\theta_2}$$ The constant $C$ can be written as $$C=\sqrt{A^2+2AB\cos(\theta_1-\theta_2)+B^2}$$
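The phasor-addition identity in the answer can be verified numerically; a sketch using Python's cmath (the values of A, B, and the phases are arbitrary):

```python
import cmath
import math

A, theta1 = 2.0, 0.4
B, theta2 = 1.5, -1.1
f = 3.0  # Hz, arbitrary since the identity holds at every t

# Phasor sum, as in the answer: u = A e^{j theta1} + B e^{j theta2}
u = A * cmath.exp(1j * theta1) + B * cmath.exp(1j * theta2)
C, theta3 = abs(u), cmath.phase(u)

# Closed form for the amplitude
C_closed = math.sqrt(A**2 + 2 * A * B * math.cos(theta1 - theta2) + B**2)
assert math.isclose(C, C_closed)

# Pointwise check of the identity over one period
for k in range(100):
    t = k / (100 * f)
    lhs = A * math.cos(2 * math.pi * f * t + theta1) \
        + B * math.cos(2 * math.pi * f * t + theta2)
    rhs = C * math.cos(2 * math.pi * f * t + theta3)
    assert math.isclose(lhs, rhs, abs_tol=1e-9)
```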
{ "domain": "dsp.stackexchange", "id": 7172, "tags": "frequency-spectrum, fsk, spread-spectrum" }
openni kinect depth value
Question: I use the openni SimpleViewer; the depth value is 10000 max. Can somebody tell me what the value stands for? Why is it not 2047 max? I found that there is a SampleConfig.xml; how do I configure my Kinect? Originally posted by Bruce on ROS Answers with karma: 26 on 2011-05-03 Post score: 0 Answer: Hi, the value of 10000 is the maximum depth in millimeters. That's not a software limit, but a limit given by the hardware. The value is also given somewhere in the OpenNI/PS-Engine libraries. As "Mac" mentioned, the point clouds published by the node are already in meters in the corresponding coordinate frame. -Suat Originally posted by Suat Gedikli with karma: 91 on 2011-06-17 This answer was ACCEPTED on the original site Post score: 0
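Since the raw values are millimeters with a 10000 mm hardware ceiling, converting to the meters used by the published point clouds is a simple scaling; a hypothetical sketch in Python (names are illustrative):

```python
MAX_DEPTH_MM = 10000  # hardware limit reported by the sensor, in millimeters

def depth_to_meters(raw_mm):
    # Clamp to the sensor's range, then convert mm -> m (the point clouds
    # published by the openni node are already expressed in meters).
    return min(raw_mm, MAX_DEPTH_MM) / 1000.0

assert depth_to_meters(10000) == 10.0
assert depth_to_meters(1500) == 1.5
```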
{ "domain": "robotics.stackexchange", "id": 5507, "tags": "ros, openni, depth" }
Does Bell's inequalities also rule out non-computable local hidden variable theories?
Question: I have been reading different articles on Bell's assumptions and interpretations, including superdeterminism. I always end up dizzy when I try to think about this specific question, so any hints would be greatly appreciated. Let us imagine that the randomness in quantum mechanics arises not because of some chaotic hidden-variable process but from an actually non-computable local process. For instance, an underlying 3D local cellular automaton with an oracle that can solve the halting problem. In this case there is no way, even in principle, to predict the behaviour of the hidden variables; they will behave truly randomly (not just pseudorandomly). So there are two questions: 1) Could such inherent randomness from a non-computable law be the source of the "true" irreducible randomness of quantum mechanics? 2) Would such a source bypass Bell's assumptions, in a way that you can still maintain local determinism but are at the same time barred from ever using it to make any exact predictions (because of it not being computable)? Update: First, this is not my personal theory; it is only a question about the foundations of physics. It is mainstream physics, and not some sort of vague delirium. The two questions may seem unrelated, but I believe they are connected, and I will try to explain my train of thought about why. First, when we usually talk about the randomness being an irreducible property of nature, my interpretation is that we are somehow saying that there are no laws that determine a particular measurement. This in turn implies that there is no hidden-variables theory. Bell inequalities are supposed to distinguish between these two positions (I MIGHT BE WRONG HERE). If randomness is inherent but because of laws, even if there are no hidden variables (I am not sure if we could talk of a hidden-variables theory if such a theory is not computable, because there is nothing to predict), then perhaps this assumption was not considered as a possibility in Bell's theorem. 
My intuition is that perhaps this assumption is not included in Bell's assumptions. But I am not sure how the option of hidden variables was formalized. Perhaps it is completely irrelevant whether there are hidden variables or not. That is why I posted this question. I am confused. For instance, equations (1) and (2) of this article, which tries to dissect Bell's assumptions, would not make sense to me. Answer: It shouldn't make a difference whether the hidden variables are generated by a computable or non-computable rule, so yes, Bell's proof should rule out non-computable local hidden variables too. Here's a simple toy model of Bell inequality violations I wrote up a while ago: Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes that, when scratched, can reveal either a cherry or a lemon. We give one card to Alice and one to Bob, and each scratches only one of the three boxes. When we repeat this many times, we find that whenever they both pick the same box to scratch, they always get the same result--if Bob scratches box A and finds a cherry, and Alice scratches box A on her card, she's guaranteed to find a cherry too. Classically, we might explain this by supposing that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it, and that the machine prints pairs of cards in such a way that the "hidden" fruit in a given box of one card always matches the hidden fruit in the same box of the other card. If we represent cherries as + and lemons as -, so that a B+ card would represent one where box B's hidden fruit is a cherry, then the classical assumption is that each card's +'s and -'s are the same as the other's--if the first card was created with hidden fruits A+,B+,C-, then the other card must also have been created with the hidden fruits A+,B+,C-. 
The problem is that if this were true, it would force you to the conclusion that if Alice and Bob are picking randomly which box to scratch on each trial (with a 1/3 chance of A, B, or C each time), then if they do this a large number of times, we should expect that in the subset of trials where Alice and Bob happened to pick different boxes to scratch, they should find the same fruit at least 1/3 of the time. For example, if we imagine Bob and Alice's cards each have the hidden fruits A+,B-,C+, then we can look at each possible way that Alice and Bob can randomly choose different boxes to scratch, and what the results would be: Bob picks A, Alice picks B: opposite results (Bob gets a cherry, Alice gets a lemon) Bob picks A, Alice picks C: same results (Bob gets a cherry, Alice gets a cherry) Bob picks B, Alice picks A: opposite results (Bob gets a lemon, Alice gets a cherry) Bob picks B, Alice picks C: opposite results (Bob gets a lemon, Alice gets a cherry) Bob picks C, Alice picks A: same results (Bob gets a cherry, Alice gets a cherry) Bob picks C, Alice picks B: opposite results (Bob gets a cherry, Alice gets a lemon) In this case, you can see that if they are equally likely to pick each combination of boxes, then 2 times out of 6 when they choose different boxes, they will get the same fruit (i.e. a 1/3 chance of the same result). You'd get the same answer if you assumed any other preexisting state where there are two fruits of one type and one of the other, like A+,B+,C- or A+,B-,C-. On the other hand, if you assume a state where each card has the same fruit behind all three boxes, so either they're both getting A+,B+,C+ or they're both getting A-,B-,C-, then of course even if Alice and Bob pick different boxes to scratch they're guaranteed to get the same fruits with probability 1. 
So if you imagine that when multiple pairs of cards are generated by the machine, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C- while other pairs are created in homogeneous preexisting states like A+,B+,C+, then the probability of getting the same fruits when you scratch different boxes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100% of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get the same answers in less than 1/3 of trials where you scratch different boxes, provided you assume that each card has such a preexisting state with "hidden fruits" in each box. But now suppose Alice and Bob look at all the trials where they picked different boxes, and found that they only got the same fruits 1/4 of the time! That would be the violation of Bell's inequality, and something equivalent actually can happen when you measure the spin of entangled photons along one of three different possible axes. So in this example, it seems we can't resolve the mystery by just assuming the machine creates two cards with definite "hidden fruits" behind each box, such that the two cards always have the same fruits in a given box. Something mathematically analogous is predicted in certain experiments where entangled photons are passed through polarizers at different angles, in which the probability that both photons will have the same result (both pass through their respective polarizers, or both are blocked) is $\cos^2(\theta)$, where $\theta$ is the angle between the two polarizers. Suppose the experimenters decide in advance that on each trial, they will choose randomly between one of three angles for their polarizer: 0 degrees from vertical, 60 degrees, or 120 degrees. 
On a given trial, if both experimenters choose the same angle, then $\theta = 0$ so they are guaranteed to get the same result with probability 1 (both photons pass or both are blocked), but if they choose different settings the probability of getting the same result would be $\cos^2(\pm 60^\circ)$ or $\cos^2(\pm 120^\circ)$, which in both cases gives a probability of 1/4. But the Bell inequality whose derivation I sketched says that if you want to explain the perfect match when both experimenters make the same choice using local hidden variables, the probability of a match when they make different choices should be no lower than 1/3. Note that nothing in the argument depends on how the device printing up pairs of lotto cards decides what hidden fruits to put in the boxes. All that matters is that there must be some fraction of trials where it prints up a "homogeneous" pair and some fraction where it prints up an "inhomogeneous" pair, and that the probability that Alice and Bob choose to scratch the same box or different boxes on a given trial is statistically uncorrelated with whether the device printed up a homogeneous or inhomogeneous pair on that trial (this last assumption would be violated with superdeterminism, but I don't think you were talking about that).
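The lotto-card bound can be checked by brute force. A small Python sketch enumerating every possible preexisting hidden state shows the classical match probability never drops below 1/3, while the quantum prediction for polarizers 60 degrees apart is 1/4:

```python
from itertools import product
from math import cos, radians, isclose

def classical_match_prob(hidden):
    # hidden: the shared preexisting state of both cards, e.g. ('+', '-', '+').
    # Enumerate the 6 ways Alice and Bob can scratch *different* boxes.
    pairs = [(a, b) for a, b in product(range(3), repeat=2) if a != b]
    same = sum(1 for a, b in pairs if hidden[a] == hidden[b])
    return same / len(pairs)

# Every possible hidden state gives a match probability of at least 1/3.
probs = [classical_match_prob(h) for h in product('+-', repeat=3)]
assert min(probs) >= 1 / 3 - 1e-12

# Quantum prediction for settings 60 degrees apart: below the classical bound.
quantum = cos(radians(60)) ** 2
assert isclose(quantum, 0.25)
```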
{ "domain": "physics.stackexchange", "id": 18705, "tags": "quantum-mechanics, soft-question, bells-inequality, cellular-automaton" }
Filtering a phone number
Question: I am still learning Scala and refactoring some old code. I was hoping the community here could give me pointers to fix this code up, since there are some issues I have with it visually but don't know if there's an elegant solution for my vision. class PhoneNumber(phonenumber:String) { def number: String = { phonenumber.filter(_.isDigit) match { case x if x.startsWith("1") && x.length == 11 => x.substring(1) case x if x.length != 10 => "0000000000" case x => x } } def areaCode: String = number.take(3) override def toString: String = "(" + areaCode + ") " + number.substring(3,6) + "-" + number.takeRight(4) } I want to refactor my case statement as much as possible because right now I think it looks kind of strange. One thing I am particularly not fond of is the case x => x, but I am wondering what others think about my approach in general and what I might do to make this code better visually and functionally. Answer: One of the main changes you'll probably want to make is to change number and areaCode from functions to values. As your code is right now, every time you use number it has to recompute your top code block. The same goes for areaCode. At first glance I don't see a reasonable way to eliminate the conditional statements that you step through to calculate number. Because of that, there isn't a great way to utilize pattern matching. So IMO you may as well take advantage of the fact that conditional statements are expressions in Scala. Another small tweak I made in calculating number was to change "0000000000" to "0" * 10. It's not so hard to count out ten zeros, but if you needed a larger number of them you now know about this other method. I added class fields for the other subsections of a typical phone number, e.g. exchangeCode and stationNumber. Finally I utilized string interpolation for your toString method, as to my mind it makes the expression easier to mentally parse. 
class PhoneNumber(phoneNumber: String) { val number = { val tmpNumb = phoneNumber.filter(_.isDigit) if (tmpNumb.startsWith("1") && tmpNumb.length == 11) tmpNumb.drop(1) else if (tmpNumb.length != 10) "0" * 10 else tmpNumb } val areaCode = number.take(3) val exchangeCode = number.drop(3).take(3) val stationNumber = number.drop(6).take(4) override def toString = s"($areaCode) $exchangeCode-$stationNumber" }
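For readers less comfortable with Scala, the same normalization and formatting logic can be sketched in Python (the behaviour mirrors the class above, including the all-zeros fallback):

```python
def format_phone(raw):
    # Keep digits only, strip a leading US country code, fall back to zeros.
    digits = "".join(ch for ch in raw if ch.isdigit())
    if digits.startswith("1") and len(digits) == 11:
        digits = digits[1:]
    if len(digits) != 10:
        digits = "0" * 10
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

assert format_phone("1-541-754-3010") == "(541) 754-3010"
assert format_phone("754-3010") == "(000) 000-0000"
```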
{ "domain": "codereview.stackexchange", "id": 14223, "tags": "scala" }
Circular buffer wrap around with chorus effect
Question: I am developing an embedded DSP audio processor and am trying to implement a chorus effect. To my understanding, the chorus effect is multiple delayed versions of the original signal, where the delay itself is random. In my program I have set the delays to be randomly defined upon initialization and then remain static during the program execution. The challenge I'm faced with is that in using a circular buffer to store my past signal, I cannot simply wrap delayed signals back around the circular buffer as there are multiple delays present. With a single delay I can simply check whether the required delay exceeds the current position in the circular buffer; if so, I can jump to the end of the circular buffer, subtract the delay, and add my current position in the circular buffer. if (Delay > Current_Position_in_Buffer) { Circular_Array[Length_of_Array + Current_Position_in_Buffer - Delay]; } else { Circular_Array[Current_Position_in_Buffer - Delay]; } Is there an efficient way to check and adjust the position in my circular array of each delayed signal, without separately checking each delay with an if statement? Answer: If I understood correctly what you are asking, then this is probably more of a programming question... But the solution would be to use modular arithmetic to compute the position of the delay. With a length equal to a power of 2, this could be accomplished with an AND instruction. const int Mask = (1 << 16) - 1; Circular_Array[(Current_Position_in_Buffer - Delay) & Mask]; In this example, the length of your buffer is $2^{16}$. More generally you can use the modulo operator, but it will be slower.
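The masking trick relies on the buffer length being a power of two. A quick exhaustive check (sketched in Python for brevity) that the AND-based index agrees with the if/else wrap-around from the question:

```python
N = 1 << 4      # buffer length; must be a power of two for masking to work
MASK = N - 1

for pos in range(N):
    for delay in range(N):
        # The explicit wrap-around from the question:
        if delay > pos:
            expected = N + pos - delay
        else:
            expected = pos - delay
        # The branch-free masked version from the answer:
        assert (pos - delay) & MASK == expected
```

In C the same holds as long as the low bits of the two's-complement subtraction are what the mask keeps, which is the usual case on embedded targets.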
{ "domain": "dsp.stackexchange", "id": 6248, "tags": "audio, c" }
Confusion over plant's "magnetic field" being used to reduce wastage of pesticide
Question: As far as I am aware, a plant does not have a magnetic field of any significant magnitude. What I wonder is, is there any electromagnetic basis for the crop spraying technique outlined in the science section of a generally highly regarded newspaper and listed below. If the idea was not associated with a university, I would not be asking the question. Irish Times Crop Spraying Article: ...is a magnetic spraying technology that helps farmers grow more by using less. The system, which has been three years in development, gives better coverage than conventional crop spraying systems and also reduces spray drift by more than 80 per cent. [The company] is based at NovaUCD, (University College Dublin) the university's centre for new ventures. "The technology is based around attaching magnetic inserts onto a sprayer which sends an electromagnetic charge into the sprayed liquid," "All living plants and soil have a magnetic field so the magnetically charged liquid is attracted to its target. The benefits of our technology include increased profitability, increased productivity and better environmental performance." The company has worked closely with UCD to develop its system which has also been independently tested as far afield as Ethiopia, Kenya and the US. However, this study (a plant's magnetic field strength is less than a millionth of Earth's) seems, to me at least, to make the whole idea a non-starter. In an article in the Journal of Applied Physics, the UC Berkeley scientists describe... their ultimate failure to detect a magnetic field. They established, however, that the plant generated no magnetic field greater than a millionth the strength of the magnetic field surrounding us here on Earth. I appreciate that this may turn out to be a biology based question, but I can't immediately see, from a physics point of view, how this scheme could possibly work. 
Answer: Replace "magnetically charged" by "magically charged" - I would rather agree to that :) (and that's actually how I read it by mistake! ;)) Writing of magnetic charges (= monopoles) doesn't actually increase the plausibility. I think it's just a hoax (I mean, that the university is involved, not the whole business) - though this word would not be appropriate if something connected to this is really funded by government money. Actually, it reminds me of this question (or more directly, this website), about a perpetual motion machine. There I strongly suspect that the "company" might have got government subsidies because they produce green energy - a magic term in Europe. Maybe here, too, there is some save-the-environment money involved???
{ "domain": "physics.stackexchange", "id": 30391, "tags": "electromagnetism, biophysics" }
Do regular languages belong to Space(1)?
Question: I was wondering, if we take some regular language, will it be in Space(1)? For a regular language X, for instance, we can construct an equivalent NFA that matches strings in the regular language. But I cannot see why X is in Space(1). If it is true, why is X or any other regular language in Space(1)? Answer: A regular expression can be transformed into an NFA as you say. And an NFA can be transformed into a DFA. This latter transformation is exponential in the worst case (in terms of the size of the original NFA), but that is irrelevant. The amount of time this transformation takes is independent of the size of the input, and is thus $O(1)$. Similarly, the size of this DFA is also independent of the size of the input, so storing it takes $O(1)$ space. No further space is needed other than the DFA, and thus a recognizer for a regular expression can run in $O(1)$ space.
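The constant-space claim is concrete: a DFA recognizer stores only the current state, no matter how long the input is. A hypothetical Python sketch for the regular language (ab)*:

```python
# A DFA for the regular language (ab)* over {a, b}; 'dead' is a reject sink.
delta = {
    ('s0', 'a'): 's1', ('s0', 'b'): 'dead',
    ('s1', 'a'): 'dead', ('s1', 'b'): 's0',
    ('dead', 'a'): 'dead', ('dead', 'b'): 'dead',
}
accepting = {'s0'}

def accepts(word):
    state = 's0'          # O(1) space: only the current state is stored
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

assert accepts('abab') and accepts('')
assert not accepts('aba') and not accepts('ba')
```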
{ "domain": "cs.stackexchange", "id": 13781, "tags": "complexity-theory, turing-machines, space-complexity" }
How to deal with unbalanced data in pixelwise classification?
Question: I'm trying to train a fully convolutional network to generate a binary image showing roads from satellite photos. For a long time, I struggled to get the network to output anything but black - then I figured out the problem. I have plenty of training data, but of course most of the output pixels will be black, since roads take only a small percentage of space. I've created a toy example showing the problem: from keras.models import Sequential from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D import numpy as np import random import sklearn.model_selection THRESHOLD = 3 BALANCED = False if BALANCED: X = [[ random.randint(THRESHOLD + 1, 255) if random.randint(0, 1) else random.randint(0, THRESHOLD) for _ in range(10)] for _ in range(10**4)] Y = [[0] if x[0] <= THRESHOLD else [1] for x in X] else: X = [[random.randint(0, 255) for _ in range(10)] for _ in range(10**4)] Y = [[0] if x[0] <= THRESHOLD else [1] for x in X] X, Y = np.array(X), np.array(Y) X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split( X, Y, test_size = 0.2) model = Sequential() model.add(Dense(8, activation = "sigmoid", input_shape = (10, ))) model.add(Dense(1, activation = "sigmoid")) model.compile(loss = "binary_crossentropy", optimizer = "adam") model.summary() model.fit(X_train, Y_train, batch_size = 10, epochs = 10, verbose = 2, validation_data = (X_test, Y_test)) Y_res = model.predict(X_test) Ys0 = [y[0] for x, y, y_opt in zip(X_test, Y_res, Y_test) if y_opt[0] == 0] Ys1 = [y[0] for x, y, y_opt in zip(X_test, Y_res, Y_test) if y_opt[0] == 1] print(np.mean(Ys0)) print(np.mean(Ys1)) Here, I artificially create data, with X being ten random bytes, and Y being a boolean obtained by thresholding the first byte at the value three. That means normally only around 1/64 of all samples will be generated with y equal to 0, similarly to my original problem. 
There's also a second dataset, with the same decision boundary, but more evenly (class-wise) distributed data - selected by changing the BALANCED toggle in the code. The results confirm my suspicions. For the balanced set, actual 0's are given, on average, a value of 0.025; 1's, 0.97. That's reasonable. On the unbalanced, raw dataset, 0's have a mean of 0.87, and 1's, 0.98. That means the zeros are pretty much always classified as one anyway. On my real data, the effect is even worse, and the returned value is pretty much equal to the percentage of black, regardless of real classification. I tried adding class_weights = [100, 1] to the fit call, but it didn't change much. Is there a good strategy for dealing with such unbalanced data? In this toy example I could probably sample the x-y pairs (e.g. keep only 5% of 1-classified data), but I don't think this would work for my full problem - since I'm using FCN, I'm putting a whole image into the network at once. There are no images with 50% roads, so I cannot possibly sample images to make roads take enough space. I also tried, quite randomly, changing loss functions, including using one for categorical data and one-hot encoding expected outputs (i.e. [1, 0] and [0, 1] instead of 0 and 1). This did not fix anything either. Answer: Looks like the problem lay just in poorly chosen parameters. In the example given above, simply increasing the epoch count significantly improved the results. In my actual image processing problem, I had to decrease the learning rate, since with the default one, ReLU's were saturating very quickly, hindering further progress.
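On the class_weights idea mentioned in the question: rather than guessing values like [100, 1], a common choice is "balanced" weighting, where each class is weighted inversely to its frequency. A pure-Python sketch (the roughly 1/64 imbalance mirrors the toy example):

```python
from collections import Counter

# Toy labels: roughly 1/64 of samples are class 0, as in the question's setup.
labels = [0] * 100 + [1] * 6300

counts = Counter(labels)
n = len(labels)
# "Balanced" weighting: weight_c = n / (n_classes * count_c)
weights = {c: n / (len(counts) * counts[c]) for c in counts}

# The rare class gets a proportionally larger weight.
assert weights[0] > weights[1]
```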
{ "domain": "datascience.stackexchange", "id": 3169, "tags": "classification, keras" }
What does a supernova look like at its peak luminosity?
Question: I know that in some types of supernovae, the cause of the increased luminosity is the radioactive decay of certain elements ejected during the explosion, so a question came to my mind. If the ejected material carrying the isotopes that decay to give the electromagnetic radiation is expelled at a velocity of, say, 5% of the speed of light, and given the fact that some supernovae stay extremely luminous for more than 4 weeks, then by that time the radioactive isotopes will have traveled more than 30 billion kilometers from the exploded star. So, does that mean a supernova at 4 weeks can be expected to look like a star with a radius of 30 billion kilometers and luminosity of $10^8$-$10^9$ times the solar luminosity? Or am I getting the idea of radioactive decay as the source of the supernova luminosity wrong? Answer: Your math does check out: \begin{align} r&=vt \\ &=0.05\cdot2.9979\times10^{10}\frac{cm}s\cdot4\cdot604800\,s\\ &=3.63\times10^{15}\,cm\\ &=36.3\times10^9\,km\\ &=0.0012\,pc \end{align} When a supernova explodes, it enters the free-expansion phase; its position is linear in time ($r=vt$, as used above). It stays in this phase for a few hundred years (this depends heavily on the ambient density); assuming 200 years, then $$ r_{fe}=9.45\times10^{18}\,cm=3\,pc $$ After this point, the Supernova Remnant (technically speaking, SNe is the explosion while SNR is the result of the material after said explosion) continues expanding, though at a reduced rate (because it has swept up ambient material this entire time, building up a thick shell of thickness $w\sim0.1\,pc$) for many thousands of years. 
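The free-expansion arithmetic above can be checked numerically (constants approximate):

```python
c_cm_s = 2.9979e10   # speed of light, cm/s
pc_cm = 3.0857e18    # one parsec, in cm
week_s = 604800.0    # seconds per week

r_cm = 0.05 * c_cm_s * 4 * week_s   # r = vt at 5% of c after 4 weeks
r_km = r_cm / 1e5
r_pc = r_cm / pc_cm

assert abs(r_cm - 3.63e15) / 3.63e15 < 0.01   # ~3.63e15 cm
assert 30e9 < r_km < 40e9                     # "more than 30 billion km"
assert 1.0e-3 < r_pc < 1.3e-3                 # ~1.2e-3 pc
```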
SNe theory says a normal Type Ia produces about $0.5\,M_\odot$ of nickel-56, which then decays to an excited state of cobalt-56, which then emits an X-ray photon: \begin{align} \,^{56}{\rm Ni}+e^-&\to\,^{56}{\rm Co}^*+\nu_e \\ \,^{56}{\rm Co}^*&\to\,^{56}{\rm Co}+\gamma \end{align} The cobalt-56 (lifetime around 100 days) then decays to stable iron-56, a step that also emits some X-ray photons. Until SN 2014J, we had only observed the lines from the later cobalt-56 decay to iron-56, due to the fact that the lifetime of the above reaction is about 9 days and the ejecta are opaque to these lines in this same time-frame. SN 2014J provided $\gamma$-ray and X-ray emissions due to the cobalt-56, proving the theory correct. Note that the shell remains very thick during this whole time. Wikipedia provides an image of SN 1006 (exploded in the year 1006, so it's now 1008 years old) that shows the expansion of the shell. This shell is measured to be between 0.04 and 0.2 pc, which are roughly $1.2\cdot10^{12}$ km and $6.2\cdot10^{12}$ km thick, which is just shy of 1 lightyear. And after all this time, it is strong in radio, X-ray & $\gamma$-ray emissions (from this site).
{ "domain": "physics.stackexchange", "id": 15954, "tags": "electromagnetic-radiation, radiation, radioactivity, supernova" }
Processing form values for each day of week
Question: I am using PHP, and I am getting the value back from the form. Since I am working with time, I have a (1 x 4) x 7 array, and I feel like I could do a better job at it. I just don't know exactly how I should approach the problem. $schedule = array( round(abs(strtotime($_POST['mon'][1]) - strtotime($_POST['mon'][0])) / 3600, 2), round(abs(strtotime($_POST['mon'][3]) - strtotime($_POST['mon'][2])) / 3600, 2), round(abs(strtotime($_POST['tue'][1]) - strtotime($_POST['tue'][0])) / 3600, 2), round(abs(strtotime($_POST['tue'][3]) - strtotime($_POST['tue'][2])) / 3600, 2), round(abs(strtotime($_POST['wed'][1]) - strtotime($_POST['wed'][0])) / 3600, 2), round(abs(strtotime($_POST['wed'][3]) - strtotime($_POST['wed'][2])) / 3600, 2), round(abs(strtotime($_POST['thu'][1]) - strtotime($_POST['thu'][0])) / 3600, 2), round(abs(strtotime($_POST['thu'][3]) - strtotime($_POST['thu'][2])) / 3600, 2), round(abs(strtotime($_POST['fri'][1]) - strtotime($_POST['fri'][0])) / 3600, 2), round(abs(strtotime($_POST['fri'][3]) - strtotime($_POST['fri'][2])) / 3600, 2), round(abs(strtotime($_POST['sat'][1]) - strtotime($_POST['sat'][0])) / 3600, 2), round(abs(strtotime($_POST['sat'][3]) - strtotime($_POST['sat'][2])) / 3600, 2), round(abs(strtotime($_POST['sun'][1]) - strtotime($_POST['sun'][0])) / 3600, 2), round(abs(strtotime($_POST['sun'][3]) - strtotime($_POST['sun'][2])) / 3600, 2), ); Answer: Well, every time you see repeated code, you could think of a loop. If I am not mistaken, only the weekday changes, so it could be used to form the loop: $schedule = []; $weekdays = ['mon','tue','wed','thu','fri','sat','sun']; foreach ($weekdays as $day) { $schedule[] = round(abs(strtotime($_POST[$day][1]) - strtotime($_POST[$day][0])) / 3600, 2); $schedule[] = round(abs(strtotime($_POST[$day][3]) - strtotime($_POST[$day][2])) / 3600, 2); }
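The same loop idea can be sketched in Python, with a hypothetical post dict standing in for $_POST (each day maps to [start1, end1, start2, end2] time strings):

```python
from datetime import datetime

def hours_between(start, end, fmt="%H:%M"):
    # Same computation as the PHP: absolute difference in hours, rounded to 2.
    t0, t1 = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    return round(abs((t1 - t0).total_seconds()) / 3600, 2)

days = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
# Hypothetical form data: two shifts per day.
post = {day: ["09:00", "12:30", "13:00", "17:00"] for day in days}

schedule = []
for day in days:
    schedule.append(hours_between(post[day][0], post[day][1]))
    schedule.append(hours_between(post[day][2], post[day][3]))

assert schedule[0] == 3.5 and schedule[1] == 4.0
```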
{ "domain": "codereview.stackexchange", "id": 33201, "tags": "php, form" }
How to Implement Global Ros Variables
Question: I am working on a stationary pick-and-place type robot. I have created a large catkin workspace for a project that uses various packages. Currently, when I want to change certain variables in my project (like camera location), I need to open multiple packages and edit files individually. I want to streamline the project by creating a configuration file that holds global variables for the entire workspace (so they only need to be edited in one place). Is this possible to do in ROS? Any suggestions would be appreciated! Originally posted by fruitbot on ROS Answers with karma: 19 on 2020-10-13 Post score: 0 Original comments Comment by gvdhoorn on 2020-10-13: Currently, when I want to change certain variables in my project (like camera location), I need to open multiple packages and edit files individually are you not using TF? Edit: reason I ask this and don't directly address your question is I'm wondering whether this is an xy-problem. If you could give a few more examples of the sort of data you're looking to share, perhaps we can give better answers. Comment by fruitbot on 2020-10-13: Yes I am using TF. I realize I wasn't clear in my original question. For each variable I want to change, there is only ONE corresponding file I need to edit. When my physical setup changes, say I set up the robot in a new area, I need to edit quite a few variables, and therefore quite a few files. In addition to camera location, there are multiple bins that the arm sorts items into, whose position and size may change. Workspace area may change size. Most of the edits take place in my main urdf, but a couple are in other places. Perhaps I should edit my urdfs so all the configurable information is in my main urdf? The reason I was hoping to make some sort of global configuration file is because other people will be setting up my project and I want to make configuration as easy and straightforward as possible. 
This is the first time I've heard of the xy problem, what a handy idea! Comment by fruitbot on 2020-10-13: Edit: IP address for robotic arm is another variable, one that is not urdf related. Another is camera path location (eg. /dev/video0, /dev/video1) Answer: Given the types of data you mention, I would suggest using the ROS parameter server for this. The roslaunch rosparam tag in combination with .yaml files makes for a very convenient way to store and load settings of a ROS application. Example:

camera:
  device: /dev/video0
robot:
  ip_address: 1.2.3.4
workspace:
  area: [0.1, 0.3, 0.2, 0.4]
bins:
  - id: 1
    area: [0.1, 0.3, 0.2, 0.4]
  - id: 4
    ...

etc. Whether you load those parameters in the private namespace of nodes, or put them in a separate namespace and then have nodes load "their section" from that namespace would be a design decision only you can make (the former would reduce coupling though, so might perhaps be a better approach). One thing I'm still wondering about: if you have to edit multiple files "when things change", this may be an indication of a violation of separation of concerns (ie: too many nodes/classes/functions need to be aware of various details to be able to do their work). It may be good to investigate whether all those entities actually need all that information, or whether they could get it from somewhere else (as part of a service request coming in, fi). Originally posted by gvdhoorn with karma: 86574 on 2020-10-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by fruitbot on 2020-10-14: The ROS parameter server looks like the tool I need. I will continue to look into it. Your concern about "separation of concerns" may be valid. This project is my first experience with ROS, so I will keep an eye out for that as I clean things up. Thanks for your help!
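On the consuming side, each node would read its section with rospy.get_param (or the C++ equivalent). The slash-namespaced lookup the parameter server performs can be sketched with a plain nested dict — a hypothetical stand-in for illustration, not the rospy API itself:

```python
# Illustrative stand-in for parameter-server lookups (not rospy itself):
# the real call would be rospy.get_param("/camera/device", default), but
# the namespacing idea can be shown with a plain nested dict.

def get_param(tree, name, default=None):
    """Resolve a slash-separated name like 'camera/device' in a nested dict."""
    node = tree
    for part in name.strip("/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

config = {
    "camera": {"device": "/dev/video0"},
    "robot": {"ip_address": "1.2.3.4"},
}

print(get_param(config, "/camera/device"))    # /dev/video0
print(get_param(config, "robot/ip_address"))  # 1.2.3.4
print(get_param(config, "camera/fps", 30))    # 30 (falls back to default)
```

With rospy, loading the .yaml above via a rosparam tag and then calling rospy.get_param("/camera/device") would resolve the same way.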
{ "domain": "robotics.stackexchange", "id": 35626, "tags": "ros, ros-melodic, catkin, environment-variables" }
What would be a good loss function to penalize big differences and reward small ones, but not in a linear way?
Question: I have an image with the differences between 2 other images. Concentrations of black pixels mean similar regions between the images, whereas white values highlight differences. Thus I want a function to provide rewards when the pixels are the same (or relatively similar - a term to weight this "similarity threshold" would be nice) and penalize when the differences are bigger (penalizing more as the differences grow). A differentiable function is much appreciated. So in the context of machine learning and this loss function being a way to help train a generator, what kind of function do you recommend or can come up with? Remember, the idea is to reward similarities and penalize differences (such that "really different" equates to a bigger loss than "slightly off" or "different"). Thanks in advance to you all! Answer: Square loss (MSE or SSE) does this. Let $y_i$ be an actual value and $\hat{y}_i$ be its estimated value (prediction). $$SSE = \sum (y_i -\hat{y}_i)^2$$ $$MSE=\dfrac{SSE}{n}$$ Except for numerical issues of doing math on a computer, these are optimized at the same parameter values of your neural network. The squaring is critical. If a prediction is off by 1 unit, it incurs one unit of loss. If the prediction is off by 2, instead of incurring 2 units of loss, there are 4 units of loss—much worse than being off by 1. If the prediction is off by 3, wow—9 units of loss! (If you look at statistics literature or some Cross Validated posts, you may see $n-p$ in the denominator of MSE, where $p$ is the number of parameters in the regression equation. This does not change the optimal value but does have some advantages in linear regression, chiefly that it is an unbiased estimate of the error variance under common assumptions for linear regression that you are unlikely to make in a neural network problem.)
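The quadratic penalty described above is easy to see numerically; this short sketch (plain Python, no ML framework) computes SSE/MSE for predictions that are off by 1, 2, and 3 units:

```python
# Sketch of the answer's point: squared error punishes large residuals
# disproportionately, while being gentle on small ones.

def sse(actual, predicted):
    return sum((y - yhat) ** 2 for y, yhat in zip(actual, predicted))

def mse(actual, predicted):
    return sse(actual, predicted) / len(actual)

y = [0.0, 0.0, 0.0]
# Being off by 1, 2, and 3 units costs 1, 4, and 9 units of loss — not 1, 2, 3.
print(sse(y, [1.0, 0.0, 0.0]))  # 1.0
print(sse(y, [2.0, 0.0, 0.0]))  # 4.0
print(sse(y, [3.0, 0.0, 0.0]))  # 9.0
print(mse(y, [3.0, 0.0, 0.0]))  # 3.0
```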
{ "domain": "datascience.stackexchange", "id": 6907, "tags": "deep-learning, gan" }
Potential side effects of replacing read group tags in BAM file
Question: I have a set of BAM files where the read group tags have some (default?) values, i.e.: @RG ID:RG0 LB:LB0 PU:PU0 SM:SM0 This creates issues in my downstream analyses, where multiple BAM files with the same SM tag are used. Samtools provides a command to replace the read group tag. However, I am not sure if there are possible side-effects that I should be aware of, and I might need to remap the BAM file after this replacement. I want to replace ID and SM with the sample id, which is unique for each BAM file. Do I need to remap and/or run additional steps, or replacing the RG tags should be sufficient to update the BAM files in a consistent way? Answer: You do not need to remap the files, replacing the read group information with samtools is sufficient to deal with this. When you update your pipeline, have it use the sample name to construct the read group information. That way you won't run into this problem again.
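The string-level change being made to the header can be sketched in plain Python — a hypothetical illustration only; in practice the edit would be done with samtools itself (e.g. `samtools addreplacerg` or `samtools reheader`), and the hedged `set_read_group` helper below is not part of any real tool:

```python
# Hypothetical sketch of the @RG edit on SAM header text, for one sample.
# Real workflows would use samtools; this only shows what changes.

def set_read_group(header_line, sample_id):
    """Rewrite the ID and SM fields of an @RG header line."""
    fields = []
    for field in header_line.split("\t"):
        if field.startswith("ID:"):
            field = "ID:" + sample_id
        elif field.startswith("SM:"):
            field = "SM:" + sample_id
        fields.append(field)
    return "\t".join(fields)

rg = "@RG\tID:RG0\tLB:LB0\tPU:PU0\tSM:SM0"
print(set_read_group(rg, "sampleA"))  # ID and SM now carry the sample id
```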
{ "domain": "bioinformatics.stackexchange", "id": 2113, "tags": "sam, samtools" }
sensor_msgs::CompressedImage subscribe fails to build
Question: Hey I have the following subscriber: ImageConverter(ros::NodeHandle &n) : n_(n), it_(n_) { image_pub_ = it_.advertise("/output_img",1); cv::namedWindow(OPENCV_WINDOW); image_transport::TransportHints TH("compressed"); image_sub_compressed.subscribe(n,"/Logitech_webcam/image_raw/compressed",5,&ImageConverter::imageCallback,ros::VoidPtr(),TH); } And the callback function void imageCallback(const sensor_msgs::CompressedImageConstPtr& msg) When I compile this I get an error: from /home/johann/catkin_ws/src/uncompressimage/src/publisher_uncompressed_images.cpp:1: /usr/include/boost/function/function_template.hpp: In instantiation of ‘static void boost::detail::function::function_void_mem_invoker1<MemberPtr, R, T0>::invoke(boost::detail::function::function_buffer&, T0) [with MemberPtr = void (ImageConverter::*)(const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&); R = void; T0 = const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&]’: /usr/include/boost/function/function_template.hpp:934:38: required from ‘void boost::function1<R, T1>::assign_to(Functor) [with Functor = void (ImageConverter::*)(const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&); R = void; T0 = const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&]’ /usr/include/boost/function/function_template.hpp:722:7: required from ‘boost::function1<R, T1>::function1(Functor, typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type) [with Functor = void (ImageConverter::*)(const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&); R = void; T0 = const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&; typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type = int]’ /usr/include/boost/function/function_template.hpp:1069:16: required from 
‘boost::function<R(T0)>::function(Functor, typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type) [with Functor = void (ImageConverter::*)(const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&); R = void; T0 = const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&; typename boost::enable_if_c<boost::type_traits::ice_not<boost::is_integral<Functor>::value>::value, int>::type = int]’ /home/johann/catkin_ws/src/uncompressimage/src/publisher_uncompressed_images.cpp:27:126: required from here The red error statement was: /usr/include/boost/function/function_template.hpp:225:11: error: no match for call to ‘(boost::_mfi::mf1<void, ImageConverter, const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&>) (const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&)’ BOOST_FUNCTION_RETURN(boost::mem_fn(*f)(BOOST_FUNCTION_ARGS)); I am not using BOOST, and searching around hasn't helped me solve it, this is how I think it should be based on reading the github for the sensor_msgs::compressedImages, if another method exists that is better? Originally posted by jtim on ROS Answers with karma: 153 on 2016-07-15 Post score: 0 Answer: ended up recompiling opencv and ros from source, the error was removed. However the subscription didn't work. Therefore I subscribed to the raw image that I had republished in a launch file. Now everything works. Originally posted by jtim with karma: 153 on 2016-07-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25252, "tags": "ros, sensor-msgs, cv-bridge, compressedimage, compressed-image-transport" }
Convert "String" sqrt(#) into numerical value
Question: I had to come up with a way to convert sqrt(#) into its actual numeric value because before I would have an array containing elements such as: [sqrt(3), -sqrt(3), 1] If I tried to multiply, I would get the error: Argument "-sqrt(3)" isn't numeric in multiplication (*) Here is my work-around, but I think there's a better way. Any suggestions?

#!/usr/bin/perl
#perl Solver.pl
use strict;
use warnings;

my $roots = "Sqrt[3] ||-Sqrt[3] ||1";
my @rootList = split(/ \|\|/, $roots);  # fill array with string's values separated by " ||"

# Convert any Sqrt numbers to numerical value
# (ex.) Sqrt[3] -> sqrt(3)
for $_ ( @rootList ){
    $_ =~ s/Sqrt\[/sqrt(/g;           # Sqrt[ -> sqrt(
    $_ =~ s/\]/)/g;                   # ] -> )
    if( $_ =~ /-sqrt/ ){              # replace string of negative sqrt()
        my $temp = substr( $_, 6 );   # take "#)"
        $temp =~ s/\)//g;             # remove ")"
        $_ = -1*sqrt( $temp );
    }
    elsif( $_ =~ /sqrt/ ){            # replace positive sqrt()
        my $temp = substr( $_, 5 );
        $temp =~ s/\)//g;
        $_ = sqrt( $temp );
    }
}
print "[@rootList]\n";

Answer: I found the use of Sqrt[...] to be cumbersome, why have the case in that format, and why have the [] style braces? I would normally not recommend this, but a simple solution, if you trust, or can sanitize your inputs, is to just use the eval function.

for $_ ( @rootList ){
    $_ =~ s/Sqrt\[(.+)\]/sqrt($1)/;
    $_ = eval($_);
    $@ and die $@;
}

That does it..... all of it. Convert Sqrt[...] to sqrt(...), and eval the result.
{ "domain": "codereview.stackexchange", "id": 14047, "tags": "regex, perl" }
How do I write a valid CIF, PDB or XYZ file from the coordinates listed below?
Question: I want to create a 3D molecular file using coordinates taken from: Hrynchuk, RJ; Barton, RJ; Robertson, BE (1983). "The crystal structure of free base cocaine, C17H21NO4" (PDF). Canadian Journal of Chemistry 61 (3): 481–487. doi:10.1139/v83-085. Here are the coordinates:

        X        Y        Z
C(1)    0.5419   0.1631   0.7095
C(2)    0.4579   0.2047   0.5345
C(3)    0.3080   0.1713   0.5168
C(4)    0.2903   0.0286   0.5727
C(5)    0.3950   0.0004   0.7409
C(6)    0.3870   0.1080   0.8678
C(7)    0.4858   0.2192   0.8450
C(8)    0.1028   0.2512   0.3219
C(9)    0.0323   0.2713   0.1411
C(10)   0.0857   0.2213   0.0238
C(11)   0.0189   0.2435  -0.1419
C(12)  -0.0995   0.3166  -0.1875
C(13)  -0.1528   0.3671  -0.0719
C(14)  -0.0880   0.3462   0.0951
C(15)   0.5172   0.1331   0.4127
C(16)   0.7261   0.1009   0.3465
C(17)   0.6418  -0.0439   0.8543
N(1)    0.5324   0.0158   0.7182
O(1)    0.2257   0.1933   0.3470
O(2)    0.0560   0.2836   0.4306
O(3)    0.4616   0.0531   0.3097
O(4)    0.6481   0.1713   0.4393
H(1)    0.6210   0.1840   0.7230
H(2)    0.4640   0.2980   0.5170
H(3)    0.2820   0.2340   0.5830
H(4)    0.1930   0.0110   0.5730
H(5)    0.3050  -0.0240   0.4930
H(6)    0.3850  -0.0960   0.7770
H(7)    0.4090   0.0810   0.9760
H(8)    0.2870   0.1410   0.8610
H(9)    0.4440   0.3140   0.8150
H(10)   0.5560   0.2330   0.9400
H(11)   0.1580   0.1680   0.0530
H(12)   0.0640   0.2050  -0.2240
H(13)  -0.1470   0.3330  -0.3060
H(14)  -0.2320   0.4140  -0.103
H(15)  -0.1280   0.3810   0.186
H(16)   0.7330   0.0190   0.3810
H(17)   0.8130   0.1470   0.3710
H(18)   0.6990   0.1060   0.2370
H(19)   0.7250  -0.0340   0.836
H(20)   0.6470  -0.0070   0.961
H(21)   0.6230  -0.1410   0.861

Converting this to XYZ coordinates, namely to:

43
Cocaine 1983
C 0.5419 0.1631 0.7095
C 0.4579 0.2047 0.5345
C 0.3080 0.1713 0.5168
C 0.2903 0.0286 0.5727
C 0.3950 0.0004 0.7409
C 0.3870 0.1080 0.8678
C 0.4858 0.2192 0.8450
C 0.1028 0.2512 0.3219
C 0.0323 0.2713 0.1411
C 0.0857 0.2213 0.0238
C 0.0189 0.2435 -0.1419
C -0.0995 0.3166 -0.1875
C -0.1528 0.3671 -0.0719
C -0.0880 0.3462 0.0951
C 0.5172 0.1331 0.4127
C 0.7261 0.1009 0.3465
C 0.6418 -0.0439 0.8543
N 0.5324 0.0158 0.7182
O 0.2257 0.1933 0.3470
O 0.0560 0.2836 0.4306
O 0.4616 0.0531 0.3097
O 0.6481 0.1713 0.4393
H 0.6210 0.1840 0.7230
H 0.4640 0.2980 0.5170
H 0.2820 0.2340 0.5830
H 0.1930 0.0110 0.5730
H 0.3050 -0.0240 0.4930
H 0.3850 -0.0960 0.7770
H 0.4090 0.0810 0.9760
H 0.2870 0.1410 0.8610
H 0.4440 0.3140 0.8150
H 0.5560 0.2330 0.9400
H 0.1580 0.1680 0.0530
H 0.0640 0.2050 -0.2240
H -0.1470 0.3330 -0.3060
H -0.2320 0.4140 -0.103
H -0.1280 0.3810 0.186
H 0.7330 0.0190 0.3810
H 0.8130 0.1470 0.3710
H 0.6990 0.1060 0.2370
H 0.7250 -0.0340 0.836
H 0.6470 -0.0070 0.961
H 0.6230 -0.1410 0.861

Opening this with Accelrys gives this structure. https://i.stack.imgur.com/ijb19.png Answer: Generally speaking, babel from the Openbabel suite will convert chemical file formats. The openbabel wiki has a page with examples on input in free-form fractional coordinate format, and the abstract of the paper has the unit cell dimensions. Single CIF files can be requested from the Cambridge Crystallographic Data Centre.
{ "domain": "chemistry.stackexchange", "id": 2072, "tags": "crystal-structure, software, cheminformatics" }
How can I find the center character of a two-tape Turing Machine in n transitions?
Question: The problem I have been given is to find a two-tape ND Turing Machine over {a,b,c} that only accepts odd length strings with a c in the middle position. The problem with this is that the question specifies that it must be done in n+2 transitions. I can figure out how to do it in 1.5 * n transitions, by iterating over the tape and writing every other character to the second tape, then reversing at a space input and going back the same number of characters that are on the second tape. There are some other methods I thought of, but the lowest number of transitions I can get to is 1.5 * n, which is not low enough. This is homework so I am not looking for a full resolution or a walkthrough, but if anyone can point me in the right direction I would be very grateful, as this has been irritating me for half a day now. Answer: You explicitly try to find the middle position and then check whether it has a $c$ at that position. You can save some time by using the power of nondeterminism here. Choose any $c$ and check whether the number of symbols after that position matches the number before that position. When the computation makes a wrong choice it will not accept. If there is indeed a $c$ at the middle position there is a guess that will accept.
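As a sanity check of the acceptance condition (a Python sketch, not a Turing machine and not a transition-count argument): each index holding a c is one nondeterministic "guess", and the machine accepts iff some guess splits the string into equal halves.

```python
# One "guess" per position of a 'c'; the ND machine accepts iff at least
# one guess accepts. i == len(s) - 1 - i forces both an odd length and
# the middle position at once.

def accepts(s):
    return any(s[i] == "c" and i == len(s) - 1 - i for i in range(len(s)))

print(accepts("abcab"))  # True: 'c' at index 2, two symbols on each side
print(accepts("abcb"))   # False: even length
print(accepts("ababa"))  # False: no 'c' in the middle
```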
{ "domain": "cs.stackexchange", "id": 10193, "tags": "formal-languages, turing-machines" }
Cluster/Similarity problem with two datasets of different cardinality
Question: I want to cluster financial products according to their similarity. I have two datasets of different cardinality: One-to-One dataset: One ID has One attribute/feature per column - Describes a financial product One-to-Many: One ID has multiple attributes/features per column - What a financial product consists of and the corresponding weights within the financial product the One-to-One dataset looks like: the One-to-Many dataset looks like: How to deal with such datasets, when trying to cluster/label the IDs into similar groups? I am afraid that if I join the two datasets, the repetition of the One-to-One table records (e.g. Region 'Europe' will then be recorded 8 times for ID 'XADV') will interfere with the estimation of a model/cluster etc. Also I am unsure how to get the SUB_ID column in relation to the SUB_ID_WEIGHT, as on the one side it is a categorical problem as well as a regression problem. SUB_ID rerg is in ID XADV & ZZZSD but with massive weight differences, while e.g. SUB_ID AA1111 is not in ID XADV at all. Any idea in terms of dataset engineering or models/algorithms that can be used for such a use case? Any pointing much appreciated! Best Max Answer: There are different possible approaches. Without a closer look into your data it is hard to tell which one would be the best. In the following, I will list multiple approaches. Pivot Table This is the straightforward approach: Create one column for each SUB_ID value and fill the SUB_ID_WEIGHT in. You need to impute missing values. I assume 0 would be a good weight for SUB_IDs not listed for a given ID. Afterwards, you could merge both tables and get something like:

ID     REGION  STYLE  DURATION  dhce    rerg   dfbrt   vdfv  csefe  ...  AA1111  BB3333
XADV   Europe  Fast   Short     0.023   0.034  0.064   0.12  0.004  ...  0       0
ZZZSD  Europe  Slow   Short     0.0112  0.223  0.0123  0.5   0      ...  0.011   0.0254
....

This table could then be used to perform the clustering (you will still have to deal with the categorical columns).
PROS: This approach is easy to implement. CONS: Depending on the number of SUB_IDs, you might end up with MANY columns, which brings the curse of dimensionality, i.e. might not be well suited for clustering. In some cases, the distance between two rows (that is used for clustering algorithms) might not be the most meaningful, which leads to less meaningful clusters. But it needs domain expertise to decide about this. Descriptive Statistics You could extract for each ID a number of statistics over SUB_IDs and SUB_ID_WEIGHTs, e.g. number of listed SUB_IDs, min / max / mean SUB_ID_WEIGHT. Then augment the first table with these statistics. PROS: This approach is easy to implement. CONS: The clustering result depends strongly on your choice of statistics. Embedding In order to reduce the dimensionality, one can use an embedding, i.e. a representation with fewer dimensions that preserves relevant information. Prominent approaches are auto-encoders (by using the latent representation as the embedding), t-SNE, or UMAP. As input serves again the pivot table from above, which can be used to train the embedding and transform each row into a lower-dimensional space. PROS: One additional layer for which libraries exist. CONS: Needs enough training data. Builds on the distance mentioned above. Custom distance function Most clustering algorithms just need a distance function between pairs of instances. All we did above was to find a representation that comes with such a distance (e.g. the Euclidean). Instead you could define your own distance function between two IDs. Examples: Jaccard Distance: Each ID is represented by a set of SUB_IDs. The Jaccard Distance measures how similar these sets are. Downfall: this would ignore weights. Fuzzy Jaccard Distance: There are extensions that deal with fuzzy sets, i.e. include the weights. Unfortunately, I am no expert on these and I am not sure if there are good libraries for them. If you have some more domain knowledge, e.g.
that AA1111 is similar to BB3333, but not to rerg, you could also include this in the definition of the distance function. PROS: Can be designed to work exactly for your use case. CONS: Needs work to come up with a meaningful distance function. Multi-Instance-Clustering Multi-Instance-Learning is a field that deals with exactly your type of 1:n relation. Unfortunately, Multi-Instance-Learning concentrates on supervised classification, but there are some works for clustering as well. PROS: Most flexible approach. CONS: Not much work has been done yet; you might end up implementing a research paper.
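The pivot-table step can be sketched without any libraries (the sample rows below are invented for illustration; a real pipeline would more likely use pandas.pivot_table with fill_value=0):

```python
# Minimal dict-based pivot of the one-to-many table, imputing missing
# SUB_IDs with weight 0 as suggested above.

one_to_many = [  # (ID, SUB_ID, SUB_ID_WEIGHT) rows, made up for the example
    ("XADV", "dhce", 0.023), ("XADV", "rerg", 0.034),
    ("ZZZSD", "rerg", 0.223), ("ZZZSD", "AA1111", 0.011),
]

sub_ids = sorted({sub for _, sub, _ in one_to_many})
pivot = {}
for id_, sub, weight in one_to_many:
    row = pivot.setdefault(id_, {s: 0.0 for s in sub_ids})
    row[sub] = weight

print(pivot["XADV"])   # {'AA1111': 0.0, 'dhce': 0.023, 'rerg': 0.034}
print(pivot["ZZZSD"])  # {'AA1111': 0.011, 'dhce': 0.0, 'rerg': 0.223}
```

Each resulting row could then be merged with the one-to-one attributes for that ID before clustering.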
{ "domain": "datascience.stackexchange", "id": 11969, "tags": "classification, dataset, clustering" }
How can I find tight asymptotic bounds for this half-history recurrence relation?
Question: The recurrence relation $\forall n\in\mathbb{N}\cup\{0\}$ is $T(n)=\Theta(n^2)+2\sum_{i=1}^{\lfloor\frac{n}{2}\rfloor}{T(i)}$, with base case of $T(0)=0$. Fairly simple tree analysis shows that $T(n)\in\Omega(n^3)$ and $T(n)\in O(n^{\log_2{n}})$. But I want to find $f(n)$ for which $T(n)\in\Theta(f(n))$, or at least tighter lower and upper bounds. Answer: It is $n^{\Theta(\log n)}$, although I'm not sure exactly what the constant in the theta is. For the upper bound (the one you already have), note that, even without the $n^2$ term but with a base case of $1$ rather than $0$, this recurrence is dominated term-by-term by the recurrence $$U(n)=nU(\frac{n}{2}).$$ For the lower bound, note that it dominates the recurrence $$L(n)=\frac{n}{2}L(\frac{n}{4}).$$
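A quick numerical check (taking the Θ(n²) term to be exactly n², an arbitrary choice of constant) is consistent with the quasi-polynomial answer: the ratio log₂T(n)/(log₂n)² stays bounded, as expected for T(n) = n^{Θ(log n)}.

```python
# Memoized evaluation of T(n) = n^2 + 2*sum_{i=1}^{n//2} T(i), T(0) = 0,
# with the Theta(n^2) term taken as exactly n^2.
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n):
    if n == 0:
        return 0
    return n * n + 2 * sum(T(i) for i in range(1, n // 2 + 1))

for n in (16, 64, 256, 1024):
    # For T(n) = n^(Theta(log n)) this ratio stays bounded away from 0.
    print(n, math.log2(T(n)) / math.log2(n) ** 2)
```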
{ "domain": "cstheory.stackexchange", "id": 3516, "tags": "time-complexity, recursion, asymptotics" }
Can I label my QuantumCircuit qubits by something other than integers in Qiskit?
Question: I am learning Qiskit, and I have a specific application in mind where all of the qubits in my QuantumCircuit have natural labels (say, by letters of the alphabet). I can initialize this with the code

qc = QuantumCircuit(0)
for label in keys:
    qc.add_register(QuantumRegister(1, label))

This works and displays very nicely, but immediately gives me problems. For instance, something like qc.x(qubit="label") gives me the error CircuitError: "Invalid bit index: 'label' of type '<class 'str'>'" Is there any way around this? Can I choose the labels I desire for my QuantumCircuit, or must I stick with the default? Answer: You can use your method to initialize the qubits, but you need to remember their indices to apply gates. One way to do this is to create a dictionary mapping each label to its corresponding index and then use it when applying quantum gates to your circuit. The code would look something like this:

from qiskit import QuantumCircuit, QuantumRegister

labels = ['qub1', 'qub2', 'qub3', 'ancilla']
index = {k: v for v, k in enumerate(labels)}  # map the labels to their indices

qc = QuantumCircuit()
for label in labels:
    qc.add_register(QuantumRegister(1, label))

x_gates = ['qub1', 'qub3', 'ancilla']
for register in x_gates:
    qc.x(index[register])

qc.draw()

The circuit will then look like this: Be careful though, because not all register names are valid according to the OpenQASM specification!
{ "domain": "quantumcomputing.stackexchange", "id": 4578, "tags": "qiskit" }
Confusion over training accuracy vs. training loss
Question: I had a small discussion with my friends on overfitting and we became confused over the two terms: "training accuracy" and "training loss (or cost)". This is the first time I've heard the term training accuracy. So far, I have only calculated accuracy on the validation and test sets. My understanding is that training accuracy and training cost are just one thing, and more generally accuracy is only applied for classification problems. Is that correct? Answer: The concepts of "loss" and "accuracy" are NOT the same. The loss function is what you minimize during the training of your model. There are many types of loss functions. You usually choose the loss function depending on the "task" you are facing; for instance, binary classification uses binary cross-entropy as loss function, multi-class classification uses categorical cross-entropy, regression uses mean squared-error (MSE) or mean absolute error (MAE). Accuracy is a concept from classification (either binary classification or multiclass classification). It is defined as the percentage of correctly classified elements. Of course, during training, when the loss decreases, we expect that the accuracy increases. This, however, is not always the case. That's why it is important to monitor both loss function and accuracy in both the training and validation data, so that we understand how our model actually performs. Note that accuracy has its own problems representing the performance of a model. For instance, if a binary classification dataset has 95% positive elements and 5% negative elements, a classifier that ALWAYS classifies as positive will obtain a 95% accuracy. There are other measures that account for this, like the area under the ROC curve (AUC) or the F1.
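The answer's caveat about class imbalance can be made concrete with a small sketch (plain Python, with a made-up 95/5 dataset): a classifier that always predicts "positive" scores 95% accuracy, while the cross-entropy loss still registers how badly it handles the negatives.

```python
# Accuracy vs. binary cross-entropy on a 95% positive / 5% negative dataset.
import math

y_true = [1] * 95 + [0] * 5     # 95 positives, 5 negatives
y_prob = [0.9] * 100            # "always positive", 90% confidence

# Accuracy: fraction of thresholded predictions matching the label
accuracy = sum((p >= 0.5) == bool(y) for y, p in zip(y_true, y_prob)) / len(y_true)

# Binary cross-entropy: the loss actually minimized during training
bce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
           for y, p in zip(y_true, y_prob)) / len(y_true)

print(accuracy)  # 0.95
print(bce)       # about 0.215 — dominated by the 5 confidently wrong negatives
```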
{ "domain": "datascience.stackexchange", "id": 12034, "tags": "training, terminology" }
Recommendations for multi-camera fusion for 360 deg image?
Question: Looking for recommendations for ROS nodes or C++ libraries that can take multiple independent camera image streams and combine them into a single 360 deg image stream. I’m looking for something that handles stitching the images together rather than just overlaying them based on the URDF file. Any recommendations? Originally posted by msmcconnell on ROS Answers with karma: 268 on 2020-11-05 Post score: 3 Original comments Comment by peci1 on 2020-11-09: Is the calibration between the cameras known and static? Or do you need to also do some feature matching and pose optimization before the stitching? Comment by peci1 on 2020-11-09: You can also give a try to panotools, but I've never used it for anything else than stitching panoramas from my outdoor trips... However, it's opensource, high quality, and it should have all the features you might need. Comment by msmcconnell on 2020-11-09: camera positions are static relative to each other but the system is on a moving base. Answer: For the case of static and known camera calibration, we've created a package called virtual_camera that stitches a spherical panorama from a configurable number of input cameras (we use 5 horizontal cams and 1 ceiling-looking). It hasn't been properly published anywhere, but you can access its docs and code from http://cmp.felk.cvut.cz/nifti/sw/virtual_camera/ . The code in there is a little old, I think it was last tested on Groovy or so, but it shouldn't be that difficult to get it running on modern ROS1 distros. We have an unpublished version for Melodic - if you were interested, PM me on ROS discourse (peci1). Example input (all cameras stitched next to each other for the visualization, but normally it's 6 distinct images): Example output: Originally posted by peci1 with karma: 1366 on 2020-11-09 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by msmcconnell on 2020-11-09: Thanks @peci1 ! This is exactly the kind of thing I am looking for. 
I will message you on discourse and I'll probably mark this as the answer, but I'll give some more time for others to respond if there is additional tooling out there. Comment by nyxrobotics on 2021-09-13: I came here looking for the same thing. Can I get the source code? Comment by peci1 on 2021-10-05: We're now in the process of rationalizing our codebase and open-sourcing as much as possible. The virtual camera package should be released in a month or two. In the meantime, have a look at https://github.com/tu-darmstadt-ros-pkg/image_projection (maybe we'll merge our implementation with this one if we find the two close enough). Comment by chwbo on 2022-04-06: Hi @peci1. I'm also facing the same thing. Have you released the virtual camera package? if so, can you share it with me? Comment by peci1 on 2022-04-06: I know I've already said several times it's around the corner, but a lot of things are happening now and this is not at the very top of our work list. But it is still there! I'll post here once the release is done! Comment by Pran-Seven on 2023-02-07: Hey, is there any update to this? Comment by peci1 on 2023-03-05: Time is scarcer than I though. This is still close to the top of my list, but never made it to the very top. But I still want to make the release! Comment by KL_Newt on 2023-04-02: Hi @peci1 will appreciate if you share when it's released! I'm new to all these and doing them as a hobby, but what started from an idle search about how 360 cameras work turned into a fascination on how complex all these stuff are from taking pictures to stitching them to turning them into 360 pictures (still not sure if 360 refers to spherical or not). Grappling with a lot of new concepts to me from the lens all the way to the software, so this will be very interesting to me as well. Thanks in advance! Comment by GYL_99 on 2023-08-02: @peci1 it is amazing! Where can get the source code? I will appreciate if you share when it's released!
{ "domain": "robotics.stackexchange", "id": 35717, "tags": "ros-kinetic, camera" }
Thin-wedge Interference Problems (Classical Waves Problem)
Question: I would like to solve the problem on the following image: My question is: Why is the answer to (a) a minimum? When the light wave hits the top surface of the top glass, a wave will be reflected with a phase change of pi. The non-reflected part goes to the bottom surface of the top glass to be reflected again with no phase change. The non-reflected part still goes to the top of the bottom glass and is reflected with phase change pi. We don't care about other reflections as the amplitude diminishes too much. Now we have three waves that are reflected towards our eye. Waves 2 and 3 automatically cancel each other. There is only wave 1 left. So, this is not perfectly destructive. How can they call this a minimum? Answer: For a start, and in the context of question (a), you can simplify the situation by thinking of it as two glass slabs parallel to each other with an air interface of negligible thickness between them. I'll consider only what happens in the first slab as, by symmetry, what happens in the second one is the same. Now, first: All the light you want to consider is the light that goes into the slabs. In effect, each time light meets an interface, part of it is transmitted and part of it is reflected. So think about the part of the light that is first transmitted through the top interface of the top glass (which is the great majority of it). Then, when it goes to the bottom*, part of it is transmitted and the rest is reflected back into the slab. This happens an infinite number of times (every time with a decreased intensity) on both interfaces so that in the end, the light that comes back out from the slabs to your eyes is the majority of the light. Therefore, you can neglect your first bullet point. (I recommend looking up the maths in your textbook, it's only a sum of a geometric progression if my memory doesn't betray me). Second: Magic happens at the interface of both of the slabs. Go back to the * in the previous paragraph.
At that time, two reflections occur: one from the bottom of the top slab (with no phase shift as it is a glass/air interface) and one from the top of the bottom slab (with a phase shift as it is an air/glass interface). As we consider the air thickness to be negligible, we obtain two waves with a pi phase shift that destructively interfere. All you have to do is prove that the intensities of both of these are comparable (with the aforementioned sums) so that you can detect a minimum. Hope that helps. EDIT: This image (taken from the book Introduction to Optics by Pedrotti) illustrates what I mention in the first paragraph of my explanation: after multiple reflections and transmissions, the intensities of the rays add up. With this, you can figure out the sum I mentioned by yourself. The r and t respectively stand for the reflection and transmission coefficients. Be careful with the t though, as it is also used for the thickness of the slab (on the right side of the picture).
{ "domain": "physics.stackexchange", "id": 16855, "tags": "homework-and-exercises, waves, interference" }
Motion of tennis ball bouncing
Question: I'm currently stuck on the last part of a question. Here is the diagram for the question: The last part of the question asks to calculate the direction of the ball at B. Assumptions are there is no air resistance and the ball bouncing does not affect the horizontal velocity of the ball. You are also given that the ball takes 0.6 seconds to travel the 24 m and the height of B is 0.75 m. I know to solve the problem I need the horizontal and vertical components of the velocity at B and I can use trig to find the angle. The horizontal velocity I have correctly worked out to be 40 m/s in a previous part of the question. For the vertical velocity, again in a previous part of the question, I calculated the vertical velocity at A (correctly) to be 4.02 m/s. I decided to calculate the impact velocity of the tennis ball as a starting point: \begin{align} v^2 &= u^2+2as \\ v &= \sqrt{4.02^2+2\times 9.8\times 2.8} \\ &= 8.43\ \mathrm{m/s} \, . \end{align} My assumption is that this is the initial vertical velocity as the tennis ball rises up off the ground. Working on that assumption: \begin{align} v &= u + at \\ &= 8.43-9.8\times 0.15 \\ &= 6.96\ \mathrm{m/s} \end{align} Using $\tan\theta = 6.96/40$ gives $\theta = 9.87^\circ$. The answer is supposed to be 6.09, and I am fairly certain the 8.43 is correct, so where have I made a mistake? Answer: Without doing the math, I think your problem lies in your assumption that the ball's vertical speed immediately before and after the bounce are the same. The diagram suggests this isn't the case since the trajectory after the bounce is not a mirror image of what it was before the bounce. As far as I can tell you haven't used the given piece of info that the height of the ball at B is 0.75 m. Try using this to find the ball's vertical speed right after the bounce.
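Following the hint with the question's own numbers (0.75 m height at B, reached 0.15 s after the bounce per the asker's working, g = 9.8 m/s², horizontal speed 40 m/s) reproduces the expected answer — a quick numerical sketch:

```python
# Numeric check of the hint: solve h = u*t - g*t^2/2 for the post-bounce
# vertical launch speed u, then get the direction at B.
import math

g, t, h, vx = 9.8, 0.15, 0.75, 40.0

u = (h + 0.5 * g * t**2) / t        # vertical speed just after the bounce
vy = u - g * t                      # vertical speed on reaching B
theta = math.degrees(math.atan(vy / vx))

print(round(u, 3), round(vy, 3), round(theta, 2))  # 5.735 4.265 6.09
```

So the post-bounce vertical speed is about 5.74 m/s rather than 8.43 m/s, which yields the expected 6.09 degrees.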
{ "domain": "physics.stackexchange", "id": 36820, "tags": "projectile" }
How do we get rid of dangerous explosive acetone peroxide and nitroglycerine?
Question: Imagine the police (or army, or whoever's job it is) found out that some maniac placed a lot of primary explosives under a building. They have evacuated the building and deactivated the trigger. What now? You have a barrel full of acetone peroxide (a terrorist's favourite explosive). You move it - it might blow. Even if it doesn't - can you react acetone peroxide slowly, to avoid an explosion? What do factories do with this compound when it is formed as side waste? Could you answer the same for nitroglycerine? Answer: I don't know if it's practical to dissolve or otherwise stabilize acetone peroxide, but I could see absorbing nitroglycerine with sawdust or trapping it in gelatine, rendering it more stable, which is how dynamite is made. Certainly, if it can be done safely, detonating in place is the preferred solution for dealing with unstable explosives. It's done frequently with unexploded munitions if they're deemed too dangerous to move, e.g. the WWII bomb in Munich. I also visited a potash mine once. Old dynamite tends to "sweat" nitroglycerine, but the explosives lockers are designed such that the contents can be detonated in place without causing damage if this happens, as it's safer than attempting to remove them.
{ "domain": "chemistry.stackexchange", "id": 3826, "tags": "safety, cleaning, explosives" }
Error post updating and installing nightly packages. locale::facet::_S_create_c_locale name not valid
Question: Hi, I updated my sim to try the damping tutorial for Sandia Hands, and DRC Sim won't start thereafter. The error keeps looping as it tries to start and spawn the worlds and models. The log can be seen below: process[rosout-1]: started with pid [9927] terminate called after throwing an instance of 'std::runtime_error' what(): locale::facet::_S_create_c_locale name not valid [rosout-1] process has died [pid 9927, exit code -6, cmd /opt/ros/fuerte/share/rosout/bin/rosout __name:=rosout __log:=/home/sudip/.ros/log/49d503dc-c584-11e2-ae97-4c809313301c/rosout-1.log]. log file: /home/sudip/.ros/log/49d503dc-c584-11e2-ae97-4c809313301c/rosout-1*.log respawning... [rosout-1] restarting process Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in ignored process[rosout-1]: started with pid [9929] terminate called after throwing an instance of 'std::runtime_error' what(): locale::facet::_S_create_c_locale name not valid [rosout-1] process has died [pid 9929, exit code -6, cmd /opt/ros/fuerte/share/rosout/bin/rosout __name:=rosout __log:=/home/sudip/.ros/log/49d503dc-c584-11e2-ae97-4c809313301c/rosout-1.log]. log file: /home/sudip/.ros/log/49d503dc-c584-11e2-ae97-4c809313301c/rosout-1*.log respawning... [rosout-1] restarting process Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in ignored Originally posted by Sudip_Gupta on Gazebo Answers with karma: 21 on 2013-05-27 Post score: 0 Answer: Looks like a problem with your locale setting. Originally posted by gerkey with karma: 1414 on 2013-05-27 This answer was ACCEPTED on the original site Post score: 0
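The accepted answer doesn't spell out the fix. A common remedy (my own suggestion, not stated in the thread) is to export a locale that actually exists on the system before launching:

```shell
# "locale::facet::_S_create_c_locale name not valid" is thrown when the
# LANG/LC_* environment variables name a locale that was never generated
# on the machine; "locale -a" lists the ones that exist.
# Exporting a known-valid locale (en_US.UTF-8 is assumed here) before
# running roslaunch typically clears the error:
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
# If en_US.UTF-8 is missing from "locale -a", generate it first:
#   sudo locale-gen en_US.UTF-8
```

The exports only affect the current shell, so add them to ~/.bashrc if the error returns in new terminals.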
{ "domain": "robotics.stackexchange", "id": 3319, "tags": "gazebo" }
Why are $2$ photons necessary for conservation of four-momentum?
Question: Source Since the $\pi^0$ is composed of quark-antiquark pairs, it can decay electromagnetically into photons. It requires two photons to conserve momentum. OK, but why 2 and not 1? Is it possible, with the 2 photons, for the momentum not to be conserved? Physically, it's impossible. Why is it physically impossible? Mathematically, any pair of photons whose sum of four-momenta does not preserve the initial four-momentum satisfies my condition. For example, in the center of mass of the particle that decays, the particle velocity is zero, so the 3-momentum is zero and the 4-momentum is p = (Mc, 0, 0, 0), with M the mass of the particle. If it then decays into two particles, we can say that the two new momenta are, for example, (m1c, 0, m1 * 5, 0) and (m2c, m2 * 2, 0, 0), so the first particle of mass m1 has speed 5 on the y-axis and the second particle of mass m2 has speed 2 on the x-axis. The sum of these two vectors is ((m1 + m2)c, m2 * 2, m1 * 5, 0), which, as you can see, is very different from our initial p vector. Answer: Go to the rest frame of the pion. In this frame, the total momentum is $0$ and it has energy $E=Mc^2$. If it decays simply to 1 photon, by energy conservation this photon would need to have energy $E$ as well. A photon with energy $E$ has momentum $p=E/c=Mc$ in the direction that it travels. As you can see, there's no way that this photon has $0$ momentum and so there's no way for momentum conservation to happen. Now, if you have 2 photons, then each photon can take half the energy $E/2$ and move in opposite directions, each having momentum with magnitude $E/2c=Mc/2$. Since the two photons are moving in opposite directions their momentum will cancel thereby conserving the total momentum of $0$ that the pion started off with.
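The kinematics in the answer can be checked in a few lines; a minimal sketch of my own, in units where $c = 1$ with the pion mass normalized to 1 for illustration:

```python
M = 1.0   # pion rest mass in units where c = 1 (value is illustrative)
E = M     # energy available in the pion rest frame, E = Mc^2

# One photon: it must carry all the energy, and a photon's momentum
# magnitude equals its energy (|p| = E/c), so it cannot match the
# zero total momentum the pion started with.
p_one_photon = E
assert p_one_photon != 0.0

# Two photons: each takes E/2 and they fly back-to-back, so the
# momenta cancel while the energies still add up to E.
p1, p2 = +E / 2.0, -E / 2.0
assert p1 + p2 == 0.0              # momentum conserved
assert abs(p1) + abs(p2) == E      # energy conserved
```

The same cancellation works for any back-to-back split, but only the equal split also conserves energy in the rest frame.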
{ "domain": "physics.stackexchange", "id": 51896, "tags": "photons, momentum, conservation-laws" }
Spearman correlation between two genes
Question: I have FPKM data for three genes like below: Samples gene1 gene2 gene3 TCGA-2Y-A9GS-01A 0.9 7.4 1.0 TCGA-2Y-A9GT-01A 0.8 1.0 0.3 TCGA-2Y-A9GU-01A 0.6 2.0 0.2 TCGA-2Y-A9GV-01A 1.2 0.5 0.1 TCGA-2Y-A9GW-01A 3.8 2.1 0.4 TCGA-2Y-A9GX-01A 2.3 2.0 1.5 I used cor.test cor.test(~ gene1 + gene2, data = df2, method="spearman", continuity=FALSE, conf.level=0.95) Spearman's rank correlation rho data: gene1 and gene2 S = 5686100, p-value = 6.083e-09 alternative hypothesis: true rho is not equal to 0 sample estimates: rho 0.2984045 I have a warning message which I didn't see before. Warning message: In cor.test.default(x = c(0.9, 0.8, 0.6, 1.2, 3.8, 2.3, 3.8, 0.4, : Cannot compute exact p-value with ties Do I need to care about this warning message? Is it good using FPKM data for correlation? For plotting I used ggscatter. ggscatter(data, x = "gene1", y = "gene2", add = "reg.line", conf.int = TRUE, cor.coef = TRUE, cor.method = "spearman", xlab = "gene1", ylab = "gene2") Is this fine or I need to use any log for the scatter plot? Answer: Since Spearman is a rank-based test, it relies on you being able to accurately decide on the ranking of your observations by some metric (usually the magnitude of the numbers). If two observations have identical values (are tied), then they cannot be definitively ranked. Since the ranks are not unique, exact p-values cannot be determined. e.g. in your data, for gene 2: gene2 rank TCGA-2Y-A9GS-01A 7.4 1 TCGA-2Y-A9GT-01A 1.0 5 TCGA-2Y-A9GU-01A 2.0 3= TCGA-2Y-A9GV-01A 0.5 6 TCGA-2Y-A9GW-01A 2.1 2 TCGA-2Y-A9GX-01A 2.0 3= By way of example, it's possible to do the test with either TCGA-2Y-A9GU-01A or TCGA-2Y-A9GX-01A ranked number 3 - meaning a definitive p-value cannot be determined. If there are only a few ties, you probably don't need to worry too much. If your FPKMs (or better, TPMs) were calculated with greater precision, then ties would be less likely. Is there a reason you only have these numbers to 1 decimal place? 
Log or linear scale doesn't make a huge difference (and neither is 'wrong'). If you choose to plot log counts, you need to consider what happens if you have zeroes in your data.
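The tied "3=" ranks in that table follow the average-rank convention that Spearman's test uses; a small sketch of my own (pure Python, gene2 values taken from the question) shows the two 2.0 entries sharing rank 3.5 rather than a definite 3:

```python
def average_ranks(values):
    """Rank in descending order; tied values share the average of the
    1-based positions they jointly occupy (Spearman's convention)."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        shared = (i + j) / 2 + 1       # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = shared
        i = j + 1
    return ranks

gene2 = [7.4, 1.0, 2.0, 0.5, 2.1, 2.0]       # values from the question
assert average_ranks(gene2) == [1.0, 5.0, 3.5, 6.0, 2.0, 3.5]
```

With unique ranks the exact permutation-based p-value is well defined; with shared ranks like 3.5 it is not, which is exactly what the warning says.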
{ "domain": "bioinformatics.stackexchange", "id": 723, "tags": "rna-seq, fpkm, correlation" }
Length contraction inside a wire carrying current
Question: I'm trying to learn the connection between special relativity and magnetism. I know that if I place a positive charge, at rest, next to a wire carrying current, I should not observe any force on it, because there is no electric field and there is no magnetic force as my charge is at rest. But here is what confuses me - the wire contains moving electrons and, according to what I learned, the stationary charge should observe a length contraction of those electrons, so their density will increase and a negative electric field should be observed. This is definitely not the case, and I wonder if someone can explain to me what is wrong in my analysis. Thanks! Answer: Consider a wire with no current flowing through it, and equal densities of negative charges (electrons) and positive charges (protons). Give the electrons some velocity (such that, in their rest frame, they have constant separation): then you will get a current due to the moving charge, and you will get a negative charge density, due to length contraction. This reasoning is correct. The wire now carries a current and has a negative charge. Consider a wire with no current flowing through it, and a lower density of electrons than protons. This wire has an overall positive charge density. If you give the electrons a correctly chosen velocity (again, such that they see no change in separation), then you will get a current due to the moving charge, and the negative charge density will rise (due to length contraction) to cancel out the positive charge. This reasoning is also correct, and this time you have a wire with a current but no overall charge. You see that the current through a wire doesn't determine the charge on the wire: you also need to know the charge on the wire when it has no current. By adding or removing electrons from the wire, you can get any combination of current and charge (in principle; real matter will probably disintegrate at some point).
In your scenario, the wire is constructed as in my second example, so that it carries a current and has no charge in the frame of the test charge. The test charge thus feels no force since there is no electric field. Note that real wires are more complicated than this. Consider a loop of wire with no current and no charge. Apply some force (visiting bar magnet?) to get all the electrons circulating through it, like the first construction. In this scenario, the electrons can't see a constant separation in their rest frames, since that would imply contraction of the negative charge in the rest frame of the wire and the separation of charge into positive and negative zones. The density of electrons in the wire's rest frame must remain constant, so the electrons in their own frames see the other electrons pulled away from them. The end result, once the force is removed and we have circulating electrons at equilibrium, is that each small segment of the wire "microscopically" looks like it was made by the second construction, as the overall charge must remain zero. If this sounds weird, it probably should. Consider the Ehrenfest paradox and Bell's spaceship paradox to further understand how this works. TL;DR: The charge on a wire is given by the relationship between electron density and proton density. The velocity of the electrons gives us the current and the relationship between electron density relative to the test charge's frame versus relative to the electrons' frame. The current does not give us the relationship between electron density and proton density: that remains a free parameter which we can adjust to get whatever overall charge we want. Specifically, the charge is controlled by whatever the wire is connected to at its ends, since that is what will be supplying/removing electrons.
{ "domain": "physics.stackexchange", "id": 73690, "tags": "electromagnetism, special-relativity" }
Iterating 2d array
Question: I have implemented a MatrixIt class to traverse a 2D array. Is it possible to make the code clearer and shorter? public class MatrixIt implements Iterator<Integer> { private final int[][] data; private int row = 0; private int column = 0; public MatrixIt(int[][] data) { this.data = data; } @Override public boolean hasNext() { while (row < data.length && data[row].length == column) { column = 0; row++; } return row < data.length; } @Override public Integer next() { if (!hasNext()) { throw new NoSuchElementException(); } return data[row][column++]; } } Answer: Interestingly, the thing that most facilitates a simple row-wise iterator is to arrange the storage in the same format that we'll iterate. Instead of having an array of references to per-row arrays, representing a matrix by a single linear array is a good choice: faster to create, and with better memory locality: public class Matrix { private int[] data; int width; int height; int get(int x, int y) { return data[y * width + x]; } void set(int x, int y, int value) { data[y * width + x] = value; } } Then the iterator can be a simple (linear) iterator over the data.
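The "simple (linear) iterator" the answer ends with is worth spelling out; here is a sketch of the same flat-storage idea, shown in Python rather than the reviewer's Java for brevity:

```python
class Matrix:
    """Row-major matrix backed by a single flat list, as suggested above."""

    def __init__(self, width, height, data):
        assert len(data) == width * height
        self.width, self.height, self.data = width, height, data

    def get(self, x, y):
        return self.data[y * self.width + x]

    def set(self, x, y, value):
        self.data[y * self.width + x] = value

    def __iter__(self):
        # Row-wise traversal of the matrix is plain iteration over the
        # backing store -- no row/column bookkeeping needed.
        return iter(self.data)


m = Matrix(3, 2, [1, 2, 3, 4, 5, 6])
assert list(m) == [1, 2, 3, 4, 5, 6]   # rows (1,2,3) then (4,5,6)
assert m.get(2, 1) == 6                # column 2 of row 1
```

All the hasNext/row-skipping logic of the original iterator disappears because the storage order already matches the iteration order; the same restructuring applies directly in Java.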
{ "domain": "codereview.stackexchange", "id": 41744, "tags": "java, matrix, iterator" }
What's happening with my Kinect? [img included]
Question: image:alt text http://dl.dropbox.com/u/12446150/wrong_kinect.png I just downloaded openni_kinect and try to get depth image. But it looks weird. Is this right? Cheers. Originally posted by enddl22 on ROS Answers with karma: 177 on 2011-05-19 Post score: 0 Answer: Rviz seems to interpret the 16 (or 14?) bit depth as 8 bit gray value. Therefore you have an "overflow" at each 256 depth-steps, causing the value to cycle between black and white several times. I wonder why it's not getting really white, but that could either be the alpha of 0.5 in rviz or a confusion between signed and unsigned char. Does the pointcloud look normal? Originally posted by Felix Endres with karma: 6468 on 2011-05-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Bradley Powers on 2011-05-19: Bingo! I ran into this as well. I didn't really want to mess with fixing rviz, so I just used point clouds.
{ "domain": "robotics.stackexchange", "id": 5605, "tags": "openni-kinect" }
Turtlebot2 does not move in Gazebo
Question: Hi all! So according to this tutorial: http://wiki.ros.org/turtlebot_simulator/Tutorials/hydro/Explore%20the%20Gazebo%20world All I need to do to tele-op the turtlebot2 in gazebo is: roslaunch turtlebot_gazebo turtlebot_empty_world.launch roslaunch turtlebot_teleop keyboard_teleop.launch However when I do this, the turtlebot does not move. I have also tried running: roslaunch turtlebot_bringup minimal.launch and creating a python node to move it, all to no avail. What am I missing? Thank you to bit-pirate for catching my missing specs: Ubuntu 12.04, ROS Hydro, i used sudo apt-get install ros-hydro-turtlebot* to get the packages. EDIT: My Turtlebot2 simulation IS MOVING, however it is moving so slow that you can't even see it move unless you leave Gazebo running for a while and come back to it. How can I make Gazebo run faster? Originally posted by Robocop87 on ROS Answers with karma: 255 on 2013-11-09 Post score: 1 Original comments Comment by RB on 2013-11-09: I am thinking of using turtlebot on Gazebo. As I am new to Gazebo; initially I thought that for turtlebot 2D navigation stack is already set up and we can control everything from rviz. Does we need to write code for moving the turtlebot? Do you know any robot in Gazebo-Ros for which 2D navigation stack is already tuned properly? Comment by Robocop87 on 2013-11-09: Hi Brian, the navigation stack does already have a lot of utilities for using the turtlebot, the only one I know of that uses rviz though is the goal waypoints you can set while using amcl. I am writing code to move it simply because my application requires it. Comment by RB on 2013-11-09: Hi, my requirement is also similar to you, but right now I need a robot upon which 2D navigation stack properly set up. I am planning to incorporate my own code inside amcl. As you have said for TurtleBot, we can use 2D nav stack; does this robot can be properly launched in Gazeb? Does turtleBot can avail all the functionality of 2D nav stack.? 
Thanks 4 ur reply. Comment by bit-pirate on 2013-11-10: @Robocop87 The instructions on the wiki work out of the box for me - no need to create nor run extra nodes. Please specific more details about your setup (what and how did you install the turtlebot packages, what versions are you using (Ubuntu, ROS, turtlebot packages)). Comment by Robocop87 on 2013-11-10: Added the specs, thanks for the catch. Still doesn't work for me though =/ Answer: First of all you should not use turtlebot_bringup minimal.launch. That launcher is for running the real robot. The Gazebo plugins for TurtleBot are basically reproducing the same functionalities (not everything is implemented though). What does Gazebo's real time factor say? Maybe the simulation is running really slowly on your machine (my "Real Time Factor" is at 1.0). Note that turtlebot_teleop's keyboard_teleop requires you to keep the buttons pressed. Please also try roslaunch kobuki_keyop keyop.launch. Originally posted by bit-pirate with karma: 2062 on 2013-11-10 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Robocop87 on 2013-11-10: Thanks for your answer bit-pirate, I have since stopped attempting to run turtlebot_bringup as I realized it was causing nodes to crash in Gazebo. My real time factor says 1.0 Comment by bit-pirate on 2013-11-10: Odd... Please add the version numbers of Gazebo and the turtlebot, turtlebot_apps, turtlebot_simulator, kobuki and kobuki_desktop packages to your question. Comment by Robocop87 on 2013-11-10: Gazebo: 2.0.0 Turtlebot: 2.2.2 Turtlebot_apps: 2.2.4 Turtlebot_simulator: 2.1.1 Kobuki: 0.5.5 Kobuki_desktop: 0.3.1 Comment by bit-pirate on 2013-11-10: Gazebo 2.0! That might be it. I tested version 2.0 after the release and ran into some problems. So far no one got around implementing the necessary changes. Please revert back to Gazebo 1.9.1. and do try teleop again. 
Comment by Robocop87 on 2013-11-15: Could this possibly have any connection with my graphics card? I imagine it moving too slow could mean its not good enough? Comment by bit-pirate on 2013-11-17: If your real time factor is at 1.0, it should be fine. If the simulation time would pass much slower than real time, then there would probably be too much load on your system. Did you test the turtlebot simulation with Gazebo 1.9? Comment by Robocop87 on 2013-12-07: No I never did, I never figured out how to revert the version and I ended up just using another computer...sorry!
{ "domain": "robotics.stackexchange", "id": 16102, "tags": "gazebo, turtlebot2, turtlebot" }
Summation limits in DFT
Question: Assume a discrete-time signal $(x_n)$ is given. Some texts define the DFT as $$ X[k] = \sum_{n=-N}^N x_n\exp\left(\frac{-2\pi j k n}{N}\right) $$ while others define it as $$X[k] = \sum_{n=0}^{N-1} x_n\exp\left(\frac{-2\pi j k n}{N}\right) $$ or $$X[k] = \sum_{n=1}^N x_n\exp\left(\frac{-2\pi j k n}{N}\right)$$ or many other seemingly random variants with different indices. So what are the true index limits for the DFT? Or is there no unique consensus and any random author is allowed to introduce a new one? Answer: The second expression is the most common to me, for a standard $N$-length signal. The third could be equivalent if the $x[n]$ sequence is periodic, as the Fourier kernel $\exp\left(\frac{-2\pi j k n}{N}\right)$ has the same value (namely, 1) for $n=0$ and $n=N$. The first one with $2N+1$ length seems odd to me, in both meanings. Yet, if the signal is considered to be null outside the considered support (the limits of the summation), the magnitude or the energy of the output may provide a sound amplitude spectrum (as a potential phase term could be cancelled). I suspect that the diversity of formulae you have found (the second and the third) is related to different indexings that one can find in computer software: zero-based numbering (in C, Python) or one-based indices (Fortran, Matlab). I believe that numbering should start at zero (see "Why numbering should start at zero (EWD 831)", Dijkstra, Edsger Wybe).
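The remark that the $n=1,\dots,N$ sum matches the $n=0,\dots,N-1$ sum for a periodic sequence is easy to verify numerically; a quick check of my own, with arbitrary sample values:

```python
import cmath

x = [1.0, -2.0, 3.0, 0.5]          # arbitrary samples, N = 4
N = len(x)

def kernel(k, n):
    """Fourier kernel exp(-2 pi j k n / N); equals 1 at n = 0 and n = N."""
    return cmath.exp(-2j * cmath.pi * k * n / N)

for k in range(N):
    X_a = sum(x[n] * kernel(k, n) for n in range(N))              # n = 0..N-1
    # n = 1..N with the periodic extension x[N] = x[0]: the kernel at
    # n = N equals the kernel at n = 0, so the sums agree term by term.
    X_b = sum(x[n % N] * kernel(k, n) for n in range(1, N + 1))
    assert abs(X_a - X_b) < 1e-12
```

The first definition in the question, with limits $-N$ to $N$, would need a $2N+1$ normalization in the kernel to be a DFT in the same sense, which is part of why it looks odd.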
{ "domain": "dsp.stackexchange", "id": 9883, "tags": "discrete-signals, fourier-transform, dft" }
Has extra blank <option>'s and I have no idea why. Please help me get them out
Question: This is the result of the code: it has extra option tags and I have no idea why it produces the extra option box. Below is the result of the query (correct number of rows, correct response)... ParentID Parent 1 Pipe 2 Valve 3 Control Valve 4 Pump 5 Chamber 6 Meter 7 Check 8 Distribution Pipe This is some PHP that makes the dropdown: $qry = "select distinct ParentID, Parent from tbl where ChildID = '0' and Parent is not null"; $parentRslt = mysql_query($qry); $count = mysql_num_rows($parentRslt); $opt7 = null; for($m=0;$m<$count;$m++) { $var = mysql_result($parentRslt,$m,'ParentID'); $name = mysql_result($parentRslt,$m,'Parent'); $var = trim($var); $name = trim($name); $opt7.="<option value='$var'>$m $name<option>"; } $form = " <form method='post'> Network Action: <input type='text' name='child' /> <select name='parentID'> $opt7 </select> <input type='submit' name='submit' value='submit' /></form>"; return $form; Answer: You should close the option tag. Change $opt7.="<option value='$var'>$m $name<option>"; to $opt7.="<option value='$var'>$m $name</option>";
{ "domain": "codereview.stackexchange", "id": 817, "tags": "php, javascript, mysql, html" }
Why don't we count swaps and other steps besides comparisons when finding the time complexity of a sorting algorithm?
Question: I was learning some basic sorting techniques along with their complexity. However, I cannot understand why only the number of comparisons is taken into account when calculating the time complexity, while operations such as swaps are ignored. Link to selection sort analysis. Please help me understand. Answer: There are 2 major reasons: For most algorithms the number of other operations can be bounded by a multiple of the number of comparisons (i.e. runtime in $\mathcal{O}(\#comparisons)$). This is because e.g. a swap does not occur without a prior comparison requiring this swap (depends on algorithm). Using comparisons you can prove an $\Omega(n \log n)$ lower bound for any comparison-based sorting algorithm on $n$ elements.
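The first point can be made concrete with selection sort (the algorithm the question links to): every pass does a batch of comparisons but at most one swap, so the swap count is dominated by the comparison count. A small instrumented sketch of my own:

```python
def selection_sort_counting(a):
    """Selection sort that tallies comparisons and swaps as it runs."""
    a = list(a)
    comparisons = swaps = 0
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            comparisons += 1          # one comparison per candidate
            if a[j] < a[smallest]:
                smallest = j
        if smallest != i:
            swaps += 1                # at most one swap per outer pass
            a[i], a[smallest] = a[smallest], a[i]
    return a, comparisons, swaps

sorted_a, c, s = selection_sort_counting([5, 2, 4, 6, 1, 3])
assert sorted_a == [1, 2, 3, 4, 5, 6]
assert c == 6 * 5 // 2      # always n(n-1)/2 comparisons, regardless of input
assert s <= c               # swaps are bounded by comparisons
```

Since swaps never exceed comparisons (here at most $n-1$ against $n(n-1)/2$), counting only comparisons changes the total operation count by at most a constant factor, which big-O notation absorbs.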
{ "domain": "cs.stackexchange", "id": 2627, "tags": "algorithm-analysis, runtime-analysis, sorting" }
Calculate karma based on retweets
Question: This script is meant to look for an id in the data attribute in each containment div, then send an Ajax call to get the amount of retweets, calculate the karma based on those retweets and then display that number in a div. This code looks like spaghetti. How do I improve this to make each function more independent? There is some abstraction function that I can't figure out. My other question is, how can I use a jQuery deferred object to create a promise and update the div once I get a response? function get_retweet(id) { var url = 'https://api.twitter.com/1/statuses/show/' + id + '.json', karma; $.ajax({ url: url, type: 'GET', dataType:'jsonp', crossDomain: true, success: function(data) { display_karma(data.retweet_count); }, error: function() {alert('fail');} })); } function calc_karma(tweets) { return tweets *10 +10; } function display_karma(retweets) { var id, karma, el =$('.tweetContainer'); //id = $(el).data(el, tweet_id); id = '248988915661410304'; karma = calc_karma(retweets); el.find('.tcPoints').text(karma); } function start_get_karma() { var id; $('.tweetContainer').each(function(index,el) { //id = $(el).data(el, tweet_id); id = '248988915661410304'; get_retweet(id); }); } Answer: Here are some tips: Use $.getJSON() to get JSON. Also, name the first parameter json for clarity. Note from documentation for jQuery.getJSON() JSONP If the URL includes the string "callback=?" (or similar, as defined by the server-side API), the request is treated as JSONP instead. See the discussion of the jsonp data type in $.ajax() for more details. 
More information here Old Code: var url = 'https://api.twitter.com/1/statuses/show/' + id + '.json', karma; $.ajax({ url: url, type: 'GET', dataType:'jsonp', crossDomain: true, success: function(data) { display_karma(data.retweet_count); }, error: function() {alert('fail');} })); New Code: var url = 'https://api.twitter.com/1/statuses/show/' + id + '.json?callback=?'; $.getJSON( url, function (json) { display_karma(json.retweet_count); }).error(function () { alert('fail'); }); Eliminate commented code. Use a source control system like git, svn, mercurial to keep track of changes. Here's a good start for it Old Code: function start_get_karma() { var id; $('.tweetContainer').each(function(index,el) { //id = $(el).data(el, tweet_id); id = '248988915661410304'; get_retweet(id); }); } New Code: function start_get_karma() { var id = '248988915661410304'; $('.tweetContainer').each(function (index, el) { get_retweet(id); }); } Function calls are expensive, so use a basic loop instead of each() when appropriate. Old Code: $('.tweetContainer').each(function (index, el) { get_retweet(id); }); New Code A: for(var i = 0, len = $('.tweetContainer').length; i < len; i++){ get_retweet(id); } However, there's a problem. Making multiple calls to get_retweet() with the same static value doesn't make sense. So just make one call. New Code B: if( $('.tweetContainer').length ){ get_retweet(id); } Don't declare variables if they're only used once. Old Code: function display_karma(retweets) { var id, karma, el =$('.tweetContainer'); id = '248988915661410304'; karma = calc_karma(retweets); el.find('.tcPoints').text(karma); } New Code: function display_karma(retweets) { var id = '248988915661410304'; $('.tweetContainer').find('.tcPoints').text(calc_karma(retweets)); } Have variable names give a hint to the type. retweets sounds like a function or array. Try naming it retweet_amount or something similar.
Final Code: function calc_karma(tweets) { return (tweets * 10) + 10; } function display_karma(retweet_amount) { $('.tweetContainer').find('.tcPoints').text(calc_karma(retweet_amount)); } function get_retweet(id) { var url = 'https://api.twitter.com/1/statuses/show/' + id + '.json?callback=?'; $.getJSON( url, function (json) { display_karma(json.retweet_count); }).error(function () { alert('fail'); }); } function start_get_karma() { var id = '248988915661410304'; if( $('.tweetContainer').length ){ get_retweet(id); } }
{ "domain": "codereview.stackexchange", "id": 2433, "tags": "javascript, jquery, ajax, twitter" }
Special Conformal Transformation Acting on Spinor Variables
Question: I'm working in 3,1 Minkowski spacetime, representing null vectors as a product of two commuting spinors so that eg. $$p_i^{\dot{\alpha}\alpha} = |i]^{\dot{\alpha}}\langle i|^\alpha.$$ I know that special conformal transformations act in terms of the spinors as $$K_{i\dot{\alpha}\alpha} = \frac{\partial}{\partial|i]^{\dot{\alpha}}} \frac{\partial}{\partial\langle i|^\alpha}.$$ Is it known how to give a finite transformation of $K_i$ acting on the spinors? So of the form $$e^{b\cdot K_i}|i\rangle = f_b(|i\rangle)$$ for some function $f_b$ and vector $b$? It looks intuitively to me like it should be straight-forward given that $$K_i |i\rangle =0,$$ however I imagine there are some difficulties in taking the exponential of a second derivative operator. Answer: I found the answer I was looking for in twistor space, where the conformal group acts linearly. Under a Fourier transform back to momentum space, we can write a special conformal transformation acting on $|j\rangle$ as $$|j\rangle^\alpha \mapsto |j\rangle^\alpha + i\, b^{\alpha\dot{\alpha}}\frac{\partial}{\partial|j]^{\dot{\alpha}}}.$$ I'm not really sure how useful this statement is, but I think it makes sense. I would be interested to know if it's correct to write that $$(e^{b\cdot K_j}|j\rangle)^\alpha = |j\rangle^\alpha + i \,b^{\alpha\dot{\alpha}}\frac{\partial}{\partial|j]^{\dot{\alpha}}},$$ and if that is correct, how to get to the right hand side from the exponential on the left.
{ "domain": "physics.stackexchange", "id": 78582, "tags": "quantum-field-theory, special-relativity, conformal-field-theory, differential-equations" }
Module preservation and hub genes finding
Question: I ran a module preservation analysis between two conditions, and on the basis of the preservation Z-summary I would like to go for downstream analysis. So I'm interested in looking for hub genes of those modules which have the lowest Z-summary. But so far I didn't find a section in the WGCNA tutorial that mentions how to find hub genes after a module preservation analysis. Any suggestion would be really appreciated. Answer: Simply select the module(s) you are interested in and look for hub genes using the standard calculations. Having done a module preservation analysis does not change the procedure in the slightest.
{ "domain": "bioinformatics.stackexchange", "id": 1226, "tags": "networks, wgcna" }
Kinematics confusion regarding sign of integration
Question: I was solving some problems regarding non-inertial frames, and Newtonian mechanics in general, when I faced a major doubt regarding one of the seemingly simple topics, and I'd appreciate it if someone clear this doubt for me. Suppose, I have a body that I drop from a certain height under gravity. I want to know the velocity of the body at a certain height from the ground. There are no forces acting on the body, apart from gravity. $$m\frac{dv}{dt} = mv\frac{dv}{dx}= -mg $$ As you can see, I have taken the vertically up direction as positive, and thus $g$ is negative. Then it is just simple integration : $$\int_0^vvdv=-g\int_{x_i}^{x_f}dx$$ Since $x_f<x_i,$ we get the expected answer, as the integral should be positive on both sides. This is perfectly fine. However, let me now consider the downward direction, as positive. In that case, $g$ would be positive. Hence, the only change that we'll get would be : $$\int_0^vvdv=g\int_{x_i}^{x_f}dx$$ However, since $x_f<x_i,$ the right-hand side becomes negative, while the left-hand side can only be positive. This is clearly impossible, so there must be some negative sign on the RHS to balance this. I would have to flip the limits of the integral to make sense of this. I'm not being able to understand where this extra negative sign would come from if I take the downward direction as positive. My guess is, in this case, $dx$ would be negative. However, I don't seem to know why? Can $dx$ carry a negative sign? If not, how do we resolve this issue? If I take the downward direction to be positive, and so $g$ is positive, how do I show that velocity increases or find the velocity at some height above the ground? In simple problems like this one, this doesn't create that much confusion. But if we include more force terms, that depend on height, and then integrate, our choice of positive up or down seems to matter a lot. For some reason, $dx$ and $g$ seem to have opposite signs in front of them. 
I can't seem to figure out why. I'm really sorry in advance if this problem is rather trivial. Answer: In the second part, your assumption that $x_f\lt x_i$ is incorrect. In fact, $x_f\gt x_i$ if you take the down direction to be positive. See the analysis below for a more detailed answer. In the first part with down being the negative direction: Your first equation $$mv\frac{dv}{dx}= -mg$$ would yield $$\int vdv=-g\int_{x_i}^{x_f}dx$$ so that there is no $m$ symbol (you have accidentally left the $m$ in your equations). This will yield the equation $$v^2=-2g(x_f - x_i)$$ so that for $v\gt 0$ then $x_f\lt x_i$ In other words, $x_f$ must be more negative (meaning smaller) than $x_i$ for $v$ to be positive since we are taking the down direction to be negative. If for example we take some point where $x=0$, then after a certain time $x_i=-5$ then a little later $x_f=-20$. This means that $x_f-x_i\lt 0$ which is consistent. In the second part with down being the positive direction: Now you are taking downward to be positive in which case for $$v^2=2g(x_f - x_i)$$ then $x_f \gt x_i$ so that $$v^2\gt 0$$ always meaning $v\gt 0$ Your assumption that $x_f\lt x_i$ is incorrect. If for example we take $x_i=5$ then the final value for $x$ can be $x_f=20$ since the positive direction is down so that $x_f-x_i$ is positive.
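The sign bookkeeping in the answer can be sanity-checked numerically; a short sketch of my own, dropping a body through 15 m with $g = 9.8\ \mathrm{m/s^2}$ (values are illustrative):

```python
g, drop = 9.8, 15.0   # magnitude of g and the distance fallen (illustrative)

# Convention 1: up is positive, so gravity enters as -g and the body
# ends at a SMALLER coordinate: x_f - x_i = -drop.
v_sq_up = -2 * g * (-drop - 0.0)     # v^2 = -2 g (x_f - x_i)

# Convention 2: down is positive, so gravity enters as +g and the body
# ends at a LARGER coordinate: x_f - x_i = +drop (the point of the answer).
v_sq_down = 2 * g * (+drop - 0.0)    # v^2 = +2 g (x_f - x_i)

# Either way v^2 comes out positive and identical: the sign of g and the
# sign of the displacement flip together, so no extra minus sign is needed.
assert v_sq_up == v_sq_down == 2 * g * drop
assert v_sq_up > 0
```

The mistake in the question was holding $x_f < x_i$ fixed while flipping the axis; the displacement's sign is tied to the chosen convention, not to the physical direction of fall.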
{ "domain": "physics.stackexchange", "id": 83139, "tags": "newtonian-mechanics, forces, kinematics, velocity, calculus" }
Geometric meaning of spin connection
Question: A very short question: Does the spin connection that we encounter in General Relativity $$\omega_{\mu,ab}$$ have a geometric meaning? I am supposing it does because it comes from mathematical terms that take geometric parts on a manifold, but I am not sure how to visualize that. Answer: The spin connection has a geometric meaning - it is a connection associated to a particular non-coordinate frame, which diagonalizes the metric. Here's how: Let $M$ be our spacetime. The tangent bundle $TM$ may be thought of as the associated bundle to an $\mathrm{SO}(n)$-principal bundle, where the $\mathrm{SO}(n)$ matrices represent ordered orthonormal bases at every point (every column vector is orthonormal to every other in such a matrix, which is the way in which it represents a basis). The spin connection is now just a $\mathfrak{so}(n)$-valued connection 1-form $\omega$ on $TM$, which may locally be expanded as $$ \omega = \omega_\mu \mathrm{d}x^\mu = {{\omega_\mu}^a}_b {T^b}_a\mathrm{d}x^\mu$$ where the ${{\omega_\mu}^a}_b$ are the connection coefficients physicists usually deal with, and the ${T^b}_a$ are a basis for the $\mathfrak{so}(n)$ matrices, usually the simple antisymmetric matrices with two non-zero entries one would always write down. Usually, we think of tangent vectors as being expanded as $v = v^\mu \partial_\mu$, so the natural basis at every point is given by the coordinates, which may be arbitrarily ugly. In particular, the metric is $g_{\mu\nu}$. We now want to (locally) change frames such that the metric becomes the standard diagonal metric $\eta_{\mu\nu}$,1 because that one is evidently easier to work with. Such a (local) change of frames is given by a linear invertible map $$ e : TM \to TM$$ which is given in components by ${e^a}_\mu$ with $b^a = {e^a}_\mu v^\mu$ for $v$ the components in the coordinate basis and $b$ the components in the diagonal basis. $e$ is called the vielbein. 
Since $TM$ carries the natural Levi-Civita connection given by the Christoffel symbols $\Gamma$, we get a connection on the bundle by $$ \omega = e \Gamma e^{-1} + e \mathrm{d}e^{-1}$$ or, in components, $$ {{\omega_\mu}^a}_b = {e^\nu}_b {\Gamma^\lambda}_{\mu\nu}{e^a}_\lambda - {e^\nu}_b \partial_\mu {e^a}_\nu$$ which is how one obtains the spin connection. We may think of the spin connection as describing the Levi-Civita connection in a "moving frame" whose motion is given by the vielbein such that the metric takes the simple form we are used to from Euclidean/Minkowski space. 1$\mathrm{SO}(n)$ is the Riemannian, $\mathrm{SO}(1,n-1)$ the Lorentzian case, but there's not much of a difference in the description we have here.
{ "domain": "physics.stackexchange", "id": 24053, "tags": "general-relativity, differential-geometry, differentiation" }
Error Installing ROS on Debian Stretch
Question: I am trying to install ROS on Debian Stretch running on a Beaglebone Black using the instructions here http://wiki.ros.org/Debian/Installation/Source the script file install_ros.sh gives the following error while executing ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release ==> cmake /home/debian/ros_catkin_ws/src/catkin -DCATKIN_DEVEL_PREFIX=/home/debian/ros_catkin_ws/devel_isolated/catkin -DCMAKE_INSTALL_PREFIX=/home/debian/ros_catkin_ws/install_isolated -DCMAKE_BUILD_TYPE=Release -G Unix Makefiles in '/home/debian/ros_catkin_ws/build_isolated/catkin' Unhandled exception of type 'OSError': Traceback (most recent call last): File "./src/catkin/bin/../python/catkin/builder.py", line 965, in build_workspace_isolated number=index + 1, of=len(ordered_packages) File "./src/catkin/bin/../python/catkin/builder.py", line 665, in build_package destdir=destdir, use_ninja=use_ninja File "./src/catkin/bin/../python/catkin/builder.py", line 397, in build_catkin_package run_command_colorized(cmake_cmd, build_dir, quiet, add_env=add_env) File "./src/catkin/bin/../python/catkin/builder.py", line 187, in run_command_colorized run_command(cmd, cwd, quiet=quiet, colorize=True, add_env=add_env) File "./src/catkin/bin/../python/catkin/builder.py", line 205, in run_command raise OSError("Failed command '%s': %s" % (cmd, e)) OSError: Failed command '['cmake', '/home/debian/ros_catkin_ws/src/catkin', '-DCATKIN_DEVEL_PREFIX=/home/debian/ros_catkin_ws/devel_isolated/catkin', '-DCMAKE_INSTALL_PREFIX=/home/debian/ros_catkin_ws/install_isolated', '-DCMAKE_BUILD_TYPE=Release', '-G', 'Unix Makefiles']': [Errno 2] No such file or directory <== Failed to process package 'catkin': Failed command '['cmake', '/home/debian/ros_catkin_ws/src/catkin', '-DCATKIN_DEVEL_PREFIX=/home/debian/ros_catkin_ws/devel_isolated/catkin', '-DCMAKE_INSTALL_PREFIX=/home/debian/ros_catkin_ws/install_isolated', '-DCMAKE_BUILD_TYPE=Release', '-G', 'Unix Makefiles']': [Errno 2] No such file 
or directory Command failed, exiting. Any help would be appreciated Thanks Simon Originally posted by simat on ROS Answers with karma: 16 on 2015-12-14 Post score: 0 Answer: After a bit of detective work I have worked out what the problem is. 'cmake' is not part of the standard Debian installation on the Beaglebone Black. Installing cmake with 'sudo apt-get install cmake' fixed this problem. If I can edit the ROS wiki I will add the command to the script file. My beaglebone is currently compiling ROS, has been coming up with some warnings which require the package 'python-nose'. Are these warnings important? Simon Originally posted by simat with karma: 16 on 2015-12-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2015-12-15:\ If I can edit the ROS wiki I will add the command to the script file. Well you can, it's a wiki. Just register for an account, and you should be able to edit anything. Comment by simat on 2015-12-15: Edited wiki, added 'sudo apt-get install cmake python-nose' to install script
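A small pre-flight check along these lines can catch a missing tool before the long source build starts. This is only a sketch; the tool list is an assumption (cmake was the actual culprit here), and it is not part of any official ROS script:

```shell
# check_tools: report any of the named tools that are not on PATH
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  [ -z "$missing" ] && return 0
  echo "missing:$missing"
  return 1
}

# Run before install_ros.sh; install anything reported, e.g.
#   sudo apt-get install cmake python-nose
check_tools cmake make || echo "install the missing tools before building"
```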
{ "domain": "robotics.stackexchange", "id": 23226, "tags": "ros, debian" }
How does time dilation apply at the centre of a black hole?
Question: At the direct centre of a black hole, you have $0$ GPE, and the net gravitational force is $0$. So how does time dilation work there (or is this another physics mystery)? Specification: I refer to how the time dilation changes from the actual surface of the black hole to the centre (I suspect it will continue to increase, but in a different way) Related: https://physics.stackexchange.com/a/251020/182917 Answer: There is no "center of a (Schwarzschild) black hole" in a traditional sense. At the horizon, the schwarzschild "radial" direction switches from spacelike to timelike, and the singularity becomes more a "time in the future" than a "place you can go to". Also, arguments like this about potential energy don't generalize well from Newtonian mechanics, and to the extent that you can use them, you have to use the relativistic generalizations of them, which involve terms that look like $1 - \frac{2M}{r}$, which is infinite for $r=0$
{ "domain": "physics.stackexchange", "id": 78205, "tags": "black-holes, time-dilation" }
Confusion with regards to units in Sinc pulse
Question: I am currently reading a section in the Handbook of MRI Pulse Sequences where it says that the time-bandwidth product for a sinc pulse is given by: $$ T \Delta f = Z $$ where $T$ is the length of the pulse, $\Delta f$ is the bandwidth and $Z$ is the number of zero crossings of the sinc pulse. Now, my question might be very naive, but it says that this is a dimensionless quantity and I do not understand why that would be. For example, if the bandwidth is described in radians/sec, does it not have units of radians? In that case, how does it relate to the number of zero crossings, which is indeed a discrete number? Answer: The length of the pulse, $T$, would be in units of time (for example, in seconds). The bandwidth would be in units of frequency (for example, in Hz). So $$[T][\Delta f]=\mathrm{s}\cdot\frac{1}{\mathrm{s}}=1$$ That means that $Z$ is a dimensionless quantity. That makes sense, as $Z$ is just a natural number that describes the number of times that something happens (in this case, the number of zero crossings), and should not have units at all.
{ "domain": "dsp.stackexchange", "id": 3780, "tags": "signal-analysis, time-frequency, bandwidth, mri" }
Is it possible for a solar system to have planets orbiting the star(s) in a spherical pattern?
Question: By the question I mean that the planets spin in their ellipse but all ellipses describe the surface of a near-sphere shape around a star. Same question but for a solar system with 2 stars of irrelevant sizes or nature Answer: I think what you mean is - is it possible for a planetary system to exist such that the planets do not orbit in a single plane, but the planets have a large scatter of inclination angles? Our solar system has a relatively modest range, providing you ignore Pluto, of orbital inclination values (and eccentricities); zero to 7 degrees (Mercury). This is thought to be due to the way that the solar system was formed; from a rotating protoplanetary disk. Other planetary systems are thought to form in the same way and indeed there is evidence from many of the multiple planetary systems discovered via the transiting technique, that many other planetary systems are also very "flat" and often flatter than our solar system. (e.g. Fang & Margot 2012) Nevertheless there are exceptions. One can use the Rossiter-Mclaughlin effect to estimate the projected orientation of a transiting planet's orbit to the equatorial plane of rotation of its parent star. There are many examples of planets which have orbits that go over the rotation poles of their parent stars or are even retrograde. For example: Anderson et al. 2010; Triaud's 2011 PhD thesis. About 1/3 of "hot Jupiters" are misaligned in this way. The misalignment may be as a result of dynamical interaction with other planets or as a result of close fly-bys by other stars or interactions with a binary companion. EDIT: The R-M effect is suggestive of non-coplanarity, but as only one planet is seen, it is not conclusive. There is at present I think only one solid example where the measurements suggest non-coplanarity of two planets and that is in the planetary system surrounding Upsilon And A. Using radial velocities and astrometry from the fine guidance sensors on HST, MacArthur et al. 
(2010) were able to establish that the c and d planets (i.e. the 2nd and 3rd planets in the system) were inclined at angles of $30\pm1$ degrees with respect to each other. This is much larger than the differences seen in our solar system (maybe 7 degrees). The presence of two stars is something that is thought to be a key trigger of misalignment. Something called the Kozai mechanism can cause a periodic exchange between orbital inclination and eccentricity, so that a planet flips backwards and forwards between two radically different orbits. It is quite possible for an outer planet to be affected by Kozai cycles whilst, closer in, perhaps more massive planets continue on their more "usual" co-planar paths. If you would like (much) more information then a good read is the review by Melvynn Davies et al. given at last year's "Protostars and Planets VI" conference. The talk can also be viewed here (he's a good speaker).
{ "domain": "physics.stackexchange", "id": 17480, "tags": "orbital-motion, planets, stars, solar-system" }
Data measuring interface
Question: I have a hardware-related question. I want to create a measurement set-up so that I can record data for DSP analysis. When searching around, I see I have a few different options. The ones I have found are: Data acquisition box / interface Logic analyzer Scope Soundcard (have specific input type (XLR/ TRS-Jack) ) From a DSP perspective, what are the differences between these types of measuring devices? Especially the difference between a DAQ and a logic analyser? As questioned in the comments: what are the parameters I want to measure. Here is the list for now (which can change later on): Torque Acceleration(vibration) (velocity) Force if possible Ammeter different switches for safety sensors (just low and high over time) some pulse generators I'm not sure what the bandwidth is of every signal. But I assume that the maximum bandwidth is from the acceleration sensors. I assume that the bandwidth is f_max = 10kHz so f_sampling = 25 or 30 kHz. Answer: Even though it's not fully clear what sort of an environment you are referring to by the measurement set-up, in the broadest sense of the term you need either a data acquisition interface or a sound card for the cheapest semi-alternative. A data acquisition interface to a PC is what you mainly need. There are a bunch of alternatives. You must set your requirements (data rate, #channels, bit depths, ADC linearity, SNR etc.) to get the optimal match for the price. Note that the software interface to the acquisition hardware is another concern that makes big functional differences in the end. The cheapest alternative to professional data acquisition hardware can be a PC audio soundcard. They will record AC-coupled and distorted signals. So no DC is captured unless you modify (if possible) by shorting the soundcard's ADC chip's input DC-blocking caps. There are a number of soundcard capturing software which also simulate an oscilloscope as well. 
Most typical PC (laptop) sound cards do provide 16 bits, 2 ch, 96 kHz sampling these days. A logic analyser is almost useless from a DSP point of view. Logic analysers are used for digital electronics such as logic gates, microprocessors, interface protocols etc. to analyse their bits, high and low values, ones and zeros. An oscilloscope is again useful mainly for analog electronics diagnosis purposes. It's an indispensable hardware tool for any hardware project, but not exclusively required for data capturing, unless you want to observe in real time what you are capturing.
{ "domain": "dsp.stackexchange", "id": 5532, "tags": "hardware" }
How do obligate carnivores not have nutrient deficiencies?
Question: I know that if a human eats just meat without plants, they develop issues like scurvy. If my understanding of it is correct that there are nutrients not found in meat, then how do obligate carnivores not suffer from certain nutrient deficiencies? Answer: To add to Bryan's answer, there are a few other issues: mainly pickiness and brains. Humans can handle an all-animal diet surprisingly well; look at the modern Mongolian nomad diet, which is exclusively animal based. One problem we see in modern humans in industrialized nations is that they often avoid eating organ meats; eating only muscle tissue "meat" can easily lead to nutrient deficiencies. Ask yourself when is the last time you sat down to a liver or kidney dinner. Many vitamins are almost exclusively found in organ meats and not muscle tissue. So many times when people from first-world countries end up on an all-meat diet, they end up with deficiencies because they are not eating the vitamin-rich portions of the animal. The other problem is humans have very big brains, and brains can't burn fat. They can only burn carbohydrates or protein. You may already see the problem: animal tissue is rather poor in carbohydrates. But, you say, they have lots of protein, which is true, but humans along with many animals do not metabolize proteins well; doing so on a large scale leads to ketones and other toxic byproducts. Human brains demand so many calories that they burn through what little carbohydrates animal tissue has, very quickly forcing ketosis. So humans must burn far more protein than most other animals on an all-animal diet, even when lipids are available, because carbohydrates are lacking. Carnivores also have adaptations to better deal with ketones; they still produce a lot of ketones but have better methods for managing them, mostly better ways to excrete them. Most apes only eat small amounts of meat and rarely lack carbohydrates in their diet, so they have not needed this adaptation. 
This is true for most of human history and the exceptions are very noticeable. The Mongolians I mentioned earlier also eat a lot of milk and cheese which contain a decent amount of carbohydrates.
{ "domain": "biology.stackexchange", "id": 11428, "tags": "nutrition" }
Why can't O form OF6?
Question: At first I thought that the reason is what I was taught, the absence of vacant d-orbitals, but then I did some research. Most of the answers stated what I initially thought, but one particular answer said that it's a common misconception (the d-orbital theory). According to it, there can be two possible reasons: Not much electronegativity difference between O and F. The atomic size of oxygen is not sufficient for it to hold 6 atoms. So which one is likely to be more appropriate, the d-orbital theory or the electronegativity and atomic size theory? Answer: One possible explanation involves atomic size. You need a large enough atom in the middle to stabilize all the occupied orbitals in the octahedral structure; sulfur makes it but oxygen does not. This reference gives the molecular orbitals for sulfur hexafluoride. A key feature of these molecular orbitals is a doubly degenerate pair, labeled $\mathrm{2e_g}$ in the above source, which effectively contains the "extra" valence electrons in fluorine-based orbitals. The picture below, taken from a different source due to limited downloading options from the free one, includes one of these combinations (here labeled $\mathrm{3e_g}$, as different nomenclature is used) among others. If we look carefully we see that the ostensibly nonbonding orbital has antibonding overlap between adjacent lobes and so is technically antibonding. The same is true of the other component of this degenerate pair. So to make the orbital effectively nonbonding and keep the octahedral structure from blowing apart, we need a large central atom to act as a "spacer" between orbital lobes. Apparently sulfur is large enough, oxygen is not.
{ "domain": "chemistry.stackexchange", "id": 12562, "tags": "inorganic-chemistry" }
In a DAG, finding the path with the highest score
Question: Given a directed acyclic graph in which each node has an assigned integer score, what is a fast way of finding the path from a start vertex to an end vertex with the highest cumulative score? I thought of a DFS approach in which we start at the end and traverse the graph in reverse, saving at each node the best cumulative score attainable. To print the results, we start at the first node and greedily pick the next node with the highest cumulative score. However, I don't think this is the best way, as we might traverse the same paths many times if we are given an unfriendly graph. Is there a better way of doing this? Answer: Hint: find a topological ordering, and for each vertex $v$, in the topological ordering, compute (the score of) the path with the highest score that ends at $v$.
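In code, that hint becomes a linear-time dynamic program over a topological order. This is only a sketch; the graph representation and all names below are illustrative, not from the answer, and node scores are assumed to be attached to vertices:

```python
from collections import deque

def max_score_path(adj, score, start, end):
    """Best cumulative node-score of any start->end path in a DAG.

    adj: dict vertex -> list of successors; score: dict vertex -> int.
    """
    # Kahn's algorithm for a topological ordering
    indeg = {v: 0 for v in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    q = deque(v for v in adj if indeg[v] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)

    # best[v] = highest score of a path from start ending at v
    best = {v: float('-inf') for v in adj}
    best[start] = score[start]
    for u in order:                      # relax outgoing edges in topological order
        if best[u] == float('-inf'):
            continue                     # u is unreachable from start
        for v in adj[u]:
            best[v] = max(best[v], best[u] + score[v])
    return best[end]
```

Every edge is examined once, so the whole thing is O(V + E); keeping a parent pointer alongside `best` would recover the path itself, avoiding the repeated traversals the question worries about.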
{ "domain": "cs.stackexchange", "id": 15655, "tags": "algorithms, graphs, graph-traversal, weighted-graphs, dag" }
Choosing attributes for k-means clustering
Question: The k-means clustering tries to minimize the within-cluster scatter and maximize the distances between clusters. It does so on all attributes. I am learning about this method on several datasets. To illustrate, in one of the datasets countries are compared based on attributes related to their Human Development Index. However, some of the attributes are completely unrelated to this dimension, for example the total population of countries. How should one deal with these attributes? As mentioned before, k-means tries to minimize the scatter based on all attributes, which would mean these additional attributes could hurt the clusters. To illustrate, I know that k-means cannot discern three clusters that are perfectly clustered around one dimension and completely scattered around the other. Should one just exclude some attributes based on prior knowledge? Is there perhaps a process that identifies irrelevant attributes? Answer: First of all, if you know that certain attributes shouldn't alter the clusters, you should remove them altogether. There is no point in hoping that K-Means will figure it out on its own if that can be fixed upstream. Second, obviously, not every attribute should affect the clusters equally. K-Means is based on the concept of distances between your points. Based on the distance matrix, the algorithm will find different clusters. The good thing is that you can tweak how the distance is calculated. You could weight the different attributes such that differences in certain attributes are more important than others. Third, if you want to programmatically find the "best" attributes for clustering, I don't know of any efficient ways to do it, meaning that your best bet is to try different combinations of attributes and see how good the clustering becomes. 
To rate the quality of clustering, there exist metrics like the Dunn Index, or the Davies-Bouldin Index (see this link for more detailed information: https://medium.com/@ODSC/assessment-metrics-for-clustering-algorithms-4a902e00d92d)
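One concrete way to realize the attribute-weighting idea (the data, names, and weights below are illustrative, not from the answer): since Euclidean k-means only sees distances, rescaling a standardized column by a weight w is the same as weighting that attribute by w² in the squared distance, so an irrelevant attribute like total population can be given weight 0.

```python
import numpy as np

def weight_features(X, weights):
    """Standardize each column, then scale it by its weight so that plain
    Euclidean k-means effectively uses a weighted distance."""
    X = np.asarray(X, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each attribute
    return X * np.asarray(weights, dtype=float)

# e.g. an HDI-like attribute plus total population; zero out population
X = [[0.9, 1.0e6],
     [0.8, 9.0e8],
     [0.2, 5.0e5],
     [0.1, 1.3e9]]
Xw = weight_features(X, weights=[1.0, 0.0])    # population no longer matters
```

The transformed matrix can then be fed to any standard k-means implementation; weights strictly between 0 and 1 soften an attribute's influence rather than removing it outright.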
{ "domain": "datascience.stackexchange", "id": 7746, "tags": "clustering, unsupervised-learning, k-means, noise" }
Renormalization conditions on $\phi^4$ theory
Question: In chapter 10 of Peskin & Schroeder the renormalization of the $\phi^4$ theory is discussed. The renormalization conditions are imposed such that the infinities are absorbed into the counterterm variables, if I understand correctly. My question is, why does one use these specific renormalization conditions? I found in a later post why the renormalization conditions related to the exact propagator exist (Understanding renormalization conditions in the $\phi^4-$theory), but it is not discussed much, neither in P&S nor in the aforementioned post, why we have to impose that the amputated 4-point diagram be equal to $-i\lambda$ at $s=4m^2$ and $t=u=0$. Any help would be appreciated. Answer: I think P&S explain their motivation in the book, if I'm not wrong (I haven't opened it in a long time!). I think the choice is motivated from an experimental POV. Suppose you were performing a $2\to2$ scattering experiment in the CM frame at energy $2E$. Then, $$s=4E^2, \qquad t=-4(E^2-m^2) \sin^2\frac{\theta}{2} , \qquad u=-4(E^2-m^2) \cos^2\frac{\theta}{2}. $$ To experimentally test out our theory, we first need to do a couple of "setup" experiments. Note that $\lambda$ and $m$ are parameters in our theory which must be fixed by experiment (i.e. the theory does not by itself predict these values). To fix these, we do one experiment at some energy scale $E_0$ which will fix $\lambda(E_0)$ and $m(E_0)$. Once this is known, the theory predicts the results of all other experiments at all energy scales. Every other experiment then can be used to test the theory. OK, now what's the best choice for $E_0$? Well, one natural choice is to measure it at the lowest possible energy scale, which in this case is $E_0=m$ (when $E<m$ there is no particle excitation so there is nothing to scatter). So, we perform the experiment at this scale and find $\lambda(m) = \lambda_0$ and $m(m) = m_0$. 
The renormalization condition chosen by P&S precisely defines the renormalized coupling and mass to these experimentally measured values. Note that there are many other useful renormalization conditions. Sometimes, it is convenient to fix $\lambda$ in the unphysical regime of scattering (say when $t>0$). This choice makes the theoretical calculations easier, but then a few extra steps have to be taken to match anything to experiment.
{ "domain": "physics.stackexchange", "id": 86203, "tags": "quantum-field-theory, renormalization" }
Attempting to eliminate var (and imperative style) from my Piece class
Question: I've been cooking with gas since I got Daniel C Sobral's help on my last question. I am now re-reading Odersky's "Programming in Scala, 2nd Edition" (finished my first reading about this time last year). I am eager to understand how to alter my mental modeling of problems to more fully embrace the functional programming style. However, I have spent hours looking at the code below attempting to figure out how to eliminate the var references. I am sure my imperative past is overshadowing and blinding me to functional possibilities. I think I have retained overall "referential transparency" at each method; i.e. none of the var-ness escapes the local scope of the method (or function) within which it is defined. However, I would like to understand how I might achieve a higher level of functional programming purity, even if it is slightly unreasonable, within each method. I am more looking for ways I need to change my problem solving approaches to be more myopically functional in nature. Specifically, how might I approach eliminate each instance of var. Thank you for any guidance. 
case class Bitmap2d(name: String, rowsByColumns: List[List[Boolean]], faceUp: Boolean) {
  //require(rowsByColumns != null) //Assumed that if null was allowed as parameter, an Option would be used
  require(validateRectangular, "all rows must have same length")

  def validateRectangular: Boolean = {
    rowsByColumns.forall(_.size / rowsByColumns.head.size == 1)
  }
}

class Piece(name: String, charRep: Char, rowsByColumnsAndUp: List[List[Boolean]]) {
  val translations = createTranslations()

  def printTranslations() = {
    println("Name: " + name + " Char: " + charRep)
    for (bitmap2d <- translations) {
      println(" Orientation: " + bitmap2d.name)
      for (row <- bitmap2d.rowsByColumns) {
        for (pixel <- row) {
          val value = if (pixel) "1" else "0"
          print(value);
        }
        println()
      }
    }
  }

  private def createTranslations() = {
    //generate all 7 translations
    val bitmap2dRaws = for (i <- 0 to 7) yield translateBasedOnBitsForXYR(rowsByColumnsAndUp, i)
    var bitmaps = Set[List[List[Boolean]]]()
    var result = List[Bitmap2d]()
    for (bitmap2dRaw <- bitmap2dRaws if (!bitmaps.contains(bitmap2dRaw._2))) {
      bitmaps += bitmap2dRaw._2;
      result = Bitmap2d(bitmap2dRaw._1, bitmap2dRaw._2, bitmap2dRaw._3) :: result
    }
    result.reverse
  }

  private def translationSideUp(bits: Int) = {
    val flipX = ((bits & 1) == 1)
    val flipY = ((bits & 2) == 2)
    ((flipX || flipY) && (!(flipX && flipY)))
  }

  private def translationDescription(bits: Int) = {
    var result = List[String]()
    if ((bits & 1) == 1) { result = "FlipX" :: result }
    if ((bits & 2) == 2) { result = "FlipY" :: result }
    if ((bits & 4) == 4) { result = "Rotate" :: result }
    result.reverse
  }

  private def translateBasedOnBitsForXYR(rowsByColumns: List[List[Boolean]], bits: Int) = {
    require(((bits >= 0) && (bits < 8)), "bits must contain a value between 0 (inclusive) and 8 (exclusive)")
    var result = rowsByColumns;
    if ((bits & 1) == 1) { result = translateAroundXAxis(result) }
    if ((bits & 2) == 2) { result = translateAroundYAxis(result) }
    if ((bits & 4) == 4) { result = translateRotate90DegreesRight(result) }
    (translationDescription(bits).mkString("+"), result, translationSideUp(bits))
  }

  private def translateAroundXAxis(rowsByColumns: List[List[Boolean]]) = {
    if (rowsByColumns.size > 1) { rowsByColumns.reverse } else { rowsByColumns }
  }

  private def translateAroundYAxis(rowsByColumns: List[List[Boolean]]) = {
    if (rowsByColumns.head.size > 1) { for (row <- rowsByColumns) yield row.reverse } else { rowsByColumns }
  }

  private def translateRotate90DegreesRight(rowsByColumns: List[List[Boolean]]) = {
    val width = rowsByColumns.head.size
    val height = rowsByColumns.size
    val linearized = //need non-recursive random access
      (for {
        row <- rowsByColumns
        pixel <- row
      } yield pixel).toArray
    var result = List[List[Boolean]]()
    for (i <- 0 to (width - 1)) {
      var tempRow = List[Boolean]()
      for (j <- 0 to (height - 1)) {
        tempRow = linearized((width * (j + 1)) - 1 - i) :: tempRow
      }
      result = tempRow :: result
    }
    result
  }
}

Answer: One good technique for eliminating vars is recursion -- it can certainly be used in this example. Alternatively, you can identify a common pattern, such as fold, traversal, etc.
For example:

var bitmaps = Set[List[List[Boolean]]]()
var result = List[Bitmap2d]()
for (bitmap2dRaw <- bitmap2dRaws if (!bitmaps.contains(bitmap2dRaw._2))) {
  bitmaps += bitmap2dRaw._2;
  result = Bitmap2d(bitmap2dRaw._1, bitmap2dRaw._2, bitmap2dRaw._3) :: result
}
result.reverse

Recursion:

def getResult(bitmap2dRaws: ???, bitmaps: Set[List[List[Boolean]]], result: List[Bitmap2d]): List[Bitmap2d] =
  bitmap2dRaws match {
    case Seq(bitmap2dRaw, rest @ _*) if (!bitmaps.contains(bitmap2dRaw._2)) =>
      getResult(rest, bitmaps + bitmap2dRaw._2, Bitmap2d(bitmap2dRaw._1, bitmap2dRaw._2, bitmap2dRaw._3) :: result)
    case Seq(bitmap2dRaw, rest @ _*) =>
      getResult(rest, bitmaps, result)
    case _ =>
      result.reverse
  }

getResult(bitmap2dRaws, Set[List[List[Boolean]]](), Nil)

Fold:

bitmap2dRaws.foldLeft((Set[List[List[Boolean]]](), List[Bitmap2d]())) {
  case ((bitmaps, result), bitmap2dRaw) if (!bitmaps.contains(bitmap2dRaw._2)) =>
    (bitmaps + bitmap2dRaw._2, Bitmap2d(bitmap2dRaw._1, bitmap2dRaw._2, bitmap2dRaw._3) :: result)
  case ((bitmaps, result), _) =>
    (bitmaps, result)
}._2.reverse

You can use the same techniques for the var inside translateRotate90DegreesRight as well.

In other places you might use Option:

private def translationDescription(bits: Int) = {
  var result = List[String]()
  if ((bits & 1) == 1) { result = "FlipX" :: result }
  if ((bits & 2) == 2) { result = "FlipY" :: result }
  if ((bits & 4) == 4) { result = "Rotate" :: result }
  result.reverse
}

becomes:

private def translationDescription(bits: Int) = {
  val flipX = if ((bits & 1) == 1) Some("FlipX") else None
  val flipY = if ((bits & 2) == 2) Some("FlipY") else None
  val rotate = if ((bits & 4) == 4) Some("Rotate") else None
  List(flipX, flipY, rotate).flatten // if this doesn't work, try flatMap(x => x)
}

Finally (unless I missed something), the var inside translateBasedOnBitsForXYR can be avoided simply by using multiple val, and if statements like this:

val xTranslation = if ((bits & 1) == 1) translateAroundXAxis(rowsByColumns) else rowsByColumns

and so on.
{ "domain": "codereview.stackexchange", "id": 1596, "tags": "functional-programming, scala, matrix" }
Imaginary voltage in simple RC circuit
Question: As a homework assignment, we have to find $U_a(\omega)$, that is the voltage that drops over the right resistor in relation to the frequency $\omega$ of the input AC voltage $U_e$. http://wstaw.org/m/2012/07/01/blindleistung3.png For the two extreme cases, $\omega \to 0$ and $\omega \to \infty$ I expect $U_a = 0$ since in the $\omega \to 0$ case, the capacitors will block the whole current. In the $\omega \to \infty$ case, the left capacitor will short the whole thing so that the right resistor does not get any current through it. To get the voltage $U_a(\omega)$ I tried these steps: Calculate the total impedance of the circuit $Z$: $$ Z(\omega) = R + \left( \frac 1{Z_C(\omega)} + \frac 1{Z_C(\omega) + R} \right)^{-1}$$ The impedance of the capacitor with capacitance $C$ is: $$ Z_C(\omega) = \frac{1}{i \omega C}$$ Calculate the total current: $$ I(\omega) = \frac{U_e(\omega)}{Z(\omega)} $$ Use the current divider rule to get the current that flows through the right branch (with capacitor and resistor). The current should be just this: $$ I_r(\omega) = I(\omega) \frac{Z_C(\omega)}{2 Z_C(\omega) + R} $$ Multiply that current with $R$ and get the result: $$ U_a(\omega) = I_r(\omega) R $$ When I plot real and imaginary parts of $U_a$ against $\omega$, I get the following: (The top line is the real part, the bottom line the imaginary part.) http://wstaw.org/m/2012/07/01/3.png When I plot the absolute value of $U_a$, I get the following plot: http://wstaw.org/m/2012/07/01/4.png Which one is the current that is really measured? The real part of the absolute value? I have to find out the value of $\omega$ for which the output voltage is the highest. I tried $\frac{\mathrm d U_a(\omega)}{\mathrm d \omega} = 0$, but I only get $\omega = \pm \frac{i}{RC}$ as solutions, which do not really make sense to me. How can I find the actual maximum? Answer: Don't forget the context you're working in. You're solving for the phasor voltage across the resistor. 
When you measure the actual time domain voltage with, say, an oscilloscope, you'll see a sinusoid with an amplitude and a phase (referenced to the source $U_e$). The magnitude of $U_a$ is the amplitude you will measure. The angle (phase) of $U_a$ is the phase you will measure. UPDATE: In response to the last question: First, we limit $\omega$ to being a real number. Now, if you work out $\frac{U_a}{U_e}$ by hand (something I highly recommend if you haven't), you should get (assuming I've not made an error): $\frac{U_a}{U_e} = \dfrac{j \omega RC}{1 - (\omega RC)^2 + j3 \omega RC}$ Now, you can "see" where this is maximum without calculation. As $\omega$ increases from zero, the real part of the denominator is decreasing and becomes zero when $\omega = \frac{1}{RC}$. From that point on, the magnitude of the denominator increases faster than the magnitude of the numerator.
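The algebra behind that transfer function can be checked symbolically. A sketch (my addition, assuming the circuit described in the question: a series R from the source, then a shunt C in parallel with a series C-R branch, with $U_a$ taken across the second R):

```python
import sympy as sp

w, R, C, a = sp.symbols("omega R C a", positive=True)
s = sp.I * w

Zc = 1 / (s * C)                              # capacitor impedance
Zpar = Zc * (Zc + R) / (Zc + Zc + R)          # shunt C in parallel with the (C + R) branch
H = (Zpar / (R + Zpar)) * (R / (R + Zc))      # U_a / U_e via two voltage dividers

# substitute a = omega*R*C and compare with the answer's expression
H_a = sp.simplify(H.subs(w, a / (R * C)))
target = sp.I * a / (1 - a**2 + 3 * sp.I * a)
assert sp.simplify(H_a - target) == 0

# the magnitude peaks where d|H|^2/da = 0, i.e. at a = omega*R*C = 1
mag2 = a**2 / ((1 - a**2)**2 + 9 * a**2)
assert sp.solve(sp.diff(mag2, a), a) == [1]
```

The second assertion confirms the closing remark: restricted to real $\omega$, the only stationary point of the magnitude is at $\omega = 1/(RC)$.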
{ "domain": "physics.stackexchange", "id": 3900, "tags": "homework-and-exercises, electric-circuits" }
What is the true meaning of a minimum phase system?
Question: What is the true meaning of a minimum phase system? Reading the Wikipedia article and Oppenheim is some help, in that we understand that for an LTI system, minimum phase means the inverse is causal and stable. (So that means zeros and poles are inside the unit circle.) But what do "phase" and "minimum" have to do with it? Can we tell a system is minimum phase by looking at the phase response of the DFT somehow? Answer: The relation of "minimum" to "phase" in a minimum phase system or filter can be seen if you plot the unwrapped phase against frequency. You can use a pole zero diagram of the system response to help do an incremental graphical plot of the frequency response and phase angle. This method helps in doing a phase plot without phase wrapping discontinuities. Put all the zeros inside the unit circle (or in the left half plane in the continuous-time case), where all the poles have to be as well for system stability. Add up the angles from all the poles, and the negative of the angles from all the zeros, to calculate the total phase to a point on the unit circle, as that frequency response reference point moves around the unit circle. Plot phase vs. frequency. Now compare this plot with a similar plot for a pole-zero diagram with any of the zeros swapped outside the unit circle (non-minimum phase). The overall average slope of the line with all the zeros inside will be lower than the average slope of any other line representing the same LTI system response (e.g. with a zero reflected outside the unit circle). This is because the "wind ups" in phase angle are all mostly cancelled by the "wind downs" in phase angle only when both the poles and zeros are on the same side of the unit circle line. Otherwise, for each zero outside, there will be an extra "wind up" of increasing phase angle that will remain mostly uncancelled as the plot reference point "winds" around the unit circle from 0 to PI. (...or up the vertical axis in the continuous-time case.)
This arrangement, all the zeros inside the unit circle, thus corresponds to the minimum total increase in phase, which corresponds to minimum average total phase delay, which corresponds to maximum compactness in time, for any given (stable) set of poles and zeros with the exact same frequency magnitude response. Thus the relationship between "minimum" and "phase" for this particular arrangement of poles and zeros. Also see my old word picture with strange crank handles in the ancient usenet comp.dsp archives: https://groups.google.com/d/msg/comp.dsp/ulAX0_Tn65c/Fgqph7gqd3kJ
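The average-slope argument is easy to see numerically. A minimal sketch (mine, not part of the original answer) compares the one-zero FIR filter [1, -0.5] (zero at z = 0.5, inside the unit circle) with its reflected counterpart [-0.5, 1] (zero at z = 2): the magnitude responses agree everywhere, but the non-minimum-phase version accumulates an extra uncancelled "wind up" of pi in unwrapped phase between omega = 0 and omega = pi.

```python
import numpy as np

w = np.linspace(0, np.pi, 512)
z = np.exp(-1j * w)            # evaluate along the upper half of the unit circle

H_min = 1.0 - 0.5 * z          # zero at z = 0.5 (minimum phase)
H_max = -0.5 + 1.0 * z         # zero reflected to z = 2 (non-minimum phase)

# identical magnitude responses...
assert np.allclose(np.abs(H_min), np.abs(H_max))

# ...but different unwrapped phase: the reflected zero leaves an
# uncancelled net phase lag of pi accumulated by omega = pi
ph_min = np.unwrap(np.angle(H_min))
ph_max = np.unwrap(np.angle(H_max))
assert abs(ph_min[-1]) < 1e-9          # minimum phase: net phase change ~ 0
assert abs(ph_max[-1] + np.pi) < 1e-9  # reflected zero: ends near -pi
```

The same comparison works for any stable pole-zero set: reflecting a zero outside the circle leaves the magnitude plot untouched while the unwrapped phase picks up an extra monotone decrease.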
{ "domain": "dsp.stackexchange", "id": 12028, "tags": "filters, filter-design, minimum-phase" }
Is the number density of photons frame dependent?
Question: In a relativistic context, the energy of a "single" photon, thought of as a massless particle with an on-shell condition $p^2=0$ or $E^2=\vec{p}^2$, depends on the frame. In other words, performing a boost preserves the light-like property of its four-momentum but it doesn't preserve the actual components. If we consider a scenario at $T=0$, so ignoring any thermal effects, let us imagine we have a bath of photons following some distribution due to some production mechanism. Will the photon number density observed depend on the frame of the observer? Would that mean that different observers see a different number of photons? What about the case at $T\neq 0$? Will having a temperature break Lorentz invariance? Answer: In any inertial frame, the number of photons is frame-independent. Just suppose you have an ideal photon detector that scoops up all the photons, clicking each time. The number of clicks is frame-independent, so the number of photons is too. Since volume is frame-dependent, that means the number density of photons is frame-dependent as well. Subtleties appear when we consider non-inertial frames, in which case the essential problem is that detectors with relative acceleration have different definitions of the vacuum state, and hence different definitions of photons, which are excitations about it. Thus, the detectors don't even agree on the number of photons. This is the Unruh effect.
{ "domain": "physics.stackexchange", "id": 70339, "tags": "quantum-field-theory, special-relativity, statistical-mechanics, observables" }
Which elementary particles does light interact with?
Question: Other than electrons, does light interact with the other subatomic particles? Also, do different elementary particles behave differently when interacting with light (X-rays or gammas)? Can you say, for example, that this is an up quark and this is charm? Answer: Light, at the particle level you are asking about, is composed of photons. Photons are elementary particles in the standard model of particle physics. These particles are at the underlying level of all matter as we presently know it. To first order, photons interact via the electromagnetic interaction with all charged particles in the table, and thus with all composite particles, atomic and molecular entities, that contain them. For example, the neutron is neutral, but the quarks contained in it are charged, thus there is a first-order interaction. Neutral elementary particles, other than the photon, are the neutrino and the Higgs. There is no first-order interaction for these neutral elementary particles. But interactions of elementary particles go through higher-order terms in the expansion for calculating the cross sections. There exists photon-photon scattering, whose cross section grows with energy. There also exists neutrino-photon scattering through Z and W exchanges in higher-order diagrams; this reaction is important for cosmological models. Similarly, there exists a Higgs boson decay to two photons, so in cosmological studies the higher-order diagrams would again be important. So the answer to your question is: light interacts to first order (higher cross sections) with all charged elementary particles, and through higher-order diagrams with the neutral elementary particles.
{ "domain": "physics.stackexchange", "id": 34319, "tags": "particle-physics, photons, standard-model, elementary-particles" }
Exact complexity of a problem in $\cap_{m \geq 2}\mathsf{AC}^0[m]$
Question: Let $x_i \in \{-1,0,+1\}$ for $i \in \{1,\ldots,n\}$, with the promise that $x = \sum_{i=1}^n{x_i} \in \{0,1\}$ (where the sum is over $\mathbb{Z}$). Then what is the complexity of determining if $x = 1$? Notice that trivially the problem lies in $\cap_{m \geq 2}{\mathsf{AC}^0[m]}$ because $x \equiv 1\bmod{m}$ iff $x = 1$. Question is: does the problem lie in $\mathsf{AC}^0$? If so, what is the circuit witnessing this? If not, how does one prove this? Answer: You can use the usual switching lemma argument. You haven't explained how you represent your input in binary, but under any reasonable encoding, the following function is AC$^0$-equivalent to your function: $$ f(x_1,\ldots,x_n) = \begin{cases} 0 & \text{if }x_1 - x_2 + x_3 - x_4 + \cdots - x_n = 0, \\ 1 & \text{if }x_1 - x_2 + x_3 - x_4 + \cdots - x_n = 1, \\ ? & \text{otherwise.} \end{cases} $$ (We assume that $n$ is even.) Following these lecture notes, suppose that $f$ can be computed by a depth $d$ circuit of size $n^b$. Then a random restriction of $n - n^{1/2^d}$ inputs leaves a function of decision tree complexity at most $2^d(b+1)+1$ with probability at least $1-1/(3n)$. A calculation will probably show that this is another instance of $f$ (on a smaller input size) with probability $\Theta(1/\sqrt{n})$, and so there is some random restriction which yields both an instance of $f$ on $n^{1/2^d}$ inputs and a function with constant decision tree complexity, leading to a contradiction. The same argument should yield exponential lower bounds.
{ "domain": "cstheory.stackexchange", "id": 2342, "tags": "reference-request, complexity-classes, circuit-complexity, upper-bounds" }
Is it possible, and if so how should we remodel thought experiments in classical SR in the context of the Scharnhorst Effect?
Question: I was thinking of the following thought experiment, which is related to the ideas user David Elm asked about here. Suppose you have a pair of photon mirror clocks as is often used in SR, where one of the clocks is the usual configuration, with mirrors separated by a distance $H$, and the other clock has its mirrors in the same configuration but also has two parallel conducting plates with a very small separation $L$ apart. This is sketched below: So the first clock would tick every $\frac{2H}{c_0}$ seconds (where $c_0$ is the speed of light in vacuum). And the second clock would tick at a rate of $ \frac{2H}{\left(1 + \frac{11e^4 }{2^6 (45)^2 (m_e L)^4} \right)c_0 } $ according to the formula of Scharnhorst which is reproduced on page 19 of The Scharnhorst Effect: Superluminality and Causality in Effective Field Theories by Sybil Gertrude de Clark here. So now consider some observer moving at a velocity $v$ parallel to the motion of the two photons. Classical SR tells us the speed of light in the vacuum is the same for all observers, so no matter how fast the observer moves, the photon in the first mirror configuration should always be seen moving at the same speed either away from or towards our observer. Now classical SR also tells us the same thing should be true for the second mirror as well (after all, it is a vacuum between those two conducting plates). Now according to the observer the second mirror's photon CLEARLY moves faster than the first's if we assume the validity of the Scharnhorst effect. And no matter how fast the observer moves in the direction parallel to both photons, BOTH of the configurations should appear to continue moving at the same velocity. Is this the right way to think about this? The problem is, just taken at face value, we can consider some of Einstein's old thought experiments and then derive two different time dilation formulas (one for each of the clocks), and similarly two different length contraction formulas etc etc...
which (I think?) cannot both be simultaneously valid. So how should we correctly model what our observer sees? Answer: The clocks in this question are ill posed: the Scharnhorst effect's relative speed-up of light only applies in the direction NORMAL to the two conducting plates. There are still versions of this question that can be asked, but it's different than the exact version here. See page 2038 of: "1993 J. Phys. A: Math. Gen. 26 2037" QED between parallel mirrors: light signals faster than c, or amplified by the vacuum by Barton and Scharnhorst. Now here are some observations where things get strange. In SR there is an unambiguous notion of "photon clock". Two perfect mirrors spaced some fixed distance $d$ apart with a single particle of light bouncing between them. If you reduce the distance $d$ by a factor of $2$ then the light bounces twice as frequently (as it maintains the same speed), so that if you measure time elapsed by measuring distance travelled by that bouncing photon, you always get an unambiguous measure of proper time, regardless of the separation $d$ of the mirrors. What QED basically proposes via the Scharnhorst effect is that as the mirrors get closer and closer, the rate of bouncing between the mirrors accelerates at a rate coupled to $d^4$. So photon clocks cannot be used to measure proper time. To pick a particular photon clock distance $d$ and say "this is how we measure time" is an extremely arbitrary choice, and the rate of oscillation couples in a non-linear fashion to the separation $d$, so the way proper time is measured has to be different than this. The next important thing to point out is that if we have our photon clocks with photons bouncing and all the mirrors are parallel to each other, we do still have the question of "how do the apparent speeds of all of these apparent clocks change as an observer accelerates and approaches $c$". That's still an interesting and strange question but I'll ask it at a later time.
{ "domain": "physics.stackexchange", "id": 99479, "tags": "special-relativity, thought-experiment" }
What does it mean for a complex inner product to have $U(n)$ symmetry?
Question: Does it only mean that if you have an $n$-component vector $\phi$, you can transform it with $A$, where $A\in U(n)$, so that you get $A\phi$, and then you can get the original vector $\phi$ back with $A^\dagger (A\phi)$? Is this all the phrase "the complex inner product has $U(n)$ symmetry" means? Or are there other implications? Answer: $U(n)$ symmetry for a complex inner product means the following. For any two vectors $\phi$ and $\psi$, and for any $A\in U(n)$, the inner product of $A\phi$ with $A\psi$ is the same as the inner product of $\phi$ with $\psi$: $$\langle \psi, \phi \rangle = \langle A\psi, A\phi\rangle.$$ $A$ is therefore a symmetry in the sense that it does not change the inner product between any pairs of vectors. This is a result of the fact that $A^\dagger=A^{-1}$ for all $A\in U(n)$. Note that $$\langle A\psi, A\phi\rangle=\langle \psi, A^\dagger A\phi\rangle=\langle \psi, A^{-1} A\phi\rangle=\langle \psi, \phi\rangle.$$
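A quick numerical check of this invariance (my sketch, not part of the original answer), using the QR decomposition of a random complex Gaussian matrix to generate an element of $U(n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# random unitary: Q factor of a complex Gaussian matrix
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A, _ = np.linalg.qr(M)
assert np.allclose(A.conj().T @ A, np.eye(n))   # A^dagger A = I, so A is in U(n)

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
phi = rng.normal(size=n) + 1j * rng.normal(size=n)

# <psi, phi> = <A psi, A phi>   (np.vdot conjugates its first argument)
assert np.allclose(np.vdot(psi, phi), np.vdot(A @ psi, A @ phi))
```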
{ "domain": "physics.stackexchange", "id": 73149, "tags": "quantum-mechanics, hilbert-space, symmetry" }
Writing a multidimensional array view for contiguous arrays in C++20
Question: When creating little games or other programs I often need multidimensional arrays. Usually I just do the simple std::vector<std::vector<T>> thing for simplicity's sake. However, this runs way more allocations and the data is not stored in one contiguous location, which brings some drawbacks with it. To fix this I wanted to create a multidimensional array view, with which I can just have something along the following lines (with an example of an array transformed to a 3x3 matrix): std::vector<int> vec{ /* ... */ }; MDSpan<int, 3, 3> view{ vec.data() }; view[1][2] = 123; My first approach was only templated on the datatype, but not the dimensions. This allowed for dynamic resizing and reshaping of the view, but it didn't allow for the multiple subscript operators returning different types, either new views, or values if the span is one-dimensional. After that I restructured my code to be templated on the dimensions as well, which got rid of the previously mentioned subscript problem and additionally got compile time constant evaluation possibilities. #pragma once #include <concepts> #include <utility> #include <array> template <template<typename, auto...> typename U, typename V, auto first, auto... others> struct DropFirstPackValue { using type = U<V, others...>; }; template <template<typename, auto...> typename U, typename V, auto first, auto... others> using DropFirstPackValue_t = DropFirstPackValue<U, V, first, others...>::type; template <typename T, std::size_t... strides> requires (sizeof...(strides) > 0) class MDSpan { public: constexpr MDSpan() = default; constexpr MDSpan(T* begin) { reset(begin); } constexpr void reset(T* begin) { m_begin = begin; } constexpr const T& at(std::integral auto... indices) const requires(sizeof...(strides) == sizeof...(indices)) { const std::array arr{ indices... 
}; std::size_t offset{}; for (std::size_t i{}, size{ arr.size() }; i < size; ++i) { offset += getStridesProduct(size - 1 - i) * arr[i]; } return *(m_begin + offset); } constexpr T& at(std::integral auto... indices) requires(sizeof...(strides) == sizeof...(indices)) { return const_cast<T&>(std::as_const(*this).at(indices...)); } constexpr const T& operator[](std::size_t index) const requires(sizeof...(strides) == 1) { return at(index); } constexpr T& operator[](std::size_t index) requires(sizeof...(strides) == 1) { return at(index); } constexpr auto operator[](std::size_t index) requires(sizeof...(strides) > 1) { const std::size_t offset{ getStridesProduct(dimensions()) }; T* begin{ m_begin + index * offset }; return DropFirstPackValue_t<MDSpan, T, strides...>{ begin }; } constexpr T* data() { return m_begin; } constexpr auto begin() { return Iterator{ *this }; } constexpr auto end() { return Iterator{ *this, stride(0) }; } constexpr static std::size_t dimensions() { return m_strides.size(); } constexpr static std::size_t stride(std::size_t index) { return m_strides[index]; } constexpr bool empty() const { return !m_begin; } constexpr operator bool() const { return !empty(); } constexpr bool operator==(const MDSpan& other) const { return m_begin == other.m_begin; } constexpr bool operator!=(const MDSpan& other) const { return !(*this == other); } private: constexpr static std::size_t getStridesProduct(std::size_t index) { std::size_t product{ 1 }; for (std::size_t i{}; i + 1 < index; ++i) { product *= stride(dimensions() - 1 - i); } return product; } private: T* m_begin; inline constexpr static std::array m_strides{ strides... 
}; private: class Iterator { public: constexpr Iterator operator++(int) { return Iterator{ m_owner, m_index++ }; } constexpr Iterator operator++() { ++m_index; return *this; } constexpr Iterator operator--(int) { return Iterator{ m_owner, m_index-- }; } constexpr Iterator operator--() { --m_index; return *this; } constexpr auto operator*() { return m_owner[m_index]; } constexpr bool operator==(const Iterator& other) const { return m_owner == other.m_owner && m_index == other.m_index; } constexpr bool operator!=(const Iterator& other) const { return !(*this == other); } private: constexpr Iterator(MDSpan& owner, std::size_t index = 0) : m_owner{ owner }, m_index{ index } { } friend class MDSpan; private: MDSpan& m_owner; std::size_t m_index; }; }; I would appreciate any kind of review and criticism, but have some specific questions, too: MDSpan::operator[] returning a sub-view could be marked as const, since it does not modify this in any way. However it still grants access to the underlying data. The same applies for MDSpan::data(), MDSpan::begin() and MDSpan::end(). Therefore it should not be const in my opinion. Is this the right decision? In the non-const version of MDSpan::at() I use the const one with a const_cast. Since I am just dropping the previously applied const qualifier, this should be well defined, but is it good practice to do so? MDSpan::Iterator::operator*() and MDSpan::operator[] return new MDSpan objects, and not references. Is there a way to work around this? While those MDSpans are lightweight and you can still achieve the same as with references, since those are views, there might still be some confusion as in e.g. range based for loops not being able to use references. In my first approach, the dynamic one, I was checking for indexing out of bounds via assert() from <cassert>, however, since my static one should be able to be used as constexpr, I can't do this anymore so I dropped all checks, which makes it more lightweight as well. 
Should I still keep the checks, or maybe switch to exceptions instead and just use the preprocessor to enable/disable error checking for debug/release builds? And if I use bounds checking, should I disallow access with greater/equal indices to one of the strides or only disallow indexing which would access not owned memory? (I know about C++23's std::mdspan, but wanted to implement my own multidimensional view. This is just for private projects and might even not get used at all.) Answer: Overall impression: well-written and well presented. It's pretty clear what's going on. I would have liked to have seen the unit tests, too - that generally helps reviewers. The constructor default-initialises m_begin, then assigns. It's better to use a real initialiser: constexpr MDSpan(T* begin) : m_begin{begin} { } *(m_begin + offset) is more conventionally written m_begin[offset]. We could really use a constructor for creating a MDSpan<const T> from MDSpan<T> where T is non-const. To make Iterator fully functional, it needs a public default constructor and the crucial members for its traits to work: public: using value_type = T; using pointer = T*; using reference = T&; using difference_type = std::ptrdiff_t; using iterator_category = std::bidirectional_iterator_tag; It shouldn't be too hard to make it satisfy the requirements for a random access iterator, which can improve its utility with standard algorithms. The span class lacks cbegin()/cend(). I mention that because iterator should convert to const_iterator but not vice versa, which can be slightly tricky if we template the iterator to avoid writing it out twice. Reverse iterator functions would also be good to have, and trivially implemented using std::make_reverse_iterator(). Answering the specific questions: I agree that the subview [] operator mustn't be const if it allows read-write access to elements. However, we could provide a const version that provides a subview of const T elements.
Use concepts to avoid collision when T is already const. That's safe, but when you move to C++23, be aware of deducing this, which is the modern way to write a single member function for both const and mutable objects. We could overload the operators for different number of dimensions, using constraints. As a user, I wouldn't expect to pay the cost of bounds checking unless I specifically ask for it. That's the main difference between at() and [] in standard collections, for example. I would throw std::out_of_range from the at() member when attempting to access outside the bounds of the span (regardless of whether that's within the underlying object, it's clearly erroneous - std::string_view is a good model to imitate for this).
{ "domain": "codereview.stackexchange", "id": 45455, "tags": "c++, array, c++20" }
Work-Energy conservation with friction
Question: I missed the lesson on the work-energy theorem, so I'm missing something about this subject. I know the formulas, but I can't figure it out. This question has many quantities. Here is the problem: The sled ($m = 11.1\;\mathrm{kg}$) shown in the figure leaves the starting point with a velocity of $25.1\;\mathrm{m/s}$. Use the work-energy theorem to calculate the sled’s speed at the end of the track or the maximum height it reaches if it stops before reaching the end. The straight sections of the track (A, B, D, and E) have a coefficient of friction of $0.409$ with the sled, and $284.9\;\mathrm{J}$ are lost to friction in the circular section of the track (C). In the inclined plane segment A, there are a height and a plane width, so I should use $$K_2 - K_1 = W$$ For A: $${m \over 2} v_\mathrm{f}^2 - {m \over 2} v_\mathrm{i}^2 = m g\cos(50^\circ) \mu_k d$$ $$v_\mathrm{f}=28.84\;\mathrm{m/s}$$ For B and D I used the same equation without the cosine factor. For the C part I used only the change of kinetic energy: $$K_2 - K_1 = 284.9\;\mathrm{J}$$ My result is $26.46\;\mathrm{m/s}$, the answer is $24.112\;\mathrm{m/s}$, but I can't figure out the forces and quantities. By the way: I don't want the calculations, I just want to understand the basic logic of it. Answer: Try to keep this tidy. It is a straightforward calculation, but there are many terms, so tidiness is the key. Start with the energy-work relation: $$E_A - E_E = W$$ where $E_A$ is the energy at the beginning, $E_E$ the energy at the end and $W$ the energy loss due to friction. We have to split $W$ further into $$W = W_A + W_B + W_C + W_D + W_E$$ where $W_X$ denotes the energy loss for the part $X$. What you already have is $W_A$: $$W_A = \mu \cdot m \cdot g \cdot \frac{H}{\sin(\alpha)} \cdot \cos(\alpha)$$ I call $\mu$ the friction coefficient, $H$ the height, and $\alpha$ is the only angle given in the drawing. $\frac{H}{\sin(\alpha)} = d$ in your naming convention. I prefer to keep as few variables as possible.
What are the other quantities? $W_B$ and $W_D$ are easy, $W_C$ is already given, and $W_E$ is calculated the same way as $W_A$, hence we are almost there. $E_A$ and $E_E$ are each given as the sum of kinetic and potential energy. A bit of algebra, and you have the result.
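The bookkeeping above can be written out symbolically. This is only an illustrative sketch (mine, not the answerer's): the geometry symbols H, alpha, L_B, L_D, H_E, beta are placeholders for values that would be read off the figure, which is not reproduced here.

```python
import sympy as sp

# Placeholder geometry: H, alpha for incline A; L_B, L_D for the flat parts;
# H_E, beta for incline E. Real values come from the (missing) figure.
m, g, mu, v_A = sp.symbols("m g mu v_A", positive=True)
H, alpha, L_B, L_D, H_E, beta, W_C = sp.symbols(
    "H alpha L_B L_D H_E beta W_C", positive=True)

# friction losses per segment (normal force m*g*cos(angle) on the inclines,
# sliding length = height / sin(angle))
W_A = mu * m * g * sp.cos(alpha) * H / sp.sin(alpha)
W_B = mu * m * g * L_B
W_D = mu * m * g * L_D
W_E = mu * m * g * sp.cos(beta) * H_E / sp.sin(beta)
W = W_A + W_B + W_C + W_D + W_E

# energy balance E_A - E_E = W  =>  positive root for the end speed
v_E = sp.sqrt(v_A**2 + 2 * g * (H - H_E) - 2 * W / m)

E_A = m * v_A**2 / 2 + m * g * H
E_E = m * v_E**2 / 2 + m * g * H_E
assert sp.simplify(E_A - E_E - W) == 0

# sanity check: no friction and equal end height -> speed unchanged
assert sp.simplify(v_E.subs({mu: 0, W_C: 0, H_E: H}) - v_A) == 0
```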
{ "domain": "physics.stackexchange", "id": 21491, "tags": "homework-and-exercises, newtonian-mechanics, energy, work" }
Rotation matrix to Twist command
Question: I want to implement the J+RRT algorithm for which I need to use the Twist command to navigate the robot in Euclidean space. First I calculate the difference in rotation matrices from the end-effector frame T1 to the goal frame T2 like so: Tdiff * T1 = T2 // * T1^-1 Tdiff * T1 * T1^-1 = T2 * T1^-1 Tdiff = T2 * T1^-1 I am using the Frame separated as a rotation and translation. Effectively this means that T1^-1 = [R1^T -t1; 0 0 0 1] RPY seems to be given in extrinsic x, y, z angles, which means that I need to convert it to intrinsic angles for the Twist command. This means that the RPY angles that I get from the Tdiff I would need to convert to my R1 rotation frame. Is my thinking correct? The code I'm using is this: KDL::Rotation inv_src_rot = src.M.Inverse(); double x, y, z; KDL::Rotation diff_rot = tgt.M * inv_src_rot; diff_rot.GetRPY(x, y, z); twist.rot = src.M * KDL::Vector(x, y, z); This is the code for the RPY calculation: https://github.com/orocos/orocos_kinematics_dynamics/blob/master/orocos_kdl/src/frames.cpp#L237-L260 EDIT: Right as I posted this I had another idea, which seems to me more correct. If my goal is to go towards the goal rotation tgt.M in the end-effector reference frame src.M, it would make sense to calculate the RPY angles from the rotation src.M * tgt.M, since in this case the extrinsic frame has now become src.M. Now when I calculate RPY it will be the rotation tgt.M given in the src.M frame, if I'm not mistaken? KDL::Rotation diff_rot = src.M * tgt.M; diff_rot.GetRPY(x, y, z); twist.rot = KDL::Vector(x, y, z); Answer: Angular velocities are strongly related to angle-axis description of a 3D rotation. Instead of using RPY, just use the angle-axis vector (as in angle times unit axis) of the rotation error (R1.R2^T). Then the angular velocity to drive this vector to 0 is just proportional to the vector (e.g. this is a proportional velocity control in SO(3)).
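The angle-axis suggestion can be sketched with SciPy's Rotation class (my illustration, not the answerer's code; the example orientations and the gain K are arbitrary choices):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# current end-effector orientation and goal orientation (arbitrary examples)
R_src = R.from_euler("xyz", [0.3, -0.5, 0.2])
R_tgt = R.from_euler("xyz", [1.0, 0.4, -0.8])

# rotation error: the rotation taking R_src onto R_tgt, i.e. R_err * R_src = R_tgt
R_err = R_tgt * R_src.inv()

# angle-axis vector of the error: rotation angle times unit axis
rotvec = R_err.as_rotvec()

# proportional velocity command in SO(3): drive the error vector to zero
K = 2.0
omega = K * rotvec   # angular velocity for the Twist command

# sanity check: applying the full error rotation recovers the goal exactly
assert np.allclose((R.from_rotvec(rotvec) * R_src).as_matrix(), R_tgt.as_matrix())
```

Unlike RPY angles, the rotation vector has no frame-convention ambiguity to keep track of, and the resulting angular velocity shrinks smoothly to zero as the orientation converges.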
{ "domain": "robotics.stackexchange", "id": 39011, "tags": "rotation, frames" }
Which is hardest: iron, brass or bone?
Question: I was hopping around random wikipedia articles when I came across the article for the Behemoth. In the description for the beast it says: His bones are as strong pieces of brass; his bones are like bars of iron So it got me thinking, which of these three substances is hardest: iron, brass or bone? (I had a quick look at the Mohs scale, which lists iron as 4, but could not find anything for brass or bone.) Answer: These two sources both put bone at a hardness of 5: http://www.chacha.com/question/how-hard-is-bone-according-to-moh's-hardness-scale https://answers.yahoo.com/question/index?qid=20110310200841AABwtMj Whether they are trustworthy is questionable though, so take it as you will. This source put brass at 3 and iron at 4.5: http://www.jewelrynotes.com/the-mohs-scale-of-hardness-for-metals-why-it-is-important/ and this image puts brass at 4 and iron at 4-5 (Similar to 4.5): http://patentimages.storage.googleapis.com/WO2001048807A1/imgf000009_0001.png While these different sources seem to have conflicting data, I think it would be safe to assume that Brass is the softest of these three materials, Iron comes second, and Bone is the hardest. Edit: In the description of that monster, the adjective used is 'strong'. You may want to consider how much force each of these materials can withstand instead of how hard they each are :)
{ "domain": "chemistry.stackexchange", "id": 6069, "tags": "physical-chemistry" }
On $\Delta^{+}$ particle decay
Question: Using isospin notation $$ \Delta^+=\left|\frac 3 2,\frac 1 2\right\rangle=\frac{1}{\!\sqrt{3}}\bigg(|duu\rangle+|udu\rangle+|uud\rangle\!\bigg) $$ It is known that all of the $\Delta$ baryons with mass near $1232 \,\operatorname{MeV}$ quickly decay via the strong force into a nucleon (proton or neutron) and a pion of appropriate charge. My question is... is it just experimental evidence or rather a theoretical necessity? Going ahead with isospin formalism: $$\begin{aligned} \Delta^+=\left|\frac 3 2,\frac 1 2\right\rangle\longrightarrow & \frac{1}{\!\sqrt{3}}\left|\frac{1}{2}, -\frac{1}{2}\right\rangle|1,1\rangle+\sqrt{\frac{2}{3}}\left|\frac{1}{2}, \frac{1}{2}\right\rangle|1,0\rangle\\ &= \frac{1}{\sqrt 3}|n\rangle|\pi^+\rangle+\sqrt{\frac 2 3}|p\rangle|\pi^0\rangle \end{aligned}$$ This reminds me a lot of a change of basis for the state $|\Delta^+\rangle$ from the coupled basis to the uncoupled basis, if we apply the algebra of addition of angular momenta. Is it just a coincidence? Why $1/2$ and $1$ though? Aren't there other ways of obtaining $3/2$? Is this decay predictable using just the expression for $\Delta^+$ in terms of quarks? Answer: My question is... is it just experimental evidence or rather a theoretical necessity? Both. It is an experimental fact, which confirmed/motivated the theory of strong interactions, early on. Strong means fast decay, short lifetimes. Theory went hand in hand with experiment back then, the mid 1950s. The theory quickly appreciated that Heisenberg's spin-like treatment of charge-irrelevance (isotopy) was a conserved property of the strong interactions, and it helped organize all experimental data, which then confirmed further logical-connection predictions of it, like the one you are discussing. Is it just a coincidence? Why $1/2$ and $1$ though? Aren't there other ways of obtaining $3/2$? Not really, in pragmatic terms.
While you could imagine, in a different world, adding isospin 1/2 with isospin 2 to also get isospin 3/2, there are no isospin 2 mesons below the mass of the Δ to serve... Is this decay predictable using just the expression for $\Delta^+$ in terms of quarks? Yes and no. People have used this decay in conjuring up and cleaning up the quark model in the first place. (The complete symmetry of the uuu fermion quarks of the $\Delta^{++}$ led to a spin-statistics paradox which ushered in the hypothesis of color.) Now, any half-decent quark model incorporates these facts and , in reverse, "postdicts" them! The specific decay you have is the standard illustration/first-exercise of isospin applications in virtually every introductory textbook on the subject. It is supposed to remind you of Clebsching, because it is formally identical to that problem: theory is application of mathematical rules of logic. That is the whole point of isospin: you handle it exactly like spin! In particular, squaring the respective channel amps gives you double the rate for the proton versus neutron mode.
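The Clebsch-Gordan bookkeeping in the question can be verified directly; a small sketch (my addition) using sympy:

```python
from sympy import S, Rational
from sympy.physics.quantum.cg import CG

half, three_half = S(1) / 2, S(3) / 2

# <j1 m1; j2 m2 | J M> with nucleon isospin 1/2 and pion isospin 1,
# coupled to the Delta+'s |3/2, 1/2>
amp_n_pip = CG(half, -half, S(1), S(1), three_half, half).doit()   # |n>|pi+>
amp_p_pi0 = CG(half,  half, S(1), S(0), three_half, half).doit()   # |p>|pi0>

# the coefficients from the question's decomposition
assert amp_n_pip**2 == Rational(1, 3)
assert amp_p_pi0**2 == Rational(2, 3)

# squaring the channel amplitudes: Gamma(p pi0) = 2 * Gamma(n pi+)
assert amp_p_pi0**2 / amp_n_pip**2 == 2
```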
{ "domain": "physics.stackexchange", "id": 87052, "tags": "particle-physics, strong-force, pions, isospin-symmetry" }
Daylight hours on Titan?
Question: I still cannot seem to wrap my head around Titan's day and night based on the math. From what I understand, Titan has a day of 16 earth days (384 hours) in which it circles Saturn once (its day and also one orbit around Saturn). But during one full day would we see only one sunrise and sunset per day (with length of sunlight increasing/decreasing with the seasons as we have on earth) and then another sunrise and sunset (daily?) as Saturn blocks the sun, casting Titan into complete darkness, and then dawn as it peeks out from the other side again? Answer: Titan orbits Saturn in the same plane as the rings, and the planet's equator, but much further out. Saturn is tilted with respect to the plane of its orbit by 27 degrees. This tilt means that usually the moon doesn't enter Saturn's shadow at all. If you were on the moon you would experience 8 earth-days of (weak, distant, hazy) daylight and 8 days of darkness. However, at two points in Saturn's orbit, its equatorial plane is aligned to the sun (these are Saturn's spring and autumn equinoxes); at these times Titan can be eclipsed. What you would see would depend on where you are on Titan. If you are on the side facing Saturn, you would get 8 days of nighttime (with Saturn as a large feature of the sky (except you can't see it because of the haze)). Then the sun rises; it moves slowly across the sky before moving behind Saturn. When it goes behind Saturn it would be a solar eclipse. The sky would go dark. Then the sun would come out from behind Saturn and the sky would become light again. Later, the sun would set for 8 days of darkness. If you are on the far side from Saturn you would get 8 days of (weak) daylight, followed by 8 days of night. The time when the sun is eclipsed would happen at night, so you wouldn't notice it. This is really no different to what happens on a day on Earth when the sun is eclipsed by the moon. There is sunrise, and the world becomes light. Eclipse, and the world becomes dark.
The eclipse ends, the world becomes light again, and later there is sunset. You don't normally call the start of an eclipse "sunset", nor its end "sunrise". Unlike eclipses on Earth, an eclipse of Titan by Saturn can last up to 6 hours. Also unlike Earth, there are repeated eclipses during a season lasting about 1 Earth year, and these seasons occur every 15 years.
{ "domain": "astronomy.stackexchange", "id": 6212, "tags": "planet" }
Output of classifier.predict in Tensorflow: extract probability
Question: When I do a prediction with my DNN classifier I get a dictionary like this. {'probabilities': array([9.9912649e-01, 8.7345875e-04, 8.5633601e-12], dtype=float32), 'logits': array([ 12.641698, 5.599522, -12.840958], dtype=float32), 'classes': array(['0'], dtype=object), 'class_ids': array([0])} Can someone explain to me the values of probabilities and logits? Why the three values? The docs just state "Evaluated values of predictions tensors." and do not give a structure/explanation of the output. Thanks! Answer: At the probabilities key you will find the probabilities of every label. Tensorflow just chooses the one with the highest probability. So in order to get the probability of the predicted outcome, you need to do something like this. results = classifier.predict(input_fn = lambda: mem_input_fn()) for r in results: idx = r["classes"][0] # idx is the predicted label print(idx, r["probabilities"][int(idx)])
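To address the "why three values" part: there is one probability and one logit per class, and the probabilities are simply the softmax of the logits. A quick check using the numbers above (an illustration added here, not part of the original answer):

```python
import numpy as np

logits = np.array([12.641698, 5.599522, -12.840958])

# Softmax: exponentiate (shifted by the max for numerical stability)
# and normalize so the three values sum to 1.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)  # ~[9.9913e-01, 8.7346e-04, 8.5634e-12], matching 'probabilities'
```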
{ "domain": "datascience.stackexchange", "id": 4353, "tags": "tensorflow, programming" }
What is the relation of the transition band's width and the filter order for the FIR windowing method
Question: When designing an FIR filter using the windowing method, how can one estimate the filter order? It's obvious that the type of window and the transition band's width have some effect on the order. Answer: There are only heuristic formulas for estimating the filter order. For a Kaiser window (which is probably the most frequently used window for filter design) the required filter order can be estimated from [1] $$M=\frac{A-8}{2.285\,\Delta\omega}\tag{1}$$ where $A=-20\log\delta$ ($\delta$ is the maximum deviation from the desired response), and $\Delta\omega$ is the (smallest) transition bandwidth. This formula is of course only valid for the approximation of ideal frequency-selective filters (low pass, high pass, etc.). This formula is implemented in Matlab's kaiserord.m function. [1] Oppenheim, Schafer, Buck, Discrete-Time Signal Processing, 2nd ed., p. 476
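Formula (1) is easy to evaluate directly; a quick sketch (the function name is my own choice, and rounding up to an integer order is a common convention rather than part of the formula):

```python
import math

def kaiser_filter_order(delta, delta_omega):
    """Estimate the FIR order for the Kaiser window design method.

    delta:       maximum deviation from the desired response (ripple)
    delta_omega: smallest transition bandwidth in rad/sample
    """
    A = -20.0 * math.log10(delta)          # attenuation in dB
    return math.ceil((A - 8.0) / (2.285 * delta_omega))

# e.g. 60 dB attenuation (delta = 1e-3) and a transition band of 0.05*pi
print(kaiser_filter_order(1e-3, 0.05 * math.pi))  # -> 145
```

SciPy's scipy.signal.kaiserord implements the same heuristic (and also returns the Kaiser beta), analogous to Matlab's kaiserord.m.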
{ "domain": "dsp.stackexchange", "id": 2422, "tags": "filter-design, window-functions" }
ROS and Research Robotics Career Finder
Question: I was wondering if there were any plans on doing a "career board" for people involved in the ROS Community. More specifically, a place where people can find jobs with companies that do research robotics and personal robotics. Also, a place where companies can post internship, co-op, and full time job postings for members of the ROS community to see. I'm reaching the end of my academic career, and it's never too early to start looking. Originally posted by mjcarroll on ROS Answers with karma: 6414 on 2011-05-05 Post score: 2 Answer: While not ROS specific, the robotics-worldwide mailing list is a good place to find numerous postings of jobs, internships, postdoc/phd positions, and conference/journal/workshop announcements. Originally posted by fergs with karma: 13902 on 2011-05-05 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by mjcarroll on 2011-05-05: Excellent resource, I did not know about this. Thanks!
{ "domain": "robotics.stackexchange", "id": 5524, "tags": "ros" }
A binary search solution to 3Sum
Question: I tried a binary solution to 3Sum problem in LeetCode: Given an array nums of \$n\$ integers, are there elements \$a\$, \$b\$, \$c\$ in nums such that \$a + b + c = 0\$? Find all unique triplets in the array which gives the sum of zero. Note: The solution set must not contain duplicate triplets. Example: Given array nums = [-1, 0, 1, 2, -1, -4], A solution set is: [ [-1, 0, 1], [-1, -1, 2] ] My plan: divide and conquer threeSum to an iteration and a two_Sum problem. break two_Sum problem to a loop binary search The complexity is: \$O(n^2\log{n})\$. class Solution: """ Solve the problem by three module funtion threeSum two_sum bi_search """ def __init__(self): self.triplets: List[List[int]] = [] def threeSum(self, nums, target=0) -> List[List[int]]: """ :type nums: List[int] :type target: int """ nums.sort() #sort for skip duplicate and binary search if len(nums) < 3: return [] i = 0 while i < len(nums) - 2: complement = target - nums[i] self.two_sum(nums[i+1:], complement) i += 1 #increment the index while i < len(nums) -2 and nums[i] == nums[i-1]: #skip the duplicates, pass unique complement to next level. i += 1 return self.triplets def two_sum(self, nums, target): """ :type nums: List[int] :tppe target: int :rtype: List[List[int]] """ # nums = sorted(nums) #temporarily for testing. 
if len(nums) < 2: return [] i = 0 while i < len(nums) -1: complement = target - nums[i] if self.bi_search(nums[i+1:], complement) != None: # 0 - target = threeSum's fixer self.triplets.append([0-target, nums[i], complement]) i += 1 while i < len(nums) and nums[i] == nums[i-1]: i += 1 def bi_search(self, L, find) -> int: """ :type L: List[int] :type find: int """ if len(L) < 1: #terminating case return None else: mid = len(L) // 2 if find == L[mid]: return find if find > L[mid]: upper_half = L[mid+1:] return self.bi_search(upper_half, find) if find < L[mid]: lower_half = L[:mid] #mid not mid-1 return self.bi_search(lower_half, find) I ran it but got the report Status: Time Limit Exceeded Could you please give any hints to refactor? Is binary search an appropriate strategy? Answer: Your bi_search() method is recursive. It doesn’t have to be. Python does not do tail-call optimization: it won’t automatically turn the recursion into a loop. Instead of if len(L) < 1:, use a while len(L) > 0: loop, and assign to L (e.g., L = L[:mid]) instead of doing a recursive call. Better: don’t modify L at all, since slicing copies a list of many numbers multiple times, a time-consuming operation. Instead, maintain lo and hi indexes, and just update them as you search. Even better: use the built-in binary search from the bisect module.
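A minimal sketch of that last suggestion (my own replacement for bi_search, using only the standard library; it assumes the input list is already sorted):

```python
from bisect import bisect_left

def contains(sorted_nums, target):
    # Iterative O(log n) membership test: no recursion, no list copies.
    i = bisect_left(sorted_nums, target)
    return i < len(sorted_nums) and sorted_nums[i] == target

print(contains([-4, -1, -1, 0, 1, 2], 1))  # True
print(contains([-4, -1, -1, 0, 1, 2], 3))  # False
```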
{ "domain": "codereview.stackexchange", "id": 33967, "tags": "python, python-3.x, programming-challenge, time-limit-exceeded, k-sum" }
We cannot do work with internal forces, is that right?
Question: When a washing machine washes clothes in heavy mode it moves a bit. So it seems to be doing work with an internal force. Doesn't that violate the principle that no work can be done by internal forces? Answer: Friction: As sku pointed out in a comment, friction plays an important role here: it makes Earth part of the system. If the device were floating free in space, its body would also move as its internal parts move, due to conservation of linear momentum. But the device is actually sitting on Earth, on a floor which is not frictionless: so whenever static friction isn't high enough to stop the body from moving, it will move. Of course, for you to observe a net displacement after the machine cycle is over, there has to be (supposing the machine starts and finishes in the same configuration) a source of asymmetry. This is usually provided by the machine's feet and/or the floor (unevenness, inclination, ratchet effect, etc.), or by the drum not being symmetrically loaded (the clothes can clump mostly on one side of the drum) and therefore not rotating uniformly.
{ "domain": "physics.stackexchange", "id": 47456, "tags": "newtonian-mechanics, forces, work, free-body-diagram" }
Derivation of Schwarzschild metric
Question: I was studying the book of Hartle on general relativity. In chapter 9, "The Geometry Outside a Spherical Star", he suddenly introduces a metric named Schwarzschild metric and then goes on describing the geometry it produces. I did not quite get how exactly this was a metric generated by a spherical start. There must be some methodology of arriving at this metric. In non-relativistic Newtonian limit, I know how to show $g_{00} = 1+2\phi/c^2$, but for general case, I did not find anything useful. What is the systematic logical sequence behind this result? Answer: A typical method is to make the ansatz that the spherically-symmetric metric has the form $$ds^2=-A(r)dt^2+B(r)dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)$$ and determine which functions $A(r)$ and $B(r)$ make the metric satisfy the Einstein field equations, which in vacuum are $R_{\mu\nu}=0$ everywhere (except perhaps at some singularity). This ansatz lets you reduce partial differential equations to ordinary differential equations. You should try doing this yourself, by hand! Calculate the Christoffel symbols, the Riemann tensor, and the Ricci tensor in terms of $A$ and $B$ and their first and second $r$-derivatives. Then set Ricci to zero and solve for $A$ and $B$. You will have a great sense of satisfaction in solving Einstein’s field equations for a Schwarzschild black hole. It turns out that this metric describes not just black holes but also the vacuum outside an uncollapsed and non-rotating star. But it is easiest to first think about the all-vacuum black hole case.
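If you do carry the calculation through, a convenient path (a sketch of the standard reduction, with the constant left for the Newtonian limit to fix): combining the $tt$ and $rr$ components of $R_{\mu\nu}=0$ gives $A'B+AB'=0$, so $AB$ is constant, equal to $1$ if the metric is flat at infinity, i.e. $B=1/A$; the angular equation then reduces to $\frac{d}{dr}(rA)=1$. SymPy can solve that last ODE:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
A = sp.Function('A')

# Angular component of the vacuum equations after substituting B = 1/A:
#   d/dr ( r * A(r) ) = 1
sol = sp.dsolve(sp.Eq(sp.diff(r * A(r), r), 1), A(r))
print(sol)  # A(r) = 1 + C1/r; the Newtonian limit fixes C1 = -2GM/c^2
```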
{ "domain": "physics.stackexchange", "id": 70167, "tags": "homework-and-exercises, general-relativity, black-holes, metric-tensor, symmetry" }
How to prove the average velocity formula without calculus
Question: I often see the formula: $$ \overline v = \frac {v + v_0}{2} $$ in physics textbooks, to describe the average velocity for an object moving with constant acceleration. This formula is often substituted into the $x = x_0 + \overline v t$ formula in order to derive the $x = x_0 + v_0t + \frac{1}{2}at^2$ formula we are so familiar with. With a calculus background we don't even need this formula: we just keep integrating from a = constant until we get the same result. But if you don't have a calculus background, I guess using this first equation is a necessary stepping stone on the way to deriving the formula. That the average velocity is the mid-point between the initial and final velocities makes intuitive sense, I guess, but I want to be able to demonstrate to students that this is true in a more rigorous way. The only problem is that the only way I can see of proving that this formula is indeed true is to use calculus! In particular, I would use the knowledge that the area under the v/t curve is the displacement. Drawing a simple v/t graph that is linear (constant acceleration), I could show geometrically that the area under this curve gives the same formula. And there's the problem: the area-under-the-curve bit of knowledge implies a knowledge of calculus. Is there any other way to demonstrate to a non-calculus student that it is true? I've never seen any textbook attempt to justify this formula. It just seems to pop out of nowhere without any rigorous proof. Answer: My take... 
First, consider constant velocity $v$: by everyday intuition the average velocity is $\bar v=v$ (because it's the same $v$ throughout, the average cannot be anything other than $v$). Now consider acceleration from an initial velocity $v_0$. The final velocity $v$ must then be greater than $v_0$. The average $\bar v$, which is some kind of representative of the initial and final velocities, must therefore obey: $$v_0<\bar v<v$$ since if $\bar v=v_0$ or $\bar v=v$, it would not reflect any contribution from $v$ (respectively $v_0$). Since we have two velocities $v$ and $v_0$, we expect the representative $\bar v$ to draw on them equally; this suggests that $v$ and $v_0$ enter $\bar v$ in equal proportions: $$\bar v= nv+nv_0$$ Now it is easy to see that $0<n<1$, as otherwise the above inequality would be violated. Having $n=\frac{1}{2}$ is therefore a solution. (Thanks wikipedia for the better answer below!) 
{ "domain": "physics.stackexchange", "id": 26707, "tags": "kinematics, velocity, calculus" }
Could dark matter be made of black holes?
Question: I have read these questions and their answers: Dark matter black holes Could Dark Matter form black hole? Does dark matter originate from matter falling into a black hole? following up dark matter accretion in supermassive black holes None of those answered what I am asking. Dark matter: does not interact with EM waves, so it is not visible directly creates gravitational effects Black Hole: does not let EM waves escape, so it is not directly observable creates gravitational effects Based on these facts, there seems to be a similarity between dark matter and black holes. Could it be, that the regions of space (intra-galaxy clusters) where we found gravitational effects that cannot be explained with normal matter, so where we suspect dark matter to exist, that that region of space in between galaxy clusters is just a region that is full of black holes that would have the same gravitational effects and would not be visible directly? Answer: I'm citing this to answer your question (and I encourage you to read the whole text): Since black holes have mass, one hypothesis for dark matter was that it was made up of lots of massive astrophysical compact halo objects, or MACHOs. These would be compact objects that do not emit electromagnetically, such as black holes, dead (non-spinning) neutron stars, or old and cold white dwarfs (sometimes called black dwarfs). If lots of these objects existed in the right distribution in the halos of galaxies, it could explain the observed rotation curves. However, gravitational microlensing observations have mostly ruled out the possibility of MACHOs as the explanation for dark matter. The current leading dark matter candidates are known as weakly interacting massive particles (WIMPs) "The observed rotation curves" are what indicates that there is a matter that we don't see (more in the article linked above), which we call "dark matter". 
It seems that gravitational microlensing measurements have ruled out the possibility of dark matter being made of black holes. Thus the answer is a no. Though I can't find a paper that clearly announces it, so if anyone can comment I'll edit it. Edit: by chance, I just attended a talk by J.C. Bellido on Primordial Black Holes as a candidate for Dark Matter. I am no expert, so I'm unsure I got all his points right, but my understanding is the following: Stellar black holes are too massive and are effectively ruled out as a DM candidate by lensing measurements. Massive primordial black holes, formed during the radiation-dominated era, cover a large mass range and may account for dark matter. Still, more precise lensing experiments might determine in the near future whether this is indeed the case. Probably a useful reference: Massive Primordial Black Holes as Dark Matter and their detection with Gravitational Waves
{ "domain": "physics.stackexchange", "id": 48692, "tags": "general-relativity, black-holes, dark-matter" }
Locate Non Homogeneous Areas in an Image
Question: I need to choose some points on an image. These points should be chosen mostly where there are lots of color changes, transitions and variations. Which techniques can I use to determine where most color changes and transitions occur in an image? Answer: In general, the approach to take is to have a local feature which has a high value in such areas of the image. There are many ways to shape such a feature. Probably the easiest one would be the local variance. I tried 3 different approaches to this: Local variance by a filter. Local variance of a super pixel. Using the weak texture from Noise Level Estimation from a Single Image (by Masayuki Tanaka). I applied them to the Lenna image: higher values show non-homogeneous areas. It seems that for high-SNR images you can work with the local variance, but the super pixel approach seems to be more robust. In my opinion the super pixel result is the best of all 3. It was achieved by: applying super pixel based segmentation (SLIC based); calculating the mean value of each super pixel over its indices (per-label mean); calculating the variance of each super pixel using only its own pixels (per-label variance). In a more general form, you can look for features that measure homogeneity, use them to find homogeneous regions, and then select the inverse. Those are very popular in segmentation. Another approach would be using more advanced features such as: BRISK, FAST, HOG, or MSER. Then count how many of those are found within each super pixel. A super pixel with more features will be less homogeneous. The full code is available on my StackExchange Signal Processing Q75536 GitHub Repository (look at the SignalProcessing\Q75536 folder). Update: today I encountered Robust Segmentation Free Algorithm for Homogeneity Quantification in Images.
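A minimal sketch of the first (filter-based) idea in plain NumPy (the function and parameter names are my own; needs NumPy >= 1.20 for sliding_window_view):

```python
import numpy as np

def local_variance(img, size=7):
    # Variance of every size x size neighborhood ("valid" region only):
    # high values flag transitions / color changes, low values homogeneity.
    win = np.lib.stride_tricks.sliding_window_view(
        img.astype(np.float64), (size, size))
    return win.var(axis=(-2, -1))

# A dark half next to a bright half: the variance is zero away from the
# boundary and positive at the boundary.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
v = local_variance(img, size=3)
print(v[:, :4].max(), v.max())  # 0.0 on the flat part, > 0 at the edge
```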
{ "domain": "dsp.stackexchange", "id": 10250, "tags": "computer-vision, image-processing, image-segmentation" }
At 80°C a solution having pH = 7 is?
Question: When I came across this question I couldn't understand whether they are asking about the nature at 25°C of a solution which has pH = 7 at 80°C, or the nature at 80°C of a solution which has pH = 7 at 25°C. Because in the first case the answer would be acidic and in the second case the answer would be basic. Answer: The neutral $\pu{pH}$ is half the $\pu{pK}$ value for water dissociation, and of course the latter is almost exactly $14$ at $25°$C and one bar. That's where we get $\pu{pH}=7$ for a neutral solution, under ambient conditions. The open source reference [1] gives an expression for $\pu{pK_w}$ as a function of temperature and pressure. Because water autoionization is endothermic, $\pu{pK_w}$ will go down on heating and the neutral $\pu{pH}$ will follow suit. For instance, at low pressure and $75°$C $\pu{pK_w}=12.70$, so the corresponding neutral $\pu{pH}$ would be $6.35$. A solution with $\pu{pH}=7$ at this temperature would be basic; if cooled to ambient temperature the solution would autoionize less and we'd see the $\pu{pH}$ rise above $7$. Reference Andrei V. Bandura; Serguei N. Lvov (2006). "The Ionization Constant of Water over Wide Ranges of Temperature and Density". J. Phys. Chem. Ref. Data 35, 15–30. https://doi.org/10.1063/1.1928231
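The arithmetic in the answer, spelled out (the two $\pu{pK_w}$ values are taken from the text above; this is a sketch, not a general-purpose model of $\pu{pK_w}(T)$):

```python
# Neutral pH is half of pKw, and pKw falls as temperature rises
# because the autoionization of water is endothermic.
pKw_25C = 14.00   # ~25 degC, 1 bar
pKw_75C = 12.70   # low pressure, 75 degC (Bandura & Lvov)

print(pKw_25C / 2)  # 7.0   neutral pH under ambient conditions
print(pKw_75C / 2)  # 6.35  neutral pH at 75 degC: pH 7 here is basic
```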
{ "domain": "chemistry.stackexchange", "id": 17776, "tags": "acid-base, ph" }
Are holes a fundamental particle? Are they a real thing or just a construct?
Question: Some electronics textbooks seem to refer to holes as just a construct, while solid state physics textbooks seem to imply that holes are a very real thing. I understand that holes are vacancies (in p-n junctions) that move towards the cathode while electrons move towards the anode; however, is a hole a real thing, or is it just a construct? I just want to think of it as a construct but I want to hear the general consensus. For example, recombination occurs in a semiconductor when an electron fills a hole. What exactly is meant, then, when a photon is absorbed in a semiconductor (e.g. a silicon p-i-n photodetector absorbs a lower-energy visible red photon about 400 nm within its band gap energy)? When the semiconductor absorbs the photon, it cannot simply convert to an electron because charge, spin and lepton number would be violated. Is it converting into an electron and a hole, and is a hole an actual, elemental particle? Does it make sense that an electron and a hole move in opposite directions towards the anode and cathode, respectively, when radiative absorption occurs? Answer: To address your question about conservation of charge / lepton number / spin: The definition of a "hole" as the absence of an electron requires there to be many electrons around. Whenever something is the absence of something else, this means that the presence of something is the default. You can easily identify a hole in a sheet of paper, because there is paper around it, but you wouldn't point into the sky and say "Oh look! A paper hole". It's the same with electrons: in the scenario you described, you have to remember that there are many electrons in the crystal, occupying different quantum mechanical states $|\Psi_{\vec{k}}\rangle$. The photon will excite one of those electrons $|\Psi_{\vec{k_0}}\rangle$, bringing it to a different state $|\Phi\rangle$. You then have one excited electron, and one hole in the entirety of the other states. 
Your other question asked whether this notion of holes is all there is to this topic. It is not. Think about this: imagine electrons as little balls with finite diameter, at rest, arranged for example in a square pattern. Take one of those balls away. This is your hole. Now apply an electric field, which will accelerate all the balls to the left. What happens to your hole? Right, it accelerates to the left as well. This is not the behaviour you expect from semiconductor holes, which are supposed to mimic positive charges in every possible way. To explain the semiconductor holes, we have to get back to the "entirety of states $\Psi_{\vec{k}, n}$" mentioned before. Also, we need something that is called band structure, and we need Schrödinger's equation. Its solutions in a semiconductor can be denoted by $|\Psi_{\vec{k},n}\rangle$, where n labels the so-called band. For each band n, there is a class of states $|\Psi_{\vec{k},n}\rangle$ that satisfies Schrödinger's equation with energies $E(\vec{k})$. This picture shows $E(\vec{k})$ pretty well. What is important is that the classical properties of the electron (like its acceleration) behave as if it had a mass which is inversely proportional to the second derivative of $E(\vec{k})$. This is called the effective mass. The next thing is: the states are usually filled up from the lowest energy to the highest. Assume that one of the lower-energy bands in the picture is completely filled with electrons. Now one electron is excited by a photon to one of the higher-energy bands (which would be the highest parabola in the picture). The single electron in the higher-energy band sits somewhere near $\vec{k}=0$. The parabola has a positive second derivative there (which means a positive effective mass). Now look at the hole: it is at the maximum of the lower parabola, and has a negative effective mass. 
Thus applying an electric field will accelerate it in the direction you would expect a positive charge to accelerate. Long story short: in semiconductor physics, a hole isn't just the absence of an electron, but a certain state of a quantum mechanical many-body system, which doesn't behave like the absence of an electron at all.
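The sign flip of the effective mass between the bottom and the top of a band can be seen in a quick numerical check (a toy one-dimensional tight-binding band of my own choosing, not the parabolas from the picture):

```python
import numpy as np

# Toy 1D band E(k) = -2 cos(k): bottom at k = 0, top at k = +/- pi.
k = np.linspace(-np.pi, np.pi, 1001)
E = -2.0 * np.cos(k)
curvature = np.gradient(np.gradient(E, k), k)  # E''(k), ~ 1 / effective mass

print(curvature[500] > 0)  # True: band bottom, positive mass (electron-like)
print(curvature[10] < 0)   # True: near band top, negative mass (hole-like)
```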
{ "domain": "physics.stackexchange", "id": 45160, "tags": "electromagnetic-radiation, condensed-matter, photons, electrons, solid-state-physics" }