A question about star names... e.g. CW Tauri, DZ Bootis, IQ Lupi, EU Eridani
Question: Take the following examples of star names/identifications: Regulus is the common name for a star. HIP49669 is the Hipparcos ID for the star. HD87901 is the Henry Draper ID for the star. 32 Leonis is the Flamsteed ID for the star. Alpha Leonis is the Bayer ID for the star. And now for some probably unrelated stars chosen at random: why and how do stars like 'CW Tauri', 'DZ Bootis', 'IQ Lupi', 'EU Eridani' get their names, and is there a catalogue for them? Who assigns their names? Do they individually mean something, or are they randomly chosen letters? I'm not just after those stars but all stars with similar-style naming. Thanks in advance. Answer: The Bayer catalogue uses Greek letters, then lower case Latin letters and, if needed, upper case letters. But it never gets beyond "Q". In 1855, the German astronomer Argelander proposed naming a particular variable star "R" (as using upper case letters at the end of the alphabet would avoid clashing with the Bayer catalogue). In 1867 the German astronomical society agreed to name variable stars R, S, T... Z, then RR, RS, RT up to ZZ, followed by the constellation name. This system provided enough names until 1907, when they reached ZZ Cygni. It was decided to start again with AA, up to AZ, then BB to BZ, etc. That system provides 334 names and worked only until 1929; after that, when more variables than this were found in a single constellation, they switched to a "Vxyz" scheme where "xyz" is a number starting with 335, e.g. V335 Cyg. So the "2-letter designation" is particular to variable stars http://cdsarc.u-strasbg.fr/afoev/var/edenom.htx and is used in the General Catalogue of Variable Stars, but the scheme doesn't originate in that catalogue.
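The counting in the answer can be checked mechanically. Below is a sketch (pure Python) that enumerates the designations in the order the answer describes: R..Z, then RR..ZZ, then AA..QZ with the letter J never used and the second letter never preceding the first. The scheme yields 334 names, after which the numbered V### names begin.

```python
from string import ascii_uppercase

def variable_star_designations():
    """Enumerate variable-star designations in the Argelander scheme:
    R..Z, then RR..RZ, SS..SZ, ..., ZZ, then AA..AZ, BB..BZ, ..., QZ
    (the letter J is never used, and the second letter never precedes
    the first); after these, numbered V### names begin."""
    rz = "RSTUVWXYZ"
    # Single letters R through Z
    for c in rz:
        yield c
    # Double letters RR..ZZ
    for i, first in enumerate(rz):
        for second in rz[i:]:
            yield first + second
    # Double letters AA..QZ, skipping J
    letters = [c for c in ascii_uppercase if c != "J"]
    for i, first in enumerate(letters):
        if first > "Q":
            break
        for second in letters[i:]:
            yield first + second

names = list(variable_star_designations())
print(len(names), names[0], names[-1])  # → 334 R QZ
```

The 334th designation is QZ, so the first numbered variable in a crowded constellation is V335.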
{ "domain": "astronomy.stackexchange", "id": 4471, "tags": "star" }
Finding equation of motion for a two mass one spring system
Question: I'm trying to find the equations of motion for the system below. As you can see, there is a trailer connected to a car with a spring. There is a "b·v" force, where b represents a coefficient of road and internal friction of the vehicles. l is not shown here; it is the length of the spring at equilibrium. What I have done so far: $$ m_{1} \ddot{x_{1}} = u_{1} - b_{1} \dot{x_{1}} - k (x_{1} - x_{2} + l) - b_2 \dot{x_{2}} $$ $$ m_{2} \ddot{x_{2}} = - k (x_{1} - x_{2} - l) - b_2 \dot{x_{2}} $$ I don't know if I'm on the right track; any help will be appreciated. Answer: According to your formulation of the problem, there are three forces acting on the car: $F_1$, the friction force and the spring force. The magnitude of the force exerted by the spring is given by $$ |F_{spring}|=k|(x_1-x_2)-l|. $$ If $(x_1-x_2)>l$, the spring force will act on the car in the left direction according to your sketch. We can therefore write, assuming the $x$-axis points to the right, simply using Newton's law: $$ m_1\ddot{x_1}=F_1-b_1\dot{x_1}-k(x_1-x_2-l)$$ On the other hand, there are two forces acting on the trailer: the friction force and the spring force, which will be in the direction opposite to the spring force acting on the car. Hence we get $$ m_2\ddot{x_2}=-b_2\dot{x_2}+k(x_1-x_2-l)$$ and you have your equations of motion.
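A quick numerical sanity check of the answer's equations (all parameter values here are made up for illustration, with no real units): with no driving force and friction on both vehicles, the car-trailer separation should relax to the spring's rest length l.

```python
# Simulate the car-trailer-spring system from the answer with a simple
# semi-implicit Euler integrator. Parameter values are illustrative only;
# F1 = 0, so the pair should settle with x1 - x2 equal to the rest length l.
m1, m2 = 1.0, 1.0      # masses of car and trailer
k, l = 4.0, 1.0        # spring constant and rest length
b1, b2 = 0.8, 0.8      # friction coefficients
F1 = 0.0               # driving force on the car

x1, x2 = 2.5, 0.0      # initial positions (spring stretched: x1 - x2 > l)
v1, v2 = 0.0, 0.0
dt = 0.001

for _ in range(200_000):
    spring = k * (x1 - x2 - l)          # positive when stretched
    a1 = (F1 - b1 * v1 - spring) / m1   # car:     m1 x1'' = F1 - b1 x1' - k(x1 - x2 - l)
    a2 = (-b2 * v2 + spring) / m2       # trailer: m2 x2'' =    - b2 x2' + k(x1 - x2 - l)
    v1 += a1 * dt
    v2 += a2 * dt
    x1 += v1 * dt
    x2 += v2 * dt

print(round(x1 - x2, 3))  # separation settles at the rest length l
```

With the sign error in the first posted equation (the extra $-b_2\dot{x_2}$ term and the $+l$), the pair would not settle at the rest length, which is one way to see that the answer's version is the consistent one.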
{ "domain": "physics.stackexchange", "id": 27438, "tags": "homework-and-exercises, newtonian-mechanics, friction, spring" }
Add postinstall rule for deb package creation
Question: I'm trying to get a fork of the pr2-grant released for our ros_ethercat package as we find it very useful. In the pr2 repo, there were some post-install rules to set the sticky bit of pr2-grant. Is there a way to do this with catkin / bloom? Below is the debian/postinst rule:

#!/bin/sh
set -e

PKG=pr2-grant

case "$1" in
    configure)
        chown root.root /usr/bin/pr2_grant
        chmod +s /usr/bin/pr2_grant
        chown root.root /usr/bin/pr2-grant
        chmod +s /usr/bin/pr2-grant
    ;;
    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 0
    ;;
esac

#DEBHELPER#

Thanks in advance! Originally posted by Ugo on ROS Answers with karma: 1620 on 2014-09-03 Post score: 2 Answer: Bloom allows you to patch the generated Debian files at your will. So yes, you can add whatever post-install rule you would like. The comfortable thing is that bloom will automatically reapply your patch for future releases. Originally posted by Dirk Thomas with karma: 16276 on 2014-09-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Ugo on 2014-09-03: great thanks Comment by Subhi on 2021-11-03: Hello, would you please provide some detailed information about how to do that using Bloom, thanks in advance
{ "domain": "robotics.stackexchange", "id": 19278, "tags": "catkin, bloom-release" }
Using Diebold-Mariano test
Question: I've got predicted results from two different types of neural networks. Now I would like to run significance testing on both of the results to prove that they do not have equal predictive accuracy. I've learnt that the only tool in the game for this is the Diebold-Mariano test. What tool can I use to run this test (Matlab? R?) Answer: So you want to do a Diebold-Mariano test, eh? How about the dm.test function in the forecast package of R?

dm.test {forecast} - Diebold-Mariano test for predictive accuracy
Package: forecast
Version: 6.2
Description: The Diebold-Mariano test compares the forecast accuracy of two forecast methods.
Usage: dm.test(e1, e2, alternative=c("two.sided","less","greater"), h=1, power=2)

(Took me ten seconds to find)
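If you would rather stay in Python, the statistic itself is simple to compute. Below is a minimal sketch of the one-step-ahead (h=1) case only; it is my own simplified version of the test statistic, not the forecast package's implementation, which additionally corrects the variance with autocovariances for h > 1. The error sequences are made-up examples.

```python
import math

def dm_statistic(e1, e2, power=2):
    """Minimal Diebold-Mariano statistic for one-step-ahead forecasts (h=1).
    e1, e2 are forecast-error sequences from the two models; the loss
    differential is d_t = |e1_t|^power - |e2_t|^power. The statistic is
    compared against a standard normal distribution."""
    d = [abs(a) ** power - abs(b) ** power for a, b in zip(e1, e2)]
    n = len(d)
    d_bar = sum(d) / n
    gamma0 = sum((x - d_bar) ** 2 for x in d) / n   # variance of the differential
    return d_bar / math.sqrt(gamma0 / n)

# Made-up forecast errors from two hypothetical models
errs_a = [0.5, -1.2, 0.3, 0.9, -0.4, 1.1, -0.7, 0.2]
errs_b = [0.7, -0.9, 0.6, 1.4, -0.2, 0.8, -1.0, 0.5]
print(round(dm_statistic(errs_a, errs_b), 3))
```

A negative value says model A's losses were smaller on average; by construction the statistic just flips sign if you swap the two error series.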
{ "domain": "datascience.stackexchange", "id": 759, "tags": "statistics" }
Can I classify a set of documents using a classification method with a limited number of concepts?
Question: I have a set of documents and I want to classify them as true or false. My question is: do I have to take all the words in the documents and classify based on the similarity of words across these documents, or can I take only some words that I am interested in and compare those with the documents? Which one is more efficient for classifying documents and can work with an SVM? Answer: Both methods work. However, if you retain all words in documents you would essentially be working with high dimensional vectors (each term representing one dimension). Consequently, a classifier, e.g. SVM, would take more time to converge. It is thus a standard practice to reduce the term-space dimensionality by pre-processing steps such as stop-word removal, stemming, Principal Component Analysis (PCA) etc. One approach could be to analyze the document corpora by a topic modelling technique such as LDA and then retain only those words which are representative of the topics, i.e. those which have high membership values in a single topic class. Another approach (inspired by information retrieval) could be to retain the top K tf-idf terms from each document.
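The last suggestion, keeping only the top K tf-idf terms per document, can be sketched in a few lines (toy documents, plain tf-idf with no smoothing; a real pipeline would use a library such as scikit-learn):

```python
import math
from collections import Counter

def top_k_tfidf(docs, k=3):
    """For each tokenized document, keep only the k terms with the highest
    tf-idf weight: a cheap way to shrink the term space before an SVM."""
    n = len(docs)
    df = Counter()                       # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    result = []
    for doc in docs:
        tf = Counter(doc)
        weights = {t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf}
        result.append(sorted(weights, key=weights.get, reverse=True)[:k])
    return result

docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "stocks fell on the news".split(),
]
print(top_k_tfidf(docs, k=2))  # → [['sat', 'mat'], ['dog', 'chased'], ['stocks', 'fell']]
```

Note how "the", which appears in every document, gets an idf of zero and is dropped automatically, which is the same effect stop-word removal aims for.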
{ "domain": "datascience.stackexchange", "id": 90, "tags": "machine-learning, classification, text-mining" }
Reference units for CO2 concentration in the atmosphere and meaning of a slight downward tendency during the 1600s
Question: I discovered here a set of yearly historical atmospheric CO2 concentrations from year 0 - 2014. Since they haven't mentioned which measure they used, I'm not sure whether it is right to talk about ppm by volume. The name of the file contains "mole_fraction_of_carbon_dioxide_in_air_input4MIPs", which leads me to think of ppm by mole. Do you think I'm right? This leads to: is it plausible to have that downward tendency around 1600 and, if so, why? Thank you in advance for your help. Edit: I plotted the data with Excel: Answer: The file you provided does not have metadata or context, which makes it pretty much useless as it is. But, by reading the title and as you suggested, we can assume that the unit used in the sheet is mole fraction of $CO_2$ in air. Also, the values of the data fit the range of the historic $CO_2$ measurements on Earth. By definition, a mole fraction is the ratio of moles of a given component (solute) over the total mole count in a target solution. In our case, this means: $mole\:fraction\:of\:CO_2= \frac{measured\:CO_2}{total\:mole\:in\:the\:sample\:solution}$ This is well expressed and explained here in the section titled: Parts per Million by Volume (or mole) in Air. In air pollution literature, ppm applied to a gas always means parts per million by volume or by mole. These are identical for an ideal gas, and practically identical for most gases of air pollution interest at 1 atm. Another way of expressing this value is ppmv. One part per million (by volume) is equal to a volume of a given gas mixed in a million volumes of air: $1\:ppm = \frac{1\:gas\:volume}{1\:million\:air\:volume}$ at 1 atmosphere pressure. Thus a simple conversion can be applied and the equation solved, knowing the weight of a mole of $CO_2$ and the typical weight of air. To answer your question, mole and volume are interchangeable in this context. This would be my honest take, considering the limited available information.
About the downward tendency during the 1600s: this was around the time when the Little Ice Age began. There was the Maunder minimum (first low) due to a sunspot low, then a small peak, and then the Dalton minimum (second low) before the present peak. Here is an example of how low the sunspot activity was at that time (especially during the Maunder minimum) from Britannica.com. The consequences of those two events, detectable already during the Maunder minimum, were a trend of global cooling and a diminution of atmospheric $CO_2$.
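The conversion the answer alludes to is a one-liner: for an ideal gas, ppm by mole equals ppm by volume, and rescaling by the ratio of molar masses gives ppm by mass (the molar masses below are the standard values for CO2 and mean dry air).

```python
M_CO2 = 44.01   # g/mol, molar mass of CO2
M_AIR = 28.97   # g/mol, mean molar mass of dry air

def ppm_mole_to_ppm_mass(ppm_mole):
    """For an ideal gas, ppm by mole == ppm by volume; converting to a
    mass mixing ratio just rescales by the ratio of molar masses."""
    return ppm_mole * M_CO2 / M_AIR

x = 280.0                                  # a pre-industrial-era mole fraction, in ppm
print(round(ppm_mole_to_ppm_mass(x), 1))   # ppm by mass
```

So a mole fraction of 280 ppm corresponds to roughly 425 ppm by mass; the downward wiggle around 1600 keeps the same shape in either unit, since the conversion is a constant factor.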
{ "domain": "earthscience.stackexchange", "id": 1924, "tags": "atmosphere, climate-change, co2" }
Why does there have to be an electric field if there is potential difference?
Question: I read in a few books that there is always an electric field if there is an electric potential. I went through this, but the question on this page only states that there is an electric field, and it is specific only to batteries. It may be very obvious, but I don't get the exact logic. I can understand that there is an electric potential if there is a field, but the opposite doesn't seem so obvious to me. Why, if there is a potential difference, is an electric field generated? Answer: Start by noting that the electrical potential is an energy per unit charge. In an electric field $E$ the field produces a force on a charge $Q$ of: $$ F=EQ $$ so if we move the charge a distance $dr$ the work done by the field is just force times distance or: $$ W=EQ\,dr $$ The work done per unit charge is $E\,dr$, and the change in the potential is minus this (the potential drops in the direction of the field): $$ dV = -E\,dr \tag{1} $$ Now, you start by saying you understand why there must be a potential when there is an electric field, and obviously it's because if there is a field then there is a force on a charge, and there must be an associated energy change given by equation (1) when we move that charge. We get the energy change, i.e. the potential difference, by integrating equation (1): $$ \Delta V = -\int E\,dr \tag{2} $$ But we can rearrange equation (1) to give: $$ E = -\frac{dV}{dr} \tag{3} $$ This is telling us that if there is an energy change $dV$ when we move a unit charge a distance $dr$ then there must have been some force acting on the charge to do that amount of work. This force is the field $E$ times the (unit) charge, so the conclusion is that if the potential changes with distance there has to be a field $E$ present.
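A small numerical illustration of the same point, using the conventional sign $E = -dV/dx$: for a uniform field the potential drops linearly, and a finite-difference slope of the sampled potential recovers the field (values are arbitrary illustration units).

```python
# Numerically recover the field from a sampled potential, using the
# conventional sign E = -dV/dx. Example: uniform field E0 along x,
# so V(x) = -E0 * x.
E0 = 5.0
dx = 1e-4

def V(x):
    return -E0 * x

def field_at(x):
    # Central finite difference: E = -dV/dx
    return -(V(x + dx) - V(x - dx)) / (2 * dx)

print(round(field_at(2.0), 6))  # recovers E0 = 5.0
```

If instead the potential were constant in some region, the difference quotient would be zero everywhere there, which is exactly the statement that no potential gradient means no field.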
{ "domain": "physics.stackexchange", "id": 32125, "tags": "electrostatics, electric-fields, charge, potential, voltage" }
What is the best way to detect rectangular pulse for AFSK demodulation?
Question: I've got a WAV file with rectangular pulses coding zeros and ones. By specification, a 1 bit is a 320 microseconds wide rectangle, and a 0 bit is 640 microseconds. So basically the idea is to detect "zero points" where the signal crosses the time axis. When I get them, further processing is trivial. The naive approach is to iterate samples in the signal and find points where sample * sampleNext < 0. For low-noise files this approach produces rectangles quite close to the specified ones, but for high-noise ones I get all sorts of rectangles from 20 to 1400 microseconds in width. Is there a better way to detect zero points in a noisy rectangular pulse? Answer: How about you take a short time Fourier transform with window length equal to the granularity with which you want to locate the zero crossing? At a zero crossing the signal will have high frequency components. You could detect those by running a short time Fourier transform on the incoming samples to identify the location of zeros. For example: if you window 5 ms of data, take an FFT and see whether there are high frequency components of significant magnitude, this would mean that the window contains the zero crossing. If you want, you can go even narrower, like 1 ms, but you would be losing frequency resolution. So, pick an optimum resolution in frequency and required zero crossing accuracy in time, and window the incoming signal and keep taking the FFTs. You could go for rectangular windows. Even if noise is high it will be constant in spectrum (at least close to it, as is theoretically the case for white noise) and this should not be a big trouble in the FFT domain, unless it is high compared to the signal power itself. For pulse durations just 13 samples long, use the following empirical algorithm: maintain two numerical derivatives with a reference point x samples apart.
For example: consider sample number $N_o$; take the derivative between $N_o$ and $N_o - x$ and call this gradient 1, and the derivative between $N_o$ and $N_o + x$ and call this gradient 2. Take the magnitudes of these two gradients; if both are greater than a threshold (the gradient is usually close to 90 degrees near the zero crossing), then the point $N_o$ is a good approximation of the zero crossing. $x$ and the threshold can be tuned based on the pulse length.
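The two-gradient detector above can be sketched as follows (a toy square wave with band-limited 4-sample edges stands in for the WAV data; x and the threshold are made-up values that would need tuning for real noisy recordings):

```python
# Sketch of the two-gradient zero-crossing detector described above.
def detect_crossings(samples, x=2, threshold=0.5):
    crossings = []
    for n in range(x, len(samples) - x):
        g1 = (samples[n] - samples[n - x]) / x   # gradient looking back
        g2 = (samples[n + x] - samples[n]) / x   # gradient looking forward
        if abs(g1) > threshold and abs(g2) > threshold:
            crossings.append(n)
    return crossings

# Toy square wave: 16-sample plateaus joined by 4-sample linear edges,
# so the true zero crossings fall at 17.5, 37.5, 57.5, 77.5.
edge_down = [0.6, 0.2, -0.2, -0.6]
edge_up = [-v for v in edge_down]
period = [1.0] * 16 + edge_down + [-1.0] * 16 + edge_up
signal = period * 2

found = detect_crossings(signal, x=2, threshold=0.3)
print(found)  # small clusters of indices around each true crossing
```

Unlike the naive sample * sampleNext < 0 test, the detector requires a steep slope on both sides of the candidate point, so an isolated noise spike that flips sign for one sample does not fire it.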
{ "domain": "dsp.stackexchange", "id": 8666, "tags": "fft, discrete-signals, frequency-spectrum" }
How do I measure the distance that a cord (string) has moved?
Question: For a pet project, I am trying to fly a kite using my computer. I need to measure how far a cord extends from a device. I also need to somehow read out the results on my computer, so I need to connect this to my PC, preferably using something standard like USB. Since the budget is very small, it would be best if I could get it out of old home appliances or build it myself. What technology do I need to make this measurement? Answer: Sure, here are a couple of choices for you:

For the high end, you can look at a 200 counts per revolution rotary encoder like this one:
$30 Sparkfun 200 Counts Per Revolution Rotary Encoder
You'll need a microcontroller like an Arduino to count the rotations; there's some sample code to play with.

$5 Adafruit 24 Counts Per Revolution Rotary Encoder
Cheaper, but not quite as much community support in the comments.

For a little more fun, you can combine the rotary encoder with a gearmotor so you could control the kite string with one piece of equipment:
$40 Pololu 64 Counts Per Revolution Encoder + Gearmotor
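Once counts are streaming in from the microcontroller, converting them to cord pay-out is simple geometry. The spool diameter and counts-per-revolution below are made-up example values:

```python
import math

# Convert rotary-encoder counts to cord pay-out. The spool diameter and
# counts-per-revolution are illustrative values, not from any specific part.
COUNTS_PER_REV = 200        # e.g. a 200 CPR encoder
SPOOL_DIAMETER_M = 0.05     # a 5 cm spool

def cord_length_m(counts):
    revolutions = counts / COUNTS_PER_REV
    return revolutions * math.pi * SPOOL_DIAMETER_M

print(round(cord_length_m(1000), 4))  # 5 revolutions of a 5 cm spool
```

One practical caveat: if the cord winds onto the spool itself, the effective diameter grows with each layer, so running the cord over a separate fixed-diameter measuring wheel attached to the encoder gives a more consistent reading.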
{ "domain": "robotics.stackexchange", "id": 140, "tags": "wheel, usb, encoding" }
Custom strcat() with different arguments
Question: I am teaching myself C, and feel like I am just starting to get the hang of pointers and arrays (I come from Python, where everything is magic). I'm looking for reviews, especially if I'm doing anything wrong. In this code I wrote my own strcat() function, though it needs 3 args instead of the standard 2 (I couldn't figure out how to do it with just 2 without overrunning allotted memory).

#include <stdio.h>
#include <string.h>

char* cat(char *dest, char *a, char *b)
{
    int len_a = strlen(a);
    int len_b = strlen(b);

    int i;
    for (i = 0; i < len_a; i++) {
        dest[i] = a[i];
    }
    puts("");

    int j;
    for (j = 0; j < len_b; i++, j++) {
        dest[i] = b[j];
    }
    dest[i] = '\0';
    puts("FUNCTION FINISHED");
}

int main()
{
    char strA[] = "I am a small ";
    char strB[] = "cat with whiskers.";
    char strC[strlen(strA) + strlen(strB) + 1];

    printf("length A: %lu, B: %lu, C: %lu\n", strlen(strA), strlen(strB), strlen(strC));
    printf("sizeof A: %lu, B: %lu, C: %lu\n", sizeof(strA), sizeof(strB), sizeof(strC));
    printf("A: '%s'\nB: '%s'\n", strA, strB);
    cat(strC, strA, strB);
    printf("c: '%s'\n", strC);
    printf("length A: %lu, B: %lu, C: %lu\n", strlen(strA), strlen(strB), strlen(strC));
    return 0;
}

Answer:

char* cat(char *dest, char *a, char *b)

Interface critique: You should have the caller specify a maximum size for the destination buffer, and error out when there is not enough space. The mark of a good C programmer is to create interfaces which make this sort of condition unambiguous, rather than blasting away on the buffer, potentially past the allocation size.

for (i = 0; i < len_a; i++) {
    dest[i] = a[i];
}

Strictly speaking this is fine, but it is rather un-C-like. I would prefer:

while (*a)
    *dest++ = *a++;

With this you do not need to call strlen, either.
Assuming you add the parameter suggested in my interface critique (let's call the new parameter destsz), you could make sure you don't write more than this size with something like:

int cat(char *dest, size_t destsz, const char *a, const char *b)
{
    while (*a && destsz) {
        *dest++ = *a++;
        --destsz;
    }
    while (*b && destsz) {
        *dest++ = *b++;
        --destsz;
    }
    if (!destsz)
        return -1;  // ran out of buffer; error condition
    *dest = 0;      // we have space, so write NUL
    return 0;
}
{ "domain": "codereview.stackexchange", "id": 2111, "tags": "c, strings" }
Would a block fall down if you hold it against a completely frictionless surface?
Question: If I were to hold a block against a vertical wall that has static and kinetic friction coefficients of 0, would the block fall down? I think that it would fall down because the only force holding it up is the friction force acting against gravity. However, since the coefficients of friction are zero, there will be no friction force, no matter how hard I push the block against the wall. However, this doesn't make intuitive sense, because I'm sure that a block would still stay in place if you pushed it against the wall with a lot of force, no matter how frictionless the wall is. We are assuming that the wall is rigid and won't deform under the block's pressure. Answer: Yes, it would slide down, if the object that applies the pushing force also has a zero friction coefficient. This could be achieved by a pusher with a (frictionless) roller on the end, or by any pushing object that has the same frictionless attributes.
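The intuition in the question can be made quantitative: the wall can supply at most mu * N of vertical friction, and the block stays put only if that meets or exceeds its weight. With mu = 0, no amount of pushing helps.

```python
# Static-friction budget for a block pressed against a vertical wall.
# The maximum friction the wall can supply is mu * N; the block holds
# only if that is at least the block's weight m * g.
G = 9.81  # m/s^2

def block_holds(mu, normal_force_n, mass_kg):
    max_static_friction = mu * normal_force_n
    weight = mass_kg * G
    return max_static_friction >= weight

print(block_holds(0.0, 10_000.0, 1.0))   # → False: frictionless wall, any push
print(block_holds(0.5, 100.0, 1.0))      # → True: mu*N = 50 N >= 9.81 N
```

The everyday intuition that "pushing harder makes it stick" works only because mu is nonzero, so increasing N increases the available friction; the model makes clear why it fails entirely at mu = 0.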
{ "domain": "physics.stackexchange", "id": 75205, "tags": "newtonian-gravity, friction" }
Are depth-first/breadth-first considered special cases of best-first?
Question: In the literature in general, are DFS and BFS considered to be special cases of best-first search? Suppose the cost function is: $f(n) = g(n) + h(n)$ Is it correct to reason that, if $g(n) = depth(n)$, then: If $\forall n\ h(n) = 0$, we have breadth-first search. If $h(n) = i$, where $i$ is the number of nodes visited before visiting $n$, we have depth-first search. Are depth-first and breadth-first search usually considered special cases of best-first search? Answer: The answer to your question is, in both cases, no. The reason is as follows: both depth-first search and breadth-first search are uninformed search algorithms. A distinctive feature of these algorithms is that they stop once the goal node is generated. Now, if all operators are given the same cost (so that you consider them to have the same cost equal to 1), then: Breadth-first search is guaranteed to find the solution, and also to deliver an optimal one. That is, it is both complete and admissible (some people refer to this also as being optimal). Depth-first search, however, is neither complete nor admissible because it is bounded by a maximum depth. It might well happen that the goal lies beyond it and thus, it cannot guarantee to find it. Best-first search, on the other hand, uses an OPEN list where nodes are sorted in increasing order of $f(n)=g(n)+h(n)$ where: $g(n)$ is the cost of the current path from the start node to node $n$, and $h(n)$ is an estimate of the remaining effort to reach the goal from $n$. As such, $f(n)$ stands for an estimate of the best path from the start node to the goal that goes through node $n$. However, in contrast with the case of uninformed search algorithms, best-first search algorithms halt only when the solution is about to be expanded! This makes a lot of sense.
Under a number of assumptions (such as admissibility of the heuristic function), sorting nodes in ascending order guarantees that the true cost is above the estimate given by $f(n)$, so that only when expanding a goal can you ensure that all the other nodes stand for potential solutions with a cost greater than or equal to the cost of the current solution. Therefore, from your definitions of $f(n)$ (and assuming unit costs): Making $f(n)=\mathrm{depth}(n)$ would result in an algorithm that expands nodes in the same order as a breadth-first search algorithm (up to tie-breaking). In addition, it will expand more nodes than breadth-first search, since after generating the goal it has to expand all nodes in the current level before expanding the goal in the next level. Your second choice, $f(n)=\mathrm{depth}(n) + i$, with $i$ being "the number of nodes visited before visiting", is rather confusing to me. I do not know what you might mean by "visited before visiting". In spite of it, however, note that a best-first search algorithm would proceed further after generating the goal node much the same. Your question makes a lot of sense. In case you want to access additional material related to it, please refer to: Robert C. Holte, "Common Misconceptions Concerning Heuristic Search", Symposium on Combinatorial Search (SoCS) 2010, pages 46-51. See, specifically, the bottom of the left column on page 49. Hope this helps.
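The "same expansion order up to tie-breaking" claim is easy to check on a toy tree: a best-first search with f(n) = depth(n) and an insertion-order tie-breaker expands exactly the BFS order.

```python
import heapq
from collections import deque

# Best-first search with f(n) = depth(n) expands nodes in the same order
# as breadth-first search (up to tie-breaking). A small check on a toy tree.
tree = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def bfs_order(root):
    order, frontier = [], deque([root])
    while frontier:
        n = frontier.popleft()
        order.append(n)
        frontier.extend(tree[n])
    return order

def best_first_order(root):
    order, counter = [], 0
    frontier = [(0, counter, root)]          # (f = depth, tie-break, node)
    while frontier:
        depth, _, n = heapq.heappop(frontier)
        order.append(n)
        for child in tree[n]:
            counter += 1
            heapq.heappush(frontier, (depth + 1, counter, child))
    return order

print(bfs_order("A"))
print(best_first_order("A"))  # same expansion order
```

The behavioral difference the answer stresses is not visible here because there is no goal test: BFS would stop as soon as the goal is generated, while best-first search keeps going until the goal is about to be expanded.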
{ "domain": "cs.stackexchange", "id": 7344, "tags": "algorithms, graphs, search-algorithms, heuristics" }
Valid approach to implementing a band pass filter
Question: Suppose I want to perform spectral shaping on a signal, i.e., modify the gains in a band of frequencies. Would there be any difference if I do that using the following two methods? - Use a band pass filter to apply the gain in the band. - Calculate the bin numbers corresponding to the band, and apply the gain in those bins. I'm interested to know if the second method is a valid approach, and if not, why it is discouraged. Answer: The second method is the "Frequency Sampling" method of filter design, where the filter coefficients (the impulse response) are determined using the Inverse Discrete Fourier Transform of the sampled frequency response desired. It is extremely simple to implement, but for most cases it is a poor choice, as the result will be an exact match at the frequency bins selected but excessive ripple in between (compared to other filter design algorithms such as Windowing, or Parks-McClellan and Least-Squares). Therefore much longer filter lengths are required compared to those optimized approaches. The Frequency Sampling approach can be compared to (preferred) windowing design approaches in that for windowing approaches, the desired impulse response is determined from the Inverse Discrete Time Fourier Transform (IDTFT), in contrast to the Inverse Discrete Fourier Transform (IDFT) for Frequency Sampling. The IDTFT has a time domain that extends to infinity, while the IDFT is time limited, and thus an aliasing of the coefficient values results for filters with long impulse responses (time aliasing when the desired frequency response is under-sampled). Below compares the DTFT and DFT for an arbitrary waveform of the same time duration for non-zero values, but shows in this case that the DTFT has specific zeros assumed for time extending to infinity (and is therefore aperiodic), and results in a continuous function in frequency (we can approximate the DTFT by zero-padding the FFT, for example).
In contrast, the DFT is time limited to N samples; I show it as periodic, as we could get the same result if we continued time to infinity but repeated the waveform in each time slot (similar to the Fourier Series Expansion, where the time domain waveform from 0 to T is reconstructed from integer multiples of frequency harmonics; if you continued the waveform beyond the time 0 to T, the waveform would be periodic in time). The key point is that the DFT is sampled in frequency while the DTFT is continuous. Sampling in one domain causes aliasing in the other when the number of samples over the duration is not sufficient. This is what drives a longer filter length than what we could achieve with the other approaches to meet similar performance. For additional information on windowing vs frequency sampling, see this post: Difference between frequency sampling and windowing method. My go-to approach for designing bandpass digital filter prototypes would be the least-squares (firls in Matlab/Octave, scipy.signal.firls in Python) and Parks-McClellan (firpm in Matlab, remez in Octave, scipy.signal.remez in Python) algorithms. Matt L had great comments at this post, FIR Filter design: Window vs Parks-McClellan and Least-Squares, on the continued merits of windowing design approaches when resources for filter computation are limited (such as possible beamforming applications where the coefficients may need to be computed on the fly).
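The "exact at the sampled bins" behavior is easy to verify numerically. The sketch below (a toy N=16 zero-phase lowpass specification, not a usable filter design) takes the inverse DFT of a sampled desired response and confirms that the forward DFT of the resulting taps reproduces the specification exactly at every bin; the ripple only appears between the bins.

```python
import cmath

# Frequency-sampling design in miniature: inverse DFT of a sampled desired
# magnitude response gives the FIR taps; the taps then match the
# specification exactly at the sampled bins.
N = 16
# Crude lowpass: pass bins 0-2 and the conjugate-symmetric bins 14-15.
desired = [1.0] * 3 + [0.0] * (N - 5) + [1.0] * 2

# Inverse DFT gives the (zero-phase) impulse response / filter coefficients.
h = [sum(desired[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
     for n in range(N)]

# Forward DFT of the taps reproduces the desired response at every bin.
H = [sum(h[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)) for k in range(N)]
print([round(abs(Hk), 6) for Hk in H])  # matches the desired values bin by bin
```

Evaluating the taps' DTFT between the bins (for example by zero-padding before the FFT) is where the excessive ripple of this method shows up, which is why the optimized design routines mentioned above are usually preferred.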
{ "domain": "dsp.stackexchange", "id": 7467, "tags": "bandpass" }
Help in understanding the implementation/application of scope trees
Question: I’m learning (self-taught) about language implementation and compiler design, and I’m implementing a toy language to cement the concepts in my mind. However, I’m having trouble understanding how scope trees are meant to be used. In my toy language, I’m using the Visitor pattern to traverse the syntax tree as a simple interpreter. I assign a pointer to a given symbol table to a member of the syntax node to make the various symbols available at “run time”. The symbol tables are hash tables on a stack, and I resolve symbols defined in a parent scope by inspecting the stack. But the literature I’ve read (specifically Language Implementation Patterns by Terence Parr) talks about a scope tree as a distinct tree structure, like the syntax tree, and traversing the scope tree. Does a scope tree stand separately and alongside the syntax tree, and if so how does one track the current position in the scope tree while traversing the syntax tree? Is it simply a global pointer to a scope node/symbol table that’s adjusted whenever a scope-affecting node is encountered in the syntax tree? Or, Is it okay for the scope tree’s tree structure to be implicitly defined by piggy-backing the syntax tree as I have done? I feel I am polluting the syntax nodes definitions by adding a symbol table member. Answer: My opinion on this is that the scope tree is a derived entity from the syntax tree, therefore as you perform your semantic analysis by walking the syntax tree you create a temporary scope tree on the fly and by doing that, the current position is automatically tracked. The point is that both the scope tree and the syntax tree have to be married together somehow. As this would allow you to track the position of the symbols in unison. So as you traverse a method called a() on the syntax tree you statically know where to start in the symbol tree. 
Note: you do not require a scope tree; you could use a stack, but the advantage of the tree is that the storage of the scope is persistent. Therefore, for simple validations a stack could be used to determine whether or not a variable is out of scope, but for more advanced analysis problems a scope tree, which behaves like a multidimensional stack, would be required. For example:

int a() {
    int b;
    int c;
    {
        int b;
        int c;
    }
}

int b() {
    int a;
}

So whenever you go into a function such as a you can create a function node, and every time you see a declaration you can create a symbol and add it to the function's scope (push), and when you see a use of a symbol you can resolve it by traversing up the tree. Language Implementation Patterns by Terence Parr has a complete example of this and is a very useful book.
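A minimal sketch of the idea (my own illustration, not Parr's code): each scope holds a symbol table and a parent link, so resolution walks up the tree and shadowing falls out naturally.

```python
# A minimal scope tree: each scope keeps a symbol table and a parent link;
# resolve() walks up the tree, so inner declarations shadow outer ones.
class Scope:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.symbols = {}
        self.children = []
        if parent:
            parent.children.append(self)

    def define(self, name, info):
        self.symbols[name] = info

    def resolve(self, name):
        scope = self
        while scope is not None:
            if name in scope.symbols:
                return scope.name, scope.symbols[name]
            scope = scope.parent
        raise NameError(name)

# Mirror the C snippet above: a() has a nested block redeclaring b and c.
globals_ = Scope("global")
fn_a = Scope("a", globals_)
fn_a.define("b", "int")
fn_a.define("c", "int")
block = Scope("a/block", fn_a)
block.define("b", "int")          # shadows a's b

print(block.resolve("b")[0])      # found in the inner block
print(block.resolve("c")[0])      # walks up to function a
```

Because each scope keeps its parent (and children) after the walk, the structure is persistent in exactly the sense described above: later analysis passes can re-enter any scope without replaying the stack pushes and pops.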
{ "domain": "cs.stackexchange", "id": 4907, "tags": "data-structures, programming-languages, compilers" }
System curve for use in determining pump operating point
Question: What is meant by a "system curve" that is used to determine the operating point of a pump? I know what a "pump curve" is, but I often hear about a "system curve" being generated so that the pump operating point can be determined by where the pump curve and system curve intersect. Answer: Imagine if you were pumping through a single pipe. That's all the pump does: take in water from a source and pump it along a very long length of pipe. Since this is just plain piping, the friction loss isn't difficult to determine, but you need to know the velocity to calculate the Reynolds number. Without it, you can't use a Moody Chart to calculate a friction factor to find out the loss. But the likely range of the flow rate can be estimated, and from that a friction factor can be guessed. With a friction factor estimated but the flow rate unknown, you finally settle on the pressure loss across this long length of pipe: $$\Delta P =f_D \frac{\rho V^2}{2}\frac{L}{D} = f_D \frac{8\rho Q^2}{\pi^2}\frac{L}{D^5}$$ where $\Delta P$ is the pressure loss, Q is the flow rate, $\rho$ is the density, L is pipe length, D is diameter, and $f_D$ is the friction factor. Let's take cast iron 100 mm pipe, highly turbulent regime, running for 10 km. Plugging all the numbers in this theoretical situation, we come to (hypothetically): $$\Delta P = 0.304 Q^2 \frac{kPa}{(\frac{L}{s})^2} $$ Now we have a formula to give an idea for pump sizing! More importantly, we have a curve. Plugging in any arbitrary value of Q (in L/s) gives you the pressure loss across the pipe in kPa. This curve is the system curve, and it naturally looks like a parabola. This is, in general, true for all fluid systems without any type of controls response (acting under natural behavior). You can plot this curve on top of the pump curve, and find out where the system will reach equilibrium. Note, it isn't hard with such a simple system to get a lot of liquid out.
More complex systems have a lot more intricate mechanics, but the general parabola rule still applies. Thus, most people work through the complex mechanics of their system to simplify it into a single point. In our case, operating at 25 L/s would mean a pump that has to deliver 190 kPa. This is the single operating point. Typically, many engineers will slightly increase these values, so they will always find an operating point that's safe. In this case, going to 30 L/s would mean about 275 kPa. Thus, the only parabola that goes through the point (30 L/s, 275 kPa) and the origin (there is only one parabola that does this) would be the system curve.
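The intersection the answer describes can be computed directly. The system curve below is the answer's dP = 0.304 Q^2; the pump curve is entirely made up for illustration (a hypothetical 300 kPa shutoff head falling off with flow), and the crossing is found by bisection:

```python
# Find the operating point where a (made-up) pump curve meets the
# system curve dP = 0.304 * Q^2 from the answer (kPa, L/s).
def system_dp(q):                 # kPa needed to push q L/s through the pipe
    return 0.304 * q * q

def pump_dp(q):                   # hypothetical pump curve, illustrative only
    return 300.0 - 0.12 * q * q   # 300 kPa shutoff head, falling with flow

lo, hi = 0.0, 60.0                # pump exceeds system at lo, not at hi
for _ in range(60):               # bisect on pump_dp - system_dp
    mid = (lo + hi) / 2
    if pump_dp(mid) > system_dp(mid):
        lo = mid
    else:
        hi = mid

q_op = (lo + hi) / 2
print(round(system_dp(25.0), 1))              # the answer's example: 190.0 kPa
print(round(q_op, 2), round(system_dp(q_op), 1))  # operating point for this pump
```

Checking the answer's worked numbers: 0.304 x 25^2 = 190 kPa exactly, and 0.304 x 30^2 = 273.6 kPa, which the answer rounds to about 275 kPa.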
{ "domain": "engineering.stackexchange", "id": 1631, "tags": "mechanical-engineering, fluid-mechanics, pumps" }
Filtering bases based on phred qualities with pysam
Question: Is there a way to filter bases in BAM files based on phred qualities through Python's pysam? I have code here that:
1. Takes the nucleobases per position from a BAM file using pysam's pileup function
2. Saves them in ReverseList and ForwardList based on strand (i.e. forward and reverse)
I want to reject those bases that have a phred quality below 25, so that they are not stored in the ForwardList and ReverseList lists and not used for further analysis.

samfile = pysam.Samfile(filename, "rb")

ReverseList = [''] * lenref
ForwardList = [''] * lenref

for pileupcolumn in samfile.pileup():
    for pileupread in pileupcolumn.pileups:
        if pileupread.alignment.mapping_quality <= 15:
            continue
        if not pileupread.is_del and not pileupread.is_refskip:
            if pileupread.alignment.is_reverse:  # negative strand
                ReverseList[pileupcolumn.pos] += pileupread.alignment.query_sequence[pileupread.query_position]
            else:
                ForwardList[pileupcolumn.pos] += pileupread.alignment.query_sequence[pileupread.query_position]

samfile.close()

Where lenref = 16569 (the length of the mitochondrial genome) and filename is the name of the BAM file. I want to filter based on the phred qualities of bases. Answer: The PileupRead object has a query_position attribute, which you can use to index the aligned read's query_qualities for this:

for pileupcolumn in samfile.pileup():
    for pileupread in pileupcolumn.pileups:
        if pileupread.alignment.mapping_quality <= 15:
            continue
        if not pileupread.is_del and not pileupread.is_refskip:
            if pileupread.alignment.query_qualities[pileupread.query_position] < 10:
                # Skip entries with base phred scores < 10
                continue
            if pileupread.alignment.is_reverse:  # negative strand
                ReverseList[pileupcolumn.pos] += pileupread.alignment.query_sequence[pileupread.query_position]
            else:
                ForwardList[pileupcolumn.pos] += pileupread.alignment.query_sequence[pileupread.query_position]

Note the added quality check, which implements a filter with a threshold of 10.
{ "domain": "bioinformatics.stackexchange", "id": 165, "tags": "sam, python, pysam" }
rgbdslam crashes on repeated "send model" request
Question: I would like to repeatedly query RGBDSLAM for pose estimates (the map is less important but would be nice as well). However, if I make repeated calls to "send model" or "save trajectory", either in the GUI or using service calls, I get the following error message:

[rgbdslam-7] process has died [pid 20770, exit code -11]. log files: /home/james/.ros/log/6dd532b8-573d-11e1-8050-002608dcf037/rgbdslam-7*.log

I have tried different request intervals, but it still happens even when I wait more than 10 seconds between requests, and ideally I would like much more frequent tf information than this. I am using Ubuntu 11.10 and ROS Electric on a 2.8 GHz Core 2 Duo with 8 GB of RAM. Could the speed of my computer be the problem? Is there anything else I can try? Thanks in advance, James

Originally posted by jwrobbo on ROS Answers with karma: 258 on 2012-02-14
Post score: 0

Answer: Hi James, sounds weird. Have you looked into the logfile mentioned in the error message? Otherwise you would either need to run it in a debugger or fiddle a little with the code, as I cannot reproduce the problem. Concerning the speed, you can have a look at the function "void GraphManager::sendAllClouds()" in graph_manager.cpp around line 1350. There the frequency of sending the transforms and clouds is throttled to 5 Hz in the line

ros::Rate r(5); //slow down a bit, to allow for transmitting to and processing in other nodes

You can set that to any value. You can omit sending of the cloud by removing (or commenting out) the following line:

graph_[i]->publish("/openni_rgb_optical_frame", now, batch_cloud_pub_);

Are you only interested in the current pose estimate or in the whole trajectory? Edit: I added the feature to send the current pose estimate after each processed frame. That means you can just listen to the transformation from /map to /openni_camera and use the sending of the clouds only if you want to update the map. This doesn't fix the crash you mentioned though.
Please download it here and report back whether that works for you. You need to replace the rgbdslam folder with the folder rgbdslam_dev-f4835f9e1e39 (i.e. rename the latter to the former) and rebuild. Originally posted by Felix Endres with karma: 6468 on 2012-02-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8237, "tags": "slam, navigation" }
Using gas laws to find density
Question: An unknown gas at $\mathrm{63.1\ ^\circ C}$ and $\mathrm{1.05\ atm}$ has a molar mass of $\mathrm{16.04\ g/mol}$. What is the density of the gas? I know I need to use the $pV=nRT$ equation, but it gives me g/mol and not moles. I'm not really sure what to do here. $$1.05\ V = n\ 0.08206 \cdot (63.1+273.15)$$ Ok, that's as far as I got. I have no idea how I would even go about converting g/mol back to moles so I can solve for V, and after that I would also need to figure out a way to find grams so I can divide g/V to get the density. Answer: Okay, to find the density of the gas, you need to know its mass and its volume. To do this, let's say we have one mole of this substance, so its mass will be $16.04~\mathrm g$. Now all we need to do is find its volume. This can be done by rearranging the ideal gas equation: $$V = \frac{nRT}{p}$$ Now all we have to do is plug in the values. However, you have to be careful that you use the right units. This is a common mistake, especially when using the ideal gas law; it is good to see that you have used the correct units: $$\begin{align}V &= \mathrm{\frac{1~mol\times0.08206~L~ atm~ mol^{-1}~K^{-1}\times336.25~K}{1.05~atm} }\\ &=\mathrm{26.28~L}\end{align}$$ As you can see, all the units cancel nicely so that you get the final answer in litres, which is the correct unit for volume. Now, the density can be calculated: $$\rho =\mathrm{\frac{16.04~g}{26.28~L} = 0.61~g~L^{-1}}$$
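The arithmetic can be double-checked in a few lines; this sketch just replays the answer's numbers, with R in L atm/(mol K):

```python
R = 0.08206          # gas constant, L atm / (mol K)
M = 16.04            # g/mol, molar mass of the unknown gas
T = 63.1 + 273.15    # temperature in kelvin
p = 1.05             # pressure in atm

V = 1 * R * T / p    # volume of one mole, from pV = nRT (litres)
rho = M / V          # density in g/L

# The mole count cancels out, so equivalently: rho = p * M / (R * T)
```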
{ "domain": "chemistry.stackexchange", "id": 4255, "tags": "homework, gas-laws, density" }
Boost shared pointer publishing - zero copy
Question: Hello Community, I have read that intra-process communication can be done by publishing a message using a boost shared pointer to the message. Then there is zero copy. Apparently this is what makes nodelets so powerful. My question now: is this zero copy only available to nodelets that publish using pointers? Is it also true between different nodes? Also, is there a requirement that the nodelets be launched from the same nodelet manager for the zero copy to apply? Is there a way to check/test whether zero copy is happening in my implementation, or whether my code is using network resources?

Originally posted by ROSfc on ROS Answers with karma: 54 on 2016-11-18
Post score: 1

Answer: 1. My question now: is this zero copy only available to nodelets that publish using pointers? Is it also true between different nodes? No, only nodelets can exchange pointers*, as they share the same address space (they're basically nodes mapped onto threads instead of processes) (* this is not entirely true: with a suitable transport (such as ethzasl_message_transport) zero-copy msg exchange is also possible between nodes, but that is not out-of-the-box supported and comes with some constraints) 2. Also, is there a requirement that the nodelets be launched from the same nodelet manager for the zero copy to apply? Yes, again because they need to share an address space. Different managers will each have their own address spaces. 3. Is there a way to check/test whether zero copy is happening in my implementation, or whether my code is using network resources? I know of no other way than checking resource usage (ie: CPU / memory). But in most contexts where nodelets make sense, the increase in performance is so noticeable that you'll know when it's not working (this is obviously not a good way to check, but is at least something).
Originally posted by gvdhoorn with karma: 86574 on 2016-11-18
This answer was ACCEPTED on the original site
Post score: 1

Original comments
Comment by jespestana on 2019-02-13: Thanks for the in-depth answer! I have a follow-up comment/question about question 3: would it not be possible to check that the zero copy has taken place by checking the addresses the message pointers point to for equality (on the side of the publisher and each subscriber)?
Comment by gvdhoorn on 2019-02-13: Yes, theoretically this should be possible. I'm not entirely sure (any more, it's been a while) whether there is no place in the control flow between publisher and subscriber (in a nodelet) that potentially alters the address. But in principle the pointers should be equal.
Comment by jespestana on 2019-02-13: I think so too. Thanks. Actually, I just checked it. And yes, you get the same address on both sides of the message transport (subscriber and publisher).
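The address check discussed in the comments can be illustrated without ROS at all. The sketch below is only an analogy in Python, where object identity plays the role of the shared_ptr address: intra-process "subscribers" all see the very same object, while crossing a process boundary forces a copy.

```python
import copy

class FakeCloud:
    """Stand-in for a large message type (e.g. a point cloud)."""
    def __init__(self, points):
        self.points = points

def publish_intraprocess(msg, callbacks):
    # Nodelet-style delivery: every subscriber callback receives a
    # reference to the same object; nothing is serialized or copied.
    for cb in callbacks:
        cb(msg)

received = []
publish_intraprocess(FakeCloud([1, 2, 3]), [received.append, received.append])

same_object = received[0] is received[1]           # zero copy: identical object
serialized = copy.deepcopy(received[0])            # what a process boundary costs
crossed_boundary_same = serialized is received[0]  # now a distinct copy
```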
{ "domain": "robotics.stackexchange", "id": 26277, "tags": "ros, roscpp, publisher" }
Example of a superword w such that v^2 isn't its subword
Question: What is an example of an infinite word (superword) w such that for every nonempty word v in L = {1,2,3}*, v^2 is not a subword of w? For example, if w = 123123123...123 and v = 123, then v^2 = 123123 is a subword of w. I can't seem to find a superword that fits the requirement. Answer: There is a dedicated page on "square-free words" on Wikipedia here, with references. As you can see, there is an example of a square-free word on a three-letter alphabet.
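Thue's classical construction makes this concrete: the fixed point of the morphism 1 -> 123, 2 -> 13, 3 -> 2 is known to be square-free. The sketch below generates a finite prefix of that word and brute-force checks that no square v^2 occurs in it:

```python
def ternary_thue_prefix(iterations=6):
    """Iterate the square-free morphism 1->123, 2->13, 3->2 starting from '1'."""
    morphism = {"1": "123", "2": "13", "3": "2"}
    w = "1"
    for _ in range(iterations):
        w = "".join(morphism[c] for c in w)
    return w

def has_square(w):
    """Brute-force: is there a nonempty v such that vv is a subword of w?"""
    n = len(w)
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if w[i:i + half] == w[i + half:i + 2 * half]:
                return True
    return False

prefix = ternary_thue_prefix()   # a 96-character square-free prefix
```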
{ "domain": "cs.stackexchange", "id": 3618, "tags": "formal-languages" }
ROS 2 Admin Privileges on Windows 10
Question: I'm wondering why Windows 10 needs administrator privileges to build and run ROS 2, and if this could be changed with a patch? To build ROS 2 you will need a Visual Studio Command Prompt (“x64 Native Tools Command Prompt for VS 2019”) running as Administrator. This is detailed in the build settings, but it doesn't link to an explanation of why. Note that the first time you run any executable you will have to allow access to the network through a Windows Firewall popup. This, also from the build guide, is my best guess as to why, but I'm asking because I'm not sure. Originally posted by allenh1 on ROS Answers with karma: 3055 on 2019-08-12 Post score: 0 Original comments Comment by Dirk Thomas on 2019-08-12: Is this the explanation you are looking for: https://index.ros.org/doc/ros2/Installation/Dashing/Windows-Development-Setup/#patch-exe-opens-a-new-command-window-and-asks-for-administrator Answer: There is at least one issue: patch.exe seems to require running in administrator mode. There is more context in this rviz_ogre_vendor ticket and a note about it on the Windows development setup page. Originally posted by sloretz with karma: 3061 on 2019-08-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 33609, "tags": "ros, ros2, windows10" }
OpenCV difference between roslaunch and rosrun
Question: OS: Ubuntu 16.04.2 LTS Xenial ROS: Kinetic. OpenCV: 3.2.0-dev Python 2.7.12

ROS_ROOT=/opt/ros/kinetic/share/ros
ROS_PACKAGE_PATH=/home/odroid/ros_ws/src:/opt/ros/kinetic/share
ROS_MASTER_URI=http://localhost:11311
ROSLISP_PACKAGE_DIRECTORIES=/home/odroid/ros_ws/devel/share/common-lisp
ROS_DISTRO=kinetic
ROS_ETC_DIR=/opt/ros/kinetic/etc/ros

If I run a python script through a launch file, OpenCV cannot open a camera, but running it manually OpenCV works fine. Here's the bare-bones code snippet where it works/fails.

print cv2.__version__
camera=cv2.VideoCapture("/dev/video8")
print camera
if not camera.isOpened():
    camera=None
    print "No camera"
return camera

Here's the line from the launch file:

<node name="whisker_sensor" pkg="ros_whisker_sensor" type="whisker_sensor.py" respawn="false" output="screen"/>

Here's the output of the screen dump from that launch:

….
process[whisker_sensor-10]: started with pid [32257]
3.2.0-dev
<VideoCapture 0xb6d487b0>
No camera
….

Notice it sets up a camera object, but it fails to open. However, if I kill the specific node and then re-run it manually with a rosrun command like this:

odroid@fish01:~$ rosnode kill /whisker_sensor ; rosrun ros_whisker_sensor whisker_sensor.py
killing /whisker_sensor
killed
3.2.0-dev
<VideoCapture 0xb6d017b0>
136 0.723058982284
136 0.723058982284
134 0.731470868094

it passes the isOpened() check and starts dumping the data I'm looking for from a processed image. Does anyone have any thoughts as to why camera.isOpened() fails under a roslaunch but works under a rosrun? Is there any way I can get more informative information out of the camera function? I have a feeling it's a path or library issue, but I can't figure out what's different between the launch file and run approaches.

Originally posted by graeme on ROS Answers with karma: 46 on 2018-01-25
Post score: 1

Original comments
Comment by rdelgadov on 2018-01-26: If it is a usb camera, you could try the usb_cam package.
Answer: I ended up installing usb_cam as suggested (I'm going to use it eventually), but in the process discovered the solution from a problem someone else had with usb_cam: https://answers.ros.org/question/219886/libuvc-launch-get-permission-denied-opening-usb-error/ The camera code needs unrestricted access to the /dev/video## ports. By doing this: sudo chmod a+rwx /dev/video* prior to starting the ROS stack, my ROS code has access to the camera. I added this chmod to my startup script and I'm all set. Originally posted by graeme with karma: 46 on 2018-01-31 This answer was ACCEPTED on the original site Post score: 0
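A chmod at startup works but has to be repeated whenever the device nodes are recreated. A more permanent alternative is a udev rule; the snippet below is a typical sketch (the filename is hypothetical, and you may prefer group-based permissions over world-writable ones):

```
# /etc/udev/rules.d/99-webcam.rules
# Give all users read/write access to V4L capture devices.
SUBSYSTEM=="video4linux", MODE="0666"
```

After adding the rule, reload with sudo udevadm control --reload-rules && sudo udevadm trigger (or replug the camera).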
{ "domain": "robotics.stackexchange", "id": 29863, "tags": "ros, roslaunch, ros-kinetic, rosrun" }
What is the electronic configuration of the scandium ion?
Question: The electronic configuration of scandium is: $\ce{[Ar] 4s^2 3d^1}$ What is the electronic configuration of the scandium ion? Also, about the electronic configuration of cobalt, which is $\ce{[Ar] 4s^2 3d^7}$: what is the electronic configuration of the cobalt ion? Is it $\ce{[Ar] 4s^2 3d^6}$ or $\ce{[Ar] 4s^1 3d^7}$? And why? Thanks for your help. Answer: The electronic configuration of scandium is $\ce{[Ar] 3d^1 4s^2}$. The electrons with the highest energy will be removed, and since the electronic configuration of argon is very stable, scandium easily forms $\ce{Sc^{3+}}$. For cobalt it is a bit more difficult: with the configuration written as you have it, you cannot find the correct answer. For every element, first build the configuration using the Klechkovsky rule, and then order all orbitals by increasing principal quantum number. So the cobalt configuration is $\ce{[Ar] 3d^7 4s^2}$. Then if you remove two electrons from the $\ce{4s}$ orbital, you have a stable configuration for the $\ce{Co(II)}$ ion. You cannot remove them from the $\ce{3d}$ orbital (even if $\ce{[Ar] 3d^5 4s^2}$ looks stable because the $\ce{3d}$ orbital is half full and thus the spin is maximal) because its energy is lower than the energy of the $\ce{4s}$ orbital. You can find cobalt at different oxidation states from $\ce{+I}$ to $\ce{+IV}$, but it depends on what you have in your solution, or in your gas if you have a gas. NB: Remember that the configurations of the elements are given in the gas phase, so for example the more stable configuration of the copper ion is $\ce{Cu^+}$ and not $\ce{Cu^{2+}}$; $\ce{Cu^{2+}}$ is stable in water, so the answer may depend on the problem you have. Explanation for copper: Stability in aqueous conditions depends on the hydration energy of the ions when they bond to the water molecules (an exothermic process).
The $\ce{Cu^{2+}}$ ion has a greater charge density than the $\ce{Cu^+}$ ion and so forms much stronger bonds releasing more energy. The extra energy needed for the second ionisation of the copper is more than compensated for by the hydration, so much so that the $\ce{Cu^+}$ ion loses an electron to become $\ce{Cu^{2+}}$ which can then release this hydration energy. I hope it can help you !
{ "domain": "chemistry.stackexchange", "id": 5206, "tags": "electronic-configuration" }
Output results of a search for usage of a pardot-form
Question: I'm always down to learn better ways of doing things, and I wanted to see if I can get input from the community on whether there is a way to improve this function:

function pardot_dashboard_query() {
    $args = [
        's' => '<!-- wp:acf/pardot-form ',
        'sentence' => 1,
        'post_type' => [ 'post', 'page' ],
    ];
    $pardot_posts = get_posts($args);
    if (!$pardot_posts) {
        echo 'There are no active Pardot Forms.';
        return;
    }
    echo '<p>The Pardot Form is active on the following pages/posts:</p>'; ?>
    <ul>
    <?php foreach ($pardot_posts as $post): ?>
        <li><a href="<?= $post->guid ?>"><?= $post->post_title ?: 'No title available' ?><?= ' (' . ucfirst($post->post_type) . ')' ?></a></li>
    <?php endforeach; ?>
    </ul>
    <?php
}

If there are other means of output and/or ways to shrink it down, all help will be appreciated! Answer: I generally don't advocate for printing any content from within a function, but I'll not dwell on this because it is a somewhat common practice for WordPressers. Name your function to clearly express what it does. Place greater importance on clarity than brevity. declare the return value as void to help you and your IDE to better understand the function's behavior. avoid single-use variable declarations unless they are valuably self-documenting. I don't see you using excessive spacing like many WP devs, so kudos for that. I prefer curly brace syntax for language construct looping. I don't personally find the verbose loop end to be valuable. use printf() and sprintf() to elegantly marry variables and expressions into a template string. This is easier to read than a bunch of concatenation and interpolation. I prefer not to jump in and out of php tags, so I'll demonstrate staying inside of PHP the whole time. I'm not sold on that href value being legit. Is that value actually a valid url?
Code:

function printPardotDashboardItems(): void
{
    $pardot_posts = get_posts([
        's' => '<!-- wp:acf/pardot-form ',
        'sentence' => 1,
        'post_type' => [
            'post',
            'page'
        ],
    ]);
    if (!$pardot_posts) {
        echo '<p>There are no active Pardot Forms.</p>';
    } else {
        $items = [];
        foreach ($pardot_posts as $post) {
            $items[] = sprintf(
                '<li><a href="%s">%s (%s)</a></li>',
                $post->guid,
                $post->post_title ?: 'No title available',
                ucfirst($post->post_type)
            );
        }
        printf(
            '<p>The Pardot Form is active on the following pages/posts:</p><ul>%s</ul>',
            implode("\n", $items)
        );
    }
}
{ "domain": "codereview.stackexchange", "id": 42076, "tags": "php, html" }
Can there be induced current in a flat plate if I allow a magnet to oscillate above it?
Question: I know that if it is a loop, an induced current can be produced, but what if the loop is now "non-hollow", i.e. it is just a plate? I read that the plate will still oppose the magnetic flux, which causes the oscillating magnet to slow down. So I was thinking that since it has the ability to slow the magnet, there should be an induced current. However, what is the direction of the current? Do I just take the plate as one thick piece of wire and try to find the direction of the current? Answer: First let's consider an oscillating plate kept between the 2 poles of a strong magnet. The motion of the plate gets damped and it comes to a halt in the magnetic field. This can be explained on the basis of electromagnetic induction. The magnetic flux associated with the plate keeps on changing as the plate moves in and out of the region between the magnetic poles. The flux change induces eddy currents in the plate. The directions of the eddy currents are opposite when the plate oscillates into the region between the poles and when it oscillates out of the region. The area of the plate determines the damping magnetic moment m = IA. So if we reduce the area by making slots, the damping gets reduced and the plate oscillates more freely. Eddy currents are often undesirable. For example, in a transformer with a metallic core, eddy currents heat up the core and dissipate electrical energy. A similar phenomenon occurs with an oscillating magnet and a stationary plate.
{ "domain": "physics.stackexchange", "id": 57886, "tags": "electromagnetism" }
Multiple GIT repositories in a workspace
Question: Dear fellow ROS-users, I was wondering about the best practice for maintaining multiple Git repositories within a workspace (and possibly another Git repository). This problem is also present when forking repositories from GitHub, for instance. We have a current set-up: Four workspaces. One big repository containing all packages we have created, containing all the workspaces. What we want: Different repositories for different (groups of) packages. Automatic updates (git pull) of repositories. I have taken a look at Git submodules, but in my opinion it is quite unclear what you are working on when using them. For instance: On what repository am I working (committing/pushing/pulling) at the moment? This results in commits done on the wrong repositories. Adding a folder of a submodule when adding on its parent results in the folder being added to the parent Git repository. I was wondering how other people are doing this. Second, what are your thoughts on the best way to do this while also keeping all the Git repositories up-to-date? Thank you.

Originally posted by mathijsdelangen on ROS Answers with karma: 88 on 2014-12-03
Post score: 1

Answer: http://wiki.ros.org/wstool is exactly what you are looking for.

Originally posted by dornhege with karma: 31395 on 2014-12-03
This answer was ACCEPTED on the original site
Post score: 3

Original comments
Comment by mathijsdelangen on 2014-12-03: It works like a charm. Thank you for your quick comment on this issue!
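For reference, the workspace file that wstool maintains (.rosinstall) is just a YAML list with one entry per repository; the names and URLs below are placeholders:

```yaml
# src/.rosinstall (hypothetical repositories)
- git:
    local-name: our_drivers
    uri: https://github.com/example/our_drivers.git
    version: master
- git:
    local-name: our_navigation
    uri: https://github.com/example/our_navigation.git
    version: develop
```

With this in the workspace source directory, a single wstool update pulls all listed repositories, which covers the "automatic git pull" requirement.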
{ "domain": "robotics.stackexchange", "id": 20229, "tags": "ros, github, git" }
How to solve for the trajectory of the center of mass?
Question: I'm working on the physics engine component of a game engine I'm building, and I need some guidance with this particular situation. Consider a square with mass M that is free to translate in the xy plane and free to rotate about any axis perpendicular to the page (Fig. 1). If a linear impulse J is applied at a point above the center of mass (CM) as shown below, I know there must be some angular impulse (momentary torque) generated, since there is a component of J that is perpendicular to the displacement vector from the CM. I imagine this angular impulse will tend to rotate the square clockwise. However, I can also imagine that the CM will undergo translation, since the square is not constrained. How would I go about computing the overall rotational + translational motion of this system? Answer: The motion (acceleration) of the center of mass (CM) is $\vec a_{CM} = {\vec F_{ext} \over M}$ where $\vec F_{ext}$ is the total applied external force and M is the mass of the square. The rotation about the CM is ${d\vec L_{CM} \over dt} = \vec \tau_{ext\enspace CM}$ where $\vec L_{CM}$ is the angular momentum about the CM and $\vec \tau_{ext \enspace CM}$ is the total external torque about the CM. For a short-duration applied force $\vec F_{applied}$, the impulse is $\vec P = \int_{t_1}^{t_2} \vec F_{applied}\enspace dt$ and the velocity of the CM after the impulse is $\vec v_{CM} = {\vec P \over M}$. Similarly, the angular velocity after the impulse is $\vec \omega = {{\vec r_c \times \vec P} \over I_c}$ where $\vec r_c$ is the vector from the CM to the point where the force is applied, and $I_c$ is the moment of inertia with respect to the CM. If there are other forces besides the impulse force (gravity and any constraints), these need to be considered as well.
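For the game-engine use case, the two update rules collapse into a couple of lines. The sketch below assumes a uniform square plate of side a, for which the moment of inertia about the perpendicular axis through the CM is I_C = M*a^2/6; J is a 2-D impulse and r_c the vector from the CM to the application point, so the angular term is the z component of r_c x J:

```python
def apply_impulse_2d(M, a, J, r_c):
    """Post-impulse velocity and angular velocity of a free square plate.

    M   : mass of the square
    a   : side length; I_C = M*a*a/6 for a uniform square plate
    J   : (Jx, Jy) linear impulse
    r_c : (rx, ry) vector from the CM to the application point
    """
    I_c = M * a * a / 6.0
    v = (J[0] / M, J[1] / M)                       # v_CM = P / M
    omega = (r_c[0] * J[1] - r_c[1] * J[0]) / I_c  # z of (r_c x P), over I_C
    return v, omega

# 2 N*s impulse in +x, applied 0.5 m above the CM of a 1 kg, 1 m square:
v, omega = apply_impulse_2d(M=1.0, a=1.0, J=(2.0, 0.0), r_c=(0.0, 0.5))
# omega comes out negative here, i.e. clockwise, matching the question's intuition
```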
{ "domain": "physics.stackexchange", "id": 89287, "tags": "newtonian-mechanics, rotational-dynamics, reference-frames" }
How can particles being closed strings in String Theory create solidity in objects?
Question: I understand how particles with certain masses can form to make atoms, which create solidity in objects due to Pauli's Exclusion Principle and what have you. These particles actually have mass and to a certain extent clearly would produce solidity in objects. But how can particles as closed strings (which I assume would be massless, please correct me if I'm wrong) still produce solid objects? Answer: You say: I understand how particles with certain masses can form to make atoms, which create rigidity in objects due to Pauli's Exclusion Principle and what have you. These particles actually have mass and to a certain extent clearly would produce rigidity in objects. Do you understand what rigidity is? I would define it as the resistance of a solid to change, up to a dimensional length of 10^-8 cm. If you have an electron microscope you would see that there is some motion always happening at those lengths and nothing looks "rigid". At those dimensions the basic fabric of nature, which is quantum mechanics, appears, and terms like "rigidity" and "mass" have to be rethought for the microcosm. So the "particles" with "mass" making up atoms and the atoms making up solids have a different behavior/appearance at small dimensions than the large dimensional objects they make up in aggregate. A nucleus and its electrons are moving in a cloud of probabilities, described mathematically, and seen only in designed experiments. The collective behavior is what we observe macroscopically. But how can particles as closed strings (which I assume would be massless, please correct me if I'm wrong) still produce rigid objects? An elementary particle, when represented mathematically as the vibration of a closed string, does have mass, but this mass is not the same as the one you measure macroscopically on a scale; it is the relativistic mass. It has charge and spin and all the quantum numbers that describe the particle.
The string is a mathematical description of the known properties of elementary particles that is hoped to be more general than the Standard Model description, and will allow for predictions and the inclusion of gravity. At atomic dimensions it is indistinguishable from the point particle of the normal quantum mechanical description, so the mass of an electron, for example, is the same whether you call it a string or a point particle. One has to build knowledge of physics slowly and systematically, in order not to get sidetracked and confused by terminology.
{ "domain": "physics.stackexchange", "id": 2839, "tags": "string-theory, atoms, particle-physics, string, subatomic" }
Showing $SU(N)$ matrices commute with conjugate transpose
Question: $SU(N)$ is the group of all $N\times N$ matrices that satisfy $$ \mathbb{U}^\dagger\mathbb{U}=1~~,\quad\text{and}\qquad \det \mathbb{U}=1~~. $$ Denoting the $\mu$-row and $\nu$-column entry in $\mathbb{U}$ as $U^\mu_\nu$, the unitarity constraint may be written as $$ \mathbb{U}^\dagger\mathbb{U}=1\quad\implies\quad \big( U^\dagger \big)^\nu_\mu U_\nu^\lambda=\delta_\mu^\lambda~~. $$ I assume that the unitarity constraint is such that $$ \mathbb{U}^\dagger\mathbb{U}=1\quad\iff\quad \mathbb{U}\mathbb{U}^\dagger=1~~, $$ and I want to demonstrate this with the index algebra, and I am seeking input on whether or not there's a better way to show it. I will take the indexed expression and multiply it with $U^\mu_\sigma$, then assume the commutativity, and obtain a true expression as \begin{align} \mathbb{U}^\dagger\mathbb{U}=1\quad\implies\qquad\qquad \big( U^\dagger \big)^\nu_\mu U_\nu^\lambda&=\delta_\mu^\lambda\\ U^\mu_\sigma\big( U^\dagger \big)^\nu_\mu U_\nu^\lambda&=U^\mu_\sigma\delta_\mu^\lambda\\ \left[U^\mu_\sigma\big( U^\dagger \big)^\nu_\mu \right]U_\nu^\lambda&=U^\mu_\sigma\delta_\mu^\lambda\\ \text{Assume }U^\mu_\sigma\big( U^\dagger \big)^\nu_\mu=\delta^\nu_\sigma\quad\implies\qquad \qquad \qquad \delta^\nu_\sigma U_\nu^\lambda&=U^\mu_\sigma\delta_\mu^\lambda\\ U^\lambda_\sigma&=U^\lambda_\sigma~~. \end{align} I don't like it that I assumed $U^\mu_\sigma\big( U^\dagger \big)^\nu_\mu =\delta^\nu_\sigma $ in verifying this for myself. What is a better way for me to convince myself that $$ \mathbb{U}^\dagger\mathbb{U}=1\quad\iff\quad \mathbb{U}\mathbb{U}^\dagger=1~~? $$ When I try to use $(\mathbb{U}^\dagger\mathbb{U})^\dagger=1^\dagger$, the identity $(AB)^\dagger=B^\dagger A^\dagger$ does not let me cast the unitarity constraint in the desired form $\mathbb{U}\mathbb{U}^\dagger=1$. Answer: From $U^\dagger U = 1$, you get $U^\dagger = U^{-1}$; then $UU^\dagger = UU^{-1}= 1$. You don't have to worry about anything; since $\det U = 1$, the matrix is invertible.
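The answer's one-line argument (a left inverse of a square matrix is also a right inverse) can also be sanity-checked numerically. This sketch draws a random unitary from a QR decomposition, rescales it into SU(N), and confirms that both orderings of the product give the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Q, _ = np.linalg.qr(A)                 # Q is unitary: Q†Q = 1
U = Q / np.linalg.det(Q) ** (1.0 / N)  # rescale the phase so det(U) = 1

I = np.eye(N)
left = np.allclose(U.conj().T @ U, I)   # the defining constraint U†U = 1
right = np.allclose(U @ U.conj().T, I)  # UU† = 1 comes along for free
```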
{ "domain": "physics.stackexchange", "id": 73613, "tags": "homework-and-exercises, group-theory, linear-algebra" }
What does "a" mean at the end of a refrigerant's designation (R-134a)?
Question: I was studying the designation of names for refrigerants. The following is the basic formula: R - (m - 1)(n + 1)(o) where: m = number of carbon atoms in the refrigerant n = number of hydrogen atoms in the refrigerant o = number of fluorine atoms in the refrigerant So R-134a has: 1 + 1 = 2 carbon atoms 3 - 1 = 2 hydrogen atoms 4 fluorine atoms What does the 'a' at the end mean? Answer: R134a: here "a" is used to denote that it is an isomer. R134 and R134a have the same chemical formula and atomic weight but different chemical structures. R134 has an NBP (normal boiling point) of about -19 °C, whereas R134a has an NBP of about -26 °C. And don't use a capital 'A' in R134a: "A" denotes that the refrigerant is non-azeotropic. ResearchGate
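The decoding rule in the question is easy to mechanize. The sketch below handles only plain two- and three-digit codes like those above (no chlorine/bromine suffixes or blend numbers) and simply carries a trailing lowercase letter along as the isomer tag:

```python
def decode_refrigerant(code):
    """Decode an R-number such as 'R-134a' into atom counts.

    Applies R-(m-1)(n+1)(o): the digits encode carbon, hydrogen and
    fluorine counts; a trailing lowercase letter marks an isomer.
    """
    body = code.lower().lstrip("r").lstrip("-")
    isomer = body[-1] if body[-1].isalpha() else ""
    digits = body.rstrip("abc")
    if len(digits) == 2:        # two-digit codes imply m - 1 = 0
        digits = "0" + digits
    m1, n1, o = (int(d) for d in digits)
    return {"C": m1 + 1, "H": n1 - 1, "F": o, "isomer": isomer}
```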
{ "domain": "engineering.stackexchange", "id": 653, "tags": "chemical-engineering, refrigeration" }
Complexity of the search version of 2-SAT assuming $\mathsf{L = NL}$
Question: If $\mathsf{L = NL}$, then there is a logspace algorithm that solves the decision version of 2-SAT. Is $\mathsf{L = NL}$ known to imply that there is a logspace algorithm to obtain a satisfying assignment, when given a satisfiable 2-SAT instance as input? If not, what about algorithms which use sub-linear space (in the number of clauses)? Answer: Given a satisfiable 2-CNF $\phi$, you can compute a particular satisfying assignment $e$ by an NL-function (that is, there is an NL-predicate $P(\phi,i)$ that tells you whether $e(x_i)$ is true). One way to do that is described below. I will freely use the fact that NL is closed under $\mathrm{AC}^0$-reductions, hence NL-functions are closed under composition; this is a consequence of NL = coNL. Let $\phi(x_1,\dots,x_n)$ be a satisfiable 2-CNF. For any literal $a$, let $a^\to$ be the number of literals reachable from $a$ by a directed path in the implication graph of $\phi$, and $a^\leftarrow$ the number of literals from which $a$ is reachable. Both are computable in NL. Observe that $\overline{a}^\to=a^\leftarrow$, and $\overline{a}^\leftarrow=a^\to$, due to skew-symmetry of the implication graph. Define an assignment $e$ so that if $a^\leftarrow>a^\to$, then $e(a)=1$; if $a^\leftarrow<a^\to$, then $e(a)=0$; if $a^\leftarrow=a^\to$, let $i$ be minimal such that $x_i$ or $\overline{x_i}$ appears in the strongly connected component of $a$ (it cannot be both, as $\phi$ is satisfiable). Put $e(a)=1$ if $x_i$ appears, and $e(a)=0$ otherwise. The skew-symmetry of the graph implies that $e(\overline{a})=\overline{e(a)}$, hence this is a well-defined assignment. Moreover, for any edge $a\to b$ in the implication graph: If $a$ is not reachable from $b$, then $a^\leftarrow<b^\leftarrow$, and $a^\to>b^\to$. Thus, $e(a)=1$ implies $e(b)=1$. Otherwise, $a$ and $b$ are in the same strongly connected component, and $a^\leftarrow=b^\leftarrow$, $a^\to=b^\to$. Thus, $e(a)=e(b)$. It follows that $e(\phi)=1$.
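The assignment rule is easy to imitate with explicit reachability on a small instance. The sketch below is of course polynomial time and space, not logspace; it just makes the reachability counting and the tie-breaking concrete. Literal i stands for x_i and -i for its negation:

```python
def nl_style_assignment(clauses):
    """Satisfying assignment for a satisfiable 2-CNF via the
    reachability-count rule above.  Clauses are pairs of nonzero
    ints: i means x_i, -i means its negation."""
    # Implication graph: clause (a or b) yields not-a -> b and not-b -> a.
    lits = {s * abs(a) for c in clauses for a in c for s in (1, -1)}
    graph = {l: set() for l in lits}
    for a, b in clauses:
        graph[-a].add(b)
        graph[-b].add(a)

    def reach(start):
        seen, stack = {start}, [start]
        while stack:
            for v in graph[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    down = {l: reach(l) for l in lits}                          # counts a->...
    up = {l: {m for m in lits if l in down[m]} for l in lits}   # counts ...->a

    def e(a):
        if len(up[a]) != len(down[a]):
            return len(up[a]) > len(down[a])
        scc = down[a] & up[a]
        i = min(abs(l) for l in scc)
        return i in scc   # True iff the positive literal x_i is in the SCC
    return {abs(l): e(abs(l)) for l in lits}
```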
{ "domain": "cstheory.stackexchange", "id": 3199, "tags": "sat, search-problem, logspace" }
Equivalence of some Automata & Language & NFA
Question: I read some notes from an automata course. The notes claim that all of the following are the same, but I think L(G) is not equal to the languages of the NFA and the regular expression. Could anyone help me define the languages of these figures (NFA, regular expression, and grammar)? Answer: We'll describe, in words, the languages encoded in each representation. Then, we'll see whether we end up with equivalent languages. We'll start with the regular expression. This regular expression says: all strings that start with an $a$ or a $b$, followed by any number of repetitions of either of the strings $b$ or $bb$. Note that the $bb$ is completely superfluous in this definition since we can always just choose $b$ twice in a row. So this language is really "either an $a$ or a $b$, followed by any number of $b$s". Now, for the automaton. The first observation is that the only way we can get an $a$ is if it's the first symbol we see; that's the only place an appropriate transition is defined and we never return to the initial state. We don't need to see an $a$ to accept; we can see a $b$ instead and accept (both $a$ and $b$ are accepted by the automaton). Now, how many $b$s can we see and still accept? Suppose the NFA always goes to state $2$ after the first input symbol (why not? it's an NFA). If we:

see no more $b$s, we are in an accepting state... so we can see no $b$s and accept
see one more $b$, we can go to state $4$ and accept
see two more $b$s, we can go to state $3$ and then return to state $2$ where we accept
see more than two $b$s, we can go to state $3$, back to state $2$, and then we're in exactly the same situation as we were earlier, except now we have 2 fewer $b$s to worry about processing.

This should convince you that we can, in fact, see any number of $b$s after seeing either an $a$ or a $b$. Notice that we get the same thing as we did for the regular expression. Now for the grammar.
We note that the only way to produce a string with an $a$ is to use the production $S \rightarrow aAB$, so if we have a string with an $a$, it starts with an $a$ and that's the only $a$ in the string. In this case, we can always choose $B \rightarrow \epsilon$ and use the productions for $A$ to get any string of $b$s. However - it appears your concerns were justified. Consider the other production for $S$ - $S \rightarrow bAb$. This is the only way we can get a string that starts with $b$. This production also says that any string that starts with a $b$ must have at least two $b$s in it - one at the front, and a different one at the back. In particular, we cannot get the string $b$ from this grammar. But this string is assuredly in the languages of the RE and automaton. Therefore: $L(r) = L(M) \neq L(G)$.
{ "domain": "cs.stackexchange", "id": 3436, "tags": "formal-languages, regular-languages, automata, formal-grammars, regular-expressions" }
Why is the equation $\mathbb{E} \left[ (Y - \hat{Y})^2 \right] = \left(f(X) - \hat{f}(X) \right)^2 + \operatorname{Var} (\epsilon)$ true?
Question: In the book An Introduction to Statistical Learning, the authors claim (equation 2.3, p. 19, chapter 2) $$\mathbb{E} \left[ (Y - \hat{Y})^2 \right] = \left(f(X) - \hat{f}(X) \right)^2 + \operatorname{Var} (\epsilon) \label{0}\tag{0},$$ where $Y = f(X) + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma)$ and $f$ is the unknown function we want to estimate $\hat{Y} = \hat{f}(X)$ is the output of our estimate of $f$, i.e. $\hat{f} \approx f$ They claim that this is easy to prove, but this may not be easy to prove for everyone. So, why is equation \ref{0} true? Answer: Let's say we have $a$ - constant and $\epsilon \sim \mathcal{N}(0,\sigma)$, then: $$\mathbb{E}\left[(a+\epsilon)^2\right] = \mathbb{E}\left[a^2\right] + 2 \mathbb{E}\left[a\right]\mathbb{E}\left[\epsilon\right] + \mathbb{E}\left[\epsilon^2\right] $$ Expectations of constants are just the constants: $\mathbb{E}[a] = a$ and $\mathbb{E}[a^2] = a^2$ The mean of $\epsilon$ is zero $\mathbb{E}[\epsilon] = 0$. And the expectation of $\epsilon^2$ is its variance: $$ \mathop{\mathrm{Var}}(\epsilon) = \mathbb{E}[\epsilon^2] - \mathbb{E}[\epsilon]^2 = \mathbb{E}[\epsilon^2]$$ Substituting, we get an expression for the original expectation: $$\mathbb{E}\left[(a+\epsilon)^2\right] = a^2 + \mathop{\mathrm{Var}}(\epsilon) \tag{*}$$ Getting to the expectation in the book, we first substitute the values for $Y$ and $\hat{Y}$: $$\mathbb{E}\left[(Y - \hat{Y})^2\right] = \mathbb{E}\left[((f(X) - \hat{f}(X)) + \epsilon)^2\right]$$ In the book it is assumed that $X$, $f$ and $\hat{f}$ are constant. So we can use the expression (*) with the constant being $a = f(X) - \hat{f}(X)$: $$\mathbb{E}\left[(Y - \hat{Y})^2\right] = (f(X) - \hat{f}(X))^2 + \mathop{\mathrm{Var}}(\epsilon)$$
{ "domain": "ai.stackexchange", "id": 2830, "tags": "machine-learning, proofs, function-approximation, statistical-ai, probability-theory" }
Making sense of QFT
Question: I don't get what it is we're trying to do in QFT. I'm currently at the beginning of the course, and a clear picture of what we're trying to achieve hasn't been painted for me yet. From what I've been able to gather, for a spin 0 field, we wish to have an operator-density field which satisfies the Klein-Gordon equation, and then another operator-density field which satisfies the momentum-position-like commutation relation with this field. Now after this, we construct a Hamiltonian-density operator field and integrate it over space to get the Hamiltonian operator from the scalar field. Now, is this Hamiltonian operator supposed to be applied in Schrodinger's equation in QM? What is the vector space this Hamiltonian operator is going to act upon? When/How are the particle creation-annihilation processes going to come into the picture? Can someone please provide me with a picture/roadmap of the things we are trying to do in QFT. Like in QM, we replaced knowledge of the particle with a wave function/quantum state and then had an evolution operator for this state. Answer: The quick and dirty version is that you model all the particles of a given type as excitations of a series of quantum harmonic oscillators: $$ H = \int\frac{\mathrm{d}^{3}\vec{p}}{(2\pi)^3} E_{\vec{p}} \left(a_{\vec{p}}^{\dagger}a_{\vec{p}} + \frac{1}{2}\right) $$ so a particle of momentum $\vec{p}$ would be the $|1\rangle$ state of the harmonic oscillator of momentum $\vec{p}$. Note $E_{\vec{p}}^2 - \vec{p}^2 = m^2$ in natural units and $E_{\vec{p}}$ is an angular frequency by the de Broglie relation. To simplify this you define a thing called a 'field operator' that allows you to work in position instead of momentum space: $$ \phi = \int\frac{\mathrm{d}^{3}\vec{p}}{(2\pi)^3}\frac{1}{2E_{\vec{p}}}\left(a_{\vec{p}} \mathrm{e}^{-ipx} + a^{\dagger}_{\vec{p}} \mathrm{e}^{ipx}\right) $$ where $p$ and $x$ without arrows indicate four-vectors and four-position.
If you plug this in and chug through the algebra you get the standard field theoretic Hamiltonian for a free (scalar) field: $$ H = \frac{1}{2}\int \mathrm{d}^{3}\vec{x}\left(\left(\frac{\partial\phi}{\partial t}\right)^2 + \left(\nabla\phi\right)^2 + m^2\phi^2\right) $$ The Hilbert space for this Hamiltonian is just what you'd expect from a set of harmonic oscillators: $$ \mathcal{H} = \bigotimes_{\vec{p}}\mathcal{H}_{\vec{p}} $$ where $\mathcal{H}_{\vec{p}}$ is the Hilbert space for a single harmonic oscillator, and in the expression for the Hamiltonian we've really suppressed an uncountable series of $\otimes \mathbb{I} \otimes$ before and after each ladder operator. Sometimes people call this a Fock space, but it isn't really a Fock space. It has similar properties, but its construction is very different [1]. For dynamics, you use the Heisenberg picture, and in particular you use the Heisenberg equation (not the Schrodinger equation): $$ \frac{\mathrm{d}\phi}{\mathrm{d}t} = i\left[H,\phi\right] \\ \frac{\mathrm{d}\pi}{\mathrm{d}t} = i\left[H,\pi\right] $$ where $\pi = \frac{\partial\phi}{\partial t}$ is the momentum conjugate of the field defined in the usual way from the Lagrangian. Again, ploughing through the algebra you will find that the field obeys the Klein-Gordon equation: $$ \left(\Box + m^2\right)\phi = 0 $$ Naturally this is a rather bizarre statement to make about the universe. Why are all particles excitations of a harmonic oscillator? Is it just an approximation, like so many things in physics which are modeled by harmonic oscillators, or is there something more fundamental going on? Obviously, the answer is that there is something more fundamental. To see it, you have to look at the differential geometric structure of the spacetime manifold, and in particular the different representations of its isotropy group (the Lorentz group).
In doing this, you see that the position space picture is the natural starting point and it amazingly turns into harmonic oscillators when you do a Fourier transform. Essentially this is the true mathematical formalism of canonical quantisation. I am happy to go into the technical details of this construction if you want (it explains vector fields and spinor fields as well, which the above approach does not), but it's mostly of mathematical and philosophical interest rather than anything practical with calculations. (It's also useful if you want to look at unification and stuff I suppose.) [1]: In particular, $\mathcal{H}$ comes already equipped with the idea of indistinguishability built-in, because if you're going to call the state $|2\rangle$ of a harmonic oscillator a 2-particle state (where both have the same momentum), there's already no concept of 'which particle is 1 and which is 2'.
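As a small concrete check of the Klein-Gordon equation above (my own illustration, not part of the original answer): a single plane-wave mode $\phi(t,x) = e^{i(px - E_p t)}$ with the on-shell condition $E_p^2 = p^2 + m^2$ satisfies it. Verifying this numerically in 1+1 dimensions with central finite differences:

```python
import cmath
import math

m, p = 1.0, 0.7
E = math.sqrt(p ** 2 + m ** 2)   # on-shell energy, E^2 = p^2 + m^2

def phi(t, x):
    # A single plane-wave mode of the free scalar field
    return cmath.exp(1j * (p * x - E * t))

def second_deriv(f, h=1e-3):
    # Central finite difference approximation of the second derivative
    return lambda u: (f(u + h) - 2 * f(u) + f(u - h)) / h ** 2

t0, x0 = 0.3, -0.8
d2t = second_deriv(lambda t: phi(t, x0))(t0)
d2x = second_deriv(lambda x: phi(t0, x))(x0)

# (Box + m^2) phi = d^2phi/dt^2 - d^2phi/dx^2 + m^2 phi
residual = d2t - d2x + m ** 2 * phi(t0, x0)
assert abs(residual) < 1e-4   # zero up to discretization error
```

Off shell (pick an $E$ with $E^2 \neq p^2 + m^2$) the residual is of order one, so the mass-shell condition is exactly what makes the mode a solution.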
{ "domain": "physics.stackexchange", "id": 40748, "tags": "quantum-field-theory, hilbert-space, klein-gordon-equation" }
Time complexity of a greedy approach for Independent Set: the Heaviest-First Algorithm
Question: The heaviest-first algorithm is a greedy approximation algorithm that finds an independent set $S$ of nodes in a graph $G=(V,E)$, so that the sum of the weights of the nodes in $S$ is as large as possible. I want to find the time complexity of the following algorithm (when applied to $N \times N$ grid graphs): Answer: Let's analyze the runtime. In each iteration of the loop we look for the maximum-weight node. If you do this naively, you look at all nodes. By using a clever data structure you could speed this up. Adding the node to $S$ takes constant time for all reasonable set data structures. Now we remove $v_i$ and its neighbors. This can be done in time proportional to the number of incident edges at $v_i$ and at its neighbors, for reasonable graph representations, e.g. an adjacency list. In your comment you stated that you had problems analyzing the algorithm because you didn't know the number of iterations of the loop. You could conservatively assume that you remove only $v_i$ itself, to get a runtime of $O(|V| \cdot (\text{time to find the max}))$, i.e. $O(|V|^2)$. Since $G$ is a grid graph, each node has degree at most four. So even if you assume optimistically that you remove five nodes in each iteration, you get the same asymptotic runtime.
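The question's algorithm listing did not survive here, but a plausible implementation of the naive heaviest-first greedy described in the answer looks like this (my own sketch, with hypothetical names): each iteration scans all remaining nodes for the maximum weight, which is where the $O(|V|^2)$ bound comes from.

```python
def heaviest_first(weights, adj):
    """Greedy independent set: repeatedly take the heaviest remaining
    node and discard it together with its neighbors.

    weights: dict node -> weight
    adj: dict node -> set of neighboring nodes
    """
    remaining = set(weights)
    S = set()
    while remaining:
        # O(|V|) scan for the maximum-weight node -> O(|V|^2) overall
        v = max(remaining, key=weights.__getitem__)
        S.add(v)
        remaining.discard(v)
        remaining -= adj[v]          # degree <= 4 on a grid graph
    return S

# Tiny 2x2 grid graph: nodes are (row, col), edges between grid neighbors
nodes = [(i, j) for i in range(2) for j in range(2)]
adj = {v: {w for w in nodes if abs(v[0] - w[0]) + abs(v[1] - w[1]) == 1}
       for v in nodes}
weights = {(0, 0): 5, (0, 1): 1, (1, 0): 2, (1, 1): 9}

S = heaviest_first(weights, adj)
# The result is an independent set: no two chosen nodes are adjacent.
assert all(w not in adj[v] for v in S for w in S)
```

Replacing the linear scan with a max-heap keyed by weight would bring the per-iteration cost down to $O(\log|V|)$, which is the "clever data structure" speedup the answer alludes to.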
{ "domain": "cs.stackexchange", "id": 8069, "tags": "graphs, greedy-algorithms" }
Does a photon's wavelength (and energy) change when reflecting off a mirror?
Question: The momentum of a photon is $\ p=E/c.$ When a photon reflects off a mirror, it is elastic scattering. Elastic scattering should keep the energy of the photon. But radiation pressure states, that part of the momentum of the photon will be transferred to the mirror, this is how the photon exerts pressure on the mirror. Now if $\ p=E/c\ $ and the momentum of the photon changes (part of it gets transferred to the mirror), and the momentum of the photon depends on the wavelength $\ p = h/\lambda$. https://en.wikipedia.org/wiki/Radiation_pressure Elastic scattering is a form of particle scattering in scattering theory, nuclear physics and particle physics. In this process, the kinetic energy of a particle is conserved in the center-of-mass frame, but its direction of propagation is modified (by interaction with other particles and/or potentials). Furthermore, while the particle's kinetic energy in the center-of-mass frame is constant, its energy in the lab frame is not. Generally, elastic scattering describes a process where the total kinetic energy of the system is conserved. https://en.wikipedia.org/wiki/Elastic_scattering In Rayleigh scattering a photon penetrates into a medium composed of particles whose sizes are much smaller than the wavelength of the incident photon. In this scattering process, the energy (and therefore the wavelength) of the incident photon is conserved and only its direction is changed. In this case, the scattering intensity is proportional to the fourth power of the reciprocal wavelength of the incident photon. Now this is a contradiction. How can the energy of the photon be kept, and at the same time how can the photon exert pressure on the mirror, thus losing momentum, and change its wavelength? $p=E/c$, so the momentum and energy of the photon cannot change without the other. 
If the photon's energy is kept during elastic scattering (mirror reflection), and the photon still exerts radiation pressure on the mirror, then the photon's momentum has to change (part of it needs to get transferred to the mirror), so the energy needs to change too. Question: Does the wavelength of the photon change during elastic scattering (mirror reflection)? Answer: If a mirror reflection affected the energy of the photons to a large extent, the colors would change, and it would not be a "true mirror". The fact that the colors do not change for a "true" mirror means that the interaction of the photons is elastic, i.e. no energy is lost in our reference frame, the lab frame. Elastic scattering keeps the energy of the photon the same in the center-of-mass system "photon + mirror". Because the mass of the mirror is so very large, the lab frame is effectively also the center-of-mass frame for the "photon + mirror" scattering: the tiny $ΔE$ carried away by the recoiling mirror is far too small to make the center-of-mass frame discernibly different from the lab frame.
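To put a number on how tiny that $ΔE$ is, here is a back-of-the-envelope sketch of my own (not part of the original answer): for normal-incidence reflection the mirror absorbs momentum $2p$, so it picks up kinetic energy $(2p)^2/2M$, which for a macroscopic mirror is a fantastically small fraction of the photon energy $pc$:

```python
h = 6.626e-34     # Planck constant, J s
c = 3.0e8         # speed of light, m/s
lam = 500e-9      # green photon, 500 nm
M = 1.0           # 1 kg mirror

p = h / lam                         # photon momentum
E_photon = p * c                    # photon energy
E_recoil = (2 * p) ** 2 / (2 * M)   # mirror kinetic energy after reflection

# Fractional energy (and hence wavelength) shift of the reflected photon
fraction = E_recoil / E_photon      # ~ 1e-35
assert fraction < 1e-30
```

So the wavelength does change in principle, but by roughly one part in $10^{35}$ for a kilogram-scale mirror, which is utterly unobservable.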
{ "domain": "physics.stackexchange", "id": 79667, "tags": "quantum-mechanics, photons, energy-conservation, reflection" }
Why does ice water get colder when salt is added?
Question: It is well known that when you add salt to ice, the ice not only melts but will actually get colder. From chemistry books, I've learned that salt will lower the freezing point of water. But I’m a little confused as to why it results in a drop in temperature instead of just ending up with water at 0 °C. What is occurring when salt melts the ice to make the temperature lower? Answer: When you add salt to an ice cube, you end up with an ice cube whose temperature is above its melting point. This ice cube will do what any ice cube above its melting point will do: it will melt. As it melts, it cools down, since energy is being used to break bonds in the solid state. (Note that the above point can be confusing if you're new to thinking about phase transitions. An ice cube melting will take up energy, while an ice cube freezing will give off energy. I like to think of it in terms of Le Chatelier's principle: if you need to lower the temperature to freeze an ice cube, this means that the water gives off heat as it freezes.) The cooling you get, therefore, comes from the fact that some of the bonds in the ice are broken to form water, taking energy with them. The loss of energy from the ice cube is what causes it to cool.
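For a rough sense of how far below 0 °C the mixture can go, here is a small illustration of my own (using the standard colligative freezing-point-depression formula, which is not part of the answer above): $\Delta T_f = i \cdot K_f \cdot m$, with $K_f = 1.86\ \mathrm{K\,kg/mol}$ for water and van 't Hoff factor $i = 2$ for NaCl:

```python
Kf = 1.86    # cryoscopic constant of water, K kg / mol
i = 2        # van 't Hoff factor for NaCl (dissociates into Na+ and Cl-)
m = 5.0      # molality, mol NaCl per kg of water

dT = i * Kf * m   # freezing-point depression in kelvin
assert abs(dT - 18.6) < 1e-9
```

So a concentrated brine freezes nearly 19 K below pure water in this ideal-solution estimate; real saturated brine bottoms out near −21 °C (the eutectic point), so the formula is only a rough guide at such high concentrations.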
{ "domain": "chemistry.stackexchange", "id": 12445, "tags": "water, solutions, heat, phase" }
Why does a local inertia frame Lorentz transform when going from $x$ to $x + dx$?
Question: In Zee's GR book, pg. 600, it was written that On a curved manifold, as we move from point $x$ to a nearby point $x + dx$, we expect that the local frame will rotate or Lorentz transform, depending on whether the manifold is locally Euclidean or Minkowskian. How do we know that the local inertial frame will rotate or Lorentz transform when we go from $x$ to $x+dx$? Answer: This is because a manifold doesn't have vectors in and of itself. You have to supplement it with an abstraction called the tangent space. That is a vector space. Curves arise as the most natural objects to define on a manifold. And when you move from point $x$ to $x+dx$ along a curve, and if you're using the tangent space all along this displacement to find your bearings on the landscape, you must define a way to say when vectors are being transported parallel to themselves or not. You can picture this like trying to move an arrow that describes your motion when going from one point to another on the surface of an apple. The arrow will have to rotate, to adapt to the new "tangent space". In case your manifold is locally Minkowskian, it will have to Lorentz-rotate. Tony Zee is notorious for a pedagogical style in which he sacrifices some level of mathematical rigour in order to provide an intuitive picture, and he's a master at this. The really rigorous concept that takes you from vectors at $x$ to those at $x+dx$ is the connection.
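The "arrow on an apple" picture can be made quantitative, in the Euclidean case, with a numerical sketch of my own (not part of the original answer): parallel-transporting a tangent vector once around a circle of colatitude $\theta$ on the unit sphere rotates it, in the local orthonormal frame $(\hat e_\theta, \hat e_\phi)$, at a rate $\cos\theta$ per unit of longitude, so after the full loop it comes back rotated by the deficit angle $2\pi(1-\cos\theta)$:

```python
import math

theta = 1.0                  # colatitude of the circle of transport
N = 100_000                  # integration steps around the loop
dphi = 2 * math.pi / N

# Components of the vector in the local orthonormal frame (e_theta, e_phi)
v1, v2 = 1.0, 0.0
rate = math.cos(theta)       # rotation rate per unit of longitude phi

# Euler-integrate the parallel-transport equation around the full loop
for _ in range(N):
    v1, v2 = v1 + rate * v2 * dphi, v2 - rate * v1 * dphi

# Net rotation of the vector relative to its starting direction
angle = abs(math.atan2(v2, v1))
deficit = 2 * math.pi * (1 - math.cos(theta))
assert abs(angle - deficit) < 1e-2
```

The vector comes back rotated even though it was "moved as straight as possible" at every step, which is exactly the holonomy of the connection the answer refers to.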
{ "domain": "physics.stackexchange", "id": 76923, "tags": "general-relativity, reference-frames, differential-geometry, coordinate-systems" }
Why does hydrogen burn with a pale blue flame while its emission spectral lines are red in colour?
Question: I was studying oxidising flames and realized that sodium burns with a bright yellow flame the wavelength of which is around 588 nm. Then I searched for emission spectra of sodium and found that the lines around 588 nm are the most intense ones, so it appears yellow. The same happened with potassium. But when I arrived at hydrogen, I knew it burned with a pale blue flame, but its emission spectral lines were most intense in the red region. So why does it still give a blue flame? As you see, it emits blue light too, but the red one is the most intense, and so an electrical discharge of hydrogen emits red light. But its oxidising flame is pale blue. Would it burn with a red flame at a higher temperature, or something like that, or are spectral lines not related to flame colours? Answer: It is a very interesting question, but comparing a combustion spectrum with an atomic emission one is like comparing apples and oranges. A flame is a luminous gas phase chemical reaction where the hydrogen atoms are combining with oxygen atoms. It is a chemiluminescence phenomenon. A discharge tube emission is an atomic emission. You posted a picture of a hydrogen spectrum from a discharge tube which consists of discrete lines in the UV, visible, and IR regions. In the same way, the hydrogen-air flame has an emission in the ultraviolet, visible and in the IR. The blue flame is not a line spectrum, as you might speculate. It is a continuum (broad band), which indicates that this is due to molecular emission in the flames, not from hydrogen atoms! This is in contrast with the alkali metals whose compounds easily atomize in the flame. Also note that what may appear to us as a single color, like blue, is rarely a single wavelength. There is a beautiful article by R.W. Schefer, W.D. Kulatilaka, B.D. Patterson, and T.B. Settersten on Visible emission of hydrogen flames, Combustion and Flame 156 (2009) 1234–1241. Go to Google Scholar and find it; it is open access on Google. 
Look at their experiment: They view the "blue" hydrogen flames through various optical filters which block or pass certain wavelengths. See how many colors are present in the "blue" flame of hydrogen. In fact most of the emission is in the ultraviolet region. Direct flame luminosity photographs in a laminar, diffusion H2-jet flame. (a) Unfiltered, f /2.4 aperture; (b) short-wavelength pass filter with 550 nm cutoff wavelength, f /2.4 aperture; (c) long-wavelength pass filter with 530 nm cutoff wavelength. Images taken at f /2.4 aperture with 90-ms exposure time. Fuel jet exit velocity is 47 m/s with a coflow air velocity of 0.57 m/s. Reynolds number = 837. Most of the emission is in the UV and infrared, as you can see from the spike near 300 nm and the intense band after 700 nm. Most people cannot see beyond 700 nm. The infrared emission is due to excited water molecules or perhaps their fragments. Note: The above spectrum consists of two parts, UV on the left, and visible extending from $\pu{350 nm}$ to $\pu{850 nm}$ on the right hand side, recorded separately. For this figure, the latter was amplified by a factor of 6.5 with respect to the OH in the ultraviolet.
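For reference, the discrete atomic lines the question refers to are the Balmer series, easy to reproduce from the Rydberg formula (my own sketch; this describes the discharge-tube line spectrum, not the flame continuum): $1/\lambda = R(1/2^2 - 1/n^2)$.

```python
R = 1.097e7   # Rydberg constant, 1/m

def balmer_nm(n):
    """Wavelength (in nm) of the Balmer transition n -> 2."""
    inv_lam = R * (1 / 2 ** 2 - 1 / n ** 2)
    return 1e9 / inv_lam

# H-alpha (n=3) is the intense red line near 656 nm; higher-n lines
# crowd toward the blue/violet series limit near 365 nm.
lines = {n: balmer_nm(n) for n in range(3, 7)}
assert 650 < lines[3] < 660    # red H-alpha
assert 360 < balmer_nm(10) < 385
```

This makes the contrast in the answer concrete: atomic hydrogen emits at these few sharp wavelengths, whereas the pale blue of the flame is a broad molecular continuum that no Rydberg-style line formula describes.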
{ "domain": "chemistry.stackexchange", "id": 15368, "tags": "spectroscopy, hydrogen, atomic-structure" }
How to prove this commutation identity?
Question: How to prove this identity? $$[(\overrightarrow{\sigma}\cdot\overrightarrow{r}),(\overrightarrow{\sigma}\cdot\overrightarrow{p})]=\frac{4i}{\hbar}\overrightarrow{L}\cdot\overrightarrow{S}+3i\hbar$$ I tried some things, but I just ended up lost. Answer: You will need: the commutation relation of $\mathbf r,\mathbf p$, given by $[r_i,p_j]=i\hbar\delta_{ij}$, where $\delta_{ij}$ is Kronecker's delta; and the following identity for the Pauli matrices, $$\sigma_i\sigma_j=\delta_{ij}+i\epsilon_{ijk}\sigma_k$$ where $\epsilon_{ijk}$ is the Levi-Civita symbol. The rest is just willful work.
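Since the answer leaves the algebra as an exercise, here is one way it can go (my own sketch, not part of the original answer). Contracting the Pauli identity with $r_i p_j$ and with $p_i r_j$ gives

$$(\vec{\sigma}\cdot\vec{r})(\vec{\sigma}\cdot\vec{p}) = \sigma_i\sigma_j\, r_i p_j = r_i p_i + i\epsilon_{ijk}\sigma_k\, r_i p_j = \vec{r}\cdot\vec{p} + i\,\vec{\sigma}\cdot\vec{L},$$

$$(\vec{\sigma}\cdot\vec{p})(\vec{\sigma}\cdot\vec{r}) = p_i r_i + i\epsilon_{ijk}\sigma_k\, p_i r_j.$$

Using $[r_i,p_j]=i\hbar\delta_{ij}$ twice: $p_i r_i = r_i p_i - 3i\hbar$, and $\epsilon_{ijk}\, p_i r_j = \epsilon_{ijk}(r_j p_i - i\hbar\delta_{ij}) = -L_k$, since $\epsilon_{ijk}\delta_{ij}=0$. Hence

$$(\vec{\sigma}\cdot\vec{p})(\vec{\sigma}\cdot\vec{r}) = \vec{r}\cdot\vec{p} - 3i\hbar - i\,\vec{\sigma}\cdot\vec{L},$$

and subtracting the two products, with $\vec{S}=\tfrac{\hbar}{2}\vec{\sigma}$,

$$[(\vec{\sigma}\cdot\vec{r}),(\vec{\sigma}\cdot\vec{p})] = 2i\,\vec{\sigma}\cdot\vec{L} + 3i\hbar = \frac{4i}{\hbar}\vec{L}\cdot\vec{S} + 3i\hbar.$$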
{ "domain": "physics.stackexchange", "id": 82145, "tags": "quantum-mechanics, commutator" }
Are electrons just incompletely evaporated black holes?
Question: Imagine a black hole that is fast-approaching its final exponential throes of Hawking evaporation. Presumably, at all points in this end process there will remain a region that identifiably remains "the black hole" until the very end, as opposed to the huge swarm of fundamental particles that is being radiated out from it. As the mass of the black hole descends to that of individual particles, it would seem entirely feasible that the very last fermionic Hawking radiation event available to the almost-deceased black hole could leave it with an unbalanced charge, e.g. -1, and an unbalanced spin, say 1/2. It would also have some kind of mass of course, but that aspect of the final residue could be fine-tuned to any specific value by photon emissions of arbitrary frequencies. After photon-emission mass trimming, the resulting black hole residuum would reach a point where it is no longer able to evaporate into any known particle, because there is no longer any lower-mass option available to it for removing the -1 charge and 1/2 spin. The black hole residuum will at that point be stuck, so to speak, stuck with the exact charge, spin, and mass of an electron. And so my question: Is it an electron? And if so, by equivalence, is every electron in the universe really just a particular type of black hole that cannot evaporate any further due to the constraints of charge and spin conservation? And if so, why are charge and spin so uniquely combined in such black hole remnants, so that e.g. a remnant of -1 charge and zero spin is not permitted, at least not commonly, and the mass is forced to a very specific associated level? Is there anything in the current understanding of general relativity that would explain such a curious set of restrictions on evaporation?
The full generalization of this idea would of course be that all forms of black hole evaporation are ultimately constrained in ways that correspond exactly to the Standard Model, with free fundamental particles like electrons being the only stable end states of the evaporation process. The proton would be a fascinating example of an evaporation that remains incomplete in a more profound way, with the three quarks remaining incapable of isolated existence within spacetime. The strong force, from that perspective, would in some odd sense have to be a curious unbalanced remnant of those same deeper constraints on the overall gravitational evaporation process. This may all be tautological, too! That is, since Hawking radiation is guided by the particles possible, the constraints I just mentioned may be built-in and thus entirely trivial in nature. However, something deeper in the way they work together would seem... plausible, at least? If an electron is an unbalanced black hole, then the particles given off would also be black holes, and the overall process would be not one of just particle emission, but of how black holes split at low masses. Splitting with constraints imposed by the structure of spacetime itself would be a rather different way of looking at black hole evaporation, I suspect. (final note: This is just a passing thought that I've mulled over now and then through the years. Asking it was inspired by this intriguing mention of Wheeler's geon concept by Ben Crowell. I should add that I doubt very seriously that my wild speculations above have anything to do with Wheeler's concept of geons, though.) Answer: Yes and no. Electrons - and all other elementary particles - may be viewed as microstates of very tiny black holes. As one considers increasingly heavy elementary particles (e.g. those in the Hagedorn spectrum of string theory), they increasingly morph into black hole microstates.
When the elementary particle masses sufficiently surpass the Planck scale, most of the elementary particles look like typical black hole microstates. So quantum gravity as we understand it today implies that there is a gradual transition between elementary particles and black holes. However, if the elementary particles - very light black hole microstates - are (much) lighter than the Planck scale, the description of these "black holes" using the most naive equations of general relativity (Einstein's equations) becomes highly inaccurate. Corrections such as (powers of curvature tensors) $R^n$ to the equations of motion, and various quantization rules and other deformations from quantum mechanics, restore their importance – those can only be neglected in the very large size limit. Consequently, most predictions made by classical GR are seriously inaccurate or downright wrong for the elementary particles if they are treated as black holes. For example, the charge/mass ratio of an electron (or other known charged particles) vastly exceeds the upper limit defining "extremal" black holes in GR. Such black holes wouldn't be classically allowed, but this regime is highly non-classical, so these objects do exist with the known properties. It is actually necessary for the charged elementary particles to behave as "not allowed" overcharged superextremal black holes. It's needed for regular large charged black holes to fully evaporate, which is needed for other reasons. All these claims are equivalent to the so-called weak gravity conjecture. http://arxiv.org/abs/hep-th/0601001
{ "domain": "physics.stackexchange", "id": 21537, "tags": "quantum-field-theory, black-holes, standard-model, hawking-radiation" }
What exactly does No cloning mean, in the context of Quantum Computing?
Question: I am trying to get an intuitive idea of how the No-Cloning theorem affects Quantum computation. My understanding is that given a qubit $Q$ in superposition $Q_0 \left| 0 \right> + Q_1 \left| 1 \right>$, NCT states another qubit $S$ cannot be designed such that $S$ is equivalent to the state of $Q$. Now the catch is, what does Equivalent mean? It could mean either that: $S = S_0 \left| 0 \right> + S_1 \left| 1 \right>$ such that $S_0 = Q_0, S_1 = Q_1$. Or it could mean that $S = Q$, meaning that if $S$ is observed to be some value (for example 0) then $Q$ MUST be that same value, and vice versa. So it seems that point 2 occurs anyway in entangled systems (particularly cat-states), so I can eliminate that option and conclude that No Cloning states that, given a qubit $Q$, it's impossible to make another qubit $S$ such that: $S = S_0 \left| 0 \right> + S_1 \left| 1 \right>$ such that $S_0 = Q_0, S_1 = Q_1$. Is this correct? Answer: You need to use a more precise notion of the cloning process, in order to understand the general statement and its repercussions. I will give you some outline here (mainly following the explanations of B. Schumacher and M. Westmoreland given in the reference), with an emphasis on the most important aspects of it, but to fully appreciate the importance of the No-cloning Thm I highly recommend looking through the various ways you can prove it (I can show you some ways of proving it in this post, if you find it necessary). Main statement: No-cloning theorem states that no unitary cloning machine exists that would clone arbitrary initial states. A softer version would be: Quantum information cannot be copied exactly. Repercussions if arbitrary cloning were possible: (not following any specific order) If a hypothetical device existed that could duplicate the state of a quantum system, then an eavesdropper would be able to break the security of the $BB84$ key distribution protocol.
A cloning machine would allow one to create multi-copy states $|x\rangle^{\otimes n}$ from a single state $|x\rangle.$ But take another single state $|y\rangle,$ create its corresponding multi-state $|y\rangle^{\otimes n},$ and you can overcome the basic distinguishability limitations of states in Quantum Mechanics, as multi-copy states can be better distinguished (correct term would be more reliably) than single states. Recall that the distinguishability of two states $x,y$ is given by their amount of overlap, i.e. $|\langle x | y\rangle|,$ the closer this is to vanishing, the better we can distinguish between the states. (If you're not familiar with the concept of multi-states being more reliably distinguishable, let me know). The no-cloning theorem guarantees the no-communication theorem and thus prevents faster-than-light communication using entangled states. (the no-communication theorem basically says that: if two parties have systems $A$ and $B$ respectively, and suppose their joint state is entangled $|\psi^{AB}\rangle,$ then: the two parties cannot transfer information to each other either by: choosing different measurements for their respective systems, or evolving their systems using different unitary time evolution operators.) More precise definition of the cloning problem: There are three elements involved, the initial state (input) to be copied $(1)$, a blank state onto which we want to create the copy $(2)$ and a machine that plays the role of the cloning device $(3)$. The composite system is then $(123).$ Suppose the state of $(2)$ is $|0\rangle,$ state of $(1)$ being $|\Phi\rangle$ and the starting state of $(3)$ is $|D_i\rangle.$ Let us denote the action of the cloning device by the unitary operator $U.$ It is important to point out that the starting state of the composite system $(23)$ and the action $U$ are independent of the state to be copied, i.e. 
system $(1).$ Our starting composite state is then: $$ |123\rangle_i = |\Phi\rangle \otimes |0\rangle \otimes |D_i\rangle $$ By applying $U$, thus after cloning, the state of $(1)$ is unchanged, but upon success of the cloning, the state of $(2)$ must be exactly that of $(1).$ So $$ U|123\rangle_i = |\Phi\rangle \otimes |\Phi\rangle \otimes |D_f\rangle $$ Given this description, the no-cloning theorem says that such $U$ does not exist for arbitrary states of $(1).$ Hints on the proofs: One way would be to use the principle of superposition to show that such cloning is not possible, by showing that if the device is to work for two orthogonal states, it would create entangled outputs for their superposition. (thus the subsystems are no longer even in pure states) Another way would be going back to the concept of distinguishability between non-orthogonal states, and using the fact that unitary time evolution preserve inner products, thus showing that the cloning device is impossible as it would allow considerable improvement on the distinguishability. Reference: A highly recommended reference, also for further reading on all this matter, would be the book of Quantum Processes, systems & Information by Benjamin Schumacher and Michael Westmoreland. (Of relevance here, are chapters 4 and 7.)
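The superposition proof sketched in the first hint can be seen concretely in a tiny numerical example (my own sketch, not from the reference). A CNOT gate copies the basis states, $|0\rangle|0\rangle \mapsto |00\rangle$ and $|1\rangle|0\rangle \mapsto |11\rangle$, yet applied to $|+\rangle|0\rangle$ it produces the entangled Bell state rather than the product clone $|+\rangle|+\rangle$:

```python
import math

def tensor(u, v):
    # Kronecker product of two 2-dimensional state vectors
    return [a * b for a in u for b in v]

def cnot(state):
    # CNOT in the basis |00>, |01>, |10>, |11>: swaps the last two amplitudes
    return [state[0], state[1], state[3], state[2]]

s = 1 / math.sqrt(2)
plus = [s, s]                      # |+> = (|0> + |1>) / sqrt(2)
zero = [1.0, 0.0]

# CNOT clones the basis states...
assert cnot(tensor([1, 0], zero)) == [1, 0, 0, 0]        # |00>
assert cnot(tensor([0, 1], zero)) == [0, 0, 0, 1]        # |11>

# ...but on the superposition it yields the Bell state (|00> + |11>)/sqrt(2),
out = cnot(tensor(plus, zero))
# whereas a perfect cloner would have to output the product state |+>|+>.
want = tensor(plus, plus)

overlap = sum(a * b for a, b in zip(out, want))
assert abs(overlap ** 2 - 0.5) < 1e-12    # fidelity 1/2, not 1
```

Since any would-be cloner is linear, whatever it does on the basis states already fixes (and spoils) what it does on their superpositions, which is exactly the content of the theorem.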
{ "domain": "physics.stackexchange", "id": 29915, "tags": "quantum-mechanics, quantum-information, hilbert-space, quantum-computer, unitarity" }
Alternatives to Toronto Book Corpus
Question: As the Toronto Book Corpus is no longer available (or rather, only in lowercase), I am looking for an alternative dataset of comparable language variety, quality, and size. Any suggestions? The Gutenberg Standardized Corpus is too big and still requires lots of preprocessing. Answer: First, for context, I suggest that others who come across this check the writeup of a researcher who also tried to find the Toronto Book Corpus. There is a potential copy of the corpus shared by Igor Brigadir on Twitter, although it is not certain that it is the same exact corpus (see discussion). HuggingFace datasets hosts a copy of this corpus. As you noted, this version is in lowercase. There are other people who have replicated the corpus to some degree, like Shawn Presser, who shared it on Twitter (download link). Here is some context for this replication and more info on the matter. This replication is NOT in lowercase. Also, here you can find the instructions and code to replicate it yourself. Finally, there is this paper studying the problems of the Toronto Book Corpus and its replications.
{ "domain": "datascience.stackexchange", "id": 11519, "tags": "nlp, dataset" }
Applying a function to each row of a matrix
Question: My goal is to apply a function func1 to each row of the matrix input and then return a new one resulting from the transformation. The code works, but when the data frame contains more than 1 million rows, it becomes extremely slow. How can I optimize my code? I am just starting to learn programming and I am not familiar with strategies to speed up R code. The function performs two main steps: Find the locations of all neighboring cells that are located in the extent PR from a focal cell, extract the raster's values at these locations and calculate a probability matrix Find the maximum value in the matrix and the new cell corresponding with the maximum value. Here's the data frame and raster: library(dplyr) library(raster) library(psych) set.seed(1234) n = 10000 input <- as.matrix(data.frame(c1 = sample(1:10, n, replace = T), c2 = sample(1:10, n, replace = T), c3 = sample(1:10, n, replace = T), c4 = sample(1:10, n, replace = T))) r <- raster(extent(0, 10, 0, 10), res = 1) values(r) <- sample(1:1000, size = 10*10, replace = T) ## plot(r) Here's my code to apply the function to each row in the matrix: system.time( test <- input %>% split(1:nrow(input)) %>% map(~ func1(.x, 2, 2, "test_1")) %>% do.call("rbind", .)) Here's the function: func1 <- function(dataC, PR, DB, MT){ ## Retrieve the coordinates x and y of the current cell c1 <- dataC[[1]] c2 <- dataC[[2]] ## Retrieve the coordinates x and y of the previous cell c3 <- dataC[[3]] c4 <- dataC[[4]] ## Initialize the coordinates x and y of the new cell newc1 <- -999 newc2 <- -999 if(MT=="test_1"){ ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - PR) : (c1 - 1)), y = c((c2 - PR) : (c2 - 1))) ## cells at upper-left corner V1 <- mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * sqrt(2) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - PR) : (c1 - 1)), y = c((c2 - 1) : (c2 + 1))) ## cells at upper-middle corner V2 <- mean(raster::extract(r, 
cbind(matC[,1], matC[,2])), na.rm = T) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - PR) : (c1 - 1)), y = c((c2 + 1) : (c2 + PR))) ## cells at upper-right corner V3 <- mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * sqrt(2) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - 1) : (c1 + 1)), y = c((c2 - PR) : (c2 - 1))) ## cells at left corner V4 <- mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * DB V5 <- 0 ## cell at middle corner ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - 1) : (c1 + 1)), y = c((c2 + 1) : (c2 + PR))) ## cells at right corner V6 <- mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 + 1) : (c1 + PR)), y = c((c2 - PR) : (c2 - 1))) ## cells at bottom-left corner V7 <- mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * sqrt(2) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 + 1) : (c1 + PR)), y = c((c2 - 1) : (c2 + 1))) ## cells at bottom-middle corner V8 <- mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 + 1) : (c1 + PR)), y = c((c2 + 1) : (c2 + PR))) ## cells at bottom-right corner V9 <- mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * sqrt(2) * DB } else if(MT=="test_2"){ ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - PR) : (c1 - 1)), y = c((c2 - PR) : (c2 - 1))) ## cells at upper-left corner V1 <- harmonic.mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * sqrt(2) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - PR) : (c1 - 1)), y = c((c2 - 1) : (c2 + 1))) ## cells at upper-middle corner V2 <- harmonic.mean(raster::extract(r, 
cbind(matC[,1], matC[,2])), na.rm = T) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - PR) : (c1 - 1)), y = c((c2 + 1) : (c2 + PR))) ## cells at upper-right corner V3 <- harmonic.mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * sqrt(2) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - 1) : (c1 + 1)), y = c((c2 - PR) : (c2 - 1))) ## cells at left corner V4 <- harmonic.mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * DB V5 <- 0 ## cells at middle corner ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 - 1) : (c1 + 1)), y = c((c2 + 1) : (c2 + PR))) ## cells at right corner V6 <- harmonic.mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 + 1) : (c1 + PR)), y = c((c2 - PR) : (c2 - 1))) ## cells at bottom-left corner V7 <- harmonic.mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * sqrt(2) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 + 1) : (c1 + PR)), y = c((c2 - 1) : (c2 + 1))) ## cells at bottom-middle corner V8 <- harmonic.mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * DB ## Extract the raster values with coordinates in matC matC <- expand.grid(x = c((c1 + 1) : (c1 + PR)), y = c((c2 + 1) : (c2 + PR))) ## cells at bottom-right corner V9 <- harmonic.mean(raster::extract(r, cbind(matC[,1], matC[,2])), na.rm = T) * sqrt(2) * DB } ## Build the matrix of cell selection tot <- sum(c(1/V1, 1/V2, 1/V3, 1/V4, 1/V6, 1/V7, 1/V8, 1/V9), na.rm = TRUE) mat_V <- matrix(data = c((1/V1)/tot, (1/V2)/tot, (1/V3)/tot, (1/V4)/tot, V5, (1/V6)/tot, (1/V7)/tot, (1/V8)/tot, (1/V9)/tot), nrow = 3, ncol = 3, byrow = TRUE) while((newc1 == -999 && newc2 == -999) || (c3 == newc1 && c4 == newc2)){ ## Test if the new cell is the previous cell if(c3 == newc1 && c4 == newc2){ 
mat_V[choiceC[1], choiceC[2]] <- NaN ## print(mat_V) } ## Find the maximum value in the matrix choiceC <- which(mat_V == max(mat_V, na.rm = TRUE), arr.ind = TRUE) ## print(choiceC) ## If there are several maximum values if(nrow(choiceC) > 1){ choiceC <- choiceC[sample(1:nrow(choiceC), 1), ] } ## Find the new cell relative to the current cell if(choiceC[1]==1 & choiceC[2]==1){ ## cell at the upper-left corner newC <- matrix(c(x = c1 - 1, y = c2 - 1), ncol = 2) } else if(choiceC[1]==1 & choiceC[2]==2){ ## cell at the upper-middle corner newC <- matrix(c(x = c1 - 1, y = c2), ncol = 2) } else if(choiceC[1]==1 & choiceC[2]==3){ ## cell at the upper-right corner newC <- matrix(c(x = c1 - 1, y = c2 + 1), ncol = 2) } else if(choiceC[1]==2 & choiceC[2]==1){ ## cell at the left corner newC <- matrix(c(x = c1, y = c2 - 1), ncol = 2) } else if(choiceC[1]==2 & choiceC[2]==3){ ## cell at the right corner newC <- matrix(c(x = c1, y = c2 + 1), ncol = 2) } else if(choiceC[1]==3 & choiceC[2]==1){ ## cell at the bottom-left corner newC <- matrix(c(x = c1 + 1, y = c2 - 1), ncol = 2) } else if(choiceC[1]==3 & choiceC[2]==2){ ## cell at the bottom-middle corner newC <- matrix(c(x = c1 + 1, y = c2), ncol = 2) } else if(choiceC[1]==3 & choiceC[2]==3){ ## cell at the bottom-right corner newC <- matrix(c(x = c1 + 1, y = c2 + 1), ncol = 2) } newc1 <- newC[[1]] newc2 <- newC[[2]] } return(newC) } Here's the elapsed time when n = 10000. Ideally, I would like to reduce the time required to < 1 min. user system elapsed 108.96 0.01 109.81 Answer: Did some upgrades, but only for the 'test_1' case; you can update the 'test_2' case similarly. For me this function runs in 13.54 sec vs 26.16 sec for your original code. 
func1 <- function(dataC, PR, DB, MT){ ## Retrieve the coordinates x and y of the current cell c1 <- dataC[[1]] c2 <- dataC[[2]] ## Retrieve the coordinates x and y of the previous cell c3 <- dataC[[3]] c4 <- dataC[[4]] ## Initialize the coordinates x and y of the new cell newc1 <- -999 newc2 <- -999 a1 <- c((c1 - PR), (c1 - 1)) a2 <- c((c2 - PR), (c2 - 1)) a3 <- c((c2 - 1), (c2 + 1)) a4 <- c((c2 + 1), (c2 + PR)) a5 <- c((c1 - 1), (c1 + 1)) a6 <- c((c1 + 1), (c1 + PR)) xx <- c(a1, a2, a3, a4, a5, a6) xx <- seq(min(xx), max(xx)) gg <- expand.grid(xx, xx, KEEP.OUT.ATTRS = F) gg <- as.matrix(gg) gg1 <- gg[, 1] gg2 <- gg[, 2] ff2 <- function(matC) { y1 <- raster::extract(r, matC) mean(y1, na.rm = T) } cgrid <- function(x, y) { gg[gg1 >= x[1] & gg1 <= x[2] & gg2 >= y[1] & gg2 <= y[2], ] } if (MT == "test_1") { ## cells at upper-left corner V1 <- ff2(cgrid(x = a1, y = a2)) * sqrt(2) * DB ## cells at upper-middle corner V2 <- ff2(cgrid(x = a1, y = a3)) * DB ## cells at upper-right corner V3 <- ff2(cgrid(x = a1, y = a4)) * sqrt(2) * DB ## cells at left corner V4 <- ff2(cgrid(x = a5, y = a2)) * DB V5 <- 0 ## cell at middle corner ## cells at right corner V6 <- ff2(cgrid(x = a5, y = a4)) * DB ## cells at bottom-left corner V7 <- ff2(cgrid(x = a6, y = a2)) * sqrt(2) * DB ## cells at bottom-middle corner V8 <- ff2(cgrid(x = a6, y = a3)) * DB ## cells at bottom-right corner V9 <- ff2(cgrid(x = a6, y = a4) ) * sqrt(2) * DB } ## Build the matrix of cell selection V <- c(V1, V2, V3, V4, V5, V6, V7, V8, V9) tot <- sum(1/V[-5], na.rm = TRUE) mat_V <- matrix((1/V)/tot, nrow = 3, ncol = 3, byrow = TRUE) mat_V[5] <- V5 while ((newc1 == -999 && newc2 == -999) || (c3 == newc1 && c4 == newc2)) { ## Test if the new cell is the previous cell if (c3 == newc1 && c4 == newc2) { mat_V[choiceC[1], choiceC[2]] <- NaN ## print(mat_V) } ## Find the maximum value in the matrix choiceC <- which(mat_V == max(mat_V, na.rm = TRUE), arr.ind = TRUE) ## If there are several maximum values if (nrow(choiceC) 
> 1) choiceC <- choiceC[sample.int(nrow(choiceC), 1L), ] ## Find the new cell relative to the current cell newC <- c(x = c1 + (choiceC[1] - 2), y = c2 + (choiceC[2] - 2)) newC <- matrix(newC, ncol = 2) newc1 <- newC[[1]] newc2 <- newC[[2]] } return(newC) }
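A side note on where the time goes: in both versions above, most of the cost is in calling raster::extract() repeatedly for every row. A sketch of the same idea in Python/numpy (a hypothetical analogue, not the original R code — the helper name corner_mean is invented for illustration): read the grid values into a plain array once, after which every neighbourhood mean is a cheap array slice.

```python
import numpy as np

# Stand-in for the raster r: pull all cell values into a plain array ONCE.
rng = np.random.default_rng(1234)
grid = rng.integers(1, 1001, size=(10, 10)).astype(float)

def corner_mean(grid, r0, r1, c0, c1):
    """Mean of grid[r0:r1, c0:c1], clipped to the grid; NaN if empty."""
    block = grid[max(r0, 0):max(r1, 0), max(c0, 0):max(c1, 0)]
    return block.mean() if block.size else np.nan

# e.g. the PR x PR block at the upper-left of cell (5, 5) with PR = 2:
print(corner_mean(grid, 5 - 2, 5, 5 - 2, 5))
```

In R, the analogous change would be to pull the raster into a plain matrix once up front (e.g. with as.matrix(r) or values(r)) and index that matrix inside func1, rather than extracting from the raster object on every call.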
{ "domain": "codereview.stackexchange", "id": 32739, "tags": "performance, matrix, r" }
Unable to start rqt_* plugins
Question: Hello, I am using ROS melodic on Arch linux. I installed ROS and it's plugins using yay. I am able to use all parts of ROS just like I used to on my ubuntu. The only problem is with rqt_* plugins. I have installed them on my system. Whenever I try to launch them, I am faced with the following error. Note that I have sourced ROS environment using following command. (I have placed it in my .bashrc.) source /opt/ros/melodic/setup.bash For ex. here I am starting rqt_image_view and I get the following error. (The error is the same even if I run it using rosrun rqt_image_view rqt_image_view having run roscore). [jack@lenovo ~]$ rqt_image_view CompositePluginProvider.discover() could not discover plugins from provider "<class 'rqt_gui.rospkg_plugin_provider.RospkgPluginProvider'>": Traceback (most recent call last): File "/opt/ros/melodic/lib/python3.9/site-packages/qt_gui/composite_plugin_provider.py", line 57, in discover plugin_descriptors = plugin_provider.discover(discovery_data) File "/opt/ros/melodic/lib/python3.9/site-packages/rqt_gui/ros_plugin_provider.py", line 67, in discover plugin_descriptors += self._parse_plugin_xml(package_name, plugin_xml) File "/opt/ros/melodic/lib/python3.9/site-packages/rqt_gui/ros_plugin_provider.py", line 128, in _parse_plugin_xml for library_el in root.getiterator('library'): AttributeError: 'ElementTree' object has no attribute 'getiterator' CompositePluginProvider.discover() could not discover plugins from provider "<class 'qt_gui.recursive_plugin_provider.RecursivePluginProvider'>": Traceback (most recent call last): File "/opt/ros/melodic/lib/python3.9/site-packages/qt_gui/composite_plugin_provider.py", line 57, in discover plugin_descriptors = plugin_provider.discover(discovery_data) File "/opt/ros/melodic/lib/python3.9/site-packages/qt_gui/recursive_plugin_provider.py", line 53, in discover plugin_descriptors = self._plugin_provider.discover(discovery_data) File 
"/opt/ros/melodic/lib/python3.9/site-packages/rqt_gui/ros_plugin_provider.py", line 67, in discover plugin_descriptors += self._parse_plugin_xml(package_name, plugin_xml) File "/opt/ros/melodic/lib/python3.9/site-packages/rqt_gui/ros_plugin_provider.py", line 128, in _parse_plugin_xml for library_el in root.getiterator('library'): AttributeError: 'ElementTree' object has no attribute 'getiterator' CompositePluginProvider.discover() could not discover plugins from provider "<class 'qt_gui.recursive_plugin_provider.RecursivePluginProvider'>": Traceback (most recent call last): File "/opt/ros/melodic/lib/python3.9/site-packages/qt_gui/composite_plugin_provider.py", line 57, in discover plugin_descriptors = plugin_provider.discover(discovery_data) File "/opt/ros/melodic/lib/python3.9/site-packages/qt_gui/recursive_plugin_provider.py", line 53, in discover plugin_descriptors = self._plugin_provider.discover(discovery_data) File "/opt/ros/melodic/lib/python3.9/site-packages/rqt_gui/ros_plugin_provider.py", line 67, in discover plugin_descriptors += self._parse_plugin_xml(package_name, plugin_xml) File "/opt/ros/melodic/lib/python3.9/site-packages/rqt_gui/ros_plugin_provider.py", line 128, in _parse_plugin_xml for library_el in root.getiterator('library'): AttributeError: 'ElementTree' object has no attribute 'getiterator' qt_gui_main() found no plugin matching "rqt_image_view/ImageView" try passing the option "--force-discover" I have been facing this thing for sometime and unable to resolve this problem. Any help would be greatly appreciated. Originally posted by jacka122 on ROS Answers with karma: 5 on 2021-02-20 Post score: 0 Original comments Comment by ejalaa12 on 2021-02-20: You're using ROS melodic, which works python2, but it seems you're using python3. The error you're getting is because rqt in melodic is implemented in python2, and specifically uses the xml standard library. The getiterator method exists only in python2 and was deprecated since python2.7. 
Comment by jacka122 on 2021-02-25: Anyway I could resolve this issue? Comment by ejalaa12 on 2021-02-25: Actually the getiterator method was removed between python3.8 and python3.9. What you could do is either use python3.8, that should work, you would only get a deprecation warning. Or you can clone the rqt_gui package in a workspace and edit the ros_plugin_provider.py file to replace getiterator with iter. Compile and source. Comment by jacka122 on 2021-02-25: Thanks a ton! compiling works like a charm! Boy I wish there had been some sort of backward compatibility with python. Anyway thanks a lot! I am surprised I could not find the problem reported by others! What are the odds I stumbled onto this!? Comment by ejalaa12 on 2021-02-25: You're welcome :). The deprecation warning for using this method was available since python 2.7 ! which means they had until python 3.8 to use the new method. The old method was finally removed in python3.9. You were on a very edgy case indeed ^^. Could you tick the answer I posted then? Comment by jacka122 on 2021-02-25: Of course! I ticked the answer :) Answer: the getiterator method was removed between python3.8 and python3.9. What you could do is either use python3.8, that should work, you would only get a deprecation warning. Or you can clone the rqt_gui package in a workspace and edit the ros_plugin_provider.py file to replace getiterator with iter. Compile and source. Originally posted by ejalaa12 with karma: 81 on 2021-02-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2021-02-26: And .. in all cases report on the appropriate issue tracker and as @jacka122 now knows how to fix it: contribute the fix in a Pull Request. Comment by gvdhoorn on 2021-02-26: And it seems someone already reported it: ros-visualization/rqt#240, and there is also a Pull Request fixing it: ros-visualization/rqt#241. 
Comment by ejalaa12 on 2021-02-26: @jacka122 so your issue will be fixed as soon as the PR will be merged. At this point you won't need your local modified version of the package.
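For reference, the one-line nature of the fix can be seen with a toy XML snippet (the XML below is illustrative only, not rqt's actual plugin manifest): getiterator() was deprecated long ago and removed in Python 3.9, and Element.iter() is the documented drop-in replacement.

```python
import xml.etree.ElementTree as ET

# A toy stand-in for an rqt plugin description file (illustrative only).
plugin_xml = """
<library path="src">
  <class name="ImageView" type="rqt_image_view.image_view.ImageView"/>
</library>
"""

root = ET.fromstring(plugin_xml)

# Python < 3.9:  root.getiterator('library')  -- deprecated, then removed
# Python >= 3.9: root.iter('library') is the drop-in replacement
libraries = [el.get('path') for el in root.iter('library')]
print(libraries)  # -> ['src']
```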
{ "domain": "robotics.stackexchange", "id": 36117, "tags": "ros, python, ros-melodic, rqt" }
Free diving physiological changes
Question: When training for free diving, there are several physiological and psychological changes that enable this activity; however, one of the changes that I do not understand is increased resistance to blood acidification. Part of the resistance to pain is psychological, but from my understanding the body itself is also less sensitive to the carbon dioxide buildup in the blood itself, so I am wondering how this happens. Is this a change in the nervous system, blood or chemoreceptors themselves? Answer: According to Libretexts Medicine (link), there are two types of chemoreceptors that help regulate breathing. Those with the most impact, the central chemoreceptors, can be desensitized. The relevant quote is: Central chemoreceptors: These are located on the ventrolateral surface of medulla oblongata and detect changes in the pH of spinal fluid. They can be desensitized over time from chronic hypoxia (oxygen deficiency) and increased carbon dioxide. Peripheral chemoreceptors: These include the aortic body, which detects changes in blood oxygen and carbon dioxide, but not pH, and the carotid body which detects all three. They do not desensitize, and have less of an impact on the respiratory rate compared to the central chemoreceptors. From the FreediveUK website: There are two types of breath training necessary. One is psychological, to learn to control the breathing reflex due to high CO2 levels, and the other is to train the body to operate with lower levels of O2. 
They also mention the mammalian dive reflex which produces a physiological response: When the face is submerged and water fills the nostrils, sensory receptors sensitive to wetness within the nasal cavity and other areas of the face supplied by the fifth (V) cranial nerve (the trigeminal nerve) relay the information to the brain.1 The tenth (X) cranial nerve, (the vagus nerve) – part of the autonomic nervous system – then produces bradycardia and other neural pathways elicit peripheral vasoconstriction, restricting blood from limbs and all organs to preserve blood and oxygen for the heart and the brain (and lungs), concentrating flow in a heart–brain circuit and allowing the animal to conserve oxygen.
{ "domain": "biology.stackexchange", "id": 12511, "tags": "biochemistry, hematology, chemical-communication" }
Behavior of Saha and Boltzmann
Question: So I'm just wondering why the Saha and Boltzmann distributions behave differently as temperature increases? I know one is for ionization levels while the other is for energy levels but is that the answer? Answer: It's likely due to a combination of two reasons, one of which you already mentioned: As you state, the Saha equation relates the ionization levels while the Maxwell-Boltzmann distribution deals with the energy levels. The Saha equation incorporates quantum mechanical effects in deriving its equation, while the Maxwell-Boltzmann considers quantum effects negligible. I recommend checking out Fundamentals of Plasma Physics by J Bittencourt, as the Maxwell-Boltzmann distribution and Saha ionization equations are well covered in the book.
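To make the contrast concrete, here are the standard textbook forms of the two relations (as covered in e.g. Bittencourt). The Boltzmann excitation equation gives the ratio of level populations within one ionization stage, $$ \frac{n_j}{n_i} = \frac{g_j}{g_i}\, e^{-(E_j - E_i)/k_B T} \quad , $$ while the Saha equation gives the ratio of adjacent ionization stages, $$ \frac{n_{i+1}\, n_e}{n_i} = \frac{2 g_{i+1}}{g_i} \left( \frac{2\pi m_e k_B T}{h^2} \right)^{3/2} e^{-\chi_i / k_B T} \quad , $$ where $g$ are statistical weights, $\chi_i$ is the ionization energy and $n_e$ the free-electron density. The extra $T^{3/2}$ phase-space factor (which carries Planck's constant $h$, i.e. the quantum of phase space) and the dependence on $n_e$ are what make the two expressions behave differently as $T$ increases.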
{ "domain": "physics.stackexchange", "id": 13045, "tags": "astrophysics" }
What does it look like to split an EPR pair?
Question: I am reading Quantum Computation and Quantum Information by Michael A. Nielsen & Isaac L. Chuang, and I am confused about a concept presented in Section 1.3.7: Quantum Teleportation. The book writes "While together, Alice and Bob generated an EPR pair, each taking one qubit of the EPR pair when they separated." For example, if the EPR pair in question is: $|\beta_{00}\rangle=\frac{|00\rangle+|11\rangle}{\sqrt{2}}$ How can we write mathematically what Alice and Bob each have in their possession? This idea comes up again when quantum teleportation is applied to superdense coding in Section 2.3, where again Alice and Bob "share a pair of qubits in the entangled state $|\beta_{00}\rangle$. Thanks in advance! Answer: The whole point of an EPR pair is that you cannot write (without losing some information) "This is what Alice has" and "This is what Bob has". Partial descriptions can be given using reduced density matrices. However, if you want to identify which bits Alice and Bob each have, we often use a notation like $$ (|0\rangle_A|0\rangle_B+|1\rangle_A|1\rangle_B)/\sqrt{2} $$ which helps to emphasise which ket corresponds to the qubits being held by each party.
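A small numerical check of the "partial description" point (a sketch using numpy; the reshape-and-trace trick is the standard way to take a partial trace, not something from the book): tracing out Bob's qubit from $|\beta_{00}\rangle$ leaves Alice with the maximally mixed state $I/2$, which is the most that can be said about her qubit alone.

```python
import numpy as np

# |beta_00> = (|00> + |11>)/sqrt(2) as a vector in the computational basis
# ordered |00>, |01>, |10>, |11> (first factor = Alice, second = Bob).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

rho = np.outer(psi, psi.conj())   # full two-qubit density matrix

# Partial trace over Bob: view rho as rho[a, b, a', b'] and sum over b = b'.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_A)   # 0.5 * identity: Alice's qubit alone is maximally mixed
```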
{ "domain": "quantumcomputing.stackexchange", "id": 1422, "tags": "quantum-state, entanglement, nielsen-and-chuang, teleportation, superdense-coding" }
Thermodynamic limit and periodic boundary conditions
Question: Background: Concerning the use of periodic boundary conditions for systems of $N$ (identical) particles, Ref. 1 considers a cube $\Lambda$ with volume $L^3$ and notes regarding the interaction potential of the form $V(x_1-x_2)$: It is often useful, for calculations, to express $V$ in the plane-wave basis $|k\sigma\rangle$ with periodic boundary conditions. The Fourier transform of $V(x)$ in a cubic box $\Lambda$ is given by $$ \tilde V_L(k)=\int_\Lambda \mathrm dx\, e^{-ikx} \, V(x) \tag{3.111}$$ with corresponding Fourier series $$V_L(x) = \frac{1}{L^3} \sum\limits_k e^{ikx}\, \tilde V_L(k) \quad .\tag{3.112} $$ Note that $V_L(x)$ is a periodic function with period $L$ and is only equal to $V(x)$ if $x$ is in $\Lambda$. In the limit of infinite volume, however, $\tilde V_L(k)$ approaches the Fourier transform $\tilde V(k)$ of $V(x)$ in the whole space, $$ \lim\limits_{L\to\infty}\tilde V_L(k) = \int \mathrm d x\, e^{-ikx}\, V(x) = \tilde V(k) \tag{3.113}$$ and $$\lim\limits_{L\to\infty} V_L(x)=V(x) \quad .\tag{3.114} $$ The authors then proceed to express the two-body interaction in second quantization and again note that $V_L(x_1-x_2)$ is equal to $V(x_1-x_2)$ only if $x_1-x_2 \in \Lambda$. But again they argue that in the limit of infinite volume, we recover the original potential. All of this more or less makes sense to me, but I have one doubt. Question: It seems that it matters what cube exactly we take. Indeed, I see that if we take $\Lambda=[-L/2,L/2]^3 \subset \mathbb R^3$, then the limits in $(3.113)$ and $(3.114)$ yield the desired results. More generally, we can take some "sequence" of cubes which "converge" to $\mathbb R^3$, a notion which I think can be made rigorous. However, by taking $\Lambda=[0,L]$ instead, I cannot see that this is true in general. First, it does not reproduce the Fourier transform in equation $(3.113)$. 
Moreover, for such an interval (I think for any asymmetric interval), the periodically extended interaction potential (cf. Ref. 4) $V_L$ is not symmetric anymore, which should be the case for identical particles. But the authors nowhere assume some special choice of $\Lambda$. The other references (see below) proceed and argue in the same manner. References: Many-Body Problems and Quantum Field Theory: An Introduction. Martin and Rothen. Section 3.2 page 111 and section 4 pages 135-137. Nonequilibrium many-body theory of quantum systems. Stefanucci and van Leeuwen. Appendix E, page 529. Response Theory of the Electron-Phonon Coupling. Starke and Schober. Sections 2.2.1 and A.5. arxiv link. Theoretical Solid State Physics In two Volumes. Volume 1. Albert Haug. Chapter II.B, section 24 (a), page 200. Closely related PSE post: Operators and periodic boundary conditions Answer: You could construct $V_L$ in a first naive way without using a unit cell. Starting with $V$ defined on $\mathbb R^3$, take a Bravais lattice $\Lambda$ in real space (sorry for the conflict, but this is standard notation). The idea is that by taking the lattice $L\Lambda=\{Lx|x\in\Lambda\}$ with $L\to\infty$, you should recover the infinite-space limit. There are two ways of viewing the construction of $V_L$. In Fourier space, this is done by sampling the Fourier transform: $$ \tilde V_L(k) = \sum_{l\in \frac{1}{L}\Lambda^*} \tilde V(l)\delta(k-l) $$ where $\frac{1}{L}\Lambda^*$ is the dual lattice of $L\Lambda$. Note in particular that the dependence in $L$ is inverted when going to dual space, and that the sampling lattice gets finer as $L\to\infty$, which is why: $$ \tilde V_L \to \tilde V $$ In real space, this is done by the Poisson summation formula: $$ V_L(x) = \sum_{y\in L\Lambda}V(x-y) $$ Intuitively, if $V$ decays fast, you'll only have one term left. You can prove that under more general assumptions on $V$: $$ V_L\to V $$ You can check that both approaches are related by Fourier transform. 
However, if you know a bit of sampling, it's best to filter out high frequencies before the sampling in order to avoid aliasing. This is how you recover your method and the unit cell comes into play. The idea is to use a brick-wall filter over a unit cell of $\Lambda$ (real space) $\Omega$, so that $V$ becomes $V_{L\Omega}:=V1_{L\Omega}$ (indicator function). This is why when you calculate the Fourier transform, you only integrate over the finite domain. In Fourier space, $\tilde V_{L\Omega}$ is a smoothed out version of $\tilde V$. Having done this filtering, you can now proceed to sampling in dual space, or equivalently periodising in real space. It turns out that, thanks to the filtering, the next step loses no information, by the higher-dimensional version of the Shannon-Whittaker interpolation formula in dual space or by direct inspection in real space. $V_L$ is now given by: $$ V_L(x) = \sum_{y\in L\Lambda}V_{L\Omega}(x-y) \\ \tilde V_L(k) = \sum_{l\in \frac{1}{L}\Lambda^*} \tilde V_{L\Omega}(l)\delta(k-l) $$ In order to recover the correct limiting behavior $V_L\to V$, one approach is to impose that $L\Omega\to\mathbb R^3$. Indeed, $V_L$ and $V$ coincide on $L\Omega$, so this is sufficient. $L\Omega\to\mathbb R^3$ is equivalent to $\Omega$ being a neighborhood of the origin. This is why your counterexample failed, since the origin was at the boundary. The fact that $\Omega$ is a unit cell is necessary and sufficient for the sampling/periodisation to be invertible. However, it is not entirely relevant if all you are interested in is the limit. You can choose $\Omega$ to be rather arbitrary and relax the condition of being a unit cell of $\Lambda$. The key argument is that $V_L,V$ coincide in a neighbourhood of the origin. A natural generalisation would be that the complementary set of $(\Lambda-\{0\})+\Omega$ is a neighbourhood of the origin. This guarantees that the periodisation does not generate an overlap at the origin. 
Btw, you can generalise the approach by considering different filters instead of brick wall filters. Hope this helps.
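The limiting statement $V_L \to V$ can be checked numerically in 1D with the Poisson-summation form above (a sketch; the Gaussian model potential and the image cutoff n_images are arbitrary choices for illustration):

```python
import numpy as np

def V(x):
    # A fast-decaying model potential (a Gaussian, chosen for illustration).
    return np.exp(-x**2)

def V_L(x, L, n_images=50):
    # Periodisation in the Poisson-summation picture: a sum of copies of V
    # translated by the 1D lattice L*Z (truncated to a finite set of images).
    return sum(V(x - n * L) for n in range(-n_images, n_images + 1))

x = np.linspace(-1.0, 1.0, 201)   # a neighbourhood of the origin
for L in (2.0, 4.0, 8.0):
    err = np.max(np.abs(V_L(x, L) - V(x)))
    print(L, err)                 # the error shrinks rapidly as L grows
```

On the fixed neighbourhood of the origin the nearest image sits a distance $L$ away, so the error decays like $V(L - 1)$, consistent with the intuition that only one term of the Poisson sum survives.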
{ "domain": "physics.stackexchange", "id": 95166, "tags": "quantum-mechanics, operators, hilbert-space, mathematics, boundary-conditions" }
BMI limits in interpretation in people with high/low heights
Question: In a study, I measure BMI of a large sample of adult people. Among them, there are some who have very low or very high heights. As I can read on the internet (like here or even in the wiki, for instance), BMI is not relevant for very short or very tall people. Some math professor even wrote an article with a "better" calculation of BMI, but with a very low impact in the science world (didn't find it on pubmed) so it's not really peer-reviewed. Let's take some examples, assuming "normal BMI" is 21: for a 110cm tall adult, ideal weight is 25kg for a 130cm tall adult, ideal weight is 36kg These weights appear really low to me. Note that the dwarfism cutoff is 145cm in France. My question is then the following: How could I choose a cutoff on height to exclude BMI values? NB: I didn't find anything on pubmed or google scholar, but maybe I missed an important article Answer: The best procedure is going to depend on what exactly you're going to do with your data. Regardless, you should NOT exclude the BMI values. Just include the height, weight, and BMI. If you're simply reporting BMI as a characteristic of your study population, you can annotate that figure and let your reviewers and readers decide what to make of it. This follows the principles in Chapter 4 of Hulley's Designing Clinical Research. You want to get and maintain all the data. If you're going to run some further analysis on BMI, or if BMI is an outcome in your study, then you can evaluate the impact of a cutoff on your results, but you have to decide ahead of time how you'll do your primary analysis. I recommend using the entire data set (no height cutoff) for your primary analysis (because *there is no established practice for a BMI height cutoff in the general medical literature), and then running a secondary analysis that excludes those research subjects above some threshold (you can pick it, but depending on the size of your sample, 2 standard deviations could be reasonable). 
That secondary analysis would be a hypothesis generating analysis. If excluding very short and very tall people changed the results of your analysis, that's something to mention in your discussion and something to consider for the next study. *If you're in a specialized field, then follow the practice of that field for your primary analysis.
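The primary/secondary split suggested above is easy to pre-register as code (a sketch on synthetic data — the cohort size and the height/weight distributions are invented for illustration, not taken from the study in the question):

```python
import numpy as np

# Synthetic cohort (illustrative parameters only).
rng = np.random.default_rng(0)
height_cm = rng.normal(170, 10, 1000)
weight_kg = rng.normal(70, 12, 1000)

bmi = weight_kg / (height_cm / 100) ** 2   # BMI = kg / m^2

# Primary analysis: the full data set, no height cutoff.
# Secondary (hypothesis-generating) analysis: exclude heights beyond 2 SD.
mu, sd = height_cm.mean(), height_cm.std()
keep = np.abs(height_cm - mu) < 2 * sd

print(bmi.mean())         # primary estimate
print(bmi[keep].mean())   # secondary estimate
print((~keep).sum())      # number of subjects the cutoff removes
```

If the primary and secondary estimates diverge, that difference is the thing to discuss, rather than dropping the extreme-height subjects from the data set.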
{ "domain": "biology.stackexchange", "id": 8866, "tags": "human-biology, research-design" }
Navigation stack parameters tuning to pass through a narrow space
Question: I've already applied nav stack to my own robot (it's not omni, but diff) and now I'm tuning parameters. I'd like to make the robot pass through a narrow space which consists of a wall and an obstacle. However, when it comes close to the space, it gets stuck. Concretely speaking, when it comes close to the narrow space, it does the following. (1) goes closer to the wall or obstacle at a very low velocity (I think it's strange behavior). (2) goes back or rotates a little. (3) continues (1) and (2) permanently without recovery behaviors (Local minimum ??). Actually, when I put an obstacle very close (about 20cm) to the robot, it behaves in this manner. I'd like to know ... (1) What parameters are important to avoid the above problems? (footprint, inflation_radius, pass_distance_bias, goal_distance_bias, occdist_scale?) (2) If it's not the parameters' problem, could you tell me how to solve it? Thanks in advance. Originally posted by moyashi on ROS Answers with karma: 721 on 2013-01-16 Post score: 4 Answer: Hi I had the same problem. Finally I solved it by reducing the inflation radius in the local map, and setting the inflation radius of the global map a little higher than the radius of the robot. Playing with these two parameters I realized that the robot won't ever travel through a zone that contains an inflated obstacle of the local costmap. If you set this value high, the robot won't pass through a narrow space. The obstacles on the global costmap let you control the distance between the trajectory and the obstacles, so that will prevent collisions. Hope it helps you ;) Originally posted by g.aterido with karma: 229 on 2013-01-16 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by damjan on 2013-01-17: Same thing worked for me on a Pioneer 3-DX robot. Reducing the inflation radius in the local map had by far the biggest effect on the robot's ability to pass through narrow spaces. 
Comment by moyashi on 2013-01-29: Thank you for your answers! I reduced the inflation radius from 0.5 to 0.3 and adjusted the footprint very tightly, then the robot could pass through the narrow space.
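Condensing the accepted answer plus the follow-up comment into a config sketch (the two-file layout and the global value of 0.45 are illustrative; only the 0.5 → 0.3 local reduction is what the original poster actually reported):

```yaml
# local_costmap_params.yaml -- reduce inflation so narrow gaps stay traversable
inflation_radius: 0.3   # was 0.5 in the original setup

# global_costmap_params.yaml -- keep global plans a safe distance from walls
inflation_radius: 0.45  # illustrative: a little larger than the robot radius
```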
{ "domain": "robotics.stackexchange", "id": 12459, "tags": "navigation" }
Finding possible forces that could have caused the motion of a system of particles
Question: 1, Assuming we have a system of particles and we know the position of each particle at any time, what are the possible forces acting on each particle at any time? Is there anything "interesting" we can say about this problem, and how would we state this sort of problem more formally, in a rigorous way? Assuming there is only one particle, or none of the particles interact in any way and aren't constrained in any way, the solution for a certain particle is to simply take the second derivative and multiply it by the mass of the particle. 2, Even assuming additional constraints on the system (which are satisfied at any time), like some of the particles being connected by massless rods etc, taking the second derivative and multiplying will also yield a solution, right? But how would I go about proving that? How do I show that in some sense, maybe a force acting on some particle A connected to another particle B by a massless rod won't cause some additional force to act on B? Perhaps I should be more exact about what the "massless rod"/constraint will do (it won't do anything as long as the particles are the proper distance apart - a massless rod is really just a tool that doesn't formally make sense). I could also instead think of some band/stick that will contract when stretched, and push out when compressed, so it won't create any force as the system goes through the motions. Still I don't feel completely convinced that it couldn't happen, that maybe the derived forces might cause some sort of weird movement. Ideally, I'd like a proof that if the derived forces act on the system and we add constraints that are satisfied at any time, then no force will be transferred through a rod in some sense or anything like that. But maybe the analogy with a band is as good an argument as possible. I'm sorry if this is too vague. 
Answer: 1, Assuming we have a system of particles and we know the position of each particle at any time, what are the possible forces acting on each particle at any time? Is there anything "interesting" we can say about this problem, and how would we state this sort of problem more formally, in a rigorous way? Assuming that you're viewing the system from an inertial (i.e. non-accelerating) reference frame, you can determine the total force on each particle based on its acceleration, using Newton's Second Law $\vec{F}(t)=m\vec{a}(t)$. This total force is the sum of all the forces exerted on this particle by every other particle in the system, as well as any forces coming from outside the system, including constraint forces*. 2, Even assuming additional constraints on the system (which are satisfied at any time), like some of the particles being connected by massless rods etc, taking the second derivative and multiplying will also yield a solution, right? It depends on what you mean by "a solution", as it's not clear what specifically you're trying to solve for. That said, taking the second derivative of the position of each particle and multiplying by the mass will yield the total force on each particle, which includes the constraint forces*. In order to study the dynamics of the system to any greater extent using forces**, you must know what the constraint forces are. Usually, these are not explicitly known, so there's not much more than can be said without more information. But how would I go about proving that? How do I show that in some sense, maybe a force acting on some particle A connected to another particle B by a massless rod won't cause some additional force to act on B? If the massless rod is rigid, then there will be a force acting on B. We know this because the center of mass of the system must accelerate, and B must be kept a fixed distance from the center of mass, so B must also accelerate. 
I could also instead think of some band/stick that will contract when stretched, and push out when compressed, so it won't create any force as the system goes through the motions. If there is a finite speed of sound in the elastic rod, then you can exert force on A without exerting force on B, but only for a short time (namely, on the order of $\frac{L}{v}$ for a rod with length $L$ and sound speed $v$). This is because, when you exert force on A, the compression in the rod must be transmitted from one end of the rod to the other, and propagates at the speed of sound; in the meantime, elastic potential energy is stored in the propagating disturbance in the rod. Once the disturbance reaches the end of the rod, B accelerates, and the elastic potential energy is turned into kinetic energy, pulling A along with B***. You can't do this in the steady state, though, because a time-independently stretched or compressed rod has to have force exerted from both sides, which means you're exerting force on both A and B. *In order for a constraint to constrain the motion of a particle, it must exert a force on that particle. The forces which implement the constraint are called constraint forces. **There is a way to get around not knowing constraint forces: don't use forces to study dynamics. Instead, use Lagrangian mechanics or Hamiltonian mechanics, which are based on energy rather than forces, and deal with constraints without having to actually compute what the corresponding forces are. ***In reality, the elastic potential energy is never perfectly and entirely transmitted to B. Some of it gets turned into kinetic energy, and the rest reflects back down the rod towards A, where A picks up some kinetic energy, and so on, and both masses end up oscillating while the whole system moves with the remaining kinetic energy.
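As a minimal numeric sketch of the "second derivative times mass" recipe discussed above: given a sampled trajectory, the total force can be recovered by finite differences. The mass and trajectory below are invented example values, not taken from the question.

```python
import numpy as np

# Recover F(t) = m * x''(t) from a sampled trajectory via central differences.
m = 2.0                                 # example particle mass
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
x = 0.5 * 3.0 * t**2                    # constant-acceleration trajectory, a = 3
accel = np.gradient(np.gradient(x, dt), dt)
force = m * accel
print(force[500])                       # interior points recover F = m*a = 6.0
```

Central differences are exact for a quadratic trajectory, so away from the endpoints the recovered force equals $m a$ to machine precision.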
{ "domain": "physics.stackexchange", "id": 60277, "tags": "newtonian-mechanics, forces, classical-mechanics" }
Field effect and its effect on basicity of amines
Question: What is the order of basicity between $\ce{Me2NH}$, $\ce{Me3N}$, $\ce{MeNH2}$ in a protic solvent? It has something to do with the field effect, which I could not understand and could not find anywhere. Can someone please explain? Answer: In the case of ethyl groups in a protic solvent, the basicity order is: $$\ce{Et2NH \gt Et3N \gt EtNH2}$$ This is a result of two opposing factors, the inductive effect and solvation. In the case of alkyl groups other than ethyl: $$\ce{R2NH \gt RNH2 \gt R3N}$$ And the order is as follows when the inductive effect becomes dominant, in the gaseous state: $$\ce{R3N \gt R2NH \gt RNH2}$$ The field effect (or, one may say, the inductive effect) as stated on Wikipedia: The 'Inductive Effect' is an experimentally observable effect of the transmission of charge through a chain of atoms in a molecule. The permanent dipole induced in one bond by another is called inductive effect. The electron cloud in a $\sigma$-bond between two unlike atoms is not uniform and is slightly displaced towards the more electronegative of the two atoms. This causes a permanent state of bond polarization, where the more electronegative atom has a slight negative charge ($\delta^{–}$) and the other atom has a slight positive charge ($\delta^{+}$). If the electronegative atom is then joined to a chain of atoms, usually carbon, the positive charge is relayed to the other atoms in the chain. This is the electron-withdrawing inductive effect, also known as the $-I$ effect. Some groups, such as the alkyl group, are less electron-withdrawing than hydrogen and are therefore considered as electron-releasing. This is electron releasing character and is indicated by the $+I$ effect. In short, alkyl groups tend to donate electrons, giving rise to an inductive effect. The $\ce{R}$ groups are $+I$, which gives $\ce{N}$ more electron density, and hence $\ce{N}$ becomes more basic. We can therefore say that $\ce{R3N}$ would be the most basic, as it has the most alkyl groups, and thus the basicity order in the gaseous state can be predicted. 
But in solution, water molecules tend to solvate amines and decrease their basicity. The solvation here depends mainly on size, i.e. the difference between the $\ce{-R}$ and $\ce{-H}$ groups, and thus the two different orders of basicity arise for alkyl amines.
{ "domain": "chemistry.stackexchange", "id": 1329, "tags": "organic-chemistry, acid-base, solvents, amines" }
Plugin system for calling methods
Question: I have a small "plugin system" (I'm not sure this is the right name). It allows you to store objects (plugins), and then call some methods from each of them. With this approach we have absolutely no overhead for iterating, object passing, or callback calls. // Need this SET, because in C++11 auto can only be static const. // And we may need to change them later. #ifndef SET #define SET(NAME, VALUE) decltype(VALUE) NAME = VALUE #endif Plugin1 plugin1(param1, param2); Plugin2 plugin2(param1, param2); Plugin3 plugin3(param1, param2); SET(plugins, std::forward_as_tuple( // Pay attention here. We store &&, not objects plugin1, plugin2, plugin3 )); Then, when I need to call some function from each of the plugins, or do something with each of them, I iterate through plugins at compile time (I checked this with the generated asm code, and it just calls or even inlines do_it, without anything else) and call the callback function (I'm not showing the iteration code here; it's trivial): struct Call{ float k=0; template<typename T, int Index> // a lambda function is no more efficient than this. Tested -O2 clang, gcc 4.8 inline void do_it(T &&t){ std::cout << "value = " <<t << " ; " << "id = " << Index << std::endl; } }; And if I need to do something with a specific plugin, I just directly use plugin1, plugin2, and plugin3. No need to call std::get<>. Plus the IDE highlights available plugins when you type them. Is it ok to store rvalue references to objects, or do I have to store objects directly in my tuple? Like this: SET(plugins, std::make_tuple( Plugin1(param1, param2), Plugin2(param1, param2), Plugin3(param1, param2) )); When I iterate, I pass a plugin as a usual reference: struct Call{ float k=0; template<typename T, int Index> // a lambda function is no more efficient than this. Tested -O2 clang, gcc 4.8 inline void do_it(T &t){ std::cout << "value = " <<t << " ; " << "id = " << Index << std::endl; } }; With this approach we have additional move constructor calls for each element in the tuple. 
The previous version is free of that cost. I ask about this because I previously read about data locality, and now I worry about how the data are placed in memory. Answer: First, you are not storing &&, but rather &. Why? Because plugin1 etc. are lvalues, and because of the reference collapsing rules. Second, you can use a tuple of references as you would use a reference itself. That is, everything is fine unless you return a tuple of references to local objects, in which case the references are dangling. Third, between forwarding (forward_as_tuple) and copying (make_tuple) there's another option: template<class... A> constexpr std::tuple<A...> auto_tuple(A&&... a) { return std::tuple<A...>(std::forward<A>(a)...); } which keeps a reference to lvalues, and copies rvalues only. I find this most convenient. In general, storing references or objects depends on what you want to do. Think of what you would choose if it was only one object, then keep the same choice for the tuple. Check here for more.
{ "domain": "codereview.stackexchange", "id": 7641, "tags": "c++, design-patterns, c++11, template, template-meta-programming" }
Turing machine that checks whether a given string is an output of a given machine and input
Question: Is there a Turing machine such that, given a description $\langle M \rangle$ of a Turing machine $M$, an input $x$ and a string $y$, computes whether or not $y$ is the output of $M$ on input $x$? My guess is that the answer is no, because this might imply that the set of strings with Kolmogorov complexity greater than or equal to their length is decidable. Answer: No. Any such machine $T^*$ would allow you to immediately solve the halting problem. Given a description of $M$ and an input $x$, construct a Turing machine $T_M$ that simulates $M$ on $x$ and then outputs some fixed string, e.g., "0". Notice that, given $M$ and $x$, a description of $T_M$ is computable. Then $T^*$ with inputs $\langle T_M \rangle$, $x$ and "0" accepts if $M$ halts on $x$ and rejects if $M$ does not halt.
{ "domain": "cs.stackexchange", "id": 17754, "tags": "turing-machines, computability, undecidability, kolmogorov-complexity" }
Expectation value of the number operator in eigenstates of a Hamiltonian involving creation and annihilation operators
Question: I wish to estimate the expectation value of the number operator: $$\langle \hat{N} \rangle =\langle \hat{a}^{\dagger} \hat{a} \rangle$$ I know the Hamiltonian operator of my system, and thus want the expectation value of this operator when taken between eigenstates of the Hamiltonian. The Hamiltonian reads: $$\hat{H}=E_0 \hat{I_d} + i E_1 (\hat{a}^{\dagger}+\hat{a}) + E_2 (\hat{a}^{\dagger}+\hat{a})^2 $$ Or equivalently: $$\hat{H}=E_0 \hat{I_d} + i E_1 (\hat{a}^{\dagger}+\hat{a}) + E_2 ({\hat{a}^{\dagger}}^2+\hat{a}^{\dagger}\hat{a}+\hat{a}\hat{a}^{\dagger}+{\hat{a}}^2)$$ Where $\hat{I_d}$ is the identity operator, $\hat{a}^{\dagger}$ and $\hat{a}$ are respectively creation and annihilation operators, and where $E_0$, $E_1$ and $E_2$ are real numbers. How would I go about finding the eigenstates $\vert \Psi_i \rangle$ of $\hat{H}$ ? In order to then compute the expectation value of $\hat{N}$ when the system is in an eigenstate of $\hat{H}$: $$\frac{\langle \Psi_i \vert \hat{N} \vert \Psi_i \rangle}{\langle \Psi_i \vert\Psi_i \rangle}$$ Answer: The Hamiltonian reads : $$\hat{H}=E_0 \hat{I_d} + i E_1 (\hat{a}^{\dagger}+\hat{a}) + E_2 (\hat{a}^{\dagger}+\hat{a})^2 $$ ... How would I go about finding the eigenstates $\vert \Psi_i \rangle$ of $\hat{H}$ ? In order to then compute the expectation value of $\hat{N}$ when the system is in an eigenstate of $\hat{H}$: $$\frac{\langle \Psi_i \vert \hat{N} \vert \Psi_i \rangle}{\langle \Psi_i \vert\Psi_i \rangle}$$ First, it may be helpful to recognize that your Hamiltonian can be written in terms of a position operator $\hat X$ as: $$ \hat H = \alpha + \beta\hat X + \gamma \hat X^2\;, $$ since $$ \hat X \propto \hat a + \hat a^\dagger\;. $$ Therefore, your Hamiltonian is like a shifted simple harmonic oscillator, but with no kinetic term. 
You can shift the ladder operators like: $$ \hat b = \hat a + \delta\;, $$ where $\delta$ is a number (not an operator), and where the shift can be chosen to put the Hamiltonian into the form: $$ \hat H = \epsilon + \zeta (\hat b^\dagger + \hat b)^2 $$ $$ =\epsilon + \eta {\hat Y}^2\;, $$ where $$ \hat Y \propto (\hat b + \hat b^\dagger) $$ and where $\epsilon$, $\zeta$ and $\eta$ are numbers. Therefore, eigenstates of the Hamiltonian are eigenstates of the $\hat Y$ operator, which can be written in the form: $$ \left|y\right>\propto e^{\sqrt{2}~yb^\dagger-(b^\dagger)^2/2}\left|0\right>\;. $$ and where: $$ \hat H |y\rangle = |y\rangle(\epsilon + \eta y^2)\;. $$ The expectation value of the number operator is thus: $$ \langle N\rangle =\langle a^\dagger a\rangle = \frac{\langle 0 |e^{\sqrt{2}~yb-(b)^2/2} a^\dagger a e^{\sqrt{2}~yb^\dagger-(b^\dagger)^2/2}\left|0\right>}{\langle 0 |e^{\sqrt{2}~yb-(b)^2/2}e^{\sqrt{2}~yb^\dagger-(b^\dagger)^2/2}\left|0\right>} $$
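As a quick numerical sanity check of the operator algebra above (in a truncated Fock space, with arbitrary example values for $E_0$, $E_1$, $E_2$), the compact form $E_0 + iE_1(\hat a^\dagger + \hat a) + E_2(\hat a^\dagger + \hat a)^2$ and the expanded form from the question agree:

```python
import numpy as np

n = 40                                        # Fock-space truncation (an approximation)
a = np.diag(np.sqrt(np.arange(1.0, n)), 1)    # annihilation operator matrix
ad = a.T                                      # creation operator (real matrix)
I = np.eye(n)
E0, E1, E2 = 0.5, 0.3, 0.7                    # arbitrary example values

X = ad + a
H_compact = E0 * I + 1j * E1 * X + E2 * (X @ X)
H_expanded = E0 * I + 1j * E1 * (ad + a) + E2 * (ad @ ad + ad @ a + a @ ad + a @ a)
print(np.allclose(H_compact, H_expanded))     # True
```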
{ "domain": "physics.stackexchange", "id": 94311, "tags": "quantum-mechanics, operators, hilbert-space, quantum-optics, linear-algebra" }
What is the most efficient algorithm to compute polynomial coefficients from its roots?
Question: Given $n$ roots, $x_1, x_2, \dotsc, x_n$, the corresponding monic polynomial is $$y = (x-x_1)(x-x_2)\dotsm(x-x_n) = \prod_{i}^n (x - x_i)$$ To get the coefficients, i.e., $y = \sum_{i}^n a_i x^i$, a straightforward expansion requires $O \left(n^2\right)$ steps. Alternatively, if $x_1, x_2, \dotsc, x_n$ are distinct, the problem is equivalent to polynomial interpolation with $n$ points: $(x_1, 0), (x_2, 0), \dotsc, (x_n, 0)$. The fast polynomial interpolation algorithm can be run in $O \left( n \log^2(n) \right)$ time. I want to ask whether there is any algorithm more efficient than $O \left(n^2\right)$? Even if there are duplicated values among $\{x_i\}$? If it helps, we can assume that the polynomial is over some prime finite field, i.e., $x_i \in \mathbf{F}_q$. Answer: This can be done in $O(n \log^2 n)$ time, even if the $x_i$ have duplicates, via the following divide-and-conquer method. First compute the coefficients of the polynomial $f_0(x)=(x-x_1) \cdots (x-x_{n/2})$ (via a recursive call to this algorithm). Then compute the coefficients of the polynomial $f_1(x)=(x-x_{n/2+1})\cdots(x-x_n)$. Next, compute the coefficients of $f(x)=f_0(x)f_1(x)$ using FFT-based polynomial multiplication. This yields an algorithm whose running time satisfies the recurrence $$T(n) = 2 T(n/2) + O(n \log n).$$ The solution to this recurrence is $T(n) = O(n \log^2 n)$. This all works even if there are duplicates in the $x_i$. (You might also be interested in Multi-point evaluations of a polynomial mod p.)
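A sketch of this divide-and-conquer in Python, using np.polymul as a stand-in for the product step (replacing it with an FFT-based multiply gives the $O(n \log^2 n)$ bound; the function name is mine):

```python
import numpy as np

def poly_from_roots(roots):
    """Monic coefficients (highest degree first) of prod (x - r),
    built by the divide-and-conquer scheme from the answer."""
    if len(roots) == 1:
        return np.array([1.0, -roots[0]])
    mid = len(roots) // 2
    # Recurse on each half, then multiply the two halves together.
    return np.polymul(poly_from_roots(roots[:mid]),
                      poly_from_roots(roots[mid:]))

print(poly_from_roots([1, 2, 3]))  # x^3 - 6x^2 + 11x - 6
```

Duplicated roots need no special handling, matching the answer's remark.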
{ "domain": "cs.stackexchange", "id": 14940, "tags": "algorithms, time-complexity, polynomials" }
Best approach for text classification of phrases with little syntactic difference
Question: So I have the task of classifying sentences based on their level of 'change talk' shown. Change talk is a psychology term used in counseling sessions to express how much the client wants to change their behavior. So let's say there are two classes: change talk; and non-change talk. An example of change talk is: "I have to do this." or "I can achieve this." An example of non-change talk is "I can't do this." or "I have no motivation." My issue is, if I want to take a machine learning approach in classifying these sentences, which is the best approach? SVMs? I do not have a lot of training data. Also - all the tutorials I look at use sentences with obvious words that can easily be classified (e.g. "The baseball game is on tomorrow." -> SPORT, or "Donald Trump will make a TV announcement tomorrow." -> POLITICS). I feel my data is harder to classify as it typically does not have keywords relating to each class. Some guidance on how people would approach this task would be great. Answer: Your problem, as you said, is a high level of syntactic overlap between your sentences. Take a look at these two phrases: "Work to live" versus "live to work". The former means that you can allow yourself to enjoy other things in life aside from your job, while the latter means obtaining resources so that you can be a functional member of society and permit yourself a good lifestyle. They are very different semantically. So vectorizing those sentences with techniques like bag-of-words, and comparing them with measures like cosine similarity, will be useless, as both sentences contain the same words. The other problem you are dealing with (based on the examples you provided) is short text, which is difficult to vectorize with other simple but efficient techniques like TF-IDF. So regardless of what classifier you are going to use, the performance of the classification model won't be high, because the input to the model is not informative. 
On the other hand, deep learning methods like RNNs or Transformers, which handle sequence tasks like yours with ease, can be very helpful. Named Entity Recognition models are what you need, and given that your data is very domain-specific, you need to train your own model using your data. I recommend the spaCy Python package. Once you have your model, you will have two entities, CHANGE TALK and NON-CHANGE TALK. Then you can simply count how many of them you have in your paragraph. Of course, that's the simplest way of dealing with your problem. You can add more entities, and then they will act as features with which you can train any classification model. Hope this helps.
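Once a model exists, the counting step described above is simple. The sketch below uses a toy keyword classifier as a stand-in for a trained spaCy model; all names and cue phrases here are invented for illustration.

```python
from collections import Counter

def change_talk_ratio(sentences, classify):
    """classify: a callable mapping a sentence to 'CHANGE' or 'NON_CHANGE'
    (e.g. a wrapper around a trained model's predictions)."""
    counts = Counter(classify(s) for s in sentences)
    total = sum(counts.values())
    return counts["CHANGE"] / total if total else 0.0

# Toy stand-in for a trained model, keyed on a few cue phrases:
def toy_classify(sentence):
    cues = ("have to", "can achieve", "want to")
    return "CHANGE" if any(c in sentence.lower() for c in cues) else "NON_CHANGE"

sents = ["I have to do this.", "I can't do this.", "I can achieve this."]
print(change_talk_ratio(sents, toy_classify))  # 2 of 3 sentences are change talk
```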
{ "domain": "datascience.stackexchange", "id": 10088, "tags": "deep-learning, nlp, svm, text-classification, language-model" }
Relation between rain, time and terrains
Question: I would like to understand whether there is an approximate relation, as a function of time, between rain and the physical properties of a particular terrain (composed of a specific material such as asphalt, clay, grass, plastic, etc.). I'd also like to understand, even in an approximate way, which physical properties of the terrain the rain impacts, for instance friction or the absorption of water by the terrain. Basically, I would like to get a better idea of the effect of rain/water on terrains. I have a hard time even finding material on this. Thank you. Answer: Here are some search topics to try, which will probably get you started on answering your questions. First, in the field of civil engineering you will find treatment of surface water drainage and management, which deals with the response of terrain (paved and unpaved) to rain water. Second, in the fields of soil science and range agronomy you will find treatment of the rain response of different types of soils regarding drainage, percolation and subsurface water movement.
{ "domain": "physics.stackexchange", "id": 58935, "tags": "forces, classical-mechanics, friction, water" }
Griffiths E&M questions about example 6.1
Question: I understand why $K_b = M\mbox{sin}\theta \hat{\phi}$ but when I try to use this to calculate the magnetic field, I keep coming up with an integral that cancels itself out, and I don't understand how Griffiths computes the integral. I think I may be setting it up wrong or something is missing conceptually. Here is my logic: \begin{equation} A(r) = \frac{\mu_0}{4\pi}\int \frac{K(r')}{|r-r'|}da' \end{equation} Since $K(r') = M\mbox{sin}\theta \hat{\phi}$, in order to integrate this we have to express it with respect to cartesian unit vectors. Using $\hat{\phi} = -\mbox{sin}\phi \hat{x}+ \mbox{cos}\phi \hat{y}$, we have, \begin{equation} K_b = -M\mbox{sin}\theta\mbox{sin}\phi\hat{x} + M\mbox{sin}\theta\mbox{cos}\phi \hat{y} \end{equation} Moreover, in spherical coordinates, $|r-r'| = \sqrt{R^2+r^2-2Rr\mbox{cos}\theta}$, where $R$ is the radius of the sphere. And the area element is $R^2\mbox{sin}\theta \,d\phi \,d\theta$. Then the potential is, \begin{equation} A(r) = \frac{\mu_0}{4\pi}\int \frac{-M\mbox{sin}\theta\mbox{sin}\phi\hat{x}}{\sqrt{R^2+r^2-2Rr\mbox{cos}\theta}}R^2\mbox{sin}\theta d\phi d\theta + \frac{\mu_0}{4\pi} \int \frac{M\mbox{sin}\theta\mbox{cos}\phi \hat{y}}{\sqrt{R^2+r^2-2Rr\mbox{cos}\theta}}R^2\mbox{sin}\theta d\phi d\theta \end{equation} \begin{equation} = \frac{\mu_0}{4\pi}\int_0^{\pi}\frac{-M\mbox{sin}\theta\hat{x}}{\sqrt{R^2+r^2-2Rr\mbox{cos}\theta}}R^2\mbox{sin}\theta d\theta \int_0^{2\pi}\mbox{sin}\phi d\phi + \frac{\mu_0}{4\pi}\int_0^{\pi}\frac{M\mbox{sin}\theta\hat{y}}{\sqrt{R^2+r^2-2Rr\mbox{cos}\theta}}R^2\mbox{sin}\theta d\theta \int_0^{2\pi}\mbox{cos}\phi d\phi \end{equation} But then \begin{equation} \int_0^{2\pi} \mbox{sin}\phi d\phi = \int_0^{2\pi} \mbox{cos}\phi d\phi =0 \end{equation} So this would mean $A(r) = 0$, which can't be the case since $B = \nabla \times A$. If anyone could point to where I went wrong, that would be greatly appreciated! 
Answer: You have \begin{equation*} \mathbf{A}(\mathbf{r}) = \frac{\mu_{0}}{4\pi} \iint \frac{\mathbf{K}(\mathbf{r'})}{|\mathbf{r} - \mathbf{r'}|} \text{d}^{2}S' \end{equation*} I'll only write the $x$ component. \begin{equation*} A_{x}(\mathbf{r}) = \frac{\mu_{0}}{4\pi} \iint \frac{-M\sin(\theta')\sin(\phi')} {|\mathbf{r} - \mathbf{r'}|} {R}^{2} \sin(\theta') \text{d}\theta'\text{d}\phi' \end{equation*} where \begin{align*} |\mathbf{r} - \mathbf{r'}| &= \sqrt{ \left[ r\cos(\phi)\sin(\theta) - R\cos(\phi')\sin(\theta')\right]^{2} + \left[ r\sin(\phi)\sin(\theta) - R\sin(\phi')\sin(\theta')\right]^{2} + \left[r\cos(\theta) - R\cos(\theta')\right]^{2}} \\ &= \sqrt{r^{2} + {R}^{2} - 2rR \left[ \cos(\theta)\cos(\theta') + \sin(\theta)\sin(\theta') \left(\cos(\phi)\cos(\phi') + \sin(\phi)\sin(\phi')\right) \right]} \end{align*} Your denominator is the result of evaluating the complete denominator at $\theta = 0$. It is indeed true that the vector potential vanishes on the $z$ axis \begin{equation*} \mathbf{A}(r,\theta = 0, \phi) = \mathbf{A}(x = 0, y = 0, z) = \mathbf{0} \, , \end{equation*} but you need the vector potential in all space in order to compute the curl. This integral may not have an analytic solution, which might be the reason why Griffiths followed another approach.
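A numerical check of this point (with $M = R = 1$ and the external field point $r = 2$ chosen arbitrarily, and omitting the $\mu_0/4\pi$ prefactor): the double integral with the complete denominator vanishes on the $z$ axis but not off it.

```python
import numpy as np
from scipy.integrate import dblquad

M, R = 1.0, 1.0   # arbitrary magnetization and sphere radius

def Ax(r, theta, phi):
    """x component of (4*pi/mu_0) * A at the field point (r, theta, phi)."""
    def f(pp, tp):  # pp = phi', tp = theta'
        num = -M * np.sin(tp) * np.sin(pp) * R**2 * np.sin(tp)
        den = np.sqrt(r**2 + R**2 - 2*r*R*(np.cos(theta)*np.cos(tp)
              + np.sin(theta)*np.sin(tp)*(np.cos(phi)*np.cos(pp)
                                          + np.sin(phi)*np.sin(pp))))
        return num / den
    val, _err = dblquad(f, 0.0, np.pi, 0.0, 2*np.pi)  # tp outer, pp inner
    return val

print(Ax(2.0, 0.0, 0.0))          # on the z axis: essentially zero
print(Ax(2.0, np.pi/3, np.pi/2))  # off axis: clearly nonzero
```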
{ "domain": "physics.stackexchange", "id": 84599, "tags": "homework-and-exercises, electromagnetism, magnetic-fields, magnetic-moment" }
Two formulas for final Velocity, two different results
Question: I have this problem: A student throws a set of keys vertically upward to her sorority sister, who is in a window 3.30 m above. The second student catches the keys 1.60 s later 1) With what initial velocity were the keys thrown? 2) What was the velocity of the keys just before they were caught? For question #1 I got 10 m/s, which is correct. However, for the second question I also got the right answer, but at first I did not know which formula to use, since there are two kinematic equations specific to final velocity (1) $v = v_o + at $ (2) $v^2 = {v_o}^2 + 2ad $ I know the correct result comes from using formula (1), but I do not understand why using both formulas I get different results. How then would I know which formula to use in different situations? Answer: I know the correct result comes from using formula (1) but I do not understand why using both formulas I get different results. How then would I know which formula to use in different situations? I'm not sure why you're using formula (1) as it has two unknowns $v$ and $v_0$; the final and initial velocities respectively. Assuming you took gravitational acceleration $a=-g\approx -10 \mathrm{ms^{-2}}$ then I would use $$d=v_0 t - \frac12 g t^2$$ which when solving for $v_0$ gives $$v_0 =\frac{3.3+0.5\cdot 10 \cdot 1.6^2}{1.6}=10.0625\approx 10\mathrm{ms^{-1}}$$ for the initial velocity. Then using formula (2) we have $$v=\sqrt{10.0625^2-2\cdot 10 \cdot 3.3}=5.9375\mathrm{ms^{-1}}$$ for the final velocity.
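Numerically (taking $g = 10\,\mathrm{m/s^2}$), the two formulas actually agree up to sign: formula (2) only determines the speed, while formula (1) keeps the sign.

```python
import math

d, t, g = 3.30, 1.60, 10.0           # displacement up (m), time (s), g (m/s^2)
v0 = (d + 0.5 * g * t**2) / t        # from d = v0*t - (1/2) g t^2
v1 = v0 - g * t                      # formula (1): signed final velocity
v2 = math.sqrt(v0**2 - 2 * g * d)    # formula (2): final speed only
print(v0, v1, v2)                    # v1 and v2 agree in magnitude
```

So the apparent disagreement is only the lost sign: the keys are moving downward at about 5.94 m/s when caught.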
{ "domain": "physics.stackexchange", "id": 43132, "tags": "homework-and-exercises, kinematics, velocity" }
Negative results on identical particles approach to Graph Isomorphism (GI) problem
Question: There have been some efforts to attack the graph isomorphism problem using quantum random walks of hard-core bosons (symmetric but no double occupancy). The symmetric power of the adjacency matrix, which seemed promising, was proved to be incomplete for general graphs in this paper by Amir Rahnamai Barghi and Ilya Ponomarenko. Another similar approach was also refuted in this paper by Jamie Smith. In both of these papers, they use the idea of coherent configurations (schemes) and an alternative but equivalent formulation of cellular algebras (matrix subalgebras indexed by a finite set -here the vertex set- closed under point-wise multiplication and complex conjugate transpose, and containing the identity matrix I and the all-one matrix J), respectively, to provide the necessary counter arguments. I find it very difficult to follow those arguments, and even if I vaguely follow the individual arguments I do not understand the core idea. I would like to know if the essence of the arguments can be explained in generic terms -maybe at the cost of slight rigour- without using the language of scheme theory or cellular algebras. Answer: You can do much better than checking all n! permutations when brute forcing a solution, http://oeis.org/A186202 The grail is showing that you can't do much better than that, or exploiting the fact that most graphs have no symmetry in them and using this to speed up the computation.
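For reference, the n! baseline mentioned in the answer is easy to sketch: a brute-force check over all vertex bijections, practical for tiny graphs only.

```python
from itertools import permutations

def isomorphic(g, h):
    """Brute-force graph isomorphism for undirected edge lists on a shared
    vertex set; checks all n! vertex bijections."""
    nodes = sorted({v for e in g for v in e} | {v for e in h for v in e})
    gs = {frozenset(e) for e in g}
    hs = {frozenset(e) for e in h}
    if len(gs) != len(hs):
        return False
    for p in permutations(nodes):
        m = dict(zip(nodes, p))
        if {frozenset((m[a], m[b])) for a, b in gs} == hs:
            return True
    return False

path = [(0, 1), (1, 2)]          # path on 3 vertices
path2 = [(0, 2), (2, 1)]         # same graph, relabelled
tri = [(0, 1), (1, 2), (2, 0)]   # triangle
print(isomorphic(path, path2), isomorphic(path, tri))  # True False
```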
{ "domain": "cstheory.stackexchange", "id": 1805, "tags": "cc.complexity-theory, graph-theory, graph-algorithms, graph-isomorphism" }
torch cuda not able to identify gpu on aws g4dn.xlarge
Question: I have created an EC2 instance with GPU g4dn.xlarge and launched it. I want to run some code from the command line, and this code is PyTorch based. While PyCUDA is able to identify the GPU, PyTorch is not able to identify it. import pycuda.driver as cuda import torch cuda.init() num_gpus = cuda.Device.count() print(f"Number of GPUs: {num_gpus}") print("is torch cuda available",torch.cuda.is_available()) print("torch cuda count",torch.cuda.device_count()) The output of the above code is Number of GPUs: 1 is torch cuda available False torch cuda count 0 Here are the PyTorch and CUDA versions I am using pytorch 2.0.1 cpu_py310h07ccb54_0 cudatoolkit 11.7.0 h254b3b0_10 nvidia pycuda 2021.1 py310h06b8198_3 conda-forge Answer: It looks like the issue is that you have the CPU version of PyTorch installed instead of the GPU version. If you go to the PyTorch home page: https://pytorch.org/get-started/locally/ you can use the configuration table to install the CUDA 11 or CUDA 12 version of PyTorch and you should be good to go.
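One way to swap in a CUDA build is via the wheel index URLs from the selector on pytorch.org; the cu118 tag below is only an example, pick the one matching your driver and CUDA version:

```shell
# Remove the CPU-only build, then install a CUDA build from the
# PyTorch wheel index (cu118 is an example tag):
pip uninstall -y torch
pip install torch --index-url https://download.pytorch.org/whl/cu118

# then re-check from Python:
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```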
{ "domain": "datascience.stackexchange", "id": 12009, "tags": "pytorch, generative-models, cuda" }
What size feet for this wind-protection panel?
Question: I need to create a series of free-standing perspex panels to add height to an existing fence for wind-proofing purposes. My idea is that each panel would be fixed to two supporting poles that are mounted on flat horizontal 'feet'. In this diagram, the wind would be directly onto the unprotected side of the fence, so the new panels are on the 'inside' of the fence: Dimensions: Height of existing fence: 1 m Height from ground to base of perspex panel: 1 m Height from ground to top of perspex panel: 2.75 m Width of each panel: 1.4 m -> Surface area of each panel: 2.45 $m^2$ Feet: Made of iron (7.86 $g/cm^3$) with cross-section 10 mm x 100 mm -> Mass of each foot (per meter length): 1000 g/m How long / heavy do those feet have to be to make the panels stable in a wind of (say) 6 m/s? Answer: This is a side view of the panels where: $H$ is the height of the resultant concentrated wind force $P_W$ from the ground. Given the parameters of the problem, H should be equal to : $$ H= \frac{1+2.75}{2}= 1.875 [m]$$ $L_f$ is the total length of the horizontal part of the feet. The length of the feet will affect the weight, and the total weight of the feet $W_f$ will be equal to : $$W_f = 2 L_f \cdot q\cdot g$$ where: q is the mass per meter (1 kg/m) g is the acceleration of gravity (~10 $m/s^2$) 2 is because there are two (2) feet per stand therefore: $$W_f = 20 \frac{N}{m} L_f $$ $W_p$: is the weight of the vertical part of the panel (glass and metallic part). I will be assuming here that the weight is about 10 kg (i.e. 100 N). $P_W$ is the total wind force on each panel (assuming that there is a uniform pressure on the panel; this is a semi-valid assumption, but it's simple enough to present here). This is the most involved part so I will break it up. 
wind panel In order to calculate the wind force on the panel: The nominal wind pressure is $q_n$ $$q_n = \frac{1}{2} \rho v^2$$ where: $\rho = 1.225[kg/m^3]$ is the air density $v$ is the air velocity (6[m/s]) The wind pressure $q_p $ on the flat panel is $$q_p = C_d \cdot q_n = C_d \cdot \frac{1}{2} \rho v^2$$ where: $C_d$: is the drag coefficient. The total wind force $P_w$ on the panel is: $$P_w= q_p \cdot A = A\cdot C_d \cdot \frac{1}{2} \rho v^2$$ Moment equilibrium. If the wind blows in the direction of the initial image, then the panel will start to rotate/pivot around the rightmost part of the feet in the image (let's call that point A). In that case the moment equilibrium about A will be: $$\sum M_A = -P_w H + W_p \cdot L_f + W_f\cdot \frac{L_f}{2}$$ This needs to be non-negative in order for the panel not to rotate about A. Therefore $$\sum M_A \ge 0 \Rightarrow -P_w H + W_p \cdot L_f + W_f\cdot \frac{L_f}{2} \ge 0 $$ I will substitute the parameters that depend on $L_f$: $$-P_w H + W_p \cdot L_f + 20 \frac{N}{m} L_f \cdot \frac{L_f}{2} \ge 0 $$ $$ \frac{20 \frac{N}{m}}{2}L_f^2 + W_p \cdot L_f - P_w H \ge 0$$ You can then solve this equation and obtain two solutions. One will be positive and one will be negative. Values greater than the positive value, and values smaller than the negative value, will satisfy the above requirement. (The negative value makes sense, because a negative $L_f$ just means that the feet extend to the left). Note that you should also investigate overturning the other way (wind blowing from the right). Another solution, instead of really long feet: provided the existing fence has adequate structural capacity, you could try the following solution. where: red is the old fence green are the new panels.
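Plugging in the numbers above, the positive root comes out at roughly 1.1 m. The drag coefficient $C_d \approx 1.2$ (typical for a flat plate normal to the flow) is an assumed value, since it is not given in the question.

```python
import math

rho, v, Cd = 1.225, 6.0, 1.2      # air density, wind speed, assumed drag coeff.
A, H, W_p = 2.45, 1.875, 100.0    # panel area (m^2), force height (m), panel weight (N)
q = 20.0                          # feet weight per metre of L_f (N/m), from the answer

P_w = A * Cd * 0.5 * rho * v**2   # total wind force on the panel (N)
# Solve (q/2) L^2 + W_p L - P_w H >= 0; take the positive root:
a, b, c = q / 2, W_p, -P_w * H
L_f = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(P_w, L_f)                   # roughly 65 N and about 1.1 m
```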
{ "domain": "engineering.stackexchange", "id": 4072, "tags": "pressure, applied-mechanics, statics, torque, wind" }
Conversion of molecular structure into Fischer projection for compounds having more than two chiral carbons
Question: Question: What is the Fischer projection of the given compound's enantiomer? Answer: My attempt: I tried to solve this by first converting it into the Fischer projection of the given compound and then taking its mirror image, since that would provide the Fischer projection of the enantiomer. This is what I got as the Fischer projection of the given molecule. Why is this wrong? How do you find the Fischer projection for a compound having more than two chiral centers? Answer: To do this exercise, you need to know how to assign stereo configurations according to the Cahn–Ingold–Prelog priority rules, and what a Fischer projection represents. First, get the correct assignment of the three stereo centers in the given molecule. That should be $(1R,2R,3S)$ as depicted in the image: Now, sketch a Fischer projection with three stereo centers without assigning any groups, but number the stereo centers as $1,2,3$. Put the least priority group of stereo center 1 (that would be the $\ce{CH3}$ group) on top of the vertical line. By definition, top and bottom vertical lines are wedges-down (dashes) while all horizontal lines are wedges-up (solid wedges). If you put the highest priority group ($\ce{Cl}$ group) on the left horizontal line and the second priority group ($\ce{OH}$ group) on the right horizontal line, you get the correct clockwise rotation for the $(1R)$ configuration (see the image). You can follow the same procedure, or the one you are familiar with, to assign the other stereo centers correctly. For example, for stereo center 3, the least priority group is $\ce{H}$ while the highest one is the $\ce{F}$ group. The second priority group is the substituted carbon chain, which is already in the projection (the last priority to consider is the $\ce{CH3}$ group). Thus, I put the $\ce{H}$ group on the bottom vertical line (which should be a dash) as I did for stereo center 1. 
Then, to get the correct counter-clockwise rotation that gives the $(3S)$ configuration, you must put the $\ce{F}$ group on the right horizontal line, so the last priority group, $\ce{CH3}$, occupies the remaining left horizontal position. Note: You can exchange any group at a stereo center with another group at the same center to reverse the configuration, e.g., $(R) \rightarrow (S)$. If you repeat this swap with a different pair of groups, you get back the original configuration. Once I was done with assigning the groups, I performed this operation twice to change the positions of three groups at stereo center 3 to my liking. Still, the configuration remains the same. Finally, you should be able to assign the remaining two groups ($\ce{Br}$ and $\ce{F}$ groups) for stereo center 2 to get the $(2R)$ configuration. To get the enantiomer, you can sketch the mirror image of the compound, which is $(1S,2S,3R)$ as depicted in the image, and this also tallies with the given answer for the corresponding enantiomer.
{ "domain": "chemistry.stackexchange", "id": 14263, "tags": "organic-chemistry, nomenclature, stereochemistry" }
Logistic Regression: Is it viable to use data that is outdated?
Question: TLDR: Want to predict who makes the playoffs (1,0), but there are more playoff spots now than there were in the past; is it okay to use that past data? I want to use binary logistic regression on MLB data to estimate each team's probability of reaching the playoffs this upcoming season. There is data going back as far as the seasons of the 1870s. However, my issue is that the structure of the playoffs and baseball as a whole has changed often over the years. Specifically, the changes deal with the number of playoff spots, which is in part due to an increase in the number of teams. For example, up until 1969 there were 20 teams, and there was only the championship (World Series), so, technically, only 2 teams made it to the "playoffs". The number of playoff spots has increased gradually to its present state, which is 10, in 2012, and there are now 30 teams. To me, it makes sense to only use data from 2012 (to 2019) since it reflects the state of the upcoming season. This gives me 240 observations, thus 80 positive outcomes for my playoff (dependent) variable. However, I have about 40 predictors after removing highly correlated ones, which means that I should have way more observations. Though I know that the number of predictors will likely decrease once I fit the model, I still fear my sample size may be too low. This makes me consider going further back to the previous era beginning in 1994, when there were 8 playoff spots, simply for the sake of more observations. My question is: would it be viable to use such data in a regression, given that it may not accurately reflect the circumstances of what I'm trying to estimate? Could I maybe even go back to 1969? I found this article, which is pretty much exactly what I'm trying to do, and he uses data back to 1969, but it just seems like an issue to me. Answer: Your thinking is sensible. Indeed, in a perfect world, your training data should be completely representative of the data you'll encounter. 
However, in practice, you often find that "unrepresentative" data may still have some value. Ultimately, whatever you do is good if it improves your model, so if using "outdated" data helps, then do it! Here is what you could experiment with: Let the data speak You could compare models trained on more or less data, and it might give you an idea of the ideal cutoff point. Implement "time decay" Maybe 1870 data is useful, but it's likely less useful than 1871 data and even less useful than last year's. You could weight your training instances based on how old they are so that your recent data points have a bigger impact. Create time-insensitive features By this I mean you could reframe your problem so that the number of playoff teams doesn't matter. Instead of a binary "playoff/no playoff" target, your problem could instead be to rank your teams, then select the playoff teams based on the cutoff for that specific year. You could also add how many playoff teams there are as a feature so that the learning algorithm is aware of how many spots there are.
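The "time decay" idea can be sketched as a simple weighting function. This is a minimal illustration; the half-life value and the function name are my own assumptions, not from the original post:

```python
def time_decay_weights(years, current_year, half_life=10.0):
    """Exponential decay: a season `half_life` years old counts half as much
    as the current one. The half-life is a tunable assumption."""
    return [0.5 ** ((current_year - y) / half_life) for y in years]

# A 1994 season gets weight 0.5**2.5 (about 0.18); a 2019 season gets 1.0.
w = time_decay_weights([1994, 2004, 2014, 2019], current_year=2019)
```

These weights can then be passed as `sample_weight` to a fitting routine that supports it, e.g. scikit-learn's `LogisticRegression.fit`.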
{ "domain": "datascience.stackexchange", "id": 7438, "tags": "dataset, data, logistic-regression" }
Is there a word or short phrase that denotes the apparently moving part of the lunar limb?
Question: Between new moon and full moon, the moon's disc as viewed from the earth is bounded by a semicircular arc on its right and an arc on its left that moves to the right until at full moon it forms the other part of the circle. Between full moon and new moon, the disc is bounded by a semicircular arc on its left and an arc on its right that moves until it coincides with the left-side arc, making the visible moon disappear in the sky. Is there a word or short phrase that denotes the apparently mobile part of the lunar limb? Answer: It's called "the terminator line" or simply "the terminator"; where the illumination from the sun terminates. There's a fuller discussion in EarthSky.org's What is the moon's terminator line?, which ends: Bottom line: The terminator line – on Earth's moon and other planets and moons in space – is caused by sunlight falling on the surfaces of these worlds. It's the line dividing night and day on the moon and other solar system objects.
{ "domain": "astronomy.stackexchange", "id": 6311, "tags": "the-moon, terminology, moon-phases" }
Why is translation so much faster in prokaryotes than eukaryotes?
Question: Prokaryotes perform transcription and translation much faster than eukaryotes. If memory serves, a single 70S prokaryotic ribosome can incorporate around 20 amino acids per second, whereas the 80S eukaryotic counterpart is much slower, at around 2 amino acids per second. Is the reason for this known? The only possibility I can think of is that prokaryotic mRNAs are often polycistronic, whereas eukaryotic mRNAs are not and tend to involve co-translational protein folding. Slower translation may be able to improve folding accuracy. Other than that, I can't think of any reason the 80S ribosome would be physically slower than the 70S ribosome. It's not like DNA replication where accuracy is exceedingly more important in multicellular organisms than in fast-replicating unicellular prokaryotes. Answer: Unless the poster can cite more recent papers to support the assertion regarding a difference in rates of prokaryotic and eukaryotic protein synthesis, I would say that this is incorrect. Lacroute and Stent (1968) reported a rate of 15 amino acids per sec for β-galactosidase in Escherichia coli, whereas Knopf and Lamfrom (1965) reported a rate of 7 amino acids per sec for globin chains in rabbit reticulocytes. This does not appear much different to me, especially as a recent study by Li et al. (2014) showed that the rate of protein synthesis varies with the complexity of the assembly (if any) into which a protein is incorporated.
{ "domain": "biology.stackexchange", "id": 8298, "tags": "translation, eukaryotic-cells, ribosome, prokaryotic-cells" }
How do Galilean transformations give the idea of vector velocity additions or subtractions?
Question: I have been reading an article on the Galilean transformation from Wikipedia and encountered a sentence, quoted - 'In essence, the Galilean transformations embody the intuitive notion of addition and subtraction of velocities as vectors.' from the 1st paragraph under the 'Translation' section. How does the Galilean transformation give such a notion? Is this by taking time derivatives of the position coordinates such as $\dot x' = \dot x - v$, which does seem to give the idea, but then again I am not sure. Doesn't this cause the position of the point to change with time? Answer: Here's an image to help with understanding the same. We have frame $S$ and frame $S'$ with a relative velocity of $\vec v$ whose origins coincided at $t=0$. Say there's an event which in $S$ is at $\left(\vec x,t\right)$ and in $S'$ is at $\left(\vec x',t'\right)$. Now, in order to relate the observations between the frames, we can see that given $t=t'$, the two displacements are just related by a simple vector addition. Namely $$\vec x'=\vec x + \vec vt$$
{ "domain": "physics.stackexchange", "id": 64594, "tags": "velocity, inertial-frames, galilean-relativity" }
Newton's Universal Law of Gravitation doubt
Question: The Universal Law of Gravitation states that the magnitude of the force, $F$, is $$F = \frac{GmM}{r^2},$$ where $m$ and $M$ are the masses of the two objects and $r$ is the distance between the two objects. From this, it is also derived that the gravitational potential energy is equal to $$ E = -\frac{GmM}{r}$$ So does this mean that when the two objects are together, i.e. $r=0$, the force and the gravitational energy are infinite? How is it possible for energy and force to be infinite? Can you please clarify this doubt? Answer: At $r=0$, the force is $+\infty$, and the potential energy is $-\infty$. This arises if you consider point masses, which we model mathematically using Dirac-delta distributions. Physically, we cannot have two point masses overlapping ($r=0$) since this would require an infinite amount of energy. Why? Well, fix one mass, and bring the other mass from $r=\infty$ towards $r=0$. Then, as the masses get closer, more and more energy is required to bridge the gap. So there is no contradiction, because you can never construct any physical system with overlapping point masses. Furthermore, in any real system, the objects are not point masses, but have some finite volume. Even if you consider an electron as a point mass, the electromagnetic force would far outweigh the gravitational force.
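The divergence at small $r$ is easy to see numerically from the two formulas in the question. A minimal sketch (the masses and distances are arbitrary illustrative values):

```python
G = 6.674e-11  # gravitational constant in SI units (m^3 kg^-1 s^-2)

def grav_force(m1, m2, r):
    """Magnitude of the Newtonian gravitational force between point masses."""
    return G * m1 * m2 / r**2

def grav_potential(m1, m2, r):
    """Gravitational potential energy of the pair (zero at infinity)."""
    return -G * m1 * m2 / r

# As r -> 0 the force grows without bound and the potential energy drops
# without bound; no physical system ever actually reaches r = 0.
for r in (1.0, 1e-3, 1e-6):
    print(f"r = {r:g} m:  F = {grav_force(1, 1, r):.3e} N,  E = {grav_potential(1, 1, r):.3e} J")
```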
{ "domain": "physics.stackexchange", "id": 60388, "tags": "energy, gravity, newtonian-gravity, potential" }
FizzBuzz using switches
Question: I'm learning about control structures now, and I want to see if I'm doing this as cleanly and efficiently as possible. Seems like there should be a less... verbose way of writing the switch but I wouldn't know what it is. Any criticism welcome! FizzBuzz.java public class FizzBuzz { public static final int fizz = 3; public static final int buzz = 5; public static void main(String[] args) { int status = -1; for (int fbNumber = 1; fbNumber <= 100; fbNumber++) { if ((fbNumber % fizz == 0) && (fbNumber % buzz == 0)) { status = 1; } else if ((fbNumber % fizz == 0) && (fbNumber % buzz != 0)) { status = 2; } else if ((fbNumber % fizz != 0) && (fbNumber % buzz == 0)) { status = 3; } else { status = 4; } switch (status) { case 1: System.out.println("FizzBuzz"); break; case 2: System.out.println("Fizz"); break; case 3: System.out.println("Buzz"); break; case 4: System.out.println(fbNumber); break; default: System.out.println("Number could not be evaluated"); } } } } Answer: Here's the thing. When you see switch (status) { case 1: case 2: case 3: case 4: default: } what do you know? What does case 1 mean? Is 1 even a sensible status? The same thing applies with status = 1; status = 2; status = 3; status = 4; It's just not a meaningful thing to write. There's also: public static final int fizz = 3; public static final int buzz = 5; Although this kind of makes sense, it doesn't really make sense. Fizz isn't 3. 3 is just the factor that triggers "Fizz" in the output. You should really be encapsulating the state not over the result (FizzBuzz/Fizz/Buzz/n) but over the input (Fizz/Not Fizz, Buzz/Not Buzz).
Java doesn't have tuples, so let's use javafx.util.Pair<Boolean, Boolean>, and give it a loose abstraction: public static class IsFizzOrBuzz extends Pair<Boolean, Boolean> { public IsFizzOrBuzz(boolean isFizz, boolean isBuzz) { super(isFizz, isBuzz); } public boolean isFizz() { return getKey(); }; public boolean isBuzz() { return getValue(); }; } Unfortunately you can't do case on objects, so the de-facto alternative is a hash map static public Map<IsFizzOrBuzz, String> responses; static { responses = new HashMap<IsFizzOrBuzz, String>(); responses.put(new IsFizzOrBuzz(true, true), "FizzBuzz"); responses.put(new IsFizzOrBuzz(true, false), "Fizz"); responses.put(new IsFizzOrBuzz(false, true), "Buzz"); responses = Collections.unmodifiableMap(responses); } Then the proper abstraction looks like public static int FIZZ_FACTOR = 3; public static int BUZZ_FACTOR = 5; public static IsFizzOrBuzz classify(int n) { return new IsFizzOrBuzz(n % FIZZ_FACTOR == 0, n % BUZZ_FACTOR == 0); } public static String getResponse(int n, IsFizzOrBuzz classification) { if (responses.containsKey(classification)) { return responses.get(classification); } return String.valueOf(n); } public static void main(String[] args) { for (int i = 1; i <= 100; i++) { System.out.println(getResponse(i, classify(i))); } } I'm going to be honest, though. Although this works quite well in a concise language: fizz_factor = 3 buzz_factor = 5 responses = { (True, True): "FizzBuzz", (True, False): "Fizz", (False, True): "Buzz" } def classify(n): return (n % fizz_factor == 0, n % buzz_factor == 0) def get_response(n, classification): return responses.get(classification, str(n)) def main(): for i in range(1, 101): print(get_response(i, classify(i))) main() in Java can you really call this worthwhile? You end up fighting the language most of the time. 
A simple class FizzBuzz { public static int FIZZ_FACTOR = 3; public static int BUZZ_FACTOR = 5; public static String getFizzBuzzResponse(int n) { if ((n % FIZZ_FACTOR == 0) && (n % BUZZ_FACTOR == 0)) { return "FizzBuzz"; } if (n % FIZZ_FACTOR == 0) { return "Fizz"; } if (n % BUZZ_FACTOR == 0) { return "Buzz"; } return String.valueOf(n); } public static void main(String[] args) { for (int i = 1; i <= 100; i++) { System.out.println(getFizzBuzzResponse(i)); } } } is easy enough. YAGNI.
{ "domain": "codereview.stackexchange", "id": 10981, "tags": "java, beginner, fizzbuzz" }
Light's inverse square law: Does it require a minimum distance from the source?
Question: Does the inverse square law begin to take effect the moment light leaves its source? For example, does light's intensity decrease, i.e. does the area in which the photons might land increase, at a few millimeters from the source? I happened to come across an article about emergency lights and photometry from a few decades ago that appears to answer in the negative: "The minimum test distance in photometry of these sources is called the 'minimum inverse-square distance.' The illumination from the light source, measured at distances greater than this minimum, obeys the inverse-square law which is a necessary criterion for the determination of luminous intensity. [...] The minimum inverse-square distance is determined by the type and size of the light source, lens, reflector, etc., and must be considered individually for each unit. If this distance is more than 100 meters (approximately 328 feet), a range larger than 100 meters must be used." Source: Howett, et al. 1978. "Emergency vehicle warning lights: state of the art." USDC. NBS Special Publication 480-16. Answer: As many have said, the inverse square law applies to point-sources. These are idealized light sources which are sufficiently small compared to the rest of the geometry that their size is of no importance. If a light source is larger, it is typically modeled as a collection of idealized light sources, potentially using integration. The exact definition of "sufficiently small" varies with application. The definition of a "point source" for astronomy is quite different from the definition of "point source" for an LCD projector. There is actually a limit to this process. The inverse square law is only valid in its normal form if you are working on scales where light can be modeled purely as a wave. As you get very small, on the microscopic scales, those assumptions break down.
You instead have to think about the statistical expectation of photons, which follows the statistical analogue of the inverse square law. Even smaller, and you start to enter the world of quantum mechanics, where you have to account for the actual waveforms of the objects under study. Ignoring these corner cases, nearly all cases you find will have "sufficiently small" defined by macroscopic factors, like the sizes and locations of lenses. It's rare to find oneself in the world where the microscopic factors matter.
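The "minimum inverse-square distance" idea can be illustrated numerically: model an extended source as many point emitters and watch how r*r*I(r) approaches a constant once r is large compared to the source. This is only a sketch; the source size, emitter count, and function name are illustrative assumptions, not values from the quoted article:

```python
def on_axis_intensity(r, half_width=0.1, n=201):
    """Intensity at distance r on the axis of a line source of the given
    half-width, modelled as n equal point emitters each obeying 1/d^2."""
    total = 0.0
    for k in range(n):
        x = -half_width + 2 * half_width * k / (n - 1)  # emitter offset
        total += 1.0 / (n * (r * r + x * x))            # each emitter: 1/d^2
    return total

# Close to the source, r^2 * I deviates noticeably from the point-source
# value 1; far away the deviation vanishes and the inverse-square law holds.
for r in (0.2, 1.0, 10.0):
    print(r, r * r * on_axis_intensity(r))
```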
{ "domain": "physics.stackexchange", "id": 30555, "tags": "quantum-mechanics, classical-mechanics, optics, visible-light, electromagnetic-radiation" }
FFT - second and further divides and conquers - need help
Question: Hello, I would like to ask you for help in understanding the Fast Fourier Transform. Most articles about FFT describe a simple DFT example with N=8 samples. They divide it in half, into evens and odds. And to that point everything is quite clear for me (according to that I have implemented the algorithm, and I get the expected outcome for the whole range of frequencies - in that case from 1 to 8). But then they instruct to divide it again, analogously to the first divide, and then again, until you end up with N/8. And there is something about bit reversal. But they don't show it, just say "analogous to the first divide". But it seems in my case "analogous" is still not enough to understand. Sorry for that. So for those next divides I get wrong results. I do something wrong. Don't know what. Please help me. Please let me show you my way of thinking through a simple code example. First I start with the DFT, and that is clear for me: for(int freqBin=1; freqBin <= sampleRate; freqBin++) { _Sfx[0] = 0.0f; // I need to make it zero to perform _Sfx[0] += … // in next coming for loop for (int n=0; n<sampleRate; ++n) { complex<float> _Wnk_N = exp(-1i * 2.0 * M_PI * n * freqBin / sampleRate); _Sfx[0] += inputSignal[n] * _Wnk_N; } output[freqBin] = _Sfx[0]; } Then I make the first divide: for(int freqBin=1; freqBin <= sampleRate/2; freqBin++) { for(int i=0; i<2; i++) { _Sfx[i] = 0.0f; } for (int n=0; n<sampleRate/2; ++n) { complex<float> _Wnk_N = exp(-1i * 2.0 * M_PI * n * freqBin / (sampleRate/2.0)); _Sfx[0] += inputSignal[2*n ] * _Wnk_N; _Sfx[1] += inputSignal[2*n +1] * _Wnk_N; } complex<float> _Wk_N = exp(-1i * 2.0 * M_PI * freqBin / sampleRate); _Sf_I_half = _Sfx[0] + _Wk_N * _Sfx[1]; _Sf_II_half = _Sfx[0] - _Wk_N * _Sfx[1]; output[freqBin] = _Sf_I_half; output[freqBin + sampleRate /2] = _Sf_II_half; } and that is also clear for me - not for sure, but I think so, because the results are as expected.
But then I make the next divide like that: for(int freqBin=1; freqBin <= sampleRate/4; freqBin++) { for(int i=0; i<4; i++) { _Sfx[i] = 0.0f; } for (int n=0; n<sampleRate/4; ++n) { complex<float> _Wnk_N = exp(-1i * 2.0 * M_PI * n * freqBin / (sampleRate/4.0)); _Sfx[0] += inputSignal[ 2*(2*n) ] * _Wnk_N ; _Sfx[1] += inputSignal[ 2*(2*n) +1 ] * _Wnk_N ; _Sfx[2] += inputSignal[ 2*(2*n +1) ] * _Wnk_N ; _Sfx[3] += inputSignal[ 2*(2*n +1) +1] * _Wnk_N ; } complex<float> _Wk_N = exp(-1i * 2.0 * M_PI * freqBin / sampleRate); complex<float> _Wk_N2 = exp(-1i * 2.0 * M_PI * freqBin / (sampleRate/2)); _Sf_I_quarter = (_Sfx[0] + _Wk_N * _Sfx[1]) + _Wk_N2 * (_Sfx[2] + _Wk_N * _Sfx[3]); _Sf_II_quarter = (_Sfx[0] - _Wk_N * _Sfx[1]) + _Wk_N2 * (_Sfx[2] - _Wk_N * _Sfx[3]); _Sf_III_quarter = (_Sfx[0] + _Wk_N * _Sfx[1]) - _Wk_N2 * (_Sfx[2] + _Wk_N * _Sfx[3]); _Sf_IV_quarter = (_Sfx[0] - _Wk_N * _Sfx[1]) - _Wk_N2 * (_Sfx[2] - _Wk_N * _Sfx[3]); output[freqBin] = _Sf_I_quarter; output[freqBin + sampleRate /4] = _Sf_II_quarter; output[freqBin + 2 * sampleRate /4] = _Sf_III_quarter; output[freqBin + 3 * sampleRate /4] = _Sf_IV_quarter; } and for that code I get only the half range as expected, but above Nyquist (more than N/2) there are some unexpected values. Why? What did I do wrong in that last example of code? Could you modify it for me and put it as it should be? Please don't write about C++ programming. When I asked that question on Stack Overflow, most answers were about code, but I am asking about FFT. And please don't show me any algorithm to do the complete FFT, I just want to understand that next step, after dividing the DFT in half. If I understand it I hope I will be able to create an algorithm to compute the whole range FFT. And last thing: of course I know there are lots of solutions for FFT, for free, ready to use. But my goal is not to make an FFT, but to understand it. For any help great thanks in advance. Best regards EDIT: There was a question about what exact outputs I mean when I say "expected" or "unexpected".
So let's say I have an input signal like this: for(int sample=0; sample<8; sample++) { inputSignal[sample] = sinf(1.0 * (float)sample * 2.0* M_PI / 8.0); } When I run that signal through the DFT I get: output[1] = 4.0; output[2] = 0.0; output[3] = 0.0; output[4] = 0.0; output[5] = 0.0; output[6] = 0.0; output[7] = 4.0; output[8] = 0.0; I need to comment on it: All zeros are approximate. Actually, to be more precise, all zeros are something like 8.52367e-07. But that's not the point, which is why I approximated them. As you can see I start output from 1, not from zero. Why? I make my DFT frequency loop also start from 1, because I am not interested in the 0 Hz frequency. And those numbers correspond to frequency. So for a random reader of this question I thought it would be clearer that output[1] is for 1 Hz. But actually that's also not the point. But to the point. Now you can see my outputs. And in my opinion they are the expected values. output[1] is more than 0.0, because my input signal has a 1 Hz sinusoid. And output[7] is also more than 0.0, because it's a mirror frequency relative to the Nyquist frequency, which is 4 Hz, and as far as I know that's expected. Do you agree? My second example of code (where I divide the DFT into evens and odds) gives me exactly the same output values. That's why I suppose I do the first "divide and conquer" in the proper way. But my next "divide and conquer" (my third example of code) gives me different values: output[1] = 4.0; output[2] = 0.0; output[3] = 0.0; output[4] = 0.0; output[5] = 2.82843; output[6] = 0.0; output[7] = 2.82843; output[8] = 0.0; And due to the fact that they are different values, I think those values are unexpected. And that's why I think I do something wrong. But I can't find the answer: what exactly is wrong? I can only see that the problem is for frequencies above Nyquist, but why? Please help me.
And if you have the chance also see lectures 19 and 20. You are trying to implement the decimation-in-time version of the FFT algorithm. There's also a decimation in frequency, explained in lecture 19. About your bit-reversal doubt, look at the steps: original sample indexes 0,1,2,3,4,5,6,7. You divide into even and odd sample indexes {0,2,4,6};{1,3,5,7}. You divide into even and odd again in every group: sample indexes {0,4};{2,6};{1,5};{3,7}. You stop here. Compare the original indexes 0,1,2,3,4,5,6,7 with the final indexes 0,4,2,6,1,5,3,7. Write both of them in bits: original indexes 000,001,010,011,100,101,110,111; final indexes 000,100,010,110,001,101,011,111. Now reverse every bit representation in the input to get the bit representations in the output. Do you see how, for instance, 001 is now 100 and 100 is now 001? Of course if you reverse 000,010,101,111 you get exactly the same positions; that is why samples 0, 2, 5 and 7 of the input remain at their original index positions in the output. This is the famous bit reversal.
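The reordering described above can be generated directly. A small sketch (the function names are mine): reversing the bit pattern of each index reproduces exactly where the samples land after log2(N) rounds of even/odd splitting in a decimation-in-time FFT.

```python
def bit_reverse_order(n_bits):
    """Indices 0 .. 2**n_bits - 1 reordered by reversing their bit patterns."""
    size = 1 << n_bits

    def reverse_bits(i):
        r = 0
        for _ in range(n_bits):
            r = (r << 1) | (i & 1)  # shift the lowest bit of i into r
            i >>= 1
        return r

    return [reverse_bits(i) for i in range(size)]

print(bit_reverse_order(3))  # [0, 4, 2, 6, 1, 5, 3, 7]
```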
{ "domain": "dsp.stackexchange", "id": 6138, "tags": "fft, fourier-transform" }
Expanding space and red shift
Question: Assuming space is really expanding - Due to expanding space, distant galaxies are supposed to be moving away from us. When light leaves a distant galaxy, its wavelength is redshifted to begin with due to the Doppler effect. But thereafter, the light keeps propagating through space which is continuously expanding. With expanding space, the redshift should keep on increasing for billions of years while it travels. Therefore, the cumulative redshift must be much larger than what should be due to the Doppler effect at the onset. Has this effect been taken into account when the calculations of the rate of expansion of the universe are done? Because the farther the source, the more time the light has to travel through the expanding space, causing the redshift to be even greater. Also, isn't it possible that some galaxies may be travelling towards us but, because of cumulative redshift (due to expanding space), they appear to be moving away? The same logic should apply to gravitational waves: their frequency, wavelength, and amplitude. The question is - does the cumulative redshift (due to travel through ever-expanding space) make any sense? If so, has it been accounted for while doing calculations? Answer: The redshift of distant galaxies is mainly due to the expansion of space whilst the light has been travelling towards us. The "cumulative redshift" you refer to is the "cosmological redshift". Galaxies also have a "peculiar" velocity with respect to the co-moving cosmological rest frame. This produces a regular Doppler shift. An example helps. There are 100 galaxies gravitationally bound in a cluster of galaxies that is so far away that the average redshift is 0.1 (the light has wavelengths 10% longer). This redshift is dominated by the cosmological redshift. However, if the galaxies are taken individually, some have redshifts a little bigger than 0.1, some a little smaller (for a typical cluster, this might amount to a dispersion of 0.002 in redshift).
This is because the galaxies have their own peculiar motion with respect to the cluster. As I say, peculiar motions tend to produce redshifts of magnitude 0.001-0.002. The cosmological redshift increases with distance (Hubble's law!), so once galaxies are far enough away that their cosmological redshifts are larger than this (much more than about 300 million light years), then the peculiar motions become negligible.
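On the question of whether an approaching galaxy can still appear redshifted: the two effects compound multiplicatively, 1 + z_obs = (1 + z_cosmo)(1 + z_pec), so a small peculiar blueshift cannot cancel a larger cosmological redshift. A minimal sketch, reusing the illustrative cluster numbers from above:

```python
def combined_redshift(z_cosmo, z_pec):
    """Observed redshift when cosmological and peculiar (Doppler) shifts act
    together: the wavelength stretch factors multiply."""
    return (1 + z_cosmo) * (1 + z_pec) - 1

# A cluster galaxy moving *toward* us (peculiar blueshift z_pec = -0.002)
# at cosmological redshift 0.1 still shows a firm net redshift:
z = combined_redshift(0.1, -0.002)
print(z)  # about 0.0978
```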
{ "domain": "physics.stackexchange", "id": 81168, "tags": "universe, space-expansion, redshift" }
What instrument should I use to vary the power given to a heating element?
Question: I have a nickel-chrome wire as a heating element and I want to vary the power given to it to control the amount of heating the wire produces. Which instrument should I use? Please don't give theoretical explanations here, tell me how I can do it practically. Answer: In simpler times, a rheostat would do the job. It diminishes and increases the current according to need: A rheostat is a variable resistor which is used to control current. They are able to vary the resistance in a circuit without interruption. The construction is very similar to the construction of a potentiometer. Microelectronics is much more efficient, and so the use has fallen off: Rheostats were often used as power control devices, for example to control light intensity (dimmer), speed of motors, heaters and ovens. Nowadays they are not used for this function anymore. This is because of their relatively low efficiency. In power control applications they are replaced by switching electronics.
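The effect of a series rheostat on the heater can be sketched numerically. The supply voltage, resistances, and function name below are arbitrary illustrative assumptions:

```python
def heater_power(v_supply, r_heater, r_rheostat):
    """Power dissipated in the heating element when a rheostat of
    resistance r_rheostat is placed in series with it."""
    current = v_supply / (r_heater + r_rheostat)  # Ohm's law for the loop
    return current ** 2 * r_heater                # P = I^2 * R in the heater

# Dialling the rheostat from 0 to 20 ohms cuts the heater power:
for r in (0, 10, 20):
    print(r, heater_power(12.0, 10.0, r))
```

Note that the rheostat itself dissipates its own I^2*R as waste heat, which is the low-efficiency drawback the quote mentions and the reason switching electronics replaced rheostats for power control.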
{ "domain": "physics.stackexchange", "id": 25590, "tags": "electricity, experimental-physics" }
Mini HttpClient json post by Action parameter
Question: This is an implementation of an HttpClient JSON POST with Action parameters. Logic: It is mainly for convenience: pass the url and an object (automatically converted to JSON by Json.NET), and use Action parameters to customize the success and error handling. The default timeout is 15 seconds; to change it, pass a different timeout parameter value. Code: using System; using System.Net.Http; using System.Text; using System.Threading.Tasks; using Newtonsoft.Json; public class Program { public static void Main(string[] args) { Test().Wait(); } public static async Task Test() { //SuccessExecute await Execute("https://jsonplaceholder.typicode.com/posts", new { value = "ITWeiHan" }); //Result : {"value": "ITWeiHan","id": 101} //ErrorExecute await Execute("https://jsonplaceholder.typicode.com/error404", new { value = "ITWeiHan" }); //Result : Error 404 } public static async Task Execute(string url, object requestBody) { await HttpClientHelper.PostByJsonContentTypeAsync(url, requestBody , successFunction: responsebody => { //Your Success Logic Console.WriteLine("Success"); Console.WriteLine(responsebody); }, errorFunction: httpRequestException => { //Your Error Solution Logic Console.WriteLine("Error"); Console.WriteLine(httpRequestException.Message); } ); } } public static class HttpClientHelper { public static async Task PostByJsonContentTypeAsync(string url, object requestBody, Action<string> successFunction, Action<HttpRequestException> errorFunction, int timeout = 15) { var json = JsonConvert.SerializeObject(requestBody); using (var client = new HttpClient() { Timeout = TimeSpan.FromSeconds(timeout) }) using (var request = new HttpRequestMessage(HttpMethod.Post, url)) using (var stringContent = new StringContent(json, Encoding.UTF8, "application/json")) { request.Content = stringContent; try { using (var httpResponseMessage = await client.SendAsync(request)) { httpResponseMessage.EnsureSuccessStatusCode(); var responseBody =
await httpResponseMessage.Content.ReadAsStringAsync(); successFunction(responseBody); } } catch (HttpRequestException e) { errorFunction(e); } } } } Answer: Great job. Here are some minor notes though. In newer versions of C# you can leverage async Main in order not to wait for your test method. public static async Task Main(string[] args) { await Test(); } You can speed things up a bit by executing your tests in parallel public static async Task Test() { var successExecute = Execute("https://jsonplaceholder.typicode.com/posts", new { value = "ITWeiHan" }); //Result : {"value": "ITWeiHan","id": 101} var errorExecute = Execute("https://jsonplaceholder.typicode.com/error404", new { value = "ITWeiHan" }); //Result : Error 404 await Task.WhenAll(successExecute, errorExecute); } Also, PostByJsonContentTypeAsync throws a couple more exceptions, namely InvalidOperationException and ArgumentNullException. Are you intentionally not handling them?
{ "domain": "codereview.stackexchange", "id": 36238, "tags": "c#, json, async-await, rest" }
Hyperconjugation - concept clarification
Question: As I currently understand it, hyperconjugation occurs in carbocations and alkenes via the interaction of the electrons in the MO of a C-H bond at an adjacent carbon to bring stability to the system. The interaction requires in-phase overlap and occurs when the C-H bond is in the same plane as the vacant p orbital (or the pi bond in the case of alkenes). It is thus not seen if the beta hydrogens are in the plane perpendicular to them. If this is true, then how does hyperconjugation work, if at all, in cyclic systems where free bond rotation is impossible and thus the orbitals will never align perfectly? What conceptual bridge am I missing here? Answer: Cyclic systems aren't planar. The system orients itself in such a way that the vacant p orbital is in plane in order to gain the ability to engage in hyperconjugation, which stabilizes it very well.
{ "domain": "chemistry.stackexchange", "id": 13942, "tags": "hyperconjugation" }
Longest "bounded" subsequence
Question: Given a sequence of comparable objects $a_1, a_2, \ldots a_n,$ how quickly can we find the longest subsequence $s$ such that $s_i > s_1$ for $i > 1$? Or equivalently, how quickly can we find $$ \underset{i}{\arg\max} \; \# \{j \mid a_j > a_i, i < j \le n \} $$ Can it be done in less than $\Omega(n\,\lg n)$ comparisons in the worst case? A $\Theta(n\,\lg n)$ algorithm is to scan from right to left, inserting each element into a balanced tree, keeping track of how many elements in the tree are larger. This was inspired by this thread on reddit. Answer: No, $\Omega(n\log n)$ comparisons are required. Let $k = \Theta(n)$ be an integer parameter and consider the following family of inputs. $$k+1,\:k+2,\:k,\:k+2,\:k-1,\:k+2,\:\ldots,\:2,\:k+2,\:1,\:k+2,\:x_1,\:x_2,\:\ldots,\:x_k$$ Note that, for $a\in\{1,\:2,\:\ldots,\:k+1\}$, the maximum bounded subsequence beginning with $a$ consists of $a$, then $a$ copies of $k+2$, then a subsequence of $x_1,x_2,\ldots,x_k$. It is not profitable to begin anywhere else, since the maximum bounded subsequence beginning with $k+1$ has length at least $k+2$. Let the adversary in this lower bound choose a random permutation $\pi$ on $\{1,2,\ldots,k\}$ and answer as though $x_j=\pi(j)+1/2$. This setting ensures that every maximum bounded subsequence beginning with $a\in\{1,\:2,\:\ldots,\:k-1\}$ has length exactly $k+2$. By the following argument, however, the algorithm cannot be sure of this fact unless it sorts $x_1,x_2,\ldots,x_k$. Suppose that the algorithm cannot infer the order of $x_i$ and $x_j$. Without loss of generality, assume that $i=\pi^{-1}(\pi(j)-1)$, so that $x_i$ immediately precedes $x_j$ in the imagined sorted order of $x_1,x_2,\ldots,x_k$. It is consistent with the algorithm's observations that $$x_\ell=\begin{cases}\pi(j)+1/2&\text{if }\ell=i\\\pi(\ell)+1/2&\text{if }\ell\ne i,\end{cases}$$ which would give rise to a bounded subsequence of length $k+3>k+2$.
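The Θ(n log n) upper bound from the question (scan right to left, inserting each element into an order-statistics structure) can be sketched as follows, using a Fenwick (binary indexed) tree over value ranks in place of the balanced tree — an equivalent structure for counting; function names are mine:

```python
def best_bounded_start(a):
    """Return (i, c) where i maximizes c = #{j > i : a[j] > a[i]}.
    The longest 'bounded' subsequence then has length c + 1.
    Right-to-left scan with a Fenwick tree of value ranks: O(n log n)."""
    rank = {v: r + 1 for r, v in enumerate(sorted(set(a)))}  # 1-based ranks
    m = len(rank)
    tree = [0] * (m + 1)

    def add(r):                       # insert one element of rank r
        while r <= m:
            tree[r] += 1
            r += r & -r

    def count_le(r):                  # inserted elements with rank <= r
        s = 0
        while r > 0:
            s += tree[r]
            r -= r & -r
        return s

    best_i, best_c, seen = -1, -1, 0
    for i in range(len(a) - 1, -1, -1):
        larger = seen - count_le(rank[a[i]])  # strictly greater, to the right
        if larger > best_c:
            best_i, best_c = i, larger
        add(rank[a[i]])
        seen += 1
    return best_i, best_c

print(best_bounded_start([3, 1, 4, 1, 5, 9, 2, 6]))  # (1, 5)
```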
{ "domain": "cs.stackexchange", "id": 3400, "tags": "algorithms, subsequences" }
Where does the light in the ceiling of the British house of commons chamber come from?
Question: I am sorry if this is a strange question. Sometimes I like to visit parliaments from around the world and compare their buildings. After looking at the British House of Commons chamber, I noticed that there was sunlight coming out of the ceiling of the chamber. At first I thought that the ceiling was made of glass and it was just sunlight. However, after checking the building from outside there seems to be at least another floor on top of the chamber. So it can't be sunlight. Then I thought that it must be just some electric light. However, after checking some old images of the chamber (before electricity was used) there is still light coming from the ceiling. Does anyone know how this happens? Answer: It is not sunlight. The lights in the roof are artificial. Your 'old photo before electricity' is presumably from before the war, when the roof did have real windows, in 'direct contact with the sky'. See this aerial view from 1919 for reference: There is recorded evidence of resistance to the building of offices etc. above the chamber, as members worried about the lack of daylight. Before the Debate goes any further I would like to express my disapproval of the suggestion of the hon. Member for West Walthamstow (Mr. McEntee) and certain other hon. Members that when we come to rebuilding the House of Commons special accommodation should be made available for other purposes above it. If that were done, we would have no daylight whatever in the new Chamber, and having sat, like many others, for two years in this artificial illumination, I certainly look forward to the day when we can have a Chamber very much better lit than the former House of Commons used to be. Sir Alfred Beit (St. Pancras, South-East), 28/Oct/1943 The parliament.uk website states that When the Chamber was rebuilt after 1945 at the cost of £2 million, Sir Giles Gilbert Scott designed a steel-framed building of five floors (two taken by the Chamber), with offices both above and below.
"The Construction of the House of Commons" by Oscar Faber notes further that The elaborate wall and ceiling linings to the frame replicate the original timber detailing by Pugin. The frame steps back along its length and addtional height above the Chamber has been allowed for a two-tier roof over the whole rebuilt House of Commons. and An elaborate network of branched ductwork sent the conditioned air to numerous outlets set into the ornate detail of the Chamber’s gallery structure, and at the upper level of the spring point of the replica roof trusses. Extract ducts were positioned at regular centres in the apex of the roof lining of the Chamber. i.e. the new chamber internal roof was designed to look like the old one, but also had extraction ducts etc. added, in keeping with there being in-use office space above. The phrase "two tier roof" does not refer to the internal roof of the commons, since that is a ceiling, not a roof. Rather, the new roof of the commons simply has two tiers, but this is not accurately picked up by Google Maps. See here: Furthermore, a friend of mine who is currently at work inside the House of Commons right now has confirmed both that the look/feel of the roof-lights is artificial, and that the offices on the floor above do not follow a long corridor around the void which would be required to direct daylight down to the commons chamber roof two floors below.
{ "domain": "engineering.stackexchange", "id": 2051, "tags": "civil-engineering" }
Are electronic wavefunctions in band gap insulators localized? Is a single-particle picture sufficient in this case?
Question: I am having trouble understanding the physics of band gap insulators. Usually in undergrad solid state physics one looks at non-interacting electrons in a periodic potential, with no disorder. Then, if the chemical potential lies in the gap between two bands, the material is insulating. At least in this derivation, the individual electronic wavefunctions composing the bands are not localized. However, when talking about insulators, people often think about localized electrons. Do the electronic wavefunctions become localized in band gap insulators? If they are, is it because of interactions? I was thinking that perhaps, since screening is not effective in insulators, the role of interactions is increased, and therefore perhaps the entire non-interacting, single-particle picture used to construct the band structure breaks down. Similarly, an impurity potential will not be screened and could localize the states. So which is it? Answer: I think you don't 100% understand the "simple case" of a perfect crystal without electron-electron or electron-phonon interactions. Let's say this crystal has full bands, a full valence band and empty conduction band. Say there are N electrons in the valence band (N is some huge number), one for each of the N valence-band states. In linear algebra terms, the electronic states in the valence band form an N-dimensional space of kets. This space, like any space in linear algebra, has infinitely many different bases. It has a basis of the N bloch states, which are delocalized, and it also has, say, the basis of N Wannier orbitals which are all localized. You can say "There's an electron in each delocalized bloch state of the valence band". Yes, you're right. I can say "There's an electron in each localized wannier state of the valence band." I'm right too. Electrons are indistinguishable, it's meaningless to assign individual electrons to individual states in this situation, and to say whether or not they're localized. 
Therefore, the material itself, in its perfectly insulating state, does not reveal to us whether it makes sense to think of electrons as localized or not. On the other hand, if there is (say) an electron in the conduction band, you can look at whether or not it's localized. Insulators like sapphire are usually described as having localized electrons because when current is moving through them, it is usually via electrons which happen to be occupying localized wavefunctions during the process of moving. It's not because Bloch's theorem doesn't apply. (Although it might not apply.) They may have some current due to electrons occupying delocalized states too, but it's usually a much smaller contributor to the current than electrons occupying localized states during the course of their motion (hopping / polaron / anderson-localized, whatever).
{ "domain": "physics.stackexchange", "id": 9483, "tags": "condensed-matter, solid-state-physics, electronic-band-theory" }
C++ thread-safe object pool
Question: This is a modern C++ implementation of a thread-safe memory pool -- I want to make sure I am solving the critical section problem correctly (no deadlocks, starvation, bounded waiting) and I am getting details like the rule of five correct (copy not allowed, move doesn't trigger double release). Design criteria: Small number of items in pool (e.g. 5 to 10) since each may have a large memory footprint -- the number can be decided / fixed when the pool is created. Uses RAII to guarantee an object is released when code leaves scope. If there are no items available in the pool, the code blocks until one is available. The typical usage pattern would be: ObjectPool<Foo> pool(5, Foo_ctor_args); ... { ObjectPool<Foo>::Item foo = pool.acquire() foo.object.doSomething(); } Class template: #include <vector> #include <mutex> #include <condition_variable> #include <cassert> #include <iostream> template <typename T> class ObjectPool { private: std::vector<T> objects; std::vector<bool> inUse; std::mutex mutex; std::condition_variable cond; public: class Item { public: T& object; Item(T& o, size_t i, ObjectPool& p) : object{o}, index{i}, pool{p} {} Item(const Item&) = delete; Item(Item&& other) : object{other.object}, index{other.index}, pool{other.pool} { other.index = bogusIndex; // <-- don't release } Item& operator=(const Item&) = delete; Item& operator=(Item&& other) { if (this != &other) { object = other.object; index = other.index; pool = other.pool; other.index = bogusIndex; // <-- don't release } return *this; } ~Item() { if (index != bogusIndex) { pool.release(index); index = bogusIndex; // <-- avoid double release } } private: constexpr static size_t bogusIndex = 65535; size_t index; ObjectPool<T>& pool; }; template<typename... Args> ObjectPool(size_t maxElems, Args&&... 
args) : inUse(maxElems, false) { for (size_t i = 0; i < maxElems; i++) objects.emplace_back(std::forward<Args>(args)...); } Item acquire() { std::unique_lock<std::mutex> guard(mutex); while (true) { for (size_t i = 0; i < objects.size(); i++) if (!inUse[i]) { inUse[i] = true; return Item{objects[i], i, *this}; } cond.wait(guard); } } private: void release(size_t index) { std::unique_lock<std::mutex> guard(mutex); assert(index < objects.size()); assert(inUse[index]); inUse[index] = false; cond.notify_all(); } }; Answer: Only allow ObjectPool to create Items The rule of five is used correctly as far as I can see, but there are still some ways to create an incorrect Item, and use that to corrupt an existing ObjectPool. Consider: ObjectPool<int> pool(10); int value = 42; decltype(pool)::Item item(value, 0, pool); To prevent this from happening, make all constructors private, and make Item a friend of ObjectPool: template <typename T> class ObjectPool { ... class Item { friend ObjectPool; Item(T& o, size_t i, ObjectPool& p) : object{o}, index{i}, pool{p} {} Item(const Item&) = delete; Item(Item&& other) : object{other.object}, index{other.index}, pool{other.pool} { other.index = bogusIndex; // <-- don't release } public: T& object; ... }; ... }; Make Item work like a std::unique_ptr An Item is almost like a std::unique_ptr; while the pool is the actual owner, Item acts like a unique reference. I would make the member variable object private, and instead add these member functions to access it: T& operator*() { return object; } T* operator->() { return &object; } T* get() { return &object; } Then you can do: auto foo = pool.acquire(); foo->doSomething(); Consider adding a constructor that takes an initializer list An object pool is like a container. 
Consider that most STL containers allow initialization from a std::initializer_list; you might want to add that for your object pool, so that you could write something like: ObjectPool<std::string> pool = {"foo", "bar", "baz", ...}; Set bogusIndex to std::numeric_limits<std::size_t>::max() Even if you cannot think of a use case for an object pool of 65535 or more items now, consider that you might need it in the future, or someone else using your code might. If you are going to use a special value for index to indicate that an Item doesn't need to be released, give it a value that really can never be a valid index, like the maximum possible value for a std::size_t. Use notify_one() When you release an Item back to the pool, it doesn't make sense to wake up all threads that are waiting. Only one will be able to get the Item that was just released. So use notify_one() instead. Consider supporting non-copyable value types Since you store the objects in a std::vector, this requires T to be copy-assignable and copy-constructible. Consider storing them in some way that does not have this limitation, for example by using a std::deque instead. Item's move assignment should std::move the object If you are moving one Item into another, it makes sense to also use move-assignment on object. It should be as simple as: object = std::move(other.object) acquire() is an \$O(N)\$ operation It is unfortunate that acquire() does a linear scan through inUse[]. It should be possible to turn this into an \$O(1)\$ operation. One possibility is to use a std::unordered_set to store the indices of the items that are in use, but while technically \$O(1)\$ amortized, that might involve a lot of unwanted allocations. There are other ways to keep track of this though.
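One of the "other ways" the reviewer may have in mind (my assumption, not stated in the review) is a free list: keep a stack of the indices that are currently available, so both acquire and release become O(1) with no scan and no allocation. A minimal sketch of the idea, shown in Python for brevity; in the C++ class this would be a std::vector<std::size_t> member replacing inUse:

```python
# Hypothetical free-list sketch (not part of the original review): a stack
# of available slot indices makes both acquire and release O(1).
class FreeList:
    def __init__(self, n):
        self.free = list(range(n))      # every slot starts out available

    def acquire(self):
        if not self.free:
            return None                 # the real pool would block here
        return self.free.pop()          # O(1): take the top free index

    def release(self, index):
        self.free.append(index)         # O(1): push the index back

pool = FreeList(3)
a, b = pool.acquire(), pool.acquire()
pool.release(a)
print(sorted([a, b]))  # -> [1, 2]
```

The same trick preserves bounded waiting: a thread woken by notify_one simply pops whatever index is on top, without racing over a scan order.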
{ "domain": "codereview.stackexchange", "id": 42850, "tags": "c++, memory-management, thread-safety" }
Where is ros2doctor?
Question: I am going through the tutorials on dashing distro. Up to this point (mostly) ok, but this is baffling. I am at the "Getting started with ros2doctor", but "doctor" or "wtf" do not exist as a ros2 command. It is not included in the command list from "ros2 -h" either (and neither is in the example output in earlier tutorial!). So why should it even be available? (ubuntu 18, binary version). Thanks! Originally posted by Almost on ROS Answers with karma: 25 on 2020-06-17 Post score: 0 Answer: ros2doctor was released in eloquent and likely is not backported to dashing, the feature list for eloquent is here Originally posted by johnconn with karma: 553 on 2020-06-17 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 35145, "tags": "ros" }
Speed performance of picking unique value from binary matrix file
Question: I wrote this code to read a square matrix saved as a binary file, whose element type can be int, uint, float, etc., and to pick a value from the matrix at a given row y and a given column x. Could this code pick the value faster than it currently does (about 20 seconds)? The matrix has at most 3601 rows and 3601 columns. import struct #convert x,y indexes of 2D array into index i of 1D array #x: index for cols in the 2D #y: index for rows in the 2D def to1D(y,x,width): i = x + width*y return i def readBinary_as(filename,x,y,file_type,width=3601): with open(filename,'rb') as file_data: #initialize, how many bytes to be read nbByte = 0 #data type of the file, uint, int, float, etc... coding = '' if file_type == "signed int": nbByte = 2 coding = '>h' #2B Signed Int - BE if file_type == "unsigned int": nbByte = 2 coding = '>H' #2B Unsigned Int - BE if file_type == "unsigned byte": nbByte = 1 coding = '>B' #1B Unsigned Byte - BE if file_type == "float": nbByte = 4 coding = '>f' #4B float32 - BE #index of my value in 1D array i = to1D(y,x,width) for each_cell in range(0,i): file_data.read(nbByte) #read and save the picked value my_value_pos = file_data.read(nbByte) val = struct.unpack(coding,my_value_pos)[0] return val Answer: Try using seek() instead of a loop that reads 1 value at a time. import struct #convert x,y indexes of 2D array into index i of 1D array #x: index for cols in the 2D #y: index for rows in the 2D def to1D(y,x,width): i = x + width*y return i def readBinary_as(filename,x,y,file_type,width=3601): with open(filename,'rb') as file_data: #initialize, how many bytes to be read nbByte = 0 #data type of the file, uint, int, float, etc... 
coding = '' if file_type == "signed int": nbByte = 2 coding = '>h' #2B Signed Int - BE if file_type == "unsigned int": nbByte = 2 coding = '>H' #2B Unsigned Int - BE if file_type == "unsigned byte": nbByte = 1 coding = '>B' #1B Unsigned Byte - BE if file_type == "float": nbByte = 4 coding = '>f' #4B float32 - BE #index of my value in 1D array i = to1D(y,x,width) offset = i * nbByte # seek to byte offset of desired data file_data.seek(offset) #read and save the picked value my_value_pos = file_data.read(nbByte) val = struct.unpack(coding,my_value_pos)[0] return val
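To make the seek() idea concrete, here is a small self-contained demo (not from the answer: a made-up 4×4 matrix of big-endian float32 values, matching the '>f' layout above, written to a temporary file):

```python
# Demo of the seek() approach: jump straight to byte offset (y*width + x)*size
# instead of reading every preceding element.
import os
import struct
import tempfile

width = 4
values = [float(i) for i in range(width * width)]  # toy 4x4 "matrix"

# write the matrix as it would appear on disk (row-major, big-endian float32)
path = os.path.join(tempfile.mkdtemp(), "matrix.bin")
with open(path, "wb") as f:
    for v in values:
        f.write(struct.pack(">f", v))

def read_value(path, x, y, width=4, fmt=">f"):
    size = struct.calcsize(fmt)          # bytes per element, from the format
    with open(path, "rb") as f:
        f.seek((y * width + x) * size)   # jump directly to the element
        return struct.unpack(fmt, f.read(size))[0]

print(read_value(path, 2, 1))  # element at row 1, col 2 -> 6.0
```

Using struct.calcsize also removes the need to maintain nbByte by hand next to each format string, since the element size is derived from the format itself.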
{ "domain": "codereview.stackexchange", "id": 36608, "tags": "python, performance, python-3.x, file" }
Can a block set on an inclined plane fall vertically downwards?
Question: At the top of a wedge of inclination $\theta$, a block is kept. There are no external forces acting on this block in the horizontal direction. Is it possible for this block to fall vertically downwards? (Image below for illustration/clarity ONLY regarding my question) I have tried to evaluate the answer using FBDs, and no matter how I attempt it, my final answer comes out to be either $\sin\theta = 0$ or $\cos\theta = 0$, but for either of those conditions, it wouldn't be an inclined plane. I have also referred to the following questions asked on this site but I haven't got the answer to my question: Free body diagram of block on accelerating wedge; A block on inclined plane; Block slides on smooth triangular wedge kept on smooth floor. Find velocity of wedge when block reaches bottom. Please notify me in the comments if more clarity is needed anywhere. Answer: Can a block set on an inclined plane fall vertically downwards? Only if the mass of the incline, $M$, were zero. Since the only external forces acting on the combination of the two blocks are vertical (gravity and the normal reaction force of the supporting surface on $M$), the center of mass (COM) of the two-block system can only undergo vertical downward motion. Since block $M$ can only move horizontally to the left in the reference frame of the floor, block $m$ must move downward and to the right in the reference frame of the floor, in such a manner that the horizontal components of the two momenta cancel so that the COM moves straight downward. The greater the mass of $m$ compared to that of $M$, the smaller the component of the motion of $m$ to the right. Hope this helps.
{ "domain": "physics.stackexchange", "id": 89202, "tags": "homework-and-exercises, newtonian-mechanics, forces, work, free-body-diagram" }
Why aren't scalar fields called spin-infinity fields?
Question: A spin-N field goes back to its original state after a rotation of $360^\circ/N$. So a spin-1 field takes $360^\circ$, a spin-1/2 field takes $720^\circ$ and a spin-2 field takes $180^\circ$. But this breaks down for scalar fields (called spin-0 fields), as it seems to suggest you must rotate one infinitely many times to get back to its original state. Whereas in fact you can rotate it an arbitrarily small amount. Hence shouldn't scalar fields be called "spin-infinity fields"? And a true spin-0 field should not look the same under any amount of rotation. Why is this logic wrong? Answer: The naming conventions of these things are based on group theory, not on this observation about periodicity. That is, whenever we say "spin-$j$", we mean that the field transforms under the irreducible representation of $SU(2)$ (or $SO(3)$ for integer spin, the Lie algebras are isomorphic) indexed by the number $j$. The finite irreducible representations of this Lie algebra are all classified by the highest eigenvalue of the $J_z$ matrix, which is $j$. The dimension of the spin-$j$ representation is $2j+1$. In particular, the $j=0$ representation is the trivial one, meaning that the field does not actually transform. All of this only applies to finite-dimensional representations. If you were to take $j$ to infinity, then you would be dealing with an infinite-dimensional representation of $SU(2)$, and the rules for these objects change, so your analysis of the periodicity actually breaks down in precisely the limit you're asking about.
{ "domain": "physics.stackexchange", "id": 73210, "tags": "quantum-mechanics, quantum-spin" }
Do perfumes and deodorants contribute to global warming?
Question: If perfumes and deodorants cause air pollution, does that mean they also heat the planet? Answer: Maybe a little, but not much. Personal care products that have a strong odor are typically associated with volatile organic compound (VOC) emissions. VOCs are not really greenhouse gases, because they are reactive and very short-lived in the presence of sunlight. However, VOCs do have a small indirect effect on global warming because they increase tropospheric ozone production in polluted atmospheres. Tropospheric ozone is a pretty important greenhouse gas, because it has a strong radiative forcing. However, it is relatively short-lived and not a driver of anthropogenic climate change. Air pollution is generally studied in terms of human health. These air pollutants are often referred to as criteria (e.g. PM2.5, NOx, tropospheric ozone, carbon monoxide, sulfur dioxide) and hazardous (e.g. ammonia, metals, VOCs) air pollutants. While CAPs and HAPs do strongly influence atmospheric chemistry (which can influence greenhouse gases), air pollution usually involves chemicals that have short lifetimes (hours to days). Conversely, strong greenhouse gases (e.g. CO2, methane, HFCs) are long-lived and therefore can increase steadily over long periods of time (years). This makes GHGs strong climate forcers, because climate operates on the decadal scale. Greenhouse gases are generally listed separately from CAPs and HAPs. If you think about it, it makes sense that long-lived greenhouse gases generally do not directly cause adverse health effects. If they did, we would be breathing "polluted" air our entire lives!
{ "domain": "earthscience.stackexchange", "id": 2310, "tags": "climate-change, climate, pollution, air" }
Does a FTS work on the same principle as a michelson (amplitude division) interferometer?
Question: As far as I can tell, within a Fourier Transform Spectrometer the spectral information is gained from changing the path length along one arm; this sounds very similar to a Michelson interferometer, but using two apertures instead of one. So are there any underlying differences between the two? Answer: You are right, a Fourier transform spectrometer is just a scanning Michelson interferometer. In spectroscopic applications these are just synonyms. The spectral information is the Fourier transform of the intensity's dependence on path length, thus the name. Often the term wavemeter is used, meaning some interferometric device to measure wavelength, including FTSs as well as Fabry–Pérot interferometers.
{ "domain": "physics.stackexchange", "id": 4069, "tags": "optics, interferometry" }
Spin velocity in table tennis
Question: I have read papers arguing that, during flight, the ball has constant spin velocities $\omega_x, \omega_y, \omega_z$, which primarily depend on the initial spin setting, e.g., topspin, bottom spin and sidespin. Take the following coordinate setting. Then topspin and bottom spin affect the spin velocity $\omega_x$, and sidespin affects the $\omega_y$ component. My question is: what about $\omega_z$? Answer: Top/bottom spin rotates as if the ball were rolling toward/away from you, and sidespin rotates about a vertical axis as if the ball were a top. Rotation about the third axis (which extends in a line directly away from you) can be called cork or corkscrew spin. It won't affect the trajectory of the ball as much as other types of spin, but can cause an unexpected bounce.
{ "domain": "physics.stackexchange", "id": 100087, "tags": "newtonian-mechanics, rotational-dynamics, projectile, aerodynamics" }
What is the definition of variance for a non-Hermitian operator?
Question: I am trying to understand what is the correct way to compute the variance of a non-Hermitian operator. I was thinking that it was simply something like: $$ \langle (\Delta a)^2 \rangle = |\langle \psi| \hat{a}^2 |\psi\rangle| -| \langle \psi |\hat{a}|\psi\rangle|^2 $$ But now I have read on Wikipedia that the variance for a random complex variable can be written as: $$ Var[Z] = \mathbb{E}[|Z|^2] - |\mathbb{E}[Z]|^2 $$ In the first term, the absolute value is computed before the expectation value, so I think the formula I have written before may be wrong. Now I am thinking that it may be $$ \langle (\Delta a)^2 \rangle = |\langle \psi| \hat{a}^\dagger \hat{a} |\psi\rangle| -| \langle \psi |\hat{a}|\psi\rangle|^2 $$ Is this correct? Answer: Since it is not possible to associate an observable with a non-Hermitian operator $\hat{a}$, it is impossible to give a physically meaningful definition of variance, since the measurement process is undefined. Let us ignore this problem and suppose that a hypothetical measurement of the operator $\hat{a}$ yields as results the (complex) eigenvalues of $\hat{a}$, with a probability distribution given by the squared projection of the state on the eigenvectors, just as for Hermitian operators. Here there is an additional problem. If an operator is not Hermitian, its eigenvectors may not form a complete set of the Hilbert space. Let us suppose the operator $\hat{a}$ admits a complete set of eigenvectors, to avoid this problem. Since the outcome of the measurement is a complex number, the measurement process should be modelled as a complex random variable. 
The definition of variance for a complex random variable is \begin{equation} \mathrm{Var}[Z] = \mathbb{E}[|Z|^2] - |\mathbb{E}[Z]|^2 \end{equation} It is clear that the second term in the expression is equal to the modulus squared of the expected value, which can be written as \begin{align} \mathbb{E}[Z] &= \sum_\alpha \alpha | \langle \alpha| \psi \rangle |^2 = \\ &= \sum_\alpha \alpha \langle \psi| \alpha \rangle \langle \alpha|\psi\rangle = \\ &= \sum_\alpha \langle \psi|\alpha\rangle\langle \alpha|\hat{a} |\psi\rangle = \\ &= \langle\psi|\hat{a}|\psi\rangle \end{align} For the first term, instead, \begin{align*} \mathbb{E}[|Z|^2] &= \sum_\alpha |\alpha|^2 |\langle \alpha| \psi \rangle |^2 = \\ &= \sum_\alpha \alpha\alpha^* \langle \psi|\alpha\rangle \langle \alpha|\psi\rangle = \\ &= \sum_\alpha \langle \psi|\hat{a}^\dagger| \alpha\rangle \langle \alpha|\hat{a}|\psi\rangle = \\ &= \langle \psi |\hat{a}^\dagger \hat{a} | \psi\rangle \end{align*} Finally, \begin{equation} \langle \left( \Delta a \right)^2 \rangle = \langle\psi|\hat{a}^\dagger \hat{a} | \psi\rangle - \langle \psi|\hat{a}^\dagger |\psi\rangle\langle \psi|\hat{a} |\psi\rangle \end{equation} This definition is consistent with the usual one for Hermitian operators because, in the case $\hat{a}=\hat{a}^\dagger$, \begin{equation} \langle \left( \Delta a \right)^2 \rangle = \langle \psi|\hat{a}^\dagger \hat{a}| \psi\rangle - \langle \psi|\hat{a}^\dagger|\psi\rangle \langle\psi|\hat{a} |\psi\rangle = \langle \psi|\hat{a}^2 |\psi\rangle - |\langle \psi|\hat{a} |\psi\rangle|^2 \end{equation} In both these definitions we have used the hypothesis that, despite being a non-Hermitian operator, $\hat{a}$ admits a complete set of eigenvectors. This assumption is therefore fundamental, as it guarantees a normalized distribution. In fact, if $\left\{|\alpha\rangle \right\}$ is not a complete set, the sum of the squared projections can be less than one. 
If instead $\left\{|\alpha\rangle\right\}$ is an overcomplete set, the sum of the probabilities will be greater than one.
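As a quick numerical sanity check of this definition (a made-up example, not from the answer: a normal non-Hermitian operator, whose eigenvectors do form an orthonormal basis, so the operator form and the probabilistic form of the variance should agree):

```python
# Compare the two forms of the variance for a diagonal (hence normal)
# non-Hermitian operator, whose eigenvectors are the standard basis.
import numpy as np

rng = np.random.default_rng(0)

eigvals = np.array([1 + 1j, 2 - 1j, 0.5j])  # complex eigenvalues
a = np.diag(eigvals)

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)                  # normalized state

# operator form: <psi|a^dag a|psi> - |<psi|a|psi>|^2
var_op = np.vdot(psi, a.conj().T @ a @ psi) - abs(np.vdot(psi, a @ psi)) ** 2

# probabilistic form: E[|Z|^2] - |E[Z]|^2 over the eigenvalue distribution
p = np.abs(psi) ** 2                        # |<alpha|psi>|^2
var_prob = np.sum(p * np.abs(eigvals) ** 2) - abs(np.sum(p * eigvals)) ** 2

print(np.isclose(var_op.real, var_prob))  # -> True
```

The agreement relies on the completeness assumption above; for a non-normal operator with a non-orthogonal eigenbasis, $\sum_\alpha |\alpha\rangle\langle\alpha|$ is no longer the identity and the two forms would differ.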
{ "domain": "physics.stackexchange", "id": 52584, "tags": "quantum-mechanics, operators" }
Is There any RNN method used for Object detection
Question: After reading the state of the art about object detection using CNNs (R-CNN, Faster R-CNN, YOLO, SSD...), I was wondering if there is a method that uses RNNs, or that combines CNNs and RNNs, for object detection? Thank you Answer: Yes, there have been many attempts, but perhaps the most notable one is the approach described in the paper of Andrej Karpathy and Li Fei-Fei, where they connect a CNN and RNN in series (CNN over image region + bidirectional RNN + Multimodal RNN) and use this for labeling a scene with a whole sentence. Though, this one is more than just object detection, as it leverages a data set of scenes and their descriptions to generate natural language descriptions of new unseen images. Another example is Ming Liang and Xiaolin Hu's approach, where they mix a CNN with an RNN and use this architecture for better object detection. As Ming and Xiaolin explained in their paper (linked above), the RNN is used to improve the CNN: A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer.
{ "domain": "datascience.stackexchange", "id": 5124, "tags": "deep-learning, neural-network, computer-vision, convolutional-neural-network, rnn" }
Are there useful utilities for visualizing rosgraphs?
Question: --Just in case there are others, and for those who may have missed some of the tools that do exist-- Are there any useful tools to help visualize a ROS computation graph? (Or to generate nice pictures for papers based on these graphs?) Originally posted by SL Remy on ROS Answers with karma: 2022 on 2012-10-02 Post score: 0 Answer: I've seen that people use rxgraph, but often don't realize they can also use xdot to generate images from a saved .dot file. roslaunch somepkg somefilename.launch rxgraph -o somefilename.dot rosrun xdot xdot.py somefilename.dot (And you can edit somefilename.dot in realtime which I think is a useful way to generate figures that fit the needs of a paper or a presentation.) Originally posted by SL Remy with karma: 2022 on 2012-10-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 11203, "tags": "ros, rxgraph" }
Question about gravitational time dilatation
Question: For $g_{00} = 1 - 2GM/c^2r$, the time interval $\Delta T$ measured in a stationary frame at a distance $r$ from the source and the time interval $\Delta t$ measured by a frame at $r= \infty$ are related via $$\Delta T = \sqrt{1 - \frac{2GM}{rc^2}} \Delta t$$ But, the invariant quantity $\Delta \tau^2$ in both frames should be the same: $$\Delta \tau^2 = \left( 1 - \frac{2GM}{c^2r} \right) (\Delta T)^2 = (\Delta t)^2$$ which gives the reverse relation between $\Delta T$ and $\Delta t$. What am I doing wrong? Answer: The eigen time of a stationary frame at distance $r$ from a gravitational source is $$(\Delta T)^2 =\frac{(\Delta s)^2}{c^2} = \left(1-\frac{2GM}{c^2 r}\right)(\Delta t)^2$$ i.e. the time measured in the stationary frame at distance $r$. The quantity $\Delta t$ is the coordinate time. In order to know the coordinate time one can go to a frame in a region where there is no gravitational effect (i.e. an area with a Minkowski metric), formally achieved by putting $r\rightarrow \infty$. In such a frame we would have as eigen time (since there we have $(\Delta x)=(\Delta y)= (\Delta z)=0$): $$\frac{(\Delta s)^2_{r\rightarrow \infty}}{c^2} = (\Delta t)^2 -[(\Delta x)^2 +(\Delta y)^2 +(\Delta z)^2]/c^2 = (\Delta t)^2$$ In such a frame the eigen time corresponds to the coordinate time $\Delta t$. So one can relate the eigen time measured at distance $r$ with the eigen time measured at $r\rightarrow \infty$ by $$(\Delta T)^2_{r} = \left(1-\frac{2GM}{c^2 r}\right)(\Delta t)^2_{r\rightarrow \infty}$$ Therefore there is no contradiction at all.
{ "domain": "physics.stackexchange", "id": 93413, "tags": "general-relativity, time-dilation" }
Electric Field at the center of the aperture in this case
Question: A small circular piece of radius b, much smaller than a, is cut from the surface of a spherical shell of radius a. What's the electric field at the center of the aperture (magnitude as well as direction)? This is not a homework question, but the reason for asking it is: the electric field at the center of a disk ("of infinitely small width") is zero. But when I put $L=0$ in the formula for the electric field on the axis of a disk, $$E = \frac{\sigma}{2\epsilon_0}\left(1 - \frac{L}{\sqrt{R^2 + L^2}}\right),$$ I don't get $E=0$ (where $\sigma$ is the surface charge density and $\epsilon_0$ the permittivity of free space). I searched about it on the net and found something which I didn't understand wholly. I tried to attach the file but it isn't uploading. Secondly, I couldn't integrate (in my question) over the remaining charge distribution to sum the contribution of all the elements to find the field at the center of the aperture. I wasn't able to do it because the field strength was to be found at the center of the aperture. The center of the aperture was troubling. The question also had a hint with it which said "Remembering the superposition principle, you can think about the effect of replacing the piece removed, which itself is practically a little disc." Does it mean that we simply find out the remaining charge on the sphere (after subtracting the charge gone with the disc)? I suppose it shouldn't mean this. I would like to understand the answer using both the ways because I want to find the mistake in integration as well as understand the hint. The question figure is- Answer: I'll only explain a couple of things and you'll do the rest. All you need to do here is find the electric field for a disc of radius r, and take the limit as r tends to zero. This limiting field will NOT be zero because of the field produced by the point charges at a point on the conductor. Next subtract this field from the field of a conductor ($\sigma / \epsilon_0$). This will show you an interesting fact. 
For the spherical conductor, at any point $P$ of its surface, half the field is produced by the charges at all other points $P' \ne P$ on the surface, and the other half is produced by the charge at $P$. The superposition of these two contributions gives the familiar field just outside a conductor as $\sigma / \epsilon_0$.
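A small numeric illustration of the superposition argument (illustrative values only, not part of the original answer): the on-axis disk field from the question tends to $\sigma/2\epsilon_0$, not zero, so removing the disc leaves $\sigma/\epsilon_0 - \sigma/2\epsilon_0 = \sigma/2\epsilon_0$ at the aperture, directed radially outward.

```python
# Check that the disk's on-axis field does NOT vanish as L -> 0, and that
# superposition then gives half the conductor field at the aperture.
# sigma and R are arbitrary illustrative values.
import math

eps0 = 8.854e-12       # vacuum permittivity, F/m
sigma = 1e-6           # surface charge density, C/m^2 (arbitrary)
R = 0.01               # radius of the small cut-out disc, m

def disk_field(L):
    """E = sigma/(2*eps0) * (1 - L/sqrt(R^2 + L^2)) on the disk's axis."""
    return sigma / (2 * eps0) * (1 - L / math.sqrt(R**2 + L**2))

E_disk = disk_field(1e-9)          # practically at the center, L -> 0
E_shell = sigma / eps0             # field just outside a full conductor

# removing the disc leaves half the conductor field at the aperture
E_aperture = E_shell - E_disk
print(E_aperture / (sigma / (2 * eps0)))   # -> about 1.0
```

This is the numerical face of the "half from the patch, half from the rest" statement above.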
{ "domain": "physics.stackexchange", "id": 34869, "tags": "electric-fields, integration, superposition" }
Generators of ${\rm SU}(n)$ are traceless. Why?
Question: A general element of the Lie group ${\rm SU}(n)$ is written as $$ g({\vec{\theta}})=e^{-i\sum_a\theta_a T_a} $$ where $\theta_a$ for $(a=1,2,\ldots,n^2-1)$ denotes $n^2-1$ real parameters. The unitarity $g^\dagger g=I$ demands that $T_a^\dagger=T_a$ i.e. the generators be hermitian. The unimodularity condition, $\det g=+1$, further tells that $$\det g=e^{-i\sum_a\theta_a{\rm Tr}(T_a)}=+1.$$ This only requires that $$\sum_a\theta_a{\rm Tr}(T_a)=2\pi n$$ where $n$ is any integer. What restricts the value of $n$ to be zero or in other words, what makes the generators traceless? Answer: The zero trace condition for the Lie algebra is the counterpart of the unit determinant condition for the Lie group. Indeed, if $M\in {\rm SU}(N)$, then $\det M=1$. Now consider the exponential map $\exp:\mathfrak{su}(N)\to \mathrm{SU}(N)$, which for a matrix group and algebra is really the matrix exponential. Now recall that $$\ln\det \exp X=\operatorname{tr} X\tag{1}.$$ Since $\exp X\in \mathrm{SU}(N)$ we must have $\det \exp X=1$. As a result $\ln\det \exp X=0$ for any $X\in \mathfrak{su}(N)$ and hence $\operatorname{tr}X=0$ for any $X\in \mathfrak{su}(N)$. The reason $\operatorname{tr}X=2\pi n$ with $n\in \mathbb{Z}\setminus \{0\}$ is not an option is exactly because of the fact that we can form any linear combination with complex coefficients of a given basis of generators as mentioned in comments to this post and in Connor's answer. Nevertheless, we give another approach, which is more direct. Let $M(t)=\exp tX$ for $t\in (-\epsilon,\epsilon)$ be a short path with $M(0)=1$ and $M'(0)=X$. The derivative of $\det M(t)$ is easily calculated from (1) along the path. Indeed taking the derivative with respect to $t$ we find $$\dfrac{1}{\det (\exp tX)}\dfrac{d}{dt}(\det(\exp tX))= \operatorname{tr} X\tag{2}.$$ Since $\exp tX\in \mathrm{SU}(N)$ we have $\det \exp tX=1$ for all $t\in (-\epsilon,\epsilon)$ and the LHS of (2) is trivially zero. 
As such $\operatorname{tr}X=0$ follows. Note this is not special to ${\rm SU}(N)$: the argument applies to any matrix group where $\det M =1$. As I mentioned in the beginning: zero trace is the Lie algebra counterpart of unit determinant.
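A quick numerical illustration of relation (1) in the $N=2$ case (my addition, not part of the original answer; the Pauli matrices serve as the standard basis of traceless hermitian generators):

```python
import numpy as np

# Pauli matrices: a basis of traceless hermitian generators for su(2)
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

theta = [0.3, -1.1, 2.0]
X = -1j * sum(t * s for t, s in zip(theta, pauli))   # X in su(2), tr X = 0

# matrix exponential via eigendecomposition (X is normal, hence diagonalizable)
w, V = np.linalg.eig(X)
g = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

print(abs(np.trace(X)))        # 0: the generator combination is traceless
print(abs(np.linalg.det(g)))   # 1: hence det g = 1, as required for SU(2)
```

Scaling theta by any real factor still gives $\det g=1$, which is the continuity argument above in numerical form.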
{ "domain": "physics.stackexchange", "id": 98094, "tags": "group-theory, lie-algebra, trace" }
Why is the constant velocity model used in a projectile motion derivation?
Question: I was re-studying university physics last week; I'm now in the chapter on kinematics in two dimensions, specifically the section treating projectile motion. On page 86 of his book (Serway, Physics for Scientists and Engineers) he derives the equation of the range of the projectile motion to be: $$R=\frac{{v_i}^2\sin2\theta_i}{g}$$ But I don't know why he used one of his assumptions. $\color{red}{\bf Question1:}$ Why $v_{xi}=x_{x\rlap\bigcirc B}$? Where $\rlap\bigcirc {\,\sf B}$ is the instant when the projectile lands. $\color{darkorange}{\bf Question2:}$ Why did he use the particle-under-constant-velocity model to derive that formula, whereas here we deal with a projectile under constant acceleration? Any responses are welcome; I'm quite confused about these matters! Answer: I don't understand question 1: where does he equate a speed to a position? As far as question 2 is concerned, it is basically what DavePhD said, but maybe I can extend it a bit more by saying something about the conservation of linear momentum: Along the x-direction, there is no external force (because gravity points downwards only, assuming a flat surface), so the linear momentum of the projectile is conserved. Since $p_x = mv_x$, $v_x$ is constant.
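A small numerical cross-check (my addition, not from the book): integrating the constant-acceleration motion step by step reproduces the range formula, precisely because the x-motion really is a constant-velocity motion.

```python
import math

def range_analytic(v, theta, g=9.81):
    # Serway's result: R = v^2 sin(2 theta) / g
    return v**2 * math.sin(2 * theta) / g

def range_numeric(v, theta, g=9.81, dt=1e-5):
    # x: constant velocity; y: constant acceleration (semi-implicit Euler)
    x, y = 0.0, 0.0
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    while True:
        vy -= g * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0:      # projectile has returned to launch height
            return x

v, th = 20.0, math.radians(35)
print(range_analytic(v, th))   # ~38.3 m
print(range_numeric(v, th))    # agrees to within the step size
```

The launch speed and angle here are arbitrary illustrative values.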
{ "domain": "physics.stackexchange", "id": 45687, "tags": "homework-and-exercises, newtonian-mechanics, kinematics, projectile" }
SICP - exercise 2.69 - generate a Huffman tree from a set of ordered leaves
Question: From SICP Exercise 2.69: The following procedure takes as its argument a list of symbol-frequency pairs (where no symbol appears in more than one pair) and generates a Huffman encoding tree according to the Huffman algorithm. (define (generate-huffman-tree pairs) (successive-merge (make-leaf-set pairs))) Make-leaf-set is the procedure given above that transforms the list of pairs into an ordered set of leaves. Successive-merge is the procedure you must write, using make-code-tree to successively merge the smallest-weight elements of the set until there is only one element left, which is the desired Huffman tree. (This procedure is slightly tricky, but not really complicated. If you find yourself designing a complex procedure, then you are almost certainly doing something wrong. You can take significant advantage of the fact that we are using an ordered set representation.) make-code-tree creates a tree from two branches, making them its left and right branches respectively, combining all symbols of the branches into its own set of symbols, and finally adding the weights of the branches to get its own weight. adjoin-set inserts a tree into the set while keeping it ordered by the weight of the items. Though I am confident my code is correct, I am unsure about my approach. What's bothering me is the hint that it's tricky. It did give correct results for all the tests I ran. I have thought of alternative ways of doing this, but the procedures I came up with were very complex. Also, I think this is very slow. Please review my code. (define (successive-merge leaves) (if (= (length leaves) 1) (car leaves) (successive-merge (adjoin-set (make-code-tree (car leaves) (cadr leaves)) (cddr leaves))))) How can I make this code better and faster? And are there better ways of doing this? Answer: Instead of using (= (length leaves) 1) to measure the list, consider just using (null? (cdr leaves)); that way it's a constant-time operation.
Otherwise, if you're not using destructive operations (to improve adjoin-set), this looks as good as it can get. Also make sure to have the correct indentation (if that wasn't caused by pasting), e.g.: (define (successive-merge leaves) (if (null? (cdr leaves)) (car leaves) (successive-merge (adjoin-set (make-code-tree (car leaves) (cadr leaves)) (cddr leaves))))) Also take a look at this post maybe, one of the posts regarding the same exercise.
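For readers who prefer to see the shape of the algorithm outside Scheme, here is the same successive-merge loop sketched in Python (my translation; the (weight, tree) tuple representation is an assumption, not the book's):

```python
def successive_merge(leaves):
    # leaves: (weight, tree) pairs in increasing weight order, mirroring
    # the ordered-set representation the exercise relies on
    while len(leaves) > 1:                  # the (null? (cdr leaves)) test
        (w1, t1), (w2, t2) = leaves[0], leaves[1]
        merged = (w1 + w2, (t1, t2))        # make-code-tree analogue
        rest = leaves[2:]
        i = 0                               # adjoin-set analogue: re-insert
        while i < len(rest) and rest[i][0] < merged[0]:
            i += 1                          # keeping the list ordered
        leaves = rest[:i] + [merged] + rest[i:]
    return leaves[0]

pairs = [(8, 'A'), (3, 'B'), (1, 'C'), (1, 'D')]
tree = successive_merge(sorted(pairs))
print(tree)   # total weight 13 at the root
```

Each pass merges the two lightest elements and re-inserts the result in order, exactly as the Scheme version does.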
{ "domain": "codereview.stackexchange", "id": 18177, "tags": "performance, tree, scheme, sicp, compression" }
Can the average length of the day and night of a planet be different?
Question: At one point in "Marvel's Agents of S.H.I.E.L.D.", some agents are on a planet where the day, defined as the length of time where the sun shines on the planet, occurs only once every 18 years for a couple of hours. Is that a plausible scenario? Can the length of the day and the night be drastically different for a given planet? Please, do not be afraid to be technical in your answer. EDIT: (After Rob's answer) On this hypothetical planet, the "conservation of daylight hours" as he called it seems to be violated. In other words, on Earth, when you have long days at a specific location in the northern hemisphere, this is compensated by long nights at a corresponding location in the southern hemisphere. On the planet in question, the long nights/short days seem to hold for the whole planet. Answer: It depends what you mean by day and night. The day and night are not of equal lengths now, where I live at latitude 53N. The tilt of the Earth's rotation axis with respect to the ecliptic plane means that this is generally true. The situation you describe would have to be considerably more extreme. If the planet were in a highly eccentric orbit and had a very large (close to 90 degrees) tilt of its rotation axis with respect to its orbital plane, and the tilt was such that one rotation pole pointed "towards" the star, and you were talking about the amount of "day" experienced by an observer at the opposite pole, then it could be arranged. However, note that there is some sort of "conservation of daylight hours" going on here, since an observer at the other rotation pole would experience almost continuous daylight (albeit that in a highly eccentric orbit, the daylight would be very dim for most of the planet's year) with a brief period of darkness.
{ "domain": "physics.stackexchange", "id": 27128, "tags": "astronomy, planets, rotational-kinematics" }
Uppercase vs lowercase letters in reference genome
Question: I am using a reference genome for mm10 mouse downloaded from NCBI, and would like to understand in greater detail the difference between lowercase and uppercase letters, which make up roughly equal parts of the genome. I understand that N is used for 'hard masking' (areas in the genome that could not be assembled) and lowercase letters for 'soft masking' in repeat regions. What does this soft masking actually mean? How confident can I be about the sequence in these regions? What does a lowercase n represent? Answer: What does this soft masking actually mean? A lot of the sequence in genomes is repetitive. The human genome, for example, is (at least) two-thirds repetitive elements [1]. These repetitive elements are soft-masked by converting the upper case letters to lower case. An important use-case of these soft-masked bases is in homology searches: an atatatatatat will tend to appear in both the human and mouse genomes but is likely non-homologous. How confident can I be about the sequence in these regions? As confident as you can be about non-soft-masked positions. Soft-masking is done after determining portions of the genome that are likely repetitive. There is no uncertainty whether a particular base is 'A' or 'G', just that it is part of a repeat and hence should be represented as an 'a'. What does a lowercase n represent? UCSC uses Tandem Repeats Finder and RepeatMasker for soft-masking potential repeats. NCBI most likely uses TANTAN. 'N' indicates that no sequence information is available for that base. Its being replaced by 'n' is likely an artifact of the repeat-masking software, which soft-masks an 'N' to an 'n' to indicate that that portion of the genome is likely a repeat too. [1] http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1002384
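As a small illustration (my addition, not from the answer), classifying the bases of a soft-masked sequence string is straightforward, since the masking is carried entirely by letter case:

```python
def mask_stats(seq):
    """Count unmasked, soft-masked and hard-masked bases in a sequence."""
    soft = sum(c.islower() and c != 'n' for c in seq)  # repeat-masked a/c/g/t
    hard = sum(c in 'Nn' for c in seq)                 # unknown bases
    plain = len(seq) - soft - hard                     # ordinary A/C/G/T
    return plain, soft, hard

seq = "ACGTacgtNNnnACGT"   # toy sequence, not real mm10 data
print(mask_stats(seq))     # (8, 4, 4)
```

Tools that want to ignore repeats can filter on case like this; tools that don't care can simply call `seq.upper()` and treat all bases alike.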
{ "domain": "bioinformatics.stackexchange", "id": 26, "tags": "fasta, genome" }
C++ Calculator for complex numbers - follow-up
Question: After following the suggestions from the first question on that topic (link), I'd like to show you the result now: #include <iostream> class ComplexNumber { private: double real; double imaginary; public: ComplexNumber operator+(ComplexNumber b) { //Just add real- and imaginary-parts double real = this->real + b.real; double imaginary = this->imaginary + b.imaginary; ComplexNumber c = ComplexNumber(real, imaginary); return c; } ComplexNumber operator-(ComplexNumber b) { //Just subtract real- and imaginary-parts double real = this->real - b.real; double imaginary = this->imaginary - b.imaginary; ComplexNumber c = ComplexNumber(real, imaginary); return c; } ComplexNumber operator*(ComplexNumber b) { //Use binomial theorem to find formula to multiply complex numbers double real = this->real * b.real - this->imaginary * b.imaginary; double imaginary = this->imaginary * b.real + this->real * b.imaginary; ComplexNumber c = ComplexNumber(real, imaginary); return c; } ComplexNumber operator/(ComplexNumber b) { //Again binomial theorem double real = (this->real * b.real + this->imaginary * b.imaginary) / (b.real * b.real + b.imaginary * b.imaginary); double imaginary = (this->imaginary * b.real - this->real * b.imaginary) / (b.real * b.real + b.imaginary * b.imaginary); ComplexNumber c = ComplexNumber(real, imaginary); return c; } void printNumber(char mathOperator) { std::cout << "a " << mathOperator << " b = " << this->real << " + (" << this->imaginary << ") * i" << std::endl; } /* * Constructor to create complex numbers */ ComplexNumber(double real = 0.0, double imaginary = 0.0) { this->real = real; this->imaginary = imaginary; } }; int main() { /* * Variables for the real- and imaginary-parts of * two complex numbers */ double realA; double imaginaryA; double realB; double imaginaryB; /* * User input */ std::cout << "enter real(A), imag(A), real(B) and imag(B) >> "; std::cin >> realA >> imaginaryA >> realB >> imaginaryB; std::cout << std::endl; /* * Creation 
of two objects of the type "ComplexNumber" */ ComplexNumber a(realA, imaginaryA); ComplexNumber b(realB, imaginaryB); /* * Calling the functions to add, subtract, multiply and * divide the two complex numbers. */ ComplexNumber c = a + b; c.printNumber('+'); c = a - b; c.printNumber('-'); c = a * b; c.printNumber('*'); c = a / b; c.printNumber('/'); return 0; } If you have any suggestions on further improving the code, I would really appreciate it if you share them with me. Answer: Use field initialization lists: So your constructor ComplexNumber(double real = 0.0, double imaginary = 0.0) { this->real = real; this->imaginary = imaginary; } Can become: ComplexNumber(double real = 0.0, double imaginary = 0.0) : real(real), imaginary(imaginary) { } Simplify your returns I could see an argument for making an extra ComplexNumber to hold your return value if you need to further modify it or if the name of that variable is explanatory in showing what the return means, but as it stands, your c is neither of those. Simplify ComplexNumber c = ComplexNumber(real, imaginary); return c; To just return ComplexNumber(real, imaginary); Make your operator functions const Since you (correctly) don't modify a when you do a + b, the operator function can (and should) be declared const. That way, even if you have a const object, you'll still be able to call it (and if you accidentally try to modify the member variable, you'll know immediately in the form of a compilation error). That'd look like: ComplexNumber operator+(const ComplexNumber &b) const { Notice I've also declared b as const here since you shouldn't be modifying it either. I've also passed it by reference to save you some overhead. Make your class printable with std::cout Your printNumber is very specific. In fact, if you ever want to use this class for anything other than simply showing arithmetic results, that print may not be what you want. 
Instead, I'd make a generic str() that just returns a string version of the complex number. Something like (declared const so it can be called through the const reference below; it also needs #include <sstream> for std::ostringstream): std::string str() const { std::ostringstream oss; oss << this->real << " + (" << this->imaginary << ") * i"; return oss.str(); } And then in the global scope, you can overload the << operator for std::cout: std::ostream& operator<<(std::ostream &os, const ComplexNumber &cn) { return os << cn.str(); } And now when you want to print it in main(), you can say: std::cout << "a + b = " << a + b << std::endl; std::cout << "a - b = " << a - b << std::endl; std::cout << "a * b = " << a * b << std::endl; std::cout << "a / b = " << a / b << std::endl; Look at how easy that becomes to read and understand!
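As a side note (my addition, not part of the review), the conjugate-multiplication formula used in operator/ is easy to cross-check against a language with built-in complex arithmetic:

```python
def divide(ar, ai, br, bi):
    # same formula as the reviewed operator/: multiply by the conjugate of b
    d = br * br + bi * bi
    return ((ar * br + ai * bi) / d, (ai * br - ar * bi) / d)

q = complex(3, 4) / complex(1, -2)   # Python's built-in complex division
print(divide(3, 4, 1, -2))          # (-1.0, 2.0)
print((q.real, q.imag))             # same result
```

Agreement on a few such spot checks is a cheap regression test when refactoring the C++ operators.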
{ "domain": "codereview.stackexchange", "id": 37311, "tags": "c++, beginner, reinventing-the-wheel, complex-numbers" }
Finding electric flux due to charges through a sphere
Question: Problem: Two charges of magnitudes $-2Q$ and $+Q$ are located at points $(a,0)$ and $(4a, 0)$ respectively. What is the electric flux due to these charges through a sphere of radius $3a$ with its center at the origin? Solution: Using Gauss's law, $\phi=q/\epsilon_0$, where $q$ is the total charge inside the surface and $\epsilon_0$ is the permittivity of free space. So $\phi=-2Q/\epsilon_0$. Doubt: How can we find the solution for the same using the formula $\Phi = \vec E\cdot\vec S = ES\cos\theta$, where $E$ is the magnitude of the electric field, $S$ is the area of the surface, and $\theta$ is the angle between the electric field lines and the normal (perpendicular) to $S$? Answer: You just calculate the flux manually as it were. For example, flux is given as: $$\text{Flux}=\oint\vec E\cdot \mathbf{\hat n}\;dA.$$ The expression for the field is: $$\vec E(\vec r)=-k{2Q\over |\vec r-a\mathbf{\hat x}|^3}(\vec r-a\mathbf{\hat x})+k{Q\over |\vec r-4a\mathbf{\hat x}|^3}(\vec r-4a\mathbf{\hat x}).$$ Since the closed surface in question is a sphere of radius $3a$, its points are $\vec r=3a\,\mathbf{\hat n}$ with outward unit normal $$\mathbf{\hat n}=\sin\phi\cos\theta\,\mathbf{\hat x}+\sin\phi\sin\theta\,\mathbf{\hat y}+\cos\phi\,\mathbf{\hat z},$$ where $\phi$ is the polar angle and $\theta$ the azimuthal angle. Gauss' law tells you that the external charge contributes no net flux, so you can neglect its contribution to the overall field and $\vec E(\vec r)$ becomes: $$\vec E(\vec r)=-k{2Q\over |\vec r-a\mathbf{\hat x}|^3}(\vec r-a\mathbf{\hat x}).$$ On the sphere we have $(\vec r-a\mathbf{\hat x})\cdot\mathbf{\hat n}=3a-a\sin\phi\cos\theta$, $|\vec r-a\mathbf{\hat x}|^2=10a^2-6a^2\sin\phi\cos\theta$ and $dA=9a^2\sin\phi\;d\phi\,d\theta$. So plugging this stuff in we get: $$\begin{align} \text{Flux}&=\int_0^{2\pi}\!\!\int_0^{\pi} -2kQ\,{(3a-a\sin\phi\cos\theta)\,9a^2\sin\phi\over \left[10a^2-6a^2\sin\phi\cos\theta\right]^{3/2}}\;d\phi\,d\theta\\ &=-18kQ\int_0^{2\pi}\!\!\int_0^{\pi} {(3-\sin\phi\cos\theta)\sin\phi\over \left(10-6\sin\phi\cos\theta\right)^{3/2}}\;d\phi\,d\theta. \end{align}$$ And the rest I leave as an exercise; it is mainly a matter of simplifying enough to find a suitable integral formula, and carrying out the integration reproduces Gauss's result, $-2Q/\epsilon_0=-8\pi kQ$.
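A quick numerical sanity check of this setup (my addition; Python with numpy, units chosen so $k=Q=a=1$): integrating $\vec E\cdot\mathbf{\hat n}$ over the sphere with a midpoint rule should give the Gauss's-law value $4\pi k\,q_{\rm enc}=-8\pi$, with the outside charge contributing nothing.

```python
import numpy as np

def flux_through_sphere(R=3.0, n=400):
    # midpoint grid in polar angle (theta) and azimuth (phi)
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(2 * n) + 0.5) * np.pi / n
    T, P = np.meshgrid(theta, phi, indexing='ij')
    # outward unit normals and the surface points
    nx, ny, nz = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)
    x, y, z = R * nx, R * ny, R * nz
    dA = R**2 * np.sin(T) * (np.pi / n) ** 2
    flux = 0.0
    for q, x0 in [(-2.0, 1.0), (+1.0, 4.0)]:   # -2Q at (a,0), +Q at (4a,0)
        dx, dy, dz = x - x0, y, z
        r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        flux += np.sum(q * (dx * nx + dy * ny + dz * nz) / r3 * dA)
    return flux

print(flux_through_sphere())   # ≈ -8π ≈ -25.13
```

The result matches $-2Q/\epsilon_0$ in these units even though both charges' fields are included in the integrand, illustrating that the exterior charge's flux cancels.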
{ "domain": "physics.stackexchange", "id": 100526, "tags": "electrostatics" }