Converting an integer to Greek and Roman numerals
Question: Yesterday I posted a question about converting an integer to its Roman numeral equivalent. Apparently you guys liked it and gave me some good advice, so I decided to take all your advice and go a step further: it will now not only convert to Roman but also to Greek numerals. I would like some critique on this project that I'm continuing.

# <~~ coding=utf-8 ~~>
import argparse

opts = argparse.ArgumentParser()
opts.add_argument("-g", "--greek", type=int, help="Convert to Medieval Numerals")
opts.add_argument("-r", "--roman", type=int, help="Convert to Roman Numerals")
args = opts.parse_args()

ROMAN_NUMERAL_TABLE = (
    ("~M", 1000000), ("~D", 500000), ("~C", 100000),
    ("~L", 50000), ("~X", 10000), ("~V", 5000),  # "~" indicates a Macron
    ("M", 1000), ("CM", 900), ("D", 500), ("CD", 400),
    ("C", 100), ("XC", 90), ("L", 50), ("XL", 40),
    ("X", 10), ("IX", 9), ("V", 5), ("IV", 4), ("I", 1)
)

GREEK_NUMERAL_TABLE = (
    ("α", 1), ("β", 2), ("γ", 3), ("δ", 4), ("ε", 5), ("Ϝ", 6),
    ("ζ", 7), ("η", 8), ("θ", 9), ("ι", 10), ("κ", 20), ("λ", 30),
    ("μ", 40), ("ν", 50), ("ξ", 60), ("ο", 70), ("π", 80), ("ϙ", 90),
    ("ρ", 100), ("σ", 200), ("τ", 300), ("υ", 400), ("φ", 500), ("χ", 600),
    ("ψ", 700), ("ω", 800), ("ϡ", 900), ("α", 1000), ("β", 2000), ("γ", 3000),
    ("δ", 4000), ("ε", 5000), ("ϛ", 6000), ("ζ", 7000), ("η", 8000), ("θ", 9000)
    # The Greeks weren't very creative
)

def convert_init(number, convert_to=None):
    """
    Convert a number to a numeral, Greek or Roman

    >>> print(convert_init(45, convert_to=GREEK_NUMERAL_TABLE))
    ϜϜϜϜϜϜϜγ
    >>> print(convert_init(45, convert_to=ROMAN_NUMERAL_TABLE))
    XLV
    """
    display_numerals = []
    for numeral, value in sorted(convert_to)[::-1]:  # sort the list from largest to least
        count = number // value
        number -= count * value
        display_numerals.append(numeral * count)
    return ''.join(display_numerals)

if __name__ == '__main__':
    if args.greek:
        data = convert_init(int(args.greek), convert_to=GREEK_NUMERAL_TABLE)
        with open("greek_numerals.txt", "a+") as file_data:
            file_data.write(data)  # Write it to a file
    elif args.roman:
        data = convert_init(int(args.roman), convert_to=ROMAN_NUMERAL_TABLE)
        with open("roman_numerals.txt", "a+") as file_data:
            file_data.write(data)
    else:
        raise NotImplementedError("{} is not implemented yet".format(args))

Key points I would like to focus on (though of course critique the whole program):

I used writing to a file because if I output it to the console I get an IO error, basically an encoding error. Does this have to do with my computer or with the program?

Would there be a way for me to condense the two tuples down into one and still convert correctly?

Answer: Your conversion routines make no sense. I don't see why 45 should be ϜϜϜϜϜϜϜγ, as stated in your doctest. I also get this clearly wrong result:

>>> convert_init(1939, ROMAN_NUMERAL_TABLE)
'XLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXLXVIV'

Since you are using Unicode for the Greek numerals, I should be pedantic and point out that the Roman numerals should also use the appropriate Unicode characters.

The docstring would be better written as

def convert_init(number, convert_to=None):
    """
    Convert a number to a Greek or Roman numeral

    >>> convert_init(45, convert_to=ROMAN_NUMERAL_TABLE)
    'XLV'
    """
    …

… since the print statement obscures the fact that you are returning a string. (Based on just your docstring, it could also be returning some other kind of object, whose __str__ produces the desired output.)

What is convert_init? Did you mean convert_int?

Your args = opts.parse_args() is not guarded by if __name__ == '__main__' as it should be.

You shouldn't need to cast int(args.greek) or int(args.roman), since argparse should take care of the type=int for you.
{ "domain": "codereview.stackexchange", "id": 23085, "tags": "python, python-2.x, roman-numerals, number-systems" }
What happens to the human ear as it naturally goes deaf?
Question: I was always told that when the human ear goes deaf to a specific sound (auditory frequency), that sound is heard one last time and, upon fading away, can never be heard again. Is this in fact true? If so, why do we hear that sound (which seems to happen randomly and is somewhat loud for a split second) just before the ear becomes deaf to it? Ultimately, what exactly happens as the human ear naturally goes deaf?

Answer: I don't think you will hear the frequency "one last time" before it fades away. More likely you will slowly lose your range of hearing from high to low frequency.

Hair cells in the cochlea are arranged in a spiral on the basilar membrane, with high-frequency cells at the base. For some reason, high-frequency cells are more sensitive to damage, whether by loud noises, drugs (including certain antibiotics that cause deafness), or old age. By adulthood, humans have already lost a significant portion of their high-frequency hearing (children hear up to 22 kHz, adults 16-18 kHz, from memory).

Significant hearing loss can also be caused by the loss of outer hair cells, which act as a cochlear amplifier, or by the loss of the type I spiral ganglion neurons that connect the hair cells to the brain. But age-related hearing loss is typically caused by the death of inner hair cells.
{ "domain": "biology.stackexchange", "id": 7624, "tags": "human-biology, hearing" }
Kinetic isotope experiment used to distinguish radical and concerted mechanisms for the Diels Alder reaction
Question: In the Pericyclic Reactions Oxford Primer it is written that deuterium substitution of the hydrogen atoms in the diene and dienophile that participate in a possible Diels-Alder reaction gives rise to an inverse secondary isotope effect (as the carbon atoms change from trigonal to tetrahedral geometry):

If both bonds are forming at the same time, the isotope effect when both ends are deuterated is geometrically related to the isotope effect at each end. If the bonds are being formed one at a time, the isotope effects are arithmetically related.

I don't understand the principle behind this. Why does the isotope effect occur, and how does it tell us whether the mechanism is radical or pericyclic? I need an answer much more concise than what is presented in the book.

Answer: What Fleming is likely referring to is work by Edward Thornton (J. Am. Chem. Soc. 1972, 94, 1168), in which he attempted to use secondary kinetic isotope effects (KIEs) to prove whether the Diels-Alder reactions being studied were indeed pericyclic, or occurred via a radical pathway.*

The phrasing in your post (which I assume is from the book itself) isn't particularly helpful. By 'geometric', Thornton was actually talking about the geometric mean and the arithmetic mean as ways of considering the isotope effects.

The reaction they actually studied was the retro Diels-Alder of 9,10-dihydro-9,10-ethanoanthracene, with varying levels of deuteration (none, di, tetra) as shown in the figure below. In order to discuss what's going on, we need to define some terms:

In the reaction, two bonds are being broken; let's call those bonds A and B. Breaking bond A has a KIE of $f_\mathrm{A}$ and breaking bond B has a KIE of $f_\mathrm{B}$.

Each molecule (no deuteration, di-, tetra-) has an associated rate constant; let's call these $k_0$, $k_2$, $k_4$ (with the subscript indicating how many deuteriums we have).

We can now consider the calculation that Thornton went through to determine if the reaction is indeed a concerted one:#

In a completely concerted reaction, intuitively $f_\mathrm{A} = f_\mathrm{B}$, since the TS is symmetrical. If the reaction is non-concerted (aka stepwise), $f_\mathrm{A}$ cannot equal $f_\mathrm{B}$ unless there is symmetry in the TS.

Considering the di-deuterated molecule, two pathways are possible (breaking A or B first), giving a non-symmetrical intermediate. Thornton defines rate equations for these two as $f_\mathrm{A}k_0/2$ and $f_\mathrm{B}k_0/2$. We can do this since $k_0$ is the same no matter which bond is broken in the undeuterated starting material (with the $f$ modifying it to take the KIE into account).

$k_2$ (the rate constant for the retro Diels-Alder of the di-deuterated starting material) can therefore be thought of as the sum of the two pathways (break A then B, or B then A). Mathematically: $k_2 = k_0(f_\mathrm{A} + f_\mathrm{B})/2$. Crucially, this relationship is valid independent of whether the reaction is concerted or stepwise.

$k_4$ (the rate constant for the retro Diels-Alder of the tetra-deuterated starting material) is obtained similarly. Mathematically: $k_4 = f_\mathrm{A} f_\mathrm{B} k_0$.

At this point, Thornton provides the result of solving the two previous equations simultaneously, which gives the relationship below:

$$f_\mathrm{A},f_\mathrm{B} = \frac{k_2}{k_0} \pm \left[\left(\frac{k_2}{k_0}\right)^2 - \frac{k_4}{k_0}\right]^{1/2}$$

It is from this relationship that we are able to consider whether the reactions are concerted or stepwise: if concerted, $f_\mathrm{A} = f_\mathrm{B} = k_2/k_0$, which means that $(k_2/k_0)^2 = k_4/k_0$. This now allows us to measure the rate constants, plug in the numbers, and determine if the reaction is concerted.

*: There are many examples of Diels-Alder reactions that are stepwise, via radical mechanisms. The KIEs measured are necessarily small, meaning it's often difficult to know if what you're looking at is actually correct. One thing that does hold up generally is that the reactions aren't ionic, due to the lack of any real solvent dependence (if the reactants are soluble in the solvent and you heat it hot enough, they generally proceed at the same rate, suggesting no stabilisation of 'intermediates' in polar solvents).

#: I'm paraphrasing the original paper, which will hopefully make it more digestible, but I still encourage you to read the actual thing, despite it being fairly dense.
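For completeness, the "result of solving the two previous equations simultaneously" can be reconstructed in one step: the two KIEs have sum $2k_2/k_0$ and product $k_4/k_0$, so they are the roots of a quadratic:

```latex
f^2 - \frac{2k_2}{k_0}\,f + \frac{k_4}{k_0} = 0
\quad\Longrightarrow\quad
f_\mathrm{A},\,f_\mathrm{B}
  = \frac{k_2}{k_0} \pm \left[\left(\frac{k_2}{k_0}\right)^2 - \frac{k_4}{k_0}\right]^{1/2}
```

Note that the concerted case $f_\mathrm{A} = f_\mathrm{B}$ corresponds exactly to a vanishing discriminant.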
{ "domain": "chemistry.stackexchange", "id": 8320, "tags": "organic-chemistry, reaction-mechanism, pericyclic" }
What is the displacement of the block, given force and mass?
Question: Suppose I have a block of mass 2 kg and I apply a force of 10 N. The object accelerates, then decelerates, and finally comes to rest. Given this much information, can we find the displacement of the block? Please help. Thanks.

Answer: Let's suppose that the time during which the force acts on the block is $\Delta t=t_{2}-t_{1}$. Using the second law of motion:

$$\frac{d^2}{dt^2}x=\frac{1}{m}F$$

So the velocity acquired by the end of the push is:

$$v=\int_{t_{1}}^{t_{2}}\frac{1}{m}F\,dt=\frac{1}{m}F\Delta t$$

Using this initial velocity, you can calculate the distance that the block travels until it stops with this formula:

$$\Delta KE=W$$

So:

$$-\frac{mv^2}{2}=-\mu mgd\cos\left(\theta\right)$$

If the surface is horizontal, $\theta=0$, so:

$$d=\frac{v^2}{2\mu g}$$

where $v$ is the initial velocity, $\mu$ is the coefficient of friction and $g$ is the gravitational acceleration.
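As a numeric sketch of the formulas above, with assumed values for the two quantities the question leaves unspecified (the duration of the push $\Delta t$ and the friction coefficient $\mu$):

```python
# m and F come from the question; dt and mu are assumptions, since the
# question does not provide them.
m, F = 2.0, 10.0            # kg, N (given)
dt, mu, g = 1.0, 0.2, 9.81  # s, dimensionless, m/s^2 (assumed)

v = F * dt / m              # velocity when the force stops acting
d = v**2 / (2 * mu * g)     # sliding distance until friction stops the block

print(v)            # 5.0 (m/s)
print(round(d, 2))  # 6.37 (m)
```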
{ "domain": "physics.stackexchange", "id": 44024, "tags": "kinematics" }
Search for differential equation from Green function
Question: Let's consider the following: we have a Green function $G$, and we want to know what linear differential equation is solved by $G$. How does one do this? The question is: if I know $G$, is there a method that allows solving the equation $LG=\delta$ with respect to $L$? In other words, normally we have the differential equation and we try to find the Green function $G$ in order to solve it. Here I know the Green function of the equation, and try to obtain the equation that is solved by $G$.

Answer: The following is a bit of an inductive approach, and it would probably not work for all Green functions. The basic equation that you want to solve is

$$ \hat{D} G(\mathbf{x}) = \delta(\mathbf{x}) , $$

where $\hat{D}$ is the differential operator that you want to find and $G$ is the Green function, which is known. Say for instance your Green function is given by

$$ G(\mathbf{x})=\int \frac{1}{m^2+|\mathbf{k}|^2}\exp(i \mathbf{k}\cdot\mathbf{x}) d^3k . $$

If one can somehow get rid of the denominator inside the integral, one can see that the result would produce the Dirac delta function. So, the differential operator must produce $m^2+|\mathbf{k}|^2$ when it operates on the exponential function inside the integral. For each $\mathbf{k}$, we need a gradient operator, which would bring down an $i\mathbf{k}$. It then follows that the required differential operator is

$$ \hat{D}=m^2-\nabla^2 . $$

This inductive approach is perhaps not very useful in the general case, but it should cover most of the typical cases that one finds. If there is another case that you are interested in that cannot be treated in this way, please include it in the question and then we can think about how to deal with it.
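As a quick check of the induced operator (up to the $(2\pi)^3$ normalisation, which depends on the Fourier convention used in the Green function above):

```latex
(m^2-\nabla^2)\,G(\mathbf{x})
= \int \frac{m^2+|\mathbf{k}|^2}{m^2+|\mathbf{k}|^2}\,
   \exp(i\mathbf{k}\cdot\mathbf{x})\,d^3k
= \int \exp(i\mathbf{k}\cdot\mathbf{x})\,d^3k
= (2\pi)^3\,\delta(\mathbf{x})
```

Here $-\nabla^2$ acting on $\exp(i\mathbf{k}\cdot\mathbf{x})$ brings down $-(i\mathbf{k})\cdot(i\mathbf{k}) = +|\mathbf{k}|^2$, which cancels the denominator as required.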
{ "domain": "physics.stackexchange", "id": 60522, "tags": "greens-functions" }
F - measure derivation (harmonic mean of precision and recall)
Question: We can define the F-measure as follows: $F_{\alpha}=\frac{1}{\alpha \frac{1}{P}+(1-\alpha)\frac{1}{R}} $

Now we might be interested in choosing a good $\alpha$. In the article The truth of the F-measure the author states that one can choose the conditions $\beta=R/P$, where $\frac{\partial F_{\alpha}}{\partial P}=\frac{\partial F_{\alpha}}{\partial R}$, and then we obtain $\alpha=1/(\beta^2+1)$ and $F_{\beta}=\frac{(1+\beta^2)PR}{\beta^2 P+R} $

It is said that: The motivation behind this condition is that at the point where the gradients of E w.r.t. P and R are equal, the ratio of R against P should be a desired ratio $\beta$.

I understand that the condition will guarantee that the user is willing to trade an increment in precision for an equal loss in recall. But I do not get why the equality of both partial derivatives corresponds to this hypothesis. I would rather understand it when one partial derivative equals the other partial derivative multiplied by minus one. Could anyone explain to me why the desired condition (condition in words) corresponds to this equality (condition in math terms)?

EDIT: Well, we could do the following: $\partial F=\frac{\partial F_{\alpha}}{\partial P}\partial P+\frac{\partial F_{\alpha}}{\partial R}\partial R$. And since we want $\partial F=0$ for $\partial P=-\partial R$, we obtain the condition easily. But I have one problem with this: since $\frac{\partial F_{\alpha}}{\partial P}/\frac{\partial F_{\alpha}}{\partial R}=1$, and given the fact that the gradient is perpendicular to each level curve (https://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/2.-partial-derivatives/part-b-chain-rule-gradient-and-directional-derivatives/session-36-proof/MIT18_02SC_pb_32_comb.pdf), we would have that the level curve must have $m=-1$. Nonetheless, when I calculate the level curve for some constant $c$, I get the result $R(P)=\frac{c(1-\alpha ) P}{P-c\alpha}$, which clearly is not a linear function with $m=-1$. What am I missing?

Answer: why the equality of both partial derivatives corresponds to this hypothesis. I would rather understand it when one partial derivative equals the other partial derivative multiplied by minus one.

Your intuition of "trading off" by "subtracting" a value is correct when you speak in terms of $\Delta R$ and $\Delta P$ (as you yourself noticed in the edit), but is not valid if you speak in terms of derivatives.

Think of it this way: a partial derivative with respect to a parameter tells you by how much your metric changes if you improve that variable. By its very design, $F$ will always increase if you improve either $R$ or $P$ - its partial derivatives are positive everywhere. They are not equal everywhere, however. At some configurations $(R,P)$ you would gain more by improving one parameter rather than the other. Say, if your $R$ is much lower than $P$, your $F$-score would increase more if you improved $R$ a bit rather than $P$.

In the case of the $F_1$ metric, $R$ and $P$ enter the formula symmetrically and it is kind of easy to see that whenever $R=P$, either of the components contributes equally to the increase of $F$. In fact, the set of points where $R$ and $P$ are "equal in importance" is exactly the line $R=P$ (which can also be written as $R/P=1$, if you prefer).

Now consider the equation for $F_\beta$: $$\frac{\text{constant}}{\beta^2\frac{1}{R} + \frac{1}{P}}.$$ Let us see how much it changes in response to increasing $R$ or $P$. As both parameters are in the denominator, the change in value of $F_\beta$ is proportional to the change in the value of its denominator, hence we may simplify a bit and only consider the change in $$\beta^2\frac{1}{R} + \frac{1}{P}$$

You can now compute that a $dx$ increase in $R$ would decrease the value of the expression above by $\beta^2\frac{dx}{R^2}$. A similar increase in $P$ would decrease the value of the above expression by $\frac{dx}{P^2}$. 
There is a set of points $(R,P)$ where these effects are exactly equal. These are the points where $$\beta^2\frac{dx}{R^2} = \frac{dx}{P^2} \quad\Rightarrow\quad \frac{R}{P}=\beta$$

When $\beta > 1$ you are effectively putting more marginal importance on recall, in the sense that until your recall is at least $\beta$ times the precision, it is always beneficial to work on it rather than improving precision.

What am I missing?

You are missing the fact that the condition $\frac{\partial F_\alpha}{\partial P} = \frac{\partial F_\alpha}{\partial R}$ does not hold everywhere, but just on the set of points $R=\beta P = \sqrt{\frac{1-\alpha}{\alpha}}P$. This line intersects the level curve at just a single point. The line with slope -1 would be the tangent to the level curve at that point - the green dashed line on the illustration below.
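The balance condition above is easy to verify numerically. A sketch with central finite differences (the test values of $P$ and $\beta$ are arbitrary choices):

```python
# Check that at R = beta * P the partial derivatives of F_beta with
# respect to P and R coincide, using central finite differences.
def f_beta(p, r, beta):
    return (1 + beta**2) * p * r / (beta**2 * p + r)

beta, p = 2.0, 0.3   # arbitrary test point
r = beta * p         # the claimed balance line R = beta * P
h = 1e-6

dF_dP = (f_beta(p + h, r, beta) - f_beta(p - h, r, beta)) / (2 * h)
dF_dR = (f_beta(p, r + h, beta) - f_beta(p, r - h, beta)) / (2 * h)
print(abs(dF_dP - dF_dR))  # ~0 up to floating-point noise
```

Analytically, $\partial F_\beta/\partial P \propto R^2$ and $\partial F_\beta/\partial R \propto \beta^2 P^2$ (same denominator), which coincide exactly when $R = \beta P$.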
{ "domain": "datascience.stackexchange", "id": 3204, "tags": "machine-learning, clustering, metric" }
use ros header files in c++ project
Question: Hi, I am new to ROS. I want to include ROS header files (for example #include <rosbag/bag.h>) in a C++ project. I do not want to create a node and start roscore. How can I include this header and the other dependencies in a CMakeLists file? Thanks.

Originally posted by lounis on ROS Answers with karma: 11 on 2018-02-16
Post score: 0

Answer: See #q238955

catkin (http://wiki.ros.org/catkin/Tutorials), which makes use of CMake, is the best supported way to go; otherwise you could do CMake on your own like:

find_package(roscpp REQUIRED)
include_directories(${roscpp_INCLUDE_DIRS})
...

Replace roscpp with whatever library you need. pkg-config can also get ROS libraries and includes.

Originally posted by lucasw with karma: 8729 on 2018-02-16
This answer was ACCEPTED on the original site
Post score: 0

Original comments

Comment by lounis on 2018-02-16: @lucasw Thanks a lot. This is just what I was looking for.

Comment by lounis on 2018-02-19: Using these lines in my CMakeLists:

cmake_minimum_required(VERSION 2.8.12)
project(writeBag)
find_package(roscpp REQUIRED)

I get this error: CMake Error at CMakeLists.txt:3 (find_package): By not providing "Findroscpp.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package

Comment by lounis on 2018-02-19: ../.. configuration file provided by "roscpp", but CMake did not find one. Could not find a package configuration file provided by "roscpp" with any of the following names: roscppConfig.cmake roscpp-config.cmake Add the installation prefix of "roscpp" to CMAKE_PREFIX_PATH or set..

Comment by lounis on 2018-02-19: I fixed this issue: I needed to source the ROS setup.bash file...
{ "domain": "robotics.stackexchange", "id": 30066, "tags": "c++, rosbag, ros-indigo" }
Physical intuition on vortex sound generation
Question: Whenever a fluid has nonzero vorticity, it loses some of its energy in the form of sound waves. Formally, this mechanism is described by Lighthill's equation or some related model (e.g. Curle's theory). But I lack physical intuition about this process. The derivation of Lighthill's model is based on disturbances of the momentum flux tensor in a Stokesian fluid, so nothing straightforward. I am fully capable of understanding this derivation with all the maths, but I can't "explain it to my grandmother in plain English". Could anybody provide that sort of explanation?

Note 1: I am mainly interested in noise generation in turbulent fluid states. Tone-like sounds generated by the periodic shedding of macroscopic vortex eddies (the Aeolian tone at the Strouhal frequency) are clear to me.

Note 2: I am aware of M. S. Howe's brilliant publication on the Theory of Vortex Sound. I have read it. But it is mainly applied maths - that's why I am asking this question.

Answer: Good question. My problem is that I can't give you a "mainstream answer". The reason is that turbulence is considered as happening in three space dimensions and time: given an initial velocity field, there exists a vector velocity field and a scalar pressure field, both smooth and globally defined, that solve the Navier-Stokes equations. But with this premise you are not able to produce sound, which needs vibration. Why would a single object ever start to vibrate through internal stresses and tensions? There simply must be something else, as vibration always needs some kind of shock or sudden release.

My explanation is that turbulence is collision and friction, instead of viscous forces. And these collisions are able to produce vibrations, aka sound. A turbulent flow is a fluid cut into pieces, with internal surface tensions inside the fluid. But as said, this is not "mainstream physics"; mainstream physics just hasn't explained this yet. 
This forces me to take another approach: the vortex tube. The German-language version of this wiki explains how the whistling sound is crucial for its functionality. If the sound is removed, the temperature difference which can be produced drops from 40 kelvin to just a few kelvin. This supports the idea of separating surfaces inside the fluid. The whistling sound comes from these surfaces, and when this sound disappears the fluids come into contact and the temperature is transferred.

Answer: the noise is caused by the collisions of the separated fluid components.

I hope this helps. But also feel free to vote this down. Our thoughts or votes don't really change anything in nature.
{ "domain": "physics.stackexchange", "id": 26373, "tags": "fluid-dynamics, acoustics, turbulence, vortex" }
Disparity between training and testing errors with deep learning: the bias-variance tradeoff and model selection
Question: I am developing a convolutional neural network and have a dataset with 13,000 datapoints split 80%/10%/10% train/validation/test. In tuning the model architecture, I found the following, after averaging results over several runs with different random seeds:

3 conv layers: training MAE = 0.255, val MAE = 0.340
4 conv layers: training MAE = 0.232, val MAE = 0.337
5 conv layers: training MAE = 0.172, val MAE = 0.328

Normally, I'd pick the model with the best validation MAE (the trends are the same for the testing MAE, for what it's worth). However, the architecture with the best validation MAE also has the largest difference between training and validation MAE. Why is what I'd normally think of as overfitting giving better results? Would you also go with 5 convolutional layers here, or are there concerns with a large difference in training and validation/testing performance?

On what I imagine is a related note, I am familiar with the article "Reconciling modern machine-learning practice and the classical bias-variance trade-off" in PNAS, which has the thought-provoking image below. Is this something that's actually observed in practice - that you can have minimal training error but good out-of-sample, generalizable performance, as shown in subpanel B?

Answer: Your question is: which model is better, one that seems more overfitted (larger difference between train and eval set) but also has better scores, or one that has less variance between train and eval set but at the same time has worse results? This assumes that you have done a correct train/test split, there is no data leakage, and the distributions are the same in every split (this is important to check).

There was a discussion about this some time ago. The answer seems to be somewhat subjective even once the quantitative analysis has been performed. There are normally the following trade-offs:

Complexity: Occam's razor, and complexity vs interpretability. In your case, both models have almost the same complexity (it's not linear regression against deep learning, just a couple more layers) and the interpretability stays the same.

Generalization: you want your model to behave in the best possible way in production; a model overfitted in training seems more likely to fail due to a change of distribution in production.

You only have 3 data points, so it's hard to say which will be best. My suggestions would be:

Add some more layers (6, 7, 8) just to see when your test results start to go down (you can still overfit much more), then visualize the data, keeping both trade-offs defined above in mind when choosing the best architecture for your model.

Investigate more hyperparameters (adding a whole layer is a coarse, high-impact hyperparameter): learning rate, layer size, activation functions and so on.

Consider using one of the famous architectures for your problem: they are implemented in every framework and tested by a lot of people, and they are there because they seem to be the best at their task, so give them a go. A lot of electricity has already been spent on deep learning hyperparameter tuning.
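The selection rule discussed above (choose by validation MAE, while keeping an eye on the train/validation gap) can be written down directly; the numbers are the ones reported in the question:

```python
# Pick the architecture with the lowest validation MAE, and report the
# train/validation gap of the winner as a secondary sanity check.
results = {
    "3 conv layers": {"train_mae": 0.255, "val_mae": 0.340},
    "4 conv layers": {"train_mae": 0.232, "val_mae": 0.337},
    "5 conv layers": {"train_mae": 0.172, "val_mae": 0.328},
}

best = min(results, key=lambda name: results[name]["val_mae"])
gap = results[best]["val_mae"] - results[best]["train_mae"]
print(best)           # 5 conv layers
print(round(gap, 3))  # 0.156
```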
{ "domain": "datascience.stackexchange", "id": 7611, "tags": "machine-learning, neural-network, overfitting, convolutional-neural-network" }
Are there any redundant divs in my first proper webpage? Or any way to group classes in my CSS file
Question: This is my first fully formatted webpage, done while doing The Odin Project. Are there any redundant divs? Can I reduce my CSS file by grouping some classes? And how do I give styles to a button under a specific div without using a class?

html { font-family: 'Roboto', sans-serif; }
body { margin: 0; display: flex; flex-direction: column; justify-content: space-between; }
.header { background-color: #1f2937; padding-top: 10px; display: flex; align-items: center; justify-content: space-between; }
.logo { color: #f9faf8; font-size: 24px; padding-left: 35px; font-weight: bolder; }
a { color: #e5e7eb; font-size: 18px; text-decoration: none; padding: 7px; }
.hero { background-color: #1f2937; display: flex; padding: 70px 90px 80px 90px; gap: 20px; }
.hero-image { display: flex; justify-content: center; align-items: center; background-color: gray; height: 30vh; width: 45vw; color: #e5e7eb; }
h2 { font-size: 48px; font-weight: bold; color: #f9faf8; margin: 0; }
.random { display: flex; flex-direction: column; align-items: center; padding: 20px 20px 100px 20px; }
.info { font-size: 32px; color: #1f2937; font-weight: bolder; padding-bottom: 30px; }
.random-image { border-width: 2px; border-radius: 15px; border-color: #3882f6; height: 105px; width: 105px; border-style: solid; }
.quote { background-color: #e5e7eb; display: flex; flex-direction: column; justify-content: space-around; padding: 60px 250px 60px 250px; }
.quote-text { font-size: 32px; font-style: italic; color: #1f2937; }
p { color: #e5e7eb; font-size: 18px; margin: 0; }
.action { background-color: #3882f6; margin: 80px 110px 80px 110px; padding: 50px 90px 50px 90px; color: #f9faf8; display: flex; gap: 300px; border-radius: 10px; }
button { background-color: #3882f6; color: #f9faf8; border-radius: 25px; padding: 7px 25px 7px 25px; }
.footer { background-color: #1f2937; color: #e5e7eb; font-size: 18px; display: flex; justify-content: center; padding: 20px 0; }

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <link rel="stylesheet" href="styles.css">
  <title>Odin-Webpage</title>
</head>
<body>
  <div class='header'>
    <div class='logo'><u>ONE PIECE</u></div>
    <div>
      <a href="">Manga</a>
      <a href="">Anime</a>
      <a style="padding-right:35px" href="">About</a>
    </div>
  </div>
  <div class='hero'>
    <div style="width:80vh">
      <h2>One Piece is really awesome</h2>
      <p>One Piece written by Eichiro Oda is the most popular manga in the world, surpassing the likes of Batman and other multi-author comics.</p>
      <button style="border-width: 0;margin-top: 4px;">Sign up</button>
    </div>
    <div class='hero-image'>
      <img src='one-piece.jpg' style="height:45vh;width: 45vw;">
    </div>
  </div>
  <div class='random'>
    <div class='info'>Some random information.</div>
    <div style="display: flex;justify-content: space-around;text-align: center;">
      <div style="padding-left: 100px;">
        <img class='random-image' src='luffy.png'>
        <div>The protaganist Monkey D.Luffy aka Rubber boy</div>
      </div>
      <div>
        <img class='random-image' src='wings.png'>
        <div>The wings of the future Pirate King</div>
      </div>
      <div>
        <img class='random-image' src='whitebeard.png'>
        <div>The strongest man in the world</div>
      </div>
      <div style="padding-right:100px;">
        <img class='random-image' src='usopp.png'>
        <div>The only God in one piece universe</div>
      </div>
    </div>
  </div>
  <div class='quote'>
    <div class='quote-text'>Pirates are evil? The Marines are righteous?… Justice will prevail, you say? But of course it will! Whoever wins this war becomes justice!</div>
    <div style="text-align:right;font-size: 25px;"><b>-Donquixote Doflamingo</b></div>
  </div>
  <div class='action'>
    <div>
      <b>Call to action! It's time!</b><br>
      Support one piece by clicking that button right over there!
    </div>
    <div>
      <button style="border-color: white;border-style: solid;">Sign up</button>
    </div>
  </div>
  <div class='footer'>
    Copyright Gagan Karanth 2021
  </div>
</body>
</html>

Is there any standard order for arranging CSS classes among professionals?

Answer: And how to give styles to a button under a specific div without using class?

You can do this:

.action button { /* styles */ }

The selector selects all buttons that are descendants of elements that have the class action.

Your HTML contains many inline styles (style="…"); it's generally best to avoid doing that. For instance:

<div class='info'>Some random information.</div>
<div style="display: flex;justify-content: space-around;text-align: center;">
  …
</div>

Consider using a class for the 4-column section instead of inline styles.

<div class='header'>
  …
</div>
…
<div class='footer'>
  Copyright Gagan Karanth 2021
</div>

These divs can be replaced with HTML5 <header> and <footer> tags.

<div>
  <a href="">Manga</a>
  <a href="">Anime</a>
  <a style="padding-right:35px" href="">About</a>
</div>
…
<div style="display: flex;justify-content: space-around;text-align: center;">
  <div style="padding-left: 100px;">
    <img class='random-image' src='luffy.png'>
    <div>The protaganist Monkey D.Luffy aka Rubber boy</div>
  </div>
  …
  <div style="padding-right:100px;">
    <img class='random-image' src='usopp.png'>
    <div>The only God in one piece universe</div>
  </div>
</div>

padding-left and padding-right should usually be applied to parent elements rather than to the first and last children. I would change the navigation links' (Manga, Anime, and About) container element from a div to a nav, then delete the inline style padding-right:35px from the "About" link and declare it in nav's CSS.

<nav>
  <a href="">Manga</a>
  <a href="">Anime</a>
  <a href="">About</a>
</nav>

nav {
  padding-right: 35px;
}

Likewise for the 4-column part (adding a class and moving the inline styles to the CSS, per a previous suggestion):

<div class='images'>
  <div>
    <img class='random-image' src='luffy.png'>
    <div>The protaganist Monkey D.Luffy aka Rubber boy</div>
  </div>
  <div>
    <img class='random-image' src='wings.png'>
    <div>The wings of the future Pirate King</div>
  </div>
  <div>
    <img class='random-image' src='whitebeard.png'>
    <div>The strongest man in the world</div>
  </div>
  <div>
    <img class='random-image' src='usopp.png'>
    <div>The only God in one piece universe</div>
  </div>
</div>

.images {
  display: flex;
  justify-content: space-around;
  text-align: center;
  padding-left: 100px;
  padding-right: 100px;
}

Some colors in the CSS are repeated throughout the file. CSS variables are useful for keeping all the colors' definitions in one place.

:root {
  --dark-bg: #1f2937;
}
…
.header, .hero {
  background-color: var(--dark-bg);
}
.info, .quote-text {
  color: var(--dark-bg);
}

But old browsers can't do CSS custom properties (variables).
{ "domain": "codereview.stackexchange", "id": 42665, "tags": "html, css, html5" }
Calculate result of an election using the D'Hondt method
Question: I have learned about list comprehensions a week or so ago and understood that they can save a lot of screen space. Though, I have been told that this is bad practice only to use them, even if it brings tremendous benefits. Where is the line? The version with no nested comprehensions is at least three times longer. This function takes a dictionary like this and a number of seats in the council. "Name": "Votes for 1st candidate","Votes for 2nd candidate",... parties = {"020": [550,198,52,57], "De Vrie": [364,45,58,95,101,10,41,101,92,66,18,35,20,47,19,10,20,15,32,40,15,26,8,16,8,41,13,19,119,10,6,6,18,29,12,28,133,3,15,22], "inter": [299,85,26,34,29,15,18,38,27,16,18,9,8,13,17], "activisten": [161,44,139,155,25,53,58,61], "newdems": [78,77,52,35,59,20,21,6,11,8,11,23,7,28,32,19]} election_result(parties, 7) It prints how many seats each party is assigned according to the D'Hondt method: 020 - 1 seat(s) De Vrie - 3 seat(s) inter - 1 seat(s) activisten - 1 seat(s) newdems - 1 seat(s) Function itself: def election_results(parties: dict, seats: int) -> dict: """Looks cool""" assigned_seats = {p: sum(all_votes) // (sum([sum(all_votes) for all_votes in parties.values()]) // seats) for p, all_votes in parties.items()} while sum([seats_ for seats_ in assigned_seats.values()]) < seats: assigned_seats[[p for p in parties if (sum(parties[p]) // (assigned_seats[p] + 1)) == max([sum(all_votes) // (assigned_seats[p] + 1) for p, all_votes in parties.items()])][0]] += 1 [print(f"\r{k} - {v} seat(s)") for k, v in assigned_seats.items()] return assigned_seats P.S. 
Converting the while loop is definitely too much Answer: Short answer: Here are the rules of thumb with list comprehensions: don't nest them don't use them for side effects don't use them if they require more than ~70 characters width, or have longwinded conditions don't use them to build a big list if you're just going to take one element don't use them if they in any way make the code harder to understand than a traditional loop--never try to be clever or write code because it "looks cool" (unless you're code golfing). This code breaks all of the rules. Long answer: Yes, this is definitely a misuse of list comprehensions. Frankly, it's an unreadable mess. Where is the line? The line is the same as any code feature/tool: use it if it benefits the code in terms of some metric(s) (performance, readability, maintainability, etc) and don't use it if it doesn't. Software engineering is basically a long series of judgement calls like this, analyzing tradeoffs and picking the solution that makes the most sense for whatever goals you might have. In almost any non-competitive, non-code golf context, the goals are speed of development versus maintainability, with performance occasionally an important factor. If all things seem equal, lean towards maintainability while keeping a reasonable pace and avoiding premature optimizations. As a rule of thumb, lines should never be longer than 80 characters in any language. 
Putting your code into Black gives immediate improvements in this regard: def election_results(parties: dict, seats: int) -> dict: """Looks cool""" assigned_seats = { p: sum(all_votes) // (sum([sum(all_votes) for all_votes in parties.values()]) // seats) for p, all_votes in parties.items() } while sum([seats_ for seats_ in assigned_seats.values()]) < seats: assigned_seats[ [ p for p in parties if (sum(parties[p]) // (assigned_seats[p] + 1)) == max( [ sum(all_votes) // (assigned_seats[p] + 1) for p, all_votes in parties.items() ] ) ][0] ] += 1 [print(f"\r{k} - {v} seat(s)") for k, v in assigned_seats.items()] return assigned_seats A rule of thumb with list comprehensions is to avoid nesting them. This code would be a terrible chore to safely modify without introducing bugs. One would need to de-obfuscate the code just to figure out what it even does, much less go about modifying it. Nested list comprehensions often hide unnecessary repeated work. Another rule of thumb with list comprehensions is to avoid using them for side effects, like printing. List comprehensions allocate memory, so using them to print involves expensive work that just gets thrown away to the garbage collector. Worse, it's not idiomatic and hides the intent of the code. Functions that perform logic shouldn't print to begin with. They should silently return results and let the caller decide what to do. """Looks cool""" is pretty much an insult to the reader. Imagine there's a bug in the company's code and the developer that was hired to replace you has to deal with this function. I'd vouch they'll see it as extremely uncool. Be nice to that person because you may find yourself in the same position. A good point is that you've added some type hints. However, the dict hint doesn't contain key and value types. Another good point is that your variable names are generally pretty clear. But the inner list comprehensions have no names, so it's unclear what they represent. 
Intermediate variables and functions would de-anonymize them and make their purpose clear. One list comprehension achieves nothing except a wasted allocation and loop: while sum([seats_ for seats_ in assigned_seats.values()]) < seats: can be while sum(assigned_seats.values()) < seats: Here's a first pass at a rewrite. I haven't looked at the algorithm, so there are probably other optimizations available. I didn't bother ensuring that I haven't broken something. If I did regress functionality, my point is proven. from typing import Any, Dict, Iterable, List Parties = Dict[str, List[int]] Result = Dict[str, int] def flatten(lst: Iterable) -> List: return [x for y in lst for x in y] def sum_dict_values(dct: Dict[Any, List[int]]) -> int: return sum(flatten(dct.values())) def find_best_party(parties: Parties, assigned_seats: Result) -> str: return max(parties, key=lambda p: sum(parties[p]) // (assigned_seats[p] + 1)) def determine_election_results(parties: Parties, seats: int) -> Result: total_votes = sum_dict_values(parties) assigned_seats = { p: sum(votes) // (total_votes // seats) for p, votes in parties.items() } while sum(assigned_seats.values()) < seats: best_party = find_best_party(parties, assigned_seats) assigned_seats[best_party] += 1 return assigned_seats def main(): parties = { "020": [550, 198, 52, 57], "De Vrie": [364, 45, 58, 95, 101, 10, 41, 101, 92, 66, 18, 35, 20, 47, 19, 10, 20, 15, 32, 40, 15, 26, 8, 16, 8, 41, 13, 19, 119, 10, 6, 6, 18, 29, 12, 28, 133, 3, 15, 22], "inter": [299, 85, 26, 34, 29, 15, 18, 38, 27, 16, 18, 9, 8, 13, 17], "activisten": [161, 44, 139, 155, 25, 53, 58, 61], "newdems": [78, 77, 52, 35, 59, 20, 21, 6, 11, 8, 11, 23, 7, 28, 32, 19], } assigned_seats = determine_election_results(parties, 7) for k, v in assigned_seats.items(): print(f"\r{k} - {v} seat(s)") if __name__ == "__main__": main()
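Since the rewrite above was deliberately not checked against the algorithm, here is an independent cross-check (my own sketch, not part of the original answer): a plain highest-averages D'Hondt allocation, which awards each seat in turn to the party with the highest quotient V/(s+1). The vote totals are the sums of the per-candidate lists from the question.

```python
def dhondt(votes: dict, seats: int) -> dict:
    """Allocate seats by repeatedly awarding the party with the
    highest quotient votes / (seats_won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Vote totals summed from the question's per-candidate lists
totals = {"020": 857, "De Vrie": 1776, "inter": 652,
          "activisten": 696, "newdems": 487}
print(dhondt(totals, 7))
# -> {'020': 1, 'De Vrie': 3, 'inter': 1, 'activisten': 1, 'newdems': 1}
```

Running it reproduces the seat counts quoted in the question (020: 1, De Vrie: 3, inter: 1, activisten: 1, newdems: 1), which suggests the hybrid initial-quota approach in the original code lands on the same result for this input.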
{ "domain": "codereview.stackexchange", "id": 44221, "tags": "python, algorithm" }
How to determine relative polarity (basic procedure)?
Question: I am taking AP Chemistry, and have noticed that I can identify a polyatomic polar molecule, but struggle to determine which molecule is the most polar given a set of several polar molecules. What would be a basic procedure/set of questions to ask myself in determining which molecule is the most polar? I hope my question is not too general... Thank you! Answer: Determine which molecules are polar out of the given options. Determine the polar bond (this will be the bond causing the molecule to be asymmetrical). The molecule with the polar bond that has the greatest difference in electronegativity is the most polar. For example, a carbon-oxygen bond is more polar than an oxygen-fluorine bond because the difference in electronegativity between oxygen and carbon is greater than the difference between fluorine and oxygen. You can find electronegativity values in the following table or a table like it; I am not sure if you are provided with one. If not, you can normally judge electronegativity by how far an atom is from fluorine, as seen in the table below. The size of the molecule also affects polarity; for example, polarity decreases with the size of alcohols, but large complex molecules should not come up at your level.
{ "domain": "chemistry.stackexchange", "id": 8032, "tags": "molecules, intermolecular-forces, polarity" }
I am studying polarization of EM waves. Where does this general form come from?
Question: This is a snapshot from The Physics of Waves by Georgi. I am wondering if anyone could explain where this general form of the polarization vector comes from? Thank you very much. Answer: A simple linearly polarized transverse wave propagating in the $\hat z$ direction, polarized along the $\hat x$ axis, can be written as $\textbf{A} = \hat x A \cos(\omega t -\kappa z)$ where $A$ is the oscillation amplitude and $\kappa = 2\pi / \lambda$. Now if you superimpose on the wave $\textbf{A}$ another linearly polarized wave $\textbf{B} = \hat y B \cos(\omega t -\kappa z +\phi)$ that also propagates in the $\hat z$ direction but oscillates along the $\hat y$ direction with amplitude $B$ and phase $\phi$ relative to the first one, then the sum of the two waves will be $$ \textbf{C} = \hat x A \cos(\omega t -\kappa z) + \hat y B \cos(\omega t -\kappa z +\phi) $$ This wave can also exist, just like the other two, if the medium of propagation is linear and thus linear superposition holds. Now assume that $\phi = -\pi/2$, in which case $ \textbf{C} = \hat x A \cos(\omega t -\kappa z) + \hat y B \sin(\omega t -\kappa z) $. For $\alpha= \omega t -\kappa z$, as $\alpha$ runs over all values in $[0, 2\pi]$ the end point of the vector $ \textbf{C} = \hat x A \cos(\alpha) + \hat y B \sin(\alpha) $ describes an ellipse with semi-major and semi-minor axes $A$ and $B$; this is called elliptical polarization. In the special case of $A=B$ you get circular polarization, for obvious reasons, which is used extensively in earth-satellite radio links, for example. The complex notation in your textbook comes from the Euler formula replacing the trigonometric functions: $$ \textbf{C} = \hat x A \cos(\omega t -\kappa z) + \hat y B \sin(\omega t -\kappa z) \\ = \Re [\hat x A e^{\mathfrak{j}(\omega t -\kappa z)}] +\Re [\hat y B e^{\mathfrak{j}(\omega t -\kappa z -\pi/2)}] $$ or $$ \tilde {\textbf{C}} = \hat x \tilde{A} -\mathfrak{j} \hat y \tilde{B} $$ where the tilde denotes the complex amplitude.
Now if the two linearly polarized components are not aligned with the coordinate axes, you still get the endpoint describing an ellipse, as in your textbook, just with the major/minor axes tilted relative to the coordinate axes.
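The ellipse claim is easy to verify numerically. A throwaway Python sketch (not from the textbook): sample the endpoint of $\textbf{C} = \hat x A \cos\alpha + \hat y B \sin\alpha$ over a full period and confirm every point satisfies the ellipse equation $(C_x/A)^2 + (C_y/B)^2 = 1$.

```python
import math

def trace(A, B, n=360):
    """Sample the endpoint of C = x_hat*A*cos(alpha) + y_hat*B*sin(alpha)."""
    pts = []
    for k in range(n):
        alpha = 2 * math.pi * k / n  # alpha = omega*t - kappa*z
        pts.append((A * math.cos(alpha), B * math.sin(alpha)))
    return pts

A, B = 3.0, 1.5  # illustrative amplitudes of the two orthogonal components
residuals = [abs((x / A) ** 2 + (y / B) ** 2 - 1) for x, y in trace(A, B)]
print(max(residuals))  # ~0: every sampled point lies on the ellipse
```

Setting A equal to B makes the ellipse a circle, i.e. circular polarization.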
{ "domain": "physics.stackexchange", "id": 56921, "tags": "homework-and-exercises, electromagnetism, waves, electromagnetic-radiation, polarization" }
Why is magnetic field vector perpendicular to magnetic force vector?
Question: So recently in physics class, we learned about the magnetism right hand rules. One of them states that the index finger points in the direction of the velocity of a particle, the middle finger points in the direction of the magnetic field, and the thumb points in the direction of the magnetic force. I'm curious why the magnetic field vector is perpendicular to the magnetic force vector. Answer: I am still thinking very hard about this problem, but so far I have realized one very peculiar thing. The magnetic field tries to make any particle go in a circle in such a way that it makes a magnetic dipole that the existing field can try to push away! Just imagine a magnetic field into the plane of the paper and a positive particle moving towards the right in the plane: the magnetic field exerts an upward force, making the particle go in a circle and thus trying to make a magnetic dipole which would have its own field out of the plane. If such a dipole were made, the existing magnetic field would try to push it away if it were free, and if it were fixed it would reduce the net magnetic flux. Further thought development: if you have a loop carrying clockwise current as seen from behind, it would have a magnetic field pointing towards the other side from which we see. If there is a positively charged particle moving towards the right, it would be forced to move in a circular path in a counterclockwise direction from our point of view. The existing loop forces/forges another loop which has current going in the opposite direction to itself; it is interesting to note that both loops have effectively locked their negative-positive charges in a loop. As current goes clockwise in the original loop, its electrons go counterclockwise, and these seem to be taking the positively charged particle along with them for a ride in a circular embrace!
They are not pulling the particle close to themselves because there is no net electric field outside of the wire, so it still is an enigma why this locking system appears while the electric field seems not to be there at all! I know this does not completely answer the question, but I am still thinking on the subject, and after thinking about this I thought maybe someone else can benefit from it and answer more effectively if I can't!
{ "domain": "physics.stackexchange", "id": 17768, "tags": "electromagnetism, magnetic-fields" }
Looping through a string finding nested brackets
Question: So I've been trying to check through a string, looping for, [ (start) and ] (end). In this, I also have to check for nested brackets. My expected output, given input [depth1[depth2]] is as follows: depth2 depth1[depth2] To achieve this, I wrote this little snippet of code, which uses a Dictionary<int, int> to store the initial bracket's position at a given depth. If it finds another opening bracket, the depth will increase (and the position will be recorded). It will look for the next ending bracket (with respect to any other opening brackets), and get the initial position of the bracket for that depth. (Thus we get both the beginning and ending of that nested string) var depthItems = new Dictionary<int, int>(); int depth = 0; string input = "[depth1[depth2]]"; for (int i = 0; i < input.Length; i++) { if (input[i] == '[') { depth++; depthItems.Add(depth, i); } else if (input[i] == ']') { int item = depthItems[depth]; depthItems.Remove(depth); depth--; Console.Write($"Found item at ({item}, {i}) -> "); Console.WriteLine(input.Substring(item + 1, i - item - 1)); } } I'm not sure how viable this would be when looking up against large strings with lots of nested brackets - I think it's a start, at least. I will note that I was first surprised that higher "depths" were returned first, but then realised that in order for the lower ones to complete, the higher ones should have been returned first. Not a bad thing, but curious to note. So how could this be improved, especially in terms of memory (i.e. large strings)? Answer: I'm not sure how viable this would be when looking up against large strings with lots of nested brackets [...] "how viable" is a very vague question. Two common performance metrics are CPU load and memory load, perhaps that's what you were trying to get at. It's good to clarify what exactly you want to know. In terms of CPU, how will this program behave as the input grows? The algorithm does a single pass over all characters of the input. 
When the input doubles, the number of operations doubles, roughly. => In other words, it's linear. Can it be any better than linear? => No, because we cannot know the positions of the brackets without looking through each character once. In terms of memory, how will this program behave as the input grows? Aside from the memory used by the input string, what consumes memory in this program? => The dictionary storing the starting positions of brackets. How do brackets on the same nesting level impact the memory consumption? That is, in a string like [][][][]? => Since the storage for a [ is removed when the corresponding ] is found, brackets at the same nesting level use a single entry in the dictionary. How do deeply nested brackets impact the memory consumption? => The dictionary has as many entries as the deepest nesting level. As closing brackets are found, entries get deleted. The maximum memory consumed by the dictionary throughout the program is proportional to the maximum depth of nesting encountered. So how could this be improved, especially in terms of memory (i.e. large strings)? The current performance of the program is on the expected order of complexity, so it's fine. However, a dictionary is not the most natural choice for this purpose, and it's actually overkill. A simpler data structure, a stack, would have been enough to solve this. Instead of storing (depth, index) pairs in a dictionary, you could store just the index in a stack: When you see a [, push the index onto the stack. When you see a ], pop the last pushed index from the stack. The result will be equivalent, and the depth variable becomes unnecessary.
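The suggested stack version can be sketched in a few lines — shown here in Python rather than C# for brevity (the translation to a Stack<int> is one-to-one, and like the original it assumes the brackets are balanced):

```python
def bracket_contents(s):
    """Report the contents of each [...] pair; inner pairs close first."""
    stack = []   # indices of unmatched '[' characters
    found = []
    for i, ch in enumerate(s):
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            start = stack.pop()        # matching '[' for this ']'
            found.append(s[start + 1:i])
    return found

print(bracket_contents("[depth1[depth2]]"))
# -> ['depth2', 'depth1[depth2]']
```

The output order matches the original program: deeper matches complete first, because their closing bracket is reached first.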
{ "domain": "codereview.stackexchange", "id": 31619, "tags": "c#, strings, balanced-delimiters" }
How to make sense of the 'Relaxation Method' for measuring reaction kinetics?
Question: I get the concept that one can measure the rate of reaction by measuring the rate at which an equilibrium state reaches another equilibrium state after the system is perturbed (usually by heating). However, I do have an issue with it. Consider the reaction $$\ce{A <=> B}$$ My lecture handout suggests that the rate of change of $[\ce{A}]$ is given by: $$\frac{\mathrm d[\ce{A}]}{\mathrm dt}=-k_\text{forward}[\ce{A}]+k_\text{reverse}[B]$$ However, surely it's just $\frac{\mathrm d[\ce{A}]}{\mathrm dt}=-k_\text{forward}[\ce{A}]$. The equation in the lecture handout would make sense at equilibrium because that is when $\frac{\mathrm d[\ce{A}]}{\mathrm dt}=0$ but I think it seems to insinuate that this is always valid (without explicitly saying so). Am I right or is that equation always valid? If it is: why? Answer: You have two simultaneous reactions, $$\begin{array}{rcl} \ce{A} &\xrightarrow{k_1}& \ce{B}\\[6pt] \ce{B} &\xrightarrow{k_{-1}}& \ce{A} \end{array}$$ and $\dfrac{\mathrm d[\ce{A}]}{\mathrm dt}$ equals $-k_1 [\ce{A}]$ for the first reaction alone and $k_{-1} [\ce{B}]$ for the second alone. To get the total $\dfrac{\mathrm d[\ce{A}]}{\mathrm dt}$ for the reactions running simultaneously, you have to add the contributions from each reaction: $$\dfrac{\mathrm d[\ce{A}]}{\mathrm dt} = -k_1 [\ce{A}] + k_{-1} [\ce{B}]$$ This will always be true; at equilibrium you'll have $\dfrac{\mathrm d[\ce{A}]}{\mathrm dt} = 0$ and $$K = \frac{k_1}{k_{-1}} = \frac{[\ce{B}]}{[\ce{A}]}$$ If you did it your way, this wouldn't be the case.
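A quick numerical illustration (my own sketch, not from the handout): integrating $\frac{\mathrm d[\ce{A}]}{\mathrm dt} = -k_1[\ce{A}] + k_{-1}[\ce{B}]$ with simple Euler steps shows the concentrations relaxing to an equilibrium where $[\ce{B}]/[\ce{A}] = k_1/k_{-1}$, exactly as the always-valid rate equation predicts.

```python
k1, km1 = 2.0, 1.0       # forward and reverse rate constants (illustrative)
A, B = 1.0, 0.0          # start far from equilibrium
dt = 1e-4
for _ in range(200_000): # integrate to t = 20; relaxation rate is k1 + km1
    dA = (-k1 * A + km1 * B) * dt
    A += dA
    B -= dA              # A + B is conserved
print(B / A)             # -> approx k1/km1 = 2, i.e. K = [B]/[A]
```

Note the relaxation rate towards equilibrium is $k_1 + k_{-1}$, which is why relaxation experiments measure the sum of the rate constants; combined with the equilibrium constant $K = k_1/k_{-1}$, both can then be extracted.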
{ "domain": "chemistry.stackexchange", "id": 2697, "tags": "physical-chemistry, kinetics" }
Why is the period of a pendulum on the Moon $\sqrt{6}$ times its period on the Earth?
Question: I came across this equation: $T_m= \sqrt{6}T_e $. Can anyone tell me how this equation is derived? This is how I tried to, but I got stuck after some time. So the time period of a simple pendulum on the earth= $2\pi\sqrt{l/g}$ The time period of a pendulum on the moon is $2\pi\sqrt{l/(g/6)}$ Now how do I create an equation which shows the time period of a pendulum on the moon with respect to the time period of a pendulum on the earth. And please be as detailed as possible! Answer: You can modify the second formula: $$T_m = 2 \pi \sqrt{\frac{l}{\frac{g}{6}}} = 2 \pi \sqrt{\frac{6l}{g}} = 2 \pi \sqrt{6\frac{l}{g}} = \sqrt{6} \left(2 \pi \sqrt{\frac{l}{g}}\right) = \sqrt{6} T_e$$
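The algebra can also be sanity-checked numerically (a throwaway sketch; the chosen values of $l$ and $g$ are illustrative and cancel out of the ratio):

```python
import math

g, L = 9.81, 1.0                           # illustrative values
T_earth = 2 * math.pi * math.sqrt(L / g)   # period on Earth
T_moon = 2 * math.pi * math.sqrt(L / (g / 6))  # Moon's gravity is g/6
print(T_moon / T_earth)                    # -> sqrt(6) ~ 2.449
```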
{ "domain": "physics.stackexchange", "id": 69536, "tags": "homework-and-exercises, classical-mechanics" }
Batteries and fields?
Question: Batteries generate fields in wires that essentially cause the movement of electrons. I think of batteries as two charged plates that essentially contain a mechanism between them to move electrons from the positive to negative plate. If you have two charged plates they would essentially create a field everywhere. Now consider a circuit that consists of two batteries and a resistor. The first battery would generate a field everywhere. The second battery would also generate a field everywhere. Now say the first battery is 6V and the second battery is 3V. That means inside the first battery there is a field that would result in 1C of charge gaining 6J of potential energy if moved from the negative to positive terminal. But in a circuit, shouldn't the field from the second battery affect the field inside the first battery? In other words, shouldn't the field from the 3V battery affect the field inside the 6V battery? Since fields can be added, shouldn't the field inside the 6V battery change if a 3V battery is brought near it? So why do we assume an n-V battery remains constant in voltage regardless of what other batteries are near it? Answer: Electrochemical cells (batteries) are not passive components; instead they're active charge-pumps having internal feedback effects which produce a relatively constant voltage at the output terminals. If an external field impinges on a battery's terminals, this will produce a temporary small change in potential on the terminals. But the battery then actively responds with a very small, brief current which only persists long enough to restore the potential across the terminals to the same value it had before the external field was applied.
So, if you wave an electrically-charged balloon back and forth around one terminal of a D-cell, a very small AC current will appear in the battery terminals and within the electrolyte: exactly the current needed to maintain the ~1.5 volts of potential difference across the battery terminals. To understand the details of battery operation, look into the physics of "half-cell" electrochemistry. Below is a very brief ELI5 version. A metal plate dunked into fluid electrolyte will rapidly dissolve: rapidly like sugar or salt in water. But as metal atoms corrode away, they each leave behind one or more outer electrons in the metal's "electron sea." Quickly the metal becomes charged net-negative, and the electrolyte becomes charged net-positive by the population of metal +ions in solution. Electrostatic attraction tends to pull the metal ions back towards the opposite-charged metal surface, while thermal vibrations tend to make the atoms diffuse away into the electrolyte. When the "force of dissolving" equals the electrostatic attraction, the corrosion process slows to a halt. There will be a constant voltage left between the electrolyte and the metal. If an external circuit is used to increase this voltage, then the metal ions are forced back to the metal surface and so we have "charging a battery" and also electroplating. If an external circuit should reduce this voltage, then more atoms can dissolve from the metal surface and so we have "discharging a battery" and also electrolytic corrosion. Note that a single "battery" cell consists of two different metal plates connected by a common electrolyte, and the voltage on the terminals is simply the difference between the half-cell voltages which appear in the thin region between the electrolyte and each metal plate. Note that all components are conductors, and the entire "charge pump" is located in a molecules-thin layer adjacent to each metal surface. A battery is a chemically-fueled, constant-voltage charge pump.
{ "domain": "physics.stackexchange", "id": 13133, "tags": "electromagnetism, electric-circuits" }
RViz does not respect transparency (alpha) for a simple URDF component (e.g. cylinder)
Question: Hello, I am trying to make a part of my URDF model semi transparent since the part is a clear 12" acrylic plate. So I tried making the plate a flat cylinder with a light grey color and alpha set to 0.1 as shown below. However, the plate looks solid grey in RViz--the same as if I had set alpha to 1.0. I can make the whole robot more or less transparent by changing the alpha property for the Robot Model display. However, I want just the plate to be transparent. Below is a test URDF file that illustrates the problem. I am trying this using the latest Electric debian packages under Ubuntu 10.04. <?xml version="1.0"?> <!-- XML namespaces --> <robot xmlns:sensor="http://playerstage.sourceforge.net/gazebo/xmlschema/#sensor" xmlns:controller="http://playerstage.sourceforge.net/gazebo/xmlschema/#controller" xmlns:interface="http://playerstage.sourceforge.net/gazebo/xmlschema/#interface" xmlns:xacro="http://ros.org/wiki/xacro" name="test_urdf"> <link name="base_link"> <visual> <origin xyz="0 0 0" rpy="0 0 0" /> <geometry> <cylinder length="0.005" radius="0.15" /> </geometry> <material name="clear_color"> <color rgba="0.5 0.5 0.5 0.1"/> </material> </visual> </link> </robot> Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2011-08-09 Post score: 2 Original comments Comment by Nick Armstrong-Crews on 2011-08-26: +1: I have seen the same problem in diamondback on Ubuntu 10.10 Answer: Can you report this problem on the bug tracking of the visualization stack? You find a link at the bottom of the wiki page http://www.ros.org/wiki/visualization Originally posted by Wim with karma: 2915 on 2011-09-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Pi Robot on 2011-09-01: Done! The ticket can be found here: https://code.ros.org/trac/ros-pkg/ticket/5145
{ "domain": "robotics.stackexchange", "id": 6377, "tags": "rviz, urdf" }
What is the physical significance of molecular partition function?
Question: What is the physical significance of molecular partition function? I reason that the molecular partition function \begin{align} q &= \sum_i \exp(-\beta\varepsilon_i),& \beta &= \frac{1}{k_\mathrm{B}T} \end{align} is found in the expression for the populations of each configuration, derived from the Boltzmann distribution: $$p_i = \frac{\exp(-\beta\varepsilon_i)}{\sum_i \exp(-\beta\varepsilon_i)}$$ Answer: The partition function $q=\sum_i\exp(-E_i/k_BT)$ in your question can be regarded as the effective number of levels accessible to the molecule at a given temperature. It also means that in the equilibrium distribution the partition function tells us how the systems are partitioned, or divided up, among the different energy levels. In determining the form of the Boltzmann distribution, the total number of 'systems' is N and these are divided up so that $n_1$ systems have energy $E_1$, $n_2$ have energy $E_2$, etc., subject to the conditions $\sum_i n_i=N$ and $\sum_in_iE_i=E$ where E is the total energy. It is found overwhelmingly that the number of systems with energy $E_i$ is given by $n_i = N\exp(-E_i/k_BT)/q$. In practice the partition function acts as a normalisation term in the calculation of all sorts of thermodynamic properties; for example, the average (internal) energy of N molecules with quantum energy levels $E_i$ is $$ U=N\frac{\sum E_i e^{-E_i/k_BT}}{\sum e^{-E_i/k_BT} }$$ and in calculating the entropy $S=-Nk_B\sum_i p_i \ln(p_i)$ where $p_i=n_i/N =\exp(-E_i/k_BT)/q$ is the probability that the ith state is occupied. [If the states are degenerate, with degeneracy $g_i$, then the partition function is $q=\sum_i g_i\exp(-E_i/k_BT)$.]
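As a concrete numerical illustration (my own sketch, not part of the original answer), take a two-level system with levels $0$ and $\varepsilon$: at low temperature $q \to 1$ (only the ground state is accessible) and at high temperature $q \to 2$ (both levels are), matching the "effective number of accessible levels" reading of the partition function.

```python
import math

def partition_and_populations(energies, kT):
    """q = sum_i exp(-E_i/kT) and the Boltzmann populations p_i = exp(-E_i/kT)/q."""
    boltz = [math.exp(-E / kT) for E in energies]
    q = sum(boltz)
    return q, [b / q for b in boltz]

eps = 1.0  # level spacing in the same (arbitrary) units as kT
q_cold, p_cold = partition_and_populations([0.0, eps], kT=0.01)
q_hot, p_hot = partition_and_populations([0.0, eps], kT=100.0)
print(q_cold, q_hot)   # -> ~1 (one accessible level) and ~2 (both accessible)
print(p_hot)           # -> both populations approach 1/2 at high T
```

The populations always sum to one, which is exactly the normalising role of $q$ described above.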
{ "domain": "chemistry.stackexchange", "id": 10391, "tags": "statistical-mechanics" }
What can be inferred about the closeness of reduced qubit states from the closeness of the bipartite quantum state?
Question: Given a qubit state $|\psi\rangle \in \mathcal{H}$, and two bipartite general mixed states $\rho$ and $\sigma$, such that, $$\langle \psi|\otimes \langle \psi|\rho - \sigma |\psi\rangle \otimes |\psi \rangle \ \leqslant \epsilon$$ Now suppose the reduced state of $\rho, \sigma$ be such that, $$ \rho_r = Tr_1(\rho) = Tr_2(\rho), \hspace{5mm} \sigma_r = Tr_1(\sigma) = Tr_2(\sigma)$$ Then can we say something about the closeness of the reduced state in terms of epsilon? In other words, $$\langle \psi| \rho_r - \sigma_r|\psi\rangle \leqslant ? $$ Answer: No, there's not a lot you can say. Consider these two cases, both with $\epsilon=0$. First, the obvious one, $\rho=\sigma=|\psi\rangle\langle\psi|\otimes |\psi\rangle\langle\psi|$. Clearly $\rho_r-\sigma_r=0$. Second, let $|\psi^\perp\rangle$ be orthogonal to $|\psi\rangle$. You can have $$\rho=(|\psi\rangle\langle\psi|\otimes |\psi^\perp\rangle\langle\psi^\perp|+|\psi^\perp\rangle\langle\psi^\perp|\otimes |\psi\rangle\langle\psi|)/2$$ and $$\sigma=|\psi^\perp\rangle\langle\psi^\perp|\otimes |\psi^\perp\rangle\langle\psi^\perp|.$$ Now you have $$ \langle\psi|\rho_r-\sigma_r|\psi\rangle=\frac12, $$ which is more or less as far away as you can get.
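The second counterexample is easy to verify numerically. A sketch in plain Python (no libraries assumed), taking $|\psi\rangle = |0\rangle$ and $|\psi^\perp\rangle = |1\rangle$; any orthogonal pair works:

```python
def proj(v):                       # |v><v| as a nested list
    return [[a * b for b in v] for a in v]

def kron(A, B):                    # Kronecker product of square matrices
    n, m = len(A), len(B)
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(m)]
            for i in range(n) for k in range(m)]

def expval(vec, M):                # <vec|M|vec> for a real vector
    d = len(vec)
    return sum(vec[i] * M[i][j] * vec[j] for i in range(d) for j in range(d))

def sub(A, B):                     # elementwise A - B
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def addm(A, B):                    # elementwise A + B
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(A, c):
    return [[c * x for x in row] for row in A]

def ptrace_first(M):               # trace out the first qubit of a 4x4 matrix
    return [[M[b][d] + M[2 + b][2 + d] for d in range(2)] for b in range(2)]

psi, perp = [1.0, 0.0], [0.0, 1.0]     # |psi> = |0>, |psi_perp> = |1>

rho = scale(addm(kron(proj(psi), proj(perp)),
                 kron(proj(perp), proj(psi))), 0.5)
sigma = kron(proj(perp), proj(perp))

eps = expval([1.0, 0.0, 0.0, 0.0], sub(rho, sigma))   # <psi psi|rho - sigma|psi psi>
gap = expval(psi, sub(ptrace_first(rho), ptrace_first(sigma)))
print(eps, gap)   # -> 0.0 and 0.5
```

The check confirms the answer: the bipartite overlaps agree exactly (epsilon = 0), yet the reduced states differ by 1/2 — here Tr_1(rho) is the maximally mixed state I/2 while Tr_1(sigma) is |psi_perp><psi_perp|.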
{ "domain": "quantumcomputing.stackexchange", "id": 1743, "tags": "quantum-state, density-matrix, fidelity, linear-algebra" }
accessing multiple kinect in ros groovy
Question: Recently I was tasked with accessing multiple Kinects in ROS Groovy on Ubuntu 12.04 in my lab. I can connect one Kinect to my PC, but I came across many problems when connecting two Kinects to my PC. I have modified the launch file several times, but failed. Thanks for your help. Originally posted by wlzlnu on ROS Answers with karma: 26 on 2014-10-31 Post score: 0 Original comments Comment by Dan Lazewatsky on 2014-10-31: Without more details it will be impossible to help you. What problems are you having? What launch file did you modify? How did you modify it? Please edit your original question to add more details. Comment by wlzlnu on 2014-11-01: I found some answers at http://answers.ros.org/question/96071/accessing-multiple-kinects-in-ros-hydro/ . After that, two Kinects work, but they show only the RGB image or only the depth image; they can't show the RGB and depth image at the same time. Thanks for your help! Answer: The problem has been solved successfully. It was my PC's problem; I changed to a different PC and then it works well. Originally posted by wlzlnu with karma: 26 on 2014-11-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 19911, "tags": "ros, kinect, multiple, ros-groovy" }
Robot model orientation is improper with respect to map frame
Question: The robot model is not properly oriented (it appears upside down) in rviz when "map" is selected as the fixed frame. When the fixed frame is odom, the robot model is perfectly aligned and oriented in rviz. How can I bring the robot to the same position as it was in the odom frame? Refer to the result below. Originally posted by Kishore Kumar on ROS Answers with karma: 173 on 2016-01-29 Post score: 0 Answer: The coordinate system used was wrong; the solution for this question solved the issue. Originally posted by Kishore Kumar with karma: 173 on 2016-02-03 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23591, "tags": "navigation, fixed-frame, base-odometry, amcl, 2dcostmap" }
Find char which appears maximal number of times
Question: I had 15 minutes to code the following question: Given a file name return the char which appears maximal number of times. please do not take into account the unit test. Please look at the final foreach loop, I could have also used a for loop, I have doubts what is the best way to implement finding a max value and get the key from the dictionary. Also please comment about code style. Thanks using System.Collections.Generic; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace JobInterviewTests { [TestClass] public class GetMostRepeatedCharQuestion { [TestMethod] public void TestForAasMax() { char result = GetMaxCharHelper.GetMaxReaptedCharFromFile("MaxA.txt"); Assert.AreEqual('A', result); } } public class GetMaxCharHelper { public static char GetMaxReaptedCharFromFile(string fileName) { Dictionary<char, int> charToCount = new Dictionary<char, int>(); using (var streamReader = new StreamReader(fileName)) { string line = streamReader.ReadLine(); while (line != null) { char[] charArray = line.ToCharArray(); foreach (var currentChar in charArray) { if (charToCount.ContainsKey(currentChar)) { charToCount[currentChar]++; } else { charToCount.Add(currentChar, 1); } } //don't forget to get next line when reading from a file line = streamReader.ReadLine(); } int maxCount = 0; char maxChar = '\0'; foreach (var current in charToCount) { if (current.Value > maxCount) { maxCount = current.Value; maxChar = current.Key; } } return maxChar; } } } } Answer: Assuming modern C#, there are a couple of things you could do to be more concise: Instead of the dance with while((line = reader.NextLine())...) you can just use File.ReadLines and foreach over that. You can declare variables inside the TryGetValue like so: charToCount.TryGetValue(currentChar, out int count) where you can use count afterwards without having to declare it on its own line. 
While keeping score of your most seen char inside the loop is certainly more efficient (as @pattpass did in their answer), you could also extract the character with the highest count at the end with a still readable one-liner: return charToCount.OrderByDescending(cc => cc.Value).First().Key; What's completely absent in your method is error handling. What happens if the file does not exist or you aren't allowed to access it? It seems like you handle empty files by returning \0 and 0. It is debatable whether that's what one would expect. All in all, your code could look somewhat like this: static char GetMostRepeatedChar(string filename) { // you could also not handle this case here and delegate // that responsibility to the caller if(!File.Exists(filename)) { return '\0'; } var charCount = new Dictionary<char, int>(); foreach (var line in File.ReadLines(filename)) { foreach (var c in line) { charCount.TryGetValue(c, out int count); charCount[c] = count + 1; } } if (charCount.Count == 0) { // this is debatable and depends on the real use case // in production code you should never throw a raw 'Exception' // but rather something more specific throw new Exception("empty file"); } return charCount.OrderByDescending(cc => cc.Value).First().Key; }
{ "domain": "codereview.stackexchange", "id": 31603, "tags": "c#, interview-questions" }
Detection periodic elements in image
Question: I am working on a university project in which I need to find a periodic net element in an image. The net is a set of diagonal lines at a specific angle (which I don't know in advance) on a noisy image. They are hard to detect, so I would like to use all the information I can get my hands on. My current algorithm is based on using edge detection + Hough transform to find lines in a specific angle range. I was wondering if there is any way to detect periodic signals in an image? Something based on fft2 or something like that... Thanks. Answer: Another approach might be to perform the Radon / Hough transform first, then detect the points, e.g. R = radon(I,0:179) in MATLAB. It gives this image: The x-axis is angle (0-180 deg) and the y-axis is distance from the centre. Each local minimum represents a line. It shows 6 lines at ~75 degrees, 2 around 90 degrees, and 3 around 170 degrees. (These are MATLAB angles, which go clockwise from the x-axis because the y-coords are upside down.) Edit: I forgot that the Radon and Hough transforms are roughly the same. Update: I wrote some MATLAB code to locate the angles and mean separation between lines.
close all I = imread('testt.jpg'); I = rgb2gray(I); I = I(2:end-1,:); % Radon transform R = radon(I,0:179); imagesc(R); colormap gray(256); pause; % Radon transform of smoothed image Rg = radon(imgaussian(I,2),0:179); imagesc(Rg); colormap gray(256); pause; % Take it away Rf = R - Rg; imagesc(Rf); colormap gray(256); pause; % Chop off out of range parts chop = size(Rf,1) - size(I,1); chop = ceil(chop/2); Rf = Rf(chop+1:end-chop,:); imagesc(Rf); colormap gray(256); pause; % Negative lines - threshold Rf(Rf > 0) = 0; imagesc(Rf); colormap gray(256); pause; % Plot sum - peaks are angles Rp = sum(abs(Rf)); plot(Rp); % Get the peaks sep by at least 15 deg [p,a] = findpeaks(Rp,'minpeakdistance',15,'sortstr','descend'); hold on; scatter(a,p,'r*'); hold off; pause; % Iterate through peaks and find fequencies for j = 1:numel(a) % Get subsection of Radon transform around angle and transpose vstart = max([a(j)-10,1]); vend = min([a(j)+10,size(Rf,2)]); Rsub = Rf(:,vstart:vend).'; imagesc(Rsub); colormap gray(256); pause; RsubP = sum(abs(Rsub)); plot(RsubP); pause; % Find peak correlation with a bit of smoothing xp = xcorr(imgaussian(RsubP,2),imgaussian(RsubP,2)); plot(xp); pause; [rp,rl] = findpeaks(xp,'sortstr','descend'); wave(j) = abs(rl(1) - rl(2)); disp(['Angle: ' num2str(a(j))]); disp(['Wavelength: ' num2str(wave(j))]); disp(['Strength: ' num2str(p(j))]); pause; end which results in: (red * are possible angles) and Angle: 73 Wavelength: 16 Strength: 12401.356 Angle: 92 Wavelength: 54 Strength: 9442.2545 Angle: 175 Wavelength: 33 Strength: 9030.1877
{ "domain": "dsp.stackexchange", "id": 999, "tags": "image-processing, computer-vision, image-segmentation" }
Generalization of the Coulomb Force to the Lorentz-Force - Is it "guessing"?
Question: it's me again, and I'm still stuck with the paper Generalization of Coulomb’s law to Maxwell’s equations using special relativity by Kobe, like in my previous question. My problem now lies in chapter 6: The author looks at the relativistic version of Newton's second law for a motionless charge, \begin{align} \frac{dp^i}{d\tau} = \frac{q}{c} E^i \gamma = \frac{q}{c} F^{i0}u_0 \end{align} and then concludes that since the equation is proportional in the tensors $F$ and $u$, in general the following equation has to hold: \begin{align} \frac{dp^i}{d\tau} = \frac{q}{c} F^{i\nu}u_\nu \end{align} Is this a valid derivation? To state it in a more mathematical way: given the 4-vectors $a$ and $b$ and the tensor $c$: if I can always find a reference system with \begin{align} a^i = c^{i0}b_0 \end{align} does this mean that \begin{align} a^i = c^{i\nu}b_\nu \end{align} in every reference frame? I tried to solve it myself, but I'm just turning around the same arguments in my head all the time. Answer: The three equations $$ a^i = c^{i0}b_0 $$ between components of four-tensors $a,b,c$ are not enough to conclude the equations $$ a^i = c^{ik}b_k $$ are valid. For a counterexample, consider the equation ($u$ is 4-velocity, $x$ coordinate four-vector): $$ u^i = \frac{1}{c^2}u^iu^0 \left( - \frac{1}{2}u_0\right) ~~~~(*) $$ Here, $\frac{1}{c^2}u^iu^0$ stand for components of a 2-rank tensor $u\otimes u/c^2$ and $-\frac{1}{2}u_0$ stands for the zeroth component of the 4-vector $b = -\frac{1}{2}u$. In this case, there is always a frame where $$ - \frac{1}{2}u^0u_0 = c^2 $$ because this equation is actually an equation for the Lorentz gamma factor $$ \frac{1}{2}\gamma^2c^2 = c^2 $$ and there is always a frame where the particle has gamma factor $\sqrt{2}$. In that frame the equation (*) is valid for all $i$. However, the equation $$ u^i = \frac{1}{c^2}u^iu^\nu \left(-\frac{1}{2}u_\nu\right) $$ is not generally valid - it can be rewritten as $$ u^i = \frac{1}{2}u^i $$ which is obviously not true.
Kobe's original argument is somewhat different, I think. In addition to $$ \frac{dp^i}{d\tau} = \frac{q}{c} F^{i0}u_0 $$ he also assumes that this is valid only if the particle's velocity is zero. Then that equation can be expressed this way: $$ \frac{dp^i}{d\tau} = \frac{q}{c} F^{i\nu}u_\nu, $$ because $u_i = 0$. But such an equation between the four-vector components is, due to its form, valid in all Lorentz frames, so it is Lorentz-covariant. He then probably argues that since this covariance is something we expect and demand from a physical law, we should assume that the actual forces obey this equation even when the particle's velocity is not zero, i.e. when $u_i \neq 0$.
{ "domain": "physics.stackexchange", "id": 51108, "tags": "electromagnetism, special-relativity, tensor-calculus, maxwell-equations, covariance" }
What is the best way to classify data not belonging to set of classes?
Question: I am building a multi-class support vector machine (8 classes to be precise) on an image dataset of pre-defined classes. Then I thought of a question: if I have an image that doesn't belong to the set of predefined classes, what would be the outcome? So I decided to experiment with it, and the result was very bad. I got high confidence scores for images that don't belong to any of the classes. Some images were assigned to a particular class with 98% confidence, even though my expectation was that they should have a very low score. I also tried using a one-class SVM to first predict whether an image is part of the known classes or not, and if yes, then which label (meaning I have 2 models). But this doesn't seem to work, as the one-class SVM couldn't classify the "other" images well. Now I am running out of ideas of how to go about it. How can I approach this problem? Answer: Just a proposal of a method to try out. Stage $1$: Use a one-class SVM to assign those images that do not belong to the set of predefined classes to the $9$-th class. Stage $2$: For those images that pass through your filter, let the multi-class SVM assign them to one of the $8$ classes.
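The two-stage pipeline proposed in the answer can be sketched with a toy stand-in: a distance-to-centroid threshold plays the role of the one-class filter, and a nearest-centroid rule plays the role of the multi-class SVM. The data, the rejection radius, and the labels below are all invented for illustration; with real image features you would swap in scikit-learn's OneClassSVM for stage 1 and SVC for stage 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "known" classes in 2-D; real images would give high-dim features.
class_a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(50, 2))
class_b = rng.normal(loc=(3.0, 3.0), scale=0.3, size=(50, 2))
centroids = np.array([class_a.mean(axis=0), class_b.mean(axis=0)])

RADIUS = 1.0  # arbitrary stage-1 threshold standing in for the one-class SVM
OTHER = 9     # label for the rejection ("9-th") class

def classify(x):
    d = np.linalg.norm(centroids - x, axis=1)
    if d.min() > RADIUS:      # stage 1: reject as "not any known class"
        return OTHER
    return int(d.argmin())    # stage 2: assign the nearest known class

print(classify(np.array([0.1, -0.2])))   # near class 0
print(classify(np.array([2.9, 3.1])))    # near class 1
print(classify(np.array([10.0, -8.0])))  # far from both -> rejected
```

The key point is that a test image only ever reaches the multi-class model after surviving the rejection stage.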
{ "domain": "datascience.stackexchange", "id": 10715, "tags": "machine-learning, predictive-modeling, svm, multiclass-classification" }
Populate Comboboxes with Hashtables using PowerShell
Question: Background This is a PowerShell program that uses XAML code from Visual Studio to create a GUI. In this program, there are various Comboboxes (drop-down menus) used to select different features of the program. What I'm Trying To Achieve: I am trying to populate the Comboboxes with the data stored in the different Hashtables. The Problem There are different Comboboxes with different names that need to be populated. I cannot iterate through an array of the different Comboboxes. This is because I cannot (or haven't discovered how to) have an array of objects. Therefore, this is the solution that I have resorted to. This works and solves my problem. However, I want to make the code more efficient and cleaner looking. I want to hear insight from other programmers as to how they would solve this problem, because I am curious. The Code For($i = 0; $i -lt $ColumnsHashTable.Column1.Length; $i++) { $UI.Combobox1.Items.Add($ColumnsHashTable.Column1[$i]) } For($i = 0; $i -lt $ColumnsHashTable.Column2.Length; $i++) { $UI.Combobox2.Items.Add($ColumnsHashTable.Column2[$i]) } For($i = 0; $i -lt $ColumnsHashTable.Column3.Length; $i++) { $UI.Combobox3.Items.Add($ColumnsHashTable.Column3[$i]) } For($i = 0; $i -lt $ColumnsHashTable.Column4.Length; $i++) { $UI.Combobox4.Items.Add($ColumnsHashTable.Column4[$i]) } For($i = 0; $i -lt $ColumnsHashTable.Column5.Length; $i++) { $UI.Combobox5.Items.Add($ColumnsHashTable.Column5[$i]) } Answer: The first thing to observe is that the five pieces of code are identical except for two parameters that are varying.
We can capture the common code in a function (let's call it populateComboBox), and call that five times: populateComboBox $UI.Combobox1 $ColumnsHashTable.Column1 populateComboBox $UI.Combobox2 $ColumnsHashTable.Column2 populateComboBox $UI.Combobox3 $ColumnsHashTable.Column3 populateComboBox $UI.Combobox4 $ColumnsHashTable.Column4 populateComboBox $UI.Combobox5 $ColumnsHashTable.Column5 function populateComboBox($comboBox, $column) { For($i = 0; $i -lt $column.Length; $i++) { $comboBox.Items.Add($column[$i]) } } It's best not to use old C-style for loops with an index unless you need to, because they add visual noise: populateComboBox $UI.Combobox1 $ColumnsHashTable.Column1 populateComboBox $UI.Combobox2 $ColumnsHashTable.Column2 populateComboBox $UI.Combobox3 $ColumnsHashTable.Column3 populateComboBox $UI.Combobox4 $ColumnsHashTable.Column4 populateComboBox $UI.Combobox5 $ColumnsHashTable.Column5 function populateComboBox($comboBox, $column) { foreach ($item in $column) { $comboBox.Items.Add($item) } } We could leave it there. I think that is acceptable code. If we wanted to, however, we could take it further in one of two directions. First, instead of the five calls to populateComboBox, we could make a loop from 1 to 5, and get the appropriate properties to pass to the function with Get-ItemProperty. (I'm assuming that ComboBox1 and so on are really named that way and are not just example names.) The other idea would be to make an array of (ComboBox, Column) pairs and iterate through that. I quite like that idea because as a general rule it's good to put as much of your code as possible into data structures. It might be overkill in this case, however. Added stuff: To expand on the "other idea" above, we can make an array of (ComboBox, Column) pairs.
We can use an array of arrays for this: $comboVals = ( ($UI.Combobox1, $ColumnsHashTable.Column1), ($UI.Combobox2, $ColumnsHashTable.Column2), ($UI.Combobox3, $ColumnsHashTable.Column3), ($UI.Combobox4, $ColumnsHashTable.Column4), ($UI.Combobox5, $ColumnsHashTable.Column5) ); (If you are wondering, I didn't put @ symbols on the arrays here because PowerShell is smart enough to figure out here that we are using arrays.) We can then iterate through that data structure calling the same populateComboBox function we defined before: foreach ($pair in $comboVals) { populateComboBox $pair[0] $pair[1] } Below is the final script. Note that I haven't tried running this because it is not runnable for me since I don't have the ComboBoxes and columns to test with. I think it should be okay though. $comboVals = ( ($UI.Combobox1, $ColumnsHashTable.Column1), ($UI.Combobox2, $ColumnsHashTable.Column2), ($UI.Combobox3, $ColumnsHashTable.Column3), ($UI.Combobox4, $ColumnsHashTable.Column4), ($UI.Combobox5, $ColumnsHashTable.Column5) ); foreach ($pair in $comboVals) { populateComboBox $pair[0] $pair[1] } function populateComboBox($comboBox, $column) { foreach ($item in $column) { $comboBox.Items.Add($item) } }
{ "domain": "codereview.stackexchange", "id": 21271, "tags": "gui, powershell" }
Convert $\sum x_i = y$ to 3-sat
Question: I have a simple looking question. What is the most efficient conversion of $\sum_{i=1}^n x_i = y$ to 3-sat? Here $x_i$ is either $1$ or $0$ and $y$ is some positive integer. Can you do better than making a SATISFIABILITY instance with $\binom{n}{y}$ clauses, each of which is the conjunction of $y$ positive literals and $n-y$ negative literals and then just feeding the whole thing into the Tseitin transform? Answer: Many better techniques for enforcing cardinality constraints are described in this answer. For the special case $y=1$, see also Encoding 1-out-of-n constraint for SAT solvers. Read those links; they suggest more efficient conversions, though I'm not aware of any reason to expect that they are necessarily optimal, so they don't answer your question about the most efficient conversion.
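To make the linked techniques concrete, here is a sketch (not from the answer itself) of Sinz's sequential-counter encoding of the at-most-$y$ half of the constraint; an "exactly $y$" constraint is at-most-$y$ over the $x_i$ plus at-most-$(n-y)$ over their negations. It uses $O(ny)$ clauses of width at most 3, already 3-SAT shape, instead of $\binom{n}{y}$. Variable numbering is DIMACS-style, and the brute-force check is only for the demo:

```python
import itertools

def at_most_k(xs, k, top):
    """Sinz's sequential-counter CNF encoding of sum(xs) <= k.

    xs  : variable numbers of the x_i (positive ints, DIMACS style)
    k   : the bound, 1 <= k < len(xs)
    top : highest variable number already in use
    Returns (clauses, new_top); auxiliary registers s[i][j] live above top.
    """
    n = len(xs)
    s = [[top + i * k + j + 1 for j in range(k)] for i in range(n - 1)]
    cl = [[-xs[0], s[0][0]]]
    cl += [[-s[0][j]] for j in range(1, k)]
    for i in range(1, n - 1):
        cl.append([-xs[i], s[i][0]])
        cl.append([-s[i - 1][0], s[i][0]])
        for j in range(1, k):
            cl.append([-xs[i], -s[i - 1][j - 1], s[i][j]])
            cl.append([-s[i - 1][j], s[i][j]])
        cl.append([-xs[i], -s[i - 1][k - 1]])
    cl.append([-xs[n - 1], -s[n - 2][k - 1]])
    return cl, top + (n - 1) * k

def satisfiable_with(clauses, fixed, nvars):
    """Brute force: can the auxiliaries be set so every clause holds?"""
    free = [v for v in range(1, nvars + 1) if v not in fixed]
    for bits in itertools.product((False, True), repeat=len(free)):
        a = dict(fixed); a.update(zip(free, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

n, k = 5, 2
xs = list(range(1, n + 1))
clauses, top = at_most_k(xs, k, n)
ok = all(
    satisfiable_with(clauses, dict(zip(xs, bits)), top) == (sum(bits) <= k)
    for bits in itertools.product((False, True), repeat=n)
)
print(len(clauses), max(len(c) for c in clauses), ok)
```

For n = 5 and k = 2 this produces 18 clauses, none wider than 3 literals, versus the 10 width-5 conjuncts of the naive binomial construction.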
{ "domain": "cs.stackexchange", "id": 1807, "tags": "satisfiability, sat-solvers" }
Change the melody of human speech using FFT and polynomial interpolation
Question: I'm trying to do the following: Extract the melody of me asking a question (the word "Hey?" recorded to wav) so I get a melody pattern that I can apply to any other recorded/synthesized speech (basically how F0 changes in time). Use polynomial interpolation (Lagrange?) so I get a function that describes the melody (approximately, of course). Apply the function to another recorded voice sample (e.g. the word "Hey." so it's transformed into the question "Hey?", or transform the end of a sentence to sound like a question [e.g. "Is it ok." => "Is it ok?"]). Voila, that's it. What have I done? Where am I? Firstly, I have dived into the math that stands behind the FFT and signal processing (basics). I want to do it programmatically, so I decided to use Python. I performed the FFT on the entire "Hey?" voice sample and got data in the frequency domain (please don't mind the y-axis units, I haven't normalized them). So far so good. Then I decided to divide my signal into chunks so I get clearer frequency information - peaks and so on - this is a blind shot, me trying to grasp the idea of manipulating the frequency and analyzing the audio data. It gets me nowhere however, not in a direction I want, at least. Now, if I took those peaks, got an interpolated function from them, and applied the function on another voice sample (a part of a voice sample, that is also FFT'd of course) and performed an inverse FFT, I wouldn't get what I wanted, right? I would only change the magnitude, so it wouldn't affect the melody itself (I think so). Then I used the spec and pyin methods from librosa to extract the real F0-in-time - the melody of asking the question "Hey?". And as we would expect, we can clearly see an increase in frequency value: And a non-question statement looks like this - let's say it's more or less constant.
The same applies to a longer speech sample: Now, I assume that I have blocks to build my algorithm/process but I still don't know how to assemble them beacause there are some blanks in my understanding of what's going on under the hood. I consider that I need to find a way to map the F0-in-time curve from the spectrogram to the "pure" FFT data, get an interpolated function from it and then apply the function on another voice sample (I mean curve fitting) **Is there any elegant (inelegant would be ok too) way to do this? I need to be pointed in a right direction because I can feel I'm close but I'm basically stuck. I need to increase/manipulate frequencies at certain point in time, I know it does mean to move data from one bin to another but I need to know how to work with the STFT data. ** The code that works behind the above charts is taken just from the librosa docs and other stackoverflow questions, it's just a draft/POC so please don't comment on style, if you could :) fft in chunks: import numpy as np import matplotlib.pyplot as plt from scipy.io import wavfile import os file = os.path.join("dir", "hej_n_nat.wav") fs, signal = wavfile.read(file) CHUNK = 1024 afft = np.abs(np.fft.fft(signal[0:CHUNK])) freqs = np.linspace(0, fs, CHUNK)[0:int(fs / 2)] spectrogram_chunk = freqs / np.amax(freqs * 1.0) # Plot spectral analysis plt.plot(freqs[0:250], afft[0:250]) plt.show() spectrogram: import librosa.display import numpy as np import matplotlib.pyplot as plt import os file = os.path.join("/path/to/dir", "hej_n_nat.wav") y, sr = librosa.load(file, sr=44100) f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=librosa.note_to_hz('C2'), fmax=librosa.note_to_hz('C7')) times = librosa.times_like(f0) D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max) fig, ax = plt.subplots() img = librosa.display.specshow(D, x_axis='time', y_axis='log', ax=ax) ax.set(title='pYIN fundamental frequency estimation') fig.colorbar(img, ax=ax, format="%+2.f dB") ax.plot(times, f0, 
label='f0', color='cyan', linewidth=2) ax.legend(loc='upper right') plt.show() Hints, questions and comments much appreciated. Answer: I've answered the same question on Stack Overflow. I've managed to modify F0 by modifying the STFT output and then computing it back to amplitudes. The answer is here, I won't be duplicating the same text here on DSP. https://stackoverflow.com/questions/63339608/change-the-melody-of-human-speech-using-fft-and-polynomial-interpolation
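The "fit a smooth melody function, then apply its shape to a flat utterance" step of the plan can be sketched independently of librosa. The F0 values below are synthetic stand-ins for a pyin track of a rising "Hey?" (all numbers invented); with real audio you would use the f0 array from the question's code instead:

```python
import numpy as np

# Synthetic stand-in for a pyin F0 track of a rising question intonation.
times = np.linspace(0.0, 0.4, 40)
f0 = 120.0 + 600.0 * times**2 + np.random.default_rng(1).normal(0, 3, 40)

# Low-order polynomial fit = a smooth "melody function" f0(t).
coeffs = np.polyfit(times, f0, deg=2)
melody = np.poly1d(coeffs)

# Apply the *shape* of the melody to a flat utterance: scale its constant
# F0 by the fitted contour, normalised to start at 1.
flat_f0 = np.full_like(times, 110.0)
reshaped = flat_f0 * melody(times) / melody(times[0])
print(round(reshaped[0]), round(reshaped[-1]))  # starts at 110 Hz, ends higher
```

Turning the reshaped F0 contour back into audio is the hard part the answer's Stack Overflow link addresses (moving energy between STFT bins); this sketch only covers the contour-fitting side.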
{ "domain": "dsp.stackexchange", "id": 9742, "tags": "python, stft" }
Question on ordered exponential explanation in Wikipedia
Question: Let $\gamma:[0,1] \to \mathbb R^2$ be a path describing a rectangle with vertices $x$, $x+u$, $x+u+v$, $x+v$, where $x, u, v \in \mathbb R^2$ ($u, v$ linearly independent). Let $J:\mathbb R^2 \to \mathfrak{so}(n)$ be a smooth map (a family of antisymmetric matrices; $\mathfrak{so}(n)$ is the Lie algebra of $SO(n)$). Then the ordered exponential $OE[J]:[0,1]\to SO(n)$ (along $\gamma$) is the solution to $$\frac{d}{dt} OE[J](t) = J(\gamma(t))OE[J](t)$$ with $OE[J](0)= 1$. In the last section of the Wikipedia article it is written that the following holds: $$OE[- {J}] = \exp [- {J}(x+v) (-v)] \exp [- {J}(x+u+v) (-u)] \exp [- {J}(x+u) v] \exp [- {J}(x) u] $$ $$= [1 - {J}(x+v) (-v)][1 - {J}(x+u+v) (-u)][1 - {J}(x+u) v][1 - {J}(x) u].$$ Why is this true? What puzzles me is that $OE[-J]$ in general is not an exponential, but in this case we can express it as a composition of exponentials. Moreover, somehow in the second line there are no exponentials at all. How can we prove this? Some notes on the answer by Qmechanic. We have that (see e.g. Wikipedia) $$OE[J](t) = \sum_{n=0}^{+\infty} \frac 1 {n!} \int_{[0,t]^n} \mathcal{T}\{J(t_1)J(t_2)\cdots J(t_n)\}dt_1dt_2\cdots dt_n.$$ Now for $t\to 0$ $$\int_{[0,t]^n} \mathcal{T}\{J(t_1)J(t_2)\cdots J(t_n)\}dt_1dt_2\cdots dt_n = J(0)^n t^n + O(t^{n+1})$$ (this is obtained for $n=2$ by $\int_{[0,t]^2}f(s_1)f(s_2)ds_1ds_2 = \int_{[0,t]^2}(f(0)+O(s_1))(f(0)+O(s_2))ds_1ds_2$), thus $$OE[J](t) = \sum_{n=0}^{+\infty} \frac 1 {n!} \int_{[0,t]^n} \mathcal{T}\{J(t_1)J(t_2)\cdots J(t_n)\}dt_1dt_2\cdots dt_n = e^{J(0)t}+ o(t),$$ then we use $e^{J(0)t} = (1+J(0)t)+ o(t)$. Answer: OP is presumably missing that Wikipedia mentions that $\gamma$ is an infinitesimal rectangle. Each of the 4 contour integrals over the 4 sides of $OE[-J]$ is replaced with the value of $J$ at the side's initial point times the infinitesimal side length, to lowest order$^1$. The exponentials are Taylor expanded to first order in the infinitesimal side length.
 x+v           x+u+v
    -----<-----
    |    3    |
    |         |
  v v 4      ^ 2
    |         |
    |    1    |
    ----->-----
  x      u     x+u

$\uparrow$ Fig. 1. A counterclockwise infinitesimal rectangular loop. -- $^1$ This is similar to the estimate $$\int_0^{\epsilon}\! dx~f(x)~=~f(0)\epsilon + {\cal O}(\epsilon^2).$$
{ "domain": "physics.stackexchange", "id": 90664, "tags": "operators, mathematical-physics, curvature, wilson-loop" }
What is the use of chloroform in the procedure of determining peroxide value?
Question: In this procedure for determining peroxide value, the oil sample is dissolved in an acetic acid/chloroform solution. I understand that the acid is added so that the following reaction occurs: $\ce{2ROOH + 2H^+ + 2I^- -> 2ROH + I2 + H2O}$. But what is the point of the added chloroform? Answer: Acetic acid is required for this method for exactly the reason you gave. It also has the fortunate property of being miscible with many polar and non-polar solvents. Because this is a method for the analysis of oil samples, it is necessary to choose a non-polar solvent capable of dissolving the sample that is also miscible with the acetic acid. Chloroform is one solvent that fits that description nicely. As a quick side note, the use of chloroform in this test procedure is actually now discouraged by the Standard Methods for the Analysis of Oils, Fats and Derivatives. Isooctane seems to be the preferred solvent, apparently for health and safety reasons.
{ "domain": "chemistry.stackexchange", "id": 7295, "tags": "organic-chemistry, redox" }
Speed of light definition
Question: I understand the speed of light to be an EXACT number. If the permeability of free space has a factor of pi in it, how can this be? Answer: I reject the notion that $\mu_0=4\pi\times10^{-7}\,{\rm H/m}$ and $\epsilon_0\sim9\times10^{-12}\,{\rm F/m}$; they are precisely $\epsilon_0=1$ and $\mu_0=c^{-2}$. This obviously takes care of any issue with defining any unit with $\pi$ in it because $$ \sqrt{\frac{1}{\epsilon_0\mu_0}}=\sqrt{\frac{1}{1\cdot c^{-2}}}=\sqrt{c^2}=c $$ We use the MKSA system because, for the most part, it's fairly convenient for everyday purposes. Nature, however, does not know what a meter or a second is to define $c$ and force photons to travel at it; it is what it is due to a fundamental symmetry of the universe.
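In SI terms the resolution is that the factor of $\pi$ sits in $\mu_0$, and (in the pre-2019 SI) $\epsilon_0$ was defined as $1/(\mu_0 c^2)$, so it carries a $1/\pi$ that cancels exactly when the two are combined. A quick numerical check:

```python
import math

c_def = 299_792_458                # m/s, exact by definition of the metre
mu0 = 4 * math.pi * 1e-7           # H/m, exact in the pre-2019 SI
eps0 = 1 / (mu0 * c_def**2)        # F/m; carries a 1/pi that cancels mu0's pi

print(eps0)                        # ~8.854e-12 F/m
print(1 / math.sqrt(eps0 * mu0))   # recovers c up to float rounding
```

Since the 2019 SI redefinition the logic is inverted ($c$ and the other defining constants are exact, and $\mu_0$ is a measured quantity very close to $4\pi\times10^{-7}$), but the $\pi$ cancellation is the same.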
{ "domain": "physics.stackexchange", "id": 46438, "tags": "speed-of-light, si-units, unit-conversion, metrology" }
how to use word embedding to do document classification etc?
Question: I have just started learning NLP technology, such as GPT, BERT, XLNet, word2vec, GloVe, etc. I try my best to read papers and check source code, but I still cannot understand them very well. When we use word2vec or GloVe to turn a word into a vector, it is like: [0.1,0.1,0.2...] So, one document should be like: [0.1,0.1,0.2...] [0.1,0.05,0.1...] [0.1,0.1,0.3...] [0.1,0.15,0.1...] ....... So, one document is a matrix. If I want to use some traditional method like random forest to classify documents, how can I use such data? I was told that BERT or other NLP models can do this. But I am really curious about how word embeddings are applied in the traditional methods. Answer: So, one document is a matrix. If I want to use some traditional method like random forest to classify documents, how to use such data? You can't, at least not directly, because traditional methods require a fixed number of features for every instance. In the case of document classification the instance must represent the document, so unless all the documents have exactly the same length (unrealistic) it's impossible to use a set of vectors as features. The traditional approach would consist of representing a document with a vector where each cell represents a word in the vocabulary, and the value is for instance the TFIDF weight of the word in the document.
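A minimal sketch of that traditional representation, on an invented three-document corpus (pure Python, no libraries): every document is mapped to one vector whose length equals the vocabulary size, so any classical model (random forest included) can consume it.

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "stock markets fell sharply today",
]

# Fixed vocabulary -> every document becomes a vector of the same length.
vocab = sorted({w for d in docs for w in d.split()})
n_docs = len(docs)
df = Counter(w for d in docs for w in set(d.split()))  # document frequency

def tfidf_vector(doc):
    """One TFIDF weight per vocabulary word (plain tf * log(N/df) variant)."""
    tf = Counter(doc.split())
    return [tf[w] * math.log(n_docs / df[w]) for w in vocab]

X = [tfidf_vector(d) for d in docs]
# Every row has the same length, regardless of how long the document is:
print(len(vocab), {len(row) for row in X})
```

The rows of X can be fed straight into any classical classifier; this is exactly the fixed-feature-count property the answer says the per-word embedding matrix lacks.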
{ "domain": "datascience.stackexchange", "id": 7498, "tags": "classification, nlp, random-forest, word2vec, word-embeddings" }
How long does it take an iceberg to melt in the ocean?
Question: This is a quantitative question. The problem is inspired by this event: On August 5, 2010, an enormous chunk of ice, roughly 97 square miles (251 square kilometers) in size, broke off the Petermann Glacier along the northwestern coast of Greenland. The Petermann Glacier lost about one-quarter of its 70-kilometer- (40-mile-) long floating ice shelf, said researchers who analyzed the satellite data at the University of Delaware. Question: Imagine an iceberg that is moving freely in the ocean. Given that the temperature of the surrounding water is $T = 4$ Celsius and the iceberg's temperature $T=0$ Celsius is uniform throughout its volume, estimate how long it takes the iceberg to melt completely in the ocean. We will find the mass of the iceberg from the event description. The average thickness of the chunk is estimated at about $500$ m. For simplicity we suppose that the iceberg stays spherical as it melts. Answer: Heat enters through the surface of the iceberg. This surface area obeys $$A \propto r^2,$$ with $r$ some linear measure of the size of the iceberg (e.g. the radius). The heat that has entered therefore obeys $$Q \propto r^2 t,$$ with $t$ the time for melting. The heat required to melt an iceberg depends on mass, which obeys $$Q \propto M \propto r^3.$$ Combining these proportionalities, $$r^2 t \propto r^3,$$ or $$t \propto r.$$ So we expect the time for an iceberg to melt to depend linearly on the radius of the iceberg. From Newton's law of cooling, we could also assume $$t \propto \dfrac{1}{\Delta T}.$$ I took an ice cube with radius (half-width) $r \approx 2 \;\mathrm{cm}$ and put it in my water bottle, where I estimate $\Delta T = 20^\circ \mathrm{C}$.
It melted in $3 \; \mathrm{min}.$ We can estimate $r$ for an iceberg by cutting each of the length, width, and thickness in half (since a radius is half a diameter) and then taking the geometric mean of those: $$r_{\rm berg} \approx (8 \;\mathrm{km} \cdot 8 \;\mathrm{km} \cdot 250 \;\mathrm{m})^{1/3} \approx 2\times 10^3 \;\mathrm{m}.$$ Finally, we can scale the melting time up to account for the larger iceberg, then scale it up again to account for the smaller temperature difference. That gives $$t_{\rm berg} \approx 3 \;\mathrm{min} \cdot \dfrac{2 \times 10^3 \;\mathrm{m}}{2 \;\mathrm{cm}} \cdot \dfrac{20^\circ \mathrm{C}}{4^\circ \mathrm{C}} = 1.5 \times 10^6 \;\mathrm{min} \approx 3 \;\mathrm{year}.$$ So roughly speaking, we might expect the iceberg to take three years to melt.
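The arithmetic can be checked in a few lines. The exact geometric mean gives about $2500\;\mathrm{m}$ and $3.6$ years; the answer's "$\approx 3$ years" comes from rounding $r_{\rm berg}$ down to $2\times10^3\;\mathrm{m}$ before scaling:

```python
r_cube = 0.02                  # m, half-width of the ice cube
t_cube = 3.0                   # minutes to melt in the water bottle
dT_cup, dT_ocean = 20.0, 4.0   # temperature differences, deg C

# Geometric-mean "radius" of the berg: half of each dimension, then cube root.
r_berg = (8_000 * 8_000 * 250) ** (1 / 3)

# Scale melting time linearly in r and inversely in the temperature difference.
t_berg_min = t_cube * (r_berg / r_cube) * (dT_cup / dT_ocean)

print(round(r_berg))                           # ~2520 m
print(round(t_berg_min / (60 * 24 * 365), 1))  # ~3.6 years
```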
{ "domain": "physics.stackexchange", "id": 7922, "tags": "thermodynamics, geophysics" }
Using $_POST and $_SESSION - passing variables between pages
Question: I have an index page which gets passed $_POST['timestart'] and $_POST['timeend'] variables. In addition, I have a cart page that has variables passed to it from the index page, and it passes variables back (with header) to the index page depending on what is done. In order to retain the initial $_POST['timestart'] and $_POST['timeend'] variables, I end up storing these variables in a SESSION. My final solution, which works, is something like this. My index page: <?php session_start(); if ($_POST != NULL) { $_SESSION['date'] = $_POST; $timestart = new \DateTime($_SESSION['date']['timestart']); $timeend = new \DateTime($_SESSION['date']['timeend']); $start = $timestart->format('Y-m-d'); $end = $timeend->format('Y-m-d'); } elseif ($_POST == NULL && $_SESSION != NULL) { $timestart = new \DateTime($_SESSION['date']['timestart']); $timeend = new \DateTime($_SESSION['date']['timeend']); $start = $timestart->format('Y-m-d'); $end = $timeend->format('Y-m-d'); } elseif ($_POST == NULL && $_SESSION == NULL) { $start = ""; $end = ""; } if ($start == NULL && $end == NULL) { echo 'please select a date range'; } else { ....main page.... 
<form method="post" action="cart/cart_update.php"> <input type="hidden" name="model_name" value="<?php echo $item ?>" /> <input type="hidden" name="type" value="add" /> <input type="hidden" name="return_url" value="<?php echo $current_url ?>" /> </form> } My cart page (just so it is clear what is happening): <?php session_start(); if(isset($_GET["emptycart"]) && $_GET["emptycart"]==1) { $return_url = base64_decode($_GET["return_url"]); unset($_SESSION["inventory"]); header('Location:'.$return_url); } if(isset($_POST["type"]) && $_POST["type"]=='add') { $model_name = filter_var($_POST["model_name"], FILTER_SANITIZE_STRING); $qty = filter_var($_POST["qty"], FILTER_SANITIZE_NUMBER_INT); $return_url = base64_decode($_POST["return_url"]); $new_item = array(array('name'=>$model_name, 'qty'=>$qty)); $start = $_POST["timestart"]; $end = $_POST["timeend"]; if(isset($_SESSION["inventory"])) { $found = false; foreach ($_SESSION["inventory"] as $cart_itm) { if ($cart_itm["name"] == $model_name) { $model[] = array('name'=>$cart_itm["name"], 'qty'=>$qty); $found = true; } else { $model[] = array('name'=>$cart_itm["name"], 'qty'=>$cart_itm["qty"]); } } if($found == false) { $_SESSION["inventory"] = array_merge($model, $new_item); } else { $_SESSION["inventory"] = $model; } } else { $_SESSION["inventory"] = $new_item; } header('Location:'.$return_url); } if(isset($_GET["removep"]) && isset($_GET["return_url"]) && isset($_SESSION["inventory"])) { $model_name = $_GET["removep"]; $return_url = base64_decode($_GET["return_url"]); $model = NULL; foreach ($_SESSION["inventory"] as $cart_itm) { if($cart_itm["name"]!=$model_name) { $model[] = array('name'=>$cart_itm["name"], 'qty'=>$cart_itm["qty"]); } $_SESSION["inventory"] = $model; } header('Location:'.$return_url); } ?> My question...for the code I am using in the index page: if ($_POST != NULL) { $_SESSION['date'] = $_POST; $timestart = new \DateTime($_SESSION['date']['timestart']); $timeend = new 
\DateTime($_SESSION['date']['timeend']); $start = $timestart->format('Y-m-d'); $end = $timeend->format('Y-m-d'); } elseif ($_POST == NULL && $_SESSION != NULL) { $timestart = new \DateTime($_SESSION['date']['timestart']); $timeend = new \DateTime($_SESSION['date']['timeend']); $start = $timestart->format('Y-m-d'); $end = $timeend->format('Y-m-d'); } elseif ($_POST == NULL && $_SESSION == NULL) { $start = ""; $end = ""; } Is this idea a good solution, or is there a better, more elegant way to approach this? Answer: My opinion is that using $_POST == NULL and $_SESSION == NULL is too general and could lead to errors. You should use specific conditions for $_POST["date"] and also validate the data. Plus, I would avoid the repetition in your code by calling a function. // will return NULL if $dt not valid or empty function get_valid_date($dt) { if (!empty($dt)) { try { $temp=new \DateTime($dt); return $temp->format("Y-m-d"); } catch (Exception $e) { return NULL; } } return NULL; } // temp store for either $_POST["date"] or $_SESSION["date"] if isset $date=array(); if (isset($_POST["date"])) { $date=$_POST["date"]; } else if (isset($_SESSION["date"])) { $date=$_SESSION["date"]; } // get valid $start/$end date from $date - NULL if not valid or empty $start = get_valid_date($date["timestart"]); $end = get_valid_date($date["timeend"]); // save $date in $_SESSION only if both $start and $end are valid if (!empty($start) && !empty($end)) { $_SESSION["date"]=$date; } else { unset($_SESSION["date"]); }
{ "domain": "codereview.stackexchange", "id": 15520, "tags": "php, session" }
Distorted images after camera calibration+stereo_image_proc
Question: Hello everyone! This question might not be specific to ROS, but I decided to give it a try anyway. I am using stereo_image_proc to rectify my camera's image output and produce the corresponding disparity image, but the results are bizarre. As seen below: the top images are the unrectified images displayed within my node (plus an unscaled disparity image obtained locally, which is of no interest to this discussion), and the bottom images are the output from stereo_image_proc, severely distorted due to some mistake I am making. For camera calibration, I used the graphical tool in the camera_calibration package. I published those using the CameraInfo message, doing nothing obviously wrong. So what could I be doing wrong? These are the hypotheses I have tested: Wrong calibration parameters: Initially I was using the parameters obtained from my camera's standard calibration tool (smallv, from SVS). I thought that might be the problem, due to conflicting calibration models. After reading about both models, I was convinced they were equivalent, but decided I should try calibration from within ROS just to be sure. Turns out the parameters I got were very similar, as were the results. Swapped left/right images: I don't know why I thought that could be the reason for such a problem, but a quick test ruled that out. If someone has a strong argument for this hypothesis, though, I will be more thorough in my testing. Swapped matrices in CameraInfo: The results are so bad I thought I might be swapping a couple of parameters in the calibration results. What is the standard way to build CameraInfo messages from the raw result from camera_calibration? I simply parsed the text output and copied the values I deemed appropriate into the messages, since I have not yet implemented the set_camera_info service. I would appreciate any help I can get; I am completely stumped!
Originally posted by georgebrindeiro on ROS Answers with karma: 1264 on 2012-08-27 Post score: 0 Original comments Comment by georgebrindeiro on 2012-08-27: Note: I am able to rectify the images using SVS, but I would like to be able to do that using stereo_image_proc instead Comment by georgebrindeiro on 2012-08-27: If I remove the distortion parameters, the images are at least visible... So my guess is the problem is there. Answer: Just as a note: I hardcoded the calibration parameters I obtained from the ROS calibration tool and was successful. So either SVS parameters are not directly compatible (in terms of units) with OpenCV parameters, or I made some mistake in parameter passing. EDIT: Both things were actually happening: OpenCV assumes units in meters and SVS in millimeters, so I had to multiply P[3] (essentially the baseline) by 0.001. I also made a stupid mistake in an accessor function for the projection matrix. Wrong version: inline void left_P(boost::array& P) { for(int ii = 0; ii Correct version: inline void left_P(boost::array& P) { for(int ii = 0; ii Originally posted by georgebrindeiro with karma: 1264 on 2012-11-05 This answer was ACCEPTED on the original site Post score: 0
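For what it's worth, the unit fix can be sketched like this. This is a hypothetical helper, not the original accessor; fx, fy, cx, cy and the SVS baseline value are illustrative. It follows the ROS/OpenCV convention that the second camera's flattened 3x4 projection matrix carries Tx = -fx * baseline at index 3, with the baseline in meters:

```python
# Hypothetical helper (not the original accessor): build the flattened 3x4
# projection matrix P for the second camera of a stereo pair.  The ROS /
# OpenCV convention puts Tx = -fx * baseline at index 3, with the baseline
# in meters, so an SVS baseline reported in millimeters must be scaled by
# 0.001 first.  All numeric inputs below are illustrative.
def second_camera_P(fx, fy, cx, cy, baseline_mm):
    baseline_m = baseline_mm * 0.001      # SVS reports mm; ROS/OpenCV want m
    tx = -fx * baseline_m                 # this is the P[3] from the answer
    return [fx,  0.0, cx,  tx,
            0.0, fy,  cy,  0.0,
            0.0, 0.0, 1.0, 0.0]

P = second_camera_P(700.0, 700.0, 320.0, 240.0, 120.0)
```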
{ "domain": "robotics.stackexchange", "id": 10777, "tags": "calibration, camera-info, image, stere-image-proc, camera-calibration" }
What is the difference between Founder effect and Bottleneck effect?
Question: Both are examples of genetic drift in which there is a change in the allele frequency when the population size becomes small. What is the difference between the two? Answer: Genetic drift is an EVOLUTIONARY PROCESS Source: Wikipedia > Genetic Drift Genetic drift (also known as allelic drift or the Sewall Wright effect after biologist Sewall Wright) is the change in the frequency of a gene variant (allele) in a population due to random sampling of organisms. I think that these terms are used without perfectly consistent semantics. An obvious case: A population bottleneck is a DEMOGRAPHIC EVENT Source: Wikipedia > Population Bottleneck A population bottleneck [..] is a sharp reduction in the size of a population due to environmental events (such as earthquakes, floods, fires, disease, or droughts) or human activities (such as genocide) A founder effect is an EVOLUTIONARY PROCESS THAT RESULTS FROM A FOUNDING EVENT Source: Provine 2004 The “founder” principle, as introduced by Mayr in 1942, was an auxiliary mechanism (less important than random drift) for producing reduced variability in an isolated population started by a few individuals or even by a single fertilized female. Mayr saw a distinction between the “founder” effect and random drift. The founders were not a random sampling of the main population, but, once isolated as a small population, would undergo random drift. In other words, a founder effect does NOT refer to the population bottleneck caused by a founding event. A founder effect refers to the increase in genetic drift (loss of genetic diversity is expected) caused by a founding event. A founding event is the event by which a few individuals (possibly only one) leave a large population to found a new, small population. As such, a founding event is a type of population bottleneck. I think that one could further extrapolate the term "founder effect" to any increase in genetic drift caused by a population bottleneck.
Do all authors use the same definitions? I have not done a thorough literature review to answer this question, but I would not be surprised if different authors use slightly different definitions. For example, some authors (e.g. Peery et al. 2012, Raymond and O'Brien 1993, Duarte et al. 1992) have used the term "genetic bottleneck" instead of "population bottleneck". Wikipedia indicates that these two terms are synonyms. However, I could not find any paper that formally defines the term genetic bottleneck, but all the papers using this term seem to be specifically interested in the genetic signature of population bottlenecks. As such, the semantic relationship between population bottleneck and genetic bottleneck could be the same as between founding event and founder effect. Genetic bottleneck could be defined as the increase in genetic drift that is caused by a population bottleneck.
{ "domain": "biology.stackexchange", "id": 5244, "tags": "evolution" }
How does an added weight of 100 kg affect the height of an originally 1 m stack of paper
Question: Let's say I had a stack of papers, 1 meter tall. Assuming that each paper is around 0.1 mm thick, the total number of papers should be 10,000. Now, when placing a weight of 100 kg on top of said stack, the stack appears to "compress". How could I go about finding how much the height of the stack decreased? And how could I get the original height from the height with the weight on top? Answer: This depends on a lot of things, not least of which is how "flat" or rumpled the papers actually are and how much air gap on average is between each one. For paper there is probably no better way than doing the experiment and measuring. For a more rigid stack, like a stack of metal plates or even plastic or composites (with little or no air gap), you could use the simple linear-elastic Hooke's law: $$F=kx$$ where the force $F$ required to compress the stack a distance of $x$ from its neutral position is given by the stiffness $k$. For common materials this stiffness can be calculated from the elastic modulus $E$ of the material plus the height $L$ and the cross-sectional area $A$ of the stack, viz. $$F= \frac {EA}{L} x $$
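To make the last formula concrete, here is a hedged sketch in Python. Every number below (the effective modulus E, the A4 sheet area, the 100 kg load) is an illustrative assumption, since for real paper the answer above recommends simply measuring:

```python
# Hedged sketch of the answer's linear-elastic estimate, x = F*L/(E*A).
# All numeric values are illustrative assumptions, not measured paper data.
def compression(force_N, height_m, area_m2, modulus_Pa):
    """Deflection x of a linear-elastic stack obeying F = (E*A/L) * x."""
    return force_N * height_m / (modulus_Pa * area_m2)

F = 100 * 9.81            # weight of the 100 kg mass, N
L = 1.0                   # unloaded stack height, m
A = 0.210 * 0.297         # A4 sheet area, m^2
E = 5e8                   # assumed effective modulus of the stack, Pa

x = compression(F, L, A, E)   # height decrease under load, m
loaded_height = L - x         # what you would measure with the weight on top
# Answering the second question: the original height is loaded_height + x.
```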
{ "domain": "physics.stackexchange", "id": 92249, "tags": "homework-and-exercises, mass, weight" }
motion_planning_msgs
Question: I am trying to use the kinematics_msgs package, which depends on motion_planning_msgs. So I have downloaded the motion_planning_msgs package and tried to compile it, but when I do rosmake motion_planning_msgs, it never finishes compiling. I've seen that this package has been deprecated, but if I want to use kinematics_msgs, what should I do? Thank you in advance Originally posted by BLaS on ROS Answers with karma: 56 on 2012-09-26 Post score: 0 Answer: Have a look at the "Used By" list on the right-hand side here and take your pick: http://ros.org/wiki/kinematics_msgs Originally posted by MarkyMark2012 with karma: 1834 on 2012-09-26 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 11147, "tags": "ros" }
Calculate odds of winning (UK) Lottery jackpot
Question: I am interested in calculating the odds of winning the UK Lottery. The format is that 6 numbers from 1-59 are drawn. I am interested only (at this stage) in the odds of winning the jackpot (matching six balls). As an aside, I'm interested in the odds for total ball counts of 49 and 59, to see the change in the chance of winning. The mathematical formula for calculating the odds (where 49 is the total number of balls and 6 the number drawn) is: \$\text{Odds of winning} = \dfrac{49!}{6!\,(49-6)!}\$ The main method of my code collects input from the user on the parameters of the draw. I have a class called DrawInfo to store information about the draw, a simple method to return the factorial of a number, and a method to calculate the odds of winning the jackpot. This is currently all in the one class, as a small, simple app. I do appreciate that DrawInfo could live in its own file.

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Enter the total number of balls in the draw: ");
        int totalBalls = int.Parse(Console.ReadLine());
        Console.WriteLine("enter the number of balls drawn: ");
        int ballsDrawn = int.Parse(Console.ReadLine());
        DrawInfo di = new DrawInfo(totalBalls, ballsDrawn);
        int totalWinOdds = FindJackpotWinningOdds(di);
        Console.WriteLine(String.Format("the odds are 1/{0:n0}", totalWinOdds));
        Console.ReadLine();
    }

    static int FindJackpotWinningOdds(DrawInfo di)
    {
        BigInteger totalBallsFactorialSum = Factorial(di.TotalBalls);
        BigInteger ballsDrawnFactorialSum = Factorial(di.BallsDrawn);
        BigInteger JackpotWinningOdds = 0;
        JackpotWinningOdds = totalBallsFactorialSum / ((ballsDrawnFactorialSum * Factorial((di.TotalBalls - di.BallsDrawn))));
        return (int)JackpotWinningOdds;
    }

    static BigInteger Factorial(BigInteger i)
    {
        if (i <= 1)
        {
            return 1;
        }
        return i * Factorial(i - 1);
    }
}

public class DrawInfo
{
    public int TotalBalls { get; set; }
    public int BallsDrawn { get; set; }

    public DrawInfo(int totalBalls, int ballsDrawn)
    {
        this.TotalBalls = totalBalls;
        this.BallsDrawn = ballsDrawn;
    }
}

Answer: Quick remarks:
- Don't abbreviate needlessly: di.
- Why assign totalBallsFactorialSum and ballsDrawnFactorialSum, when you are only using them once? You're not even consistent: in the case of Factorial((di.TotalBalls - di.BallsDrawn)) you don't assign the result to a variable.
- Don't overdo it with the brackets: there's no point for the inner ones in Factorial((di.TotalBalls - di.BallsDrawn)).
- JackpotWinningOdds doesn't follow the capitalization conventions.
- Why is it called FindJackpotWinningOdds? Wouldn't CalculateJackpotOdds be better?
- The this in this.TotalBalls = totalBalls; and this.BallsDrawn = ballsDrawn; is superfluous.
- TotalBalls and BallsDrawn should be private set;.
- Why even assign the result to JackpotWinningOdds? This whole method can be reduced to a one-liner, though perhaps it would be best to split it over multiple lines to increase legibility:

return (int)(
    Factorial(di.TotalBalls) / (
        Factorial(di.BallsDrawn) * Factorial(di.TotalBalls - di.BallsDrawn)
    )
);

This method could even just be a method on DrawInfo, together with BigInteger Factorial(BigInteger i), of course, and Factorial() could then even be a private method.
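A quick cross-check of the formula in Python (a hedged sketch, assuming Python 3.8+ for math.comb):

```python
# Hedged cross-check of the factorial formula using Python's math.comb
# (3.8+), which evaluates n! / (k! * (n - k)!) directly, without the
# recursive BigInteger factorials of the C# version.
from math import comb

def jackpot_odds(total_balls, balls_drawn):
    """Number of equally likely draws; the jackpot chance is 1 in this."""
    return comb(total_balls, balls_drawn)

print(jackpot_odds(49, 6))   # 13983816  (the old 49-ball format)
print(jackpot_odds(59, 6))   # 45057474  (the current 59-ball format)
```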
{ "domain": "codereview.stackexchange", "id": 18276, "tags": "c#, statistics" }
What causes that foul taste with combination of toothpaste and orange juice?
Question: Ugh, I drank orange juice too soon after brushing my teeth and my mouth feels awful! What causes that foul taste with this combination of toothpaste and orange juice? Is it a reaction between the oil of wintergreen in the toothpaste and the citric acid in the OJ? Is it something else? What's the explanation for this? As a second part to this question, is there a way to stop this reaction from happening, aside from not drinking orange juice after brushing? Answer: Why does the juice taste awful? The awful taste is due to sodium laureth sulfate, also known as sodium lauryl ether sulfate (SLES), or sodium lauryl sulfate (SLS), depending on which toothpaste you use. SLES and SLS are surfactants (wetting agents). Both chemicals are added to toothpaste to create foam and make the paste easier to spread around your mouth. Both chemicals suppress the receptors on our taste buds that perceive sweetness, inhibiting our ability to pick up the sweet notes of food and drink. They also break up the phospholipids on our tongue. These fatty molecules inhibit our receptors for bitterness and keep bitter tastes from overwhelming us, but when they're broken down by the surfactants in toothpaste, bitter tastes get enhanced. Thus, anything you eat will taste less sweet and more bitter! Source: mentalfloss How to prevent the awful taste after brushing? After a bit of research I found a few ways to avoid it: Avoid brushing (not recommended). Wait for about 30 minutes. Look for SLS-free toothpaste. Eat a little bit of cheese first (suggested by repurposer). Hot water will also do the trick.
{ "domain": "chemistry.stackexchange", "id": 3788, "tags": "everyday-chemistry, food-chemistry, taste" }
What would happen if we did a Newton's bucket experiment in the closest accessible approximation to "empty space?"
Question: I will get right to the question; for readers unfamiliar with its genesis, I append a background section below. I want to know how testable the prediction is that the water in a rotating bucket would experience a centrifugal pseudo force in the closest approximation to empty space we have access to (i.e., as far away from the Solar System as we could reasonably deploy a suitable test station). I envision the simplest possible experimental design, along the lines of the one described in the following post: https://physics.stackexchange.com/questions/416011/newtons-bucket-artificial-gravity-absolute-rotation-and-machs-principle A fancy satellite experiment has been done in near-Earth orbit (Gravity Probe B, launched in 2004, with data analysis continuing until 2011). As I understand it, the results were consistent with predictions of standard GR: the so-called Lense–Thirring effect, which dominated behavior of the gyroscopes on the satellite, arose almost entirely from the effects of Earth's rotation on adjacent spacetime. So, let's get as far away from local masses as possible and see how well standard GR's predictions hold up. Is this idea ridiculous? Background to the question In its simplest variant, a Newton's bucket experiment involves spinning a bucket of water in Earth's gravitational field and observing that the surface of the water is not flat, due to the centrifugal pseudo force. In this case, it is obvious that the bucket is spinning relative to the patch of Earth's surface underneath it. Einstein attributed his development of general relativity (GR), in part, to the thought experiment of spinning the bucket in "empty space", where there is no obvious way to determine whether or not the bucket is actually spinning. Mach had earlier posed a version of this thought experiment; he invoked distant celestial objects (e.g., stars) as defining a reference frame relative to which the bucket's rotational motion could be judged.
There is a HUGE subsequent literature about whether or not distant celestial objects are relevant to the behavior of the water in the bucket. As I understand it, distant celestial objects are not relevant in mainstream variants of GR. The issue has been addressed dozens of times on the Physics Stack Exchange. In my survey of these previous posts, no one attempts to answer my empirically oriented question in their responses to them. All of them seem to pose the problem as one of prediction rather than observational testing of predictions. Answer: It is clear that your underlying question is the validity of Mach's principle in the context of the General Theory of Relativity (GR). It is known that Einstein was quite enamoured with Mach's principle and tried to formulate GR to satisfy it in a more general way. However, it seems to be the case that GR does not satisfy Mach's principle. This is easy to see in the solution of the Kerr black hole, whereby the angular momentum of the black hole plays a rôle, even as the spacetime asymptotically approaches Minkowski spacetime far away. There are no distant stars in the solution of the ideal Kerr black hole. This means that the solution for the Newton's bucket problem in GR should have bulges to account for centripetal acceleration, even though it would not in any way look like the usual stuff we see on Earth. A good textbook on GR would emphasise that the solution to a rotating system in GR is not obtained by taking Minkowski spacetime and setting it to rotate.
{ "domain": "physics.stackexchange", "id": 94868, "tags": "general-relativity, cosmology, reference-frames, machs-principle" }
Help to get robot jacobian from system of implicit equations
Question: I have a system of two equations that describes the position of a robot end-effector ($X_C, Y_C, Z_C$) as a function of the prismatic joint positions ($S_A, S_B$): $S^2_A - \sqrt3(S_A + S_B)X_C = S^2_B + (S_A - S_B)Y_C$ $X^2_C + Y^2_C + Z^2_C = L^2 - S^2_A + S_A(\sqrt3X_C + Y_C)+M(S^2_A+S_BS_A + S^2_B)$ where M and L are constants. In the paper, the author states that differentiating this system at a given point ($X_C, Y_C, Z_C$) gives the "differential relationship" in the form: $a_{11}\Delta S_A + a_{12}\Delta S_B = b_{11}\Delta X_C + b_{12}\Delta Y_C + b_{13}\Delta Z_C$ $a_{21}\Delta S_A + a_{22}\Delta S_B = b_{21}\Delta X_C + b_{22}\Delta Y_C + b_{23}\Delta Z_C$ Later on, the author uses those parameters ($a_{11}, a_{12}, b_{11}...$) to construct matrices, and by multiplying those he obtains the Jacobian of the system. I'm aware of partial differentiation, but I have never done this for a system of equations, nor do I understand how to get those delta parameters. Can anyone explain the proper steps to perform partial differentiation on this system, and how to calculate the delta parameters? EDIT Following the advice given by N. Staub, I differentiated the equations w.r.t. time.
First equation: $S^2_A - \sqrt3(S_A + S_B)X_C = S^2_B + (S_A - S_B)Y_C$ $=>$ $2S_A \frac{\partial S_A}{\partial t} -\sqrt3S_A \frac{\partial X_C}{\partial t} -\sqrt3X_C \frac{\partial S_A}{\partial t} -\sqrt3S_B \frac{\partial X_C}{\partial t} -\sqrt3X_C \frac{\partial S_B}{\partial t} = 2S_B\frac{\partial S_B}{\partial t} + S_A\frac{\partial Y_C}{\partial t} + Y_C\frac{\partial S_A}{\partial t} - S_B\frac{\partial Y_C}{\partial t} - Y_C\frac{\partial S_B}{\partial t}$ Second equation: $X^2_C + Y^2_C + Z^2_C = L^2 - S^2_A + S_A(\sqrt3X_C + Y_C)+M(S^2_A+S_BS_A + S^2_B)$ $=>$ $2X_C \frac{\partial X_C}{\partial t} + 2Y_C \frac{\partial Y_C}{\partial t} + 2Z_C \frac{\partial Z_C}{\partial t} = -2S_A \frac{\partial S_A}{\partial t} + \sqrt3S_A \frac{\partial X_C}{\partial t} +\sqrt3X_C \frac{\partial S_A}{\partial t} + S_A \frac{\partial Y_C}{\partial t} + Y_C \frac{\partial S_A}{\partial t} + 2MS_A \frac{\partial S_A}{\partial t} + MS_B \frac{\partial S_A}{\partial t} + MS_A \frac{\partial S_B}{\partial t} + 2MS_B \frac{\partial S_B}{\partial t}$ then, I multiplied by $\partial t$, and grouped variables: First equation: $(2S_A -\sqrt3X_C - Y_C)\partial S_A +(-2S_B -\sqrt3X_C + Y_C)\partial S_B = (\sqrt3S_A +\sqrt3S_B)\partial X_C + (S_A - S_B)\partial Y_C$ Second equation: $(-2S_A+\sqrt3X_C+Y_C+2MS_A + MS_B)\partial S_A + (MS_A + 2MS_B)\partial S_B = (2X_C-\sqrt3S_A)\partial X_C + (2Y_C-S_A)\partial Y_C + (2Z_C)\partial Z_C$ therefore I assume that required parameters are: $a_{11} = 2S_A -\sqrt3X_C - Y_C$ $a_{12} = -2S_B -\sqrt3X_C + Y_C$ $a_{21} = -2S_A + \sqrt3X_C + Y_C + 2MS_A + MS_B$ $a_{22} = MS_A + 2MS_B$ $b_{11} = \sqrt3S_A +\sqrt3S_B$ $b_{12} = S_A - S_B$ $b_{13} = 0$ $b_{21} = 2X_C - \sqrt3S_A$ $b_{22} = 2Y_C - S_A$ $b_{23} = 2Z_C$ Now. 
According to the paper, the Jacobian of the system can be calculated as: $J = A^{-1} B$, where $A=(a_{ij})$, $B=(b_{ij})$, so if I'm thinking right, it means: $$ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{pmatrix} $$ $$ B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ \end{pmatrix} $$ and the Jacobian is the product of the inverse of the A matrix with the B matrix. Next, the author states that the Jacobian at the given point, where $X_C = 0$, $S_A=S_B=S$, $Y_C = l_t-\Delta\gamma$, is equal to: $$ J = \begin{pmatrix} \frac{\sqrt3S}{2S-l_t+\Delta\gamma} & -\frac{\sqrt3S}{2S-l_t+\Delta\gamma}\\ \frac{2(l_t-\Delta\gamma)-S}{6cS-2S+l_t-\Delta\gamma} & \frac{2(l_t-\Delta\gamma)-S}{6cS-2S+l_t-\Delta\gamma} \\ \frac{2Z_C}{6cS-2S+l_t-\Delta\gamma} & \frac{2Z_C}{6cS-2S+l_t-\Delta\gamma} \\ \end{pmatrix} ^T $$ Everything seems fine. BUT! After multiplying my A and B matrices I get some monster matrix that I am unable to paste here, because it is so frickin' large! Substituting the values given by the author does not give me the proper Jacobian (I tried substituting before multiplying the matrices, on the parameters, and after the multiplication, on the final matrix). So clearly I'm still missing something. Either I've made an error in the differentiation, or an error in the matrix multiplication (I used Maple), or I don't understand how to substitute those values. Can anyone point me in the right direction? EDIT Problem solved! The parameters I calculated were correct; I just messed up the simplification of the equations in the final matrix. Using the snippet from Petch Puttichai I was able to obtain the full Jacobian of the system. Thanks for the help! Answer: Recall that the Jacobian can be interpreted as the matrix which maps the joint velocity to the end-effector velocity: $ \dot{x} = J(\theta)~\dot{\theta} $ In your case $x = [x_c,y_c,z_c] $ (prefer lower case for vectors and scalars) and $\theta = [s_A,s_B] $.
If you differentiate the system as suggested you should get the second set of equations with $\Delta s_A, s_A, \Delta s_B, s_B, ...$. To do so you can do the partial time differentiation and multiply your result by $\Delta t$, since for any time-dependent variable $u$, $\frac{\partial u}{\partial t} \approx \frac{\Delta u}{\Delta t}$. NOTE: your first equation can be rewritten $ (\sqrt{3} x_c +y_c ) (s_A+s_B) = {s_A}^2 + {s_B}^2 - 2s_A$ which makes the distance (the second equation) expressible using only the system ($L,M$) and joint ($s_A,s_B$) parameters.
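If it helps to sanity-check the algebra, here is a small numerical sketch in pure Python; the constants L, M and the evaluation point are assumed example values, not taken from the paper. It rebuilds A and B by finite differences of the two constraint equations and forms J = A^-1 * B exactly as described. Note that row 2 of A and B may differ from the hand-derived table by an overall sign, which cancels in A^-1 * B.

```python
# Numerical sanity check (pure Python) of the implicit-differentiation step.
# L, M and the evaluation point q are assumed example values.  A and B are
# rebuilt by central finite differences of the two constraints F1 = 0,
# F2 = 0, and J = A^-1 * B maps (dX_C, dY_C, dZ_C) to (dS_A, dS_B).
import math

SQRT3 = math.sqrt(3.0)
L, M = 2.0, 0.3                       # assumed constants

def F(q):
    sA, sB, xc, yc, zc = q
    f1 = sA**2 - SQRT3*(sA + sB)*xc - sB**2 - (sA - sB)*yc
    f2 = (xc**2 + yc**2 + zc**2 - L**2 + sA**2
          - sA*(SQRT3*xc + yc) - M*(sA**2 + sB*sA + sB**2))
    return [f1, f2]

def dF(q, i, h=1e-6):
    """Central-difference column [dF1/dq_i, dF2/dq_i]."""
    qp, qm = list(q), list(q)
    qp[i] += h
    qm[i] -= h
    fp, fm = F(qp), F(qm)
    return [(fp[k] - fm[k]) / (2.0*h) for k in range(2)]

def jacobian(q):
    cs = [dF(q, 0), dF(q, 1)]                  # columns w.r.t. S_A, S_B
    cx = [dF(q, j) for j in (2, 3, 4)]         # columns w.r.t. X_C, Y_C, Z_C
    A = [[cs[j][i] for j in range(2)] for i in range(2)]
    B = [[-cx[j][i] for j in range(3)] for i in range(2)]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    Ainv = [[ A[1][1]/det, -A[0][1]/det],
            [-A[1][0]/det,  A[0][0]/det]]
    return [[sum(Ainv[i][k]*B[k][j] for k in range(2)) for j in range(3)]
            for i in range(2)]

q = [0.8, 0.8, 0.0, 1.1, 0.9]   # example point with S_A = S_B and X_C = 0
J = jacobian(q)
```

As a consistency test, a small task-space step dx with the matching joint step ds = J*dx should leave both constraint residuals unchanged to first order.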
{ "domain": "robotics.stackexchange", "id": 1673, "tags": "kinematics, jacobian" }
If some humans inherited 3% of Neanderthal DNA, why are our genomes 99.9% the same?
Question: Many sources say that humans are 99.5 to 99.9 percent the same. Also, some sources state that some humans have 3.4% Neanderthal DNA and some don't share those genes. Why is that? Answer: The 3% and the 99.9% come from different calculation methods. Around 20% (a rough estimate, which might be wrong) of the Neanderthal genome survives across modern human genomes, but it is 'diluted' across the human population, leaving only a few percent of Neanderthal DNA in each individual genome; this is where the 3% figure comes from. But how can we know it's Neanderthal DNA? If a DNA fragment contains several Neanderthal-specific SNPs (single-nucleotide polymorphisms), we treat it as Neanderthal DNA. (Based on linkage disequilibrium, the inference is reasonable.) The fragment might be tens of thousands of nucleotides long, but the differences are just one or two bases. But when we compare two humans, we calculate the difference at the nucleotide level, rather than at the fragment level. This is where the 99.9% comes from. If we had the complete genomes of a Neanderthal and a human and compared them at the base level, the similarity would, I suppose, be greater than 99%.
{ "domain": "biology.stackexchange", "id": 9769, "tags": "genetics, human-genetics, dna-sequencing, human-genome" }
Bonding and coordination of oxygen in a Ga2O3 crystal structure
Question: I'm trying to find the crystal structure of gallium oxide ($\ce{Ga2O3}$). However, I find the images of the crystal structure in the peer-reviewed journals problematic. First, I go to the Wikipedia article on gallium oxide and find this image: The colors are not labeled, but I think cyan is gallium and red is oxygen. The trouble is that Ga typically has a valence of 3+ and oxygen of 2-. Yet in this image we see Ga with 4 and 6 bonds! How is this possible? The source is from a paper: J. Åhman, G. Svensson and J. Albertsson: A Reinvestigation of β-Gallium Oxide. In: Acta Cryst. (1996). C52, 1336-1338. (The image was made by a wiki admin who has made many chemical structures for Wikipedia.) So I check out the article and it has this figure: That's basically unreadable, so I go to Acta Crystallographica Section C, which has an interactive 3D chemical model viewer and a model for gallium oxide: Again I see gallium with 6 bonds and oxygens with 3 bonds! Is this correct? If so, please tell me how. In Galazka's β-Ga2O3 for wide-bandgap electronics and optoelectronics, I find this description: Crystal structure of $\ce{β-Ga2O3}$ $\ce{β-Ga2O3}$ crystallizes in the base-centred monoclinic system in the space group C2/m. The unit cell (figure 1) contains 20 atoms consisting of crystallographically inequivalent $\ce{Ga^3+}$ and $\ce{O^2−}$ atoms. In this low-symmetry structure Ga atoms are coordinated tetrahedrally and octahedrally (Ga1 and Ga2, respectively), while O atoms are coordinated three- and fourfold (O1/O2 and O3, respectively). O1 shares two bonds with Ga2 and one bond with Ga1, O2 shares three bonds with Ga2 and one bond with Ga1, while O3 shares two bonds with Ga1 and one bond with Ga2. The lattice parameters of $\ce{β-Ga2O3}$ are listed in table 1. There are two easy cleavage planes: the (100) plane formed by O3 atoms and the (001) plane formed by O1 atoms. and this figure: Can someone please explain the crystal structure of $\ce{Ga2O3}$, beta-gallium oxide?
Answer: Does it help if you think of this as an ionic compound? In ionic compounds, there are no directed bonds. The coordination number depends on the size of the ions, coupled with the need for electroneutrality. For example, in calcium oxide, both calcium and oxygen (in red) have octahedral coordination, with 6 neighbors each: Granted, the electronegativity difference in calcium oxide is 2.44, while in gallium oxide it is only 1.65, not different from that of silicon oxide, which has a network covalent structure. Also, consider a structure like that of hexagonal ice, where each oxygen is surrounded by four hydrogen atoms, two connected by covalent bonds and two connected by hydrogen bonds. In this case, the atomic distances are clearly different, and you would be correct to say that oxygen makes two bonds, with a tetrahedral coordination if you count the hydrogen atoms from neighboring molecules. To distinguish these three extremes (ionic bonds in one case, covalent bonds plus contacts with neighboring atoms, network covalent structure) and to categorize what is going on with gallium oxide, you would have to look at the gallium-oxygen bond lengths, which should be available in the paper you cite.
{ "domain": "chemistry.stackexchange", "id": 13843, "tags": "molecular-structure, crystal-structure, oxidation-state, valence-bond-theory, crystallography" }
Magnetic field of a moving point charge
Question: I'm having a hard time grasping one simple idea. Let us say we have a moving point charge, q, with velocity v in the x direction. Now calculating both the magnetic and the electric fields is easy: we know that B=0 in the charge's frame, hence we obtain B in the lab frame. Now, let us say we want to know the same fields, but with a second, stationary charge positioned above the moving charge (perpendicular to its motion). I cannot simply conclude that there is no magnetic field in the moving charge's frame, since the other charge is moving in that frame; so in both frames there is a magnetic field. I would be more than grateful if someone could settle this, thanks. Answer: Remember that superposition holds for the electric and magnetic fields. That is, you can calculate them individually and then add their fields together to get the field at any point. For the moving charge, $q_1$, the magnetic field is 0 in its rest frame, but boosting back to the lab frame we have $$ \mathbf{B}_{q_1}=\gamma\frac{\boldsymbol{\beta}\times\mathbf{E}}{c}=\gamma q_1\frac{\boldsymbol{\beta}\times\hat{\mathbf{r}}}{4\pi\epsilon_0cr^2} $$ where $\gamma$ is the normal Lorentz factor, and $\boldsymbol{\beta}=\mathbf{v}/c$ is the (reduced) velocity of the particle in the lab frame (note that the above equation reduces to the Biot-Savart law for $\gamma\approx1$). Since, in the lab frame, the magnetic field of the stationary charge, $q_2$, is 0, the total magnetic field is given by $\mathbf{B}=\mathbf{B}_{q_1}$. In the case of the co-moving frame, as stated, the magnetic field is zero for $q_1$, but now the charge $q_2$ is moving (in the opposite direction) and we get a similar magnetic field: $$ \mathbf{B}=\mathbf{B}_{q_2}=\gamma q_2\frac{-\boldsymbol{\beta}\times\hat{\mathbf{r}}}{4\pi\epsilon_0cr^2} $$ So, yes, there is going to be a magnetic field in both frames, because you have a moving charge in both frames.
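A hedged numerical sketch of the answer's formula (the charge, speed, and field point below are assumed example values): since mu0 = 1/(eps0*c^2), the expression reduces to the Biot-Savart form for v << c, which the last two lines compare.

```python
# Hedged numerical sketch of the answer's formula
#   B = gamma * q * (beta x r_hat) / (4*pi*eps0*c*r^2).
# Since mu0 = 1/(eps0*c^2), for v << c (gamma ~ 1) this reduces to
#   B = mu0 * q * (v x r_hat) / (4*pi*r^2)  (Biot-Savart).
# Charge, speed, and field point are assumed example values.
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
C = 299792458.0             # speed of light, m/s
MU0 = 1.0 / (EPS0 * C**2)   # vacuum permeability, H/m

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def b_moving_charge(q, v, r_vec):
    r = math.sqrt(sum(x*x for x in r_vec))
    r_hat = tuple(x/r for x in r_vec)
    beta = tuple(vi/C for vi in v)
    gamma = 1.0 / math.sqrt(1.0 - sum(b*b for b in beta))
    pref = gamma * q / (4.0*math.pi*EPS0*C*r**2)
    return tuple(pref*x for x in cross(beta, r_hat))

def b_biot_savart(q, v, r_vec):
    r = math.sqrt(sum(x*x for x in r_vec))
    r_hat = tuple(x/r for x in r_vec)
    pref = MU0 * q / (4.0*math.pi*r**2)
    return tuple(pref*x for x in cross(v, r_hat))

charge, vel, r_vec = 1.6e-19, (1.0e5, 0.0, 0.0), (0.0, 0.01, 0.0)
B_exact = b_moving_charge(charge, vel, r_vec)
B_bs = b_biot_savart(charge, vel, r_vec)
```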
{ "domain": "physics.stackexchange", "id": 11744, "tags": "electromagnetism" }
Calculating the power output of a car gives two different answers
Question: Here is the problem: A 500 kg car accelerates from rest to 110 m/s over a distance of 400 m with an average frictional force of 1200 N. If it took the car 7.3 seconds to do this, what is the power output in kW? This was a question on Khan Academy. This is how they solved it: $KE=\tfrac{1}{2}mv^2=250\,\mathrm{kg}\times(110\,\mathrm{m/s})^2=3{,}025{,}000\,\mathrm{J}$ $W_{friction}=1200\,\mathrm{N}\times400\,\mathrm{m}=480{,}000\,\mathrm{J}$ $P = \frac{3{,}025{,}000\,\mathrm{J}+480{,}000\,\mathrm{J}}{7.3\,\mathrm{s}\times1000} = 480.137\,\mathrm{kW}$ This is what I did: $a=\frac{v}{t}$ $\Sigma F=\frac{mv}{t}$ $F_{car}-F_{friction}=\frac{mv}{t}$ $F_{car}=\frac{mv}{t}+F_{friction}$ $P=\frac{400\left(\frac{mv}{t}+F_{friction}\right)}{7.3\,\mathrm{s}\times1000}=478.59\,\mathrm{kW}$ What explains the difference in the answers I got? Answer: Your approach is fine. It's the given values that are defective. If you calculate the distance required for a car to reach 110 m/s from rest in 7.3 seconds, it's actually 401.5 meters. If you insert that value into your power formula instead of the given 400 meters, you'll get the same answer they got.
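A quick arithmetic check of the three numbers in play (a sketch; the variable names are mine):

```python
# Arithmetic check of both computations in the question (SI units).
m, v, t, f_fric, d = 500.0, 110.0, 7.3, 1200.0, 400.0

# Khan Academy's route: (kinetic energy gained + work against friction) / time
ke = 0.5 * m * v**2                # 3,025,000 J
w_fric = f_fric * d                # 480,000 J
p_ka = (ke + w_fric) / t / 1000.0  # kW

# The asker's route: constant force from a = v/t, applied over the stated 400 m
f_car = m * v / t + f_fric
p_op = d * f_car / t / 1000.0      # kW

# The answer's point: uniform acceleration to 110 m/s in 7.3 s actually
# covers (v/2)*t = 401.5 m, not the stated 400 m, which explains the gap.
d_consistent = 0.5 * v * t
```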
{ "domain": "physics.stackexchange", "id": 41354, "tags": "homework-and-exercises, energy, acceleration, work, power" }
Time dilation Vs Time duration Or watching a movie at the speed of light
Question: I am not a physicist but a physics enthusiast, and while trying to understand time dilation I came up with this thought experiment, in order to understand things better based on the answers to my 2 questions below:
1. A boy is standing still on Earth.
2. A girl is about to leave Earth on a spaceship.
3. The boy starts watching a 2-hour movie.
4. The girl starts watching the same movie inside the spaceship, at the same time.
5. As soon as the girl starts watching the movie, she leaves Earth onboard her spaceship and accelerates to speeds near the speed of light.
6. The boy remains on Earth watching the movie.
7. The girl is inside the spaceship watching the movie while travelling at a speed near the speed of light, and while she is watching the movie she uses a camcorder to film the screen that is projecting the movie.
8. Back on Earth, the boy finishes watching the movie and at that exact point sees the girl landing back on Earth.
9. Due to time dilation, the time experienced by the girl inside the spaceship was more than 2 hours. What that means is that while on the spaceship the girl watched the movie, the movie ended, and she had yet to arrive and land back on Earth. She had time left to do more stuff inside that spaceship, like maybe take a nap or something. And after some time had passed on that spaceship, and after she was done watching the movie, only then did she arrive back on Earth.
10. The girl is now back on Earth, exits the spaceship, sits next to the boy, takes out her camcorder, and they both sit and watch the movie she recorded on her spaceship.
Question #1: Is point 9 correct?
Question #2: When the boy and the girl sit down on Earth to watch the camcorder movie, how long will that movie last, and will the playing speed of the movie be normal? (Please excuse me in advance if this 2nd question in particular is completely ridiculous and laughable.)
Answer: You have it backwards. The girl on the spaceship is the one who experiences acceleration.
So when she returns to earth, she will have felt less time pass than the boy who stayed on earth. If you travel very very fast, you will return to an earth far in the future. So for her to return right as the boy finishes the movie, she'd experience a very short flight. (And the acceleration required to get up close to light speed and then decelerate back to the earth frame would be a horrible, horrible death). For your second question, the recording would look exactly the way it looked to the girl when she watched the movie. So they wouldn't observe any unusual looking images. Imagine it were film, instead of digital, just to help make a picture of how this happens. Each frame of film will move through the camera while it is recording at some rate. Say, 50 frames per second. So each second as experienced by the girl and the camera, it would record 50 frames. When the film is back on earth it would be played at a rate of 50 frames per second. Therefore, in one second of playback it would show 50 frames. But if somehow the boy were watching the girl's screen as she traveled, he would see the movie playing very slowly.
{ "domain": "physics.stackexchange", "id": 66649, "tags": "special-relativity, time-dilation" }
What is a non-linear space of connections
Question: In the book "Loops Knots Gauge Theory and Quantum Gravity", when trying to define a loop representation, one needs to integrate over the space of connections (modulo gauge transformations). There, they say this space is non-linear and infinite-dimensional. What does it mean for a space to be non-linear? (The space of 1-forms is a vector space, no?) Answer: Let $\pi:P\rightarrow M$ be a principal fibre bundle and $\mathrm{Con}(\pi)$ the bundle of smooth principal connections over $\pi$. This can actually be realized as an affine bundle over $M$ whose total space is $J^1(\pi)/G$, where $G$ is the structure group. The corresponding vector bundle is $\mathrm{Ad}(\pi)\otimes\Lambda^1(M)$, where $\mathrm{Ad}(\pi)$ is the vector bundle associated to $\pi$ through the adjoint representation of $G$ on $\mathfrak{g}$. More generally, if $\pi:N\rightarrow M$ is a fibered manifold, then the space of all smooth connections on $\pi$ can be identified with $\Gamma(\pi^1_0)$, where $\pi^1_0:J^1(\pi)\rightarrow N$ is the first affine jet bundle of $\pi$. This is again an affine bundle (although over $N$ rather than $M$). The corresponding vector bundle is $V(\pi)\otimes_N\Lambda^1(M)$, where the tensor product is taken over $N$, and $V(\pi)$ is the vertical tangent bundle of $\pi:N\rightarrow M$. In somewhat more familiar terms, connections of a given type (i.e. smooth sections of the corresponding bundles) always form an affine space. Affine spaces are "almost linear", but are nonetheless not vector spaces. Arbitrary linear combinations of connections are not themselves connections.
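To make "affine but not linear" concrete, a sketch in the usual description by principal connections (this illustration is mine, not taken from the book):

```latex
% For two principal connections A, A' on \pi : P \to M, the difference is
% tensorial,
A - A' \;\in\; \Omega^1\!\left(M;\,\mathrm{Ad}(\pi)\right),
% so every affine combination is again a connection,
t\,A + (1-t)\,A' \;\in\; \mathrm{Con}(\pi) \qquad \text{for all } t \in \mathbb{R},
% while sums and scalar multiples generally are not:
A + A' \;\notin\; \mathrm{Con}(\pi), \qquad 2A \;\notin\; \mathrm{Con}(\pi).
```

This is the sense in which the answer calls the space "almost linear": after choosing any base point $A_0$, $\mathrm{Con}(\pi)$ can be identified with a vector space, but there is no canonical zero.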
{ "domain": "physics.stackexchange", "id": 100262, "tags": "gauge-theory, representation-theory, wilson-loop" }
Exact solution of Qubit Decoherence using Transfer Matrix
Question: I'm going through a particular paper on decoherence: Exact Solution of Qubit Decoherence models by a transfer matrix method. I'm having trouble understanding a particular step in the mathematics involved. I'm not able to understand how equation (4) was obtained from the equation in Fig. 1. Fig 1: Equation (4): It is told in the paper that "The key to the method is to compute and iterate the superoperators...". It would be a great help if someone could give me a hint on what is going on. I would like to try it out myself and thus I'm not expecting any complete answers. Thank you. Answer: It's a case of bad labeling: the $i$,$j$ labels in Fig.1 and Eqs.(4-5) have different meanings. In addition, subscript 1 was dropped on all $B$'s in Eq.(5). Other than that, it's straightforward algebra: Start by rewriting the final result of Fig.(1) in the familiar operator-product form, expand, and rearrange: $$ \overline{\left[ E \cos(B_1\tau) - i {\hat B_1}\cdot{\vec \sigma}\sin(B_1\tau)\right]{\vec \sigma}(0) \left[ E \cos(B_1\tau) + i {\hat B_1}\cdot{\vec \sigma}\sin(B_1\tau)\right]} = \\ = \overline{\cos^2(B_1\tau)} \;{\vec \sigma}(0) + i {\vec \sigma}(0) \left[\overline{\cos(B_1\tau)\sin(B_1\tau){\hat B_1} } \cdot{\vec \sigma}\right] - \\ - i \left[ \overline{\cos(B_1\tau)\sin(B_1\tau){\hat B_1} } \cdot{\vec \sigma} \right] {\vec \sigma}(0) + \overline{\sin^2(B_1\tau)\left[{\hat B_1} \cdot{\vec \sigma}\right] {\vec \sigma}(0) \left[{\hat B_1} \cdot{\vec \sigma} \right] }=\\ = \overline{\cos^2(B_1\tau)} \;{\vec \sigma}(0) + i \sum_i{\left(\overline{\cos(B_1\tau)\sin(B_1\tau)B_{1,i} } \right) \left[ {\vec \sigma}(0)\sigma_i - \sigma_i {\vec \sigma}(0)\right]} + \sum_{i,j}{\left( \overline{\sin^2(B_1\tau)B_{1,i} B_{1,j} } \right) \sigma_i \;{\vec \sigma}(0)\; \sigma_j} $$ Now identify $$ I_0 = \overline{\cos^2(B_1\tau)} $$ $$ I_i = \overline{B_{1,i} \cos(B_1\tau)\sin(B_1\tau) } $$ $$ I_{ij} = \overline{B_{1,i} B_{1,j} \sin^2(B_1\tau) } $$ and tidy up: $$
\overline{\left[ E \cos(B_1\tau) - i {\hat B_1}\cdot{\vec \sigma}\sin(B_1\tau)\right]{\vec \sigma}(0) \left[ E \cos(B_1\tau) + i {\hat B_1}\cdot{\vec \sigma}\sin(B_1\tau)\right]} = $$ $$ = I_0 \; {\vec \sigma}(0) + i \sum_i {I_i \left[ {\vec \sigma}(0)\sigma_i - \sigma_i {\vec \sigma}(0)\right]} + \sum_{i,j}{I_{ij}\;\sigma_i\; {\vec \sigma}(0)\; \sigma_j } $$
{ "domain": "physics.stackexchange", "id": 30403, "tags": "quantum-mechanics, operators, quantum-information, mathematics, decoherence" }
How to find the uncertainty in an experiment where we only measure a single set of results?
Question: I was reading about a Bell’s inequality experiment from the paper https://arxiv.org/abs/quant-ph/0205171. In the paper it gives a set of recorded values: I understand how they got the value for S, but I am having trouble understanding how to use this formula to find the error: The final answer should be: $S = 2.307 ± 0.035$ What should $N_i(\partial S/\partial N_i)^2$ be? Answer: Rewriting Equation 21 in different notation, \begin{equation} S = E(N_1,N_2,N_3,N_4) - E(N_5, N_6, N_7, N_8) + E(N_9, N_{10}, N_{11}, N_{12}) - E(N_{13}, N_{14}, N_{15}, N_{16}) \end{equation} where (rewriting Equation 25 in different notation) \begin{equation} E(N_i, N_j, N_k, N_l) = \frac{N_i + N_j - N_k - N_l}{N_i + N_j + N_k + N_l} \end{equation} So for example \begin{equation} \frac{\partial S}{\partial N_1} = \frac{1}{N_1 + N_2 + N_3 + N_4} - \frac{N_1+N_2-N_3-N_4}{\left(N_1 + N_2 + N_3 + N_4\right)^2} \end{equation} WARNING The above is just meant to show you how the partial derivative would work. To actually figure out what entries in Table I correspond to $N_1, N_2$, etc, you need to be careful about whether the pairs of angles should be perpendicular or not, or whether the angles $a, a', b, b'$ are being used, following the original notation in Eqs 21 and 25 carefully.
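To make the mechanics concrete, the partial derivatives can also be evaluated numerically rather than symbolically. A sketch (my own illustration with made-up counts — the real $N_i$ must be read off Table I with the care the answer describes):

```python
import math

def E(n1, n2, n3, n4):
    # Eq. 25: correlation value from four coincidence counts
    return (n1 + n2 - n3 - n4) / (n1 + n2 + n3 + n4)

def S(N):
    # Eq. 21: CHSH combination of four correlation terms (16 counts total)
    return E(*N[0:4]) - E(*N[4:8]) + E(*N[8:12]) - E(*N[12:16])

def S_error(N, h=1e-6):
    # Poisson statistics give Var(N_i) = N_i, so
    # (Delta S)^2 = sum_i N_i * (dS/dN_i)^2,
    # with each derivative taken by a central finite difference.
    var = 0.0
    for i, n in enumerate(N):
        up, down = list(N), list(N)
        up[i], down[i] = n + h, n - h
        var += n * ((S(up) - S(down)) / (2 * h)) ** 2
    return math.sqrt(var)

counts = [10, 20, 30, 40] * 4   # made-up coincidence counts
print(S(counts), S_error(counts))
```

For these toy counts every correlation term equals -0.4, so S = 0 while the propagated error comes out to about 0.183.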
{ "domain": "physics.stackexchange", "id": 91914, "tags": "experimental-physics, error-analysis, bells-inequality" }
Periodicity of the discrete-time Fourier Transform
Question: The DTFT of a sequence $x[n]$ can be written as $$X(e^{j\omega}) = \sum_{n = -\infty}^{\infty} x[n] e^{-j\omega n}.$$ Is the smallest (fundamental) period in frequency of the DTFT always $2\pi$? Or can it be smaller than $2\pi$? In addition, I was wondering why we cannot use that same notation for the argument with the CTFT. We typically denote the continuous-time Fourier Transform by $X(j\omega)$. But what's wrong with using $X(e^{j\omega})$ where $$X(e^{j\omega}) = \int_{-\infty}^{\infty} x(t) e^{-j\omega t} dt?$$ I know for a fact that this is wrong because the notation inherently implies the periodicity of $X(e^{j\omega})$, since $X(e^{j(\omega + 2\pi)}) = X(e^{j\omega})$, which is obviously not true in this case. But I don't understand what mathematically does not permit us to use $e^{j\omega}$ as an argument in the continuous-time case. Answer: The DTFT is always $2\pi$-periodic. However, it can also have a smaller period, namely a fraction of $2\pi$. Take any sequence $x[n]$ for which the DTFT exists and insert $L-1$ zeros between the samples. The DTFT of the new sequence $\hat{x}[n]$ can then be written as $$\hat{X}(e^{j\omega})=\sum_{n=-\infty}^{\infty}\hat{x}[n]e^{-jn\omega}=\sum_{n=-\infty}^{\infty}\hat{x}[nL]e^{-jnL\omega}\tag{1}$$ $\hat{X}(e^{j\omega})$ as given by $(1)$ clearly has a period of $2\pi/L$. Concerning your second question, the reason why a continuous-time Fourier transform $X(j\omega)$ cannot be written as $X(e^{j\omega})$ is that $$X(j\omega)=\int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt\neq \int_{-\infty}^{\infty}x(t)(e^{j\omega})^{-t}dt\tag{2}$$ simply because generally $$e^{-j\omega t}\neq (e^{j\omega})^{-t}\tag{3}$$ unless $t$ is an integer (as is the case with the DTFT). Note that if $(3)$ were not true, i.e., if $e^{-j\omega t}= (e^{j\omega})^{-t}$ were true, we could easily show that $e^{-j\omega t}=1$ for all $t$. 
Just write $\omega=2\pi/T$ and you get $$e^{-j\omega t}=e^{-j2\pi t/T}=\left(e^{-j2\pi}\right)^{t/T}=1^{t/T}=1$$ which is of course absurd. Proof of Eq. $(3)$: Let $z$ be a complex number with $|z|=1$ (the magnitude is irrelevant here): $$z=e^{j\theta}\tag{4}$$ Let $a$ and $b$ be real numbers. Then $$z^{ab}=e^{j\theta a b}\tag{5}$$ And with $z^a=u$ $$\left(z^a\right)^b=u^b=e^{j\arg\{u\}b}\tag{6}$$ With $\arg\{u\}=\text{pv}\{\theta a\}\in (-\pi,\pi]$, where $\text{pv}$ denotes the principal value, it is clear that generally $$z^{ab}=e^{j\theta a b}\neq \left(z^a\right)^b=e^{j\,\text{pv}\{\theta a\}b}\tag{7}$$ There are two cases where $(5)$ and $(6)$ are equal: if $\text{pv}\{\theta a\}=\theta a$, which is the case if $\theta a\in (-\pi,\pi]$. if $b$ is integer, since $$\text{pv}\{\theta a\}=\theta a+2\pi k\tag{8}$$ with some appropriately chosen integer $k$ (such that the result is in the interval $(-\pi,\pi]$), then $$\left(z^a\right)^b=e^{j\,\text{pv}\{\theta a\}b}=e^{j(\theta a+2\pi k)b}=e^{j\theta ab}=z^{ab},\qquad b\in\mathbb{Z}\tag{9}$$ [Note that $(9)$ would also hold for rational $b$ as long as $kb$ is integer.]
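The period-$2\pi/L$ claim in $(1)$ is also easy to verify numerically. A quick sketch (my illustration, using NumPy and an arbitrary random sequence):

```python
import numpy as np

def dtft(x, w):
    """Evaluate the DTFT of a finite-length sequence x at frequencies w (rad/sample)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

x = np.random.default_rng(0).standard_normal(8)

L = 3
xhat = np.zeros(len(x) * L)
xhat[::L] = x                      # insert L-1 zeros between samples

w = np.linspace(0, np.pi, 50)
# The zero-stuffed sequence repeats in frequency with period 2*pi/L ...
assert np.allclose(dtft(xhat, w), dtft(xhat, w + 2 * np.pi / L))
# ... while the original sequence generally does not.
assert not np.allclose(dtft(x, w), dtft(x, w + 2 * np.pi / L))
```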
{ "domain": "dsp.stackexchange", "id": 6068, "tags": "discrete-signals, fourier-transform, continuous-signals, fourier, dtft" }
Solve a set of "restricted" linear equations efficiently
Question: I was recently asked to solve the following challenge (in C++) as part of the interview process. However, I haven't heard from them at all afterwards, and based on past experiences of unsuccessful applicants that I've read online, my submission didn't meet their standards. Since I did solve the challenge to the best of my abilities, I'm at a loss to understand in what ways I could have made a better solution. I'm posting the problem statement (in my own words) and my solution here. Please critique it as you would for a potential applicant to your team (as a means for gauging whether it's worthwhile to have a subsequent phone-screen with such an applicant). Input Details The utility would take as input an input file containing a list of equations, one per line. Each equation has the following format: <LHS> = <RHS>, where LHS is the left-hand side of the equation and is always a variable name. RHS is the right hand side of the equation and can be composed of the following only: Variables Unsigned integers The + operator Assumptions Input is well-formed i.e. Number of variables = Number of equations, with each variable occurring on the LHS of exactly one equation. The system of equations has an unique solution, and does not have circular dependencies. There are one or more white spaces between each token (numbers, + operator, variables). A variable name can only be composed of letters from the alphabet (e.g. for which isalpha(c) is true). All integers will fit in a C unsigned long. Output Format The utility would print the value of each variable after evaluating the set of equations, in the format <variable name> = <unsigned integer value>. The variables would be sorted in ascending (lexicographic) order. 
Sample Input Output Input file: off = 4 + r + 1 l = 1 + or + off or = 3 + 5 r = 2 Expected output for the above input: l = 16 off = 7 or = 8 r = 2 Implementation Notes Due to the simplified nature of the input equations, a full-blown linear equation solver is not required in my opinion (as such a solver would have at least quadratic complexity). A much simplified (and asymptotically faster) solution can be arrived at by modeling the set of input equations as a Directed Acyclic Graph (DAG), by observing the dependencies of the variables from the input equations. Once we can model the system as a DAG, the steps to derive the variable values are as follows: Construct the dependency DAG, where each node in the graph corresponds to a variable, and \$(a, b)\$ is a directed edge from \$a\$ to \$b\$ if and only if the variable \$a\$ needs to be fully evaluated before evaluating \$b\$. Order the vertices in the DAG thus constructed using topological sort. For each vertex in the sorted order, evaluate its corresponding variable fully before moving on to the next vertex. The algorithm above has a linear complexity, which is the best we could achieve under the current assumptions. I've encapsulated the algorithm in the following class (I've used Google's C++ Style Guide in my code - not sure it's the best choice, but I preferred to follow a style guide that's at least recognized by and arrived at by a non-trivial number of engineers.) Class header file: // // Class that encapsulates a (constrained) linear equation solver. See README.md // for assumptions on input restrictions. 
// #include <unordered_map> #include <vector> #include <list> #ifndef _EVALUATOR #define _EVALUATOR class Evaluator { private: // Stores the values of each variable throughout algorithm std::vector<UL> variable_values_; // Hash tables that store the correspondence between variable name and index std::unordered_map<std::string, UL> variable_index_map_; std::unordered_map<UL, std::string> index_variable_map_; // Adjacency list for DAG that stores the dependency information amongst // variables. If A[i, j] is an edge, it implies variable 'i' appears on the // RHS of definition of variable 'j'. std::vector<std::list<UL> > dependency_adj_list_; // List of equations stored as indices. If the list corresponding to eq[i] // contains 'j', then variable 'j' appears on the RHS of variable 'i'. std::vector<std::list<UL> > equation_list_; // For efficiency, this list stores the number of dependencies for each // variable, which is useful while executing a topological sort. std::vector<UL> num_dependencies_; // Resets all internal data structures void Clear(); // Prints values of internal data structures to aid in debugging void PrintState(); // Adds an entry corresponding to each new variable detected while parsing input UL AddNewVar(std::string& ); // Parse the input equations from filename given as argument, and build the // internal data structures coressponsing to the input. bool ParseEquationsFromFile(const std::string&); // If DAG in dependency_adj_list_ has a valid topological order, returns // true along with the ordered vertices in the input vector bool GetTopologicalVarOrder(std::vector<UL>&); public: Evaluator() {}; /** * @brief Evaluate the set of constrained linear equations and returns the * values of the variables as a list. * * @param[in] string: Filename containing list of constrained linear equations. * @param[in] vector<string>: If solution exists, returns the values of * variables in lexicographic order (ascending). 
* * @return True if solution exists (always exists for valid input), false if * input is not well-formed (See README.md for more details about input * format). */ bool SolveEquationSet(const std::string&, std::vector<std::string>& ); }; #endif The main class file: #include "evaluator.h" #include <sstream> #include <unordered_set> #include <set> #include <queue> #include <algorithm> #include <cassert> #ifdef _EVALUATOR // Used for early returns if the expression is false #define TRUE_OR_RETURN(EXPR, MSG) \ do \ { \ bool status = (EXPR); \ if (status != true) \ { \ cerr << __FUNCTION__ \ << ": " << MSG << endl; \ return false; \ } \ } while(0) #endif using namespace std; //**** Helper functions local to the file **** // Returns true if each character in the non-empty string is a digit bool IsNumber(string s) { return !s.empty() && std::all_of(s.begin(), s.end(), ::isdigit); } // Given a string, returns a vector of tokens separated by whitespace vector<string> ParseTokensFromString(const string& s) { istringstream iss(s); vector<string> token_list; string token; while (iss >> token) token_list.push_back(token); return token_list; } // Returns true if the string can be a valid variable name (i.e has // only alphabetical characters in it). 
bool IsValidVar(string& v) { for (auto& c: v) TRUE_OR_RETURN(isalpha(c), "Non-alphabetical char in variable: " + v); return true; } //******************************************** void Evaluator::Clear() { variable_values_.clear(); variable_index_map_.clear(); index_variable_map_.clear(); dependency_adj_list_.clear(); equation_list_.clear(); num_dependencies_.clear(); } void Evaluator::PrintState() { for (auto i = 0U; i < dependency_adj_list_.size(); ++i) cout << index_variable_map_[i] << "(" << i << ") =>" << "Value(" << variable_values_[i] << "), Deps(" << num_dependencies_[i] << ")" << endl; } // Ensures all data structures correctly set aside an entry for the new variable UL Evaluator::AddNewVar(string& v) { if (variable_index_map_.count(v) == 0) { dependency_adj_list_.push_back(list<UL>()); equation_list_.push_back(list<UL>()); variable_values_.push_back(0); num_dependencies_.push_back(0); variable_index_map_.insert(make_pair(v, dependency_adj_list_.size() - 1)); index_variable_map_.insert(make_pair(dependency_adj_list_.size() - 1, v)); assert(num_dependencies_.size() == variable_values_.size() && variable_index_map_.size() == variable_values_.size() && variable_values_.size() == dependency_adj_list_.size()); } return variable_index_map_[v]; } // Parses equation from given input file line-by-line, checking // for validity of input at each step and returning true only if // all equations were successfully parsed. 
bool Evaluator::ParseEquationsFromFile(const string& sEqnFile) { string line; ifstream infile(sEqnFile); // This LUT serves as a sanity check for duplicate definitions of vars // As per spec, only ONE definition (appearance as LHS) per variable is handled unordered_set<string> defined_vars; while (getline(infile, line)) { vector<string> tokens = ParseTokensFromString(line); string lhs = tokens[0]; // Check if equation is adhering to spec TRUE_OR_RETURN(tokens.size() >= 3 && IsValidVar(lhs) && tokens[1] == "=", "Invalid equation: " + line); // Check if variable on LHS was previously defined - this would make the // current approach untenable, and require general equation solver. TRUE_OR_RETURN(defined_vars.count(lhs) == 0, "Multiple defn for: " + lhs); defined_vars.insert(lhs); const UL lhs_idx = AddNewVar(lhs); // The operands appear in alternate positions in RHS, tracked by isOp for (size_t i = 2, isOp = 0; i < tokens.size(); ++i, isOp ^= 1) { string token = tokens[i]; if (isOp) TRUE_OR_RETURN(token == "+", "Unsupported operator: " + token); else { if (IsNumber(token)) variable_values_[lhs_idx] += stol(token); else { TRUE_OR_RETURN(IsValidVar(token), "Invalid variable name: " + token); // Token variable must be evaluated before LHS. 
// Hence adding token => LHS edge, and adding token to RHS of // equation_list_[lhs] auto token_idx = AddNewVar(token); dependency_adj_list_[token_idx].push_back(lhs_idx); assert(lhs_idx < equation_list_.size()); equation_list_[lhs_idx].push_back(token_idx); num_dependencies_[lhs_idx]++; } } } } return (variable_index_map_.size() == dependency_adj_list_.size() && dependency_adj_list_.size() == variable_values_.size()); } // Execute the BFS version of topological sorting, using queue bool Evaluator::GetTopologicalVarOrder(vector<UL>& ordered_vertices) { ordered_vertices.clear(); queue<UL> q; for (auto i = 0U; i < dependency_adj_list_.size(); ++i) if (num_dependencies_[i] == 0) q.push(i); while (!q.empty()) { UL var_idx = q.front(); ordered_vertices.push_back(var_idx); q.pop(); for (auto& nbr: dependency_adj_list_[var_idx]) { assert(num_dependencies_[nbr] >= 0); num_dependencies_[nbr]--; if (num_dependencies_[nbr] == 0) q.push(nbr); } } return (ordered_vertices.size() == dependency_adj_list_.size()); } // Solves the constrained set of linear equations in 3 phases: // 1) Parsing equations and construction of the dependency DAG // 2) Topological sort on the dependency DAG to get the order of vertices // 3) Substituting the values of variables according to the sorted order, // to get the final values for each variable. 
bool Evaluator::SolveEquationSet(const string& eqn_file, vector<string>& solution_list) { Clear(); vector<UL> order; TRUE_OR_RETURN(ParseEquationsFromFile(eqn_file), "Parsing Equations Failed"); TRUE_OR_RETURN(GetTopologicalVarOrder(order), "Topological Order Not Found"); // Populate variable values in topological order for (auto& idx: order) for (auto& nbr: equation_list_[idx]) variable_values_[idx] += variable_values_[nbr]; // Get keys from the LUT sorted in ascending order set<pair<string, UL> > sorted_var_idx; for (auto& vi_pair: variable_index_map_) sorted_var_idx.insert(vi_pair); for (auto& vi_pair: sorted_var_idx) solution_list.push_back(vi_pair.first + " = " + to_string(variable_values_[vi_pair.second])); return true; } #endif The usage of the class is as follows: string eqn_file, log_file; Evaluator evaluate; vector<string> solution_list; // Logic to get input filename from user - skipping it here bool bStatus = evaluate.SolveEquationSet(eqn_file, solution_list); for (auto& s: solution_list) cout << s << endl; Answer: I see a number of things that may help you improve your code. Note that I am not referring to Google's C++ style guide, so contradictions between my suggestions and that guide simply mean that Google is wrong. :) Fix the bugs There is a spurious #endif at the end of evaluator.cpp that prevents it from being compiled. The evaluator.h class relies on an undefined type UL. If that's something that's defined by your particular compiler, you should be aware that it's non-standard and therefore not portable. I fixed this by adding this line: using UL = unsigned long; Make sure you have all required #includes The evaluator.cpp uses cerr and endl in the TRUE_OR_RETURN macro but doesn't #include <istream>. It also uses ifstream but doesn't #include <fstream>. Also, carefully consider which #includes are part of the interface (and belong in the .h file) and which are part of the implementation. 
For example, the interface relies on std::string but the .h file is missing #include <string>. Don't abuse using namespace std Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. Don't use std::endl when '\n' will do Using std::endl emits a \n and flushes the stream. Unless you really need the stream flushed, you can improve the performance of the code by simply emitting '\n' instead of using the potentially more computationally costly std::endl. Avoid function-like macros In modern C++, there are very few places that a function-like macro should be used. Let's look at yours: // Used for early returns if the expression is false #define TRUE_OR_RETURN(EXPR, MSG) \ do \ { \ bool status = (EXPR); \ if (status != true) \ { \ cerr << __FUNCTION__ \ << ": " << MSG << endl; \ return false; \ } \ } while(0) There are a number of problems with this macro. First, we don't really need the status variable at all. The if could be written: if (!(EXPR)). Second, cerr and endl (if it's used at all, see previous suggestion) should be fully qualified with the std namespace. Third, __FUNCTION__, while common, is not standard. One could use __func__ instead, which is standard. Fourth, I'd much rather see the macro eliminated entirely in favor of simple inline code, not least because printing the internal function name is not very user friendly and is only meaningful to a programmer working on the code. Use const references where practical The code currently declares its IsNumber function like so: bool IsNumber(string s) { return !s.empty() && std::all_of(s.begin(), s.end(), ::isdigit); } This has two problems. First it passes by value, so a new string is created on every call. This is quite wasteful of both time and memory. Second, it should actually be a const reference, since s is not modified within the function. 
Think carefully about signed versus unsigned If, indeed, my assumption about UL meaning unsigned long is correct, then this assert is useless: assert(num_dependencies_[nbr] >= 0); Because num_dependencies_ is defined as: std::vector<UL> num_dependencies_; So therefore the assert can never be false. Declare only one variable per line The test code includes this line: string eqn_file, log_file; It's generally better style to declare each variable on its own line. Don't visually align variables The code currently contains a number of lines that look like these: string eqn_file, log_file; Evaluator evaluate; vector<string> solution_list; While that may look pretty to some people, and perhaps you use an IDE or other code formatter that does that automatically, in my experience, it simply creates a maintenance headache because another programmer (who perhaps is not using your same tools) who modifies this (e.g. according to these suggestions) to add the missing namespace specifications ends up with this: std::string eqn_file; std::string log_file; Evaluator evaluate; std::vector<std::string> solution_list; Now it doesn't look so pretty, so either that poor programmer has to redo all of the fiddly alignment or better, in my view, is to eliminate it entirely to end up with this: std::string eqn_file; std::string log_file; Evaluator evaluate; std::vector<std::string> solution_list; Rethink your interfaces Right now, the code is used like this: bool bStatus = evaluate.SolveEquationSet(eqn_file, solution_list); This has a couple of problems. First, why doesn't SolveEquationSet return a solution_list instead of taking it as a parameter (in an overly subtle way, by reference)? Second, it would be much more flexible if it would take a generic istream & rather than a string that's presumed to be a file name. This would allow, for example, simple testing by constructing equations in a std::stringstream and passing a reference to that for evaluation. 
Third, why is evaluate a class with non-static functions rather than, say a functor? Once it has returned the solution set, there's not really anything useful to do with it, so I'd suggest that the usage should look more like this: auto solution_list = EquationSolver(infile); if (solution_list.size() == 0) { std::cout << "There are no solutions\n"; } else { for (const auto &sol : solution_list) { std::cout << sol << '\n'; } } Simplify your algorithm The solution you've written works, which is good, but it's not very efficient in terms of time or space. Here's an alternative approach: scan each equation into a Variable class which contains the variable name, a std::set of dependencies and a single unsigned for the sum of any constants process the list by adding the value of each Variable with 0 dependencies to the constant of the current Variable, eliminating that Variable from the dependencies repeat from step 2 until all dependency lists are empty This is guaranteed to work because of the definition of the input equations and uses only a single nested std::set instead of maps, sets, vectors, and queues as with the current solution. Test the code We were both wondering about the complexity and runtime speed, so I wrote a test harness. It uses this stopwatch class for timing. I made a slight change to your code to allow it to either take its input from a stream or by passing a filename and put that in "orig.h" and factored out the revised version from this question into "Variable.h". 
#include "orig.h" #include "Variable.h" #include "stopwatch.h" #include <fstream> #include <sstream> #include <cassert> std::string original(std::istream &in) { static Evaluator evaluate; std::vector<std::string> solution_list; evaluate.SolveEquationSet(in, solution_list); std::stringstream ss; for (const auto &item: solution_list) ss << item << '\n'; return ss.str(); } std::string Edward(std::istream &in) { auto sol{solve(in)}; std::stringstream ss; for (const auto &item: sol) ss << item << '\n'; return ss.str(); } class TestRig { public: TestRig(std::string myname, std::string (*myfunc)(std::istream &in)) : name{myname}, testfunc{myfunc} {} std::string operator()(const std::string &instring) const { std::stringstream in{instring}; Stopwatch<> sw; auto solution = testfunc(in); sw.stop(); std::cout << name << ": " << sw.elapsed() << "us\n"; return solution; } private: std::string name; std::string (*testfunc)(std::istream &in); }; static const TestRig tests[]{ { "Edward", Edward }, { "original", original }, }; int main() { const std::string input{"b = c + d + 3\nd = e + 4\na = b + c + d + 1\ne = 7\nc = d + 2"}; std::stringstream in{input}; const auto golden = original(in); for (auto testcount = 10; testcount; --testcount) { for (const auto &test: tests) { auto sol = test(input); assert(sol == golden); } } } Results I compiled the program on my 64-bit Linux machine with g++ version 6.3.1 and -O2 optimization. Here's what I got when I ran the program: Edward: 18us original: 32us Edward: 12us original: 18us Edward: 11us original: 16us Edward: 10us original: 15us Edward: 10us original: 15us Edward: 15us original: 15us Edward: 10us original: 15us Edward: 10us original: 15us Edward: 10us original: 15us Edward: 10us original: 15us So it seems that the revised code is about 50% faster.
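For reference, the three-step elimination described under "Simplify your algorithm" can be sketched independently of C++. Here is a minimal Python rendering of the same idea (the names and data layout are my own — the actual "Variable.h" is not shown above):

```python
def solve(equations):
    """equations: var -> (constant_sum, list of variables on its RHS).
    Assumes a unique, cycle-free solution, as the problem statement guarantees."""
    consts = {v: c for v, (c, d) in equations.items()}
    deps = {v: list(d) for v, (c, d) in equations.items()}
    values = {}
    while deps:
        # Step 2: every variable with an empty dependency list is fully known.
        ready = [v for v, d in deps.items() if not d]
        for v in ready:
            values[v] = consts[v]
            del deps[v]
        # Step 3: fold the newly known values into the remaining constants.
        for v, d in deps.items():
            consts[v] += sum(values[r] for r in d if r in values)
            deps[v] = [r for r in d if r not in values]
    return values

# The sample input from the question: off = 4 + r + 1, l = 1 + or + off, etc.
print(solve({'off': (5, ['r']), 'l': (1, ['or', 'off']),
             'or': (8, []), 'r': (2, [])}))
```

Each variable is touched once per round it stays unresolved, and each dependency edge is removed exactly once, so the work is linear in the size of the input, matching the question's complexity argument.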
{ "domain": "codereview.stackexchange", "id": 24888, "tags": "c++, algorithm, object-oriented, c++11, interview-questions" }
Loop to extract json from dataframe and storing in a new dataframe
Question: I have a dataframe (obtained from a csv saved from mysql) with several columns, and one of them consists of a string which is the representation of a JSON object. The data looks like: id email_id provider raw_data ts 1 aa@gmail.com A {'a':'A', 2019-23-08 00:00:00 'b':'B', 'c':'C'} And what my desired output is: email_id a b c aa@gmail.com A B C What I have coded so far is the following: import pandas as pd import ast df = pd.read_csv('data.csv') df1 = pd.DataFrame() for i in range(len(df)): dict_1 = ast.literal_eval(df['raw_content'][i]) df1 = df1.append(pd.Series(dict_1),ignore_index=True) pd.concat([df['email_id'],df1]) This works but it has a very big problem: it is extremely slow (it takes hours for 100k rows). How could I make this operation faster? Answer: Finally I got an amazing improvement thanks to Stack Overflow, regarding two things: https://stackoverflow.com/questions/10715965/add-one-row-to-pandas-dataframe https://stackoverflow.com/questions/37757844/pandas-df-locz-x-y-how-to-improve-speed Also, as hpaulj pointed out, changing to json.loads slightly increases the performance. It went from 16 hours to 30 seconds. row_list = [] for i in range(len(df)): dict1 = {} dict1.update(json.loads(df.at[i,'raw_content'])) row_list.append(dict1) df1 = pd.DataFrame(row_list) df2 = pd.concat([df['email_id'],df1],axis=1)
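For what it's worth, the explicit loop can be avoided entirely with pd.json_normalize. A sketch with toy data (column names taken from the question; assumes the stored strings are valid JSON, as the switch to json.loads implies):

```python
import json
import pandas as pd

df = pd.DataFrame({
    "email_id": ["aa@gmail.com", "bb@gmail.com"],
    "raw_content": ['{"a": "A", "b": "B"}', '{"a": "X", "c": "C"}'],
})

# Parse every JSON string once, then expand all the dicts into columns in one shot.
parsed = pd.json_normalize([json.loads(s) for s in df["raw_content"]])
result = pd.concat([df["email_id"], parsed], axis=1)
print(result)
```

Rows that lack a key simply get NaN in that column, which matches what the row_list/DataFrame approach produces.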
{ "domain": "codereview.stackexchange", "id": 35792, "tags": "python, performance, pandas" }
Private method to validate Excel header
Question: I am new to unit testing and I am unsure how to design classes that use third party libraries, so that I can easily test them. The example I'll use is with EPPlus-OfficeOpenXml (an Excel document manipulator). I have a class to import excel documents and validate them. Here is a sample of my code: Imports OfficeOpenXml Public Class TestImporter Private errorList As New List(Of String) Public Function import(fileName As String) As List(Of String) Dim openFile As New FileInfo(fileName) Dim document As New ExcelPackage(openFile) Dim sheet As ExcelWorksheet = document.Workbook.Worksheets(1) validateAll(sheet) Return errorList End Function Private Sub validateAll(sheet As ExcelWorksheet) validateHeaders(sheet) End Sub Private headers As String() = {"Header 1", "Header 2", "Header 3", "Header 4", "Header 5"} Private Sub validateHeaders(sheet As ExcelWorksheet) For i As Integer = 1 To headers.Length If Not (sheet.Cells(1, i).Value = headers(i - 1)) Then errorList.Add(String.Format("The header '{0}' does not match '{1}'", sheet.Cells(1, i).Value, headers(i - 1))) End If Next End Sub End Class The validation subroutines are private, because the only action I want to expose for this class is the import functionality. How should I design this class in order to test my validation of Excel inputs? My first thought is have a test file for each test. Then pass in the name of the test file into the import function in each test. This doesn't seem to be the correct solution, and seems kind of hackish. Answer: It's basically like RubberDuck said. [...] the only action I want to expose for this class is the import functionality. This means the validation logic is an implementation detail of the TestImporter class. When you test a unit, you only care about the public interface, and treat private members (which are called by the public ones) as implementation details; not testing them directly is fine, unless you do want to test that logic separately... 
How should I design this class in order to test my validation of Excel inputs? So you want to test the validation logic? Great! Treat it as a unit then (as opposed to an implementation detail), and extract it into its own class. You will want to decouple the validation logic from the TestImporter class though, and it's easier to do that if you code against an abstraction. Make a TestImporter constructor inject the dependency: Private ReadOnly HeaderValidator As IHeaderValidationLogic Public Sub New(validator As IHeaderValidationLogic) HeaderValidator = validator End Sub Now, instead of calling validateAll, you call into the IHeaderValidationLogic interface: validationErrors.AddRange(HeaderValidator.Validate(sheet)) But this creates another problem: does the validation logic really need to be coupled with an ExcelWorksheet object? What if you switched to the VSTO interop assemblies and worked off a Microsoft.Office.Interop.Excel.Worksheet object instead, would the logic be any different? Best would be to abstract away that "worksheet", and make the validation logic work off what it actually validates: the contents of the worksheet. Extract the values of range $A$1:$E$1 into an array, and pass that array (of strings?) to your header validation logic. Assuming we're returning a list of error messages, the IHeaderValidationLogic interface could look like this: Public Interface IHeaderValidationLogic Function Validate(headerContents As IEnumerable(Of String)) As IReadOnlyList(Of String) End Interface Now, returning a list of error messages isn't very .NET-like. Why not do this instead? Public Interface IHeaderValidationLogic Sub Validate(headerContents As IEnumerable(Of String)) End Interface And then the implementation can throw some ValidationException, with whatever string content you want it to have. And that exception would bubble up the stack to Import(fileName As String), which could now be a Sub - because again, returning errors isn't very .NET-like. 
Note that your method names should be PascalCase, to follow the established VB.NET naming conventions. If you're going to want to write a unit test for the actual Import procedure, you'll need to reduce coupling - otherwise your tests will incur I/O, which slows them down and makes them dependent on the file system, which isn't ideal. Extract another interface. Public Interface IExcelPackageProvider Function Open(fileName As String) As ExcelPackage End Interface Now you can mock this interface and make the Open method return a stub ExcelPackage with a fake ExcelWorksheet object that you've setup for your tests, and as a side-effect of doing this, you've extracted a highly specialised and reusable component from your Import procedure, making the class more cohesive and a little less coupled. Private ReadOnly WorkbookProvider As IExcelPackageProvider Private ReadOnly HeaderValidator As IHeaderValidationLogic Public Sub New(provider As IExcelPackageProvider, validator As IHeaderValidationLogic) WorkbookProvider = provider HeaderValidator = validator End Sub It's now the job of WorkbookProvider to deal with file I/O and return a workbook object for the Import method to work with.
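To make the mechanics concrete, here is a minimal sketch of the same idea transposed to Python (the class and method names are illustrative analogues of the VB.NET interfaces above, not part of the original answer): inject the validator and the workbook provider, and unit-test the importer with fakes so no Excel file or I/O is involved.

```python
# Illustrative Python analogue of the answer's design: both collaborators
# are injected, so the importer can be tested without touching Excel.

EXPECTED_HEADERS = ["Header 1", "Header 2", "Header 3", "Header 4", "Header 5"]

class HeaderValidator:
    """Validates header contents only -- no worksheet object needed."""
    def validate(self, header_contents):
        errors = []
        for expected, actual in zip(EXPECTED_HEADERS, header_contents):
            if expected != actual:
                errors.append(f"The header '{actual}' does not match '{expected}'")
        return errors

class Importer:
    def __init__(self, workbook_provider, validator):
        # Dependencies are injected, so tests can pass fakes.
        self._provider = workbook_provider
        self._validator = validator

    def import_file(self, file_name):
        sheet = self._provider.open(file_name)   # a fake in tests, no I/O
        return self._validator.validate(sheet.header_row())

# A fake provider/worksheet pair for a unit test.
class FakeSheet:
    def __init__(self, row): self._row = row
    def header_row(self): return self._row

class FakeProvider:
    def __init__(self, sheet): self._sheet = sheet
    def open(self, file_name): return self._sheet

sheet = FakeSheet(["Header 1", "Wrong", "Header 3", "Header 4", "Header 5"])
errors = Importer(FakeProvider(sheet), HeaderValidator()).import_file("ignored.xlsx")
```

The test incurs no file-system access at all, which is exactly the decoupling the answer is after.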
{ "domain": "codereview.stackexchange", "id": 20029, "tags": "object-oriented, unit-testing, excel, vb.net" }
Constructing a binary tree with given traversals
Question: I am given the post-order & in-order traversals of a BST & I need to construct it. I want to know how to do this. For e.g. Post Order : DCBGFEA In Order : BDCAFGE This is how I am trying to do it: From POST order it is clear that A is the root. So in IN order, I consider everything to the right of A to be in its right subtree, & similarly for the left. Now, I consider A's left subtree. It has BDC. Again from POST order, I conclude that B is the root. Now, again consider B's subtree. It has contents DC &, from POST order, C has to be the root. So now I get C as the root. But where to place C, at B's left or right? That's where I am getting stuck now. The same happens with A's right subtree. I place E as the root of A's right subtree. Then I conclude the element below should be F, but does it go to the right or left of E? I want to know how to tackle this problem & also whether there is any better approach. Answer: You are doing well. From the postorder determine the root, from the inorder determine which letters are in the left and right subtrees. Once you have done that, you can proceed in the same way for the subtrees. In the example you start with postorder DCBGFE A and inorder BDC A FGE. Using postorder you determine the root as A, and using inorder split the letters into left BDC and right FGE. Thus (1) the left subtree has postorder DCB and inorder BDC while (2) the right subtree has postorder GFE and inorder FGE. Let us look at the left subtree (1) which you mention in your question. The root is B, and both D and C follow B in the inorder and thus are to the right of B. How to place them? This tiny subtree has postorder DC and inorder DC. Hence its root is C, and D is before C in the inorder, and so is in the left subtree of C.
{ "domain": "cs.stackexchange", "id": 1239, "tags": "binary-trees" }
Calculating the number of cubes needed to form a sum
Question: Since I've never done any performance programming (Aside from the better choices such as array vs list etc. The real basics.), I should probably read up on it. But I had to start somewhere, so, someone I know - who is a much better programmer than I am - tasked me with making this "assignment": You are given the total volume m of the building. Being given m can you find the number n of cubes you will have to build? The parameter of the function findNb (find_nb, find-nb) will be an integer m and you have to return the integer n such that \$n^3 + (n-1)^3 + \dots + 1^3 = m\$ if such an n exists or -1 if there is no such n. He gave me this template with it: using System; public class Program { public static void Main() { Console.WriteLine(findNb(4183059834009)); Console.WriteLine(findNb(24723578342962)); Console.WriteLine(findNb(135440716410000)); Console.WriteLine(findNb(40539911473216)); } public static long findNb(long m) { } } In which I inserted: for(long n = 1; n < (m / 3); n++) { double vol = 0; for(long i = 0; i < n; i++) vol += Math.Pow((n - i), 3); if(vol > m) return -1; if(vol == m) return n; } return -1; This code works, but it takes incredibly long for the larger numbers, which is my main problem. What can I do to shorten the time this code takes to complete? Answer: When there's math to be done, always check if some formula exists $$1^3+2^3 + \dots+n^3 = m = (1+2 + \dots+n)^2$$ in conjunction with some other formula (interesting wikipedia article title) $$\sum^n_{k=1}k = \frac{n(n+1)}{2}$$ you get some equation for the result $$\sqrt{m} = 1+2 + \dots+n = \sum^n_{k=1}k = \frac{n(n+1)}{2}$$ and maybe some closed formula $$ \begin{align} \sqrt{m} &= \frac{n(n+1)}{2}\\ 2\sqrt{m} &= n(n+1)\\ 2\sqrt{m} &= n^2+n\\ 0&= n^2+n - 2\sqrt{m} \end{align} $$ Which has the 2 solutions $$n_{1,2} = -\frac12 \pm\sqrt{\frac14+2\sqrt{m}}$$ As \$n\$ should be a positive number, only the first solution makes sense.
Return \$ -\frac12 +\sqrt{\frac14+2\sqrt{m}}\$ if it is an integer or \$-1\$ otherwise. The results are: $$ \begin{array}{r|r} m & n\\ \hline 4183059834009 & 2022\\ 24723578342962 & -1\\ 135440716410000 & 4824\\ 40539911473216 & 3568\\ \end{array} $$ 24723578342962 is a very close call, because $$\sum^{3153}_{k=1}k^3=24723578342961$$ is only 1 off. The floating point result is \$n=3153.00000000003\$. If you don't want to get false positives due to floating point precision (lack thereof), you can do the check by converting the result to some integer type and see if you get the exact number by doing the calculations with that.
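A sketch of this closed formula in code (an illustrative Python translation, not the original C#), using integer square roots to sidestep exactly the floating-point near-miss described above:

```python
# Closed-form solver for n^3 + (n-1)^3 + ... + 1^3 = m.
# Since the sum equals (n(n+1)/2)^2, m must be a perfect square whose
# root s is a triangular number. math.isqrt keeps everything in exact
# integer arithmetic, so the near-miss 24723578342962 cannot fool us.

from math import isqrt

def find_nb(m):
    s = isqrt(m)
    if s * s != m:                        # m must be a perfect square
        return -1
    n = (isqrt(1 + 8 * s) - 1) // 2       # solve n(n+1)/2 = s
    return n if n * (n + 1) // 2 == s else -1

results = [find_nb(m) for m in (4183059834009, 24723578342962,
                                135440716410000, 40539911473216)]
```

The four test inputs reproduce the table above, including the -1 for the one-off case.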
{ "domain": "codereview.stackexchange", "id": 20204, "tags": "c#, performance" }
Logistic Regression mapping formula
Question: The sigmoid function predicts a probability value, which is between 0 & 1. What is the formula in logistic regression that maps the predicted probabilities to either 1 or 0? Answer: You get the output of the logistic regression $\sigma$, which is between $0$ and $1$. Default option (what most packages spit out): In order to get class labels you simply map all values $\sigma \leq 0.5$ to $0$ and all values $\sigma >0.5$ to $1$. Whether $\sigma =0.5$ belongs to class $0$ or class $1$ can differ between implementations (practically irrelevant). But it must be deterministic in order to get reproducible results. This implies that random assignment for the threshold $0.5$ should not be done. Depending on your application you might change this rule. For example, if wrongly predicted labels of $1$ are associated with high cost (e.g. label 1 means that a person gets good conditions for life insurance), you might only want to assign label 1 when your model predicts a high probability, larger than 0.95. Then we would have the following rule: map all values $\sigma \leq 0.95$ to $0$ and all values $\sigma >0.95$ to $1$. You have to implement this on your own from the probabilities that the logistic regression fit gives you.
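Both decision rules can be sketched in a few lines (the function name and the probability values are illustrative):

```python
# Map sigmoid outputs to class labels: values at or below the threshold
# go to class 0, values above it to class 1.

def to_labels(probs, threshold=0.5):
    return [0 if p <= threshold else 1 for p in probs]

probs = [0.10, 0.50, 0.51, 0.96]
default_labels = to_labels(probs)        # default 0.5 rule -> [0, 0, 1, 1]
strict_labels = to_labels(probs, 0.95)   # cost-sensitive rule -> [0, 0, 0, 1]
```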
{ "domain": "datascience.stackexchange", "id": 10419, "tags": "logistic-regression, sigmoid" }
Definition of Shear strain
Question: From Wikipedia article on deformation, the shear strain is defined as the angle of the deformation. I had always thought of it as the limiting ratio of the difference in perpendicular displacement of the beginning and end of a line element with the length of that line element. $$\frac{\partial{u_y}}{\partial{x}}$$ Where $u_y$ is the displacement of a point in the $y$ direction. In this sense, the definition would follow similarly to that of normal strain. That is, a ratio of change in length to that of original length. Why is it that it is defined as the angle instead? I understand that in very small lengths (differential lengths) the two are the same. For pure shear stress, $$\frac{\partial{u_y}}{\partial{x}}=\tan{(\alpha)}\approx\alpha\;\;;\alpha\approx0$$ But why is it defined as the angle and not the ratio? Answer: Let (x,y) be the coordinates of an arbitrary material point in the undeformed configuration of the material, and let u(x,y) and v(x,y) be the displacements of this material point in the x and y directions, respectively. Then the coordinates of the material point in the deformed configuration of the material are (x+u,y+v). 
The differential position vector between two closely neighboring points in the deformed configuration of the material will be: $$\mathbf{ds}=\left(dx+\frac{\partial u}{\partial x}dx+\frac{\partial u}{\partial y}dy\right)\mathbf{i}+\left(dy+\frac{\partial v}{\partial x}dx+\frac{\partial v}{\partial y}dy\right)\mathbf{j}$$ The square of this differential position vector (in the deformed configuration of the body) is given, to linear terms in the displacements, by:$$(ds)^2=(dx)^2+(dy)^2+2\frac{\partial u}{\partial x}(dx)^2+2\frac{\partial u}{\partial y}dxdy+2\frac{\partial v}{\partial y}(dy)^2+2\frac{\partial v}{\partial x}dxdy$$ If we subtract the square of the length of the differential position vector in the undeformed configuration of the material, we obtain:$$(ds)^2-(ds)_0^2=2\frac{\partial u}{\partial x}(dx)^2+2\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)dxdy+2\frac{\partial v}{\partial y}(dy)^2$$where $$(ds)^2_0=(dx)^2+(dy)^2$$Next, if we divide by $(ds)^2_0$, we obtain:$$\frac{(ds)^2-(ds)_0^2}{(ds)^2_0}=2\left[\frac{\partial u}{\partial x}\cos^2{\alpha}+\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)\cos{\alpha}\sin{\alpha}+\frac{\partial v}{\partial y}\sin^2{\alpha}\right]$$ The term in parenthesis is the strain in the differential position vector between the undeformed and deformed configurations of the material: $$\epsilon=\epsilon_{xx}\cos^2{\alpha}+2\epsilon_{xy}\cos{\alpha}\sin{\alpha}+\epsilon_{yy}\sin^2{\alpha}$$ This illustrates how the partial derivatives of the displacements (including the shear components) are related to the changes in length of the material elements.
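As a quick numeric aside on the question's small-angle remark, this sketch (illustrative, not part of the answer) confirms that $\tan\alpha$ and $\alpha$ agree to order $\alpha^3$, which is why the angle-based and displacement-ratio definitions of shear strain coincide for small deformations:

```python
# tan(a) - a = a^3/3 + O(a^5), so for a small shear angle the two
# definitions differ only at third order in the angle.

import math

alpha = 0.01                          # radians, a "small" shear angle
difference = math.tan(alpha) - alpha
small = difference < alpha ** 3       # leading correction is alpha^3 / 3
```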
{ "domain": "physics.stackexchange", "id": 50392, "tags": "newtonian-mechanics, forces, classical-mechanics, material-science, stress-strain" }
What's the correct interpretation for the relativity of simultaneity?
Question: I'm currently reading "Introduction to special relativity" by Resnick, and I have a question about the relativity of simultaneity. He uses this example: So, from the point of view of $S$, $S'$ is moving at speed $v$: $S$ perceives the two events in $A$ and $B$ as simultaneous and $S'$ doesn't. Then there is another example that should be the mirror-image situation: In this case $S'$ sees $S$ moving at speed $-v$, therefore the events are simultaneous for $S'$ but not for $S$. Here is my question: these two situations are exactly the same situation seen from two different frames of reference, right? $S$ always perceives itself as stationary, therefore it sees $S'$ moving, and vice versa for $S'$. Therefore I'm wondering: from their own points of view, do they both perceive the events as simultaneous? Who's the one that actually doesn't see the events happening at the same time? My idea is that $S$ sees the events happening simultaneously, but, from its point of view, $S'$ won't see them being simultaneous. On the other hand, $S'$ actually sees the events happening at the same time, but for him $S$ doesn't. Is this interpretation correct? Answer: My idea is that S sees the events happening simultaneously, but, from its point of view, S' won't see them being simultaneous. On the other hand, S' actually sees the events happening at the same time, but for him S doesn't. Is this interpretation correct? No, this is not correct. In the first figure, $S$ sees the strikes happen at the same time, while $S'$ does not. In the second figure, the opposite is the case. If your interpretation were correct, then special relativity would really be of no use at all, since the "point" of using Lorentz transformations to change frames of reference is to faithfully represent what an observer in a different reference frame would see. I think the problem here is that the drawing is a little misleading.
It looks like the first frames in both of your figures are identical, except for a change of velocity $v \rightarrow -v$. This gives the impression that the lightning strikes in some sense "actually" happen at the same time, but that one of the observers is subsequently misled by their motion. But this is not true. To see this let us stick to the frame of $S$. In this frame, the lightning strikes in Fig. 2 do not happen at the same time. The strike at $A$ happens first, then a little later the strike at $B$. Thus, in frame $S$, the situation illustrated in the top panel of Fig. 2 never happens. That is, the lightning strikes are never present at the same time, unlike what the figure seems to indicate. The observer in $S$ would really see first the lightning at $A$, then a little later the lightning at $B$. Edit: I think the above can be clarified more by attaching some space-time coordinates to the events. In Fig. 1, the strikes occur at the following space-time positions \begin{equation} \begin{aligned} & \text{Strike 1} & \text{Strike 2} \\ \mathrm{S:} \qquad &(t,x) = (0,A), \qquad & (t,x) = (0,B), \\ \mathrm{S':} \qquad &(t',x') = \left(-\gamma \frac{v}{c^2} A, \gamma A\right), \qquad & (t',x') = \left(-\gamma \frac{v}{c^2} B,\gamma B\right) \end{aligned} \end{equation} where $\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$. Note that since $A<0$ and $B>0$ according to the drawing (assuming that $x$ increases to the right), we see that in frame $S'$ strike $B$ occurs at a negative time, while $A$ occurs at a positive time. Thus, according to the observer in $S'$, strike $B$ occurs before strike $A$. In Fig. 2, the strikes occur at the following space-time positions \begin{equation} \begin{aligned} & \text{Strike 1} & \text{Strike 2} \\ \mathrm{S:} \qquad &(t,x) = \left(\gamma \frac{v}{c^2} A, \gamma A\right), \qquad & (t,x) = \left(\gamma \frac{v}{c^2} B,\gamma B\right) \\ \mathrm{S':} \qquad &(t',x') = (0,A), \qquad & (t',x') = (0,B). 
\end{aligned} \end{equation} These expressions look very similar to the case in Fig. 1, except for two differences: first, we have made the replacement $v \rightarrow -v$; second, the strikes happen at different times and positions. This is a different physical situation from the one depicted in Fig. 1 - it is not the same situation, just seen from a different point of view.
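These coordinates can be checked numerically with the standard Lorentz transformation $t' = \gamma(t - vx/c^2)$, $x' = \gamma(x - vt)$. The values of $v$, $A$ and $B$ below are illustrative (units with $c = 1$), not taken from the original answer:

```python
# Numeric check: two strikes simultaneous in S (t = 0) are not
# simultaneous in S'; the strike at B (x > 0) happens first in S'.

c = 1.0
v = 0.6
gamma = 1.0 / (1.0 - v ** 2 / c ** 2) ** 0.5   # 1.25 for v = 0.6
A, B = -2.0, 2.0          # strike positions in the S frame (A left of B)

def to_primed(t, x):
    return gamma * (t - v * x / c ** 2), gamma * (x - v * t)

tA, _ = to_primed(0.0, A)   # +1.5: strike A is at positive t' in S'
tB, _ = to_primed(0.0, B)   # -1.5: strike B is at negative t' in S'
order_flips = tB < 0.0 < tA   # in S', B happens before A, as stated above
```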
{ "domain": "physics.stackexchange", "id": 96713, "tags": "special-relativity, reference-frames, inertial-frames" }
scan_tools runtime error undefined symbol cblas_dtrsv ubuntu 12.04.2LTS
Question: I have downloaded the scan_tools source code from git: it did build successfully, but at runtime I get: "laser_scan_matcher_node: symbol lookup error: /usr/lib/libgsl.so.0: undefined symbol: cblas_dtrsv". When I run ldd on the laser_scan_matcher_node binary, I see that libgsl.so.0 is linked, but NOT libgslcblas. I am on ubuntu 12.04.2LTS. What could I do to make it work, please? Originally posted by micmac on ROS Answers with karma: 141 on 2013-03-05 Post score: 0 Answer: By browsing the web, I have found a way to build it on ubuntu 12.04.2 LTS. In fact only the laser_scan_matcher triggers the symbol lookup error. STEPS:
1. rosmake scan_tools
2. sudo apt-get remove binutils-gold (this one doesn't link libgslcblas correctly, as stated on this page: http://stackoverflow.com/questions/14405601/gnu-scientific-library-cblas-symbol-lookup-error ).
3. rosmake --pre-clean laser_scan_matcher : the build will not finish correctly (problems with libboost), but the next step will finish the build:
4. sudo apt-get install binutils-gold
5. rosmake laser_scan_matcher : the build finishes successfully.
This time when I run "ldd laser_scan_matcher_node" on the binary, libgslcblas is linked correctly, and the laser_scan_matcher works :) Originally posted by micmac with karma: 141 on 2013-03-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 13215, "tags": "ubuntu" }
[ROS2] rclcpp::Node vs rclcpp_lifecycle::LifecycleNode
Question: Hi all, a philosophical question that tends toward the technical. ROS2 has reached its 6th release and lifecycle nodes are not yet fully supported by all the ROS2 modules (e.g. image_transport). Now my question: is it really worth continuing to work with lifecycle or is it better to move to standard nodes? Will lifecycle nodes ever get all the support that standard nodes receive? Thank you Walter Originally posted by Myzhar on ROS Answers with karma: 541 on 2020-09-02 Post score: 1 Answer: Yes, it is worth using lifecycle nodes. Certain things, though, where component nodes might be better suited should take priority for the time being. Currently the launch system and components don't play well with lifecycle nodes so it's a "one or the other" for the time being. Hopefully that will change soon. For things like image_pipeline, for instance, we want to use components for the 0-copy low latency aspects of loading multiple nodes into the same process. This is especially important for image and pointcloud processing pipelines because we have large data flying around whose transport would be non-trivial. But if components aren't as critical, like when passing around typical or smaller messages, then I would recommend lifecycle so that you can control the bringup and shutdown of your system in ways you never could in ROS1. Lifecycle will eventually get full support because it kind of has to, and all the major projects like Navigation2 and Moveit2 either have all of their servers as lifecycle nodes now or have it on their roadmap to complete very shortly. I would not recommend using the standard node in any situation. There's so much benefit to be gained from either a component or lifecycle node that you should make use of at least one or the other since you have access to them.
Originally posted by stevemacenski with karma: 8272 on 2020-09-02 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Myzhar on 2020-09-03: I explain my situation. I'm updating the ROS2 wrapper for the Stereolabs ZED cameras. I started it with Bouncy and we decided to use the lifecycle model for all the reasons you cited. At that time TF broadcaster support was missing, and Image Transport too, two fundamental features for the wrapper of a sensor like the ZED. So we decided to wait before improving the wrapper. Now we are back to developing, hoping that something has changed, but the situation is still the same with both Eloquent and Foxy. That's why I suspect that lifecycle support will always lag behind standard components. Comment by Myzhar on 2020-09-03: To solve this problem I'm using composition: I have two Node components, one that subscribes to image and camera_info topics and republishes them using image transport, and another that subscribes to odom and pose topics and publishes the corresponding TFs. But you can understand that this is awful.
{ "domain": "robotics.stackexchange", "id": 35493, "tags": "ros, ros2, rclcpp" }
Different kinds of the same isotope
Question: I apologize if this is an obvious question, but I can't find the answer anywhere. On this page: http://ie.lbl.gov/education/parent/U_iso.htm the isotopes of Uranium are listed. Some of them, for example U238, are listed many times, with "m1" or "m2" attached to the mass number. What does this mean? Answer: These "metastable states" are excited states of the nucleus that have a non-trivial lifetime (most nuclear excited states decay very quickly).
{ "domain": "physics.stackexchange", "id": 11390, "tags": "nuclear-physics, isotopes" }
Record Cataloging Program
Question: I made a simple program to catalog some old records I have. It seems a tad redundant in the searching function. Does anyone know what I can do about that? import easygui as eg import sys namedoc = open(r"C:\Users\User\Desktop\RcrdCat\names.txt", 'a') nd2 = open(r"C:\Users\User\Desktop\RcrdCat\names(2).txt", 'a') authdoc = open(r"C:\Users\User\Desktop\RcrdCat\authors.txt", 'a') yeardoc = open(r"C:\Users\User\Desktop\RcrdCat\dates.txt", 'a') pubdoc = open(r"C:\Users\User\Desktop\RcrdCat\pubs.txt", 'a') rpmdoc = open(r"C:\Users\User\Desktop\RcrdCat\rpms.txt", 'a') conddoc = open(r"C:\Users\User\Desktop\RcrdCat\conditions.txt", 'a') sleevedoc = open(r"C:\Users\User\Desktop\RcrdCat\sleeves.txt", 'a') doclist = [namedoc, yeardoc, pubdoc, rpmdoc, conddoc, sleevedoc] def getlen(doc): templist = doc.readlines() listlen = len(templist) templist = [] return listlen def mainmenus(): mm = eg.buttonbox("What would you like to do?", "Categorizer", ["Add Records", "Search Records", "Exit"]) if mm == "Exit": exitprgm() elif mm == "Add Records": addrecs() elif mm == "Search Records": searchrecs(getsearchterms()) def addrecs(): info = eg.multenterbox("Please enter all record info", "Add Records", ["Name", "Year", "Publisher", "RPM", "Condition", "Sleeve", "Name (2)", "Artist"]) if info == None: mainmenus() else: namedoc.write(info[0] + " \r\n") yeardoc.write(info[1] + " \r\n") pubdoc.write(info[2] + " \r\n") rpmdoc.write(info[3] + " \r\n") conddoc.write(info[4] + " \r\n") nd2.write(info[6] + " \r\n") authdoc.write(info[7] + " \r\n") if info[5] == "Yes" or info[5] == "No": sleevedoc.write(info[5] + " \r\n") else: eg.msgbox("Please enter \"Yes\" or \"No\"") addrecs() addrecs() def getsearchterms(): term = eg.enterbox("Please enter your term in the following way: the word \"name\", \"year\", \"pub\", \"rpm\", \"cond\", or \"sleeve\", then a space, then the corresponding value") try: term = term.split() return term except AttributeError: mainmenus() def searchrecs(term): hits = 
[] if term[0] == "name": myrange = getlen(namedoc) for number in range(getlen(namedoc)): locstring = namedoc.readlines(number) if term[1] in locstring == True: hits.append(number) else: pass elif term[0] == "year": for number in range(getlen(yeardoc)): locstring = yeardoc.readlines(number) if term[1] in locstring == True: hits.append(number) else: pass elif term[0] == "pub": for number in range(getlen(pubdoc)): locstring = pubdoc.readlines(number) if term[1] in locstring == True: hits.append(number) else: pass elif term[0] == "rpm": for number in range(getlen(rpmdoc)): locstring = rpmdoc.readlines(number) if term[1] in locstring == True: hits.append(number) else: pass elif term[0] == "cond": for number in range(getlen(conddoc)): locstring = conddoc.readlines(number) if term[1] in locstring == True: hits.append(number) else: pass elif term[0] == "sleeve": for number in range(getlen(sleevedoc)): locstring = sleevedoc.readlines(number) if term[1] in locstring == True: hits.append(number) else: pass else: eg.msgbox("Please enter valid search criteria") hitnums = len(hits) allinfo = [] eg.msgbox("Found " + str(hitnums) + " hits. Click OK to view.") for number in hits: for doc in doclist: allinfo.append(doc.readlines(number)) mation = ["Name: " + allinfo[0], "Year: " + allinfo[1], "Publisher: " + allinfo[2], "RPM: " + allinfo[3],"Condition: " + allinfo[4],"Sleeve: " + allinfo[5]] ftext = str() for num in range(len(mation)): ftext = ftext + num + " " eg.textbox("", "Results", ftext) def exitprgm(): for file in doclist: file.write("\r\n") file.close() sys.exit() mainmenus() Answer: So I've got a little disclaimer. I'm not at a pc with python on it. 
A quick look would suggest an approach like: def searchrecs(term): hits = [] searchterms = { "name": namedoc, "year": yeardoc, "pub": pubdoc, } if term[0] in searchterms: searchdoc = searchterms[term[0]] for number in range(getlen(searchdoc)): locstring = searchdoc.readlines(number) if term[1] in locstring: hits.append(number) else: eg.msgbox("Please enter valid search criteria") Forgive any c# syntax in this. I will fix any errors as I find them.
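The same table-driven idea can be sketched with in-memory lists instead of open file handles, so it runs standalone (the record data here is made up for illustration):

```python
# The field name indexes a dict of columns, which removes the repeated
# elif branches entirely: one search loop serves every field.

records = {
    "name": ["Abbey Road", "Thriller"],
    "year": ["1969", "1982"],
    "rpm":  ["33", "33"],
}

def search(field, value):
    lines = records.get(field)
    if lines is None:
        raise ValueError("Please enter valid search criteria")
    # return the indices of every record whose field contains the value
    return [i for i, line in enumerate(lines) if value in line]

hits = search("year", "1982")
```

Adding a new searchable field then means adding one dict entry, with no new branch in the search code.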
{ "domain": "codereview.stackexchange", "id": 5783, "tags": "python, python-3.x" }
A problem related to work done by falling bodies: expert's attention much needed!
Question: I'm having a lot of trouble with this question, that I've found in my textbook. I've solved it in my own way and it's very simple! But the solution in the book is totally different. It doesn't make any sense to me and the answers don't match either!! I think my method & answer are correct, but I'm not getting any strong support or proof, that's why I came here and posted... So, the problem goes as follows - A hammerhead weighing about 1 kg hits on a nail at 0.8 m/s. the nail penetrates 0.02m deep in the ground. Calculate the average Resisting Force of the ground. So I did it in this way- $$ \frac {mv^2}{2} = Fd .$$ here d = 0.02m (As we know, Work Done = total energy, [important] the whole potential energy was transferred into Kinetic energy just before the nail was hit, & the energy was 0 when the nail stopped) So, I got $F$ from here! But the method given in the book is like this : First they have calculated the height from where the hammerhead fell with the help of this law - $$ v^2 = 2gh $$ then they have said that, [important] as the hammerhead penetrated into the ground 0.02m, so it has traveled (h+0.02)m and we need to calculate the Potential Energy using this height. (Isn't this insane!!?) Then the book says : total potential energy $$ mg(h+0.02) = Fd, $$ & they have found the $F$ like this. So, is my method wrong!? What is wrong in there, because two answers don't match! Why didn't they use $W = mgh$, rather than $mg(h+0.02)$ ?? My logic - we use $W = mgh$ law as long as $g$ is constant, but in the book they used - $$ mg(h+0.02) = (mgh + mg*0.02). $$ [ Here's the thing, how come you to use $mg*0.02$ when the acceleration wasn't even $g$ when the nail traveled the 0.02m, since after hitting the nail the hammerhead will not travel the 0.02m with the acceleration $g$ ? It rather will experience some retardation, otherwise the hammerhead & the nail will keep goin' in forever! ] If my method is 100% correct then how do I prove them wrong? 
Please tell me some mathematical way, because my logic isn't enough for some of my friends! :-( thanks for reading this whole big question. any help will be appreciated greatly! and sorry for my poor english! Answer: You have assumed that the entire energy dissipated by friction is the KE of the hammer on impact. But the problem details that besides the energy on impact, the hammer gains energy by dropping further. It loses PE corresponding to dropping an additional $0.02\ \text{m}$. That energy has to go somewhere, and it goes into work done against friction. So you could revise your formula to be: $$\frac{mv^2}{2} + mg(0.02\ \text{m}) = Fd$$ ...after hitting the nail the hammerhead WILL NOT travel 0.02m at 'g'... That's not what the equation is saying. It's just using the formula for the difference in potential energy. $$\Delta PE = mg\Delta h$$ The $g$ there isn't describing the acceleration of the hammer during the impact. It's using the strength of the gravitational field to calculate the energy released by descending the additional $0.02\ \text{m}$ during the impact.
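Plugging in the numbers from the question makes the difference between the two methods concrete (m = 1 kg, v = 0.8 m/s, d = 0.02 m; g = 9.8 m/s² is assumed here, and the 25.8 N figure simply follows from these values):

```python
# The asker's formula uses only the kinetic energy at impact; the book's
# formula also counts the potential energy lost while the nail sinks.

m, v, d, g = 1.0, 0.8, 0.02, 9.8

kinetic = 0.5 * m * v ** 2             # 0.32 J at the moment of impact
f_question = kinetic / d               # 16.0 N  (KE only)
f_answer = (kinetic + m * g * d) / d   # 25.8 N (KE plus PE lost over 0.02 m)
```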
{ "domain": "physics.stackexchange", "id": 19385, "tags": "homework-and-exercises, newtonian-mechanics, forces, energy, work" }
Depth First Search vs Breadth First Search
Question: So after years of teaching myself, I think I'm ready to stand on the shoulders of giants and ask you a good question! I have a tree of nodes. What I want is a function that returns an array of the entire tree sorted by depth. In my original attempt I wasn't aware at the time that I was performing a depth first search. My solution was to: recursively walk the tree, annotating depth as I go along. sort the above based on the depth annotation. filter out the depth annotation and return the sorted array. That was three steps invoking 3 loops! So then someone alerted me to the concept of a breadth first search. I did my research and built (on my own) a BFS function! It looked so simple and did what I needed. Then when I timed both versions; completely bafflingly; the cumbersome DFS version is faster! Why??? Here is my depth-first-search: function dfsElementsInTree(input){ // perform depth first search but // return a depth sorted array of an element or elements and any children let output = []; if (Symbol.iterator in input) // input is a HTMLcollection for (const element of input) doTraversal(element); else doTraversal(input); return output.sort((a, b) => a.depth - b.depth).map(item=>item.element); function doTraversal(element, depth=0) { output.push({element, depth}); if (element.children.length) depth++; for (const child of element.children) doTraversal(child, depth); } } Here is my breadth-first-search: function bfsElementsInTree(input) { // perform a breadth first search in order to have elements ordered by depth. 
let output = []; let queue = []; let visited = []; const enqueue = item => {queue.push(item); visited.push(item);}; if (Symbol.iterator in input) // input is a HTMLcollection for (const element of input) queue.push(element); else queue.push(input); while (queue.length) { for (const element of queue) for (const child of element.children) if (!visited.includes(child)) enqueue(child); output.push(queue.shift()); } return output; } Ready for benchmarking here: https://jsben.ch/ZNuAx But if you want to test it yourself, here's some code to generate some trees: // Create trees of divs as such: // (left to right) // 1 // 1 -> 2 // 1 -> 2 -> 3 // 1 -> 2 // 1 const a1 = document.createElement('div'); const a2 = document.createElement('div'); const a3 = document.createElement('div'); const a4 = document.createElement('div'); const a5 = document.createElement('div'); [a1,a2,a3,a4,a5].forEach(e => e.className ='1'); const b2 = document.createElement('div'); const b3 = document.createElement('div'); const b4 = document.createElement('div'); [b2,b3,b4].forEach(e => e.className ='2'); const c3 = document.createElement('div'); c3.className = '3'; a2.appendChild(b2); b3.appendChild(c3); a3.appendChild(b3); a4.appendChild(b4); [a1,a2,a3,a4,a5].forEach(e => document.body.appendChild(e)); Thank you so much for your time. It's a real treat to have an expert looking over my shoulder! Answer: I solved it. It turns out my looping logic was a bit out. No need for both a while and a for! (and while we're at it, we don't have to check for visited. Thanks @vnp in comments) edit: since I'm not actually using a queue, I missed that I don't now need to maintain two arrays! Thanks @Ry in the comments!) Wrote it again for speed and here it is: function bfsElementsInTree(input) { // perform a breadth first search in order to have elements ordered by depth. 
(Deepest last) let output = []; if (Symbol.iterator in input) // input is a HTMLcollection for (let i = 0, max = input.length; i < max; i++) output[i] = input[i]; else output.push(input); for (let i = 0; i < output.length; i++) { const children = output[i].children; for (let j = 0, max = children.length; j < max; j++) output.push(children[j]); } return output; } And new benchmark: https://jsben.ch/F1zzW
{ "domain": "codereview.stackexchange", "id": 38081, "tags": "javascript, algorithm, tree, breadth-first-search, depth-first-search" }
In Python, why subsetting with or without square bracket is different?
Question: Suppose I have a data frame called quotesDF quotesDF volume shares 2017-01-03 2934300 100 2017-01-04 3381400 120 2017-01-05 2682300 140 2017-01-06 2945500 160 2017-01-09 3189900 180 2017-01-10 4118700 200 If I do, > quotesDF.loc[1, 'shares'] 120 > quotesDF.loc[1, ['shares']] shares 120 Name: 1, dtype: object Why does the first one return 120, while the second one returns shares 120? In my mind, they are the same thing, except I put the second one in a vector. But, the first one is a vector that stands by itself. It's just that I didn't put the square brackets on it. Why does Python give me such a confusing time? Answer: Assuming you have a pandas dataframe, .loc is strictly label based. Since you're using [] it accesses the column you're specifying inside the brackets and that is the reason you're getting shares 120. Read the documentation for a better explanation. Here is another link that has answers similar to your question.
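The distinction can be seen in a runnable sketch (the frame below is a two-row stand-in for the one in the question): a scalar column label returns the cell value itself, while a list of labels returns a pandas object, here a Series indexed by the requested columns.

```python
# .loc with a scalar label yields a scalar; .loc with a list of labels
# yields a Series, even if the list has only one entry.

import pandas as pd

df = pd.DataFrame({"volume": [2934300, 3381400], "shares": [100, 120]})

scalar = df.loc[1, "shares"]      # a plain value: 120
series = df.loc[1, ["shares"]]    # a one-entry Series labelled 'shares'
```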
{ "domain": "datascience.stackexchange", "id": 1969, "tags": "python, dataset" }
Find a specific file, or find all executable files within the system path
Question: I wrote a function to find a specific file, or all the executable files with a given name and access flags, and I want to make sure it is as cross-platform as possible. Currently, I'm using the PATHEXT environment variable to get the extensions for files that the operating system considers executable.

import os

class CytherError(Exception):
    def __init__(self, message):
        Exception.__init__(self, message)
        self.message = 'CytherError: {}'.format(repr(message))

def where(name, flags=os.F_OK):
    result = []
    extensions = os.environ.get('PATHEXT', '').split(os.pathsep)
    if not extensions:
        raise CytherError("The 'PATHEXT' environment variable doesn't exist")
    paths = os.environ.get('PATH', '').split(os.pathsep)
    if not paths:
        raise CytherError("The 'PATH' environment variable doesn't exist")
    for path in paths:
        path = os.path.join(path, name)
        if os.access(path, flags):
            result.append(os.path.normpath(path))
        for ext in extensions:
            whole = path + ext
            if os.access(whole, flags):
                result.append(os.path.normpath(whole))
    return result

Usage:

In[-]: where('python')
Out[-]: "C:\Python35\python.exe"

Please be brutal in pointing out anything you find, except for: commenting, docstrings, and PEP 8 spacing. I will have these things covered later. Answer: I could have just used the function which(), offered by shutil. Documented here. Example:

>>> from shutil import which
>>> which('python')
'C:\Python35\python.exe'
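As the answer notes, the standard library already covers this. shutil.which also exposes the access mode and a path override, which reproduce most of the hand-rolled behaviour above, including trying the PATHEXT extensions on Windows (a sketch; the exact string returned depends on your system):

```python
import os
import shutil

# Default: search os.environ["PATH"] for a file that exists and is
# executable (mode defaults to os.F_OK | os.X_OK); on Windows, the
# PATHEXT extensions are tried automatically.
print(shutil.which("python"))

# The access mode and search path can be overridden explicitly:
print(shutil.which("python", mode=os.F_OK | os.X_OK,
                   path=os.environ.get("PATH", "")))

# Unlike the hand-rolled version, it returns None instead of raising
# when nothing matches:
missing = shutil.which("definitely-not-a-real-program-xyz")
print(missing)
```

One behavioural difference worth noting: where() collects every match on the PATH, while shutil.which returns only the first one, which is what the OS itself would execute.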
{ "domain": "codereview.stackexchange", "id": 19233, "tags": "python, python-3.x, file, file-system" }
Pair annihilation: why does it occur?
Question: Why does pair annihilation occur with particles and only their matching anti-particle? E.g., electrons and positrons, but not protons and positrons? What is the difference? Answer: We (physicists) believe the reason is this: known symmetries and conservation laws. For example, the mutual annihilation of a proton and positron would remove $2\,e$ charge units from the Universe. This violates the conservation of charge principle, which can be seen to arise from the application of Noether's theorem to the global gauge symmetry of the classical electrodynamic Lagrangian, as discussed in QMechanic's answer to the question "Noether theorem and classical proof of electric charge conservation". Other similar conservation laws are conservation of color charges in QCD, conservation of lepton number, baryon number (quark number) and so forth. Your proposed reaction would also violate conservation of lepton number. But ultimately the answer is simply experimental evidence, i.e. mutual annihilation simply doesn't happen experimentally unless between antiparticles. The failure to observe certain reactions is what led theoretical physicists to define Lagrangians with various continuous symmetries (and thus, through Noether's theorem, Noether charge (i.e. quantum number) conservations) as well as the notion of anti-particle itself in the first place as a way to organize our knowledge of experimental particle interaction observations. The theoretical reasons above are simply sufficient conditions that would explain our observations, but they are not necessary conditions. It's simply that, so far, our theories agree with experiment and we therefore have no reason to believe they are wrong.
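The bookkeeping in this answer is mechanical enough to script: sum each conserved additive quantum number over the initial state and require that a pure-photon final state carry zero of everything. This is only an illustration of the selection rule, not a dynamical calculation (the particle table is hard-coded here):

```python
# Conserved additive quantum numbers per particle:
# (electric charge in units of e, lepton number, baryon number).
# Photons carry zero of all three.
PARTICLES = {
    "electron":   (-1, +1,  0),
    "positron":   (+1, -1,  0),
    "proton":     (+1,  0, +1),
    "antiproton": (-1,  0, -1),
}

def can_annihilate_to_photons(*names):
    """A pair may annihilate to photons only if every additive
    quantum number of the initial state sums to zero."""
    totals = [sum(PARTICLES[n][i] for n in names) for i in range(3)]
    return all(t == 0 for t in totals)

print(can_annihilate_to_photons("electron", "positron"))  # allowed
print(can_annihilate_to_photons("proton", "antiproton"))  # allowed
# Forbidden: charge would be +2e, lepton number -1, baryon number +1
print(can_annihilate_to_photons("proton", "positron"))
```

As the answer stresses, these sums are bookkeeping for the observed selection rules; the conservation laws themselves are grounded in experiment.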
{ "domain": "physics.stackexchange", "id": 21102, "tags": "particle-physics, conservation-laws, antimatter" }
API using NodeJS to receive data from a client and send it to a server
Question: Hello, I have a NodeJS API which acts like an interface: it connects the client (a mobile app) to the actual server. My NodeJS API receives data from the client to verify it and sends it to the server. I want to know: does this code look good? Is this how an API should be written in NodeJS? Does this code violate any of the SOLID principles? How can I improve this code? const axios = require("axios") require("express-async-errors") const bodyParser = require("body-parser") const express = require("express") const LOLTrackingSystem = require("../methods/onlineGamesTracking/LOLTracking") const getUserData = require("../methods/leagueOfLegends/getUserData") //======[Gizmo Operator Settings]======= // userId = token //========[Gizmo API URL]========== apiRoute = (api) => { api.use(bodyParser.json()) // api.use(bodyParser.urlencoded({ extended: false })); // api.use(express.json()); api.post("/api/auth", (req, res) => { const clientAuth = { username: req.body.username, password: req.body.password } const loginValidationURL = `http://${process.env.HOST}/api/users/${clientAuth.username}/${clientAuth.password}/valid` axios .get(loginValidationURL, { headers: { Authorization: process.env.AUTH } }) .then((response) => { let token = response.data.result.identity.userId response.data.result.result == 0 ? res.json({ status: "Authenticated", result: 1, token: hash }) : res.json({ status: "Wrong Username or Password", result: 0 }) }) .catch((error) => { console.log("error " + error) res.json({ status: `Couldn't reach Arena Gaming Server.
Try again later`, result: 404 }) }) }) api.post("/api/memberProfile", (req, res) => { var token = req.headers["authorization"] // bcrypt.compare(token, hash).then(function(res) { // // res == true // }); const memberProfileURL = `http://${process.env.HOST}/api/users/${encryptedId}` axios .get(memberProfileURL, { headers: { Authorization: process.env.AUTH } }) .then((response) => { response.data }) .catch((error) => { console.log("error " + error) res.json({ status: `Couldn't reach Arena Gaming Server. Try again later`, result: 404 }) }) }) //============================================================================= // Gizmo Related API //============================================================================= // MemberState Get The UserId & HostId when member login or logout of Gizmo api.post("/api/gizmo/memberState/:userId/:host/:state", async (req, res) => { const member = { state: req.params.state, userId: req.params.userId, hostComputer: req.params.host } if (Number(member.state)) { console.log( "[Logged In] Member with userId: " + member.userId + " To HostId: " + member.hostComputer + "[Sent From GizmoServer]" ) await getUserData(member.userId, async (subscribed) => { // Check if the user is subscribed to the tracking system or not //*TODO* Add a list of currently tracking members that we can see if (subscribed) { LOLTrackingSystem(member.userId) // Start tracking that member res.json("[Started] Member " + member.userId + " Tracking") } else { res.json( "[Started] Member " + member.userId + " is not subscribed to the tracking system" ) } }) } else { console.log( "[Logged Out] Member with userId: " + member.userId + " From HostId: " + member.hostComputer + "[Sent From GizmoServer]" ) } }) } module.exports = apiRoute Thanks Answer: the code has plenty of room for improvement, and yes it violates the solid principals like the Single responsibility principle "your functions are way too big and the module in itself has too many responsibilities ", to 
clarify, there are no rules on how an API should be written in NodeJS; there are code practices and design patterns that help us developers write more maintainable code, so don't try to follow every single rule or you will get stuck in optimizing, and it really depends on your project's needs. Now for the improvement, here are some suggestions you can work on:

separate your endpoints in different files, take a look at my answer here, you can use the express router
move your constant variables like "loginValidationURL" into a different file like config.js, then import and use it
don't overuse URL params; use post requests to get objects to your server
Build the application with a framework-independent mindset
Make your functions lighter, separate them by roles

Like this: take this example on your "/api/auth" endpoint, it will explain many things // Your implementation api.post("/api/auth", (req, res) => { const clientAuth = { username: req.body.username, password: req.body.password }; const loginValidationURL = `http://${process.env.HOST}/api/users/${clientAuth.username}/${clientAuth.password}/valid`; axios .get(loginValidationURL, { headers: { Authorization: process.env.AUTH } }) .then(response => { let token = response.data.result.identity.userId; response.data.result.result == 0 ? res.json({ status: "Authenticated", result: 1, token: hash }) : res.json({ status: "Wrong Username or Password", result: 0 }); }) .catch(error => { console.log("error " + error); res.json({ status: `Couldn't reach Arena Gaming Server.
Try again later`, result: 404 }); }); }); /* Api Handler * file path : routes/api/client.js * in this file I only write the client api endpoints /client/auth /client/logout /client/:id the basic crud * I have separated the main job into a controller module; like that the api is much cleaner */ const express = require("express"); const router = express.Router(); const { authClient } = require("../controller/clientcontroller"); router.post("/client/auth", (req, res) => { authClient(req, res) }); /* controller * file path : routes/controller/clientcontroller.js * in this file I only write the client controllers for each api call */ const { extractClientAuth } = require("./api/client/reqHandler"); // this module will be responsible for extracting and validating req objects const { loginValidationUrlBuilder } = require("../../config"); // this module is responsible for all your static and config objects like axios custom headers const { apiClient } = require("./services/externalJobs"); // this is for external jobs and api calls const { responseBuilder } = require("./api/tools"); // this module is for creating response objects function authClient(req, res) { const clientAuth = extractClientAuth(req); const loginValidationURL = loginValidationUrlBuilder(clientAuth); apiClient .login(loginValidationURL) .then(response => res.json(responseBuilder.authSuccess(response))) .catch(error => { res.json(responseBuilder.authFailed(error)); }); } This is by no means the best way, it's just one way to structure your project, so take from it what you like. Another thing: I can recommend this video on the YouTube channel Dev Mastry; he does an excellent job of explaining how to create a scalable node application using express and MongoDB https://www.youtube.com/watch?v=fy6-LSE_zjI
{ "domain": "codereview.stackexchange", "id": 37075, "tags": "javascript, node.js, express.js" }
Find the uncertainty in position
Question: The question says: A proton is accelerated to one tenth of the velocity of light. If its velocity can be measured with a precision $\pm 1\%$. What must be its uncertainty in position? Therefore, $v=0.1 \times c =3\cdot 10^7\ \pu{m/s}\\ \Delta(v)=\frac{1}{100}\\ m= 1.6\cdot10^{-27}\ \pu{kg}$ Then I directly substituted these values into the formula: $$\Delta(v)\cdot\Delta(\text{position})\cdot \text{mass}=\frac h{4\pi}$$ To get the uncertainty in position, however, the answer I got was approx. $3.5 \cdot 10^{-6}\ \pu{m}$ which is way too different from the correct answer: $0.5\cdot10^{-13}\ \pu{m}.$ Can anyone please explain to me how to solve this question? Answer: The question tells you \begin{aligned} &&v&=3\cdot10^7\:\mathrm{m/s}\pm1\%\\ &&\Delta v &= |(v+1\%v) - (v-1\%v)|\\ \implies&&\Delta v &= \frac{2}{100}v\\ \therefore&&\Delta v&= 2\cdot3\cdot10^5\:\mathrm{m/s} \end{aligned} Hence \begin{aligned} &&\Delta x\cdot\Delta p &\geq \frac{\hbar}{2}\\ \implies&&\Delta x\cdot\Delta (v\cdot m) &\geq \frac{\hbar}{2}\\ \implies&&\Delta x&\geq \frac{\hbar}{2\cdot\Delta v\cdot m }\\ \therefore&&\Delta x &\gtrapprox\frac{1.05\cdot10^{-34}\:\mathrm{J\cdot s}}{2\cdot6\cdot10^5\:\mathrm{m/s}\cdot1.6\cdot10^{-27}\:\mathrm{kg}}\\ &&\Delta x&\gtrapprox5.5\cdot10^{-14}\:\mathrm{m}\\ \end{aligned}
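The arithmetic in the answer can be checked in a few lines. The key point is that a precision of $\pm 1\%$ gives a full spread of $\Delta v = 2\%$ of $v$ (constants rounded as in the answer):

```python
hbar = 1.05e-34   # reduced Planck constant, J*s (rounded)
m = 1.6e-27       # proton mass, kg (rounded as in the answer)
v = 0.1 * 3.0e8   # one tenth of the speed of light, m/s

dv = 0.02 * v     # +/-1% precision -> total spread of 2% = 6e5 m/s

# Heisenberg: dx * (m * dv) >= hbar / 2
dx = hbar / (2 * dv * m)
print(f"{dx:.2e} m")  # about 5.5e-14 m, matching the answer
```

Note the asker's $3.5\cdot10^{-6}$ m cannot come from these numbers at all; with $h/4\pi$ and $\Delta v = 3\cdot10^5$ m/s one still gets on the order of $10^{-13}$ m, so the original error was likely a units slip rather than the $h/4\pi$ vs $\hbar/2$ convention (both are the same quantity).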
{ "domain": "chemistry.stackexchange", "id": 1365, "tags": "physical-chemistry, quantum-chemistry" }
How to solve system's general stability from transfer function?
Question: I have homework which I should solve by myself. My problem is that the questions seem really simple; or should I think outside of the box, with a Bode diagram, Nyquist plot, etc.? And are my answers correct? Thanks. Question-1 $$G(s) = K\dfrac{As+1}{Bs+1}$$ For which values $K, A$ and $B$ is the system always stable? Should I look directly at the pole of the system? $Bs+1=0$ $s=-1/B \implies \text{So, must } B>0$ Is it enough? Or, anything else? What about K, A? Question-2 $$G(s) = K\dfrac{As+1}{(Bs+1)(Cs+1)}$$ For which values $K, A, B$ and $C$ is the system always stable? Should I look directly at the poles of the system or anything else? $$Bs+1=0 \wedge Cs+1=0$$ $$s=-1/B \wedge s=-1/C$$ $$\implies B>0 \wedge C>0$$ Is it enough? Or, anything else? What about $K$ and $A$? Answer: I would like to extend the already given answer by MrYouMath. So question 1 is pretty straightforward and you already got it right. If there's no right half plane (RHP) pole then it doesn't matter what gain you choose. Even for $A = B$, $G(s) = K$ yields a finite response. For Question 2 have a look at the Routh Hurwitz Array \begin{array} {|r|r|} \hline s^2 & B \cdot C & 1 \\ \hline s^1 & B+C & 0\\ \hline s^0 & 1 & \\ \hline \end{array} In order for the system to be stable there must not be any sign changes in the first column, hence $$BC > 0 \quad \land \quad B+C > 0$$ From $BC > 0$ we derive that B and C must have the same sign. $B+C > 0$ yields that the sign has to be positive. As you see neither $A$ nor $K$ are involved in that. If you want to explore other methods like root locus, bode, ... Keep in mind that you have variables ($A$,$B$,$C$) in there. I know that you can see the gain margins for root loci in Python, Matlab, etc. but I think that's it. I don't think (but I stand to be corrected) that you can derive the values for $A$,$B$,$C$ that way. I think with Bode plots this may work, however as you've seen it's much easier to solve with Hurwitz or by just looking at the poles.
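The two Routh-Hurwitz conditions can be cross-checked against the actual pole locations, since the denominator $BCs^2+(B+C)s+1$ factors exactly into $s=-1/B$ and $s=-1/C$. A small sketch ($K$ and $A$ are omitted because, as the answer notes, they don't affect stability):

```python
def routh_stable(B, C):
    # First-column Routh-Hurwitz conditions for B*C*s^2 + (B+C)*s + 1:
    # no sign change requires BC > 0 and B + C > 0
    return B * C > 0 and B + C > 0

def poles_stable(B, C):
    # Poles of 1/((B*s+1)*(C*s+1)) are s = -1/B and s = -1/C;
    # stability means both lie strictly in the left half plane
    return (-1.0 / B) < 0 and (-1.0 / C) < 0

# The two criteria agree on every sign combination of B and C
for B, C in [(1, 2), (0.5, 3), (-1, 2), (2, -0.5), (-1, -2)]:
    assert routh_stable(B, C) == poles_stable(B, C)
    print(B, C, routh_stable(B, C))
```

The case $B<0,\ C<0$ shows why both conditions are needed: $BC>0$ holds there, but $B+C>0$ fails, matching the two RHP poles.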
{ "domain": "engineering.stackexchange", "id": 1740, "tags": "control-engineering, transfer-function, stability" }
K-fold cross-validation: how to calculate regular parameters/hyper-parameters of the algorithms
Question: K-fold cross-validation divides the data into k bins and each time uses k-1 bins for training and 1 bin for testing. The performance is measured as the average across all the K runs, err ← err + (y[i] − y_out)^2, as demonstrated in Wikipedia and the literature:

err ← 0
for i ← 1, ..., N do
// define the cross-validation subsets
x_in ← (x[1], ..., x[i − 1], x[i + 1], ..., x[N])
y_in ← (y[1], ..., y[i − 1], y[i + 1], ..., y[N])
x_out ← x[i]
y_out ← interpolate(x_in, y_in, x_out)
err ← err + (y[i] − y_out)^2
end for
err ← err/N

But what about the parameters that are obtained from the training? Is it the average across all the training runs, or does it have to be picked from the best output in k-fold cross-validation? Do we need to run the same ML algorithm in k-fold cross-validation, or can each fold have a different algorithm? I think we need to run only one algorithm for the k-fold, and for each individual algorithm we need to run k-fold cross-validation. Answer: Cross-validation is a method to obtain a reliable estimation of the performance. The performance is obtained as the average across the CV "folds" because this way it doesn't depend on a single test set, i.e. the impact of chance is minimized. In the case of hyper-parameter selection, the goal is not only to evaluate but also to select the hyper-parameter values based on this evaluation. This turns the CV process into a training stage, because it is used to determine something about the model. When the goal is to select the best hyper-parameters among a set of possible assignments of their values, the method is run across all the CV "folds" for every possible assignment, and then the average performance is also obtained for every assignment. At the end of the CV process the assignment which corresponds to the maximum average performance is selected.
Now that the parameters are fixed, one still has to determine the true performance on a fresh test set because the high performance among the parameters assignments could be due to chance. This is why a model is trained again with these parameters (usually using the whole training data), then applied to a fresh test set to obtain the final performance. Notice that everything in CV is done the same way across the "folds": the same method(s) are run for every fold, and the results are always obtained across all "folds". In particular one should never select the best model or parameters by picking the maximum "fold".
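The procedure described above — one full CV run per candidate value, average across all folds, pick the best average, then retrain on all the training data — can be sketched without any ML library. Here the "model" is just a constant predictor shrunk toward zero, and the hyper-parameter is the shrinkage factor (all names and data are made up for illustration):

```python
import statistics

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    fold_size = n // k
    for f in range(k):
        hi = (f + 1) * fold_size if f < k - 1 else n
        test = list(range(f * fold_size, hi))
        train = [i for i in range(n) if i not in test]
        yield train, test

def fit(ys, shrink):
    # "Training": shrink the training mean toward zero by the hyper-parameter
    return shrink * statistics.mean(ys)

def mse(pred, ys):
    return statistics.mean((y - pred) ** 2 for y in ys)

y = [2.1, 1.9, 2.0, 2.2, 1.8, 2.05, 1.95, 2.1]
candidates = [0.0, 0.5, 1.0]

scores = {}
for s in candidates:
    fold_errors = []
    for train, test in k_fold_indices(len(y), k=4):
        model = fit([y[i] for i in train], s)        # same algorithm every fold
        fold_errors.append(mse(model, [y[i] for i in test]))
    scores[s] = statistics.mean(fold_errors)         # average across ALL folds

best = min(scores, key=scores.get)   # best AVERAGE, never the best single fold
final_model = fit(y, best)           # retrain on the whole training data
print(best, final_model)
```

Note the two points the answer insists on: the score per candidate is the mean over every fold (never the maximum fold), and the finally reported performance should come from a fresh test set, not from these CV scores.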
{ "domain": "datascience.stackexchange", "id": 9875, "tags": "machine-learning" }
In golf, is there a rotational mechanical advantage of using a thicker grip?
Question: I was asked to migrate this question to the physics exchange. Ok, so many professionals are now using the new SuperStroke golf grips. I am basically thinking about the concept of a bigger grip, but for all golf clubs. For the irons, not necessarily as big as a SuperStroke grip, but bigger than the standard. So first I'm reminded of my old physics classes. If you look at the gears of a clock, or any kind of mechanical gear action where a pipe/shaft is attached to a large gear, and the other end of the pipe/shaft is attached to a smaller gear, one inch of turn circumference on the larger gear effects the smaller gear based on how big the larger gear is. If the gear is huge, one inch of turn circumference will move the smaller gear less than if the larger gear happened to be very tiny. So in putting or hitting, if the grip has a larger circumference, would that create more room for error in the rotation of the hands, thereby more room for error in the rotation of the club head? In other words, by having a larger grip, would there be any kind of mechanical advantage in terms of club head rotation relative to hand rotation? If I have a tendency to rotate my hands within a range of +/- 1 inch of circumference, would the club head rotate within a smaller range if the grip was bigger? Answer: When you have a wide circumference grip, a small displacement on the surface will result in a smaller rotation of the club. This can be expressed mathematically: for a displacement $d$ along the surface of a cylinder with radius $r$, the angle $\theta$ is given by $$\theta = \frac{d}{r}$$ As you can see, the larger $r$, the smaller $\theta$ for the same value of $d$. On the other hand, if your hands rotate through a given angle, then giving yourself a bigger grip makes it easier to rotate the club (you have bigger torque). Imagine trying to spin a wheel: it's easier to do so by gripping the wheel near the outside than near the axle. 
So if you believe your hands move by a constant distance, then the wide grip will help; if you think they move by a constant angle, then you will more easily transmit that change in angle to the club. What happens in the actual club is more a question of biomechanics than physics - but those are the underlying physical principles.
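Both pictures in the answer are one-liners to check numerically. A sketch (the grip radii in metres are made-up values, not real club dimensions):

```python
def rotation_from_slip(d, r):
    """Angle (radians) of club rotation for a hand displacement d
    along the surface of a grip of radius r: theta = d / r."""
    return d / r

def torque(force, r):
    """Torque (N*m) from a tangential force applied at radius r."""
    return force * r

slip = 0.0254               # 1 inch of hand movement, in metres
thin, thick = 0.011, 0.016  # hypothetical grip radii, metres

# Constant-distance picture: the thicker grip rotates the club LESS
print(rotation_from_slip(slip, thin), rotation_from_slip(slip, thick))

# Constant-force picture: the thicker grip transmits MORE torque
print(torque(10.0, thin), torque(10.0, thick))
```

So the same geometry change helps or hurts depending on whether the hands err by a fixed distance or by a fixed angle, which is exactly the biomechanical question the answer leaves open.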
{ "domain": "physics.stackexchange", "id": 26979, "tags": "classical-mechanics, rotational-dynamics, rotation, rotational-kinematics" }
math tool to compute time of sunrise and sunset
Question: What is the mathematical tool that enables one to compute the time of sunrise and the time of sunset at a specific geographical location in the same part of a year, say a given month and day exactly? I have noticed that the time of sunrise and the time of sunset is the same every year on a given month and day, say February 4th for example (at a specific geographical location of course). Thank you. Answer: The rise and set equations below are from Astronomical Algorithms, and the approximate Sun position is from the Astronomical Almanac. The rise/set algorithm works for any celestial object given its Right Ascension and Declination. So, for the Sun, you must first compute the RA/Dec of the Sun and use that in the rise/set algorithm. The $h_0$ parameter accounts for the fact that Sun rise/set is defined by the upper limb of the Sun, and a standard atmospheric refraction model. An implementation of the below algorithm, including Julian Date, Sun position, and rise and set is implemented here: Rise and Set Algorithm. Both algorithms require the Julian Date rather than a Gregorian Calendar Date. Since almost all programming languages can supply the date as the number of seconds or milliseconds elapsed since Jan 1 1970 (Unix Time), the routines below convert between Unix Time and Julian Date. The implementation is in JavaScript but will be trivial to convert to any other language. Since JavaScript returns Unix Time in milliseconds, these functions use that; if your language uses seconds instead, replace 86400000 below with 86400. Other methods of computing the Julian Date are here: Julian Date Algorithms.
function JulianDateFromUnixTime(t){ //Not valid for dates before Oct 15, 1582 return (t / 86400000) + 2440587.5; } function UnixTimeFromJulianDate(jd){ //Not valid for dates before Oct 15, 1582 return (jd-2440587.5)*86400000; } Now use $jd$ to compute the RA/Dec of the Sun: This algorithm for the Sun position is accurate to 1 degree between 1950 and 2050 according to the Astronomical Almanac. If a higher precision is necessary (and it usually isn't), a method like VSOP87 can be used. $$ \begin{align*} n&=jd-2451545.0 \\ L&=280.460+0.9856474n \\ g&=357.528+0.9856003n \\ \lambda &=L+1.915 \sin g+0.020 \sin 2g \\ \beta&=0.0 \\ \epsilon &=23.439-0.0000004n \\ \cos\lambda \tan \alpha &=\cos\epsilon \sin\lambda \\ \sin \delta &=\sin\epsilon \sin \lambda \end{align*} $$ Where $\lambda$ is the ecliptic longitude, $\beta$ is the ecliptic latitude (always 0), $\alpha$ is the Right Ascension, $\delta$ is the Declination. And $ jd $ is the Julian Date. Now use $\alpha$ and $\delta$ to compute the rise and set times: The algorithm below is from Meeus' Astronomical Algorithms, that book considers longitudes to be negative to the East, which is opposite of modern definitions, so set $L = -L$ for most applications. $$ \cos H_0 = \dfrac{\sin h_0 - \sin \phi \sin \delta }{\cos \phi \cos \delta} \ $$ If $\cos H_0$ < -1 or > 1, the point is either always above or below the horizon. $T=(jd-2451545.0)/36525.0 $ $\Theta_0 = 280.46061837+360.98564736629*(jd-2451545.0)+0.000387933T^2 - T^3/38710000.0$ $$ \begin{cases} transit & \dfrac{\alpha + L - \Theta_0 }{360^{\circ}} \\ \\ rise & transit - \dfrac{H_0}{360^{\circ}} \\ \\ set & transit + \dfrac{H_0}{360^{\circ}} \end{cases} $$ $jd$ is the Julian Date for the date in question. $\alpha$ Right Ascension. $\delta$ Declination. $L$ Longitude. $\phi$ Latitude. $h_0$ Apparent rise or set angle, -0.8333 for the Sun, +0.125 for the Moon, and -0.5667 for most other objects. $\Theta_0$ Greenwich sidereal time at 0h for the day in question.
The results $transit$, $rise$, $set$, are in fractions of a day, so multiply by 24 to get the time in hours. If any of them are greater than 1, or less than 0, add or subtract 1 to bring it in the range from 0 to 1. They will be in the UTC time zone, so conversion to the local time zone may be desired. Final: Since the Sun moves throughout the day, it may be desired to produce a more accurate result by recomputing everything again, but using the JD for the rise, set or transit time. In the Arctic Circle, it's possible $H_0$ will indicate no rise or set for the first iteration, but the Sun may move to a position where it does rise or set, so it may be desired to retry values throughout the day.
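Everything above fits in a short, self-contained sketch. This is a single-iteration version (low precision by design, ~1 degree in the Sun position, so rise/set can be off by a few minutes), with longitude following Meeus' west-positive convention as in the formulas:

```python
import math

def julian_date(unix_ms):
    # Julian Date from Unix time in milliseconds (valid after Oct 15, 1582)
    return unix_ms / 86400000.0 + 2440587.5

def sun_position(jd):
    """Low-precision solar RA/Dec in degrees (Astronomical Almanac method)."""
    n = jd - 2451545.0
    L = (280.460 + 0.9856474 * n) % 360.0
    g = math.radians((357.528 + 0.9856003 * n) % 360.0)
    lam = math.radians(L + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g))
    eps = math.radians(23.439 - 0.0000004 * n)
    ra = math.degrees(math.atan2(math.cos(eps) * math.sin(lam),
                                 math.cos(lam))) % 360.0
    dec = math.degrees(math.asin(math.sin(eps) * math.sin(lam)))
    return ra, dec

def rise_transit_set(jd, lat, lon_west, h0=-0.8333):
    """Rise, transit and set as fractions of a day (UTC).
    lon_west is positive to the WEST (Meeus convention).
    Returns None if the Sun never crosses h0 (polar day/night)."""
    ra, dec = sun_position(jd)
    phi, delta = math.radians(lat), math.radians(dec)
    cos_h0 = ((math.sin(math.radians(h0)) - math.sin(phi) * math.sin(delta))
              / (math.cos(phi) * math.cos(delta)))
    if cos_h0 < -1.0 or cos_h0 > 1.0:
        return None
    H0 = math.degrees(math.acos(cos_h0))
    T = (jd - 2451545.0) / 36525.0
    theta0 = (280.46061837 + 360.98564736629 * (jd - 2451545.0)
              + 0.000387933 * T**2 - T**3 / 38710000.0) % 360.0
    transit = ((ra + lon_west - theta0) / 360.0) % 1.0
    rise = (transit - H0 / 360.0) % 1.0
    sset = (transit + H0 / 360.0) % 1.0
    return rise, transit, sset

print(rise_transit_set(2451545.0, 40.0, 75.0))  # mid-latitude: normal day
print(rise_transit_set(2451545.0, 80.0, 0.0))   # 80N on Jan 1: polar night
```

For better accuracy, re-run `rise_transit_set` with the JD of the first-pass rise/set time, as the answer's final paragraph suggests.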
{ "domain": "astronomy.stackexchange", "id": 6779, "tags": "sunrise, sunset" }
Experimental evidence for non-abelian anyons?
Question: Since non-abelian anyons have become quite fashionable from the point of view of theory, I would like to know whether there has actually been experimental confirmation of such objects. If you could cite original literature, that would be great! Answer: As far as I know we do not yet have definitive verification of non-abelian statistics which would indicate the existence of non-abelian anyons. The latest results I know about are the ones mentioned by akhmeteli and An, et al., "Braiding of Abelian and Non-Abelian Anyons in the Fractional Quantum Hall Effect," Arxiv:1112.3400. However, I do not believe either is accepted as definitive proof. You are correct in that there are other candidate systems proposed, such as $p+ip$ superfluids, superconductors, and other systems. Some good references and also a review seem to be given in these review articles. As I understand it, the big push is to observe evidence of Majorana fermions. That we do have some pretty good evidence for. Pairs of Majorana fermions are supposed to realize non-abelian statistics, but this has not yet been implemented and observed (to my knowledge). As a final note, we do have evidence (see here and here) of abelian anyonic statistics, however I am under the impression that there may be some controversy about this.
{ "domain": "physics.stackexchange", "id": 11198, "tags": "quantum-field-theory, specific-reference, topological-order, topological-insulators, anyons" }
What is the idea behind canonical quantization?
Question: From what I understand, canonical quantization of a classical theory consists of replacing the observables by abstract operators, of which only the commutation rules, which have to correspond to the Poisson brackets, are given. I assume this ensures that in the macroscopic limit we recover classical mechanics (through the Ehrenfest theorem). From these abstract operators we can also recover the dynamics, the time evolution operator is $e^{iHt/\hbar}$, the uncertainty relation is $$\sigma_A\sigma_B ~\ge~ {1\over2}\left|\langle [A,B]\rangle\right|$$ and (in the Heisenberg picture) the time evolution of an observable is $$i\hbar\frac{dA}{dt} = [A,H]+i\hbar\frac{\partial A}{\partial t}.$$ Alternatively one can start by explicitly constructing the state space as a function space, the observables as operators on that space (by making some standard substitutions, taking care of Hermiticity, etc) and one observes that the same commutation relations hold. My question is if in the first approach, does one completely let go of the explicit description of Hilbert space as a function space and the operators as explicit operators on this space, and instead one works with "the" (abstract) Hilbert space and operators on it of which we only need to specify the commutation relations? Or is it really the same thing, and ultimately we always will need the explicit description to derive some of the properties of the system. I've been struggling to make my question clear, if it isn't, please let me know. Answer: Let's take the canonical commutation relations (CCR), in their exponentiated form (Weyl's relations): $$V(\eta)T(q)=e^{-i\eta\cdot q}T(q)V(\eta)\; ,$$ where $\{V(\eta)\}_{\eta\in \mathbb{R}^d}$ and $\{T(q)\}_{q\in \mathbb{R}^d}$ are objects of a given normed algebra with involution. This is a very general notion, that is nowadays taken as the definition of the CCR. 
If we take the exponentials of the position and momentum operators $V(\eta)=e^{i\eta\cdot x}$ and $T(q)=e^{i q\cdot (-i\nabla_x)}$ in $L^2(\mathbb{R}^d)$ we see that they satisfy the Weyl's relations, and they are objects of the $C^*$ algebra of bounded operators on that space. Now let's start with $W=\{V(\eta), T(q)\}_{\eta, q\in \mathbb{R}^d}$, and construct the $C^*$ algebra $V$ that contains $W$, i.e. $V=\overline{W}$ (the closure of $W$ in the given norm $\lVert \cdot\rVert$ of our objects). This is called the CCR $C^*$ algebra. So, as you can see, the starting point is very abstract, and is given by this CCR $C^*$ algebra. Now it is possible to show that each $C^*$ algebra has at least one faithful representation as a subalgebra of the bounded operators on some Hilbert space (the so-called GNS construction). Another remarkable result is the Stone-von Neumann theorem, which says that all the irreducible (i.e. such that the only subspace invariant under the action of the operators is the zero vector) representations of the CCR algebra are unitarily equivalent (i.e. related by a unitary transformation) and in turn equivalent to the representation given by the usual position and momentum operators on $L^2(\mathbb{R}^d)$ I gave above. Putting together the results, it is then evident that it is sufficient to give the CCR algebra, for it is always represented irreducibly (up to unitary isomorphisms) by the canonical position and momentum operators on $L^2$. Also, the concept of quantum states is directly related to the $C^*$ algebra of observables (it is a subset of its topological dual); and the normal states (a subset of the predual of the von Neumann algebra $V''$, and of the quantum states) are in one-to-one correspondence with the density matrices in the corresponding representation. Concerning evolution and classical limit (in relation to classical dynamics), these concepts are more easily understood using the point of view of semiclassical analysis, i.e.
studying the (Weyl, Wick, anti-Wick) quantization of classical symbols into pseudodifferential operators, and their semiclassical expansions. Nevertheless the quantum evolution may be seen as an automorphism of the $C^*$ algebra of observables (or of the quantum states) that satisfies some regularity assumptions. Remark: the Stone-von Neumann theorem is only valid for "finite dimensional" Weyl relations, i.e. if $\eta,q \in \mathbb{R}^d$ (the result can be extended by Mackey theory to any locally compact group). If e.g. we consider the analogous "infinite dimensional" CCR algebra generated by $$W(\psi)W(\phi)=e^{-i\mathrm{Im}\langle\psi,\phi\rangle}W(\psi+\phi)\; ,$$ where $\psi,\phi\in \mathscr{H}$, with $\mathscr{H}$ an infinite-dimensional Hilbert space, we have infinitely many unitarily inequivalent irreducible representations. In that situation (this is the case of the bosonic quantum field CCR), we have other representations that are inequivalent to the Schrödinger (or the Fock) one; and thus it becomes really crucial to see the quantum theory as the theory generated by the algebra of (noncommutative) observables.
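A finite-dimensional toy version makes the exponentiated relations concrete: the d-dimensional "clock" and "shift" matrices satisfy a discrete analogue of the Weyl relation, $VT=\omega\, TV$ with $\omega=e^{2\pi i/d}$. This is only an illustration of the algebraic structure, not the continuum CCR algebra on $L^2(\mathbb{R}^d)$ (where the relation holds for continuous parameters $\eta,q$):

```python
import cmath

d = 5
w = cmath.exp(2j * cmath.pi / d)  # primitive d-th root of unity

# Clock matrix V: diagonal of powers of w (discrete analogue of e^{i*eta*x}).
# Shift matrix T: cyclic permutation e_j -> e_{j+1} (analogue of translation).
V = [[w ** i if i == j else 0 for j in range(d)] for i in range(d)]
T = [[1 if (i - j) % d == 1 else 0 for j in range(d)] for i in range(d)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

VT = matmul(V, T)
wTV = [[w * x for x in row] for row in matmul(T, V)]

# Discrete Weyl relation: V T = w T V, the phase playing the role
# of e^{-i eta . q} in the continuum relation above
err = max(abs(VT[i][j] - wTV[i][j]) for i in range(d) for j in range(d))
print(err)  # numerically zero
```

In this discrete setting the parameters run over $\mathbb{Z}_d$ rather than $\mathbb{R}^d$, which is why a finite-dimensional representation can exist at all; the unbounded continuum operators $x$ and $-i\nabla_x$ have no finite-dimensional counterpart.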
{ "domain": "physics.stackexchange", "id": 21587, "tags": "quantum-mechanics, classical-mechanics, operators, hilbert-space, quantization" }
rosdep install -ay fails on voyage linux/debian squeeze
Question: Hi there! I am trying to install ROS on an ALIX board running Voyage Linux 0.8.5, which is based on Debian Squeeze. I followed the experimental guide for installing ROS Fuerte on Debian. Things are working fine until I perform step 1.5, entering rosdep install -ay --os="debian:squeeze" I found some suggestions on how to fix the problem, but they didn't help: http://answers.ros.org/question/40131/definition-for-ros-packages-on-debian-squeeze/ : This guy has a very similar problem, but updating the rosdep database did not change anything. However, the test works on my machine. http://answers.ros.org/question/33002/rosdep-install-a-fails-on-debian-sqeeze/ : Similar problem, but I am not sure what to do in my case. However, the package python-qt-bindings I am missing is not contained in the git repository. Therefore, the suggestion probably wouldn't help anyway, right? http://answers.ros.org/question/40081/rosdep-doesnt-recognize-os/ : I tried to manually (apt-get) install python-qt-bindings and pcl (or their equivalents for Debian Squeeze in the repositories). It doesn't work, but it could be caused by the fact that the equivalent Debian packages do not have the same names, so ROS/rosdep does not detect them. However, this is just a guess. I am really confused and I really need some help to get ROS running.
Finally, this is the output after performing rosdep install -ay --os="debian:squeeze": Executing script below with cwd=/tmp {{{ #!/bin/bash export PREFIX=/usr/ [ f932cebad87302d8ea0ec1fd39b24d99 = `cat $PREFIX/include/eigen3/eigen-version.installed` ] }}} cat: /usr//include/eigen3/eigen-version.installed: No such file or directory /tmp/tmpeRgUII: line 3: [: f932cebad87302d8ea0ec1fd39b24d99: unary operator expected Executing script below with cwd=/tmp {{{ #!/bin/bash set -o errexit dpkg-query -W -f='${Package} ${Status}\n' yaml-cpp-sourcedep 2>&1 | awk '{\ if ($4 =="installed") exit 0 else print "yaml-cpp-sourcedep not installed" exit 1}' }}} yaml-cpp-sourcedep not installed Executing script below with cwd=/tmp {{{ #!/bin/bash export PREFIX=/usr/ if [ -f $PREFIX/include/assimp/assimp-version.installed ]; then [ 2ed0b9954bcb2572c0dade8f849b9260 = `cat $PREFIX/include/assimp/assimp-version.installed` ] else false fi }}} ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies: python_qt_binding: No definition of [ros] for OS [debian] rviz: No definition of [python-qt-bindings] for OS [debian] runtime_monitor: Missing resource rxbag ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/root/ros/diagnostics ROS path [2]=/root/ros/common ROS path [3]=/root/ros/laser_pipeline ROS path [4]=/root/ros/executive_smach_visualization ROS path [5]=/root/ros/visualization_tutorials ROS path [6]=/root/ros/geometry ROS path [7]=/root/ros/pluginlib ROS path [8]=/root/ros/bullet ROS path [9]=/root/ros/robot_model ROS path [10]=/root/ros/xacro ROS path [11]=/root/ros/dynamic_reconfigure ROS path [12]=/root/ros/executive_smach ROS path [13]=/root/ros/driver_common ROS path [14]=/root/ros/visualization_common ROS path [15]=/root/ros/python_qt_binding ROS path [16]=/root/ros/bond_core ROS path [17]=/root/ros/image_common ROS path [18]=/root/ros/geometry_visualization ROS path [19]=/root/ros/common_rosdeps ROS path 
[20]=/root/ros/diagnostics_monitors ROS path [21]=/root/ros/bfl ROS path [22]=/root/ros/common_tutorials ROS path [23]=/root/ros/robot_model_visualization ROS path [24]=/root/ros/geometry_experimental ROS path [25]=/root/ros/perception_pcl ROS path [26]=/root/ros/visualization ROS path [27]=/root/ros/robot_model_tutorials ROS path [28]=/root/ros/filters ROS path [29]=/root/ros/orocos_kinematics_dynamics ROS path [30]=/root/ros/geometry_tutorials ROS path [31]=/root/ros/nodelet_core ROS path [32]=/opt/ros/fuerte/stacks ROS path [33]=/opt/ros/fuerte/share ROS path [34]=/opt/ros/fuerte/share/ros turtle_actionlib: Missing resource turtlesim ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/root/ros/diagnostics ROS path [2]=/root/ros/common ROS path [3]=/root/ros/laser_pipeline ROS path [4]=/root/ros/executive_smach_visualization ROS path [5]=/root/ros/visualization_tutorials ROS path [6]=/root/ros/geometry ROS path [7]=/root/ros/pluginlib ROS path [8]=/root/ros/bullet ROS path [9]=/root/ros/robot_model ROS path [10]=/root/ros/xacro ROS path [11]=/root/ros/dynamic_reconfigure ROS path [12]=/root/ros/executive_smach ROS path [13]=/root/ros/driver_common ROS path [14]=/root/ros/visualization_common ROS path [15]=/root/ros/python_qt_binding ROS path [16]=/root/ros/bond_core ROS path [17]=/root/ros/image_common ROS path [18]=/root/ros/geometry_visualization ROS path [19]=/root/ros/common_rosdeps ROS path [20]=/root/ros/diagnostics_monitors ROS path [21]=/root/ros/bfl ROS path [22]=/root/ros/common_tutorials ROS path [23]=/root/ros/robot_model_visualization ROS path [24]=/root/ros/geometry_experimental ROS path [25]=/root/ros/perception_pcl ROS path [26]=/root/ros/visualization ROS path [27]=/root/ros/robot_model_tutorials ROS path [28]=/root/ros/filters ROS path [29]=/root/ros/orocos_kinematics_dynamics ROS path [30]=/root/ros/geometry_tutorials ROS path [31]=/root/ros/nodelet_core ROS path [32]=/opt/ros/fuerte/stacks ROS path [33]=/opt/ros/fuerte/share ROS path 
[34]=/opt/ros/fuerte/share/ros smach_viewer: Missing resource xdot ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/root/ros/diagnostics ROS path [2]=/root/ros/common ROS path [3]=/root/ros/laser_pipeline ROS path [4]=/root/ros/executive_smach_visualization ROS path [5]=/root/ros/visualization_tutorials ROS path [6]=/root/ros/geometry ROS path [7]=/root/ros/pluginlib ROS path [8]=/root/ros/bullet ROS path [9]=/root/ros/robot_model ROS path [10]=/root/ros/xacro ROS path [11]=/root/ros/dynamic_reconfigure ROS path [12]=/root/ros/executive_smach ROS path [13]=/root/ros/driver_common ROS path [14]=/root/ros/visualization_common ROS path [15]=/root/ros/python_qt_binding ROS path [16]=/root/ros/bond_core ROS path [17]=/root/ros/image_common ROS path [18]=/root/ros/geometry_visualization ROS path [19]=/root/ros/common_rosdeps ROS path [20]=/root/ros/diagnostics_monitors ROS path [21]=/root/ros/bfl ROS path [22]=/root/ros/common_tutorials ROS path [23]=/root/ros/robot_model_visualization ROS path [24]=/root/ros/geometry_experimental ROS path [25]=/root/ros/perception_pcl ROS path [26]=/root/ros/visualization ROS path [27]=/root/ros/robot_model_tutorials ROS path [28]=/root/ros/filters ROS path [29]=/root/ros/orocos_kinematics_dynamics ROS path [30]=/root/ros/geometry_tutorials ROS path [31]=/root/ros/nodelet_core ROS path [32]=/opt/ros/fuerte/stacks ROS path [33]=/opt/ros/fuerte/share ROS path [34]=/opt/ros/fuerte/share/ros robot_monitor: Missing resource rxbag ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/root/ros/diagnostics ROS path [2]=/root/ros/common ROS path [3]=/root/ros/laser_pipeline ROS path [4]=/root/ros/executive_smach_visualization ROS path [5]=/root/ros/visualization_tutorials ROS path [6]=/root/ros/geometry ROS path [7]=/root/ros/pluginlib ROS path [8]=/root/ros/bullet ROS path [9]=/root/ros/robot_model ROS path [10]=/root/ros/xacro ROS path [11]=/root/ros/dynamic_reconfigure ROS path [12]=/root/ros/executive_smach ROS path 
[13]=/root/ros/driver_common ROS path [14]=/root/ros/visualization_common ROS path [15]=/root/ros/python_qt_binding ROS path [16]=/root/ros/bond_core ROS path [17]=/root/ros/image_common ROS path [18]=/root/ros/geometry_visualization ROS path [19]=/root/ros/common_rosdeps ROS path [20]=/root/ros/diagnostics_monitors ROS path [21]=/root/ros/bfl ROS path [22]=/root/ros/common_tutorials ROS path [23]=/root/ros/robot_model_visualization ROS path [24]=/root/ros/geometry_experimental ROS path [25]=/root/ros/perception_pcl ROS path [26]=/root/ros/visualization ROS path [27]=/root/ros/robot_model_tutorials ROS path [28]=/root/ros/filters ROS path [29]=/root/ros/orocos_kinematics_dynamics ROS path [30]=/root/ros/geometry_tutorials ROS path [31]=/root/ros/nodelet_core ROS path [32]=/opt/ros/fuerte/stacks ROS path [33]=/opt/ros/fuerte/share ROS path [34]=/opt/ros/fuerte/share/ros tf2_visualization: Missing resource xdot ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/root/ros/diagnostics ROS path [2]=/root/ros/common ROS path [3]=/root/ros/laser_pipeline ROS path [4]=/root/ros/executive_smach_visualization ROS path [5]=/root/ros/visualization_tutorials ROS path [6]=/root/ros/geometry ROS path [7]=/root/ros/pluginlib ROS path [8]=/root/ros/bullet ROS path [9]=/root/ros/robot_model ROS path [10]=/root/ros/xacro ROS path [11]=/root/ros/dynamic_reconfigure ROS path [12]=/root/ros/executive_smach ROS path [13]=/root/ros/driver_common ROS path [14]=/root/ros/visualization_common ROS path [15]=/root/ros/python_qt_binding ROS path [16]=/root/ros/bond_core ROS path [17]=/root/ros/image_common ROS path [18]=/root/ros/geometry_visualization ROS path [19]=/root/ros/common_rosdeps ROS path [20]=/root/ros/diagnostics_monitors ROS path [21]=/root/ros/bfl ROS path [22]=/root/ros/common_tutorials ROS path [23]=/root/ros/robot_model_visualization ROS path [24]=/root/ros/geometry_experimental ROS path [25]=/root/ros/perception_pcl ROS path [26]=/root/ros/visualization ROS path 
[27]=/root/ros/robot_model_tutorials ROS path [28]=/root/ros/filters ROS path [29]=/root/ros/orocos_kinematics_dynamics ROS path [30]=/root/ros/geometry_tutorials ROS path [31]=/root/ros/nodelet_core ROS path [32]=/opt/ros/fuerte/stacks ROS path [33]=/opt/ros/fuerte/share ROS path [34]=/opt/ros/fuerte/share/ros laser_filters: No definition of [pcl] for OS [debian] turtle_tf: Missing resource turtlesim ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/root/ros/diagnostics ROS path [2]=/root/ros/common ROS path [3]=/root/ros/laser_pipeline ROS path [4]=/root/ros/executive_smach_visualization ROS path [5]=/root/ros/visualization_tutorials ROS path [6]=/root/ros/geometry ROS path [7]=/root/ros/pluginlib ROS path [8]=/root/ros/bullet ROS path [9]=/root/ros/robot_model ROS path [10]=/root/ros/xacro ROS path [11]=/root/ros/dynamic_reconfigure ROS path [12]=/root/ros/executive_smach ROS path [13]=/root/ros/driver_common ROS path [14]=/root/ros/visualization_common ROS path [15]=/root/ros/python_qt_binding ROS path [16]=/root/ros/bond_core ROS path [17]=/root/ros/image_common ROS path [18]=/root/ros/geometry_visualization ROS path [19]=/root/ros/common_rosdeps ROS path [20]=/root/ros/diagnostics_monitors ROS path [21]=/root/ros/bfl ROS path [22]=/root/ros/common_tutorials ROS path [23]=/root/ros/robot_model_visualization ROS path [24]=/root/ros/geometry_experimental ROS path [25]=/root/ros/perception_pcl ROS path [26]=/root/ros/visualization ROS path [27]=/root/ros/robot_model_tutorials ROS path [28]=/root/ros/filters ROS path [29]=/root/ros/orocos_kinematics_dynamics ROS path [30]=/root/ros/geometry_tutorials ROS path [31]=/root/ros/nodelet_core ROS path [32]=/opt/ros/fuerte/stacks ROS path [33]=/opt/ros/fuerte/share ROS path [34]=/opt/ros/fuerte/share/ros laser_assembler: No definition of [pcl] for OS [debian] rxbag_plugins: Missing resource rxbag ROS path [0]=/opt/ros/fuerte/share/ros ROS path [1]=/root/ros/diagnostics ROS path [2]=/root/ros/common ROS path 
[3]=/root/ros/laser_pipeline ROS path [4]=/root/ros/executive_smach_visualization ROS path [5]=/root/ros/visualization_tutorials ROS path [6]=/root/ros/geometry ROS path [7]=/root/ros/pluginlib ROS path [8]=/root/ros/bullet ROS path [9]=/root/ros/robot_model ROS path [10]=/root/ros/xacro ROS path [11]=/root/ros/dynamic_reconfigure ROS path [12]=/root/ros/executive_smach ROS path [13]=/root/ros/driver_common ROS path [14]=/root/ros/visualization_common ROS path [15]=/root/ros/python_qt_binding ROS path [16]=/root/ros/bond_core ROS path [17]=/root/ros/image_common ROS path [18]=/root/ros/geometry_visualization ROS path [19]=/root/ros/common_rosdeps ROS path [20]=/root/ros/diagnostics_monitors ROS path [21]=/root/ros/bfl ROS path [22]=/root/ros/common_tutorials ROS path [23]=/root/ros/robot_model_visualization ROS path [24]=/root/ros/geometry_experimental ROS path [25]=/root/ros/perception_pcl ROS path [26]=/root/ros/visualization ROS path [27]=/root/ros/robot_model_tutorials ROS path [28]=/root/ros/filters ROS path [29]=/root/ros/orocos_kinematics_dynamics ROS path [30]=/root/ros/geometry_tutorials ROS path [31]=/root/ros/nodelet_core ROS path [32]=/opt/ros/fuerte/stacks ROS path [33]=/opt/ros/fuerte/share ROS path [34]=/opt/ros/fuerte/share/ros librviz_tutorial: No definition of [python-qt-bindings] for OS [debian] rviz_plugin_tutorials: No definition of [python-qt-bindings] for OS [debian] pcl_ros: No definition of [pcl] for OS [debian] Originally posted by brhempen on ROS Answers with karma: 31 on 2012-08-31 Post score: 1 Answer: No one has yet defined any rosdep rules for python-qt-bindings in Debian Squeeze. You can define them and contribute back to the community by following these directions. Originally posted by joq with karma: 25443 on 2012-08-31 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by joq on 2012-08-31: One of the Ubuntu stanzas may possibly work for Squeeze. You can experiment with them.
{ "domain": "robotics.stackexchange", "id": 10836, "tags": "ros, installation, rosdep, packages, debian" }
Iterating through list and calling method with different parameters for each item
Question: I am iterating over a list of environments but calling the same method each time. The only difference is that the parameters change depending on what environment it is. The result so far is similar to the code below. I know this is not great but can't find a better way to solve it. Any ideas? foreach (var environment in Environments) { if(environment == "environment1") { FillList("connectionstring1", query1); } else if (environment == "environment2") { FillList("connectionstring1", query2); } else if (environment == "environment3") { FillList("connectionstring2", query1); } } Answer: The second parameter has the same value twice (query1), and the first parameter also has the same value twice ("connectionstring1"); this means you can remove one if statement if you create two variables with initial values, like this: foreach (var environment in Environments) { string connectionString = "connectionstring1"; var query = query1; if (environment == "environment2") { query = query2; } else if (environment == "environment3") { connectionString = "connectionstring2"; } FillList(connectionString, query); } Another option would be to add a Dictionary<string, Action>, which allows you to easily map string -> method call, like this: Dictionary<string, Action> dictionary = new Dictionary<string, Action> { ["environment1"] = () => FillList("connectionstring1", query1), ["environment2"] = () => FillList("connectionstring1", query2), ["environment3"] = () => FillList("connectionstring2", query1) }; foreach (var environment in Environments) { dictionary[environment].Invoke(); }
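The same table-driven dispatch can be sketched outside C# as well; here is a minimal Python illustration of the pattern (fill_list, the recorded calls list, and the query strings are hypothetical stand-ins for the question's FillList and query objects, not part of the original code):

```python
# Table-driven dispatch: map each environment name to a zero-argument
# callable, mirroring the Dictionary<string, Action> idea above.
# fill_list and the query values are hypothetical stand-ins.
calls = []

def fill_list(connection_string, query):
    calls.append((connection_string, query))

query1, query2 = "query1", "query2"

dispatch = {
    "environment1": lambda: fill_list("connectionstring1", query1),
    "environment2": lambda: fill_list("connectionstring1", query2),
    "environment3": lambda: fill_list("connectionstring2", query1),
}

for environment in ["environment1", "environment3"]:
    dispatch[environment]()  # one lookup replaces the if/else chain

print(calls)
```

A side benefit of the table form: an unknown environment name fails loudly (KeyError here, a KeyNotFoundException in the C# version) instead of silently falling through the if/else chain.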
{ "domain": "codereview.stackexchange", "id": 24198, "tags": "c#, beginner" }
Momentum/kinetic energy: classical mechanics
Question: Can I affirm that the momentum gives us the effect that we observe when a sphere impacts a surface (for example, if a car impacts a wall we observe fractures and visual damage)? And for the kinetic energy, can I affirm that it is the energy released or dissipated when a car impacts a wall? Links related to my question: Momentum a good definition Answer: And for the kinetic energy, can I affirm that it is the energy released or dissipated when a car impacts a wall? Yes you can, but it should be qualified that it is the macroscopic kinetic energy of the car that is dissipated in the collision. The macroscopic kinetic energy is the kinetic energy associated with the velocity of the center of mass of the car as a whole. In the collision the loss of macroscopic, or external, kinetic energy becomes an increase in the microscopic internal energy (kinetic plus potential) associated with the fracturing and visible damage to the car (and possibly the wall as well), resulting in raising the temperatures of the colliding objects. Overall, external plus internal energy is conserved. Only the macroscopic kinetic energy is not conserved. I would add that the difference between momentum and macroscopic kinetic energy in your collision example is that momentum is conserved whereas macroscopic kinetic energy would not be. Macroscopic kinetic energy is only conserved in a perfectly elastic collision. Perfectly elastic collisions do not exist at the macroscopic level. As a final comment, it is not uncommon to think momentum is not conserved when a wall brings a car to a stop, since the wall does not appear to move. But the wall/earth actually does move to conserve momentum. The thing is, its mass is so huge compared to the car's that its movement is so infinitesimal as to be immeasurable, and is therefore ignored from a practical perspective. Hope this helps.
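To put rough numbers on the distinction the answer draws, here is a small sketch treating the car-wall collision as perfectly inelastic between the car and the wall+earth system (the mass and speed of the car are illustrative, not taken from the answer):

```python
# Illustrative numbers for the car-vs-wall example: a perfectly
# inelastic collision between the car and the wall+earth system.
m_car, v_car = 1500.0, 20.0        # kg, m/s (made-up but plausible)
m_earth = 5.97e24                  # kg; the wall is fixed to the earth

p_before = m_car * v_car                  # total momentum before
v_after = p_before / (m_car + m_earth)    # common velocity after

ke_before = 0.5 * m_car * v_car**2
ke_after = 0.5 * (m_car + m_earth) * v_after**2

# Momentum is conserved exactly, but the earth's recoil is on the
# order of 1e-21 m/s -- far too small to measure -- while essentially
# all of the ~300 kJ of macroscopic kinetic energy becomes internal
# energy (damage and heat).
print(v_after, ke_before - ke_after)
```

The numbers make the answer's point concrete: momentum conservation is satisfied by an immeasurably small recoil of the earth, while the macroscopic kinetic energy is almost entirely dissipated.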
{ "domain": "physics.stackexchange", "id": 64836, "tags": "classical-mechanics, energy, definition" }
Topological strings: Why is the complex structure for $T^2$ denoted as $\tau$ in string theory?
Question: In these notes by Vafa on topological string theory he says on page 7 that the moduli of the 2-torus can be repackaged into two quantities: $$A=iR_1/R_2 \,\,\,\,\,\,\,\,\, \tau=iR_2/R_1$$ where $A$ describes the overall area of the torus or its size and $\tau$ describes its complex structure or its shape. Why does $A$ measure the area? Why does $\tau$ describe the complex structure of $T^2$? The complex structure of $T^2$, which is Kahler, is a tensor $J$. What is its relation to this $\tau$? And what has the complex structure to do with the shape of $T^2$? I would assume that the cohomology class of the Kahler form only has to do with the area. Later he says that this is an example of mirror symmetry in string theory. Why? Mirror symmetry relates two different CYs. Here we only have different moduli of $T^2$. Finally, which parameters actually correspond to the moduli space of $T^2$? Both $A,\tau$, only $A$, or only $\tau$? This is quite a mathematical question but it is at the heart of string theory. Answer: I will attempt to answer with very little string theory background - because your questions seem oriented towards this basic case rather than the theory in general. First, a correction. On page 7 of that article, it defines $A=iR_1R_2$, not $R_1/R_2$. So, since the torus is flat, $A$ is $i$ times the usual area $R_1R_2$. As you say, a complex structure is a map $J$ such that $J^2=-1$. It comes from thinking of the complex structure on $\mathbb{C}$, where $iz=i(x+iy)=-y+ix$, so it exchanges the roles of the two coordinates. If the torus is a rectangular region of $\mathbb{C}$ with the opposite sides identified, the complex structure is a "rotation+flip", and changes the appearance of the rectangle. Since $\tau$ is the ratio of the two sides of the rectangle, it tells us something about the shape of the torus [some clarity below]. The torus is a CY manifold in 1 dimension, so the symmetry $A\leftrightarrow\tau$ is a map between two CY manifolds. 
He equates this with T-duality $A\leftrightarrow 1/A$, which is closely related to mirror symmetry. Well, to be clear we are talking about the torus metrics, which are completely specified by $R_1$ and $R_2$. (This is not the same as "the moduli space of $\mathbb{T}^2$" because that would mean more or less structure, depending on the context. To a topologist, the moduli space of tori is 0-dimensional, since there is only one 2d topological surface with genus 1). That means just $A$ wouldn't cut it - there would be pairs $(R_1,R_2)$ with the same $A$ but different shapes. If you include $\tau$ (linearly independent from $A$), then you can break that degeneracy. So the moduli space is parametrized by either the pair $(R_1,R_2)$ or $(A,\tau)$. (He does say that for more general tori you need to consider real parts for $A$ and $\tau$, so the moduli space would be bigger). [Some Clarity] In case that wasn't clear - consider the complex structure of $\mathbb{C}$, the imaginary unit $i$. Its action on the edges is $(R_1,R_2)\rightarrow (-R_2,R_1)$. So what happens to $A$ and $\tau$ under this map? $$A=iR_1R_2\rightarrow A'=-iR_2R_1$$ $$\tau=iR_2/R_1\rightarrow \tau'=i(R_1)/(-R_2)$$ So $A$ doesn't tell us anything about the complex structure, because under that map we just get $A\rightarrow -A$. However, $\tau\rightarrow -1/\tau$, so $\tau$ tells "how wide" and "how long" the torus is (at least, the ratio of these), which is the complex structure.
{ "domain": "physics.stackexchange", "id": 97603, "tags": "string-theory, differential-geometry, topological-field-theory, calabi-yau, moduli" }
Producing recursive directory listings
Question: This produces recursive directory listings(it's intended to do other things, so there will be something else instead of echo, but it isn't important), yet it isn't very intuitive. How can I improve it? #! /bin/bash deploy () { for file in $(ls $1); do if [ ! -d "$2$file" ]; then echo ${2}${file} else deploy $2$file "$2$file/" fi done; } deploy . Output example: tmp/kdecache-z/http/ba9c27404980248a61dcdadd35f5f52395b8bdde tmp/kdecache-z/http/scoreboard tmp/kdecache-z/icon-cache.kcache tmp/kdecache-z/ksycoca4 tmp/kdecache-z/ksycoca4stamp tmp/kdecache-z/plasma-svgelements-.customized1 tmp/kdecache-z/plasma-svgelements-internal-system-colors tmp/kdecache-z/plasma_theme_.customized1.kcache tmp/kdecache-z/plasma_theme_internal-system-colors.kcache tmp/kdecache-z/Активные tmp/kdecache-z/напоминанияrc tmp/kdecache-z/Устаревшие tmp/kdecache-z/напоминанияrc tmp/kdecache-z/Шаблоны tmp/kdecache-z/напоминанийrc tmp/kde-z/xauth-1000-_0 tmp/ksocket-z/kdeinit4__0 tmp/ksocket-z/kio_http_cache_cleaner tmp/ksocket-z/klauncherhX2255.slave-socket tmp/ksocket-z/KSMserver__0 tmp/pulse-PKdhtXMmr18n/native tmp/pulse-PKdhtXMmr18n/pid tmp/qtsingleapp-homezS-8560-3e8 tmp/qtsingleapp-homezS-8560-3e8-lockfile tmp/skype-2664/DbTemp./temp-KXp68f252Xx0kBMtqNdj7NFo tmp/skype-2664/DbTemp./temp-NAailvY24hLGYNrZiJ99QIgH tmp/sni-qt_skype_2664-v6Xyna/icons/hicolor/24x24/apps/skype_2664_2d1ee5482260fd9cd180b32787792683.png tmp/sni-qt_skype_2664-v6Xyna/icons/hicolor/24x24/apps/skype_2664_37e170fc54e7355d9d298917e74f9ea9.png tmp/sni-qt_skype_2664-v6Xyna/icons/hicolor/24x24/apps/skype_2664_602c84fa0f4d61c64f770495a500279e.png tmp/sni-qt_skype_2664-v6Xyna/icons/hicolor/24x24/apps/skype_2664_f2fc4a539a7b9553f5b35241d1154e84.png tmp/z.socket P.S. It is working perfectly for me so I just want to make it more beautiful. Answer: In fairness, this task is relatively simple, and your solution is actually pretty good. 
Shell scripts are often just a hack to solve a specific task, and your script is much better than many that I see that do that. There are two things to consider in your script. The first is whether it can be improved. The second is whether there's a better way. Now, about improving it. I would: remove the need for the second parameter; name the parameters; include the / in a different place; use quotes around variables in specific places; reverse the if-blocks to get rid of the 'negate' condition. Consider the following: deploy () { local dir="$1" for file in $(ls "$dir/" ); do # Fully qualified file fqfile="$dir/$file" if [ -d "$fqfile" ]; then deploy "$fqfile" else echo "$fqfile" fi done; } Introducing the fqfile allows the simplification of other variables too. Now, having said all that, your task could easily be solved with the find command: find . -type f -exec echo {} \; The above command will recursively find all files below ., and echo them.
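One caveat worth noting: iterating over $(ls ...) word-splits filenames on whitespace, so a name containing a space gets mangled (some of the Cyrillic entries in the sample output appear to be exactly such names split across two lines); find avoids this. For comparison, the same traversal done in Python via os.walk is immune to the problem (the demo directory tree below is invented purely for illustration):

```python
import os
import tempfile

# Same recursive listing, but os.walk yields whole path strings, so a
# name containing spaces survives intact (a $(ls ...) loop would have
# split it into separate words).
def deploy(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            yield os.path.join(dirpath, name)

# Tiny demo tree with an awkward filename.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "sub", "file with spaces.txt"), "w").close()

listing = list(deploy(root))
print(listing)  # exactly one entry, spaces preserved
```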
{ "domain": "codereview.stackexchange", "id": 12197, "tags": "beginner, bash, shell" }
beginner_tutorials/msg/Num.msg: Invalid declaration: int 64 num
Question: Whenever I open my terminal I get the following warning ROS_DISTRO was set to 'jade' before. Please make sure that the environment does not mix paths from different distributions. Also when I run catkin-make I get the following error: ~/catkin_ws$ catkin_make Base path: /home/vivek/catkin_ws Source space: /home/vivek/catkin_ws/src Build space: /home/vivek/catkin_ws/build Devel space: /home/vivek/catkin_ws/devel Install space: /home/vivek/catkin_ws/install #### #### Running command: "make cmake_check_build_system" in "/home/vivek/catkin_ws/build" #### -- Using CATKIN_DEVEL_PREFIX: /home/vivek/catkin_ws/devel -- Using CMAKE_PREFIX_PATH: /opt/ros/indigo -- This workspace overlays: /opt/ros/indigo -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Using Debian Python package layout -- Using empy: /usr/bin/empy -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/vivek/catkin_ws/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built -- Using Python nosetests: /usr/bin/nosetests-2.7 -- catkin 0.6.14 -- BUILD_SHARED_LIBS is on -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- ~~ traversing 1 packages in topological order: -- ~~ - beginner_tutorials -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- +++ processing catkin package: 'beginner_tutorials' -- ==> add_subdirectory(beginner_tutorials) -- Using these message generators: gencpp;genlisp;genpy /opt/ros/indigo/share/genmsg/cmake/pkg-genmsg.cmake.em:56: error: <class 'genmsg.base.InvalidMsgSpec'>: /home/vivek/catkin_ws/src/beginner_tutorials/msg/Num.msg: Invalid declaration: int 64 num Traceback (most recent call last): File "/usr/bin/empy", line 3276, in <module> if __name__ == '__main__': main() File "/usr/bin/empy", line 3274, in main invoke(sys.argv[1:]) File "/usr/bin/empy", line 3257, in invoke interpreter.wrap(interpreter.file, (file, name)) File "/usr/bin/empy", line 2262, in wrap self.fail(e) File "/usr/bin/empy", 
line 2253, in wrap callable(*args) File "/usr/bin/empy", line 2326, in file self.safe(scanner, done, locals) File "/usr/bin/empy", line 2368, in safe self.parse(scanner, locals) File "/usr/bin/empy", line 2388, in parse token.run(self, locals) File "/usr/bin/empy", line 1403, in run interpreter.execute(self.code, locals) File "/usr/bin/empy", line 2565, in execute exec(statements, self.globals) File "<string>", line 38, in <module> File "/opt/ros/indigo/lib/python2.7/dist-packages/genmsg/deps.py", line 45, in find_msg_dependencies_with_type spec = genmsg.msg_loader.load_msg_from_file(msg_context, msg_file, full_type_name) File "/opt/ros/indigo/lib/python2.7/dist-packages/genmsg/msg_loader.py", line 284, in load_msg_from_file raise InvalidMsgSpec('%s: %s'%(file_path, e)) genmsg.base.InvalidMsgSpec: /home/vivek/catkin_ws/src/beginner_tutorials/msg/Num.msg: Invalid declaration: int 64 num CMake Error at /opt/ros/jade/share/catkin/cmake/safe_execute_process.cmake:11 (message): execute_process(/home/vivek/catkin_ws/build/catkin_generated/env_cached.sh "/usr/bin/python" "/usr/bin/empy" "--raw-errors" "-F" "/home/vivek/catkin_ws/build/beginner_tutorials/cmake/beginner_tutorials-genmsg-context.py" "-o" "/home/vivek/catkin_ws/build/beginner_tutorials/cmake/beginner_tutorials-genmsg.cmake" "/opt/ros/indigo/share/genmsg/cmake/pkg-genmsg.cmake.em") returned error code 1 Call Stack (most recent call first): /opt/ros/jade/share/catkin/cmake/em_expand.cmake:25 (safe_execute_process) /opt/ros/indigo/share/genmsg/cmake/genmsg-extras.cmake:300 (em_expand) beginner_tutorials/CMakeLists.txt:13 (generate_messages) -- Configuring incomplete, errors occurred! See also "/home/vivek/catkin_ws/build/CMakeFiles/CMakeOutput.log". See also "/home/vivek/catkin_ws/build/CMakeFiles/CMakeError.log". 
make: *** [cmake_check_build_system] Error 1 Invoking "make cmake_check_build_system" failed Originally posted by Vivek on ROS Answers with karma: 1 on 2016-02-07 Post score: 0 Answer: The key error here is: genmsg.base.InvalidMsgSpec: /home/vivek/catkin_ws/src/beginner_tutorials/msg/Num.msg: Invalid declaration: int 64 num It looks like your message definition is incorrect. There is no space in the int64 type. For Num.msg, try: int64 num Originally posted by ahendrix with karma: 47576 on 2016-02-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Vivek on 2016-02-07: Thank you!
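As an aside, the failure mode is easy to see from the field syntax alone: a plain field line in a .msg file is a type token followed by a name token, so the stray space in "int 64 num" produces three tokens. The toy checker below is purely illustrative (it ignores comments and constants and is not the real genmsg parser, which the traceback shows doing the actual validation):

```python
# Toy illustration only -- not the real genmsg parser. A plain .msg
# field declaration is "<type> <name>", i.e. exactly two whitespace-
# separated tokens, so "int 64 num" cannot parse as a field.
def looks_like_field(line):
    tokens = line.split()
    return len(tokens) == 2

print(looks_like_field("int 64 num"))  # False: three tokens
print(looks_like_field("int64 num"))   # True: the corrected line
```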
{ "domain": "robotics.stackexchange", "id": 23674, "tags": "ros" }
Guardian not working in Gazebo
Question: I am trying to study the guardian-ros-pkg to learn more about ROS, controlling a skid-steer 4-wheel robot, and SLAM. I have installed the package from source and now I'm trying to run a simulation in Gazebo following the instructions found here. First, I cannot teleop the robot properly using the teleop_keyboard node. Q and E work, causing the robot to rotate in place around the Z axis (yaw), but WASD do not cause the robot to translate. Pressing or holding Shift makes no difference. Second, when I try to run the SLAM algorithm for navigation, the robot just rotates in place endlessly and never explores the space. It should look like this. When I launch with guardian_robotnik.launch, all of the models load just fine, but I get an error: [FATAL] [1343077767.457597153, 0.712000000]: GuardianController I also get this warning: [ WARN] [1343077768.818456569, 0.726000000]: multiple inconsistent exists due to fixed joint reduction, overwriting previous value [false] with [true]. Third, when I try to launch rviz to view the map generated by SLAM, I get several errors and a seg fault, but I have never had a problem opening rviz before. OpenGL Warning: glXChooseFBConfig returning NULL, due to attrib=0x8010, next=0x8023 OpenGL Warning: glXCreatePbuffer not implemented by Chromium [ERROR] [1343074659.579961264, 41.872000000]: Caught exception while loading: OGRE EXCEPTION(3:RenderingAPIException): Unable to create Pbuffer in GLXPBuffer::GLXPBuffer at /tmp/buildd/ros-electric-visualization-common-1.6.3/debian/ros-electric-visualization-common/opt/ros/electric/stacks/visualization_common/ogre/build/ogre_src_v1-7-3/RenderSystems/GL/src/GLX/OgreGLXRenderTexture.cpp (line 137) Running Electric on Ubuntu 11.10 guest, OS X 10.7.4 host with 3D acceleration enabled for the guest. Have I just missed a step, or is something wrong here? 
Originally posted by jerdman on ROS Answers with karma: 35 on 2012-07-23 Post score: 0 Answer: Maybe you should ask to the company Robotnik directly www.robotnik.es/en/contact Originally posted by martimorta with karma: 843 on 2012-07-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10325, "tags": "slam, gazebo, navigation, ubuntu, ubuntu-oneiric" }
Chemical Anti-bonding and Pauli repulsion
Question: We can explain chemical anti-bonding just using Pauli repulsion, correct? Let's take He2. Two atoms of He would share four 1s electrons, and since the spin quantum number for electrons has only two values, there would be two electrons with the same wave function, which would violate the Pauli exclusion principle. This makes He2 unstable, so it dissociates into two He atoms. Am I correct? Answer: The reason we speak of orbitals as being exclusively inhabited by 2 (max) electrons (with opposing electron spin quantum number) is because electrons are fermions and therefore observe the exclusion principle. The exclusion principle constrains the allowed electron configurations, disallowing occupation of lower-energy orbitals by more than 2 electrons. In the case of $\ce{He2}$ it requires occupation of orbitals that raise the total energy above the energy of the atoms at greater separation. So, in a word, the answer is yes. The wikipedia entry on the Pauli exclusion principle alludes to its role: It has been shown that the Pauli exclusion principle is responsible for the fact that ordinary bulk matter is stable and occupies volume. This suggestion was first made in 1931 by Paul Ehrenfest, who pointed out that the electrons of each atom cannot all fall into the lowest-energy orbital and must occupy successively larger shells. Atoms, therefore, occupy a volume and cannot be squeezed too closely together.[12] I should add that dispersion interactions mean that there actually is a very weak attractive interaction between He atoms before the repulsive term "kicks in". See e.g. this wikipedia description of the London dispersion force, which includes an illustration of the potential between two Ar atoms. See also the table at the bottom of the wikipedia article, which explains that for the Ne dimer, dispersion contributes 100% of the total intermolecular interaction energy (I would modify that perhaps to attractive interaction).
{ "domain": "chemistry.stackexchange", "id": 12787, "tags": "inorganic-chemistry, quantum-chemistry" }
Uploading a video in S3 Using Future
Question: I've posted a similar program previously, but I have not made any major modifications in the code, so I am posting it again by deleting the previous question. I am afraid of the thread keyword and I am not too familiar with threads and blocking, so can you please review this code for me? Is the use of future enough or not? This code comes from here. Note: The AWS credentials are fake. package controllers import java.awt.image.BufferedImage import java.util.ArrayList import scala.collection.JavaConversions.mapAsJavaMap import scala.concurrent.Future import org.apache.http.NameValuePair import org.apache.http.message.BasicNameValuePair import com.amazonaws.auth.BasicAWSCredentials import com.amazonaws.services.s3.AmazonS3Client import com.twilio.sdk.TwilioRestClient import com.twilio.sdk.TwilioRestException import net.liftweb.json.DefaultFormats import net.liftweb.json.Serialization.write import play.api.Play.current import play.api.i18n.Lang import play.api.i18n.Messages import play.api.libs.concurrent.Execution.Implicits._ import play.api.mvc.Action import play.api.mvc.Controller object Application extends Controller { val bucketImages = "images35" val bucketVideos = "videos35" val bucketAudios = "audios35" val AWS_ACCESS_KEY = "qqqqqqqqqqqqqqqqqqqqqqqqqqq" val AWS_SECRET_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxx" val yourAWSCredentials = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY) val amazonS3Client = new AmazonS3Client(yourAWSCredentials) val ACCOUNT_SID = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" val AUTH_TOKEN = "tytytytytyyyyyyyyyyyyyyyyyyyyyyyyyy" implicit val formats = DefaultFormats def uploadVideo = Action.async(parse.multipartFormData) { implicit request => session.get("userI").map { userId => try { var videoName: String = null val upload = for { done <- Future { request.body.files.map { mov => videoName =System.currentTimeMillis() + ".mpeg" amazonS3Client.putObject(bucketVideos, videoName, mov.ref.file) } } } yield done upload.map { res => val 
map = Map("result" -> "success", "videoName" -> videoName) Ok(write(map)) } } catch { case e: Exception => Future{ Ok(write(Map("result" -> "error"))) } } }.getOrElse { Future{ Ok(write(Map("result" -> "nosession"))) } } } } application.conf play { akka { event-handlers = ["akka.event.slf4j.Slf4jEventHandler"] loglevel = WARNING actor { default-dispatcher = { fork-join-executor { parallelism-min = 300 parallelism-max = 300 } } } } } contexts { simple-db-lookups { fork-join-executor { parallelism-factor = 10.0 } } expensive-db-lookups { fork-join-executor { parallelism-max = 4 } } db-write-operations { fork-join-executor { parallelism-factor = 2.0 } } expensive-cpu-operations { fork-join-executor { parallelism-max = 2 } } } Contexts Object object Contexts { implicit val simpleDbLookups: ExecutionContext = Akka.system.dispatchers.lookup("contexts.simple-db-lookups") implicit val expensiveDbLookups: ExecutionContext = Akka.system.dispatchers.lookup("contexts.expensive-db-lookups") implicit val dbWriteOperations: ExecutionContext = Akka.system.dispatchers.lookup("contexts.db-write-operations") implicit val expensiveCpuOperations: ExecutionContext = Akka.system.dispatchers.lookup("contexts.expensive-cpu-operations") } Answer: First, remove import play.api.libs.concurrent.Execution.Implicits._. You don't want the default Play execution context. Instead, you should use one of the ExecutionContexts you've defined and aren't using. Perhaps define one for S3 operations alone, and you don't need the others if you aren't going to use them. Future.apply requires the implicit ExecutionContext, so you can either import the correct one, or explicitly pass it. 
Assuming you have a context defined like this in your configuration:

contexts {
  s3-ops {
    fork-join-executor {
      parallelism-factor = 10.0
    }
  }
}

Then in your controller you could add:

implicit val ec: ExecutionContext = Akka.system.dispatchers.lookup("contexts.s3-ops")

Now in the body of the controller function, I'll work my way inward. With the handling of the session, why bother returning data to the user if they don't have a session? It would seem more appropriate to return Forbidden, which can shorten that line to Future.successful(Forbidden). Note that Future.successful creates a Future that has already been completed. The use of try/catch is not very Scala-like. There are other nicer ways to handle errors in Scala, like Try. Future is actually very similar to Try, though, and it should provide all the error handling you need in this case. If an exception occurs within a Future, then it becomes a failed Future. No exceptions will propagate upward. Most of the code inside your try block can go inside the Future apply instead. var videoName: String = null is something that should be avoided altogether. First, it's best to avoid mutable vars, and second you should never find yourself assigning null to anything. I'm not sure the body of your current Future where you're mapping request.body.files is doing what you think it's doing. request.body.files is a Seq[FilePart], which means if there are multiple FileParts, you're going to upload them all separately, overwrite videoName and only return the name of the last uploaded FilePart. Or worse, if there are no FileParts, that part will silently fail. If you're only going to have a single FilePart, it'd be better to name it in the upload body and access it via request.body.file("filename"), which will be an Option[FilePart]. From there you can map the Option[FilePart] to the actual upload.
Since you're not actually doing anything with the PutObjectResult from the S3 response, it'd be better to return the name of the file in the Future, so that you have access to it when you map the Future, and would no longer have a need for a var. Following the suggestions above, it would now be a Future[Option[String]]. This would leave you with three return states for the Future:

A successful Future containing Some[String] (upload success on both ends).
A successful Future containing None (no file was uploaded to the server).
A failed Future (the upload to S3 failed somehow).

You'd then have something that looks roughly like this:

Future {
  request.body.file("filename").map { file =>
    val videoName = ...
    amazonS3Client.putObject(...)
    videoName
  }
}.map { upload =>
  upload.map { videoName => Ok(...) }.getOrElse { BadRequest(...) }
}.recover {
  case _ => // recover the failed `Future` to a default value, in this case a `Result`
    InternalServerError
}

Much cleaner. Note that recover accepts a PartialFunction[Throwable, ?], similarly to a catch block, so you can be more fine grained in your error handling if you need to. A couple more notes: There's no need to use Lift's json API for your responses. Play's implementation works just fine:

import play.api.libs.json.Json

Ok(Json.obj("result" -> "success", "videoName" -> videoName))

Your imports take up a lot of space. Many can be condensed into one liners:

import play.api.i18n.Lang
import play.api.i18n.Messages

Can be:

import play.api.i18n.{Lang, Messages}
{ "domain": "codereview.stackexchange", "id": 8252, "tags": "multithreading, scala, amazon-s3" }
What is the meaning of $\vec H$ with respect to the total field?
Question: Before saying anything: I have seen the similar questions on this page regarding $\vec H$, but none has fully convinced me yet, so I will try to give another perspective to this question. We know that the total contribution to a magnetic field is $$\vec B_{total}=\vec B_{material}+\vec B_0$$ with $\vec B_{material}$ and $\vec B_0$ being the field magnetized in the material (like a cylinder or whatever shape with no field generated by itself) and the field generated by a common source (like a magnet or a coil), respectively. The topic of magnetic fields in matter corresponds to $\vec B_{material}$, and magnetic fields in vacuum correspond to $\vec B_0$. But from the topic of magnetic fields in matter, we know $$\vec B_{material}=\mu_0 (\vec H+\vec M)$$ which generates a huge doubt for me, since I've always associated $\vec H$ with the "cause" of $\vec B_{material}$ and $\vec M$ with the "consequence" of $\vec B_{material}$. But now we remember that for a $\vec B_{material}$ to exist, there has to be an original magnetic field that magnetizes the material, because the material by itself is unable to generate such a field, and I thought we associated that field with $\vec B_0$; but now that I think about it, it could also be $\vec H$, couldn't it? And the "response" of the material I thought would be $\vec B_{material}$, but then again it could also be $\vec M$, since it is associated with the consequence of a magnetic field. What's going on? I think my head is too confused, so maybe I contradicted myself a couple of times, but it just doesn't make intuitive sense to me which letter corresponds to which contribution. The whole thing can also be compared to the electric field, with $\vec E=\vec E_0+\vec E_{material}$, where we have $\vec D=\epsilon_0 \vec E+\vec P$: what is the contribution to what, and what is the response of the material to an external field? It's all very confusing.
EDITED PART FOR THE BOUNTY UPDATE: The comments here suggest that $\vec B_0$ and $\vec B_m$ refer to $\vec H$ and $\vec M$ and no more. But today my teacher explained it to us with the following example problem, and it doesn't actually agree with the comments here: A sphere of radius R of a linear magnetic material of permeability $\mu$ is located in a region of empty space where a uniform magnetic field $B_0$ exists. a) Knowing that the magnetization that appears on the sphere is uniform, calculate $M$, the dipole moment induced in the sphere, and the $B$ field at all points in space. And my teacher did the following: After finding the magnetic vector potential $\vec A$, we can find $\vec B_m=\nabla\times \vec A=\frac 23 \mu_0 \vec M$, and since $\vec B_m=\mu_0(\vec H_m+\vec M)$ we get $$\vec H_m=\frac{\vec B_m}{\mu_0}-\vec M=-\frac{\vec M}{3}$$ We can find the total magnetic field by using the expression I was told in the comments to be untrue: $$\vec B_{total}=\vec B_0+\vec B_m=\mu \vec H_{total}$$ and since $\vec H_{total}=\vec H_m+\vec H_0$ and $\vec B_0=\mu_0 \vec H_0$, and we found earlier that $\vec B_m=\frac 23 \mu_0 \vec M$, we can substitute and find $\vec M$: $$\vec M=\frac{3(\frac{\mu}{\mu_0}-1)\vec B_0}{2\mu_0+\mu}$$ Again, the result itself is irrelevant for my purposes, since I'm trying to understand how there are so many different contributions; I've lost sense of what each letter means. I thought $\vec M$ was the magnetic field caused in the material as a response to an external field $\vec H$, but now it turns out we have $B_0$ and $B_m$, and I'm quite lost. Answer: Here the problem is that you are using too many notations and you are changing them constantly. The master equation in electrostatics: $$ \vec{D}= \epsilon_0 \vec{E} + \vec{P}.$$ (i) $ \vec{D} $ is known as the electric displacement vector. This represents the electric field (but it's really not a true electric field; check its dimensions)
in a system due to the free charges. For example, a charged conductor or ions embedded in a dielectric material are the free charges of this system. We can control these free charges, and consequently we have full control over $ \vec{D}$. (ii) $ \vec{P} $ is the polarisation vector. It is defined as the electric dipole moment per unit volume of a system. Polarised atoms, or atoms having a permanent dipole moment, create tiny dipoles in the system. These tiny dipole moments constitute the polarisation vector. Also note that these tiny dipoles create the bound charges in the system. The value of these bound charges can be obtained from the polarisation vector itself. (iii) $ \vec{E}$ is the total electric field of a system. That is, it's the field due to both the free and bound charges present in the system. (iv) $\epsilon_0$ is obviously the permittivity of free space. Now for linear dielectrics (dielectrics in which the polarisation varies linearly with the electric field $ \vec{E}$), the defining equation is $$ \vec{P} = \epsilon_0 \chi_e \vec{E}. $$ Note that on the RHS we are putting the total field $ \vec{E}$, not $ \vec{D}$. This is the definition. By using this convention everything works out well. This also leads to the equation $ \vec{D} = \epsilon \vec{E} $, where $ \epsilon $ is the permittivity of the dielectric material. Exactly similar quantities appear in magnetostatics too. The master equation in magnetostatics: $$ \vec{H} = (\vec{B}/\mu_0) - \vec{M}. $$ (i) Here $ \vec{H} $ is known as the auxiliary field. This represents the magnetic field (not a true magnetic field) in a system due to the free currents. For example, a constant-current-carrying wire embedded in a paramagnetic material provides the free current of this system. We can control these free currents, and consequently we have full control over $ \vec{H}$. (ii) $ \vec{M} $ is the magnetisation vector. It is defined as the magnetic dipole moment per unit volume of a system.
The atoms in the system act as tiny magnetic dipoles. These tiny dipole moments constitute the magnetisation vector. Also note that these tiny dipoles create the bound currents in the system. The value of these bound currents can be obtained from the magnetisation vector itself. (iii) $ \vec{B}$ is the total magnetic field of the system. That is, it's the field due to both the free and bound currents present in the system. (iv) $\mu_0$ is obviously the permeability of free space. Now for linear magnetic materials (materials in which the magnetisation varies linearly with the auxiliary field $ \vec{H}$), the defining equation is $$ \vec{M} = \chi_m \vec{H}. $$ This also leads to the equation $ \vec{H} = \vec{B}/\mu $, where $ \mu $ is the permeability of the material. Now let’s come to your problem. A sphere of radius R of a linear magnetic material of permeability μ is located in a region of empty space where a uniform magnetic field B0 exists. a) Knowing that the magnetization that appears on the sphere is uniform, calculate M, the dipole moment induced in the sphere and the B field at all points in space. Here, the free currents, i.e. the currents you can control, are producing a magnetic field $\vec{B_0}$ outside the given system. So the auxiliary field will be $ \vec{H} = B_0/\mu_0$ outside the material. Using the equations mentioned above, you need to find the magnetisation vector $ \vec{M} $ and the total magnetic field $ \vec{B}$ inside the material. As you have mentioned, the total magnetic field and the auxiliary field inside the material due to the constant magnetisation alone are $ 2\mu_0 \vec{M}/3$ and $ - \vec{M}/3$, respectively. Therefore in this case the total auxiliary field inside the material is $ \vec{H} = (\vec{B_0}/\mu_0) - (\vec{M}/3)$. So, by putting these in the magnetostatics master equation and using $ \vec{H} = \vec{B}/\mu $ we get $$ \frac{\vec{B_0}}{\mu_0} - \frac{\vec{M}}{3} = \frac{\mu}{\mu_0}\left(\frac{\vec{B_0}}{\mu_0} - \frac{\vec{M}}{3}\right) - \vec{M}.
$$ Simplify the above equation to get your desired result.
{ "domain": "physics.stackexchange", "id": 98791, "tags": "electromagnetism, electrostatics, magnetic-fields, electric-fields, magnetostatics" }
How to decompose a controlled unitary $C(U)$ operation where $U$ is a 2-qubit gate?
Question: In the vein of this question, say I have a 2-qubit unitary gate $U$ which can be represented as a finite sequence of (say) single-qubit gates, CNOTs, SWAPs, cXs, cYs and cZs. Now I need to implement a controlled-$U$ gate and hence need its decomposition in terms of simple one- and two-qubit gates. Is there a standard method to get this decomposed circuit? In Nielsen and Chuang they deal with the general recipe for decomposing $C^n(U)$ gates where $U$ is a single-qubit unitary, but not where $U$ is a $k$-qubit unitary, cf. glS's answer. Answer: By giving $U$ as that sequence you have already reduced the problem to controlled SWAPs, cXs, cYs and cZs. The single-qubit gates you already know how to make controlled. You can reduce further to controlled SWAPs and cXs by conjugating with an appropriate single-qubit unitary $V$ such that $VXV^\dagger=Y$ or $Z$. That gets rid of dealing with the controlled cYs and cZs separately. So now we just have to reduce Toffoli and Fredkin gates to one- and two-qubit gates. A Fredkin can be built from three Toffolis, just like how a SWAP is built from CNOTs, but with an extra control. So now all that remains is decomposing a Toffoli gate into single-qubit and two-qubit gates. That is answered here.
{ "domain": "quantumcomputing.stackexchange", "id": 784, "tags": "quantum-gate, gate-synthesis" }
Can you see a solar eclipse from the International Space Station?
Question: Inspired by last week's solar eclipse, I'm wondering under what conditions one can see a total solar eclipse from the ISS. How often does it happen? I guess it doesn't last very long because of the fast orbital speed. Answer: It's close enough to the earth to be well within the range of solar eclipses. There's always going to be a solar eclipse somewhere in space because the moon will always cast a shadow behind it, well, except for when the moon is eclipsed by the earth. But the moon's shadow passes over the earth just a small percentage of the time. Given that the space station orbits the earth once every hour and a half, it's likely to see eclipses more often than you would see one on earth, assuming you didn't travel to see them. The ratio should be the same, so if an eclipse lasts 1/15th as long from the space station as it does on earth (which is probably about right), then you should see them about 15 times as often. Which, granted, still isn't very often. Eclipses move across the earth, west to east, at about 1,000 MPH, depending on the latitude; closer to the poles, they move slower. The moon orbits the earth at about 2,000 MPH, in the same direction that the earth rotates, and since the ground below also moves eastward, the net speed of the shadow over the ground is about 1,000 MPH. http://sunearthday.nasa.gov/2006/faq.php Here are some photos, though these appear to just be partial: https://sociallyuncensored.com/entry/6896-solar-eclipse-from-space-aboard-the-international-space-station/ Here is the path of the 2017 eclipse, which is closer to the equator than the recent one, so its path is longer. There's a fair chance the station will pass through that path, as it will complete 2 orbits in that time. http://eclipse.gsfc.nasa.gov/SEmono/TSE2017/TSE2017fig/TSE2017-1.gif
{ "domain": "physics.stackexchange", "id": 20649, "tags": "orbital-motion, solar-system, celestial-mechanics, satellites, eclipse" }
Fermion Propagator
Question: Will the fermion propagator change if instead of deriving it from the Lagrangian $$\mathcal{L}=i\bar{\Psi}\gamma^{\mu}\partial_{\mu}\Psi -m\bar{\Psi}\Psi\tag{1}$$ I derive it from $$\mathcal{L}'=\frac{i}{2}(\bar{\Psi}\gamma^{\mu}\partial_{\mu}\Psi- \partial_{\mu}\bar{\Psi}\gamma^{\mu}\Psi) -m\bar{\Psi}\Psi\tag{2}$$ and if yes, what will the new propagator be? The fermion propagator derived from the first Lagrangian, $\mathcal{L}$, is $$(-i)\frac{(-\gamma^{\mu}p_{\mu}+m)}{p^2+m^2-i\epsilon}.\tag{3}$$ EDIT: I will lay out the blueprint for deriving the expression for the Feynman propagator. Define the Feynman propagator as $$S_F(x-y)_{\alpha\beta}= \langle0|T\bar{\Psi}(x)\Psi(y)|0\rangle_{\alpha\beta}\tag{4}$$ and then substitute the mode expansions for $\Psi(y)$ and $\bar{\Psi}(x)$. The result will yield the expression for the Feynman propagator in position space. Invert that to get the Feynman propagator in momentum space. So my question reduces to "whether or not the propagator is determined by the Lagrangian." And I guess the answer is yes, through the mode expansion, which needs to satisfy the classical equations of motion, right? Since the equations of motion are the same for $\mathcal{L}$ and $\mathcal{L}'$, I would suggest that the propagator is the same. But I deliberately didn't write that at the very beginning, to get a feeling for what other people think... So now I would just appreciate some comments/confirmation, if any... Answer: Yes, they give the same propagator. The bulk Lagrangian is defined up to a total derivative, or equivalently a boundary term. Your second formulation is the symmetric version (people sometimes write the action of the partial derivative with $\overleftrightarrow{\partial_\mu}$). The two versions differ by the boundary term: $$ \mathcal L=\mathcal L'+\partial_\mu \left(\frac{i}{2}\bar \psi\gamma^\mu\psi\right) $$ This is why the bulk classical equations of motion are the same.
If the domain is finite, even classically these boundary terms are important (they capture the boundary conditions). For topological theories, the boundary terms are important even for the bulk, as they capture how the field is twisted. This is not the case for a spin-1/2 field. Anyway, the topological considerations do not affect the propagator. Hope this helps.
{ "domain": "physics.stackexchange", "id": 97804, "tags": "lagrangian-formalism, quantum-electrodynamics, fermions, dirac-equation, propagator" }
Problem regarding components of forces in conical pendulum
Question: For the motion of a conical pendulum we can write the equations as $$T_{F}\cos\theta =mg$$ $$T_{F}\sin\theta=\frac{mv^2}{R}$$ $T_{F}$ represents the tension in the string, $v$ is the velocity of the bob at this instant, and $R$ is the radius of the circle. But if we split $mg$ into its components, we can write $$T_{F}=mg\cos\theta$$ (because the length of the string is constant). What will $mg \sin\theta$ do then? Also, $mg \sin\theta$ is not in the plane of the circle. So what causes the centripetal acceleration if I split the components like this? Answer: You have three forces (tension, gravity, and the centrifugal force in the co-rotating frame) that must be in equilibrium; this is independent of the coordinate system that you choose. x-y system: $$\sum F_x={\frac {m{v}^{2}}{R}}-T_{{F}}\sin \left( \theta \right)= 0$$ $$\sum F_y=-mg+T_{{F}}\cos \left( \theta \right)=0 $$ x'-y' system: $$\sum F_x'={\frac {m{v}^{2}\cos \left( \theta \right) }{R}}-\sin \left( \theta \right) mg =0$$ $$\sum F_y'=-{\frac {\sin \left( \theta \right) m{v}^{2}}{R}}-\cos \left( \theta \right) mg+T_{{F}} =0$$ In both coordinate systems you obtain the same solutions: $$T_F=\frac{m\,g}{\cos(\theta)}$$ $$v=\sqrt{g\,R\,\tan(\theta)}$$ Thus the choice of coordinate system cannot affect the results. I am not sure whether this answers your question.
{ "domain": "physics.stackexchange", "id": 75368, "tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram, string" }
C++ identity function
Question: I've implemented an identity function (well, actually a functor struct) in C++. The goal is that every occurrence of an expression expr in the code can be substituted by identity(expr) without changing the semantics of the program at all. I make it a struct with a templated operator() instead of simply a function template because this enables us to pass identity as an argument to other functions (say std::transform) without needing to specify the type (e.g., identity<std::pair<std::vector<int>::const_iterator, std::vector<double>::const_iterator>>). The code is written in C++14.

#include <utility>

static constexpr struct identity_t {
    template<typename T>
    constexpr decltype(auto) operator()(T&& t) const noexcept {
        return std::forward<T>(t);
    }
} identity{};

Usage:

int x = 42;
identity(x) = 24;

constexpr int size = 123;
int arr[identity(size)] = {};

vector<int> v1 = {1, 2, 3, 4}, v2;
v2.reserve(v1.size());
transform(v1.cbegin(), v1.cend(), back_inserter(v2), identity);

template<typename T>
class my_container {
public:
    template<typename F>
    auto flat_map(F&& f) const { ... }
    auto flatten() const { return flat_map(identity); }
    ...
};

Since identity(expr) is nothing more than a cast, performance should not be a problem. I'm concerned whether there are cases in which this implementation fails to keep the program's behavior unchanged. If there are, how can I fix the implementation? EDIT: I changed "a variable x" in the first paragraph to "an expression expr". The latter is more accurate.

Answer: Well, that looks like a straightforward use of perfect forwarding in the parameter (T&&, std::forward<T>) and in the return type (decltype(auto)). So yep, you've got it. There are plenty of places in C++ where x and identity(x) don't do the same thing, either because x is a name as well as an expression, or because x is a construct that looks like an expression but technically is not one.
(Personally, I consider almost all of these places to be mistakes in the design of C++.) Here are the ones I can think of. Replacing identity(x) with x in all of the following examples changes the behavior of the code sample.

As mentioned in the comments on the question, the name of a function template can deduce what kind of function pointer to decay to:

template<class T> void f(T) {}
void (*fp)(int) = identity(f);

The name of a local variable gets copy elision (a.k.a. Named Return Value Optimization, or NRVO):

struct S {
    S() {}
    S(const S&) { puts("copy"); }
    S(S&&) { puts("move"); }
};
S f() { S s; return identity(s); }
int main() { f(); }

The name of a local variable is treated as an rvalue in the context of return:

struct S {
    S() {}
    S(const S&) { puts("copy"); }
    S(S&&) { puts("move"); }
};
std::any f() { S s; return identity(s); }
int main() { f(); }

Lifetime extension applies to temporaries whose construction is "visible" in some sense that is extremely subtle when I start thinking hard about it. Interposing the call to identity disables lifetime extension.

struct S {
    S() { puts("ctor"); }
    S(const S&) = delete;
    S(S&&) = delete;
    ~S() { puts("dtor"); }
};
int main() {
    const S& x = identity(S());
    puts("done");
}

Braced initializers and string literals look like expressions, but in the context of an initializer, they're special cases:

const char x[] = identity("hello");
int y = identity({5});

Now, I can't imagine how you'd use identity in your codebase where any of these quirks might matter; but then, I don't quite see how you plan to use identity at all. Your only fleshed-out example is just using std::transform(..., identity) as a verbose way of writing std::copy(...), which doesn't seem very useful.
{ "domain": "codereview.stackexchange", "id": 28484, "tags": "c++, functional-programming, template, c++14, type-safety" }
Electric dipole (E1) transition why do we assume $I$ remains constant?
Question: I have been looking at the selection rules for electric dipole transitions in the presence of hyperfine structure. The ones of importance for this question are: $$\Delta F=0,\pm 1 \text{ (not $0\rightarrow 0$)}$$ $$\Delta J=0,\pm 1 \text{ (not $0\rightarrow 0$)}$$ The fact that we do not allow $0\rightarrow 0$ for the value of $J$ indicates that we are requiring the value of $J$ to change in such an E1 transition. This means that we assume no transitions where only $I$ changes occur, and from what I can tell we see $I$ as remaining constant in such a transition. Why is such an approximation valid? Answer: Quantum numbers $I$ and $J$ belong to different subsystems of the atom: $I$ is the nuclear spin QN, and $J$ is the total electronic angular momentum QN. Therefore changes in $J$ would be associated with the electric field influencing the electron cloud, and a change in $I$ would imply a change in the state of the nucleus. Now take a look at the electric dipole moment operator $\hat{d} = q\hat{r}$. For a nucleus that is typically several orders of magnitude smaller than the electron orbits, you would expect the same to be true for the transition electric dipole moments. Therefore the transitions where $I$ changes would be much weaker than electronic transitions, and for most practical applications you can assume that only $I$-preserving transitions take place. Another reason why said approximation holds well for optical transitions is the difference in energy scales for electronic and nuclear transitions: the energy of an optical-range photon is simply not sufficient to induce a change in the state of the nucleus.
{ "domain": "physics.stackexchange", "id": 38967, "tags": "atomic-physics, radiation" }
Context free grammar for nested arrays separated by commas
Question: I have to define a context-free grammar for the following rules: (i) a pair of square bracket tokens [] surrounding zero or more values separated by commas; (ii) a value can be another array or a number. A number is represented by the token NUMBER. So, for example, [NUMBER, [NUMBER, NUMBER], NUMBER] is valid. I am stuck as to how to approach this. My intuition is always to look at the question and write something like S -> LSQ VALUE RSQ, VALUE -> VALUE COMMA VALUE | VALUE | ARRAY | e | NUMBER, ARRAY -> LSQ NUMBER RSQ, NUMBER -> NUMBER. But I know this slips up. What steps can I take to ensure I am always thinking in the right way? Answer: Your current grammar doesn't enforce the first condition, "a pair of square bracket tokens [] surrounding zero or more values separated by commas", as an empty string or a NUMBER on its own would be accepted by it. You could use the following CFG to maintain the integrity of the constraints:

array ::= [ ] | [ element ]
element ::= value | value , element
value ::= array | NUMBER

To derive [NUMBER, [NUMBER, NUMBER], NUMBER], start with array:

array
-> [ element ]
-> [ value , element ]
-> [ value , value , element ]
-> [ value , value , value ]
-> [ value , array , value ]
-> [ value , [ element ] , value ]
-> [ value , [ value , element ] , value ]
-> [ value , [ value , value ] , value ]
-> [ NUMBER , [ NUMBER , NUMBER ] , NUMBER ]

The grammar rules provided for JSON here might also be a useful reference: http://json.org/
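The grammar above can be checked mechanically with a small recursive-descent recognizer (a sketch of my own; the tokenizer and function names are invented, and NUMBER is kept as a literal token):

```python
import re

def tokenize(s):
    # Split the input into the tokens: [, ], comma, and NUMBER.
    return re.findall(r"\[|\]|,|NUMBER", s)

def parse_array(tokens, i):
    # array ::= [ ] | [ element ]
    if i >= len(tokens) or tokens[i] != "[":
        return None
    i += 1
    if i < len(tokens) and tokens[i] == "]":
        return i + 1
    i = parse_element(tokens, i)
    if i is None or i >= len(tokens) or tokens[i] != "]":
        return None
    return i + 1

def parse_element(tokens, i):
    # element ::= value | value , element
    i = parse_value(tokens, i)
    if i is None:
        return None
    if i < len(tokens) and tokens[i] == ",":
        return parse_element(tokens, i + 1)
    return i

def parse_value(tokens, i):
    # value ::= array | NUMBER
    if i < len(tokens) and tokens[i] == "NUMBER":
        return i + 1
    return parse_array(tokens, i)

def accepts(s):
    # Accept only if a single array spans the whole token stream.
    tokens = tokenize(s)
    return parse_array(tokens, 0) == len(tokens)
```

Note how each grammar production maps to exactly one function; in particular, accepts("") and accepts("NUMBER") are rejected, which is the constraint the question's original grammar failed to enforce.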
{ "domain": "cs.stackexchange", "id": 6655, "tags": "context-free, formal-grammars" }
Egg dropping puzzle
Question: My problem is a modification of the well-known problem. Suppose you have a building with n floors (where n can be arbitrarily large). Furthermore, you have an indestructible egg at your disposal. If the egg is thrown from a floor which is not the maximum, one gets a message. If the floor was too high, you don't get a message. How can I design an algorithm that uses as few tests as possible without knowing the height of the building? What is the minimum asymptotic runtime of such an algorithm? Answer: Suppose that the true number of floors is $n$. Your question is unclear, but it seems that you are interested in the minimum number of queries of the form "$n < m$?" needed to find $n$. Let us start with a lower bound. Consider a strategy for the task that works for every $n$. We can associate with each $n$ the transcript of the strategy, which is the list of yes/no answers to the queries performed by the strategy. These transcripts constitute a prefix code (why?), and so the number of queries $a_n$ asked when the answer is $n$ satisfies Kraft's inequality: $$ \sum_{n=1}^\infty 2^{-a_n} \leq 1. $$ In particular, $\limsup (a_n - \log n) = \infty$ (since the series $1/n$ diverges), which we write as $a_n \gg \log n$; here $\log n = \log_2 n$. Similarly, $a_n \gg \log n + \log \log n$ (since the series $1/n\log n$ diverges). Let us now attempt to design strategies. One simple strategy asks whether $n < 2^t$ for $t=1,2,\ldots$, until finding a value of $t$ such that $2^t \leq n < 2^{t+1}$; this takes $\log n + O(1)$ queries. Binary search within $[2^t, 2^{t+1})$ takes $\log n + O(1)$ more queries, for a total of $2\log n + O(1)$. A better strategy applies the above strategy recursively on the $t$'s. The first step is to ask whether $n < 2^{2^s}$ for $s=0,1,\ldots$, until finding a value of $s$ such that $2^{2^s} \leq n < 2^{2^{s+1}}$; this takes $\log \log n + O(1)$ queries.
Binary search over $2^{2^s},2^{2^s+1},\ldots,2^{2^{s+1}}$ finds a value of $t$ such that $2^t \leq n < 2^{t+1}$ in $\log \log n + O(1)$ more steps, and finding $n$ using another binary search takes $\log n + O(1)$ steps, for a total of $\log n + 2\log\log n + O(1)$ steps. The two strategies are completely analogous to Elias' gamma coding and delta coding. It is likely that Elias' omega coding can likewise be implemented.
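The first, non-recursive strategy can be sketched as follows (my own illustration; the function and oracle names are invented). It locates $n$ with $2\log_2 n + O(1)$ queries of the form "$n < m$?":

```python
def find_n(oracle):
    """Locate an unknown integer n >= 1 using only queries "is n < m?",
    answered by oracle(m).  Returns (n, number_of_queries_used)."""
    queries = 0

    def less_than(m):
        nonlocal queries
        queries += 1
        return oracle(m)

    # Phase 1 (exponential search): find the first t with n < 2^t.
    # This costs log2(n) + O(1) queries.
    t = 1
    while not less_than(2 ** t):
        t += 1

    # Now n lies in [2^(t-1), 2^t); binary search costs log2(n) + O(1) more.
    lo, hi = 2 ** (t - 1), 2 ** t
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if less_than(mid):
            hi = mid
        else:
            lo = mid
    return lo, queries
```

The recursive strategy from the answer would replace phase 1 by applying the same exponential-then-binary search to the exponent $t$ itself, bringing the total down to $\log n + 2\log\log n + O(1)$ queries.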
{ "domain": "cs.stackexchange", "id": 12478, "tags": "algorithms" }
Connecting charged capacitor to uncharged capacitor by conducting wire
Question: The capacitor A has a charge $q$ on it whereas capacitor B is uncharged. The charge appearing on capacitor B a long time after the switch is closed is found to still be zero. Why? Answer: The fact can be proved mathematically. Initially the charge on the right plate of capacitor A is $-q$ and that on the left plate of capacitor B is zero. The electric fields inside the two capacitors and the energies stored in them are: $$E_A=\frac{q}{A\epsilon_0}$$ $$U_A=(1/2)\epsilon_0{E_A}^2(Ad)=\frac{q^2d}{2\epsilon_0A}$$ $$E_B=0$$ $$U_B=0$$ Let a charge $q'$ move from the right plate of A to the left plate of B when switch S is closed. By charge conservation, the charge remaining on the right plate of A is $-q-q'$. The new electric fields and potential energies are: $$E'_A=\frac{q}{2A\epsilon_0}-\frac{-q-q'}{2A\epsilon_0}$$ $$U'_A=(1/2)\epsilon_0{E'_A}^2(Ad)=\frac{q^2d}{2\epsilon_0A}\left(1+\frac{q'}{2q}\right)^2$$ $$E'_B=\frac{q'}{2A\epsilon_0}$$ $$U'_B=\frac{q'^2d}{8A\epsilon_0}$$ Use energy conservation $U_A+U_B=U'_A+U'_B$ to get $q'=0$ [possible] or $q'=-2q$ [not possible].
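Carrying the final algebraic step through explicitly (my own filling-in of the computation the answer leaves to the reader):

```latex
U_A + U_B = U'_A + U'_B
\;\Longrightarrow\;
\frac{q^2 d}{2\epsilon_0 A}
  = \frac{q^2 d}{2\epsilon_0 A}\left(1+\frac{q'}{2q}\right)^{2}
  + \frac{q'^{2} d}{8 A \epsilon_0}
\;\Longrightarrow\;
0 = \frac{q'}{q} + \frac{q'^{2}}{2q^{2}}
  = \frac{q'}{q}\left(1 + \frac{q'}{2q}\right),
```

a quadratic whose roots are $q'=0$ and $q'=-2q$; only $q'=0$ is physically realizable, so no charge appears on capacitor B.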
{ "domain": "physics.stackexchange", "id": 33544, "tags": "electrostatics, capacitance" }
Determination of auxiliary scale in dimensional regularization
Question: My questions are in italics. In the article [1], dimensional regularization is presented using the electrostatic example of an infinite wire with constant linear charge density $\lambda$. It is shown that the direct computation of the scalar potential gives infinity: $$ \phi({\bf x}) = {\lambda\over 4\pi\epsilon_0}\int_{-\infty}^\infty { d l \over |{\bf x} - {\bf l}| } = {\lambda\over 4\pi\epsilon_0}\int_{-\infty}^\infty { d l \over (x^2 + y^2 + (z-l)^2)^{1\over 2} } = $$ $$ = {\lambda\over 4\pi\epsilon_0}\int_{-\infty}^\infty { d u \over \sqrt{x^2 + y^2 + u^2} } = \infty $$ But with dimensional regularization in the modified minimal subtraction scheme we eventually get: $$ \phi_{\overline{\rm MS}}({\bf x}) = {\lambda\over 4\pi\epsilon_0} \log{\Lambda^2\over x^2 + y^2} $$ where $\Lambda$ is the auxiliary scale parameter. One can then calculate the electric field (let's set $y=0$ from now on) as follows: $$ E_x = -{\partial \over \partial x} \phi_{\overline{\mathrm{MS}}}(x) = -{\partial \over \partial x} {\lambda\over 4\pi\epsilon_0} \log{\Lambda^2\over x^2}= $$ $$ = - {\lambda\over 4\pi\epsilon_0} {x^2\over\Lambda^2} \Lambda^2 \left(-{2\over x^3}\right) = {\lambda\over 2\pi\epsilon_0} {1\over x} $$ The article claims that the original scalar potential is scale invariant: $\phi(kx) = \phi(x)$. But since both $\phi(kx)$ and $\phi(x)$ are infinite, I don't understand the argument. The article claims that dimensional regularization preserves translational symmetry. However, the only way to make $\phi_{\overline{\mathrm{MS}}}(kx)=\phi_{\overline{\mathrm{MS}}}(x)$ is to choose a different $\Lambda$ for each side. Are we allowed to do that? I thought that we have to set $\Lambda$ once and for all and then just keep calculating with it, and it must cancel at the end. Update: based on Michael's comment below I realized that the article claims translational invariance of the original problem, i.e.
$\phi_{\overline{\rm MS}}(x, y, z+h)=\phi_{\overline{\rm MS}}(x, y, z)$ and that is obviously true, because $\phi_{\overline{\rm MS}}({\bf x})$ does not depend on $z$. So I think that answers this particular question. Still a clarification from an expert would be nice. [1] Olness, F., & Scalise, R. (2011). Regularization, renormalization, and dimensional analysis: Dimensional regularization meets freshman E&M. American Journal of Physics, 79(3), 306. doi:10.1119/1.3535586, available online here. Answer: First of all, the procedure is called dimensional regularization, not dimensional renormalization. Regularization is the process by which we make sums and integrals non-singular so that their results aren't infinite – the result "infinity" carries no meaningful physical information because the results of measurements in physics are always particular finite numbers. After the regularization, the results of integrals are manageable finite expressions although they may still diverge in the limits we call "physical". Renormalization is another step in which we carefully distinguish bare values of parameters (in the action) and the observed values, making sure that the theory with the appropriate values of the parameters agrees with the observations. Renormalization is something we would have to do even if the underlying integrals were convergent. It usually follows a regularization procedure but is independent of it. Dimensional regularization is just a methodology to evaluate particular integrals – not mentioning what physical quantities or parameters are expressed by these integrals – so it's clearly a regularization technique, not renormalization technique. 
A broader technique to see these integrals in the loop diagrams and give them the right interpretations for the amplitudes may lead to $\overline{MS}$, em-es-bar, which is a renormalization scheme (a renormalization scheme is given by the choice of the renormalization scale as well as the exact definition of physically measurable quantities, i.e. scattering amplitudes of particles with certain energies, that play the role of the coupling constants for Taylor expansions etc.). But dim. reg. itself is just a regularization technique. Now, the new parameter $\Lambda$ in dim. reg. is auxiliary, newly added, so we finally expect or want it to drop out of the physical expressions. Indeed, $\Lambda$ of dim. reg. also drops out of the final physical expressions although it's only after the full calculation of the physical quantities including the renormalization. In particular, various quantities linked to certain scales may depend on $\Lambda$, or the ratio $\mu/\Lambda$ involving a new auxiliary scale, but the observed/predicted cross sections are linked to other observed cross sections etc. by formulae that contain neither $\Lambda$ nor $\mu$. Scale invariance Below equation 7 of the paper you mentioned, they make it rather clear what they mean by the scale invariance. They mean that the expression, the integral in equation 7, is formally scale-invariant under $x\to kx$. If we just rescale the integration variable $y\to ky$ as well, the factors of $k$ cancel between $dy$ and $1/\sqrt{x^2+y^2}$. The integral itself is divergent but if we were satisfied with this unphysical answer $\infty$, it would be OK for the scale invariance because $\phi(x)=\phi(kx)=\infty$. 
As they make it clear between equations 7 and 12, this scale invariance is subtle for divergent integrals because while we may say that $\infty=\infty$, it's still true that $\infty-\infty$ which appears in physically important quantities (work that is done) is an indeterminate form whose value may be any finite (or infinite) number. In particular, if you rescale $x,y$ by $k$, the $\overline{MS}$ renormalized expression changes additively by $(\lambda/2\pi\epsilon_0)\ln k$ with some sign. It's a purely additive shift that is independent of $x,y,z$ and such an additive shift may be undone by a simple $U(1)$ gauge transformation whose gauge parameter is something like $(\lambda/2\pi\epsilon_0)\ln(k)\cdot t$ i.e. linear in time (because we want to eliminate the temporal $A_0$ component). So the renormalized value of the potential isn't quite scale-invariant because, as you correctly said – and it is easy to verify it by looking at the actual logarithmic expression for the potential – the value of $\Lambda$ would have to be rescaled as well. But if $\Lambda$ isn't rescaled, the change of the potential under $\vec x\to k\vec x$ is just a simple constant additive shift that is equivalent to a gauge transformation so it's still true that all gauge-invariant quantities that may be calculated out of such a potential (and physical, measurable, observable quantities have to be gauge-invariant) are scale-invariant. More precisely, they are "covariant" and get rescaled by the right power $k^\Delta$ where $\Delta$ refers to their dimension. That's why the electric field goes like $1/x$ etc. For this reason, one may use a bit sloppy language and say that the potential itself is scale-invariant. But the extent to which this statement is true for the potential in various forms – formal expression for the integral, the naive result of the integral, or the result in a renormalization scheme – depends on the details in the way sketched above. 
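The additive nature of this shift is easy to verify symbolically. A SymPy sketch (symbol names mine, $y=0$ as in the question, $\Lambda$ held fixed):

```python
import sympy as sp

# Symbol names are my own; y = 0 as in the question, Lambda is NOT rescaled.
x, k, Lam, lam, eps0 = sp.symbols('x k Lambda lambda_ epsilon_0', positive=True)
phi = lam / (4 * sp.pi * eps0) * sp.log(Lam**2 / x**2)

# Rescale x -> k*x with Lambda held fixed: the potential shifts by a constant
shift = sp.simplify(sp.expand_log(phi.subs(x, k * x) - phi))
print(shift)              # a pure constant: -(lambda/(2*pi*epsilon_0)) * log(k)
print(sp.diff(shift, x))  # 0 -> no x dependence, so E_x is unchanged by the rescaling
```

Since the shift is $x$-independent, it is exactly the kind of constant that a gauge transformation can remove, which is the point made above.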
On the other hand, as you wrote, the translational invariance in the direction along the wire is uncontroversial, manifest, and protected by all forms of the potential or the field strength.
{ "domain": "physics.stackexchange", "id": 5886, "tags": "electrostatics, renormalization, regularization" }
NFA for the language that accepts binary strings ending with 00
Question: This is the question: 1.7. Give state diagrams of NFAs with the specified number of states recognizing each of the following languages. In all parts, the alphabet is {0,1}. a. The language {w| w ends with 00} with three states This is how my professor answered it: [diagram not shown] But this is the answer I found on the web: [diagram not shown] But I think the second one should be the correct answer. Answer: The first answer is incorrect. The automaton does not accept, for example, the input word $0100$. In fact, the first automaton misses at least all words in the language that are not of the form $1^*0^*$. The second answer is correct. It is not hard to prove it. Can you try?
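The correct construction can also be checked mechanically. Below is a Python sketch of the standard 3-state NFA for this language (state names and encoding are mine, not taken from either diagram), simulated by tracking the set of currently reachable states:

```python
# Standard 3-state NFA for {w | w ends with 00} over {0,1}:
#   q0 --0,1--> q0   (the final 00 has not started yet)
#   q0 --0-->   q1   (guess: this 0 is the second-to-last symbol)
#   q1 --0-->   q2   (q2 is the accepting state)
DELTA = {
    ('q0', '0'): {'q0', 'q1'},
    ('q0', '1'): {'q0'},
    ('q1', '0'): {'q2'},
}

def nfa_accepts(word):
    """Simulate the NFA by propagating the set of reachable states."""
    states = {'q0'}
    for symbol in word:
        states = set().union(*(DELTA.get((q, symbol), set()) for q in states))
    return 'q2' in states

print(nfa_accepts('0100'))  # True  -- the word the first answer rejects
print(nfa_accepts('00'))    # True
print(nfa_accepts('001'))   # False
```

Brute-forcing all words up to a moderate length against `word.endswith('00')` confirms the automaton recognizes exactly the intended language.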
{ "domain": "cs.stackexchange", "id": 21738, "tags": "automata, finite-automata" }
Which process(es) kept matter in equilibrium with radiation in the early universe?
Question: Which of the following processes kept the matter in thermodynamic equilibrium with radiation? Is it $e^++e^-\leftrightarrow \gamma+\gamma$ or is it scatterings like $e^-+\gamma\rightarrow e^-+\gamma$? Can one explain what happens to both these processes as the temperature of the universe falls? Answer: Both of these processes keep a plasma containing electrons, positrons and photons in equilibrium, since both of them distribute the energy. For example, a high-energy electron-positron pair creates a pair of photons through annihilation (the first process). Then one of these photons interacts with a low-energy electron through Compton scattering (the second process) and transfers a large part of its energy to it. And so on. When the temperature decreases down to $T \sim m_{e}$, the average energy of a photon pair is typically no longer sufficient to produce an electron-positron pair (whose minimal energy is $2m_{e}$). Therefore the process $$ e^{-}+e^{+} \to \gamma + \gamma $$ is no longer reversible. Electron-positron pairs can still annihilate into photons, as long as there are positrons (note that there must be a matter-antimatter asymmetry in the early Universe). This in particular explains why the temperature of the relic photons observed today is larger by a factor of $\left(\frac{11}{4}\right)^{\frac{1}{3}}$ than the relic neutrino temperature. Process 2 remains reversible at these temperatures. Next, suppose that the temperature continues to decrease. The photons feel the interaction $$ e^{-} + \gamma \leftrightarrow \gamma + e^{-} $$ as long as the corresponding interaction rate $\Gamma (e\gamma \to e\gamma)$ is larger than the Hubble rate $H(t) = \frac{\dot{a}}{a}$. The rate $\Gamma (e\gamma\to e\gamma)$ is proportional to the electron number density $n_{e}$, $$ \Gamma(e\gamma \to e\gamma) \simeq n_{e}\sigma_{e\gamma \to e\gamma} $$ At small temperatures, $T \ll m_{e}$, the only remaining charged states in the plasma are the electron $e^{-}$ and the proton $p^{+}$. 
The Universe is assumed to be electrically neutral, so as electrons and protons tend to form bound states (hydrogen atoms), the free electron density tends to zero. If the temperature (and hence the average photon energy) in the plasma is significantly smaller than the hydrogen ionization energy $E_{\text{ion}} = 13.6 \text{ eV}$, the density $n_{e}$ begins to decrease quickly. Then $\Gamma$ becomes smaller than $H$, and the photons decouple (propagating freely as a "frozen" state). Process 2 then effectively ceases.
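The decoupling condition $\Gamma \gtrsim H$ can be illustrated with rough order-of-magnitude numbers. The sketch below uses standard textbook values that I am supplying (not taken from the answer), approximating the scattering rate by the Thomson rate $\Gamma \simeq n_e \sigma_T c$ around recombination at $z \sim 1100$:

```python
# Rough order-of-magnitude comparison of the photon scattering rate
# Gamma = n_e * sigma_T * c with the Hubble rate H near recombination.
# All numerical values are standard approximate figures (my assumptions).
SIGMA_T = 6.65e-29   # Thomson cross section, m^2
C       = 3.0e8      # speed of light, m/s
H0      = 2.2e-18    # Hubble constant, 1/s (~68 km/s/Mpc)
OMEGA_M = 0.31       # matter density parameter
N_B0    = 0.25       # baryon number density today, 1/m^3

z = 1100
H_rec = H0 * OMEGA_M**0.5 * (1 + z)**1.5   # matter-dominated H(z)
n_b   = N_B0 * (1 + z)**3                  # baryon number density at z

gamma_ionized = n_b * SIGMA_T * C          # fully ionized plasma (x_e ~ 1)
gamma_after   = 1e-4 * n_b * SIGMA_T * C   # residual ionization x_e ~ 1e-4

print(gamma_ionized > H_rec)  # True: photons tightly coupled before recombination
print(gamma_after < H_rec)    # True: after recombination the photons decouple
```

With full ionization the scattering rate exceeds $H$ by roughly two orders of magnitude; once the ionization fraction falls to $\sim 10^{-4}$, the rate drops below $H$ and the photons free-stream.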
{ "domain": "physics.stackexchange", "id": 36773, "tags": "cosmology, universe" }