Complexity of specific cases of MAX2SAT
Question: I know that MAX2SAT is NP-complete in general, but I'm wondering whether certain restricted cases are known to be in P. Certainly the languages $L_k:=\{ \phi \,|\, \phi\,\text{is an instance of 2SAT which has an assignment satisfying at least k clauses.}\}$ can be solved in $O(n^k)$ through brute-force search, since for each language $k$ is fixed. However, I'm wondering about the case when a fraction of the clauses is specified. Does any fraction yield an NP-hard problem? Specifically I'm wondering about the case of satisfying at least half of the clauses of a 2SAT instance. The reduction I saw from 3SAT to MAX2SAT constructs 10 clauses from each clause in 3SAT such that out of these ten, exactly 7 are satisfied when the original 3SAT clause is satisfied and at most 6 are satisfied when the original clause is not satisfied. So in this reduction the fraction of $7/10$ works but $1/2$ does not, because unsatisfying truth assignments of a 3SAT instance can still yield an instance of 2SAT that has an assignment satisfying more than half of the clauses. I thought about another construction, or adding extra clauses to an instance of 2SAT, but I've been unsuccessful so far. Answer: You can always satisfy at least half of the clauses: for each variable $x$, find the number of clauses that contain $x$ and the number of clauses that contain $\lnot x$. Select the assignment which satisfies the most clauses. Remove the clauses containing $x$ and $\lnot x$. Repeat for the other variables. Since for each $x$ we satisfy at least half of the removed clauses, we satisfy at least half of the clauses overall. On the other hand, it's also tight: let $\alpha > \frac 12$ be the fraction of clauses for which we can give an answer. Let $\beta > \frac 12$ be the maximum fraction of clauses we can satisfy in a specific instance. 
Then we can add clauses so that $\beta$ (for the new instance) becomes arbitrarily close to $\alpha$: If $\beta < \alpha$, then we can add clauses $(x_i \lor \lnot x_i)$ until $\beta > \alpha$ (since these clauses are always true, $\beta$ increases). If $\beta > \alpha$, we can add clauses $(x_i)$ and $(\lnot x_i)$ until $\beta < \alpha$ (since exactly half of these clauses are true, $\beta$ decreases). I didn't check, but to get an $O(\frac 1m)$ difference (i.e. to find the exact number of clauses), I think it suffices to add $O(m)$ clauses. In other words, if we can solve for some $\alpha > \frac 12$, we can check for any $\beta$ whether a $\beta$ fraction of clauses can be satisfied, and therefore we can solve MAX2SAT in polynomial time.
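The greedy half-satisfaction argument above can be sketched in code; a minimal illustration (the clause encoding and function name are my own, not from the answer):

```python
def greedy_half(clauses):
    """Greedily assign variables so at least half of all clauses are satisfied.

    clauses: list of tuples of nonzero ints; literal k means variable |k|
    with polarity sign(k), e.g. (1, -2) encodes (x1 OR NOT x2).
    """
    remaining = list(clauses)
    satisfied = 0
    while remaining:
        # pick any variable still appearing in some clause
        v = abs(remaining[0][0])
        pos = [c for c in remaining if v in c]    # satisfied by v = True
        neg = [c for c in remaining if -v in c]   # satisfied by v = False
        # choose the polarity satisfying more of the clauses mentioning v
        satisfied += max(len(pos), len(neg))
        # remove every clause mentioning v, satisfied or not
        remaining = [c for c in remaining if v not in c and -v not in c]
    return satisfied
```

Each step satisfies at least half of the clauses it removes, so the total is at least half of all clauses.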
{ "domain": "cs.stackexchange", "id": 16747, "tags": "complexity-theory, np-complete, reductions, satisfiability, 2-sat" }
How is PIV control performed?
Question: I'm considering experimenting with PIV control instead of PID control. Contrary to PID, PIV control has very little coverage in the literature and on the internet. There is essentially a single source of information explaining the method: a technical paper by Parker Motion. What I understand from the control method diagram (which is in the Laplace domain) is that the control output boils down to the sum of: Kpp*(integral of position error) -Kiv*(integral of measured velocity) -Kpv*(measured velocity) Am I correct? Thank you. Answer: It seems to me like there are three basic differences between the classic PID topology and the so-called PIV topology mentioned in the white paper: Desired velocity is assumed to be proportional to position error; the $K_p$ term regulates this. The integral error gain $K_i$ works to remove steady-state errors in velocity, not position. That is essentially the same thing, however, due to item #1. The velocity estimate is fed directly through the $K_v$ term (instead of considering the derivative of position error). In the paper they claim that the main advantage of this topology is that it is easier to tune. The output of the controller is formed as follows: $e_\theta=\theta^*-\theta\\ e_\omega = (K_pe_\theta-\hat{\omega})\\ \text{output} = \int K_ie_\omega dt - K_v\hat{\omega}$ Of course, since you will probably be programming this, the integral is replaced by an accumulator variable as follows: $e_\theta=\theta^*-\theta\\ e_\omega = (K_pe_\theta-\hat{\omega})\\ \text{integral} = \text{integral} + K_ie_\omega \Delta t \\ \text{output} = \text{integral} - K_v\hat{\omega}$
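In code, the accumulator form of those update equations might look like this minimal sketch (the class name, gain values, and the source of the velocity estimate $\hat{\omega}$ are placeholders, not from the paper):

```python
class PIVController:
    """Discrete PIV controller following the accumulator equations above."""

    def __init__(self, kp, ki, kv):
        self.kp, self.ki, self.kv = kp, ki, kv
        self.integral = 0.0  # accumulator replacing the integral

    def update(self, theta_ref, theta, omega_hat, dt):
        e_theta = theta_ref - theta              # position error
        e_omega = self.kp * e_theta - omega_hat  # velocity error
        self.integral += self.ki * e_omega * dt  # integral accumulator
        return self.integral - self.kv * omega_hat
```

`update` would be called once per control period with the measured position, the velocity estimate, and the sample time.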
{ "domain": "robotics.stackexchange", "id": 174, "tags": "control, otherservos, pid" }
What is the most accurate experimental confirmation of Rutherford's $\sin^{-4}\frac{\phi}{2}$ law?
Question: What is the most accurate experimental confirmation to date of Rutherford's $\sin^{-4}\frac{\phi}{2}$ law, where $\phi$ is the scattering angle? Answer: This paper* seems to be one of the most recent papers that concerns itself with the OP's equation directly. Anything since then appears to take it as fact and use it to determine something else. *Large-angle scattering of light ions in the weakly screened Rutherford region. Phys. Rev. A 21, 1891 – Published 1 June 1980 - H. H. Andersen, F. Besenbacher, P. Loftager, and W. Möller
{ "domain": "physics.stackexchange", "id": 19424, "tags": "experimental-physics, atomic-physics, scattering, atoms" }
Induced emf and monopoles paradox
Question: Faraday's law states that the induced emf is equal to the rate of change of magnetic flux, but we also know that the net magnetic flux through a closed surface is zero since magnetic monopoles don't exist, so how can the flux change? Answer: Assuming that magnetic monopoles don't exist, the divergence of $\mathbf{B}$ is zero, so magnetic flux lines cannot have an end - they have to be closed loops. And this is exactly what the flux lines are in e.g. a solenoid. Zero divergence does not mean there cannot be any flux lines. The flux in Faraday's law is taken through an open surface bounded by the circuit, not through a closed surface, so it need not vanish and it can change with time.
{ "domain": "physics.stackexchange", "id": 29395, "tags": "electromagnetism" }
Do Events Conditional to Simultaneity Occur in Every Reference Frame?
Question: Imagine a train with a table that has a light on both ends (so that the line between the two sources of light is parallel to the line of motion of the train) and a light sensor in the perfect middle. The lights start out off with the train moving at a high speed. Then, the lights simultaneously (in the reference frame of the train) turn on. The sensor detects whether both beams of light hit it at the same time or not. If the light hits the sensor from both sides at once, the sensor turns on a bright light that can be seen from outside the train. Since, in the reference frame of the train, the lights both go on simultaneously, the light sensor will detect that and turn on its bright light. However, a camera sits on an embankment near the train waiting for it to pass (let's just say that the lights are triggered by something on the tracks right before the camera). When the lights turn on, the camera standing on the embankment will see... which of the following options?
1. The camera sees both lights turn on (and can calculate that it was perfectly equidistant to both light sources when the rays of light left them), but since the photo sensor is moving away from one beam of light and toward the other, the beams of light do not reach the photo sensor at the same time.
1.1. But the camera sees the bright light turn on anyway.
1.2. And the camera doesn't see the bright light turn on.
2. The camera sees both sources of light turn on at different times such that the beams reach the photo sensor at the same time.
3. The camera sees both sources of light turn on at different times, and the beams of light still don't reach the photo sensor at the same time.
3.1. But the camera still sees the bright light turn on anyway.
3.2. And the camera doesn't see the bright light turn on even though it turned on in the frame of the train.
Would this thought experiment follow option 1.1, 1.2, 2, 3.1, or 3.2? 
If it follows option 2, is there a way to set up a thought experiment so that the criteria to turn on the bright light is met in the reference frame of the train but not in the reference frame of the embankment? If so, would the camera see the bright light turn on anyway, or would the camera on the embankment see the light stay off? In short, if an event is conditional upon the simultaneity of two events, will the conditional event take place even in a frame of reference where the condition is not met, or will the event take place regardless of any frame of reference other than that of whatever is deciding the simultaneity of the two events? EDIT This program graphs the thought experiment with the horizontal component being position and the vertical component being time. The frame of reference of the train is given in black, and the frame of reference of the embankment is given in red. The yellow lines represent the positions of photons given a particular time. Note that the red points and the black points are not in the same coordinate system. The red points are the black points after the Lorentz transformation. Black A and B are the positions (in space and in time) that the lights first turn on giving off their first photons, whose positions versus time are graphed in yellow. Black M is the midpoint of line segment AB and is the location of the light sensor. Note that since M is just a location and not a time, it should technically be represented as a vertical line. Both yellow lines intersect with this should-be-line-segment M and each other at black point C (a time and location). Since they intersect with M at the same point, the events are recorded as simultaneous. Red A' and B' are black A and B after the Lorentz transformation (aka, they're the equivalents of A and B in the reference frame of the embankment). 
While the black points A and B are at the same height on the screen (they both possess the same time component and are thus simultaneous), red A' and B' are not graphed at the same height and thus are not simultaneous. So that eliminates option 1 because A and B are not simultaneous in the reference frame of the camera on the embankment. Notice now the yellow lines protruding from red A' and B'. Those are the photons that are emitted by both lights. Now, as black M should have been graphed as a vertical line, red M' should have been graphed as a line slanted toward the left (the line that passes through red M' and red C'). Thus, it is clear that the light from B' intersects M' long before (below) the light from A' intersects M'. Now, to put the speed of light more to scale, all the yellow lines should have been much more horizontal. Thus, putting the whole thing to scale would exaggerate the fact that the light from B' intersected M' before the light from A' did (you can change the speed of light in this simulation by changing the number on the first line of code: "var c = 3;". You can also change the speed of the train on the second line to put things into scale. Everything is in SI units). Correct me if I'm wrong, but using the math of the Lorentz transformation seems to indicate option 3. Both red A' and B' are not simultaneous because they have different time components. Also, the beams of light do not intersect the photo sensor simultaneously (in the perspective of the embankment) because the rays of light do not intersect line M' -> C' at the same height or time component. So it seems that option 3 is correct, in which case I still don't know if sub-option 1 or 2 is correct. UPDATE 2 I was being inconsistent in my diagram with representing time as seconds or meters. I fixed it, and it's now working as option 2. Answer: The stationary observer would see the two lights on the train light up one after the other such that they hit the sensor at the same time. 
That is, option 2. In general, when the "bright light" turns on, it will be observed in all reference frames. The answer to the second part of your question is that there is no way to set up the problem in the way you ask. Note that the defining criterion in the way you've set up the problem was that two events at the same spatial location happened at the same time. This will be agreed upon by all reference frames. Relativity of simultaneity only applies to spatially separated events (like the lights in the front and back of your train), not to events that are at the same point in space.
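A quick numerical check of both claims (a sketch in units with c = 1; the speed and half-length values are arbitrary illustrative numbers): the two emission events get different time coordinates in the other frame, while the meeting of the beams at the sensor is a single event with one set of coordinates in every frame.

```python
import math

def lorentz(t, x, v):
    """Transform event (t, x) into a frame moving at speed v (units with c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

v, L = 0.6, 1.0  # relative speed 0.6c, table half-length 1
tA, xA = lorentz(0.0, -L, v)  # one light turns on (simultaneous in train frame)
tB, xB = lorentz(0.0, +L, v)  # other light turns on
tC, xC = lorentz(L, 0.0, v)   # both beams meet at the sensor: ONE event
```

With these numbers the emissions land at different transformed times, but the sensor event `(tC, xC)` is unique, so whether the bright light turns on cannot depend on the frame.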
{ "domain": "physics.stackexchange", "id": 55564, "tags": "special-relativity, inertial-frames, observers, thought-experiment" }
How can I tell if a photodetector is thermal noise limited or shot noise limited?
Question: Most datasheets for photodetectors only specify the noise equivalent power, making no distinction between thermal noise (Johnson noise) and shot noise. For modelling purposes, how do I know whether the photodetector is shot-noise limited or thermal noise limited? Answer: Noise equivalent power (NEP) is a parameter that depends on thermal noise only. From RP Photonics, When a photodetector does not get any input light, it nevertheless produces some noise output with a certain average power, which is proportional to the square of the r.m.s. voltage or current amplitude. The noise-equivalent power (NEP) of the device is the optical input power which produces an additional output power identical to that noise power for a given bandwidth (see below). Note that NEP also depends on the bandwidth considered, and that a receiver's NEP is often specified for a bandwidth of only 1 Hz. For your application, you must scale the NEP by the square root of your system's bandwidth to get the noise measure appropriate for your system. Shot noise ideally doesn't depend on the detector used (a non-unity quantum efficiency will increase the effective shot noise), only on the frequency and power of the signal applied. The shot noise power spectral density is given by $$S(f) = h\nu\bar{P}$$ where $\nu$ is the optical frequency and $\bar{P}$ is the average power received. As it doesn't depend on any parameter of the receiver, it wouldn't make sense for it to be specified on a receiver datasheet. Your system is typically considered shot noise limited if the shot noise is greater than the thermal (or other) noise at the detection threshold. Typically (at optical and NIR frequencies) this only happens if you use a detector with internal gain, such as a photomultiplier or avalanche photodetector.
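As a rough sketch of this comparison (the function name and all numbers are illustrative placeholders, not real datasheet values; the shot-noise term uses the $S(f) = h\nu\bar{P}$ density from above):

```python
import math

PLANCK = 6.626e-34  # Planck constant, J*s
C = 3e8             # speed of light, m/s

def shot_noise_limited(nep, bandwidth_hz, wavelength_m, mean_power_w):
    """Rough check: is the shot noise above the NEP-derived thermal floor?"""
    nu = C / wavelength_m
    # NEP (W/sqrt(Hz)) scaled to the system bandwidth, then squared
    thermal = (nep * math.sqrt(bandwidth_hz)) ** 2
    # shot-noise power variance from S(f) = h*nu*P, integrated over bandwidth
    shot = PLANCK * nu * mean_power_w * bandwidth_hz
    return shot > thermal
```

For example, at 1550 nm with 1 mW received and a 1 MHz bandwidth, a detector with NEP around 1 pW/√Hz would come out shot-noise limited, while one with 100 pW/√Hz would not.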
{ "domain": "physics.stackexchange", "id": 84590, "tags": "optics, photon-emission, noise" }
Perfect lossless rotation of JPEG image (ITU 81)
Question: JPEG image compression is DCT based (ITU-T T.81). It divides the image into 8x8 pixel blocks, and processes each using a Discrete Cosine Transform. The results are quantised and then encoded. Looking at the MSDN documentation: Transforming a JPEG without loss of information. It seems that both width and height are required to be a multiple of 16 to perform a perfect 90° rotation. Now if we inspect the source code of the open-source implementation jpegtran, we realize that the width / height are required to be a multiple of 8 to perform a perfect 90° rotation: jtransform_perfect_transform(JDIMENSION image_width, JDIMENSION image_height, [...] case JXFORM_ROT_90: if (image_height % (JDIMENSION)MCU_height) result = FALSE; ref: https://github.com/libjpeg-turbo/libjpeg-turbo/blob/2.1.0/transupp.c#L2281-L2283 This can easily be verified from the command line: % convert -size 808x808 xc:white canvas.jpg % jpegtran -perfect -rotate 90 -outfile rot90.jpg canvas.jpg && echo $? 0 So in conclusion it appears that perfect rotation can be achieved with the image size being a multiple of 8, since we are only shuffling values within one 8x8 block. Why would the MSDN documentation take extra care and require the image size to be a multiple of 16, then? Or is jpegtran actually producing a non-perfect 90° rotation? Update. My original test was using a single component; if I try with 3 components I get the same result: % convert -size 808x808 xc:#990000 canvas2.jpg % file canvas2.jpg canvas2.jpg: JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 808x808, components 3 % jpegtran -perfect -rotate 90 -outfile rot90.jpg canvas2.jpg && echo $? 0 Answer: The basic unit for color components is 8 by 8 only if no chroma subsampling is used, i.e. the 4:4:4 format. Most likely your test image uses this, but not all images do. When 4:2:2 chroma subsampling is used, the basic unit is 16 by 8 pixels. 
In 4:2:0 chroma subsampling, the basic unit is 16 by 16 pixels.
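The MCU-size dependence can be expressed as a small check mirroring the `jtransform_perfect_transform` condition quoted above for 90° rotation (the function names and the sampling-factor parameterization are my own):

```python
def mcu_size(max_h_samp, max_v_samp):
    """MCU dimensions: 8 pixels times the maximum sampling factor per axis."""
    return 8 * max_h_samp, 8 * max_v_samp

def can_rotate_90(width, height, max_h_samp=1, max_v_samp=1):
    # mirrors libjpeg's check: a perfect 90-degree rotation requires the
    # image height to be a multiple of the MCU height
    mcu_w, mcu_h = mcu_size(max_h_samp, max_v_samp)
    return height % mcu_h == 0
```

So the 808x808 test image passes with 4:4:4 (MCU 8x8) but would fail the perfect-rotation check with 4:2:0 (MCU 16x16), which is presumably the worst case MSDN's multiple-of-16 rule guards against.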
{ "domain": "dsp.stackexchange", "id": 10286, "tags": "fft, jpeg" }
straight-line simulatability
Question: Does anybody know a good reference for the meaning of straight-line simulatability? I am currently deep into the Universal Composability (UC) framework of Canetti, but I can't find any good reference for the meaning of straight-line simulatability. Any help is appreciated. Answer: Here, "straight-line" is contrasted with "rewinding". A simulator is "straight-line" if it does not "rewind" the party it is doing the simulation for. For instance, in a zero-knowledge protocol, the simulator usually rewinds the "verifier". In the "straight-line" sense, this rewinding does not happen. I first saw the term "straight-line simulator" in Rafael Pass's paper (On Deniability in the Common Reference String and Random Oracle Model (CRYPTO'03)) and M.Sc. thesis (Alternative Variants of Zero-Knowledge Proofs). Edit: I found an earlier paper: Concurrent Zero-Knowledge: Reducing the Need for Timing Constraints by Cynthia Dwork and Amit Sahai, which dates back to 1998. For more pointers, see Alon Rosen's comment below.
{ "domain": "cstheory.stackexchange", "id": 654, "tags": "cr.crypto-security" }
XGBoost: # rounds is equal to n_estimators?
Question: I'm running a regression XGBoost model and trying to prevent over-fitting by watching the train and test error using this code: eval_set = [(X_train, y_train), (X_test, y_test)] xg_reg = xgb.XGBRegressor(booster='gbtree', objective ='reg:squarederror', max_depth = 6, n_estimators = 100, min_child_weight = 1, learning_rate = 0.05, seed = 1,early_stopping_rounds = 10) xg_reg.fit(X_train,y_train,eval_metric="rmse", eval_set = eval_set, verbose = True) This prints out as follows: [93] validation_0-rmse:0.233752 validation_1-rmse:0.373165 [94] validation_0-rmse:0.2334 validation_1-rmse:0.37314 [95] validation_0-rmse:0.232194 validation_1-rmse:0.372643 [96] validation_0-rmse:0.231809 validation_1-rmse:0.372675 [97] validation_0-rmse:0.231392 validation_1-rmse:0.372702 [98] validation_0-rmse:0.230033 validation_1-rmse:0.372244 [99] validation_0-rmse:0.228548 validation_1-rmse:0.372253 However, I've noticed the number of training rounds printed out and in the evals_results always equals the n_estimators. In [92]: len(results['validation_0']['rmse']) Out[92]: 100 If I change the number of trees to 600, the # of rounds goes up to 600, etc. I was under the impression that what's being printed is the metric result from each round of training, which includes training all the trees at once. What is going on here? Is each layer of trees considered a separate training round? Answer: For gradient boosting, there really is no concept of a "layer of trees" which I think is where the confusion is happening. Each iteration (round) of boosting fits a single tree to the negative gradient of some loss function (calculated using the predictions from the updated model in all prior iterations), in your case, root mean squared error. The algorithm does not fit an ensemble of trees at each iteration. Each tree is then added (with optimal weighting) to all prior fitted trees + optimal weights in previous iterations to come up with final predictions. 
The validation scores you see are, as a result, the scores of your complete model up to and including the tree fitted at that iteration. So this line here, for instance: [93] validation_0-rmse:0.233752 validation_1-rmse:0.373165 is the performance on validation_0 and validation_1 of the model built from all trees fitted so far, each fit to the gradients generated in prior iterations.
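A toy version of this makes both points concrete (a pure-Python sketch where each round's "tree" is just a fitted constant standing in for a real tree): one weak learner is fit per round to the residuals, and the per-round score evaluates the cumulative model, so the eval history has exactly one entry per boosting round.

```python
def boost(y_train, y_valid, n_rounds, lr=0.5):
    """Minimal gradient boosting for squared error with constant weak learners."""
    pred_train = [0.0] * len(y_train)
    pred_valid = [0.0] * len(y_valid)
    history = []
    for _ in range(n_rounds):
        # fit this round's "tree" to the negative gradient (= residuals for MSE)
        residuals = [t - p for t, p in zip(y_train, pred_train)]
        step = lr * sum(residuals) / len(residuals)
        # add it to the cumulative model
        pred_train = [p + step for p in pred_train]
        pred_valid = [p + step for p in pred_valid]
        # score the CUMULATIVE model, exactly like xgboost's per-round print
        rmse = (sum((t - p) ** 2 for t, p in zip(y_valid, pred_valid))
                / len(y_valid)) ** 0.5
        history.append(rmse)
    return history
```

Just as with `n_estimators` in the question, asking for 100 rounds yields a history of length 100.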
{ "domain": "datascience.stackexchange", "id": 5918, "tags": "scikit-learn, regression, xgboost" }
Hybridization of nitrogen in a ring
Question: Is there a well-defined way to discern the hybridization of a nitrogen atom in a ring, like pyrrole? How can you know whether the nitrogen's lone pair is in the conjugated system or not? Answer: If an $\alpha$-$\beta$ unsaturated heteroatom isn't already participating in a $\pi$-bond and it has a lone pair of electrons, these electrons will be delocalized and in the $\mathrm{p}$-orbital. Let's compare pyridine and pyrrole: $\hspace{5cm}$ The nitrogen in pyridine is already participating in a double bond, so it is clearly $\mathrm{sp^2}$ hybridized. The nitrogen in pyrrole is $\alpha$-$\beta$ unsaturated, so its lone pair is delocalized. Both of these molecules are heterocyclic aromatic rings, but these are not the only cases in which a nitrogen atom's lone pair is delocalized. It is also delocalized in amides (which resemble the $\alpha$-$\beta$ unsaturation of pyrrole): $\hspace{4.6cm}$ If you want to know more about the criteria for aromaticity, read about Hückel's rule.
{ "domain": "chemistry.stackexchange", "id": 5554, "tags": "organic-chemistry, aromatic-compounds, hybridization, heterocyclic-compounds" }
Efficient algorithm for getting from 1 to n with 3 specific operations
Question: The question: Given those 3 valid operations over numbers and an integer $n$: add $1$ to the number; multiply the number by $2$; multiply the number by $3$; describe an efficient algorithm for the minimal number of operations needed to get from $1$ to $n$ with the 3 operations mentioned above. For example, for $n=28$ the answer is $4$: $1\times 3\times 3\times 3+1=27+1=28$. My approach: I noticed that there is a recursive algorithm that will provide the answer, but even when I used memoization the algorithm took a lot of time to finish with $n\geq 1000$. I thought of a way to start with $n$ instead and try to reduce it to $1$ with the inverse operations of subtracting $1$ and dividing by $2$ or by $3$, trying to make it divisible by $3$ or by $2$ by subtracting $1$'s and checking the mod. But my second approach had some (more than some) mistakes, where it stated that the smallest number of operations is larger than it actually is. Please help or give a hint; I'm clearly missing some key fact about the nature of such operations. Edit:
def ToN(n):
    d=dict()
    def to(x,num,di):
        if (num==x):
            return 0
        elif (num>x):
            return num
        elif num in di:
            return di[num]
        else:
            if num+1 not in di:
                di[num+1]=to(x,num+1,di)
            if num*2 not in di:
                di[num*2]=to(x,num*2,di)
            if num*3 not in di:
                di[num*3]=to(x,num*3,di)
            di[num]=min(di[num+1],di[num*2],di[num*3])+1
            return di[num]
    return to(n,1,d)
I wrote the code above in Python and it takes a lot of time to finish for n=1000. Can you help me understand what is wrong w.r.t. efficiency? Answer: Find the shortest path from $1$ to $n$ on an appropriate graph on vertices $\{1, \dots, n\}$. This approach will work whenever it's guaranteed that intermediate values in the calculations will lie within some bounded range.
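The shortest-path suggestion might look like this breadth-first search over the implicit graph on $\{1, \dots, n\}$, where every edge applies one of the three operations (a sketch, not the answerer's code):

```python
from collections import deque

def min_ops(n):
    """Minimal number of +1, *2, *3 operations to reach n from 1, via BFS."""
    dist = {1: 0}
    queue = deque([1])
    while queue:
        x = queue.popleft()
        if x == n:
            return dist[x]
        # the three outgoing edges; values above n can never be useful,
        # since no operation decreases the number
        for y in (x + 1, x * 2, x * 3):
            if y <= n and y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
```

Each vertex is visited once, so this runs in O(n) time, versus the deep recursion (and enormous intermediate values) of the memoized top-down version in the question.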
{ "domain": "cs.stackexchange", "id": 6884, "tags": "algorithms, recursion, integers" }
Python - Tkinter - periodic table of chemical elements
Question: Inspired by a question on StackOverflow I decided to code a GUI that is simple, efficient and can be used in other projects as well. I wanted to share this code since it is probably useful to other people as well. You may want to share some practical hints on how to make this code even better. The code produces a table of frames and shows the information, which I gathered over about 5 hours from Wikipedia, in the final output. The frames are made clickable to widen the use case. I hope you enjoy this bit of code. Database: symbols = ['H','He','Li','Be','B','C','N','O','F','Ne', 'Na','Mg','Al','Si','P','S','Cl','Ar','K', 'Ca', 'Sc', 'Ti', 'V','Cr', 'Mn', 'Fe', 'Co', 'Ni', 'Cu', 'Zn', 'Ga', 'Ge', 'As', 'Se', 'Br', 'Kr', 'Rb', 'Sr', 'Y', 'Zr', 'Nb', 'Mo', 'Tc', 'Ru', 'Rh', 'Pd', 'Ag', 'Cd', 'In', 'Sn', 'Sb', 'Te', 'I', 'Xe','Cs', 'Ba','La', 'Ce', 'Pr', 'Nd', 'Pm', 'Sm', 'Eu', 'Gd', 'Tb', 'Dy', 'Ho', 'Er', 'Tm', 'Yb', 'Lu', 'Hf', 'Ta', 'W', 'Re', 'Os', 'Ir', 'Pt', 'Au', 'Hg', 'Tl', 'Pb', 'Bi', 'Po', 'At', 'Rn', 'Fr', 'Ra', 'Ac', 'Th', 'Pa', 'U', 'Np', 'Pu', 'Am', 'Cm', 'Bk', 'Cf', 'Es', 'Fm', 'Md', 'No', 'Lr', 'Rf', 'Db', 'Sg', 'Bh','Hs', 'Mt', 'Ds', 'Rg', 'Cn', 'Nh', 'Fl', 'Mc', 'Lv', 'Ts', 'Og'] keywords =['name','index','elementkategorie','gruppe','periode','block', 'atommasse','aggregatzustand','dichte','elektronegativität'] values = [['Wasserstoff',1,'Nichtmetalle',1,1,'s',1.01,'gasförmig',0.08,2.2],#H ['Helium',2,'Edelgase',18,1,'s',4.00,'gasförmig',0.18,'n.A'],#He ['Lithium',3,'Alkalimetalle',1,2,'s',6.94,'fest',0.53,0.98],#Li ['Beryllium',4,'Erdalkalimetalle',2,2,'s',9.01,'fest',1.84,1.57],#Be ['Bor',5,'Halbmetalle',13,2,'p',10.81,'fest',2.46,2.04],#B ['Kohlenstoff',6,'Nichtmetalle',14,2,'p',12.01,'fest',2.26,2.55],#C ['Stickstoff',7,'Nichtmetalle',15,2,'p',14.00,'gasförmig',1.17,3.04],#N ['Sauerstoff',8,'Nichtmetalle',16,2,'p',15.99,'gasförmig',1.43,3.44],#O ['Fluor',9,'Halogene',17,2,'p',18.99,'gasförmig',1.70,3.98],#F 
['Neon',10,'Edelgase',18,2,'p',20.17,'gasförmig',0.90,'n.A'],#Ne ['Natrium',11,'Alkalimetalle',1,3,'s',22.99,'fest',0.97,0.93],#Na ['Magnesium',12,'Erdalkalimetalle',2,3,'s',24.31,'fest',1.74,1.31],#Mg ['Aluminium',13,'Metalle',13,3,'p',26.98,'fest',2.69,1.61],#Al ['Silicium',14,'Halbmetalle',14,3,'p',28.08,'fest',2.34,1.90],#Si ['Phosphor',15,'Nichtmetalle',15,3,'p',30.97,'fest',2.4,2.19],#P ['Schwefel',16,'Nichtmetalle',16,3,'p',32.06,'fest',2.07,2.58],#S ['Chlor',17,'Halogene',17,3,'p',35.45,'gasförmig',3.22,3.16],#Cl ['Argon',18,'Edelgase',18,3,'p',39.95,'gasförmig',1.78,'n.A'],#Ar ['Kalium',19,'Alkalimetalle',1,4,'s',39.09,'fest',0.86,0.82],#K ['Calicium',20,'Erdalkalimetalle',2,4,'s',40.08,'fest',1.55,1.00],#Ca ['Scandium',21,'Übergangsmetalle',3,4,'d',44.96,'fest',2.99,1.36],#Sc ['Titan',22,'Übergangsmetalle',4,4,'d',47.87,'fest',4.5,1.54],#Ti ['Vandium',23,'Übergangsmetalle',5,4,'d',50.94,'fest',6.11,1.63],#V ['Chrom',24,'Übergangsmetalle',6,4,'d',51.99,'fest',7.14,1.66],#Cr ['Mangan',25,'Übergangsmetalle',7,4,'d',54.94,'fest',7.43,1.55],#Mn ['Eisen',26,'Übergangsmetalle',8,4,'d',55.85,'fest',7.87,1.83],#Fe ['Cobalt',27,'Übergangsmetalle',9,4,'d',58.93,'fest',8.90,1.88],#Co ['Nickel',28,'Übergangsmetalle',10,4,'d',58.69,'fest',8.90,1.91],#Ni ['Kupfer',29,'Übergangsmetalle',11,4,'d',63.54,'fest',8.92,1.90],#Cu ['Zink',30,'Übergangsmetalle',12,4,'d',65.38,'fest',7.14,1.65],#Zn ['Gallium',31,'Metalle',13,4,'p',69.72,'fest',5.90,1.81],#Ga ['Germanium',32,'Halbmetalle',14,4,'p',72.63,'fest',5.32,2.01],#Ge ['Arsen',33,'Halbmetalle',15,4,'p',74.92,'fest',5.73,2.18],#As ['Selen',34,'Halbmetalle',16,4,'p',78.97,'fest',4.82,2.55],#Se ['Brom',35,'Halogene',17,4,'p',79.90,'flüssig',3.12,2.96],#Br ['Krypton',36,'Edelgase',18,4,'p',83.80,'gasförmig',3.75,3.00],#Kr ['Rubidium',37,'Alkalimetalle',1,5,'s',85.47,'fest',1.53,0.82],#Rb ['Strontium',38,'Erdalkalimetalle',2,5,'s',87.62,'fest',2.63,0.95],#Sr ['Yttrium',39,'Übergangsmetalle',3,5,'d',88.91,'fest',4.47,1.22],#Y 
['Zirconium',40,'Übergangsmetalle',4,5,'d',91.22,'fest',6.50,1.33],#Zr ['Niob',41,'Übergangsmetalle',5,5,'d',92.90,'fest',8.57,1.6],#Nb ['Molybdän',42,'Übergangsmetalle',6,5,'d',95.95,'fest',10.28,2.16],#Mo ['Technetium',43,'Übergangsmetalle',7,5,'d',98.90,'fest',11.5,1.9],#Tc ['Ruthenium',44,'Übergangsmetalle',8,5,'d',101.07,'fest',12.37,2.2],#Ru ['Rhodium',45,'Übergangsmetalle',9,5,'d',102.90,'fest',12.38,2.28],#Rh ['Palladium',46,'Übergangsmetalle',10,5,'d',106.42,'fest',11.99,2.20],#Pd ['Silber',47,'Übergangsmetalle',11,5,'d',107.87,'fest',10.49,1.93],#Ag ['cadmium',48,'Übergangsmetalle',12,5,'d',112.41,'fest',8.65,1.69],#Cd ['Indium',49,'Metalle',13,5,'p',114.82,'fest',7.31,1.78],#In ['Zinn',50,'Metalle',14,5,'p',118.71,'fest',5.77,1.96],#Sn ['Antimon',51,'Halbmetalle',15,5,'p',121.76,'fest',6.70,2.05],#Sb ['Tellur',52,'Halbmetalle',16,5,'p',127.60,'fest',6.24,2.10],#Te ['Iod',53,'Halogene',17,5,'p',126.90,'fest',4.94,2.66],#I ['Xenon',54,'Edelgase',18,5,'p',131.29,'gasförmig',5.90,2.6],#Xe ['Caesium',55,'Alkalimetalle',1,6,'s',132.91,'fest',1.90,0.79],#Cs ['Barium',56,'Erdalkalimetalle',2,6,'s',137.33,'fest',3.62,0.89],#Ba ['Lanthan',57,'Übergangsmetalle',3,6,'d',138.90,'fest',6.17,1.1],#La ['Cer',58,'Lanthanoide','La',6,'f',140.12,'fest',6.77,1.12],#Ce ['Praseodym',59,'Lanthanoide','La',6,'f',140.91,'fest',6.48,1.13],#Pr ['Neodym',60,'Lanthanoide','La',6,'f',144.24,'fest',7.00,1.14],#Nd ['Promethium',61,'Lanthanoide','La',6,'f',146.91,'fest',7.2,'n.A.'],#Pm ['Samarium',62,'Lanthanoide','La',6,'f',150.36,'fest',7.54,1.17],#Sm ['Europium',63,'Lanthanoide','La',6,'f',151.96,'fest',5.25,'n.A'],#Eu ['Gadolinium',64,'Lanthanoide','La',6,'f',157.25,'fest',7.89,1.20],#Gd ['Terbium',65,'Lanthanoide','La',6,'f',158.93,'fest',8.25,'n.A'],#Tb ['Dysprosium',66,'Lanthanoide','La',6,'f',162.50,'fest',8.56,1.22],#Dy ['Holmium',67,'Lanthanoide','La',6,'f',164.93,'fest',8.78,1.23],#Ho ['Erbium',68,'Lanthanoide','La',6,'f',167.26,'fest',9.05,1.24],#Er 
['Thulium',69,'Lanthanoide','La',6,'f',168.93,'fest',9.32,1.25],#Tm ['Ytterbium',70,'Lanthanoide','La',6,'f',173.05,'fest',6.97,'n.A'],#Yb ['Lutetium',71,'Lanthanoide','La',6,'f',174.97,'fest',9.84,1.27],#Lu ['Hafnium',72,'Übergangsmetalle',4,6,'d',178.49,'fest',13.28,1.3],#Hf ['Tantal',73,'Übergangsmetalle',5,6,'d',180.95,'fest',16.65,1.5],#Ta ['Wolfram',74,'Übergangsmetalle',6,6,'d',183.84,'fest',19.25,2.36],#W ['Rhenium',75,'Übergangsmetalle',7,6,'d',186.21,'fest',21.00,1.9],#Re ['Osmium',76,'Übergangsmetalle',8,6,'d',190.23,'fest',22.59,2.2],#Os ['Irdium',77,'Übergangsmetalle',9,6,'d',192.22,'fest',22.56,2.2],#Ir ['Platin',78,'Übergangsmetalle',10,6,'d',195.08,'fest',21.45,2.2],#Pt ['Gold',79,'Übergangsmetalle',11,6,'d',196.97,'fest',19.32,2.54],#Au ['Quecksilber',80,'Übergangsmetalle',12,6,'d',200.59,'flüssig',13.55,2.00],#Hg ['Thalium',81,'Metalle',13,6,'p',204.38,'fest',11.85,1.62],#Tl ['Blei',82,'Metalle',14,6,'p',207.20,'fest',11.34,2.33],#Pb ['Bismut',83,'Metalle',15,6,'p',208.98,'fest',9.78,2.02],#Bi ['Polonium',84,'Metalle',16,6,'p',209.98,'fest',9.20,2.0],#Po ['Astat',85,'Halogene',17,6,'p',209.99,'fest','n.A',2.2],#At ['Radon',86,'Edelgase',18,6,'p',222.00,'gasförmig',9.73,'n.A'],#Rn ['Francium',87,'Alkalimetalle',1,7,'s',223.02,'fest','n.A',0.7],#Fr ['Radium',88,'Erdalkalimetalle',2,7,'s',226.03,'fest',5.5,0.9],#Ra ['Actinium',89,'Übergangsmetalle',3,7,'d',227.03,'fest',10.07,1.1],#Ac ['Thorium',90,'Actinoide','Ac',7,'f',232.04,'fest',11.72,1.3],#Th ['Protactinium',91,'Actinoide','Ac',7,'f',231.04,'fest',15.37,1.5],#Pa ['Uran',92,'Actinoide','Ac',7,'f',238.03,'fest',19.16,1.38],#U ['Neptunium',93,'Actinoide','Ac',7,'f',237.05,'fest',20.45,1.36],#Np ['Plutonium',94,'Actinoide','Ac',7,'f',244.06,'fest',19.82,1.28],#Pu ['Americium',95,'Actinoide','Ac',7,'f',243.06,'fest',13.67,1.3],#Am ['Curium',96,'Actinoide','Ac',7,'f',247.07,'fest',13.51,1.3],#Cm ['Berkelium',97,'Actinoide','Ac',7,'f',247,'fest',14.78,1.3],#Bk 
['Californium',98,'Actinoide','Ac',7,'f',251,'fest',15.1,1.3],#Cf ['Einsteinium',99,'Actinoide','Ac',7,'f',252,'fest',8.84,'n.A'],#Es ['Fermium',100,'Actinoide','Ac',7,'f',257.10,'fest','n.A','n.A'],#Fm ['Medelevium',101,'Actinoide','Ac',7,'f',258,'fest','n.A','n.A'],#Md ['Nobelium',102,'Actinoide','Ac',7,'f',259,'fest','n.A.','n.A'],#No ['Lawrencium',103,'Actinoide','Ac',7,'f',266,'fest','n.A','n.A'],#Lr ['Rutherdordium',104,'Übergangsmetalle',4,7,'d',261.11,'fest',17.00,'n.A'],#Rf ['Dubnium',105,'Übergangsmetalle',5,7,'d',262.11,'n.A','n.A','n.A'],#Db ['Seaborgium',106,'Übergangsmetalle',6,7,'d',263.12,'n.A','n.A','n.A'],#Sg ['Bohrium',107,'Übergangsmetalle',7,7,'d',262.12,'n.A','n.A','n.A'],#Bh ['Hassium',108,'Übergangsmetalle',8,7,'d',265,'n.A','n.A','n.A'],#Hs ['Meitnerium',109,'Unbekannt',9,7,'d',268,'n.A','n.A','n.A'],#Mt ['Darmstadtium',110,'Unbekannt',10,7,'d',281,'n.A','n.A','n.A'],#Ds ['Roentgenium',111,'Unbekannt',11,7,'d',280,'n.A','n.A','n.A'],#Rg ['Copernicium',112,'Unbekannt',12,7,'d',277,'n.A','n.A','n.A'],#Cn ['Nihonium',113,'Unbekannt',13,7,'p',287,'n.A','n.A','n.A'],#Nh ['Flerovium',114,'Unbekannt',14,7,'p',289,'n.A','n.A','n.A'],#Fl ['Moscovium',115,'Unbekannt',15,7,'p',288,'n.A','n.A','n.A'],#Mc ['Livermorium',116,'Unbekannt',16,7,'p',293,'n.A','n.A','n.A'],#Lv ['Tenness',117,'Unbekannt',17,7,'p',292,'n.A','n.A','n.A'],#Ts ['Oganesson',118,'Unbekannt',18,7,'p',294,'fest',6.6,'n.A']#Og ] kategorie_farben = {'Alkalimetalle' : '#fe6f61', 'Erdalkalimetalle':'#6791a7', 'Übergangsmetalle':'#83b8d0', 'Metalle':'#cae2ed', 'Halbmetalle':'#a7d6bc', 'Nichtmetalle':'#ffde66', 'Halogene':'#e9aa63', 'Edelgase':'#e29136', 'Unbekannt':'#cec0bf', 'Lanthanoide':'#696071', 'Actinoide':'#5b4c68'} Code: import tkinter as tk root = tk.Tk() class Element(tk.Frame): la_offset = 2;ac_offset=2;offset=2 def __init__(self,master,symbol,**kwargs): tk.Frame.__init__(self,master, relief = 'raised') self.kwargs = kwargs self.command= kwargs.pop('command', lambda:print('No 
command')) self.WIDTH,self.HEIGHT,self.BD = 100,100,3 self.CMP = self.BD*2 bg = kategorie_farben.get(kwargs.get('elementkategorie')) self.configure(width=self.WIDTH,height=self.HEIGHT,bd=self.BD, bg=bg) self.grid_propagate(0) self.idx = tk.Label(self,text=kwargs.get('index'),bg=bg) self.u = tk.Label(self,text=kwargs.get('atommasse'),bg=bg) self.name = tk.Label(self,text=kwargs.get('name'),bg=bg) self.symb = tk.Label(self,text=symbol,font=('bold'),fg=self.get_fg(),bg=bg) self.e = tk.Label(self,text=kwargs.get('elektronegativität'),bg=bg) self.d = tk.Label(self,text=kwargs.get('dichte'),bg=bg) self.grid_columnconfigure(1, weight=2) self.grid_rowconfigure(1, weight=2) self.idx.grid(row=0,column=0,sticky='w') self.u.grid(row=0,column=2,sticky='e') mid_x = self.WIDTH/2-self.name.winfo_reqwidth()/2 mid_y = self.HEIGHT/2-self.name.winfo_reqheight()/2 offset= 15 self.name.place(in_=self,x=mid_x-self.CMP,y=mid_y-self.CMP+offset) mid_x = self.WIDTH/2-self.symb.winfo_reqwidth()/2 mid_y = self.HEIGHT/2-self.symb.winfo_reqheight()/2 self.symb.place(in_=self,x=mid_x-self.CMP,y=mid_y-self.CMP-offset/2) self.e.grid(row=2,column=0,sticky='w') self.d.grid(row=2,column=2,sticky='e') r,c = kwargs.pop('periode'),kwargs.pop('gruppe') if c in ('La','Ac'): if c == 'La': c =Element.la_offset+Element.offset;Element.la_offset +=1 r += self.offset if c == 'Ac': c =Element.ac_offset+Element.offset;Element.ac_offset +=1 r += Element.offset self.grid(row=r,column=c,sticky='nswe') self.bind('<Enter>', self.in_active) self.bind('<Leave>', self.in_active) self.bind('<ButtonPress-1>', self.indicate) self.bind('<ButtonRelease-1>', self.execute) [child.bind('<ButtonPress-1>', self.indicate) for child in self.winfo_children()] [child.bind('<ButtonRelease-1>', self.execute) for child in self.winfo_children()] def in_active(self,event): if str(event.type) == 'Enter': self.flag = True if str(event.type) == 'Leave': self.flag = False;self.configure(relief='raised') def indicate(self,event): 
self.configure(relief='sunken') def execute(self,event): if self.flag: self.command();self.configure(relief='raised') else: self.configure(relief='raised') def get_fg(self): if self.kwargs.get('aggregatzustand') == 'fest': return 'black' if self.kwargs.get('aggregatzustand') == 'flüssig': return 'blue' if self.kwargs.get('aggregatzustand') == 'gasförmig': return 'red' if self.kwargs.get('aggregatzustand') == 'n.A': return 'grey' def test(): print('testing..') for idx,symbol in enumerate(symbols): kwargs = {} for k,v in zip(keywords,values[idx]): kwargs.update({k:v}) Element(root,symbol,command=test,**kwargs) root.mainloop() Answer: I hope you enjoy this bit of code. I did! Your symbols, keywords and values should have capitalised variable names since they're global constants. However, life would be easier if your symbols were integrated into your values, and your keywords prepended with symbol. Even better: none of this actually belongs in your code, and should be cut out to a database file. JSON is easiest but there are others; for instance CSV would be higher-density (but has weaker typing). You have too little German in some parts, and too much in others. Your localised data (e.g. Stickstoff) are fine. Schema (e.g. elementkategorie) and code (e.g. kategorie_farben) should not be localised and should be in English. Your floating-point rendering should use localised formats; in your case it will turn your decimal point into a comma. Don't write n.A in your database; use None and convert that to a string on render. Don't store root in the global namespace. Don't leave **kwargs as a dictionary; instead make a simple @dataclass or named tuple. Don't over-abbreviate variables like BD which should be BORDER. Likewise, over-abbreviated tk keyword arguments like bg have a full-form background which should be used instead. Rather than strings like e, prefer constants in tk like tk.E. Your if c in ('La','Ac'): is redundant and can be deleted. 
Your flag, <Enter> and <Leave> aren't doing anything, so in my suggested code I deleted them. Refactor your get_fg to be a dictionary lookup. Prefer the "has-a" pattern over the "is-a" pattern for your element frame class; in other words, instantiate a frame instead of inheriting one. Consider adding a (German!) title to your window. Factor out your creation of a middle-placed label for name and symbol to a function. Better yet: don't call place, and just represent your name and symbol labels as rows within the grid that span the width of the grid and are sticky to both east and west. Consider resizing your chart to the window by use of a container frame and pack_configure. You should name all of your widgets. If you don't, a name will be generated for you internally and this will make debugging more difficult. Your element grid coordinate calculations are non-reentrant and can be performed only once per process run, since your offset variables are stored as statics. You should refactor this; the nicest way is an iterator function that keeps these offsets as locals and throws them away once all elements have been placed. Rather than binding your mouse events to all children of your element frame, consider just using bindtags to pass all events from the child labels to the parent frame. Typo: it's "rutherfordium". As for "I thought about calling the according wikipedia site": this is easy via the webbrowser module. 
Suggested elements.json [ { "symbol": "H", "name": "Wasserstoff", "number": 1, "category": "Nichtmetalle", "group": 1, "period": 1, "block": "s", "mass": 1.01, "phase": "gasförmig", "density": 0.08, "electronegativity": 2.2 }, { "symbol": "He", "name": "Helium", "number": 2, "category": "Edelgase", "group": 18, "period": 1, "block": "s", "mass": 4.0, "phase": "gasförmig", "density": 0.18, "electronegativity": null }, { "symbol": "Li", "name": "Lithium", "number": 3, "category": "Alkalimetalle", "group": 1, "period": 2, "block": "s", "mass": 6.94, "phase": "fest", "density": 0.53, "electronegativity": 0.98 }, { "symbol": "Be", "name": "Beryllium", "number": 4, "category": "Erdalkalimetalle", "group": 2, "period": 2, "block": "s", "mass": 9.01, "phase": "fest", "density": 1.84, "electronegativity": 1.57 }, { "symbol": "B", "name": "Bor", "number": 5, "category": "Halbmetalle", "group": 13, "period": 2, "block": "p", "mass": 10.81, "phase": "fest", "density": 2.46, "electronegativity": 2.04 }, { "symbol": "C", "name": "Kohlenstoff", "number": 6, "category": "Nichtmetalle", "group": 14, "period": 2, "block": "p", "mass": 12.01, "phase": "fest", "density": 2.26, "electronegativity": 2.55 }, { "symbol": "N", "name": "Stickstoff", "number": 7, "category": "Nichtmetalle", "group": 15, "period": 2, "block": "p", "mass": 14.0, "phase": "gasförmig", "density": 1.17, "electronegativity": 3.04 }, { "symbol": "O", "name": "Sauerstoff", "number": 8, "category": "Nichtmetalle", "group": 16, "period": 2, "block": "p", "mass": 15.99, "phase": "gasförmig", "density": 1.43, "electronegativity": 3.44 }, { "symbol": "F", "name": "Fluor", "number": 9, "category": "Halogene", "group": 17, "period": 2, "block": "p", "mass": 18.99, "phase": "gasförmig", "density": 1.7, "electronegativity": 3.98 }, { "symbol": "Ne", "name": "Neon", "number": 10, "category": "Edelgase", "group": 18, "period": 2, "block": "p", "mass": 20.17, "phase": "gasförmig", "density": 0.9, "electronegativity": null }, { 
"symbol": "Na", "name": "Natrium", "number": 11, "category": "Alkalimetalle", "group": 1, "period": 3, "block": "s", "mass": 22.99, "phase": "fest", "density": 0.97, "electronegativity": 0.93 }, { "symbol": "Mg", "name": "Magnesium", "number": 12, "category": "Erdalkalimetalle", "group": 2, "period": 3, "block": "s", "mass": 24.31, "phase": "fest", "density": 1.74, "electronegativity": 1.31 }, { "symbol": "Al", "name": "Aluminium", "number": 13, "category": "Metalle", "group": 13, "period": 3, "block": "p", "mass": 26.98, "phase": "fest", "density": 2.69, "electronegativity": 1.61 }, { "symbol": "Si", "name": "Silicium", "number": 14, "category": "Halbmetalle", "group": 14, "period": 3, "block": "p", "mass": 28.08, "phase": "fest", "density": 2.34, "electronegativity": 1.9 }, { "symbol": "P", "name": "Phosphor", "number": 15, "category": "Nichtmetalle", "group": 15, "period": 3, "block": "p", "mass": 30.97, "phase": "fest", "density": 2.4, "electronegativity": 2.19 }, { "symbol": "S", "name": "Schwefel", "number": 16, "category": "Nichtmetalle", "group": 16, "period": 3, "block": "p", "mass": 32.06, "phase": "fest", "density": 2.07, "electronegativity": 2.58 }, { "symbol": "Cl", "name": "Chlor", "number": 17, "category": "Halogene", "group": 17, "period": 3, "block": "p", "mass": 35.45, "phase": "gasförmig", "density": 3.22, "electronegativity": 3.16 }, { "symbol": "Ar", "name": "Argon", "number": 18, "category": "Edelgase", "group": 18, "period": 3, "block": "p", "mass": 39.95, "phase": "gasförmig", "density": 1.78, "electronegativity": null }, { "symbol": "K", "name": "Kalium", "number": 19, "category": "Alkalimetalle", "group": 1, "period": 4, "block": "s", "mass": 39.09, "phase": "fest", "density": 0.86, "electronegativity": 0.82 }, { "symbol": "Ca", "name": "Calicium", "number": 20, "category": "Erdalkalimetalle", "group": 2, "period": 4, "block": "s", "mass": 40.08, "phase": "fest", "density": 1.55, "electronegativity": 1.0 }, { "symbol": "Sc", "name": 
"Scandium", "number": 21, "category": "Übergangsmetalle", "group": 3, "period": 4, "block": "d", "mass": 44.96, "phase": "fest", "density": 2.99, "electronegativity": 1.36 }, { "symbol": "Ti", "name": "Titan", "number": 22, "category": "Übergangsmetalle", "group": 4, "period": 4, "block": "d", "mass": 47.87, "phase": "fest", "density": 4.5, "electronegativity": 1.54 }, { "symbol": "V", "name": "Vandium", "number": 23, "category": "Übergangsmetalle", "group": 5, "period": 4, "block": "d", "mass": 50.94, "phase": "fest", "density": 6.11, "electronegativity": 1.63 }, { "symbol": "Cr", "name": "Chrom", "number": 24, "category": "Übergangsmetalle", "group": 6, "period": 4, "block": "d", "mass": 51.99, "phase": "fest", "density": 7.14, "electronegativity": 1.66 }, { "symbol": "Mn", "name": "Mangan", "number": 25, "category": "Übergangsmetalle", "group": 7, "period": 4, "block": "d", "mass": 54.94, "phase": "fest", "density": 7.43, "electronegativity": 1.55 }, { "symbol": "Fe", "name": "Eisen", "number": 26, "category": "Übergangsmetalle", "group": 8, "period": 4, "block": "d", "mass": 55.85, "phase": "fest", "density": 7.87, "electronegativity": 1.83 }, { "symbol": "Co", "name": "Cobalt", "number": 27, "category": "Übergangsmetalle", "group": 9, "period": 4, "block": "d", "mass": 58.93, "phase": "fest", "density": 8.9, "electronegativity": 1.88 }, { "symbol": "Ni", "name": "Nickel", "number": 28, "category": "Übergangsmetalle", "group": 10, "period": 4, "block": "d", "mass": 58.69, "phase": "fest", "density": 8.9, "electronegativity": 1.91 }, { "symbol": "Cu", "name": "Kupfer", "number": 29, "category": "Übergangsmetalle", "group": 11, "period": 4, "block": "d", "mass": 63.54, "phase": "fest", "density": 8.92, "electronegativity": 1.9 }, { "symbol": "Zn", "name": "Zink", "number": 30, "category": "Übergangsmetalle", "group": 12, "period": 4, "block": "d", "mass": 65.38, "phase": "fest", "density": 7.14, "electronegativity": 1.65 }, { "symbol": "Ga", "name": "Gallium", 
"number": 31, "category": "Metalle", "group": 13, "period": 4, "block": "p", "mass": 69.72, "phase": "fest", "density": 5.9, "electronegativity": 1.81 }, { "symbol": "Ge", "name": "Germanium", "number": 32, "category": "Halbmetalle", "group": 14, "period": 4, "block": "p", "mass": 72.63, "phase": "fest", "density": 5.32, "electronegativity": 2.01 }, { "symbol": "As", "name": "Arsen", "number": 33, "category": "Halbmetalle", "group": 15, "period": 4, "block": "p", "mass": 74.92, "phase": "fest", "density": 5.73, "electronegativity": 2.18 }, { "symbol": "Se", "name": "Selen", "number": 34, "category": "Halbmetalle", "group": 16, "period": 4, "block": "p", "mass": 78.97, "phase": "fest", "density": 4.82, "electronegativity": 2.55 }, { "symbol": "Br", "name": "Brom", "number": 35, "category": "Halogene", "group": 17, "period": 4, "block": "p", "mass": 79.9, "phase": "flüssig", "density": 3.12, "electronegativity": 2.96 }, { "symbol": "Kr", "name": "Krypton", "number": 36, "category": "Edelgase", "group": 18, "period": 4, "block": "p", "mass": 83.8, "phase": "gasförmig", "density": 3.75, "electronegativity": 3.0 }, { "symbol": "Rb", "name": "Rubidium", "number": 37, "category": "Alkalimetalle", "group": 1, "period": 5, "block": "s", "mass": 85.47, "phase": "fest", "density": 1.53, "electronegativity": 0.82 }, { "symbol": "Sr", "name": "Strontium", "number": 38, "category": "Erdalkalimetalle", "group": 2, "period": 5, "block": "s", "mass": 87.62, "phase": "fest", "density": 2.63, "electronegativity": 0.95 }, { "symbol": "Y", "name": "Yttrium", "number": 39, "category": "Übergangsmetalle", "group": 3, "period": 5, "block": "d", "mass": 88.91, "phase": "fest", "density": 4.47, "electronegativity": 1.22 }, { "symbol": "Zr", "name": "Zirconium", "number": 40, "category": "Übergangsmetalle", "group": 4, "period": 5, "block": "d", "mass": 91.22, "phase": "fest", "density": 6.5, "electronegativity": 1.33 }, { "symbol": "Nb", "name": "Niob", "number": 41, "category": 
"Übergangsmetalle", "group": 5, "period": 5, "block": "d", "mass": 92.9, "phase": "fest", "density": 8.57, "electronegativity": 1.6 }, { "symbol": "Mo", "name": "Molybdän", "number": 42, "category": "Übergangsmetalle", "group": 6, "period": 5, "block": "d", "mass": 95.95, "phase": "fest", "density": 10.28, "electronegativity": 2.16 }, { "symbol": "Tc", "name": "Technetium", "number": 43, "category": "Übergangsmetalle", "group": 7, "period": 5, "block": "d", "mass": 98.9, "phase": "fest", "density": 11.5, "electronegativity": 1.9 }, { "symbol": "Ru", "name": "Ruthenium", "number": 44, "category": "Übergangsmetalle", "group": 8, "period": 5, "block": "d", "mass": 101.07, "phase": "fest", "density": 12.37, "electronegativity": 2.2 }, { "symbol": "Rh", "name": "Rhodium", "number": 45, "category": "Übergangsmetalle", "group": 9, "period": 5, "block": "d", "mass": 102.9, "phase": "fest", "density": 12.38, "electronegativity": 2.28 }, { "symbol": "Pd", "name": "Palladium", "number": 46, "category": "Übergangsmetalle", "group": 10, "period": 5, "block": "d", "mass": 106.42, "phase": "fest", "density": 11.99, "electronegativity": 2.2 }, { "symbol": "Ag", "name": "Silber", "number": 47, "category": "Übergangsmetalle", "group": 11, "period": 5, "block": "d", "mass": 107.87, "phase": "fest", "density": 10.49, "electronegativity": 1.93 }, { "symbol": "Cd", "name": "cadmium", "number": 48, "category": "Übergangsmetalle", "group": 12, "period": 5, "block": "d", "mass": 112.41, "phase": "fest", "density": 8.65, "electronegativity": 1.69 }, { "symbol": "In", "name": "Indium", "number": 49, "category": "Metalle", "group": 13, "period": 5, "block": "p", "mass": 114.82, "phase": "fest", "density": 7.31, "electronegativity": 1.78 }, { "symbol": "Sn", "name": "Zinn", "number": 50, "category": "Metalle", "group": 14, "period": 5, "block": "p", "mass": 118.71, "phase": "fest", "density": 5.77, "electronegativity": 1.96 }, { "symbol": "Sb", "name": "Antimon", "number": 51, "category": 
"Halbmetalle", "group": 15, "period": 5, "block": "p", "mass": 121.76, "phase": "fest", "density": 6.7, "electronegativity": 2.05 }, { "symbol": "Te", "name": "Tellur", "number": 52, "category": "Halbmetalle", "group": 16, "period": 5, "block": "p", "mass": 127.6, "phase": "fest", "density": 6.24, "electronegativity": 2.1 }, { "symbol": "I", "name": "Iod", "number": 53, "category": "Halogene", "group": 17, "period": 5, "block": "p", "mass": 126.9, "phase": "fest", "density": 4.94, "electronegativity": 2.66 }, { "symbol": "Xe", "name": "Xenon", "number": 54, "category": "Edelgase", "group": 18, "period": 5, "block": "p", "mass": 131.29, "phase": "gasförmig", "density": 5.9, "electronegativity": 2.6 }, { "symbol": "Cs", "name": "Caesium", "number": 55, "category": "Alkalimetalle", "group": 1, "period": 6, "block": "s", "mass": 132.91, "phase": "fest", "density": 1.9, "electronegativity": 0.79 }, { "symbol": "Ba", "name": "Barium", "number": 56, "category": "Erdalkalimetalle", "group": 2, "period": 6, "block": "s", "mass": 137.33, "phase": "fest", "density": 3.62, "electronegativity": 0.89 }, { "symbol": "La", "name": "Lanthan", "number": 57, "category": "Übergangsmetalle", "group": 3, "period": 6, "block": "d", "mass": 138.9, "phase": "fest", "density": 6.17, "electronegativity": 1.1 }, { "symbol": "Ce", "name": "Cer", "number": 58, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 140.12, "phase": "fest", "density": 6.77, "electronegativity": 1.12 }, { "symbol": "Pr", "name": "Praseodym", "number": 59, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 140.91, "phase": "fest", "density": 6.48, "electronegativity": 1.13 }, { "symbol": "Nd", "name": "Neodym", "number": 60, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 144.24, "phase": "fest", "density": 7.0, "electronegativity": 1.14 }, { "symbol": "Pm", "name": "Promethium", "number": 61, "category": "Lanthanoide", "group": "La", 
"period": 6, "block": "f", "mass": 146.91, "phase": "fest", "density": 7.2, "electronegativity": null }, { "symbol": "Sm", "name": "Samarium", "number": 62, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 150.36, "phase": "fest", "density": 7.54, "electronegativity": 1.17 }, { "symbol": "Eu", "name": "Europium", "number": 63, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 151.96, "phase": "fest", "density": 5.25, "electronegativity": null }, { "symbol": "Gd", "name": "Gadolinium", "number": 64, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 157.25, "phase": "fest", "density": 7.89, "electronegativity": 1.2 }, { "symbol": "Tb", "name": "Terbium", "number": 65, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 158.93, "phase": "fest", "density": 8.25, "electronegativity": null }, { "symbol": "Dy", "name": "Dysprosium", "number": 66, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 162.5, "phase": "fest", "density": 8.56, "electronegativity": 1.22 }, { "symbol": "Ho", "name": "Holmium", "number": 67, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 164.93, "phase": "fest", "density": 8.78, "electronegativity": 1.23 }, { "symbol": "Er", "name": "Erbium", "number": 68, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 167.26, "phase": "fest", "density": 9.05, "electronegativity": 1.24 }, { "symbol": "Tm", "name": "Thulium", "number": 69, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 168.93, "phase": "fest", "density": 9.32, "electronegativity": 1.25 }, { "symbol": "Yb", "name": "Ytterbium", "number": 70, "category": "Lanthanoide", "group": "La", "period": 6, "block": "f", "mass": 173.05, "phase": "fest", "density": 6.97, "electronegativity": null }, { "symbol": "Lu", "name": "Lutetium", "number": 71, "category": "Lanthanoide", "group": "La", 
"period": 6, "block": "f", "mass": 174.97, "phase": "fest", "density": 9.84, "electronegativity": 1.27 }, { "symbol": "Hf", "name": "Hafnium", "number": 72, "category": "Übergangsmetalle", "group": 4, "period": 6, "block": "d", "mass": 178.49, "phase": "fest", "density": 13.28, "electronegativity": 1.3 }, { "symbol": "Ta", "name": "Tantal", "number": 73, "category": "Übergangsmetalle", "group": 5, "period": 6, "block": "d", "mass": 180.95, "phase": "fest", "density": 16.65, "electronegativity": 1.5 }, { "symbol": "W", "name": "Wolfram", "number": 74, "category": "Übergangsmetalle", "group": 6, "period": 6, "block": "d", "mass": 183.84, "phase": "fest", "density": 19.25, "electronegativity": 2.36 }, { "symbol": "Re", "name": "Rhenium", "number": 75, "category": "Übergangsmetalle", "group": 7, "period": 6, "block": "d", "mass": 186.21, "phase": "fest", "density": 21.0, "electronegativity": 1.9 }, { "symbol": "Os", "name": "Osmium", "number": 76, "category": "Übergangsmetalle", "group": 8, "period": 6, "block": "d", "mass": 190.23, "phase": "fest", "density": 22.59, "electronegativity": 2.2 }, { "symbol": "Ir", "name": "Irdium", "number": 77, "category": "Übergangsmetalle", "group": 9, "period": 6, "block": "d", "mass": 192.22, "phase": "fest", "density": 22.56, "electronegativity": 2.2 }, { "symbol": "Pt", "name": "Platin", "number": 78, "category": "Übergangsmetalle", "group": 10, "period": 6, "block": "d", "mass": 195.08, "phase": "fest", "density": 21.45, "electronegativity": 2.2 }, { "symbol": "Au", "name": "Gold", "number": 79, "category": "Übergangsmetalle", "group": 11, "period": 6, "block": "d", "mass": 196.97, "phase": "fest", "density": 19.32, "electronegativity": 2.54 }, { "symbol": "Hg", "name": "Quecksilber", "number": 80, "category": "Übergangsmetalle", "group": 12, "period": 6, "block": "d", "mass": 200.59, "phase": "flüssig", "density": 13.55, "electronegativity": 2.0 }, { "symbol": "Tl", "name": "Thalium", "number": 81, "category": "Metalle", 
"group": 13, "period": 6, "block": "p", "mass": 204.38, "phase": "fest", "density": 11.85, "electronegativity": 1.62 }, { "symbol": "Pb", "name": "Blei", "number": 82, "category": "Metalle", "group": 14, "period": 6, "block": "p", "mass": 207.2, "phase": "fest", "density": 11.34, "electronegativity": 2.33 }, { "symbol": "Bi", "name": "Bismut", "number": 83, "category": "Metalle", "group": 15, "period": 6, "block": "p", "mass": 208.98, "phase": "fest", "density": 9.78, "electronegativity": 2.02 }, { "symbol": "Po", "name": "Polonium", "number": 84, "category": "Metalle", "group": 16, "period": 6, "block": "p", "mass": 209.98, "phase": "fest", "density": 9.2, "electronegativity": 2.0 }, { "symbol": "At", "name": "Astat", "number": 85, "category": "Halogene", "group": 17, "period": 6, "block": "p", "mass": 209.99, "phase": "fest", "density": null, "electronegativity": 2.2 }, { "symbol": "Rn", "name": "Radon", "number": 86, "category": "Edelgase", "group": 18, "period": 6, "block": "p", "mass": 222.0, "phase": "gasförmig", "density": 9.73, "electronegativity": null }, { "symbol": "Fr", "name": "Francium", "number": 87, "category": "Alkalimetalle", "group": 1, "period": 7, "block": "s", "mass": 223.02, "phase": "fest", "density": null, "electronegativity": 0.7 }, { "symbol": "Ra", "name": "Radium", "number": 88, "category": "Erdalkalimetalle", "group": 2, "period": 7, "block": "s", "mass": 226.03, "phase": "fest", "density": 5.5, "electronegativity": 0.9 }, { "symbol": "Ac", "name": "Actinium", "number": 89, "category": "Übergangsmetalle", "group": 3, "period": 7, "block": "d", "mass": 227.03, "phase": "fest", "density": 10.07, "electronegativity": 1.1 }, { "symbol": "Th", "name": "Thorium", "number": 90, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 232.04, "phase": "fest", "density": 11.72, "electronegativity": 1.3 }, { "symbol": "Pa", "name": "Protactinium", "number": 91, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", 
"mass": 231.04, "phase": "fest", "density": 15.37, "electronegativity": 1.5 }, { "symbol": "U", "name": "Uran", "number": 92, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 238.03, "phase": "fest", "density": 19.16, "electronegativity": 1.38 }, { "symbol": "Np", "name": "Neptunium", "number": 93, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 237.05, "phase": "fest", "density": 20.45, "electronegativity": 1.36 }, { "symbol": "Pu", "name": "Plutonium", "number": 94, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 244.06, "phase": "fest", "density": 19.82, "electronegativity": 1.28 }, { "symbol": "Am", "name": "Americium", "number": 95, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 243.06, "phase": "fest", "density": 13.67, "electronegativity": 1.3 }, { "symbol": "Cm", "name": "Curium", "number": 96, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 247.07, "phase": "fest", "density": 13.51, "electronegativity": 1.3 }, { "symbol": "Bk", "name": "Berkelium", "number": 97, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 247, "phase": "fest", "density": 14.78, "electronegativity": 1.3 }, { "symbol": "Cf", "name": "Californium", "number": 98, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 251, "phase": "fest", "density": 15.1, "electronegativity": 1.3 }, { "symbol": "Es", "name": "Einsteinium", "number": 99, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 252, "phase": "fest", "density": 8.84, "electronegativity": null }, { "symbol": "Fm", "name": "Fermium", "number": 100, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 257.1, "phase": "fest", "density": null, "electronegativity": null }, { "symbol": "Md", "name": "Medelevium", "number": 101, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 258, "phase": 
"fest", "density": null, "electronegativity": null }, { "symbol": "No", "name": "Nobelium", "number": 102, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 259, "phase": "fest", "density": null, "electronegativity": null }, { "symbol": "Lr", "name": "Lawrencium", "number": 103, "category": "Actinoide", "group": "Ac", "period": 7, "block": "f", "mass": 266, "phase": "fest", "density": null, "electronegativity": null }, { "symbol": "Rf", "name": "Rutherfordium", "number": 104, "category": "Übergangsmetalle", "group": 4, "period": 7, "block": "d", "mass": 261.11, "phase": "fest", "density": 17.0, "electronegativity": null }, { "symbol": "Db", "name": "Dubnium", "number": 105, "category": "Übergangsmetalle", "group": 5, "period": 7, "block": "d", "mass": 262.11, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Sg", "name": "Seaborgium", "number": 106, "category": "Übergangsmetalle", "group": 6, "period": 7, "block": "d", "mass": 263.12, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Bh", "name": "Bohrium", "number": 107, "category": "Übergangsmetalle", "group": 7, "period": 7, "block": "d", "mass": 262.12, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Hs", "name": "Hassium", "number": 108, "category": "Übergangsmetalle", "group": 8, "period": 7, "block": "d", "mass": 265, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Mt", "name": "Meitnerium", "number": 109, "category": "Unbekannt", "group": 9, "period": 7, "block": "d", "mass": 268, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Ds", "name": "Darmstadtium", "number": 110, "category": "Unbekannt", "group": 10, "period": 7, "block": "d", "mass": 281, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Rg", "name": "Roentgenium", "number": 111, "category": "Unbekannt", "group": 11, "period": 7, "block": "d", "mass": 280, "phase": null, 
"density": null, "electronegativity": null }, { "symbol": "Cn", "name": "Copernicium", "number": 112, "category": "Unbekannt", "group": 12, "period": 7, "block": "d", "mass": 277, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Nh", "name": "Nihonium", "number": 113, "category": "Unbekannt", "group": 13, "period": 7, "block": "p", "mass": 287, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Fl", "name": "Flerovium", "number": 114, "category": "Unbekannt", "group": 14, "period": 7, "block": "p", "mass": 289, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Mc", "name": "Moscovium", "number": 115, "category": "Unbekannt", "group": 15, "period": 7, "block": "p", "mass": 288, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Lv", "name": "Livermorium", "number": 116, "category": "Unbekannt", "group": 16, "period": 7, "block": "p", "mass": 293, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Ts", "name": "Tenness", "number": 117, "category": "Unbekannt", "group": 17, "period": 7, "block": "p", "mass": 292, "phase": null, "density": null, "electronegativity": null }, { "symbol": "Og", "name": "Oganesson", "number": 118, "category": "Unbekannt", "group": 18, "period": 7, "block": "p", "mass": 294, "phase": "fest", "density": 6.6, "electronegativity": null } ] Python import json import tkinter as tk import webbrowser from locale import setlocale, LC_ALL, format_string from typing import Optional, Union, Iterable, Iterator, NamedTuple class Element(NamedTuple): symbol: str name: str number: int category: str group: Union[str, int] period: int block: str mass: float phase: Optional[str] density: Optional[float] electronegativity: Optional[float] class PlacedElement(NamedTuple): row: int column: int element: Element def format_float(s: Optional[float]) -> str: if s is None: return 'n.A' return format_string('%.2f', s) def place_elements(elements: 
Iterable[Element]) -> Iterator[PlacedElement]: OFFSET = 2 la_offset = 2 ac_offset = 2 for element in elements: period, group_name = element.period, element.group if group_name == 'La': group = la_offset + OFFSET la_offset += 1 period += OFFSET elif group_name == 'Ac': group = ac_offset + OFFSET ac_offset += 1 period += OFFSET else: group = group_name yield PlacedElement(row=period - 1, column=group - 1, element=element) def load_json(filename: str = 'elements.json') -> Iterator[Element]: with open(filename, encoding='utf-8') as f: for element_dict in json.load(f): yield Element(**element_dict) class ElementButton: BORDER = 3 CATEGORY_COLORS = { 'Alkalimetalle': '#fe6f61', 'Erdalkalimetalle': '#6791a7', 'Übergangsmetalle': '#83b8d0', 'Metalle': '#cae2ed', 'Halbmetalle': '#a7d6bc', 'Nichtmetalle': '#ffde66', 'Halogene': '#e9aa63', 'Edelgase': '#e29136', 'Unbekannt': '#cec0bf', 'Lanthanoide': '#696071', 'Actinoide': '#5b4c68', } PHASE_COLORS = { 'fest': 'black', 'flüssig': 'blue', 'gasförmig': 'red', None: 'grey', } def __init__(self, parent: tk.Widget, placed_element: PlacedElement) -> None: self.element = placed_element.element self.background = self.CATEGORY_COLORS[self.element.category] self.frame = frame = tk.Frame( parent, relief=tk.RAISED, name=f'frame_{self.element.symbol}', background=self.background, border=self.BORDER, ) self.frame.grid_columnconfigure(1, weight=2) self.frame.grid(row=placed_element.row, column=placed_element.column, sticky=tk.EW) self.populate() frame.bind('<ButtonPress-1>', self.press) frame.bind('<ButtonRelease-1>', self.release) for child in frame.winfo_children(): child.bindtags((frame,)) def populate(self) -> None: prefix = f'label_{self.element.symbol}_' tk.Label( self.frame, name=prefix + 'number', text=self.element.number, background=self.background, ).grid(row=0, column=0, sticky=tk.NW) tk.Label( self.frame, name=prefix + 'mass', text=format_float(self.element.mass), background=self.background, ).grid(row=0, column=2, 
sticky=tk.NE) tk.Label( self.frame, name=prefix + 'name', text=self.element.name, background=self.background, ).grid(row=1, column=0, sticky=tk.EW, columnspan=3) tk.Label( self.frame, name=prefix + 'symbol', text=self.element.symbol, font='bold', background=self.background, foreground=self.PHASE_COLORS[self.element.phase], ).grid(row=2, column=0, sticky=tk.EW, columnspan=3) tk.Label( self.frame, name=prefix + 'electronegativity', text=format_float(self.element.electronegativity), background=self.background, ).grid(row=3, column=0, sticky=tk.SW) tk.Label( self.frame, name=prefix + 'density', text=format_float(self.element.density), background=self.background, ).grid(row=3, column=2, sticky=tk.SE) def press(self, event: tk.Event) -> None: self.frame.configure(relief='sunken') def release(self, event: tk.Event) -> None: self.frame.configure(relief='raised') webbrowser.open( url=f'https://de.wikipedia.org/wiki/{self.element.name}', new=2, ) def main() -> None: setlocale(LC_ALL, 'de-DE.UTF-8') root = tk.Tk() root.title('Periodensystem der Elemente') frame = tk.Frame(root, name='grid_container') frame.pack_configure(fill=tk.BOTH) elements = tuple(place_elements(load_json())) for element in elements: ElementButton(frame, element) columns = {elm.column for elm in elements} for x in columns: frame.grid_columnconfigure(index=x, weight=1) root.mainloop() if __name__ == '__main__': main()
{ "domain": "codereview.stackexchange", "id": 42679, "tags": "python, template, tkinter, gui, factory-method" }
How do I calculate the power consumed by a lightbulb?
Question: I'm studying a lightbulb and its variable resistance, given by the expression: $R(T) = R_0[1 + α(T-T_0)]$, where $R_0$ is the resistance of the lamp at $T_0$. In this case, $R$ is not given by Ohm's law ($V=Ri$). So, which expression can I use to calculate the power consumed by a lightbulb? $P = R\cdot i^2$ $P = \frac{V^2}{R}$ $P = V\cdot i$ (I don't think I can use this one) I have measured the current and the e.p.d. at the bulb using instruments. Answer: All those expressions are equivalent and can be used, if you simultaneously measure the voltage and current (and know the resistance from the temperature). Why do you think Ohm's law doesn't apply? Resistance is only dependent on temperature in your case (not current or voltage). In general I would use $P = IV$, if Ohm's law actually doesn't apply.
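To make the answer concrete, here is a small Python sketch. The measured values are made up for illustration, and $R_0$, $T_0$ and $\alpha$ are assumed tungsten-like numbers, not data from the question; it shows how $P=VI$ works directly from the measurements, and how the operating resistance then gives the filament temperature by inverting $R(T)=R_0[1+\alpha(T-T_0)]$:

```python
# Hypothetical measured values at the bulb
V = 2.5                      # volts across the bulb
I = 0.30                     # amps through it

P = V * I                    # power dissipated; valid whether or not R is constant
R = V / I                    # resistance at the operating temperature (V = Ri
                             # still relates V, I and R at any fixed temperature)

# Invert R(T) = R0 * (1 + alpha * (T - T0)) for the filament temperature.
# R0, T0, alpha are illustrative values, roughly tungsten-like.
R0, T0, alpha = 1.5, 293.0, 4.5e-3
T = T0 + (R / R0 - 1) / alpha

print(P, R, T)
```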
{ "domain": "physics.stackexchange", "id": 1224, "tags": "electromagnetism, visible-light, electricity, power" }
NP-Complete Reduction
Question: Prove the following problem is NP-complete: given a directed graph G and several specified subsets of its vertices T1, T2, ..., Tn (the subsets may intersect), does there exist a path in G that contains no cycle and, for every subset Ti, contains exactly 3 vertices from Ti? For the proof I've found a certificate to verify the problem, but I'm stuck on the reduction and cannot think of a known NP-complete problem to reduce from. I've considered TSP, directed Hamiltonian cycle and even 3SAT. For TSP I thought about picking vertices from each subset and forming a map from them: if there exists a specific combination of vertices along with a path giving a specific value, we could say yes to the question. However, that is brute force rather than polynomial, even though TSP seemed the most reasonable fit. I also thought about directed Hamiltonian cycle, but the intuition falls apart. May I please get some assistance on what direction to approach this from? Answer: I think a reduction from $\texttt{Directed Hamiltonian Path}$ (not cycle) would work quite well. Given a digraph $G = (V, E)$ where $V = \{v_1, …, v_n\}$, consider the digraph $G' = (V', E')$ where: for all $i \in \{1, …, n\}$, $T_i = \{v_{i,1}, v_{i,2}, v_{i,3}\}$; $V'$ consists of three copies of each vertex $v_i$: $\bigcup\limits_{i=1}^n T_i$; $E' = \{(v_{i,3}, v_{j,1})\mid (v_i, v_j)\in E\} \cup \bigcup\limits_{i=1}^n\{(v_{i,1}, v_{i,2}), (v_{i,2}, v_{i,3})\}$, meaning that for each edge $v_i\rightarrow v_j$, you create $v_{i,3}\rightarrow v_{j,1}$, and for each vertex $v_i$, you create a path $v_{i,1}\rightarrow v_{i,2}\rightarrow v_{i,3}$. It is clear that this construction can be done in polynomial time in the size of $G$. Now, $G$ has a Hamiltonian path if and only if there is an acyclic path in $G'$ containing exactly $3$ vertices from each $T_i$. Can you see why?
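The construction in the answer is mechanical enough to write down directly. A minimal Python sketch — the function name and the adjacency-list representation are my own choices, not from the answer:

```python
def reduce_ham_path(adj):
    """Build (T, E') from a digraph given as adjacency lists adj[i] = list of j.

    Vertex v_i becomes the triple (i, 1), (i, 2), (i, 3); T_i is that triple;
    each original edge v_i -> v_j becomes (i, 3) -> (j, 1)."""
    n = len(adj)
    T = [{(i, 1), (i, 2), (i, 3)} for i in range(n)]
    edges = set()
    for i in range(n):
        edges.add(((i, 1), (i, 2)))      # internal path v_{i,1} -> v_{i,2}
        edges.add(((i, 2), (i, 3)))      # ... -> v_{i,3}
        for j in adj[i]:
            edges.add(((i, 3), (j, 1)))  # edge gadget for v_i -> v_j
    return T, edges

# The Hamiltonian path 0 -> 1 -> 2 of this small graph corresponds to the
# acyclic path (0,1)..(0,3) -> (1,1)..(1,3) -> (2,1)..(2,3) in G', which
# picks up exactly 3 vertices from each T_i.
T, E = reduce_ham_path([[1], [2], []])
```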
{ "domain": "cs.stackexchange", "id": 20668, "tags": "np-complete, np, decision-problem" }
Why doesn't the Sun wobble towards Jupiter instead of away from Jupiter?
Question: This is the page I am referring to. It seems counterintuitive to me that the Sun should be on the opposite side of the barycenter's wobble. I realize I am wrong, but I cannot see why I am wrong. Can someone explain why the wobble is away from Jupiter, not towards Jupiter? Here is an unedited screenshot of the NASA animation - it shows the sun on the opposite side of the green line (barycenter) as Jupiter. My logic says, since gravity is in play here, the Sun and Jupiter should be on the same side of the green line. I have edited NASA's image in MS Paint to show what I think should be happening: Answer: The part of your intuition that is correct is that Jupiter pulls the Sun towards it. The problem is that "pulls towards" does not mean "brings closer"! The gravitational force results in an acceleration towards an attracting body, which is not a displacement or even the derivative of displacement, but the second derivative of displacement. Oscillatory or circular motion has the property that the second derivative carries a minus sign. For example, when the Sun is on the right side of the green circle, its acceleration is to the left, because it is changing from rightward motion to leftward motion. Thus, by being on the opposite side from Jupiter, the Sun is continually accelerating towards it.
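A quick sanity check of the geometry: the barycenter condition $m_\odot r_\odot = m_J r_J$ fixes how far the Sun sits from the barycenter, on the side opposite Jupiter. A small Python estimate using rounded textbook values (so only good to a few percent):

```python
m_sun = 1.989e30        # mass of the Sun, kg
m_jup = 1.898e27        # mass of Jupiter, kg
a = 7.785e8             # mean Sun-Jupiter separation, km

# Distance of the Sun's center from the common barycenter
r_sun = a * m_jup / (m_sun + m_jup)   # about 7.4e5 km
sun_radius = 6.957e5                  # solar radius, km

# The barycenter lies just outside the Sun's surface, which is why the
# Sun visibly circles it on the side opposite Jupiter.
print(r_sun, r_sun > sun_radius)
```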
{ "domain": "astronomy.stackexchange", "id": 5687, "tags": "the-sun, gravity, jupiter" }
Picking random lines out of text files
Question: So I was doing something which creates an airplane with an actual airline code, number, airplane type and so on. The airline code and airplane type are stored in a txt file so I can add more later to my leisure... but the code I came up with to get a random line is extremely messy. Is there a better way? I searched all over stackexchange but it all seemed quite complex for little files which will have only (if ever) a couple dozen lines of content. public String generate() { int totalLines = 0; File file = new File("icaocodes.txt"); BufferedReader br = null; try { br = new BufferedReader(new FileReader(file)); while ((br.readLine()) != null) { totalLines++; } br.close(); br = new BufferedReader(new FileReader(file)); Random random = new Random(); int randomInt = random.nextInt(totalLines); int count=0; String icaocode; while ( (icaocode = br.readLine()) != null) { if (count == randomInt) { br.close(); return icaocode; } count++; } br.close(); } catch (FileNotFoundException e) { System.out.println("File not found: " + file.toString()); } catch (IOException e) { System.out.println("Unable to read file: " + file.toString()); } return "Puppies"; } Answer: Naive Solution (I say naive because of the assumption that the file you are working with is small. From a general design standpoint, it is usually best to prepare for the worst case and assume you may have to read a super long file at some point in time.) Read the file line by line into a List or a Map, then fetch a random entry from here. (Judging by the brief details of what you're working on, a Map of different objects might be a good choice if you'd like to parse a file into different objects for flights, etc.) Probably Better Solution There's a good discussion over here about Reservoir Sampling. Side notes Because you posted in Code Review... Split the reading from the BufferedReaders into separate try-catch blocks. 
This is just good practice, as in some cases your program may be able to continue executing even if an exception is thrown. Close your readers in a finally block. Check out this post on closing readers. Make file final. I'm assuming you just slapped the "Puppies" in the return statement, but if you post more complete code, you can get comments and suggestions on all of your code, not just your approach to finding a random line :)
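For reference, the reservoir-sampling idea linked above boils down to a single pass that keeps line i with probability 1/i. A minimal sketch (in Python rather than Java, with a throwaway demo file standing in for icaocodes.txt):

```python
import random

def random_line(path):
    """Return one uniformly random line, reading the file exactly once."""
    chosen = None
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            # Replace the current choice with probability 1/i; after the
            # full pass, each of the n lines was selected with probability 1/n.
            if random.randrange(i) == 0:
                chosen = line.rstrip("\n")
    return chosen

# Tiny demo file with three airline codes
with open("icaocodes_demo.txt", "w", encoding="utf-8") as f:
    f.write("KLM\nDLH\nBAW\n")
picks = {random_line("icaocodes_demo.txt") for _ in range(300)}
```

The advantage over the question's approach is that the file is read once instead of twice, and the whole file never needs to be held in memory.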
{ "domain": "codereview.stackexchange", "id": 12820, "tags": "java, beginner, random, file" }
AdS/CFT Group Theory
Question: I have a two part question about AdS/CFT: Is the only necessary ingredient that the isometry group of AdS matches the conformal group in one dimension less or are there other prerequisites to build a holographic connection? How does one demonstrate that the isometry group of $AdS_{d+1}$ is $SO(d,2)$? I cannot find any references that do this explicitly which is what I need. Is this because it takes a long time to show this? Answer: 1) That's not the only ingredient -- it's a prerequisite for holography. In reality, with holographic duality one always means a precise mapping from observables in a gravity theory in AdS to observables in a CFT that lives on the boundary. So holography is much richer: it prescribes for example how you can compute a Wilson loop on the boundary CFT in terms of gravity. 2) No, it's in fact almost trivial. $AdS_{d+1}$ with radius $R$ can be defined as the solution to $$ \eta_{\mu \nu} X^\mu X^\nu = R^2 $$ with $\eta_{\mu \nu} = (1,1,-1,\ldots,-1)$ and where $X^\mu$ lives in $\mathbb{R}^{d+2}$. But $SO(2,d)$ is precisely the group that leaves the quadratic form $\eta_{\mu \nu} X^\mu X^\nu$ invariant. If this is too abstract, think of the sphere $S^2$. It can be defined as the set of points $X^\mu \in \mathbb{R}^3$ that obey $$ \delta_{\mu \nu} X^\mu X^\nu = R^2.$$ Its isometry group is $SO(3)$ because this leaves $X^2$ invariant.
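The invariance statement in part 2 is easy to check numerically. A small NumPy sketch for $d=2$ (so $SO(2,2)$ acting on $\mathbb{R}^4$), building a generic group element out of a "boost" mixing a $+1$ and a $-1$ direction and a rotation mixing the two $+1$ directions — the specific parameter values are arbitrary:

```python
import numpy as np

eta = np.diag([1.0, 1.0, -1.0, -1.0])    # signature (2,2): ambient space of AdS_3

phi, theta = 0.7, 1.2
boost = np.eye(4)                        # hyperbolic rotation in the (0, 2) plane
boost[0, 0] = boost[2, 2] = np.cosh(phi)
boost[0, 2] = boost[2, 0] = np.sinh(phi)
rot = np.eye(4)                          # ordinary rotation in the (0, 1) plane
rot[0, 0] = rot[1, 1] = np.cos(theta)
rot[0, 1], rot[1, 0] = -np.sin(theta), np.sin(theta)

L = rot @ boost                          # an element of SO(2,2)
assert np.allclose(L.T @ eta @ L, eta)   # leaves the quadratic form invariant

X = np.array([2.0, 0.3, 1.1, 0.9])       # any ambient point
assert np.isclose(X @ eta @ X, (L @ X) @ eta @ (L @ X))
```

Since $\eta_{\mu\nu}X^\mu X^\nu$ is preserved, any point on the hyperboloid $\eta_{\mu\nu}X^\mu X^\nu = R^2$ is mapped to another point of the same hyperboloid, which is the isometry statement.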
{ "domain": "physics.stackexchange", "id": 27511, "tags": "group-theory, ads-cft, holographic-principle" }
Scalar fields in AdS$_3$
Question: I'm looking at lecture notes on AdS/CFT by Jared Kaplan, and in section 4.2 he claims that the action for a free scalar field in AdS$_3$ is $$S=\int dt d\rho d\theta \dfrac{\sin\rho}{\cos\rho}\dfrac{1}{2}\left[\dot{\phi}^2-\left(\partial_\rho\phi\right)^2-\dfrac{1}{\sin^2\rho}\left(\partial_\theta\phi\right)^2-\dfrac{m^2}{\cos^2\rho}\phi^2\right]$$ and that the canonical momentum conjugate to $\phi$ is $$P_\phi=\dfrac{\delta L}{\delta\dot{\phi}}=\dfrac{\sin\rho}{\cos^2\rho}\dot{\phi}$$ Now, my question is: where do the $\cos^2\rho$ terms in the action and the conjugate momentum come from? Maybe I'm missing something obvious, but when computing the canonical momentum, shouldn't I only pick up the prefactor of $\frac{\sin\rho}{\cos\rho}$? As for the mass term in the action, I know that the free scalar field action in AdS$_{d+1}$ is $$S=\int_{AdS}d^{d+1}x\sqrt{-g}\left[\dfrac{1}{2}\left(\nabla_A\phi\right)^2-\dfrac{1}{2}m^2\phi^2\right]$$ with the metric $$ds^2=\dfrac{1}{\cos^2{\rho}}\left(dt^2-d\rho^2-\sin^2{\rho}\ d\Omega_{d-1}^2\right)$$ so how is the mass term picking up an extra $\frac{1}{\cos^2\rho}$? Answer: The metric for $AdS_3$ is $$ds^2=\frac{1}{\cos^2\rho}(dt^2-d\rho^2-\sin^2\rho\, d\theta^2),$$ because $AdS_3$ corresponds to $d=2$. So $$g=\frac{1}{\cos^2\rho}\times\frac{-1}{\cos^2\rho}\times\frac{-\sin^2\rho}{\cos^2\rho}=\frac{\sin^2\rho}{\cos^6\rho}.$$ That's why in the mass term there is an extra $\frac{1}{\cos^2\rho}$. And I think there is a typo in the expression for the momentum. It should be $$P_\phi=\frac{\sin\rho}{\cos\rho}\dot{\phi}.$$
{ "domain": "physics.stackexchange", "id": 29744, "tags": "homework-and-exercises, lagrangian-formalism, metric-tensor, ads-cft, textbook-erratum" }
Hermite Functions as Basis for Hilbert Space
Question: It is well-known that the Hermite functions (not Hermite polynomials!) form an orthonormal basis for a Hilbert space. Therefore, cannot all solutions of the Schrodinger equation, even the non-harmonic-oscillator cases, be written in terms of Hermite functions? Thanks. Answer: Therefore, cannot all solutions of the Schrodinger equation, even the non-harmonic-oscillator cases, be written in terms of Hermite functions? Yes, they can. However, note that there is a long way from "it is possible" to "it is a useful way to understand the problem". The fact that you can express those solutions in that way doesn't mean it's helpful to do so.
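The "yes, they can" is easy to see numerically: project an arbitrary square-integrable function onto the first few Hermite functions and watch the residual shrink. A NumPy sketch — the test function is my own arbitrary choice, not anything from the question:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_function(n, x):
    # psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi)), orthonormal on R
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2) / sqrt(2.0**n * factorial(n) * sqrt(pi))

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
f = np.exp(-(x - 1.0)**2)          # an arbitrary L^2 function, not an eigenstate

approx = np.zeros_like(x)
for n in range(30):
    c_n = np.sum(f * hermite_function(n, x)) * dx   # coefficient <psi_n | f>
    approx += c_n * hermite_function(n, x)

residual = np.sum((f - approx)**2) * dx             # L^2 error of the truncation
```

With 30 terms the residual is already negligible, which illustrates the completeness; whether the resulting coefficient list is illuminating for a given potential is a separate question, as the answer notes.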
{ "domain": "physics.stackexchange", "id": 60830, "tags": "quantum-mechanics" }
When exactly does the split into different heads in Multi-Head-Attention occur?
Question: I am confused by the Multi-Head part of the Multi-Head-Attention used in Transformers. My question concerns the implementations in Pytorch of nn.MultiheadAttention and its forward method multi_head_attention_forward and whether these are actually identical to the paper. Unfortunately, I have been unable to follow along the original code of the paper, so I could not check whether the implementations in Pytorch are actually identical to the paper. Please forgive the excessive use of illustrations. However, I hope it will improve understanding of my problem. What is the correct order for calculating the Queries Q, Keys K and Values V and splitting the operation into the individual Attention-Heads? Unfortunately most explanations I found online, while helpful for understanding the general principle and intuition of Multi-Head-Attention, did not go into the details of the implementation. In the original paper Attention is all you need Multi-Head-Attention is explained as follows: First, according to my current understanding, if we have a sequence of vectors with 512 dimensions (like in the original Transformer) and we have $h=8$ Attention-Heads (again like the original), every Attention-Head attends to $512/8=64$ entries of the input vector used to calculate the Attention in the corresponding head. So the first Attention-Head attends to the first 64 entries, the second to the second 64 entries and so on. However, if the split is conducted before calculating Q,K,V this would refer to the first 64 entries of X (this does not seem to match the explanation in the paper, I believe), while in the other case it would refer to the first 64 entries of Q,K,V. In the text they say to "project the queries, keys and values h times with different, learned linear projections to $d_k,d_k$ and $d_v$ dimensions", and they set $d_k=d_v=d_{model}/h=512/8=64$.
Therefore, if we actually have single matrices for every Attention-Head h we would have $W^Q_i,W^K_i,W^V_i \in \mathbb{R}^{512x64} \forall i \in h$. This matches the illustration found here https://jalammar.github.io/illustrated-transformer/ It is explained that the input X is transformed into the Queries, Keys and Values for the different attention heads by using different projection matrices which are learned during training. This seems to indicate that the split into the individual Attention-Heads is conducted after the calculation of $Q,K,V$ (or rather during the calculation). Since we have $h=8$ this leads in sum to $3*8*512*64=3*512*512$ learnable parameters in total (if we ignore the bias). Thus as far as the overall number of parameters is concerned we would have the same number if we instead used three big matrices which concatenate the matrices of the individual Attention-Heads. $W^Q=[W^Q_1,W^Q_2,...,W^Q_h] \in \mathbb{R}^{512x512}$ $W^K=[W^K_1,W^K_2,...,W^K_h] \in \mathbb{R}^{512x512}$ $W^V=[W^V_1,W^V_2,...,W^V_h] \in \mathbb{R}^{512x512}$ In the explanation from the same author of GPT-2 (this model has an embedding dimension of 768 and 12 Attention-Heads, instead of 512 and 8 like the original Transformer) found here https://jalammar.github.io/illustrated-gpt2/#part-2-illustrated-self-attention the Queries Q, Keys K and Values V are first calculated by multiplying the input with one big matrix which is the concatenation of $W^Q,W^K,W^V$. If the input for calculating Q, K and V is identical (which is the case for self-attention), it is clear to me that you can use $W_{concat}=[W^Q,W^K,W^V]$ and obtain $[Q,K,V]$, since you essentially still multiply the input with each weight matrix separately. Then you can split the result to again obtain $Q,K,V$ as individual matrices (The image displays $q_9,k_9,v_9$ as an example, not the complete matrices). Then $q_9,k_9,v_9$ are again split into 12 vectors which results in a matrix of dimension $(12x64)$.
So overall here we did not use individual matrices per Attention-Head but only one larger matrix. Is this method mathematically identical to the one using individual smaller matrices per Attention-Head? It appears that this is the way the calculation is implemented in Pytorch, if $d_{model}=kdim=vdim$. Though here, unlike in the paper which used $d_k$ and $d_v$ as names, $kdim$ and $vdim$ refer to the dimension of all Attention-Heads summed up, e.g. $kdim=d_k*num_{heads}$ (=512 for the original Transformer). In the documentation of nn.modules.MultiheadAttention the model either creates three separate projection matrices to generate the Queries, Keys and Values or one big matrix (if the dimensions are identical). The following is part of the __init__ function. if self._qkv_same_embed_dim is False: self.q_proj_weight = Parameter(torch.empty((embed_dim, embed_dim), **factory_kwargs)) self.k_proj_weight = Parameter(torch.empty((embed_dim, self.kdim), **factory_kwargs)) self.v_proj_weight = Parameter(torch.empty((embed_dim, self.vdim), **factory_kwargs)) self.register_parameter('in_proj_weight', None) else: self.in_proj_weight = Parameter(torch.empty((3 * embed_dim, embed_dim), **factory_kwargs)) self.register_parameter('q_proj_weight', None) self.register_parameter('k_proj_weight', None) self.register_parameter('v_proj_weight', None) If we stay in the standard case of _qkv_same_embed_dim=True the input is passed through a nn.linear as part of _in_projection_packed which is using self.in_proj_weight as the weight: if not use_separate_proj_weight: assert in_proj_weight is not None, "use_separate_proj_weight is False but in_proj_weight is None" q, k, v = _in_projection_packed(query, key, value, in_proj_weight, in_proj_bias) else: assert q_proj_weight is not None, "use_separate_proj_weight is True but q_proj_weight is None" assert k_proj_weight is not None, "use_separate_proj_weight is True but k_proj_weight is None" assert v_proj_weight is not None, "use_separate_proj_weight 
is True but v_proj_weight is None" if in_proj_bias is None: b_q = b_k = b_v = None else: b_q, b_k, b_v = in_proj_bias.chunk(3) q, k, v = _in_projection(query, key, value, q_proj_weight, k_proj_weight, v_proj_weight, b_q, b_k, b_v) def _in_projection_packed( q: Tensor, k: Tensor, v: Tensor, w: Tensor, b: Optional[Tensor] = None, ) -> List[Tensor]: r""" Performs the in-projection step of the attention operation, using packed weights. Output is a triple containing projection tensors for query, key and value. Args: q, k, v: query, key and value tensors to be projected. For self-attention, these are typically the same tensor; for encoder-decoder attention, k and v are typically the same tensor. (We take advantage of these identities for performance if they are present.) Regardless, q, k and v must share a common embedding dimension; otherwise their shapes may vary. w: projection weights for q, k and v, packed into a single tensor. Weights are packed along dimension 0, in q, k, v order. b: optional projection biases for q, k and v, packed into a single tensor in q, k, v order. Shape: Inputs: - q: :math:`(..., E)` where E is the embedding dimension - k: :math:`(..., E)` where E is the embedding dimension - v: :math:`(..., E)` where E is the embedding dimension - w: :math:`(E * 3, E)` where E is the embedding dimension - b: :math:`E * 3` where E is the embedding dimension Output: - in output list :math:`[q', k', v']`, each output tensor will have the same shape as the corresponding input tensor. 
""" E = q.size(-1) if k is v: if q is k: # self-attention return linear(q, w, b).chunk(3, dim=-1) else: # encoder-decoder attention w_q, w_kv = w.split([E, E * 2]) if b is None: b_q = b_kv = None else: b_q, b_kv = b.split([E, E * 2]) return (linear(q, w_q, b_q),) + linear(k, w_kv, b_kv).chunk(2, dim=-1) else: w_q, w_k, w_v = w.chunk(3) if b is None: b_q = b_k = b_v = None else: b_q, b_k, b_v = b.chunk(3) return linear(q, w_q, b_q), linear(k, w_k, b_k), linear(v, w_v, b_v) def _in_projection( q: Tensor, k: Tensor, v: Tensor, w_q: Tensor, w_k: Tensor, w_v: Tensor, b_q: Optional[Tensor] = None, b_k: Optional[Tensor] = None, b_v: Optional[Tensor] = None, ) -> Tuple[Tensor, Tensor, Tensor]: r""" Performs the in-projection step of the attention operation. This is simply a triple of linear projections, with shape constraints on the weights which ensure embedding dimension uniformity in the projected outputs. Output is a triple containing projection tensors for query, key and value. Args: q, k, v: query, key and value tensors to be projected. w_q, w_k, w_v: weights for q, k and v, respectively. b_q, b_k, b_v: optional biases for q, k and v, respectively. Shape: Inputs: - q: :math:`(Qdims..., Eq)` where Eq is the query embedding dimension and Qdims are any number of leading dimensions. - k: :math:`(Kdims..., Ek)` where Ek is the key embedding dimension and Kdims are any number of leading dimensions. - v: :math:`(Vdims..., Ev)` where Ev is the value embedding dimension and Vdims are any number of leading dimensions. 
- w_q: :math:`(Eq, Eq)` - w_k: :math:`(Eq, Ek)` - w_v: :math:`(Eq, Ev)` - b_q: :math:`(Eq)` - b_k: :math:`(Eq)` - b_v: :math:`(Eq)` Output: in output triple :math:`(q', k', v')`, - q': :math:`[Qdims..., Eq]` - k': :math:`[Kdims..., Eq]` - v': :math:`[Vdims..., Eq]` """ Eq, Ek, Ev = q.size(-1), k.size(-1), v.size(-1) assert w_q.shape == (Eq, Eq), f"expecting query weights shape of {(Eq, Eq)}, but got {w_q.shape}" assert w_k.shape == (Eq, Ek), f"expecting key weights shape of {(Eq, Ek)}, but got {w_k.shape}" assert w_v.shape == (Eq, Ev), f"expecting value weights shape of {(Eq, Ev)}, but got {w_v.shape}" assert b_q is None or b_q.shape == (Eq,), f"expecting query bias shape of {(Eq,)}, but got {b_q.shape}" assert b_k is None or b_k.shape == (Eq,), f"expecting key bias shape of {(Eq,)}, but got {b_k.shape}" assert b_v is None or b_v.shape == (Eq,), f"expecting value bias shape of {(Eq,)}, but got {b_v.shape}" return linear(q, w_q, b_q), linear(k, w_k, b_k), linear(v, w_v, b_v) def linear( input: Tensor, weight: Tensor, bias: Optional[Tensor] = None, scale: Optional[float] = None, zero_point: Optional[int] = None ) -> Tensor: r""" Applies a linear transformation to the incoming quantized data: :math:`y = xA^T + b`. See :class:`~torch.nn.quantized.Linear` .. note:: Current implementation packs weights on every call, which has penalty on performance. If you want to avoid the overhead, use :class:`~torch.nn.quantized.Linear`. Args: input (Tensor): Quantized input of type `torch.quint8` weight (Tensor): Quantized weight of type `torch.qint8` bias (Tensor): None or fp32 bias of type `torch.float` scale (double): output scale. If None, derived from the input scale zero_point (long): output zero point. 
If None, derived from the input zero_point Shape: - Input: :math:`(N, *, in\_features)` where `*` means any number of additional dimensions - Weight: :math:`(out\_features, in\_features)` - Bias: :math:`(out\_features)` - Output: :math:`(N, *, out\_features)` """ if scale is None: scale = input.q_scale() if zero_point is None: zero_point = input.q_zero_point() _packed_params = torch.ops.quantized.linear_prepack(weight, bias) return torch.ops.quantized.linear(input, _packed_params, scale, zero_point) Then later the Queries, Keys and Values are split up into the individual Attention-Heads: # # reshape q, k, v for multihead attention and make em batch first # q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1) if static_k is None: k = k.contiguous().view(k.shape[0], bsz * num_heads, head_dim).transpose(0, 1) else: # TODO finish disentangling control flow so we don't do in-projections when statics are passed assert static_k.size(0) == bsz * num_heads, \ f"expecting static_k.size(0) of {bsz * num_heads}, but got {static_k.size(0)}" assert static_k.size(2) == head_dim, \ f"expecting static_k.size(2) of {head_dim}, but got {static_k.size(2)}" k = static_k if static_v is None: v = v.contiguous().view(v.shape[0], bsz * num_heads, head_dim).transpose(0, 1) else: # TODO finish disentangling control flow so we don't do in-projections when statics are passed assert static_v.size(0) == bsz * num_heads, \ f"expecting static_v.size(0) of {bsz * num_heads}, but got {static_v.size(0)}" assert static_v.size(2) == head_dim, \ f"expecting static_v.size(2) of {head_dim}, but got {static_v.size(2)}" v = static_v Therefore, it appears to me that both ways of calculations should be equal and the one using the bigger matrices is just more efficient to compute. Is this correct? 
In this case, I ask myself why nn.MultiheadAttention requires that embed_dim is divisible by num_heads, since the split into the individual Attention-Heads is actually conducted after generating $Q,K,V$. Should then not $d_q,d_k,d_v$ be made to be divisible by num_heads? Since these dimensions do not have to be equal to the dimension of the inputs? Thanks for any advice in advance! Answer: The queries, keys and values are calculated then chunked so that each chunk depends on (is a linear combination of) all values of the input embedding. As for understanding an implementation, I didn't bother with pytorch but instead understood this code http://einops.rocks/pytorch-examples.html although there are differences as I understand that pytorch expects input in the form (L, N, C) where L=words, N=batch, C=embedding for performance reasons. Transformer's attention needs more attention class MultiHeadAttentionNew(nn.Module): def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1): super().__init__() self.n_head = n_head self.w_qs = nn.Linear(d_model, n_head * d_k) self.w_ks = nn.Linear(d_model, n_head * d_k) self.w_vs = nn.Linear(d_model, n_head * d_v) nn.init.normal_(self.w_qs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k))) nn.init.normal_(self.w_ks.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_k))) nn.init.normal_(self.w_vs.weight, mean=0, std=np.sqrt(2.0 / (d_model + d_v))) self.fc = nn.Linear(n_head * d_v, d_model) nn.init.xavier_normal_(self.fc.weight) self.dropout = nn.Dropout(p=dropout) self.layer_norm = nn.LayerNorm(d_model) def forward(self, q, k, v, mask=None): residual = q q = rearrange(self.w_qs(q), 'b l (head k) -> head b l k', head=self.n_head) k = rearrange(self.w_ks(k), 'b t (head k) -> head b t k', head=self.n_head) v = rearrange(self.w_vs(v), 'b t (head v) -> head b t v', head=self.n_head) attn = torch.einsum('hblk,hbtk->hblt', [q, k]) / np.sqrt(q.shape[-1]) if mask is not None: attn = attn.masked_fill(mask[None], -np.inf) attn = 
torch.softmax(attn, dim=3) output = torch.einsum('hblt,hbtv->hblv', [attn, v]) output = rearrange(output, 'head b l v -> b l (head v)') output = self.dropout(self.fc(output)) output = self.layer_norm(output + residual) return output Understand MultiHeadAttentionNew first, then understand MultiHeadAttentionOld, and then maybe look at the pytorch code.
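The equivalence the question asks about — one packed projection chunked afterwards versus eight per-head matrices — holds because chunking the output along the feature axis is the same as multiplying by the corresponding column block of the big matrix. A small NumPy check (random data, toy batch of 10 tokens):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads = 512, 8
d_k = d_model // n_heads
X = rng.normal(size=(10, d_model))           # 10 token embeddings

# One big projection matrix, as in the packed PyTorch path ...
W_big = rng.normal(size=(d_model, d_model))
Q_big = X @ W_big                            # (10, 512)
heads_from_big = np.split(Q_big, n_heads, axis=-1)   # 8 chunks of (10, 64)

# ... versus per-head matrices W_i, taken as column blocks of W_big,
# as the paper's notation suggests
heads_from_small = [X @ W_big[:, i * d_k:(i + 1) * d_k] for i in range(n_heads)]

for a, b in zip(heads_from_big, heads_from_small):
    assert np.allclose(a, b)                 # identical head inputs either way
```

This also answers the divisibility puzzle in spirit: the packed implementation reshapes the projection output into num_heads chunks, so the projected dimension (which nn.MultiheadAttention fixes to embed_dim) must divide evenly.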
{ "domain": "ai.stackexchange", "id": 3331, "tags": "transformer, pytorch, attention" }
How to calculate the energy to dissociate a bond into neutral atoms?
Question: I am self-studying chemistry through MIT OCW 5.111. On practice exam 2, problem 2, there is a question which states the following: element ionization energy electron affinity Potassium(K) 418 kJ/mol 48 kJ/mol Fluorine(F) 1680 kJ/mol 328 kJ/mol Chlorine(Cl) 1255 kJ/mol 349 kJ/mol (a) (12 points) The ionic bond length for KF is 0.217 nm. Calculate the energy (in units of kJ/mol) required to dissociate a single molecule of KF into the neutral atoms K and F, using information provided above. For this calculation, assume that the potassium and fluorine ions are point charges. I proceeded to calculate the bond dissociation energy using the following formula $U(r)=\frac{z_1z_2e^2}{4\pi\varepsilon_0r}$ from which I obtained a dissociation energy of $-640$ kJ/mol. Then, left with two ions F$^{-1}$ and K$^{+1}$, I went on to determine their ionization energies and electron affinities respectively. To make F$^{-1}\to$ F + $e^{-}$ an electron must be lost, thus warranting an ionization energy of $-1680$ kJ/mol. To make K$^{+1}$+ $e^{-} \to$ K an electron must be gained, thus referring to the electron affinity of K of $48$ kJ/mol. When equated as (Energy Required - Energy Released) I arrive at (Dissociation Energy + Ionization Energy) - Electron Affinity = (-640 kJ/ mol - 1680 kJ/mol) + 48 kJ/mol = -2272 kJ/mol required. However I see in the answer key this is incorrect; could someone guide me as to where I am going wrong here? Thank you. Answer: Your $\pu{640 kJ mol^-1}$ value is correct, so the next step is to neutralize the ions. First add an electron to a potassium ion to get a potassium atom. This releases $\pu{418 kJ mol^-1}$ because it is the reverse of the potassium atom ionization. Then take away an electron from fluoride ion to get a fluorine atom. This requires an input of $\pu{328 kJ mol^-1}$ because it is the reverse of the fluorine electron affinity.
So the result is $\pu{550 kJ mol^-1}$, i.e., $\pu{640 kJ mol^-1}$ - $\pu{418 kJ mol^-1}$ + $\pu{328 kJ mol^-1}$. This figure makes it clear: Figure copyright information: D.W. Oxtoby, H.P. Gillis, L.J. Butler, Principles of Modern Chemistry, 8th Ed., Cengage Learning, Section 3.8, p. 87, © 2016 Cengage Learning.
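Both numbers in the answer can be checked directly from physical constants. A short Python calculation:

```python
import math

# Physical constants (SI, CODATA values)
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
N_A = 6.02214076e23        # Avogadro's number, 1/mol

r = 0.217e-9               # K-F ionic bond length, m

# Coulomb attraction between the point-charge ions, per mole
U = e**2 / (4 * math.pi * eps0 * r) * N_A / 1000   # kJ/mol
print(round(U))            # 640 kJ/mol

# Separate the ions (+640), neutralize K+ (releases the IE of K, -418),
# then neutralize F- (costs the EA of F, +328)
total = U - 418 + 328
print(round(total))        # 550 kJ/mol
```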
{ "domain": "chemistry.stackexchange", "id": 13406, "tags": "inorganic-chemistry, physical-chemistry, bond, energy, ionization-energy" }
Hund's rule & different H₂ molecules
Question: Does Hund's rule allow both of the following scenarios? Filling each orbital with a single electron, so that a sub-shell, at first, contains only electrons with a negative spin Filling each orbital with a single electron, such that a sub-shell, at first, contains only electrons with a positive spin? I assume two H atoms whose electrons have different spins would form $\ce{H-H}$ and atoms where the spins are the same would form $\ce{H:H}$. Is that correct? Answer: For H$_2$ there are only two electrons and thus you can only have singlet (electrons with opposite spin projection) and triplet states (electrons with equal spin projection). The spin wave function of the singlet state is a linear combination (a superposition state) of the first electron with spin projection $\alpha$ and the second electron with spin projection $\beta$ and the opposite situation. Both spin states in the quantum superposition have opposite phase: $$|\uparrow\downarrow\rangle -|\downarrow\uparrow \rangle$$ If the electrons are in the same molecular orbital then (by the Pauli principle) only the singlet state exists. If they are filling $\pi$ molecular orbitals then they could be in different degenerate orbitals (with angular momentum projection on the molecular axis $M_z = \pm \hbar$) and thus both singlet and triplet terms would exist. There are three degenerate states belonging to the triplet term, that is, there are three wave functions that have equal energy. Their spin components are: $$|\uparrow \uparrow \rangle$$ $$|\downarrow \downarrow \rangle$$ $$|\uparrow \downarrow\rangle + | \downarrow\uparrow \rangle$$ So you can see that it doesn't matter if you fill one molecular orbital with one electron in spin $\alpha$ and a different molecular orbital with the electron with $\alpha$ spin or you do so with both $\beta$ and $\beta$ spins. However you cannot put both electrons with the same spin projection in the same molecular orbital. 
That is against the Pauli principle, that is, against the anti-symmetry of the electronic wave function.
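The exchange-symmetry claim at the heart of this answer — the singlet spin state is antisymmetric when the two electrons are swapped, while all three triplet states are symmetric — can be verified in a few lines of NumPy:

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
kron = np.kron

# |up dn> - |dn up>, normalized: the singlet
singlet = (kron(up, dn) - kron(dn, up)) / np.sqrt(2)
# The three triplet states
triplets = [kron(up, up), kron(dn, dn), (kron(up, dn) + kron(dn, up)) / np.sqrt(2)]

# SWAP exchanges the two spins in the product basis |s1 s2>
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

assert np.allclose(SWAP @ singlet, -singlet)   # antisymmetric under exchange
for t in triplets:
    assert np.allclose(SWAP @ t, t)            # symmetric under exchange
```

Since the total wave function must be antisymmetric, a symmetric (triplet) spin part forces an antisymmetric spatial part, which vanishes when both electrons occupy the same orbital — exactly the Pauli restriction stated above.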
{ "domain": "chemistry.stackexchange", "id": 303, "tags": "spin, hydrogen" }
Choose-many categorical features: alternatives to one-hot encoding?
Question: I'm building a model to predict the lifetime value of a client based on the relational data we have on them. The user table has a bunch of one-to-many child tables that might be predictive. Grossly simplified, the child features boil down to things like: a list of item categories that they've bought in the past a list of the predominant colors in ads they've clicked on etc, etc In each case, the obvious feature comprises a list of ~ 0-10 choices from a categorical variable. I have several of these features, some of which have as many as ~10k discrete values, so one-hot encoding would get very wide, very fast. Aside: if there is a term of art for this kind of "list-of-tags feature" that I'm referring to as "choose many categorical", please tell me. Question: Is there a dense encoding scheme that works with choose-many categorical features? Answer: If your algorithm is based on gradient descent optimization, you can use embeddings, which are dense representation spaces for discrete elements. Embeddings are supported by most deep learning frameworks such as pytorch or tensorflow. Update: the fact that you want to have multiple discrete values does not prevent the possibility of using embeddings: you can just add all vectors together in a single value. The most straightforward approach for this would be to have a constant length for the list (equal to the maximum number of elements in all lists, or a sensible maximum value), filling with "padding" items the positions that are not needed. If you want to take the sequential appearance of the elements into account, instead of adding the vectors together you could apply convolutional layers or an LSTM over the embedded vectors.
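The "add all vectors together" idea is what frameworks call a bag of embeddings (in PyTorch, nn.EmbeddingBag with mode='sum' does exactly this). A framework-free NumPy sketch of the encoding — the table size, vector dimension, and tag ids are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_categories, dim = 10_000, 32            # 10k-tag vocabulary -> 32-d vectors
E = rng.normal(size=(n_categories, dim))  # the embedding table (learned in practice)

def encode(tag_ids):
    """Dense, fixed-size representation of a variable-length bag of tags."""
    if not tag_ids:
        return np.zeros(dim)              # empty bag -> zero vector
    return E[np.array(tag_ids)].sum(axis=0)

v = encode([3, 17, 4096])                 # always a 32-d vector, however many tags
```

Unlike one-hot encoding, the width stays at dim regardless of vocabulary size, and the table rows are trained end to end when the model is optimized by gradient descent.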
{ "domain": "datascience.stackexchange", "id": 8660, "tags": "encoding, categorical-encoding" }
Get an integer as input and display Pi rounded to that amount of decimal places
Question: I've been writing a program that accepts an integer as input and displays the number Pi rounded to that number. The only issue I see is the fact that the Math.Round method can only round up to 15 spaces and there's no try-catch for that ArgumentOutOfRange exception. I'm also not sure how safe it is letting your flow of execution rely on a try-catch statement. class Program { public static int counter = 0; static void Main(string[] args) { Console.WriteLine("Welcome to the Pi-Rounder! Find Pi to up to 15 decimal places!"); Console.WriteLine("Please enter the number of decimal places you'd like to round to."); int roundTo; do { string digitAsString = Console.ReadLine(); roundTo = ConversionLoop(digitAsString); } while(roundTo == 0 && counter != 5); if(counter == 5) { throw new FormatException(); } else { double piRounded = Math.Round(Math.PI, roundTo); Console.WriteLine(piRounded); Console.ReadLine(); } } static int ConversionLoop(string digitString) { try { int digit = Convert.ToInt32(digitString); return digit; } catch(FormatException) { counter++; Console.WriteLine("That was not a valid number. Please try again."); return 0; } } } Answer: There are several issues with this piece of code: do { string digitAsString = Console.ReadLine(); roundTo = ConversionLoop(digitAsString); } while(roundTo == 0 && counter != 5); if(counter == 5) { throw new FormatException(); } else { double piRounded = Math.Round(Math.PI, roundTo); Console.WriteLine(piRounded); Console.ReadLine(); } Problems: "ConversionLoop" is a meaningless name. The function parses a string to an integer, so a better name would be toInt The handling of invalid input and incrementing the counter are not visible here. At first look I didn't see how the counter can advance, and it seemed you don't tell the user about invalid results. I had to look at the ConversionLoop to find out, but it was not logical to do so. 
The responsibility of getting valid input should not be split between two methods, it would be clearer to handle in one place, and have all the elements of the logic easily visible. If the user fails to enter valid input 5 times, the code throws new FormatException() FormatException is not appropriate for this. The problem is not invalid format, but failure to enter valid input within a reasonable number of retries. It's a different kind of error, and should be captured by a different exception class Creating an exception without a text message explaining the problem makes debugging difficult After throwing an exception in the if branch, the program exits from the method, so you can simplify the else A bug I think you have a bug: if the user enters 0 as input, the ConversionLoop method doesn't print an error and returns 0 normally, but the program will still wait for another try. Without a message, this will be confusing to the user. I doubt you intended it this way. Suggested implementation With the above suggestions, the code becomes: class UserInputException : Exception { public UserInputException(string message) : base(message) { } } public static int MAX_TRIES = 5; static void Main(string[] args) { Console.WriteLine("Welcome to the Pi-Rounder! Find Pi to up to 15 decimal places!"); int roundTo = ReadIntegerInput(); double piRounded = Math.Round(Math.PI, roundTo); Console.WriteLine(piRounded); Console.ReadLine(); } static int ReadIntegerInput() { Console.WriteLine("Please enter the number of decimal places you'd like to round to."); int counter = 0; while (true) { string digitAsString = Console.ReadLine(); try { return Convert.ToInt32(digitAsString); } catch(FormatException) { if (++counter == MAX_TRIES) { throw new UserInputException("Too many invalid inputs. Goodbye."); } Console.WriteLine("That was not a valid number. Please try again."); } } }
{ "domain": "codereview.stackexchange", "id": 14362, "tags": "c#, error-handling, formatting, floating-point" }
Textbook for special relativity: modern version of Bondi's Relativity and Common Sense?
Question: I am looking for a textbook on special relativity for school children. A background in simple vector based mechanics could be assumed. Primarily it needs to be readable at high school English reading level with minimal jargon, emphasising intuition and without introducing too much unnecessary mathematical baggage (tensors seem to be useful for GR but unnecessary in an intro to SR.) A more accessible alternative to vanilla SR mathematical calculations that directly models SR could also be a great bonus for aiding intuition, just as logs aid in the intuition of geometric progressions. I've started reading Relativity and Common Sense, and it is amazingly readable, like a pop-science book, but in-depth and with real science. Bondi also uses doppler k-factors as a replacement for velocity which also aids intuition compared to matrices and gammas all over the place. Unfortunately, the e-book I've found is simply a PDF scan of the 40+ year old book and looks very dated, and there are only a few diagrams. Is there a modern equivalent that I should be looking at? I've also looked at Algebra of Physical Space (Clifford Algebra) as an accessible methodology. It looks very easy to use, but I can't find a good textbook written for beginners. Unfortunately most of the material I've found are papers focussing on APS and convincing existing practitioners to convert to APS rather than a gentle introduction to SR that happens to use APS. P.s. I'm not wanting to start a flame war about mathematical models of SR, I would just regard an intuitive mathematical model to be a bonus. These are school kids, they are not going to work at LHC next year. 
To moderators: sorry that this question might not have a single clean-cut answer, but I expect that the answers will be based on experience and professional judgement rather than uninformed opinion :-) Similar question, for a textbook for a different purpose: Textbook on the Geometry of Special Relativity Answer: An Illustrated Guide to Relativity - Tatsu Takeuchi A very enjoyable book on special relativity for beginners. It covers the basics (Lorentz transforms, length contraction, time dilation, velocity addition, twin paradox,...) using spacetime diagrams rather than equations. It's a fun and intuitive introduction. To give you an idea: this is an illustration of relativistic velocity addition:
{ "domain": "physics.stackexchange", "id": 18281, "tags": "special-relativity, resource-recommendations, education" }
Why is it called "carbonation"?
Question: Why is it referred to as "carbonation" and we drink "carbonated" beverages when carbonate is $\ce{CO3}$ while $\ce{CO2}$ (carbonite?) is present in carbonation? Answer: Nope. Carbonate is $\ce{CO}_3^\color{red}{2-}$ (an ion), not $\ce{CO3}$. Carbonite is $\ce{CO}_2^\color{red}{2-}$. Carbonation involves dissolving $\ce{CO2}$ gas in water. It turns out that $\ce{CO2}$ reacts (maybe not the best term to use here*) with water via the following reaction $$\ce{CO2 + H2O -> CO3^{2-} + 2H+}$$ So you have effectively carbonated the soft drink. When you depressurize the bottle by opening it, the reverse reaction occurs and you get carbon dioxide. *Such dissociation is normal and integral to dissolution of polar solutes in water, so generally it is considered as part of the dissolution
{ "domain": "chemistry.stackexchange", "id": 172, "tags": "everyday-chemistry, aqueous-solution, nomenclature" }
How is an electron "recycled" in a neutron?
Question: A proton is made up, they say, by 2 up and 1 down quark, drowned in a sea of virtual particles: when an electron is captured this process thereby changes a nuclear proton to a neutron and simultaneously causes the emission of an electron neutrino. $p + e^- \rightarrow n + \nu_e$ ...and a neutron is made up by 2 down quarks and 1 up quark Can you clarify some aspects of this "process" which I couldn't find anywhere? 1) what happens to the electron? We know that it is an elementary particle, that means it has no internal structure and it can only be transformed into a photon when joined to a positron, is it first turned into energy? but there is no positron at hand in the proton. What are the steps that lead from 2u+1d+1e to 2d+1u+ν? 2) is there any qualitative difference between the charge of a down quark and an electron? can electric charge emerge from two different types of particle, or is a d-quark just an electron with rest mass = energy = 1.2356*10^20 Hz/3? Answer: The process is called electron capture, and the Feynman diagram for it is: The up quark emits a virtual W$^+$ boson and changes to a down quark. The electron interacts with the W$^+$ and converts to an electron neutrino. This is a weak force interaction, and the weak force can change the flavour of particles i.e. it can change quarks to a quark of a different type and likewise interchange between leptons and neutrinos. Particles can change into other particles because all particles are excitations of a quantum field. For example there is an electron quantum field that pervades all of spacetime. If we add a quantum of energy to this field it appears as an electron. Likewise we can remove a quantum of energy from the electron field and this makes an electron disappear. An electron can change into an electron neutrino because energy can be transferred from the electron field to the neutrino field making one electron disappear and one neutrino appear. Likewise for the up to down quark change.
{ "domain": "physics.stackexchange", "id": 28789, "tags": "nuclear-physics, electrons, standard-model, neutrons, weak-interaction" }
why does the partial pressure not change?
Question: Suppose I have the equilibrium in a closed container at constant temperature: $$\ce{2CaSO4(s) -> 2CaO(s) + 2SO2(g) + O2(g)}$$ If the volume of the container is halved, the reaction should move towards the side having the smaller number of moles of gas, and hence the partial pressures of $\ce{O2}$ and $\ce{SO2}$ must change, but it is given that they won't change. How do you solve this question? Answer: The given explanations are correct, but there is a slightly different way of looking at it which makes it easier to understand. While $\ce{2CaSO4(s) -> 2CaO(s) + 2SO2(g) + O2(g)}$ is quite a correct way to describe the equilibrium, it is worthwhile to remember that since we have an equilibrium, we probably also have some $\ce{CaO(s)}$ on the left: $\ce{2CaSO4(s) + n CaO(s) -> (n + 2) CaO(s) + 2SO2(g) + O2(g)}$. Then, mentally halving the volume of the container doubles the pressure (mentally), which then causes a back reaction with the $\ce{CaO}$ on the left side, leaving the partial pressures unchanged because they are dependent on the nature of the equilibrium constant. Putting the $\ce{CaO}$ on the left side is like completing the mental picture of a seesaw (a metaphor for equilibrium), which otherwise has a weight on one side and not on the other.
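Numerically, the point is that the gas-phase partial pressures are pinned by $K_p$ alone (the solids have unit activity), so they come out the same at any container volume. A small sketch with a made-up $K_p$ value (not real data for $\ce{CaSO4}$), assuming all the gas comes from the decomposition so that $p_{\ce{SO2}} = 2\,p_{\ce{O2}}$:

```python
# Hypothetical equilibrium constant at the fixed temperature (units atm^3):
Kp = 4.0e-6

# Kp = p_SO2^2 * p_O2, and the 2:1 stoichiometry gives p_SO2 = 2*p_O2 = 2x,
# so Kp = (2x)^2 * x = 4*x^3.
p_O2 = (Kp / 4.0) ** (1.0 / 3.0)
p_SO2 = 2.0 * p_O2

# Nothing here depends on the container volume: halving V just converts
# some SO2/O2 back into CaSO4 until these same pressures are restored.
```

The volume never enters the calculation, which is the quantitative face of the seesaw picture above.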
{ "domain": "chemistry.stackexchange", "id": 14302, "tags": "equilibrium, gas-phase-chemistry" }
When will the 2 cars Have equal speeds?
Question: If car1 started at 20m, with initial velocity of -4m/s and acceleration of 3 m/s^2. Car 2 started at 15m, with initial velocity of 6m/s and acceleration of 0. At what time will the 2 cars have equal speeds. Here's my work below: assuming constant acceleration, we know that $v_{avg}=\frac{\left(v_f+v_i\right)}{2}=\frac{x}{t}$ and the acceleration $a=\frac{v_f-v_i}{t}$ with a little bit of manipulation of the two equations, I got $v_f=\frac{x}{t}+\frac{a}{2}t$ since we need to find the time at which the two cars will have the same speed. That's v_1f = v_2f then, we get $\left(v_{f_2}=\frac{x_2}{t}+\frac{a_2}{2}t\right)=\left(v_{f_1}=\frac{x_1}{t}+\frac{a1}{2}t\right)$ with manipulation of the above equations for t I get $t=\sqrt{\frac{2\left(x1-x2\right)}{a2-a1}}$ Is this the right approach? Answer: It is much simpler than what you attempted. You don't need to take into account the positions of the cars, since what they ask you is to compute when the velocities of both of them will be equal, and they give you their initial velocities and their accelerations. If you know the initial velocity of a car and its acceleration, you can compute its velocity at any moment in time, and that is independent of whether the car is passing through your street or in another town (its position doesn't matter). If $v_0$ is the initial velocity of an object, $a$ its constant acceleration, $t$ is the moment in time and $v$ the velocity of the object (which will obviously change with time unless the acceleration is zero), then: $$v=v_0+a\cdot t$$ Writing an equation like this for each one of your cars, we have: $$\begin{cases}v_1=v_{0,1}+a_1\cdot t \\ v_2=v_{0,2}+a_2\cdot t\end{cases}$$ And what is the value of $t$ for which the velocities $v_1=v_2$ will be equal? 
Well, we just have to make $v_1=v_2$ in the equations above: $$v_1=v_2\quad\Rightarrow\quad v_{0,1}+a_1\cdot t=v_{0,2}+a_2\cdot t\quad\Rightarrow\quad t=\dfrac{v_{0,2}-v_{0,1}}{a_1-a_2}=\dfrac{6-(-4)}{3-0}=\dfrac{10}{3}\approx 3.33$$ At $t=10/3\approx 3.33$ seconds, both cars will be rolling at the same velocity (note that this velocity will be 6 m/s, since that is the initial velocity of the second car, which has zero acceleration, and this means that its velocity will remain constant). Edit: I'll try to clarify here why the positions aren't relevant. To understand this, you just have to think about the physical meaning of the concepts involved: velocity and acceleration. Velocity is easy, its units are $m/s$, meters per second, which means that if the velocity of an object is, for example, 3 $m/s$, then that object moves 3 meters in one second. In the case of acceleration, you will have noticed that its units are $m/s^2$, meters per second squared. A second squared probably doesn't make much sense in an intuitive way, but what if we write it like this? $$\dfrac{m}{s^2}=\dfrac{m/s}{s}$$ An acceleration is then expressed in meters per second, per second. This means it measures in how many meters per second a velocity changes in a second. So, if a car is initially moving with a velocity of 6 $m/s$, and it has a constant acceleration of 2 $m/s^2$, then its velocity will increase by 2 $m/s$ with every second that passes. After one second, it will be 8 $m/s$, after two seconds it will be 10 $m/s$, and so on. Since the problem just asks you when the velocities of the two cars will be equal, and we just reasoned that all you need to know to compute the velocity of a car that moves with constant acceleration is its initial velocity, the value of the acceleration and how much time has passed, we conclude that the positions aren't relevant to calculate the velocities.
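The one-line calculation above is easy to check in code; the numbers below are just the values from the question:

```python
# Car 1: starts at -4 m/s, accelerates at 3 m/s^2.
# Car 2: starts at 6 m/s, zero acceleration.
v01, a1 = -4.0, 3.0
v02, a2 = 6.0, 0.0

# v1(t) = v01 + a1*t equals v2(t) = v02 + a2*t when:
t = (v02 - v01) / (a1 - a2)   # 10/3 s

v1 = v01 + a1 * t
v2 = v02 + a2 * t
# v1 == v2 == 6.0 m/s, the (constant) speed of car 2
```

Note that the starting positions (20 m and 15 m) never appear, which is exactly the point of the answer.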
{ "domain": "physics.stackexchange", "id": 93484, "tags": "homework-and-exercises, kinematics, time, speed" }
Do birds change their diet before migration?
Question: Migration takes up a lot of energy, so I am wondering if birds (in my case mallards) change their diet / food preferences in the weeks before migrating. Do they select for more energetic foods to fuel up, or do they just eat more of the same foods to gain enough energy? Answer: Apparently, yes. This article underlines the changes in diet in semipalmated sandpiper before its 3000 km-long migration: Near the beginning of their journey, sandpipers stop at the Bay of Fundy on Canada's eastern coast to gorge on mud shrimp, 1-cm-long crustaceans loaded with omega-3 fatty acids. Over 2 weeks, the frantic feeding doubles each sandpiper's body mass [and i]t makes sandpipers' muscles use oxygen more efficiently, enhancing the birds' endurance. The authors experimented the diet on a similar species to disentangle the effects of the different diet from other effects, for example due to hormonal changes: To isolate diet's role, Weber and colleagues took exercise and migration out of the equation. They turned to the bobwhite quail, an unrelated sedentary bird that doesn't migrate and seldom flies. For 6 weeks, the scientists fed 40 couch-potato quails a combination of omega-3 fatty acids from fish oil. To the researchers' surprise, the quail's oxidative capacity - their muscles' efficiency at using fuel - shot up 58% to 90%. This paper reviews the effects of diet changes on the digestive trait of migratory birds; however, it's quite old (2001). I couldn't find any indication on mallard feeding prior to migration, but these links can be a start for further research.
{ "domain": "biology.stackexchange", "id": 8257, "tags": "food, ornithology, behaviour, diet, migration" }
How to change gazebo plugin parameter
Question: I am using Hydro on Ubuntu 12.04. When I spawn create base turtlebot, turtlebot begins to navigate on its own. I suspect that in file create_gazebo.urdf.xacro under create_description package like in this code snippet alwaysOn parameter is causing this. Is there any way to change value of this parameter in launch file or only way is changing this file ? .... <robot xmlns:xacro="http://ros.org/wiki/xacro" name="turtlebot_gazebo"> <xacro:macro name="sim_create"> <gazebo> <plugin name="create_controller" filename="libgazebo_ros_create.so"> <alwaysOn>true</alwaysOn> <node_namespace>turtlebot_node</node_namespace> <left_wheel_joint>left_wheel_joint</left_wheel_joint> <right_wheel_joint>right_wheel_joint</right_wheel_joint> <front_castor_joint>front_castor_joint</front_castor_joint> <rear_castor_joint>rear_castor_joint</rear_castor_joint> <wheel_separation>.260</wheel_separation> <wheel_diameter>0.066</wheel_diameter> <base_geom>base_footprint_collision_base_link</base_geom> <updateRate>40</updateRate> <torque>1.0</torque> </plugin> </gazebo> ..... Originally posted by serdar on ROS Answers with karma: 3 on 2014-09-29 Post score: 0 Answer: That is specifying that the plugin controller is always on. That is not causing your robot to begin to navigate. It is just allowing it to react. Without the controller running the plugin would not respond to commands, or properly update state. It's not parameterized because it's not designed to be able to be turned on or off. You need to find what is sending cmd_vel to your (virtual) robot and disable that. Originally posted by tfoote with karma: 58457 on 2014-09-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 19566, "tags": "gazebo, ros-hydro" }
QM Propagator Question
Question: On page 577 of Shankar's 'Principles of Quantum Mechanics' the author gives the Schrodinger propagator: $$U_s(\textbf{r},t;\textbf{r}',t')=\sum_{n}\psi_n(\textbf{r})\psi^*_n(\textbf{r}')\exp[-iE_n(t-t')].\tag{1}$$ From this we can get $$\left( i\frac{\partial}{\partial t}-H \right)U_s = 0.\tag{2}$$ To stop the propagator propagating backward in time, he introduces the function $$G_s(\textbf{r},t;\textbf{r}',t')=\theta(t-t')U_s(\textbf{r},t;\textbf{r}',t').\tag{3}$$ Shankar then states that $G_s$ obeys the equation $$\left( i\frac{\partial}{\partial t}-H \right)G_s=\left[ i\frac{\partial}{\partial t}\theta(t-t') \right]\sum_{n}\psi_n(\textbf{r})\psi^*_n(\textbf{r}')\exp[-iE_n(t-t')]\tag{4}$$ In equation $(4)$ it is implied that $$H\left[\theta(t-t')\sum_{n}\psi_n(\textbf{r})\psi^*_n(\textbf{r}')\exp[-iE_n(t-t')]\right]=0.\tag{5}$$ My question is how do we know or show that equation $(5)$ is true? Answer: Equation 5 isn't true. $\Theta(t-t')$ is a scalar/non-operator. Just use the product rule with the derivative, and you'll get: $$[i\partial_t - H][\Theta(t-t')U_s] = iU_s\partial_t\Theta(t-t') + \Theta(t-t')[i\partial_t - H]U_s.$$ Also, I've never heard the "To stop the propagator propagating backward in time," line. What's going on is you're using the propagator to construct the Green's function for the Schrödinger equation, which is a slightly different concept. Short version: $\Theta(t-t')$ is the Green's function for $\partial_t$. The Green's function of $\partial_t + \gamma$ is $\Theta(t-t') e^{-\gamma(t-t')}$. The rest is playing around with eigenvalues and reconstructing the operator.
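To spell out what the right-hand side of $(4)$ evaluates to (a short sketch, using $\partial_t\,\theta(t-t')=\delta(t-t')$ and the completeness relation $\sum_n\psi_n(\mathbf{r})\psi^*_n(\mathbf{r}')=\delta^3(\mathbf{r}-\mathbf{r}')$):

```latex
\left( i\frac{\partial}{\partial t}-H \right)G_s
  = i\,\delta(t-t')\sum_{n}\psi_n(\mathbf{r})\psi^*_n(\mathbf{r}')
      \exp[-iE_n(t-t')]
  = i\,\delta(t-t')\,\delta^3(\mathbf{r}-\mathbf{r}')
```

since the delta function forces $t=t'$, which kills the exponential. This is precisely the defining equation of a Green's function for the Schrödinger operator, matching the "playing around with eigenvalues" remark above.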
{ "domain": "physics.stackexchange", "id": 91097, "tags": "quantum-mechanics, schroedinger-equation, greens-functions, propagator" }
Antimatter and quantum mechanics
Question: This question could have a very simple answer, but I could not find that answer anywhere. My question is: since electrons, protons, etc. all have their antiparticles, why aren't they mentioned in quantum physics? And if they are real, shouldn't they be included in the Schrödinger equation? Answer: The non-relativistic behavior of antiparticles can be understood with the Schrodinger equation. For example, anti-hydrogen is approximated by the Schrodinger equation to the same accuracy as hydrogen is. This is often not mentioned in an introductory Quantum Mechanics course. But the relationship between particles and antiparticles can only be understood using relativistic quantum mechanics, such as the Dirac equation or relativistic quantum field theory. Quantum electrodynamics (QED) is an example of the latter and explains, among many other things, how an electron and a positron can annihilate into photons. Since charged particles and their antiparticles can annihilate to produce photons, which are never non-relativistic, the non-relativistic Schrodinger equation cannot explain this interaction. Also, the Schrodinger equation cannot represent particles or antiparticles appearing or disappearing, like QED can. But the Schrodinger equation can explain how an anti-proton binds with an anti-electron (positron) to make anti-hydrogen, since this does not involve relativistic processes and no particles appear or disappear.
{ "domain": "physics.stackexchange", "id": 64424, "tags": "quantum-mechanics, particle-physics, schroedinger-equation, antimatter, dirac-equation" }
How pr2_mechanism to use hard real-time mechanism?
Question: pr2_mechanism doc says: The pr2_mechanism stack contains the infrastructure to control the PR2 robot in a hard realtime control loop. When I using normal ubuntu not linux-rt,does pr2_mechanism will also make sure it can use hard real-time mechanism? Or should I install linux-rt and install ROS on that kernel? Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2011-09-30 Post score: 1 Answer: Vanilla Linux kernels have provided good real-time performance for many years. For most robotics applications, the generic Ubuntu kernel should work fine. If you observe performance problems with your robot running your specific mix of software, there are several kernel options you can try. Originally posted by joq with karma: 25443 on 2011-09-30 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Wim on 2011-11-03: Yes, pr2_mechanism allows you to run controllers in hard realtime. Comment by joq on 2011-10-20: I do not know the internal implementation of pr2_mechanism. Since it says it supports hard real-time, I suppose it must use real-time scheduling. Comment by sam on 2011-10-20: Thank you for these information. And what's the answer of my question? Do pr2_mechanism have that ability or not? Yes or no? Comment by joq on 2011-10-19: The vanilla Ubuntu kernel provides soft realtime, which is good enough for most robotics. It supports POSIX realtime scheduling. If you believe you need harder guarantees, you should first measure the jitter while running your actual hardware and software. Comment by sam on 2011-10-19: Thank you~ But when I using normal ubuntu not linux-rt,does pr2_mechanism will also make sure it can use hard real-time mechanism? Or will it just no hard rt functionalities?
{ "domain": "robotics.stackexchange", "id": 6834, "tags": "ros, real-time" }
Error is occurring setting an array element with a sequence
Question: I have the columns in my Data Frame as shown below: Venue city Venue Categories Madison London [1, 1, 1, 1, 0, 0, 0, ...,0,0] WaterFront Austria [0, 1, 1 0, 0, 0, 0, ....0,1] Aeronaut Marvilles [0, 0, 0, 0, 1, 1, 1, ....1,1] Aeronaut Paris [0, 1, 1, 0, 0, 0, 0, ....1,1] Gostrich New York [0, 0, 1, 0, 0, 0, 0, ....1,0] I am passing this data to my machine learning model , but model.fit is not accepting the input , My code is shown below , that I am trying , labelencoder = LabelEncoder() dff['Venue']=labelencoder.fit_transform(dff['Venue']) dff['city']=labelencoder.fit_transform(dff['city']) Let's say , if I want to increase the number of features . I want to add more columns , then I again write everything for each feature just like shown below , if i want to add type column and owner column city = dff['city'].values owner = dff['owner'].values type = dff['type'].values categories = dff['Venue Categories'].values labels = np.array(dff['Venue'].values) data = np.array([make_sample(city[i], owner[i], type[i] categories[i]) for i in range(len(city))]) This will looks weird , I want to make it global , means there should not need to touch the code if we may increase the number of columns . Answer: I managed to make it work, by combining the city column with the venue categories column into a 2D (numpy) array which can be used by the RandomForestClassifier of sklearn. 
Example code: import pandas as pd from sklearn.preprocessing import LabelEncoder from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split import numpy as np def make_data(df, target_column='Venue', categories_column='Venue Categories'): categories = df[categories_column].values other_features = [] for col in df.columns: if col in [categories_column, target_column]: continue other_features.append(df[col].values); return np.array([[col[i] for col in other_features]+categories[i] for i in range(len(categories))]), np.array(df[target_column].values) labelencoder = LabelEncoder() dff['Venue'] = labelencoder.fit_transform(dff['Venue']) dff['city'] = labelencoder.fit_transform(dff['city']) data, labels = make_data(dff, 'Venue', 'Venue Categories') train_data,test_data,train_labels,test_labels = train_test_split(data,labels,test_size=0.20) model = RandomForestClassifier() model.fit(train_data,train_labels) Note make sure that Venue Categories column has same number of elements for each row of data, else a new problem will arise again. If needed fill with dummy values
{ "domain": "datascience.stackexchange", "id": 9520, "tags": "machine-learning, python, pandas, dataframe" }
How MATLAB calculates matched filter gain
Question: Here is a bunch of code from MATLAB documentation: Designing a Basic Monostatic Pulse Radar: wav = phased.RectangularWaveform(... 'PulseWidth',1/pulseBandwidth,... 'PRF',PRF,... 'SampleRate',Fs); matchingcoeff = getMatchedFilter(wav); hmf = phased.MatchedFilter(... 'Coefficients',matchingcoeff,... 'GainOutputPort',true); [rx_pulses, mfgain] = step(hmf,rx_pulses); How does MATLAB calculate the mfgain (matched filter gain)? Is there a formula for it? Answer: Matched filter gain is calculated as (in dB): $G_{dB} = 10 \cdot \log_{10}(L)$, where $L$ is the filter length. It's the maximum possible SNR improvement that the filter can provide. Try this formula and compare the result with MATLAB's; I suppose it will be the same.
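As a quick sanity check against MATLAB's reported mfgain, the formula is one line (the filter length of 100 below is just an illustrative value, not taken from the example):

```python
import math

def matched_filter_gain_db(filter_length):
    """Maximum SNR improvement (in dB) of a matched filter with L taps."""
    return 10.0 * math.log10(filter_length)

gain = matched_filter_gain_db(100)   # -> 20.0 dB
```

In the MATLAB snippet, $L$ would be length(matchingcoeff), the number of matched-filter coefficients returned by getMatchedFilter.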
{ "domain": "dsp.stackexchange", "id": 4691, "tags": "matched-filter" }
Why is this n^2 growth?
Question: I am attempting to understand the growth of the following algorithm, which is described as $n^2$ growth in the book I am reading: "... performs of the order of $n^2$ steps on a sequence of length $n$." Could someone please explain how this is calculated in the following code, which is also taken from the book? If I print out the statements when the lines are executed, it first executes $n$ steps, then decreases $n-1$ steps for each loop iteration until it reaches $0$. This does not seem like exponential growth to me. Why does this grow at $n^2$? dataset = [3,1,2,7,5] product = 0 # algorithm begins here for i in range(len(dataset)): for j in range(i + 1, len(dataset)): product = max(product, dataset[i]* dataset[j]) Answer: Because $n + (n-1) + (n-2) + \cdots + 2 + 1 = \frac{n(n+1)}{2} \in \mathcal{O}(n^2)$. Note that $n^2$ is polynomial, not exponential (that would be $2^n$ for example).
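You can confirm the count empirically by instrumenting the loop from the question; for this particular loop the inner body runs $(n-1)+(n-2)+\cdots+1 = n(n-1)/2$ times, which is likewise in $\mathcal{O}(n^2)$:

```python
def count_inner_steps(n):
    """Count how many times the inner body of the question's loop executes."""
    steps = 0
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1
    return steps

# Matches the closed form n(n-1)/2:
assert count_inner_steps(5) == 5 * 4 // 2      # 10 steps for the 5-element dataset
assert count_inner_steps(100) == 100 * 99 // 2
```

Doubling $n$ roughly quadruples the step count, the signature of quadratic (not exponential) growth.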
{ "domain": "cs.stackexchange", "id": 11920, "tags": "algorithms, algorithm-analysis, python" }
Why do some gases transfer radioactivity and some don't?
Question: I have recently read that helium is going to be used as coolant in Generation IV nuclear reactors, because Helium is radiologically inert (i.e., it does not easily participate in nuclear processes and does not become radioactive). (Source: Ch 3 of The Impact of Selling the Federal Helium Reserve (2000)) Does that mean that, if there were a helium leak, it would not pollute the air? Also, how is it that some gases "conduct" radioactivity more than others? How is that determined? I have read something about the cross-sectional area of nuclei, with barns as the unit, but I don't really understand the process. What are some other gases which are radiologically inert? Answer: Chemically, helium is inert because it has a "filled valence shell" of electrons, which is very stable; it's extremely difficult to change this structure, as doing so requires a lot of energy and produces a system which is likely to quickly revert back to its ground state under normal conditions. The helium-4 nucleus is in a very similar situation: in a sense, it has "filled shells" of protons and neutrons. Relative to its neighbors on the nuclear chart, it's one of the most stable nuclear configurations we have measured. Changing this nuclear structure in any way is difficult, so helium-4 is unlikely to become radioactive in the first place, and the configurations that are created when it does happen are so unstable that they decay almost instantly. Neutron capture (far and away the primary cause of secondary radioactivity) creates helium-5, which decays with a half-life of $7\times 10^{-22}$ seconds, so it barely even exists, and certainly won't be found outside the reactor. It's also basically impossible to excite the helium-4 nucleus to a higher energy level using gamma radiation from a fission reactor, as the next energy level is 20 MeV above the ground state (for reference, most of the steps of the uranium decay chain have a total released energy of only 4-7 MeV).
So it's safe to say that helium-4 is radiologically inert. The term "conduction" is probably* referring to the following process: a radioactive nucleus predisposed to emit neutrons decays, and the emitted neutrons are captured by another nucleus, which might make it unstable and therefore radioactive. In this sense, what determines how readily a substance "conducts" radioactivity is its willingness to capture neutrons (aka the neutron capture cross section), which is heavily dependent on the specific nuclear structure. (There are other ways to induce radioactivity, like beta decay of one nucleus followed by electron capture by another, or gamma-ray emission and absorption, but the conditions required for those processes are rarer.) For other radiologically-inert substances, one might look for other nuclei that have "filled shells" of protons and neutrons. In nuclear structure, these are called "doubly magic" nuclei (having a "magic" number of protons and a "magic" number of neutrons), and do indeed have a reputation for stability, though none are quite so stable as helium-4. Doubly-magic nuclei include oxygen-16, calcium-40, and lead-208. *Let me stress that "conduction" is highly nonstandard terminology; the term for the process that I describe here is "induced radioactivity."
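The "doubly magic" check described above is easy to express as a toy sketch; the magic numbers themselves (2, 8, 20, 28, 50, 82, 126) are the standard shell-model values:

```python
MAGIC_NUMBERS = {2, 8, 20, 28, 50, 82, 126}

def is_doubly_magic(protons, neutrons):
    """True when both the proton count and the neutron count are magic."""
    return protons in MAGIC_NUMBERS and neutrons in MAGIC_NUMBERS

# helium-4 (Z=2, N=2), oxygen-16 (8, 8), calcium-40 (20, 20), lead-208 (82, 126)
assert is_doubly_magic(2, 2)
assert is_doubly_magic(8, 8)
assert is_doubly_magic(20, 20)
assert is_doubly_magic(82, 126)
```

Of course, being doubly magic is only a rough proxy for radiological inertness; the actual capture cross sections still have to be looked up.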
{ "domain": "physics.stackexchange", "id": 51817, "tags": "nuclear-physics, radioactivity, nuclear-engineering" }
Finding Accurate BPM and Beat Time Values in Audio
Question: I am working on writing a program to get the BPM rate and the beat times in audio songs, but I am having trouble coming up with reliable testing strategies for my design. What do I benchmark my BPM rates and beat times against? Should I just use other software, or are there ways of checking whether my BPM rates and beat times are correct (apart from physically listening and tapping along to the song)? Answer: Comparing your results against data annotated by a human listener is the way to go. You can try downloading some of the evaluation data from MIREX or look for other research databases.
{ "domain": "dsp.stackexchange", "id": 1720, "tags": "audio, signal-analysis" }
Is tangential component of $\mathbf{B}$ undefined at the boundary of two media?
Question: Tangential component of $\mathbf{B}$ is discontinuous at the boundary of two media. Does this mean that tangential component of $\mathbf{B}$ is undefined at the boundary of two media? If yes, then: $\mathbf{B}$ is undefined at the boundary of two media. $\nabla \cdot\mathbf{B}$ is undefined at the boundary of two media. This contradicts $\nabla \cdot\mathbf{B}=0$ everywhere. How to get out of this difficulty? If no, then: What is the value of tangential component of $\mathbf{B}$ at the boundary of two media? Answer: Does this mean that tangential component of $\mathbf{B}$ is undefined at the boundary of two media? An infinitely sharp boundary with a delta function surface current? Yes. This contradicts $\nabla \cdot\mathbf{B}=0$ everywhere. How to get out of this difficulty? What's the difficulty? Maxwell's equations are differential equations for vector fields, and hold on a particular domain. In the model you're describing, the sharp boundary isn't actually in the domain of $\mathbf B$. If you're uncomfortable with this, there are workarounds. If you define the tangential component of $\mathbf B$ on the boundary to be the average of the tangential components on either side, and interpret the derivatives in Maxwell's equations in a weak sense, then everything is well-defined. Alternatively, because nothing in nature is ever infinitely sharp, you could replace the sharp boundaries and surface currents with things like bump functions, which would eliminate discontinuities by smoothing everything out.
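For reference, the standard magnetostatic jump conditions (SI units) make explicit that only the tangential part is discontinuous, while the normal component is always continuous across the surface. With $\hat{\mathbf{n}}$ the unit normal pointing from medium 1 into medium 2 and $\mathbf{K}_f$ the free surface current density:

```latex
% Normal component of B: continuous across the boundary
\hat{\mathbf{n}} \cdot \left( \mathbf{B}_2 - \mathbf{B}_1 \right) = 0
% Tangential component of H: jumps by the free surface current density
\hat{\mathbf{n}} \times \left( \mathbf{H}_2 - \mathbf{H}_1 \right) = \mathbf{K}_f
```

So even with a delta-function surface current there is no tension with $\nabla \cdot \mathbf{B} = 0$: in its integral (pillbox) form that equation only constrains the normal component, which remains continuous and well defined on the boundary.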
{ "domain": "physics.stackexchange", "id": 57879, "tags": "electromagnetism, magnetic-fields, maxwell-equations, boundary-conditions" }
How to disable "Policy CMP0045 is not set" warnings?
Question: I only created a catkin package without editing anything. When I run catkin_make, it shows a lot of warnings, which makes it very hard to see the other errors or warnings. The full log: sam@sam:~/code/ros_hydro$ catkin_make Base path: /home/sam/code/ros_hydro Source space: /home/sam/code/ros_hydro/src Build space: /home/sam/code/ros_hydro/build Devel space: /home/sam/code/ros_hydro/devel Install space: /home/sam/code/ros_hydro/install #### #### Running command: "make cmake_check_build_system" in "/home/sam/code/ros_hydro/build" #### #### #### Running command: "make -j8 -l8" in "/home/sam/code/ros_hydro/build" #### sam@sam:~/code/ros_hydro$ catkin_make Base path: /home/sam/code/ros_hydro Source space: /home/sam/code/ros_hydro/src Build space: /home/sam/code/ros_hydro/build Devel space: /home/sam/code/ros_hydro/devel Install space: /home/sam/code/ros_hydro/install #### #### Running command: "cmake /home/sam/code/ros_hydro/src -DCATKIN_DEVEL_PREFIX=/home/sam/code/ros_hydro/devel -DCMAKE_INSTALL_PREFIX=/home/sam/code/ros_hydro/install" in "/home/sam/code/ros_hydro/build" #### -- Using CATKIN_DEVEL_PREFIX: /home/sam/code/ros_hydro/devel -- Using CMAKE_PREFIX_PATH: /home/sam/code/ros_hydro/devel;/opt/ros/hydro -- This workspace overlays: /home/sam/code/ros_hydro/devel;/opt/ros/hydro -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Python version: 2.7 -- Using Debian Python package layout -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/sam/code/ros_hydro/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built CMake Warning (dev) at /opt/ros/hydro/share/catkin/cmake/tools/doxygen.cmake:40 (GET_TARGET_PROPERTY): Policy CMP0045 is not set: Error on non-existent target in get_target_property. Run "cmake --help-policy CMP0045" for policy details. Use the cmake_policy command to set the policy and suppress this warning. get_target_property() called with non-existent target "doxygen". 
Call Stack (most recent call first): /opt/ros/hydro/share/catkin/cmake/all.cmake:148 (include) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:20 (include) CMakeLists.txt:52 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. -- catkin 0.5.90 -- BUILD_SHARED_LIBS is on WARNING: Package "ompl" does not follow the version conventions. It should not contain leading zeros (unless the number is 0). WARNING: Package "libg2o" does not follow the version conventions. It should not contain leading zeros (unless the number is 0). -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- ~~ traversing 1 packages in topological order: -- ~~ - sam_industrial_robot1 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- +++ processing catkin package: 'sam_industrial_robot1' -- ==> add_subdirectory(sam_industrial_robot1) CMake Warning (dev) at /opt/ros/hydro/share/cpp_common/cmake/cpp_commonConfig.cmake:100 (elseif): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/roscpp_serialization/cmake/roscpp_serializationConfig.cmake:94 (if): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. 
Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/message_runtime/cmake/message_runtimeConfig.cmake:165 (find_package) /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/roscpp_serialization/cmake/roscpp_serializationConfig.cmake:100 (elseif): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/message_runtime/cmake/message_runtimeConfig.cmake:165 (find_package) /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/roscpp_traits/cmake/roscpp_traitsConfig.cmake:94 (if): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. 
Call Stack (most recent call first): /opt/ros/hydro/share/roscpp_serialization/cmake/roscpp_serializationConfig.cmake:165 (find_package) /opt/ros/hydro/share/message_runtime/cmake/message_runtimeConfig.cmake:165 (find_package) /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/roscpp_traits/cmake/roscpp_traitsConfig.cmake:100 (elseif): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/roscpp_serialization/cmake/roscpp_serializationConfig.cmake:165 (find_package) /opt/ros/hydro/share/message_runtime/cmake/message_runtimeConfig.cmake:165 (find_package) /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/rostime/cmake/rostimeConfig.cmake:100 (elseif): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. 
Call Stack (most recent call first): /opt/ros/hydro/share/roscpp_traits/cmake/roscpp_traitsConfig.cmake:165 (find_package) /opt/ros/hydro/share/roscpp_serialization/cmake/roscpp_serializationConfig.cmake:165 (find_package) /opt/ros/hydro/share/message_runtime/cmake/message_runtimeConfig.cmake:165 (find_package) /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/rosconsole/cmake/rosconsoleConfig.cmake:100 (elseif): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/rosgraph_msgs/cmake/rosgraph_msgsConfig.cmake:94 (if): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. 
Call Stack (most recent call first): /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/rosgraph_msgs/cmake/rosgraph_msgsConfig.cmake:100 (elseif): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/std_msgs/cmake/std_msgsConfig.cmake:94 (if): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/rosgraph_msgs/cmake/rosgraph_msgsConfig.cmake:165 (find_package) /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. 
CMake Warning (dev) at /opt/ros/hydro/share/std_msgs/cmake/std_msgsConfig.cmake:100 (elseif): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/rosgraph_msgs/cmake/rosgraph_msgsConfig.cmake:165 (find_package) /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/xmlrpcpp/cmake/xmlrpcppConfig.cmake:94 (if): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. CMake Warning (dev) at /opt/ros/hydro/share/xmlrpcpp/cmake/xmlrpcppConfig.cmake:100 (elseif): Policy CMP0054 is not set: Only interpret if() arguments as variables or keywords when unquoted. Run "cmake --help-policy CMP0054" for policy details. Use the cmake_policy command to set the policy and suppress this warning. 
Quoted variables like "include" will no longer be dereferenced when the policy is set to NEW. Since the policy is not set the OLD behavior will be used. Call Stack (most recent call first): /opt/ros/hydro/share/roscpp/cmake/roscppConfig.cmake:165 (find_package) /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:75 (find_package) sam_industrial_robot1/CMakeLists.txt:7 (find_package) This warning is for project developers. Use -Wno-dev to suppress it. -- Configuring done -- Generating done -- Build files have been written to: /home/sam/code/ros_hydro/build #### #### Running command: "make -j8 -l8" in "/home/sam/code/ros_hydro/build" #### sam@sam:~/code/ros_hydro$ How to disable all of them? Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2015-12-08 Post score: 0 Original comments Comment by gvdhoorn on 2015-12-08: Which version of cmake are you using? Which OS, which version? Which version of ROS? etc Comment by sam on 2015-12-09: I remember I installed CMake 3.0 due to the caffe CNN lib. I use Ubuntu 12.04 64-bit. I use Hydro. Comment by gvdhoorn on 2015-12-09: As most of ROS' CMakeLists.txt are written against CMake 2.8.x, I can imagine that there will be some issues when using them with newer versions of CMake. I'm not sure what the most efficient way would be to work around this I'm afraid. Answer: Hi! To solve this, I use the following in the header of the CMakeLists.txt of the package:

cmake_minimum_required(VERSION 2.8)
project(Name)
cmake_policy(SET CMP0054 OLD)
cmake_policy(SET CMP0045 OLD)

Originally posted by Kuro9206 with karma: 48 on 2016-06-16 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by sam on 2017-04-08: Wow! I added these two lines into every catkin package, and it works! Thank you~~^^ Comment by MoodyDevta on 2022-07-08: Thank you for this. I was having the same problem and your solution worked!
{ "domain": "robotics.stackexchange", "id": 23177, "tags": "catkin-make" }
Determine force distribution from a bolt pattern
Question: Need some help designing a bolt pattern for a pressure-tight enclosure. How do I determine the force applied at a distance from a bolt location so that I meet the minimum force to compress the gasket along the entire length of the enclosure while using the minimum number of bolts? And if anyone knows how to model this in SolidWorks, that would be very useful as well! Answer: I remember learning about this in design class. Each bolt has two components to its shear force. One is equal to a fraction of the applied load directly, and the other is the shear force needed to create an equipollent moment at the center of the bolt pattern. Specifically, with $n$ bolts, and $\mathbf{r}$ the location of the load from the bolt pattern center, the equipollent moment is $$ M = \left(\mathbf{r} \times \mathbf{F}\right)_z = r_x F_y - r_y F_x$$ Let the position of the i-th bolt be $\mathbf{b}_i = R_i \mathbf{e}_i$, where $R_i$ is the radial distance and $\mathbf{e}_i$ the radial direction vector. Consider also the perpendicular direction $\mathbf{n}_i$, which points tangentially at each bolt. The total shear force for the i-th bolt is $$\mathbf{S}_i = \frac{1}{n} \left( \mathbf{F} + \frac{M}{R_i} \mathbf{n}_i \right) $$ You can show that this satisfies the equilibrium equations $$\begin{cases} \sum \limits_{i=1}^n \mathbf{S}_i = \mathbf{F} \\ \sum \limits_{i=1}^n \left( \mathbf{b}_i \times \mathbf{S}_i \right)_z = M \end{cases} $$ This assumes that the connecting parts are compliant enough to give an equal load distribution to the bolts and that the bolt holes are a loose fit and don't push against the bolts when not loaded.
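These formulas are straightforward to sanity-check numerically. A small sketch for bolts equally spaced on a circle (the geometry and load values below are made up for illustration): summing the resulting shear forces recovers the applied load, and summing their moments about the pattern center recovers the equipollent moment.

```python
import math

def bolt_shear_forces(n, R, F, r):
    """Shear force (Sx, Sy) on each of n bolts equally spaced on a circle of
    radius R, for an in-plane load F = (Fx, Fy) applied at r = (rx, ry)
    measured from the bolt pattern center."""
    Fx, Fy = F
    rx, ry = r
    M = rx * Fy - ry * Fx  # equipollent moment about the pattern center
    forces = []
    for i in range(n):
        a = 2 * math.pi * i / n
        ex, ey = math.cos(a), math.sin(a)  # radial direction e_i
        nx, ny = -ey, ex                   # tangential direction n_i (90 deg CCW)
        Sx = (Fx + (M / R) * nx) / n
        Sy = (Fy + (M / R) * ny) / n
        forces.append((Sx, Sy))
    return forces
```

Checking the two equilibrium sums against the output is a quick way to convince yourself the load sharing is consistent before sizing the bolts.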
{ "domain": "engineering.stackexchange", "id": 1799, "tags": "mechanical-engineering, civil-engineering, solidworks, bolting, pressure-vessel" }
Normalized Cross Correlation Operation
Question: According to my question about separability of Gabor filters in this link, I now want to convolve my image with this separable filter by using the normalized cross-correlation operation. Assume my Gabor filter is G and my image is I. My Gabor is separated into a low-pass Gaussian filter f(x) and a band-pass Gaussian filter g(y). Therefore the image is convolved with the Gabor using the following equation: I(x,y)*G(x,y) = (I(x,y)*f(x))*g(y). But I want to achieve this separable convolution using the normalized cross-correlation operation described below, where ^G is the zero mean, unit normal version of the filter and H(x,y) represents a filter of all ones with the same size as the Gabor filter. 1) I didn't understand what ^G is. What should its value be? How does it differ from G? 2) How is the normalized cross-correlation computed for the separable Gabor? I don't know if I am using the formula correctly: I(x,y)*f(x)*g(y) / I^2(x,y)*H(x,y). I don't think it's right, because I didn't understand what the value of the zero mean, unit normal version of the Gabor should be. Answer: From what the paper described, I think $\hat G$ is achieved by standardized normalization on $u$ and $v$ before you implement the formula (1) on $G$:

x = -filtSizeL : filtSizeR;
y = x;
u = x * cos(theta) + y * sin(theta);
v = -x * sin(theta) + y * cos(theta);
u = (u - mean(u))/std(u);
v = (v - mean(v))/std(v);

The filter is still separable after this normalization step, but I don't think you need to implement that, because it is just equivalent to selecting a different $\sigma$ value in the filter.
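For what it's worth, another common reading of "zero mean, unit normal version of the filter" in normalized cross-correlation contexts is to normalize the kernel itself: subtract its mean and scale it to unit L2 norm. This is an interpretation, not something stated in the thread; a NumPy sketch:

```python
import numpy as np

def zero_mean_unit_norm(G):
    """Return a copy of the kernel G with zero mean and unit L2 norm,
    the usual template preprocessing for normalized cross-correlation."""
    Gc = G - G.mean()
    norm = np.linalg.norm(Gc)
    return Gc / norm if norm > 0 else Gc
```

With the kernel normalized this way, and the patch-energy term kept in the denominator, the correlation score becomes insensitive to local brightness and contrast changes in the image.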
{ "domain": "dsp.stackexchange", "id": 1470, "tags": "image-processing, cross-correlation, separability" }
Can I create a layer with multiple rnn cell ? [question about a paper]
Question: I am trying to implement https://dl.acm.org/doi/pdf/10.1145/3269206.3271794 . Structure: As it said: In particular, we integrate the embedding vectors learned from each individual recurrent encoder into a new conclusive embedding vector to jointly consider various time series patterns with different ⟨α, β⟩ configurations. From my understanding, it uses multiple individual RNN cells to process different time series, then concatenates all hidden states together to form a 3D input from which a 2D convolution can extract features. But I didn't see a way to create multiple RNN cells in the same layer. Do I misunderstand? If not, could you please give me a guide or an example? Answer: Using the functional API solves the problem. The structure looks like this:

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

data = pd.DataFrame(np.random.uniform(size=(1000, 3)), columns=['Sales', 'SalesDiff7', 'SalesAggMean7'])
multi_inputs = []
multi_outputs = []
window_size = my_window.input_width
for i in range(data.shape[1]):
    ti = keras.Input(shape=(window_size, 1), name=f't{i}')
    tlstm = layers.LSTM(32)(ti)
    multi_inputs.append(ti)
    multi_outputs.append(tlstm)
r = tf.stack(multi_outputs, axis=-2)
.....
result = keras.layers.Dense(units=1)(fc)
model = keras.Model(
    inputs=multi_inputs,
    outputs=result,
)
{ "domain": "datascience.stackexchange", "id": 8679, "tags": "deep-learning, time-series, lstm, rnn" }
Histogram of letters in string
Question: I'm pretty new to Java and I've been trying to use top-down design and make my code very readable, so I would appreciate any comments as to the clarity and efficiency of this program. Looking for feedback on: readability (spacing, comments, etc.) and just overall quality of code! The task is to produce a histogram that shows how many times a certain letter appears in a string.

import java.util.Scanner;

/** This class creates a histogram of the letters in a string, i.e. outputs
 ** how many times the letter a appears in the string, the letter b... and so on
 ** until z. Extra challenge: only traversing the string once. **/
public class Histogram {
    public static void main(String[] args) {
        Scanner kb = new Scanner(System.in);
        final int LETTERS_IN_ALPHABET = 26;
        int[] letterCounter = new int[LETTERS_IN_ALPHABET]; // holds info on how many
        // times a letter appears, e.g. letterCounter[0] -> how many times a appears
        System.out.print("Enter string: ");
        String string = kb.nextLine();
        for (int i = 0; i < string.length(); i++) { // traversing string
            char letterThere = string.charAt(i); // reads what character is at index
            int placeInLetterCtr = whereInLetterCtr(letterThere); // determining where it should go in array
            letterCounter[placeInLetterCtr]++; // increasing corresponding index
        }
        printNumbers(letterCounter);
        printLetters();
    } // end main

    /* given a char, determines at what index of the character storage array the
     * char belongs. E.g., if given the char 'c', it should return int 2 */
    public static int whereInLetterCtr(char letter) {
        int i = 0;
        for (char comparisonLetter = 'a'; comparisonLetter <= 'z'; comparisonLetter++) {
            if (letter == comparisonLetter) {
                return i;
            }
            i++;
        }
        return i;
    } // end whereInLetterCtr

    /* prints row of numbers */
    public static void printNumbers(int[] array) {
        for (int i = 0; i < array.length; i++) {
            System.out.printf("%4d", array[i]);
        }
        System.out.println();
    } // end printNumbers

    /* prints row of letters */
    public static void printLetters() {
        for (char letter = 'a'; letter <= 'z'; letter++) {
            System.out.printf("%4c", letter);
        }
    } // end printLetters
} // end class

Answer: You've got way too many comments - you should only use comments when you need to explain an oddity. You don't need a comment to tell you when a method ends. I know the for loop over string.length() is traversing a string - it is simply visual clutter. A lot of your variable names don't follow standard naming conventions. You're trying to explain too much in your variable names. Trust the reader to figure it out to reduce visual clutter. letterThere - there? Where's there? How about -> currentLetter. placeInLetterCtr - place? Letter ctr? Is ctr center? Constructor? We know from a bit of reading the surrounding lines that it's the index in a letter counter (which is a crucial part of your code, so you don't need to keep reiterating that everything is for the letter counter). How about -> letterIndex. whereInLetterCtr - again, 'where'? It's an index! -> getIndexForCharacter. I know it's a one-off class, but if you're going to do object-oriented programming, do it right - define your constants and methods in your Histogram class. Remove 'static' from your methods, and create a Histogram instance in your main() instead, using it like: new Histogram().create("someString")
{ "domain": "codereview.stackexchange", "id": 30947, "tags": "java, beginner, strings" }
Is there any free software that can recognize and classify sounds?
Question: I am making smart home automation software and I would like to know if there is any software that can recognize noises such as speech, playing music, phone ringing, etc. The exact software I am looking for has already been made by Mitsubishi, but I can't find the source. A video by Mitsubishi explaining what I need: http://www.merl.com/areas/SoundRecognition/classifier-on-pda.mpeg and the project's site: http://www.merl.com/areas/SoundRecognition/ Answer: I don't have any practical experience but I found numerous libraries:

http://jmir.sourceforge.net/
http://libxtract.sourceforge.net/
http://yaafe.sourceforge.net/
http://feapi.sourceforge.net/
https://sites.google.com/site/pdescriptors/
http://clam-project.org/
http://taps.cs.princeton.edu/
http://aubio.org/

You might need to be familiar with classification algorithms to make good use of them.
{ "domain": "dsp.stackexchange", "id": 2296, "tags": "audio, speech-recognition, sound" }
Sending data to octomap_server issue
Question: Hi guys! I'm having some trouble running rgbdslam (on a turtlebot w/ kinect) to generate an octomap without using the GUI. After starting the robot and kinect (openni), my steps are:

> roslaunch rgbdslam octomap_server.launch
> roslaunch rgbdslam headless.launch
> rosservice call /rgbdslam/ros_ui_b pause false
-- Then capture some frames moving the robot around my lab --
> rosservice call /rgbdslam/ros_ui_b pause true
> rosservice call /rgbdslam/ros_ui send_all

The problem is: after sending all the points captured, I'm able to run: rosrun octomap_server octomap_saver file.bt but file.bt only contains the last frame captured. I think that octomap_server is overwriting the data received, so in the end it only contains the last frame. I've tried to change the parameters a lot, but with no success. Here are the pictures of octomap_server.launch and headless.launch. Any suggestion? Thanks! Originally posted by Thiagopj on ROS Answers with karma: 23 on 2014-12-18 Post score: 1 Answer: Hi Your launch-files seem right. I am not sure what the problem is. See my answer to this question. Originally posted by Felix Endres with karma: 6468 on 2015-01-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20376, "tags": "slam, navigation, kinect, octomap, turtlebot" }
Multiple correct answers for compilation
Question: When you are asked to hand-compile code into assembly language, are there multiple correct answers? For example, in https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-004-computation-structures-spring-2017/c11/c11s3/compilation_answers.pdf, can your solution be correct even if it doesn't match the solution given? Answer: Yup. There are multiple sequences of assembly instructions that all do the same thing. (For instance, one trivial way to see this is to notice that you can insert a no-op instruction anywhere.) They might not all be equally good, but they'd all be correct.
{ "domain": "cs.stackexchange", "id": 16969, "tags": "computer-architecture, compilers" }
Simple streaming parser to extract lines
Question: It is meant to be given strings over time and output all NewLine-delimited substrings. The ColumnReader is meant to be a solution for delimiter-separated-value files, like CSV or TSV.

using System;

namespace fbstj
{
    public sealed class LineReader
    {
        public string NewLine { get; set; }

        string _buffer = "";

        public void Parse(string text)
        {
            _buffer += text;
            var lastline = _buffer.LastIndexOf(NewLine);
            if (lastline == -1)
                return;
            var lines = _buffer.Substring(0, lastline).Split(NewLine.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
            _buffer = _buffer.Substring(lastline);
            foreach (var line in lines)
                Receive(line);
        }

        public event Action<string> Receive = (line) => { };
    }

    public struct ColumnReader
    {
        public readonly string Delimiter;
        public readonly LineReader Reader;

        public ColumnReader(string newline, string delimiter) : this()
        {
            Reader = new LineReader { NewLine = newline };
            Delimiter = delimiter;
            Receive += (columns) => { };
            Reader.Receive += _receive_line;
        }

        private void _receive_line(string line)
        {
            Receive(line.Split(Delimiter.ToCharArray()));
        }

        public event Action<string[]> Receive;
    }
}

My main use-case is reading lines from a SerialPort:

var parser = new IO.LineReader { NewLine = port.NewLine };
port.DataReceived += (o, e) => parser.Parse(port.ReadExisting());
parser.Receive += (line) => { };

Answer: This looks like they used to be fields and then you made them properties:

public readonly string Delimiter;
public readonly LineReader Reader;

IMHO that means they should get a getter and setter, like this:

public string Delimiter { get; private set; }
public LineReader Reader { get; private set; }

String.LastIndexOf is culture-specific, so consider using a StringComparison.
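The buffering idea itself (append the chunk, emit everything up to the last delimiter, keep the remainder) is language-agnostic. A rough Python rendering of the same pattern, not a line-for-line translation of the C# above, which may make the control flow easier to see:

```python
class LineReader:
    """Feed arbitrary chunks of text via parse(); each complete
    newline-terminated line is passed to the callback, and the trailing
    partial line is kept buffered until more text arrives."""

    def __init__(self, newline, callback):
        self.newline = newline
        self.callback = callback
        self._buffer = ""

    def parse(self, text):
        self._buffer += text
        last = self._buffer.rfind(self.newline)
        if last == -1:
            return  # no complete line yet, keep buffering
        complete = self._buffer[:last]
        self._buffer = self._buffer[last + len(self.newline):]
        for line in complete.split(self.newline):
            if line:
                self.callback(line)
```

One design point this version makes explicit: the remainder kept in the buffer excludes the delimiter itself, which removes the need for a "remove empty entries" pass when splitting.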
{ "domain": "codereview.stackexchange", "id": 12334, "tags": "c#" }
What type of regulation is being employed?
Question: As already mentioned in this post. In the context of QFT, the kernel of integration for the overlap of a field configuration ket, $| \Phi \rangle$ with the vacuum $|0\rangle$ in a free theory is given by (See: S. Weinberg's Vol. 1 QFT Foundations Ch. 9.2) $$ \mathcal{E}({\bf x},{\bf y}) = \frac{1}{(2\pi)^3} \int {\rm d}^3{\bf p}\, e^{i{\bf p}\cdot({\bf x}-{\bf y})}\sqrt{{\bf p}^2 + m^2}\tag{1}\label{eq:kernel}$$ which can be shown to algebraically match the following expression by abusing the Basset integral for index $\nu = -1$: $$\mathcal{E}({\bf x},{\bf y}) = \frac{1}{2\pi^2} \frac{m}{r} \frac{\rm d}{{\rm d} r} \left( \frac{1}{r} K_{-1}(m r) \right)\quad \text{for}\quad |{\bf x - y}| = r \tag{2}\label{eq:kernel2},$$ where $K_{-1}$ denotes a modified Bessel function of the second kind. It is clear that the integration in Eq.~\eqref{eq:kernel} is divergent, while Eq.~\eqref{eq:kernel2} is not, so some sort of regularization happened in between these steps. Does anybody know which technique one could use to formalize the relation between the two? Answer: If you want to use the standard Fourier transform of functions instead of distributions (see @AccidentalFourierTransform comment), you can regularize as follows, $$\mathcal{E}({\bf x},{\bf y}) := \mathcal{E}({\bf x},{\bf y}, t,t)$$ for $$\mathcal{E}({\bf x},{\bf y}, t_x,t_y) = w\text{-}\lim_{\epsilon \to 0^+} \frac{1}{(2\pi)^3} \int {\rm d}^3{\bf p}\, e^{i[{\bf p}\cdot({\bf x}-{\bf y}) - p^0(t_x-t_y-i\epsilon)]}\sqrt{{\bf p}^2 + m^2}$$ where $p^0:= \sqrt{{\bf p}^2 + m^2}$ and $w$ denotes the weak limit: first integrate against a smooth compactly supported (or Schwartz) function of $({\bf x},{\bf y}) \in \mathbb{R}^6$ and then take the limit. Using that procedure one sees that it is equivalent to directly integrating the smooth function against the integral kernel on the right-hand side of (2). Hence (2) is an identity in the sense of distributions in ${\cal S}'(\mathbb{R}^6)$.
{ "domain": "physics.stackexchange", "id": 79406, "tags": "quantum-field-theory, path-integral, regularization, ground-state" }
How does black CuO impart a green color to glazes and glass?
Question: I read from a source that cupric oxide (CuO) imparts a green to blue colour to glazes and glass. But CuO is black in colour. How is this possible? Answer: This is due to further reaction of the cupric oxide. The colour is believed to be caused by a mixture of three compounds: $\ce{Cu4SO4(OH)6}$ (green); $\ce{Cu2CO3(OH)2}$ (green); and $\ce{Cu3(CO3)2(OH)2}$ (blue). The following reactions are believed to take place: Copper(I) is oxidised to the black copper(II) sulfide ($\ce{CuS}$) in the presence of sulfur impurities. Under accelerated conditions, the described process occurs at a faster rate. $\ce{CuO}$ and $\ce{CuS}$ react with carbon dioxide ($\ce{CO2}$) and hydroxide ions ($\ce{OH-}$) in water (in the presence of air) to form $\ce{Cu2CO3(OH)2}$. The level of humidity and the amount of sulfur have a significant impact on how fast the compounds develop (under controlled conditions these reactants are varied to produce a favourable hue), as well as on the relative ratio of the three components. $$\ce{2CuO + CO2 + H2O -> Cu2CO3(OH)2}\qquad(1)$$ $$\ce{3CuO + 2CO2 + H2O -> Cu3(CO3)2(OH)2}\qquad(2)$$ $$\ce{4CuO + SO3 + 3H2O -> Cu4SO4(OH)6}\qquad(3)$$ References http://www.wskc.org/documents/281621/282063/ENGAGE_E3S_Chemistry_Statue+of+Liberty.pdf/e4f24c7e-3666-425e-9c41-7dbdd0065eb4
{ "domain": "chemistry.stackexchange", "id": 7913, "tags": "inorganic-chemistry, everyday-chemistry, materials, color" }
Number of classes for a semantic segmentation neural network
Question: I am currently working on the implementation of a neural network for semantic segmentation of images, and am trying to implement one of the already existing solutions, such as the Fully Convolutional Neural Network [1]. The data that I am using is based on the Pascal-Context dataset [2], which has additional labeling on top of the original 20-class PASCAL VOC dataset. This results in a dataset with over 450 classes. Problem The initial 20 classes do not match the classes that I would like to capture for indoor scenes. Therefore, I have created a short list of 12 classes that I would like to capture, all of which are in the Pascal-Context 450-class dataset. I managed to convert the data and am now trying to start training. I am following this tutorial [3] on Matlab, which provides an example of an image with a class overlay and all pixels colored. However, in my scenario, I only want to be able to distinguish elements such as tvmonitor, sofa and wall, ignoring all other elements which might be there. The Matlab tutorial states: "Areas with no color overlay do not have pixel labels and are not used during training." As you can see above, I have two classes present in the picture, but I am not sure whether I should also include a class Background, which would put an overlay on everything that is not within my list of classes, and include that as an additional class in my training or not. In summary, I am wondering whether Background needs to be provided as an additional class on top of the list of classes that I would like to classify, even though this background class usually takes up the majority of each image. Would that result in everything being classified as background? References: [1] https://github.com/shelhamer/fcn.berkeleyvision.org [2] https://www.cs.stanford.edu/~roozbeh/pascal-context/ [3] https://uk.mathworks.com/help/vision/examples/semantic-segmentation-using-deep-learning.html#d119e321 Answer: It's hard to tell whether you are planning on doing image classification or object recognition/detection.
If you're doing image classification, you need each image to have only a single object, and the object needs to be in approximately the same position every time. You train a neural network to output the label (which object it is). If you're doing object recognition/detection, in the training set it's not enough to have a label for each image. Rather, for each image you should have a list of the objects (labels) and the bounding box of each. Then, you train a neural network to output the label (which object) and location (bounding box) of the object. That requires a different architecture than object classification. I suggest you spend some time reading up on object recognition/detection and how to do that with a neural network, since it sounds like you might not be familiar with that yet. As far as the background class: I'm assuming you mean a "none-of-the-above" class. If you are doing image classification and you expect to have some images that aren't of any of the 12 labels, then yes, you should have a 13th label for "none of the above" (i.e., background), and your training set should contain examples of all 13 classes. If you're doing object recognition/detection, that isn't necessary; instead, typically the training set needs to have realistic images that contain zero, one, or more objects, though that might depend on the specific architecture and approach for object recognition/detection you are using.
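If the background class is included, the label conversion from the full Pascal-Context annotations reduces to a remapping step over the label image. A minimal sketch, assuming hypothetical source IDs (the real Pascal-Context IDs for tvmonitor/sofa/wall differ):

```python
import numpy as np

# Hypothetical remapping from Pascal-Context label IDs to a 13-class
# training scheme: 12 target classes plus 0 = background/none-of-the-above.
# The source IDs below are made up for illustration.
KEEP = {
    85: 1,   # "tvmonitor" (hypothetical source ID)
    66: 2,   # "sofa"
    103: 3,  # "wall"
    # ... the remaining 9 target classes go here
}

def remap(label_map: np.ndarray) -> np.ndarray:
    """Collapse a full annotation into target classes + background (0)."""
    out = np.zeros_like(label_map)  # every pixel defaults to background
    for src, dst in KEEP.items():
        out[label_map == src] = dst
    return out

raw = np.array([[85, 85, 7],
                [66, 103, 7]])   # 7 = some class outside the target list
```

Every pixel not in the kept set ends up in class 0, which is exactly the "overlay on everything that is not within my list" option from the question.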
{ "domain": "cs.stackexchange", "id": 10701, "tags": "neural-networks" }
Understanding the Double Slit Experiment without Path Integrals
Question: Despite being presented as one of the fundamental results of Quantum Mechanics in practically every textbook, I realized this morning that I don't understand deeply how Quantum Mechanics predicts the double slit experiment. The best explanation I have found seems to be closely related to the path integral formulation (e.g., since there are to a good approximation only two paths that a particle can take, the amplitudes due to these two paths add/interfere). However, I am interested in how one could go about deriving the result using the methods taught in an introductory quantum mechanics course: defining a potential, solving the Schrodinger equation, using the generalized statistical interpretation, etc. Is there a simple way to see it from this angle? Or is this a result best left to understanding via the path integral? Answer: I did once look at path integrals, but they are really just a complicated way of doing wave mechanics and I never found them useful. The double slit is much easier using basic methods, and I am not sure why your books would approach it any other way. There is no potential (or constant potential if you prefer), so the solution of the Schrodinger equation is a simple wave. The wave functions from the two slits sum, so one gets exactly the same pattern as for the classical Young's slit experiment. In the figure, where wave crests (light grey) meet crests and troughs (dark grey) meet troughs, the amplitude of the wave increases (white crests, black troughs) creating a bright area on the screen. Where crests meet troughs, the waves cancel out (mid-grey), leading to a dark region. The only difference between the classical interpretation and quantum mechanics is that in quantum mechanics the wave function is simply a way of calculating the probability for where a particle will be observed on the screen, as shown when particles pass through the slits one at a time. 
Result of the Young’s slits experiment using individual electrons, as carried out by Dr. Tonomura in 1989, showing the build-up of an interference pattern of single electrons. Numbers of electrons are 200 (b), 6000 (c), 40000 (d), 140000 (e).
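The superposition argument in this answer can be checked numerically without any path integrals: add the two free-wave amplitudes and square. A minimal sketch (the geometry and units are arbitrary choices):

```python
import numpy as np

# Two-slit interference from plain superposition: each slit contributes a
# free-particle amplitude exp(i*k*r) (ignoring the 1/r falloff), and the
# detection probability on the screen is |psi1 + psi2|^2.
wavelength = 1.0
k = 2 * np.pi / wavelength
d = 5.0                              # slit separation
L = 100.0                            # slit-to-screen distance
x = np.linspace(-30.0, 30.0, 7)      # positions on the screen

r1 = np.sqrt(L**2 + (x - d / 2)**2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)  # path length from slit 2
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)  # superposed amplitudes
intensity = np.abs(psi)**2           # unnormalised detection probability
```

At the centre of the screen the two path lengths are equal, so the amplitudes add fully constructively; away from it the phase difference k(r1 - r2) produces the fringes.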
{ "domain": "physics.stackexchange", "id": 70321, "tags": "quantum-mechanics, double-slit-experiment, path-integral, interference, superposition" }
Concavity of Conditional Quantum Entropy
Question: Let's say I have a bipartite density operator $\gamma_{12} = (1 - \epsilon) \rho_{12} + \epsilon\sigma_{12}$, for $0 \le \epsilon \le 1$, i.e., a convex combination of $\rho_{12}$ and $\sigma_{12}$. I want to show that ($S$ represents Von Neumann entropy): $$ S(\gamma_{12} | \gamma_2) \ge (1 - \epsilon) S(\rho_{12} | \rho_2) + \epsilon S( \sigma_{12} | \sigma_2). $$ The note that I am following says that this is due to the concavity of conditional entropy, which is not immediately obvious to me. I tried to derive it in the following way: $$ \begin{align} S(\gamma_{12} | \gamma_2) &= S(\gamma_{12}) - S(\gamma_2) \\ &= S((1 - \epsilon) \rho_{12} + \epsilon\sigma_{12}) - S((1 - \epsilon)\rho_2 + \epsilon\sigma_2) \;\; \text{[definition of $\gamma_{12}$ and using partial trace] } \\ &\ge (1 - \epsilon) S(\rho_{12}) + \epsilon S(\sigma_{12}) - S((1 - \epsilon)\rho_2 + \epsilon\sigma_2) \;\; \text{[using concavity in the first S] } \\ &\stackrel{?}{\ge} (1 - \epsilon) S(\rho_{12}) + \epsilon S(\sigma_{12}) - (1 - \epsilon)S(\rho_2) - \epsilon S(\sigma_2). \end{align} $$ This of course, gives me the desired inequality. But how come the last inequality is true? Isn't $S((1 - \epsilon)\rho_2 + \epsilon\sigma_2) \ge (1 - \epsilon)S(\rho_2) + \epsilon S(\sigma_2)$ due to concavity? Thanks! Answer: First, encode the bipartite ensemble $\gamma_{12}$ into a CQ state $\omega$ $$\omega_{XAB}=\sum_{x}p_{x}(x)|x\rangle\langle x|\otimes\rho_{AB}^{x}$$ now we can take the difference $$H(A|B)_{\gamma_{12}}-\sum_{x}p_{x}(x)H(A|B)_{\rho^{x}}$$ where $\rho^{x}$ will be one of the density operators you have in your ensemble. Using the CQ state to rewrite the above difference: $$H(A|B)_{\omega}-H(A|BX)_{\omega}=I(A:X|B)_{\omega}=H(X|B)_{\omega}-H(X|AB)_{\omega}$$ $$=D(\omega_{XAB}||I_{X}\otimes\omega_{AB})-D(\omega_{XB}||I_{X}\otimes\omega_{B})$$ now notice that I can derive the second relative entropy from the first via the action of a partial trace map on subsystem A.
Due to the monotonicity of the relative entropy under partial trace, I then get $$D(\omega_{XAB}||I_{X}\otimes\omega_{AB})- D(\omega_{XB}||I_{X}\otimes\omega_{B}) \ge0$$ Substituting this back in, $$H(A|B)_{\gamma_{12}}-\sum_{x}p_{x}(x)H(A|B)_{\rho^{x}}=D(\omega_{XAB}||I_{X}\otimes\omega_{AB})- D(\omega_{XB}||I_{X}\otimes\omega_{B}) \ge0$$ so $$H(A|B)_{\gamma_{12}}-\sum_{x}p_{x}(x)H(A|B)_{\rho^{x}}\ge 0$$ Alternatively, after getting to $I(A:X|B)_{\omega}$ you could just use strong subadditivity to show $I(A:X|B)_{\omega}\ge 0$
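The concavity claim can also be sanity-checked numerically on random two-qubit states (a sketch, not a proof; the sampling scheme and helper names are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_dm(d):
    """A random density matrix: A A† normalised to unit trace."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def cond_entropy(rho12):
    """S(1|2) = S(rho12) - S(rho2) for a two-qubit state."""
    rho2 = np.einsum('aiaj->ij', rho12.reshape(2, 2, 2, 2))  # trace out system 1
    return entropy(rho12) - entropy(rho2)

worst_gap = np.inf
for _ in range(100):
    rho, sigma = rand_dm(4), rand_dm(4)
    eps = rng.uniform()
    gamma = (1 - eps) * rho + eps * sigma
    gap = cond_entropy(gamma) - ((1 - eps) * cond_entropy(rho)
                                 + eps * cond_entropy(sigma))
    worst_gap = min(worst_gap, gap)
# concavity says worst_gap >= 0 (up to floating-point noise)
```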
{ "domain": "quantumcomputing.stackexchange", "id": 3070, "tags": "density-matrix, information-theory, entropy" }
Help!! Homogeneous transformation matrix
Question: I came across many good books on robotics. In particular I am interested in the inverse kinematics of a 6-DOF robot. All books have an example which goes like this: "given the homogeneous transformation matrix below, find the angles". The problem is: how do I find the components of a homogeneous transformation matrix in the real world? I.e., how do I practically derive the 9 components of the rotation matrix embedded in the homogeneous transformation matrix? Answer: The upper left 3x3 submatrix represents the rotation of the end effector coordinate frame relative to the base frame. In this submatrix, the first column maps the final frame's x axis to the base frame's x axis; similarly for y and z from the next two columns. The first three elements of the right column of the homogeneous transform matrix represent the position vector from the base frame origin to the origin of the last frame. EDIT BASED ON COMMENT: There are several ways to define the nine components of the rotation submatrix, $R$, given a particular task in space. Note that $R$ is orthonormal, so you don't really need to define all 9 based on just the task. A very common approach is to represent the task orientations (with respect to the global coordinate system) using Euler angles. With this representation, each column of $R$ describes a rotation about one of the axes. Be careful with Euler angles, though, because the order of rotation matters. Commonly, but not exclusively, the first column of $R$ describes a rotation about the global $z$ axis; the second column describes a rotation about the now-rotated $y$ axis; and the third column describes a rotation about the $x$ axis, which has been rotated by the two previous angles. There are other Euler angle representations, also. There are other ways to use $R$ to describe the task orientation. I find Waldron's text very readable for this. Check out section 1.2.2 of his draft Handbook of Robotics sourced by Georgia Tech.
In this section he describes not only Z-Y-X Euler angles, but also Fixed Angles, quaternions, and Angle-Axis representations for orientation. The important thing is to ensure you consider whatever representation you use for $R$ when you compute the inverse kinematics. The homogeneous transformation describes how the position and rotation vary based on joint angles, but you need to ensure that your definition for $R$ is properly inverted in computing the final three joint angles for your robot.
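To make the construction concrete, here is a small sketch (the function name and the Z-Y-X convention are my own choices) that builds the homogeneous transform from Euler angles plus a position vector:

```python
import numpy as np

def homogeneous_from_zyx(yaw, pitch, roll, p):
    """4x4 homogeneous transform from Z-Y-X Euler angles and a position p."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    R = Rz @ Ry @ Rx        # rotate about z, then the new y, then the new x
    T = np.eye(4)
    T[:3, :3] = R           # upper-left 3x3: orientation of the last frame
    T[:3, 3] = p            # right column: position of the last frame's origin
    return T

T = homogeneous_from_zyx(np.pi / 2, 0.0, 0.0, [1.0, 2.0, 3.0])
```

The columns of the 3x3 block are exactly the images of the last frame's axes in the base frame, which is the reading given in the answer above.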
{ "domain": "robotics.stackexchange", "id": 1394, "tags": "inverse-kinematics" }
Constant vector in changing basis
Question: If the unit vectors in the cylindrical coordinate system are functions of position, then how can I get a constant vector? Answer: The coordinate components of the vector depend on position in a way that cancels out the dependence of the basis. For example, in cylindrical coordinates, $$\hat{x} = \cos\phi \hat{r} - \sin\phi \mathbf{\hat{\phi}} \quad \text{and} \quad \hat{y} = \sin\phi \hat{r} + \cos\phi \mathbf{\hat{\phi}}.$$ The sines and cosines compensate for the position-dependence of the basis.
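A quick numerical check of the cancellation (a plain NumPy sketch):

```python
import numpy as np

# At each angle phi the basis vectors rotate, but the components
# (cos phi, -sin phi) rotate the opposite way, so x-hat comes out constant.
for phi in np.linspace(0.0, 2.0 * np.pi, 9):
    r_hat = np.array([np.cos(phi), np.sin(phi), 0.0])      # r-hat in Cartesian form
    phi_hat = np.array([-np.sin(phi), np.cos(phi), 0.0])   # phi-hat in Cartesian form
    x_hat = np.cos(phi) * r_hat - np.sin(phi) * phi_hat
    assert np.allclose(x_hat, [1.0, 0.0, 0.0])
```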
{ "domain": "physics.stackexchange", "id": 97474, "tags": "homework-and-exercises, differential-geometry, vectors" }
Get started with gazebo
Question: Hi all, I just started to play with ROS and gazebo. I've gone through the gazebo tutorial and wanted to move on to the pr2_gazebo tutorial. But now I'm having problems installing pr2_gazebo. First, I checked out the pr2_simulator at (https://code.ros.org/svn/wg-ros-pkg/stacks/pr2_simulator) Then, I tried "rosdep install pr2_gazebo", but got a bunch of error messages (e.g., Failed to find stack for package [joint_trajectory_action]). Where should I get those dependency packages? I appreciate any pointers to get me started with the pr2 simulator. Thanks! Originally posted by roboren on ROS Answers with karma: 18 on 2011-05-27 Post score: 0 Answer: I ended up manually downloading all the missing dependency packages, and it seems to be working now. Originally posted by roboren with karma: 18 on 2011-05-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 5685, "tags": "gazebo, pr2" }
TimeSync handler never called
Question: The following code handler is never called. This should work yes? #include <string> #include <ros/ros.h> #include <geometry_msgs/PoseStamped.h> #include <message_filters/subscriber.h> #include <message_filters/time_synchronizer.h> #include <message_filters/sync_policies/approximate_time.h> using namespace message_filters; class SynchronizerTest { public: SynchronizerTest() { setTest(); } void handelerSynchronizerTest(const geometry_msgs::PoseStampedConstPtr& msg1, const geometry_msgs::PoseStampedConstPtr& msg2) { ROS_INFO("HERE1 made it to handeler [%e]", msg1->header.stamp.toSec()); ROS_INFO("HERE2 [%e]", msg2->header.stamp.toSec()); } void setTest() { j1_sub = n.subscribe("/SynchronizerTest1", 10, &SynchronizerTest::handelerSynchronizerTest, this); j2_sub = n.subscribe("/SynchronizerTest2", 10, &SynchronizerTest::handelerSynchronizerTest, this); // TimeSynchronizer<geometry_msgs::PoseStamped, geometry_msgs::PoseStamped> sync(j1_sub, j2_sub, 10); // sync.registerCallback(boost::bind(&SynchronizerTest::handelerSynchronizerTest, this, _1, _2)); ROS_INFO("Test set"); } private: ros::NodeHandle n; Subscriber<geometry_msgs::PoseStamped> j1_sub; Subscriber<geometry_msgs::PoseStamped> j2_sub; }; // Enodof Class int main(int argc, char **argv) { ros::init(argc, argv, "SynchronizerTest"); ros::NodeHandle n; ros::Rate loop_rate(10); int count = 0; SynchronizerTest sp; ros::Publisher pub1 = n.advertise<geometry_msgs::PoseStamped>("SynchronizerTest1", 10); ros::Publisher pub2 = n.advertise<geometry_msgs::PoseStamped>("SynchronizerTest2", 10); while (ros::ok()) { geometry_msgs::PoseStamped msg; msg.header.stamp = ros::Time::now(); pub1.publish(msg); pub2.publish(msg); ros::spinOnce(); loop_rate.sleep(); ++count; } return 0; } Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-09-28 Post score: 2 Original comments Comment by Boris on 2013-09-28: How do you publish to /SynchronizerTest1 and /SynchronizerTest2? 
Do your messages satisfy the conditions from here: http://wiki.ros.org/message_filters/ApproximateTime ? Comment by rnunziata on 2013-09-28: If you can see where this does not meet the requirements then please point it out. Really all I want is a blocking read so I can correlate time. I did look over this document and did not see anything that should stop this code from working. Comment by Boris on 2013-09-28: Oh, sorry... I confused this question with another of yours. Somehow several of them look very similar. Answer: The problem is that you have defined the subscribers in a local scope. Thus j1_sub and j2_sub are destroyed immediately after the setTest() function returns. So make them members of the class to make it work. Same with the TimeSynchronizer. EDIT: Here is one of the possible implementations: class SynchronizerTest { boost::shared_ptr<Subscriber<geometry_msgs::PoseStamped> > j1_sub; boost::shared_ptr<Subscriber<geometry_msgs::PoseStamped> > j2_sub; boost::shared_ptr<TimeSynchronizer<geometry_msgs::PoseStamped, geometry_msgs::PoseStamped> > sync; ... void setTest() { j1_sub.reset(new Subscriber<geometry_msgs::PoseStamped>(n, "/SynchronizerTest1", 10)); j2_sub.reset(new Subscriber<geometry_msgs::PoseStamped>(n, "/SynchronizerTest2", 10)); sync.reset(new TimeSynchronizer<geometry_msgs::PoseStamped, geometry_msgs::PoseStamped>(*j1_sub, *j2_sub, 10)); sync->registerCallback(boost::bind(&SynchronizerTest::handelerSynchronizerTest, this, _1, _2)); ROS_INFO("Test set"); } ... } Originally posted by Boris with karma: 3060 on 2013-09-28 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by rnunziata on 2013-09-29: I have made the changes you recommend but now I get compilation errors. Can you supply the correct code for this short program please. I do recommend that the documentation be updated with a solution for class members.
Comment by Boris on 2013-09-29: Well, you did some very strange modifications, but definitely not what was recommended. I will update the answer with a code snippet in a bit. Comment by rnunziata on 2013-09-29: Thank you..I look forward to the solution. Comment by rnunziata on 2013-09-30: Do you think you will have time this week to post the solution. Thank you. Comment by Boris on 2013-09-30: It was posted a couple of days ago. Please see the EDIT section in my answer. Comment by rnunziata on 2013-10-01: Sorry Boris...did not see Edit. The *n_ was giving me problems but it compiled if I changed it to just n. Will be testing shortly. Comment by Boris on 2013-10-01: My mistake, sorry... That is actually an excerpt from a working program, I just have a habit of keeping things as shared pointers including ros::NodeHandle and forgot to change it back to your notation. Comment by rnunziata on 2013-10-01: Much thanks...that worked..:-) I would like to see the tutorial updated to reflect member usage as it is not obvious. I feel the two approaches present an inconsistent view of the interface and that it would be better to just have one that worked for both.
{ "domain": "robotics.stackexchange", "id": 15691, "tags": "ros" }
Function to sum all Armstrong numbers within a range
Question: I've tried to solve a challenge posted on a LinkedIn forum using Haskell - a language I'm still learning the basics of - and, while the code works correctly, I would like to get some feedback on the coding style, and learn what could be improved. The task is to add all Armstrong numbers within a range. An Armstrong (or narcissistic) number is a number that is equal to the sum of all its digits, each raised to the power of the length of its digits. For instance, 153 is an Armstrong number because \$1^3 + 5^3 + 3^3\$ equals 153. The range can be given in any order - that is 5 10 and 10 5 denote the same interval, that is, all numbers between 5 and 10 (boundaries included). And here's my code: digits :: (Integral n) => n -> [n] digits n | n < 10 = [n] | otherwise = digits (n `div` 10) ++ [n `mod` 10] numberLength :: Int -> Int numberLength = length . digits powerNumber :: Int -> [Int] powerNumber n = map (\d -> (d ^ exponent)) listOfDigits where exponent = numberLength n listOfDigits = digits n isArmstrong :: Int -> Bool isArmstrong a = a == sum (powerNumber a) sortEnds :: Int -> Int -> [Int] sortEnds a b = if a < b then [a, b] else [b, a] sumAllArmstrongNumber :: Int -> Int -> Int sumAllArmstrongNumber a b = sum([x | x <- [start..end], isArmstrong x]) where [start, end] = sortEnds a b Any feedback is much appreciated! Answer: I would like to get some feedback on the coding style, and learn what could be improved. First of all, it's fantastic that all your functions have proper types, consistent indentation and style. Keep that in your future code style! Prefer divMod or quotRem If you use both y `div` x and y `mod` x for the same x and y then you should use divMod instead. Even better, use quotRem if possible. 
quotRem returns the same results for positive numbers but is slightly faster: digits :: (Integral n) => n -> [n] digits n | n < 10 = [n] | otherwise = let (q, r) = n `quotRem` 10 in digits q ++ [r] Cons; don't append There's one big drawback in the new digits function though: we're always appending a single element. This yields an \$\mathcal O(n^2)\$ algorithm, since (x:xs) ++ [y] = x : (xs ++ [y]). Instead, we should cons the new element on the rest of the list: digits :: (Integral n) => n -> [n] digits n | n < 10 = [n] | otherwise = let (q, r) = n `quotRem` 10 in r : digits q Note that this will return the digits in reverse order but that's fine. If you want the digits in the original order, consider using reverse on the result: digits :: (Integral n) => n -> [n] digits = reverse . go where go n = case n `quotRem` 10 of (0, r) -> [r] (q, r) -> r : go q I'm a fan of case for this style of quotRem usage, but that's just my personal opinion. Map vs pointfree vs comprehension Warning: this section is mostly about personal preference. In the following code, listOfDigits and exponent are defined and used once: powerNumber :: Int -> [Int] powerNumber n = map (\d -> (d ^ exponent)) listOfDigits where exponent = numberLength n listOfDigits = digits n I'd personally prefer digits n instead of listOfDigits, since it's only used as an argument for map: powerNumber n = map (\d -> (d ^ exponent)) $ digits n where exponent = numberLength n Next (\d -> (d ^ exponent)) is (^exponent), which is preferred by some: powerNumber n = map (^exponent) $ digits n where exponent = numberLength n But in terms of readability, a list comprehension might be even better: powerNumber :: Int -> [Int] powerNumber n = [ d ^ exponent | d <- digits n ] where exponent = numberLength n Add additional information in return types sortEnds :: Int -> Int -> [Int] sortEnds a b = if a < b then [a, b] else [b, a] The return type is slightly misleading: sortEnds always returns exactly two elements.
So we should use a pair: sortEnds :: Int -> Int -> (Int, Int) sortEnds a b = if a < b then (a, b) else (b, a) Remove superfluous parentheses sumAllArmstrongNumber :: Int -> Int -> Int sumAllArmstrongNumber a b = sum([x | x <- [start..end], isArmstrong x]) where [start, end] = sortEnds a b I'd remove the explicit parentheses around sum's argument: sumAllArmstrongNumber :: Int -> Int -> Int sumAllArmstrongNumber a b = sum [x | x <- [start..end], isArmstrong x] where (start, end) = sortEnds a b
{ "domain": "codereview.stackexchange", "id": 43121, "tags": "programming-challenge, haskell" }
What is the correct sign in the unitary evolution operator of a beam splitter?
Question: I'm a bit confused about which is the correct sign in the unitary evolution operator of a beam splitter. In paper Digital quantum simulation of linear and nonlinear optical elements author uses the following expression: $$U_{ij} = e^{i\epsilon_{ij}\left(b_j^\dagger a_i + b_j a_i^\dagger \right)} $$ where $i,j$ refers to the beam splitter modes (so, what is the meaning of $a$ and $b$?) However, Nielsen & Chuang use (on page 291) the following equation for the same thing: $$ U = e^{\theta\left(a^\dagger b - a b^\dagger\right)},$$ where the latter term in the exponent has a minus sign. How can I relate both equations? Are they the same? Lastly, can you recommend me any book which deals with beam splitter and related stuff? I have not found any good reference. Answer: First, the operators $a$ and $b$ ($a^{\dagger}$ and $b^{\dagger}$) are the annihilation (creation) operators of the two photonic modes in your problem. For an introduction to the subject I recommend you to look for some decent lecture notes on quantum optics. A well readable introductory book is Mark Fox's "Quantum Optics -- An Introduction" and a more advanced read is Grynberg, Aspect and Fabre's "An Introduction to Quantum Optics". But the parts you are interested in are possibly well explained in Nielsen and Chuang. Also this document I found while googling might be of interest. But back to the original question: why does the paper you mention and the Wikipedia article define the beamsplitter as \begin{align} U = \mathrm{e}^{i \theta (b^{\dagger} a + b a^{\dagger})} \end{align} with a plus sign, contrary to Nielsen and Chuang? The truth is, both operators are "correct", they just describe polarizing and non-polarizing beamsplitters. The beamsplitters usually encountered in labs are polarizing, so that the reflected photon obtains an additional phase shift. 
This is actually explained quite neatly in Box 7.3 of Nielsen and Chuang, where they show that there is an isomorphism between the transformation of two photonic modes and $SU(2)$. The plus convention corresponds to a Pauli $X$ rotation, while the minus convention corresponds to a Pauli $Y$ rotation. As pointed out before, there is an additional phase shift, embodied by the phase operator \begin{align} S = \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix} \end{align} that can be used to relate the two transformations. For that, see page 51 of "Introduction to Optical Quantum Information Processing" by Kok and Lovett.
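Restricted to the one-photon subspace spanned by $|10\rangle, |01\rangle$, both generators reduce to $2\times 2$ matrices and the phase-gate relation between the two conventions can be checked directly (a sketch; the restriction to this subspace is my simplification of the full Fock-space operators):

```python
import numpy as np

# On {|10>, |01>} the "+" generator b†a + a†b acts as Pauli X,
# and the "-" generator a†b - ab† acts as iY.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
S = np.diag([1, 1j])    # the phase gate relating the two conventions

theta = 0.7
# Since X^2 = Y^2 = I, the exponentials reduce to cos/sin:
U_plus = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * X    # exp(i*theta*X)
U_minus = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * Y   # exp(i*theta*Y)

# Conjugating by the phase gate maps one convention onto the other:
assert np.allclose(S @ X @ S.conj().T, Y)
assert np.allclose(S @ U_plus @ S.conj().T, U_minus)
```

Note that $e^{i\theta Y}$ is a real rotation matrix, which is why the minus convention appears in Nielsen and Chuang with no explicit $i$ in the exponent.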
{ "domain": "quantumcomputing.stackexchange", "id": 1329, "tags": "hamiltonian-simulation, unitarity, optical-quantum-computing" }
Full integration test for a Console application
Question: I'm still experimenting with different design patterns for full integration tests for Console applications (and later also Windows Services) and I wasn't quite happy with the result of the refactoring of my last question. I've changed a few things and this is what I've come up with this time. The main application code that starts an application is now divided into two files Program.Entry.cs and Program.Instance.cs that are partial classes of Program (otherwise I needed a new name for the other file and this isn't actually necessary). The Entry file contains only the Main method and redirects this call to the Start method that besides the args parameter also takes a few other ones that I'm using for testing. It also contains other static methods necessary for the initialization. The Instance file contains only instance code that runs the application. Having them both split allows me to better separate the two tasks. The new code: Program.Entry.cs internal partial class Program { internal static int Main(string[] args) { return Start( args, InitializeLogging, InitializeConfiguration, configuration => InitializeContainer(configuration, Enumerable.Empty<Autofac.Module>())); } public static int Start( string[] args, Action initializeLogging, Func<Configuration> initializeConfiguration, Func<Configuration, IContainer> initializeContainer) { initializeLogging(); var mainLogger = LoggerFactory.CreateLogger(nameof(Program)); LogEntry.New().Debug().Message("Logging initialized.").Log(mainLogger); var mainLogEntry = LogEntry.New().Stopwatch(sw => sw.Start()); try { var configuration = initializeConfiguration(); LogEntry.New().Debug().Message("Configuration initialized.").Log(mainLogger); var container = initializeContainer(configuration); LogEntry.New().Debug().Message("IoC initialized.").Log(mainLogger); using (var scope = container.BeginLifetimeScope()) { var program = scope.Resolve<Program>(); LogEntry.New().Info().Message($"Created {Name}
v{Version}").Log(mainLogger); program.Start(args); } mainLogEntry.Info().Message("Completed."); return 0; } catch (Exception ex) { mainLogEntry.Fatal().Message("Crashed.").Exception(ex); return 1; } finally { mainLogEntry.Log(mainLogger); LogEntry.New().Info().Message("Exited.").Log(mainLogger); } } #region Initialization internal static void InitializeLogging() { Reusable.Logging.NLog.Tools.LayoutRenderers.InvariantPropertiesLayoutRenderer.Register(); Reusable.Logging.Logger.ComputedProperties.Add(new Reusable.Logging.ComputedProperties.AppSetting(name: "Environment", key: $"Gunter.Program.Config.Environment")); Reusable.Logging.Logger.ComputedProperties.Add(new Reusable.Logging.ComputedProperties.ElapsedSeconds()); Reusable.Logging.LoggerFactory.Initialize<Reusable.Logging.Adapters.NLogFactory>(); } internal static Configuration InitializeConfiguration() { try { return new Configuration(new AppSettings()); } catch (Exception ex) { throw new InitializationException("Could not initialize configuration.", ex); } } internal static IContainer InitializeContainer(Configuration configuration, IEnumerable<Autofac.Module> moduleOverrides) { try { var builder = new ContainerBuilder(); builder.RegisterInstance(configuration.Load<Program, Workspace>()); builder.RegisterModule<SystemModule>(); builder.RegisterModule<DataModule>(); builder.RegisterModule<ReportingModule>(); builder.RegisterModule<HtmlModule>(); builder .RegisterType<TestRunner>() .WithParameter(new TypedParameter(typeof(ILogger), LoggerFactory.CreateLogger(nameof(TestRunner)))); builder .RegisterType<Program>() .WithParameter(new TypedParameter(typeof(ILogger), LoggerFactory.CreateLogger(nameof(Program)))) .PropertiesAutowired(); foreach (var module in moduleOverrides) { builder.RegisterModule(module); } return builder.Build(); } catch (Exception ex) { throw new InitializationException("Could not initialize container.", ex); } } #endregion } Program.Instance.cs internal partial class Program { public static 
readonly string Name = Assembly.GetAssembly(typeof(Program)).GetName().Name; public static readonly string Version = "2.0.0"; private static readonly string GlobalFileName = "_Global.json"; private readonly ILogger _logger; private readonly IPathResolver _pathResolver; private readonly IFileSystem _fileSystem; private readonly IVariableBuilder _variableBuilder; private readonly AutofacContractResolver _autofacContractResolver; private readonly TestRunner _testRunner; public Program( ILogger logger, IPathResolver pathResolver, IFileSystem fileSystem, IVariableBuilder variableBuilder, AutofacContractResolver autofacContractResolver, TestRunner testRunner) { _logger = logger; _pathResolver = pathResolver; _fileSystem = fileSystem; _variableBuilder = variableBuilder; _autofacContractResolver = autofacContractResolver; _testRunner = testRunner; } public Workspace Workspace { get; set; } public void Start(string[] args) { var globalFile = LoadGlobalFile(); var globals = VariableResolver.Empty .MergeWith(globalFile.Globals) .MergeWith(_variableBuilder.BuildVariables(Workspace)); var testFiles = LoadTestFiles().ToList(); LogEntry.New().Debug().Message($"Test files ({testFiles.Count}) loaded.").Log(_logger); LogEntry.New().Info().Message($"*** {Name} v{Version} started. 
***").Log(_logger); _testRunner.RunTestFiles(testFiles, args, globals); } private GlobalFile LoadGlobalFile() { var targetsDirectoryName = _pathResolver.ResolveDirectoryPath(Workspace.Targets); var fileName = Path.Combine(targetsDirectoryName, GlobalFileName); if (!File.Exists(fileName)) { return new GlobalFile(); } try { var globalFileJson = _fileSystem.ReadAllText(fileName); var globalFile = JsonConvert.DeserializeObject<GlobalFile>(globalFileJson); VariableValidator.ValidateNamesNotReserved(globalFile.Globals, _variableBuilder.Names); LogEntry.New().Debug().Message($"{Path.GetFileName(fileName)} loaded.").Log(_logger); return globalFile; } catch (Exception ex) { throw new InitializationException($"Could not load {Path.GetFileName(fileName)}.", ex); } } [NotNull, ItemNotNull] private IEnumerable<TestFile> LoadTestFiles() { LogEntry.New().Debug().Message("Initializing tests...").Log(_logger); return GetTestFileNames() .Select(LoadTest) .Where(Conditional.IsNotNull); } [NotNull, ItemNotNull] private IEnumerable<string> GetTestFileNames() { var targetsDirectoryName = _pathResolver.ResolveDirectoryPath(Workspace.Targets); return from fullName in _fileSystem.GetFiles(targetsDirectoryName, "*.json") where !Path.GetFileName(fullName).StartsWith("_", StringComparison.OrdinalIgnoreCase) select fullName; } [CanBeNull] private TestFile LoadTest(string fileName) { var logEntry = LogEntry.New().Info(); try { var json = _fileSystem.ReadAllText(fileName); var testFile = JsonConvert.DeserializeObject<TestFile>(json, new JsonSerializerSettings { ContractResolver = _autofacContractResolver, DefaultValueHandling = DefaultValueHandling.Populate, TypeNameHandling = TypeNameHandling.Auto, }); testFile.FullName = fileName; VariableValidator.ValidateNamesNotReserved(testFile.Locals, _variableBuilder.Names); logEntry.Message($"Test initialized: {fileName}"); return testFile; } catch (Exception ex) { logEntry.Error().Message($"Could not initialize test: {fileName}").Exception(ex); return 
null; } finally { logEntry.Log(_logger); } } } Testing Now I am able to override the default modules with my test versions to actually use other files that I store in a test project as Embedded Resources. This way I can simulate various scenarios with non-existing or invalid files, non-working database connections, etc., and verify that the application behaves as I want it to. That is, it should exit in some situations and survive in others, or log something, etc. Here's one of the first tests I wrote for it and a new TestFileSystem that I use to fake files. Additionally I can now test it with different command line arguments. [TestClass] public class ProgramTest { [TestMethod] public void Start_NoArguments_RunsAllTests() { var exitCode = Program.Start( new string[0], Program.InitializeLogging, Program.InitializeConfiguration, configuration => Program.InitializeContainer(configuration, new[] { new TestModule { FileSystem = new TestFileSystem { Files = { @"t:\tests\assets\targets\single-test.json" } } } })); Assert.AreEqual(0, exitCode); Assert.AreEqual(1, TestAlert.GetReports("single-test.json").Count); } } internal class TestModule : Autofac.Module { public TestFileSystem FileSystem { get; set; } protected override void Load(ContainerBuilder builder) { builder .RegisterType<TestAlert>(); builder .RegisterType<TestPathResolver>() .As<IPathResolver>(); builder .RegisterInstance(FileSystem) .As<IFileSystem>(); } } internal class TestFileSystem : IFileSystem { public List<string> Files { get; } = new List<string>(); public string ReadAllText(string fileName) { switch (fileName.ToLower()) { case @"t:\tests\_Global.json": return ResourceReader.ReadEmbeddedResource<ProgramTest>("Resources.assets.targets._Global.json"); case @"t:\tests\single-test.json": return ResourceReader.ReadEmbeddedResource<ProgramTest>("Resources.assets.targets.single-test.json"); default: throw new FileNotFoundException($"File \"{fileName}\" does not exist."); } } public string[] GetFiles(string path,
string searchPattern) { return Files.ToArray(); } } What do you think of this design? Can it still be improved in any way? Answer: var mainLogger = LoggerFactory.CreateLogger(nameof(Program)); LogEntry.New().Debug().Message("Logging initialized.").Log(mainLogger); This should really be split into two lines. Not sure what happened, but I'm sure it was a mistake. var mainLogger = LoggerFactory.CreateLogger(nameof(Program)); LogEntry.New().Debug().Message("Logging initialized.").Log(mainLogger); Same with the very next line. Maybe it wasn't a mistake. Maybe you did this intentionally because of all the noisy daisy chaining your logger needs. Try extracting a few small helpers to reduce the noise. private void logDebug(string message, ILogger logger) { LogEntry.New().Debug().Message(message).Log(logger); } Otherwise, I find your entry point well structured and readable. I like that you ensure the program returns a non-zero exit code on failure. We Windows devs tend to forget these kinds of things, but it's important if your program ever gets used as part of some shell script. +1 Using a couple of variables here could clean this up.
internal static void InitializeLogging() { Reusable.Logging.NLog.Tools.LayoutRenderers.InvariantPropertiesLayoutRenderer.Register(); Reusable.Logging.Logger.ComputedProperties.Add(new Reusable.Logging.ComputedProperties.AppSetting(name: "Environment", key: $"Gunter.Program.Config.Environment")); Reusable.Logging.Logger.ComputedProperties.Add(new Reusable.Logging.ComputedProperties.ElapsedSeconds()); Reusable.Logging.LoggerFactory.Initialize<Reusable.Logging.Adapters.NLogFactory>(); } internal static void InitializeLogging() { Reusable.Logging.NLog.Tools.LayoutRenderers.InvariantPropertiesLayoutRenderer.Register(); var properties = Reusable.Logging.Logger.ComputedProperties; properties.Add(new Reusable.Logging.ComputedProperties.AppSetting(name: "Environment", key: $"Gunter.Program.Config.Environment")); properties.Add(new Reusable.Logging.ComputedProperties.ElapsedSeconds()); Reusable.Logging.LoggerFactory.Initialize<Reusable.Logging.Adapters.NLogFactory>(); } If I got that wrong and those are namespaces, not static properties, for goodness sake use some imports. One last thought... If your entry point has become so large and complex that you're worried about testing the code directly, or are tempted to use a partial class, then your domain is likely missing some concepts. I think it would be useful to ask yourself why you want to test these conditions, and why is it hard to test them? Could you test them by just running the program under different contexts? Could you use a shell script to just run it with different inputs? Why not? Because you're testing embedded resources? Why not pass those files via args instead? Why don't your other existing tests inside of the actual domain logic test those embedded resources efficiently? Have you put a ton of time and effort into a problem that didn't need to be solved out of some dogmatic reasoning that everything must be tested extremely thoroughly? What benefit did it give you? 
Sometimes simpler is better and we need to know when enough is enough. I'm not saying that you were wrong to do any of this, I'm just asking you to reflect on whether or not you were.
{ "domain": "codereview.stackexchange", "id": 26212, "tags": "c#, design-patterns, console, dependency-injection, integration-testing" }
Is the language of TMs that repeat a configuration infinitely many times semi-decidable or not?
Question: Let us define the following languages: $$ {L_1 = \{\langle M\rangle : M \ \text{is a TM and $\exists w\in \Sigma^*$ s.t. $M(w)$ repeats a configuration infinitely many times}\}} $$ $$ L_2 = \{\langle M\rangle : M \ \text{is a TM and on all inputs there exists a configuration that repeats infinitely many times}\} $$ My question is which class these languages belong to ($\mathcal{R}$, $\mathcal{RE}\setminus\mathcal{R}$, etc.)? I have shown that ${H_{TM}}\le_m L_2$: On input $\langle M,w\rangle$, return a new TM $M'$ that, on every input, runs $M$ on $w$, keeping a counter of the number of steps (to avoid entering the same configuration infinitely many times if $M$ doesn't stop). If $M$ does stop, repeat some configuration in a loop. That seems to prove that $L_1,L_2\notin \mathcal{R}$, co-$\mathcal{RE}$. But are they in $\mathcal{RE}$? I couldn't write a TM that accepts these languages, nor derive a reduction from $\overline{H_{TM}}$, $\overline{A_{TM}}$ or any other relevant language. Any ideas on solving this? Thanks! Answer: $L_1$ is indeed recursively enumerable. Suppose for simplicity that the input consists of a pair $M,w$ and you want to know whether $M$ repeats a configuration during its computation on $w$. You can then loop over the integers, and for each $n\in\mathbb{N}$ check if $M$ repeats a configuration with input $w$ during the first $n$ steps. This is possible since during the first $n$ steps $M$ uses at most $n$ cells from the tape, hence the number of possible configurations is finite (you can store them in memory and mark those which were seen). Now, you want to add an additional existential quantifier over the words $w$, and this can be achieved by dovetailing. Simultaneously scan words and computation lengths, i.e. for each $n$ and for all words of length $\le n$, check if $M$ repeats a configuration during its first $n$ steps.
The additional universal quantifier prevents $L_2$ from being recursively enumerable, you can show this by reduction from the complement of the halting problem. Given $(M,w)$, your reduction outputs a machine which on input $x$ simulates $M$ on $w$ for $|x|$ steps, and enters a loop iff $M$ did not halt in this time.
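The bounded check at the heart of the answer — "does $M$ repeat a configuration within its first $n$ steps?" — can be sketched concretely. This is a minimal sketch with my own TM encoding and function names (not from the answer): a configuration is taken to be (state, head position, tape contents), and we record configurations for $n$ steps, reporting a repeat if one recurs.

```python
# Hedged sketch: brute-force configuration-repetition check for a toy
# single-tape TM.  The transition encoding is an assumption of this example.

def repeats_configuration(delta, start, w, n, blank="_"):
    """delta: dict (state, symbol) -> (state, symbol, move), move in {-1, 0, +1}.
    Returns True iff some configuration occurs twice in the first n steps."""
    tape = dict(enumerate(w))
    state, head = start, 0
    seen = set()
    for _ in range(n):
        config = (state, head, tuple(sorted(tape.items())))
        if config in seen:
            return True
        seen.add(config)
        key = (state, tape.get(head, blank))
        if key not in delta:          # machine halted: no further repetition
            return False
        state, sym, move = delta[key]
        tape[head] = sym
        head += move
    return False

# A machine that stays put repeats its configuration almost immediately...
stay = {("q", "_"): ("q", "_", 0)}
# ...while one that walks right over blanks never does (head position grows).
walk = {("q", "_"): ("q", "_", 1)}
print(repeats_configuration(stay, "q", "", 10))  # True
print(repeats_configuration(walk, "q", "", 10))  # False
```

Wrapping this in the answer's dovetailing loop — for each $n$, run the check on every word of length $\le n$ — gives the semi-decision procedure for $L_1$.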
{ "domain": "cs.stackexchange", "id": 10644, "tags": "formal-languages, computability, reductions, undecidability, semi-decidability" }
Does Feynman's derivation of Maxwell's equations have a physical interpretation?
Question: There are so many times that something leaves you stumped. I was recently reading the paper "Feynman's derivation of Maxwell's equations and extra dimensions" and the derivation of Maxwell's equations from just Newton's second law and the quantum mechanical commutation relations really intrigued me. They only derived the Bianchi set, yet with slight tweaks using relativity, the other two can be derived. Awesome as it is, does this even have a physical interpretation? How is it possible to mix together classical and quantum equations for a single particle, which aren't even compatible, and produce a description of the electromagnetic field? Answer: Feynman's derivation is wonderful, and I want to sketch why we would expect it to work, and what implicit assumptions it's really making. The real issue is that by switching back and forth between quantum and classical notation, Feynman sneaks in physical assumptions that are sufficiently restrictive to determine Maxwell's equations uniquely. To show this, I'll give a similar proof in fully classical, relativistic notation. By locality, we expect the force on a particle at position $x^\mu$ with momentum $p^\mu$ depends solely on $p^\mu$ and $F(x^\mu)$. (This is Eq. 1 in the paper.) Then the most general possible expression for the relativistic four-force is $$\frac{d p^\mu}{d\tau}= F_1^\mu(x^\mu) + F_2^{\mu\nu}(x^\mu)\, p_\nu + F_3^{\mu\nu\rho}(x^\mu)\, p_\nu p_\rho + \ldots$$ where we have an infinite series of $F_i$ tensors representing the field $F$. (Of course, we already implicitly used rotational invariance to get this.) I'll suppress the $x^\mu$ argument to save space. It's clear that we need more physical assumptions at this point since the $F_i$ are much too general. The next step is to assume that the Lagrangian $L(x^\mu, \dot{x}^\mu, t)$ is quadratic in velocity.
Differentiating, this implies that the force must be at most linear in momentum, so we have $$\frac{d p^\mu}{d\tau}= F_1^\mu + F_2^{\mu\nu}\, p_\nu.$$ This is a rather strong assumption, so how did Feynman slip it in? It's in equation 2, $$[x_i, v_j] = i \frac{\hbar}{m} \delta_{ij}.$$ Now, to go from classical Hamiltonian mechanics to quantum mechanics, we perform Dirac's prescription of replacing Poisson brackets with commutators, which yields the canonical commutation relations $[x_i, p_j] = i \hbar \delta_{ij}$ where $x_i$ and $p_i$ are classically canonically conjugate. Thus, Feynman's Eq. 2 implicitly uses the innocuous-looking equation $$\mathbf{p} = m \mathbf{v}.$$ However, since the momentum is defined as $$p \equiv \frac{\partial L}{\partial \dot{x}}$$ this is really a statement that the Lagrangian is quadratic in velocity, so the force is at most linear in velocity. Thus we get a strong mathematical constraint by using a familiar, intuitive physical result. The next physical assumption is that the force does not change the mass of the particle. Feynman does this implicitly when moving from Eq. 2 to Eq. 4 by not including a $dm/dt$ term. On the other hand, since $p^\mu p_\mu = m^2$, in our notation $dm/dt = 0$ is equivalent to the nontrivial constraint $$0 = p_\mu \frac{dp^\mu}{d\tau} = F_1^\mu p_\mu + F_2^{\mu\nu} p_\mu p_\nu.$$ For this to always hold, we need $F_1 = 0$ and $F_2$ (hereafter called $F$) to be an antisymmetric tensor and hence a rank two differential form. We've now recovered the Lorentz force law $$\frac{d p^\mu}{d\tau} = F^{\mu\nu} p_\nu.$$ Our next task is to restore Maxwell's equations. That seems impossible because we don't know anything about the field's dynamics, but again the simplicity of the Hamiltonian helps. 
Since it is at most quadratic in momentum, the most general form is $$H = \frac{p^2}{2m} + \mathbf{A}_1 \cdot \mathbf{p} + A_2.$$ Collecting $\mathbf{A}_1$ and $A_2$ into a four-vector $A^\mu$, Hamilton's equations are $$\frac{dp^\mu}{d\tau} = (dA)^{\mu\nu} p_\nu$$ where $d$ is the exterior derivative. That is, the simplicity of the Hamiltonian forces the field $F$ to be described in terms of a potential, $F = dA$. Since $d^2 = 0$ we conclude $$dF = 0$$ which contains two of Maxwell's equations, specifically Gauss's law for magnetism and Faraday's law. So far we haven't actually used relativity, just worked in relativistic notation, and indeed this is where our derivation and Feynman's run out of steam. To get the other two equations, we need relativity proper. The basic conclusion is that Feynman's derivation is great, but not completely mysterious. In particular, it isn't really mixing classical and quantum mechanics at all -- the quantum equations that Feynman uses are equivalent to classical ones derived from Hamilton's equations, because he is using the Dirac quantization procedure, so the only real purpose of the quantum mechanics is to slip in $\mathbf{p} = m \mathbf{v}$, and by extension, the fact that the Hamiltonian is very simple, i.e. quadratic in $\mathbf{p}$. The other assumptions are locality and mass conservation. It's not surprising that electromagnetism pops out almost 'for free', because the space of possible theories really is quite constrained. In the more general framework of quantum field theory, we can get Maxwell's equations by assuming locality, parity symmetry, Lorentz invariance, and that there exists a long-range force mediated by a spin 1 particle, as explained elsewhere on this site. This has consequences for classical physics, because the only classical physics we can observe are those quantum fields which have a sensible classical limit.
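To make the final step explicit: $dF = 0$ written out in components is the Bianchi identity, and splitting the indices into time and space recovers exactly the two homogeneous Maxwell equations named above. The identifications and signs below are one common convention (they depend on the metric signature), so treat this as a sketch:

```latex
% Bianchi identity: dF = 0 in component form
\partial_\lambda F_{\mu\nu} + \partial_\mu F_{\nu\lambda} + \partial_\nu F_{\lambda\mu} = 0
% Identify the fields via F_{0i} = E_i and F_{ij} = -\epsilon_{ijk} B_k.
% All three indices spatial:
\partial_i F_{jk} + \partial_j F_{ki} + \partial_k F_{ij} = 0
\quad\Longrightarrow\quad \nabla \cdot \mathbf{B} = 0
% One index temporal (x^0 = t, units with c = 1):
\partial_0 F_{ij} + \partial_i F_{j0} + \partial_j F_{0i} = 0
\quad\Longrightarrow\quad \nabla \times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} = 0
```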
{ "domain": "physics.stackexchange", "id": 47485, "tags": "quantum-mechanics, newtonian-mechanics, special-relativity, maxwell-equations" }
Are SNPs or SSR copy number variation mutations more prominent?
Question: I'm trying to get a sense of the dominant way that mutations occur. I have seen various numbers which seem at least at first glance to conflict, and I was curious if anyone had clarification on this. According to Shastry, "SNP alleles in human disease and evolution" (Journal of Human Genetics 47:561–566, 2002), In two randomly selected human genomes, 99.9% of the DNA sequence is identical. The remaining 0.1% of DNA contains sequence variations. The most common type of such variation is called a single-nucleotide polymorphism, or SNP. However, according to Nevo, "Genetic Diversity" (Encyclopedia of Biodiversity, 2001), Significantly, SSRs experience mutation at notably higher rates than do nonrepetitive sequences: $10^{-2}$ to $10^{-3}$ per locus, per gamete, per generation, which leads to their high polymorphism. Additionally, according to Kashi and King, "Simple Sequence Repeats as Advantageous Mutators in Evolution" (TRENDS in Genetics 22(5):257), Mutations that alter repeat number typically occur at rates orders of magnitude greater than single-nucleotide point mutations. So, if variation in organisms is mostly due to SSRs, and this is mostly copy number variation, then would it be correct to say that copy number variants in SSRs are the dominant form of variation, or am I missing something? Which one is correct, or which source is more authoritative, or is there a more up-to-date review I should be consulting? Answer: I like @mgkrebbs's answer, I think that it hits most of the high points, but I wrote a review on this subject a couple of years ago where we specifically put together estimates on the mutational load of different mutation classes (see Table 1).
Yaniv Erlich's group directly addressed the question that you are trying to answer, and they estimated that microsatellites contribute slightly more mutations per generation than substitutions: These predictions indicate that the load of de novo STR [SSR] mutations is at least 75 mutations per generation, rivaling the load of all other known variant types. Most estimates of diploid overall substitution mutation rates are ~50/generation, for comparison. However, if you are interested in overall variant number you will also need to take into account an incredibly complex architecture of structural variants in basically any genome, including also plasmids, viruses, and transposons. So when you talk about the "dominant form of variation", you really do need to be a bit more specific. On a per locus or even per mutation class basis, STRs/SSRs probably are "dominant", but it's hard to argue that e.g. 1 STR mutation is more important than one transposon hop or one chromosome aneuploidy or one centromeric satellite contraction. I'd recommend reading the papers linked for a little more context.
{ "domain": "biology.stackexchange", "id": 11386, "tags": "evolution, molecular-genetics, mutations, repeatitive-sequence" }
Proof that Turing machines and computers have same power
Question: How do we prove that any logical circuit can be simulated by a Turing machine? For example, we take a logical circuit $L$ that is made of gates and, or, and not. This circuit determines a problem, for example, whether the input is an even number. How can we prove that if a problem is decided by a logical circuit, there is a Turing machine that can decide the problem? In other words, how do we prove that a CPU is equal in power to a Turing machine? We should not forget that a Turing machine has infinitely many cells. Answer: Turing machines and Boolean circuits cannot really be compared, since Turing machines handle inputs of arbitrary length, whereas Boolean circuits only handle inputs of fixed length. Furthermore, Turing machines and CPUs correspond to two different computation models. Turing machines access their memory by moving a head across a tape. In contrast, a CPU accesses its memory via random access. The abstraction corresponding to CPUs is the random-access machine. If we want to compare the power of Turing machines and circuits, we need to put them on equal footing. One way to do this is to consider a sequence of circuits $C_0,C_1,C_2,\ldots$, one per input length. Such a sequence computes a language in the natural way: an input $x$ belongs to the language if $C_{|x|}$ returns True when given $x$ as input. This circuit model is much more powerful than Turing machines – indeed, it can compute any language, whereas Turing machines can only compute decidable languages. What went wrong? Let us consider your example, of circuits for parity. The circuits computing parity for different lengths of input are very similar to one another; they seem to be made from a "mold", a set of instructions which can be translated to a circuit of arbitrary length. In contrast, the circuit model considered in the preceding paragraph specifies no such relation between the different circuits.
When we add this constraint – that the different circuits follow a "blueprint" – we do get a model equivalent to Turing machines. What exactly constitutes a blueprint is a bit technical (one option is to have the circuits generated by a Turing machine), but the intuitive idea should be clear. Circuits also come up when trying to understand resource-bounded computation, such as the complexity class $\mathsf{P}$ of languages which can be decided in polynomial time. It is known that any such language can be translated to a "uniform" family of circuits of polynomial size. In this way, the fact that the language can be decided efficiently is reflected by the size of circuits that compute it.
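The "blueprint" idea for the parity example can be sketched in a few lines. This is my own illustration (not from the answer): one fixed procedure that, given any input length n, emits a parity circuit built from the AND/OR/NOT basis — which is exactly what makes the family uniform.

```python
# Sketch: a uniform family of parity circuits C_1, C_2, ... generated by
# one fixed "blueprint" procedure.

def xor_gate(a, b):
    # XOR built from the allowed basis: (a OR b) AND NOT (a AND b)
    return (a or b) and not (a and b)

def make_parity_circuit(n):
    """Return C_n: a function on n bits, built as a chain of XOR gadgets."""
    def circuit(bits):
        assert len(bits) == n
        acc = False
        for b in bits:
            acc = xor_gate(acc, bool(b))
        return acc
    return circuit

# The same blueprint yields a correct circuit for every input length:
for n in range(1, 6):
    C = make_parity_circuit(n)
    for x in range(2 ** n):
        bits = [(x >> i) & 1 for i in range(n)]
        assert C(bits) == (sum(bits) % 2 == 1)
print("all parity circuits agree with sum mod 2")
```

An arbitrary (non-uniform) family, by contrast, would be free to pick a completely unrelated Boolean function at each length — which is how it can compute undecidable languages.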
{ "domain": "cs.stackexchange", "id": 12026, "tags": "turing-machines, computability" }
Single linear actuator lifting mechanism
Question: I have inherited a DIY build and I have some doubts about its design. Have a look at the following diagram: 1 is the linear actuator. It's quite beefy: 2000 lbs, 12" stroke. It is installed in an aluminium channel. 4 is a carriage, essentially a box section with little rollers on all sides, so it has no play within the channel and moves quite smoothly. This is connected to the actuator's rod. 3 is a flat steel bar that links to an identical channel (2) and carriage assembly, though this one doesn't have an actuator in it. The system is supposed to lift a max 220 lbs load that is evenly distributed along the bar. My questions are: While there don't seem to be any bending forces on the actuator itself, is it a safe design, as far as longevity of the components goes? Is there anything I can do in channel 2 to support the system, e.g. a spring at the bottom, etc., to address any design problems? Answer: As a quick basic estimation approach we annotate the following: the length of bar 3 is $L$; the top and bottom bearing force on each box is $F_b$; the height and width of box 4 and its counterpart on channel 2 are $H$ and $W$. $$ \Sigma M=0, \quad \frac{220\,L}{4} = \frac{H}{2}\cdot 4F_b \quad\Rightarrow\quad F_b=\frac{220\,L}{8H} $$ Now you have to figure out whether the bearings and their connections are okay to take this load multiplied by a safety factor of, say, 3.
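The moment-balance estimate from the answer is easy to put numbers into. The dimensions below are placeholders of my own choosing, not from the question:

```python
# Numeric sketch of the answer's estimate: F_b = 220 * L / (8 * H),
# then multiplied by a safety factor of ~3 for design purposes.

def bearing_force(load_lbs, bar_length, box_height, safety_factor=3.0):
    """Top/bottom roller load on each carriage box from the offset bar load."""
    f_b = load_lbs * bar_length / (8.0 * box_height)
    return f_b, f_b * safety_factor

# Example: a 24" bar and a 4" carriage box (assumed values):
f_b, f_design = bearing_force(220, 24, 4)
print(f_b)       # 165.0 lbs per bearing
print(f_design)  # 495.0 lbs design load
```

Note how strongly the result depends on the box height H: a longer carriage box directly reduces the roller loads.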
{ "domain": "engineering.stackexchange", "id": 3599, "tags": "mechanical-engineering, linear-motion, linear-motors" }
Derivation of Maxwell's equations using Lagrangian formalism
Question: Some time ago, I read in Landau's Theoretical Physics Course that you could derive Maxwell's equations using the Lagrangian formalism, and I find this to be exciting. Unfortunately, I don't have access to the book and even if I had it, I'm not sure I could understand it, since I haven't learnt anything about tensors yet, and seemingly you need to use a stress tensor. Could you please explain to me exactly how Maxwell's equations derive from the Lagrangian formalism? In fact, from my Classical Mechanics course, I thought Lagrangians were only used in mechanics, but I've read here that they can give rise to different physics depending on your choice of the Lagrangian. How is this possible? I'm just as confused as I am thrilled about the possibilities this would open up. If you'd clarify my doubts for me, I'd appreciate it very much. Answer: The principle of least action is extremely robust and has been employed in so many interesting ways. The Lagrangian for the electromagnetic field is given by: $$\mathcal L=-{1\over 4}F_{\mu\nu}F^{\mu\nu}.$$ There are different conventions that use differing values for the constant out front, for example that which you see in J.D. Jackson's text; however, they all contain the "quadratic" form in $F_{\mu\nu}$. Here, $F_{\mu\nu}$ and $F^{\mu\nu}$ are rank-two covariant and contravariant tensors known as the Faraday or field strength tensor; however, this is not so daunting as it may sound in this context because you can identify these quantities with matrices, albeit matrices that transform in the proper way under rotation. See Goldstein's Classical Mechanics chapter 13 for a good introduction to the Lagrangian formulation for continuous systems and fields, as that is precisely what the electromagnetic field is.
At any rate, the Euler-Lagrange equations for such systems have the form: $$\partial_\mu \bigg({\partial\mathcal L\over\partial (\partial_\mu \phi_\rho)}\bigg)-{\partial\mathcal L\over\partial\phi_\rho}=0,$$ $$\vdots$$ where we have as many equations as we have fields. Notice that we have avoided differentiating with respect to the usual generalized coordinates and have instead differentiated with respect to some functions $\phi_\rho$. The functions $\phi_\rho$ are any set of functions which act as the "coordinates" of the Lagrangian, which in the continuous system is now a field or density that is defined everywhere in space, i.e. for a continuous system the Lagrangian is such that: $$\mathcal L=\mathcal L(\phi_{\rho}, \partial_\mu\phi_{\rho}, x^\mu).$$ The Lagrangian may be a function of any number of fields, their derivatives and possibly the raw coordinates themselves! You are right when you say that the subject is interesting! Now back to Maxwell's theory. The field strength tensor is defined as: $$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu,$$ and $$F^{\mu\nu}=\partial^\mu A^\nu-\partial^\nu A^\mu.$$ So for the electromagnetic system, the field involved is the 4-vector potential $A^\mu$, so inserting these into the given Lagrangian expression we get: $$\mathcal L=-{1\over 4}(\partial_\mu A_\nu-\partial_\nu A_\mu)(\partial^\mu A^\nu-\partial^\nu A^\mu).$$ So Maxwell's equations can be derived via the Euler-Lagrange equations: $$\partial_\nu\bigg({\partial\mathcal L\over\partial (\partial_\nu A^\mu)}\bigg)-{\partial\mathcal L\over\partial A^\mu}=0.$$ Now, this calculation is straightforward; however, it does require that you get comfortable with manipulating indices, thus I will leave off the derivation for now and offer you some good pieces of advice that I hope you will pursue as your time admits.
Firstly, read 7.4-7.6 of Goldstein's Classical Mechanics, then read chapter 13 or at least 13.1-13.2; then you will be ready to calculate the above derivatives and find Maxwell's equations from the Lagrangian formulation. This is really not a lot of material to cover, and it will serve as excellent preparation for more advanced physics.
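The derivative that the answer leaves for the reader can be sketched as follows (free-field case; overall signs depend on the metric convention, so take this as an outline rather than a definitive derivation):

```latex
% Differentiating \mathcal L = -\tfrac14 F_{\mu\nu}F^{\mu\nu} with respect to \partial_\nu A_\mu:
\frac{\partial\mathcal L}{\partial(\partial_\nu A_\mu)} = -F^{\nu\mu},
\qquad
\frac{\partial\mathcal L}{\partial A_\mu} = 0 .
% The Euler--Lagrange equations then give the source-free Maxwell equations:
\partial_\nu F^{\nu\mu} = 0 .
% Adding an interaction term -J^\mu A_\mu to \mathcal L produces the sourced form:
\partial_\nu F^{\nu\mu} = J^\mu .
```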
{ "domain": "physics.stackexchange", "id": 100273, "tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, maxwell-equations, variational-calculus" }
why are deterministic PDAs not closed under concatenation?
Question: I can understand that they are not closed under concatenation because, without nondeterminism, a PDA cannot decide whether to loop in the first PDA or jump to the next one. But can someone prove this with an example, and show that the resulting language cannot be accepted by a DPDA? Answer: Pick the languages $L_1 = \{a^ib^jc^k \mid i \neq j \}$ and $L_2 = \{a^ib^jc^k \mid j \neq k\}$; both are DCFL, and $L_3 = 0L_1 \cup L_2$ is DCFL, too. $L_0 = 0^*$ is DCFL (regular). But $L_{conc} = L_0 \cdot L_3 = 0^* L_3$ is not DCFL. Proof: Suppose that $L_{conc}$ (which is the concatenation of two DCFLs) is DCFL. If we intersect $L_{conc}$ with the regular language $0a^*b^*c^*$, we should get a DCFL: $L_{conc} \cap 0a^*b^*c^* = 0L_1 \cup 0L_2$. Suppose $0L_1 \cup 0L_2$ is a DCFL; then $L_1 \cup L_2$ should be a DCFL, too. But $L_1 \cup L_2 = \overline{\overline{L_1} \cap \overline{L_2}} = \overline{\{a^ib^ic^i\}}$, which is not DCFL $\Rightarrow$ contradiction.
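The identity used at the end of the answer — that within $a^*b^*c^*$ the complement of $L_1 \cup L_2$ is exactly $\{a^nb^nc^n\}$ — can be sanity-checked by brute force over exponent triples. This is my own check, not part of the proof:

```python
# Brute-force check: a string a^i b^j c^k is OUTSIDE L1 ∪ L2
# exactly when i == j == k.

from itertools import product

def in_L1(i, j, k):  # a^i b^j c^k with i != j
    return i != j

def in_L2(i, j, k):  # a^i b^j c^k with j != k
    return j != k

N = 8
for i, j, k in product(range(N), repeat=3):
    in_union = in_L1(i, j, k) or in_L2(i, j, k)
    assert (not in_union) == (i == j and j == k)
print("complement of L1 ∪ L2 within a*b*c* is {a^n b^n c^n}, checked up to", N)
```

Since $\{a^nb^nc^n\}$ is not even context-free, and DCFLs are closed under complement, $L_1 \cup L_2$ cannot be DCFL — which is the contradiction the proof relies on.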
{ "domain": "cs.stackexchange", "id": 1473, "tags": "automata, pushdown-automata" }
Why do focal lengths affect magnification?
Question: For compound lenses, the image formed by the first lens acts as the imaginary object for the second lens. In telescopes, the objective lens projects an image at its focal point which works as the object for the eyepiece. Per the property of convex lenses, the eyepiece magnifies the image. If the focal length of the eyepiece is smaller we'll get a higher magnification. Now if the focal length of the objective lens is increased, it'll again project a small image at its focal point. So for two objective lenses with different focal lengths, it seems the image size should be about the same. So why does the magnification change? Answer: It is easy to understand that the higher the focal length, the narrower the angle of the image from the objective lens or mirror to the eyepiece. The narrower the angle (i.e. the higher the focal length), the deeper the image can go into the eyepiece without losing focus. The lower the focal length, the shorter the depth of focus. It means that you have less play with your focuser with a shorter focal length than with a longer one. In fact you can still get the same magnification for both focal lengths, but the one with the shorter focal length will run out of focus long before the one with the longer focal length. This is why you notice the expansion of the "star" as it goes out of focus in either direction. The longer focal length is more forgiving with the depth of focus, thus giving you more play with your focuser. The longer focal length allows you to move further toward the objective lens or mirror to get a larger image while still in focus than the shorter focal length, which expands the image beyond the field of the eyepiece in use. It has a lot to do with the eyepiece's ability to eat the whole image in one gulp.
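For reference, the standard textbook relation behind all of this — not spelled out explicitly in the answer — is that the angular magnification of a telescope is the ratio of the focal lengths, M = f_objective / f_eyepiece. A quick numeric sketch (focal lengths are assumed example values):

```python
# Angular magnification of a telescope: M = f_objective / f_eyepiece
# (standard textbook relation; the numbers below are illustrative).

def telescope_magnification(f_objective_mm, f_eyepiece_mm):
    return f_objective_mm / f_eyepiece_mm

# Doubling the objective focal length doubles the magnification
# for the same eyepiece:
print(telescope_magnification(900, 10))   # 90.0
print(telescope_magnification(1800, 10))  # 180.0
```

This is why changing the objective focal length changes the magnification even though the eyepiece is unchanged: the longer objective presents the same field at a narrower angle, which the eyepiece then magnifies more.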
{ "domain": "physics.stackexchange", "id": 28164, "tags": "optics, telescopes, lenses" }
Listing human-readable enums
Question: I'm trying to write a util class to operate on enums - converting enums to their special string representations via an interface method, with the string stored in the enum constructor. public enum StatusEnum implements HumanReadableEnum { NOT_VERIFIED("Not verified message"), INVALID("Invalid message"), APPROVED("Approved message"); private String text; StatusEnum(String text) { this.text = text; } public String getHumanReadable(){ // single method of HumanReadableEnum return text; } } For example, instead of HumanReadableEnum[] v = StatusEnum.values() I want to get a List of the inner text messages, and do it in a generic style. So now I made a toHumanReadableCollection method: public class EnumUtils { public static <CustomEnum extends Enum & HumanReadableEnum, E extends Class<CustomEnum>> List<String> toHumanReadableCollection(E enumClass){ Enum[] values = enumClass.getEnumConstants(); List<String> humanReadableCollection = new ArrayList<>(values.length); for (Enum value : values) { String stringRepresentation = ((CustomEnum)value).getHumanReadable(); humanReadableCollection.add(stringRepresentation); } return humanReadableCollection; } } As a result I can do something like: List<String> messagesOfEnum1 = EnumUtils.toHumanReadableCollection(MyEnum1.class); List<String> messagesOfEnum2 = EnumUtils.toHumanReadableCollection(MyEnum2.class); And have the code of toHumanReadableCollection in a single place. But I have some doubts about performance. So could somebody help me understand the cost of my code? :) Did I touch reflection?
Answer: Method header Your method header can be improved, the generic type E is not necessary, it can instead be written as: public static <CustomEnum extends Enum & HumanReadableEnum> List<String> toHumanReadableCollection(Class<CustomEnum> enumClass){ Type parameters should generally be one-character names, so this would be better as: public static <E extends Enum & HumanReadableEnum> List<String> toHumanReadableCollection(Class<E> enumClass) { Compiler warning You're getting a compiler warning here: String stringRepresentation = ((CustomEnum)value).getHumanReadable(); All you care about is the getHumanReadable method, which is in the HumanReadableEnum interface, so make that: String stringRepresentation = ((HumanReadableEnum)value).getHumanReadable(); Naming and interface HumanReadableEnum should instead be HumanReadable, there's no need to keep Enum in the name, there's no need to restrict the interface to only enums either. Your questions Reflection? Yes, this line of code is using reflection: Enum[] values = enumClass.getEnumConstants(); Time complexity? \$O(n)\$, it is linear to the number of enum constants. Java 8 In Java 8, you could replace your method with this: List<String> list = Arrays.stream(StatusEnum.values()) .map(HumanReadableEnum::getHumanReadable) .collect(Collectors.toList()); Or, as a general method, and also incorporating the earlier suggestions: public static <E extends Enum & HumanReadable> List<String> toHumanReadableCollection(Class<E> enumClass) { Enum[] values = enumClass.getEnumConstants(); return Arrays.stream(values) .map(e -> ((HumanReadable)e).getHumanReadable()) .collect(Collectors.toList()); } There's often a benefit of using Streams, I suggest you try to learn the Java 8 Stream API at some point.
{ "domain": "codereview.stackexchange", "id": 16009, "tags": "java, performance, reflection" }
How do I file tickets with Trac on code.ros.org?
Question: I would like to know how to file tickets with ROS Trac. My issue is that I don't know how to sign up for an account. I went to ROS code site, and it tells me to go to the ROS Tickets Wiki which tells me to go back to ROS code, which quickly becomes circular. I have an account for editing wiki pages for ros.org and (obviously) for ROS Answers, but when I try to use either of those logins for ROS Trac I am not successful. Is there a page where I can sign up for Trac? Thanks! Originally posted by Thomas D on ROS Answers with karma: 4347 on 2011-09-01 Post score: 2 Answer: If I recall correctly, register on https://code.ros.org/gf/ (see link at top right). That registration will then be propagated to any Trac instance managed by gForge (including ros, ros-pkg, wg-ros-pkg, etc). Originally posted by Eric Perko with karma: 8406 on 2011-09-01 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Thomas D on 2011-09-01: That's it. Not sure why I didn't see it. Thanks.
{ "domain": "robotics.stackexchange", "id": 6581, "tags": "ros" }
Definition of spacetime interval
Question: We know that a spacetime vector is $x:= (\vec{x}, ct)$, where $c$ is the speed of light. Why is the interval $I$ in spacetime defined as $$ I=-(\Delta t)^2 + \frac{1}{c^2}\left[ (\Delta x)^2+(\Delta y)^2+(\Delta z)^2 \right] ?$$ More concretely, (1) Why is $I$ defined in terms of squared components (without the square root)? If it were $I^2$ then it'd be clearer. (2) Why does the division by $c^2$ happen in $I$? What is $c^2 I$ then? Would appreciate some insights. Answer: I have written the answer with the following expression in mind as the expression for the interval: $\Delta t^2 - \dfrac{1}{c^2}(\Delta x^2 + \Delta y^2 + \Delta z^2)$. In my experience, this form of the interval is more common than the one stated by the OP. Whenever I write something like "the form you mentioned", it really refers to the above-stated expression. The stated quantity is a Lorentz invariant. This quantity, calculated for a pair of events, remains the same no matter which inertial observer measures it. Thus, it is a good idea to think of this quantity as a property of the pair of events itself, over and above its measurement by any observer. This is the essential reason to define this quantity as the interval - an analog of the Euclidean distance in Minkowskian spacetime. But, as you have partially pointed out, any bijective mapping of the stated quantity would satisfy all the above properties. So, why not define any one of them as the interval? To sum it up in one sentence: there is not much physics in choosing this particular expression; it is rather a matter of convention. In fact, some variations of this quantity are also equally accepted as the interval. The most famous is the negative of this quantity (up to the factor of $c^2$). Still, the origins of the conventions are not arbitrary, and there are good explanations for the particular questions you have asked: (1) It is used in the squared form because the quantity can be negative as well as positive.
Thus, taking the square root would introduce unnecessary work, with imaginary quantities popping up everywhere. Another reason is that in the tensorial formulation of special relativity, which is crucial for its extension to general relativity, this form finds a very elegant and easily manipulable expression. Taking its root would again introduce unnecessary complications in the tensorial representation. (2) The stated expression is helpful for finding the proper time between two timelike-separated events. To give this quantity a temporal dimension, the division by $c^2$ is done. When you use the negative of the above formula, it is meant to be used to find the spacetime interval between two spacelike-separated events. In that case, it is appropriate to give it a spatial dimension, and thus, indeed, $c^2$ is multiplied with $t^2$ instead of dividing $x^2+y^2+z^2$ by $c^2$. All this mess is thrown out in any serious theoretical work by introducing natural units where $c=1$.
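The Lorentz invariance asserted above can be checked directly (a standard textbook computation, not part of the original answer): for a boost along $x$ with $\gamma = (1-v^2/c^2)^{-1/2}$, we have $t' = \gamma\,(t - vx/c^2)$ and $x' = \gamma\,(x - vt)$, so

$$ t'^2 - \frac{x'^2}{c^2} = \gamma^2\left(t - \frac{vx}{c^2}\right)^2 - \frac{\gamma^2}{c^2}\left(x - vt\right)^2 = \gamma^2\left(1 - \frac{v^2}{c^2}\right)\left(t^2 - \frac{x^2}{c^2}\right) = t^2 - \frac{x^2}{c^2}, $$

where the cross terms $\mp 2\gamma^2 v t x/c^2$ cancel. The $y$ and $z$ terms are untouched by the boost, so the full interval is invariant.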
{ "domain": "physics.stackexchange", "id": 36827, "tags": "special-relativity, spacetime, metric-tensor" }
Fraction class in Java
Question: (See the next iteration.) I have this class for representing exact fractions. See what I have: Fraction.java: package net.coderodde.math; import java.util.ArrayList; import java.util.List; /** * This class implements a fraction consisting of a numerator and a denominator. * * @author Rodion "rodde" Efremov * @version 1.6 (Apr 29, 2016) */ public class Fraction extends Number { private final long numerator; private final long denominator; public Fraction(final long numerator, final long denominator) { if (denominator == 0) { throw new IllegalArgumentException("The denominator is zero."); } if (numerator == 0) { this.numerator = 0; this.denominator = 1; } else { final boolean isPositive = isPositive(numerator, denominator); final long[] data = eliminateCommonFactors(numerator, denominator); this.numerator = isPositive ? data[0] : -data[0]; this.denominator = data[1]; } } public Fraction plus(final Fraction other) { return new Fraction(this.numerator * other.denominator + this.denominator * other.numerator, this.denominator * other.denominator); } public Fraction minus(final Fraction other) { return new Fraction(this.numerator * other.denominator - this.denominator * other.numerator, this.denominator * other.denominator); } public Fraction multiply(final Fraction other) { return new Fraction(this.numerator * other.numerator, this.denominator * other.denominator); } public Fraction divide(final Fraction other) { return new Fraction(this.numerator * other.denominator, this.denominator * other.numerator); } public Fraction abs() { return new Fraction(Math.abs(numerator), Math.abs(denominator)); } public long getNumerator() { return numerator; } public long getDenominator() { return denominator; } public Fraction neg() { return new Fraction(-numerator, denominator); } @Override public String toString() { return "" + numerator + "/" + denominator; } @Override public boolean equals(Object o) { if (o == null) { return false; } if (!getClass().equals(o.getClass())) 
{ return false; } final Fraction other = (Fraction) o; return getNumerator() == other.getNumerator() && getDenominator() == other.getDenominator(); } @Override public int intValue() { return (int)(numerator / denominator); } @Override public long longValue() { return numerator / denominator; } @Override public float floatValue() { return ((float) numerator) / denominator; } @Override public double doubleValue() { return ((double) numerator) / denominator; } private boolean isPositive(final long numerator, final long denominator) { final boolean numeratorIsPositive = numerator > 0L; final boolean denominatorIsPositive = denominator > 0L; return (numeratorIsPositive && denominatorIsPositive) || (!numeratorIsPositive && !denominatorIsPositive); } private long[] eliminateCommonFactors(long numerator, long denominator) { numerator = Math.abs(numerator); denominator = Math.abs(denominator); if (numerator < denominator) { final List<Long> numeratorPrimeFactorList = factorize(numerator); for (final long primeFactor : numeratorPrimeFactorList) { if (denominator % primeFactor == 0) { denominator /= primeFactor; numerator /= primeFactor; } } } else { final List<Long> denominatorPrimeFactorList = factorize(denominator); for (final long primeFactor : denominatorPrimeFactorList) { if (numerator % primeFactor == 0) { numerator /= primeFactor; denominator /= primeFactor; } } } return new long[]{ numerator, denominator }; } private static List<Long> factorize(long value) { final List<Long> factorList = new ArrayList(); while (value % 2L == 0) { factorList.add(2L); value /= 2L; } for (long prime = 3L; prime <= (long) Math.sqrt(value); prime = nextPrime(prime)) { if (prime * prime > value) { break; } while (value % prime == 0L) { factorList.add(prime); value /= prime; } } if (value > 1) { factorList.add(value); } return factorList; } private static long nextPrime(final long prime) { long nextPrimeCandidate = prime + 2L; while (!isPrime(nextPrimeCandidate)) { nextPrimeCandidate += 2L; 
} return nextPrimeCandidate; } private static boolean isPrime(final long primeNumberCandidate) { final long upperBound = (long) Math.sqrt(primeNumberCandidate); for (long i = 3L; i < upperBound; i += 2) { if (primeNumberCandidate % i == 0L) { return false; } } return true; } } FractionTest.java: package net.coderodde.math; import org.junit.Test; import static org.junit.Assert.*; public class FractionTest { private static float DELTA = 0.001f; @Test public void testConstruct() { assertEquals(new Fraction(5, 3) , new Fraction(35, 21)); assertEquals(new Fraction(5, 3) , new Fraction(-35, -21)); assertEquals(new Fraction(-5, 3), new Fraction(-35, 21)); assertEquals(new Fraction(-5, 3), new Fraction(35, -21)); assertEquals(new Fraction(0, 1) , new Fraction(0, 100)); } @Test(expected = IllegalArgumentException.class) public void testThrowsOnZeroDenominator() { new Fraction(1, 0); } @Test public void testPlus() { // (7 / 3) + (6 / 5) = (35 / 15) + (18 / 15) = 53 / 15 Fraction a = new Fraction(7, 3); Fraction b = new Fraction(6, 5); assertEquals(new Fraction(53, 15), a.plus(b)); assertEquals(new Fraction(53, 15), b.plus(a)); a = new Fraction(-7, 3); b = new Fraction(6, -5); assertEquals(new Fraction(-53, 15), a.plus(b)); a = new Fraction(7, 3); b = new Fraction(6, -5); // (7 / 3) - (6 / 5) = (35 / 15) - (18 / 15) = 17 / 15 assertEquals(new Fraction(17, 15), a.plus(b)); } @Test public void testMinus() { // (7 / 3) - (6 / 5) = (35 / 15) - (18 / 15) = 17 / 15 Fraction a = new Fraction(7, 3); Fraction b = new Fraction(6, 5); assertEquals(new Fraction(17, 15), a.minus(b)); assertEquals(new Fraction(17, -15), b.minus(a)); assertEquals(new Fraction(-17, 15), b.minus(a)); } @Test public void testMultiply() { Fraction a = new Fraction(3, 7); Fraction b = new Fraction(5, 3); assertEquals(new Fraction(5, 7), a.multiply(b)); b = new Fraction(-5, 3); assertEquals(new Fraction(-5, 7), a.multiply(b)); assertEquals(new Fraction(5, -7), a.multiply(b)); } @Test public void testDivide() { // 
(2/9) / (6/4) = (2/9) * (2/3) = 4 / 27 Fraction a = new Fraction(2, 9); Fraction b = new Fraction(6, 4); assertEquals(new Fraction(4, 27), a.divide(b)); assertEquals(new Fraction(-4, -27), a.divide(b)); } @Test public void testAbs() { assertEquals(new Fraction(2, 4), new Fraction( 1, 2).abs()); assertEquals(new Fraction(2, 4), new Fraction(-1, 2).abs()); assertEquals(new Fraction(2, 4), new Fraction( 1, -2).abs()); assertEquals(new Fraction(2, 4), new Fraction(-1, -2).abs()); } @Test public void testGetNumerator() { assertEquals(3, new Fraction(6, 4) .getNumerator()); assertEquals(3, new Fraction(3, 2) .getNumerator()); assertEquals(3, new Fraction(9, 6) .getNumerator()); assertEquals(15, new Fraction(15, 11).getNumerator()); } @Test public void testGetDenominator() { assertEquals(2, new Fraction(6, 4) .getDenominator()); assertEquals(2, new Fraction(3, 2) .getDenominator()); assertEquals(2, new Fraction(9, 6) .getDenominator()); assertEquals(11, new Fraction(15, 11).getDenominator()); } @Test public void testToString() { assertEquals("3/2" , new Fraction(6 , 4) .toString()); assertEquals("3/2" , new Fraction(3 , 2) .toString()); assertEquals("3/2" , new Fraction(9 , 6) .toString()); assertEquals("15/11" , new Fraction(15 , 11) .toString()); assertEquals("-15/11", new Fraction(-15, 11) .toString()); assertEquals("-15/11", new Fraction(15 , -11).toString()); assertEquals("15/11" , new Fraction(-15, -11).toString()); assertEquals("0/1" , new Fraction(0, -123) .toString()); } @Test public void testIntValue() { assertEquals(0, new Fraction(0, 4).intValue()); assertEquals(0, new Fraction(1, 4).intValue()); assertEquals(0, new Fraction(2, 4).intValue()); assertEquals(0, new Fraction(3, 4).intValue()); assertEquals(1, new Fraction(4, 4).intValue()); assertEquals(1, new Fraction(5, 4).intValue()); assertEquals(1, new Fraction(6, 4).intValue()); assertEquals(1, new Fraction(7, 4).intValue()); assertEquals(0, new Fraction(-0, 4).intValue()); assertEquals(0, new Fraction(-1, 
4).intValue()); assertEquals(0, new Fraction(-2, 4).intValue()); assertEquals(0, new Fraction(-3, 4).intValue()); assertEquals(-1, new Fraction(-4, 4).intValue()); assertEquals(-1, new Fraction(-5, 4).intValue()); assertEquals(-1, new Fraction(-6, 4).intValue()); assertEquals(-1, new Fraction(-7, 4).intValue()); assertEquals(4, new Fraction(-17, -4).intValue()); } @Test public void testLongValue() { assertEquals(0L, new Fraction(0, 4).longValue()); assertEquals(0L, new Fraction(1, 4).longValue()); assertEquals(0L, new Fraction(2, 4).longValue()); assertEquals(0L, new Fraction(3, 4).longValue()); assertEquals(1L, new Fraction(4, 4).longValue()); assertEquals(1L, new Fraction(5, 4).longValue()); assertEquals(1L, new Fraction(6, 4).longValue()); assertEquals(1L, new Fraction(7, 4).longValue()); assertEquals(0L, new Fraction(-0, 4).longValue()); assertEquals(0L, new Fraction(-1, 4).longValue()); assertEquals(0L, new Fraction(-2, 4).longValue()); assertEquals(0L, new Fraction(-3, 4).longValue()); assertEquals(-1L, new Fraction(-4, 4).longValue()); assertEquals(-1L, new Fraction(-5, 4).longValue()); assertEquals(-1L, new Fraction(-6, 4).longValue()); assertEquals(-1L, new Fraction(-7, 4).longValue()); assertEquals(4L, new Fraction(-17, -4).longValue()); } @Test public void testFloatValue() { assertEquals(0.0f , new Fraction(0, 4).floatValue(), DELTA); assertEquals(0.25f, new Fraction(1, 4).floatValue(), DELTA); assertEquals(0.5f , new Fraction(2, 4).floatValue(), DELTA); assertEquals(0.75f, new Fraction(3, 4).floatValue(), DELTA); assertEquals(1.0f , new Fraction(4, 4).floatValue(), DELTA); assertEquals(1.25f, new Fraction(5, 4).floatValue(), DELTA); assertEquals(1.5f , new Fraction(6, 4).floatValue(), DELTA); assertEquals(1.75f, new Fraction(7, 4).floatValue(), DELTA); assertEquals(0.0f , new Fraction(-0, 4).floatValue(), DELTA); assertEquals(-0.25f, new Fraction(-1, 4).floatValue(), DELTA); assertEquals(-0.5f , new Fraction(-2, 4).floatValue(), DELTA); 
assertEquals(-0.75f, new Fraction(-3, 4).floatValue(), DELTA); assertEquals(-1.0f , new Fraction(-4, 4).floatValue(), DELTA); assertEquals(-1.25f, new Fraction(-5, 4).floatValue(), DELTA); assertEquals(-1.5f , new Fraction(-6, 4).floatValue(), DELTA); assertEquals(-1.75f, new Fraction(-7, 4).floatValue(), DELTA); assertEquals(4.25f, new Fraction(-17, -4).floatValue(), DELTA); } @Test public void testDoubleValue() { assertEquals(0.0 , new Fraction(0, 4).doubleValue(), DELTA); assertEquals(0.25, new Fraction(1, 4).doubleValue(), DELTA); assertEquals(0.5 , new Fraction(2, 4).doubleValue(), DELTA); assertEquals(0.75, new Fraction(3, 4).doubleValue(), DELTA); assertEquals(1.0 , new Fraction(4, 4).doubleValue(), DELTA); assertEquals(1.25, new Fraction(5, 4).doubleValue(), DELTA); assertEquals(1.5 , new Fraction(6, 4).doubleValue(), DELTA); assertEquals(1.75, new Fraction(7, 4).doubleValue(), DELTA); assertEquals(0.0 , new Fraction(-0, 4).doubleValue(), DELTA); assertEquals(-0.25, new Fraction(-1, 4).doubleValue(), DELTA); assertEquals(-0.5 , new Fraction(-2, 4).doubleValue(), DELTA); assertEquals(-0.75, new Fraction(-3, 4).doubleValue(), DELTA); assertEquals(-1.0 , new Fraction(-4, 4).doubleValue(), DELTA); assertEquals(-1.25, new Fraction(-5, 4).doubleValue(), DELTA); assertEquals(-1.5 , new Fraction(-6, 4).doubleValue(), DELTA); assertEquals(-1.75, new Fraction(-7, 4).doubleValue(), DELTA); assertEquals(4.25, new Fraction(-17, -4).doubleValue(), DELTA); } } Any critique is much appreciated. Answer: You can make a lot of simplifications. 
Non-zero numbers with the same sign Currently, you have a method that tests whether two given non-zero long values have the same sign with the following: private boolean isPositive(final long numerator, final long denominator) { final boolean numeratorIsPositive = numerator > 0L; final boolean denominatorIsPositive = denominator > 0L; return (numeratorIsPositive && denominatorIsPositive) || (!numeratorIsPositive && !denominatorIsPositive); } This can be written more concisely with: private boolean isPositive(final long numerator, final long denominator) { return numerator > 0 == denominator > 0; } You don't need the L suffix. It is guaranteed that the check will be done on long values due to binary numeric promotion. JLS section 15.20.1, Numerical Comparison Operators <, <=, >, and >=: Binary numeric promotion is performed on the operands (§5.6.2). and JLS section 5.6.2: Otherwise, if either operand is of type long, the other is converted to long. So, rest assured. You can simply check whether the fact that both numbers are greater than 0 is the same: if they are both greater than or both lower than 0, then the result will be true. Note that both numbers can't be equal to 0 since that case was already handled. As such, you may not even need that method and can directly have: final boolean isPositive = numerator > 0 == denominator > 0; Simplifying a fraction The biggest complication is your method to simplify a fraction. You currently have a complicated way of determining the prime factors when you can do it in a much simpler way. Just calculate the greatest common divisor of the numerator and the denominator: private static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); } Then your code in the constructor simply becomes: final boolean isPositive = numerator > 0 == denominator > 0; final long gcd = gcd(numerator, denominator); this.numerator = isPositive ? Math.abs(numerator / gcd) : -Math.abs(numerator / gcd); this.denominator = Math.abs(denominator / gcd); All of your tests still pass with this change. toString() Small nitpick, in the following: @Override public String toString() { return "" + numerator + "/" + denominator; } you don't need the empty string. Just have: @Override public String toString() { return numerator + "/" + denominator; } No serialVersionUID You are extending Number, which is serializable; this makes your class serializable also. As such, you should define a serial version UID for your class. Final and immutable classes Do you intend to subclass that class in the future? It doesn't look like a good idea to do it. I would make that class final to make it impossible, just like the built-in Integer or Long. The fact that the class is immutable is a very good idea. Static factories Consider creating a pool of common fractions, like ONE or ZERO. Then you can create static factories to return Fraction instances instead of using the constructor directly. Typically, this is done by introducing a method valueOf(...) that will return the instance of Fraction. You can take inspiration from Integer.valueOf or BigDecimal.valueOf. The constructor is then made private so that the caller needs to go through that method. As an example you could create two public constants for zero and one: public static final Fraction ZERO = new Fraction(0, 1); public static final Fraction ONE = new Fraction(1, 1); Then, you can reuse them in the static factory: public static Fraction valueOf(final long numerator, final long denominator) { if (denominator == 0) { throw new IllegalArgumentException("The denominator is zero."); } if (numerator == 0) { return ZERO; } else if (numerator == denominator) { return ONE; } return new Fraction(numerator, denominator); } This avoids instantiating new fractions by reusing existing ones.
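For comparison, the same gcd-based reduction (sign pushed into the numerator, common factors divided out) can be sketched in Python; this is a hypothetical helper mirroring the constructor logic discussed in the answer, not code from the post:

```python
from math import gcd

def reduce_fraction(num, den):
    # Normalize: sign goes into the numerator, gcd(|num|, |den|) is divided out.
    if den == 0:
        raise ValueError("The denominator is zero.")
    if num == 0:
        return 0, 1
    sign = 1 if (num > 0) == (den > 0) else -1
    num, den = abs(num), abs(den)
    g = gcd(num, den)
    return sign * num // g, den // g

# e.g. reduce_fraction(35, 21) -> (5, 3), reduce_fraction(35, -21) -> (-5, 3)
```

This reproduces the four sign cases exercised by the test suite above with far less machinery than prime factorization.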
{ "domain": "codereview.stackexchange", "id": 19771, "tags": "java, rational-numbers" }
Paradox in topological phase of SSH model
Question: Consider the SSH model, i.e. the dimerized tight-binding model with Hamiltonian $$H = \sum_i (t+\delta t) c^\dagger_{Ai} c_{Bi} + (t-\delta t) c_{A(i+1)}^\dagger c_{Bi} + \text{h.c.}.$$ This describes electrons in a crystal where the transition amplitudes between adjacent atoms alternate between $t+\delta t$ and $t-\delta t$. The unit cell is labeled by $n$, and the indices $A$ and $B$ indicate the two states in each of these cells. Applying the Berry phase for two-state systems, one finds that the phase picked up by adiabatic transport around a Bloch band in this model is zero if $\delta t > 0$ and $\pi$ if $\delta t < 0$. This phase has apparently been observed in experiments, so it's physical. I am really confused how there can be a difference depending on the sign of $\delta t$. If we just shift over the unit cells, then $t+ \delta t$ and $t-\delta t$ swap places, so $\delta t$ changes sign. How could this possibly change the Berry phase? Answer: I think what is observed is the two phases difference, not each one separately. for example take look at this one: Direct measurement of the Zak phase in topological Bloch bands As you said, the Zak phase depends on the unit cell you choose, so it can not be physical. But the difference is physical in the sense that if you fix your unit cell and start with one system, say $\delta t >0$, and start to decrease it until $\delta t<0$, Zak phase will change $\pm \pi$ when $\delta t =0$. So these two phases are topologically different which is a physical statement. As is noted in the aforementioned article: We point out that the Zak phase of each dimerization is a gauge dependent quantity, i.e. it depends on the choice of origin of the unit cell, however, the difference of Zak phases of the two dimerizations is uniquely defined
{ "domain": "physics.stackexchange", "id": 30512, "tags": "condensed-matter, topological-phase, tight-binding" }
Function of data pin(s) in a single RAM chip
Question: Consider a very simple RAM chip that has 20 address pins and 4 data pins. I would like to know the function of the 4 data pins when only 1 data pin is enough. What do I mean by 1 data pin being enough? If an address is applied to the address pins, a memory cell is selected (which can only have value 0 or 1 in a very basic RAM). Isn't it counter-intuitive to send a 0 bit using 4 data pins when 1 data pin is enough? So why would the RAM have 4 data pins in this case? If it's a memory system that combines multiple RAM chips, I understand having multiple data pins, but I don't understand it for a single RAM chip. NB: Something might be missing from my base understanding because I've been struggling to understand how 4 data pins are necessary. Answer: Each address in this RAM chip stores 4 bits. For example, you can write data 0100 to address 01010101010101010101 and then write data 0011 to address 00000000001111111111 and then read address 01010101010101010101 and you get the data 0100 again. You can view it as 4 1-bit RAM chips in parallel. Each address selects a group of 4 memory cells. Each address has its own separate 4 bits. It does not read 4 addresses at once. The second data bit from the first address is not the first data bit from the second address. If you want 8-bit memory, you can combine 8 1-bit chips as you said, or you can use 2 4-bit chips, or 1 8-bit chip, and the circuit designer usually finds it more convenient and cheaper if there are fewer chips.
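The answer's read/write example can be mimicked with a toy software model (purely illustrative): each 20-bit address owns its own 4-bit value, independently of every other address:

```python
# Toy model of a 2^20-address x 4-bit RAM: every address stores its own nibble.
ram = {}  # sparse storage; unwritten addresses read back as 0

def write(addr, nibble):
    ram[addr & 0xFFFFF] = nibble & 0xF  # mask to 20 address bits, 4 data bits

def read(addr):
    return ram.get(addr & 0xFFFFF, 0)

write(0b01010101010101010101, 0b0100)
write(0b00000000001111111111, 0b0011)
assert read(0b01010101010101010101) == 0b0100  # first address kept its own 4 bits
```

Writing to the second address did not disturb the first, which is the "4 1-bit chips in parallel" picture from the answer.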
{ "domain": "cs.stackexchange", "id": 21454, "tags": "computer-architecture" }
I found the matrices but can't find the inverse kinematics angles
Question: I need to find the inverse and forward kinematics of a Mitsubishi RV-M2 as homework. I found the forward kinematics part, but I got stuck on the inverse kinematics. The teacher said we can assume that the wrist joints are not moving (to make the equations easier, I guess). This is why I thought theta4 (in T4) and theta5 (in T5) should be 0. Here is my MATLAB code:

```matlab
syms t1 t2 t3 d1 a1 a2 a3 d5 px py pz r1 r2 r3 r4 r5 r6 r7 r8 r9  % symbolic variables
T1=[cos(t1) -sin(t1) 0 0; sin(t1) cos(t1) 0 0; 0 0 1 d1; 0 0 0 1;];
T2=[cos(t2) -sin(t2) 0 a1; 0 0 -1 0; sin(t2) cos(t2) 0 0; 0 0 0 1;];
T3=[cos(t3) -sin(t3) 0 a2; sin(t3) cos(t3) 0 0; 0 0 1 0; 0 0 0 1;];
T4=[0 -1 0 a3; 1 0 0 0; 0 0 1 0; 0 0 0 1;];
T5=[1 0 0 0; 0 0 -1 -d5; 0 1 0 0; 0 0 0 1;];
Tg=[r1 r2 r3 px; r4 r5 r6 py; r7 r8 r9 pz; 0 0 0 1;];

left = inv(T1) * Tg;
left = left(1:4,4);
left = simplify(left)

right = T2*T3*T4*T5;
right = right(1:4,4);
right = simplify(right)
```

This gives us a set of equations (shown as an image in the original post). I find t1 using these, and the results match the forward kinematics equations. But I couldn't find t2 and t3. How can I do that? Is there a formula or something? Answer: I don't want to just give the answer... try squaring the $x$ and $z$ terms (after moving $a_1$ to the left) and see where that gets you.
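To see what the hint is pointing at without spoiling the specific solution: squaring and adding the position components of a planar two-link arm eliminates $t_2$ and leaves the law of cosines in $t_3$. A generic SymPy sketch (this is the standard elbow-arm pattern, not the exact RV-M2 equations, which depend on the DH parameters above):

```python
import sympy as sp

t2, t3, a2, a3 = sp.symbols('t2 t3 a2 a3', real=True)

# Generic planar two-link position (a1 already moved to the left-hand side)
x = a2 * sp.cos(t2) + a3 * sp.cos(t2 + t3)
z = a2 * sp.sin(t2) + a3 * sp.sin(t2 + t3)

# Squaring and adding eliminates t2, leaving the law of cosines in t3:
# x^2 + z^2 = a2^2 + a3^2 + 2*a2*a3*cos(t3)
s = x**2 + z**2
assert sp.simplify(s - (a2**2 + a3**2 + 2*a2*a3*sp.cos(t3))) == 0
```

From there one solves $\cos t_3$ from $x^2 + z^2$ and then recovers $t_2$ with an atan2-style expression.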
{ "domain": "robotics.stackexchange", "id": 1232, "tags": "kinematics" }
Measuring entanglement entropy using a stabilizer circuit simulator
Question: I'm trying to simulate stabilizer circuits using the Clifford tableau formalism, which lets you scale up to hundreds of qubits. What I want to do is find the entanglement entropy by splitting my quantum state (defined as a line of $N$ qubits) at the end of my quantum circuit, after applying gates and some measurements. In this paper, the authors calculate the entanglement entropy for a state defined by its stabilizers. In particular, I'm looking at Equations 10 (this is their "clipped gauge"), A16 and A17. If $\mathcal{S}$ is the set of stabilizers for the state, then the entropy is given by (Equation A16): $$S = |A| - \log_2 |\mathcal{S}_A|,$$ where $|A|$ is the size of the bipartition of the quantum state and $\mathcal{S}_A$ is the part of $\mathcal{S}$ which acts on $\bar{A}$ with the identity. I want to simulate my quantum circuit and calculate the entanglement entropy like they do in their paper, but I'm not sure what the easiest way to do so is. A lot of the tools for simulating stabilizer circuits aren't the most transparent to use. In particular, I'm trying to understand how to go from the tableau representation a lot of simulators output to the set of generators I need to calculate the entropy. Is there a simple procedure to go from the tableau representation to the entropy? I'm trying to think of how to implement this in code. For the actual simulator, I see there are a few options. I need measurements, so while Qiskit does offer Clifford simulation, I can't seem to do measurements with it. The others that offer a Python interface are: Stim, by Craig Gidney. Python CHP Stabilizer Simulator, also by Craig Gidney. CHP Sim, by the community for Q#. If anyone has experience with these and can explain how to go from the tableau representation to the calculation of the entropy, that would be great, since these simulators usually seem to be geared toward giving shots in the computational basis.
Answer: With the help of Craig Gidney and some others, I was able to pin down the procedure to calculate the entropy. Here are the steps. Create your circuit with a stabilizer simulator This can be done with whatever simulator you want. As Craig mentioned in his answer, Stim is a great tool for the job. The rest of this section will assume you're using Stim, but it's not required. Your code will look something like this:

```python
import stim
import numpy as np

# Define your circuit here
circuit = stim.TableauSimulator()
...

# Create the tableau representation
tableau = circuit.current_inverse_tableau() ** -1
zs = [tableau.z_output(k) for k in range(len(tableau))]
zs = np.array(zs)
```

What you get with tableau is a set of stim.PauliString objects, which are essentially your "stabilizers" and "destabilizers", to use the language of Aaronson's paper on page 4. For the purposes of the entropy, we only care about the stabilizers, which are given by the zs object defined here. Essentially, a quantum circuit starts in the product state $|0 \rangle^{\otimes N}$, and then gates transform the state. The idea is that the stabilizer generators for this initial state are: \begin{equation} g_1 = Z_1 \equiv Z \otimes \mathbb{1} \otimes \mathbb{1} \otimes \ldots \otimes \mathbb{1}, \\ g_2 = Z_2 \equiv \mathbb{1} \otimes Z \otimes \mathbb{1} \otimes \ldots \otimes \mathbb{1}, \end{equation} and so on until the end. If we have $N$ qubits, there will be exactly $N$ stabilizer generators. However, it's super important to note that these are not identified with particular qubits. For example, stabilizer $g_1$ is not identified with the first qubit (this tripped me up for a bit, so I wanted to note it). The way Stim stores the stabilizers is the following: $0$ means an identity, $1$ means an $X$ operator, $2$ means a $Y$ operator, and $3$ means a $Z$ operator. This is because of the binary notation.
To get the actual form of the tableau matrix, we need to make an $N \times 2N$ matrix, with the left $N \times N$ block being for the $X$ operators and the right block for the $Z$ operators. So you can just write a little function like this:

```python
def binaryMatrix(zStabilizers):
    """
    - Purpose: Construct the binary matrix representing the stabilizer states.
    - Inputs:
        - zStabilizers (array): The result of conjugating the Z generators on the initial state.
    - Outputs:
        - binaryMatrix (array of size (N, 2N)): An array that describes the location of the
          stabilizers in the tableau representation.
    """
    N = len(zStabilizers)
    binaryMatrix = np.zeros((N, 2*N))
    r = 0  # Row number
    for row in zStabilizers:
        c = 0  # Column number
        for i in row:
            if i == 3:  # Pauli Z
                binaryMatrix[r, N + c] = 1
            if i == 2:  # Pauli Y
                binaryMatrix[r, N + c] = 1
                binaryMatrix[r, c] = 1
            if i == 1:  # Pauli X
                binaryMatrix[r, c] = 1
            c += 1
        r += 1
    return binaryMatrix
```

Now, we're ready to calculate the entropy. Calculating the entropy of a cut Now that we have the matrix corresponding to the quantum state evolved through the circuit, we want to find the entanglement entropy. In this paper, the key equation is Equation A19, but the really helpful comment I found on this was Footnote 11 of this paper (page 10), which says how to do this numerically. I also communicated with one of the authors (Xiao Chen) of the other paper, and his comments were also quite helpful. Our system involves $N$ qubits, and now we want to find the entanglement entropy between two subsystems, which we will label $A$ and $B$. Equation A19 of the paper I referenced above tells us: \begin{equation} S_A = \text{rank}\left( \text{projection}_A \mathcal{S} \right) - N_A. \end{equation} In this equation, $N_A$ is the number of qubits in part $A$, $\mathcal{S}$ is the set of stabilizers (the binary matrix we got above), and the projection operator means we want to "truncate" the matrix so that it only describes the parts on $A$.
To do this, remember that our matrix starts off as $N \times 2N$. We now want to truncate it so that we don't care about the qubits in region $B$. Mathematically, this is what the projection operator does. It pretends everything in region $B$ is the identity, which in our matrix means those entries become zero. But a simpler way to deal with this is to just delete the columns needed to describe region $B$, since they won't play a role anyway. Numerically, the following function does the trick:

```python
def getCutStabilizers(binaryMatrix, cut):
    """
    - Purpose: Return only the part of the binary matrix that corresponds to the qubits
      we want to consider for a bipartition.
    - Inputs:
        - binaryMatrix (array of size (N, 2N)): The binary matrix for the stabilizer generators.
        - cut (integer): Location for the cut.
    - Outputs:
        - cutMatrix (array of size (N, 2*cut)): The binary matrix for the cut on the left.
    """
    N = len(binaryMatrix)
    cutMatrix = np.zeros((N, 2*cut))
    cutMatrix[:, :cut] = binaryMatrix[:, :cut]
    cutMatrix[:, cut:] = binaryMatrix[:, N:N+cut]
    return cutMatrix
```

This truncates our original $N \times 2N$ matrix into an $N \times 2N_A$ matrix, with everything else deleted. Now, the equation for the entropy simply requires us to compute the rank of this matrix and subtract off the number of qubits in region $A$. Numerically, you can do this using Gaussian elimination with Boolean variables (in other words, modulo-2 arithmetic), but you can also use just a plain old SVD over real variables to get the rank. Edit: I made a mistake in saying that a regular SVD calculation will work. For reference, I was using the matrix_rank function from NumPy. After comparing it with another approach, it seems like it mostly works, but is sometimes off. As such, I'd recommend doing Gaussian elimination, with something like this. After that, you should be good to go. The entropy is essentially a matrix rank computation.
Also note that you can use whatever stabilizer circuit simulator you like, as long as in the end you get out the stabilizer generators which you can then build your binary matrix from.
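The mod-2 Gaussian elimination recommended above can be sketched compactly with the XOR "linear basis" trick, packing each 0/1 row into a Python integer (this is one possible implementation, not the code the answer links to):

```python
def gf2_rank(matrix):
    """Rank over GF(2) of a 0/1 matrix.

    Each row is packed into an int and reduced against the pivots collected
    so far (cur = min(cur, cur ^ p) clears the pivot's leading bit when it
    is present); rows that survive as nonzero are linearly independent.
    """
    pivots = []
    for row in matrix:
        cur = 0
        for bit in row:               # pack the 0/1 row into an integer
            cur = (cur << 1) | int(bit)
        for p in pivots:              # reduce by the existing basis vectors
            cur = min(cur, cur ^ p)
        if cur:
            pivots.append(cur)
    return len(pivots)
```

With the functions from the answer, the entropy of a cut at site `cut` would then be `gf2_rank(getCutStabilizers(binaryMatrix(zs), cut)) - cut`, per Equation A19.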
{ "domain": "quantumcomputing.stackexchange", "id": 2607, "tags": "simulation, q#, entropy, stabilizer-state, stim" }
Linked List size in constant time or linear time
Question: The time complexity of getting the size of a linked list can differ between implementations, as far as I understand it. In the Boost C++ library one finds that the size() function can be constant time or linear time. I was wondering what the differences are between an implementation with a constant-time size() and one with a linear-time size(). Can anyone elaborate on the algorithmic differences? Answer: The key difference is that you keep a field tracking the size of the list. This field can be accessed in constant time, but it must be updated on every addition or removal. If you instead compute the size of the list by counting the elements, it will take linear time.
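Both strategies described in the answer can be shown side by side in a minimal sketch (illustrative, not Boost's implementation):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

class LinkedList:
    def __init__(self):
        self.head = None
        self._size = 0              # maintained on every insertion/removal

    def push_front(self, val):
        self.head = Node(val, self.head)
        self._size += 1             # the O(1)-size strategy pays here

    def size_constant(self):        # O(1): read the counter field
        return self._size

    def size_linear(self):          # O(n): walk the list and count nodes
        n, node = 0, self.head
        while node:
            n, node = n + 1, node.next
        return n
```

The trade-off: the counter costs a little extra work on every mutation (and, in C++, complicates O(1) `splice` between lists), while counting costs nothing on mutation but makes every `size()` call linear.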
{ "domain": "cstheory.stackexchange", "id": 231, "tags": "time-complexity" }
How to invoke long range correlations among the onsite energies in a 1D lattice theoretically?
Question: I am trying to find some literature which tells me how to prescribe the on site energies in a lattice such that they have a long range correlation. I want to generate separate sets of those energies randomly in each realization. Answer: Here's an outline of how you might achieve this numerically with a Monte Carlo method. Suppose you desire $\langle e_ie_j\rangle\approx f_{ij}$. Define some metric $U=\|\langle e_ie_j\rangle - f_{ij}\|$ to measure how 'close' you are to such a correlation. Step 1: Begin with a random array $\{e_i\}$. Step 2: For each i: perturb $e_i\rightarrow e_i+\delta$ with a random $\delta$ and evaluate the resultant change in $U$, call it $\Delta U$. You accept this change with probability $\min(1, \exp(-\Delta U/T))$, otherwise you reject it. Repeat Step 2 until $U$ becomes sufficiently small, and record the configuration $\{e_i\}$ that minimises $U$. Note the $T$ parameter above. When $T\to\infty$, all changes will be accepted (random walk) whereas when $T\to 0$ you have direct minimisation of $U$. Simulated annealing involves starting with a large $T$ and then gradually decreasing it over many iterations.
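A minimal NumPy sketch of this outline; the target correlator f, the linear cooling schedule, and the Frobenius-norm metric U are all illustrative assumptions, not a prescription:

```python
import numpy as np

def anneal_correlations(f, n_steps=20000, T0=1.0, delta=0.1, seed=0):
    """Metropolis search for onsite energies {e_i} whose products e_i e_j
    approximate a target correlation matrix f[i, j]."""
    rng = np.random.default_rng(seed)
    n = f.shape[0]
    e = rng.standard_normal(n)

    def U(e):
        # distance between the current correlations and the target
        return float(np.linalg.norm(np.outer(e, e) - f))

    u = U(e)
    best_e, best_u = e.copy(), u
    for step in range(n_steps):
        T = T0 * (1 - step / n_steps) + 1e-6          # simple cooling schedule
        i = rng.integers(n)
        trial = e.copy()
        trial[i] += delta * rng.standard_normal()     # perturb one site
        du = U(trial) - u
        if du < 0 or rng.random() < np.exp(-du / T):  # Metropolis acceptance
            e, u = trial, U(trial)
            if u < best_u:
                best_e, best_u = e.copy(), u
    return best_e, best_u
```

Each call with a different seed gives a separate random realization, which matches the requirement of generating independent sets of energies.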
{ "domain": "physics.stackexchange", "id": 41419, "tags": "quantum-mechanics, condensed-matter" }
Showing invariance of tensor trace under $\rm SO(N)$
Question: If $O$ is an element of $SO(N)$, then $O$ is an $N\times N$ matrix satisfying $O^TO=1$ and det$(O)=1$. Let tensor $T^{ij}$ be a representation of the group and let the trace be Tr$(T^{ij})=\delta^{ij}T^{ij}\equiv T$. To show the invariance of the trace under $SO(N)$ transformations, we write $$ T=\delta^{ij}T^{ij}\quad\longrightarrow\quad \delta^{ij}T'^{ij}=\delta^{ij}O^{il}O^{jm}T^{lm}=(O^T)^{li}\delta^{ij}O^{jm}T^{lm}=\delta^{lm}T^{lm}=T~.$$ (Note the small prime symbol on $T$ to the right of the arrow which signifies that it is transformed.) I want to make sure I understand the step where we insert $(O^T)^{li}$. From the definition of the transpose, it is obvious that $O^{il}=(O^T)^{li}$. Is that all that is being done here? Since $\delta^{ij}$ is not a matrix, I don't think it matters where it appears in the expression. However, since my source material (Zee's QFT) has moved $O^T$ to the other side of $\delta$, I want to make sure I'm not oversimplifying something about the algebra of the indices. Is it as simple as the identity $O^{il}=(O^T)^{li}$? Answer: Yeah, that's all there is to it, essentially. $\delta$ is a kind of matrix: it is a tensor with two indices, so a square matrix. The whole point of this step is to put an expression with contracted indices into a matrix form, where matrices are multiplied. That's why the reordering happens: the author wants the last index of the previous symbol to be contracted with the first index of the current one, to interpret the contraction as matrix multiplication. This condensed notation can be easier to work with, that is all.
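If it helps, the index identity can also be checked numerically. The snippet below (my own illustration, not from Zee) rotates a rank-2 tensor with a random $SO(2)$ element and confirms the trace is unchanged:

```python
import numpy as np

def rotate_tensor(O, T):
    """Apply T'^{ij} = O^{il} O^{jm} T^{lm}, the transformation in the question."""
    return np.einsum('il,jm,lm->ij', O, O, T)

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi)
O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a random SO(2) element
T = rng.standard_normal((2, 2))                   # an arbitrary rank-2 tensor

# delta_{ij} T'^{ij} equals delta_{ij} T^{ij}, as derived above
assert np.isclose(np.trace(rotate_tensor(O, T)), np.trace(T))
```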
{ "domain": "physics.stackexchange", "id": 72374, "tags": "tensor-calculus, group-theory, trace" }
Is this a good approach to replace mysqli_num_rows?
Question: Is this a good approach to replace mysqli_num_rows? $db is a PDO instance. .. $result = $db->query("SELECT * FROM users"); $result_set = $result->fetchAll(); $count = count($result_set); if($count>0){ //if $count>0 show something special while($row = array_shift($result_set)){ // my code } //if $count>0 show something special here too }else{ //some message } Answer: Yes, in order to know whether your query returned any rows, in a modern web application it is preferred to fetch the data in a variable first and then use this variable to see whether it's empty or not. In this regard, calling the count() function would be also superfluous. In PHP, an array is as good in the if statement as a boolean value. Also, foreach is generally used to iterate over arrays. $result = $db->query("SELECT * FROM users"); $result_set = $result->fetchAll(); if($result_set){ //if $count>0 show something special foreach($result_set as $row){ // my code } }else{ //some message } A few obvious notes: you should never select more rows than going to be used on a single web page you should never select the actual rows from the database only to count them in case you are working with a large dataset (in a command line utility for example), it is unwise to fetch all the rows first. in this case other means of detecting whether your query returned any rows have to be used.
{ "domain": "codereview.stackexchange", "id": 39421, "tags": "php, mysql, pdo" }
Can I break the degeneracy of energy eigenstates if I know which irrep of the group they transform as?
Question: Say I have a quantum system with a symmetric potential, whose symmetry is described by a group $G$. I know the character table of $G$, its irreducible representations, can work out the projection operators $\Pi_j$ etc. With imaginary time evolution, I can find the spatial part of the energy eigenstates $\phi_{E_i}$. If there are degeneracies, however, what I will get is the sum of the degenerate eigenstates at the same $E_i$: $\psi_{E_i} = \sum_j \phi^{(j)}_{E_i}$. Question: if I know the group, the irrep, etc., can I decompose/break $\psi$ into the individual $\phi^{(j)}_{E_i}$? Reason for the question: From this answer: Suppose there is a group of transformations $G$. Then it acts on the Hilbert space by some set of unitary transformations $\mathcal{O}$. The Hilbert space is therefore a representation of the group $G$, and it splits up into subspaces of irreducible representations (irreps). The important thing is that if $|\psi\rangle$ and $|\phi\rangle$ are in the same irrep iff you can get from one to the other by applying operators $\mathcal{O}$. So another way of phrasing my question would be: can I somehow get $\mathcal{O}$, such that $\phi^{(2)}_{E_i} = \mathcal{O}\phi^{(1)}_{E_i}$ and $\psi = \sum_j \phi^{(j)}_{E_i} = \sum_j \mathcal{O}^j\phi^{(1)}_{E_i}$ ? Example The 2D quantum harmonic oscillator, looking at the states with energy $E = 2\hbar\omega$ which are $\psi_1(x,y) = \phi_0(x)\phi_1(y)$ and $\psi_2(x,y) = \phi_0(y)\phi_1(x)$ where $0$ is the ground state and $1$ the first excited state. I know that $|\psi_1|^2$ and $|\psi_2|^2$ should look like this: but, from my code, I get the spatial distribution of the "overall" energy level $E=2$ so I get $|\psi_1+\psi_2|^2$: The group and irrep information about the 2D harmonic oscillator is from here: The set of states with total number $m$ of excitation span the irrep $(m,0)$ of $SU(2)$. Thus the degeneracy is the dimension of this irrep [...] this is just $m+1$. 
With this info, can I get $\psi_1$ and $\psi_2$ from $\psi_1 + \psi_2$? EDIT: To clarify, I want to decompose $\psi$ into non-degenerate eigenstates, not $|\psi|^2$. I am just plotting $|\psi|^2$ instead of $\psi$ for simplicity. Answer: It is not possible to “break the degeneracy” by combining degenerate states: any unitary transformation inside the degenerate subspace will produce a different set of eigenstates of $H$, but they will all have the same eigenvalues. This does not prevent you from organizing the states in your (degenerate) subspace into irreps of some group. Presumably the elements of this group commute with the Hamiltonian so you will get legitimate eigenstates which also carry group labels. The reason you might want to use the group is that some perturbation might lift the degeneracy so that states in different irreps have different eigenvalues (or at least some irreps have different eigenvalues, as there is no guarantee that all degeneracies are lifted). In the example you give, you start with $\phi_0(x)\phi_1(y)$ and $\phi_0(y)\phi_1(x)$. Imagine you were to add an interaction between the two particles - say some sort of potential which would be of the type $\kappa (x-y)^2$. A perturbative treatment of this interaction would depend, to first order, on the average of $(x-y)^2$. This perturbation is invariant under $S_2$, the group of permutations of the two coordinates. This group has two 1-dimensional representations and (no surprise) the irreps are spanned (up to normalization) by \begin{align} \Psi_{\pm}(x,y)&=\phi_0(x)\phi_1(y)\pm \phi_1(x)\phi_0(y)\, ,\\ &\sim e^{-\lambda(x^2+y^2)}(y\pm x) \end{align} The states $\Psi_{\pm}(x,y)$ are still degenerate under the original 2D harmonic oscillator Hamiltonian but, in a perturbative treatment, the effect of the interaction $(x-y)^2$ would depend, to first order, on the average of $(x-y)^2$ evaluated with either $\Psi_{\pm}(x,y)$, and this average is different for the two combinations.
Moreover, it is easy to check using parity that \begin{align} \int dx\,dy\, \Psi_-(x,y)(x-y)^2\Psi_+(x,y)=0\, . \end{align} Thus, by organizing your state according to irreps of $S_2$, which leaves your perturbation invariant, the degeneracy is lifted by the perturbation and the basis states for the irreps of $S_2$ are eigenstates of the Hamiltonian plus perturbation, at least to first order. To understand the splitting of terms there is a massive paper (translated in English) by Hans Bethe Bethe, Hans A. "Splitting of terms in crystals." Selected Works Of Hans A Bethe: (With Commentary). 1997. 1-72. (originally Ann.Physics 3 p.133 (1929)) and the other canonical source is Tinkham, M., 2003. Group theory and quantum mechanics. Courier Corporation.
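Both claims above are easy to verify numerically. The sketch below (my own; the grid size and the choice $\hbar=m=\omega=1$ are arbitrary) checks that the cross matrix element of $(x-y)^2$ vanishes while the diagonal averages differ:

```python
import numpy as np

# 1D oscillator eigenfunctions (hbar = m = omega = 1)
x = np.linspace(-6.0, 6.0, 201)
dx = x[1] - x[0]
phi0 = np.pi ** -0.25 * np.exp(-x**2 / 2)
phi1 = np.sqrt(2.0) * np.pi ** -0.25 * x * np.exp(-x**2 / 2)

# Symmetric / antisymmetric combinations Psi_pm on a 2D grid, normalized
Pp = np.outer(phi0, phi1) + np.outer(phi1, phi0)
Pm = np.outer(phi0, phi1) - np.outer(phi1, phi0)
Pp /= np.sqrt((Pp**2).sum() * dx * dx)
Pm /= np.sqrt((Pm**2).sum() * dx * dx)

X, Y = np.meshgrid(x, x, indexing='ij')
V = (X - Y) ** 2                            # the perturbation, kappa = 1

off_diag = (Pm * V * Pp).sum() * dx * dx    # vanishes by parity
avg_plus = (Pp * V * Pp).sum() * dx * dx    # first-order shifts differ,
avg_minus = (Pm * V * Pm).sum() * dx * dx   # so the degeneracy is lifted
```

Analytically $\langle(x-y)^2\rangle = 2 \mp 1$ for $\Psi_\pm$, so the two first-order shifts come out as 1 and 3.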
{ "domain": "physics.stackexchange", "id": 69128, "tags": "quantum-mechanics, symmetry, group-theory, representation-theory" }
Proof that a force applied to the center of mass is the same as force applied off-center
Question: There is a similar question that gives a bit of an explanation, but little mathematical proof here: force applied not on the center of mass I would like mathematical proof that shows that the velocity of a rigid body when a force is applied to the center of mass is equal to the velocity of the same rigid body when the same force is applied to a point on the body other than the center of mass. Thanks in advance! Answer: Newton's second law $F=ma$ does not depend on the point of application of force because this law is valid only for point particles. Now to apply it to rigid bodies we must consider them as a system of particles. Let a rigid body be made up of $N$ particles of mass $m_1,m_2,\cdots,m_N$. Now apply a force $f$ to some $i_{th}$ particle. All other particles will also exert internal forces on each other. Therefore, the second law for all particles is \begin{align}f_1^{int}&=\frac{dp_1}{dt}\\ f_2^{int}&=\frac{dp_2}{dt}\\ \cdots\\ f+f_i^{int}&=\frac{dp_i}{dt}\\ \cdots\\ f_N^{int}&=\frac{dp_N}{dt}\end{align} Adding all these $$\sum_j f_j^{int}+f=\sum_j\frac{dp_j}{dt}$$ By the third law $$\sum_j f_j^{int}=0$$ Thus $$f=\frac{dP}{dt} \text{where } P=\sum_jp_j$$ Now if you apply the same force $f$ to the center of mass of the body, you get the same equation for total momentum. $$f=\frac{dP}{dt}$$ Therefore both situations will have identical solution for the total momentum and hence for the linear velocity of the center of mass of the rigid body.
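As a toy numerical illustration of this conclusion (a simple planar rigid-body integrator of my own, not part of the proof): the same force produces the same center-of-mass velocity whether applied at the COM or off-center; only the angular velocity differs, driven by the torque $\vec r \times \vec F$.

```python
def integrate(F, r, M=2.0, I=0.5, dt=1e-3, steps=1000):
    """Integrate a constant force F applied at offset r from the COM of a
    planar rigid body. Linear motion obeys F = M dv/dt independently of r;
    r only enters through the torque tau = r x F."""
    vx = vy = omega = 0.0
    tau = r[0] * F[1] - r[1] * F[0]     # z-component of r x F
    for _ in range(steps):
        vx += F[0] / M * dt
        vy += F[1] / M * dt
        omega += tau / I * dt
    return (vx, vy), omega
```

Running it with the force at the center versus at an offset gives identical COM velocities but different spins.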
{ "domain": "physics.stackexchange", "id": 14312, "tags": "forces, rigid-body-dynamics" }
How to find what values are assigned to labels that were encoded using LabelEncoder?
Question: places = ['India','France','India','Australia','Australia','India','India','France'] Here places is a DataFrame Series; now how can I find which label was encoded with which value, e.g. India = 0, Australia = 1, France = 2? This is OK for a few labels, but what if there are hundreds of labels available in a huge dataset? Answer: Use the classes_ attribute of your LabelEncoder. For example:

from sklearn import preprocessing

le = preprocessing.LabelEncoder()
le.fit(places)
print(le.classes_)

The index of the label in le.classes_ is the encoded value of the label. See another example here.
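For intuition, LabelEncoder's mapping is essentially "sort the unique labels and use their indices". A minimal pure-Python stand-in (my own sketch, not scikit-learn code) makes the mapping and its inverse explicit:

```python
places = ['India', 'France', 'India', 'Australia',
          'Australia', 'India', 'India', 'France']

classes = sorted(set(places))                  # plays the role of le.classes_
encode = {label: i for i, label in enumerate(classes)}

codes = [encode[p] for p in places]            # like le.transform(places)
decoded = [classes[c] for c in codes]          # like le.inverse_transform(codes)
```

This scales to any number of labels, since the lookup table is built once.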
{ "domain": "datascience.stackexchange", "id": 4568, "tags": "machine-learning, python, scikit-learn" }
How can we infer the mass of SMBH in galaxies that are not active anymore?
Question: I know it is possible to infer the mass of a supermassive black hole (SMBH) by many methods, e.g., stellar orbits for our Galaxy, the iron line profile from the accretion disk, and probably other methods (perhaps from the spectrum of the disk radiation itself, which can be related to the central mass if it is assumed thermal in origin). What I don't know is: how can we infer the mass of an SMBH in galaxies that are not active anymore? Answer: I can think of two methods. Both rely on the dynamics of material surrounding the SMBH, which is affected up to a distance of the order of the "sphere of influence". This is the region where the BH dominates the dynamics as compared to the enclosed mass of the galaxy. The sphere of influence is: $$ R_{\mathrm{infl}} \equiv \frac{G M_{\mathrm{SMBH}}}{\sigma_{\mathrm{bulge}}^2} \gg R_{\mathrm{horizon}} \approx R_{\mathrm{Schwarzschild}} \equiv \frac{G M_{\mathrm{SMBH}}}{c^2} $$ While typically $\sigma_{\mathrm{bulge}} \approx 250~\mathrm{km\,s^{-1}}$, it is well known that $c \approx 300000~\mathrm{km\,s^{-1}}$. This means that the influence of a BH can be felt much further away than its event horizon, which is where the accretion takes place. In fact the ratio between the two distances is about a million. Exploiting this fact, astronomers have been using two methods to probe extragalactic*, quiescent SMBHs: The first method is to observe CO lines (radio astronomy) to trace gas circling the BH. The gas does not need to be near the event horizon, which is much smaller than the sphere of influence. In fact CO observations rely on the gas being relatively dense but cold. Essentially the speed at which the CO gas rotates is a (quadrature) sum of the declining component due to the stellar mass plus the Keplerian component due to the SMBH mass. The second method is completely analogous, but relies on measuring the unresolved kinematics of the stars surrounding the SMBH.
This can be done in various bands, but most authors use visible line absorptions to measure the velocity and velocity dispersion (and other moments) of the stars in the regions surrounding the BH. If the kinematics cannot be explained without including a point mass in the middle, then you are done. See this work, for a comparison of the two methods. *(extragalactic means outside of our own galaxy)
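The order-of-magnitude ratio quoted above follows directly from the round numbers in the answer:

```python
# R_infl / R_Schwarzschild = (c / sigma_bulge)^2 for a given SMBH mass,
# since both radii share the common factor G * M_SMBH.
sigma_bulge = 250.0     # km/s, typical bulge velocity dispersion
c = 300000.0            # km/s
ratio = (c / sigma_bulge) ** 2
```

So the sphere of influence is roughly $1.4\times10^6$ times larger than the horizon, i.e. about a million, as stated.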
{ "domain": "astronomy.stackexchange", "id": 268, "tags": "supermassive-black-hole, galaxy-center" }
X-ray imaging and diamond
Question: Why doesn't diamond show up in X-ray imaging? Diamond is the hardest substance known, and as we know, X-ray radiation is produced when a cathode ray hits a very high atomic mass target. But I am amazed: why doesn't diamond show up in X-ray imaging? Answer: Exactly how, and to what extent, diamond shows up in X-rays depends on factors such as the type of X-ray apparatus, the size of the diamond, its orientation and so on. Carbon has an atomic mass of 12. That's fairly low. Diamond exhibits a bunch of unique properties such as extreme hardness, high thermal conductivity and chemical inertness. In terms of X-ray windows another property of diamond is of crucial importance: diamond consists of carbon, i.e. diamond is a low-Z material which is transparent to X-rays. ... High transmission coefficients even at low X-ray energies can be achieved by using thin diamond membranes mounted on circular silicon support disks. The thickness of these membranes can be as low as 1 µm From http://www.diamond-materials.com/EN/products/cvd_for_xray/xray_windows.htm However, you can use X-rays to make useful images of diamonds: X-ray absorption is related to density, and diamond is about 3.5 times as dense as water (3,500 kg/m$^3$, or 3.5 g/cm$^3$).
{ "domain": "physics.stackexchange", "id": 9131, "tags": "x-rays, imaging, diamond" }
Doubt about the Gaussian state
Question: I am reading an article that makes an application using the Gaussian state. The author of the article writes the Gaussian state as follows: $$\psi(q) = [2\pi(\Delta q)^2]^{-\frac{1}{4}}e^{-\frac{q^2}{4(\Delta q)^2}}e^{i\frac{\tilde pq}{\hslash}}$$ and thereby we have the following relationships for its moments and uncertainties: $$\langle Q\rangle = \tilde q$$ $$\langle P\rangle = \tilde p$$ $$\Delta Q = \sqrt{\langle Q^2 \rangle - \langle Q \rangle^2}=\Delta q$$ $$\Delta P= \sqrt{\langle P^2 \rangle - \langle P \rangle^2} = \frac{\hslash}{2\Delta q}$$ I was a little confused, because I studied the Gaussian state from Cohen-Tannoudji's book Quantum Mechanics, and in complement C3 (Root mean square deviations of two conjugate observables) of his book, the expression for the Gaussian state is as follows: $$\psi(q) = [2\pi(\Delta q)^2]^{-\frac{1}{4}}e^{-[\frac{q-\langle Q\rangle}{2(\Delta Q)}]^2}e^{i\frac{\langle P\rangle q}{\hslash}}$$ These two expressions for the Gaussian state are written a little differently from each other. I would like to know: what is the difference between these two? And is there any way to write the expression for the Gaussian state of Cohen-Tannoudji's book in the previous form from the article I was reading? Answer: In Freire's article, in Eq. (18), the author defines the "standard continuous-variable minimum-uncertainty Gaussian state" as \begin{equation} |\psi\rangle=\int_{-\infty}^{\infty}dq \psi(q-\bar{q})|q\rangle,\qquad\psi(q)=[2\pi(\Delta q)^2]^{-1/4} e^{-\frac{q^2}{4(\Delta q)^2}}e^{i\frac{\bar{p}q}{\hbar}} \end{equation} Just notice that the minimum-uncertainty Gaussian state is defined with $\psi(q-\bar{q})$ in the integral, but then next to the integral the author gives you the definition of $\psi(q)$ and $\textbf{not}$ of $\psi(q-\bar{q})$.
So you have that the minimum-uncertainty Gaussian state, in the position representation, reads \begin{equation} \psi(q-\bar{q})=[2\pi(\Delta q)^2]^{-1/4} e^{-\frac{(q-\bar{q})^2}{4(\Delta q)^2}}e^{i\frac{\bar{p}q}{\hbar}} \end{equation} This is exactly the same state as in Cohen-Tannoudji's book. Cohen-Tannoudji defines the minimum-uncertainty Gaussian state as $\psi(q)$ while Freire defines it as $\psi(q-\bar{q})$, simply because he wants to highlight the fact that the Gaussian is "shifted" by the term $\bar{q}$ from the origin.
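If useful, the stated moments of the shifted Gaussian can be verified numerically (my own sketch; $\hbar = 1$ and the values of $\bar q$ and $\Delta q$ are arbitrary; the phase factor has unit modulus and drops out of $|\psi|^2$):

```python
import numpy as np

q_bar, dq = 1.5, 0.7                      # arbitrary test values
q = np.linspace(-10.0, 10.0, 20001)
h = q[1] - q[0]

# |psi(q - q_bar)|^2 for the shifted Gaussian above
prob = (2 * np.pi * dq**2) ** -0.5 * np.exp(-(q - q_bar) ** 2 / (2 * dq**2))

norm = prob.sum() * h                                     # should be 1
mean_q = (q * prob).sum() * h / norm                      # should be q_bar
delta_q = np.sqrt(((q - mean_q) ** 2 * prob).sum() * h / norm)  # should be dq
```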
{ "domain": "physics.stackexchange", "id": 76607, "tags": "quantum-mechanics, quantum-information, heisenberg-uncertainty-principle, quantum-states" }
ROS master crashing due to XMLRPC error
Question: I restarted my robot, running ROS Kinetic on a Raspberry Pi, and I'm finding that the master node is immediately crashing on startup with the error:

auto-starting new master
process[master]: started with pid [7708]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 5dab8662-0754-11e8-a585-b827eb09ee17
process[rosout-1]: started with pid [7721]
started core service [/rosout]
load_parameters: unable to set parameters (last param was [/head_arduino/serial_node/baud=115200]): cannot marshal None unless allow_none is enabled
Traceback (most recent call last):
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/__init__.py", line 307, in main
    p.start()
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/parent.py", line 279, in start
    self.runner.launch()
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/launch.py", line 654, in launch
    self._setup()
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/launch.py", line 641, in _setup
    self._load_parameters()
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/launch.py", line 338, in _load_parameters
    r = param_server_multi()
  File "/usr/lib/python2.7/xmlrpclib.py", line 1006, in __call__
    return MultiCallIterator(self.__server.system.multicall(marshalled_list))
  File "/usr/lib/python2.7/xmlrpclib.py", line 1243, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib/python2.7/xmlrpclib.py", line 1596, in __request
    allow_none=self.__allow_none)
  File "/usr/lib/python2.7/xmlrpclib.py", line 1094, in dumps
    data = m.dumps(params)
  File "/usr/lib/python2.7/xmlrpclib.py", line 638, in dumps
    dump(v, write)
  File "/usr/lib/python2.7/xmlrpclib.py", line 660, in __dump
    f(self, value, write)
  File "/usr/lib/python2.7/xmlrpclib.py", line 719, in dump_array
    dump(v, write)
  File "/usr/lib/python2.7/xmlrpclib.py", line 660, in __dump
    f(self, value, write)
  File "/usr/lib/python2.7/xmlrpclib.py", line 741, in dump_struct
    dump(v, write)
  File "/usr/lib/python2.7/xmlrpclib.py", line 660, in __dump
    f(self, value, write)
  File "/usr/lib/python2.7/xmlrpclib.py", line 719, in dump_array
    dump(v, write)
  File "/usr/lib/python2.7/xmlrpclib.py", line 660, in __dump
    f(self, value, write)
  File "/usr/lib/python2.7/xmlrpclib.py", line 664, in dump_nil
    raise TypeError, "cannot marshal None unless allow_none is enabled"
TypeError: cannot marshal None unless allow_none is enabled
[rosout-1] killing on exit
[master] killing on exit

Looking in my ~/.ros/latest/latest/roslaunch-rae-8050.log I also see the lines:

[roslaunch][INFO] '2018-02-01 08:55:26': load_parameters starting ...
[roslaunch][ERROR] '2018-02-01 08:55:26': load_parameters: unable to set parameters (last param was [/head_arduino/serial_node/baud=115200]): cannot marshal None unless allow_none is enabled

What's causing this? My node /head_arduino/serial_node/baud=115200 is just a stock rosserial serial_node.py instance launched with:

<launch>
    <group ns="head_arduino">
        <node pkg="rosserial_python" type="serial_node.py" name="serial_node" output="screen">
            <param name="~port" value="/dev/ttyACM0" />
            <param name="~baud" value="115200" />
        </node>
    </group>
</launch>

Clearly, none of its parameters are None. Googling the error suggests it's a generic exception thrown by xmlrpclib when an XMLRPC endpoint receives a bad value, but obviously, that could mean anything here. Is this evidence that the Pi wasn't cleanly shut down and that data was corrupted somewhere, causing bad data to be loaded via the ROS parameter server? How would I fix or purge this bad data? How do I reset the parameter server so it loads the correct parameters from my launch files?
Edit: I was running everything from a single all.launch file that looks like:

<launch>
    <include file="$(find myrobot)/launch/serial_head.launch" />
    <include file="$(find myrobot)/launch/serial_torso.launch" />
    <include file="$(find myrobot)/launch/raspicam_compressed_320.launch" />
    <include file="$(find myrobot)/launch/sound.launch" />
    <include file="$(find myrobot)/launch/diagnostics_aggregator.launch" />
    <include file="$(find myrobot)/launch/robot_state_publisher.launch" />
    ...
</launch>

To simplify things, I tried running just roscore and then the launch file for my serial_node.py, and that ran fine. But when I again ran my all.launch the error re-occurred. Why is this? Does the ROS parameter server cache parameters per parent launch file? I tried this suggestion to add clear_params="true" to all my nodes, but that had no effect. I noticed that the <include> tag also supports clear_params, so I tried that, but that just gives me a different error. It apparently requires the tag's ns parameter to be set, so I changed my include tags to end with clear_params="true" ns="/" />, but that just made it immediately fail with the error: Failed to clear parameter: parameter [/] is not set Originally posted by Cerin on ROS Answers with karma: 940 on 2018-02-01 Post score: 1 Answer: Since the error message doesn't specify the launch file causing the error, I was forced to go through my top-level launch file, comment everything out, and then one by one, re-enable each child launch file, and then re-run it, testing to see if I received the error. This allowed me to track the error down to my launch file for the diagnostics_aggregator. Again, the launch file itself doesn't pass in any blank or None values. However, I was loading a configuration file with: <rosparam command="load" file="$(find myrobot)/config/analyzers.yaml" /> I checked this file, and sure enough, it was corrupt. I re-uploaded this file, and that resolved the issue.
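Since the root cause here was a corrupted YAML file, a crude scan for the most common corruption symptoms (NUL bytes, invalid UTF-8) can locate the bad file faster than bisecting launch files. This helper is my own sketch, not a ROS tool:

```python
def find_suspect_lines(path):
    """Crude corruption scan for a text config file (e.g. a rosparam YAML):
    flags lines containing NUL bytes or invalid UTF-8, which often indicate
    a file truncated or damaged by an unclean shutdown."""
    bad = []
    with open(path, 'rb') as f:
        for lineno, raw in enumerate(f, 1):
            try:
                text = raw.decode('utf-8')
            except UnicodeDecodeError:
                bad.append(lineno)
                continue
            if '\x00' in text:
                bad.append(lineno)
    return bad
```

Running it over each file referenced by a <rosparam command="load"> tag would have pointed straight at analyzers.yaml in this case.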
Originally posted by Cerin with karma: 940 on 2018-02-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29937, "tags": "ros, xmlrpc, ros-kinetic, rosparam" }
Surface Area to Volume Ratio
Question: I know there are some aspects where we can apply this surface area to volume ratio, say in physical chemistry, in order to see the reactivity of chemicals. However I am very confused in my structural engineering elective course: it states that the surface area to volume ratio of high-rise, tower-like structures is very important. I do see that the SA:V ratio of certain shapes in general gives a very interesting and neatly simplified expression once computed, but I do not see what the application of the SA:V ratio is when building tower structures (Eiffel Tower, CN Tower, etc.). Can anyone help me explain the significance of the SA:V ratio when building high-rise structures? (I would appreciate technical details; engineering is not my major of study and I want to understand a little better.) Answer: In high-rise design the surface area to volume ratio is a significant factor in the amount of energy used to keep the building air conditioned and usable for its design function. The larger the surface, the more heat transfers either from the building to the outside in the winter, or vice versa in the summer. At first glance it would seem a cube would have the least surface to volume ratio and be an ideal shape. However, a cube has a big portion of its usable floor area near its core, far away from the exterior windows which provide natural lighting and ventilation. So the core of the building needs continuous lighting and ventilation, which adds to the energy consumption. The optimal building form factor and SA:V is one which addresses all these concerns and finds a balance between an energy-efficient shell and the energy needed for lighting and ventilation, together with: the location and angle of orientation of the building with respect to sun rays, wind direction and snow drift; the climate (a moderate climate gives flexibility to have more surface to take advantage of natural light); and the extreme conditions the building envelope will likely be exposed to.
{ "domain": "engineering.stackexchange", "id": 2873, "tags": "structural-engineering" }
Find missing element in an array of unique elements from 0 to n
Question: The task is taken from LeetCode: Given an array containing n distinct numbers taken from 0, 1, 2, ..., n, find the one that is missing from the array. Example 1: Input: [3,0,1] Output: 2. Example 2: Input: [9,6,4,2,3,5,7,0,1] Output: 8. Note: Your algorithm should run in linear runtime complexity. Could you implement it using only constant extra space complexity? My approach is to subtract the sum of the elements in the array from the sum of the numbers 0 to n. For the sum of the numbers from 0 to n I use the Gauss formula: (n * (n + 1)) / 2. For the sum of the array I have to iterate through the whole array and sum up the elements. My solution has time complexity of \$O(n)\$ and space complexity of \$O(1)\$.

/**
 * @param {number[]} nums
 * @return {number}
 */
var missingNumber = function(nums) {
    if (nums.length === 0) return -1;
    const sumOfNums = nums.reduce((ac, x) => ac + x);
    const sumTillN = (nums.length * (nums.length + 1)) / 2;
    return sumTillN - sumOfNums;
};

Answer: I don't think there is a better solution than \$O(n)\$ time and \$O(1)\$ space for this problem. However you can simplify the code a little (trivially) by subtracting from the expected total.

const findVal = nums => nums.reduce((t, v) => t - v, nums.length * (nums.length + 1) / 2);

Or

function findVal(nums) {
    var total = nums.length * (nums.length + 1) / 2;
    for (const v of nums) { total -= v }
    return total;
}

BTW The extra brackets are not needed when calculating the total: (x * (x + 1)) / 2 === x * (x + 1) / 2
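A common alternative with the same \$O(n)\$/\$O(1)\$ bounds (shown here in Python for brevity) is to XOR the indices against the values so that pairs cancel, which avoids the large intermediate sums of the Gauss formula:

```python
def missing_number(nums):
    """XOR every index 0..n-1 and every value against n; each number that
    is present cancels against its index, leaving the missing one."""
    x = len(nums)              # start with n itself
    for i, v in enumerate(nums):
        x ^= i ^ v
    return x
```

This sidesteps any risk of overflow with the summed total in fixed-width integer languages.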
{ "domain": "codereview.stackexchange", "id": 35055, "tags": "javascript, algorithm, object-oriented" }
Why is the name $\gamma_4$ not used instead of $\gamma_5$ in relativistic quantum mechanics?
Question: I have seen in the in the Dirac equation $$\gamma_0,\gamma_1,\gamma_2,\gamma_3.$$ Then I have seen the definition of a new matrix $$\gamma_5=i\gamma_0\gamma_1\gamma_2\gamma_3.$$ Now my question is why the name of the new matrix has not been given as $$\gamma_4.$$ Is there any historical reason behind this or it is simply taken as $$\gamma_5$$ without any special reason skipping $$\gamma_4~?$$ Answer: From Wikipedia: The number 5 is a relic of old notation in which $\gamma^0$ was called "$\gamma^4$".
{ "domain": "physics.stackexchange", "id": 12244, "tags": "special-relativity, notation, dirac-matrices" }
What is the purpose of building multistage rockets, rather than packing more fuel into a single stage?
Question: Reading an article about multistage rockets (not educated at all on the topic), and from the get go it seems to assume they are used/need to be used. Answer: As the rocket is propelled upwards, it expends fuel. So there is no need to carry half empty fuel tanks. By splitting it up into separate stages, you can simply drop off unneeded mass.
{ "domain": "engineering.stackexchange", "id": 2308, "tags": "aerospace-engineering, rocketry" }
Using accelerometer as a seismograph
Question: I'm using ADXL345 accelerometer with Raspberry Pi to build a seismograph. I've successfully hooked it up and can plot the accelerometer data in three axis. Is there any way to express these data in the form of the magnitude of an earthquake, of course, at the point of sensing? I know that it might be imprecise, but any representation would be helpful (e.g. Richter scale), and how to accomplish that. Answer: The magnitude of an earthquake is related to the total energy released, therefore to estimate it from a seismogram you need to know the distance to the source. In the case of the Richter scale for example, the relationship between magnitude and seismogram amplitude is defined for a standard distance. If you have only one seismograph, you can not triangulate the location of the source (hypocenter). Therefore, you can not estimate the magnitude of a seismic event (Richter or moment magnitude). But you can estimate the local seismic intensity of the event at the particular location of your instrument. With the accelerometer data you can easily measure the peak ground acceleration, that can be used to estimate the intensity in any of the existing scales. For example, the peak ground accelerations associated to each intensity level in the commonly used Mercalli intensity scale are: Those g values would be easy to calculate with the accelerometer data and proper calibration constants. Table taken from the Wikipedia page for peak ground acceleration You might want to have a look at this question. There are some nice answers and references that you might find useful.
{ "domain": "earthscience.stackexchange", "id": 1353, "tags": "earthquakes, seismology, instrumentation, in-situ-measurements, diy" }
Comparing encoders on the same input with different output sizes
Question: Let's say I have an input and I pass it to two encoders e1 and e2. They output encodings s1 and s2, whose lengths are not equal; let's say len(s1) = k*len(s2). Is it possible to somehow compare which encoder is better of the two? Answer: Encoders "encode" your high-dimensional input into a lower-dimensional space. One way to compare different encoders is to use each representation for building the same model (say a NN with a fixed number of layers) and to see which one performs best, or whether the performance loss from the lower dimensionality is acceptable. A downside of this approach is that the training procedure may advantage one representation over another. Think for example of having very few data points and comparing a big encoder with a small one: probably the small one will be better because it won't overfit. You can also take this concept to the extreme and use those representations as inputs to a linear/logistic regression: in this way the chances of overfitting are very low and the better representation for the linear scenario will result in higher performance.
{ "domain": "datascience.stackexchange", "id": 11067, "tags": "machine-learning, nlp, encoding, vector-space-models, encoder" }
C# extension method to do some action if a target operation takes too long time to finish
Question: I have a potentially long-running operation and I want to trigger some action if it takes too long. I also want to reuse this logic. Operation to check: public static async Task Do(CancellationToken ct, TimeSpan actionDuration) { await Task.Delay(actionDuration, ct); Debug.WriteLine("action finished"); } Example of usage: [Fact] public async Task AsyncTaskMethodWithoutException_LongerThanCheck_CheckTriggeredActionSucceed() { Func<Task> action = () => Do(CancellationToken.None, TimeSpan.FromMilliseconds(300)); var checkResult = await action.Check(TimeSpan.FromMilliseconds(100), () => { Debug.WriteLine("too long"); return Task.CompletedTask; }, CancellationToken.None); checkResult.IsOnTooLongFired.Should().BeTrue(); checkResult.ActionTask.Status.Should().Be(TaskStatus.RanToCompletion); checkResult.CheckTask.Status.Should().Be(TaskStatus.Canceled); } Here is my code: public class TooLongCheckResult { public static TooLongCheckResult ActionFailedInstantly = new TooLongCheckResult(Task.CompletedTask, Task.CompletedTask); public TooLongCheckResult(Task actionTask, Task checkTask) { ActionTask = actionTask; CheckTask = checkTask; } public Task ActionTask { get; } public Task CheckTask { get; } public bool IsOnTooLongFired { get; private set; } public void OnTooLongFired() => IsOnTooLongFired = true; } public sealed class TooLongCheckResult<T> { public static TooLongCheckResult<T> ActionFailedInstantly = new TooLongCheckResult<T>(Task.CompletedTask, Task.CompletedTask); public TooLongCheckResult(Task actionTask, Task checkTask) { ActionTask = actionTask; CheckTask = checkTask; } public Task ActionTask { get; } public Task CheckTask { get; } public bool IsOnTooLongFired { get; private set; } public T Result { get; private set; } public void SetResult(T result) => Result = result; public void OnTooLongFired() => IsOnTooLongFired = true; } public static class TooLongTaskExt { public static async Task<TooLongCheckResult> Check(this Func<Task> action, TimeSpan
tooLongThreshold, Func<Task> onTooLong, CancellationToken ct) { var childCts = CancellationTokenSource.CreateLinkedTokenSource(ct); childCts.CancelAfter(tooLongThreshold); Task actionTask; try { actionTask = action.Invoke(); } catch (Exception ex) { //most likely it's because method in action isn't declared with async keyword Debug.WriteLine("instant error: " + ex.GetType().Name); return TooLongCheckResult.ActionFailedInstantly; } var timerTask = Task.Delay(Timeout.InfiniteTimeSpan, childCts.Token); var result = new TooLongCheckResult(actionTask, timerTask); var firstFinished = await Task.WhenAny(timerTask, actionTask); if (firstFinished == timerTask) { await onTooLong.Invoke(); result.OnTooLongFired(); } else { childCts.Cancel(); } try { await Task.WhenAll(timerTask, actionTask); } catch (TaskCanceledException) { Debug.WriteLine("cancellation is triggered"); } catch (Exception ex) { Debug.WriteLine("awaited error: " + ex.GetType().Name); throw; } Debug.WriteLine("actionTask: " + actionTask.Status); Debug.WriteLine("timerTask: " + timerTask.Status); return result; } public static async Task<TooLongCheckResult<T>> CheckWithResult<T>(this Func<Task<T>> action, TimeSpan tooLongThreshold, Func<Task> onTooLong, CancellationToken ct) { var childCts = CancellationTokenSource.CreateLinkedTokenSource(ct); childCts.CancelAfter(tooLongThreshold); Task<T> actionTask; try { actionTask = action.Invoke(); } catch (Exception ex) { //most likely it's because method in action isn't declared with async keyword Debug.WriteLine("instant error: " + ex.GetType().Name); return TooLongCheckResult<T>.ActionFailedInstantly; } var timerTask = Task.Delay(Timeout.InfiniteTimeSpan, childCts.Token); var result = new TooLongCheckResult<T>(actionTask, timerTask); var firstFinished = await Task.WhenAny(timerTask, actionTask); if (firstFinished == timerTask) { await onTooLong.Invoke(); result.OnTooLongFired(); } else { childCts.Cancel(); } try { await Task.WhenAll(timerTask, actionTask); } catch 
(TaskCanceledException) { Debug.WriteLine("cancellation is triggered"); } catch (Exception ex) { Debug.WriteLine("awaited error: " + ex.GetType().Name); throw; } Debug.WriteLine("actionTask: " + actionTask.Status); Debug.WriteLine("timerTask: " + timerTask.Status); if (actionTask.Status == TaskStatus.RanToCompletion) { result.SetResult(actionTask.Result); } return result; } } Answer: Just a suggestion to consider as an improvement. Abstract example: public static class TaskExtensions { public static Task InvokeAfterAsync(this Task task, double thresholdMilliseconds, Func<Task> callbackAsync, CancellationToken token = default) => task.InvokeAfterAsync(TimeSpan.FromMilliseconds(thresholdMilliseconds), callbackAsync, token); public static async Task InvokeAfterAsync(this Task task, TimeSpan threshold, Func<Task> callbackAsync, CancellationToken token = default) { using (CancellationTokenSource cts = CancellationTokenSource.CreateLinkedTokenSource(token)) { Task fastTask = await Task.WhenAny(task, Task.Delay(threshold, cts.Token)); if (fastTask != task) { await Task.WhenAll(callbackAsync(), task); } else { cts.Cancel(); } } } } Possible usage: await DoMyJobAsync().InvokeAfterAsync(100, myCallbackAsync); CTS implements IDisposable; consider calling Dispose() on it or wrapping it in a using statement. Don't use shortened names if possible; make the code more readable. Use fewer var statements: use var only with = new when you want to store exactly the same type (not an interface or parent type), otherwise avoid it (but it's up to you). Let exceptions be handled outside; e.g. if the caller wants to Cancel() its own CTS, it must take care to catch the exception. Use method overloads and default values to simplify the method usage.
{ "domain": "codereview.stackexchange", "id": 39324, "tags": "c#, task-parallel-library" }
How to move a model using external C++ application and gazebo transport?
Question: Let's say I have a simple box loaded in a Gazebo world. How can I move this box using the Gazebo transport client? Is there any sample code somewhere? I don't see anything in the examples that matches. Second question: let's say I update the position at 1 Hz; can I also set the body rates so that Gazebo smooths the position in between position updates? Originally posted by LakeWorthB on Gazebo Answers with karma: 45 on 2020-03-23 Post score: 0 Answer: I ended up just creating my own model plugin and created my own TCP interface using gazebo/transport/Connection.hh. I didn't use the full Gazebo transport since I didn't want to make that a requirement for clients on the other side. Originally posted by LakeWorthB with karma: 45 on 2020-04-01 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 4487, "tags": "gazebo-11, gazebo-10" }
Radar vs. Sonar
Question: What is the true difference between radar and sonar? My understanding is that radar uses a reflected EM wave, while sonar uses a compression (shock) wave in the material it's in. (It compresses water, and then looks to see if any of the wave is reflected back?) If that is true, then any spacecraft with one of these devices would be better off using radar to track objects in space? Answer: Radar (radio detection and ranging) uses electromagnetic waves (typically in the radio or microwave region of the spectrum) to detect the presence of remote objects along with other properties, including the object's range and angular location relative to the radar. Sonar (sound navigation and ranging) is a similar technique to radar, but uses sound waves rather than electromagnetic ones. As a consequence, sonar requires a medium for the sound waves to propagate in, typically water or air. There is also lidar (light detection and ranging) which, like radar, operates with electromagnetic waves but in the optical portion of the spectrum. Of these three, radar and lidar are suitable for spacecraft, as their electromagnetic signals do not require a medium to propagate in. Sonar, requiring some contiguous material for both the sonar system and any targets to be immersed in, would be unsuitable in the vacuum of space.
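The "ranging" part works the same way in both systems: range is half the round-trip echo delay times the propagation speed. A tiny sketch (the function name is mine, and the speeds are rounded textbook values) shows how differently the same delay reads in each medium:

```python
# Round-trip echo timing: the pulse travels out to the target and
# back, so range = propagation_speed * delay / 2.
SPEED_OF_LIGHT = 3.0e8         # m/s, radar/lidar in vacuum (rounded)
SPEED_OF_SOUND_WATER = 1500.0  # m/s, typical seawater value

def echo_range(delay_seconds, speed):
    """Range to a target given the measured echo delay."""
    return speed * delay_seconds / 2.0
```

For example, a 1 ms echo delay corresponds to roughly 150 km for radar but only 0.75 m for sonar in water, which is one reason the two operate on such different timescales.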
{ "domain": "physics.stackexchange", "id": 85981, "tags": "waves, radar" }
Should the odometry have no speed, a calculated speed, or cmd_vel?
Question: I am working on a robot, and currently I have no information about its actual speed (e.g. from the motors). But the odom message has a field for twist. What will be the best thing to do: leave the twist with 0s, calculate speed from past positions, or use cmd_vel? Thanks for your help. Originally posted by ct2034 on ROS Answers with karma: 862 on 2014-11-17 Post score: 3 Answer: There are two places where the speed information in an odometry message is used: navigation (move_base) uses the speed as the current speed of the robot, for computing the maximum commandable speed according to the acceleration limits. Setting a zero speed will confuse move_base and generally prevent the local_planner from commanding the maximum velocity. A Kalman filter (robot_pose_ekf, robot_localization) will usually use the velocity from the wheel encoders as one of the inputs to the filter, which will integrate the velocity along with other measurements to produce an estimate of the robot's position. Again, using zeros here can cause the position estimates of your robot to be wrong. If you can differentiate the wheel positions to produce a fairly clean, stable velocity estimate, that's probably more accurate than simply repeating the commanded velocity. If there's a lot of noise in your differentiated wheel velocities, or if your wheel speed controller is not very accurate, using the calculated wheel velocities may cause the navigation controller to be unstable, and it may be wiser to repeat the commanded velocities. Originally posted by ahendrix with karma: 47576 on 2014-11-17 This answer was ACCEPTED on the original site Post score: 4
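The "calculate speed from past positions" option can be sketched by finite-differencing two successive 2-D poses (this is an illustrative Python sketch; the function name and the differential-drive assumption are mine, not from the answer):

```python
import math

def twist_from_poses(x0, y0, th0, x1, y1, th1, dt):
    """Estimate body-frame forward velocity vx and yaw rate wz from
    two successive 2-D poses (x, y, theta) taken dt seconds apart.
    Assumes a differential-drive robot, so lateral velocity is zero."""
    dx = x1 - x0
    dy = y1 - y0
    # Rotate the world-frame displacement into the robot frame at th0
    # so vx measures "forward" speed.
    vx = (math.cos(th0) * dx + math.sin(th0) * dy) / dt
    # Wrap the heading change into (-pi, pi] before differencing.
    dth = math.atan2(math.sin(th1 - th0), math.cos(th1 - th0))
    wz = dth / dt
    return vx, wz
```

As the answer notes, velocities obtained this way amplify position noise, so some low-pass filtering of the result is usually needed before publishing them in the odometry twist.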
{ "domain": "robotics.stackexchange", "id": 20063, "tags": "ros, navigation, odometry, localilzation" }
Why is mRNA used as a biomarker for cancer over tRNA or rRNA?
Question: I cannot find a clear explanation for why mRNA is used as a cancer biomarker and not tRNA or rRNA. Is there something peculiar about mRNA that cannot be fulfilled by tRNA or rRNA? Answer: mRNAs encode specific gene products and perform specific functions, whereas the other two perform housekeeping functions in all cells, so changes in their expression reflect global changes in the cell. The expression of rRNA and tRNA is affected in certain conditions such as stress, and possibly also in cancer (maybe due to high replication rates). However, even if they are affected in cancer, you cannot use them as markers for specific cancers, as they are present and highly expressed in all cells. Therefore the mRNAs of specific tumour suppressor genes and oncogenes are studied instead.
{ "domain": "biology.stackexchange", "id": 5183, "tags": "molecular-genetics, cancer, rna" }
Update DRCSIM setup.sh
Question: It would be nice to include export GAZEBO_RESOURCE_PATH=/usr/share/drcsim-1.3/models:$GAZEBO_RESOURCE_PATH in drcsim-1.x/setup.sh Currently to access the Atlas meshes we do something like: file://../models/atlas/meshes/ltorso.dae Thanks, Chris Originally posted by cga on Gazebo Answers with karma: 223 on 2012-12-22 Post score: 0 Answer: Ticketed. Originally posted by gerkey with karma: 1414 on 2012-12-30 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 2886, "tags": "drcsim" }