A pedestrian explanation of conformal blocks
Question: I would be very happy if someone could take a stab at conveying what conformal blocks are and how they are used in conformal field theory (CFT). I'm finally getting the glimmerings of understanding from reading Moore and Read's wonderful paper. But I think/hope this site has people who can explain the notions involved in a simpler and more intuitive manner. Edit: Here is a simple example, taken from p. 8 of the reference cited above ... In a 2D CFT we have correlation functions of fields $ \phi_i(z,\bar z) $, (where $ z = x+\imath y$) at various points on the complex plane. The n-point correlation function can be expanded as: $$ \left \langle \prod_{a=1}^n \phi_{i_a}(z_a,\bar z_a) \right \rangle = \sum_p | F_{p\; i_{1} \dots i_n}(z_{1} \dots z_n)|^2 $$ Here $p$ labels members of a basis of functions $ F_{p\; i_1 \dots i_n}(z_{1} \dots z_n) $ which span a vector space for each n-tuple $(z_{1} \dots z_n)$. These functions $F_p$ are known as conformal blocks, and appear to give a "Fourier" decomposition of the correlation functions. This is what I've gathered so far. If someone could elaborate with more examples that would be wonderful! Edit: It is proving very difficult to decide which answer is the "correct" one. I will give it a few more days. Perhaps the situation will change! The "correct" answer goes to (drum-roll): David Zavlasky. Well they are all great answers. I chose David's for the extra five points because his is the simplest, IMHO. He also mentions the "cross-ratio" which is a building block of CFT. Answer: Now that we have a physicist's perspective, I don't feel too bad outlining conformal blocks from a mathematician's point of view. Presumably there is a dictionary connecting the two worlds, but I don't understand physics well enough to say coherent sentences about it. I apologize in advance for any confusion - this is not a very pedestrian topic. 
I'll approach conformal blocks from the standpoint of conformal vertex algebras, which typically appear in mathematics as algebraic structures that you can use to prove theorems in representation theory. Vertex algebras are vector spaces $V$ equipped with a "multiplication with singularities" $V \otimes V \to V((z))$ that encodes a best effort at multiplying quantum fields (which are sometimes called "operator-valued distributions"). Left multiplication by an element $u$ yields a formal power series $\sum_{n \in \mathbb{Z}} u_n z^{-n-1}$ whose coefficients are operators. To make a vertex algebra conformal is to choose a distinguished vector $\omega$ whose corresponding operators generate an action of the Virasoro algebra, which is a central extension of the complexified Lie algebra of polynomial vector fields on the circle. You don't lose much conceptually by thinking of Virasoro as the tangent space of the group $Diff(S^1)$ at the identity, but there is a "nonzero central charge" anomaly in play that can make the central extension necessary. The circle shows up here because it is the boundary of a puncture where we will insert a field. My understanding of the physical interpretation is the following incomplete and possibly incorrect picture: Inside a 2D conformal field theory, there is an algebra of (say, left-moving) chiral symmetries, and this is precisely the information captured by the conformal vertex algebra. The space of states in the theory decomposes into a set of "sectors" which are modules of the vertex algebra. If we choose a Riemann surface (which is a sphere in most textbooks), and attach states from various sectors to a set of distinct points, we should get a set of amplitudes, which are values of chiral correlation functions attached to these input data. 
I have heard that there is some way to pass from the chiral stuff to the conformal field theory proper, where the ambiguity in the correlators disappears and one gets honest correlation functions, but I haven't seen it in the math literature. In any case, conformal blocks live inside this machine - given sectors attached to points on a Riemann surface, a conformal block is a gadget that eats choices of states in those sectors, and outputs values of correlation functions in a manner consistent with the chiral symmetries. Here is a sketch of the mathematical construction, due to Edward Frenkel (and described in more detail in his book Vertex Algebras and Algebraic Curves with David Ben-Zvi): There is a "positive half" of the Virasoro algebra, spanned by generators $-z^n\frac{d}{dz}$ for $n \geq 0$, and it generates the Lie algebra of derivations on the infinitesimal complex disk, and also acts on the conformal vertex algebra $V$. We can use this action to construct a vector bundle $\mathscr{V}$ with flat connection on our Riemann surface of choice by the Gelfand-Kazhdan "formal geometry" method (which I won't describe). Given punctures $p_1, \dots, p_n$, one constructs, from the De Rham complex of $\mathscr{V}$, a Lie algebra $L$ that acts naturally on $n$-tuples of $V$-modules. Given $V$-modules $M_i$ attached at points $p_i$, a conformal block is an $L$-module map from $\bigotimes M_i$ to the trivial module. It is in general quite difficult to do any explicit calculations with conformal blocks, because of the amount of geometry involved. If your Riemann surface has handles, you will have to deal with a choice of complex structure, and if it has a lot of punctures, you have to deal with a complicated configuration space of points. 
You typically see tree-level diagrams with 4 inputs, because: That is where the bare minimum of geometry appears - since the automorphism group of the complex projective line is triply transitive, the configuration space of four points is a thrice-punctured line (by which I mean a sphere). Depending on the level of detail you seek, it is often all that you need - the spaces of blocks can be assembled by gluing surfaces together out of pants and taking sums over sectors where the sewing happens. In the complex algebro-geometric picture, this sewing means sticking spheres together transversely at points to get a nodal curve. One then deforms to get a smooth complex curve, and does a parallel transport along the corresponding path in the moduli space of marked curves. The four-point configuration is a situation where you have exactly one sewing operation (and the other such situation is a punctured torus, which is important for getting characters). In fact, when the conformal field theory is suitably well-behaved (read: rational), one gets dimensions of spaces of all conformal blocks from just the dimensions of three-point genus zero blocks, also known as structure constants of the fusion algebra. One sees this in the Verlinde formula, for example. I think examples of conformal blocks have a certain necessary complexity, but here is an overview of a reasonably simple case that is motivated by the WZW model. Pick a simple Lie group, like $SU(2)$, and a level $\ell$ (which we can view as a positive integer). One constructs the vertex algebra and its modules as level $\ell$ integrable representations of the affine Kac-Moody Lie algebra $\hat{\mathfrak{sl}_2}$, which is a central extension of the loop algebra of the complexification of the Lie algebra $\mathfrak{su}_2$. 
If we choose a Riemann surface (such as a sphere), and decorate points with just the vacuum module, we get a space of conformal blocks that is the space of global sections of a certain line bundle $L_G^{\otimes \ell}$ on the moduli space of $SU(2)$ bundles on the surface. Here $L_G$ is the ample generator of the Picard group of the moduli space.
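To tie this back to the four-point discussion: since the automorphism group of the projective line is triply transitive, one can fix three of the four insertion points, and the blocks then depend only on the cross-ratio. Schematically, in the notation of the question (and suppressing the prefactors carrying the conformal weights):

$$ x = \frac{(z_1-z_2)(z_3-z_4)}{(z_1-z_3)(z_2-z_4)}, \qquad \left\langle \prod_{a=1}^{4} \phi_{i_a}(z_a,\bar z_a) \right\rangle = \sum_p \left| F_p(x) \right|^2 $$

so the "complicated configuration space of points" collapses, in this case, to the thrice-punctured sphere parametrized by $x$.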
{ "domain": "physics.stackexchange", "id": 20348, "tags": "mathematical-physics, condensed-matter, conformal-field-theory, research-level" }
Reason for borexino pep flux confidence level error
Question: At the TAUP conference in 2011, the Borexino collaboration presented an analysis of electron-neutrino elastic scattering events in their liquid scintillation detector to claim the first observation of neutrinos produced by the pep reaction in the Sun. They quoted a value for the total flux of solar pep neutrinos of 1.6±0.3 (units of 0.01c) and a significance level for the detection of 97%. I'm wondering whether the confidence level is justifiable. Does anyone know whether the measurement is normally distributed and, as a result, whether the significance level is valid? Answer: For normal statistics 97% confidence is between 2 and 3 sigma. When quoting a 'detection' of a small signal in a large background you are claiming to have statistically distinguished the signal from zero, so for normal statistics that would be a claim that the centroid is nearly three times as far from zero as the 1-sigma uncertainty (and $0.3$ is less than a fifth of $1.6$). However, this kind of bounds-setting measurement doesn't generally follow normal statistics, and there is likely a more involved theory at work here - possibly Feldman-Cousins, which has seen a lot of use in the field in recent decades. In any case, that level of confidence does not seem unreasonable given the quoted measurement, but the details depend a lot on exactly how the analysis was done. There should be a pre-print or paper either now or soon.
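For a rough feel of the numbers, here is a sketch under the naive assumption of normal statistics (which, as argued above, probably does not apply to this bounds-type analysis), using only the Python standard library:

```python
from statistics import NormalDist

# Under a naive Gaussian assumption, a 97% confidence level maps to:
z_one_sided = NormalDist().inv_cdf(0.97)          # about 1.88 sigma
z_two_sided = NormalDist().inv_cdf(1 - 0.03 / 2)  # about 2.17 sigma

# whereas the quoted central value sits much further from zero:
naive_significance = 1.6 / 0.3                    # about 5.3 sigma
```

The mismatch between the quoted 97% (roughly 2 sigma) and the naive central-value-over-error ratio is exactly why a non-Gaussian construction such as Feldman-Cousins is plausibly at work.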
{ "domain": "physics.stackexchange", "id": 44418, "tags": "experimental-physics, sun, neutrinos, data-analysis" }
Why do lasers cut? Is this a case of light acting as matter?
Question: All I found in Google was very broad. From a physics models perspective, why can photons emitted from a laser cut? Does this cut mean that the photons are acting like matter? Answer: When lasers cut something, they're only cutting in the sense that they're making atoms not as attracted as they once were to each other. When you get down to the nitty-gritty details, it is not really the same as mechanical cutting. Remember that lasers shoot photons, and when photons hit atoms, they excite electrons. If you excite these electrons enough, they'll have enough energy to dissociate from the atoms they formerly "belonged" to. This makes individual atoms dissociate from whatever other atoms they were once bonded to, and in the mad scramble to go to a lower energy state, they very likely do not end up in the same configuration they were in before. Some atoms, like the ones directly hit by the laser beam, turn to vapor and float away. Others "choose" one side of the material to go to. Any bonds the material had with itself are then dissolved, so it is effectively cut. This is different from, say, taking shears or scissors to the material. Those methods of cutting are purely mechanical, and you don't have to worry about vapors as much as when cutting with a laser. (You also don't have to worry about reflections from materials, either!)
{ "domain": "physics.stackexchange", "id": 13956, "tags": "laser, mass-energy, duality" }
I don't understand the approach used in solving this kinematics problem
Question: The problem text is as follows: During the first half of a straight path, which is at $\alpha_1=60°$ relative to a reference line, a car is travelling at $v_1=72~km/h$. During the second half of the straight path, which is now at $\alpha_2=30°$ relative to the same reference line, the car is travelling at $v_2=36~km/h$. What is the average velocity of the car? So I've used two different approaches to solve this problem and neither of them is correct, for reasons unknown to me. The first approach was to find the x and y components of both velocities, then use those to find the average x and y velocities, and finally use the Pythagorean theorem to find the average velocity from the average x and y velocity components. This yields 43.56 km/h, which is incorrect. Please note that the expression I used for the average velocity of this kind of motion (one velocity for one half of the path, and another for the 2nd half) is $$\frac{2v_1v_2}{v_1+v_2}$$ (I used this to find the average x and y velocity components). Although this approach was intuitive to me, I think it is wrong because I don't think that just finding the average velocity components and then using them to find the average velocity is the way to go. I then tried simply applying the definition of average velocity ($\frac{displacement}{time~interval}$); it boils down to simply plugging the velocities into the above equation, and then the average velocity is 48 km/h. But this is incorrect as well. The correct value of the average velocity is 46.36 km/h. Any clarification would be welcome. Answer: Here is how to answer this question (rather than the specific answer, which I don't think you're after). Firstly, they tell you that the two bits of the path are the same length ('the first half of the path') and are straight. Call the length $l$. They also give you the angles, $\alpha_1$ and $\alpha_2$, of the two bits of the path to some reference line. 
This is enough to compute the straight-line distance between the start and finish point in terms of $l$, just using some basic trigonometry. Now, you know the speeds of the car on the two halves of the path, and you know that $\textit{time}=\textit{distance}/\textit{speed}$. So you know the time the car took to traverse each part of the path, in terms of $l$. Well the total time it took is the sum of these times, and from above you know the total distance it travelled. So now you know the average speed.
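To make the recipe concrete, here is a quick numerical check (a sketch in Python; the half-length $l$ cancels out):

```python
import math

l = 1.0  # length of each half of the path; arbitrary, it cancels out
a1, a2 = math.radians(60), math.radians(30)
v1, v2 = 72.0, 36.0  # speeds in km/h

# displacement = vector sum of the two straight segments
dx = l * (math.cos(a1) + math.cos(a2))
dy = l * (math.sin(a1) + math.sin(a2))
displacement = math.hypot(dx, dy)

# total time = time on first half + time on second half
t = l / v1 + l / v2

v_avg = displacement / t
print(round(v_avg, 2))  # 46.36 km/h
```

Note that the harmonic mean $\frac{2v_1v_2}{v_1+v_2} = 48$ km/h is the average speed over the full path; the magnitude of the average velocity is smaller because the displacement, with a 30° bend between the two halves, is shorter than the path itself.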
{ "domain": "physics.stackexchange", "id": 32485, "tags": "homework-and-exercises, kinematics" }
What do they call that elegant triangular scaffolding often used in future-oriented architecture?
Question: I'm a 3d artist and I'm trying to produce a type of scaffolding within a biodome. For lack of knowledge in technical terms, I'm having a hard time learning about the design pattern/structure that I must follow (I suspect just making a triangle-array would be ignorant). What is the technical term for the triangular scaffolding often used in future-esque projects? I've seen it used to hold up giant solar arrays as well as biodomes, etc. If I only knew what it was called I could go ahead and research it! Answer: The general term for these structures in 3-d is a space frame. "Just making a triangle array" is a pretty good starting point, because three bars forming a triangle make a rigid structure without any bracing at the corners, unlike four bars forming a rectangle which can be "squashed" out of shape. Another key to making a stiff lightweight structure is to make it curved in both directions (i.e. not like a cylinder or a plane), so that effectively you have groups of four bars forming tetrahedra. The curved structure works the same way as a dome. If you want to study designing something like this, you could start with the same idea in two dimensions, called a truss. Trusses are a very common engineering design for the roof supports in buildings, and also for bridges, tower cranes, etc. To make a space frame which is cheap and easy to assemble, you need to make the bars with a few standard lengths and meeting at the same angles, rather than every bar and every joint being different. Of course a computer aided design can have every part different if you want, but the manufacturing and assembly costs will be higher. You might want to look at polyhedra (including "semi-regular" polyhedra, not just the five "Platonic regular solids") where all the edges are the same length, and buckyballs which have similar properties. 
In particular, there are a few deltahedra where all the faces are equilateral triangles but the complete object does not have a "regular" shape. The regular polyhedra were studied by the ancient Greek mathematicians, but the irregular shaped deltahedra were not discovered until nearly 2000 years later - most likely simply because nobody bothered to look to see if such a thing could exist. Google any of the terms in bold type for more information.
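As a starting point for generating such a structure procedurally, here is a sketch (Python, standard library only) that builds the 12 vertices of an icosahedron - the seed shape of geodesic domes and buckyballs - and confirms that its 30 struts all have the same length:

```python
import math
from itertools import combinations, product

phi = (1 + 5 ** 0.5) / 2  # golden ratio

# The 12 vertices of a regular icosahedron: cyclic permutations of (0, ±1, ±phi)
verts = []
for s1, s2 in product((1, -1), repeat=2):
    verts += [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]

# Struts (edges) join nearest-neighbour vertices; for these coordinates
# the shortest pairwise distance is exactly 2.
shortest = min(math.dist(a, b) for a, b in combinations(verts, 2))
edges = [(a, b) for a, b in combinations(verts, 2)
         if abs(math.dist(a, b) - shortest) < 1e-9]

print(len(verts), len(edges))  # 12 vertices, 30 equal-length struts
```

Subdividing each triangular face and projecting the new vertices back onto a sphere is the usual next step toward a geodesic dome; the struts then fall into a small number of standard lengths, which is exactly the manufacturing point made above.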
{ "domain": "engineering.stackexchange", "id": 1270, "tags": "architecture" }
ros indigo rviz performance displaying pointclouds 2x faster when compiled from_source
Question: Hi there, as we started migrating our code to Indigo yesterday, we had performance issues, especially when working with pointclouds. The code compiled with catkin_make runs at just 1Hz, whereas using rosbuild we achieve 30Hz. We found out, related to the following topic: http://answers.ros.org/question/71965/catkin-compiled-code-runs-3x-slower/ that we need to enable compiler optimization by setting catkin_make -DCMAKE_BUILD_TYPE=Release, and then the code runs at the expected framerate of 30Hz. Now to the topic: Displaying just a pointcloud from the Asus Xtion in rviz installed from the repository gives us a performance of 14Hz on our desktop. Intel i7-3770 3.4 GHz quad-core, Nvidia GeForce GTX660, 16GB RAM, Ubuntu 14.04 - 64bit, ros-indigo. Knowing the above (catkin compiled code runs 3x slower) we built rviz from source with the following compiler options: set(CMAKE_BUILD_TYPE RelWithDebInfo) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++0x -Wall -msse2 -msse3 -mssse3 -msse4 -msse4.2") This gives us a performance of 30Hz displaying a pointcloud, resulting in a much smoother usage of rviz. Evaluating the same on a laptop: Intel i7-4600M, 2.1GHz dual core, Intel HD Graphics 4600, 8 GB RAM, Ubuntu 14.04 - 64bit, ros-indigo. rviz from repository: 4Hz vs. rviz from source: 9Hz. EDIT I prepared a table containing all build tests I have done. This leads to the question: Are the ros-indigo packages really compiled with compiler optimizations turned on, e.g. as Release? Have a nice day! Originally posted by kluessi on ROS Answers with karma: 73 on 2014-08-13 Post score: 4 Original comments Comment by jorge on 2014-08-13: Out of curiosity, did you try to set PLATFORM_CXX_FLAGS to "${CMAKE_CXX_FLAGS} -std=gnu++0x -Wall -march=native"? According to this, it should add the appropriate -msseN for your platform Comment by kluessi on 2014-08-13: I tried your suggestion. For results please look in my post above. Not much difference. 
Still faster than the repository, but only as fast as without the SSE optimizations turned on. Comment by jorge on 2014-08-13: Makes sense. Thanks! Comment by William on 2014-08-13: Probably related to https://github.com/ros-visualization/rviz/issues/775 Comment by William on 2014-08-13: Also probably related to https://github.com/ros-infrastructure/bloom/issues/277 Comment by kluessi on 2014-08-14: Thanks for the links William. They seem to be related. Answer: Yes; the released packages are compiled with optimizations turned on, but they're compiled for the "standard" x86 or x86_64 machine, which doesn't include the higher SSE optimizations (at least 3, 4 and 4.1, if my memory is correct). The builds done on the build farm do this on purpose because they have to be compatible with all of the possible computers that are supported by Ubuntu. Obviously, if you're willing to use a newer instruction set that isn't portable, you can get faster performance. I suspect you'll see similar performance to the released packages if you compile in release mode with the default optimization flags. Originally posted by ahendrix with karma: 47576 on 2014-08-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by kluessi on 2014-08-13: I compiled rviz again using BUILD_TYPE Release and RelWithDebInfo, both without SSE optimizations. I still get the performance of 30Hz on the desktop and 14Hz. I will prepare a table containing all builds tested and edit my post above in a second. Comment by ahendrix on 2014-08-13: cmake tends to cache environment variables between runs; did you clean your workspace and rebuild when switching flags? Comment by kluessi on 2014-08-14: I performed the tests again, this time deleting the build and devel folders from the workspace. Same results as above. Comment by Martin Günther on 2014-10-07: I think I solved it: https://github.com/ros-visualization/rviz/issues/775
{ "domain": "robotics.stackexchange", "id": 19027, "tags": "ros, kinect, rviz, catkin, build-from-source" }
Why are exploding HHO bubbles louder then pure HHO?
Question: We tried my electrolyzer with a friend today; we filled a small bottle with HHO gas and set it on fire. It was loud but not a big deal, like a firecracker. Then we added some soapy water and created HHO bubbles instead of filling the bottle with HHO underwater. Then we lit the bubbles and it exploded so loud that for a while all we heard was just ringing. It was way too louder than when before, same gas same bottle; we even tried it twice with the same result. I'm wondering why it is so unbelievably louder. I thought that maybe the sound just traveled in a different way, because the bottle was turned in the opposite way (towards the ceiling instead of towards the floor), but I'm not sure. Answer: Some ideas you might want to consider or explore: 1. Bottle You take a 1 litre water bottle made of PET and filled with water. You invert it and place it into a tank of water over electrodes that are producing oxygen and hydrogen gas. You collect the mixture of gases until all the water has been displaced from the bottle. You take the bottle from the water tank, move it to another location, and apply a source of ignition (e.g. a match) to the open mouth of the bottle. You lost some gas while moving the bottle from the tank to the other location (it was replaced by mixing with nitrogen and other gases in the surrounding air). The shape of the bottle limits the spread of the ignition through the gas inside the bottle; the ignition spreads from the opening and travels along the length of the plastic bottle. The bottle itself resists the expansion produced by the burning gases and perhaps limits the rate at which the reaction proceeds. 2. Bubbles You dispense with the bottle, add some detergent to the water and allow the oxygen and hydrogen to form bubbles at the surface. You ignite the bubbles. Some of the hydrogen and oxygen does not form bubbles but is invisibly present around the bubbles. 
This leads you to underestimate the volume produced; you are not good at visually estimating 1 litre of bubbles. You have more than 1 litre of collected gas. The bubbles are not contained in a (moderately) rigid container. This means the reaction can spread faster - producing a more concentrated (louder) sound wave. In General You will see I have had to invent a description of your procedure because you only described it in very vague terms. You did not describe in detail the apparatus you used or the procedure you followed. You did not explain how you measured and verified the volumes of gas involved. I have made a number of hypotheses - you should also. Some of these hypotheses will turn out to be false on testing. I would think about how you can test all these hypotheses. I would think about how I can make good records of everything done. I would ask myself what sort of controls I can introduce. I would think about how I could make a more objective measurement of the loudness of the explosions and how to fix the distance of the measuring equipment from the site of explosion. Hint "way too louder than when before" and "so unbelievably louder" are not SI units :-)
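To put a rough number on why the volume estimate matters, here is a back-of-the-envelope sketch (the 1-litre volume is assumed per the bottle description above; the molar volume and enthalpy of combustion are standard textbook values):

```python
# Back-of-the-envelope: chemical energy in 1 litre of stoichiometric
# H2/O2 mix ("HHO") at roughly 25 °C and 1 atm.
V = 1.0                # litres of gas (assumed, per the bottle above)
molar_volume = 24.45   # L/mol of ideal gas at 25 °C, 1 atm
dH_combustion = 286.0  # kJ/mol for H2 + 1/2 O2 -> H2O(l)

n_total = V / molar_volume  # total moles of gas
n_h2 = n_total * 2.0 / 3.0  # electrolysis gives 2:1 H2:O2 by volume
energy_kj = n_h2 * dH_combustion
print(round(energy_kj, 1))  # about 7.8 kJ per litre of mix
```

So every extra litre of bubbles you failed to account for adds roughly 8 kJ to the bang, which is one concrete way an underestimated volume translates into a surprisingly loud explosion.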
{ "domain": "physics.stackexchange", "id": 25885, "tags": "acoustics, home-experiment, explosions" }
Determining an adiabatic gas coefficient
Question: Is it possible to determine the adiabatic gas coefficient of an ideal gas when measuring the pressure of the gas in an isothermal process? Answer: No. The adiabatic coefficient $\gamma=C_P/C_V$ involves the heat capacities, which are defined $$ C_P = \left(\frac{\partial H}{\partial T}\right)_P, \quad C_V = \left(\frac{\partial U}{\partial T}\right)_V $$ In an isothermal process of an ideal gas $\Delta U = \Delta H = 0$, and $$Q = - W = RT \ln\frac{V_2}{V_1}$$ These results are the same for all ideal gases, and as we see, they don't depend on the heat capacities. Therefore, we cannot extract information about the heat capacities from an isothermal process. But we can do that from an isobaric process. In this case $$ \Delta H = Q = C_P \Delta T\quad \Rightarrow C_P = \frac{Q}{\Delta T} $$ By measuring the amount of heat we obtain $C_P$; then we can get $C_V$ from $C_V=C_P-R$. If the heat capacities depend on temperature we need to (i) conduct this process in small temperature steps ($\Delta T\to dT$) and (ii) repeat it at various temperatures in order to get $C_P$ as a function of $T$.
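A sketch of the isobaric recipe with made-up measurement numbers (chosen to resemble a monatomic ideal gas, for which $C_P = \frac{5}{2}R$):

```python
R = 8.314  # J/(mol K)

def gamma_from_isobaric(Q, n, dT):
    """Estimate gamma from an isobaric heating measurement:
    Cp = Q/(n*dT), Cv = Cp - R (ideal gas), gamma = Cp/Cv."""
    Cp = Q / (n * dT)
    Cv = Cp - R
    return Cp / Cv

# Hypothetical data: heating 1 mol by 10 K at constant pressure
# takes Q = 207.85 J (i.e. Cp = 5R/2, a monatomic ideal gas).
g = gamma_from_isobaric(Q=207.85, n=1.0, dT=10.0)
print(round(g, 3))  # about 1.667, i.e. 5/3
```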
{ "domain": "physics.stackexchange", "id": 95174, "tags": "thermodynamics, gas, adiabatic" }
Change in entropy with micelle formation
Question: Micelles are formed only above the Krafft temperature. We know ∆G = ∆H - T∆S (the Gibbs energy relation). For micelle formation to be spontaneous, ∆G has to be negative. This implies ∆H - T∆S has to be negative. Since spontaneity increases with temperature, this must mean that ∆S is positive. However, this seems counter-intuitive because when micelles form, they come together. Shouldn't this actually decrease the entropy? Answer: You are forgetting that the micelles form in water, which is polar and forms hydrogen bonds. Micelle formation is primarily entropically driven. For isolated surfactant molecules the polar head groups can stabilise water molecules around them, but the hydrocarbon chain is hydrophobic, and so polar water has to bridge over this part of the molecule, forming specific hydrogen bonds to do so. This reduces the water's entropy, as these water molecules are more or less fixed in place. As more surfactant is added, more water is forced into this relatively low entropy state. At and above the CMC (critical micelle concentration) it is entropically more favourable to form a micelle. Lots of water molecules are now freed up to resume their normal behaviour, with an increase in entropy, as the surfactant tails largely cluster together inside the micelle and the polar head groups are on the surface and interact with the water in just the same way as for isolated surfactant molecules. The entropy decrease in forming relatively few micelles is overwhelmed by the entropy increase in releasing numerous water molecules.
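A toy numerical illustration of the sign argument in the question (the ∆H and ∆S values here are made up for illustration, not measured data): with both terms positive, ∆G = ∆H - T∆S switches sign at T = ∆H/∆S, loosely in the spirit of a temperature threshold for micellization.

```python
# Illustrative numbers only (not measured data): a slightly endothermic
# micellization driven by a positive entropy change.
dH = 10_000.0  # J/mol
dS = 35.0      # J/(mol K), positive because water is released

T_onset = dH / dS          # above this T, dG = dH - T*dS < 0
dG_300 = dH - 300.0 * dS   # at 300 K
print(round(T_onset, 1), dG_300)  # 285.7 K, -500.0 J/mol
```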
{ "domain": "chemistry.stackexchange", "id": 17987, "tags": "entropy, free-energy" }
About fixing the potential on the surface of a conductor
Question: In Purcell's Electricity and Magnetism, p. 116, section 3.3, the author discusses Laplace's equation and says that the boundary conditions for the potential $\phi$ on the surface of the conductor may be fixed: In a real system the potentials may be fixed by permanent connections to batteries or other constant-potential "power supplies." Does it mean then that if I had a conductor like this one: I would be able to set the potential $\phi$ on its surface to a given value by connecting it in the way below? Answer: Only if the battery behaves like an ideal voltage source, which means zero internal resistance, or at least internal resistance much less than any load resistance connected to it. A real battery has internal resistance not shown in your diagram. If connected to a load in addition to the conductor, the potential on the conductor will drop. The key term in the statement is “constant-potential” power supply. Hope this helps.
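To put numbers on the internal-resistance point (the component values here are hypothetical):

```python
def terminal_voltage(emf, r_internal, r_load):
    """Terminal voltage of a battery modelled as an ideal EMF in series
    with an internal resistance, driving a resistive load."""
    current = emf / (r_internal + r_load)
    return emf - current * r_internal

# Hypothetical values: a 9 V battery with 1 ohm internal resistance,
# loaded with 9 ohms in addition to the conductor.
v = terminal_voltage(9.0, 1.0, 9.0)
print(v)  # 8.1 V: the conductor no longer sits at the nominal 9 V
```

As the load resistance grows much larger than the internal resistance, the terminal voltage approaches the EMF, which is the "ideal voltage source" limit described above.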
{ "domain": "physics.stackexchange", "id": 60591, "tags": "electromagnetism, electrostatics, potential, conductors" }
Comparison of Semiconductors
Question: I have to compare the semiconductors Si, SiC, and GaN theoretically and with an experiment. I know that I will be making the comparison between the band gap, doping concentration and everything else; the theoretical part is also OK. I don't know how to do it with an experiment, though. Would that be software or an actual experiment, and what should I use? If you have any suggestions or links that could help, I'd be grateful. Thank you. Answer: How you compare these materials highly depends on your goals. If you are working on a thesis, then the first step should be talking with your mentor to get a better picture, but I'll try to give you a few hints. As I am not a theorist, I'll avoid that part with the silent hint to check out density functional theory (DFT for short; it is very popular right now, and in the literature review for my thesis every other paper mentioned it). Now the more exciting part (for me)! Your experimental comparison can lead you in many directions, and I'll mention a few that come to my mind, with experimental techniques that would do at least one part of the job: Transport properties - essentially you want to measure the conductivity of each material and determine the mobilities of majority carriers via the Hall effect. After that, you can introduce defects into the structure (doping), repeat the above measurements and see what happens. You can also determine the lifetime of carriers. Experimental techniques you could start with are XPS (to determine the chemical states of dopants) and SEM (to see the surface of these materials). Optical properties - absorption and transmission measurements can be carried out without much trouble, and standard techniques for that are UV-Vis spectroscopy or PL. To me, it would be very interesting to see how the band gap changes with the thickness of the materials (it is known that the band gap increases as thickness is reduced). 
Also you can determine the dielectric constant and refractive index (this would involve broad-spectrum EM illumination of the material and employment of the Kramers-Kronig relations). As you can see, it very much depends on the goal you have with this comparison, and it is hard to give a specific answer to the question. If you want a theoretical comparison, then you should calculate all the important quantities of these materials; if you want an experimental comparison, try to find out what you have at your disposal and see how you can use it for the work. I hope that I have at least given you some ideas that you can use. Abbreviations: XPS = X-ray photoelectron spectroscopy, SEM = scanning electron microscopy, PL = photoluminescence
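As a tiny quantitative teaser for the comparison (the band-gap values are the usual room-temperature textbook numbers; the density-of-states prefactors are deliberately ignored, so these are only Boltzmann factors, not actual carrier concentrations):

```python
import math

kT = 0.02585  # eV at roughly 300 K

# Standard room-temperature band gaps in eV (textbook values).
band_gaps = {"Si": 1.12, "4H-SiC": 3.26, "GaN": 3.39}

# exp(-Eg / 2kT) is the Boltzmann factor entering the intrinsic carrier
# concentration n_i ~ sqrt(Nc*Nv) * exp(-Eg / 2kT); prefactors ignored.
factor = {m: math.exp(-eg / (2 * kT)) for m, eg in band_gaps.items()}

for m, f in factor.items():
    print(m, f"{f:.1e}")
```

The many-orders-of-magnitude gap between Si and the wide-band-gap materials is why SiC and GaN devices can operate at much higher temperatures before intrinsic conduction swamps the doping, which is one natural axis for the comparison.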
{ "domain": "physics.stackexchange", "id": 83237, "tags": "semiconductor-physics" }
How does a humidifier save on heating?
Question: I have heard that using a humidifier in the winter can help you save money in heating costs. If this is true, I was curious about the physics behind why this is. My thoughts were that since water has a very high specific heat, adding more water into the air would actually require you to use more energy to raise the temperature of the air the same amount. Is it that if you use a humidifier, the perceived temperature is higher? (So maybe 67 degrees F with a humidifier would feel like 70 without, thus saving because you can keep the thermostat down?) If so, then how does humidity affect perceived temperature? Or is this claim wrong? My background: I am a graduate student in combinatorics, a branch of math not very related to physics. I have seen undergraduate physics 1 & 2 in a formal setting, and enjoy learning about physics. Answer: I don't think it results in any savings. Given some interior temperature and outside temperature, heat flows from inside to outside by conduction through the walls. Even if humidity were to change the conductivity of air, heat transfer in most cases (say, from your body to air) is by turbulent convection, so a change in the conductivity of air matters little. Having said that, I am not from a cold country, so it is possible that I have overlooked something subtle.
{ "domain": "physics.stackexchange", "id": 36102, "tags": "thermodynamics, soft-question, applied-physics, humidity" }
Console Application Customizer - GetBootstrap v2.0
Question: GetBootstrap-2.0 update from GetBootstrap-1.0.0.2. I just updated my console application customizer and added some new features, like disabling the minimize, maximize and close buttons, a new color and a simple popup box. Help me review my code. Here is my sample preview of GetBootstrap-2.0: The preview contains Write(), WriteLine(), Popup(), the typewriter effect and colors.

Bootstrap.cs

public partial class Bootstrap
{
    public static void Write(string format, int min = 50, int max = 100, params object[] args)
    {
        format = String.Format(format, args);
        for (int i = 0; i < format.Length; i++)
        {
            Thread.Sleep(TypeWriter.Next(min, max));
            Console.Write(format.Substring(i, 1));
        }
    }

    public static void Write(string format, BootstrapStyle style, BootstrapType type, params object[] args)
    {
        Customize(style, type);
        Console.Write(String.Format(format, args));
        Console.ResetColor();
    }

    public static void Write(string format, int min, int max, BootstrapStyle style, BootstrapType type, params object[] args)
    {
        Customize(style, type);
        Write(format, min, max, args);
        Console.ResetColor();
    }

    #region Method WriteLine...
    public static void WriteLine(string format, int min = 50, int max = 100, params object[] args)
    {
        Write(format, min, max, args);
        Console.WriteLine();
    }

    public static void WriteLine(string format, BootstrapStyle style, BootstrapType type, params object[] args)
    {
        Write(format, style, type, args);
        Console.WriteLine();
    }

    public static void WriteLine(string format, int min, int max, BootstrapStyle style, BootstrapType type, params object[] args)
    {
        Write(format, min, max, style, type, args);
        Console.WriteLine();
    }
    #endregion

    #region Method MessageBox
    public static void Popup(string format, string caption, params object[] args)
    {
        MessageBox.Show(String.Format(format, args), caption);
    }
    #endregion

    #region Method Controller...
    public static void CloseBox(bool enable)
    {
        EnableMenu(GetSystemMenu(GetConsoleWindow(), enable), SC_CLOSE, MF_ENABLED);
    }

    public static void MaximizeBox(bool enable)
    {
        EnableMenu(GetSystemMenu(GetConsoleWindow(), enable), SC_MAXIMIZE, MF_ENABLED);
    }

    public static void MinimizeBox(bool enable)
    {
        EnableMenu(GetSystemMenu(GetConsoleWindow(), enable), SC_MINIMIZE, MF_ENABLED);
    }
    #endregion
}

I decided to create a new file for my Bootstrap design and made it a partial class of Bootstrap.cs.

Bootstrap.Designer.cs

partial class Bootstrap
{
    static Random TypeWriter = new Random();

    const int MF_ENABLED = 0x00000000;
    const int SC_CLOSE = 0xF060;
    const int SC_MAXIMIZE = 0xF030;
    const int SC_MINIMIZE = 0xF020;

    [DllImport("user32.dll")]
    static extern IntPtr GetSystemMenu(IntPtr hWnd, bool bRevert);

    [DllImport("kernel32.dll", ExactSpelling = true)]
    static extern IntPtr GetConsoleWindow();

    [DllImport("user32.dll")]
    static extern int EnableMenu(IntPtr hMenu, int nPosition, int wFlags);

    static void Customize(BootstrapStyle style, BootstrapType type)
    {
        switch (type)
        {
            case BootstrapType.Success:
                Console.ForegroundColor = ConsoleColor.Green;
                if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkGreen; }
                break;
            case BootstrapType.Info:
                Console.ForegroundColor = ConsoleColor.Cyan;
                if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkCyan; }
                break;
            case BootstrapType.Warning:
                Console.ForegroundColor = ConsoleColor.Yellow;
                if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkYellow; }
                break;
            case BootstrapType.Danger:
                Console.ForegroundColor = ConsoleColor.Red;
                if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkRed; }
                break;
            case BootstrapType.Magenta:
                Console.ForegroundColor = ConsoleColor.Magenta;
                if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkMagenta; }
                break;
            case BootstrapType.Cobalt:
                Console.ForegroundColor = ConsoleColor.Blue;
                if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkBlue; }
                break;
            default:
                Console.ForegroundColor = ConsoleColor.Gray;
                if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkGray; }
                break;
        }
    }
}

#region Enum...
public enum BootstrapStyle { Default, Alert }
public enum BootstrapType { Default, Success, Info, Warning, Danger, Magenta, Cobalt }
#endregion

If you want to try it, I have a download link below ("Tutorial Inside"). You can download and edit the project if you want. Updates: Added New Color, Added Simple Popup, Added Disable Minimize, Added Disable Maximize, Added Disable Close Button. Download Link

Answer: A Bit About OOP. There's a really nasty switch statement in your Designer:

switch (type)
{
    case BootstrapType.Success:
        Console.ForegroundColor = ConsoleColor.Green;
        if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkGreen; }
        break;
    case BootstrapType.Info:
        Console.ForegroundColor = ConsoleColor.Cyan;
        if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkCyan; }
        break;
    case BootstrapType.Warning:
        Console.ForegroundColor = ConsoleColor.Yellow;
        if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkYellow; }
        break;
    case BootstrapType.Danger:
        Console.ForegroundColor = ConsoleColor.Red;
        if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkRed; }
        break;
    case BootstrapType.Magenta:
        Console.ForegroundColor = ConsoleColor.Magenta;
        if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkMagenta; }
        break;
    case BootstrapType.Cobalt:
        Console.ForegroundColor = ConsoleColor.Blue;
        if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkBlue; }
        break;
    default:
        Console.ForegroundColor = ConsoleColor.Gray;
        if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkGray; }
        break;
}

In order to add any new styles to your code, you potentially have to make changes to three
places in your code. Minimally, you have to add it to your enum and your switch. A little OOP can make this much easier to deal with, particularly because these cases all do a very similar thing. You could extend this indefinitely and easily by inheriting from your Bootstrap class and overriding Customize. First, you would need to change the signature of Customize in the base class so that we can override the method (note that a method cannot be both static and virtual in C#, so Customize has to become an instance method). Second, change it so that only the default case gets executed here. public virtual void Customize(BootstrapStyle style) { Console.ForegroundColor = ConsoleColor.Gray; if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkGray; } } Note that I also removed the type parameter, as we're replacing it with inherited classes. Now we can go about implementing child classes like so. public class SuccessBootstrap : Bootstrap { public override void Customize(BootstrapStyle style) { Console.ForegroundColor = ConsoleColor.Green; if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkGreen; } } } public class InfoBootstrap : Bootstrap { public override void Customize(BootstrapStyle style) { Console.ForegroundColor = ConsoleColor.Cyan; if (style == BootstrapStyle.Alert) { Console.BackgroundColor = ConsoleColor.DarkCyan; } } } And so on. With this method of creating new coloring schemes, you don't even have to open up the Bootstrap file. You just create a new child class and override Customize. There is one more refactoring that should probably happen, though. The "if style == alert then set background color" logic gets repeated in each and every one of these overrides. It could be simplified by extracting this logic into a protected method of your base class.
protected void SetBackgroundColor(ConsoleColor backgroundColor, BootstrapStyle style) { if(style == BootstrapStyle.Alert) { Console.BackgroundColor = backgroundColor; } } Which simplifies your child class implementations down to two dead simple lines of code. public class SuccessBootstrap : Bootstrap { public override void Customize(BootstrapStyle style) { Console.ForegroundColor = ConsoleColor.Green; base.SetBackgroundColor(ConsoleColor.DarkGreen, style); } } If you should decide to add a third style, you only need to update the logic in the base class.
{ "domain": "codereview.stackexchange", "id": 11269, "tags": "c#, console" }
Why isn't a bacterial colony a monolayer?
Question: I am an engineer entering the field of microbiology. I was watching bacterial colonies (lab strain E. coli to be specific) being grown on agar plates. I was surprised that the bacterial colony was not a monolayer (i.e. a single layer of cells), but a mound with bacteria piled on top of each other (except maybe at the edge of the colony). Why don't bacteria spread out in a monolayer over the entire agar surface? And how do bacteria in the pile which are not in direct contact with the agar survive? Is it that nutrients are passed upwards by those cells which are in direct contact with the agar surface? Answer: As suggested by your conversation in the comments with anon, the bacterial colony is the outcome of a mechanistic growth model in which nutrient diffusion occurs from below to above. For an explicit mathematical model of this process (which also incorporates mechanical forces), you can see this paper (Warren et al. 2019). Notably, this mounding occurs even in the absence of an extracellular matrix, and is proposed to be related to cell orientations within the colony. I reproduce Figure 4, which shows some of the relevant results: The cross-sectional anatomy of a simulated colony. (A) Snapshot of a cross-sectional view of the colony at t = 20 h. Cyan represents horizontally oriented cells (≥45° with the z-axis); yellow represents vertically oriented cells (≤45° with the z-axis). (B) Fraction of vertically oriented cells averaged over z vs. radius. (C) A side view of the azimuthally averaged director field, indicating the orientation of the rod-like cells. (D) A side view of the azimuthally averaged velocity field. (E) Vertical component of velocity, Vz, at various values of z along the center of the colony. An increase in vertical speed is seen only for the bottom 10 µm. (F) A cross-sectional view of the colony, color representing the time since last division.
Purple and blue represent cells that have not divided for the past 10 h, and red represents the actively dividing cells. (G) A cross-sectional view of the local growth rate in the colony, with the color bar showing the values of local growth rate. A disc-shaped 'growth zone' is revealed by the red color at the bottom of the colony. Other similar models describe "verticalization" of bacterial colonies as an explicitly mechanical process (Beroz et al 2018), without regard for nutrient flow. However, differences in these models lead to qualitatively different results for how tall colonies get.
{ "domain": "biology.stackexchange", "id": 12216, "tags": "bacteriology, morphology, growth-media" }
What crystals are being used in this project to detect dark matter?
Question: I recently read an article about "groundbreaking Australian research in the hunt for dark matter", in which it said: We use highly purified crystals to detect dark matter. These crystals react when they are struck by dark matter particles and have to kept underground from other forms of radiation so we don’t get false readings. The Stawell Underground Physics Laboratory (SUPL) is being built 1025 meters below ground. Once we have results, we’ll compare those with the results from detectors in other parts of the world. I'm interested to know what these crystals might be composed of, and what kind of "reaction" is expected - for example, a change in energy level, or momentum? The research is being undertaken through the Centre of Excellence for Particle Physics at the Terascale (CoEPP). This CoEPP web page hints at the interaction they're looking for, but gives no indication of what the crystals are: Uncovering dark matter at the LHC is a nontrivial task, because it will not interact with the detectors at all. Instead, we can hope to infer the presence of dark matter particles created at the LHC by ascertaining that something is missing. A spectacular signal is that of the mono-X (where 'X' can signify γ, Z or jets). In this process, a single Standard Model particle recoils against missing momentum attributed to dark matter particles which escape the detector unseen. CoEPP researchers have performed some preliminary work on mono-Z signals. This work will continue, with a focus on going beyond the effective field theory description by instead looking at simple UV complete models. I looked through Physics Stack Exchange for potential answers, but the only relevant questions seem to focus on liquid xenon: https://physics.stackexchange.com/a/476052/113185 https://physics.stackexchange.com/a/219144/113185 Answer: Sodium iodide. 
The experiment at the Stawell underground lab in Australia is part of the SABRE experiment, where SABRE is an acronym for Sodium Iodide with Active Background Rejection Experiment. This page describes the detectors. The heart of the SABRE detector is high radio-purity thallium-doped sodium-Iodide (NaI(Tl)) scintillating crystals. Each collision between a Dark Matter particle and a nucleus releases a small amount of energy (<100 keV) that is converted into light.
{ "domain": "physics.stackexchange", "id": 61315, "tags": "dark-matter, wimps" }
How to generate and visualize results of all possible parameter values in Python?
Question: I have two parameters: demand, ticket_qty. demand is an integer and can have a possible range of [100 - 200] ticket_qty is also an integer and can have a possible range of [0 - 200] I want to visualize how values for the above parameters affect the functions below: price = demand - ticket_qty revenue = price * ticket_qty In Excel I created two matrices (price and revenue) with demand as row indexes and ticket_qty as column indexes and simply applied the formula for each cell based on the rules above. However, seeing as this is a Kaggle microchallenge, I'd like to perform this using Python so as to then be able to plot and show optimal levels of price/revenue and do all my work inside the Jupyter notebook. Questions: How would I go about doing this in Python? pandas? numpy? In statistics/data science, what is this "process" of mapping out possible values called? So as to facilitate further research. Answer: It's probably fastest to do this using numpy, in which you can define the possible ranges of your values using numpy.arange as follows: import numpy as np demand = np.arange(100, 201, 1) # integers from 100-200 inclusive ticket_qty = np.arange(0, 201, 1) # integers from 0-200 inclusive If you then want to calculate the possibilities using the different values, you can use the .outer method of the different universal numpy functions (e.g. numpy.subtract to subtract values) to easily get the results without looping over the arrays.
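As a sketch of how that could look in practice (building on the arrays above; np.subtract.outer produces the full price matrix, and broadcasting then gives the revenue matrix):

```python
import numpy as np

demand = np.arange(100, 201)    # integers 100-200 inclusive
ticket_qty = np.arange(0, 201)  # integers 0-200 inclusive

# price[i, j] = demand[i] - ticket_qty[j] for every parameter combination
price = np.subtract.outer(demand, ticket_qty)

# revenue[i, j] = price[i, j] * ticket_qty[j]; broadcasting applies
# ticket_qty across the columns of the price matrix
revenue = price * ticket_qty

# Locate the (demand, ticket_qty) pair with the highest revenue
i, j = np.unravel_index(np.argmax(revenue), revenue.shape)
print(demand[i], ticket_qty[j], revenue[i, j])  # 200 100 10000
```

The resulting matrix can then be plotted directly, e.g. as a heatmap. As for terminology, evaluating a function over an exhaustive grid of parameter values like this is commonly called a parameter sweep or grid search.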
{ "domain": "datascience.stackexchange", "id": 8791, "tags": "python, statistics" }
Vis viva equation - finding how a change in velocity changes the semi-major axis - math trick
Question: I'm reading about how to deorbit. I stumbled upon a trick here. In the question, we wish to find how a change in orbital speed changes the semi-major axis. So in the answer we start with the vis viva equation: $$ v^2 = GM(\frac2r-\frac1a)$$ and assume we have a circular orbit, $r = a$. Then we rewrite the equation: $$ v^2 = GM\frac1a.$$ Here comes the trick I don't understand. We now rewrite that equation to: $$2v \thinspace dv = \frac{GM}{a^2} da.$$ How is that possible? I would love to know how this is done. I can see it makes sense, since integrating restores the original equation. It also seems like he could be multiplying with $\frac{d}{dv\thinspace da}$ on both sides, but now I'm just guessing. Answer: This is a typical exercise in calculus, so I would recommend brushing up. Given your equation: $$v^2 = GM\left( \frac{2}{r} - \frac{1}{a}\right)$$ you want to ask how each side changes under a small change in the variables, i.e. you apply the differential operator $d$. $$\frac{d(v^2)}{dv} = 2v \iff d(v^2) = 2vdv$$ $$\frac{d}{da} GM\left(\frac{2}{r} - \frac{1}{a}\right) = \frac{GM}{a^2} \iff d\left(GM\left(\frac{2}{r} - \frac{1}{a}\right)\right) = \frac{GMda}{a^2}$$.
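A quick numerical check of this first-order relation (a sketch with assumed example values for an Earth orbit; note that during an impulsive burn the instantaneous radius r is held fixed while a changes, which is why the 2/r term contributes nothing):

```python
import math

GM = 3.986e14  # Earth's gravitational parameter in m^3/s^2 (assumed value)
r = 7.0e6      # instantaneous orbital radius in m, held fixed during the burn
a = r          # start on a circular orbit, so a = r
da = 1.0       # small change in the semi-major axis, in m

def speed(a_):
    # vis viva: v^2 = GM * (2/r - 1/a)
    return math.sqrt(GM * (2.0 / r - 1.0 / a_))

v = speed(a)
dv = speed(a + da) - v   # numerical dv produced by the change da

lhs = 2.0 * v * dv       # 2 v dv
rhs = GM / a**2 * da     # (GM / a^2) da
print(lhs, rhs)          # the two sides agree to first order in da
```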
{ "domain": "physics.stackexchange", "id": 66065, "tags": "orbital-motion, differentiation, celestial-mechanics, calculus" }
Rate of O exchange with H2O
Question: We have to find the order of the rate of exchange of O with H2O for the following compounds $\ce{CCl3CHO}$ $\ce{CH3CHO}$ $\ce{CH3COCH3}$ $\ce{CF3CHO}$ According to me, the rate of exchange should be higher when the electron density on the carbonyl carbon is lower. But how will we compare 1 and 2? In the first compound there is hydrogen bonding (chloral hydrate). Answer: When you talk about the rate of exchange of oxygen with water, it might be instructive to realise that any aldehyde or ketone in the presence of water is actually in equilibrium with the hydrate, as shown below. Source: UC Davis There are a few ways we know that this is indeed an equilibrium process, but the main method that was used to fully elucidate the mechanism was by using isotopically labelled water containing 18O (the most abundant isotope of oxygen is 16O). By doing this the scientists could observe the 16O in the starting aldehyde being slowly replaced with 18O from the enriched water. Given that an equilibrium is involved, there are actually two pertinent questions... What is the position of the equilibrium? How fast do we reach the equilibrium? The table below might be of some use to you. On the left, you can see a variety of carbonyl compounds, and on the right, the equilibrium constants for the hydration. Two of the compounds you listed (acetaldehyde and trifluoroacetaldehyde) are listed in the table, and you can see that the acetaldehyde is significantly less hydrated at equilibrium than the trifluoroacetaldehyde. Source: J. Am. Chem. Soc. 1983, 105, 868 (American Chemical Society) From the table, some trends can be picked out (excluding formaldehyde, which is almost exclusively hydrated). There is a relationship here between kinetics and thermodynamics (the more favourable a compound is to form the hydrate, the faster it will form it). Aldehydes are far more likely to be hydrated than ketones. This is largely though not exclusively a steric effect.
Electron withdrawing/electronegative substituents favour hydration. On a very simple level this is due to the fact that they make the carbonyl more electron poor, thus promoting the addition of water via its lone pair. The order should therefore be 4>1>>2>3 (fluorine is more electron withdrawing than chlorine).
{ "domain": "chemistry.stackexchange", "id": 6020, "tags": "organic-chemistry, carbonyl-compounds, alcohols, covalent-compounds" }
rgbdslam linking error
Question: Hi all, I'm trying to build rgbdslam under Ubuntu 11.10 and ROS Electric. I'm following these instructions (point 2). However, it fails with the following error: [100%] Building CXX object CMakeFiles/rgbdslam.dir/src/moc_ros_service_ui.o Linking CXX executable ../bin/rgbdslam /usr/bin/ld: CMakeFiles/rgbdslam.dir/src/graph_manager.o: undefined reference to symbol 'g2o::MatrixStructure::~MatrixStructure()' /usr/bin/ld: note: 'g2o::MatrixStructure::~MatrixStructure()' is defined in DSO /home/serl-3d/Software/ros-stacks/g2o/lib//libg2o_core.so so try adding it to the linker command line /home/serl-3d/Software/ros-stacks/g2o/lib//libg2o_core.so: could not read symbols: Invalid operation collect2: ld returned 1 exit status make[3]: *** [../bin/rgbdslam] Error 1 make[3]: Leaving directory `/home/serl-3d/Software/ros-stacks/rgbdslam/build' make[2]: *** [CMakeFiles/rgbdslam.dir/all] Error 2 make[2]: Leaving directory `/home/serl-3d/Software/ros-stacks/rgbdslam/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/serl-3d/Software/ros-stacks/rgbdslam/build' make: *** [all] Error 2 How can I fix it? Thanks Originally posted by LucaGhera on ROS Answers with karma: 128 on 2012-01-15 Post score: 0 Answer: Hi, something changed in Ubuntu 11.10 such that new library dependencies are required. I am currently updating our Google Code Subversion repository. I haven't tested it thoroughly, but feel free to give it a try. Originally posted by Felix Endres with karma: 6468 on 2012-01-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by LucaGhera on 2012-01-17: @Felix Endres Now it compiles and works...thanks! Comment by Felix Endres on 2012-01-17: Forgot to check in the service definition. Sorry. Please try again Comment by LucaGhera on 2012-01-16: rgbdslam/rgbdslam_ros_ui.h: No such file or directory Comment by LucaGhera on 2012-01-16: @Felix Endres Thanks.
However, it still fails: Building CXX object CMakeFiles/rgbdslam.dir/src/main.o In file included from /home/serl-3d/Software/ros-stacks/rgbdslam_freiburg/rgbdslam/src/main.cpp:24:0: ~/Software/ros-stacks/rgbdslam_freiburg/rgbdslam/src/ros_service_ui.h:24:38: fatal error:
{ "domain": "robotics.stackexchange", "id": 7900, "tags": "slam, navigation, rosmake" }
Does allowing a mutable transition function in a Turing machine make it more powerful?
Question: As the title says, does having a mutable transition function make the Turing machine more powerful? By mutable I mean that we have a set of transition functions, from which one is chosen at each step, each of which is defined on the current state and symbol. In the definition of a Turing machine from Wikipedia, instead of $\delta$ we have a set $\Delta = \{\delta_1, \delta_2, \delta_3, ..., \delta_n\} $ that is a subset of all possible transition functions, and $O = \{o_1, o_2, o_3, ...\}$ where $o_i = \delta_j$. Here $\delta : \mathbb{N} \rightarrow O$ with $\delta(i) = o_i$ is the actual transition function, each $o_i$ being defined on the current state and symbol, and the machine starts from $\delta(1)$ and then uses $\delta(2),\delta(3),...$ Answer: First, let's simplify your definition of a "hyper-Turing machine". A hyper-Turing machine is a 7-tuple $M = (Q, \Gamma, b, \Sigma, \delta, q_0, F)$, where $\Gamma$ is a finite set of tape alphabet symbols $b \in \Gamma$ is the "blank symbol" $Q$ is a finite set of states $\Sigma \subseteq \Gamma \setminus \{b\}$ is a finite set of input symbols $q_0 \in Q$ is the initial state $F \subseteq Q$ is a finite set of "accepting states" $\delta : \mathbb{N} \times (Q \setminus F) \times \Gamma \to Q \times \Gamma \times \{L, R\}$ is the "transition function" Note that in OP's original question, we instead had some finite set $\Delta \subseteq (Q \setminus F) \times \Gamma \to Q \times \Gamma \times \{L, R\}$ and some function $\delta : \mathbb{N} \to \Delta$. However, without loss of generality, we can always take $\Delta = (Q \setminus F) \times \Gamma \to Q \times \Gamma \times \{L, R\}$, since this set is finite. Then we have $\delta : \mathbb{N} \to (Q \setminus F) \times \Gamma \to Q \times \Gamma \times \{L, R\}$, but by Currying, this is equivalent to $\delta : \mathbb{N} \times (Q \setminus F) \times \Gamma \to Q \times \Gamma \times \{L, R\}$. The semantics are that the hyper-Turing machine keeps track of the number of transitions it has made so far.
If the machine has made $n$ transitions so far, the symbol at the head is $v \in \Gamma$, and the current state is $q \in Q \setminus F$, then let $(q', x', d) = \delta(n, q, v)$. The Turing machine first replaces the character at the head location with the character $x'$. It then moves the head in direction $d$. It then transitions to state $q'$. It turns out that: Thm. Any function $\mathbb{N} \to \mathbb{N}$ can be computed by a hyper-Turing machine. What does it mean for a function $f : \mathbb{N} \to \mathbb{N}$ to be computed by a hyper-Turing machine $M$? It means that there is some hyper-Turing machine over input alphabet $\Sigma = \{1\}$ such that for all $n \in \mathbb{N}$, if the hyper-Turing machine is run on input $1^n$, then the Turing machine eventually halts, and when it halts, the tape has $1^{f(n)}$ on it. Proving this is quite simple. Simply consider the following pseudocode: For each $i \in \mathbb{N}$: 1: Walk $i$ steps to the right. Then, walk $i$ steps back to the left. From this information, we know whether the string is $1^i$. 2: If the string is $1^i$, then erase the string, write the string $1^{f(i)}$, and halt, taking a total of $i + f(i)$ steps. Otherwise, simply walk back and forth for a total of $2 \cdot (i + f(i))$ steps, then continue with the loop. So all functions $\mathbb{N} \to \mathbb{N}$ are hyper-Turing computable. Using a bit more creativity, it's possible to extend the original argument to show that given any alphabet $\Sigma$, all functions $\Sigma^* \to \Sigma^*$ are hyper-Turing computable. This, of course, involves pushing across a bijection $\Sigma^* \to \mathbb{N}$, though it requires a bit more creativity than the last case. So all functions are hyper-Turing computable. Hence, hyper-Turing computability is a rather useless notion.
{ "domain": "cs.stackexchange", "id": 19208, "tags": "turing-machines, hypercomputation" }
Multiplication by a Haar random unitary two times
Question: Consider a Haar random unitary $U$. I am trying to compute the value (or put a bound on) \begin{equation} \mathbb{E}\left[\left|\langle 0^{n} |U^{2} |0^{n}\rangle\right|^{2}\right]. \end{equation} The expectation is taken over the choice of the circuit. We know that \begin{equation} \mathbb{E}\left[\left|\langle 0^{n} |U |0^{n}\rangle \right|^{2}\right] = \frac{1}{2^{n}}. \end{equation} Multiplication by another unitary $U$ should "scramble" the probability even more, but what might be a way to prove that? Answer: There is an explicit formula for the integral with respect to the Haar measure of any polynomial in the entries of a unitary and its conjugate, due to Collins and Śniady: Benoît Collins and Piotr Śniady. Integration with Respect to the Haar Measure on Unitary, Orthogonal and Symplectic Group. Communications in Mathematical Physics 264: 773-795, 2006. [arXiv:math-ph/0402073] I won't actually write down the formula in this answer, it can be found in the paper. For high-degree polynomials you need to know some things about the representation theory of the symmetric group to evaluate it, but for low-degree polynomials it's easy enough to just look up the required values. Let me rewrite the question using notation that matches with Collins and Sniady: we'll assume the rows and columns of the unitary $U$ we're integrating over are indexed by the integers $1,\ldots,d$, so $d = 2^n$, and we'll denote the $(i,j)$ entry of $U$ by $U_{i,j}$. The question asks for the value $$ \int \bigl\vert \bigl(U^2\bigr)_{1,1} \bigr\vert^2 \mathrm{d}U $$ where the integral is with respect to Haar measure for $d\times d$ unitary matrices. We need to rewrite the integrand. We have $$ \bigl(U^2\bigr)_{1,1} = \sum_{i=1}^d U_{1,i} U_{i,1} $$ and therefore $$ \bigl\vert \bigl(U^2\bigr)_{1,1} \bigr\vert^2 = \sum_{i=1}^d\sum_{j=1}^d U_{1,i} U_{i,1} \overline{U_{1,j}} \overline{U_{j,1}}. 
$$ From the formula of Collins and Sniady, we can conclude this: $$ \int U_{1,i} U_{i,1} \overline{U_{1,j}} \overline{U_{j,1}} \, \mathrm{d}U = \begin{cases} \frac{2}{d(d+1)} & i = j = 1\\ \frac{1}{d^2-1} & i = j \not= 1\\ 0 & i\not=j. \end{cases} $$ Therefore, the value we're looking for is $$ \int \bigl\vert \bigl(U^2\bigr)_{1,1} \bigr\vert^2 \mathrm{d}U = \frac{2}{d(d+1)} + (d-1) \frac{1}{d^2 - 1} = \frac{d+2}{d(d+1)}. $$ Switching back to the notation in the original question gives $$ \mathbb{E}\Bigl[ \bigl\vert \langle 0^n \vert U^2 \vert 0^n\rangle \bigr\vert^2\Bigr] = \frac{2^n + 2}{2^n(2^n+1)} = \frac{2^{n-1} + 1}{2^{n-1}(2^n+1)}, $$ which is ever-so-slightly larger than $1/2^n$. So, unless I made a mistake in this calculation (which is a definite possibility — it should be checked carefully), the intuition that applying a unitary twice scrambles the probability more is wrong: it scrambles it slightly less. It's kind of like vitamins: taking twice as much as you need isn't better for you, it's actually a little bit worse.
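As a sanity check on the calculation, the expectation can be estimated by Monte Carlo. The sketch below (not part of the original answer) samples Haar-random unitaries via QR decomposition of a complex Gaussian matrix with the usual phase correction, and compares the empirical average to the formula above:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Sample a Haar-random d x d unitary via QR of a complex Ginibre matrix."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Multiply each column of q by the phase of the corresponding diagonal
    # entry of r, so that the resulting distribution is exactly Haar.
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases

d = 8            # i.e. n = 3 qubits
samples = 20000
vals = [abs((U @ U)[0, 0]) ** 2 for U in (haar_unitary(d) for _ in range(samples))]

estimate = float(np.mean(vals))
exact = (d + 2) / (d * (d + 1))  # the formula derived above
print(estimate, exact)
```

For d = 8 the exact value is 10/72 ≈ 0.1389, noticeably above 1/d = 0.125, consistent with the conclusion that applying the unitary twice scrambles slightly less.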
{ "domain": "quantumcomputing.stackexchange", "id": 3880, "tags": "random-quantum-circuit, haar-distribution" }
Read project settings in ASP.NET MVC
Question: I have a table in which I store all settings; it looks like this: public class Setting:IEntity { public string Key { get; set; } public string Value { get; set; } } and I have a service like this to update and read the Setting table: public class SettingService : ISettingService { #region Fields private readonly IUnitOfWork _uow; private readonly IDbSet<Setting> _settings; private static readonly ConcurrentDictionary<string, object> _cash = new ConcurrentDictionary<string, object>(); #endregion #region Methods #region ctor public SettingService(IUnitOfWork uow) { _uow = uow; _settings = _uow.Set<Setting>(); if (_cash.IsEmpty) lock (_cash) { if (_cash.IsEmpty) _settings.ToList().ForEach(item => _cash.TryAdd(item.Key, item.Value)); } } #endregion public T Get<T>() where T : ISetting { object value; var setting = Activator.CreateInstance<T>(); var prefix = typeof(T).Name; foreach (PropertyInfo item in typeof(T).GetProperties()) { string key = $"{prefix}.{item.Name}"; _cash.TryGetValue(key, out value); if (item.PropertyType == typeof(Boolean)) { bool result; Boolean.TryParse(value?.ToString(), out result); item.SetValue(setting, result); } else item.SetValue(setting, value); } return setting; } public void Set<T>(T model) where T : ISetting { var prefix = typeof(T).Name; Type type = typeof(T); foreach (PropertyInfo prop in typeof(T).GetProperties()) { var key = $"{prefix}.{prop.Name}"; var setting = _settings.FirstOrDefault(row => row.Key == key); var isAddedd = true; if (setting == null) { setting = new Setting { Key = key }; _settings.Add(setting); _uow.MarkAsAdded(setting); isAddedd = false; } setting.Value = prop.GetValue(model, null)?.ToString() ??
string.Empty; if (isAddedd) _uow.MarkAsChanged(setting); _cash.AddOrUpdate(key, setting.Value, (oldkey, oldValue) => setting.Value); } } #endregion } I use this service like below: var data = _settingService.Get<AboutSetting>(); // when I want to fetch from the db _settingService.Set<AboutSetting>(aboutUsViewModel); // for update Now I need to read all project settings; in some views I need just some of them, like Address, Tel, ... I have created some classes like below: public static class CompanyConfig { private static CompanyInformationSetting _companySettings; private static ISettingService _settingService; static CompanyConfig() { _settingService = ApplicationObjectFactory.Container.GetInstance<ISettingService>(); _companySettings = _settingService.Get<CompanyInformationSetting>(); } public static string CompanyAddress { get { return _companySettings.Address; } } } and use them in a view like: <h2> Address : @(CompanyConfig.CompanyAddress) </h2> Is there a better way than this? Is this way bad for performance?
Another way is to change BaseViewPage as below and set some properties: public class BaseViewPage<TModel> : WebViewPage<TModel> { private readonly ISettingService _settingService; public BaseViewPage() { _settingService = ApplicationObjectFactory.Container.GetInstance<ISettingService>(); } public override void Execute() { } public string CompanyName { get { return _settingService.Get<CompanyInformationSetting>("CompanyName"); } } public string CompanyPhoneNumber { get { return _settingService.Get<CompanyInformationSetting>("PhoneNumber"); } } public string CompanyEmail { get { return _settingService.Get<CompanyInformationSetting>("Email"); } } public string CompanyAddress { get { return _settingService.Get<CompanyInformationSetting>("Address"); } } } This way we don't need a static class, and it solves the singleton lifetime of the unit of work.
{ "domain": "codereview.stackexchange", "id": 27045, "tags": "c#, mvc, asp.net-mvc" }
vision_opencv can't be found by ROS but it's installed
Question: I was trying to make a package using catkin_create_pkg, but it gives me an error for a package I have. My command was: catkin_create_pkg opencv_ros sensor_msgs cv_bridge roscpp std_msgs vision_opencv and it says that it was successful. After I run this command: rospack depends1 opencv_ros I get this error: [rospack] Error: package 'opencv_ros' depends on non-existent package 'vision_opencv' and rosdep claims that it is not a system dependency. Check the ROS_PACKAGE_PATH or try calling 'rosdep update' The problem is that the package exists. If I run the command: roscd vision_opencv it takes me to this location /opt/ros/indigo/share/vision_opencv and the folder of the package is next to all the other packages. The only thing that looked weird is that the folder has only a package.xml file in it. What I want to be able to do is use OpenCV in ROS. Say that I only want to use OpenCV functions without moving images from one node to another (all the loading and processing in one file); what else do I need besides the vision_opencv package (and making changes in CMakeLists.txt and package.xml)? I am using Ubuntu 14.04 64-bit with ROS Indigo. I use the catkin system. Originally posted by Metalzero2 on ROS Answers with karma: 293 on 2014-08-26 Post score: 1 Answer: The essential problem is that vision_opencv is a metapackage, not a package. A package should never depend on a metapackage; in fact, it cannot. There is a great discussion about why this is the case here. In your package, depending on cv_bridge should be enough. Note that the package.xml in cv_bridge adds a bunch of system dependencies (such as libopencv-dev). Originally posted by jarvisschultz with karma: 9031 on 2014-08-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Metalzero2 on 2014-08-27: @jarvisschultz Yes, now I see the problem.
Now when I was building some simple code it works fine (by using cv_bridge only), but when I try to add some other code with SURF or SIFT (from OpenCV) it gives me an error. Where do I need to look to solve this problem (I mean in what file)? Comment by jarvisschultz on 2014-08-27: I'm guessing that your issue is related to SURF and SIFT moving to a "nonfree" module that doesn't seem to be installed with the apt-get packages on 14.04. See this question/answer Comment by Metalzero2 on 2014-08-27: @jarvisschultz The same code works fine when I run it in "Code::Blocks" and the "nonfree" header is in my OpenCV folder. Will it make a difference if I build ROS from source instead of installing it the recommended way (according to the tutorials)?
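A sketch of what the package.xml dependency entries could look like after following this advice — format 1, as catkin_create_pkg generates on Indigo; the version, description, and maintainer values are placeholders, not taken from the original package:

```xml
<?xml version="1.0"?>
<package>
  <name>opencv_ros</name>
  <version>0.0.1</version>
  <description>Uses OpenCV inside a single ROS node</description>
  <maintainer email="user@example.com">user</maintainer>
  <license>BSD</license>

  <buildtool_depend>catkin</buildtool_depend>

  <!-- Depend on the concrete package cv_bridge,
       not the vision_opencv metapackage -->
  <build_depend>roscpp</build_depend>
  <build_depend>sensor_msgs</build_depend>
  <build_depend>std_msgs</build_depend>
  <build_depend>cv_bridge</build_depend>

  <run_depend>roscpp</run_depend>
  <run_depend>sensor_msgs</run_depend>
  <run_depend>std_msgs</run_depend>
  <run_depend>cv_bridge</run_depend>
</package>
```

With cv_bridge listed, rosdep pulls in the OpenCV system dependencies, so no direct reference to vision_opencv is needed.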
{ "domain": "robotics.stackexchange", "id": 19199, "tags": "ros, opencv, opencv2, vision-opencv, catkin-create-pkg" }
What does "$\gg$" mean in physics problems?
Question: In the context of this: Our goal in this subsection will be to obtain $r$ as a function of $θ$, for a gravitational potential. The gravitational potential energy between two objects, of masses $M$ and $m$. In the present treatment, let us consider the mass M to be bolted down at the origin of our coordinate system. This is approximately true in the case where $M\gg m$, as in the earth-sun system. I have seen this symbol ($\gg$ or $\ll$) multiple times in physics problems, but I have no clue what it means. I have heard that it's a limit statement of some sort, but I just want to double check how it works. Answer: $`` >> "$ or $`` \gg "$ means "much greater than". $`` << "$ or $`` \ll "$ means "much less than". Typically, if $x \gg y ,$ then the following are considered to apply:$$ \begin{alignat}{7} y & \ll x \tag{1} \\[10px] x + y & \approx x \tag{2} \\[10px] x - y & \approx x \tag{3} \\[10px] \frac{y}{x} & \approx 0 \tag{4} \\[10px] \left|\frac{x}{y}\right| & \approx \infty \tag{5} \end{alignat} $$It's a bit of a fuzzy, approximate logic, so it should be used with care. Still, it's often convenient to say that one value is so much larger/smaller than another that we can simplify functions involving them without knowing their exact values. Some care should be taken with signs. For example, it's technically true that $-{10}^{10} \ll 1 \,,$ though $\left| -{10}^{10} \right| \gg \left|1\right| \,,$ which might cause some confusion. In the case of mass, as given in the question statement, this is dodged as mass is always positive.
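The approximations (1)-(5) can be checked numerically; here is a quick sketch in Python, with x and y chosen arbitrarily for the demonstration:

```python
# Illustrating the x >> y rules of thumb with x = 1e17, y = 1.0.
# At this ratio, y is even below the rounding precision of a double,
# so the "approximately equal" relations become exact in floating point.
x, y = 1e17, 1.0

assert x + y == x          # (2): x + y ≈ x
assert x - y == x          # (3): x - y ≈ x
assert y / x < 1e-16       # (4): y/x ≈ 0
assert x / y > 1e16        # (5): |x/y| is enormous

# The sign caveat from the answer: -1e10 << 1 as numbers,
# even though |-1e10| >> |1| in magnitude.
assert -1e10 < 1
assert abs(-1e10) > abs(1)
```

For less extreme ratios the relations hold only approximately, which is the "fuzzy logic" caution in the answer.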
{ "domain": "physics.stackexchange", "id": 57651, "tags": "notation" }
Can a decision tree learn to solve a xOR problem?
Question: I have read online that decision trees can solve xOR type problems, as shown in images (xOR problem: 1) and (Possible solution as decision tree: 2). My question is how can a decision tree learn to solve this problem in this scenario. I just don't see a way for any metric (Information gain, Gini Score, ...) to choose one of the splits in image 2 over any other random split. Is it possible to solve the presented problem with a decision tree? Would using a random forest solve the problem in any way? Thank you in advance. Answer: Yes, a decision tree can learn an XOR. I have read online that decision trees can solve xOR type problems... Often things are phrased not carefully enough. A neural network can perfectly sort a list of integers, but training one to do that would be quite hard. Your image shows that a tree can easily represent the XOR function, but your question is how to learn such a tree structure. My question is how can a decision tree learn to solve this problem in this scenario. I just don't see a way for any metric (Information gain, Gini Score, ...) to choose one of the splits in image 2 over any other random split. Indeed, the first split is probably quite random, or due to noise (if you go for $\operatorname{sign}(x\cdot y)$ with continuous $x,y$ instead of the discrete $x,y$ and XOR). But, as long as your algorithm makes the plunge with one of those first splits, the next splits are obvious and your tree will make them. Is it possible to solve the presented problem with a decision tree? Here's a notebook (github/colab, suggestions welcome) demonstrating that yes, a (sklearn) decision tree can learn $\operatorname{sign}(x\cdot y)$ (perhaps with some errors when points are extremely close to 0); but it also goes on to show some of the difficulties, e.g. when variables other than $x,y$ are available to the tree to split on. 
Upshot: noise variables can wreck that first split I mentioned above, and even useful variables can make the tree lose track of the XOR. Would using a random forest solve the problem in any way? Probably not the basic problem, but it looks like it helps with, e.g., the noise variables above.
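To make the "no metric prefers the first split" point concrete, here is a small pure-Python sketch (not the notebook from the answer) computing information gain on the four discrete XOR points:

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(v) for v in set(labels)) if c)

def info_gain(data, feature):
    """Information gain of splitting (point, label) pairs on one binary feature."""
    parent = entropy([lab for _, lab in data])
    n = len(data)
    gain = parent
    for v in (0, 1):
        subset = [lab for pt, lab in data if pt[feature] == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

# The four XOR points: label = x XOR y
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Neither feature looks useful for the first split (zero gain):
assert info_gain(data, 0) == 0.0
assert info_gain(data, 1) == 0.0

# But once the tree commits to splitting on x (feature 0), the second
# split on y inside the x = 0 branch is perfectly informative:
left = [(pt, lab) for pt, lab in data if pt[0] == 0]
assert info_gain(left, 1) == 1.0
```

This mirrors the answer's point: the first split buys nothing by any impurity metric, but any algorithm that makes it anyway finds the rest of the XOR tree trivially.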
{ "domain": "datascience.stackexchange", "id": 6145, "tags": "random-forest, decision-trees" }
Is it safe to combine sodium hypochlorite (bleach) and trisodium phosphate (TSP) for cleaning?
Question: I've heard this combination can be particularly effective at cleaning biofilms, perhaps because the TSP is a surfactant, allowing bleach to "attack" more layers of the biofilm. However, I'm also aware one should be cautious when mixing anything other than water with bleach, as there are a wide variety of reactions that can occur. But I've seen nothing about bleach and TSP. Should it be safe? Answer: Dissolved in water, trisodium phosphate $\ce{Na3PO4}$ produces $\ce{PO4^{3-}}$ ions that react by hydrolysis with water, according to $$\ce{PO4^{3-} + H2O -> HPO4^{2-} + OH-}$$ So such a solution contains $\ce{OH-}$ ions, and these ions also exist in bleach. Indeed, bleach's main ingredient is sodium hypochlorite $\ce{NaClO}$, but its solution contains significant amounts of $\ce{NaOH}$, which produces $\ce{OH-}$ ions. As both solutions (bleach and trisodium phosphate) contain the same ions, they can be mixed without any trouble. Furthermore, the hypochlorite ion $\ce{ClO-}$ does not react with the phosphate ions $\ce{PO4^{3-}}$ and $\ce{HPO4^{2-}}$.
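As a rough check that the hydrolysis really does make a TSP solution strongly basic (like bleach), one can estimate its pH from the hydrolysis equilibrium. The sketch below uses a textbook-approximate third dissociation constant for phosphoric acid and an arbitrary example concentration; both are assumptions, not values from the answer:

```python
from math import sqrt, log10

Kw  = 1.0e-14   # water autoionization constant at 25 °C
Ka3 = 4.2e-13   # approximate third acid dissociation constant of H3PO4
C   = 0.10      # assumed mol/L of Na3PO4

# PO4^3- + H2O <-> HPO4^2- + OH-  with  Kb = Kw / Ka3
Kb = Kw / Ka3

# Solve x^2 / (C - x) = Kb for x = [OH-] (positive root of the quadratic)
x = (-Kb + sqrt(Kb * Kb + 4 * Kb * C)) / 2
pH = 14 + log10(x)
print(round(pH, 1))   # strongly basic, roughly comparable to bleach
```

The large Kb (≈ 0.02) confirms the hydrolysis runs far to the right, so the TSP solution supplies plenty of $\ce{OH-}$, the same ion already present in bleach.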
{ "domain": "chemistry.stackexchange", "id": 14709, "tags": "safety" }
Limitations of drag equation
Question: The magnitude of the air resistance for objects with Reynolds numbers greater than 1000 is given by the formula: Why does it not hold for objects with lower Reynolds numbers? Can I use this equation safely to determine the terminal velocity of a tennis ball falling from the top of a skyscraper, and generally for objects travelling in air? Finally, in what case can a drag coefficient have a value greater than 1? Answer: At low speeds (more precisely, low Reynolds numbers), where the flow is laminar or only partly turbulent, the drag varies as $v^\alpha$ where $1 \le \alpha \le 2$. Under most conditions the air flow is turbulent and you can assume a $v^2$ dependence. This will certainly be the case for the terminal velocity of a tennis ball. The Reynolds number for a sphere is: $$ R_e = \frac{vd\rho}{\eta} $$ The density of air is about 1.2 kg/m$^3$ and the viscosity about $1.8 \times 10^{-5}$ Pa·s, so for a tennis ball 10 cm in diameter a Reynolds number of 1000 corresponds to about 0.15 m/s. The drag equation is phenomenological rather than derived from any rigorous theoretical treatment, and the drag coefficient is basically a fudge factor. It's only approximately constant. For example, the drag coefficient of a sphere can vary depending on the speed and can have values greater than 1 at low Reynolds numbers.
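A quick numeric sketch of the answer's argument: compute the speed at which the 10 cm ball reaches $R_e = 1000$, then a terminal-velocity estimate from balancing weight against quadratic drag. The ball mass and drag coefficient below are assumed values, not from the answer:

```python
from math import pi, sqrt

rho = 1.2       # air density, kg/m^3
eta = 1.8e-5    # air viscosity, Pa·s
d   = 0.10      # ball diameter used in the answer, m

# Speed at which Re = v*d*rho/eta equals 1000
# (below this the v^2 drag law is unreliable):
v_re1000 = 1000 * eta / (d * rho)
print(v_re1000)   # ~0.15 m/s

# Terminal velocity from m*g = 0.5 * rho * Cd * A * v^2
m, g, Cd = 0.058, 9.81, 0.5   # assumed mass (kg) and drag coefficient
A = pi * (d / 2) ** 2          # cross-sectional area of the sphere
v_t = sqrt(2 * m * g / (rho * Cd * A))
print(v_t)        # tens of m/s

# Terminal speed is far above 0.15 m/s, so Re >> 1000 there
Re_terminal = v_t * d * rho / eta
assert Re_terminal > 1000
```

Since the terminal Reynolds number comes out around $10^5$, the flow is fully turbulent long before terminal velocity and the $v^2$ equation is safe to use for the tennis ball.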
{ "domain": "physics.stackexchange", "id": 5836, "tags": "fluid-dynamics, aerodynamics, drag" }
OOP bank account program in Python 3
Question: Python beginner here. I'm building an OOP bank account program with SQLite and starting to struggle a bit with its design. From a design standpoint I have a file (bank) which calls ATM. ATM simulates an ATM, and is the file which then calls bank_account or credit_card depending on the account data passed in from bank. To initially set up an account I decided to put this into a different file, for example bank_account_setup, or for credit_card, credit_card_setup. These would call create_account, help set up the pin, etc., so that the account is created and ready to use. Then the actual bank_account or credit_card contains the other functions, like deposit, withdraw, get_balance, etc. Also, send_email is in another file. My question is basically around my design. Is there a way to structure this better? How about my setup files to create bank or credit card accounts? Is that a good or bad idea? Also, another issue I am having is that when I run bank I pass the account type in to ATM. In ATM I then had to know what class I am using in advance and instantiate that inside ATM. Could I handle that dynamically? (Also, the code does work - just concerned with bad design).
DB Schema (this is mostly correct but some fields may have been added by hand to sqlite): c.execute("""CREATE TABLE IF NOT EXISTS bank_account ( name text, social integer, account_number integer PRIMARY KEY, balance integer, pin integer )""") c.execute("""CREATE TABLE IF NOT EXISTS credit_card ( name text, social integer, account_number integer PRIMARY KEY, balance integer, card_no integer, credit_score integer, credit_limit integer )""") c.execute("""CREATE TABLE IF NOT EXISTS savings_account ( name text, social integer, account_number integer PRIMARY KEY, balance integer, rate real )""") c.execute("""CREATE TABLE IF NOT EXISTS notifications ( name text, email_address, account_number integer PRIMARY KEY, account_type, notif_bal, notif_deposits, notif_overdraft )""") c.execute("""CREATE TABLE IF NOT EXISTS auth_code ( account_number integer PRIMARY KEY, account_type, email, auth_code )""") Here's my calling file bank: import atm class BankAccount: def __init__(self, name, social, account_number, balance, acctype): self.name = name self.social = social self.account_number = account_number self.balance = balance self.acctype = acctype if __name__ == '__main__': obj1 = atm.ATM.main_menu( "Frank Smith", 135063522, 5544, 850, 'credit_card', 4400110022004400) Here's ATM, which calls the other files: import sqlite3, smtplib import bank_account import secrets import send_email import credit_card from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart from email.mime.base import MIMEBase from email import encoders conn = sqlite3.connect('bank_account.db') c = conn.cursor() class ATM: def get_pin(account_number, acctype, user_pin): with conn: db_pin = c.execute("SELECT pin from {} WHERE account_number=:account_number".format(acctype), {'account_number':account_number}) db_pin = c.fetchone() if db_pin is not None: return(db_pin[0]) else: print("db pin is None") pass def set_pin(account_number, acctype, input_code): with conn: get_code = 
ATM.get_user_code(account_number, acctype) if get_code is None: pass print("You need to request an authorization code first before you set your pin") else: if get_code !=input_code: print("Authorization code not valid") else: pin = input("Please set your 4 digit pin: ") if len(pin) < 4 or len(pin) >4 or len(pin) == 4 and pin.isdigit()==False: print("This is not a 4 digit pin") else: print("pin accepted") c.execute("""UPDATE {} SET pin=:pin WHERE account_number =:account_number""".format(acctype), {'account_number':account_number, 'pin':pin}) print("Pin for account has been updated") def main_menu(name, social, account_number, balance, acctype, card_no=None): # obj1 = bank_account.BankAccount(name, social, account_number, balance, acctype) # obj1 = credit_card.CreditCard(name, social, account_number, balance, acctype, card_no) user_pin = int(input("\nATM Home Screen. Please enter your pin code: ")) db_pin = ATM.get_pin(account_number, acctype, user_pin) if user_pin != db_pin and db_pin != '': print("No pin match") elif db_pin is '': print("Pin has not been set") print("First request an authorization code and use that to set the pin") else: user_pin == db_pin print("\nPin accepted continue \n ") print("""""""ATM Menu, choose an option""""""") print("\n1 - Deposit funds") print("2 - Withdraw funds") print("3 - Check balance") print("4 - Reset Pin") print("5 - Exit") while True: try: choice = int(input("Please enter a number: ")) except ValueError: print("This is not a number") if choice >= 1 and choice <=5: if choice == 1: amount = input("\nPlease enter the deposit amount: ") if amount != '' and amount.isdigit(): int(amount) obj1.deposit( account_number, acctype, amount) else: print("Please enter a valid number") elif choice == 2: amount = input("Please enter the withdrawl amount: ") if amount != '' and amount.isdigit(): int(amount) obj1.withdraw(account_number, acctype, amount) else: print("Please enter a valid number") elif choice ==3: 
obj1.get_balance(account_number, acctype) elif choice ==4: new_pin = input("Please enter a new 4 digit pin: ") if new_pin != '' and new_pin.isdigit(): int(new_pin) obj1.set_reset_pin(account_number, acctype, new_pin) elif choice ==5: break else: print("Not a valid number") ALL of bank_account_setup: import sqlite3 import secrets import getpass import smtplib, sqlite3 from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart from email.mime.base import MIMEBase from email import encoders import send_email conn = sqlite3.connect('bank_account.db') c = conn.cursor() class BankAccount: def __init__(self, name, social, account_number, balance, acctype): self.name = name self.social = social self.account_number = account_number self.balance = balance self.acctype = acctype """ create different accounts based on account type passed in """ def create_account(self, name, social, account_number, balance, acctype, card_no=None, credit_score=None, credit_limit=None): self.rate = None with conn: # account_found = BankAccount.get_account(self, account_number, acctype) # if not account_found: if acctype == 'bank_account': c.execute("INSERT INTO {} VALUES (:name, :social, :account_number, :balance, :pin)".format(acctype), {'name':name, 'social': social,'account_number': account_number, 'balance':balance, 'pin':''}) print("New account: {} has been created, acc # is: {}".format(acctype, account_number)) elif acctype == 'savings_account': c.execute("INSERT INTO {} VALUES (:name, :social, :account_number, :balance, :rate)".format(acctype), {'name':name, 'social': social,'account_number': account_number, 'balance':balance, 'rate':''}) print("New account: {} has been created, acc # is: {}".format(acctype, account_number)) elif acctype == 'credit_card': c.execute("INSERT INTO credit_card VALUES (:name, :social, :account_number, :balance, :card_no,:credit_score, :credit_limit, :pin)", {'name':name, 'social': social,'account_number': account_number, 
'balance':balance, 'card_no' :card_no, 'credit_score':credit_score, 'credit_limit':credit_limit, 'pin':'' }) print("New account: {} has been created, acc # is: {}".format(acctype, account_number)) conn.commit() """ Show all rows in DB for the the account type passed in """ def get_account(self,account_number, acctype): with conn: account_find = c.execute("SELECT * from {} WHERE account_number=:account_number".format(acctype), {'account_number':account_number}) account_found = c.fetchone() if not account_found: print("No {} matching that number could be found".format(acctype)) else: print("Account type: {} exists!".format(acctype)) print(account_found) return(account_found) """ Generate a random string for card activation """ def set_user_code(self, account_number, acctype, email): with conn: # account_found = BankAccount.get_account(self, account_number, acctype) account_found = BankAccount.get_user_code(self, account_number, acctype) if not account_found: auth_code = secrets.token_hex(4) print("User code {} generated".format(auth_code)) c.execute("INSERT INTO auth_code VALUES (:account_number, :acctype, :email, :auth_code)", {'account_number': account_number, 'acctype': acctype, 'email':email, 'auth_code':auth_code}) print("DB updated with auth code for account") subject = 'Authorization code' body = 'Authorization code: {}\n \ Use the authorization code when setting your pin for the first time'.format(auth_code) email_user = 'testpython79@gmail.com' email_send = 'testpython79@gmail.com' email_pass = 'Liverpool27' msg = MIMEMultipart() msg['From'] = email_user msg['To'] = email_send msg['Subject'] = subject msg.attach(MIMEText(body, 'plain')) server = smtplib.SMTP('smtp.gmail.com: 587') server.starttls() server.login(email_user, email_pass) text = msg.as_string() server.sendmail(email_user, email_send, text) server.quit else: print("Auth code {} is already set for this account {} ".format(account_found[2], account_number)) conn.commit() def 
get_user_code(self,account_number, acctype): with conn: account_found = c.execute("SELECT auth_code from auth_code WHERE account_number=:account_number", {'account_number':account_number}) account_found = c.fetchone() if account_found is None: pass # print("You need to request an authorization code first before you set your pin") else: return(account_found[0]) """ Set pin for an account based on the auth code entered for validation """ def set_pin(self,account_number, acctype, input_code): with conn: get_code = BankAccount.get_user_code(self, account_number, acctype) if get_code is None: pass print("You need to request an authorization code first before you set your pin") else: if get_code !=input_code: print("Authorization code not valid") else: pin = input("Please set your 4 digit pin: ") if len(pin) < 4 or len(pin) >4 or len(pin) == 4 and pin.isdigit()==False: print("This is not a 4 digit pin") else: print("pin accepted") c.execute("""UPDATE {} SET pin=:pin WHERE account_number =:account_number""".format(acctype), {'account_number':account_number, 'pin':pin}) print("Pin for account has been updated") if __name__ == '__main__': obj1 = BankAccount("Alexis Sanchez", 135063522, 5534, 100, 'bank_account') # obj1.create_account("Alexis Sanchez", 135063522, 5534, 100, 'bank_account') # obj1.set_user_code(5534, 'bank_account', 'asanchez@noemail.com') obj1.set_pin(5534, 'bank_account', '1f7bd3f9') ALL of bank_account: import sqlite3 import secrets import getpass import smtplib, sqlite3 from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart from email.mime.base import MIMEBase from email import encoders import send_email conn = sqlite3.connect('bank_account.db') c = conn.cursor() class BankAccount: def __init__(self, name, social, account_number, balance, acctype): self.name = name self.social = social self.account_number = account_number self.balance = balance self.acctype = acctype """ create different accounts based on account type passed 
in """ def create_account(self, name, social, account_number, balance, acctype, card_no=None, credit_score=None, credit_limit=None): self.rate = None with conn: # account_found = BankAccount.get_account(self, account_number, acctype) # if not account_found: if acctype == 'bank_account': c.execute("INSERT INTO {} VALUES (:name, :social, :account_number, :balance, :pin)".format(acctype), {'name':name, 'social': social,'account_number': account_number, 'balance':balance, 'pin':''}) print("New account: {} has been created, acc # is: {}".format(acctype, account_number)) elif acctype == 'savings_account': print("Savings account") c.execute("INSERT INTO {} VALUES (:name, :social, :account_number, :balance, :rate)".format(acctype), {'name':name, 'social': social,'account_number': account_number, 'balance':balance, 'rate':''}) print("New account: {} has been created, acc # is: {}".format(acctype, account_number)) elif acctype == 'credit_card': c.execute("INSERT INTO credit_card VALUES (:name, :social, :account_number, :balance, :card_no,:credit_score, :credit_limit, :pin)", {'name':name, 'social': social,'account_number': account_number, 'balance':balance, 'card_no' :card_no, 'credit_score':credit_score, 'credit_limit':credit_limit, 'pin':'' }) print("New account: {} has been created, acc # is: {}".format(acctype, account_number)) conn.commit() """ Generate a random string for card activation """ def set_user_code(self, account_number, acctype, email): with conn: # account_found = BankAccount.get_account(self, account_number, acctype) account_found = BankAccount.get_user_code(self, account_number, acctype) if not account_found: auth_code = secrets.token_hex(4) print("User code {} generated".format(auth_code)) c.execute("INSERT INTO auth_code VALUES (:account_number, :acctype, :email, :auth_code)", {'account_number': account_number, 'acctype': acctype, 'email':email, 'auth_code':auth_code}) print("DB updated with auth code for account") subject = 'Authorization code' body = 
'Authorization code: {}\n \ Use the authorization code when setting your pin for the first time'.format(auth_code) email_user = 'testpython79@gmail.com' email_send = 'testpython79@gmail.com' email_pass = 'Liverpool27' msg = MIMEMultipart() msg['From'] = email_user msg['To'] = email_send msg['Subject'] = subject msg.attach(MIMEText(body, 'plain')) server = smtplib.SMTP('smtp.gmail.com: 587') server.starttls() server.login(email_user, email_pass) text = msg.as_string() server.sendmail(email_user, email_send, text) server.quit else: print("Auth code {} is already set for this account {} ".format(account_found[2], account_number)) conn.commit() def get_user_code(self,account_number, acctype): with conn: account_found = c.execute("SELECT auth_code from auth_code WHERE account_number=:account_number", {'account_number':account_number}) account_found = c.fetchone() if account_found is None: pass # print("You need to request an authorization code first before you set your pin") else: return(account_found[0]) """ Set pin for an account based on the auth code entered for validation """ def set_pin(self,account_number, acctype, input_code): with conn: get_code = BankAccount.get_user_code(self, account_number, acctype) if get_code is None: pass print("You need to request an authorization code first before you set your pin") else: if get_code !=input_code: print("Authorization code not valid") else: pin = input("Please set your 4 digit pin: ") if len(pin) < 4 or len(pin) >4 or len(pin) == 4 and pin.isdigit()==False: print("This is not a 4 digit pin") else: print("pin accepted") c.execute("""UPDATE {} SET pin=:pin WHERE account_number =:account_number""".format(acctype), {'account_number':account_number, 'pin':pin}) print("Pin for account has been updated") """ Reset pin, pass in new pin """ def set_reset_pin(self, account_number, acctype, new_pin): with conn: c.execute("""UPDATE {} SET pin=:new_pin WHERE account_number =:account_number""".format(acctype), 
{'account_number':account_number, 'new_pin':new_pin}) print("Pin for account has been updated") conn.commit() def get_pin(self, account_number, acctype, user_pin): with conn: db_pin = c.execute("SELECT pin from {} WHERE account_number=:account_number".format(acctype), {'account_number':account_number}) db_pin = c.fetchone() if db_pin is not None: return(db_pin[0]) else: print("db pin is None") pass """ Set email notification preferences for users who have confirmed bank accounts """ """ Do an insert if no notifications record is found """ """ If there is a notifications record, update all notifications for that user """ def set_notifications(self, name, email_address, account_number, acctype, notif_bal, notif_deposits, notif_overdraft, notif_withdraw): with conn: """ check if an account is found first """ account_found = BankAccount.get_account(self, account_number, acctype) """ Check if a notification record is found """ notif_found = BankAccount.get_notif(self, account_number, acctype) if account_found and notif_found is None: c.execute("""INSERT INTO notifications VALUES (:name, :email_address, :account_number, :acctype, :notif_bal, :notif_deposits, :notif_overdraft, :notif_withdraw)""".format(acctype), {'name':name, 'email_address': email_address,'account_number': account_number, 'acctype':acctype, 'notif_bal':notif_bal, 'notif_deposits':notif_deposits, 'notif_overdraft':notif_overdraft, 'notif_withdraw':notif_withdraw}) print("Notifications for acc#{} has been created, acctype {} have been setup".format(account_number, acctype)) elif account_found and notif_found: c.execute("""UPDATE notifications SET notif_bal=:notif_bal, notif_deposits=:notif_deposits, notif_overdraft=:notif_overdraft, notif_withdraw=:notif_withdraw""", {'notif_bal':notif_bal, 'notif_deposits':notif_deposits, 'notif_overdraft':notif_overdraft, 'notif_withdraw': notif_withdraw}) print("Notificatons for acc# {} have been updated".format(account_number)) else: print("Setup an account first then 
set notifications") def get_notif(self,account_number, acctype): with conn: notif_exist = c.execute("""SELECT * from notifications WHERE account_number=:account_number and account_type=:account_type""", {'account_number':account_number, 'account_type':acctype}) notif_exist = c.fetchone() return(notif_exist) """ Show all rows in DB for the the account type passed in """ def get_account(self,account_number, acctype): with conn: account_find = c.execute("SELECT * from {} WHERE account_number=:account_number".format(acctype), {'account_number':account_number}) account_found = c.fetchone() if not account_found: print("No {} matching that number could be found".format(acctype)) else: print("Account type: {} exists!".format(acctype)) print(account_found) return(account_found) def get_balance(self, account_number, acctype): with conn: balance = c.execute("SELECT balance from {} WHERE account_number=:account_number".format(acctype), {'account_number':account_number}) balance = c.fetchone() print("The balance for account number: {} is ${}".format(account_number, balance[0])) notif_set = BankAccount.get_notif(self, account_number, acctype) if notif_set is None: print("No notifications are set for this user") else: notif_balance = notif_set[4] name = notif_set[0] if notif_balance == 1: notify = send_email.send_email(account_number, acctype, 'Balance', balance, balance, name) return(balance[0]) """ Deposit funds into the account number + acctype for the account passed in """ def deposit(self, account_number, acctype, amount): with conn: """ Check acct exists before making deposit """ account_found = BankAccount.get_account(self, account_number, acctype) if account_found: existing_bal = account_found[3] c.execute("""UPDATE {} SET balance=balance +:amount WHERE account_number =:account_number""".format(acctype), {'account_number':account_number, 'amount':amount}) new_bal = existing_bal + (int(amount)) print("${} has been deposited to account {} and the new balance is 
${}".format(amount, account_number, existing_bal + (int(amount)))) # Check email configurations are turned on for deposits notif_set = BankAccount.get_notif(self, account_number, acctype) if notif_set is None: print("No notifications are set for this user") else: notif_deposits = notif_set[5] name = notif_set[0] if notif_deposits == 1: notify = send_email.send_email(account_number, acctype, 'Deposit', amount, new_bal, name) """ withdraw funds from the bank account passed in """ def withdraw(self, account_number, acctype, amount): with conn: """ Check account exists """ account_found = BankAccount.get_account(self, account_number, acctype) existing_bal = account_found[3] if account_found: c.execute("""UPDATE bank_account SET balance=balance -:amount WHERE account_number =:account_number""", {'account_number':account_number, 'amount':amount}) new_bal = existing_bal - (int(amount)) conn.commit() print("${} has been withdrawn from account {} and the new balance is ${}".format(amount, account_number, existing_bal - (int(amount)))) notif_set = BankAccount.get_notif(self, account_number, acctype) if notif_set is None: print("No notifications have been set for this acct") else: notif_withdraw = notif_set[7] name = notif_set[0] if notif_withdraw == 1: notify = send_email.send_email(account_number, acctype, 'Withdraw', amount, new_bal, name) else: print("Withdrawl notifications have been turned off") if account_found and new_bal < 0 and notif_set is not None: notify_o = send_email.send_email(account_number, acctype, 'Overdraft', amount, new_bal, name) conn.commit() ALL of credit card setup: from bank_account import BankAccount import sqlite3 conn = sqlite3.connect('bank_account.db') c = conn.cursor() class CreditCard(BankAccount): def __init__(self, name, social, account_number, balance, acctype, card_no, credit_score=None, credit_limit=None): super().__init__(name, social, account_number, balance, acctype) self.card_no = card_no self.credit_score = credit_score 
```python
        self.credit_limit = credit_limit

    """ set credit limit, check if acct exists, then call get credit limit """
    def set_credit_limit(self, account_number, acctype, credit_score):
        with conn:
            account_found = BankAccount.get_account(self, account_number, acctype)
            if account_found:
                credit_limit = CreditCard.set_credit_limit_helper(self, account_number, credit_score)
                if credit_limit:
                    c.execute("""UPDATE credit_card SET credit_limit=:credit_limit
                                 WHERE account_number =:account_number """,
                              {'account_number': account_number, 'credit_limit': credit_limit})
                    print("Account number {} credit limit is set to {}".format(account_number, credit_limit))
                    conn.commit()

    def get_credit_limit(self, account_number):
        with conn:
            c.execute("""SELECT credit_limit from credit_card
                         WHERE account_number=:account_number""",
                      {'account_number': account_number})
            credit_limit = c.fetchone()
            if credit_limit is None:
                pass
            else:
                return credit_limit[0]

    """ get credit limit based on credit score passed in """
    def set_credit_limit_helper(self, account_number, credit_score):
        if credit_score > 700:
            credit_limit = -2000
        elif credit_score > 100 and credit_score <= 300:
            credit_limit = -1500
        else:
            credit_limit = -1000
        return credit_limit


if __name__ == '__main__':
    obj1 = CreditCard("Juan Santos", 135063555, 5544, 100, 'credit_card', 2200330066007700)
    # obj1.create_account("Juan Santos", 135063555, 9922, 100, 'credit_card', 2200330066007700)
    obj1.set_credit_limit(5544, 'credit_card', 200)
    # obj1.set_user_code(9922, 'credit_card', 'juan@noemail.com')
    # obj1.set_pin(9922, 'credit_card', 'b4493a59')
```

ALL of credit_card:

```python
from bank_account import BankAccount
import sqlite3

conn = sqlite3.connect('bank_account.db')
c = conn.cursor()


class CreditCard(BankAccount):
    def __init__(self, name, social, account_number, balance, acctype, card_no,
                 credit_score=None, credit_limit=None):
        super().__init__(name, social, account_number, balance, acctype)
        self.card_no = card_no
        self.credit_score = credit_score
        self.credit_limit = credit_limit

    """ set credit limit, check if acct exists, then call get credit limit """
    def set_credit_limit(self, account_number, acctype, credit_score):
        with conn:
            account_found = BankAccount.get_account(self, account_number, acctype)
            if account_found:
                credit_limit = CreditCard.set_credit_limit_helper(self, account_number, credit_score)
                if credit_limit:
                    c.execute("""UPDATE credit_card SET credit_limit=:credit_limit
                                 WHERE account_number =:account_number """,
                              {'account_number': account_number, 'credit_limit': credit_limit})
                    print("Account number {} credit limit is set to {}".format(account_number, credit_limit))
                    conn.commit()

    def withdraw(self, account_number, acctype, amount):
        with conn:
            account_found = BankAccount.get_account(self, account_number, acctype)
            if account_found:
                balance = account_found[3]
                credit_limit = CreditCard.get_credit_limit(self, account_number)
                amount_left = credit_limit - (int(balance))
                if balance - (int(amount)) < credit_limit:
                    print("Your balance is: {}, and your credit limit is: {}".format(balance, credit_limit))
                    print("The max you can withdraw is {}".format(amount_left))
                else:
                    existing_bal = account_found[3]
                    c.execute("""UPDATE credit_card SET balance=balance -:amount
                                 WHERE account_number =:account_number""",
                              {'account_number': account_number, 'amount': amount})
                    print("${} has been withdrawn from account {} and the new balance is ${}".format(
                        amount, account_number, existing_bal - (int(amount))))
                    notif_set = BankAccount.get_notif(self, account_number, acctype)
                    if notif_set is None:
                        print("No notifications have been set for this acct")
                    else:
                        notif_withdraw = notif_set[7]
                        if notif_withdraw == 1:
                            notify = BankAccount.send_email(self, account_number, acctype,
                                                            'Withdraw', amount, existing_bal - amount)
                    conn.commit()

    def get_credit_limit(self, account_number):
        with conn:
            c.execute("""SELECT credit_limit from credit_card
                         WHERE account_number=:account_number""",
                      {'account_number': account_number})
            credit_limit = c.fetchone()
            if credit_limit is None:
                pass
            else:
                return credit_limit[0]

    """ get credit limit based on credit score passed in """
    def set_credit_limit_helper(self, account_number, credit_score):
        if credit_score > 700:
            credit_limit = -2000
        elif credit_score > 100 and credit_score <= 300:
            credit_limit = -1500
        else:
            credit_limit = -1000
        return credit_limit
```

Adding send email program also:

```python
import smtplib, sqlite3
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email import encoders
import bank_account

conn = sqlite3.connect('bank_account.db')
c = conn.cursor()


""" get email config - check if there is an existing acct first """
def get_email_config(account_number, acctype, action, amount, new_bal, name):
    subject = '{} account {} notification'.format(acctype, action)
    body = ''
    acctype = acctype.replace('_', ' ')
    if action == 'Balance':
        body = '{},\n \
        The balance for account number: {} is : ${} '.format(name, account_number, amount[0])
    elif action == 'Deposit':
        body = '{},\n \
        A deposit of: ${} has been made to account number: {}\n \
        The new balance is now: ${}'.format(name, amount, account_number, new_bal)
    elif action == 'Overdraft':
        body = '{},\n \
        Please note that account number: {} is now overdrawn \n \
        The balance is ${}. Please add funds to avoid penalties'.format(name, account_number, new_bal)
    elif action == 'Withdraw':
        body = '{},\n \
        Please note that a withdrawl in the amount of: ${} has been made from account number: {} \n \
        The balance is ${}'.format(name, amount, account_number, new_bal)
    return (subject, body)


def send_email(account_number, acctype, action, amount, new_bal, name):
    """ Pull subject and body from email_config """
    email_config = get_email_config(account_number, acctype, action, amount, new_bal, name)
    body = email_config[1]
    subject = email_config[0]
    email_user = 'placeholder@noemail.com'
    email_send = 'customerplaceholder@noemail.com'
    email_pass = 'pass'

    msg = MIMEMultipart()
    msg['From'] = email_config[0]
    msg['To'] = email_config[1]
    msg['Subject'] = email_config[-2]
    msg.attach(MIMEText(body, 'plain'))

    server = smtplib.SMTP('smtp.gmail.com: 587')
    server.starttls()
    server.login(email_user, email_pass)
    text = msg.as_string()
    server.sendmail(email_user, email_send, text)
    server.quit

    if action == 'Deposit':
        print("Deposit Notification sent")
    elif action == 'Balance':
        print("Balance Notification sent")
    elif action == 'Overdraft':
        print("Overdraft Notification sent")
    elif action == 'Withdraw':
        print("Withdraw Notification sent")
    else:
        pass
```

Answer: Some suggestions:

Running this code through a linter such as flake8 will give you some hints towards producing more idiomatic code. You can also use Black to automatically format your code to be more idiomatic.

You don't need to assign obj1 in main - the variable is unused.

main_menu taking a bunch of strings and numbers makes the call completely unintelligible on its own. If you were to instead create a Customer with an Account, Identification and BankCard, the meaning of the now single parameter would be obvious.

fetchone does not assert that there is only one record. You should make sure that whenever you use it outside of a loop you assert that there are no more rows after retrieving it. Otherwise you can very easily get into situations where business rules such as having only one code per card are broken.

Optional parameters are a code smell. That doesn't mean they are always bad, but they are very often a sign that the code needs some YAGNI love. In your case you never call main_menu without card_no, so it should not be optional.
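The fetchone point can be made concrete. Below is a minimal sketch (using an in-memory SQLite database and a pared-down table, not the asker's real schema) of asserting that a query expected to match one row really did match only one:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE credit_card (account_number INTEGER, credit_limit INTEGER)")
c.execute("INSERT INTO credit_card VALUES (5544, -2000)")
conn.commit()

def get_credit_limit(account_number):
    c.execute("SELECT credit_limit FROM credit_card WHERE account_number = :acct",
              {'acct': account_number})
    row = c.fetchone()
    if row is None:
        return None
    # fetchone() only returns the first match; check there wasn't a second one,
    # which would mean the one-row-per-account business rule has been violated.
    if c.fetchone() is not None:
        raise RuntimeError("more than one row for account {}".format(account_number))
    return row[0]

limit = get_credit_limit(5544)  # -2000, and we know it was the only row
```

Without the second `fetchone()` check, a duplicate row would go unnoticed and the function would silently return whichever row SQLite happened to emit first.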
{ "domain": "codereview.stackexchange", "id": 33797, "tags": "python, object-oriented, python-3.x, finance, sqlite" }
What is the name of the principle that replaces a history functional by internal/state variables?
Question: I just need to know the correct expression: When narrowing down constitutive equations for the mechanics of solids in continuum mechanics, one has in the very general case a Cauchy stress $\mathbf{T}$ as a result of the deformation history $\chi(\mathbf{x},t)$, written as a functional involving integrals over time and the body $\mathcal{B}$:
$$ \mathbf{T}(\mathbf{x}_0,t)=\int_\mathcal{B} \int_0^t f(\chi(\mathbf{x},\tau)-\chi(\mathbf{x}_0,\tau)) \ \mathrm d\tau \ \mathrm d \mathbf{x} $$
where $f$ is the constitutive function (peridynamics is an example). Then, principles of material modelling are invoked to reduce the functional freedom of the material model. One generally replaces the time integral by an internal variable $\mathbf{v}$,
$$ \mathbf{T}(\mathbf{x}_0,\mathbf{v})=\int_\mathcal{B} g(\mathbf{\chi(x)}-\chi(\mathbf{x}_0),\mathbf{v}) \mathrm d \mathbf{x}\\ \dot{\mathbf{v}}(t,\mathbf{v})=h(...) $$
the evolution of which is prescribed by another constitutive function $h$.

I would like to know if this reduction has a name. It is probably some kind of "principle of", like the "principle of determinism" or "principle of local action".

Answer: The replacement of the whole history by some (finite-dimensional) internal variables is called the state space formulation.
{ "domain": "physics.stackexchange", "id": 42590, "tags": "terminology, continuum-mechanics, stress-strain" }
Cubed Root of an Expectation value $\left<\frac 1 {r^3}\right>$
Question: I am afraid this question will reveal how little I know about expectation values, but it cannot be helped. Say I have the expectation value:
$$\left<\frac 1 {r^3}\right> = \frac {1}{(na_0)^3} \frac {2}{\ell(\ell+1)(2\ell+1)}$$
If I know the values of $n, a_0$ and $\ell$ and I wanted to find $\left<\frac {1}{r}\right>$, could I simply take the cube root of the value of $\left<\frac 1 {r^3}\right>$?

Answer: Generally speaking, no. A counterexample is the standard deviation:
$$\sigma_X^2=\langle X^2\rangle-\langle X\rangle^2$$
The fact that this is an important quantity indicates that generally you can't take powers outside an expectation value; otherwise the standard deviation would always be zero. In some examples you might be able to take powers outside of an expectation value, but without further knowledge of your system it's impossible to tell.

Edit: to make this more explicit, compare
\begin{align} \langle X^n\rangle&=p_1 X_1^n+\dots+p_NX_N^n\\ \langle X\rangle^n&=\left(p_1 X_1+\dots+p_NX_N\right)^n \end{align}
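The answer's point is easy to see numerically. A quick illustration with a made-up two-point distribution for $r$ (not hydrogenic wavefunctions, just a toy example):

```python
# Toy distribution: r = 1 or r = 2, each with probability 1/2.
rs = [1.0, 2.0]
ps = [0.5, 0.5]

mean_inv_r3 = sum(p / r**3 for p, r in zip(ps, rs))  # <1/r^3> = (1 + 1/8)/2 = 0.5625
mean_inv_r  = sum(p / r    for p, r in zip(ps, rs))  # <1/r>   = (1 + 1/2)/2 = 0.75

cube_root = mean_inv_r3 ** (1.0 / 3.0)  # about 0.825 -- not equal to <1/r>
```

The cube root of $\langle 1/r^3\rangle$ overshoots $\langle 1/r\rangle$ here, exactly as the comparison of $\langle X^n\rangle$ and $\langle X\rangle^n$ in the answer predicts.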
{ "domain": "physics.stackexchange", "id": 86517, "tags": "quantum-mechanics, atomic-physics, orbitals" }
MySQLConnection Conn.Connect()
Question: I'm pretty new to working with databases, and I would like to know if I've made some common, major design flaws in my implementation. For context, the whole project is intended to be used by a single user.

The code I want reviewed:

```csharp
class DatabaseConnection
{
    //properties
    private MySqlConnection _connection;
    public MySqlConnection Connection
    {
        get { return _connection; }
    }
    public AccountLogin AccountCredentials { get; }
    public DatabaseLogin DatabaseCredentials { get; }

    //constructor
    public DatabaseConnection(AccountLogin _AccCred, DatabaseLogin _DBCred)
    {
        AccountCredentials = _AccCred;
        DatabaseCredentials = _DBCred;
    }

    public void Connect()
    {
        if (Connection != null)
        {
            return;
        }

        string[] UserInputs =
        {
            DatabaseCredentials?.DatabaseName,
            DatabaseCredentials?.Server,
            DatabaseCredentials?.Port,
            AccountCredentials?.Password,
            AccountCredentials?.Username
        };
        bool ChkInpts = Validators.NullStringValidator(UserInputs);

        if (!ChkInpts)
        {
            string ConnInfo = "server=" + DatabaseCredentials.Server + ";" +
                              "user=" + AccountCredentials.Username + ";" +
                              "database=" + DatabaseCredentials.DatabaseName +
                              "port=" + DatabaseCredentials.Port +
                              "password=" + AccountCredentials.Password;
            _connection = new MySqlConnection(ConnInfo);
            try
            {
                Console.WriteLine("Connecting to " + DatabaseCredentials.DatabaseName + "...");
                _connection.Open();
                Console.WriteLine("Connection to " + DatabaseCredentials.DatabaseName + " successful.");
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }
    }

    public void Close()
    {
        _connection.Close();
    }
}
```

Relevant code:

```csharp
class AccountLogin
{
    public string Username { get; }
    public string Password { get; }

    public AccountLogin(string _username, string _password)
    {
        Username = _username;
        Password = _password;
    }
}

class DatabaseLogin
{
    public string DatabaseName { get; }
    public string Server { get; }
    public string Port { get; }

    public DatabaseLogin(string _server, string _DatabaseName, string _port)
    {
        DatabaseName = _DatabaseName;
        Server = _server;
        Port = _port;
    }
}

static class Validators
{
    public static bool NullStringValidator(string[] _inputs)
    {
        bool _result = false;
        for (int i = 0; i < _inputs.Length; i++)
        {
            if (string.IsNullOrEmpty(_inputs[i]))
            {
                return _result = true;
            }
        }
        return _result;
    }
}
```

Answer: The conventional naming for arguments in C# is camelCase: `AccountLogin _AccCred` looks better as `AccountLogin accountLogin`.

I wonder if this: `DatabaseCredentials?.DatabaseName` is of any use? If DatabaseCredentials is null, what database can you then connect to? I would throw an exception if `DatabaseCredentials == null`. This probably applies to AccountCredentials too.

String concatenations are a rather inefficient way to build strings: instead of

```csharp
string ConnInfo = "server=" + DatabaseCredentials.Server + ";" +
                  "user=" + AccountCredentials.Username + ";" +
                  "database=" + DatabaseCredentials.DatabaseName +
                  "port=" + DatabaseCredentials.Port +
                  "password=" + AccountCredentials.Password;
```

use either StringBuilder or

```csharp
String.Format("server={0};user={1};...", DatabaseCredentials.Server, AccountCredentials.Username, ...);
```

or string interpolation:

```csharp
$"server={DatabaseCredentials.Server};user={AccountCredentials.Username};..."
```

MySqlConnection implements IDisposable. Therefore your wrapper should do that as well, so you can dispose the _connection object, and the client can then use your wrapper like this:

```csharp
using (DatabaseConnection dc = new DatabaseConnection(...))
{
    // Use the connection...
}
```

The try-catch statement in the Connect() method is of no use to the client because, if the connection fails, it is not signaled to the client. Either rethrow the exception, throw a new exception of your own, or simply ignore any exceptions here and let the client handle them.

In

```csharp
public void Close()
{
    _connection.Close();
}
```

you should test for `_connection == null`.
{ "domain": "codereview.stackexchange", "id": 31770, "tags": "c#, mysql" }
Magnetic field and magnet
Question: We know that a magnetic field does not do any physical work. Now consider: we attach a magnet to a wall with the help of some tape. Then we bring another magnet near the first one and release it, and we see that the magnet moves towards the other magnet. How was the work done on the second magnet to traverse that distance?

Answer: When you bring the second magnet close to the one taped to the wall, you are actually taking it into the first magnet's field. While you hold the second magnet in the other's field without letting it go, you have given it some amount of potential energy. The moment you let it go, the potential energy you gave it by taking it into the field is used in moving it across that distance. Think of it the same way a meteor entering the Earth's gravitational field falls onto the ground.
{ "domain": "physics.stackexchange", "id": 58090, "tags": "magnetostatics" }
A Question on Convex Conjugate Duality for KL Divergence
Question: The convex conjugate of a function, say, $f:X\mapsto \mathbb{R}$ is a function $f^*:X^*\mapsto \mathbb{R}$ defined as
$$f^*(x^*):=\sup_{x\in X} ~\langle x, x^*\rangle-f(x),$$
where $X^*$ is the topological dual of $X$ and $\langle\cdot, \cdot\rangle$ is the dual pairing between $X$ and $X^*$.

The relative entropy (aka Kullback-Leibler divergence) $D(\cdot||Q):\mathcal{P}(X)\mapsto \mathbb{R}^+$ is defined for two probability measures $P$ and $Q$ (with $P \ll Q$) as
$$D(P||Q)=\int_X dP \log\frac{dP}{dQ}.$$

I have been trying to calculate the convex conjugate of the map $P\mapsto D(P||Q)$ but I have failed. I know that the answer is $\log \mathbb{E}_{Q}[e^{f}]$, where $\mathbb{E}_Q[\cdot]$ is the expectation operator with respect to the probability measure $Q$.

Answer: To make it easier, let's assume $X$ is finite, of size $n$, and associate the density of $Q$ with an $n$-dimensional vector $q$. Assume also that $q$ is everywhere positive - otherwise replace $X$ with the support of $q$. Then the conjugate is
$$ f^*_q(x) = \sup_p\ \langle x, p \rangle - \sum_{i = 1}^n{p_i\log(p_i/q_i)}, $$
where the supremum is over the probability simplex $\{p\geq 0: \sum_i p_i = 1\}$.

Since the simplex is compact and the function inside the supremum is continuous, the supremum is achieved at some $p$. Using Lagrange multipliers you get that for some real value $\lambda$ an optimal $p$ must satisfy $x_i - 1 - \log(p_i/q_i) = - \lambda$ for all $i$, which gives $p_i = q_ie^{x_i + \lambda-1}$. Since $1 = \sum_i p_i$, we have $\lambda - 1 = -\log\left(\sum_i{q_i e^{x_i}}\right)$. Substituting, we get
$$ \begin{align} f^*_q(x) &= \sum_i{q_i e^{x_i + \lambda- 1 }x_i} - \sum_i{q_i e^{x_i+ \lambda-1 }(x_i+ \lambda-1)} \\ &= -(\lambda-1)\sum_i{q_ie^{x_i+ \lambda-1 }} \\ &= -(\lambda-1)\sum_i{p_i}\\ &= -(\lambda-1) = \log\left(\sum_i{q_i e^{x_i}}\right), \end{align} $$
which is exactly the logarithm of the expectation of $e^{x_i}$ under $q$.
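The closed form can be sanity-checked numerically. The sketch below (plain Python, a small finite $X$ with arbitrary $q$ and $x$) evaluates the objective at the derived optimizer $p_i \propto q_i e^{x_i}$, compares it with $\log\sum_i q_i e^{x_i}$, and checks that random feasible $p$ never beat it:

```python
import math, random

random.seed(0)
q = [0.2, 0.5, 0.3]      # everywhere-positive reference density
x = [0.7, -1.1, 0.4]     # the dual variable

def objective(p):
    # <x, p> - KL(p || q): the function whose supremum over the simplex we want
    return sum(pi * xi for pi, xi in zip(p, x)) - \
           sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

Z = sum(qi * math.exp(xi) for qi, xi in zip(q, x))
p_star = [qi * math.exp(xi) / Z for qi, xi in zip(q, x)]  # Lagrange-multiplier optimizer

conjugate = math.log(Z)        # claimed value of the conjugate
attained = objective(p_star)   # value actually attained at p_star

# random points on the simplex should never exceed the claimed supremum
never_beaten = all(
    objective([w / sum(ws) for w in ws]) <= conjugate + 1e-9
    for ws in ([random.random() + 1e-9 for _ in q] for _ in range(200))
)
```

The agreement of `attained` with `conjugate`, together with `never_beaten`, is exactly the finite-dimensional statement derived in the answer.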
{ "domain": "cstheory.stackexchange", "id": 4819, "tags": "it.information-theory, convex-optimization, convex-geometry" }
How does an ammeter in a circuit work?
Question: Apologies for this simple question, but I'm having trouble grasping the concept of how an ammeter works. Taking the following circuit as an example: when the switch is closed, current flows through the circuit - i.e. there is a potential difference between the two ends of the circuit, so electrons flow from the negative terminal to the positive. The ammeter records the current flowing through the variable resistor.

My question is: How can the ammeter tell how much current is flowing through the resistor, since it's "behind" the resistor? And also: why and how does a resistor limit the current flowing through the entire circuit? Doesn't it limit only the current that is flowing past and after the resistor?

Answer: Two questions:

How can the ammeter tell how much current is flowing through the resistor, since it's "behind" the resistor?

There are several means by which current can be measured, using different technologies. Early ammeters used galvanometric technology, where a coil in the galvanometer becomes part of the current path. The coil generates a magnetic field, and the magnetic field mechanically deflects, in an angular fashion, a permanent magnet attached to a dial pointer. In today's technology we can sense the magnetic field using Hall sensors, or, more often, we use a shunt resistor (a low-resistance resistor) that does not greatly impede the current but allows a sufficient voltage drop to determine the current using Ohm's law.

Why and how does a resistor limit the current flowing through the entire circuit? Doesn't it limit only the current that is flowing past and after the resistor?

First of all, resistors 'limit' current by converting the electrical energy that flows through the resistor into heat energy. Secondly, the current flowing into a resistor is equal to the current flowing out of the resistor. Although there is a voltage drop across the resistor, there is no 'current drop'. By dropping voltage across the limiting resistor in the circuit, you lower the voltage drop across the rest of the circuit; thus the current with the current-limiting resistor in place is less through the entire circuit than if you did not have the resistor there.

Another way to think of it is that by adding the resistor in series with the existing circuit, you have increased the total circuit impedance and, by Ohm's law, reduced the current flow.
$$I_{initial}=\frac{V}{R_{circ}}$$
$$I_{after}=\frac{V}{R_{circ}+R_{limiter}}$$
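Plugging numbers into those two formulas (arbitrary illustrative values, not taken from the question's circuit) shows how a series resistor lowers the current everywhere in the loop at once:

```python
V = 10.0          # supply voltage in volts (illustrative value)
R_circ = 100.0    # resistance of the rest of the circuit, in ohms
R_limiter = 50.0  # added series current-limiting resistor, in ohms

I_initial = V / R_circ              # 0.100 A without the limiter
I_after = V / (R_circ + R_limiter)  # about 0.067 A with it

# In a series loop the same (reduced) current flows through every element,
# including the ammeter, wherever it sits - there is no 'current drop'.
```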
{ "domain": "physics.stackexchange", "id": 26433, "tags": "electricity, electric-circuits, electric-current, electrical-resistance" }
How do I solve for Ksp
Question: The problem gives that the solubility of silver dichromate is $8.3\times10^{-3}\ \mathrm{g}/100\ \mathrm{mL}$. I need to find $K_{sp}$, which is supposed to be $2.8\times10^{-11}$. $K_{sp}$ is $\ce{[Ag+]^2[Cr2O7^{2-}]}$.

I changed the $8.3\times 10^{-3}$ from g/mL to mol/L and got $1.9\times 10^{-8}$. Silver dichromate breaks into 2 Ag and one dichromate, so that would be $(2 \times 1.9\times10^{-8})^2 \times (1.9\times10^{-8})$.

That's what I did but it's not giving me the right answer. Where am I going wrong here?

Answer:

The problem gives that the solubility of silver dichromate is $\mathrm{8.3\cdot10^{-3}\ g/100\,mL}$. […] I changed the $8.3\cdot10^{-3}$ from g/mL to mol/L and got $1.9\cdot10^{-8}$. […] Where am I going wrong here?

Was the solubility given in $\mathrm{g/100\,mL}$?
$$\mathrm{8.3\cdot10^{-3}\ g/100\,mL = 8.3\cdot10^{-2}\,g/L = \frac{8.3\cdot10^{-2}}{431.72}\,mol/L = 1.92\cdot10^{-4}\,mol/L}$$
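With the unit conversion fixed, the rest of the calculation follows the question's own setup. A short check (using the molar mass $431.72\ \mathrm{g/mol}$ for $\ce{Ag2Cr2O7}$ from the answer):

```python
solubility_g_per_100mL = 8.3e-3
molar_mass = 431.72  # g/mol for Ag2Cr2O7, as used in the answer

# g/100 mL -> g/L is a factor of 10, then divide by molar mass for mol/L
s = solubility_g_per_100mL * 10 / molar_mass  # about 1.92e-4 mol/L

# Ag2Cr2O7 -> 2 Ag+ + Cr2O7^2-, so [Ag+] = 2s and [Cr2O7^2-] = s
Ksp = (2 * s) ** 2 * s  # = 4 s^3, about 2.8e-11
```

This reproduces the expected $K_{sp} = 2.8\times10^{-11}$, confirming that the only error was the g/100 mL to mol/L conversion.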
{ "domain": "chemistry.stackexchange", "id": 3056, "tags": "solubility, solutions" }
The role of tensors and matrices in general relativity
Question: First, I basically just want to know what exactly is the difference between tensors and matrices? Second, I'm looking for clarification on the following two (possibly confused) thoughts of my own on this, which follow here.

(1) I was basically taught in school that a tensor is any object that transforms as:
$$T_{\alpha \beta}^{'}\rightarrow \frac{\partial x_{\alpha}^{'}}{\partial x_{\alpha}} \frac{\partial x_{\beta}^{'}}{\partial x_{\beta}} T_{\alpha \beta}$$
However, is it correct that not all matrices follow this transformation law? Meaning all tensors can be represented as matrices, but not all matrices are necessarily tensors? (Hence, THIS is the difference between matrices and tensors?) However, if that is true, then it seems a little loose to say matrices are tensors of rank 2, if indeed all matrices do not actually transform as tensors, and therefore are not tensors?

(2) Is it correct that one of the things the law of general covariance says is that if a tensor equation is true in one frame then it is true in all frames? Secondly, is this made possible by the tensor transformation law?

Answer:

However, if that is true, then it seems a little loose to say matrices are tensors of rank 2

Where did you read that?

A matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, the dimensions of a 2 × 3 matrix are two rows and three columns.

Tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Examples of such relations include the dot product, the cross product, and linear maps.

The definitions above are from Wikipedia. If you look at Einstein's equation:
$${\displaystyle R_{\mu \nu }-{1 \over 2}{Rg_{\mu \nu }}+\Lambda g_{\mu \nu }=8\pi {G \over c^{4}}T_{\mu \nu }}$$
because this equation is written in tensor form, it is true in all frames.

Is it correct that one of the things the law of general covariance says is that if a tensor equation is true in one frame then it is true in all frames? Secondly, is this made possible by the tensor transformation law?

Yes and yes. Tensors, because of their transformation properties, are essential in writing GR-related equations. In comparison, a matrix is basically just a bookkeeping exercise. This same question is covered in Matrices and Tensors on MathSE. This extract from Tensors by James Rowland is a better description than I can give. It is longer than I would like to quote, but informative, imo. My tablet, for some reason, will not copy the link, but it is easy to find. My apologies for this.

Notice how in my example for a rank 2 tensor I specified a basis to work in. At this point you might think that a matrix is a rank 2 tensor, or that a vector is a rank 1 tensor. This is not quite right. These are representations of tensors in some basis. The tensor is a more general thing that doesn't care about the basis you work in. If you have the representation of a rank 1 tensor in some basis (a vector), you can obtain the representation in another basis by coordinate transformation. When you change bases, you must change the representation of your tensor v → P.v, and both of these vectors represent the same rank 1 tensor in some basis. A matrix A doesn't change when you change bases. It is silly to say "the matrix $P^{−1}AP$ is the matrix A in some basis", unless P is the identity matrix. However, if A represents the rank 2 tensor B in some basis, then the matrix $P^{−1}AP$ represents B in another basis. The key here is to note the difference between a tensor and its representation.

When you specify a direction and distance to your house by pointing, notice that you do not refer to a coordinate system. If you decided your coordinates are North, East, and away from the center of the Earth, then you can specify a vector which represents the direction your arm is pointing. Now if you change to a coordinate system based on South, West, and towards the center of the Earth, the coordinates would all be flipped, but your arm still points in the same direction. A direction does not depend on a coordinate system, its coordinates do. [My emphasis]
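Rowland's point about representations can be made concrete with small matrices. Below is a hypothetical 2-D sketch in plain Python (the basis change P is chosen arbitrarily): the matrix representing the rank-2 tensor changes to $P^{-1}AP$, yet the underlying linear map sends the same vector to the same vector:

```python
# Minimal 2x2 helpers, no libraries needed
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[1, 2], [3, 4]]  # representation of a rank-2 tensor in the old basis
P = [[0, 1], [1, 0]]  # basis change: swap the two axes (here P is its own inverse)

B = matmul(matmul(P, A), P)  # P^{-1} A P: the SAME tensor represented in the new basis

v = [1.0, 0.0]        # a vector's components in the old basis
v_new = matvec(P, v)  # the same vector's components in the new basis

# Acting with A in the old basis and then changing basis...
w_new_via_old = matvec(P, matvec(A, v))
# ...gives the same components as acting with B directly in the new basis:
w_new_direct = matvec(B, v_new)
```

`B` differs from `A` as an array of numbers, but both represent one tensor; that is exactly why "the matrix is the tensor" is too loose a statement.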
{ "domain": "physics.stackexchange", "id": 41656, "tags": "general-relativity, tensor-calculus" }
Confusion in calculating electric field due to infinite plane
Question: (I) Electric field at a point on positive $x$-axis: Let us consider Cartesian coordinate system with infinitely large circular plane at $y$-$z$ plane. Let $P$ be any point where we want to measure $\vec{E}$. Let us take the origin of our coordinate system at point $O$ which is the projection of point $P$ on $y$-$z$ plane. Let us first consider a red ring of infinitesimal thickness $dl$ with center $O$ and radius $l$ in the $y$-$z$ plane. Let an infinitesimal element of this red ring be $Q_1$. Let another infinitesimal element at the other end of the diameter be $Q_2$. Let us first calculate $E_x$ $E_x$ at point $P$ due to element area at $Q_1$: $$d^2E_x=k\ dq'\ \dfrac{\xi}{r^3}=k\ \sigma\ dS'\ \dfrac{\xi}{r^3}=k\ \sigma\ (l\ d\theta\ dl)\ \dfrac{\xi}{r^3}$$ $E_x$ at point $P$ due to element area of red ring: $$dE_x= \int_0^{2 \pi}d^2E_x=\int_0^{2 \pi} k\ \sigma\ (l\ d\theta\ dl)\ \dfrac{\xi}{r^3}=k\ \sigma\ l\ dl\ \dfrac{\xi}{r^3} \int_0^{2 \pi} d\theta=2 \pi\ k\ \sigma\ l\ dl\ \dfrac{\xi}{r^3}$$ $E_x$ at point $P$ due to infinitely large circular plane: $$E_x=\int_0^\infty dE_x=\int_0^\infty 2 \pi\ k\ \sigma\ l\ dl\ \dfrac{\xi}{r^3} =2 \pi\ k\ \sigma\ \xi\ \int_0^\infty \dfrac{l}{r^3}\ dl$$ $$ \bbox[5px,border:2px solid black] { \dfrac{\partial r}{\partial l}=\dfrac{dr}{dr^2} \dfrac{\partial r^2}{\partial l}=\dfrac{1}{2r} \dfrac{\partial (\xi^2 + l^2)}{\partial l}=\dfrac{1}{2r} \dfrac{\partial l^2}{\partial l}=\dfrac{1}{2r} 2l=\dfrac{l}{r} \Rightarrow dl=\dfrac{r\ dr}{l} } $$ $$ \bbox[5px,border:2px solid black] { \text{When:}\ l=0, r= \xi;\ l=\infty, r= \infty } $$ Therefore: $$E_x=2 \pi\ k\ \sigma\ \xi\ \int_\xi^\infty \dfrac{l}{r^3}\ \dfrac{r\ dr}{l}= 2 \pi\ k\ \sigma\ \xi\ \int_\xi^\infty r^{-2} dr= 2 \pi\ k\ \sigma\ \xi\ \left[ -\dfrac{1}{r} \right]_{\xi}^\infty=-2 \pi\ k\ \sigma\ \xi\ \left[ \dfrac{1}{\infty}-\dfrac{1}{\xi} \right]=2 \pi\ k\ \sigma\ $$ This result is indeed correct. 
(II) Electric field at a point on negative $x$-axis

If I use the same coordinate system and find $E_x$ at a point on the negative $x$-axis, I should get $(-2 \pi\ k\ \sigma)$. But I am getting $(2 \pi\ k\ \sigma)$ by following the exact same calculation as shown above. The only difference between the two calculations is that in the former $\xi$ is positive while in the latter $\xi$ is negative. But it shouldn't matter, as it gets cancelled out. Please explain why I am not getting $E_x=-2 \pi\ k\ \sigma$.

Answer: Ah, sign errors. You've got to be careful with how you treat signs. Your issue comes from blending together coordinates and distances in $\xi$. Coordinates may be negative, but distances may not be. In particular, re-visit the line "When: $l=0$, $r=\xi$." Earlier, $\xi$ referred to your coordinate along the x-axis, allowing it to be negative. In this line, however, $r=\xi$ refers to the distance from the plane, and would be non-physical for negative $\xi$. If you want $\xi$ to represent a coordinate, then this line should instead read $r=|\xi|$. The correct result follows.
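The sign behaviour is easy to verify numerically. With the substitution handled correctly, $E_x \propto \xi \int_0^\infty l\,(\xi^2+l^2)^{-3/2}\,dl = \xi/|\xi|$, so the field flips sign with the coordinate. A rough trapezoid-rule check (finite upper limit $L$, so the result is only approximately $\pm 1$):

```python
def Ex_factor(xi, L=200.0, n=200000):
    # xi * integral_0^L of l / (xi^2 + l^2)^(3/2) dl, by the trapezoid rule.
    # Analytically this is xi * (1/|xi| - 1/sqrt(xi^2 + L^2)) -> sign(xi) as L -> infinity,
    # because r = sqrt(xi^2 + l^2) is inherently positive; the sign comes from xi alone.
    dl = L / n
    f = lambda l: l / (xi**2 + l**2) ** 1.5
    s = 0.5 * (f(0.0) + f(L)) + sum(f(i * dl) for i in range(1, n))
    return xi * s * dl

pos = Ex_factor(+2.0)  # close to +1
neg = Ex_factor(-2.0)  # close to -1
```

Multiplying this factor by $2\pi k\sigma$ gives $E_x = \pm 2\pi k\sigma$ on the two sides of the plane, as the answer says.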
{ "domain": "physics.stackexchange", "id": 53065, "tags": "homework-and-exercises, electrostatics, electric-fields, mathematical-physics" }
Android APP User class implementation
Question: This is a follow-up question for Android app class serialization. The implementation of the User class has been updated, and the functionality of null string checking, password strength checking and the correctness of date formatting are considered in this post. The part of password strength checking has been implemented in the PasswordStrength class.

The experimental implementation

Project name: UserClassTest

PasswordStrength.java implementation:

```java
package com.example.userclasstest;

import java.util.ArrayList;

public class PasswordStrength {
    private ArrayList<String> commonPasswords = new ArrayList<>();
    private int minimumLengthRequired = 8;

    public enum Level {
        Weak,
        Medium,  // TODO: Implement the determination of medium level password
        Strong   // TODO: Implement the determination of strong level password
    }

    private Level passwordStrengthLevel;

    public PasswordStrength(String input) {
        CreateCommonPasswordsList();
        if (LengthCheck(input) == false) {
            this.passwordStrengthLevel = Level.Weak;
            return;
        }
        if (this.commonPasswords.contains(input)) {
            this.passwordStrengthLevel = Level.Weak;
            return;
        }
        this.passwordStrengthLevel = Level.Medium;
        return;
    }

    private void CreateCommonPasswordsList() {
        this.commonPasswords.add("111111");
        this.commonPasswords.add("123123");
        this.commonPasswords.add("12345");
        this.commonPasswords.add("123456");
        this.commonPasswords.add("12345678");
        this.commonPasswords.add("123456789");
        this.commonPasswords.add("1234567890");
        this.commonPasswords.add("picture1");
        this.commonPasswords.add("password");
        this.commonPasswords.add("password123");
        this.commonPasswords.add("Password");
        this.commonPasswords.add("Password123");
        this.commonPasswords.add("Password123");
    }

    private boolean LengthCheck(String input) {
        if (input.length() > minimumLengthRequired) {
            return true;
        }
        return false;
    }

    public boolean IsPasswordWeak() {
        return this.passwordStrengthLevel.equals(Level.Weak);
    }
}
```

User.java implementation:

```java
package com.example.userclasstest;

import android.content.Context;
import android.util.Log;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.text.DateFormat;
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class User implements java.io.Serializable {
    private String fullName;
    private String personalID;
    private String dateOfBirth;
    private String cellPhoneNumber;
    private String emailInfo;
    private String password;

    public User(String fullNameInput, String personalIDInput, String dateOfBirthInput,
                String cellPhoneNumberInput, String emailInfoInput, String passwordInput)
            throws NoSuchAlgorithmException, NullPointerException, IllegalArgumentException
    // User object constructor
    {
        // Reference: https://stackoverflow.com/a/6358/6667035
        if (fullNameInput == null) {
            throw new NullPointerException("fullNameInput must not be null");
        }
        this.fullName = fullNameInput;
        if (personalIDInput == null) {
            throw new NullPointerException("personalIDInput must not be null");
        }
        this.personalID = personalIDInput;
        if (dateOfBirthInput == null) {
            throw new NullPointerException("dateOfBirthInput must not be null");
        }
        if (IsDateValid(dateOfBirthInput) == false) {
            throw new IllegalArgumentException("dateOfBirthInput is invalid");
        }
        this.dateOfBirth = dateOfBirthInput;
        if (cellPhoneNumberInput == null) {
            throw new NullPointerException("cellPhoneNumberInput must not be null");
        }
        this.cellPhoneNumber = cellPhoneNumberInput;
        if (emailInfoInput == null) {
            throw new NullPointerException("emailInfoInput must not be null");
        }
        this.emailInfo = emailInfoInput;
        if (passwordInput == null) {
            throw new NullPointerException("passwordInput must not be null");
        }
        if (new PasswordStrength(passwordInput).IsPasswordWeak()) {
            throw new IllegalArgumentException("Password is weak!");
        }
        this.password = HashingMethod(passwordInput);
    }

    // Reference: https://stackoverflow.com/a/20231617/6667035
    private boolean IsDateValid(String dateStr) {
        if (dateStr.matches("([0-9]{2})/([0-9]{2})/([0-9]{4})"))
            return true;
        else
            return false;
    }

    public String GetFullName() {
        return this.fullName;
    }

    public String GetPersonalID() {
        return this.personalID;
    }

    public String GetDateOfBirth() {
        return this.dateOfBirth;
    }

    public String GetCellPhoneNumber() {
        return this.cellPhoneNumber;
    }

    public String GetEmailInfo() {
        return this.emailInfo;
    }

    public String GetHash() throws NoSuchAlgorithmException {
        return HashingMethod(this.fullName + this.personalID);
    }

    public String GetHashedPassword() throws NoSuchAlgorithmException {
        return this.password;
    }

    public boolean CheckPassword(String password) {
        boolean result = false;
        try {
            result = this.password.equals(HashingMethod(password));
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    //**********************************************************************************************
    // Reference: https://stackoverflow.com/a/2624385/6667035
    private String HashingMethod(String InputString) throws NoSuchAlgorithmException {
        MessageDigest messageDigest = MessageDigest.getInstance("SHA-256");
        String stringToHash = InputString;
        messageDigest.update(stringToHash.getBytes());
        String stringHash = new String(messageDigest.digest());
        return stringHash;
    }
}
```

MainActivity.java implementation:

```java
package com.example.userclasstest;

import androidx.appcompat.app.AppCompatActivity;

import android.content.Context;
import android.os.Bundle;
import android.util.Log;
import android.widget.Toast;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        User currentUser = null;
        try {
            currentUser = new User(
                    "Mike",
                    "M12345678",
                    "10/13/1990",
                    "(555) 555-1234",
                    "123456@test.com",
                    "p?@nY4q~?A-)T#+^");
        } catch (Exception e) {
            ShowToast(e.getMessage(), Toast.LENGTH_LONG);
            e.printStackTrace();
            Log.d("onCreate", e.getMessage());
            return;
        }
        ShowToast(currentUser.GetFullName(), Toast.LENGTH_SHORT);
    }

    private void ShowToast(String Text, int Duration) {
        Context context = getApplicationContext();
        CharSequence text = Text;
        int duration = Duration;

        Toast toast = Toast.makeText(context, text, duration);
        toast.show();
    }
}
```

All suggestions are welcome. The summary information:

Which question is it a follow-up to? Android app class serialization.

What changes have been made in the code since the last question? The implementation of the User class has been updated, and the functionality of null string checking, password strength checking and the correctness of date formatting are considered in this post.

Why is a new review being asked for? If there is any possible improvement, please let me know.

Answer: Dates

```java
// Reference: https://stackoverflow.com/a/20231617/6667035
private boolean IsDateValid(String dateStr) {
    if (dateStr.matches("([0-9]{2})/([0-9]{2})/([0-9]{4})"))
        return true;
    else
        return false;
}
```

Did you read the comment the OP posted after that linked answer?

Almost 5 years after posting this answer, I realize that this is a stupid way to validate a date format. But I'll just leave this here to tell people that using regex to validate a date is unacceptable.

As an example... "99/99/2021" is a valid date according to this check...
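The reviewer's point is that matching the *shape* of a date is not the same as validating it. For brevity, here is the same idea sketched in Python rather than Java: the regex from IsDateValid accepts "99/99/2021", while actually parsing the string as a calendar date rejects it.

```python
import re
from datetime import datetime

def regex_is_date(s):
    # The pattern from IsDateValid: two digits / two digits / four digits
    return re.fullmatch(r"[0-9]{2}/[0-9]{2}/[0-9]{4}", s) is not None

def parsed_is_date(s):
    # Actually interpret the string as a month/day/year calendar date
    try:
        datetime.strptime(s, "%m/%d/%Y")
        return True
    except ValueError:
        return False

regex_says = regex_is_date("99/99/2021")    # True: the shape matches
parser_says = parsed_is_date("99/99/2021")  # False: month 99 does not exist
```

In Java, the equivalent fix is to parse the string, e.g. with `SimpleDateFormat` (with `setLenient(false)`) or `java.time.LocalDate.parse`, and treat a parse exception as invalid input.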
{ "domain": "codereview.stackexchange", "id": 41400, "tags": "java, object-oriented, android, classes, null" }
Wrong Graph Plot using K-Means in Python
Question: This is my first time implementing a Machine Learning Algorithm in Python. I tried implementing K-Means using Python and Sklearn for this dataset. from sklearn.cluster import KMeans import numpy as np import pandas as pd from matplotlib import pyplot as plt # Importing the dataset data = pd.read_csv('dataset.csv') print("Input Data and Shape") print(data.shape) data.head() # Getting the values and plotting it f1 = data['Area'].values f2 = data['perimeter'].values f3 = data['Compactness'].values f4 = data['length_kernel'].values f5 = data['width_kernel'].values f6 = data['asymmetry'].values f7 = data['length_kernel_groove'].values X = np.array(list(zip(f1,f2,f3,f4,f5,f6,f7))) # Number of clusters kmeans = KMeans(n_clusters=7) kmeans = kmeans.fit(X) # Getting the cluster labels labels = kmeans.predict(X) # Centroid values centroids = kmeans.cluster_centers_ plt.scatter(X[:,0], X[:,1],cmap='rainbow') plt.scatter(centroids[:,0], centroids[:1], color="black", marker='*') plt.show() The graph doesn't seem to plot the data correctly. How can I debug this issue? Answer: Well, there are some issues: Dimension vs K: Before talking about visualization I would like to address a clustering concept. Your data is in 7 dimensions, but that does not mean you have 7 clusters! Be careful here. For instance, say I have two features of people: salary and number of years of working experience. I have two features, but does that mean there are necessarily two categories in the data? Surely not! Visualization: Your data is in 7 dimensions, which cannot be visualized directly. So you decided to reduce this to two, which is a correct approach, but you did the wrong thing for this correct approach. You cannot take the first two features to visualize 7 dimensions; you need to REDUCE them to two features using dimensionality-reduction algorithms like PCA, NMF, etc. 
What you did is actually IGNORING 5 dimensions of the points, which are extremely informative for placing them in 7-dimensional space. Solution Everything else is right. Just add a PCA step to your code like this: from sklearn.decomposition import PCA model = PCA(n_components=2) X_new = model.fit_transform(X) ... and use X_new instead of X for the K-means procedure. Please note that I wrote this relying on my memory, so it is better to check the documentation in case I made a typo or something. In case you have more questions you can comment here. Good luck!
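For completeness, here is a minimal self-contained sketch of the projection step. The random matrix below merely stands in for the 7-feature seeds data, and the hand-rolled SVD projection is the same operation `sklearn.decomposition.PCA` performs internally (centering followed by projection onto the leading singular vectors):

```python
import numpy as np

def pca_project(X, n_components=2):
    # Center the data, then project onto the leading principal directions
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # shape (n_samples, n_components)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 7))      # stand-in for the 7-column seeds dataset
X_new = pca_project(X)
print(X_new.shape)                 # (100, 2)
```

Plotting `X_new[:, 0]` against `X_new[:, 1]` then shows the directions of dominant variance across all seven features, instead of silently discarding five of them.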
{ "domain": "datascience.stackexchange", "id": 2213, "tags": "python, scikit-learn, clustering, k-means, plotting" }
Curious about computer-assisted NP-completeness proofs
Question: In the paper "THE COMPLEXITY OF SATISFIABILITY PROBLEMS" by Thomas J. Schaefer, the author has mentioned that This raises the intriguing possibility of computer-assisted NP-completeness proofs. Once the researcher has established the basic framework for simulating conjunctions of clauses, the relational complexity could be explored with the help of a computer. The computer would be instructed to randomly generate various input configurations and test whether the defined relation was non-affine, non-bijunctive, etc. Of course, this is a limitation: The fruitfulness of such an approach remains to be proved: the enumeration of the elements of a relation on 10 or 15 variables is surely not a light computational task. I am curious: Is there follow-up research developing this idea of "computer-assisted NP-completeness proofs"? What is the state of the art (maybe specific to $\textsf{3SAT}$ or $\textsf{3-Partition}$)? Since Schaefer proposed the idea of a "computer-assisted" NP-completeness proof (at least for reductions from $\textsf{SAT}$), does this mean there are some general principles/structures underlying these reductions (for the ones from $\textsf{3SAT}$ or $\textsf{3-Partition}$)? If so, what are they? Does anyone have experience in proving NP-completeness with a computer assistant? Or can anyone make up an artificial example? Answer: As for question 2, there are at least two examples of $NP$-completeness proofs that involve computer assistance. Erickson and Ruskey provided a computer-aided proof that Domino Tatami Covering is NP-complete. They gave a polynomial-time reduction from planar 3-SAT to tatami domino covering. A SAT solver (Minisat) was used to automate gadget discovery in the reduction. No other $NP$-completeness proof is known for it. Ruepp and Holzer proved that the pencil puzzle Kakuro is $NP$-complete. Some parts of the $NP$-completeness proof were generated automatically using a SAT solver (again Minisat).
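The closure tests Schaefer mentions are also easy to sketch mechanically: a Boolean relation is bijunctive iff it is closed under coordinatewise majority of any three of its tuples, and affine iff it is closed under coordinatewise XOR of any three tuples. A toy checker (the 1-in-3 relation is the standard example failing both tests, which is what makes 1-in-3-SAT NP-complete in the dichotomy):

```python
from itertools import product

def maj(a, b, c):
    # Boolean majority of three bits
    return (a & b) | (a & c) | (b & c)

def closed_under(relation, op):
    # Closed under op: applying op coordinatewise to any three tuples of
    # the relation yields another tuple of the relation.
    for t1, t2, t3 in product(relation, repeat=3):
        if tuple(op(a, b, c) for a, b, c in zip(t1, t2, t3)) not in relation:
            return False
    return True

def is_bijunctive(relation):
    return closed_under(relation, maj)

def is_affine(relation):
    return closed_under(relation, lambda a, b, c: a ^ b ^ c)

# 1-in-3 SAT relation: exactly one of three variables is true
one_in_three = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
print(is_bijunctive(one_in_three), is_affine(one_in_three))  # False False
```

The brute-force loop is cubic in the relation size, which also illustrates why Schaefer expected relations on 10 or 15 variables (up to $2^{15}$ tuples) to be computationally heavy.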
{ "domain": "cstheory.stackexchange", "id": 3115, "tags": "cc.complexity-theory, reference-request, soft-question, reductions, proof-assistants" }
play_card() method in an Exploding Kittens card game
Question: I wrote a library for card games. To demonstrate how it can be used, I also wrote an Exploding Kittens card game as an example. I'm using CodeClimate to determine cyclomatic complexity. This method in the Exploding Kittens game has complexity too high (6): def play_card(self, card: KittenCard, player: Player = None, target: Player = None): if card.selfcast and player is None: raise Exception("You must pass a player who owns the card!") if card.targetable and target is None: raise Exception("You must pass a target!") if not self.ask_for_nope(): card.effect(player, target) else: print("Card was noped :(") My question is, how would you reduce it? It seems really readable and maintainable now, but I want to get rid of the CodeClimate issue. Should I be splitting it up into two functions, or does that defeat its purpose and make the code less maintainable? Answer: (This is a new version of the answer based on feedback from iScrE4m, who asked the question. Feel free to consult the history for the original, wrong answer.) Readability First, it's confusing to write code like this: if condition: do_stuff() if other_condition: do_other_stuff() Because it can be mistaken for the same code using elif, where only one branch is executed. And indeed, that's how I read it first. The solution is simple: add a new line between the two conditions. That's a matter of personal preference though, and some developers prefer to write compactly as you did. What's not captured by cyclomatic complexity? Nevertheless, it's difficult to reason about this function, simply because you're using ifs and no elifs. So we have to think about all options: for each if that's not an elif, what happens if both conditions are true? And I don't know if card.selfcast and player is None, card.targetable and target is None and not self.ask_for_nope() can all be true or not. 
I'm lucky here because you're raising an exception in the first two so I can infer that you consider them exclusive, but that's more reasoning than I would have liked to make. However, cyclomatic complexity, as defined by Thomas J. McCabe in his 1976 paper, and used by modern tools, does not capture this information. He says in the beginning of section 2 ("A complexity measure"): The complexity measure approach we will take is to measure and control the number of paths through a program. This approach, however, immediately raises the following nasty problem: "Any program with a backward branch potentially has an infinite number of paths." Although it is possible to define a set of algebraic expressions that give the total number of possible paths through a (structured) program (1), using the total number of paths has been found to be impractical. Because of this the complexity measure developed here is defined in terms of basic paths - that when taken in combination will generate every possible path. (1) The appendix explains how it can be done. It's not entirely clear why it's considered impractical, but I guess it makes it harder to compare programs even if it does not capture the issue here. (When McCabe wrote his paper in 1976, Fortran did not have ELSE IF: it only appeared in Fortran 77, but it already had a case statement, like the switch statement in C). Leaving cyclomatic complexity aside, what are all the paths the original code can take? Two options per if, so that's 2*2*2, i.e. 8. Now, using elif, you only have 4 options, all of them with only one statement. Much better, right? What's captured by cyclomatic complexity? One thing that cyclomatic complexity does capture is that "complicated" conditions count more than simple ones. So when you write if card.selfcast and player is None, this actually counts as two ifs. This is why you reach 6: 5 ifs + 1. 
If you want to stay within the limit, you can simply encapsulate the conditions in their own functions, which would lower the complexity to 4. Solving your immediate issue I'm not sure encapsulating the conditions is worth it. In this case, consider just raising the limit: 6 is quite low, and I've seen 10 used elsewhere successfully (maybe because this is the number McCabe used in his original paper, even if he says it's arbitrary). This is just a tool, and if it does not help, then stop using it. :)
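To make the four mutually exclusive paths explicit, the method can be rewritten as a single elif chain. A minimal self-contained sketch (the `Card`/`Game` stubs below are stand-ins for the library's real classes, not code from the original project):

```python
class Card:
    def __init__(self, selfcast=False, targetable=False):
        self.selfcast = selfcast
        self.targetable = targetable
        self.played = False

    def effect(self, player, target):
        self.played = True

class Game:
    def __init__(self, noped=False):
        self.noped = noped

    def ask_for_nope(self):
        return self.noped

    def play_card(self, card, player=None, target=None):
        # One elif chain: exactly one of the four branches can run,
        # so a reader no longer has to ask what happens if several
        # conditions hold at once.
        if card.selfcast and player is None:
            raise ValueError("You must pass a player who owns the card!")
        elif card.targetable and target is None:
            raise ValueError("You must pass a target!")
        elif not self.ask_for_nope():
            card.effect(player, target)
        else:
            print("Card was noped :(")

game = Game()
card = Card(selfcast=True)
game.play_card(card, player="alice")
print(card.played)  # True
```

This does not lower the cyclomatic complexity score (the five conditions still count), but it cuts the number of execution paths from 8 to 4.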
{ "domain": "codereview.stackexchange", "id": 21920, "tags": "python, playing-cards, cyclomatic-complexity" }
Optimizing Iteration of List and Dictionaries Python
Question: This project's goal is to parse through a large file containing data. I parse that data into a list containing a dictionary. I then do calculations based on that data and optionally plot it for visualization purposes. I use the data for simple calculations to the rewards based on their performance. I wrote this so that each worker mining to a pool will be rewarded fairly. This allows multiple people to mine on the same account for quicker payout times. Saved data file example: Worker_Data.Data: Data={'pool_current_hashrate': '100215904', 'pool_average_hashrate': '61640734', 'pool_reported_hashrate': '78165786', 'current_hashrate_alex147': '47721859', 'average_hashrate_alex147': '35791394', 'reported_hashrate_alex147': '36895352', 'current_hashrate_henry147': '52494045', 'average_hashrate_henry147': '25849340', 'reported_hashrate_henry147': '41354162', 'time_stamp': '1620751617', 'eth': '0.008999485617836284', 'zil': '4.654624711084'} Data={'pool_current_hashrate': '100215904', 'pool_average_hashrate': '61640734', 'pool_reported_hashrate': '78337185', 'current_hashrate_alex147': '47721859', 'average_hashrate_alex147': '35791394', 'reported_hashrate_alex147': '36890956', 'current_hashrate_henry147': '52494045', 'average_hashrate_henry147': '25849340', 'reported_hashrate_henry147': '41509445', 'time_stamp': '1620751678', 'eth': '0.008999485617836284', 'zil': '4.654624711084'} Note: there can be as few as 1 miner or as many as the pool allows, each one specifies the current, average, and reported hash rate of the mining rig. I parse this file which contains tens of thousands of these lines to calculate how much 'work' each miner has done in the time between payouts and then calculate their take of the change in balance. Each line is a separate dictionary in a list. 
Code: from ast import literal_eval PATH = "A:\\Python Project\\ezil_api\\Data\\" # path of data file WORKER_SPLIT = 0.50 # used if start balance is not 0 def make_file(name, config_dict, type_conf, path=PATH): with open(path + name + "." + type_conf, "a+") as file: for keys, values in zip(config_dict.keys(), config_dict.values()): file.write(f"{keys}={values}\n") def read_data(path, file_name): data = [] with open(path + file_name, "r+") as config: lines = config.readlines() for line in lines: line = line[line.find("=") + 1:] line_data = literal_eval(line) data.append(line_data) return data def eval_data(): workers = [] start_balance_eth = 0 start_balance_zil = 0 balance_eth = [] balance_zil = [] balance_delta_eth = [] balance_delta_zil = [] delta_eth_range = [0] time = [] time_delta = [] balance_workers_eth = {} balance_workers_zil = {} hashrate_workers = {} integral_worker = {} worker_percentage = {} b = {} odd = 0 even = 0 hashrate_pool = [] balance_eth_delta = [] total_integral = [] temp_integral = 0 files_workers = read_data(path=PATH, file_name="Worker_Data.Data") from time import time as t for worker_data in files_workers: index = files_workers.index(worker_data) worker_list_temp = [worker_temp[17:] for worker_temp in worker_data.keys() if "average_hashrate_" in worker_temp] for worker in worker_list_temp: if worker not in workers: workers.append(worker) hashrate_workers[worker] = [] balance_workers_eth[worker] = 0 balance_workers_zil[worker] = 0 integral_worker[worker] = [] worker_percentage[worker] = [] b[worker] = [] current_balance_eth = float(worker_data["eth"]) current_balance_zil = float(worker_data["zil"]) current_time = int(worker_data["time_stamp"]) for worker in workers: current_worker_in_keys = False for keys in worker_data.keys(): if worker in keys: current_worker_in_keys = True if current_worker_in_keys: worker_hashrate = worker_data[f"current_hashrate_{worker}"] hashrate_workers[worker].append(int(worker_hashrate)) else: 
hashrate_workers[worker].append(0) hashrate_pool.append(float(worker_data["pool_current_hashrate"])) if index > 0: if current_balance_eth > balance_eth[-1]: delta_eth = current_balance_eth - balance_eth[-1] balance_delta_eth.append(delta_eth) else: balance_delta_eth.append(0) if current_balance_zil > balance_zil[-1]: delta_zil = current_balance_zil - balance_zil[-1] balance_delta_zil.append(delta_zil) else: balance_delta_zil.append(0) delta_time = current_time - time[-1] time_delta.append(delta_time) else: start_balance_eth = current_balance_eth start_balance_zil = current_balance_zil balance_delta_eth.append(0) balance_delta_zil.append(0) balance_eth.append(current_balance_eth) balance_zil.append(current_balance_zil) time.append(current_time) for d_eth, index_temp in zip(balance_delta_eth, range(len(balance_delta_eth))): if d_eth != 0: delta_eth_range.append(index_temp) for worker in workers: # if it doesn't have data for balances, it splits it between workers if start_balance_zil > 0: balance_workers_zil[worker] += start_balance_zil * WORKER_SPLIT if start_balance_eth > 0: balance_workers_eth[worker] += start_balance_eth * WORKER_SPLIT for index in range(len(delta_eth_range)): # integral of hashrate if index > 0: temp_time_delta_list = time_delta[delta_eth_range[index - 1]:delta_eth_range[index]] temp_hashrate_list = [hashrate_workers[worker][delta_eth_range[index - 1]:delta_eth_range[index]], temp_time_delta_list] while len(temp_hashrate_list[0]) < len(temp_hashrate_list[1]): temp_hashrate_list[0].append(0) temp_hashrate_len = len(temp_hashrate_list[0]) x = temp_hashrate_list[0] y = temp_hashrate_list[1] if temp_hashrate_len > 4: # do simpsons integration: # start = (delta x * h[0] + delta x * h[-1])/3 # odd = (delta x * h[1] + delta x * h[3]...) * (4/3) # evens = (delta x * h[2] + delta x h[4]...) 
* (2/3) start = (x[0] * y[0] + x[-1] * y[-1]) * (4 / 3) for i in range(len(temp_hashrate_list)): if ((temp_hashrate_len - 1) > i) and (i > 0): if i % 2: odd += (x[i] * y[i]) * (4 / 3) else: even += (x[i] * y[i]) * (2 / 3) integral = start + even + odd integral_worker[worker].append(integral) even = 0 odd = 0 elif temp_hashrate_len > 1: # do trapezoid integration # delta x/2(h[0] + 2*h[1] + 2*h[2]... + h[-1]) trap_integral = ((x[0] * y[0]) + (x[-1] * y[-1])) for i in range(len(temp_hashrate_list)): if ((temp_hashrate_len - 1) > i) and (i > 0): trap_integral += (x[i] * y[i]) integral_worker[worker].append(trap_integral) elif temp_hashrate_len == 1: # do riemann sum integration # y * delta x riemann_integral = y[0] * x[0] integral_worker[worker].append(riemann_integral) for index in range(len(integral_worker[workers[0]])): for worker in integral_worker.keys(): temp_integral += integral_worker[worker][index] total_integral.append(temp_integral) temp_integral = 0 for worker in workers: for integral_t, worker_integral in zip(total_integral, integral_worker[worker]): try: worker_percentage[worker].append(worker_integral / integral_t) except ZeroDivisionError: pass for delta in balance_delta_eth: if delta != 0: balance_eth_delta.append(delta) for worker in workers: for percentage, delta in zip(worker_percentage[worker], balance_eth_delta): balance_workers_eth[worker] += percentage * delta b[worker].append(balance_workers_eth[worker]) def plot(): import matplotlib.pyplot as plt time.sort(reverse=True) plt.xlabel = "Delta Balance index" plt.ylabel = "ETH Balance" plt.title("Index Vs ETH") for worker_d in workers: temp_x = [] for index_d in range(len(worker_percentage[worker_d])): temp_x.append(index_d + 1) x_d = temp_x y_d = b[worker_d] plt.plot(x_d, y_d, "-", label=f"{worker_d}") def plot_ddx(): d_list = [] for local_index in range(len(delta_eth_range)): if local_index > 0: temp_index = delta_eth_range[local_index] prev_temp_index = delta_eth_range[local_index - 1] 
delta_eth_temp = balance_eth[temp_index] - balance_eth[prev_temp_index] delta_time_temp = time[temp_index] - time[prev_temp_index] average_hashrate_temp = (sum(hashrate_pool[prev_temp_index:temp_index])) / ( temp_index - prev_temp_index) d_list.append( ((delta_eth_temp / delta_time_temp) / average_hashrate_temp) * 1000000 * 60 * 60 * 24 * 10) # magic numbers are as follows: # 1000000, convert to per mh/s, # 60*60*24, convert from seconds to days, plt.plot(x_d, d_list, "-", label="ETH per 10 Mh/s per day") plt.legend() plt.show() for keys in balance_workers_eth.keys(): print(keys, balance_workers_eth[keys]) plot() if __name__ == "__main__": eval_data() Is there any way I can speed this up? Currently, at 22,000 lines it takes about 40 seconds to execute this program. The main section of code that takes up the most time is the part in which I iterate through files_workers in the line: for worker_data in files_workers: This takes up a large percentage of the time needed based on my testing and pegs a core on my cpu. Is there a more efficient approach to this problem? I appreciate any help/constructive criticism. Answer: For starters, it seems that the code searches for the relevant index of the current dict object: for worker_data in files_workers: index = files_workers.index(worker_data) but: you can easily use enumerate to avoid this; and it seems that you only care about whether or not it's the first iteration, so maybe a simple flag would be enough.
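A tiny sketch of the difference (the toy list stands in for `files_workers`): `list.index` rescans the list from the start on every iteration, turning the loop quadratic, and with duplicate entries it even returns the wrong position; `enumerate` yields the index for free.

```python
files_workers = [{"eth": "0.1"}, {"eth": "0.2"}, {"eth": "0.3"}]

# O(n^2): files_workers.index(worker_data) rescans the list each pass
# O(n):   enumerate hands back the position directly
first_seen = None
for index, worker_data in enumerate(files_workers):
    if index == 0:          # or keep a simple boolean flag instead
        first_seen = worker_data

print(first_seen)  # {'eth': '0.1'}
```

With 22,000 records, eliminating the repeated `.index` scans alone removes on the order of 22,000 × 22,000 / 2 comparisons.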
{ "domain": "codereview.stackexchange", "id": 41452, "tags": "python, performance" }
Run time complexity of printing a binary tree level by level
Question: I have some doubts about the running time for the following problem. The problem is printing a binary tree level by level. For example, if the tree is like 5 / \ 2 6 / \ \ 7 1 8 Then the output should be like 5 2 6 7 1 8 The TreeNode structure is very simple and looks like below: public class TreeNode { public TreeNode left; public TreeNode right; public int value; public TreeNode(int value) { this.value = value; } } And the code is shown below: public void printByLevel(TreeNode root) { if (root == null) { throw new IllegalArgumentException(); } Queue<TreeNode> current = new LinkedList<>(); current.add(root); while (!current.isEmpty()) { Queue<TreeNode> parents = current; current = new LinkedList<>(); // print parents for (TreeNode parent : parents) { System.out.print(parent.value + " "); } System.out.println(); // construct next level by adding children of each parent for (TreeNode parent : parents) { if (parent.left != null) { current.add(parent.left); } if (parent.right != null) { current.add(parent.right); } } } } Now can I say the running time of this algorithm is \$O(N)\$ because every node is processed only once? Or should I say it is \$O(m*N)\$, where \$m\$ refers to the number of nodes at the current level, because I am printing nodes at each level? Answer: Your algorithm is plain \$O(n)\$. If you double the number of nodes, your runtime doubles. If you double the number of nodes at the current level, you also double the total number of nodes (or are you planning an unbalanced tree?). Your algorithm is OK, but there are some deviations from the classics... The use of a queue here is not quite right, or, rather, the way you are using it is not taking advantage of its queue features. In fact, you are using it as a List, and not a Queue. A Queue should be using the methods add, remove and element (or offer, poll, and peek). You are using the iterate method though instead. Also, instead of emptying the queue, you create a new one, and throw the old one out. 
This is inefficient. A more traditional implementation of the algorithm will use a single queue, and add a 'marker' in the queue to indicate the end of a 'row' in the tree. Consider using 'null' as a marker, to show you have reached the end of the row, as follows: queue.add(root); queue.add(null); while (!queue.isEmpty()) { Node n = queue.remove(); if (n == null) { // end of a row System.out.println(); // which also means we have no more kids to add, so // if there is more data, we mark the end of the *next* row: if (!queue.isEmpty()) { queue.add(null); } } else { System.out.print(n.value + " "); if (n.left != null) { queue.add(n.left); } if (n.right != null) { queue.add(n.right); } } } Sometimes people use a special marker instance instead of null, perhaps something like: private static final Node ENDOFROW = new Node(-1); Then, instead of the null check, you can do: if (n == ENDOFROW) { and you can 'add' an ENDOFROW instance as well instead of the null.
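For reference, here is a self-contained version of the marker-based traversal (the `Node` class below is a minimal stand-in for the original `TreeNode`; note the queue must be a `LinkedList`, since `ArrayDeque` rejects `null` elements):

```java
import java.util.LinkedList;
import java.util.Queue;

public class LevelPrint {
    static class Node {
        Node left, right;
        int value;
        Node(int v) { value = v; }
    }

    static String printByLevel(Node root) {
        StringBuilder out = new StringBuilder();
        Queue<Node> queue = new LinkedList<>();   // LinkedList accepts null
        queue.add(root);
        queue.add(null);                          // marks end of the first row
        while (!queue.isEmpty()) {
            Node n = queue.remove();
            if (n == null) {                      // end of a row
                out.append('\n');
                if (!queue.isEmpty()) queue.add(null);
            } else {
                out.append(n.value).append(' ');
                if (n.left != null) queue.add(n.left);
                if (n.right != null) queue.add(n.right);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Node root = new Node(5);
        root.left = new Node(2);
        root.right = new Node(6);
        root.left.left = new Node(7);
        root.left.right = new Node(1);
        root.right.right = new Node(8);
        System.out.print(printByLevel(root));
    }
}
```

Each node enters and leaves the queue exactly once, plus one marker per level, which again confirms the \$O(n)\$ bound.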
{ "domain": "codereview.stackexchange", "id": 9454, "tags": "java, tree, breadth-first-search" }
Why don't we use sodium bicarbonate to neutralise base spills?
Question: According to this document on lab safety (http://faculty.washington.edu/korshin/Class-486/AEESP-safety-notes.pdf), it says to not use $\ce{NaHCO3}$ to neutralise specifically base spills, stating "Do not use acetic acid or sodium bicarbonate to clean a base spill. The sodium bicarbonate will not neutralize the spill, and acetic acid could react strongly with the base." Could someone explain why sodium bicarbonate won't neutralise a base spill? I have always learnt that it is amphiprotic and therefore can be used to neutralise BOTH acid AND base spills. Answer: I am going to argue that the currently-accepted answer is incorrect. As the current accepted answer correctly pointed out, the bicarbonate ion dissociates into a proton and a carbonate ion, and that an equilibrium between the bicarbonate ion and the carbonate and hydrogen ion is established. However, Sam202's calculation assumes that there is no driving force that would shift the equilibrium towards one side or the other. In the case of an acid/base reaction, this is simply not true. Following from Le Chatelier's principle, the equilibrium can be shifted to favor the products by removing products as they are formed. In the case of sodium bicarbonate reacting with sodium hydroxide, hydrogen ions formed by the dissociation of bicarbonate are removed by reaction with hydroxide ions to form water, causing the equilibrium to shift to favor greater dissociation of bicarbonate, driving the reaction to completion. As a sidenote, acid/base titrations assume that the reaction between the acid and base is stoichiometric. If Sam202's answer were correct, titrating a weak acid with a strong base or vice-versa would be impractical. As to the question at hand, the reaction of sodium bicarbonate with sodium hydroxide would produce water and sodium carbonate as pointed out by commenter Poutnik. 
While sodium carbonate is a weaker base compared to sodium hydroxide (pKa of sodium carbonate is 10.33 compared to 15.7 for sodium hydroxide, according to Wikipedia) it is still basic (For reference, pKa of ammonia is 9.25, pKa of sodium bicarbonate is 6.3). Thus, the reaction of sodium bicarbonate with sodium hydroxide would still leave a base spill, as Poutnik correctly deduced.
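A back-of-the-envelope check of that last point, sketched with assumed values (0.1 M sodium carbonate, pKa2 = 10.33, 25 °C, and the simple weak-base approximation):

```python
import math

# Rough pH of a sodium carbonate solution, showing it is still strongly basic.
pKa2 = 10.33             # HCO3-/CO3^2- (assumed, from the answer above)
Kb = 10 ** (pKa2 - 14)   # Kb of carbonate = Kw / Ka2
C = 0.1                  # mol/L, assumed concentration

OH = math.sqrt(Kb * C)   # weak-base approximation: [OH-] ~ sqrt(Kb * C)
pH = 14 + math.log10(OH)
print(pH)                # roughly 11.7 -- still a strongly basic solution
```

So even after complete reaction of the bicarbonate with the hydroxide, the resulting carbonate solution remains far above neutral pH, consistent with the conclusion that a base spill would remain.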
{ "domain": "chemistry.stackexchange", "id": 17150, "tags": "acid-base, safety" }
What is the tangential velocity of a cylinder which is moving with a translational/linear velocity (of its center of mass)?
Question: Suppose a cylinder is made to roll in such a way that the velocity of its center of mass is $v$ $m/s$. Are the particles on its surface supposed to move with an equivalent tangential velocity? Note that the cylinder is rolling on a nearly frictionless surface (a negligible amount of friction). Isn't tangential velocity independent of translational velocity in this circumstance? The scenario is that the cylinder is being taken from one place to another along a flat surface by rolling it, and with respect to a stationary object like a tree its linear velocity is $v$ m/s. Answer: The first point to make is that rolling motion cannot happen on a frictionless surface (negligible friction). With no friction the cylinder merely slides, and then all of its particles move with linear velocity $v$. If the friction is enough to provide an external torque for rolling motion, then we can analyze it as a combination of two motions: linear motion and rotational motion. All particles share the same linear velocity $v$, and every particle on the same circumference has the same magnitude of tangential velocity. If the motion is rolling without slipping, the bottommost particle has zero total velocity; there the tangential velocity is equal in magnitude to the linear velocity and opposite in direction.[1]
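For the rolling-without-slipping case, the velocity of a rim point is the vector sum of the centre-of-mass velocity and the tangential rotation term ωR. A minimal numeric sketch, with assumed values v = 2 m/s and R = 0.5 m:

```python
import math

v_cm = 2.0            # centre-of-mass speed (m/s) -- assumed value
R = 0.5               # cylinder radius (m)        -- assumed value
omega = v_cm / R      # rolling-without-slipping condition: v = omega * R

def point_velocity(theta):
    # theta: angle of a rim point measured from the bottom contact point.
    # Total velocity = CoM translation + tangential rotation component.
    vx = v_cm - omega * R * math.cos(theta)
    vy = omega * R * math.sin(theta)
    return math.hypot(vx, vy)

print(point_velocity(0.0))      # bottom point: 0.0 (instantaneously at rest)
print(point_velocity(math.pi))  # top point: 2 * v_cm = 4.0
```

The resulting speed is 2·v·|sin(θ/2)|: zero at the contact point, twice the centre-of-mass speed at the top.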
{ "domain": "physics.stackexchange", "id": 82774, "tags": "newtonian-mechanics, rotational-dynamics, rotational-kinematics" }
Why is the Trendelenburg position used when placing and removing Central Venous Line catheters in the sub-clavian vein?
Question: The title says it all. Why is it that patients are placed in the trendelenburg position when a catheter is inserted in the sub clavian vein? What would happen if the patient wasn't placed in the trendelenburg position when placing the catheter? Thanks in advance for the help. Answer: There are 2 main reasons for using the Trendelenburg position when placing and removing a central venous line catheter into the subclavian or even internal jugular vein. Exactly what C Rags mentioned-- to increase the size of the vein. This position utilizes the force of gravity to pool blood towards the head from the lower extremities. This makes the vein of interest easier to visualize compared to surrounding structures, to palpate since this increases pressure within the vein, and to ultimately puncture. To avoid air embolisms, which are particularly more damaging if they get into the brain causing a stroke than if they travel inferiorly to the leg, for example. Air embolisms can be introduced when inserting or removing a line. Ref: http://emedicine.medscape.com/article/422189-overview#showall
{ "domain": "biology.stackexchange", "id": 3323, "tags": "blood-circulation, circulatory-system" }
timed automata - advance only in certain states
Question: I am currently looking into timed automata for a project. I am thinking about a timed automaton where a clock only advances when the automaton is in a certain state. However, my little knowledge in the domain makes me wonder whether somebody already defined timed automata like that. As an example: I.e. I have a timed automaton with two states: L0 and L1. My clock is c - the total time that the automaton spent in time L1. The clock c should only advance when the automaton is in state L1. (see upper part of drawing) Solution with my existing knowledge (not very elegant): I can see a way where I could use event-clock automata (1) to achieve this, by triggering an event a and on entering L1 the value of c is set to: c := c - xa where xa is the time since event a. (See lower part of the drawing) However, I'm not convinced of this solution as it is inelegant, further I'm not sure whether timed automata allow for the setting of clocks. So far I only read about reset to zero. (1) http://pub.ist.ac.at/~tah/Publications/a_determinizable_class_of_timed_automata.pdf Answer: So, after looking deeper into the subject I found the solution. Posting in case somebody else needs an answer and is unfamiliar with the subject. What I was looking for is called a Stopwatch Automaton (SWA) and allows the time to stop or advance (by specifying the derivative of a clock c' = {0, 1}). Reference: The Impressive Power of Stopwatches. Franck Cassez, Kim Larsen http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.669&rep=rep1&type=pdf Further I discovered that Hourglass automata also allow time to stop, but additionally also to reverse: c' = {-1, 0, 1}. They however require the specification of a maximum clock value after which "all the sand passed through the hourglass" and the clock does not advance further. Reference: Hourglass Automata. Yuki Osada, Tim French, Mark Reynolds, Harry Smallbone https://arxiv.org/abs/1408.5965
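The defining feature of a stopwatch clock is just a state-dependent derivative, c' = 1 in L1 and c' = 0 in L0; over a piecewise-constant run this reduces to summing the durations of the L1 segments. A toy sketch (the trace values are made up for illustration):

```python
# Minimal sketch of a stopwatch clock: c advances only while in state "L1".
def accumulate_time_in_L1(trace):
    # trace: list of (state, duration) segments of a run
    c = 0.0
    for state, dt in trace:
        if state == "L1":   # derivative c' = 1 in L1, c' = 0 elsewhere
            c += dt
    return c

trace = [("L0", 1.0), ("L1", 2.5), ("L0", 0.5), ("L1", 1.5)]
print(accumulate_time_in_L1(trace))  # 4.0
```

This is exactly the semantics a stopwatch automaton attaches to the clock, without the c := c - xa bookkeeping of the event-clock workaround.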
{ "domain": "cs.stackexchange", "id": 10856, "tags": "automata, real-time" }
Setting class weights in a batch where a certain class is not present
Question: I'm handling a highly imbalanced dataset, so I'm weighting the loss function in order to penalize the misclassification of the minority classes. I set the weights in each batch as follows: w[i] = num_total_instances / num_instances_of_class[i] where 'i' goes from 0 to N-1 (for N classes), 'num_total_instances' is the total number of instances in the batch, and 'num_instances_of_class[i]' is the number of instances of class 'i' in the batch. The problem is that, since I'm doing this on each batch, it may occur that a certain batch has no instance of class 'j', and thus w[j] = inf. What weight should I set for class 'j'? At the moment I'm setting this w[j] to 1E6. I know that there are other approaches to fighting imbalanced datasets, and I also know that I can pre-compute the weights for the whole dataset and use them (fixed) across all batches, but I'd like a specific answer to this question, if it makes sense. Of course, other suggestions for the calculation of the weights are also welcome. Answer: I don't think that the weight you use matters, whether you set it to $10^6$ or $10^8$. Since no examples of that class are in the batch, the loss function for that batch will not have any of those examples contributing, so that weight never appears. If something does not appear in the loss function, it is not used at all. For that reason, it doesn't really matter what weight you use.
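A small numpy sketch of the per-batch computation (the function name and the `absent_weight` parameter are illustrative): since a class with zero instances contributes no terms to the batch loss, its slot can hold any finite number; zero makes the "unused" status explicit and avoids inf.

```python
import numpy as np

def batch_class_weights(labels, num_classes, absent_weight=0.0):
    # counts[i] == 0 means class i is absent from this batch: its weight
    # never multiplies any loss term, so any finite value works; 0 avoids inf.
    counts = np.bincount(labels, minlength=num_classes)
    weights = np.zeros(num_classes)
    present = counts > 0
    weights[present] = len(labels) / counts[present]
    weights[~present] = absent_weight
    return weights

batch = np.array([0, 0, 0, 2, 2, 0])   # class 1 absent from this batch
w = batch_class_weights(batch, num_classes=3)
print(w)   # class 0 -> 6/4 = 1.5, class 1 -> 0 (absent), class 2 -> 6/2 = 3
```

Swapping 0 for 1E6 or any other finite value changes nothing in the gradients for that batch, for exactly the reason given in the answer.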
{ "domain": "datascience.stackexchange", "id": 3109, "tags": "dataset, class-imbalance" }
Can mass be ejected from a black hole within the Schwarzschild radius?
Question: This goes far beyond my limited knowledge regarding the physics of black holes, but my colleague told me this today: I doubt that if I stand on the surface of a black hole - inside the Schwarzschild radius - and a stone falls on my head from outside the Schwarzschild radius, it cannot come out again. Then I just have to throw it back up with the energy it fell on my head so that it leaves the black hole. The stone can accept this energy because he fell on my head with it. Otherwise, the physical laws of gravity no longer apply. Just try to let a stone vibrate in a gravitational field whose generating mass has no expansion and idealizes does not collide with the vibrating body. It does not get stuck in the point x=0, but flies out again with infinite energy against an infinite acceleration. And this in clearly finite time. If you have managed this calculation, then the question arises, where the energy should go, if the stone can fall in, but not fall out again? Energy is simply the time integral of the acceleration equation after multiplying both sides by the corresponding velocity equation. Since due to the existence of the acceleration equation also the velocity equation exists, the energy a body has in a force field is fixed for every movement. And this energy is on the way there, the same as on the way back, because it depends only on the place and not on the time. Even if time is deformed and the path is compressed, energy and place remain firmly connected. So the stone comes out of the black hole. I wanted to share this with the community in order to get a clearer perspective to his statement. Are his assumptions correct? If so is there proof and maybe further literature? (As far as I remember, the jets coming from super massive black holes do not contain anything from beyond the Schwarzschild radius.) 
Answer: I don't know what "surface of a black hole" means, but you cannot stand at a fixed radius from the centre of the gravitational field (the center of the black hole) without having superluminal speed. The problem is that inside the event horizon the spacetime becomes dynamic - you can imagine space itself is collapsing, and it is collapsing quickly enough to prevent anything that does not have superluminal speed from escaping. Yes, you can throw the object away from the center, but because the space itself collapses toward the center, the net effect is that the object will always move to the center no matter how much energy you provide. The next problem is that there is no energy conservation in GR in the sense there is in Newtonian mechanics. Energy is conserved only locally, not globally. Energy is not even defined globally - simply because, due to the equivalence principle, you cannot define the energy of the gravitational field itself. In a local inertial frame there is no gravitational field. The supposed energy accumulated by falling in a gravitational field is not due to the fall of the object; it is due to the acceleration needed to remain still in the gravitational field. The energy of an object is simply the projection of the object's 4-velocity onto your own 4-velocity. The object's 4-velocity remains "the same" (is parallel transported) during the fall, but your own is accelerated and thus is changing in any local inertial frame around you. The perceived kinetic energy of the object is not increasing in free fall due to the object's motion, but due to your own acceleration. Sometimes this can be mathematically reformulated so that you can associate changes of kinetic energy with the freely falling object itself, as in the Newtonian limit, but not in general. This is all very sloppy of course, because I assume you don't know much about the mathematics that is needed to formulate the ideas precisely. If you want to learn it, though, I would recommend Gravitation by Misner, Thorne and Wheeler. 
It's a huge book, but you don't need to read everything, and all the ideas are explained very intuitively as well as mathematically.
{ "domain": "physics.stackexchange", "id": 58519, "tags": "general-relativity, gravity, black-holes, event-horizon, vibrations" }
Forcing a multi-label multi-class tree-based classifier to make more label predictions per document
Question: I've been experimenting with tree-based classifiers for multi-label document classification. All the trees I've created, however, tend to predict only one or two labels per document, whereas the training set has about 4 labels per document on average. Furthermore, in my particular application, false positives are much less costly to the business than false negatives. So, if anything, I'd like the tree to be making about 6 or 7 predictions per document. I'm not entirely sure which parameters control this. I've tried experimenting with tree size but without effect. I'd ideally like to just set a threshold for when a prediction is included, and lower this. I'm using sklearn (and playing with skmultilearn). Here's an example of a forest configuration:

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    n_estimators=20,
    criterion='gini',
    max_features=0.5,
    max_depth=68,
    min_samples_split=4,
    min_samples_leaf=2,
    n_jobs=3
)

Answer: You seem to be experiencing a class-imbalance situation: some classes dominate the others in the number of samples they have, so your algorithm finds it advantageous to predict the rare classes less often, or not at all, to decrease the loss. You can manually set the class weights for the Random Forest classifier, making the loss function treat the classes unevenly so that rare classes count more heavily. For details you can refer to: https://stackoverflow.com/questions/20082674/unbalanced-classification-using-randomforestclassifier-in-sklearn Note: Random Forest is not robust to class imbalance; this is a known issue. You can refer to: https://stats.stackexchange.com/questions/242833/is-random-forest-a-good-option-for-unbalanced-data-classification Hope I could help. If not, I will be around for further discussion.
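For instance, a minimal sketch of setting class weights on the forest (hypothetical toy data; the exact weights would need tuning for the real corpus):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical imbalanced toy data: only ~16% positive samples.
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] > 1.0).astype(int)

# class_weight='balanced' reweights each class inversely to its
# frequency; a dict like {0: 1, 1: 10} sets the weights manually.
clf = RandomForestClassifier(
    n_estimators=20,
    class_weight="balanced",
    random_state=0,
)
clf.fit(X, y)
```

With higher weight on the rare labels, splits that separate them reduce the (weighted) impurity more, so the trees are less inclined to ignore them.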
{ "domain": "datascience.stackexchange", "id": 4181, "tags": "python, scikit-learn, decision-trees, multiclass-classification, multilabel-classification" }
Project Euler Problem 12 in Python
Question: I'm trying to do Project Euler Problem 12, which reads as: The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers: 1: 1 3: 1,3 6: 1,2,3,6 10: 1,2,5,10 15: 1,3,5,15 21: 1,3,7,21 28: 1,2,4,7,14,28 We can see that 28 is the first triangle number to have over five divisors. What is the value of the first triangle number to have over five hundred divisors? My code is as follows:

import math
from time import time

t = time()

def divisors(n):
    number_of_factors = 0
    for i in range(1, int(math.ceil(math.sqrt(n)))):
        if n % i == 0:
            number_of_factors += 2
        else:
            continue
    return number_of_factors

x = 1
for y in range(2, 1000000):
    x += y
    if divisors(x) >= 500:
        print x
        break

tt = time() - t
print tt

My code takes 3.8 seconds to run, and I know this can be done in around 0.2 seconds in Java. Any suggestions on how to make this run faster? Answer: Improving Project Euler solutions is usually done by improving the algorithm itself rather than just optimising the code. First things first, your divisors() function is wrong and does not work for perfect squares. To fix this, you need to loop up to floor(sqrt(n)) inclusive and take care of the i = sqrt(n) case separately:

def divisors(n):
    number_of_factors = 0
    for i in xrange(1, int(math.sqrt(n)) + 1):
        if n % i == 0:
            number_of_factors += 2
            if i * i == n:
                number_of_factors -= 1
    return number_of_factors

Now we come to the main part of the code. The important observation to improve your code is that the nth triangular number is given by T(n) = n(n+1)/2. Now, n and n+1 are coprime. If a and b are coprime numbers, the number of divisors of a*b is just the product of the numbers of divisors of a and b. 
This suggests the following improvement for the main part of the code:

for n in xrange(1, 1000000):
    Tn = (n * (n + 1)) / 2
    if n % 2 == 0:
        cnt = divisors(n / 2) * divisors(n + 1)
    else:
        cnt = divisors(n) * divisors((n + 1) / 2)
    if cnt >= 500:
        print Tn
        break

The improvement this provides is very significant. You can improve the performance further by modifying the divisor function to use the same technique:

def divisors(n, start):
    if n == 1:
        return 1
    for i in xrange(start, int(math.ceil(math.sqrt(n))) + 1):
        if n % i == 0:
            cnt = 1
            while n % i == 0:
                n /= i
                cnt += 1
            return divisors(n, i + 1) * cnt
    return 2

Essentially, we find p, the first prime factor of n. If p^k is the maximum power of p that divides n, (k+1)*divisors(n/p^k) is the number of divisors of n. start is just a starting point for checking prime divisors, so the calls in the main loop become divisors(n/2, 2), etc. Overall, these modifications seem to reduce the running time from ~5s to 0.15s on my laptop.
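For reference, the coprime-splitting idea can be written as a self-contained Python 3 sketch (function names are mine, not from the original post):

```python
import math

def num_divisors(n):
    # Count divisors by trial division up to sqrt(n); subtract one
    # when n is a perfect square so sqrt(n) is not counted twice.
    count = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            count += 2
            if i * i == n:
                count -= 1
        i += 1
    return count

def first_triangle_with_over(target):
    # T(n) = n(n+1)/2 with n and n+1 coprime, so the divisor count
    # factors into two much cheaper divisor counts.
    n = 1
    while True:
        if n % 2 == 0:
            cnt = num_divisors(n // 2) * num_divisors(n + 1)
        else:
            cnt = num_divisors(n) * num_divisors((n + 1) // 2)
        if cnt > target:
            return n * (n + 1) // 2
        n += 1

print(first_triangle_with_over(5))  # → 28, matching the problem statement
```

Each iteration now factors numbers of size roughly n rather than n², which is where the speedup comes from.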
{ "domain": "codereview.stackexchange", "id": 26277, "tags": "python, performance, programming-challenge, python-2.x" }
Possible to separate audio from separate sources into separate audio streams using multiple microphones?
Question: I want to be able to separate audio from separate sources into separate audio files. I know that audio takes time to travel, so there could be differences in recording time for the same "woohoo" sound. Not sure what else there is to work with. Anyone know how it would be possible (or even IF it is possible) to accomplish the goal using multiple microphones? I have not been able to find anything on this topic yet. Is it because it is impossible? Edit: Found this link... https://www.technologynetworks.com/informatics/blog/a-solution-to-the-cocktail-party-problem-310214 Answer: It really is unclear what you are asking. Taking a guess at your setup and goals, what you want is a phased microphone array. Two microphones detecting the same sound at different times will be able to determine that the source is located somewhere on the sheets of a hyperboloid. If your sources are located at well-defined positions so that no two occupy the same hyperboloid, then 2 microphones are all you'll need. If they're not, then you'll need 4 microphones in order to triangulate in 3D. In practice you will need circuitry to perform the cross-correlation between each microphone and thresholding. It would also be helpful to do some FFT processing in order to match frequency components for better filtering.
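As an illustration of the cross-correlation step (a toy sketch, not a full array processor): the lag that maximizes the correlation between two microphone signals estimates their difference in arrival time.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    # Cross-correlate the two microphone signals; the lag of the
    # peak is the estimated arrival-time difference in seconds
    # (positive when sig_a hears the source later than sig_b).
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs

# Toy example: the same noise burst reaching mic A 15 samples late.
rng = np.random.default_rng(0)
src = rng.standard_normal(1000)
delay = 15
mic_a = np.concatenate([np.zeros(delay), src])
mic_b = np.concatenate([src, np.zeros(delay)])
print(estimate_delay(mic_a, mic_b, fs=48000))  # → 15/48000 s
```

In a real array, these pairwise delays feed the hyperboloid intersection described above.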
{ "domain": "physics.stackexchange", "id": 59976, "tags": "waves, acoustics, wavelength" }
gazebo-2.0 is called when I want to execute gazebo-5.0.0
Question: I have two versions of Gazebo installed, 2.0 and 5.0. When I want to execute gazebo-5.0.0 (typing: ./usr/local/gazebo5/bin/gazebo-5.0.0), it turns out that gazebo-2.0 is called. But if I execute gzserver and gzclient separately, the correct versions (aka gzserver-5.0 and gzclient-5.0) are called. "which" command output: https://www.dropbox.com/s/1gnjq42j3p18ibk/rjIevpxFHLAxL-xlucwk_0z6laQFKI3zUz4ddXuA0og.png?dl=0 I have checked the shared libraries linked to gazebo-5.0.0, and their versions are correct. How can I deal with this problem? Thank you!

ldd gazebo-5.0.0 | grep gazebo
    libgazebo_common.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_common.so.5 (0x00007f194247c000)
    libgazebo_util.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_util.so.5 (0x00007f194221f000)
    libgazebo_transport.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_transport.so.5 (0x00007f1941fb1000)
    libgazebo_physics.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_physics.so.5 (0x00007f1941c2e000)
    libgazebo_sensors.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_sensors.so.5 (0x00007f1941978000)
    libgazebo_msgs.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_msgs.so.5 (0x00007f19415d0000)
    libgazebo_math.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_math.so.5 (0x00007f1940798000)
    libgazebo_physics_ode.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_physics_ode.so.5 (0x00007f193e022000)
    libgazebo_rendering.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_rendering.so.5 (0x00007f193d985000)
    libgazebo_ode.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_ode.so.5 (0x00007f193a468000)
    libgazebo_skyx.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_skyx.so.5 (0x00007f1939b03000)
    libgazebo_selection_buffer.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_selection_buffer.so.5 (0x00007f19398f4000)
    libgazebo_rendering_deferred.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_rendering_deferred.so.5 (0x00007f1938893000)
    libgazebo_opcode.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_opcode.so.5 (0x00007f1935d24000)
    libgazebo_opende_ou.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_opende_ou.so.5 (0x00007f1935b1f000)
    libgazebo_ccd.so.5 => /usr/local/gazebo5/lib/x86_64-linux-gnu/libgazebo_ccd.so.5 (0x00007f1935914000)

hello, here is my output https://www.dropbox.com/s/5rcterv2jz05iln/02.png?dl=0 Thank you, this line does work! https://www.dropbox.com/s/yhwaolnmevew0n1/01.png?dl=0 https://www.dropbox.com/s/wryeosvfpek1osh/03.png?dl=0 but I don't understand why, when I simply change into /usr/local/gazebo/bin and enter "./gazebo", I can't make it succeed? Originally posted by Zheng yo chen on Gazebo Answers with karma: 80 on 2015-05-21 Post score: 0 Original comments Comment by Jose Luis Rivero on 2015-05-23: Could you please edit the question and include the output of: which gazebo, which gzserver, which gzclient. ldd output is not really useful in this case. Comment by Jose Luis Rivero on 2015-06-02: Looks like you have many installations of gazebo present in your system. I'm assuming that you have gazebo2 installed from packages, into /usr, and gazebo5 into /usr/local/gazebo5. Why do you have gazebo executables in /usr/local/bin? Could you please attach also: find /usr -name 'gz*' -o -name 'gazebo'. Also, what happens if you run: PATH="/usr/local/gazebo5/bin:${PATH}" gazebo? Answer: When installing Gazebo in non-default paths (it looks like yours is /usr/local/gazebo5) it is necessary to source the correct setup.sh, so the environment variables are modified to work with the non-default paths. Did you run: source /usr/local/gazebo5/share/setup.sh before running gazebo? 
Originally posted by Jose Luis Rivero with karma: 1485 on 2015-05-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Zheng yo chen on 2015-05-26: Thank you~, I did, but it doesn't work :( Comment by Jose Luis Rivero on 2015-05-26: Could you please edit the question and include the output of: which gazebo, which gzserver, which gzclient.
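The underlying mechanism can be sketched in Python (with hypothetical stub scripts named gazebo): whichever directory comes first in the search path supplies the binary that actually runs, which is why PATH order and the setup.sh environment matter here.

```python
import os
import shutil
import tempfile

# Create two hypothetical 'gazebo' stubs in separate directories.
dir_a = tempfile.mkdtemp()
dir_b = tempfile.mkdtemp()
for d, version in [(dir_a, "2.0"), (dir_b, "5.0")]:
    path = os.path.join(d, "gazebo")
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho gazebo-%s\n" % version)
    os.chmod(path, 0o755)

# The first matching PATH entry wins: putting dir_b first selects "5.0".
search = os.pathsep.join([dir_b, dir_a])
print(shutil.which("gazebo", path=search))  # → the stub inside dir_b
```

This is the same effect as the suggested PATH="/usr/local/gazebo5/bin:${PATH}" gazebo test: prepending the gazebo5 bin directory shadows the system-wide gazebo2 binary.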
{ "domain": "robotics.stackexchange", "id": 3769, "tags": "gazebo" }
What do firebugs do when they "connect" by their rear?
Question: See this image for example. What are they doing? What is the name of this process? Answer: They mate! These matings can be very long: When copulating, firebugs (Pyrrhocoris apterus) form tandems for prolonged periods. Half of the copulations of marked individuals in the field lasted longer than 12 h, and some lasted up to 7 days. (Schöfl and Taborsky (2002)) The likely reason for such a long mating is for males to guard the female to prevent future matings and therefore to prevent sperm competition: males prolong copulations as a form of ejaculate-guarding under high competition with other males. (Schöfl and Taborsky (2002))
{ "domain": "biology.stackexchange", "id": 8606, "tags": "zoology" }
Nucleoside triphosphates vs nucleotide diphosphates
Question: A nucleoside can be defined as a nucleotide without its phosphate group. Thus, a nucleoside triphosphate (NTP) is a nucleoside bound to 3 phosphates. This in turn must be equivalent to a nucleotide (whose name implies the inclusion of one phosphate already) plus an additional 2 phosphates. Thus, would it be appropriate to refer to an NTP as a nucleotide diphosphate? Answer: A nucleoside is a nucleotide without a phosphate group. That is something we say to understand and relate nucleotides and nucleosides. But if we follow IUPAC terminology, a nucleotide should contain only one phosphate group, not more than that. Though an NTP is a type of nucleotide, for the sake of technical terminology nucleotides are classified as nucleosides with a suffix describing the number of phosphates present in a specific unit. For example, if a nucleotide has one phosphate, it is a nucleoside monophosphate (NMP). If the nucleotide has two phosphates, then it is called a nucleoside diphosphate (NDP), and for three, it is a nucleoside triphosphate (NTP). So you could call an NTP a nucleotide diphosphate, but considering that the nomenclature should also indicate the chemical nature of the compound, it would not be right. Wikipedia
{ "domain": "biology.stackexchange", "id": 6950, "tags": "biochemistry, nomenclature, nucleic-acids" }
What is the correct name for the 4-space of special relativity
Question: I refer to the 4-space commonly used to describe event-points $(x_0,...,x_3)$. A massive particle traces out a time-like path in such a space, since it cannot travel with a speed greater than or equal to c. Is the correct name for the space a Riemann space, a Lorentz space, or a hyperbolic 4-space? What is the accepted name for the space or manifold? Do mathematicians and physicists ordinarily use the same name for it? Answer: It is called Minkowski space.
{ "domain": "physics.stackexchange", "id": 87181, "tags": "special-relativity, spacetime, terminology" }
Difference between row polymorphism and bounded polymorphism?
Question: I recently came across this blog post by Brian McKenna explaining row polymorphism. This seemed like a wonderful idea to me, but then I realized it smells an awful lot like bounded parametric polymorphism: With row-polymorphism: sum: {x: int, y: int | rho} -> int function sum r = r.x + r.y With bounded parametric polymorphism: sum: forall a <: {x: int, y: int}. a -> int function sum r = r.x + r.y Can someone clarify the differences to polymorphism between these two approaches? Answer: So, there are a few differences: In row polymorphism, you've bound $\rho$ to a name, so you can use it elsewhere. For example, forall rho . rho -> {x : int | rho} is expressible with row polymorphism, but not purely using bounded polymorphism. Likewise, you can even express field deletion this way: forall rho .{x : int | rho} -> rho is a perfectly valid row-polymorphic type, but subtyping doesn't really have a way of expressing this. Because row polymorphism lets you add and delete fields this way, usually it works with a "stack" semantics, so that old fields are shadowed when new fields of the same name are added. Bounded subtyping uses, well, subtyping. So with row polymorphism, you can't give x : {x : int, y : int} as argument to a function of type {x : int} ->int. But most systems with bounded subtyping would have non-polymorphic subtyping too, that would allow this, since most systems have {x : int} <: {x : int, y : int}. Bounded subtyping tends to be bit more precise and flexible, and the "stack" semantics of row polymorphism don't arise that much. But inference is much easier for row polymorphism, and can work in the absence of any type annotations (for example, in Elm), which usually isn't the case in the presence of subtyping.
{ "domain": "cs.stackexchange", "id": 9860, "tags": "programming-languages, semantics" }
Derivation of heat equation
Question: What are the main physical laws used to derive the following heat equation: $$u_t -\Delta u=f(t,x)?$$ I'm wondering about the interpretation of the Laplacian $\Delta$ and its role in the heat process. Answer: I think a very good intuition about what the Laplacian is "doing" comes from looking at the way it is implemented in a computational simulation, e.g. its finite-difference implementation. Since it is the sum of second, non-crossed derivatives, if the system is approximated with a square grid of step size $h$, the Laplacian is approximated as (you can work this out yourself, as it is fairly easy, or just google it): $$\Delta u(x,y) \approx \frac{u(x-h, y) + u(x+h, y) + u(x, y-h)+ u(x, y+h) - 4u(x,y)}{h^2}$$ Looking carefully at this expression, it means computing the sum of the differences with all neighbouring sites. Imagine now that there is a gradient in heat $u$ from left to right. This will mean that the difference $u(x-h, y)-u(x, y)$ is positive while $u(x+h, y)-u(x, y)$ is negative. Thus the new value after a time $dt$ on the site $(x,y)$ will be a balance between heat that flows in from the left and heat that flows out to the right. So, basically, what the Laplacian is doing is "homogenizing" the value of $u$ in space. This is why you find the Laplacian operator in any equation that involves diffusion. In the case where there are no sources or sinks in the system ($f(x,y) = 0$, $\forall(x,y)$), the equation becomes $\partial_t u = \Delta u$ (the diffusion equation), and the system distributes heat as homogeneously as possible. With sources or sinks, the system tries to, but the sources/sinks $f(x,y)$ keep it from doing so completely. The subject is usually treated in books on Partial Differential Equations; it is usually one of the first (interesting) cases presented. It allows for a good introduction to Fourier series (which historically originated with this problem) and Green's functions. The one by J. David Logan (Springer) has a treatment of the matter, and you can find the finite-difference approximations in the last chapter.
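A minimal numerical sketch of this homogenizing behaviour (explicit finite differences on a periodic grid; the grid size and step sizes are illustrative):

```python
import numpy as np

def heat_step(u, dt=0.2, h=1.0):
    # 5-point discrete Laplacian: the sum of differences to the four
    # neighbours, i.e. how far u sits below its local average.
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
           + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4 * u) / h**2
    return u + dt * lap  # explicit Euler step of u_t = Laplacian(u)

u = np.zeros((32, 32))
u[16, 16] = 1.0          # initial hot spot
for _ in range(100):
    u = heat_step(u)
# The total heat is unchanged (no sources or sinks), but the spike
# has spread out over the grid:
print(u.sum(), u.max())
```

Note the stability constraint of the explicit scheme, dt ≤ h²/4, is respected by the chosen step sizes.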
{ "domain": "physics.stackexchange", "id": 61714, "tags": "thermodynamics, diffusion, heat-conduction" }
How the generator loss works in a GAN
Question: I've been reading about GANs so I can implement a simple image generator. It seems that the loss for the generator is given by the following equation: log(1 - D(G(z))) But I don't see how this equation can apply to the generator. The generator's output is in latent space (e.g. 784) but the discriminator's output is in label space (e.g. 10). Therefore this loss would have 10 dimensions. So how can the generator, which outputs 784 dimensions, be hooked up to minimise this 10 dimensional function? What am I missing? Answer: What am I missing? The generator's loss is not calculated by comparing its output to a target value, but by processing it through the discriminator. So it is still technically a function of its output, but a complex and indirect one. There will consequently be a gradient at the generator's output, and you could potentially derive some kind of per-pixel loss metric using that, and perhaps summarise it into a single mean value. However, I don't think that would give you any useful extra information for monitoring progress. The objective function used for training is still "how well can the discriminator tell that the generator's image is not real?". Either way, progress when training GANs is hard to assess. You don't want to see very low loss values for discriminator or generator because they imply instability. You may be able to use a scoring system like Inception Score, or Fréchet Inception Distance which use a trained image classifier to assess quality of the output independently of generator or discriminator scores. GANs are not the only neural network architecture and training scheme where the objective function is a complex processing of the output layer. Face recognition using triplet loss also has an indirect function (this time not via a separate network). In both this case, and with GANs, you rely on being able to back-propagate through the objective function to get gradients that affect the network you are training. 
The takeaway is that objective functions do not have to be limited to direct comparisons of a neural network's output with some target value. They can be more complex than that, and this allows the invention of new types of supervised and semi-supervised training feedback.
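To make the chain rule concrete, here is a toy sketch (a made-up linear "discriminator", not a real GAN): the loss log(1 − D(x)) is a scalar, yet it still yields a 784-dimensional gradient at the generator's output.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1, 784)) * 0.01     # toy linear "discriminator"

def discriminator(img):
    score = float(W @ img)                   # 784-dim image -> scalar
    return 1.0 / (1.0 + np.exp(-score))      # probability of "real"

def generator_loss(img):
    return np.log(1.0 - discriminator(img))  # scalar objective

img = rng.standard_normal(784)               # stand-in generator output
# Chain rule: d/dimg log(1 - sigmoid(W img)) = -sigmoid(W img) * W
grad = -discriminator(img) * W.ravel()
print(grad.shape)                            # → (784,): one entry per pixel
```

In a real GAN the discriminator is a deep network rather than one linear layer, but the principle is the same: backpropagation carries the scalar loss back through the discriminator to every component of the generator's output.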
{ "domain": "ai.stackexchange", "id": 3976, "tags": "machine-learning, generative-adversarial-networks" }
Can we use GAEs with A3C asynchronous reinforcement learning?
Question: The Generalized Advantage Estimator seems very effective with algorithms like PPO in reinforcement learning. But most of the A3C algorithms I have seen use Averaged Advantage estimates. Is there any reason? Answer: Sure can: https://github.com/ikostrikov/pytorch-a3c The reason is the GAE paper was released after A3C, so the paper implementation, which is what most people will look at, doesn't use GAE. Relevant lines here: https://github.com/ikostrikov/pytorch-a3c/blob/master/train.py#L95-L98
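For concreteness, a short sketch of computing GAE advantages from a rollout, following the λ-weighted sum of TD errors from the GAE paper (function and variable names are mine):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    # values holds V(s_0)..V(s_T): one extra entry for the bootstrap state.
    # TD errors: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        # Recursion: A_t = delta_t + (gamma * lam) * A_{t+1}
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

# lam=0 recovers the one-step advantage estimate; lam=1 recovers the
# full discounted return minus the value baseline.
rewards = np.array([1.0, 0.0, 1.0])
values = np.array([0.5, 0.2, 0.1, 0.0])
print(gae_advantages(rewards, values, gamma=0.9, lam=0.0))
```

Dropping this into an A3C-style update only changes how the advantage fed to the policy gradient is computed; the asynchronous machinery is untouched.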
{ "domain": "datascience.stackexchange", "id": 4240, "tags": "reinforcement-learning" }
Where can we easily find documentation about functions/methods provided by libraries?
Question: Hi everybody, I'd like to code my own plugins. I did the tutorials from the website, but if I look at the examples, in the .cpp files many different methods are used to describe the plugin. The problem is that I don't know what these methods do. Moreover, I'd like to use all the functionality provided by the libraries to be more efficient. So my question is: where can we easily find documentation about the functions/methods provided by the libraries? Maybe on the Gazebo website? Or directly in the library folder? Thank you in advance Ronan Originally posted by ronan on Gazebo Answers with karma: 23 on 2017-10-04 Post score: 0 Answer: Here's the API documentation site for Gazebo 7: http://osrf-distributions.s3.amazonaws.com/gazebo/api/7.1.0/index.html Originally posted by chapulina with karma: 7504 on 2017-10-04 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 4187, "tags": "gazebo-plugin" }
How should I downsample and normalize R1s and R2s and incorporate this into Lexogen's QSPA tool
Question: I am using Lexogen's Quantitative Sequencing Pool Analysis tool (here: https://github.com/Lexogen-Tools/quantseqpool_analysis) to analyze R1 and R2 files. I have been able to run this tool successfully. However, I need to downsample and then normalize these results, so I need to do this all again. To downsample, I put the R1s (the ones generated after demultiplexing) through the SeqTK_sample tool on UseGalaxy. Afterwards, I ran the QSPA tool with the new R1s and old R2s. I skipped the demultiplexing part by commenting (#) out the code. This is generating errors (screenshots are attached). I don't think these errors are from me commenting out the code but rather from the new R1s with the old R2s. I didn't receive any errors the first time I ran this tool. I will be normalizing my data with DESeq2 using the summary_unique.tsv file. If anyone has any tips/suggestions it would be much appreciated. I am sure there is a better way to do this. If you require anything from me, just ask! Cheers, Answer: The issue was that we downsampled with UseGalaxy's SeqTK tool prior to running UMI-tools and cutadapt (the 2nd and 3rd processes, respectively, of Quantitative Sequencing Pool Analysis). I tried downsampling with the R1 data available after cutadapt (located in the "trimmed" folder), and everything ran fine. I believe the issues were caused by the barcode sequences being removed by the SeqTK tool prior to running UMI-tools. You can (somewhat) customize the QSPA script by commenting out parts of the script that you don't want to run. In this case, I commented out STAR aligner, UMI-tools (deduplication), and featureCounts so that the script would stop after cutadapt. After I downsampled with SeqTK, I ran the script using only the parts that I had previously commented out.
{ "domain": "bioinformatics.stackexchange", "id": 2005, "tags": "sequence-analysis" }
Two-photon N00N state through Mach-Zehnder interferometer
Question: I am interested in modelling a two-photon N00N state sent through a Mach-Zehnder interferometer, which consists of a beam-splitter (50:50), a phase shift operator on the first mode, a phase shift operator on the second mode and a beam splitter (50:50), before a photon number counting measurement. I am interested in writing out the pre-measurement process for an initial ($N=2$) N00N state $$|N00N\rangle := \frac{1}{\sqrt{2}}(|2,0\rangle+|0,2\rangle).$$ The photon basis states are therefore given by the following set of Fock states {$|2,0\rangle, |1,1\rangle, |0,2\rangle $}. The beam-splitter (BS) unitary operator $\hat{U}_{BS}$, defined by its action on the creation operator, is given by: $$\hat{a}^{\dagger} \mapsto \frac{1}{\sqrt{2}}(\hat{a}^{\dagger}+i\hat{b}^{\dagger}),$$ where $'a'$ and $'b'$ correspond to the first and second modes respectively. The BS operator therefore acts on the basis states as $$|2,0\rangle \mapsto \frac{1}{2}(|2,0\rangle +2i|1,1\rangle-|0,2\rangle), ~|1,1\rangle \mapsto \frac{1}{2}(i|2,0\rangle +2|1,1\rangle+i|0,2\rangle)$$ and $$|0,2\rangle \mapsto \frac{1}{2}(-|2,0\rangle +2i|1,1\rangle+|0,2\rangle).$$ For the phase shift unitary operator $\hat{U}(\phi)$ on the first mode we have $$|2,0\rangle \mapsto e^{i 2 \phi}|2,0\rangle, ~|1,1\rangle \mapsto e^{i \phi}|1,1\rangle \text{ and } |0,2\rangle \mapsto |0,2\rangle.$$ A similar definition applies for the phase shift on the second mode. The Mach-Zehnder sequence of unitary operations on the N00N state is therefore given by $$|\psi_{out}\rangle := \hat{U}_{BS}\hat{U}(\phi_2)\hat{U}(\phi_1)\hat{U}_{BS}|N00N\rangle;$$ using the above definitions, this simplifies to $$\frac{1}{\sqrt{2}}i e^{i \phi_1} e^{i \phi_2}(i|2,0\rangle + 2|1,1\rangle +i|0,2\rangle).$$ This is the final pre-measurement state (before normalization). Please advise if this makes sense and is consistent. More detail of my working can be provided. Thanks for any assistance. 
Answer: There are some corrections to the calculations, but it is more important to focus on the motivation: why are you sending a NOON state through a Mach-Zehnder interferometer? The idea of the NOON state is that it is itself an incredibly entangled state that is useful for sensing relative phases between two modes. If you send this state through the first beam splitter in the M-Z, before it sees the phases, it will be changed to a worse state for phase sensing. Better would be to ignore the first beam splitter, sending one arm of the NOON state to one arm of the interferometer and the other to the other. Or, if you really want to use the full M-Z setup, you need to compute the state that becomes a NOON state after the first beam splitter (using the inverse of your calculations here) and send that state into the M-Z. Other sensing protocols do this too (for example, send in the twin Fock state $|N/2,N/2\rangle$; it only picks up an irrelevant global phase if you use it to directly sense a phase, but if you send it through the first beam splitter of the M-Z before applying the phase it becomes a great state for phase estimation). Now, to the calculations. The normalization is incorrect, as can be seen, for example, by the state $|2,0\rangle$ being normalized but the state $(|2,0\rangle+2i|1,1\rangle-|0,2\rangle)/2$ having squared norm $1/4+1+1/4=3/2$. One must carry out the computation $$|2,0\rangle=\frac{a^{\dagger 2}}{\sqrt{2!}}|0,0\rangle\to\frac{(\frac{a^\dagger+i b^\dagger}{\sqrt{2}})^2}{\sqrt{2!}}|0,0\rangle=\frac{a^{\dagger 2}+2ia^\dagger b^\dagger-b^{\dagger 2}}{2\sqrt{2}}|0,0\rangle$$ and then also recall that we pick up another factor of $\sqrt{2}$ when applying $a^{\dagger 2}|0,0\rangle=\sqrt{2}|2,0\rangle$, etc. I will not complete the calculation for you, just pointing out how to correct it. 
Finally, one notices that the two-photon NOON state is special because, at a beam splitter, it is possible to convert it into a fully separable state $|1,1\rangle$ (that only acquires a global phase and cannot be used for phase sensing). This is the only NOON state for which this is possible, and it demonstrates that beam splitters can completely change the properties of a state from being a great resource for phase estimation to a useless one.
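The last remark is easy to verify numerically. Building truncated Fock-space operators (my own small sketch; the truncation is exact for states with at most two photons in total), the 50:50 beam splitter maps the N = 2 NOON state onto $|1,1\rangle$ up to a global phase:

```python
import numpy as np

d = 3                                      # per-mode Fock truncation: 0, 1, 2
a = np.diag(np.sqrt(np.arange(1, d)), 1)   # single-mode annihilation operator
I = np.eye(d)
A, B = np.kron(a, I), np.kron(I, a)        # the two modes

# 50:50 beam splitter U = exp(i*pi/4*(A^dag B + A B^dag)), built by
# diagonalizing the Hermitian generator (sign convention may differ,
# which only changes a global phase here).
H = A.conj().T @ B + A @ B.conj().T
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * np.pi / 4 * w)) @ V.conj().T

def fock(na, nb):
    v = np.zeros(d * d, dtype=complex)
    v[na * d + nb] = 1.0
    return v

noon = (fock(2, 0) + fock(0, 2)) / np.sqrt(2)
out = U @ noon
overlap = abs(np.vdot(fock(1, 1), out))
print(overlap)                             # → 1.0 up to rounding
```

This is the reverse of the Hong-Ou-Mandel-type interference that generates the two-photon NOON state from $|1,1\rangle$ in the first place.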
{ "domain": "quantumcomputing.stackexchange", "id": 5493, "tags": "quantum-algorithms, entanglement, quantum-operation, photonics, quantum-optics" }
Would alpha decay taste sour?
Question: If someone were foolish and/or brave enough to build a device to focus the alpha decay products from a radioactive sample, then point it at their tongue through their open mouth, would it taste sour, assuming there were sufficient helium nuclei? On one hand, I can imagine the electronegativity of the alpha decay products being high enough that it would act as an acid and trigger the sour response in the taste bud. On the other hand, I can imagine it not tasting like anything for the two following reasons: The helium nucleus might quickly gain electrons and become inert before it had a chance to trigger a neural response The helium nucleus might cause enough damage to the taste buds to render them ineffective I know this is a bit of a silly question, but what would alpha decay taste like? Answer: You are likely thinking of protons having a sour taste, since we experience H$^{+}$ as sour (actually, it is H$_3$O$^+$ in water). However, alpha particles either zip through the sensory cells and just act as ionizing radiation or slow down and then behave like tasteless helium. Apparently ionizing radiation does produce a metallic taste when it is strong enough, perhaps due to breaking down lipids. There may be an ozone smell too. A strong alpha source might achieve that. (There may be other effects too. Depletion of some types of taste sensory cells is likely: radiation therapy commonly affects taste acuity. Charged particles can trigger neural firing too.) I am more curious about the taste of a proton beam. It might be sour, but most likely it just acts as intense ionizing radiation. Anatoli Bugorski did experience it as brightness, but it did not pass through his mouth as far as I know.
{ "domain": "physics.stackexchange", "id": 60249, "tags": "radioactivity" }
Show that two families of curves are orthogonal (without using orthogonal trajectories)
Question: I'm reading through Hartle's General Relativity and came across this question: Consider the following coordinate transformation from rectangular coordinates $(x,y)$, labeling points in the plane to a new set of coordinates $(m,n)$: $$x = mn,$$ $$y = (1/2)(m^2 - n^2).$$ (c) Do the curves of constant m and constant n intersect at right angles? I determined that the curves of constant $m$ are orthogonal trajectories to the curves of constant $n$, but the answer in the solutions manual simply states "The curves intersect at right angles because there are no cross terms $dmdn$ in the metric." I don't understand where this comes from. What does he mean? Where can I learn more about this? I imagine that my method of orthogonal trajectories will get unwieldy with more variables. Answer: Well, you can just calculate the metric: $$\begin{aligned} \mathrm ds^2 &= \mathrm dx^2 + \mathrm dy^2\\ &= (\mathrm d(mn))^2 + \left(\tfrac12\mathrm d(m^2 - n^2)\right)^2\\ &= (m\,\mathrm dn + n\,\mathrm dm)^2 + (m\,\mathrm dm - n\,\mathrm dn)^2\\ &= m^2\,\mathrm dn^2 + 2mn\,\mathrm dm\,\mathrm dn + n^2\,\mathrm dm^2 + (m^2\,\mathrm dm^2 + n^2\,\mathrm dn^2 - 2mn\,\mathrm dm\,\mathrm dn)\\ &= (m^2+n^2)(\mathrm dm^2 + \mathrm dn^2) \end{aligned}$$ As you can see, the mixed term $\mathrm dm\,\mathrm dn$ cancels out so the metric is diagonal. This means that the vectors $\partial_m$ dual to $\mathrm dm$ and $\partial_n$ dual to $\mathrm dn$ are orthogonal, since $\mathrm dm(\partial_n)=\mathrm dn(\partial_m)=0$, and thus $\mathrm dm^2(\partial_m,\partial_n) = \mathrm dm(\partial_m)\mathrm dm(\partial_n)=0$ and analogously for $\mathrm dn^2$, and thus $\mathrm ds^2(\partial_m,\partial_n)=0$. If there were a term proportional to $\mathrm dm\,\mathrm dn$, then this would give $\mathrm dm(\partial_m)\mathrm dn(\partial_n)\ne 0$, and thus the vectors would not be orthogonal.
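As a sanity check, the orthogonality can also be verified numerically: the tangent vector along a curve of constant $n$ (varying $m$) is $(\partial x/\partial m, \partial y/\partial m) = (n, m)$, and along a curve of constant $m$ it is $(m, -n)$, and their dot product vanishes everywhere. A minimal Python sketch (not part of the original answer):

```python
# Hypothetical numerical check: verify that curves of constant m and
# constant n meet at right angles, from x = m*n, y = (m**2 - n**2) / 2.

def tangents(m, n):
    """Analytic tangent vectors to the two coordinate curves."""
    d_dm = (n, m)    # (dx/dm, dy/dm): tangent to a curve of constant n
    d_dn = (m, -n)   # (dx/dn, dy/dn): tangent to a curve of constant m
    return d_dm, d_dn

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# The dot product n*m + m*(-n) vanishes for every (m, n), which is
# exactly the statement that the dm dn cross term in the metric is zero.
for m, n in [(1.0, 2.0), (-3.5, 0.7), (0.2, 0.2)]:
    u, v = tangents(m, n)
    assert abs(dot(u, v)) < 1e-12
```

The same check generalizes to more variables: with a diagonal metric, the pairwise dot products of the coordinate tangent vectors are all zero.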
{ "domain": "physics.stackexchange", "id": 13513, "tags": "homework-and-exercises, general-relativity, differential-geometry, metric-tensor, coordinate-systems" }
How to (Efficiently) Sort a List of Items with Parent/Child Relationship
Question: I have a list of items that have a parent/child/grandchild/etc. type of relationship. Each item has a list of descendants, and an _.isDescendentOf(other) member function that returns true when a node is a descendant of another node (they have a list of immediate parents, and can sweep upwards to find grandparents). Some of the items may not have a relationship with one another. I need to sort these items such that no item comes before an item it is a descendant of. Consider items of the format (a, [b, c]) which indicates that a has descendants b and c. If we had the unsorted list (with an invalid ordering): [(c, []), (a, [b, c]), (b, [c]), (d, [])] Valid orderings would be: [(a, [b, c]), (b, [c]), (c, []), (d, [])] or [(d, []), (a, [b, c]), (b, [c]), (c, [])] (Note, (d, []) can technically be placed anywhere since it does not have a relationship with the others.) (Note, a, b, etc. are randomly-assigned placeholder names and, though they are alphabetically ordered here, in reality they are random.) I am quite unsure how to achieve this in an efficient way. My (current) solution involves maintaining a list of "placed" and "unplaced" items (the former is empty at the beginning, and the latter initialized with a copy of the unsorted list), and then greedily moving "unplaced" items to the "placed" list when they do not have any ancestors in the "unplaced" list (until the "unplaced" list is empty). That being said... I am curious whether there is a more efficient way to achieve these results. Thanks in advance! Answer: I think you already have implemented some sort of adjacency list representation of a directed graph. You can use a topological sorting (https://en.m.wikipedia.org/wiki/Topological_sorting) to order the items.
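For reference, here is a minimal sketch of Kahn's algorithm for topological sorting in Python. The `descendants` mapping mirrors the question's format (each item maps to the items that must come after it); the function name and cycle handling are illustrative assumptions, not from the question:

```python
from collections import deque

def topological_sort(descendants):
    """Order items so no item appears before one of its ancestors.

    `descendants` maps each item to the items that must come after it.
    Runs in O(V + E); raises ValueError if the relationships contain a cycle.
    """
    # Count, for every item, how many ancestors point at it.
    indegree = {node: 0 for node in descendants}
    for children in descendants.values():
        for child in children:
            indegree[child] += 1

    # Items with no ancestors can be placed immediately.
    queue = deque(node for node, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in descendants[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)

    if len(order) != len(descendants):
        raise ValueError("cycle detected")
    return order
```

With the question's example, `topological_sort({'a': ['b', 'c'], 'b': ['c'], 'c': [], 'd': []})` produces a valid ordering in which `a` precedes `b`, which precedes `c`, while `d` may land anywhere, exactly as the question allows.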
{ "domain": "cs.stackexchange", "id": 17479, "tags": "sorting, trees, greedy-algorithms" }
[Buoyancy Plugin] Possible bug
Question: Hi everybody, I'm using Gazebo 6.6.0 with Ubuntu 14.04 and currently I'm experiencing strange behaviors with the buoyancy plugin shipped with this version. The following minimal working example shows two spheres, each of mass 1 kg, and the usage of the buoyancy plugin where I force the volume of these two spheres to be equal to their mass. Having fluid density equal to 1 should result in the two objects floating in the air, because in this case force_of_gravity == force_of_buoyancy. Unfortunately only the first sphere, l1, floats whereas the second one falls down. <?xml version='1.0'?> <sdf version='1.5'> <model name="buoyancy_example"> <!-- ================== link l1 ================== --> <link name="l1"> <pose>0 0 1 0 0 0</pose> <inertial> <mass>1</mass> </inertial> <visual name='visual'> <geometry> <sphere> <radius>0.5</radius> </sphere> </geometry> </visual> <collision name='collision'> <geometry> <sphere> <radius>0.5</radius> </sphere> </geometry> </collision> </link> <!-- ================== link l2 ================== --> <link name="l2"> <pose>2 0 1 0 0 0</pose> <inertial> <mass>1</mass> </inertial> <visual name='visual'> <geometry> <sphere> <radius>0.5</radius> </sphere> </geometry> </visual> <collision name='collision'> <geometry> <sphere> <radius>0.5</radius> </sphere> </geometry> </collision> </link> <!-- ================== buoyancy plugin ================== --> <plugin name="buoyancy" filename="libBuoyancyPlugin.so"> <fluid_density>1</fluid_density> <link name="l1"> <center_of_volume>0 0 0</center_of_volume> <volume>1</volume> <!-- volume == mass ==> object should float --> </link> <link name="l2"> <center_of_volume>0 0 0</center_of_volume> <volume>1</volume> <!-- volume == mass ==> object should float --> </link> </plugin> </model> </sdf> Based on my trials I believe that the problem is in this for statement here, which doesn't return the next link element within the plugin tag.
Does anyone know how I can fix this bug and be able to use multiple "links" tags within this plugin? Thanks, Marco. Originally posted by Marco on Gazebo Answers with karma: 3 on 2016-06-28 Post score: 0 Answer: It looks like a bug. That statement you mentioned should be: linkElem = linkElem->GetNextElement("link") It would be great if you could make a pull request to Gazebo with the fix, and a bonus if you add a test with your use case. The pull request could be targeted at branch gazebo6. Originally posted by chapulina with karma: 7504 on 2016-06-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Marco on 2016-06-29: Hi, I just checked your suggest and now the plugin works ... thanks :) I'll make the pull request, but I'm not totally sure about how to write a test for it ... could you give me more information about how to do it? Thanks Comment by chapulina on 2016-06-29: Oh I wouldn't worry too much about the test, it would just be a bonus ;) It would probably involve writing an integration test: https://bitbucket.org/osrf/gazebo/src/default/test/integration/ Comment by Marco on 2016-06-30: ok, I created a pull request targeted to gazebo6. About the test: I'm not totally sure about how to check if the plugin is working well. Should I spawn two objects and check if the heights of their centers of gravity are > 0 ? Comment by chapulina on 2016-06-30: Thanks! Sounds good. Make sure to let enough iterations pass so the models have time to float. Comment by chapulina on 2016-06-30: The pull request with the fix has been merged by the way ;) You're still welcome to add a test, it will make sure we don't break this in the future by mistake. Comment by Marco on 2016-07-01: cool! I'll try to write a simple test in the next days ;)
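For anyone wanting to sanity-check the intent of the fix outside of Gazebo: the corrected statement makes the C++ loop advance to the next sibling `<link>` element, so the plugin visits every `<link>` child rather than only the first. A rough Python illustration of that traversal (not the actual plugin code) using the standard library's ElementTree:

```python
import xml.etree.ElementTree as ET

# A trimmed-down version of the plugin block from the question.
plugin_sdf = """
<plugin name="buoyancy" filename="libBuoyancyPlugin.so">
  <fluid_density>1</fluid_density>
  <link name="l1"><volume>1</volume></link>
  <link name="l2"><volume>1</volume></link>
</plugin>
"""

root = ET.fromstring(plugin_sdf)

# The equivalent of the fixed C++ loop
#   for (...; linkElem; linkElem = linkElem->GetNextElement("link"))
# is to visit every <link> sibling, not just the first one:
links = [link.get("name") for link in root.findall("link")]
assert links == ["l1", "l2"]  # both links are picked up
```

The broken version of the loop corresponds to reading only `root.find("link")`, which is why only `l1` got a buoyancy force in the original report.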
{ "domain": "robotics.stackexchange", "id": 3943, "tags": "gazebo-plugin" }
ViewModel creator design
Question: I am trying to move some logic outside my controller for creating my view models for the view. I have a lot of queries to fire to make sure the complete view model is ready, so to keep my controllers thin I came up with this design - any feedback, comments or suggestions are welcome! The interfaces look as follows: public interface IViewModel { // Marker interface } public interface IHandleViewModel<TViewModel> where TViewModel : IViewModel { Task<TViewModel> Handle(); } public interface IProcessViewModels { Task<TViewModel> Create<TViewModel>() where TViewModel : IViewModel, new(); } The implementation of the IProcessViewModels is using dependency injection (Simple Injector) to find the correct handler for the view model: internal sealed class ViewModelProcessor : IProcessViewModels { private readonly Container _container; public ViewModelProcessor(Container container) { _container = container; } public Task<TViewModel> Create<TViewModel>() where TViewModel : IViewModel, new() { var handlerType = typeof(IHandleViewModel<>).MakeGenericType(typeof(TViewModel)); dynamic handler = _container.GetInstance(handlerType); return handler.Handle(); } } The setup is done in the Composition Root using this extension: public static void RegisterViewModels(this Container container, Assembly[] viewModelAssemblies) { if (container == null) throw new ArgumentNullException("container"); if (viewModelAssemblies == null) throw new ArgumentNullException("viewModelAssemblies"); container.RegisterSingle<IProcessViewModels, ViewModelProcessor>(); container.RegisterManyForOpenGeneric(typeof(IHandleViewModel<>), viewModelAssemblies); container.RegisterSingleDecorator( typeof(IHandleViewModel<>), typeof(ViewModelLifetimeScopeDecorator<>) ); } The usage: public class HomeViewModel : IViewModel { public string PageTitle { get; set; } public IList<TodoItem> TodoItems { get; set; } } public class HandleHomeViewModel : IHandleViewModel<HomeViewModel> { private readonly IProcessQueries _queries;
// CQRS public HandleHomeViewModel(IProcessQueries queries) { _queries = queries; } public async Task<HomeViewModel> Handle() { var model = new HomeViewModel { PageTitle = "This is the page title", TodoItems = await _queries.Execute(new GetTodoItemsFromDatabase()) // for demo purpose }; return model; } } public class HomeController : Controller { private readonly IProcessViewModels _models; public HomeController(IProcessViewModels models) { _models = models; } public async Task<ActionResult> Index() { var model = await _models.Create<HomeViewModel>(); return View(model); } } Answer: I would say your design is fine. In general I advise injecting IHandleViewModel<T> implementations directly into consumers, because this makes it easier to verify the object graph directly and makes it clearer what the consumer actually depends on. You should do this, unless you regularly inject multiple IHandleViewModel<T>s into the same consumer, while you are pretty sure that these consumers are NOT violating the Single Responsibility Principle. But if you decide to inject the ViewModelProcessor mediator, I advise adding a single unit/integration test to the system that verifies whether there is an IHandleViewModel<T> implementation for each view model in the system. This prevents you from getting an exception at runtime because an implementation is missing. The IViewModel marker interface becomes useful when writing such a unit test, because this interface allows you to find all view models easily as follows: // Arrange var viewModelHandlerTypes = from assembly in AppDomain.CurrentDomain.GetAssemblies() from type in assembly.GetTypes() where typeof(IViewModel).IsAssignableFrom(type) where !type.IsAbstract && !type.IsGenericTypeDefinition select typeof(IHandleViewModel<>).MakeGenericType(type); // Act viewModelHandlerTypes.ToList().ForEach(type => container.GetInstance(type)); About Async Programming A last suggestion I would like to make is to get rid of the complete async programming model.
Asynchronous methods tend to spread through your application like a virus and make both programming and debugging your application much harder. Yes, this asynchronous programming has become WAY easier than it used to be, but it is STILL harder than synchronous programming and it will probably stay harder until the .NET runtime has been rewritten from the ground up (if that's even possible). I know this is against popular opinion, but there is hardly ever a reason to pollute your entire code base with this asynchronous programming model. The main reason for Microsoft to push this programming model really hard is that it is more efficient when running in the cloud. This makes sense, because in Azure, you pay per CPU cycle and per the number of machines you need. But on the other hand, asynchronous programming costs way more developer cycles, and because developers are quite expensive, it is quite unlikely that your savings on the Azure bills will actually compensate for the extra developer costs. But obviously, you will have to do the math yourself. Don't get me wrong, of course we want -and need- responsive UIs, so we might need a few async/await calls inside our Window, Page or View Model classes in a presentation layer built with WPF, Silverlight, Win Forms or some other client technology. You can still do this, even though your whole code base below is synchronous, with synchronous calls to the database, web services and the file system. When doing that, you will still be able to make your UI responsive, but the only difference is that you'll have a background thread sleeping most of the time, instead of using I/O completion ports. But I've never ever worked on an application where having this single extra background thread was a problem. Even for Windows Phone applications this is a non-issue.
But since you're building an MVC application, don't bother making your controller code asynchronous; the user's browser will wait anyway, even if you make your controller asynchronous. Asynchronous programming may be the new shiny thing in the .NET world, and with some training and experience, we can become quite effective as developers in applying it, but even then it is more painful than synchronous programming (which is hard enough by itself), and instead of spending money on training developers to do async, I'd rather spend this money on training them to learn the SOLID design principles, Test Driven Development, Functional Programming or writing clean code. There are so many other skills that are probably more important and more effective in reducing the total cost of ownership that I'd rather put my money on those first.
{ "domain": "codereview.stackexchange", "id": 12648, "tags": "c#, design-patterns, dependency-injection, asp.net-mvc" }
Interacting with a database
Question: I've written my first class to interact with a database. I was hoping I could get some feedback on the design of my class and areas that I need to improve on. Is there anything I'm doing below that is considered a bad habit that I should break now? public IEnumerable<string> ReturnSingleSetting(int settingCode) interacts with a normalized table to populate combo boxes based on the setting value passed to it (for example, a user code of 20 is a user; sending that to this method would return all users, to fill the combobox). public void InsertHealthIndicator(string workflowType, string workflowEvent, int times, string workflowSummary) interacts with a stored procedure to write a workflow error type into another normalized table. public DataView DisplayHealthIndicator(DateTime startDate, DateTime endDate) uses another stored procedure to return the workflow error types between specific dates. Note: Although this seems like I likely shouldn't use stored procedures in some areas here, I've done so so that I can base some SSRS reports off the same stored procedures (so a bug fixed in one area is a bug fixed in both).
using System; using System.Collections.Generic; using System.Data; using System.Globalization; using System.Data.SqlClient; using System.Windows; namespace QIC.RE.SupportBox { internal class DatabaseHandle { /// <summary> /// Class used when interacting with the database /// </summary> public string GetConnectionString() { // todo: Integrate into settings.xml return "Data Source=FINALLYWINDOWS7\\TESTING;Initial Catalog=Testing;Integrated Security=true"; } public IEnumerable<string> ReturnSingleSetting(int settingCode) { var returnList = new List<string>(); string queryString = " select setting_main" + " from [marlin].[support_config]" + " where config_code = " + settingCode.ToString(CultureInfo.InvariantCulture) + " and setting_active = 1" + " order by setting_main"; using (var connection = new SqlConnection(GetConnectionString())) { var command = new SqlCommand(queryString, connection); try { connection.Open(); using (SqlDataReader reader = command.ExecuteReader()) { while (reader.Read()) { returnList.Add(reader[0].ToString()); } reader.Close(); } } catch (Exception ex) { MessageBox.Show(ex.ToString()); throw; } connection.Close(); } return returnList; } public void InsertHealthIndicator(string workflowType, string workflowEvent, int times, string workflowSummary) { string queryString = "EXEC [marlin].[support_add_workflow_indicator]" + "@workflow_type = @workflowType," + "@workflow_event = @workflowEvent," + "@event_count = @eventCount," + "@event_summary = @eventSummary"; using (var connection = new SqlConnection(GetConnectionString())) { try { connection.Open(); using(var cmd = new SqlCommand(queryString, connection)) { cmd.Parameters.AddWithValue("@workflowType", workflowType); cmd.Parameters.AddWithValue("@workflowEvent", workflowEvent); cmd.Parameters.AddWithValue("@eventCount", times); cmd.Parameters.AddWithValue("@eventSummary", workflowSummary); cmd.CommandType = CommandType.Text; cmd.ExecuteNonQuery(); } connection.Close(); } catch(SqlException ex) { 
string msg = "Insert Error: "; msg += ex.Message; throw new Exception(msg); } } } public DataView DisplayHealthIndicator(DateTime startDate, DateTime endDate) { string queryString = "[marlin].[support_retrieve_workflow_history]"; using (SqlConnection connection = new SqlConnection(GetConnectionString())) { using (var cmd = new SqlCommand(queryString, connection)) { connection.Open(); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.AddWithValue("date_from", startDate.Date); cmd.Parameters.AddWithValue("date_to", endDate.Date); var reader = cmd.ExecuteReader(); var dt = new DataTable(); dt.Load(reader); connection.Close(); return dt.DefaultView; } } } } } Answer: I believe it's much better to use some ORM together with LINQ, rather than writing raw SQL. It means more errors are checked at compile time, it will help you avoid some common mistakes and it will make your code much shorter. I would also always use parametrized SQL queries and never concatenate them by hand. You do use them most of the time, and in the one case where you don't, there is no danger of SQL injection, because the parameter is an integer, but I still think it's better to use parameters everywhere. (I think it may also make your query faster thanks to caching, but I'm not completely sure about that.) Also, you shouldn't throw Exception, you should create a custom class that inherits from Exception. And, if possible, include the original exception as inner exception, to make debugging the original source of the error easier.
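To make the parameterization point concrete, here is a rough sketch of the ReturnSingleSetting query with config_code bound as a parameter instead of concatenated into the SQL string. It is written in Python with the standard library's sqlite3 rather than ADO.NET, and the table contents are invented for the demo:

```python
import sqlite3

# In-memory stand-in for the [marlin].[support_config] table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE support_config"
    " (config_code INTEGER, setting_main TEXT, setting_active INTEGER)"
)
conn.executemany(
    "INSERT INTO support_config VALUES (?, ?, ?)",
    [(20, "alice", 1), (20, "bob", 1), (20, "carol", 0), (30, "smtp", 1)],
)

def return_single_setting(setting_code):
    # The value is passed as a bound parameter (?), never concatenated
    # into the SQL string, so no injection is possible even for
    # non-integer inputs, and the prepared statement can be reused.
    rows = conn.execute(
        "SELECT setting_main FROM support_config"
        " WHERE config_code = ? AND setting_active = 1"
        " ORDER BY setting_main",
        (setting_code,),
    )
    return [row[0] for row in rows]

assert return_single_setting(20) == ["alice", "bob"]
```

The ADO.NET equivalent is the `cmd.Parameters.AddWithValue(...)` pattern the original class already uses in its other two methods.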
{ "domain": "codereview.stackexchange", "id": 2194, "tags": "c#, sql, ado.net" }
Which travels further, a football or a soccer ball, when thrown?
Question: I recently answered this question: Laminar Versus Turbulent Flow and it caused me to think of the following situation. Given that an (American) football weighs between 400 and 430 grams, and a (FIFA Rules) soccer ball is in the same weight range, 410 to 450 grams, which will travel further, when thrown under the same conditions? I am assuming both are the same weight, say 420 grams, and both are fired into still air at the same angle, say 45 degrees, by a machine that imparts the same initial velocity to both of them. Also no spin is involved, although it would be interesting to see, if the soccer ball was allowed to spin, would that reduce drag, by reason of the planar-like surfaces it is composed of, and the regular ridges between the panels? I am assuming no spin of any kind and the same "flight path" but any answers that incorporate more realistic situations will be welcomed. It's the difference in drag that I was initially interested in, but any answer that applies reality to the problem is appreciated. Please ignore the relative sizes shown in the picture, as these are chosen at random. In professional play the football has a long axis of 28 cm, a long circumference of 71 cm, and a short circumference of 53 cm. The soccer ball has a diameter of 22 cm and a circumference of 70 cm. My guess would be the football, rather than the soccer ball, because of its shape, and if so, can any estimate be made of the extra distance? I do realise that this is a "novelty" type question, compared to the majority of questions received here, but I do think that there may be some interesting fluid dynamics related physics involved. Answer: The drag force on an American football is in the range of a coefficient of .05 to .06. If the football is spinning the drag is slightly less. The drag on a FIFA soccer ball is a coefficient of .25. The football should travel further. The diameter of an NFL football is about 17.3 centimeters.
The diameter of a FIFA soccer ball is roughly 22 centimeters. The greater cross sectional area of the soccer ball creates a thicker wake and more drag, as Sebastian Riese points out in his comment. The chief difference between the balls, which produces less drag on the NFL football, is the shape. The NFL football's wake is significantly thinner than the soccer ball's, to a great extent because of the shape of the ball. The hulls of ships, for example, are designed like a football's shape below the water line toward the stern in order to allow a more laminar flow as the boundary layer separates from the back of the hull. Laminar flow creates a thinner wake which produces less turbulent viscous drag.
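A rough back-of-envelope comparison using the coefficients and diameters quoted in this answer, via the quadratic drag law $F = \tfrac12 \rho v^2 C_d A$. The air density and launch speed are assumed values, not from the question, and a real trajectory comparison would require integrating the equations of motion:

```python
import math

RHO_AIR = 1.225   # kg/m^3, assumed sea-level air density
V = 20.0          # m/s, assumed launch speed (not given in the question)

def drag_force(c_d, diameter):
    """Quadratic drag F = 0.5 * rho * v^2 * C_d * A on the cross section."""
    area = math.pi * (diameter / 2) ** 2
    return 0.5 * RHO_AIR * V**2 * c_d * area

football = drag_force(c_d=0.06, diameter=0.173)  # NFL ball, thrown end-on
soccer = drag_force(c_d=0.25, diameter=0.22)

# Same launch conditions, so the ball with less drag carries further.
assert football < soccer
print(f"drag ratio soccer/football = {soccer / football:.1f}")
```

Both the smaller frontal area and the much smaller drag coefficient work in the football's favour, which is consistent with the thinner-wake argument above.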
{ "domain": "physics.stackexchange", "id": 24688, "tags": "forces, fluid-dynamics, energy, experimental-physics, everyday-life" }
I couldn't figure out something about mass and weight measurement
Question: Today while studying physics for my university entrance exam I got stuck on one topic. I understand the difference between mass and weight: mass is the amount of material an object contains, and weight is the force applied to that mass due to gravity or another source of force (like a magnetic field maybe, I'm not sure). But when it comes to the measurement of mass and weight, I couldn't understand the claim that we need at least a little bit of gravity to measure mass. Then what's the difference between weight and mass? Without gravity or another force acting on an object I can't measure weight either. I am trying to imagine very, very low gravity, just enough to keep our feet on the ground: in a scenario like this, if I measure the mass, will it really be the same? Or is the mass we measure on Earth not actually mass, just something close to it? I just cannot quite figure this out with an actual example. I would be glad if someone explained this to me with a basic example. Answer: We do not need gravity to measure mass; there are lots of scientific experiments where mass can be determined. A particle accelerator is an example where we see motion due to fundamental forces, and the motion is affected by mass. Weight always refers to the gravity force: gravity on Earth, gravity on the Moon, etc. There are 4 fundamental forces that science studies, and the attraction of masses is due to the gravity force, which is why we see weight on Earth. But we can also charge a mass or create a charged particle and use the EM force to study its mass as well.
{ "domain": "physics.stackexchange", "id": 52391, "tags": "gravity, mass, measurements, weight" }
ROS Groovy, robot_mechanism_controllers: cannot find -lPyKDL
Question: Hi, On a fresh ROS Groovy install (on Ubuntu 12.04), I'm failing to compile robot_mechanism_controllers. I get a linkage error: Linking CXX shared library ../lib/librobot_mechanism_controllers.so/usr/bin/ld: cannot find -lPyKDL robot_mechanism_controllers is depending on kdl (which depends on python_orocos_kdl). The python_orocos_kdl in /opt/ros/groovy/stacks does contain a PyKDL.so, but not the one in /opt/ros/groovy/share (which is coming first in the ROS_PACKAGE_PATH). I found this as I was trying to compile my own controllers based on robot_mechanism_controllers and was getting this error. I tried to compile the already installed robot_mechanism_controllers package (installed through standard apt-get), and also tried to download it in an overlay from the svn. Any idea? Cheers, Ugo Originally posted by Ugo on ROS Answers with karma: 1620 on 2013-01-15 Post score: 0 Original comments Comment by Ugo on 2013-01-15: I can "fix" it by manually prepending the path to the python_orocos_kdl in /opt/ros/stacks to ROS_PACKAGE_PATH (export ROS_PACKAGE_PATH=/opt/ros/groovy/stacks/orocos_kinematics_dynamics/python_orocos_kdl:${ROS_PACKAGE_PATH}). What's wrong here? Answer: So it seems that currently groovy is released with 2 python_orocos_kdl packages, a catkin one, and a rosbuild one as part of the orocos_kinematics_dynamics stack. That cannot be good. The catkin one overlays the rosbuild one in the ROS_PACKAGE_PATH. Which is why it works if you change the ROS_PACKAGE_PATH. I guess you could probably remove ros-groovy-python-orocos-kdl with apt for the time being, until the migration to catkin is complete. At least for me it was an optional apt library. Else contact the orocos kdl maintainer maybe. Originally posted by KruseT with karma: 7848 on 2013-01-15 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Ugo on 2013-01-15: Hi, thanks for the answer. 
Removing ros-groovy-python-orocos-kdl is not possible (due to apt dependencies on my side). But I can keep this prepended to the package path for the time being.
{ "domain": "robotics.stackexchange", "id": 12421, "tags": "ros, ros-groovy, compilation" }
Is it possible for a robot to navigate through predefined coordinates?
Question: I am a total newbie in robotics so please bear with me. I have a school project where my team has to design a robot that is capable of picking up 3 golf balls of different sizes at predefined locations. Then it has to drop these balls into their respective holes. We are using an arduino chip in our robot. I thought I could perhaps define a path for the robot, an invisible virtual path, you might call it. So, imagining the platform as a Cartesian plane, can I tell the robot to go where I want it to go? For example, go to (5,12). Or do I need some sort of sensors so the robot figures it out by itself? Thanks for your time! Answer: So, this is my first answer here. Hopefully this will help! The golf balls, given that they are present in predefined locations, seem to be either on a graph/grid, or at certain distances away from each other on the same plane. Firstly, for a grid. You need to have line sensors to detect the presence of a grid which is either a junction (basically a complete black line detected by all sensors) or a right-angled turn. In either case the trick is to assign co-ordinates to it and travel based on these. For example, to travel from (0,0) to (0,1) you need to go north by 1 co-ordinate. So starting from (0,0) the bot must follow the line north till it detects a junction. And so on for any co-ordinate. This link should help: http://www.robotix.in/tutorials/category/auto/grid Secondly, for no grid. Assuming that these golf balls are at certain distances from the starting point, it is possible to create a virtual grid of your own where each co-ordinate is at a fixed distance, just like on a grid or graph. In that case, the bot has to follow the same procedures to reach a destination, just by moving those fixed distances. So it is suggested that your bot has encoders for the dc motors that will help in travelling precisely the distance that you wish to travel. Good Luck!! \m/
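The junction-counting scheme described in the answer can be sketched as a tiny planner. The function below is illustrative only (the name, the north-then-east ordering, and the move vocabulary are assumptions); each move means "follow the line until the next junction" for a line-following robot:

```python
def moves_to(target, start=(0, 0)):
    """Plan grid moves from `start` to `target` on a virtual Cartesian grid.

    Goes north/south until the y junction count matches, then east/west.
    Returns a list of moves such as ["north", "north", "east"].
    """
    (x0, y0), (x1, y1) = start, target
    path = []

    step_y = "north" if y1 > y0 else "south"
    path += [step_y] * abs(y1 - y0)

    step_x = "east" if x1 > x0 else "west"
    path += [step_x] * abs(x1 - x0)
    return path

# The question's example: "go to (5,12)" from the origin.
assert moves_to((5, 12)) == ["north"] * 12 + ["east"] * 5
```

On a physical grid each move is terminated by the line sensors detecting a junction; on a virtual grid (no lines) each move is a fixed distance measured by the wheel encoders, exactly as the answer suggests.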
{ "domain": "robotics.stackexchange", "id": 616, "tags": "arduino, sensors, navigation" }
Current charge density of the $H$-field
Question: In this Wikipedia article: https://en.wikipedia.org/wiki/Magnetic_field#:~:text=south%20to%20north.-,H%2Dfield%20and%20magnetic%20materials,-Comparison%20of%20B we see how the volume free current charge density and the surface free current charge density are found. While I am able to understand the derivation of the volume free current charge density, I cannot understand how the surface free current charge density was derived: $$\vec H^{||}_1 - \vec H^{||}_2=\vec K_f \times \vec n.$$ I have two questions regarding this equation: How was the right-hand side derived? How can I understand (geometrically) how that cross product arises? I don't know how the surface free current charge density $\vec K_f$ flows in the surface, so I can't even set up the cross product, nor tell in which direction the resulting vector points. Answer: I found a neat derivation of the above equation using a boundary condition motivated by Ampere's Law. See this. So $K_f$ is the surface current density and $\hat n$ is the normal vector pointing in the direction of the separation from medium 1 to medium 2, normal to the boundary of separation. So whatever $H_1-H_2$ is, it must be perpendicular to the normal vector. Try to imagine possible orientations and use the right hand rule to get the direction of $H_1-H_2$. Hope this helps.
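A tiny numeric illustration of the geometry (the vectors are chosen arbitrarily, purely for the demo): since the jump in $\vec H$ equals $\vec K_f \times \hat n$, it is automatically perpendicular both to the normal (so it is tangential, i.e. it lies in the boundary plane) and to $\vec K_f$ itself:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = (0.0, 0.0, 1.0)     # unit normal from medium 1 to medium 2 (assumed z)
K_f = (3.0, -2.0, 0.0)  # surface current density; lies in the boundary plane

jump = cross(K_f, n)    # this is H1_parallel - H2_parallel

# The jump in H is perpendicular to the normal (tangential to the
# boundary) and also perpendicular to K_f, per the right-hand rule.
assert dot(jump, n) == 0
assert dot(jump, K_f) == 0
```

Rotating `K_f` within the boundary plane rotates `jump` with it, which is a quick way to build the geometric intuition the question asks about.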
{ "domain": "physics.stackexchange", "id": 84598, "tags": "electromagnetism" }
Does light remember its past to follow the least time path?
Question: I read about the principle of least time. However, I think, for light to follow this principle, light would have to remember its past path before deciding where to go for its future path. Why I think that: Suppose I shine a torch in the air medium, directed toward the water medium. I shine it at point A. It hits the surface of the water at point B. Here's light deciding where to go next from B while following the principle of least time: For simplicity, let's consider two choices for the next point to visit (two nearby points to B): Point C- Light thinks: "If I go to C, I'd have traveled the path A-B-C overall. But there was a shorter path to C : (say) A-D-C. So I should have traveled through A-D-C all along if I wanted to go to C. Since, I journeyed through AB instead of AD, the point C is out of option now" So light rejects C as its future point. Point E- Light thinks- "If I got to E from here, I'd have traveled along A-B-E overall. Since, there's no shorter path from A to E, so I can go to E" Obviously, light can't think but I had to add that to make my argument clear. So all this requires light remembering that it traveled through AB before deciding where to go next from B. Is this true that light remembers its past path while deciding its future path? To be clear, light can neither think nor decide, but do the physical laws governing propagation of light take light's past into account? EDIT- That link does not answer my question, because my question is not limited to refraction. We can make the same argument as in my post by picking any point B in the middle of light's journey. The same argument can also be made for 'principle of least action' in mechanics in general. My question is "Do all these principles of least things rely on remembering the past path to decide the future path?" Also, is the argument in my post clear? I tried to explain it best by a scenario where light is thinking where to go next. 
Answer: It seems to me that Fermat's least time is applicable only for the cases of mirror reflection and refraction of a wave front from one medium to another. It seems to me Fermat's least time doesn't cover the optics of a diffraction grating. For simplicity let's take the case of a double slit setup. We observe that on the screen an interference pattern arises. The interference pattern consists of multiple parallel bands of alternating bright and dark. In the case of a two slit setup the interference pattern is attributed to the fact that for each position on the screen the wavefront has two paths to reach it. The very reason that the interference pattern arises is that the two paths have different lengths. As we know: for positions on the screen where there is a bright band the path length difference is a multiple of the wavelength of the light. (Conversely, path length difference of half a wavelength: destructive interference.) Generalizing to diffraction gratings: for the interference pattern to arise the wavefront must travel multiple paths, which means not only the path of least time but paths of longer time too. In that sense there is no such thing as a 'principle of least time'. Fermat's least time is restricted to the cases of mirror reflection and refraction of a wave front from one medium to another. Wavefront propagation Then again, in another sense we can meaningfully talk about the past leaving some imprint. When describing propagation of a wavefront you have to take into account that light arriving at a particular location on a screen can arrive there from multiple different directions. That spatial aspect of wave propagation means that the set of obstacles that the wavefront has negotiated can completely determine how the light illuminates the screen. Anyway, whether the case is wave propagation or motion of a point mass, the phenomenon is fully described with a differential equation.
The nature of a differential equation is that it describes motion that proceeds from instant to instant, down to infinitesimally short instants. You asked in a comment to another answer about Hamilton's stationary action (often referred to as the 'principle of least action'). Hamilton's stationary action does not change the fact that the phenomenon is fully described with a differential equation.
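The "bright band when the path difference is a whole number of wavelengths" rule from the answer can be checked with a short numeric sketch (a toy model; the 500 nm wavelength is an illustrative value, not from the question):

```python
import math

def two_slit_intensity(path_diff, wavelength, i0=1.0):
    """Intensity of two equal coherent sources vs. path-length difference.

    Adding the amplitudes exp(i*k*L1) + exp(i*k*L2) and squaring gives
    I = 4*I0*cos^2(pi * delta / lambda), with delta = L2 - L1.
    """
    return 4.0 * i0 * math.cos(math.pi * path_diff / wavelength) ** 2

lam = 500e-9  # an illustrative wavelength

# Path difference of one whole wavelength -> constructive (bright band)
bright = two_slit_intensity(1 * lam, lam)
# Path difference of half a wavelength -> destructive (dark band)
dark = two_slit_intensity(0.5 * lam, lam)

print(bright, dark)  # 4*I0 on a bright band, ~0 on a dark band
```

Note that the sum runs over both paths, long and short alike; only their phase difference decides the brightness, which is the answer's point that more than the least-time path contributes.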
{ "domain": "physics.stackexchange", "id": 71194, "tags": "optics, visible-light, electromagnetic-radiation, refraction, variational-principle" }
Does electrical current have a measurable momentum?
Question: If I were to build a levitating super-cooled superconducting loop of wire with an electrically charged circuit that would discharge its stored energy to produce a large circulation of current around the superconducting loop levitating off the ground, would the loop of wire experience a counter torque, forcing the loop to rotate in the opposite direction that the electrical current rotates, so as to have a total torque and angular momentum equal to zero? Is this experiment possible? Answer: Not only is the experiment possible, a version of the experiment at the atomic level was done by Einstein and de Haas[1]. Einstein and de Haas showed that the angular momentum inhering in the aligned electron spins in a ferromagnet can be exhibited on a macroscopic scale when the sample is demagnetized. Apparently, this is the only experiment Einstein ever performed himself[2]. References 1. Einstein and de Haas paper[pdf] http://www.dwc.knaw.nl/DL/publications/PU00012546.pdf 2. Wikipedia on the Einstein de Haas effect https://en.wikipedia.org/wiki/Einstein%E2%80%93de_Haas_effect
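An order-of-magnitude sketch of why the Einstein–de Haas rotation is so small (all sample numbers here — rod mass, radius, one ħ/2 of spin per iron atom — are my own illustrative assumptions, not from the original experiment):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
N_A = 6.02214076e23      # Avogadro's number, 1/mol
M_FE = 0.0558            # molar mass of iron, kg/mol

mass = 0.01    # kg, hypothetical rod
radius = 1e-3  # m, hypothetical rod radius

# If every atom contributes one aligned electron spin (hbar/2) that flips
# on demagnetization, angular momentum conservation spins up the rod.
n_atoms = mass / M_FE * N_A
spin_L = n_atoms * HBAR / 2.0              # total spin angular momentum
moment_inertia = 0.5 * mass * radius**2    # solid cylinder about its axis

omega = spin_L / moment_inertia            # rad/s acquired by the rod
print(omega)  # on the order of 1e-3 rad/s
```

The resulting angular velocity is tiny, which is why the real experiment hung the sample on a torsion fiber and drove it at resonance.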
{ "domain": "physics.stackexchange", "id": 13655, "tags": "superconductivity" }
Cannot install baxter simulator in ROS Indigo
Question: I am trying to install Baxter simulator so that I may use it with Moveit without connecting to a real Baxter robot. I have followed the following tutorial: link:Baxter Wiki When I try this command: sudo apt-get install gazebo2 ros-indigo-qt-build ros-indigo-driver-common ros-indigo-gazebo-ros-control ros-indigo-gazebo-ros-pkgs ros-indigo-ros-control ros-indigo-control-toolbox ros-indigo-realtime-tools ros-indigo-ros-controllers ros-indigo-xacro python-wstool ros-indigo-tf-conversions ros-indigo-kdl-parser I get the following error: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? I also tried this command: sudo apt-get install ros-indigo-baxter-simulator The same issue exists. Why are the packages not found? Originally posted by Tawfiq Chowdhury on ROS Answers with karma: 137 on 2019-06-14 Post score: 0 Answer: Does a sudo apt update complete successfully? Could it be that you're running into #q325039? Originally posted by gvdhoorn with karma: 86574 on 2019-06-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Tawfiq Chowdhury on 2019-06-14: It did not fix the issue. Comment by gvdhoorn on 2019-06-14: Then you'll need to show us the output of sudo apt update and of the sudo apt-get install .. command. Please copy-paste errors and messages verbatim. Do not paraphrase errors. Comment by Tawfiq Chowdhury on 2019-06-14: Oh it did actually, after doing the sudo update Comment by gvdhoorn on 2019-06-14: So this was #q325039? Comment by Tawfiq Chowdhury on 2019-06-14: Yes, it was
{ "domain": "robotics.stackexchange", "id": 33190, "tags": "ros-indigo" }
Chemical Potential in the canonical and grand canonical ensemble
Question: I'm studying the ideal Fermi gas from "Statistical Mechanics", by R. K. Pathria. In particular, the following formula, which can be found on page 237: \begin{equation} \mu=\left(\frac{3N}{4 \pi g V}\right)^{\frac{2}{3}}\frac{h^2}{2m} \end{equation} describes the chemical potential in the grand canonical ensemble as a function of the number of particles $N$ and the volume $V$. However, on page 242 he uses this formula for the chemical potential in studying the canonical ensemble. Is there a reason why that formula should hold in both cases? Answer: In statistical mechanics, we always consider systems at the thermodynamic limit. By thermodynamic limit, we mean that the volume or the number of particles of a system tends to infinity. It can be shown that the differences between the various ensembles vanish at this limit. See, for instance, J. E. Mayer and M. G. Mayer, Statistical Mechanics, (John Wiley, New York, 1940).
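For a concrete feel for the formula, here is a quick numerical sketch (the copper conduction-electron density is an assumed textbook figure, not from Pathria): with $g=2$ for spin-1/2 electrons and $n = N/V$, Pathria's expression is just the zero-temperature Fermi energy, and it reproduces the familiar ~7 eV value for copper.

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # J per eV

def mu_zero_T(n, g=2, m=M_E):
    """mu = (3 n / (4 pi g))**(2/3) * h**2 / (2 m), with n = N/V."""
    return (3.0 * n / (4.0 * math.pi * g)) ** (2.0 / 3.0) * H**2 / (2.0 * m)

# Assumed conduction-electron density of copper, ~8.5e28 m^-3
n_cu = 8.5e28
print(mu_zero_T(n_cu) / EV)  # ~7 eV, the Fermi energy of copper
```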
{ "domain": "physics.stackexchange", "id": 29051, "tags": "thermodynamics, statistical-mechanics, ideal-gas, fermions, chemical-potential" }
Algorithm to generate a token as survey summary
Question: I'm searching for an algorithm but struggle to find anything, as I'm not sure how to formulate it correctly. I created a simple survey app in Angular with 24 questions and each has 2-5 answers. When the user has answered all the questions I'd like to give him a token (as short as possible) on the result page that he can note down, so the next time he can enter the token instead of having to answer all the questions again. My thought process so far: I can store the information in an array with 24 entries [1, 4, 2, ..., 2, 3] As all values have 1 digit I can remove everything but the numbers 142...23 This still leaves me with a 24 digits long number. Nothing one would like to note down and type out by hand. So next I tried to convert this number (with ().toString(2)) to binary with the goal of turning that binary into a human readable string with String.fromCharCode. But all I got was the rather not so human readable æ\u000f\u0000³KH\u0000\u0000\u0000 for my test input. I could break down the number into digit pairs (e.g. 120422... -> [12, 04, 22, ...]) and map each of these to a character (00 -> A, 01 -> B, ...), but I think there must be something more elegant and efficient, as it would only halve the original size. Although using 3 digits for mapping would result in 125 combinations which is rather hard to map to the alphabet, even with upper/lowercases and special characters. But I'd be very happy to hear your ideas. Many thanks in advance! Answer: I think you should try converting your 24-digit (in base 10) number into another base, which would reduce the length of your token a lot, while keeping the same information.
You should try experimenting with this website for example, which converts to base 36 (10 digits + 26 letters): http://www.unitconversion.org/numbers/base-10-to-base-36-conversion.html Here, the number 986541236547896541258745 becomes 4GNE5T9XUQO08CKK I'm sure you can devise an algorithm in base 62 (10 digits, 26 lowercase letters, 26 uppercase letters), or even more if you use other characters, such as +-*&"# ... etc EDIT: found this website that allows you to convert from and to any bases between 2 and 62: https://www.dcode.fr/base-n-convert Here, the number 986541236547896541258745 becomes 4VMCgFPOG10DLH
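The idea can be sketched in plain Python (the alphabet, function names, and sample answers below are my own illustrative choices): since each question has at most 5 answers, treat the 24 answers as digits of a base-5 number, then re-express that number in base 62. Because 5^24 ≈ 6·10^16 and 62^10 ≈ 8·10^17, the token never needs more than 10 characters.

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def encode(answers):
    """Pack a list of answers (each 1..5) into a short base-62 token."""
    n = 0
    for a in answers:
        n = n * 5 + (a - 1)   # base-5 digits: 5 choices need one digit each
    if n == 0:
        return ALPHABET[0]
    token = ""
    while n:
        n, r = divmod(n, 62)
        token = ALPHABET[r] + token
    return token

def decode(token, length=24):
    """Reverse of encode: base-62 token back to the list of answers."""
    n = 0
    for ch in token:
        n = n * 62 + ALPHABET.index(ch)
    answers = []
    for _ in range(length):
        n, r = divmod(n, 5)
        answers.append(r + 1)
    return answers[::-1]

survey = [1, 4, 2, 5, 3] * 4 + [2, 1, 3, 2]  # 24 made-up answers
token = encode(survey)
print(token, decode(token) == survey)  # short token that round-trips
```

Dropping visually ambiguous characters (0/O, 1/l) from the alphabet makes tokens easier to copy by hand at the cost of a slightly smaller base.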
{ "domain": "cs.stackexchange", "id": 16594, "tags": "algorithms" }
Conditions on one forms
Question: I am trying to solve exercise 8.3 from Lightman's problem book, but I don't know where to start to get a sufficient and necessary condition on a field of one forms $\tilde\sigma$ for there to exist a function f such that $\tilde\sigma $ = $\tilde{df}$. I even tried to understand the solution, but I didn't. Can you help me, please? Answer: Here is my attempt. For clarity, the question refers to 3-dimensional Euclidean space. Let $f$ be a scalar function that is at least twice continuously differentiable. It is important to note ${\bf \tilde\sigma}$ is a $1$-form field. The scalar function $f$ is said to be a $0$-form. If we can write $$ {\bf \tilde \sigma} = \tilde{df} = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy + \frac{\partial f}{\partial z} dz\,. $$ we can say ${\bf \tilde\sigma}$ is exact. Since the exterior derivative satisfies $d \circ d = 0$, the exterior derivative of an exact form vanishes, i.e., $$ d(\tilde{df}) = 0\,. $$ (The Poincaré lemma supplies the converse: on $\mathbb{R}^3$ every closed form is exact.) But the exterior derivative of $\tilde \sigma$ is $$ d\tilde\sigma = \left(\frac{\partial^2f}{\partial x\partial y} - \frac{\partial^2f}{\partial y\partial x}\right)dx \wedge dy +\left(\frac{\partial^2f}{\partial x\partial z} - \frac{\partial^2f}{\partial z\partial x}\right)dx \wedge dz + \left(\frac{\partial^2f}{\partial y\partial z} - \frac{\partial^2f}{\partial z\partial y}\right)dy \wedge dz\,. $$ Since $d\tilde\sigma = 0$ it must be the case that $$ \frac{\partial^2f}{\partial x\partial y} = \frac{\partial^2f}{\partial y\partial x}\,,\quad \frac{\partial^2f}{\partial x\partial z} = \frac{\partial^2f}{\partial z\partial x}\,,\quad \frac{\partial^2f}{\partial y\partial z} = \frac{\partial^2f}{\partial z\partial y}\,. $$ This is exactly the condition that is meant by $\sigma_{i,j} = \sigma_{j,i}\,$ as Lightman et al.
suggest in their solution because $$ \tilde\sigma_{i} = \frac{\partial f}{\partial x^{i}}\,, $$ where $i=1,2,3$ and $x^1=x, x^2 = y, x^3=z\,.$ I didn't use the content at the link https://en.wikipedia.org/wiki/Differential_form but I think exact differential forms are mentioned there. I checked "Tensors, Differential Forms and Variational Principles" by David Lovelock and Hanno Rund.
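In vector-calculus language, $d\tilde{df} = 0$ is the statement that the curl of a gradient vanishes, which one can verify numerically for any smooth function via finite differences (the test function and all names below are my own illustrative choices):

```python
def grad(f, x, y, z, h=1e-5):
    """Central-difference gradient of a scalar function f(x, y, z)."""
    return (
        (f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
        (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
        (f(x, y, z + h) - f(x, y, z - h)) / (2 * h),
    )

def curl_of_grad(f, x, y, z, h=1e-4):
    """Curl of grad f, by differencing the gradient components again."""
    def g(i, *p):
        return grad(f, *p)[i]
    cx = (g(2, x, y + h, z) - g(2, x, y - h, z)) / (2 * h) \
       - (g(1, x, y, z + h) - g(1, x, y, z - h)) / (2 * h)
    cy = (g(0, x, y, z + h) - g(0, x, y, z - h)) / (2 * h) \
       - (g(2, x + h, y, z) - g(2, x - h, y, z)) / (2 * h)
    cz = (g(1, x + h, y, z) - g(1, x - h, y, z)) / (2 * h) \
       - (g(0, x, y + h, z) - g(0, x, y - h, z)) / (2 * h)
    return cx, cy, cz

def f(x, y, z):
    return x**2 * y + y * z**3  # an arbitrary smooth test function

print(curl_of_grad(f, 1.0, 2.0, 3.0))  # all components ~0, up to FD noise
```

Each curl component is exactly one of the mixed-partial differences in the answer, so their vanishing (to within finite-difference error) is the $\sigma_{i,j} = \sigma_{j,i}$ condition in disguise.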
{ "domain": "physics.stackexchange", "id": 72415, "tags": "general-relativity, differential-geometry" }
Error starting .py with roslaunch
Question: Hi, I am trying to start a python script to allow keyboard control of two dynamixel MX28T servo motors. But when I try roslaunch pantilt pt_keyboard.py I get the following error message. "Invalid roslaunch XML syntax: not well-formed (invalid token): line 1, column 1" I thought first I could not run a .py file with roslaunch. But I saw another guy doing it. I already googled for a solution, but haven't found the answer for my specific problem. Anybody who has an idea? Thanks for helping! Originally posted by mike.sru on ROS Answers with karma: 16 on 2013-09-19 Post score: 0 Answer: roslaunch is supposed to be used with XML launch files. However, you are attempting to launch a Python script. To run Python scripts and binary executables use rosrun instead. It also might be necessary to add the 'executable' flag to the script with: $ chmod a+x pt_keyboard.py Otherwise rosrun will not run the script. Originally posted by Boris with karma: 3060 on 2013-09-19 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 15575, "tags": "python, roslaunch" }
Two cases of a switch with similar blocs of code
Question: Is it possible to merge these two cases into one so I don't have what is basically duplicated code except for the cast part? case ScheduleActionsMediator.ACTION_CREATE_RENDITION: final List<CrewMemberVO> crewMembers = new ArrayList<CrewMemberVO>(); CrewMemberRenditionActionStruct struct = (CrewMemberRenditionActionStruct) action; for (final CrewMemberVO crew : struct.getAffectedCrews().values()) if (crew != null) crewMembers.add(crew); this.cacheFacade.deleteCrewMembers(crewMembers, false, this.crewMemberRules); break; case ScheduleActionsMediator.ACTION_CREATE: final List<CrewMemberVO> crewMembersTwo = new ArrayList<CrewMemberVO>(); CrewMemberCreateModifyActionStruct structTwo = (CrewMemberCreateModifyActionStruct) action; for (final CrewMemberVO crew : structTwo.getAffectedCrews().values()) if (crew != null) crewMembersTwo.add(crew); this.cacheFacade.deleteCrewMembers(crewMembersTwo, false, this.crewMemberRules); break; Answer: case ScheduleActionsMediator.ACTION_CREATE_RENDITION: case ScheduleActionsMediator.ACTION_CREATE: List<CrewMemberVO> crewMembers = new ArrayList<>(); for (final CrewMemberVO crew : action.getAffectedCrews().values()) { if (crew != null) { crewMembers.add(crew); } } cacheFacade.deleteCrewMembers(crewMembers, false, crewMemberRules); break; If action's type supports the getAffectedCrews method, this will work as stands. This gets rid of the duplicate code, which will make it easier to maintain. Of course, we don't have sufficient context here to see if action supports the getAffectedCrews method without casting. If it doesn't, you'd have to make changes outside of this snippet in order to make things work. It's unclear how extensive those changes might be. If you post a followup question, consider posting at least the entire method. We'd need the type for action and how it relates to the two cast types. Also include the ScheduleActionsMediator definition. I removed the this. from several variables. 
Unless there is a conflict, you don't need it. Of course, some people prefer to use it to indicate which are object fields rather than local variables. I removed the final from crewMembers, as it doesn't tell us much. It doesn't keep a mutable type like ArrayList from being modified. It only prevents the variable from being reassigned to a different object. The final on crew is more useful, but not by much. In newer versions of Java, you don't need to specify <CrewMemberVO> the second time. You can just say <>. Not only does that save you some typing now, but it helps simplify future maintenance.
{ "domain": "codereview.stackexchange", "id": 26239, "tags": "java" }
QT4 gui interface
Question: Hi, for the past few days I've been trying to make a simple GUI using Qt4 to send a msg when pressing a button and to receive an int value from a topic and display it on a label or something. I have successfully built the Qt GUI using rosmake as a standalone program but I don't know exactly which steps to take to integrate a ROS node in the GUI. I looked through all the tutorials (eros qt and wiki) you have here and most questions. The eros examples do provide an idea but no luck with making it work. Does anyone have any ideas, sample code or steps needed to combine a Qt app along with a ROS node? Thanks in advance! Originally posted by opcode on ROS Answers with karma: 46 on 2012-04-01 Post score: 1 Original comments Comment by Astronaut on 2013-12-01: Hi I want to try the same. Please can you tell me the steps you need to combine a Qt app with a ROS node? I could compile it with make. But I don't know how to run it? When I run it with ./qfoo it gives me bash: ./qfoo: No such file or directory Answer: To answer my own question. I overlooked one of the tutorials. I just typed: " rosrun qt_create roscreate-qt-pkg qfoo " That finally gives a clean working node with a GUI!! Originally posted by opcode with karma: 46 on 2012-04-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Astronaut on 2013-12-01: How do you run the qfoo package??
{ "domain": "robotics.stackexchange", "id": 8822, "tags": "ros, gui, qt" }
Euler-Lagrange equations in relativity (Goldstein)
Question: In order to have a covariant formulation of special relativity, we stop using the time $t$ as a parameter and we choose some invariant parameter. In Goldstein (third edition), chapter $7.10$, it goes through this derivation making an argument about why proper time $\tau$ can't be the parameter we are looking for because of the constraint on 4-velocities. \begin{equation} u_\nu u^\nu=c^2 \tag{1} \end{equation} It then chooses another parameter $\theta$ and derives the Euler-Lagrange equations using this new parameter $\theta$. At the end he chooses \begin{equation}\theta=\tau\tag{*}\end{equation} Why does this allow us to ignore the constraint $(1)$? We're basically using proper time the whole time but ignoring the constraint and using it at the end. I've read something similar in another book (which used $s=\int ds$ where $ds^2=dx^\nu dx_\nu$, though) which said that to find the variation of the action: \begin{equation} S=\int Lds \end{equation} we should also consider the variation of $ds$ with coordinates. Then it introduces a new parameter that is eventually replaced by $s$. How do you account for that? Answer: Since the 4-velocity $$u^{\mu}~:=~\frac{dx^{\mu}}{d\tau} \tag{U}$$ by definition is the 4-position differentiated wrt. proper time $\tau$, the condition $$\begin{align} c^2\mathrm{d}\tau^2 ~=~&\pm \eta_{\mu\nu}\mathrm{d}x^{\mu} \mathrm{d}x^{\nu} \cr~\Updownarrow~&\cr u^{\mu} u_{\mu}~=~&\pm c^2 \end{align}\tag{C}$$ holds independently of any worldline (WL) parametrization $\theta$ [in Minkowski signature $(\pm,\mp,\mp,\mp)$, respectively]. Goldstein considers the action for a massive relativistic point particle $$\begin{align} I ~=~&\int_{\theta_i}^{\theta_f} \! d\theta ~\Lambda, \cr \Lambda~=~&-mc\sqrt{\pm x^{\prime}_{\mu}x^{\prime \mu}}, \cr x^{\prime \mu}~:=~\frac{dx^{\mu}}{d\theta},\end{align}\tag{I}$$ which is WL reparametrization invariant $\theta\to\tilde{\theta}=f(\theta)$.
It is important to realize that the 4 quantities $x^{\prime \mu}$ are not constrained by eq. (C). ($\leftarrow$ This is OP's main question.) The corresponding Euler-Lagrange (EL) equation$^1$ $$ \frac{d}{d\theta} \left( \frac{mc x^{\prime}_{\mu}}{\sqrt{\pm x^{\prime}_{\nu}x^{\prime \nu}}}\right)~\approx~0\tag{EL} $$ is WL reparametrization covariant. After the variation, it is legitimate to choose the parametrization $\theta=\tau$. The EL equation then simplifies [with the help of eq. (C)] to $$ \frac{d(m u_{\mu})}{d\tau} ~\approx~0,\tag{EL'}$$ i.e. the 4-acceleration is zero on-shell. Goldstein makes the pragmatic observation that if we a priori choose $\theta=\tau$ in the Lagrangian $\Lambda$ [and somehow ignore the constraint (C)], then we can formally write down the correct EL-equation [with $\theta=\tau$]. Although Goldstein obtains the correct EL-equation by this dirty trick, it is conceptually very misleading. In fact, if we literally choose the parametrization $\theta=\tau$ in the action prior to the variation then the action would become a constant off-shell: $$ \begin{align} I ~=~&\int_{\tau_i}^{\tau_f} \! d\tau~(-mc)\sqrt{\pm u_{\mu}u^{\mu}}\cr ~=~&-mc^2(\tau_f-\tau_i).\end{align}\tag{I'}$$ Phrased differently: all virtual paths would have the same value, i.e. the stationary action principle becomes ill-defined. References: H. Goldstein, Classical Mechanics, 2nd edition; Section 7.9. H. Goldstein, Classical Mechanics, 3rd edition; Section 7.10. -- $^1$ The $\approx$ symbol means equality modulo EOM.
{ "domain": "physics.stackexchange", "id": 79457, "tags": "special-relativity, lagrangian-formalism" }
Effect of slits and a lens
Question: We have a screen with two slits (Young style) separated by a distance $d$, one of them receives a planar wave of $600nm$, the other receives a planar wave of $400nm$. Behind the slits there's a screen we observe. Only with this, as the waves have different wavelengths, I guess there can't be any interference; we will only see the diffraction pattern, the two functions of the form $\sin^2(x)/x^2$, with the principal maxima separated by a distance $d$. Am I right here? Now, we put a convex lens behind the slits so the screen on which we observe is in the focal plane. Ok, I have three configurations: 1.- In the first one, the system is configured such that the slits are far away from the lens. Here, we can approximate the wave that arrives as a planar wave, and therefore the lens will perform the Fourier transform in the focal plane of the screen. The diffraction of the slits also performs the Fourier transform, so this configuration should lead to having only two bars of light in the screen, centered in the focus. Am I right? 2.- The slits are in the focal plane of the lens, such that the lens is in the middle of slits-screen. Here, the same thing should happen, right? As the light comes from the focal plane, the lens must do the Fourier transform with no extra things, and we should get the two bars, again both of them in the same line (center of the screen). Am I right here? 3.- The last one, I can't see... the lens is just behind the slits so the distance between slits and lens is $\approx 0$. My guess here is that the two centers of the intensity distributions $\sin^2(x)/x^2$ will go to the center of the screen (the focus) because they go perpendicular to the lens, but the rest of the pattern will just be compressed a little.
Again the waves don't interfere, so the intensities just sum up, and the result will be: $$I=I_1\frac{\sin^2(\alpha x)}{(\alpha x)^2}+I_2\frac{\sin^2(\beta x')}{(\beta x')^2}$$ where $\alpha$ and $\beta$ are some compression factors due to the lens, which could actually be functions of $x$. Am I right here? Am I completely wrong? What would happen? Answer: Only with this, as the waves have different wavelengths, I guess there can't be any interference; we will only see the diffraction pattern, the two functions of the form $\sin^2(x)/x^2$, with the principal maxima separated by a distance d. Am I right here? Sort of. The diffraction pattern is visible "at infinity", which is in fact your case #3. I'll explain there. 1.- In the first one, the system is configured such that the slits are far away from the lens. Here, we can approximate the wave that arrives as a planar wave, and therefore the lens will perform the Fourier transform in the focal plane of the screen. The diffraction of the slits also performs the Fourier transform, so this configuration should lead to having only two bars of light in the screen, centered in the focus. Am I right? Sort of. Your lens has a limited diameter, so if you place it far from the slits, it will capture only the central portion of the diffraction pattern, i.e. the top of the $\sin x/x$ function. In other words, you will lose the fringe pattern and reconstruct slits without fringes. 2.- The slits are in the focal plane of the lens, such that the lens is in the middle of slits-screen. Here, the same thing should happen, right? As the light comes from the focal plane, the lens must do the Fourier transform with no extra things, and we should get the two bars, again both of them in the same line (center of the screen). Am I right here? This will be somewhat different because you form the image of the slits at infinity, i.e. a blurry image at close distances.
Depending on how close the lens is, you may only be taking the "no-fringe" portion of the diffraction pattern. 3.- The last one, I can't see... the lens is just behind the slits so the distance between slits and lens is 0. That is exactly the typical school case. The diffraction pattern is, before the lens, located at infinity, or in other words, the fringes are defined as angles and not positions ($\sin\theta/\theta$). The role of the lens is to bring these fringes to a finite distance (the focal length). You probably learned that a lens makes an image, initially located at infinity, appear at the focal length. That's the same with the diffraction fringes. Now, as the two slits are both close to the lens, you won't form an image of them. That means that you should not see separated slit images, but instead, you should see two superimposed diffraction patterns, centered at the same point.
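Because the two wavelengths are mutually incoherent, the screen intensity is indeed the sum of two single-slit envelopes, as the question's formula suggests. A minimal numeric sketch (slit width and the sample angle are my own illustrative assumptions):

```python
import math

def sinc2(u):
    """(sin u / u)**2 with the u -> 0 limit handled."""
    return 1.0 if u == 0 else (math.sin(u) / u) ** 2

def single_slit(theta, wavelength, width, i0=1.0):
    """Fraunhofer intensity of one slit of the given width at angle theta."""
    u = math.pi * width * math.sin(theta) / wavelength
    return i0 * sinc2(u)

def two_incoherent_slits(theta, width, i1=1.0, i2=1.0,
                         lam1=600e-9, lam2=400e-9):
    # Different wavelengths cannot interfere, so intensities simply add.
    return (single_slit(theta, lam1, width, i1)
            + single_slit(theta, lam2, width, i2))

width = 10e-6  # 10 micron slit, an illustrative value

print(two_incoherent_slits(0.0, width))   # both central maxima add on axis
print(two_incoherent_slits(0.06, width))  # well down the two envelopes
```

Note the two envelopes have different widths (the 400 nm pattern is narrower than the 600 nm one), so the summed pattern is not simply one sinc² curve scaled up.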
{ "domain": "physics.stackexchange", "id": 8201, "tags": "optics, lenses" }
What happens to the ergospheres of two colliding black holes, right before collision?
Question: Let's assume we have two black holes with equal mass. They move towards each other, each heading towards the center of the other black hole. Both black holes rotate with equal speed. What happens to a particle that is placed in the middle of both black holes in respect to the approaching ergospheres? If they rotate in opposite directions, do the ergospheres add up, shooting the particle outwards? If they rotate in the same direction, do the ergospheres cancel each other out, not moving the particle at all? Does the ergosphere of one black hole affect the other black hole? What happens if the two black holes do not have equal mass? Does the ergosphere of the heavier black hole "win" and force the particle to move in its direction? Do the ergospheres influence the other black hole in this case? Please excuse my possible layman terms, physics is not my area of expertise. Answer: Firstly, the ergosphere of black holes is considered an indirect rotational quality, its just the effect of the massive rotation speed morphing the area of space outside the event horizon. Geometrical Theory: (2) If they rotate in opposite directions, do the ergospheres add up, shooting the particle outwards? (1) If they rotate in the same direction, do the ergospheres cancel each other out, not moving the particle at all? Until the particle is not under any influence of a ergophere rotation, the objects pulled into a black hole is due to its gravitational pull (which extends way further than the ergosphere). From what I suspect, the particle will stay in the middle for both situations until the above statement is true, assuming the particle can sustain pull forces from both the black holes combined (realistically not possible), the black hole have the same mass and are equidistant from the particles (as you have mentioned) and they have the same gravitational pull. 
Now, once the black the particle enters the ergospheres of the black holes (assuming the both the black hole move at the exact same speed and the particle enters the ergospheres at the same time): In condition (1): The particle will stay constant / still. This is because, referencing the image above the opposite rotational forces of black holes A, B on the particle, will cancel out and with the balanced gravitational forces, the particle simply stays still. In condition (2): The object will be ejected out in the direction relative to the added rotational direction of the black holes. Each force has two components; horizontal and vertical, Fx and Fy as seen in the image above. Due to the direction of x component of the rotations of the black holes, they will cancel out, leaving only the y. Since the y component is in the same direction for both black holes and at a equal magnitude (quantity), its only obvious that the particle should move straight in the y and relative to the rotation direction of the black holes (downwards in the image) and it will eventually be ejected. Other possibilities include: The particles rotates around the two black holes, following the path above OR below Experimental / Real Life: To stay true to reality, the above theory will not work due to a simple reason: When the black holes come near enough one another, the will orbit each other, which gradually decays, leading them to collide and become a bigger black hole, which in-turn leads to two scenarios: either the force and factors affecting the particle cause it to be ejected out into space, OR, your particle gets sucked into the black hole :(. So why did I do the geometrical theory above? Its always fun to play around with geometry and theory and what it would be like excluding any other variables and problems. If I just said the black hole ate up the particle, does not really seem to explain a lot + plus its cooler to explain geometry. Hope this helps :) !!
{ "domain": "physics.stackexchange", "id": 81556, "tags": "general-relativity, black-holes, spacetime, event-horizon, kerr-metric" }
Normalized Euclidean Distance versus cross correlation?
Question: Normalized Euclidean Distance and Normalized Cross-Correlation can both be used as a metric of distance between vectors. What is the difference between these metrics? It seems to me that they are the same, although I have not seen this explicitly stated in any textbook or literature. Thank you. Answer: These two metrics are not the same. The normalized Euclidean distance is the distance between two vectors that have been normalized to length one. If the vectors are identical then the distance is 0, if the vectors point in opposite directions the distance is 2, and if the vectors are orthogonal (perpendicular) the distance is sqrt(2). It is a nonnegative scalar value between 0 and 2. The normalized cross-correlation is the dot product between the two normalized vectors. If the vectors are identical, then the correlation is 1, if the vectors point in opposite directions the correlation is -1, and if the vectors are orthogonal (perpendicular) the correlation is 0. It is a scalar value between -1 and 1. This all comes with the understanding that in time-series analysis the cross-correlation is a measure of similarity of two series as a function of the lag of one relative to the other.
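The two metrics are related by an exact identity: for unit vectors, $\|a-b\|^2 = \|a\|^2 + \|b\|^2 - 2\,a\cdot b = 2 - 2\rho$, so $d = \sqrt{2(1-\rho)}$. A quick check in plain Python (the sample vectors are arbitrary):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = normalize([3.0, 1.0, -2.0, 0.5])
b = normalize([1.0, -1.0, 4.0, 2.0])

d = euclidean(a, b)   # normalized Euclidean distance, in [0, 2]
rho = dot(a, b)       # normalized cross-correlation, in [-1, 1]

# d equals sqrt(2 * (1 - rho)) up to floating-point error
print(d, math.sqrt(2.0 * (1.0 - rho)))
```

So the two metrics carry the same information (one is a monotone function of the other), yet they are different numbers on different scales, which is the answer's point.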
{ "domain": "datascience.stackexchange", "id": 375, "tags": "time-series, correlation" }
RESTful HTTP Post with falcon
Question: I wrote a small aggregator app that aggregates values from a json http post request and outputs the aggregated values. Now the aggregator function is somewhat large but the output appears correct. I start it with gunicorn: $ gunicorn --workers 2 --threads 4 aggregator:api I test it with curl : $ curl -X POST -d @aggregator.json http://localhost:8000/aggregate It outputs the aggregated values. import falcon import json from datetime import datetime from dateutil import tz import requests import time from_zone = tz.gettz('UTC') to_zone = tz.gettz('Europe/Stockholm') class DataService: def on_post(self, req, resp): data = json.loads(req.stream.read().decode('utf-8')) # output the data, we could write it to persistent storage here print(data) class AggregatorService: def zero_if_none(self, value): if value is not None: x = int(value) else: x = 0 return x # return the start day of consumption e.g. 1 for 2014-12-01 def get_day_start(self, hours): return int(datetime.fromtimestamp( int(hours[0][0]) ).strftime('%d')) # return the month number e.g. 2 for February def get_month_start(self, hours): return int(datetime.fromtimestamp( int(hours[0][0]) ).strftime('%m')) # return the day of month number e.g. 
5 for 2015-01-05 def get_day(self, hour): return int(datetime.fromtimestamp( int(hour[0]) ).strftime('%d')) # return the month number for a timestamp def get_month(self, hour): return int(datetime.fromtimestamp( int(hour[0]) ).strftime('%m')) def on_post(self, req, resp): data = json.loads(req.stream.read().decode('utf-8')) hours = data['hours'] day_start = self.get_day_start(hours) month_start = self.get_month_start(hours) aggr_daily_wh = 0 aggr_monthly_wh = 0 aggr_daily_th = 0 aggr_monthly_th = 0 jdict = {} jdict['user'] = data['user'] jhours = [] jdays = [] jmonths = [] last_h = 0 last_day_wh = 0 last_day_th = 0 last_month_wh = 0 last_month_th = 0 for hour in hours: day = self.get_day(hour) print("day %d" % day) month = self.get_month(hour) print("month %d" % month) utime = datetime.fromtimestamp( int(hour[0]) ).strftime('%Y-%m-%d %H:%M:%S') utc = datetime.strptime(utime, '%Y-%m-%d %H:%M:%S') # Tell the datetime object that it's in UTC time zone since # datetime objects are 'naive' by default utc = utc.replace(tzinfo=from_zone) # Convert time zone and change the timestamp tstamp = int(time.mktime(utc.astimezone(to_zone).timetuple())) # consumption is 0 if there is no value wh = self.zero_if_none(hour[1]) th = self.zero_if_none(hour[2]) # append hourly comsumption jhours.append([tstamp, wh, th]) if day == day_start: # aggregate daily comsumption aggr_daily_wh += wh aggr_daily_th += th else: # new day # append daily comsumption jdays.append([tstamp, aggr_daily_wh, aggr_daily_th]) # begin new day day_start = day aggr_daily_wh = 0 aggr_daily_th = 0 aggr_daily_wh += wh aggr_daily_th += th if month == month_start: # aggregate monthly consumption print("adding from month %d" % month) aggr_monthly_wh += wh aggr_monthly_th += th else: # new month # append monthly comsumption jmonths.append([int(tstamp), aggr_monthly_wh, aggr_monthly_th]) # begin new month month_start = month aggr_monthly_wh = 0 aggr_monthly_th = 0 aggr_monthly_wh += wh aggr_monthly_th += th # make the 
values from the last iteration visible outside the loop last_h = tstamp last_month_wh = aggr_monthly_wh last_month_th = aggr_monthly_th last_day_wh = aggr_daily_wh last_day_th = aggr_daily_th # append the last values jdays.append([last_h, last_day_wh, last_day_th]) jmonths.append([last_h, last_month_wh, last_month_th]) # create the json dictionary jdict['hours'] = jhours jdict['days'] = jdays jdict['months'] = jmonths r = requests.post('http://localhost:8000/store', json=jdict) api = falcon.API() api.add_route('/aggregate', AggregatorService()) api.add_route('/store', DataService()) aggregator.json { "user": 42, "hours": [ [ 1417392000, 1, null ], [ 1417395600, 2, 3 ], [ 1417647600, 3, 2 ], [ 1417651200, null, 6 ], [ 1422835200, 6, 2 ], [ 1423094400, 4, 3 ] ] } Answer: Store dissimilar data as a dict, not a list In each hour in hours of your JSON, you have a list that stores: a date in Unix format two integers that can also be null It would be clearer and more standard if, instead of a list, you used a dict. As in: "hours": [ { (your_label_1): 1417392000, (your_label_2): 1, (your_label_3): null }, This would aid greatly in getting across what the code is doing. (I still don't know exactly what the second and third values represent. Neither of the wh and th abbreviations is really very helpful.) Perform all time operations in Unix time An advantage of Unix time is that dates and times can be added and subtracted without separate handlers to take care of days, hours etc. This would dramatically shorten your on_post function. So, first do all operations in Unix time format (UTC time zone), and finally convert to days/hours/minutes only when you need to display the time information to the user. (Note: Your wh and th variables (in units of hours in your code) would need to be changed to units of seconds. This could either be done by extracting the hour value from the JSON and multiplying by 3600, or changing the JSON format to seconds to match the Unix times.)
(Also note: If you have no control over the JSON format, then most of my points above won't exactly apply to you.)
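To make the Unix-time suggestion concrete, here is a minimal sketch of how the daily aggregation could be done with integer arithmetic on the timestamps alone — the function names and the `timestamp // 86400` day-bucketing scheme are mine, not from the original code:

```python
SECONDS_PER_DAY = 86400


def zero_if_none(value):
    # treat missing readings as zero consumption, as the original code does
    return 0 if value is None else value


def aggregate_daily(hours):
    """Sum [timestamp, wh, th] readings into per-day totals.

    Days are identified by timestamp // 86400 (the UTC day number),
    so no datetime parsing is needed inside the loop.
    """
    totals = {}
    for tstamp, wh, th in hours:
        day = tstamp // SECONDS_PER_DAY
        day_wh, day_th = totals.get(day, (0, 0))
        totals[day] = (day_wh + zero_if_none(wh), day_th + zero_if_none(th))
    return totals
```

The same idea extends to months (with a lookup of month boundaries) and removes all of the day/month bookkeeping state from the main loop.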
{ "domain": "codereview.stackexchange", "id": 24148, "tags": "python, python-3.x, http, rest" }
Displaying a scrolling stock exchange ticker in a window
Question: I've written my first OOP program (194 lines including docstring and comments) in Python that uses Tkinter as GUI and Threads. The program shows a window and displays a scrolling stock exchange ticker in a line. I've created the following classes: A class that draws the GUI layout with Tkinter library. A class for the tickers, with the data of each securities. A class for manage the stock exchange market and dispatch the data to the GUI. The Thread subclass for updating the stock exchange values every certain time interval. The code seems very ugly to me. I would like some feedback about the code structure, especially about the use of OOP. #!usr/bin/python3 # -*- coding: UTF-8 -*- from tkinter import * import time import threading from random import randint as randint, uniform as randlimit class AplicationTkinter(Frame): """ Class of tkinter.Frame subclass, Initializes the GUI methods: initGUI, draws the layout scroll_ticker, inserts character by character in the Text widget """ def __init__(self, parent): Frame.__init__(self, parent) self.parent = parent self.initGUI() self.scroll_ticker() def initGUI(self): # changes the window icon self.parent.iconbitmap("tabla.ico") self.parent.title("Stock Exchange Ticker") # fix a status bar at the bottom of the window, for future improvements self.status_bar = Label(self.parent, text="", bd=1, relief=SUNKEN, anchor=W) self.status_bar.pack(side=BOTTOM, fill=X) # content Frame for entry, for future improvements self.frm_1 = Frame(self.parent) self.frm_1.pack() self.var_entry = StringVar() self.ent_1 = Entry(self.frm_1, textvariable=self.var_entry) self.ent_1.pack() self.var_entry.set("a default value") str_ent_1 = self.ent_1.get() # content LabelFrame to show the ticker scrolling line of text self.lblfr_1 = LabelFrame(self.parent, text="Ventana de Resultados") self.lblfr_1.pack() # creates an instance of the StockMarket class for contents the the data self.market_one = StockMarket(stock_market) # the scrolling line 
of Text for show the data self.txt_ticker_widget = Text(self.lblfr_1, background='black', height=1, width=56, wrap="none") self.txt_ticker_widget.pack(side=TOP, fill=X) self.txt_ticker_widget.tag_configure("up", foreground="green") self.txt_ticker_widget.tag_configure("down", foreground="red") self.txt_ticker_widget.tag_configure("even", foreground="white") self.tag = {CHAR_DOWN: "down", CHAR_EVEN: "even", CHAR_UP: "up"} def scroll_ticker(self): self.txt_ticker_widget.configure(state=NORMAL) self.txt_ticker_widget.insert(END, self.market_one.get_next_character(), self.tag[self.market_one.get_tag()]) # TODO simplify self.txt_ticker_widget.see(END) self.txt_ticker_widget.configure(state=DISABLED) self.txt_ticker_widget.after(SPEED, self.scroll_ticker) # recursive each interval of millisecs # Here starts the program working process, until here was the GUI # CONSTANTS CHAR_UP = "\u25B2" CHAR_DOWN = "\u25BC" CHAR_EVEN = "=" SPEED = 250 UPDATE_TIME = 60 # INITIAL DATA, this must be changed to implement the load of a external source stock_market = [["GOOG", "587.25", CHAR_UP, "(+12.14)"], ["AAPL", "237.14", CHAR_UP, "(+7.25)"], ["GTAT", "87.47", CHAR_DOWN, "(-1.18)"], ["KNDI", "167.32", CHAR_UP, "(+6.85)"], ["ORCL", "482.91", CHAR_DOWN, "(-24.65)"], ["FBOK", "327.67", CHAR_DOWN, "(-11.78)"], ["TWTR", "842.41", CHAR_UP, "(+15.45)"]] class StockTicker(): """ Class StockTicker, handle each stock symbol and their data attributes: symbol, string, the abbreviature of the securitie price, string, the current price of the securitie direction, string(1), is a character that indicates its las fix price went up, down or even change, string, is the value of the last change surrounded by '()', the first character is '+' or '-' methods: update_ticker, update the securitie price, direction and change with random values ticker_to_text, returns a formatted string with all the data of the securitie """ def __init__(self, list_data): self.symbol, self.price, self.direction, self.change = 
list_data def update_ticker(self): flt_price = float(self.price) if randint(0, 9) == 0: self.direction = CHAR_EVEN else: increase_percent = randlimit(-5, 5) # TODO implementar normalvariate(0, 0.02) o gauss(0, 0.02) flt_change = flt_price * increase_percent / 100 flt_new_price = flt_price + flt_change self.price = "{:.2f}".format(flt_new_price) if flt_change < 0: self.direction = CHAR_DOWN elif flt_change == 0: self.direction = CHAR_EVEN else: self.direction = CHAR_UP self.change = "({:+.2f})".format(flt_change) def ticker_to_text(self): return " | {} {} {} {} ".format(self.symbol, self.price, self.direction, self.change) class StockMarket(): """ Class StockMarket, creates and handle a list of StockTicker objects, and provide to the GUI of stuff for the scroll ticker attributes: smarket, list of StockTicker objects thread_actualizar, Thread object to update the stock market each time interval methods: load_market, load the list with StockTicker object taking the data from the initial source data. update_market, update the objects of the list get_one_ticker, getter function to return one securitie data in text format and rotates to the next one get_next_character, returns a character of one securitie (if the securitie data is exhausted retrieve another securitie) data to the GUI. 
""" def __init__(self, l_inicial): self.smarket = [] self.load_market(l_inicial) self.current_ticker = self.get_one_ticker() self.thread_updating = UpdateThread(self) self.thread_updating.start() def load_market(self, l_inicial): for data_ticker in l_inicial: simple_ticker = StockTicker(data_ticker) self.smarket.append(simple_ticker) def update_market(self): for j in range(len(self.smarket)): self.smarket[j].update_ticker() def get_one_ticker(self): self.one_ticker = self.smarket.pop(0) self.smarket.append(self.one_ticker) self.index = 0 return self.one_ticker.ticker_to_text() def get_next_character(self): if self.index == len(self.current_ticker): self.current_ticker = self.get_one_ticker() self.index = 0 self.character_symbol = self.current_ticker[self.index:self.index+1] self.index += 1 return self.character_symbol def get_tag(self): return self.one_ticker.direction class UpdateThread(threading.Thread): """ Class UpdateThread(), subclass of Thread, handle the time to the next update of the stock market values args: market_1, a StockMarket class object to update attributes: my_check, string for debugging purpouses, it'll be implemented the source data management the_market, StockMarket object that will be updated methods: run, overrides the Thread run method, and calls the update_market method of StockMarket class each interval """ def __init__(self, market_1): self.my_check = " CHECK " # TODO replace with initial source data. self.the_market = market_1 threading.Thread.__init__(self) def run(self): time.sleep(UPDATE_TIME) self.the_market.update_market() print(" UPDATED!!!") # for debugging self.run() # STARTS THE PROGRAM def main(): the_window = Tk() aplicacion = AplicationTkinter(the_window) # init the GUI process the_window.mainloop() if __name__ == '__main__': main() I know there are some lines that I could have written in a unique line, but for now I'd like to be explicit. 
Or expressions that can be simplified like: if not randint(0, 9): instead of: if randint(0, 9) == 0: Answer: Some comments: There's a slash missing in the path to the python3 binary Beware of lines longer than 80 characters (pep8) Constants should be at the top of the file Constants used only in one class should be class attributes (for example, SPEED) Use docstrings for every method (do not describe each method in the class docstring) (pep257) Using sphinx formatting for the docstring is a good idea Try to avoid from tkinter import *. An alternative to still keep short names would be import tkinter as tk and the code would be a little bit more readable Review the variable names. If there is just one frame, use self.frame instead of self.frm_1. If there is just one label frame, use self.label_frame instead of self.lblfr_1. I'm not aware of any convention for tk, but I find the current variable names hard to read Also, do not use self. when the variable is just used in that method and never again. This makes the reader think that variable is going to be used later when it is not. Use well known methods when possible. ticker_to_text sounds pretty much like it should be __str__. There are some methods that seem to be used only internally like, for example, get_one_ticker. Prefix them with an underscore to make clear they are private. Try to set all attributes needed in the __init__ method. If there's no value available at initialization time, then set them to None. This provides the reader with the information of all the attributes used to keep the object internal state. Note that self.current_ticker[self.index:self.index+1] is self.current_ticker[self.index] Avoid verbosity in method names. 
For instance, Market.load_market should be Market.load Implement iteration properly, that is, instead of for j in range(len(self.smarket)) use for ticker in self.smarket (and rename smarket to tickers) Make sure the update thread is stopped when the application is closed (that doesn't happen currently) Regarding the object-oriented design, I think it's good already. I see there's an application class that takes care of the GUI, a stock market class that handles all the tickers, a ticker class that keeps the data for each ticker and a thread that asks the market to update itself periodically. All in all, if you fix the readability issues, I think the code can be pretty good. Actually, I looked at the application running and the output is already quite nice.
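A couple of these points (__str__ instead of ticker_to_text, setting all attributes in __init__, and iterating over objects directly) can be sketched without the GUI parts; this is an illustration of the suggestions, not a full refactoring of the program:

```python
CHAR_UP, CHAR_DOWN, CHAR_EVEN = "\u25B2", "\u25BC", "="


class StockTicker:
    """One security: symbol, price, direction of the last change, change amount."""

    def __init__(self, symbol, price, direction, change):
        self.symbol = symbol
        self.price = price
        self.direction = direction
        self.change = change

    def __str__(self):
        # replaces ticker_to_text: str(ticker) now gives the display string
        return " | {} {} {} {} ".format(self.symbol, self.price,
                                        self.direction, self.change)


class StockMarket:
    def __init__(self, initial_data):
        # every attribute is set in __init__, so the object's state is
        # visible at a glance
        self.tickers = [StockTicker(*row) for row in initial_data]

    def __str__(self):
        # iterate over the objects directly instead of range(len(...))
        return "".join(str(ticker) for ticker in self.tickers)
```

With this shape, the GUI only needs `str(market)` (or character-by-character slices of it) to fill the scrolling widget.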
{ "domain": "codereview.stackexchange", "id": 8669, "tags": "python, object-oriented, multithreading, classes, tkinter" }
Which preprocessing is the correct way to forecast time-series data using LSTM?
Question: I just started to study time-series forecasting using RNNs. I have a few months of time-series data sampled at hourly intervals. The data is a kind of percentage value from my little experiment, with no other correlated information attached. It is a simple 1-D array. I would like to forecast its future behaviour. Many tutorials and web resources demonstrate training on and forecasting time-series data directly, without any data pre-processing. But for RNNs (or ML and DL in general), I think we should consider whether the data is stationary or not. My data looks essentially random, which makes it stationary (no seasonality, no trend). For example, many LSTM tutorials on US stock prediction show seemingly great forecasting accuracy. [If this really worked, all ML developers would be rich.] Yet some of them never mention pre-processing steps such as converting non-stationary data to stationary. To my limited knowledge, non-stationary data such as stock prices (which have a trend) should be converted to a stationary form through differencing or similar steps, and I think that is the theoretically correct way to predict, even if the accuracy is not high. So my point is: I'm a bit confused about whether any preprocessing for stationarity is really needed or not. For my case, I applied a differencing step ($t_n - t_{n-1}$) to my time-series data in order to remove any trend or periodic behaviour. Is my understanding incorrect? Why do time-series forecasting tutorials not introduce data stationarity? Answer: Models based on the stock markets are often unreliable because there is a lot of noise, and even if a model seems to predict well on the validation data, the result is very different in practice. What you call stationarity amounts to the data depending on previous values in a relative way, and stock-market prediction should follow this rule. 
However, RNNs and LSTMs have been built to memorize low-noise patterns, so you will want to apply some noise reduction like smoothing for better results. In addition, I recommend converting the data to fluctuations (= derivatives) rather than raw values, so that the neural network learns the data dynamics. For a good model, you must simulate prediction vs real-world results for each day. That means evaluating the model prediction for day 1, checking whether the result is correct or not, then feeding the neural network with this information, and applying the same logic for day 2 and so on. Generally speaking, stock markets are so difficult that you will want to use multi-variate and seasonality models. Maybe Prophet is a better option. https://facebook.github.io/prophet/
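The differencing step from the question and the smoothing suggested here are both a few lines of code. This is a generic sketch (not taken from any particular tutorial) of first-order differencing, its inverse (needed to map predictions back to raw values), and a moving-average smoother:

```python
def difference(series):
    # first-order differencing: t_n - t_{n-1}; removes a linear trend
    return [b - a for a, b in zip(series, series[1:])]


def undifference(first_value, diffs):
    # invert differencing so model predictions can be mapped back to raw values
    out = [first_value]
    for d in diffs:
        out.append(out[-1] + d)
    return out


def moving_average(series, window):
    # simple smoothing to reduce noise before feeding an RNN/LSTM
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]
```

Whatever transformation is applied before training must be inverted after prediction, which is why keeping `undifference` next to `difference` is worthwhile.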
{ "domain": "datascience.stackexchange", "id": 11772, "tags": "time-series, lstm, rnn, preprocessing, accuracy" }
Monty Hall simulation with any number of doors
Question: After answering Monty hall python simulation I wondered what the outcome were to be if there were more than three doors, say four or seven. And so I decided to modify the problem slightly to adjust for this. There are \$x\$ amount of doors. Behind one door is a car, the others are goats. You pick a door, and the host reveals a goat in a different door. You're then given the choice to change from the selected door to any door except the open door, or the already selected door. And so I decided to find out how many times a person is likely to win if they always pick a random door when asked if they want to switch door. And so programmed the following: import random def monty_hall(amount, doors=3): if doors < 3: raise ValueError(f'doors must be greater than three, not {doors}') wins = 0 for _ in range(amount): player_choice = random.randrange(doors) car_placement = random.randrange(doors) other_doors = set(range(doors)) - {player_choice, car_placement} shown_door = random.choice(list(other_doors)) swap_doors = set(range(doors)) - {player_choice, shown_door} final_choice = random.choice(list(swap_doors)) wins += final_choice == car_placement return wins print(monty_hall(1000000, 3)) print(monty_hall(1000000, 4)) print(monty_hall(1000000, 5)) I then decided to optimize the above code, and came up with the following ways to do this: Change player_choice to doors - 1. The first player choice doesn't have to be random. Change other_doors and swap_doors to account for (1). Rather than removing an item that will always be in the set, remove it from the range and the second set. And so other_doors becomes: set(range(doors - 1)) - {car_placement}. Remove player_choice as it's no longer used. Change shown_door to always be the first door. However if the car is the first door, show the second. shown_door = car_placement == 0. Remove other_doors as it's no longer used. Change swap_doors to be a list, rather then a set with an item being removed. 
This requires the use of an if else to manually split the list if the car is in the first door. And so can become: if car_placment == 0: swap_doors = [0] + list(range(2, doors - 1)) else: swap_doors = list(range(1, doors - 1)) final_choice = random.choice(swap_doors) Rather than making my code WET, as I did in (6), we can instead move the car to the second door if it's in the first, and use the else whatever the case. Finally we can merge all the code into a sum and a generator comprehension. This resulted in the following code: import random def monty_hall(amount, doors=3): if doors < 3: raise ValueError(f'doors must be greater than three, not {doors}') rand = random.randrange return sum( max(rand(doors), 1) == rand(1, doors - 1) for _ in range(amount) ) print(monty_hall(1000000, 3)) print(monty_hall(1000000, 4)) print(monty_hall(1000000, 5)) Are there any improvements that I can make on either the optimized or non-optimized solutions? Also are there any issues with the optimized solution, as my method for optimization weren't exactly scientific. As far as I know it looks good. Code is only runnable in Python 3.6+ Answer: I would have used an helper function in the first version so that your monty_hall function is: def monty_hall(amount, doors=3): if doors < 3: raise ValueError(f'doors must be greater than three, not {doors}') return sum(player_wins_a_car(doors) for _ in range(amount)) This also applies to the second version but removes some of the speed optimizations you made there; so not sure if it’s worth keeping. However, in the first version, I would use list-comprehensions to reduce a bit the memory needed. It would also exhibit the actions more like a real player would do. 
Since creating the set from range will already iterate over each element, using the list comprehension won't be much more costly: import random def player_wins_a_car(doors): player_choice = random.randrange(doors) car_placement = random.randrange(doors) shown_door = random.choice([door for door in range(doors) if door not in (player_choice, car_placement)]) final_choice = random.choice([door for door in range(doors) if door not in (player_choice, shown_door)]) return final_choice == car_placement For the second version, keeping two functions could lead to more explanatory stuff like: def player_wins_a_car(doors): # Choose a door that hides the car. The car is never behind door 0; # if it were behind door 0, consider doors 0 and 1 to be # swapped so that the host always opens door 0. car_placement = random.randrange(doors) or 1 # Consider an initial choice of door `doors - 1` and the host # showing door 0 unconditionally. Always choose a random # door out of the remaining ones. final_choice = random.randrange(1, doors-1) return car_placement == final_choice I'm using or here as the only value triggering max would be 0; not sure which one performs better.
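As a sanity check on either version: under the rules as coded in the question (the host opens one goat door that is neither the player's pick nor the car, then the player re-picks uniformly among the remaining doors − 2), the win probability has a closed form, P = (doors − 1) / (doors · (doors − 2)) — a derivation of mine, not from the question. A small sketch comparing it against a seeded simulation:

```python
import random
from fractions import Fraction


def analytic_win_probability(doors):
    # P(first pick misses the car) * P(the random swap lands on the car,
    # which is always among the doors - 2 remaining candidates)
    return Fraction(doors - 1, doors * (doors - 2))


def simulated_win_rate(amount, doors, rng):
    wins = 0
    for _ in range(amount):
        player = rng.randrange(doors)
        car = rng.randrange(doors)
        shown = rng.choice([d for d in range(doors) if d not in (player, car)])
        final = rng.choice([d for d in range(doors) if d not in (player, shown)])
        wins += final == car
    return wins / amount
```

For doors = 3 this reproduces the familiar 2/3, and for 4 and 5 doors it gives 3/8 and 4/15, matching the counts the question's program prints out of a million trials.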
{ "domain": "codereview.stackexchange", "id": 25286, "tags": "python, python-3.x, random, simulation" }
I want to know if a color repeats, and if so change a drawable
Question: So I made this code: private List<eventAndColor> GetEventosToPopulateCalendar() { // A List with only the days that have events List<Eventos> CalEvents = eventDao.Select(FromEventos.DistinctDate); // A List of a struct with Date + color of event List<eventAndColor> events = new List<eventAndColor>(); // If List isn't null if (CalEvents != null) { // Iterate through the days that have events foreach (var calEvent in CalEvents) { var eventsInDay = eventDao.Select(FromEventos.WhereDay, calEvent.DataDocumento); // A List with all events on that day bool hasDifColor = false; // A bool to flag if it encounters a different color // Iterate through all the events on the day foreach (var day in eventsInDay) if (day.Cor != calEvent.Cor) // Only set hasDifColor to True if it founds a different color { hasDifColor = true; break; // Break loop } if (hasDifColor) events.Add(new eventAndColor(calEvent.DataDocumento, MULTIPLE_EVENTS)); // Add a tag of multiple event colors else events.Add(new eventAndColor(calEvent.DataDocumento, calEvent.Cor)); // Add a tag of an event color } return events; } else return events = new List<eventAndColor>(); // Empty list } It's objective is to check the events of a group of dates, and see if in each day if the events inside it have different colors, if it does I tell that there are multiple color events in that specific day. the problem here is that I don't like this code, it looks too messy, I wanted to know if theres anyway to optimise it. 
(I'm talking about the loops, but I welcome any other criticism towards my code) The "Eventos" Class and the Struc are here if needed: Struct public struct eventAndColor { public DateTime eventDate; public string colorHex; public eventAndColor(DateTime date, string color) { eventDate = date; colorHex = color; } } Eventos public class Eventos : Evento, IEnumerable { public List<Eventos> eventos = new List<Eventos>(); public void Add(Eventos evento) { eventos.Add(evento); } public Eventos Get(int position) { return eventos[position]; } public IEnumerator GetEnumerator() { return eventos.GetEnumerator(); } public Eventos() { } } public class Evento : Java.Lang.Object { #region Table Columns Names public const string COLUMN_Id = "ID"; public const string COLUMN_DataHoraRegistoSistema = "Data Hora Registo Sistema"; public const string COLUMN_NoDocumento = "No Documento"; public const string COLUMN_NoColaborador = "No Colaborador"; public const string COLUMN_Descricao = "Descricao"; public const string COLUMN_Local = "Local"; public const string COLUMN_DataDocumento = "Data Documento"; public const string COLUMN_HoraDe = "Hora De"; public const string COLUMN_HoraA = "Hora A"; public const string COLUMN_TipoTarefa = "Tipo Tarefa"; public const string COLUMN_Equipa = "Equipa"; public const string COLUMN_NoDocumentoLigacao = "No Documento Ligacao"; public const string COLUMN_IdTarefaMov = "Id Tarefa Mov"; public const string COLUMN_Recorrente = "Recorrente"; public const string COLUMN_IdAtividade = "Id Atividade"; public const string COLUMN_NoEntidade = "No Entidade"; public const string COLUMN_NoCliente = "No Cliente"; public const string COLUMN_NoContrato = "No Contrato"; public const string COLUMN_NoLinhaContrato = "No Linha Contrato"; public const string COLUMN_NoProduto = "No Produto"; public const string COLUMN_NoInvestimento = "No Investimento"; public const string COLUMN_SegProcesso = "Seg Processo"; public const string COLUMN_Eliminado = "Eliminado"; public const 
string COLUMN_Concluido = "Concluido"; public const string COLUMN_Cor = "Cor"; #endregion #region Table Fields [Column(COLUMN_Id), PrimaryKey, NotNull, Unique, AutoIncrement] public int? Id { get; set; } [Column(COLUMN_DataHoraRegistoSistema)] public DateTime DataHoraRegistoSistema { get; set; } [Column(COLUMN_NoDocumento)] public string NoDocumento { get; set; } = ""; [Column(COLUMN_NoColaborador)] public string NoColaborador { get; set; } = ""; [Column(COLUMN_Descricao)] public string Descricao { get; set; } = ""; [Column(COLUMN_Local)] public string Local { get; set; } = ""; [Column(COLUMN_DataDocumento)] public DateTime DataDocumento { get; set; } [Column(COLUMN_HoraDe)] public DateTime HoraDe { get; set; } [Column(COLUMN_HoraA)] public DateTime HoraA { get; set; } [Column(COLUMN_TipoTarefa)] public int TipoTarefa { get; set; } = 0; [Column(COLUMN_Equipa)] public string Equipa { get; set; } = ""; [Column(COLUMN_NoDocumentoLigacao)] public string NoDocumentoLigacao { get; set; } = ""; [Column(COLUMN_IdTarefaMov)] public int IdTarefaMov { get; set; } = 0; [Column(COLUMN_Recorrente)] public bool Recorrente { get; set; } = false; [Column(COLUMN_IdAtividade)] public int IdAtividade { get; set; } = 0; [Column(COLUMN_NoEntidade)] public string NoEntidade { get; set; } = ""; [Column(COLUMN_NoCliente)] public string NoCliente { get; set; } = ""; [Column(COLUMN_NoContrato)] public string NoContrato { get; set; } = ""; [Column(COLUMN_NoLinhaContrato)] public string NoLinhaContrato { get; set; } = ""; [Column(COLUMN_NoProduto)] public string NoProduto { get; set; } = ""; [Column(COLUMN_NoInvestimento)] public string NoInvestimento { get; set; } = ""; [Column(COLUMN_SegProcesso)] public string SegProcesso { get; set; } = ""; [Column(COLUMN_Eliminado)] public bool Eliminado { get; set; } = false; [Column(COLUMN_Concluido)] public bool Concluido { get; set; } = false; [Column(COLUMN_Cor)] public string Cor { get; set; } = "#1869BF"; #endregion } Answer: By using a guard 
condition regarding CalEvents being null you can save one level of indentation like so private List<eventAndColor> GetEventosToPopulateCalendar() { // A List with only the days that have events List<Eventos> calEvents = eventDao.Select(FromEventos.DistinctDate); if (calEvents == null || calEvents.Count == 0) { return new List<eventAndColor>(); } // A List of a struct with Date + color of event List<eventAndColor> events = new List<eventAndColor>(); Maybe you have noticed that the former CalEvents are now named calEvents because method-level variables should be named using camlCase casing. You can read more about it in the .NET Naming Guidelines. The inner loop can be replaced with some Linq-magic like so if (eventsInDay.Any(day => day.Cor != calEvent.Cor)) { events.Add(new eventAndColor(calEvent.DataDocumento, MULTIPLE_EVENTS)); // Add a tag of multiple event colors } else { events.Add(new eventAndColor(calEvent.DataDocumento, calEvent.Cor)); // Add a tag of an event color } or maybe better like so string desiredColor = calEvent.Cor if (eventsInDay.Any(day => day.Cor != calEvent.Cor)) { desiredColor = MULTIPLE_EVENTS; // Add a tag of multiple event colors } events.Add(new eventAndColor(calEvent.DataDocumento, desiredColor)); // Add a tag of an event color Putting all together would result in private List<eventAndColor> GetEventosToPopulateCalendar() { // A List with only the days that have events List<Eventos> calEvents = eventDao.Select(FromEventos.DistinctDate); if (calEvents == null || calEvents.Count == 0) { return new List<eventAndColor>(); } // A List of a struct with Date + color of event List<eventAndColor> events = new List<eventAndColor>(); foreach (var calEvent in calEvents) { var eventsInDay = eventDao.Select(FromEventos.WhereDay, calEvent.DataDocumento); // A List with all events on that day string desiredColor = calEvent.Cor if (eventsInDay.Any(day => day.Cor != calEvent.Cor)) { desiredColor = MULTIPLE_EVENTS; // Add a tag of multiple event colors } 
events.Add(new eventAndColor(calEvent.DataDocumento, desiredColor)); // Add a tag of an event color } return events; } Some more words regarding your code and coding-style: Because most developers are coding using english as a language to name things you should do this as well. In this way if some other (non native speaker of your language) developer should maintain your code it will be much easier. Because most developers expect a specific style, which mostly is based on the .NET Naming Guidelines you should stick to that style as well. E.g eventAndColor should be named using PascalCase casing. Don't omit braces {} althought they might be optional.
{ "domain": "codereview.stackexchange", "id": 32690, "tags": "c#, performance, object-oriented, xamarin" }
Sklearn Linear Regression examples
Question: Could someone give an example of the application of Tf-idf with sparse data (lots of zeros) in sklearn? I am not quite sure where to insert the weight of Tf-idf and how to rightly obtain the weight. Without knowing this fully, I would not be able to use the tool well for prediction. Thank you. Answer: There is an application of tf-idf on the sklearn website. sklearn handles sparse matrices for you, so I wouldn't worry about it too much: Fortunately, most values in X will be zeros since for a given document less than a couple thousand distinct words will be used. For this reason we say that bags of words are typically high-dimensional sparse datasets. We can save a lot of memory by only storing the non-zero parts of the feature vectors in memory. scipy.sparse matrices are data structures that do exactly this, and scikit-learn has built-in support for these structures. Regarding your point about inserting the weight, I guess you have already performed tf-idf on your training corpus, but you don't know how to apply it to your test corpus? If so you could do as follows (taken from the above link) from sklearn.feature_extraction.text import TfidfTransformer from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer() X_train_counts = count_vect.fit_transform(data) # Perform tf-idf tfidf_transformer = TfidfTransformer() X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts) docs_new = ['God is love', 'OpenGL on the GPU is fast'] # New test documents X_test_counts = count_vect.transform(docs_new) # Count vectorise the new documents X_test_tfidf = tfidf_transformer.transform(X_test_counts) # transform the test counts
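If it helps to see where the weights actually come from, here is a from-scratch sketch of the weighting itself, using the smoothed idf formula that scikit-learn's documentation gives for its default `smooth_idf=True` setting, idf(t) = ln((1 + n)/(1 + df(t))) + 1, and skipping the final L2 normalization that TfidfTransformer also applies:

```python
import math


def tf_idf(corpus):
    """Return one {term: weight} dict per document (unnormalized)."""
    docs = [doc.lower().split() for doc in corpus]
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in set(doc):
            tf = doc.count(term)                          # raw term count
            idf = math.log((1 + n) / (1 + df[term])) + 1  # smoothed idf
            w[term] = tf * idf
        weights.append(w)
    return weights
```

A term that appears in every document (like "the") ends up with idf = 1, while rarer terms are weighted up — which is exactly the sparsity-friendly behaviour the quoted passage describes, since most terms simply never appear in a given document and get no entry at all.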
{ "domain": "datascience.stackexchange", "id": 989, "tags": "scikit-learn, linear-regression, weighted-data" }
gradient strength units in MRI
Question: This may not be the appropriate forum for this but it seemed to be the closest. I am trying to understand some concepts around MRI physics, where it is common to use external magnetic fields created by gradient coils to manipulate the main magnetic field strength at different locations. Now, the books talk in terms of gradient amplitude, and the units they typically use are mT/m (millitesla per metre). I am not sure why there is this "per metre": if it is just the gradient amplitude, should it not simply be in millitesla or tesla? Why is it defined per unit distance? Answer: A 'gradient' measures how quickly something changes with respect to something else. In this case, it's how much the magnetic field strength changes per unit length.
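A quick numeric illustration of why the "per metre" matters: with a main field B0 and a linear gradient G along z, the field a spin sees at position z is B(z) = B0 + G·z, so one gradient amplitude produces different field offsets (and hence different Larmor frequencies) at different positions. The scanner and gradient values below are only illustrative:

```python
def field_at(b0_tesla, gradient_t_per_m, z_metres):
    # linear gradient: the field offset grows with distance from isocentre
    return b0_tesla + gradient_t_per_m * z_metres


# illustrative numbers: a 3 T main field with a 40 mT/m gradient
B0 = 3.0
G = 40e-3  # tesla per metre

offset_at_10cm = field_at(B0, G, 0.10) - B0  # 4 mT offset at z = 10 cm
offset_at_20cm = field_at(B0, G, 0.20) - B0  # twice as far, twice the offset
```

This position dependence is precisely what makes spatial encoding possible, and it only exists because the quantity is specified per unit distance.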
{ "domain": "physics.stackexchange", "id": 30389, "tags": "electromagnetism, resonance" }
How would the eigenstates of a particle with spin 3/2 look like?
Question: I learnt in an introductory course on quantum mechanics how to work with spin-1/2 particles. I saw how the algebra is almost the same as for angular momentum, but no one ever told me about particles having a spin different from 1/2. I know there are no known elementary particles of spin 3/2, but I am wondering what the eigenstates of the spin operator in the z direction would look like, to get a better understanding of what spin really is. Answer: It should follow most of the same ideas as spin-1/2 and spin-1, in terms of satisfying the general algebra of angular momentum. We can construct the operators with a little bit of work. Firstly, in general, $$[J_i,J_j] = i\hbar\epsilon_{ijk}J_k$$ Following that, a particle with spin $s$ will have $2s+1$ states; a spin-3/2 particle will have four states, with z-component of angular momentum $\frac{3}{2}\hbar$, $\frac{1}{2}\hbar$, $-\frac{1}{2}\hbar$, $-\frac{3}{2}\hbar$. We can take these states as a basis, and immediately write the $\hat{S}_z$ operator in this basis, $$\hat{S}_z = \frac{1}{2}\hbar\left(\begin{matrix} 3 & 0 & 0 & 0\\0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -3 \end{matrix}\right)$$ Now it comes down to finding two matrices $\hat{S}_x$ and $\hat{S}_y$ that satisfy the algebraic equation above given this matrix representing $\hat{S}_z$. Two such examples taken from wikipedia (https://en.wikipedia.org/wiki/3D_rotation_group#A_note_on_Lie_algebra), $$\hat{S}_x = \frac{1}{2}\hbar\left(\begin{matrix} 0 & \sqrt{3} & 0 & 0\\\sqrt{3} & 0 & 2 & 0\\ 0 & 2 & 0 & \sqrt{3}\\ 0 & 0 & \sqrt{3} & 0 \end{matrix}\right)$$ and, $$\hat{S}_y = \frac{1}{2}\hbar\left(\begin{matrix} 0 & -i\sqrt{3} & 0 & 0\\ i\sqrt{3} & 0 & -2i & 0\\ 0 & 2i & 0 & -i\sqrt{3}\\ 0 & 0 & i\sqrt{3} & 0 \end{matrix}\right)$$ You could follow this same logic to represent the higher spins as well. Edit: Instead of pulling these matrices out of nowhere I will show a way to find them. 
Following directly from the angular momentum algebra, you can construct raising and lowering operators. These take the form $J_{+} = J_x + iJ_y$ and $J_- = J_x - iJ_y$. They take a state with a given z-projection and return a state one higher (or lower) multiplied by a number. Specifically, $$J_{\pm}|J,m\rangle = \hbar\sqrt{j(j+1)-m(m\pm 1)}|J,m\pm 1\rangle$$ Given our basis and representation of $\hat{S}_z$ we can calculate the coefficients for the matrices $S_+$ and $S_-$. For example this would give for $S_+$ matrix element $\langle \frac{3}{2}, -\frac{1}{2}|\hat{S}_+|\frac{3}{2},-\frac{3}{2}\rangle = \hbar\sqrt{\frac{3}{2}(\frac{3}{2}+1)+\frac{3}{2}(-\frac{3}{2}+1)}=\frac{\hbar}{2}\sqrt{15-3} = \sqrt{3}\hbar$ $$\hat{S}_+ = \hbar\left(\begin{matrix} 0 & \sqrt{3} & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & \sqrt{3}\\ 0 & 0 & 0 & 0 \end{matrix}\right)$$ and, $$\hat{S}_- = \hbar\left(\begin{matrix} 0 & 0 & 0 & 0\\ \sqrt{3} & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & \sqrt{3} & 0 \end{matrix}\right)$$ The above versions of $\hat{S}_x$ and $\hat{S}_y$ can be calculated by plugging these into the definitions of the raising and lowering operators. I find this method of construction clarifies the structure of the matrices ($\hat{S}_x$ and $\hat{S}_y$ only having off diagonal components, and the pattern of minus signs) at least in the basis where you have diagonalized $\hat{S}_z$
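These matrices can also be verified numerically: with ħ set to 1, they should satisfy $[S_x, S_y] = iS_z$ entry by entry. A small sketch of that check using plain Python lists (no numpy):

```python
import math

r3 = math.sqrt(3)
half = 0.5  # hbar / 2 with hbar = 1

Sx = [[half * v for v in row] for row in
      [[0, r3, 0, 0], [r3, 0, 2, 0], [0, 2, 0, r3], [0, 0, r3, 0]]]
Sy = [[half * v for v in row] for row in
      [[0, -1j * r3, 0, 0], [1j * r3, 0, -2j, 0],
       [0, 2j, 0, -1j * r3], [0, 0, 1j * r3, 0]]]
Sz = [[half * v for v in row] for row in
      [[3, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -3]]]


def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]


def commutator(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(4)] for i in range(4)]


# [Sx, Sy] should equal i * Sz entry by entry (hbar = 1)
comm = commutator(Sx, Sy)
max_err = max(abs(comm[i][j] - 1j * Sz[i][j])
              for i in range(4) for j in range(4))
```

The same check with the indices cycled confirms $[S_y, S_z] = iS_x$ and $[S_z, S_x] = iS_y$, and forming $S_x + iS_y$ reproduces the raising matrix $\hat{S}_+$ above.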
{ "domain": "physics.stackexchange", "id": 75275, "tags": "quantum-mechanics, quantum-spin, spinors" }
Is anything wrong with the given statement?
Question: If we take any two bodies at random, the one with the higher temperature will contain more heat. Answer: Different materials have different heat capacities — the amount of energy needed to change a given mass's temperature by one degree (energy transferred this way is what we call heat). For example: water has a high heat capacity, most metals have low ones. Now if you have samples of both materials (with the same mass) that measure 30 °C each, then the water sample has more energy stored. Why? Because each degree of temperature in water corresponds to a large amount of stored energy, so it must contain more energy to sit at 30 °C. So the statement is wrong: temperature alone does not tell you how much energy a body stores.
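The point can be made quantitative with the textbook relation $Q = mc\,\Delta T$. A short sketch comparing equal masses of water and copper both warmed from 0 °C to 30 °C (the specific heats are standard reference values):

```python
# Energy stored as Q = m * c * dT, relative to a common 0 degC reference.
c_water = 4186.0   # specific heat of water, J/(kg K)
c_copper = 385.0   # specific heat of copper, J/(kg K)

m = 1.0            # kg, same mass for both samples
dT = 30.0          # K, both warmed from 0 degC to 30 degC

Q_water = m * c_water * dT    # ~125.6 kJ
Q_copper = m * c_copper * dT  # ~11.6 kJ

# Same temperature, very different stored energy:
print(Q_water / Q_copper)     # water stores roughly 10x more
```

Despite sitting at exactly the same temperature, the water sample holds about an order of magnitude more thermal energy than the copper one.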
{ "domain": "physics.stackexchange", "id": 56771, "tags": "thermodynamics" }
Polar radius and position vector: two-dimensional kinematics for high school students
Question: We consider for example this image, which is a polar graph of a naval unit's on-board instrumentation with vector radius (or polar radius) $\rho$ and anomaly $\theta$ or polar angle. We know that a point $P=(x,y)$ in an orthogonal Cartesian coordinate system may be identified in a polar diagram with coordinates $P\equiv(\rho,\theta)$ or vice versa. If we consider the trajectory $\Gamma$ (the curve coloured in brown) of a target and $\mathbf r=\mathbf r(t)$ is its position vector, is it possible to say that there is an analogy between the position vector $\mathbf r$ and the polar radius $\rho$? Or are the two quantities distinct because the first is a vector and the polar radius is a scalar? Answer: The position vector $\boldsymbol{r}(t)$ is a parameterization of the curve it rides on. Converting this to the polar form $\rho(\theta)$ gives another parameterization of the same curve. For example, an ellipse can be parameterized as $$ \boldsymbol{r}(t) = \pmatrix{x(t) \\ y(t)} = \pmatrix{ a \cos t \\ b \sin t} \tag{1}$$ This position vector obeys the equation of the ellipse $$ \left( \tfrac{x}{a} \right)^2 + \left( \tfrac{y}{b} \right)^2 = 1$$ where $a$ is the semi-major axis, and $b$ the semi-minor axis. Now consider the polar coordinates $$ \boldsymbol{r}(t) = \pmatrix{x(t) \\ y(t)} = \pmatrix{ \rho \cos \theta \\ \rho \sin \theta} $$ which yield the solution $$ \rho(\theta) = \frac{ a b}{\sqrt{ a^2 - (a^2-b^2) \cos^2 \theta}} \tag{2}$$ Expressions (1) and (2) are equivalent to each other since both describe the same ellipse.
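The equivalence of the two parameterizations can be checked numerically. A short sketch (with arbitrary semi-axes chosen for illustration) confirming that points generated from (1) satisfy the polar form (2):

```python
import numpy as np

a, b = 3.0, 2.0                      # arbitrary semi-axes for illustration

t = np.linspace(0, 2 * np.pi, 400)
x, y = a * np.cos(t), b * np.sin(t)  # Cartesian parameterization (1)

rho = np.hypot(x, y)                 # |r| of each point on the curve
theta = np.arctan2(y, x)             # polar angle of each point

# Polar form (2) evaluated at the same angles
rho_polar = a * b / np.sqrt(a**2 - (a**2 - b**2) * np.cos(theta)**2)

assert np.allclose(rho, rho_polar)   # same curve, two parameterizations
```

Note that the Cartesian parameter $t$ in (1) is not the polar angle $\theta$ (except for a circle), which is why the code recomputes $\theta$ from each point before comparing.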
{ "domain": "physics.stackexchange", "id": 71560, "tags": "kinematics, definition" }
Failed to process package 'cartographer':
Question: when i build the cartographer_ros ,errors happened as follow: [6/90] Building documentation. Running Sphinx v1.2.2 loading pickled environment... done building [html]: targets for 0 source files that are out of date updating environment: 0 added, 0 changed, 0 removed looking for now-outdated files... none found no targets are out of date. [9/90] Building CXX object cartographe...eal_time_correlative_scan_matcher.cc.o FAILED: /usr/bin/c++ -O3 -DNDEBUG -isystem /usr/include/eigen3 -isystem /home/exbot/cartographer/install_isolated/include -I. -I/home/exbot/cartographer/src/cartographer -isystem /usr/include/lua5.2 -I/home/exbot/cartographer/src/cartographer/cartographer/common/{Boost_INCLUDE_DIRS} -O3 -DNDEBUG -std=c++11 -Wall -Wpedantic -Werror=format-security -Werror=return-type -Werror=uninitialized -MMD -MT cartographer/mapping_2d/scan_matching/CMakeFiles/mapping_2d_scan_matching_real_time_correlative_scan_matcher.dir/real_time_correlative_scan_matcher.cc.o -MF "cartographer/mapping_2d/scan_matching/CMakeFiles/mapping_2d_scan_matching_real_time_correlative_scan_matcher.dir/real_time_correlative_scan_matcher.cc.o.d" -o cartographer/mapping_2d/scan_matching/CMakeFiles/mapping_2d_scan_matching_real_time_correlative_scan_matcher.dir/real_time_correlative_scan_matcher.cc.o -c /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/scan_matching/real_time_correlative_scan_matcher.cc In file included from /opt/eigen/Eigen/Geometry:39:0, from /opt/eigen/Eigen/Dense:6, from /home/exbot/cartographer/install_isolated/include/ceres/internal/numeric_diff.h:40, from /home/exbot/cartographer/install_isolated/include/ceres/dynamic_numeric_diff_cost_function.h:70, from /home/exbot/cartographer/install_isolated/include/ceres/ceres.h:44, from /home/exbot/cartographer/src/cartographer/cartographer/common/math.h:25, from /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/probability_grid.h:28, from 
/home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/scan_matching/real_time_correlative_scan_matcher.h:44, from /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/scan_matching/real_time_correlative_scan_matcher.cc:17: /opt/eigen/Eigen/src/Geometry/Rotation2D.h: In instantiation of ‘Eigen::Rotation2D<Scalar> Eigen::Rotation2D<Scalar>::operator*(const Eigen::Rotation2D<Scalar>&) const [with _Scalar = double]’: /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/scan_matching/real_time_correlative_scan_matcher.cc:123:71: required from here /opt/eigen/Eigen/src/Geometry/Rotation2D.h:78:28: error: could not convert ‘(((Eigen::Rotation2D<double>::Scalar)((const Eigen::Rotation2D<double>*)this)->Eigen::Rotation2D<double>::m_angle) + ((Eigen::Rotation2D<double>::Scalar)other.Eigen::Rotation2D<double>::m_angle))’ from ‘Eigen::Rotation2D<double>::Scalar {aka double}’ to ‘Eigen::Rotation2D<double>’ { return m_angle + other.m_angle; } ^ [9/90] Building CXX object cartographe...dir/fast_correlative_scan_matcher.cc.o FAILED: /usr/bin/c++ -O3 -DNDEBUG -isystem /usr/include/eigen3 -isystem /home/exbot/cartographer/install_isolated/include -I. 
-I/home/exbot/cartographer/src/cartographer -I/home/exbot/cartographer/src/cartographer/cartographer/common/{Boost_INCLUDE_DIRS} -isystem /usr/include/lua5.2 -O3 -DNDEBUG -std=c++11 -Wall -Wpedantic -Werror=format-security -Werror=return-type -Werror=uninitialized -MMD -MT cartographer/mapping_2d/scan_matching/CMakeFiles/mapping_2d_scan_matching_fast_correlative_scan_matcher.dir/fast_correlative_scan_matcher.cc.o -MF "cartographer/mapping_2d/scan_matching/CMakeFiles/mapping_2d_scan_matching_fast_correlative_scan_matcher.dir/fast_correlative_scan_matcher.cc.o.d" -o cartographer/mapping_2d/scan_matching/CMakeFiles/mapping_2d_scan_matching_fast_correlative_scan_matcher.dir/fast_correlative_scan_matcher.cc.o -c /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/scan_matching/fast_correlative_scan_matcher.cc In file included from /opt/eigen/Eigen/Geometry:39:0, from /opt/eigen/Eigen/Dense:6, from /home/exbot/cartographer/install_isolated/include/ceres/internal/numeric_diff.h:40, from /home/exbot/cartographer/install_isolated/include/ceres/dynamic_numeric_diff_cost_function.h:70, from /home/exbot/cartographer/install_isolated/include/ceres/ceres.h:44, from /home/exbot/cartographer/src/cartographer/cartographer/common/math.h:25, from /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/probability_grid.h:28, from /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/scan_matching/fast_correlative_scan_matcher.h:33, from /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/scan_matching/fast_correlative_scan_matcher.cc:17: /opt/eigen/Eigen/src/Geometry/Rotation2D.h: In instantiation of ‘Eigen::Rotation2D<Scalar> Eigen::Rotation2D<Scalar>::operator*(const Eigen::Rotation2D<Scalar>&) const [with _Scalar = double]’: /home/exbot/cartographer/src/cartographer/cartographer/mapping_2d/scan_matching/fast_correlative_scan_matcher.cc:270:73: required from here /opt/eigen/Eigen/src/Geometry/Rotation2D.h:78:28: error: could not 
convert ‘(((Eigen::Rotation2D<double>::Scalar)((const Eigen::Rotation2D<double>*)this)->Eigen::Rotation2D<double>::m_angle) + ((Eigen::Rotation2D<double>::Scalar)other.Eigen::Rotation2D<double>::m_angle))’ from ‘Eigen::Rotation2D<double>::Scalar {aka double}’ to ‘Eigen::Rotation2D<double>’ { return m_angle + other.m_angle; } ^ [9/90] Building CXX object cartographe..._test.dir/laser_fan_inserter_test.cc.o ninja: build stopped: subcommand failed. <== Failed to process package 'cartographer': Command '/home/exbot/cartographer/install_isolated/env.sh ninja -j4 -l4' returned non-zero exit status 1 Reproduce this error by running: ==> cd /home/exbot/cartographer/build_isolated/cartographer && /home/exbot/cartographer/install_isolated/env.sh ninja -j4 -l4 Command failed, exiting. i have rebuilt many times but the errors are still have.My ROS is indigo,ubuntu 14.04 x86.But I have succeed to build cartographer in the other computer which has ROS indigo ,ubuntu 14.04 x64.plz help! Originally posted by yueweiliang on ROS Answers with karma: 21 on 2016-10-10 Post score: 0 Original comments Comment by gvdhoorn on 2016-10-11: I think this is specific enough to be solved by the Cartographer devs themselves. I recommend you report this to cartographer_ros/issues. Please report back if/when you resolve this. Comment by yueweiliang on 2016-10-11: I have get the reply from github that X86 is unsopported.But they also say that the errors may not be caused by X86 or X64.I decided to change to X64 to solve it ........ Comment by gvdhoorn on 2016-10-11: Could you provide a link to the github issue that you reported? Answer: Resolved here: https://github.com/googlecartographer/cartographer/issues/42 Originally posted by damonkohler with karma: 3838 on 2016-12-01 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25952, "tags": "ros, cartographer" }
N-puzzle program solved using A* search
Question: I coded the following program to solve the n-puzzle problem with n = 3. Are there any suggestions regarding readability and variable names?

import heapq
from random import shuffle
import math

n=3
fBoard = [1,2,3, 4,5,6, 7,8,0]
board = [i for i in range(9)]
shuffle(board)
print(board)
aStar()

class Board():
    def __init__(self,boardList,cost,parent):
        self._array = boardList
        self.heuristic = calcHeuristic(self._array)
        self.cost = cost
        self.totalCost = self.cost + self.heuristic
        self.parent = parent
        self.hashvalue = hash(tuple(self._array))

    def _printBoard(self):
        for var in range(len(self._array)):
            if var%3==0 and var!=0:
                print "\n",self._array[var],",",
            else:
                print self._array[var],",",

    def __hash__(self):
        return self.hashvalue

    def __eq__(self,other):
        return self._array == other._array

def aStar():
    pq = []
    cost = {}
    visited = {}
    start = Board(board,0,None)
    end = Board(fBoard,99,None)
    heapq.heappush(pq,(start.totalCost,start))
    while pq:
        tmp_tuple = heapq.heappop(pq)
        tmp_board = tmp_tuple[1]
        if tmp_board.heuristic == 0:
            end = tmp_board
            break
        index = tmp_board._array.index(0)
        x = index/3
        y = index%3
        listOfMoves = checkMove(x,y)
        for move in listOfMoves:
            moveBoard = tmp_board._array[:]
            moveIndex = move[0]*3 + move[1]
            moveBoard[index],moveBoard[moveIndex] = moveBoard[moveIndex],moveBoard[index]
            newBoard = Board(moveBoard,tmp_board.cost+1,tmp_board)
            new_cost = newBoard.totalCost
            if newBoard not in visited or new_cost < cost[newBoard]:
                cost[newBoard] = new_cost
                visited[newBoard] = 1
                newBoard.parent = tmp_board
                heapq.heappush(pq,(newBoard.totalCost,newBoard))
    var = end
    while var != start:
        print "\n"
        var._printBoard()
        var = var.parent
    print "\n"
    var._printBoard()

def manhattanDist(index,element):
    idx = fBoard.index(element)
    manhattan = 0
    fBoard_x = idx/3
    fBoard_y = idx%3
    x = index/3
    y = index%3
    manhattan += math.fabs(x-fBoard_x)
    manhattan += math.fabs(y-fBoard_y)
    return manhattan

def calcHeuristic(array):
    boardList = array
    heuristic = 0
    for var in boardList:
        x = var/3
        y = var%3
        if fBoard.index(var) != boardList.index(var):
            heuristic+=1
            heuristic+=manhattanDist(boardList.index(var),var)
    return heuristic

def checkMove(x,y):
    listOfMoves = [[x,y]]
    if(x+1<n):
        listOfMoves.append([x+1,y])
    if(x-1>=0):
        listOfMoves.append([x-1,y])
    if(y-1>=0):
        listOfMoves.append([x,y-1])
    if(y+1<n):
        listOfMoves.append([x,y+1])
    return listOfMoves

Answer: Are there any suggestions regarding readability and variable names?

You should follow PEP8, the official Python coding style guide. For example:

Put spaces around operators. Instead of var%3==0, write as var % 3 == 0
Put spaces after commas separating parameters. Instead of Board(fBoard,99,None), write as Board(fBoard, 99, None)
Use snake_case for naming variables and methods instead of camelCase
Use PascalCase for naming classes
What is an fBoard? It's not easy to guess, so that's not a good name.

Don't execute code in global scope

These lines are executed immediately when the script is imported, which is not good:

n=3
fBoard = [1,2,3, 4,5,6, 7,8,0]
board = [i for i in range(9)]
shuffle(board)
print(board)
aStar()

It would be better to move them into a main function, and call that function from a if __name__ == '__main__' guard, like this:

def main():
    n=3
    fBoard = [1,2,3, 4,5,6, 7,8,0]
    board = [i for i in range(9)]
    shuffle(board)
    print(board)
    aStar()

if __name__ == '__main__':
    main()

This will make your script more reusable, and it will help you avoid some obscure bugs that can happen by variable shadowing.

Get ready for Python 3

It's not too difficult to make this script work with Python 3 too. All it takes is adapting your print statements a little bit. In some cases it's as trivial as adding (...), for example:

print "\n"

Rewrite as:

print("\n")

When you don't want the automatic newline, for example here:

print "\n",self._array[var],",",

You can add the end='' parameter:

print("\n",self._array[var],",", end='')
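One caveat worth adding to the print() suggestion above: under Python 2, the function-call form with an end= keyword is only valid after a __future__ import. A minimal sketch of cross-compatible printing (print_row is a hypothetical helper, not part of the reviewed code):

```python
# In Python 2, print(..., end='') is a syntax error without this import;
# in Python 3 the import is a harmless no-op.
from __future__ import print_function

def print_row(values):
    # Print the values comma-separated, suppressing the automatic newline
    # until the row is complete.
    for v in values:
        print(v, ',', end=' ')
    print()

print_row([1, 2, 3])
```

With the future import at the top of the file, the same print calls run unchanged under both interpreters.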
{ "domain": "codereview.stackexchange", "id": 12080, "tags": "python, python-2.x, pathfinding, taxicab-geometry" }
How can I build something, like actual implementations of all these things I am learning in signal processing?
Question: I am an undergraduate electronics engineering student taking a signal processing course this semester. I have learned a lot of theoretical things in my lectures, such as how to shift or fold a signal, how to convolve 2 signals, how to take their Laplace or Z or Fourier transforms, but I want to build something useful in real life using all these concepts. All we do in our lab is just write simple programs in MATLAB to do these computations. I found out that effect pedals used for instruments such as guitars are basically signal processors. How and where can I learn to build something practical from all this theory I am learning in my signal processing classes? It is not just signal processing, but other classes too. Sometimes I feel like in electronics engineering all we do is learn theory and never build anything in real life, whereas CS students can build actual stuff in software. Can anyone tell me how I can build actual stuff from the courses I am taking in electronics engineering? Do I need to learn how to use an Arduino or a Raspberry Pi for that? Answer: I recognize your frustration. I graduated with little skill in software development, electronics assembly or acoustic construction, even though I had classes and a personal interest in topics that touched upon all of them. There are dev kits for DSP that are pre-configured to accept a signal input and output (e.g. audio, radio) and allow you to inject your DSP code. Perhaps tiny-dsp is suitable? I have not tried it myself. Googling «DIY guitar pedal» brought up a project that seems relevant to your question. Perhaps writing an app for iOS or Android would give you the hands-on experience that you want? It is possible to do realtime DSP in MATLAB. The portaudio project requires no expensive toolboxes. Surely something similar is possible in Python. I think that might be the low-threshold way to be more hands-on about theoretical MATLAB classes.
Doing practical work is quite different from theoretical work. Even the most basic theoretical concept tends to require a significant amount of work just to bring up. As such, it can be frustrating to know somewhat fancy theory but have to struggle to implement even baseline stuff. In the best case, solid theoretical and practical experience aid each other, in that one gives insight into the other. My education did not really give me that, but on good days I feel that my colleagues and employers have helped me towards that goal.
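As a concrete starting point along the guitar-pedal direction mentioned above, here is a minimal offline sketch of a pedal-style effect in plain NumPy: soft-clipping distortion applied to a synthetic tone. Tanh waveshaping is one common textbook choice, not how any particular pedal works; swapping in a real-time audio library later turns the same function into a live effect.

```python
import numpy as np

fs = 44100                                  # sample rate, Hz
t = np.arange(fs) / fs                      # one second of samples
signal = 0.5 * np.sin(2 * np.pi * 110 * t)  # a 110 Hz "guitar string"

def distort(x, drive=10.0):
    # Soft clipping: tanh compresses peaks smoothly, which adds the
    # odd harmonics responsible for the distorted timbre.
    return np.tanh(drive * x) / np.tanh(drive)

out = distort(signal)
```

Writing `out` to a WAV file and listening to it (e.g. with the standard-library `wave` module) is an instructive next step: the waveform's flattened peaks are directly audible as overdrive.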
{ "domain": "dsp.stackexchange", "id": 11885, "tags": "teaching" }