Creating rqt plugin
Question: This is my package: https://github.com/azhar92/GUI-for-AV.git When I open rqt and try to open the plugin from the drop-down menu using the following steps: cd ~/catkin_ws catkin_make source ~/catkin_ws/devel/setup.bash rqt --force-discover I can see the name of my package in the rqt GUI, but when I click it, I get the following error in the terminal output: [ERROR] [1482121854.496133919]: Failed to load nodelet [rqt_test1/MyPlugin_1] of type [rqt_test1/MyPlugin]: MultiLibraryClassLoader: Could not create object of class type rqt_test1::MyPlugin as no factory exists for it. Make sure that the library exists and was explicitly loaded through MultiLibraryClassLoader::loadLibrary() RosPluginlibPluginProvider::load_explicit_type(rqt_test1/MyPlugin) failed creating instance PluginManager._load_plugin() could not load plugin "rqt_test1/MyPlugin": RosPluginlibPluginProvider.load() could not load plugin "rqt_test1/MyPlugin" I think it has to do with the registration of the plugin. Any form of guidance will be helpful. Thank you. Plugin.xml: https://github.com/azhar92/GUI-for-AV/blob/master/plugin.xml Code: <library path="lib/librqt_test1"> <class name="rqt_test1/MyPlugin" type="rqt_test1::MyPlugin" base_class_type="rqt_gui_cpp::Plugin"> <description> 4 buttons to switch between 4 different modes. </description> <qtgui> <group> <label>Visualization</label> <statustip>Plugins related to visualization.</statustip> </group> <label>test1</label> <statustip>4 buttons to switch between 4 different modes.</statustip> </qtgui> </class> </library> Originally posted by Azhar on ROS Answers with karma: 100 on 2016-12-19 Post score: 0 Original comments Comment by gvdhoorn on 2016-12-19: "I think it has to do with the registration of the plugin" If you feel that is the case, then please include at least a snippet of code showing how you implemented that bit. This question will become worthless otherwise if/when your repository is removed/deleted/changed. 
Comment by Azhar on 2016-12-19: I have edited the question. Let me know if there are any other things you need. Thank you! I have not changed my repository! Comment by gvdhoorn on 2016-12-19: So where is the code where you actually register your plugin (or really: export it / make it known to the class loader)? Please include that in your question. Comment by AndyZe on 2016-12-19: From a quick glance, I don't see anything to cause that error. But you can delete this from CMakeLists: install(PROGRAMS scripts/${PROJECT_NAME}/gauge_script.py DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) Comment by Azhar on 2016-12-20: @AndyZelenak, I tried it, but it doesn't help. Same error ;( Answer: Compare your main.cpp to my_plugin.cpp. You don't have this line at the bottom: PLUGINLIB_DECLARE_CLASS(rqt_gauges, MyPlugin, rqt_gauges::MyPlugin, rqt_gui_cpp::Plugin) And you're missing this function (which seems important): void MyPlugin::initPlugin(qt_gui_cpp::PluginContext& context) So go back and spend some time fixing those obvious issues. I think we would be more motivated to help if you upvoted good answers, too. Originally posted by AndyZe with karma: 2331 on 2016-12-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 26521, "tags": "ros" }
Why are oceanic plates denser than continental plates?
Question: In the theory of tectonic plates, at a convergent boundary between a continental plate and an oceanic plate, the denser plate usually subducts underneath the less dense plate. It is well known that oceanic plates subduct under continental plates, and therefore oceanic plates are denser than continental plates. My question is why the oceanic plates are always denser than the continental plates. I'm aware that the difference in density can be attributed to the plates' differing compositions, but what I'm interested in is why these plates have different compositions in the first place, giving rise to their relative difference in densities. Answer: Ocean lithosphere (geophysical definition of crust + upper mantle that acts as a 'plate') is primarily of basaltic composition - the upper levels are basalt and the lower levels are gabbro. The top levels have been proven with boreholes, whilst the lower levels have been inferred from transform fault sampling and comparisons with ophiolites. This sequence is produced by partial melting of mantle peridotite at a fairly controlled rate. So much so that basalts formed in this way even have a specific composition, "MORB" (Mid-Ocean Ridge Basalt). In contrast, continental lithosphere is more complex and tends to be of a 'granitic' composition. This includes granites but can also include a lot of metamorphic rocks (e.g. gneiss) and sediments. Sediments are lower density anyway (high pore space), but so are quartz-rich rocks such as granites. The various processes that build continents tend to favour silica-rich compositions, resulting in this bulk "granitic" composition. For example, limited partial melting will initially produce high-silica, high-alkali melts. Erosion will tend to break down most common minerals before quartz - leaving quartz-rich sediments (hence sandstone is primarily quartz). 
Metamorphism of pelites (rocks rich in Al and Si) will progress from the initial mudstones & basalts through to gneisses & migmatites (which have a lot of quartz and feldspar). Migmatites are partially melted - and the melted bits are essentially granite. Basalt is denser than granite. On gravity surveys, basalts and gabbros will appear as positive anomalies, whilst granites and sedimentary basins will appear as negative anomalies.
{ "domain": "earthscience.stackexchange", "id": 54, "tags": "geology, geophysics, plate-tectonics" }
Does weight influence Earth's spin?
Question: If enough weight were put on a particular point on Earth's surface, disturbing the balance between hemispheres, is it possible that the Earth's spin could change like an unbalanced spinning top? Answer: The Earth does spin like an unbalanced top. The Earth's rotation axis is not fixed. It instead moves in a complex manner due to a combination of external torques exerted by the Moon and Sun, a torque-free nutation due to the oblate shape of the Earth, and also due to changes on and in the Earth. The torque-induced motions are called precession and nutation, distinguished by period. The largest and slowest of these motions is the axial precession. This causes the Earth's rotation axis to trace out a cone over the course of 26000 years. (source: nasa.gov) The torque-induced nutations are also cyclical motions induced by the Moon and the Sun. These are much smaller in magnitude and have much shorter periods. The largest of these has a magnitude of about 20 arc seconds and a period of 18.6 years. All other nutation terms have much smaller magnitudes and shorter periods. The torque-free nutation would have a period of about 305 days if the Earth were solid. The oceans, the atmosphere, and the outer core alter this. The Chandler wobble has a period of about 433 days and a magnitude of less than an arc second. Because the Chandler wobble isn't as predictable as precession and nutation, it's lumped into a catch-all category called "polar motion." The redistribution of water over the course of a year (e.g., snow in Siberia in the winter but not in the summer) results in a yearly component of the polar motion. There are lots and lots of other factors, all small. Polar motion is observed after the fact.
{ "domain": "astronomy.stackexchange", "id": 1573, "tags": "earth, amateur-observing, fundamental-astronomy" }
How and why does Recrypt function work?
Question: The general approach presented by Craig Gentry in 2009 to create a fully homomorphic encryption system is roughly the following: Create a scheme that can evaluate some functions (increasing the noise in the ciphertext). Change your decryption function to be one of these functions that can be evaluated. Use a function Recrypt to somehow decrypt and re-encrypt the ciphertext to eliminate the noise introduced by the homomorphic operations. The idea seems wonderful, but I don't understand well how and why this Recrypt function works... For example, in section 4.3 of the paper Computing Arbitrary Functions of Encrypted Data, he explains it like this: Imagine that we have a list of public keys $p_1, p_2, .. $ and a private key $s_1$; then, we encrypt $m$ using $p_1$, generating $c_1$. Then, we encrypt each bit of $s_1$ using $p_2$, generating a vector of ciphertexts $\overline{s_1}$. Then, Recrypt encrypts each bit of $c_1$ using $p_2$, generating the array $\overline{c_1}$, and evaluates the decryption circuit $D$ on $\overline{c_1}$, $\overline{s_1}$ and $p_2$. It seems like Recrypt tries to decrypt $\overline{c_1}$ with a wrong key (since it was encrypted with $p_2$, I was expecting something like $s_2$...). Could someone here just try to explain how this Recrypt works? I don't know what I'm missing... If my question is unclear, please let me know. Thanks. 
Answer: At a high-level (ignoring the messier details), recryption that boosts bounded-depth homomorphism to unbounded-depth homomorphism works as follows: Suppose you have a public-key "somewhat-homomorphic" encryption scheme with procedures: $(PK, SK) \leftarrow Gen(1^{secparam}; coins)$: generates encryption/decryption keys $c \leftarrow Enc(PK, m; coins)$: encrypts message $m$ as ciphertext $c$ under key $PK$ $m \leftarrow Dec(SK, c)$: decrypts ciphertext $c$ using key $SK$ to message $m$ $c^* \leftarrow Eval(C, c_1, ..., c_k)$: given ciphertexts $c_1, ..., c_k$ and a circuit description $C$, computes $c^* = Enc(C(m_1, ..., m_k))$ where "somewhat-homomorphic" means $Eval$ can only correctly (and succinctly) compute ciphertexts $c^*$ when the circuit $C$ has bounded depth (in some well-defined sense). Correctness just means that w.h.p. over honest $(PK, SK) \leftarrow Gen$, for all $C, \{m_i\}_i$, we have $C(m_1, ..., m_k) = Dec(SK, Eval(C, Enc(PK, m_1; coins_1), ..., Enc(PK, m_k; coins_k)))$. I.e. that if you use the scheme 'honestly,' you get correct decryption of (possibly $Eval$'d) ciphertexts. That said, the observation is that the $Dec$ procedure, when written as a circuit, is a bounded-depth computation. Therefore, we can run $Eval$ on $C = \langle Dec\rangle$ when given (say) $SK_1$ and $Enc(PK_1, m)$ with BOTH encrypted under $PK_2$ To use this, we augment the $Gen$ procedure to first honestly generate two key-pairs $(PK_1, SK_1), (PK_2, SK_2)$. Then, $Gen$ creates a "recryption" key $RK_{1\rightarrow 2} = Enc(PK_2, SK_1)$ -- that is, the encryption of the key $SK_1$ under $PK_2$. The scheme begins with the keys above, messages $\{m_i\}$ and a circuit $C$, and first creates ciphertexts $c^{(1)}_i = Enc(PK_1, m_i)$. In order to recrypt, (conceptually) we can then doubly-encrypt the $\{c^{(1)}_i\}$ under $PK_2$. That is, create $c^{(2)}_i = Enc(PK_2, c^{(1)}_i) = Enc(PK_2, Enc(PK_1, m_i))$. 
Then, under key $PK_2$, we perform (for each $i$) $Eval(\langle Dec\rangle, RK_{1\rightarrow 2}, c^{(2)}_i)$ obtaining a ciphertext $c^*_i$. By the correctness of $Eval$ and $Dec$, we have $c^*_i = Enc(PK_2, C(first\_plaintext, second\_plaintext)) = Enc(PK_2, Dec(SK_1, c^{(1)}_i)) = Enc(PK_2, m_i)$. The 'messier details' in fact show that these $\{c^*_i = Enc(PK_2, m_i)\}$ are "fresh" ciphertexts under $PK_2$, meaning we have the full "bounded-depth" of $Eval$ available to us. Therefore, if $Eval$ can support at least the depth of the $Dec$ circuit, plus one, then you are able to perform unbounded-depth homomorphic computation (by further assuming circular security, and posting both $RK_{1\rightarrow 2}$ and $RK_{2\rightarrow 1}$, then toggling between the two keys with each recryption). In other words, you compute one step of the computation of some given circuit $C$, then you recrypt, and repeat. P.S. If you go through significantly more effort (involving program obfuscation techniques), you can also obtain FHE without the circular security assumption. See: http://eprint.iacr.org/2014/882
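The bounded-depth limitation that makes Recrypt necessary can be seen concretely in a toy scheme. The sketch below is loosely modeled on the "somewhat homomorphic" integer scheme of van Dijk-Gentry-Halevi-Vaikuntanathan; the parameters are toy-sized and wildly insecure (chosen here purely for illustration). It shows ciphertext noise growing under homomorphic multiplication until decryption breaks:

```python
import random

# Toy symmetric "somewhat homomorphic" scheme over the integers.
# Secret key: a large odd modulus p. A bit m is encrypted as
# c = q*p + 2*r + m, and decrypts as (c mod p) mod 2 while 2r+m < p.
p = random.randrange(10**6, 2 * 10**6) | 1   # secret odd modulus (toy size)

def enc(m, noise=5):
    q = random.randrange(10**8, 10**9)
    r = random.randrange(1, noise)           # small fresh noise
    return q * p + 2 * r + m

def dec(c):
    return (c % p) % 2                       # correct only while noise < p

a, b = enc(1), enc(1)
assert dec(a + b) == 0    # homomorphic XOR: 1 + 1 = 0 (mod 2)
assert dec(a * b) == 1    # homomorphic AND, but noise gets multiplied

# Repeated multiplication multiplies the noise terms; once the noise
# exceeds p, (c mod p) no longer equals the noise and decryption fails.
c = enc(1)
depth = 0
while dec(c) == 1 and depth < 50:
    c = c * enc(1)       # AND with a fresh encryption of 1
    depth += 1
# Decryption of this all-ones AND chain goes wrong after a modest depth.
```

Recrypt's job is exactly to reset this noise before it crosses the threshold: by homomorphically evaluating the (bounded-depth) decryption circuit, it produces a fresh low-noise ciphertext of the same plaintext, which is what turns bounded-depth homomorphism into unbounded-depth homomorphism.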
{ "domain": "cstheory.stackexchange", "id": 3033, "tags": "homomorphic-encryption" }
Relativistic transformation of $c^2/v$
Question: Everybody knows the relativistic transformation of a velocity $v$, here in one dimension, $$v'=\frac{v+w}{1+\frac{w}{c^2} v}.$$ But then this implies that $$\frac{c^2}{v'} = \frac{\frac{c^2}{v}+w}{1+\frac{w}{c^2}\frac{c^2}{v}},$$ i.e. the same transformation law for $v$ and for $c^2/v$, to hammer the point home! I had never noticed that before. What is strange is that for a speed $|v|<c$, the speed $\frac{c^2}{|v|}>c$ cannot be that of any particle. It could be a group velocity but that does not shed any light for me. It can't be just a coincidence, can it? What is the physical meaning of this "reciprocity"? In order to connect more closely with @robphy's answer, I could have formulated my question as follow. Given two inertial observers Bob and Carol assigning respective coordinates $(x^0, x^1)$ and $(y^0, y^1)$ to the same event, the transformation mapping a velocity with respect to Bob to a velocity with respect to Carol also maps (i) the "slope" $x^1/x^0$ onto $y^1/y^0$, and (ii) the slope $x^0/x^1$ onto $y^0/y^1$. This is natural for (i) as this slope can transparently be interpreted as the velocity of a signal which was at position $x^1=0$ at time $x^0=0$ but it comes as a surprise for (ii). Answer: Under a boost by relative-spatial-velocity $w=v_{CB}=\tanh\omega$, Bob's 4-velocity vector is transformed into Carol's 4-velocity vector. Call $\omega$ the relative-rapidity between the [timelike] 4-velocities. The [spatial-]velocity transformation $$v_{CA}=\frac{v_{BA}+v_{CB}}{1+v_{BA}v_{CB}}$$ provides her spatial-velocity [the slope of Carol's 4-velocity vector] $v'=v_{CA}=\tanh(\theta+\omega)$ in terms of the Bob's spatial-velocity $v=v_{BA}=\tanh\theta$ and the relative spatial velocity $w=v_{CB}=\tanh\omega$. That boost also transforms Bob's "space axis" [with slope $1/v_{BA}$] into Carol's "space axis" [with slope $1/v_{CA}$]. 
(This is true because, in Minkowski spacetime geometry, two directions are perpendicular in that geometry when the product of the slopes is +1... to be contrasted with the Euclidean case where the product is $-1$. Note that by taking the product of the two formulas in the OP the right side simplifies.) Although, technically speaking, the angle between spacelike lines isn't the rapidity... these spacelike lines are special because they are correspondingly perpendicular to the 4-velocities in the problem (that of Alice [not shown] and of Bob and Carol) and coplanar with all of them. UPDATE: Here is an algebraic calculation to support the spacetime diagram above. I use signature $(+,-)$, where my first component is the time component. Let $\hat V=\left(\begin{array}{c} \cosh{B} \\ \sinh{B} \end{array}\right)$ be Bob's 4-velocity (a timelike vector). His spatial velocity $v=\rm{slope}=\frac{\rm spatial\ component}{\rm temporal\ component}=\frac{\sinh B}{\cosh B}=\tanh{B}$. Let $M=\gamma\left(\begin{array}{cc} 1 & \beta \\ \beta & 1 \end{array}\right)$ be a boost that maps Bob's 4-vector to Carol's 4-vector. (The boost velocity $\beta$ corresponds to $w$ in the OP.) I am intentionally mixing notations to distinguish the boost components from the 4-velocity components. So, we get Carol's 4-velocity \begin{align} \hat V'&=M\hat V\\ \hat V'&= \gamma\left(\begin{array}{cc} 1 & \beta \\ \beta & 1 \end{array}\right) \left(\begin{array}{c} \cosh{B} \\ \sinh{B} \end{array}\right)\\ &= \gamma\left(\begin{array}{c} \cosh{B} + \beta\sinh{B} \\ \beta\cosh{B}+\sinh{B} \end{array}\right) = \left(\begin{array}{c} \cosh{C} \\ \sinh{C} \end{array}\right)\\ \end{align} $\hat V'$ has slope $$v'=\tanh{C}=\frac{\beta\cosh{B}+\sinh{B}}{\cosh{B} + \beta\sinh{B}} =\frac{\beta+\tanh{B}}{1+\beta\tanh{B}}=\frac{\beta+v}{1+\beta v},$$ the velocity transformation (the OP's eq. 1). Next... 
Let $\hat V_{\perp}=\left(\begin{array}{c} \sinh{B} \\ \cosh{B} \end{array}\right)$ be Bob's unit-x vector (a spacelike vector orthogonal to his 4-velocity $\hat V$). It has slope $v_{\perp}=\frac{\cosh B}{\sinh B}=\coth B=\frac{1}{v}$. The boost will map this vector to Carol's unit-x vector $\hat V_{\perp}'$ (a spacelike vector orthogonal to Carol's 4-velocity $\hat V'$). So, \begin{align} \hat V_{\perp}'&=M\hat V_{\perp}\\ \hat V_{\perp}'&= \gamma\left(\begin{array}{cc} 1 & \beta \\ \beta & 1 \end{array}\right) \left(\begin{array}{c} \sinh{B} \\ \cosh{B} \end{array}\right)\\ &= \gamma\left(\begin{array}{c} \sinh{B} + \beta\cosh{B} \\ \beta\sinh{B}+\cosh{B} \end{array}\right) \end{align} $\hat V_{\perp}'$ has slope $$v_{\perp}'=\frac{\beta\sinh{B}+\cosh{B}}{\sinh{B} + \beta\cosh{B}} =\frac{\beta+\coth{B}}{1+\beta\coth{B}}=\frac{\beta+v_{\perp}}{1+\beta v_{\perp}}=\frac{\beta+(\frac{1}{v})}{1+\beta (\frac{1}{v})},$$ call it the spacelike-slope transformation (the OP's eq. 2). For completeness, we should show that $v_{\perp}'=\frac{1}{v'}$. \begin{align} v_{\perp}' &\stackrel{?}{=}\frac{1}{v'}\\ &\stackrel{?}{=}\frac{1+\beta v}{\beta + v}\\ &\stackrel{\surd}{=}\frac{(\frac{1}{v})+\beta }{\beta (\frac{1}{v}) +1}\\ \end{align} ...and we are done. In my opinion, the PHYSICAL interpretation is that it is the slope of the observer's X-axis. (One can probably find other interpretations... but at the root of it... it is this slope.) UPDATE #2 (in response to the OP's update and comments): In a Lorentz boost, we can see how the 4-velocity and spatial-axis of an observer transform in a complementary way by viewing a "light-clock diamond" on "rotated graph paper". This is taken from a blog entry I contributed to at https://www.physicsforums.com/insights/relativity-rotated-graph-paper/ This is based on my recent paper (“Relativity on Rotated Graph Paper”, Am. J. Phys. 84, 344 (2016); http://dx.doi.org/10.1119/1.4943251 ). On these diagrams, time runs upwards. 
Light-Clock Diamonds are traced out by the spacetime paths of light-signals in a longitudinal light-clock. The worldlines of the mirrors of the light-clock are not shown, but are implied by the reflection events. In each tick of the light-clock, these events are "spatial" according to the observer carrying that light-clock. That is, that direction is Minkowski-perpendicular to that observer's 4-velocity. Visually, this means that the diagonals of a light-clock diamond are Minkowski-perpendicular to each other. Under a boost, Alice's light-clock diamond must transform into Bob's light-clock diamond, which must have edges parallel to the light-cone (to preserve the speed of light) and must have area preserved (since the Lorentz boost has determinant equal to one). [The area turns out to be the square-interval of OF.] Geometrically, the tip of the 4-velocity travels along the unit-hyperbola centered at the origin. [This ensures that the area of the diamond is preserved]. At the intersection point, the tangent to the hyperbola is Minkowski-perpendicular to the 4-velocity [a unit radius vector]. This tangent is parallel to the spacelike-diagonal of the light-clock diamond. Thus, diagonal YZ is Minkowski-perpendicular to diagonal OF.
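The "reciprocity" in the question is also easy to confirm numerically. This small check (the sample values of $v$ and $w$ are arbitrary, with $c=1$) verifies that the superluminal slope $c^2/v$ obeys the same composition law as $v$ itself:

```python
# Numeric check: v and c^2/v transform under the same velocity-addition law.
c = 1.0
v, w = 0.6, 0.3   # arbitrary subluminal sample values

def boost(u, w, c=1.0):
    # one-dimensional relativistic velocity addition
    return (u + w) / (1 + w * u / c**2)

v_prime = boost(v, w)            # transformed particle velocity
u_prime = boost(c**2 / v, w)     # transform the "reciprocal" slope c^2/v

# The transformed reciprocal slope equals the reciprocal of the
# transformed velocity, exactly as claimed in the question.
assert abs(u_prime - c**2 / v_prime) < 1e-12
```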
{ "domain": "physics.stackexchange", "id": 44006, "tags": "special-relativity, velocity, inertial-frames, lorentz-symmetry" }
Examples of cryptographic methods using outside randomness
Question: Most cryptographic protocols, like pseudorandom number generators, run only on "internal" information: that is, you set a seed and the next state is a function of the previous state, etc. I am wondering if there are any examples of cryptographic protocols that make use of "external" randomness or information, which is assumed to be known by an adversary? For example, is there a PRNG or block cipher that has some "injection step" where it pulls in some public source of information to assist in the encryption? The only example I can come up with is an IV for AES. Answer: A salt or IV or nonce or public key all meet your requirements.
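As a concrete illustration of the salt case, here is a minimal sketch using only Python's standard library: the salt fed into PBKDF2 is public "external" randomness (an adversary may see it), yet it still forces distinct derived keys and per-salt attacker work:

```python
import hashlib
import os

password = b"correct horse battery staple"   # the only secret input

# The salts are public: they can be stored next to the derived key.
salt_a = os.urandom(16)
salt_b = os.urandom(16)

# Same secret, different public randomness -> different derived keys.
key_a = hashlib.pbkdf2_hmac("sha256", password, salt_a, 100_000)
key_b = hashlib.pbkdf2_hmac("sha256", password, salt_b, 100_000)

assert key_a != key_b
```

The design point is that the salt's value need not be secret, only unpredictable enough to be distinct: its job is to prevent precomputation (e.g. rainbow tables) across users, not to hide information.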
{ "domain": "cs.stackexchange", "id": 19222, "tags": "cryptography, pseudo-random-generators" }
Face recognition on tilted images
Question: From what I learned, Viola-Jones needs images with faces facing front. Tilting the head a few degrees to either side can botch the algorithm. Is it possible to take an image, make several rotated copies of it, and run the algorithm on each one? This limitation sounds like only a minor nuisance to me. Is there a stronger reason for the limitation? Answer: Tilting your head a few degrees to the side doesn't cause Viola-Jones face detectors to fail. They are more robust than that. I suggest trying some experiments to see for yourself. As an aside: It's not always possible to rotate an image to the extent one might like. You have a 2D representation of a 3D object. As an extreme case, imagine trying to take a photograph and simulate rotating the face by 180 degrees around the vertical axis. You can't, because that would require knowing what the back of their head looks like. Even tilting around a horizontal axis has limits; imagine trying to tilt by 180 degrees -- which would require simulating the effect of your hair pointing in a different direction. So while it may be possible to simulate rotation to a limited extent, there are limits to what kind of synthetic rotation is possible.
{ "domain": "cs.stackexchange", "id": 10992, "tags": "image-processing, facial-recognition" }
Why do an eagle’s feet look ridged?
Question: Why are the knuckles of an eagle's toes "ridged"? (I found these pictures to explain. One was from PBS and the other is one I found on Pinterest): I circled the parts of the toes/feet I meant in red (from PBS): Answer: The bubbly structures on the dorsal sides of the digits are scutes. Birds retained them from their dinosaurian ancestors. Here is a picture of similar structures on an alligator foot:
{ "domain": "biology.stackexchange", "id": 6328, "tags": "ornithology" }
Oxidation State Of Propene
Question: I was just trying to understand the oxidation states of organic compounds. A more electronegative atom than carbon increases its oxidation state by 1, and a less electronegative atom decreases its oxidation state by 1. Consider the molecule propene $\ce{CH3-CH=CH2}$: the oxidation state of the left carbon is -3, that of the middle one is -1, and that of the right one is -2? So, what is the oxidation state of carbon in propene? The same complexity arises in 2-propyne or any such molecule. Answer: The very idea of oxidation states is not all that useful within the realm of organic chemistry, so you may just as well leave it at the door. That being said, it still can be used, and your question concerning the oxidation state of carbon in propene has a meaningful answer, which you gave yourself: the oxidation state of the methyl carbon is -3, that of the middle one is -1, and that of the methylene one is -2. (There is no such thing as 2-propene, but that's another story.) Asking for anything above that is pretty much like pointing your finger at three random guys in the street and asking what is their name. What are they supposed to answer, if they are not relatives and don't have a name common to all three? Well, numbers are not quite like names, so you may calculate an average... but I guess you already see why the organic chemists tend not to think in terms of oxidation states.
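The bookkeeping described above can be written out mechanically. In this small sketch (the helper name is mine, not standard), each C-H bond contributes -1, each bond to a more electronegative atom +1, and C-C bonds contribute 0:

```python
# Oxidation state of a carbon atom from its bond counts:
# bonds to H lower it by 1 each, bonds to more electronegative
# atoms raise it by 1 each, and C-C bonds contribute nothing.
def carbon_oxidation_state(n_hydrogen, n_electronegative=0):
    return n_electronegative - n_hydrogen

# Propene CH3-CH=CH2: the three carbons carry 3, 1, and 2 hydrogens.
states = [
    carbon_oxidation_state(3),  # CH3 carbon
    carbon_oxidation_state(1),  # =CH- carbon
    carbon_oxidation_state(2),  # =CH2 carbon
]
# states == [-3, -1, -2], matching the values in the question;
# the average over the three carbons happens to be -2.
```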
{ "domain": "chemistry.stackexchange", "id": 10369, "tags": "oxidation-state" }
Why does static friction point up the ramp for an object that is rolling without slipping up an inclined plane?
Question: A solid sphere rolls without slipping up and then down an incline. Why does the static friction force point up the incline both when the object is rolling up and also when it's rolling down? I know it's necessary for the force to point up when the object is rolling up, because then it's the only source of torque on the object, and for the object to roll back down the incline, there must be a torque which acts in the opposite direction of the initial angular velocity. However, I don't understand why this is the case. Why does the force need to point up the incline to create the necessary torque, particularly when the object is rolling back down the ramp? Answer: "If someone could explain this intuitively..." I am not sure what exactly you mean by this. The FBD for a body rolling without slipping on an incline is as follows. On first meeting this sort of system, the fact that the frictional force is in the same direction whether the body is rolling up the incline or down the incline is "counterintuitive". I will go through the usual reasons given for this behavior later, but consider this. You set up an experiment in which the motion of a body rolling down an incline is videoed. You then process the video and produce another video in which time is reversed and the body appears to be rolling up the incline. So you now have two videos. If you then give the two videos to a friend who was not part of the experiment and the processing, and ask them which is the original one and which is the processed one, would they be able to tell you? Assuming that the processing is perfect, the answer is "no", and that is because the system of forces acting on the body does not change if the direction of motion of the body changes. The no-slipping condition, $v_{\rm CoM} = r\,\omega$, requires that the linear velocity of the centre of mass, $v_{\rm CoM}$, of the body and the angular velocity, $\omega$, change in tandem. 
Friction acts in such a direction as to oppose, or try to prevent, relative motion. In the rolling-down-the-slope case, just suppose that the ball was not rolling. There would be a friction force up the slope so that the final state is the ball rolling down the slope without slipping. You could say that the frictional force does this in two ways: The frictional force opposes the force down the slope, thus reducing the linear acceleration so that the linear speed down the slope is not as great as it would have been without the frictional force. The frictional force provides a torque about the centre of mass of the ball which produces an angular acceleration of the ball and increases its angular speed. These two effects produce a convergent result - the no-slip condition. Once that no-slip condition is reached, the ball must undergo just the right amount of linear acceleration down the slope and angular acceleration about the centre of mass of the ball to maintain the no-slip condition. The frictional force does that. Going up the hill, the situation is the same, with the frictional force decreasing the linear acceleration along the slope whilst providing a torque to change the angular speed to maintain the no-slip condition. To appreciate what is happening in this case, just imagine what would happen if a ball spinning counter-clockwise was placed on the track. The frictional force would have to accelerate the ball up the slope whilst slowing down its speed of rotation. Eventually the no-slip condition is reached, and then the frictional force is there to maintain that condition of no relative movement between the slope and the ball. So in each case, what the frictional force does is try to reach a no-slip condition (no relative movement between the ball and the slope) and then maintain that condition.
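The force-and-torque balance behind this can be checked with a few lines of arithmetic. For a solid sphere ($I = \frac{2}{5}mr^2$), combining Newton's law along the slope with the torque equation under the no-slip constraint fixes both the acceleration and the friction force; the numbers below are a sketch with an assumed 30-degree incline:

```python
import math

def rolling_sphere(m, theta, g=9.81):
    # Solid sphere, I = (2/5) m r^2, rolling without slipping (a = r*alpha).
    # Along the slope (down-slope positive): m*a = m*g*sin(theta) - f
    # Torque about the centre:  f*r = I*alpha = (2/5)*m*r*a  ->  f = (2/5)*m*a
    a = g * math.sin(theta) / (1 + 2 / 5)   # = (5/7) g sin(theta), down-slope
    f = (2 / 5) * m * a                     # = (2/7) m g sin(theta), up-slope
    return a, f

a, f = rolling_sphere(m=1.0, theta=math.radians(30))
# Note that neither a nor f depends on the sphere's current velocity:
# the friction force is the same (and up the slope, f > 0) whether the
# sphere is momentarily rolling up or rolling down.
```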
{ "domain": "physics.stackexchange", "id": 95339, "tags": "newtonian-mechanics, rotational-dynamics, friction, free-body-diagram" }
Are my observations about the differences between average velocity and speed, in 1 dimension, correct?
Question: I've been struggling to understand the differences between average velocity and average speed in 1 dimension; by doing some exercises I think I have figured it out, but I want to check if I'm correct. This describes the movement of a particle through 4 positions: $P_1, P_2, P_3, P_4$. Each position has a respective coordinate $x_1, x_2, x_3, x_4$, and a respective moment in time $t_1, t_2, t_3, t_4$ at which the particle is located in that position. Let's talk about the movement from $P_1$ to $P_3$: The x component of average velocity is $\frac{x_3 - x_1}{t_3 - t_1}$. However, average speed seems to be $\frac{|x_2-x_1|+|x_3-x_2|}{|t_2-t_1|+|t_3-t_2|}$. I have some questions about the latter. As you may have seen, I used absolute value signs. Regarding the norm of the vectors (which describes the distance between the points), it is because I came across an exercise in which average speed, initial and final coordinates, and initial time were given, and I had to solve for the final time. The final position was a negative number and the initial one was positive, so I ended up getting a negative distance, which resulted in a negative final time, something which cannot happen. So I concluded that average speed is about the movements in absolute value, which is reinforced by the fact that it's a scalar. I'm fairly sure putting absolute values to find the distances between points is correct, but I could be wrong, so is this correct? The ones I'm not so sure about are the absolute value signs on time, because up to this point I have not done an exercise with negative time involved, and I don't know if such exercises exist; if they didn't, the absolute value signs would be unnecessary. Do you know if I have to put them? Will I find exercises with negative time involved? About the movement from $P_1$ to $P_2$: it seems the x component of average velocity is $\frac{x_2 - x_1}{t_2 - t_1}$, and average speed is given by $\frac{|x_2 - x_1|}{|t_2 - t_1|}$. Is this correct? 
Answer: It seems the x component of average velocity is $\frac{x_2 - x_1}{t_2 - t_1}$, and average speed is given by $\frac{|x_2 - x_1|}{|t_2 - t_1|}$. Is this correct? Yes, this is basically the definition. Those vertical bars should (in this case) not be thought of as "absolute value", but "magnitude". The magnitude of a velocity that is "five to the left" is five. Yes, that is the absolute value in that case, but it isn't always. For instance, the magnitude of the velocity "5 to the left and 2 up" is about 5.4 - do you see why? All of this is made easier if you always consider the motion/measurement/whatever along each axis separately. So in that case, instead of saying the velocity is "5 to the left and 2 up", you would write (-5, 2), and then your speed along each axis is (5, 2). And then, hey presto, you're back to the magnitude being simply the absolute value of the numbers again. And this is why we write |v| to mean "the magnitude of the vector", because ultimately it's the same.
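A quick numeric example with made-up waypoints shows the two definitions diverging when the particle reverses direction. Note that since the clock readings increase along the trip, the time differences are already positive and need no absolute value signs, which answers the question about $|t_2-t_1|$:

```python
# Waypoints (t, x) of a 1-D trip that reverses direction: P1 -> P2 -> P3.
points = [(0.0, 0.0), (2.0, 4.0), (5.0, -2.0)]
(t1, x1), (t2, x2), (t3, x3) = points

# Average velocity: net displacement over elapsed time (sign matters).
avg_velocity = (x3 - x1) / (t3 - t1)                   # (-2 - 0) / 5 = -0.4

# Average speed: total path length over elapsed time (always >= 0).
avg_speed = (abs(x2 - x1) + abs(x3 - x2)) / (t3 - t1)  # (4 + 6) / 5 = 2.0
```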
{ "domain": "physics.stackexchange", "id": 53717, "tags": "kinematics, velocity, speed" }
Intuitive Derivation of Length Contraction in Special Relativity via Thought Experiment
Question: I am trying to intuitively derive length contraction in special relativity using a thought experiment, without relying on Lorentz transformations. My aim is to obtain a derivation similar to how time dilation is derived using the classic light clock thought experiment. However, I have not been able to find or create a similarly intuitive derivation for length contraction. I understand that deriving length contraction is challenging because it involves two points with different worldlines, the concept of simultaneity, and a precise definition of 'measuring' distance between these points. Here's the gedankenexperiment I've devised, but for which I haven't found a satisfactory answer: A rod with a photon emitter at its center is placed on a 2-dimensional plane. At a certain instant, the emitter sends photons in opposite directions. The plane is made of a material that records an imprint when the photons reach the rod's endpoints. The rod's length is determined by measuring the distance between the imprinted spots. When the rod is at rest with respect to the plane, triggering the emitter records two spots at a distance we'll call the proper length of the rod, $L_0$. Now, consider the rod moving at velocity $v$ with respect to the plane, where $v$ is parallel to the direction defined by the rod's endpoints. What is the distance between the imprinted spots when the photon emitter is triggered? I expect the answer to be $L_0 \gamma^{-1}$, as length contraction would imply, but I have been unable to derive this result. Below is my attempt: Suppose the rod is moving to the right and the device trigger event takes place at $(t,x) = (0,0)$ in the rest frame of the plane. We now find the points at which light reaches each endpoint, which we do by solving a simple system of equations for each point. Let $x_A$ and $x_B$ be the positions of the right and left endpoints respectively. Then For $A$: \begin{equation} \left. 
\begin{aligned} x_A &= c t_A \\ x_A - L/2 &= v t_A \end{aligned} \right\} \ \rightarrow \ x_A = \frac{L}{2} \frac{1}{1-\beta} \end{equation} For $B$: \begin{equation} \left. \begin{aligned} x_B &= - c t_B \\ x_B + L/2 &= v t_B \end{aligned} \right\} \ \rightarrow \ x_B = -\frac{L}{2} \frac{1}{1+\beta} \end{equation} where $\beta = v/c$, $t_A$ and $t_B$ are the times it takes for the photons to reach the $A$ and $B$ endpoints respectively, and $L$ is the length of the rod in the plane frame. Then the difference between $x_A$ and $x_B$, which determines the positions of the spots imprinted on the plane, is \begin{equation} \Delta x = x_A - x_B = L \gamma^2 \end{equation} I do not know how to make sense of this. If I assume ad hoc length contraction of the rod ($L = L_0\gamma^{-1}$) then the result would be $\Delta x = L_0 \gamma$, which doesn't make sense to me. It implies that the spots are further apart when the rod is moving, suggesting some kind of length dilation (if we define length by the distance between spots). Can anyone help clarify this issue? Answer: In your thought experiment, let's say $F_{R}$ is the reference frame of the rod and $F_{P}$ is the reference frame of the background plane. Then the difference between $x_A$ and $x_B$ which determines the positions of the spots imprinted on the plane is \begin{equation} \Delta x = x_A - x_B = L \gamma^2 \end{equation} I do not know how to make sense of this. I get stuck in circular reasoning. If I assume ad hoc that $L = L_0\gamma^{-1}$, then the result would be $\Delta x = L_0 \gamma$, which doesn't make sense to me. It implies that the spots are further apart when the rod is moving, suggesting some kind of length dilation (if we define length by the distance between spots). In $F_{P}$, the left endpoint gets marked first, and then the right endpoint gets marked later. This is clearly not what you want, and the distance between the two marks will not be representative of the rod's length in any reference frame. 
If you insist on your thought experiment, you have to mark the endpoints at the same instant with respect to simultaneity of $F_{P}$. This is the challenge. Here is how I would proceed to create a thought experiment. I would take the light clock thought experiment and flip the light clock 90 degrees so that it is longitudinal to its motion. So to put it another way, suppose we have a rod moving to the right with velocity $v$ in $F_{P}$. At time $t=0$ in $F_{P}$, the left endpoint is at $(t, x) = (0, 0)$ and the right endpoint is at $(t, x) = (0, L)$. At $t=0$, suppose the left endpoint triggers and releases light to the right. The light signal reaches the right endpoint, gets reflected, and goes back to the left endpoint. The total time elapsed with respect to $F_{P}$'s time is $$ \Delta t = \frac{L}{c-v} + \frac{L}{c+v} = \frac{2cL}{c^{2}-v^{2}} = \frac{2L}{c}\gamma^{2}. $$ If we permit ourselves to assume time dilation, then we know that the amount of time that elapses in $F_{R}$'s frame is $$ \Delta t' = \Delta t/\gamma. $$ But reasoning in $F_{R}$ also reveals that the time elapsed (of the total roundtrip of the light) is $$ \Delta t' = \frac{2L_{0}}{c} $$ where $L_{0}$ is the proper length of the rod. Then by putting the equations together we find $L_{0} = L\gamma$, or equivalently $L = L_{0}/\gamma$.
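To double-check the algebra in the round-trip argument above, here is a quick numeric sketch; the values ($c = 1$ units, $v = 0.6$, $L_0 = 1$) are assumed for illustration:

```python
import math

# Numeric check of the longitudinal light-clock derivation.
c = 1.0
v = 0.6
L0 = 1.0
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Length in the plane frame F_P, as derived: L = L0 / gamma.
L = L0 / gamma

# Round-trip time of the light signal in F_P:
dt_plane = L / (c - v) + L / (c + v)   # equals (2L/c) * gamma**2

# Time dilation gives the rod-frame round-trip time:
dt_rod = dt_plane / gamma

# Direct computation in the rod frame F_R:
dt_rod_direct = 2 * L0 / c

print(dt_rod, dt_rod_direct)  # the two agree only if L = L0 / gamma
```

With $v = 0.6c$ the two rod-frame times come out equal (both $2.0$), which is exactly the consistency condition that forces $L = L_0/\gamma$.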
{ "domain": "physics.stackexchange", "id": 94993, "tags": "special-relativity, spacetime, coordinate-systems, inertial-frames, observers" }
Electric Field between two parallel planes of opposite charge density
Question: So, considering two infinite parallel planes of opposite charge density, let's say $+\sigma$ for the left plane and $-\sigma$ for the right plane, why is the electric field calculated this way: $$ E = \sigma/2\varepsilon_0 + \sigma/2\varepsilon_0 = \sigma/\varepsilon_0 $$ I understand that between the planes the vector $\vec{E}_+$ will point to the right, toward the negatively charged plane. The same goes for the vector $\vec{E}_-$, which also points toward the negatively charged plane. What I don't understand is why we do not consider the '$-\sigma$' value in the equation? Answer: Refer to the figure below. We can look at the field between (and outside) the plates from the perspective of Gauss’s Law (left drawing), or from the perspective of the contributions of two charged sheets (right drawing). Recall the law states that the total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity. Looking at the upper cylinder of the left figure, we see that the only charge enclosed is the positive charge on the left plate. This continues to be the case as we extend the cylinder toward the right surface. When the cylinder includes the charge on the right plate, the net charge enclosed is zero, thus the field is zero outside the plates. You can do the same thing starting from the right plate. The figure at the right uses the approach of the field contributions of each plate, which can be summed up to get the same field as from the Gaussian surface. Note that the direction of the field vectors is always that of the force that a positive charge would experience. Hope this helps
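The superposition picture (right drawing) can be sketched numerically. Each infinite sheet contributes a field of magnitude $\sigma/2\varepsilon_0$, directed away from the positive sheet and toward the negative one; the charge density value below is an assumed example:

```python
# Superposition of the two sheet fields; +x points right, with the
# +sigma sheet on the left and the -sigma sheet on the right.
sigma = 1.0e-6     # C/m^2 (assumed example value)
eps0 = 8.854e-12   # vacuum permittivity, F/m

E_sheet = sigma / (2 * eps0)   # magnitude contributed by one sheet

# Between the plates: both contributions point right (+x), so they add.
E_between = (+E_sheet) + (+E_sheet)    # = sigma / eps0

# Outside (e.g. left of both plates): the + sheet pushes left,
# the - sheet pulls right, so they cancel.
E_outside = (-E_sheet) + (+E_sheet)    # = 0
```

This is why the $-\sigma$ sheet does appear in the equation: its contribution between the plates has the same direction as that of the $+\sigma$ sheet, which is where the second $\sigma/2\varepsilon_0$ term comes from.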
{ "domain": "physics.stackexchange", "id": 52314, "tags": "electrostatics, electric-fields, capacitance, gauss-law" }
Force required to keep a car from sliding out of a curve
Question: I am a high school student trying to get an intuitive understanding of centripetal and centrifugal forces. I can calculate the correct solution but I don't understand the meaning of it. In class today we had the following problem: Given a car weighing 1.5 tons at a speed of 20 m/s, compute the centripetal force as a multiple of the gravitational force when the car is going through a curve with radius 80 meters and a friction coefficient $\mu = 0.8$. The curve is level (not banked). If I understand correctly, what prevents the car from sliding out of the curve is the frictional force, which is the centripetal force in this case. It is computed as: $$F_{friction} = F_{centripetal} = \mu \cdot F_{gravity}$$ The condition for the frictional and centripetal forces is $$F_{friction} \geq F_{centripetal}$$ So I computed the centripetal force and the friction force using these formulas: $$\mu \cdot g \cdot m \geq m \cdot \frac{v^2}{r}$$ This means that the centripetal force can only be as big as the frictional force, correct? As a result I get $$11.77\,\text{kN} \geq 7.5\,\text{kN}$$ But what does this result tell me? If the left hand side is greater, does that mean the car will not slide? I can understand that if the centrifugal force is bigger than the frictional force the car will slide off. But what about the centripetal force? Can someone please explain the meaning of this inequality to me? The ratio $F_{gravity} / F_{centripetal}$ is 1.96. What does this tell me? Thanks in advance! Answer: If I understand correctly what prevents the car from sliding out of the curve is the frictional force which is the centripetal force in this case. Sometimes it can be easier to think about this in the ground frame rather than the car frame. Instead of "sliding out" of the curve, think of the car as normally going straight, but you expect the wheels to force the car onto the curving path. If you hit a slick patch, the wheels won't grip enough and the car simply doesn't turn as fast as you want. 
If the left hand side is greater, does that mean the car will not slide? Yes. As the wheels are holding the car via static friction, you haven't computed the frictional force; you've computed the maximum frictional force. Friction may actually be less than that. If you turn the wheel slightly, you create small forces. If you turn the wheel excessively at high speed, then at some point the maximum force is exceeded and the wheel skids. You have computed the maximum possible centripetal force. If you want to take a curve that requires a greater force than that, the car will skid and not follow the intended path.
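Plugging the problem's numbers into the inequality makes the comparison concrete ($g = 9.81\,\text{m/s}^2$ assumed):

```python
import math

# Numbers from the problem: m = 1500 kg, v = 20 m/s, r = 80 m, mu = 0.8.
m, v, r, mu = 1500.0, 20.0, 80.0, 0.8
g = 9.81

F_needed = m * v**2 / r   # centripetal force required to follow the curve: 7500 N
F_max = mu * m * g        # maximum static friction the road can supply: ~11772 N

# F_max >= F_needed, so the road can supply the needed force and the car holds.

# The same inequality, solved for v, gives the maximum speed for this curve:
v_max = math.sqrt(mu * g * r)   # ~25 m/s
```

The inequality says nothing about the force actually acting; it compares what is *required* (right side) with what the road can *at most* provide (left side), which is exactly the point of the answer above.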
{ "domain": "physics.stackexchange", "id": 84803, "tags": "friction, centripetal-force, centrifugal-force" }
Populates Data from one Workbook to Another
Question: Another code clean up I am working on. I have been breaking up my code based on things I have learned here on CR. The code below all works and functions as expected, but I know it can be streamlined more and would like to see how this can be accomplished. The code below was combined into one code block here for ease of copying, but if I need to break it up into the sheet events and standard modules please let me know.

Option Explicit

Private Sub Send_Click()
    SendToQC
End Sub

Option Explicit

Sub SendToQC()
    Dim cYear As String
    cYear = Year(Now())
    Dim nYear As String
    nYear = cYear + 1
    Dim logWBpath As String
    logWBpath = "L:\Loans\Quality Control\QC Log " & nYear & ".xlsx"
    Dim testStr As String
    testStr = ""
    Dim ret As Boolean
    ret = IsWorkBookOpen(logWBpath)
    Select Case ret
        Case Is = True
            Dim msgCap As String
            msgCap = "The QC Log is currently open. Please try again later or manually enter the data."
            MsgBox msgCap, vbInformation + vbOKOnly
            Exit Sub
        Case Is = False
            On Error Resume Next
            testStr = Dir(logWBpath)
            On Error GoTo 0
            Dim closeDate As Date
            closeDate = Sheet1.Range("P9")
            Dim logWB As Workbook
            Dim logWS As Worksheet
            Select Case Right(closeDate, 4)
                Case Is = cYear
                    PopulateData logWB, logWS, ThisWorkbook, ThisWorkbook.Sheets("In-House"), cYear
                Case Is = nYear
                    If testStr = "" Then
                        Dim ErrMsg As String
                        ErrMsg = "The QC Log for " & nYear & " may not have been created yet or has a different naming convention." & vbCrLf & vbCrLf _
                            & "Please Contact Zack Elcombe." & vbCrLf & "    Ext: 4519" & vbCrLf & "    Email: ZackE@coderules.coderuls"
                        MsgBox ErrMsg, vbCritical
                    Else
                        PopulateData logWB, logWS, ThisWorkbook, ThisWorkbook.Sheets("In-House"), , nYear
                    End If
                Case Is = ""
                    MsgBox "Closing Date is required", vbCritical + vbOKOnly
            End Select
            With Sheet1.Send
                .Locked = True
                .Enabled = False
                .BackColor = vbGreen
            End With
    End Select
End Sub

Sub PopulateData(LogWorkbook As Workbook, LogWorksheet As Worksheet, QualityContWB As Workbook, _
        QualityContWS As Worksheet, Optional ByVal CurrentYear As String, Optional ByVal NextYear As String)
    If Not CurrentYear = "" Then
        Set LogWorkbook = Workbooks.Open("L:\Loans\Quality Control\QC Log " & CurrentYear & ".xlsx", False)
    Else
        Set LogWorkbook = Workbooks.Open("L:\Loans\Quality Control\QC Log " & NextYear & ".xlsx", False)
    End If
    Set LogWorksheet = LogWorkbook.Sheets("Sheet1")
    Set QualityContWB = ThisWorkbook
    Set QualityContWS = QualityContWB.Sheets("In-House")
    Dim dataRow As Long
    dataRow = LogWorksheet.Cells(Rows.Count, "B").End(xlUp).Row + 1
    LogWorksheet.Range("B" & dataRow) = Format(QualityContWS.Range("P9"), "General Date")
    Select Case LCase(Split(QualityContWS.Range("lnOfficer"), " ")(0))
        Case Is = "cassie": LogWorksheet.Range("C" & dataRow) = "CLH"
        Case Is = "amy": LogWorksheet.Range("C" & dataRow) = "ASN"
        Case Is = "nancy": LogWorksheet.Range("C" & dataRow) = "NAK"
        Case Is = "liz": LogWorksheet.Range("C" & dataRow) = "EAO"
        Case Is = "rob": LogWorksheet.Range("C" & dataRow) = "RTE"
    End Select
    LogWorksheet.Range("D" & dataRow) = QualityContWS.Range("LnProcessor")
    LogWorksheet.Range("E" & dataRow) = QualityContWS.Range("BorrowerName")
    LogWorksheet.Range("F" & dataRow) = QualityContWS.Range("LnNumber")
    LogWorksheet.Range("G" & dataRow) = "No"
    Dim Reviewer As String
    If Len(QualityContWS.Range("Reviewer")) > 0 Then
        Select Case LCase(Split(QualityContWS.Range("Reviewer"), " ")(0))
            Case Is = "hunter": Reviewer = "HMP"
            Case Is = "cindy": Reviewer = "CKK"
            Case Is = "zack": Reviewer = "ZJE"
            Case Is = "terri": Reviewer = "TJE"
        End Select
    Else: Reviewer = ""
    End If
    Select Case Len(QualityContWS.Range("DateCleartoClose"))
        Case Is = 0
            LogWorksheet.Range("H" & dataRow) = Reviewer
            LogWorksheet.Range("I" & dataRow) = vbNullString
        Case Is > 1: LogWorksheet.Range("I" & dataRow) = QualityContWS.Range("DateCleartoClose")
    End Select
    Dim qcComments As String
    qcComments = QualityContWS.Range("C88") & " " & QualityContWS.Range("C89") & " " & QualityContWS.Range("C90") & " " & QualityContWS.Range("C91")
    LogWorksheet.Range("J" & dataRow) = qcComments & ". " & Reviewer
    LogWorkbook.Save
    LogWorkbook.Close
End Sub

Option Explicit

Function IsWorkBookOpen(filename As String) As Boolean
    Dim ff As Long, ErrNo As Long
    On Error Resume Next
    ff = FreeFile()
    Open filename For Input Lock Read As #ff
    Close ff
    ErrNo = Err
    On Error GoTo 0
    Select Case ErrNo
        Case 0: IsWorkBookOpen = False
        Case 70: IsWorkBookOpen = True
        Case Else: Error ErrNo
    End Select
End Function

Answer:

Select Case ret
    Case Is = True
        Exit Sub
    Case Is = False
End Select

I would write a Select Case that will never have more than two conditions as an If..Else statement. In this case, I prefer to wrap the IsWorkBookOpen() in its own If statement because you are going to exit the sub if it is triggered. This will save you an indent level and eliminate the need for the ret variable.

If IsWorkBookOpen(logWBpath) Then
    Dim msgCap As String
    msgCap = "The QC Log is currently open. Please try again later or manually enter the data."
    MsgBox msgCap, vbInformation + vbOKOnly
    Exit Sub
End If

Adding white-space before and after your code blocks (e.g. If, Select, Sub, Function..) will make the code easier to read.

If Len(QualityContWS.Range("Reviewer")) > 0 Then
    Select Case LCase(Split(QualityContWS.Range("Reviewer"), " ")(0))
        Case Is = "hunter": Reviewer = "HMP"
        Case Is = "cindy": Reviewer = "CKK"
        Case Is = "zack": Reviewer = "ZJE"
        Case Is = "terri": Reviewer = "TJE"
    End Select
Else: Reviewer = ""
End If

Use With blocks to shorten references:

Before

Dim qcComments As String
qcComments = QualityContWS.Range("C88") & " " & QualityContWS.Range("C89") & " " & QualityContWS.Range("C90") & " " & QualityContWS.Range("C91")
LogWorksheet.Range("J" & dataRow) = qcComments & ". " & Reviewer

After

With QualityContWS
    LogWorksheet.Range("J" & dataRow) = WorksheetFunction.TextJoin(" ", True, .Range("C89:C91").Value, ". ", Reviewer)
End With

Good thing that these are going to be the only 4 employees who will never leave the company, or you may need to rewrite a lot of code in the future. Normally, I would recommend storing the employee information in a database and writing some lookup functions or an employee information class, but I am sure you will be alright.

Dim dataRow As Long
dataRow = LogWorksheet.Cells(Rows.Count, "B").End(xlUp).Row + 1

I'm really not a fan of having a lastrow variable unless absolutely necessary. As I have mentioned in answers to other questions of the OP, consider using Enumeration to reference your columns.

Public Enum LogWorksheetColumns
    cA = 1
    cDateOf
    lnOfficerInitials
    cLnProcessor
    cBorrowerName
    cLnNumber
    cYesNo
    cReviewer
    cDateCleartoClose
End Enum

Sub PopulateData(...)
    '...
    Dim newRow As Range
    With LogWorksheet
        Set newRow = .Cells(.Rows.Count, "B").End(xlUp).Offset(1, -1)
    End With
    With QualityContWS
        newRow(cDateOf) = Format(.Range("P9"), "General Date")
        newRow(lnOfficerInitials) = GetLnProcessor(Split(QualityContWS.Range("lnOfficer").Value, " ")(0))
        newRow(cLnProcessor) = .Range("LnProcessor").Value
        newRow(cBorrowerName) = .Range("BorrowerName").Value
        newRow(cLnNumber) = .Range("LnNumber").Value
        newRow(cYesNo) = "No"
        newRow(cDateCleartoClose) = ....
    End With
    '...
End Sub

Rows.Count needs to be qualified to a worksheet:

LogWorksheet.Cells(LogWorksheet.Rows.Count, "B").End(xlUp).Row + 1
{ "domain": "codereview.stackexchange", "id": 36882, "tags": "vba, excel, subroutine" }
Charge, Size, and Movement in Aqueous Ions as Electrolytes
Question: Would a pair of aqueous $+2$ and $-2$ ions necessarily conduct electricity better than the same concentration of $+1$ and $-1$ ions? Furthermore, would more massive ions not conduct as well as less massive ions because they cannot travel as rapidly? It seems to me that ions move in the liquid to conduct electricity, because solid ionic crystals do not conduct electricity. If they move, then size and charge should both be factors affecting electrolyte strength. However, if ions are moving, why don't they ever achieve the most stable position and stop moving? Answer: Would a pair of aqueous $+2$ and $−2$ ions necessarily conduct electricity better than the same concentration of $+1$ and $−1$ ions? Not necessarily. Taking a concentration of 0.001 for aqueous solutions of $\ce{KOH}$ and $\ce{MgSO4}$ at $18$ degrees Celsius, we find that the current conducted by $\ce{KOH}$ is $2.34\ \mathrm{mA}$ while the current conducted by $\ce{MgSO4}$ is $2.00\ \mathrm{mA}$. Why did this happen? It happened because of the mass, and that leads to the answer to your second question. Would more massive ions not conduct as well as less massive ions because they cannot travel as rapidly? Yes, a more massive ion means less current. It follows from a simple equation (and I hope you are aware of these physics equations) $$qV=\dfrac{mv^2}{2}$$ Here $q$ is the charge on the ion, $V$ is the potential difference across the solution, $m$ is the mass of the ion, and $v$ is the drift velocity. We know that $qV$ is constant, so we get $$v \propto \dfrac{1}{\sqrt{m}}$$ We can easily see that the more mass an ion has, the lower its speed. If ions are moving, why don't they ever achieve the most stable position and stop moving? They do achieve the most stable position, and that's why batteries die.
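A quick numeric sketch of the answer's equation $qV = mv^2/2$; the charge, voltage, and masses below are assumed illustrative values, not measured data:

```python
import math

q = 1.602e-19          # one elementary charge, C
V = 1.0                # potential difference, volts (assumed)

def drift_speed(q, V, m):
    # Solve qV = m v^2 / 2 for v.
    return math.sqrt(2 * q * V / m)

m_light = 6.5e-26      # roughly the mass of a K+ ion in kg (rough value)
m_heavy = 4 * m_light  # a hypothetical ion four times heavier

v_light = drift_speed(q, V, m_light)
v_heavy = drift_speed(q, V, m_heavy)

# v ∝ 1/sqrt(m): four times the mass gives half the speed.
print(v_light / v_heavy)
```

The ratio comes out to 2, illustrating the $v \propto 1/\sqrt{m}$ scaling the answer derives.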
{ "domain": "chemistry.stackexchange", "id": 5536, "tags": "electrochemistry, aqueous-solution, electricity" }
Why does this pumping lemma proof not cover every division of xyz?
Question: I am trying to prove $L = \{a^k \mid k=2^n, n \ge 0\}$ is not regular. So far I have identified a string $s = a^{2^p}$, and I've divided $s$ into $xyz$ as follows: $x = a^l$, $y = a^k$ (where $k + l \le p$), $z = a^{2^p-l-k}$. Now, when I start to look at pumping, I first select $i=0$: $xy^iz = xy^0z = xz = a^la^{2^p-l-k}=a^{2^p-k}$, which is clearly not of the form $a^{2^n}$. However, I know that there are mistakes in this proof and I have been told that my division of the string $s$ into $xyz$ does not account for all divisions. Could anyone please help? Answer: For reference, the pumping lemma is (from Introduction to the Theory of Computation by Michael Sipser) If $A$ is a regular language, then there is a number $p$ (the pumping length) where if $s$ is any string in $A$ of length at least $p$, then $s$ may be divided into three pieces, $s = xyz$, satisfying the following conditions: for each $i \ge 0, xy^i z \in A$, $|y| > 0$, and $|xy| \le p$. When you want to prove that a language is not regular using the pumping lemma, you use a proof by contradiction. You first assume that the given language is regular. As per the pumping lemma, given a regular language, there exists a number $p$ such that for all $s$ with length at least $p$, there exists a way to partition it in the form $x,y,z$ such that all the three conditions hold. The contrapositive is For all $p\ge1$, there exists a string $s$ with length at least $p$, such that for all ways to partition $s$ in the form $x,y,z$, at least one of the three conditions is false, implies that the language is not regular. So, to show that $L$ is not regular, you need to show that the appropriate quantifier (existential or universal) for each entity is used correctly. Initially, you have taken a value $p$, with no assumption. This is in accordance with for all $p$. Then, you have chosen a string $s$ with length at least $p$. 
This is in accordance with there exists a string $s$, i.e., a single instance of a string satisfying the rest of the statement is sufficient. Now, you need to consider all ways of partitioning $s$ in the form $x,y,z$, and show that always, at least one of the three statements of the lemma is false. So, you have partitioned it into $x=a^l, y=a^k, z=a^{{2^p}-l-k}$. As of now, there are no assumptions on the values $l$ and $k$ can take, except that they are non-negative, which is as required. To force a contradiction, you need to try to show that all the statements of the lemma do hold. Statement 2 states that $|y|\gt0$. So, if it were true, we must have $k\gt0$. Statement 3 states that $|xy|\le p$. If it were true, we must have $k+l\le p$. Finally, Statement 1 must be true if $L$ is regular, so $xy^iz\in L$ for all $i\ge0$ must be true. It is sufficient thus to show that it is false for any one value of $i$. I choose $i=2$. So, we must have $a^la^{2k}a^{{2^p}-l-k}=a^{{2^p}+k}\in L$ for all possible values of $l$ and $k$ except as per the previously imposed restrictions. Now, we have $$k\gt0\implies k\ge1$$ $$2^p+k\ge 2^p+1\gt 2^p\tag1$$ $$k+l\le p$$ But $l\ge 0$, so we have $$k\le p$$ $$2^p+k\le 2^p+p\tag2$$ But we have $$p\lt 2^p$$ for all $p\ge 1$, which is the set of all allowed values of $p$. So, $(2)$ becomes, $$2^p+k\le 2^p+p\lt 2^p+2^p=2^{p+1}\tag3$$ From $(1)$ and $(3)$, $$2^p\lt 2^p+k \lt2^{p+1}$$ As $2^p+k$ lies strictly between consecutive powers of $2$, it cannot be a power of $2$ itself. So, $a^{2^p+k}\notin L$, and we have a contradiction as desired. Thus, $L$ is not regular. In your proof, the initial steps are all correct; all you need to do is to show why the string cannot lie in the language, using an appropriate value of $i$, as shown in this answer. Moreover, choosing $i=0$ is not straightforward. For $i=2$, I argued that $2^p\gt p$. For $i=0$, you will need to use $2^{p-1}\gt p$, which is not true if $p\le 2$.
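The key step, that $2^p + k$ is never a power of two when $1 \le k \le p$, can be brute-force checked for a range of values:

```python
# Verify: for every p >= 1 and every k with 1 <= k <= p, the number
# 2**p + k lies strictly between 2**p and 2**(p+1), hence is not a
# power of two, so a^(2^p + k) is not in L.
def is_power_of_two(n):
    # A positive integer is a power of two iff it has exactly one set bit.
    return n > 0 and n & (n - 1) == 0

for p in range(1, 20):
    for k in range(1, p + 1):
        n = 2**p + k
        assert 2**p < n < 2**(p + 1)
        assert not is_power_of_two(n)
```

Note that the analogous check for pumping *down* ($2^p - k$) fails for small $p$: with $p = 2$ and $k = 2$ one gets $2^p - k = 2$, which *is* a power of two; this is exactly why $i=0$ is the harder choice.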
{ "domain": "cs.stackexchange", "id": 8289, "tags": "regular-languages, pumping-lemma" }
Please help interpret the IBM Quantum error code: "Instruction bfunc is not supported [7001]"
Question: I have already run the circuit on the IBM Quantum simulators successfully. But when I ran the same circuit on the real quantum device ibmq_16_melbourne, I got the error. The IBM website does have the 7001 code definition, but it is not specific enough to indicate what the bfunc means. Could you please help interpret the code? More importantly, how can I avoid it so that I get a successful run? Thanks. Answer: A bfunc is a Boolean Function, as defined in the QObj Specification (page 20), and the error 7001 refers to Instruction {} is not supported. In other words, the backend ibmq_16_melbourne does not support boolean functions. Your circuit most likely has a classical conditional, something like if (c==0) h q[0]; in QASM or circuit.h(qreg_q[0]).c_if(creg_c, 0) in Qiskit. Conditionals are not currently (March 2021) supported by any of the IBM quantum hardware. They are, however, supported in many simulators.
{ "domain": "quantumcomputing.stackexchange", "id": 2583, "tags": "programming, ibm-q-experience" }
How can a contact force have both a normal component and a frictional component? What does it mean?
Question: My syllabus says that I must understand (and be able to use) the fact that: a contact force between two surfaces can be represented by two components, the normal component and the frictional component. What does this mean? I have been trying to understand this, and I tried to think of a few contact forces -- friction, tension, air resistance. I don't understand how friction, for example, could have both a normal component and a frictional component. Similarly, what if the surface is smooth and there is no friction? How, then, can some force have a frictional component? Those of you out there that understand physics/mechanics can probably see major gaps in my understanding. I cannot know because I do not even know enough to know. My textbook does not clarify this, and I am an almost exclusively self-taught high school student (with no peers/teachers whatsoever). Therefore, I would appreciate a thorough explanation. Thank you! Answer: I don't understand how friction, for example, could have both a normal component and a frictional component. Your syllabus doesn't say this. It says that interaction forces have two components, normal and frictional. It isn't saying friction has these components; friction itself is one of the components. This is just a general application of breaking forces into components. Of course you can break any vector into components in any direction, but in the case of contact between two surfaces the easiest components are perpendicular and parallel to the surfaces that are in contact. We call the perpendicular component the "normal force" and the parallel component "friction", but at the end of the day it's just a single force of interaction between the two surfaces.$^*$ Typically we always break these up right away in physics problems and don't acknowledge that we did, so it is hard to recognize that they are from the same interaction. Take the example of incline plane problems. 
We take the gravity force and break it up into components perpendicular to the incline and parallel to the incline. We treat these two components separately even though they come from the same interaction of gravity between the mass on the incline and the Earth. This might have been masked a little bit if instead of showing this derivation we just said "when an object is on an incline there is a 'sliding force' down the incline and a 'pushing force' into the incline." Similarly, what if the surface is smooth and there is no friction? How, then, can some force have a frictional component? You can still resolve the surface interaction into normal and frictional components, you just find that the friction component is $0$. $^*$If you wanted to write out the interaction force, we would just have $$\mathbf F_\text{interaction}=N\,\hat e_\bot+f\,\hat e_\parallel$$ where $N$ is what we call the "normal force", $f$ is what we call the "friction force", and $\hat e_\bot$ and $\hat e_\parallel$ are unit vectors perpendicular and parallel to the surfaces respectively.
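The decomposition $\mathbf F = N\,\hat e_\bot + f\,\hat e_\parallel$ can be sketched numerically; the incline angle and the component magnitudes below are assumed example values:

```python
import math

# Incline rising to the right at angle theta (assumed example).
theta = math.radians(30.0)

# Unit vectors perpendicular and parallel to the incline surface,
# expressed in ground (x, y) coordinates:
e_perp = (-math.sin(theta), math.cos(theta))
e_par = (math.cos(theta), math.sin(theta))

# A single contact force F = N*e_perp + f*e_par (assumed N and f):
N, f = 10.0, 2.0
F = (N * e_perp[0] + f * e_par[0], N * e_perp[1] + f * e_par[1])

# Projecting F back onto the unit vectors recovers the two components,
# since e_perp and e_par are orthonormal:
N_recovered = F[0] * e_perp[0] + F[1] * e_perp[1]
f_recovered = F[0] * e_par[0] + F[1] * e_par[1]
```

One vector, two components: the "normal force" and "friction" labels are just names for the two projections of the same contact force.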
{ "domain": "physics.stackexchange", "id": 72340, "tags": "newtonian-mechanics, forces, classical-mechanics" }
Markov switching models
Question: What are some reference sources for understanding Markov switching models? Answer: Firstly, understanding Markov switching models requires a good knowledge of Markov models and the way they work. Just as important is an idea of time series models and how they work. I found this tutorial good enough for getting up to speed with the concept. This is another tutorial on a similar application of the switching model, which is the regime switching model. The statsmodels library has nice support for building Markov switching models. Here is one simple and quick Python tutorial which uses the statsmodels library.
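For intuition, here is a minimal, self-contained sketch of what a two-regime Markov switching process generates (this is not the statsmodels API, just a hand-rolled illustration; all the numbers are assumed):

```python
import random

random.seed(0)

# Transition probabilities of the hidden 2-state Markov chain.
P = {0: {0: 0.95, 1: 0.05},
     1: {0: 0.10, 1: 0.90}}

# Each regime emits observations around its own mean.
means = {0: 0.0, 1: 3.0}

state = 0
states, series = [], []
for _ in range(500):
    # Step the hidden chain, then emit an observation for the current regime.
    state = 0 if random.random() < P[state][0] else 1
    states.append(state)
    series.append(means[state] + random.gauss(0, 1))
```

Fitting the reverse problem, recovering the transition probabilities and regime means from the observed series alone, is what the Markov switching estimators in statsmodels are for.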
{ "domain": "datascience.stackexchange", "id": 11592, "tags": "time-series, markov-process, reference-request" }
What is the name of this lens design that uses concentric dark and transparent rings?
Question: I remember from my optics class years ago that one can make an optical lens out of a set of concentric rings, alternately transparent and dark. By choosing the diameter and width of said rings, the lens properties can be tuned, so typically they would be achieved using spatial light modulators (SLM). Initially I thought I remembered them as Fresnel lenses, but the Wikipedia page does not match that description. They do not seem to be featured on this list of lens designs either. Does anyone know what they are called and where documentation can be found about them? Answer: Answering my own question, as remembering about the SLM part helped me to narrow down my search. These are indeed called Fresnel zone plates, or simply zone plates, and the Wikipedia page has a decent amount of information about them. (I'll give the community some time before accepting my own answer, in case someone has better documentation pointers to give.)
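For reference, a small sketch of the zone-plate geometry: the boundary radius of the $n$-th zone for focal length $f$ and wavelength $\lambda$ is $r_n = \sqrt{n\lambda f + n^2\lambda^2/4}$, usually approximated as $\sqrt{n\lambda f}$ when $f \gg n\lambda$. The wavelength and focal length below are assumed example values:

```python
import math

lam = 633e-9   # HeNe laser wavelength, m (assumed example)
f = 0.10       # desired focal length, m (assumed example)

def zone_radius(n, lam, f):
    # Exact zone-boundary radius; the second term is the small correction
    # to the common sqrt(n * lam * f) approximation.
    return math.sqrt(n * lam * f + (n * lam / 2) ** 2)

radii = [zone_radius(n, lam, f) for n in range(1, 6)]
# Opaque and transparent rings alternate between successive radii,
# which is the pattern an SLM would display to act as a tunable lens.
```

Changing `f` rescales all the radii, which is how the lens properties are tuned on an SLM.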
{ "domain": "physics.stackexchange", "id": 54258, "tags": "optics, lenses" }
Are corrosive and reactive synonymous?
Question: In this answer to a question regarding whether water is corrosive in pure form, the author implies that "reactive" and "corrosive" are the same thing (at least with water). "Corrode" seems to mean irreparably damaged, which also seems to imply a corrosive reaction is not reasonably reversible, whereas "react" can mean several things, such as the changing of an oxidation state, bonding covalently with another element or molecule, or something as seemingly small as a hydrogen bond. Are "corrode" and "react" truly synonymous, or is corrosion just one of many types of reactions? Answer: Corrosion has an economic and a negative connotation, e.g., corrosion scientists are interested in protecting metals/non-metals exposed to the atmosphere to reduce major losses to the city & the government. Reaction is a necessary process for corrosion, for instance, the eating away or destruction of metals by oxidation. Hence, acids are corrosive to a large number of metals. Even in geology, corrosion implies destruction of rocks, mainly by water. Those who live near coastal cities know how cement is corroded (eaten away) by salty water aerosol. There is no oxidation. All senses are negative. In my humble opinion, if something is reactive, it does not necessarily have a negative tone. Water reacts with sodium, but we would not say water corrodes sodium, because an economic loss/negative tone is not implied. Sulfur reacts with copper and so on. I would not include hydrogen bonding under a chemical reaction, but this is really splitting hairs, just like chemical change vs. physical change enforced upon undergraduate students. In short, water can be corrosive or reactive depending on the context of the speaker/author. Acids are corrosive to metals (negative, warning) or aqua regia reacts with most metals (the sense may be good or bad).
{ "domain": "chemistry.stackexchange", "id": 16794, "tags": "definitions" }
Differentiability of electric field due to bounded volume charge distribution
Question: In books on electromagnetism, one often sees expressions of Maxwell's equations like $\nabla \cdot \mathbf{E}$ and $\nabla \times \mathbf{E}$. These expressions make sense if $\mathbf{E}$ (which is due to a bounded volume charge distribution) is differentiable. I ask this question because in all the textbooks on electromagnetism which I have seen, expressions like $\nabla \cdot \mathbf{E}$ and $\nabla \times \mathbf{E}$ are used and nowhere do they prove the differentiability of $\mathbf{E}$. How can it be justified? Is the differentiability of $\mathbf{E}$ such a trivial case? If yes, why is it so? If no, why do the books ignore discussing the differentiability of $\mathbf{E}$? Answer: Maxwell's equations continue to hold even when the fields are not differentiable in the usual sense, as they can be interpreted in terms of "weak" or distributional derivatives. For example, the electric field jumps discontinuously across a surface charge distribution, but $\nabla \cdot {\bf D}= \rho$ remains true with $\rho(x,y,z)=\sigma(x,y) \delta(z)$. This is the case in most of physics, which is why you seldom see differentiability conditions in discussions of vector calculus in physics texts. There are exceptions of course, so caution is always required.
{ "domain": "physics.stackexchange", "id": 57914, "tags": "electromagnetism, electrostatics, electric-fields, differentiation, maxwell-equations" }
Convert BeautifulSoup4 HTML Table to a list of lists, iterating over each Tag element
Question: I am trying to convert a BeautifulSoup4 HTML table to a list of lists, iterating over each Tag element and handling them accordingly. I have an implementation of this that works at a surface level using BeautifulSoup4. However, the code is getting needlessly repetitive and complicated, but every time I try to improve it, I just end up breaking the functionality. I need some guidance on tidying this up. Ultimately, I separate each type of HTML tag for any given row cell. The goal is to re-format the contents of the tables to an Excel spreadsheet and do partial cell formatting (still a work in progress, using xlwt). Note I've left out as much as possible of the parsing, but just enough to give an idea.

from bs4 import BeautifulSoup
from bs4.element import Tag, NavigableString

def handle_bs4_element(element):
    if isinstance(element, Tag):
        if len(element.contents) > 1:
            # Handle each element separately and return a list? What if more elements are nested? Recursive call?
            _res = []
            for e_content in element.contents:
                _res.append(handle_bs4_element(e_content))
            if len(_res) == 1:
                return _res[0]
            else:
                return _res
        else:
            tag_name = element.name
            if tag_name == 'td':
                _res = []
                for td_content in element.contents:
                    _res.append(handle_bs4_element(td_content))
                if len(_res) == 1:
                    return _res[0]
                else:
                    return _res
            elif tag_name in ('div', 'span'):
                # This will probably contain more nested tags...
                _res = []
                for td_content in element.contents:
                    _res.append(handle_bs4_element(td_content))
                if len(_res) == 1:
                    return _res[0]
                else:
                    return _res
            elif tag_name in ('p', 'strong', 'em', 'h3'):
                # Would handle each case separately, but just for the example
                return element.text
            elif tag_name == 'a':
                e_text = element.text
                e_link = element['href']
                if e_text != e_link:
                    return '{text} ({url})'.format(text=e_text, url=e_link)
                else:
                    return e_link
            else:
                print('Element HTML type not handled: {0}'.format(tag_name))
    elif isinstance(element, NavigableString):
        return element
    else:
        raise Exception('bs4 element of type {0} not handled...'.format(type(element)))

bs_table = BeautifulSoup(open('table_sample.html'), "html.parser")
headers = [h.text for h in bs_table.find_all('th')]
data = [headers]
rows = bs_table.find_all('tr')
for row in rows:
    row_cells = row.find_all('td')
    if row_cells:
        # Handle each row cell appropriately
        data.append([handle_bs4_element(rc) for rc in row_cells if handle_bs4_element(rc)])

print('\n'.join(map(str, data)))

table_sample.html:

<table class="confluenceTable">
<tbody>
<tr> <th class="numberingColumn confluenceTh">&nbsp;</th> <th class="confluenceTh"><p><strong>Description</strong></p></th> <th colspan="1" class="confluenceTh"><p>Col 1</p></th> <th colspan="1" class="confluenceTh">Col 2</th> <th colspan="1" class="confluenceTh">Col 3</th> </tr>
<tr> <td class="numberingColumn confluenceTd">1</td> <td class="confluenceTd"><p>Some paragraph text</p></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">5</td> <td colspan="1" class="confluenceTd">2</td> </tr>
<tr> <td class="numberingColumn confluenceTd">2</td> <td colspan="4" class="confluenceTd"><h3 id="some-id1"><strong>HEADER 1</strong></h3></td> </tr>
<tr> <td class="numberingColumn confluenceTd">3</td> <td colspan="1" class="confluenceTd"><div><p>Some text: </p><p>(1) Check out this <strong style="line-height: 1.42857;">Figure 1.0.</strong></p></div></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">2</td> <td colspan="1" class="confluenceTd">1</td> </tr>
<tr> <td class="numberingColumn confluenceTd">4</td> <td colspan="1" class="confluenceTd"><p>(2)&nbsp;&nbsp;&nbsp;Some more text</p></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">2</td> <td colspan="1" class="confluenceTd">1</td> </tr>
<tr> <td class="numberingColumn confluenceTd">5</td> <td colspan="1" class="confluenceTd"><p>(3)&nbsp;&nbsp;&nbsp; Additional text</p></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">2</td> <td colspan="1" class="confluenceTd">1</td> </tr>
<tr> <td class="numberingColumn confluenceTd">6</td> <td colspan="1" class="confluenceTd"><p>(4)&nbsp;&nbsp;&nbsp; A bit more text</p></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">2</td> <td colspan="1" class="confluenceTd">1</td> </tr>
<tr> <td class="numberingColumn confluenceTd">7</td> <td colspan="1" class="confluenceTd"><span>(5)&nbsp;&nbsp;&nbsp; <span>A span <strong>Figure 1.0</strong> for edited text. At this point the </span>span starts again</span></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">2</td> <td colspan="1" class="confluenceTd">1</td> </tr>
<tr> <td class="numberingColumn confluenceTd">8</td> <td colspan="4" class="confluenceTd"><h3 id="some-id2"><strong>HEADER 2</strong></h3></td> </tr>
<tr> <td class="numberingColumn confluenceTd">9</td> <td colspan="1" class="confluenceTd"><p>Weird formatting, because Confluence</p><p>&nbsp;</p></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">4</td> <td colspan="1" class="confluenceTd">2</td> </tr>
<tr> <td class="numberingColumn confluenceTd">10</td> <td colspan="4" class="confluenceTd"><h3 id="some-id3"><strong>HEADER 3</strong></h3></td> </tr>
<tr> <td class="numberingColumn confluenceTd">11</td> <td colspan="1" class="confluenceTd"><p>A paragraph about header 3.</p> <div class="confluence-information-macro confluence-information-macro-information"> <span class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span> <div class="confluence-information-macro-body">This is just silly. <strong>Strong</strong> indeed.</div> </div> </td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">3</td> <td colspan="1" class="confluenceTd">3</td> </tr>
<tr> <td class="numberingColumn confluenceTd">12</td> <td colspan="1" class="confluenceTd"><span>Something about things or what not. Why is this in a span?</span></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">2</td> <td colspan="1" class="confluenceTd">2</td> </tr>
<tr> <td class="numberingColumn confluenceTd">13</td> <td colspan="4" class="confluenceTd"><h3 id="some-id4">HEADER 4</h3></td> </tr>
<tr> <td class="numberingColumn confluenceTd">14</td> <td colspan="1" class="confluenceTd"><p>Section 4 baby! Or header.</p> <div class="confluence-information-macro confluence-information-macro-information"> <span class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span> <div class="confluence-information-macro-body">Confluence formatting fun.</div> </div> </td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">2</td> <td colspan="1" class="confluenceTd">3</td> </tr>
<tr> <td class="numberingColumn confluenceTd">15</td> <td colspan="1" class="confluenceTd"><span>Pretty boring span of text</span></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">2</td> <td colspan="1" class="confluenceTd">2</td> </tr>
<tr> <td class="numberingColumn confluenceTd">16</td> <td colspan="4" class="confluenceTd"><h3 id="some-id5"><strong>HEADER 5</strong></h3></td> </tr>
<tr> <td class="numberingColumn confluenceTd">17</td> <td colspan="1" class="confluenceTd"><p>A big paragraph describing more stuff. Super exciting.</p></td> <td colspan="1" class="confluenceTd">x</td> <td colspan="1" class="confluenceTd">4</td> <td colspan="1" class="confluenceTd">2</td> </tr>
</tbody>
</table>

Current output:

[u'\xa0', u'Description', u'Col 1', u'Col 2', u'Col 3']
[u'1', u'Some paragraph text', u'x', u'5', u'2']
[u'2', u'HEADER 1']
[u'3', [u'Some text: ', [u'(1) Check out this ', u'Figure 1.0.']], u'x', u'2', u'1']
[u'4', u'(2)\xa0\xa0\xa0Some more text', u'x', u'2', u'1']
[u'5', u'(3)\xa0\xa0\xa0 Additional text', u'x', u'2', u'1']
[u'6', u'(4)\xa0\xa0\xa0 A bit more text', u'x', u'2', u'1']
[u'7', [u'(5)\xa0\xa0\xa0 ', [u'A span ', u'Figure 1.0', u' for\n edited text. At this point the '], u'span starts again'], u'x', u'2', u'1']
[u'8', u'HEADER 2']
[u'9', [u'Weird formatting, because Confluence', u'\xa0'], u'x', u'4', u'2']
[u'10', u'HEADER 3']
[u'11', [u'A paragraph about header 3.', u'\n', [u'\n', [], u'\n', [u'This is just silly. ', u'Strong', u' indeed.'], u'\n'], u'\n'], u'x', u'3', u'3']
[u'12', u'Something about things or what not. Why is this in a span?', u'x', u'2', u'2']
[u'13', u'HEADER 4']
[u'14', [u'Section 4 baby! Or header.', u'\n', [u'\n', [], u'\n', u'Confluence formatting fun.', u'\n'], u'\n'], u'x', u'2', u'3']
[u'15', u'Pretty boring span of text', u'x', u'2', u'2']
[u'16', u'HEADER 5']
[u'17', u'A big paragraph describing more stuff. Super exciting.', u'x', u'4', u'2']

Answer: Here is the list of things I would think about to improve:

You are doubling up on calls to handle_bs4_element() here:

data.append([handle_bs4_element(rc) for rc in row_cells if handle_bs4_element(rc)])

Instead, you can either allow "falsy" values for the row cells and filter them afterwards, or expand the loop:

result = []
for rc in row_cells:
    cell_text = handle_bs4_element(rc)
    if cell_text:
        result.append(cell_text)
data.append(result)

The DRY principle. There are several repeated blocks of code, like:

if len(_res) == 1:
    return _res[0]
else:
    return _res

Using list comprehensions is not only more Pythonic, but actually faster. E.g. you can replace:

_res = []
for td_content in element.contents:
    _res.append(handle_bs4_element(td_content))

with:

_res = [handle_bs4_element(td_content) for td_content in element.contents]

You can use the short if/else one-liner, replacing:

if len(_res) == 1:
    return _res[0]
else:
    return _res

with:

return _res[0] if len(_res) == 1 else _res

Variable naming. _res should not start with an underscore; you are confusing private class or instance attributes with regular variables. _res should probably be called result, or maybe cell_data?

If you will have more of this kind of tag-specific processing logic, continuing to put it in yet another elif would hurt readability and does not scale well. Consider using the "Extract Method" refactoring and defining a separate function for each of the cases.
Instead of using the .contents list directly, look into using .get_text(), which collects an element's complete text, including the children's texts, recursively. Not sure if applicable for your problem. Or, instead of the .contents list, you can use the .children generator. As a side note, there is also a simpler way to parse HTML tables - pandas.read_html() - which loads an HTML table into a DataFrame; you can then easily dump the dataframe into a list, into CSV, or into an Excel file directly. For example, the following code:

from pprint import pprint
import pandas as pd

df = pd.read_html('table_sample.html')[0]  # get the first parsed dataframe
pprint(df.values.tolist())

Would automagically produce:

[[nan, 'Description', 'Col 1', 'Col 2', 'Col 3'],
 [1.0, 'Some paragraph text', 'x', '5', '2'],
 [2.0, 'HEADER 1', nan, nan, nan],
 [3.0, 'Some text: (1) Check out this Figure 1.0.', 'x', '2', '1'],
 [4.0, '(2) Some more text', 'x', '2', '1'],
 [5.0, '(3) Additional text', 'x', '2', '1'],
 [6.0, '(4) A bit more text', 'x', '2', '1'],
 [7.0, '(5) A span Figure 1.0 for edited text. At this point the span starts again', 'x', '2', '1'],
 [8.0, 'HEADER 2', nan, nan, nan],
 [9.0, 'Weird formatting, because Confluence', 'x', '4', '2'],
 [10.0, 'HEADER 3', nan, nan, nan],
 [11.0, 'A paragraph about header 3. This is just silly. Strong indeed.', 'x', '3', '3'],
 [12.0, 'Something about things or what not. Why is this in a span?', 'x', '2', '2'],
 [13.0, 'HEADER 4', nan, nan, nan],
 [14.0, 'Section 4 baby! Or header. Confluence formatting fun.', 'x', '2', '3'],
 [15.0, 'Pretty boring span of text', 'x', '2', '2'],
 [16.0, 'HEADER 5', nan, nan, nan],
 [17.0, 'A big paragraph describing more stuff. Super exciting.', 'x', '4', '2']]
{ "domain": "codereview.stackexchange", "id": 24204, "tags": "python, html, python-2.x, excel, beautifulsoup" }
Winning strategy in the game of triplets
Question: The game of triplets is defined by a finite set of elements $X$, and a finite multi-set $T$ containing triplets of elements. Two players take turns picking elements from $X$ until all elements are taken. Then, the score of each player is the number of triplets from $T$ in which he has at least 2 elements. A standard strategy-stealing argument shows that the first player can always score at least $|T|/2$. Suppose by contradiction that it is false. Then the second player can score more than $|T|/2$. But then the first player, copying the second player's winning strategy, can score more than $|T|/2$ too. This is a contradiction since the sum of scores is $|T|$. QUESTION: what is an explicit strategy for the first player to get a score of at least $|T|/2$? EDIT: Here is an explicit strategy for the first player to get at least $3|T|/8$. To each triplet in $T$, assign a potential $P(a,b)$ based on the number of its elements taken by the (first,second) player: \begin{matrix} \bf a \downarrow b \rightarrow & \bf 0 & \bf 1 & \bf 2 & \bf 3 \\ \bf 0 &3/8&0& 0 & 0 \\ \bf 1 &3/4&1/2& 0 & \\ \bf 2 & 1 & 1 & & \\ \bf 3 & 1 & & & \\ \end{matrix} Initially, every triplet has potential $3/8$, so the potential-sum is $3|T|/8$. Player 1's strategy is: pick an element that maximizes the potential-sum. Suppose that element is $x$ and the element picked next by player 2 is $y$. I claim that the potential-sum after these two moves weakly increases: The potential of a triplet that contains neither $x$ nor $y$ does not change. The potential of a triplet that contains both $x$ and $y$ changes from $P(a,b)$ to $P(a+1,b+1)$, which is always at least as large. The potential of a triplet that contains $x$ and not $y$ increases by $P(a+1,b)-P(a,b)$; The potential of a triplet that contains $y$ and not $x$ decreases by $P(a,b)-P(a,b+1)$; it is easy to check in the table that $P(a,b)-P(a,b+1)\leq P(a+1,b)-P(a,b)$ (the decrease when going right is at most the increase when going down). 
All in all, the potential-sum increases by the sum of $P(a+1,b)-P(a,b)$ over all triplets that contain $x$, and decreases by (at most) the sum of $P(a+1,b)-P(a,b)$ over all triplets that contain $y$. By the choice of $x$, the first sum is weakly larger. So the potential-sum weakly increases. So the final potential-sum is at least $3|T|/8$. At the end, a triplet has potential $1$ ($0$) iff it is won by player 1 (2), so the final potential-sum equals player 1's score. Answer: This isn't a complete proof, but here's some justification for why known conjectures imply that the game may be computationally hard to solve. Namely, I'm going to argue that finding the correct first move is already probably tricky. As a first step, we argue that the triplets game is harder (in the appropriate sense) than the $\textrm{Denser Induced Subgraph}$ game defined as follows. Two players, A and B, alternate picking vertices on a common graph G. Vertices can only be picked once. When no more vertices remain to be picked, the subgraphs induced by each player's choices are compared. The player with the larger number of induced edges is declared the winner. Proof outline: Given an instance of the $\textrm{Denser Induced Subgraph}$ game with graph $G = (V,E)$, we construct a $\textrm{Triplets}$ instance as follows. Without loss of generality, assume $G$ has no isolated vertices. The set of elements in our instance will be $V \cup (E \times \{0,1\})$. For each edge $e \in E$ between vertices $u$ and $v$, we have two triplets of the form $(u, v, (e, 0))$ and $(u, v, (e, 1))$. Additionally, for each vertex $v \in V$, we throw in four additional triplets of the form $(v,v,v)$. This completes the reduction. Now imagine the proceedings of the $\mathrm{Triplets}$ game. As long as some vertex from $V$ has not been picked, the choice of such a vertex strictly dominates that of any element from $E \times \{0,1\}$.
Indeed, picking an element from $E \times \{0,1\}$ only ever gives a potential score increase of $1$ (and also blocks the opponent from at most $1$ point), while picking an element from $V$ automatically gives a score increase of $4$, with potential for more. Therefore, under optimal play, the first $|V|$ rounds will correspond to both players picking elements from $V$. After these rounds, the players alternate picking up the even-sized collection of triples that have not yet been claimed, which correspond to exactly the edges whose endpoints have been picked once by each player. Any reasonable strategy here, for either player, ends up picking up exactly half of those available triplets. The game ends with a sequence of NOOP moves on the already-picked-up triplets. Let $V_A$ be the vertices chosen by player A, and $V_B$ those chosen by B. The score for player A is the sum of (i) four points per vertex chosen from the $(v,v,v)$ triplets (ii) two points per induced edge created from these vertices, and (iii) one point for each split edge. Therefore, the score is $4|V|/2 + 2|E[V_A]| + (|E| - |E[V_A]| - |E[V_B]|)$, where $E[S]$ is the set of edges induced by $S$. Since the first and last terms are ultimately equal for both players, the player with the larger induced subgraph wins. $\square$ With this in mind, we can appeal to some of the work in the literature of detecting dense subgraphs. There's a ton of relevant work out there on this that one can appeal to, but for simplicity of analysis I'll appeal to a particular conjecture on the difficulty of finding dense random graphs in sparse random graphs (I believe that this dependence can be removed with just a little more thought, but this is not meant to be a formal proof). The Planted Dense Subgraph Problem (informal). Let $G = (V,E)$ be a random graph sampled from the Erdos-Renyi distribution $G(n, 1/\sqrt{n})$. With probability $1/2$, we return $G$ as is. 
Otherwise, we let $V'$ be a uniformly random subset of $V$ of size $\sqrt{n}$. For each $u,v$ pair in $V'$, we add an edge $(u,v)$ to $E$ independently with probability $n^{-1/4}$. Only then do we return $G$. The problem is to, given only the output of the above, correctly identify whether or not the Erdos-Renyi graph was augmented. The Planted Dense Subgraph Conjecture (informal). No polynomial-time algorithm can solve the Planted Dense Subgraph problem with probability at least $51\%$. Suppose that the graph was augmented, and there is an unusually dense component. Since no poly-time algorithm can reliably detect this dense subgraph's presence, it also cannot reliably sample a vertex from this dense component (e.g. due to self-reducibility). Therefore, since (from Player A's perspective) it is selecting a random vertex from a pure Erdos-Renyi graph, it does not matter much which vertex A picks (up to a small change in its scoring that will end up not mattering1). However, if Player B is omniscient, it can reliably sample a vertex from the dense component on its first shot. This process repeats a superconstant number of times before B's choices begin unveiling the dense component to A (otherwise, a polynomial-time algorithm can traverse every path in this game tree to constant depth in order to solve the Planted Dense Subgraph problem). If the process repeated $r$ times before A catches on, then the first $r-1$ rounds can be seen as "freebie" rounds for B, while the $r$th round is the beginning of A and B fighting over the dense component, with B getting the first move and (by your strategy stealing argument) a winning subset. Once the dense component is exhausted, the two players resume fighting over the rest of the graph. While A has chosen $r$ more vertices here than B has, B's first $r$ vertices are worth $\Omega(n^{1/4})$ times as much, and thus B is ultimately the winner. 1. 
By some type of concentration and pigeonhole argument, the difference between making the first choice and the second choice should not be more than $O(1)$ in the final score. Therefore, despite the game being very weakly solved for player A, it's unlikely that it's computationally feasible for A to play out even the first move of the winning strategy. An approach based on the hardness of the "normal" densest subgraph problem should not be difficult to attain here, either, and composing the reduction with a hardness of approximation result likely can be used to get some kind of hardness based on more mainstream conjectures (e.g. ETH). I'm not sure what the difficulty of moving up to NP-hardness (or beyond) may be.
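The question's explicit $3|T|/8$ strategy is easy to simulate. Here is a sketch (the instance generation and the random opponent are invented for illustration) that plays the potential-greedy strategy for player 1 and lets player 2 move at random:

```python
import random
from fractions import Fraction

# Potential table P[(a, b)] from the question, where a / b is the number of
# elements of a triplet taken so far by player 1 / player 2.
P = {(0, 0): Fraction(3, 8), (0, 1): Fraction(0), (0, 2): Fraction(0), (0, 3): Fraction(0),
     (1, 0): Fraction(3, 4), (1, 1): Fraction(1, 2), (1, 2): Fraction(0),
     (2, 0): Fraction(1), (2, 1): Fraction(1), (3, 0): Fraction(1)}

def play(elements, triplets, rng):
    """Player 1 greedily maximizes the potential-sum; player 2 moves at random.
    Returns player 1's final score: triplets in which they hold >= 2 elements."""
    counts = {i: (0, 0) for i in range(len(triplets))}
    remaining = set(elements)

    def gain(x):
        # Total potential increase if player 1 takes element x now.
        g = Fraction(0)
        for i, t in enumerate(triplets):
            if x in t:
                a, b = counts[i]
                g += P[(a + 1, b)] - P[(a, b)]
        return g

    player1_turn = True
    while remaining:
        x = max(remaining, key=gain) if player1_turn else rng.choice(sorted(remaining))
        remaining.remove(x)
        for i, t in enumerate(triplets):
            if x in t:
                a, b = counts[i]
                counts[i] = (a + 1, b) if player1_turn else (a, b + 1)
        player1_turn = not player1_turn
    return sum(1 for a, b in counts.values() if a >= 2)
```

By the invariant argument above, player 1's final score is at least $3|T|/8$ against any opponent; the simulation just spot-checks this on random instances with distinct-element triplets.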
{ "domain": "cstheory.stackexchange", "id": 4557, "tags": "gt.game-theory, combinatorial-game-theory" }
What is the complexity of vertex cover on k-partite graphs?
Question: Given a k-partite graph which is already partitioned into k parts, what is the complexity of finding a vertex cover of minimum size? I guess that it's NP-hard, but couldn't yet prove it or find a reference for it. I'm also interested in the dependence on k. Answer: For bipartite graphs, vertex cover is polynomially solvable by routine techniques from matching theory (by Kőnig's theorem, a minimum vertex cover has the same size as a maximum matching and can be constructed from one). For $k$-partite graphs with $k\ge3$, we observe the following: vertex cover is NP-complete on cubic graphs, and by Brooks' theorem every cubic graph (except $K_4$) is 3-colorable and hence 3-partite.
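For the bipartite case, the matching-theory route is constructive. A minimal pure-Python sketch (names are illustrative): Kuhn's augmenting-path algorithm finds a maximum matching, and the Kőnig construction turns it into a minimum vertex cover.

```python
def maximum_matching(left, right, adj):
    """Kuhn's augmenting-path algorithm. adj maps each left vertex to its neighbors."""
    match_l = {u: None for u in left}
    match_r = {v: None for v in right}

    def augment(u, visited):
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                if match_r[v] is None or augment(match_r[v], visited):
                    match_l[u], match_r[v] = v, u
                    return True
        return False

    for u in left:
        augment(u, set())
    return match_l, match_r

def min_vertex_cover(left, right, adj):
    """Koenig's construction: the cover is (L - Z) | (R & Z), where Z is the set of
    vertices reachable from unmatched left vertices along alternating paths."""
    match_l, match_r = maximum_matching(left, right, adj)
    z_left = {u for u in left if match_l[u] is None}
    z_right = set()
    stack = list(z_left)
    while stack:
        u = stack.pop()
        for v in adj[u]:            # unmatched edges traversed left -> right
            if v not in z_right:
                z_right.add(v)
                w = match_r[v]      # matched edges traversed right -> left
                if w is not None and w not in z_left:
                    z_left.add(w)
                    stack.append(w)
    return (set(left) - z_left) | z_right
```

On a small path-like example the cover size coincides with the matching size, as Kőnig's theorem promises.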
{ "domain": "cstheory.stackexchange", "id": 3635, "tags": "graph-theory, np-hardness, complexity" }
Commercial support for ROS
Question: Are there companies or individuals that can or do provide commercial support for ROS, i.e. to help with specific components of ROS? Originally posted by Greg S on ROS Answers with karma: 36 on 2013-05-24 Post score: 2 Answer: Here is a good list. SwRI is my favorite ;) http://www.osrfoundation.org/consultants-network/ Originally posted by sedwards with karma: 1601 on 2013-05-24 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 14284, "tags": "ros" }
Faraday Law: Exercise
Question: Exercise: Two circular regions $R_1$ and $R_2$ are represented in the image, with radii $r_1 = 21.2\,\mathrm{cm}$ and $r_2 = 32.2\,\mathrm{cm}$ respectively. In $R_1$ there is a uniform magnetic field $B_1 = 48\,\mathrm{mT}$ perpendicular to the sheet and pointing into it, and in $R_2$ a uniform magnetic field $B_2 = 77.2\,\mathrm{mT}$ perpendicular to the sheet and pointing out of it (all edge effects are neglected). Both fields decrease at $8.5\,\mathrm{mT/s}$. Calculate $\int \vec{E}\cdot d\vec{s}$ for each of the three paths shown in the figure. How can I approach this problem? I don't understand what kind of reasoning I have to do about the decrease of the magnetic field. Thanks Answer: For the first circle, using Faraday's law, you get $$\int \vec{E}\cdot d\vec{s}=-\frac{d\Phi}{dt}=-A\frac{dB}{dt}$$ $$=-\pi(0.212)^2(8.5 \times 10^{-3}\ \mathrm{T/s}) \approx -1.2\times10^{-3}\ \mathrm{V}$$
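Numerically, the same computation covers both regions. A sketch (the counterclockwise orientation convention, and the guess that the third path encloses both regions, are assumptions, since the figure is not shown):

```python
import math

r1, r2 = 0.212, 0.322    # radii in meters
dB_dt = 8.5e-3           # both field magnitudes decrease at 8.5 mT/s

# Counterclockwise orientation (area vector out of the page).
# R1: B into the page and decreasing  ->  EMF = -A1 * dB/dt
emf1 = -math.pi * r1**2 * dB_dt
# R2: B out of the page and decreasing -> opposite sign: EMF = +A2 * dB/dt
emf2 = math.pi * r2**2 * dB_dt
# A path enclosing both regions picks up both contributions:
emf3 = emf1 + emf2
```

This gives roughly -1.2 mV for the first path and +2.8 mV for the second; the fluxes of the two regions partially cancel on the combined path.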
{ "domain": "physics.stackexchange", "id": 43274, "tags": "homework-and-exercises, electromagnetism" }
Softmax in Sentiment analysis
Question: I am going to do Sentiment Analysis over some tweet texts. So, in summary, we have three classes: Positive, Neutral, Negative. If I apply Softmax in the last layer, I will have the probability of each class for each piece of text. We know that in Softmax: P(pos) + P(neu) + P(neg) = 1 My question: suppose that we have a piece of text with a Positive label. So, do we have to have these probabilities in this order: P(pos) > P(neu) > P(neg) What does it mean when we have them in this order: P(pos) > P(neg) > P(neu) Can we conclude anything from this? For example, can we say with confidence that the label is Positive, as before? Answer: If you have a text with a positive label and your model thinks it is positive, then the positive probability your model outputs will be the largest. If you ask your model which is the second most likely label for this text sample, your model's answer is the class that has the second largest probability in the output, and so on. In summary, your model ranks the classes from most likely to least likely for your sample. So the order of the probabilities depends on your model's belief: the most likely class will have the largest probability and the least likely class will have the smallest. My question: suppose that we have a piece of text with a Positive label. So, do we have to have these probabilities in this order: P(pos) > P(neu) > P(neg) Not exactly; it depends on your model's belief, which depends on how well your data expresses the ideas of positive, neutral and negative. But usually, when using logistic regression to classify the three classes positive, neutral and negative, people will set thresholds in the probability range, for example: > 0.7 is positive, [0.4, 0.7] is neutral, and the rest is negative. By doing this, we implicitly assume that the probabilities indeed have the order you described. This is because we assume that there is an order between positive, neutral and negative, such that neutral lies between positive and negative. But if we are dealing with another problem, for example classifying dog, cat and fish, then I don't think we can assume such an order. What does it mean when we have them in this order: P(pos) > P(neg) > P(neu) It means that the model believes the most likely class for your sample is positive, the second most likely is negative, and the least likely is neutral. Can we conclude anything from this? For example, can we say with confidence that the label is Positive, as before? In my opinion, the model is confident in its answer; if we choose to believe it, then we can confidently say that the sample's class is Positive, as before.
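The sum-to-one constraint and the ranking behavior can be seen in a tiny softmax sketch (the logit values are invented for illustration):

```python
import math

def softmax(logits):
    # subtract the max for numerical stability
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical logits for (pos, neu, neg) on some tweet
p_pos, p_neu, p_neg = softmax([2.1, 0.3, 1.0])
```

The three probabilities always sum to 1, and their ranking simply mirrors the ranking of the logits; here the model's belief is P(pos) > P(neg) > P(neu), i.e. positive is most likely and neutral least likely.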
{ "domain": "datascience.stackexchange", "id": 10308, "tags": "nlp, sentiment-analysis" }
Tracks in cloud chambers (Mott’s problem) and quantum state reduction (collapse)
Question: After reading Mott's paper https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1929.0205 The wave mechanics of α-ray tracks Mott N.F., Proc. R. Soc. Lond. A, 126, 79-84, 1929. my simple question is why this kind of reasoning isn't sufficient to show that any kind of quantum state reduction (projection postulate, collapse of the wave function) is unnecessary. Of course this does not rule out the idea of using quantum state reduction as a tool in some calculations, but as presented in some books and lecture notes it seems that quantum state reduction is still believed to be an essential, indispensable postulate for quantum mechanics. (Of course I know that there are non-collapse interpretations; to my knowledge they haven't been ruled out experimentally but agree with observations.) Answer: I think the point of Mott's paper is that tracks, rather than other patterns, appear because there is enough energy just for that. Taking wavefunctions as configurations, you should study the wave-mechanical configurations in a hypersphere and then translate them into 3d space. Collapse is a nonsense cartoon, tied only to measurement, which in the end is where the paradigm of quanta really comes from. If it were true, all matter should radiate like hell! (since neither momentum nor energy would be conserved in a collapse) I think it is time to admit that we have not understood the process. It is clearly a non-linear dynamics where an instability drives the whole system, particle plus apparatus, into a final state with classical outcomes. I think that quanta are just an illusion of this process. And this is also the source of the big disconnect we have in physics between the micro and the macro. Here one can expect the next big thing.
{ "domain": "physics.stackexchange", "id": 96460, "tags": "quantum-mechanics, quantum-interpretations" }
Barcode of a graph
Question: Using persistent homology, we can analyze the (topological) shape of a cloud of points using the following three-step method:

convert the point set into a simplicial complex (and there are a few different ways of doing this) parameterized by a "noise" parameter

compute the homology groups of this complex (again parameterized by the parameter)

look at the evolution of the groups as the parameter evolves.

The "time of life" of the different groups looks like a collection of intervals, which is called the "barcode" of the shape. Is there an easy explanation of what the barcode looks like if the simplicial complex is merely a 1-skeleton (i.e. a graph)? In other words, suppose we start with a graph (rather than a point set) and then do the remaining two steps as above. Answer: Betti-0 will be one interval for each vertex, with one of the involved intervals vanishing any time an edge connects two components. This will be very similar to a trace of a union-find running on the graph. Betti-1 will be one interval for each essential closed loop, corresponding to a continually updated basis of the cycle space. Since it is a graph, these will appear whenever an edge is added that does not connect two disjoint components, and they never disappear again.
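The Betti-0 description can be sketched directly as a union-find trace. A toy implementation (the filtration values in the example are invented, and all vertices are assumed to be born at time 0):

```python
def betti0_barcode(n_vertices, edges):
    """edges: iterable of (filtration_value, u, v); vertices enter at time 0.
    Returns Betti-0 bars as (birth, death) pairs, with float('inf') marking
    components that never die."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    bars = []
    for t, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            # the edge closes a cycle: a Betti-1 bar is born at t instead
            continue
        bars.append((0.0, t))               # one component dies at time t
        parent[rv] = ru
    roots = {find(x) for x in range(n_vertices)}
    bars.extend((0.0, float('inf')) for _ in roots)
    return bars
```

For a triangle filled in edge by edge plus one isolated vertex, the first two edges each kill a component, the third edge creates a cycle (a Betti-1 interval), and two infinite Betti-0 bars survive.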
{ "domain": "cstheory.stackexchange", "id": 2714, "tags": "cg.comp-geom, topology" }
How to create a ROS node that can start and stop counting based on what message the other node its subscribed to sends
Question: Hello, I am basically trying to get a node that both publishes and subscribes. Imagine I have two nodes: node1 publishes, and count_node subscribes and publishes as well. When node1 publishes "start count" I want count_node to start counting and publish the count value. If node1 publishes "stop count" while count_node is counting, I want count_node to stop the active count. My attempt is below in code. I use ROS Melodic on Ubuntu 18.04. My attempt thus far fails because when I receive the message to start the count, the callback function calls a function (startCount) that uses a while loop to increment. Thus, until the count is finished, count_node cannot process the message to stop the count, and the stopCount function is called AFTER the count is finished, not while it's counting. Is there a way to do what I want? Here is my attempt at count_node:

import rospy
from std_msgs.msg import String
from std_msgs.msg import Int32

def callback(data):
    rospy.loginfo(rospy.get_caller_id() + ' I heard: %s', data.data)
    if (data.data == "start count"):
        startCount()
    elif (data.data == "stop count"):
        stopCount()

def startCount():
    percent = 0
    while percent < 101:
        rospy.loginfo(percent)
        pub.publish(percent)
        percent = percent + 1
        rate.sleep()

def stopCount():
    percent = 0
    rospy.loginfo(percent)
    pub.publish(percent)

def listener():
    # In ROS, nodes are uniquely named. If two nodes with the same
    # name are launched, the previous one is kicked off. The
    # anonymous=True flag means that rospy will choose a unique
    # name for our 'listener' node so that multiple listeners can
    # run simultaneously.

    # check for message on Topic
    rospy.Subscriber('brwsrButtons', String, callback)
    rospy.loginfo("hello")
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()

if __name__ == '__main__':
    # create a unique node
    rospy.init_node('count_node', anonymous=True)
    # create a publisher object
    pub = rospy.Publisher('progress', Int32, queue_size=10)
    rate = rospy.Rate(10)  # 10Hz
    # start the subscribing and publishing process
    listener()

New updated code:

import rospy
from std_msgs.msg import String
from std_msgs.msg import Int32
from multiprocessing import Process, Pipe
import thread

keepCounting = 0
percent = 0

def callback(data, keepCounting):
    # rospy.loginfo(rospy.get_caller_id() + ' I heard: %s', data.data)
    if (data.data == "Started Sanitizing the Room"):
        rospy.loginfo(rospy.get_caller_id() + ' I heard: %s', data.data)
        keepCounting = 1
    elif (data.data == "Stopped Sanitizing the Room"):
        keepCounting = 0

def countingProc(percent, keepCounting):
    while percent < 101 and keepCounting == 0:
        rospy.loginfo(percent)
        pub.publish(percent)
        percent += 1
        rate.sleep()
    # elif keepCounting == 0:

def listener(percent, keepCounting):
    # In ROS, nodes are uniquely named. If two nodes with the same
    # name are launched, the previous one is kicked off. The
    # anonymous=True flag means that rospy will choose a unique
    # name for our 'listener' node so that multiple listeners can
    # run simultaneously.

    # check for message on Topic
    rospy.Subscriber('brwsrButtons', String, callback, (keepCounting))
    rospy.loginfo("hello")
    thread.start_new_thread(countingProc, (percent, keepCounting,))
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()

if __name__ == '__main__':
    # create a unique node
    rospy.init_node('percentageHandler', anonymous=True)
    # create a publisher object
    pub = rospy.Publisher('progress', Int32, queue_size=10)
    rate = rospy.Rate(10)  # 10Hz
    # start the subscribing and publishing process
    listener(percent, keepCounting)

Originally posted by ROSNewbie on ROS Answers with karma: 5 on 2021-06-25 Post score: 0 Answer: As mentioned by @janindu, you can use threads to accomplish this task. At the same time, it is possible for you to restructure your code to accomplish the same task as well. Let's try to understand your task: When node1 publishes "Start Sanitizing the Room", count_node should start counting from 0 to 100 at a predefined rate. count_node should therefore start counting when it receives the std_msgs String data "Start Sanitizing the Room", and should stop counting when it receives the std_msgs String data "Stop Sanitizing the Room". The count can be seen from a rospy.loginfo statement or from the topic 'progress' that it is publishing to.
#!/usr/bin/env python
import rospy
from std_msgs.msg import String, Int32

keepCounting = True
count = 0

def callback(data):
    global keepCounting, count  # global keyword allows us to access global variables
    if (data.data == "Start Sanitizing the Room"):
        keepCounting = True
    elif (data.data == "Stop Sanitizing the Room"):
        keepCounting = False
        # Restart the count
        count = 0
    else:
        # There are 5 logger levels, debug, info, warn, error, and fatal
        rospy.logwarn("Received the wrong data, please check")

def listener():
    global keepCounting, count  # global keyword allows us to access global variables
    # Start a subscriber to subscribe to String type data and pass the data to the callback function
    rospy.Subscriber('brwsrButtons', String, callback)
    # There are 5 logger levels, debug, info, warn, error, and fatal
    rospy.loginfo("Started listener method")
    # Run this node at 10hz
    r = rospy.Rate(10)
    # As long as the node is not shutdown keep running this while loop
    while not rospy.is_shutdown():
        if (count < 101 and keepCounting):
            rospy.loginfo("The count is: " + str(count))
            pub.publish(count)
            count += 1
        r.sleep()

if __name__ == '__main__':
    # Take note that if anonymous is set to True, the name of the node will be randomised
    # Something like /countageHandler_13506_1624779538099
    rospy.init_node('countageHandler', anonymous=True)
    # Start a publisher to publish Int32 type data which buffers up to 10 pieces of the data
    pub = rospy.Publisher('progress', Int32, queue_size=10)
    # Call the listener function defined above
    listener()

The reason this works is because you have two threads of execution running:

1. The while loop in listener
2. The callback function based on the subscriber

As you initially intended in your original code, the count is incremented at a rate of 10hz. Once the cleaning stops, the keepCounting flag is set to False and the count is reset to 0. When cleaning starts again, the keepCounting flag is set to True and both the cleaning and the count resume.
The keepCounting flag is a global variable because it is defined outside any of the functions at the top of the code. It can be accessed within each of the functions because of the "global" keyword. Hope this helps! Originally posted by ParkerRobert with karma: 113 on 2021-06-27 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ROSNewbie on 2021-06-27: Thanks for the help, I have used this solution and it worked wonderfully, I'm glad to see other ways to utilize ROS functionality, all the best! Comment by ParkerRobert on 2021-06-29: Hi @ROSNewbie great to hear! You can go ahead and accept the answer you think is best by clicking on the green arrow in the circle below each answer, that way other users will be able to follow the answer as well. Cheers!
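The flag-plus-loop pattern in this answer can be exercised without ROS at all. Below is a minimal ROS-free sketch of the same idea; the lock and all names here are illustrative assumptions added for the demo, not part of the original answer's code:

```python
# ROS-free sketch of the answer's pattern: a counting loop in one thread,
# gated by a global flag that a message callback toggles.
import threading
import time

keep_counting = True
count = 0
lock = threading.Lock()

def callback(msg):
    """Stands in for the rospy subscriber callback."""
    global keep_counting, count
    with lock:
        if msg == "Start Sanitizing the Room":
            keep_counting = True
        elif msg == "Stop Sanitizing the Room":
            keep_counting = False
            count = 0  # restart the count, as in the answer

def counting_loop(stop_event, hz=100):
    global count
    while not stop_event.is_set():
        with lock:
            if count < 101 and keep_counting:
                count += 1
        time.sleep(1.0 / hz)

stop = threading.Event()
t = threading.Thread(target=counting_loop, args=(stop,))
t.start()
time.sleep(0.1)                         # let the counter run briefly
callback("Stop Sanitizing the Room")    # flag flips, count resets to 0
time.sleep(0.05)
with lock:
    paused = count                      # no increments while the flag is False
stop.set()
t.join()
print("count while paused:", paused)
```

Because both the loop body and the callback hold the same lock, the reset and the flag flip are atomic with respect to the counter, so the count stays at 0 once stopping has been requested.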
{ "domain": "robotics.stackexchange", "id": 36582, "tags": "ros-melodic" }
Simple server log backup script utilising AWS
Question: I have server side log files written to various directories throughout the course of a week. I have the need to optionally compress, and archive these files to AWS. The script I have come up with takes a config file defining the include glob to look for matches, and offers optional removal of the source files once the process has completed.

    python aws_bkup.py path/to/config.cfg

A sample config file can be found in the project directory: https://github.com/twanas/aws-bkup

I would be most appreciative of feedback on the code style and areas it can be improved.

from __future__ import print_function

import re
import errno
import os
from glob import glob
from os.path import join, basename, splitext
from os import environ, remove
from shutil import copy, rmtree
from uuid import uuid4
import gzip
from configparser import ConfigParser
from datetime import date, timedelta
import subprocess


def gz(src, dest):
    """ Compresses a file to *.gz

    Parameters
    ----------
    src: filepath of file to be compressesd
    dest: destination filepath
    """
    filename = splitext(basename(src))[0]
    destpath = join(dest, '{}.gz'.format(filename))
    blocksize = 1 << 16  #64kB
    with open(src) as f_in:
        f_out = gzip.open(destpath, 'wb')
        while True:
            block = f_in.read(blocksize)
            if block == '':
                break
            f_out.write(block)
        f_out.close()

def aws_sync(src, dest):
    """ Synchronise a local directory to aws

    Parameters
    ----------
    src: local path
    dest: aws bucket
    """
    cmd = 'aws s3 sync {} {}'.format(src, dest)
    push = subprocess.call(cmd, shell=True)

def today():
    """ Returns a string format of today's date """
    return date.today().strftime('%Y%m%d')

def fwe():
    """ Returns a string format of the next friday's date """
    d = date.today()
    while d.weekday() != 4:
        d += timedelta(1)
    return d

def regex_match(string, pattern):
    """ Returns if there is a match between parameter and regex pattern """
    pattern = re.compile(pattern)
    return pattern.match(string)

def mkdir_p(path):
    try:
        os.makedirs(path)
    except OSError as exc:  # Python >2.5
        if exc.errno == errno.EEXIST and os.path.isdir(path):
            pass
        else:
            raise

def aws_bkup(section, include, exclude, s3root, categorize_weekly=True, compress=True, remove_source=True):
    """ Transfers a backup of any local files matching the user's criteria to AWS.

    Parameters
    ----------
    include: regex pattern to use for the file inclusion(s)
    exclude: regex pattern to use for the file exclusion(s)
    s3root: AWS root in which to send the backup
    categorize_weekly: switch between daily and weekly folder groupings
    compress: switch to compress outbound files to AWS
    """
    folder = '{}'.format(fwe() if categorize_weekly else today())
    tmp_root = join('/tmp', str(uuid4()))
    tmp_dir = join(tmp_root, folder)
    mkdir_p(tmp_dir)

    for file in glob(include):
        if regex_match(file, exclude):
            continue
        print('Processing: {}'.format(file))
        if compress:
            gz(file, tmp_dir)
        else:
            copy(file, tmp_dir)
        if remove_source:
            remove(file)

    aws_dest = join(s3root, section)
    print('Syncronizing {} to s3'.format(tmp_dir))
    aws_sync(tmp_root, aws_dest)

    if os.path.exists(tmp_root):
        rmtree(tmp_root)

    print('Done')

if __name__ == "__main__":
    import sys
    args = sys.argv
    if len(args) < 2:
        print("Usage: python -m aws-bkup /path/to/config.cfg")
        sys.exit()

    config = ConfigParser()
    config.read(args[1])

    environ['AWS_ACCESS_KEY_ID'] = config.get('aws', 'access_id')
    environ['AWS_SECRET_ACCESS_KEY'] = config.get('aws', 'secret_key')
    environ['AWS_DEFAULT_REGION'] = config.get('aws', 'region')

    for section in config.sections():
        if section != 'aws':
            print('Starting {}'.format(section))
            aws_bkup(
                section,
                config.get(section, 'include'),
                config.get(section, 'exclude'),
                config.get('aws', 's3root'),
                config.getboolean(section, 'categorize_weekly'),
                config.getboolean(section, 'compress'),
                config.getboolean(section, 'remove_source')
            )

Answer: After a quick read-through, I've spotted a few items:

with not used for f_out

The code:

    with open(src) as f_in:
        f_out = gzip.open(destpath, 'wb')
        #...
        f_out.close()

should be replaced with:

    with open(src) as f_in, gzip.open(destpath, 'wb') as f_out:
        #...

Reg-ex pattern repeatedly compiled

The function regex_match() takes a string and compiles it to a pattern, and then matches a string to that pattern. The same pattern string is repeatedly passed to regex_match. This string should be compiled to a pattern by the caller, and the resulting pattern reused for each match. This means the calls to regex_match could be replaced by

    exclude_pattern.match(file)

Argument quoting

If src or dest contain spaces, this command may become confused.

    cmd = 'aws s3 sync {} {}'.format(src, dest)
    push = subprocess.call(cmd, shell=True)

Since you are using the shell=True argument, it may also be a vector for arbitrary command injection! Instead of formatting the command into a string, with proper quoting, and requiring the .call() command to parse it, you can simply pass in an array of arguments to the call. No need to worry about spaces or proper escaping/quoting -- and arbitrary command injection becomes much harder:

    cmd = ['aws', 's3', 'sync', src, dest]
    push = subprocess.call(cmd)

Additional notes:

push is neither returned nor used. Also, while subprocess.call(...) is still acceptable, as of Python 3.5 subprocess.run(...) is the preferred interface.
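For reference, here is one way gz() might look with the review's with-statement suggestion applied; the manual read loop is swapped for shutil.copyfileobj. This is a sketch under the same naming assumptions as the original script, not the author's code:

```python
# gz() rewritten so both file objects are managed by `with`; copyfileobj
# streams the data in 64 kB blocks, like the original manual loop.
import gzip
import shutil
import tempfile
from os.path import basename, join, splitext

def gz(src, dest):
    """Compress src into dest/<name>.gz and return the new path."""
    destpath = join(dest, '{}.gz'.format(splitext(basename(src))[0]))
    with open(src, 'rb') as f_in, gzip.open(destpath, 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out, length=1 << 16)  # 64 kB blocks
    return destpath

# quick round-trip check in a temporary directory
tmp = tempfile.mkdtemp()
src = join(tmp, 'app.log')
with open(src, 'wb') as f:
    f.write(b'line\n' * 1000)
out = gz(src, tmp)
with gzip.open(out, 'rb') as f:
    data = f.read()
print(out.endswith('app.gz'), data == b'line\n' * 1000)
```

Opening the source in binary mode ('rb') also sidesteps the original code's fragile `block == ''` sentinel, which only works for text-mode reads.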
{ "domain": "codereview.stackexchange", "id": 44057, "tags": "python, file-system, logging, amazon-web-services" }
Is it possible to perform quantum computation between different Hilbert spaces?
Question: Let us consider a protocol between Alice and Bob. Alice works in a $2^n$-dimensional Hilbert space $\mathcal{H}_A$, using $n$ qubits. Bob works in a $(1+2^n)$-dimensional Hilbert space using qudits. For instance, for $n=128$, Bob would work with two high-dimensional qudits, since $1+2^{128}$ is the product of two large primes. Since $\mathcal{H}_B$ is isomorphic to $\mathbb{C}^{2^n+1}$, it is possible to write any state $|\psi\rangle_{\mathcal{H}_B}\in\mathcal{H}_B$ as: $$|\psi\rangle_{\mathcal{H}_B}=\sum_{i=0}^{2^n}\psi_i|i\rangle_{\mathcal{H}_B}\,.$$ Let us say that Alice prepares the following state: $$|\varphi\rangle_{\mathcal{H}_A} = \sum_{i=0}^{2^n-1}\varphi_i|i\rangle_{\mathcal{H}_A}$$ and sends it to Bob. Now, Bob wants to transform this state into: $$|\varphi\rangle_{\mathcal{H}_B} = \sum_{i=0}^{2^n-1}\varphi_i|i\rangle_{\mathcal{H}_B}+0\left|2^n\right\rangle_{\mathcal{H}_B}$$ and to apply an unitary matrix $\mathbf{U}_{\mathcal{H}_B}$ such that $\mathbf{U}_{\mathcal{H}_B}\left|2^n\right\rangle_{\mathcal{H}_B}=\left|2^n\right\rangle_{\mathcal{H}_B}$ and $\mathbf{U}_{\mathcal{H}_B}^\dagger\left|2^n\right\rangle_{\mathcal{H}_B}=\left|2^n\right\rangle_{\mathcal{H}_B}$ (that is, $\mathbf{U}_{\mathcal{H}_B}$ does not add the $\left|2^n\right\rangle_{\mathcal{H}_B}$ state in the superposition for any other state and is defined as the identity for this state). This would transform the state into: $$\mathbf{U}_{\mathcal{H}_B}|\varphi\rangle_{\mathcal{H}_B}=\sum_{i=0}^{2^n-1}\alpha_i|i\rangle_{\mathcal{H}_B}\,.$$ Finally, Bob transforms back this quantum state into a $n$-qubit quantum state and sends it back to Alice. Does this make sense? Is it possible to consider such a protocol, where one converts a quantum state lying in one Hilbert space to one in another Hilbert space? 
Answer: An $n$-level quantum system is an $n$-level quantum system, no matter if it's stored on $\lceil \log_2 n \rceil$ qubits or on $\lceil \log_3 n \rceil$ qutrits or other combinations of qudits. There would be costs to interoperating computers that use qudits with different numbers of levels, because you have to do work to convert between them (e.g. having to run iterated long division under superposition to convert from binary to ternary, or wasting space on one to make conversion unnecessary by avoiding using the higher qudit levels that are not available on the other), but there's no fundamental barrier to interoperating them.
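A classical analogue makes the conversion cost in the answer concrete: relabelling computational-basis states between a qubit register and a qutrit register is exactly a change of digit base. The numbers below are illustrative choices, not from the question:

```python
# The same basis index written in base 2 (qubits) vs base 3 (qutrits).
def qudits_needed(levels, d):
    """Smallest n with d**n >= levels."""
    n, cap = 0, 1
    while cap < levels:
        n += 1
        cap *= d
    return n

def to_digits(i, base, width):
    """Little-endian digits of i in the given base, padded to width."""
    digits = []
    for _ in range(width):
        i, r = divmod(i, base)
        digits.append(r)
    return digits

def from_digits(digits, base):
    return sum(d * base**k for k, d in enumerate(digits))

levels = 2**7                        # a 128-level system
qubits = qudits_needed(levels, 2)    # 7 qubits
qutrits = qudits_needed(levels, 3)   # 5 qutrits, since 3**5 = 243 >= 128

i = 100
assert from_digits(to_digits(i, 2, qubits), 2) == i
assert from_digits(to_digits(i, 3, qutrits), 3) == i
print(qubits, qutrits)
```

Doing this relabelling coherently, under superposition, is the "iterated long division" work the answer refers to.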
{ "domain": "quantumcomputing.stackexchange", "id": 2659, "tags": "quantum-state, experimental-realization" }
Comparing the Lagrangian form of Newton's law & the Reynolds transport theorem
Question: If I was interested in deriving an equation for the conservation of momentum for a fluid, I could write down an expression for the change in momentum density of a fluid point using the Reynolds transport theorem: $$\frac{\partial \rho \vec{v}}{\partial t} + \nabla \cdot (\rho \vec{v} \otimes \vec{v}) = \vec{f}$$ I could also try to write down Newton's law from a Lagrangian perspective for a parcel of fluid: $$\frac{D(\rho \vec{v})}{Dt} = \vec{f}$$ Transforming this into an Eulerian perspective: $$\frac{\partial \rho \vec{v}}{\partial t} + \nabla (\rho \vec{v})\cdot\vec{v} = \vec{f}$$ This expression differs from the expression derived from the Reynolds transport theorem by a $\rho \vec{v}(\nabla \cdot \vec{v})$ term; both expressions are the same for an incompressible flow field, but not when the flow is compressible. Did I make a mistake in applying Newton's law, or is there a valid reason for the discrepancy? Answer: If the left-hand side is differentiated properly using the product rule for differentiation, we obtain $$\frac{\partial \rho \vec{v}}{\partial t} + \nabla \cdot (\rho \vec{v} \otimes \vec{v}) =\rho\frac{\partial \vec{v}}{\partial t}+\vec{v}\frac{\partial \rho}{\partial t}+\vec{v}\,\nabla\cdot(\rho \vec{v})+\rho (\vec{v}\cdot \nabla) \vec{v}$$The middle two terms drop out because of the continuity equation, so we are left with $$\frac{\partial \rho \vec{v}}{\partial t} + \nabla \cdot (\rho \vec{v} \otimes \vec{v}) =\rho\frac{\partial \vec{v}}{\partial t}+\rho (\vec{v}\cdot \nabla) \vec{v}=\rho\frac{D\vec{v}}{Dt}$$
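The algebra in the answer can be spot-checked numerically: for arbitrary smooth fields, the conservative form minus the Lagrangian form equals $\vec v$ times the continuity residual $\partial_t\rho + \nabla\cdot(\rho\vec v)$, which vanishes exactly when mass is conserved. A 1-D central-difference sketch (the fields below are arbitrary illustrative choices that deliberately violate continuity):

```python
# Check that d/dt(rho*v) + d/dx(rho*v^2) - rho*(dv/dt + v*dv/dx)
# equals v * (continuity residual) for arbitrary smooth 1-D fields.
from math import sin, cos

rho = lambda x, t: 1.2 + 0.3 * sin(x) * cos(t)
v   = lambda x, t: 0.7 + 0.2 * cos(x) * sin(t)

h = 1e-5
def d_dx(f, x, t): return (f(x + h, t) - f(x - h, t)) / (2 * h)
def d_dt(f, x, t): return (f(x, t + h) - f(x, t - h)) / (2 * h)

mom  = lambda x, t: rho(x, t) * v(x, t)        # momentum density rho*v
flux = lambda x, t: rho(x, t) * v(x, t) ** 2   # momentum flux rho*v*v

x0, t0 = 0.4, 0.9
conservative = d_dt(mom, x0, t0) + d_dx(flux, x0, t0)
lagrangian = rho(x0, t0) * (d_dt(v, x0, t0) + v(x0, t0) * d_dx(v, x0, t0))
residual = d_dt(rho, x0, t0) + d_dx(mom, x0, t0)   # continuity residual

diff = conservative - lagrangian
print(diff, v(x0, t0) * residual)
```

For mass-conserving fields the residual is zero and the two momentum forms coincide, which is the resolution of the apparent discrepancy.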
{ "domain": "physics.stackexchange", "id": 72351, "tags": "newtonian-mechanics, classical-mechanics, fluid-dynamics" }
Mechanism of decarboxylation of alpha-keto carboxylic acid
Question: What is the probable mechanism for the following reaction? $\alpha$-Keto acids on heating with conc.$\ce{H2SO4}$ undergo decarboxylation to give monocarboxylic acids: Also, which of the two carbons (the carbonyl carbon and the carboxylic carbon) is getting eliminated here? Note: I tried to apply the mechanism for the decarboxylation of $\beta$-keto carboxylic acids in this case too, but couldn't work out one. Answer: There are two characteristic reactions of $\alpha$-keto acids with sulphuric acid, but none of them produce acetic acid and carbon dioxide as the products as indicated in the reaction given in the OP's textbook. Instead, pyruvic acid (the simple $\alpha$-keto acid shown in the textbook reaction) is easily decarboxylated with warm dilute sulphuric acid to give acetaldehyde: $$\ce{CH3-C(=O)-CO2H ->[dil.H2SO4][\Delta] CH3-CHO + CO2}$$ If the reaction is in conc.$\ce{H2SO4}$ as given in the sought reaction, the products are acetic acid and carbon monoxide: $$\ce{CH3-C(=O)-CO2H ->[conc.H2SO4][\Delta] CH3-CO2H + CO}$$ The mechanism of the first reaction is given here as follows: Although the mechanism of this reaction is uncertain and it occurs only in $\alpha$-keto acids, it is suggested that the $–I$ effect of the $\alpha$-carbonyl group plays an important part in this elimination. You may also find the mechanism of the second reaction on the same site as well. However, it is also possible for an $\alpha$-ketocarboxylic acid to undergo decarboxylation to give the corresponding carboxylic acid and $\ce{CO2}$, but this needs redox conditions similar to pyruvate decarboxylation in biological conditions: $$\ce{CH3-C(=O)-CO2H ->[Oxidation] CH3-CO2H + CO2}$$ For example, this reaction proceeds in the presence of $\ce{H2O2}$, and the following mechanism has been proposed (Ref. 1): References: Antonio Lopalco, Gautam Dalwadi, Sida Niu, Richard L. Schowen, Justin Douglas, and Valentino J. Stella, "Mechanism of Decarboxylation of Pyruvic Acid in the Presence of Hydrogen Peroxide," J. Pharm. Sci. 2016, 105(2), 705–713 (DOI: https://doi.org/10.1002/jps.24653).
{ "domain": "chemistry.stackexchange", "id": 15917, "tags": "organic-chemistry, reaction-mechanism, carbonyl-compounds" }
Irregularity of $\{b^ma^n: (m,n)=1\}$ using Nerode
Question: Let $L=\{b^ma^n \mid \text{$m$ and $n$ are coprime} \}$. Using Nerode's theorem, prove that $L$ is irregular. From Nerode's theorem I know that $L$ is regular if and only if the number of equivalence classes of $R_L$ (the relation defined in Nerode's theorem) is finite, so I need to prove that there are infinitely many equivalence classes. The first thing that came to mind from $L$'s definition is using Dirichlet's theorem, hence I tried: Let $w_{m, i}=b^ma^i$, ($m,i$ are coprime), and I prove that for $j\ne i$, ($m, j$ coprime), $$w_{m, i} \not R_L w_{m, j}$$ Let $z=a^{m+ni}$, ($n$ an integer promised by Dirichlet's theorem such that $m+ni$ and $m$ are coprime), so $$w_{m, i}z = b^ma^{m+ni+i}= b^ma^{m+(n+1)i}\in L$$ and $$w_{m, j}z = b^ma^{m+ni+j}\not\in L$$ But this isn't necessarily true as $m$ and $m+(n+1)i$ might not be coprime and $m$ and $m+ni+j$ might be. I know from previous exercises that I need to find $w_i$ and show that for a word $z$ $$w_iz\in L\text{ and }w_jz\not\in L \space\space (i\ne j)$$ and therefore there are infinitely many equivalence classes, but I find coprimality difficult to handle. Answer: Let $P$ be the set of all primes. Show that the words $\{b^p : p \in P\}$ belong to different equivalence classes.
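One way to see the answer's hint concretely: for distinct primes $p \ne q$, the suffix $a^p$ distinguishes $b^p$ from $b^q$, because $\gcd(p,p)=p>1$ while $\gcd(q,p)=1$. A quick membership check (a sketch, with the first few primes hard-coded):

```python
# For distinct primes p != q, the suffix a^p separates b^p from b^q.
from math import gcd

def in_L(m, n):
    """Membership in L = { b^m a^n : gcd(m, n) = 1 }."""
    return gcd(m, n) == 1

primes = [2, 3, 5, 7, 11]
for i, p in enumerate(primes):
    for q in primes[i + 1:]:
        assert not in_L(p, p)   # b^p a^p is not in L
        assert in_L(q, p)       # b^q a^p is in L
print("all prime prefixes b^p lie in distinct Nerode classes")
```

Since there are infinitely many primes, this gives infinitely many pairwise-inequivalent prefixes, so $R_L$ has infinitely many classes and $L$ is irregular.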
{ "domain": "cs.stackexchange", "id": 19936, "tags": "formal-languages, regular-languages" }
Converting enum values to strings in C++
Question: Question 1: functionxxx_as_string() is used below. How else could it be more elegantly named?

Question 2: Is the static char* array method adopted below the only solution? Best solution? Any suggestions?

Generally, I have this issue where my list of enums will be from 3 - say 50 items, mostly less than 20 items, and they are fairly static.

#include <iostream>

enum thing_type
{
    DTypeAnimal,
    DTypeMineral,
    DTypeVegetable,
    DTypeUnknown
};

class thing
{
public:
    thing(thing_type type) : type_(type) {}
    const thing_type get_state() const { return type_; }
    const char* get_state_as_string() const
    {
        static const char* ttype[] = { "Animal", "Mineral", "Vegetable", "Unknown" };
        return ttype[type_];
    }
private:
    thing_type type_;
};

int main()
{
    thing th(DTypeMineral);
    std::cout << "this thing is a " << th.get_state_as_string() << std::endl;
    return 0;
}

I am preferring to remove all the printing stuff from the class interface and use the operator<< overloading idea in 200_success's answer like this:

#include <iostream>

enum thing_type
{
    DTypeUnknown,
    DTypeAnimal,
    DTypeMineral,
    DTypeVegetable
};

const char* type2string(thing_type ttype)
{
    static const char* thtype[] = { "Unknown", "Animal", "Mineral", "Vegetable" };
    return ttype < sizeof(thtype) ? thtype[ttype] : thtype[0];
}

std::ostream& operator<<(std::ostream& os, const thing_type type)
{
    os << type2string(type);
    return os;
}

class thing
{
public:
    thing(thing_type type) : type_(type) {}
    const thing_type get_type() const { return type_; }
private:
    thing_type type_;
};

std::ostream& operator<<(std::ostream& os, const thing& th)
{
    os << "This is a " << type2string(th.get_type());
    return os;
}

int main()
{
    thing th(DTypeMineral);
    std::cout << th << std::endl;
    return 0;
}

Answer: Question 1: functionxxx_as_string() is used below. How else could it be more elegantly named?

How about:

    std::ostream& operator<<(std::ostream& str, thing const& data);

Question 2: Is the static char* array method adopted below the only solution? Best solution? Any suggestions?

You will need a static char array (or equivalent, like a switch) somewhere, as there is no built-in way to convert enum values (which are integers) to a string.
{ "domain": "codereview.stackexchange", "id": 4513, "tags": "c++, enum" }
Is there a thermodynamical function $\Xi$ such that $d \Xi = V d P/H$?
Question: Question Is there a thermodynamical function $\Xi$ such that generally $$d \Xi = \frac{V d P}{H}$$ where $V$ is the volume, $P$ the pressure, and $H$ the enthalpy $H=U + PV$? If such a function generally does not exist, which conditions must the thermodynamical system (and its state equation) fulfill for $\Xi$ to exist? We may assume the usual thermodynamical identities such as $dU=-PdV + T dS$. Attempt at solution and comments A naive effort to solve this is to simply assume that we can express $P=P(V,H)$ and then integrate $$\frac{\partial \Xi}{\partial H}|_{V=const.} = \frac{V}{H} \frac{\partial P}{\partial H}|_{V=const.}$$ $$\frac{\partial \Xi}{\partial V}|_{H=const.} = \frac{V}{H} \frac{\partial P}{\partial V}|_{H=const.}$$ However, this leads to the integrability condition $$\frac{\partial P}{\partial H}|_{V=const.}=-\frac{V}{H} \frac{\partial P}{\partial V}|_{H=const.}$$ and a nice formal solution for $\Xi$. However, does every physically reasonable thermodynamical system fulfill the integrability condition above? Solution for ideal gas The trivial example of an ideal gas with $K$ degrees of freedom allows for an expression of $P$ as $$P=\frac{2}{K+2} \frac{H}{V}$$ which can be easily seen to fulfill the integrability conditions above, and we obtain $$\Xi = \frac{2}{K+2}\log(\frac{H}{H_0} \frac{V_0}{V})$$ where $H_0, V_0$ are integration constants. Answer: Since the pressure $p$ is intensive and the volume $V$ and enthalpy $H$ are extensive variables, the function $p=p(V,H)$ is homogeneous of degree $0$, so you always have $$H \frac{\partial p}{\partial H} + V \frac{\partial p}{\partial V} = 0$$
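The Euler relation in the answer is easy to spot-check numerically for the ideal-gas equation of state quoted in the question. A central-difference sketch (the value of $K$ and the test point are arbitrary illustrative choices):

```python
# Check H*dP/dH + V*dP/dV = 0 and the question's integrability condition
# dP/dH + (V/H)*dP/dV = 0 for P(V, H) = 2H / ((K+2) V).
K = 3                                   # e.g. a monatomic gas

def P(V, H):
    return 2.0 * H / ((K + 2) * V)

h = 1e-6
def dP_dH(V, H): return (P(V, H + h) - P(V, H - h)) / (2 * h)
def dP_dV(V, H): return (P(V + h, H) - P(V - h, H)) / (2 * h)

V0, H0 = 1.7, 2.9
euler = H0 * dP_dH(V0, H0) + V0 * dP_dV(V0, H0)
integrability = dP_dH(V0, H0) + (V0 / H0) * dP_dV(V0, H0)
print(euler, integrability)
```

Both quantities vanish (up to discretisation error), which is exactly why the answer's homogeneity argument guarantees $\Xi$ exists for any equation of state expressible as $p=p(V,H)$.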
{ "domain": "physics.stackexchange", "id": 32272, "tags": "thermodynamics, pressure" }
Supercharge in $\mathcal{N}=1$ supersymmetric quantum mechanics and Noether's theorem
Question: Consider the $0+1$ dimensional Lagrangian $$L=\frac{1}{2}\dot{X}^2(t)+i \psi(t) \dot{\psi}(t).\tag{1.24}$$ Essentially this the Lagrangian of a particle moving in one dimension, $X$, with an additional degree of freedom $\psi$. This can be thought of as a Lagrangian for a spinning particle moving in one dimension. Define the supersymmetry transformations (and think of $\delta$ as a fermionic operator on the fields) as $$\delta X=2i \epsilon \psi\tag{1.28a}$$ and $$\delta \psi=- \epsilon \dot{X}.\tag{1.28b}$$ Noting that $\psi$ and $\delta$ anticommute, $X$ and $\delta$ commute, and also that $\delta$ is a linear operator, we can easily see that $$\delta L = i \epsilon \frac{d}{dt}(\psi \dot{X}).\tag{1.29}$$ Thus, the action is invariant since the Lagrangian changes only by a total derivative, under the above transformation. The conserved 'current' (in fact in one dimension it is the conserved charge) gives, by Noether's theorem, $$\epsilon Q=\frac{\partial L}{ \partial \dot{X}} \delta X+\frac{\partial L}{ \partial \dot{\psi}} \delta \psi-i \epsilon \psi \dot{X}=2i\epsilon \dot{X} \psi-i \epsilon \dot{X} \psi-i \epsilon \psi \dot{X}=0!\tag{1}$$ So the charge turns out to be trivial. However, in these notes, in equation (1.30) it is claimed that the supercharge is, in fact, $$Q=\psi \dot{X}.\tag{1.30}$$ What am I missing? Answer: The second term in OP's formula (1) for the Noether charge has a sign mistake. The second term should be $$\delta\psi^{\mu} \frac{\partial_L L}{ \partial \dot{\psi}^{\mu}} ~=~(-\epsilon \dot{X}^{\mu})(-i\psi_{\mu}) ~=~i \epsilon \dot{X}^{\mu} \psi_{\mu} ~=~(i\psi_{\mu})(-\epsilon \dot{X}^{\mu}) ~=~\frac{\partial_R L}{ \partial \dot{\psi}^{\mu}}\delta\psi^{\mu} ,$$ depending on where we use a left (right) derivative, i.e. the derivative acts from left (right), respectively. As a result the Noether charge becomes non-zero: $$Q~=~2i\psi_{\mu} \dot{X}^{\mu}.\tag{1.30'}$$ The overall factor $2i$ has to do with a strange normalization.
{ "domain": "physics.stackexchange", "id": 64495, "tags": "lagrangian-formalism, conservation-laws, supersymmetry, noethers-theorem, grassmann-numbers" }
How are boolean circuits used for solving P vs NP?
Question: In the paper https://web.stanford.edu/~gavish/documents/sipser-pvsnp.pdf , it is mentioned under the Status section that boolean circuits have been used to try and solve P vs NP. Can anyone explain to me in simple terms how boolean circuits are used for solving P vs NP? Answer: First, let me start by explaining what Boolean circuits are. You are probably familiar with Boolean formulas — these are formulas of the sort $(a \land b) \lor (\lnot a \land \lnot b)$. We can represent each formula as a tree. In our example, the root of the tree will be labeled $\lor$, and its two children are the trees corresponding to $a \land b$ and $\lnot a \land \lnot b$. More generally, there are four types of nodes: nodes labeled $\lor$ or $\land$ have exactly two children, nodes labeled $\lnot$ have exactly one child, and the rest of the nodes are labeled by input variables. We can think of the edges as directed towards the root. Boolean circuits generalize Boolean formulas by allowing arbitrary directed acyclic graphs instead of directed trees. In the example above, we can, for example, identify the two nodes labeled $a$ and the two nodes labeled $b$. Alternatively, we can identify Boolean circuits with straightline programs. These are programs which use the following instructions: $x \gets y \lor z$. $x \gets y \land z$. $x \gets \lnot y$. For example, the formula above corresponds to the straightline program $x \gets a \land b$. $y \gets \lnot a$. $z \gets \lnot b$. $w \gets y \land z$. $o \gets x \lor w$. The value of the formula is the value of the last assignment. Notice that every variable other than the inputs is used exactly once. Straightline programs with this constraint correspond to formulas. If we remove the constraint, the we get circuits. The $\mathsf{P} \neq \mathsf{NP}$ conjecture can be stated equivalently as follows: SAT has no polynomial time algorithm. 
It turns out that the following conjecture (known as $\mathsf{P/poly} \neq \mathsf{NP}$) implies $\mathsf{P} \neq \mathsf{NP}$: SAT has no polynomial size circuits. What does this mean? We can encode CNFs as strings of bits (for example, encode them first in ASCII, and then unfold the ASCII into bits). Let $SAT_n$ be the collection of satisfiable CNFs of length $n$ bits. A circuit for $SAT_n$ is a circuit on $n$ inputs $x_1,\ldots,x_n$ which returns True if and only if $x_1\ldots x_n \in SAT_n$, i.e., the CNF corresponding to $x_1\ldots x_n$ is satisfiable. We say that a collection of circuits $C_1,C_2,\ldots$ solves SAT if $C_n$ is a circuit for $SAT_n$. The collection has polynomial size if there exists a polynomial $P(n)$ such that the size of $C_n$ is at most $P(n)$ (the size of a circuit is the number of nodes in its graphical representation). Why does this imply that SAT has no polynomial time algorithms? The reason is (essentially) the Cook–Levin theorem. This theorem shows that if SAT has a polynomial time algorithm then it also has polynomial size circuits. There is nothing special about SAT here — this reduction works for every problem. The theorem shows how to encode the computation of a Turing machine running in polynomial time as a polynomial size circuit. Unfortunately, we are very far from realizing this program. We only know how to show that polynomial size circuits cannot solve certain problems in the following circumstances: When the circuit is shallow, that is, has small depth. When the circuit doesn’t use $\lnot$ gates at all. For non-explicit problems: by counting the number of polynomial size circuits (for any fixed polynomial), you can show that there must be some functions that they cannot compute. There is some further progress which throws diagonalization into the mix, but that’s about it. It seems that this approach is stuck, and different lines of attack are required to solve this important problem.
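The straightline program in the answer can be written out literally; each line is one gate assignment, and the value of the circuit is the value of the last assignment:

```python
# Straightline-program form of (a AND b) OR (NOT a AND NOT b); it
# computes XNOR, i.e. whether the two inputs are equal.
from itertools import product

def straightline(a, b):
    x = a and b
    y = not a
    z = not b
    w = y and z
    o = x or w        # last assignment = circuit output
    return o

for a, b in product([False, True], repeat=2):
    assert straightline(a, b) == (a == b)
print("the circuit agrees with (a == b) on all four inputs")
```

Identifying the two uses of a and the two uses of b, as described above, is what turns the tree-shaped formula into a DAG-shaped circuit.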
{ "domain": "cs.stackexchange", "id": 18494, "tags": "complexity-theory, p-vs-np" }
General Retry Strategy
Question: Let’s say we copy some file using a retry strategy (it might be blocked, etc.):

class Processor
{
    public void CopyData() => CopyData(Try.Slow);
    public void CopyData(Try loop) => loop.Execute(() => File.Copy(@"c\a.txt", @"c:\b.txt"));
}

What do you think about names chosen for the following library code identifiers? Would you name them differently?

public abstract class Try
{
    public static Try Repeat(params int[] delays) => new Repeat(delays);

    public static readonly Try Never = Repeat();
    public static readonly Try Once = Repeat(0);
    public static readonly Try Slow = Repeat(0, 500, 1500, 4500, 12000);
    public static readonly Try Fast = Repeat(0, 50, 150, 450, 1200);

    public abstract void Execute(Action action);
}

class Repeat : Try
{
    IReadOnlyList<int> Delays { get; }

    public Repeat(params int[] delays)
    {
        Delays = delays;
    }

    public override void Execute(Action action)
    {
        for (int i = 0; i < Delays.Count; i++)
            try
            {
                Thread.Sleep(Delays[i]);
                action();
                return;
            }
            catch
            {
                if (i == Delays.Count - 1) throw;
            }
    }
}

Answer: I think this would be more useful if the user could specify an interval and how many times he wants to retry, like:

    public static Try Repeat(int delay, int count) => new Repeat(Enumerable.Repeat(delay, count).ToArray());

or if he could specify a count and the increment function:

    public static Try Repeat(int count, Func<int, int> increment) => new Repeat(Enumerable.Range(1, count + 1).Select(x => increment(x)).ToArray());

where the increment could be:

    x => x * 20

I wouldn't provide such members as Fast or Slow because they are very subjective, and what is currently slow for you might still be too fast in my application. Never does not make any sense ;-) Why should I want to never try to execute something? I might as well not write the code at all if it shouldn't run :-P

One more thought. How about specifying the retry strategy via a generic argument:

    class SlowTry : Repeat
    {
        public SlowTry() : base(0, 500, 1500, 4500, 12000) { }
    }

the Try becomes this:

    public abstract class Try
    {
        ... stays the same

        public static void Execute<TStrategy>(Action action) where TStrategy : Try, new()
        {
            new TStrategy().Execute(action);
        }
    }

use:

    Try.Execute<SlowTry>(() => File.Copy(@"c\a.txt", @"c:\b.txt"));

This way the user can more easily specify his strategy.
{ "domain": "codereview.stackexchange", "id": 21785, "tags": "c#, design-patterns, error-handling" }
Custom struct design: Range
Question: I had a need for some inventory management in a recent project. I decided to create a custom struct for managing the concept of a number range. It allows for easy navigation of a collection. It function similar to .NET's own Enumeration classes, with extension methods, wrapper classes and so forth allowing for dealing with a IRanged<Items> as defined: public interface IRanged<out T> { RangeType RangeType { get; } T Start { get; } T End { get; } T Middle { get; } int Size { get; } T AtPercent(int percent); T ValueAtIndex(int index, IRangedEdgeStrategy strategy); } The actual core struct class is: public struct Range : IEquatable<Range>, IRanged<int>, IEnumerable<int> { public Range(int start, int end) { if (start.Equals(end)) throw new ArgumentException("Cannot create a range of size zero"); if (start < end) { Start = start; End = end; } else { Start = end; End = start; } if (Start < 0) { RangeType = End <= 0 ? RangeType.Negative : RangeType.NegToPos; } else { RangeType = RangeType.Positive; } } public int Start { get; } public int End { get; } public RangeType RangeType { get; set; } public int Middle => (int)Math.Round(((decimal)Start + End) / 2); public int Size => Difference(Start, End); public bool Contains(Range range) { return Start <= range.Start && End >= range.End; } public static int Difference(int a, int b) { return Math.Abs(a - b) + 1; } public int ValueAtIndex(int index, IRangedEdgeStrategy strategy) { return strategy.Handle(this, index); } public int AtPercent(int percent) { if (percent <= 0 || percent > 100) throw new ArgumentOutOfRangeException("Percentage needs to be a integer value between 1 and 100"); var p = percent/100f*Size; var clampedPercent = (int) Math.Round(p); return AsClamped(clampedPercent); } public int AsClamped(int index) { return AsClamped(index, Start, End); } public static int AsClamped(int index, int start, int end) { return index < start ? start : index > end ? 
end : index; } #region IEnumerable IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } public IEnumerator<int> GetEnumerator() { for (var i = Start; i <= End; i++) yield return i; } #endregion #region Equality public bool Equals(Range other) { return Start == other.Start && End == other.End; } public override bool Equals(object obj) { if (obj is Range) return Equals((Range)obj); return false; } public static bool operator ==(Range a, Range b) { return a.Equals(b); } public static bool operator !=(Range a, Range b) { return !(a == b); } public override int GetHashCode() { return Start.GetHashCode() ^ End.GetHashCode(); } #endregion public override string ToString() { return $"({Start}>{End})"; } } A range list is a wrapper, similar to the ReadOnlyCollection<T> that returns an element as an IRanged, allowing simplified collection navigation. public class RangeList<T> : IRanged<T> { private readonly IList<T> _list; public RangeList(IList<T> list) { _list = list; } private Range R => _list.GetRange(); public RangeType RangeType => RangeType.Positive; public T Start => _list[R.Start]; public T End => _list[R.End]; public T Middle => _list[R.Middle]; public int Size => _list.Count; public T AtPercent(int percent) { return _list[R.AtPercent(percent)]; } public T ValueAtIndex(int index, IRangedEdgeStrategy strategy) { return _list[R.ValueAtIndex(index, strategy)]; } } A number of helper extensions for easier use: public static class RangeExtensions { public static Range GetRange<T>(this ICollection<T> x) { return new Range(0,x.Count-1); } public static bool HasRange<T>(this ICollection<T> x) { return x.Count > 0; } public static bool ContainsRange<T>(this ICollection<T> x, Range r) { return x.GetRange().Contains(r); } public static IEnumerable<T> Select<T>(this IList<T> items,Range r) { if (!items.HasRange()) return Enumerable.Empty<T>(); var collectionRange = items.GetRange(); if(!collectionRange.Contains(r)) throw new 
ArgumentOutOfRangeException("Collection does not contain inner range"); T[] elements = new T[r.Size]; int arrayIndex = 0; for (int i = r.Start; i < r.End+1; i++) { elements[arrayIndex] = items[i]; arrayIndex++; } return elements; } public static IRanged<T> AsRanged<T>(this IList<T> items) { return new RangeList<T>(items); } } Any thoughts on naming conventions, choices of extended types, efficiency of collection types etc. that might help make this more friendly? Finally, if anyone is wondering as to the why: it becomes more apparent when you nest a range in a range and provide a range value that can be locked to the parent, irrespective of the collection content, for example (green is a RangeValue, blue is the mid-point, grey is the outer range and blue is the inner range). Here is a visualization Answer: Only targeting public struct Range If possible a struct should be immutable, so you shouldn't let a user of this struct set the RangeType property. Pre-calculate values in the constructor instead of calculating them each time they are accessed public int Middle => (int)Math.Round(((decimal)Start + End) / 2); public int Size => Difference(Start, End); always use braces {} although they might be optional. Omitting braces can lead to serious bugs; using them will structure your code better and make your code more readable. leave your operators some room to breathe. var p = percent/100f*Size; would be more readable like var p = percent / 100f * Size; a ternary inside a ternary expression becomes almost unreadable. This public static int AsClamped(int index, int start, int end) { return index < start ? start : index > end ? end : index; } would be more readable like public static int AsClamped(int index, int start, int end) { if (index < start) { return start; } return index > end ? end : index; }
{ "domain": "codereview.stackexchange", "id": 21065, "tags": "c#, collections, unity3d, extension-methods" }
How to find optimal mutation probability and crossover probability?
Question: I have a genetic algorithm that maximizes a fitness function with two variables f(X,Y). I have been running the algorithm with various parameters for mutation and crossover probability (0.1, 0.2, ...). Since I don't have much theoretical knowledge of GAs, how could I proceed in order to find the optimal values for mutation and crossover probability, and if necessary the optimal population size? Answer: As @Oliver Mason says, picking the parameters that control the behavior of a GA (which are sometimes called "hyperparameters") is historically more of an art than a science. The evolutionary computation literature has many theories about the merits of high vs. low mutation, and high vs. low crossover. Most practitioners I have worked with use either high crossover, low mutation (e.g. Xover = 80%, mutation = 5%), or moderate crossover, moderate mutation (e.g. Xover = 40%, mutation = 40%). In more recent years, the field of hyperparameter optimization has emerged and focuses on developing automatic approaches to picking these parameters. A very simple example of hyperparameter optimization is the GridSearchCV function in ScikitLearn. This systematically tries every combination of, say, 10 crossover values with every one of 10 mutation values, and reports on which one works best. It uses cross-validation to prevent overfitting during this process. A more complex approach is Bayesian hyperparameter optimization, which performs a sort of optimal experiment design to uncover the best values using as few tests as possible. This approach has been quite successful in tuning the hyperparameters of deep neural networks, for example.
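The grid-search idea described in the answer can be sketched with a toy GA. Everything below — the fitness function f(x, y) = -(x² + y²), the operators, and the candidate parameter values — is an illustrative stand-in, not taken from the original question:

```python
import itertools
import random

def run_ga(mutation_rate, crossover_rate, pop_size=30, generations=40, seed=0):
    """Tiny GA maximizing f(x, y) = -(x^2 + y^2); returns the best fitness found."""
    rng = random.Random(seed)
    fitness = lambda ind: -(ind[0] ** 2 + ind[1] ** 2)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                               # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[: pop_size // 2], 2)   # truncation selection
            child = list(a)
            if rng.random() < crossover_rate:            # one-point crossover
                child = [a[0], b[1]]
            if rng.random() < mutation_rate:             # Gaussian mutation
                i = rng.randrange(2)
                child[i] += rng.gauss(0, 0.5)
            next_pop.append(child)
        pop = next_pop
    return max(fitness(ind) for ind in pop)

# Try every (mutation, crossover) combination -- the GridSearchCV idea by hand.
grid = itertools.product([0.05, 0.2, 0.4], [0.4, 0.8])
results = {(m, c): run_ga(m, c) for m, c in grid}
best_params = max(results, key=results.get)
print("best (mutation, crossover):", best_params)
```

The same loop generalizes to population size: just add a third axis to the grid and re-run.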
{ "domain": "ai.stackexchange", "id": 1104, "tags": "algorithm, genetic-algorithms, mutation-operators" }
Total vs. Average/Expected internal energy
Question: Total internal energy and average/expected internal energy are used interchangeably, which causes a lot of confusion for me. Is $$ U = \sum_{i=1}^N p_i E_i, $$ the weighted sum of the energy states of a system, describing the total internal energy or the average? If it is the average, what does that really mean? It wouldn't make sense if it was the total, since the equation is independent of the number of particles/molecules in the system, but I just want to be sure. Thank you! Answer: It looks like a formula for the average (mean) of $N$ energy states. Presumably $p_i$ is the probability that the system is in energy state $E_i$. You might have seen the mean worked out as $\frac{\Sigma fx}{\Sigma f}$ where $f$ is frequency. In that case the top line is the total, but when divided by the denominator it gives the mean. Your formula is similar to that; when dealing with probabilities the bottom line is $\Sigma p_i$ and that's $1$.
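As a quick numerical illustration of why the formula behaves like a mean, consider a hypothetical two-level system with Boltzmann-weighted probabilities (the energies, temperature, and units below are arbitrary choices, not from the question):

```python
import math

def mean_energy(energies, T, k_B=1.0):
    """U = sum_i p_i E_i with Boltzmann probabilities p_i = exp(-E_i/kT)/Z."""
    weights = [math.exp(-E / (k_B * T)) for E in energies]
    Z = sum(weights)                    # partition function: normalizes the p_i
    probs = [w / Z for w in weights]    # sum(probs) == 1, so U is a *mean*
    return sum(p * E for p, E in zip(probs, energies))

E = [0.0, 1.0]                 # a two-level system
print(mean_energy(E, T=0.1))   # low T: almost surely in the ground state
print(mean_energy(E, T=100))   # high T: both states nearly equally likely
```

Because the probabilities sum to 1, the result never scales with the number of states — exactly the property that makes it an average rather than a total.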
{ "domain": "physics.stackexchange", "id": 79084, "tags": "statistical-mechanics, physical-chemistry" }
Am I misunderstanding something, or are these Wikipedia statements about quantum tunneling wrong? Badly stated?
Question: From https://en.wikipedia.org/wiki/Quantum_tunnelling#Introduction_to_the_concept : The reason for this difference comes from treating matter as having properties of waves and particles. One interpretation of this duality involves the Heisenberg uncertainty principle, which defines a limit on how precisely the position and the momentum of a particle can be simultaneously known.[7] This implies that no solutions have a probability of exactly zero (or one), though it may approach infinity. If, for example, the calculation for its position was taken as a probability of 1, its speed, would have to be infinity (an impossibility). Hence, the probability of a given particle's existence on the opposite side of an intervening barrier is non-zero, and such particles will appear on the 'other' (a semantically difficult word in this instance) side in proportion to this probability. (Emphasis added.) The bolded part doesn't make sense to me. I don't see how a probability can meaningfully be said to approach anything above one, let alone infinity. And regarding the latter bolded sentence, Heisenberg's uncertainty principle describes knowledge of position and momentum (not velocity) as complementary, so before that (seemingly misplaced) last bolded comma, should that say "momentum", not "speed" (with infinite momentum implying an impossible speed of c)? Answer: Short answer: there is a major typo, the author was not trying to say that the probability may approach infinity, but instead that the probability may approach zero at infinity. So what probability is the author talking about? It's the probability for the particle to be found in some specific finite region. So the sentence This implies that no solutions have a probability of exactly zero (or one), though it may approach infinity. 
is trying to say: This implies that no solutions have a probability of exactly zero (or one) for the particle to be found in any finite region, though it may approach zero for regions arbitrarily far away (at infinity). This would accurately represent the meaning of the original sentence, but it would still have some problems. I would replace implies with might give us an intuition, and may with will. As for the second bold sentence: If, for example, the calculation for its position was taken as a probability of 1, its speed, would have to be infinity (an impossibility). If I was editing the article I would probably just delete it, but if push came to shove, I would rephrase it like this: We won't try to rigorously justify this intuition, but it might be instructive to consider what would happen if the particle had a probability of exactly 1 to be at some exact position. The position uncertainty would then be 0, which means the momentum uncertainty would be infinite - and that would in turn imply that the uncertainty in its kinetic energy would be infinite as well. But the kinetic energy, unlike momentum, is a non-negative quantity. However, if we have a non-negative quantity with an infinite uncertainty, its average value would have to be infinite! Therefore having a precise position would imply that the average kinetic energy is infinite, which is unphysical!
{ "domain": "physics.stackexchange", "id": 74133, "tags": "wavefunction, schroedinger-equation, heisenberg-uncertainty-principle, probability, quantum-tunneling" }
Understanding Cb and Cr Components of YCbCr Color Space
Question: I am familiar with additive (RGB), subtractive (CMYK), and HSV-like colorspaces, but an article I'm currently trying to understand operates on the YCbCr color space for image segmentation / object definition. I've spent most of my morning looking for something that would explain YCbCr naturally, but I just don't get it. I got a nice, intuitive explanation of the general idea behind this color space here, and an explanation of how it's used for image coding/compression from these guys (all on photo.SE). The formulas for calculating YCbCr from RGB are readily accessible on Wikipedia. I got the motivation for this representation, I got that the Y component contains the most important (to the human eye) grey-scale information about the image. I got that Cb and Cr carry information about the colors, and that (because of human eye (in)sensitivity) they can be compressed without a visible loss in quality. But, what does each of the chrominance components actually represent? As the article authors mention that "chrominance information is paramount in the definition of objects" in their approach, I cannot fully understand what I'm reading with my current "Y is intensity, Cb and Cr carry color information somehow" level of understanding YCbCr. I'm seeking an answer along the lines of "Cb is..., while Cr is..." or "if you imagine looking through/with XY, you're actually looking at the Cb component...", or some other way that would help me understand the information carried by each of the components separately, not just that they, together, carry color information. EDIT Let me give examples of intuitive explanations for other color spaces of the type I'm looking for: RGB: Like shining a colored flashlight on a black wall: If you shine with a blue flashlight, you see a blue reflection. If you add a red flashlight, it will show a magenta reflection, which is a mixture of blue and red. CMYK: Like mixing watercolors, you "add to the colors the surface reflects" (i.e.
subtracts color from the background) so if you mix a yellow one with a cyan one, it will reflect green and thus you will get a green color. HSV: Little kids are attracted to highly saturated objects, not bright (value) ones. The Hue component is what "gives the color", while low saturation means the color is "diluted" by white. A change in value makes the whole thing brighter or darker. With these definitions, I've been able to get an intuitive feeling about what a color representation in each color space means, without memorizing charts for each of them. Answer: YUV (or YCbCr) is like HSV, but in different coordinates. (The difference between YUV and YCbCr is marginal - mostly related to exact formulas.) The $V$ component is the same. $(S,H)$ can be thought of as polar coordinates, and $(U,V)$ as cartesian. $H$ is the angle and $S$ is the radius. A rough conversion would be: $ U = S \cdot \cos(H) $ $ V = S \cdot \sin(H) $ You can see this link for more information. Another thing to add to your intuition list: Saturation is how pure the color is from a spectral point of view. For example, a laser has a very narrow spectrum, which implies high saturation.
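The rough polar-to-cartesian conversion from the answer can be sketched in a few lines (treating hue in degrees is just a convention chosen here; the answer leaves units unspecified):

```python
import math

def hs_to_uv(saturation, hue_deg):
    """Chroma plane as polar coordinates: S is the radius, H the angle."""
    h = math.radians(hue_deg)
    return saturation * math.cos(h), saturation * math.sin(h)

def uv_to_hs(u, v):
    """Inverse mapping back to (saturation, hue in degrees)."""
    return math.hypot(u, v), math.degrees(math.atan2(v, u)) % 360

u, v = hs_to_uv(0.5, 120.0)   # a fairly saturated hue at 120 degrees
s, h = uv_to_hs(u, v)
print((u, v), (s, h))
```

The round trip recovers (S, H), which is the sense in which Cb and Cr are "just" the two cartesian coordinates of the chroma vector whose length is saturation and whose angle is hue.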
{ "domain": "dsp.stackexchange", "id": 1954, "tags": "image-processing, image-segmentation" }
Single gravitational plane wave or their interference can carry spin angular momentum?
Question: I would be grateful if anybody could tell me: if I had one gravitational wave in the form of a plane wave, would it still carry spin angular momentum? We know that gravitational waves are mostly the interference between many gravitational waves from different sources, like binary black holes. I think they carry spin and angular momentum due to conservation laws, but I do not know: can a single GW carry spin and angular momentum? Answer: I'm guessing that any angular momentum carried by a (transverse) wave would be associated with a circular (or elliptical) polarization.
{ "domain": "physics.stackexchange", "id": 87476, "tags": "gravity, conservation-laws, gravitational-waves, plane-wave" }
remove kth last element from singly-linked list - Follow up
Question: This code is a revised version of an implementation for which I asked for improvements. The original question was asked here: remove kth last element from singly-linked list credits to: Toby, Andreas, Arkadiusz What has changed: remove length from xllist struct check if k is bigger than list length on the fly Code: #include <stdio.h> #include <stdlib.h> #include <stdbool.h> #include <stdint.h> typedef struct llnode { int value; struct llnode *next; } llnode; typedef struct xllist { llnode * head; llnode * tail; } xllist; bool create_node(xllist *list, int value) { llnode *node = malloc(sizeof *node); if(!node) { return false; } node->value = value; node->next = NULL; if(!list->head) { list->head = node; list->tail = node; } list->tail->next = node; list->tail = node; return true; } bool del_element_from_last(xllist *llist, int k) { //window with 2 pointers, length of k //prev is the prev node to the window llnode *prev; llnode *last; int len; //list length if(llist->head) { last = llist->head; prev = llist->head; len = 1; } for(; last; last=last->next) { len++; if(len > k+1) prev = prev->next; } if(len < k) //len is smaller than k { return false; } if(len == k) //means del 1st element from the list { llist->head = llist->head->next; } //remove first node of the window printf("deleted element:%d \n", prev->next->value); prev->next = prev->next->next; return true; } int main(void) { xllist llist = {NULL, NULL}; for(int i=0; i<100; i++) { if(!create_node(&llist, 100+i)) printf("create fail\n"); } del_element_from_last(&llist, 15); }
Doing one thing at a time. Variable names could be more accurate. (I did not do that much better.) So: bool del_element_from_last(xllist *llist, int k) { llnode *cur; int back = 0; for (cur = llist->head; cur && back < k; cur = cur->next) { ++back; } if (back < k) // List too short. { return false; } llnode **follow = &(llist->head); for (; cur; cur = cur->next) { follow = &(*follow)->next; } if (!*follow) // k <= 0. { return false; } printf("deleted element:%d \n", (*follow)->value); llnode *old = *follow; *follow = (*follow)->next; free(old); return true; } The second if guards the deletion naturally - no null pointer. The second if returning false could happen when k is 0. A fallacy is to combine conditions to make the code shorter. The code above is better traceable: [At the second if] Why that null check? At the end of the list? Aha, then k must be 0 (or less).
{ "domain": "codereview.stackexchange", "id": 40391, "tags": "c, linked-list, interview-questions" }
Extract regular words from string but retain all other elements and record their type
Question: This snippet processes every regular (\w+) word in a text and reinserts the processed version: import re import time def process(word: str) -> str: time.sleep(0.05) # Processing takes a while... return word.title() text = """This is just a text! Newlines should work. Multiple ones as well, as well as arbitrary spaces. Super-hyphenated long-lasting and overly-complex words should also work. Arbitrary punctuation; has to work... Because? Why not!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! """ processed_items = [] items = re.split(r"(\w+)", text) for item in items: # Have to check for word *again* in order to skip unnecessary processing, even # though we just matched/found all words. is_word = re.match(r"\w+", item) is not None if is_word: item = process(item) processed_items.append(item) processed_text = "".join(processed_items) print(processed_text) Processing is expensive, so we would like to skip non-word elements, of which there can be many of arbitrary types. In this current version, this requires matching using a word regex twice. There should be a way to only process/split the input once, cutting the regex effort in half. That would require some more structure. A possible solution data structure I had in mind could be an (eventually named) tuple like: items = [ ("Hello", True), ("World", True), ("!", False), ] where the second element indicates whether the element is a word. This would spare us from having to re.match(r"\w+", item) a second time. However, as before, splitting "Hello World!" into the above three elements requires word-splitting in the first place. Answer: Your biggest problem is the use of split(). It indiscriminately mixes in matches and non-matches. Instead, just use finditer and explicitly define two groups: words and non-words. import re import time from typing import Iterator def process(word: str) -> str: time.sleep(0.05) # Processing takes a while...
return word.title() WORD_PAT = re.compile( r''' (?P<notword>\W*) # named capturing group: non-word characters (?P<word>\w*) # named capturing group: word characters ''', re.VERBOSE, ) def split_and_process(text: str) -> Iterator[str]: for match in WORD_PAT.finditer(text): yield match.group('notword') yield process(match.group('word')) def test() -> None: text = """This is just a text! Newlines should work. Multiple ones as well, as well as arbitrary spaces. Super-hyphenated long-lasting and overly-complex words should also work. Arbitrary punctuation; has to work... Because? Why not!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! """ processed_text = "".join(split_and_process(text)) print(processed_text) if __name__ == '__main__': test()
{ "domain": "codereview.stackexchange", "id": 43174, "tags": "python, performance, strings, regex" }
What is the accuracy of an artificial satellite at a certain magnitude?
Question: When we say that 'the accuracy of the Gaia satellite is 10 $\mu$as at $V = 10$ magnitude in position and annual proper motion', what does this mean? Does it change over time? Answer: What you are referring to is the precision with which the astrometry (position) of stars will be measured with the Gaia satellite. The sentences you refer to in Perryman et al. (2014) are "Gaia will achieve accuracies of some 10μas (microarcsec) in positions and annual proper motions for bright stars (V ∼ 10), degrading to around 25μas at V = 15, and to around 0.3 mas (300μas) at V = 20 (Lindegren et al. 2008)." What this means is that after the 5-year Gaia mission is complete, the angular position (or coordinates) of 10th magnitude stars on the sky will be determined with a relative precision of 10 millionths of an arcsecond. This will also be the accuracy with which the annual tangential motion (the so-called proper motion) of stars can be measured, i.e. 10th magnitude stars will have an uncertainty in their proper motion of $\pm 10 \mu$as/year.
{ "domain": "astronomy.stackexchange", "id": 1018, "tags": "astrophysics, artificial-satellite, mathematics" }
Proving that Breadth-First Search (BFS) results in a bipartition of a tree
Question: In my studies of discrete mathematics, I've learned that a tree graph is inherently bipartite. I'm interested in finding an algorithmic approach to determine its bipartition. It seems to me that Breadth-First Search (BFS) could be a reasonable method for this task. However, I'm struggling with the formal proof of this concept. Could anyone provide a detailed explanation or proof showing that applying BFS to a tree will indeed result in a valid bipartition? Any insights or resources would be greatly appreciated. Answer: Let $r$ be an arbitrary root. Denote by $d(v)$ the distance from $r$ to $v$. As you might know, every edge goes either between vertices of the same layer, i.e., vertices with the same $d(v)$, or between vertices in consecutive layers. Now, there cannot be an edge going from one vertex in layer $\ell$ to another vertex in layer $\ell$, because then we would have a cycle of odd length (at most $2\ell+1$), which is impossible in a tree; hence every edge goes between consecutive layers. Let $L$ be the set of vertices with even $d(v)$ and $R$ the ones with odd $d(v)$. It follows that there are no edges inside either $L$ or $R$.
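The construction in the proof — layer the vertices by BFS distance, then split by parity — can be sketched directly (the example tree is an arbitrary illustration):

```python
from collections import deque

def bfs_bipartition(adj, root):
    """Return (L, R): vertices at even and odd BFS distance from the root."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    L = {v for v, d in dist.items() if d % 2 == 0}
    R = {v for v, d in dist.items() if d % 2 == 1}
    return L, R

# A small tree: path 0-1-2-3 with an extra branch 1-4.
tree = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
L, R = bfs_bipartition(tree, root=0)
print(L, R)
# Every edge crosses between the two sides, as the proof claims.
assert all((u in L) != (v in L) for u in tree for v in tree[u])
```

The final assertion is exactly the bipartiteness condition: no edge has both endpoints on the same side.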
{ "domain": "cs.stackexchange", "id": 21925, "tags": "graphs, trees, graph-traversal, breadth-first-search" }
Do all equations have identical units on the left- and right-hand sides?
Question: Do all equations have $$\text{left hand side unit} = \text{right hand side unit}$$ for example, $$\text{velocity (m/s)} = \text{distance (m) / time (s)},$$ or is there an equation that has different units on the left- and right-hand sides? I would like to consider empirical equations (determined from experimental results) and theoretical equations (derived from basic theory). Answer: It doesn't matter where the equation came from - a fit to experimental data or a deep string theoretic construction - or who made the equation - Albert Einstein or your next-door neighbour - if the dimensions don't agree on the left- and right-hand sides, it's nonsense. Consider e.g. my new theory that the mass of an electron equals the speed of light. It's just meaningless nonsense from the get-go. This isn't that restrictive - there are lots of equations with correct dimensions (though in some cases you can derive equations or estimates by so-called dimensional analysis, where you just make sure the units agree). But it is useful for checking your work. If you derive a result and the dimensions don't agree, you know you must have made a mistake. There is a subtle distinction between unit and dimension. A dimension represents a fundamental quantity - such as mass, length or time - whereas a unit is a man-made measure of a fundamental quantity or a product of them - such as kg, meters and seconds. Arguably, one can write meaningful equations such as 60 seconds = 1 minute, with matching dimensions but mismatching units (as first noted by Mehrdad).
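The dimension-checking bookkeeping can be sketched in code — a toy model (not a real units library) representing a dimension as a map from base units to exponents:

```python
def combine(a, b, sign):
    """Multiply (sign=+1) or divide (sign=-1) two dimension maps."""
    out = dict(a)
    for unit, exp in b.items():
        out[unit] = out.get(unit, 0) + sign * exp
        if out[unit] == 0:
            del out[unit]
    return out

length = {"m": 1}
time = {"s": 1}
velocity = combine(length, time, -1)      # distance / time
print(velocity)                           # {'m': 1, 's': -1}
# Checking v = d / t: both sides carry the same dimensions.
assert velocity == {"m": 1, "s": -1}
# Whereas equating, say, a mass to a speed fails the check.
assert {"kg": 1} != velocity
```

This is the mechanical content of "check your work": if two sides of a derived equation produce different maps, a mistake was made somewhere.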
{ "domain": "physics.stackexchange", "id": 30381, "tags": "units, dimensional-analysis, si-units" }
Finding the friction coefficient by using experimental data
Question: I have a differential system equation with some unknown parameters (friction coefficients). I also have experimental data. What I want is to determine the friction coefficients which best fit the experimental data. How can I do it using Mathematica? I know there are the NDSolve and FindFit functions, but can they be used here? And what are the general approaches to this problem? Edit 1: Here is the model: $$k_1 (x_3[t]-x_1[t])+a_1 \left(x_3'[t]-x_1'[t] \right)+M_1 x_1''[t]=0$$ $$k_2 (x_3[t]-x_2[t])+a_2 \left(x_3'[t]-x_2'[t]\right)+M_2 x_2''[t]=0$$ $$\begin{eqnarray} & k_3 x_3[t]-k_1 (x_3[t]-x_1[t])-k_2 (x_3[t]-x_2[t]) \\& + a_3 x_3'[t]-a_1 \left(x_3'[t]-x_1'[t]\right)-a_2 \left(x_3'[t]-x_2'[t]\right) \\& +C_1 x_3''[t]=0 \end{eqnarray}$$ Answer: I can only propose a crude starting point based on a more modest work I did. My goal was to determine the bleaching coefficient of a fluorophore when submitted to varying illumination intensity over time. I had the theoretical law for the bleaching over time as a function of intensity (depending on unknown parameters), the amount of fluorophore over time, and the starting intensity and intensity at the end of the experiment. The goal was to retrieve the parameters for the bleaching law and the whole intensity over time. Anyway. What I did was to construct (using MATLAB) a function to minimize. In your case, given that you know $x(t)$ experimentally, I suggest to build a function $O(x, y)$ such that: $$ O(x, y) = \sum_t |x(t) - y(t)|^2 $$ with $x(t)$ being your experimental data and $y(t)$ being the solution of your differential system for some parameters $k, M, a$. Put this function as the objective in a simplex optimizer that will find its minimum, allowing $k, M, a$ to vary. This is done through numerical solution. So you have a function to minimize, say $F(k,a,M)$ that returns the value of $O(x,y)$ (which is scalar btw) for the current numerical resolution with the given $k,a,M$.
At each step, the whole system will be solved again, and the solution will be matched against the experimental data until they fit. I have found this to be working quite well in my case, as long as you are sure you have the correct model to describe the experimental data. Otherwise, expect some seriously irrelevant results.
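A minimal illustration of this fit-by-minimization loop on a one-degree-of-freedom analogue, x'' + a·x' + k·x = 0. The model, the crude Euler integrator, and the brute-force parameter scan are all stand-ins for the three-mass system and a proper simplex optimizer; "experimental" data is just a simulation at a known friction value:

```python
def simulate(a, k=4.0, x0=1.0, v0=0.0, dt=0.001, steps=5000):
    """Explicit-Euler solution of x'' + a*x' + k*x = 0, sampled at every step."""
    x, v, out = x0, v0, []
    for _ in range(steps):
        out.append(x)
        x, v = x + dt * v, v + dt * (-a * v - k * x)
    return out

def objective(x_exp, x_sim):
    """O(x, y) = sum_t |x(t) - y(t)|^2, as in the answer."""
    return sum((e - s) ** 2 for e, s in zip(x_exp, x_sim))

true_a = 0.7
x_exp = simulate(true_a)                        # stand-in for measured data
candidates = [i * 0.05 for i in range(1, 40)]   # friction values 0.05 .. 1.95
best_a = min(candidates, key=lambda a: objective(x_exp, simulate(a)))
print("recovered friction coefficient:", best_a)
```

In practice one would replace the grid scan with NDSolve inside NMinimize (in Mathematica) or a Nelder-Mead simplex routine, and let all of k, a, M vary at once — the structure of re-solving the system at every candidate parameter set stays the same.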
{ "domain": "dsp.stackexchange", "id": 148, "tags": "estimators" }
How do I properly use multimarker in an xml file?
Question: I am trying to detect three AR trackers with the ar_track_alvar package link ar_track_alvar package. The three trackers should be detected as a bundle. My XML file which describes the markers looks as follows: <marker index="0" status="1"> <corner x="-3.45" y="-3.45" z="0" /> <corner x="3.45" y="-3.45" z="0" /> <corner x="3.45" y="3.45" z="0" /> <corner x="-3.45" y="3.45" z="0" /> </marker> <marker index="1" status="1"> <corner x="-3.45" y="-16.85" z="0" /> <corner x="3.45" y="-16.85" z="0" /> <corner x="3.45" y="-9.95" z="0" /> <corner x="-3.45" y="-9.95" z="0" /> </marker> <marker index="2" status="1"> <corner x="21.15" y="-16.85" z="0" /> <corner x="28.05" y="-16.85" z="0" /> <corner x="28.05" y="-9.95" z="0" /> <corner x="21.15" y="-9.95" z="0" /> </marker> As far as I understood, the first marker has to have index 0. For the other markers the index does not matter, right? What does the status mean? What arguments are valid? Sometimes I get the following error: ERROR InferCorners: "ar_marker_0" passed to lookupTransform argument source_frame does not exist. Sometimes it is also another marker number. How can I avoid this error, or what is the reason for it? Originally posted by phi_abs on ROS Answers with karma: 1 on 2017-09-22 Post score: 0 Answer: This is an old one, but I've just been working out the answers to a few of these questions myself, so I thought I'd add them here. I'll answer your three questions separately. The first marker index in the bundle XML file can be any marker on your object; it can but doesn't have to be marker zero. The other marker indices must correspond to the appropriate marker numbers used. Status was a bit of a mystery until I dug into the source code; it seems there is some support for extra corner features on the objects as well as the marker tags, but I haven't attempted to get this working yet. From experimentation we've had it working fine with a status of 1 or 2.
I can't help you with this one I'm afraid, we've not experienced this error on our setup. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-01-18 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 28903, "tags": "ros, xml, marker, ar-track-alvar" }
Acid-base reaction vs. salt dissolving reaction
Question: I was watching a video in Khan Academy about acids and bases. In the video, they asked for the $\mathrm{pH}$ of a solution that is $\pu{0.15M}$ of $\ce{NH3}$ ($K_\mathrm{b}=1.8\times10^{-5}$) and $\pu{0.35M}$ of $\ce{NH4NO3}$. So, it was obvious to me that the first reaction is going to be that of a weak base because the $K_\mathrm{b}$ is given. But what wasn't clear to me was that the second reaction is going to be that of a salt-dissolving reaction in which the salt dissolves completely, as is claimed in the video. Since this problem is presented in a beginners' learning video, I guess there is a way to determine that $\ce{NH4NO3}$ is a salt that dissolves completely in water and is not an acid or base, according to the information given in the video. Answer: tl;dr What wasn't clear to me was that the second reaction is going to be that of a salt-dissolving reaction in which the salt dissolves completely, as is claimed in the video. Remember the rule: "All ionic salts and strong acids/bases are 100% dissociated in aqueous solution" I guess there is a way to determine that $\ce{NH4NO3}$ is a salt that is dissolving completely in water and not an acid or base Realize that all such types of species of the form $\ce{A+B-}$ are ionic salts. If you try to draw out the Lewis structure of $\ce{NH4NO3}$, you'll quickly realize it to be very different from that of, say, water or acetic acid. In ammonium nitrate, you have two separate ions - the ammonium cation and the nitrate ion - held together by strong electrostatic forces of attraction (often called an "ionic bond"). Water, having a high dielectric constant, is able to break these electrostatic forces easily and dissolve the salt in 100% concentration. Hence, we say that "All ionic salts and strong acids/bases are 100% dissociated in aqueous solution" i.e. when dissolved in water, they'll dissociate completely. It seems you already knew the latter part, while the part for ionic salts is new for you.
As I said before, note that ionic salts are different from weak acids or weak bases like $\ce{CH3COOH}$. The $\ce{O-H}$ bond here is covalent and breaking it is an altogether different job as compared to breaking ionic bonds. Here, the reaction (or "equilibrium") has to favor the dissociation of acetic acid. This is ensured by the high stability of the conjugate base. If you've taken GOC classes, you would quickly realize that the conjugate base $\ce{CH3COO-}$ is much weaker than say $\ce{HSO3-}$ or $\ce{NO3-}$ (guess which acids the latter two are? ;) ) Again, realize that the rule is actually not completely correct. Later down the equilibrium road, you'll come across the equilibrium constant $K_\mathrm{sp}$ that defines exactly how much soluble certain salts are in aqueous solutions. Certainly, many ionic salts would be in that list, and that too with very low solubilities instead! Again, the way these people simplify calculations at an elementary level is to implicitly assume that "All ionic salts whose $K_\mathrm{sp}$ is provided are only partially soluble, while those whose $K_\mathrm{sp}$ is not provided are completely soluble (i.e. 100% dissociated)"
{ "domain": "chemistry.stackexchange", "id": 9861, "tags": "acid-base" }
How does ROS build system work?
Question: Hi, I have been using ROS for quite some time and have installed many packages of my own and from the sources available. But I would like to know how the building happens. What does rosmake actually do? CMake followed by make? Why is manifest.xml very important? I might have missed a tutorial if available on this. Could someone point to it if available, or give some detail about the building process here? Thanks, Karthik Originally posted by karthik on ROS Answers with karma: 2831 on 2012-03-07 Post score: 0 Answer: Rosmake basically calls make recursively. This answer talks about the difference between rosmake and make: http://answers.ros.org/question/10614/rosmake-vs-make In rosbuild the Makefile invokes CMake, and uses rospack to get the package information from the manifest.xml files. And you should read about the manifest.xml on the wiki Originally posted by tfoote with karma: 58457 on 2012-03-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8521, "tags": "rosmake, cmake, rosbuild" }
Why is blowing so different than sucking?
Question: Why is it so easy to blow out a candle from a significant distance, but nearly impossible to suck enough air to do the same? Even without focusing the airflow through a nozzle or something, this effect seems to be present. For example, it's easy to feel the air coming out of a box fan, but very hard to feel the air going into one. Answer: A coherent stream of air can be said to be composed of air particles with a similar velocity magnitude and direction. For both sucking and blowing, there is a relatively coherent stream of air in your mouth. Sucking is pulling air almost equally from all directions, into your mouth, where it becomes coherent. See the illustration below. Blowing is introducing a coherent stream of air from your mouth into the air outside your mouth. In the case of blowing, the stream of coherent air that you have created will persist for some distance, at least until it reaches a nearby candle (the coherency is lost with distance due to boundary influences - see the small arrows in the blowing diagram).
{ "domain": "physics.stackexchange", "id": 56073, "tags": "fluid-dynamics, everyday-life, flow, air, home-experiment" }
Hashing Probabilities
Question: I'm not too sure about how to calculate hashing probabilities, and can't find many documents online to help me with it. I am looking to solve this question: "If we hash N items into M buckets using a simple uniform hash, what is the expected number of buckets that have exactly 1 item? What is the expected number of buckets with at least 2 items? What is the expected number of buckets with exactly k items?", so I will appreciate any help with respect to hashing probabilities. Answer: For sufficiently large $M$, the size distribution of hash slots with good uniform hash functions follows a Poisson distribution. Let $\lambda = \frac{N}{M}$ be the load factor. Then the expected proportion of buckets with exactly $k$ items in it is: $$P(\hbox{# of items in the bucket is } k) = \frac{e^{-\lambda} \lambda^k}{k!}$$
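The Poisson claim is easy to check empirically. Below is a small Monte Carlo sketch in plain Python (the function names are my own, and `random.randrange` stands in for a simple uniform hash) comparing the simulated number of buckets of each size against the prediction $m \cdot e^{-\lambda}\lambda^k/k!$:

```python
import random
from math import exp, factorial

random.seed(1)

def bucket_histogram(n, m, trials=200):
    """Hash n items into m buckets uniformly at random; return, averaged
    over the trials, how many buckets end up with exactly k items."""
    counts = {}
    for _ in range(trials):
        buckets = [0] * m
        for _ in range(n):
            buckets[random.randrange(m)] += 1
        for b in buckets:
            counts[b] = counts.get(b, 0) + 1
    return {k: v / trials for k, v in counts.items()}

def poisson_prediction(n, m, k):
    """Expected number of buckets with exactly k items under the
    Poisson approximation with load factor lam = n / m."""
    lam = n / m
    return m * exp(-lam) * lam ** k / factorial(k)

n, m = 1000, 500                      # load factor lambda = 2
observed = bucket_histogram(n, m)
for k in range(5):
    print(k, round(observed.get(k, 0), 1), round(poisson_prediction(n, m, k), 1))
```

The simulated and predicted counts agree closely; multiplying the prediction by $1/m$ gives back the per-bucket probability from the formula above.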
{ "domain": "cs.stackexchange", "id": 7658, "tags": "algorithm-analysis, data-structures, probability-theory, hashing" }
What is the difference between tempering and hardening metals and their end products?
Question: I think I've heard hardening is when you dunk red-hot metal into cold water, and tempering is when you take that hardened metal, heat it slightly, and then let it cool slowly. However what is the difference between their end products? Answer: As the names imply, hardening makes the metal more rigid but more brittle, and tempering (from "temperate", moderate) forgoes some hardness for increased toughness. Iron alloys are hardened by rapid quenching and toughened by annealing, but some copper alloys are made more ductile by quenching and are hardened by working. In brief, annealing allows atoms to migrate to a less-strained position, whereas quenching locks atoms in place; this is used to make tempered glass, as well as to harden metals. Extremely rapid quenching can produce a metallic glass, where precipitation of constituents is prevented. Edged tools may need the body to be ductile, so that it does not shatter, but require the edge to be hardened so that it retains sharpness. See http://www.smt.sandvik.com/en/products/strip-steel/strip-products/knife-steel/hardening-guide/purpose-of-hardening-and-tempering/ or http://www.stormthecastle.com/blacksmithing/blacksmithing-a-knife/stock-removal-method-of-knifemaking-part-4.htm.
{ "domain": "chemistry.stackexchange", "id": 3300, "tags": "metal, heat, metallurgy" }
Doubts about the strong and weak versions of Newton’s third law
Question: I was going through some assertion and reason questions. (a) Both A and R are true, and R is the correct explanation of A. (b) Both A and R are true, but R is not the correct explanation of A. (c) A is true but R is false. (d) A is false but R is true. (e) Both A and R are false. Statement A: For a system of two charges $q_1$ and $q_2$ at a separation $r$, the Coulomb force of $q_1$ on $q_2$ and that of $q_2$ on $q_1$ are equal and opposite, and these may or may not act along the line that joins $q_1$ and $q_2$. Statement R: According to Newton's third law, action and reaction forces are equal and opposite, but these may or may not act along the line joining the two particles. The answer key says (d). My doubts: According to the strong version of Newton's third law, the forces must act along the line joining the two particles. So by “Newton's third law,” do we mean the weak version of Newton's third law? If so, why? If someone could provide a translated version of Newton's paper it would be helpful. Does the Coulomb force follow the strong Newton's third law or the weak Newton's third law? Does the Coulomb force ignore the fact that Newtonian mechanics cannot be used for small particles like electrons and protons, and rather we have to use quantum mechanics? (I don't know much about quantum mechanics.) And lastly, what is the answer? Answer: So by “Newton's third law,” do we mean the weak version of Newton's third law? Yes. If so, why? Why is Newton's third law known as *weak law of action and reaction*? Does the Coulomb force ignore the fact that Newtonian mechanics cannot be used for small particles like electrons and protons, and rather we have to use quantum mechanics? What is the range of the validity of Coulomb's law? Does the Coulomb force follow the strong Newton's third law or the weak Newton's third law? Electrostatic forces obey the strong form of Newton's third law. Hence statement A is false. 
Statement R is true, since by “Newton's third law,” we mean the weak form of Newton's third law. So the answer is d.
{ "domain": "physics.stackexchange", "id": 80965, "tags": "homework-and-exercises, newtonian-mechanics, forces, vectors, coulombs-law" }
Calculating transmitter power from spectral density
Question: I want to find out the real transmit power of my USRP 2922; unfortunately, the only result I get when looking at the spectrum analyzer is the spectral power density. These are the results of my measurements (in LabView - transmitter parameters are: 1 MHz IQ sampling rate, carrier: 2.4375 GHz, 0 TX gain, QPSK modulation; the RX port on the USRP receiver and the TX port on the USRP transmitter are connected with an attenuator): Now, I'm wondering how to find the transmit power of the transmitter from this. The Gain control is for the RX antenna gain, so it's not useful since it's not the power of the USRP transmitter. Maybe I should integrate this value over the band? Alternatively, for a sine signal, calculate the value from the amplitude? Any ideas? Answer: You're right: to convert from a power density to a power, you integrate over frequency! However: if neither your transmitter nor your receiver is calibrated, then you simply can't find the "real" transmitted or received power. The plots you have are "amplitude relative to maximum ADC input". You don't know what power "maximum ADC input" corresponds to without calibration. End of story - there's not a single absolute power in your system that you can use as a reference! The solution here is to calibrate; i.e. send a signal, measure it with a calibrated spectrum analyzer, measure the same signal with your USRP, and then calculate the factor between these two powers. Due to linearity, that factor (== corrective additive term in dB) stays the same for all powers.
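To make the "integrate over frequency" step concrete, here is a small Python/NumPy sketch with a synthetic complex tone standing in for the USRP capture (the sample rate matches the question, but everything else here is an illustrative assumption). By Parseval's theorem, summing a periodogram-style PSD over all bins, times the bin width, recovers the signal power:

```python
import numpy as np

fs = 1_000_000                        # assume a 1 MHz IQ sampling rate
n = 4096
t = np.arange(n) / fs
x = 0.5 * np.exp(2j * np.pi * 100e3 * t)   # complex tone; power = |0.5|^2 = 0.25

# Periodogram-style PSD estimate, in power per Hz
X = np.fft.fft(x)
psd = np.abs(X) ** 2 / (fs * n)
df = fs / n                           # width of one frequency bin, in Hz

total_power = np.sum(psd) * df        # integrate the PSD over frequency
print(total_power)                    # recovers the tone power, ~0.25
```

Note that this only yields power in the receiver's arbitrary (relative-to-ADC) units; as the answer stresses, mapping it to absolute watts or dBm still requires a calibration factor measured against a known instrument.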
{ "domain": "dsp.stackexchange", "id": 9792, "tags": "power-spectral-density, bandwidth, amplitude, usrp" }
What happens when we reconstruct a signal after its Fourier transform is done and some frequencies are altered?
Question: Since we lose time information when we take a Fourier transform, what happens if we alter a few frequencies in the transform and then reconstruct the signal? Do we get the altered frequency at the exact place the original frequency was (in time)? Would it be beneficial to use the Wavelet transform instead of the FFT? If yes, how? Answer: The spatial content is carried in the phase of the Fourier transform, not in the magnitude. As Marcus pointed out, the difficulty with the Fourier transform is that it doesn't allow us to immediately analyze the spatial information while still looking at the frequency. There are methods such as the windowed FT which try to make that a bit better. However, if you alter frequencies in the magnitude and reconstruct, you get filtering effects. For instance, if we cut the low frequencies, then it would be a high-pass filter, and likewise, cutting high bands would cause a low-pass filter effect. This is analogous to applying, e.g., a mean/Gaussian filter to an image. Regarding Wavelets, they try to address this time/frequency issue I mentioned above. Wavelets are limited in time and frequency. We slide the wavelets in the time domain to construct the Wavelet domain. This provides resolution in the time domain. On the application side, Wavelet denoising is shown to be superior to Fourier-domain denoising. But such superiority is application dependent, as for some applications one doesn't need that spatial analysis. You could find some more information on the differentiation here. 
Here is an example, based on this one, where we do a low-pass filter by keeping only the low frequency values (both positive and negative):

% make our noisy function
t = linspace(1,5,1024);
x = -(t-2).^2 + 2;
y = awgn(x,0.5);
F = fft(y,1024);
freqRange = 20;                      % range of frequencies we want to preserve
rectangle = zeros(size(F));
rectangle(1:freqRange+1) = 1;        % preserve low positive frequencies
y_half = ifft(F.*rectangle,1024);    % +ve low-pass filtered signal
rectangle(end-freqRange+1:end) = 1;  % preserve low negative frequencies
y_rect = ifft(F.*rectangle,1024);    % full low-pass filtered signal
figure; plot(t,y,'g--');
hold on, plot(t,x,'k','LineWidth',2);
hold on, plot(t,y_half,'b','LineWidth',2);
hold on, plot(t,y_rect,'r','LineWidth',2);
legend('noisy signal','true signal','positive low-pass','full low-pass','Location','southwest');

The resulting signals look like this: You could see here that the altered frequencies affect the whole signal - wherever a high frequency existed gets smoothed. The spatial content is preserved in reconstructing the signal, because the FT is a loss-less transformation, but we didn't get the chance to modify the exact spatial locations - the whole signal is affected by a change to the FT magnitude.
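For readers without MATLAB's Communications Toolbox (`awgn`), a rough NumPy translation of the same low-pass idea might look like the following; it is a sketch under my own assumptions - Gaussian noise added directly, and both the positive and the matching negative low-frequency bins kept, so the filtered output is real:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(1, 5, 1024)
x = -(t - 2) ** 2 + 2                       # true signal
y = x + rng.normal(scale=1.0, size=t.size)  # noisy version

F = np.fft.fft(y)
keep = 20                                   # low-frequency bins to preserve
mask = np.zeros(t.size)
mask[:keep + 1] = 1                         # DC and positive low frequencies
mask[-keep:] = 1                            # matching negative frequencies
y_filt = np.fft.ifft(F * mask).real         # full low-pass filtered signal

# The filtered curve tracks the parabola more closely than the raw noisy one
print(np.abs(y_filt - x).mean() < np.abs(y - x).mean())
```

As in the MATLAB version, the smoothing acts on the whole signal at once - there is no way to confine the change to one time location by editing the magnitude spectrum.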
{ "domain": "dsp.stackexchange", "id": 5537, "tags": "fft, signal-analysis, fourier-transform, computer-vision, wavelet" }
BIT: What is the intuition behind a binary indexed tree and how was it thought about?
Question: A binary indexed tree has relatively little literature compared to other data structures. The only place where it is taught is the topcoder tutorial. Although the tutorial is complete in all the explanations, I cannot understand the intuition behind such a tree. How was it invented? What is the actual proof of its correctness? Answer: Intuitively, you can think of a binary indexed tree as a compressed representation of a binary tree that is itself an optimization of a standard array representation. This answer goes into one possible derivation. Let's suppose, for example, that you want to store cumulative frequencies for a total of 7 different elements. You could start off by writing out seven buckets into which the numbers will be distributed:

[ ] [ ] [ ] [ ] [ ] [ ] [ ]
 1   2   3   4   5   6   7

Now, let's suppose that the cumulative frequencies look something like this:

[ 5 ] [ 6 ] [14 ] [25 ] [77 ] [105] [105]
  1     2     3     4     5     6     7

Using this version of the array, you can increment the cumulative frequency of any element by increasing the value of the number stored at that spot, then incrementing the frequencies of everything that comes afterwards. For example, to increase the cumulative frequency of 3 by 7, we could add 7 to each element in the array at or after position 3, as shown here:

[ 5 ] [ 6 ] [21 ] [32 ] [84 ] [112] [112]
  1     2     3     4     5     6     7

The problem with this is that it takes O(n) time, which is pretty slow if n is large. One way that we can think about improving this operation would be to change what we store in the buckets. Rather than storing the cumulative frequency up to the given point, you can instead think of just storing the amount that the current frequency has increased relative to the previous bucket. 
For example, in our case, we would rewrite the above buckets as follows:

Before: [ 5 ] [ 6 ] [21 ] [32 ] [84 ] [112] [112]
          1     2     3     4     5     6     7

After:  [ +5] [ +1] [+15] [+11] [+52] [+28] [ +0]
          1     2     3     4     5     6     7

Now, we can increment the frequency within a bucket in time O(1) by just adding the appropriate amount to that bucket. However, the total cost of doing a lookup now becomes O(n), since we have to recompute the total in the bucket by summing up the values in all smaller buckets. The first major insight we need to get from here to a binary indexed tree is the following: rather than continuously recomputing the sum of the array elements that precede a particular element, what if we were to precompute the total sum of all the elements before specific points in the sequence? If we could do that, then we could figure out the cumulative sum at a point by just summing up the right combination of these precomputed sums. One way to do this is to change the representation from being an array of buckets to being a binary tree of nodes. Each node will be annotated with a value that represents the cumulative sum of all the nodes to the left of that given node. For example, suppose we construct the following binary tree from these nodes:

          4
       /     \
      2       6
     / \     / \
    1   3   5   7

Now, we can augment each node by storing the cumulative sum of all the values including that node and its left subtree. For example, given our values, we would store the following:

Before: [ +5] [ +1] [+15] [+11] [+52] [+28] [ +0]
          1     2     3     4     5     6     7

After:
             4
          [+32]
         /     \
        2       6
     [ +6]   [+80]
      / \     / \
     1   3   5   7
  [ +5] [+15] [+52] [ +0]

Given this tree structure, it's easy to determine the cumulative sum up to a point. The idea is the following: we maintain a counter, initially 0, then do a normal binary search up until we find the node in question. As we do so, we also do the following: any time that we move right, add the current value to the counter. For example, suppose we want to look up the sum for 3. 
To do so, we do the following:

Start at the root (4). Counter is 0.
Go left to node (2). Counter is 0.
Go right to node (3). Counter is 0 + 6 = 6.
Find node (3). Counter is 6 + 15 = 21.

You could imagine also running this process in reverse: starting at a given node, initialize the counter to that node's value, then walk up the tree to the root. Any time you follow a right child link upward, add in the value at the node you arrive at. For example, to find the frequency for 3, we could do the following:

Start at node (3). Counter is 15.
Go upward to node (2). Counter is 15 + 6 = 21.
Go upward to node (4). Counter is 21.

To increment the frequency of a node (and, implicitly, the frequencies of all nodes that come after it), we need to update the set of nodes in the tree that include that node in its left subtree. To do this, we do the following: increment the frequency for that node, then start walking up to the root of the tree. Any time you follow a link that takes you up as a left child, increment the frequency of the node you encounter by adding in the current value. For example, to increment the frequency of node 1 by five, we would do the following:

             4
          [+32]
         /     \
        2       6
     [ +6]   [+80]
      / \     / \
   > 1   3   5   7
  [ +5] [+15] [+52] [ +0]

Starting at node 1, increment its frequency by 5 to get

             4
          [+32]
         /     \
        2       6
     [ +6]   [+80]
      / \     / \
   > 1   3   5   7
  [+10] [+15] [+52] [ +0]

Now, go to its parent:

             4
          [+32]
         /     \
      > 2       6
     [ +6]   [+80]
      / \     / \
     1   3   5   7
  [+10] [+15] [+52] [ +0]

We followed a left child link upward, so we increment this node's frequency as well:

             4
          [+32]
         /     \
      > 2       6
     [+11]   [+80]
      / \     / \
     1   3   5   7
  [+10] [+15] [+52] [ +0]

We now go to its parent:

           > 4
          [+32]
         /     \
        2       6
     [+11]   [+80]
      / \     / \
     1   3   5   7
  [+10] [+15] [+52] [ +0]

That was a left child link, so we increment this node as well:

             4
          [+37]
         /     \
        2       6
     [+11]   [+80]
      / \     / \
     1   3   5   7
  [+10] [+15] [+52] [ +0]

And now we're done! The final step is to convert from this to a binary indexed tree, and this is where we get to do some fun things with binary numbers. 
Let's rewrite each bucket index in this tree in binary:

            100
          [+37]
         /     \
      010       110
    [+11]     [+80]
      / \       / \
   001  011  101  111
  [+10] [+15] [+52] [ +0]

Here, we can make a very, very cool observation. Take any of these binary numbers and find the very last 1 that was set in the number, then drop that bit off, along with all the bits that come after it. You are now left with the following:

         (empty)
          [+37]
         /     \
        0       1
     [+11]    [+80]
      / \      / \
    00   01  10   11
  [+10] [+15] [+52] [ +0]

Here is a really, really cool observation: if you treat 0 to mean "left" and 1 to mean "right," the remaining bits on each number spell out exactly how to start at the root and then walk down to that number. For example, node 5 has binary pattern 101. The last 1 is the final bit, so we drop that to get 10. Indeed, if you start at the root, go right (1), then go left (0), you end up at node 5! The reason that this is significant is that our lookup and update operations depend on the access path from the node back up to the root and whether we're following left or right child links. For example, during a lookup, we just care about the right links we follow. During an update, we just care about the left links we follow. This binary indexed tree does all of this super efficiently by just using the bits in the index. The key trick is the following property of this perfect binary tree: Given node n, the next node on the access path back up to the root in which we go right is given by taking the binary representation of n and removing the last 1. For example, take a look at the access path for node 7, which is 111. The nodes on the access path to the root that we take that involve following a right pointer upward are

Node 7: 111
Node 6: 110
Node 4: 100

All of these are right links. 
If we take the access path for node 3, which is 011, and look at the nodes where we go right, we get

Node 3: 011
Node 2: 010
(Node 4: 100, which follows a left link)

This means that we can very, very efficiently compute the cumulative sum up to a node as follows:

Write out node n in binary.
Set the counter to 0.
Repeat the following while n ≠ 0:
  Add in the value at node n.
  Clear the rightmost 1 bit from n.

Similarly, let's think about how we would do an update step. To do this, we would want to follow the access path back up to the root, updating all nodes where we followed a left link upward. We can do this by essentially doing the above algorithm, but switching all 1's to 0's and 0's to 1's. The final step in the binary indexed tree is to note that because of this bitwise trickery, we don't even need to have the tree stored explicitly anymore. We can just store all the nodes in an array of length n, then use the bitwise twiddling techniques to navigate the tree implicitly. In fact, that's exactly what the binary indexed tree does - it stores the nodes in an array, then uses these bitwise tricks to efficiently simulate walking upward in this tree. Hope this helps!
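The two loops described above - clear the lowest set bit to walk the lookup path, add it to walk the update path - can be sketched as a short Python class (an illustrative implementation of the idea, not taken from any particular library):

```python
class FenwickTree:
    """Binary indexed tree over indices 1..n: point update, prefix sum."""

    def __init__(self, n):
        self.tree = [0] * (n + 1)     # slot 0 unused; 1-based indexing

    def update(self, i, delta):
        # Follow the "left links" upward: repeatedly add the lowest set bit.
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):
        # Follow the "right links" upward: repeatedly clear the lowest set bit.
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)
        return total

# Rebuild the running example: per-element frequencies 5, 1, 15, 11, 52, 28, 0
ft = FenwickTree(7)
for idx, freq in enumerate([5, 1, 15, 11, 52, 28, 0], start=1):
    ft.update(idx, freq)

print(ft.prefix_sum(3))   # 5 + 1 + 15 = 21, matching the walk-through above
```

After the loop, the internal array holds exactly the node values from the annotated tree (e.g. index 4, binary 100, holds 32), and `i & (-i)` isolates the lowest set bit in both directions.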
{ "domain": "cs.stackexchange", "id": 21499, "tags": "algorithms, binary-trees, trees" }
About particle number non-conservation in Quantum Field Theory
Question: There are already similar topics with interesting answers, such as When particle number can change in quantum physics?, but I still don't understand much. I often read about the non-conservation of particle number in (relativistic) Quantum Field Theory but it remains quite obscure to me. First of all, isn't particle number trivially conserved in any free field theory such as the Klein-Gordon field? In quantizing the theory we construct a Hilbert space that is the direct sum of n-particle Hilbert spaces (Fock space), but to the best of my knowledge states of different particle numbers are orthogonal when there are no interactions. Once you include interactions such as in QED, you can of course have processes such as $\gamma + \gamma \longrightarrow \text{particle} + \text{antiparticle}$. Then I see that Hilbert states of different particle numbers (I use particle as a general term for particles and antiparticles alike) for e.g. the Dirac field will be connected through a "flow" of photons into $e^{+} + e^{-}$. So the number of particles of a given type is not conserved, but particle number is still conserved overall in such a process; there is simply a transfer from one field to another. So I guess my main question is: are there elementary particle processes where the overall particle number is not conserved? I can only think of Feynman diagrams that have the same number of ingoing and outgoing particles, and it seems to me that momentum conservation etc. would not be respected otherwise. But then the overall number of particles in the Universe (as in excitations of any field) would be conserved. That would also imply that when we say e.g. an atom absorbed a photon and emitted two lower frequency ones, it would only be an approximation of some kind. I would appreciate it if someone could shed some light on this :) Answer: So I guess my main question is: are there elementary particle processes where the overall particle number is not conserved? Yes, there are. 
Consider, for example, the very first figure shown in the Wikipedia article on Feynman diagrams. This shows an electron and a positron annihilating to become a photon. So, if we were to cut the diagram at this point, particle number is already not conserved. (Two particles became one particle.) The photon is then shown as proceeding along until it becomes a quark, an antiquark, and a gluon. Clearly the number of electrons wasn't conserved, the number of positrons wasn't conserved, the number of photons wasn't conserved, etc. Further, the initial state had two particles and the final state has three particles. So, even considering just the initial and final states in the diagram, there is no conservation of particle number. Indeed, one of the main motivations for introducing Quantum Field Theory (QFT) is to account for particle number non-conservation.
{ "domain": "physics.stackexchange", "id": 97024, "tags": "quantum-field-theory, particle-physics, conservation-laws" }
Determining geometry of trisubstituted alkene
Question: The way I know to tell whether an alkene is E or Z is to look at the coupling constant of the protons across the double bond. A large value (16 Hz) indicates an (E)-alkene whereas smaller values (11 Hz) usually indicate a (Z)-alkene. In my project I've been doing a cross metathesis between two alkenes which gives me a tri-substituted alkene, and I cannot observe the coupling constant across this double bond. The reaction gives a mixture of products which I think are the (E)- and (Z)-isomers. I'm able to separate them on HPLC but they do not crystallise. Is there another method of determining this? A lab member suggested I could derivatise the alkene but I don't think this would provide any more useful information. Answer: As @Zhe points out, it's not possible to definitively answer your question without knowing the structure of the olefin, as it's important what is around the olefin, as well as just how many protons are attached to the olefin. If you read many of the original Grubbs papers (and indeed any papers using a Grubbs metathesis to make tri-substituted alkenes), the common method to determine the geometry is to use the nuclear Overhauser effect, or nOe for short. The nOe identifies interactions through space (as opposed to standard NMR, where we're looking at the interactions through bonds). By recording the nOe, one can essentially 'measure' how close two protons are, which in this case would allow you to distinguish E and Z. Practically speaking, there are several different NMR experiments that one can run to observe the nOe. The 1D nOe (1D-NOESY) difference experiment involves irradiation of a particular signal (for instance the one alkene proton), to allow measurement of what other protons are proximal in space. 
This experiment has the advantage of being quick to run, but in order to gain useful information, you have to run several of them (irradiate the alkene proton, then run a different 1D-NOESY in order to make sure that the proton that the alkene proton is 'seeing' also 'sees' the alkene proton). The 2D nOe (2D-NOESY) experiment shows all nOe enhancements in a molecule; this is incredibly useful and often used in structure elucidation, but can be slow to run, especially if the amount of sample available is limited.
{ "domain": "chemistry.stackexchange", "id": 8460, "tags": "organic-chemistry, stereochemistry, cis-trans-isomerism" }
how electrophoretic display like E ink retains particles on top of capsule without power?
Question: How does an electronic ink display retain the particles on top of the microcapsule when the charge on the electrode is released or the device is powered off? Doesn't that make the particles loose in the dielectric fluid of the capsule after the voltage on the electrode is removed? How does e-ink maintain an image even when the device is powered off? I have understood the basics of e-ink, but couldn't understand this part. Answer: From skimming a few articles and patents on e-ink driver technology, my impression is that the primary reason is that each microcapsule acts as a capacitor. Once voltage is applied, the particles move to one electrode or the other and remain there because there is no drain path for the charge. The 'gooiness' of the fluid helps, as evidenced by the typical approach of applying a "shaking pulse" sequence after a certain number of image transitions. This pulse sequence helps ensure that all the microparticles are freed up to be driven to the appropriate state. This may or may not be helpful: http://patents.justia.com/patent/20060170648
{ "domain": "physics.stackexchange", "id": 11978, "tags": "electromagnetism, dielectric" }
How does an isolated body in deep space 'know' it's rotating?
Question: We can imagine an object floating in the known universe, maximally distant from any other large mass. Maybe it has been there since coalescing after the big bang. What physical phenomena tell it whether it is rotating relative to the rest of the universe and therefore experiencing a centrifugal (?) force. Is it the combined gravity of all other matter? Is it 'spooky action at a distance'? Is it because it is rotating relative to 'empty space'? Answer: This is a longstanding problem in physics and has not been wholly solved to anyone's satisfaction. It's not just rotational motion; any motion is subject to this concern. Very basically, what is "motion" for a singular object in its own universe? Mach was one of the first to really explore this issue. He spoke of masses in deep space and wondered if they would have momentum. He concluded they had to, and then went looking for potential solutions to the obvious problem of the lack of any sort of universal ruler. He concluded that the mass distribution of the universe as a whole (which at that time was the Milky Way, remember) forms a sort of momentum background against which all objects, local or not, actually measure against. So even in the case when you're studying the collision of objects on a billiard table, the momentum you measure isn't relative to the table, it's "really" relative to this universal frame, but in the end the table is too, so you can reduce it that way. A more direct solution to the problem was offered by Brans-Dicke theory. This is a theory that is very similar to General Relativity in that it ascribes many things, notably gravity, to the geometry of spacetime. However, it also adds a second linear field that is sort of "baked into" the universe when it is created. This field creates a background reference frame for momentum. So if BD theory is correct, yes, a universe with a single object in it will definitely feel angular momentum. Unfortunately, as far as we can tell, BD is wrong. 
There is no direct evidence of this, but it falls to Occam's Razor. The issue is that BD has a coupling constant (alpha, IIRC) that defines how strongly this other field couples to the spacetime - it's basically similar to G in normal GR. As it falls to zero, the theory becomes GR in the same sort of way that Newtonian gravity is the weak-field limit of GR. You can measure alpha indirectly, and to date every new measurement forces it ever closer to zero. So GR wins.
{ "domain": "physics.stackexchange", "id": 53748, "tags": "newtonian-mechanics, rotational-dynamics, reference-frames, inertial-frames, machs-principle" }
How to get expected running time of hash table?
Question: If I have a hash table of 1000 slots and an array of n numbers, I want to check if there are any repeats in the array of n numbers. The best way to do this that I can think of is storing it in the hash table using a hash function which satisfies the simple uniform hashing assumption. Then before every insert you just check all elements in the chain. This gives fewer collisions and makes the average length of a chain $\alpha = \frac{n}{m} = \frac{n}{1000}$. I am trying to get the expected running time of this, but from what I understand, you are doing an insert operation up to $n$ times. The average running time of a search in a linked list is $\Theta(1+\alpha)$. Doesn't this make the expected running time $O(n+n\alpha) = O(n+\frac{n^2}{1000}) = O(n^2)$? This seems too much for an expected running time. Am I making a mistake here? Answer: If you think of $1000$ as a constant, then yes, the running time is terrible. The idea of hash tables is that if the hash table is moderately bigger than the amount of data stored in it, then operations are very fast. Suppose for example that the hash table has size $m = 2n$. Then each operation takes constant time in expectation. In implementations of hash tables, the hash table expands as more entries are inserted, to ensure that the ratio $\alpha$ remains reasonable. Amortized analysis shows that this does not cause a performance hit in terms of the total running time, though individual operations might be slower (this is only a problem if you're writing a real-time application).
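To make the accounting concrete, here is a small Python sketch (my own illustration, with Python's built-in hash standing in for the uniform hash) of the duplicate check with chaining. It counts the equality comparisons performed; for $n$ distinct keys spread evenly over $m$ slots these total about $n(\alpha - 1)/2$ on top of the $n$ inserts, which is the $O(n + n\alpha)$ behavior from the question:

```python
def has_repeats(items, m=1000):
    """Duplicate check with a chained hash table of m slots.
    Returns (found, comparisons); each insert scans its chain first."""
    table = [[] for _ in range(m)]
    comparisons = 0
    for x in items:
        chain = table[hash(x) % m]
        for y in chain:
            comparisons += 1
            if y == x:
                return True, comparisons
        chain.append(x)
    return False, comparisons

found, work = has_repeats(range(5000))   # n = 5000 distinct keys, alpha = 5
print(found, work)                       # no repeats among distinct keys
found2, _ = has_repeats([1, 2, 3, 2])
print(found2)                            # repeat detected
```

Doubling `m` as the table fills (as the answer suggests) keeps $\alpha$ bounded by a constant, so the comparison count stays $O(n)$ overall.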
{ "domain": "cs.stackexchange", "id": 1153, "tags": "algorithms, algorithm-analysis, hash-tables, hash, probabilistic-algorithms" }
Linearization of MHD equations for a cold plasma and low frequencies
Question: I'm trying to convert the following equations for a cold plasma (take the pressure $P=0$), for low frequencies and ignoring dissipative effects: $$\rho \left(\left(\frac{\partial \vec{v} }{\partial t}\right)+ (\vec{v}\cdot \nabla)\vec{v}\right)= \frac{1}{c}\vec{j}\times \vec{B}$$ $$\vec{E}=-\frac{1}{c}\vec{v}\times\vec{B}$$ $$\nabla \times \vec{E}=-\frac{1}{c}\frac{\partial \vec{B}}{\partial t}$$ $$\nabla \times \vec{B}=\frac{4\pi}{c}\vec{j}+\frac{1}{c}\frac{\partial \vec{E}}{\partial t}$$ Assuming $\vec{B}=\vec{B}_0+\vec{B}_1$ with $\vec{B}_0=B_0 \hat{e}_z$ constant and homogeneous, using a linearization for plane waves and considering the equilibrium velocity $v_0=0$, convert those equations into: $$\vec{v}_1=\frac{i}{\rho_0 \omega c}\vec{j}_1 \times \vec{B}_0$$ $$\vec{E}_1=-\frac{1}{c}\vec{v}_1 \times \vec{B}_0$$ $$\vec{B}_1=\frac{c}{\omega}\vec{k}\times \vec{E}_1$$ $$\vec{k}\times(\vec{k}\times \vec{E}_1)=-\left(\frac{1+V_A^2/c^2}{V_A^2}\right)\omega^2 \vec{E}_1$$ Where $V_A=\left(\frac{B_0^2}{\rho_0 4\pi}\right)^{1/2}$ is the Alfvén velocity. I could transform the first three equations, but the one involving the Alfvén velocity gave me a lot of problems. Could you help me? Answer: As a matter of preference, I will use $\mathbf{Q}_{o}$ and $\delta \mathbf{Q}$ for the quasi-static and fluctuating terms. 
Next, recall from vector calculus that the following holds: $$ \mathbf{A} \times \left( \mathbf{B} \times \mathbf{C} \right) = \left( \mathbf{A} \cdot \mathbf{C} \right) \mathbf{B} - \left( \mathbf{A} \cdot \mathbf{B} \right) \mathbf{C} \tag{1} $$ Next we use the expression for $\delta \mathbf{v}$ in the convective electric field term to get: $$ \begin{align} \delta \mathbf{E} & = - \frac{ 1 }{ c } \ \delta \mathbf{v} \times \mathbf{B}_{o} \tag{2a} \\ & = - \frac{ i }{ \rho_{o} \ \omega \ c } \left[ \left( \delta \mathbf{j} \times \mathbf{B}_{o} \right) \times \mathbf{B}_{o} \right] \tag{2b} \\ & = \frac{ i \ B_{o}^{2} }{ \rho_{o} \ \omega \ c^{2} } \left[ \frac{ \mathbf{B}_{o} \times \left( \delta \mathbf{j} \times \mathbf{B}_{o} \right) }{ B_{o}^{2} } \right] \tag{2c} \\ & = \frac{ 4 \ \pi \ i }{ \omega } \left( \frac{ V_{A} }{ c \ B_{o} } \right)^{2} \left[ B_{o}^{2} \ \delta \mathbf{j} - \left( \delta \mathbf{j} \cdot \mathbf{B}_{o} \right) \mathbf{B}_{o} \right] \tag{2d} \end{align} $$ The next thing to notice is that Equation 2a tells us that $\delta \mathbf{E}$ is orthogonal to $\mathbf{B}_{o}$, therefore the last term in Equation 2d must be zero. This is another way of saying there are no field-aligned current perturbations. 
Then we can rewrite Equation 2d in terms of the current to get: $$ \delta \mathbf{j} = \frac{ \omega }{ 4 \ \pi \ i } \left( \frac{ c }{ V_{A} } \right)^{2} \delta \mathbf{E} \tag{3} $$ Finally, Ampere's law goes to: $$ \begin{align} i \mathbf{k} \times \delta \mathbf{B} & = \frac{ 4 \ \pi }{ c } \delta \mathbf{j} - \frac{ i \ \omega }{ c } \delta \mathbf{E} \tag{4a} \\ \frac{ i \ c }{ \omega } \mathbf{k} \times \left( \mathbf{k} \times \delta \mathbf{E} \right) & = \frac{ 4 \ \pi }{ c } \left[ \frac{ \omega }{ 4 \ \pi \ i } \left( \frac{ c }{ V_{A} } \right)^{2} \delta \mathbf{E} \right] - \frac{ i \ \omega }{ c } \delta \mathbf{E} \tag{4b} \\ \mathbf{k} \times \left( \mathbf{k} \times \delta \mathbf{E} \right) & = - \left( \frac{ \omega }{ c } \right)^{2} \left( \frac{ c }{ V_{A} } \right)^{2} \delta \mathbf{E} - \left( \frac{ \omega }{ c } \right)^{2} \delta \mathbf{E} \tag{4c} \\ & = - \left( \frac{ \omega }{ c } \right)^{2} \left[ 1 + \left( \frac{ c }{ V_{A} } \right)^{2} \right] \delta \mathbf{E} \tag{4d} \\ & = - \omega^{2} \left[ \frac{ 1 }{ c^{2} } + \frac{ 1 }{ V_{A}^{2} } \right] \delta \mathbf{E} \tag{4e} \\ & = - \left( \frac{ \omega }{ V_{A} } \right)^{2} \left[ 1 + \left( \frac{ V_{A} }{ c } \right)^{2} \right] \delta \mathbf{E} \tag{4f} \end{align} $$ where Equation 4f is the same as that which you seek. The only part that may not be immediately obvious here is that $\delta \mathbf{j}$ is orthogonal to $\mathbf{B}_{o}$. Generally if the only contribution to the electric field is from the convective term, then there can be no field-aligned currents. If you allow for other terms in the generalized Ohm's law, then you can get field-aligned currents, which are equivalent to a specific form of Alfven wave called kinetic or shear Alfven waves, depending on the limits/boundary conditions of interest.
{ "domain": "physics.stackexchange", "id": 54010, "tags": "homework-and-exercises, electromagnetism, plasma-physics, magnetohydrodynamics" }
How to deserialize ROS messages from Bag to ROS defined types in C++?
Question: I want to deserialize messages from a Bag file and publish part of their contents with ROS for further processing. I read a bag file with the ROS C++ API. I know exactly the content (topic names and types) of my messages at compile time, such that I want to deserialize it into common ROS defined classes (e.g. the ones defined here https://docs.ros.org/en/noetic/api/sensor_msgs/html/index-msg.html) at runtime. Here is some code: #include <rosbag/bag.h> #include <rosbag/view.h> //other content rosbag::Bag bag; bag.open("foo.bag", rosbag::bagmode::Read); std::vector<std::string> topics{"camera1/camera_info"}; rosbag::View view(bag, rosbag::TopicQuery(topics)); for(auto const m : view) { const std::string& topic_name = m.getTopic(); std::cout << topic_name << std::endl; std_msgs::String::ConstPtr s = m.instantiate<std_msgs::String>(); if (s != NULL) std::cout << s->data << std::endl; // I want the data in s as a string to be deserialized into an instance // of the type CameraInfo located in // path/ros/distro/sensor_msgs/CameraInfo.h shipping with ROS. CameraInfo ci = is_there_any_os_solution(s); //? } bag.close(); I tried to find something in this direction but I just found libraries deserializing contents unknown at compile time. Could anyone give some hints? Answer: I think that you're looking for the instantiate method on the rosbag MessageInstance. There's a very minimal example here BOOST_FOREACH(rosbag::MessageInstance const m, view) { std_msgs::Int32::ConstPtr i = m.instantiate<std_msgs::Int32>(); ... And more examples here and a cookbook of examples using the rosbag API.
{ "domain": "robotics.stackexchange", "id": 38560, "tags": "ros, c++, rosbag" }
How is this the last supermoon of 2019?
Question: Media coverage of the "super worm equinox moon" (*eyeroll*) has stated that this is the last supermoon of 2019. Since supermoons normally happen every three or four months, how can there be big gaps? It feels like phenomena like supermoons, which are basically due to the relative phase of orbits, shouldn't have sharp transitions like this. Answer: A supermoon does not occur every three or four months. There may be 2 or 3 consecutive supermoons (that is, separated by 1 month) that occur at about the same time each year. If you ignore the precession of the Moon's orbit, then there is one time of the year when the Full Moon and perigee coincide (point 1 in my diagram below). Two weeks later, the New Moon occurs near apogee (point 2). At any other time of the year, the Full Moon occurs at a different point in its orbit around the Earth, so it is farther from the Earth than at perigee. Six months later, the Full Moon is occurring at apogee (point 4), and no one cares about that! (not to scale!)
1. Full Moon at perigee (closest to the Earth)
2. New Moon at apogee (farthest from the Earth)
3. New Moon at perigee
4. Full Moon at apogee
The Wikipedia article on the Supermoon has a nice graphic showing the Full Moon and distance from the Earth. Depending on how close to the Earth the Moon needs to be to be "super", you can see that there is a "season" when the supermoon occurs. (I have copied the image here, and added a dashed line at 360,000 km to show which moons might be "super" and which ones are not.) Now, if only people would care about the Super First Quarter Moon. Then we would be celebrating on May 12, 2019! (Not really. That is farther than the 360,000 km criterion.)
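The length of this supermoon "season" cycle can be estimated with a short aside (my back-of-the-envelope addition, not part of the original answer): the Full Moon realigns with perigee at the beat period of the synodic month ($\approx 29.53$ days) and the anomalistic month ($\approx 27.55$ days), $$T_{beat}=\left(\frac{1}{27.55\ \text{d}}-\frac{1}{29.53\ \text{d}}\right)^{-1}\approx 411.8\ \text{d}\approx 14\ \text{synodic months,}$$ so supermoon seasons recur roughly every 14 lunations, drifting about a month and a half later each calendar year.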
{ "domain": "astronomy.stackexchange", "id": 3586, "tags": "the-moon" }
How is the image distance negative?
Question: The following is a problem from a book: "The near point of a person's eye is 53.0 cm. To see objects clearly at a distance of 24.0 cm, what should be the focal length of the appropriate corrective lens?" My book says the image distance (from the center of the lens) is negative. Why is that so? Isn't the image distance negative when an image is formed on the same side as the object? That doesn't seem to be the case here: my understanding of the scenario is that the lens is in between the object and the image, with the object on one side of the lens and the image on the opposite side. Answer: The corrective lens produces an image at the near point (or farther) so that the person can see it. In your question the object is at 24 cm but the near point is at 53 cm, so the person can see clearly only if what the eye looks at is at 53 cm or beyond. What the corrective lens does is produce an image (say I1) of the object (which is at 24 cm) at 53 cm or beyond. This image I1 acts as the object for the eye. Since the image formed by the corrective lens is on the same side as the object, its distance is negative by the sign convention.
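For completeness, here is a worked sketch of the book problem using the thin-lens formula with the Cartesian sign convention (the convention choice is mine; the distances are from the question, measured from the lens with the object side negative): $$\frac{1}{f}=\frac{1}{v}-\frac{1}{u}=\frac{1}{-53\ \text{cm}}-\frac{1}{-24\ \text{cm}}=\frac{1}{24\ \text{cm}}-\frac{1}{53\ \text{cm}}\approx\frac{1}{43.9\ \text{cm}}$$ so $f\approx +44\ \text{cm}$: a converging lens, consistent with the virtual image I1 forming on the same side as the object.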
{ "domain": "physics.stackexchange", "id": 56422, "tags": "reflection, refraction, lenses" }
Differential element of current question
Question: I'm watching some basic magnetic field derivations and most of the proofs use the differential element $dI$. Let's say a wire carries a current $I$. When we take a very small current $dI$ though, shouldn't it have the same value as the current running through the wire? Also, if the current is constant shouldn't $dI$ be zero? What is actually a differential element? Maybe that's the question I should be asking. The only time that it makes sense to me is in 3D conductors where the current varies inside them. One of the examples I had trouble with was while using Ampere's law to find B inside a solenoid. The $I_{enclosed}$ is different from the total current. Isn't $I$ just a rate of movement? How are you supposed to enclose it? Maybe it's actually the charge you enclose? No, that would be Gauss' law. Anyway, I hope you see what I am missing here. Answer: The current density $\vec J$ is a local quantity; it is called a current density for a reason, since it does not depend on the wire as a whole. $I$ is the flux of $\vec J$ through the transversal (cross-sectional) surface of the wire, so the total current $I$ depends on the cross-sectional area. I don't like to use $dI$, it is confusing. Instead use $I_{enc}$. If the current density $\vec J$ is constant everywhere, then $I_{enc}$ depends on the radius at which you take the circular surface, and $I$ is understood generally as a known value that coincides with $I_{enc}$ when the radius of the surface you take is the same as the radius of the wire.
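To make the $I_{enclosed}$ point concrete, here is a standard textbook sketch (my addition, written in SI units): for a straight wire of radius $R$ carrying a uniform current density $J$, an Amperian loop of radius $r<R$ encloses only part of the total current $I=J\pi R^2$: $$I_{enc}=J\pi r^{2}=I\,\frac{r^{2}}{R^{2}},\qquad \oint \vec{B}\cdot d\vec{\ell}=\mu_0 I_{enc}\;\Rightarrow\; B(r)=\frac{\mu_0 I r}{2\pi R^{2}},$$ and $I_{enc}$ coincides with the full current $I$ only when the loop radius reaches the wire radius, $r=R$.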
{ "domain": "physics.stackexchange", "id": 40825, "tags": "electromagnetism, maxwell-equations, magnetostatics" }
Display images after augmentation in Keras
Question: How can I display all the images after augmentation? How can I get the number of training samples after augmentation? Thank you Answer: Depending on the kind of data set you are using, you can use .flow (if you have data as numpy arrays) or .flow_from_directory (if you have images in the file system) to run the data through the generator and save the output using the save_to_dir argument.
{ "domain": "datascience.stackexchange", "id": 3451, "tags": "deep-learning, keras, computer-vision, convolutional-neural-network" }
Neural Network parse string data?
Question: So, I'm just starting to learn how a neural network can operate to recognize patterns and categorize inputs, and I've seen how an artificial neural network can parse image data and categorize the images (demo with convnetjs), and the key there is to downsample the image and each pixel stimulates one input neuron into the network. However, I'm trying to wrap my head around if this is possible to be done with string inputs? The use-case I've got is a "recommendation engine" for movies a user has watched. Movies have lots of string data (title, plot, tags), and I could imagine "downsampling" the text down to a few key words that describe that movie, but even if I parse out the top five words that describe this movie, I think I'd need input neurons for every english word in order to compare a set of movies? I could limit the input neurons just to the words used in the set, but then could it grow/learn by adding new movies (user watches a new movie, with new words)? Most of the libraries I've seen don't allow adding new neurons after the system has been trained? Is there a standard way to map string/word/character data to inputs into a neural network? Or is a neural network really not the right tool for the job of parsing string data like this (what's a better tool for pattern-matching in string data)? Answer: Using a neural network for prediction on natural language data can be a tricky task, but there are tried and true methods for making it possible. In the Natural Language Processing (NLP) field, text is often represented using the bag of words model. In other words, you have a vector of length n, where n is the number of words in your vocabulary, and each word corresponds to an element in the vector. In order to convert text to numeric data, you simply count the number of occurrences of each word and place that value at the index of the vector that corresponds to the word. Wikipedia does an excellent job of describing this conversion process. 
Because the length of the vector is fixed, it's difficult to deal with new words that don't map to an index, but there are ways to help mitigate this problem (look up feature hashing). This method of representation has many disadvantages -- it does not preserve the relationship between adjacent words, and results in very sparse vectors. Looking at n-grams helps to fix the problem of preserving word relationships, but for now let's focus on the second problem, sparsity. It's difficult to deal directly with these sparse vectors (many linear algebra libraries do a poor job of handling sparse inputs), so often the next step is dimensionality reduction. For that we can refer to the field of topic modeling: Techniques like Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA) allow the compression of these sparse vectors into dense vectors by representing a document as a combination of topics. You can fix the number of topics used, and in doing so fix the size of the output vector produced by LDA or LSA. This dimensionality reduction process drastically reduces the size of the input vector while attempting to lose a minimal amount of information. Finally, after all of these conversions, you can feed the outputs of the topic modeling process into the inputs of your neural network.
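As an illustration of the bag-of-words step described above, here is a minimal sketch using only the Python standard library (the tiny "corpus" of movie descriptions is invented for the example):

```python
from collections import Counter

def build_vocab(docs):
    """Assign every word in the corpus a fixed index in the vector."""
    words = sorted({w for doc in docs for w in doc.lower().split()})
    return {w: i for i, w in enumerate(words)}

def vectorize(doc, vocab):
    """Count word occurrences and place each count at its word's index."""
    counts = Counter(doc.lower().split())
    return [counts.get(w, 0) for w in sorted(vocab, key=vocab.get)]

docs = ["a space opera movie", "a heist movie movie"]
vocab = build_vocab(docs)   # {'a': 0, 'heist': 1, 'movie': 2, 'opera': 3, 'space': 4}
vectors = [vectorize(d, vocab) for d in docs]
# vectors == [[1, 0, 1, 1, 1], [1, 1, 2, 0, 0]]
```

Note that a word absent from the vocabulary is simply dropped by `vectorize`, which is exactly the fixed-length limitation discussed above; feature hashing is one way around it.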
{ "domain": "datascience.stackexchange", "id": 1972, "tags": "neural-network" }
Linkedlist visualization using html5 canvas
Question: I have written a simple linked list visualization program in JavaScript: I am just a beginner in html5 canvas. I am pretty much satisfied with how it works, but I'd like to know if there are things that could have been done better. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width"> <title>JS Bin</title> </head> <body> <canvas id="canvas"></canvas> <script> var Shape=Shape || { x:0, y:0, w:0, h:0, t:0, rects:[], create:function(){ var obj=Object.create(this); return obj; }, add:function(x,y,w,h,t){ this.x=x; this.y=y; this.w=w; this.h=h; this.t=t; this.rects.push({x,y,w,h,t}); }, draw:function(context,text){ for (var i in this.rects) { oRec = this.rects[i]; context.fillStyle = 'red' context.fillRect(oRec.x, oRec.y, oRec.w, oRec.h); context.fillText(oRec.t, oRec.x,oRec.y); context.beginPath(); context.moveTo(oRec.x+oRec.w/2,oRec.y+oRec.h/2); context.lineTo(oRec.x+100,oRec.y+12); context.stroke(); context.closePath(); if(i==this.rects.length){ } } } }; window.onload=function(){ var canvas=document.getElementById("canvas"); var ctx=canvas.getContext("2d"); var h=canvas.width=window.innerWidth; var w=canvas.height=window.innerHeight; var node=Shape.create(); node.add(10, 100, 25, 25,1) node.add(100,100, 25, 25,2) node.add(200,100, 25, 25,3) node.add(300,100, 25, 25,4) node.add(400,100, 25, 25,5) node.add(500,100, 25, 25,6) node.draw(ctx); }; </script> </body> </html> Answer: I'm not sure why you are using Object.create to instantiate your shape. 
You should know that the operator instanceof only works with real javascript pseudoclasses (that is, if you even think you need a pseudoclass for this), which look like this: function Shape() { this.x = 0; this.y = 0; this.w = 0; this.h = 0; this.t = 0; this.rects = []; } Shape.prototype = { add:function(x,y,w,h,t) { this.x=x; this.y=y; this.w=w; this.h=h; this.t=t; this.rects.push({x,y,w,h,t}); }, draw:function(context,text) { for (var i in this.rects) { oRec = this.rects[i]; context.fillStyle = 'red' context.fillRect(oRec.x, oRec.y, oRec.w, oRec.h); context.fillText(oRec.t, oRec.x,oRec.y); context.beginPath(); context.moveTo(oRec.x+oRec.w/2,oRec.y+oRec.h/2); context.lineTo(oRec.x+100,oRec.y+12); context.stroke(); context.closePath(); if(i==this.rects.length){ // wtf is this exactly? } } } } There's also one very important difference, which is that in your case the rects property is shared between all instances! Here's a demonstration using your Shape. Here is the same example with my code. Except for that, there's nothing much to say about the code. I'd be more interested in how it actually handles a linked list. Does it check for cycles? Does it show whether the list is circular or null-terminated? Maybe just make the text black and put it one pixel up for readability. It looks better. context.fillStyle = 'black'; context.fillText(oRec.t, oRec.x,oRec.y-1);
{ "domain": "codereview.stackexchange", "id": 19912, "tags": "javascript, beginner, html5, canvas" }
Select a list from a 2D list, take the last item from its nearest non empty left neighbour and put it at its head
Question: I've just got into Haskell a few days ago and I love it. I'm looking for some pointers and best practices on how to organise and write Haskell code specifically when it comes to managing errors, which I have done using Maybe. The following is an implementation of "select a list from a 2D list, take the last item from its nearest non empty left neighbour and put it at its head" which is going to be a sub-routine of a puzzle solver. How can I improve the succinctness of my function? I notice that the more complex the logic of my code gets, it starts to be somewhat unreadable. module Seq where import Data.Maybe {- - Given a 2D list, select a list via 1-based index - Transfer the last item of its immediate non blank - left neighbour to its head. - @param [[Int]] input 2D list - @param Int index of target list - @return the input 2D list after operation -} pull :: Maybe [[Int]] -> Int -> Maybe [[Int]] -- CASE : null input pull Nothing _ = Nothing -- CASE : nothing left to pull pull (Just _) 1 = Nothing pull (Just ([]:xss)) 2 = Nothing -- CASE : index out of bounds pull (Just [xs]) n = Nothing -- CASE : currently nothing to pull pull (Just ([]:xs:xss)) n | p' == Nothing = Nothing | otherwise = Just ([] : p'') where p' = pull (Just (xs:xss)) (n-1) p''= fromJust p' -- CASE : base case; immediate pull pull (Just (xs:xs2:xss)) 2 = Just ((init xs) : ((last xs):xs2) : xss) -- CASE : intermediary blank; split results pull (Just (xs:[]:xss)) n | p' == Nothing = Nothing | otherwise = Just ((head p'') : [] : (tail p'')) where p' = pull (Just (xs:xss)) (n-1) p''= fromJust p' -- CASE : typical recursion to destination pull (Just (xs:xs2:xss)) n | p' == Nothing = Nothing | otherwise = Just (xs : p'') where p' = pull (Just (xs2:xss)) (n-1) p''= fromJust p' The following are test cases for the function: -- pull (Just [[1,2],[],[3],[]]) 4 == Just [[1,2],[],[],[3]] -- pull (Just [[1,2],[],[3],[]]) 3 == Just [[1],[],[2,3],[]] -- pull (Just [[1,2],[],[3],[]]) 2 == Just 
[[1],[2],[3],[]] -- pull (Just [[],[],[1,2,3],[]]) 2 == Nothing Answer: First of all, there is no need for pull's first parameter to be Maybe [[Int]]. More on that later. Also, it will get easier if we switch the parameters. Also more on that later. So let's start with that modification: pull :: Int -> [[Int]] -> Maybe [[Int]] pull 1 _ = Nothing pull 2 ([]:_) = Nothing pull _ [_] = Nothing As you can see, I don't use names for things I'm not interested in. This makes it possible to focus on the interesting parts, e.g. the 1 and 2 matches, and the single-element match in the last line. Now, your other definitions get a lot easier if you don't use guards, but pattern matching instead. If we rewrote the next pattern with your style, we end up with: pull n ([]:xs:xss) | p' == Nothing = Nothing | otherwise = Just ([] : p'') where p' = pull (n-1) (xs:xss) p''= fromJust p' Which is slightly easier due to the missing Justs in pull (n-1) …. However, with a case … of …, it's a lot more succinct: pull n ([]:xss) = case pull (n-1) xss of Nothing -> Nothing Just p -> Just ([] : p) This "if there is Nothing return Nothing, otherwise change the Just value" pattern is so common that there is even a function for that, fmap, which you can think of as fmap :: (a -> b) -> Maybe a -> Maybe b fmap _ Nothing = Nothing fmap f (Just v) = Just (f v) So we can shorten that whole case to pull n ([]:xss) = fmap ([]:) (pull (n-1) xss) -- * We continue for the following cases: pull 2 (xs:ys:xss) = Just ((init xs) : ((last xs):ys) : xss) pull n (xs:[]:xss) = case pull (n-1) (xs:xss) of Nothing -> Nothing Just p -> Just ((head p) : [] : (tail p)) pull n (xs:xss) = fmap (xs:) (pull (n-1) xss) -- * Note that in both cases marked with -- *, you pattern matched on xs2:xss and used xs2:xss. That's not necessary. 
So all patterns are: pull :: Int -> [[Int]] -> Maybe [[Int]] pull 1 _ = Nothing pull 2 ([]:_) = Nothing pull _ [_] = Nothing pull n ([]:xss) = fmap ([]:) (pull (n-1) xss) pull 2 (xs:ys:xss) = Just ((init xs) : ((last xs):ys) : xss) pull n (xs:[]:xss) = case pull (n-1) (xs:xss) of Nothing -> Nothing Just p -> Just ((head p) : [] : (tail p)) pull n (xs:xss) = fmap (xs:) (pull (n-1) xss) Now that you can see all cases, it should be obvious that you're missing at least one, namely the one for the empty list pull 0 []. Also note that using last and init isn't very efficient, since you need to traverse the list twice. You could write a function like splitLast :: [a] -> ([a], a) which returns both the init and the last element. But that's left as an exercise. TL;DR:
- use case … of … (or pattern matching) instead of guards if you want to use the matched value either way (or use pattern guards).
- make sure that you handle all cases
- use Maybe on outputs; if your function cannot do something sensible with Maybe inputs, don't use it, that's what >>= is for.
{ "domain": "codereview.stackexchange", "id": 23173, "tags": "beginner, haskell, linked-list, error-handling, optional" }
How many things are wrong in this "artist view" of the TRAPPIST-1 system?
Question: There is this poster on the NASA site that irks me. Of course it's an artist's fantasy, but since it is on a NASA site, I see people considering it as scientifically accurate. So, beyond the two obvious "In which direction(s) is the sun?" and "OMG these two planets are crashing into each other! Before we crash into the debris!", what else is wrong? Tides? Roche's limit? Atmospheres ripped out? Also, since 1e is an intermediate planet, is it possible to have the other six planets all conveniently placed in the same area of the sky? Answer: From this answer: Here's a diagram of the size of each of the planets as seen from each of the other planets. More details there. The scale is degrees, and for example the top row shows the largest possible size of planets c through h from planet b.
{ "domain": "astronomy.stackexchange", "id": 3936, "tags": "trappist-1, planetary-science" }
Propagator of Hermitian operator
Question: Is the propagator of a hermitian operator always unitary? I am asking this because the propagator in the Schrödinger equation is unitary and my book says this is to be expected since the Hamiltonian is hermitian. However, I constructed the propagator of a system of two coupled masses (same mass, same spring constant) and found out that it is not unitary! (Hamiltonian in the one-dimensional case) Edit: What I mean by unitary is: $UU^\dagger=U^\dagger U=I$ and the solution to the Schrödinger equation: $$|\psi(t)\rangle =\sum_E e^{-itE/\hbar}|E\rangle\langle E|\psi(0)\rangle $$ is, in terms of the propagator, $$|\psi(t)\rangle=U_{H}(t)|\psi(0)\rangle$$ so this propagator $U_H(t)$ is unitary because the Hamiltonian is hermitian, according to my book. However, when I solved a coupled two-mass system with zero initial velocity, the solution in terms of the propagator was: $$|x(t)\rangle =U_{sp}(t)|x(0)\rangle $$ where $U_{sp}$ is the propagator of the spring-coupled masses. The differential equation describing the coupled two-mass system is (zero initial velocity for both masses): $$\begin{pmatrix} x_1''(t) \\ x_2''(t) \end{pmatrix}=\begin{pmatrix} -\frac{2k}{m}& \frac{k}{m}\\ \frac{k}{m}& -\frac{2k}{m} \end{pmatrix}\begin{pmatrix} x_1(t)\\ x_2(t) \end{pmatrix}$$ where $$\Omega=\begin{pmatrix} -\frac{2k}{m}& \frac{k}{m}\\ \frac{k}{m}& -\frac{2k}{m} \end{pmatrix}$$ The solution in terms of the propagator and the initial displacement is $$\begin{pmatrix} x_1(t)\\ x_2(t) \end{pmatrix}=\frac{1}{2}\begin{pmatrix} \cos\!\left(\sqrt{\tfrac{k}{m}}\,t\right)+\cos\!\left(\sqrt{\tfrac{3k}{m}}\,t\right) & \cos\!\left(\sqrt{\tfrac{k}{m}}\,t\right)-\cos\!\left(\sqrt{\tfrac{3k}{m}}\,t\right) \\ \cos\!\left(\sqrt{\tfrac{k}{m}}\,t\right)-\cos\!\left(\sqrt{\tfrac{3k}{m}}\,t\right) & \cos\!\left(\sqrt{\tfrac{k}{m}}\,t\right)+\cos\!\left(\sqrt{\tfrac{3k}{m}}\,t\right) \end{pmatrix}\begin{pmatrix} x_1(0)\\ x_2(0) \end{pmatrix}$$ where the propagator here is $$U_{sp}(t)=\frac{1}{2}\begin{pmatrix} \cos\!\left(\sqrt{\tfrac{k}{m}}\,t\right)+\cos\!\left(\sqrt{\tfrac{3k}{m}}\,t\right) & \cos\!\left(\sqrt{\tfrac{k}{m}}\,t\right)-\cos\!\left(\sqrt{\tfrac{3k}{m}}\,t\right) \\ \cos\!\left(\sqrt{\tfrac{k}{m}}\,t\right)-\cos\!\left(\sqrt{\tfrac{3k}{m}}\,t\right) & \cos\!\left(\sqrt{\tfrac{k}{m}}\,t\right)+\cos\!\left(\sqrt{\tfrac{3k}{m}}\,t\right) \end{pmatrix}$$ which is not unitary?! But if the original matrix $\Omega$ is hermitian, shouldn't the propagator be unitary? Answer: I'll try to answer this question rather generally. Given the Schrödinger equation $$i\hbar \frac{\partial \psi}{\partial t}=H\psi$$ one can formally write the solution as: $$\psi(t)=e^{-itH/\hbar}\psi(0)$$ We can call $U(t)=e^{-itH/\hbar}$ the "propagator". Then if $H$ is hermitian, $U(t)$ is unitary, as you correctly claim. It is helpful to note that the argument of the exponential is however anti-hermitian: $(iH)^\dagger = -iH$. So we can also state the following. Consider the equation: $$\frac{\partial y(t)}{\partial t}=A y(t)$$ where $A$ doesn't depend on $t$. The solution is formally given by $y(t)=e^{At} y(0)$. So in your definition, the "propagator" is given by $U(t)=e^{At}$. So in particular, the "propagator" in this case is unitary if $A$ is anti-hermitian. We only required $H$ to be hermitian in the Schrödinger eq. because there was an $i$ which made the overall argument anti-hermitian. The argument is the same if $y$ is a vector. The system you are considering is second order. Suppose we have a system of second order ODEs: $$\ddot{\textbf{x}}(t) = M \textbf{x}(t)$$ The solution is going to depend on the initial velocity, so it won't be in the form you seem to be interested in. Suppose that the initial velocity is zero, so that the solution is given by $\textbf{x}(t) = U(t) \textbf{x}(0)$. Can we require some condition on $M$ which would ensure that $U$ is unitary? Your example shows that it's not enough to require that $M$ is hermitian. 
It's easier to analyse a first order differential equation, so let's write: $$\textbf{y}(t)=\begin{pmatrix} \textbf{x} \\ \textbf{x}' \end{pmatrix}\,\,\,\,\,\,\,\,\,\, K = \begin{pmatrix} 0 & I \\ M & 0 \end{pmatrix}$$ Then the original second order system is equivalent to the first order system $\dot{\textbf{y}} = K \textbf{y}$. The solution to this problem can be written very easily as $\textbf{y}(t) = \exp{(Kt)} \textbf{y}(0)$. Can we express the exponential in a nice way? Playing with the expression for $K$ we realise its powers have the following expression: $$K^{2k} = \begin{pmatrix} M^k & 0 \\ 0 & M^k \end{pmatrix}\,\,\,\,\,\,\,\,\,\,K^{2k+1} = \begin{pmatrix} 0 & M^k \\ M^{k+1} & 0 \end{pmatrix}$$ Therefore we can write its exponential as: $$\exp{K}=\sum_{n=0}^\infty \frac{K^n}{n!}= \sum_{k=0}^\infty \begin{pmatrix} \frac{M^k}{(2k)!} & \frac{M^k}{(2k+1)!} \\ \frac{M^{k+1}}{(2k+1)!} & \frac{M^k}{(2k)!} \end{pmatrix}$$ and since we're assuming that the initial velocity is zero, we get for the "propagator": $$U(t)=\sum_{k=0}^\infty \frac{M^k t^k}{(2k)!}$$ This is the Taylor series for $\cosh{\sqrt{x}}$. From the series you can see that $M$ hermitian implies that $U$ is hermitian, like you got in your result. What if $M$ is anti-hermitian? It was the case of interest previously. It is not true in general that $M$ anti-hermitian implies $U$ unitary. In particular $$M=\begin{pmatrix} i & 0 \\ 0 & i\end{pmatrix}\implies U(1)=\begin{pmatrix} \cosh{e^{i\pi/4}} & 0 \\ 0 & \cosh{e^{i\pi/4}}\end{pmatrix}$$ but since $\cosh{e^{i\pi/4}}\cosh{e^{-i\pi/4}}\neq 1$, then $UU^\dagger \neq 1$. In conclusion, I think the notion of "propagator" is only helpful in quantum mechanics. For first order differential equations the analogy with the Schrödinger eq. may make sense; but this game of "(anti-)hermitian operator implies unitary propagator" does not work for second-order DEs.
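A quick numerical spot-check of the claim (a pure-Python sketch of mine, not part of the original answer), taking $k=m=1$ so the mode frequencies are $1$ and $\sqrt{3}$: unitarity of the real matrix $U_{sp}$ would require $UU^{T}=I$, i.e. $a^2+b^2=1$ and $ab=0$ for its diagonal entry $a$ and off-diagonal entry $b$:

```python
import math

def propagator(t, k=1.0, m=1.0):
    """Entries of the 2x2 matrix U_sp(t): a on the diagonal, b off it."""
    w1, w3 = math.sqrt(k / m), math.sqrt(3.0 * k / m)
    a = (math.cos(w1 * t) + math.cos(w3 * t)) / 2.0
    b = (math.cos(w1 * t) - math.cos(w3 * t)) / 2.0
    return a, b

a, b = propagator(1.0)
# U U^T = [[a^2 + b^2, 2ab], [2ab, a^2 + b^2]], so unitarity would need
# a^2 + b^2 == 1 and a*b == 0; at t = 1 here a^2 + b^2 is roughly 0.16
print(a * a + b * b, 2 * a * b)
```

At $t=0$ the matrix is the identity (trivially unitary), but at generic times it is not, matching the answer's conclusion.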
{ "domain": "physics.stackexchange", "id": 45089, "tags": "quantum-mechanics, education, eigenvalue, propagator" }
Is it possible for acceleration, velocity and position vectors to all be orthogonal?
Question: Is it possible for a moving particle to have position, velocity and acceleration vector components to all be orthogonal to one another? The formula below is in my textbook for the electric field of a moving point charge, and I think that the last term would always be zero because there are components of each type of vector in it. They would all have to be orthogonal to one another in order for the term to be non-zero, but I don't know of an instance where this could be true. Is this possible, or am I interpreting this equation incorrectly? I think that $\vec u$ is unique to my textbook so it is $\vec u = c\hat {\mathscr{r}}-\vec v$. Answer: I think you are mistaken. The cross product is $0$ when the vectors are parallel. The cross product is non-zero when they are not parallel. In other words, the vectors don't have to be orthogonal to have a non-zero cross product. For example, if all three of those vectors were in the same plane but not parallel, that term is non-zero. For a simple example: $$\vec A=[1,2,0]$$ $$\vec B=[1,0,0]$$ $$\vec C=[2,1,0]$$ As you can see, none of these vectors are orthogonal. Yet $$\vec B\times\vec C=[0,0,1]$$ $$\vec A\times(\vec B\times\vec C)=[2,-1,0]$$ Which is a non-zero result.
{ "domain": "physics.stackexchange", "id": 52961, "tags": "electromagnetism, electric-fields" }
HTTP scraper efficiency with multiprocessing
Question: I built this scraper for work that will take a csv list of firewalls from our network management system and scan a given list of HTTPS ports to see if the firewalls are accepting web requests on the management ports. I originally built this in powershell, but decided to rebuild it in python for the learning experience. I was able to cut down the scan time substantially using multiprocessing, but I'm wondering if I can further optimize my code to get it faster. Also, I'm very new to python. So if you have any input on better more efficient ways that I could have used to accomplish these steps would be much appreciated. import urllib.request import re import os import ssl import multiprocessing #imports a csv list of firewalls with both private and public IP addresses f = open(r'\h.csv',"r") if f.mode =="r": cont = f.read() #regex to remove private ip addresses and then put the remaining public ip addresses in a list c = re.sub(r"(172)\.(1[6-9]|2[0-9]|3[0-1])(\.(2[0-4][0-9]|25[0-5]|[1][0-9][0-9]|[1-9][0-9]|[0-9])){2}|(192)\.(168)(\.(2[0-4][0-9]|25[0-5]|[1][0-9][0-9]|[1-9][0-9]|[0-9])){2}|(10)(\.(2[0-4][0-4]|25[0-5]|[1][0-9][0-9]|[1-9][0-9]|[0-9])){3}","",cont) d = re.findall(r"[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}",c) #uses HTTP requests to check if any of the 8 management ports on the addresses in the list are accepting web requests def httpScan(list): iplen = len(list) ports = [443, 4433, 444, 433, 4343, 4444, 4443, 4434] portlen = len(ports) for k in range(iplen): for i in range(portlen): context = ssl._create_unverified_context() try: fp = urllib.request.urlopen("https://" + str(list[k]) + ":" + str(ports[i]), context=context) mybytes = fp.read() mystr = mybytes.decode("utf8") fp.close() except: continue if "SSLVPN" in mystr: print(list[k] + " SSLVPN" + ": " + str(ports[i]) + " " + str(k) + " " + str(os.getpid())) elif "auth1.html" in mystr: print(list[k] + " MGMT" + ": " + str(ports[i]) + " " + str(k)) #splits the list of IP addresses up based 
on how many CPU there are and adds each segment to a dictionary cpu = int(multiprocessing.cpu_count()) sliced = int(len(d)/cpu) mod = int(len(d))%int(cpu) num = 1 lists = dict() for i in range(cpu): if i != (cpu - 1): lists[i] = d[(num*sliced) - sliced:num*sliced] num += 1 else: lists[i] = d[(num*sliced) - sliced:(num*sliced) + mod] #starts a process for each unique segment created t = dict() if __name__ == "__main__": for i in range(cpu): t[i] = multiprocessing.Process(target=httpScan, args=(lists[i],)) t[i].start() ``` Answer: Reading File Here is a tip, while reading, the r is optional f = open(r'\h.csv',"r") can be written as f = open(r'\h.csv') Your whole reading block can use context managers (blocks using the with keyword). with open(r'\h.csv', encoding='utf8') as f: cont = f.read() If you are dealing with a huge text file, you might do: with open(r'\h.csv', encoding='utf8') as f: for ip in f: ip = ip.rstrip('\n') .. verify String Using string formatting i.e. .format() can give a better idea of what's going on. It also eliminates the use of str() each time. We can change this print(list[k] + " MGMT" + ": " + str(ports[i]) + " " + str(k)) to that print("{} MGMT: {} {}".format(list[k], ports[i], k)) and as from 3.6+, adding an f print(f"{list[k]} MGMT: {ports[i]} {k}") Loop Iteration In many other languages, you need the index while looping to have the element at this index. Python provides a nice and intuitive way to loop over elements The current implementation: ports = [443, 4433, 444, 433, 4343, 4444, 4443, 4434] portlen = len(ports) for i in range(portlen): print(ports[i]) But the pythonic way is: ports = [443, 4433, 444, 433, 4343, 4444, 4443, 4434] for port in ports: print(port) port here gives you the element directly. If ever you still want the index, you do: for i, port in enumerate(ports): where i is the index. Miscellaneous Here: cpu = int(multiprocessing.cpu_count()) No need to cast to int as multiprocessing.cpu_count() already returns an integer. 
You can verify that it is an int with type(multiprocessing.cpu_count()).

Normally with .start() you should also include a .join(), which waits for each child process to terminate before the parent exits:

```python
for ...:
    ....start()
for ...:
    ....join()
```
{ "domain": "codereview.stackexchange", "id": 35551, "tags": "python, performance, python-3.x, regex, https" }
Proof that time exists
Question: Is time just an axiom? Or can it be proven to exist? Correct me if I'm wrong, but our whole understanding of the universe is based on directly observing the world and building out axioms that are consistent with our observations. Unfortunately, axioms cannot be proven to be true (I believe Gödel's incompleteness theorems showed something to this effect). So if time is an axiom, then it is not provable. Time also cannot be directly observed, unlike perceiving an object (seeing the moon, feeling the pressure of water, etc.). Time is also irrelevant for some physical concepts, such as work. With all of that said, my question is: can time be proven to exist? And a secondary question: are there physical/mathematical theories being developed that take as an axiom that time does not exist? Answer: Check out Shape Dynamics. In some sense, time doesn't exist in this formalism.
{ "domain": "physics.stackexchange", "id": 20275, "tags": "time" }
Outdated Transformers TextDataset class drops last block when text overlaps. Replace by datasets Dataset class as input of Trainer train_dataset?
Question:

Why I try to replace the transformers TextDataset class with the datasets Dataset class

I stumbled upon this when I tried to make the train_dataset of the Transformers Trainer class from a text file, see How can you get a Huggingface fine-tuning model with the Trainer class from your own text where you can set the arguments for truncation and padding?. The TextDataset of the transformers package is buggy (next heading) and outdated (the heading after that).

Transformers TextDataset drops the last block of the split text

The TextDataset class drops the last block of the text that was split into blocks by means of the block_size parameter; in the following example, 512 tokens (~ words and other things) per block:

```python
from transformers import AutoTokenizer, TextDataset

model_name = "dbmdz/german-gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
file_path = './myfile.txt'

train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path=file_path,
    block_size=512,
    overwrite_cache=True,
)
```

If I check the last block, I see that it cuts off the very last block, the one that holds the tail of the text. This code shows only the second-to-last block; the last block gets dropped by the TextDataset class:

```python
tokenizer.decode(train_dataset['input_ids'][-1])
```

The Trainer class, by contrast, does not drop the last batch by default, but you can see that there is such a parameter among the dataloader arguments of the Trainer class; see the transformers TrainingArguments documentation:

dataloader_drop_last (bool, optional, defaults to False) — Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.

Transformers TextDataset is outdated

When I change the settings of a tokenizer and build the TextDataset object another time, sometimes a warning shows that you should use the datasets Dataset class instead.
Here is the warning (there are two warnings in it):

Warning 1:

```
/srv/home/my_user/.local/lib/python3.9/site-packages/transformers/data/datasets/language_modeling.py:54:
FutureWarning: This dataset will be removed from the library soon,
preprocessing should be handled with the Datasets library. You can
have a look at this example script for pointers:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py
```

Warning 2:

```
warnings.warn( Token indices sequence length is longer than the
specified maximum sequence length for this model (31482 > 512).
Running this sequence through the model will result in indexing errors
```

Warning 2 just comes from changing from one tokenizer to another; it is raised by this line in the script linked from the warning:

```python
if data_args.max_seq_length > tokenizer.model_max_length:
    logger.warning(
        f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the "
        f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
    )
```

It is enough to run the code again to get rid of warning 2. This question is only about warning 1 ("FutureWarning: This dataset will be removed...").

Question

How do I replace the transformers TextDataset class with the datasets Dataset class so that the output is a dataset that can be the argument of the train_dataset parameter of the transformers Trainer class?

Answer: You need to `from datasets import load_dataset`, and even though the warning suggests it, you do not seem to need to import the Dataset class if you just want to run the Trainer on your own text file input. For me, the load_dataset function was enough. The following code runs through; its output is a dataset that can build a fine-tuned model with the help of the Huggingface Transformers PyTorch Trainer class.
```python
from transformers import AutoTokenizer
from datasets import load_dataset

model_name = "dbmdz/german-gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
file_path = './myfile.txt'
bln_truncation = False

dataset = load_dataset("text", data_files={"train": file_path})

block_size = 512
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize_function(examples):
    return tokenizer(
        examples["text"],
        padding="max_length",
        truncation=bln_truncation)

tokenized_datasets = dataset.map(tokenize_function, batched=True)
```

The dataset must then be passed to the Trainer class like this:

```python
trainer = Trainer(
    ...
    train_dataset=tokenized_datasets["train"],
    ...
)
```
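One thing TextDataset did that the mapping above does not reproduce is concatenating the tokenized text and splitting it into block_size chunks. With datasets, that step is typically done with a second batched map, as in Huggingface's run_clm example script. Here is a plain-Python sketch of such a grouping function, with toy token IDs and block_size shrunk to 4 so the behavior is visible:

```python
# Sketch of the block-splitting step TextDataset used to do, written as a
# plain function so it could be passed to Dataset.map(batched=True).
block_size = 4  # 512 in the real setup; tiny here to keep the example visible

def group_texts(examples):
    # concatenate every field, then cut into equal-sized blocks
    concatenated = {k: sum(examples[k], []) for k in examples}
    total_len = len(concatenated["input_ids"])
    # dropping the remainder reproduces TextDataset's behavior;
    # pad the tail instead if those tokens matter
    total_len = (total_len // block_size) * block_size
    return {
        k: [v[i:i + block_size] for i in range(0, total_len, block_size)]
        for k, v in concatenated.items()
    }

batch = {"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9]]}
blocks = group_texts(batch)
# 9 tokens -> two full blocks of 4; the trailing token is dropped
```

Applying it with tokenized_datasets.map(group_texts, batched=True) would chunk the whole corpus; note that dropping the remainder is exactly the "last block" behavior criticized above, so keep the tail if it matters for your data.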
{ "domain": "datascience.stackexchange", "id": 12131, "tags": "dataset, transformer, huggingface, finetuning, llm" }
Is there a way to use the distances of the two opposite apsides to determine the eccentricity of an orbit?
Question: Is there a way to use the distances of the two opposite apsides to determine the eccentricity of an orbit? The ratio between the distances (i.e. perihelion & aphelion) seems like it'd have a straightforward relationship with the eccentricity of the orbit. Answer: Yes. The derivation is pretty straightforward. The semi-major axis of the ellipse is the arithmetic mean of the perihelion and aphelion distances: $$ \tag 1 a = \frac {r_{max}+r_{min}}{2}$$ while the semi-minor axis is the geometric mean of the maximum and minimum distances from a focus: $$ \tag 2 b = \sqrt {r_{max} \cdot r_{min}} $$ And since the eccentricity of an ellipse is defined as $$ \tag 3 e = \sqrt {1-\frac{b^2}{a^2}} $$ substituting (1) and (2) into (3) gives: $$\begin{align} \tag 4 e &= \sqrt {1-\frac{4 \cdot r_{max} \cdot r_{min}}{(r_{max}+r_{min})^2}} \\ &=\sqrt {\frac{(r_{max}+r_{min})^2}{(r_{max}+r_{min})^2}-\frac{4 \cdot r_{max} \cdot r_{min}}{(r_{max}+r_{min})^2}}\\ &=\sqrt {\frac{r_{max}^2+r_{min}^2+2 \cdot r_{max} \cdot r_{min} - 4 \cdot r_{max} \cdot r_{min}}{(r_{max}+r_{min})^2}}\\ &=\sqrt {\frac{r_{max}^2+r_{min}^2 - 2 \cdot r_{max} \cdot r_{min}}{(r_{max}+r_{min})^2}}\\ &=\sqrt {\frac{(r_{max}-r_{min})^2}{(r_{max}+r_{min})^2}}\\ &=\boxed {\frac{r_{max}-r_{min}}{r_{max}+r_{min}}} \end{align} $$
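The boxed result is easy to sanity-check numerically against the unsimplified form; Earth's approximate perihelion and aphelion distances (147.1 and 152.1 million km) are used here as test values:

```python
import math

def eccentricity(r_min, r_max):
    # the boxed result: e = (r_max - r_min) / (r_max + r_min)
    return (r_max - r_min) / (r_max + r_min)

def eccentricity_from_axes(r_min, r_max):
    # the unsimplified route through eqs. (1)-(3)
    a = (r_max + r_min) / 2        # semi-major axis, eq. (1)
    b = math.sqrt(r_max * r_min)   # semi-minor axis, eq. (2)
    return math.sqrt(1 - b**2 / a**2)

e = eccentricity(147.1, 152.1)     # ~0.0167, Earth's orbital eccentricity
```

Both routes agree to floating-point precision.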
{ "domain": "physics.stackexchange", "id": 100407, "tags": "homework-and-exercises, orbital-motion, celestial-mechanics" }
Use Newton's Method to compute sqrt(x)
Question: Given the following task:

Use Newton's method to compute the square root of a number. Newton's method involves successive approximation. You start with a guess, and then continue averaging successive guesses until you reach a satisfactory level of precision.

I wrote the following (rough) solution in Scheme. Can you help me make it better?

```scheme
(define (abs x)
  ((if (< x 0) - +) x))

(define (almost-equal x y delta)
  (> delta (abs (- x y))))

(define (sqrt-prime x last-x)
  (let ((next-x (/ (+ x last-x) 2)))
    (if (almost-equal next-x x 0.000001)
        x
        (sqrt-prime next-x x))))

(define (sqrt x)
  (sqrt-prime x 1))
```

Answer: The sqrt-prime function neither needs nor uses the last-guess argument. It can be safely eliminated:

```scheme
(define (sqrt-prime guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-prime (better-guess guess x) x)))
```

You may call the function thus:

```scheme
(define (sqrt x)
  (sqrt-prime 1.0 x))
```

I feel that, other than this minor change, your program is succinct enough.
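For comparison, here is the same shape in Python; good-enough? and better-guess are left undefined in the answer, so the usual Newton/SICP choices are assumed here (the Newton update for f(g) = g^2 - x reduces to averaging g with x/g):

```python
def better_guess(guess, x):
    # Newton's update for f(g) = g*g - x: average the guess with x/guess
    return (guess + x / guess) / 2

def good_enough(guess, x, delta=1e-6):
    # stop once the squared guess is within delta of x
    return abs(guess * guess - x) < delta

def sqrt_newton(x, guess=1.0):
    # iterative form of the tail-recursive sqrt-prime above
    while not good_enough(guess, x):
        guess = better_guess(guess, x)
    return guess
```

The iteration converges quadratically for any positive x and positive starting guess.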
{ "domain": "codereview.stackexchange", "id": 174, "tags": "lisp, scheme, numerical-methods" }
Single Displacement Reactions: A + AX =?
Question: This seems like a stupid question, but how would I find the product of a single displacement reaction like the following? $$\ce{Pb + Pb(NO3)2 -> \ ?}$$ Normally, a single displacement reaction follows the path $\ce{A + BX -> AX + B}$. However, what will the equation look like when A and B are the same element? Answer: Nothing. All you would have is a mixture of the white powder and lead metal. In general, lead(II) nitrate is synthesized by: $$\ce{PbO (s) + 2 HNO3 (aq) -> Pb(NO3)2 (aq) + H2O (l)}$$ Lead(II) nitrate is soluble in water. Alternatively, you could do: $$\ce{PbCO3(s) + 2HNO3(aq) -> Pb(NO3)2(aq) + H2O(l) + CO2(g)} $$ Single displacement reactions are a kind of redox reaction. For single displacement reactions to occur: 1.) A and B must be different metals: $$ \ce{2AgNO3(aq) + Zn(s) -> 2Ag(s) + Zn(NO3)2(aq)}$$ or 2.) A and B must be halogens: $$\ce{Mg(s) + 2 HCl(aq) → MgCl2(aq) + H2(g)}$$ This is necessary because, for the redox reaction to occur, you need two different metals on the reactivity series for the single replacement to work, or alternatively compounds that contain halogens. Even so, when combining two compounds with metals or halogens, if the free element is less reactive than the element in the compound, the single replacement reaction will not proceed. Some examples are: $$\ce{Ag (s) + Cu(NO3)2 (aq) -> } \text{No reaction}$$ $$ \ce{I2 + 2KBr ->} \text{No reaction} $$
{ "domain": "chemistry.stackexchange", "id": 16506, "tags": "synthesis" }
Why does the Lagrangian Density have to be a polynomial of the field?
Question: In a lecture, a professor appeared to have said that the Lagrangian can only contain terms that are powers of $\phi$ and a term with $\partial_\mu \phi \partial^\mu \phi$. I imagine this would make any physically possible Lagrangian of the form $$\mathcal{L}(\phi, \partial_\mu \phi, t) = k\partial_\mu \phi \partial^\mu \phi + \sum_{i\in I} c_i \phi^i$$ for arbitrary real numbers $k$ and $c_i$ and index set $I$. Is this truly the case? If so, why would it be impossible to have a Lagrangian that has a term with, say, $\cos(\phi)$ or even $(\partial_\mu \partial^\mu \phi)^2$? Answer: The Lagrangian density of a field theory does not need to be polynomial in the field. The polynomial form of a Lagrangian density is typically taken to be an approximation in the spirit of effective field theory. Indeed, one could easily write down a field theory whose Lagrangian density takes the form $$\mathcal{L}=\frac{1}{2}\partial\varphi\cdot\partial\varphi+a^2m^2(\cos{\frac{\varphi}{a}}-1).$$ This theory is known as Sine-Gordon theory (for obvious reasons). In $d=2$ dimensions Sine-Gordon theory is actually incredibly interesting and has many applications in the study of duality. Of course, I could simply Taylor expand the cosine and write $$\mathcal{L}=\frac{1}{2}\partial\varphi\cdot\partial\varphi-\frac{1}{2}m^2\,\varphi^2+\frac{1}{4!}\frac{m^2}{a^2}\varphi^4-\frac{1}{6!}\frac{m^2}{a^4}\,\varphi^6+\cdots,$$ which resembles the form in which you wrote your Lagrangian density. The polynomial approximation is typically taken because one cannot do traditional perturbation theory without it (polynomial terms lead to $n$-valent vertices in the Feynman diagrammatic expansion of the partition function in a field theory) and because terms in the Lagrangian with high powers of $\varphi$ typically are less important in certain approximation regimes (this is the basis of effective field theory). I hope this helps!
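The Taylor expansion of the Sine-Gordon potential can be checked numerically; this sketch uses arbitrary nonzero test values for a and m, and the mass term comes out negative since cos x = 1 - x^2/2 + x^4/24 - ...:

```python
import math

a, m = 2.0, 3.0  # arbitrary nonzero test values

def potential(phi):
    # the Sine-Gordon potential term: a^2 m^2 (cos(phi/a) - 1)
    return a**2 * m**2 * (math.cos(phi / a) - 1.0)

def truncated_series(phi):
    # -(1/2) m^2 phi^2 + (1/4!) (m^2/a^2) phi^4 - (1/6!) (m^2/a^4) phi^6
    return (-0.5 * m**2 * phi**2
            + m**2 * phi**4 / (math.factorial(4) * a**2)
            - m**2 * phi**6 / (math.factorial(6) * a**4))

# the residual is of order phi^8, so the truncation is excellent for small phi
max_err = max(abs(potential(p) - truncated_series(p)) for p in (0.01, 0.05, 0.1))
```

This is exactly the effective-field-theory point made above: each higher power of phi is suppressed by another factor of 1/a^2.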
{ "domain": "physics.stackexchange", "id": 54772, "tags": "lagrangian-formalism, field-theory, renormalization, effective-field-theory, locality" }
Data to "check" Kepler’s first law
Question: I want to "check" Kepler's first law by using real data of Mars. From the equation of the ellipse, I derived $$\frac{1}{r}=\frac{a}{b^2}+\frac{a}{b^2}\cdot\epsilon\cdot\cos(\varphi),$$ where $a$ is the semi-major axis, $b$ is the semi-minor axis and $\epsilon$ is the eccentricity of the elliptic orbit. I'm looking for the following kind of data:

- Mars' distance from the Sun, $r$
- the angle $\varphi$ between Mars, the Sun and the principal axis of the elliptic orbit.

Then I want to check whether $r$ and $\varphi$ fit the measured values of $a$, $b$ and $\epsilon$. If there is no such data (a perpendicular view onto Mars' orbital plane) available, how can I transform data given in other coordinate systems into the ones I need? On a NASA website (https://omniweb.gsfc.nasa.gov/coho/helios/heli.html) I found data in "Solar Ecliptic", "Heliographic" and "Heliographic Inertial" coordinates, but I don't know which comes closest to my plan. Update: I tried it with uhoh's recommendations. Unfortunately I failed.
With the following Python code, using the Horizons x, y, z data stored in an xlsx file,

```python
from __future__ import division
import numpy as np
from statsmodels.regression.linear_model import OLS
from statsmodels.tools import add_constant
from statsmodels.tools.eval_measures import aicc
import pandas as pd
import matplotlib.pyplot as plt

horizons = pd.read_excel("horizons2.xlsx")
horizons = np.array(horizons)
horizonsxyz = horizons[:, 2:5]
horizonsxyz = np.array(horizonsxyz, dtype=np.float64)

hx = horizonsxyz[:, 0]
hy = horizonsxyz[:, 1]
hz = horizonsxyz[:, 2]

horizonsr = np.sqrt(hx**2 + hy**2 + hz**2)
horizonsr = horizonsr * 6.68459 * (10**(-9))

phi = np.arctan2(hy, hx) * 180 / np.pi
phi2 = np.mod(phi + 360, 360)
phia = np.mod(phi - 286, 360)
phiganz = add_constant(phia)

horizonsdurchr = 1 / horizonsr
horizons_regr = OLS(horizonsdurchr, phiganz).fit()
print(horizons_regr.params)
print(horizons_regr.summary())
y_pred_horizons = np.dot(phiganz, horizons_regr.params)
print(horizons_regr.params)
```

I get a value of $7.1349\cdot10^{-1}$ for $\frac{a}{b^2}$. This is bad, but at least in the right order of magnitude. However, for $\frac{a}{b^2}\cdot\epsilon$ I get a really bad value of $-2.89228\cdot10^{-4}$. Dividing the two results yields an estimated eccentricity of $0.00044$, which is really far away from the true $0.0934$. I also tried another approach, using the heliographic data mentioned above. Here I get closer, but only if I add 35 degrees to the angles, which doesn't make sense, since I should add 74 degrees or subtract 278 degrees to get the angle relative to the perihelion. Answer: Great project, and welcome to Stack Exchange! I'll post a short answer, but I think someone can add a more detailed, thorough and insightful answer. I think that website is not well suited, so I'll answer based on you switching to Horizons. If you like Python then it's more fun to use Skyfield. If you want to apply an equation based on a Kepler orbit model, you'll need to use data where the Sun stays in one place and Mars orbits around it.
That would be Heliocentric with the Sun at (0, 0, 0). That there are three zeros raises the issue of the number of dimensions; proper Kepler orbits are sort-of in 3D, i.e. they have an orbital plane that can be tilted relative to a reference plane, but the orbits themselves are planar. Two problems: your equation assumes a flat 2D orbit because of the way $\varphi$ is defined. Ideally you'd like data in the plane of Mars' orbit, and you may need to transform NASA/JPL Horizons data into Mars' orbital plane yourself, because there are only two main "official" planes and no real planet remains perfectly in a plane. So what you do depends on how far down the rabbit hole of pretending orbits are planes you want to go.

Zeroth order approximation

Go to Horizons. Use this tutorial and set it up to match the following:

```
Current Settings
Ephemeris Type: VECTORS
Target Body: Mars [499]
Coordinate Origin: Sun (body center) [500@10]
Time Span: Start=2020-10-04, Stop=2020-10-05, Step=1 d
Table Settings: quantities code=2; output units=KM-S; CSV format=YES
Display/Output: default (formatted HTML) -- OR --
Display/Output: download/save (plain text file)
```

Here's a sample line for Mars for today using the Sun as origin (I've truncated some decimal digits). You see right away that Mars is about 201 million km from the Sun; it is also about 4 million km below the J2000.0 ecliptic.

```
2459126.500, A.D. 2020-Oct-04 00:00:00.00, 2.036231544E+08, 5.355405115E+07, -3.872888712E+06...
```

From here you can approximate $$r = \sqrt{x^2 + y^2 + z^2}$$ and $$\varphi = \arctan2(y, x) - \text{286.502°}$$ Since you are going through all four quadrants, it's better to use a computer's arctan2(y, x) or atan2(y, x) with two arguments, not $\arctan(y/x)$, which only works in two quadrants (i.e. 1/7 = -1/-7).

First order approximation

As noted above, Mars sits about 4 million km below the J2000.0 ecliptic.
If you want to correct for Mars' orbit's tilt with respect to the ecliptic, you can just find the best-fit plane to one Martian year of data and make your own Mars ecliptic. But I recommend you do the zeroth order first and see how well or poorly it works; then you can decide if you want to tilt.
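The whole procedure can be rehearsed on a synthetic orbit before touching real data (Mars-like values for a and e assumed below); the key detail is that the least-squares fit of $1/r$ must be against cos φ in radians, not against φ itself:

```python
import math

# Sample an exact Kepler ellipse: r(phi) = a (1 - e^2) / (1 + e cos phi)
a_mars, e_true = 1.5237, 0.0934   # semi-major axis in AU, eccentricity (approx.)
phis = [2 * math.pi * k / 360 for k in range(360)]
rs = [a_mars * (1 - e_true**2) / (1 + e_true * math.cos(p)) for p in phis]

# Least-squares line fit of 1/r = A + B cos(phi); the eccentricity is then B/A
x = [math.cos(p) for p in phis]
y = [1.0 / r for r in rs]
n = len(x)
xm, ym = sum(x) / n, sum(y) / n
B = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sum((xi - xm)**2 for xi in x)
A = ym - B * xm
e_fit = B / A   # recovers e_true to floating-point precision
```

With real Horizons vectors, x and y would instead come from cos(atan2(y, x) - perihelion longitude) and 1/r of each state vector.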
{ "domain": "astronomy.stackexchange", "id": 4873, "tags": "coordinate, mathematics, kepler, raw-data, space-geometry" }
How do you specify subplots to rqt_plot?
Question: How do I specify subplots to rqt_plot?

```
rosrun rqt_plot rqt_plot /uav/0/pose/pose/position/x:y:z /uav/0/vel/vector/x:y:z
```

gives me six curves. I would like to have one plot for the pose and one for the velocity. How do I do that? Originally posted by TommyP on ROS Answers with karma: 1339 on 2013-05-01 Post score: 3 Answer: Similar thread. It looks like accepting arguments from the command line is in high demand? Or are people just working on tutorials that tell them to do so? Originally posted by 130s with karma: 10937 on 2013-05-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by TommyP on 2013-05-01: Well, I have not managed to create a subplot in the UI either. But of course you need to be able to start programs from the command line so you can make launch files for different types of setups. ROS is, as I see it, command line based, and that is a very good thing. Comment by 130s on 2013-05-27: Just FYI, the tutorial is updated so that taking arguments from the command line is removed from there. Comment by TommyP on 2013-05-27: It would be nice if the tutorial showed how to use subplots also. Comment by bit-pirate on 2014-08-08: I believe this answer is not answering the question. There is still no information given about how to create subplots. Comment by superjax on 2014-09-23: I agree, I end up writing big python scripts to automatically plot data because I don't know how to command rqt_plot to create subplots.
{ "domain": "robotics.stackexchange", "id": 14017, "tags": "rqt-plot" }
What does the term 'bioavailability' mean?
Question: From what I've read, bioavailability is the degree to which food nutrients are available for absorption and utilization in the body. How would you explain this with an example? Answer: Bioavailability is a concept which applies to nutrients and drugs which pass through first-pass metabolism, i.e. orally (and to some extent nasally) consumed substances. Anything absorbed in the gut first passes through the liver before reaching the rest of the circulation, and both the gut and liver may metabolise it to some extent. The liver in particular has the powerful cytochrome P450 system, a huge variety of enzymes to break down all sorts of substances, although in some cases it can actually produce more active or even toxic forms instead of breaking them down. This can lead to drastic reductions in the amount available in the systemic circulation after oral administration. E.g. propranolol (a beta blocker) needs to be given in 100 mg doses orally, while intravenously (avoiding first-pass metabolism) only 5 mg are needed.
{ "domain": "biology.stackexchange", "id": 418, "tags": "nutrition, terminology" }
CSV file parser and compare
Question: This may seem like a lot of stuff, but I just need help with 2 small parts; the code works. However, I have provided the rest of the info in case someone can help. USING PYTHON 3.4. The code below is responsible for comparing multiple CSV files against a cross-reference file and creating a metadata file, information files, and a file to keep track of points that did not have a match in the cross-reference file. It compares files that are ordered in a daily manner; each day holds 1 5min-file, 3 exc-files, 1 ala-file, 1 accu-file. It produces one file that holds the points, one file that holds points with their timestamps, and a file that holds points that have no match with the cross-reference file. The code works fine.

```
# cross reference file:
header1, header2, header3, header4, header5, header6
aaaaaaa1, bbbbbbb1, ccccccc1, ddddddd1, eeeeeee1, x42, trg, zxc, dfg
aaaaaaa2, bbbbbbb2, ccccccc2, ddddddd2, eeeeeee2, fffffff2, zxc, hjg
aaaaaaa3, bbbbbbb3, ccccccc3, ddddddd3, eeeeeee3, fffffff3, vcx, hhf
aaaaaaa5, bbbbbbb5, ccccccc5, ddddddd5, eeeeeee5, fffffff5, vcx, hhf
...
```

```
# exc-file: (all time stamps start from 0)
1/1/2014 12:00:00 AM, aaaaaaa2, bbbbbbb2, ccccccc2, ddddddd2, eeeeeee2, v2
1/1/2014 12:00:00 AM, aaaaaaa3, bbbbbbb3, ccccccc3, ddddddd3, eeeeeee3, x3
6, 8 #lines like this should be ignored
1/1/2014 12:00:01 AM, aaaaaaa4, bbbbbbb4, ccccccc4, ddddddd4, eeeeeee4, i4
1/1/2014 12:00:00 AM, aaaaaaa5, bbbbbbb5, ccccccc5, ddddddd5, eeeeeee5, o5
1/1/2014 12:00:01 AM, aaaaaaa6, bbbbbbb6, ccccccc6, ddddddd6, eeeeeee6, p6
3, 22, 14 #lines like this should be ignored
1/1/2014 12:00:00 AM, aaaaaaa7, bbbbbbb7, ccccccc7, ddddddd7, eeeeeee7, l7
...
```
```
# 5min_file: (all time stamps are 5 minute increments and start from 0)
1/1/2014 12:00:00 AM, aaaaaaa2, bbbbbbb2, ccccccc2, ddddddd2, eeeeeee2, h2
1 #lines like this should be ignored
1/1/2014 12:00:00 AM, aaaaaaa3, bbbbbbb3, ccccccc3, ddddddd3, eeeeeee3, g3
1/1/2014 12:00:00 AM, aaaaaaa5, bbbbbbb5, ccccccc5, ddddddd5, eeeeeee5, t5
43, 12, 14 #lines like this should be ignored
1/1/2014 12:00:00 AM, aaaaaaa7, bbbbbbb7, ccccccc7, ddddddd7, eeeeeee7, y7
...
```

The ala and accu files have the same format as the exc-file.

```
# ffm output file:
header1, earliest time stamp (in unix), 1
aaaaaaa2, bbbbbbb1, ccccccc1, ddddddd1, eeeeeee1, fffffff1
aaaaaaa3, bbbbbbb1, ccccccc1, ddddddd1, eeeeeee1, fffffff1
aaaaaaa4, bbbbbbb1, ccccccc1, ddddddd1, eeeeeee1, fffffff1
...
```

```
# ffd output file:
%m/%d/%Y %H:%M:%S1, aaaaaaa2, bbbbbbb2, ccccccc2, ddddddd2, eeeeeee2, h2
%m/%d/%Y %H:%M:%S1, aaaaaaa3, bbbbbbb3, ccccccc3, ddddddd3, eeeeeee3, g3
%m/%d/%Y %H:%M:%S1.1, aaaaaaa4, bbbbbbb4, ccccccc4, ddddddd4, eeeeeee4, i4
%m/%d/%Y %H:%M:%S2, aaaaaaa5, bbbbbbb5, ccccccc5, ddddddd5, eeeeeee5, t5
%m/%d/%Y %H:%M:%S2.1, aaaaaaa6, bbbbbbb6, ccccccc6, ddddddd6, eeeeeee6, p6
%m/%d/%Y %H:%M:%S3, aaaaaaa7, bbbbbbb7, ccccccc7, ddddddd7, eeeeeee7, y7
...
```

```
# missing:
aaer45, bber45, ccer45, dder45, eeeeeee1, fffffff1 ---> NO MATCH
aaaaa3, bbbbbbb1, ccdc90, ddddddd1, eeeeeee1, fffffff1 ----> NO MATCH
...
```

What I would like is for you to help me with and point me in the right direction (the full code is included below).

1 - In the analog_exc file I'm opening multiple files (both to read and write); is there a cleaner way to do this?
(The chunk of code for this section is right below.)

```python
with open(ffm_all_w + 'ana_ffm.txt', 'w') as ana_ffm, open(missing_key_w + 'ana_missint_keys.txt', 'w') as ana_missing_keys:
    for x in range(len(ana_exc_input_path)):
        if not count_path2 > len(ana_exc_input_path):
            with open(ana_exc_input_path[count_path2], 'r') as ana_exc, open(ffd_ana_exception_path_w + file_name_analog[count_path2] + '.txt' + str(count_path2), 'w') as ffd_ana:
```

2 - Comparing and writing the ana_5min and ana_exc files takes too long; is there a better way to do this?

```python
def Analog_5_min():
    global ana_5min_dic, global_dic, ana_5min_input_path
    counter = 0
    with open(ana_5min_input_path[counter], 'r') as file0:
        counter += 1
        for line in file0:
            if '/' in str(line):
                row = line.split(',')
                key1 = row[1] + '|' + row[2] + '|' + row[3] + '|' + row[4]
                if key1 in global_dic:
                    ana_5min_dic[key1] = {'time': row[0], 'value': row[6]}


def compare_func():
    for line in ana_exc:
        col = line.split(",")
        ana_exc_key = (col[1] + '|' + col[2] + '|' + col[3] + '|' + col[4])
        ana_exc_time = col[0]
        if ana_exc_key in ana_5min_dic:
            if ana_exc_key not in ana_ffm_track:
                ana_ffm.write('point' + ',' + str(global_dic[ana_exc_key]['cpKey']) + ',' + str(global_dic[ana_exc_key]['header7']) + ',' + str(global_dic[ana_exc_key]['header5']) + ',' + 'analog' + ',' + ',' + '1' + '\n')
                ana_ffm_track.append(ana_exc_key)
            meow = datetime.datetime.strptime(ana_exc_time, '%m/%d/%Y %H:%M:%S')  # change str time to date/time obj
            unix_timestamp = calendar.timegm(meow.timetuple())  # do the conversion to unix stamp
            time_ms1 = unix_timestamp * 1000
            # afterwards it writes files as described above
```

Full code in case someone has other suggestions or wants to look at it:

```python
import csv, datetime, calendar, time, os, argparse, sys, fnmatch

# there is stuff here for later use
global_dic = {}
ana_5min_dic = {}
ffd_ana_5min_path_w = ''
ffd_ana_exception_path_w = ''
missing_key_w = ''
ffd_ana_hourly_path_w = ''
ffm_all_w = ''
ffd_alarm_path = ''
ffd_digital_path = ''
ffd_aacu_path = ''
```
```python
out_put_defult = False
min_flag = False
ana_5min_input_path = []
ana_exc_input_path = []
# ana_1hr_input_path = []
alam_exc_input_path = []
acu_exc_input_path = []
dig_exc_input_path = []
ana_ffm_track = []
file_name_analog = []
file_name_digital = []
file_name_accu = []
file_name_alarms = []


# create files and path for output
def make_output_dir(output_path):
    global ffd_ana_5min_path_w, ffd_ana_exception_path_w, missing_key_w, ffm_all_w, out_put_defult, ffd_alarm_path, ffd_digital_path, ffd_aacu_path
    try:
        if out_put_defult:
            path = str(os.getcwd()) + '\\' + 'output'
        else:
            path = str(output_path)
        root_path = 'D:\\good_data\\output' + '\\' + str(datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
        folders = ['ffd_ana_exception', 'missing_keys', 'ffm_all', 'ffd_alarm_exception', 'ffd_digital_exception', 'ffd_accu_exception']
        ffd_ana_exception_path_w = os.path.join(str(root_path), 'ffd_ana_exception' + '\\')
        ffd_alarm_path = os.path.join(str(root_path), 'ffd_alarm_exception' + '\\')
        ffd_digital_path = os.path.join(str(root_path), 'ffd_digital_exception' + '\\')
        ffd_aacu_path = os.path.join(str(root_path), 'ffd_accu_exception' + '\\')
        ffm_all_w = os.path.join(str(root_path), 'ffm_all' + '\\')
        missing_key_w = os.path.join(str(root_path), 'missing_keys' + '\\')
        for folder in folders:
            if not os.path.exists(folder):
                os.makedirs(os.path.join(root_path, folder))
    except FileExistsError:
        print('Cannot create a file when that file already exists')
        pass
    return None


# Walk the directory and find needed files
def file_search(input_path):
    global ana_5min_input_path, ana_exc_input_path, ana_1hr_input_path, alam_exc_input_path, acu_exc_input_path, dig_exc_input_path, file_name_analog, file_name_accu, file_name_alarms, file_name_digital
    for root, dirnames, filenames in os.walk('C:\\Users\\data_meow'):
        for filename in fnmatch.filter(filenames, '*.csv'):
            if filename.startswith("Accumulators"):
                file_name_accu.append(filename.strip('.csv'))
                acu_exc_input_path.append(os.path.join(root, filename))
            elif filename.startswith("Alarms"):
                file_name_alarms.append(filename.strip('.csv'))
                alam_exc_input_path.append(os.path.join(root, filename))
            elif filename.startswith("Analog_exp"):
                file_name_analog.append(filename.strip('.csv'))
                ana_exc_input_path.append(os.path.join(root, filename))
            elif filename.startswith("Analog_per_5_min"):
                ana_5min_input_path.append(os.path.join(root, filename))
            elif filename.startswith("Digital_exc"):
                file_name_digital.append(filename.strip('.csv'))
                dig_exc_input_path.append(os.path.join(root, filename))
    return None


# create a dictionary from the cross reference file
def xref():
    global global_dic
    with open('NPPD_XREF.cbt', 'r') as file0:
        reader1 = csv.reader(file0, delimiter='\t')
        header = next(reader1)
        for row in reader1:
            key = (row[0] + '|' + row[1] + '|' + row[2] + '|' + row[3])
            global_dic[key] = {header[0]: row[0], header[1]: row[1], header[2]: row[2], header[3]: row[3], header[4]: row[4], header[5]: row[5], header[6]: row[6], header[7]: row[7], header[8]: row[8], header[9]: row[9]}
    return None


# compare exception analog file with cross reference file; if there is a matching point then compare with 5-minute analog file,
# where the time stamps of exception-analog file and 5-minute analog file match write output from 5-minute analog file,
# otherwise use exception-analog file.
# keeps track of points that do not have a match in the cross reference file and creates a txt file for later review
# creates 2 output files for later use
def Analog_5_min():
    global ana_5min_dic, global_dic, ana_5min_input_path
    counter = 0
    with open(ana_5min_input_path[counter], 'r') as file0:
        counter += 1
        for line in file0:
            if '/' in str(line):
                row = line.split(',')
                key1 = row[1] + '|' + row[2] + '|' + row[3] + '|' + row[4]
                if key1 in global_dic:
                    ana_5min_dic[key1] = {'time': row[0], 'value': row[6]}


# compare exception analog file with cross reference dictionary; if there is a matching point then compare the point with 5-minute analog dictionary,
# where the time stamps of exception-analog file and 5-minute analog dictionary match write output from 5-minute analog file,
# otherwise use exception-analog file.
# keeps track of points that do not have a match in the cross reference file and creates a txt file for later review
# creates 2 output files for later use
def Ana_exc():
    global global_dic, missing_key_w, out_put_defult, ffd_ana_exception_path_w, ana_exc_input_path, ana_ffm_track, ana_5min_dic, file_name_analog
    count_path2 = 0
    ana_exc_missing = []
    ana_exc_ffm_header = True
    with open(ffm_all_w + 'ana_ffm.txt', 'w') as ana_ffm, open(missing_key_w + 'ana_missint_keys.txt', 'w') as ana_missing_keys:
        for x in range(len(ana_exc_input_path)):
            if not count_path2 > len(ana_exc_input_path):
                with open(ana_exc_input_path[count_path2], 'r') as ana_exc, open(ffd_ana_exception_path_w + file_name_analog[count_path2] + '.txt' + str(count_path2), 'w') as ffd_ana:
                    count_path2 = count_path2 + 1
                    ana_ffd_header = True
                    if ana_exc_ffm_header:
                        ana_ffm.write('header' + ',' + '1' + '\n')
                        ana_exc_ffm_header = False
                    for line in ana_exc:
                        col = line.split(",")
                        ana_exc_key = (col[1] + '|' + col[2] + '|' + col[3] + '|' + col[4])
                        ana_exc_time = col[0]
                        if ana_exc_key in ana_5min_dic:
                            if ana_exc_key not in ana_ffm_track:
                                ana_ffm.write('point' + ',' + str(global_dic[ana_exc_key]['cpKey']) + ',' + str(global_dic[ana_exc_key]['header7']) + ',' + str(global_dic[ana_exc_key]['header5']) + ',' + 'analog' + ',' + ',' + '1' + '\n')
                                ana_ffm_track.append(ana_exc_key)
                            meow = datetime.datetime.strptime(ana_exc_time, '%m/%d/%Y %H:%M:%S')  # change str time to date/time obj
                            unix_timestamp = calendar.timegm(meow.timetuple())  # do the conversion to unix stamp
                            time_ms1 = unix_timestamp * 1000
                            if ana_ffd_header:
                                ffd_ana.write('header' + ',' + str(time_ms1) + ',' + '1' + '\n')
                                ana_ffd_header = False
                            ffd_ana.write('value' + ',' + str(global_dic[ana_exc_key]['cpKey']) + ',' + str(global_dic[ana_exc_key]['header5']) + ',' + str(ana_5min_dic[ana_exc_key]['value']) + ',' + str(time_ms1) + ',' + str(time_ms1) + ',' + '0' + ',' + '0' + ',' + '0' + '\n')
                        else:
                            if '/' in str(line):  # only process the lines that start with time stamps
                                if ana_exc_key in global_dic:
                                    if ana_exc_key not in ana_ffm_track:
                                        # keep track of the points in an output file (metadata file)
                                        ana_ffm.write('point' + ',' + str(global_dic[ana_exc_key]['cpKey']) + ',' + str(global_dic[ana_exc_key]['header5']) + ',' + str(global_dic[ana_exc_key]['header7']) + ',' + 'analog' + ',' + ',' + '1' + '\n')
                                        ana_ffm_track.append(ana_exc_key)
                                    meow = datetime.datetime.strptime(str(ana_exc_time), '%m/%d/%Y %H:%M:%S')  # change str time to date/time obj
                                    unix_timestamp = calendar.timegm(meow.timetuple())  # do the conversion to unix stamp
                                    time_ms1 = unix_timestamp * 1000
                                    if ana_ffd_header:  # out-file1 header
                                        ffd_ana.write('header' + ',' + str(time_ms1) + ',' + '1' + '\n')
                                        ana_ffd_header = False
                                    ffd_ana.write('value' + ',' + str(global_dic[ana_exc_key]['header8']) + ',' + str(global_dic[ana_exc_key]['header5']) + ',' + str(col[6]) + ',' + str(time_ms1) + ',' + str(time_ms1) + ',' + '0' + ',' + '0' + ',' + '0' + '\n')
                                else:
                                    if ana_exc_key not in ana_exc_missing:
                                        ana_missing_keys.write(ana_exc_key + '\n')
                                        ana_exc_missing.append(ana_exc_key)
            else:
                break
    return None


# looks at alarm files and if the points have a match in the cross reference dictionary, it creates an output
# keeps track of points that do not have a match in the cross reference file and creates a txt file for later review
def Alarm_points():
    global alam_exc_input_path, global_dic, ffd_alarm_path, missing_key_w, ffm_all_w, ana_ffm_track, file_name_alarms
    count_path = 0
    ana_alarm_missing = []
    with open(ffm_all_w + 'ana_ffm.txt', 'a') as ana_ffm, open(missing_key_w + 'ana_alarm_missing_keys.txt', 'w') as ana_alarm_missing_keys:
        for i in range(len(ana_5min_input_path)):
            if not count_path > len(alam_exc_input_path):
                with open(alam_exc_input_path[count_path], 'r') as ana_alarm, open(ffd_alarm_path + file_name_alarms[count_path] + '.txt' + str(count_path), 'w') as ffd_alarm:
                    count_path += 1
                    ana_alarm_ffd_header = True
                    for line in ana_alarm:
                        col = line.split(",")
                        if str(line[2]).startswith('/'):
                            ana_alarm_key = (col[2] + '|' + col[3] + '|' + col[4] + '|' + col[5])
                            ana_alarm_time = str(col[0])
                            if ana_alarm_key in global_dic:
                                if ana_alarm_key not in ana_ffm_track:
                                    ana_ffm.write('point' + ',' + str(global_dic[ana_alarm_key]['header8']) + ',' + str(global_dic[ana_alarm_key]['header5']) + ',' + str(global_dic[ana_alarm_key]['header7']) + ',' + 'alarm' + ',' + ',' + '1' + '\n')
                                    ana_ffm_track.append(str(ana_alarm_key))
                                meow = datetime.datetime.strptime(ana_alarm_time, "%m/%d/%Y %H:%M:%S")  # change str time to date/time obj
                                unix_timestamp = calendar.timegm(meow.timetuple())  # do the conversion to unix stamp
                                time_ms = unix_timestamp * 1000
                                if ana_alarm_ffd_header:
                                    ffd_alarm.write('header' + ',' + str(time_ms) + ',' + '1' + '\n')
                                    ana_alarm_ffd_header = False
                                ffd_alarm.write('alarm' + ',' + str(global_dic[ana_alarm_key]['header5']) + ',' + str(col[6]) + ',' + str(time_ms) + ',' + str(time_ms) + ',' + str(col[12]) + ',' + str(col[7]) + ',' + '1' + ',' + global_dic[ana_alarm_key]['header8'] + ',' + '1' + ',' + '0' + ',' + global_dic[ana_alarm_key]['header7'] + ',' + global_dic[ana_alarm_key]['Point Name'] + ',' + '\n')
                            else:
                                if ana_alarm_key not in ana_alarm_missing:
                                    ana_alarm_missing_keys.write(str(ana_alarm_key) + '\n')
                                    ana_alarm_missing.append(ana_alarm_key)
            else:
                break
    return None


# looks at digital files and if the points have a match in the cross reference dictionary, it creates an output
# keeps track of points that do not have a match in the cross reference file and creates a txt file for later review
def Digital_points():
    global dig_exc_input_path, global_dic, ffd_digital_path, missing_key_w, ffm_all_w, file_name_digital
    count_path = 0
    ana_digital_missing = []
    ana_ffm_dup = []
    with open(ffm_all_w + 'ana_ffm.txt', 'a') as ana_ffm, open(missing_key_w + 'ana_digital_missing_keys.txt', 'w') as ana_digital_missing_keys:
        for i in range(len(dig_exc_input_path)):
            if not count_path > len(dig_exc_input_path):
                with open(dig_exc_input_path[count_path], 'r') as ana_digital, open(ffd_digital_path + file_name_digital[count_path] + '.txt' + str(count_path), 'w') as ffd_digital:
                    count_path += 1
                    ana_digital_ffd_header = True
                    for line in ana_digital:
                        col = line.split(",")
                        if str(line[2]).startswith('/'):
                            ana_digital_key = (col[2] + '|' + col[3] + '|' + col[4] + '|' + col[5])
                            ana_digital_time = str(col[0])
                            if ana_digital_key in global_dic:
                                if ana_digital_key not in ana_ffm_dup:
                                    ana_ffm.write('point' + ',' + str(global_dic[ana_digital_key]['header8']) + ',' + str(global_dic[ana_digital_key]['header5']) + ',' + str(global_dic[ana_digital_key]['header7']) + ',' + 'analog' + ',' + ',' + '1' + '\n')
                                    ana_ffm_dup.append(str(ana_digital_key))
                                meow = datetime.datetime.strptime(ana_digital_time, "%m/%d/%Y %H:%M:%S")  # change str time to date/time obj
                                unix_timestamp = calendar.timegm(meow.timetuple())  # do the conversion to unix stamp
                                time_ms = unix_timestamp * 1000
                                if ana_digital_ffd_header:
                                    ffd_digital.write('header' + ',' + str(time_ms) + ',' + '1' + '\n')
                                    ana_digital_ffd_header = False
                                ffd_digital.write('value' + ',' + str(global_dic[ana_digital_key]['header8']) + ',' + str(global_dic[ana_digital_key]['header5']) + ',' + str(col[7])
```
+ ',' + str(time_ms) + ',' + str(time_ms) + ',' + '0' + ',' + '0' + ',' + '0' + '\n') else: if ana_digital_key not in ana_digital_missing: ana_digital_missing_keys.write(str(ana_digital_key) + '\n') ana_digital_missing.append(ana_digital_key) else: break return None # looks at alarm files and if the points have a match in the cross refrence dictionary, it creates an output # keeps track of points that do not have a match in the cross refrence file and create a txt file for later review def Accumulators(): global acu_exc_input_path, global_dic, ffd_aacu_path, missing_key_w, ffm_all_w, file_name_accu count_path = 0 ana_accu_missing = [] ana_ffm_dup = [] with open(ffm_all_w + 'ana_ffm.txt', 'a') as ana_ffm, open(missing_key_w + 'ana_accu_missing_keys.txt', 'w') as ana_accu_missing_keys: for i in range(len(acu_exc_input_path)): if not count_path > len(acu_exc_input_path): with open(acu_exc_input_path[count_path], 'r') as ana_accu, open(ffd_aacu_path + file_name_accu[count_path] + '.txt', 'w') as ffd_accu: count_path += 1 ana_accu_ffd_header = True for line in ana_accu: col = line.split(",") if str(line[2]).startswith('/'): ana_accu_key = (col[2] + '|' + col[3] + '|' + col[4] + '|' + col[5]) ana_accu_time = str(col[0]) if ana_accu_key in global_dic: if ana_accu_key not in ana_ffm_dup: ana_ffm.write('point' + ',' + str(global_dic[ana_accu_key]['header8']) + ',' + str(global_dic[ana_accu_key]['header6']) + ',' + str(global_dic[ana_accu_key]['header7']) + ',' + 'analog' + ',' + ',' + '1' + '\n') ana_ffm_dup.append(str(ana_accu_key)) meow = datetime.datetime.strptime(ana_accu_time, "%m/%d/%Y %H:%M:%S") # change str time to date/time obj unix_timestamp = calendar.timegm(meow.timetuple()) # do the conversion to unix stamp time_ms = unix_timestamp * 1000 if ana_accu_ffd_header: ffd_accu.write('header' + ',' + str(time_ms) + ',' + '1' + '\n') ana_accu_ffd_header = False ffd_accu.write('value' + ',' + str(global_dic[ana_accu_key]['header8']) + ',' + 
str(global_dic[ana_accu_key]['header5']) + ',' + str(col[7]) + ',' + str(time_ms) + ',' + str(time_ms) + ',' + '0' + ',' + '0' + ',' + '0' + '\n') else: if ana_accu_key not in ana_accu_missing: ana_accu_missing_keys.write(str(ana_accu_key) + '\n') ana_accu_missing.append(ana_accu_key) else: break return None def main(): out_path = '' input_path = '' start_time = time.time() make_output_dir(out_path) file_search(input_path) xref() Analog_5_min() Ana_exc() Alarm_points() Digital_points() Accumulators() print("took", time.time() - start_time, "to run") main() Answer: My advice: use tuples for keys, not string concatenation One thing I can suggest: don't create your keys using string concatenation because this particular operation is not optimal at all and allocates a lot of memory & copies a lot of data. Example for: k = col[2] + '|' + col[3] + '|' + col[4] + '|' + col[5] It's much better to use a tuple (which is hashable). You allocate less memory and you don't copy strings like you did. You'll save time if you do that operation a lot. Replacement key: k = tuple(col[2:6]) you'll have to change it several times in your code and since your keys seem to use following indices, you could write a "list2key" function like this: def list2key(l,start,end): return tuple(l[start:end+1]) k = list2key(col,2,5) avoid useless casts to string I see an obvious one (several times in your code): if '/' in str(line): since line is already a string (read from the file), you just duplicate the string for nothing. Just do: if '/' in line:
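The tuple-key advice can be sketched in isolation; the `col` row and its values below are made up for illustration, only the key-building pattern matters:

```python
# Sketch of the tuple-key suggestion. `col` stands in for one parsed CSV row.
def list2key(row, start, end):
    # `end` is inclusive here, matching the answer's description
    return tuple(row[start:end + 1])

col = ["01/02/2017 10:00:00", "A", "B", "C", "D", "42"]

# old style: allocates and copies several intermediate strings per lookup
string_key = col[1] + '|' + col[2] + '|' + col[3] + '|' + col[4]

# tuple style: hashable, no string copying
tuple_key = list2key(col, 1, 4)

lookup = {tuple_key: {"value": col[5]}}
print(lookup[("A", "B", "C", "D")]["value"])  # -> 42
```

Both keys identify the same row; the tuple version just skips the concatenation work on every lookup.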
{ "domain": "codereview.stackexchange", "id": 22550, "tags": "python, beginner, python-3.x, parsing, csv" }
Find diagonal positions for bishop movement
Question: In a chess board, I need to find the diagonals a bishop can move on, and more specifically, the coordinates of those squares. So, given a grid of any size and a position in that grid (expressed in coordinates within the grid), I have to compute the coordinates of the diagonals of that initial position. I'm using zero-based indexing, and the (row, column) notation for coordinates. For example, on a 8x8 grid, with starting position of (0, 0), the returned list should be [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7)]. On a 8x8 grid, with starting position of (3, 4), the returned list should be [(3, 4), (2, 3), (1, 2), (0, 1), (4, 5), (5, 6), (6, 7), (4, 3), (5, 2), (6, 1), (7, 0), (2, 5), (1, 6), (0, 7)] This is my working program in Python 3: def diagonals(coord, size): limit = size - 1 coords = [coord] row = coord[0] col = coord[1] while row > 0 and col > 0: row -= 1 col -= 1 coords.append((row, col)) row = coord[0] col = coord[1] while row < limit and col < limit: row += 1 col += 1 coords.append((row, col)) row = coord[0] col = coord[1] while row < limit and col > 0: row += 1 col -= 1 coords.append((row, col)) row = coord[0] col = coord[1] while row > 0 and col < limit: row -= 1 col += 1 coords.append((row, col)) return coords coord = (3, 4) size = 8 print(diagonals(coord, size)) Depending on the diagonal (4 cases), row and column are added or subtracted by one until the last square is reached, and everything is kept in a list, which in the end is returned. It works, but it left me wondering if there's a simpler, different, better way of doing this, probably using linear algebra or something? And what about idiomatically, how can this be more pythonic? Answer: The biggest problem you have is you have five lines that are nearly identical, four times. row = coord[0] col = coord[1] while row > 0 and col > 0: row -= 1 col -= 1 There is two ways to come at this. (1) make a function that yield the rows and columns. (2) use zip and range. 
The first way you need to enter a start, limit and step for both the x and y axis. You then need to limit all inputs by 0 < x < x_limit, and the same for the y axis too. This is as we don't really know what direction we're going. We then add the step to x each iteration. This can make: def _diagonal(x, y, x_limit, y_limit, x_diff, y_diff): while 0 < x < x_limit and 0 < y < y_limit: x += x_diff y += y_diff yield (x, y) Where you'd use it like: for x, y in _diagonal(row, col, limit, limit, 1, 1): coords.append((x, y)) However this is quite alike to range, we enter a start, limit and step in both. Take range(row + 1, limit, 1). To then get both x and y you need to use zip which works perfectly as it stops iterating when any of the inputs stop. And so you can get: for x, y in zip(range(row + 1, limit, 1), range(col + 1, limit, 1)): coords.append((x, y)) This I would say is better than creating your own function as you don't really have to implement anything yourself. Either way you'll want to remove the duplicate code to append to coords. To do this you can use itertools.chain. This lets you chain iterators to become one iterator. So since you want a list as output you'll have to 'wrap' the call in a list call. This can get you: from itertools import chain def diagonals(coord, size): x, y = coord return list(chain( [(x, y)], zip(range(x - 1, -1, -1), range(y - 1, -1, -1)), zip(range(x + 1, size, 1), range(y + 1, size, 1)), zip(range(x + 1, size, 1), range(y - 1, -1, -1)), zip(range(x - 1, -1, -1), range(y + 1, size, 1)), )) print(diagonals((3, 4), 8))
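As a sanity check (not part of the review itself), the zip-based version can be run against the two expected outputs listed in the question; zip stopping at the shorter range is what keeps every square on the board without explicit bounds tests:

```python
from itertools import chain

def diagonals(coord, size):
    x, y = coord
    return list(chain(
        [(x, y)],
        zip(range(x - 1, -1, -1), range(y - 1, -1, -1)),
        zip(range(x + 1, size), range(y + 1, size)),
        zip(range(x + 1, size), range(y - 1, -1, -1)),
        zip(range(x - 1, -1, -1), range(y + 1, size)),
    ))

# corner case from the question: only the main diagonal survives, because
# three of the four zips are exhausted immediately
print(diagonals((0, 0), 8))
# -> [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7)]

# interior case: same squares as the loop version, grouped by direction
assert set(diagonals((3, 4), 8)) == {
    (3, 4), (2, 3), (1, 2), (0, 1), (4, 5), (5, 6), (6, 7),
    (4, 3), (5, 2), (6, 1), (7, 0), (2, 5), (1, 6), (0, 7)}
```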
{ "domain": "codereview.stackexchange", "id": 22961, "tags": "python, matrix, chess" }
Singly linked list implementation in python 3
Question: I have implemented a Singly linked list. It works well. But if you think something needs to be improved, say it. This code was tested in Python 3.7.4. class Node(object): def __init__(self, data): self.data = data self.nextNode = None class LinkedList(object): def __init__(self): self.head = None self.size = 0 # O(1) !!! def insertStart(self, data): self.size = self.size + 1 newNode = Node(data) if not self.head: self.head = newNode else: newNode.nextNode = self.head self.head = newNode def remove(self, data): if self.head is None: return self.size = self.size - 1 currentNode = self.head previousNode = None while currentNode.data != data: previousNode = currentNode currentNode = currentNode.nextNode if previousNode is None: self.head = currentNode.nextNode else: previousNode.nextNode = currentNode.nextNode # O(1) def size1(self): return self.size # O(N) def insertEnd(self, data): self.size = self.size + 1 newNode = Node(data) actualNode = self.head while actualNode.nextNode is not None: actualNode = actualNode.nextNode actualNode.nextNode = newNode def traverseList(self): actualNode = self.head while actualNode is not None: print(actualNode.data) actualNode = actualNode.nextNode Answer: You are missing several important things for this code: Type Annotations Python 3.7 supports type annotations. Especially for a container class like this, you should fully use them. Initializer parameters Python containers generally all provide builder methods or class initializer functions that take other containers and use their contents. Consider: dict(iterable,**kwarg) list(iterable) set(iterable) str(object=b'', encoding='utf-8', errors='strict') tuple(iterable) If you're writing a Python container, you need to conform to expectations. I expect to be able to initialize the container when I create it. Protocols & Magic methods Python has a well-established set of protocols for implementing containers. You just have to decide what kind of container you're writing. 
I would suggest that a singly-linked list is an Iterable, a Sequence, a MutableSequence and possibly a Set type.
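A minimal sketch combining the three suggestions — type annotations, an iterable-accepting initializer, and the container protocols (here just `__len__` and `__iter__`). Names are illustrative, not a drop-in replacement for the original class:

```python
from typing import Iterable, Iterator, Optional

class Node:
    def __init__(self, data: int) -> None:
        self.data = data
        self.next: Optional["Node"] = None

class LinkedList:
    def __init__(self, items: Iterable[int] = ()) -> None:
        self.head: Optional[Node] = None
        self.size = 0
        for item in items:
            self.insert_start(item)

    def insert_start(self, data: int) -> None:
        # prepending works the same whether or not the list is empty
        node = Node(data)
        node.next = self.head
        self.head = node
        self.size += 1

    def __len__(self) -> int:
        return self.size

    def __iter__(self) -> Iterator[int]:
        current = self.head
        while current is not None:
            yield current.data
            current = current.next

lst = LinkedList([1, 2, 3])   # prepends each item, so the list is 3 -> 2 -> 1
print(list(lst), len(lst))    # -> [3, 2, 1] 3
```

With `__iter__` in place, traversal, `in` tests, and conversion to other containers come for free, replacing the print-based `traverseList`.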
{ "domain": "codereview.stackexchange", "id": 36228, "tags": "python, python-3.x, linked-list" }
Using partial application to compose functions
Question: I have an application that will provide notifications to clients. It has two configurations: Default: A single web api instance that notifies the appropriate subscribers Alternative: Under high load, there will be multiple web api instances. To coordinate messages between instances, I will leverage a redis backplane. When an api instance receives a message, it sends it to the redis backplane, which then pushes it back down to all api instances. I have a config.useRedis option that should control whether Redis should be used. To get things working, I created some pretty monolithic code: //Channels will reference the channel name - which is our device id - and an array //of websockets that are interested in that device let channels = {}; let redisSubscribers = []; // Send a message to the appropriate websocket clients listening on the right channel // As we check the channel, we'll do a little housekeeping as well const broadcast = (data) => { const message = JSON.parse(data); const deviceId = message.deviceId; if (Object.keys(channels).some(key => +key === deviceId)) { //Perform a cleanup of any closed sockets channels[deviceId] = channels[deviceId].filter(socket => socket.readyState === 1); channels[deviceId].forEach(ws => ws.send(data)); //If the channel is empty, nuke the channel and close the redis subscription //if appropriate if (channels[deviceId].length === 0) { delete channels[deviceId]; if (config.useRedis) { const subscriber = redisSubscribers.find(sub => +sub.deviceId === deviceId); if(subscriber) { subscriber.quit(); redisSubscribers = redisSubscribers.filter(sub => sub !== subscriber); } } } } } ... wss.on('connection', (ws) => { console.log('socket established...') const querystring = url.parse(ws.upgradeReq.url, true).query; const deviceId = querystring.deviceId; if (!Object.keys(channels).includes(deviceId)) { channels[deviceId] = [ws]; if(config.useRedis) { //Init redis subscriber ... 
} } else { channels[deviceId].push(ws); } ws.on('message', (data) => { if(config.useRedis) { redisPublisher.publish(deviceId, data); } else { const message = JSON.parse(data); broadcast(message); } }); }); I'd like to compose the behaviour instead, which would provide more flexibility in future in case I want to move to something else besides Redis. So, if Redis is enabled for the app, in the broadcast function, I want to enhance the function with additional logic. In the connection handler, I want to init a Redis subscriber to receive messages from the backplane. In the message handler, I want to replace the default publishing logic with the custom Redis logic. Note: The following has not been tested, I just threw some code together to demonstrate the course. Consider it pseudo-code :) websocketserver.js const websocketServer = () => { let channels = {}; const broadcast = (function(data) { return function(enhancement) { if (Object.keys(channels).some(key => +key === deviceId)) { //Perform a cleanup of any closed sockets channels[deviceId] = channels[deviceId].filter(socket => socket.readyState === 1); channels[deviceId].forEach(ws => ws.send(data)); //If the channel is empty, nuke the channel if (channels[deviceId].length === 0) { delete channels[deviceId]; } // If any "enhancement" should be applied, execute the passed in function if(typeof enhancement === 'function') { enhancement(deviceId) } } } }) const onConnection = () => { return function(enhancedSubscribe, enhancedPublish) { console.log('socket established...') const querystring = url.parse(ws.upgradeReq.url, true).query; const deviceId = querystring.deviceId; if (!Object.keys(channels).includes(deviceId)) { channels[deviceId] = [ws]; if(typeof === 'enhancedSubscribe') { enhancedSubscribe(deviceId) } } else { channels[deviceId].push(ws); } ws.on('message', (enhancedPublish) => { if(typeof enhancedPublish === 'function'){ enhancedPublish(data); } else { const message = JSON.parse(data); broadcast(message); 
} } } }; return { broadcast, onConnection, onMessage } } redisServer.js const redisServer = () => { let redisSubscribers = []; const removeSubscriber = () => { const subscriber = redisSubscribers.find(sub => +sub.deviceId === deviceId); if(subscriber) { subscriber.quit(); redisSubscribers = redisSubscribers.filter(sub => sub !== subscriber); } } const enhancedSubscribe = (deviceId, broadcast) => { //Init redis subscriber ... } const enhancedPublish = (deviceId, data) => { redisPublisher.publish(deviceId, data); } return { removeSubscriber, enhancedSubscribe, enhancedPublish } } I attempt to compose the behaviour I want like this: serverFactory.js const server = () => { const broadcast = (data) => { let webSocketBroadcast = websocketServer.broadcast(data); if (config.useRedis) { return webSocketBroadcast(); } else{ return webSocketBroadcast(redisServer.enchanceBroadcast); } } const onConnection = (ws) => { let websocketServerOnConnection = websocketServer.onConnection(); if (config.useRedis) { retuen websocketServerOnConnection() } else } } return { broadcast, onconnection, onMessage } } So, I am attempting to extend/enhance the websocket behaviour with partial application. Does this make sense? Anything further I can do to improve this or make it more flexible? Answer: Semantically, FP thinking means we don't extend functionality, we compose functionality. In JavaScript, when we are composing things usually we can use bind to specialize a function instead of having to write nested functions everywhere. The composition is what affords us the greater flexibility in FP. Here is a good SO post about guiding principles of FP. And also a good video series that, although a little basic, does a good job of demonstrating FP concepts. Here's my take on your FP psuedocode that gets at this subscription model with redis as a possible specialized task, I was very judicious in my functions. 
Normally wouldn't do this, it's a little over the top (every call to bind creates a new function, which can be a performance concern), but it's purely for demonstrative purposes: /* CONFIG */ let useRedis = config.useRedis; // env var / conf file / whatever // Figure out the most basic actions you need to do, // or the fundamental types that compose your problem // most actions here surround subscriptions // - subscribe (add new channel) // - broadcast data to subscribers // - unsubscribe (delete) // - sendMsg (is what broadcast does) let sendMsg = (msg, ws, id) => ws.send(msg); // the redis vs non part is just implementation detail if(useRedis) { sendMsg = (msg, redisPublisher, id) => redisPublisher.publish(id, msg); } let broadcast = (sendMsgFunc, id, msg, ws) => { let sendCurrMsg = sendMsgFunc.bind(null, msg, ws); // maybe this array could be array of ws or redis...? up to you! :D channels[id].forEach(sendCurrMsg); } let unsubscribe = (id, channels) => { delete channels[id]; return channels; } let findRedisSub = (id, s) => +s.deviceId == id; if(useRedis) { unsubscribe = (id, redisSubs) => { let findTheSub = findRedisSub.bind(null, id); let subscriber = redisSubs.find(findTheSub); if(subscriber) { subscriber.quit(); return redisSubs.filter(findTheSub) } return redisSubs; } } // can either call if(useRedis) ... inside these functions or use new functions // depends if you even need the channels object with redis enabled...? let isSubscribed = (id, channels) => !Object.keys(channels).includes(id); let subscribe = (id, channels, ws) => { if(isSubscribed(id, channels)) { channels[id].push(ws); } else { channels[id] = [ws]; } if(useRedis) { /* add redis sub, return it */ } } /* ... */ //const wss = setUpListener(); // TODO in FP global state is usually a bad thing. // Think about where the best place these global states should go... let channels = {}; // connection function could also be abstracted out... 
wss.on('connection', ws => { // pull data out let id = getId(ws); // function that abstracts details away // might return redis, we don't care here let subscription = subscribe(id, channels, ws); // partially evaluate broadcast with msg and id let subscriberBroadcast = broadcast.bind(null, sendMsg, subscription); ws.on('message', subscriberBroadcast); }); The connection function code is a lot easier to reason about with this more 'flat' method of thinking about it, and, personally, I like that the abstractions are just plain functions rather than hierarchical objects from a class structure.
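The `broadcast.bind(null, sendMsg, subscription)` pattern above is ordinary partial application, and the same composition works in any language with first-class functions. A Python sketch with `functools.partial` — all names, and the list-based stand-ins for sockets and the Redis publisher, are illustrative:

```python
from functools import partial

def send_direct(msg, subscriber):
    subscriber.append(msg)             # stand-in for ws.send(msg)

def send_redis(msg, subscriber):
    subscriber.append(("redis", msg))  # stand-in for redisPublisher.publish

def broadcast(send_func, subscribers, msg):
    # broadcast knows nothing about the transport; it is composed in
    for sub in subscribers:
        send_func(msg, sub)

use_redis = False                      # config flag, as in the post
send_msg = send_redis if use_redis else send_direct

channel = [[], []]                     # two fake subscribers
subscriber_broadcast = partial(broadcast, send_msg, channel)

subscriber_broadcast("hello")          # what ws.on('message', ...) would call
print(channel)                         # -> [['hello'], ['hello']]
```

Swapping the Redis transport for something else means changing only the `send_msg` assignment, never `broadcast` itself.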
{ "domain": "codereview.stackexchange", "id": 26012, "tags": "javascript, functional-programming" }
Integrality gap and complexity classes
Question: I would like to know whether there exist complexity classes that are defined according to the integrality gap of their problems. In particular, is there a class of problems whose integrality gap is unbounded? By an unbounded integrality gap I mean an integrality gap that is infinite, i.e. the value of the relaxation is 0. For example, the min-diff Partition problem has an unbounded integrality gap. Answer: The question is not well posed. The integrality gap is defined for a linear programming formulation of a problem, not fundamentally for the problem itself. A problem may have more than one integer programming formulation, with some formulations having an unbounded integrality gap and others a bounded one. For example, see this paper on the capacitated k-median problem, or see this statement from the book The Design of Approximation Algorithms (Page 167, last paragraph).
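The min-diff Partition claim from the question can be made concrete on a toy instance (numbers chosen arbitrarily): a relaxation that may split items fractionally always reaches difference 0, while the integral optimum stays positive, so the ratio is unbounded:

```python
from itertools import product

# toy min-diff Partition instance
items = [2, 3, 4]

# fractional solution: half of every item on each side, difference 0
side_a = sum(0.5 * x for x in items)
side_b = sum(0.5 * x for x in items)
lp_value = abs(side_a - side_b)        # 0.0, regardless of the instance

# integral optimum by brute force over all 2^n side assignments
ip_value = min(
    abs(sum(x for x, s in zip(items, signs) if s) -
        sum(x for x, s in zip(items, signs) if not s))
    for signs in product([0, 1], repeat=len(items)))

print(lp_value, ip_value)              # -> 0.0 1
# ip_value / lp_value is a division by zero: the gap is "unbounded"
```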
{ "domain": "cs.stackexchange", "id": 21210, "tags": "complexity-theory, optimization, complexity-classes, integer-programming" }
Lorentz transformation for electric and magnetic fields
Question: How do derive the following transformation rule (J.D. Jackson third Edition 11.10) for electric and magnetic field? $$\vec E' = \gamma \left( \vec E + \vec \beta \times \vec B\right) - \frac{\gamma^2}{\gamma +1} \vec \beta \left( \vec\beta \cdot \vec E \right ) \tag{1}$$ $$\vec B' = \gamma \left( \vec B - \vec \beta \times \vec E\right) - \frac{\gamma^2}{\gamma +1} \vec \beta \left( \vec\beta \cdot \vec B \right ) \tag{2}$$ I know if $\beta$ is along positive x axis, the transformation of fields are given by $$\begin{align*} E_1' &= E_1 \\ E_2' &= \gamma (E_2 - \beta B_3)\\ E_3' &= \gamma (E_3 + \beta B_2) \\ \end{align*}\tag{3}$$ and for magnetic fields by $$\begin{align*} B_1' &= B_1 \\ B_2' &= \gamma (B_2 + \beta E_3)\\ B_3' &= \gamma (B_3 - \beta E_2) \\ \end{align*} \tag{4}$$ here is how I think of it so far, taking $\beta = (\beta_1, 0, 0)$ and taking $x$ component of $(1)$ should give me first of equation $(3)$ but I get $$E_1' = E_1 \left( \gamma - \frac{\gamma^2}{1+\gamma} \beta_1^2\right) = \gamma E_1 $$ Answer: $\newcommand{\B}{\vec{B}^\times} \newcommand{\e}{\vec{E}} \renewcommand{\b}{\vec{\beta}} \newcommand{\bv}{\vec{B}}$ The field tensor can be written $\begin{pmatrix} 0 & -\e \\ \e & \B \end{pmatrix}$, Where $\B$ is the dual tensor to $\vec{B}$ defined by $\B \vec{v} = \vec{B} \times \vec{v}$. Equivalently, $(\B)_{ik} = \epsilon_{ijk} B_j$. Note $\vec{v}^T \B = (\vec{v} \times \vec{B})^T$. It will also be important to note that $$(\vec{v} \times \vec{w})^\times \vec{u} = \vec{w} (\vec{u} \cdot \vec{v}) - \vec{v} (\vec{u} \cdot \vec{w})$$, so that $$ (\vec{v} \times \vec{w})^\times = \vec{w} \otimes \vec{v} - \vec{v} \otimes \vec{w}$$. The action of a Lorentz transformation can be written $$\begin{pmatrix} \gamma & -\gamma \b \\ -\gamma \b& 1+\alpha \b \otimes \b \end{pmatrix}$$, where $$\alpha = \frac{\gamma^2}{1+\gamma}$$. 
It will be important to note that $$\gamma^2 - \gamma \alpha = \gamma(\gamma - \alpha) = \gamma (\frac{\gamma + \gamma^2 - \gamma^2}{1 + \gamma}) = \frac{\gamma^2}{1+\gamma} = \alpha$$. Also $$1+\alpha \beta^2 = 1 + \alpha (1-1/\gamma^2) =1+ \frac{\gamma^2 -1}{1 + \gamma} =1+ \gamma -1 = \gamma$$ Anyway, the transformed field is $$\begin{pmatrix} \gamma & -\gamma \b \\ -\gamma \b& 1+\alpha \b \otimes \b \end{pmatrix} \begin{pmatrix} 0& -\e \\ \e& \B \end{pmatrix} \begin{pmatrix} \gamma & -\gamma \b \\ -\gamma \b& 1+\alpha \b \otimes \b \end{pmatrix}$$. Since the field tensor is antisymmetric, and the Lorentz transformation tensor is symmetric, we know the result must be antisymmetric. We will use this fact later. Let's start by compute the first product $$\begin{pmatrix} \gamma & -\gamma \b \\ -\gamma \b& 1+\alpha \b \otimes \b \end{pmatrix} \begin{pmatrix} 0& -\e \\ \e& \B \end{pmatrix} = \begin{pmatrix} -\gamma \b \cdot \e& -\gamma \e-\gamma \b \times \bv \\ \e + \alpha \b (\b \cdot \e)& \gamma \b \otimes \e + \B + \alpha \b \otimes (\b \times \bv) \end{pmatrix} $$. Next we compute the second product. Since we already know this product will be antisymmetric, we will only calculate the right column. 
$$ \begin{pmatrix} -\gamma \b \cdot \e& -\gamma \e-\gamma \b \times \bv \\ \e + \alpha \b (\b \cdot \e)& \gamma \b \otimes \e + \B + \alpha \b \otimes (\b \times \bv) \end{pmatrix} \begin{pmatrix} \gamma & -\gamma \b \\ -\gamma \b& 1+\alpha \b \otimes \b \end{pmatrix} $$ $$ = \begin{pmatrix} 0 & \gamma^2 \b (\b \cdot \e) -\gamma \e-\gamma \b \times \bv -\alpha \gamma \b (\b \cdot \e) \\ \cdots & -\gamma \e \otimes \b - \alpha \gamma (\b \cdot \e) \b \otimes \b + \gamma \b \otimes \e + \B \\ & + \alpha \b \otimes (\b \times \bv) + \alpha \gamma (\b \cdot \e) \b \otimes \b + \alpha (\bv \times \b) \otimes \b \end{pmatrix} $$ $$ = \begin{pmatrix} 0 & -(\gamma (\e + \b \times \bv) - (\gamma^2 - \alpha \gamma) \b (\b \cdot \e)) \\ \cdots & \B -\gamma( \e \otimes \b - \b \otimes \e) - \alpha((\b \times \bv) \otimes \b - \b \otimes (\b \times \bv)) \end{pmatrix} $$ $$ =\begin{pmatrix} 0 & -(\gamma (\e + \b \times \bv) - \alpha \b (\b \cdot \e)) \\ \cdots & \B -\gamma( \b \times \e)^\times - \alpha(\b \times (\b \times \bv))^\times \end{pmatrix} $$ By now we have found the expected expression for the new electric field from the upper right entry: $\tilde{\e} =\gamma (\e + \b \times \bv) - \alpha \b (\b \cdot \e)$. Let's now focus on the bottom right entry. $$ \tilde{\bv}^\times=\B -\gamma( \b \times \e)^\times - \alpha((\b \cdot \bv) \b^\times - \beta^2 \B)$$ $$ = ((1+\alpha \beta^2)\bv - \gamma \b \times \e - \alpha \b (\b \cdot \bv) )^\times$$. Thus $$\tilde{\bv} = \gamma \bv - \gamma \b \times \e - \alpha \b (\b \cdot \bv) $$ $$ = \gamma(\bv - \b \times \e) - \alpha \b (\b \cdot \bv)$$ as was desired.
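As a numerical spot check (separate from the derivation), the general formulas (1) and (2) can be verified to reduce to the component formulas (3) and (4) for a boost along x. The x-component survives unchanged because $\gamma - \alpha \beta^2 = 1$, which follows from the identity $1 + \alpha \beta^2 = \gamma$ used above and resolves the discrepancy raised in the question. Field values below are arbitrary:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

E = [1.0, 2.0, 3.0]
B = [0.5, -1.0, 2.5]
beta = [0.6, 0.0, 0.0]                 # boost along x
gamma = 1.0 / math.sqrt(1.0 - dot(beta, beta))
alpha = gamma**2 / (1.0 + gamma)

# general transformation, eqs. (1) and (2)
bxB, bxE = cross(beta, B), cross(beta, E)
E_new = [gamma*(E[i] + bxB[i]) - alpha*beta[i]*dot(beta, E) for i in range(3)]
B_new = [gamma*(B[i] - bxE[i]) - alpha*beta[i]*dot(beta, B) for i in range(3)]

# component form for a boost along x, eqs. (3) and (4)
E_ref = [E[0], gamma*(E[1] - 0.6*B[2]), gamma*(E[2] + 0.6*B[1])]
B_ref = [B[0], gamma*(B[1] + 0.6*E[2]), gamma*(B[2] - 0.6*E[1])]

assert all(abs(a - b) < 1e-12 for a, b in zip(E_new, E_ref))
assert all(abs(a - b) < 1e-12 for a, b in zip(B_new, B_ref))
```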
{ "domain": "physics.stackexchange", "id": 11853, "tags": "homework-and-exercises, special-relativity, classical-electrodynamics" }
Substrates of cytochemical reactions in this immunostaining
Question: Expression of the extracellular protein Laminin 9 alpha-4 chain in human skeletal muscle. Indirect immunostaining with an HRP immunostain marker. Ob. x40. I have unsuccessfully searched the NCBI database, JSTOR and other major biology databases for an answer, which suggests to me that I do not understand what is going on. 1. What are the substrates of the cytochemical reactions in the immunostaining above? In case that question cannot be answered, here is a simpler one where I apparently know the reaction exactly: Mitochondria in Hep-2 cell line cells. Cytochemical test for the mitochondria-specific enzyme NADH dehydrogenase. Ob. x40. 2. What is the substrate of the cytochemical reaction? My answer: The reaction of NADH dehydrogenase is: NADH + H+ + CoQ → NAD+ + CoQH2 Substrate: CoQ Answer: It's hard to understand the question, but in any immunocytochemical staining such as the above, you have two different types of reactions: the antibody binding to the target (in this case, some laminin) and the peroxidase-based colorimetric reaction with DAB. DAB (3,3'-diaminobenzidine tetrahydrochloride) is oxidized in the presence of hydrogen peroxide to form a brown precipitate, which becomes the stain. The fact that it is an indirect immunostaining implies that you have a non-conjugated primary antibody against laminin and a secondary antibody against the Fc part of immunoglobulins of the species in which the primary antibody was raised. This secondary antibody is conjugated with HRP (horseradish peroxidase, the enzyme responsible for oxidizing the DAB in the presence of the peroxide). For a more complete reference on immunocytochemical stainings, you can read: http://www.ihcworld.com/_books/Dako_Handbook.pdf
{ "domain": "biology.stackexchange", "id": 26, "tags": "homework, mitochondria" }
Dispersion of Water: A Swimming Pool Conundrum
Question: The following question came up in a swimming pool: The pool has a jet that releases water below the surface. When I put my right hand in the water, several feet away from the jet, I can feel the pressure from the jet. (I believe that technically, I'm feeling a longitudinal wave created by the jet - is that so?) If I then put my left hand directly in front of the jet, it deflects the wave and I no longer feel the pressure on my right hand. When I remove my left hand, I once again feel the pressure. When I remove my left hand from blocking the jet, it takes a second or so before I once again feel the pressure. This makes sense - it takes time for the wave to travel through the pool. However, when I block the jet, I stop feeling the pressure on my right hand immediately. Why is this so? I would have guessed that analogous to the unblocking case, the wave already past my left hand would continue to travel for another second, so I wouldn't immediately stop feeling it. Please help me so that I can swim peacefully once again! Answer: What you feel with your right hand is not a wave (which is a propagation of a disturbance), but a stream of water (involving actual movement of material) coming from the jet. It has some kinetic energy associated with it, but, due to the high viscosity of water, this energy is quickly exhausted, if the stream is not backed up by the pressure from the new water coming from the jet. To generate a wave in the water, you can, for instance, push it with your hand or drop something in it. Once a wave (disturbance) is created, it moves by itself, so it would be impossible to stop it by inserting an obstacle behind the wavefront.
{ "domain": "physics.stackexchange", "id": 50973, "tags": "waves, pressure, water" }
Hacking Ruby Hash: Fastest #to_struct method
Question: I am trying to make the fastest #to_struct method for Ruby's Hash. I am including a use case and benchmark so you can run it and see whether you have really improved the code. My implementation follows, with the benchmark included; the timings at the bottom are from my machine. How can I make this faster?

require "json"
require "benchmark"
require "bigdecimal/math"
require "ostruct"  # needed for the OpenStruct benchmark below

class Hash
  def to_struct
    k = self.keys
    klass = k.map(&:to_s).sort_by { |word| word.downcase }.join.capitalize
    begin
      Kernel.const_get("Struct::" + klass).new(*self.values_at(*k))
    rescue NameError
      Struct.new(klass, *k).new(*self.values_at(*k))
    end
  end
end

# You have a hash that you have built in your app
sample_hash = {
  foo_key: "foo_val",  bar_key: "bar_val",  baz_key: "baz_val",
  foo1_key: "foo_val", bar1_key: "bar_val", baz1_key: "baz_val",
  foo2_key: "foo_val", bar2_key: "bar_val", baz2_key: "baz_val",
  foo3_key: "foo_val", bar3_key: "bar_val", baz3_key: "baz_val",
  foo4_key: "foo_val", bar4_key: "bar_val", baz4_key: "baz_val",
  foo5_key: "foo_val", bar5_key: "bar_val", baz5_key: "baz_val",
  foo6_key: "foo_val", bar6_key: "bar_val", baz6_key: "baz_val",
  foo7_key: "foo_val", bar7_key: "bar_val", baz7_key: "baz_val",
}

# Then you have JSON coming from some external API
json_response = "{\"qux_key\":\"qux_val\",\"quux_key\":\"quux_val\",\"corge_key\":\"corge_val\"}"
hash_with_unknown_keys = JSON.parse(json_response)

# Merge these two together
sample_hash.merge!(hash_with_unknown_keys)

iterations = 100_000

Benchmark.bm do |bm|
  bm.report "#to_struct" do
    iterations.times do
      # Would be super nice if I could convert this to a struct with a method.
      # Somehow a bit faster than the explicit example below, and much faster
      # than OpenStruct.
      sample_struct = sample_hash.to_struct
      raise "Wrong value" unless sample_struct.foo_key == "foo_val"
    end
  end

  bm.report "Struct" do
    iterations.times do
      sample_struct = Struct.new(*sample_hash.keys).new(*sample_hash.values)
      raise "Wrong value" unless sample_struct.foo_key == "foo_val"
    end
  end

  bm.report "OpenStruct" do
    iterations.times do
      sample_open_struct = OpenStruct.new(sample_hash)
      raise "Wrong value" unless sample_open_struct.foo_key == "foo_val"
    end
  end
end

#                   user     system      total        real
# #to_struct    4.030000   0.010000   4.040000 (  4.072031)
# Struct        6.870000   0.290000   7.160000 (  7.320459)
# OpenStruct   23.550000   0.210000  23.760000 ( 23.895187)

Answer: Use OpenStruct and Ruby >= 2.3.0

Starting with MRI 2.3.0, the OpenStruct case in your benchmark gets fast. Very fast:

ruby-2.2.5 (ruby 2.2.5p319 (2016-04-26 revision 54774) [x86_64-linux]):

                 user     system      total        real
#to_struct   1.780000   0.000000   1.780000 (  1.774490)
Struct       9.100000   0.000000   9.100000 (  9.099619)
OpenStruct   7.910000   0.000000   7.910000 (  7.911342)

ruby-2.3.0 (ruby 2.3.0p0 (2015-12-25 revision 53290) [x86_64-linux]):

                 user     system      total        real
#to_struct   1.700000   0.000000   1.700000 (  1.695587)
Struct       7.660000   0.000000   7.660000 (  7.660869)
OpenStruct   0.650000   0.000000   0.650000 (  0.658817)

With the latest MRI, your #to_struct method gets a bit of a speed boost as well:

ruby-2.4.1 (ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux]):

                 user     system      total        real
#to_struct   1.460000   0.000000   1.460000 (  1.459063)
Struct       7.420000   0.000000   7.420000 (  7.416505)
OpenStruct   0.660000   0.000000   0.660000 (  0.658009)

So if you can, use Ruby >= 2.3.0, and use OpenStruct.

How to make #to_struct faster

I made the following changes for performance:

- Eliminated the mapping of hash keys through #downcase when sorting them.
- Used #values instead of #values_at (values are always in the same order as keys). See https://stackoverflow.com/a/31425274/238886

and these for clarity:

- Eliminated the temporary variable for self.keys.
- DRYed up the creation of the struct instance.
- Removed the explicit self references.
With these changes, the code is:

class Hash
  def new_to_struct
    klass_name = keys.map(&:to_s).sort.join.capitalize
    klass = begin
              Kernel.const_get("Struct::" + klass_name)
            rescue NameError
              Struct.new(klass_name, *keys)
            end
    klass.new(*values)
  end
end

and the benchmark (run against ruby-2.4.1):

                     user     system      total        real
#to_struct       1.410000   0.000000   1.410000 (  1.403908)
#new_to_struct   0.760000   0.000000   0.760000 (  0.757548)
Struct           7.060000   0.010000   7.070000 (  7.075619)
OpenStruct       0.650000   0.000000   0.650000 (  0.649057)

These changes get #new_to_struct close to OpenStruct, but it is still not quite as fast.
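Both versions of the method lean on the same trick: passing a String as Struct.new's first argument registers the new class as a constant under Struct::, so the NameError branch only runs once per distinct key set and every later call is a cheap Kernel.const_get lookup. A minimal sketch of that caching behavior (the class name Point2 here is arbitrary, not from the original post):

```ruby
# Struct.new with a String first argument defines Struct::Point2 as a
# side effect and returns that class.
first = Struct.new("Point2", :x, :y)

# A later lookup by name finds the very same class object, so no
# NameError is raised and no new class needs to be built.
again = Kernel.const_get("Struct::Point2")
raise "not cached" unless first.equal?(again)

# The cached class instantiates normally.
point = again.new(1, 2)
raise "wrong members" unless point.x == 1 && point.y == 2
```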
{ "domain": "codereview.stackexchange", "id": 26464, "tags": "performance, ruby" }