Is the calculated shear strain in a beam, engineering strain or tensorial strain?
Question: Recently I came across a FEM code for a linear deformation beam element, and it made me wonder what the correct relation between shear stress and shear strain in a beam element is. In the beam element of the image below, it seems that shear strain is defined as Shear Strain = $\delta_{v}/L$, with L being the length of the element and (if I am not wrong) the shear deformation, $\delta_{v}$, calculated from $$\delta_{v}=(\delta_{1}-\delta_{2})+(\phi_1+\phi_2)\frac{L}{2},$$ $\delta_{i}$ and $\phi_i$ being the vertical displacements and the rotations of the ends. In this case, is the shear strain the engineering shear strain $\gamma$, or is it the tensor shear strain $\varepsilon_{xy}$? A comment on the answer: As stated by a comment below, the definition of shear may not be the same for beam elements as under the assumptions of classical mechanics of materials. It seems to me that the distinction between engineering strain and tensorial strain becomes vague when it comes to beam elements. Usually beams have free faces without stress, so the equal shear stresses assumed for an infinitesimal element under the small-deformation concept are not applicable here. That is, the shear stresses change along the height of the beam, and thus we need a coefficient to account for these changes. A common solution is that the cross-section area is modified by a beam shear coefficient and a shear area is defined: $$A_s = \mathrm{coef}\cdot A$$ So now the shear stiffness, which depends on the area, is also modified by the coefficient. In less common cases where a similar element is used to simulate a volume (like a lattice network model), I see that they use 2G instead of G (for example, see equation (22) in T. Kawai 1978). One interpretation is that they assume the shear strain is the tensorial shear strain, so 2G should be used instead of G. But another interpretation is that the coefficient in this case is assumed equal to 2 since the beam does not have free surfaces. 
(above image adapted from here) Answer: Shear strain is defined as the angular deformation caused by a parallel or shearing force. In structural engineering, there are two cases of shear strain that are of particular concern. The occurrence of each depends on the loading that causes the shear deformation, as indicated below. 1) Shear strain due to pure shear 2) Shear strain due to shear deformation Note, shear deformation becomes a concern when L/d (span/beam depth) $\leq$ 10. Otherwise, it is usually ignored. Edit: For typical beams that follow beam theory (Euler-Bernoulli or Timoshenko), as stated above, the shear deformation is very small and thus usually ignored. The measured strain therefore consists only of the horizontal stretching and shortening of the extreme fibers in the direction of the longitudinal axis, which I think is your case.
{ "domain": "engineering.stackexchange", "id": 4195, "tags": "structural-engineering, beam, finite-element-method, shear, deformation" }
Why does red light travel faster than blue light?
Question: I know that light of all frequencies travels at the same speed in vacuum. But I wonder why their speeds differ in any other medium: why does red light travel faster if it has less energy than blue light? Answer: Feynman: "The correct picture of an atom, which is given by the theory of wave mechanics, says that, so far as problems involving light are concerned, the electrons behave as though they were held by springs. So we shall suppose that the electrons have a linear restoring force which, together with their mass $m$, makes them behave like little oscillators, with a resonant frequency $\omega_0$. ... The electric field of the light wave polarizes the molecules of the gas, producing oscillating dipole moments. The acceleration of the oscillating charges radiates new waves of the field. The new field, interfering with the old field, produces a changed field which is equivalent to a phase shift of the original wave. Because this phase shift is proportional to the thickness of the material, the effect is equivalent to having a different phase velocity in the material." From this model, an expression for the refractive index can be derived, which shows that the refractive index depends on the frequency of the light; since the speed of the wave through the material depends on the refractive index, waves of different frequencies move with different velocities. Here is Feynman's derivation: http://www.feynmanlectures.caltech.edu/I_31.html
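For a dilute gas of such oscillators (electron number density $N$, charge $q_e$, mass $m$, vacuum permittivity $\epsilon_0$), the linked derivation arrives at an index of refraction of the form

```latex
n(\omega) \;\approx\; 1 + \frac{N q_e^{2}}{2\,\epsilon_0 m\,\left(\omega_0^{2}-\omega^{2}\right)}
```

For transparent media the resonance $\omega_0$ lies above the visible frequencies, so $\omega_0^{2}-\omega^{2}$ is smaller for blue light (larger $\omega$) than for red; hence $n$ is larger for blue light, which therefore travels more slowly.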
{ "domain": "physics.stackexchange", "id": 45845, "tags": "visible-light, speed-of-light, refraction" }
Small size datasets for object detection, segmentation and localization
Question: I am looking for a small dataset on which I can implement object detection, object segmentation and object localization. Can anyone suggest a dataset smaller than 5 GB? Or do I need to know anything before implementing these algorithms? Answer: There are various datasets available, such as: Pascal VOC dataset: you can perform all of your tasks with this one. ADE20K Semantic Segmentation Dataset: you can perform only segmentation here. COCO dataset: this is a rich dataset, but its size is larger than 5 GB, so you can try downloading it with Google Colab into your drive and then making a zip file of the data smaller than 5 GB. You can download all of these datasets easily using GluonCV.
{ "domain": "ai.stackexchange", "id": 1695, "tags": "datasets, object-recognition, object-detection, resource-request" }
Density of Interstitial compounds
Question: I have read in my textbook (not very reliable) that the density of interstitial compounds is less than that of the parent compound. But how can this be true? We add atoms to the lattice voids, so density should increase, right? Answer: Interstitial compounds are typically obtained when elements such as $\ce{H},$ $\ce{B},$ $\ce{C}$ and $\ce{N}$ are located within the interstitial sites of a metallic substructure. Nonetheless, the metallic substructure is not that of the pure metallic element in most cases. For example, let's consider $\ce{Nb}.$ We can dissolve some amount of $\ce{N}$ inside the bcc structure of pure $\ce{Nb}$. On one hand, this dissolution increases the unit cell volume, thus reducing the density. On the other hand, if the amount of dissolved $\ce{N}$ increases, a hcp structure is first formed $(\ce{Nb2N}).$ Further increase of the $\ce{N}$ content leads to a fcc structure. In both the hcp and fcc structures $\ce{N}$ occupies interstitial sites of the $\ce{Nb}$ substructure, and in fact both are interstitial compounds. But in this case the metallic substructure is not found in pure $\ce{Nb}.$ In the end you have the following densities for $\ce{Nb}$, $\ce{Nb2N}$, $\ce{NbN}$: $$ \begin{array}{lc} \hline \text{Compound} & \rho/\pu{g cm^-3} \\ \hline \ce{Nb} & 8.57 \\ \ce{Nb2N} & 8.25 \\ \ce{NbN} & 8.47 \\ \hline \end{array} $$ So, you can see that, considering different interstitial compounds, no general rule can be drawn.
{ "domain": "chemistry.stackexchange", "id": 17028, "tags": "materials, solid-state-chemistry, metallurgy, alloy" }
What is postselection in quantum computing?
Question: A quantum computer can efficiently solve problems lying in the complexity class BQP. I have seen a claim that one can (potentially, because we don't know whether BQP is a proper subset or equal to PP) increase the efficiency of a quantum computer by applying postselection and that the class of efficiently solvable problems now becomes postBQP = PP. What does postselection mean here? Answer: "Postselection" refers to the process of conditioning on the outcome of a measurement on some other qubit. (This is something that you can consider for classical probability distributions and statistical analysis as well: it is not a concept special to quantum computation.) Postselection has featured quite often (up to this point) in quantum mechanics experiments, because — for experiments on very small systems, involving not very many particles — it is a relatively easy way to simulate having good quantum control or feedforward. However, it is not a practical way of realising computation, because you have to condition on an outcome of one or more measurements which may occur with very low probability. Actually 'selecting' a measurement outcome is nothing you can do easily in quantum mechanics — what one actually does is throw away any outcome which does not allow you to do what you want to do. If the outcome which you are trying to select has probability $0 < p < 1$, you will have to try an expected number $1/p$ times before you manage to obtain the outcome you are trying to select. If $p = 1/2^n$ for some large integer $n$, you may be waiting a very long time. 
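The $1/p$ cost quoted above is just the mean of a geometric distribution, which a few lines of Python (a toy classical simulation, not anything quantum) can confirm:

```python
import random

random.seed(1)
p = 0.05  # probability of the outcome we are postselecting on

def trials_until_success(p):
    """Repeat the experiment until the desired outcome finally occurs."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

runs = [trials_until_success(p) for _ in range(20000)]
mean_trials = sum(runs) / len(runs)
# the empirical mean is close to the expected 1/p = 20 repetitions
assert abs(mean_trials - 1 / p) < 2
```

For $p = 1/2^n$ this expected repetition count blows up exponentially, which is exactly why postselection is not a practical primitive.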
The result that postselection 'increases' (as you say) the power of bounded-error quantum computation from BQP to PP is a well-liked result in the theory of quantum computation, not because it is practical, but because it is a simple and crisp result of a sort which is rare in computational complexity, and is useful for informing intuitions about quantum computation — it has led onward to ideas of "quantum supremacy" experiments, for example. But it is not something which you should think of as an operation which is freely available to quantum computers as a practical technique, unless you can show that the outcomes which you are trying to postselect are few enough and of high-enough probability (or, as with measurement-based computation, that you can simulate the 'desirable' outcome by a suitable adaptation of your procedure if you obtain one of the 'undesirable' outcomes).
{ "domain": "quantumcomputing.stackexchange", "id": 91, "tags": "complexity-theory, postselection, bqp, terminology-and-notation" }
Training on data with inherently non-applicable data cells
Question: I am training a model on a chemical sample dataset to find outliers and perform imputation where it makes sense. Chemical Dataset: Contains thousands of rows of chemical mixtures with many columns of properties. Example properties: bromine content, density. Inherently non-applicable data: The chemicals can be gas, liquid or solid, but some properties are only applicable to samples of a certain state. An example could be viscosity in liquids, bond type (ionic, molecular, covalent) in solids or density in gas. So far, all research has pointed towards methods of fixing "missing values" via column means, data imputation or something similar. There doesn't seem to be any sense in imputing the freezing point of a gas. A gas mixture does not have a freezing point. I am still in the process of data preparation and unsure how to proceed. I am working in Python, and missing data is stored as NaN values. Perhaps there are some models that can deal with such NaN values. Side note: The majority of the dataset comprises distillation curve datapoints (sequential data describing what percentage of a chemical sample evaporates as temperature is increased). This data is present for all samples. Follow-up 1: Is there a model that will give me NaN values for the freezing point when I give it something that resembles a gas? Follow-up 2: Can this be compared to image object detection where the object is partially obscured, or part of the image is corrupt? Answer: Welcome to the site! The usual approach to missing values is to handle them manually. There are a few algorithms which can do this automatically, such as LightGBM and XGBoost, but in most cases it's better for model performance to decide on how you should indicate that a value is missing in your data. For example, with a Pandas dataframe in Python, I may decide to replace all NA/NaN values in a particular column which should contain positive integers with -99: dataframe[column] = dataframe[column].fillna(-99)
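To make the sentinel suggestion concrete, here is a small sketch (the column names and values are invented for illustration) that also keeps the "not applicable" information in an explicit indicator column before filling the NaN:

```python
import numpy as np
import pandas as pd

# Toy dataset: viscosity only applies to liquids, so the gas rows
# carry NaN that means "not applicable" rather than "missing".
df = pd.DataFrame({
    "state":     ["liquid", "gas",  "liquid", "gas"],
    "viscosity": [0.89,      np.nan, 1.20,     np.nan],
})

# One common pattern: record applicability as a flag the model can use...
df["viscosity_applicable"] = df["viscosity"].notna().astype(int)
# ...then replace NaN with a sentinel outside the valid value range.
df["viscosity"] = df["viscosity"].fillna(-99)

assert df["viscosity_applicable"].tolist() == [1, 0, 1, 0]
assert df.loc[1, "viscosity"] == -99
```

Tree-based models in particular can learn to split on the indicator column, so the sentinel never gets interpreted as a real viscosity.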
{ "domain": "datascience.stackexchange", "id": 4657, "tags": "python, predictive-modeling, data-cleaning, missing-data, data-imputation" }
Derivative of $l_1$ norm
Question: I want to compute the following derivative with respect to the $n\times1$ vector $\mathbf x$. $$g = \left\lVert \mathbf x - A \mathbf x \right\rVert_1 $$ My work: $$g = \left\lVert \mathbf x - A \mathbf x \right\rVert_1 = \sum_{i=1}^{n} \lvert x_i - (A\mathbf x)_i\rvert = \sum_{i=1}^{n} \lvert x_i - A_i \cdot \mathbf x \rvert = \sum_{i=1}^{n} \lvert x_i - \sum_{j=1}^n a_{ij} x_j\rvert$$ So the $k$th element of the derivative is: $$\frac{\partial g}{\partial x_k} = \frac{\partial }{\partial x_k}\sum_{i=1}^n \lvert x_i - \sum_{j=1}^n a_{ij} x_j\rvert $$ $$= \frac{\partial }{\partial x_k}\bigg(\lvert x_1 - \sum_{j=1}^n a_{1j} x_j\rvert +\cdots+ \lvert x_k - \sum_{j=1}^n a_{kj} x_j\rvert + \cdots+\lvert x_n - \sum_{j=1}^n a_{nj} x_j\rvert \bigg)$$ $$ =-a_{1k}\operatorname{sgn}\Big(x_1 - \sum_{j=1}^n a_{1j} x_j\Big)-\cdots+(1-a_{kk})\operatorname{sgn}\Big(x_k - \sum_{j=1}^n a_{kj} x_j\Big)-\cdots -a_{nk}\operatorname{sgn}\Big(x_n - \sum_{j=1}^n a_{nj} x_j\Big)$$ And my questions: Is this derivation correct? How can I represent the answer compactly? Can you recommend a source to master this material? Answer: Apart from a sign error, your result looks correct. The term with $(1-a_{kk})$ should have a positive sign. Also note that $\text{sgn}(x)$ as the derivative of $|x|$ is of course only valid for $x\neq 0$. If you take this into account, you can write the derivative in vector/matrix notation if you define $\text{sgn}(\mathbf{a})$ to be a vector with elements $\text{sgn}(a_i)$: $$\nabla g=(\mathbf{I}-\mathbf{A}^T)\text{sgn}(\mathbf{x}-\mathbf{Ax})$$ where $\mathbf{I}$ is the $n\times n$ identity matrix.
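The compact form $\nabla g=(\mathbf{I}-\mathbf{A}^T)\,\text{sgn}(\mathbf{x}-\mathbf{Ax})$ can be checked numerically against finite differences (valid as long as no component of $\mathbf{x}-\mathbf{Ax}$ is exactly zero), e.g. in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

def g(v):
    # g(x) = || x - A x ||_1
    return np.sum(np.abs(v - A @ v))

# analytic gradient from the answer
grad = (np.eye(n) - A.T) @ np.sign(x - A @ x)

# central finite differences, one coordinate at a time
eps = 1e-6
num = np.array([(g(x + eps * e) - g(x - eps * e)) / (2 * eps)
                for e in np.eye(n)])

assert np.allclose(grad, num, atol=1e-4)
```

With a random continuous $\mathbf{x}$ the kink set $x_i = (A\mathbf{x})_i$ is hit with probability zero, so the check is well-posed.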
{ "domain": "dsp.stackexchange", "id": 3534, "tags": "gradient, norm" }
Can we say it's hybridisation if it's the same species?
Question: If we mix two populations of the same species into the same environment and they then reproduce together, can we say it's hybridisation? The Wikipedia definition is the following: Hybridisation (biology): the process of combining different varieties of organisms to create a hybrid Answer: The question boils down to the definitions of species and hybridization. In general, we use the term hybridization to talk about mating between individuals from different species, but really the term species does not mean as much as most people would think it does. Some people may use the term hybridization to talk about mating between lineages that are usually considered to belong to the same species. Have a look at How could humans have interbred with Neanderthals if we're a different species? for more information on the problems with the definition of species. Edit: Here are related terms and concepts that may help you: gene flow, introgression, interspecific mating, secondary contact, partial reproductive isolation.
{ "domain": "biology.stackexchange", "id": 5361, "tags": "terminology, population-genetics, hybridization" }
How to perform rosrun on joint_state_publisher
Question: I have a launch file that launches a URDF-based model on Gazebo which has flexible joints (not fixed) and launches the robot_state_publisher node. Well, I would like to control these joints through the GUI provided by joint_state_publisher, and for this, at first, I tried, after launching the URDF-based model, the following command: rosrun joint_state_publisher joint_state_publisher It did not work, and the error message was: Traceback (most recent call last): File "/opt/ros/kinetic/lib/joint_state_publisher/joint_state_publisher", line 432, in <module> jsp = JointStatePublisher() File "/opt/ros/kinetic/lib/joint_state_publisher/joint_state_publisher", line 45, in __init__ robot = xml.dom.minidom.parseString(description).getElementsByTagName('robot')[0] File "/usr/lib/python2.7/xml/dom/minidom.py", line 1928, in parseString return expatbuilder.parseString(string) File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 940, in parseString return builder.parseString(string) File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 223, in parseString parser.Parse(string, True) TypeError: Parse() argument 1 must be string or read-only buffer, not None Although I understand the error message, I do not know what to do to solve it. I do not know what this string should be. The important point of all of this is that I want to initialize the GUI interface to control the flexible joints; I do not know how to do it, and any help is welcome. The solution could also be via a launch file. ps. 1: The robot description is loaded in the parameter mkz/robot_description ps. 
2: The current launch file is: <?xml version="1.0"?> <launch> <arg name="world_file" default="$(find road_generation_gazebo)/worlds/custom.world"/> <!--launches URDF based model--> <include file="$(find vero_sim)/launch/vero_sim.launch"> <arg name="world_file" value="$(arg world_file)"/> </include> <include file="$(find vero_sim)/launch/rviz.launch"/> </launch> Originally posted by Randerson on ROS Answers with karma: 236 on 2017-12-06 Post score: 0 Original comments Comment by jayess on 2017-12-06: Can you please update your question with the entire error and entire launch file? What do you mean by the param robot_description is mapped to mkz/robot/description ? Comment by jayess on 2017-12-06: I believe @David Lu's answer is correct. It's not able to find the robot_description because it's currently located at mkz/robot/description/. Also, this updated launch file doesn't even show a joint_state_publisher so is this the correct/full launch file? Answer: Try rosrun joint_state_publisher joint_state_publisher robot_description:=mkz/robot/description Originally posted by David Lu with karma: 10932 on 2017-12-06 This answer was ACCEPTED on the original site Post score: 2
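Since the question notes a launch-file solution would also work, the same remap can be expressed there. A sketch (the remap target follows the accepted answer; `use_gui` is assumed to be the GUI switch of the Kinetic-era joint_state_publisher):

```xml
<node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
  <!-- point the node at the namespaced robot description -->
  <remap from="robot_description" to="mkz/robot/description"/>
  <!-- bring up the slider GUI for the flexible joints -->
  <param name="use_gui" value="true"/>
</node>
```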
{ "domain": "robotics.stackexchange", "id": 29545, "tags": "joint-state-publisher" }
Tegmark's level 1 multiverse - why does infinite set of universes imply existence of a universe almost identical to ours?
Question: In "Our Mathematical Universe" Tegmark claims that inflation theory implies the existence of a (countably, it seems to me) infinite set of universes. He says that from this follows the existence of a universe in which a person almost identical to me has lived the same life and observed the same things. Is this implication correct? If we were talking about exactly identical universes, then it seems to me that obtaining such a universe would be a probabilistic event of zero measure. And because there is only a countably infinite set of universes, we wouldn't get an exactly identical universe anywhere. But I guess we can talk about universes differing from our universe by no more than epsilon. Interpreting this probabilistically (with a lot of handwaving, of course), is this definitely an event of nonzero measure? If yes, then is Tegmark correct that inflation theory implies this? Answer: The argument is based on our observable universe having a finite, nonzero probability, since there is only a finite number of distinguishable configurations within a finite radius (as given by the Bekenstein bound). This is your epsilon. So the measure of our local universe configuration is nonzero, and hence in a sufficiently large and randomly initialized universe (whether spatially infinite or an eternal inflation structure) there will be an infinite number of instances.
{ "domain": "physics.stackexchange", "id": 56441, "tags": "cosmological-inflation, multiverse" }
Strength of bolts
Question: I was wondering if it'd be a good idea to attach a 10 kg load to the end of a movable steel rod by fastening it with just one M5 bolt. Looking at Misumi's datasheet, M5 (Class 10.9) has an allowable load of 111 kgf. 111 kgf is 10x more than my requirement for static load. However, it's not quite clear if 111 kgf applies to both the axial and radial directions. If the load was accelerated or decelerated in any direction, would an M5 bolt be able to support that? For a 10 kg load, does that mean that if the acceleration of the load stayed below 10*9.8 m/s^2 = 98 m/s^2 (by F=ma), then a single M5 bolt would still be OK? Perhaps if the load was allowed to swing as well, the centrifugal force wouldn't be too far off? Any guidance would be greatly appreciated. EDIT1: Here's a diagram: EDIT2: Here's a revised version: Answer: Simply put, this is a textbook example of what not to do. The geometry of the connection, a solid bar attached to a narrow bolt, invites stress concentration at the necking where the bolt enters the rod. The heavy disk will rattle and slowly wear the bolt threads out, allowing play at the connection. The play of the disk will cause intense back-and-forth moments, causing miniature fatigue cracks both on the bolt and on the end of the rod. This mechanism will collect some lumps of grinding dust inside the sleeve, acting like plastic constraints that cause pivoting of the bolt around them, leading to a sudden complete failure.
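For what it's worth, the static-load arithmetic in the question can be written out explicitly (datasheet and mass figures from the question; this says nothing about the stress-concentration and fatigue failure modes the answer warns about):

```python
g = 9.81                        # m/s^2, to convert kgf to newtons
allowable_kgf = 111             # quoted allowable load for M5 class 10.9
allowable_N = allowable_kgf * g # about 1089 N
mass = 10.0                     # kg, the attached load

# peak acceleration the static rating alone would permit (F = m*a)
a_max = allowable_N / mass      # about 108.9 m/s^2, roughly 11 g
assert a_max > 98               # the question's 10*9.8 m/s^2 budget fits
```

So the 98 m/s² figure stays inside the static rating, but as the answer explains, cyclic loading and wear, not the static limit, are what kill this joint.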
{ "domain": "engineering.stackexchange", "id": 3387, "tags": "mechanical-engineering, stresses, strength, bolting, fatigue" }
How to interface a "pull" library with a "push" library using callbacks
Question: I am using a driver that retrieves data from HW (Driver) and a display (Viewer) that will output data to the user. The user calls a trigger function to initiate the getting of data from the Driver. The Driver's constructor accepts a "publish" callback function to be called when data is ready. The Viewer receives a "get data" callback on construction, to be called when it wants data from my code. My code must integrate the "pull" for data from the viewer with the data "pushed" back from the Driver once I call its trigger function. Another block of code (that I do not have access to) initiates requests from the Viewer and acts upon the data it received back. This is represented by the main method in my example, below. Thanks in advance! #include <cstdint> #include <iostream> #include <functional> class Viewer { public: Viewer(std::function<void(int&)> get_data_cb) : m_get_data_cb(get_data_cb) { } void populate_view(int &data) { m_get_data_cb(data); } private: std::function<void(int&)> m_get_data_cb; }; class Driver { public: Driver(std::function<void(int&)> publish_data_cb) : m_publish_data_cb(publish_data_cb) { } void trigger_event() { // tells RTL to start collecting data ++rtl_int; // trigger_event does not call the IRQ - this is just for demo purposes event_IRQ_handler(); } void event_IRQ_handler() { // fires when RTL has finished collecting data m_publish_data_cb(rtl_int); } private: std::function<void(int&)> m_publish_data_cb; int rtl_int{ 0 }; }; /**********************************************************************/ // Code I have control of starts here volatile uint32_t publish_count{0}; int my_int{0}; // callback to send to Driver void my_publish_data_cb(int &i) { my_int = i; ++publish_count; } Driver my_driver = Driver{ my_publish_data_cb }; // callback to send to Viewer void my_get_data_cb(int &i) { uint32_t before = publish_count; my_driver.trigger_event(); // I need to give the driver's event_IRQ_handler() time // to get the data but is this 
the best way??? while (before == publish_count){}; i = my_int; //cout << "Viewer: i =" << i << endl; } Viewer my_viewer = Viewer{ my_get_data_cb }; // Code I have control of ends here /**********************************************************************/ int main() { my_viewer.populate_view(my_int); std::cout << "my_int = " << my_int << std::endl; my_viewer.populate_view(my_int); std::cout << "my_int = " << my_int << std::endl; my_viewer.populate_view(my_int); std::cout << "my_int = " << my_int << std::endl; return 0; } Answer: You are using a busy-loop to wait for publish_count to be incremented by the interrupt handler. In general, this is a bad way to wait for something to happen, as it keeps the processor busy, which wastes power and might increase its temperature unnecessarily. However, in an embedded device that is not battery-powered, it might be fine. If the device has a single CPU core, then some compilers will even detect what you are doing and insert a CPU instruction in the loop that will pause the processor until an interrupt occurs, thus avoiding wasting power. However, that will not work on multi-core processors. Since you mentioned the code is running on FreeRTOS, you can use its functions to suspend a task, and resume it from within the interrupt handler, for example using xTaskResumeFromISR(). Another option is to use a queue to send information from the ISR to a task. This decouples the task from the ISR, which might have some advantages. For example, in your code, my_get_data_cb() will always wait for the IRQ handler to generate a new value, but maybe there was already a new value available if the IRQ fired between two calls to my_get_data_cb(), in which case the latter could have returned it immediately. Furthermore, with queues you can send data larger than an int without having to worry about atomicity.
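The queue-based decoupling suggested above can be sketched in a few lines. Here is a Python analogue, using a thread in place of the ISR purely to illustrate the pattern; on FreeRTOS the corresponding calls would be xQueueSendFromISR() in the handler and xQueueReceive() in the task:

```python
import queue
import threading

data_q = queue.Queue()

def irq_handler(value):          # stands in for event_IRQ_handler()
    data_q.put(value)            # FreeRTOS analogue: xQueueSendFromISR()

def get_data_cb():               # stands in for my_get_data_cb()
    return data_q.get()          # blocks instead of spinning;
                                 # FreeRTOS analogue: xQueueReceive()

t = threading.Thread(target=irq_handler, args=(42,))
t.start()
print(get_data_cb())             # -> 42
t.join()
```

Because the queue buffers values, a result produced between two viewer requests is returned immediately on the next call rather than forcing another trigger/wait cycle.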
{ "domain": "codereview.stackexchange", "id": 45244, "tags": "c++, embedded" }
Lagrangian for a forced system
Question: Suppose that for a non-forced system Lagrange's equations are \begin{equation*} \left\{ \begin{array}{l} m\ddot{x}+\left( k_{1}+k_{2}\right) x-k_{2}y+2c_{1}\dot{x}=0 \\ m\ddot{y}-k_{2}x+\left( k_{2}+k_{3}\right) y+2c_{2}\dot{y}=0.% \end{array}% \right. \end{equation*} But if the system is subject to external forces, say $F_{x}, $ $F_{y}$, which would be the Lagrangian in this case? Can we add $F_{x},$ $F_{y}$ in the right-hand sides? Answer: The short answer is yes: when the system is not conservative because of dissipation or driving, one must include generalized forces on the right hand side of the usual EL equation: \begin{align} \frac{d}{dt}\frac{\partial L}{\partial \dot q_k}-\frac{\partial L}{\partial q_k}={\cal F}_k\, , \end{align} where ${\cal F}_k$ is the generalized force on the (generalized) coordinate $k$. You have already included damping so you need to include the driving term, again “by hand”. In the simplest example of a harmonic force on a 1d system, the equations of motion would then be of the form \begin{align} \ddot{x}+\frac{\omega_0}{Q}\dot{x}+\omega_0^2 x = A\cos(\omega t) \end{align} where for a force $F_0\cos(\omega t)$ and $A=F_0/m$.
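As a sanity check, the standard steady-state solution of that driven, damped equation can be verified symbolically (a quick check added here, not part of the original answer):

```python
import sympy as sp

t, w, w0, Q, A = sp.symbols('t w w0 Q A', positive=True)

# steady-state ansatz for  x'' + (w0/Q) x' + w0**2 x = A cos(w t)
D = (w0**2 - w**2)**2 + (w0*w/Q)**2
x = A*((w0**2 - w**2)*sp.cos(w*t) + (w0*w/Q)*sp.sin(w*t)) / D

# plug the ansatz back into the equation of motion
residual = sp.diff(x, t, 2) + (w0/Q)*sp.diff(x, t) + w0**2*x - A*sp.cos(w*t)
assert sp.simplify(residual) == 0
```

The same bookkeeping extends to the coupled two-mass system in the question: each generalized force $F_x$, $F_y$ simply appears on the right-hand side of its own equation.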
{ "domain": "physics.stackexchange", "id": 63194, "tags": "newtonian-mechanics, classical-mechanics, lagrangian-formalism" }
How can sound waves propagate through air?
Question: We know that sound waves propagate through air and cannot travel through vacuum, so the thing that makes this possible is the pressure of the air's molecules. My question is: how does that happen? I can't understand the concept. Answer: Sound waves propagate very similarly to how 'the wave' propagates at baseball stadiums: http://www.youtube.com/watch?v=H0K2dvB-7WY At some point something (your vocal cords, a piano string, a speaker) hits a bunch of air particles (atoms, molecules, it really doesn't matter). These particles in turn hit the particles next to them, those hit the ones next to them, and so on. Zero pressure is simply the absence of any particles, so nothing communicates the order to move. This is like 'the wave' in that everyone passes the motion of the wave on to the person standing next to them, and if there is no one standing next to you, the wave ends with you. Hearing a sound is the last bunch of air particles next to your eardrum getting the instruction to vibrate, which in turn vibrates your eardrum, and your brain turns this response into the perception of sound.
{ "domain": "physics.stackexchange", "id": 37649, "tags": "waves, pressure, acoustics, air" }
Find the autocorrelation function of signal $x(t) = u(t) - u(t-1)$
Question: I have used the energy-type signal autocorrelation function: $$\mathcal{R}_{xx}(\tau)=\int_{-\infty}^{\infty}x(t)x^*(t+\tau)dt$$ I have rewritten the equation as: $$\begin{align} \int_{-\infty}^{\infty}\big[u(t)-u(t-1)\big]\big[u(t+\tau )-u(t+\tau-1)\big]dt \\ \end{align}$$ How do I simplify this equation? Answer: Don't make this more complicated than it really is. $x(t)$ is non-zero in the interval $t\in[0,1]$, and $x(t+\tau)$ is non-zero in the interval $t\in[-\tau,1-\tau]$. The integrand is non-zero only if the two functions overlap. There is no overlap for $1-\tau<0$ and $-\tau>1$, i.e., for $|\tau|>1$. So for $|\tau|>1$ the autocorrelation is zero. For $-1<\tau<1$, we have, according to above considerations, the following integral: $$\mathcal{R}_{xx}(\tau)=\int_{\max\{0,-\tau\}}^{\min\{1,1-\tau\}}dt=\begin{cases}\displaystyle\int_{0}^{1-\tau}dt,&0<\tau<1\\\displaystyle\int_{-\tau}^{1}dt,&-1<\tau<0\end{cases}$$
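The two integrals above evaluate to $1-\tau$ for $0<\tau<1$ and $1+\tau$ for $-1<\tau<0$, i.e. the familiar triangle $\mathcal{R}_{xx}(\tau)=1-|\tau|$ on $[-1,1]$ and zero elsewhere. A discretized check (a numerical sketch, not part of the original answer):

```python
import numpy as np

dt = 1e-3
t = np.arange(-2, 2, dt)
x = ((t >= 0) & (t < 1)).astype(float)   # x(t) = u(t) - u(t-1)

# Riemann-sum approximation of R(tau) = integral of x(t) x(t+tau) dt
R = np.correlate(x, x, mode="full") * dt
taus = (np.arange(R.size) - (x.size - 1)) * dt

expected = np.maximum(1.0 - np.abs(taus), 0.0)  # triangle, zero for |tau| > 1
assert np.max(np.abs(R - expected)) < 1e-2
```

This triangle is exactly what the graphical "slide one rectangle past the other and measure the overlap" picture predicts.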
{ "domain": "dsp.stackexchange", "id": 9857, "tags": "continuous-signals, autocorrelation, homework" }
Hamiltonian formulation of general relativity
Question: Why is it not possible to find a Hamiltonian formulation of general relativity as easily as in classical mechanics? There was a remark to this in my lecture but no real explanation as to why this is. What stops us from creating a Hamiltonian formulation of GR? Answer: The short answer to your question is: nothing. Nothing stops us from deriving an Hamiltonian for GR. The Einstein-Hilbert Hamiltonian (i.e. the ADM Hamiltonian) for GR, modulo boundary terms, is the following: $$ H_{EH}=\int d^{3}x\ \ \bigg\{\alpha\,\bigg[\,\frac{2\kappa}{\sqrt{q}}\ G(\pi,\pi)-\frac{\sqrt{q}}{2\kappa}\ R[q]\bigg]-2<\beta,\text{div}\pi>\bigg\} $$ $H_{EH}$ is a functional of the variables $(\alpha, \beta^{i}, q_{ij}, \pi^{ij})$ with the following definitions: $\kappa=8\pi G$, $G$ is Newton's constant, $q_{ij}$ is the metric of 3-dimensional space (not of $4$-dimensional spacetime!), $R[q]$ is its Ricci scalar, the divergence and the product $<,>$ are to be understood with respect to the three-dimensional metric, $\pi^{ij}$ is the momentum conjugate to $q_{ij}$, $\alpha$ is a function, $\beta^{i}$ is a three-dimensional vector and $$ G_{ijkl}=\frac{1}{2}(q_{ik}q_{jl}+q_{il}q_{jk}-q_{ij}q_{kl}) $$ is called the Wheeler-De Witt metric and is itself a function of $q_{ij}$, as you can see. The relation between $\alpha, \beta^{i}$ and $q_{ij}$ and the usual $4$-dimensional metric $g_{\mu\nu}$ is given by the following equations: $$ g_{00}=-\alpha^{2}+q_{ij}\beta^{i}\beta^{j}\qquad g_{0i}=q_{ij}\beta^{j}\qquad g_{ij}=q_{ij} $$ So what's the problem with the Hamiltonian formulation of GR? The answer can be easily given by considering the following example. As you may know, the Lagrangian for a free special-relativistic particle is $$ L=-m\sqrt{-\eta_{\mu\nu}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}} $$ where $x^{\mu}$ is the position $4$-vector of the particle, $s$ is a real parameter and $\eta_{\mu\nu}$ is the Minkowski metric. 
Now try to define a conjugate momentum to each of the $x^{\mu}$'s: $$ p_{\mu}=\frac{\partial L}{\partial \dot{x}^{\mu}}=m\frac{\eta_{\mu\nu}\dot{x}^{\nu}}{\sqrt{-\eta_{\sigma\tau}\dot{x}^{\sigma}\dot{x}^{\tau}}} $$ where I set $\dot{x}^{\mu}=dx^{\mu}/ds$. If we were to use the ordinary definition for the Hamiltonian, i.e. $H=p_{\mu}\dot{x}^{\mu}-L$, we would find that $H=0$, since $$ p_{\mu}\dot{x}^{\mu}=m\frac{\eta_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}}{\sqrt{-\eta_{\sigma\tau}\dot{x}^{\sigma}\dot{x}^{\tau}}}=-m\sqrt{-\eta_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}}=L $$ It can be shown that this happens because the action for the free special-relativistic particle is invariant under the transformation of the parameter $s$ into another arbitrary parameter $s'$. This is called a diffeomorphism invariance of the parameter space, and in our specific case it can be shown to lead to the following constraint on the phase space: $$ p^{2}=\eta^{\mu\nu}p_{\mu}p_{\nu}=-m^{2} $$ which you can compute by yourself. The existence of this constraint spoils the possibility of differentiating the Hamiltonian with respect to the momentum variables, since these are explicitly not independent from one another. However, one can exploit the independence of the action from the parameter $s$ to fix it to be equal to the time variable $t$, i.e. we can set $s=t=x^{0}$. With this choice one has $$ L=-m\sqrt{1-\delta_{ij}\,\dot{x}^{i}\dot{x}^{j}} $$ where now the dot is a derivative with respect to the parameter $t$ and the $x^{0}$ component of the $4$-position vector does not enter the Lagrangian any longer (in the above $t$ is just a parameter!). One can then define a modified Hamiltonian $H'=p_{i}\dot{x}^{i}-L$, which turns out to be equal to $$ H'=\frac{m}{\sqrt{1-\delta_{ij}\,\dot{x}^{i}\dot{x}^{j}}}=E\neq 0 $$ where $E$ is the relativistic energy of the free particle. 
With respect to this Hamiltonian the dynamics of the particle is well-defined, with Hamilton's equations as a starting point. How does this apply to GR? The action of GR, namely $$ S_{EH}=\frac{1}{16\pi G}\ \int d^{4}x\ \sqrt{|g|}\ R[g] $$ is again diffeomorphism-invariant, in the sense that if we arbitrarily change the coordinates $x^{\mu}$ (which act as parameters for the gravitational field) with respect to which the above quantities are defined, the action does not change. This can be shown to lead, again, to the vanishing of the ordinary Hamiltonian $H=\pi_{g}\,\dot{g}-L$ (exactly in the same fashion as before) so that a modified Hamiltonian must be used instead. The definition is an order of magnitude more involved than the one I used above for the special-relativistic free particle. The rationale is the following: one fixes a time coordinate $t$ with respect to which the metric $g_{\mu\nu}$ takes the form given above, $$ g_{00}=-\alpha^{2}+q_{ij}\beta^{i}\beta^{j}\qquad g_{0i}=q_{ij}\beta^{j}\qquad g_{ij}=q_{ij} $$ where $\alpha$ is a function and $\beta$ is a three-dimensional vector (called the lapse function and shift vector). One then expresses all the quantities of the theory in terms of $\alpha$, $\beta$, $q$ and their derivatives and defines a modified Hamiltonian $H'$ as $$ H'=\int d^{3}x\ \ \pi_{\alpha}\dot{\alpha}+\pi_{\beta}\dot{\beta}+\pi_{q}\,\dot{q}-L $$ It turns out that $\dot{\alpha}$ and $\dot{\beta}$ are not present in the Lagrangian of GR, so that $\pi_{\alpha}=\pi_{\beta}=0$ and $$ H'=\int d^{3}x\ \ \pi_{q}\,\dot{q}-L $$ With this definition, $H'=H_{EH}$, with $H_{EH}$ as given above. Hamilton's equations now need to be slightly modified. 
Since the momenta conjugate to $\alpha$ and $\beta$ vanish, we must impose that $$ 0=\dot{\pi}_{\alpha}=\frac{\delta H_{EH}}{\delta \alpha}\qquad\qquad 0=\dot{\pi}_{\beta}=\frac{\delta H_{EH}}{\delta \beta} $$ and this gives rise to 4 more equations apart from the usual ones (albeit with three-dimensional rather than 4-dimensional indices), namely $$ \dot{q}_{ij}=\frac{\delta H_{EH}}{\delta \pi^{ij}}\qquad\qquad \dot{\pi}^{ij}=-\frac{\delta H_{EH}}{\delta q_{ij}} $$ In summary, the problem with the Hamiltonian formulation of GR is two-fold: first of all in GR there is no preferred notion of time, and one needs time in order to define a meaningful Hamiltonian; second of all the (modified) Hamiltonian for GR is constrained, meaning that there are constraints on the phase space of the theory. The two problems are actually related to one another: if there is no preferred notion of time, then there must be some form of parameter-independence of the action; this in turn can be shown to lead systematically to the existence of constraints (as we have seen through examples). The subject of Hamiltonian dynamics itself is fairly complex due to the appearance of constraints. The "modified" Hamiltonians I defined above are actually standardly defined in the general setting of Hamiltonian field theory. As I said, in order to formulate the theory one must ALWAYS first of all choose a notion of time, with respect to which then one can evolve the degrees of freedom of the system. The standard definition of $H$, however, is pretty involved (this is the reason why I'm not giving it). It reduces to the usual one when a time can be chosen non-arbitrarily and/or if there are no constraints on the phase space of the theory. If you are interested in the subject you should have a look at Gotay and Marsden's series of articles "Momentum Maps and Classical Fields" (they require a pretty solid knowledge of differential geometry). 
The application of the theory to GR, without the mathematical nuances associated with the so-called fiber bundle formulation, is just known as the ADM formulation of General Relativity.
{ "domain": "physics.stackexchange", "id": 54977, "tags": "general-relativity, hamiltonian-formalism" }
rqt_plot not showing anything
Question: rqt_plot is not plotting anything even though there is data being published on the topic "mytopic". When I do rostopic echo /mytopic I get continuous output with parameters changing, given as follows (just posted a snippet out of the big output I am getting): id: 3 vs: 4 - id: 10 vs: 1 - id: 13 vs: 6 - id: 18 vs: 1 - id: 19 vs: 1 - id: 21 vs: 8 - id: 27 vs: 1 - id: 31 vs: 1 However when I do rosrun rqt_plot rqt_plot /mytopic nothing is output. Can somebody help me with this? Originally posted by Gudjohnson on ROS Answers with karma: 100 on 2013-07-25 Post score: 2 Original comments Comment by 130s on 2013-08-07: What data types are the members of your topic? Are they numeric? Answer: Try: $ rqt_plot /%TOPIC_NAME%/%FIELD_NAME% In your case: $ rqt_plot /mytopic/id /mytopic/vs Hope this works. Originally posted by 130s with karma: 10937 on 2013-07-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Gudjohnson on 2013-07-25: unfortunately this does not work. I do not understand why it does not work. For rosrun rqt_graph rqt_graph I get a graph in which I have one node which is publishing and subscribing to my /mytopic. rqt_plot should output the data being published on a topic but it does not show anything although "rostopic echo /mytopic" shows that data is being published on mytopic. Comment by 130s on 2013-08-07: I'm not sure either. Try rosdep install rqt_plot (this installs missing dependency if any) and tell us the result.
{ "domain": "robotics.stackexchange", "id": 15046, "tags": "ros, topic, rqt, ros-groovy, rqt-plot" }
Add new number to sorted array of numbers
Question: The task is to add a new number in the array of numbers sorted in ascending order. So let's say the array is: 20,40,50,60 And the number to be inserted is 24, the new array will be [ 20, 24, 40, 50, 60 ] The code for the above problem statement is as follows, var myArray = [], theNum = undefined; // below function is used to capture the // commandline parameters for array and the // number to be inserted (function(){ process.argv.forEach(function (val, index, array) { var idx = 0, ar = undefined; try{ // get the commandline argument for // array values if(index === 2){ myArray = myArray.concat(val.split(",").map(function(num){ return parseInt(num); })); } // third index is the number to be searched. if(index === 3){ theNum = parseInt(val) } }catch(e){ console.log(e) } }); })(); console.log(" INSERT NUMBER ",theNum," in array ",myArray); insertNum(); // main method to start function insertNum(){ var binarySearch = binarySearch; // methods var doInsertion = doInsertion; // methods // binary search the index position where the number can be inserted var index = binarySearch(myArray,0,myArray.length); console.log("Insert Number in position ",index); // insert new number at the searched index // and move the following numbers to the immediate // next index position. 
Its a recursive call doInsertion(index,theNum); console.log(" Array after new number insertion ",myArray); // binary search for index position, // where the new number be inserted function binarySearch(array,lowIndex,highIndex){ console.log("LOW INDEX ",lowIndex," HIGH INDEX ",highIndex); var totalLenght = highIndex - lowIndex; var midIndex = parseInt(totalLenght/2); midIndex += lowIndex; if(lowIndex === highIndex){ return lowIndex; } if(array[midIndex] === theNum){ return midIndex; }else if(array[midIndex] < theNum){ lowIndex = midIndex + 1; }else{ highIndex = midIndex; } return binarySearch(array,lowIndex,highIndex); }// end of binary Search // insert new number at the searched index // and move the following numbers to the immediate // next index position. Its a recursive call function doInsertion(index,numToInsert){ var temp = (index >= myArray.length) ? numToInsert : myArray[index]; // once index position is more or equal to total array length, // insert as new number in last position if(index >= myArray.length){ myArray.push(temp); }else{ myArray[index] = numToInsert; index++; // move the numbers ahead to next position // as if pushing to next position doInsertion(index,temp); } } // end of doInsertion } // end of insertNum In the above program I have used binary search to find the index position. and then I insert the new number at that position, and the rest I move forward. 
The output of the program is as follows: E:\RahulShivsharan\MyPractise\DesignPatternsInJavaScript>node array02.js 1,4,6,8,10,13,18,23 6 INSERT NUMBER 6 in array [ 1, 4, 6, 8, 10, 13, 18, 23 ] LOW INDEX 0 HIGH INDEX 8 LOW INDEX 0 HIGH INDEX 4 Insert Number in position 2 Array after new number insertion [ 1, 4, 6, 6, 8, 10, 13, 18, 23 ] E:\RahulShivsharan\MyPractise\DesignPatternsInJavaScript>node array02.js 1,4,6,8,10,13,18,23 5 INSERT NUMBER 5 in array [ 1, 4, 6, 8, 10, 13, 18, 23 ] LOW INDEX 0 HIGH INDEX 8 LOW INDEX 0 HIGH INDEX 4 LOW INDEX 0 HIGH INDEX 2 LOW INDEX 2 HIGH INDEX 2 Insert Number in position 2 Array after new number insertion [ 1, 4, 5, 6, 8, 10, 13, 18, 23 ] E:\RahulShivsharan\MyPractise\DesignPatternsInJavaScript>node array02.js 1,4,6,8,10,13,18,23 14 INSERT NUMBER 14 in array [ 1, 4, 6, 8, 10, 13, 18, 23 ] LOW INDEX 0 HIGH INDEX 8 LOW INDEX 5 HIGH INDEX 8 LOW INDEX 5 HIGH INDEX 6 LOW INDEX 6 HIGH INDEX 6 Insert Number in position 6 Array after new number insertion [ 1, 4, 6, 8, 10, 13, 14, 18, 23 ] E:\RahulShivsharan\MyPractise\DesignPatternsInJavaScript>node array02.js 20,40,50,60 24 INSERT NUMBER 24 in array [ 20, 40, 50, 60 ] LOW INDEX 0 HIGH INDEX 4 LOW INDEX 0 HIGH INDEX 2 LOW INDEX 0 HIGH INDEX 1 LOW INDEX 1 HIGH INDEX 1 Insert Number in position 1 Array after new number insertion [ 20, 24, 40, 50, 60 ] E:\RahulShivsharan\MyPractise\DesignPatternsInJavaScript> Please do a code review; and please let me know how the code could have been implemented better. Also let me know if my code has any drawbacks, and if so, how it can be improved. Answer: Overall I think it looks pretty good as it is. I like the use of functional programming (i.e. using Array.map()) and the recursive functions. There are a couple minor changes I would make: parseInt radix Pass the radix (usually 10 for base 10 numbers), i.e. the 2nd parameter of parseInt(), to ensure that input with leading zeros or other numbers will not be parsed contrary to your expectations (e.g. 
this scenario): // get the commandline argument for // array values if(index === 2){ myArray = myArray.concat(val.split(",").map(function(num){ return parseInt(num, 10); })); } // third index is the number to be searched. if(index === 3){ theNum = parseInt(val, 10) } ... and similarly in the other spot where parseInt() is called. Separate functions I would move the nested functions (i.e. binarySearch and doInsertion) outside of insertNum, unless you like keeping it all tied up together? function insertNum(){ //implementation } function binarySearch(){ //implementation } function doInsertion() { //implementation } And what is the point of these two lines? I was able to remove them and still have the code function the same... var binarySearch = binarySearch; // methods var doInsertion = doInsertion; // methods Typo There appears to be a typo for the variable totalLenght - should it not be totalLength? For the sake of readability, I would correct that: var totalLength = highIndex - lowIndex; Small Consolidation One place where 1 line of code could be reduced is in this block of doInsertion(): else{ myArray[index] = numToInsert; index++; // move the numbers ahead to next position // as if pushing to next position doInsertion(index,temp); } The increment of index could be moved up to the array assignment: myArray[index++] = numToInsert; Alternatively, the prefix increment operator could be used on the recursive call: doInsertion(++index,temp);
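One further possibility worth mentioning (this is a sketch of the same algorithm, not code from the question or the review): the recursive shifting in doInsertion can be replaced with a single Array.prototype.splice call after an iterative binary search, which keeps the whole operation in one short function.

```javascript
// Sketch: binary-search the insertion index, then let splice()
// do the shift-and-insert in one call.
function insertSorted(arr, num) {
    var lo = 0, hi = arr.length;
    while (lo < hi) {
        var mid = lo + Math.floor((hi - lo) / 2);
        if (arr[mid] < num) {
            lo = mid + 1;   // insertion point is to the right of mid
        } else {
            hi = mid;       // insertion point is at mid or to its left
        }
    }
    arr.splice(lo, 0, num); // insert num at index lo, shifting the rest
    return arr;
}

console.log(insertSorted([20, 40, 50, 60], 24)); // [ 20, 24, 40, 50, 60 ]
```

The loop invariant (the answer lies in [lo, hi]) also removes the recursion-depth concern for large inputs.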
{ "domain": "codereview.stackexchange", "id": 26566, "tags": "javascript, algorithm, array, node.js, binary-search" }
Random string generator that 'guesses' user inputted word
Question: This takes a user inputted string, then tries to randomly 'guess' it. The problem is that whenever the input's length is 6 characters or more, it takes an incredibly long time to process it. Is there a more efficient way of doing this? import random as rand import string def main(word): len_word = len(word) ntries = 0 while True: letters = [] for i in range(len_word): letters.append(rand.choice(string.ascii_lowercase)) ntries += 1 b = ''.join(letters) if b==word: print('Mission accomplished. It took ' + str(ntries) + ' tries.') break if __name__ == '__main__': main(input('input a word: ')) Note: I have tried making the program guess the letters one by one, and then check if said letter was right, but it seemed to make the problem even worse. Answer: Docstrings Python documentation strings (or docstrings) provide a convenient way of associating documentation with Python modules, functions, classes, and methods. An object's docstring is defined by including a string constant as the first statement in the object's definition. def guess_word(): """Do calculations and return correct guessed word.""" Style check PEP 8 (https://www.python.org/dev/peps/pep-0008/) is the official Python style guide; here are some comments: import string def main(word): Surround top-level function and class definitions with two blank lines. ntries = 0 if b==word: Use descriptive variable names; it improves readability and general understanding of the code. Always surround these binary operators with a single space on either side: assignment (=), augmented assignment (+=, -= etc.), comparisons (==, <, >, !=, <>, <=, >=, in, not in, is, is not), Booleans (and, or, not). Except when = is used to set a function parameter. 
The code len_word len_word = len(word) ntries = 0 while True: letters = [] for i in range(len_word): the variable len_word is unnecessary; it can be expressed: while True: letters = [] for i in range(len(word)): Random brute forcing: a sample of 6 out of 26 letters has 230230 possible combinations (where order is unimportant, and each combination has n! re-orderings, where n = combination length, i.e. the number of letters in the word). What is very obvious is that with such algorithms, it is very likely that zombie dinosaurs start showing up before the code guesses the correct word (10 years to be able to guess a 10 letter word at 500,000 secret words/passwords examined per second). So if what you are trying to build is a password cracker, the algorithms that are better than brute forcing are beyond the scope of this review. What your mini-program does is try n-character random choices and compare them to the secret word; the number of trials is variable: input a word: red Mission accomplished. It took 8428 tries. input a word: red Mission accomplished. It took 16894 tries. And since it's a random choice (not entirely random, because it's based on a so-called 'pseudo-random' algorithm: you would generate the same number of trials for a fixed random.seed(n)), it tries different possibilities each run, so with some bad luck, with some word lengths it might keep running until aliens start making documentaries about us. Assuming we can check each character separately: (** This is not the case for password crackers in the real world, which must guess the exact word). # Note: I have tried making the program guess the letters one by one, and then # check if said letter was right, but it seemed to make the problem even worse. 
and since you asked about separate letter comparison: import string def guess_word(): """Match characters with user input and return number of iterations, correct guess.""" possible = string.ascii_lowercase + string.ascii_uppercase + string.digits + string.punctuation + ' ' secret_word = input('Enter word: ') yield len(possible) * len(secret_word) for index in range(len(secret_word)): for char in possible: if secret_word[index] == char: yield char if __name__ == '__main__': word = list(guess_word()) print(f"Total: {word[0]} iterations.\nWord: {''.join(word[1:])}")
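To put a number on the brute-force cost discussed above (a rough estimate under the stated assumptions, lowercase-only alphabet; the 500,000 guesses/second figure is the illustrative rate from the review, not a measurement): each guess is an independent uniform draw from $26^n$ equally likely strings, so the expected number of tries grows as $26^n$.

```python
def expected_tries(word_length, alphabet_size=26):
    """Expected number of uniform random guesses (geometric distribution)."""
    return alphabet_size ** word_length

for n in (3, 6, 10):
    tries = expected_tries(n)
    # assuming ~500,000 guesses per second, as in the estimate above
    print(n, tries, f"~{tries / 500_000:.3g} s on average")
```

Already at 6 letters the average wait is measured in minutes, and at 10 letters in years, which matches the slowdown observed in the question.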
{ "domain": "codereview.stackexchange", "id": 35748, "tags": "python, strings, random" }
Why is the statement "black holes make up 1% of the mass of the Milky Way Galaxy" unknown?
Question: a) We cannot detect all black holes and therefore don't know the percentage of the galaxy's mass they make up. b) We do not know the mass of the Milky Way's central black hole. c) According to our understanding of stellar evolution, black holes should make up a much lower percentage of the galaxy's mass. d) According to our understanding of stellar evolution, black holes should make up a much higher percentage of the galaxy's mass. In my own understanding, I feel like it should be a). Since the black holes are forming all the time, I think it is hard to detect all of them. But I am not quite sure, could someone give any insight? Answer: (a) We can't detect ALL of anything, but we can certainly make estimates of their integral properties. For example, we estimate that there are about 100 billion stars in the galaxy, but we can observe only a tiny fraction of these. We have also observed a tiny fraction of the stellar black holes in our galaxy - those few examples that have revealed themselves through their binary companions. (b) Yes we do. It is just over 4 million solar masses and thus a negligible fraction of the Galaxy's total mass. (c) and (d) It is not just stellar evolution that is involved, it is the distribution of stellar birth masses (the initial mass function) that needs to be known, possibly as a function of time. The stellar evolution part has a major uncertainty in that although it is reasonably certain that all stars above about 10 solar masses end their lives in supernovae, it is not certain what fraction will leave black hole remnants. This is probably a strong function of mass - higher masses favour a black hole - and of progenitor metallicity: low metallicity stars lose less mass during their lives and are more likely to directly collapse to a black hole - see Heger et al. (2003). You can do a back of the envelope calculation. Suppose the IMF has the Salpeter form $N(M)\propto M^{-2.35}$ between 0.1 and 100 solar masses. 
Further assume that: because a 1 solar mass star has a lifetime of 10 billion years, the age of the Galaxy is 10 billion years and the stellar lifetime is a very strong inverse function of mass $(\propto M^{-2.5})$, the 100 billion stars that we deduce exist in our Galaxy (by counting up the local stars and measuring stellar densities elsewhere and extrapolating) are predominantly all the 0.1 to 1 solar mass stars that have ever been born. This allows us to then estimate how many stars have been born in any mass range - even if they have long-since died. So making the above assumptions, I will make the further assumption that all stars from 20-100 solar masses die and leave a black hole remnant. Then, if my maths is correct, there are about $10^{11}/1400 \sim 7\times 10^{7}$ black holes in our Galaxy. That is, there is 1 black hole for every 1400 stars of 0.1-1 solar mass. The remaining parts of the puzzle are what is the mass of a typical black hole and what is the mass of the Galaxy? A good number for the former is about 10 solar masses, since the black hole binaries appear to cluster at this value or a little lower (Ozel et al. 2010). The latter is still a topic of investigation. It appears to be an order of magnitude bigger than all the stars, gas and dust due to some form of dark matter. Let's use the number $7\times 10^{11}$ solar masses (Eadie & Harris 2016). So my final number is that black holes form $(7\times 10^{7}\times 10)/(7\times 10^{11}) = 10^{-3}$, i.e. 0.1 percent of the Milky Way mass. You could push this number up by a factor of a few by assuming that the IMF contained more higher mass stars in the past, or that the threshold mass for black hole production was lower in the past for low metallicity stars. Both seem theoretically possible and are active topics of investigation.
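The back-of-the-envelope numbers above are easy to reproduce. Here is a short sketch using the same stated assumptions (Salpeter slope 2.35, $10^{11}$ stars of 0.1-1 solar masses ever born, black holes from 20-100 solar mass progenitors, 10 solar masses per remnant, $7\times 10^{11}$ solar masses for the Galaxy):

```python
def salpeter_integral(m_lo, m_hi, alpha=2.35):
    """Unnormalised integral of the IMF M**-alpha from m_lo to m_hi."""
    p = 1.0 - alpha  # exponent after integrating M**-alpha
    return (m_hi**p - m_lo**p) / p

low_mass_stars = salpeter_integral(0.1, 1.0)     # 0.1-1 Msun stars ever born
bh_progenitors = salpeter_integral(20.0, 100.0)  # stars that leave black holes

ratio = low_mass_stars / bh_progenitors   # low-mass stars per black hole
n_bh = 1e11 / ratio                       # number of black holes
mass_fraction = n_bh * 10.0 / 7e11        # fraction of the Galaxy's mass

print(f"1 black hole per ~{ratio:.0f} low-mass stars")  # ~1400
print(f"~{n_bh:.1e} black holes")                       # ~7e7
print(f"mass fraction ~{mass_fraction:.1e}")            # ~1e-3, i.e. 0.1%
```

The three printed numbers land on the answer's 1-in-1400 ratio, $\sim 7\times 10^{7}$ black holes and the 0.1 percent mass fraction.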
{ "domain": "astronomy.stackexchange", "id": 1968, "tags": "black-hole, galaxy, milky-way" }
gazebosim.org down
Question: It seems the website gazebosim.org is down. Are people at OSRF aware? Originally posted by demmeln on ROS Answers with karma: 4306 on 2014-07-17 Post score: 0 Answer: It seems to be back up. Originally posted by demmeln with karma: 4306 on 2014-07-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 18664, "tags": "gazebo" }
How to construct the "Inversion About the Mean" operator?
Question: It seems like it should be simple, based on how Nielsen and Chuang talk about it, but I cannot seem to correctly implement the Inversion About the Mean operator ($2|\psi\rangle \langle\psi| - \mathcal{I}$) that is used in the Grover search algorithm, especially without using any ancilla bits. I thought about performing a NOT operation on all the working qubits, then performing a controlled-NOT on a separate toggle qubit with the control being all the working qubits, then performing a controlled phase flip with control of the toggle bit, and finally flipping the phase of all the states. I'm not sure how I'd actually implement the controlled phase flipping, though, since, I believe, phase flipping one or all of the bits would not produce the desired effect. Does anyone know how I can construct this? I am using Q#, by the way, if you'd like to answer in code. Answer: First, let's represent operation $2|\psi\rangle \langle\psi| - \mathcal{I}$ as $H^{\otimes n}(2|0\rangle \langle0| - \mathcal{I})H^{\otimes n}$, as Nielsen and Chuang do. Doing $H^{\otimes n}$ is easy - it's just ApplyToEach(H, register). $2|0\rangle \langle0| - \mathcal{I}$ flips the phase of all computational basis states except $|0...0\rangle$. Let's do instead $\mathcal{I} - 2|0\rangle \langle0|$, flipping the phase of only $|0...0\rangle$ (it introduces a global phase of -1 which in this case I think can be ignored). To flip the phase of only $|0...0\rangle$: flip the state of all qubits using ApplyToEach(X, register). Now we need to flip the phase of only $|1...1\rangle$ state. do a controlled-Z gate on one of the qubits (for example, the last one), using the rest as control. This can be done using Controlled functor: (Controlled Z)(Most(register), Tail(register)). Tail returns the last element of the array, and Most returns all elements except the last one. flip the state of all qubits again to return them to the original state.
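A quick way to convince yourself that this construction does what its name says (a plain-Python numerical sketch, not Q#): acting on a vector of amplitudes, $2|\psi\rangle\langle\psi| - \mathcal{I}$ sends each amplitude $a_i$ to $2\bar{a} - a_i$, i.e. it reflects every amplitude about the mean.

```python
def diffusion(amps):
    """Inversion about the mean: a_i -> 2*mean(a) - a_i."""
    mean = sum(amps) / len(amps)
    return [2 * mean - a for a in amps]

def oracle(amps, marked):
    """Phase-flip the marked basis state."""
    return [-a if i == marked else a for i, a in enumerate(amps)]

n_states = 4                               # two qubits
amps = [1 / n_states**0.5] * n_states      # uniform superposition
amps = diffusion(oracle(amps, marked=2))   # one Grover iteration
print(amps)  # all amplitude on state 2: [0.0, 0.0, 1.0, 0.0]
```

For $N=4$ a single iteration already concentrates all the amplitude on the marked state, which makes this small case a convenient sanity check for any gate-level implementation.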
{ "domain": "quantumcomputing.stackexchange", "id": 863, "tags": "programming, q#" }
Error in experimental verification of Newton's second law
Question: I am doing a school lab to prove/derive Newton's Second Law from observation data that includes the force, mass, and acceleration. We are told to plot these variables accordingly. I already know $F=ma$; however, there appear to be some errors in my lab. Error #1: The measured mass of a moving object is $0.503\ \mathrm{kg}$. Knowing $F=ma$, the mass should be the slope of the $F$ vs $a$ graph. However, the slope of the trendline is $0.751$, which is quite far off. Error #2: This graph is $F$ vs $ma$. Knowing that $F=ma$, the slope of this graph should be $1$. Again, it's a bit off. Error #3: This graph is $a$ vs $m^{-1}$. The slope unit is $\frac{a}{m^{-1}} = ma = F$. The force measured during the experiment is $0.196\ \mathrm{N}$, but the slope shows $0.148$, a little better but still off. So in conclusion, is my lab data correct? Does it mean that Newton's Second Law, $F=ma$, is not always perfect (due to air resistance, friction, or other common factors in physics that may affect the final calculation)? Answer: So in conclusion, is my lab data correct? Does it mean that Newton's Second Law, F=ma is not always perfect (due to air resistance, friction, or other common factors in physics that may affect the final calculation)? It depends on what you mean by "perfect." Newton's second law is used in the engineering of any moving device: rockets, Mars rovers, submarines, airplanes, race cars, electron beams in microwave ovens, and uncountably many others. The engineers who use $F=ma$ expect it to give precise and accurate results in all cases. So, you should expect that equation to give exact results. Even though one experiment can disprove a scientific idea like Newton's Laws, there have been so many tests of Newton's Laws over centuries that it would be a better use of time to examine your experiment to find the source of the discrepancy. 
Now, when you say that Newton's second law is "not always perfect" due to factors like air resistance and friction, you are saying that you want to ignore other forces. The full statement of Newton's second law is $$\Sigma F = ma$$ where the sigma ($\Sigma$) means sum. So, if you add together all the forces on your object, you will get exactly the mass times the acceleration. To put it another way, $$F_{applied} + F_{friction} + F_{air\ resistance} + F_{something\ else} + \cdots = ma$$ How large these other forces are depends on your specific experiment. In your experiment, force, acceleration, and mass are all measured quantities and require careful analysis of your measurement techniques. As an example, I'm going to assume your experiment was the following: A mass is mounted on an air track (like a one-dimensional air hockey table) to significantly reduce friction. The force is provided by a hanging weight. The hanging weight is attached to the mass on the air track by a string that goes over a lightweight pulley. The acceleration is measured by timing how long the object takes to move a set distance. Assuming constant acceleration, the acceleration is $a=2x/t^2$. Here are some common measurement errors from my lab courses: Mass For each mass, did you weigh just the object itself, or the object together with the air-track cart? Both have to be accelerated by the force. Acceleration How is the distance traveled defined? If you measure the starting point and the ending point of the cart on the track, did you measure the same location on the cart? If you measure when the front edge of the cart passes the start position and when the back edge passes the end position, the distance between the start and end is not the distance traveled. You have to measure the position of the same point on the cart at both locations. How accurately can you start and stop the stopwatch that times the motion? 
Is there anything that could cause the stopwatch to start or stop at the wrong time? Force When you measure the mass of the hanging weights, are you measuring just the weights or the weights and the weight holder? It is the total hanging mass that provides the accelerating force. Is the hanging mass swinging when it is let go? If the weight swings, then the force on the object will not be constant. These are the kinds of questions you have to ask yourself whenever you run an experiment. In my own work, one of two things always happens: If I run an experiment and get a result that does not match with well-known equations, my first thought is, "What did I do wrong?" If I run an experiment and get a result that exactly matches well-known equations, my first thought is, "What did I do wrong?" These are not contradictory. Even professional scientists do this. An important lesson to learn when doing science is that you should always doubt your results and look for errors. Only after this search and examination of your methods do you write your report. This is why even professional scientific papers have a Methods section to describe exactly what they did so readers of that paper can understand the measurement and have confidence that the measurement was done correctly.
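To see how much the stopwatch alone can matter (illustrative numbers, not the lab's actual data): with the cart starting from rest, $a = 2x/t^2$, so a fixed reaction-time error $\Delta t$ shifts the computed acceleration by roughly $2\,\Delta t/t$ in relative terms.

```python
def accel(distance_m, time_s):
    """a = 2x / t**2, assuming constant acceleration from rest."""
    return 2.0 * distance_m / time_s**2

x = 0.5           # metres travelled (made-up value)
t, dt = 1.6, 0.1  # measured time and a +/-0.1 s reaction-time error

a_mid = accel(x, t)
a_low, a_high = accel(x, t + dt), accel(x, t - dt)
print(f"a = {a_mid:.3f} m/s^2, but anywhere in {a_low:.3f}-{a_high:.3f}")
```

A tenth of a second on a 1.6-second run already moves the result by more than 10 percent, comparable in size to some of the discrepancies described in the question.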
{ "domain": "physics.stackexchange", "id": 83469, "tags": "newtonian-mechanics, forces, experimental-physics, acceleration, error-analysis" }
Avoiding nested if statements
Question: I occasionally find myself bumping into code like this (either in other projects or banging out initial prototypes myself): if @product.save if current_member.role == "admin" redirect_to krowd_path(@product) else redirect_to new_product_offer_path(@product) end else render :new end What is a good way to avoid this type of situation altogether? Answer: One suggestion is to make the roles two classes, initialize them based on the role, and call the save function. class UserRole def save(product) redirect_to new_product_offer_path(product) end end class AdminRole < UserRole def save(product) redirect_to krowd_path(product) end end def create_role(r) case r when "admin" return AdminRole.new() else return UserRole.new() end end .... role = create_role(current_member.role) if @product.save role.save(@product) else render :new end
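Another lightweight option in the same spirit (a sketch; the path-helper names come from the question, but the hash-based dispatch is my own suggestion, not from the answer above): keep a hash from role to path helper, with a default, so the branching disappears entirely.

```ruby
# Map each role to the symbol of the path helper to redirect to;
# any unknown role falls back to the regular member path.
SAVE_REDIRECTS = { "admin" => :krowd_path }
SAVE_REDIRECTS.default = :new_product_offer_path

def redirect_target(role)
  SAVE_REDIRECTS[role]
end

# In the controller this would become something like:
#   redirect_to send(redirect_target(current_member.role), @product) if @product.save
```

Adding a new role is then a one-line change to the hash rather than another branch in the conditional.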
{ "domain": "codereview.stackexchange", "id": 1694, "tags": "ruby, ruby-on-rails" }
Are there any stars that orbit perpendicular to the Milky Way's galactic plane?
Question: Most stars orbit in the Milky Way's galactic disc. But is it possible for one to orbit perpendicular to it? Here on Earth since we're inside the galactic plane we can't get a good view of what the Milky Way looks like. But would the whole Milky Way be visible from a planet orbiting such a star? Answer: The Sun and most of the other stars are in the bulging disk of the Milky Way galaxy, but about 1% of the galaxy's stellar mass is in the galactic halo. The halo also includes 50 globular clusters and about 20 satellite galaxies according to Helmi 2008: The stellar halo of the Galaxy. Here is a nice graphic: Note that the Sagittarius Stream of stars is extremely close to passing the galactic poles. If you were on a planet next to one of these stars as it passed a pole, you would probably have a glorious view of the Milky Way stellar disk. Bodies in the galactic halo don't necessarily follow the elliptical paths predicted by Kepler, so their orbits may not be consistently perpendicular to the galactic plane. Some stellar streams have extremely odd orbital paths like the Phlegethon stellar stream, referenced in this answer: There is some speculation that the Halo stellar streams are the remnants of dwarf galaxies the Milky Way has absorbed. Indeed, Liang et al. say that: the Galactic halo has complicated assembly history and it not only interacts but also [is] strongly mixed with other components of the Galaxy and satellite dwarf galaxies
{ "domain": "astronomy.stackexchange", "id": 5207, "tags": "star, galaxy, orbital-mechanics, milky-way, galactic-halo" }
Initializing a multi-tree data structure
Question: I wrote this code to read text of a file and use it to initialize objects and place them in a multi-tree data structure. The multi-tree has the base node as an object called theTree, which has an ArrayList that contains a bunch of objects called Party. Each Party has an ArrayList that contains a Creature. Each creature has 3 ArrayLists: artifacts, treasures, and jobs, which contain items of those types. This is a sample of the .txt I'm scanning: // Parties format: // p:<index>:<name> p : 10000 : Crowd p : 10001 : Band ... // Creatures format: // c:<index>:<type>:<name>:<party>:<empathy>:<fear>:<carrying capacity>[:<age>:<height>:<weight>] c : 20000 : Gnome : Rupert : 10000 : 91 : 30 : 149 : 176.73 : 206.23 : 31.15 c : 20001 : Magician : Delmar : 10000 : 49 : 31 : 223 : 226.37 : 181.93 : 438.56 ... // Treasures format: // t:<index>:<type>:<creature>:<weight>:<value> // creature = 0 means noone is carrying that treasure at the moment t : 30000 : Marks : 20000 : 291.8 : 82 t : 30001 : Chest : 20001 : 82.8 : 66 ... // Artifacts format: // a:<index>:<type>:<creature>[:<name>] a : 40000 : Stone : 20000 : Chrysoprase a : 40001 : Stone : 20000 : Onyx ... // Jobs for creatures // measure time in seconds // j:<index>:<name>:<creature index>:<time>[:<required artifact:type>:<number>]* j : 50000 : Get Help : 20000 : 2.00 : Stone : 0 : Potions : 2 : Wands : 1 : Weapons : 1 j : 50001 : Swing : 20000 : 8.00 : Stone : 0 : Potions : 2 : Wands : 1 : Weapons : 2 Here is what I wrote to scan the .txt and send the gathered information to the proper constructors. As you can see I utilize a HashMap to store each object in a spot associated with its index. Each object that belongs to it has a value that matches that index. Creatures have an attribute called party. Its party is equal to the index of what party it belongs to. Treasures, Artifacts, and Jobs have a similar field called creature. 
public static void readFile() {
    String[] param = new String[30];
    int findParty;
    int findCreature;
    char x;
    int u; // used to determine what constructor to call in case some data is missing

    // HashMap used for an easy way to reference objects during instantiation
    HashMap<Integer, Assets> gameAssets = new HashMap<Integer, Assets>();

    while (input.hasNextLine()) {
        String line = input.nextLine().trim();
        Scanner getStat = new Scanner(line).useDelimiter("\\s*:\\s*");
        if (line.length() == 0 || line.charAt(0) == '/')
            continue;
        while (getStat.hasNext()) {
            u = 0;
            x = getStat.next().charAt(0);
            switch (x) {
            case 'p':
                for (int i = 0; i < 2; i++) {
                    if (getStat.hasNext()) {
                        param[i] = getStat.next();
                    }
                }
                Party newParty = new Party(param);
                SorcerersCave.theCave.addParty(newParty);
                gameAssets.put(newParty.getIndex(), newParty);
                continue;
            case 'c':
                Creature newCreature;
                while (getStat.hasNext()) {
                    param[u] = getStat.next();
                    u++;
                }
                if (u == 7) {
                    newCreature = new Creature(param[0], param[1], param[2], param[3], param[4], param[5], param[6]);
                } else {
                    newCreature = new Creature(param);
                }
                findParty = newCreature.getParty(); // == index of parent node in HashMap
                if (findParty == 0) {
                    SorcerersCave.theCave.addCreature(newCreature);
                } else {
                    ((Party) gameAssets.get(findParty)).addMember(newCreature);
                    gameAssets.put(newCreature.getIndex(), newCreature);
                }
                continue;
            case 't':
                for (int i = 0; i < 5; i++) {
                    param[i] = getStat.next();
                }
                Treasure newTreasure = new Treasure(param);
                findCreature = newTreasure.getCreature();
                if (findCreature == 0) {
                    SorcerersCave.theCave.addTreasure(newTreasure);
                } else {
                    ((Creature) gameAssets.get(findCreature)).addItem(newTreasure);
                    gameAssets.put(newTreasure.getIndex(), newTreasure);
                }
                continue;
            case 'a':
                while (getStat.hasNext()) {
                    param[u] = getStat.next();
                    u++;
                }
                if (u == 4) {
                    Artifact newArtifact = new Artifact(param);
                    findCreature = newArtifact.getCreature();
                    ((Creature) gameAssets.get(findCreature)).addArtifact(newArtifact);
                    gameAssets.put(newArtifact.getIndex(), newArtifact);
                } else {
                    Artifact newArtifact = new Artifact(param[0], param[1], param[2]);
                    findCreature = newArtifact.getCreature();
                    ((Creature) gameAssets.get(findCreature)).addArtifact(newArtifact);
                    gameAssets.put(newArtifact.getIndex(), newArtifact);
                }
                continue;
            case 'j':
                while (getStat.hasNext()) {
                    param[u] = getStat.next();
                    u++;
                }
                Job newJob = new Job(param, (Creature) gameAssets.get(Integer.parseInt(param[2])));
                SorcerersCave.theCave.addJob(newJob);
                findCreature = newJob.getCreature();
                ((Creature) gameAssets.get(findCreature)).addJob(newJob);
                newJob.target = (Creature) gameAssets.get(findCreature);
                GameInterface.jobHeight = GameInterface.jobHeight + 1;
            }
        }
    }
    input.close();
}

I turned this in last weekend and got a good grade, but looking back on this method I'm not very proud of it. It's ugly, complicated, and the least readable method I have ever written. For my first time performing a task like this it works, but I would never show this off.

Answer: This could benefit from breaking the code down into a number of separate methods to improve readability; however, the issue that you will find with code like this is that you end up with lots of parameters on each method to pass the data around. One effective way to deal with this is through a combination of the Extract Class and Extract Method refactorings, as follows:
1. Extract the readFile function into a separate class, making it a public method on the new class, and have the existing code call readFile on the new class.
2. Start using Extract Method to break down this large method into smaller methods (by introducing private methods in the new class).
3. When extracting the methods, instead of passing variables into each method as parameters, move the variables to the class level so that they can be shared amongst methods without using parameters.
By doing this, it will become very simple to aggressively apply Extract Method to break the code down into smaller pieces and make it more readable. Furthermore, by encapsulating the code for reading a file into a separate class, it will clean up the code that calls this function.
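A minimal sketch of the shape this refactoring produces (the class and method names below are illustrative, not taken from the assignment; only the line-splitting and comment-skipping logic comes from the posted code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the extracted reader class: shared state (here, the parsed
// records) becomes a field, so the small private handlers extracted from
// the big loop need no long parameter lists.
class CaveFileReader {
    private final List<String[]> records = new ArrayList<>();

    // Public entry point; the old call site becomes new CaveFileReader().read(lines).
    List<String[]> read(List<String> lines) {
        for (String line : lines) {
            processLine(line.trim());
        }
        return records;
    }

    // One small private method per concern, extracted from the original loop body.
    private void processLine(String line) {
        if (line.isEmpty() || line.charAt(0) == '/') {
            return; // skip blanks and comment lines, as the original code does
        }
        records.add(line.split("\\s*:\\s*")); // same delimiter as the Scanner used
    }
}
```

From here, each `case` of the original switch would become its own private method (handleParty, handleCreature, ...) dispatching on the first token.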
{ "domain": "codereview.stackexchange", "id": 4169, "tags": "java, tree" }
Where does the energy used to fight gravity go?
Question: Imagine the following scenarios:

A. We have a spaceship at x distance from a star. It faces directly away from the star, and fires its engines such that it remains at exactly the same distance from the star until it runs out of fuel. At this point, it falls into the star.

B. We have a spaceship at x distance from a star. It faces directly away from the star, and fires its engines such that it depletes the fuel extremely quickly - and as a result - gains enough speed to reach escape velocity.

So let's compare the two scenarios. In scenario A, the final state is that the star gains the momentum from the fuel fired, plus the momentum of the ship once it hits the star. In scenario B, the final state is that the star gains the momentum from the fuel fired - however, the ship now has a large amount of momentum travelling away from the star. We can take things to the extreme and say that in scenario A, the ship stays in a fixed position 1 km away from the star firing its engines for 10 years, while in scenario B, the ship depletes its fuel in 10 seconds. Here, it's clear the differences in momentum between the two ships would be significant - far outweighing the momentum transferred to the star by the ship on collision. So where does this additional energy go?

And what if we make this more complicated and repeat the same experiment, but now with two identical ships, one on each side of the star? Then the final momentum of the star is unchanged - it only gains mass from the fuel and the ships.

In scenario A: The ship uses all its fuel to stay 1 km away from the star. In the process, it transfers momentum to the star from the expelled propellant. Once the fuel is exhausted, the momentum of the ship gained from accelerating towards the star is transferred to the star. Thus, the star's final momentum is -fuel -ship.

In scenario B: The ship immediately uses all its fuel to escape the star. In the process, it transfers momentum to the star from the expelled propellant.
It then escapes with a large amount of momentum. Thus, the star's final momentum is -fuel, and the ship's final momentum is +fuel. Both ships used the same amount of energy. My assertion is that the momentum of Ship A when it collides with the star is less than the momentum of Ship B as it escapes. If both input the same amount of energy into the system, how can the final momenta not be equal?

Answer: I am confused (and I think you may be confused) about whether you are asking about energy or momentum. However, the momentum aspects are simple. Let's assume:
- all the fuel ejected by the spacecraft hits the star;
- the spacecraft and star are initially momentarily at rest in some inertial frame, and all velocities are measured with respect to this frame;
- Newtonian gravitation;
- all collisions are inelastic (fuel & spacecraft stick to the star on collision);
- everything happens along a line, so I will use scalars when I mean vectors.

Let the masses be: spacecraft, without fuel, $m_s$; fuel $m_f$; star $M$. Then the total momentum in the initial state is trivially $0$.

Final state 1: spacecraft and all its fuel hit the star. The final velocity of the combined star, spacecraft & fuel is $V = 0$ by conservation of momentum.

Final state 2: spacecraft escapes to infinity, with asymptotic final velocity of spacecraft $v$, and of star + fuel $V$. By conservation of momentum: $$v m_s + V(M + m_f) = 0$$ and hence $$V = -v\frac{m_s}{M + m_f}$$ and it's as simple as that. Note that in the second case I am computing the asymptotic velocities: the velocities after the spacecraft has escaped to infinity. However, in fact, once all the fuel has hit the star the expression is good at any time after that, although the two velocities change over time of course.
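The answer's momentum bookkeeping for the escape scenario is easy to sanity-check numerically. All numbers below are arbitrary illustrative values, not taken from the question:

```python
# Numerical check of the conservation-of-momentum result for final state 2.
m_s = 1.0e4   # spacecraft dry mass, kg (arbitrary)
m_f = 4.0e4   # fuel mass, kg (arbitrary)
M   = 2.0e30  # star mass, kg (roughly solar, but any value works)

v = 5.0e3     # asymptotic spacecraft velocity, m/s (arbitrary)

# Total initial momentum is zero, so the star + absorbed fuel must recoil:
V = -v * m_s / (M + m_f)

total = v * m_s + V * (M + m_f)
assert abs(total) < 1e-6  # momentum balances to numerical precision
assert abs(V) < 1e-20     # the star's recoil velocity is utterly negligible
```

The tiny value of V is the quantitative version of the answer's point: the star soaks up whatever momentum is needed, at essentially zero velocity (and hence essentially zero kinetic energy), which is why equal energy inputs need not produce equal final momenta.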
{ "domain": "physics.stackexchange", "id": 60010, "tags": "newtonian-gravity, energy-conservation, potential-energy, rocket-science" }
Laser scan to point cloud to octomap - strange result
Question: Hi, I have a laser scanner mounted on an arm. With laser_pipeline and pcl_assembler, I build a point cloud and pass it to the octomap_server. The pcl_assembler runs every 3 seconds. My problem is that in simulation and Gazebo all works fine, but on the real system the octomap is completely wrong. What could be the problem here, or is there anything that works better? I made screenshots: the first is just the laser scans, which look great (TF should also be OK); the second is the assembled_cloud; the third is the octomap.

<?xml version="1.0"?>
<launch>
  <include file="$(find powerball_peak_start)/launch/powerball.launch" />
  <!--<include file="$(find powerball_vscom_start)/launch/powerball.launch" />-->

  <!-- Laser Scanner -->
  <node ns="LMS100" name="lms100" pkg="LMS1xx" type="LMS100" >
    <param name="host" value="192.168.5.80"/>
    <param name="frame_id" value="/laser_link"/>
  </node>

  <!-- Laser Pipeline -->
  <node type="laser_scan_assembler" pkg="laser_assembler" name="my_assembler">
    <remap from="scan" to="/LMS100/scan"/>
    <param name="max_scans" type="int" value="400" />
    <param name="fixed_frame" type="string" value="/laser_link" />
  </node>

  <!-- Point cloud publisher -->
  <node name="periodic_snapshotter" pkg="pcl_assembler" type="periodic_snapshotter" respawn="false" output="screen" />

  <!-- Octomap Server -->
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <param name="resolution" value="0.05" />
    <param name="frame_id" type="string" value="base_link" />
    <param name="max_sensor_range" value="20.0" />
    <param name="latch" value="false" />
    <remap from="cloud_in" to="assembled_cloud" />
  </node>

  <!-- Rviz -->
  <node name="rviz" pkg="rviz" type="rviz" respawn="false" output="screen" />
</launch>

Originally posted by madmax on ROS Answers with karma: 496 on 2013-02-12 Post score: 0

Original comments
Comment by AHornung on 2013-02-12: Which version of octomap_server is that, and which ROS distribution?
Comment by AHornung on 2013-02-12: Also: What does the assembled PointCloud2 look like in the fixed frame "base_link"? Comment by madmax on 2013-02-12: I am on fuerte and the octomap version is 1.4.3-0precise. Added a l of the PointCloud2. Comment by AHornung on 2013-02-12: What is the version of octomap_mapping (ros-fuerte-octomap-mapping)? I ask since there was a bug that could have caused this, but that got fixed some time ago. The current version should be 0.4.5 or 0.4.6. Comment by AHornung on 2013-02-12: octomap_point_cloud_centers will essentially show the same as the MarkerArray, without the spatial extents. What would help would be to see assembled_cloud in the base_link frame, since that is what gets inserted into the octomap. Comment by madmax on 2013-02-13: Ok, I think the problem is the assembled_cloud. It doesn't look like a Point Cloud, it's just 3D data mapped to 2D data. But why is this working in simulation? Comment by Gazer on 2013-07-10: @Ahornung, so, how do I improve the result of the octomap? Answer: Stupid mistake by me: fixed_frame in the laser pipeline was laser_link and should be base_link. Originally posted by madmax with karma: 496 on 2013-02-13 This answer was ACCEPTED on the original site Post score: 0
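For reference, the accepted fix amounts to one parameter change in the assembler node of the launch file above; this is a sketch assuming everything else stays exactly as posted:

```xml
<node type="laser_scan_assembler" pkg="laser_assembler" name="my_assembler">
  <remap from="scan" to="/LMS100/scan"/>
  <param name="max_scans" type="int" value="400" />
  <!-- was /laser_link: scans must be assembled in a fixed frame, not in the
       (moving) sensor frame, or the assembled cloud collapses onto itself -->
  <param name="fixed_frame" type="string" value="/base_link" />
</node>
```

In simulation the arm pose may make the error invisible, which matches the observation that the problem only appeared on the real system.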
{ "domain": "robotics.stackexchange", "id": 12855, "tags": "ros, octomap, laser-pipeline, octomap-server, laser-scan" }
Understanding Kirchhoff's radiation law calculation from Weinberg's book
Question: From the second edition of Weinberg's Lectures on Quantum Mechanics:

Consider an enclosure whose walls are kept at a temperature $T$, and suppose that the energy per volume of radiation within this enclosure in a frequency interval between $\nu$ and $\nu + d\nu$ is some function $\rho(\nu,\, T)$ times $d\nu$. Kirchhoff calculated the energy per time of the radiation in any frequency interval that strikes a small patch of area $A$. He reasoned that, from a point in the enclosure with polar coordinates $r, \theta, \phi$ (with $r$ the distance to the patch, and $\theta$ measured from the normal to the patch), the patch subtends a solid-angle fraction $A \cos\theta / 4 \pi r^2$, so the energy striking the patch within a time $t$ is found by integrating $A \cos\theta / 4 \pi r^2 \times \rho(\nu, \, T) \,d\nu$ over a hemisphere with radius $ct$, where $c$ is the speed of light: $$2\pi \int_0^{ct} dr \int_0^{\pi/2}d\theta \, r^2 \sin\theta \times \frac{A \cos\theta \, \rho(\nu, T) \, d\nu}{4\pi r^2} = \frac{ctA\,\rho(\nu,\, T) \,d\nu}{4}.$$

I don't understand how we are doing the integration here. If I am correct in my understanding, the origin of our spherical coordinate system is some point in the enclosure, and $\mathbf{r}$ is the vector which points to the patch from the origin. If I change $r$, then the tip of my vector $\mathbf{r}$ would not lie on the patch anymore. A similar thing holds for $\theta$ as well. I know this is supposed to be trivial but I'm unable to understand it.

Edit: I think that I have a wrong idea of how this integration is being done. It would be helpful if someone could explain in detail (possibly with figures) how the calculation is done.

Answer: "He reasoned that, from a point in the enclosure with polar coordinates $r, \theta, \phi$ (with $r$ the distance to the patch, and $\theta$ measured from the normal to the patch)." It means that you are integrating over all the points in the enclosure. Approximately, you can place the origin on the patch, and all the possible points in the enclosure subtend the aforementioned solid angle. Hope it is clear.
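Note that the $r^2$ from the volume element cancels the $1/r^2$ in the solid-angle factor, which makes the integral easy to verify by brute force. The sketch below checks it numerically with a midpoint sum, using arbitrary illustrative values for $c$, $t$, $A$ and $\rho$:

```python
import math

# Numerical check of Kirchhoff's integral (all values illustrative; units suppressed).
c, t, A, rho = 1.0, 2.0, 0.5, 3.0   # rho stands for rho(nu, T) d(nu)

def integrand(r, theta):
    # The 2*pi prefactor in the formula is the phi integral; include it here.
    return 2 * math.pi * r**2 * math.sin(theta) * A * math.cos(theta) * rho / (4 * math.pi * r**2)

# Midpoint rule on the (r, theta) rectangle [0, ct] x [0, pi/2].
n = 400
dr = c * t / n
dth = (math.pi / 2) / n
total = sum(integrand((i + 0.5) * dr, (j + 0.5) * dth) * dr * dth
            for i in range(n) for j in range(n))

exact = c * t * A * rho / 4
assert abs(total - exact) < 1e-3 * exact  # agrees with c t A rho / 4
```

Because the integrand is independent of $r$ after the cancellation, the $r$ integral just contributes a factor $ct$, and the $\theta$ integral of $\sin\theta\cos\theta$ contributes $1/2$, which is where the factor of $4$ in the denominator comes from.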
{ "domain": "physics.stackexchange", "id": 70909, "tags": "thermodynamics, radiation, thermal-radiation" }
Electrolysis not running, maybe dependent on distance of electrodes?
Question: So I created a DC power supply of 15 V (a transformer connected to the mains) and I have a graphite anode and a stainless steel cathode. When I put them in a beaker next to each other in water, the electrolysis takes place, as I see gas bubbles. However, when I put them in another container that consists of two beakers connected with a tube to allow liquid to flow through but to separate the gases, the reaction doesn't run, even though it is more or less the same setup. How come? It is true that the tube connecting the beakers is relatively thin and the distance is also quite large (10 cm or so), so does this lead to a higher required overpotential? I really don't know why it doesn't work in the larger container. Any help is appreciated!

Answer: Solution resistance in a two-electrode cell can make a big difference, as it is totally uncompensated. You may be applying 15 V between the electrodes, but the redox reaction only occurs in very close proximity to the surface (<10 nm), so it's the interfacial potential we're concerned with, which can be significantly different from the applied voltage if the solution is highly resistive. A tight constriction in the cell can certainly increase the resistance, and you should make sure you're using an electrolyte, as plain water is very resistive.
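An order-of-magnitude estimate shows how badly a thin tube can choke the cell. The numbers below are illustrative assumptions (a typical tap-water resistivity and a guessed tube geometry), not measurements from the question:

```python
import math

# Ohmic resistance of the connecting tube, R = rho * L / A.
rho = 20.0   # resistivity of tap water, ohm*m (typical order of magnitude)
L = 0.10     # tube length, m (the ~10 cm from the question)
d = 0.005    # assumed tube inner diameter, m
A = math.pi * (d / 2) ** 2

R = rho * L / A   # resistance contributed by the tube alone
I = 15.0 / R      # current at 15 V if the tube dominates the circuit

assert R > 1e4    # the constriction alone adds on the order of 100 kilo-ohms
assert I < 1e-3   # leaving well under a milliamp to drive electrolysis
```

With most of the 15 V dropped across the tube, the interfacial potential at the electrodes falls below what is needed for visible gas evolution, consistent with the answer. Adding an electrolyte lowers rho by orders of magnitude.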
{ "domain": "chemistry.stackexchange", "id": 1963, "tags": "water, electrolysis" }
Best way to install from a stand-alone network
Question: I've been having issues installing electric on systems that are deployed to a network that does not have any internet access. I've had limited success using rosinstall on a system that does have internet access to checkout the full install, installing rosinstall on the machines behind the network and then coming up with a custom rosinstall file pointing to both the ros and stacks directories. However this build does not always work. Just wondering if there's an easy way that given I have all the source code, how do I build the ros core system manually? I managed to get the environment variables setup and build rospack, but couldn't proceed from there. Originally posted by venabled on ROS Answers with karma: 3 on 2012-02-08 Post score: 0 Original comments Comment by ahendrix on 2012-02-08: Which Linux distribution are you using? Answer: If you're on Ubuntu and you only need released packages you can pull all the debian packages needed onto a single CD and install them using apt-cdrom. Or if you can transport a computer between the internal and external networks you can setup an apt-mirror as @Mac suggests. Originally posted by tfoote with karma: 58457 on 2012-03-15 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8162, "tags": "ros, installation, rosinstall" }
Differential cross section in the case of identical masses
Question: In the book "An Introduction to Quantum Field Theory" by M.E. Peskin and D.V. Schroeder, on page $107$, they calculate the differential cross section for two particles $A$ and $B$ with initial energy $E_A$, $E_B$ and momentum $p_A$, $p_B$ to scatter and become particles $1$ and $2$ with final momentum $p_1$, $p_2$, in the center of mass. They come to this relation $$\left(\frac{dσ}{dΩ}\right)_{CM}=\frac{1}{2E_A2E_B\vert υ_A-υ_B\vert}\frac{\vert\textbf{p}_1\vert}{(2π)^24E_{cm}}\vert M(p_A,p_B\rightarrow p_1,p_2)\vert^2\qquad\quad(4.84)$$ where $υ_A-υ_B$ is the relative velocity of the beams as viewed from the laboratory frame, $E_{cm}$ is the energy of the system in the center of mass, and $M(p_A,p_B\rightarrow p_1,p_2)$ is the invariant matrix element of the process. Then the authors make the hypothesis that the four particles have identical mass, and this formula reduces to $$\left(\frac{dσ}{dΩ}\right)_{CM}=\frac{\vert M\vert^2}{64π^2E^2_{cm}}\qquad\quad(4.85)$$ My question is how they came to eq. $(4.85)$ with these assumptions. I cannot follow the maths. Any help?

Answer: Assume the 3-velocities $v$ to be along the x-axis and express them through momenta and energies $$v_{Ax}=\frac{{\rm d} x}{{\rm d} t}=\frac{{\rm d} x}{{\rm d} \tau}\frac{{\rm d} \tau}{{\rm d} t}=\frac{p_{Ax}}{E_A}.$$ So, (4.84) becomes $$\left(\frac{dσ}{dΩ}\right)_{CM}=\frac{1}{4\vert p_{Ax}E_B-p_{Bx}E_A\vert}\frac{\vert\textbf{p}_1\vert}{(2π)^24E_{cm}}\vert M(p_A,p_B\rightarrow p_1,p_2)\vert^2\qquad\quad(4.84)$$ The expression in the denominator can be simplified by choosing a frame where one of the initial particles is at rest, i.e. $E_A=m$, such that $$\frac{1}{4\vert p_{Ax}E_B-p_{Bx}E_A\vert}= \frac{1}{4 p_{Bx}m}.$$ Using the energy-momentum relation for $p_B$ and going to the limit $m\rightarrow 0$ this becomes $\frac{1}{2s}$. For the $\mathbf{p_1}$ in the numerator look at the direction of the momenta of $p_1$ and $p_2$ after the collision.
In CMS the two particles have the same energy but opposite momenta (e.g. along the x-axis). Hence, $s=(p_1+p_2)^2=4E_1^2$ and ${\mathbf p_1}\approx E_1=\frac{\sqrt s}{2}$. Using $E_{CM}=\sqrt s$ one should arrive at the desired result.
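For completeness, here is a sketch of the same reduction carried out without leaving the CM frame, using only the kinematics already stated (equal masses, back-to-back momenta, so $E_A=E_B=E_{cm}/2$ and $|\mathbf{p}_A|=|\mathbf{p}_1|$ by energy conservation):

$$|v_A - v_B| = \left|\frac{p_{Ax}}{E_A} - \frac{p_{Bx}}{E_B}\right| = \frac{2|\mathbf{p}_A|}{E_{cm}/2} = \frac{4|\mathbf{p}_1|}{E_{cm}},$$

so the flux factor in (4.84) becomes

$$2E_A\,2E_B\,|v_A - v_B| = E_{cm}^2 \cdot \frac{4|\mathbf{p}_1|}{E_{cm}} = 4|\mathbf{p}_1|\,E_{cm},$$

and therefore

$$\left(\frac{dσ}{dΩ}\right)_{CM} = \frac{1}{4|\mathbf{p}_1| E_{cm}}\,\frac{|\mathbf{p}_1|}{(2π)^2\,4E_{cm}}\,|M|^2 = \frac{|M|^2}{64π^2 E_{cm}^2},$$

which is exactly (4.85), with the $|\mathbf{p}_1|$ factors cancelling and $(2π)^2 \cdot 16 = 64π^2$.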
{ "domain": "physics.stackexchange", "id": 33556, "tags": "quantum-field-theory, scattering-cross-section" }
Does rubber insulate lightning more effectively than air?
Question: Last week, an Ars Technica writer was struck by lightning. He says that the 911 operators were concerned about whether or not he was wearing shoes at the time, but he didn't think it would make much difference. Apparently, air has a thousand times more resistance than hard rubber. Does this mean that wearing shoes wouldn't do anything? What about with electrical wiring? I've seen electricity spark through the air before, but I don't think I've seen it go through rubber, especially a greater volume of it. Is there something I'm missing here, or have I just never considered it?

Answer: At sufficiently high voltages almost everything conducts, due in part to quantum tunneling of electrons. An insulator has a breakdown voltage, which is the field strength required before it will start conducting. Related to the breakdown voltage is the dielectric strength, which is the minimum voltage over distance ($\mathrm{V}/\mathrm{m}$) before a material will conduct. The table at Wikipedia lists the dielectric strength of air as $3.0 \times 10^6\: \mathrm{\frac{V}{m}}$ and rubber at least five times better at greater than $15 \times 10^6\: \mathrm{\frac{V}{m}}$. When it comes to lightning, though, I doubt it matters much. The bolt of lightning overcame dozens or even hundreds of meters of air to strike. A few cm of rubber isn't going to matter. If the rubber is a bad path, it'll just take the air around the rubber shoe soles. Regarding the resistance of rubber versus air, resistance stops having much of any meaning once the breakdown voltage is exceeded. The current will form a plasma out of the material, and plasmas are great conductors.
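The scale mismatch in the answer can be made concrete with a back-of-the-envelope comparison. The dielectric strengths are the ones quoted above; the sole thickness and path length are assumptions for illustration:

```python
# Rough comparison: punching through a shoe sole vs. the air already bridged.
E_air    = 3.0e6    # dielectric strength of air, V/m (from the answer)
E_rubber = 15.0e6   # lower bound for rubber, V/m (from the answer)

sole = 0.02         # assumed shoe-sole thickness, m (2 cm)
v_sole = E_rubber * sole   # voltage to break down the rubber sole
v_air  = E_air * sole      # voltage to break down the same gap of air

# A cloud-to-ground stroke has already bridged hundreds of meters of air
# (uniform-field estimate; real strokes involve stepped leaders, so this
# is very rough):
v_stroke = E_air * 300

assert v_sole > v_air              # rubber beats air over the same gap
assert v_stroke / v_sole > 1000    # but the sole is a negligible extra obstacle
```

Even though rubber is the better insulator per meter, the sole's breakdown voltage is thousands of times smaller than what the stroke has already overcome, which is the answer's point.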
{ "domain": "physics.stackexchange", "id": 8036, "tags": "electricity, electric-current, electrical-resistance, lightning, dielectric" }
Nested cross-validation and selecting the best regression model - is this the right SKLearn process?
Question: If I understand correctly, nested CV can help me evaluate what model and hyperparameter tuning process is best. The inner loop (GridSearchCV) finds the best hyperparameters, and the outer loop (cross_val_score) evaluates the hyperparameter tuning algorithm. I then choose which tuning/model combo from the outer loop minimizes MSE (I'm looking at a regression problem) for my final model test. I've read the questions/answers on nested cross-validation, but haven't seen an example of a full pipeline that utilizes this. So, do my code below (please ignore the actual hyperparameter ranges - this is just for example) and thought process make sense?

from sklearn.cross_validation import cross_val_score, train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.datasets import make_regression
import numpy as np

# create some regression data
X, y = make_regression(n_samples=1000, n_features=10)

params = [{'C': [0.01, 0.05, 0.1, 1]}, {'n_estimators': [10, 100, 1000]}]

# setup models, variables
mean_score = []
models = [SVR(), RandomForestRegressor()]

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.3)

# estimate performance of hyperparameter tuning and model algorithm pipeline
for idx, model in enumerate(models):
    clf = GridSearchCV(model, params[idx], scoring='mean_squared_error')
    # this performs a nested CV in SKLearn
    score = cross_val_score(clf, X_train, y_train, scoring='mean_squared_error')
    # get the mean MSE across each fold
    mean_score.append(np.mean(score))
    print('Model:', model, 'MSE:', mean_score[-1])

# estimate generalization performance of the best model selection technique
best_idx = mean_score.index(max(mean_score))  # because SKLearn flips MSE signs, max works OK here
best_model = models[best_idx]

clf_final = GridSearchCV(best_model, params[best_idx])
clf_final.fit(X_train, y_train)

y_pred = clf_final.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print('Final Model:', best_model, 'Final model RMSE:', rmse)

Answer: Yours is not an example of nested cross-validation. Nested cross-validation is useful to figure out whether, say, a random forest or an SVM is better suited for your problem. Nested CV only outputs a score; it does not output a model like in your code. This would be an example of nested cross-validation:

from sklearn.datasets import load_boston
from sklearn.cross_validation import KFold
from sklearn.metrics import mean_squared_error
from sklearn.grid_search import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
import numpy as np

params = [{'C': [0.01, 0.05, 0.1, 1]}, {'n_estimators': [10, 100, 1000]}]
models = [SVR(), RandomForestRegressor()]

df = load_boston()
X = df['data']
y = df['target']

cv = [[] for _ in range(len(models))]
for tr, ts in KFold(len(X)):
    for i, (model, param) in enumerate(zip(models, params)):
        best_m = GridSearchCV(model, param)
        best_m.fit(X[tr], y[tr])
        s = mean_squared_error(y[ts], best_m.predict(X[ts]))
        cv[i].append(s)
print(np.mean(cv, 1))

By the way, a couple of thoughts: I see no purpose in grid searching n_estimators for your random forest. Obviously, the more, the merrier. Things like max_depth are the kind of regularization that you want to optimize. The error for the nested CV of RandomForest was much higher because you did not optimize for the right hyperparameters, not necessarily because it is a worse model. You might also want to try gradient boosting trees.
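The two-level structure in the answer can also be seen without any scikit-learn machinery. The following toy sketch (all names and numbers are illustrative; the "model" is just a constant predictor, and the "grid" is the set of candidate constants) makes explicit which data each loop is allowed to touch:

```python
# Hand-rolled nested CV skeleton: outer folds score the whole
# "tune-then-evaluate" procedure; inner folds choose the hyperparameter
# using only the outer training data.

def kfold(indices, k):
    """Yield (train, test) index lists for k contiguous folds."""
    size = len(indices) // k
    for i in range(k):
        test = indices[i * size:(i + 1) * size]
        train = [j for j in indices if j not in test]
        yield train, test

def mse(c, y, idx):
    """MSE of the constant predictor c on the points in idx."""
    return sum((y[j] - c) ** 2 for j in idx) / len(idx)

y = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 0.7, 1.05, 0.95, 1.3]  # toy targets
grid = [0.0, 1.0, 2.0]  # hyperparameter candidates

outer_scores = []
for outer_tr, outer_ts in kfold(list(range(len(y))), 5):
    # Inner CV: pick the best candidate using only the outer training set.
    best = min(grid, key=lambda c: sum(
        mse(c, y, val) for _, val in kfold(outer_tr, 2)))
    # Score the tuned choice on the held-out outer fold.
    outer_scores.append(mse(best, y, outer_ts))

# Estimates the performance of the *procedure*, not of one fitted model.
nested_estimate = sum(outer_scores) / len(outer_scores)
```

The mean of outer_scores plays the role of the cross_val_score output in the question; after using it to pick a procedure, you would rerun the tuner on all the training data to get the deployable model.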
{ "domain": "datascience.stackexchange", "id": 1453, "tags": "python, scikit-learn, cross-validation, model-selection" }
[ndt_matching] gpu version died
Question:

description
The ndt_matching node works fine with the pcl_generic method, but the pcl_anh_gpu method always crashes when started.

environment
gpu: nvidia gtx980m
gpu driver version: 410
cuda version: 10.0
cudnn version: 7.6.1.34
ros: melodic
autoware: 1.12.0 and master

steps to reproduce
1. In the simulation tab, select the bag file (demo data provided by autoware), press start and pause.
2. In the setup tab, press tf and vehicle model.
3. In the map tab: select and press point cloud (demo data provided by autoware); select and press vector map (demo data provided by autoware); select and press tf (demo data provided by autoware).
4. In the simulation tab, press pause again to resume; in rviz, verify that the pcd map and vector map are loaded correctly.
5. In the sensing tab, click voxel_grid_filter with default parameters.
6. In the computing tab, click nmea2tfpose with default parameters; select the pcl_anh_gpu method and click ndt_matching.
7. In the simulation tab, press pause to resume.
8. Then ndt_matching dies with the error message below.

error messages
Error: out of memory /home/leon/autoware.ai/src/autoware/core_perception/ndt_gpu/src/VoxelGrid.cu 181
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::lock_error> >'
  what(): boost: mutex lock failed in pthread_mutex_lock: Invalid argument
[ndt_matching-1] process has died [pid 8436, exit code -6, cmd /home/leon/autoware.ai/install/lidar_localizer/lib/lidar_localizer/ndt_matching __name:=ndt_matching __log:=/home/leon/.ros/log/a41e9918-b072-11e9-96f6-9cb6d01138d9/ndt_matching-1.log]. log file: /home/leon/.ros/log/a41e9918-b072-11e9-96f6-9cb6d01138d9/ndt_matching-1*.log

contents of ndt_matching-1-stdout.log
Log file:
method_type: 2
use_gnss: 1
queue_size: 1
offset: linear
get_height: 0
use_local_transform: 0
use_odom: 0
use_imu: 0
imu_upside_down: 0
imu_topic: /imu_raw
localizer: velodyne
(tf_x,tf_y,tf_z,tf_roll,tf_pitch,tf_yaw): (1.2, 0, 2, 0, 0, 0)
Update points_map.
Originally posted by hit_leon on ROS Answers with karma: 1 on 2019-07-27 Post score: 0 Answer: Thank you for reporting the bug. pcl_anh_gpu is an algorithm that consumes a lot of memory. If the map is large or the resolution of the NDT is small, the memory will be insufficient and the process will die. A similar bug has been identified for the CPU implementation pcl_anh. Since pcl_anh_gpu, pcl_anh and pcl_openmp have known bugs, I recommend using pcl_generic. If you like, please report the bug to autoware gitlab. https://gitlab.com/autowarefoundation/autoware.ai/core_perception/issues Originally posted by Yamato Ando with karma: 231 on 2019-07-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by hit_leon on 2019-07-29: thanks for your reply, and I have reported this bug to gitlab. Comment by Josh Whitley on 2019-09-11: @yamato-ando Are there Issues on the Autoware.ai Gitlab site for these known bugs? I don't see them anywhere. Comment by Yamato Ando on 2019-09-13: @Maximus5684 I'm sorry. Later I will report the bugs I know.
{ "domain": "robotics.stackexchange", "id": 33538, "tags": "ros-melodic" }
Arduino sensor data recombined and published on master computer
Question: I'm having trouble publishing LaserScan data directly from an Arduino Mega 2560. Is there an example where data is published to the master in a simpler format and then recombined on the master into the standard LaserScan message type and republished? Originally posted by charlie on ROS Answers with karma: 36 on 2015-07-28 Post score: 0 Answer: Here is an example used for an Imu message. Not sure it will help you though. Originally posted by Gary Servin with karma: 962 on 2015-07-28 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 22319, "tags": "laser" }
Hackerrank "Strings: Making Anagrams" Javascript Solution
Question: This is the original problem:

Input Format
The first line contains a single string, a. The second line contains a single string, b.

Constraints
1 <= |a|, |b| <= 10^4
It is guaranteed that a and b consist of lowercase English alphabetic letters (i.e., a through z).

Output Format
Print a single integer denoting the number of characters you must delete to make the two strings anagrams of each other.

Sample Input
cde
abc

Sample Output
4

Explanation
We delete the following characters from our two strings to turn them into anagrams of each other:
Remove d and e from cde to get c.
Remove a and b from abc to get c.
We must delete 4 characters to make both strings anagrams, so we print 4 on a new line.

And this is the solution I've come up with using JavaScript. I've decided to use objects in order to avoid nested for loops, which lead to O(M*N). I think my solution is O(M+N+O+P); however, I do believe there's a much better solution out there, and some more refactoring can be done to my code. Anyone?
There are some default I/O codes you may find on the original website.

function main() {
    var a = readLine();
    var b = readLine();

    // Creating object with {"k": 5, "a": 2, "b": 1} for example
    var objA = countAlphabetFrequency(a);
    var objB = countAlphabetFrequency(b);

    var numOfDeletionsA = countNumberOfDeletions(objA, objB);
    var numOfDeletionsB = countNumberOfDeletions(objB, objA);

    console.log(numOfDeletionsA + numOfDeletionsB);
}

function countAlphabetFrequency(arrOfAlphabets) {
    var resultObj = {};
    for (i = 0; i < arrOfAlphabets.length; i++) {
        if (resultObj[arrOfAlphabets[i]]) {
            resultObj[arrOfAlphabets[i]] += 1;
        } else {
            resultObj[arrOfAlphabets[i]] = 1;
        }
    }
    return resultObj;
}

function countNumberOfDeletions(mainObj, referenceObj) {
    var numOfDeletions = 0;
    for (var k in mainObj) {
        if (mainObj.hasOwnProperty(k)) {
            if (mainObj[k] && referenceObj[k]) {
                // Alphabet k exists in both strings
                if (mainObj[k] > referenceObj[k]) {
                    // Main string has more k than the reference string
                    numOfDeletions += mainObj[k] - referenceObj[k];
                    mainObj[k] = referenceObj[k];
                }
            } else {
                // Alphabet k only exists in the main string
                numOfDeletions += mainObj[k];
            }
        }
    }
    return numOfDeletions;
}

Answer: I like your idea of counting character frequencies first. This allows you to count the required deletions in linear time. Your code is readable, but readability can be improved by more semantic naming and by leveraging modern JavaScript language features.

Naming: Regarding the variable names a, objA, mainObj, resultObj, arrOfAlphabets: those identifiers mainly include type information (obj, arrOf). But as a reader, I am more interested in the role of your variables instead of their type. So instead of objA I would prefer to read frequenciesA or even freqA. And instead of arrOfAlphabets I suggest the simpler characters.

For-loops: First of all, you probably forgot to declare the local loop iterator in for (i = 0; ...). Unfortunately, such omissions can introduce very hard-to-trace bugs, as you now access and potentially share a global variable i. Also, JavaScript arrays and strings implement the iterable protocol. This means you can iterate over them using a simpler for-of loop:

for (let char of characters) { ... }

This allows you to replace arrOfAlphabets[i] with the more readable char.

Default properties: There is a pretty common technique used to handle default values of object properties. Replace

if (resultObj[char]) {
    resultObj[char] += 1;
} else {
    resultObj[char] = 1;
}

with the shorter and idiomatic

resultObj[char] = (resultObj[char] || 0) + 1;

Enumerate object keys: Instead of

for (var k in mainObj) {
    if (mainObj.hasOwnProperty(k)) { ... }
}

you can nowadays use Object.keys to write

for (let key of Object.keys(mainObj)) { ... }

Refactoring: You probably noticed the redundancy in

var numOfDeletionsA = countNumberOfDeletions(objA, objB);
var numOfDeletionsB = countNumberOfDeletions(objB, objA);
console.log(numOfDeletionsA + numOfDeletionsB);

If you modify your countAlphabetFrequency function to increment frequencies for string a and decrement frequencies for string b, you can simply sum the absolute frequencies to get the number of required deletions. If you combine that with a more descriptive approach by replacing for-loops with forEach and reduce, you get a simpler implementation:

// Count character deletions required to make 'a' and 'b' anagrams:
function count(a, b) {
    let freqs = {};
    a.split('').forEach(char => freqs[char] = (freqs[char] || 0) + 1); // increment
    b.split('').forEach(char => freqs[char] = (freqs[char] || 0) - 1); // decrement
    return Object.keys(freqs).reduce((sum, key) => sum + Math.abs(freqs[key]), 0);
}

// Example:
console.log(count('ilovea', 'challenge')); // 9
{ "domain": "codereview.stackexchange", "id": 26013, "tags": "javascript, algorithm, programming-challenge, bitwise" }
Browse Folders for Videos
Question: I am a self-taught programmer and would love any advice / suggestions. I wanted to implement some sort of n-tier Architecture, but not sure how to move all my code around. My program will take movies from my hard drive and eventually allow me to view the results in a nice Netflix like UI. The code below is just the beginning, you click Add Directory to add directories that contains movies and it will crawl the directories looking for types of video. I am using Windows Form because a ton of my code is already in it and don't mind it. For my next project I will definitely try to use WPF. To start, the Add Directory button. private void buttonAddDirectory_Click(object sender, EventArgs e) { FolderBrowserDialog folderBrowserDialog1 = new FolderBrowserDialog(); if (folderBrowserDialog1.ShowDialog() == DialogResult.OK) { processSelectedPath(folderBrowserDialog1.SelectedPath); } } Starts the processSelectedPath method. void processSelectedPath(string Path) { new Thread(() => { Thread.CurrentThread.IsBackground = true; foreach(DataGridViewRow row in dgwVideoFolders.Rows) { if (row.Cells[0].Value.ToString() == Path)//check for duplicates return; } processFolders(Path, true); }).Start(); } Which starts a thread that does the crawling and does not block the main UI. After it checks for duplicates, it will call processFolders method. 
public void processFolders(string Folder, bool UpdateSettings) { try { //crawl the directory to find movies with bgw List<string> videosFound = FindMovies.CrawlDirectory(Folder, checkBoxIgnoreSamples.Checked, checkBoxReplacePeriods.Checked); if (videosFound == null) { //MessageBox.Show("No videos found."); return; } string[] rows = new string[] { Folder, videosFound.Count.ToString() }; this.dgwVideoFolders.Invoke(new UIUpdaterDelegate(() => { this.dgwVideoFolders.Rows.Add(rows); })); for (int i = 0; i < videosFound.Count; i++) { SettingsClass.SaveMovieToDisk(videosFound[i]); // addToMoviesDatagrid(videosFound[i]); } if (UpdateSettings) SaveVideoDirectories(); this.dgwMyVideos.Invoke(new UIUpdaterDelegate(() => { this.dgwMyVideos.Refresh(); })); this.dgwVideoFolders.Invoke(new UIUpdaterDelegate(() => { this.dgwVideoFolders.Refresh(); })); } catch(Exception e) { WriteToLogs(e.ToString()); } } The adding, refreshing to a datagrid and processing folder logic is all on the UI thread. Bad? Should the logic be moved to another class and just make a method on the UI for adding rows / refreshing the data grid? Crawling Directory object. 
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.IO; using System.Text.RegularExpressions; namespace Movie_Management { class FindMovies { private static string[] AllowedVideoTypes = new string[] {"\\.avi", "\\.mp4", "\\.wmp", "\\.mkv", "\\.m4v" }; private static long minVideoLength = 1000000; public static List<string> CrawlDirectory(string Path, bool ignoreSamples, bool replacePeriods) { List<string> videosFound = new List<string>(); foreach (var file in Directory.GetFiles(Path)) { string replacement = file; if (replacePeriods) { string endString = file.Substring(file.Length - 4); replacement = file.Substring(0, file.Length - 4); replacement = replacement.Replace(".", " "); replacement = replacement + endString; if (file != replacement) File.Move(file, replacement); } bool bigEnough = false; FileInfo videoInfo = new FileInfo(replacement); if (videoInfo.Length > minVideoLength) bigEnough = true; if (IsVideo(replacement, ignoreSamples) && bigEnough) videosFound.Add(replacement); } foreach (var dir in Directory.GetDirectories(Path)) { string cleanedDir = dir; if (replacePeriods) { cleanedDir = dir.Replace(".", " "); if (cleanedDir != dir) { try { Directory.Move(dir, cleanedDir); } catch (Exception) { break; } } } foreach (var file in Directory.GetFiles(cleanedDir)) { string replacement = file; if (replacePeriods) { string endString = file.Substring(file.Length - 4); replacement = file.Substring(0, file.Length - 4); replacement = replacement.Replace(".", " "); replacement = replacement + endString; try { if (file != replacement) File.Move(file, replacement); } catch (Exception) { break; } } bool bigEnough = false; FileInfo videoInfo = new FileInfo(replacement); if (videoInfo.Length > minVideoLength) bigEnough = true; if (IsVideo(replacement) && bigEnough) videosFound.Add(replacement); } } return videosFound; } public static void SaveSettings(List<string> Folders, string Dir) { if 
(!Directory.Exists(Dir)) Directory.CreateDirectory(Dir); string FilePath = Dir + "video_folders.dat"; if (!File.Exists(FilePath)) File.Create(FilePath).Dispose(); StreamWriter file = new StreamWriter(FilePath); for (int i = 0; i < Folders.Count; i++) { file.WriteLine(Folders[i]); } file.Close(); } public static string GetType(string input, bool ignoreSamples = false) { for (int i = 0; i < AllowedVideoTypes.Length; i++) { bool ignore = false; if (ignoreSamples) { Match containsSamples = Regex.Match(input, "sample", RegexOptions.IgnoreCase); if(containsSamples.Success) ignore = true; } if(!ignore) { Match rightType = Regex.Match(input, AllowedVideoTypes[i], RegexOptions.IgnoreCase); if (rightType.Success) return rightType.Value.ToString(); } } return null; } private static bool IsVideo(string File, bool ignoreSamples = true) { for (int i = 0; i < AllowedVideoTypes.Length; i++) { bool ignore = false; if (ignoreSamples) { Match containsSamples = Regex.Match(File, "sample", RegexOptions.IgnoreCase); if (containsSamples.Success) { ignore = true; return false; } } if (!ignore) { Match rightType = Regex.Match(File, AllowedVideoTypes[i], RegexOptions.IgnoreCase); if (rightType.Success) return true; } } return false; } } } The way that I store whether or not the check box is checked bugs me; I feel like there is a better way that I don't know. Answer: I wanted to implement some sort of n-tier Architecture I doubt that you want to do that, since n-tier architectures focus on physical separation of concerns, which would for your scope not make much sense. In any case, it is always a good idea to logically separate your concerns. Your question about how to structure the code depends on several things: Which concerns/responsibilities do you have? Which quality concerns do you have, e.g. maintainability, maximum reuse, etc.? What is likely to be changed or extended in the future? Answering these conceptual kinds of questions is commonly known as design.
For producing clean code, readability and name expressiveness are important. Focusing on these properties is my personal opinion; what clean code is in general is an ongoing discussion in the field. However, for beginners, I'd suggest these to begin with. Considering these more general comments should already change your thinking. With respect to your code, here are my comments: The processSelectedPath method actually has two responsibilities: duplicate checking by using the DataGrid and actually processing the folders. Both responsibilities should be separated into two methods. Including the duplicate check that uses the datagrid view in a new thread is not a good idea, since the datagrid view is itself created in the UI thread. Since the UI should be only responsible for showing a view on your data, it should not act as the data representation itself, i.e. checking for duplicates should not be done by making use of the UI, but by using a data representation that is independent of the UI. This could be a class maintaining a list of which folders have been crawled already. The datagrid can be data bound to this list, so it is always up-to-date. An indication that a method has too many responsibilities is that you cannot find an expressive name for it, as is the case with "processFolders". "ProcessXXX" is in general a bad pattern for naming, since it can mean anything. In your case however, you a) look recursively for videos in a directory structure, b) update the UI and c) persist the found videos. These should also go in their own methods. All behavior that is not related to UI updating should be put in separate classes and your UI Form should get a public method that accepts an object that is then displayed. This method is invoked by your business logic class and passed the information it needs. In this way the folder crawling does not need to know anything about UI and the interface between UI and business logic is clearly defined, i.e.
it consists of a data object that has only the minimum data the UI needs to perform its job. For your interest, this is what is commonly known as Model View Presenter Pattern (MVP), if you want to further read on that. All in all, your code structure should be driven by minimizing interfaces and modularizing responsibilities, since these are integral ingredients of good and maintainable software.
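As a rough illustration of the Model View Presenter separation the answer recommends, here is a minimal Python sketch. All class and method names here are invented for illustration; they are not taken from the original C# code.

```python
class CrawlResult:
    """Plain data object: the only thing the view needs to know about."""
    def __init__(self, folder, video_count):
        self.folder = folder
        self.video_count = video_count

class View:
    """Stands in for the WinForms form; it only displays data it is given."""
    def __init__(self):
        self.rows = []
    def show_result(self, result):
        self.rows.append((result.folder, result.video_count))

class Presenter:
    """Business logic: crawling state and duplicate checks live here, not in the UI."""
    def __init__(self, view):
        self.view = view
        self.seen = set()
    def add_directory(self, folder, videos):
        if folder in self.seen:  # duplicate check against model state, not UI rows
            return
        self.seen.add(folder)
        self.view.show_result(CrawlResult(folder, len(videos)))

view = View()
presenter = Presenter(view)
presenter.add_directory("/movies", ["a.mkv", "b.mp4"])
presenter.add_directory("/movies", ["a.mkv"])  # duplicate, ignored
print(view.rows)  # [('/movies', 2)]
```

Note how the view never reaches into the presenter's data and the presenter never touches widgets directly, which is exactly the interface-minimizing structure the answer describes.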
{ "domain": "codereview.stackexchange", "id": 18222, "tags": "c#, winforms" }
Measurement of Spin in different directions
Question: Imagine you have an electron in the spin state $x=\frac{1}{\sqrt{2}}(i,1)^T$, which is an eigenvector of the Pauli Matrix for the y-direction. Now I want to know what are the possible results for a measurement in the z-direction. Of course, it is clear that it must be either $\tfrac 12\hbar$ or $-\tfrac 12\hbar$. But I am confused with the associated eigenvalue equations. A measurement of spin in the z direction would mean that we apply the spin operator for the z direction to our spin state $x$. But when I do this, I get: $S_z \cdot x= \tfrac 12\hbar y$, where y is almost equal to $x$ but with switched signs of the components. I expected it to be equal to $x$. Where am I going wrong? Ok, I see that the vector $x$ might not be an eigenvector of $S_z$ but then how do we set up a proper equation? EDIT: Or is it as simple as just saying that in any spin state, the possible values for measuring spin in any direction (x,y,z) are h/2 or -h/2? Answer: Applying the operator to a state is not the same as a measurement. To perform a measurement you have to look at the eigenvectors of the operator. The $S_z$ operator has the eigenvectors $\pmatrix{1\\0},\pmatrix{0\\1}$ in the z-basis. After the measurement, the state is in either of those eigenstates. The probability of getting a certain outcome is given by the overlap of the state with the eigenvector: \begin{align} p(\uparrow)&=\left|\langle \uparrow|x\rangle\right|^2\\ &=\left|\pmatrix{1& 0}\frac{1}{\sqrt 2}\pmatrix{i\\1}\right|^2\\ &=\frac 1 2 \end{align} A nice, equivalent way to carry out this process is to expand your vector in the basis of your measurement operator and then read off the coefficients. If your measurement operator is Hermitian, its eigenvectors form an orthonormal basis and we can always do this. \begin{align} |\psi\rangle=\alpha_1|1\rangle+\alpha_2|2\rangle+\alpha_3|3\rangle\\ \implies p(1)=|\alpha_1|^2=|\langle1|\psi\rangle|^2 \end{align}
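The probability calculation in the answer can be checked numerically. Here is a short NumPy sketch, assuming the z-basis column-vector convention used above:

```python
import numpy as np

# State |x> = (i, 1)/sqrt(2), an eigenvector of S_y, written in the z-basis
x = np.array([1j, 1]) / np.sqrt(2)
up = np.array([1, 0])    # S_z eigenvector, eigenvalue +hbar/2
down = np.array([0, 1])  # S_z eigenvector, eigenvalue -hbar/2

# np.vdot conjugates its first argument, so this is exactly |<up|x>|^2
p_up = abs(np.vdot(up, x)) ** 2
p_down = abs(np.vdot(down, x)) ** 2
print(p_up, p_down)  # both 0.5 (up to floating point)
```

Both outcomes come out at probability 1/2, consistent with the overlap calculation in the answer.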
{ "domain": "physics.stackexchange", "id": 95609, "tags": "quantum-mechanics, quantum-spin" }
Have we detected galaxies which have red-shifted beyond the visible light range?
Question: According to this answer it is possible for galaxies' light to move beyond the visible frequencies due to redshift: It is possible that eventually the light from them could move into the infrared and even the microwave in extreme cases Could this have already happened? Have we already looked for it? Is there a telescope with the instrumentation to detect galaxies' 'light' at these wavelengths? Answer: Yes, of course. Many, many examples. Telescopes work in the infrared, far-infrared and there are even samples of galaxies that are selected on the basis of their mm emission. The most distant galaxies detected now have redshifts of 10 or more (see for example here). This means the wavelength of their light has been stretched by a factor $1+z$ - i.e. by a factor of 11. Thus light in the visible range, say 500nm, now appears at a wavelength of 5.5 microns, in the infrared. Telescopes that work in this range include the Spitzer space telescope; the James Webb Space Telescope and many ground-based telescopes. Observations of highly redshifted galaxies are routinely made at infrared wavelengths on telescopes all around the world. Galaxies are also detected in the far infrared by the Herschel satellite or at mm (getting on for microwave) wavelengths by JCMT or the ALMA telescope.
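The wavelength stretch quoted in the answer is simple to compute directly; a tiny sketch (the function name is my own):

```python
def observed_wavelength_nm(emitted_nm, z):
    # Cosmological redshift stretches every wavelength by a factor (1 + z)
    return emitted_nm * (1 + z)

# Visible light at 500 nm emitted by a z = 10 galaxy:
print(observed_wavelength_nm(500, 10))  # 5500 nm = 5.5 microns, in the infrared
```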
{ "domain": "astronomy.stackexchange", "id": 626, "tags": "observational-astronomy, redshift, deep-sky-observing" }
rviz not launching with MoveIt config launch file
Question: Hello, System: Virtualbox VM Linux: Ubuntu 16.04 ROS: Kinetic I am following the tutorials on using MoveIt with ROS. I have tried 2 tutorials both here and here. The problem occurs in both tutorials after creating the MoveIt config file and attempting to launch it with roslaunch: roslaunch myworkcell_moveit_config myworkcell_planning_execution.launch The Rviz splashscreen shows up with the initialising status but does not progress from this. There is an error about the planner: [ERROR] [1560344053.702727103]: Could not find the planner configuration 'None' on the param server It even states in the terminal: You can start planning now. I am able to run rviz fine on its own with: rosrun rviz rviz The rest of the output in the terminal is as follows: ` SUMMARY ======== PARAMETERS * /joint_state_publisher/source_list: ['move_group/fake... * /joint_state_publisher/use_gui: False * /move_group/allow_trajectory_execution: True * /move_group/capabilities: * /move_group/controller_list: [{'joints': ['joi... * /move_group/disable_capabilities: * /move_group/jiggle_fraction: 0.05 * /move_group/manipulator/default_planner_config: None * /move_group/manipulator/longest_valid_segment_fraction: 0.005 * /move_group/manipulator/planner_configs: ['SBL', 'EST', 'L... * /move_group/manipulator/projection_evaluator: joints(joint_a1,j... * /move_group/max_range: 5.0 * /move_group/max_safe_path_cost: 1 * /move_group/moveit_controller_manager: moveit_fake_contr... 
* /move_group/moveit_manage_controllers: True * /move_group/octomap_resolution: 0.025 * /move_group/planner_configs/BFMT/balanced: 0 * /move_group/planner_configs/BFMT/cache_cc: 1 * /move_group/planner_configs/BFMT/extended_fmt: 1 * /move_group/planner_configs/BFMT/heuristics: 1 * /move_group/planner_configs/BFMT/nearest_k: 1 * /move_group/planner_configs/BFMT/num_samples: 1000 * /move_group/planner_configs/BFMT/optimality: 1 * /move_group/planner_configs/BFMT/radius_multiplier: 1.0 * /move_group/planner_configs/BFMT/type: geometric::BFMT * /move_group/planner_configs/BKPIECE/border_fraction: 0.9 * /move_group/planner_configs/BKPIECE/failed_expansion_score_factor: 0.5 * /move_group/planner_configs/BKPIECE/min_valid_path_fraction: 0.5 * /move_group/planner_configs/BKPIECE/range: 0.0 * /move_group/planner_configs/BKPIECE/type: geometric::BKPIECE * /move_group/planner_configs/BiEST/range: 0.0 * /move_group/planner_configs/BiEST/type: geometric::BiEST * /move_group/planner_configs/BiTRRT/cost_threshold: 1e300 * /move_group/planner_configs/BiTRRT/frountier_node_ratio: 0.1 * /move_group/planner_configs/BiTRRT/frountier_threshold: 0.0 * /move_group/planner_configs/BiTRRT/init_temperature: 100 * /move_group/planner_configs/BiTRRT/range: 0.0 * /move_group/planner_configs/BiTRRT/temp_change_factor: 0.1 * /move_group/planner_configs/BiTRRT/type: geometric::BiTRRT * /move_group/planner_configs/EST/goal_bias: 0.05 * /move_group/planner_configs/EST/range: 0.0 * /move_group/planner_configs/EST/type: geometric::EST * /move_group/planner_configs/FMT/cache_cc: 1 * /move_group/planner_configs/FMT/extended_fmt: 1 * /move_group/planner_configs/FMT/heuristics: 0 * /move_group/planner_configs/FMT/nearest_k: 1 * /move_group/planner_configs/FMT/num_samples: 1000 * /move_group/planner_configs/FMT/radius_multiplier: 1.1 * /move_group/planner_configs/FMT/type: geometric::FMT * /move_group/planner_configs/KPIECE/border_fraction: 0.9 * 
/move_group/planner_configs/KPIECE/failed_expansion_score_factor: 0.5 * /move_group/planner_configs/KPIECE/goal_bias: 0.05 * /move_group/planner_configs/KPIECE/min_valid_path_fraction: 0.5 * /move_group/planner_configs/KPIECE/range: 0.0 * /move_group/planner_configs/KPIECE/type: geometric::KPIECE * /move_group/planner_configs/LBKPIECE/border_fraction: 0.9 * /move_group/planner_configs/LBKPIECE/min_valid_path_fraction: 0.5 * /move_group/planner_configs/LBKPIECE/range: 0.0 * /move_group/planner_configs/LBKPIECE/type: geometric::LBKPIECE * /move_group/planner_configs/LBTRRT/epsilon: 0.4 * /move_group/planner_configs/LBTRRT/goal_bias: 0.05 * /move_group/planner_configs/LBTRRT/range: 0.0 * /move_group/planner_configs/LBTRRT/type: geometric::LBTRRT * /move_group/planner_configs/LazyPRM/range: 0.0 * /move_group/planner_configs/LazyPRM/type: geometric::LazyPRM * /move_group/planner_configs/LazyPRMstar/type: geometric::LazyPR... * /move_group/planner_configs/PDST/type: geometric::PDST * /move_group/planner_configs/PRM/max_nearest_neighbors: 10 * /move_group/planner_configs/PRM/type: geometric::PRM * /move_group/planner_configs/PRMstar/type: geometric::PRMstar * /move_group/planner_configs/ProjEST/goal_bias: 0.05 * /move_group/planner_configs/ProjEST/range: 0.0 * /move_group/planner_configs/ProjEST/type: geometric::ProjEST * /move_group/planner_configs/RRT/goal_bias: 0.05 * /move_group/planner_configs/RRT/range: 0.0 * /move_group/planner_configs/RRT/type: geometric::RRT * /move_group/planner_configs/RRTConnect/range: 0.0 * /move_group/planner_configs/RRTConnect/type: geometric::RRTCon... 
* /move_group/planner_configs/RRTstar/delay_collision_checking: 1 * /move_group/planner_configs/RRTstar/goal_bias: 0.05 * /move_group/planner_configs/RRTstar/range: 0.0 * /move_group/planner_configs/RRTstar/type: geometric::RRTstar * /move_group/planner_configs/SBL/range: 0.0 * /move_group/planner_configs/SBL/type: geometric::SBL * /move_group/planner_configs/SPARS/dense_delta_fraction: 0.001 * /move_group/planner_configs/SPARS/max_failures: 1000 * /move_group/planner_configs/SPARS/sparse_delta_fraction: 0.25 * /move_group/planner_configs/SPARS/stretch_factor: 3.0 * /move_group/planner_configs/SPARS/type: geometric::SPARS * /move_group/planner_configs/SPARStwo/dense_delta_fraction: 0.001 * /move_group/planner_configs/SPARStwo/max_failures: 5000 * /move_group/planner_configs/SPARStwo/sparse_delta_fraction: 0.25 * /move_group/planner_configs/SPARStwo/stretch_factor: 3.0 * /move_group/planner_configs/SPARStwo/type: geometric::SPARStwo * /move_group/planner_configs/STRIDE/degree: 16 * /move_group/planner_configs/STRIDE/estimated_dimension: 0.0 * /move_group/planner_configs/STRIDE/goal_bias: 0.05 * /move_group/planner_configs/STRIDE/max_degree: 18 * /move_group/planner_configs/STRIDE/max_pts_per_leaf: 6 * /move_group/planner_configs/STRIDE/min_degree: 12 * /move_group/planner_configs/STRIDE/min_valid_path_fraction: 0.2 * /move_group/planner_configs/STRIDE/range: 0.0 * /move_group/planner_configs/STRIDE/type: geometric::STRIDE * /move_group/planner_configs/STRIDE/use_projected_distance: 0 * /move_group/planner_configs/TRRT/frountierNodeRatio: 0.1 * /move_group/planner_configs/TRRT/frountier_threshold: 0.0 * /move_group/planner_configs/TRRT/goal_bias: 0.05 * /move_group/planner_configs/TRRT/init_temperature: 10e-6 * /move_group/planner_configs/TRRT/k_constant: 0.0 * /move_group/planner_configs/TRRT/max_states_failed: 10 * /move_group/planner_configs/TRRT/min_temperature: 10e-10 * /move_group/planner_configs/TRRT/range: 0.0 * 
/move_group/planner_configs/TRRT/temp_change_factor: 2.0 * /move_group/planner_configs/TRRT/type: geometric::TRRT * /move_group/planning_plugin: ompl_interface/OM... * /move_group/planning_scene_monitor/publish_geometry_updates: True * /move_group/planning_scene_monitor/publish_planning_scene: True * /move_group/planning_scene_monitor/publish_state_updates: True * /move_group/planning_scene_monitor/publish_transforms_updates: True * /move_group/request_adapters: default_planner_r... * /move_group/sensors: [{'point_subsampl... * /move_group/start_state_max_bounds_error: 0.1 * /move_group/trajectory_execution/allowed_execution_duration_scaling: 1.2 * /move_group/trajectory_execution/allowed_goal_duration_margin: 0.5 * /move_group/trajectory_execution/allowed_start_tolerance: 0.01 * /robot_description: <?xml version="1.... * /robot_description_kinematics/manipulator/kinematics_solver: kdl_kinematics_pl... * /robot_description_kinematics/manipulator/kinematics_solver_attempts: 3 * /robot_description_kinematics/manipulator/kinematics_solver_search_resolution: 0.005 * /robot_description_kinematics/manipulator/kinematics_solver_timeout: 0.005 * /robot_description_planning/joint_limits/joint_a1/has_acceleration_limits: False * /robot_description_planning/joint_limits/joint_a1/has_velocity_limits: True * /robot_description_planning/joint_limits/joint_a1/max_acceleration: 0 * /robot_description_planning/joint_limits/joint_a1/max_velocity: 2.72271363311 * /robot_description_planning/joint_limits/joint_a2/has_acceleration_limits: False * /robot_description_planning/joint_limits/joint_a2/has_velocity_limits: True * /robot_description_planning/joint_limits/joint_a2/max_acceleration: 0 * /robot_description_planning/joint_limits/joint_a2/max_velocity: 2.72271363311 * /robot_description_planning/joint_limits/joint_a3/has_acceleration_limits: False * /robot_description_planning/joint_limits/joint_a3/has_velocity_limits: True * 
/robot_description_planning/joint_limits/joint_a3/max_acceleration: 0 * /robot_description_planning/joint_limits/joint_a3/max_velocity: 2.72271363311 * /robot_description_planning/joint_limits/joint_a4/has_acceleration_limits: False * /robot_description_planning/joint_limits/joint_a4/has_velocity_limits: True * /robot_description_planning/joint_limits/joint_a4/max_acceleration: 0 * /robot_description_planning/joint_limits/joint_a4/max_velocity: 5.75958653158 * /robot_description_planning/joint_limits/joint_a5/has_acceleration_limits: False * /robot_description_planning/joint_limits/joint_a5/has_velocity_limits: True * /robot_description_planning/joint_limits/joint_a5/max_acceleration: 0 * /robot_description_planning/joint_limits/joint_a5/max_velocity: 5.75958653158 * /robot_description_planning/joint_limits/joint_a6/has_acceleration_limits: False * /robot_description_planning/joint_limits/joint_a6/has_velocity_limits: True * /robot_description_planning/joint_limits/joint_a6/max_acceleration: 0 * /robot_description_planning/joint_limits/joint_a6/max_velocity: 10.7337748998 * /robot_description_semantic: <?xml version="1.... * /rosdistro: kinetic * /rosversion: 1.12.14 * /rviz_talha_VirtualBox_2209_1189131161637441768/manipulator/kinematics_solver: kdl_kinematics_pl... 
* /rviz_talha_VirtualBox_2209_1189131161637441768/manipulator/kinematics_solver_attempts: 3 * /rviz_talha_VirtualBox_2209_1189131161637441768/manipulator/kinematics_solver_search_resolution: 0.005 * /rviz_talha_VirtualBox_2209_1189131161637441768/manipulator/kinematics_solver_timeout: 0.005 NODES / joint_state_publisher (joint_state_publisher/joint_state_publisher) move_group (moveit_ros_move_group/move_group) robot_state_publisher (robot_state_publisher/robot_state_publisher) rviz_talha_VirtualBox_2209_1189131161637441768 (rviz/rviz) ROS_MASTER_URI=http://localhost:11311 process[joint_state_publisher-1]: started with pid [2226] process[robot_state_publisher-2]: started with pid [2227] process[move_group-3]: started with pid [2228] [ WARN] [1560344053.452253758]: The root link base_link has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF. process[rviz_talha_VirtualBox_2209_1189131161637441768-4]: started with pid [2249] [ INFO] [1560344053.496640076]: Loading robot model 'kuka_kr16_2'... [ INFO] [1560344053.570688196]: Loading robot model 'kuka_kr16_2'... [ WARN] [1560344053.574339033]: The root link base_link has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF. [ INFO] [1560344053.575414819]: rviz version 1.12.17 [ INFO] [1560344053.575455888]: compiled against Qt version 5.5.1 [ INFO] [1560344053.575467911]: compiled against OGRE version 1.9.0 (Ghadamon) [ INFO] [1560344053.613958606]: Publishing maintained planning scene on 'monitored_planning_scene' [ INFO] [1560344053.617565517]: MoveGroup debug mode is ON Starting context monitors... 
[ INFO] [1560344053.617611959]: Starting scene monitor [ INFO] [1560344053.621063575]: Listening to '/planning_scene' [ INFO] [1560344053.621105594]: Starting world geometry monitor [ INFO] [1560344053.624478374]: Listening to '/collision_object' using message notifier with target frame '/base_link ' [ INFO] [1560344053.627665820]: Listening to '/planning_scene_world' for planning scene world geometry [ INFO] [1560344053.671201746]: Listening to '/head_mount_kinect/depth_registered/points' using message filter with target frame '/base_link ' [ INFO] [1560344053.677909803]: Listening to '/attached_collision_object' for attached collision objects Context monitors started. [ INFO] [1560344053.697808790]: Initializing OMPL interface using ROS parameters [ERROR] [1560344053.702727103]: Could not find the planner configuration 'None' on the param server [ INFO] [1560344053.730134588]: Using planning interface 'OMPL' [ INFO] [1560344053.735427908]: Param 'default_workspace_bounds' was not set. Using default value: 10 [ INFO] [1560344053.736112214]: Param 'start_state_max_bounds_error' was set to 0.1 [ INFO] [1560344053.736775365]: Param 'start_state_max_dt' was not set. Using default value: 0.5 [ INFO] [1560344053.737461052]: Param 'start_state_max_dt' was not set. Using default value: 0.5 [ INFO] [1560344053.738201938]: Param 'jiggle_fraction' was set to 0.05 [ INFO] [1560344053.738845543]: Param 'max_sampling_attempts' was not set. 
Using default value: 100 [ INFO] [1560344053.738943580]: Using planning request adapter 'Add Time Parameterization' [ INFO] [1560344053.738979920]: Using planning request adapter 'Fix Workspace Bounds' [ INFO] [1560344053.739012586]: Using planning request adapter 'Fix Start State Bounds' [ INFO] [1560344053.739172871]: Using planning request adapter 'Fix Start State In Collision' [ INFO] [1560344053.739207349]: Using planning request adapter 'Fix Start State Path Constraints' [ INFO] [1560344053.742072959]: Stereo is NOT SUPPORTED [ INFO] [1560344053.742138831]: OpenGl version: 3 (GLSL 1.3). [ INFO] [1560344053.749284081]: Fake controller 'fake_manipulator_controller' with joints [ joint_a1 joint_a2 joint_a3 joint_a4 joint_a5 joint_a6 ] [ INFO] [1560344053.750216084]: Returned 1 controllers in list [ INFO] [1560344053.775381406]: Trajectory execution is managing controllers Loading 'move_group/ApplyPlanningSceneService'... Loading 'move_group/ClearOctomapService'... Loading 'move_group/MoveGroupCartesianPathService'... Loading 'move_group/MoveGroupExecuteTrajectoryAction'... Loading 'move_group/MoveGroupGetPlanningSceneService'... Loading 'move_group/MoveGroupKinematicsService'... Loading 'move_group/MoveGroupMoveAction'... Loading 'move_group/MoveGroupPickPlaceAction'... Loading 'move_group/MoveGroupPlanService'... Loading 'move_group/MoveGroupQueryPlannersService'... Loading 'move_group/MoveGroupStateValidationService'... 
[ INFO] [1560344053.864494438]: ******************************************************** * MoveGroup using: * - ApplyPlanningSceneService * - ClearOctomapService * - CartesianPathService * - ExecuteTrajectoryAction * - GetPlanningSceneService * - KinematicsService * - MoveAction * - PickPlaceAction * - MotionPlanService * - QueryPlannersService * - StateValidationService ******************************************************** [ INFO] [1560344053.864554545]: MoveGroup context using planning plugin ompl_interface/OMPLPlanner [ INFO] [1560344053.864575536]: MoveGroup context initialization complete You can start planning now! ` Originally posted by mt on ROS Answers with karma: 61 on 2019-06-12 Post score: 0 Answer: EDIT: Just found out that I had been running a roscore hidden behind the many terminal windows which was the culprit. The launch file for the MoveIt config launches its own master so there probably is a conflict there. Just tested with the original ompl_planning.yaml file and it ran fine although still gave the error could not find the planner configuration 'None' on the param server but that is probably a non-issue. Originally posted by mt with karma: 61 on 2019-06-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2019-06-12: Which version of MoveIt are you using? Afaik this has been fixed quite some time ago, so either the MoveIt configuration is outdated, or the MoveIt version you are using. Comment by mt on 2019-06-12: 0.9.15. Apparently the latest version under kinetic when running the command apt install ros-kinetic-moveit Comment by gvdhoorn on 2019-06-12: And where did you get the moveit configuration package? Comment by mt on 2019-06-12: Generated myself using the setup assistant as in the tutorials Comment by mt on 2019-06-12: Just updated answer above Comment by gvdhoorn on 2019-06-12: So there was essentially no real problem then? Comment by mt on 2019-06-12: No. Just my oversight
{ "domain": "robotics.stackexchange", "id": 33165, "tags": "rviz, moveit, ros-kinetic, ros-industrial" }
universal_robot package for ROS Kinetic
Question: Currently I am working on some movement planning projects with UR3, and one of the ideas is to use a Raspberry Pi 3 model B as a computing center for the robot's motion planning. The issue is that the Ubuntu development team officially supports only the Ubuntu 16.04 Xenial version for RPi3, and even the Ubuntu Trusty image for Raspberry Pi 2 is no longer maintained. As we tested, there is no problem installing Ubuntu Xenial with Kinetic on RPi3, but the problem is that officially the universal_robot driver is not compatible with Kinetic, so it won't work. Are there plans to develop a ROS Kinetic version of the driver, compatible with the MoveIt planner? Maybe there are other solutions for computing boards with Trusty + Indigo, do you know of any? Thanks in advance. Originally posted by Ilya Boyur on ROS Answers with karma: 21 on 2017-02-13 Post score: 0 Answer: Edit: ur_modern_driver has Kinetic support and is now hosted at ros-industrial/ur_modern_driver. Be sure to use the kinetic-devel branch. Original answer: I prefer the ur_modern_driver over ur_driver (in universal_robot). Github page here. It doesn't have official Kinetic support yet either but there is a user fork available that should work as discussed in issue 58. I would hope/expect that a Kinetic branch will be added soon as there is significant demand. Originally posted by BrettHemes with karma: 525 on 2017-02-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2017-02-13: I agree about ur_modern_driver being the preferred package, but just to clarify: only for the driver. universal_robot still provides the rest of the packages (robot support, moveit, etc). Comment by Ilya Boyur on 2017-02-14: And because of that support, it is not that clear when we are talking about Kinetic compatibility. I am using ur_modern_driver too, but it also has dependencies on universal_robot, and that package is just a replacement for the old ur_driver
{ "domain": "robotics.stackexchange", "id": 27004, "tags": "ros, moveit, ubuntu, universal-robot, ubuntu-xenial" }
D minMax function implementation
Question: I happened to need to find both the maximum and the minimum of an array, so I decided to implement a minMax function in D. Questions: Is this an efficient implementation? Is it readable? Any general feedback is appreciated. Thanks! import std.range; import std.traits; import std.typecons; import std.functional; class EmptyContainerException : Exception { this(string msg, string file = __FILE__, size_t line = __LINE__) { super(msg, file, line); } } // returns a tuple containing the minimum and maximum elements of a finite InputRange template minMax(alias pred = "a") { alias map = unaryFun!pred; auto minMax(Range)(Range r) if(isInputRange!Range && !isInfinite!Range && is(typeof(map(r.front)))) { if(r.empty) { throw new EmptyContainerException( "minMax expected a container with at least 1 element, got an empty container." ); } auto front = r.front; auto frontMap = map(front); auto min = front; auto max = front; auto minMap = frontMap; auto maxMap = frontMap; r.popFront; while(!r.empty) { front = r.front; frontMap = map(front); if(frontMap > maxMap) { max = front; maxMap = frontMap; } if(frontMap < minMap) { min = front; minMap = frontMap; } r.popFront; } return tuple!("min", "max")(min, max); } } unittest { import std.exception : assertThrown; int[] test1 = [1, 2, 3, 4, 5]; auto test1result = test1.minMax; assert(test1result.min == 1 && test1result.max == 5); string test2 = "Hello, World!"; auto test2result = test2.minMax; assert(test2result.min == ' ' && test2result.max == 'r'); int[] test3 = []; assertThrown!EmptyContainerException(test3.minMax); string[] test4 = ["who", "is", "the", "longest", "word"]; auto test4result = test4.minMax!"a.length"; assert(test4result.min == "is" && test4result.max == "longest"); } Answer: Is this an efficient implementation? Yup. It calls map exactly once per item, and otherwise does nothing but the regular range calls. 
A specialized implementation could be more efficient in some cases (but probably only if using vector instructions), as could a parallel implementation, but neither of these have the genericity that your code does. Is it readable? Absolutely, it's some of the most readable code I've seen. The content of the loop is somewhat dense. You could perhaps make it less dense like this: auto front = tuple!("elem", "map")(r.front, map(r.front)); auto min = front; auto max = front; r.popFront; foreach (e; r) { front = tuple(e, map(e)); if(front.map > max.map) max = front; if(front.map < min.map) min = front; } return tuple!("min", "max")(min.elem, max.elem); Apart from that, I got nuthin'. Any general feedback is appreciated. As hinted at in the section above, instead of while (!empty), you might want to use foreach. This reduces your code by one line (r.popFront()), and makes it clearer that you're going to iterate over every element of the range. It shouldn't change the performance of the code in any way. You should put an if (is(typeof(unaryFun!pred))) constraint on the template, so the user of the function gets an error message on the line where he or she tries to instantiate it with arr.minMax!"invalid string". Also, the name pred is wrong - a predicate is a boolean-valued function, like equal, same color or contains bees. I'd call it Fn or Fun. As a minor nit, I'd also change the name of map, probably to fn or fun (note the difference in capitalization). This because I kept conflating your map with std.algorithm.map. The concept is correct, but essentially the name is already taken. All in all, very good - my only comments are essentially nitpicks.
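The single-pass pattern the answer praises (one key evaluation per element, minimum and maximum tracked together) can be sketched in Python for comparison; the names `min_max` and `key` here are my own, not from the post:

```python
# Single-pass min/max with a key function, mirroring the D version's behavior:
# exactly one key() call per element, and an error on an empty input.
def min_max(iterable, key=lambda a: a):
    it = iter(iterable)
    try:
        first = next(it)
    except StopIteration:
        raise ValueError("min_max expected at least 1 element, got an empty iterable")
    lo = hi = first
    lo_k = hi_k = key(first)
    for item in it:          # one pass, one key() call per element
        k = key(item)
        if k > hi_k:
            hi, hi_k = item, k
        if k < lo_k:
            lo, lo_k = item, k
    return lo, hi

print(min_max([1, 2, 3, 4, 5]))                                   # (1, 5)
print(min_max(["who", "is", "the", "longest", "word"], key=len))  # ('is', 'longest')
```

Like the D code, this returns the original elements (not the key values), which matters for the `"a.length"`-style use case in the unittest.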
{ "domain": "codereview.stackexchange", "id": 31006, "tags": "unit-testing, template, iteration, d" }
Only QwtPlot working. No graph in PyQtGraph and MatPlot
Question: Hi, I am trying to plot my data and am having some trouble. The problem is that only QwtPlot is able to plot my data. I like PyQtGraph the best so I would like to use this. At first glance it looks like it doesn't receive any data, but when I look at the Y-axis it autoscrolls according to the data on the topic. The X-axis is not moving though. I am able to plot other topics with both PyQtGraph and MatPlot, so this is an event only occurring when plotting this topic. This is the message I am sending: Header header string[] name float64[] position float64[] velocity float64[] effort I am trying to get the position on the plot. Does anyone know what I'm doing wrong here? Originally posted by eirikaso on ROS Answers with karma: 108 on 2014-05-31 Post score: 0 Answer: Do the messages you're sending have the timestamp in the header set properly? The matplotlib and pyqtgraph plots use the timestamps for the X axis, and the qwtplot plots only use the received message order. Originally posted by ahendrix with karma: 47576 on 2014-05-31 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by eirikaso on 2014-05-31: Probably not. How do I do that? Comment by ahendrix on 2014-05-31: Before you publish your message, just do msg.header.stamp = ros::Time::now(); in C++ or msg.header.stamp = rospy.Time.now() in python. Comment by eirikaso on 2014-05-31: NICE. Thanks :D
{ "domain": "robotics.stackexchange", "id": 18121, "tags": "rqt-plot" }
Identifying flying insect from northern Poland / Baltic Sea area
Question: Found during this year's vacation at the Baltic Sea (northern parts of Poland): I don't have a picture containing another object for size reference, but I'd say it was 3-4 cm long. There were two interesting characteristics of this insect that I haven't seen in any other: it would not fly or run away when being chased or waved at by a hand; you actually had to snap it with a finger and send it airborne to make it fly away. Once touched by a finger, or upon falling from a certain height, it would play dead, but when you tried to remove the "dead body" it turned out in most cases that it was alive and it flew off. This is all I can tell about this insect. Can someone help me identify what species it is? Answer: It is possibly a vine weevil (Otiorhynchus sulcatus). Others of the family:
{ "domain": "biology.stackexchange", "id": 4449, "tags": "entomology, species-identification, taxonomy" }
Compute random triplets of float pairs and average area of generated triangles
Question: I am trying to get in a lot of practice for interviews so I am trying to write as much code as possible, and getting feed back from excellent programmers like "you". Can I get some general feed back on this small bit of code? Any recommendations on style, good practice, etc. is greatly appreciated. Normally I would put the class declarations in their own header files, and the definitions in a .cpp file, write a make file and build using make. However, this is a small program so I just place everything inside one module. /******************************************************************** * Define a data type for triangles in the unit square, including a * function that computes the area of a triangle. Then write a client * program that generates random triples of pairs of floats between 0 * and 1 and computes the average area of the triangles generated. * *****************************************************************/ #include <iostream> #include <math.h> #include <time.h> #include <stdlib.h> using namespace std; class Point { public: Point( ) { ;} Point( float x, float y ) : x( x ), y( y ){ ;} float getX( ) { return x; } float getY( ) { return y; } void setX( float X ) { x = X; } void setY( float Y ) { y = Y; } void print( ) { cout << "("<<x<<", "<<y<<") "<<endl; } private: float x; float y; }; class Line { public: Line( Point a, Point b ) : a( a ), b( b ){ ;} void mid_point( Point &mid ); float length( ); Point getA( ) { return a; } Point getB( ) { return b; } void print( ) { a.print( ); b.print( ); } private: Point a; Point b; }; void Line::mid_point( Point &mid ) { mid.setX( (a.getX( )+b.getX( ))/2.0 ); mid.setY( (a.getY( )+b.getY( ))/2.0 ); } float Line::length( ) { return sqrt( pow( a.getX( )-b.getX( ),2.0 ) + pow( a.getY( )-b.getY( ),2.0 ) ); } class Triangle { public: Triangle( Line A,Line B,Line C ) : A( A ), B( B ), C( C ){ ;} float calc_area( ); void print( ) { cout << "Line A formed from points\n"; A.print( ); cout << "Line B formed from 
points\n"; B.print( ); cout << "Line C formed from points\n"; C.print( ); } private: Line A; Line B; Line C; }; float Triangle::calc_area( ) { Point mid; C.mid_point( mid ); Line height( mid, A.getB( ) ); return 0.5*C.length( )*height.length( ); } int main( int argc, char *argv[ ] ) { srand( time( NULL ) ); //Calculate area of 10 triangles; for( int i=0;i<10;i++ ) { Point p1( (rand( )%10)/10.0,(rand( )%10)/10.0 ); Point p2( (rand( )%10)/10.0,(rand( )%10)/10.0 ); Point p3( (rand( )%10)/10.0,(rand( )%10)/10.0 ); Line l1( p1,p2 ); Line l2( p2,p3 ); Line l3( p3,p1 ); Triangle t( l1,l2,l3 ); t.print( ); cout << "Area = " << t.calc_area( ) << endl << endl; } } Output: (in the case you don't want to copy and pasted the code) [mehoggan@desktop triangle]$ ./tri_area Line A formed from points (0.4, 0.2) (0.6, 0.4) Line B formed from points (0.6, 0.4) (0.8, 0.6) Line C formed from points (0.8, 0.6) (0.4, 0.2) Area = 0 Line A formed from points (0.5, 0.2) (0.6, 0.3) Line B formed from points (0.6, 0.3) (0.5, 0.9) Line C formed from points (0.5, 0.9) (0.5, 0.2) Area = 0.0942404 Line A formed from points (0.5, 0.9) (0.5, 0.8) Line B formed from points (0.5, 0.8) (0.1, 0.2) Line C formed from points (0.1, 0.2) (0.5, 0.9) Area = 0.129059 Line A formed from points (0.3, 0.1) (0.7, 0.2) Line B formed from points (0.7, 0.2) (0.5, 0.4) Line C formed from points (0.5, 0.4) (0.3, 0.1) Area = 0.0548293 Line A formed from points (0.8, 0.1) (0.2, 0.5) Line B formed from points (0.2, 0.5) (0.8, 0.3) Line C formed from points (0.8, 0.3) (0.8, 0.1) Area = 0.067082 Line A formed from points (0.5, 0.8) (0.2, 0.2) Line B formed from points (0.2, 0.2) (0.8, 0.3) Line C formed from points (0.8, 0.3) (0.5, 0.8) Area = 0.166208 Line A formed from points (0.6, 0.2) (0.8, 0.5) Line B formed from points (0.8, 0.5) (0.4, 0.2) Line C formed from points (0.4, 0.2) (0.6, 0.2) Area = 0.0424264 Line A formed from points (0.1, 0.3) (0.1, 0) Line B formed from points (0.1, 0) (0.4, 0.9) Line C formed from 
points (0.4, 0.9) (0.1, 0.3) Area = 0.20744 Line A formed from points (0.2, 0.4) (0.6, 0.9) Line B formed from points (0.6, 0.9) (0.3, 0.9) Line C formed from points (0.3, 0.9) (0.2, 0.4) Area = 0.109659 Line A formed from points (0.1, 0.4) (0.1, 0.3) Line B formed from points (0.1, 0.3) (0.8, 0.5) Line C formed from points (0.8, 0.5) (0.1, 0.4) Area = 0.134629 Answer: using namespace std; is almost always a bad idea. Even in small programs I would avoid it. Your algorithm for calc_area is just plain wrong. The area of a triangle is half the length of a side multiplied by the perpendicular distance from the third vertex to the base line, not the distance from the vertex to the mid-point of the base. (Think of a triangle with vertices at (-1,0), (1,0) and (1, 1). The area of this triangle should be 1, not sqrt(2).) Reviewing your Point class, the default constructor doesn't initialize the member variables. This may be acceptable for performance reasons - for example - if you deliberately want to be able to create large arrays of uninitialized Points but it's often safer to explicitly initialize all class members in a constructor. Having both getters and setters for x and y effectively makes them public data members. The only functionality that Point has is print but this can be provided as a non-member function. Once you've done this your class provides no functionality that a simple struct doesn't. In addition, you can use aggregate initialization for a struct which can be useful. E.g. struct Point { float x; float y; }; // print functionality std::ostream& operator<<( std::ostream& os, const Point& point ) { os << "(" << point.x << ", " << point.y << ") " << std::endl; return os; } This is a lot simpler, although you would have to change initializations. // Point p( p1, p2 ); Point p = { p1, p2 }; One disadvantage is that you can't easily construct a temporary Point with explicit initial values.
If you need to do this you could consider a helper function analogous to std::make_pair. E.g. inline Point MakePoint( float px, float py ) { Point p = { px, py }; return p; } int main() { // FunctionTakingPoint( Point( 1, 2 ) ); FunctionTakingPoint( MakePoint( 1, 2 ) ); } Being a trivial POD-struct, most compilers will have little difficulty in eliding most of the implied copies. There is some argument that Line deserves to be a class as you have no setters for its members, but given that it has little behaviour and the behaviour it has can be provided by free functions I would keep it as a POD struct. Clients can choose to make a const instance should they so choose. Also, I don't see any need to make mid_point take a reference to a struct. It can return by value for more readable code. struct Line { Point a; Point b; }; Point mid_point( const Line& line ) { Point p = { ( line.a.x + line.b.x ) / 2.0, ( line.a.y + line.b.y ) / 2.0 }; return p; } double length( const Line& line ) { double xdiff = line.b.x - line.a.x; double ydiff = line.b.y - line.a.y; return sqrt( xdiff * xdiff + ydiff * ydiff ); } std::ostream& operator<<( std::ostream& os, const Line& line ) { os << line.a << line.b; return os; }
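The reviewer's point that the mid-point construction gives the wrong area is easy to check against the shoelace (cross-product) formula; a quick sketch, in Python rather than the post's C++:

```python
# Shoelace / cross-product area of a triangle from its three vertices.
# This is the correct "half base times perpendicular height" in one formula.
def shoelace_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

# The reviewer's counterexample: (-1,0), (1,0), (1,1) has base 2 and height 1,
# so its area is 1 -- not sqrt(2) as the mid-point construction would give.
print(shoelace_area((-1, 0), (1, 0), (1, 1)))   # 1.0
```

Note that this works for any vertex ordering and never needs a square root, unlike the length-based approach in the post.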
{ "domain": "codereview.stackexchange", "id": 533, "tags": "c++, mathematics, graph" }
Why do lithium and sodium corrode so easily?
Question: I want to know the phenomenon and the explanation behind this corrosion. What is the reaction? Answer: $$\ce{2Li + 2H2O -> 2LiOH + H2}$$ $$\ce{2LiOH + CO2 -> Li2CO3 + H2O}$$ $$\ce{4Li + O2 -> 2Li2O}$$ $$\ce{Li2O + CO2 -> Li2CO3}$$ $$\ce{6Li + N2 -> 2Li3N}$$ Lithium and sodium are reacting with the gases and moisture in the air; a destructive chemical process.
{ "domain": "chemistry.stackexchange", "id": 242, "tags": "inorganic-chemistry" }
error finding ros_control in catkin_make
Question: I was following the ros_control tutorial to control a robot in Gazebo with ROS, but as I try to build the package by running catkin_make I get this error: -- Could not find the required component 'ros_control'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found. CMake Error at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:83 (find_package): Could not find a package configuration file provided by "ros_control" with any of the following names: ros_controlConfig.cmake ros_control-config.cmake Add the installation prefix of "ros_control" to CMAKE_PREFIX_PATH or set "ros_control_DIR" to a directory containing one of the above files. If "ros_control" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): myrobot_control/CMakeLists.txt:7 (find_package) my CMakeLists.txt file is: cmake_minimum_required(VERSION 2.8.3) project(myrobot_control) find_package(catkin REQUIRED COMPONENTS ros_control ros_controllers ) catkin_package() include_directories( ${catkin_INCLUDE_DIRS} ) I have installed the ros-control package with sudo apt-get install ros-indigo-ros-control but it didn't help. Originally posted by hari1234 on Gazebo Answers with karma: 56 on 2016-08-25 Post score: 0 Answer: OK, problem solved: I had to install 2 extra packages which are also not mentioned in the documentation: sudo apt-get install ros-indigo-effort-controllers sudo apt-get install ros-indigo-joint-state-controller Originally posted by hari1234 with karma: 56 on 2016-08-25 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3973, "tags": "gazebo" }
efficient algorithm for min cut with specified number of vertices
Question: Consider a graph with vertices $V$ and edges $E$. The standard version of the min cut problem is to find the partition of $V$ into a (non-empty) subset $C$ and its complement $\bar{C}$ so as to minimize the number of edges going between $C$ and $\bar{C}$. Algorithms are known for this problem which solve it in polynomial time. My question is, what if one additionally specifies the constraint that $|C| = n$ for some $n < |V|$? That is, we wish to find the set of $n$ vertices with the minimal number of edges connecting it to the rest of the vertices. Are there also efficient algorithms for this case? I am interested both in the question of whether this problem is formally solvable in polynomial time (which I would guess that it is) and also in what algorithms are best in practice. Answer: For $n= \frac {|V|} 2$, it's called Minimum Bisection, and it's NP-hard. There exists an $O(\log^{3/2} n)$-approximation: "A polylogarithmic approximation of the minimum bisection". If you are interested, the more general problem is splitting into multiple components of the same size, and it is called Balanced Graph Partitioning. For more than 2 parts no finite approximation exists unless P=NP: "Balanced Graph Partitioning" (Andreev, Rakke), since you can't efficiently check if the answer is 0. If the parts are not necessarily exactly balanced (a small imbalance is allowed), an $O(\log n)$-approximation algorithm exists: "Balanced Partitions of Trees and Applications". Some algorithms (also check https://en.wikipedia.org/wiki/Graph_partition and "references" sections of the following papers): Local search with various flavors: we start with some partitioning and then try to swap vertices between parts to minimize the cut. E.g. we compute "gain" for each vertex (improvement if we move it to another part), and swap vertices with the maximum gain. Its advantage is that you can apply it after any other algorithm. Spectral partitioning (see e.g. 
Spectral Graph Theory and Graph Partitioning): uses the second eigenvector of a Laplacian matrix to approximate the partitioning (e.g. by moving the smallest $|V|/2$ coordinates to the first part). Works surprisingly well. However, I'm not sure it can be adapted to the case when you want an unbalanced bisection (e.g. $1:2$ instead of $1:1$). Linear embedding: "Distributed Balanced Partitioning via Linear Embedding". We embed vertices into a one-dimensional array while minimizing sum over all pairs of vertices: the distance between them multiplied by the weight of their edge. Then we just split this array into consecutive chunks of required sizes. Didn't work that well in my experience. (Ads) Our paper: "Multi-Dimensional Balanced Graph Partitioning via Projected Gradient Descent", where we used gradient descent to find minimum bisection: for each vertex we introduce a variable which roughly represents a probability that the vertex belongs to the first part, and minimizing the cut reduces to constrained optimization of a quadratic function. It's a bit outperformed in practice by a fine-tuned local search, but it works really well when you have multiple balance constraints. Aside from the spectral method, all of them can be trivially adapted to partitioning the graph in arbitrary proportions. There are also standard solvers: KaHIP, METIS. In my experience, KaHIP was pretty good. I'm not sure they support splitting into parts of arbitrary sizes though.
{ "domain": "cs.stackexchange", "id": 16822, "tags": "algorithms, graphs, time-complexity, optimization" }
Hyperparameter Tuning in Random Forest Model
Question: I'm new to the machine learning field and I'm learning ML models by practice, and I'm facing an issue while using a machine learning model. While implementing the RandomForestClassifier model with hyperparameter tuning, it takes too much time to produce output. I'm also using GridSearchCV on it, so it takes even longer. Is there any way I can solve this problem? Or can Google Colab or the Kaggle Notebook editor perform better than Jupyter Notebook? Answer: You can access the GPU by going to the settings: Runtime > Change runtime type and select GPU as Hardware accelerator.
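Aside from hardware, a common way to cut the tuning time the question describes is to replace the exhaustive GridSearchCV with RandomizedSearchCV and parallelize the cross-validation with n_jobs=-1. A sketch on toy data (the grid values and sizes here are illustrative, not a recommendation):

```python
# RandomizedSearchCV samples a fixed number of parameter combinations
# instead of trying every one, and n_jobs=-1 uses all CPU cores.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

param_dist = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
    "max_features": ["sqrt", "log2"],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=5,       # sample 5 of the 18 combinations
    cv=3,
    n_jobs=-1,      # parallelize across all cores
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

For a random forest specifically, shrinking n_estimators during the search and refitting the best configuration with more trees afterwards is another cheap time saver.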
{ "domain": "datascience.stackexchange", "id": 8337, "tags": "machine-learning, python, predictive-modeling, hyperparameter-tuning" }
Simple login with JSP
Question: I made a simple login page with Java EE, JSP, servlets, Tomcat and JDBC. It does the following: logs a user in; registers a user; after login it creates a token for the session, so you can be redirected from the start page if you have already logged in; it can remember your login, in which case the email and token are stored in cookies; logout clears the cookies and the session. Here is the whole app: https://github.com/JulianRNajlepszy/simplelogin/tree/master/simplelogin and here are the Controller class and Account class for review :) package main; import java.io.IOException; import java.io.PrintWriter; import java.sql.Connection; import java.sql.SQLException; import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException; import javax.servlet.ServletConfig; import javax.servlet.ServletException; import javax.servlet.http.Cookie; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import javax.servlet.http.HttpSession; import javax.sql.DataSource; import bean.User; import db.Account; /** * Servlet implementation class Controller */ public class Controller extends HttpServlet { private static final long serialVersionUID = 1L; private DataSource ds; Account account; /** * @see HttpServlet#HttpServlet() */ public Controller() { super(); } /** * @see HttpServlet#HttpServlet() */ public void init(ServletConfig config) throws ServletException { try { InitialContext initContext = new InitialContext(); Context env = (Context) initContext.lookup("java:comp/env"); ds = (DataSource) env.lookup("jdbc/loginjspjdbcDB"); Connection conn = null; try { conn = ds.getConnection(); } catch (SQLException e) { // TODO Auto-generated catch block e.printStackTrace(); return; } this.account = new Account(conn); } catch (NamingException e) { throw new ServletException(); } } /** * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse * response) */ protected void doGet(HttpServletRequest request, 
HttpServletResponse response) throws ServletException, IOException { HttpSession session = request.getSession(); String page = (String) request.getParameter("page"); request.setAttribute("email", ""); request.setAttribute("message", ""); if (session.getAttribute("email") == null) { String cookieEmail = getCookie(request, "email"); if(cookieEmail != null) { session.setAttribute("email", cookieEmail); } } if (session.getAttribute("token") == null) { String cookieToken = getCookie(request, "token"); if(cookieToken != null) { session.setAttribute("token", cookieToken); } } if (page == null) { try { String email = (String) session.getAttribute("email"); String token = (String) session.getAttribute("token"); if (account.isLoginNow(email, token)) { request.setAttribute("email", session.getAttribute("email")); request.getRequestDispatcher("/succes.jsp").forward(request, response); return; } } catch (SQLException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } request.getRequestDispatcher("/index.jsp").forward(request, response); } else if (page.equals("login")) { request.getRequestDispatcher("/login.jsp").forward(request, response); } else if (page.equals("register")) { request.getRequestDispatcher("/register.jsp").forward(request, response); } else { PrintWriter out = response.getWriter(); out.print("<html><h1>404</h1></html>"); } } /** * @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse * response) */ protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { HttpSession session = request.getSession(); PrintWriter out = response.getWriter(); String action = request.getParameter("action"); request.setAttribute("email", request.getParameter("email")); if (action.equals("login")) { String email = request.getParameter("email"); String password = request.getParameter("password"); String tmpRemember = request.getParameter("remember"); boolean remember = tmpRemember != null && 
tmpRemember.equals("true"); try { if (!account.isLoginExist(email)) { request.setAttribute("message", "email doesn't exist"); request.getRequestDispatcher("/login.jsp").forward(request, response); return; } if (!account.login(email, password)) { request.setAttribute("message", "bad password"); request.getRequestDispatcher("/login.jsp").forward(request, response); return; } } catch (SQLException e) { // TODO Auto-generated catch block e.printStackTrace(); } String token = TokenGenerator.generate(20); session.setAttribute("email", email); session.setAttribute("token", token); try { account.remember(email, token); } catch (SQLException e) { // TODO Auto-generated catch block e.printStackTrace(); } if (remember) { response.addCookie(new Cookie("email", email)); response.addCookie(new Cookie("token", token)); } else { eraseCookie(request, response); } } else if (action.equals("register")) { User user = new User(); String email = request.getParameter("email"); String password = request.getParameter("password"); String password2 = request.getParameter("password2"); user.setEmail(email); user.setPassword(password); user.setPassword2(password2); if (!user.isValid(email, password, password2)) { request.setAttribute("message", user.getValidationMessage()); request.getRequestDispatcher("/register.jsp").forward(request, response); return; } try { if (account.isLoginExist(email)) { request.setAttribute("message", "The email is already in use. 
Change it."); request.getRequestDispatcher("/register.jsp").forward(request, response); return; } } catch (SQLException e) { out.println("Problem with database, cannot check if the email is already in use."); e.printStackTrace(); } try { account.register(email, password); request.getRequestDispatcher("succesregister.jsp").forward(request, response); return; } catch (SQLException e) { out.println("Cannot register."); // TODO Auto-generated catch block e.printStackTrace(); } } else if (action.equals("logout")) { String toRemove = (String) session.getAttribute("email"); if (toRemove != null) { session.removeAttribute("email"); session.removeAttribute("token"); try { account.removeToken(toRemove); } catch (SQLException e) { // TODO Auto-generated catch block e.printStackTrace(); } } eraseCookie(request, response); request.getRequestDispatcher("/index.jsp").forward(request, response); return; } request.getRequestDispatcher("/succes.jsp").forward(request, response); } /** * @author Gray * https://stackoverflow.com/questions/890935/how-do-you-remove-a-cookie-in-a-java-servlet * */ private void eraseCookie(HttpServletRequest req, HttpServletResponse resp) { Cookie[] cookies = req.getCookies(); if (cookies != null) for (Cookie cookie : cookies) { cookie.setValue(""); cookie.setPath("/"); cookie.setMaxAge(0); resp.addCookie(cookie); } } private String getCookie(HttpServletRequest request, String name) { Cookie[] cookies = request.getCookies(); if(cookies != null) { for(Cookie cookie : cookies) { if(cookie.getName().equals(name)) { return cookie.getValue(); } } } return null; } } . 
package db; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; public class Account { public Connection conn; public Account(Connection conn) { this.conn = conn; } public boolean login(String login, String password) throws SQLException { String passwordInDb = ""; String sql = "select password as password from users where email = ?"; PreparedStatement stmt = conn.prepareStatement(sql); stmt.setString(1, login); ResultSet rs = stmt.executeQuery(); if (rs.next()) { passwordInDb = rs.getString(1); } return passwordInDb.equals(password); } public void register(String login, String password) throws SQLException { if (isLoginExist(login)) { throw new RuntimeException("Login already in use!"); } String sql = "insert into users (email, password) values(?, ?)"; PreparedStatement stmt = conn.prepareStatement(sql); stmt.setString(1, login); stmt.setString(2, password); stmt.executeUpdate(); } public boolean isLoginExist(String login) throws SQLException { String sql = "select count(*) as no from users where email = ?"; PreparedStatement stmt = conn.prepareStatement(sql); stmt.setString(1, login); ResultSet rs = stmt.executeQuery(); int count = 0; if (rs.next()) { count = rs.getInt("no"); } rs.close(); return count > 0; } public void remember(String email, String token) throws SQLException { removeToken(email); String sql = "insert into active_users (email, token) values(?, ?)"; PreparedStatement stmt = conn.prepareStatement(sql); stmt.setString(1, email); stmt.setString(2, token); stmt.executeUpdate(); } public boolean isLoginNow(String email, String token) throws SQLException { String sql = "select token from active_users where email = ?"; PreparedStatement stmt = conn.prepareStatement(sql); stmt.setString(1, email); ResultSet rs = stmt.executeQuery(); String validToken = ""; if (rs.next()) { validToken = rs.getString("token"); } rs.close(); return validToken != null && validToken.length() > 0 && 
validToken.equals(token); } public void removeToken(String email) throws SQLException { String sql = "delete from active_users where email = ?"; PreparedStatement stmt = conn.prepareStatement(sql); stmt.setString(1, email); stmt.executeUpdate(); } } Answer: Object Oriented So, from a object oriented perspective, stuff is a bit chaotic. Servlet What bothers me the most is that there's "too much code" in the servlet. Just think about that: What if, let's say, you want to introduce a presentation framework (because servlet programming is a bit year ~2003)? Which parts would you have to move 'to somewhere else', or which parts aren't reusable? Account First, I thought it's a type which represents an actual account, but it's actually ... well a mixture of business-object/domain model and data access object. It does do some sort of logic, like validating an existing login, and it does execute queries on the backend. "Usually", or "a common used pattern is", you have a layer which is dedicated to read and write data. A dedicated layer which does the 'business logic', and the dedicated layer which displays the data and takes data from the user. That's called a 'three tier architecture', which helps you to 'separate the concerns'. Readability Servlet.doGet The servlet has a bit of a readability problem. It starts with the problem, that you have code in your 'doPost' and code in your 'doGet' and it's a bit hard to understand what happens where, or rather what should happen where. Let's look at that 'set-empty-string-to-that-attribute-in-request-method': request.setAttribute("email", ""); request.setAttribute("message", ""); What are you clearing the email and the message? Looks important. I think it's only in the doGet. Maybe wrap it in a separate method which describes what you are doing here. 
And then the 'cookie-to-session-stuff' if (session.getAttribute("email") == null) { String cookieEmail = getCookie(request, "email"); if(cookieEmail != null) { session.setAttribute("email", cookieEmail); } } So, you're checking if the email attribute is not in the session, then get the cookie, and then set it to the session. Maybe wrap it in an initializeEmail method or something? Same goes for the token. And I'm pretty sure you can write one method which works for the email and for the token. This if is also confusing if (page == null) { Why does the code have to be run, when no page is set? And this method if (account.isLoginNow(email, token)) { ... is badly named. I don't understand what it does, until I go and check the implementation of it. This is bad, because it's like having to open another tab, read a text, and go back again, it messes with the initial train of thought. Servlet.doPost This method is waaaaay too long. And you could easily take that apart. For every "if(action)" statement, you can move the code within into a separate method, like a "performAction" method. If you'd do that, a reader of the code sees in a few lines what the doPost is responsible for. And if you want to know what a certain part does, it's isolated in a separate method. The attribute remember: Remember what? This is a bit redundant: user.setEmail(email); user.setPassword(password); user.setPassword2(password2); if (!user.isValid(email, password, password2)) { First, you set the attribute to the user. And the isValid method needs those attributes? Account login: Well, this one is not well named either. It actually does not log in. It validates a user's password. In general, you use 'login' as your parameter name, but not always, sometimes email. Either use one or the other. isLoginExist: It's "loginExists". remember: Yeah that could use a proper name, too. rememberWhat()? isLoginNow: Yeaaaah, that one, too. I'd say 'verifyToken' or something would be more helpful. 
A rule is, that a method name must contain a verb. But a noun usually helps the readability, too. For instance, you have a method name removeToken, that's much more clear than remember(). Other stuff Use constants! The string action, login, etc. are all over the place request.getRequestDispatcher("/login.jsp").forward(request, response): This code is used a lot. Maybe dedicated methods 'forwardToLogin' would be helpful. You must not save passwords in clear text. Never ever do that. Empty catch blocks / e.printStackTrace: Never ever leave that. Not even if it's for a review. It's offensive! :P Your connections / statement handling will lead to out of memory exceptions. Always use a "try-with-resource" block. But working with java.sql api is a subject of its own. You have a JavaDoc for instance at the doPost method. That's just clutter. Hope this helps, slowy
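As a footnote to the "never store passwords in clear text" point above: the standard fix is a salted, deliberately slow hash. A minimal sketch of the idea, in Python for brevity since the servlet code is Java; the function names are mine, not from the post:

```python
# Salted password hashing with a slow KDF (PBKDF2) from the standard library.
# Store (salt, digest) instead of the password; compare in constant time.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                       # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))   # True
print(verify_password("wrong", salt, digest))    # False
```

In the Java code above the equivalent would be hashing in `Account.register` and verifying in `Account.login` instead of comparing `passwordInDb.equals(password)`.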
{ "domain": "codereview.stackexchange", "id": 29128, "tags": "java, authentication, controller, servlets" }
why does water do that?
Question: I recently found, through observation alone, that water fills a bottle quicker if I hold the tube away from the filling water than if I leave the tube inside the bottle. Why does that happen? Does potential energy have anything to do with this? Answer: I'm not totally clear what exactly you are asking, but I think the answer is backpressure. This "purifier" thing is apparently just a source emitting water out of the bottom end of a vertical tube. In your right drawing, the pressure at the bottom of the tube trying to push the water back up the tube is just the ambient air pressure. In the left drawing, the top of the liquid in the tank is at ambient air pressure, so the pressure at the end of the tube is that plus the water pressure at the depth of the end of the tube. This higher pressure pushing back up the tube slows the water flow.
{ "domain": "physics.stackexchange", "id": 9986, "tags": "water, potential-energy" }
Differentiating $\nu = \dfrac{c}{\lambda}$
Question: I am currently studying Laser Systems Engineering by Keith Kasunic. Chapter 1.2.1 Temporal Coherence says the following: The coherence time $\tau_c$ over which the emitted wavelengths are considered to be in phase – that is, are temporally coherent – thus depends inversely on the absolute value of the wavelength difference $\vert \Delta \lambda \vert$ or frequency difference $\Delta \nu = c \vert \Delta \lambda \vert / \lambda^2$ [from differentiating Eq. (1.1)]: $$\tau_c \approx \dfrac{1}{2 \vert \Delta \nu \vert} = \dfrac{1}{2c} \dfrac{\lambda^2_o}{\vert \Delta \lambda \vert} \ \ \ \ \ \text{[sec]} \tag{1.3}$$ clearly showing that a narrow-spectrum laser with small $\Delta \nu$ has a longer time over which different frequencies propagate before they are no longer considered to be in phase. Typical numbers for a HeNe laser as used for interferometric optical testing are a coherence time $\tau_c = 0.33$ nsec and a coherence length $d_c = c \tau_c = 100$ mm for a linewidth $\Delta \lambda = 2$ pm (see Table 1.3). Note that the results shown in Fig. 1.11(b) – where the different wavelengths are out of phase at a time $t \approx 10^{-8}$ sec – are not consistent with the estimates from Eq. (1.3), as the equation assumes that $\Delta \lambda$ is small in comparison with the center wavelength $\lambda_o$. Eq. (1.1) is given as follows: A laser is a source of both light and heat. Light is an electromagnetic wave with a wavelength $\lambda$ and frequency $\nu$ with energy propagating at the speed of light $c$ in a vacuum: $$\nu = \dfrac{c}{\lambda} \ \ \ \ \ \text{[Hz]} \tag{1.1}$$ How does differentiating $\nu = \dfrac{c}{\lambda}$ get us (1.3)? I don't even understand how this is a function in the first place, since $c$ is just the speed of light in vacuum and $\lambda$ is the wavelength.
Answer: I don't know much about the topic, but this can be done as follows. Differentiate, then replace the derivative by a finite difference for a small spread: $$\nu=\frac{c}{\lambda}$$ $$\frac{d\nu}{d\lambda}=-\frac{c}{\lambda^2}$$ $$\frac{\Delta \nu}{\Delta \lambda}\approx-\frac{c}{\lambda^2}$$ $$|\Delta \nu|=\frac{c}{\lambda^2}|\Delta\lambda|$$ That's exactly the relation that is used.
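As a numerical sanity check on this finite-difference relation, the HeNe numbers quoted in the question can be reproduced directly. This is only a sketch: the 632.8 nm center wavelength is the standard HeNe value, assumed here rather than taken from the book's table.

```python
# Coherence time and length from the linewidth, using
# |dnu| = c*|dlam|/lam^2 and tau_c ~ 1/(2*|dnu|).
c = 2.998e8        # speed of light, m/s
lam = 632.8e-9     # HeNe center wavelength, m (assumed standard value)
dlam = 2e-12       # linewidth, m (2 pm, from the question)

dnu = c * dlam / lam**2      # frequency spread, Hz
tau_c = 1.0 / (2.0 * dnu)    # coherence time, s
d_c = c * tau_c              # coherence length, m

print(tau_c, d_c)  # ~0.33e-9 s and ~0.1 m, matching the quoted values
```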
{ "domain": "physics.stackexchange", "id": 74256, "tags": "optics, differentiation, frequency, wavelength" }
Why are electrons defined to have negative charge?
Question: We normally think of the "default" or "root" state of things as being on the positive side of the spectrum. For example, we don't normally use a + symbol to indicate the sign of positive numbers, whereas negative numbers are invariably preceded with - to indicate their negative sign. Similarly, positive temperatures (in °C or in F) are those that we tend to encounter more typically. Why, then, do we define the basic unit of electricity as something with a negative charge? Answer: As Kyle Kanos stated, the convention is largely due to Ben Franklin's work on electricity, and his theory of a "single-fluid model". Particle physicists (such as they were in the 1700s) followed the "corpuscularian" atomic model that we call the "plum pudding" model today; the particles that formed matter were spaced relatively evenly apart and evenly distributed, forming a loose "fluid", and different types of matter had different types of particles and at different densities, which gave them their properties such as weight, phase, malleability, and yes, electrical conductivity. Most people who worked with electricity thought that different types of particles carried opposing charges, and electrical current thus involved a "two-fluid transfer"; positively-charged particles moved from surplus to deficit, and same with negatively-charged particles, creating an equilibrium. Ol' Ben thought a little differently; he saw a propagation of current from only one end of a connection between charges, through experiments conducted with a Leyden jar to store a static charge gathered either with friction or from lightning, and then discharging it through matter with different electrical resistance, including, as the stories go, his party guests. The people at the far end of a chain of people holding hands reacted last (and least) to the discharge from a Leyden jar, instead of those in the middle as would be expected from the prevalent two-fluid model. 
So, he proposed that while oppositely-charged potentials did seek to equalize, only one "charge carrier" was an actual moving "fluid" in this circumstance, and the other potential was simply caused by a deficit of this fluid charge carrier, creating a relative surplus of a "fixed" charge carrier distributed evenly through the material. It was a genius proposal at the time. There was only one problem; he could not devise an experiment or instrument that could detect which of the opposing charges, positive or negative, was being transported fluidly by the charge carrier. He had to guess, and as luck would have it he guessed wrong; he documented the fluid charge carrier as being "positive", describing the charge as flowing from positive to negative. Since lightning is often observed traveling from the clouds to the ground, the negative pole of a DC circuit came to be called the "ground side" for this reason. In the following years, Hans Christian Oersted discovered, purely by accident while demonstrating resistive heating of a metal coil, that the wire through which a current passed caused the needle of a nearby compass to deflect from true north, thereby demonstrating electromagnetism. His work was duplicated, leading to experiments and mathematical models predicting the force vectors of magnetic fields based on the direction and strength of current in the wire (Biot-Savart Law, Maxwell's Equations and the Lorentz Force Law). Around the same time, Sir William Crookes was experimenting with vacuum tubes, passing a strong electrical charge through them, causing the glass of the tube (used as the insulator and for visibility) to phosphoresce. It wasn't until 1897 that J.J. 
Thomson, while experimenting with these "cathode ray tubes", connected the dots; using a thin cross-shaped sheet of mica placed within the tube, he showed, based on the "shadow" the cross forms on the wall of the tube, that what is passing through the tube is some sort of particle, which is being reflected by the mica. He then showed that these particles had to be negatively charged, because they were reflected by the mica sheet on the side of the negative pole, and were affected by the magnetic field of a permanent magnet as a negatively-charged particle would be, in accordance with the Biot-Savart Law and Maxwell's Equations. He reasoned that this negative charge carrier must be of lower mass than any other particle that makes up matter, otherwise some other particle would be moving to carry the charge (creating a more detectable change in mass; in fact this difference is detectable, but the ratio between charge carrier masses is over 1800:1). He named this particle the "electron" and asserted that it, and not any positive charge carrier, was most directly responsible for electromagnetism. However, it was far too late. The convention that current flows from the positive to the negative of charged dipoles had been in common use for almost 150 years, and a lot of the work that ended up disproving it was, ironically, documented using it. Nowadays, we recognize that the movement of electrons is from the negative charge to the positive, but we diagram the movement of current in the opposite direction, as the propagation of a "positive charge", even though we now know better. That's why the positive lead or terminal is the red one, even though the source of the electrical charge is actually the negative "ground", while the actual ground in a lightning strike has a relative positive potential caused by the movement of air over it. 
It's only with alternating current that we regain some sanity, because in practical terms no one wire is "positive" or "negative"; the black or red wire (in U.S. home wiring codes) is the "hot" side, on which the potential change is being actively driven by the generator; the white wire is the "neutral" or undriven side; and bare (or green) is a safety "ground". U.S. codes typically state that the ground wire goes to the same terminal bar on the service panel as the neutral, instead of directly to the actual earth, but either way it provides an easier path for a short circuit than through a person.
{ "domain": "physics.stackexchange", "id": 9228, "tags": "electricity" }
What determines the factors of the multipole expansion?
Question: The multipole expansion of a potential $V$ has contributing terms proportional to $\frac{1}{r^{n+1}}$ where $n=0,1,2,\ldots$. First, why are we interested only in integer powers of $r$? Second, why are we interested in symmetric pole arrangements? In other words, the $1/r$ term is a monopole, the $1/r^2$ term is a dipole, the $1/r^3$ term a quadrupole -- all of these are symmetric configurations. Why do we not consider an asymmetric 'tripole' formed in an isosceles triangle? The reason I pose these questions is that my understanding of this formalism is that it represents the potential far away from an arbitrarily complicated charge configuration. But if the charge configuration is arbitrarily complicated, then why do we assume such perfectly symmetric poles? Answer: First, why are we interested only in integer powers of r? Because when you expand $$ \frac{1}{|\vec r - \vec r'|} $$ about $r'=0$ you get only integer powers in the series expansion. So, why does this matter? It's because the potential at $\vec r$ due to a localized charge distribution $\rho$ is (in Gaussian units) $$ V(\vec r)=\int \frac{\rho(\vec r')}{|\vec r - \vec r'|}d^3r' =\frac{1}{r}\int \rho (\vec r')\,d^3r' + \frac{\hat r}{r^2}\cdot \int \rho(\vec r')\,\vec r'\,d^3r'+\mathcal{O}\left(\frac{1}{r^3}\right) $$
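To see this expansion at work numerically, here is a small sketch (hypothetical point charges, Gaussian-style units so that $V=\sum_i q_i/|\vec r - \vec r_i|$): with the monopole term vanishing, the $1/r^2$ dipole term alone already reproduces the far-field potential.

```python
import math

# Two opposite charges straddling the origin: total charge is zero,
# so the monopole term vanishes and the dipole term dominates far away.
charges = [(+1.0, (0.0, 0.0, +0.1)),
           (-1.0, (0.0, 0.0, -0.1))]

def exact_V(r):
    return sum(q / math.dist(r, pos) for q, pos in charges)

# Dipole moment p = sum_i q_i * r_i (only the z-component is nonzero here)
pz = sum(q * pos[2] for q, pos in charges)

r = (0.0, 0.0, 10.0)        # far-away observation point on the z-axis
dipole_V = pz / r[2]**2     # (p . r_hat) / r^2 with r_hat = z_hat

print(exact_V(r), dipole_V)
```

At $r=10$ the two numbers agree to about one part in $10^4$; the residual is the $1/r^3$ quadrupole-and-higher tail.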
{ "domain": "physics.stackexchange", "id": 21134, "tags": "multipole-expansion" }
Wormhole question
Question: First time posting, with very little knowledge of physics, but I am also very curious and eager to understand. So I was watching Interstellar (the Christopher Nolan film) the other day (for the 100th time) and I got to the part of the film where one character explains how the bending of space and time from one point to another would create this hypothetical bridge between two points. He also explains the reason why a wormhole presents as a sphere, or more aptly a spherical hole. In the film, as soon as they enter the wormhole or the 'higher dimension', they travel along this hypothetical corridor and emerge in another star system. After watching the film, I had a bizarre thought and googled it but found nothing on it, so I thought I'd ask here. Like a hypothetical wormhole, the Earth and the other planets are also spheres. Whilst the physics of a wormhole don't exist on Earth, could it be possible that our planet could be a wormhole but existing with a totally different set of rules? Obviously we can enter the planet's atmosphere and leave with relative ease, but upon entering the Earth's atmosphere you still have to travel for a period of time before reaching the ground. My point is that you still have to enter a wormhole and its atmosphere to get where you are going, and if you are returning to Earth, you have to experience similar things. Maybe we are entering a wormhole with a different set of laws which enables us to see our planet but, upon entering it, we can't perceive any speeding up of time. It also allows leaving much more easily. Maybe it's a granddad wormhole which has calmed with time. Or maybe our planet / land mass exists within a wormhole. Anyway, I know this is full of incorrect thinking, but I am having fun with it and just want to see where it goes. Thanks. Answer: As it happens, I discussed this in an answer on the Science Fiction Stack Exchange, which, as far as we know, is where wormholes belong (or at least macroscopic wormholes).
Although the wormhole looks like a sphere, the inside of the wormhole simply doesn't exist. It is as though the surface of the sphere is a mirror, but it reflects you into the other universe. Given this, the surface of the Earth or any other astronomical body cannot be analogous to a wormhole since the inside of the Earth certainly does exist. If the surface of the Earth were like a wormhole then if you tried to go down past the surface you would find you got turned around and were moving up away from the ground again.
{ "domain": "physics.stackexchange", "id": 75343, "tags": "wormholes" }
Better code indentation
Question: I'm sure code indentation is a basic requirement for correct development. I have checked out many pages to learn more, and the results of my coding are below. I'd like to share that here to receive opinions and suggestions.

<?php
include_once "../../config.php";

// Start
if (!isset($_SESSION)) session_start();

$level = 2;
if (!isset($_SESSION['user_id']) OR ($_SESSION['user_levell'] > $req_level)) {
    header("Location: $url/dashboard.php?error=1");
    exit;
}

// Load permissions
$result = mysql_query("
    SELECT conf
    FROM adm_users
    WHERE user_id=".$_SESSION['user_id']
);
$row = mysql_fetch_array($result, MYSQL_BOTH);
$access = $row['acess'];

if (($_SESSION['user_level'] != "1") && ($access_config != "1")) {
    header("Location: index.php?error=1");
    exit;
}

// End
include_once "$url/includes/head.php";
include_once "$url/includes/menu.php";

// Load ser
$result = mysql_query("
    SELECT *
    FROM adm_ser
    WHERE ser_id='$ser_id'
");
$row = mysql_fetch_array($result, MYSQL_BOTH);
$ser_id        = stripslashes($row['ser_id']);
$ser_cod       = stripslashes($row['ser_cod']);
$ser_cd        = stripslashes($row['ser_cnpj']);
$ser_name      = stripslashes($row['ser_name']);
$ser_status_id = stripslashes($row['ser_status_id']);

// Dropdown Ser Status
$sql2 = mysql_query("
    SELECT *
    FROM adm_ser_status
    WHERE status_id=".$ser_status_id
);
while ($row = mysql_fetch_array($sql2)) {
    $status_id_dropdown = $row['status_id'];
    $status_de_dropdown = $row['status_desc'];
}
?>

Answer: You don't need to indent your $ser_ value assignments. Indentation should be reserved for within a class, method, loop, etc., as you do for most of your other code. I notice that you split your MySQL statements over several lines, which can be done, but on such a simple query it really isn't needed, as the only purpose of doing that is to help readability on a complicated query. On a side note, the mysql_ functions are deprecated now, and you should switch to mysqli_ (or the MySQLi class), or (preferably) PDO.
{ "domain": "codereview.stackexchange", "id": 3495, "tags": "php, optimization" }
Tensor Product of two doublets
Question: What will be the tensor product of two doublets $$ (x_1,x_2) ~\text{and}~ (y_1,y_2)? $$ I am very much confused about determining this. Answer: Let me give a simpler (and, surely, more naive) answer. Given two n-tuples $x_i$ and $y_j$, their tensor product is a matrix: $$a_{ij} = x_iy_j$$ So, in your case: $$a_{ij} = \left(\begin{array}{cc}x_1y_1&x_1y_2\\x_2y_1&x_2y_2\end{array}\right)$$
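Since the answer's $a_{ij}=x_iy_j$ is just an outer product, a minimal sketch with made-up component values:

```python
x = [1.0, 2.0]   # doublet (x1, x2) -- sample values
y = [3.0, 5.0]   # doublet (y1, y2) -- sample values

# Tensor (outer) product: a[i][j] = x[i] * y[j]
a = [[xi * yj for yj in y] for xi in x]
print(a)  # [[3.0, 5.0], [6.0, 10.0]]
```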
{ "domain": "physics.stackexchange", "id": 4775, "tags": "tensor-calculus" }
finding names of un-named list elements
Question: I have a naming problem: lists with un-named elements. I wish to get the names of the elements in the list, without going back upstream to where the list was created. An example is modelList below:

## naming problem
lmNms <- c("mod1", "mod2", "mod3", "mod4", "mod5", "mod6")
lapply(lmNms, function(N) assign(N, lm(runif(10) ~ rnorm(10)), env = .GlobalEnv))
modelList <- list(mod1, mod2, mod3, mod4, mod5, mod6)

I have written the function below, which takes the list and the environment as arguments (as well as an argument to return names only), looks for a matching object in the target environment, and then attaches that name to the element and returns the list.

nameListObjects <- function(LIST, ENV = NULL, NAMES.ONLY = FALSE) {
    if(is.null(ENV)) ENV <- .GlobalEnv
    for(i in seq_along(LIST)){
        # check the class of all objects in the target environment
        classMatches <- sapply(ls(ENV), function(N) class(get(N)) == class(LIST[[i]]))
        # see which objects of matching class are all.equal to the subject
        TF <- sapply(names(classMatches[classMatches]),
                     function(N) is.logical(all.equal(LIST[[i]], get(N))))
        names(LIST)[i] <- names(classMatches[classMatches])[TF]
    }
    if(NAMES.ONLY) names(LIST) else LIST
}

R> nameListObjects(modelList, NAMES.ONLY=TRUE)
[1] "mod1" "mod2" "mod3" "mod4" "mod5" "mod6"

This took me to the edge of my R ability. I'm interested in better ways to do this (I presume there are multiple better ways), problems with this approach, and any other comments. Answer: You could have used identical to compare objects.
However, I would recommend representing all objects by their MD5 digests, so you can then use match to find the matches:

nameListObjects <- function(LIST, ENV = NULL, NAMES.ONLY = FALSE) {
    if(is.null(ENV)) ENV <- .GlobalEnv
    require(digest)
    list.md5 <- sapply(LIST, digest)
    env.names <- ls(envir = ENV)
    env.md5 <- sapply(env.names, function(x) digest(get(x, envir = ENV)))
    list.names <- env.names[match(list.md5, env.md5)]
    if(NAMES.ONLY) list.names else setNames(LIST, list.names)
}

nameListObjects(modelList, NAMES.ONLY=TRUE)
[1] "mod1" "mod2" "mod3" "mod4" "mod5" "mod6"
{ "domain": "codereview.stackexchange", "id": 5638, "tags": "r" }
Hermitian operators in the expansion of symmetry operators in Weinberg's QFT
Question: This is related to Taylor series for unitary operator in Weinberg and Weinberg derivation of Lie Algebra. $\textbf{The first question}$ On page 54 of Weinberg's QFT I, he says that an element $T(\theta)$ of a connected Lie group can be represented by a unitary operator $U(T(\theta))$ acting on the physical Hilbert space. Near the identity, he says that $$U(T(\theta)) = 1 + i\theta^a t_a + \frac{1}{2}\theta^a\theta^bt_{ab} + \ldots. \tag{2.2.17}$$ Weinberg then states that $t_a$, $t_{ab}$, ... are Hermitian. I can see why $t_a$ must be by expanding to order $\mathcal{O}(\theta)$ and invoking unitarity. However, expanding to $\mathcal{O}(\theta^2)$ gives $$t_at_b = -\frac{1}{2}(t_{ab} + t^\dagger_{ab})\tag{2},$$ so it seems the same reasoning cannot be used to show that $t_{ab}$ is Hermitian. Why, then, is it? $\textbf{The second question}$ In the derivation of the Lie algebra in the first volume of Quantum Theory of Fields by Weinberg, it is assumed that the operator $U(T(\theta))$ in equation (2.2.17) is unitary, and the rhs of the expansion \begin{equation} U(T(\theta))=1+i\theta^a t_a +\frac{1}{2} \theta_b\theta_c t_{bc} + \dots \end{equation} requires $$t_{bc}=-\frac{1}{2}[t_b,t_c]_+.$$ If this is the case there is a redundancy somewhere. In fact, by symmetry $$ U(T(\theta))=1+i\theta_at_a+\frac{1}{2}\theta_a\theta_bt_{ab}+\dots\equiv 1+i\theta_at_a-\frac{1}{2}\theta_a\theta_bt_at_b+\dots $$ and it coincides with the second order expansion of $\exp\left(i\theta_at_a\right)$; the same argument would then hold at any order, obtaining $$U(T(\theta))=\exp\left(i\theta_at_a\right)$$ automatically. However, according to eq. (2.2.26) of Weinberg's book, the expansion $$U(T(\theta))=\exp\left(i\theta_at_a\right)$$ holds (if the group is simply connected) only for abelian groups. This seems very sloppy, and I think that the Lie algebra relations could be obtained in a rigorous, self-consistent way only by resorting to differential geometry methods.
There have been some answers or speculations for these two questions, but I do not think they are solved. I think the crucial point for both questions is that the $t_{ab}$ operator is $\textbf{not}$ Hermitian unless the $\{t_a\}$ operators commute with each other. Here is why: from the unitarity of $U(T(\theta))$ we have $$t_at_b = -\frac{1}{2}(t_{ab} + t^\dagger_{ab})\tag{2},$$ and from the expansion of $f(\theta_a,\theta_b)$ we have $$t_{ab} = t_a t_b - if^c_{ab} t_c.$$ So $t_{ab}$ is Hermitian iff $f^c_{ab}$ is zero, which means that the algebra of the $\{t_a\}$ is abelian. I think that solves the problem. Any other opinions? Answer: I looked at page 54 and Weinberg does not say that the $t_{ab}$ are Hermitian, only that the $t_a$ are Hermitian. I have the 7th reprinting of the paperback edition. Maybe it was wrong in earlier editions?
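For reference, the $\mathcal O(\theta^2)$ unitarity bookkeeping behind eq. (2) can be written out explicitly (a sketch; repeated dummy indices are summed, and since $\theta^a\theta^b$ is symmetric, only the part of the bracket symmetric under $a\leftrightarrow b$ is constrained):

```latex
U^\dagger U
  = \Bigl(1 - i\theta^a t_a^\dagger + \tfrac{1}{2}\theta^a\theta^b t_{ab}^\dagger\Bigr)
    \Bigl(1 + i\theta^c t_c + \tfrac{1}{2}\theta^c\theta^d t_{cd}\Bigr)
  = 1 + i\theta^a\bigl(t_a - t_a^\dagger\bigr)
      + \theta^a\theta^b\Bigl[t_a^\dagger t_b
      + \tfrac{1}{2}\bigl(t_{ab} + t_{ab}^\dagger\bigr)\Bigr]
      + \mathcal O(\theta^3) \;=\; 1.
```

The $\mathcal O(\theta)$ term forces $t_a=t_a^\dagger$; inserting that into the $\mathcal O(\theta^2)$ term reproduces eq. (2), but only for the symmetrized product $\tfrac12\{t_a,t_b\}$, which is precisely why the antisymmetric part (the commutator, and hence the structure constants) is left unconstrained by unitarity alone.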
{ "domain": "physics.stackexchange", "id": 71157, "tags": "symmetry, group-theory, lie-algebra, textbook-erratum" }
What is $I$ in the noise described in the paper "Parameter Space Noise for Exploration"?
Question: In the paper Parameter Space Noise for Exploration, the authors describe the noise that they add to the parameter vector as: $$ \tilde{\theta} = \theta + \mathcal{N}(0, \sigma^2I) $$ Is $I$ simply the identity matrix, or am I missing something? Answer: Yes, since $\tilde{\theta}$ is a vector, one needs a covariance matrix to define its distribution. Here $I$ is the identity matrix, which means that the noise has a zero-mean normal distribution with standard deviation $\sigma$ in each component, and different components of this noise are uncorrelated.
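Since the covariance is $\sigma^2 I$, the components of the noise are independent $\mathcal N(0,\sigma^2)$ draws, so the perturbation can be sampled component-wise. A hedged sketch (the parameter vector and $\sigma$ are made-up values, not from the paper):

```python
import random

sigma = 0.1
theta = [0.5, -1.2, 3.0]   # hypothetical parameter vector

# Covariance sigma^2 * I  =>  each component gets an independent N(0, sigma^2)
theta_tilde = [t + random.gauss(0.0, sigma) for t in theta]
print(theta_tilde)
```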
{ "domain": "ai.stackexchange", "id": 343, "tags": "reinforcement-learning, notation" }
Convert Json response to object array
Question: I'm looking for the best solution. Here is a response from server and i need to get Organizations list: Content-Type:application/json;charset=UTF-8 { "code": 0, "message": "success", "organizations": [ { "organization_id": "10234695", "name": "Zillum", "contact_name": "John Smith", "email": "johnsmith@zillum.com", "is_default_org": false, "language_code": "en", "fiscal_year_start_month": 0, "account_created_date": "2016-02-18", "time_zone": "PST", "is_org_active": true, "currency_id": "460000000000097", "currency_code": "USD", "currency_symbol": "$", "currency_format": "###,##0.00", "price_precision": 2 }, {...}, {...} ] Here is my convert method: var contentJson = await SendRequest(request); var contentJo = (JObject)JsonConvert.DeserializeObject(contentJson); var organizationsJArray = contentJo["organizations"] .Value<JArray>(); var organizations = organizationsJArray.ToObject<List<Organization>>(); Code works, but I'm looking for a better Convert Method. Can I do without converting to JArray? Answer: Given that you are already using the ToObject, consider simplifying the code for readability and the advantage of not having to convert anything. var contentJson = await SendRequest(request); dynamic response = JsonConvert.DeserializeObject(contentJson); List<Organization> organizations = response.organizations.ToObject<List<Organization>>(); The actual response appears to be of no concern so using a dynamic simplifies things. Converting back to strongly typed objects by calling ToObject<T> was a good choice and should work out fine.
{ "domain": "codereview.stackexchange", "id": 30873, "tags": "c#, json" }
Neutron in a magnetic field (Schrödinger Equation, Eigenstates, Eigenvalues).
Question: Consider the spin of a neutron in a magnetic field $\vec{B}$. A neutron is a neutral particle with the mass of a proton and spin $\frac{1}{2}$. The Hamiltonian is $H=\mu_n\vec{S}\cdot\vec{B}$, where $\mu_n$ is the magnetic moment of the neutron. Consider a constant magnetic field along the z-axis. Thus the Hamiltonian is $H=\omega S_z$ with $\omega=\mu_n B$. a) What are the eigenvalues and eigenstates of the system? b) At t=0 the system is in the state $\left|\alpha(t=0)\right>=\frac{1}{\sqrt{2}}\left|+\right>+\frac{1}{\sqrt{2}}\left|-\right>$ with $\left|+\right> =\left|j=\frac{1}{2},m=+\frac{1}{2}\right>$ and $\left|-\right>=\left|j=\frac{1}{2},m=-\frac{1}{2}\right>$. Use the time-dependent Schrödinger equation to get $\left|\alpha(t)\right>$. I'm studying for an upcoming exam on QM and this is one of the exercises I'm working through right now. a): I don't know how to start this. My general approach when I get the Hamiltonian in a problem is to write down the Schrödinger equation, $H\psi=E\psi$. In this case $\omega S_z\psi=E\psi$. Since $\omega S_z$ is not really an operator, does it mean that $\omega S_z$ is already the eigenvalue? I'm not really sure about the eigenstates either. b) I don't know how one would get $\left|\alpha(t)\right>$ from using the Schrödinger equation. Usually my approach is to find a unitary operator and get it from $U\left|\alpha(t=0)\right>=\left|\alpha(t)\right>$, since $\left|\alpha(t=0)\right>$ is already given. Do I get the unitary operator by solving the corresponding Schrödinger equation? Sorry for my dumb ramblings; I'm not comfortable using the Dirac notation with the Schrödinger equation. I don't know what the problem is. I can solve the Schrödinger equation when there's a typical problem involved and the wave function is a function and not something like $\left|\alpha(t=0)\right>=\frac{1}{\sqrt{2}}\left|+\right>+\frac{1}{\sqrt{2}}\left|-\right>$.
When I first solved the Schrödinger equation for the potential well problem I thought I had the hang of it, but when I see this type of problem I just can't come up with any approach. Answer: a) $S_z$ in this case is the spin operator, which is equal to $S_z=\frac{\hbar}{2}\sigma_{z}$, where $\sigma_{z}$ is the Pauli z-matrix. The procedure you suggested should work now. b) Once you have the eigenstates and eigenvalues of the Hamiltonian, you can write down an equation of motion for the eigenstates by solving the time-dependent Schrödinger equation for them. Then you can express the initial state given in the question in terms of these eigenstates and simply write down its time evolution by substituting in your equations of motion for the eigenstates.
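The algebra for part (b) can also be checked numerically. With $\hbar=1$ the Hamiltonian $H=\omega S_z$ is diagonal in the $|\pm\rangle$ basis with eigenvalues $\pm\omega/2$, so each component of $|\alpha(0)\rangle$ just picks up a phase $e^{-iEt}$. A sketch with arbitrarily chosen values of $\omega$ and $t$:

```python
import cmath, math

omega = 2.0   # hypothetical value of mu_n * B
t = 0.7       # arbitrary time

# Basis ordering (|+>, |->); H = omega * S_z has eigenvalues +omega/2, -omega/2
alpha0 = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Time evolution: multiply each eigencomponent by exp(-i * E * t)
alpha_t = [cmath.exp(-1j * (+omega / 2) * t) * alpha0[0],
           cmath.exp(-1j * (-omega / 2) * t) * alpha0[1]]

norm = sum(abs(c) ** 2 for c in alpha_t)
print(alpha_t, norm)   # the norm stays 1 under unitary evolution
```

The relative phase between the two components is $e^{-i\omega t}$, which is all the physics here: the spin precesses about the z-axis at angular frequency $\omega$.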
{ "domain": "physics.stackexchange", "id": 32007, "tags": "homework-and-exercises, magnetic-fields, schroedinger-equation, eigenvalue" }
How to determine the particles size measurement error in SEM images
Question: I have a question that has bothered me lately. I did some scanning electron microscopy (SEM) on my sample, which is basically an array of nanopillars. My goal is to determine the pillars' diameter distribution. I use ImageJ for the analysis and implement two methods: 1) I apply a threshold, make a binary image and use the Analyze Particles tool to calculate the area of every particle. Then I extract diameters from the formula $D=\sqrt{4\cdot Area/\pi}$. 2) I use the find-edges tool, make a binary image and apply a Hough transform to find all the diameters. The two methods give pretty similar results (see the attached histogram comparison). However, I would like to understand what the measurement error of both methods is. The naive answer is the pixel size (which is $2 \,nm$). But I am sure there is some error coming from the threshold determination and from the methods themselves. Do you have any idea how to estimate these errors, since I suspect they could be larger than the pixel size? Answer: A proper way to evaluate the different contributions to the overall error is to perform experimental runs in the spirit of a gauge R&R test. Although you are probably not interested in the contribution of the "operator" (how does the person who measures and evaluates the data influence the error, also called reproducibility), you should evaluate the repeatability. Hence, you should measure the same sample multiple times and evaluate how precisely you are able to repeat the measurement of an identical pillar. Here are some possible questions: Does the set-up of the sample (e.g. tilt) influence the measurement? Does the order in which you measure the pillars have an effect? Do the settings of the SEM influence it? Does the error depend on the diameter of the pillar? Does the ellipticity of the pillar matter? It is usually a good idea to write all possible input parameters down, and to prioritise them in accordance with your current beliefs.
Try to figure out ways you could evaluate them, and try to put a "cost tag" (time and effort) on each of them. It is usually not the best idea to evaluate the "expensive" questions unless they are very promising. Note that the key point in this gauge R&R approach is that as many parameters as possible are randomised, including the order of measurement. This minimises the probability that unknown input parameters influence your result significantly. To evaluate the data I usually plot the different effects of interest and check for correlations and for outliers. Of course, there are also statistical hypothesis tests, e.g. ANOVA and diagnostic checks, which one could apply. If you know how to apply these hypothesis tests efficiently, apply them. However, in my experience the plots are far more important for building intuition. After all, I am often interested in significant effects, which are "rather obvious" once they are plotted. Thus, I usually run hypothesis tests only if I want to falsify the claim that an effect is present -- people tend to see structures, and to overemphasise an effect, if they are convinced that such an effect is present.
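One contribution can even be estimated before any repeat measurements: propagating an area uncertainty through $D=\sqrt{4A/\pi}$ gives $\sigma_D = \frac{2}{\pi D}\,\sigma_A$, and if the threshold is trusted only to about one pixel along the boundary, then $\sigma_A \approx \pi D\cdot\mathrm{px}$ (a one-pixel ring around the perimeter), so $\sigma_D \approx 2\,\mathrm{px}$. A sketch; the pillar area is a made-up number, and the one-pixel-ring model of the threshold error is an assumption:

```python
import math

px = 2.0      # pixel size, nm (from the question)
A = 7000.0    # hypothetical measured pillar area, nm^2

D = math.sqrt(4.0 * A / math.pi)          # diameter from area
sigma_A = math.pi * D * px                # assumed ~one-pixel ring around the perimeter
sigma_D = 2.0 * sigma_A / (math.pi * D)   # error propagation, dD/dA = 2/(pi*D)

print(D, sigma_D)   # a one-pixel boundary uncertainty maps to sigma_D = 2 * px
```

Under this model the threshold error is twice the pixel size, independent of the pillar diameter, which already supports the suspicion that the error is larger than one pixel.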
{ "domain": "physics.stackexchange", "id": 66581, "tags": "error-analysis, microscopy" }
Is the transition electric quadrupole or magnetic dipole?
Question: If a nucleus makes a transition from a 0$^+$ ground state to a 2$^+$ excited state, will the transition have E2 character, or M1? Or partly both? Should the matrix elements of both E2 and M1 be determined for such transitions? Answer: It can only be E2. Your initial and final angular momenta are 0 and 2, respectively. You cannot couple 0 and 1 to get 2 --- the Clebsch-Gordan coefficient is 0. But there are other transitions (like $^{135}\mathrm{Cs}(5/2^+) \to {}^{135}\mathrm{Cs}(7/2^+)$) where both are possible, because 5/2 can be coupled with 1 to give 3/2, 5/2 or 7/2, and with 2 to give 1/2, 3/2, 5/2, 7/2 or 9/2. The matrix elements of both E2 and M1 (and all others that contribute) are determined for such transitions.
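The selection rule in this answer is just angular-momentum coupling: a multipole of order $L$ connects $J_i$ to $J_f$ only if $J_f$ lies in $\{|J_i-L|,\dots,J_i+L\}$. A small sketch of that triangle rule (doubled integers are used so half-integer spins stay exact):

```python
def coupled(two_j1, two_j2):
    """Allowed values of 2J when coupling j1 and j2 (arguments are 2*j)."""
    return set(range(abs(two_j1 - two_j2), two_j1 + two_j2 + 1, 2))

# 0+ -> 2+ : a dipole (L=1) cannot reach J=2, a quadrupole (L=2) can
print(4 in coupled(0, 2), 4 in coupled(0, 4))   # False True

# 5/2 -> 7/2 : both M1 (L=1) and E2 (L=2) satisfy the triangle rule
print(7 in coupled(5, 2), 7 in coupled(5, 4))   # True True
```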
{ "domain": "physics.stackexchange", "id": 27968, "tags": "quantum-mechanics, nuclear-physics, multipole-expansion" }
Does this Qualify as Sub-Exponential?
Question: I don't have a strong CS background, so apologies if the question is trivially simple. I am working on an algorithm, say $A(n)$, which runs over all integer partitions of $n$. The algorithm calls a sub-routine $S(\pi)$ for each partition $\pi$ (and does nothing else). The time complexity of running the subroutine is $O(n^c)$ for some constant $c$. Now, the number of integer partitions is asymptotically $\frac{1}{4n\sqrt 3}\exp\Big(\pi \sqrt \frac{2n}{3}\Big)$. Hence, should the time complexity of my algorithm be $O(e^{\pi\sqrt \frac{2n}{3}}n^{c'})$, or maybe just $O(e^{\pi\sqrt \frac{2n}{3}})$, or just $O(e^{\sqrt n})$? Also, in my understanding I can call this SubEXP in the sense of the $\textbf{III}$ category presented here. I would basically like to understand how best to describe this complexity in an academically accurate manner. Thanks a lot! Answer: Let's say there are $N(n)$ partitions. Then, for each partition you do $O(n^c)$ work (important note: make sure that for all partitions this is the same $O(n^c)$; that is, for this particular $c$, every partition takes $O(n^c)$!). This means that the total work will be $O(n^c)$ times $O(N(n))$ (because you do $O(n^c)$ work for $N(n)$ iterations), hence a total of $O(N(n)\cdot n^c)$. Substitute in the number of partitions $N(n)$ and this will be the running time. This running time is indeed considered sub-exponential in terms of category $\textbf{III}$ from the link you provided (and hence it is also $\textbf{IV},\textbf{V},\textbf{VI}$ in those terms), since the algorithm takes $$O\left(e^{c\cdot n^{\frac{1}{2}}}\right)=O\left(2^{\frac{c}{\ln 2}\cdot n^{\frac{1}{2}}}\right)=O\left(2^{n^\epsilon}\right)$$ for any $\epsilon>\frac{1}{2}$ of your choice. Note that this isn't category $\textbf{II}$, as this is true only for $\epsilon>\frac{1}{2}$, rather than for all $0<\epsilon$.
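The asymptotic count used above can be checked against an exact computation of the partition number $p(n)$; the Hardy-Ramanujan formula is only an asymptotic and overshoots by a few percent at moderate $n$:

```python
import math

def partitions(n):
    """Exact number of integer partitions of n, via the standard DP."""
    p = [1] + [0] * n
    for k in range(1, n + 1):        # allow parts of size k
        for i in range(k, n + 1):
            p[i] += p[i - k]
    return p[n]

def hardy_ramanujan(n):
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

n = 100
print(partitions(n), hardy_ramanujan(n))   # 190569292 vs ~1.99e8
```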
{ "domain": "cs.stackexchange", "id": 18674, "tags": "complexity-theory, integer-partitions" }
If the Halting Problem was solvable, and we solved it, what would be its implications?
Question: Perhaps a way to better understand the Halting Problem's importance is to know what would happen or what could be possible if it were solved. What would be the Halting Problem's implications for today's technology, mathematics, and its practical applications, if it were somehow solved? Answer: Maybe I should make a more serious answer. First off: the unsolvability of the halting problem by "conventional" computing methods is a logical theorem. It is very simple to prove, and can be proven in almost any reasonable logical framework (that can express the problem). There are however two questions we can reasonably ask: What are the mathematical consequences of having a halting oracle? A halting oracle can be imagined as a genie (instantly) giving us the answer to any halting problem we ask of it. This is a rather well studied subject in computability theory, and the "world" in which you have access to such a genie is called the first Turing degree. An interesting observation is that even given such an oracle, we can ask computational problems for which we cannot compute the answer, such as "does the Turing machine $M$ with access to the halting oracle halt on all inputs?" The proof is almost identical to the original proof of undecidability of the halting problem! What would happen if we could solve all halting problems that come up "in practice"? This isn't really a mathematical question, since you would need to define "in practice", but it's a really interesting scientific question. The whole field of termination analysis tries to find such algorithms. It works surprisingly well: if you write a reasonable program to compute some quantity, odds are that there is a tool in this list that can handle it. It's not too hard to find examples that break these tools though: even a program as simple as the $3n+1$ algorithm would choke them, since it is not known whether it halts on all inputs.
A lot of similar open problems in number theory (the twin primes conjecture, the existence of odd perfect numbers) can be expressed as a halting problem, which suggests that there are many such problems that can't be handled by any "reasonable" automated termination tool.
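For concreteness, the $3n+1$ program mentioned above takes only a few lines, yet whether the loop below terminates for every positive integer is exactly the open Collatz conjecture (a stdlib-only sketch):

```python
def collatz_steps(n):
    """Iterate the 3n+1 map until reaching 1 and count the steps.
    Whether this loop halts for every n >= 1 is an open problem."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps before reaching 1
```

Termination tools fail here not because the program is complicated, but because no proof of termination is known.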
{ "domain": "cs.stackexchange", "id": 3738, "tags": "computability, halting-problem, philosophy" }
Does a DC supplied superconductive coil give off radiation?
Question: A DC supplied superconductive electric coil must emit EM radiation according to Maxwell's law, because rotation is acceleration. Answer: Energy can be stored via an electric current sent through a coil of superconducting wire. Once the coil is short-circuited (closed on itself), the current flows almost indefinitely without losses (one such experiment with a ring has already run for years) and produces an "eternal" magnetic field. The energy is therefore stored in the coil in magnetic and electrical form and can then be retrieved in a very short time. Just like in an atom, there is also a rotational movement of the electrons, but there is no electromagnetic radiation because of the quantization of the energy (we can consider a coil ring as a linearized atom...). So if the system (coil) is used to store energy, there is no loss of energy in any form. https://en.wikipedia.org/wiki/Superconducting_magnetic_energy_storage
{ "domain": "physics.stackexchange", "id": 96100, "tags": "electromagnetic-radiation, maxwell-equations" }
Why does a prism refract light into a rainbow?
Question: Why does a prism refract light such that the different frequencies of light "spread out"? The same goes for rainbows, why do the raindrops "spread out" the different frequencies of light? Answer: As the other answer gives you the formula and the reasoning, the real reason is that different wavelength photons refract at different angles at the boundary between two different media. Now the question remains, why do different wavelength photons change angle differently at the edge of a new medium? I am going to use lattice structure here, but for air and water it is hard to talk about a lattice structure, so in that case what I mean is the molecular structure. First of all, let's clarify that the change in angle not only happens at the edge of a medium: the angle of the photons will change even in the same medium, at the edge of different lattice structures or different densities; even if you cut that triangular glass prism into two pieces and put them exactly back together as they were before, there will be refraction, as the lattice structure is irreversibly broken at the edge of the two pieces. So, shooting back the question as a boomerang, why do photons travel parallel in white light (inside a single medium, with no density and lattice structure changes)? Because the medium has a continuous, unbroken lattice structure with no density changes, and this lattice structure is uniform (same structure, same atoms, same molecules, same covalent bonds) throughout the whole path of the photons. So, every time the lattice structure changes in any way (density, breakage, the structure itself, or different covalent bonds), the photons will refract (change angle), and the different wavelength photons will change angle by different amounts. Now you see that the answer to your question, why the photons change angle by different amounts, still remains, and the answer is the lattice structure.
To be more precise, in air or water there is no lattice (as in a solid), but the structure of the molecules, and the densities, can still be different. Still, water can act as a prism in air, and you get the same rainbow effect. Why? Because water and air have different molecular structures on either side of their boundary. Now the question is why do different wavelength photons bend differently at the edge of structurally different media? So shooting back the question again as a boomerang, why does a group of photons of a certain (same) wavelength bend in parallel (the same way) at the edge of a new medium? Why does the wavelength decide the angle? First of all, there are no two photons with exactly the same wavelength, as wavelength is continuous. But with our eyes we cannot tell the difference below a certain level, so let's say there are two photons that seem to have the same wavelength (for our viewing purposes). Why do these photons bend at the same angle at the edge? Why do you see a static image of the colors at the prism, and why don't the absolute angles of the different color photons randomly change? And why doesn't the relative angle randomly change? The answer can be classical or QM: the classical answer is that different wavelength photons travel at different speeds inside a denser medium; the QM answer is that the photons building up the EM wave interact with the lattice structure, and this elastic scattering decides the angle. Let me elaborate on the QM answer. When a photon interacts with an atom, three things can happen: elastic scattering, where the photon keeps its phase and energy and changes angle; inelastic scattering, where the photon keeps part of its energy and changes angle; absorption, where the photon gives all its energy to the atom. Now in the case of the prism, it is elastic scattering. This is the only way that the energy of the photons is kept, the phase is kept, and the relative energy and relative phase are kept.
Now with elastic scattering, as the photons in the white light, combining all the different wavelengths from the Sun and traveling parallel, reach the edge of the glass prism from the air, what happens is: the different wavelength photons start interacting with the lattice structure of the glass prism. Each wavelength's elastic scattering will produce a certain angle change for its photons. Photons with the same wavelength will interact similarly with the lattice structure and take on the same angle; these photons (same wavelength) will continue in one beam, separated from the other beams (other wavelengths). Now this is not the complete reality. Imagine the double slit experiment. The photon (shot one at a time) will have its partial waves pass through both slits and interfere with each other, sometimes constructively and sometimes destructively. The destructive interference will not be visible (just a void, a missing dot), creating a dark area. The constructive interference will create a visible bright area. In reality with the prism, the photons interact with the lattice structure of the glass, and the partial waves create interference. Some of the interference will be destructive, basically in all directions where you do not see the given beam (the photons of a given wavelength). Only at one angle (direction) will there be constructive interference, where you will see the photons of that wavelength as one beam. So the answer to your question, why different angles for different wavelengths, is that elastic scattering, as an interaction between the photons and the lattice atoms, will create an angle that is: mostly the same for similar wavelength photons, and different for different wavelength photons. It is all about probabilities at the QM level, but if you use a lot of photons, which build up the white light, most of the similar wavelengths will change angle by the same amount, and they will be visibly separated from the other wavelengths.
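The classical statement above, that different wavelengths travel at different speeds in the denser medium and so see different refractive indices, can be made quantitative with Snell's law. The sketch below uses Cauchy's empirical equation $n(\lambda) = A + B/\lambda^2$ with illustrative crown-glass-like coefficients (not exact values for any particular glass):

```python
import math

def cauchy_index(wavelength_um, A=1.5046, B=0.0042):
    """Cauchy's empirical dispersion relation n(lambda) = A + B / lambda^2
    (coefficients here are illustrative crown-glass-like values)."""
    return A + B / wavelength_um ** 2

def refraction_angle(theta_in_deg, n):
    """Snell's law for light entering the glass from air (n_air ~ 1)."""
    return math.degrees(math.asin(math.sin(math.radians(theta_in_deg)) / n))

for name, lam in [("red", 0.70), ("green", 0.55), ("blue", 0.45)]:
    n = cauchy_index(lam)
    print(f"{name}: n = {n:.4f}, 45 deg ray refracts to {refraction_angle(45.0, n):.3f} deg")
```

With these coefficients, blue light sees a larger $n$ and refracts more strongly than red, which is exactly the spreading into a rainbow.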
{ "domain": "physics.stackexchange", "id": 58726, "tags": "optics, visible-light, scattering, refraction, geometric-optics" }
What observational constraints are there in detecting the presence of volcanism on exoplanets?
Question: This question is somewhat related to my earlier question How are the compositional components of exoplanet atmospheres differentiated?, but this is about a specific surface-atmospheric phenomenon - volcanism. Using our solar system as a rough analogue (where, aside from Earth, Venus, Io and Triton have active volcanism, probably more), volcanism should be not uncommon amongst exoplanets given the right conditions. Obviously, we could probably only detect either massive shield or plateau volcanism, or prolonged volcanism from multiple vents. What observational constraints are there in detecting the presence of volcanism on exoplanets? Answer: Exoplanets are too far away to send probes to or to image directly. So there is no way to go there and say: there is a volcano. My guess is that we have to extrapolate from what we know works in the solar system. And we might just get a statistical probability that the planet is active. I would say that there are two cases: If the planet is rocky and near enough to the star or a giant planet, you might expect volcanism caused by tidal effects, similarly to what happens on Jupiter's moon Io. In a few words: the gravitational force of the nearby big planet/star deforms the planet and these deformations are converted to heat due to friction. If the gravitational forces are large enough, you can end up with volcanism. If you are able to measure the spectral lines in the planet's atmosphere you could try to identify signatures of gas or dust typically associated with volcanic activity. I don't know what you would want to look for, but my guess is that you want to search mostly in the infrared.
{ "domain": "astronomy.stackexchange", "id": 55, "tags": "exoplanet, observational-astronomy, surface, volcanism" }
Do electrons have no hair, like black holes?
Question: Does John Wheeler's conjecture that black holes have no hair apply to electrons? Can electrons have some hair that I can't see? Answer: In some sense electrons have even less hair than black holes do. Electrons are governed by quantum field theory and the Standard Model. All electrons are quantum excitations of the same quantum field. Therefore, all electrons in the universe are identical. They all have the same mass, same charge, and same spin. There is no freedom. However, this "baldness" of the electron has nothing to do with general relativity and the no-hair theorem for black holes. In fact, even if you were to try to describe an electron as a classical charged black hole, the no-hair theorem still would not apply to it. The electron's charge is insanely big for its mass. In geometric units, the electron charge is more than 20 orders of magnitude bigger than the electron mass. This means that when described as a Kerr-Newman solution in general relativity, the electron does not describe a black hole, but a naked singularity without a black hole horizon. The no-hair theorem in general relativity applies only to solutions with a horizon.
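The "20 orders of magnitude" claim is easy to check numerically. In geometric units a mass becomes the length $Gm/c^2$ and a charge the length $\sqrt{G k_e}\,q/c^2$; the sketch below (rounded constants) shows $Q/M \sim 10^{21}$ for the electron, so the Kerr-Newman horizon condition $M^2 \ge Q^2 + a^2$ fails badly:

```python
import math

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c2  = 8.988e16    # speed of light squared, m^2 s^-2
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31   # electron mass, kg
q_e = 1.602e-19   # elementary charge, C

mass_length = G * m_e / c2                       # electron mass as a length
charge_length = math.sqrt(G * k_e) * q_e / c2    # electron charge as a length

print(f"M ~ {mass_length:.1e} m, Q ~ {charge_length:.1e} m, "
      f"Q/M ~ {charge_length / mass_length:.1e}")
```

Since $Q \gg M$ (and any spin $a$ only makes it worse), the Kerr-Newman solution with the electron's parameters has no horizon: a naked singularity, outside the scope of the no-hair theorem.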
{ "domain": "physics.stackexchange", "id": 70410, "tags": "black-holes, electrons, identical-particles, no-hair-theorem" }
Why is 100% accuracy on test data not good?
Question: I was asked this question in an interview and wasn't able to give a satisfactory answer, neither up to the interviewer's expectations nor my own. The question was as above; he later gave an example: if my model predicted tomorrow's oil prices 100% accurately, why might that be bad, or why is a 100% accurate model bad - or is it? Is there something in the question or is there a deeper explanation? Answer: I see two ways to go: 1 - There is an error 2 - There is no error. 1 - Look for the error You have probably committed data leakage. You have added the target as one of the features and the model found out. The validation is not right: you have a time series and you have done random validation. Your test set has only a few instances, or a single one. The test instances are repeated from the training set. 2 - There is no error If the prediction is right and you have 100% accuracy, then there is no need to do machine learning. Open the model, find where it is taking the decision, and don't do machine learning - do classical modeling. For example if your model is a decision tree, just plot it or print it, get the decision rules, and apply them yourself. This sometimes happens when modeling a previously developed algorithm. The new ML model is able to learn what was going on before.
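The first failure mode, target leakage, is easy to demonstrate with a stdlib-only sketch (all names are illustrative): the label is accidentally copied into the feature set, a trivial stump "learns" that column, and test accuracy comes out at a suspicious 100%:

```python
import random

random.seed(0)
labels = [random.randint(0, 1) for _ in range(200)]
# Bug: the label itself is leaked in as the second feature.
rows = [(random.random(), y) for y in labels]

train_rows, train_y = rows[:150], labels[:150]
test_rows, test_y = rows[150:], labels[150:]

def fit_leaky_stump(rows, ys):
    """Pick the single feature column that best matches the target."""
    best = max(range(len(rows[0])),
               key=lambda j: sum(int(round(r[j])) == y for r, y in zip(rows, ys)))
    return lambda r: int(round(r[best]))

model = fit_leaky_stump(train_rows, train_y)
accuracy = sum(model(r) == y for r, y in zip(test_rows, test_y)) / len(test_y)
print(accuracy)  # 1.0 -- a perfect test score should trigger suspicion, not celebration
```

Inspecting which feature the "model" picked (here, the leaked column) is exactly the "open the model and find where it takes the decision" advice from the answer.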
{ "domain": "datascience.stackexchange", "id": 7383, "tags": "machine-learning, neural-network, deep-learning, statistics" }
Is color charge quantized?
Question: I was reading this stackexchange question, and found the answer to my question not totally answered. Clearly there is color and anti-color in analogy to electric charge, and color charge clearly cannot vary from color to anti-color. However can color (or anti-color) continuously vary between a red, green, and blue basis, or is it like wavelengths in atomic orbitals where in order to go from one color to another you have to emit the exact amount of color charge required? Answer: Naively, color can vary continuously between the colors according to a gauge transformation $\psi\mapsto \mathrm{e}^{\mathrm{i}\epsilon^a T^a}$ for some $\mathfrak{su}(3)$-valued object $\epsilon$; this is precisely the same as saying that a particle with electrical charge $e$ can vary continuously in phase according to $\psi\mapsto \mathrm{e}^{\mathrm{i}e\phi}\psi$. However, there is a crucial difference: The $\mathrm{U}(1)$ symmetry of electromagnetism is Abelian, and so all transformations with constant $\phi$ are global symmetry transformations that have no gauge character, since the gauge field does not change under such transformations. In contrast, the $\mathrm{SU}(3)$ symmetry group is non-Abelian, and even a constant $\epsilon$ changes the gauge field, unless it commutes with it. The set of elements of a non-Abelian group that commute with all others is called the center, and the center of $\mathrm{SU}(N)$ is the discrete group $\mathbb{Z}_N$. So while electrically charged matter retains a continuous $\mathrm{U}(1)$ symmetry even after eliminating the gauge, color-charged matter retains only a discrete $\mathbb{Z}_3$ symmetry. That is, if you eliminate the gauge (which, in general, we cannot do: Gribov ambiguities prevent us even in principle from doing so globally, and even then, we will face a loss of covariance) you will end up with a particular set of red/blue/green particles that can no longer transform into each other.
In this gauge-fixed world, you could think of color as a fixed property of each object, but this is not a useful intuitive picture to have. We describe the world through gauge theories precisely because the gauge-less description is not tractable. However, that there is a discrete global $\mathbb{Z}_3$ symmetry is a valuable insight, as this is what is actually broken in the Higgs mechanism, as explained in this answer by Dominic Else.
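As a small numerical illustration of the $\mathbb{Z}_3$ center mentioned above: the three matrices $\omega^k I_3$ with $\omega = e^{2\pi i/3}$ are unitary, have unit determinant, and trivially commute with every SU(3) element (a sketch, checking the scalar factors):

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity

for k in range(3):
    z = omega ** k
    # det(z * I_3) = z**3 for a 3x3 matrix; |z| = 1 makes z * I_3 unitary
    assert abs(abs(z) - 1) < 1e-12
    assert abs(z ** 3 - 1) < 1e-12
print("omega**k * I_3 lies in SU(3) for k = 0, 1, 2")
```

Only these three scalar multiples of the identity commute with all of SU(3), which is the discrete $\mathbb{Z}_3$ symmetry the answer says survives gauge fixing.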
{ "domain": "physics.stackexchange", "id": 54453, "tags": "gauge-theory, quantum-chromodynamics, yang-mills, discrete, color-charge" }
Where does the 2015 Nepal earthquake rank amongst earthquakes since 1900?
Question: Wikipedia provides a list of deadly earthquakes since 1900. Where would the April 2015 Nepal earthquake (magnitude currently estimated as 7.8 or 8.1) rank in this list, in terms of magnitude? Answer: About 80th. I counted the earthquakes of each magnitude in the List of deadly earthquakes... article. If the Nepal earthquake is about M7.8, then it's in the range 69th to 86th on that list. Note that there are more earthquakes than this in recent history; the list has already selected for deadliness. Also note the point in the comments about different magnitude measures in the list, and the fact that the list is a few years out of date. I don't think these things change the answer.
{ "domain": "earthscience.stackexchange", "id": 452, "tags": "seismology, earthquakes, seismic-hazards, history" }
Complex Scalar Field - Euler Lagrange equation
Question: I am trying to derive the equations of motion for a complex scalar field given by: $$L = \partial_\mu \phi^* \partial^\mu \phi - m^2 \phi^*\phi$$ Euler-Lagrange equation: $$\partial_\mu \frac{\delta L}{\delta(\partial_\mu \phi)}-\frac{\delta L}{\delta\phi} = 0.$$ From $\delta L / \delta\phi$ I get $\delta/\delta \phi (-m^2\phi^*\phi) = -m^2(1^*\cdot\phi + \phi^*\cdot1)$. From $\partial_\mu \delta L/\delta(\partial_\mu\phi)$ I get: $$\partial_\mu \frac{\delta}{\delta(\partial_\lambda \phi)}(\partial_\mu \phi^* \partial^\mu \phi) = \partial_\mu \frac{\delta}{\delta(\partial_\lambda \phi)}(\partial_\mu \phi^* g^{\mu\nu} \partial_\nu \phi) = \partial_\mu \big(\frac{\delta(\partial_\mu \phi^*)}{\delta(\partial_\lambda \phi)}g^{\mu\nu} \partial_\nu \phi + \partial_\mu \phi^* g^{\mu\nu}\frac{\delta( \partial_\nu \phi)}{\delta(\partial_\lambda \phi)} \big)=$$ $$= \partial_\mu \big( (\delta^\lambda_\mu)^*\cdot g^{\mu\nu}\partial_\nu \phi + \partial_\mu\phi^*g^{\mu\nu}\cdot \delta^\lambda_\nu \big) = \partial_\mu \big(g^{\lambda\nu}\partial_\nu \phi + \partial_\mu\phi^*g^{\mu\lambda}\big) =\partial_\lambda\big(\partial^\lambda\phi + \partial^\lambda\phi^*\big) =\partial^2(\phi+\phi^*)$$ Putting them together gives: $$ (\partial^2+m^2)(\phi+\phi^*) = 0 $$ Taking the same derivatives but with the complex conjugate provides the same equation. Now, the answer is supposed to be: $$\begin{cases} (\partial^2 + m^2)\phi = 0 \\ (\partial^2 + m^2)\phi^* = 0 \end{cases}$$ In other words, it seems that I must treat the complex conjugate $\phi^*$ constant when differentiating with regards to $\phi$ and vice versa. What am I missing...? Answer: 1) The fields $\phi$ and $\phi^*$ are independent, and must be varied independently. You thus have 2 equations of motion. If this is a little confusing, then note that there exists a suitable change of variables for real fields $\phi_1,\phi_2$ where $$\phi=\frac{1}{\sqrt{2}}(\phi_1+i\phi_2)$$, and the usual complex conjugate $\phi^*$. 
These can be inverted, of course. This converts your Lagrangian into a Lagrangian for 2 independent REAL scalar fields, each with their own equations of motion. Thus, there are really 2 degrees of freedom. 2) Another way to see this is that a priori, for any complex number $z$ you do not really know what $\bar{z}$ is until you can expand $z=x+iy$ and know both $x$ and $y$. Conversely, you can know $x$ and $y$ only if you know both $z$ and $\bar{z}$, as $x=(z+\bar{z})/2$ etc. Telling you that $z$ is complex doesn't tell you what $\bar{z}$ is. I must instead tell you $x,y$. Either way, you have 2 degrees of freedom: you can call them the complex $z,\bar{z}$ or the real $x,y$.
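Spelling out the first point: once $\phi$ and $\phi^*$ are varied independently, the Euler-Lagrange equation for $\phi^*$ alone gives

```latex
\frac{\partial L}{\partial \phi^*} = -m^2 \phi ,
\qquad
\frac{\partial L}{\partial(\partial_\mu \phi^*)} = \partial^\mu \phi
\quad\Longrightarrow\quad
\partial_\mu \partial^\mu \phi + m^2 \phi = (\partial^2 + m^2)\,\phi = 0 ,
```

and varying with respect to $\phi$ instead gives the conjugate equation $(\partial^2 + m^2)\phi^* = 0$; the spurious combination $\phi + \phi^*$ never appears.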
{ "domain": "physics.stackexchange", "id": 66277, "tags": "lagrangian-formalism, field-theory, complex-numbers, variational-calculus" }
Select in union of sorted arrays: Already known?
Question: I am looking for bibliographic references for the following algorithm/problem: I named it "BiSelect" or "t-ary Select" or "Select in Union of Sorted Arrays", but I guess it was introduced before under another name? Problem Consider the following problem: Given $k$ disjoint sorted arrays $A_1,\ldots, A_k$, of respective sizes $n_1,\ldots,n_k$, and an integer $t\in[1..\sum n_i]$, what is the $t$-th value of their sorted union $\cup_i A_i$? Solutions There is a very simple and elegant algorithm running in time $O(\lg\min\{n_1,n_2,t\})$ if $k=2$: if $k=2$, just compare $A_1[t/2]$ with $A_2[t/2]$ and recurse on $A_1[t/2..t]$ and $A_2[1..t/2]$ or $A_1[1..t/2]$ and $A_2[t/2..t]$ accordingly, in both cases with parameter $t/2$ (and some minor optimizations when $n_1$ or $n_2$ are smaller than $t$). This generalizes to a slightly more sophisticated algorithm running in time $O(k\lg t)$ for larger values of $k$, based on computing the median of the values $A_i[t/k]$ for $i\in[1..k]$: the $t/k$ smallest elements can be further ignored in the $k/2$ arrays where $A_i[t/k]$ is smaller than the median, and the elements of ranks in $[t-t/k..]$ can be further ignored in the $k/2$ other arrays, resulting in a halving of $t$ in each recurrence (and a cost of $O(k)$ for the median). References? I am happy with my solution(s), but I suppose that the problem (and its solution) was already known. It is related to the linear time algorithm for computing the median (by sorting groups of size $5$, and recursing on the median of their middles), but is slightly more general. I asked several colleagues at Madalgo in Aarhus (Denmark), and then some others at the workshop Stringology (Rouen), without success: I am hoping that someone more knowledgeable might help on Stack Exchange...
Motivations Solutions to this problem have applications to Deferred Data Structures on arrays (indeed, it can be seen as an operator in a deferred data structure for the union of sorted arrays); and in a more convoluted way, to the adaptive computation of optimal prefix free codes. Answer: The algorithm described by Frederickson and Johnson in 1982 assumes that all sets have the same size. They also described in 1980 an optimal solution that takes advantage of the different sizes of the sorted sets. The complexity of this algorithm is within $O(k + \sum^k_{i=1}\log{n_i})$. Reference Greg N. Frederickson and Donald B. Johnson. 1980. Generalized selection and ranking (Preliminary Version). In Proceedings of the twelfth annual ACM symposium on Theory of computing (STOC '80). ACM, New York, NY, USA, 420-428. DOI=10.1145/800141.804690 http://doi.acm.org/10.1145/800141.804690
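For reference, the simple $k=2$ algorithm from the question can be sketched as follows (1-indexed rank $t$; the list slicing keeps the sketch short at the cost of copies - an index-based version avoids them and achieves the stated $O(\lg\min\{n_1,n_2,t\})$ bound):

```python
def select_union(a, b, t):
    """Return the t-th smallest (1-indexed) element of the union of two
    disjoint sorted lists, halving the remaining rank at each step."""
    if not a:
        return b[t - 1]
    if not b:
        return a[t - 1]
    ia, ib = len(a) // 2, len(b) // 2
    if ia + ib < t - 1:
        # The t-th element cannot lie in the lower half of the list
        # whose middle value is smaller, so discard that half.
        if a[ia] > b[ib]:
            return select_union(a, b[ib + 1:], t - ib - 1)
        return select_union(a[ia + 1:], b, t - ia - 1)
    # Otherwise it cannot lie in the upper half of the list
    # whose middle value is larger.
    if a[ia] > b[ib]:
        return select_union(a[:ia], b, t)
    return select_union(a, b[:ib], t)

print(select_union([1, 4, 9, 16], [2, 3, 5, 7, 11], 5))  # 5
```

Each recursion discards about half of the remaining rank from one of the arrays, which is the halving argument from the question.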
{ "domain": "cstheory.stackexchange", "id": 3349, "tags": "reference-request" }
Number to words in left to right direction
Question: I have faced this question in an interview: If we enter a number it will convert into individual strings and display them in the same order. Ex: If I enter 564 then the output should be FIVE SIX FOUR. I have written the following working code (of course it has some limitations). Suggest any possible ways of optimizing this code. public class NoToWord { public static void main(String[] a) { int no = 0; String[] words= new String[]{"ZERO","ONE","TWO","THREE","FOUR","FIVE","SIX","SEVEN","EIGHT","NINE"}; String[] word=new String[10]; BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); try { no = Integer.parseInt(br.readLine()); } catch (NumberFormatException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } int i,j; j=0; while(no>0){ i = no%10; no /= 10; word[j] = words[i]; j++; } for(int k= word.length;k>0;k--) { System.out.println(word[k-1]+" "); } } } Answer: This code: int i,j; j=0; while(no>0){ i = no%10; no /= 10; word[j] = words[i]; j++; } for(int k= word.length;k>0;k--) { System.out.println(word[k-1]+" "); } can be drastically simplified to: String numbers = String.valueOf(no); for(int i = 0 ;i<numbers.length(); i++){ System.out.print(words[Character.getNumericValue(numbers.charAt(i))]+" "); } Or you may save the user input as String (but make sure it only contains digits). You will have something like this: String numbers = br.readLine(); for(int i = 0 ;i<numbers.length(); i++){ System.out.print(words[Character.getNumericValue(numbers.charAt(i))]+" "); }
{ "domain": "codereview.stackexchange", "id": 8469, "tags": "java, interview-questions, converting, numbers-to-words" }
Is it possible that all neutron stars are actually pulsars?
Question: I'm assuming that what I've been told is true: We can only detect pulsars if their beams of electromagnetic radiation are directed towards Earth. That pulsars are the same as neutron stars, only that they emit beams of EM radiation out of their magnetic poles. So, isn't it possible that neutron stars emit EM radiation in the same fashion as pulsars, just not in the right direction for us to detect it? Answer: Pulsars are a label we apply to neutron stars that have been observed to "pulse" radio and x-ray emissions. Although all pulsars are neutron stars, not all pulsars are the same. Three distinct classes of pulsars are currently known: rotation-powered, where the loss of rotational energy of the star provides the power; accretion-powered pulsars, where the gravitational potential energy of accreted matter is the power source; and magnetars, where the decay of an extremely strong magnetic field provides the electromagnetic power. Recent observations with the Fermi Space Telescope have discovered a subclass of rotationally-powered pulsars that emit only gamma rays rather than X-rays. Only 18 examples of this new class of pulsar are known. While each of these classes of pulsar and the physics underlying them are quite different, the behaviour as seen from Earth is quite similar. Since pulsars appear to pulse because they rotate, and it's impossible for the initial stellar collapse which forms a neutron star not to impart angular momentum to the core during its gravitational collapse phase, it's a given that all neutron stars rotate. However, neutron star rotation does slow down over time. So non-rotating neutron stars are at least possible. Hence not all neutron stars will necessarily be pulsars, but most will. However practically, the definition of a pulsar is a "neutron star where we observe pulsations" rather than a distinct type of behaviour. So the answer is of necessity somewhat ambiguous.
{ "domain": "physics.stackexchange", "id": 9333, "tags": "electromagnetic-radiation, astrophysics, neutron-stars, pulsars" }
Network arithmetic, finding the next host for the given
Question: Disclaimer: It's literally the second time I write in Haskell. The purpose of this module is to provide some basic operations with IP addresses (and networks). At the moment I only implemented the successor operation, that returns a next address in the range if it exists. Eg: for 192.168.1.1 in 192.168.1.0/24 it would return Maybe 192.168.1.1 And for 192.168.1.255 (which is the last valid IP address in the given network) it would return Nothing. module Main where import Data.IP (IPv4, AddrRange(addr), fromIPv4, toIPv4, isMatchedTo, makeAddrRange) import Data.Bits (shiftL, shiftR, (.|.), (.&.)) import Data.List (foldl') import Text.Read (readMaybe) import qualified Safe as S parse :: String -> Maybe IPv4 parse = readMaybe toInt :: IPv4 -> Int toInt i = foldl' accum 0 (fromIPv4 i) where accum a v = (a `shiftL` 8) .|. v fromInt :: Int -> IPv4 fromInt i = toIPv4 $ map (\s -> (i `shiftR` s) .&. 0xff) [24, 16, 8, 0] addressesInNetwork :: AddrRange IPv4 -> [IPv4] addressesInNetwork i = takeWhile (`isMatchedTo` i) adds where net = toInt $ addr i adds = map (\s -> fromInt $ net + s) [0..] successor :: Int -> IPv4 -> Maybe IPv4 successor r a = S.headMay $ dropWhile (<= a) $ addressesInNetwork $ makeAddrRange a r net24 :: IPv4 -> Maybe IPv4 net24 = successor 24 main::IO() main = print $ parse "192.168.1.201" >>= net24 What I don't like: it just looks terrible. I look on it and see Perl, not Haskell. Have I done something idiomatically wrong or my sense of beauty is broken? Answer: I think what you've missed is the Enum instance for the IPv4 datatype from iproute. Using methods from Enum and Bounded you can eliminate the use of Data.Bits in your code. Because IPv4 is Enum, your toInt method is just fromEnum :: (Enum a) => a -> Int. And similarly your fromInt is toEnum. addressesInNetwork is largely the same logic, you just let someone else do all the work. 
addressesInNetwork :: AddrRange IPv4 -> [IPv4] addressesInNetwork range = takeWhile (`isMatchedTo` range) [low..] where (low, _) = addrRangePair range I would use addrRangePair instead of the addr field accessor for two reasons. First, addr doesn't show up in the Haddock documentation for Data.IP which is a bit confusing. Second, on the off-chance that the author of iproute begins hiding the constructors of AddrRange from export (which is very common practice in Haskell so that library authors can change implementations without breaking users' code) you'll be safe from breaking changes. Knowing the Enum instance and available Data.IP functions, you can now write a clearer version of successor. Unfortunately we have to be a bit cautious of bounds, Enum throws an exception if you try to get the successor of maxBound, but it's not too bad. successor :: AddrRange IPv4 -> IPv4 -> Maybe IPv4 successor range ip | ip == maxBound = Nothing | succ ip `isMatchedTo` range = Just $ succ ip | otherwise = Nothing Notice I changed the type signature of your function to take an AddrRange instead of a mask length. This is kind of a separation of concerns issue, successor doesn't care about the mask length, what matters is whether an IP falls within the range it defines. So net24 would be implemented as— net24 :: IPv4 -> Maybe IPv4 net24 ip = successor (makeAddrRange ip 24) ip (That's kind of a strange name though, maybe successorMask24 instead?)
{ "domain": "codereview.stackexchange", "id": 12897, "tags": "beginner, haskell, networking" }
How to write quark composition of $\rm SU(3)$ mesons?
Question: In $\rm SU(2)$, taking the up quark and down quark as a doublet we can easily apply the isospin ladder operators to write the combinations of 2 quark or 3 quark (baryon) systems. In the $\rm SU(3)$ quark model, to get light pseudoscalar mesons, we need to combine a triplet and antitriplet to form an octet and singlet. But how to explicitly write down the states? E.g. the singlet state is $$|\eta'\rangle = \frac{|u \bar u\rangle + |d\bar d\rangle + |s \bar s\rangle }{\sqrt{3}}$$ It can be verified that this is indeed a singlet by the operation $\hat{T}_{\pm}|\eta'\rangle=0$, where $\hat{T}_{\pm}$ are the isospin ladder operators. From the condition that it should be a $Y=0,T_3=0$ state, we can find that the state is a linear combination of $|u \bar u\rangle, |d\bar d\rangle$ and $|s \bar s\rangle$. How to find the coefficients? In $\rm SU(2)$ the singlet state could be found by requiring orthogonality with the triplet. So the problem becomes evaluating the quark compositions for all the octet states, so that we can find the singlet by orthogonality. The quark compositions at the vertices of the meson hexagon in the eightfold-way weight diagram of the pseudoscalar mesons are easy, but how to get those at the center? My approach: By applying ladder operators we get 6 linearly dependent states since there are 6 ladder operators $T_{\pm},U_{\pm},V_{\pm}$, but we should get 2 states, because we already got 6 at the vertices of the hexagon; to complete the octet we need 2 more. In general how to obtain all the quark compositions of flavour states in the nonet systematically, and how to do the same for vector mesons, preferably without invoking QCD? Answer: In point of fact the 3 central members of octets (+singlet $\leadsto$ nonets) are not the ideal states you find in the pseudoscalars, as QCD effects weird mixings: a very different question. But the pseudoscalars are ideal and easy and the ladder method you have in mind of course works.
You got the six outside pseudoscalars, so let us focus on the $|\pi^+\rangle = |u\bar{d}\rangle$ and $|K^+\rangle=|u\bar{s}\rangle$. Application of $T_-$ on $|\pi^+\rangle$ yields the neutral member of the isotriplet, $$|\pi^0 \rangle = \frac{|u\bar{u} \rangle- |d\bar{d}\rangle}{\sqrt{2}},$$ which you may likewise lower to the third isotriplet member $|\pi^-\rangle = |d\bar{u}\rangle$. Now, there are two more combinations with the same quark content orthogonal to that $|\pi^0 \rangle$: both isosinglets, $$|\eta'\rangle = \frac{|u \bar u\rangle + |d\bar d\rangle + |s \bar s\rangle }{\sqrt{3}}\\ |\eta\rangle = {\frac{|u\bar{u}\rangle + |d\bar{d}\rangle - 2|s\bar{s}\rangle}{\sqrt{6}}} , $$ corresponding to the traceful SU(3) singlet I, and the traceless $\lambda_8$, respectively. You are asking how to determine the relative coefficients of their summands. Both are annihilated by $T_+$; but only one is annihilated by $V_+$, which sends an s to a u, and the converse for their conjugates with a minus sign, $$ V_+|\eta'\rangle=0, \qquad V_+|\eta\rangle \propto |K^+\rangle . $$ So you can see the η' is a T, U, V singlet, i.e. an SU(3) singlet, as stated, and the η, the state orthogonal to the other two, is an isosinglet, but still firmly in the octet: it connects to four outer states of the octet by suitable raising and lowering operators, as illustrated. That's why it corresponds to the traceless Gell-Mann matrix mentioned. Convince yourself these are the only coefficient arrangements with these properties.
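The ladder-operator bookkeeping above is mechanical enough to check numerically. Below is a minimal sketch (not from the answer) that represents $q\bar{q}$ states as dictionaries and implements the raising operators with the conjugate-representation minus sign described above; it confirms that both isosinglets are annihilated by $T_+$, while $V_+$ kills only the η'.

```python
from math import isclose, sqrt

def apply_ladder(state, raise_q, lower_q):
    """Apply a raising operator that sends quark lower_q -> raise_q.
    On antiquarks (conjugate rep) it sends raise_q-bar -> -(lower_q-bar).
    States are dicts mapping (quark, antiquark) -> amplitude."""
    out = {}
    for (q, aq), amp in state.items():
        if q == lower_q:                      # act on the quark
            out[(raise_q, aq)] = out.get((raise_q, aq), 0) + amp
        if aq == raise_q:                     # act on the antiquark, with a sign
            out[(q, lower_q)] = out.get((q, lower_q), 0) - amp
    return {k: v for k, v in out.items() if not isclose(v, 0, abs_tol=1e-12)}

T_plus = lambda s: apply_ladder(s, 'u', 'd')   # isospin raising: d -> u
V_plus = lambda s: apply_ladder(s, 'u', 's')   # V-spin raising: s -> u

eta_prime = {('u', 'u'): 1/sqrt(3), ('d', 'd'): 1/sqrt(3), ('s', 's'): 1/sqrt(3)}
eta       = {('u', 'u'): 1/sqrt(6), ('d', 'd'): 1/sqrt(6), ('s', 's'): -2/sqrt(6)}

print(T_plus(eta_prime))   # {}  (annihilated)
print(T_plus(eta))         # {}  (annihilated)
print(V_plus(eta_prime))   # {}  (annihilated: a full SU(3) singlet)
print(V_plus(eta))         # a multiple of |u s-bar>, i.e. proportional to |K+>
```

Running it reproduces the pattern claimed in the answer: the η' is killed by every raising operator, while the η survives $V_+$ and lands on the $|K^+\rangle$ direction.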
{ "domain": "physics.stackexchange", "id": 72013, "tags": "particle-physics, group-theory, quarks, mesons" }
How to avoid the axis jumping when the orientation is constrained
Question: Hi. The end-effector orientation constraint is now implemented in my offline simulation in MoveIt. It can be seen that the orientation of the end-effector is kept horizontal to the ground. But during the motion, several axes of the UR-10 jump anyway. Do you know how to avoid the unexpected poses? (video) On the other hand, I am using TRAC-IK with the UR-10 and the solve_type is invariably set to 'Speed' at runtime, even though it has been changed to 'Distance' in kinematics.yaml, when running the C++ code. Will this setting affect the axis jumping? [ INFO] [1511333074.249636452]: Looking in private handle: /move_group_interface_tutorial for param name: arm/position_only_ik [ INFO] [1511333074.250346507]: Looking in private handle: /move_group_interface_tutorial for param name: arm/solve_type [ INFO] [1511333074.251106283]: Using solve type Speed Originally posted by pdmike on ROS Answers with karma: 43 on 2017-11-22 Post score: 0 Answer: Late response, but I believe this is related to a known MoveIt namespace issue, reported here: https://bitbucket.org/traclabs/trac_ik/issues/25 I resolved it on my system by adding the following to my movegroup.launch file: <group ns="move_group"> <rosparam command="load" file="$(find MY_PACKAGE)/config/kinematics.yaml"/> </group> This ensured the correct solve type was used. Originally posted by rmck with karma: 147 on 2018-01-10 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by pdmike on 2018-01-10: Yes, the kinematics parameters are correctly loaded when starting the launch file. But after that, when I run a C++ node, it still changes back to solve type Speed. You can see the last post in the linked thread in Nov is mine. Comment by rmck on 2018-01-10: Ahhh right, I see. I also added the same load command to my C++ launch file. That resolved the issue for me. It seems to be a namespace problem. 
If you ensure the parameters are loaded in the same namespace as your C++ component it should work.
{ "domain": "robotics.stackexchange", "id": 29421, "tags": "ros, moveit, ik" }
How much mass has humanity added to the planet?
Question: Could anyone work out how much mass (not weight :) ) humanity has added to the Earth? Seeing that the LHC exists (ATLAS, I believe, weighs 7000 tonnes), and the Three Gorges dam in China lengthened the Earth's day by about 0.06 microseconds: how much has the collective building of humanity added to the mass of the Earth? Answer: All mass used in construction on Earth comes from material on Earth. Since mass is conserved, there is no change. If we start harvesting material from the moon or asteroids for use in construction on Earth, then there would be a net increase in mass. But currently, the net mass of Earth has decreased due to man, since we have launched many satellites into space which came from material on Earth.
{ "domain": "earthscience.stackexchange", "id": 1272, "tags": "human-influence" }
Father with mutated mtDNA- why isn't his offspring at risk?
Question: Mothers transmit their mitochondria (and therefore mtDNA) to their offspring and fathers don't. Let's assume that the father had a mutation in his mtDNA; would his offspring then be at risk? Why? I also found the following statement: "The current genetic advice is that fathers with mtDNA mutations are at no risk of transmitting the defect to their offspring." How can that be true? Is it because of gene silencing? Thank you in advance! Answer: ...would his offspring then be at risk? Why? No. Generally speaking, fathers do not pass on their mtDNA (mitochondrial DNA). Why? Because the mitochondria present in oocytes (egg cells) are the mother's, as every oocyte directly inherits the mother's mitochondria when it is made in the reproductive organs. The mitochondria that the sperm from the father carries to the egg do not enter the egg cell, or are destroyed in the process. It's also worth mentioning that, in general, mtDNA does NOT reside in the nucleus of cells, but in the mitochondria themselves. It is not condensed during cell division, it is not spliced during meiosis II, and it does not undergo recombination with another cell's mtDNA. Instead, when a cell divides, each daughter cell takes about half of the mitochondria present in the parent cell and maintains them. That way only the mitochondria present in the cell before division will be inherited by the daughter cells, and thus only the maternal mitochondria present in oocytes (egg cells) before the sperm instigates cell division will be inherited by any offspring.
{ "domain": "biology.stackexchange", "id": 3355, "tags": "genetics, gene-expression, human-genetics, mitochondria, gene" }
$E$ in a solid uniformly charged conductor: Is my reasoning here correct?
Question: Suppose we take a spherical conductor which contains both positive and negative charges but as a whole is electrically neutral and is not under the influence of any external electric field. We can say that the field $E$ outside the conductor is zero by Gauss's law if we take the Gaussian surface outside the solid conductor, since the charge enclosed by the Gaussian surface is zero. But if I take the Gaussian surface inside the conductor, centred on the centre of the conductor, the charge enclosed by that Gaussian surface would still be zero, as the enclosed volume still has equal positive and negative charges; hence the net enclosed charge is zero and hence the electric field $E$ inside the conductor is zero. If the conductor were composed of only positive or only negative charges, then there would be a field $E$ inside the conductor, as the charge enclosed by a Gaussian surface would not be zero, since the conductor contains only one type of charge. The value of this electric field could easily be calculated using Gauss's law. Answer: No, the E-field within the conductor, and by extension the E-field on the Gaussian surface, will be zero, regardless of the net charge on the conductor. This is true as long as the radius of the Gaussian surface is less than that of the conductor. If the conductor carries a net charge, this charge will flow to the surface of the conductor. This is what conductors do: charge is free to move around inside the conductor. The charge collects at the outer surface because the conductor must have the same electrostatic potential everywhere, and hence the E-field inside must be zero. The charge will affect the E-field outside the conductor. A positive net charge will cause an outward E-field and a negative charge an inward E-field.
{ "domain": "physics.stackexchange", "id": 59739, "tags": "electrostatics, electric-fields, gauss-law, conductors" }
2D path following robot, converting XY axis path to input on wheels
Question: At the moment I am creating an Android program that will steer my simple 3-wheel robot (2 motors, plus 1 wheel for balance) to move online, following the path drawn by the user on his screen. The robot is operated through WiFi and has 2 motors that will react to any input signals. Imagine the user drawing a path for this robot on a smartphone screen. The app has acquired all the points on the XY axes, with every path beginning at (0,0). Still I have no idea how to "convert" mere points into voltage input for both motors. Signals will be sent at approx. 60 Hz, so quite fast. Maybe not every single point will be taken into consideration (there will surely be some skips), but that is irrelevant, since this path does not have to be followed perfectly by the robot, just within a reasonable error margin. Do you have any idea how to make the robot follow defined points that together form a path? Edit 10.01: The voltage will be computed by the robot, so the input on both motors is between -255 and 255, and the velocity should increase or decrease linearly within those bounds. Additionally, I would like to solve it as if there were perfect conditions; I don't need any crazy feedback models. Let's assume that all the data is true: no sensors and additional devices. Just the XY path and the required input (omit wheel slip too). Answer: You're attempting to move a robot along a predefined path without the aid of sensors, so really we just need to convert the list of points into a list of pre-scripted actions. Convert input points to $({\Delta}x, {\Delta}y)$ pairs Convert $({\Delta}x, {\Delta}y)$ pairs to $(\Delta\theta, d)$ pairs Convert $(\Delta\theta, d)$ pairs to $(V_{left}, V_{right}, {\Delta}t)$ tuples The first step is easy -- simple subtraction. The second step is also fairly straightforward: use the Pythagorean theorem for the distance and the atan2 function for the angle (then just keep track of the last $\theta$ so you can calculate $\Delta\theta$). The last step is a little tricky. 
You want to convert that set of angles and distances to the left and right wheel voltages, and the time to run them. This will actually give you two $(V_{left}, V_{right}, {\Delta}t)$ tuples for every $(\Delta\theta, d)$ pair: one to change the angle, and one to travel the distance. For a given width $w$ between the wheels, the change in the angle (radians) will be based on the movements of the right and left wheel: $$ \Delta \theta = \frac{1}{w} \left( {\Delta}d_{left} - {\Delta}d_{right}\right) $$ You'll have to decide what voltages and $\Delta t$ will produce that change in distance for your robot. Next, you'll do the same calculation for $d$. Both motors will use the same voltage to (hopefully) travel the same distance. That should produce a list of times and voltages that approximate the line you drew.
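The three conversion steps can be sketched in a few lines of Python. This is a minimal open-loop sketch, not the answer's own code; the wheel separation `w`, the linear metres-per-volt-second constant `k`, and the cruise command `v_cmd` are made-up parameters, and the turn uses the answer's sign convention $\Delta\theta = (\Delta d_{left} - \Delta d_{right})/w$.

```python
from math import atan2, hypot, pi

def path_to_commands(points, w=0.2, k=0.001, v_cmd=200):
    """Convert XY waypoints to a list of (V_left, V_right, dt) tuples.
    w: wheel separation [m]; k: metres of wheel travel per volt-second
    (assumed linear motor model); v_cmd: command magnitude in [-255, 255]."""
    commands = []
    theta = 0.0                                   # robot starts facing +x
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0                 # step 1: deltas
        d_theta = atan2(dy, dx) - theta           # step 2: heading change...
        d_theta = (d_theta + pi) % (2 * pi) - pi  # ...wrapped to (-pi, pi]
        d = hypot(dx, dy)                         # ...and distance
        theta += d_theta
        # step 3a: turn in place.  With d_theta = (dl - dr)/w, a pure
        # rotation needs dl = -dr = w*d_theta/2 of wheel travel.
        if d_theta != 0:
            dt_turn = abs(w * d_theta / 2) / (k * v_cmd)
            s = 1 if d_theta > 0 else -1
            commands.append((s * v_cmd, -s * v_cmd, dt_turn))
        # step 3b: drive straight for distance d with equal voltages.
        if d > 0:
            commands.append((v_cmd, v_cmd, d / (k * v_cmd)))
    return commands

cmds = path_to_commands([(0, 0), (1, 0), (1, 1)])
# first segment: straight 1 m; second: 90-degree turn, then straight 1 m
```

Each waypoint pair thus becomes at most two scripted actions, exactly as the answer describes: one rotation tuple and one translation tuple.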
{ "domain": "robotics.stackexchange", "id": 292, "tags": "wheeled-robot, wifi, two-wheeled" }
How can the area be a vector in the equation $\Phi=BA\cosθ$
Question: I'm a high school student. We were covering magnetic flux in school and the formula is $\Phi=BA\cos\theta$. Please bear with me. My issue lies in the $\cos\theta$. I'm assuming here we treated the area as a vector, but how can we treat the area as a vector? And how can the area decrease due to a change in orientation? I really want to understand the math behind it, because if it is treated as a vector, won't the area have two components? Again, I'm not even sure if the area is even a vector. I'm not even sure what I am asking; I'm just fed up with trying to understand physics, it's too taxing. Answer: I'm assuming here we treated the area as a vector but how can we treat the area as a vector? Yes, area is treated as a vector in physics. You will come across another law, called Gauss's law, which also treats area as a vector. and how can the area decrease due to change in orientation? The area does not change. I am assuming that you are not familiar with the concept of flux. Basically, the number of magnetic field lines (in your case) passing through the given area changes when the orientation is changed (second diagram). I really want to understand the math behind it because if it is treated as a vector won't the area have two components? Since area is a vector, it can be resolved into components. While finding flux, we take the component of the area vector directed parallel to the electric/magnetic field lines. $|\vec{A}|\cos\theta$ gives the effective area in the direction of the field lines. EDIT: I forgot to add vector signs in the picture.
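Concretely, the "area vector" is just the unit normal of the surface scaled by the (unchanging) scalar area, and the flux formula is a dot product. A quick numerical check with made-up numbers (0.5 T field, 2 m² loop tilted 60° from the field):

```python
import numpy as np

B = np.array([0.0, 0.0, 0.5])                            # uniform 0.5 T field along z
n = np.array([np.sin(np.pi / 3), 0.0, np.cos(np.pi / 3)])  # unit normal, 60 deg from z
A = 2.0 * n                                              # area vector: 2 m^2 times the normal

phi_dot = np.dot(B, A)                   # flux as a dot product, B . A
phi_cos = 0.5 * 2.0 * np.cos(np.pi / 3)  # the same thing via B*A*cos(theta)
print(phi_dot, phi_cos)                  # both 0.5 Wb
```

The scalar area (2 m²) never changes with orientation; only its component along the field, $|\vec A|\cos\theta$, does, which is why the flux shrinks as the loop tilts.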
{ "domain": "physics.stackexchange", "id": 73361, "tags": "electromagnetism, electricity, vectors" }
Average Regret Bounds for Linear Stochastic Bandits
Question: I am reading this paper on linear stochastic bandits: http://papers.nips.cc/paper/4417-improved-algorithms-for-linear-stochastic-bandits.pdf All the results are stated in a high-probability setting; that is, $\delta$ is a parameter to the algorithm, and the regret at time $T$ is such that \begin{align*} \mathbb{P}(R_T \leq f(T) ) \geq 1 - \delta \end{align*} Is there any resource where the bounds are obtained in the form $\mathbb{E}[R_T] \leq f(T)$? Trivially replacing $\delta = \frac{1}{T}$ does not give an anytime algorithm; that is, the bounds don't hold for all $t > 0$. Answer: Using the "doubling trick," you can turn your favorite high-probability algorithm (e.g. LinUCB) into an anytime algorithm with expected regret as a function of $T$, usually giving a $\tilde{O}(\sqrt{T})$ dependence without knowing $T$ in advance. This turns out to be tight, i.e. in Chu et al. (2011), we give lower bounds on expected regret of $\Theta(\sqrt{dT})$ for the $d$-dimensional linear stochastic bandit problem. Hence, the expected regret, in the worst case, will indeed still be a function of $T$.
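The doubling trick the answer alludes to is just a restart schedule: rerun the fixed-horizon, high-probability algorithm on epochs of length $2^k$, with the confidence parameter tied to the epoch length (here $\delta_k = 1/2^k$, one common choice; the base algorithm itself is a stub). A sketch, not any paper's reference implementation:

```python
def doubling_trick(run_epoch, total_steps):
    """Run an unknown-horizon bandit loop by restarting a fixed-horizon,
    high-probability algorithm on epochs of length 2^k, k = 0, 1, 2, ...
    `run_epoch(horizon, delta)` stands in for the base algorithm; here we
    only track the restart schedule."""
    t, k, schedule = 0, 0, []
    while t < total_steps:
        horizon = 2 ** k
        delta_k = 1.0 / horizon          # confidence shrinks with epoch length
        steps = min(horizon, total_steps - t)
        run_epoch(steps, delta_k)
        schedule.append((horizon, steps, delta_k))
        t += steps
        k += 1
    return schedule

sched = doubling_trick(lambda h, d: None, total_steps=100)
# epochs of length 1, 2, 4, 8, 16, 32 and a truncated 37-step final epoch
```

Since each epoch contributes regret $\tilde O(\sqrt{2^k})$ and $\sum_{k \le \log T} \sqrt{2^k} = O(\sqrt{T})$, summing the per-epoch bounds (plus a union bound over the $\delta_k$) preserves the $\tilde O(\sqrt{T})$ rate without knowing $T$ in advance.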
{ "domain": "cstheory.stackexchange", "id": 3509, "tags": "pr.probability, lg.learning, online-learning" }
What can an entanglement witness tell me, exactly?
Question: Let's say I encounter particles and I want to know, with certainty, if they are entangled. Can an entanglement witness help me determine this? More specifically: 1. Can I prepare a witness at any time to determine this? Even after the entanglement event occurred? 2. Will the entanglement witness tell me if the particles are entangled with 100% certainty? And not give me a false positive? Answer: The notion of a "witness" is that it is a measure which will not give a false positive. That is: If the witness reports there is entanglement, then there is (assuming the experiment has not gone wrong in some way). If the witness does not report there is entanglement, then there may or may not be entanglement present. The study of entanglement witnesses is itself quite complex, so it's hard to answer your further questions; it depends on the scenario. In the standard Bell-state case, you have a pair of particles and you want to know whether or not they are in a product state. I think that if you only have access to one copy of a pair of particles in some given state, it won't be possible to determine whether or not they are entangled, because for each entangled state $|\psi\rangle$, there are product states $| u \rangle$ such that $\langle u | \psi \rangle \ne 0$, so there is always a product state which can partially 'mimic' an entangled state.
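A standard textbook example (not from the answer) makes the one-sidedness concrete: for two qubits, $W = \tfrac{1}{2}I - |\Phi^+\rangle\langle\Phi^+|$ is a witness, since $\mathrm{Tr}(W\rho) \geq 0$ for every separable $\rho$. A negative expectation value certifies entanglement; a non-negative one is simply inconclusive.

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)        # Bell state |Phi+>
W = 0.5 * np.eye(4) - np.outer(phi_plus, phi_plus)    # witness W = I/2 - |Phi+><Phi+|

rho_bell = np.outer(phi_plus, phi_plus)               # an entangled state
rho_mixed = np.eye(4) / 4                             # separable (maximally mixed)

print(np.trace(W @ rho_bell))    # -0.5 < 0: entanglement certified
print(np.trace(W @ rho_mixed))   # +0.25 >= 0: inconclusive, and indeed separable
```

Note also that evaluating $\mathrm{Tr}(W\rho)$ in the lab requires averaging measurements over many copies of the state, consistent with the answer's remark that a single copy cannot settle the question.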
{ "domain": "physics.stackexchange", "id": 59554, "tags": "quantum-entanglement" }
What is the electron count in this nickel complex?
Question: Our class has started learning about electron counting using the ionic method. I was having a little difficulty, especially when there are two metals in one complex, so I looked at Wikipedia for help. The Wikipedia page for organonickel complexes says: In $\ce{(allyl)2Ni2Br2}$ and $\ce{(allyl)Ni(C5H5)}$, nickel is assigned to oxidation number +2, and the electron counts are 16 and 18, respectively. I understand that in $\ce{(allyl)Ni(C5H5)}$, the allyl and cyclopentadienyl ligands both have a $-1$ "charge", so the nickel has a +2 "charge". Thus, the electron count is: $4 + 8 + 6 = 18$. However, I am struggling to calculate the electron count in $\ce{(allyl)2Ni2Br2}$. Using the same logic, $\ce{Ni}$ is in the +2 oxidation state. Since the structure (see below) does not have a metal-metal bond, we do not have to add $1$ when counting electrons. So, I think the electron count should be $\frac{4*2 + 8*2 + 2*2}{2} = 14$, not $16$. How does Wikipedia get that $\ce{Ni}$ has an electron count of $16$ in $\ce{(allyl)2Ni2Br2}$? Answer: The bromide ions contribute an electron pair on each side of the square they form with the nickel centers (thus the bromine has a positive formal charge like a bridged bromonium ion), so in your fraction you need $4×2$ in the numerator where you have $2×2$. Then it will come out to $16$.
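Spelled out, with each of the two bridging bromides donating one electron pair ($2\,e^-$) to each of the two nickel centres as the answer prescribes, the per-metal count becomes:

```latex
\frac{\overbrace{4\times 2}^{\text{allyl}} \;+\; \overbrace{8\times 2}^{\mathrm{Ni^{2+}},\,d^8} \;+\; \overbrace{4\times 2}^{\mu\text{-Br}}}{2} \;=\; \frac{32}{2} \;=\; 16
```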
{ "domain": "chemistry.stackexchange", "id": 13884, "tags": "coordination-compounds, transition-metals" }
Unknown CMake command 'get_executable_path'
Question: On Ubuntu 22.04 with ROS 2 Humble, I am building RMF from source. When I ran colcon build I got this error from the rmf_fleet_msgs package: Unknown CMake command 'get_executable_path' Originally posted by Vignesht.tech on ROS Answers with karma: 16 on 2023-05-16 Post score: 0 Answer: Solved by uninstalling ROS 2 Humble completely and reinstalling it; now the problem is solved. ROS 2 Humble was probably not installed properly the first time. Originally posted by Vignesht.tech with karma: 16 on 2023-05-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38386, "tags": "ros2" }
How can contact binaries persist?
Question: This answer discusses contact binaries, which I did not even know existed. I can understand how they could exist for a short time (maybe) as gravitational waves carry off energy that causes the stars' orbits to slowly decay, but how can they persist? My main concern is that surely the stars orbit within the Roche limit of each other (or at least the smaller one orbits within the Roche limit of the larger one). Has there been any good theoretical work on their stability? Answer: I agree with Walter, they don't. However, in addition to the common-envelope drag and mass exchange, a very important feature of their evolution is the loss of angular momentum through a magnetised wind. The loss of angular momentum from the orbit also leads to orbital shrinkage and closer contact, until presumably at some point they truly merge. This mechanism is thought to be one of the main processes that lead to the blue straggler phenomenon - apparently over-luminous main sequence stars seen in older clusters. You can try to observe this orbital shrinkage, but it takes place slowly enough, and there are other confusing pseudo-periodic phenomena that also lead to changes (up and down) in orbital period (e.g. the Applegate mechanism), that it is very difficult to pin down the orbital shrinkage rate, or even verify experimentally that it is happening. An alternative approach is to estimate the age distributions of contact binaries by looking at their kinematics compared to the expected time for short-period binaries to evolve into a contact state. These numbers are still highly debatable and uncertain; Eker et al. (2008) use this approach to estimate a timescale of 1.6 Gyr for the contact stage.
{ "domain": "physics.stackexchange", "id": 23861, "tags": "gravity, astrophysics, orbital-motion, tidal-effect" }
Gauge Transformation of the Vector Potential (on bundles)
Question: Let $D$ be a $G$-connection on a vector bundle $E$. That is, one can write (locally) any connection $D$ as $D^0 + A$, where $D^0$ is the standard flat connection and $A$ is the vector potential, whose components in local coordinates $A_{\mu} \in \text{End}(E)$ live in $\mathfrak{g}$. Now, let $g \in G$ be a gauge transformation. Under the gauge transformation one can show that the vector potential components transform as $$ A_{\mu}' = gA_{\mu}g^{-1} + g\partial_{\mu}g^{-1}. $$ The claim is that provided $A_{\mu}$ lives in $\mathfrak{g}$, then so will $A'_{\mu}$. My problem is that it is not obvious to me why this is the case. In particular, the term $$ gA_{\mu}g^{-1}. $$ I'm not even sure how to interpret this term, let alone why it lives in $\mathfrak{g}$. Any thoughts? [1] Gauge Fields, Knots and Gravity. Baez & Muniain. Answer: $g X g^{-1}$ denotes the adjoint action of the group element $g$ on the Lie algebra element $X$. If you wish, you can describe this explicitly using the BCH formula. Writing $g=e^Y$ we have $$ e^Y X e^{-Y} = X + [ Y , X ] + \frac{1}{2!} [ Y , [ Y , X ] ] + \frac{1}{3!} [ Y , [ Y , [ Y , X ] ] ] + \cdots $$ The RHS is clearly an element of the Lie algebra. BTW, the second term in the gauge transformation has a similar interpretation: $$ e^Y \text{d}e^{-Y} = - \text{d} Y + \frac{1}{2!} [ Y ,\text{d}Y] - \frac{1}{3!} [ Y , [ Y , \text{d} Y]] + \cdots $$
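One can also see the closure numerically for a concrete case. For $\mathfrak{g} = \mathfrak{su}(2)$ (traceless anti-Hermitian matrices), the sketch below (matrices made up for illustration) builds $g = e^{i\theta\,\hat n\cdot\vec\sigma}$ via the closed-form SU(2) exponential and checks that $gXg^{-1}$ keeps both defining properties of the algebra:

```python
import numpy as np

# Pauli matrices; i*sigma_k span su(2) (traceless, anti-Hermitian)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

X = 1j * (0.3 * s1 + 0.7 * s3)          # an arbitrary element of su(2)

# g = exp(i theta n.sigma) via the closed form: cos(theta) I + i sin(theta) n.sigma
n = np.array([0.6, 0.0, 0.8])           # unit vector
theta = 1.1
n_sigma = n[0] * s1 + n[1] * s2 + n[2] * s3
g = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * n_sigma

Ad_X = g @ X @ g.conj().T               # g^{-1} = g^dagger since g is unitary

print(abs(np.trace(Ad_X)))                    # ~0: still traceless
print(np.allclose(Ad_X, -Ad_X.conj().T))      # True: still anti-Hermitian
```

Tracelessness survives because the trace is invariant under conjugation, and anti-Hermiticity because $g$ is unitary; the same two one-line arguments are the coordinate-free version of the BCH expansion above.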
{ "domain": "physics.stackexchange", "id": 86102, "tags": "differential-geometry, gauge-theory" }
catkin_make issue updating .srv and .cpp file simultaneously
Question: Think of a situation where a ROS node (written in C++) is using messages generated from a .srv file. You want to add a field to the .srv file and use it in the node. If you make these changes simultaneously, catkin_make will fail because it does not yet recognize the new field you have added. Is there a way to make sure that message generation occurs before compilation of the C++ files during catkin_make? Would this even avoid the issue? Originally posted by stbuebel on ROS Answers with karma: 5 on 2019-01-28 Post score: 0 Answer: The catkin message documentation describes how to do this. You can have messages in the same package as your executable, or in another package. In either case, if you're building with catkin_make, you need to tell CMake about the dependency between your C++ code and the messages. If your messages are in the same package, you can use: add_dependencies(your_program ${${PROJECT_NAME}_EXPORTED_TARGETS}) If you use messages from a different package, you can use: add_dependencies(your_program ${catkin_EXPORTED_TARGETS}) If you use messages from other packages and your package, you will need both of those. Originally posted by ahendrix with karma: 47576 on 2019-01-28 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by gvdhoorn on 2019-01-29: +1 and I would add: storing msg/srv/action definitions in a separate package is almost always preferable over merging them into the pkg that hosts the node(s) using them. It makes interfacing with your nodes much easier, as a "client" now only needs to build your msg pkg.
{ "domain": "robotics.stackexchange", "id": 32360, "tags": "c++, rosnode, ros-kinetic, rosservice, msg" }
What's the difference between the Fermi level and the electrochemical potential?
Question: I was asked in a thermostatistics test to compute the electrochemical potential $\mu(T)$ and the Fermi level $\epsilon_F$ for a system of non-interacting fermions, with two possible energy states each. As far as (I thought) I knew, both concepts mean the same thing in this context. What's the difference between the two? What would be a correct answer to this question? Answer: The Fermi energy, $\epsilon_F$, is only equal to the chemical potential, $\mu$, when the Fermi gas is at zero temperature. The Fermi energy basically means "chemical potential at zero temperature". At any other temperature you can find $\mu$ via one of the standard thermodynamic relations (i.e. as the appropriate derivative of a free energy).
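For the standard illustration of how the two quantities separate at finite temperature (the 3D free-electron gas, not the two-level system of the test question), the Sommerfeld expansion gives:

```latex
\mu(T) \simeq \epsilon_F\left[1 - \frac{\pi^2}{12}\left(\frac{k_B T}{\epsilon_F}\right)^2 + \cdots\right],
\qquad \mu(0) = \epsilon_F .
```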
{ "domain": "physics.stackexchange", "id": 24080, "tags": "statistical-mechanics, definition, chemical-potential" }
Some Questions about Formulae
Question: I am currently studying Quantum Field Theory from the textbook Overview of Quantum Field Theory and I am confused by two formulae presented in chapter 2, (2.39) and (2.40). The first is $$(1)_{1-particle} = \int \frac{d^3p}{(2\pi)^3}|p\rangle\frac{1}{2E_p}\langle p|.\tag{2.39}$$ The textbook says this is the completeness relation. I sort of see it, but I do not understand where the $\frac{1}{2E_p}$ comes from. Can anyone explain? The second equation is also confusing; can anyone explain why it is true? My guess is that it comes from the above completeness relation, but I am not sure. $$\int \frac{d^3p}{(2\pi)^3}\frac{1}{2E_p} = \int \frac{d^4 p}{(2\pi)^4}(2\pi)\delta(p^2 - m^2)\,\theta(p^0).\tag{2.40}$$ Answer: Equation (2.39) is written the way it is because one has chosen the covariant normalization. In this normalization the momentum eigenstates are normalized as $$\langle p'|p\rangle = (2\pi)^3 (2E_p) \delta^3(\mathbf{p}-\mathbf{p'}).\tag{1}$$ In that case observe that if we decompose a state as $$|\phi\rangle=\int d^3p\ \phi(p) |p\rangle\tag{2},$$ then you can find out by using (1) that $\phi(p)$ is given by $$\phi(p) = \dfrac{1}{(2\pi)^3}\dfrac{1}{2E_p} \langle p|\phi\rangle\tag{3}.$$ This means that (2) is actually the statement that $$|\phi\rangle=\int \dfrac{d^3p}{(2\pi)^3 2E_p}|p\rangle \langle p|\phi\rangle\tag{4},$$ from which you can read off the resolution of the identity in the form of your equation (2.39). As for (2.40) you can easily derive it from the right-hand side by recalling that $$\delta(f(x))=\sum_{x_i} \dfrac{\delta(x-x_i)}{|f'(x_i)|}\tag{5},$$ where $x_i$ are the zeroes of $f(x)$. View $p^2-m^2$ as a function of $p^0$ and in the end use the above result to eliminate the integration over $p^0$ to understand the relation with the left-hand side. That's a good exercise.
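The $p^0$ integration suggested in the answer can be checked symbolically for a concrete mass: viewing $p^2 - m^2$ as a function of $p^0$, the positive-energy root at $p^0 = E_p$ has $|f'| = 2E_p$ and so contributes exactly $1/(2E_p)$. A small sympy sketch (with $E_p = 2$ chosen just for the check):

```python
import sympy as sp

p0 = sp.symbols('p0', real=True)
E = 2                                   # a concrete value of E_p for the check

# delta(p0^2 - E^2) has roots p0 = +/-E, with |d/dp0 (p0^2 - E^2)| = 2E there;
# restricting to p0 > 0 keeps only the positive-energy root.
lhs = sp.integrate(sp.DiracDelta(p0**2 - E**2), (p0, 0, sp.oo))
print(lhs)                              # 1/4, i.e. 1/(2E)
```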
{ "domain": "physics.stackexchange", "id": 79188, "tags": "quantum-field-theory, special-relativity, hilbert-space, integration, dirac-delta-distributions" }
How to delay visible light
Question: I want to delay visible light (~450 nm - 600 nm) by 10 ns. One way would just be to have it travel about 10 ft, since the speed of light is about 1 ft/ns. Could I reduce that length by sending it through some material with a high index of refraction? Dispersion is a bit of a concern in that case, I guess. Any recommendations for what that material should be? Answer: This may not be a complete solution, but you can use mirrors! It's not hard to design a mirror-based optical system which will have the property you are looking for. This reminds me of an interesting Project Euler question. Or a less fancy but more practical way would be to use two simple planar mirrors, carefully aligned.
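The arithmetic in the question, made explicit (a sketch; $n = 1.5$ is a typical optical glass chosen for illustration, not a recommendation):

```python
c = 299_792_458.0        # speed of light in vacuum, m/s
dt = 10e-9               # desired delay: 10 ns

L_vacuum = c * dt        # path needed in vacuum/air: ~3.0 m (about 10 ft)
n = 1.5                  # typical optical glass
L_glass = c * dt / n     # path for the same delay inside the glass: ~2.0 m

print(L_vacuum, L_glass)
```

The path only shrinks linearly in $n$, so even an exotic high-index material buys little; that is why a folded mirror path, or a couple of metres of coiled optical fibre, is the usual practical 10 ns delay line.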
{ "domain": "physics.stackexchange", "id": 8509, "tags": "optics" }
Could electromagnetic charge curve something like spacetime in analogy with general relativity?
Question: Newtonian gravity and electrostatics have the same form; this analogy is extended when we look at full dynamic electromagnetism, and correspondingly "gravitomagnetism". We are quite capable of observing higher-order general relativistic effects, because mass (and energy) attracts more of itself and we get very large quantities of it. It would be difficult to isolate a net charge on planetary scales, though -- the whole Sun has a charge of 77 coulombs. How plausible is it to consider that classical electromagnetism is just the low-charge limit of a larger theory that curves spacetime (or something like that -- in some way that only affects charge) in a more complicated, dynamical way? I'm aware that, as electromagnetic waves move through space, they carry energy, bending spacetime somewhat. I lack the GR background to understand how that behaves, though. My first thought is that if a single photon moved through space, its energy is so small as to only bend spacetime a very small amount, with radii of curvature much greater than its own wavelength. As the energy of the photon goes up and the radius goes down, so does the wavelength. So to see nonlinear behavior would require enormous numbers of coherent photons. Answer: Yes and no. Charges and currents curve the $U(1)$ gauge connection. We experience this curvature every day, so we even have a special name for it: an electromagnetic field. Just like spacetime curvature is called gravity. However, the choice of the word 'curvature' is somewhat unintuitive here, due to the fact that it is not our spacetime that gets curved. You can think of the gauge connection as a way to do parallel transport. You can transport only some special structures, however. Those are elements (vectors) of representations of the $U(1)$ group. In General Relativity, the affine connection lets you do parallel transport of tangent-space vectors. In this sense GR has a beautiful geometrical interpretation.
{ "domain": "physics.stackexchange", "id": 23236, "tags": "electromagnetism, general-relativity" }
Could the Periodic Table have been done using group theory?
Question: These three questions are phrased as alternative-history questions, but my real intent is to understand better how well different modeling approaches fit the phenomena they are used to describe; see 1 below for more discussion of that point. Short "informed opinion" answers are fine (better, actually). If Dmitri Mendeleev had had access to and a full understanding of modern group theory, could he plausibly have structured the periodic table of chemistry in terms of group theory, as opposed to the simpler data-driven tabular format that he actually used? If Mendeleev really had created a group-theory-based periodic table, would it have provided any specific insights, e.g. perhaps early insights into quantum theory? The inverse question: If Murray Gell-Mann and others had not used group theory concepts such as $SU(3)$ to organize particles into families, and had instead relied on simple grouping and graphical organization methods more akin to those of Mendeleev, is there any significant chance they could have succeeded? Or, less speculatively: is it possible to create useful, concise, and accurate organizational structures (presumably quark based) that fully explain the particle data of the 1970s without making any reference to algebraic structures? 1 Background: My perspective on the above questions is to understand the interplay between expressive power and noise in real theory structures. One way to explain that is to note that mathematical modeling of data sets has certain strong (and deep) similarities to the concept of data compression. On the positive side, a good theory and a good compression both manage to express all of the known data by using only a much smaller number of formulas (characters). On the negative side, even very good compressions can go astray by adding "artifacts," that is, details that are not in the original data, and which therefore constitute a form of noise. 
Similarly, theories can also add "artifacts" or details not in the original data set. The table-style periodic table and $SU(3)$ represent two extremes of representation style. The table format of the periodic table would seem to have low expressive power and low precision, whereas $SU(3)$ has high representational power and precision. The asymmetric and ultimately misleading emphasis on strangeness in the original Eight-Fold Way is an explicit example of an artifact introduced by that higher power. We now know that strangeness is just a fragment -- the first "down quark" parallel -- of the three-generations issue, and that strangeness showed up first only because it was more easily accessible to the particle accelerators of that time. 2012-06-30 - Update on final answers I have selected Luboš Motl's answer as the most persuasive with respect to the questions I asked. If you look at the link he includes, you will see that he has looked into this issue in minute detail with regard to what kind of representation works best, and why. Since that issue of "what is the most apt form of representation" was at the heart of my question, his answer is impressive. With that said, I would also recommend that anyone interested in how and to what degree group theory can be applied to interesting and unexpected complexity problems, even if only approximately, should also look closely at David Bar Moshe's fascinating answer about an entire conference that looked at whether group theory could be meaningfully applied to the chemical elements. This excellent answer points out a rich and unexpected set of historical explorations of the question. 
Finally, Arnold Neumaier's answer shows how a carefully defined subset of the problem can be tractable to group theoretic methods in a way that is predictive -- which to me is the single most fundamental criterion for when a model crosses over from being "just data" into becoming true theory. And again, I would flag this one as an answer if I could. Impressive insights all, and my thanks to all three of you for providing such interesting, unexpected, and deeply insightful answers! Answer: No, the elements of the periodic table don't form any representation of a group or, more precisely, any irreducible representation. Even more precisely, the real insights by Mendeleev – that the reactivity etc. is a repeating function of the atomic number – don't follow from any property of a representation that could be derived by group theory. The periodic table boils down to the electrons filling the shells in the atom, quantum states that are close to the energy eigenstates of a rescaled hydrogen atom. The closest thing to your project that actually can be done is to solve the full hydrogen atom by the $SO(4)$ symmetry – the rotational symmetry enhanced by the Runge-Lenz vector: http://motls.blogspot.com/2011/11/hydrogen-atom-and-so4-symmetry.html This solution dictates not only the degeneracies but even the energies because the Hamiltonian is a function of a Casimir. And these energies are important to determine which $Z$ produce more reactive elements. More complicated atoms don't have any $SO(4)$ symmetry, only $SO(3)$, and they can't be solved purely by symmetries. The eightfold symmetry is useful because the elementary building blocks are numerous and they carry various labels – like quarks come in different flavors. But that ain't the case of atoms in the approximation of chemistry or atomic physics for which the nucleus only matters when it comes to its charge, i.e. $Z$, and electrons are the only other particles that matter, without any flavor indices.
So there's simply no room for eightfold symmetries etc. The fundamental symmetry between elementary particles is $U(1)$, not $SU(3)_f$ as it is for light quarks. If we neglect the electron-electron interactions in the atoms, we get another solvable problem – one in which we literally fill shells of the Hydrogen atom. This system is a second-quantized Hydrogen atom of some sort and it is solvable. We could say it is solvable by group theory. Of course, this approximation ultimately leads to a wrong ordering of shells and the predicted periodic table would be wrong for high $Z$, too. To conclude, physical systems that may be fully solved just by group theory – and even properties of physical systems that may be determined by group theory – are rare enough, a small enough minority of the questions we may ask. Atoms are complicated enough so that their properties mostly boil down to more complicated dynamics than just symmetries.
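(An illustrative aside, not part of the original answer: the "second-quantized hydrogen" approximation above can be made concrete in a few lines. Taking the textbook Bohr formula $E_n = -13.6\ \mathrm{eV}/n^2$ and the shell capacity $2n^2$ including spin, the idealized filling already disagrees with the real closed-shell atomic numbers, which is the failure at high $Z$ the answer mentions.)

```python
# Sketch: hydrogen-like shells with electron-electron interactions neglected.
# Bohr energy E_n = -13.6 eV / n^2; shell capacity 2*n^2 (including spin).
def shell(n):
    return {"n": n, "energy_eV": -13.6 / n**2, "capacity": 2 * n**2}

cumulative, total = [], 0
for n in range(1, 5):
    total += shell(n)["capacity"]
    cumulative.append(total)

# Predicted closed-shell ("noble gas") atomic numbers: 2, 10, 28, 60.
# Reality: He 2, Ne 10, Ar 18, Kr 36 -- the naive ordering fails beyond neon.
print(cumulative)
```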
{ "domain": "physics.stackexchange", "id": 4413, "tags": "physical-chemistry, group-theory, group-representations, quarks" }
How do we know that energy and momentum are conserved?
Question: How do we know energy and momentum are conserved? Are these two facts taken as axioms or have they been proven by an experiment? This question has been in part addressed here: Conservation of Momentum but I don't see how translational symmetry implies conservation of momentum. If the reasoning behind this could be explained that would be great. Conservation of energy, like conservation of momentum, seems intuitive to me, but similarly, how do we know for certain that it is impossible to create or destroy energy? Is this taken as an axiom or has it been proved by an experiment? I hope it is clear that I'm not trying to suggest that I don't trust these laws to be true but rather that I'd like to know how we know they are true. Thanks for the help Answer: We know through experimental observation. That is the beginning and end of the subject of physics, at least the part of it that tells it apart from, say, mathematics. Conservation of momentum is simply an inductively reasoned hypothesis to summarize certain patterns in experimental data. You are alluding to the conservation of momentum's being "explained" through Noether's Theorem. As I discuss in my answer to the Physics SE question "What is Momentum, Really?" here, whenever the Lagrangian of a physical system is invariant with respect to co-ordinate translation, there is a conserved vector quantity. That fact is a wholly mathematical result: continuous symmetries of a Lagrangian always imply quantities conserved by the system state evolution described by that Lagrangian, one for each "generator" of continuous symmetry (i.e. basis vector of the Lie algebra of the Lie group of the Lagrangian's symmetries). Note carefully, however, that Noether's theorem is an "if" theorem: a one-way implication. It's far from being the only way that a conservation law might arise. Experimentally, it has been found to be fruitful to act on the hunch that it is the explanation, in the following way.
Since the conserved quantity in a Lagrangian formulation of Newtonian mechanics implied by co-ordinate translation invariance is Newtonian momentum, we hypothesize that the result is more general and therefore deliberately construct Lagrangians for other theories to have this symmetry (i.e. spatial co-ordinate translational invariance) so that they too will have conserved momenta. When we make predictions with these theories, they turn out, again determined experimentally, to be sound. We say that the symmetry "explains" conservation of momentum, but what we really mean is that we have found a compelling translation of the conservation law: it translates conservation into symmetry terms. It is nonetheless an important translation; in my opinion it makes physics much more "visceral". The statement of conservation laws as givens seems abstract and, from a 21st-century standpoint, arbitrary and open to question. In stark contrast, a symmetry description is much more accessible to us: even tiny children begin to understand that the World's behavior is independent of the way we choose to describe it. Why should the laws of physics change simply because I decide to shift my co-ordinate origin to another place, or rotate my co-ordinate system (rotational invariance of a Lagrangian gives rise to conservation of angular momentum)? Unless, of course, there is a clear, outside, experimentally measurable agent breaking this independence (e.g. grain structure in a crystal making laws depend on their orientation relative to the grain). User knzhou adds: ... I would just add that we are now so confident in energy/momentum conservation that it can be used "in reverse" to your method in paragraph 3: if we saw events at the LHC with missing energy, this would be taken as evidence for dark matter, not evidence against conservation of energy! We would change our Lagrangian, nothing more. I can't really add any clarifying comment to that statement.
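(A numerical aside of my own, not part of the original answer: the link between translation invariance and momentum conservation can be sanity-checked in a few lines. The force below depends only on the separation $x_1 - x_2$, so the potential, and hence the Lagrangian, is translation-invariant, and the total Newtonian momentum of the simulated pair stays constant along the trajectory, exactly as Noether's theorem predicts.)

```python
# Two masses coupled by a spring force that depends only on x1 - x2
# (a translation-invariant interaction). Symplectic Euler integration;
# total momentum m1*v1 + m2*v2 is conserved up to floating-point roundoff.
def simulate(steps=10000, dt=1e-3):
    m1, m2, k = 1.0, 2.0, 3.0      # masses and spring constant (arbitrary)
    x1, x2 = 0.0, 1.5              # positions; spring rest length 1.0 assumed
    v1, v2 = 0.3, -0.1             # initial velocities
    for _ in range(steps):
        f = -k * ((x1 - x2) + 1.0) # force on m1; a function of x1 - x2 only
        v1 += (f / m1) * dt        # Newton's third law: m2 feels -f
        v2 += (-f / m2) * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return m1 * v1 + m2 * v2       # total momentum after the evolution

p0 = 1.0 * 0.3 + 2.0 * (-0.1)      # initial total momentum
assert abs(simulate() - p0) < 1e-9
```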
{ "domain": "physics.stackexchange", "id": 32502, "tags": "energy-conservation, momentum" }
The range of significance in Type Theory
Question: What exactly does "Types as ranges of significance of propositional functions. In modern 
terminology, types are domains of predicates" mean? Update: I found this paper (page 14 or 234) by Russell, where he defines what a range of significance is, though for propositional functions rather than for types. A function is said to be significant for the argument $x$ if it has a value for this argument. Thus we may say shortly that $\phi x$ is significant, meaning the function $\phi$ has a value for the argument $x$. The range of significance of a function consists of all the arguments for which the function is true, together with all the arguments for which it is false. Answer: I believe this is simply referring to old terminology, as the quoted text implies. When using a formula such as $\sqrt{x-5}$, it is common to specify what $x$ is intended to be so that the formula makes sense. That is, we could let $x$ be a real number $\geq 5$, so that the square root is defined. Only when $x$ "ranges" over the values satisfying this condition is the formula meaningful, or "has significance". Hence, the "range of significance" of a formula is essentially the class of values the variables must lie in so that the formula makes sense. This is true of propositions, too. If a propositional formula involves variables, as in $f(x) < 4 \implies f(x) < 0$, then we can regard the formula as a function mapping the values of its (free) variables $f,x$ to a proposition. This is why it can be called a "propositional function". Its "range of significance" is the set of values $(f,x)$ which make the proposition meaningful. We can't let e.g. $f = 5$ and $x = \mathbb{N}$, so that pair is outside the range. In modern terms, indeed, this is what we usually call the "domain of a predicate".
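(A small illustration of my own, not part of the original answer, using Python's math.sqrt as the partial function: the range of significance of the propositional function $\phi(x) = \big(\sqrt{x-5} < 2\big)$ collects exactly the arguments for which $\phi$ has a value, whether that value is true or false.)

```python
import math

def phi(x):
    """The propositional function 'sqrt(x - 5) < 2'."""
    return math.sqrt(x - 5) < 2

def significant(x):
    """Is x in the range of significance of phi, i.e. does phi have a value?"""
    try:
        phi(x)
        return True
    except (ValueError, TypeError):  # sqrt of a negative or non-number: no value
        return False

assert significant(5) and phi(5)    # sqrt(0) = 0 < 2: a true proposition
assert significant(9) and not phi(9)  # sqrt(4) = 2, not < 2: false, yet significant
assert not significant(3)           # sqrt(-2) has no (real) value: outside the range
```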
{ "domain": "cs.stackexchange", "id": 6627, "tags": "type-theory" }