Proving that a preorder traversal of a rooted tree can be performed in linear time
Question: Definition: Let $T(V, E)$ be a rooted tree with root $r$. If $T$ has no other vertices, then the root by itself constitutes the preorder traversal of $T$. If $\lvert V \rvert > 1$, let $T_1, T_2, \dots, T_k$ denote the subtrees of $T$ from left to right. The preorder traversal of $T$ first visits $r$ and then traverses the vertices of $T_1$ in preorder, then the vertices of $T_2$ in preorder, and so on until the vertices of $T_k$ are traversed in preorder. Question: How does one prove, using the above definition, that a preorder traversal of a rooted tree $T(V, E)$ can be computed in $O(\lvert V \rvert)$ time? Since $T$ is a tree, $\lvert E \rvert = \lvert V \rvert - 1$, and so showing that a preorder traversal algorithm simply visits the vertices and edges of $T$ a constant number of times and does constant work on each visit would do it. Obviously this is true, but how does one prove this formally? Answer: You can prove that formally by using structural induction. The structure of the proof is as follows: Observe that a preorder traversal of a tree with $1$ vertex requires constant time. Then, assume that a preorder traversal of a tree with up to $i \ge 1$ vertices can be performed in time at most $c \cdot i$, for a suitable constant $c$, and show that the preorder traversal of a tree $T$ with $i+1$ vertices requires at most $c \cdot (i+1)$ time. Here you use the induction hypothesis and the definition of the traversal. In particular, notice that the subtrees of $T$ that are rooted in the children of the root have at most $i$ vertices.
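The inductive argument can be seen concretely in code. Below is a minimal sketch (hypothetical code, not from the answer) of an iterative preorder traversal: each vertex is pushed and popped exactly once and only a constant amount of work is done per visit, which is exactly the O(|V|) bound the proof establishes.

```python
# Hypothetical node type: a value plus the subtrees T_1, ..., T_k left to right.
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def preorder(root):
    """Iterative preorder traversal; every vertex is pushed/popped exactly once."""
    order = []
    stack = [root]
    while stack:
        node = stack.pop()
        order.append(node.value)               # visit the root of this subtree
        stack.extend(reversed(node.children))  # push right-to-left so T_1 is on top
    return order
```

For a root 1 with subtrees rooted at 2 (which has child 4) and 3, the traversal is [1, 2, 4, 3], matching the definition: root first, then each subtree in preorder, left to right.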
{ "domain": "cs.stackexchange", "id": 17838, "tags": "algorithms, time-complexity, trees, graph-traversal" }
Are identical twins exactly the same?
Question: http://www.scientificamerican.com/article.cfm?id=identical-twins-genes-are-not-identical According to this article some identical twins show differences with respect to their copy number variants. How is this possible? If mitosis of an egg can lead to this difference, why can't mitosis of any cell in our body? If we assume that identical twins are exactly identical, then if we make a clone of a twin will all three be exactly identical? Answer: Errors in division occur all the time and can show up in any dividing cell; this is, of course, important for cancer biology. If one of my cells replicates oddly right now it likely won't matter since it's only one out of trillions, but if that happened at a very early age in development it could be present in many if not all of my cells. Identical twins indeed come from the same egg-sperm fusion event but they develop separately from a very early state. It's trivial to imagine a minor replication error in DNA at an early cell stage that results in different genotypes between so-called identical twins. Besides, we've known for years that identical twins can have major epigenetic differences, so this isn't particularly ground-breaking. If we assume that identical twins are exactly identical, then if we make a clone of a twin, will all three be exactly identical? Depends on your definition of "clone" but how identical is identical? If you mean exact, then probably not; at the very least they will have different numbers of cells, for example, not to even mention CNV or epigenetics. If you want some SciFi cloning thing then, well, join the club.
{ "domain": "biology.stackexchange", "id": 1389, "tags": "genetics, reproduction, mitosis, twins" }
Significance of Convex Loss Function with Nonlinear Models
Question: When used in a linear model, a convex loss function guarantees that any local minimum of the parameters is a global minimum, which can be found by local optimization methods. However, when the model is nonlinear (e.g. MLPs), local minima are possible even for a convex loss. Are there any benefits to a convex loss function when the model is nonlinear? Can convexity be completely disregarded in the nonlinear case? Answer: Another benefit of a convex loss function is that optimization tends to converge faster, for both linear and nonlinear models, and convergence can be accelerated further by adding a momentum term to gradient descent. However, in real-world scenarios and with many model types, the loss function is often not guaranteed to be convex. It is not clear what "convexity be completely disregarded" means here. Machine learning systems should be designed to be robust to non-convex loss functions so that they can find useful parameters across a broad range of problems.
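A toy 1-D sketch (hypothetical functions, not from the question or answer) of why convexity matters for gradient-based optimization: on a convex loss, gradient descent reaches the same global minimum from any starting point, while on a non-convex loss the endpoint depends on initialization.

```python
# Plain gradient descent on a 1-D loss, given the loss gradient.
def gradient_descent(grad, x0, lr=0.1, steps=1000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Convex loss f(x) = (x - 3)^2: unique global minimum at x = 3.
convex_grad = lambda x: 2 * (x - 3)

# Non-convex loss f(x) = x^4 - 3x^2 + x: two distinct local minima.
nonconvex_grad = lambda x: 4 * x**3 - 6 * x + 1
```

Starting on the convex loss from x0 = -10 or x0 = 10 ends at the same minimizer; on the non-convex loss, different starting points can settle into different valleys.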
{ "domain": "datascience.stackexchange", "id": 10021, "tags": "machine-learning, loss-function, optimization" }
Density Functional Definition
Question: Is it wrong to say that Density Functional means that Electron Density is a function of the orbitals (wave function) of all electrons in 3 dimensions, and if so, why? Answer: In general, a functional $F$ is a mapping from an arbitrary set $\mathcal{X}$ of functions to the set of real numbers $\mathbb{R}$ or the set of complex numbers $\mathbb{C}$: $$ F : \mathcal{X} \to \mathbb{R} $$ or $$ F : \mathcal{X} \to \mathbb{C}. $$ For example, if you consider $\mathcal{X}$ as the set of polynomials with real coefficients, you can define a functional $F$ as $$ F[f] = \int_0^1 f(x)\,dx $$ i.e. your functional $F$ takes a polynomial function $f\in\mathcal{X}$ (for example $f(x)=3x+1/2$) as an argument and returns a scalar (2 for $f(x)=3x+1/2$, as you can easily verify). A density functional is simply a functional $F[f]$ where the argument $f$ is the electron density $\rho(\vec{r})$ (i.e. a density functional is a functional of the electron density). For example, Hohenberg and Kohn showed that the energy $\epsilon$ of a quantum system is a functional of the density: $$ \epsilon=E[\rho] $$ This means that when you plug the electron density of your system $\rho(\vec{r})$ into the energy functional $E[\rho]$ you get a number $\epsilon$, which is the energy of your system. The whole energy functional is not known explicitly, but some of its components are known. For example, for the external potential energy we have $$ V[\rho] = \int v(\vec{r})\rho(\vec{r})d\vec{r} $$ and for the Coulomb interaction between electrons we have $$ J[\rho] = \frac{1}{2}\iint \frac{\rho(\vec{r})\rho(\vec{r}')}{|\vec{r}-\vec{r}'|}\,d\vec{r}d\vec{r}' $$ which are clearly functionals of the electron density.
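A numeric sketch (assumed example code) of the polynomial example in the answer: the functional F maps a whole function f to a single real number, here approximated with the midpoint rule.

```python
# F[f] = integral of f over [0, 1], approximated by the midpoint rule.
def F(f, n=100000):
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

# The answer's example polynomial f(x) = 3x + 1/2.
f = lambda x: 3 * x + 0.5
```

F(f) returns approximately 2, the scalar mentioned in the answer (the midpoint rule is exact for linear functions up to floating-point error).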
{ "domain": "chemistry.stackexchange", "id": 5691, "tags": "quantum-chemistry, computational-chemistry, density-functional-theory" }
Checking whether a string is a valid number
Question: I'm developing a Java helper library which has useful methods developers don't want to type out over and over. I'm trying to think of the best way of checking whether a string is a valid number. I've always just tried something like Integer.parseInt(iPromiseImANumber); and if it throws an exception then it's not a valid number (an integer, in this case). I'm thinking that catching an exception for control flow isn't good practice. Is there a better way to do this? (I'm thinking a regular expression could do it, but it doesn't seem to be as reliable as the parse option.) Here's my method as it stands. What could be improved?

/**
 * Does a try catch (Exception) on parsing the given string to the given type
 *
 * @param c the number type. Valid types are (wrappers included): double, int, float, long.
 * @param numString number to check
 * @return false if there is an exception, true otherwise
 */
public static boolean isValidNumber(Class c, String numString) {
    try {
        if (c == double.class || c == Double.class) {
            Double.parseDouble(numString);
        } else if (c == int.class || c == Integer.class) {
            Integer.parseInt(numString);
        } else if (c == float.class || c == Float.class) {
            Float.parseFloat(numString);
        } else if (c == long.class || c == Long.class) {
            Long.parseLong(numString);
        }
    } catch (Exception ex) {
        return false;
    }
    return true;
}

Answer: NumberUtils.isNumber from Apache Commons Lang does that.
{ "domain": "codereview.stackexchange", "id": 6586, "tags": "java, strings, validation" }
Gauss’s law two concentric cylinders
Question: In the answer key, for part b, they take the charge enclosed to be $Q_{enc} = +\frac{Qh}{L}$ where $h$ is the height of the Gaussian cylinder, $L$ is the length of the cylinder with radius $R_1$, and $Q$ is the charge of the cylinder. That doesn’t make a lot of sense to me because I thought you would also have to divide the expression for $Q_{enc}$ by $2\pi R_1$ since $\frac{Q}{2\pi R_1L}$ would give you the surface charge density of the cylinder, and when you multiply that by $h$, you would get $Q_{enc}$. Am I not correct? If not, could you please explain where I’m going wrong? PS: the answer key I have is from Walter Lewin's MIT course so I doubt it would be incorrect Answer: Assuming the charge density on the inner shell to be constant, $$\sigma= \frac {+Q}{2\pi R_1 L}$$ Now, for a Gaussian cylinder in the interior region with radius $R$ and height $h$: $$Q_{enc}=\left(\frac {+Q}{2\pi R_1 L}\right)(2\pi R_1 h)$$ $$Q_{enc}=\frac {+Qh}{L}$$ Instead of using the surface charge density, you could also have found the linear charge density and multiplied it by the height of the Gaussian cylinder; both give the same result. From your comment, it seems you are confusing the area term in the flux integral with the area used to evaluate $Q_{enc}$: $$\epsilon _0 \oint E\,dA = \frac{+Qh}{L} $$ $$\epsilon _0 E(2\pi Rh)=\frac{+Qh}{L}$$ $$E=\frac {+Qh}{2\pi \epsilon _0 RhL}=\frac {+Q}{2\pi \epsilon _0 RL}$$ This is where $R$ enters the calculation.
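A quick numeric check (with hypothetical values for Q, L, R1 and h) that the surface-density route and the linear-density route give the same enclosed charge, Qh/L:

```python
import math

# Hypothetical SI values: total charge, shell length, shell radius, Gaussian height.
Q, L, R1, h = 5e-9, 2.0, 0.1, 0.75

# Route 1: surface charge density times the lateral area of the Gaussian
# cylinder evaluated at the shell radius R1 (the 2*pi*R1 factors cancel).
sigma = Q / (2 * math.pi * R1 * L)
q_enc_surface = sigma * (2 * math.pi * R1 * h)

# Route 2: linear charge density times the Gaussian cylinder's height.
q_enc_linear = (Q / L) * h
```

Both routes reduce to Qh/L, which is the cancellation the answer points out.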
{ "domain": "physics.stackexchange", "id": 58836, "tags": "homework-and-exercises, electrostatics, gauss-law" }
Why are frictionless joints assumed in pin-jointed truss analysis?
Question: In pin-jointed truss analysis, we consider some assumptions. One of them is that they have friction-less joints. Why do we have this assumption? What would happen if the joints were to have friction? How does it affect the analysis? Answer: No friction in the pin joints means none of the joints can transmit bending moments. This simplifies the analysis. Including friction in the pin joints means it becomes possible to transmit bending moments across the joints. In this case it is possible to get the structure to be statically indeterminate (more unknowns than the number of equations) if the pin joints can transmit bending moments.
{ "domain": "physics.stackexchange", "id": 46678, "tags": "equilibrium, statics, structural-beam" }
What does this lambda–sigma notation mean?
Question: I am currently reading the CRC Handbook of Phosphorus-31 Nuclear Magnetic Resonance Data by John C. Tebby (CRC Press, 1991) and on several figures (ex. pages 9 to 14) there is a caption that reads along these lines $\ce{^{31}P}$ NMR chemical shifts of three coordinate (λ3 σ3) phosphorus compounds. (Tables B to E.) What does the (λ3 σ3) signify? I've not seen that notation before, and it is on the captions of some of the charts I need. Answer: The notation refers to the valency (number of valence electrons involved in bonds, i.e., either 3 or 5) and coordination number (number of substituents attached) in organophosphorus compounds. For example, it could refer to the tautomeric forms of phosphonate esters:1 So the λ3σ3 refers to a P with 3 bonding valence electrons (the lone pair doesn't count) and 3 substituents attached, while λ5σ4 refers to a P with all five of its valence electrons involved in bonds and 4 substituents attached. Reference Kraszewski, A.; Stawinski, J. H-Phosphonates: Versatile synthetic precursors to biologically active phosphorus compounds. Pure Appl. Chem. 2007, 79 (12), 2217–2227. DOI: 10.1351/pac200779122217.
{ "domain": "chemistry.stackexchange", "id": 8694, "tags": "notation" }
Things that must be considered when mass producing a capacitor
Question: The question is to list four things that must be considered when a capacitor is chosen for inclusion in a circuit which is to be mass produced. Is there a specific set of four things? I've come up with only three so far, which are: dielectric, charge and potential difference. Answer: Consider the shape, capacitance, physical size, and the maximum voltage/current it can withstand. Try looking at a real capacitor, for example the one from a house fan (they generally burn out after some time); you will find your answer. On the economic side, CuriousOne's answer is correct.
{ "domain": "physics.stackexchange", "id": 29863, "tags": "electrostatics, electric-circuits, capacitance, electronics, dielectric" }
Magnetar field energy density
Question: According to Wikipedia a Magnetar... Earth has a geomagnetic field of 30–60 microteslas, and a neodymium-based, rare-earth magnet has a field of about 1 tesla, with a magnetic energy density of 4.0×10^5 J/m^3. A magnetar's 10^10 tesla field, by contrast, has an energy density of 4.0×10^25 J/m^3, with an E/c^2 mass density >10^4 times that of lead. Is this correct? Does a 10^10 increase in field lead to a 10^20 increase in energy? Answer: Yes, the energy density $u$ stored in a field $B$ in a region with permeability $\mu$ is given by: $$u = \frac{1}{2}\frac{B^2}{\mu}$$ So if you double $B$ then $u$ quadruples, and if you increase $B$ by a factor of $10^{10}$ then $u$ increases by a factor of $10^{2\times 10} = 10^{20}$. I'm not quite sure about the assumptions that go into the above formula, however (I'll see if I can find out), so it may or may not be valid for the extreme fields around a magnetar.
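A quick check of the quoted numbers, as a sketch assuming the vacuum permeability mu0 applies:

```python
import math

mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T m / A

def energy_density(B):
    """Magnetic field energy density u = B^2 / (2*mu0), in J/m^3, for B in teslas."""
    return B**2 / (2 * mu0)

# energy_density(1.0) is about 4.0e5 J/m^3 (the neodymium magnet figure), and
# energy_density(1e10) is about 4.0e25 J/m^3 (the magnetar figure) -- exactly
# the 10^20 ratio expected from the quadratic dependence on B.
```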
{ "domain": "physics.stackexchange", "id": 18367, "tags": "astrophysics, magnetic-fields" }
How do I convert specific humidity to relative humidity?
Question: How do I convert specific humidity to relative humidity? What variables are needed (e.g. air temperature, pressure, etc.)? Answer: Relative humidity is just $e/e_s$, the ratio of the vapor pressure to saturation vapor pressure or $w/w_s$, the ratio of mass mixing ratios of water vapor at actual and saturation values. If you have specific humidity, which is the mass mixing ratio of water vapor in air, defined as: $$ q \equiv \dfrac{m_v}{m_v + m_d} = \dfrac{w}{w+1} \approx w$$ Relative humidity can be expressed as the ratio of water vapor mixing ratio to saturation water vapor mixing ratio, $w/w_s$, where: $$ w_s \equiv \dfrac{m_{vs}}{m_d} = \dfrac{e_s R_d}{R_v(p-e_s)} \approx 0.622\dfrac{e_s}{p}$$ and from Clausius-Clapeyron: $$ e_s(T) = e_{s0}\exp\left[\left(\dfrac{L_v(T)}{R_v}\right)\left(\dfrac{1}{T_0} -\dfrac{1}{T} \right)\right] \approx 611\exp\left(\dfrac{17.67(T-T_0)}{T-29.65}\right)$$ Once you have calculated $w$ and $w_s$ you can obtain the relative humidity as: $$ RH = 100\dfrac{w}{w_s} \approx 0.263pq\left[\exp\left(\dfrac{17.67(T-T_0)}{T-29.65}\right)\right]^{-1} $$ You could also calculate $RH = 100(e/e_s)$, but I think since you are starting with $q$ it isn't as straightforward as doing it this way. Variables used: $q$ specific humidity or the mass mixing ratio of water vapor to total air (dimensionless) $m_v$ specific mass of water vapor (kg) $m_{vs}$ specific mass of water vapor at equilibrium (kg) $m_d$ specific mass of dry air (kg) $w$ mass mixing ratio of water vapor to dry air (dimensionless) $w_s$ mass mixing ratio of water vapor to dry air at equilibrium (dimensionless) $e_s(T)$ saturation vapor pressure (Pa) $e_{s0}$ saturation vapor pressure at $T_0$ (Pa) $R_d$ specific gas constant for dry air (J kg$^{-1}$ K$^{-1}$) $R_v$ specific gas constant for water vapor (J kg$^{-1}$ K$^{-1}$) $p$ pressure (Pa) $L_v(T)$ specific enthalpy of vaporization (J kg$^{-1}$) $T$ temperature (K) $T_0$ reference temperature (typically 273.16 K) (K)
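The chain above can be collected into one short function. This is a hedged transcription of the answer's formulas (assumed SI units: q dimensionless, p in Pa, T in K; T0 = 273.15 K inside the empirical saturation-pressure fit):

```python
import math

def relative_humidity(q, p, T):
    # Saturation vapor pressure (Pa) from the approximate Clausius-Clapeyron fit.
    es = 611.0 * math.exp(17.67 * (T - 273.15) / (T - 29.65))
    # Saturation mixing ratio, using R_d / R_v ~ 0.622.
    ws = 0.622 * es / (p - es)
    # Convert specific humidity q to the water-vapor mixing ratio w.
    w = q / (1.0 - q)
    return 100.0 * w / ws
```

For example, q = 0.01 at p = 101325 Pa and T = 300 K gives roughly 45% relative humidity; holding p and T fixed, RH increases with q as expected.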
{ "domain": "earthscience.stackexchange", "id": 701, "tags": "meteorology, humidity" }
Why don't two observers' clocks measure the same time between the same events?
Question: Person A in reference frame A watches person B travel from Star 1 to Star 2 (a distance of d). Of course, from person B's reference frame, he is at rest and is watching Star 2 traveling to him. Now we know from the principle of relativity that each one will measure the other one’s clock as running slower than his own. Let’s say that Person A measures Person B’s speed to be v, and that Person A measures 10 years for person B to make it to Star 2. Let’s also say that person B is moving at a speed such that the gamma factor is 2. This means person A observes person B’s clock to have elapsed a time of 5 years. Now let’s look at this from Person B’s perspective: Person B observes Star 2 approaching (and Star 1 receding from) him also at speed v. Since the two stars are moving, the distance between them is length contracted (after all, if there were a ruler in between the stars, the moving ruler would be contracted) by a factor of 2. Since person B measures the initial distance to Star 2 to be d/2 and its speed v, he calculates the time to Star 2’s arrival to be 5 years. Since he observes person A’s clock as running slow (since Person A is moving also at speed v), when Star 2 arrives, he measures Person A’s clock to have elapsed a time of 2.5 years. Do you see why I’m confused? Person A measures Person B’s elapsed time to be the same as Person B measures Person B’s elapsed time (both 5 years), but Person B does not measure Person A’s elapsed time to be the same as Person A measures Person A’s elapsed time (Person B gets a measurement of 2.5 years while Person A measured 10 years). This is asymmetrical, which probably means it is wrong. But I’m not sure what the error is. I suspect if I had done this correctly, each person should measure his own elapsed time to be 10 years and measure the other’s elapsed time to be 5 years.
This would be symmetrical and would make the most sense, but again, I can’t seem to justify how person B wouldn’t measure his trip time to be 5 years. What's my mistake? Answer: Everything you have said describing the situation in your question is correct; Person A and Person B disagree about how much time elapses on person A's clock between the two events. (The first event is Person B leaving Star 1 and the second event is Person B arriving at Star 2) This is not a logical contradiction. It stems from the relativity of simultaneity and the fact that the time between two events is different in different reference frames. The time between two events is minimized when the spatial separation between them is zero because the interval $$\Delta s^2 = \Delta t^2 - \Delta x^2$$ is invariant (the same for everyone). Person B therefore perceives the minimum possible time between the two events, which is 5 years. Person A perceives some spatial separation between the events, and so perceives a longer time between them (10 years). We can use this information to work out the speed $v$. For Person A, $\Delta t^2 - \Delta x^2 = 5^2$ because that's the answer for Person B, and it must be the same for A. We know $\Delta t^2 = 100$, so $$100 - \Delta x^2 = 25$$ or $$ \Delta x = \sqrt{75} = 5\sqrt{3}$$ $v$ is then $$v = \frac{\Delta x}{\Delta t} = \frac{5\sqrt{3}}{10} = \frac{\sqrt{3}}{2}$$ The situation is not symmetric with respect to A and B because A is not moving relative to the stars, but B is. The existence of the stars breaks the symmetry. A symmetric situation would be if A and B start at their own stars, then meet in the middle. Another symmetric scenario would be to let B start moving away from A. When A's clock reads 10 years, ask her what B's clock reads. When B's clock reads 10 years, ask him what A's clock reads. In that case, both would say that the other's clock reads 5 years. 
So, if the setup of the problem is symmetric with respect to A and B, their answers should be, also. Because this problem does not have that symmetry, the answers A and B give do not have the symmetry. Finally, you might be concerned that Person A thinks the time between the two events is 10 years, but according to Person B, Person A's clock reads only 2.5 years elapsed. This is due to the relativity of simultaneity. According to Person B, he is arriving at Star 2 and checking Person A's clock simultaneously. Those events have a big spatial separation, though. According to Person A, they are not simultaneous. Person A thinks Person B has checked her clock too soon.
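A numeric sanity check of the answer's numbers (hypothetical code; units chosen so c = 1, time in years, distance in light-years):

```python
import math

dt_A = 10.0             # time between the two events on Person A's clock
dx_A = math.sqrt(75.0)  # spatial separation in A's frame, 5*sqrt(3), from the answer

# Invariant interval: the proper time, i.e. Person B's elapsed clock time.
tau = math.sqrt(dt_A**2 - dx_A**2)

v = dx_A / dt_A                      # should come out to sqrt(3)/2
gamma = 1.0 / math.sqrt(1.0 - v**2)  # should come out to 2
```

This reproduces the setup: tau = 5 years for Person B, gamma = 2, and dt_A = gamma * tau = 10 years for Person A.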
{ "domain": "physics.stackexchange", "id": 1097, "tags": "special-relativity, time, reference-frames, time-dilation, observers" }
Problem reconciling relativistic momentum with Hamilton-Jacobi relations: massive object going at (imaginary) light speed?
Question: I can't seem to make sense of a strange paradox emerging from my attempts to conciliate the two physical statements described in the title. I'm sure it's some silly mistake I made in the process to cause it, but I can't identify why, and even my best guesses about the kind of mistake don't look to me as likely to generate it. I'd really appreciate any insight/explanation/correction/clarification. THE PARADOX In Special Relativity, assuming for simplicity a point-like, massive, free body moving along a single coordinate x (thus no quadri-potentials, no gravity, etc.), I have this equation for relativistic linear momentum along that coordinate in terms of gamma factor (depending in general on the velocity), rest mass and velocity: $$p_x=\gamma m v_x$$ Of course, in my reference frame the velocity is, trivially: $$v_x=\frac{\partial x}{\partial t}$$ I can use the mass-energy equivalence to replace the rest mass times gamma with the total energy, using the squared of the speed of light as proportionality factor: $$p_x=\frac{E v_x}{c^2}$$ If I want to solve for the velocity, I trivially get: $$v_x=\frac{p_x c^2}{E}$$ From (classical) Hamilton-Jacobi relations (which every single source I've found so far confirms can also apply to Special Relativity, provided that the Hamiltonian also includes the rest energy term) I can find the Hamiltonian $H$ as (minus) the partial time-derivative of the Hamilton-principal-function $S$ (analogous to action): $$H=-\frac{\partial S}{\partial t}$$ In a simple reference frame which doesn't depend explicitly on time, I can identify this Hamiltonian with the total energy of the body: $$E=-\frac{\partial S}{\partial t}$$ I can use the Hamilton-Jacobi relations for the momentum along $x$ as well, as a partial coordinate-derivative of the same $S$ (in the relativistic case, mechanical and canonical momentum are the same since I'm taking a simple case without potentials): $$p_x=\frac{\partial S}{\partial x}$$ If I try to match 4 with 
6 and 7, I get: $$v_x=-\frac{\frac{\partial S}{\partial x}}{\frac{\partial S}{\partial t}}c^2$$ Matching this with 2 under "well-behaved enough" conditions (more on this later), it should simplify to: $$v_x=-\frac{\partial t}{\partial x}c^2=-\frac{1}{v_x}c^2$$ This is quite alarming: while dimensionally the equation is still ok (the light-speed-squared factor fixes the units), quantitatively speaking I'm equating a velocity with the negative reciprocal of a velocity, so much so that if I try to solve I get: $$v_x=\pm \sqrt{-c^2}=\pm i c$$ I don't like the fact that massive objects can travel at the speed of light, let alone that they must always go at the speed of light, let alone that it's actually an imaginary speed of light! This seems quite evil. SOME POSSIBLE (hints to) SOLUTIONS Just to save the kind answerers some time, I have listed here, in order of increasing likeliness (according to me, that is), the things that I could possibly have got wrong: I could have messed up with rest/invariant vs relativistic/total quantities (I know that many people get $E=mc^2$ wrong, comparing total energy with rest mass without the gamma in non-stationary cases), but it really doesn't look like I did; also, I really struggle to see how a similar mistake could solve the "paradox", since it doesn't seem like multiplying or dividing by gamma once would improve much. I could have messed up by considering the Hamiltonian in 5 as the total energy in 3 (after all, I'm admittedly using a classical result in a relativistic setup), but every source so far confirmed that in simple setups this should be exactly the case; also, I really struggle to see how a similar mistake could solve the "paradox", since it doesn't seem like adding or subtracting a rest energy would improve much. 
I could have messed up in 9, "simplifying away" differentials and partial derivatives in a reckless way (it's not permitted, in general), but while on the one hand I think that in this specific case the way $S$ depends on $x$ and $t$ allows me to do that, on the other hand I could simply get rid of the differentials by integrating over a finite time interval, since for an isolated body the energy is a constant of motion (this is what I meant above by "well-behaved enough" conditions); also, I really struggle to see how a similar mistake could solve the "paradox", since it doesn't seem like adding some integration constant would improve much. I could have messed up in 1 already, using the simple "relativistic mass" for the linear momentum (just like almost every source suggests), instead of the "longitudinal mass" (as opposed to "transverse"). Funny trivia: the linked source corrects the definition of momentum precisely to fix a similar "paradox" with the Lagrange formalism. This could be true (and most sources about relativistic momentum could be wrong), but still, another squared gamma factor doesn't improve the situation much, since: $$p_x=\gamma^3 m v_x$$ $$v_x=\frac{p_x c^2}{E \gamma^2}=-\frac{c^2}{v_x \gamma^2}=-\frac{c^2}{v_x} (1-(\frac{v_x}{c})^2)=v_x-\frac{c^2}{v_x}$$ $$v_x^2=-c^2 v_x^2$$ $$c=\pm i$$ which is...well...not very reassuring (to the point that I really hope you will tell me to stick with the transverse mass instead)! Answer: Hamilton's principal function is $$ \begin{align}S(x,t)~=~&p x -Et, \cr p~=~&\pm\sqrt{(E/c)^2-(m_0c)^2}, \end{align}\tag{1}$$ for a relativistic free particle in 1+1D. The $\pm$ is the sign of the velocity/momentum. From the triple product rule (TPR) we calculate $$\left(\frac{\partial x}{\partial t}\right)_S ~\stackrel{TPR}{=}~-\frac{\left(\frac{\partial S}{\partial t}\right)_x}{\left(\frac{\partial S}{\partial x}\right)_t} ~\stackrel{(1)}{=}~\frac{E}{p}, \tag{2}$$ which is the phase velocity.
The phase velocity (2) is not the velocity $$ \frac{d x}{d t}~=~v~=~\frac{p}{\gamma m_0}~=~\frac{pc^2}{E}\tag{3}$$ of the particle. The latter is the group velocity. References: H. Goldstein, Classical Mechanics, 2nd (not 3rd) edition; section 10.8.
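A numeric illustration (hypothetical values, units with c = 1) of the distinction the answer draws: the phase velocity E/p and the group velocity p/E multiply to c^2, and only the latter equals the particle's actual velocity.

```python
import math

m0, v = 1.0, 0.6                     # assumed rest mass and particle velocity
gamma = 1.0 / math.sqrt(1.0 - v**2)
E = gamma * m0                       # total energy
p = gamma * m0 * v                   # relativistic momentum

v_phase = E / p  # from the motion of the surfaces S = const; exceeds c
v_group = p / E  # the particle velocity p*c^2/E with c = 1
```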
{ "domain": "physics.stackexchange", "id": 72413, "tags": "special-relativity, momentum, speed-of-light, hamiltonian-formalism" }
Navigation stack on youBot
Question: Hi everybody, I am developing my final thesis using a KUKA youBot. I am trying to run the navigation stack on it. After many attempts everything seems to work (no warnings, no errors), but when I give a navigation goal through rviz, this is what happens: http://www.youtube.com/watch?v=EjOxJTRCj7s The green path is the full plan for the robot (topic: NavfnROS/plan), while the red path is the portion of the global plan that the local planner is currently pursuing (TrajectoryPlannerROS/global_plan). There should also be TrajectoryPlannerROS/local_plan in black, but it is not visible. Here are the configuration files I am using:

costmap_common_params.yaml
obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[0.290, 0.190], [0.290, -0.190], [-0.290, -0.190], [-0.290, 0.190]]
#robot_radius: ir_of_robot
inflation_radius: 0.55
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: base_laser, data_type: LaserScan, topic: scan, marking: false, clearing: false}
#point_cloud_sensor: {sensor_frame: openni_rgb_optical_frame, data_type: PointCloud2, topic: camera/rgb/points, marking: false, clearing: false}

global_costmap_params.yaml
global_costmap:
  global_frame: /map
  robot_base_frame: /base_footprint
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: true
  transform_tolerance: 0.8
(If I use a lower tolerance I get the warning concerning the Costmap2DROS transform timeout.)

local_costmap_params.yaml
local_costmap:
  global_frame: odom
  robot_base_frame: base_footprint
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: false
  rolling_window: false
  width: 6.0
  height: 6.0
  resolution: 0.10
  origin_x: 10
  origin_y: 10

base_local_planner_params.yaml
TrajectoryPlannerROS:
  max_vel_x: 0.07
  min_vel_x: 0.05
  max_rotational_vel: 0.07
  min_in_place_rotational_vel: 0.05
  acc_lim_th: 4.0
  acc_lim_x: 3.7
  acc_lim_y: 3.5
  yaw_goal_tolerance: 0.2
  xy_goal_tolerance: 0.2
  holonomic_robot: true
  y_vels: [-0.1, -0.05, 0.05, 0.1]
  dwa: true

The navigation is blind because the task will involve no dynamic obstacles; the only obstacles are the workspace walls. Am I missing something? What could possibly be going wrong? Thank you so much for the help Lorenzo Originally posted by Lorenzo on ROS Answers with karma: 76 on 2011-09-07 Post score: 0 Answer: There are a couple of problems that I see in your parameter set: If you really do intend to avoid obstacles based only on localization and a static map, you'll need to make sure that the local costmap is set up to run with that information. Otherwise, the local planner won't know about those static obstacles. If the map is small, you'll probably just be able to have the local costmap use the same parameters as the global costmap. However, if the map is going to be large, you'll probably want to feed the local costmap tiles of the global map to make sure computing the cost function for the map doesn't get too expensive. The velocity limits you've set are extremely limiting. It'll be hard for the robot to follow any global path when its rotational velocity is limited to 0.07 radians/second and its translational velocity is limited to 0.07 meters/second. To make those limits work, I'd expect you'll have to play around a lot with the trajectory scoring parameters. Personally, I've never run navigation with such tight constraints. Originally posted by eitan with karma: 2743 on 2011-09-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6623, "tags": "ros, navigation, costmap-2d, youbot, move-base" }
Are there any "catalysts" that decrease the rate of reaction?
Question: Catalysts are things you can add to a reaction that provide an alternate reaction pathway with a lower overall activation energy, meaning the reaction is faster, while the catalyst isn't consumed in the reaction. Are there any materials, in any reaction, that can similarly be added to decrease the rate of reaction by providing a more favorable reaction pathway, but one that takes longer? Maybe it isn't possible for a pathway to be more favorable yet require a higher activation energy, and this is why this kind of "pseudo catalyst" doesn't exist. Answer: Well, such a "negative catalyst" cannot increase the activation energy of an already existing path, and an alternative path with a higher activation energy would simply not be taken to any significant extent. You can detour a river, making it flow more easily through a side channel, but not by leading it across a hill. It could act differently, as an inhibitor: blocking the low-activation-energy path by reacting with a key intermediate product or with an existing catalyst, or by competitive inhibition. Many redox reactions, including food deterioration, are catalyzed by traces of transition metals acting as redox catalysts, typically copper. An additive that forms metal complexes, like EDTA, to a large extent inhibits these catalyzed reactions.
{ "domain": "chemistry.stackexchange", "id": 14708, "tags": "experimental-chemistry" }
Check if target joint position is valid
Question: Hi, I was wondering if there is a way to check if a goal pose of a planned path (planned in moveit with OMPL) is a valid pose, meaning that joint position constraints, collision, etc. are taken into consideration. A Python function like bool check_if_pose_if_valid(joint_position) would be great. Thanks for your answers Originally posted by nmelchert on ROS Answers with karma: 143 on 2018-09-10 Post score: 0 Answer: The easiest way should be to use the StateValidationService of the move_group node, by calling the service /check_state_validity with this message. You will need the RobotState, which you can get from the /get_planning_scene service with this message. In the definition of the StateValidationService, you also find a pretty good example of how to do the checks yourself, in case you want to avoid calling the service too often. Originally posted by fvd with karma: 2180 on 2018-09-10 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by nmelchert on 2018-09-11: Thanks a lot for you answer. I will try to test the ''/check_state_validity'' service and mark this question as answered if it works for me.
{ "domain": "robotics.stackexchange", "id": 31751, "tags": "moveit, ros-kinetic, rospy, movegroup, ompl" }
Multithreaded segmented Sieve of Eratosthenes
Question: I am fairly new to Rust and thought a good way to practice would be to write a multithreaded segmented Sieve of Eratosthenes. It performs ok (searches ten-billion numbers in about 11 seconds on my system, I tried one-hundred-billion but it couldn't allocate enough memory). I was just wondering if there are any clear improvements that could be made in my code? I'm mostly hoping to get some Rust and formatting tips, but am very open to math improvements as well. Thank you in advance! use std::{ sync::{Arc, RwLock}, thread, time::Instant, cmp::min }; pub fn main() { let now = Instant::now(); let primes = threaded_segmented_sieve_of_eratosthenes(10000000000); let finished =now.elapsed().as_secs_f64(); //println!("{:?}\n", primes); println!("found {} primes in {}s", primes.len(), finished); } fn threaded_segmented_sieve_of_eratosthenes(limit:usize) -> Vec<usize> { let threads = num_cpus::get(); explicit_threaded_segmented_sieve_of_eratosthenes(limit, threads) } fn explicit_threaded_segmented_sieve_of_eratosthenes(limit:usize, threads:usize) -> Vec<usize> { let sqrt_of_limit = (limit as f64).sqrt().ceil() as usize; let early_primes = if limit <= 230 { Arc::new(RwLock::new(vec![2,3,5,7,11,13,17])) } else { Arc::new(RwLock::new(threaded_segmented_sieve_of_eratosthenes(sqrt_of_limit))) }; let mut thread_handles = Vec::new(); let thread_spacing = (limit - sqrt_of_limit) / threads; let segment_size = min(100000, sqrt_of_limit); for i in 0..threads { let early_primes = early_primes.clone(); let lowest_checked = sqrt_of_limit + i *thread_spacing; let mut highest_checked = lowest_checked + thread_spacing; if i == threads - 1 { highest_checked = limit; } thread_handles.push(thread::spawn(move|| { eratosthenes_segment_thread(early_primes, lowest_checked, highest_checked, segment_size) })); } let mut new_primes = Vec::new(); for handle in thread_handles { new_primes.append(&mut handle.join().unwrap()); } let mut early_primes = early_primes.write().unwrap();
early_primes.append(&mut new_primes); return early_primes.to_owned(); } fn eratosthenes_segment_thread(early_primes:Arc<RwLock<Vec<usize>>>, lowest_checked: usize, highest_checked: usize, segment_size: usize) -> Vec<usize> { let mut returned_primes = Vec::new(); let mut lower = lowest_checked; let mut higher = lowest_checked + segment_size; let early_primes = early_primes.read().unwrap(); while lower < highest_checked { if higher > highest_checked { higher = highest_checked; } let mut new_primes = vec![true; segment_size]; for i in 0..early_primes.len() { let mut lolim = (lower / early_primes[i]) * early_primes[i]; if lolim < lower { lolim += early_primes[i] } let mut j = lolim; while j < higher { new_primes[j - lower] = false; j += early_primes[i]; } } let mut p = lower; while p < higher { if new_primes[p - lower] { returned_primes.push(p); } p += 1 } lower += segment_size; higher += segment_size; } return returned_primes; } github link: https://github.com/knot427/Primes_To_N Answer: Rust has great tooling. Use it! Formatting I'm mostly hoping to get some Rust and formatting tips Formatting tip: cargo fmt will format your code in the idiomatic Rust style. I don't necessarily always agree with the format... but it's good enough that I love automating that problem away. Linting In the same vein, cargo clippy will run a linter-on-steroids. In this case, it points out that using return x; as the last statement of a function is unnecessary, and you can just type x instead. Let us start with some Rust style. Learn your iterators You are using indexes and arrays as if you were using C. This is not idiomatic, and may be costing you performance. The loop while lower < highest_checked can be rewritten as: for lower in (lowest_checked..highest_checked).step_by(segment_size) { let higher = cmp::min(lower + segment_size, highest_checked); // ... } The loop for i in 0..early_primes.len() can be rewritten as: for early_prime in early_primes.iter().copied() { ...
} Note: the copied ensures we have a local copy instead of a reference. Similarly, the inner loop while j < higher can be rewritten as: new_primes[(lolim - lower)..(higher - lower)] .iter_mut() .step_by(early_prime) .for_each(|is_prime| *is_prime = false); Note: unlike your version, - lower is only applied at the beginning, rather than repeatedly, and bounds are not checked repeatedly. And finally that while p < higher can be rewritten as: new_primes .into_iter() .enumerate() .filter(|(_, is_prime)| *is_prime) .for_each(|(offset, _)| returned_primes.push(lower + offset as u64)); Using iterator code will avoid faffing about with indexes as much as possible: This avoids the risk of getting them wrong. This avoids the cost of bounds checks. Let us focus on the core algorithm now. Numeric Types Your use of numeric types is problematic. You use usize when the number wouldn't fit in u32 (and 32-bit platforms are a thing), you convert usize to f64 which could lose precision, etc... First of all, I'd argue for replacing all sieve numbers by u64; it's guaranteed to be large enough to hold the values you want regardless of the platform. Secondly, your calculation of the square root of a u64 using a f64 is flawed for any u64 with at least 54 significant bits. You should either guard against that with an assert (53 significant bits is pretty large, already) or you should use the floating-point square root as an estimate and refine it. Sieve on Stack Your while lower < highest_checked loop will repeatedly create and throw away your sieve. Because you limit the sieve to 100 K elements, if you made that 100 K a constant you could use an array of 100 K elements on the stack, without issues. Flip Sieve At the moment, you initialize your sieve with 1s. Initializing with 0s may be faster, so you may want to try flipping the meaning of those booleans.
(No guarantee there) Sieve Memory Trashing Cutting down on the sieve size will also cut down on the sieve cache usage, but it won't change the fact that you are repeatedly (in while j < higher) looping over that memory from one end to the other. At the moment, you limited the segment size to 100 KB, which is a good start, but the L1 data cache is only 32 KB. It may be that limiting the segment size to 16 KB (half of L1) would be more cache-friendly, speeding up this inner loop execution. And while at it, I'd advise putting that early_primes[i] in a local variable1, just in case the compiler doesn't figure out it doesn't have to re-read it from memory every single time. Altogether, this will keep the inner loop with minimal memory access, and hopefully it will make it easier for the compiler to unroll that loop. 1 Using Rust Iterators properly will do that. Let us look at threading next. Locking or not locking Your use of RwLock appears superfluous: You create the vector of primes. You then run multiple threads in parallel which only ever read the data. After joining, you write to the data. There's never, actually, concurrent attempts at reading and writing, since all writes only happen when a single thread accesses the data. You can thus, instead: Use Arc<Vec<u64>>, directly. Use Arc::try_unwrap(early_primes).unwrap() to get back the vector at the end. This will eschew locking altogether, and avoid the to_owned() call to clone the vector again at the end. Thread Creation is a costly endeavor. Creating and Joining a thread are NOT trivial operations. Really not. You would need to measure the cost of creating and joining compared to the cost of actually running the sieve, but I would not be surprised to learn it's a significant overhead especially at the beginning when sqrt_of_limit is low and early_primes is small. 
The typical answer to this is to use a thread-pool, or some message-passing, so that rather than spinning up a new thread per "job to do", you spin up a few threads, have each of them perform all the jobs that need doing, and only then unwind them all. I've never used thread pool libraries, so no idea which are good ones, but it may be worth investigating.
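As an aside for readers new to the algorithm itself: the segmented structure being reviewed is language-agnostic. Below is a minimal single-threaded Python sketch (not the poster's code) showing the idea — sieve the base primes up to √limit, then mark off multiples one fixed-size segment at a time — checked against a plain sieve:

```python
def simple_sieve(limit):
    """Plain Sieve of Eratosthenes returning all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [i for i, v in enumerate(is_prime) if v]

def segmented_sieve(limit, segment_size=1024):
    """Sieve [2, limit] one fixed-size segment at a time."""
    root = int(limit ** 0.5)
    base_primes = simple_sieve(root)
    primes = list(base_primes)
    low = root + 1
    while low <= limit:
        high = min(low + segment_size - 1, limit)
        mark = [True] * (high - low + 1)
        for p in base_primes:
            # First multiple of p inside [low, high]
            start = max(p * p, ((low + p - 1) // p) * p)
            for m in range(start, high + 1, p):
                mark[m - low] = False
        primes.extend(low + i for i, v in enumerate(mark) if v)
        low = high + 1
    return primes
```

Each segment fits in cache regardless of how large `limit` is, which is exactly the memory-locality property the review's "Sieve Memory Trashing" section is tuning.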
{ "domain": "codereview.stackexchange", "id": 44368, "tags": "performance, multithreading, rust, primes, formatting" }
Custom class in C++ nodes
Question: I wrote a class with some functionality I want to use in a few of my ROS nodes, but I'm having trouble getting catkin to generate the proper files, or I'm including things wrong, or something... So far, I've been able to get catkin to compile a library from my class, which is a .so file, but then I'm struggling with using this class in any of my nodes because it lacks a .h file. Am I going about this the wrong way? Here's the CMakeLists.txt that succeeds in generating a library .so file, but no .h file. cmake_minimum_required(VERSION 2.8.3) project(LogixEIP) find_package(catkin REQUIRED COMPONENTS roscpp ) catkin_package( INCLUDE_DIRS include LIBRARIES LogixEIP CATKIN_DEPENDS roscpp DEPENDS system_lib ) include_directories( ${catkin_INCLUDE_DIRS} ) add_library(LogixEIP src/LogixEIP.cpp ) ## Mark executables and/or libraries for installation install(TARGETS LogixEIP ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) ## Mark cpp header files for installation install(DIRECTORY include/${PROJECT_NAME}/ DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} FILES_MATCHING PATTERN "*.h" PATTERN ".svn" EXCLUDE ) Originally posted by Robot_Cpak on ROS Answers with karma: 71 on 2015-01-28 Post score: 0 Original comments Comment by William on 2015-01-28: Do you have a header file? Or are you expecting CMake/catkin to generate one for you? You have to write, install, and include the header file with the class prototype in it for other software to make use of your .so. Comment by Robot_Cpak on 2015-01-28: I was hoping that catkin would generate one for me, is that possible? Comment by William on 2015-01-28: No, this is not something catkin nor CMake does. This is a C++ issue. 
I would suggest looking elsewhere: http://stackoverflow.com/questions/42770/writing-using-c-libraries Answer: Change include_directories( ${catkin_INCLUDE_DIRS} ) to include_directories( ${catkin_INCLUDE_DIRS} include ) Note that catkin_package( INCLUDE_DIRS include LIBRARIES LogixEIP CATKIN_DEPENDS roscpp DEPENDS system_lib ) lets source files in dependent packages of your first package find the headers in the include folder of your first package, while include_directories( ${catkin_INCLUDE_DIRS} include ) lets source files in your first package find the headers in the include folder of your first package. Source files in your package most likely also use the headers in the same package. Originally posted by Wolf with karma: 7555 on 2015-01-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20720, "tags": "catkin, roscpp" }
How can I reduce the background gradient in phase contrast microscopy?
Question: I'm looking for ways to set up a phase contrast microscope to reduce the brightness gradient that appears across the background of my images. I use phase contrast to find the outlines of cells in my images. My problem is that the automated processing algorithms I use get confused by strong brightness gradients in some images. Post-processing of the images disrupts the continuous nature of the histogram, again throwing off the algorithms. Does anyone know why these gradients appear and how to reduce their formation? Thanks, Oliver Edited the question to clarify that I am interested in microscope usage, rather than image manipulation Sample image: The image background is darker at the left of the image than at the right Answer: If you're not already doing so, make sure you set up Kohler illumination every time you use the microscope -- see this tutorial by Steven Ruzin. The lamp or the phase ring may be misaligned. There are usually small hexagonal set screws on the lamp housing and the condenser, respectively, to adjust these. Imaging near the edge of a dish or well can lead to uneven illumination as the light is bent by the walls of the culture vessel. If this might be the cause, try re-doing the Kohler illumination before you take each image. If you can't find the cause of the problem, there are several ways to correct for uneven background described here. Maybe one of them will work with your analysis.
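If the optical fixes above don't fully remove the gradient and some post-processing is still needed, one option that keeps the histogram smooth is estimating and subtracting a linear brightness ramp. A minimal pure-Python sketch (assuming, as in the sample image, that the gradient runs left to right; a real pipeline would use NumPy or Fiji's flat-field tools):

```python
def correct_horizontal_gradient(image):
    """Remove a linear left-right brightness ramp from a grayscale image.

    `image` is a list of rows (lists of floats). The ramp is estimated
    from the per-column means by least squares and subtracted, which
    preserves the overall mean intensity.
    """
    rows = len(image)
    cols = len(image[0])
    col_means = [sum(row[c] for row in image) / rows for c in range(cols)]
    overall = sum(col_means) / cols
    # Least-squares slope of column mean vs. column index
    x_mean = (cols - 1) / 2.0
    num = sum((x - x_mean) * (m - overall)
              for x, m in zip(range(cols), col_means))
    den = sum((x - x_mean) ** 2 for x in range(cols))
    slope = num / den
    return [[v - slope * (c - x_mean) for c, v in enumerate(row)]
            for row in image]

# Demo: a synthetic 4x8 image with a 0.5-per-column ramp becomes flat.
demo = correct_horizontal_gradient(
    [[10 + 0.5 * c for c in range(8)] for _ in range(4)])
```

Because only a smooth ramp is subtracted, pixel values shift continuously rather than being quantized into new histogram bins, which should be friendlier to threshold-based segmentation than histogram equalization.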
{ "domain": "biology.stackexchange", "id": 3635, "tags": "microscopy, methods" }
Why did the Chelyabinsk meteor explode?
Question: In February, a meteor entered the Earth's atmosphere over Chelyabinsk, Russia and exploded, producing a shockwave that damaged hundreds of buildings around the explosion site. But why did it explode? Why didn't it simply burn instead? Answer: Same reason as the explosion at the Tunguska event years ago: pressure, due to boiling water. Meteors usually come into our atmosphere at incredible speeds. At these speeds, the meteor is highly affected by the viscosity of air and is heated by compression (and a little bit of friction). This starts boiling the ice present in the meteor, leading to a pressure buildup. If the conditions are right, it can just explode. See also: http://www.nature.com/nature/journal/v361/n6407/abs/361040a0.html , https://physics.stackexchange.com/q/76045/7433
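A back-of-the-envelope calculation shows why such an entry ends violently. Taking commonly cited rough figures for the Chelyabinsk object — about 1.2×10^7 kg entering at roughly 19 km/s (both are estimates) — the kinetic energy alone is on the order of half a megaton of TNT:

```python
mass = 1.2e7    # kg, rough published estimate
speed = 1.9e4   # m/s (~19 km/s)
kinetic_energy = 0.5 * mass * speed ** 2   # joules

J_PER_MEGATON_TNT = 4.184e15
yield_mt = kinetic_energy / J_PER_MEGATON_TNT
# ~0.5 Mt of TNT equivalent, deposited into the atmosphere over the
# few seconds in which drag decelerates and fragments the body.
```

Whatever the detailed mechanism (steam pressure, fragmentation, ram pressure exceeding material strength), that much energy released in seconds is an airburst rather than a quiet burn-up.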
{ "domain": "astronomy.stackexchange", "id": 54, "tags": "planetary-atmosphere, meteor, explosion" }
Using a decorator to apply an inherited method to all child objects of the inherited type
Question: I wanted to share this and get some feedback. My goal was to be able to have an object with methods: class Thing(object): def __init__(self, data=123): self.data = data def add_one(self): self.data += 1 def add_number(self, number): self.data += number And then a collection object, MetaThing that can hold several Thing objects, which inherits from Thing like this: class MetaThing(Thing): def __init__(self): super(MetaThing, self).__init__() self.list = [1, 2, 3, 4, 5] self.things = [] self.setupThings() def setupThings(self): for i in self.list: thing = Thing(data=i) self.things.append(thing) Finally, I want to be able to apply the methods in Thing to all the Thing instances stored in MetaThing.things when those same methods are called on MetaThing. Doing this explicitly creates methods like this: def add_one(self): for t in self.things: t.add_one() def add_number(self, number): for t in self.things: t.add_number(number) So I started wondering if this would be a good situation for a decorator, so I wrote one that I think does what I want: def allthings(func): def new_func(*args, **kwargs): self = args[0] for thing in self.things: arglist = list(args) arglist[0] = thing func(*arglist, **kwargs) return new_func So now, the MetaThing method can use the @allthings decorator to call an inherited method on all child objects without rewriting the for loop every time: class MetaThing(Thing): def __init__(self): super(MetaThing, self).__init__() self.list = [1, 2, 3, 4, 5] self.things = [] self.setupThings() def setupThings(self): for i in self.list: thing = Thing(data=i) self.things.append(thing) @allthings def add_one(self): self.add_one() @allthings def add_number(self, number): self.add_number(number) Finally this block verifies that it's working: if __name__ == '__main__': meta = MetaThing() meta.add_one() print [t.data for t in meta.things] meta.add_number(5) print [t.data for t in meta.things] Is this a valid approach? Is there a better way to achieve this? 
Am I crazy for using decorators in this way? My actual scenario is a PlottedShape with lots of methods like plot, delete, update, select, apply_opacity, apply_color, apply_wireframe, etc. and I also have a PlottedMultiShape made up of multiple shapes (like a snowman shape made of multiple circles). If PlottedMultiShape.select() is called I want to call select on all the child PlottedShape objects -- and I'd like this same behavior for all of the other methods as well. Using a decorator seemed like a good "Don't Repeat Yourself" tactic. Answer: Another way to do this would be to use __getattr__(). class Wraps(object): def __init__(self, *args): self.things = args def __getattr__(self, item): def each(*args, **kwargs): for thing in self.things: getattr(thing, item)(*args, **kwargs) return each def some_real_wraps_method(self): print "real method" class Thing(object): def __init__(self, name): self.name = name def a(self): print self.name, "a" def b(self, *args, **kwargs): print self.name, "b", args, kwargs w = Wraps(Thing("1"), Thing("2")) w.a() w.b() w.b("arg1", "arg2", named=True) w.some_real_wraps_method() Running this will output: 1 a 2 a 1 b () {} 2 b () {} 1 b ('arg1', 'arg2') {'named': True} 2 b ('arg1', 'arg2') {'named': True} real method The upside is that you don't have to define any methods in order for them to be proxied. The downside is that it will try to proxy any attribute not defined by Wraps (this can be counteracted with a white list).
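A sketch of the white-list variant mentioned at the end, rewritten for Python 3 (class and method names are illustrative):

```python
class Wraps:
    # Only these attribute names are proxied to the wrapped things.
    PROXIED = {"a", "b"}

    def __init__(self, *things):
        self.things = things

    def __getattr__(self, name):
        # __getattr__ only runs when normal lookup fails, so real
        # attributes/methods of Wraps are never intercepted.
        if name not in self.PROXIED:
            raise AttributeError(name)
        def each(*args, **kwargs):
            # Collect results so callers can inspect them if needed.
            return [getattr(thing, name)(*args, **kwargs)
                    for thing in self.things]
        return each

class Thing:
    def __init__(self, data):
        self.data = data
    def a(self):
        self.data += 1
        return self.data
    def b(self, n):
        self.data += n
        return self.data

w = Wraps(Thing(1), Thing(2))
results = w.a()   # [2, 3] -- proxied to every Thing
more = w.b(10)    # [12, 13]
```

With the white list, an attribute that is neither a real `Wraps` member nor in `PROXIED` raises `AttributeError` as usual, so typos fail loudly instead of silently fanning out to children.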
{ "domain": "codereview.stackexchange", "id": 8183, "tags": "python, inheritance, meta-programming" }
Rearranging function in 'Theorems for free'
Question: I'm reading Wadler's 'Theorems for free'. In section 3.5 he states that $m_{AA}(I_A)$ is a rearranging (i. e. injective) function. $I_A$ is the identity function on the type A. $$m : \forall X.\forall Y. (X \rightarrow Y) \rightarrow (X^* \rightarrow Y^*)$$ How does he use section 3.4 to prove this without any $\forall^{(=)}X$-types? Why can't $m_{AA}(I_A)$ e. g. drop a few elements in the list? Answer: It can; as good evidence of this, define m as something like m f = map f . drop 2 and everything still typechecks. Here "rearranging" is supposed to refer to a structural operation on the list without regard to the specific values of the elements. This is captured by the free theorem at this type, m (\x -> x) . map f = map f . m (\x -> x) where m (\x -> x) performs all the structural changes to the list and map f actually alters each element. This theorem implies that the two are independent of each other.
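The free theorem can be checked concretely. Below is a Python rendering of the answer's "drop two elements, then map" example together with the naturality condition m id ∘ map f = map f ∘ m id (purely illustrative — Wadler's theorem rests on parametric polymorphism, which Python does not enforce):

```python
def m(f):
    # A "rearranging" m that also drops elements: it still has the shape
    #   (X -> Y) -> ([X] -> [Y])
    # so the free theorem must hold for it.
    return lambda xs: [f(x) for x in xs[2:]]

identity = lambda x: x
double = lambda x: 2 * x

xs = [1, 2, 3, 4, 5]

# Free theorem at this type:  m id . map f  ==  map f . m id
lhs = m(identity)([double(x) for x in xs])
rhs = [double(x) for x in m(identity)(xs)]
# Both are [6, 8, 10]: the structural action (dropping/rearranging)
# commutes with the per-element action of f.
```

This is exactly the sense in which the structural part and the element-wise part are independent: the theorem holds whether m permutes, duplicates, or drops positions, so "rearranging" here cannot mean injective.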
{ "domain": "cs.stackexchange", "id": 4627, "tags": "type-theory" }
PHP Email List Sign Up
Question: See my original post: here. I have one html page and four php files that allow users to sign up for an email list. One of the php scripts is a cronjob that deletes unverified rows older than 24 hours, and it is not included below for the sake of post length. I use PDO for my prepared statements. Everything has been tested live and is fully functional to the best of my knowledge. Any and all feedback is welcome. I've bulleted some questions below the snippets. :) email.html --- Users sign up here <form action="signup.php" method="POST" autocomplete="off"> <input type="text" autocomplete="off" placeholder="Email address" name="email" required> <br/> <input type="submit" autocomplete="off" value="Subscribe"> </form> signup.php --- Filters and sends user input to the database <?php //1---DATABASE CONNECTION--- $dbHost = "HOST"; $dbName = "DATABASE"; $dbUser = "USER"; $dbPassword = "PASSWORD"; $port = "PORT"; $charset = 'utf8mb4'; $options = [ \PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION, \PDO::ATTR_DEFAULT_FETCH_MODE => \PDO::FETCH_ASSOC, \PDO::ATTR_EMULATE_PREPARES => false, ]; $dsn = "mysql:host=$dbHost;dbname=$dbName;charset=$charset;port=$port"; try { $pdo = new \PDO($dsn, $dbUser, $dbPassword, $options); } catch (\PDOException $e) { throw new \PDOException($e->getMessage(), (int)$e->getCode()); } //1---END--- //2---Add to table: IPv4 ADDRESS, EMAIL, DATETIME, and ACODE--- //prevent direct url access of .php from users, routes to starting page if(($_SERVER['REQUEST_METHOD'] == 'POST') == NULL) { header("Location: email.html"); exit (0); } //trim spaces on ends of user email input $Temail = trim($_POST['email']); //(on mobile, auto-complete often leaves a space at the end) //allow international characters if(preg_match("/^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,3})$^/", $Temail)) { //prevents invalid email addresses header("Location: invalid.html"); exit (0); } //Check Email Domain MX Record $email_host = 
strtolower(substr(strrchr($Temail, "@"), 1)); if (!checkdnsrr($email_host, "MX")) { header("Location: invalid.html"); exit (0); } //Prevent users from inputting a specific domain...like mine $notallowed = [ 'mydomain.com', ]; if (!in_array($email_host, $notallowed) == NULL) { header("Location: notallowed.html"); exit (0); } //checks database to make sure the email is not a duplicate $stmt1 = $pdo->prepare("SELECT email FROM emailTable WHERE email = ?"); $stmt1->execute([$Temail]); if($stmt1->fetch()) { //prevents adding a duplicate email header("Location: duplicate.html"); exit (0); } //send verification email using seperate php file include_once 'vEmail.php'; //check to see if email could be put together if(include_once 'vEmail' == NULL) { header("Location: failure.html"); exit (0); } //set date and time date_default_timezone_set('America/Los_Angeles'); $dateTime = date('Ymd-His', strtotime('NOW')); // ('Ymd-His' format and LA timezone are preferred) //variable to store ipv4 address $euserIP4 = $_SERVER['REMOTE_ADDR']; //add all data to the database $stmt2 = $pdo->prepare("INSERT INTO emailTable (IP4, datetime, email, acode) VALUES (:IP4, :datetime, :email, :acode)"); $stmt2->execute(['IP4' => $euserIP4, 'datetime' => $dateTime, 'email' => $Temail, 'acode' => $Acode]); header("Location: success.html"); exit (0); //2---END--- ?> vEmail.php --- include_once in signup.php, sends verification email <?php //generate verification code $Acode = bin2hex(random_bytes(30)); //send verification email w/ code $emailbody = "<html> <body> <table> <tr> <td> <button><a href='https://www.MYDOMAIN.com/status/verify.php?acode=$Acode'>VERIFY</a></button> </td> </tr> </table> </body> </html>"; $headers = "Reply-To: MY NAME <no-reply@MYDOMAIN.com>\r\n"; $headers .= "Return-Path: MY NAME <no-reply@MYDOMAIN.com>\r\n"; $headers .= "From: MY NAME <no-reply@MYDOMAIN.com>\r\n"; $headers .= "MIME-Version: 1.0\r\n"; $headers .= "Content-type: text/html; charset=UTF-8\r\n"; $headers .= 
"X-Priority: 3\r\n"; //send email mail($Temail, 'Confirm Your Email Subscription', $emailbody, $headers, '-f ' . 'no-reply@MYDOMAIN.com'); ?> verify.php --- Attached to link that was sent in the verification email <?php //1---DATABASE CONNECTION--- $vHost = ""; $vName = ""; $vUser = ""; $vPassword = ""; $vPort = ""; $vCharset = ""; $vOptions = [ \PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION, \PDO::ATTR_DEFAULT_FETCH_MODE => \PDO::FETCH_ASSOC, \PDO::ATTR_EMULATE_PREPARES => false, ]; $vdsn = "mysql:host=$vHost;dbname=$vName;charset=$vCharset;port=$vPort"; try { $vpdo = new \PDO($vdsn, $vUser, $vPassword, $vOptions); } catch (\PDOException $ve) { throw new \PDOException($ve->getMessage(), (int)$ve->getCode()); } //1---END--- //2---VERIFICATION LINK--- //set timezone date_default_timezone_set('America/Los_Angeles'); //prevent direct url access of .php from users, routes to starting page if(isset($_GET['acode']) == NULL) { header("Location: email.html"); exit (0); } //set verification code variable $vAcode = $_GET['acode']; //check if row still exists $vStmt1 = $vpdo->prepare("SELECT verified, acode FROM emailTable WHERE acode = '$vAcode' LIMIT 1"); $vStmt1->execute(); if($vStmt1->rowCount() == NULL) { //EXPIRED header("Location: expired1.html"); exit (0); } //check if row is verified ('verified' set to 0) $vStmt2 = $vpdo->prepare("SELECT verified, acode FROM emailTable WHERE verified = 0 AND acode = '$vAcode' LIMIT 1"); $vStmt2->execute(); if($vStmt2->rowCount() == NULL) { //if 'verified' is set to 1 already header("Location: expired2.html"); exit (0); } //since 'verified' is set to 0,update verification status to 1 $vStmt3 = $vpdo->prepare("UPDATE emailTable SET verified = 1 WHERE acode = '$vAcode' LIMIT 1"); $vStmt3->execute(); //check if the 'verified' was updated correctly if($vStmt3->fetch()) { header("Location: failure.html"); exit (0); } //SUCCESS header("Location: verified.html"); exit (0); //2---END--- ?> Questions and Comments: I've included database 
connections for each php file, but I've found that some prefer to have a global config file for their connections. Why is that? Is it more efficient? In the original post, someone mentioned that the regex found in signup.php is missing the unicode flag. Could someone explain this, because I wasn't able to find anything on it. While storing IPv4 works well, I haven't been able to figure out how to correctly store IPv6 (as far as I know). I've tried this: bin2hex(inet_pton($_SERVER['REMOTE_ADDR'])); but I couldn't figure out if the output was correct, because it looked nothing like an ipv6 address. Correct me if this looks usable. I'm looking into PHPMailer rather than using the native mail() function. In the case of the above scripts, would this be recommended or is it more for bulk email sending? Edit 1: I've just realized that the email validation found in signup.php allows an email address with spaces (ie: ohnothere are spaces@mydomain.com). The previous version of the script, found at the link at the top of the post, prevented these sorts of addresses. Any ideas why that may be, or how to prevent that? I'd like to stay away from filter_var(FILTER_VALIDATE_EMAIL) since it cancels international characters. Will continue experimenting with it... Answer: Including a global configuration file is not about efficiency, it's just about centralizing the application configuration data. Suppose, for example, that you're developing a new feature for your application, you want to use a database specifically for this purpose, so as not to interfere with the production database. If you're including the database configuration on each script, you're going to have to make sure you check (and hopefully double-check) every script to make sure none of your development actions get propagated to the production database. 
(Even if you were certain none of your changes would be visible to your application if pushed to the production database, using a completely different database saves your production database and application from having to handle the load from both regular users and the development team.) On the other hand, you could just do this. Note that defining array-valued constants is valid as of PHP 7.0.0, although you could simply use ini-style define( 'DEV_HOSTNAME', ... ) style constant definitions, it's just not as fancy. define( 'CONFIGURATION', 'PRODUCTION' ); define( 'ENVIRONMENT', [ 'DEVELOPMENT' => [ 'hostname' => 'localhost', 'database' => 'devdb', 'username' => 'username', 'password' => 'password' ], 'PRODUCTION' => [ 'hostname' => 'hostname', 'database' => 'proddb', 'username' => 'username', 'password' => 'password' ] ]); Now all you have to do to switch environments in your application is modify the CONFIGURATION constant, since your application could access the configuration values using something like this. $hostname = ENVIRONMENT[CONFIGURATION]['hostname']; $database = ENVIRONMENT[CONFIGURATION]['database']; $username = ENVIRONMENT[CONFIGURATION]['username']; $password = ENVIRONMENT[CONFIGURATION]['password']; What some production applications do is define an abstract Database class whose derived classes implement an interface that defines how they should be accessed, so you could switch between say MySQL and Postgres by doing something like the above. This is one of the reasons PDO is so useful; it allows us to code to an interface, not an implementation. To answer your question about efficiency, it's actually less efficient to include a separate file that contains the configuration data. To include a file, the interpreter has to get it first. Application configuration files are (obviously) application-specific, and thus located somewhere in or around the current directory. However, unless the include path is absolute or begins with a '.' 
or '..', PHP will first look for the file in the include_path. Assuming the file is actually found eventually, reading it from wherever it is stored requires disk access, which is really slow, although the impact can be mitigated by using SSDs or even tmpfs, as well as caching. Worse yet, if you use the *_once version of either require or include, it won't just include the file (if and when the file is eventually found); the interpreter will go through the additional trouble of verifying the file hasn't already been included. You can think of the tradeoff between inefficient configuration change propagation and execution speed as a tradeoff between development-time efficiency and execution-time efficiency, and it's definitely worth your while (pun intended). Larger applications tend to opt for a more object-oriented approach to configuration by essentially encapsulating these global variables in a singleton class that oversees the getting and setting of configuration parameters. You'll probably see this referred to as the Registry pattern.
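The environment-switch idea above is language-neutral; here is the same pattern in a Python sketch (names and values are placeholders, mirroring the PHP constants):

```python
CONFIGURATION = "PRODUCTION"

ENVIRONMENT = {
    "DEVELOPMENT": {
        "hostname": "localhost",
        "database": "devdb",
        "username": "username",
        "password": "password",
    },
    "PRODUCTION": {
        "hostname": "hostname",
        "database": "proddb",
        "username": "username",
        "password": "password",
    },
}

def db_config(env=None):
    """Return connection settings for the selected environment."""
    return ENVIRONMENT[env or CONFIGURATION]

settings = db_config()           # whatever CONFIGURATION selects
dev = db_config("DEVELOPMENT")   # explicit override, e.g. for tests
```

Switching the whole application between databases is again a one-line change to `CONFIGURATION`, and the optional override argument gives tests a way in without touching the global.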
{ "domain": "codereview.stackexchange", "id": 39551, "tags": "php, html, validation, email" }
turtlesim velocity units?
Question: Can anyone tell me how to correlate a velocity message to turtlesim with its actual motion? In particular, I'm currently trying to get a specific rotation. I had assumed that sending a velocity message with angular=1 meant 1 radian per second for one second = 1 radian of rotation. Instead, I get something significantly less than this. Using electric with ubuntu 10.4. Thanks, Paul. Originally posted by Paul0nc on ROS Answers with karma: 271 on 2012-01-03 Post score: 1 Answer: You are correct. The turtle moves in radians per second. So if you command your turtle to move at 6.28 rad/sec it will make one full revolution in 1 second. You can test this out with the command below; since the turtle executes commands for 1 second, it will make 1 revolution. rostopic pub -1 /turtle1/command_velocity turtlesim/Velocity 0.0 6.28 EDIT: To command the turtle programmatically in C++ you will need to repeatedly publish the command velocity until you achieve the desired angle, because the turtle times out the velocity commands after 1 second if it does not receive a new command within that time window. A good example of commanding the turtle in C++ is the turtle_actionlib package (http://ros.org/wiki/turtle_actionlib). The action server drives the turtle in a polygonal shape. The turtle_actionlib draws regular polygons; the goal shown below takes in the number of sides and the radius of the circumscribed circle. When a goal is received, the goalCB() function computes the interior angles and apothem, which are returned in the result of the action msg. Once a goal is received and the angles computed, the controlCB() function publishes the velocity commands to draw the shape. Look in the shape_client.cpp file for how to send a goal to the shape_server. #goal definition int32 edges float32 radius --- #result definition float32 interior_angle float32 apothem --- #feedback You should not need to change the command duration.
The point of having a timeout is so that the turtle behaves more like robots in the real world where you need to publish msgs at a steady rate to continue moving. Originally posted by mmwise with karma: 8372 on 2012-01-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Paul0nc on 2012-01-05: Again, thanks. Is it possible to use the turtle_actionlib to generate shapes besides the pentagon example? I'm not seeing how the actionlib actually takes a shape input. Also, is it possible to change the command duration? I tried altering the WallDuration(1.0) in turtle.cpp... but no luck. Comment by Paul0nc on 2012-01-04: Thanks mwise_wg. I'm trying to do it programmatically. Is there any timer built into the command that I might need to implement in a C++ program?
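The "republish until the angle is reached" bookkeeping can be sketched without any ROS dependencies; the helper below just illustrates the arithmetic (the function name and the 10 Hz rate are arbitrary choices, not part of any ROS API):

```python
import math

def commands_needed(target_angle, angular_velocity, publish_rate_hz):
    """How many velocity messages to publish so the turtle turns target_angle.

    Each message commands `angular_velocity` rad/s; publishing at
    `publish_rate_hz` means each message effectively covers 1/rate seconds,
    well under turtlesim's 1 s command timeout.
    """
    time_needed = target_angle / angular_velocity   # seconds of motion
    return math.ceil(time_needed * publish_rate_hz)

# One full revolution at 6.28 rad/s, publishing at 10 Hz:
n = commands_needed(2 * math.pi, 6.28, 10)
```

In a real node you would publish inside a rate-limited loop and stop once integrated pose feedback (not an open-loop count) reports the target angle, since the simulated turtle's motion is not perfectly exact.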
{ "domain": "robotics.stackexchange", "id": 7781, "tags": "ros, velocity, turtlesim" }
Partial derivative of momentum with respect to position in Poisson bracket representation
Question: The representation of a Poisson bracket is given by the following equation: $$\tag{1} \{f,g\} = \sum_{s=1}^n \sum_{i=1}^{d=3}\left ( \frac{\partial f}{\partial x_i^{(s)}} \frac{\partial g}{\partial p_i^{(s)}} - \frac{\partial f}{\partial p_i^{(s)}} \frac{\partial g}{\partial x_i^{(s)}}\right),$$ where $n$ is the number of particles, and $d$ is the number of dimensions. Assume we have an arbitrary Hamiltonian $H$ (possibly explicitly time-dependent). Then according to Hamilton's equations we have: $$\frac{d p_j^{(r)}}{dt} = \{p_j^{(r)}, H\} + \frac{\partial p_j^{(r)}}{\partial t} =\{p_j^{(r)}, H\}.$$ Using the representation given in (1) we can show that $\frac{d p_j^{(r)}}{dt} = -\frac{\partial H}{\partial x_j^{(r)}}$. In the process of the derivation, I get that $$\tag{2} \frac{\partial p_i^{(r)}}{\partial x_i^{(r)}} = 0.$$ I don't understand why the quantity in (2) is zero. Can't the momentum depend on time as well as $x$? P.S. Unfortunately, I don't have any physics background whatsoever, so I will appreciate an intuitive answer or a mathematical proof which does not rely on Lagrangian mechanics. Answer: Variables $x_i$ and $p_i$ are independent variables used as arguments of the Hamiltonian $H(x_i, p_i)$. Since they're independent variables, the partial derivative of one w.r.t. the other is identically zero.
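The independence of $x$ and $p$ can also be checked numerically. Below is a small sketch (a 1-D harmonic oscillator with $m=k=1$, chosen for illustration, not taken from the question) that evaluates $\{p,H\}$ by central differences; because $\partial p/\partial x = 0$, the bracket reduces to $-\partial H/\partial x$:

```python
# {f,g} = df/dx * dg/dp - df/dp * dg/dx, with x and p treated as
# independent arguments of every phase-space function.
def poisson(f, g, x, p, h=1e-6):
    dfdx = (f(x + h, p) - f(x - h, p)) / (2 * h)
    dfdp = (f(x, p + h) - f(x, p - h)) / (2 * h)
    dgdx = (g(x + h, p) - g(x - h, p)) / (2 * h)
    dgdp = (g(x, p + h) - g(x, p - h)) / (2 * h)
    return dfdx * dgdp - dfdp * dgdx

H = lambda x, p: p**2 / 2 + x**2 / 2   # harmonic oscillator, m = k = 1
mom = lambda x, p: p                   # f = p: note df/dx is exactly 0

# dp/dt = {p, H} = -dH/dx = -x
print(poisson(mom, H, x=1.5, p=0.7))   # ≈ -1.5
```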
{ "domain": "physics.stackexchange", "id": 90942, "tags": "momentum, hamiltonian-formalism, differentiation, phase-space, poisson-brackets" }
Blending Artifacts in Photo Stitching
Question: I am working on a photo stitching application that uses multi-band blending. I need to get rid of unpleasant edges appearing at some places: Here is the area of overlap (left - new image added to the mosaic, right - current mosaic containing pixels of the new image on background pixels to improve blending, middle - blending mask): If I just compute a weighted average between the left and right images according to the mask, the result is of course OK. However, this would leave a visible seam, as two images usually have slightly different exposure. So all three images need to be successively blurred to build a Gaussian pyramid - here is what one level of the pyramid looks like: You can see that the top part of the blending mask "touches" the border. The Gaussian blurring filter reflects on image borders and this causes inaccuracy in the lower band. I colored the images to make the problem more visible: I am not sure how to blur the mask so that it deals with the overlap area edges nicely. A few suggestions: change the behavior of the blurring filter (how?) extend the area of the blending mask so that one part never "touches" the border update the weighting masks so that the resulting blend mask is more "edge-aware" Any other suggestions/hints? Answer: I have solved this problem by adding some padding to the overlap area. There are pixels belonging to image1, image2, background and overlap. Pixels in the overlap are successively relabeled to either image1 or image2 depending on the neighborhood. This padding makes some space for the blurring so that sharp edges of the overlap have no chance of appearing. Another trick is photometric calibration, i.e. gain compensation and vignetting removal. This minimizes or even eliminates the differences in the low frequency band. Finally, the blurring has to keep background pixels (usually black) from bleeding into the image. This can be done by alpha blending or by using a binary mask as described here.
It is also possible to reduce the amount of blurring near edges and corners of the overlap. This, unfortunately, reduces the performance of the blending algorithm, and the results are somewhat uncontrolled (as the blurring is non-uniform). Lastly, gradient-domain blending improves the quality near edges, as it does not rely on Gaussian blurs of fixed sizes.
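The core idea of multi-band blending - blend low frequencies over a wide transition to hide exposure differences, but blend high frequencies with the sharp mask to avoid ghosting - can be sketched in 1-D. This is a toy two-band illustration with a box blur, not the actual stitching pipeline:

```python
# Two-band blend of 1-D signals a and b with mask m (1 -> take a, 0 -> take b).

def box_blur(x, r=3):
    """Simple box blur with shrinking windows at the borders."""
    n = len(x)
    return [sum(x[max(0, i - r):i + r + 1]) / len(x[max(0, i - r):i + r + 1])
            for i in range(n)]

def two_band_blend(a, b, mask):
    low_a, low_b = box_blur(a), box_blur(b)
    high_a = [ai - li for ai, li in zip(a, low_a)]
    high_b = [bi - li for bi, li in zip(b, low_b)]
    soft = box_blur(mask)                     # wide transition for the low band
    low = [m * la + (1 - m) * lb for m, la, lb in zip(soft, low_a, low_b)]
    high = [m * ha + (1 - m) * hb for m, ha, hb in zip(mask, high_a, high_b)]
    return [lo + hi for lo, hi in zip(low, high)]

a = [1.0] * 16                 # "image 1": flat exposure 1.0
b = [1.2] * 16                 # "image 2": slightly brighter
mask = [1.0] * 8 + [0.0] * 8   # hard seam in the middle
out = two_band_blend(a, b, mask)
# The largest jump between neighbours is much smaller than the raw 0.2 seam:
print(max(abs(x - y) for x, y in zip(out, out[1:])))
```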
{ "domain": "dsp.stackexchange", "id": 399, "tags": "image-processing, stitching, photo" }
Minimum spanning tree - Prim's algorithm
Question: I have to demonstrate Prim's algorithm for an assignment and I'm surprised that I found two different solutions, with different MSTs as an outcome. Now I know that shouldn't happen, so I wonder what I did wrong. Here is the exercise: The first solution (using Prim's) visits the nodes in the following order: v0,v1,v8,v7,v6,v3,v2,v4,v5. Here the MST has a weight of 37, which is the same result that I got by using Kruskal's on the same graph. The second solution (again using Prim's), however, visits the nodes in the following order: v0,v1,v3,v2,v7,v8,v6,v4,v5, which leads to an MST with a weight of 39. How can this be? Where is my mistake? Thanks in advance Answer: Prim's algorithm can give different solutions (depending on how you resolve "ties"), but all solutions should give an MST with the same (minimal) weight, which is not the case here. Your second solution: $v_0,v_1,v_3,v_2,v_7,v_8,v_6,v_4,v_5$ is incorrect. After adding $v_0,v_1,v_3,v_2$ you add $v_7$. But the minimum distance between $v_7$ and any existing vertex in the tree is $6$. Vertex $v_6$ however has a distance of only $4$, and should therefore be added first. You could continue this, and end up with a solution that is indeed different from your solution 1, but has the same weight. For example, $v_0,v_1,v_3,v_2, v_6, v_7,v_8,v_4,v_5$, which gives an MST of weight $37$, as expected.
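The rule the answer applies - always take the cheapest edge leaving the current tree - is what a heap-based implementation enforces automatically. A minimal sketch follows (run on a small made-up graph, since the exercise's graph is only shown as an image); any tie-breaking order yields the same total weight:

```python
import heapq

def prim(adj, start=0):
    """adj: {u: [(w, v), ...]} undirected adjacency. Returns total MST weight."""
    visited = {start}
    heap = list(adj[start])
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)
        if v in visited:
            continue                 # stale entry: v was reached more cheaply
        visited.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total

# A square with one diagonal; edges stored as (weight, neighbour), both ways.
adj = {
    0: [(1, 1), (4, 3)],
    1: [(1, 0), (2, 2), (5, 3)],
    2: [(2, 1), (3, 3)],
    3: [(4, 0), (5, 1), (3, 2)],
}
print(prim(adj))   # 6  (edges 0-1, 1-2, 2-3)
```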
{ "domain": "cs.stackexchange", "id": 10650, "tags": "algorithms, graphs, minimum-spanning-tree, prims-algorithm" }
Coulomb's law with an $r^3$, not $r^2$, in the denominator
Question: I am reading an older physics book that my professor gave me. It is going over Coulomb's law and Gauss' theorem. However, the book gives both equations with an $r^3$, not $r^2$, in the denominator. Can somebody please explain why it is given as $r^3$? An image is attached for reference. Also, for equation 1-24, can somebody please explain how the middle side is equal to the right side with the del operator? Answer: The book does give "Coulomb's law" with $\frac{1}{r^3}$, because it gives it in its proper vector form $$ \vec E \propto \frac{\vec r}{r^3}$$ which, on taking absolute values, yields the form you are probably more familiar with, $$ E \propto \frac{1}{r^2}$$ since $\lvert \vec r \rvert = r$.
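The equivalence of the two forms is easy to verify numerically; a small sketch with an arbitrary constant of proportionality $k=1$ (chosen for illustration):

```python
import math

def field(r_vec, k=1.0):
    """E = k * r_vec / |r|^3  -- the vector form with r^3 in the denominator."""
    r = math.sqrt(sum(c * c for c in r_vec))
    return [k * c / r**3 for c in r_vec]

r_vec = [3.0, 4.0, 0.0]          # |r| = 5
E = field(r_vec)
magnitude = math.sqrt(sum(c * c for c in E))
print(magnitude, 1 / 5**2)       # both 0.04: |E| = k / r^2, as expected
```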
{ "domain": "physics.stackexchange", "id": 23546, "tags": "electrostatics, vectors, coulombs-law" }
Terminal Freezes after roslaunch event
Question: I want to spawn a pioneer3at robot in gazebo through ros so I created a file called p3at.launch and made the corresponding urdf file. When I roslaunch it, it started normally but freezes. Terminal looks like this - //Terminal arpit@arpit-SVE15137CNB:/opt/ros/indigo/share/urdf$ roslaunch gazebo_ros p3at.launch ... logging to /home/arpit/.ros/log/ee1fb8ae-5bcf-11e5-97ce-b8763fea9d09/roslaunch-arpit-SVE15137CNB-16521.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://arpit-SVE15137CNB:54185/ SUMMARY ======== PARAMETERS * /robot_description: <?xml version="1.... * /rosdistro: indigo * /rosversion: 1.11.13 NODES / robot_state_publisher (robot_state_publisher/robot_state_publisher) spawn_urdf (gazebo_ros/spawn_model) auto-starting new master process[master]: started with pid [16536] ROS_MASTER_URI=http://localhost:11311 setting /run_id to ee1fb8ae-5bcf-11e5-97ce-b8763fea9d09 process[rosout-1]: started with pid [16549] started core service [/rosout] process[spawn_urdf-2]: started with pid [16566] process[robot_state_publisher-3]: started with pid [16567] And here is the launch file <launch> <env name="GAZEBO_MODEL_PATH" value="$GAZEBO_MODEL_PATH:~/.gazebo/models" /> <env name="GAZEBO_RESOURCE_PATH" value="$GAZEBO_RESOURCE_PATH:~/.gazebo/models" /> <arg name="world" default="worlds/empty.world" /> <arg name="urdf" default="$(find urdf)/pioneer3at.urdf" /> <arg name="name" default="pioneer3at" /> <param name="robot_description" command="$(find xacro)/xacro.py $(arg urdf)" /> <node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-param robot_description -urdf -model $(arg name)" /> <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" /> </launch> Originally posted by Arpit on ROS Answers with karma: 3 on 2015-09-15 Post score: 0 Original comments Comment by ahendrix on 2015-09-15: That looks 
normal. What do you expect it to do? Comment by Arpit on 2015-09-15: It does not launch the gazebo simulator with the p3at robot. It is not doing anything after this, whereas the other launch files are launched in the gazebo simulator. Comment by bvbdort on 2015-09-15: Maybe your processor is slow or you have too little RAM. What is your processor and how much RAM do you have? Answer: It looks like your launch file doesn't include an instance of gazebo; only the spawner, which expects to connect to gazebo. If you want to solve this on your own, compare your launch file to an existing launch file that behaves the way you expect, with a different robot. If you want more help, please edit your question to include the launch file that you've written. EDIT Your launch file does not start gazebo. You can start gazebo by adding the following include to your launch file: <include file="$(find gazebo_ros)/launch/empty_world.launch"> <arg name="use_sim_time" value="true"/> <arg name="debug" value="false"/> </include> Originally posted by ahendrix with karma: 47576 on 2015-09-15 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Arpit on 2015-09-15: Yeah! Thanks !! :) Comment by Shivam on 2021-09-26: Thanks it worked
{ "domain": "robotics.stackexchange", "id": 22640, "tags": "ros, gazebo, p3at, spawn" }
AC induction motor
Question: Could the rotor be shaped like a fan inside the stator of an AC induction motor? And could the stator act as a duct? Would this lead to a high ratio of airflow to materials used? Thanks Answer: It could, yes. But the tradeoff would be worse performance. The strength of the magnetic field is a function of distance from the coil, so increasing the volume by turning your rotor into a fan shape would reduce the efficiency. Typically, this is dealt with by mounting a fan close to the motor, sometimes inside the motor case. The motor stays compact and efficient while the cooling is provided by a fan specially built for the purpose.
{ "domain": "engineering.stackexchange", "id": 1400, "tags": "electrical-engineering, motors, ac" }
A good ecc algorithm for random errors?
Question: I know I will have a 10% error rate, but these errors are truly random. I was using Reed-Solomon codes, but if a single bit in the code or parity word is bad, the whole word becomes useless. If I have eight bytes of data and eight bytes of parity codes, then a 5-bit error can make my whole message bad. Is there a better algorithm for cases like this? Edit: I am trying to send packets of 128 bits or 16 bytes. Half is data, half is parity. I am having trouble figuring out how LDPC would be more beneficial. If I have the same 16 bytes with 8 bytes of parity and 8 bytes of data, how would LDPC be better? For example, suppose with my example I have 1-bit errors in 5 different bytes. This would make RS unable to decode my data. Would it be the same with LDPC, or would it be able to recover that? From what I understand of LDPC, it would still be unable to decode it. Answer: This really is a comment but is posted as an answer because it is too long to post as a comment. Your edited question says that you are transmitting $16$-byte packets consisting of $8$ data bytes and $8$ parity bytes over a channel with bit error rate of $10\%$ or $0.1$. Thus, your code rate is fixed at $0.5$. Now Shannon's capacity formula for a binary symmetric (memoryless) channel with bit error rate $p$ is $$C = 1 + p\cdot\log_2(p) + (1-p)\cdot\log_2(1-p) = 0.53100440641071 ~~ \text{when}~p=0.1,$$ so a rate-$\frac{1}{2}$ code is operating very close to capacity. Thus it is not too surprising that such a code is not working too well, and switching from a Reed-Solomon code to an LDPC code is not going to help matters very much as long as the code rate is fixed at $\frac{1}{2}$. You may need to redesign the packet structure to reduce the code rate and/or improve SNR on the physical layer channel to reduce the error rate to something more easily handled by coding.
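The capacity numbers can be computed directly. As a check (a small sketch; recall that the capacity of a binary symmetric channel is $C = 1 - H(p)$, where $H$ is the binary entropy function):

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.1
H = binary_entropy(p)   # 0.46899... bits lost to noise per channel use
C = 1 - H               # BSC capacity: 0.53100... information bits per use
print(H, C)
# A rate-1/2 code sits just below capacity 0.531, so only near-capacity
# codes (e.g. long LDPC codes) can make rate 1/2 work at p = 0.1 at all.
```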
{ "domain": "dsp.stackexchange", "id": 514, "tags": "reed-solomon, ecc" }
Is there enough evidence in this paper that there is a second (mechanical) layer of information in DNA?
Question: The following commentator writes: Mechanical cues Since the mid 80s it has been hypothesized that there is a second layer of information on top of the genetic code: DNA’s mechanical properties. Each of our cells contains two meters of DNA molecules, so these molecules need to be wrapped up tightly to fit inside a single cell. The way in which DNA is folded determines how the letters are read out, and therefore which proteins are actually made. In each organ, only relevant parts of the genetic information are read, based on how the DNA is folded. The theory goes that mechanical cues within the DNA structures determine how DNA prefers to fold. My question is: Is there enough evidence in this paper that there is a second (mechanical) layer of information in DNA? Answer: Biologists already know that transcription can be regulated by winding or unwinding DNA. (This paper might be informative: http://www.cell.com/trends/parasitology/fulltext/S1471-4922(16)30226-4) But this is hardly the only method of gene regulation.
{ "domain": "biology.stackexchange", "id": 8202, "tags": "dna" }
Understanding gravity
Question: Forgive me if my question has an obvious answer, but I need to know the answer. I always thought that more massive/energetic objects had a stronger force of gravity than less massive ones; that is, the Sun would create a stronger gravitational pull than the Earth does. Upon looking into Newton's law of universal gravitation, I began to wonder whether I misunderstood this. Does the gravity between the Sun and Earth have an equal force on both? As in, does the Earth pull the Sun with as much force as the Sun does on Earth? If so, would the Sun just resist accelerating toward Earth because it is extremely massive? Mass is just an object's resistance to being accelerated by a force, right? If this is the case, would it apply to GR? Sorry for hypothesizing, I'm honestly trying to get a grasp on it. Answer: The magnitude of the force of gravity between two bodies is proportional to the product of their masses: $$F=G\frac{m_1m_2}{r^2}$$ This doesn't change depending on which body you're applying the force to, i.e. if you interchange the masses. The magnitude is the same. What does change is the direction of the force. Force is a vector quantity, denoted as $\vec{F}$ or $\mathbf{F}$. If we write the equation for gravity using proper vector notation, we have $$\mathbf{F}=G\frac{m_1m_2}{|\mathbf{r}_1-\mathbf{r_2}|^2}\frac{\mathbf{r_1}-\mathbf{r_2}}{|\mathbf{r_1}-\mathbf{r_2}|}$$ Here, the positions of the objects are represented by vectors, $\mathbf{r}_1$ and $\mathbf{r_2}$. Additionally, $|\mathbf{x}|$ denotes the norm of a vector $\mathbf{x}$ - its magnitude. Now, if you interchange the masses, the direction of the force changes, although $|\mathbf{r_1}-\mathbf{r_2}|=|\mathbf{r_2}-\mathbf{r_1}|$, because this refers to the magnitude of the vectors. So the force applied on one object is the opposite of the force applied on the other object. This is Newton's third law. The acceleration is more interesting.
The force on object $1$ due to gravity is $$F_1=m_1g_1$$ Here, $$g_1=\frac{Gm_2}{r^2}$$ where $m_2$ is the other mass. This should tell you that $g_1\neq g_2$, except when $m_1=m_2$. I'm not an expert in general relativity, but I do know that it describes how spacetime curves due to the presence of one body. The solution to the Einstein Field Equations, the metric, is different for different bodies, because one input to those equations, the stress-energy tensor, is different for objects of different mass/energy/etc.
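Newton's third law and the unequal accelerations can be made concrete with rough Sun-Earth numbers (a sketch; the constants are standard approximate values):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_sun = 1.989e30       # kg
m_earth = 5.972e24     # kg
r = 1.496e11           # m (1 AU)

F = G * m_sun * m_earth / r**2    # same magnitude on BOTH bodies (~3.5e22 N)
a_earth = F / m_earth             # ~5.9e-3 m/s^2
a_sun = F / m_sun                 # ~1.8e-8 m/s^2: tiny, the Sun barely moves
print(F, a_earth, a_sun)
# The acceleration ratio is exactly the inverse mass ratio: a_e/a_s = m_s/m_e
```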
{ "domain": "astronomy.stackexchange", "id": 1249, "tags": "gravity, general-relativity, newtonian-gravity" }
Sign of $dr$ in Schwarzschild geodesics
Question: There is an equation that relates energy $E$, angular momentum $L$ and other constants and variables to find $\left(\frac{dr}{d\tau}\right)^2$ in a plane. $$\left(\frac{dr}{d\tau}\right)^2=\frac{E^2}{m^2c^2}-\left(1-\frac{2GM}{c^2r}\right)\left(c^2+\frac{L^2}{m^2r^2}\right)$$ So, $\frac{dr}{d\tau}$ is: $$\frac{dr}{d\tau}=\pm\sqrt{\frac{E^2}{m^2c^2}-\left(1-\frac{2GM}{c^2r}\right)\left(c^2+\frac{L^2}{m^2r^2}\right)}$$ So, which sign should I use? Is there a method to find which sign is to be used? Perhaps, if $|d\phi|$ is increasing, then $r$ is decreasing and $dr$ is negative, and vice versa. But I don't know how true this is, especially near the event horizon. Answer: To determine a geodesic you have to fix its initial point and its initial tangent vector, all at $t=0$. We can always assume $\theta_0 = \pi/2$, whereas $\phi_0$ and $r_0$ are arbitrary. Surely $d\theta/dt|_0=0$, $dt/dt|_0=1$, while $d\phi/dt|_0= \dot{\phi}_0$ and $dr/dt|_{0}= \dot{r}_0$ are arbitrary. Using these values you determine $L^2$ and $E^2$, which, being constants, can be determined from data at $t=0$. So $E^2$ and $L^2$ are known. Therefore, up to the sign, the right-hand side of $$\frac{dr}{d\tau}=\pm\sqrt{\frac{E^2}{m^2c^2}-\left(1-\frac{2GM}{c^2r}\right)\left(c^2+\frac{L^2}{m^2r^2}\right)}\qquad (1)$$ is already known once you fix the initial conditions. Unless $\dot{r}_0=0$, that number has a sign. This is the sign you have to choose in (1), since, by continuity (the functions considered are at least $C^2$), the sign does not change around $t=0$. If you assume $\dot{r}_0=0$, the sign is determined by the other initial conditions through a more careful analysis.
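The borderline case $\dot{r}_0=0$ is exactly what happens on a circular orbit, where the radicand vanishes and stays non-positive nearby, so neither sign ever gets used. A sketch in geometric units ($G=c=M=1$, quantities per unit rest mass), using the standard Schwarzschild circular-orbit constants $E^2=(r-2)^2/\big(r(r-3)\big)$ and $L^2=r^2/(r-3)$:

```python
def radicand(r, E2, L2):
    """(dr/dtau)^2 in units G = c = M = 1, per unit rest mass."""
    return E2 - (1 - 2 / r) * (1 + L2 / r**2)

r0 = 10.0                             # a stable circular orbit radius
L2 = r0**2 / (r0 - 3)                 # circular-orbit angular momentum squared
E2 = (r0 - 2)**2 / (r0 * (r0 - 3))    # circular-orbit energy squared

print(radicand(r0, E2, L2))   # 0: dr/dtau = 0, the sign choice is moot
print(radicand(9.9, E2, L2))  # negative: nearby radii are forbidden,
                              # so the orbit stays at r0
```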
{ "domain": "physics.stackexchange", "id": 11356, "tags": "homework-and-exercises, general-relativity, differential-geometry, energy-conservation, geodesics" }
Probability of generating a desired permutation by random swaps
Question: I'm interested in the following problem. We're given as input a "target permutation" $\sigma\in S_n$, as well as an ordered list of indices $i_1,\ldots,i_m\in [n-1]$. Then, starting with the list $L=(1,2,\ldots,n)$ (i.e., the identity permutation), at each time step $t\in [m]$ we swap the $i_t^{th}$ element in $L$ with the $(i_t+1)^{st}$ element, with independent probability $1/2$. Let $p$ be the probability that $\sigma$ is produced as output. I'd like to know (any of) the following: Is deciding whether $p>0$ an $NP$-complete problem? Is calculating $p$ exactly $\#P$-complete? What can we say about approximating $p$ to within a multiplicative constant? Is there a PTAS for this? The variant where the swaps don't need to be of adjacent elements is also of interest. Note that it's not hard to reduce this problem to edge-disjoint paths (or to integer-valued multicommodity flow); what I don't know is a reduction in the other direction. Update: OK, checking Garey & Johnson, their problem [MS6] ("Permutation Generation") is as follows. Given as input a target permutation $\sigma\in S_n$, together with subsets $S_1,\ldots,S_m\subseteq [n]$, decide whether $\sigma$ is expressible as a product $\tau_1 \cdots \tau_m$, where each $\tau_i$ acts trivially on all indices not in $S_i$. Garey, Johnson, Miller, and Papadimitriou (behind a paywall, unfortunately) prove that this problem is $NP$-hard. If the swaps don't need to be adjacent, then I believe this implies that deciding whether $p>0$ is also $NP$-hard. The reduction is simply this: for each $S_1,S_2,\ldots$ in order, we'll offer a set of "candidate swaps" that corresponds to a complete sorting network on $S_i$ (i.e., capable of permuting $S_i$ arbitrarily, while acting trivially on everything else). Then $\sigma$ will be expressible as $\tau_1 \cdots \tau_m$, if and only if it's reachable as a product of these swaps. This still leaves open the "original" version (where the swaps are of adjacent elements only).
For the counting version (with arbitrary swaps), it of course strongly suggests that the problem should be $\#P$-complete. In any case, it rules out a PTAS unless $P=NP$. Answer: I think that whether p>0 can be decided in polynomial time. The problem in question can be easily cast as the edge-disjoint paths problem, where the underlying graph is a planar graph consisting of m+1 layers each of which contains n vertices, plus m degree-4 vertices to represent the possible adjacent swaps. Note that the planarity of this graph follows from the fact that we allow only adjacent swaps. If I am not mistaken, this falls in the special case of the edge-disjoint paths problem solved by Okamura and Seymour [OS81]. In addition, Wagner and Weihe [WW95] give a linear-time algorithm for this case. See also Goemans’ lecture notes [Goe12], which gives a nice exposition of the Okamura–Seymour theorem and the Wagner–Weihe algorithm. References [Goe12] Michel X. Goemans. Lecture notes, 18.438 Advanced Combinatorial Optimization, Lecture 23. Massachusetts Institute of Technology, Spring 2012. http://math.mit.edu/~goemans/18438S12/lec23.pdf [OS81] Haruko Okamura and Paul D. Seymour. Multicommodity flows in planar graphs. Journal of Combinatorial Theory, Series B, 31(1):75–81, August 1981. http://dx.doi.org/10.1016/S0095-8956(81)80012-3 [WW95] Dorothea Wagner and Karsten Weihe. A linear-time algorithm for edge-disjoint paths in planar graphs. Combinatorica, 15(1):135–150, March 1995. http://dx.doi.org/10.1007/BF01294465
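For small instances, both deciding $p>0$ and computing $p$ exactly can be done by brute force over the $2^m$ subsets of swaps; a sketch (exponential in $m$, useful only for sanity-checking the definitions):

```python
from itertools import product
from fractions import Fraction

def swap_outcomes(n, swaps):
    """Map each reachable permutation of (1..n) to its exact probability,
    with each adjacent swap applied independently with probability 1/2."""
    probs = {}
    for flips in product([0, 1], repeat=len(swaps)):
        perm = list(range(1, n + 1))
        for take, i in zip(flips, swaps):
            if take:  # swap positions i and i+1 (1-based index i)
                perm[i - 1], perm[i] = perm[i], perm[i - 1]
        key = tuple(perm)
        probs[key] = probs.get(key, Fraction(0)) + Fraction(1, 2 ** len(swaps))
    return probs

# Candidate swaps [1, 2, 1] form a sorting network for n = 3, so every
# permutation sigma has p > 0 here.
probs = swap_outcomes(3, [1, 2, 1])
print(probs[(3, 2, 1)])   # 1/8: only flips (1,1,1) produce the reversal
print(probs[(1, 2, 3)])   # 1/4: flips (0,0,0) and (1,0,1) both fix L
```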
{ "domain": "cstheory.stackexchange", "id": 2827, "tags": "np-hardness, counting-complexity, permutations, sorting-network, disjoint-paths" }
Good explanatory resource for algorithm techniques such as greedy, backtracking and recursive functions
Question: What would be a good book/resource that explains the basic idea behind those techniques, how to use them (and maybe when to use them), with plenty of exercises and perhaps some worked examples (kind of like when for loops were introduced for the first time and you were asked to use one to compute sums of odd numbers, even numbers, etc.)? If the explanation is both formal and "plain" (for-dummies-style explanation), that would be great. Thanks! Answer: The Coursera course on Algorithms currently running suggests four books: CLRS - Cormen, Leiserson, Rivest, and Stein, Introduction to Algorithms (3rd edition) DPV - Dasgupta, Papadimitriou, and Vazirani, Algorithms KT - Kleinberg and Tardos, Algorithm Design SW - Sedgewick and Wayne, Algorithms (4th edition) Sedgewick and Wayne is apparently very implementation-oriented, with tons of Java examples as well as testing metrics and lots of models. I haven't checked it out myself, but I will be. Another great book is the Algorithm Design Manual by Skiena, apparently also very programmer-friendly.
{ "domain": "cs.stackexchange", "id": 888, "tags": "algorithms" }
How does the Hutchinson Effect work?
Question: I have seen pictures online of metal ripped apart and metal completely messed up due to the Hutchinson Effect. How does this effect work and what are the other principles behind it? Answer: It isn't real. source 1 source 2 "One suggestion made by skeptics is that Hutchison uses an electromagnet on the ceiling, and places hidden pieces of metal inside objects so they will be attracted to the magnet. He could then film the objects with an upside-down camera as he powers down the electromagnet, making the objects on film appear to float up and out of the shot when in reality they are falling down to the floor. Many of the videos include conspicuous objects in the scene which do not move (such as an old broom), which could be deliberately attached to add to the illusion that the camera is not upside-down. Critics also point out that the videos do not show what happens to the objects after they levitate." It is supposed to use zero-point energy, which is real, but it doesn't.
{ "domain": "physics.stackexchange", "id": 20984, "tags": "electromagnetism, metals" }
DRCSIM: Atlas URDF glitches
Question: Torque limits on leg_uhz are not the same: 110 vs 260 /usr/share/drcsim-2.3/ros/atlas_description/urdf/atlas.urdf <joint name="l_leg_uhz" type="revolute"> <origin xyz="0 0.089 0" rpy="0 -0 0" /> <axis xyz="0 0 1" /> <parent link="pelvis" /> <child link="l_uglut" /> <dynamics damping="0.1" friction="0" /> <limit effort="110" velocity="12" lower="-0.32" upper="1.14" /> <joint name="r_leg_uhz" type="revolute"> <origin xyz="0 -0.089 0" rpy="0 -0 0" /> <axis xyz="0 0 1" /> <parent link="pelvis" /> <child link="r_uglut" /> <dynamics damping="0.1" friction="0" /> <limit effort="260" velocity="12" lower="-1.14" upper="0.32" /> As long as we are cleaning up the model: l_lglut:ixy and r_lglut:ixy should have opposite signs, same for lglut:iyz, uglut:ixy, uglut:iyz, lleg:ixz, hand:ixz These links have no inertial fields, so the default mass and moi need to be zero. link name="right_palm_left_camera_optical_frame" link name="right_palm_right_camera_optical_frame" link name="left_palm_left_camera_optical_frame" link name="left_palm_right_camera_optical_frame" link name="left_camera_optical_frame" link name="right_camera_optical_frame" Originally posted by cga on Gazebo Answers with karma: 223 on 2013-04-10 Post score: 0 Answer: For questions 1 and 2, there's a pull request. Some of the moi signs had already been fixed, but a couple had not. Thanks for finding those! For question 3, those frames just don't factor into simulation. They have no mass, inertia, or other physical properties. Originally posted by gerkey with karma: 1414 on 2013-04-10 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Yohei Kakiuchi on 2013-05-19: These problems have been fixed in the source files. But atlas.urdf and atlas_sandia_hands.urdf in the debian package (drcsim2.6.1) still remain, e.g. effort_limit of r_leg_uhz is 260.
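Mismatched left/right limits like the one reported above can be caught mechanically. Below is a hypothetical checker sketch (plain ElementTree, not part of drcsim; the joint names follow the `l_*`/`r_*` convention seen in the URDF excerpt):

```python
import xml.etree.ElementTree as ET

def mirrored_limit_mismatches(urdf_xml):
    """Return (joint, l_effort, r_effort) triples where an l_*/r_* joint
    pair declares different effort limits."""
    root = ET.fromstring(urdf_xml)
    efforts = {}
    for joint in root.iter("joint"):
        limit = joint.find("limit")
        if limit is not None:
            efforts[joint.get("name")] = float(limit.get("effort"))
    bad = []
    for name, effort in efforts.items():
        if name.startswith("l_"):
            twin = "r_" + name[2:]
            if twin in efforts and efforts[twin] != effort:
                bad.append((name[2:], effort, efforts[twin]))
    return bad

# Toy URDF reproducing the reported asymmetry (not the full atlas.urdf):
urdf = """<robot name="atlas">
  <joint name="l_leg_uhz" type="revolute"><limit effort="110" velocity="12"/></joint>
  <joint name="r_leg_uhz" type="revolute"><limit effort="260" velocity="12"/></joint>
  <joint name="l_leg_mhx" type="revolute"><limit effort="180" velocity="12"/></joint>
  <joint name="r_leg_mhx" type="revolute"><limit effort="180" velocity="12"/></joint>
</robot>"""
print(mirrored_limit_mismatches(urdf))   # [('leg_uhz', 110.0, 260.0)]
```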
{ "domain": "robotics.stackexchange", "id": 3203, "tags": "gazebo-model" }
Why when you vertically shake a pendulum very fast it goes upward?
Question: So I put a pendulum on a vibrating machine. When I turned the machine on, the pendulum went up instead of down. I thought it would go down because of friction (not only friction, there are other effects involved). I'm very confused, so could you explain why this happened? Answer: This is Kapitza's pendulum. It is not surprising that the pendulum is all over the place when one pumps the system with an external force. The surprise is instead that the inverted position is a stable equilibrium for the effective potential of the pendulum. For details, see the Wikipedia page.
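The stabilization can be reproduced numerically. For a pivot oscillating vertically as $a\cos\omega t$, the angle $\varphi$ measured from the inverted position obeys $\ddot\varphi = \big((g - a\omega^2\cos\omega t)/l\big)\sin\varphi$, and the inverted position is stable when $a^2\omega^2 > 2gl$. A sketch with parameters chosen (arbitrarily) to satisfy that condition:

```python
import math

g, l = 9.81, 1.0      # gravity (m/s^2), pendulum length (m)
a, w = 0.05, 100.0    # pivot amplitude (m) and drive frequency (rad/s)
assert (a * w) ** 2 > 2 * g * l   # Kapitza stability condition holds

# phi is measured from the upright (inverted) position; start tilted 0.1 rad
phi, phidot, t, dt = 0.1, 0.0, 0.0, 1e-4
max_phi = 0.0
while t < 5.0:
    acc = (g - a * w * w * math.cos(w * t)) / l * math.sin(phi)
    phidot += acc * dt            # semi-implicit Euler step
    phi += phidot * dt
    max_phi = max(max_phi, abs(phi))
    t += dt
print(max_phi)   # stays small: the pendulum wobbles around upright
```

Without the drive (a = 0) the same initial tilt would grow and the pendulum would fall over.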
{ "domain": "physics.stackexchange", "id": 77874, "tags": "newtonian-mechanics, lagrangian-formalism, oscillators" }
Elastic Collision with objects of the same mass
Question: I understand this is a homework problem. I don't expect anyone to do my homework for me, but help me understand what I am doing wrong. The problem goes: Two shuffleboard disks of equal mass, one orange and the other green, are involved in a perfectly elastic glancing collision. The green disk is initially at rest and is struck by the orange disk moving initially to the right at $v_{oi}$ = 5.30 m/s as in Figure (a) shown below. After the collision, the orange disk moves in a direction that makes an angle of θ = 35.0° with the horizontal axis, while the green disk makes an angle of φ = 55.0° with this axis as in figure (b). Determine the speed of the green disk if the final velocity of the orange disk is 4.34 m/s. First I draw the collision out, with the angles and everything. This helps me visualize what is happening. Now I separate the elastic collision into x & y components: since the masses are all the same, does that mean the masses cancel out in the elastic collision equation? If so, these are the equations. My x equation: 5.30m/s + 0 = 4.34m/s*cos(35) + Vgf, giving Vgf(x) = -1.3513 m/s; y equation: 0 + 0 = 5.30*sin(35)+Vgf, giving Vgf(y) = 2.6412. Then I do sqrt( (Vgf(x))^2 + (Vgf(y))^2) = 2.64 m/s (which is the incorrect velocity of the green disk). My only idea as to why this is incorrect is the mass part of the elastic collision equation. I don't know how to use the collision equations without the masses given. What do I do? Answer: First of all, check your arithmetic on the X equation. You should get Vg(x) = +1.74 m/s. Second, in the Y equation you should use the final orange speed 4.34 instead of 5.3 on the right hand side. You should get Vg(y) = -2.48 or so.
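Putting the answer's two corrections together gives the green disk's speed, and the result self-checks both ways: the direction comes out at about 55° below the axis (matching the figure), and kinetic energy is conserved (for equal masses and an elastic glancing collision, $v_{oi}^2 = v_{of}^2 + v_g^2$). A quick numerical check:

```python
import math

v_oi, v_of = 5.30, 4.34            # orange: initial and final speeds (m/s)
th_o = math.radians(35.0)          # orange deflection angle

# momentum conservation, x and y, with equal masses cancelled:
vg_x = v_oi - v_of * math.cos(th_o)     # +1.745 m/s (note the + sign)
vg_y = -v_of * math.sin(th_o)           # -2.489 m/s (use 4.34, not 5.30)
v_g = math.hypot(vg_x, vg_y)            # ~3.04 m/s

angle = math.degrees(math.atan2(-vg_y, vg_x))          # ~55 deg below axis
ke_ok = abs(v_oi**2 - (v_of**2 + v_g**2)) < 0.02       # elastic, equal masses
print(v_g, angle, ke_ok)
```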
{ "domain": "physics.stackexchange", "id": 2056, "tags": "homework-and-exercises, collision" }
Custom encryption
Question: I made a custom encryption library/class which lets you encrypt/decrypt Strings with a custom charset. I think you'll get what I mean by that when you take a look at the code. Since I don't have much cryptography knowledge, I'd like to ask if someone who knows about it could tell me if this is "safe". I know nothing is 100% safe, but I'd like to know how safe it is, and maybe get some improvements. I currently have two programs running on a server, which are trying to brute-force it. After nearly 2 months now, nothing has been found yet, so I guess it's working at least a bit. PreCrypt.java package net.prefixaut.prelib.crypt; import java.security.SecureRandom; import java.util.List; import java.util.ArrayList; import java.math.BigInteger; public class PreCrypt { // Remove Constructor private PreCrypt() {} /** * Default Charset which contains only default Characters.<br/> * Supports: a-z, A-Z, 0-9 and most special-characters. <b>Current amount of supported Characters: 92</b> */ public static final String defaultCharset = "abcdefghijklmopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ#?!\\§$%&/{[(=+-~*_.:,;µ@€<>|)]}"; /** * Complete Charset which contains most relevant Characters<br/> * <b>Current amount of supported Characters: 102</b> */ public static final String completeCharset = defaultCharset + "\n\t\r" + // Extra Space Chars "üäöÜÄÖ"; // German Extra Chars public static char countUp(char c, int amount) { return PreCrypt.countUp(c, amount, defaultCharset); } public static char countUp(char c, int amount, String str) { return PreCrypt.countUp(c, amount, str.toCharArray()); } public static char countUp(char c, int amount, char[] charset) { boolean set = false; for (int i = 0; i < amount; i++) { for (int o = 0; o < charset.length; o++) { if (c == charset[o]) { if (o + 1 >= charset.length) o = 0; else o++; c = charset[o]; set = true; break; } } if (!set) c++; else set = false; } return c; } public static char countDown(char c, int amount) { return
        PreCrypt.countDown(c, amount, defaultCharset);
    }

    public static char countDown(char c, int amount, String str) {
        return PreCrypt.countDown(c, amount, str.toCharArray());
    }

    public static char countDown(char c, int amount, char[] charset) {
        boolean set = false;
        for (int i = 0; i < amount; i++) {
            for (int o = 0; o < charset.length; o++) {
                if (c == charset[o]) {
                    if (o - 1 < 0)
                        o = charset.length - 1;
                    else
                        o--;
                    c = charset[o];
                    set = true;
                    break;
                }
            }
            if (!set)
                c--;
            else
                set = false;
        }
        return c;
    }

    public static String encrypt(String code, String key) {
        return PreCrypt.encrypt(code, key, PreCrypt.defaultCharset);
    }

    public static String encrypt(String code, String key, String charset) {
        return PreCrypt.encrypt(code, key, charset.toCharArray());
    }

    public static String encrypt(String code, String key, char[] charset) {
        String r = "";
        r = PreCrypt.count(code, key, charset, (boolean foo, int number, char c) -> {
            if (foo)
                c = PreCrypt.countUp(c, number, charset);
            else
                c = PreCrypt.countDown(c, number, charset);
            return c;
        });
        return r;
    }

    public static String decrypt(String code, String key) {
        return PreCrypt.decrypt(code, key, PreCrypt.defaultCharset);
    }

    public static String decrypt(String code, String key, String charset) {
        return PreCrypt.decrypt(code, key, charset.toCharArray());
    }

    public static String decrypt(String code, String key, char[] charset) {
        String r = "";
        r = PreCrypt.count(code, key, charset, (boolean foo, int number, char c) -> {
            if (foo)
                c = PreCrypt.countDown(c, number, charset);
            else
                c = PreCrypt.countUp(c, number, charset);
            return c;
        });
        return r;
    }

    @FunctionalInterface
    private interface Handler {
        public char onCode(boolean foo, int number, char c);
    }

    private static String count(String code, String key, char[] charset, Handler handler) {
        String r = "";
        List<Integer> keys = new ArrayList<Integer>();
        BigInteger keySum = new BigInteger(new byte[] { 0 });
        // Load all Key-Values into KEYS, and summarize them in KEYSUM
        for (int i = 0; i < key.length(); i++) {
            int v = (int) key.charAt(i);
            keys.add(v);
            keySum = keySum.add(new BigInteger("" + v));
        }
        List<Integer> temp = new ArrayList<Integer>();
        for (int i = 0; i < code.length(); i++) {
            char c = code.charAt(i);
            int kIndex = i; // Get current Index
            while (kIndex > keys.size() - 1)
                kIndex -= keys.size() - 1;
            // Load from Index till end, and from 0 till Index into TEMP
            for (int o = kIndex; o < keys.size(); o++)
                temp.add(keys.get(o));
            for (int o = 0; o < kIndex; o++)
                temp.add(keys.get(o));
            // Encode
            for (int o = 0; o < temp.size(); o++) {
                int count = (temp.get(o) * (o + 1)) / (keySum.intValue() / key.length()) + keySum.intValue();
                count = count >> o + 1;
                c = handler.onCode((temp.get(o) * (o + 1)) % 2 == 0, count, c);
            }
            r += c;
            temp.clear();
        }
        return r;
    }

    /**
     * Generates a Random String with the length of length, from the {@link PreCrypt#defaultCharset default-Charset}
     *
     * @param length
     *            Length of the random String
     */
    public static String generateRandomString(int length) {
        return PreCrypt.generateRandomString(length, PreCrypt.defaultCharset);
    }

    /**
     * Generates a Random String with the length of the given length, from the given charset
     *
     * @param length
     *            Length of the random String
     * @param charset
     *            Charset which contains all Characters which are allowed for the random String
     */
    public static String generateRandomString(int length, CharSequence charset) {
        return PreCrypt.generateRandomString(length, (String) charset);
    }

    /**
     * Generates a Random String with the length of the given length, from the given charset
     *
     * @param length
     *            Length of the random String
     * @param charset
     *            Charset which contains all Characters which are allowed for the random String
     */
    public static String generateRandomString(int length, String charset) {
        return PreCrypt.generateRandomString(length, charset.toCharArray());
    }

    /**
     * Generates a Random String with the length of the given length, from the given charset
     *
     * @param length
     *            Length of the random String
     * @param charset
     *            Charset which contains all Characters which are allowed for the random String
     */
    public static String generateRandomString(int length, char[] charset) {
        String re = "";
        SecureRandom r = new SecureRandom();
        for (int i = 0; i < length; i++)
            re += charset[r.nextInt(charset.length)];
        return re;
    }
}

Answer: I don't think I can give you a very good answer, but here goes:

If the span of (temp.get(o) * (o + 1)) (for use in the evenness test) is not itself even, then your cyphertext is going to be biased. In other words, if the total span of numbers covered by that function is { 0, 1, 2, ... 7, 8 }, then { 0, 2, 4, 6, 8 } will output 0, and { 1, 3, 5, 7 } will output 1. Notice that there is one less member in the one-output camp than there is in the zero-output camp. This means that, when encrypting, there will be more countUp's than countDown's, which is a statistical weakness.

How is SecureRandom being used? In the main loop of the count function, I see no usage of the result variable or any kind of persistent state. If you're not carrying state between rounds of the main loop, then your cypher is operating in electronic codebook mode, which is not a good thing. How are you chaining your rounds?

Even if you address my concerns, I'm no cryptologist, so I would be very surprised if we can come up with a robust crypto system on this forum. I would strongly encourage you to look into one of the many libraries that provide AES, or a modern stream cypher like Spritz or HC-128 if performance is a constraint.
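The even/odd bias the answer describes can be checked by counting: over any odd-sized range of integers starting at an even number, the even values outnumber the odd ones by one. A small sketch (class and method names are mine, purely for illustration):

```java
// Counts how often (value % 2 == 0) holds over an inclusive span of integers,
// illustrating the bias the answer describes for a span like { 0 .. 8 }:
// a keyed decision based on parity is then not a 50/50 coin flip.
public class ParityBias {
    // Returns how many values in [lo, hi] are even.
    static int evens(int lo, int hi) {
        int n = 0;
        for (int v = lo; v <= hi; v++) {
            if (v % 2 == 0) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // Span { 0, 1, ..., 8 } from the answer: 5 evens vs 4 odds,
        // so countUp would be chosen more often than countDown.
        System.out.println("evens in [0,8]: " + evens(0, 8));
        // An even-sized span such as { 0 .. 7 } is balanced: 4 vs 4.
        System.out.println("evens in [0,7]: " + evens(0, 7));
    }
}
```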
{ "domain": "codereview.stackexchange", "id": 17001, "tags": "java, cryptography" }
Is it possible to run a simulated omni-wheel robot without plugins?
Question: Hi there, I am wondering if it is possible to simulate an omni-wheel robot without moving it with the Gazebo plugin (Planar Move Plugin). If so, can someone help me make it? I would appreciate it, thank you!

Originally posted by al_ca on ROS Answers with karma: 79 on 2020-10-30

Post score: 0

Answer: Yes, you can if you are thinking of normal wheels, but with mecanum/omni wheels it gets complicated with all the rollers and friction, so your result would not be better than the planar mover.

Originally posted by duck-development with karma: 1999 on 2020-11-02

This answer was ACCEPTED on the original site

Post score: 0

Original comments

Comment by al_ca on 2020-11-02: In my case, I want to build my robot in a simulation in order to test some things there, not in the real one, but to have almost the same results as if I did it in the real one... is it achievable, or is it not worth it?

Comment by duck-development on 2020-11-03: What kind of kinematics would you like to use? If you have something like http://www.karispro.de/ then it should work; if you have something like this one https://www.youtube.com/watch?v=rIiF0I7LT-g you will not have benefits compared to the planar mover.

Comment by al_ca on 2020-11-03: It's something like this

Comment by duck-development on 2020-11-03: If it is the three-Swedish-wheel design, then it is in the same class as mecanum and you can use the planar mover. You can look at https://github.com/GuiRitter/OpenBase; maybe it helps.

Comment by al_ca on 2020-11-03: Basically, I have been trying to do the same as he does for around two weeks, and it's difficult to implement; many problems in the movement were appearing... maybe I am doing something wrong, I don't know, because I am relatively new to this environment. Nevertheless, I will move on with the planar move plugin, so thank you for the time and help! If you have something to add, I am waiting.
{ "domain": "robotics.stackexchange", "id": 35695, "tags": "ros, gazebo, urdf, ros-melodic" }
Gene pool simulation
Question: In this simulation, you sampled the gene pool without replacing beads in the beaker after you drew each one. Thus, f(A) and f(a) in the gene pool changed slightly after each bead was drawn. For example, if you begin with 50 light and 50 dark beads, the probability of drawing a dark bead the first time is 50/100 = 0.5. The beaker would then contain 49 dark beads and 50 light beads, so the probability of drawing a second dark bead becomes 49/99 = 0.495. Does this make your simulation slightly less realistic? In small natural populations, does one mating change the gene pool available for the next mating, or not? What biological factors must be considered in answering this question?

Answer: I am not sure I'm answering your question, but I hope this will help. There are two main models of genetic drift in biology: the Moran model and the Wright-Fisher model.

The Wright-Fisher model implies picking $N$ beads (where $N$ is the population size) with replacement in order to form the new population. Therefore, the number of copies $k$ of an allele in the next generation (and hence the change in allele frequency due to genetic drift) follows a binomial distribution:

$$\binom{2N}{k} p^k q^{2N-k} = \frac{(2N)!\; p^k q^{2N-k}}{k!\,(2N-k)!}$$

The Moran model consists of the following: at each time step you draw two beads. One disappears from the population and the other replicates in order to keep a stable population size. The two models yield similar results; drift under the Moran model runs about twice as fast as under the Wright-Fisher model.

I don't fully understand your model. You say that in your model you pick one bead at a time. What do you do with this bead? How do you construct your population at the next time step? Do you allow changes in population size? Can you please try to explain how you defined your model? If it is a computer simulation, you might paste the important part of your code; that might help.

Haldane and Kimura both worked a lot on these models of genetic drift.
For example, they provide different ways of calculating the mean time to fixation of a given allele, given its selection coefficient $s$ and starting from a given initial frequency $p_0$. Also, some work has been done on genetic drift in populations where the population size changes over time, and some work has considered more or less complex probability density functions for the number of offspring per individual. This kind of literature is very interesting but sometimes demands quite good knowledge of mathematics (especially Kimura's articles). I once asked this question concerning the different models of genetic drift and their assumptions, but I haven't received an answer yet.
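The Wright-Fisher resampling the answer describes is easy to simulate directly. A minimal runnable sketch (class and method names are mine): the population carries 2N allele copies, and each generation redraws all of them with replacement, i.e. a Binomial(2N, p) draw.

```java
import java.util.Random;

// Minimal Wright-Fisher drift sketch: a population of N diploids carries
// 2N allele copies; each generation resamples all 2N copies with
// replacement, so the new copy number of the focal allele is Binomial(2N, p).
public class WrightFisher {
    // One generation: returns the new count of the focal allele.
    static int step(int copies, int twoN, Random rng) {
        double p = (double) copies / twoN;
        int next = 0;
        for (int i = 0; i < twoN; i++) {
            if (rng.nextDouble() < p) next++;
        }
        return next;
    }

    // Runs until fixation (copies == 2N) or loss (copies == 0).
    static int runToFixation(int copies, int twoN, Random rng) {
        while (copies > 0 && copies < twoN) {
            copies = step(copies, twoN, rng);
        }
        return copies;
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so the run is repeatable
        int end = runToFixation(50, 100, rng);
        System.out.println(end == 0 ? "allele lost" : "allele fixed");
    }
}
```

Starting from p = 0.5 with no selection, either outcome is equally likely; drift alone always ends in loss or fixation.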
{ "domain": "biology.stackexchange", "id": 1858, "tags": "genetics, homework, biostatistics" }
Energy lost due to stability
Question: We used to say that when an atom is stable it is in its minimum energy state. Every shell has its fixed energy. So where does the energy that an atom loses to achieve stability come from? Do electrons lose energy? If yes, will the loss of energy have any effect on the speed of the electron revolving around the nucleus? Thank you. Sorry for the English.

Answer: Let us rather call them the atom's ground state and the atom's excited state. Some excited states can be kinetically quite stable, like atomized hydrogen in interstellar space with mutually parallel (excited state) and antiparallel (ground state) proton and electron spins. This spin flipping is the source of the famous 21 cm hydrogen spectral line, used as the length scale on the Pioneer spacecraft plaque.

Excited atomic states have higher electron energy than ground states. This energy consists of kinetic and potential energy. The energy exchange with the rest of the universe during excited/ground state switching happens via absorption/emission of photons, or via collisions/interactions with other molecules, atoms or subatomic particles.

If we consider a classical central force like gravity or electrostatics, we know that lower energy means higher mean speed, as the decrease in potential energy is twice as large as the increase in kinetic energy. In the quantum realm, we cannot speak about a particular orbit, momentary values of kinetic or potential energy, or the momentary position, velocity or speed. We can speak about the electron's energy, its orbital and spin angular momentum, and the related probability distribution of where the electron occurs.
{ "domain": "chemistry.stackexchange", "id": 13893, "tags": "stability" }
How can acceleration be maximum in a spring oscillator when velocity is zero?
Question: Since acceleration is equal to $\Delta v / \Delta t$, and velocity is equal to zero, wouldn't acceleration also be zero? I have seen a response talking about differential equations, but how can the above basic formula be ignored?

Answer:

Since acceleration is equal to $\Delta v / \Delta t$ and velocity is equal to zero, wouldn't acceleration also be zero?

This is an often-asked question in other contexts, e.g. an object thrown vertically upwards has no velocity at its greatest height; does that mean it will stay there forever?

Acceleration is the rate of change of velocity with respect to time, $\Delta v / \Delta t$, not velocity divided by time, $v/t$. Zero acceleration implies zero change in velocity, which in turn would mean that if the velocity of a body is zero it stays at zero forever. With a non-zero constant value of acceleration, a body in successive intervals of time might have a velocity of $\ldots,\,+2,\,+1,\,0,\,-1,\,-2,\,\ldots \ \rm m/s$ in a particular direction. Notice that the body is slowing down, stops, and then starts moving in the opposite direction, i.e. its velocity is still changing even at the instant the velocity is zero.
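For the spring specifically, the point can be seen numerically: integrating $\ddot{x} = -\omega^2 x$ from a release at maximum displacement, the velocity passes through zero exactly where the acceleration $a = -\omega^2 x$ has its largest magnitude. A sketch (all names and values are mine, chosen for illustration):

```java
// Integrates x'' = -w^2 x (mass on a spring) with velocity Verlet and shows
// that at the turning points the velocity passes through zero while the
// acceleration a = -w^2 x has its largest magnitude.
public class SpringTurningPoint {
    static double acceleration(double w, double x) {
        return -w * w * x;
    }

    public static void main(String[] args) {
        double w = 2.0, x = 1.0, v = 0.0, dt = 1e-4; // released at rest, x = 1
        double maxAbsA = 0.0;
        double period = 2.0 * Math.PI / w;
        for (double t = 0.0; t < period; t += dt) {
            double a = acceleration(w, x);
            maxAbsA = Math.max(maxAbsA, Math.abs(a));
            v += 0.5 * dt * a;       // velocity Verlet half-kick
            x += dt * v;             // drift
            v += 0.5 * dt * acceleration(w, x); // second half-kick
        }
        // At the release point (x = 1, v = 0) the acceleration is already
        // -w^2 = -4: zero velocity, maximal acceleration.
        System.out.println("max |a| over one period = " + maxAbsA);
    }
}
```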
{ "domain": "physics.stackexchange", "id": 64912, "tags": "newtonian-mechanics, acceleration, harmonic-oscillator, spring" }
Gravitational force of attraction
Question: If I keep two objects where no other force than gravity acts between them, will they eventually stick together?

Answer: Yes, so long as they are not moving relative to each other initially. If they are moving tangentially to the line connecting them, then they are likely to orbit each other (or fly off in whatever directions they're moving in, with only a slight deflection).

You can even estimate the amount of time it will take for them to collide. If their initial separation is $r$ (much larger than the sizes of the objects themselves) and their total mass is $M$, then the amount of time to collision works out to be
$$ T = \sqrt{ \frac{\pi^2 r^3}{8GM}}. $$
(It's actually remarkably easy to prove this by taking the appropriate limit of Kepler's Third Law.) This works out to approximately 35 days for two 1-kg masses separated by 10 meters.
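As a numerical check, the quoted formula can be evaluated directly for two 1-kg bodies 10 m apart (so total mass M = 2 kg, with G = 6.674e-11 in SI units); it comes out to about 3.0e6 seconds, roughly 35 days:

```java
// Evaluates the free-fall collision time T = sqrt(pi^2 r^3 / (8 G M)),
// where M is the TOTAL mass of the two bodies.
public class FallTime {
    static final double G = 6.674e-11; // m^3 kg^-1 s^-2

    static double fallTimeSeconds(double r, double totalMass) {
        return Math.sqrt(Math.PI * Math.PI * r * r * r / (8.0 * G * totalMass));
    }

    public static void main(String[] args) {
        double seconds = fallTimeSeconds(10.0, 2.0); // two 1-kg masses, 10 m apart
        double days = seconds / 86400.0;
        System.out.printf("T = %.3g s = %.1f days%n", seconds, days);
    }
}
```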
{ "domain": "physics.stackexchange", "id": 80419, "tags": "newtonian-gravity" }
How do lenses capture reflections clearly?
Question: In an optical system which uses lenses (cameras, our eye), multiple rays from the same point are conveyed to a single point on the retina / image sensor. This is typically shown as in the image below. However, while this is valid for diffuse light, which is spread in all directions and is roughly of the same perceived color, I was wondering what happens with light which is reflected instead. Look at this image with a mirror: looking at the image, the bottom ray (yellow) reflected from the object is very different from the top ray reflection (purple), and these two will end up at the same point on the image sensor. This would lead to unclear/blurred reflections on the image sensor. Why then do we see reflections on a mirror clearly?

Answer: Assuming that the lens is achromatic (that its focal length is the same for all wavelengths), light from an object point more than one focal length away from the lens is focused to an image point on the far side of the lens, regardless of what kind of source the object point is and regardless of what the color mix of the light from that object point is. The lensmaker's equation, with the refractive index assumed to be a constant, covers that situation. It makes no difference whether light is emitted from the object point or reflected from the object point; and it makes no difference whether the reflected light from the object point is from a diffuse source or from a laser: light from a point is imaged to a point.

Even if the lens is not achromatic, light of any one wavelength from an object point will be focused to a single image point. But in this case each wavelength will be focused to a slightly different point, which results in a slight color blur.

A correct way to draw rays from a point object reflecting from a mirror and then proceeding through a lens to focus at an image is like this:
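The "point maps to point" claim is the thin-lens imaging relation 1/f = 1/d_o + 1/d_i: the image distance depends only on the object distance and the focal length, not on which ray you trace or how the light was produced. A sketch with illustrative numbers (names and values are mine):

```java
// Thin-lens relation 1/f = 1/d_o + 1/d_i: every ray leaving one object
// point (whether emitted diffusely or reflected off a mirror) is brought
// to the same image distance d_i, which is why the reflection is sharp.
public class ThinLens {
    // Image distance for focal length f and object distance dO (same units).
    static double imageDistance(double f, double dO) {
        return 1.0 / (1.0 / f - 1.0 / dO);
    }

    public static void main(String[] args) {
        // f = 10 cm, object 30 cm away: the image forms 15 cm behind the
        // lens, independently of which ray from the object point we trace.
        System.out.println(imageDistance(10.0, 30.0));
    }
}
```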
{ "domain": "physics.stackexchange", "id": 58364, "tags": "optics, visible-light, reflection, lenses" }
Direct Product vs Tensor Product
Question: I am confused by the notation on pages 67 and 70 of a text (http://www-pnp.physics.ox.ac.uk/~tseng/teaching/b2/b2-lectures-2018.pdf), whether it's talking about a direct product or an outer product: On page 67, it mentioned that "you can take a direct product of two $j = 1/2$ representations" and build representations of higher $j$. On page 70, it mentioned "we can think of [the Lorentz Group] as the direct product $SU(2) \times SU(2)$." In each of the above, does the author mean a direct product or a tensor product?

Answer: On p. 67 Tseng means a tensor product of representations. On p. 70 Tseng means a direct product of groups. Note however that the actual statement about the Lorentz group is wrong/imprecise, as explained in e.g. this Phys.SE post. Concerning direct product vs. tensor product of groups, see also my related Phys.SE answer here.
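The p. 67 statement is the usual Clebsch-Gordan decomposition: the tensor product of spin-$j_1$ and spin-$j_2$ representations contains one copy of each spin $j$ from $|j_1-j_2|$ to $j_1+j_2$, and the dimensions match, $(2j_1+1)(2j_2+1)=\sum_j (2j+1)$. A small dimension-counting sketch (naming is mine; spins are passed doubled so half-integers stay exact):

```java
// Clebsch-Gordan dimension check: spin-j1 (x) spin-j2 decomposes into one
// copy of each spin j = |j1-j2|, ..., j1+j2.  Spins are passed doubled
// (twoJ = 2j) so that half-integer spins remain exact integers.
public class AddSpins {
    // Sum of dimensions (2j + 1) over the decomposition.
    static int decomposedDim(int twoJ1, int twoJ2) {
        int dim = 0;
        for (int twoJ = Math.abs(twoJ1 - twoJ2); twoJ <= twoJ1 + twoJ2; twoJ += 2) {
            dim += twoJ + 1;
        }
        return dim;
    }

    public static void main(String[] args) {
        // 1/2 (x) 1/2: dimension 2*2 = 4 = 3 (triplet, j=1) + 1 (singlet, j=0)
        System.out.println(decomposedDim(1, 1)); // 4
        // 1 (x) 1/2: dimension 3*2 = 6 = 4 (j=3/2) + 2 (j=1/2)
        System.out.println(decomposedDim(2, 1)); // 6
    }
}
```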
{ "domain": "physics.stackexchange", "id": 54274, "tags": "special-relativity, group-theory, representation-theory, lorentz-symmetry, group-representations" }
GPS navigation and Arduino ECU
Question: Does Autoware.Auto support the Arduino ECU? Additionally, is there a link to compatible ECUs somewhere? Also, does it support GPS-based navigation? I wanted to try using it in a racetrack situation like the IAC racetrack in the LGSVL simulation environment. How would I go about doing so? I saw a RecordPlanner in the documentation which would be helpful, but I am not sure how GPS and waypoints translate here. I see that the RecordPlanner (https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/recordreplay-planner-howto.html) uses the 3D Perception Stack. Does that mean while following waypoints it will also avoid obstacles that we see through the 3D Perception Stack? Lastly, what's the difference between the RecordPlanner and the TrajectoryFollowing (https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/test_trajectory_following-package-design.html)? It seems the TrajectoryFollowing does have support for avoidance. Thanks in advance for taking the time to answer these questions!

Originally posted by sisaha9 on ROS Answers with karma: 90 on 2021-03-25

Post score: 0

Answer: Let me try to answer your questions. For future reference, you should only ask one question per post on this site to make answers easier for others to find.

If you are talking about the Arduino microcontroller, it is not capable of running ROS, so it cannot be used for Autoware.

Autoware.Auto does not currently support GPS-based navigation. We are working to add this for our currently-targeted development ODD: Cargo Delivery. We are also working with Clemson University to provide a reference implementation which will include this.

RecordReplayPlanner currently records and replays waypoints in a local, but earth-fixed, frame. It does not use GPS or any earth-level measurement to do this. The X, Y, and Z coordinates that are recorded are relative to some local origin, and their relation to GPS coordinates is not stored.
Yes, it can avoid static obstacles using lidar-based perception and stop for them automatically. However, due to some problems with our object-collision-estimation algorithm, it can detect dynamic objects but will not properly stop in time to avoid a collision. This is something else we are working on in the current development cycle.

The TestTrajectoryFollowing package is for simulating certain inputs for testing and debugging different control algorithms, and not for recording and replaying waypoints.

Originally posted by Josh Whitley with karma: 1766 on 2021-03-27

This answer was ACCEPTED on the original site

Post score: 1

Original comments

Comment by sisaha9 on 2021-03-28: Sorry, I will make sure to split them more next time. Thanks for the answer! A follow-up on the ECU portion: I was looking through the documentation, and it seems that Autoware.Auto communicates with vehicles that are compatible with the PACMod firmware. Which PACMod version does it work with? I see there are 3 versions: v1, v2 and v3. Edit: Never mind, I see on the main tree it uses v3. Thanks!

Comment by Josh Whitley on 2021-03-28: Please ask this as a separate question as well.
{ "domain": "robotics.stackexchange", "id": 36241, "tags": "ros, navigation, ros2, gps, avoidance" }
Photon Passing Through Water
Question: When light is passed through water, why doesn't it form ripples (like a stone does)? I am assuming light is a particle here. The photon's mass ≠ 0 when it's in motion, and it also has very high momentum. Or is it that the ripples formed are of very small amplitude and can't be seen by the naked eye?

Answer: Absorption of a photon does launch an acoustic wave in water, but only at the point where it is absorbed or scattered. Photons in, e.g., a laser beam passing through water are absorbed at points more or less randomly distributed along the beam's path. Absorption causes local heating, which causes local expansion, which causes the acoustic wave. Systems using the photoacoustic effect make use of that acoustic wave: a very brief (nanoseconds) laser pulse can be launched in a thin sheet or line through a liquid solution, and spectral and phase analysis of the resulting acoustic signal provides clues about the dynamics of absorption and relaxation.
{ "domain": "physics.stackexchange", "id": 62746, "tags": "homework-and-exercises" }
Nomenclature of aryl halides
Question: How do you know when to name a benzene ring substituted by a simple haloalkane as a derivative of benzene or as a derivative of the alkyl halide? For example, is it 1-chloro-4-isopropylbenzene or 2-(4-chlorophenyl)propane?

Answer: In the usual substitutive nomenclature, simple halogen compounds are always expressed by prefixes (‘bromo’, ‘chloro’, etc.). Hence, the senior parent structure of such compounds is not determined by a principal characteristic group (a characteristic group chosen for citation at the end of a name by means of a suffix or a class name, or implied by a trivial name).

According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), the ring or the chain can be the senior parent structure; however, for the preferred IUPAC name, the ring is always selected as the senior parent structure.

P-44.1.2.2 Systems composed of rings and chains (exclusive of linear phanes)

Two methods are recognized to name systems composed of rings and chains (exclusive of linear phanes).

(1) Within the same class, a ring or ring system has seniority over a chain. When a ring and a chain contain the same senior element, the ring is chosen as parent. Rings and chains are chosen regardless of their degree of hydrogenation. As a consequence, this approach prefers the choice of a ring over a chain in systems composed of cyclic and acyclic hydrocarbons.

(2) The context may favor the ring or the chain, so that, for example, substituents may be treated alike or an unsaturated acyclic structure may be recognized, or the one chosen has the greater number of skeletal atoms in the ring or in the principal chain of the acyclic structure. (…)

For selection of a preferred IUPAC name, see P-52.2.8.
P-52.2.8 Selection between a ring and a chain as parent hydride

Within the same heteroatom class and for the same number of characteristic groups cited as the principal characteristic group, a ring is always selected as the parent hydride to construct a preferred IUPAC name. In general nomenclature, a ring or a chain can be the parent hydride (see P-44.1.2.2).

Therefore, the benzene ring is selected as the senior parent structure for the preferred IUPAC name of the compound given in the question, which leads to ‘1-chloro-4-isopropylbenzene’. However, according to current recommendations, the preferred prefix for the isopropyl substituent is ‘propan-2-yl’; the prefix ‘isopropyl’ is retained for use in general nomenclature. Thus, the complete preferred IUPAC name is ‘1-chloro-4-(propan-2-yl)benzene’.
{ "domain": "chemistry.stackexchange", "id": 6172, "tags": "organic-chemistry, nomenclature" }
Recursive factorial algorithm
Question: I'm struggling to understand this factorial algorithm.

Falling(n, m):
    if m = 0 return 1
    if m = 1 return n
    return Falling(n, floor(m/2)) * Falling(n - floor(m/2), ceil(m/2))

The algorithm should compute $n!/(n-m)!$. I'm analyzing it in the case $n = m$, i.e. it returns $n!$. Now I'm getting the correct results from my implementation of it in Java, but its correctness has been puzzling me. How should I approach this?

Answer: If you plug the claimed result into what is returned by the recursion, you get: $$\frac{n!}{(n - \lfloor \frac{m}{2} \rfloor)!} \frac{(n - \lfloor \frac{m}{2} \rfloor)!}{(n - \lfloor \frac{m}{2} \rfloor - \lceil \frac{m}{2} \rceil)!} = \frac{n!}{(n - \lfloor \frac{m}{2} \rfloor - \lceil \frac{m}{2} \rceil)!}$$ You can see the right side is equal to $\frac{n!}{(n - m)!}$ because of the identity $m = \lfloor \frac{m}{2} \rfloor + \lceil \frac{m}{2} \rceil$ (if you are in doubt about this: check the cases $m$ even and $m$ odd separately). For correctness, the only things left to check are that the base cases $m=0$ and $m=1$ are correct (which is a trivial observation), that Falling is not invoked with incorrect parameters (this is the case if $m \le n$ holds), and that it terminates (also the case, because the invoked instances have strictly smaller parameters).
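The pseudocode can be transcribed directly into Java; note that `m - floor(m/2)` equals `ceil(m/2)`, so no ceiling function is needed. A sketch (using `long` so moderately large products don't overflow `int`):

```java
// Direct transcription of Falling(n, m): computes the falling factorial
// n! / (n - m)! by splitting the m factors into two halves and recursing.
public class Falling {
    static long falling(long n, long m) {
        if (m == 0) return 1;
        if (m == 1) return n;
        long half = m / 2;                       // floor(m/2)
        // m - half = ceil(m/2), so the two calls cover all m factors
        return falling(n, half) * falling(n - half, m - half);
    }

    public static void main(String[] args) {
        System.out.println(falling(5, 5));   // 5! = 120
        System.out.println(falling(10, 3));  // 10 * 9 * 8 = 720
    }
}
```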
{ "domain": "cs.stackexchange", "id": 13117, "tags": "algorithms, algorithm-analysis, recursion" }
Launch files not copied into share folder by catkin_make
Question: My launch files do not get placed into the devel/share folder, although I state this inside CMakeLists.txt. The 4 files are inside the src//launch folder.

# install the launch files
install(DIRECTORY launch/
  DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/launch)

Originally posted by user23fj239 on ROS Answers with karma: 748 on 2016-01-31

Post score: 0

Original comments

Comment by Javier V. Gómez on 2016-01-31: Try removing the last "/" in the install line.

Answer: Launch files are not copied into the devel folder. Using the install tag, they are installed into the respective folder under install if you do a catkin_make install.

Originally posted by mgruhler with karma: 12390 on 2016-02-01

This answer was ACCEPTED on the original site

Post score: 2

Original comments

Comment by user23fj239 on 2016-02-01: I see: catkin_make install triggers install inside CMakeLists. Still the question: how to get them into devel?

Comment by mgruhler on 2016-02-02: You don't need them in devel. You use those from src if you source devel/setup.bash.
{ "domain": "robotics.stackexchange", "id": 23608, "tags": "roslaunch, cmake" }
Bluetooth Codec (SBC) - Uses a FFT?
Question: In the A2DP specification, the SBC codec uses a "polyphase filterbank" which has the formula $$x[n] = X[m]\cos((m+0.5)(n+M/2)\pi/M).$$ I am confused as to what this is. It seems to take in $N$ points and output $2N$ points in some sort of frequency domain. Is this a special type of DFT, and can it be optimized with a radix-2 algorithm? I tried searching but could not find any websites that could help me. Edit: Links - https://www.bluetooth.org/docman/handlers/downloaddoc.ashx?doc_id=8236 and bottom of Page 64

Answer: I figured out the answer. The polyphase filterbank in the SBC codec is a variation on the discrete cosine transform type 4 (DCT-IV) that also divides the signal into frequency bands at the same time. If you look in the spec, the double for loop is used both to multiply across all basis functions and to sum up the frequency bands.
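The cosine modulation itself is easy to sketch. The code below is my reading of the quoted formula (not a full SBC implementation, which also involves windowing and a FIFO of history samples): the sum runs over the M subband samples X[m], and n ranges over 2M output samples, matching the "N in, 2N out" observation in the question.

```java
// Sketch of the cosine modulation from the quoted SBC formula: M subband
// samples X[m] produce 2M time-domain samples
//   x[n] = sum_m X[m] * cos((m + 0.5) * (n + M/2) * pi / M).
// This is only the modulation step, not the complete synthesis filterbank.
public class SbcSynthesis {
    static double[] synthesize(double[] X) {
        int M = X.length;
        double[] x = new double[2 * M];
        for (int n = 0; n < 2 * M; n++) {
            double s = 0.0;
            for (int m = 0; m < M; m++) {
                s += X[m] * Math.cos((m + 0.5) * (n + M / 2.0) * Math.PI / M);
            }
            x[n] = s;
        }
        return x;
    }

    public static void main(String[] args) {
        double[] X = {1.0, 0.0, 0.0, 0.0};  // impulse in the lowest subband
        double[] x = synthesize(X);
        System.out.println(x.length);        // 8 output samples for 4 inputs
    }
}
```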
{ "domain": "dsp.stackexchange", "id": 5273, "tags": "fft, filters, audio" }
How can I obtain a computer readable model of Dr. Gerhard Michal's biological pathways map?
Question: I want to run simulations of various metabolic pathways. The project could end up becoming quite large, and having a machine-readable chart would make things a lot easier. Does anyone know if there is a machine-readable file of Dr. Gerhard Michal's famous metabolic pathways chart?

Answer: Roche's Biochemical Pathways works as a big PNG image and just puts labels on the map. But you could try to extract data using queries like http://biochemical-pathways.com/pol/fts/query?query=Glutarate It seems to be legal, as it's not prohibited.

Escher is a good pathway visualization tool where you can save maps in JSON or SVG. Another choice is VMH (Virtual Metabolic Human): the Virtual Metabolic Human database provides literature-derived information on human metabolism, gut microbial metabolism, nutrition, and diseases. Reactome has a two-layer map: the first layer is biological interactions, the second layer is pathways.
{ "domain": "biology.stackexchange", "id": 5385, "tags": "bioinformatics, metabolism, theoretical-biology" }
Is the exact cover problem NP-hard when there is a restriction on the size?
Question: The exact cover problem with restrictions on the size is:

Input: A set $U=\{1,2,\ldots,n\}$ and a collection $C$ of subsets of $U$.

Question: Is there a subcollection $C^\star$ of $C$ such that

1. The intersection of any two distinct subsets in $C^\star$ is empty;
2. The union of the subsets in $C^\star$ is $U$; and
3. The size of any set $S\in C^\star$ is at most $\log_2 n$, i.e., $|S|\leqslant \log_2 n$ for all $S\in C^\star$?

Is this problem NP-hard?

Answer: Yes, it is still NP-hard. In fact, it remains hard even if you replace $\log_2 n$ with the constant 3. This follows by reduction from 3-dimensional perfect matching.

3-dimensional perfect matching is the following problem:

Input: disjoint sets $X,Y,Z$ such that $|X|=|Y|=|Z|$, and a set $T \subseteq X \times Y \times Z$.

Question: does there exist a perfect matching? In other words, does there exist $S \subseteq T$ such that $|S|=|X|$ and the union of elements of $S$ is an exact cover for $X \cup Y \cup Z$?

From this statement of the problem, it is clear that 3-dimensional perfect matching is a special case of your problem: take $U=X \cup Y \cup Z$ and $C = \{\{x,y,z\} : (x,y,z) \in T\}$.

In contrast, if the size of every set is at most 2, then the problem can be solved in polynomial time, using algorithms for finding a perfect matching in an undirected graph.
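Hardness only rules out an efficient algorithm; a brute-force solver for the size-restricted problem is straightforward and runs in exponential time, as expected unless P = NP. A sketch (all names are mine; `floor(log2 n)` is computed with bit arithmetic to avoid floating-point rounding):

```java
import java.util.*;

// Brute-force check of the size-restricted exact cover: try every
// subcollection of C whose sets are pairwise disjoint, have size at most
// floor(log2 n), and whose union is U = {1, ..., n}.
public class ExactCover {
    static boolean hasExactCover(int n, List<Set<Integer>> C) {
        int maxSize = 31 - Integer.numberOfLeadingZeros(n); // floor(log2 n), exact
        return search(n, C, 0, new HashSet<>(), maxSize);
    }

    static boolean search(int n, List<Set<Integer>> C, int i,
                          Set<Integer> covered, int maxSize) {
        if (covered.size() == n) return true;   // all of U covered, disjointly
        if (i == C.size()) return false;
        Set<Integer> s = C.get(i);
        // Branch 1: take set i if it is small enough and disjoint so far.
        if (s.size() <= maxSize && Collections.disjoint(s, covered)) {
            covered.addAll(s);
            if (search(n, C, i + 1, covered, maxSize)) return true;
            covered.removeAll(s);               // backtrack
        }
        // Branch 2: skip set i.
        return search(n, C, i + 1, covered, maxSize);
    }

    public static void main(String[] args) {
        List<Set<Integer>> C = List.of(Set.of(1, 2), Set.of(3, 4), Set.of(1, 3));
        // {1,2} and {3,4} form an exact cover of {1,2,3,4} with |S| <= 2 = log2(4)
        System.out.println(hasExactCover(4, C));
    }
}
```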
{ "domain": "cs.stackexchange", "id": 7899, "tags": "np-hard, set-cover" }
Frequency and phase range of the FFT of an image
Question: I have some questions: 1. What is the horizontal and vertical frequency range (and the steps) of the FFT amplitudes of an image? 2. What is the relevant axes range for the phase image? 3. Which sector is physically relevant? Maybe you can explain it for the following image: of which the FFT amplitude image is:

Answer:

What is the horizontal and vertical frequency range (and the steps) of the FFT amplitudes of an image?

Unlike the FFT of time-varying signals, whose frequencies are represented in hertz, for images the frequency is represented in cycles/pixel. 0.5 cycles per pixel is the maximum possible spatial frequency, so the FFT ranges from 0 to 0.5. Due to the negative frequency components, the values range from -0.5 to 0.5 for both fx and fy. From this you can calculate the step: Step_x = 1/Width (cycles/pixel), Step_y = 1/Height (cycles/pixel).

What is the relevant axes range for the phase image?

For both magnitude and phase, you plot over the same frequency range (-0.5 to +0.5).

Which sector is physically relevant?

Zero frequency is the DC component, so the image varies from low frequency to high frequency as you move outwards from the center of the image.
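The bin/step arithmetic for one image axis can be written out explicitly. A sketch (my naming; this assumes the DC-centered ordering, i.e. the spectrum after an fftshift-style rearrangement, with an even width W):

```java
// Spatial-frequency axis for an image row of W pixels (W even, DC-centered
// ordering): bin k corresponds to fx = (k - W/2) / W cycles/pixel, so the
// axis spans [-0.5, 0.5) in steps of 1/W.
public class FreqAxis {
    static double[] axis(int W) {
        double[] f = new double[W];
        for (int i = 0; i < W; i++) {
            f[i] = (i - W / 2) / (double) W;
        }
        return f;
    }

    public static void main(String[] args) {
        double[] f = axis(8);
        // For W = 8: -0.5, -0.375, ..., 0.25, 0.375 with step 1/8
        System.out.println(f[0] + " .. " + f[f.length - 1]);
    }
}
```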
{ "domain": "dsp.stackexchange", "id": 4485, "tags": "fft, fourier-transform" }
Can relativistic mass actually change gravitational pull?
Question: I've heard that relativistic mass can influence gravity, but this seems to create a paradox, unless I am missing something. It seems to me that if there were two celestial bodies that are observed to be moving along approximately parallel trajectories at a relativistic speed, wouldn't the gravitational force between them also be larger than if the bodies were at rest, and therefore draw them closer together than they otherwise would be? How would this attractive force be accounted for if one were observing the second celestial body from the reference frame of the first, where one would not otherwise see any significant relative motion to the other body and would not have the apparent relativistic velocity to attribute the increased gravitational pull to? What am I missing here?

Answer: The answer is that "relativistic mass" does not produce gravity in the Newtonian sense. Newtonian gravity breaks down in the realm where velocities are relativistic, and you have to use general relativity to determine the answer. In GR, gravity (i.e. curvature of spacetime) is produced by the stress-energy tensor, which is independent of coordinates. So if there's a frame of reference in which the two bodies are at rest relative to one another, then there is no additional gravitational attraction between them. This would be the case if they are moving on parallel paths in the same direction. If they were moving in opposite directions, then there is no frame in which the relative kinetic energies disappear, and so that would produce an effect. But to determine the effect you would have to solve the Einstein field equations; I don't think the Newtonian approximation would work.
{ "domain": "physics.stackexchange", "id": 78091, "tags": "general-relativity, gravity, mass, mass-energy, speed" }
Place cooling bath apparatus in freezer to speed up the process?
Question: As part of an at-home lab, my chemistry class is creating cooling baths to measure the freezing point of different solutions. We have to determine the molar mass of the solutes in these solutions by observing the freezing point. The cooling bath is constructed from ice and salt inside a 500 mL beaker, into which are placed test tubes of solutions (one at a time). A thermometer sticking out of the test tube hypothetically measures the freezing point. The problem is that the ice bath is taking hours to freeze the solution. To speed up this process, can I just stick the whole apparatus in the freezer and check it every 10 minutes or so to see the temperature at which the solutions freeze? Will the freezing point of salt water and calcium chloride water (the solutions in question) be radically different if frozen in an ice bath vs. frozen inside a kitchen freezer?

Answer: The freezing point remains the same, but its measured value may depend on the way of measurement. If the solution's freezing point is too close to the temperature of the 3:1 ice/salt bath, that may be the reason the freezing takes too long. A 1:2 ice/$\ce{CaCl2 . 6 H2O}$ bath could help, if applicable. Note that fast freezing may lead to a biased value, especially if supercooling occurs. E.g. a PET bottle of water can get supercooled in a freezer to deep below the freezing point.
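The molar-mass determination behind the lab is cryoscopy: the depression obeys dT = Kf * molality, so the molar mass follows from the measured dT. A sketch with the textbook Kf for water, 1.86 K·kg/mol (names are mine; this ignores the van 't Hoff dissociation factor, which matters for NaCl and CaCl2 and makes the apparent molar mass smaller than the true one):

```java
// Freezing-point-depression estimate of molar mass:
//   dT = Kf * molality  =>  M = Kf * (g solute) * 1000 / (dT * g solvent).
// Assumes an ideal, non-dissociating solute (so not strictly valid for
// salts like NaCl or CaCl2 without an i-factor correction).
public class Cryoscopy {
    static final double KF_WATER = 1.86; // K*kg/mol

    static double molarMass(double gramsSolute, double gramsSolvent, double deltaT) {
        return KF_WATER * gramsSolute * 1000.0 / (deltaT * gramsSolvent);
    }

    public static void main(String[] args) {
        // 18.0 g of glucose (true M = 180 g/mol) in 100 g of water gives a
        // 1 mol/kg solution and depresses the freezing point by 1.86 K.
        System.out.println(molarMass(18.0, 100.0, 1.86)); // ~180 g/mol
    }
}
```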
{ "domain": "chemistry.stackexchange", "id": 14450, "tags": "experimental-chemistry, phase" }
Inserting tables with foreign key
Question: How can I improve this method that adds data to three tables in the database? The tables are:

UserTable
UserInfoTable
ContactTable

UserInfoTable and ContactTable have a one-to-many relationship, with ContactTable having UserInfoID as a foreign key.

public static boolean addUser(User user) {
    Connection connection = getConnection();
    try {
        PreparedStatement insertUser = connection.prepareStatement(Insert_User);
        insertUser.setString(1, user.getUsername());
        insertUser.setString(2, user.getPassword());
        insertUser.setString(3, user.getUserType());
        insertUser.executeUpdate();

        PreparedStatement insertUserInfo = connection.prepareStatement(Insert_UserInfo);
        insertUserInfo.setString(1, user.getFirstName());
        insertUserInfo.setString(2, user.getMiddleName());
        insertUserInfo.setString(3, user.getLastName());
        insertUserInfo.setString(4, user.getGender());
        insertUserInfo.setString(5, user.getBirthdate());
        insertUserInfo.executeUpdate();

        // I need to get the UserInfoID to insert into ContactTable. I really hate to do
        // this but I can't think of any other way to insert the foreign key.
        PreparedStatement getUserInfoID = connection.prepareStatement(GET_USER_INFO_ID);
        getUserInfoID.setString(1, user.getFirstName());
        getUserInfoID.setString(2, user.getMiddleName());
        getUserInfoID.setString(3, user.getLastName());
        getUserInfoID.setString(4, user.getGender());
        getUserInfoID.setString(5, user.getBirthdate());
        ResultSet rs = getUserInfoID.executeQuery();

        PreparedStatement insertContact = connection.prepareStatement(Insert_Contact);
        while (rs.next()) {
            insertContact.setInt(1, rs.getInt(1));
            insertContact.setString(2, user.getEmailAddress());
            insertContact.setString(3, user.getAddress());
            insertContact.setString(4, user.getContactNumber());
            insertContact.executeUpdate();
        }
    } catch (SQLException sqle) {
        sqle.printStackTrace();
        return false;
    }
    return true;
}

I originally wanted to set the connection's autocommit to false so the connection would roll back if an insert statement failed, but since I need to run the UserInfo query to get the UserInfoID, I can't do that anymore. How can I do this better?

Answer: janos recommended the Spring and Apache DbUtils frameworks; I can highly recommend using Hibernate. Hibernate is a very commonly used ORM (object-relational mapping) framework. By using Hibernate and the proper entity classes, you can do all this with only these lines:

try {
    em.getTransaction().begin();
    em.persist(user);
    em.getTransaction().commit();
} catch (Exception ex) {
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
    // Log/handle the exception in another way as well
}

When using Hibernate, you can tell it all about the one-to-many/many-to-many/one-to-one/many-to-one relationships, and tell it when to cascade operations for you. Here's an example of how your UserInfo could look.
@Entity
public class UserInfo {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String firstName;

    @OneToMany(cascade = CascadeType.PERSIST)  // one UserInfo has many Contacts
    private List<Contact> contacts;

    // some getters and setters or other methods
}

When this entity gets persisted, Hibernate will also persist all its contact information. One formatting issue with your current code is that you're not indenting it properly. Each { should add one level of indentation, and each } should remove one. This makes things so much more readable.
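If you do stay with plain JDBC, the intermediate SELECT can also be avoided: JDBC can hand back the auto-generated key directly via prepareStatement(sql, Statement.RETURN_GENERATED_KEYS) and getGeneratedKeys(), which also removes the obstacle to turning autocommit off. As a language-neutral sketch of that pattern (illustrative schema, not the OP's actual tables), here it is with Python's built-in sqlite3, where the generated key is exposed as lastrowid and the connection context manager plays the role of the transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_info (id INTEGER PRIMARY KEY AUTOINCREMENT, first_name TEXT)")
conn.execute("CREATE TABLE contact (user_info_id INTEGER REFERENCES user_info(id), email TEXT)")

with conn:  # one transaction: commits on success, rolls back on error
    cur = conn.execute("INSERT INTO user_info (first_name) VALUES (?)", ("Ada",))
    user_info_id = cur.lastrowid  # the generated key -- no SELECT round-trip needed
    conn.execute("INSERT INTO contact (user_info_id, email) VALUES (?, ?)",
                 (user_info_id, "ada@example.com"))

row = conn.execute("SELECT user_info_id, email FROM contact").fetchone()
print(row)  # (1, 'ada@example.com')
```

Either way, keeping all three inserts inside a single transaction restores the all-or-nothing behavior the question asks about.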
{ "domain": "codereview.stackexchange", "id": 8367, "tags": "java, sql, jdbc" }
Schrödinger equation for the 'octal' spherical harmonic?
Question: Would it be possible to develop an analogue of the Schrödinger equation for "octal" spherical harmonics? Some background: The cubical atom was an early atomic model in which electrons were positioned at the eight corners of a cube in a non-polar atom or molecule. This theory was developed in 1902 by Gilbert N. Lewis and published in 1916... It was further developed in 1919 by Irving Langmuir as the cubical octet atom. Although the cubical model of the atom was soon abandoned in favor of the quantum mechanical model based on the Schrödinger equation, and is therefore now principally of historical interest... (Wikipedia) Here are the relevant spherical harmonics (image from Wikipedia): top right is the spherical harmonic for the p-shell (the s-shell has to be calculated separately); bottom left is the "octal" spherical harmonic. Answer: No. What you call the "octal" spherical harmonic (by no means a standard term), i.e. $Y_{3,2}(\theta, \phi)$, is a single function, and it therefore represents a single orbital, which fits a single electron. Cutting it up into octants is pretty much equivalent to cutting up $\sin(x)$ into the intervals $(n\pi,(n+1)\pi)$, drawing doors and windows on them, and pretending that they're little houses where electrons can live. Now, you can go on and formulate whatever interpretation you want of bits of QM that you've cut and pasted to try to fit the mold of other failed scientific theories, but if you intentionally set out to break quantum mechanics, you'll mostly just get nonsense. In this specific case that's all that's come out.
{ "domain": "physics.stackexchange", "id": 43101, "tags": "quantum-mechanics, schroedinger-equation" }
Element Names in English
Question:

Na - Sodium - Natrium
K - Potassium - Kalium
W - Tungsten - Wolfram
Sb - Antimony - Stibium

and so forth. [English only] Why do we not use names that match the symbols? Answer: In the $18$th and $19$th centuries, there was a strong rivalry between the chemists of France, Germany and England, especially about giving a symbol to newly found elements. If a French chemist had discovered and named a new element, the Germans and English had to accept it, but they often proposed another name, and another symbol, just so as not to obey "the enemy". If you read old French publications from the $19$th century, you may find formulas like SoCl for sodium chloride. Beryllium Be was called Glucinium Gl in France until late into the middle of the $20$th century. This stupid rivalry ended after WWI, in $1919$, when IUPAC was created and delegates from all over the world met just to define a unique symbol for each element. Unfortunately they failed to agree on common names, but they succeeded in defining common symbols.
{ "domain": "chemistry.stackexchange", "id": 16951, "tags": "nomenclature, elements" }
Why does Morocco have so much phosphate?
Question: I read that Morocco/Western Sahara has 75% of the world's known phosphate reserves, but I couldn't find an explanation of what mineral is responsible for it and how it all ended up there. Answer: The Earth's crust contains an average of 0.27 percent P2O5, most of it in the mineral apatite, Ca10(PO4)6(OH,F,Cl)2. Apatite is slowly soluble in neutral or alkaline waters, and its solubility increases with increasing acidity. The PO4 content of most river and lake waters ranges from about 0.01 to 0.5 part per million (ppm), but may be much higher in soft acid waters and highly saline alkaline lakes. About one-sixth of the phosphate carried to the sea by runoff is in dissolved form. The ocean as a whole is nearly saturated with phosphate, but its distribution within the ocean is not uniform; deep cold waters contain nearly 0.3 ppm PO4, but warm surface waters contain only 0.01 ppm or less. Oceanic circulation brings phosphate-rich water to the surface in several environments, and phosphate may be precipitated either inorganically or biochemically, as the pH and temperature increase near the surface. Most of the world's phosphate production comes from marine phosphorites, many of which are associated with black shale and chert. Secondary processes, including diagenetic phosphatization of calcium carbonate and interstitial precipitation, reworking by waves and currents, and weathering, have often played a prominent part in forming deposits of mineable quality. Common igneous rocks, such as granite, diorite, gabbro, and peridotite, contain only from about 0.005 to 0.4 percent P2O5, but some of the less common alkalic rocks, such as ijolite and turjaite, can contain more than 1 percent. Variations in phosphate content of igneous rocks generally parallel those of ilmenite and magnetite. Apatite is slowly soluble in neutral or alkaline waters, and its solubility increases with increasing acidity, decreasing hardness, and decreasing temperature.
Most phosphorus is carried to the sea as phosphate minerals or adsorbed on iron or aluminum hydroxides or clay. Most of the world's phosphate production comes from marine phosphorites. The richest and largest of these form at low latitudes in areas of upwelling associated with divergence, chiefly along the west coasts of the continents or, in large mediterranean seas, along the equatorial side of the basin. Lesser but significant concentrations form along the west sides of poleward-moving warm currents along the eastern coasts of continents. In the Moroccan phosphate deposits, however, the unconsolidated sediments on the continental shelf and uppermost continental slope off Morocco and the Spanish Sahara are sands, silty sands, and silts. Their sand fraction is carbonate-rich and considered to be mainly relict from Pleistocene times when sea level was lower. This relict sand is mixed with, and locally buried by, Holocene detrital silts which are concentrated on the slope and the middle shelf. Phosphate is locally abundant in the form of sand-sized detrital grains. These were derived from Cretaceous and Tertiary phosphatic rocks cropping out on the shelf, by erosion during the Pleistocene. The sand-sized phosphatic detritus is concentrated in relict placer-type deposits near parent-rock outcrops. Longshore transport during the Pleistocene has contributed further concentrations of phosphatic sand along the shelf edge. These deposits are of low grade (<8% P2O5) and are of no economic interest at present. No signs of Recent phosphate mineral formation were noted. Thus, the hypothesis that this phenomenon is linked with the upwelling of nutrient-rich water does not appear to apply, at this time, to the region studied.
{ "domain": "earthscience.stackexchange", "id": 2026, "tags": "geology, mining, economic-geology" }
If earth water were pure, would the atmosphere still produce lightning?
Question: Like the title says: if Earth's water were composed of pure H2O molecules, without anything else dissolved in it, would the atmosphere still produce lightning? I remember that distilled water is not a conductor, which is why I'm wondering. Answer: Lightning occurs because the atmosphere is an insulator. If the atmosphere were a conductor, then thunderclouds would not be able to accumulate a static charge: the charge would just travel back to the ground by conduction. There is some research which suggests that the details of the paths taken by lightning bolts may be related to ionization channels in the air produced by cosmic rays. However, that doesn't change the basic fact that electricity is conducted between clouds and the ground when the insulating air breaks down under voltage and becomes conducting. An amusing computation would be to reverse your question. Perhaps, if the ocean were made of pure water without any dissolved electrolyte, its conductivity might be poor enough that charges could accumulate until there was a static breakdown in the non-ionized water. There are in fact measurements of the breakdown electric field in water, but I have not read them carefully.
{ "domain": "physics.stackexchange", "id": 95442, "tags": "water, electrochemistry, lightning" }
What chemical reaction can cause this blue-ish deposit?
Question: Recently I noticed that some of our glasses and cutlery had developed a blue-ish deposit, as you can see for the two sets of spoons below. For each set, the spoon on the left has the blue-ish deposit whereas the spoon on the right is the original. I traced the cause of this blue-ish deposit to a specific effect: it only occurs when the spoon or glass has been used for a medicine called Questran-A, a powder that needs to be dissolved in water and then ingested. And more specifically, it only happens when these glasses/spoons are subsequently washed in our dishwasher. This leads me to believe that some reaction occurs between (left-overs of) the medicine and the dishwasher tablets. My question is: does anyone know what reaction between the chemicals in the medicine and those in the dishwasher tablets could cause this blue-ish deposit? Obviously this will be impossible to say without the ingredients of both, so here are the contents of the dishwasher tablets and the medicine:

Dishwasher tablets (Albert Heijn):

pentasodium triphosphate
sodium carbonate and sodium carbonate peroxide
TAED
sodium silicate
PEG-90
alcohols, C16-18, ethoxylated, min. 30 EO
acrylic copolymer
PEG-4
maleic acid/acrylic acid copolymer, sodium salt
alcohols, C12-18, ethoxylated propoxylated
zinc sulfate
beta-alanine, N-(2-carboxyethyl)-, N-coco alkyl derivs, disodium salt
Mn-complex
benzotriazole
subtilisin
tetrasodium etidronate
alpha-amylase
benzisothiazolinone

Questran-A:

4 mg colestyramine
30 mg aspartame (E 951), equal to 17 mg phenylalanine
citric acid (E 330)
kelcoloïd
silicon dioxide (E 551)
orange flavoring
xanthan gum (E 415)

Answer: Let's speculate a bit ;) Colestyramine is a two-dimensional polymer network, built from styrene and 1,4-divinylbenzene. The polymer has a lot of benzyltrimethylammonium groups. That means it can act as an anion exchanger.
It is used to strongly bind bile acids. Neither the polymer itself nor the adduct with bile acid is supposed to degrade in the bowel; the loaded polymer is supposed to be excreted. The dishwasher tablets, on the other hand, contain the sodium salt of a maleic acid/acrylic acid copolymer. That means we have two polymers: one with a lot of $\ce{Ph-CH2N^+(CH3)3}$ groups and another one with lots of $\ce{-COO-}$ groups. Sounds like a mass wedding of two chain gangs :D Edit: It is conceivable that this deposits a thin film on the spoons and the glassware. In order to explain the bluish colour, we have two options: (1) the polymer aggregate absorbs in the 580 nm range and we observe the complementary colour — but due to the lack of conjugation, I don't see how this could happen here; or (2) the colour results from some dispersion effect related to opalescence, as suggested in Michiel's comment.
{ "domain": "chemistry.stackexchange", "id": 924, "tags": "organic-chemistry, inorganic-chemistry, everyday-chemistry, precipitation" }
Finding the appropriate coordinate transformation given two metrics
Question: Given the two-dimensional metric $$ds^2=-r^2dt^2+dr^2$$ How can I find a coordinate transformation such that this metric reduces to the two-dimensional Minkowski metric? I know that $g_{\mu\nu}=\begin{pmatrix}-r^2&0\\0&1\end{pmatrix}$ (this metric) and $\eta_{\mu\nu}=\begin{pmatrix}-1&0\\0&1\end{pmatrix}$ (Minkowski). Obviously, the matrix transformation is $\begin{pmatrix}1/r^2&0\\0&1\end{pmatrix}g_{\mu\nu}=\eta_{\mu\nu}$, but how is that related to the coordinate transformation itself? EDIT: would the following transformation be acceptable? $$r'=r\cosh t$$ $$t'=r\sinh t$$ Such that: $dr'=\cosh t\ dr+r\sinh t\ dt,\quad dt'=\sinh t\ dr+r\cosh t\ dt$ And: $ds'^2=-dt'^2+dr'^2=-r^2dt^2+dr^2=ds^2$ Where we have: $ds'^2=\eta_{\mu\nu}dx^{\mu}dx^{\nu}$ as requested. Is that correct? Also, is there a formal way of "deriving" the proper change of coordinates (since mine is more of an educated guess)? Answer: If you were to Wick rotate $t \rightarrow i \theta$, the metric would be $ds^2 = dr^2 + r^2 d\theta^2$, which is just flat space in polar coordinates. The standard cartesian coordinates can be obtained by $x=r\cos\theta$, $y=r\sin\theta$. The same procedure works in the original Lorentzian signature metric, but with hyperbolic trig functions instead of sines and cosines. By the way, this is two-dimensional Rindler space, which is just a patch of two-dimensional Minkowski space: http://en.wikipedia.org/wiki/Rindler_coordinates.
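The guessed transformation can also be verified symbolically; here is a small sympy sketch of mine (not part of the original exchange) that pushes the differentials through the Jacobian and checks that the Minkowski line element reduces to the given metric:

```python
import sympy as sp

t, r = sp.symbols("t r", positive=True)
dt, dr = sp.symbols("dt dr")

# Proposed transformation: t' = r sinh(t), r' = r cosh(t)
tp = r * sp.sinh(t)
rp = r * sp.cosh(t)

# Push the differentials through the Jacobian of the transformation
dtp = sp.diff(tp, t) * dt + sp.diff(tp, r) * dr
drp = sp.diff(rp, t) * dt + sp.diff(rp, r) * dr

# Minkowski line element in the primed coordinates...
ds2 = sp.expand(-dtp**2 + drp**2)

# ...reduces to the Rindler form -r^2 dt^2 + dr^2
assert sp.simplify(ds2 - (-(r**2) * dt**2 + dr**2)) == 0
```

The cross terms in $dt\,dr$ cancel and the identity $\cosh^2 t - \sinh^2 t = 1$ does the rest, mirroring the Wick-rotation argument in the answer.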
{ "domain": "physics.stackexchange", "id": 20151, "tags": "homework-and-exercises, general-relativity, metric-tensor, coordinate-systems" }
Whence the $i$ in QM Poisson bracket definition?
Question: On p. 87 of Dirac's Quantum Mechanics he introduces the quantum analog of the classical Poisson bracket$^1$ $$ [u,v]~=~\sum_r \left( \frac{\partial u}{\partial q_r}\frac{\partial v}{\partial p_r}- \frac{\partial u}{\partial p_r}\frac{\partial v}{\partial q_r}\right) \tag{1}$$ as $$uv-vu ~=~i~\hbar~[u,v]. \tag{7}$$ I'm not worried about the $\hbar$, but an (alternative) explanation of why the introduction of $i$ is unavoidable might help. $^1$ Note that Dirac uses square brackets to denote the Poisson bracket. Answer: The imaginary unit $i$ is there to turn quantum observables/selfadjoint operators into anti-selfadjoint operators, so that they form a Lie algebra with respect to the commutator. Or, equivalently, consider the Lie algebra of quantum observables/selfadjoint operators with the commutator divided by $i$ as the Lie bracket. The latter Lie algebra corresponds in turn to the Poisson algebra of classical functions, cf. the correspondence principle.
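The answer's point — that the commutator of selfadjoint operators is anti-selfadjoint, and dividing by $i$ restores selfadjointness — can be checked numerically. A small illustration of mine with random Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    """Random Hermitian (selfadjoint) matrix, standing in for an observable."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

A, B = random_hermitian(4), random_hermitian(4)

C = A @ B - B @ A  # commutator of two selfadjoint operators...
assert np.allclose(C.conj().T, -C)  # ...is anti-selfadjoint

D = C / 1j  # dividing by i (as in uv - vu = i*hbar*[u, v])...
assert np.allclose(D.conj().T, D)  # ...restores selfadjointness
```

So without the $i$, the right-hand side of $uv - vu = i\hbar[u,v]$ could not be built from observables at all.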
{ "domain": "physics.stackexchange", "id": 40372, "tags": "quantum-mechanics, classical-mechanics, commutator, complex-numbers, poisson-brackets" }
Why is it said that photon-wavelengths have increased by a factor of 1000 since our universe became transparent to light?
Question: After reading several explanations for the so-called "Hubble radius", and still being confused (as, I reckon, are some of the folks who tried to answer THAT question!), I have a related question, which I hope might help clarify this issue. It's said that the average wavelength of photons at the time when our universe became transparent to light [i.e., at approx. 300,000 years after the Big Bang started] was about 1000 times less than what it is now, and that each photon carried about 1000 times as much energy. These are, of course, the photons which constitute the so-called cosmic microwave background [CMB] radiation. If our universe has been expanding at approximately the same rate since year 300,000, it seems like the factor would be more like (13 billion) / (300 thousand), which is much larger than 1000. Am I missing something important here? Answer: See the lookback-time-to-redshift relation in https://en.m.wikipedia.org/wiki/Redshift . You can ignore inflation if you relate redshifts, temperatures and universe size (radius, scale) ratios between now and times in the past after inflation. For recombination, the relations of 1+z to the scale ratio and temperature ratio are linear and direct. So for T(then)/T(now) = 3000 K / 3 K = 1000 you get z of about 1000, or more exactly closer to 1100. Similarly, a scale ratio of 1000 would give you a size of the universe at recombination of about 13.8 million light years (again, you can use exact numbers for better accuracy). But the lookback time is not linear. That Wikipedia article has the equation. The Dodelson book derives it exactly and does more calculations, and it's not that hard. There is also an online calculator, but it won't teach you anything. The dependence is that lookback time is proportional to (1+z)^(-3/2).
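As a rough illustration of why the naive (13 billion)/(300 thousand) estimate fails, here is a hedged back-of-the-envelope calculation of my own, using the $(1+z)^{-3/2}$ scaling mentioned in the answer (a matter-dominated approximation, ignoring dark energy and radiation):

```python
T_then, T_now = 3000.0, 2.725          # K: recombination vs. the CMB today (approx.)
z = T_then / T_now - 1                 # 1 + z = T(then)/T(now), so z ~ 1100

# Age is NOT linear in the scale factor: matter domination gives t ~ a^(3/2),
# i.e. t_rec ~ t_now * (1+z)^(-3/2).
age_now_yr = 13.8e9
t_rec_yr = age_now_yr * (1 + z) ** -1.5
print(round(z), round(t_rec_yr))       # z ~ 1100, t_rec ~ a few hundred thousand years
```

The nonlinear exponent is exactly what reconciles the factor of ~1100 in wavelength with the factor of ~40,000 in time.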
{ "domain": "physics.stackexchange", "id": 32465, "tags": "cosmology, photons, space-expansion, big-bang, cosmic-microwave-background" }
Solve recurrence where the base case's time complexity is a function of the original input size
Question: I'm trying to analyse the time complexity of the following algorithm for generating the power set:

public static List<List<Integer>> generatePowerSet(List<Integer> inputSet) {
    List<List<Integer>> powerSet = new ArrayList<>();
    directedPowerSet(inputSet, 0, new ArrayList<Integer>(), powerSet);
    return powerSet;
}

private static void directedPowerSet(List<Integer> inputSet, int toBeSelected,
        List<Integer> selectedSoFar, List<List<Integer>> powerSet) {
    if (toBeSelected == inputSet.size()) {
        powerSet.add(new ArrayList<>(selectedSoFar));
        return;
    }
    selectedSoFar.add(inputSet.get(toBeSelected));
    directedPowerSet(inputSet, toBeSelected + 1, selectedSoFar, powerSet);
    selectedSoFar.remove(selectedSoFar.size() - 1);
    directedPowerSet(inputSet, toBeSelected + 1, selectedSoFar, powerSet);
}

It generates the power set as the union of all subsets that include a particular element (toBeSelected) and those that don't. Let $n$ be the size of the sublist from index toBeSelected to inputSet.size()-1, and let $N$ be the size of inputSet, which never changes in subsequent recursive calls. The recurrence relation is $T(n)=\begin{cases}\mathcal{O}(N) & \text{if } n=0\\ 2T(n-1)+\Theta(1) & \text{if } n>0\end{cases}$ By analysing the recursion tree we can see that $T(n) = 2^n\cdot\mathcal{O}(N)+2^{n-1}+\ldots+1= \mathcal{O}(N2^n)$. I'm a bit concerned about $N$ in the answer; some analyses that I've seen just say it's $\mathcal{O}(n2^n)$. My problem is that if we take $n$ as the size of the original input, the recurrence doesn't make sense — $n$ doesn't change from call to call. If we take $n$ as the sublist's size, then we end up with $N$ in the base case's runtime; but $N$ is a constant, in which case is the base case $\mathcal{O}(1)$ and the algorithm's time just $\mathcal{O}(2^n)$? That doesn't make sense either. Should we consider $T$ as a function of two variables, $T(N, n)$?
Answer: You are right that in order to analyze the recurrence, you need to take two parameters into account: the original list size $N$, and the size of the sublist currently operated on $n$. In terms of these parameters, the running time is $\Theta(N2^n)$. However, what we are really interested in is the running time of the algorithm when running on the entire list (in your snippet, the running time of the first procedure rather than the second procedure); the fact that the algorithm proceeds recursively is an implementation detail. In other words, when analyzing the algorithm, there is only one parameter, namely the input size $n$. Therefore we can safely identify $N$ and $n$ in the final formula, and conclude that the running time is $\Theta(n2^n)$, where $n$ is the input size. Stated differently, our end goal is understanding the running time of the algorithm (your first procedure) as a function of the input size. This analysis involves analyzing a certain recursive procedure (your second procedure), whose analysis involves both the original input size and the current input size. The final conclusion is that if we run the recursive procedure on an array of size $n$, then its running time is $\Theta(n2^n)$, and this is what we care about. The more refined running time $\Theta(N2^n)$ only shows up as part of the analysis, but is not reflected in the final result.
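The $\Theta(n2^n)$ conclusion can also be checked empirically. Below is a hypothetical Python port of the Java code, instrumented to count the work done, including the $O(N)$ subset copy at each base case:

```python
def power_set_with_cost(input_set):
    """Hypothetical Python port of the Java code, instrumented to count work."""
    power, ops = [], [0]

    def rec(i, selected):
        if i == len(input_set):
            ops[0] += len(selected) + 1  # copying the subset costs O(N)
            power.append(list(selected))
            return
        ops[0] += 1                      # Theta(1) work per internal call
        selected.append(input_set[i])
        rec(i + 1, selected)             # subsets containing input_set[i]
        selected.pop()
        rec(i + 1, selected)             # subsets not containing it

    rec(0, [])
    return power, ops[0]

subsets, work = power_set_with_cost([1, 2, 3])
assert len(subsets) == 2 ** 3                      # 2^n subsets
assert sum(len(s) for s in subsets) == 3 * 2 ** 2  # total copy cost is n * 2^(n-1)
```

The total copy cost across all base cases sums the sizes of all subsets, $n2^{n-1}$, which is where the extra factor of $n$ in $\Theta(n2^n)$ comes from.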
{ "domain": "cs.stackexchange", "id": 19350, "tags": "time-complexity, runtime-analysis, recurrence-relation, recursion" }
How to convert old bag files to newer messagetypes
Question: We have many older bag files (fuerte) from our robot system and we want to use them with indigo. The problem is that the message versions are not the same and they can't be parsed by newer programs. As an example, here is the error message from rosbag when we tried to display an image topic: Client [/image_view_1419002470714874703] wants topic /usb_cam_side/image_raw/compressed to have datatype/md5sum [sensor_msgs/Image/060021388200f6f0f447d0fcd9c64743], but our version has [sensor_msgs/CompressedImage/8f7a12909da2c9d3332d540a0977563f]. Dropping connection. Is there a way to convert the files so we can use them with the newer ROS version? Originally posted by ptw on ROS Answers with karma: 1 on 2014-12-19 Post score: 0 Answer: Bag files can be migrated to new or arbitrary formats with rosbag check and rosbag fix. http://wiki.ros.org/rosbag/migration http://wiki.ros.org/rosbag/Commandline#check Originally posted by kmhallen with karma: 1416 on 2014-12-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20390, "tags": "ros, sensor-msgs, ros-fuerte, sensor-msgs#image, ros-indigo" }
How to attach sensor from gazebo database(.sdf) to urdf model?
Question: Hi, in Gazebo there are already-available models (insert tab) which you can simply insert into your scene. I am having trouble using them with my URDF model. I want to add a Velodyne to my robot, but the problem is that I cannot do this, as the available model is in .sdf format. I have tried adding it using the information in this link: http://gazebosim.org/tutorials/?tut=add_laser but NO LUCK. I have also tried this approach, but I cannot add the Velodyne using it, though I have added different sensors using the same underlying concept: http://gazebosim.org/tutorials?tut=ros_gzplugins I am using kinetic. Originally posted by RoboRos on ROS Answers with karma: 37 on 2018-02-10 Post score: 0 Answer: Use the velodyne_description package. Originally posted by joq with karma: 25443 on 2018-02-11 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 30011, "tags": "gazebo, urdf, sdf, ros-kinetic, sensor" }
Which is the equivalent processing of human brain in terms of computer processing?
Question: How many FLOPS can my brain process, or how many GHz is a human brain capable of? Is it valid to think of each brain cell as a small CPU (like the CUDA architecture)? Our brains work in parallel, right? Answer: Here is what we know for sure. The largest neuronal network simulation to date was achieved using the K Japanese supercomputer (see the Top500) during summer 2013, a few months ago. Using the open-source software NEST, the scientists simulated a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses. To realize this feat, the program recruited 82,944 processors of the K computer and used 1 petabyte of memory. The process took 40 minutes to complete the simulation of 1 second of neuronal network activity in real, biological time. Although the simulated network is huge, it represents only 1% of the neuronal network in the brain. K is a petascale supercomputer, so we need an exascale machine to completely simulate the whole human brain. These machines will (probably) be available between 2017 and 2020.
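The quoted figures invite a quick back-of-the-envelope check. This is my own arithmetic, with an admittedly naive assumption that compute requirements scale linearly with network size:

```python
# Reported: 40 minutes of wall time on K for 1 second of biological activity,
# covering ~1% of the brain's neuronal network.
wall_s = 40 * 60            # 40 minutes, in seconds
bio_s = 1.0                 # simulated biological time
slowdown = wall_s / bio_s   # => 2400x slower than real time, for ~1% of the brain

# Whole brain (~100x the neurons) in real time would need roughly
# slowdown * 100 ~ 2.4e5 times K's effective throughput; K peaked around
# 10 petaflops, so this lands well into exascale territory.
needed_factor = slowdown * 100
print(slowdown, needed_factor)  # 2400.0 240000.0
```

Real neuronal simulation does not scale linearly (synapse counts grow faster than neuron counts), so treat this only as an order-of-magnitude consistency check on the answer's "we need exascale" claim.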
{ "domain": "cs.stackexchange", "id": 19188, "tags": "terminology, human-computing, computer-vs-human" }
How is wavelet time & frequency resolution computed?
Question: Mallat gives analytic wavelet time & frequency widths/uncertainties as $$ \begin{align} \sigma_{ts}^2 &= \int_{-\infty}^{\infty} (t - u)^2 |\psi_{u, s}(t)|^2 dt = s^2 \sigma_t^2 \tag{4.51} \\ \sigma_{\omega s}^2 &= \frac{1}{2\pi}\int_{-\infty}^{\infty} \left(\omega - \frac{\eta}{s} \right)^2 |\hat\psi_{u, s}(\omega)|^2 d\omega = \sigma_\omega^2 / s^2 \tag{4.54} \end{align} $$ where $$ \begin{align} \psi_{u,s}(t) = \frac{1}{\sqrt{s}} \psi^* \left( \frac{t-u}{s}\right),\ \ \hat\psi_{u,s}(\omega) &= \sqrt{s} \hat\psi(s\omega) e^{-i\omega u} \tag{4.53} \\ \eta = \frac{1}{2\pi} \int_0^\infty \omega |\hat\psi (\omega)|^2 d\omega \tag{4.52} \\ \end{align} $$ $$ \sigma_t^2 = \int_{-\infty}^{\infty}t^2 |\psi(t)|^2dt,\ \ \sigma_\omega^2 = \frac{1}{2\pi}\int_0^\infty (\omega - \eta)^2 |\hat\psi(\omega)|^2 d\omega $$ But how would one actually code this? Continuous math can't just be ported directly; the following considerations arise:

1. How to treat $\psi(t)$ for wavelets defined in the frequency domain? We ifft, which now mixes the discrete and the discretized. Any normalization considerations?
2. How to define $t$ and $\omega$, particularly in light of (1)? What are their ranges? For $\omega$ one can presume $0$ to $\pi$ — but what of $t$? Even if we go to infinity, one must still define the interval over which the wavelet decays. This answer discusses the center frequency computed for wavelets sampled directly in the time domain; is the approach the same here?
3. How to tell whether the implementation(s) is correct? One can always code integrals to spew numbers.

Answer: The key is units, and understanding wavelet behavior in the context of the application (in this case the CWT). Full implementations of everything discussed here are available at squeezepy. This answer assumes analytic wavelets as per the question ($\hat \psi (\omega \leq 0) = 0$), but all ideas apply to general time-frequency atoms (e.g. windows for the STFT) with some code adjustments.
Time Resolution

Firstly, note we're computing a "width", which is inversely related to "resolution". The width is the standard deviation of the time-domain wavelet. Standard deviation is well-defined for normally-distributed data, and has desirable properties if the data is a probability distribution. By squaring the modulus of $\psi(t)$ and normalizing it to integrate to 1, we interpret the result as a probability distribution, and define the time variance:

$$ \sigma_t^2 = \frac{\int_{-\infty}^{\infty} t^2 |\psi(t)|^2 dt}{\int_{-\infty}^{\infty} |\psi(t)|^2 dt}, \tag{1} $$

where $\psi(t)$ is assumed zero-mean. We thus interpret $\sigma_t$ as the time-span of ~68% of the wavelet's energy. The purpose of the denominator is to enforce validity as a probability distribution, i.e. normalizing $\psi$ if we haven't already. This is valuable for coding the integral, as the wavelet at different scales may not integrate to $1$ automatically. It also reminds us of the probabilistic context rooting $\sigma_t$. Mind that $\sigma_t$ is half the said span, as we compute the uni-lateral ('radius') variance.

The question is, how to define $t$? We can't take infinity, and even if we could, what finite width would span the non-decayed wavelet? The answer to the latter is rooted in how the DFT works, discussed in "Are we right?" below, but there's an extra concern: should t be zero-mean? No; it should be defined such that it aligns with the wavelet as if it were the input used to generate the wavelet. Most wavelets will peak at $t=0$, for which t=0 should align with the wavelet's peak (i.e. same array index), negative to the left, positive to the right. Such a t will be nearly zero-mean, but not exactly, and requires different handling for even- and odd-length t.
Frequency resolution

Much of what's said for $\sigma_t$ also holds here; we define the frequency variance as

$$ \sigma_\omega^2 = \frac{\int_0^{\infty} (\omega - \tilde\omega_\psi)^2 |\Psi(\omega)|^2 d\omega } {\int_0^{\infty} |\Psi(\omega)|^2 d\omega}, \tag{2} $$

where $\tilde\omega_\psi$ is the energy center frequency (see "Center Frequency"). We thus interpret it as the frequency-span of ~68% of the wavelet's spectral energy. In CWT filter-bank terms, a small $\sigma_\omega$ for a given scale means that most of the input signal's frequencies will be band-passed about $\tilde\omega_\psi$, and the rest attenuated/rejected.

Center Frequency

The cited answer's approach won't work here — at least not directly. Either way it's suboptimal; since discretization favors us in the frequency domain, integrating is best. Ref [1] defines three meaningful frequencies one can associate with a wavelet's scales, and corresponding center frequencies; two of those are the energy center frequency and the peak center frequency, the latter defined as the frequency at which $\Psi(\omega)$ peaks, and the former as:

$$ \tilde\omega_\psi = \frac{\int_0^{\infty}\omega |\Psi(\omega)|^2 d\omega} {\int_0^{\infty}|\Psi(\omega)|^2d\omega} \tag{3} $$

Here we don't assume $\Psi(\omega)$ is zero-mean (in fact it can't be). If $\Psi(\omega)$ is even-symmetric about its peak, then energy center frequency == peak center frequency. For finite and coded $\Psi(\omega)$, this distinction becomes significant and crucial to any wavelet regardless of shape ("Edge Cases").

Are we right?

Three sanity checks:

1. Can results be interpreted meaningfully?
2. Is $\sigma_t \sigma_\omega \approx 0.5$?
3. Does $\sigma_t \sigma_\omega$ scale correctly with varying wavelet parameters ($\mu$, scale, etc.)?

#2 is our main check; if the product is much greater or less than 0.5, we're off. To cut it short, I had to 'cheat' to find the proper range for $t$.
I started assuming $\omega_\psi$ to be correct since it made sense and correctly gave the center frequency as the exact location of the mode=peak. For $\sigma_t$ I ranged $t\in [-.5, .5)$, as that's interpreted as a fraction of the frame's length. $\sigma_t \sigma_\omega$, however, did not multiply to $0.5$. Working backwards, $\sigma_t = 0.5 / \sigma_\omega$, I found the correct $t$ to range $t \in [-N/2, N/2)$! And then it all clicked: units. The units of $\sigma_\omega$ are cycles per sample. Then, to cancel in Heisenberg's relation, $\sigma_t$ must have the reciprocal units: samples per cycle. This is unified by the below relation for the DFT: $$ \begin{align} f_p \left[ \frac{\text{cycles}}{\text{second}} \right] & = \left( f_{\text{DFT}} \left[ \frac{\text{cycles}}{\text{samples}} \right] \right) \cdot \left( f_s \left[ \frac{\text{samples}}{\text{second}} \right] \right) \end{align} \tag{4} $$ which can also be used to obtain resolutions in physical units (Hertz, seconds, etc). The above is also a bit simplified; the full units are: $$ \begin{align} \left[\left[ \sigma_\omega \right]\right] &= \frac{\text{cycles}\cdot \text{radians}}{\text{samples}} \tag{5a} \\ \left[\left[ \sigma_t \right]\right] &= \frac{\text{samples}}{\text{cycles}\cdot \text{radians}} \tag{5b} \\ \end{align} $$ Radians in $(5b)$ are mandated by those in $(5a)$ to cancel in the product to yield a unitless Heisenberg area. They're also sensibly interpreted once considering the nondimensional variants. Edge cases Discretizing problems don't end at finite precision and scaling conversions; consider what happens with scale pushed to extremes. High scale Freq-domain not doing too poorly - mainly finite precision imperfections. But time-domain is trouble: it doesn't fit the frame! Two problems here: How do we handle $\psi (t)$? Should we extend the frame by adding samples until decay? How does our choice in 1 affect CWT? #2 is key to #1. 
Recall in CWT with freq-domain wavelets, we do a form of the discretized convolution theorem, getting sort of the exact continuous-time result, then discretizing. An imperfect time-domain wavelet is characterized by not fitting in the frame - but what if the frequency-domain wavelet fits just fine? I've yet to meet a time-domain wavelet that doesn't decay sufficiently if the frequency-domain wavelet does; problems arise when the frequency-domain wavelet doesn't. At the high-scale extreme, the freq-dom wavelet is defined by a single sample, which is a time-domain pure sine with $\sigma_t=\infty$. Proper decay demands even symmetry of the freq-domain wavelet, which is bare minimally attainable by two adjacent bins at equal values. How exactly various scenarios are handled can be found in ssqueezepy's tests/props_test.py. Low scale Now the time-domain wavelet fares fine, while the freq-domain is clearly troubled: half (or more) of it is entirely missing! We have the "trimmed bell" problem, as opposed to the "insufficiently sampled bell" of the high-scale case. The center frequency distinction becomes critical here; peak will always equal $\pi$, while energy will lie to its left. Investigating the behavior of the $\sigma_\omega$ integral, I found the energy variant to yield stabler and at times more sensible results, but depending on the application, peak can also be used (with due re-interpreting of $\sigma_\omega$). Nondimensional measures Doing everything right above, one can take the Morlet wavelet and vary its $\mu$ only to notice that time and frequency resolutions don't change! What's the deal? In short, we need a nondimensional measure; this is obtained by normalizing the above as follows (thus completing the formulae in ref [2]): $\sigma_t \cdot= \omega_\psi,\ \sigma_\omega /= \omega_\psi$ in the relevant equations above, where $\omega_\psi$ is the peak center frequency. 
As to how this is interpreted or why it works - it's already a lot for one question, so briefly: try making sense of $\sigma_t$ relative to center frequency's time-domain period, and $\sigma_\omega$ relative to spread around center frequency in log space (or simply ratio-wise). Non-dimensional measures change with wavelet parameters, but not with scales, so each has its own use and interpretation. Where's the $2\pi$? Mallat's equations divide by $2\pi$, we don't, but there's no contradiction: this is per Parseval-Plancherel's theorem relating time-domain and frequency-domain energies. Since we explicitly divide by said energies in $(1)$ and $(2)$, the $2\pi$ is automatically accounted for. Further reading Not everything was explained here, nor can be in a single post. ssqueezepy contains a complete implementation, with accompanying tests that provide further explanation - particularly on handling scale extrema, which is a whole topic of its own. References Higher-Order Properties of Analytic Wavelets - J. M. Lilly, S. C. Olhede Wavelet Tour, Ch1-4 - S. Mallat
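As a runnable complement to the sanity checks discussed above, here is a minimal sketch (my own, not ssqueezepy's implementation) that verifies $\sigma_t \sigma_\omega \approx 0.5$ for a plain Gaussian window — the Gaussian attains the Heisenberg lower bound exactly — using $t \in [-N/2, N/2)$ and angular DFT frequencies in radians/sample:

```python
import numpy as np

N = 8192
t = np.arange(N) - N // 2          # t in [-N/2, N/2), aligned with the peak
s = 50.0                           # Gaussian time-width parameter, in samples
psi = np.exp(-t**2 / (2 * s**2))   # stand-in "wavelet" (a Gaussian window)

# Time variance, eq (1): treat |psi|^2 as a probability distribution over t
p_t = np.abs(psi)**2
p_t /= p_t.sum()
sigma_t = np.sqrt((t**2 * p_t).sum())

# Frequency variance, eqs (2)-(3): same idea over angular DFT frequencies
Psi = np.fft.fft(psi)
w = 2 * np.pi * np.fft.fftfreq(N)  # radians per sample
p_w = np.abs(Psi)**2
p_w /= p_w.sum()
w_c = (w * p_w).sum()              # energy center frequency (~0 here)
sigma_w = np.sqrt(((w - w_c)**2 * p_w).sum())

print(sigma_t * sigma_w)           # ~0.5 for a Gaussian
```

Any systematic deviation from 0.5 here signals a wrong $t$ range or wrong frequency units — which is precisely the 'cheat' check described above.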
{ "domain": "dsp.stackexchange", "id": 9637, "tags": "fourier-transform, dft, wavelet, time-frequency, cwt" }
To which character or characters does a Kleene star apply?
Question: If you have a Kleene star applying to a set of characters not in any closure, does it apply to that whole string, or just the one character it belongs to? Any examples I search don't specify. For example: (A)BB* Does this mean 1: one A followed by zero to an infinite number of BB's, or does it mean 2: one A followed by at least one B and zero to an infinite number of B's? Answer: The Kleene star applies only to the character (or parenthesized group) immediately before it, so in your example the correct option is the 2nd one. ABB* = {AB, ABB, ABBB, ABBBB, ..} A(BB)* = {A, ABB, ABBBB, ABBBBBB, ..}
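The two readings can also be checked mechanically with any regex engine, e.g. Python's re module, which uses the same precedence for *:

```python
import re

# ABB* : one A, one B, then zero or more further B's (reading 2)
assert re.fullmatch(r'ABB*', 'AB')
assert re.fullmatch(r'ABB*', 'ABBB')
assert not re.fullmatch(r'ABB*', 'A')      # at least one B is required

# A(BB)* : one A, then zero or more BB pairs (reading 1 needs the parentheses)
assert re.fullmatch(r'A(BB)*', 'A')
assert re.fullmatch(r'A(BB)*', 'ABBBB')
assert not re.fullmatch(r'A(BB)*', 'AB')   # a lone B doesn't complete a pair
```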
{ "domain": "cs.stackexchange", "id": 8157, "tags": "regular-expressions, kleene-star" }
scaling a model in urdf
Question: I have a perfectly working car model in URDF and now I want to scale it. I made a smaller robot, so I wanted to scale the model down to match. Is there a way to do it in ROS? Originally posted by Dpp_coder on ROS Answers with karma: 60 on 2014-06-13 Post score: 0 Answer: AFAIK this is not easily possible. While you can scale the individual link geometries, to scale the whole model, you'd need to also scale all translations that are part of the model in joints. Stringent use of xacro macros and properties throughout the model probably could make this easier (if not completely painless). Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-06-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 18263, "tags": "simulation, ubuntu-precise, ubuntu" }
Adding header files from specific folder to library using catkin_simple
Question: I'm trying to create a catkin wrapper to build a library from external files. I'm usually using catkin_simple. However, the header files here are not contained in the include directory. I'm trying to figure out where to install them so that I can use #include <AdsLib/AdsLib.h> in other catkin packages to use this library. Looking at catkin_simple's cs_install macro, I tried to add my header files. However, other packages can still not find the includes. Edit: The files exist in install/include. But it seems like the build system is looking for them in the devel or source space. Since catkin_simple is only including from the include folder, I tried to add catkin_package(INCLUDE_DIRS ${SOURCE_DIR}/AdsLib). But the command is executed before the external project folder is cloned, which leads to an error. What am I missing? CMakeLists.txt: cmake_minimum_required(VERSION 2.8.3) project(ads_catkin) find_package(catkin_simple REQUIRED) catkin_simple() include(ExternalProject) ExternalProject_Add(ads PREFIX ${CATKIN_DEVEL_PREFIX}/ads GIT_REPOSITORY https://github.com/Beckhoff/ADS.git GIT_TAG 6b3a03009a757cf651fe44d8be7b6df698028f0e #GIT_TAG master CONFIGURE_COMMAND "" UPDATE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" BUILD_BYPRODUCTS <SOURCE_DIR>/AdsLib/AdsDef.cpp <SOURCE_DIR>/AdsLib/AdsLib.cpp <SOURCE_DIR>/AdsLib/AmsConnection.cpp <SOURCE_DIR>/AdsLib/AmsPort.cpp <SOURCE_DIR>/AdsLib/AmsRouter.cpp <SOURCE_DIR>/AdsLib/Frame.cpp <SOURCE_DIR>/AdsLib/Log.cpp <SOURCE_DIR>/AdsLib/NotificationDispatcher.cpp <SOURCE_DIR>/AdsLib/Sockets.cpp <SOURCE_DIR>/AdsLib/AdsDef.h <SOURCE_DIR>/AdsLib/AdsLib.h <SOURCE_DIR>/AdsLib/AdsNotification.h <SOURCE_DIR>/AdsLib/AmsConnection.h <SOURCE_DIR>/AdsLib/AmsHeader.h <SOURCE_DIR>/AdsLib/AmsPort.h <SOURCE_DIR>/AdsLib/AmsRouter.h <SOURCE_DIR>/AdsLib/Frame.h <SOURCE_DIR>/AdsLib/Log.h <SOURCE_DIR>/AdsLib/NotificationDispatcher.h <SOURCE_DIR>/AdsLib/RingBuffer.h <SOURCE_DIR>/AdsLib/Router.h <SOURCE_DIR>/AdsLib/Semaphore.h 
<SOURCE_DIR>/AdsLib/Sockets.h <SOURCE_DIR>/AdsLib/wrap_endian.h <SOURCE_DIR>/AdsLib/wrap_socket.h ) ExternalProject_Get_Property(ads SOURCE_DIR) include_directories( ${SOURCE_DIR} ) cs_add_library(AdsLib ${SOURCE_DIR}/AdsLib/AdsDef.cpp ${SOURCE_DIR}/AdsLib/AdsLib.cpp ${SOURCE_DIR}/AdsLib/AmsConnection.cpp ${SOURCE_DIR}/AdsLib/AmsPort.cpp ${SOURCE_DIR}/AdsLib/AmsRouter.cpp ${SOURCE_DIR}/AdsLib/Frame.cpp ${SOURCE_DIR}/AdsLib/Log.cpp ${SOURCE_DIR}/AdsLib/NotificationDispatcher.cpp ${SOURCE_DIR}/AdsLib/Sockets.cpp ) add_dependencies(AdsLib ads) cs_install() install(DIRECTORY ${SOURCE_DIR}/ DESTINATION ${CATKIN_GLOBAL_INCLUDE_DESTINATION} FILES_MATCHING PATTERN "*.h" PATTERN "*.hpp" ) cs_export( INCLUDE_DIRS ${SOURCE_DIR}/AdsLib ) And the error: In file included from /home/xxx/xxx/src/xxx/nav_controller/src/controller.cpp:19:0: /home/xxx/xxx/src/Navion/nav_controller/include/nav_controller/controller.h:27:10: fatal error: AdsLib/AdsLib.h: No such file or directory #include <AdsLib/AdsLib.h> Originally posted by prex on ROS Answers with karma: 151 on 2020-01-18 Post score: 0 Answer: I switched to the DownloadProject Cmake module, which works perfectly fine now: https://github.com/Crascit/DownloadProject The problem was that the steps from cs_export were executed before the source code was downloaded. Originally posted by prex with karma: 151 on 2020-01-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 34288, "tags": "ros, ros-melodic, include, catkin, header" }
Why isn't water an ionic compound?
Question: If two alkali metal atoms join with an oxygen atom, an ionic bond forms. Since hydrogen has the same number of valence electrons as the alkali metals, why can't water be ionic? This is what I'm thinking: $$\ce{(H+)2O^2-}$$ Answer: First of all, the difference between ionic and covalent bonds is not sharp. As electronegativity differences increase, you move away from covalent and towards ionic bonds. There are "in between" states like polar covalent, where one side of the bond pulls electrons more strongly but the bond is not fully ionic. And this, I think, is the main reason: hydrogen has a fairly high Pauling electronegativity (2.20), rather close to oxygen's (3.44), which makes the O-H bond polar covalent overall (and is why we get hydrogen bonding in water). In contrast, the alkali metals all have electronegativity less than 1.00, a much bigger difference versus oxygen and thus a more ionic bond.
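To put rough numbers on the comparison, here is a small illustration; the sodium value and the ~1.7 rule-of-thumb cutoff are my additions, not from the answer above:

```python
# Pauling electronegativities; the ~1.7 cutoff is only a rough rule of thumb
# for where bonds are usually called ionic rather than polar covalent.
en = {"H": 2.20, "O": 3.44, "Na": 0.93}

print(round(en["O"] - en["H"], 2))   # 1.24 -> polar covalent, as in H2O
print(round(en["O"] - en["Na"], 2))  # 2.51 -> ionic, as in Na2O
```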
{ "domain": "chemistry.stackexchange", "id": 2741, "tags": "bond, water, ionic-compounds" }
Does a non-relativistic treatment of a moving charged particle violate conservation of energy?
Question: If I have a charged particle in an external electric field such that the particle accelerates, then energy must be radiated according to the Larmor formula. If this motion is non-relativistic, then I can ignore back-reactive forces, so the particle will never lose energy because it never interacts with its own field. Its kinetic and potential energies may change, but the total energy will remain constant. But this seems to contradict the fact that energy is radiated. Does this mean that the non-relativistic treatment violates conservation of energy? The context for this question is a solution to a problem in Jackson's Electromagnetism. It has a non-relativistic charged particle incident on a potential field and asks for the total radiated energy. The solution treats the energy of the particle as a constant, and uses this to write the potential in terms of the kinetic energy at infinity. But if there's radiation, isn't this untrue? Answer: So here's the answer. Ignoring back-reactive forces is equivalent to assuming that the radiated energy is small compared to the energy of the particle. This is not the same thing as assuming the particle moves non-relativistically. The radiation from a relativistic particle can still be negligible, and a non-relativistic particle can radiate a substantial amount of energy. For a trivial example of the first case, just consider a particle moving at a constant relativistic speed. Then there is no radiation at all. An example of the second case is the bremsstrahlung radiation from a particle moving at a non-relativistic speed that suddenly stops. We usually assume that the radiated energy is small, and this was an unstated assumption in the problem in Jackson. If we let the particle accelerate in a potential field for long enough, the total radiated energy will be comparable to the energy of the particle, and our treatment of the energy of the particle as a constant is no longer valid.
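A rough order-of-magnitude sketch of this point (my own numbers, not from Jackson): for an electron brought uniformly to rest over a time $\tau$, the Larmor energy $P\tau$ with $P = q^2 a^2/(6\pi\epsilon_0 c^3)$ and $a = v/\tau$ gives a radiated fraction $E_{rad}/KE = q^2/(3\pi\epsilon_0 c^3 m \tau)$, independent of $v$:

```python
import math

q, m = 1.602e-19, 9.109e-31     # electron charge (C) and mass (kg)
eps0, c = 8.854e-12, 2.998e8

def radiated_fraction(tau):
    """Fraction of kinetic energy radiated by an electron decelerating
    uniformly to rest over time tau (non-relativistic Larmor estimate;
    the velocity cancels out of the ratio)."""
    return q**2 / (3 * math.pi * eps0 * c**3 * m * tau)

print(radiated_fraction(1e-9))   # gentle stop: ~1e-14, utterly negligible
print(radiated_fraction(1e-23))  # extremely abrupt stop: order 1
```

So a non-relativistic particle radiates appreciably only for extremely violent decelerations, which is why treating its energy as constant is usually safe.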
{ "domain": "physics.stackexchange", "id": 35328, "tags": "electromagnetism, electromagnetic-radiation" }
failed to launch this command
Question: When I try this command: roslaunch panda_moveit_config demo.launch rviz_tutorial:=true I get this error: No such file or directory: /opt/ros/noetic/share/franka_description/robots/panda_arm_hand.urdf.xacro [Errno 2] No such file or directory: '/opt/ros/noetic/share/franka_description/robots/panda_arm_hand.urdf.xacro' RLException: while processing /home/chizu/ws_moveit/src/panda_moveit_config/launch/planning_context.launch: Invalid <param> tag: Cannot load command parameter [robot_description]: command [['/opt/ros/noetic/lib/xacro/xacro', '/opt/ros/noetic/share/franka_description/robots/panda_arm_hand.urdf.xacro']] returned with code [2]. Param xml is <param if="$(eval arg('load_robot_description') and arg('load_gripper'))" name="$(arg robot_description)" command="$(find xacro)/xacro '$(find franka_description)/robots/panda_arm_hand.urdf.xacro'"/> The traceback for the exception was written to the log file Originally posted by nizar00 on ROS Answers with karma: 17 on 2021-08-22 Post score: 0 Answer: As written in the CHANGELOG in franka_ros (0.8.0 - 2021-08-03): BREAKING Remove panda_arm_hand.urdf.xacro. Use panda_arm.urdf.xacro hand:=true instead. you should change the line in /home/chizu/ws_moveit/src/panda_moveit_config/launch/planning_context.launch which was previously written as: <param if="$(eval arg('load_robot_description') and arg('load_gripper'))" name="$(arg robot_description)" command="$(find xacro)/xacro '$(find franka_description)/robots/panda_arm_hand.urdf.xacro'"/> to become: <param if="$(eval arg('load_robot_description') and arg('load_gripper'))" name="$(arg robot_description)" command="$(find xacro)/xacro '$(find franka_description)/robots/panda_arm.urdf.xacro' hand:=true"/> Originally posted by rizgiak with karma: 106 on 2021-08-25 This answer was ACCEPTED on the original site Post score: 9 Original comments Comment by yin-ye on 2021-08-27: Worked for me, thanks super much! Comment by Donjaga on 2021-08-29: Thanks a lot, dude it works! 
Comment by patelrishi1994 on 2021-08-30: Thank you, It worked.
{ "domain": "robotics.stackexchange", "id": 36836, "tags": "ros, xml" }
$N$ Independent Oscillators Entropy and Number of States
Question: I have been working on building intuition and experience with solving problems related to the microcanonical ensemble, but I haven't been able to solve the following problem: If you have $N$ independent oscillators, each having the same oscillation frequency $\omega$, given the energy $E=(N_E+\frac{N}{2})\hbar\omega$, find the number of states with energy E and calculate the entropy as a function of the energy. What I tried/thought: Since the oscillators don't interact and are said to have the same vibrational frequency, I don't see how the energy could be anything other than the sum of their vibrational energies, which would be a constant given that $\omega$ seem to be a constant. I'm also unsure how to interpret $N_E$, so I'm unable to move forward. My gut tells me to do something like this to calculate the number of states, $\frac{E}{N}=(\frac{1}{2}+\frac{N_E}{N})\hbar\omega$ from which, if I'm reading incorrectly that all having the same vibrational frequency means all vibrating at the same frequency and instead that if they vibrate they vibrate at the same frequency, I would extract that the fraction of particles vibrating to produce a system with a certain energy E would be given by $\frac{N_E}{N}=\frac{E}{N}-\frac{\hbar\omega}{2}$. The term subtracted from $\frac{E}{N}$ would represent a ground energy for each atom. The number of states that would satisfy this would be $N\choose N_E$, though I don't think this reduces to a nice number. Answer: The energy levels of a system of $N$ independent oscillators, all with the same frequency $\omega$, have the form $$ E(n_1,\dots,n_N) = \sum_{i=1}^N \hbar\omega\left(n_i+\frac12\right), $$ where $n_i$ is the number of quanta of energy in $i$-th oscillator. Hence, in your formula, $$ N_E = \sum_{i=1}^N n_i. 
$$ From here it is straightforward to obtain that the number of different states of the system of $N$ oscillators with energy corresponding to the number $N_E$ is equal to $$ \Gamma(N_E) = \sum_{n_1,\dots,n_N \geq 0} \Delta(n_1+\dots+n_N - N_E), $$ where $\Delta(x) = 1$ if $x=0$ and $\Delta(x) = 0$ if $x\neq0$. A simple explicit formula for $\Gamma(N_E)$ can be derived in different ways.
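One such derivation is the standard stars-and-bars count: distributing $N_E$ indistinguishable quanta over $N$ oscillators gives $\Gamma(N_E) = \binom{N_E + N - 1}{N - 1}$. A quick brute-force cross-check (my sketch):

```python
from math import comb
from itertools import product

def gamma_brute(N, NE):
    """Count occupation tuples (n_1, ..., n_N) with sum n_i = NE directly."""
    return sum(1 for ns in product(range(NE + 1), repeat=N) if sum(ns) == NE)

def gamma_formula(N, NE):
    """Stars and bars: compositions of NE into N non-negative parts."""
    return comb(NE + N - 1, N - 1)

for N, NE in [(2, 3), (3, 4), (4, 5)]:
    assert gamma_brute(N, NE) == gamma_formula(N, NE)
print(gamma_formula(4, 5))  # 56 states for N=4 oscillators, N_E=5 quanta
```

The entropy is then $S = k_B \ln\Gamma(N_E)$, which Stirling's approximation turns into an explicit function of $E$ via $N_E = E/\hbar\omega - N/2$.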
{ "domain": "physics.stackexchange", "id": 57565, "tags": "homework-and-exercises, statistical-mechanics, entropy, harmonic-oscillator" }
Why is the center of buoyancy located at the center of mass of the displaced fluid volume?
Question: Consider Archimedes' principle Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object. I do not understand why the center of buoyancy is the center of mass of the volume of the displaced fluid. In principle it could be another point, for instance the center of mass of the object (not the displaced fluid) or maybe a point of the displaced fluid but not the center of mass. Is there a simple way to understand why the center of buoyancy is located at the center of mass of the displaced volume of fluid? Answer: Let us suppose we remove the object and fill the space left (in the fluid) with the same fluid. Assume this portion of fluid becomes solid without changing its volume or density. It will be in equilibrium with the fluid. Now suppose the buoyancy force on this solidified portion is off the center of mass. This would imply a non-vanishing resultant force or torque, and the solidified portion would not be in equilibrium. A kind of inverse argument can also be given. Suppose instead the center of buoyancy always coincided with the center of mass of the object immersed in the fluid. Then you would never observe a resultant torque (i.e., rotations) on any object - and that is empirically not true.
{ "domain": "physics.stackexchange", "id": 31018, "tags": "buoyancy, fluid-statics" }
Java program to disturb user and waste SSD lifetime
Question: This program has three goals: Disturbing the user who accidentally runs this by opening a lot of windows Creating a lot of files that waste the lifetime of SSDs Destroying overview and wasting time - A lot of files shall be created also to prolong the time that is needed to delete all the unnecessary files. Ideally, creating unnecessary windows prevents you from prematurely quitting the program: opening the Task Manager should take a long time. The goal of this little program is to defend myself as a script kiddie and annoy my friends and family members. Virus.java public class Virus { public static void main(String[] args) { new FrameCreator().start(); new FileCreator().start(); } } FileCreator.java import java.io.File; import java.io.FileWriter; import java.io.BufferedWriter; import java.io.IOException; public class FileCreator extends Thread { @Override public void run() { int number = 0; while (true) { File file = new File(Integer.toString(number)); number++; try (BufferedWriter bw = new BufferedWriter(new FileWriter(file))) { bw.write("bla"); } catch (IOException e) { System.out.println("Virus could not be executed properly :("); } } } } FrameCreator.java import java.util.Random; import java.awt.FlowLayout; import javax.swing.JFrame; import javax.swing.JLabel; public class FrameCreator extends Thread { @Override public void run() { Random random = new Random(); while (true) { JFrame frame = new JFrame(); frame.setSize(200, 200); frame.getContentPane().setLayout(new FlowLayout()); frame.getContentPane().add(new JLabel("I prefer old HDD's.")); frame.setVisible(true); frame.setLocation(random.nextInt(800), random.nextInt(600)); try { Thread.sleep(40); } catch (Exception e) { e.printStackTrace(); } } } } Answer: Your program is very inefficient. First, to use up the SSD write cycles, you should write lots of data. A single three-byte string is not that much. 
Plus, there might be SSD implementations that detect when the same block is written and just store a pointer to the block, using copy-on-write. Instead, you should prepare a buffer with random data (a few megabytes is fine), write this buffer to a file, change one byte in each 512-byte area of the buffer and then write the buffer again, appending to the same file. This is probably handled more efficiently by the operating system than creating lots of very small files. To create real damage, your program should hide itself. Creating lots of pop-up windows is a very bad strategy since the user immediately notices that something is going on, navigates to the task manager and just kills the process, which closes all the windows at once. Therefore, don't interact with the user at all. Don't annoy them, don't let them even notice anything. Now this gets tricky since you have two opposing goals: Write to the SSD as fast as possible. This takes CPU cycles and will therefore probably activate the CPU fan after a short time. Be unsuspicious. This means reducing the write speed to a level that will not make any noise. To reduce the load on the CPU, you should not use character streams at all since the characters need to be converted into bytes before being written to disk. This takes time and makes noise. Instead, use a byte[] and write to a byte stream. You can even think about memory mapping to avoid copying memory blocks. To hide the generated file, don't create it in the current folder. Instead, hide it in AppData/Local/Temp or ~/.local/config, it will keep hidden longer than in ~/Downloads. Instead of extending the Thread class, you should implement Runnable. Luckily, you start your threads instead of running them, so you are not bitten by that bug. But there was a different memory leak bug in Java >= 6, I just don't remember the exact details. 
If your target environment is Java >= 8, you can write much less code by just having two methods and converting them to Runnable implicitly: import java.awt.FlowLayout; import java.io.IOException; import java.nio.file.Files; import java.nio.file.Paths; import java.util.Random; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.SwingUtilities; public class Virus { public static void main(String... args) { new Thread(Virus::createFiles).start(); new Thread(Virus::openWindows).start(); } private static void createFiles() { byte[] bytes = new byte[1 << 20]; new Random().nextBytes(bytes); int number = 0; while (true) { try { Files.write(Paths.get(Integer.toString(number)), bytes); number++; } catch (IOException e) { System.out.println("Virus could not be executed properly :("); } } } private static void openWindows() { while (true) { SwingUtilities.invokeLater(Virus::openSingleWindow); } } private static void openSingleWindow() { Random random = new Random(); JFrame frame = new JFrame(); frame.setSize(200, 200); frame.getContentPane().setLayout(new FlowLayout()); frame.getContentPane().add(new JLabel("I prefer old HDD's.")); frame.setVisible(true); frame.setLocation(random.nextInt(800), random.nextInt(600)); try { Thread.sleep(40); } catch (Exception e) { e.printStackTrace(); } } } Using method references instead of implementing the Runnable interface explicitly saves a lot of code, especially for UI applications that have lots of very small event handlers. When doing UI operations in multithreaded applications, you have to be very careful about thread synchronization. Therefore, any UI related code should run in the main Swing thread (SwingUtilities.invokeLater). By the way, a characteristic feature of a virus is that it spreads. Your program doesn't do this, therefore you should call it only Malware, not virus. It also doesn't infect other programs by changing their executable code, therefore it could only be a worm. 
If you do damage, at least get the terminology right. :)
{ "domain": "codereview.stackexchange", "id": 30685, "tags": "java, file, swing" }
How uncertainty principle can be used to calculate the range of actual variables
Question: I know the uncertainty relation $\Delta x \Delta p \ge \frac{\hbar}{2}$ says that the product of the uncertainties in position and momentum, or energy and time, must be greater than or equal to $\frac{\hbar}{2}$, but I don't understand how this relation can be used to estimate the size or energy of a system. i.e. When we want to prove that an electron cannot remain inside the nucleus, we use this formula and put the size of the nucleus (femtometers) in it and get the energy of the electron (using the momentum calculated from the relation), which contradicts the actual value measured in experiment, so we conclude that the electron cannot reside in the nucleus. Also, we use the energy-time uncertainty relation to calculate the time range of exchange particles using their energy values, but I don't understand how we can do that, since the uncertainty relation only tells us about the uncertainty in the variables and not anything about their typical values? A similar type of question is asked here Why the uncertainty principle can be used for estimation? but no satisfactory answer is provided Answer: The uncertainties in the Heisenberg uncertainty principle are actually standard deviations from the mean. The actual value of a measured quantity is usually probabilistic and so the measured quantity has an associated distribution with it. That distribution would have a mean and standard deviation. Let's say that the electron is confined to the nucleus. And the boundaries of the nucleus have x coordinates $x_{min}$ and $x_{max}$. Then the probability of measuring the electron at position $x<x_{min}$ or $x>x_{max}$ is zero. So for this electron confined to the nucleus, the minimum value of the distribution is $x_{min}$ and the maximum value is $x_{max}$. Since the standard deviation can never exceed the range of a distribution, a reasonable upper bound on it would be the range itself, $x_{max} - x_{min}$. So you can treat the size of the nucleus as an upper bound on $\Delta x$. 
A similar argument could be applied to your other example.
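As a concrete illustration of the electron-in-the-nucleus estimate mentioned in the question (my own numbers; using $E \approx pc$, which is appropriate here because the resulting momentum turns out to be ultra-relativistic):

```python
hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
MeV = 1.602e-13     # J

dx = 1e-15                    # take Delta x ~ nuclear size, 1 fm
dp = hbar / (2 * dx)          # minimum momentum spread from the relation
E = dp * c / MeV              # ultra-relativistic: E ~ p c, in MeV

print(E)  # ~100 MeV, far above the few MeV of electrons seen in beta decay
```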
{ "domain": "physics.stackexchange", "id": 70124, "tags": "heisenberg-uncertainty-principle" }
Diluting methanol in water: is it exothermic?
Question: Can someone explain in simple terms why the temperature increases when mixing water with methanol? I do not have a strong background in chemistry, but I usually have to mix both liquids (sometimes also water + ethanol) as part of my work in a biology lab, and I have always wondered about the reason behind the evident increase of temperature upon mixing. I guess that the breaking/formation of H-bonds is involved, but I haven't found a clear ("easy-to-understand") explanation in online searches. Answer: When the methanol and water are separate, they both exhibit hydrogen bonding with themselves. When the water and methanol are mixed together, some of the existing hydrogen bonding (water-water or methanol-methanol) is disturbed and now there is hydrogen bonding between water and methanol. This new hydrogen bonding pair (water-methanol) releases energy when it forms, as it is more favorable than maintaining only water-water and methanol-methanol hydrogen bonding; the temperature increase you observe is the result of the released energy warming the solution.
{ "domain": "chemistry.stackexchange", "id": 10816, "tags": "water, aqueous-solution, heat" }
Meaning of the general matrix element $\langle x'|O|x\rangle $
Question: In a recent lecture we were told that $\langle x'|\hat{O}|x \rangle = O(x,x') = O(x)\delta(x-x')$ "due to the locality of quantum mechanical observables". I have no idea what this is supposed to mean; any comments or references to a book from which this, or something similar, originates would be much appreciated. Answer: Since asking this question, I now feel I understand the meaning: Firstly the matrix element can only really be understood within a corresponding matrix equation such as: $$\langle \psi |\hat{O}|\psi\rangle=\int \mathrm dx \int \mathrm dx' \langle \psi |\,x'\rangle \langle x' |\hat{O}|\,x\rangle \langle x\, |\,\psi\rangle $$ Now in non-relativistic QM $\hat{O}=f(\hat{p},\hat{x})$, i.e. the operator can be written as some function of the momentum and position operators. Firstly, the simplest case is if $\hat{O}=f(\hat{x})$; then we have \begin{align}&=\int \mathrm dx \int \mathrm dx' \langle \psi |\,x'\rangle \langle x' |\,f(\hat{x})|\,x\rangle \langle x\, |\,\psi\rangle \\&= \int \mathrm dx \int \mathrm dx' \langle \psi |\,x'\rangle f(x)\langle x' |\,x\rangle \langle x\, |\,\psi\rangle \\ &= \int \mathrm dx \int\mathrm dx' \langle \psi |\,x'\rangle f(x)\delta(x-x') \langle x\, |\,\psi\rangle \end{align} Next we can consider the case when $\hat{O}=f(\hat{p})$: \begin{align}&=\int\mathrm dx \int \mathrm dx' \langle \psi |\,x'\rangle \langle x' |\,f(\hat{p})|\,x\rangle \langle x\, |\,\psi\rangle\\ & = \int\mathrm dp \int\mathrm dx \int\mathrm dx' \langle \psi |\,x'\rangle f(p)\langle x' |\,p\rangle\langle p\, |x\rangle \langle x\, |\,\psi\rangle \\& = \int \mathrm dp \int\mathrm dx \int\mathrm dx' \langle \psi |\,x'\rangle f(p)e^{-ip(x'-x)/\hbar} \langle x\, |\,\psi\rangle \\& = \int \mathrm dp \int \mathrm dx \int \mathrm dx' \langle \psi |\,x'\rangle f\left(-i\hbar\frac{\partial}{\partial x'}\right)e^{-ip(x'-x)/\hbar} \langle x\, |\,\psi\rangle \\ &= \int \mathrm dx \int \mathrm dx' \langle \psi |\,x'\rangle 
f\left(-i\hbar\frac{\partial}{\partial x'}\right)\delta(x-x') \langle x\, |\,\psi\rangle \\&= \int \mathrm dx' \langle \psi |\,x'\rangle f\left(-i\hbar\frac{\partial}{\partial x'}\right) \langle x'\, |\,\psi\rangle \end{align} Hence we have again related the matrix form of the operator to its corresponding differential operator in the position representation. This process can be generalised for any product of operators $\hat{p}$ and $\hat{x}$ and so in general we see that: $$\langle x' |\hat{O}|\,x\rangle=\langle x' |f(\hat{x},\hat{p})|\,x\rangle=f\left(x',-i\hbar\frac{\partial}{\partial x'}\right)\delta(x-x')\,.$$
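These identities can be sanity-checked numerically on a grid (a toy example of mine, with $\hbar = 1$): the expectation of $\hat p$ computed by multiplying by $p$ in the momentum representation should match the one computed with $-i\,\partial/\partial x$ in the position representation:

```python
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)         # momentum grid (hbar = 1)

psi = np.exp(-(x - 1)**2 / 2) * np.exp(2j * x)  # Gaussian carrying momentum 2
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)

# <p> in the momentum representation: multiply by p
phi = np.fft.fft(psi)
p_mom = (p * np.abs(phi)**2).sum() / (np.abs(phi)**2).sum()

# <p> in the position representation: -i d/dx via finite differences
dpsi = np.gradient(psi, dx)
p_pos = np.real((np.conj(psi) * (-1j) * dpsi).sum() * dx)

print(p_mom, p_pos)  # both ~2, up to finite-difference error
```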
{ "domain": "physics.stackexchange", "id": 28402, "tags": "quantum-mechanics" }
Can $|\Psi\rangle\simeq\sum_k |u_k\rangle|v_k\rangle$ be maximally entangled even if $\{|u_k\rangle\}_k,\{|v_k\rangle\}_k$ are not orthonormal?
Question: Let $\newcommand{\ket}[1]{\lvert#1\rangle}\{\ket{u_k}\}_k,\{\ket{v_k}\}_k\subset\mathcal H$ be orthonormal bases in an $N$-dimensional space. It then follows that the state $$\ket\Psi = C\sum_{k=1}^N \ket{u_k}\otimes\ket{v_k}\tag1$$ is maximally entangled (where $C$ is a normalisation constant). Or more generally, it means that $|\Psi\rangle$ has rank $N$ (which if the embedding space is larger does not correspond to a maximally entangled state). Does the opposite direction hold? In other words, if we know that a state $\ket\Psi$ is maximally entangled and can be written as (1), can we conclude that $\langle u_j|u_k\rangle=\langle v_j|v_k\rangle=\delta_{jk}$? Equivalently, suppose $\ket\Psi$ has the form in (1) with $\{\ket{u_k}\}_k,\{\ket{v_k}\}_k$ not orthogonal. Can $\ket\Psi$ then be maximally entangled? More generally, suppose $\dim\mathcal H=M$ with $M>N$ (the vectors are not a basis). If $|\Psi\rangle$ is as in (1), can it have rank $N$ even if $\{\ket{u_k}\}_k,\{\ket{v_k}\}_k$ are not orthogonal? For example, in the simplest case with $M=2$ and $N>M$, the question is whether a state of the form $$\frac{1}{\sqrt{2(1+\Re[\langle u_1|u_2\rangle\langle v_1|v_2\rangle])}} \left(\ket{u_1}\otimes\ket{v_1}+\ket{u_2}\otimes\ket{v_2}\right)$$ can be maximally entangled (or more precisely, have Schmidt coefficients $(1/\sqrt2,1/\sqrt2,0,...,0)$) even if $\langle u_1|u_2\rangle,\langle v_1|v_2\rangle\neq0$. I'll show here that it is crucial that both bases are non-orthogonal for this to be possible. If only one of the two sets, say $\{\ket{v_k}\}_k$, is orthonormal, then the matrix of coefficients of $\ket\Psi$, write it with $\Psi$, has the form $\Psi = U \sqrt D V^T$, where $U,V$ are the matrices whose columns equal $\ket{u_k}$ and $\ket{v_k}$, respectively, and $D$ is diagonal. 
The orthonormality of $\{\ket{v_k}\}_k$ (and thus of $\{\ket{\bar v_k}\}_k$) then implies that $$\Psi\Psi^\dagger = UDU^\dagger,$$ which tells us that the Schmidt coefficients of $\ket\Psi$ majorize the diagonal of $\sqrt{D}$. E.g. if $D$ is a multiple of the identity then $\Psi\Psi^\dagger \simeq UU^\dagger\neq I$, and thus $\ket\Psi$ is not maximally entangled. This would seem to suggest that, if at least one of the bases is orthonormal, then indeed $\ket\Psi$ is maximally entangled only if the other basis is also orthonormal. But this still leaves open the possibility of it being possible when both bases are not orthonormal. Here is an example of a pair of non-orthogonal states $|\psi\rangle$ and $|\phi\rangle$ such that $|\psi\psi\rangle+|\phi\phi\rangle$ has rank 2. Define \begin{align} 2\sqrt2 \ket\psi &= \ket1 + (2+i)\ket2 - \ket3 + i \ket4, \\ 2\sqrt2 \ket\phi &= \ket1 + i\ket2 + (1-2i)\ket3 - i \ket4. \end{align} Let $\ket\Psi\equiv (\ket{\psi\psi}+\ket{\phi\phi})/\sqrt{3/2}$. You can then verify that the corresponding matrix of coefficients is $$C = \frac{1}{2\sqrt{6}}\begin{pmatrix} 1 & 1+i & -i & 0 \\ 1+i & 1+2 i & 0 & i \\ -i & 0 & -1-2 i & -1-i \\ 0 & i & -1-i & -1 \end{pmatrix}.$$ As can be readily checked, the only non-vanishing eigenvalue of $C^\dagger C$ is a two-fold degenerate $+1/2$, hence $\ket\Psi$ has rank $2$. It's possible that this is only possible because the states live in a larger space, i.e. $M=4$ but $N=2$. I'm wondering if there is a good way to understand why this can happen, and if it possible also when $N=M$. Answer: To my surprise, your statement is not true. Consider the qubit example: $$ |\Psi\rangle=|00\rangle-|++\rangle. $$ This is clearly of the correct form but with non-orthogonal states (in case you're worrying about the negative sign, we put $|u_2\rangle=|+\rangle$ and $|v_2\rangle=-|+\rangle$). However, it is maximally entangled. 
To see this, we expand it out: $$ |\Psi\rangle=\frac12\left(|00\rangle-|01\rangle-|10\rangle-|11\rangle\right)=\frac{1}{\sqrt{2}}(|0-\rangle-|1+\rangle), $$ which is equivalent under local unitaries (Hadamard on the second qubit) to a Bell state, and hence maximally entangled.
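Both claims in this exchange are easy to check numerically; here is a small NumPy sketch (not part of the original Q&A — variable names are mine) computing the Schmidt coefficients of the qubit example and of the rank-2 example from the question:

```python
import numpy as np

# Qubit example from the answer: coefficient matrix of |00> - |++>
zero = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
C2 = np.outer(zero, zero) - np.outer(plus, plus)
# Schmidt coefficients = singular values; both come out as 1/sqrt(2)
print(np.linalg.svd(C2, compute_uv=False))

# Rank-2 example from the question: |psi psi> + |phi phi>, normalized by sqrt(3/2)
# (np.outer does NOT conjugate, which is what the coefficient matrix psi_a psi_b needs)
psi = np.array([1, 2 + 1j, -1, 1j]) / (2 * np.sqrt(2))
phi = np.array([1, 1j, 1 - 2j, -1j]) / (2 * np.sqrt(2))
C4 = (np.outer(psi, psi) + np.outer(phi, phi)) / np.sqrt(3 / 2)
# Eigenvalues of C^dagger C: a two-fold 1/2 and two zeros, as claimed
print(np.round(np.linalg.eigvalsh(C4.conj().T @ C4), 6))
```

The second printout confirms the "readily checked" statement: the only non-vanishing eigenvalue of $C^\dagger C$ is a doubly degenerate $1/2$.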
{ "domain": "quantumcomputing.stackexchange", "id": 1969, "tags": "quantum-state, entanglement" }
How long would it take a computer with twice the processing power to solve a polynomial time problem?
Question: Say I have some problem of $O\left(n^k\right)$ complexity. If I were to solve the problem on a computer $x$, it would take time $t$. Now I have a new computer $x'$, which has double the computing power of $x$. How long would it take $x'$ to solve the same problem in terms of $t$? Answer: It will take half the time of course, $t'=t/2$, and the asymptotic complexity remains $O(n^k)$!
{ "domain": "cs.stackexchange", "id": 4318, "tags": "algorithm-analysis, asymptotics, runtime-analysis, applied-theory" }
Is there a 5th fundamental force which may be responsible for the behavior of matter to be wave or a particle?
Question: We already know about the 4 fundamental forces in nature, which are the gravitational, electromagnetic, weak nuclear and strong nuclear forces. There are several other questions on this website which are somewhat related to this question, but I want to know if there is any other fundamental force which is responsible for wave particle duality and is internal to matter as a system, deciding when matter would choose between particle and wave behaviour? If I have not addressed the problem properly, then I ask it in a short version below (I take matter as a system): Is there any other internal fundamental force which is responsible for the wave particle duality of matter? Answer: Quantum mechanics is a physics theory, and physics theories impose extra axioms on mathematical solutions of equations in order to fit observations and predict future behaviors. These axioms are called postulates, laws, principles and depend directly on experimental observations. The postulates of quantum mechanics make it a probabilistic theory, not a deterministic one. The wave function is postulated as defining the probability distribution by evaluation of $Ψ^*Ψ$, a real number between 0 and 1. The wave particle duality is the observation that when interacting, quantum mechanically described particles act like a classical particle. An accumulation of measurements though is needed for the QM solutions to predict the probable location of a particle, and that distribution has a wave behavior, because in general the quantum mechanical equations are wave equations. This can be seen clearly in the accumulation of data in a single-electron-at-a-time double slit experiment. The standard model, and any extensions of it, are a meta level on the wave function solutions of the quantum mechanical boundary conditions problem. So within the present framework any new forces from GUT theories or supersymmetry etc. will not change the axiomatic status of the probabilistic nature of quantum mechanics.
There does exist serious research which tries to make the quantum mechanical level a meta-level of a deterministic system, which would reproduce all the successes of quantum mechanics in describing the data as emergent from an underlying theory. Examples are Bohmian mechanics, or the research of G. 't Hooft, who has honored us by discussing it a few years ago, but these are not mainstream research directions. In deterministic theories, the probabilistic nature used in the postulates of quantum mechanics will be seen as emergent, but not by any particular forces at the standard model level and extensions. These will be emergent meta levels out of the hypothetical deterministic forces.
{ "domain": "physics.stackexchange", "id": 58758, "tags": "quantum-mechanics, forces, classical-mechanics, particle-physics, wave-particle-duality" }
Of 2-bromobutan-2-ol and trans-1,2-dimethylcyclobutane which is chiral as well as dissymmetric?
Question: In an examination I was asked to determine the molecules which are chiral as well as dissymmetric. There were four options, and among them, two were achiral, as they had an improper axis of symmetry. The other two options are 2-bromobutan-2-ol and trans-1,2-dimethylcyclobutane: These two compounds are chiral, as they do not have an improper axis of symmetry. I couldn't find any proper axis of symmetry either. The question has only one correct answer, but I don't know which one is correct. Please help me with the answer. Answer: I assume you have already studied molecular symmetry and point groups. So, first of all, among the existing point groups, you can have chiral point groups (point groups C1, Cn, Dn) and achiral point groups (all the others). As a matter of fact, chirality is unchanged when it comes to operations such as E or Cn (that equal an even number of reflections). On the contrary, this is not the case when it comes to operations that equal an odd number of reflections (σ, Sn, inversion), as they can convert a right-handed object into a left-handed object. So an improper rotation axis (Sn) implies that the molecule cannot be chiral. Both molecules that you show here are chiral, as you correctly pointed out. 2-bromobutan-2-ol belongs to point group C1 as it has no symmetry other than the identity (which is the C1 axis that every molecule has and that corresponds to a rotation by 360° that of course brings the molecule back to the same initial position). Since the C1 axis is the only symmetry element, C1 molecules are termed asymmetric. The other two chiral point groups (Cn and Dn) are dissymmetric.
trans-1,2-dimethylcyclobutane has a C2 axis that allows you to rotate the molecule by 180° (which is 360°/2, that's why it corresponds to a 2-fold rotation axis) obtaining a situation that is equivalent to the initial one: Let's see this with colors, so it's easier: So, in conclusion, since (1R,2R)-1,2-dimethylcyclobutane belongs to the C2 point group, it is "chiral as well as dissymmetric". Note: The above is only true in a first order approximation. A more detailed look will reveal that the cyclobutane ring is puckered and the C2 axis no longer exists.
{ "domain": "chemistry.stackexchange", "id": 15889, "tags": "organic-chemistry, stereochemistry, molecular-structure, chirality, symmetry" }
HackerRank: Friend Circles (DFS in Java)
Question: I got this problem on a HackerRank challenge and this was the best I could come up with. It's basically just using DFS to find connected components (in this case, friend circles). I was having a tough time figuring out how to track unvisited nodes. Please let me know how I can improve this code. It appears to work, but the test cases given were quite simple. Problem: There are N students in a class. Some of them are friends, while some are not. Their friendship is transitive in nature, i.e., if A is friend of B and B is friend of C, then A is also friend of C. A friend circle is a group of students who are directly or indirectly friends. You have to complete a function int friendCircles(char[][] friends) which returns the number of friend circles in the class. Its argument, friends, is a NxN matrix which consists of characters "Y" or "N". If friends[i][j] == "Y" then i-th and j-th students are friends with each other, otherwise not. You have to return the total number of friend circles in the class. Sample Input 0: 4 YYNN YYYN NYYN NNNY Sample Output 0: 2 Explanation 0: There are two pairs of friends [0, 1] and [1, 2]. So [0, 2] is also a pair of friends by transitivity. So first friend circle contains (0, 1, 2) and second friend circle contains only student 3 Sample Input 1: 5 YNNNN NYNNN NNYNN NNNYN NNNNY Sample output 1: 5 Constraints (sorry, couldn't get formatting down so I had to put the constraints down here): 1 <= N <= 300. Each element of matrix friends will be "Y" or "N". Number of rows and columns will be equal in friends. friends[i][j] = "Y", where 0 <= i < N. friends[i][j] = friends[j][i], where 0 <= i < j < N. 
Solution: import java.util.ArrayDeque; import java.util.Deque; import java.util.HashSet; import java.util.Set; public class Solution { public static int friendsCircle(char[][] friends) { // The only alternative I could think of, instead of // tracking unvisited nodes, was to put visited nodes // in a set and then do setOfAllNodes.removeAll(visited) // to see which nodes are still unvisited Set<Integer> unvisited = new HashSet<>(); boolean[] visited = new boolean[friends.length]; Deque<Integer> stack = new ArrayDeque<>(); int connectedComponents = 0; for (int i = 0; i < friends.length; i++) { unvisited.add(i); } // dfs on friends matrix while (!unvisited.isEmpty()) { stack.push(unvisited.iterator().next()); connectedComponents++; while (!stack.isEmpty()) { int currVertex = stack.pop(); if (visited[currVertex] == false) { visited[currVertex] = true; unvisited.remove(currVertex); for (int i = 0; i < friends[currVertex].length; i++) { if (friends[currVertex][i] == 'Y' && visited[i] == false) { stack.push(i); } } } } } return connectedComponents; } public static void main(String[] args) { char[][] friends = { {'Y','Y','N','N'}, {'Y','Y','Y','N'}, {'N','Y','Y','N'}, {'N','N','N','Y'} }; System.out.println(friendsCircle(friends)); } } Answer: Welcome to Code Review. Your code is good and easy to read, you can iterate over only half of the matrix (in my case the elements of the matrix a[i,j] with i < j) because from the test matrices always results a[i, j] = a[j, i] so friendship in symmetric . I'm using a TreeMap<Integer, Set<Integer>> to store initial situation to guarantee natural order of the keys so it will always the lower index the index where friends will be added : everybody is friend of himself (reflexive property) so map[i]={ i }. 
int n = friends.length; Map<Integer, Set<Integer>> map = new TreeMap<>(); for (int i = 0; i < n; ++i) { Set<Integer> set = new TreeSet<>(); set.add(i); map.put(i, set); } Now I check all elements a[i, j] with i < j. For each friendship I look up the key whose set contains i and the key whose set contains j; if the two keys differ, I merge the second set into the first and remove the second key. (Simply adding j to i's set is not enough: j may already have collected friends of its own, so its whole set has to be merged, otherwise e.g. the pairs (0, 2) and (1, 2) with no direct (0, 1) friendship would be counted as two circles instead of one.) for (int i = 0; i < n; ++i) { for (int j = i + 1; j < n; ++j) { if (friends[i][j] == 'Y') { Integer keyI = null, keyJ = null; for (Map.Entry<Integer, Set<Integer>> e : map.entrySet()) { if (e.getValue().contains(i)) { keyI = e.getKey(); } if (e.getValue().contains(j)) { keyJ = e.getKey(); } } if (!keyI.equals(keyJ)) { map.get(keyI).addAll(map.get(keyJ)); map.remove(keyJ); } } } } The number of circles coincides with the number of keys left in the map at the end: public static int countFriendsCircles(char[][] friends) { int n = friends.length; Map<Integer, Set<Integer>> map = new TreeMap<>(); for (int i = 0; i < n; ++i) { Set<Integer> set = new TreeSet<>(); set.add(i); map.put(i, set); } for (int i = 0; i < n; ++i) { for (int j = i + 1; j < n; ++j) { if (friends[i][j] == 'Y') { Integer keyI = null, keyJ = null; for (Map.Entry<Integer, Set<Integer>> e : map.entrySet()) { if (e.getValue().contains(i)) { keyI = e.getKey(); } if (e.getValue().contains(j)) { keyJ = e.getKey(); } } if (!keyI.equals(keyJ)) { map.get(keyI).addAll(map.get(keyJ)); map.remove(keyJ); } } } } return map.size(); }
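For comparison — not part of the original review — the same count can be obtained with a standard disjoint-set (union-find) structure, which is the textbook tool for merging transitive groups; a Python sketch:

```python
def friend_circles(friends):
    """Count connected components of a 'Y'/'N' adjacency matrix via union-find."""
    n = len(friends)
    parent = list(range(n))

    def find(x):
        # Walk to the root, halving the path as we go to keep trees shallow
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):          # symmetric matrix: upper triangle suffices
            if friends[i][j] == 'Y':
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri        # union the two circles

    # Each circle is represented by exactly one self-rooted node
    return sum(1 for i in range(n) if find(i) == i)

print(friend_circles(["YYNN", "YYYN", "NYYN", "NNNY"]))  # -> 2
```

This runs in nearly O(N²) time for the N×N matrix, versus the repeated set scans of the map-based version.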
{ "domain": "codereview.stackexchange", "id": 37416, "tags": "java, programming-challenge, graph" }
More concise and/or idiomatic max subarray in Clojure?
Question: I've implemented the following two versions of the classic "Max Sub-Array" problem in Clojure, using the Kadane algorithm. First with loop / recur (defn max-sub-array [A] (loop [x (first A) a (rest A) max-ending-here 0 max-so-far 0] (if (seq a) (recur (first a) (rest a) (max x, (+ max-ending-here x)) (max max-so-far, max-ending-here)) max-so-far))) Then with reduce (defn max-sub-array-reduction [A] (letfn [(find-max-sub-array [[max-ending-here max-so-far] x] [(max x (+ max-ending-here x)) (max max-so-far max-ending-here)])] (second (reduce find-max-sub-array [0 0] A)))) Is there a more concise implementation, perhaps using filter or merely by making the reduce version more "idiomatic" somehow? Answer: Great answer from Jean Niklas L'Orange on the Clojure Google Group: (defn max-subarray [A] (let [pos+ (fn [sum x] (if (neg? sum) x (+ sum x))) ending-heres (reductions pos+ 0 A)] (reduce max ending-heres)))
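The reductions idiom translates almost one-to-one into Python's itertools.accumulate — a cross-language sketch (my own, not from the original thread) that may help readers unfamiliar with Clojure:

```python
from itertools import accumulate

def max_subarray(A):
    # pos+ from the Clojure answer: restart the running sum once it goes negative
    pos_plus = lambda s, x: x if s < 0 else s + x
    # initial=0 (Python 3.8+) mirrors (reductions pos+ 0 A); note that including
    # the initial 0 means an all-negative input yields 0 (the empty subarray)
    return max(accumulate(A, pos_plus, initial=0))

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # -> 6
```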
{ "domain": "codereview.stackexchange", "id": 2463, "tags": "clojure" }
Optics of perceived shadows on LCD monitor
Question: After changing my desktop wallpaper on my LED-lit LCD monitor to one with a dark corner, I started seeing shadows where the clock widget is while looking sideways at the screen. These shadows move with the turn of my head and disappear when I look straight. My question has two parts: Is this normal with LCD screens or is it a result of the fact that I wear very thick glasses? Does this effect have a name, and where can I read more about it? A diagram illustrating light propagation would be very much appreciated. Below is the image in question. I see dark blue shadows of the big numbers while looking sideways at them. Answer: Lenses can exhibit chromatic aberration due to a color (wavelength) dependent index of refraction in the lens material. Chromatic aberration can cause the three colors from the red-green-blue (RGB) display to separate when viewed off the lens axis: Left: on axis. Right: off axis. In less pronounced cases, this might lead to a ghosting of images. Definitely causes me problems when I work in CAD programs and don't view the screen straight on.
{ "domain": "physics.stackexchange", "id": 24354, "tags": "optics, visible-light, shadow" }
Applications for set theory, ordinal theory, infinite combinatorics and general topology in computer science?
Question: I am a mathematician interested in set theory, ordinal theory, infinite combinatorics and general topology. Are there any applications for these subjects in computer science? I have looked a bit, and found a lot of applications (of course) for finite graph theory, finite topology, low dimensional topology, geometric topology etc. However, I am looking for applications of the infinite objects of these subjects, i.e. infinite trees (Aronszajn trees for example), infinite topology etc. Any ideas? Thank you!! Answer: One major application of topology in semantics is the topological approach to computability. The basic idea of the topology of computability comes from the observation that termination and nontermination are not symmetric. It is possible to observe whether a black-box program terminates (simply wait long enough), but it's not possible to observe whether it doesn't terminate (since you can never be certain you have not waited long enough to see it terminate). This corresponds to equipping the two-point set {HALT, LOOP} with the Sierpinski topology, where $\emptyset$, $\{HALT\}$, and $\{HALT, LOOP\}$ are the open sets. So then we can basically get pretty far equating "open set" with "computable property". One surprise of this approach to traditional topologists is the central role that non-Hausdorff spaces play.
This is because you can basically make the following identifications $$ \begin{matrix} \mathbf{Computability} & \mathbf{Topology}\\ \mbox{Type} & \mbox{Space} \\ \mbox{Computable function} & \mbox{Continuous function} \\ \mbox{Decidable set} & \mbox{Clopen set} \\ \mbox{Semi-decidable set} & \mbox{Open set} \\ \mbox{Set with semidecidable complement} & \mbox{Closed set} \\ \mbox{Set with decidable equality} & \mbox{Discrete space} \\ \mbox{Set with semidecidable equality} & \mbox{Hausdorff space} \\ \mbox{Exhaustively searchable set} & \mbox{Compact space} \\ \end{matrix} $$ Two good surveys of these ideas are MB Smyth's Topology in the Handbook of Logic in Computer Science and Martin Escardo's Synthetic topology of data types and classical spaces. Topological methods also play an important role in the semantics of concurrency, but I know much less about that.
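The HALT/LOOP asymmetry can be made concrete with a toy semi-decision procedure — a hypothetical Python sketch of my own, not from the original answer (the callback name and example are assumptions for illustration):

```python
from itertools import count

def semidecide(halts_within):
    """Semi-decide termination: 'halts_within(n)' reports whether the observed
    computation has halted within n steps. We can confirm HALT by waiting long
    enough, but LOOP can never be confirmed -- on a non-halting input this
    loop itself simply never returns, mirroring the Sierpinski-space picture."""
    for n in count():
        if halts_within(n):
            return "HALT"
    # unreachable: non-termination is the only 'answer' for LOOP

# A computation that happens to halt at step 7:
print(semidecide(lambda n: n >= 7))  # -> HALT
```

This is exactly the sense in which {HALT} is open (observable) while {LOOP} is not.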
{ "domain": "cstheory.stackexchange", "id": 3269, "tags": "set-theory, topology, topological-graph-theory" }
What can cause the bloating in high protein diet of Whey proteins?
Question: I am wondering what can cause the swelling of the gastrointestinal system, i.e. bloating, after a high-protein diet of whey proteins. The liver breaks those proteins down into branched chain amino acids (BCAA), which can cause this swelling. However, I am not sure if this is the reason. It may be the carbohydrates which are broken down to enable this BCAA synthesis. If you take the protein diet with carbohydrates, there seems to be less swelling. So apparently easier energy for BCAA synthesis. The combination of proteins, carbohydrates and lipids seems to be important. Creatine should also be mentioned. The fat burner thing is also important here, I think. What can cause the swelling of the gastrointestinal system after a high-protein diet of whey proteins? Answer: The cause of the swelling is a lack of lymphatic carriage capability of proteins to the muscles, because of systemic conditions affecting the circulation of proteins through your lymphatic system. There will be edema extracellularly because of insufficient carriage of proteins through the lymphatic system. My schematic drawing about the fluid movement between these spaces. I started to think why some people do not observe so much swelling. I discussed this with a few of them and came to the conclusion that the main difference in those people who did not have noticeable swelling was that they did lymphatic system massage by maximising abdominal breathing after taking whey proteins (before training and after training). I think it sounds sensible to some extent. Those people also included lymphatic system massage in their recovery after training. One video here: Lymphatic Drainage for Abdomen and Trunk. So another possible cause of the swelling is insufficient drainage of the lymphatic system. The swelling does not happen to the same extent in all people. Some people swell a lot after one intake of protein drink. Some people also do not have to do lymphatic massage. My conjecture: The reason for this is most probably one glue-gene-group.
Those people who have this genome have other systemic diseases that cause the lymphatic drainage to have a high proportion of proteins already: for instance, uncontrolled asthma patients, who are circulating lysed DNAs of lymphocytes from the small bronchial tree through the lymphatic circulation. The lymphatic circulation is the system which circulates the proteins to the muscles. After absorption from the digestive tract, the lymphatic circulation carries proteins systemically. However, since there is already a high proportion of proteins, the lymphatic circulation cannot take on many more proteins to carry. Our body has to remove toxic wastes before it can build new materials. Catabolism then conquers anabolism in some situations.
{ "domain": "biology.stackexchange", "id": 2004, "tags": "biochemistry, proteins, lipids, carbohydrates" }
Subgraph containing all nodes and edges that are part of length-limited simple s-t paths in a digraph
Question: Note: I posted a similar question regarding undirected graphs. Given: a digraph $G$ with no multiple edges or loops; a source node $s$; a target node $t$; a maximal path length $l$. I am looking for $G'$, the subgraph of $G$ that contains exactly those nodes and edges of $G$ that are part of at least one simple path from $s$ to $t$ with length $\leq l$. Note that I don't need to enumerate the paths. Answer: As the question is stated, having $l$ as part of the input, the problem is $\mathsf{NP}$-hard. This follows as a special case of the classification of the class of patterns for which the directed subgraph homeomorphism problem is $\mathsf{NP}$-complete, in Fortune, Hopcroft, and Wyllie's paper The directed subgraph homeomorphism problem. In particular the following problem is $\mathsf{NP}$-complete: Given a directed graph $G$ and vertices $s,t,v$, does there exist a (simple) $(s,t)$-path through $v$?
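Consistent with the hardness result, nothing better than brute force is known in general; for small instances, though, the subgraph can be collected directly by enumerating bounded simple paths — an illustrative Python sketch of my own, not from the original answer:

```python
def length_limited_subgraph(adj, s, t, l):
    """Collect the nodes and edges lying on some simple s->t path with <= l edges.
    adj: dict mapping each node to a list of out-neighbors (a digraph).
    Brute force: fine for tiny graphs, exponential in the worst case."""
    nodes, edges = set(), set()

    def dfs(v, path):
        if v == t:                       # found a simple s->t path within budget
            nodes.update(path)
            edges.update(zip(path, path[1:]))
            return
        if len(path) - 1 >= l:           # already used l edges, cannot extend
            return
        for w in adj.get(v, ()):
            if w not in path:            # keep the path simple
                dfs(w, path + [w])

    dfs(s, [s])
    return nodes, edges

adj = {"s": ["a", "b"], "a": ["t"], "b": ["c"], "c": ["t"]}
print(length_limited_subgraph(adj, "s", "t", 2))  # only s->a->t fits the budget
```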
{ "domain": "cstheory.stackexchange", "id": 2621, "tags": "graph-algorithms" }
What is the morphological difference between Leydig cell in human and pig?
Question: The pig is only an example, just an animal. Leydig cells have protein inclusions (Reinke crystals) that are mostly made of crystallised lipofuscin. They are secretory inclusions, i.e. inclusions formed in secretory cells. An example of a Leydig cell in a pig's testicle: My teacher told me that there is a difference in the existence of some cells between humans and animals. However, I cannot find such a difference. I have not been provided a slide of a human Leydig cell, so I have not been able to compare. What is the morphological difference between Leydig cells in human and pig? Answer: Perhaps what your teacher meant was not so much a difference in Leydig cell morphology, but in interstitial tissue morphology, i.e. the tissue which occupies the space in between seminiferous tubules. Leydig cells are its most interesting component, others being small blood vessels (a lot of them), nerves and connective tissue (mostly fibroblasts, mastocytes, lymphocytes and macrophages). In different species the interstitial tissue looks different, for example: Leydig cells form clumps around blood vessels, and are surrounded by fibroblasts (which makes them more difficult to isolate), in guinea pigs, but not in mice or rats; in humans a lot of interstitial tissue is extracellular matter, Leydig cells are not so frequent, but more active; and, last but not least, in pigs, there is little extracellular matter and a lot of Leydig cells that are very easily isolated (and produce large amounts of estrogen). I found all this in some old lecture notes; I couldn't dig up any sensible reference, though.
{ "domain": "biology.stackexchange", "id": 95, "tags": "anatomy, histology" }
Inserting data into SQL-Server table dynamically
Question: I know I can do all this via SSMA quite quickly, but I wanted to see if I could do it efficiently in PowerShell myself as a little project. Basically, the entire script takes tables from an Access database and creates a database in SQL Server with the same data. Everything from creating the table structure to cleansing the data is running quickly. The slow part is inserting the data, which was inevitable, but I want to see if I can make it a bit faster. Here is the code: <# Insert data into tables #> function Insert_Data { param ( $data, $tableName ) $columns = ($data.PSObject.Properties | where {$_.name -eq "Columns"}).value.columnName | Sort-Object $Insert = "INSERT INTO $tableName VALUES " $i = 0 $x = 0 foreach ($item in $data) { $Insert += " (" foreach ($item in $columns) { $Insert += "'" + $data.rows[$x].$item + "'," } $Insert = $Insert.Substring(0,$Insert.Length-1) $Insert += ")," if ($i -eq 900) { $Insert = $Insert.Substring(0,$Insert.Length-1) $Insert += ";" Invoke-SQLCMD -Query $Insert -ServerInstance "." -database tmpAccessData -erroraction "Stop" $Insert = "INSERT INTO $tableName VALUES " $i = 0 } $i++ $x++ } $Insert = $Insert.Substring(0,$Insert.Length-1) $Insert += ";" Invoke-SQLCMD -Query $Insert -ServerInstance "." -database tmpAccessData -erroraction "Stop" Remove-Variable -Name data } It generates a large SQL query and inserts that into the specified table once it hits 900 values or the end of the Access table. The data variable is the full Access table pulled into an Object: $data_Clients = Get-AccessData -Query "SELECT * FROM Client" -Source $($settings.FastrackFiles.access_Fastrack); And tablename is just the name of the table in the destination SQL database. Answer: The main problem with this code is that it's not escaping column values; unfortunately, the built-in mechanisms in Invoke-SQLCMD aren't viable.
It allows you to use -Variable, but it also accepts strings like -Variable "p0='var_value'", which doesn't help at all with escaping, even making it worse. But here is an open-source alternative to Invoke-SQLCMD: https://github.com/RamblingCookieMonster/PowerShell/blob/master/Invoke-Sqlcmd2.ps1 It can be installed by executing Install-Module WFTools -Force -AllowClobber. Example of an unescaped query with Invoke-SQLCMD: > Invoke-Sqlcmd -Database tb-sql-db -Server XXXXX -Username YYYY -Password ZZZZ -Query "SELECT id, um.text FROM archive_srv_db.tbUniMessage um WHERE id = '3161e665-a30e-48c4-87f2-0008d62da8a6'" id text -- ---- 3161e665-a30e-48c4-87f2-0008d62da8a6 Hello! The same query using parameters and Invoke-Sqlcmd2: $secpasswd = ConvertTo-SecureString "ZZZZ" -AsPlainText -Force $mycreds = New-Object System.Management.Automation.PSCredential ("YYYYY", $secpasswd) Invoke-Sqlcmd2 -Database tb-sql-db -Server XXXXX -Credential $mycreds -Query "SELECT id, um.text FROM archive_srv_db.tbUniMessage um WHERE id = @p0" -SqlParameters @{ p0="3161e665-a30e-48c4-87f2-0008d62da8a6"} id text -- ---- 3161e665-a30e-48c4-87f2-0008d62da8a6 Hello! If you are concerned with escaping issues, if any occur, you should migrate to Invoke-Sqlcmd2. But please note that a parametrized query in ADO.NET can hold up to roughly 2000 parameters. So your loop condition should be ($i * $columns.Length) -gt 1900 (just an assumption that you won't have more than 100 columns; it's too late here to write exact code :-). And in your loop you should append @p$counter to the query, where counter is an incrementing variable defined outside the loop and incremented inside the loop on each value, and build a dictionary, populating it with the named parameters. Example: #define dictionary before outer loop $dict = @{} .... #add value inside inner loop $Insert += "@p$counter," $dict.Add("@p$counter",$data.rows[$x].$item)
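The same two ideas — parameter placeholders instead of string concatenation, plus batched execution — look like this in Python's built-in sqlite3 module (an illustrative sketch only; the original is SQL Server/PowerShell, and the table and data here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (id INTEGER, name TEXT)")

# The quote in "O'Brien" needs no manual escaping: the driver binds parameters safely
rows = [(1, "Alice"), (2, "O'Brien")]
conn.executemany("INSERT INTO client VALUES (?, ?)", rows)

print(conn.execute("SELECT name FROM client WHERE id = ?", (2,)).fetchone()[0])  # -> O'Brien
```

executemany plays the same role as the 900-row batching in the question: one prepared statement, many bound rows, no per-value string surgery.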
{ "domain": "codereview.stackexchange", "id": 32238, "tags": "performance, sql, sql-server, t-sql, powershell" }
Reverse reaction of acid dissociation
Question: Consider an acid HA with $K_a = 10^{-10}$ dissociating into its ions as $\mathrm{HA \rightleftharpoons H^+ + A^-}$. My question is: if we take a solution of pH 3 (let's say any weak acid or a buffer) and put 0.1 M of $\mathrm{A^-}$ ions into it (using a strong electrolyte), then using the expression for $K_a$ (in this case we are running the reverse reaction, so $K$ will be the reciprocal of $K_a$) we get the concentration of HA to be $10^6$ M. Although such a high concentration is not actually obtained (assume it is, for the sake of clarifying my doubt), my focus is on the following question (taking the value of $10^6$ from the formula only): how is it chemically correct that the concentration of the product has become greater than that of the reactants? While dissociating, we can make the argument that some acid is left undissociated, so the concentration decreases. But how does the concentration increase in the reverse reaction, and what is its chemical significance? Please help. (I came across this doubt when I saw a question asking to calculate the solubility of AgCN in a buffer of pH 3, with $K_{sp}=1.2\times10^{-15}$ and $K_a$ of HCN $=4.8\times10^{-10}$. The solution implied that the concentration of HCN was much greater than that of $\mathrm{CN^-}$ from AgCN, which confused me as explained above.) Answer: The concentration of HA will never be greater than the total concentration of A you put in, simply because there is not enough A available to make more HA. You are doing something completely wrong in your calcs. In your problem of AgCN, you are not given the amount of AgCN, so how can you actually think of a possible concentration of cyanide?
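To see quantitatively why the $10^6$ M never materializes, combine the $K_a$ expression with the mass balance $[\mathrm{HA}]+[\mathrm{A^-}] = 0.1$ M — a small Python sketch of the arithmetic (my own illustration, not part of the original answer):

```python
Ka = 1e-10
H = 1e-3        # [H+] fixed by the pH 3 buffer
A_total = 0.1   # total A put into the solution, in M

# Ka = [H+][A-]/[HA]  =>  [HA]/[A-] = [H+]/Ka = 1e7,
# so equilibrium lies overwhelmingly toward HA -- but the mass balance
# caps [HA] at what is actually available:
ratio = H / Ka
HA = A_total * ratio / (1 + ratio)
print(HA)  # ~0.0999999 M: essentially all 0.1 M ends up as HA, nowhere near 1e6 M
```

The equilibrium constant only fixes the ratio $[\mathrm{HA}]/[\mathrm{A^-}]$; the absolute concentrations are still bounded by how much A was added.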
{ "domain": "chemistry.stackexchange", "id": 8978, "tags": "equilibrium, solubility, ions, concentration" }
Convergence of Simulated Annealing Based Algorithms
Question: I designed a simulated annealing-based optimization algorithm. My simulation shows that it converges fast. I am looking for some sort of proof that simulated annealing-based algorithms converge fast (based on satisfying some properties) to a global/local optimal point and don't oscillate around the optimal points (or any related fact about their stability). Is there any useful literature about it? Answer: I copied this answer from Computational Science. There is a theorem that says that a black box algorithm is guaranteed to find the global minimum of an arbitrary smooth (i.e., twice continuously differentiable) function if and only if it samples points densely in the search space. Here dense is meant in the topological sense, i.e., it must sample points in arbitrarily small neighborhoods of every point. In this case, the worst case complexity is $O\big((\delta/d)^{-n}\big)$, where $d$ is the diameter of the search space and $\delta$ the guaranteed error in $x$. Edit: While this looks like the worst case complexity for locating a $\delta$-accurate $x$ rather than for locating a point for an $\epsilon$-accurate $f$, the complexity for the latter is as bad (even for the rather big value of $\epsilon=(f_{\mathrm{first}}-f_{\mathrm{global}})/2$), as one can easily construct a smooth function which interpolates all data points so far but takes much smaller values in the center of the largest open ball not containing one of the points evaluated. Thus guaranteed convergence to a global minimum is worthless in practice. (For example, uniformly random search has a convergence guarantee to the global minimizer, whereas most practically fast algorithms don't have one.) Note that simulated annealing usually performs much worse than modern methods. Rather use code recommended in: Comparison of derivative-free optimization algorithms (2012, by Nick Sahinidis) and Black-Box Optimization Benchmarking (BBOB) 2012 (by Auger, Hansen, et al.)
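For reference, here is a minimal Python sketch of the generic simulated annealing loop itself (my own illustration with assumed parameter choices — not the asker's algorithm, and carrying none of the convergence guarantees discussed above):

```python
import math
import random

def simulated_annealing(f, x0, neighbor, T0=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimal SA loop: always accept downhill moves, accept uphill moves
    with probability exp(-delta/T), and cool the temperature geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:           # keep the best point ever visited
                best, fbest = x, fx
        T *= cooling
    return best, fbest

# Toy run: minimize x^2 starting from x = 10 with uniform +/-1 proposals
best, fbest = simulated_annealing(lambda x: x * x, 10.0,
                                  lambda x, rng: x + rng.uniform(-1, 1))
print(fbest)  # small for this convex toy problem
```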
{ "domain": "cs.stackexchange", "id": 980, "tags": "algorithms, reference-request, proof-techniques, optimization" }
Binary search in C++
Question: I was wondering if I have any hidden bugs or traps in my code I wrote for an exercise. #include<iostream> #include<stdlib.h> #include<stdio.h> #include<time.h> #include<algorithm> using namespace std; # define MAX 1000 int binSearch(int *arr, int arrSize, int num) { int leftPos = 0, rightPos = arrSize - 1; while(mid != leftPos) { int mid = (rightPos + leftPos)/2; if(arr[mid] == num) return mid; else if(arr[mid] > num) rightPos = mid; else leftPos = mid; } return -1; } int main(){ /*testing program */ int arr[MAX]; for(int i = 0; i < MAX; i++) arr[i]=rand()%100; sort(arr, arr+MAX); cout<<binSearch(arr, MAX, 4); } Answer: Advice 1 while(mid != leftPos) Above, mid is not defined, so my compiler does not compile that. Advice 2 #include<stdio.h> I don't see your code using anything in stdio.h; why not remove it? Advice 3 It seems that you forgot to seed the random number generator. Did you mean to srand(time(NULL)); before creating the integer array? Advice 4 int mid = (rightPos + leftPos)/2; If rightPos + leftPos overflow, mid will end up negative. You can "extend the range" by writing mid = leftPos + (rightPos - leftPos) / 2; instead. 
Putting pieces together After a minor rewrite, your implementation may look like this: int binSearch(int *arr, int arrSize, int num) { int leftPos = 0, rightPos = arrSize - 1; /* 'mid' not yet declared: */ int mid; /* '<=' so that a single remaining element is still examined: */ while(leftPos <= rightPos) { /* Avoids overflowing '(leftPos + rightPos) / 2': */ mid = leftPos + (rightPos - leftPos) / 2; if(arr[mid] == num) return mid; else if(arr[mid] > num) rightPos = mid - 1; else leftPos = mid + 1; } return -1; } Alternative implementation It is not hard to write the binary search as a template: template<typename RandomIt, typename T> RandomIt binarySearch(RandomIt begin, RandomIt end, const T& value) { RandomIt save_end = end; while (begin != end) { auto range_length = std::distance(begin, end); RandomIt middle = begin; std::advance(middle, range_length / 2); if (*middle == value) { return middle; } if (value < *middle) { end = middle; /* 'end' is exclusive, so it must not step below 'middle' */ } else { begin = middle; std::advance(begin, +1); } } return save_end; } Hope that helps.
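The same loop invariants in a quick Python sketch of my own, handy for testing the edge cases: inclusive bounds, a `left <= right` condition so a single remaining element is still examined, and the overflow-safe midpoint form (which matters for fixed-width integers in C++, not for Python's arbitrary-precision ints):

```python
def bin_search(arr, num):
    """Return the index of num in sorted arr, or -1 if absent."""
    left, right = 0, len(arr) - 1
    while left <= right:                   # '<=': check the last remaining element
        mid = left + (right - left) // 2   # overflow-safe midpoint
        if arr[mid] == num:
            return mid
        elif arr[mid] > num:
            right = mid - 1
        else:
            left = mid + 1
    return -1

print(bin_search([1, 3, 5, 7], 7))  # -> 3
print(bin_search([5], 5))           # -> 0 (the '<' condition would miss this one)
```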
{ "domain": "codereview.stackexchange", "id": 22700, "tags": "c++, binary-search" }