Columns: text (string, lengths 1 to 1.11k), source (dict)
general-relativity, spacetime If we integrate over the time coordinate t and find that it takes a finite amount of coordinate time for light to reach r=∞, but when we parametrize the same null path by an affine parameter τ it takes an infinite amount of affine parameter to reach r=∞, my (GR) intuition tells me that there is a "coordinate" singularity at r=∞ (for the coordinate τ). However, neither integral has the interpretation of a time to travel, and the affine parameter is not a coordinate. In the situation you describe, the interpretation is probably that $t$ just doesn't blow up very fast as you approach null infinity. That's not something that has a physical interpretation; it's just a fact about your coordinate system.
{ "domain": "physics.stackexchange", "id": 63717, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, spacetime", "url": null }
general-relativity, gravity, black-holes, metric-tensor Some instead tend to say that the velocity of light is always the same but "space-time is curved" in a certain way. Are these equivalent ways of saying the same thing? Question: Are the statement that "there is spatial variation in the velocity of light in a spherically symmetric gravitational field" and the statement that "the velocity of light is constant but spacetime in a spherically symmetric gravitational field is curved" equivalent statements? “Spacetime is curved” (and specifying how it is curved by writing the metric) is a much stronger statement than “there is spatial variation in the velocity of light” (with an appropriate quantification of that statement). The reason for this is that the statement about curved spacetime allows us to make predictions about how any physical process would proceed in this geometry by applying the equivalence principle, whereas the alternative statement only describes light propagation.
{ "domain": "physics.stackexchange", "id": 58605, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, gravity, black-holes, metric-tensor", "url": null }
periodic-trends, melting-point, boiling-point Figure 1: crystal structure of black phosphorus, taken from Wikipedia. Thus, we have established a structural element to which we will stick with only moderate modifications. The transition phosphorus — arsenic — antimony — bismuth is more or less the non-metal—metal transition. What explicitly changes is the distance between the sheets, as shown in Table 1. $$\textbf{Table 1: }\text{Comparison of bond lengths and inter-sheet contact distances for pnictogens}$$ $$\begin{array}{lrrrr} \hline \text{ } & \ce{P_{black}} & \ce{As} & \ce{Sb} & \ce{Bi} \\ \hline \ce{E-E}\text{-bond/pm} & 223.0 & 251.7 & 290.8 & 307.1 \\ \ce{E\bond{...}E}\text{-contact/pm} & - & 311.9 & 335.4 & 352.8 \\ \text{d}_\text{contact}/\text{d}_\text{bond} & \approx 1.5 & 1.239 & 1.153 & 1.149 \\ \hline \end{array}$$
{ "domain": "chemistry.stackexchange", "id": 14420, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "periodic-trends, melting-point, boiling-point", "url": null }
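The contact/bond ratios quoted in Table 1 can be checked with a quick sketch (Python is used here purely for the arithmetic; all values are taken from the table above):

```python
# Verify the d_contact/d_bond ratios in Table 1 (distances in pm).
bond = {"As": 251.7, "Sb": 290.8, "Bi": 307.1}
contact = {"As": 311.9, "Sb": 335.4, "Bi": 352.8}

# Ratios rounded to three decimals, as in the table
ratios = {el: round(contact[el] / bond[el], 3) for el in bond}
print(ratios)  # the ratio shrinks down the group: the sheets close up
```

The shrinking ratio is the quantitative face of the non-metal to metal transition described above.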
turtlebot, ros-fuerte, environment, explore, package Title: Fuerte Explore package in a real environment problem (again) I created my own Explore package to be used in the real world. My package is basically identical to Explore_stage (http://www.ros.org/wiki/explore) with a few modifications to the XML, yaml and launch files, and altered to use gmapping as its map. Moreover, I have changed explore.cpp according to the 2 TODO comments. My modified package can be found here: https://github.com/LadyZayin/explore
{ "domain": "robotics.stackexchange", "id": 14455, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turtlebot, ros-fuerte, environment, explore, package", "url": null }
java, homework, swing, event-handling

    System.out.print('\f'); // clear screen in BlueJ
    System.out.println(this.output);
    /*
    switch (k.getKeyCode()) {
    case KeyEvent.VK_BACK_SPACE:
        if (this.output.length() > 0) {
            this.stringBuilder.deleteCharAt(this.stringBuilder.length() - 1);
        }
        break;
    case KeyEvent.VK_SHIFT:
        break;
    default:
        this.stringBuilder.append(k.getKeyChar());
    }
    System.out.print('\f'); // clear screen in BlueJ
    System.out.println(this.stringBuilder.toString());
    */
}

public static void main(final String[] args) {
    final JFrame frame = new JFrame("KeyEvent Handler");
    frame.setSize(400, 400);
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.addKeyListener(new KeyHandler());
    frame.setVisible(true);
}
}
{ "domain": "codereview.stackexchange", "id": 33151, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, homework, swing, event-handling", "url": null }
java, design-patterns Title: Pizza lover program with Builder Pattern of GoF I love pizza and design patterns. This is my first program to practice the GoF builder pattern, without a fluent interface. I put emphasis on the structure and roles of the builder pattern. Here are some notes; I hope they help you read the code: Directors --> MeatPizzaLover, CheesePizzaLover. I add an extra layer of abstraction, PizzaLover, above them. Builder --> PizzaRecipe. ConcreteBuilder --> CheesePizzaRecipe, MeatPizzaRecipe. ComplexProduct --> Oven, Pizza, PrepareTips.
{ "domain": "codereview.stackexchange", "id": 25093, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, design-patterns", "url": null }
coding examples would've been great, though =P; this is really helpful for my project. Since $\gcd(6,8) = 2$ and $2 \nmid 7$, there are no solutions. We can calculate this using the division algorithm, although the calculations are somewhat involved. The linear congruence $ax \equiv b \pmod m$ is a linear Diophantine equation, and it has a solution if and only if $d = \gcd(a, m)$ divides $b$; the result is closely related to the Euclidean algorithm. Theorem 1. If $y$ solves the new congruence, then $x = (my + b)/a$ solves the original congruence. This is progress because the new problem is a congruence with a smaller modulus, since $a < m$. When the modulus $p$ is prime, the residues form a field, denoted by $\mathbb{Z}_p$. Example. When $\gcd(a, m) = 1$, Euler's theorem gives $$a^{\varphi (m)} \cdot b \equiv b \pmod m.$$ Comparing this congruence with the initial congruence, we can show that $$x \equiv a^{\varphi (m) - 1} \cdot b \pmod m.$$
{ "domain": "karamarine.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9711290930537121, "lm_q1q2_score": 0.8537650137551713, "lm_q2_score": 0.8791467785920306, "openwebmath_perplexity": 413.5897726235827, "openwebmath_score": 0.7182279825210571, "tags": null, "url": "http://karamarine.com/be-born-dtmwpm/solve-linear-congruence-bd4eea" }
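A minimal sketch of the solvability test and the Euler's-theorem formula discussed above. The function names are mine, and `phi` uses a naive count that is only suitable for small moduli:

```python
from math import gcd

def phi(m):
    # Euler's totient by direct count (fine for small m)
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def solve_congruence(a, b, m):
    """Solve a*x ≡ b (mod m); return one solution, or None if none exists."""
    d = gcd(a, m)
    if b % d != 0:
        return None            # no solution: gcd(a, m) does not divide b
    if d == 1:
        # Euler's theorem: x ≡ a^(φ(m)-1) * b (mod m)
        return pow(a, phi(m) - 1, m) * b % m
    # reduce to the smaller congruence (a/d)x ≡ (b/d) (mod m/d)
    return solve_congruence(a // d, b // d, m // d)

print(solve_congruence(6, 7, 8))   # None: gcd(6, 8) = 2 does not divide 7
```

Any solution of the reduced congruence also solves the original one, since $a x - b = d\,((a/d)x - b/d)$ is then a multiple of $d \cdot (m/d) = m$.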
Well, that is frequently a confusing issue. Let me try to organize it in this way. In general, $2n+1$ points will be interpolated by a polynomial of degree at most $2n$. When, like in this case, the $2n+1$ points $$\left( {x_{\,i} ,y_{\,i} } \right)\quad \left| {\;\left\{ \matrix{ i = - n, \ldots ,0, \ldots ,n \hfill \cr x_{\,-i} = - x_{\,i} \hfill \cr y_{\,-i} = - y_{\,i} \hfill \cr} \right.} \right.\quad \to \quad \left( {x_{\,0} ,y_{\,0} } \right) = \left( {0,0} \right)$$ constitute an antisymmetric pattern, then they will be interpolated by an odd polynomial $$y(x)\quad \left| {\;y( - x) = - y(x)} \right.\quad \to \quad y(x) = \sum\limits_{1\, \le \,k\, \le \;n} {c_{\;2k - 1} x^{\,2k - 1} }$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.986777180969715, "lm_q1q2_score": 0.8044528132479696, "lm_q2_score": 0.8152324848629215, "openwebmath_perplexity": 294.80950614554257, "openwebmath_score": 0.9258527159690857, "tags": null, "url": "https://math.stackexchange.com/questions/1829051/interpolation-of-symmetric-data" }
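A quick numerical illustration of the claim, assuming NumPy is available: fitting a degree-$2n$ polynomial through $2n+1$ antisymmetric points leaves the even-power coefficients (numerically) zero.

```python
import numpy as np

# Antisymmetric sample: y(-x) = -y(x); n = 2 gives 5 points, degree <= 4 fit
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x**3 - 2.0 * x          # an odd function sampled at symmetric nodes

coeffs = np.polyfit(x, y, 4)      # [c4, c3, c2, c1, c0]
# Even-power coefficients (x^4, x^2, x^0) come out ~0 for antisymmetric data
print(coeffs)
```

With 5 distinct nodes and degree 4 the fit is an exact interpolation, so the recovered odd coefficients match the sampled polynomial.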
fourier-transform, differential-equations, complex-systems Title: How are Fourier transforms of any dynamical system different to traditional ones? When projecting a vector in a Hilbert space onto a (closed?) subspace, its best approximation is its Fourier series. The technique has been used in many traditional problems (heat, wave, Schrödinger) and in other low-dimensional dynamical systems by finding $\lambda$ in the characteristic polynomial $\det(A-\lambda I)$. However, in general, are there any differences when applying this to an arbitrary dynamical system compared to the traditional ones? Not all systems have nice or symmetrical equations, they may involve more variables or higher dimensions, and it may be hard even to find the characteristic polynomial. Is there ever such a thing, and how does one solve it? The current version (v3) of the question seems to describe a particular linear approximation to the system. If that's the case, then
{ "domain": "physics.stackexchange", "id": 44463, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fourier-transform, differential-equations, complex-systems", "url": null }
radiation Title: Does the W boson in beta decay affect the gravity generated by the system? During beta decay we now know a heavy W boson gets involved temporarily. Would this potentially impact the gravitational field generated by the system as a whole? It doesn't seem like it should. Just curious. The rest mass of the intermediate particle is not relevant. Even if the W boson were real and not merely a virtual particle, energy-momentum is conserved during beta decay, and the whole energy-momentum couples to the gravitational field in the Einstein equations $$ G_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu} $$ where $T_{\mu\nu}$ is the energy-momentum tensor. So the W boson does not influence the gravitational field.
{ "domain": "physics.stackexchange", "id": 20780, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "radiation", "url": null }
experimental-physics, mathematics, chaos-theory Does the emergence of the "Universal Sequence" violate the above statement? Update: the U-sequence has no wiki page to cite, so here it is up to period 6: 1, 2, 4, 6, 5, 3, 6, 5, 6, 4, 6, 5, 6. No, the emergence of a "universal sequence" doesn't contradict the statement by Einstein. In fact, as Michael Brown rightfully said, this "universal sequence" doesn't qualitatively differ from any other piece of pure mathematics that was found to be relevant in reality, so one could have mentioned thousands of other examples.
{ "domain": "physics.stackexchange", "id": 7117, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "experimental-physics, mathematics, chaos-theory", "url": null }
# Odds for randomly assigning a men-only group in a team working assignment We are partitioning a group of $30$ people in $5$ groups of $6$ persons each. We have $13$ women and $17$ men in those $30$ people and randomly drawing those people gave us a men-only group. What are the odds of getting a group of the same gender? A generic way is preferred, of course. So, let $N$ be the number of people to partition in $m$ groups while $G[k]$ is the number of people per gender.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9763105314577312, "lm_q1q2_score": 0.800470500672698, "lm_q2_score": 0.8198933381139645, "openwebmath_perplexity": 1986.4914030999685, "openwebmath_score": 0.7370012998580933, "tags": null, "url": "https://math.stackexchange.com/questions/927512/odds-for-randomly-assigning-a-men-only-group-in-a-team-working-assignment" }
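As a sketch of the marginal case, the probability that one particular group of 6 drawn from the 30 people is all male (or single-gender) can be computed directly; the full question about any of the 5 groups being single-gender would need inclusion-exclusion on top of this.

```python
from math import comb

men, women, group = 17, 13, 6
total = men + women

# P(a given group of 6 is all male) = C(17,6) / C(30,6)
p_all_male = comb(men, group) / comb(total, group)
# P(a given group is single-gender) adds the all-female case
p_single_gender = (comb(men, group) + comb(women, group)) / comb(total, group)

print(p_all_male, p_single_gender)
```

This matches the generic setup in the question: $N$ people, groups of size 6, and per-gender counts $G[k]$ entering through the binomial coefficients.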
mechanical-engineering, materials, applied-mechanics, tribology Title: Approach to solve a frictional problem? I have a bar and a surface (both made of aluminum). I need to attach some material to the end of the bar so that, if I apply a force (25 N, sideways) to the bar, it sticks to the surface. The force is applied normal to the flat face of the bar. I need to select the material at the end of the bar so that the force applied on the bar is minimal. What methodology should I choose to find an answer?
{ "domain": "engineering.stackexchange", "id": 2411, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mechanical-engineering, materials, applied-mechanics, tribology", "url": null }
chaos-theory, determinism, laws-of-physics The laws of physics do not fail in that thought experiment. The experiment (with frustrator) is an interesting aspect of determinism but is not a conclusive argument against determinism in physics. The author also didn't mean it that way. He meant to illustrate, via a physical model, that if the very predictions about future state of the system (at 12:00) are set to influence the system at earlier time (at 11:55), those predictions may fail, even while some external observer would see that everything, including the failed prediction, was completely determined by the past state. In that physical example with frustrator, the failure of the predictor to predict is due to the fact that the predictor is coupled physically with the system to be predicted and in a way that prevents successful prediction. This strange predictor determines what will happen to the system, instead of predicting it. Thus, it is a bad predictor, but a good controller.
{ "domain": "physics.stackexchange", "id": 49732, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "chaos-theory, determinism, laws-of-physics", "url": null }
forces, classical-mechanics, rotational-dynamics, friction In your case of the rolling objects, the same idea holds. If your object is rolling without slipping, then all you can say is $F_s\leq\mu_sN$. You cannot say $F_s=\mu_sN$ unless you know your object is on the verge of slipping. This is why you don't see a dependence on $\mu_s$ in your derivations. It is incorrect to say $F_s=\mu_sN$. The force of friction would not change for a different $\mu_s$; only the point of slipping would change. For example, if you did experiments to find the maximum angle the incline could be at before slipping occurred, you would find a dependence on $\mu_s$ (shown below). This is also why you don't get a different "energy allocation": the static friction force does not depend on $\mu_s$ if there is no slipping.
{ "domain": "physics.stackexchange", "id": 71176, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "forces, classical-mechanics, rotational-dynamics, friction", "url": null }
newtonian-mechanics, everyday-life, friction But why can't a tire accelerate a car when there is no slip, via static friction? Say a car is moving at constant speed and the tire has no slip. Now the engine accelerates the tire, so the tire rolls faster. If the required force is under the static friction limit, then the tire will not slip, but it can still accelerate the car. But according to reality, i.e. every friction vs. slip ratio graph, this model is completely wrong: no slip, or a very low slip ratio, means no acceleration at all. Could anyone explain this for me? And is there any difference between a steel wheel and a rubber tire? IMO, there isn't any from a physical point of view. Of the two models above, one must be wrong. The slip ratio compares the circumferential speed of the wheel in the frame of the car (angular velocity of the wheel times radius) with the actual linear speed of the car.
{ "domain": "physics.stackexchange", "id": 12582, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, everyday-life, friction", "url": null }
machine-learning, turing-machines, machine-models Title: construct a TM from a PDA Given a PDA $P=(Q,\Sigma,\Gamma,\delta,q_0,F)$, construct formally a TM that accepts $L(P)$. My idea is to construct a Turing machine with 2 tapes, one for the input and the other for the stack. Also, add $q_a$ for accept and $q_r$ for reject, and send the machine to $q_a$ if it stops in a state in $F$ and to $q_r$ otherwise.
{ "domain": "cs.stackexchange", "id": 1441, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, turing-machines, machine-models", "url": null }
• "how to think this prob in modulo system" — well, it depends on the meaning of those words, but IMHO I think you did it nicely that way indeed. You converted a problem in $\pmod{7}$ to a problem in $\pmod{14}$. Maybe it is just a question of formatting your answer as a "modulo system problem": $x = 2m+1$, then $n = 7(2m+1) + 3 \equiv 14m + 7 + 3 \equiv 14m + 10 \equiv 10 \pmod{14}$. – iadvd Apr 26 '17 at 5:22 • Edited the question a bit for better understandability of my points; that is, I was thinking of the format: if n mod 7 = 3 then n mod 14 = ?. That's how I was thinking. Thanks for replying! – neo-nant Apr 26 '17 at 5:46 • OK, I have added a solution going backwards... I think that you meant that possibility. But the way you did it is easier, and indeed my solution is based on your substitutions (but backwards, it is kind of tricky). – iadvd Apr 26 '17 at 7:03
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9752018376422666, "lm_q1q2_score": 0.8437681068287045, "lm_q2_score": 0.865224073888819, "openwebmath_perplexity": 360.21999107592063, "openwebmath_score": 0.9407966732978821, "tags": null, "url": "https://math.stackexchange.com/questions/2252596/modular-arithmetic-for-arbitrary-number" }
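The claim in the comments, that $n \equiv 3 \pmod 7$ splits into the residues 3 and 10 mod 14, with 10 occurring exactly when the quotient $n / 7$ rounds down to an odd number, is easy to verify empirically:

```python
# Empirical check: if n ≡ 3 (mod 7), then n mod 14 is
# 3 when the quotient n // 7 is even and 10 when it is odd.
residues = set()
for n in range(3, 500, 7):          # all n with n % 7 == 3
    r = n % 14
    residues.add(r)
    expected = 10 if (n // 7) % 2 == 1 else 3
    assert r == expected

print(sorted(residues))   # [3, 10]
```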
c++, c++11, pointers Title: Array Dynamic resize in heap I have answered a question on Stack Overflow (link).
{ "domain": "codereview.stackexchange", "id": 34309, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++11, pointers", "url": null }
java, algorithm, data-mining

        // ... move one item from its antecedent to its consequent.
        for (I item : rule.getAntecedent()) {
            antecedent.remove(item);
            consequent.add(item);

            int antecedentSupportCount = data.getSupportCountMap().get(antecedent);
            // Cast to double: plain integer division would truncate the confidence.
            AssociationRule<I> newRule = new AssociationRule<>(
                    antecedent,
                    consequent,
                    (double) itemsetSupportCount / antecedentSupportCount);
            output.add(newRule);

            antecedent.add(item);
            consequent.remove(item);
        }
    }
    return output;
}
{ "domain": "codereview.stackexchange", "id": 33375, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, algorithm, data-mining", "url": null }
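The division in the Java fragment is the standard confidence computation, confidence(X -> Y) = support(X ∪ Y) / support(X). A small self-contained sketch (the support counts are made up for illustration):

```python
# Confidence of rule X -> Y is support(X ∪ Y) / support(X).
# Hypothetical support counts, purely for illustration:
support = {
    frozenset({"bread"}): 6,
    frozenset({"bread", "butter"}): 4,
}

def confidence(antecedent, consequent, support_counts):
    itemset = frozenset(antecedent) | frozenset(consequent)
    # Float division here; integer division is the classic bug in this spot.
    return support_counts[itemset] / support_counts[frozenset(antecedent)]

c = confidence({"bread"}, {"butter"}, support)
print(c)
```

This is why the Java code needs a cast to `double` before dividing the two support counts.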
newtonian-mechanics, newtonian-gravity, mass depends on the mass of the body. But the acceleration $\vec{a}=\dfrac{\vec{F}}{m}=\dfrac{m\vec{E_g}}{m}=\vec{E_g}$ is independent of the mass. And this acceleration is precisely equal to the gravitational field at that point. This is the very reason we use gravitational acceleration to represent the gravitational field at any point. It is not obvious that we can always do this. For example, in the case of electric forces, the acceleration $\vec{a}=\dfrac{\vec{F}}{m}=\dfrac{q\vec{E_e}}{m}$ does depend on the mass of the object (unlike the gravitational case), and thus we can't use the acceleration caused by the electric field to represent the electric field, because there isn't a one-to-one, onto relation between them unless the mass and charge of the test particle are specified.
{ "domain": "physics.stackexchange", "id": 40984, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, newtonian-gravity, mass", "url": null }
gazebo Given that all the data goes into one message, I don't think you can record just the effort. The rest of the fields (with the exception of the names) will probably be empty, so essentially you will be recording just the efforts if that is what is being published. Originally posted by david_rusbarsky with karma: 111 on 2013-03-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Nachum on 2013-03-03: I'm able to record it but not in a format that can be plotted with rxbag. i get it as raw numbers
{ "domain": "robotics.stackexchange", "id": 3075, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo", "url": null }
rotational-kinematics, complex-numbers Which of these derivations is correct, and why? I have already consulted several sources, where I seem to find both formulas. For example here for derivation 1 (Quaternions, Finite Rotation and Euler Parameters by Arend L. Schwab (2002).) and here for derivation 2. I'm new to using quaternions, so maybe I'm missing some mathematical concept. I think that the point is that the "extra" term has zero "vector" part, but a non-vanishing scalar part. See eqn (17) of your first reference which, I believe, actually agrees with your second reference. The quaternion representing $\omega$ has zero scalar part, and the same is true for $\dot{\omega}$. So the scalar parts of the two terms on the right of your second derivation should cancel each other. The result of the first derivation is incorrect (not sure how you got it, you just say "property of conjugate quaternions") but if you are only interested in the vector components, it doesn't matter, you just discard the scalar part.
{ "domain": "physics.stackexchange", "id": 55705, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rotational-kinematics, complex-numbers", "url": null }
To check if you have the global minima, just look at a simple lower bound b(x,y) on f(x,y), namely: for all (x,y) we have $$f(x,y ) \geq x^4 + y^4 + 1 - 4|x| |y| \equiv b(x,y),$$
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9688561667674651, "lm_q1q2_score": 0.8273186383796065, "lm_q2_score": 0.8539127548105611, "openwebmath_perplexity": 326.04091366637584, "openwebmath_score": 0.7838554382324219, "tags": null, "url": "https://www.physicsforums.com/threads/bivariate-function-optimization.653680/" }
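As a sanity check on the bound (a sketch, not part of the original argument), a brute-force grid search confirms that $b$ attains its minimum $-1$ at $|x| = |y| = 1$:

```python
# Grid search over the bound b(x, y) = x^4 + y^4 + 1 - 4|x||y|.
def b(x, y):
    return x**4 + y**4 + 1 - 4 * abs(x) * abs(y)

# Search [-2, 2]^2 on a 0.01 grid; b grows like x^4 + y^4 outside this box.
best = min(
    (b(i / 100, j / 100), i / 100, j / 100)
    for i in range(-200, 201)
    for j in range(-200, 201)
)
print(best)   # minimum value -1, attained at |x| = |y| = 1
```

Analytically the same follows from setting the partials of $u^4 + v^4 + 1 - 4uv$ to zero ($u^3 = v$, $v^3 = u$ with $u, v \ge 0$), which gives $u = v = 1$ and $b = -1$.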
polymers Title: Repeating unit of poly lactic acid (PLA) Which repeating unit for poly lactic acid (PLA) is correct, picture 1 or picture 2? and I read somewhere that in polymerization, acids give $\ce{OH}$ and alcohol gives $\ce{H}$, creating water. According to this explanation, picture number 2 is correct but I have seen on websites like Wikipedia that picture 1 is correct. So, which is the correct picture and why? In fact, both pictures show the same polylactic acid, only different disconnection points have been chosen: The red disconnection (or your picture 1) is preferred because you can immediately see that it is a polyester. Edit: Your teacher is right in that, during the polymerization, the $\ce{-COOH}$ group loses $\ce {-OH}$ and the $\ce{-OH}$ group loses $\ce {-H}$ to form $\ce{H2O}$. Therefore, your teacher prefers this arrangement:
{ "domain": "chemistry.stackexchange", "id": 10329, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "polymers", "url": null }
c++, catkin Comment by Huibuh on 2013-08-20: Thanks guys, I got it working :-) Comment by VictorLamoine on 2017-02-07: Is there a way to retrieve the name of the package in which a node is contained? eg: ros::package::getPath(ros::this_package::getName())? Comment by antoineniotna on 2019-05-07: Thanks for the help! Comment by Jägermeister on 2019-05-28: I keep getting ‘ros::package’ has not been declared errors all the way. Comment by VictorLamoine on 2019-05-28: #include <ros/package.h>
{ "domain": "robotics.stackexchange", "id": 15315, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, catkin", "url": null }
order of the numbers differently will not change the result. The following table gives a summary of number properties: associative, commutative, and distributive, with examples and explanations. The associative property says that regrouping the terms, as shown by the parentheses, does not affect the result; it holds for addition and multiplication of whole numbers and exponential numbers, but it is not applicable to subtraction or division. To associate means to join or to connect. Step 2: multiply the numbers which are given inside the parentheses. Step 5: LHS = RHS, 40 = 40; hence the associative property holds. For instance, adding 77 plus 2 first, by moving the place of the parentheses, does not affect the sum. Note, however, the nonassociativity of floating-point calculation.
{ "domain": "bahcelievlertesisatci.com", "id": null, "lm_label": "1. Yes\n2. Yes\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9294403959948494, "lm_q1q2_score": 0.8075227029577465, "lm_q2_score": 0.8688267762381844, "openwebmath_perplexity": 473.75748624566086, "openwebmath_score": 0.6099890470504761, "tags": null, "url": "https://bahcelievlertesisatci.com/a-lesson-nvnucmn/819582-associative-property-calculator" }
string-theory, supersymmetry, compactification Title: Why do the mismatched 16 dimensions have to be compactified on an even lattice? The mismatched 16 dimensions between the left-moving (26-dimensional) and right-moving (10-dimensional) sectors are compactified on even, unimodular lattices. I think I get the unimodular part, at least intuitively, somewhat, but I don't understand why the lattice has to be even. From what I understand, an even lattice means that the vectors have even norm-squared. Why is that a necessary property for compactifying the 16 dimensions? (Source: Polchinski) Consider a toroidal compactification for a bosonic closed string. We make the identification $X \sim X +2\pi R$, $X$ being one of the 25 spatial dimensions, say $X^{25}$. The left and right momenta are: $k_L =\frac{n}{R} +\frac{wR}{\alpha'}$, $k_R =\frac{n}{R} - \frac{wR}{\alpha'}$. The on-shell mass conditions are written: $m^2 = k_L^2 + \frac{4}{\alpha'}(N - 1)$, $m^2 = k_R^2 + \frac{4}{\alpha'}(\tilde N - 1)$. From this we get:
{ "domain": "physics.stackexchange", "id": 8264, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "string-theory, supersymmetry, compactification", "url": null }
thermodynamics, energy If there is a process in which $H$ changes but not $U$, I might finally be able to grasp those concepts once and for all. The molar internal energy of an ideal gas is a function of temperature only. By kinetic theory, ideal gas particles have no volume, no interaction potentials, and no loss of energy on collisions. So the internal energy of an ideal gas is distributed solely in the translational kinetic energy of the particles. Strictly speaking, we should also make the case for a system of an ideal gas in zero external field (e.g. gravity). In any case, to be clear, the gas particles in an ideal gas have no potential energy terms. Finally, the kinetic energy does not need to be defined by the particles "bouncing against the boundaries of the system". That part defines the pressure that the gas exerts on the walls due to the particles having kinetic energy and reversing their momentum vector when they bounce.
{ "domain": "physics.stackexchange", "id": 65036, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, energy", "url": null }
OK, you have officially convinced me that we don't need the requirement $ab\neq 0$. I'm having trouble seeing the equivalence where the ring is $\mathbb Z_n$, $p$ is prime in $\mathbb N$ and $p$ does not divide $n$ in $\mathbb N$. Then if we label by $p_n$ the counterpart of $p$ in $\mathbb Z_n$, won't the ideal generated by $p_n$ be all of $\mathbb Z_n$, in which case the ideal is not prime (since it's not proper) and so $p_n$ is not prime? Yet $p_n$ divides every element of $\mathbb Z_n$ and hence satisfies the definition of prime in post #3. 11. Jun 14, 2016 ### Staff: Mentor No, in this case it doesn't satisfy the definition, because $p_n \in \mathbb{Z}_n$ is a unit if $p \nmid n$, i.e. $p_n \, | \, 1$. The claimed equivalence, however, is in my opinion a bit sloppy. E.g. the ideal $4R$ of $R = \mathbb{Z}_{24}$ isn't a prime ideal, for $2 \cdot 2 = 4 \in 4R$ while $2 \notin 4R$, although $4R$ is proper and $4$ is neither zero nor a unit.
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9653811571768047, "lm_q1q2_score": 0.8262225706303484, "lm_q2_score": 0.8558511469672594, "openwebmath_perplexity": 405.7668821629803, "openwebmath_score": 0.8702592253684998, "tags": null, "url": "https://www.physicsforums.com/threads/prime-element-in-a-ring.875328/" }
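The mentor's counterexample can be checked mechanically. Taking "a divides b in $\mathbb{Z}_n$" to mean that some $x$ satisfies $ax \equiv b \pmod n$, one sees that $2 \cdot 2 = 4$ lies in the ideal $4R$ of $\mathbb{Z}_{24}$ while $4 \nmid 2$ there, and that $4$ is not a unit:

```python
def divides(a, b, n):
    """a | b in Z_n: some x with a*x ≡ b (mod n)."""
    return any(a * x % n == b % n for x in range(n))

n = 24
# 2 * 2 = 4 lies in the ideal 4R, yet 4 does not divide 2 in Z_24,
# so 4R is not a prime ideal even though 4 is neither zero nor a unit.
print(divides(4, 2 * 2, n), divides(4, 2, n), divides(4, 1, n))
```

The multiples of 4 in $\mathbb{Z}_{24}$ are $\{0, 4, 8, 12, 16, 20\}$, which contains neither 2 nor 1, confirming both claims.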
beginner, c, c++17 The vector delays holds the actual delay values using a std::chrono::duration type. The advantage is that you can pass it directly to std::this_thread::sleep_for() to sleep for the given time. A std::vector knows its own length, so you don't have to store that separately. The only other item necessary is the current index. Avoid manual mutex locking Instead of manually calling mutex.lock() and mutex.unlock(), use a std::lock_guard object to do this for you. Alternatively: Consider using atomic variables The index into the array of delays is just a simple counter that needs to be incremented or reset. Consider making it a std::atomic<std::size_t>, like so: struct delay_sequence { std::atomic<std::size_t> index; std::vector<std::chrono::milliseconds> delays; }; And then use it like so: auto &sequence = it->second; auto index = sequence.index++;
{ "domain": "codereview.stackexchange", "id": 40343, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, c, c++17", "url": null }
matlab, signal-analysis, power-spectral-density, zero-padding Title: How Exactly Does MATLAB Zero Pad Signal? I am doing some massive number crunching in MATLAB which involves millions of PSD estimations. Each data segment has length 41, so I have been using the multi-taper method with nfft=41 (details below in the code). My last simulation took more than eight hours. The code is as optimized and vectorized as I can make it, so I thought maybe changing to nfft=64 might speed things up. The profiler confirms that pmtm is the slowest part by far. Looking at the MATLAB help pages, I see that the documentation says nothing more than that if nfft is larger than the length of the data segment, then the segment is zero-padded. My question is: what exactly is MATLAB doing? How exactly is the signal padded? I can't really find anything by googling, and I ran my own experiments and am not getting what I expected. Here I present my code and the result. close all clear all
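For what it's worth, the documented behavior (in MATLAB as in NumPy) is simply to append trailing zeros up to nfft. A NumPy sketch of that equivalence (segment length 41 as in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(41)      # a data segment of length 41
nfft = 64

# fft(x, nfft) is identical to explicitly appending trailing zeros:
padded = np.concatenate([x, np.zeros(nfft - len(x))])
assert np.allclose(np.fft.fft(x, nfft), np.fft.fft(padded))
```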
{ "domain": "dsp.stackexchange", "id": 6452, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "matlab, signal-analysis, power-spectral-density, zero-padding", "url": null }
electromagnetism, special-relativity, electromagnetic-radiation, magnetic-fields, inertial-frames equation proved by dividing equations (03a), (03b) side by side and setting $\:\mathbf{u}\equiv \mathrm d\mathbf{x}/\mathrm d t\:$, $\:\mathbf{u'}\equiv \mathrm d\mathbf{x'}/\mathrm d t'$. Now, if in (09) we replace $\:\mathbf{E'},\mathbf{B'},\mathbf{u'}\:$ by their expressions (07a),(07b) and (10) respectively, then we end up with the following relation between the force 3-vectors \begin{equation} \mathbf{f}^{\boldsymbol{\prime}} = \dfrac{\mathbf{f}+(\gamma-1)(\mathbf{n}\boldsymbol{\cdot} \mathbf{f})\mathbf{n}-\gamma \boldsymbol{\upsilon}\left(\dfrac{\mathbf{f}\boldsymbol{\cdot}\mathbf{u}}{c^{2}}\right)}{\gamma \left(1-\dfrac{\boldsymbol{\upsilon}\boldsymbol{\cdot}\mathbf{u}}{c^{2}}\right)} \tag{11} \end{equation} wherein the quantities of the electromagnetic field $\:\color{red}{\bf DISAPPEARED !!!}$
{ "domain": "physics.stackexchange", "id": 49563, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, special-relativity, electromagnetic-radiation, magnetic-fields, inertial-frames", "url": null }
only a finite number of elements, its cardinality is simply the number of elements it contains. Any set which is not finite is infinite. Total number of elements related to both (A & B) only. forall s : fset_expr (A:=A), exists n, (cardinality_fset s n /\ forall s' n', eq_fset s s' -> cardinality_fset s' n' -> n' = n). c) $(0,\infty)$, $\mathbb R$; d) $(0,1)$, $\mathbb R$. Ex 4.7.4 Show that $\mathbb Q$ is countably infinite. If set A is countably infinite, then |A| = |N|. Question: Prove that N (all natural numbers) and Z (all integers) have the same cardinality. When it ... prove the corollary one only has to observe that a function with a “right inverse” is the “left inverse” of that function and vice versa. Consider sets A and B. By a transformation or a mapping from A to B we mean any subset T of the Cartesian product A×B that satisfies the following condition: … A set that is either finite or has the same cardinality as the set of positive integers is called countable. The function f : … Ex 4.7.3 Show that the following sets of real numbers
{ "domain": "atozmobileapps.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759638081522, "lm_q1q2_score": 0.8297646111405583, "lm_q2_score": 0.84594244507642, "openwebmath_perplexity": 676.457605556357, "openwebmath_score": 0.8642045259475708, "tags": null, "url": "https://www.atozmobileapps.com/bkx3c20/b7b0d5-how-to-prove-cardinality-of-sets" }
It may be relevant to note that there exists a version of Taylor's Theorem that requires the existence of $f^{(k)}$ at only one point. Should be better known than it seems to be, in any case: Theorem If $f:\mathbb R\to\mathbb C$, $k\ge1$, and $f^{(k)}(a)$ exists then $$f(x)=P_{a,k}(x)+o((x-a)^k)\quad(x\to a).$$ Thanks to Paramanand Singh: It appears that this is known as Taylor's Theorem with Peano's form of the remainder. Here of course $P_{a,k}(x)=\sum_{j=0}^k\frac{f^{(j)}(a)}{j!}(x-a)^j$. (Note that the hypothesis implies that $f^{(j)}$ exists in some neighborhood of $a$ for $j<k$.) The conclusion means by definition that $f(x)=P_{a,k}(x)+E(x)$ where $$\lim_{x\to a}\frac{E(x)}{(x-a)^k}=0.$$ It's enough to prove this: If $f:\mathbb R\to\mathbb R$, $k\ge1$, and $f(0)=f'(0)=\dots=f^{(k)}(0)=0$ then $f(x)=o(x^k)$ as $x\to0$.
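The limit in the conclusion is easy to watch numerically. A small sketch with $f=\exp$, $a=0$, $k=3$, where $E(x)=e^x-P_{0,3}(x)\approx x^4/24$, so $E(x)/x^3\approx x/24\to 0$:

```python
import math

def P3(x):
    """Degree-3 Taylor polynomial of exp at 0."""
    return 1 + x + x**2 / 2 + x**3 / 6

# E(x)/x^3 ~ x/24, so the ratio shrinks roughly 10x with each smaller x.
ratios = [(math.exp(x) - P3(x)) / x**3 for x in (1e-1, 1e-2, 1e-3)]
assert ratios[0] > ratios[1] > ratios[2] > 0
```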
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9697854111860905, "lm_q1q2_score": 0.8038927651252229, "lm_q2_score": 0.8289388104343892, "openwebmath_perplexity": 154.0753551562804, "openwebmath_score": 0.8899643421173096, "tags": null, "url": "https://math.stackexchange.com/questions/2585752/application-of-the-mean-value-theorem-to-a-taylor-series" }
waves, acoustics, perception, harmonics Modern pianos are NOT tuned to harmonics in a Fourier series; they are tuned to equal temperament, an invention mainly of J. S. Bach, whereby every semitone interval on the piano has the same frequency ratio, namely $2^{\frac{1}{12}}$, so that, on a logarithmic frequency scale, the semitones are evenly spaced. The motivation is that frequency relationships between notes in a melody, chord and so forth are then covariant with respect to any modulation (change of diatonic key) in the music. This allows each key to be equally well in tune, which was the motivation for J. S. Bach's (or F. Chopin's) 24 preludes, each a variation on the same theme in each of the 12 diatonic keys in major and minor modes. They were, so to speak, "showing off" the possibilities opened up by realising, through equal temperament, the covariance of the geometry of any musical pattern with respect to modulation. In contrast, flutes and clarinets, or any wind instrument that uses harmonics to realise several registers,
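A tiny numerical sketch of the equal-tempered scale (taking the conventional A4 = 440 Hz reference):

```python
A4 = 440.0                  # Hz, conventional reference pitch
ratio = 2 ** (1 / 12)       # frequency ratio of every semitone

# The 12 semitones above A4, ending on A5 one octave up.
freqs = [A4 * ratio**n for n in range(13)]

# Evenly spaced on a log scale: every adjacent pair has the same ratio.
steps = {round(b / a, 9) for a, b in zip(freqs, freqs[1:])}
assert len(steps) == 1

# Twelve semitones stack up to exactly one octave (a doubling).
assert abs(freqs[-1] - 2 * A4) < 1e-9
```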
{ "domain": "physics.stackexchange", "id": 66675, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "waves, acoustics, perception, harmonics", "url": null }
Anyways, you can certainly translate constructions of the form you describe to use unique elements of singleton sets; it really is effectively the same thing. Do note that even if you go with the translation, the definition still depends upon the ability to make a choice -- e.g. to expand on Alex Becker's comment, a collection of objects defined in this fashion is only well-defined if we have a choice function on their underlying sets. If not, then the parameter space of your $\lambda$'s is collectively empty, so the image of the construction is also empty. (edit: it may be possible to work around this final comment -- I haven't fully thought it through. Thanks Asaf for making me think a second time, at least)
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9759464443071381, "lm_q1q2_score": 0.8001719901909341, "lm_q2_score": 0.819893340314393, "openwebmath_perplexity": 278.0825258910815, "openwebmath_score": 0.8448407053947449, "tags": null, "url": "http://math.stackexchange.com/questions/118907/a-question-about-a-certain-way-to-define-mathematical-objects/118908" }
ros-melodic Originally posted by stevemartin on ROS Answers with karma: 361 on 2018-12-25 Post score: 0 Original comments Comment by juanlu on 2019-01-04: Question, do you have a full robot or just a single DC motor? If I understand correctly, you are building your own robot and your problem is to develop an odom node for it. I think you need to check the equations which describe the (differential-drive) mobile robot kinematics here. Then you need to measure different parameters of your robot (e.g. wheel diameter and gauge distance) and substitute these values into the equations. "Now, when I can control the robot and also getting ticks via magnetic encoders"
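To make those kinematics concrete, here is a hedged sketch of an odometry update from encoder ticks for a differential-drive robot (Python; the wheel radius, gauge/track width, and ticks-per-revolution are made-up example values, to be replaced with your own measurements):

```python
import math

# Example robot parameters -- substitute your own measurements.
WHEEL_RADIUS = 0.05      # m
TRACK_WIDTH = 0.30       # m, distance between the two drive wheels
TICKS_PER_REV = 1024     # encoder resolution

def odom_update(x, y, theta, left_ticks, right_ticks):
    """Advance the pose (x, y, theta) by one pair of encoder readings."""
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2                  # distance moved by robot center
    dtheta = (dr - dl) / TRACK_WIDTH   # change in heading
    x += d * math.cos(theta + dtheta / 2)
    y += d * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta

# Equal tick counts -> straight-line motion along the current heading.
x, y, th = odom_update(0.0, 0.0, 0.0, 1024, 1024)
assert abs(y) < 1e-12 and abs(th) < 1e-12
assert abs(x - 2 * math.pi * WHEEL_RADIUS) < 1e-12
```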
{ "domain": "robotics.stackexchange", "id": 32206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic", "url": null }
electromagnetism I think the more surprising thing about the magnetic field inside a solenoid is not that it's uniform along the length, but that it's uniform in the perpendicular directions -- that is, that the field doesn't depend on whether you're close to the axis or far from it (as long as you're inside it). It'd be easy to imagine the field would either drop off or get stronger as you move perpendicular to the axis, but it doesn't (again, for a long solenoid when you're not near the ends).
{ "domain": "physics.stackexchange", "id": 69639, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism", "url": null }
electrical-engineering, telecommunication, acoustics, sound-isolation, electronic-filters Title: What is tonal masking? I have checked the definition of tonal masking on Wikipedia and google but I saw the definition of sound masking and auditory masking. Auditory masking Sound masking But I want to know what exactly is tonal masking? Tonal masking is using tones at specific frequencies for sound masking, as compared with using noise containing a continuous band of frequencies. In most practical situations noise is more effective for masking than tones, but experiments with pure tones are a way to try to understand how masking actually "works" in human hearing.
{ "domain": "engineering.stackexchange", "id": 2500, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrical-engineering, telecommunication, acoustics, sound-isolation, electronic-filters", "url": null }
python, optimization, algorithm, image Let's compare the speed with numpy.dot. The pure-Python version is substantially faster for very small arrays: >>> import numpy as np >>> from timeit import timeit >>> timeit(lambda:dot([1, 2], [3, 4])) 2.1183970817364752 >>> timeit(lambda:np.dot([1, 2], [3, 4])) 11.150392828974873 but the NumPy version is hundreds of times faster for large arrays: >>> v = np.arange(1000) >>> w = np.arange(1000) >>> timeit(lambda:dot(v, w), number=1000) 1.4182810601778328 >>> timeit(lambda:np.dot(v, w), number=1000) 0.0053491611033678055 So in order to get a performance benefit out of NumPy, you have to vectorize your algorithm so that each step of the algorithm operates on many elements simultaneously. Here's another example. In this extract you iterate over a grid of coordinates (x, y) and rotate each coordinate by −r radians about the point (cx, cy): for x in xrange(xoffset,nw): for y in xrange(yoffset,nh): ox, oy = affine_t(x-cx, y-cy, *mrotate(-r, cx, cy))
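To illustrate the vectorization point on that rotation extract, here is a hedged sketch (the function name is hypothetical, not the original code) that rotates the whole coordinate grid in one shot with NumPy:

```python
import numpy as np

def rotate_grid(w, h, cx, cy, r):
    """Rotate every (x, y) in a w-by-h grid by -r radians about
    (cx, cy), operating on all coordinates at once."""
    y, x = np.mgrid[0:h, 0:w]          # all grid coordinates as arrays
    c, s = np.cos(-r), np.sin(-r)
    ox = c * (x - cx) - s * (y - cy) + cx
    oy = s * (x - cx) + c * (y - cy) + cy
    return ox, oy

# Spot-check: rotating (2, 1) by -90 degrees about (1, 1) gives (1, 0).
ox, oy = rotate_grid(4, 4, 1.0, 1.0, np.pi / 2)
assert abs(ox[1, 2] - 1.0) < 1e-12 and abs(oy[1, 2]) < 1e-12
```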
{ "domain": "codereview.stackexchange", "id": 6152, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, optimization, algorithm, image", "url": null }
python, optimization, programming-challenge Title: Truncatable Primes -- Project Euler 37? I just finished Project Euler 37 after a bit of debugging and can't figure out how to make it faster. Whenever I add extra tests that I want to speed the program up, it ends up just slowing it down. My primes test is from here. Please help me optimize! from primes import test as is_prime from itertools import product from timeit import default_timer as timer def is_trunc(list_num): num = ''.join(map(str, list_num)) # turn (1, 2, 3) into 123 if is_prime(int(num)): for k in range(1, len(num)): if not is_prime(int(num[k:])) or not is_prime(int(num[:k])): return False return True return False start = timer() data = {"count": 0, "length": 2, "sum": 0}
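For a self-contained variant (swapping the imported primes module for a plain trial-division test, which is an assumption about what that module does), the truncation check can work directly on integers:

```python
def is_prime(n):
    """Trial-division primality test (fine for Project Euler 37 sizes)."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def is_truncatable(n):
    """n and every left/right truncation of n must be prime."""
    s = str(n)
    if len(s) < 2:            # single-digit primes are excluded
        return False
    return is_prime(n) and all(
        is_prime(int(s[k:])) and is_prime(int(s[:k]))
        for k in range(1, len(s)))

assert is_truncatable(3797)     # 3797, 797, 97, 7 and 3, 37, 379
assert not is_truncatable(11)   # truncations give 1, which is not prime
```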
{ "domain": "codereview.stackexchange", "id": 7446, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, optimization, programming-challenge", "url": null }
algorithms, time-complexity, spanning-trees, minimum-spanning-tree, prims-algorithm Given a weighted, connected, simple undirected graph $G$ with weights of only $1$ and $2$ on each edge, why in this case is the running time of the algorithm $O(|E|+|V|\log(|V|))$? I really don't understand why the running time is not the same in this case; any help? The running time depends on how you implement the queue data structure. Hint: Can you think of any way to implement the queue data structure so that ExtractMin, Remove, and Insert operations are much faster, if you're given the knowledge that every edge has weight either 1 or 2?
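One way to realise the hint: when every key is 1 or 2 (as the tentative keys in Prim's algorithm are on such a graph), a bucket queue supports Insert, Remove/DecreaseKey, and ExtractMin in O(1), so the whole algorithm drops to $O(|E|+|V|)$. A minimal Python sketch of such a queue:

```python
# A bucket queue for priorities restricted to {1, 2}, as in Prim's
# algorithm on a graph whose edge weights are all 1 or 2.  Every
# operation is O(1), so the priority queue stops being the bottleneck.
class BucketQueue:
    def __init__(self):
        self.buckets = {1: set(), 2: set()}
        self.key = {}

    def insert(self, v, k):
        self.key[v] = k
        self.buckets[k].add(v)

    def decrease_key(self, v, k):
        self.buckets[self.key[v]].discard(v)
        self.insert(v, k)

    def extract_min(self):
        for k in (1, 2):          # constant number of buckets to scan
            if self.buckets[k]:
                v = self.buckets[k].pop()
                del self.key[v]
                return v, k
        raise IndexError("empty queue")

q = BucketQueue()
q.insert("a", 2); q.insert("b", 2); q.insert("c", 1)
q.decrease_key("b", 1)
assert q.extract_min()[1] == 1    # b or c, both now have key 1
```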
{ "domain": "cs.stackexchange", "id": 7754, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, time-complexity, spanning-trees, minimum-spanning-tree, prims-algorithm", "url": null }
ros-kinetic so that the encoding you want to use is in the inverted commas instead such as img = np.asarray(bridge.imgmsg_to_cv2(msg, 'mono8')) Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-12-18 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 32183, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-kinetic", "url": null }
noise, sampling, power-spectral-density Now I would like to consider the discrete-time sequence of samples $v_{out}[n]$ as illustrated in the figure, and I would like to find the quantity $\overline{v_{out}[n]^2}$. This is the average value of the squared time-domain output samples, where again this output is due to the noise alone. The lecture slides perform the following derivation (slide 28):
{ "domain": "dsp.stackexchange", "id": 12202, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "noise, sampling, power-spectral-density", "url": null }
graphs, optimization The difference to your problem is that you require $H$ to be a subgraph of $G$, so it's not exactly a transitive closure you want. But note that for this problem there are polynomial-time algorithms. Instead, Moyles and Thompson [1] consider what I believe is your problem, i.e., the problem of finding the smallest subgraph $G'$ of $G$ such that there is a path from $u$ to $v$ in $G'$ whenever there is a path from $u$ to $v$ in $G$. They give an algorithm (see Section 4) for the problem which runs in exponential time, so it's likely not practical for even moderately-sized digraphs. In fact, your problem is NP-hard and it has been studied in the literature. For instance, Khuller, Raghavachari and Young [2] give a polynomial-time approximation algorithm with a guarantee of 1.64. On a quick skim, their algorithm seems reasonable to implement and thus practical; another approach is of course to rely on other tools such as metaheuristics.
{ "domain": "cs.stackexchange", "id": 16131, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "graphs, optimization", "url": null }
star, universe, planetary-atmosphere, formation, hydrogen Some of the dead stars will collide and merge into heavier stars, which will end up as intergalactic dust (a mixture of elements, e.g. iron and silicon), including planets, after supernova explosions, or as black holes. Others will be scattered out of reach of the supermassive black hole in the center of the galaxy (gravitational relaxation).
{ "domain": "astronomy.stackexchange", "id": 360, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "star, universe, planetary-atmosphere, formation, hydrogen", "url": null }
It decomposes a matrix using LU and Cholesky decomposition. The calculator will perform symbolic calculations whenever possible. The last case is what the solutions look like when there are repeated eigenvalues. To find the eigenvalues of a matrix A, consider $\det(A-\lambda I)=0$; this yields a characteristic polynomial in $\lambda$. Eigenvectors and eigenvalues: when a matrix A acts as a scalar multiplier on a vector X, that vector is called an eigenvector of A. However, if a matrix has repeated eigenvalues, it is not similar to a diagonal matrix unless it has a full (independent) set of eigenvectors. For repeated diagonal elements, this might not tell you much about the location of the eigenvalues. Eigenvalues and eigenvectors are often introduced in an introductory linear algebra class, and when introduced there alone, it is
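The repeated-eigenvalue caveat is easy to see on a $2\times 2$ Jordan block; a quick sympy check:

```python
import sympy as sp

# [[1, 1], [0, 1]] has the repeated eigenvalue 1 but only a
# one-dimensional eigenspace, so it is not diagonalizable.
J = sp.Matrix([[1, 1], [0, 1]])
assert J.eigenvals() == {1: 2}     # eigenvalue 1 with multiplicity 2
assert not J.is_diagonalizable()

# With a full set of independent eigenvectors a repeated eigenvalue is
# harmless: the identity matrix is (trivially) diagonal already.
assert sp.eye(2).is_diagonalizable()
```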
{ "domain": "rk-verlag.de", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9888419697383903, "lm_q1q2_score": 0.8591323757389555, "lm_q2_score": 0.868826771143471, "openwebmath_perplexity": 450.8891887879132, "openwebmath_score": 0.8856151103973389, "tags": null, "url": "http://tsrd.rk-verlag.de/repeated-eigenvalues.html" }
# Week 2 – Calculus, Aggregation Required: Optional: ## Homework 2 (due Friday, January 21st at 11:59PM) (solutions) 📝 Submit your answers as a PDF to Gradescope by the due date for full credit. We encourage you to discuss the readings and questions with others in the course, but all work must be your own. Remember to use Campuswire if you need guidance! ### Question 1 Below in blue is the parabola $$y = x^2$$, and in red is the line $$y = 2x + 8$$. (a) Without using integration, determine the area between the given line and parabola. This will involve a few steps. First, you’ll need to find the third point on the triangle that Archimedes specified in Quadrature of the Parabola. Some guiding questions: • What is the slope of the red line? • What is the slope of the tangent line to the parabola at any given point on the parabola? (What is the derivative of the parabola?) • At what point on the parabola is the slope of the tangent line equal to the slope of the red line?
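The steps above can be cross-checked numerically (a sketch for verification only, not a substitute for the write-up): using exact fractions, build Archimedes' inscribed triangle from the two intersection points and the point where the tangent slope $2x$ matches the line's slope, apply the $\tfrac{4}{3}$ quadrature factor, and compare with a direct integral:

```python
from fractions import Fraction as F

# Intersections of y = x^2 and y = 2x + 8: x^2 - 2x - 8 = (x-4)(x+2).
x1, x2 = F(-2), F(4)
# Archimedes' third vertex: where the tangent slope 2x equals 2.
x3 = F(1)
(ax, ay), (bx, by), (cx, cy) = [(x, x * x) for x in (x1, x2, x3)]

# Shoelace formula for the inscribed triangle's area.
tri = abs(ax * (by - cy) + bx * (cy - ay) + cx * (ay - by)) / 2

# Quadrature of the Parabola: segment area = 4/3 * triangle area.
segment = F(4, 3) * tri

# Cross-check with the antiderivative of (2x + 8 - x^2).
def antideriv(t):
    return t * t + 8 * t - t**3 / 3

assert segment == antideriv(x2) - antideriv(x1)
```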
{ "domain": "github.io", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9929882056560156, "lm_q1q2_score": 0.8338459317613109, "lm_q2_score": 0.8397339736884712, "openwebmath_perplexity": 372.99694001857904, "openwebmath_score": 0.6380635499954224, "tags": null, "url": "https://dsc-courses.github.io/dsc90-2022-wi/resources/weeks/week02/" }
surface-chemistry Title: Relation between rate of adsorption and the critical temperature of a gas How is the rate of adsorption related to the critical temperature of a gas? My textbook says more the critical temperature, better are they adsorbed. But I cannot really understand why. They threw in another confusing statement that says that easily liquefiable gases are readily adsorbed. I am thoroughly confused. I did read the question How does critical temperature affect adsorption? before posting this question, and while that question was answered, it did not answer my question fully.
{ "domain": "chemistry.stackexchange", "id": 12822, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "surface-chemistry", "url": null }
rviz, moveit, ros-kinetic Originally posted by Delb with karma: 3907 on 2018-10-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2018-10-16: @gonzalocasas: as @Delb mentions RViz is not required at all for MoveIt. It's just a visual debugging interface. To start MoveIt "headless" (which doesn't really make sense, as it's not an application with a UI) just leave out the line where it starts RViz from the launch file. Comment by gonzalocasas on 2018-11-02: Thanks folks. Of course, it was a trivial answer (or it was a silly question). I just didn't understand why most of the usually provided launch files include RViz without any arguments to disable it.
{ "domain": "robotics.stackexchange", "id": 31908, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rviz, moveit, ros-kinetic", "url": null }
sequence-alignment 290 : ProArgProArgProProGlyArgProIleSer : 300 |||!.! !!.!||||||.!!!.!||| ||| ProAsnTyrHisProProSerAsnProAlaSer 3034108 : CCCAACTATCATCCGCCGAGCAATCCGGCGTCA : 3034074 vulgar: frog 32 300 . CM041006.1 3039620 3034073 - 732 M 52 156 5 0 2 I 0 51 3 0 2 M 73 219 5 0 2 I 0 3833 3 0 2 M 32 96 S 0 1 5 0 2 I 0 46 3 0 2 S 1 2 M 14 42 F 0 2 G 1 0 M 26 78 5 0 2 I 0 795 3 0 2 M 48 144 G 0 3 M 21 63 -- completed exonerate analysis In addition, exonerate lets you control the output you want. Check out the --ryo (roll your own output) option in man exonerate. genewise GeneWise compares a protein sequence to a genomic DNA sequence, allowing for introns and frameshifting errors.
{ "domain": "bioinformatics.stackexchange", "id": 2595, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sequence-alignment", "url": null }
# Predicting eigenvalues of bigger matrices Consider the following $(3 \times 3)$ matrix: $K_3 = \left( \begin{array}{ccc} a & -1 & 0 \\ -1 & a+1 & -1 \\ 0 & -1 & a \end{array} \right)$ The question has a quantum physics context, so we'll assume that $a$ is such that $K$ is hermitian. This matrix has eigenvalues $a-1$, $a$, and $a+2$. Now consider growing the matrix to a $(4\times4)$ in the following way: $K_4 = \left( \begin{array}{cccc} a & -1 & 0 & 0 \\ -1 & a+1 & -1 & 0 \\ 0 & -1 & a+1 & -1 \\ 0 & 0 & -1 & a \end{array} \right)$ Its eigenvalues are $a-1$, $a+1$, $a+1-\sqrt{2}$, and $a+1+\sqrt{2}$. There seems to be at least some structure in this. I was wondering whether there's a way to predict what the eigenvalues of increasingly bigger matrices of this form will be. Specifically, is it possible to find the eigenvalues of:
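A quick numerical check of both spectra with NumPy, taking $a=0$ for concreteness (the whole spectrum just shifts by $a$); note the computed $K_3$ spectrum comes out as $\{a-1,\ a,\ a+2\}$:

```python
import numpy as np

K3 = np.array([[0, -1, 0],
               [-1, 1, -1],
               [0, -1, 0]], dtype=float)
K4 = np.array([[0, -1, 0, 0],
               [-1, 1, -1, 0],
               [0, -1, 1, -1],
               [0, 0, -1, 0]], dtype=float)

# Both matrices are real symmetric, so eigvalsh applies.
e3 = np.sort(np.linalg.eigvalsh(K3))
e4 = np.sort(np.linalg.eigvalsh(K4))
assert np.allclose(e3, [-1, 0, 2])
assert np.allclose(e4, [-1, 1 - np.sqrt(2), 1, 1 + np.sqrt(2)])
```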
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.988491848562538, "lm_q1q2_score": 0.8035127154047444, "lm_q2_score": 0.8128673155708975, "openwebmath_perplexity": 144.92402218889902, "openwebmath_score": 0.9612376689910889, "tags": null, "url": "https://math.stackexchange.com/questions/847051/predicting-eigenvalues-of-bigger-matrices" }
probability of your coin’s bias being 0. Consider 10 independent tosses of a biased coin with the probability of Heads at each toss equal to p, where 0 < p < 1. RE: How do you calculate the probability of a biased coin flipped 3 times? Let's say: what is the probability of getting 2 heads when tossing the biased coin 3 times, if the probability of getting a head is 0.5? Mathematical probability, on the other hand, has to do with the number of possible outcomes of an event. And depending on the payout structure, one side might or might not have an edge over the other side. When we roll a die with 6 numbers, we expect to get a 6 one time out of six. In some cases, this assumption is valid based on the physical properties, such as flipping a coin. Step 6: Reflect on the coin tossing results and implications. The randomness comes from atmospheric noise, which for many purposes is better than the pseudo-random number algorithms typically used in computer programs. 0.25, for
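The "2 heads in 3 tosses" calculation is the binomial formula $\Pr[k\text{ heads}]=\binom{n}{k}p^k(1-p)^{n-k}$; a quick sketch:

```python
from math import comb

def p_heads(k, n, p):
    """Probability of exactly k heads in n tosses of a coin with bias p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

assert p_heads(2, 3, 0.5) == 0.375        # fair coin: 3/8
# With a bias of 0.25 the same formula gives 3 * 0.25^2 * 0.75.
assert abs(p_heads(2, 3, 0.25) - 3 * 0.25**2 * 0.75) < 1e-15
```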
{ "domain": "insitut-fuer-handwerksmanagement.de", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9845754461077708, "lm_q1q2_score": 0.8072468556338093, "lm_q2_score": 0.8198933447152497, "openwebmath_perplexity": 314.30131142717147, "openwebmath_score": 0.8584415912628174, "tags": null, "url": "http://rhpa.insitut-fuer-handwerksmanagement.de/3-coin-toss-probability-calculator.html" }
c++, array, collections, c++14 if(slice_ptr > end_ptr) inside array_view::slice(const size_type, const size_type) might invoke undefined behavior (it's undefined if the condition would return true and the original allocation of the underlying contiguous memory ends at end_ptr). However, it could easily be replaced with the logically equivalent if(start_offset > size()). Also, the comparison should be >= instead of >, as end_ptr already points 1 past the last covered element, so starting a non-empty array slice there isn't possible. if (slice_size > (size() - start_offset)) in the same function could be combined with the check above to if (end_offset <= size()), as we know start_offset < end_offset from checks beforehand.
{ "domain": "codereview.stackexchange", "id": 28054, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, array, collections, c++14", "url": null }
c#, sql, mvc } } } } } Database schema basically has to insert into a Bill of Materials table, a one-to-many contracts table, a one purchase order line to many planned delivery table, etc. NOTE: This code is using the dapper.net framework. Do you think I would be best to split Save into two functions a CreateNew and Update? A very quick and loud: YES! This method is just too long. By splitting it into GetTotalJobs(), Update(int, int) and Save(int) you will make your code easier to read and maintain. The former Save() method would then look like so internal void Save() { int totalJobs = GetTotalJobs(); if (PurchaseOrderId == null) { Save(totalJobs); } else { Update(PurchaseOrderId, totalJobs); } } with the GetTotalJobs() method looking like this private int GetTotalJobs() { int totalJobs = 0;
{ "domain": "codereview.stackexchange", "id": 16089, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, sql, mvc", "url": null }
java Originally posted by ahendrix with karma: 47576 on 2015-04-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 21409, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java", "url": null }
data-structures, trees, search-algorithms, search-problem, intervals Any node $v$ of a segment tree has a corresponding interval $\text{Int}(v)$. Do a recursive search of the tree, except that you only visit nodes $v$ such that $\text{Int}(v)$ overlaps $q$. Each such node stores a list of intervals from $S$. For each such node, if $\text{Int}(v)$ is contained in $q$, check whether each of the intervals stored with $v$ is contained in $q$. That's all you need to do. Just because a data structure is designed to support one kind of operation doesn't mean it's unable to support other operations, too. That said, one challenge with segment trees is that it's not clear how to update them dynamically (to add new intervals).
{ "domain": "cs.stackexchange", "id": 9208, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "data-structures, trees, search-algorithms, search-problem, intervals", "url": null }
cosmology The article goes on to say We briefly mention that $R$ is a constant at the equilibrium so the size of the Universe never changes Is it because $R$ being constant means $\dot{R}=0$, hence the velocity of galaxies moving is $0$? It then says that furthermore we need $\lambda$ to be positive to make sense. Why is this so? And now at the equilibrium, we look at the stability of the system: $$J(R,S)_{equilibrium} = \begin{bmatrix}0& 1 \\ \lambda & 0 \end{bmatrix}$$ Clearly, this is a saddle and hence the equilibrium solution is unstable
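The saddle claim follows from the Jacobian's eigenvalues: $\mu^2-\lambda=0$ gives $\mu=\pm\sqrt{\lambda}$, one positive and one negative whenever $\lambda>0$. A quick numerical check for $\lambda=4$:

```python
import numpy as np

lam = 4.0
J = np.array([[0.0, 1.0], [lam, 0.0]])
eigs = np.sort(np.linalg.eigvals(J).real)

# Eigenvalues are +/- sqrt(lambda): opposite signs, hence a saddle.
assert np.allclose(eigs, [-2.0, 2.0])
```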
{ "domain": "physics.stackexchange", "id": 53093, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cosmology", "url": null }
We could go through the math to convince ourselves but writing a simulation is another way to validate the above intuition. The plot below shows the percentage of times each number occurs in a simulation where we use the above procedure to generate numbers between $$1$$ and $$7$$. Note that the chances of obtaining any of these outcomes is $$\frac{1}{7} \approx 0.143$$. Thus, we have strong evidence that the procedure does generate numbers uniformly between $$1$$ and $$7$$. For those interested, the Python code used to generate the plot is at the end of the post. A related problem to think about: How can we generate uniformly distributed numbers between $$1$$ and $$30$$ if we have access to a function that only gives us random numbers between $$1$$ and $$5$$? import random import matplotlib.pyplot as plt import seaborn as sns
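A hedged sketch of the standard rejection construction (which may be what the post's procedure is): roll the 1–5 source twice to get one of 25 equally likely outcomes, keep the first $21 = 3 \times 7$, and reduce mod 7:

```python
import random

def rand7(rand5):
    """Uniform integer in 1..7 from a uniform 1..5 source, by rejection."""
    while True:
        # Two rolls give a uniform value in 0..24.
        v = 5 * (rand5() - 1) + (rand5() - 1)
        if v < 21:                  # keep 21 = 3 * 7 outcomes, reject 4
            return v % 7 + 1

random.seed(0)
rand5 = lambda: random.randint(1, 5)
draws = [rand7(rand5) for _ in range(20000)]
assert set(draws) == {1, 2, 3, 4, 5, 6, 7}
```

The related 1-to-30 problem works the same way, except two rolls only give 25 outcomes: three rolls give 125, of which you would keep the first 120 = 4 × 30 and reduce mod 30.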
{ "domain": "svadali.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9924227573874892, "lm_q1q2_score": 0.8568494822754031, "lm_q2_score": 0.8633916099737807, "openwebmath_perplexity": 635.3333642118325, "openwebmath_score": 0.7763149738311768, "tags": null, "url": "https://svadali.com/2018/09/15/random-number-puzzle/" }
quantum-field-theory, quantum-spin, group-theory, group-representations, spinors The finite-dimensional irreps of $Spin(3,1)$ on complex vector spaces are in one-to-one correspondence with the f.d. complex irreps of the complexification $\mathfrak{l}_{\mathbb{C}} = \mathfrak{spin}(3,1) \otimes \mathbb{C}$ of the Lie algebra $\mathfrak{spin}(3,1)$ of $Spin(3,1)$. This Lie algebra $\mathfrak{l}_{\mathbb{C}}$ is isomorphic to the complexification $\mathfrak{k} \otimes \mathbb{C}$ of the Lie algebra $\mathfrak{k} = \mathfrak{su}(2) \oplus \mathfrak{su}(2)$. Here $\mathfrak{su}(2)$ is the Lie algebra of the real group $SU(2)$; it's a real vector space with a bracket.
{ "domain": "physics.stackexchange", "id": 5750, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, quantum-spin, group-theory, group-representations, spinors", "url": null }
Non-Human User Joined: 09 Sep 2013 Posts: 13266 Re: If x>=0 and x=root(8xy - 16y^2) then, in terms of y, x=  [#permalink] ### Show Tags 04 Sep 2019, 02:44 Hello from the GMAT Club BumpBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
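For reference, the algebra behind the topic title: squaring $x=\sqrt{8xy-16y^2}$ (valid since $x\ge 0$) gives $x^2-8xy+16y^2=(x-4y)^2=0$, hence $x=4y$. A sympy sanity check:

```python
import sympy as sp

x, y = sp.symbols("x y", nonnegative=True)

# Squaring the equation yields a perfect square: (x - 4y)^2 = 0.
assert sp.expand((x - 4 * y) ** 2) == sp.expand(x**2 - 8 * x * y + 16 * y**2)

# Numeric spot-check of the original equation with y = 3, x = 4*3 = 12.
assert sp.sqrt(8 * 12 * 3 - 16 * 3**2) == 12
```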
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9407897558991952, "lm_q1q2_score": 0.815698368900961, "lm_q2_score": 0.8670357683915537, "openwebmath_perplexity": 4754.854644804729, "openwebmath_score": 0.725179135799408, "tags": null, "url": "https://gmatclub.com/forum/if-x-0-and-x-root-8xy-16y-2-then-in-terms-of-y-x-140869.html" }
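The thread title asks to solve $x=\sqrt{8xy-16y^2}$ with $x\ge 0$; squaring gives $x^2-8xy+16y^2=(x-4y)^2=0$, so $x=4y$. A quick numeric confirmation (my own derivation from the title, not quoted from the thread):

```python
import math

def f(x, y):
    # Right-hand side of the equation x = sqrt(8xy - 16y^2)
    return math.sqrt(8 * x * y - 16 * y * y)

# Candidate solution x = 4y, checked for a few nonnegative sample values of y.
checks = [(4 * y, y) for y in (0.5, 1.0, 3.0)]
```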
python, flask, sqlalchemy Alternatively, you could push the conditional down into the SQL, and then the query would be constant: QUERY = """ SELECT AVG(stock_date - order_date) AS c, product_category FROM data_table_data WHERE stock_date BETWEEN :s1 AND :s2 AND product_category IN ('All Category', :product_category) AND product_location IN ('All Stores', :product_location) GROUP BY product_category """
{ "domain": "codereview.stackexchange", "id": 22862, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, flask, sqlalchemy", "url": null }
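A runnable variant of the constant-query idea above. All table and column names here are invented stand-ins for the record's schema, and the filter is written in the equivalent form `:param = sentinel OR column = :param` — the SQL text stays constant while a sentinel value ('All Category' / 'All Stores') disables a filter:

```python
import sqlite3

# Toy stand-in for the dashboard table; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (category TEXT, location TEXT, delay INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("Books", "NY", 3), ("Books", "LA", 5), ("Toys", "NY", 7)])

# One fixed query string; sentinel parameter values switch filters off.
QUERY = """
SELECT AVG(delay) FROM t
WHERE (:category = 'All Category' OR category = :category)
  AND (:location = 'All Stores'   OR location = :location)
"""

all_avg = conn.execute(QUERY, {"category": "All Category",
                               "location": "All Stores"}).fetchone()[0]
book_avg = conn.execute(QUERY, {"category": "Books",
                                "location": "All Stores"}).fetchone()[0]
```

Because the query text never changes, the database can cache a single prepared statement regardless of which filters are active.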
machine-learning, neural-network, classification, deep-learning Whenever you add more layers, there will be vanishing and exploding gradients, which may cause your network not to learn, or to learn very slowly. To avoid this, you should use the ReLU activation in order to avoid saturation of gradients. Moreover, you have to use He or Xavier initialization techniques to avoid starting from bad random weights. There are other techniques for solving this problem, called skip connections, but I've never seen them used in MLPs, although they are really helpful for solving the mentioned problem. Covariate shift is a problem for learning in the deeper layers. As a solution, you have to use Batch Normalization to normalize the activations of the deeper layers.
{ "domain": "datascience.stackexchange", "id": 2339, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, neural-network, classification, deep-learning", "url": null }
python Each chapter title in the original file starts with only a number; modify them so that, for example, "1" becomes "# chapter 1:" — namely, insert "# chapter " before the number and append a colon ":" after the number; finally write the new TOC to a file. Here is the code import re # f_name = 'data_multi_lines_3.md' f_name = 'data_multi_lines.txt' with open(f_name) as f: line_list = f.readlines() res_list = [] for line in line_list: res_list.append(re.sub(r'^(\d{1,2})( +.*?)', r'# chapter \1:\2', line)) with open('your_file.md', 'w') as f: for item in res_list: f.write("%s" % item)
{ "domain": "codereview.stackexchange", "id": 36276, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
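A quick check of the intended substitution (note the space after "chapter" in the replacement, so the output matches the stated "# chapter 1:" format):

```python
import re

def retitle(line):
    # Insert "# chapter " before a leading 1-2 digit number and ":" after it;
    # lines without a leading number are returned unchanged.
    return re.sub(r'^(\d{1,2})( +.*?)', r'# chapter \1:\2', line)

out = retitle("3  The Journey Begins\n")
```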
can use these two methods to compute the determinant. Inverse of 3x3 matrix. This has been done on purpose so you can compare the results from both methods and observe how they yield the same values. And so, taking into consideration the formula for the determinant of a square matrix with dimensions 2x2, we can see that equation 3 yields: At this point you may have noticed that finding the determinant of a matrix larger than 2x2 becomes a long ordeal, but the logic behind the process remains the same, and so the difficulty is similar; the only key point is to keep track of the operations you are working through, even more so with matrices larger than 3x3. Find the matrix determinant using the general method. Find its determinant using the shortcut method: Notice that the matrices A, B and C provided in both sections of exercises above are exactly the same. Double click to select MINVERSE out of those, so that you can compute the inverse of matrix A.
{ "domain": "sob5050.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9817357248544007, "lm_q1q2_score": 0.8071724420203765, "lm_q2_score": 0.8221891305219504, "openwebmath_perplexity": 317.89835394851514, "openwebmath_score": 0.7031364440917969, "tags": null, "url": "https://sob5050.com/pnpfbmg/article.php?id=619bcf-inverse-of-a-3x3-matrix-shortcut" }
javascript, php, html, parsing, regex if ($type == 2) { $ranges[] = array($value[0][5], strlen($value[0][0])); } else if ($type == 3) { preg_match_all ( "#src=[\"']((\\\\\"|\\\\'|[^\"'])*?)['\"]#imsSX", $value[0][0], $submatches, PREG_SET_ORDER|PREG_OFFSET_CAPTURE ); if (count($submatches) !=1 || !approve_dbStrSrc($submatches[0][6][0])) { $ranges[] = array($value[0][7], strlen($value[0][0])); } else { if ($possiblesave != null) { unset($matches[$possiblesave]); } } $possiblesave = null; } } return $ranges; } function get_dbStrMatchType($val) { if (count($val) == 3 && strcmp($val[2][0], "/")==0) { return 3; } else if (count($val) == 2 && strcmp($val[1][0], "/")==0)
{ "domain": "codereview.stackexchange", "id": 2343, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, php, html, parsing, regex", "url": null }
homework-and-exercises, rotational-dynamics Similar question My try: $$E_{pot}=mgh$$ where $$h=\frac{d}{2}$$ $$E_{kin}=\frac{1}{2}*J*\omega$$ where $$J=\frac{1}{12}*m*d^{2}$$ Getting equation: $$m*g*\frac{d}{2}=\frac{1}{2}*\frac{1}{12}m*d^{2}*\omega$$ next $$\omega=\frac{12g}{d}=784\frac{rad}{s}$$ $$\upsilon=\frac{\omega}{2\pi}=124.7\frac{m}{s}$$ The right answer is $$2.1\frac{m}{s}$$ What's wrong here? I wouldn't normally answer a homework question, but your working is basically correct and after all it was an accepted answer here on the Physics SE that has led you astray. Your working is fine, but the rotational kinetic energy is: $$ T = \tfrac{1}{2} I \omega^2 $$ so you have missed the power of two. The moment of inertia of a thin rod hinged at one end is: $$ I = \tfrac{1}{3} m l^2 $$ Equate the change in potential energy to the rotational kinetic energy and use $v = r\omega$ and you'll end up with: $$ v = \sqrt{3gl} $$ Putting in $g= 9.81 \, \mathrm{m\,s^{-2}}$ gives me $v = 2.10 \, \mathrm{m\,s^{-1}}$.
{ "domain": "physics.stackexchange", "id": 12523, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, rotational-dynamics", "url": null }
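As a numeric check: the excerpt doesn't quote the rod length, but the intermediate value $\omega = 12g/d = 784\,\mathrm{rad/s}$ implies $d \approx 0.15\,\mathrm m$; with that inferred value, $v=\sqrt{3gd}$ indeed comes out near the quoted $2.1\,\mathrm{m/s}$:

```python
import math

g = 9.81            # m/s^2
d = 12 * g / 784    # rod length implied by the student's omega = 12g/d = 784 rad/s
# Equating m*g*d/2 to (1/2)(1/3 m d^2) w^2 gives w^2 = 3g/d, and v = d*w:
v = math.sqrt(3 * g * d)
```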
formal-languages, regular-languages, automata, finite-automata, context-free If we reach common states, mark them as new finals in the $DFA$, $M$ (remove the old finals and do minimization). The result is a $DFA$ for $Half(L)$ or $FirstHalf(L)$ [Note: In short, design a $DFA$ for $L$, traverse from both directions (start and final); if you reach common states with the same number of transitions, mark them as new finals, and the result will be a $DFA$ for $Half(L)$ ] Example: Suppose $L=\{ab,bb\},$ $Half(L)=\{a, b\}$ Step1. Design $DFA, M$ for $L$ Step2. Reverse $M$, which becomes an $NFA,$ $N.$ Step3. $\delta(q_0,b)=q_1$ for $M$ and $\delta(q_2,b)=q_1$ for $N.$ Step4. We see that state $q_1$ is common to both $M$ and $N,$ so the new final state is $q_1.$ Step5. The new $DFA$ for $Half(L)$ or $FirstHalf(L)$ accepts only $\{a, b\}$ Miscellaneous examples to try yourself: $L=a^∗$, then $half(L) =a^∗$ $L=a(aa)^∗$, then $half(L) =\phi$ $L=a^∗b^∗$, then $half(L) =a^∗b^∗$
{ "domain": "cs.stackexchange", "id": 20123, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-languages, regular-languages, automata, finite-automata, context-free", "url": null }
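The worked example can be brute-forced for finite languages, where $Half(L)$ is just the set of first halves of the even-length strings in $L$:

```python
def half(L):
    # Half(L) = { w : there exists x with |x| == |w| and w.x in L }.
    # For a finite language this is the first half of each even-length string.
    out = set()
    for s in L:
        if len(s) % 2 == 0:
            out.add(s[:len(s) // 2])
    return out
```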
python, python-3.x, finance, tkinter, sqlite #creation of widgets self.Header = tk.Label(root , text = 'Your Expenses:', fg = 'dark green').grid(row = 0 , column = 0 , sticky = 'we') self.headernames = tk.Label(master, text ='Company , Amount Due , Account Number , Amount Paid , Due Date , ID Number , First , Last' , fg = 'dark green').grid(row = 1 , column = 0 , columnspan = 5, sticky = 'we') self.credit = tk.Label(master, text = 'Created By : Ronald Colyar', fg = 'dark green').grid(row = 3 ,column = 0 , sticky = 'we') self.listbox = tk.Listbox(master , bd = 0 ) self.listbox.grid(row = 2 , column = 0 ,columnspan = 5, sticky = 'we')
{ "domain": "codereview.stackexchange", "id": 29482, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, finance, tkinter, sqlite", "url": null }
quantum-mechanics, quantum-information, identical-particles Fock space deals with identical particles, so no you can't break down the Hilbert space into a subspace for each particle, as you can never say which particle was which. What you can do is break down the system into a space for each site, with the state being the number of particles on that site.
{ "domain": "physics.stackexchange", "id": 70258, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-information, identical-particles", "url": null }
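The site-based decomposition can be made concrete by enumerating the occupation-number basis: for $N$ indistinguishable bosons on $M$ sites, a basis state is a tuple $(n_1,\dots,n_M)$ with $\sum_i n_i = N$, and there are $\binom{N+M-1}{M-1}$ of them. A small sketch:

```python
from itertools import product
from math import comb

def fock_states(n_sites, n_particles):
    # Occupation-number basis: one occupation integer per site, summing to N.
    # Particles are never labeled individually, only counted per site.
    return [occ for occ in product(range(n_particles + 1), repeat=n_sites)
            if sum(occ) == n_particles]

states = fock_states(3, 2)   # 2 bosons on 3 sites
```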
c++, linked-list, template, collections, classes You don't declare the copy constructor, copy assignment operator, move constructor, or move assignment operators. This will cause problems with assignments of one List object to another. insert_head and insert_back are very similar functions, but have been written somewhat differently. The implementation of insert_back can be made shorter by making it similar to insert_head. In particular, the assignment to new_node->prev can be done before the if (using new_node->prev = tail;, because when head is NULL, tail should also be), while the common tail = new_node; can be done after. first, last, and size should also have the const modifier (e.g., constexpr c_size size() const). remove_all will crash because you access current->next after delete current; it also never deletes the first (head) element.
{ "domain": "codereview.stackexchange", "id": 39505, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, linked-list, template, collections, classes", "url": null }
reference-frames, inertial-frames, atmospheric-science, coriolis-effect, meteorology So: when you are at rest with respect to the Earth you are co-rotating with the Earth. Now take the case of buoyant mass flowing from west to east. Moving west to east, the mass is circumnavigating the Earth's axis faster than the Earth itself. The mass is on the slope (like a car on a banked circuit), but the mass is speeding, so it will swing wide. So: buoyant mass that is flowing from west to east will depart from the latitude that it is starting from, veering off towards the equator. Now buoyant mass that is flowing from east to west. Moving east to west, this mass is circumnavigating the Earth's axis slower than the Earth itself. But the slope is the slope for co-rotating mass. Comparison: if a car is on a banked circuit and the car is going too slow, the car will slump down. So that is what happens to air mass that is flowing from east to west: the poleward force pulls that air mass closer to the nearest pole. For completeness:
{ "domain": "physics.stackexchange", "id": 71478, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reference-frames, inertial-frames, atmospheric-science, coriolis-effect, meteorology", "url": null }
algorithms, computational-geometry $$l_i \leq a(x_i - x_p) + y_p \leq h_i$$ Now our only variable is $a$ so we can run through the $n$ inequalities, consistently choosing the strictest bounds for $a$ until we find the range of $a$ that works or that there's no solution. Total runtime is $O(n)$ since we only have to try the above process twice and finding the extreme points also takes only $O(n)$ time.
{ "domain": "cs.stackexchange", "id": 13171, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, computational-geometry", "url": null }
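The $O(n)$ intersection of bounds on $a$ can be sketched as follows: with pivot $(x_p, y_p)$ fixed, each constraint $l_i \leq a(x_i - x_p) + y_p \leq h_i$ is rearranged into an interval for $a$, flipping the inequality when $x_i - x_p$ is negative (variable and function names here are my own):

```python
def slope_range(xp, yp, pts):
    # pts: list of (x_i, l_i, h_i).  Returns (lo, hi) bounds on the slope a
    # of a line through (xp, yp) satisfying every l_i <= a*(x_i-xp)+yp <= h_i,
    # or None if the constraints are infeasible.
    lo, hi = float("-inf"), float("inf")
    for x, l, h in pts:
        dx = x - xp
        if dx == 0:
            if not (l <= yp <= h):
                return None      # the pivot itself violates this constraint
            continue
        b1, b2 = (l - yp) / dx, (h - yp) / dx
        if dx < 0:
            b1, b2 = b2, b1      # dividing by a negative flips the inequality
        lo, hi = max(lo, b1), min(hi, b2)
    return (lo, hi) if lo <= hi else None
```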
filters, audio, filter-design, signal-synthesis Title: Allpass Filter Gain Issue Background I am having issues implementing an allpass filter to model wave dispersion in a stiff string. In order to simulate wave propagation in a string, I am using a digital waveguide. I implemented the matlab code referenced in this paper in c++, and my code produces the same coefficients. This leads me to believe that my error is in the implementation of the allpass filter. I used this as a reference when implementing the allpass biquads. Relevant c++ code is shown below: //args are: x[n], x[n - 1], x[n - 2], y[n - 1], y[n - 2] float dispersion_filter(float x, float x1, float x2, float y1, float y2){ double y = 0.0f; for (int i = 0; i < disp_filter_coeffs.size(); i++) { float a1 = disp_filter_coeffs[i][0]; float a2 = disp_filter_coeffs[i][1]; float b1 = a1/a2; float b2 = 1.0/a2; float g = a2;
{ "domain": "dsp.stackexchange", "id": 12004, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, audio, filter-design, signal-synthesis", "url": null }
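Independently of the snippet above, a useful property test when debugging such code: a well-formed allpass biquad $H(z)=\frac{a_2+a_1z^{-1}+z^{-2}}{1+a_1z^{-1}+a_2z^{-2}}$ (real $a_1,a_2$) has exactly unit magnitude on the unit circle, because the numerator is the reversed denominator. A pure-Python check with arbitrary example coefficients:

```python
import cmath

def allpass_mag(a1, a2, w):
    # |H(e^{jw})| for H(z) = (a2 + a1 z^-1 + z^-2) / (1 + a1 z^-1 + a2 z^-2)
    zi = cmath.exp(-1j * w)          # z^{-1} evaluated on the unit circle
    num = a2 + a1 * zi + zi * zi
    den = 1 + a1 * zi + a2 * zi * zi
    return abs(num / den)

# Magnitude should be 1 at every frequency, independent of the coefficients.
mags = [allpass_mag(0.4, -0.3, w) for w in (0.1, 1.0, 2.5, 3.0)]
```

If an implementation's measured gain deviates from 1, the coefficient wiring (not the allpass theory) is at fault.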
on the tree diagram on the left above (note: the "chemistry" in the urn changes when you do not replace the first ball drawn). 3 are blue, and 7 are red. 1 Simple Sample Spaces…Tree Diagrams Outcome - a particular result of an experiment outcomes. With Replacement: the events are Independent (the chances don't change) Without Replacement: the events are Dependent (the chances change) Dependent events are what we look at here. 8a: Copy and complete the following tree diagram. Given you draw a R m&m in your 1st draw, what is the probability of. a) Draw a tree diagram to determine ALL possible outcomes. The value of this probability is 12/2652. Draw a tree diagram to represent all probabilities for the following. Consisting of two trials. It consists of "branches" that are labeled with either frequencies or probabilities. I don't know how to write out a tree diagram on here, but I think this one is heads -> heads, tails -> math probability - please help. EX 5: Two cards are drawn from a deck
{ "domain": "piratestorm.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9865717436787542, "lm_q1q2_score": 0.8553929746006267, "lm_q2_score": 0.8670357529306639, "openwebmath_perplexity": 564.7005215453598, "openwebmath_score": 0.6850816011428833, "tags": null, "url": "http://piratestorm.it/probability-with-replacement-tree-diagram.html" }
newtonian-mechanics, newtonian-gravity, orbital-motion, celestial-mechanics Title: How does the law of gravity work? I was taught that if I wanted to find the attraction force between two objects I would use the formula $F=\frac{G m_1 m_2}{d^2}$, where $d$ is the distance from the center of object $m_1$ to $m_2$. I find this counterintuitive because in orbits the orbiting object orbits the center of mass, not the center of the object. For instance, if we pretended that we did not know the mass of the earth and attempted to find it using the period of the moon's orbit and the distance from the center of the earth to the center of the moon, we would get an answer which is too large. So what is the purpose of the formula above? The $d$ in the formula is the distance between the centers of mass of the two objects - earth, and the satellite. If someone mentions the distance between the centers of the objects, they assume spherical bodies and uniform density, which is generally not true.
{ "domain": "physics.stackexchange", "id": 38107, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, celestial-mechanics", "url": null }
quantum-gate, mathematics, linear-algebra, pauli-group, representation-theory The set of $n$-fold tensor products of Pauli operators $\sigma_{P_1,1}\otimes\dots\otimes\sigma_{P_n,n}$ forms a basis $\mathcal{P}_n$ of the vector space of $2^n\times 2^n$ complex matrices, so any $n$-qubit unitary may be written as $$ U=\sum_{\sigma_k\in\mathcal{P}_n}a_k\sigma_k.\tag1 $$ We can use fact that the basis is orthogonal with respect to the Hilbert-Schmidt inner product to compute the coefficients in $(1)$ using $$ a_k=\frac{\mathrm{tr}(\sigma_kU)}{2^n}\tag2 $$ which can be checked by hitting $(1)$ with $\sigma_k$ and taking the trace. Now, suppose that $\sigma_k$ anticommutes with $\sigma_{X,i}$ for some $i\in[1,n]$. Then $$ \begin{align} a_k&=\frac{\mathrm{tr}(\sigma_kU)}{2^n}\tag3\\ &=\frac{\mathrm{tr}(\sigma_k\sigma_{X,i}U\sigma_{X,i})}{2^n}\tag4\\ &=-\frac{\mathrm{tr}(\sigma_{X,i}\sigma_kU\sigma_{X,i})}{2^n}\tag5\\ &=-\frac{\mathrm{tr}(\sigma_kU)}{2^n}\tag6\\ &=-a_k\tag7 \end{align} $$
{ "domain": "quantumcomputing.stackexchange", "id": 5447, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-gate, mathematics, linear-algebra, pauli-group, representation-theory", "url": null }
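Equation (2) can be sanity-checked numerically for $n=1$: with $U=(X+Z)/\sqrt2$ (the Hadamard gate), the coefficients should be $a_X=a_Z=1/\sqrt2$ and $a_I=a_Y=0$. A pure-Python sketch:

```python
import math

I2 = [[1, 0], [0, 1]]
X  = [[0, 1], [1, 0]]
Y  = [[0, -1j], [1j, 0]]
Z  = [[1, 0], [0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

# U = (X + Z)/sqrt(2), the Hadamard gate
U = [[(X[i][j] + Z[i][j]) / math.sqrt(2) for j in range(2)] for i in range(2)]

# a_k = tr(sigma_k U) / 2^n  with n = 1   (equation (2) above)
coeffs = {name: trace(matmul(sigma, U)) / 2
          for name, sigma in [("I", I2), ("X", X), ("Y", Y), ("Z", Z)]}
```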
particle-physics, soft-question, astrophysics, experimental-physics, big-bang To make the argument more clear, the following analogy was presented: A man on a flat area of sand or dirt wants to make a hill. So the man starts to dig and pile up the sand (or dirt), which is increasing and piling up as a hill. In the end a hill has been created BUT at the same time a hole has also been created, taking up equal space as the hill in the opposite direction. In this example it is clear that both do add up to nothing, since if you reverse the process then you return to the original flat area. What I can not understand are the following: How can, in the analogy and in reality, things add up to nothing? I mean, in the analogy with the pile of sand (dirt), the hill and the hole don't add up to nothing, since you always have the original flat area with sand you started with. So how is this starting state
{ "domain": "physics.stackexchange", "id": 3864, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, soft-question, astrophysics, experimental-physics, big-bang", "url": null }
haskell, recursion, random I still left out the range check, but after reading your comment I reverse what I said above, I think it is a good idea to put it in! (I looked too quickly and thought it was a value range, not a count range).
{ "domain": "codereview.stackexchange", "id": 5232, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "haskell, recursion, random", "url": null }
For the second component, we have that $f(x,y,z) = xy + D(x,z)$, and for the third $f(x,y,z) = \frac{z^3}{3} + E(x,y)$. What now? Any help please? In my textbook this is explained really in a terrible way. • If $\nabla f=\mathbf F$ then $f(\mathbf b)-f(\mathbf a)=\int_{\mathbf a}^{\mathbf b} \mathbf F(\mathbf r)\cdot\mathrm d\mathbf r$ over any path from $\mathbf a$ to $\mathbf b$. Let $\mathbf a=(0,0,0)$, $\mathbf b=(x,y,z)$, and pick any path you like from one to the other. – Rahul Mar 9 '15 at 20:17 You don't have to find the integration constant immediately. Keep proceeding as follows. After you determined that $f(x,y,z) = xy+g(y,z)$, differentiate with respect to $y$. This gives $\frac{\partial f}{\partial y}=x+\frac{\partial g}{\partial y}=F_y=x$. Thus, $\frac{\partial g}{\partial y}=0$, which implies that $g$ is a function of $z$ only. In turn, this means that $f(x,y,z)=xy+h(z)$. Next, differentiate $f$ with respect to $z$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9814534387427591, "lm_q1q2_score": 0.8199881864827518, "lm_q2_score": 0.8354835330070838, "openwebmath_perplexity": 127.34629071036211, "openwebmath_score": 0.9558497071266174, "tags": null, "url": "https://math.stackexchange.com/questions/1182684/finding-potential-function-for-a-vector-field/1182718" }
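The resulting potential $f(x,y,z)=xy+\frac{z^3}{3}$ (taking the integration constant to be zero) can be checked against the field $\mathbf F=(y,\,x,\,z^2)$ implied by the worked steps, using central finite differences:

```python
def f(x, y, z):
    return x * y + z**3 / 3        # potential found in the worked answer

def F(x, y, z):
    return (y, x, z**2)            # the vector field being integrated

def grad(fn, p, h=1e-6):
    # Central finite-difference approximation of the gradient at point p.
    x, y, z = p
    return ((fn(x + h, y, z) - fn(x - h, y, z)) / (2 * h),
            (fn(x, y + h, z) - fn(x, y - h, z)) / (2 * h),
            (fn(x, y, z + h) - fn(x, y, z - h)) / (2 * h))

p = (1.3, -0.7, 2.1)
g = grad(f, p)
```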
java, reference You can probably shorten it to Ref - as it is pretty well known term in programming in general (then I'd rename factory method in style similar to optional: Ref.of(value)). Or even Reference. I'd expect able suffix to be in the interface name rather than concrete class. Shorter and such generic name might clash with other classes (in java standard lib there are already both Reference and Ref classes) - its difficult to find the right balance in terms of naming.
{ "domain": "codereview.stackexchange", "id": 42501, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, reference", "url": null }
solid-state-physics, symmetry-breaking, piezoelectric Title: Why doesn't broken inversion symmetry imply piezoelectricity? To see piezoelectricity in a crystalline material, you must have a crystal symmetry which is non-centrosymmetric (i.e. broken inversion symmetry). The reverse is almost always true (i.e. broken inversion implies piezoelectricity), except for the single exception of crystals in the point group 432 (space groups 207-214). Crystals with this symmetry have broken inversion symmetry, but cannot have piezoelectricity. Can someone give an intuitive explanation about what is so special about point group 432 compared to all the others? Why doesn't it display piezoelectricity? Ideally, pictures should be included in the explanation. For example, consider the structure of AsH$_3$, which falls under point group 432, does not exhibit piezoelectricity despite breaking inversion symmetry.
{ "domain": "physics.stackexchange", "id": 88980, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solid-state-physics, symmetry-breaking, piezoelectric", "url": null }
On the other hand, using the double angle formulas for $$\sin$$ and $$\cos$$ (or just their complex representations) shows that the integrand has period $$\frac{\pi}{2}$$; using this observation and the symmetry of the integrand gives $$\int_0^{2 \pi} \frac{dx}{\sin^4 x + \cos^4 x} = 8 \int_0^{\frac{\pi}{4}} \frac{dx}{\sin^4 x + \cos^4 x} .$$ The given antiderivative is defined everywhere on $$[0, \frac{\pi}{4})$$, so we can write $$\int_0^{2 \pi} \frac{dx}{\sin^4 x + \cos^4 x} = 8 \cdot \lim_{a \nearrow \frac{\pi}{4}} \int_0^a \frac{dx}{\sin^4 x + \cos^4 x}$$ apply the F.T.C., and compute the limit.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9755769071055402, "lm_q1q2_score": 0.810843959911455, "lm_q2_score": 0.8311430436757313, "openwebmath_perplexity": 533.7361659027307, "openwebmath_score": 0.9997547268867493, "tags": null, "url": "https://math.stackexchange.com/questions/3275770/how-to-integrate-int-02-pi-frac1-sin4x-cos4-x-dx/3275822" }
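Both the symmetry reduction and the value of the full integral (the closed form works out to $2\sqrt2\,\pi$) can be confirmed numerically; the integrand is smooth since $\sin^4 x+\cos^4 x\ge\tfrac12$, so a composite Simpson rule suffices:

```python
import math

def integrand(x):
    return 1.0 / (math.sin(x)**4 + math.cos(x)**4)

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

full    = simpson(integrand, 0, 2 * math.pi)
quarter = 8 * simpson(integrand, 0, math.pi / 4)
```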
python Title: Sudoku Checker in Python I have written the following Sudoku checker in Python. I feel like this could be written much shorter and perhaps more efficient. Especially the part with square_columns. rows = [] columns = [] squares = [] sudoku_sets = [] for i in range(9): if i == 0: row = input("Input the sudoku values row by row.\n") else: row = input("") while len(row) != 9 or not row.isnumeric(): row = input(f"Wrong input. Please insert 9 numbers for row number {i+1}.\n") rows.append(row) for i in range(len(rows)): column = '' for j in range (len(rows)): column += rows[j][i] columns.append(column) for i in range(0,7,3): square_columns = ["", "", ""] for j in range(3): square_columns[0] += rows[j+i][:3] square_columns[1] += rows[j+i][3:6] square_columns[2] += rows[j+i][6:9] for square in square_columns: squares.append(square)
{ "domain": "codereview.stackexchange", "id": 42331, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
astrophysics, stars, diffusion Thus the total time for a He nucleus to diffuse a distance $R$ is $$\tau \sim \left(\frac{R}{l}\right)^2 \left(\frac{l}{v}\right) = \left(\frac{4m_u}{k_BT}\right)^{1/2} \left(\frac{R^2\rho}{m_u}\right) \left( \frac{e^4}{4\pi \epsilon_0^2 (k_BT)^2}\right) $$ $$\tau \sim \frac{2R^2\rho e^4}{4\pi \epsilon_0^2 m_u^{1/2} (k_BT)^{5/2}} = 2\times 10^{14} \left(\frac{\rho}{10^5 {\rm kg/m}^3}\right)\left(\frac{T}{10^7 {\rm K}}\right)^{-5/2}\left(\frac{R}{R_{\odot}}\right)^2\ {\rm years} $$ I believe this is the same approach as your 1977 reference which arrives at 1000 km/billion years. This basic "molecular diffusion" away from the core (the effect you are talking about in your question) is a very slow process indeed, although it does speed up further away from the core, since $\rho T^{-5/2}$ decreases. It is also much faster in more massive main sequence stars with lower interior densities and higher interior temperatures.
{ "domain": "physics.stackexchange", "id": 67291, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "astrophysics, stars, diffusion", "url": null }
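Plugging SI constants into the final expression reproduces the quoted $2\times10^{14}$-year prefactor (an order-of-magnitude check with rounded constants, not a precision calculation):

```python
import math

# Rounded SI constants
e     = 1.602e-19   # elementary charge, C
eps0  = 8.854e-12   # vacuum permittivity, F/m
m_u   = 1.661e-27   # atomic mass unit, kg
k_B   = 1.381e-23   # Boltzmann constant, J/K
R_sun = 6.957e8     # solar radius, m
year  = 3.156e7     # seconds per year

rho, T, R = 1e5, 1e7, R_sun   # the reference values used in the formula

# tau ~ 2 R^2 rho e^4 / (4 pi eps0^2 m_u^{1/2} (k_B T)^{5/2})
tau = 2 * R**2 * rho * e**4 / (4 * math.pi * eps0**2
                               * math.sqrt(m_u) * (k_B * T)**2.5)
tau_years = tau / year
```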
The constructivity of type-theoretic logic means it has an intrinsic computational meaning, which is of interest to computer scientists. It also means that type theory provides axiomatic freedom. For example, while by default there is no construction witnessing $\mathsf{LEM}$, the logic is still compatible with the existence of one (see §3.4 (http://planetmath.org/34classicalvsintuitionisticlogic)). Thus, because type theory does not deny $\mathsf{LEM}$, we may consistently add it as an assumption, and work conventionally without restriction. In this respect, type theory enriches, rather than constrains, conventional mathematical practice. We encourage the reader who is unfamiliar with constructive logic to work through some more examples as a means of getting familiar with it. See Exercise 1.12 (http://planetmath.org/node/87565) and Exercise 1.13 (http://planetmath.org/node/87566) for some suggestions.
{ "domain": "planetmath.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9873750477464142, "lm_q1q2_score": 0.8002466466258932, "lm_q2_score": 0.8104789040926008, "openwebmath_perplexity": 210.31961275263185, "openwebmath_score": 0.9204012751579285, "tags": null, "url": "http://planetmath.org/111PropositionsAsTypes" }
c, parsing // tokId --> ( --> Int --> , --> Int --> ) --> = --> ( --> Double --> , --> Double ... --> ) matrix* parse_matrix(scanner* pthis) { matrix* mat; int count = 0; char* id_name = NULL; int m = 0; int n = 0; token_t pre_token; token_t token; do { token = next_token(pthis); //If token type is a identifier, then we will try to parse a matrix declaration. if (token.type == tokId) { // Duplicate the token data id_name = strdup(token.data_string); token = next_token(pthis); // I must have an open parenthesis after an identifier if (token.type == tokOparen) { token = next_token(pthis); // Parsing the dimensions of the matrix if (token.type == tokNumber) { m = token.data_int; token = next_token(pthis);
{ "domain": "codereview.stackexchange", "id": 42487, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, parsing", "url": null }
electromagnetism Title: Old magnetostatics problem - new doubt Suppose that the magnetic field in some region has the form $\mathbf{B} = kz \mathbf{\hat{x}}$ (where $k$ is a constant). Find the force on a square loop (side $a$), lying in the $yz$ plane and centered at the origin, if it carries a current $I$, flowing counterclockwise, when you look down the $x$ axis.
{ "domain": "physics.stackexchange", "id": 89707, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism", "url": null }
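The force can be checked by discretizing the line integral $\mathbf F = I\oint d\mathbf l\times\mathbf B$ with $\mathbf B = kz\,\hat{\mathbf x}$. A sketch with arbitrary example values of $I$, $k$ and $a$: with this traversal sense (circulation about $+\hat{\mathbf x}$), the net force comes out along $\hat{\mathbf z}$ with magnitude $Ika^2$, and the $y$ component integrates to zero:

```python
def loop_force(I, k, a, n=4000):
    # Square loop of side a in the yz-plane, centered at the origin, traversed
    # counterclockwise as seen looking down the x-axis (circulation about +x).
    corners = [(a/2, -a/2), (a/2, a/2), (-a/2, a/2), (-a/2, -a/2)]  # (y, z)
    Fx = Fy = Fz = 0.0
    for (y0, z0), (y1, z1) in zip(corners, corners[1:] + corners[:1]):
        for i in range(n):
            t = (i + 0.5) / n                    # midpoint rule along the side
            z = z0 + t * (z1 - z0)
            dy = (y1 - y0) / n
            dz = (z1 - z0) / n
            # dF = I dl x B, with dl = (0, dy, dz) and B = (k z, 0, 0):
            Fy += I * k * z * dz
            Fz += -I * k * z * dy
    return Fx, Fy, Fz

F = loop_force(I=2.0, k=3.0, a=0.5)
```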
ros, gazebo, world, ros-indigo Warning [parser.cc:478] XML Attribute[name] in element[physics] not defined in SDF, ignoring. Warning [parser.cc:478] XML Attribute[default] in element[physics] not defined in SDF, ignoring. Error [parser.cc:697] XML Element[torsional], child of element[friction] not defined in SDF. Ignoring.[friction] Error [parser.cc:688] Error reading element <friction> Error [parser.cc:688] Error reading element <surface> Error [parser.cc:688] Error reading element <collision> Error [parser.cc:688] Error reading element <link> Error [parser.cc:688] Error reading element <model> Error [parser.cc:688] Error reading element <world> Error [parser.cc:348] Unable to read element <sdf> Error: Could not find the 'robot' element in the xml file at line 81 in /build/buildd/urdfdom-0.2.10+dfsg/urdf_parser/src/model.cpp Error [parser_urdf.cc:2608] Unable to call parseURDF on robot model Error [parser.cc:273] parse as old deprecated model file failed.
{ "domain": "robotics.stackexchange", "id": 30116, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, gazebo, world, ros-indigo", "url": null }
encoding-scheme Even if there is a new standard that's widely adopted, your system will likely keep functioning for the foreseeable future with little to no changes. There are a lot of legacy systems out there. If your system doesn't support the new encoding, you may have some issues with the user or other systems trying to send you data you don't support. But your system could still use UTF-8 internally, even if this means you don't support some characters (which might not be good, but it won't necessarily break your system). Also, if it were to be replaced due to a reason other than running out of space (which, as noted above, doesn't seem likely any time soon), UTF-8 could likely be extended to include any characters in the new encoding. Meaning you can just convert from one encoding to the other where required and UTF-8 would still be usable. Unicode versus Unicode?
{ "domain": "cs.stackexchange", "id": 16403, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "encoding-scheme", "url": null }
functional-programming, library, delphi var BlockInput: function(Block: BOOL): BOOL; stdcall; begin Result := LoadFunctionFromLibrary('User32.dll', 'BlockInput', @BlockInput) and BlockInput(not Enable); end; Just like Dangph, I wonder why you need this. There are other better approaches suggested in the previous comments. Even if this is useful, your solution is not great: 1. You repeatedly load the library and the function. 2. There is no way for you to free the loaded libraries. Here is a better way to do it: type TFunctionLoader = class private FLibraries: TStrings; // This stores the library handles and names FFunctions: TStrings; // This stores the function pointers and names public constructor Create; destructor Destroy; override; function LoadFunction(const LibraryName, FunctionName: string; out FunctionPointer: Pointer): Boolean; end; { TFunctionLoader }
{ "domain": "codereview.stackexchange", "id": 31319, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "functional-programming, library, delphi", "url": null }
navigation, ekf, navsat-transform-node, robot-localization, ekf-localization-node Moreover, I've found this package, gps_common, which subscribes to the /fix topic and outputs an /odom topic. Should I use it as input for the ekf_localization node?
{ "domain": "robotics.stackexchange", "id": 26648, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, ekf, navsat-transform-node, robot-localization, ekf-localization-node", "url": null }
c#, performance, coordinate-system

public override string ToString()
{
    return "<" + X + ", " + Y + ", " + Z + ">";
}

public float[] ToArray()
{
    return new float[] { X, Y, Z };
}

public Vector3 UpdateFromArray(float[] arr)
{
    if (arr.Length < 3)
        throw new Exception("Array is too small to convert to vector");
    else
    {
        X = arr[0];
        Y = arr[1];
        Z = arr[2];
    }
    return this;
}

public Vector3 UpdateFromArray(float[] arr, int startIndex)
{
    if (startIndex + 2 > arr.Length - 1)
        throw new Exception("startindex is too high to fill vector");
    else
    {
        X = arr[startIndex];
        Y = arr[startIndex + 1];
        Z = arr[startIndex + 2];
    }
    return this;
}
{ "domain": "codereview.stackexchange", "id": 35995, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, performance, coordinate-system", "url": null }
• Thanks for the hint Dilip, I'm afraid I don't understand it fully though. ".. for a fixed value of $y$, $0 < y < 1$, is nonzero only for those $x$ satisfying $y<x<1$." Are you referring to the blue area on the chart? – soren.qvist Jan 23 '13 at 10:51 • @soren.qvist Yes. I am referring to the blue area on the chart. $f_Y(0.4)$ is the integral (area under the curve) of a function of $x$ which has value $(15(0.4)^2)x=2.4x$ if $x$ is between $0.4$ and $1$ (the blue area) and $0$ otherwise. Repeat for other fixed values of $y$, and notice that each time the numerical value of $f_Y(y)$ works out to be the same number as obtained by "plugging in" the chosen value of $y$ into the expression $f_Y(y)$ as given in your answer sheet. Then, comes the "Hey Ma, I think I see a pattern!" moment and you realize that $f_Y(y)$ equals the integral shown. – Dilip Sarwate Jan 23 '13 at 15:43
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9919380072800831, "lm_q1q2_score": 0.8039448404202001, "lm_q2_score": 0.8104789155369048, "openwebmath_perplexity": 130.7774244743372, "openwebmath_score": 0.9453205466270447, "tags": null, "url": "https://stats.stackexchange.com/questions/48304/how-to-find-marginal-distribution-from-joint-distribution-with-multi-variable-de" }
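Dilip's "repeat for other fixed values of $y$" suggestion is easy to automate. The following Python sketch is my editorial illustration: the joint density $f(x,y) = 15y^2x$ on $0 < y < x < 1$ and the closed form $f_Y(y) = 7.5\,y^2(1-y^2)$ are inferred from the numbers in the comments ($15(0.4)^2 x = 2.4x$ on $0.4 < x < 1$), not stated verbatim in the thread.

```python
# Assumed joint density, reconstructed from the comment thread:
# f(x, y) = 15 * y^2 * x on the triangle 0 < y < x < 1, and 0 elsewhere.
def f_joint(x, y):
    return 15.0 * y**2 * x if 0 < y < x < 1 else 0.0

def f_Y(y, n=100_000):
    """Midpoint-rule approximation of the marginal f_Y(y) = int_y^1 f(x, y) dx."""
    width = (1.0 - y) / n
    return sum(f_joint(y + (i + 0.5) * width, y) for i in range(n)) * width

# Closed form from doing the x-integral by hand:
# f_Y(y) = 15 y^2 * (1 - y^2) / 2 = 7.5 y^2 (1 - y^2)
def f_Y_closed(y):
    return 7.5 * y**2 * (1.0 - y**2)

print(round(f_Y(0.4), 4))         # → 1.008
print(round(f_Y_closed(0.4), 4))  # → 1.008
```

Running both at several values of $y$ and watching them agree is exactly the "Hey Ma, I think I see a pattern!" moment described above, done numerically.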
ros, joy-node, joy

cd ~/catkin_ws
catkin_make
rospack find joy

Originally posted by Shay with karma: 763 on 2016-09-14. This answer was ACCEPTED on the original site. Post score: 1

Original comments

Comment by Zaunkönig on 2016-09-14: Thanks for your input! I've tried that, but I get the same error (package ros-jade-joy could not be found). Unfortunately, I don't know anything about the ROS installation on that computer, since I didn't set it up. (I'm definitely way out of my depth here - I don't usually work with ROS.)

Comment by Shay on 2016-09-14: So, just follow the install guide of ROS Jade and the ROS beginner tutorials.

Comment by Zaunkönig on 2016-09-14: I'm not sure what you mean - do you think it's a problem with the local ROS installation? Because I don't think I can change that. After looking through the installation tutorial, I tried apt-cache search ros-jade - ros-jade-joy is not among the packages it lists.
{ "domain": "robotics.stackexchange", "id": 25760, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, joy-node, joy", "url": null }
keras Title: What does the huge number on top mean?

I'm trying to visualize the neural network architecture.

from keras.utils import plot_model
plot_model(model, to_file='model.png', show_shapes=True)

What does the huge number on top mean? It does not represent anything of significance and is probably a bug (see the links below). It is caused by the use of the Sequential API, which omits the input layer and directly takes the embedding layer as input. It can be removed by using the functional API or by commenting this out in keras/engine/sequential.py:

@property
def layers(self):
    # Historically, `sequential.layers` only returns layers that were added
    # via `add`, and omits the auto-generated `InputLayer`
    # that comes at the bottom of the stack.
    if self._layers and isinstance(self._layers[0], InputLayer):
        return self._layers[1:]
    return self._layers

You can take a look at the issues raised on GitHub regarding this here and here.
{ "domain": "datascience.stackexchange", "id": 5225, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "keras", "url": null }
research-level, quantum-information, observables Title: Constructing a CP map with some decaying property Given some observable $\mathcal O$ on $\mathcal H$, it is simple to construct a CP (completely positive) map $\Phi:\mathcal{H}\to \mathcal{H}$ that conserves this quantity. All one has to observe is that $$ \text{Tr}(\mathcal O \, \Phi[\rho]) = \text{Tr}(\Phi^*[\mathcal O] \rho).$$ Therefore, if we impose $\Phi^*[\mathcal O] = \mathcal O$, then $\text{Tr}(\mathcal O \, \Phi[\rho])=\text{Tr}(\mathcal O \rho)$ for every state $\rho$ on $\mathcal H$. That amounts to imposing that the Kraus operators of $\Phi^*$ commute with $\mathcal O$. I'd like, however, to construct a trace-preserving CP map for which the expectation value of $\mathcal O$ does not increase for any state $\rho$. More explicitly, given $\mathcal O$, I want to construct $\Gamma:\mathcal H \to \mathcal H$ such that $$ \text{Tr}(\mathcal O\, \Gamma[\rho]) \le \text{Tr}(\mathcal O \rho) \quad \text{for every state } \rho. $$
{ "domain": "physics.stackexchange", "id": 3333, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "research-level, quantum-information, observables", "url": null }
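An editorial note (not part of the original question): the same duality the poster already uses for the conservation case pins down exactly what is needed here, since the desired trace inequality over all states is equivalent to an operator inequality on the dual map.

```latex
% Following the duality already used in the question: for a trace-preserving
% CP map \Gamma with dual \Gamma^{*},
\operatorname{Tr}\!\big(\mathcal{O}\,\Gamma[\rho]\big)
  = \operatorname{Tr}\!\big(\Gamma^{*}[\mathcal{O}]\,\rho\big)
  \le \operatorname{Tr}\!\big(\mathcal{O}\,\rho\big)
  \quad \text{for every state } \rho
\;\Longleftrightarrow\;
\Gamma^{*}[\mathcal{O}] \le \mathcal{O},
% where \le on the right is the operator (positive-semidefinite) order;
% the equivalence follows by evaluating on pure states |\psi\rangle\langle\psi|.
```

So the construction problem reduces to finding a trace-preserving CP map whose dual contracts $\mathcal O$ in the operator order, just as the conservation case reduced to $\Phi^*[\mathcal O] = \mathcal O$.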
asymptotics, runtime-analysis, polynomial-time Title: How to prove a polynomial running time grows slower than an exponential one using the definition of big O This is for homework, so feel free not to give me an answer but steer me in the right direction. The problem states: Prove that $n^{1000000} = O(1.000001^n)$ using the formal definition of Big-O.
{ "domain": "cs.stackexchange", "id": 8161, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "asymptotics, runtime-analysis, polynomial-time", "url": null }
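Since the poster asked for a nudge rather than a full solution, here is a standard route (my editorial sketch, not from the thread): compare logarithms, which turns the contest into logarithmic vs. linear growth.

```latex
% Taking logarithms of both sides:
\log\!\left(n^{10^{6}}\right) = 10^{6}\,\log n,
\qquad
\log\!\left(1.000001^{\,n}\right) = n\,\log(1.000001).
% Since \log(1.000001) > 0 is a fixed positive constant and
% (\log n)/n \to 0 as n \to \infty, there exists an n_0 such that
% 10^{6}\,\log n \le n\,\log(1.000001) \quad \text{for all } n \ge n_0.
% Exponentiating gives n^{10^{6}} \le 1.000001^{\,n} for n \ge n_0,
% i.e. the constant C = 1 and threshold n_0 witness the definition of big-O.
```

The remaining work, which is the actual exercise, is producing an explicit $n_0$ from the inequality $(\log n)/n \le \log(1.000001)/10^{6}$.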