homework-and-exercises, forces, classical-mechanics $$v_{y1}=gt_1 = \sqrt{2g(h+d\sin\theta)}\\ v_{y2}=gt_2 = \sqrt{2g(h-d\sin\theta)}$$ The speed $v_c$ when being caught is (we approximate $\sin^2\theta\approx0$) $$v_c = \sqrt{v_x^2+v_{y2}^2}=\sqrt{\frac{b^2g+16gh(h-d\sin\theta)}{8h}}$$ and the speed $v_r$ when released (again we approximate $\sin^2\theta\approx0$) $$v_r = \sqrt{v_x^2+v_{y1}^2}=\sqrt{\frac{b^2g+16gh(h+d\sin\theta)}{8h}}$$ Let's denote by $T$ the period of the motion and assume that during the time the ball is in your hand, its speed increases linearly from $v_c$ to $v_r$: $$\pi d = v_c\frac T 2 + \frac12\frac{v_r-v_c}{\frac T 2}\left(\frac T 2\right)^2$$ From that we have $$T = \frac{4\pi d}{v_c + v_r}$$ With the same linear-increase assumption, the dependence of the speed on time is therefore: $$v(t) = v_c + \frac{v_r - v_c}{\frac T 2}t = v_c + \frac{v_r^2 - v_c^2}{2\pi d}t$$ For your next steps, look up centripetal force.
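To make the formulas concrete, here is a small numerical sketch of mine; the values of $b$, $h$, $d$ and $\theta$ are made-up placeholders rather than numbers from the original problem, and the expression for $v_x^2$ is read off from the $v_c$ formula above.

import math

# Illustrative numbers only (not from the original problem): hand separation b,
# throw height h, hand-circle diameter d, release angle theta, all SI units.
g, b, h, d, theta = 9.8, 0.4, 0.5, 0.1, math.radians(10)

v_x2 = b**2 * g / (8 * h)                                    # horizontal speed squared, sin^2(theta) ~ 0
v_c = math.sqrt(v_x2 + 2 * g * (h - d * math.sin(theta)))    # speed when caught
v_r = math.sqrt(v_x2 + 2 * g * (h + d * math.sin(theta)))    # speed when released
T = 4 * math.pi * d / (v_c + v_r)                            # period from the semicircular hand path

print(f"v_c = {v_c:.3f} m/s, v_r = {v_r:.3f} m/s, T = {T:.3f} s")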
{ "domain": "physics.stackexchange", "id": 59797, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, forces, classical-mechanics", "url": null }
null space of a matrix. Rotation, coordinate scaling, and reflection. To show that the null space is indeed a vector space it is sufficient to show that u, v ∈ N(A) ⇒ u + v ∈ N(A) and u ∈ N(A) ⇒ cu ∈ N(A) for any scalar c. These are true due to the distributive law of matrices. However, in our case here, A^2 is not zero, and so we continue with Step 3. The dimension of the null space of a matrix is the nullity of the matrix. Step 3. In the special case when M is an m × m real square matrix, the matrices U and V* can be chosen to be real m × m matrices too. A diagonal matrix with all its main diagonal entries equal is a scalar matrix, that is, a scalar multiple λI of the identity matrix I. D. Skew symmetric matrix. Matrix multiplication is also known as the matrix product. When we add or subtract the 0 matrix of order m*n from any other matrix, it returns the same matrix. We know that a matrix can be defined as an array of numbers. If a matrix A is symmetric as well as skew-symmetric, then A is a (A) Diagonal matrix (B) Null matrix. We use the notation I_p to denote a p×p identity matrix. 0, a matrix composed entirely of zeros, is called a null matrix. A submatrix of the given matrix can be obtained by deleting some of its rows and/or columns. Even a single number is stored as a matrix. Let M be an arbitrary square matrix and Z be a zero matrix of the same dimension. A. has rank zero. It is a binary operation that produces a single matrix by taking two or more different matrices. A = [3 0; 0 3] and B = [5 0 0; 0 5 0; 0 0 5] are examples of scalar matrices. The identity matrix is also an example of a scalar matrix. Learn what a zero matrix is and how it relates to matrix addition, subtraction, and scalar multiplication. Symmetric. A matrix is a two-dimensional, rectangular
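As a side note, the closure conditions and the nullity can be checked mechanically; the following SymPy sketch uses a made-up matrix and is an illustration added here, not part of the quoted page.

from sympy import Matrix

# Small illustrative example: compute a null space and its nullity.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])          # rank 1, so nullity = 3 - 1 = 2

basis = A.nullspace()            # list of basis vectors of N(A)
print("nullity =", len(basis))   # 2
for v in basis:
    print(v.T, "maps to", (A * v).T)   # each prints the zero vector, confirming A v = 0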
{ "domain": "onepercentevent.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9796676490318648, "lm_q1q2_score": 0.8660646773173749, "lm_q2_score": 0.8840392741081575, "openwebmath_perplexity": 455.83827433220875, "openwebmath_score": 0.7774893641471863, "tags": null, "url": "http://onepercentevent.com/hiccups-linen-qhnvw/4715f0-is-null-matrix-a-scalar-matrix" }
general-relativity, time Title: Understanding how the rate of time changes The rate at which time passes is relative depending on speed and the gravity as predicted in general relativity. This theory has been tested by scientists by comparing two identical atomic clocks, one on Earth the other on a rocket speeding at escape velocity. The initially synchronised clocks measured different amounts of time when the rocket returned. Given the current scientific definition of time rate, "The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom", does this mean that 1) the above definition is only true on the surface of an object with Earths mass and 2) the fundamental properties of sub-atomic particles change on the speeding rocket so the rate at which electrons of the caesium 133 atom oscillate between the two energy levels is different? The definition is true everywhere, but only as long as you and the caesium atom are in the same place and moving at the same rate. Suppose two scentists calibrate their clocks on Earth to make sure they measure time at the same rate, then one scientist stays on Earth while the other scientist flies off in a rocket travelling near the speed of light. If the scientists count the number of oscillations of their caesium atom in one second they'll both count 9,192,631,770 (give or take experimental error). However if they count the oscillations of the other caesium atom they will both count less than 9,192,631,770 because they will see time running slowly for the other scientist. Re your Q2, I wouldn't say: the fundamental properties of sub-atomic particles change Firstly the scientist on the rocket would deny there was any change (though they would claim the earth's time had changed). Secondly though the Earth scientist would claim time has changed on the rocket, this change affects everything on the rocket not just the fundamental properties of sub-atomic particles.
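A rough numerical sketch of that last point (my own illustration; the relative speed of 0.6c is an arbitrary choice, not something from the question):

import math

# At an assumed relative speed of 0.6 c the Lorentz factor is
# gamma = 1 / sqrt(1 - v^2/c^2) = 1.25, so while one scientist's own caesium atom
# completes 9,192,631,770 oscillations, they observe the other atom complete fewer.
beta = 0.6                                  # assumed v/c, purely illustrative
gamma = 1 / math.sqrt(1 - beta**2)          # 1.25
own_count = 9_192_631_770
other_count = own_count / gamma             # oscillations observed in the other clock
print(f"gamma = {gamma:.3f}, observed count of the other atom ≈ {other_count:,.0f}")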
{ "domain": "physics.stackexchange", "id": 6745, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, time", "url": null }
interstellar-medium, astrochemistry The new work may also solve another longstanding puzzle. Carbon chains with more than nine atoms are unstable, the team explains. Yet observations have detected more complex carbon molecules in interstellar space. How nature builds these complex carbon molecules from simpler carbon molecules has been a mystery for many years. Buseck explained, "Longer carbon chains are stabilized by the addition of iron clusters." This opens a new pathway for building more complex molecules in space, such as polyaromatic hydrocarbons, of which naphthalene is a familiar example, being the main ingredient in mothballs. Said Timmes, "Our work provides new insights into bridging the yawning gap between molecules containing nine or fewer carbon atoms and complex molecules such as C60 buckminsterfullerene, better known as 'buckyballs.'" On Earth we don't see a big difference between hydrocarbon chains with lengths below and above 9 (think kerosene, wax...); why is there such a cutoff in stability in interstellar space? What is it about hydrocarbon chains longer than 9 atoms that makes them unstable there but not here? Unlike the saturated hydrocarbons in kerosene, carbynes are unsaturated carbon chains with alternating single and triple bonds. Molecules containing such chains are called polyynes, e.g. the short cyanopolyyne HC5N: H−C≡C−C≡C−C≡N Those carbon atoms readily interact, and long chains (if they form; see comments) are more likely to crosslink or form cycles than to remain linear. Tarakeshwar et al. suggest that iron clusters in "pseudocarbynes" inhibit this by bonding to some of the carbons. Loomis et al. 2016 looked for cyanopolyynes in radio spectra of Taurus molecular cloud 1. They got a good HC9N signal but did not detect the HC11N lines Travers et al. 1996 observed in the lab.
{ "domain": "astronomy.stackexchange", "id": 3827, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "interstellar-medium, astrochemistry", "url": null }
homework-and-exercises, electric-circuits, capacitance, voltage, inductance Title: Finding the current and voltage in a circuit with DC sources I know that in a circuit with DC sources a capacitor (steady state) can be replaced with an open circuit and an inductor (steady state) can be replaced with a short circuit. My understanding is that since there will be no current through the capacitors in steady state I could just remove them from the circuit. I've simplified the circuit using the information given above but I am unsure what to do first to find ix and Vx. I tried combining the two 2 ohm resistors and doing a current division using the 4 amp source but ix turned out to be different from the answer (ix = -1/4A and Vx = 9/2V) Next step in a systematic approach like you have started would be to use superposition. In that technique you find the contribution of each source separately to the $i_x$ and $V_x$. Be sure that you pay careful attention to all signs. A missed negative can really mess you up. First, find the (signed) contribution to $i_x$ and $V_x$ from the 5V supply by replacing the current supply with an open circuit. Then, find the (signed) contribution to $i_x$ and $V_x$ from the 4A current supply by replacing the voltage supply with a short circuit. If you ever have any dependent supplies, you cannot replace them.
{ "domain": "physics.stackexchange", "id": 20148, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electric-circuits, capacitance, voltage, inductance", "url": null }
fluid-dynamics, newtonian-gravity, moon, tidal-effect Title: How can the Moon have such a strong effect on the ocean? The gravitational acceleration on Earth is approximately $ 10 \mathrm{m}/\mathrm{s}^2 $. Compared to this, the tidal effect of the Moon's gravity gives a local variation in the acceleration of approximately $ 9 \cdot 10^{-7} \mathrm{m}/\mathrm{s}^2 $, that is, seven orders of magnitudes less. The level of the water can rise $ 1 \mathrm{m} $ during high tide, which is only three and a half orders of magnitude smaller than the depth of the ocean. How can this small variation in gravity move so much water and cause such a strong effect on the seas? The relevant "100%" from which you should calculate the percentage isn't the depth of the ocean but the radius of the Earth $$ R\sim 6,378,000\,{\rm m} $$ Multiply this $R$ by $10^{-7}$ and you will get $0.6$ meters, a reasonable estimate for average tides. You must understand that the surface of the ocean always tries to create an "equipotential surface" – connect all points that have the same gravitational potential. The Earth's gravity (plus the centrifugal potential) adds the major contribution to the potential and, as you said, the Moon modifies this function by corrections that are 7 orders of magnitude smaller and that are anisotropic (different in different directions). That's why the ellipse we get because of the Moon will differ from the previous one by corrections of order $10^{-7}$, too. For example, if you imagine the Moon-less Earth to be a sphere, its ocean is spherical, i.e. ellipsoid with semi-axes $a=b=c$. A correction to the original potential that is 7 orders of magnitude smaller will create $|a-b|/a$ of order $10^{-7}$. All these calculations may be done much more accurately although the precise shape of continents etc. is needed for learning the precise shape of tides at various points of the real globe.
{ "domain": "physics.stackexchange", "id": 5982, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-dynamics, newtonian-gravity, moon, tidal-effect", "url": null }
$$f'(x)<0\iff x<0$$ Now use IVT to show that f has exactly two intersection points with the x axis. • yeah that's my problem, how do I show it has exactly two intersection points? since we haven't done investigations of functions yet.. – Nicole Jan 6 '18 at 11:57 • @Nicole you need to investigate the function by its derivative – user Jan 6 '18 at 11:58 • I see what you're saying but then the function is equal to -1 whereas if you consider the function to be what you said then the solutions would need to make f(x) = 0, if I'm not mistaken? – Nicole Jan 6 '18 at 12:04 • @Nicole f has a negative minimum at x=0 and is positive for some x>a and x<b thus, since it is strictly increasing for x>0 and strictly decreasing for x<0 it is easy to show that f has necessarily only two intersections with the x axis. – user Jan 6 '18 at 12:09 Let $f(x)=x^2+x\cos{x}-\sin{x}-1.$ Thus, for all $|x|\geq\sqrt3$ by C-S we obtain $$f(x)=x^2-1-(\sin{x}-x\cos{x})\geq$$ $$\geq x^2-1-\sqrt{(\sin^2x+\cos^2x)(1^2+(-x)^2)}=x^2-1-\sqrt{1+x^2}=$$ $$=\frac{x^2(x^2-3)}{x^2-1+\sqrt{1+x^2}}\geq0.$$ The equality does not occur, which says that the equation $f(x)=0$ has no roots for $|x|\geq\sqrt3$.
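A quick numerical sanity check of this argument (a sketch of mine, not part of the original answer): $f(0)=-1<0$ while $f(\pm\sqrt3)>0$, so the intermediate value theorem places one root in each of $(-\sqrt3,0)$ and $(0,\sqrt3)$, and monotonicity on each side of $0$ rules out any others.

from math import sqrt, cos, sin
from scipy.optimize import brentq

f = lambda x: x**2 + x * cos(x) - sin(x) - 1

left_root = brentq(f, -sqrt(3), 0)   # sign change on (-sqrt(3), 0)
right_root = brentq(f, 0, sqrt(3))   # sign change on (0, sqrt(3))
print(left_root, right_root)         # the two (and, by the argument above, only) roots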
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9748211597623861, "lm_q1q2_score": 0.8014873681302704, "lm_q2_score": 0.8221891370573388, "openwebmath_perplexity": 273.22434266686986, "openwebmath_score": 0.8425495028495789, "tags": null, "url": "https://math.stackexchange.com/questions/2594196/how-to-show-that-an-equation-has-exactly-two-solutions" }
We could get the z-score for any other observed value following a similar approach. For instance, the z-score for a return of 1 will be: \begin{align*} Z & =\cfrac {(1 – 0.6)}{0.2} \\ & = 2 \quad (\text{The return of } 1 \text{ is two standard deviations above the mean}) \\ \end{align*} Calculating Probabilities using z-values under the Standard Normal Distribution Using the standard normal distribution table, we can calculate the probability that a normally distributed random variable Z, with mean equal to 0 and variance equal to 1, is less than or equal to z, i.e., P(Z ≤ z). However, the table does this only when we have positive values of z. Simply put, if the examiner asks you to find the probability behind a given positive z-value, you’d have to look it up directly on the table. P(Z ≤ z) = θ(z) when z is positive Example: Using the z-score table Using the data from our first example, suppose you were asked to calculate the probability that the return is less than 1. Solution First, you’d be required to calculate the z-value (2 in this case). P(Z ≤ 2) can be read off directly from the table. You just move down and locate the z-value that lies to the right of “2”, i.e., 0.9772. Note: The table above is incomplete. Negative z-values If we have a negative z-value and do not have access to the negative values from the table (as shown below), we can still calculate the corresponding probability by noting that: $$P(Z \le -z) = 1 – P(Z \le z) \text{ or}$$ $$\theta(–z) = 1 – \theta(z)$$ This relationship is true when we consider the following facts: 1. The total area (probability) under the standard normal distribution is 1. 2. The standardized normal distribution is symmetrical about the mean. Question Calculate P(Z  ≤ -2.5) A. 0.9938 B. 0.0062 C. 0.06 Solution
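A worked solution follows directly from the symmetry relationship above: P(Z ≤ −2.5) = 1 − P(Z ≤ 2.5) = 1 − 0.9938 = 0.0062, i.e., choice B. The same numbers can be checked with SciPy; this small sketch is my addition, not part of the original page.

from scipy.stats import norm

# Standard normal CDF values used above.
print(norm.cdf(2.0))      # ~0.9772, matches the table lookup for z = 2
print(norm.cdf(-2.5))     # ~0.0062, i.e. 1 - norm.cdf(2.5), answer B
print(1 - norm.cdf(2.5))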
{ "domain": "analystprep.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9937100982356567, "lm_q1q2_score": 0.8101047637701386, "lm_q2_score": 0.8152324960856175, "openwebmath_perplexity": 593.6882081380221, "openwebmath_score": 0.9986006617546082, "tags": null, "url": "https://analystprep.com/cfa-level-1-exam/quantitative-methods/standard-normal-distribution-calculations/" }
javascript, unit-testing, rational-numbers function divide(f1, f2) { return multiply(f1, invert(f2)); } function add(f1, f2) { var r = gcd(f1.d, f2.d); return createFraction( f1.n * r[1] + f2.n * r[0], f1.d * r[1] ); } function subtract(f1, f2) { return add(f1, negate(f2)); } Testing Code function equals(f1, f2) { return f1.n === f2.n && f1.d === f2.d; } function toString(f0) { return f0.n + "/" + f0.d; } function testEquals(line, f1, f2) { if (!equals(f1, f2)) console.log(line + " failed: " + toString(f1) + " !== " + toString(f2)); else console.log(line + " good: " + toString(f1)); } var line = 0; testEquals(line++, add(createFraction(10, 40), createFraction(3, 30)), createFraction(7, 20)); testEquals(line++, subtract(createFraction(8, 3), createFraction(11, 30)), createFraction(69, 30)); testEquals(line++, multiply(createFraction(8, 3), createFraction(11, 30)), createFraction(88, 90)); testEquals(line++, divide(createFraction(8, 3), createFraction(11, 30)), createFraction(240, 33));
{ "domain": "codereview.stackexchange", "id": 20593, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, unit-testing, rational-numbers", "url": null }
A linear map $$g:V\to W$$ is defined "abstractly", and has no need for chosen bases. But in practice, $$g$$ is usually given bases-specific in the following way. Let $$v$$ be a vector in $$V$$. We write it w.r.t. $$\mathcal B$$ as $$v=x_1b_1+x_2b_2+x_3b_3$$, and write this data as a column vector: $$v = \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}_{\mathcal B} :=x_1b_1+x_2b_2+x_3b_3 \ .$$ Then we consider a matrix $$M=M_{\mathcal B, \mathcal C}$$ and build the matrix multiplication vector: $$\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix} = M \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}\ .$$ Then we consider the vector $$w\in W$$ which written in base $$\mathcal C$$ has the $$y$$-components, so $$w = \begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}_{\mathcal C} :=y_1c_1+y_2c_2+y_3c_3 \ ,$$ and the map $$g$$ is mapping linearly $$v$$ to $$w$$. This concludes the section related to conventions and notations.
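A small NumPy sketch of this convention, with made-up bases and a made-up matrix $M$ chosen purely for illustration:

import numpy as np

B = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [0., 0., 2.]]).T      # columns b1, b2, b3: a basis of V = R^3
C = np.array([[2., 0., 0.],
              [0., 1., 1.],
              [0., 0., 1.]]).T      # columns c1, c2, c3: a basis of W = R^3
M = np.array([[1., 2., 0.],
              [0., 1., 0.],
              [3., 0., 1.]])        # matrix of g with respect to (B, C)

x = np.array([1., -1., 2.])         # coordinates of v in basis B
v = B @ x                           # v itself, written in standard coordinates
y = M @ x                           # coordinates of w = g(v) in basis C
w = C @ y                           # w in standard coordinates

print(y, w)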
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9875683476832692, "lm_q1q2_score": 0.8851094890427877, "lm_q2_score": 0.8962513745192026, "openwebmath_perplexity": 295.1646856718479, "openwebmath_score": 0.9979261755943298, "tags": null, "url": "https://math.stackexchange.com/questions/4194996/representing-a-linear-transformation-with-respect-to-a-new-basis" }
classical-mechanics, lagrangian-formalism, symmetry, time Since the Lagrangian doesn't depend on $t$ explicitly we have that $$L(x,y,t')=L(x,y,t)\quad\text{ for all }t'$$ meaning the numerator of this limit is zero and consequently $\frac{\partial L}{\partial t}=0$. In this part of classical mechanics it is hard to keep track of all the derivatives, but the partial derivative is really simple in this regard. Side note: I used $y$ in this expression to emphasize that the Lagrangian is just a function with three arguments. $L: \mathbb R^3\rightarrow\mathbb R$. The third argument is used for $t$ but nothing prevents you from also using $t$ in the other arguments. You could have something like $L(e^t,t^2-3t,t)$. But when you calculate the partial derivative you only use one of these arguments. The total derivative uses all the arguments.
{ "domain": "physics.stackexchange", "id": 67826, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, lagrangian-formalism, symmetry, time", "url": null }
python, pandas, json I would like to get the result as shown below. "{'name':'abc','id':'AB10','address':'some_address','data_sample':{'sub1':10,'sub2':5,'sub3':1}}" "{'name':'abc','id':'AB10','address':'some_address','data_sample':{'sub1':20,'sub2':10,'sub3':2}}" "{'name':'abc','id':'AB10','address':'some_address','data_sample':{'sub1':30,'sub2':15,'sub3': 3}}" "{'name':'abc','id':'AB10','address':'some_address','data_sample':{'sub1': 40,'sub2':20,'sub3': 4}}" I would like to send this as a parameter in requests.post as following. response=requests.post(url="some_url",data=str_details) If I don't convert to string, gives me an error, JSONDecodeError: Expecting value: line 2 column 1 (char 1) If I pass json=str_details, as argument gives me the same error, How to resolve this and get the desired results. To build your data use sth like: import copy, json
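One way to build and send such a payload, sketched here with placeholder values and a placeholder URL (an illustrative reconstruction, not the original answer's code): let requests serialize the Python objects itself by passing them via the json= parameter instead of a hand-built string.

import copy
import requests

# Placeholder values; the real name/id/address and sub-field numbers come from the
# original DataFrame, which is not shown here.
base = {"name": "abc", "id": "AB10", "address": "some_address"}
samples = [
    {"sub1": 10, "sub2": 5, "sub3": 1},
    {"sub1": 20, "sub2": 10, "sub3": 2},
]

records = []
for s in samples:
    rec = copy.deepcopy(base)       # keep the shared fields, avoid mutating `base`
    rec["data_sample"] = s
    records.append(rec)

# requests serializes the list of dicts to JSON; no manual str() conversion needed.
response = requests.post("https://example.com/some_url", json=records)
print(response.status_code)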
{ "domain": "datascience.stackexchange", "id": 7867, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, pandas, json", "url": null }
complexity-theory Title: 3 Colorability reduction to SAT I'd like to reduce 3 colorability to SAT. I've stuffed up somewhere because I've shown it's equivalent to 2 SAT. Given some graph $G = (V,E)$ and three colors, red, blue, green. For every vertex $i$, let the boolean variable $i_r$ tell you whether the $i$-th vertex is red (or more precisely, that the $i$-th vertex is red when $i_r = 1$). Similarly, define $i_b$ and $i_g$. Suppose two vertices $i$ and $j$ were connected by an edge $e$. Consider the clause \begin{align} (\bar i_r \vee \bar j_r) \end{align} If we demand the clause is true, it means that the vertices cannot both be red at the same time. Now consider the bigger clause $\phi_e$ \begin{align} (\bar i_r \vee \bar j_r)\wedge(\bar i_b \vee \bar j_b)\wedge(\bar i_g \vee \bar j_g) \end{align} which, if true, demands that the vertices $i$ and $j$ aren't both the same color. By itself, this clause is in 2-SAT. For every edge $e \in E$, I now make a clause $\phi_e$ of the above form and put them all together using $\wedge$'s \begin{align} \phi = \wedge_{e \in E} \phi_e \end{align} Thus, for the entire graph, I've come up with a 2SAT formula which is equivalent to 3 coloring. This is obviously wrong, but I can't tell where I've screwed up. With your modeling, setting $i$, $i_r$, $i_g$ and $i_b$ to false for all vertices yields a solution of the SAT problem and this is not a solution of the graph coloring problem. You need to add clauses to say that each vertex is blue or green or red, namely $(i_r\vee i_g\vee i_b)$.
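To see the whole reduction in one place, here is a small sketch (mine, not from the post) that emits both clause families; the 3-literal per-vertex clauses are exactly what pushes the formula out of 2-SAT.

# Variables are labelled (vertex, colour); a negated literal is written as ("not", var).
def three_colouring_cnf(vertices, edges):
    clauses = []
    for i in vertices:
        # Each vertex gets at least one colour -- the clause the answer adds.
        clauses.append([(i, "r"), (i, "g"), (i, "b")])
    for (i, j) in edges:
        # Adjacent vertices never share a colour -- the 2-literal clauses from the question.
        for colour in ("r", "g", "b"):
            clauses.append([("not", (i, colour)), ("not", (j, colour))])
    return clauses

# Example: a triangle needs all three colours.
print(three_colouring_cnf([0, 1, 2], [(0, 1), (1, 2), (0, 2)]))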
{ "domain": "cs.stackexchange", "id": 1567, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory", "url": null }
fluid-dynamics, pressure adhesion, against a force that scales as the Z component (axial direction) of the oil velocity divided by half of the gap distance. There's drag against both the shaft AND the journal holding the oil from leaking in that direction. The small-distance forces are the reason for much careful machining (and for the success of bearings made by pouring molten metal against the shaft parts). The oil itself is the pump mechanism, and it only has the capability to work as a pump vane and shaft seal when in a narrow space, and while continuously drawn into that space (because it does leak out). Oils have long-chain molecules, and entanglement of those molecules keeps an 'oil film' intact against the strain. The closest thing to a 'pressure pixel' is in oils that contain particles, like graphite flakes, to discourage metal/metal contact during the startup phase before a hydrodynamic film is established.
{ "domain": "physics.stackexchange", "id": 38290, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-dynamics, pressure", "url": null }
html, formatting, dice, common-lisp Now we pass a stream to htmlinitlist and write the contents to the stream: (defun htmlinitlist (specs stream) (write-string "<table><tr><td>&nbsp;</td><td>&nbsp;</td></tr>" stream) (loop for (a b) in (initlist specs) do (format stream "<tr><td>~A</td><td>~A</td></tr><tr><td>&nbsp;</td><td>&nbsp;</td></tr>" a b)) (write-string "</table>" stream)) If we want to get a string from a stream, we can use with-output-to-string. This binds a stream variable, which we can use and pass around... (defun htmlinitlists (specs count) (with-output-to-string (stream) (write-char #\space stream) (loop repeat (1+ count) do (htmlinitlist specs stream)))) Alternate version of htmlinitlist using only one FORMAT call: (defun htmlinitlist (specs stream) (format stream "<table><tr><td>&nbsp;</td><td>&nbsp;</td></tr>~ ~{<tr><td>~A</td><td>~A</td></tr><tr><td>&nbsp;</td><td>&nbsp;</td></tr>~}~ </table>" (loop for (a b) in (initlist specs) collect a collect b))) Benefit All the various string operations (which are creating lots of intermediate strings which are immediately garbage) have been replaced with the usual output functions and a stream.
{ "domain": "codereview.stackexchange", "id": 35726, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "html, formatting, dice, common-lisp", "url": null }
quantum-mechanics, magnetic-fields, quantum-spin, hamiltonian, matrix-elements Note that \begin{align} \exp(-i t\mu \vec B\cdot \vec \sigma/\hbar)\ne \exp(-it B_x\sigma_x)\exp(-it B_y\sigma_y)\exp(-it B_z\sigma_z) \end{align} or any other kind of factorization suggested by your snippet of Mma code since in general $\exp(A+B)\ne \exp(A)\exp(B)$. The correct implementation would be to compute $-it\mu\vec B\cdot\vec \sigma/\hbar$ as a single matrix and then exponentiate this single matrix.
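For a concrete check of this point, here is a small NumPy/SciPy sketch (the original thread concerned Mathematica code; the field components and constants below are arbitrary illustrative values):

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

t, mu, hbar = 1.0, 1.0, 1.0
Bx, By, Bz = 0.3, 0.5, 0.7

H = mu * (Bx * sx + By * sy + Bz * sz)          # build B . sigma as ONE matrix
U_correct = expm(-1j * t * H / hbar)            # exponentiate the single matrix
U_wrong = (expm(-1j * t * mu * Bx * sx / hbar)  # product of separate exponentials
           @ expm(-1j * t * mu * By * sy / hbar)
           @ expm(-1j * t * mu * Bz * sz / hbar))

print(np.allclose(U_correct, U_wrong))          # False: the Pauli matrices don't commute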
{ "domain": "physics.stackexchange", "id": 71141, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, magnetic-fields, quantum-spin, hamiltonian, matrix-elements", "url": null }
python, game, library, adventure-game cactus.MapPosition( "position name", "position description", { "a choice": 1 # <-- References the index of another position. } function=a_func # <-- If no function is referenced, default is None. ) Finally, an instance of MainGame is created with the following data. The name of the game. A description of the game. The prompt to be used in the game. The game map, and instance of GameMap. GAME = cactus.MainGame( "game name", "game description", "game prompt", GAME_MAP ) Finally, after all that has been done, call GAME.play_game(), and play your game. For those who also want a concrete example of a simple game, the following program demonstrates that. import cactus from sys import exit GAME_MAP = cactus.GameMap([ cactus.MapPosition( "Start", "Welcome to the test!", { "left": 1, "right": 2, } ), cactus.MapPosition( "Left", "You took the left path and lived!", {}, function=exit ), cactus.MapPosition( "Right", "You took the right path and died!", {}, function=exit ) ]) GAME = cactus.MainGame( "Test Game", "This is a simple test game! Yay!", "> ", GAME_MAP ) GAME.play_game() Finally, for those who are interested, here's the Official Cactus Discussion Chat Room, and here's the link to the official project on Github. else: continue Does nothing: remove it. lower or not lower: you must decide I suggest you decide: either all input is lowercased or it is kept as the original, the following is not a clean solution (you can understand it is not clean because you spelled it out in the docs). if user_input.lower() in possible_choices: self.map_position = possible_choices[user_input.lower()] elif user_input in possible_choices: self.map_position = possible_choices[user_input]
{ "domain": "codereview.stackexchange", "id": 13852, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, game, library, adventure-game", "url": null }
python, python-2.x, hash-map Change match to return, rather than being in the if. Move match into fetch. It makes the code easier to read. Changing your nested trys to a function that yields the possible symbols, makes the code more DRY, and easier to read. Move adddict into lookup. Possibly remove the comments. This makes swapping between try and dict.get simpler if one is faster than the other. And also makes the code a little more dense, whilst still being readable. Personally, I find the code being a bit more dense makes your code more readable. But it's not much of a change from your current code. This can change your code to: # lookup helpers def fetch(name, symbols, key): ''' find name by dict[key] in list of dicts symbols ''' return next(( item for item in symbols if name.strip('.') in item[key] or name.split()[0] in item[key] ), None) def methods(entry, symbols): yield next((item for item in symbols if entry['stock'] == item['name']), None) yield fetch(entry['stock'], symbols, 'name') yield fetch(entry['stock'].upper(), symbols, 'symbol') yield fetch(entry['stock'], SPECIALS, 'name') def lookup(stox, symbols): '''lookup symbol for stock name''' hits = [] misses = [] for entry in stox: for symbol in methods(entry, symbols): try: hits.append({'source': entry['stock'], 'found' : symbol['name'], 'symbol': symbol['symbol']}) break except TypeError: continue else: misses.append(entry['stock'])
{ "domain": "codereview.stackexchange", "id": 24221, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-2.x, hash-map", "url": null }
ros-melodic, hector-slam I have set roscd as ~a_ws/devel and the HectorSLAM package has the hector_geotiff package in it any help fixing this error would be greatly appreciated. Originally posted by Nosnik on ROS Answers with karma: 3 on 2020-03-11 Post score: 0 Have you installed hector_geotiff with sudo apt-get install ros-melodic-hector-geotiff? Do you have this folder: /opt/ros/melodic/share/hector_geotiff if you have already installed? Have you tried to launch without changing roscd from ~ just typing roslaunch hector_slam_launch tutorial.launch? Originally posted by tp_ink with karma: 26 on 2020-04-16 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 34579, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic, hector-slam", "url": null }
complexity-theory, reference-request, algorithm-analysis, education, books Title: Good text on algorithm complexity Where should I look for a good introductory text in algorithm complexity? So far, I have had an Algorithms class, and several language classes, but nothing with a theoretical backbone. I get the whole complexity idea, but sometimes it's hard for me to differentiate between O(1) and O(n); plus there's the whole theta notation and all that, a basic explanation of P=NP and simple algorithms, tractability. I want a text that covers all that, and that doesn't require a heavy mathematical background, or something that can be read through. LE: I'm still in high school, not in University, and by heavy mathematical background I mean something perhaps not very high above Calculus and Linear Algebra (it's not that I can't understand it, it's the fact that for example learning Taylor series without having done Calculus I is a bit of a stretch; that's what I meant by not mathematically heavy. Something in which the math, with a normal amount of effort, can be understood). And, do pardon me if I'm wrong, but theoretically speaking, a class at which they teach algorithm design methods and actual algorithms should be called an "Algorithms" class, don't you think? In terms of my current understanding, infinite series, limits and integrals I know (most of the complexity books I've glanced at seemed to use those concepts), but you've lost me at the Fast Fourier Transform. It is my very personal opinion that the book of Jon Kleinberg and Éva Tardos is the best book for studying the design and analysis of efficient algorithms. It might not be as comprehensive as Cormen et al. but it is a great textbook. Let me point out why I think this book might suit your interests best:
- you don't need heavy math machinery for the proofs
- the book often gives a great intuition why something is working (or not); this is in my opinion very important for beginners and self-learners
- it takes a very intuitive approach to NP-completeness
- it has a great chapter on how to deal with NP-complete problems in practice
- it focuses on design patterns, which might help you to design your own clever algorithms
{ "domain": "cs.stackexchange", "id": 425, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, reference-request, algorithm-analysis, education, books", "url": null }
java, validation public final boolean isValid( Person person ) {
    if ( person.getName() == null || person.getName().length() < 3 ) {
        return false;
    }
    if ( person.getSurname() == null || person.getSurname().length() < 3 ) {
        return false;
    }
    return detailIsValid(person);
}

protected abstract boolean detailIsValid( Person person );
}

So, that class declares an abstract method that the child classes will need to implement, and the abstract method will do the child-specific checks:

public class SourcingPersonValidator extends AbstractPersonValidator {
    @Override
    protected boolean detailIsValid( Person person ) {
        if ( person.getTechnology() == null ) {
            return false;
        }
        if ( person.getSource() == null ) {
            return false;
        }
        return true;
    }
}

The advantage of doing things this way is that you don't run the risk of a bug where you forget to call super. Note also that I added braces to your 1-liners. My experience suggests this makes for more reliable code when your code enters a maintenance cycle.
{ "domain": "codereview.stackexchange", "id": 13103, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, validation", "url": null }
thermodynamics, visible-light, temperature, thermal-radiation, biology Title: Why are people dark skinned in hotter areas despite dark colour absorbing the most heat? I’m not sure if the reason lies in biology or more towards physics, but since my reasoning is based on the physics (perfectly black bodies are perfect absorbers of heat and light while white is a perfect reflector, yet we have darker skinned humans near the equator and lighter skinned people, as well as animals like polar bears, near the poles), I’ve posted it on Physics Stack Exchange. Also, if the answer is based on the biology of animals and this should belong on Biology Stack Exchange, feel free to let me know. This is a bio question. The biggest threat to fitness is not lack of cooling, but damage from UV rays. A pigment in black skin (actually in all skin, to differing degrees) absorbs the UV so that skin cells don't. Fair skinned people, from higher latitudes, have another risk to their fitness, which is a lack of vitamin D, which is produced by the skin when exposed to UV. As a result of this, fair skinned people in the tropics get more skin cancer than otherwise, and dark people are more likely to have a vitamin D deficiency if far from the tropics.
{ "domain": "physics.stackexchange", "id": 46999, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, visible-light, temperature, thermal-radiation, biology", "url": null }
information-theory Title: Prove that $I(A;B|C)=0$ given $I(A;B)=0$ Let $A$, $B$ and $C$ be 3 discrete random variables. If $A$ and $B$ are independent ($I(A;B) = 0$, where $I$ represents the mutual information), how can we prove that $I(A;B|C)=0$? When I draw a Venn diagram this seems trivially true, but I just cannot find a way to prove it. $I(A;B\mid C)$ indicates "the reduction in the uncertainty of $A$ due to knowledge of $B$ when $C$ is given". To find an example where $I(A;B\mid C)\not= 0$, we can see if the combination of $B$ and $C$ can help determine $A$ even though knowing $B$ alone does not help at all. So let us try to create $A$ from the combination of $B$ and $C$. Let $B$ and $C$ be two simplest non-trivial independent random variables, $$P(B=0, C=0)=P(B=0, C=1)=P(B=1, C=0)=P(B=1, C=1)=1/4.$$ Let $A$ be 0 if $B$ and $C$ turns out the same and 1 otherwise. We can check that $A$, $B$ and $C$ are pairwise independent. However, given the values of any two of them, the third one is determined. (In modulo arithmetic, $B+C=A(\bmod 2)$). When $C$ is given, we will not know anything more about $A$; if $B$ is known additionally, we will know $A$ completely. The above shows how we can find and understand an example. Beginners are encouraged to verify various "obvious" (or "obscure") claims above by using the corresponding definitions. Exercise. Verify that $I(A;B)=0$ and $I(A;B\mid C)=1$ in the example above.
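The example can also be checked numerically; the following is a sketch of mine (not part of the original answer) that evaluates both mutual informations directly from the joint distribution of the XOR example.

from itertools import product
from math import log2

# B and C are fair independent bits and A = B xor C; then I(A;B) = 0 but I(A;B|C) = 1 bit.
p = {}
for b, c in product((0, 1), repeat=2):
    a = b ^ c
    p[(a, b, c)] = 0.25

def marg(keep):
    # Marginalize the joint distribution, keeping the coordinates flagged in `keep`.
    m = {}
    for (a, b, c), pr in p.items():
        k = tuple(v for v, keep_it in zip((a, b, c), keep) if keep_it)
        m[k] = m.get(k, 0.0) + pr
    return m

pab, pa, pb = marg((1, 1, 0)), marg((1, 0, 0)), marg((0, 1, 0))
I_ab = sum(pr * log2(pr / (pa[(a,)] * pb[(b,)])) for (a, b), pr in pab.items())

pac, pbc, pc = marg((1, 0, 1)), marg((0, 1, 1)), marg((0, 0, 1))
I_ab_given_c = sum(pr * log2(pc[(c,)] * pr / (pac[(a, c)] * pbc[(b, c)]))
                   for (a, b, c), pr in p.items())

print(round(I_ab, 6), round(I_ab_given_c, 6))   # 0.0 and 1.0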
{ "domain": "cs.stackexchange", "id": 13471, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "information-theory", "url": null }
python, python-3.x, sqlite, sqlalchemy Arguments: name (str): Name of the password. value (str): The password. Returns: None """ with SaferSession(record=Password(name, value)): print(f"Successfully added {name} record.") def is_master_password_valid(master_password: str) -> Optional[bool]: """Check if provided master password is valid or not. Arguments: master_password (str): The master password. Returns: True if the password matches or None otherwise """ with SaferSession() as session: password_obj = session.query(MasterPassword).one_or_none() return password_obj.value == master_password if password_obj else None def get_password_by_name(name: str) -> Any: """Get a password by its name. Arguments: name (str): Name of the password. Returns: password or None """ with SaferSession() as session: try: password = session.query(Password) password = password.filter_by(name=name).first().value except AttributeError: password = None print(f"{name} could not be found!") return password def update_password(name: str, new_value: str) -> None: """Update a specific password. Arguments: name (str): Name of the password that needs updating. new_value (str): New password. Returns: None """ with SaferSession() as session: try: password = session.query(Password).filter_by(name=name).first() password.value = new_value print(f"Successfully updated {name} record.") except AttributeError: print(f"{name} could not be found!") return def delete_password(name: str) -> None: """Delete a specific password. Arguments: name (str): NAme of the password that needs to be deleted.
{ "domain": "codereview.stackexchange", "id": 37857, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, sqlite, sqlalchemy", "url": null }
c, algorithm, performance }

int main()
{
    int n, i, j;
    scanf("%d", &n);
    int *b;
    b = (int *)malloc(n * sizeof(int));
    for (i = 0; i < n; i++)
    {
        scanf("%d", &b[i]);
    }
    for (i = 0; i < n; i++)
        factorial(b[i]);
    return 0;
}

How can I make my program more efficient and produce the output in the given time limit? This challenge is from HackerEarth. The best solution is to compute only the factorials you need and to compute them only once:
1. Read all the user input and find the maximum value.
2. Generate the factorials for all values up to the max. Note: save them as you go.
3. Print the factorial values by looking up the result you generated in 2.
So let's look at the code. I would not bother dynamically allocating the data storage.
    b=(int *)malloc(n*sizeof(int));
The question specifically limits the maximum value of n to 100.
    int b[101]; // should be sufficient
When reading values this should work (if you assume that the input is good).
    scanf("%d",&b[i]);
But the question states that the input is one value per line. Personally I would validate that there is one value per line. Here you are calculating the factorial multiple times:
    for(i=0;i<n;i++)
        factorial(b[i]); // Factorial is O(n)
                         // Thus this loop is O(n^2)
Technically you only need to call factorial once. If you calculate factorial for 'n' you need to calculate the factorial for 'n-1' (that's how it is defined). If you store the numbers then you only need to look up the values to print them once they have been calculated. It seems like you calculate this value each time.
    float p=0.0;
    for(i=2;i<=N;i++)
        p=p+log10(i);
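The same precompute-then-look-up idea, sketched in Python. I am assuming here that factorial() accumulates log10(N!) (for instance to report the number of digits), since that is what the loop at the end suggests; the exact output format of the HackerEarth task is not shown above.

import math

values = [5, 100, 7, 100]              # placeholder input
limit = max(values)

log_fact = [0.0] * (limit + 1)         # log_fact[n] = log10(n!)
for n in range(2, limit + 1):
    log_fact[n] = log_fact[n - 1] + math.log10(n)   # reuse the previous result: O(max) total

for v in values:                       # answering each query is now an O(1) lookup
    print(int(log_fact[v]) + 1)        # number of digits of v!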
{ "domain": "codereview.stackexchange", "id": 3785, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, algorithm, performance", "url": null }
ros, ros-kinetic, abb, ros-industrial -- Could not find the required component 'abb_egm_interface'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found. CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package): Could not find a package configuration file provided by "abb_egm_interface" with any of the following names: abb_egm_interfaceConfig.cmake abb_egm_interface-config.cmake Add the installation prefix of "abb_egm_interface" to CMAKE_PREFIX_PATH or set "abb_egm_interface_DIR" to a directory containing one of the above files. If "abb_egm_interface" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): YuMi/yumi_hw/CMakeLists.txt:10 (find_package) -- Configuring incomplete, errors occurred! See also "/home/pbprobotics/catkin_ws/build/CMakeFiles/CMakeOutput.log". See also "/home/pbprobotics/catkin_ws/build/CMakeFiles/CMakeError.log". Invoking "cmake" failed I am obviously missing something, can anyone help. Bob Originally posted by Bob Walton on ROS Answers with karma: 3 on 2017-07-06 Post score: 0
{ "domain": "robotics.stackexchange", "id": 28301, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-kinetic, abb, ros-industrial", "url": null }
ros2 [move_group-3] [INFO] [1685550205.807974815] [moveit_move_group_default_capabilities.move_action_capability]: Solution found but controller failed during execution [rviz2-4] [INFO] [1685550205.809449015] [move_group_interface]: Plan and Execute request aborted [rviz2-4] [ERROR] [1685550205.889689215] [move_group_interface]: MoveGroupInterface::move() failed or timeout reached [rviz2-4] [INFO] [1685550214.709188686] [move_group_interface]: MoveGroup action client/server ready [move_group-3] [INFO] [1685550214.709703786] [moveit_move_group_default_capabilities.move_action_capability]: Received request [move_group-3] [INFO] [1685550214.709881186] [moveit_move_group_default_capabilities.move_action_capability]: executing.. [rviz2-4] [INFO] [1685550214.709934386] [move_group_interface]: Planning request accepted [move_group-3] [INFO] [1685550215.710077184] [moveit_ros.current_state_monitor]: Didn't received robot state (joint angles) with recent timestamp within 1.000000 seconds. [move_group-3] Check clock synchronization if your are running ROS across multiple machines! [move_group-3] [WARN] [1685550215.710154784] [moveit_ros.planning_scene_monitor.planning_scene_monitor]: Failed to fetch current robot state. [move_group-3] [INFO] [1685550215.710255684] [moveit_move_group_default_capabilities.move_action_capability]: Planning request received for MoveGroup action. Forwarding to planning pipeline.
{ "domain": "robotics.stackexchange", "id": 38410, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2", "url": null }
observational-astronomy, radio-astronomy, radio-telescope, radar It's really hard to add a new multi hundred kilowatt transmitter to an existing very large dish like a 70 m DSN or FAST. You can read more about that in answer(s) to What is a Beam Waveguide dish and why does the Deep Space Network use them?. The big 70 m DSN dishes use the focus up between the primary and secondary; it is a real challenge to add more hardware there. The image below shows a DSN 70 m dish, for scale, the red lines in the dish itself are a safe walking path and going up each arm of the secondary reflector are stairways for people, not ants. I think that the Chinese project is quite ambitious but it's a next step in technology, rather than a makeshift retrofit that would interrupt availability of currently very busy large dishes, and it seems that adding a transmitter to FAST is not an option. From this answer to How will the closure of the Arecibo dish impact deep space communications? (found here): With the loss of Arecibo, Goldstone's DSS-14 now becomes the world's largest and most powerful radar dish. (China's 500 meter FAST dish is larger, but has no transmitter and is purely passive.) Sky and Telescope reports that "Arecibo offered 18 times the sensitivity of other existing facilities, such as NASA's Goldstone receiver." It also states Arecibo is also irreplaceable for scientists. Even though it’s technically the second-largest radio dish in the world (China’s Five-hundred-meter Aperture Spherical Telescope, or FAST, recently broke the record Arecibo held for decades), the observatory has unique capabilities, among them its radar. “FAST cannot do radar, it’s specifically incapable of doing active observation,” Springman explains. Because of that, FAST can’t take Arecibo’s place in planetary defense by characterizing asteroids and their orbits. See also answers to What will succeed the Arecibo Observatory? Arecibo: Advantages of Giant Dish? What are monostatic radar observations, and how will Deep Space Network's DSS-13 be used to observe asteroid 1999 WK4's flyby of Earth?
{ "domain": "astronomy.stackexchange", "id": 5421, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "observational-astronomy, radio-astronomy, radio-telescope, radar", "url": null }
Reference 1. Engelking, R., General Topology, Revised and Completed edition, Heldermann Verlag, Berlin, 1989. 2. Willard, S., General Topology, Addison-Wesley Publishing Company, 1970. # A Space with G-delta Diagonal that is not Submetrizable The property of being submetrizable implies having a $G_\delta$-diagonal. There are several other properties lying between these two properties (see [1]). Before diving into these other properties, it may be helpful to investigate a classic example of a space with a $G_\delta$-diagonal that is not submetrizable. The diagonal of a space $X$ is the set $\Delta=\left\{(x,x): x \in X \right\}$, a subset of the square $X \times X$. An interesting property is when the diagonal of a space is a $G_\delta$-set in $X \times X$ (the space is said to have a $G_\delta$-diagonal). Any compact space or a countably compact space with this property must be metrizable (see compact and countably compact space). A space $(X,\tau)$ is said to be submetrizable if there is a topology $\tau^*$ that can be defined on $X$ such that $(X,\tau^*)$ is a metrizable space and $\tau^* \subset \tau$. In other words, a submetrizable space is a space that has a coarser (weaker) metrizable topology. Every submetrizable space has a $G_\delta$-diagonal. Note that when $X$ has a weaker metric topology, the diagonal $\Delta$ is always a $G_\delta$-set in the metric square $X \times X$ (hence in the square in the original topology). The property of having a $G_\delta$-diagonal is strictly weaker than the property of having a weaker metric topology. In this post, we discuss the Mrowka space, which is a classic example of a space with a $G_\delta$-diagonal that is not submetrizable.
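To spell out the note about why a weaker metric topology forces a $G_\delta$-diagonal: if $d$ is a metric generating the coarser topology $\tau^*$, then $$\Delta=\bigcap_{n=1}^\infty \left\{(x,y) \in X \times X: d(x,y)<\tfrac{1}{n} \right\},$$ and each set in this intersection is open in the metric square $(X,\tau^*) \times (X,\tau^*)$, hence also open in the finer square $(X,\tau) \times (X,\tau)$, so $\Delta$ is a countable intersection of open sets there.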
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9790357628519819, "lm_q1q2_score": 0.802704897583365, "lm_q2_score": 0.8198933359135361, "openwebmath_perplexity": 118.06878639059939, "openwebmath_score": 0.9476520419120789, "tags": null, "url": "https://dantopology.wordpress.com/tag/pseudocompact-space/" }
climate, seasons, ice-age, axial-obliquity Image originally from The Petroleum System Blog Using that formula, the temperature at the poles (reduced to sea level) would be -16.8 °C (from the figure actual data points it can be seen that in real life the south pole is much colder than the north pole). Now, the previous assumptions contradicts the requirement of "equilibrium", because the above scenario is far from steady state. So now I will go on to try to describe what would happen to Earth's climate in your hypothetical scenario: One thing that we learned by studying how the Milankovitch cycles trigger and reverse Pleistocene ice ages, is that to initiate an ice age cold winters are not necessary, what is needed are cold or mild summers. Currently, the inclination of Earth axis (a.k.a. obliquity) varies between 22° and 24.5° , with a mean period of 41,040 years. When the inclination is 22°, mild summers occur and, therefore, the perfect condition to initiate an ice age (specially when combined with other ad-hoc orbital conditions). The permanent equinox situation you propose, is equivalent to an obliquity of 0°, that would lead to the coldest possible summer (this is, no summer at all). Therefore, such condition would set the Earth on track for an intense and never-ending ice age. Let me explain how this could work: Using the formula above, the temperatures would be permanently below zero between the poles and latitudes 58.3°. Therefore, snow would start to accumulate in those areas, building an ice sheet and once the ice sheet gets thick enough it would start flowing outwards. Figure from Lumen Learning. The ice sheet then becomes self-sustaining due to two positive feedbacks: Due to its high albedo, it would reflect most of the solar radiation back to the space, cooling down the Earth. As the ice sheet advance, its thickness adds to the elevation of the terrain, therefore the surface is higher and colder, allowing snowfall beyond the 58.3° of latitude. The thicker it grows the more it can advance towards the equator.
{ "domain": "earthscience.stackexchange", "id": 1545, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "climate, seasons, ice-age, axial-obliquity", "url": null }
navigation, ros-kinetic, robot-localization Title: robot_localization asking for map to odom transform So I just recently migrated from robot_pose_ekf to robot_localization as I heard it's better. I'm fusing data from wheel odometry, imu and laser_scan_matcher as odom0, imu0 and pose0 respectively. I followed the instructions in configuring the parameters and set the frames as such: map_frame: map odom_frame: odom base_link_frame: base_link world_frame: map The documentation tells me this should make the ekf_localization node publish the transform from map to odom if the publish_tf param is set to true. Sure, it does, but I also keep getting this warning: [ WARN] [1521163510.296212370]: Transform from odom to map was unavailable for the time requested. Using latest instead. And once in a while this warning: [ WARN] [1521163517.010447912]: Failed to meet update rate! Took 0.035404531000000002972 seconds. Try decreasing the rate, limiting sensor output frequency, or limiting the number of sensors. Why would it be looking up the transform of what it's supposed to be broadcasting instead? It's still doing its job, but I'm just concerned somewhere it ignores some data because of this warning and curious why the warning comes up in the first place. Thanks in advance for any insight on the matter! Here's my TF tree: Originally posted by emilyfy on ROS Answers with karma: 26 on 2018-03-15 Post score: 1 Original comments Comment by stevejp on 2018-03-16: Can you post your TF tree? It seems like you are trying to localize your robot globally. Hence, according to the document, you should have something else publishes the transformation from odm->base_link. You may want to refer to this #q258330 Originally posted by tuandl with karma: 358 on 2018-03-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 30333, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, ros-kinetic, robot-localization", "url": null }
homework-and-exercises, newtonian-mechanics, projectile, free-fall $$x \leq \frac{v}{g}\sqrt{v^2 - 2yg}$$ This represents the maximum horizontal distance. Now, to figure out $v$, I'm going to use the vertical leap of some NBA basketball players: 45 inches or 115 cm. The starting velocity needed to reach a height is $$\frac{1}{2}mv^2 = mgh$$ $$v = \sqrt{2gh}$$ Substituting in $h = 1.15$ meters gives $v = 4.75$ m/s. Solving for $x$ in the above equation gives $$x \leq \frac{4.75}{9.8}\sqrt{4.75^2 - 2(-20)(9.8)} = 9.86\,m$$ Incidentally, the launch angle is 13 degrees above horizontal to reach that distance.
{ "domain": "physics.stackexchange", "id": 26869, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, newtonian-mechanics, projectile, free-fall", "url": null }
quantum-mechanics, condensed-matter, computational-physics, berry-pancharatnam-phase Title: Numerically calculating Berry curvature in >2-band 2D systems? The standard method for numerically calculating the Berry curvature of a 2D condensed matter system is given by Fukui-Hatsugai-Suzuki in this paper. They discretize $k$-space into a grid with tiny rectangles and calculate the Berry curvature on each rectangle using so-called overlap matrices $U_\mu$ and difference methods for derivatives. However, their definition of $U_\mu$ involves only eigenstates of the band in question (as in $U_\mu=\langle n(k) | n(k+\mu)\rangle$). However, the general definition of Berry curvature (as from this popular reference) explicitly involves >1 bands in the expression (unlike $U_\mu$). Berry curvature $\Omega$ is also known as an interband quantity. So, my question is, why is it okay to still use an intraband $U_\mu$ to calculate $\Omega$ in multiband systems? How does $U_\mu$ incorporate effects of other bands? Or, have I misunderstood the application of the numerical scheme in the first reference? I know that the multiband formula for $\Omega$ comes from inserting the identity $\Sigma_p |p\rangle\langle p|=1$ to $\Omega=\nabla \times A$ (the berry connection, which is intraband like $U_\mu$). I know that for 2-band systems, $\Omega_m$ = $-\Omega_n$ for energy bands $m,n$. However, will $U_\mu$ be appropriate even when this symmetry is not present, as in >2-band systems?
{ "domain": "physics.stackexchange", "id": 70524, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, condensed-matter, computational-physics, berry-pancharatnam-phase", "url": null }
reference-request, lo.logic, sat Title: Translating SAT to HornSAT Is it possible to translate a boolean formula B into an equivalent conjunction of Horn clauses? The Wikipedia article about HornSAT seems to imply that it is, but I have not been able to chase down any reference. Note that I do not mean "in polynomial time", but rather "at all". No. Conjunctions of Horn clauses admit least Herbrand models, which disjunctions of positive literals don't. Cf. Lloyd, 1987, Foundations of Logic Programming. Least Herbrand models have the property that they are in the intersections of all satisfiers. The Herbrand models for $(a \lor b)$ are $\{\{a\}, \{b\}, \{a,b\}\}$, which doesn't contain its intersection, so as arnab says, $(a \lor b)$ is an example of a formula which can't be expressed as a conjunction of Horn clauses. Incorrect answer overwritten
{ "domain": "cstheory.stackexchange", "id": 2937, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reference-request, lo.logic, sat", "url": null }
gazebo Title: Control Model Tutorial: Sensor isn't attached I'm going through the Control Model tutorial. Part of the plugin displays the minimum distance sensed by a ray sensor. Unfortunately, it's perpetually displaying min_range = 9.500. I think the sensor isn't travelling with the box, because when I change the initial pose of the box, the 9.500 value changes to something else and stays put. My understanding was that having the sensor inside of tags was supposed to keep it attached. Am I wrong? How do I attach it? I'm wondering if something in Gazebo is configured wrong because running this tutorial up to and including the gztopic echo /gazebo/default/box/link/my_contact doesn't produce anything other than $ gztopic echo /gazebo/default/box/link/my_contact Msg Waiting for master Msg Connected to gazebo master @ http://127.0.0.1:11345 Msg Publicized address: 142.103.111.179 Originally posted by Ben B on Gazebo Answers with karma: 175 on 2013-03-25 Post score: 1 hi. try to check if each ray is measuring correctly. edit the plugin to: this->model->SetLinearVel(math::Vector3(0, 0, 0)); //so it does not move and add this below it // console output static int rgcount =0; rgcount ++; if(rgcount>=2000) { rgcount = 0; float r0 = this->raysensor->GetRange(0); //index 0 float r90 = this->raysensor->GetRange(90); //index 90 float r180 = this->raysensor->GetRange(179); //index 179 printf("0deg: %3.3f \n90deg: %3.3f \n180deg: %3.3f \n\n", r0, r90, r180); }
{ "domain": "robotics.stackexchange", "id": 3155, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo", "url": null }
java, spring Title: Logging a message and the stack trace of caught exceptions Using Spring's JdbcTemplate to load a specific object by ID if it exists, I have this code: Person person = null; try { person = jdbcTemplate.queryForObject(sql, new Object[]{ id }, new PersonMapper()); } catch (EmptyResultDataAccessException e) { LOGGER.warn("Could not find person with id " + id); } A Sonar rule (squid:S1166) complains about this piece of code, basically saying that logging a message is not enough, I should also log the exception. But in this case, I don't see the point. Adding the exception as a 2nd parameter to the logger will put a completely pointless stack trace in my logs that doesn't contain anything that I don't already know. Would you agree that this is a false positive or am I missing something? Or is there another way of writing this code that complies better with Sonar or code quality analysis tools in general? If you know what your PersonMapper does and there is no way it could generate a null instance based on an existing row, you could extract this code differently. private Person execute(String sql, String id) { try { return jdbcTemplate.queryForObject(sql, new Object[]{ id }, new PersonMapper()); } catch (EmptyResultDataAccessException e) { LOGGER.warn("A smart error message based on sql " + id); return null; } } That is still not going to help you with your original question, except that now you can remove that logger and put it in the caller, since you'll have to check whether the result is non-null. If you want to keep this code as is, this is a false positive simply because SonarQube can't figure out that you are catching a very specific exception type and that the nested cause is not relevant.
{ "domain": "codereview.stackexchange", "id": 13822, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, spring", "url": null }
machine-learning, neural-network, machine-learning-model, rnn Title: How many Hidden Layers and Neurons should I use in an RNN? I am very new to neural networks and machine learning and I have been making a Bitcoin price predictor to learn it. I was wondering about the number of hidden layers I'd need in a recurrent neural net using LSTM cells. I have 60 inputs for 30 previous days' close prices in 12-hour intervals and require 1 output for the future 12 hours. I am doing this with Keras in python 3.6. Any help would be awesome! Number of layers is a hyperparameter. It should be optimized based on train-test split. You can also start with the number of layers from a popular network. Look at kaggle.com and see how many layers do they use in competitions.
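As an illustration of treating depth as a hyperparameter, here is a hedged Keras sketch (layer sizes, epochs and the 60-step/1-feature window shape are placeholder choices based on the question, not recommendations) that builds a stacked-LSTM regressor whose number of layers you can vary and compare on a validation split.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_model(n_lstm_layers=1, units=32, timesteps=60, features=1):
    """Stacked LSTM regressor; n_lstm_layers is the hyperparameter to tune."""
    model = Sequential()
    model.add(LSTM(units, return_sequences=(n_lstm_layers > 1),
                   input_shape=(timesteps, features)))
    for i in range(1, n_lstm_layers):
        model.add(LSTM(units, return_sequences=(i < n_lstm_layers - 1)))
    model.add(Dense(1))                       # one output: the price 12 hours ahead
    model.compile(optimizer="adam", loss="mse")
    return model

# assuming x_train has shape (samples, 60, 1) and y_train has shape (samples,):
# for depth in (1, 2, 3):
#     model = build_model(n_lstm_layers=depth)
#     history = model.fit(x_train, y_train, epochs=10, validation_split=0.2, verbose=0)
#     print(depth, min(history.history["val_loss"]))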
{ "domain": "datascience.stackexchange", "id": 4005, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, neural-network, machine-learning-model, rnn", "url": null }
slam, navigation, gmapping, slam-gmapping, p2os global_costmap_params.yaml: global_costmap: global_frame: /map robot_base_frame: base_link update_frequency: 5.0 static_map: false rolling_window: true costmap_common_params.yaml: obstacle_range: 2.5 raytrace_range: 3.0 inflation_radius: 0.35 #---standard pioneer footprint--- #---(in meters)--- footprint: [ [0.254, -0.0508], [0.1778, -0.0508], [0.1778, -0.1778], [-0.1905, -0.1778], [-0.254, 0], [-0.1905, 0.1778], [0.1778, 0.1778], [0.1778, 0.0508], [0.254, 0.0508] ] #transform_tolerance: 0.2 #map_type: costmap observation_sources: laser_scan_sensor laser_scan_sensor: {sensor_frame: laser, data_type: LaserScan, topic: scan, marking: true, clearing: true} #point_cloud_sensor: {sensor_frame: frame_name, data_type: PointCloud, topic: topic_name, marking: true, clearing: true} base_local_planner_params.yaml: TrajectoryPlannerROS: max_vel_x: 1.2 min_vel_x: 0.2 max_rotational_vel: 0.8 min_in_place_rotational_vel: 0.3 #sim_time: 2.0 path_distance_bias: 0.6 goal_distance_bias: 0.6 acc_lim_th: 3.2 acc_lim_x: 2.5 acc_lim_y: 2.5 holonomic_robot: true Originally posted by jan on ROS Answers with karma: 28 on 2011-07-17 Post score: 0
{ "domain": "robotics.stackexchange", "id": 6163, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, gmapping, slam-gmapping, p2os", "url": null }
electromagnetism $$ \Phi(S) = \iint_S \vec E(\vec r) \cdot\mathrm d\vec S, $$ in terms of the electric field $\vec E$ itself. If the surface $S$ is closed, then the electric flux is still $$ \Phi(S) = \oint_S \vec E(\vec r) \cdot\mathrm d\vec S $$ and it just happens, because of Gauss's law, to coincide with $Q_S/\varepsilon_0$, i.e. the total charge enclosed by $S$ divided by the vacuum permittivity. Moreover, if you're dealing with the electric field $\vec E$ produced by a free-charge distribution that's embedded in a homogeneous, isotropic linear dielectric with permittivity $\varepsilon$, then $\Phi(S)$ can also be seen to equal $Q_{S,\mathrm{free}}/\varepsilon$, where $Q_{S,\mathrm{free}}$ is the free charge enclosed by $S$. Because of this, the direct understanding of the term "electric field density" is to assign that to the vector field $\vec E$ itself, since it is the vector field whose surface integrals give the electric flux. It is possible to start re-defining those terms so that you have a bit more operational room in how you distinguish between $\vec E$ and $\vec D$, though you run the risk of creating an extremely confusing situation. It looks to me that what's happened is that your lecturer has chosen a confusing choice of terminology and that's led you into some conceptual contradictions which are impossible to clear up within that framework. However, without seeing the notes in full as provided directly by your lecturer, it's impossible to tell that for sure.
{ "domain": "physics.stackexchange", "id": 56210, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism", "url": null }
algorithms, shortest-path Title: Find all shortest paths in a graph where path has even number of edges and greater than 6 Let $G=(V,E)$ be a directed graph with non-negative weights ($w:E\to\mathbb{R}^+$). Describe an algorithm that finds all shortest paths in the graph from a source vertex $s\in V$, such that each path has an even number of edges and the number of edges is greater than or equal to $6$. So I know I need to use Dijkstra's algorithm on a modified graph. Somehow I need to "count" the number of edges. I think I need to add some copies of each vertex to do the "counting", but I can't figure it out completely. I'd be glad for help. Thanks. Hint: Use an appropriate layered graph. If you find the problem difficult, try solving it under only one of the conditions (only an even number of edges, or only at least 6 edges).
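To make the hint concrete, here is one possible layered-graph construction as a minimal Python sketch (the graph representation, the 8-layer encoding and all names are my own assumptions, not part of the exercise). Each search state is a pair (vertex, layer): layers 0-5 count edges exactly, layer 6 means "at least 6 edges and even", layer 7 means "at least 7 edges and odd", so the accepting states are those in layer 6.

import heapq

def shortest_even_ge6(graph, source):
    """graph: dict u -> list of (v, weight). Returns, for every vertex v, the length of a
    shortest walk from source to v that uses an even number of edges, at least 6 of them."""
    nodes = set(graph) | {v for nbrs in graph.values() for v, _ in nbrs}

    def next_layer(l):            # 0..5 count exactly; 6 = ">=6 and even"; 7 = ">=7 and odd"
        return l + 1 if l < 6 else (7 if l == 6 else 6)

    INF = float("inf")
    dist = {(u, l): INF for u in nodes for l in range(8)}
    dist[(source, 0)] = 0.0
    pq = [(0.0, source, 0)]
    while pq:
        d, u, l = heapq.heappop(pq)
        if d > dist[(u, l)]:
            continue
        for v, w in graph.get(u, []):
            nl = next_layer(l)
            if d + w < dist[(v, nl)]:
                dist[(v, nl)] = d + w
                heapq.heappush(pq, (d + w, v, nl))
    return {v: dist[(v, 6)] for v in nodes}   # the accepting layer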
{ "domain": "cs.stackexchange", "id": 6614, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, shortest-path", "url": null }
curvature, stars, gravitational-lensing Title: Can stars bend light? Is it possible for a star to be so big that its gravity can bend light like a black hole? If so, would the star appear dark or bright, or would it collapse on itself? Yes stars can bend light with the immensity of their gravitational fields. A good example of this phenomenon being put to good use is in the historical incidence of the first major verification of Einstein's theory of General relativity, which of course predicts the phenomenon in the first place. In 1919, the famous astronomer Arthur Eddington, verified that the bending of starlight occurred to the degree predicted by Einstein's theory by observing the light from occulted stars during an ordinary solar eclipse, thus Eddington witnessed the gravitational field of our own sun bending star light! There is nothing really special or odd about the gravity of black holes save from the fact that the fields are unusually strong, however, ignoring rotation, charge, and the cosmological constant; the solutions of Einstein's equation that describe the gravity around a spherical black hole are entirely similar to the description of the fields around stars, i.e. the curved space-time around spherically symmetric massive bodies is that described by the Schwarzschild metric. Ordinary stars, however, are not massive enough to have an external "event horizon" or point of no return from which nothing can escape, hence their light is able to shine out into surrounding space.
{ "domain": "physics.stackexchange", "id": 99872, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "curvature, stars, gravitational-lensing", "url": null }
r df #> C1 C2 C3 #> 1 L01_005000 L002g034 1.5e-12 #> 2 L01_003000 L001g045 2.3e-06 #> 3 L02_145000 L004g034 1.1e-02 #> 4 L02_155000 L002g050 1.1e-04 #> 5 L02_148000 L002g001 1.1e-03 df %>% separate(C1, into = c("C1_1", "C1_2")) %>% mutate(C1_2 = as.numeric(C1_2)) %>% arrange(C1_1, C1_2) %>% group_by(C1_1) %>% mutate(subgroup = cumsum(C1_2 > lag(C1_2, default = first(C1_2)) + 3000)) %>% group_by(C1_1, subgroup) %>% slice_min(C3) %>% ungroup() %>% select(-subgroup) #> # A tibble: 3 × 4 #> C1_1 C1_2 C2 C3 #> <chr> <dbl> <chr> <dbl> #> 1 L01 5000 L002g034 1.5e-12 #> 2 L02 148000 L002g001 1.1e- 3 #> 3 L02 155000 L002g050 1.1e- 4
{ "domain": "bioinformatics.stackexchange", "id": 2406, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "r", "url": null }
react.js Title: React Hooks - Did I write this correctly? I am brand new to writing hooks and I am having a little trouble getting my head wrapped around it. Initially, I wanted to write this with one event listener that passed the event and a separate param to distinguish itself from the other incoming arguments. With hooks, I was confused as to how I should pass a separate param, and even more confused about how to structure the logic for deciphering what each argument was. So... Basically, I am calling two arrow functions in render and using separate arguments to dictate what iconType the onMouseOver is affecting. I guess my question is, is this an acceptable way to write React hooks? This is my first component with any kind of state in my project (a simple navbar). I want to make sure I am on the right path. import React, { useState, useEffect } from 'react'; import '../styles/HomeNavBar.css'; import logo from '../styles/images/pulse_logo.png' // relative path to image export default function HomeNavBar() { const [isTrue, handleMouseOver] = useState(false) const [iconType, setArg] = useState('')
{ "domain": "codereview.stackexchange", "id": 37224, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "react.js", "url": null }
java, rags-to-riches private Map<String, Set<Position>> getPositions(int row, String line) { int[] counter = new int[1]; return aggregate( SPLITTER.splitAsStream(line) .map(String::toLowerCase) .map(word -> Collections.singletonMap(word, new Position(row, counter[0]++))), Collectors.toSet()); } private static <K, U, V> Map<K, Set<V>> aggregate(Stream<Map<K, U>> stream, Collector<U, ?, Set<V>> downstream) { return stream.map(Map::entrySet) .flatMap(Set::stream) .collect(Collectors.groupingBy(Entry::getKey, Collectors.mapping(Entry::getValue, downstream))); } @Override public String getName() { return name; } @Override public int getCount(String word) { return getPositions(word).size(); } @Override public Set<Position> getPositions(String word) { return lookup.getOrDefault(word, Collections.emptySet()); } } FileWordSearcher public final class FileWordSearcher implements WordCount { private final Set<Path> inputs; private final Set<WordCount> results; public FileWordSearcher(Path...paths) { this(Arrays.stream(Objects.requireNonNull(paths))); } public FileWordSearcher(Stream<Path> paths) { Objects.requireNonNull(paths); this.inputs = paths.collect(toUnmodifiableSet()); this.results = inputs.stream() .map(FileWordCount::new) .collect(toUnmodifiableSet()); } private static <T> Collector<T, ?, Set<T>> toUnmodifiableSet() { return Collectors.collectingAndThen(Collectors.toSet(), Collections::unmodifiableSet); } @Override public String getName() { return toString(); }
{ "domain": "codereview.stackexchange", "id": 18265, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, rags-to-riches", "url": null }
inorganic-chemistry, acid-base, metal, alloy Title: Boiling sodium hydroxide in stainless steel cup: Solution turning to a blue color I boiled highly concentrated sodium hydroxide in a stainless steel cup. This created a blackish layer on the bottom of the cup and turned the colour of the sodium hydroxide solution to blueish. Am I right to assume that there was some oxidation happening at the surface of steel? Are any oxides of metals, present in common stainless steel, known to have a blue colour when dissolved in an aqueous solution? Do you say stainless steel? Stainless steel is an alloy of $\ce{Ni, Cr, Fe}$ with other trace elements, and owes its apparent resistance to corrosion to a protective, adherent, coating of mixed chrome, nickel, and iron oxides. A large amount is probably $\ce{Cr^3+}$, which is amphoteric and will dissolve in hydroxide solution. Once the protective coating is breached chromium will react in base similarly to aluminum. The potential to $\ce{[Cr(OH)4]^-}$ is about $\pu{+1.2 V}$. Stainless steel can be much more reactive than pure iron if the protective layer is continually disrupted.
{ "domain": "chemistry.stackexchange", "id": 15538, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, acid-base, metal, alloy", "url": null }
programming-languages, functional-programming A representation more like the spine calculus probably makes more sense for efficiency. Or you could do something like: data CA = S0 | S1 CA | S2 CA CA | K0 | K1 CA Applying a higher-order function splits into two cases: either a combinator has been fully applied and thus it should be executed, or we return a new value that represents the (partial) application. I haven't done an exhaustive survey, but I'm pretty confident variations on closure conversion are by far the most common implementation strategy for higher-order functions (hence them often being called "closures"). It has the nice properties of being modular, simple, and reasonably efficient even in its naivest form. It takes a good choice of base combinators and some cleverness to get combinator-based approaches to perform well. Defunctionalization just isn't widely used as far as I can tell, but there is little reason not to take advantage of function pointers. To the extent that you do, e.g. instead of a large case analysis you have a table of function pointers that you index into, you've basically recreated closure conversion. There are some other approaches. One is template instantiation which is basically to take $\beta$-reduction literally, and simply literally substitute terms into other terms. Usually, this requires having and manipulating an abstract syntax tree-like structure. Higher-order functions are then represented by their (syntactic) lambda terms or "templates" thereof which can simplify performing the substitutions.
{ "domain": "cs.stackexchange", "id": 7745, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "programming-languages, functional-programming", "url": null }
homework-and-exercises, thermodynamics, pressure Atmospheric pressure decreases with altitude. When you close the lid before cooking at 3 different altitudes, the air trapped inside the cooker at the beach has the highest pressure. So won't the food cook faster at the beach, since you reach a particular temperature much faster, as there is already some pressure inside the cooker? That is, if cooking happens once a particular pressure is reached inside the cooker, isn't it much faster at low altitude (the beach)? This is a common misunderstanding. I'll start answering by explaining how an ordinary utensil cooks food. At lower altitudes we know that the pressure is high, and at higher pressure water boils at a higher temperature. Cooking at a higher pressure is therefore easier. Now, when you go up higher and higher, you'll find that the pressure decreases, and therefore cooking food becomes harder at higher altitudes (since the boiling point of water is lower). Now, in the case of pressure cookers, they work by creating a high-pressure environment inside themselves. To be more clear, when you keep a pressure cooker on the stove with the whistle on it, what happens is that the water inside initially turns to steam. But as more and more water gets converted to steam, the pressure inside the cooker increases and hence the boiling point of water increases. That means it will take more heat for water to convert to steam, and the food inside gets cooked with this heat. Edit:
{ "domain": "physics.stackexchange", "id": 75454, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, thermodynamics, pressure", "url": null }
From a point on the sphere exactly one half of space is visible; it is divided in half by the plane that touches the sphere at this given point. This is the average horizon in practice. So we can use Pythagoras' theorem to calculate the distance to an object at a certain height, when it rises above the horizon: $$(\textrm{Earth’s Radius})^2 + (\textrm{ground Distance})^2 = (\textrm{Earth’s Radius} + \textrm{clouds Height})^2$$ The radius of Earth is about $$6400\;\mathrm{km}$$. If rain clouds lie at $$2\;\mathrm{km}$$ and we can see them at the horizon, then it's raining about $$160\;\mathrm{km}$$ from our location. Counting both directions ($$320\;\mathrm{km}$$), this is $$0.8$$ percent of Earth's circumference ($$40000\;\mathrm{km}$$). The corresponding area is about $$0.02$$ percent of Earth's surface. Notice that most of this area is observed close to the horizon. And, for example, if a small cloud is visible at $$45$$ degrees above the horizon, then it floats right above a point on the ground whose distance from us equals the cloud's altitude. I am assuming you mean the fraction of the "astronomical" sky. If you mean the fraction of the atmosphere (?), then that is a different question.
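A quick numerical check of these figures (a sketch; the 6400 km radius and 2 km cloud height are the values assumed above):

import math

R = 6400.0   # km, Earth's radius (value assumed above)
h = 2.0      # km, assumed cloud height

d = math.sqrt((R + h)**2 - R**2)               # distance at which such clouds sit on the horizon
circumference = 2 * math.pi * R
line_fraction = 2 * d / circumference          # both directions, as a fraction of a great circle
area_fraction = (1 - math.cos(d / R)) / 2      # spherical-cap fraction of Earth's surface

print(f"d ~ {d:.0f} km, line fraction ~ {100*line_fraction:.2f} %, "
      f"area fraction ~ {100*area_fraction:.3f} %")
# prints roughly: d ~ 160 km, line fraction ~ 0.80 %, area fraction ~ 0.016 %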
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9884918499186287, "lm_q1q2_score": 0.8058506670759098, "lm_q2_score": 0.8152324848629215, "openwebmath_perplexity": 360.9634743612383, "openwebmath_score": 0.7526221871376038, "tags": null, "url": "https://physics.stackexchange.com/questions/129317/how-much-of-the-sky-is-visible-from-a-particular-location" }
The circumcircle is a triangle's circumscribed circle, i.e., the unique circle that passes through each of the triangle's three vertices; its center is at the point where all the perpendicular bisectors of the triangle's sides meet. The incircle is the circle inscribed in the triangle, tangent to all three sides, and the contact triangle is defined by the three touchpoints of the incircle on the three sides. Each excircle touches one side of the triangle and the extensions of the other two, and its center is called the excenter relative to the corresponding vertex. Since the internal bisector of an angle is perpendicular to its external bisector, it follows that the center of the incircle together with the three excircle centers form an orthocentric system. (References cited in the original page: Kimberling 1998; Pedoe, D., Circles; Modern Geometry: The Straight Line and Circle.)
{ "domain": "edusteps.net", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9915543710622855, "lm_q1q2_score": 0.890113684664837, "lm_q2_score": 0.8976952859490985, "openwebmath_perplexity": 1355.146206697197, "openwebmath_score": 0.8856181502342224, "tags": null, "url": "http://edusteps.net/6zisq/998e1f-circumcircle-and-incircle-of-a-triangle" }
quantum-mechanics, hilbert-space, vectors, superposition, quantum-states A velocity vector $$\left\lvert v \right\rangle = a\left\lvert x \right\rangle + b\left\lvert y \right\rangle \tag{2}$$ for some values $a$ and $b$ is also a superposition of two orthogonal velocity vectors. It is unlike either basis vector alone. Talking about $\left\lvert \Psi \right\rangle$ as "simultaneously in both states" is just plain sloppy. It's a superposition. It's not like either basis vector alone. It is, as you say, something completely distinct.
{ "domain": "physics.stackexchange", "id": 57967, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, hilbert-space, vectors, superposition, quantum-states", "url": null }
#### Note 6.4.23. In part (a) of the Example above, observe that \begin{equation*} 2\sqrt{3} \cdot 5\sqrt{6} = 2 \cdot 5 \cdot \sqrt{3}\cdot \sqrt{6} = 10\sqrt{18} \end{equation*} We multiply together any expressions outside the radical, and apply the product rule to expressions under the radical. Expand $$~(\sqrt{5}-2\sqrt{3})^2$$ Solution. $$17-4\sqrt{15}$$ True or false. 1. We can only simplify products or quotients of like radicals. • True • False 2. $$4\left(3\sqrt{5}\right) = 3 \cdot 4 + 3\sqrt{5}$$ • True • False 3. $$(\sqrt{3}+\sqrt{5})^2 = 3+5=8$$ • True • False 4. $$(3\sqrt{x})^2=9x$$ • True • False Solution. 1. False 2. False 3. False 4. True ### Subsection 6.4.5 Rationalizing the Denominator It is easier to work with radicals if there are no roots in the denominators of fractions. We can use the fundamental principle of fractions to remove radicals from the denominator. This process is called rationalizing the denominator. For square roots, we multiply the numerator and denominator of the fraction by the radical in the denominator. Rationalize the denominator of each fraction. 1. $$\displaystyle{\sqrt{\frac{1}{3}}}$$ 2. $$\displaystyle{\frac{\sqrt{2}}{\sqrt{50x}} }$$
{ "domain": "runestone.academy", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9940889299179002, "lm_q1q2_score": 0.800870036012662, "lm_q2_score": 0.8056321843145404, "openwebmath_perplexity": 4106.2361557560025, "openwebmath_score": 0.979746401309967, "tags": null, "url": "https://runestone.academy/ns/books/published/int-algebra/WorkingwithRadicals.html" }
javascript, beginner, database, promise, amazon-web-services if (data.Item) { throw new Error(`Client name '${event.name}' is already in use.`); } const timestamp = new Date().getTime(); return await dynamo .put({ TableName: 'client', Item: { id: event.name, created: timestamp, updated: timestamp, deleted: null }, ConditionExpression: 'attribute_not_exists(id)' }) .promise(); }
{ "domain": "codereview.stackexchange", "id": 27348, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, beginner, database, promise, amazon-web-services", "url": null }
wpf, xaml <RowDefinition Height="Auto"></RowDefinition> </Grid.RowDefinitions> <Border Background="Gray"> <StackPanel Orientation="Vertical" Margin="0,5,5,0" Grid.Row="0" Grid.Column="0"> <Label Content="Title" FontWeight="Bold" HorizontalAlignment="Right"></Label> </StackPanel> </Border> <Border Grid.Row="0" Grid.Column="1" Background="Gray"> <!-- Here comes the problem, without the Binding Hack the text box has auto size Width of 16578 and I need the Width to not to overflow actual Grid Column Width--> <TextBox TextWrapping="Wrap" Width="Auto" MinHeight="60" AcceptsReturn="True" FontSize="14" Margin="0,5,5,5">
{ "domain": "codereview.stackexchange", "id": 16792, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "wpf, xaml", "url": null }
natural-language-processing, long-short-term-memory, sequence-modeling, text-classification, padding Title: Text classification of non-equal length texts, should I pad left or right? Text classification of equal-length texts works without padding, but in practice texts rarely have the same length. For example, spam filtering on a blog article: thanks for sharing [3 tokens] --> 0 (Not spam) this article is great [4 tokens] --> 0 (Not spam) here's <URL> [2 tokens] --> 1 (Spam) Should I pad the texts on the right: thanks for sharing -- this article is great here's URL -- -- Or, pad on the left: -- thanks for sharing this article is great -- -- here's URL What are the pros and cons of padding left versus right? For any model that does not take a time-series approach the way an RNN does, the padding shouldn't make a difference. I prefer padding on the right simply because there may also be texts you need to cut off; padding on the right is then more intuitive, since you either cut off a text that is too long or pad a text that is too short. Either way, once a model is trained a certain way, it shouldn't make a difference as long as the test data is padded the same way it was presented in training.
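As an illustration, here is a minimal sketch of both conventions using Keras's pad_sequences utility (the integer token ids and maxlen are made-up values; this is just one common way to pad):

from tensorflow.keras.preprocessing.sequence import pad_sequences

# made-up integer-encoded texts of unequal length
sequences = [[4, 7, 12],          # "thanks for sharing"
             [3, 9, 5, 18],       # "this article is great"
             [6, 2]]              # "here's <URL>"

left  = pad_sequences(sequences, maxlen=4, padding="pre",  value=0)   # pad on the left
right = pad_sequences(sequences, maxlen=4, padding="post", value=0)   # pad on the right

print(left)
# [[ 0  4  7 12]
#  [ 3  9  5 18]
#  [ 0  0  6  2]]
print(right)
# [[ 4  7 12  0]
#  [ 3  9  5 18]
#  [ 6  2  0  0]]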
{ "domain": "ai.stackexchange", "id": 2140, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "natural-language-processing, long-short-term-memory, sequence-modeling, text-classification, padding", "url": null }
javascript, array var keyWords = ["campaign","evar","event","prop", "mvvar1", "mvvar2", "mvvar3", "purchase", "scOpen", "scView", "scAdd"]; var arrLen = []; var different = []; for(var i = 0; i < allActions.length; i++) { arrLen.push(allActions[i].length); } var max = Math.max.apply(null, arrLen) var maxValue = arrLen.indexOf(max);
{ "domain": "codereview.stackexchange", "id": 26989, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, array", "url": null }
ros, openrave Title: librobot_control.so bad plugin info hash Hello, I installed the openrave_robot_control plugin under ROS Diamondback and I was able to successfully run "rosmake" on the source. Then I loaded the plugin into my OPENRAVE_PLUGINS environment variable: export OPENRAVE_PLUGINS=$OPENRAVE_PLUGINS:rospack find openrave/share/openrave/plugins:rospack find openrave_robot_control/lib The issue comes when I run "openrave --listplugins" and I get the following error: [plugindatabase.h:832] /opt/ros/diamondback/stacks/openrave_planning/openrave_robot_control/lib/librobot_control.so failed to load: openrave (InvalidPlugin): [void OpenRAVEGetPluginAttributes(OpenRAVE::PLUGININFO*, int, const char*):87] bad plugin info hash Anyone else experience this? Thanks, Chris Originally posted by chriscannon on ROS Answers with karma: 16 on 2011-12-05 Post score: 0 Turns out the issue was because I had installed OpenRAVE separately from the ROS stack of OpenRAVE. This occurs because during compilation it looks under /usr/bin/openrave which is not linked to the ROS stack OpenRAVE. After uninstalling the separate version of OpenRAVE and re-running rosmake in the openrave_robot_control directory everything worked perfectly. Originally posted by chriscannon with karma: 16 on 2011-12-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 7529, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, openrave", "url": null }
The $n$th-degree truncation of a Taylor series about $x=a$ is known as the $n$th Taylor polynomial; if we let a Taylor polynomial keep going forever instead of cutting it off at a particular degree, we get the Taylor series, and if $f$ has a power series representation (expansion) at $a$, that series is the Taylor series. The special case $a=0$ is called the Maclaurin series. When the Maclaurin series approximates a function, the series values and the function values are very close near $x = 0$; the power series converges fastest when $x$ is closest to $0$. An approximation for the exponential function can be found using its Maclaurin series: $$e^x \approx 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$$ We can improve this approximation in two ways: take more terms (increasing the degree $N$), or evaluate closer to $0$. A short program can investigate the value of $e$ and the exponential function: it first prompts the user to enter the number of terms in the Taylor series and the value of $x$, then compares the partial sum with the true value.
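Here is a small Python sketch of that program (the function and variable names are my own; it simply sums the Maclaurin terms of e^x and compares against math.exp):

import math

def maclaurin_exp(x, n_terms):
    """Approximate e**x by summing the first n_terms terms of its Maclaurin series."""
    total = 0.0
    term = 1.0                 # x**0 / 0!
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)    # next term: x**(k+1) / (k+1)!
    return total

if __name__ == "__main__":
    n = int(input("Number of terms: "))
    x = float(input("Value of x: "))
    approx = maclaurin_exp(x, n)
    print(f"series: {approx:.10f}   math.exp: {math.exp(x):.10f}   "
          f"error: {abs(approx - math.exp(x)):.2e}")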
{ "domain": "timstourenblog.de", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9938070091084549, "lm_q1q2_score": 0.8634461298002867, "lm_q2_score": 0.8688267660487572, "openwebmath_perplexity": 390.82097345168205, "openwebmath_score": 0.9073213934898376, "tags": null, "url": "http://enej.timstourenblog.de/maclaurin-series-approximation.html" }
r, statistics, data-visualization library(tidyverse) # anonymous functions are quick and easy to type, my preference if only one input arg newdat_func <- . %>% # meant to start with df select(weight, age) %>% # keep only column of interest map(~ round(seq(min(.), max(.), length.out = 15))) %>% # don't repeat yourself and call the same operation on both columns in one line c(list(sex = c("female", "male"))) %>% # prep a 3-element list for expand.grid to process expand.grid() newdat2 <- newdat_func(df) # fall back to traditional function format for multiple inputs x_func <- function(model, newdata, link_func) { predict.glm(model, newdata = newdata, type="link", se=TRUE) %>% # obviously this only works on glm objects, you could add checks to be defensive keep(~ length(.) == nrow(newdata)) %>% # drop the third element that is length 1 bind_cols() %>% # build data frame with a column from each list element mutate(low = fit - 1.96 * se.fit, high = fit + 1.96 * se.fit) %>% mutate_all(funs(link_func)) %>% # again don't repeat yourself bind_cols(newdata) %>% # bolt back on simulated predictors mutate(category = cut(age, breaks = c(0, 69, 138, 206), labels = c("0-69", "70-139", "139-206")), age = as.factor(age)) } x2 <- x_func(m1, newdat2, link_func)
{ "domain": "codereview.stackexchange", "id": 32691, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "r, statistics, data-visualization", "url": null }
kinematics, dynamics, jacobian is referring to the fact that you do not directly integrate the angular velocity vector back into some variable that represents the current orientation. For example, when you have the position derivative $\dot{x}(t)$, we know the position $x(t)$ can be found simply from using the equation $$ x(t) = \int_{0}^{t}\dot{x}(\tau) d\tau $$ While a similar concept can be applied to a subset of rotations represented using the axis-angle representation, it does not hold universally. For example, consider a rotation about the x-axis with a magnitude of $2\pi$ radians followed by a rotation about the y-axis with a magnitude of $\pi$ radians. For the first rotation, we can directly integrate the angular velocity to produce its axis-angle representation, $\left[ 2\pi, 0, 0 \right]$. Meanwhile, directly integrating the second rotation would give you the axis-angle value of $\left[ 2\pi, \pi, 0 \right]$. Conceptually, we can see the final result of this example should be a $\pi$ rotation about the y-axis as a $2\pi$ rotation about any axis will return to the original orientation. So, let's consider the rotation matrix derived from the computed axis-angle values as well as the rotation matrix produced from the y-axis rotation alone. So, the rotation matrix produced from the axis-angle representation is $$ \mathbf{R}_{\omega} \approx \left[ \begin{array}{ccc} 0.95 & 0.11 & 0.30 \\ 0.11 & 0.79 & -0.60 \\ -0.30 & 0.60 & 0.74 \end{array} \right] $$ while the y-axis rotation matrix is $$ \mathbf{R}_{y} = \left[ \begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right] $$ Thus, simply integrating the angular velocities will not produce a meaningful representation of orientation.
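As a quick numerical illustration of this comparison, here is a sketch using SciPy's Rotation class (assuming a recent SciPy is available; the variable names are mine) that builds the rotation matrix from the naively integrated axis-angle value [2*pi, pi, 0] and compares it with the pure pi rotation about the y-axis:

import numpy as np
from scipy.spatial.transform import Rotation

R_naive = Rotation.from_rotvec([2 * np.pi, np.pi, 0.0]).as_matrix()  # naively integrated axis-angle
R_y     = Rotation.from_rotvec([0.0, np.pi, 0.0]).as_matrix()        # the true final orientation

np.set_printoptions(precision=2, suppress=True)
print(R_naive)                     # roughly the first matrix quoted above
print(R_y)                         # diag(-1, 1, -1)
print(np.allclose(R_naive, R_y))   # False: direct integration gives the wrong orientation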
{ "domain": "robotics.stackexchange", "id": 2666, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinematics, dynamics, jacobian", "url": null }
ros-kinetic [ 67%] Generating Lisp code from pr2_msgs/PeriodicCmd.msg [ 67%] Generating Lisp code from pr2_msgs/PowerState.msg [ 67%] Generating Lisp code from pr2_msgs/LaserTrajCmd.msg [ 68%] Generating Lisp code from pr2_msgs/AccessPoint.msg [ 68%] Generating Lisp code from pr2_msgs/BatteryServer2.msg [ 68%] Generating Lisp code from pr2_msgs/BatteryState2.msg [ 68%] Generating Lisp code from pr2_msgs/SetLaserTrajCmd.srv [ 69%] Generating Lisp code from pr2_msgs/SetPeriodicCmd.srv [ 69%] Built target realsense2_camera_generate_messages_eus [ 69%] Built target pr2_msgs_generate_messages_lisp [ 70%] Linking CXX shared library /home/mobiltech/catkin_ws/devel/lib/libqtutorials.so [ 70%] Built target qtutorials [ 71%] Linking CXX executable /home/mobiltech/catkin_ws/devel/lib/turtlebot_teleop/turtlebot_teleop_joy [ 71%] Built target turtlebot_teleop_joy [ 71%] Linking CXX executable /home/mobiltech/catkin_ws/devel/lib/ndt/ndt_data_transfer_noscore [ 71%] Built target ndt_data_transfer_noscore [ 71%] Linking CXX shared library /home/mobiltech/catkin_ws/devel/lib/libmt_loc.so [ 71%] Built target mt_loc [ 71%] Linking CXX executable /home/mobiltech/catkin_ws/devel/lib/ndt_ums/ndt_loc_ums [ 71%] Built target ndt_loc_ums
{ "domain": "robotics.stackexchange", "id": 35161, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-kinetic", "url": null }
homework-and-exercises, general-relativity, lagrangian-formalism, differential-geometry, geodesics $$-\frac{d}{d\lambda}\Big( \frac{\partial \mathcal{L}^2}{\partial \dot{t}} \Big)$$ then, $$-\frac{d}{d\lambda}\Big( \frac{\partial}{\partial \dot{t}} [K(x,y,z,t)(-\dot{t}^2+\dot{x}^2)+M(x,y,z,t)\dot{x}\dot{t}+\dot{y}^2+\dot{z}^2] \Big) = -\frac{d}{d\lambda}\Big( -2K\dot{t}+M\dot{x}\Big) = 2K\ddot{t}-M\ddot{x} \implies $$ $$ 2K\ddot{t}-M\ddot{x} = 0 \tag{2}$$ My question is: both $(1)$ and $(2)$ are valid? I mean, did I calculate right? The Lagrangian is $$ L= \frac{1}{2} g_{\mu\nu}\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}$$ and the Euler/Lagrange eqs. are $$\frac{d}{d\lambda}\frac{\partial L}{\partial(dx^\mu/d\lambda)} = \frac{\partial L}{\partial x^\mu} $$ Now the part in the $t$ component that you considered
{ "domain": "physics.stackexchange", "id": 54506, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, general-relativity, lagrangian-formalism, differential-geometry, geodesics", "url": null }
general-relativity, lagrangian-formalism, conventions, stress-energy-momentum-tensor, variational-calculus Note that the velocity $\dot{x}_{\mu}:= g_{\mu\nu}\dot{x}^{\nu}$ with lower index implicitly depends on the metric. In contrast the velocity $\dot{x}^{\nu}$ with upper index does not depend on the metric. This is important when we vary wrt. the metric. The stress-energy-momentum tensor depends on the sign convention for the metric, cf. this Phys.SE post.
{ "domain": "physics.stackexchange", "id": 71514, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, lagrangian-formalism, conventions, stress-energy-momentum-tensor, variational-calculus", "url": null }
thermodynamics, adiabatic Title: Why is $C_V$ used in this derivation? In his lecture (26:30-38:40), Shankar derives the adiabatic pressure-volume relationship $P_1V_1^\gamma = P_2V_2^\gamma$, where $\gamma = C_P / C_V$, from the First Law of Thermodynamics $\Delta U = Q - W$. His first step in doing so is to make the substitution $\Delta U = n C_V \Delta T$ into the First Law. In adiabatic processes, volumes are not held constant, so why is using the specific heat at constant volume $C_V$ valid?
{ "domain": "physics.stackexchange", "id": 53552, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, adiabatic", "url": null }
c#, object-oriented Title: Basic traffic light simulation library I'm planning on completing the April 2017 Community Challenge, Simulate a Multi-Way Intersection. And so I've decided to do this by going in cycles. There are two sections to my code: the generic code that powers the creation of Delay<T>, and the subclass of Delay<T>, TrafficLightsDelay, which contains the business logic. I personally think this to be a good design; at the least, it allows me to better understand C# inheritance. Coming from Python, I don't like having to constantly create new instances of objects, and prefer to use sugar, and so DelayBuilder is largely sugar for creating a List<DelayItem>. Whilst it's not needed, I find the chained functions simpler to read than lots of object creation. I personally find DelayBuilder.Build<T> to be a bit of a hack to instantiate the wanted child object of Delay<T>, and would like to know if there is a better way to do this. I'm unsure how good Delay<T>.ChangeList is for the overall structure of this code. Is this OK, or should I use another way, possibly one which doesn't use DelayTypes? Anyway, any and all improvements are welcome. Here is the code: using System; using System.Collections.Generic; using System.Linq; namespace TrafficLights { [Flags] internal enum DelayTypes { Sleep = 1, Change = 2 } class DelayItem { public object[] Data; public DelayTypes Type; public DelayItem(DelayTypes type, object[] data) { Type = type; Data = data; } } class DelayItemSleep : DelayItem { public DelayItemSleep(int amount) : base(DelayTypes.Sleep, new object[] {amount}) { } } class DelayItemChange : DelayItem { public DelayItemChange(object data) : base(DelayTypes.Change, new[] {data}) { } public DelayItemChange(object[] data) : base(DelayTypes.Change, data) { } } class DelayBuilder { private readonly List<DelayItem> _list;
{ "domain": "codereview.stackexchange", "id": 25530, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, object-oriented", "url": null }
By analogous argument, $\displaystyle L(f,P_n)=\dfrac{b^2-a^2}{2}-\dfrac{(b-a)^2}{2n}.$ Combining our results reveals $\displaystyle U(f,P_n)-L(f,P_n)=\dfrac{(b-a)^2}{n}, \quad \text{for all}\ n\in\mathbb{N}.$ By the Archimedean property of $\mathbb{R}$, there is $N\in\mathbb{N}$ such that $(b-a)^2 \leq N\varepsilon \quad\implies\quad\dfrac{(b-a)^2}{N}\leq\varepsilon.$ Hence $\displaystyle U(f)-L(f)\leq U(f,P_N)-L(f,P_N)=\dfrac{(b-a)^2}{N}\leq \varepsilon.$ Since $\varepsilon$ was chosen arbitrarily, we deduce $U(f)\leq L(f)$, from which the first result follows. Step 2. By construction, $\begin{array}{rl}\dfrac{b^2-a^2}{2}-\dfrac{(b-a)^2}{2n}&=L(f,P_n)\\&\leq L(f)\\ &\leq U(f) \\ &\leq U(f,P_n) \\ &=\dfrac{b^2-a^2}{2}+\dfrac{(b-a)^2}{2n}.\end{array}$ As the Archimedean property also implies $1/n\rightarrow 0$, applying the squeeze lemma yields
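A quick numerical check of these Darboux sums (a sketch; it assumes the integrand is $f(x)=x$ on $[a,b]$ with the uniform partition $P_n$, as the formulas above indicate, and the values of $a$, $b$, $n$ are arbitrary choices):

def darboux_sums(a, b, n):
    """Upper and lower Darboux sums of f(x) = x on [a, b] over the uniform partition P_n."""
    dx = (b - a) / n
    xs = [a + i * dx for i in range(n + 1)]
    U = sum(xs[i + 1] * dx for i in range(n))   # f is increasing, so the sup on each piece is the right endpoint
    L = sum(xs[i] * dx for i in range(n))       # and the inf is the left endpoint
    return U, L

a, b, n = 1.0, 4.0, 10
U, L = darboux_sums(a, b, n)
print(U - L, (b - a) ** 2 / n)        # both print 0.9 (up to float rounding)
print(L, (b ** 2 - a ** 2) / 2, U)    # 7.05 <= 7.5 <= 7.95, matching the sandwich in Step 2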
{ "domain": "typal.academy", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462183543601, "lm_q1q2_score": 0.8054058615873716, "lm_q2_score": 0.8152324960856175, "openwebmath_perplexity": 1205.4329969031794, "openwebmath_score": 0.9526798129081726, "tags": null, "url": "https://docs.typal.academy/analysis/integration/basic-integration" }
fl.formal-languages Title: Irreducible languages This is not necessarily a research question. Just a question out of curiosity: I am trying to understand if one can define "irreducible" languages. As a first guess I call a language L "reducible" if it can be written as $L = A \cdot B$ with $A \cap B = \emptyset$ and $|A|,|B|>1$, otherwise call the language "irreducible". Is it true: 1) If P is irreducible, A, B, C are languages such that $A\cap B = \emptyset$, $P \cap C = \emptyset$ and $A\cdot B = C\cdot P$, then there exists a language $B'$ with $B' \cap P = \emptyset$ such that $B = B'\cdot P$? For integers this would correspond to Euclid's lemma and would be useful for proving uniqueness of "factorization". 2) Is it true that every language can be factored into a finite number of irreducible languages? If someone has a better idea of how to define an "irreducible" language, I would like to hear it. (Or is there maybe already a definition of this which I am unaware of?) Here's a counterexample to this: call a language L "reducible" if it can be written as $L = A \cdot B$ with $A \cap B = \emptyset$ and $|A|,|B|>1$, otherwise call the language "irreducible". Is it true: 1) If P is irreducible, A, B, C are languages such that $A\cap B = \emptyset$, $P \cap C = \emptyset$ and $A\cdot B = C\cdot P$, then there exists a language $B'$ with $B' \cap P = \emptyset$ such that $B = B'\cdot P$?
{ "domain": "cstheory.stackexchange", "id": 4651, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fl.formal-languages", "url": null }
arrays One referenced approach for deciding whether a directed graph is strongly connected keeps two visited arrays, vis1 and vis2, of size N (the number of nodes of the graph), initialised to false in all indexes. To demonstrate DFS, BFS and graph-connectivity algorithms visually I have developed a Windows application in C# that generates a graph randomly, given the number of nodes, and then displays it.

# visits all the nodes of a graph (connected component) using BFS
def bfs_connected_component(graph, start):
    # keep track of all visited nodes
    explored = []
    # keep track of nodes to be checked
    queue = [start]
    # keep looping until there are no nodes still to be checked
    while queue:
        # pop shallowest node (first node) from queue
        node = queue.pop(0)
        if node not in explored:
            # add node to list of checked nodes
            explored.append(node)
            # add the node's neighbours to the queue (already-explored ones are skipped when popped)
            queue.extend(graph[node])
    return explored

Introduction: graphs are a convenient way to store certain types of data. As the name BFS suggests, you traverse the graph breadthwise: start traversing from a selected node (the source or starting node) and traverse the graph layerwise, exploring the neighbour nodes level by level. A graph is connected if it is possible to reach every vertex from every other vertex by a simple path. If BFS is performed on a connected, undirected graph, a tree is defined by the edges involved with the discovery of new nodes. If each vertex in a graph is to be traversed by a tree-based algorithm (such as DFS or BFS), then the algorithm must be called at least once for each connected component of the graph. An articulation vertex is a vertex of a connected graph whose deletion disconnects the graph.
{ "domain": "astburygarage.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.95598134762883, "lm_q1q2_score": 0.8200103124373601, "lm_q2_score": 0.857768108626046, "openwebmath_perplexity": 1039.2370077693708, "openwebmath_score": 0.3814409077167511, "tags": null, "url": "http://www.astburygarage.com/0rfpx8at/bfs-connected-graph-ff6f49" }
motor Title: How much weight can DC Motor carry? How much weight can the following brushed motors carry together? 4X 12 Volt 100rpm supplied voltage => (5/17)*12 2X 12 Volt 100rpm supplied voltage => (7/17)*12 These motors are basically the wheels for my battle bot. Here is the link to the Motor. I am new to this so please tell me if I'm wrong somewhere. Well, to answer this question for your special case, one needs more information about the motors. Maybe you can supply a product code. And one needs more details of the involved mechanics... is it a car-like application? Is it going up-hill or down-hill? How long does the motion need to last? How good is the cooling? And so on... I'll try to give you an idea of what all this data, which you can find out about a motor, means... You can approximately calculate some kind of upper limit for the "power" a motor can deliver for an infinite duration, due to thermal design limits, at let's say 'standard conditions' (normal room temperature, normal ventilation, ...): A motor has one or multiple phases (coils that will induce a magnetic field), which have: I_max - a maximum design current (in the unit Ampere or milli-Ampere), R - an electrical resistance (given in the unit Ohms), U_max - a maximum design voltage (given in Volts)
{ "domain": "robotics.stackexchange", "id": 2607, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "motor", "url": null }
The second model can be re-expressed like this: $$Y = \beta_0 + (\beta_1 + \beta_3X_2)X_1 + \beta_4 X_1^2 + \beta_2X_2 +\beta_5X_2^2 + \epsilon,$$ which shows that, in this model, the effect of $$X_1$$ on $$Y$$ (controlling for the effect of $$X_2$$) is assumed to be quadratic rather than linear. This quadratic effect is captured by including both $$X_1$$ and $$X_1^2$$ in the model. While the coefficient of $$X_1^2$$ is assumed to be independent of $$X_2$$, the coefficient of $$X_1$$ is assumed to depend linearly on $$X_2$$. Using either model would imply that you are making entirely different assumptions about the nature of the effect of $$X_1$$ on $$Y$$ (controlling for the effect of $$X_2$$). Usually, people fit the first model. They might then plot the residuals from that model against $$X_1$$ and $$X_2$$ in turns. If the residuals reveal a quadratic pattern in the residuals as a function of $$X_1$$ and/or $$X_2$$, the model can be augmented accordingly so that it includes $$X_1^2$$ and/or $$X_2^2$$ (and possibly their interaction). Note that I simplified the notation you used for consistency and also made ther error term explicit in both models.
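To make the two specifications concrete, here is a small NumPy sketch (simulated data; the variable names and coefficient values are invented for illustration) that fits the second model by adding $X_1X_2$, $X_1^2$ and $X_2^2$ as extra columns of an ordinary least-squares design matrix:

import numpy as np

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
eps = rng.normal(scale=0.5, size=n)
# simulated "truth" with a genuine quadratic effect in x1 and an interaction
y = 1.0 + 2.0 * x1 + 0.5 * x1**2 - 1.0 * x2 + 0.8 * x1 * x2 + eps

# design matrix for:  Y = b0 + b1*X1 + b2*X2 + b3*X1*X2 + b4*X1^2 + b5*X2^2 + error
X = np.column_stack([np.ones(n), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))   # roughly [1.0, 2.0, -1.0, 0.8, 0.5, 0.0]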
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9591542794197471, "lm_q1q2_score": 0.8013576139265429, "lm_q2_score": 0.8354835411997897, "openwebmath_perplexity": 434.42950109571484, "openwebmath_score": 0.6716662049293518, "tags": null, "url": "https://stats.stackexchange.com/questions/379841/in-linear-regression-why-should-we-include-quadratic-terms-when-we-are-only-int" }
general-relativity, perturbation-theory Title: Linear Metric Perturbation and Brans-Dicke Theory Recently, I have been researching about modified gravity theories and one of the theories has been the theory of the graviton. If one starts with the metric tensor $g_{\mu\nu}$ and then performs the perturbation $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ one then obtains the perturbation for a linearized gravitational wave, which can also represent a spin-2 particle, the graviton. The Riemann tensor is then simplified in a local inertial frame to be: $R^\alpha_{\beta\mu\nu}=\partial _\mu \Gamma^a_{\beta\nu}-\partial _\nu \Gamma^a_{\beta\mu}$ The other curvature tensors can then be calculated and the result could construct the Einstein Hilbert Action for the graviton field. Now the question that I have is the following: since the tensor $h_{\mu\nu}$=$kH_{\mu\nu}$, where k is the gravitational constant commonly seen in the Einstein Field Equation, could one instead of the k plug in a scalar field $\phi$ like in Brans-Dicke Theory? Would this yield a result similar to the Brans-Dicke action? I tried to calculate the Ricci scalar by plugging-in the scalar field into the graviton tensor and got: $R$=$\partial_{\mu}\partial_{\nu}h^{\mu\nu}-\square h$
{ "domain": "physics.stackexchange", "id": 11345, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, perturbation-theory", "url": null }
turing-machines But this means exactly that the machine $M_c$ - whose index is $f(c)$ - behaves as if it was given its own index! (Intuitively, it "thinks" it has index $c$, and that's good enough: index $c$ and index $f(c)$ are "the same.")
{ "domain": "cs.stackexchange", "id": 12650, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turing-machines", "url": null }
We obtain \begin{align*} \color{blue}{\sum_{n=m}^\infty r^{2n}\binom{2n}{n-m}}&=\sum_{n=0}^\infty\binom{2n+2m}{n}r^{2n+2m}\tag{3}\\ &=r^{2m}\sum_{n=0}^\infty[u^n](1+u)^{2n+2m}r^{2n}\tag{4}\\ &=r^{2m}\left.\frac{(1+u)^{2m}}{1-r^2\cdot 2(1+u)}\right|_{u=r^2(1+u)^2}\tag{5}\\ &=r^{2m}\frac{(1+u)^{2m+1}}{1-u}\tag{6}\\ \end{align*} Comment: • In (3) we shift the index to start with $$n=0$$. • In (4) we apply the coefficient of operator $$[u^n](1+u)^{2n+2m}=\binom{2n+2m}{n}$$. • In (5) we apply (2) with $$F(u)=(1+u)^{2m}, \phi(u)=(1+u)^2$$ evaluated at $$r^2$$. • In (6) we substitute $$r^2=\frac{u}{(1+u)^2}$$ and do some simplifications.
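The closed form in (6) can be checked numerically; the sketch below (variable names are mine) obtains $u$ from $u=r^2(1+u)^2$ by fixed-point iteration and compares a truncated version of the left-hand series with $r^{2m}\,(1+u)^{2m+1}/(1-u)$:

from math import comb

r, m = 0.3, 2

# solve u = r^2 (1+u)^2 for the branch with u -> 0 as r -> 0 (fixed-point iteration)
u = 0.0
for _ in range(200):
    u = r**2 * (1 + u) ** 2

lhs = sum(r ** (2 * n) * comb(2 * n, n - m) for n in range(m, 200))   # truncated series
rhs = r ** (2 * m) * (1 + u) ** (2 * m + 1) / (1 - u)

print(lhs, rhs)   # both ~ 0.0154, agreeing for this choice of r and m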
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9877587265099557, "lm_q1q2_score": 0.8165917973364074, "lm_q2_score": 0.8267118026095991, "openwebmath_perplexity": 538.6026935834526, "openwebmath_score": 0.977874219417572, "tags": null, "url": "https://math.stackexchange.com/questions/3392491/power-series-with-shifted-central-binomial-coefficients" }
Image of compacta under a continuous map There is a well-known result in topology, Any continuous bijection from a compact topological space to a Hausdorff space is a homeomorphism. I was wondering whethet the following (slightly weaker) statement holds: Let $K$ be a compact topological space and $X$ a topological space. Then $f(K)$ is compact in $X$ for any continuous map $f\colon K\to X$. - Yes. Take an open cover of $f(K)$ and take preimages. A finite number of these cover $K$. Now what? – Dylan Moreland Jul 26 '12 at 23:52 Incidentally, your well-known result is usually proved using your later, true, result. – David Mitra Jul 26 '12 at 23:55 The continuous image of a compact set is compact............. – user38268 Jul 27 '12 at 0:07 The statement is true: Take any be an open cover $\mathcal U$ of $f[K]$. Then, by continuity of $f$, the set $f^{-1}[U]$ is open in $K$ for any $U\in\mathcal U$. Thus, $\{f^{-1}[U]: U\in\mathcal U\}$ is an open cover of $K$. By using that assumption that $K$ is compact, there is a finite sub-cover $\mathcal V\subset\mathcal U$ such that $\{f^{-1}[V]: V\in\mathcal V\}$ is a cover of $K$. Then $\{V:V\in\mathcal V\}$ covers $f[K]$, so $f[K]$ is compact. -
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.984810949854636, "lm_q1q2_score": 0.8074399284194773, "lm_q2_score": 0.8198933293122506, "openwebmath_perplexity": 110.80620003255147, "openwebmath_score": 0.9351478219032288, "tags": null, "url": "http://math.stackexchange.com/questions/175659/image-of-compacta-under-a-continuous-map" }
reinforcement-learning, markov-decision-process, rewards, reward-functions Title: What is the reward system of reinforcement learning? Can you describe this reward system in more detail? I understand that the environment sends a signal indicating whether or not the action taken by the agent was 'good' or not, but it seems too simple. Basically, can you detail the nitty-gritty workings of this system? I dunno, I may just be overthinking things. In this case, the word "system" refers to a Markov decision process (MDP), which is the mathematical model used to represent the reinforcement learning (RL) problem or, in general, a decision making problem. Recall that, in RL, the problem consists in finding an (optimal) policy, which is a policy that allows the agent to collect the highest amount of reward (in the long run). Hence, in RL, the MDP is the problem and the optimal policy (for that MDP) is the solution. The MDP is composed of the set of states of the environment $S$, the set of possible actions that the RL agent can take $A$, a transition function, $P(s_{t+1}=s'\mid s_{t}=s, a_t = a)$, which is a probability distribution and describes the dynamics of the environment, and a reward function $R_a(s', s)$, which is a function that describes the "reward system" of the environment. $R_a(s', s)$ can be thought of as the reward (signal) that the agent receives after having taken action $a$ in state $s$ and having landed in state $s'$.
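To make the pieces concrete, here is a minimal Python sketch of an MDP; the state names, transition probabilities, and rewards are invented purely for illustration, and the "policy" is just uniform random action selection:

import random

states = ["s0", "s1"]
actions = ["stay", "move"]

# P[(s, a)] lists (probability, next_state) pairs; this plays the role of P(s' | s, a)
P = {
    ("s0", "stay"): [(1.0, "s0")],
    ("s0", "move"): [(0.8, "s1"), (0.2, "s0")],
    ("s1", "stay"): [(1.0, "s1")],
    ("s1", "move"): [(0.8, "s0"), (0.2, "s1")],
}

# R[(s, a, s')] is the reward signal the environment sends back; unlisted transitions give 0
R = {("s0", "move", "s1"): 1.0}

def step(state, action):
    probs, nexts = zip(*P[(state, action)])
    next_state = random.choices(nexts, weights=probs)[0]
    return next_state, R.get((state, action, next_state), 0.0)

s = "s0"
for _ in range(5):
    a = random.choice(actions)      # a deliberately dumb uniform random policy
    s, r = step(s, a)
    print(a, s, r)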
{ "domain": "ai.stackexchange", "id": 1130, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reinforcement-learning, markov-decision-process, rewards, reward-functions", "url": null }
# Is there a relationship between the cross product and quaternion multiplication? I've just been introduced to the Kronecker delta, $\delta_{ij}$, along with the alternating tensor, $\varepsilon_{ijk}$ (in vector calculus). Motivation for the question: I've been introduced to some properties of $\varepsilon_{ijk}$, e.g. antisymmetry (i.e. if you swap two indices, then the $\varepsilon_{...}$ is negated). Also, cyclic permutations of indices are allowed ($\varepsilon_{ijk}=\varepsilon_{kij}=\varepsilon_{jki}$). These properties seemed familiar: namely, they satisfy two of the conditions for unit quaternion multiplication. When multiplying unit quaternions, $i, j, k$, we may cyclically permute them ($ijk=kij=jki=-1$), and swapping the order in which we multiply unit quaternions negates the result (e.g. $ij=-ji, jk=-kj, ...$). Question: Since $\varepsilon_{ijk}$ is used to represent the cross product of two vectors, is there some inherent relationship between the cross product and quaternion multiplication (and if not/so, why?), or is this just a coincidence?
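There is indeed a tight relationship: for "pure" quaternions (zero scalar part) the Hamilton product encodes both the dot and the cross product, $(0,\mathbf a)(0,\mathbf b) = (-\mathbf a\cdot\mathbf b,\ \mathbf a\times\mathbf b)$. A small NumPy check of that identity; the component-wise rule below is the standard Hamilton product, written out explicitly so that the comparison with the cross product is not circular:

import numpy as np

def quat_mul(p, q):
    # Hamilton product for quaternions stored as [w, x, y, z]
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 + y1*w2 + z1*x2 - x1*z2,
        w1*z2 + z1*w2 + x1*y2 - y1*x2,
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])

prod = quat_mul(np.r_[0.0, a], np.r_[0.0, b])   # product of two pure quaternions
print(np.isclose(prod[0], -np.dot(a, b)))        # scalar part equals -a.b
print(np.allclose(prod[1:], np.cross(a, b)))     # vector part equals a x b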
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.980280869588304, "lm_q1q2_score": 0.8190085322940455, "lm_q2_score": 0.8354835411997897, "openwebmath_perplexity": 196.8827238507358, "openwebmath_score": 0.9009994268417358, "tags": null, "url": "https://math.stackexchange.com/questions/984438/is-there-a-relationship-between-the-cross-product-and-quaternion-multiplication" }
catkin Title: How to have one catkin workspace rely on another Is there a generally accepted way to handle the case where I have some libraries that are generated in one catkin workspace and are used by a project in another catkin workspace? What I mean by handle is, how do I tell the project workspace where to find the libraries generated by the library workspace? I can imagine lots of ways to do this using symbolic links and/or environment variables but they all seem somewhat fragile or messy. I'm wondering if anyone has a way of doing this that they like. Originally posted by gabehein on ROS Answers with karma: 1 on 2016-01-13 Post score: 0 What you want is called chaining catkin workspaces. The technique is described in that tutorial. Originally posted by joq with karma: 25443 on 2016-01-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23427, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "catkin", "url": null }
javascript, node.js, ecmascript-6, wrapper, twitter Title: TweetResolver class, to be used in a graphql project This is my module TweetResolver (tweet-resolver.js) : import Tweet from '../../models/Tweet'; import { requireAuth } from '../../services/auth'; export default { getTweet: async (_, { _id }, { user }) => { try { await requireAuth(user); return Tweet.findById(_id); } catch (error) { throw error; } }, getTweets: async (_, args, { user }) => { try { await requireAuth(user); return Tweet.find({}).sort({ createdAt: -1 }); } catch (error) { throw error; } }, createTweet: async (_, args, { user }) => { try { await requireAuth(user); return Tweet.create(args); } catch (error) { throw error; } }, updateTweet: async (_, { _id, ...rest }, { user }) => { try { await requireAuth(user); Tweet.findByIdAndUpdate(_id, rest, { new: true }); } catch (error) { throw error; } }, deleteTweet: async (_, { _id }, { user }) => { try { await requireAuth(user); await Tweet.findByIdAndRemove(_id); return { message: 'Tweet has been deleted' } } catch (error) { throw error; } } } It is used in my graphql project like this: import GraphQLDate from 'graphql-date'; import TweetResolvers from './tweet-resolvers'; import UserResolvers from './user-resolvers';
{ "domain": "codereview.stackexchange", "id": 27004, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, node.js, ecmascript-6, wrapper, twitter", "url": null }
ML.7. $B + C = 2A$. Row Operations and Echelon Forms, p. 600. ML.7. $x_1 = -r + 1$, $x_2 = r + 2$, $x_3 = r - 1$, $x_4 = r$, $r$ any real number. ML.11. (a) $x_1 = 1 - r$, $x_2 = 2$, $x_3$ any real number, $x_4 = r$, where $r$ is any real number. (b) $x_1 = 1 - r$, $x_2 = 2 + r$, $x_3 = -1 + r$, $x_4 = r$, where $r$ is any real number. ML.13. The \ command yields a matrix showing that the system is inconsistent. The rref command leads to the display of a warning that the result may contain large roundoff errors. LU-Factorization, p. 601. ML.5. $x = -2 + r$, $y = -1$, $z = 8 - 2r$, $w = r$, $r$ any real number.
{ "domain": "silo.pub", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.97737080326267, "lm_q1q2_score": 0.8080039639783043, "lm_q2_score": 0.8267117876664789, "openwebmath_perplexity": 4465.94787859596, "openwebmath_score": 0.8407867550849915, "tags": null, "url": "https://silo.pub/elementary-linear-algebra-with-applications-9th-edition.html" }
A vector is a mathematical construct that has both length and direction. In common use, a cylinder is taken to mean a finite section of a right circular cylinder. From a general point of view, the various fields in (3-dimensional) vector calculus are uniformly seen as being k-vector fields: scalar fields are 0-vector fields, vector fields are 1-vector fields, pseudovector fields are 2-vector fields, and pseudoscalar fields are 3-vector fields. Thus the bound vector represented by $(1,0,0)$ is a vector of unit length pointing from the origin along the positive $x$-axis. Spherical Coordinate System: the spherical system is used commonly in mathematics and physics and has variables $r$, $\theta$, and $\varphi$. This structure simply means that the tangent space at each point has an inner product (more generally, a symmetric nondegenerate form) and an orientation, or more globally that there is a symmetric nondegenerate metric tensor and an orientation, and works because vector calculus is defined in terms of tangent vectors at each point. Applications include the Gauss, Green, and Stokes theorems and the use of coordinate systems and vectors in real life. Vertical Line, Graphed: vertical line $x = a$, lying on the $xy$-plane ($z=0$). Vector calculus, or vector analysis, is concerned with differentiation and integration of vector fields, primarily in 3-dimensional Euclidean space. Because the cross product is perpendicular to both original vectors, the resulting vector is normal to the plane of the original vectors. A vector field is an assignment of a vector to each point in a space. Vectors are needed in order to describe a plane and
{ "domain": "lartpurgallery.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9683812299938007, "lm_q1q2_score": 0.8111354503600527, "lm_q2_score": 0.8376199633332891, "openwebmath_perplexity": 459.20380164245637, "openwebmath_score": 0.8249136209487915, "tags": null, "url": "http://lartpurgallery.com/docs/vector-calculus-in-architecture-8f04d8" }
java, object-oriented, android, classes, null public String GetCellPhoneNumber() { return this.cellPhoneNumber; } public String GetEmailInfo() { return this.emailInfo; } public String GetHash() throws NoSuchAlgorithmException { return HashingMethod(this.fullName + this.personalID); } public String GetHashedPassword() throws NoSuchAlgorithmException { return this.password; } public boolean CheckPassword(String password) { boolean result = false; try { result = this.password.equals(HashingMethod(password)); } catch (Exception e) { e.printStackTrace(); } return result; } //********************************************************************************************** // Reference: https://stackoverflow.com/a/2624385/6667035 private String HashingMethod(String InputString) throws NoSuchAlgorithmException { MessageDigest messageDigest = MessageDigest.getInstance("SHA-256"); String stringToHash = InputString; messageDigest.update(stringToHash.getBytes()); String stringHash = new String(messageDigest.digest()); return stringHash; } } MainActivity.java implementation: package com.example.userclasstest; import androidx.appcompat.app.AppCompatActivity; import android.content.Context; import android.os.Bundle; import android.util.Log; import android.widget.Toast; public class MainActivity extends AppCompatActivity {
{ "domain": "codereview.stackexchange", "id": 41342, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, object-oriented, android, classes, null", "url": null }
34; 23; 65; 22; 63; 45; 53; 38. 4; 28; 5; 73; 79; 17; 15; 5; 34; 37; 45; 56. To get the count in each interval we subtract the cumulative count at the start of the interval from the cumulative count at the end of the interval. Draw the histogram corresponding to this ogive. An ogive is drawn by plotting the beginning of the first interval at a $y$-value of zero, and plotting the end of every interval at the $y$-value equal to the cumulative count for that interval. From the ogive, find the 1st quartile, median, 3rd quartile and 80th percentile. Use the data to answer the following questions. A cumulative frequency graph is a graph plotted from a cumulative frequency table. Arguments: x, for the generic and all but the default method, an object of class "grouped.data"; for the default method, a vector of individual data if y is NULL, a vector of group boundaries otherwise. y, a vector of group frequencies. 3. Compute the average mark for this class, rounded to the nearest integer. Example: Draw a cumulative frequency graph for the frequency table below. So, to get from a frequency polygon to an ogive, we would add up the counts as we move from left to right in the graph. In statistics, an ogive shows a single line curve. (iii) The number of students who did not pass the test if minimum marks required to pass
{ "domain": "exciteways.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9664104924150546, "lm_q1q2_score": 0.827103533704437, "lm_q2_score": 0.8558511524823263, "openwebmath_perplexity": 611.7938920923737, "openwebmath_score": 0.4436640441417694, "tags": null, "url": "http://exciteways.com/bape-sleeve-palnml/7e562e-ogive-graph-example" }
quantum-chemistry, computational-chemistry, density-functional-theory Title: What to do with (large) imaginary frequencies for constrained minimum structures? I am performing DFT calculations using ORCA 4.0.1 on an enzyme active site model. The model contains 89 atoms including the substrate (see Animation 1), five of which are fixed in space (the spherical atoms in Animation 1). I am using dispersion-corrected B3LYP with the 6-31G(d,p) basis set, with the "RIJCOSX" approximation, and with a CPCM model for the solvent (dielectric constant $\epsilon = 4)$. From what I have seen in the literature, this method is often applied to enzyme models with success, and this is the reason for using it. Once I start computing reaction paths, I plan to try different functionals and basis sets to see if this gives drastically different results. My geometry optimization tolerances are Energy change: 5e-6; Max gradient: 2e-3; RMS gradient: 5e-4; Max displacement: 4e-3; RMS displacement: 2e-3
{ "domain": "chemistry.stackexchange", "id": 10127, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-chemistry, computational-chemistry, density-functional-theory", "url": null }
The partial sum for the positive series is: $$\left(\frac{\ln^2n}{2}-\frac{\ln^2 2}{2}\right)+L+o(1)$$ Returning to the original, alternating series: $$-S_{2n}=\frac{ -\ln 2}{2}+\frac{ \ln 3}{3}-\frac{\ln 4}{4}+\frac{\ln 5}{5}-\cdots$$ $$=\frac{\ln 2}{2}+\frac{\ln 3}{3}+\cdots+\frac{\ln 2n}{2n}-2\left(\frac{\ln 2}{2}+\frac{\ln 4}{4}+\cdots+\frac{\ln 2n}{2n}\right)$$ Consider the partial sum in parentheses $$\frac{\ln 2}{2}+\frac{\ln 4}{4}+\cdots+\frac{\ln 2n}{2n}=\frac{\ln 2}{2}+\frac{\ln 2 +\ln 2}{4}+\frac{\ln 2+\ln 3}{6}+\cdots+\frac{\ln 2+\ln n}{2n}$$ $$=\frac{1}{2}\left(\ln 2\left(1+\frac{ 1}{2}+\cdots+\frac{ 1}{n}\right)+\left(\frac{ \ln 2}{2}+\frac{ \ln 3}{3}+\cdots+\frac{ \ln n}{n}\right)\right)$$
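As a numerical cross-check of where this is heading: the alternating series converges to the classical value $\gamma\ln 2-\tfrac{\ln^2 2}{2}$, with $\gamma$ the Euler-Mascheroni constant. The sketch below sums many terms and averages two consecutive partial sums to accelerate the alternating tail; treat the comparison as a check rather than a proof.

import math

gamma = 0.5772156649015329            # Euler-Mascheroni constant
target = gamma * math.log(2) - math.log(2) ** 2 / 2

N = 10**6
s = 0.0
for n in range(2, N + 1):
    s += (-1) ** n * math.log(n) / n

# averaging two consecutive partial sums damps the alternating oscillation
s_next = s + (-1) ** (N + 1) * math.log(N + 1) / (N + 1)
print((s + s_next) / 2)               # numerical estimate of the sum
print(target)                         # gamma * ln 2 - (ln 2)^2 / 2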
{ "domain": "bootmath.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9770226260757066, "lm_q1q2_score": 0.8304410664165451, "lm_q2_score": 0.8499711718571774, "openwebmath_perplexity": 218.56782066403986, "openwebmath_score": 0.9712955951690674, "tags": null, "url": "http://bootmath.com/the-sum-of-1n-fracln-nn.html" }
# Polynomial Orthogonal Complement Let $V = \mathbb{P^4}$ denote the space of quartic polynomials, with the $L^2$ inner product $$\langle p,q \rangle = \int^1_{-1} p(x)q(x)dx.$$ Let $W = \mathbb{P^2}$ be the subspace of quadratic polynomials. Find a basis for and the dimension of $W^{\perp}$. The answer is $$t^3 - \frac{3}{5}t, t^4 - \frac{6}{7}t^2 + \frac{3}{35};\,\, \dim (W^{\perp}) =2$$ How did they get that? - "Space of quartic polynomials"? Do you mean the space of (real, complex, rational...) polynomials of degree $\,\leq 4\,$ and zero? –  DonAntonio Nov 6 '12 at 2:57 @DonAntonio yes i do.. –  diimension Nov 6 '12 at 3:01 Which one? Real? Complex? –  EuYu Nov 6 '12 at 3:01 @EuYu it doesnt specify but im sure is real –  diimension Nov 6 '12 at 3:06 It must be real or there would be a conjugate inside the inner product... –  copper.hat Nov 6 '12 at 5:06
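The answer can be verified mechanically: the two quoted polynomials are (up to scaling) the Legendre polynomials $P_3$ and $P_4$, which are orthogonal to every polynomial of lower degree on $[-1,1]$. A short SymPy check that each is orthogonal to the basis $\{1, t, t^2\}$ of $W$ under the given inner product:

import sympy as sp

t = sp.symbols('t')

def inner(p, q):
    return sp.integrate(p * q, (t, -1, 1))

basis_W = [sp.Integer(1), t, t**2]
candidates = [t**3 - sp.Rational(3, 5) * t,
              t**4 - sp.Rational(6, 7) * t**2 + sp.Rational(3, 35)]

for p in candidates:
    # every inner product with the basis of W should come out exactly 0
    print([inner(p, q) for q in basis_W])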
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9783846716491914, "lm_q1q2_score": 0.8410818397890527, "lm_q2_score": 0.8596637541053281, "openwebmath_perplexity": 335.024175507922, "openwebmath_score": 0.9669734239578247, "tags": null, "url": "http://math.stackexchange.com/questions/231137/polynomial-orthogonal-complement/231142" }
# Homework Help: Area Element of Elliptic Cylinder Coordinates 1. Nov 11, 2009 ### jameson2 1. The problem statement, all variables and given/known data Compute the area element for elliptic cylinder coordinates 2. Relevant equations The coordinates are defined as follows: x=a*cosh(u)*cos(v) y=a*sinh(u)*sin(v) 3. The attempt at a solution Starting from the assumption that the area element dA=dx*dy, I found dx and dy: dx=a*du*sinh(u)*cos(v) - a*cosh(u)*dv*sin(v) dy=a*du*cosh(u)*sin(v) - a*sinh(u)*dv*cos(v) Then multiplying these together, to get dA: dA=[(a^2)*sinh(u)*cosh(u)*sin(v)*cos(v)*(du^2 - dv^2)] + [(a^2)*du*dv*((sinh(u))^2)*((cos(v))^2) - (cosh(u))^2)*((sin(v))^2)] I don't like this answer for a couple of reasons. It seems like there should be a tidier, more compact expression than what I have. Compared to surface elements in other coordinate systems, this is frankly a mess. Also, I don't think I've seen a "du^2" in any area element formulae either, which I'm not sure makes it wrong, but I feel a little uneasy about it anyway.
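For a curvilinear coordinate system the area element is not obtained by multiplying the differentials term by term; it is the absolute value of the Jacobian determinant times $du\,dv$. A SymPy sketch of that computation for the coordinates above; the expected result, $dA=a^2(\sinh^2 u+\sin^2 v)\,du\,dv$, is the standard one, and the code is offered only as a check:

import sympy as sp

a, u, v = sp.symbols('a u v', positive=True)
x = a * sp.cosh(u) * sp.cos(v)
y = a * sp.sinh(u) * sp.sin(v)

# Jacobian of the map (u, v) -> (x, y)
J = sp.Matrix([[sp.diff(x, u), sp.diff(x, v)],
               [sp.diff(y, u), sp.diff(y, v)]])

dA = sp.simplify(J.det())
print(dA)   # equivalent to a**2 * (sinh(u)**2 + sin(v)**2)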
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9658995713428385, "lm_q1q2_score": 0.8151179159196716, "lm_q2_score": 0.8438951005915208, "openwebmath_perplexity": 1378.7021970801309, "openwebmath_score": 0.8129535913467407, "tags": null, "url": "https://www.physicsforums.com/threads/area-element-of-elliptic-cylinder-coordinates.353815/" }
quantum-mechanics, statistical-mechanics, waves, phase-space Why is that? The definition of $c_n$ is $$c_n= \langle \phi_n|\Psi \rangle$$ which is a complex number. How do I get from there to the $c_n$ being wave functions of the outside world? Huang seems to conceal the fact that he splits the Hilbert space in two $\mathcal{H}=\mathcal{H}_S\otimes\mathcal{H}_E$, where $S$ is the system and $E$ is the environment. A general state in the total Hilbert space can be written as $$\rvert\Psi\rangle=\sum_{n,m} \gamma_{nm}\ \rvert\phi_n\rangle\otimes\rvert\psi_m\rangle$$ where $\{\phi\}$ are a basis of $\mathcal{H}_S$ and $\{\psi\}$ are a basis of $\mathcal{H}_E$ and where $\gamma_{nm}$ are complex numbers defined as $$\gamma_{nm}=\langle \phi_n\otimes\psi_m\rvert\Psi\rangle$$ in general, $\Psi$ is an entangled state. If you define $\rvert c_n\rangle= \sum_{m} \gamma_{nm}\ \rvert\psi_m\rangle$ then you can write $$\rvert\Psi\rangle=\sum_{n} \rvert c_n\rangle\otimes\rvert\phi_n\rangle$$ where the "coefficients" $c_n$ now are linear combinations of states describing the environment. If you project onto the coordinate basis to obtain the wave functions you recover the Huang expression $$\Psi=\sum_{n}c_n\ \phi_n$$ where now $c_n=\langle x_E\rvert c_n\rangle$ and $\phi_n=\langle x_S\rvert \phi_n\rangle$
{ "domain": "physics.stackexchange", "id": 52677, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, statistical-mechanics, waves, phase-space", "url": null }
# Select Part of a boundary mesh I would like to pick a part of a boundary mesh, for example bm=BoundaryMesh[Cuboid[]] Is it possible to select the part(side) x==1 of the cuboid and define a new 2D-mesh of this side? Thanks! MeshTools package can help you with SelectElements function and some manual postprocessing. Needs["MeshTools"] bm = ToBoundaryMesh[Cuboid[]] side = SelectElements[bm, #1 == 1 &] This "projects" 3D mesh with "BoundaryElements" to 2D mesh with "MeshElements". Reverse on element incidents is necessary to avoid warning messages about bad their quality (inverted elements). mesh2D = ToElementMesh[ "Coordinates" -> side["Coordinates"][[All, {2, 3}]], "MeshElements" -> MapAt[Reverse, side["BoundaryElements"], {All, 1, All}] ] mesh2D["Wireframe"["MeshElementStyle" -> FaceForm@LightBlue]] • Thanks for this helpful tool, I'll try to select the side mesh. Oct 17 '19 at 19:27 You can extract information about meshes using functions like MeshCells and MeshCoordinates. bm = BoundaryMesh[Cuboid[]] sel = Select[MeshCoordinates[bm], #[[1]] == 1&] hull = ConvexHullMesh[sel[[All, 2;;3]]] You might also be able to get away with using Polygon instead of ConvexHullMesh if your 2D mesh isn't always convex, but you'd have to be able to order the points of sel` first. • Thank you, very interesting ideas. I hoped to find a function which easily creates a submesh. Oct 17 '19 at 17:52
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9626731105140616, "lm_q1q2_score": 0.818244391859031, "lm_q2_score": 0.8499711718571774, "openwebmath_perplexity": 3908.1177789338344, "openwebmath_score": 0.30482596158981323, "tags": null, "url": "https://mathematica.stackexchange.com/questions/208076/select-part-of-a-boundary-mesh" }
matlab, finite-impulse-response, self-study, fading-channel Title: Educational purpose - What is the correct way to simulate a multipath fading channel which has ISI I am following the code for a Rayleigh fading channel given on the webpage: http://www.raymaps.com/index.php/m-qam-bit-error-rate-in-rayleigh-fading/ I am confused and need help in clarifying some basic questions regarding how to simulate a Rayleigh fading channel with $L$ taps; this is not clearly explained. Question 1: I have studied that a flat fading channel is one which has 1 tap. On the webpage, flat fading is simulated by h=(1/sqrt(2))*(randn(N,1)+1i*randn(N,1)); where $N$ denotes the number of bits or the number of data points. So, there are $N$ taps and definitely this is not a 1-tap channel. Please correct me where I am wrong. Question 2: Also, how do I simulate a multipath (frequency-selective) fading channel with, say, $L = 5$ taps? This is how I have done it, but I am not sure how to do it properly so that there is inter-symbol interference as well. Let $L$ be the number of paths through which the signal reaches the receiver, each with a different delay. If the delay spread is greater than the symbol duration then we get inter-symbol interference. In the following code, I am not sure if inter-symbol interference is there or not and if I am correctly applying the theory to simulate the fading channel. clear all % Number of data points N = 100; L= 5; % number of taps %Data symbols of 0/1 x = randn(N,1)>0; %channel coefficients as Rayleigh random variable with variance 0.5 for real and imaginary component h = 0.5*(randn(L,1)+1i*randn(L,1)); %Channel is an FIR filter y = filter(h,1,x); Let's say we want to transmit a sequence of discrete data $\left\lbrace x[n] \right\rbrace$. But because we are living in an analog world, the sequence must be modulated.
{ "domain": "dsp.stackexchange", "id": 5219, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "matlab, finite-impulse-response, self-study, fading-channel", "url": null }
vectors, notation, conventions, differentiation Title: Is there a difference in handwritten nabla $\vec{\nabla}$ with an overset arrow and typeset nabla $\nabla$? According to some physicist at KIT it is usual to write the following when using pen and paper: whereas in typeset texts you write $\nabla$. Is that true? Are there sources for this convention? Yes, there are sometimes different conventions for indicating vectors in hand-writing and printing. Yes, overset arrows in handwriting and boldface in printing is one of those conventions. No, it is not the only convention. Yes, you should familiarize yourself with the most common conventions in your sub-discipline. Yes, you should read the section on notation in the introduction to each book before proceeding. (And if writing a book you should write a section on notation.) Yes, it gets more complicated still if you want to visually distinguish more than just two types of values (say scalars, three-vectors and four-vectors).
{ "domain": "physics.stackexchange", "id": 15633, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vectors, notation, conventions, differentiation", "url": null }
image-processing, filters, estimation, statistics, parameter-estimation Title: Concept About Estimated Standard Deviation I am looking for the concept behind how to estimate the standard deviation. Actually, I'm not sure how to get a handle on the concept of the estimated standard deviation. If you know the concept, would you explain it here? Also, if you have any references or examples, they would help me. P.S. Is there a difference between "estimate standard deviation" and "estimated standard deviation"? I think they are the same. Updated: What is the difference between the "method of least squares" and the "estimated standard deviation"? Are those the same concept? I don't understand the estimated standard deviation. Given data $ { \left\{ {x}_{i} \right\} }_{i = 1}^{N} $ the empirical STD of the data is well defined: $$ STD = \sqrt{ \frac{1}{N - 1} \sum_{i = 1}^{N} { \left( {x}_{i} - \bar{x} \right) }^{2} } $$ Where $ \bar{x} $ is the empirical mean of the data given by: $$ \bar{x} = \frac{1}{N} \sum_{i = 1}^{N} {x}_{i} $$ Now, if there's a model for the data (such as signal plus noise with a certain CDF) the empirical calculation should be altered accordingly. For instance, given a signal which is linear with AWGN, the STD of the noise can be estimated by removing the estimated linear signal first. Update There are 2 classic estimators of the Standard Deviation (also of the Variance): The unbiased estimator: $ \sqrt{ \frac{1}{N - 1} \sum_{i = 1}^{N} { \left( {x}_{i} - \bar{x} \right) }^{2} } $. The biased (Maximum Likelihood) estimator: $ \sqrt{ \frac{1}{N} \sum_{i = 1}^{N} { \left( {x}_{i} - \bar{x} \right) }^{2} } $.
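As a concrete illustration, here is a small NumPy sketch computing both estimators on made-up data; numpy.std reproduces them through its ddof argument:

import numpy as np

x = np.array([2.1, 1.9, 2.4, 2.0, 1.8, 2.2])   # made-up sample
mean = x.mean()

unbiased = np.sqrt(((x - mean) ** 2).sum() / (len(x) - 1))   # 1/(N-1) form
biased   = np.sqrt(((x - mean) ** 2).sum() / len(x))         # 1/N (maximum-likelihood) form

print(unbiased, np.std(x, ddof=1))   # ddof=1 matches the 1/(N-1) estimator
print(biased,   np.std(x, ddof=0))   # ddof=0 (the default) matches the 1/N estimator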
{ "domain": "dsp.stackexchange", "id": 2752, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, filters, estimation, statistics, parameter-estimation", "url": null }
homework-and-exercises, waves, harmonic-oscillator, resonance, vibrations Title: Resonance and standing waves on a bar I'm having trouble solving this problem: By applying a harmonic force, acting on the end of a free bar of length $L$, a standing wave is formed due to multiple reflections: Where are the nodes of the tensile stresses in it? What will be the amplitude of the driving force $F_o$, if the amplitude of the tensile stresses in the standing wave is $\sigma_o$ and the cross section of the bar is $S$ ? Plot the resonance curve (the graph of the dependence of $\dfrac{\sigma_o S}{F_o}$ with respect to the frequency $\omega$ of the driving force). For what frequencies are harmonic oscillations possible in the absence of the driving force? I know that the driving force should be $F=F_o\sin\omega t$ and that somehow, Hooke's law is applied. After that, I don't know any way to approach the problem. Well, I think I can help a little; see this image: It may help you to construct an equation: $$ \frac{F}{A} = Y\frac{\partial \varepsilon }{\partial x} $$ I think this is enough to answer the question, where $\varepsilon$ is the wave function.
{ "domain": "physics.stackexchange", "id": 70959, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, waves, harmonic-oscillator, resonance, vibrations", "url": null }
algorithms, np-complete, reductions Title: Proving NP hardness in Puzzle with SAT reduction Introduction I have created an algorithm that generates n^2 x n^2 Sudoku Grids. Out of those grids I remove elements to give only one solution. The algorithm follows an infinite language based on circular matrix shift of elements. It does so in such a way that allows valid grids to be formed. Explained Here-1 and Here-2 and Grid generation and Grid generation written in Python SAT Instance Used I have taken this SAT instance from this linked pdf on page 3 and reduced to a 3-SAT instance. (x1 ∨ x2 ∨ ¬x4) ∧ (x2 ∨ ¬x3) ∧ x5 SAT reduced into 3-SAT (x1 ∨ x2 ∨ ¬x4) ∧ (x2 ∨ ¬x3 ∨ x1) Here are the definitions for the variables. shift(L) = the circular shifts x1 = shift(L) to generate valid n^2 x n^2 grid and puzzle x2 = shift(L) to generate another valid n^2 x n^2 grid and puzzle ¬x4 = valid grids and puzzles not following shift(L) ¬x3 = another valid grid and its puzzle not following shift(L)
{ "domain": "cs.stackexchange", "id": 13903, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, np-complete, reductions", "url": null }
np-complete, integer-programming Is that correct? If so, why can't I find any formulation with only $x_{ij}$? An integer program, or more properly an integer linear program, consists of a linear program together with integrality constraints stating that some of the variables are integers. As such, its objective function is always a linear combination of variables. When the objective is minimization, it is admissible to have $\max$ operators (appearing positively) in the objective function. This means that there is an equivalent proper integer program. This program is obtained by introducing auxiliary variables, just as the variables $y_i$ are introduced in your example to implement $\max_j x_{ij}$. To answer your question, it all depends on what you consider as an integer program. The standard definition only allows linear objective functions, and in this case the $y_i$ are necessary. If you also allow $\max$ operators in the objective function, then the $y_i$ are not necessary.
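As a sketch of what the auxiliary variables buy you (the index sets and the remaining constraints are placeholders here): an objective of the form $\min_x \sum_i \max_j x_{ij}$ is rewritten as a proper (integer) linear program by introducing one $y_i$ per $\max$ term, $$\min \sum_i y_i \quad\text{subject to}\quad y_i \ge x_{ij}\ \ \forall i,j,\qquad x\in X,$$ where $X$ stands for the original constraints on the $x_{ij}$. Because the objective is minimized and each $y_i$ appears positively, any optimal solution pushes $y_i$ down until $y_i=\max_j x_{ij}$, so the two formulations have the same optimal value.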
{ "domain": "cs.stackexchange", "id": 6294, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "np-complete, integer-programming", "url": null }
computational-chemistry, software B 1 5 F --Link1-- %chk=df-bp86-d3.def2svp.rescan.chk #P BP86/def2SVP/W06 ! Density Functional Theory Calculation DenFit ! Use density fitting empiricaldispersion=GD3 ! Use Grimme Dispersion guess(read) ! Use the MO from the previous step opt(MaxCycle=100,Loose,ModRed) ! Use more optimisation cycles (Loose only for speed) scf(xqc,MaxConventionalCycle=500) ! If necessary, resort to quadratic convergence int(ultrafinegrid) ! Larger Grid scrf(pcm,solvent=water) ! Use solvent gfinput gfoldprint iop(6/7=3) ! For molden symmetry(loose) ! Loosen symmetry requirements Volume ! Report vdW volume Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP Step 6 0 1 N 0.00000 0.00000 -1.21413 H 0.95402 0.00000 -1.53258 H -0.47701 -0.82621 -1.53258 H -0.47701 0.82621 -1.53258 B 0.00000 0.00000 0.78587 H 0.58528 1.01374 0.96868 H -1.17057 0.00000 0.96868 H 0.58528 -1.01374 0.96868 B 1 5 F For comparison: Energies
{ "domain": "chemistry.stackexchange", "id": 9424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computational-chemistry, software", "url": null }
operators, differential-geometry, metric-tensor, coordinate-systems, linear-algebra $$\mathbf b^* (\mathbf a) = \mathbf a \cdot \mathbf b \tag{definition of dual vector space $V^*$}$$ $$\begin{align}\mathbf a \cdot \mathbf b &= \begin{pmatrix}a_1&a_2\end{pmatrix} \begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix} \begin{pmatrix}b_1\\b_2\end{pmatrix} \\ &= \left[ \begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix} \begin{pmatrix}b_1\\b_2\end{pmatrix} \right]^{\mathrm T} \begin{pmatrix}a_1\\a_2\end{pmatrix} \\ &\Rightarrow \quad \mathbf b^* = \left[ \begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix} \begin{pmatrix}b_1\\b_2\end{pmatrix} \right]^{\mathrm T} \end{align}.$$ Having to take the transpose makes this a little confusing, but it should be clear that $\mathbf A$ defines a dual vector $\mathbf b^*$ for each vector $\mathbf b$.
{ "domain": "physics.stackexchange", "id": 82096, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "operators, differential-geometry, metric-tensor, coordinate-systems, linear-algebra", "url": null }
php Title: Is my routing good or bad? in PHP I just want to check whether this kind of routing is good or bad. I implemented the routing of my web app like this. All pages point to index.php; in index.php I have this code $url = array_values(array_filter( explode( '/', strtolower($_SERVER["REQUEST_URI"]) ) ) ); switch( $url[0] ) { case "admin" : { if( !$this->us->isAdmin ) { $this->GoToLoginPage(); } array_shift( $uri ); $department = new Admin( $uri ); } break; case "article" : { if( count( $uri ) == 2 ) { displayArticle( $uri[1] ); } else { displayError(); } } break; default : { displayIndex(); } break; } I just want to know the pros and cons of doing something like this. Separate your logic and site specific functionality like this: function Router(){ $url = array_values(array_filter(explode('/', strtolower($_SERVER["REQUEST_URI"])))); switch($url[0]){ case "admin": $this->Admin($url); break; case "article": $this->Article($url); break; default: displayIndex(); } } function Admin($uri){ if( !$this->us->isAdmin ) { $this->GoToLoginPage(); } array_shift( $uri ); $department = new Admin( $uri ); } function Article($uri){ if( count( $uri ) == 2 ) { displayArticle( $uri[1] ); } else { displayError(); } } It's much cleaner, and easier to update.
{ "domain": "codereview.stackexchange", "id": 6447, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php", "url": null }
ros-fuerte, rqt, rosbuild I hope I did not forget anything. Maybe somebody could link to this question in the tutorials. Originally posted by Daniel Robin Reuter with karma: 71 on 2013-06-23 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by 130s on 2013-06-24: @Daniel Robin Reuter I'm glad that you figured it out! I added a link here to this QA thread. Also, please consider "upvote" other answers if they helped you ;) Comment by goshawk on 2014-07-15: What if we are using python plugins? Comment by Daniel Robin Reuter on 2014-07-15: @goshawk if you are using python you should not have any problems. If you experience any problems don't hesitate on asking me :) Comment by goshawk on 2014-08-01: @Daniel Robin Reuter I made my rqt plugin using python & QT designer to build my .ui. The only problem is that I can't see the first instance of the plugin even though it shows it as running under the running tab. But when I launch another one I can see the plugin and I can close the first instance. Comment by Daniel Robin Reuter on 2014-08-04: @goshawk Could you please open a new question and provide the console output?
{ "domain": "robotics.stackexchange", "id": 14649, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-fuerte, rqt, rosbuild", "url": null }
algorithms, probability-theory Title: Reservoir sampling algorithm probability I'm reading about the reservoir sampling technique called Algorithm R. The idea is we can take a sample of size $n$ from a population of size $N$ even when $N$ is unknown/too expensive to retrieve in $N$ time. I quote a sample implementation from wikipedia: (* S has items to sample, R will contain the result *) ReservoirSample(S[1..n], R[1..k]) // fill the reservoir array for i = 1 to k R[i] := S[i] // replace elements with gradually decreasing probability for i = k+1 to n j := random(1, i) // important: inclusive range if j <= k R[j] := S[i] The explanation for why this works: The algorithm creates a "reservoir" array of size $k$ and populates it with the first $k$ items of $S$. That is pretty clear, the first for loop in the code sample does that. Then: It then iterates through the remaining elements of $S$ until $S$ is exhausted. At the $i$-th element of $S$, the algorithm generates a random number $j$ between $1$ and $i$. If $j$ is less than or equal to $k$, the $j$-th element of the reservoir array is replaced with the $i$-th element of $S$. Also clear. But here comes the Math: In effect, for all $i$, the $i$-th element of $S$ is chosen to be included in the reservoir with probability $\frac{k}{i}$.
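A direct Python transcription of the pseudocode may make it easier to experiment with the probabilities; the function below assumes the stream is any iterable and that k items are wanted (it mirrors Algorithm R, it is not an attempt to prove the k/i claim):

import random

def reservoir_sample(stream, k):
    # Algorithm R: uniform sample of size k from a stream of unknown length
    reservoir = []
    for i, item in enumerate(stream, start=1):   # i is 1-based, as in the pseudocode
        if i <= k:
            reservoir.append(item)               # fill the reservoir with the first k items
        else:
            j = random.randint(1, i)             # inclusive range, as the pseudocode stresses
            if j <= k:
                reservoir[j - 1] = item          # item i lands in the reservoir with probability k/i
    return reservoir

print(reservoir_sample(range(1, 101), 5))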
{ "domain": "cs.stackexchange", "id": 10708, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, probability-theory", "url": null }
So, to put this into terms of a linear program, I proceeded as follows: • For each $$i\in\{1,2,3,4,5\},$$ let $$r_i$$ be the rate per task at which employee $$i$$ is paid, let $$x_i$$ be the number of tasks allocated to employee $$i$$ next week, and let $$m_i$$ be the contractual maximum number of tasks per week that may be allocated to employee $$i$$. • Since the employees can be allocated at most $$77-9=68$$ tasks in total, then $$x_1+x_2+x_3+x_4+x_5\le 68,$$ and since they must be allocated at least $$53$$ of the tasks in total, then $$-x_1-x_2-x_3-x_4-x_5\le -53.$$ • Constraints due to contractual limits take the form $$x_i\le m_i$$ for each $$i\in\{1,2,3,4,5\}.$$ • Since a negative number of tasks cannot be assigned, then $$x_i\ge 0$$ for each $$i\in\{1,2,3,4,5\}.$$ • The goal is to minimize $$r_1x_1+r_2x_2+r_3x_3+r_4x_4+r_5x_5.$$
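For completeness, this formulation can be fed straight to an LP solver. The per-task rates $r_i$ and contractual maxima $m_i$ are not given in the excerpt, so the numbers below are invented placeholders; only the structure (objective, the two aggregate constraints, and the bounds) follows the formulation above:

import numpy as np
from scipy.optimize import linprog

r = np.array([30.0, 25.0, 28.0, 22.0, 35.0])   # hypothetical cost per task, employees 1..5
m = np.array([20, 15, 25, 10, 30])             # hypothetical contractual maxima

# minimize r.x  subject to  sum(x) <= 68  and  -sum(x) <= -53,  with 0 <= x_i <= m_i
A_ub = np.vstack([np.ones(5), -np.ones(5)])
b_ub = np.array([68.0, -53.0])
bounds = [(0, float(mi)) for mi in m]

res = linprog(c=r, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)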
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.987758725428898, "lm_q1q2_score": 0.8075663203529827, "lm_q2_score": 0.8175744739711883, "openwebmath_perplexity": 282.8837148598187, "openwebmath_score": 0.8351263403892517, "tags": null, "url": "https://math.stackexchange.com/questions/3183203/is-this-dual-linear-program-correct-if-so-how-would-i-interpret-its-variables" }
polar function $r=a(\theta )$. On a Calculator page, I use the Define function (menu –> 1:Actions –> 1:Define) to make the polar differentiator. All you need to do is enter the expression for a as shown in line 2 below. This can be evaluated exactly or approximately at $\theta=\frac{\pi }{6}$ to show $\displaystyle \frac{dy}{dx} = 5\sqrt{3}\approx 8.660$. Conclusion: As with all technologies, getting the answers you want often boils down to learning what questions to ask and how to phrase them. ## Controlling graphs and a free online calculator When graphing functions with multiple local features, I often find myself wanting to explain a portion of the graph's behavior independent of the rest of the graph. When I started teaching a couple decades ago, the processor on my TI-81 was slow enough that I could actually watch the pixels light up sequentially. I could see HOW the graph was formed. Today, processors obviously are much faster. I love the problem-solving power that this has given my students and me, but I've sometimes missed being able to see function graphs as they develop. Below, I describe the origins of the graph control idea, how the control works, and then provide examples of polynomials with multiple roots, rational functions with multiple intercepts and/or vertical asymptotes, polar functions, parametric collision modeling, and graphing derivatives of given curves. BACKGROUND: A colleague and I were planning a rational function unit after school last week, wanting to be able to create graphs in pieces so that we could discuss the effect of each local feature. In the past, we "rigged" calculator images by graphing the functions parametrically and controlling the input values of t. Clunky and static, but it gave us useful still shots. Nice enough, but we really wanted something dynamic. Because we had the use of sliders on our TI-nSpire software, on Geogebra, and on the Desmos calculator, the solution we sought was closer than we suspected. REALIZATION & WHY IT WORKS: Last week, we discovered that we could use $g(x)=\sqrt{\frac{\left | x \right |}{x}}$ to create what
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9845754510863375, "lm_q1q2_score": 0.8482493711786774, "lm_q2_score": 0.8615382094310357, "openwebmath_perplexity": 662.4722118474091, "openwebmath_score": 0.7494776844978333, "tags": null, "url": "https://casmusings.wordpress.com/tag/derivative/" }