| text | source dict |
|---|---|
Assuming that the word independent in the opening statement is used in the way that probabilists use the word, and not in the sense of independent versus dependent variable as is common in regression analysis, the joint distribution of the five random variables $Y_{11}, Y_{12}, Y_{13}, Y_{21},Y_{22}$ is the product of the joint distributions of $Y_{11}, Y_{12}, Y_{13}$ and of $Y_{21},Y_{22}$, both of which are multivariate normal. This $5$-variate joint distribution is also a multivariate normal distribution in which the mean vector is just the concatenation $(\mu_1, \mu_2)^T$ of the two mean vectors and the covariance matrix is $$\Sigma = \left[\begin{matrix}\Sigma_{11} & 0\\0 & \Sigma_{22}\end{matrix}\right].$$ Thus, the joint distribution of $Y_{11}-Y_{13}+Y_{22}$ and $Y_{21}-Y_{12}$ is a bivariate normal distribution which can be found by the standard method of setting up a linear transformation mapping $(Y_{11}, Y_{12}, Y_{13}, Y_{21},Y_{22})$ to $(Y_{11}-Y_{13}+Y_{22},\, Y_{21}-Y_{12})$ and doing matrix calculations. More simply, the means and variances of $Y_{11}-Y_{13}+Y_{22}$ and $Y_{21}-Y_{12}$, as well as their covariance, can be computed directly and used to write down the mean vector and covariance matrix of this bivariate normal distribution.
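The "more direct" computation in the last sentence is one matrix product. A sketch in NumPy, with hypothetical numbers for the two independent blocks (none of these values come from the question):

```python
import numpy as np

# Hypothetical means/covariances for the two independent multivariate normals:
# (Y11, Y12, Y13) ~ N(mu1, Sigma11) and (Y21, Y22) ~ N(mu2, Sigma22).
mu1 = np.array([1.0, 2.0, 3.0])
Sigma11 = np.array([[2.0, 0.5, 0.0],
                    [0.5, 1.0, 0.3],
                    [0.0, 0.3, 1.5]])
mu2 = np.array([0.0, 1.0])
Sigma22 = np.array([[1.0, 0.2],
                    [0.2, 2.0]])

# Joint 5-variate normal: concatenated mean, block-diagonal covariance.
mu = np.concatenate([mu1, mu2])
Sigma = np.block([[Sigma11, np.zeros((3, 2))],
                  [np.zeros((2, 3)), Sigma22]])

# Rows of A encode the two linear combinations.
A = np.array([[1.0, 0.0, -1.0, 0.0, 1.0],   # Y11 - Y13 + Y22
              [0.0, -1.0, 0.0, 1.0, 0.0]])  # Y21 - Y12

mean_out = A @ mu          # mean vector of the bivariate normal
cov_out = A @ Sigma @ A.T  # its 2x2 covariance matrix
```

The same mean vector and covariance matrix follow from computing the individual variances and the covariance term by term, as the answer suggests.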
Given that $Y_{1}$ and $Y_{2}$ are independent, we have that | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9904405986777259,
"lm_q1q2_score": 0.8003425724740437,
"lm_q2_score": 0.8080672112416737,
"openwebmath_perplexity": 419.4714558531591,
"openwebmath_score": 0.994788408279419,
"tags": null,
"url": "https://stats.stackexchange.com/questions/73225/joint-distribution-of-two-multivariate-normal-distributions"
} |
quantum-field-theory, gauge-theory, correlation-functions
Title: To what extent do correlation functions determine the theory (and Lagrangian)? In other words, is a finite set of correlation functions sufficient to determine a theory? Is there a chance that correlation functions are more fundamental than the Lagrangian? The observables of the theory are, first of all, those in the algebra $\cal A$ (technically a $^*$-algebra with unit) of objects generated by the smeared fields $\phi(f)$. I mean linear combinations of $I$ and products of smeared fields $\phi(f)$, where $f$ is a complex-valued compactly supported smooth function. This algebra can be enlarged by including renormalised objects like $\phi^n(f)$ or $T_{\mu\nu}(f)$ and so on. When you are equipped with the correlation functions you are able to compute all expectation values of the elements of $\cal A$. E.g.,
$$\langle \phi(f)\phi(g) \rangle = \int_{M\times M} G(x_1,x_2) f(x_1)g(x_2) d^nx_1 d^nx_2\:.$$
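A toy, fully hypothetical illustration of this smearing formula in one dimension: a Gaussian kernel stands in for the two-point function $G$, two Gaussian test functions stand in for $f$ and $g$, and the double integral is approximated by a Riemann sum.

```python
import math

# Sample points on [-5, 5]; the integrands are negligible outside this window.
xs = [-5 + 0.025 * i for i in range(401)]
dx = 0.025

def f(x): return math.exp(-x ** 2)            # test function f
def g(x): return math.exp(-(x - 1) ** 2)      # test function g
def G(x1, x2): return math.exp(-(x1 - x2) ** 2)  # toy two-point kernel

# <phi(f) phi(g)>  ~  sum_ij G(x_i, x_j) f(x_i) g(x_j) dx^2
expectation = sum(G(x1, x2) * f(x1) * g(x2) for x1 in xs for x2 in xs) * dx ** 2
```

With this symmetric, positive-definite kernel the resulting functional is symmetric in $f$ and $g$ and positive on "squares," the two properties the GNS construction below needs.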
So the correlation functions determine a state $\langle \cdot \rangle$ on the algebra $\cal A$. There is a corresponding theorem, the so-called GNS theorem, that assures it provided the linear map $\langle \cdot \rangle : {\cal A} \to \Bbb C$ is positive, i.e. $\langle A^*A \rangle \geq 0$ for $A\in \cal A$. That theorem also guarantees that there is a Hilbert space (uniquely defined up to unitary equivalences) where everything can be represented in the standard way (the elements of $\cal A$ are operators, and $\langle \cdot \rangle$ corresponds to an expectation value of the form $\langle \Psi| \cdot |\Psi\rangle$). | {
"domain": "physics.stackexchange",
"id": 10913,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, gauge-theory, correlation-functions",
"url": null
} |
observational-astronomy, amateur-observing, red-dwarf
Title: At what distance does Proxima Centauri become visible to the naked eye? The red dwarf star Proxima Centauri is the closest known star to the Sun (about 4.2 ly or 1.3 pc away); however, it's not visible to the unaided eye because it is too small and dim. I'd like to know how close one would have to get to see Proxima well, like most other stars. In Celestia the star becomes visible to me at about 0.75 ly away. Is Celestia right? "Back of an envelope" calculation:
Proxima Centauri has an apparent magnitude of about $11$. The faintest objects visible to the unaided eye have magnitudes of about $6.5$. So we need to decrease Proxima Centauri's magnitude by about $5$ in round numbers, which corresponds to an increase in brightness of about $100$ times. To achieve this we would have to be about $10$ times closer to Proxima Centauri than we are now, which gives a distance of about $0.4$ ly.
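The envelope can be redone without the round-to-5-magnitudes step (the text rounds the 4.5-magnitude gap up to 5, hence its 0.4 ly rather than roughly 0.5 ly):

```python
import math

m_now = 11.0     # apparent magnitude of Proxima Centauri (approx.)
m_limit = 6.5    # naked-eye limiting magnitude
d_now_ly = 4.24  # current distance in light-years

# A magnitude difference dm corresponds to a flux ratio 10**(dm/2.5),
# and flux falls off with the inverse square of distance.
flux_ratio = 10 ** ((m_now - m_limit) / 2.5)     # ~63x brighter needed
d_visible_ly = d_now_ly / math.sqrt(flux_ratio)  # ~0.53 ly

# Apparent magnitude at 0.75 ly, the Celestia figure:
m_at_075 = m_now + 5 * math.log10(0.75 / d_now_ly)
```

This reproduces the answer's closing point: at 0.75 ly the magnitude is still around 7.2 to 7.5, fainter than the naked-eye limit.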
At a distance of $0.75$ ly the apparent magnitude of Proxima Centauri would be about $7.5$, so I think Celestia is being a little optimistic. | {
"domain": "astronomy.stackexchange",
"id": 4227,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "observational-astronomy, amateur-observing, red-dwarf",
"url": null
} |
In the second case $$2y-1<\lfloor 2y\rfloor=\lfloor y\lfloor y\rfloor \rfloor \le 2y,$$ so that we need $y\ge\frac 52$. And indeed, for $\frac52\le y<3$, we verify directly that $\lfloor y\lfloor y \rfloor\rfloor=5$. If we impose that $y=\frac x{100}$ with $x\in\Bbb Z$, this is equivalent to $$250\le x<300,$$ which allows exactly $50$ different values of $x$.
First note that as $[n] \le n < [n]+1$
then $[n]^2 \le [n[n]] < ([n]+1)^2$
So $[x/100]^2 \le [n[n]]= 5 < ([x/100] + 1)^2$. As $[x/100]$ is an integer, this means $[x/100] = 2$
So
$[x/100[x/100]]= [x/100*2] = [x/50] = 5$
$5 \le x/50 < 6$
$250 \le x < 300$ so $x = [250.... 299]$.
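The count of 50 can be confirmed by brute force, using `Fraction` so that $y = x/100$ is exact (`math.floor` accepts Fractions):

```python
import math
from fractions import Fraction

# Enumerate integers x with floor((x/100) * floor(x/100)) == 5.
solutions = [x for x in range(-1000, 1001)
             if math.floor(Fraction(x, 100) * math.floor(Fraction(x, 100))) == 5]
```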
• how to get second argument? – mathlover Oct 11 '16 at 2:23
• What do you mean? – fleablood Oct 11 '16 at 2:48
• explain deeply: $[n]^2 \le [n[n]] < ([n]+1)^2$ – mathlover Oct 11 '16 at 2:58
• [n] < = n so [n][n] <= n [n] so [[n][n]]=[n][n] <= [n [n]]. And n < [n]+1 so [n [n]] < = n^2 < ([n]+1)^2. – fleablood Oct 11 '16 at 3:12 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9808759604539052,
"lm_q1q2_score": 0.806465544528272,
"lm_q2_score": 0.822189121808099,
"openwebmath_perplexity": 312.32718138679013,
"openwebmath_score": 0.9778703451156616,
"tags": null,
"url": "https://math.stackexchange.com/questions/1962599/hints-to-find-solution-of-left-lfloor-dfracx100-left-lfloor-dfracx"
} |
cosmology, supernova, expansion, redshift, hubble-constant
Title: Why does the Supernova 2006cm give a very different value for the Hubble constant? Why doesn't it increase error bars for the Hubble constant? The Supernova 2006cm has a redshift of 0.0153 which translates into a recession speed of 4600 km/s.
It has a distance modulus of 34.71 which translates into a luminosity distance of 87 Mpc.
This gives a value of 53 km/s/Mpc for the Hubble constant.
Why doesn't this anomalous observation create sufficient doubt in the value of the Hubble constant? At a distance of $d = 87\,\mathrm{Mpc}$, with a Hubble constant of roughly $H_0 = 70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$, cosmological expansion should make the host galaxy UGC 11723 recede at $v=H_0 \,d\simeq6100 \,\mathrm{km}\,\mathrm{s}^{-1}$.
However, galaxies also move through space, at typical velocities from several
$100\,\mathrm{km}\,\mathrm{s}^{-1}$ in galaxy groups (e.g Carlberg et al. 2000) to some $1000\,\mathrm{km}\,\mathrm{s}^{-1}$ in rich clusters (e.g Girardi et al. 1993; Karachentsev et al. 2006).
The observed velocity of $4900\,\mathrm{km}\,\mathrm{s}^{-1}$ (Falco et al. 2006) is hence $\sim1200\,\mathrm{km}\,\mathrm{s}^{-1}$ smaller than the Hubble flow, but quite consistent with what may be expected.
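The arithmetic above, spelled out with the answer's rough numbers (note the question body quoted 4600 km/s from the redshift, while Falco et al. give 4900 km/s):

```python
H0 = 70.0       # km/s/Mpc, assumed Hubble constant
d = 87.0        # Mpc, from the distance modulus 34.71
v_obs = 4900.0  # km/s, observed recession velocity (Falco et al. 2006)

v_hubble = H0 * d              # ~6100 km/s expected from expansion alone
v_peculiar = v_hubble - v_obs  # ~1200 km/s shortfall: a typical peculiar velocity

# The naive single-object "Hubble constant" is biased by that peculiar motion:
H0_naive = v_obs / d           # ~56 km/s/Mpc, well off the assumed 70
```

This is the quantitative content of the answer: at 87 Mpc a peculiar velocity of order 1000 km/s shifts a single-object $H_0$ estimate by roughly 20 percent, so one such supernova carries little weight.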
This is why supernovae at such small distances cannot be used to deduce the Hubble constant, unless a very large number is observed such that statistical errors cancel out. | {
"domain": "astronomy.stackexchange",
"id": 4105,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, supernova, expansion, redshift, hubble-constant",
"url": null
} |
addSpiralLayer[] := Module[{
matrix = {{5, 4, 3}, {6, 1, 2}, {7, 8, 9}}, layer = 2, start = 9,
perimeter, shortSide, longSide
},
Function[
perimeter = 8 layer;
shortSide = (perimeter - 4)/4;
longSide = shortSide + 2;
layer++;
matrix = Join[
{Reverse[start + shortSide + Range[longSide]]},
Transpose@Join[
{start + shortSide + longSide + Range[shortSide]},
Transpose@matrix,
{Reverse[start + Range[shortSide]]}
],
{start + 2 shortSide + longSide + Range[longSide]}
];
start = start + 2 shortSide + 2 longSide;
matrix
]
]
Here's a test with as many layers as in Kuba's example (the visualization is with one layer more).
spiral = addSpiralLayer[];
Do[spiral[];, {101}] // Timing
spiral[] // PrimeQ // Boole // Image // ColorNegate
(As an aside for future visitors: //Image//ColorNegate is faster than ArrayPlot, if you want to optimize every aspect of the problem.)
One point of confusion for some readers regarding the code above could be that it is implemented as a closure, which, as they say in the Mathematica Cookbook, is "a bit outside garden-variety Mathematica". I'm referencing that book because, for those interested, its author has implemented a closure in an answer of his on this site. That example is simpler, so it could be a good place to start learning how closures work.
One improvement to my code that I know of but did not implement: Range can list numbers in reverse order when given a negative step size, which could be used instead of Reverse.
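For comparison, a hypothetical re-implementation of the layer-by-layer numbering in Python (not a translation of the closure above; it simply walks outward from the center with step lengths 1, 1, 2, 2, 3, 3, ..., which produces the same matrices):

```python
def ulam_spiral(side):
    """Build a side-by-side matrix numbered outward in a spiral, matching the
    seed matrix {{5,4,3},{6,1,2},{7,8,9}} above. side must be odd."""
    x = y = 0
    values = {(0, 0): 1}
    n = 2
    step = 1
    # right, up, left, down; each step length is used twice before growing
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    d = 0
    while n <= side * side:
        for _ in range(2):
            dx, dy = directions[d % 4]
            for _ in range(step):
                x, y = x + dx, y + dy
                values[(x, y)] = n
                n += 1
            d += 1
        step += 1
    half = side // 2
    # Row 0 of the output is the top of the matrix (largest y).
    return [[values[(cx, cy)] for cx in range(-half, half + 1)]
            for cy in range(half, -half - 1, -1)]
```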
-
Yet another approach, using Graph:
Basically I'm taking advantage of PathGraph:
PathGraph@Range@25 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9441768635777511,
"lm_q1q2_score": 0.813444446111319,
"lm_q2_score": 0.8615382112085969,
"openwebmath_perplexity": 5300.123425998542,
"openwebmath_score": 0.3374619483947754,
"tags": null,
"url": "http://mathematica.stackexchange.com/questions/50416/generating-an-ulam-spiral/50449"
} |
Purely imaginary (complex) number : A complex number $z = x + iy$ is called a purely imaginary number iff $x=0$ i.e. $R(z) = 0$.
Imaginary number : A complex number $z = x + iy$ is said to be an imaginary number if and only if $y \ne 0$ i.e., $I(z) \ne 0$.
This is a slightly different usage of the word "imaginary", meaning "non-real": among the complex numbers, those that aren't real we call imaginary, and a further subset of those (with real part $0$) are purely imaginary. Except that by this definition, $0$ is clearly purely imaginary but not imaginary!
Anyway, anybody can write a textbook, so I think that the real test is this: does $0$ have the properties we want a (purely) imaginary number to have?
I can't (and MSE can't) think of any useful properties of purely imaginary complex numbers $z$ apart from the characterization that $|e^{z}| = 1$. But $0$ clearly has this property, so we should consider it purely imaginary.
(On the other hand, $0$ has all of the properties a real number should have, being real; so it makes some amount of sense to also say that it's purely imaginary but not imaginary at the same time.)
I don't think there is a
complete and formal definition of "imaginary number"
It's a useful term sometimes. It's an author's responsibility to make clear what he or she means in any particular context where precision matters. If $0$ should count, or not, then the text must say so.
Your question shows clearly that you understand the structure of the complex numbers, so you should be able to make sense of any passage you encounter.
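The textbook definitions quoted at the top can be encoded directly. A hypothetical helper (`classify` is not from any of the answers) makes the edge case at $0$ explicit:

```python
def classify(z: complex) -> set:
    """Labels per the quoted definitions:
    purely real      iff Im(z) == 0
    purely imaginary iff Re(z) == 0
    imaginary        iff Im(z) != 0
    """
    labels = set()
    if z.imag == 0:
        labels.add("purely real")
    if z.real == 0:
        labels.add("purely imaginary")
    if z.imag != 0:
        labels.add("imaginary")
    return labels
```

Running it on $0$ reproduces the oddity noted above: $0$ comes out both purely real and purely imaginary, yet not imaginary.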
A complex number z=a+ib where a and b are real numbers is called : 1- purely real , if b=0 ; e.g.- 56,78 ; 2- purely imaginary, if a=0 ,e.g.- 2i, (5/2)i ; 3- imaginary,if b≠ 0 ,e.g.- 2+3i,1-i,5i ; 0 is purely imaginary and purely real but not imaginary. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9473810525948927,
"lm_q1q2_score": 0.8490915699252843,
"lm_q2_score": 0.8962513738264114,
"openwebmath_perplexity": 503.67068777645045,
"openwebmath_score": 0.8885458111763,
"tags": null,
"url": "https://math.stackexchange.com/questions/2221033/is-0-an-imaginary-number"
} |
Enter the number of radians into the calculator field and let the radians-to-degrees conversion process take its course in a flash! It also converts degrees to radians (e.g. 4155 to six significant digits). Angles that differ by a full turn have the same sine value. Converting radians to degrees and degrees to radians: for instance, if you write an angle with no unit, you imply that it is in radians. Nothing fancy here, just making sure everyone can do it! When to use pi and when not to use pi. In radians, a full circle is 2π. Find a coterminal angle between 0° and 360°. NumPy's deg2rad(x[, out]) (ufunc 'deg2rad') is a mathematical function that helps the user convert angles from degrees to radians. What is the sum of the trigonometric ratios sin 33° and sin 57°? For these exercises, see the tutorial entitled "Using Trigonometry to Find Missing Sides of Right Triangles." Unit Circle Practice: practice going between degrees and radians. Unit circle definition. Degrees and radians. For circular motion at a constant speed v, the centripetal acceleration of the motion can be derived. Given tan θ = 8/9, which statement is true for all possible values of θ?
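A minimal sketch of the conversion in plain Python (not tied to any calculator mentioned above):

```python
import math

def deg_to_rad(deg):
    """Degrees to radians: multiply by pi/180."""
    return deg * math.pi / 180.0

def rad_to_deg(rad):
    """Radians to degrees: multiply by 180/pi."""
    return rad * 180.0 / math.pi
```

The standard library's `math.radians` and `math.degrees` do the same job.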
We can convert between degree measurement and radian measurement easily. A Full Circle is 360 ° Half a circle is 180° (called a Straight Angle). | {
"domain": "elettronicauno.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.975576912786245,
"lm_q1q2_score": 0.8021087294304953,
"lm_q2_score": 0.8221891261650247,
"openwebmath_perplexity": 1730.328449217289,
"openwebmath_score": 0.7418038249015808,
"tags": null,
"url": "http://elettronicauno.it/gylh/radians-quiz.html"
} |
homework-and-exercises, electromagnetism, special-relativity
Title: Proof of equivalence of two different covariant notations of Maxwell's equations My homework assignment is
Prove that
$$\partial_\beta F_{\gamma \delta} + \partial_\gamma F_{\delta \beta} + \partial_\delta F_{\beta \gamma} = 0$$
with the electromagnetic tensor
$$F_{\alpha \beta} = \begin{pmatrix} 0 & -E_1 & - E_2 & -E_3 \\ E_1 & 0 & B_3 & -B_2 \\ E_2 & -B_3 & 0 & B_1 \\ E_3 & B_2 & -B_1 & 0 \end{pmatrix}$$
leads to the following Maxwell's equations
$$\operatorname{div} \mathbf{B} = 0$$
$$\operatorname{rot} \mathbf{E} = - \frac{1}{c} \frac{\partial \mathbf{B}}{\partial t}$$ | {
"domain": "physics.stackexchange",
"id": 17522,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, electromagnetism, special-relativity",
"url": null
} |
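The index bookkeeping behind the exercise above can be spot-checked mechanically. The sketch below is a hypothetical helper (not part of the assignment): indices 0..3 stand for $t, x, y, z$, units have $c = 1$, and each entry of $F$ is stored as a sign and a name so the cyclic sum can be expanded symbolically.

```python
# F[a][b] holds (sign, name) matching the matrix of the electromagnetic tensor.
Z = (0, "0")
F = [
    [Z, (-1, "E1"), (-1, "E2"), (-1, "E3")],
    [(+1, "E1"), Z, (+1, "B3"), (-1, "B2")],
    [(+1, "E2"), (-1, "B3"), Z, (+1, "B1")],
    [(+1, "E3"), (+1, "B2"), (-1, "B1"), Z],
]

def identity_terms(b, g, d):
    """Nonzero terms of d_b F_gd + d_g F_db + d_d F_bg as (sign, 'd<i> <name>')."""
    terms = []
    for i, (r, c) in [(b, (g, d)), (g, (d, b)), (d, (b, g))]:
        sign, name = F[r][c]
        if sign != 0:
            terms.append((sign, f"d{i} {name}"))
    return terms

# Spatial indices (1,2,3) give d1 B1 + d2 B2 + d3 B3 = 0, i.e. div B = 0;
# indices (0,1,2) give d0 B3 + d1 E2 - d2 E1 = 0, the z-component of
# rot E = -(1/c) dB/dt.
```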
mechanical-engineering, design, mechanisms
Title: How can I improve my chain-drive lifting system? I am, in actuality, a network engineer and not a trained mechanical engineer. Mechanical engineering was a hobby of mine until about two years ago, when my boss gave me the opportunity to try my hand at it through CAD design (Autodesk Inventor), and I took to it like a duck to water. I have been working in that area since then.
My problem, however, is that I am unfamiliar with most of the best-practices of mechanical engineering and it has been a "trial by fire" sort of situation. Currently my main problem has to do with the alignment of a chain-drive system used for balancing a counterweight and lifting mechanism.
Details:
Horizontal alignment is great in my CAD design, however when I actually order and build out the products, I find it extremely difficult to get the alignments properly set and I end up spending hours fidgeting with the axle and chain sprocket alignment... I am currently using aluminum extrusions (via MISUMI) for the frame design.
In the images below, you will find an example (several components have been redacted due to proprietary design). The images show the main components for operating the lift mechanism. One side houses the sprockets for the Counterweight, the other for the forklift itself. It is controlled by an AC servo motor which attaches to a custom gearbox, and finally to one of the chain axles (the counterweight).
The design must use these aluminum extrusions; however, this makes it very difficult to properly align the bearing housings for the axles, often leaving everything off-center or skewed, even when using an accurate measure and square.
Can anyone suggest some improvement points or alternative types of components for mounting the axle/sprockets?
Side Profile: | {
"domain": "engineering.stackexchange",
"id": 1622,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, design, mechanisms",
"url": null
} |
8
Ilmari Karonen gets it right in the other answer. But it gets even worse than that: arithmetic operations involving floating-point numbers don't necessarily behave the same as operators we're used to from mathematics. For instance, we're used to addition being associative, so that $a + (b + c) = (a + b) + c$. This doesn't generally hold using floating-point ...
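The associativity failure mentioned above can be demonstrated in two lines of Python (IEEE-754 double precision):

```python
# Grouping changes the result: each addition rounds to the nearest double.
a = (0.1 + 0.2) + 0.3  # 0.1 + 0.2 already rounds up to 0.30000000000000004
b = 0.1 + (0.2 + 0.3)  # 0.2 + 0.3 happens to round to exactly 0.5
# a == 0.6000000000000001 while b == 0.6, so a != b.
```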
8 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9728307704335867,
"lm_q1q2_score": 0.8020619247361671,
"lm_q2_score": 0.8244619199068831,
"openwebmath_perplexity": 795.4619629985855,
"openwebmath_score": 0.8967234492301941,
"tags": null,
"url": "https://cs.stackexchange.com/tags/floating-point/hot"
} |
condensed-matter, fourier-transform, hamiltonian, topological-insulators
with $\overrightarrow{k} = (\mathbf{k}, k_z) = (k_x, k_y, k_z)$ a 3D wavevector. The last step which you can try to do by yourself because it is not related to Fourier transform is to remember that $H'\left(\overrightarrow{k}\right)$ is actually an operator, which needs to be diagonalized. It can be seen as a $4 \times 4$ matrix acting on the product of the two spin spaces associated respectively with $\sigma$ and $\tau$. | {
"domain": "physics.stackexchange",
"id": 66695,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "condensed-matter, fourier-transform, hamiltonian, topological-insulators",
"url": null
} |
Squaring both sides, we expand the left hand side of this equation as
$\begin{array}{rcl} (\cos(\alpha_0) - \cos(\beta_0))^2 + (\sin(\alpha_0) - \sin(\beta_0))^2 & = & \cos^2(\alpha_0) - 2\cos(\alpha_0)\cos(\beta_0) + \cos^2(\beta_0) \\ & & + \sin^2(\alpha_0) - 2\sin(\alpha_0)\sin(\beta_0) + \sin^2(\beta_0) \\ & = & \cos^2(\alpha_0) + \sin^2(\alpha_0) + \cos^2(\beta_0) + \sin^2(\beta_0) \\ & & - 2\cos(\alpha_0)\cos(\beta_0) - 2\sin(\alpha_0)\sin(\beta_0) \end{array}$
From the Pythagorean Identities, $\cos^2(\alpha_0) + \sin^2(\alpha_0) = 1$ and $\cos^2(\beta_0) + \sin^2(\beta_0) = 1$, so
$\begin{array}{rcl} (\cos(\alpha_0) - \cos(\beta_0))^2 + (\sin(\alpha_0) - \sin(\beta_0))^2 & = & 2 - 2\cos(\alpha_0)\cos(\beta_0) - 2\sin(\alpha_0)\sin(\beta_0) \end{array}$ | {
"domain": "libretexts.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9937100967454597,
"lm_q1q2_score": 0.8344521202632329,
"lm_q2_score": 0.8397339656668287,
"openwebmath_perplexity": 235.8874990521153,
"openwebmath_score": 0.9533764123916626,
"tags": null,
"url": "https://math.libretexts.org/Bookshelves/Precalculus/Book%3A_Precalculus_(Stitz-Zeager)/10%3A_Foundations_of_Trigonometry/10.4%3A_Trigonometric_Identities"
} |
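The identity derived above (which, by the cosine difference formula, equals $2 - 2\cos(\alpha_0 - \beta_0)$) can be spot-checked numerically:

```python
import math

def lhs(a, b):
    """(cos a - cos b)^2 + (sin a - sin b)^2"""
    return (math.cos(a) - math.cos(b)) ** 2 + (math.sin(a) - math.sin(b)) ** 2

def rhs(a, b):
    """2 - 2 cos(a)cos(b) - 2 sin(a)sin(b)"""
    return 2 - 2 * math.cos(a) * math.cos(b) - 2 * math.sin(a) * math.sin(b)
```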
forces, classical-mechanics, torque, elasticity
In case there is still some confusion regarding how the wind-back initiates, I have added a second diagram showing local twisting forces (torques) on the individual strands of wool with the 'disks' removed. Cross-section B is at the bottom-most part of the 'loop' of wool where the wool is hanging in the shape of a V, but the same idea would work if it formed a more natural U. | {
"domain": "physics.stackexchange",
"id": 61859,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, classical-mechanics, torque, elasticity",
"url": null
} |
We next perform back-substitution to express each variable in terms of the free variable $z$. Working from the bottom up, we get first that $w$ is $0\cdot z$, next that $y$ is $(1/3)\cdot z$, and then substituting those two into the top equation $x+2((1/3)z)-z+2(0)=0$ gives $x=(1/3)\cdot z$. So, back substitution gives a parametrization of the solution set by starting at the bottom equation and using the free variables as the parameters to work row-by-row to the top. The proof below follows this pattern.
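The back-substitution can be traced in code. The middle and bottom equations of the echelon system are not shown in this excerpt, so the sketch below simply encodes what the text says they give ($y = z/3$ and $w = 0$) together with the quoted top equation:

```python
from fractions import Fraction

def solution(z):
    """Parametrize the solution set by the free variable z, bottom-up."""
    z = Fraction(z)
    w = 0 * z                 # bottom equation: w = 0 * z
    y = Fraction(1, 3) * z    # next equation up: y = (1/3) z
    # Substitute into the top equation x + 2y - z + 2w = 0 and solve for x:
    x = -2 * y + z - 2 * w    # comes out to (1/3) z, as in the text
    return x, y, z, w
```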
Comment: That is, this proof just does a verification of the bookkeeping in back substitution to show that we haven't overlooked any obscure cases where this procedure fails, say, by leading to a division by zero. So this argument, while quite detailed, doesn't give us any new insights. Nevertheless, we have written it out for two reasons. The first reason is that we need the result— the computational procedure that we employ must be verified to work as promised. The second reason is that the row-by-row nature of back substitution leads to a proof that uses the technique of mathematical induction.[1] This is an important, and non-obvious, proof technique that we shall use a number of times in this book. Doing an induction argument here gives us a chance to see one in a setting where the proof material is easy to follow, and so the technique can be studied. Readers who are unfamiliar with induction arguments should be sure to master this one and the ones later in this chapter before going on to the second chapter.
Proof
First use Gauss' method to reduce the homogeneous system to echelon form. We will show that each leading variable can be expressed in terms of free variables. That will finish the argument because then we can use those free variables as the parameters. That is, the $\vec{\beta}$'s are the vectors of coefficients of the free variables (as in Example 3.6, where the solution is $x=(1/3)w$, $y=w$, $z=(1/3)w$, and $w=w$). | {
"domain": "wikibooks.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9907319860717888,
"lm_q1q2_score": 0.8005780353626254,
"lm_q2_score": 0.8080672135527632,
"openwebmath_perplexity": 807.7210400630585,
"openwebmath_score": 1.0000100135803223,
"tags": null,
"url": "https://en.wikibooks.org/wiki/Linear_Algebra/General_%3D_Particular_%2B_Homogeneous"
} |
c#, mongodb
//Create a log
LogSystem.CreateLog("money", "Database cash changed to " + money + " player: " + player.GetData("objectid"));
//change it in the client
Utils.ChangeCashClient(player, money);
} | {
"domain": "codereview.stackexchange",
"id": 31455,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, mongodb",
"url": null
} |
algorithms, graphs, search-problem
If you only want to list the edge subsets from each $p_i$ that meet your requirements, the number of subsets for a vertex with $n$ neighbors in $V$ will be ${n \choose 3}$, but the algorithm to enumerate them should not be complicated. | {
"domain": "cs.stackexchange",
"id": 2893,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs, search-problem",
"url": null
} |
15 Aug 2010, 03:11
Bunuel wrote:
praveengmat wrote:
How many factors does 36^2 have?
A 2
B 8
C 24
D 25
E 26
Finding the Number of Factors of an Integer:
First make prime factorization of an integer $$n=a^p*b^q*c^r$$, where $$a$$, $$b$$, and $$c$$ are prime factors of $$n$$ and $$p$$, $$q$$, and $$r$$ are their powers.
The number of factors of $$n$$ will be expressed by the formula $$(p+1)(q+1)(r+1)$$. NOTE: this will include 1 and n itself.
Example: Finding the number of all factors of 450: $$450=2^1*3^2*5^2$$
Total number of factors of 450 including 1 and 450 itself is $$(1+1)*(2+1)*(2+1)=2*3*3=18$$ factors.
Back to the original question:
How many factors does 36^2 have?
$$36^2=(2^2*3^2)^2=2^4*3^4$$ --> # of factors $$(4+1)*(4+1)=25$$.
Or another way: 36^2 is a perfect square, # of factors of perfect square is always odd (as perfect square has even powers of its primes and when adding 1 to each and multiplying them as in above formula you'll get the multiplication of odd numbers which is odd). Only odd answer in answer choices is 25.
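Both routes can be confirmed by brute force:

```python
# 36^2 = 1296 = 2^4 * 3^4, so the formula predicts (4+1)*(4+1) = 25 factors.
n = 36 ** 2
factors = [d for d in range(1, n + 1) if n % d == 0]
count = len(factors)  # odd, as expected for a perfect square
```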
Hope it helps.
Thanks a ton !!.. loved the approach !
Senior Manager
Status: Not afraid of failures, disappointments, and falls.
Joined: 20 Jan 2010
Posts: 266
Concentration: Technology, Entrepreneurship
WE: Operations (Telecommunications)
Re: Help: Factors problem !!
14 Oct 2010, 13:58
1
Factors of a perfect square can be derived by using prime factorization and then using the formula to find perfect square's factors. | {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347860767305,
"lm_q1q2_score": 0.8466150305317466,
"lm_q2_score": 0.8688267728417087,
"openwebmath_perplexity": 5490.911460627911,
"openwebmath_score": 0.8478909134864807,
"tags": null,
"url": "https://gmatclub.com/forum/how-many-factors-does-36-2-have-99145.html"
} |
.
. /* 2 Component FMM */
. fmm 2, nolog: regress y ph
Finite mixture model Number of obs = 300
Log likelihood = -194.5215
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.Class | (base outcome)
-------------+----------------------------------------------------------------
2.Class |
_cons | .0034359 .1220066 0.03 0.978 -.2356927 .2425645
------------------------------------------------------------------------------
Class : 1
Response : y
Model : regress | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211612253742,
"lm_q1q2_score": 0.8266169130308847,
"lm_q2_score": 0.8479677564567913,
"openwebmath_perplexity": 3017.8725220138645,
"openwebmath_score": 0.47146642208099365,
"tags": null,
"url": "https://stats.stackexchange.com/questions/389545/what-if-my-linear-regression-data-contains-several-co-mingled-linear-relationshi/389674"
} |
textbook-and-exercises, quantum-fourier-transform
Title: What are the input and output of QFT and IQFT, respectively? I have read two opposite explanations about QFT and IQFT from 2 books for beginners of Quantum Computing. Which one is correct?
The first book said that if we input an n-qubit non-superposition state into the QFT, the QFT will output an n-qubit superposition state with periodic phases. On the other hand, the IQFT swaps the input and output of the QFT; we can apply the IQFT to find the period hidden in an input state, because the amplitude of the state which matches the period (or frequency) will be amplified.
e.g., If we input |010〉 into the QFT, it will output a uniformly distributed superposition whose phases shift through two rotations (periods): the 1st rotation is from |000〉 to |011〉, and the 2nd is from |100〉 to |111〉. And if we feed that output into the IQFT, it will output |010〉.
The second book told me an opposite fact, that QFT allows us to determine how many rotations the relative phase makes per register, and we can use the IQFT to prepare periodically varying register superpositions easily.
e.g.
QFT allows us to determine that the frequency of repetition it contained was 2 (i.e., the relative phase makes two rotations per register) | {
"domain": "quantumcomputing.stackexchange",
"id": 2779,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "textbook-and-exercises, quantum-fourier-transform",
"url": null
} |
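The |010〉 example from the first book can be reproduced with an ordinary discrete Fourier transform on $2^3 = 8$ amplitudes (a sketch; sign and ordering conventions for the QFT differ between texts):

```python
import cmath
import math

# DFT of the basis state |2> (i.e. |010>) over N = 8 amplitudes:
# output amplitude j is exp(2*pi*i * j*2/8) / sqrt(8), a uniform-magnitude
# superposition whose phase advances by 2*pi*2/8 per index -- two full
# rotations across the register.
N = 8
k_in = 2  # |010>
out = [cmath.exp(2j * math.pi * j * k_in / N) / math.sqrt(N) for j in range(N)]

phase_step = cmath.phase(out[1] / out[0])  # 2*pi * 2/8 = pi/2 per index
```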
javascript
@import url("https://fonts.googleapis.com/css2?family=Poppins:wght@300;500;600&display=swap");
*,
*::after,
*::before {
padding: 0;
margin: 0;
-webkit-box-sizing: border-box;
box-sizing: border-box;
font-family: 'Poppins', sans-serif;
}
body {
background-color: #111010;
height: 100vh;
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
-webkit-box-align: center;
-ms-flex-align: center;
align-items: center;
}
main {
background-color: #fffbfc;
padding: 10rem 20rem;
border-radius: 5px;
overflow: hidden;
position: relative;
}
main .menu {
text-align: center;
-webkit-transition: all 1s;
transition: all 1s;
}
main .menu h2 {
font-size: 2rem;
font-weight: 600;
color: black;
}
main .menu p {
font-size: 1rem;
font-weight: 300;
color: #353535;
margin-bottom: 1.3rem;
}
main .menu .quiz-type {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
}
main .menu .quiz-type div {
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
padding: 1rem 2rem;
border-radius: 5px;
cursor: pointer;
font-size: 0.8rem;
font-weight: 600;
color: white;
}
main .menu .quiz-type div:not(:first-child) {
margin-left: 0.2rem;
}
main .menu .quiz-type__html {
background-color: #d81159;
} | {
"domain": "codereview.stackexchange",
"id": 40049,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript",
"url": null
} |
javascript, beginner, html, event-handling, firebase
addTransaction(date, accounts, debits, credits, description);
document.getElementById('AddTransactionDescriptionInput').value = descriptionInput.defaultValue;
for(let i = 0; i < document.getElementsByClassName('resettable').length; i++) {
document.getElementsByClassName('resettable')[i].value = document.getElementsByClassName('resettable')[i].defaultValue;
}
});
} else {
for(let i = 0; i < tableBody.rows.length; i++) {
let account = getAccount(tableBody.rows[i].cells[1].children[0].value);
let accountMod = tableBody.rows[i];
let accountValues = [account.cells[2].innerHTML,account.cells[3].innerHTML];
let accountModValues = [accountMod.cells[2].children[0].value,accountMod.cells[3].children[0].value];
//Debit
let newAmount1 = addMoney(deformat(accountValues[0]), deformat(accountModValues[0]));
accountValues[0] = format(newAmount1);
//Credit
let newAmount2 = addMoney(deformat(accountValues[1]), deformat(accountModValues[1]));
accountValues[1] = format(newAmount2); | {
"domain": "codereview.stackexchange",
"id": 36583,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner, html, event-handling, firebase",
"url": null
} |
c++, gui, c++17, snake-game, qt
std::vector<SnakeSegment> initSnake(
std::size_t boardWidth, std::size_t boardHeight)
{
auto x = boardWidth / 2;
auto y = boardHeight / 2;
std::vector<SnakeSegment> body{
SnakeSegment{ {x, y} },
SnakeSegment{ {x - 1, y} },
};
return body;
}
}
Snake_qt.pro
QT += widgets
CONFIG += c++17
SOURCES += \
Board.cpp \
Snake.cpp \
SnakeBoard.cpp \
SnakeWindow.cpp \
main.cpp
HEADERS += \
Board.h \
Snake.h \
SnakeBoard.h \
SnakeWindow.h
FORMS += \
SnakeWindow.ui | {
"domain": "codereview.stackexchange",
"id": 34565,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, gui, c++17, snake-game, qt",
"url": null
} |
hamiltonian-formalism, field-theory, phase-space, poisson-brackets, classical-field-theory
One then constructs a quotient space, dividing the space of all $\rho$ by the space of all $(\Box +m^2) \chi$. On this space the symplectic form $ \sigma (\rho_1, \rho_2)=\{ \phi_{\rho_1}, \phi_{\rho_2} \} $ is well-defined and non-degenerate. It can also be rewritten as
$$ \sigma (\rho_1, \rho_2) = \int \rho_1(x) D(x-y) \rho_2(y) d^4 x d^4 y.\tag{7} $$
First question: are these symplectic forms ($\sigma(\cdot, \cdot)$ and $\{ \cdot, \cdot \}$) somehow related to the Poisson bracket on phase space in Hamiltonian mechanics? I would expect something like that to be true, but for that one would need to somehow interpret $\rho$ as a function on some infinite-dimensional phase space. I am wondering if this can be done. And a second, closely related question: what is the interpretation of these $\rho$ functions? Our lecturer told us that they should be thought of as degrees of freedom of the field but again, I don't quite see it. Some intuition here would be nice.
The first part of OP's construction is directly related to the covariant Hamiltonian formalism for a real scalar field with Lagrangian density
$$ {\cal L} ~=~ \frac{1}{2}\partial_{\alpha} \phi ~\partial^{\alpha} \phi -{\cal V}(\phi), \tag{CW4} $$ | {
"domain": "physics.stackexchange",
"id": 31840,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "hamiltonian-formalism, field-theory, phase-space, poisson-brackets, classical-field-theory",
"url": null
} |
$$z_1 = 3 + 3i$$ Combine the like terms. Draw the diagonal vector whose endpoints are NOT $$z_1$$ and $$z_2$$. Subtracting complex numbers. Closure: The sum of two complex numbers is, by definition, a complex number. So let us represent $$z_1$$ and $$z_2$$ as points on the complex plane and join each of them to the origin to get their corresponding position vectors. The addition of complex numbers is just like adding two binomials. For addition, the real parts are firstly added together to form the real part of the sum, and then the imaginary parts to form the imaginary part of the sum; this process is as follows, using two complex numbers A and B as examples. So, a complex number has a real part and an imaginary part. Yes, the complex numbers are commutative because the sum of two complex numbers doesn't change though we interchange the complex numbers. First, draw the parallelogram with $$z_1$$ and $$z_2$$ as opposite vertices. This problem is very similar to example 1. The Complex class has a constructor that initializes the values of real and imag. Group the real parts of the complex numbers and the imaginary parts separately. The following list presents the possible operations involving complex numbers. What Do You Mean by Addition of Complex Numbers? Addition belongs to arithmetic, a branch of mathematics. Operations with Complex Numbers. i.e., we just need to combine the like terms. Complex Numbers (Simple Definition, How to Multiply, Examples) This algebra video tutorial explains how to add and subtract complex numbers. We multiply complex numbers by considering them as binomials. A user inputs real and imaginary parts of two complex numbers. Complex numbers are numbers that are expressed as a+bi, where i is an imaginary number and a and b are real numbers.
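As a quick sanity check of the componentwise addition described above, Python's built-in complex type (which writes the imaginary unit as `j`) behaves exactly this way — a minimal sketch, not part of the original text:

```python
# Componentwise addition: real parts add, imaginary parts add.
z1 = 5 + 3j
z2 = 4 + 2j
print(z1 + z2)              # (9+5j)

# Commutativity: interchanging the summands changes nothing.
print(z1 + z2 == z2 + z1)   # True

# The additive identity 0 is itself a complex number (closure).
print(z1 + 0 == z1)         # True
```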
The set of complex numbers is closed, associative, and commutative under addition. The additive identity, 0, is also present in the set of complex numbers. For instance, the sum of 5 + 3i and 4 + 2i is 9 + 5i. | {
"domain": "mestaritalo.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9693241947446617,
"lm_q1q2_score": 0.8781978537051432,
"lm_q2_score": 0.9059898210180105,
"openwebmath_perplexity": 512.6572821089109,
"openwebmath_score": 0.7448074221611023,
"tags": null,
"url": "http://mestaritalo.com/orvovf1/addition-of-complex-numbers-5b5093"
} |
organic-chemistry
Title: Why does enantiopure sec-butyl alcohol retain its optical activity over aqueous base but forms a racemic mixture over dilute sulphuric acid? Why does enantiopure sec-butyl alcohol retain its optical activity over aqueous base but forms a racemic mixture over dilute sulphuric acid ?
My reasoning is that sec-butyl alcohol gets dehydrated into an alkene because of sulphuric acid and then gets hydrolysed to form the alcohol again. Thus an equilibrium is formed between the alkene and the alcohol, and between $R$- and $S$-sec-butyl alcohol.
Dehydration cannot occur with a base, so the alcohol remains optically active.
I feel that the conversion of alcohol to alkene and then back to alcohol seems ridiculous and impossible. What do you think: is this reasoning correct, or just plain nonsense?
To be specific, I have some doubt whether the alkene will convert back to the alcohol or not in dilute sulphuric acid. Although sulphuric acid is a notable dehydrating agent, it is concentrated rather than dilute sulphuric acid that would typically be used for such a purpose.
Here, it is due to the improvement of $\ce{H2O}$ as a leaving group compared to $\ce{OH-}$: it departs as a neutral species rather than carrying a negative charge.
Under acidic conditions, the sec-butyl alcohol will be protonated in a small amount creating a much better leaving group. The protonation also makes the alcoholic carbon centre more electrophilic and vulnerable to $S_N2$ attack by other water molecules in solution, leading to inversion of the chiral centre.
This is not possible under neutral conditions, as the unprotonated alcohol would have $\ce{OH-}$ as a much poorer leaving group, preventing elimination or substitution reactions.
Under acidic conditions, with neutral water as a nucleophile, $S_N2$ substitution will be the preferred path.
"domain": "chemistry.stackexchange",
"id": 8573,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry",
"url": null
} |
c#, beginner, programming-challenge
Title: Add multiples of 3 or 5 below 1000, Can this code be optimised. Project Euler #1 I have seen some solutions here for the same problem, this and this, where I have found better ways to solve it. I just want to know how good or bad my code and way of approaching this problem is.
using System;
public class Program
{
public static void Main()
{
int sum = 0;
for (int i = 0; 3 * i < 1000; i++)
{
sum += 3 * i;
if (5 * i < 1000 && (5 * i) % 3 != 0)
{
sum += 5 * i;
}
}
Console.WriteLine(sum);
}
}
The main problem
The main issue here is that you define i in terms of it being a "factor of 3", and then also try to use this number as a "factor of 5". This doesn't make any sense - as there is no inherent relationship between being a multiple of 3 and a multiple of 5.
It would've made sense if you were doing it for 3 and 6, for example, because 3 and 6 are related:
//i*3 is obviously always a multiple of 3
sum += 3 * i;
if ((3*i) % 2 == 0)
{
//Now we know that i*3 is also a multiple of 6
}
But that is not the case here.
The for loop's readability
I understand your idea, you wanted to only iterate over factors of three which keep the multiple under 1000. While I will suggest to change this approach (later in the answer), your approach could've been written in a much more readable manner:
for( int i = 0 ; i < 1000 ; i+=3 )
This will iterate over i values of 0,3,6,9,12,15,... and will skip all values in between.
The benefit here is that you don't need to work with i*3 all the time, you can just use i itself.
You will need to iterate over the multiples of 5 separately. However, should you keep using this approach, I would always suggest splitting these loops anyway.
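The split-loop suggestion can be sketched as follows (a Python translation rather than C#, using `range`'s step argument; multiples of 15 are subtracted because both passes count them):

```python
# Sum the multiples of 3 and the multiples of 5 below 1000 in separate
# passes, then subtract the multiples of 15, which both passes counted.
total = 0
for i in range(0, 1000, 3):    # 0, 3, 6, ..., 999
    total += i
for i in range(0, 1000, 5):    # 0, 5, 10, ..., 995
    total += i
for i in range(0, 1000, 15):   # remove the double-counted values
    total -= i

print(total)  # 233168
```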
The algorithm
Your approach works, but it's not the easiest approach. If I put your approach into words: | {
"domain": "codereview.stackexchange",
"id": 35037,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, beginner, programming-challenge",
"url": null
} |
Sort by:
i want an testing exercise
- 5 years, 9 months ago
You might try the Practice section of the site. There's a "prime factorization" skill early in the map.
Staff - 5 years, 9 months ago | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.978712651931994,
"lm_q1q2_score": 0.8197892635595186,
"lm_q2_score": 0.8376199714402812,
"openwebmath_perplexity": 1221.0723456805683,
"openwebmath_score": 0.9980568885803223,
"tags": null,
"url": "https://brilliant.org/discussions/thread/prime-factorization-2/"
} |
physical-chemistry, gas-laws, hydrogen, mole
Title: How to find the molecular weight of a given gas based on its density?
The density of a gas $\ce{X}$ is $10$ times that of hydrogen. In that case, what is the molecular weight of gas $\ce{X}$? | {
"domain": "chemistry.stackexchange",
"id": 378,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, gas-laws, hydrogen, mole",
"url": null
} |
laser, interference, interferometry
Say $v$ is smaller than $u$, i.e. the Airy disc A is smaller than A'. The wavefronts that interfere in A' must therefore originate from the surrounding area of A. Different wavefronts with different phases result in a different intensity. Because the resulting pattern depends on the imaging system, it is called subjective.
Moreover, since the Airy disc also depends on the aperture (or the lens diameter), the subjective pattern also changes when the aperture is changed.
"domain": "physics.stackexchange",
"id": 42182,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "laser, interference, interferometry",
"url": null
} |
c++, floating-point
int totalShift = 0;
if(newNormalBiasedExponent < 1) {
int diff = -newNormalBiasedExponent + 1;
newNormalBiasedExponent = 1;
totalShift += diff;
}
int implicitBit = msb(fullNewMantissa);
int shift = implicitBit - totalShift - 23;
if(shift >= 0) {
newNormalBiasedExponent += shift;
totalShift += shift;
fullNewMantissa &= ~(1ll << implicitBit);
} else {
newNormalBiasedExponent = 0;
}
roundedShift(&fullNewMantissa, totalShift);
if(fullNewMantissa == (1 << 23)) {
++newNormalBiasedExponent;
fullNewMantissa = 0;
}
if(newNormalBiasedExponent >= 255) {
newNormalBiasedExponent = 255;
fullNewMantissa = 0;
}
return SoftFloat{(uint)fullNewMantissa, (uint)newNormalBiasedExponent, sign ^ right.sign};
// I don't really understand why this return expression gives a warning:
// narrowing conversion of ‘(((int)((const SoftFloat*)this)->SoftFloat::sign) ^ ((int)right.SoftFloat::sign))’ from ‘int’ to ‘unsigned int’
}
bool isNan() const {
return exponent == 255 && mantissa != 0;
// This is same as (*reinterpret_cast<const unsigned int*>(this) << 1) > 0b11111111000000000000000000000000u,
// but the experiments have shown it doesn't make any difference for speed on my machine.
// Current version does not invoke any UB
}
bool isZero() const {
return exponent == 0 && mantissa == 0;
} | {
"domain": "codereview.stackexchange",
"id": 36091,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, floating-point",
"url": null
} |
experimental-physics, mass, standard-model, higgs, beyond-the-standard-model
And the masses of the fermions are calculated as the product $m_{\rm fermion}=yv$ of the Higgs vev $v$ and the corresponding Yukawa coupling $y$ (how much the Higgs interacts with the given fermion species). The gauge couplings, Yukawa couplings, and $\lambda$ (the fourth-order Higgs' self-interaction) are dimensionless (without units) parameters of the Standard Model that have to be measured; they determine the masses of the particles. One needs a deeper theory to calculate them from first principles (string theory to calculate all of them). The Higgs mass or Higgs vev or Higgs quadratic coefficient is the only dimensionful parameter of the Standard Model (with the units of mass) which determines the natural scale for all masses of particles in the Standard Model. (A mystery why this mass scale is so much lower than the Planck scale, the natural scale of quantum gravity, is known as the hierarchy problem.)
There won't be any "new or better" model found by the LHC if the spectrum of particles remains what it is. The LHC won't find any new values of parameters in the Standard Model; we already know all of them. At most, the accuracy will improve a bit. If the LHC finds new physics in 2015 (or in the 2012 data that haven't been processed yet), we will have to switch to a more comprehensive model that also describes the Beyond the Standard Model physics.
But to explain the values of the parameters from deeper principles, one needs to make theoretical and not experimental advances – advances in very high-energy theoretical physics near the fundamental scale. Grand unified theories may do a part of the job but only string theory has the potential to explain and calculate all the values of the parameters. | {
"domain": "physics.stackexchange",
"id": 9329,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "experimental-physics, mass, standard-model, higgs, beyond-the-standard-model",
"url": null
} |
navigation, ros2, simulation, ros-humble, nav2
Title: Inflation Layer doesn't seem to be working in local costmap specifically I am trying to set up the nav2 parameter yaml file for my robot. The code I wrote for the costmaps can be seen here:
local_costmap:
ros__parameters:
update_frequency: 5.0
publish_frequency: 5.0
global_frame: odom #change here
robot_base_frame: baseplate
use_sim_time: True
rolling_window: true # Setting the "rolling_window" parameter to true means that the costmap will remain centered around the robot as the robot moves through the world.
width: 5
height: 5
resolution: 0.05
footprint: "[ [0.8, 0.6], [0.8, -0.6], [-0.8, -0.6], [-0.8, 0.6] ]"
plugins: ["inflation_layer", "obstacle_layer"]
filters: ["keepout_filter"]
inflation_layer:
enabled: True # Disable it to see difference
plugin: "nav2_costmap_2d::InflationLayer"
cost_scaling_factor: 2.58 #A scaling factor to apply to cost values during inflation. increasing the factor will decrease the resulting cost values.
inflation_radius: 0.9 #The radius in meters to which the map inflates obstacle cost values.
obstacle_layer:
plugin: "nav2_costmap_2d::ObstacleLayer"
enabled: True
#observation_persistence: 15.0 # How long to store messages in a buffer to add to costmap before removing them (s). Effectively, obstacles are not so easily forgotten.
footprint_clearing_enabled: True # If true, the robot footprint will clear (mark as free) the space in which it travels.
combination_method: 1 # 0 - Overwrite: Overwrite master costmap with every valid observation. 1 - Max: Sets the new value to the maximum of the master_grid’s value and this layer’s value.
observation_sources: scan
scan:
topic: "/lidar_ign" | {
"domain": "robotics.stackexchange",
"id": 38635,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, ros2, simulation, ros-humble, nav2",
"url": null
} |
programming, quantum-state, entanglement, cirq
Title: Multiple Bipartite Entangled State in Cirq I am trying to create this state:
rho = q . rho_{1,2} + r . rho_{2,3} + s . rho_{1,3} + (1-q-r-s) . rho_separable
And I wrote this code:
import random
import numpy as np
import cirq
circuit, circuit2, circuit3 = cirq.Circuit()
p = 0.2
q = 0.1
r = 0.3
alice, bob, charlie = cirq.LineQubit.range(1, 4)
rho_12 = circuit.append([cirq.H(alice), cirq.CNOT(alice, bob)])
#circuit.append([cirq.H(alice), cirq.CNOT(alice, bob)])
rho_23 = circuit.append([cirq.H(bob), cirq.CNOT(bob, charlie)])
rho_13 = circuit.append([cirq.H(alice), cirq.CNOT(alice, charlie)])
circuit = rho_12 + rho_23 + rho_13
print(circuit)
Here I have 2 problems:
1) This line is not working: circuit = rho_12 + rho_23 + rho_13
2) I cannot multiply the state with p or q or r. What I mean is that I can't write this line:
rho_12 = circuit.append([cirq.H(alice), cirq.CNOT(alice, bob)]) * q | {
"domain": "quantumcomputing.stackexchange",
"id": 2040,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming, quantum-state, entanglement, cirq",
"url": null
} |
electromagnetism, electricity, electrostatics
Title: Electric field in wire's cross section?
$\Delta V_{ABCDA} = - \int_A^A \vec{E} \cdot d\vec{l}$
The requirement that the round-trip potential difference be zero means that $E_1$ and $E_2$ have to be equal. Therefore the electric field
must be uniform both along the length of the wire and also across the cross-sectional area of the wire. Since the drift speed is proportional to $E$, we find that the current is indeed uniformly distributed across the cross section. This result is true only for uniform cross section and uniform material, in the steady state. The current is not uniformly distributed across the cross section in the case of high-frequency (non-steady-state) alternating currents, because time-varying currents can create non-Coulomb forces, as we will see in a later chapter on Faraday's law. | {
"domain": "physics.stackexchange",
"id": 11954,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, electricity, electrostatics",
"url": null
} |
cellular-respiration
Title: Do cold blooded animals generate any heat? In explaining energy and work to an 8-year-old I said that all conversion of energy generates heat as a by-product. For example, cars generate heat in their engines and running generates heat in our bodies. Then the 8-year-old said, except for cold-blooded animals.
So my question is, do cold-blooded animals generate any heat in their conversion of stored energy (food, fat, etc) into motion? If they generate heat, why are they cold-blooded? They do generate heat. They just do not SPEND energy specifically on heating their bodies by raising their metabolisms. This is a form of energy conservation. The metabolic rate they need to live is not nearly enough to heat their bodies.
An example of spending energy to heat the body is seen in humans shivering. Here muscle is activated not for its usual purpose, but to function as a furnace. "Warm-blooded" and "cold-blooded" are somewhat misnomers. The correct way to think of it is...
Endotherm or ectotherm. Does the heat primarily come from within (endo) or from the surroundings (ecto)? Endothermic animals include mammals. Most of their body heat is generated by their own metabolisms. Ectothermic animals include reptiles and insects. They absorb most of their body heat from the surroundings. This is not the same as saying they let their body temperature fluctuate with their surroundings; some avoid this by moving around to accommodate themselves.
Homeotherm or poikilotherm. Homeotherms want to maintain homeostasis for their body temperatures. They don't want it to change. Poikilotherms do not exhibit this behaviour, instead their body temperatures vary greatly with the environment.
We can have endotherm poikilotherms, such as squirrels, who let their body temperature drop while hibernating. Endotherm homeotherms, such as humans, where temperature is constant by means of complex thermoregulation. Ectotherm homeotherms, such as snakes (moving into shadow or into the sun to regulate temperature), and ectotherm poikilotherms, such as maggots. | {
"domain": "biology.stackexchange",
"id": 815,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cellular-respiration",
"url": null
} |
java, concurrency
/** Main lock. */
private final Object mutex = new Object();
/** Underlying executor on which tasks are actually run. */
private final Executor exec;
// ------------------------------------------------------------
// Mutable fields
/** Pending tasks in submission order. */
private final Set<Runnable> pending = new LinkedHashSet<>();
/** Currently running tasks. */
private final Set<Runnable> running = new HashSet<>();
/**
* Current run state.
* <pre>
* 2 - running (shutdown() not yet called)
* 1 - shut down, but not terminated (some tasks still running)
* 0 - shut down & terminated (no running tasks)
* </pre>
*/
private final CountDownLatch runState = new CountDownLatch(2);
/**
* Hard shutdown: prevent execution of any pending tasks.
*/
private boolean hardShutDown = false;
// ------------------------------------------------------------
// Constructor
public ExecutorExecutorService(Executor exec) {
this.exec = exec;
}
// ------------------------------------------------------------
// ExecutorService | {
"domain": "codereview.stackexchange",
"id": 30475,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, concurrency",
"url": null
} |
quantum-field-theory, particle-physics, quantum-electrodynamics, pair-production
What I don't understand is how every answer key I could find instead got the following:
$-ig_e^2[\bar{v}\not\!\epsilon^*(4)(\frac{\not p_\mu(1)- \not p_\mu(3) + mc}{(p_1-p_3)^2-m^2c^2})\not\!\epsilon^*(3)u] \times (2\pi)^4\delta^4(p_2+p_1-p_3-p_4)d^4q$
From how I understand it, $\not\! a \equiv a^\mu\gamma_\mu$; after all, that is the definition that is provided on page 249 of the textbook.
What this seems to be saying, however, is that $\bar{v}\not\!\epsilon^*(4)=\bar{v}(\epsilon^*)^\mu(4)\gamma_\mu$. But, given my solution, this means that $\bar{v}(\epsilon^*)^\mu(4)\gamma_\mu=\epsilon_\mu^*(4)\bar{v}\gamma^\mu$.
Additionally, by the same logic, $(\epsilon^*)^\nu(3)\gamma_\nu u=\gamma_\nu u \epsilon^*_\nu(3)$.
Is this right? Do $(\epsilon^*)^\mu$, $\gamma_\mu$, $u$, and $\bar{v}$ commute like this? Or am I missing something else? Any and all help would be tremendous! I felt like I understood Chapter 6 when I set up Feynman calculus with the 'toy model' Griffiths creates, but I'm feeling a bit lost for actual QED. Hi and welcome to Physics stackexchange. You are right in saying that the polarization vector for any given momentum commutes with
(a) gamma matrices
(b) spinors (representing both particles and anti-particles) | {
"domain": "physics.stackexchange",
"id": 88936,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, particle-physics, quantum-electrodynamics, pair-production",
"url": null
} |
Since $(n+1)^2-n^2=2n+1$, we obtain: $$2(1+2+...+n)+n=2^2-1^2+3^2-2^2+...+(n+1)^2-n^2=(n+1)^2-1=n^2+2n,$$ which gives the answer.
Note that we have the elementary sum
$$\sum_{\ell=1}^m(1)=m \tag 1$$
Hence, we have
\begin{align} \sum_{m=1}^n m&=\sum_{m=1}^n \sum_{\ell=1}^m(1)\\\\ &=\sum_{\ell=1}^n\sum_{m=\ell}^n (1)\\\\ &=\sum_{\ell=1}^n \left(n-\ell+1\right)\\\\ &=n(n+1)-\sum_{\ell=1}^n \ell\\\\ &=n(n+1)-\sum_{m=1}^n m\\\\ 2\sum_{m=1}^n m&=n(n+1)\\\\ \sum_{m=1}^n m&=\frac{n(n+1)}{2} \end{align}
as was to be shown.
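The identity just proved is easy to spot-check numerically — a small Python sketch (a sanity check, not a substitute for the proof):

```python
# Check 1 + 2 + ... + n against the closed form n(n+1)/2 for many n.
for n in range(1, 101):
    direct = sum(range(1, n + 1))
    closed = n * (n + 1) // 2
    assert direct == closed, (n, direct, closed)

print(sum(range(1, 101)))  # 5050, i.e. 100 * 101 / 2
```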
$f(x)=ax^2+bx+c$
$$f(0)=c=0$$
$$f(1)=a+b=1$$ $$f(2)=4a+2b=2+1=3$$
we have | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.980280868436129,
"lm_q1q2_score": 0.829261185188084,
"lm_q2_score": 0.8459424353665381,
"openwebmath_perplexity": 533.9731215218569,
"openwebmath_score": 0.7050276398658752,
"tags": null,
"url": "https://math.stackexchange.com/questions/2106689/different-ways-to-come-up-with-123-cdots-n-fracnn12/2106739"
} |
ros, sicklms
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-05-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Ashesh Goswami on 2014-05-17:
thanks a lot..yes I agree that the Velodyne 3D LIDARS are the main sensors used for complex autonomous car projects but it is a very expensive tool to be used in case I am planning to perform comparatively easier tasks like lane change or collision avoidance..so I guess I should keep the hunt on to see if any radar related help is available..otherwise I would stick to the SICK LMS laser-scanner. | {
"domain": "robotics.stackexchange",
"id": 17962,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, sicklms",
"url": null
} |
classical-mechanics, forces, work
Title: A few questions about the concept of work From Wikipedia: The work done by a constant force of magnitude F on a point that moves a displacement d in the direction of the force is the product:
$$W = Fd.$$
If I lift some object from a ground, the force to be put in above equation is the gravitational force $mg$.
But while I am moving the object upwards, against the gravity, I must pull the object with greater force than $mg$, right? So the net force is $F_{myforce}-mg$ which results in object being lifted. But this is not the case. Why? Does it matter how strongly I pull the object?
What work will I do if I move this object in absence of gravity (in space)? Will work done be zero or not? You are getting confused between work done by a single force and total work done on a body. When you substitute $F=mg$ in the above equation[1], you are calculating the work done by the gravitational force, only the gravitational force and nothing else.
Substituting $F=F_{my force}$ will give you work done by you on the object. So if $F_{myforce}>mg$, the total work done(sum of the work done by all forces) on the body will be positive, which will show as the body's kinetic energy(velocity).
So from this, you can deduce that if you pull an object in zero gravity, you will do work on the body. It will just be more than the work you would do on Earth, of course, because there's no gravity.
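The distinction between work done by a single force and total work done on a body can be made concrete with a small sketch (all numbers invented for illustration):

```python
# A 2 kg object lifted 1.5 m; you pull with 25 N, more than mg = 19.6 N.
m, g, d = 2.0, 9.8, 1.5
F_applied = 25.0

W_applied = F_applied * d        # work done by your force alone
W_gravity = -m * g * d           # gravity points opposite the displacement
W_total = W_applied + W_gravity  # sum of work done by ALL forces

print(round(W_applied, 3))   # 37.5
print(round(W_gravity, 3))   # -29.4
print(round(W_total, 3))     # 8.1  -> shows up as kinetic energy
```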
[1]A more correct equation for work done would be $W= \vec F\cdot\vec s$, which involves scalar product of two vectors force($\vec F$) and displacement($\vec s$), because you have to take into account the signs(Work done can be negative too). | {
"domain": "physics.stackexchange",
"id": 18312,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, forces, work",
"url": null
} |
arduino, kalman-filter, automatic, probability
Title: Modeling a robot to find its position The task of the robot is as follows.
My robot should catch another robot in the arena, which is trying to escape. The exact position of that robot is sent to my robot at 5 Hz. Other than that, I can use sensors to identify that robot.
Is it possible to estimate the next position of the other robot using a mathematical model? If so, can anyone recommend tutorials or books to refer to? Assuming a constant update of 5Hz, your sample time is (1/5) = 0.2s.
Get one position of the target, p1.
Get a second position of the target, p2.
Target speed is the difference in position divided by difference in time:
$$
v = (p_2 - p_1)/dT \\
v = (p_2 - p_1)/0.2
$$
Now predict where they will be in the future, where future is $x$ seconds from now:
$$
p_{\mbox{future}} = p_2 + v*x
$$
where again, $p_2$ is the current point of the target, $v$ is the target's calculated speed, and $x$ is how far in the future you want to predict, in seconds.
You can do a number of things from here such as filtering a number of samples in the past, you could try curve fitting, etc., but this is a simple method of predicting future location based on known position updates.
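The prediction scheme above can be sketched in Python (positions are invented for illustration; the 0.2 s sample time comes from the 5 Hz update):

```python
# Two consecutive target positions (x, y), received at 5 Hz (dT = 0.2 s).
dT = 0.2
p1 = (1.0, 2.0)
p2 = (1.2, 2.1)

# Target speed: difference in position divided by difference in time.
v = ((p2[0] - p1[0]) / dT, (p2[1] - p1[1]) / dT)

def predict(p, v, x):
    """Position x seconds in the future, assuming constant velocity."""
    return (p[0] + v[0] * x, p[1] + v[1] * x)

# Predict where the target will be one second from now.
fx, fy = predict(p2, v, 1.0)
print(round(fx, 6), round(fy, 6))  # 2.2 2.6
```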
If you knew more about the path or potential paths your target could take you could optimize the prediction or provide a series of cases, but assuming relatively linear motion this model should hold. | {
"domain": "robotics.stackexchange",
"id": 767,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "arduino, kalman-filter, automatic, probability",
"url": null
} |
classical-mechanics, mathematical-physics, lagrangian-formalism, differential-geometry, action
Title: How does one express a Lagrangian and Action in the language of forms? In Lipschitz's Classical Mechanics a Lagrangian is defined as:
$L(q,q',t)$ for some trajectory $q(t)$ of a particle
And the action is defined as:
$S:=\int^a_b L(q,q',t) dt$
How does one express this in the language of forms? Is the following possible?
Consider:
$L:TM \rightarrow \mathbb{R}$
so that $L$ is a 1-form; i.e. $L \in \Omega^1 M$, which is necessary to integrate over a trajectory $q$ considered as a curve, a 1-dimensional manifold:
$q:I \rightarrow J \subseteq M$.
Then defining the action:
$S:=\int_J L$
we have $S=\int_{qI} L=\int_I q^*L=\int_I Lq_*$
which we identify with the action $\int_I L(q,q',t) dt$ introduced by Lipschitz.
Does this work? You may know about it already, but you can find an excellent account of Lagrangian Mechanics on manifolds in the book Mathematical Methods of Classical Mechanics by V. Arnold.
Also to specifically address your question:
$L:TM \rightarrow \mathbb{R}$
so that $L$ is a 1-form; i.e. $L \in \Omega^1 M$, which is necessary to integrate
over a trajectory $q$ considered as a curve, a 1-dimensional manifold:
$q:I \rightarrow J \subseteq M$. | {
"domain": "physics.stackexchange",
"id": 22205,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, mathematical-physics, lagrangian-formalism, differential-geometry, action",
"url": null
} |
cnn, semantic-segmentation
Title: Suitable instance counting CNN for training on polygonal masks I have a medical dataset labeled with polygonal masks (rather than rectangular boxes). It works well for pixel annotation with UNet to generate masks of healthy vs damaged skin. Now I need to do instance counting. Most of the CNNs like YOLOv4 consume bounding boxes, not polygons.
Which CNN should I use for instance counting given that my dataset consists of labeled polygons?
Which CNN should I use for instance counting given that my dataset consists of labeled polygons?
CNNs for instance segmentation. To start with you can try Mask-RCNN. Here are all state of the art CNNs for instance segmentation. You will also find code for most of them. | {
"domain": "datascience.stackexchange",
"id": 9609,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cnn, semantic-segmentation",
"url": null
} |
ros
Title: ros-canopen publishing
In the documentation for canopen_chain_node, in the node configuration, it allows you to set ros publishers for certain objects for debugging purposes. This uses SDO's and therefore blocks the control loop.
My question is: If my EDS files specify PDO's, does ros-canopen automatically publish the PDO's into the ROS system when the package is up and running?
Originally posted by JadTawil on ROS Answers with karma: 71 on 2017-10-19
Post score: 0
If you map objects, ros_canopen will read data from PDOs and not via SDOs.
There is no option to publish all mapped data automatically.
Please note that you need an exclamation mark behind the object index to read mapped data periodically.
Originally posted by Mathias Lüdtke with karma: 1596 on 2017-12-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 29138,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
newtonian-mechanics
The answer is no.
From the conservation of momentum we have:
$m_g*v_g = m_b*v_b$ (1), where $g$ and $b$ come from gun and bullet.
Using the formula for the kinetic energy and the conservation of momentum (1) we get:
$E_b=m_b*v_b^2/2$ (2)
$E_g=m_g*v_g^2/2 = (m_b/m_g)*E_b$ (3)
As $m_g>>m_b$ it immediately follows from (3) that $E_g/E_b<<1$
We also know that:
$E_g = F_g *d_g$ (4)
$E_b = F_b *d_b$ (5)
where $F_b$ is the average resistance force the bullet faces inside the target and $F_g$ the average force the gun exerts during the recoil till it stops. $d_{b,g}$ are the distances traveled by the bullet inside the target and the gun, respectively.
From (3), (4), (5) it follows that:
$F_g *d_g / (F_b *d_b) <<1$
which means that:
$F_g <<F_b$ if we consider $d_g = d_b$ (similar traveled distances, which is a realistic case for a bullet hitting a relatively tough target situated close to the gun).
The opposite case $F_g = F_b$ is not realistic because it leads to $d_g << d_b$, which means a bullet traveling through a quite soft target for a long distance, hundreds of times greater, more precisely $m_g/m_b$ times greater, than the distance traveled by the gun back. | {
"domain": "physics.stackexchange",
"id": 26365,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics",
"url": null
} |
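To put rough numbers on the ratio in equation (3) of the answer above, here is a quick sketch (the rifle and bullet figures are illustrative values of my own, not taken from the answer):

```python
# Illustrative values (my own): a ~4 kg rifle firing a ~10 g bullet at 800 m/s.
m_g, m_b = 4.0, 0.010   # masses in kg (gun, bullet)
v_b = 800.0             # bullet muzzle speed in m/s

# Conservation of momentum (1): m_g * v_g = m_b * v_b
v_g = m_b * v_b / m_g   # recoil speed of the gun, 2 m/s here

# Kinetic energies (2) and (3)
E_b = 0.5 * m_b * v_b**2    # 3200 J
E_g = 0.5 * m_g * v_g**2    # 8 J

print(E_g / E_b)    # 0.0025, exactly m_b / m_g, so E_g << E_b
```

The ratio comes out to exactly $m_b/m_g$, independent of the particular muzzle speed, which is what equation (3) states.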
All integrals can be converted to cylindrical coordinates. To get a third dimension, each point also has a height above the original coordinate system. Polar coordinates $(r,\theta)$ are defined on $\mathbb{R}^2\setminus\{(0,0)\}$ and given by $r=\sqrt{x^2+y^2}$. The Cartesian, or rectangular, coordinate system is the most widely used coordinate system. Convert quadric surfaces in cylindrical or spherical coordinates to Cartesian and identify them. The true origin point (0, 0) may or may not be in the proximity of the map data you are using. Care should be taken, however, when calculating. Cartesian(double[] elements) initializes a set of Cartesian coordinates from the first 3 consecutive elements in the provided array. A polar coordinate system is determined by a fixed point, an origin or pole, and a zero direction or axis. First I'll review spherical and cylindrical coordinate systems so you can have them in mind when we discuss more general cases. In this chapter we will describe a Cartesian coordinate system and a cylindrical coordinate system. Cylindrical coordinates are a method of describing location in a three-dimensional coordinate system. In this handout we will find the solution of this equation in spherical polar coordinates. Someone please help me do this? I don't need you to solve the question completely, I just want help in how to solve it and to be shown the right direction because I'm completely lost! The Fluent environment supports cylindrical and Cartesian coordinates. Elasticity equations in cylindrical polar coordinates. Spherical coordinates (Figure 1: spherical coordinate system). Examples: planes parallel to coordinate planes, cylindrical parameterization of a cylinder, and spherical parameterization of a sphere.
The focus of this chapter is on the governing equations of the linearized theory of elasticity in three types of coordinate systems, namely, Cartesian, cylindrical, and spherical coordinates. Using cylindrical coordinates can greatly simplify a triple integral when evaluating it in cylindrical coordinates is much easier than it would be in Cartesian. Abstract: Application peculiarities of the Green's function method for Cartesian, cylindrical and spherical coordinate systems are under consideration. Converts from Cartesian (x,y,z) to Cylindrical | {
"domain": "sramble-communication.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9914225158534695,
"lm_q1q2_score": 0.8682749313616078,
"lm_q2_score": 0.8757869803008764,
"openwebmath_perplexity": 535.9419383625223,
"openwebmath_score": 0.931949257850647,
"tags": null,
"url": "http://angs-diasporanew.sramble-communication.com/apdsmyht/cartesian-to-cylindrical-coordinates.php"
} |
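A minimal sketch of the Cartesian-to-cylindrical conversion mentioned at the end of the record above (the function names are my own):

```python
import math

def cartesian_to_cylindrical(x, y, z):
    """(x, y, z) -> (r, theta, z), with theta = atan2(y, x) in (-pi, pi]."""
    return math.hypot(x, y), math.atan2(y, x), z

def cylindrical_to_cartesian(r, theta, z):
    """(r, theta, z) -> (x, y, z)."""
    return r * math.cos(theta), r * math.sin(theta), z

r, theta, z = cartesian_to_cylindrical(1.0, 1.0, 2.0)
print(r, theta, z)   # r = sqrt(2), theta = pi/4, z = 2.0
```

`atan2` rather than `atan(y/x)` handles all four quadrants and the $x=0$ line correctly, which is the usual pitfall in this conversion.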
lagrangian-formalism, differential-geometry, action, covariance, diffeomorphism-invariance
Title: What is the purpose of emphasizing that an action is invariant under diffeomorphism? When learning field theory and string theory, I always see physicists stress the fact that the action, which is an integral of the Lagrangian density $S(x)=\int L(x,\dot{x})dt$, is invariant under diffeomorphism. For example, in string theory, people always say that the Polyakov action is invariant under worldsheet diffeomorphisms.
I do not understand why this is important because as I see it, integrals are defined to be independent of the choice of coordinates and so actions are trivially invariant under diffeomorphism. Can anyone show me an example of an action which is not invariant under diffeomorphism? Perhaps a simple example is in order: The action for a non-relativistic free particle
$$ S[x]~=~\int \! dt ~L, \qquad L ~=~ \frac{m}{2}\dot{x}^2, $$
is not form invariant under time-reparametrizations
$$t\quad\longrightarrow \quad t^{\prime} ~=~f(t). $$
In contrast, modern fundamental physics (such as, e.g. string theory) is believed to be geometric, and the action formulation are expected to be reparametrization and diffeomorphism invariant. | {
"domain": "physics.stackexchange",
"id": 34003,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lagrangian-formalism, differential-geometry, action, covariance, diffeomorphism-invariance",
"url": null
} |
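To spell out the answer's example (a standard computation, sketched here): substituting $t' = f(t)$, so that $dt = dt'/\dot f$ and $dx/dt = (dx/dt')\,\dot f$, gives

```latex
S[x] = \int \frac{m}{2}\left(\frac{dx}{dt}\right)^{2} dt
     = \int \frac{m}{2}\left(\frac{dx}{dt'}\right)^{2}\dot{f}^{\,2}\,\frac{dt'}{\dot{f}}
     = \int \frac{m}{2}\,\dot{f}\,\left(\frac{dx}{dt'}\right)^{2} dt' .
```

The leftover factor of $\dot f$ shows that the Lagrangian does not keep the form $\frac{m}{2}\dot{x}^2$ in the new time variable unless $\dot f = 1$, which is what "not form invariant under time-reparametrizations" means here.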
homework-and-exercises, quantum-field-theory, path-integral, 1pi-effective-action
$$ \frac{\delta \Gamma}{\delta \phi_c(x)} = -J_\phi(x),$$
where $J_\phi$ is the source current such that
$$ \langle \phi(x)\rangle_{J_\phi} = \phi_c(x).$$
Your eq. (1) is wrong, the Schwinger-Dyson equation says that
$$ \left\langle \frac{\delta S}{\delta \phi} + J \right\rangle_J = 0,\tag{1'}$$
which coincides with your eq. (1) only in the case $J=0$.
The classical equation of motion of $\Gamma$ is $J_\phi = 0$, meaning $\langle \phi(x)\rangle_0 = 0 = \phi_c(x)$ in a Lorentz-invariant theory, which is not an interesting equation, and in particular does not depend on the form of $S$ so cannot be directly related with the Schwinger-Dyson equation.
The proper way to see how $\Gamma$ encodes the full quantum theory "at a classical level" is not to compute equations of motion. The quantum theory is not concerned with the time evolution of a classical field, it makes no sense to expect the classical equation of motion of $\Gamma$ to encode the dynamics of the quantum theory. Instead, the following relation holds true:
$$ \frac{Z[J]}{Z[0]} = \mathrm{e}^{\mathrm{i}W[J]} = \int \mathrm{e}^{\mathrm{i}/\hbar\left(S[\phi] + J\phi\right)}\frac{\mathcal{D}\phi}{Z[0]} = \mathrm{e}^{\mathrm{i}/\hbar\left(\Gamma[\phi_c] + J\phi_c\right)}\tag{3}.$$ | {
"domain": "physics.stackexchange",
"id": 38027,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, quantum-field-theory, path-integral, 1pi-effective-action",
"url": null
} |
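The relation $\delta\Gamma/\delta\phi_c = -J_\phi$ quoted at the start of the record above follows from the standard Legendre-transform definition of $\Gamma$; as a sketch:

```latex
% Definition of the effective action as a Legendre transform:
\Gamma[\phi_c] = W[J_\phi] - \int d^4x\, J_\phi(x)\,\phi_c(x),
\qquad
\phi_c(x) = \left.\frac{\delta W[J]}{\delta J(x)}\right|_{J=J_\phi}.
% Varying with respect to phi_c, the implicit J-dependence cancels:
\frac{\delta\Gamma}{\delta\phi_c(x)}
 = \int d^4y\,\phi_c(y)\,\frac{\delta J_\phi(y)}{\delta\phi_c(x)}
 - \int d^4y\,\frac{\delta J_\phi(y)}{\delta\phi_c(x)}\,\phi_c(y)
 - J_\phi(x)
 = -J_\phi(x).
```

The first two terms cancel by the chain rule, leaving exactly the stationarity condition stated above.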
python, python-3.x, console, ssh, curses
def del_selected_items(self, sel_file):
PopUpDelete(sel_file)
def copy_selected_items(self):
file_name = self.data[self.position][0]
left_panel_path = file_explorer_0.abs_path
right_panel_path = file_explorer_1.abs_path
if wins.active_panel == 0:
self.from_file = self.path + '/' + file_name
self.to_path = file_explorer_1.path
elif wins.active_panel == 1:
self.from_file = self.path + '/' + file_name
self.to_path = file_explorer_0.path
if self.position != 0:
PopUpCopyFile(stdscr, self.from_file, self.to_path, file_name)
def start_rsync(self):
file_name = self.data[self.position][0]
left_panel_path = file_explorer_0.abs_path
right_panel_path = file_explorer_1.ssh_abs_path
if wins.active_panel == 0:
self.from_file = left_panel_path + '/' + file_name
self.to_path = right_panel_path
elif wins.active_panel == 1:
self.from_file = right_panel_path + '/' + file_name
self.to_path = left_panel_path
if self.position != 0:
rsync_obj = RSync(0).start(self.from_file, self.to_path, file_name)
def menu(self):
self.pad.erase()
self.height, self.width = self.window.getmaxyx()
self.screen_height, self.screen_width = self.screen.getmaxyx() | {
"domain": "codereview.stackexchange",
"id": 43913,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, console, ssh, curses",
"url": null
} |
physical-chemistry, kinetic-theory-of-gases, statistical-mechanics
In this range, an approximate expression for the partition function of a heteronuclear diatomic molecule is $$q^R = \frac{1}{\beta hcB}$$
$$\langle E^R \rangle = -\frac{1}{q^R} \left( \frac{\partial q^R}{\partial \beta}\right)_V = \frac{1}{\beta} = kT$$
Note: I already pointed out that this result is valid at "high" temperatures; however, I did not give you a sense of scale. For the hydrogen molecule $\theta^R = 87.6 \ K$; for the chlorine molecule it is $0.351 \ K$. Similarly, the approximations made while deriving the translational partition function are valid at room temperature and for everyday macroscopic containers whose dimensions are much greater than the thermal wavelength ($\Lambda$) of the molecules.
A similar analysis can be performed for vibrational modes, however the temperature range over which it is valid is much greater ($> 805 K$ for chlorine and $> 6332 K $ for hydrogen). These conditions aren't within the realm of everyday experience. | {
"domain": "chemistry.stackexchange",
"id": 6627,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, kinetic-theory-of-gases, statistical-mechanics",
"url": null
} |
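As a numerical sanity check of the two statements in the record above, one can compare the exact level sum with the high-temperature expression and verify $\langle E^R\rangle \approx kT$ (a sketch; the cutoff `jmax` and the temperature are my own choices):

```python
import math

def q_rot(T, theta_R, jmax=2000):
    # Exact rotational partition function of a heteronuclear diatomic:
    # levels E_J = k * theta_R * J(J+1) with degeneracy (2J+1).
    return sum((2 * J + 1) * math.exp(-theta_R * J * (J + 1) / T)
               for J in range(jmax))

theta_R = 0.351   # K, rotational temperature of Cl2 quoted in the answer
T = 298.0         # room temperature, K

exact = q_rot(T, theta_R)
high_T = T / theta_R          # q^R = 1/(beta h c B) = T / theta_R
print(exact, high_T)          # agree to well under 1% since T >> theta_R

# Mean rotational energy in units of kT, via <E> = k T^2 d(ln q)/dT
# (central finite difference):
h = 1e-3
dlnq_dT = (math.log(q_rot(T + h, theta_R)) -
           math.log(q_rot(T - h, theta_R))) / (2 * h)
print(T * dlnq_dT)            # ~ 1, i.e. <E^R> ~ kT (equipartition)
```

Repeating this at $T$ comparable to $\theta^R$ would show the approximation breaking down, which is the sense-of-scale point the answer makes.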
javascript, beginner, html, tree, vue.js
<script src="https://unpkg.com/vue@2.2.6"></script>
<div id="app">
<tree v-bind:nodes="nodes"></tree>
</div>
<style>
.selected {
font-weight: bold;
}
</style>
<script>
var selectedNode;
Vue.component("tree", {
template: `
<ul>
<div v-for="(node, index) of nodes">
<node
v-bind:node="typeof node === 'string' ? node : Object.keys(node)[0]"
v-bind:nestingTree="typeof node === 'string' ? undefined : Object.values(node)[0]"
v-bind:id="index + '/' + (nestingLevel || 0)"
v-bind:nestingLevel="nestingLevel || 0"
></node>
</div>
</ul>
`,
props: ["nodes", "nestingLevel"]
});
Vue.component("node", {
template: `
<li>
<div v-on:click="onClick" v-bind:id="id">
{{node}} {{
nestingTree
? open ? "[-]" : "[+]"
: ""}}
</div>
<tree v-if="nestingTree && open" v-bind:nodes="nestingTree" v-bind:nestingLevel="nestingLevel + 1"></tree>
</li>
`,
props: ["node", "nestingTree", "nestingLevel", "id"],
data: function() {
return {
open: false
}
},
methods: {
onClick: function(event) {
this.toggleOpenClosedState(event);
this.selectCurrentNode(event);
},
toggleOpenClosedState: function(event) {
this.open = !this.open;
},
selectCurrentNode: function(event) {
if (selectedNode) { | {
"domain": "codereview.stackexchange",
"id": 27590,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner, html, tree, vue.js",
"url": null
} |
observational-astronomy, telescope, amateur-observing
Title: Effect of black coating on the inside of a hobby telescope's tube? I was watching this (rather old) episode of The Making, where they show how a telescope is built in a factory. At 7:34, the cylindrical body of the telescope is shown to have a dark coating on the inside and the video explains that this prevents internal reflection of the light received from whatever you're observing.
I saw an interesting comment from Izumikawa Fukumi who asked if the telescope would become more accurate if they coated the inside with the blackest black in the world. How would such a dark coating affect the telescope's performance? Telescopes have the inner part of the tube blackened to minimize scattered light reaching the eyepiece. Such scattered light reduces contrast, and hence reduces the ability to see very faint objects, or low contrast detail. A complementary approach for refractor telescopes is to use baffles in the tube. The baffles form a physical barrier to scattered light. However baffles are costly to make accurately and add to the weight of the telescope.
The more that scattered light is reduced, the better the performance.
Alternatives to super-absorbing paint are to attach to the tube either matt black velvet or sandpaper sprayed with matt black paint. | {
"domain": "astronomy.stackexchange",
"id": 4525,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "observational-astronomy, telescope, amateur-observing",
"url": null
} |
c++, template-meta-programming, coordinate-system, c++17
/*!
* \brief sets Z value
*
* \remark This function will be available only, if this Vector has 3 or more dimensions.
*
* \tparam U must be implicit convertible to T
* \tparam Index This is a little trick to make the std::enable_if a dependent name. Do not pass any other type than the default one.
* \param _value The value.
*/
template <typename U, typename Index = std::size_t, typename = std::enable_if_t<std::greater<Index>()(DIM, static_cast<Index>(CommonDimensions::z))>>
constexpr void setZ(U&& _value) noexcept
{
set(CommonDimensions::z, std::forward<U>(_value));
}
/*!
* \brief member wise addition
*
* \param _other The other.
*
* \return A reference of this object.
*/
constexpr Vector& operator +=(const Vector& _other) noexcept
{
for (std::size_t i = 0; i < DIM; ++i)
m_Values[i] += _other.m_Values[i];
return *this;
}
/*!
* \brief member wise subtraction
*
* \param _other The other.
*
* \return A reference of this object.
*/
constexpr Vector& operator -=(const Vector& _other) noexcept
{
for (std::size_t i = 0; i < DIM; ++i)
m_Values[i] -= _other.m_Values[i];
return *this;
}
/*!
* \brief member wise addition
*
* \param _val The value.
*
* \return A reference of this object.
*/
constexpr Vector& operator +=(const T& _val) noexcept
{
for (auto& el : m_Values)
el += _val;
return *this;
} | {
"domain": "codereview.stackexchange",
"id": 28706,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, template-meta-programming, coordinate-system, c++17",
"url": null
} |
Is Moebius $\times$ Moebius = cylinder $\times$ cylinder (no boundaries)? – Yaakov Baruch – 2011-04-04T16:38:18.597
Without knowing any algebraic topology, it's possible to conclude at least something about X. If X is metric, compact, or locally compact and paracompact, then $\dim(X\times X)\le 2\dim X$, which means X has to have Lebesgue covering dimension at least 2. Wage, Proc. Natl. Acad. Sci. USA 75 (1978) 4671 , www.pnas.org/content/75/10/4671.full.pdf . What is the weakest condition that guarantees $\dim(X\times Y)= \dim X+\dim Y$? Given Yaakov Baruch's comment about the "dogbone space," it's not obvious that X is at all well behaved simply from the requirement that its square is $\mathbb{R}^3$. – Ben Crowell – 2013-01-19T15:55:50.700
@YaakovBaruch, isn't the cylinder factorizable? And could you elaborate this identity a little? – Ash GX – 2013-10-17T06:35:37.190
Dear @BigM, I fail to see the value in editing an old question simply to add mathjax to its title which was perfectly readable to begin with. – Ricardo Andrade – 2015-05-31T11:44:50.003
@Ricardo Andrade: every little improvement should be (mildly) welcome, don't you think so? – Qfwfq – 2018-08-03T08:23:23.150
No such space exists. Even better, let's generalize your proof by converting information about path components into homology groups. | {
"domain": "kiwix.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9787126506901791,
"lm_q1q2_score": 0.8395084957123562,
"lm_q2_score": 0.8577681049901037,
"openwebmath_perplexity": 427.21514499864395,
"openwebmath_score": 0.8483476042747498,
"tags": null,
"url": "http://library.kiwix.org/mathoverflow.net_en_all_2019-02/A/question/60375.html"
} |
model-checking, temporal-logic, linear-temporal-logic
Title: Are there temporal logics linear time properties that only have counterexamples that are more complex than a lasso? Are there linear time temporal logics that can express some property $P_{nonlasso}$ that does have a counterexample, but none that is a lasso (or finite)?
Details:
One advantage of model checking over other formal methods is its ability to return a counterexample, a (finite path or a) path in the form of a lasso. This is sufficient up to (extended) Büchi automata, since an infinite accepting path can be transformed to a lasso that infinitely visits some accepting state.
But is this still the case for more complex linear time logics? Or a non-linear time logic that does have counterexamples, e.g. ACTL*? Can the $\mu$-calculus or MCRL2 express such a linear time property $P_{nonlasso}$?
For instance, having the Kripke structure $s_1 \leftrightarrow init \leftrightarrow s_2$, can I express the counting property "repeat $(init \rightarrow s_1 \rightarrow init)^i \cdot (init \rightarrow s_2 \rightarrow init)^i$ for infinitely increasing $i$"? This path does have a very simple schema... | {
"domain": "cstheory.stackexchange",
"id": 3320,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "model-checking, temporal-logic, linear-temporal-logic",
"url": null
} |
ros, roscpp, debian, jessie
Command failed, exiting.
chip@chip:~/ros_catkin_ws$ sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/indig
Edit: I apologize, I honestly didn't know where to look for precompiled binaries... Any hints?
I was perhaps under the mistaken impression that they did not exist.
https://erlerobotics.gitbooks.io/erle-robotics-erle-brain-a-linux-brain-for-drones/content/en/ros/tutorials/rosinstall.html
Debian Wheezy (7.5)
There are no precompiled binaries of ROS for Debian; while it seems that most of the dependencies are being pulled into the main repositories, there are still some missing, so compiling requires that the dependencies be made from source as well. The following steps describe how to compile everything from source for a machine running Debian Wheezy.
I don't know when this was written so perhaps it is out of date.
NOTE: I am on a Debian Jessie platform......
Originally posted by w on ROS Answers with karma: 99 on 2017-01-31
Post score: 0
'ROS_CREATE_SIMPLE_SERIALIZER_ARM'
ROS_CREATE_SIMPLE_SERIALIZER_ARM(double);
From this, I have the impression you are attempting this on an ARM platform. Those are typically resource-limited, and compilation of (heavily templated) C++ sources is a resource-intensive process. `c++: internal compiler error: Killed (program cc1plus)` can then often be caused by the system running out of memory.
Can you try to reduce the parallelisation by adding -j1 to your catkin_make_isolated call?
| {
"domain": "robotics.stackexchange",
"id": 26871,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, roscpp, debian, jessie",
"url": null
} |
newtonian-mechanics, newtonian-gravity, orbital-motion, solar-system, inertial-frames
Title: Why do we say that the Earth moves around the Sun? In history we are taught that the Catholic Church was wrong, because the Sun does not move around the Earth, instead the Earth moves around the Sun.
But then in physics we learn that movement is relative, and it depends on the reference point that we choose.
Wouldn't the Sun (and the whole universe) move around the Earth if I place my reference point on Earth?
Was movement considered absolute in physics back then? Imagine two donut-shaped spaceships meeting in deep space. Further, suppose that when a passenger in ship A looks out the window, they see ship B rotating clockwise. That means that when a passenger in B looks out the window, they see ship A rotating clockwise as well (hold up your two hands and try it!).
From pure kinematics, we can't say "ship A is really rotating, and ship B is really stationary", nor the opposite. The two descriptions, one with A rotating and the other with B, are equivalent. (We could also say they are both rotating a partial amount.) All we know, from a pure kinematics point of view, is that the ships have some relative rotation.
However, physics does not agree that the rotation of the ships is purely relative. Passengers on the ships will feel artificial gravity. Perhaps ship A feels lots of artificial gravity and ship B feels none. Then we can say with certainty that ship A is the one that's really rotating.
So motion in physics is not all relative. There is a set of reference frames, called inertial frames, that the universe somehow picks out as being special. Ships that have no angular velocity in these inertial frames feel no artificial gravity. These frames are all related to each other via the Poincare group.
In general relativity, the picture is a bit more complicated (and I will let other answerers discuss GR, since I don't know much), but the basic idea is that we have a symmetry in physical laws that lets us boost to reference frames moving at constant speed, but not to reference frames that are accelerating. This principle underlies the existence of inertia, because if accelerated frames had the same physics as normal frames, no force would be needed to accelerate things. | {
"domain": "physics.stackexchange",
"id": 64825,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, solar-system, inertial-frames",
"url": null
} |
The key is to do the following two quick checks to eliminate composite numbers so that we can reach a probable prime as quickly as possible.
• For any given $P_j$, the first step is to look for small prime factors, i.e., to factor $P_j$ using prime numbers less than a bound $B$. If a small prime factor is found, then we increase $j$ by 2 and start over. Note that we skip any $P_j$ where the sum of digits is divisible by 3. We also skip any $P_j$ that ends with the digit 5.
• If no small factors are found, then compute the congruence $2^{P_j-1} \ (\text{mod} \ P_j)$. If the answer is not congruent to 1, then we know $P_j$ is composite and work on the next number. If $2^{P_j-1} \equiv 1 \ (\text{mod} \ P_j)$, then $P_j$ is said to be a probable prime base 2. Once we know that a particular $P_j$ is a probable prime base 2, it is likely a prime number. To further confirm, we apply the Miller-Rabin primality test on that $P_j$.
In the first check, we check for prime factors among the first 100 odd prime numbers (i.e. all odd primes up to and including 547).
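The two screening steps can be sketched as follows (the names are my own; the digit-sum-divisible-by-3 and trailing-5 skips are subsumed by the arithmetic checks):

```python
def small_primes(k):
    # first k odd primes by simple trial division
    primes, n = [], 3
    while len(primes) < k:
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
        n += 2
    return primes

ODD_PRIMES = small_primes(100)   # all odd primes up to and including 547

def survives_checks(n):
    """Trial division by the first 100 odd primes, then the base-2 Fermat test.

    Intended for odd candidates; n surviving means n is a probable prime base 2.
    """
    if n % 2 == 0 or n % 5 == 0:          # skip evens and numbers ending in 5
        return False
    for p in ODD_PRIMES:                  # look for small prime factors
        if n % p == 0:
            return n == p
    # base-2 Fermat test: probable prime iff 2^(n-1) == 1 (mod n)
    return pow(2, n - 1, n) == 1

print(survives_checks(561))    # False: 561 = 3 * 11 * 17, caught by trial division
print(survives_checks(1009))   # True: 1009 is prime
```

A candidate that survives both checks would then go on to the Miller-Rabin test, as described above.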
___________________________________________________________________
Searching the first probable prime
At the outset, we did not know how many numbers we would have to check. Since there can be a long gap between two successive prime numbers, the worst fear is that the number range we are dealing with is situated in such a long gap, in which case we may have to check thousands of numbers (or even tens of thousands). Luckily the search settles on a probable prime rather quickly. The magic number is 297. In other words, for the number | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9808759638081522,
"lm_q1q2_score": 0.8042136639257234,
"lm_q2_score": 0.8198933337131076,
"openwebmath_perplexity": 196.70244281521545,
"openwebmath_score": 0.8269802331924438,
"tags": null,
"url": "https://mathcrypto.wordpress.com/tag/square-and-multiply-algorithm/"
} |
python, pandas, stata
Title: Stata-style replace in Python In Stata, I can perform a conditional replace using the following code:
replace target_var = new_value if condition_var1 == x & condition_var2 == y
What's the most pythonic way to reproduce the above on a pandas dataframe? Bonus points if I can throw the new values and conditions into a dictionary to loop over.
To add a bit more context, I'm trying to clean some geographic data, so I'll have a lot of lines like
replace county_name = new_name_1 if district == X_1 and city == Y_1
....
replace county_name = new_name_N if district == X_N and city == Y_N
What I've found so far:
pd.replace which lets me do stuff like the following, but doesn't seem to accept logical conditions:
`
replacements = { 1: 'Male', 2: 'Female', 0: 'Not Recorded' }
df['sex'].replace(replacements, inplace=True)
`
df.where(condition, replacement, inplace=True)
Condition is assumed to be boolean Series/Numpy array. Check out where documentation - here is an example. | {
"domain": "datascience.stackexchange",
"id": 3221,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, pandas, stata",
"url": null
} |
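As a sketch of the Stata-style conditional replace in pandas, using `df.loc` with a boolean mask rather than `df.where` (the column names and values below are illustrative, echoing the question's placeholders):

```python
import pandas as pd

df = pd.DataFrame({
    "district":    ["X_1", "X_1", "X_2"],
    "city":        ["Y_1", "Y_2", "Y_1"],
    "county_name": ["old", "old", "old"],
})

# Stata: replace county_name = new_name_1 if district == X_1 & city == Y_1
df.loc[(df["district"] == "X_1") & (df["city"] == "Y_1"), "county_name"] = "new_name_1"

# Looping over a dict of (district, city) -> new name, as the question asks for:
replacements = {("X_1", "Y_2"): "new_name_2", ("X_2", "Y_1"): "new_name_3"}
for (district, city), new_name in replacements.items():
    mask = (df["district"] == district) & (df["city"] == city)
    df.loc[mask, "county_name"] = new_name

print(df["county_name"].tolist())  # ['new_name_1', 'new_name_2', 'new_name_3']
```

`df.loc[mask, col] = value` assigns in place only where the mask is True, which is the closest analogue of Stata's `replace ... if`.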
• This answer is at least as awesome as the question! – R. van Dobben de Bruyn Dec 28 '16 at 23:21
• Beautiful argument! It appears (visually) that, not only doesn't the $(5,2)$ spiral close and repeat, but it also seems never to repeat the same "structural units" (however defined) as it walks out above the $y=x$ diagonal. If you also have any remarks on that behavior, I would appreciate your insights. Thanks! – Joseph O'Rourke Dec 29 '16 at 3:56
• @JosephO'Rourke I'm not sure I see what you mean. In your "first 1,000 turns" picture, the same (small) structural unit appears near $(20,20)$ and near $(50,50)$. If the sequence were random, we would expect a structural unit of size $n$, whatever that means, to repeat after an exponential-in-$n$ number of steps. Is that what you're seeing? – Will Sawin Dec 29 '16 at 7:21
• @JosephO'Rourke In lieu of that, two fun facts I think I can prove: 1. This sequence, when extended backwards in time, is symmetric about the $(x,y)$ axis 2. every other path is closed. – Will Sawin Dec 29 '16 at 7:24
• @JosephO'Rourke 1. This is easy to check from the fact that the backwards evolution rule is the forwards evolution rule, flipped around the $x=y$ axis. 2. Check first for paths contained in the sets A, B, C. Observe that, in this set, the only way to get from $y= p-\epsilon$ to $y=p + \epsilon$ for a prime $p$ is by passing vertically through $(p,p)$ - everywhere else you will be stopped by a relatively prime point and turn around. The path starting at $(5,2)$ passes through $(p,p)$ because it is infinite, so no other path can. Then use this symmetry to handle paths in the other octant. – Will Sawin Dec 29 '16 at 7:30 | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.984575447918159,
"lm_q1q2_score": 0.8247000505840649,
"lm_q2_score": 0.837619963333289,
"openwebmath_perplexity": 624.3216286993172,
"openwebmath_score": 0.6793840527534485,
"tags": null,
"url": "https://mathoverflow.net/questions/258157/relatively-primes-spirals?noredirect=1"
} |
complexity-theory, regular-languages, finite-automata
Let $A$ be an NFA. Add a new final state $q_\$$ to it and for every accepting state $q$, add a transition $q\overset{\$}{\to}q_\$$ where $\$$ is a fresh letter. For every letter $a\in \Sigma\sqcup \{\$\}$, also add a transition $q_\$\overset{a}{\to}q_\$$. Then make all states accepting. The new automaton accepts $L(A)\$ (\Sigma\sqcup \{\$\})^* \sqcup L'$ for some $L'\subseteq \Sigma^*$.
Now, add some automaton (with all states final) that recognizes the language $\Sigma ^*$ (i.e. the language of words that don't contain $\$$) in parallel. The new automaton will recognize $L(A)\$ (\Sigma\sqcup \{\$\})^* \sqcup (L' \cup \Sigma ^*)=L(A)\$ (\Sigma\sqcup \{\$\})^* \sqcup \Sigma ^*$.
Now, notice that we have $L(A)=\Sigma^*$ iff $L(A)\$ (\Sigma\sqcup \{\$\})^* \sqcup \Sigma ^*=(\Sigma\sqcup\{\$\})^*$. I.e. the process I described is a reduction from universality of an NFA to universality of an NFA with all states accepting. Since the reduction is polynomial, your problem is PSPACE-hard. | {
"domain": "cs.stackexchange",
"id": 20901,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, regular-languages, finite-automata",
"url": null
} |
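The reduction above can be carried out concretely for a small example (the encoding and state names are my own): take $A$ accepting the words over $\{a,b\}$ that end in $a$, which is not universal. The reduced automaton has every state accepting, accepts every \$-free word, but rejects $b\$$ because $b \notin L(A)$.

```python
def nfa_accepts(delta, starts, finals, word):
    # Simulate an NFA given as delta[(state, symbol)] -> set of states,
    # with a set of initial states and a set of accepting states.
    current = set(starts)
    for sym in word:
        nxt = set()
        for q in current:
            nxt |= delta.get((q, sym), set())
        current = nxt
    return bool(current & finals)

# A: accepts words over {a, b} that end in 'a' (not universal).
sigma = ["a", "b"]
delta = {("s", "a"): {"s", "f"}, ("s", "b"): {"s"}}
starts, finals = {"s"}, {"f"}

# Reduction: add q_$ reachable by $ from accepting states, loop q_$ on every
# letter, run a one-state Sigma* automaton "u" in parallel, make ALL states final.
delta2 = dict(delta)
for q in finals:
    delta2[(q, "$")] = delta2.get((q, "$"), set()) | {"q$"}
for sym in sigma + ["$"]:
    delta2[("q$", sym)] = {"q$"}
for sym in sigma:                      # parallel automaton recognizing Sigma^*
    delta2[("u", sym)] = {"u"}
starts2 = starts | {"u"}
finals2 = {"s", "f", "q$", "u"}        # every state accepting

print(nfa_accepts(delta2, starts2, finals2, "bbab"))   # True: any $-free word
print(nfa_accepts(delta2, starts2, finals2, "a$b$a"))  # True: 'a' is in L(A)
print(nfa_accepts(delta2, starts2, finals2, "b$"))     # False: 'b' not in L(A)
```

The `b$` rejection witnesses exactly the iff in the proof: the reduced automaton is universal over $(\Sigma\sqcup\{\$\})^*$ only if $A$ itself is universal over $\Sigma^*$.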
python
So I would do:
for shape in tgons:
    # check the length first so shorter tuples don't raise an IndexError
    if len(shape) == 4 and shape[0] == shape[2] > 0 and shape[1] == shape[3] > 0:
        if shape[0] == shape[1]:
            squares += 1
        else:
            rects += 1
    else:
        neithers += 1
This can also be changed to allow all 2D rects/squares. (not lines.)
if len(shape) == 4 and shape[0] == shape[2] != 0 and shape[1] == shape[3] != 0:
Also follow PEP 8 a bit more; then you'll have easier-to-read code.
You mostly need more spaces:
Before and after infix operators.
After but not before commas. | {
"domain": "codereview.stackexchange",
"id": 16876,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
homework-and-exercises, thermodynamics, ideal-gas
Title: Ideal gas number of microstates constant while applying the equation $\Omega(E)=BV^N E^{3N/2}$ for a system with ideal gas in two boxes separated by a divider, I was asked to calculate the change in entropy after removing the divider.
The final result was $\Delta S=K_Bln(\frac{32}{B})$. How can I find B to show that the change in entropy is not negative?
starting parameters are:
$N_1=N_2=N$ $V_1=V_2=V$ $T_1=T_2=T$
my calculations went as such:
$\Delta S=S_f - S_i = K_B \ln\left(B(2V)^{2N} (2E)^{2\frac{3N}{2}}\right) - \left[K_B\ln\left(BV^N E^\frac{3N}{2}\right)+K_B\ln\left(BV^NE^\frac{3N}{2}\right)\right]=K_B\ln\left(\frac{32BV^{2N}E^{3N}}{B^2 V^{2N}E^{3N}}\right)=K_B\ln\left(\frac{32}{B}\right)$ Your final result is not correct:
\begin{align}
\Delta S & = S_f - S_i\\
&= K_B\, ln\left\{B(2V)^{2N} (2E)^{2\frac{3N}{2}}\right\} - \left[K_Bln\left(BV^N E^\frac{3N}{2}\right)+K_Bln\left(BV^NE^\frac{3N}{2}\right)\right]\\
&=K_B\,ln\left(\frac{2^{5N}BV^{2N}E^{3N}}{B^2 V^{2N}E^{3N}}\right)\\ | {
"domain": "physics.stackexchange",
"id": 79535,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, thermodynamics, ideal-gas",
"url": null
} |
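The corrected result, $\Delta S = k_B\ln\left(2^{5N}/B\right)$, can be sanity-checked numerically. The values of $B$, $V$, $E$, $N$ below are arbitrary illustrative choices, not physical data:

```python
import math

def ln_omega(B, V, E, N):
    # ln Ω for Ω(E) = B V^N E^(3N/2)
    return math.log(B) + N * math.log(V) + 1.5 * N * math.log(E)

kB = 1.0                        # work in units of k_B
B, V, E, N = 0.7, 2.0, 3.0, 4   # arbitrary illustrative values

S_initial = 2 * kB * ln_omega(B, V, E, N)        # two identical boxes
S_final = kB * ln_omega(B, 2 * V, 2 * E, 2 * N)  # divider removed: 2N, 2V, 2E
delta_S = S_final - S_initial
```

The difference collapses to $-\ln B + 2N\ln 2 + 3N\ln 2 = \ln\left(2^{5N}/B\right)$, independent of $V$ and $E$, matching the derivation above.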
pl.programming-languages, type-theory, type-systems, object-oriented
When one defines subclasses in object oriented programs, one typically adds publicly visible fields (or methods). In most programming languages, such subclasses are regarded as nominal subtypes. The question is whether they are also structural subtypes. If they are not, i.e., the programming language allows one to declare nominal subtypes that are not structural subtypes, then there would be type holes in the programming language.
In simple cases, adding fields works fine. The type of the superclass expects fewer fields than the type of the subclass. So, if you plug in an instance of a subclass where an instance of the superclass is expected, the program will just ignore the additional fields provided and nothing goes wrong.
However, if the superclass or subclass has methods that take arguments of the same type as itself, or return results of the same type as itself, then problems arise. Then the interface type of the subclass is not a structural subtype of that of the superclass. Widely used type-safe programming languages such as Java do not allow such subclasses. So, they restrict the language to obtain type safety. The programming language Eiffel is said to have sacrificed type-safety to obtain flexibility instead. If one designs a strong type system that retains flexibility, one must give up the principle that subclasses give rise to subtypes. Hence the title of the paper "Inheritance is not subtyping". The authors propose a different notion of higher-order subtyping which works instead. Kim Bruce also has a closely related proposal called "matching" that achieves the same effect. See this presentation. Also helpful is a position paper of Andrew Black.
The semantics community is probably at fault for largely ignoring the problem. We have traditionally regarded it as a practical type-system engineering issue that is of little theoretical interest. If that is not the case and there is indeed some semantics work in the area, I hope the other people will mention them. | {
"domain": "cstheory.stackexchange",
"id": 1411,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pl.programming-languages, type-theory, type-systems, object-oriented",
"url": null
} |
java, security, file
Title: Checking if file is within a directory for security I have a security manager I'm writing for a Java program, and within the checkRead method, I wish to only allow reads to files within a given directory.
The basis of it is that it will get a canonical file for the input (which is actually a string as per the method contract). It will then begin taking the file's parent repeatedly until it either matches one of the trusted directories, or becomes null (from trying to take the parent of the root directory). This check should be secure both on Unix/linux and Windows environments and styles of filenames. I don't need 100% certainty that a valid and allowed, but mangled or non-standard path would pass, but I certainly cannot rework the program's API to open files in a different, more secure manner, as my application's classloader needs to use this method to check its reads.
Are there any glaring issues (security or style) that anyone might see here?
File tested;
try {
tested = new File(file).getCanonicalFile();
} catch (IOException e1) {
throw new SecurityException(
"The basedir resolution failed to resolve.");
}
File parentFile = tested;
while (parentFile != null) {
if (basedir.equals(parentFile) || classDir.equals(parentFile)) {
return;
}
parentFile = parentFile.getParentFile();
}
logger.warn("MosstestSecurityManager stopped an attempt to read a file from non-core code: "
        + file);
"domain": "codereview.stackexchange",
"id": 7075,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, security, file",
"url": null
} |
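The parent-walking containment check above has a rough analogue in other languages. Below is a hedged Python sketch of the same idea (canonicalize, then walk up the parents until a trusted directory or the filesystem root is reached); the function name and trusted paths are illustrative, not part of the Java code:

```python
import os.path

def is_within(path, trusted_dirs):
    # canonicalize, then repeatedly take the parent until we either hit a
    # trusted directory (allowed) or the root (denied) -- mirrors the
    # Java loop over getParentFile() above
    p = os.path.realpath(path)
    trusted = {os.path.realpath(d) for d in trusted_dirs}
    while True:
        if p in trusted:
            return True
        parent = os.path.dirname(p)
        if parent == p:          # reached the filesystem root
            return False
        p = parent
```

Canonicalizing both the candidate path and the trusted directories first is what defeats `..` segments and symlink tricks, which is the same reason the Java code calls `getCanonicalFile()` before walking parents.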
ros, openni, mit-ros-pkg
/opt/ros/groovy/include/ros/publisher.h:102: undefined reference to `ros::console::checkLogLocationEnabled(ros::console::LogLocation*)'
CMakeFiles/tracker.dir/src/main.cpp.o: In function `ros::Publisher ros::NodeHandle::advertise<mapping_msgs::PolygonalMap_<std::allocator<void> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, bool)':
/opt/ros/groovy/include/ros/node_handle.h:241: undefined reference to `ros::NodeHandle::advertise(ros::AdvertiseOptions&)'
CMakeFiles/tracker.dir/src/main.cpp.o: In function `ros::Publisher ros::NodeHandle::advertise<body_msgs::Skeletons_<std::allocator<void> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, bool)':
/opt/ros/groovy/include/ros/node_handle.h:241: undefined reference to `ros::NodeHandle::advertise(ros::AdvertiseOptions&)'
CMakeFiles/tracker.dir/src/main.cpp.o: In function `main': | {
"domain": "robotics.stackexchange",
"id": 16735,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, openni, mit-ros-pkg",
"url": null
} |
ros, roomba, turtlebot
def _init_params(self):
self.port = rospy.get_param('~port', self.default_port)
self.robot_type = rospy.get_param('~robot_type', 'create')
#self.baudrate = rospy.get_param('~baudrate', self.default_baudrate)
self.update_rate = rospy.get_param('~update_rate', self.default_update_rate)
self.drive_mode = rospy.get_param('~drive_mode', 'twist')
self.has_gyro = rospy.get_param('~has_gyro', True)
self.odom_angular_scale_correction = rospy.get_param('~odom_angular_scale_correction', 1.0)
self.odom_linear_scale_correction = rospy.get_param('~odom_linear_scale_correction', 1.0)
self.cmd_vel_timeout = rospy.Duration(rospy.get_param('~cmd_vel_timeout', 0.6))
self.stop_motors_on_bump = rospy.get_param('~stop_motors_on_bump', True)
self.min_abs_yaw_vel = rospy.get_param('~min_abs_yaw_vel', None)
self.max_abs_yaw_vel = rospy.get_param('~max_abs_yaw_vel', None)
self.publish_tf = rospy.get_param('~publish_tf', False)
self.odom_frame = rospy.get_param('~odom_frame', 'odom')
self.base_frame = rospy.get_param('~base_frame', 'base_footprint') | {
"domain": "robotics.stackexchange",
"id": 26158,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, roomba, turtlebot",
"url": null
} |
Matrix Multiplication. Suppose we want to multiply two matrices of size N x N: for example A x B = C. This problem is mostly used to teach recursion, but it has some real-world uses. Matrix multiplication can be stated as a divide-and-conquer algorithm: divide the problem into subproblems; recur: solve the subproblems recursively; conquer: combine the solutions for S1, S2, …, into a solution for S. The base cases for the recursion are subproblems of constant size, and the analysis can be done using recurrence equations. These are huge matrices, say n ≈ 50,000. The paradigm is: Divide, Conquer and Combine. The same idea applies to multiplying any two integers using the divide-and-conquer approach: for example, if the first bit string is "1100" and the second bit string is "1010", the output should be 120. And this is a super cool algorithm for two reasons. Split each matrix into 4 submatrices of size (n / 2) x (n / 2). The multiply() method takes 3 matrices and their indexes and, using the divide-and-conquer matrix multiplication algorithm, each matrix is divided into four parts which are multiplied to get the output. Both merge sort and quicksort employ a common algorithmic paradigm based on recursion. The matrix multiplication operation is associative in nature rather than commutative. As a baseline, one can compute C = AB using the traditional matrix multiplication algorithm. Given two square matrices A and B of size n x n each, find their multiplication matrix. Top-Down Algorithms: Divide-and-Conquer — solve the subproblems recursively and concurrently; this deals with how parallel programming can be achieved. The Towers of Hanoi is a mathematical problem which comprises 3 pegs and 3 discs.
There are four types of algorithms: Iterative Algorithm; Divide and conquer algorithm; Sub-cubic algorithms. Following is a simple Divide and Conquer method to multiply two square matrices. The name divide and conquer is | {
"domain": "claudiaerografie.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9835969650796874,
"lm_q1q2_score": 0.8436981083867225,
"lm_q2_score": 0.857768108626046,
"openwebmath_perplexity": 1385.7947430231193,
"openwebmath_score": 0.6192194223403931,
"tags": null,
"url": "http://claudiaerografie.it/nxqd/matrix-multiplication-divide-and-conquer.html"
} |
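The "split each matrix into 4 submatrices of size (n/2) x (n/2)" step can be made concrete. A minimal Python sketch of the plain (non-Strassen) divide-and-conquer multiply, assuming n is a power of two; names are illustrative:

```python
def mat_mul_dc(A, B):
    # divide-and-conquer multiply of n x n matrices (n a power of two)
    n = len(A)
    if n == 1:                                  # base case: 1x1 product
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M, r, c):                          # extract an h x h quadrant
        return [row[c:c + h] for row in M[r:r + h]]

    def add(X, Y):                              # entrywise matrix addition
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    # eight recursive half-size multiplies, combined by addition
    C11 = add(mat_mul_dc(A11, B11), mat_mul_dc(A12, B21))
    C12 = add(mat_mul_dc(A11, B12), mat_mul_dc(A12, B22))
    C21 = add(mat_mul_dc(A21, B11), mat_mul_dc(A22, B21))
    C22 = add(mat_mul_dc(A21, B12), mat_mul_dc(A22, B22))
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

This version performs eight half-size multiplies, so the recurrence $T(n)=8T(n/2)+O(n^2)$ still gives $O(n^3)$; the sub-cubic algorithms mentioned above (e.g. Strassen's) reduce the number of recursive multiplies.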
performance, image, swift, ios, color
FWIW, I used a points of interest OSLog:
import os.log
private let log = OSLog(subsystem: Bundle.main.bundleIdentifier!, category: .pointsOfInterest)
And logged the range:
@IBAction func didTapProcessButton(_ sender: Any) {
os_signpost(.begin, log: log, name: #function)
let final = replaceColor(frontImage: circleImage, backImage: squareImage, sourceColor: [.blue], withDestColor: .green, tolerance: 0.25)
os_signpost(.end, log: log, name: #function)
processedImageView.image = final
}
Then I could easily zoom into just that interval in my code using the “Points of Interest” tool. And having done that, I can switch to the “Time Profiler” tool, and it shows that 49.3% of the time was spent in contains and 24.9% of the time was spent in the UIColor initializer.
I can also double click on the replaceColor function in the above call tree, and it will show this to me in my code (for debug builds, at least). This is another way of seeing the same information shown above:
So, regarding UIColor issue, in Change color of certain pixels in a UIImage, I explicitly use UInt32 representation of the color (and have a struct to provide user-friendly interaction with this 32-bit integer). I do this to enjoy efficient integer processing, and avoiding UIColor. In that case, processing colors in a 1080×1080 px image takes 0.03 seconds (avoiding the UIColor to-and-fro for each pixel).
The bigger issue is that contains is very inefficient. If you must use contains sort of logic (once you are using UInt32 representations of your colors), I would suggest using a Set, with O(1) performance, rather than Array (with O(n) performance).
But even with that, contains is an inefficient approach. (In my example, my array had only one item.) I see an unused tolerance parameter, and wonder if you might consider doing this mathematically rather than looking up colors in some collection.
"domain": "codereview.stackexchange",
"id": 42033,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, image, swift, ios, color",
"url": null
} |
pcl
make[2]: Entering directory `/home/nutan/new/my_pcl/build'
make[3]: Entering directory `/home/nutan/new/my_pcl/build'
make[3]: Leaving directory `/home/nutan/new/my_pcl/build'
[ 0%] Built target rospack_genmsg_libexe
make[3]: Entering directory `/home/nutan/new/my_pcl/build'
make[3]: Leaving directory `/home/nutan/new/my_pcl/build'
[ 0%] Built target rosbuild_precompile
make[3]: Entering directory `/home/nutan/new/my_pcl/build'
make[3]: Leaving directory `/home/nutan/new/my_pcl/build'
make[3]: Entering directory `/home/nutan/new/my_pcl/build'
[100%] Building CXX object CMakeFiles/my_pcl.dir/src/ttt.o
In file included from /opt/ros/fuerte/include/pcl-1.5/pcl/visualization/pcl_visualizer.h:49:0,
from /home/nutan/new/my_pcl/src/ttt.cpp:5:
/opt/ros/fuerte/include/pcl-1.5/pcl/visualization/common/common.h:39:24: fatal error: vtkCommand.h: No such file or directory
compilation terminated.
make[3]: *** [CMakeFiles/my_pcl.dir/src/ttt.o] Error 1
make[3]: Leaving directory `/home/nutan/new/my_pcl/build'
make[2]: *** [CMakeFiles/my_pcl.dir/all] Error 2
make[2]: Leaving directory `/home/nutan/new/my_pcl/build'
make[1]: *** [all] Error 2 | {
"domain": "robotics.stackexchange",
"id": 9945,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pcl",
"url": null
} |
organic-chemistry, spectroscopy, analytical-chemistry, nmr-spectroscopy
Obviously there is some guess work in here and if it was an important project I'd do some further experiments and look for some additional model compounds in order to confirm these assignments and remove the discrepancies, but this is a good start. | {
"domain": "chemistry.stackexchange",
"id": 1369,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, spectroscopy, analytical-chemistry, nmr-spectroscopy",
"url": null
} |
python, array
"""
if isinstance(key, slice):
start, step, end = key.start, key.step, key.stop
if start is None: start = 0
if end is None: end = len(self)
if step not in [None, 1]: raise ValueError("Custom steps not allowed: "+repr(key))
contained_by, edge_overlaps, contains = self._find_overlapping_ranges(start, end)
if contains:
if not value:
self.set_ranges.remove(contains)
self.set_ranges.append((contains[0], start))
self.set_ranges.append((end, contains[1]))
return
if value:
for overlap_start, overlap_end in edge_overlaps:
if overlap_start < start:
start = overlap_start
self.set_ranges.remove((overlap_start, overlap_end))
elif overlap_end > end:
end = overlap_end
self.set_ranges.remove((overlap_start, overlap_end))
else:
raise Exception("Logic Error!")
else:
for overlap_start, overlap_end in edge_overlaps:
if overlap_start < start:
self.set_ranges.remove((overlap_start, overlap_end))
self.set_ranges.append((overlap_start, start))
elif overlap_end > end:
self.set_ranges.remove((overlap_start, overlap_end))
self.set_ranges.append((end, overlap_end))
else:
raise Exception("Logic Error!")
for set_range in contained_by:
self.set_ranges.remove(set_range)
if value:
self.set_ranges.append((start, end))
else:
raise ValueError("Single element assignment not allowed") | {
"domain": "codereview.stackexchange",
"id": 2571,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, array",
"url": null
} |
Since $$\sum |a_n|$$ converges, there exists a rank $$N$$ such that $$\forall n\geqslant N$$, $$|a_n|<1$$. Then $$\forall n\geqslant N, |a_n|^2 \leqslant |a_n|$$. By theorem of comparison between positive terms series, we get that $$\sum |a_n|^2$$ converges.
You can prove it by contraposition. If $$\sum n^2 a_n$$ converges then $$n^2 a_n \rightarrow 0$$ so there exists a rank $$N$$ such that $$\forall n\geqslant N, n^2 |a_n| < 1$$, so $$\forall n\geqslant N, |a_n|<\frac{1}{n^2}$$. By comparison, the series converges absolutely.
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534382002797,
"lm_q1q2_score": 0.8001127318100041,
"lm_q2_score": 0.8152324915965392,
"openwebmath_perplexity": 274.9124778200695,
"openwebmath_score": 0.993039071559906,
"tags": null,
"url": "https://math.stackexchange.com/questions/3969176/assertions-about-convergence-divergence-of-infinite-series"
} |
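The comparison argument in the first answer is easy to illustrate numerically. Here $a_n = 1/n^2$ is an arbitrarily chosen absolutely convergent series; past the rank where $|a_n| < 1$, each $|a_n|^2$ is dominated termwise by $|a_n|$:

```python
# a_n = 1/n^2: partial sums of |a_n| stay bounded (they tend to pi^2/6),
# so the termwise-dominated partial sums of a_n^2 are bounded too
terms = [1.0 / n ** 2 for n in range(1, 10001)]

partial_abs = sum(abs(a) for a in terms)  # partial sum of |a_n|
partial_sq = sum(a * a for a in terms)    # partial sum of |a_n|^2
```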
algorithms, graphs
Title: Finding independent sets so that all nodes are hit frequently I have a problem, and I appreciate it if you could share your thoughts.
Assume that I have a graph. Assume that I have $k$ iterations. I want to find only one independent set (IS) in the graph in each iteration. Then, I will increase a counter for those vertices that are in the selected IS. Initially, the counter for all vertices of the graph are zero.
With an integer programming formulation, can we choose one independent set (provided that we know the set of all ISs) in every iteration, so that the counter of each vertex $i$ in the graph is at least $r_i$ over $k$ iterations (clearly $r_i \leq k$)? In other words, we can make a plan, with IP, so that each vertex $i$ has been selected $r_i$ times over $k$ iterations. Also, with a simple modification, the minimum value of $k$ can be included in the IP formulation. I aim at finding the minimum $k$ for which all $r_i$'s are satisfied.
Now the main concern is that the above formulation needs the set of all ISs as an input. Finding this set, however, is an NP-hard problem. So, I was wondering if there is any way to bypass the need for having all ISs? For sure, many heuristic algorithms can be developed (one example is proposed in comments below). However, I am interested in finding an algorithm for which I can say something analytically, e.g. a convergence proof, optimality gap, etc. Approach 1: Use ILP
If you want an algorithm that might work in practice as long as the graph is not too large, one approach is to formulate this as an integer linear problem.
That's pretty straightforward in this case. We'll have $k|V|$ integer variables. In particular, $x_{v,i}$ will be $1$ if the vertex $v$ is included in the independent set in the $i$th iteration and $0$ otherwise.
First, let's encode the constraint that the $x_{\cdot,i}$ have to encode an independent set in each iteration. We get $0 \le x_{v,i} \le 1$ and | {
"domain": "cs.stackexchange",
"id": 3095,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs",
"url": null
} |
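For intuition, the scheduling question can be brute-forced on tiny graphs. This is only an illustration of the problem statement, not of the ILP approach: it enumerates all independent sets (exponential in general, which is exactly the concern raised in the question) and tries increasing $k$:

```python
from itertools import combinations, product

def independent_sets(n, edges):
    # enumerate all independent sets of a small graph on vertices 0..n-1
    sets_ = []
    for r in range(n + 1):
        for comb in combinations(range(n), r):
            if all(not (u in comb and v in comb) for u, v in edges):
                sets_.append(frozenset(comb))
    return sets_

def min_iterations(n, edges, r, k_max=6):
    # smallest k such that some plan of one IS per iteration hits each
    # vertex i at least r[i] times; brute force over all plans
    iss = independent_sets(n, edges)
    for k in range(1, k_max + 1):
        for plan in product(iss, repeat=k):
            counts = [sum(v in s for s in plan) for v in range(n)]
            if all(c >= need for c, need in zip(counts, r)):
                return k
    return None
```

On the path 0—1—2 with requirements $r = (1, 2, 1)$, the middle vertex can only be hit by the singleton $\{1\}$, so two iterations never suffice and the minimum is $k = 3$ (e.g. the plan $\{0,2\}, \{1\}, \{1\}$).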
The formal definition of an inner product is a function that associates two vectors with a number, subject to some rules. See this for the details.
These are general definitions which work on all vector spaces. The links above give examples of vector spaces that may not be familiar. E.G. The set of all functions of the form $$y = ax^2 + bx + c$$ is a 3 dimensional vector space.
The most familiar vector spaces are N dimensional Euclidian spaces. These are normed vector spaces, where the norm matches the everyday definition of distance.
The dot product is the inner product on these spaces that matches the everyday definition of orthogonality and angle. See this Wikipedia article.
How, would I compute their dot product?
You pretty much have to convert them to the same basis system. You can multiply them out and get nine different terms, and then find the dot product in terms of the nine dot products of the basis vectors, but the math is pretty much the same as converting to the same coordinate system.
In particular, is there a more formal/abstract/generalized definition of the dot product (that would allow me to compute e1→⋅e′1→ without converting the vectors to the same coordinate system)?
The value of $$\vec{e_1} \cdot \vec{e_1'}$$ is an empirical value. You can't calculate it simply from a definition.
Even if I did convert the vectors to the same coordinate system, why do we know that the result will be the same if I multiply the components in the primed system versus in the unprimed system?
Given a physical system in which "length" and "angle" are defined, the dot product is invariant under rotations and reflections, i.e. orthonormal transformations. So given two coordinate systems, as long the axes are orthogonal to each other within each coordinate system, and the two coordinate systems have the same origin and the same scale (one unit is the same length, regardless of which direction or coordinate system), dot products will be the same. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.960951711746926,
"lm_q1q2_score": 0.8129098257064014,
"lm_q2_score": 0.8459424295406088,
"openwebmath_perplexity": 231.80618080282196,
"openwebmath_score": 0.9279655814170837,
"tags": null,
"url": "https://physics.stackexchange.com/questions/479656/formal-definition-of-dot-product"
} |
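The invariance claim in the last paragraph is easy to check numerically in 2D: rotating both vectors by the same angle (i.e. expressing them in a rotated orthonormal coordinate system) leaves the dot product unchanged. The vectors and angle below are arbitrary:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rotate(v, theta):
    # orthonormal change of coordinates: rotation by theta in the plane
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

u, v = (1.0, 2.0), (3.0, -1.0)
theta = 0.7  # arbitrary angle
```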
c#, file-system, event-handling
Title: FileSystemWatcher firing multiple events Like others (FileSystemWatcher Changed event is raised twice) I am facing the problem that events of FileSystemWatcher are raised twice. I need to catch all non-duplicate events of the watcher in real time. I came up with this. So, I want to know whether it would be efficient/overkill/buggy to use this class.
class ModifiedFileSystemWatcher:FileSystemWatcher
{
public new event RenamedEventHandler Renamed;
public new event FileSystemEventHandler Deleted;
public new event FileSystemEventHandler Created;
public new event FileSystemEventHandler Changed;
class BooleanWrapper{
public bool value { get; set; }
}
//stores references to previously fired events with the same file source
Dictionary<string, BooleanWrapper> raiseEvent = new Dictionary<string, BooleanWrapper>();
public int WaitingTime { get; set; } //waiting time, any new event raised with same file source will discard previous one
public ModifiedFileSystemWatcher(string filename="", string filter="*.*"):base(filename,filter)
{
base.Changed += ModifiedFileSystemWatcher_Changed;
base.Created += ModifiedFileSystemWatcher_Created;
base.Deleted += ModifiedFileSystemWatcher_Deleted;
base.Renamed += ModifiedFileSystemWatcher_Renamed;
WaitingTime = 100;
}
void PreventDuplicate(FileSystemEventHandler _event, FileSystemEventArgs e)
{
if (_event != null)
{
//create a reference
BooleanWrapper w = new BooleanWrapper() { value = true }; //this event will be fired
BooleanWrapper t; //tmp
if (raiseEvent.TryGetValue(e.FullPath, out t))
{
//a previous event occurred which is waiting [delayed by WaitingTime]
t.value = false; //set its reference to false; that event will be skipped
t = w;//store current reference in dictionary
}
else raiseEvent[e.FullPath] = w; //no previous event, store our reference | {
"domain": "codereview.stackexchange",
"id": 7400,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, file-system, event-handling",
"url": null
} |
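The delay-and-discard idea behind the C# class above translates to other languages. A hedged Python sketch (class, method, and parameter names are illustrative, not the C# API): each event for a key is delayed by a short wait, and fires only if no newer event for the same key has superseded it in the meantime.

```python
import threading
import time

class EventDeduplicator:
    """Delay each event briefly; drop it if a newer event for the same
    key arrives during the wait (sketch of the WaitingTime idea above)."""

    def __init__(self, wait=0.1):
        self.wait = wait
        self.lock = threading.Lock()
        self.latest = {}                      # key -> token of newest event

    def submit(self, key, callback):
        token = object()
        with self.lock:
            self.latest[key] = token          # supersede any pending event

        def fire():
            time.sleep(self.wait)
            with self.lock:
                newest = self.latest.get(key) is token
                if newest:
                    del self.latest[key]
            if newest:                        # only the last event fires
                callback(key)

        t = threading.Thread(target=fire)
        t.start()
        return t
```

Like the BooleanWrapper trick in the C# code, identity of the stored token is what marks an event as superseded; the superseded thread wakes up, sees a different token, and does nothing.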
acid-base, aqueous-solution, teaching-lab
Title: Why bother to standardize a solution? I have been preparing lessons for some of my students about preparing acidic and basic solutions, and I keep finding pre-made labs in which the students "standardize a basic solution" by titrating it with an acid of "known" concentration.
The problem is that the students create a solution of NaOH by measuring a certain mass and adding it to a volumetric flask, filling with water to the desired volume. But the students will do the same thing with an acid (potassium hydrogen phthalate, for example). They will measure out a specific mass, do the calculations to determine the molarity of the KHP solution, and then do the titration to determine the molarity of the NaOH solution. But if the students took the mass of the NaOH in the beginning, can they not just use that information to determine the approximate molarity of the solution?
As the title says, why bother to standardize? If our NaOH solution had some room for error in measurement and calculation, surely titrating against another solution of "known" concentration would only introduce more error, no? Usually, if we're doing these for basic application or experimentation, standardization doesn't really matter. But when it comes to anything analytical where you start to involve calculations, standardization is a must. This is done with NaOH because it's hygroscopic and readily sucks up the moisture in the air. So what is being weighed isn't totally NaOH, but also the moisture that it has absorbed. So almost always the concentration will be lesser than what is sought to be prepared because of this. Tedious as it may be (because you also have to heat up the KHP for around an hour before titrating), standardization brings you closer to the true concentration, but not exactly on the mark. At least in the end, calculations using the standardized concentrations will be analytical. This is why they never introduce standardization in general chemistry courses, I only got to grips with it when I took up Analytical Chemistry, and back then it was really tedious to do.
For example, I actually experienced this before where we had to prepare a 1M stock of NaOH, but after standardization it turned out to be around 0.8 (maybe due to the humidity in the lab at the time). | {
"domain": "chemistry.stackexchange",
"id": 7995,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "acid-base, aqueous-solution, teaching-lab",
"url": null
} |
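The effect described (hygroscopic NaOH making the nominal concentration an overestimate) is simple arithmetic. The weighed mass, target volume, and moisture fraction below are assumed purely for illustration:

```python
M_NaOH = 39.997          # g/mol, molar mass of NaOH
mass_weighed = 4.00      # g read off the balance (assumed)
water_fraction = 0.05    # assumed fraction of the weighed mass that is water
volume_L = 1.00          # L of solution prepared (assumed)

# concentration computed naively from the balance reading
nominal_M = mass_weighed / M_NaOH / volume_L
# concentration standardization would actually find
actual_M = mass_weighed * (1 - water_fraction) / M_NaOH / volume_L
```

With these numbers the nominal value is about 0.100 M while the true value is about 0.095 M, the same direction of error as the 1 M stock that standardized to around 0.8 M in the anecdote above.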
java, unit-testing, connect-four
@Test
public void aPlayerConnectingFourVerticalTokensOfSameColorShouldWinTheGame() throws Exception {
connectFourGame.putTokenInColumn(player2Token, 0);
connectFourGame.putTokenInColumn(player2Token, 0);
connectFourGame.putTokenInColumn(player2Token, 0);
try {
connectFourGame.putTokenInColumn(player2Token, 0);
printFailingMessageForNotWinningTheGame(player2Token);
} catch (GameIsOverException ignore) {
}
}
@Test
public void aPlayerNotConnectingFourVerticalTokensOfSameColorShouldNotWinTheGameYet() throws Exception {
connectFourGame.putTokenInColumn(player2Token, 0);
} | {
"domain": "codereview.stackexchange",
"id": 27775,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, unit-testing, connect-four",
"url": null
} |
c++, performance, linked-list, stack
std::vector<int> stack;
stack.reserve (n);
char line[256];
for (int i = 0 ; i < m; i++)
{
std::cin.getline (line, sizeof(line));
if (line[1] == 'O')
{
std::cout << stack.back() << '\n';
stack.pop_back ();
}
else
{
stack.push_back (std::atoi(line+4));
}
}
} | {
"domain": "codereview.stackexchange",
"id": 17706,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, linked-list, stack",
"url": null
} |
ros, navigation, global-planner
Title: How to log a global path defined in "/map" frame
Hi, everyone.
I'm using the navigation stack and would like to log a global path defined in the "/map" frame.
How can I do it?
Navigation stack's setup has already done and it works well.
I thought that a global path defined in "/map" frame could be got
by subscribing "/move_base/TrajectoryPlannerROS/global_plan" topic
and transforming it from "/odom" frame to "/map" frame
with tf::TransformListener::transformPose() method.
However it didn't work well and I had a following error.
Frame id /map does not exist! Frames (1):
A result of "rosrun tf view_frames" shows that there is a "/map" frame.
However, a result of "rosrun tf tf_monitor" doesn't show a "/map" frame
(Perhaps, because it is not broadcasted continuously?).
How can I log a global plan (an array of geometry_msgs::PoseStamped) defined in a "/map" frame?
Thanks for your attention.
Originally posted by moyashi on ROS Answers with karma: 721 on 2013-02-26
Post score: 0
One thing you could try is to wait a little for the transform:
listener.waitForTransform(destination_frame, original_frame, ros::Time(0), ros::Duration(10.0) );
This would wait at most 10 seconds for the transform to show up.
Of course that won't help if there is some fundamental issue with your tf setup. But generally, for a newly started node you should call waitForTransform before you try to use transforms for the first time. Also you should be sure to keep the TransformListener alive over the whole runtime of your node. If you require the tf for a specific timestamp, be sure to adjust the third param of the waitForTransform call, too. ros::Time(0) means "the latest transform" in the tf context.
Originally posted by Achim with karma: 390 on 2013-02-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13082,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, navigation, global-planner",
"url": null
} |
The Cooley-Tukey algorithm can be derived as follows. If $$n$$ can be factored into $$n=n_1n_2$$, the equation can be rewritten by letting $$l=l_1n_2+l_2$$ and $$k=k_1+k_2n_1$$. We then have:
$\mathbf{Y}\left [ k_1+k_2n_1\right ]=\sum_{l_2=0}^{n_2-1}\left [ \left ( \sum_{l_1=0}^{n_1-1}\mathbf{X}[l_1n_2+l_2]\omega _{n_1}^{l_1k_1}\right ) \omega _{n}^{l_2k_1}\right ]\omega _{n_2}^{l_2k_2}$
where $$k_{1,2}=0,...n_{1,2}-1$$. Thus, the algorithm computes $$n_2$$ DFTs of size $$n_1$$ (the inner sum), multiplies the result by the so-called twiddle factors $$\omega _{n}^{l_2k_1}$$ and finally computes $$n_1$$ DFTs of size $$n_2$$ (the outer sum). This decomposition is then continued recursively. The literature uses the term radix to describe an $$n_1$$ or $$n_2$$ that is bounded (often constant); the small DFT of the radix is traditionally called a butterfly. | {
"domain": "libretexts.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9893474891602739,
"lm_q1q2_score": 0.8244475878368711,
"lm_q2_score": 0.8333245870332531,
"openwebmath_perplexity": 1160.4492122227443,
"openwebmath_score": 0.652184247970581,
"tags": null,
"url": "https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.2%3A_Review_of_the_Cooley-Tukey_FFT"
} |
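The recursion specializes nicely to $n_1 = 2$ (radix-2). Below is a minimal Python sketch using the usual $\omega_n = e^{-2\pi i/n}$ sign convention (the text leaves the sign unspecified), with the twiddle multiplication and butterfly marked:

```python
import cmath

def fft(x):
    # recursive radix-2 Cooley-Tukey; len(x) must be a power of two
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # n/2-point DFT of even-indexed samples
    odd = fft(x[1::2])    # n/2-point DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t                            # butterfly
        out[k + n // 2] = even[k] - t
    return out
```

Checking the result against a naive $O(n^2)$ DFT confirms the decomposition; the even/odd split is the $l = l_1 n_2 + l_2$ index map with $n_1 = 2$.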
c++, beginner, playing-cards
Adding Cards to a deck. (Based on comment below)
#include <vector>

class Card
{
    public:
        enum Ranks { Ace = 1, Ten = 10, Jack, Queen, King };
        enum Suits { Spades, Hearts, Diamonds, Clubs };
        Card(Ranks rank, Suits suit) {}
};

// Need to add the ++ operators for Ranks/Suits
Card::Ranks& operator++(Card::Ranks& r)
{
    return r = Card::Ranks(static_cast<int>(r) + 1);
}

Card::Suits& operator++(Card::Suits& s)
{
    return s = Card::Suits(static_cast<int>(s) + 1);
}

int main()
{
    std::vector<Card> deck;

    // Does not look that verbose to me.
    for (Card::Ranks rank = Card::Ace; rank <= Card::King; ++rank)
    {
        for (Card::Suits suit = Card::Spades; suit <= Card::Clubs; ++suit)
        {
            deck.push_back(Card(rank, suit));
        }
    }

    // Compared to (note: int does not convert implicitly to an enum,
    // so explicit casts are needed in this version)
    for (int rank = Card::Ace; rank <= Card::King; ++rank)
    {
        for (int suit = Card::Spades; suit <= Card::Clubs; ++suit)
        {
            deck.push_back(Card(static_cast<Card::Ranks>(rank),
                                static_cast<Card::Suits>(suit)));
        }
    }
} | {
"domain": "codereview.stackexchange",
"id": 1427,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, playing-cards",
"url": null
} |
magnetic-fields, electric-fields, magnetic-moment, elementary-particles
$$\begin{align}
\mu_{Hydrogen\,atom} & \sim 1\mu_e\\
\mu_{Holmium\,atom} & \sim 11\mu_e
\end{align}$$
but this is in no way more fundamental than any other choice of unit for magnetic moments, e.g. J/T, Bohr magneton, nuclear magneton, … | {
"domain": "physics.stackexchange",
"id": 91882,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "magnetic-fields, electric-fields, magnetic-moment, elementary-particles",
"url": null
} |
java, algorithm, concurrency, sieve-of-eratosthenes, bitset
/**
* Linked Nodes. As a minor efficiency hack, this class opportunistically inherits from AtomicReference, with the atomic
* ref used as the "next" link.
*
* Nodes are in doubly-linked lists. There are three kinds of special nodes, distinguished by:
* - The list header has a null prev link
* - The list trailer has a null next link
* - A deletion marker has a prev link pointing to itself.
* All three kinds of special nodes have null element fields.
*
* Regular nodes have non-null element, next, and prev fields. To avoid visible inconsistencies when deletions overlap
* element replacement, replacements are done by replacing the node, not just setting the element.
*
* Nodes can be traversed by read-only ConcurrentLinkedDeque class operations just by following raw next pointers, so
* long as they ignore any special nodes seen along the way. (This is automated in method forward.) However, traversal
* using prev pointers is not guaranteed to see all live nodes since a prev pointer of a deleted node can become
* unrecoverably stale.
*/
public class Node
extends AtomicReference<Node> {
private volatile Node prev;
int element;
/** Creates a node with given contents */
Node(int element, Node next, Node prev) {
super(next);
this.prev = prev;
this.element = element;
}
/** Creates a marker node with given successor */
Node(Node next) {
super(next);
this.prev = this;
this.element = 0;
}
/**
* Gets next link (which is actually the value held as atomic reference).
*/
private Node getNext() {
return get();
}
/**
* Sets next link
*
* @param n
* the next node
*/
void setNext(Node n) {
set(n);
}
/**
* compareAndSet next link
*/
private boolean casNext(Node cmp, Node val) {
return compareAndSet(cmp, val);
}
/**
* Gets prev link
*/
private Node getPrev() {
return prev;
} | {
"domain": "codereview.stackexchange",
"id": 21091,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, concurrency, sieve-of-eratosthenes, bitset",
"url": null
} |
You can use the contraction mapping estimates directly.
You have the estimate $\|\bar{x}_a - f_a^{(k)}(x_0)\| \le {(1-\epsilon)^k \over \epsilon} \|f_a(x_0) - x_0\|$, so we can see that if we let $B = \sup_{a \in B(\hat{a},1)} \|f_a(x_0) - x_0\|$, then $\|\bar{x}_a - f_a^{(k)}(x_0)\| \le {(1-\epsilon)^k \over \epsilon} B$ for all $a \in B(\hat{a},1)$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9861513914124559,
"lm_q1q2_score": 0.8260200963133342,
"lm_q2_score": 0.8376199673867852,
"openwebmath_perplexity": 177.3434217407299,
"openwebmath_score": 0.9769796133041382,
"tags": null,
"url": "https://math.stackexchange.com/questions/700725/continuity-of-fixed-point/700752"
} |
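The estimate above can be checked numerically for a concrete one-dimensional contraction $f_a(x) = (1-\epsilon)x + a$, whose contraction constant is $1-\epsilon$ and whose fixed point is $\bar{x}_a = a/\epsilon$; the names below are illustrative, not from the original post.

```python
# Numerical check of the contraction-mapping estimate
#   |xbar - f^(k)(x0)| <= (1 - eps)^k / eps * |f(x0) - x0|
# for f(x) = (1 - eps) * x + a with eps = 0.5, a = 1 (fixed point xbar = 2).

def make_f(a, eps=0.5):
    return lambda x: (1 - eps) * x + a

def iterate(f, x0, k):
    x = x0
    for _ in range(k):
        x = f(x)
    return x

eps = 0.5
a = 1.0
f = make_f(a, eps)
x0 = 10.0
xbar = a / eps                      # fixed point: 2.0
C = abs(f(x0) - x0)                 # |f_a(x0) - x0| = 4.0

for k in range(1, 20):
    err = abs(xbar - iterate(f, x0, k))
    bound = (1 - eps) ** k / eps * C
    assert err <= bound + 1e-12     # the estimate holds at every step
```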
filters, filter-design, infinite-impulse-response
I chose a random filter to show the result of the suggested procedure in the figure below (blue: original impulse response; red: new impulse response shifted to the right by one sample for comparison):
The left column below shows the original numerator coefficients alongside the modified numerator coefficients, aligned in such a way that it becomes obvious that for higher indices the new coefficients equal the original coefficients. (This is the case because in this example $M>N$; c.f. Eq. $(7)$ of the new answer above, including the text below Eq. $(7)$).
-0.67010
0.65673 0.92477
-0.61219 -0.74621
-0.60645 -0.53944
-0.50316 -0.50316
0.82855 0.82855
0.20396 0.20396
1.15680 1.15680
-0.29830 -0.29830
0.10605 0.10605
The denominator coefficients of both filters are
a = [1, 0.4, -0.2, 0.1]
EDIT:
This method can be generalized to remove $K$ initial values of the original impulse response, even if $K>M$:
Compute $L=\max(M,N)$ coefficients of the original impulse response $h[n]$ for indices $n=K,K+1,\ldots,K+L-1$
Compute the new numerator coefficients according to Eq. $(4)$ with $\tilde{h}[n]=h[n+K],\qquad n=0,1,\ldots,L-1$
The resulting numerator coefficients will generally have several trailing zeros (or values sufficiently close to zero) which can be removed.
The following example shows the result of removing $20$ initial values of the impulse response of the filter with coefficients
[b,a] =
1.000000 1.000000
2.000000 -0.187178
3.000000 -0.075960
-1.000000 0.812250
The numerator coefficients of the new filter are | {
"domain": "dsp.stackexchange",
"id": 6618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, filter-design, infinite-impulse-response",
"url": null
} |
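Equation $(4)$ is not reproduced in this excerpt, but the standard relation $B(z)=A(z)H(z)$, i.e. $b[n]=\sum_m a[m]\,h[n-m]$, yields the same construction; the sketch below (hypothetical helper names, pure Python) computes a new numerator whose filter, with the unchanged denominator, has impulse response $h[n+K]$.

```python
def iir_impulse_response(b, a, n):
    """First n samples of the impulse response of the filter B(z)/A(z)."""
    h = []
    for i in range(n):
        acc = b[i] if i < len(b) else 0.0
        for m in range(1, len(a)):
            if i - m >= 0:
                acc -= a[m] * h[i - m]
        h.append(acc / a[0])
    return h

def remove_initial_samples(b, a, K):
    """New numerator so that B_new(z)/A(z) has impulse response h[n+K]."""
    L = max(len(b), len(a))
    h = iir_impulse_response(b, a, K + L)
    h_shift = h[K:K + L]                 # h~[n] = h[n+K], n = 0..L-1
    # b_new[n] = sum_m a[m] * h~[n-m]  (the relation B = A*H applied to h~)
    b_new = [sum(a[m] * h_shift[n - m]
                 for m in range(len(a)) if 0 <= n - m < L)
             for n in range(L)]
    return b_new
```

With, e.g., b = [1, 2, 3, -1] and a = [1, 0.4, -0.2, 0.1] as in the examples above, the impulse response of the new filter reproduces the original one shifted left by K samples.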
roslaunch, roscpp, ros-kinetic
</launch>
Comment by surabhi96 on 2020-06-18:
Also, I managed to get the most recent log file (although it was not the same name as mentioned in the console) and it says: error: [Errno 111] Connection refused
I think the error is just a node trying to connect to a service that is already shut down. Are you running the bag and the node on different machines/networks? Have you properly set up the ROS_MASTER_URI/ROS_IP environment variables? Check this.
Besides that, exit code -11 means a SIGSEGV, which basically means a segmentation fault; maybe something in your code is crashing the execution. Just to be sure, you can debug it with GDB.
Originally posted by Weasfas with karma: 1695 on 2020-06-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35149,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "roslaunch, roscpp, ros-kinetic",
"url": null
} |
solutions, filtering
Title: Home made dispersion/solution with detectable >0.1μm particles Sorry for the possibly wrong terminology. I'm not a chemist and I'm not a native English speaker.
Please edit the question as needed. Thank you.
Is it possible to buy or prepare a dispersion with the following properties?
the particles should be >0.1μm in size, for example 0.11μm - 0.2μm
there must be a way to detect them - even in a small amount (colour, taste, chemical reaction with something else, ...)
it mustn't be a "glue"; it must be possible to flush them completely away with water or something else readily available
it mustn't be a toxic or corrosive substance
the dispersion (or components to prepare it) should be readily available (drugstore, grocery etc.)
the total price shouldn't be higher than $20 (I know this is difficult with prices different in every country - this is just a guideline)
Why do I need it?
There's a very slight chance that a Sawyer Micro Squeeze Water Filter might have been exposed to a below zero temperature and might have been damaged by frozen residual water.
The manufacturer says there's no way to test the filter functionality in that case and a new filter should be bought instead. I wonder if this is really needed or whether there's a way to test the filter at home.

Try to get some clay and shake it vigorously with water. Let it settle for several hours. Test your filter with the supernatant water and collect the filtrate in a very clean glass tumbler. Colloids have an interesting property of scattering light. In a dark room, try to shine light (an ordinary flashlight might work, or perhaps an ordinary laser pointer used in presentations/offices). Be careful, never ever look at the pointer directly. Look at the filtrate at $90^\circ$ (right angles to the light beam); if you see a light beam travelling in the water, it means that colloidal particles are seeping through the filter and the filter is damaged.
Read more about the Tyndall Effect https://www.sciencesource.com/archive/Tyndall-Effect-SS2423500.html or on Wikipedia before doing anything. | {
"domain": "chemistry.stackexchange",
"id": 12575,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "solutions, filtering",
"url": null
} |
free-energy
Title: Minimizing Gibbs free energy with Helmholtz free energy expressions If I have an expression for the Helmholtz free energy (from statistical associating fluid theory for a polymer system), can I still find the Gibbs free energy minimum under constant $T,P$ conditions?
Seems that since
$\mu_i = \left(\frac{\partial A}{\partial n_i}\right)_{T,V,n_j} = \left(\frac{\partial G}{\partial n_i}\right)_{T,P,n_j}$
where $A$ is the Helmholtz free energy and $G$ is the Gibbs free energy and $\mu_i$ is the chemical potential of component $i$.
If I can just find the expressions for $\mu_i=\left(\frac{\partial A}{\partial n_i}\right)_{T,V,n_j}$ I should be able to use this in the expression for $G = \sum_i \mu_i n_i$ and find its minimum, without having to worry about an equation of state to relate change in volume ($dV$) with respect to constant pressure ($P$)?
Edit I should clarify that I wonder about the Gibbs free energy because I'm interested in the constant $T,P$ (isothermal, isobaric) case, and especially in the case of liquids. If you needed to introduce an equation of state to parameterize the change in $V$ as a function of $T,P$, this would not be too difficult for an ideal gas, but for liquids it would be a challenge. So in this case, I wonder how to handle the $PV$ term (for constant $P$) in the Helmholtz free energy, which does not appear in the Gibbs.

In short, I agree with Curt F.'s answer above. In general, you need a suitable equation of state (EoS) to handle this calculation.
From the differential form of the fundamental equation of thermodynamics (i.e. the differential form of the internal energy $U(S,V,N)$) | {
"domain": "chemistry.stackexchange",
"id": 3348,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "free-energy",
"url": null
} |
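A compact sketch of the standard Legendre-transform relation behind the equality of the two partial derivatives quoted above (nothing here is specific to SAFT):

```latex
G = A + PV, \qquad
dA = -S\,dT - P\,dV + \sum_i \mu_i\,dn_i
\;\Longrightarrow\;
dG = dA + P\,dV + V\,dP = -S\,dT + V\,dP + \sum_i \mu_i\,dn_i ,
```

so the coefficient of $dn_i$ is the same $\mu_i$ in both sets of natural variables, and Euler's theorem applied to the extensive function $G(T,P,\mathbf{n})$ gives $G=\sum_i \mu_i n_i$.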
Footnotes
[1] Suppose that $$\|S \| < 1$$. Take any nonzero vector $$x$$, and let $$r := \|x\|$$. We have $$\| Sx \| = r \| S (x/r) \| \leq r \| S \| < r = \| x\|$$. Hence every point is pulled towards the origin.
| {
"domain": "quantecon.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9937100989807552,
"lm_q1q2_score": 0.8876905691374989,
"lm_q2_score": 0.8933093968230773,
"openwebmath_perplexity": 430.0254335697078,
"openwebmath_score": 0.9785626530647278,
"tags": null,
"url": "https://lectures.quantecon.org/jl/linear_algebra.html"
} |
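A numerical companion to footnote [1], assuming NumPy is acceptable for illustration: for a matrix whose operator norm is below one, every nonzero vector is strictly contracted.

```python
# If ||S|| < 1 (spectral/operator norm), then ||Sx|| < ||x|| for all x != 0:
# every point is pulled towards the origin.
import numpy as np

S = np.array([[0.3, 0.2],
              [0.1, 0.4]])
assert np.linalg.norm(S, 2) < 1          # ord=2 gives the operator norm

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=2)               # nonzero with probability 1
    assert np.linalg.norm(S @ x) < np.linalg.norm(x)
```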
homework-and-exercises, newtonian-mechanics, friction, free-body-diagram, rigid-body-dynamics
Title: Paradox in the two block problem UPDATE (regarding duplicate) :
This question is not a duplicate of another question. Sure, the situation in both questions is the same and, yes, both questions ultimately provide a methodology for solving the problem and finding the correct value of friction, but the moderator should realize that here I was NOT asking for a method to solve for friction. The problem was that I did not even realize that I would have to solve for friction.
I had solved a lot of such problems way back in school and I had got into a habit of assuming static friction to be equal and opposite to whatever force is applied against it (up to the maximum limit of friction). This was a blunder, and this is what made it appear like a "paradox". Moreover, as it turns out, I asked the same problem to a few friends of mine and many of them made the same mistake.
So, essentially, the other question is just asking for a general methodology to solve such problems, while this problem is like a puzzle which presents the user with a methodology of solving the problem by considering different systems and the contradictions that arise from them. I believe a user who knows the general methodology presented in the other question is susceptible to the confusion/paradox that this problem presents.
ORIGINAL QUESTION:
This is a familiar problem with the setting as given below: | {
"domain": "physics.stackexchange",
"id": 22391,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, friction, free-body-diagram, rigid-body-dynamics",
"url": null
} |
python, error-handling, mongodb, amazon-web-services
# Delete the documents in MongoDB
logging.info("Deleting snapshots with metadata_id %s from collection %s", str(metadata_id), collection)
flow_status = dry_run  # nothing to verify on a dry run; avoids an undefined name below
for retry in range(0, 3):
    if dry_run:
        break
    records_count = mongo_count(mongo.client[database][collection], match={"metadata_id": metadata_id})
    deleted_count = mongo_delete(mongo.client[database][collection], {"metadata_id": metadata_id})
    flow_status = deleted_count == records_count
    if not flow_status:
        logging.info("Failed to delete all records of snapshot with metadata_id %s. Deleted %s out of %s "
                     "records", str(metadata_id), deleted_count, records_count)
    else:
        logging.info("Deleted %s records of snapshot with metadata_id %s from collection %s",
                     deleted_count, str(metadata_id), collection)
        break
if not flow_status:
    logging.error("Snapshot archiving flow failed for %s. Failed to delete all snapshot records.",
                  str(metadata_id))
    continue

# Set the archive_status to "ArchivedAndPurged" in the MongoDB metadata collection.
for retry in range(0, 3):
    flow_status = update_archive_status(mongo.client[database][metadata_collection], metadata_collection,
                                        metadata_id, archiving_status.ArchivedAndPurged.name, dry_run)
    if not flow_status:
        logging.info("Failed to update snapshot metadata for %s to %s. Retry no %s", metadata_collection +
                     "/" + str(metadata_id), archiving_status.ArchivedAndPurged.name, retry)
    else:
        logging.info("Updated snapshot metadata for %s to %s", metadata_collection + "/" + str(metadata_id),
                     archiving_status.ArchivedAndPurged.name)
        break
"domain": "codereview.stackexchange",
"id": 41786,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, error-handling, mongodb, amazon-web-services",
"url": null
} |
ros, yaml, getparam
Title: Get the name of a sub-parameter, not the whole structure
I have a YAML file with a structure like the following:
field_a:
field_a1: {field_b1, field_b2, ..., field_bM}
field_a2: {field_b1, field_b2, ..., field_bM}
...
field_aN: {field_b1, field_b2, ..., field_bM}
where field_a and field_bj are known a priori for all j, but field_ai are frequently changed by the user (i.e. can't be hard coded).
If I try to retrieve the parameters field_ai using the C++ method getParam(...) or the command line rosparam get /field_a, what I get is everything in the tree hierarchy where /field_a is the root.
Is there a way to retrieve only the names of these field_ai and not their values? I could create another list with those names, but that would not scale very well...
Thank you for your help!
Originally posted by alextoind on ROS Answers with karma: 217 on 2015-06-28
Post score: 0
Thanks to both! I did not think about XmlRpc and the hint of using the iterator works like a charm!
This is the code for the example above:
std::vector<std::string> field_names;
XmlRpc::XmlRpcValue list;
if (!node_handle_.getParam("/field_a", list)) {
  ROS_ERROR("Can't find '/field_a' in the YAML configuration file.");
  return;
}
for (auto it = list.begin(); it != list.end(); it++) {
  field_names.push_back(it->first);
}
For the sake of completeness, it->second is the nested list {field_b1, field_b2, ..., field_bM}.
Originally posted by alextoind with karma: 217 on 2015-07-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22028,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, yaml, getparam",
"url": null
} |
More generally, a matrix $A$ is non-singular if and only if $0$ is not an eigenvalue of it. This is easily shown as follows:
• if $0$ is an eigenvalue of $A$ then there is some $v\neq 0$ such that $Av= 0v = 0$ and thus $0 \neq v \in \ker(A)$ which shows that $A$ is singular.
• if $A$ is singular then $\ker(A) \neq \{0\}$ and there exists some $v \in \ker(A), v \neq 0$ such that $Av = 0 = 0v$; it follows that $(0,v)$ is an eigenpair of $A$.
In particular, if your matrix has only $i$ and $-i$ as eigenvalues, it must be non-singular. This is the case when the associated characteristic polynomial is $p(t)=t^2+1$.
Edit: Here is a "determinant" approach of the problem. We know that $\lambda$ is an eigenvalue of $A$ if and only if $\det(A-\lambda I)=0$, where $I$ is the identity matrix. This is the reason why we look for the roots of the characteristic polynomial. We also know that $A$ is singular if and only if $\det(A)=0$. It is now clear that $0$ is an eigenvalue of $A$ if and only if $0=\det(A-0I)=\det(A)$.
To make a connection with the first argument, note that $\det(A-\lambda I)=0 \iff \ker(A-\lambda I)\neq \{0\}$ and $\ker(A-\lambda I)$ is the eigenspace associated to the eigenvalue $\lambda$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9783846666070896,
"lm_q1q2_score": 0.8044172404850658,
"lm_q2_score": 0.8221891327004132,
"openwebmath_perplexity": 96.41146745510953,
"openwebmath_score": 0.9648844599723816,
"tags": null,
"url": "https://math.stackexchange.com/questions/842441/a-matrix-i-i-are-eigenvalues/842460"
} |
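As a quick numerical illustration of the last point (assuming NumPy), a matrix with characteristic polynomial $p(t)=t^2+1$ has eigenvalues $\pm i$, so $0$ is not among them and the matrix is non-singular:

```python
# Rotation by 90 degrees has characteristic polynomial t^2 + 1:
# its eigenvalues are +i and -i, and its determinant is 1 (non-singular).
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eigenvalues = np.linalg.eigvals(A)   # +i and -i, in some order
determinant = np.linalg.det(A)       # 1.0, so A is non-singular
```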
reference-request, np-hardness, packing
Proof of Lemma 3.
The proof is an elaboration of @Gamow's answer in the comments.
Use the following equivalent problem formulation: Given a collection $V=(S_1, S_2, \ldots, S_n)$ of subsets of $\{1,2,\ldots, d\}$, where $d$ is constant, partition $V$ into a minimum number of parts, such that within each part each element $j$ occurs in at most $b_j$ subsets. Here is the algorithm.
Fix an input $V=(S_1, \ldots, S_n)$ of subsets of $\{1,2,\ldots, d\}$.
Let $\mathcal S_d$ denote the set of non-empty subsets of $\{1,2,\ldots,d\}$.
Let $\mathcal P_d$ denote the set of possible parts, that is, multi-subsets of $\mathcal S_d$ in which each element $j$ occurs at most $b_j$ times.
The goal is to partition the input $V$ into a minimum number of parts, each of which is in $\mathcal P_d$.
Note that $|\mathcal S_d| < 2^d$ and $|\mathcal P_d| \ll (bd)^{bd}$, so (as $b$ and $d$ are constant) there are constantly many possible subsets, and constantly many possible parts.
For each subset $S\in\mathcal S_d$ and possible part $p\in\mathcal P_d$, let $n(S, p)$ (either 0 or 1) denote the number of times that $S$ occurs in $p$.
For each possible subset $S\in\mathcal S_d$, count the number of times that $S$ occurs in the input $V$. Let $n(S, V)$ denote this number.
Finally, construct and solve the following integer linear program (ILP), with an integer-valued variable $x_p$ for each possible part $p\in\mathcal P_d$: | {
"domain": "cstheory.stackexchange",
"id": 4946,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reference-request, np-hardness, packing",
"url": null
} |
special-relativity, field-theory, lorentz-symmetry, poincare-symmetry
Title: Application of a Poincaré group element to a scalar function Let $f(x)$ be a scalar function and let's say that we want to know how it transforms when it's subjected to a translation (by a vector $a^{\mu}$), rotation and a Lorentz boost. Thus we can write an infinitesimal transformation as:
\begin{equation}
\bar{x}^{\mu} = x^{\mu} + a^{\mu} + \omega^{\mu}_{\nu}x^{\nu} = a^{\mu} + \Lambda^{\mu}_{\nu}x^{\nu}
\end{equation}
where $\omega^{\mu}_{\nu}$ is:
$$\begin{pmatrix}
0 & v^{1} & v^{2} & v^{3} \\
v^{1} & 0 & \theta^{3} & -\theta^{2} \\
v^{2} & -\theta^{3} & 0 & \theta^{1} \\
v^{3} & \theta^{2} & -\theta^{1} & 0 \\
\end{pmatrix}$$
therefore a function $f(x)$ transforms to:
\begin{align}
f(\bar{x}) & = f(x+a+\omega x) \\
& = f(x) + a^{\mu}\partial_{\mu}f(x) + \omega^{\mu}_{\nu}x^{\nu}\partial_{\mu}f(x).
\end{align}
So far so good; the real pain (for me) starts when it is required to use the antisymmetry of $\omega_{\mu\nu}$ (obtained by lowering the index, $\omega_{\mu\nu}=g_{\mu\lambda}\omega^{\lambda}_{\ \nu}$, with $g^{\alpha\beta}=g_{\alpha\beta}=\mathrm{diag}(1,-1,-1,-1)$) to write:
\begin{equation} | {
"domain": "physics.stackexchange",
"id": 65104,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, field-theory, lorentz-symmetry, poincare-symmetry",
"url": null
} |
Re: How many 4 digit codes can be made, if each code can only contain [#permalink]
05 Oct 2010, 12:37
utin wrote:
Hi Bunuel,
why can't i write TOO,OTO,OOT AS
(4^3)*3! , taking the T as one entity ans assuming that 3 things can be arranged in 3! ways???
Just one thing: TOO can be arranged in 3!/2! ways and not in 3! (# of permutations of 3 letters out which 2 O's are identical is 3!/2!), so it would be $$4^3*\frac{3!}{2!}=4^3*3$$.
Hope it's clear.
Re: How many 4 digit codes can be made, if each code can only contain [#permalink]
05 Oct 2010, 12:53
Bunuel wrote:
utin wrote:
Hi Bunuel,
why can't i write TOO,OTO,OOT AS
(4^3)*3! , taking the T as one entity ans assuming that 3 things can be arranged in 3! ways???
Just one thing: TOO can be arranged in 3!/2! ways and not in 3! (# of permutations of 3 letters out which 2 O's are identical is 3!/2!), so it would be $$4^3*\frac{3!}{2!}=4^3*3$$.
Hope it's clear.
I thought about the same, but when I see TOO as three things to be arranged in 3! ways, I also thought that the two O's are two digits which could be two different prime numbers - so why divide by 2!?
This might clear my entire probability confusion, I hope...
| {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9637799441350253,
"lm_q1q2_score": 0.8052222767152837,
"lm_q2_score": 0.8354835371034368,
"openwebmath_perplexity": 2454.699305506171,
"openwebmath_score": 0.4953223466873169,
"tags": null,
"url": "https://gmatclub.com/forum/how-many-4-digit-codes-can-be-made-if-each-code-can-only-contain-102194.html"
} |
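The point under discussion is easy to verify by brute force; a tiny sketch (Python used purely for illustration):

```python
# The multiset {T, O, O} has 3!/2! = 3 distinct arrangements, not 3! = 6,
# because swapping the two identical O's does not give a new arrangement.
from itertools import permutations

arrangements = sorted(set(permutations("TOO")))
# the three distinct arrangements are OOT, OTO, TOO
```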
aromatic-compounds
And from the March's Advanced Organic Chemistry:
The reaction is reversible (by treatment of the addition product
with either acid or base) and is useful for the purification of the starting compounds, since the addition products are soluble in water
and many of the impurities are not.
If you want more details (kinetics, etc.), there are a couple of freely available scientific articles: Sulphur isotope effects in the bisulphite addition reaction of aldehydes and ketones
I. Equilibrium effect and the structure of the addition product
II. Bond-formation effect | {
"domain": "chemistry.stackexchange",
"id": 10511,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "aromatic-compounds",
"url": null
} |
particle-physics
Title: Please explain the physics of a Cloud Chamber A friend of mine was telling me about building a cloud chamber while he was in graduate school. As I understand it, this allows you to "see" interactions caused by high energy particles going through the cloud chamber. This has fascinated me, and I would like to build one with my daughter, but I want to make sure I am able to explain it to her when the eventual questions come. Can someone help me out please? How would I explain a cloud chamber to someone who is a freshman in high school? Feynman used to say - if you can't explain something in simple words, such that a child could understand, then you don't understand it either. So here's my take:
A cloud chamber is nothing more than a box where mist is about to form, but not quite yet. There's vapors of stuff (either alcohol, or water, or something else) in it, and the temperature is such that the vapors are almost about to produce mist (or "clouds"). Imagine wetlands or marshes on a cold autumn morning, it's kind of like that - fill a box with that kind of "cold wet air".
Now a charged particle (such as Alpha radiation from a chunk of radioactive ore) zips through the chamber at high speed. It bumps into water (or alcohol) molecules and ionizes them - it creates a trail of ionized molecules marking its path.
Now, the vapors are such that they really want to produce mist; any tiny disturbance is enough to push them over the edge. The trail of ionized molecules is enough to do that - the ions attract a bunch of molecules, the resulting clumps attract even more, and before you know it a droplet of water is formed, then another, and another. Voila, a trail of mist follows the particle.
I could try to describe the construction, but this Instructables page will do it much better:
http://www.instructables.com/id/Make-a-Cloud-Chamber-using-Peltier-Coolers/ | {
"domain": "physics.stackexchange",
"id": 1607,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics",
"url": null
} |