haar-distribution, t-designs Title: Werner Twirling Channel - How to Retrieve Prefactors? In Watrous' Theory of Quantum Information, Example 7.25 discusses the Werner Twirling Channel: $$\Xi(X) = \int (U \otimes U) X (U \otimes U)^* \mathrm{d}\eta(U)$$ where $\eta$ denotes the Haar measure and $X$ is some density matrix. I understand that by their Theorem 7.15 one can conclude that $$\Xi(X)\in \mathrm{span}\{1\otimes 1, W\} = \mathrm{span}\{\Pi_0,\Pi_1\}, $$ where $W$ is the SWAP operator, and $\Pi_0 = \frac{1}{2}1\otimes 1 + \frac{1}{2}W$, $\Pi_1 = \frac{1}{2}1\otimes 1 - \frac{1}{2}W$. Thus, the original channel can be written as a linear combination $$\Xi(X) = \alpha(X) \Pi_0+ \beta(X) \Pi_1$$ or $$\Xi(X) = \alpha'(X) 1 \otimes 1 + \beta'(X) W.$$ In the book they follow the first equation and determine the prefactors as $$\alpha(X)=\frac{1}{\binom{n+1}{2}}\langle \Pi_0, \Xi(X)\rangle,\, \beta(X) = \frac{1}{\binom{n}{2}}\langle \Pi_1,\Xi(X) \rangle. \tag{*}$$ What I do not understand is how and why they include these binomial coefficients. The inner product is clear, of course; the problem is the binomial coefficient.
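The binomial coefficients are the Hilbert-Schmidt norms of the projectors: $\langle \Pi_0, \Pi_0\rangle = \mathrm{tr}\,\Pi_0 = \binom{n+1}{2}$ is the dimension of the symmetric subspace, and $\langle \Pi_1, \Pi_1\rangle = \mathrm{tr}\,\Pi_1 = \binom{n}{2}$ that of the antisymmetric one. A quick numerical sanity check of those traces (my own Python sketch, not from the book, using $n = 3$):

```python
from math import comb

n = 3                      # local dimension (small illustrative example)
d = n * n
# SWAP operator W on C^n (x) C^n: W (e_i (x) e_j) = e_j (x) e_i
W = [[0.0] * d for _ in range(d)]
for i in range(n):
    for j in range(n):
        W[j * n + i][i * n + j] = 1.0

tr_W = sum(W[k][k] for k in range(d))   # = n (fixed points are i == j)
tr_Pi0 = (d + tr_W) / 2                 # trace of (1 + W)/2
tr_Pi1 = (d - tr_W) / 2                 # trace of (1 - W)/2
assert tr_Pi0 == comb(n + 1, 2)         # dimension of symmetric subspace
assert tr_Pi1 == comb(n, 2)             # dimension of antisymmetric subspace
```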
{ "domain": "quantumcomputing.stackexchange", "id": 4708, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "haar-distribution, t-designs", "url": null }
4. Hello, wonderboy1953! A fascinating exploration . . . Thank you for sharing it. I cranked out several cases and noted the pattern that Opalg found. I listed the cases in which the remainder was not zero. . . $\begin{array}{c|cccccc} n &&& \dfrac{n!}{T_n} \\ \\[-3mm] \hline \\[-3mm] 2 & \dfrac{2!}{T_2} &=& \dfrac{2!}{3}&=& \dfrac{2!}{1\cdot3}\\ \\[-3mm] 4 & \dfrac{4!}{T_4} &=& \dfrac{4!}{10} &=& \dfrac{4!}{2\cdot5}\\ \\[-3mm] 6 & \dfrac{6!}{T_6} &=& \dfrac{6!}{21} &=& \dfrac{6!}{3\cdot7}\\ \\[-3mm] 10 & \dfrac{10!}{T_{10}} &=& \dfrac{10!}{55} &=& \dfrac{10!}{5\cdot11}\\ \\[-3mm] 12 & \dfrac{12!}{T_{12}} &=& \dfrac{12!}{78} &=& \dfrac{12!}{6\cdot13}\\ \\[-3mm] 16 & \dfrac{16!}{T_{16}} &=& \dfrac{16!}{136} &=& \dfrac{16!}{8\cdot17}\\ \\[-3mm] \vdots & \vdots && \vdots && \vdots \end{array}$
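The pattern in the table is easy to check by machine: the exceptional $n$ (those where $T_n = n(n+1)/2$ does not divide $n!$) appear to be exactly the $n > 1$ with $n+1$ prime. A short Python check (my own sketch, not from the thread):

```python
from math import factorial

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

# n for which the triangular number T_n = n(n+1)/2 does NOT divide n!
bad = [n for n in range(2, 60) if factorial(n) % (n * (n + 1) // 2) != 0]
# conjectured pattern from the table: exactly the n > 1 with n + 1 prime
assert bad == [n for n in range(2, 60) if is_prime(n + 1)]
assert bad[:6] == [2, 4, 6, 10, 12, 16]   # matches the listed cases
```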
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9763105342148367, "lm_q1q2_score": 0.8177771978284064, "lm_q2_score": 0.8376199673867852, "openwebmath_perplexity": 799.3079217454126, "openwebmath_score": 0.7682934403419495, "tags": null, "url": "http://mathhelpforum.com/math-puzzles/154430-factorials-versus-triangle-numbers.html" }
quantum-mechanics, hilbert-space, operators, differentiation, hamiltonian Title: Differentiation of a ket vector with respect to a spatial dimension Consider a state $|\psi\rangle$. While discussing the Schroedinger equation, we say $$\hat{H}|\Psi(t)\rangle=i\hbar\frac{\partial}{\partial t}|\Psi(t)\rangle$$ We also define the Hamiltonian operator as $$\hat{H}=\frac{-\hbar^2}{2m}\frac{\partial^2}{\partial x^2}+V(x)$$ But does this not imply that you're differentiating across "dimensions" of the vector $|\Psi(t)\rangle$? Is there some property of Hilbert spaces which allows this? When I think of a standard vector in $\text{R}^3$, it's clearly not possible to do such differentiation because there are only three dimensions and they are not 'continuous' in any way. This might be similar to the following question: Applying an operator to a function vs. a (ket) vector But I'm interested in knowing what the whole theory behind $\frac{\partial}{\partial x}|\Psi(t)\rangle$ vs $\frac{\partial}{\partial x}\Psi(t)$ is. Is the former undefined? If yes, how is it indicated that we need to switch to using a function (rather than a vector) while applying a Hamiltonian operator? If no, what is a rigorous argument that we can do this cross-dimension differentiation in a Hilbert space? In general the Hamiltonian operator is $$\hat{H}= \frac{\hat{p}^2}{2m}+V(\hat{x}) $$ This operator acts on vectors in the Hilbert space (kets).
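One way to make the contrast with $\mathbb{R}^3$ concrete is to discretize: sample $\psi(x)$ on a grid, so the state becomes an ordinary finite vector and $\partial/\partial x$ becomes a finite-difference operation mixing neighbouring components. This is my own illustrative Python sketch (grid size and test function are made up), not part of the question:

```python
import math

# Discretize psi(x) = sin(2*pi*x) on n points of [0, 1) with periodic
# boundary; the state is now a plain vector and d/dx a finite difference.
n = 1000
dx = 1.0 / n
psi = [math.sin(2 * math.pi * k * dx) for k in range(n)]
# central difference: (psi[k+1] - psi[k-1]) / (2 dx), wrapping at the ends
dpsi = [(psi[(k + 1) % n] - psi[k - 1]) / (2 * dx) for k in range(n)]
# converges to the exact derivative 2*pi*cos(2*pi*x) as dx -> 0
err = max(abs(dpsi[k] - 2 * math.pi * math.cos(2 * math.pi * k * dx))
          for k in range(n))
assert err < 1e-3
```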
{ "domain": "physics.stackexchange", "id": 60005, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, hilbert-space, operators, differentiation, hamiltonian", "url": null }
quantum-computing, reference-request, fault-tolerance Title: What is the best lower bound for the fault-tolerance threshold in quantum computing? It is well established that there exists a noise threshold for quantum computation, such that below this threshold, the computation can be encoded in such a way that it yields the correct result with bounded probability (with at most polynomial computational overhead). This threshold depends on the encoding used and the exact nature of the noise, and it is the case that results from simulation often give thresholds much higher than what can be proved for adversarial noise models. So my question is simply what is the highest lower bound that has been proved for independent stochastic noise? The noise model I am referring to is the one dealt with in quant-ph/0504218, where Aliferis, Gottesman and Preskill prove a lower bound $2.73 \times 10^{-5}$. Note, however, I do not care which type of encoding is used, and it need not be restricted to the code considered in that paper. The highest I'm aware of is $1.94 \times 10^{-4}$ due to Aliferis and Cross (quant-ph/0610063). Has this value been improved upon since then? The highest threshold lower bound for independent stochastic noise of which I am aware is $1.04 \times 10^{-3}$ by Aliferis, Gottesman and Preskill (quant-ph/0703264). They analyze Knill's teleportation-based scheme with postselection. If you are willing to consider independent depolarizing noise, then I know of two slightly higher lower bounds: $1.25\times 10^{-3}$ by Aliferis and Preskill (arXiv:0809.5063) and $1.32 \times 10^{-3}$ by myself and Ben Reichardt (arXiv:1106.2190).
{ "domain": "cstheory.stackexchange", "id": 1501, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-computing, reference-request, fault-tolerance", "url": null }
electrostatics, electric-fields, potential-energy Title: Relationship Between Electric Potential Energy and Work If you gain electric potential energy, is the magnitude of the EPE equal to the work done by the particle in the electric field? Or is the magnitude of the EPE equal to the work done on the particle? Assuming the charge begins and ends at rest, or moves between two points at constant velocity so that there is no change in kinetic energy, then the increase in the magnitude of the EPE is equal to the work done on the charge by an external agent against the direction of the electric field. An example is a battery that does work (converts chemical energy to electrical potential energy) to separate charge at its terminals increasing the electrical potential and electrical potential energy of the charge. Under the same assumption, a decrease in the magnitude of the EPE is equal to the work done by the electric field on the particle. An example is the work done by an electric field moving charge through a resistor in a circuit. There is a drop in voltage (drop in electrical potential) across the resistor. Hope this helps.
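A tiny numeric illustration of "work done on the charge equals the gain in EPE" (the charge and voltage here are illustrative values of mine, not from the question):

```python
# Moving charge q through a potential rise dV at constant speed:
# the external agent does W = q * dV, all of it stored as EPE.
q = 1.6e-19   # C (electron-scale charge, illustrative)
dV = 12.0     # V (battery-scale potential rise, illustrative)
W_external = q * dV
assert abs(W_external - 1.92e-18) < 1e-30   # joules gained as EPE
```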
{ "domain": "physics.stackexchange", "id": 67411, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, electric-fields, potential-energy", "url": null }
function is a straight, horizontal line. To simplify this even further, let's consider how to tell the difference between a constant function, and a function that is not a constant function. f(x₁) = f(x₂) for any x₁ and x₂ in the domain. Namely y(0) = 2, y(−2.7) = 2, y(π) = 2, and so on.
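The defining property above is easy to state in code (a small sketch of mine for the constant function y = 2):

```python
def f(x):
    return 2  # constant function: the output never depends on the input

# f(x1) == f(x2) for any x1 and x2 in the domain
assert f(0) == f(-2.7) == f(3.14159) == 2
```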
{ "domain": "evilorchidgames.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9481545289551957, "lm_q1q2_score": 0.8237820427677769, "lm_q2_score": 0.8688267762381844, "openwebmath_perplexity": 933.7962154027222, "openwebmath_score": 0.6064518094062805, "tags": null, "url": "https://evilorchidgames.com/vafc927/35d956-constant-function-example" }
inorganic-chemistry, solubility Title: What's the point of solubility products? What is the advantage of solubility products, e.g. LiF(s) ⇄ Li+(aq) + F-(aq) | K = [Li+(aq)] * [F-(aq)] = 0.00184 M² at 25°C (source) compared to just stating that Solubility in water = 0.134 g/100 mL (25 °C) (source) (There's a little difference between (0.00184 M²)^½ = 1.11 g/L and the solubility of 1.34 g/L above; I take it that's because of the difference between activity and concentration?) It seems to me that solubility products are just a roundabout, complicated way of expressing the concentration of a saturated solution. I suspect that I'm missing something though: otherwise why would people publish long lists of solubility products? What I read: Wikipedia; all my high school textbooks. Solubility products figure prominently in equilibria involving precipitated species. Following is an example that may arise in a practical situation. Problem: We have 0.01 M ferrous ion in water and we propose adding a base to drive out the iron as hydroxide. We do not want excess base dissolving into the water so we will try magnesium hydroxide. How well will it work? The proposed reaction is then $\ce{Fe^{2+} + Mg(OH)2(s) <=> Mg^{2+} + Fe(OH)2(s)}$ $K=\frac{[\ce{Mg^{2+}}]}{[\ce{Fe^{2+}}]}$ Compare this equilibrium constant with: $K_{sp}(\ce{Mg(OH)2})=[\ce{Mg^{2+}}][\ce{OH^-}]^2=5.61×10^{-12}$ (source)
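The conversion from Ksp back to g/L mentioned in the question is a one-liner (a sketch of mine, using the question's Ksp value and the standard atomic masses for LiF):

```python
from math import sqrt

Ksp = 1.84e-3          # mol^2/L^2 for LiF at 25 deg C (value from the question)
M_LiF = 6.94 + 19.00   # g/mol (Li + F)
s = sqrt(Ksp)          # saturation concentration in mol/L (1:1 salt)
grams_per_litre = s * M_LiF
assert round(grams_per_litre, 2) == 1.11   # vs. the tabulated 1.34 g/L
```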
{ "domain": "chemistry.stackexchange", "id": 10259, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, solubility", "url": null }
complexity-classes, regular-expressions Title: How powerful is POSIX regex The set of languages recognized by POSIX regex is a true superset of type 3 languages. But how powerful is POSIX regex really? Is it in an already known class? Is it its own class? If so, what is the next bigger class (for some context)? Proof that POSIX regex is more powerful than type 3: (a+)b(\1) which recognizes $\{ a^nba^n | n \geq 1 \}$ I haven't found a really good description of what features POSIX regex allows but this is one link that lists all features (as far as I know): regex manual Emil Jeřábek mentioned that POSIX regex is not even completely contained in type 2 languages because (.*)\1 is $\{ ww | w \in \Sigma^* \}$. So, it recognizes all type 3 languages, some type 2 languages and some type 1 languages, maybe even some type 0 languages. In [1], the authors formally define the notion of an "extended regex" with the intent of capturing the back-reference capability of POSIX/perl/emacs/etc style regexes. Exactly how closely their definition matches the exact POSIX specification is an exercise left to the reader. Under their definition, extended regexes are a proper subset of Type 1 (context-sensitive) languages and incomparable with Type 2 (context-free) languages. They show that extended regexes are a subset of Type 1 languages by showing how to construct an equivalent LBA for any extended regex. They prove that they're incomparable with Type 2 languages by showing that $\{a^nba^nba^n|n \geq 1\}$ (which is not context-free) can be recognized by an extended regex while $\{a^n b^n | n \geq 0\}$ (which is context-free) cannot. Câmpeanu, Cezar, Kai Salomaa, and Sheng Yu. "A formal study of practical regular expressions." International Journal of Foundations of Computer Science 14.06 (2003): 1007-1018.
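Both back-reference examples from the post can be tried directly; Python's re engine supports the same back-reference syntax (my own demo, not part of the question):

```python
import re

# (a+)b\1 from the question: the non-regular language { a^n b a^n | n >= 1 }
pat = re.compile(r'^(a+)b\1$')
assert pat.match('aabaa')        # a^2 b a^2
assert pat.match('aaabaaa')      # a^3 b a^3
assert not pat.match('aabaaa')   # unequal runs: rejected

# (.*)\1 recognizes the non-context-free copy language { ww }
copy = re.compile(r'^(.*)\1$')
assert copy.match('abcabc')
assert not copy.match('abcab')
```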
{ "domain": "cstheory.stackexchange", "id": 4399, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-classes, regular-expressions", "url": null }
vba, excel ElseIf cellInfo = 23 Then Selection.TextToColumns Destination:=ActiveCell, DataType:=xlFixedWidth, _ FieldInfo:=Array(Array(0, 1), Array(6, 9), Array(7, 1), Array(12, 9), Array(13, 1), _ Array(17, 9), Array(21, 1)) ElseIf cellInfo = 24 Then Selection.TextToColumns Destination:=ActiveCell, DataType:=xlFixedWidth, _ OtherChar:="/", FieldInfo:=Array(Array(0, 1), Array(6, 9), Array(7, 1), Array(12, _ 9), Array(13, 1), Array(17, 9), Array(22, 1)) ElseIf cellInfo = 25 Then Selection.TextToColumns Destination:=ActiveCell, DataType:=xlFixedWidth _ , OtherChar:="/", FieldInfo:=Array(Array(0, 1), Array(6, 9), Array(7, 1), Array( _ 12, 9), Array(13, 1), Array(17, 9), Array(23, 1)) ElseIf cellInfo = 26 Then Selection.TextToColumns Destination:=ActiveCell, DataType:=xlFixedWidth _ , OtherChar:="/", FieldInfo:=Array(Array(0, 1), Array(6, 9), Array(7, 1), Array( _ 12, 9), Array(13, 1), Array(17, 9), Array(22, 1)) ElseIf cellInfo = 27 Then Selection.TextToColumns Destination:=ActiveCell, DataType:=xlFixedWidth _ , OtherChar:="/", FieldInfo:=Array(Array(0, 1), Array(6, 9), Array(8, 1), Array( _ 13, 9), Array(14, 1), Array(18, 9), Array(23, 1))
{ "domain": "codereview.stackexchange", "id": 25438, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel", "url": null }
(d) f(x) = x/(x + 1) (e) f(x) = √(x³ + 9) (f) f(x) = 1/√(x² + 1) 3. Quadratic Functions-Worksheet Find the vertex and “a” and then use to sketch the graph of each function. Here is a small outline of some applications of linear equations. Give each student a card with one representation of a linear function from the LESSON 1 section of the INSTRUCTIONAL ACTIVITY SUPPLEMENT. Some of the worksheets below are Free Linear Equations Worksheets, Solving Systems of Linear Equations by Graphing, Solving equations by removing brackets & collecting terms, Solving a System of Two Linear Equations in Two Variables by Addition, …. mathematics content. a change in the size or position of a figure B. In its most basic form, a linear supply function looks as follows: y = mx + b. Grade 9 Math Solving Linear Equations. Systems of Linear Equations 0. Linear Algebra is one of the most important basic areas in Mathematics, having at least as great an impact as Calculus, and indeed it provides a significant part of the machinery required to generalise Calculus to vector-valued functions of many variables. Pre-Algebra Chapter 8—Linear Functions and Graphing SOME NUMBERED QUESTIONS HAVE BEEN DELETED OR REMOVED. During a 45-minute lunch period, Albert (A) went running and Bill (B) walked for exercise. Linear function word problems Calculator tables. Multiple Choice Questions have been coming in Class 10 Linear Equations exams, thus do MCQs to test understanding of important topics in the chapters. Heart of Algebra questions on the SAT Math Test focus on the mastery of linear equations, systems of linear equations, and linear functions. This representational technique has succeeded at finding good policies for problems with high dimensional state-spaces such as simulated soccer [Stone et al. Modeling with Linear Functions Work with a partner. Linear Equations are no different. To graph a linear function, it is often simplest to look at the graph of the relation. You should create the following: 1. 
An example arises in the Timoshenko-Rayleigh theory of beam bending. Then the generating function A(x. Let’s see what happens. Students should work in pairs or groups of three. The order of a differential equation is the highest order derivative occurring. For the equation, complete the table for the given values of x. Patterns and Linear Functions - Word Docs & PowerPoints To
{ "domain": "umood.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.986363161511632, "lm_q1q2_score": 0.8621575883180621, "lm_q2_score": 0.8740772384450967, "openwebmath_perplexity": 1239.807064332939, "openwebmath_score": 0.620610773563385, "tags": null, "url": "http://uldn.umood.it/linear-functions-pdf.html" }
Since the complex numbers are not ordered, the definition given at the top for the real absolute value cannot be directly applied to complex numbers. However, the geometric interpretation of the absolute value of a real number as its distance from 0 can be generalised: the absolute value (modulus) of a complex number is the number's distance from the origin in the complex plane. Writing a complex number as $z = a + bi$, the Pythagorean theorem gives the distance formula $|z| = \sqrt{a^2 + b^2}$, i.e. $|z|^2 = \mathrm{Re}(z)^2 + \mathrm{Im}(z)^2$. For example, the distance from the point $8 + 6i$ to the origin is $\sqrt{8^2 + 6^2} = 10$ (an 8-6-10 right triangle). A complex number and its conjugate lie mirrored across the real axis, so they have the same absolute value. The complex numbers of absolute value 1 form the circle of radius 1 centered at 0 in the complex plane.
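The modulus computation can be checked in one line with Python's built-in complex type (a small sketch of mine):

```python
# |a + bi| = sqrt(a**2 + b**2): the distance from the origin
z = 8 + 6j
assert abs(z) == (8 ** 2 + 6 ** 2) ** 0.5 == 10.0
```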
{ "domain": "co.za", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9825575142422757, "lm_q1q2_score": 0.8144800611915987, "lm_q2_score": 0.8289388146603365, "openwebmath_perplexity": 712.5041068294182, "openwebmath_score": 0.7335454225540161, "tags": null, "url": "http://analyticalsolutions.co.za/sx445b3a/absolute-value-of-complex-numbers-calculator-7fb181" }
I got this puzzle from some others: $$\begin{array}{c c c c c c}&\mathrm H&\mathrm E&\mathrm R&\mathrm E&\mathrm S\\&\mathrm M&\mathrm E&\mathrm R&\mathrm R&\mathrm Y\\+&&\mathrm X&\mathrm M&\mathrm A&\mathrm S\\\hline\mathrm R&\mathrm E&\mathrm A&\mathrm D&\mathrm E&\mathrm R\end{array}$$ Find the letters such that every letter is a distinct digit, and that there are no leading $$0$$'s. We only managed to solve this by breaking it down to some cases and then simply brute forcing it. Is there any way to do this without brute force though? $$\mathrm{(A, D, E, H, M, R, S, X, Y)} = (8, 0, 4, 6, 7, 1, 3, 9, 5)$$ Code: Try it online Breakdown of what we managed to get: We started by noting $$\mathrm R$$ was either $$1$$ or $$2$$. From the rightmost column, $$\mathrm{Y = (R - 2S) \% 10}$$. From the next column, $$\mathrm{A = 10 - R - \lfloor (2S+Y)/10 \rfloor}$$, where the last bit is the carry digit. From the next column, $$\mathrm{D = (M + 2R + 1) \% 10}$$. The $$1$$ comes from a guaranteed carry digit from the previous column. From the next column, $$\mathrm{X = (A - 2E - \lfloor (M+2R+1)/10 \rfloor) \% 10}$$, which also uses a carry. And from the leftmost column, $$\mathrm{H = 10 + E - M - \lfloor (2E+X)/10 \rfloor}$$.
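The posted assignment is quick to verify (my own check, independent of the linked "Try it online" code):

```python
sol = dict(zip('ADEHMRSXY', (8, 0, 4, 6, 7, 1, 3, 9, 5)))

def num(word):
    # read off the digit string for a word under the assignment
    return int(''.join(str(sol[c]) for c in word))

assert len(set(sol.values())) == 9                      # all digits distinct
assert sol['H'] and sol['M'] and sol['X'] and sol['R']  # no leading zeros
assert num('HERES') + num('MERRY') + num('XMAS') == num('READER')
```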
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9850429125286726, "lm_q1q2_score": 0.8229871428107519, "lm_q2_score": 0.8354835432479663, "openwebmath_perplexity": 122.08663069464956, "openwebmath_score": 0.9465024471282959, "tags": null, "url": "https://math.stackexchange.com/questions/3483105/christmas-cryptarithm-heresmerryxmas-reader" }
java Title: Logical gates with integer values I have created this code for both boolean and integer values to display a truth table for an "AND","OR","XOR", "NOT" gate. However I think that my code needs reviewing as it could be simplified. public class LogicalOpTable { public static void main(String[] args){ boolean p,q; System.out.println("P\tQ\tAND\tOR\tXOR\tNOT"); p = false; q = false; System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); p = false; q = true; System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); p = true; q = false; System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); p = true; q = true; System.out.print(p + "\t" + q + "\t" + (p&&q) + "\t"); System.out.println((p||q)+"\t"+(p^q)+"\t"+(!p)); System.out.println(); withBinary(); } public static void withBinary(){
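The four hard-coded blocks all print the same row computation for each (p, q) pair, so the natural simplification is a nested loop over the truth values. Here is the loop idea sketched in Python for brevity (not the Java fix itself):

```python
def truth_table():
    # one row per (p, q) pair: AND, OR, XOR, NOT p
    rows = []
    for p in (False, True):
        for q in (False, True):
            rows.append((p, q, p and q, p or q, p != q, not p))
    return rows

print('P\tQ\tAND\tOR\tXOR\tNOT')
for row in truth_table():
    print('\t'.join(str(v) for v in row))
```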
{ "domain": "codereview.stackexchange", "id": 15086, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java", "url": null }
is diluted 1:1, each volume of ethylene glycol in the cooling system is diluted in an equal volume of water. It is used for reporting concentration of liquid solutes in solution. The formula of the molar volume is expressed as $$V_{m} = \frac{Molar\ mass}{Density}$$ Where V m is the volume of the substance. 12% of 80 is? Volume percent = {Volume of solute / volume of solution} x 100. Make 1000ml of a 5% by volume solution of ethylene glycol in water. Percent by volume Formula Definition: Percent by volume or volume percent is a common expression used for expressing concentration. First find the Error: Subtract one value from the other. Percentage by mass = 100 * mass of substance of interest / total mass. Note: when the result is positive it is a percentage increase, if negative, just remove the minus sign and call it a decrease. The symbol for percentage is “%” and the major application of percentage is to compare or find the ratios. Molar Volume Formula. Add or subtract a percentage from a number or solve the equations. There are two different methods that we can use to find the percent of change. Multiply this decimal by the total volume: 0.05 x 1000ml = 50ml (ethylene glycol needed). In each case, the concentration in percentage is calculated as the fraction of the volume or weight of the solute related to the total volume or weight of the solution. For many applications, percent error is always expressed as a positive value. Take an example, if you had 10 strawberries and you ate 2 then you have consumed 20 percent of strawberries out of all and left with 80 percent of the things. The formula for percent decrease is the same as that of percentage change. Percentage Formula In simple terms, percent means per hundred. It can even be used to solve more complex problems that involve percent increase. Note that weight/volume is also referred to as mass/volume. 
Finally, convert the fraction to a percent by moving the decimal two places to the right and adding a percent … This Percentage Chart shows what 15% of $1 through $100 is, although it is customizable so you can set the percentage and the numbers to whatever you want. Percentage by volume = 100 * volume of substance / total volume of solution.
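The volume-percent formula and the ethylene glycol example above can be expressed as a short function (my own sketch):

```python
def percent_by_volume(v_solute, v_solution):
    # volume percent = 100 * (volume of solute) / (volume of solution)
    return 100.0 * v_solute / v_solution

# 50 mL ethylene glycol made up to 1000 mL of solution -> 5% v/v
assert percent_by_volume(50, 1000) == 5.0
```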
{ "domain": "spatulacitybbs.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9621075711974104, "lm_q1q2_score": 0.8038250405991413, "lm_q2_score": 0.8354835411997897, "openwebmath_perplexity": 1508.9800474068898, "openwebmath_score": 0.8082000613212585, "tags": null, "url": "https://spatulacitybbs.net/euoss/what-is-the-formula-in-solving-percent-by-volume-d64fdc" }
quantum-mechanics, scattering Title: Free electron and Compton scattering For Compton scattering, the electron needs to be free from any surrounding electric field. But electrons in graphite are bound to graphite. I have two doubts: first, if electrons are free in graphite, why don't they leave graphite; second, if there is some force binding them, why do they show Compton scattering? If we are doing a calculation based on the equation for the Compton effect we assume that any other interactions with the electron are negligible compared with the energy transferred between the photon and the electron. At visual wavelengths, i.e. energies of a few eV, this requires that the electron be almost completely free so it only works for an isolated electron. However if we are doing Compton scattering with X-rays then the energies exchanged are in the keV range and vastly greater than typical electron binding energies in atoms. That means that when considering Compton scattering by X-ray photons we can treat even the electrons in atoms as though they were free. You don't say exactly what is going on in your experiment, but if the electrons involved are the conduction electrons then the binding energy will be about the work function of graphite, which is around 4eV. For Compton scattering by any photon of an energy much greater than 4eV it is a good approximation to treat the electron as free.
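The scale separation the answer relies on is easy to quantify with $E = hc/\lambda$ (a sketch of mine with illustrative wavelengths):

```python
# photon energy E = hc / lambda, with hc ~= 1239.84 eV*nm
hc = 1239.84
visible = hc / 500    # ~2.5 eV at 500 nm: comparable to ~4 eV binding
xray = hc / 0.05      # ~25 keV at 0.05 nm: binding energy is negligible
assert visible < 4 < xray
```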
{ "domain": "physics.stackexchange", "id": 44580, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, scattering", "url": null }
javascript, php, jquery, html, wordpress Title: Displaying a price quote depending on the selected number of rooms I've written a WordPress plugin that displays a price quote based on what options are chosen in the settings and then what the user chooses with HTML <select> dropdown menus. Here's the HTML (with a little PHP as well). Note that the function at the top is just so I can use WordPress's get_option and use those values in my JS: function dataToJS () { global $get_option_array; wp_enqueue_script( 'added_jquery', plugin_dir_url( __FILE__ ) . 'js/added_jquery.js', array( 'jquery' ), '1.0', true ); wp_localize_script( 'added_jquery', 'script_vars', array ( 'bedrooms_num' => __($get_option_array['bedroom']), 'family_rooms_num' => __($get_option_array['family_room']), 'living_rooms_num' => __($get_option_array['living_room']), 'offices_num' => __($get_option_array['office']) ) ); } add_filter( 'wp_enqueue_scripts', 'dataToJS');
{ "domain": "codereview.stackexchange", "id": 27209, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, php, jquery, html, wordpress", "url": null }
java, security, cryptography byte[] input = new byte[64]; int bytesRead; while ((bytesRead = inputFile.read(input)) != -1) { byte[] output = cipher.update(input, 0, bytesRead); if (output != null) outputFile.write(output); } byte[] output = cipher.doFinal(); if (output != null) outputFile.write(output);
{ "domain": "codereview.stackexchange", "id": 34276, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, security, cryptography", "url": null }
bridges Title: Is the Crimean Bridge in danger of collapsing? Some sources point to difficulties which may destroy the bridge, so the Wikipedia article on the Crimean Bridge e.g. has: "The geology of the Kerch Strait is difficult: it has a tectonic fault, and the bedrock is covered by a 60 m (197 ft) layer of silt.[44] About 70 mud volcanoes have been found in the area of the strait.[44] More than 7,000 piles support the bridges; these piles have been driven up to 91 m (300 ft) beneath the water surface.[44] Some of the piles are at an angle to make the structure more stable during earthquakes.[44] Some experts have expressed doubts that the construction is durable, given the tectonic and sea current conditions in the strait.[44][45] Pollock, Emily (6 July 2018), Europe’s Longest Bridge Spans Troubled Waters, Engineering.com, archived from the original on 13 October 2018 Kerch Strait Bridge may collapse at any time – expert, UNIAN, 12 October 2018 Evaluating the safety and risks involved in a project of this magnitude and complexity needs multifaceted scientific research, with teams representing different areas of technology: experts to sample the soils and water currents, chemical engineers, meteorologists, seismologists, etc., plus expensive imaging and testing machinery. It could cost millions of dollars and many years. I had two classmates in college many years ago who spent their entire Ph.D. time doing an assessment of the safety of a small single-span 110-foot bridge in Massachusetts, with unlimited help from undergrad students at the lab. Your question is way too ambitious for a site like this.
{ "domain": "engineering.stackexchange", "id": 2546, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "bridges", "url": null }
homework-and-exercises, newtonian-mechanics, forces, free-body-diagram, buoyancy Title: Equation of motion for ice block in water I need to derive the equation of motion for an building block of ice that swims in water. The building block has an initial height that is looking out of the water called $h_A$ and is pushed into the water by a force $F$ such that it isn't completely under water, so $\Delta z < h_A$. Now I need to derive the equation of motion for the process when the force disappears and the building block goes up. What I tried to write down is the force that pushes the block up: $m \frac{d^2z}{dt^2} = F_G - F_A = mg - gA\big[l-(h_A-z)\big] \rho_w$ with $ A $ the area of the block and $\rho_w$ the density of the water. To me this equation looks kind of hard to solve so I wanted to ask if that could be right. So far you've got a second order Differential Equation (DE): $$m \frac{d^2z}{dt^2} = F_G - F_A = mg - gA\big[l-(h_A-z)\big] \rho_w$$ But it looks worse than it is: make the following substitution: $$u=mg - gA\big[l-(h_A-z)\big] \rho_w,$$ and because it's mostly constant terms, then: $$du=-gA\rho_w dz$$ and: $$\frac{d^2z}{dt^2}=-\frac{1}{gA\rho_w}\frac{d^2u}{dt^2}$$ Substitute back into the DE: $$-\frac{m}{gA\rho_w}\frac{d^2u}{dt^2}-u=0$$ Or: $$m\frac{d^2u}{dt^2}+gA\rho_wu=0$$
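The substituted equation $m\,u'' + gA\rho_w\,u = 0$ is simple harmonic motion with $\omega^2 = gA\rho_w/m$, so $u(t) = \cos(\omega t)$ solves it exactly. A quick numeric check of that claim (the parameter values below are illustrative, not from the problem):

```python
import math

# illustrative parameters (not given in the problem)
g, A, rho_w, m = 9.81, 0.04, 1000.0, 30.0
omega = math.sqrt(g * A * rho_w / m)

# u(t) = cos(omega * t) should satisfy m u'' + g A rho_w u = 0
t = 0.7
u = math.cos(omega * t)
u_dd = -omega ** 2 * math.cos(omega * t)   # exact second derivative
residual = m * u_dd + g * A * rho_w * u
assert abs(residual) < 1e-9
```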
{ "domain": "physics.stackexchange", "id": 26948, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram, buoyancy", "url": null }
homework-and-exercises, fluid-dynamics, water, fluid-statics Title: What is the air pressure inside an air bubble in water? I am a newbie in physics and have a homework problem, I hope I am not asking something stupid, since I don't study physics nor am I going to do anything advanced with it. So let's say we have a bubble under water, with a radius of $0.5\mathrm{\mu m}$, $\sigma = 73\mathrm{mN}/\mathrm{m}$, at a depth of 5 m with atmospheric pressure being 1000 hPa. Does anyone have a good answer to this? ($\sigma$ is for surface tension, I think) I'm writing this answer because others are saying that the surface tension is a small correction to the internal pressure. Instead, the pressure due to surface tension is the largest factor because $ 0.5 {\rm \mu m}$ is a very small bubble. (The other answers are right in their outline of the physics, imho, just not accounting for the smallness of the bubble.) For a bubble, the pressure difference balances the surface tension. $$\Delta P = \frac {2\sigma} {r}$$ The important point is that the smaller the bubble, the larger the pressure difference. Using your numbers, I get about 2.9e5 Pa ≈ 2.9 atm. That is, the pressure due to surface tension is the dominant factor in this small bubble.
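Putting the pieces together numerically — the pressure inside the bubble is atmospheric plus hydrostatic plus the Laplace term. (Fresh water and $g = 9.81\,\mathrm{m/s^2}$ are assumed here; the question doesn't state them.)

```python
# Numbers from the question: r = 0.5 um, sigma = 73 mN/m, depth 5 m, P_atm = 1000 hPa.
sigma = 73e-3        # N/m
r     = 0.5e-6       # m
depth = 5.0          # m
rho_w = 1000.0       # kg/m^3 (assumed fresh water)
g     = 9.81         # m/s^2 (assumed)
p_atm = 1000e2       # Pa

p_laplace     = 2 * sigma / r          # ~2.9e5 Pa -- dominates for this tiny bubble
p_hydrostatic = rho_w * g * depth      # ~4.9e4 Pa
p_inside      = p_atm + p_hydrostatic + p_laplace
print(f"Laplace: {p_laplace:.3e} Pa, hydrostatic: {p_hydrostatic:.3e} Pa, "
      f"total inside: {p_inside:.3e} Pa")
```

The Laplace term alone (about 2.9e5 Pa) is several times larger than the hydrostatic contribution at 5 m, which is the point of the answer above.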
{ "domain": "physics.stackexchange", "id": 77979, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, fluid-dynamics, water, fluid-statics", "url": null }
python, performance, beginner, excel, web-scraping This appears to be consistent across pages. So, you could look for a .hotels-hotel-review-about-addendum-AddendumItem__title--2QuyD followed by a .hotels-hotel-review-about-addendum-AddendumItem__content--iVts5. You probably want to check that the text in the first div is NUMBER OF ROOMS in case some pages have more "addendum items" with purely numeric content: When scraping, I like pulling things out into functions to make my intent more clear, to make testing easier, and to make it easier to refactor if (more likely, when) the page changes: def get_addendum_item_titles(page): return page.find_all('div', class_='hotels-hotel-review-about-addendum-AddendumItem__title--2QuyD') def get_number_of_rooms_addendum_title(page): for title in get_addendum_item_titles(page): if title.text.strip().upper() == 'NUMBER OF ROOMS': return title raise ValueError('Number of rooms addendum title not found') def get_number_of_rooms(page): title = get_number_of_rooms_addendum_title(page) content = title.parent.find('div', class_='hotels-hotel-review-about-addendum-AddendumItem__content--iVts5') return int(content.text.strip()) (Note: class_ takes the bare class name — the leading dot belongs only in CSS selectors.) You may want to throw those class names in constants. A prime justification for this approach is immediately obvious. The --2QuyD-like suffixes are almost certainly automatically generated. I suspect the next time tripadvisor modifies any of their CSS these suffixes will change and break your code. But I imagine that the hotels-hotel-review-about-addendum-AddendumItem__title part will rarely change. So you need a way of finding the proper classname with only that prefix. Ideally you create a function like: def find_class_with_prefix(page, prefix): pass
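One way to implement such a prefix matcher — a sketch, not code from the answer — is to exploit the fact that BeautifulSoup's `find_all` accepts a callable for `class_`, which is invoked once per individual CSS class. Only the predicate itself is exercised here (no live page):

```python
def class_prefix_matcher(prefix):
    """Return a predicate suitable for find_all(class_=...)."""
    def matches(css_class):
        # find_all calls this with None for tags that have no class attribute
        return css_class is not None and css_class.startswith(prefix)
    return matches

# Intended usage (assumes a BeautifulSoup `page` object):
#   page.find_all('div', class_=class_prefix_matcher(
#       'hotels-hotel-review-about-addendum-AddendumItem__title'))

title_matcher = class_prefix_matcher(
    'hotels-hotel-review-about-addendum-AddendumItem__title')
print(title_matcher('hotels-hotel-review-about-addendum-AddendumItem__title--2QuyD'))
print(title_matcher('hotels-hotel-review-about-addendum-AddendumItem__content--iVts5'))
```

This survives a regenerated `--2QuyD`-style suffix, which was the stated goal.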
{ "domain": "codereview.stackexchange", "id": 33847, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, beginner, excel, web-scraping", "url": null }
formal-grammars, compilers, lexical-analysis First, I'm building a CAT (classifier table) for all the atomic symbols of the language. I don't know if this is the right thing to do, especially when I have 52 letters (English alphabet), 10 digits and reserved words. I will then merge all the CATs together. So I will have one big CAT that covers letters, digits, and reserved words. Then, I will build a (big) transition table, so that when I read a character and determine its classification (problem: What about reserved words that take more than 1 character?) I will know where to transition to next. These tables are used by a simple DFA class which, once the lexeme is read, will spit out a token.
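For the reserved-word problem, a common approach — sketched below with invented table contents, not taken from the post — is to let the DFA recognize a generic identifier via the classifier and transition tables, and only afterwards check the finished lexeme against a keyword table:

```python
import string

# The "CAT": classify each character (letters and digits only, for the sketch)
CLASSIFIER = {c: 'letter' for c in string.ascii_letters}
CLASSIFIER.update({c: 'digit' for c in string.digits})

# Transition table: (state, char class) -> next state
TRANSITIONS = {
    ('start', 'letter'): 'ident',
    ('ident', 'letter'): 'ident',
    ('ident', 'digit'):  'ident',
    ('start', 'digit'):  'number',
    ('number', 'digit'): 'number',
}
ACCEPTING = {'ident', 'number'}
KEYWORDS = {'if', 'while', 'return'}   # consulted only after the lexeme is read

def lex_one(text):
    """Run the DFA on a prefix of `text` and emit one token."""
    state, i = 'start', 0
    while i < len(text):
        cls = CLASSIFIER.get(text[i])
        nxt = TRANSITIONS.get((state, cls))
        if nxt is None:
            break                      # no transition: token ends here
        state, i = nxt, i + 1
    if state not in ACCEPTING:
        raise ValueError('no token at start of input')
    lexeme = text[:i]
    if lexeme in KEYWORDS:             # multi-character reserved words
        return ('keyword', lexeme)
    return ('identifier' if state == 'ident' else 'number', lexeme)

print(lex_one('while1 '))   # ('identifier', 'while1')
print(lex_one('while '))    # ('keyword', 'while')
print(lex_one('42+x'))      # ('number', '42')
```

This keeps the transition table small: reserved words never appear in it, so the DFA doesn't blow up with one path per keyword.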
{ "domain": "cs.stackexchange", "id": 15673, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-grammars, compilers, lexical-analysis", "url": null }
ros, ubuntu, ubuntu-trusty, opencv3, ros-indigo added cb_bridge to the catkin_ws. Finally, as the example was working and i was able to draw a circle in the image and display it in rviz, i tried the aruco again and guess what, the code ran the first time till the point where a camera frame is needed. Well from there on it was just a peace of cake, add a frame, get the image, try a marker and be happy like i wasn't for a couple of weeks as the coordinate frame was added to the image by the aruco code. So, sorry for everybody else coming across this problem, i can't tell you what causes the error. But maybe just cv_bridge is missing ...
{ "domain": "robotics.stackexchange", "id": 31360, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ubuntu, ubuntu-trusty, opencv3, ros-indigo", "url": null }
noise, autocorrelation, gaussian, proof Now we know $$f_{N(t)}(\eta)=\frac{1}{\sqrt{2 \pi \sigma^2}}\exp\left(\frac{-(\eta-\mu)^2}{2 \sigma^2}\right)$$ then, how can we find $f_{N(t)N(t+\tau)}(x,y)$ ? As @MattL points out in a comment, a Gaussian pdf does not imply whiteness. Indeed, it can be argued that the assumption that the process is a continuous-time white noise process is contrary to the belief that the random variables constituting the process even have a pdf of any kind. Continuous-time white noise is a mythical beast that is used to account for the empirical observation that the power spectral density $S_{\scriptstyle{\text{output}}}(f)$ of the noise at the output of a linear filter with transfer function $H(f)$ can be expressed as $$S_{\scriptstyle{\text{output}}}(f) = \frac{N_0}{2}|H(f)|^2,-\infty < f < \infty,$$ a result that is perfectly consistent with the standard theory of wide-sense-stationary processes in linear systems which claims that $$S_{\scriptstyle{\text{output}}}(f) = S_{\scriptstyle{\text{input}}}(f)|H(f)|^2, -\infty < f < \infty,$$
{ "domain": "dsp.stackexchange", "id": 6145, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "noise, autocorrelation, gaussian, proof", "url": null }
ros, rosparam, baxter how can I solve this issue; thank you MiMo Originally posted by MiMo on ROS Answers with karma: 11 on 2016-01-25 Post score: 0 Original comments Comment by jackie on 2016-01-25: make sure the robot is on and that baxter.sh is sourced in every terminal in which you put baxter-specific commands. If you have more problems you should really email the brr-users list: brr-users@rethinkrobotics.com Comment by gvdhoorn on 2016-01-26: @MiMo: please don't post answers if you're not answering the question. For updates to your own question, please just edit it, using the edit link/button. I've merged your update into your question, but please keep it in mind. Did you source your baxter.sh file before trying to get the rosparam? The baxter.sh file configures your shell to talk to the ROS Master running on the Baxter robot. If the software_version parameter was set in the Parameter Server running on the Baxter's ROS Master, then you won't be able to access it if you aren't connected. You can also try emailing the brr-users list for Baxter-specific questions, or searching their Google Groups page: https://groups.google.com/a/rethinkrobotics.com/forum/?utm_medium=landing+page&utm_source=Act-On+Software&utm_content=landing+page&utm_campaign=&utm_term=Research%20Forum#!forum/brr-users Originally posted by jackie with karma: 296 on 2016-01-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by nickw on 2016-01-26: you also need to edit the baxter.sh file with a new robot/workstation install to set the correct values for the robot and your workstation.
{ "domain": "robotics.stackexchange", "id": 23549, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, rosparam, baxter", "url": null }
special-relativity, velocity, inertial-frames Title: Relativistic relative velocity: is $\vec{v}_{A|B}=-\vec{v}_{B|A}$ still true in the relativistic case? Let $S=S_O$ be a stationary frame of reference and $S'=S_A$ and $S''=S_B$ two inertial frames of reference with velocities $\vec{v}_A=\vec{v}_{A|O}$ and $\vec{v}_B=\vec{v}_{B|O}$, as seen in $S_O$. In the non-relativistic case, Galileo's transformations for velocity lead to the relative velocity between $S_A$ and $S_B$ being symmetric: $$\vec{v}_{A|B}=-\vec{v}_{B|A}$$ Would this relation still be true for the relativistic case? That is, if we drew them in the cartesian coordinate system of $S_O$, would both relative velocities be parallel, with the same magnitude and opposite sense? No. They have the same magnitude but they don’t have opposite directions. See Wikipedia for the general formula. The reason is related to two differences between Lorentz boosts and Galilean boosts. First, the order of Lorentz boosts matters; they don’t commute. Second, two non-collinear Lorentz boosts don’t compose to make just a Lorentz boost; there is also a Wigner rotation.
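A quick numeric check of the claim, for the special case of two perpendicular boosts (using the standard relativistic velocity transformation, in units where $c=1$):

```python
import math

# Frame O: A moves along +x with speed u, B along +y with speed v (units c = 1).
u, v = 0.6, 0.8
gamma_u = 1 / math.sqrt(1 - u**2)
gamma_v = 1 / math.sqrt(1 - v**2)

# Transform each observer's velocity into the other's rest frame:
#   boost along x by u:  wx' = (wx - u)/(1 - u*wx),  wy' = wy/(gamma_u*(1 - u*wx))
v_B_in_A = (-u, v / gamma_u)      # B as seen from A
v_A_in_B = (u / gamma_v, -v)      # A as seen from B

mag_BA = math.hypot(*v_B_in_A)
mag_AB = math.hypot(*v_A_in_B)

# Cross product of v_B_in_A with -v_A_in_B: zero iff the two are antiparallel.
cross = v_B_in_A[0] * (-v_A_in_B[1]) - v_B_in_A[1] * (-v_A_in_B[0])
print(mag_BA, mag_AB)   # equal magnitudes
print(cross)            # nonzero: the directions are NOT exactly opposite
```

Both magnitudes come out as $\sqrt{u^2+v^2-u^2v^2}$, but the mismatch in direction (the nonzero cross product) is exactly the signature of the Wigner rotation mentioned in the answer.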
{ "domain": "physics.stackexchange", "id": 64381, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, velocity, inertial-frames", "url": null }
inorganic-chemistry, coordination-compounds, organometallic-compounds Title: What's the explanation for the chelate effect? I got that multidentate ligands form more stable coordination complexes than monodentate ligands, but why? The chelate effect is generally agreed to be a thermodynamic effect caused by the change in entropy upon binding of a bidentate ligand. General description of the chelate effect: Consider a coordination complex, [M]L2, where L = a standard monodentate ligand. If we now add a bidentate ligand, two monodentate ligands are released. The ligand substitution reaction in the forward direction is therefore generally favourable: ∆S, the entropy term is positive due to two molecules reacting to become three, with this favourable entropy term having the effect of tending to make ∆G negative (negative ∆G is favourable, i.e. the reaction likely to proceed).[*] Once a bidentate ligand is bound, it then becomes less favourable to go in the other direction (the entropy term would be negative due to entropy decreasing in the system, which in turn would tend to make ∆G positive, and hence the reaction unfavourable). Example and thermodynamic data: To give a more concrete example demonstrating the significance of the entropy term, consider the following reaction and associated thermodynamic data (taken from Physical Inorganic Chemistry, S. F. A. Kettle, 1996): Clearly, the entropy term dominates, causing ∆G to be negative in both cases (hence causing the forward reaction to be favoured in both cases). [*]: The caveat here is that we're assuming that enthalpy really isn't contributing a whole lot, this is clearly a vast over-simplification, and there are cases where enthalpy becomes important (i.e. when the complex is significantly more stable than the starting materials). We're also ignoring the fact that the act of binding a bidentate ligand itself has some unfavourable entropy, since we're containing the ligand and removing some conformational flexibility
{ "domain": "chemistry.stackexchange", "id": 8529, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, coordination-compounds, organometallic-compounds", "url": null }
ros, catkin, callback, gui, qt add_library(qcgaugewidget SHARED src/rqt_gauges/qcgaugewidget.cpp) ## Specify libraries to link a library or executable target against target_link_libraries(qcgaugewidget ${catkin_LIBRARIES} ${qt_LIBRARIES} ) D. Now add a library for the plugin. It links against your widget library: ## Declare a cpp library add_library(${PROJECT_NAME} ${rqt_gauges_SRCS} ${rqt_gauges_MOCS} ${rqt_gauges_UIS_H} ) ## Specify libraries to link a library or executable target against target_link_libraries(${PROJECT_NAME} qcgaugewidget ${catkin_LIBRARIES} ${qt_LIBRARIES} ) E. You'll probably run into lots of issues with CMakeLists. It's finnicky. I would just depend on the rqt_gauges example as much as possible. F. Sorry, I didn't use signals and slots. I hacked it with a pointer instead. Somebody else will have to help you there. G. You don't need to use QtCreator as your editor. You can use any text editor (even gedit) and compile with catkin_make, like usual for ROS. Originally posted by AndyZe with karma: 2331 on 2016-12-05 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 26394, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, catkin, callback, gui, qt", "url": null }
For example, x ≥ -3 is the solution of a certain expression in variable x. Select Symbol and then More Symbols. For example, the symbol is used below to express the greater-than-or-equal relationship between two variables: ≥. "Greater than or equal to", as the name suggests, means something is either greater than or equal to another thing. < is less than; > is greater than; ≮ \nless: is not less than; ≯ \ngtr: is not greater than; ≤ \leq: is less than or equal to; ≥ \geq: is greater than or equal to; ⩽ \leqslant: is less than or equal to; ⩾ \geqslant: is greater than or equal to. Use the appropriate math symbol to indicate "greater than", "less than" or "equal to" for each of the following: a. Greater than or equal application to numbers: the syntax of Greater than or Equal is A>=B, where A and B are numeric or text values. With Microsoft Word, inserting a greater than or equal to sign into your Word document can be as simple as pressing the Equal keyboard key or the Greater Than keyboard key, but there is also a way to insert these characters as actual equations. For example, 4 ≥ 1 or 3 ≥ 1 shows us a greater-than sign over half an equal sign, meaning that 4 or 3 are greater than or equal to 1. In such cases, we can use the greater than or equal to symbol, i.e. in the Greater than or equal operator, a value A compares with a value B and it will return true in two cases: one is when A is greater than B and another is when A is equal to B. Specifies that one value is greater than, or equal to, another value. This symbol is nothing but the "greater than" symbol with a sleeping line under it. Less Than or Equal To (<=) Operator. "Greater than or equal to" and "less than or equal to" are just the applicable symbol with half an equal sign under it. Greater Than or Equal To: Math Definition. 2 ≥ 2. But, when we say 'at least', we mean 'greater than or equal to'. The less than or equal to symbol is used to express the relationship between two quantities or as a boolean logical operator.
"Greater than or equal to" is represented by the symbol " ≥ ". Solution for 1. The greater-than
{ "domain": "zong-music.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9621075722839015, "lm_q1q2_score": 0.896167132625336, "lm_q2_score": 0.9314625083949473, "openwebmath_perplexity": 948.9165942980742, "openwebmath_score": 0.5518640279769897, "tags": null, "url": "https://zong-music.com/ha2ten/greater-than-or-equal-to-sign-9b504e" }
newtonian-mechanics, fluid-statics, density, buoyancy Title: Measuring liquid density: hydrometer It is possible to measure liquid density with a hydrometer:
{ "domain": "physics.stackexchange", "id": 52223, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, fluid-statics, density, buoyancy", "url": null }
such as the matrix transpose, determinants and the inverse. The symmetric matrix is used in many applications because of its properties. Taking the transpose of each of these produces Mᵀ = (4 −1; −1 9). Matrix transpose: for A = (1 3 5 −2; 5 3 2 1), Aᵀ = (1 5; 3 3; 5 2; −2 1). Example: the transpose operation can be viewed as flipping entries about the diagonal. When you observe the above matrices, the matrix is equal to its transpose. For every distinct eigenvalue, eigenvectors are orthogonal. I started with the matrix that has linearly independent columns. What is on the coordinate $i,j$ of the product? In a Field of Positive Characteristic, $A^p=I$ Does Not Imply that $A$ is Diagonalizable. If a matrix has an inverse, then it is known as an invertible matrix, and if the inverse of a matrix does not exist, then it is called a non-invertible matrix. The transpose of A, denoted by Aᵀ, is an n × m matrix such that the ji-entry of Aᵀ is the ij-entry of A, for all 1 ≤ i ≤ m and 1 ≤ j ≤ n. Definition: Let A be an n × n matrix. I have a wrong result of the inverse matrix, using the Eigen library. Any Automorphism of the Field of Real Numbers Must be the Identity Map, The Formula for the Inverse Matrix of $I+A$ for a $2\times 2$ Singular Matrix $A$. Proof. Use properties of the inverse and transpose to transform this into an expression equivalent to AᵀBᵀ.
Symmetric matrices and the transpose of a matrix sigma-matrices2-2009-1
{ "domain": "reihmann.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9752018426872776, "lm_q1q2_score": 0.804016798240269, "lm_q2_score": 0.8244619350028204, "openwebmath_perplexity": 793.693653494288, "openwebmath_score": 0.7116822600364685, "tags": null, "url": "https://www.reihmann.com/dixit-review-kqge/article.php?page=8ad850-symmetric-matrix-inverse-transpose" }
An easy way begins with Hölder's inequality, $$x_1^2+x_2^2+\ldots+x_n^2 \le \left(x_1+x_2+\ldots+x_n\right)\max(\{x_i\}).$$ (This needs no special proof in this simple context: merely replace one factor of each term $x_i^2 = x_i \times x_i$ by the maximum component $\max(\{x_i\})$: obviously the sum of squares will not decrease. Factoring out the common term $\max(\{x_i\})$ yields the right hand side of the inequality.) Because the $x_i$ are not all $0$ (that would leave $\sigma_x/\bar{x}$ undefined), division by the square of their sum is valid and gives the equivalent inequality $$\frac{x_1^2+x_2^2+\ldots+x_n^2}{(x_1+x_2+\ldots+x_n)^2} \le \frac{\max(\{x_i\})}{x_1+x_2+\ldots+x_n}.$$ Because the denominator cannot be less than the numerator (which itself is just one of the terms in the denominator), the right hand side is dominated by the value $1$, which is achieved only when all but one of the $x_i$ equal $0$. Whence $$\frac{\sigma_x}{\bar{x}} \le f^{-1}\left(1\right) = \sqrt{\left(1 \times (n - 1)\right)\frac{n}{n-1}}=\sqrt{n}.$$ ### Alternative approach
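A quick numeric sanity check of the bound: the maximum is attained when all the mass sits on one observation, where the coefficient of variation equals $\sqrt{n}$ (using the sample standard deviation, i.e. the $n-1$ denominator):

```python
import math
import statistics

# x = (1, 0, ..., 0) attains the bound sd/mean = sqrt(n)
n = 5
x = [1] + [0] * (n - 1)
cv = statistics.stdev(x) / statistics.mean(x)
print(cv, math.sqrt(n))   # both ~2.23607
```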
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9835969655605173, "lm_q1q2_score": 0.8300526563315629, "lm_q2_score": 0.8438950966654772, "openwebmath_perplexity": 275.9423807962711, "openwebmath_score": 0.9661756157875061, "tags": null, "url": "https://stats.stackexchange.com/questions/18621/maximum-value-of-coefficient-of-variation-for-bounded-data-set" }
filters, audio, frequency-spectrum, denoising, audio-processing Most denoising methods share the same structure: get a STFT representation of $Y$ and $N$ (matrices $Y(p, k)$ and $N(p, k)$, where $p$ is the frame index and $k$ the frequency bin index), process pairs of FFT frames from rows of $Y$ and $N$ to get an estimate $\hat{X}$ of a slice of the denoised signal spectrum, and use it to build a filter/mask applied in the frequency domain to $Y$, the conversion back to the time-domain being performed through overlap-add. I suggest you to try to implement the Ephraim-Malah algorithm, which is a classic and often cited "baseline" denoising method. It is based on two intuitive tricks: Estimate the SNR and attenuate the strength of the denoising filter in segments with low SNR (this prevents many artifacts in areas where the signal is close to the noise level). Use temporal smoothing (if we have a high SNR in a frame, make the filtering/subtraction stronger in the next frame since it is also likely to contain signal...). I have found a relatively self-contained set of slides on the topic - the author introduces increasingly sophisticated denoising rules, from spectral subtraction to full-blown Ephraim-Malah. The only thing he doesn't tell you to get to a practical implementation is how to get the estimate $\hat{S}_x(p,k)$ - since you don't observe $X(p,k)$, just $Y(p,k)$ and $N(p,k)$. The simplest approach is to subtract the power spectrum of the noise from the power spectrum of the signal, and zero the STFT cells in which the result is negative: $\hat{S}_x(p,k) = \begin{cases} |Y(p,k)|^2 - |N(p,k)|^2 & when |Y(p,k)|^2 - |N(p,k)|^2 > 0 \\ 0 & otherwise \end{cases}$ You can then try the filtering rules shown on pages 5 and 10 of the slides.
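The zero-clipped power subtraction above is a one-liner per frequency bin. Here is a toy sketch on made-up magnitudes (not a full STFT pipeline, and not the Ephraim-Malah rule itself):

```python
def spectral_subtraction(Y_mag, N_mag):
    """Return the estimated clean magnitude for each bin of one STFT frame:
    sqrt(max(|Y|^2 - |N|^2, 0)) per bin."""
    X_hat = []
    for y, n in zip(Y_mag, N_mag):
        s = y * y - n * n            # estimated signal power in this bin
        X_hat.append(s ** 0.5 if s > 0 else 0.0)
    return X_hat

Y = [3.0, 1.0, 0.5]   # noisy observation magnitudes |Y(p,k)| (toy numbers)
N = [1.0, 1.0, 1.0]   # noise magnitudes |N(p,k)| (toy numbers)
print(spectral_subtraction(Y, N))   # strong bin survives, weak bins are zeroed
```

The hard zeroing is exactly what produces "musical noise" artifacts, which is what the SNR-dependent attenuation and temporal smoothing of Ephraim-Malah are designed to tame.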
{ "domain": "dsp.stackexchange", "id": 657, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, audio, frequency-spectrum, denoising, audio-processing", "url": null }
optics, refraction, geometric-optics Back-substituting $x_a$ to get the apparent depth gives $$D_a = D_r \frac{\frac{n_v}{n_o}\cos^3\theta_v}{\left(1-\left(\frac{n_v}{n_o}\sin\theta_v\right)^2\right)^{3/2}}.$$ As expected, if $n_o=n_v$ then $D_a=D_r$, and when $\theta_v=0$ $D_a=\frac{n_v}{n_o}D_r$. Adding a parametric plot of $(x_a,-D_a)$ in red onto the second graph above shows the line following the bottom of the blue curve of intersections, as expected. Thus, for any given observation ray the image will be formed very near to where the blue line is tangent to the red one.
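The formula is easy to evaluate numerically. A sketch, interpreting $n_v$ as the index of the viewer's medium and $n_o$ as that of the object's medium (consistent with the limits quoted above):

```python
import math

def apparent_depth(D_r, n_v, n_o, theta_v):
    """Apparent depth from the formula above; theta_v is the viewing angle in radians."""
    s = (n_v / n_o) * math.sin(theta_v)
    return D_r * (n_v / n_o) * math.cos(theta_v) ** 3 / (1 - s * s) ** 1.5

# Looking into water (n_o = 1.33) from air (n_v = 1.00), object 1 m deep:
print(apparent_depth(1.0, 1.00, 1.33, 0.0))               # 1/1.33 ~ 0.752 m straight down
print(apparent_depth(1.0, 1.00, 1.33, math.radians(30)))  # shallower still, off-axis
```

At normal incidence this reproduces the familiar $D_a = D_r/n$ rule for water viewed from air, and the apparent depth decreases further as the viewing angle grows.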
{ "domain": "physics.stackexchange", "id": 53182, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, refraction, geometric-optics", "url": null }
algorithm-analysis, runtime-analysis Title: What is the complexity of this algorithm? The algorithm is as follows: a = rand % a random number between 0 and 1 b = a while b == a b = rand end Here rand is a function that returns a random number, generated uniformly, between 0 and 1. Let us say that this is the MATLAB function rand. What is the time complexity of this algorithm? It looks like the best and average complexities are $O(1)$ and the worst complexity is unbounded. rand () is probably a random number generator, but the details are unknown. Since we don't know how exactly rand () behaves we don't know how long this will take. I would assume that with many random number generators it is possible to prove that rand () will not be called more than three times (that is, there is no state where rand () would return the same value three times in a row). Obviously if you use something like Mersenne Twister with thousands of bits of state, a rather lengthy sequence of identical numbers will happen in some states.
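An empirical illustration, with Python's `random.random()` standing in for MATLAB's `rand`: with a continuous uniform generator, a repeat (`b == a`) is so improbable that the loop body essentially always runs exactly once — matching the $O(1)$ expected complexity:

```python
import random

random.seed(42)                      # fixed seed -> deterministic run
max_iters = 0
for _ in range(10_000):
    a = random.random()
    b = a
    iters = 0
    while b == a:                    # the loop from the question
        b = random.random()
        iters += 1
    max_iters = max(max_iters, iters)
print(max_iters)                     # no repeat is ever observed here
```

A repeat would require two identical consecutive 53-bit doubles (probability about $2^{-53}$ per draw), which is why the expected number of iterations is so close to one even though the worst case is unbounded.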
{ "domain": "cs.stackexchange", "id": 6930, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm-analysis, runtime-analysis", "url": null }
c, hash-map printf("%.2f\n",misses/(float)count); return EXIT_SUCCESS; } 1. Bugs The algorithm in ht_get is incorrect: while (ht->table[index] != NULL && hash(ht,ckey = get_key(ht->table[index])) == hkey) { if (ckey == key) return ht->table[index]; index = MODINC(ht->bits,index); } You are assuming here that if you find a key with a different hash in your probe sequence, then this means that the key you are looking for is not there. But that's not right. Try the following test program: #include <stdint.h> #include <stdio.h> #include "ht.h" #define ARRAY_LENGTH(array) (sizeof(array) / sizeof(array[0])) int main(int argc, char *argv[]) { uint64_t k[] = {0x12345671, 0x11223344, 0x22334411, 0x33441122}; ht_t *ht = ht_new(2); for (size_t i = 0; i < ARRAY_LENGTH(k); ++i) { ht_put(ht, k[i], &k[i]); } printf("%p\n", ht_get(ht, k[0])); return 0; }
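The bug is easy to reproduce in a few lines. Here is a Python model of the same probe loops (the table size and hash function are invented for the demo, not taken from the C code):

```python
# A Python model of the buggy probe loop: the search stops as soon as it meets a
# key whose hash differs, so a key displaced past a "foreign" slot is missed.
SIZE = 4
table = [None] * SIZE

def h(key):
    return key % SIZE                # stand-in for the C code's hash()

def put(key):
    i = h(key)
    while table[i] is not None:      # linear probing on insert
        i = (i + 1) % SIZE
    table[i] = key

def buggy_get(key):
    i = h(key)
    while table[i] is not None and h(table[i]) == h(key):
        if table[i] == key:
            return table[i]
        i = (i + 1) % SIZE
    return None

def correct_get(key):
    i = h(key)
    while table[i] is not None:      # probe until an EMPTY slot, not a hash mismatch
        if table[i] == key:          # (assumes the table is never completely full)
            return table[i]
        i = (i + 1) % SIZE
    return None

put(4)   # hashes to 0, stored at slot 0
put(1)   # hashes to 1, stored at slot 1
put(8)   # hashes to 0, displaced past "foreign" slot 1, stored at slot 2
print(buggy_get(8), correct_get(8))
```

Searching for 8, the buggy loop meets key 1 at slot 1 — whose hash differs — and gives up before reaching slot 2, exactly the failure the four-key C test program provokes.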
{ "domain": "codereview.stackexchange", "id": 2738, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, hash-map", "url": null }
organic-chemistry, terminology Title: Meaning of the term "Activation of alkene" While reading organic texts, I have come across authors referring to "Activation of alkene" — what does that mean? Does it mean to include the alkene in resonance, or what else exactly? I saw this notation while reading about the Heck reaction: The palladium-catalyzed C-C coupling between aryl halides or vinyl halides and activated alkenes in the presence of a base is referred to as the "Heck Reaction" I also observed this while reading about nucleophilic conjugate addition: Ordinary nucleophilic additions or 1,2-nucleophilic additions deal mostly with additions to carbonyl compounds. Simple alkene compounds do not show 1,2- reactivity due to lack of polarity, unless the alkene is activated with special substituents. Activation of an alkene just means that the double bond has a higher electron density than that of a normal isolated double bond. That is, the electron density in the double bond is greater than the one observed in ethene $\ce{CH2=CH2}$. Activation, in organic chemistry, generally means the compound displays a greater nucleophilic nature than it normally should due to increased electron density. For example, an $\ce{-OCH3}$ group activates benzene when it forms anisole (methoxybenzene). This makes the benzene ring more electron rich and so makes it easier to react in nucleophilic reactions.
{ "domain": "chemistry.stackexchange", "id": 15348, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, terminology", "url": null }
python, performance, 2048 def game_over(self, win: bool) -> None: '''Ends the game''' self.draw_win() if win: label = GAME_OVER_FONT.render(f'You won! You scored {self.score} points.', 1, BLUE) else: label = GAME_OVER_FONT.render(f'You lost! You scored {self.score} points.', 1, RED) self.win.blit(label, (WIDTH//2 - label.get_width()//2, HEIGHT//2 - label.get_height()//2)) pygame.display.update() pygame.time.delay(3000) self.running = False def draw_win(self) -> None: '''Draws onto the self.win''' self.win.fill(BLACK) self.draw_grid() self.draw_stats() def draw_grid(self) -> None: '''Draws self.matrix onto self.win''' pygame.draw.rect(self.win, BG_COLOR, (LEFT - GAP, TOP - GAP, len(self.grid[0]) * (BLOCK_WIDTH + GAP) + GAP, len(self.grid) * (BLOCK_HEIGHT + GAP) + GAP)) for row in range(len(self.grid)): for col in range(len(self.grid[row])): x = LEFT + (col * (BLOCK_WIDTH + GAP)) y = TOP + (row * (BLOCK_HEIGHT + GAP)) value = self.grid[row][col] bg_color, font_color = COLOR_MAP[value] color_rect = pygame.Rect(x, y, BLOCK_WIDTH, BLOCK_HEIGHT) pygame.draw.rect(self.win, bg_color, color_rect) if value != 0: label = BLOCK_FONT.render(str(value), 1, font_color) font_rect = label.get_rect() font_rect.center = color_rect.center self.win.blit(label, font_rect)
{ "domain": "codereview.stackexchange", "id": 42088, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, 2048", "url": null }
c++, parsing, math-expression-eval void signHandler(vector<char> &sign, string poly, int &i, vector<int> &firstPos, vector<int> &minus) { char next = poly[i + 1]; if (!(i == poly.length() - 1)) { sign.push_back(next); int posTemp = sign.size() - 1; if (next == '*' || next == '/') { firstPos.push_back(posTemp); } if (poly[i + 2] == '-') { int posMinus = sign.size() - 1; minus.push_back(posMinus); i++; } i++; } }
{ "domain": "codereview.stackexchange", "id": 20696, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, parsing, math-expression-eval", "url": null }
is difference between independent and exclusive events Hi TR, If two events A and B are independent, then Pr[A and B] = Pr[A]Pr[B]; that is, the probability that both A and B occur is equal to the probability that A occurs times the probability that B occurs. Which of the Venn diagrams has shaded the event that the contractor wins. The probability of choosing a jack on the second pick given that a queen was chosen on the first pick is called a conditional probability. always mutually exclusive b. AU - Micallef, Ivana N. The lecture on 2/8/2011 mainly focused on independence. If A and B are independent events, such that = 0. Answer choices: independent; not independent. Prove that if events A and B are independent, then the complement events of A and B are also independent. A and B are two independent events such that P(A)=(1)/(2) and P(B)=(1)/(3). gl/9WZjCW If A, B are two independent events, show that bar A and bar B are also independent. P(A 1 A 2 A 3)=P(A 1)P(A 2)P(A 3) Are the events A 1, A 2, and A 3 pairwise independent? Toss two different standard dice, white and black. Find the probability occurrence of A?a)1. Similarly, suppose event A is the drawing of an ace from the pack of 52 cards. 3 and P(B Get solutions. independent events: Two events are independent if knowing the outcome of one provides no useful information about the outcome of the other. True False: The general addition rule may be used to find the union between two events whether or not they are mutually exclusive. 70, what is the value of P(A | B)?. Now we will discuss independent events and conditional probability. If A and B are independent events, the probability of both events occurring is the product of the probabilities of the individual events. (R) Events A and B are independent. Determining the independence of events is important because it informs whether to apply the rule of product to calculate probabilities. 
This argument shows that if two events are independent, then each event is independent of the complement of the other. (justify using probability) Recall that when two events, M and B, are independent,
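The complement-independence claim above can be checked numerically; a minimal sketch (the values 1/2 and 1/3 are taken from the worked example in the text):

```python
# If A and B are independent, then by De Morgan plus inclusion-exclusion
# P(A' ∩ B') = 1 - P(A ∪ B) = 1 - P(A) - P(B) + P(A)P(B),
# which factors as (1 - P(A)) * (1 - P(B)).
p_a, p_b = 1 / 2, 1 / 3
p_ab = p_a * p_b                      # independence: P(A ∩ B) = P(A)P(B)
p_comp_both = 1 - p_a - p_b + p_ab    # P(A' ∩ B')
assert abs(p_comp_both - (1 - p_a) * (1 - p_b)) < 1e-12   # complements independent
```

With these numbers both sides come out to 1/3, matching the factored form.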
{ "domain": "incommunity.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9848109520836026, "lm_q1q2_score": 0.8390124555894783, "lm_q2_score": 0.8519528076067262, "openwebmath_perplexity": 342.42604301853675, "openwebmath_score": 0.6637188196182251, "tags": null, "url": "http://incommunity.it/xyhy/a-and-b-are-two-independent-events.html" }
navigation, ros-kinetic, costmap-2d Originally posted by David Lu with karma: 10932 on 2019-04-22 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by simk0024 on 2019-04-24: You are RIGHT! It's because of the indentation before the plugins. By adding indent and make it like below will fix the issue of adding plugin local_costmap: plugins: - {name: obstacle_layer, type: 'costmap_2d::ObstacleLayer'} - {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"} - {name: inflation_layer, type: 'costmap_2d::InflationLayer'} Log info: [ INFO] : local_costmap/sonar_layer: ALL as input_sensor_type given [ INFO] : RangeSensorLayer: subscribed to topic /arduino/sonar1 [ INFO] : RangeSensorLayer: subscribed to topic /arduino/sonar2 [ INFO] : RangeSensorLayer: subscribed to topic /arduino/sonar3 [ INFO] : RangeSensorLayer: subscribed to topic /arduino/sonar4 [ INFO] : RangeSensorLayer: subscribed to topic /arduino/sonar5 However, range data still not reflected on the costmap, unlike pointcloud and laserscan. Comment by Syrine on 2019-06-27: Hello @simk0024 I'm having the same problem, I can't reflect obstacles from range sensors on the costmap, did you find a solution? Thanks Comment by dinesh on 2021-11-23: No solution till now?
{ "domain": "robotics.stackexchange", "id": 32842, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, ros-kinetic, costmap-2d", "url": null }
automata, computation-models Title: Closure on regular languages A) Let $L$ be a regular language. According to the theorem there is a DFA which accepts the language. Describe briefly how to change the DFA into an NFA which accepts $L^R$, where $R$ denotes reversal. There is no need to write the construction formally or to prove correctness. B) True or False: if $L$ is not regular then $L^R$ is not regular. My solution: A) First of all, because we want the reverse, the accepting state would become the start and the rejecting states will become accepting; also change the direction of the edges to the opposite side. But when it comes to changing it to an NFA I'm a bit stuck. B) I think it's true, since it doesn't matter if it's reversed: if it's regular then of course $L^R$ will be regular as well. Is this the way for A and B? A) First of all, because we want the reverse, the accepting state would become the start and the rejecting states will become accepting; also change the direction of the edges to the opposite side. You're almost there. What happens if you have multiple accepting states in the original DFA? That's where you need NFA capabilities to convert them. Here is a spoiler in case you want to check your answer: Designing a DFA and the reverse of it on ComputerScience.SE. B) Your answer is true although you might want to phrase it a bit more carefully. We want to prove "if $L$ is not regular, $L^R$ is not regular". So let $L$ be not regular. If $L^R$ were regular, so would be $(L^R)^R = L$, which is a contradiction. Hence, $L^R$ is not regular.
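The construction hinted at in the answer can be sketched in code (the dictionary-based DFA encoding and all names here are mine, not from the question). The set of old accepting states becomes the set of initial NFA states — equivalent to a fresh start state with epsilon-moves to each of them — and the old start state becomes the sole accepting state:

```python
def reverse_dfa(delta, start, accepts):
    """Reverse every DFA transition: p --a--> q becomes q --a--> p."""
    rdelta = {}
    for (p, a), q in delta.items():
        rdelta.setdefault((q, a), set()).add(p)
    # (reversed transitions, initial state set, accepting state set)
    return rdelta, set(accepts), {start}

def nfa_accepts(rdelta, initial, accepting, word):
    """Standard subset simulation of the NFA."""
    current = set(initial)
    for a in word:
        current = {p for q in current for p in rdelta.get((q, a), set())}
    return bool(current & accepting)

# DFA over {a, b} accepting exactly the words that end in b
delta = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1}
rdelta, init, acc = reverse_dfa(delta, 0, {1})
# The reversed NFA accepts L^R: the words that *start* with b
```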
{ "domain": "cs.stackexchange", "id": 15733, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "automata, computation-models", "url": null }
special-relativity, speed-of-light, astronomy, one-way-speed-of-light This is a good question. Neglecting the CMB dipole anisotropy, we see a very nearly isotropic large scale structure to the universe. So your question is, how could a non-isotropic synchronization convention possibly explain the observed isotropy? As you say, light from the “fast” direction would have a shorter delay than light coming from the “slow” direction. So the fast light would give more recent data and the slow light would give more ancient data. Since both directions show galaxies of roughly the same age, that means that there is an anisotropic cosmological gravitational time dilation. Galaxies in the fast light direction age more slowly and galaxies in the other direction age faster. Yes, such a convention would be very cumbersome and inconvenient, which is why it is not used. But it would be self consistent and also consistent with the cosmological data.
{ "domain": "physics.stackexchange", "id": 83082, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, speed-of-light, astronomy, one-way-speed-of-light", "url": null }
dynamic-programming, shortest-path, optimal-strategy You can construct a reduction from 3SAT. A formula $\varphi$ with $M$ clauses on $K$ variables corresponds to $M$ finite state systems run for $K$ steps. Here $u_k$ is the $k$th variable, and you construct $f,g,x_0$ so that $\sum_k g(x^i_k,u_k)$ is zero if $u_1,\dots,u_K$ satisfy clause $i$ of $\varphi$, or $>0$ if not. (This can be easily arranged. You have $2M$ states, each of the form $(i,\text{True})$ or $(i,\text{False})$; $x^i_0=(i,\text{True})$; $f$ is designed so that if you read in a variable that violates clause $i$ while in state $(i,\text{True})$, you transition to $(i,\text{False})$; and $g(i,\text{True})=0$ and $g(i,\text{False})=1$.) Now there exists an input sequence such that $J=0$ iff $\varphi$ is satisfiable.
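A toy version of this reduction can be written out directly. Note this sketch uses a slightly simplified clause machine of my own framing — a per-clause flag that flips to "sat" once a satisfying literal is read, with terminal cost 1 if it never flips — rather than the exact $(i,\text{True})/(i,\text{False})$ encoding in the answer; the key property, $J=0$ iff the assignment satisfies $\varphi$, is the same:

```python
# clauses: one list per clause; each clause is a list of (variable_index, wanted_value).
# assignment: the booleans u_1..u_K.  The returned J counts unsatisfied clauses.
def total_cost(clauses, assignment):
    cost = 0
    for clause in clauses:
        state = "unsat"                      # per-clause initial state x_0^i
        for var, wanted in clause:
            if assignment[var] == wanted:
                state = "sat"                # f: transition on a satisfying literal
        cost += 1 if state == "unsat" else 0 # g: charge clauses left unsatisfied
    return cost

# phi = (x0 OR NOT x1) AND (x1)
clauses = [[(0, True), (1, False)], [(1, True)]]
```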
{ "domain": "cs.stackexchange", "id": 19257, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dynamic-programming, shortest-path, optimal-strategy", "url": null }
fourier-transform, frequency-spectrum, power-spectral-density, random-process \lim_{T\rightarrow\infty}E\left\{ \frac{1}{T} \int_{-T/2}^{T/2}x^*(t)e^{j\omega t}dt \int_{-T/2}^{T/2}x(t')e^{-j\omega t'}dt' \right\}$$ The inverse Fourier transform of the power spectrum $S_x(\omega)$ should equal the autocorrelation function $R_x(\tau)$: $$\mathcal{F}^{-1}\{S_x(\omega)\}=\frac{1}{2\pi}\int_{-\infty}^{\infty}S_x(\omega)e^{j\omega \tau}d\omega=\\= \lim_{T\rightarrow\infty}E\left\{ \frac{1}{2\pi T} \int_{-\infty}^{\infty}\int_{-T/2}^{T/2}x^*(t)e^{j\omega t}dt \int_{-T/2}^{T/2}x(t')e^{-j\omega t'}dt'e^{j\omega \tau}d\omega \right\}=\\= \lim_{T\rightarrow\infty}E\left\{ \frac{1}{T} \int_{-T/2}^{T/2}x^*(t) \int_{-T/2}^{T/2}x(t')\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{j\omega (t+ \tau-t')}d\omega\right]dtdt' \right\} $$ The expression in brackets equals $\delta(t+\tau-t')$, which gives
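Completing the truncated final step (standard Wiener–Khinchin bookkeeping): the sifting property of $\delta(t+\tau-t')$ collapses the $t'$ integral for $|t+\tau|\le T/2$, and the boundary mismatch vanishes as $T\rightarrow\infty$, so

$$\mathcal{F}^{-1}\{S_x(\omega)\}= \lim_{T\rightarrow\infty}E\left\{ \frac{1}{T} \int_{-T/2}^{T/2}x^*(t)\,x(t+\tau)\,dt \right\}=R_x(\tau)$$

which is the autocorrelation, as required.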
{ "domain": "dsp.stackexchange", "id": 1968, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fourier-transform, frequency-spectrum, power-spectral-density, random-process", "url": null }
ros, c++, rviz, moveit, ros-melodic [ INFO] [1653046150.178819596]: Completed trajectory execution with status SUCCEEDED ... [ INFO] [1653046150.179159290]: Execution completed: SUCCEEDED There are no errors displayed in terminal, and I've got no obstacles in my environment. How do I get to execute the full trajectory? Repository here Originally posted by brynagut on ROS Answers with karma: 16 on 2022-05-20 Post score: 0 I'm an idiot. Realized ros::Duration(0.5).sleep(); only allowed the trajectory to be executed within the specified timeline. Sorry to any eyes that had to witness this. Originally posted by brynagut with karma: 16 on 2022-05-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Mike Scheutzow on 2022-05-21: You can pass an argument, wait=True, to have the execute() call block until it is done (or fails.) It's also a good idea to check the status returned by execute().
{ "domain": "robotics.stackexchange", "id": 37695, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, c++, rviz, moveit, ros-melodic", "url": null }
thermodynamics, ideal-gas, volume The same is true of this idea of partial volume. It is the volume the gas would take up if all the other constituents are removed and all the other thermodynamic variables are held constant. The whole concept of partial pressure and volume is only possible because the effects of the different constituents are independent of each other. That is, to first approximation, gas A doesn’t know about gas B and vice versa. If the gasses are strongly interacting, either because there’s some chemical reaction going on or our gas densities are really high, this whole concept of partial anything breaks down and we have to consider a more nuanced way of thinking about the effects of the gasses on their enclosure. But for simple gasses at reasonable densities, you can consider the effects of the gasses as more or less independent of each other and talk about their “partial” contributions to the total observed pressure and volume.
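A quick numerical illustration of the ideal-mixture picture (the composition numbers are illustrative, not from the post): each component's partial volume follows the ideal gas law on its own, and the partial volumes sum to the total volume.

```python
R = 8.314        # gas constant, J/(mol·K)
T = 300.0        # temperature, K
P = 101325.0     # total pressure, Pa
moles = {"N2": 0.78, "O2": 0.21, "Ar": 0.01}              # mol of each component
partial_v = {g: n * R * T / P for g, n in moles.items()}  # V_i = n_i * R * T / P
total_v = sum(moles.values()) * R * T / P                 # V from total mole count
assert abs(sum(partial_v.values()) - total_v) < 1e-12     # Amagat-style additivity
```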
{ "domain": "physics.stackexchange", "id": 54627, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, ideal-gas, volume", "url": null }
xgboost, hyperparameter-tuning, hyperparameter Title: XGboost and regularization Does the XGBClassifier method utilize the two regularization terms reg_alpha and reg_lambda, or are they redundant and only utilized in the regression method (i.e., XGBRegressor)? I think that some hyperparameters in XGBoost could have no effect on specific methods (e.g., scale_pos_weight in XGBRegressor). XGB uses the two kinds of regularization in both classification and regression; each leaf is a continuous score, and these scores are added together for the final prediction (of log-odds in the classification case), so penalizing the weights makes sense in either setting. See also L1 & L2 Regularization in Light GBM But yes, some hyperparameters (scale_pos_weight) seem to be vestigial: What does xgb's scale_pos_weight parameter do for regression? https://discuss.xgboost.ai/t/scale-pos-weight-for-regression/218/10
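The mechanism is easiest to see in XGBoost's leaf-weight formula, which is identical for classifier and regressor — only the gradients differ. A pure-Python sketch of that formula (my own transcription of the documented objective, not library code):

```python
def leaf_weight(G, H, reg_alpha=0.0, reg_lambda=1.0):
    """Optimal leaf weight w* = -soft_threshold(G, alpha) / (H + lambda),
    where G and H are the sums of per-sample gradients and hessians in the leaf."""
    if G > reg_alpha:
        G_adj = G - reg_alpha
    elif G < -reg_alpha:
        G_adj = G + reg_alpha
    else:
        G_adj = 0.0          # L1 (reg_alpha) zeroes leaves with small total gradient
    return -G_adj / (H + reg_lambda)   # L2 (reg_lambda) shrinks every leaf toward 0
```

Since the penalty acts on leaf scores (log-odds contributions in classification, raw predictions in regression), both objectives use it.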
{ "domain": "datascience.stackexchange", "id": 7626, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "xgboost, hyperparameter-tuning, hyperparameter", "url": null }
java, unit-testing, junit HungarianSolverTest.java package AssignmentProblem; import java.util.Arrays; import java.util.Comparator; import java.util.stream.Stream; import org.junit.jupiter.api.Assertions; import static org.junit.jupiter.api.Assertions.assertArrayEquals; import org.junit.jupiter.api.DynamicTest; import org.junit.jupiter.api.TestFactory; import test.tools.TestArguments; import test.tools.TestFrameWork; public class HungarianSolverTest implements TestFrameWork<HungarianSolver, HungarianArgument> { @TestFactory public Stream<DynamicTest> testInitialiseValidInput() { //Check that initialise does not crash on valid input. //Correctness of the result is checked in tests linked to the methods getting the results. return test("initialise (valid input)", v -> v.convert()); } @TestFactory public Stream<DynamicTest> testInitialiseInvalidInput(){ Stream<HungarianArgument> cases = Stream.of( new HungarianArgument(null, null, null, null, "null cost matrix"), new HungarianArgument(new int[0][0], null, null, null, "size 0 cost matrix"), new HungarianArgument(new int[][]{{0}, {0,1}, {0,1,2},{0,1},{0}}, null, null, null, "non-rectangular cost matrix")); return test(cases, "initialise (invalid input)", v -> Assertions.assertThrows(IllegalArgumentException.class, () -> v.convert())); } @TestFactory public Stream<DynamicTest> testGetRowAssignments() { return test("getRowAssignments", v -> assertArrayEquals(v.expectedRowAssignment, v.convert().getRowAssignments())); }
{ "domain": "codereview.stackexchange", "id": 42094, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, unit-testing, junit", "url": null }
m. An object with distance information to be converted to a "dist" object. The Minkowski distance defines a distance between two points in a normed vector space. Here I demonstrate the distance matrix computations using the R function dist(). p p=2, the distance measure is the Euclidean measure. Similarity measure 1. is a numerical measure of how alike two data objects are. Minkowski Distance. As mentioned above, we can manipulate the value of p and calculate the distance in three different ways-p = 1, Manhattan Distance . 2. higher when objects are more alike. < It means, the distance be equal zero when they are identical otherwise they are greater in there. let p = 1.5 let z = generate matrix minkowski distance y1 y2 y3 y4 print z The following output is generated Minkowski Distance – It is a metric intended for real-valued vector spaces. The Minkowski distance or Minkowski metric is a metric in a normed vector space which can be considered as a generalization of both the Euclidean distance and the Manhattan distance. Minkowski distance is typically used with being 1 or 2, which correspond to the Manhattan distance and the Euclidean distance, respectively. How do you calculate Minkowski distance? Maximum distance between two components of x and y (supremum norm) manhattan: Absolute distance between the two vectors (1 norm aka L_1). The Euclidean Distance tool is used frequently as a stand-alone tool for applications, such as finding the nearest hospital for an emergency helicopter flight. skip 25 read iris.dat y1 y2 y3 y4 skip 0 . Additionally, how do you calculate Supremum distance? In the limit that p --> +infinity , the distance is known as the Chebyshev distance. Which approach can be used to calculate dissimilarity of objects in clustering?
Alternatively, this tool can be used when creating a suitability map, when data representing the distance from a certain object is needed. pdist supports various distance metrics: Euclidean distance, standardized Euclidean distance, Mahalanobis distance,
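The three cases discussed above — p = 1 (Manhattan), p = 2 (Euclidean), and the p → ∞ supremum/Chebyshev limit — in a short sketch (the sample vectors are illustrative):

```python
def minkowski(x, y, p):
    """Minkowski distance; p = float('inf') gives the supremum (Chebyshev) norm."""
    if p == float("inf"):
        return max(abs(a - b) for a, b in zip(x, y))
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y = (0, 3, 4, 5), (7, 6, 3, -1)
d1 = minkowski(x, y, 1)               # Manhattan: 7 + 3 + 1 + 6 = 17
d2 = minkowski(x, y, 2)               # Euclidean: sqrt(95)
dinf = minkowski(x, y, float("inf"))  # supremum: max component difference = 7
```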
{ "domain": "kstc.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9736446448596305, "lm_q1q2_score": 0.8351613254951236, "lm_q2_score": 0.857768108626046, "openwebmath_perplexity": 654.938706732156, "openwebmath_score": 0.8009686470031738, "tags": null, "url": "https://kstc.com/parabolic-listening-ozjqqgn/6tqq5g.php?ca6c6f=minkowski-distance-supremum" }
# Homework Help: Exponential Distribution, Mean, and Lamda confusion 1. Jun 30, 2017 ### Of Mike and Men 1. The problem statement, all variables and given/known data Accidents at a busy intersection follow a Poisson distribution with three accidents expected in a week. What is the probability that at least 10 days pass between accidents? 2. Relevant equations F(X) = 1- e-λx μ = 1/λ 3. The attempt at a solution Let x = amount of time between accidents in days My r.v. is continuous so x~Exponential(λ=?) E(x) = 3/7 (in days) Since E(x) = μ = 1/λ = 3/7 λ = 7/3 Thus x~Exponential(λ=7/3) P(X≥10) = 1-P(X≤10) = 1- F(10) = 1- e-70/3 Answer in the back of the book says: λ = 3/7 P(X≥10) = 1-P(X≤10) = 1- F(10) = 1- e-30/7 I'm confused why λ = 3/7 and not 7/3 if my expected value is 3/7. Shouldn't lambda, by definition, be its reciprocal? 2. Jun 30, 2017 I agree that $\lambda=3/7$. $\lambda$ is the rate at which things occur. I think your very last line is in error though. As I understand it, the exponential distribution comes from the differential equation $\frac{dN}{dt}=-\lambda N$. This would give $P(X \geq x)=e^{- \lambda x}$ and $P(X \leq x)=1-e^{- \lambda x}$. 3. Jun 30, 2017 ### Ray Vickson For a Poisson random variable $N$ with mean $m$ the probability mass function is $$P(N = k) = \frac{m^k}{k!} e^{-m}, \; k = 0,1,2,\ldots$$ In your case if you measure arrivals per day your $m = 3/7$.
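Using λ = 3/7 accidents per day (as in the book and the replies) and the survival form from post #2, P(X ≥ 10) = e^{−30/7} ≈ 0.0138; a simulation sanity-check:

```python
import math
import random

lam = 3 / 7                        # rate: accidents per day
exact = math.exp(-lam * 10)        # P(X >= 10) = e^{-30/7} ≈ 0.0138

random.seed(0)
n = 200_000
hits = sum(random.expovariate(lam) >= 10 for _ in range(n))
assert abs(hits / n - exact) < 0.005   # Monte Carlo estimate agrees
```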
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9766692311915195, "lm_q1q2_score": 0.8159910718575487, "lm_q2_score": 0.8354835452961427, "openwebmath_perplexity": 373.5760803522127, "openwebmath_score": 0.900187075138092, "tags": null, "url": "https://www.physicsforums.com/threads/exponential-distribution-mean-and-lamda-confusion.919023/" }
javascript, regex, math-expression-eval, lexical-analysis 2: Slicing the string is fine. If there is more to the question, then please elaborate. 3: Use of the re variable to store the match is not too clever. I recommend against assignment inside the if condition, though, it's just something that can be error prone, in general. Down the road you may make an edit and forget to have only one =, or something like that. It's usually considered against best-practices, but is not invalid or anything. 4: As I'm not a compiler guy, take this for what it's worth (there is probably obvious conventional wisdom that I don't know about). Using a single token type for these four operations seems fine to me. They all have similar characteristics. But, it will likely be an issue if you broaden this MATHOP usage to operators that have different characteristics, like the unary - (e.g. 1 + -(3 + 4)). Having said that, you may end up keeping these four operators under one BINARY_OP type. You may end up wanting to group on precedence, though. 5: Not a question. One thing I want to note, the eat and getNextToken methods are organized in a way that is a little strange, to me. You consume a token (move pos forward), then when you are getting the next token after that, check the type of the previous token. I like to have separate getNNNToken methods for the various tokens I have. For instance, you might have getIntToken() and getOpToken(). Each of these can attempt to consume the correct type of token, and if it fails return undefined. The method that called them can decide if that is an error or if another type of token should be attempted, like getEofToken(). As a side note, a scanner I wrote for a DSL: JavaScript and Python. It uses a mix of regular expressions and character comparisons for consuming content. The scanner has a start property and a pos property, and any time a token is created the content spans from start to pos-1. Then, start is moved forward to pos.
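The "separate getNNNToken methods" suggestion can be sketched quickly (in Python rather than the post's JavaScript; all names here are illustrative). Each getter either consumes its token kind and advances `pos`, or returns `None` and leaves `pos` untouched, so the caller can try alternatives:

```python
import re

class Scanner:
    def __init__(self, text):
        self.text = text
        self.pos = 0

    def _match(self, pattern, kind):
        m = re.compile(pattern).match(self.text, self.pos)
        if m is None:
            return None          # failed: nothing consumed, caller tries another getter
        self.pos = m.end()
        return (kind, m.group())

    def get_int_token(self):
        return self._match(r"\d+", "INT")

    def get_op_token(self):
        # one token type for + - * /, all binary, as discussed above
        return self._match(r"[+\-*/]", "BINARY_OP")

    def get_eof_token(self):
        return ("EOF", "") if self.pos == len(self.text) else None
```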
{ "domain": "codereview.stackexchange", "id": 18912, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, regex, math-expression-eval, lexical-analysis", "url": null }
redox, combustion, carbohydrates Title: Why do gummy bears explode when added to hot potassium chlorate? This link shows that a gummy bear explodes when in contact with heated potassium chlorate, $\ce{KClO3}$. But what in a gummy bear creates this reaction? Also, do other foods (fruit, icing sugar...) react as violently with potassium chlorate? Potassium chlorate is a source of oxygen. After heating, it decomposes to $\ce{O2}$ and $\ce{KCl}$: $$\ce{4 KClO3 → KCl + 3 KClO4}$$ $$\ce{KClO4 → KCl + 2O2}$$ The gummy bear is mainly composed of sugar and other carbohydrates. Those carbohydrates will react with oxygen, combustion occurs. For example, glucose will react in this manner: $$\ce{6O2 + C6H12O6 -> 6CO2 + 6H2O}$$ If there is any material present which does not burn, such as $\ce{H2O}$, the temperature will not rise as high. For gummy bears the reaction works spectacularly because they are mainly carbohydrates (>70%). An apple, for example, has only ~13% carbohydrates – unless you dry it, of course. On the other hand, this video on YouTube is an example of how sugar itself reacts violently with potassium chlorate.
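Combining the two decomposition steps gives the familiar net reaction $\ce{2 KClO3 -> 2 KCl + 3 O2}$, i.e. 1.5 mol of oxygen per mol of chlorate; a quick stoichiometry check (the 10 g sample size is illustrative):

```python
M_K, M_Cl, M_O = 39.10, 35.45, 16.00    # g/mol, rounded atomic masses
M_KClO3 = M_K + M_Cl + 3 * M_O          # 122.55 g/mol
grams = 10.0                            # illustrative chlorate sample
mol_O2 = grams / M_KClO3 * 1.5          # net: 2 KClO3 -> 2 KCl + 3 O2
```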
{ "domain": "chemistry.stackexchange", "id": 2518, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "redox, combustion, carbohydrates", "url": null }
neural-network, dataset, machine-learning-model, audio-recognition, speech-to-text Title: How to double audio dataset? I am trying to develop a mispronunciation detection model for English speech. I use the TIMIT dataset, which is a phoneme-labeled audio dataset. A phoneme is any of the perceptually distinct units of sound. So, my dataset looks like an audio file and a string of phonemes corresponding to that audio. Ex: SX141.wav -> p l eh zh tcl t ax-h pcl p axr tcl t ih s pcl p ey dx ih n ax v aa dx ix z ix kcl k w aa dx ix kcl k ah m pcl p tcl t ih sh ix n So, the problem is overfitting. My model is very good at training, but poor on testing. Because of this, I want to try to synthetically increase my dataset, maybe by changing the speed of the audio or adding some background noise. Are there any ready-made solutions for doubling an audio dataset? Or, how can I change the speed of, and add noise to, an audio file? Will it be helpful? I did not find a ready-made solution, so I solved this task myself. Increase speed (writing the same samples with a higher sample rate plays them back faster):
from scipy.io.wavfile import read, write
Fs, data = read(filename)
write(destination, int(Fs * 1.25), data)  # 1.25x playback rate
I save the file and increase its sample rate by a factor of 1.25. Add noise (the noise amplitude must be scaled to the signal: TIMIT samples are int16, so a fixed standard deviation of 0.2 would be inaudible):
import numpy as np
from scipy.io.wavfile import read, write
Fs, data = read(filename)
noise = np.random.normal(0, 0.02 * np.abs(data).max(), data.shape)
write(destination, Fs, (data + noise).astype(data.dtype))  # cast back to the original dtype
Here I generate the noise array and add it to the original wav signal.
{ "domain": "datascience.stackexchange", "id": 9626, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neural-network, dataset, machine-learning-model, audio-recognition, speech-to-text", "url": null }
filters, audio, downsampling, resampling Title: Apply filter before or after Downsampling? Context: I'm trying to do formant estimation. My first attempt didn't work out all that well so I'm trying to follow closely Praat since their formant estimation is remarkably accurate. Question: I thought it was obvious to apply the filter (pre-emphasis) before Downsampling, however Praat doesn't. Is there a reason for this? Are filters usually applied before or after downsampling? I haven't looked at the reference yet, but just to answer the question: yes, filtering is performed prior to downsampling to avoid out-of-band signals aliasing back in. However, if your signal is already appropriately bandlimited you could forgo the filter entirely. There are various reasons you might want to do other, unrelated, filtering after the downsampling operation. In fact, doing such filtering after the downsampler is preferable due to the lower sample rate and thus lower computational complexity.
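A minimal numpy sketch of the "filter, then downsample" order. The moving-average kernel is a crude stand-in for a properly designed anti-aliasing FIR (e.g. a windowed sinc with cutoff below the new Nyquist frequency):

```python
import numpy as np

def downsample(x, factor, taps=8):
    kernel = np.ones(taps) / taps              # crude lowpass, illustrative only
    filtered = np.convolve(x, kernel, mode="same")  # anti-alias filter first...
    return filtered[::factor]                  # ...then keep every factor-th sample

x = np.ones(16)
y = downsample(x, 2)   # half the rate, length 8
```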
{ "domain": "dsp.stackexchange", "id": 9397, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, audio, downsampling, resampling", "url": null }
special-relativity, time, space-travel Extension (to comment): What happens if $S$ turns back to Earth; when both $S$ and $E$ are on Earth, how can $E$ and $S$ both see each other to have aged less? This is known as the twin paradox. The correct "answer" is $E$ has aged 14 years and $S$ has aged 2 years. This was $E$'s perspective all along, so why is $S$'s perspective now "wrong"? The caveat is that when $S$ turns around to return to Earth, $S$ has changed inertia frame/has experienced acceleration. If you know how to deal with accelerating frames, you can compute that $S$ will perceive $E$ to suddenly age $13 \frac{5}{7}$ years during the short interval when $S$ turns around. So $S$'s perspective, when rightfully accounted for, will see $E$ to also age $1/7 + 13 \frac{5}{7} + 1/7 = 14$ years as well.
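The bookkeeping in the last sentence can be reproduced exactly with rationals:

```python
from fractions import Fraction

earth_total, traveler_total = 14, 2
gamma = Fraction(earth_total, traveler_total)  # time dilation factor: 7
per_leg = Fraction(1, 1) / gamma               # Earth aging S sees per 1-year leg: 1/7
turnaround = earth_total - 2 * per_leg         # 96/7 = 13 5/7 years during turnaround
assert per_leg + turnaround + per_leg == earth_total   # 1/7 + 13 5/7 + 1/7 = 14
```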
{ "domain": "physics.stackexchange", "id": 24361, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, time, space-travel", "url": null }
quantum-mechanics, wavefunction, schroedinger-equation, boundary-conditions, dirac-delta-distributions Title: Why does the wave function not vanish at a Dirac delta potential? I have studied that a wave function should vanish at the location of an infinite potential. Consider a Dirac delta potential at $x=0$. Why does the wave function not become zero here at $x=0$? Well, let's review OP's argument for the 1D TISE at some fixed energy $E$. Consider first a finitely tall wall $V\gg E$ of some fixed non-zero thickness $>0$. We find that the wavefunction has some characteristic penetration length (quantum tunnelling), which becomes smaller and smaller as the wall grows taller. Hence the wave function has to vanish inside an infinite wall of non-zero thickness $>0$. But the Dirac delta potential is not of the above form. The support of a Dirac delta distribution is a single point of zero thickness. And in fact one may show that the wavefunction doesn't have to vanish at the support. See also e.g. my related Phys.SE answer here.
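For reference, integrating the time-independent Schrödinger equation over $[-\varepsilon,\varepsilon]$ for $V(x)=\lambda\,\delta(x)$ and letting $\varepsilon\to 0$ yields the standard matching conditions (sign conventions vary between sources):

$$\psi(0^+)=\psi(0^-), \qquad \psi'(0^+)-\psi'(0^-)=\frac{2m\lambda}{\hbar^2}\,\psi(0)$$

So the wavefunction remains continuous, and generally non-zero, at $x=0$; only its derivative jumps.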
{ "domain": "physics.stackexchange", "id": 75096, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, boundary-conditions, dirac-delta-distributions", "url": null }
java, graph queue.add(start); int index = vertices.get(start); visited[index] = true; while(!queue.isEmpty()) { V v = queue.poll(); System.out.print(v + " "); List<V> adjacentVertices = getAdjacentVertices(v); for(V a : adjacentVertices) { int adjInd = vertices.get(a); if(!visited[adjInd]) { queue.add(a); visited[adjInd] = true; } } } } public void dfs(V start) { boolean[] visited = new boolean[vertices.size()]; dfs(start, visited); } private void dfs(V v, boolean[] visited) { System.out.print(v + " "); int index = vertices.get(v); visited[index] = true; List<V> adjacentVertices = getAdjacentVertices(v); for(V a : adjacentVertices) { int aIndex = vertices.get(a); if(!visited[aIndex]) { dfs(a, visited); } } } private List<V> getAdjacentVertices(V v) { int index = vertices.get(v); List<V> result = new ArrayList<>(); for(int i=0; i<adj[index].length; i++) { if(adj[index][i] == 1) { result.add(verticesLookup.get(i)); } } return result; } } Main class class Main { public static void main(String[] args) { GraphAM<Integer> g = new GraphAM<>(4); g.addEdge(0, 1); g.addEdge(0, 2); g.addEdge(1, 2); g.addEdge(2, 0); g.addEdge(2, 3); g.addEdge(3, 3); System.out.println("Following is Breadth First Traversal "+ "(starting from vertex 2)"); g.bfs(2);
{ "domain": "codereview.stackexchange", "id": 29892, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, graph", "url": null }
homework-and-exercises, kinematics, relative-motion So at this instant the situation looks like this: Suppose the hare runs to the right. I tried solving this with relative velocities ($O$ is the origin and subscript $xy$ means $x$ relative to $y$): $$\vec{v}_{DO} = \vec{v}_{DH} + \vec{v}_{HO}$$ But here I seem to get stuck because I don't have any clue what $\vec{v}_{DH}$ could be. There seems to be a paradox in my brain, because this vector's direction changes constantly as the positions of the hare and the dog change. But the position of the dog depends on the velocity vector of the dog, and that is what I am supposed to find! Is there something I am missing in connection with the concept of relative motion? Note: I haven't had an in-depth differential equations course yet, but I can solve simple ones. Suppose the dog has coordinates $\vec{r}_D(t)$, and the hare, $\vec{r}_H(t)$. Defining $\vec{r}_{HD} =\vec{r}_H-\vec{r}_D$, we can conclude $\dot{\vec{r}}_D(t)=v\hat{r}_{HD}$, where $\hat{r}_{HD}\equiv\frac{\vec{r}_{HD}}{|\vec{r}_{HD}|}$. We now simply take a time derivative of this velocity to get the acceleration. Note, we need the product rule; however, $v$ is a constant so we're left with $\ddot{\vec{r}}_D(t)=v\dot{\hat{r}}_{HD}$. This form is independent of the choice of coordinates. Given the hare's motion, you know both and may choose an ideal coordinate system and then differentiate the unit vector for a specific case.
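The vector ODE $\dot{\vec{r}}_D(t)=v\hat{r}_{HD}$ from the answer can also be integrated numerically without a closed-form solution. In this sketch the hare runs along the x-axis at speed $u$ and the dog starts one unit away, perpendicular to the hare's path (all specific numbers are illustrative):

```python
import math

v, u, dt = 2.0, 1.0, 1e-4          # dog speed, hare speed, Euler time step
dog = [0.0, 1.0]                   # dog starts at (0, 1); hare starts at the origin
t = 0.0
dist = 1.0
while t < 10.0:                    # safety bound on simulated time
    hare = (u * t, 0.0)            # hare position on the x-axis
    dx, dy = hare[0] - dog[0], hare[1] - dog[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-3:
        break                      # dog has caught the hare
    dog[0] += v * (dx / dist) * dt # velocity always aims at the hare: v * rhat_HD
    dog[1] += v * (dy / dist) * dt
    t += dt
# For this perpendicular-start setup the classical capture time is
# L*v/(v**2 - u**2) = 2/3, which the simulation should approximate.
```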
{ "domain": "physics.stackexchange", "id": 50936, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, kinematics, relative-motion", "url": null }
I've already seen the following, which I adopted: \begin{align} f^n(x) &= (f(x))^n \\ f^{(n)}(x) &= \frac{{\rm d}^nf}{{\rm d}x^n} \\ f^{\circ n}(x) &= (f \circ f \circ \cdots \circ f)(x)\end{align} However $f^{\circ n}$ does not seem to be standard, so you should always warn the reader when using it. The power of a function may mean two things. Sometimes even the circle is omitted to make it look like multiplication. That's because the collection of all bijections from a set to itself form a group with composition of functions - and the operation in a group is usually seen as multiplication. • I would warn the reader about the first notation more than the last - if the reader is familiar with $f^{\circ n}$, then it is unambiguous, whereas $f^n$ is still ambiguous (and I think is more commonly composition) - though, really, you ought to (gracefully) warn the reader about all of them. – Milo Brandt Jun 8 '15 at 22:33 • Yes, agreed. ${}{}$ – Ivo Terek Jun 8 '15 at 22:33 • I have come across this and also find it to be vastly superior. I hope it sticks as a useful notation. – Alfred Yerger Jun 9 '15 at 2:27 • I like $f^{\circ n}$, and it makes me want to use $f^{\times n}$ or $f^{\cdot n}$ for multiplication, which would avoid the ambiguity of $f^n$, but I've never seen it used. – JiK Jun 9 '15 at 11:15 The most common notation I've seen for $n$-fold composition is $$f(f(\ldots f(x)\ldots ))=f^{n}(x)$$ However this is generally always accompanied by a remark explaining that this is what the notation means. I would recommend you include such a remark.
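Whatever notation one picks, the $n$-fold composite itself is easy to pin down operationally; a small sketch that also shows why the bare $f^2(x)$ is ambiguous:

```python
def compose_n(f, n):
    """f∘f∘...∘f with n copies; n = 0 gives the identity map."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

f = lambda x: x + 3
composed = compose_n(f, 2)(1)   # f(f(1)) = 7   -- the f^{∘2} reading
powered = f(1) ** 2             # (f(1))^2 = 16 -- the other reading of f^2(1)
```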
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9230391727723468, "lm_q1q2_score": 0.8035953932962527, "lm_q2_score": 0.8705972801594706, "openwebmath_perplexity": 543.5660732529633, "openwebmath_score": 0.9586598873138428, "tags": null, "url": "https://math.stackexchange.com/questions/1317708/how-to-denote-powers-of-a-function/1317718" }
navigation, ros-melodic, costmap Title: OccupancyGrid map Issue: ray_ground_filter fails to generate costmap on AutowareAI I have recorded a video where the self-driving car fails to recognize a car obstacle in the Occupancy Grid map: https://www.youtube.com/watch?v=XSKYE1Oxq-c This issue is related to: https://answers.ros.org/question/377308/autowareai-perception-failure-agent-not-detected-in-rviz/#377523 I have not figured out why /semantics/costmap_generator/occupancy_grid does not recognize the AGENT car or pedestrian as an obstacle. When I echoed this topic, it usually returns an infinite array of ZERO values. However, sometimes it returns the value 100: $ rostopic echo /semantics/costmap_generator/occupancy_grid
{ "domain": "robotics.stackexchange", "id": 36400, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, ros-melodic, costmap", "url": null }
dna, hematology Title: Why is blood collected at crime scenes? I have read that mature red blood cells (MRBs) do not have DNA. So I am curious why crime scene technicians collect blood. Is it to collect and amplify segments of white blood cells? There are still white blood cells with DNA. Some forensic blood tests are: Conventional serological analysis: Analysis of the proteins, enzymes, and antigens present in the blood, for general doctor tests: (black/white/drunk/stoned/heroin addict/polio vaccinated/hiv pos...) Restriction Fragment Length Polymorphism (RFLP) DNA: Direct analysis of certain DNA sequences present in the white blood cells. This method also usually requires a "large" sample size to obtain significant results. Polymerase Chain Reaction (PCR) DNA: Analysis of certain DNA sequences that have been copied multiple times to a detectable level.
{ "domain": "biology.stackexchange", "id": 8201, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dna, hematology", "url": null }
computer-architecture, virtual-memory, cpu-pipelines Title: How early can the data TLB (Translation Lookaside Buffer) be accessed in an instruction execution pipeline? In an instruction execution pipeline, the earliest that the data TLB (Translation Lookaside Buffer) can be accessed is: (1) before effective address calculation has started; (2) during effective address calculation; (3) after effective address calculation has completed; (4) after data cache lookup has completed. ===================================================================== I think the answer is 3. Reason: the virtual address is given for the TLB lookup. TLB stands for Translation Lookaside Buffer, where "lookaside" refers to address translation (from virtual to physical). But the virtual address must exist before we can look into the TLB. Please explain which is the correct option. According to the book, the answer is 2. I think that option 2 is wrong, as it describes physical address resolution and not effective address calculation. Often the effective address is the sum of a small constant and a value from a register. Not always, so this trick does not always apply. The trick then is to make a gamble: access the TLB based on just the value from the register, in parallel with adding the offset to it. Usually the small offset won't change the page being accessed; if that works out then you might have the result from the TLB a cycle earlier. Of course there needs to be a check afterwards to see if the upper bits of the virtual address didn't change, and if it fails the TLB access has to be re-done. There is even a patent for this, with way more detail: Guess mechanism for faster address calculation in a pipelined microprocessor
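The register-only gamble described in the answer can be sketched numerically; the 4 KiB page size and the addresses below are my own illustrative assumptions:

```python
PAGE_SHIFT = 12  # assume 4 KiB pages

def guess_page(base_reg, offset):
    """Start the TLB lookup from the base register alone, in parallel with
    the add; afterwards verify that the offset didn't cross a page boundary."""
    guessed = base_reg >> PAGE_SHIFT              # speculative TLB index
    actual = (base_reg + offset) >> PAGE_SHIFT    # true virtual page number
    return guessed, actual, guessed == actual

# small offset within the same page: the gamble pays off
print(guess_page(0x1234, 0x10))   # (1, 1, True)
# offset crosses a page boundary: the TLB access must be redone
print(guess_page(0x1FF8, 0x10))   # (1, 2, False)
```

The `False` case is exactly the "check afterwards" the answer mentions: the upper bits changed, so the speculative lookup was for the wrong page.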
{ "domain": "cs.stackexchange", "id": 12125, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computer-architecture, virtual-memory, cpu-pipelines", "url": null }
graphs, optimization, matching Title: Assignment algorithm for symmetric matrices I stumbled on the Hungarian algorithm during my personal research when I was assigned an interesting problem as homework: Given a list of objects $L$ and a pairing function $\delta : L \times L \rightarrow \left[0, 1\right]$, pair each object $\alpha$ in $L$ with exactly one $\beta$ in $L$ such that: $\sum_{i = 1}^n \delta(\alpha_i, \beta_i)$ is maximal $(\alpha, \beta) \iff (\beta, \alpha)$ $(\alpha, \beta) \implies \alpha \neq \beta$
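For small inputs the problem can be solved by brute force over all perfect pairings (a sketch with made-up scores; in practice this is maximum-weight matching on a general graph, solved in polynomial time by Edmonds' blossom algorithm rather than the Hungarian algorithm, which handles bipartite assignment):

```python
def best_pairing(items, delta):
    """Exhaustive search over all perfect pairings of an even-sized list.
    Returns (best total score, list of pairs). Exponential; illustrative only."""
    if not items:
        return 0.0, []
    first, rest = items[0], items[1:]
    best_score, best_pairs = float("-inf"), None
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        score, pairs = best_pairing(remaining, delta)
        score += delta(first, partner)
        if score > best_score:
            best_score, best_pairs = score, [(first, partner)] + pairs
    return best_score, best_pairs

# illustrative symmetric pairing function on four objects
table = {frozenset("ab"): 0.9, frozenset("ac"): 0.2, frozenset("ad"): 0.5,
         frozenset("bc"): 0.4, frozenset("bd"): 0.3, frozenset("cd"): 0.8}
delta = lambda a, b: table[frozenset((a, b))]

score, pairs = best_pairing(list("abcd"), delta)
print(score, pairs)  # best total is 0.9 + 0.8 = 1.7: pair a with b, c with d
```

Using `frozenset` keys makes the symmetry condition $(\alpha,\beta) \iff (\beta,\alpha)$ automatic.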
{ "domain": "cs.stackexchange", "id": 7650, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "graphs, optimization, matching", "url": null }
image-processing, difference-of-gaussians Title: Which image for DoG filter? I am studying computer vision and have some trouble with the difference of Gaussians filter. We apply Gaussian filters with different values of sigma to the original image and then subtract one result from the other. However, a Gaussian filter blurs the image, so some edges might not be identified. Why could we use DoG? How effective is DoG? Which types of images work best for this filter? Why could we use DoG? An old and low-cost edge-enhancement technique consists in subtracting a blurred image from the original image. Its filter interpretation is an impulse filter (the neutral filter for the original image) minus the blur filter. The resulting filter is positive at its center, and negative around, a sort of coarse Laplacian filter. The above impulse filter can be seen as the limit of a Gaussian filter whose $\sigma$ tends to $0$. The DoG filter somehow generalizes the described old edge-enhancement technique, with a pair of blurring filters. Perhaps surprisingly, two blurs, when subtracted, can sharpen edges. The resulting filter acts as a band-pass with appropriate $\sigma_k$ kernel ratios. How effective is DoG? Its effectiveness is close to that of the Laplacian of Gaussians, which it mimics, see below in 1D. Which types of images work best for this filter? It is believed to work well for large images, since it has very fast implementations. Due to the "double filtering effect" in high frequencies, it is somewhat robust to noise. Depending on the kernel size choice, it enhances features that possess a certain scale extent.
{ "domain": "dsp.stackexchange", "id": 3316, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, difference-of-gaussians", "url": null }
water, mountains Title: How do mountain springs get their water? I am curious how do mountain springs get their water. The water flowing from them eventually forms rivers. Is it only from rain and snow? Or does water also come from underground-below the mountain (if so, then how does it "climb" to the spring which is at a high altitude)? Ultimately, it comes from precipitation. Ordinarily we think of rain as coming from low-level clouds, but Putkonen[1] has compiled rainfall data in the Himalayas showing significant rains up to several thousand meters altitude, covering the range where practically everyone lives. It is this precipitation that fills the underground tables mentioned by Jean-Marie Prival in a comment to the question. Such a source is subject to the effects of climate change, which accordingly has led to significant environmental issues. See Ref [2]. References: 1. Jaakko K. Putkonen, "Continuous Snow and Rain Data at 500 to 4400 m Altitude near Annapurna, Nepal, 1999–2001", Arctic, Antarctic, and Alpine Research, 36:2, 244-248 (2004) 2. Sandeep Tambe, Ghanashyam Kharel, ML Arrawatia, Himanshu Kulkarni, Kaustubh Mahamuni, Anil K Ganeriwala, "Reviving dying springs: climate change adaptation experiments from the Sikkim Himalaya", Mountain Research and Development 32 (1), 62-72 (2012)
{ "domain": "earthscience.stackexchange", "id": 2474, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "water, mountains", "url": null }
algorithms, time-complexity, correctness-proof, network-flow Title: shortest path increases monotonically => a bound on the length of one iteration of Edmonds-Karp is then O(E) ... Convince me this is true I was reading the proof of time-complexity for the Edmonds-Karp algorithm here (https://brilliant.org/wiki/edmonds-karp-algorithm/). Everything in the first part of the proof (the section Monotonically increasing path length) makes sense. However, the last part of it is not very convincing (the part I have highlighted in red). Can someone convince me that it is true that the fact that "the shortest path increases monotonically in the residual graph" implies a "bound of $O(E)$ on one iteration of the Edmonds-Karp algorithm". It is no wonder that you doubt that "the shortest path increases monotonically in the residual graph" implies a "bound of $O(|E|)$ on one iteration of the Edmonds-Karp algorithm". The time bound of $O(|E|)$ has nothing to do with the fact that "the shortest path increases monotonically in the residual graph". It takes $O(E)$ time to perform one iteration of the Edmonds-Karp algorithm, since what it does is mostly a breadth-first search. (It takes $O(|V| + |E|)$ time for a breadth-first search on a general graph. However, in the case of finding the maximum flow in a flow network, we usually assume the given network is connected, or at least that there is one edge incident to every vertex. That is, $|E|\ge |V|-1$ or $2|E|\ge|V|$. Then $O(|V|+|E|)=O(|E|)$)
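A sketch of one such iteration, which is mostly a breadth-first search over residual edges (the toy network and its capacities are my own):

```python
from collections import deque

def bfs_augmenting_path(capacity, source, sink):
    """One Edmonds-Karp iteration: BFS over edges with residual capacity.
    Each edge is examined O(1) times, so a single iteration is O(E)
    (with |V| = O(|E|) under the usual connectedness assumption)."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == sink:
            break
        for v, cap in capacity[u].items():
            if cap > 0 and v not in parent:
                parent[v] = u
                queue.append(v)
    if sink not in parent:
        return None
    path, v = [], sink
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]

# toy residual network (capacities are assumptions for illustration)
cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(bfs_augmenting_path(cap, "s", "t"))  # shortest augmenting path: ['s', 'a', 't']
```

The monotonicity of shortest-path lengths is what bounds the *number* of such iterations; the $O(E)$ cost of each one comes purely from the BFS.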
{ "domain": "cs.stackexchange", "id": 20300, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, time-complexity, correctness-proof, network-flow", "url": null }
newtonian-mechanics, newtonian-gravity, centripetal-force Title: A question on uniform circular motion and minimum tangential velocity I've had a bit of a crisis in my understanding of circular motion and I'm hoping to clear it up here. Would it be correct to say that the condition for a particle to be in uniform circular motion is that there is a net force $\mathbf{F}$ acting on the particle such that $$\mathbf{F}=-\frac{mv^{2}}{r}\hat{\mathbf{r}}$$ Furthermore, if an object is in uniform circular motion along a vertical path such that, in the critical case, the only force acting on it is gravity, then what stops gravity from making the object fall towards the ground instead of continuing to follow its circular path? (I get that it is the fact that the object has a large enough tangential speed.) By requiring that $ \frac{GMm}{r^{2}}=\frac{mv^{2}}{r}$, is this because we wish to know the conditions that must be satisfied for this object to continue in a circular path (as opposed to falling to the ground due to gravity) under the influence of gravity? Circular motion is characterised by the net force satisfying $F= \frac{mv^{2}}{r}$, and so this relationship must hold in the case in which gravity is the only force acting in order for the object to maintain circular motion? Something that is left out of (or insufficiently emphasized in) a lot of textbook treatments of centripetal acceleration/force is how physicists use this fact. In introductory treatments, uniform circular motion plays a very similar role to equilibrium. You are expected to read a problem, notice that some object (say a ladder with a fireman on it) is not accelerating and then proceed to take advantage of the equations of static equilibrium $\sum_i F_i = 0$ and $\sum_i \tau_i = 0$ to work the problem.
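Plugging the condition $GMm/r^2 = mv^2/r$ into numbers (standard values for $G$ and Earth's mass and radius; the 400 km altitude is my own illustrative choice, not from the question):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def circular_orbit_speed(r):
    """Tangential speed v satisfying GMm/r^2 = mv^2/r, i.e. v = sqrt(GM/r)."""
    return math.sqrt(G * M_EARTH / r)

# low Earth orbit, about 400 km up: roughly 7.7 km/s
v = circular_orbit_speed(R_EARTH + 400e3)
print(v)
```

Any slower tangential speed at that radius and gravity pulls the object inward faster than the path curves, which is exactly the "large enough tangential speed" intuition in the question.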
{ "domain": "physics.stackexchange", "id": 32259, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, newtonian-gravity, centripetal-force", "url": null }
: X \to Y$ of a category $\mathcal{C}$ is an {\it epimorphism} if for every pair of morphisms $a, b : Y \to Z$ we have $a \circ f = b \circ f \Rightarrow a = b$. An epimorphism in the category of sets is a surjective map of sets. \begin{exercise} \label{exercise-epi-sheaves-sets} Carefully prove that a map of sheaves of sets is an epimorphism (in the category of sheaves of sets) if and only if the induced maps on all the stalks are surjective. \end{exercise} \begin{exercise} \label{exercise-adjoint-push-pull} Let $f : X \to Y$ be a map of topological spaces. Prove pushforward $f_\ast$ and pullback $f^{-1}$ for sheaves of {\bf sets} form an adjoint pair of functors. \end{exercise} \begin{exercise} \label{exercise-j-shriek} Let $j : U \to X$ be an open immersion. Show that $j^{-1}$ has a left adjoint $j_{!}$ on the category of sheaves of sets. Characterize the stalks of $j_{!}({\mathcal G})$. (Hint: $j_{!}$ is called extension by zero when you do this for abelian sheaves... ) \end{exercise} \begin{exercise} \label{exercise-not-locally-generated-by-sections} Let $X = \mathbf{R}$ with the usual topology. Let $\mathcal{O}_X = \underline{\mathbf{Z}/2\mathbf{Z}}_X$. Let $i : Z = \{0\} \to X$ be the inclusion and let $\mathcal{O}_Z = \underline{\mathbf{Z}/2\mathbf{Z}}_Z$. Prove the following (the first three follow from the definitions but if you are not clear on the definitions you should elucidate them):
{ "domain": "github.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9854964220125032, "lm_q1q2_score": 0.8080019446376565, "lm_q2_score": 0.8198933315126792, "openwebmath_perplexity": 120.73245969658541, "openwebmath_score": 0.9982831478118896, "tags": null, "url": "https://github.com/stacks/stacks-project/blob/master/exercises.tex" }
c++, console, compression void huffman_deserializer::check_signature(std::vector<int8_t>& data) { if (data.size() < sizeof(huffman_serializer::MAGIC)) { std::stringstream ss; ss << "The data is too short to contain " "the mandatory signature. Data length: " << data.size() << "."; std::string err_msg = ss.str(); throw file_format_error(err_msg.c_str()); } for (size_t i = 0; i != sizeof(huffman_serializer::MAGIC); ++i) { if (data[i] != huffman_serializer::MAGIC[i]) { throw file_format_error("Bad file type signature."); } } } size_t huffman_deserializer ::extract_number_of_code_words(std::vector<int8_t>& data) { if (data.size() < 8) { std::stringstream ss; ss << "No number of code words in the data. The file is too short: "; ss << data.size() << " bytes."; std::string err_msg = ss.str(); throw file_format_error{err_msg.c_str()}; } union { size_t num; int8_t bytes[sizeof(size_t)]; } t; t.num = 0; t.bytes[0] = data[4]; t.bytes[1] = data[5]; t.bytes[2] = data[6]; t.bytes[3] = data[7]; return t.num; }
{ "domain": "codereview.stackexchange", "id": 23189, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, console, compression", "url": null }
homework-and-exercises, classical-mechanics, acceleration, kinematics Title: Acceleration: Value Disparity? If we consider a ball moving at an acceleration of $5\ \mathrm{m\ s^{-2}}$, over a time of 4 seconds, the distance covered by the ball in the first second is $5\ \mathrm m$. In the 2nd second it will cover $5\ \mathrm m+5\ \mathrm m=10\ \mathrm m$. In the third second it will cover a distance of $5\ \mathrm m+5\ \mathrm m+5\ \mathrm m=15\ \mathrm m$ and so on and so forth. Now, when we substitute this answer in the equations of motion derived from the area under velocity-time and distance-time graphs, we see a variation: $$\begin{align} s&=\frac12at^2\\ &=\frac12\times5\ \mathrm{m\ s^{-2}}\times(4\ \mathrm s)^2\\ &=\frac12\times5\ \mathrm{m\ s^{-2}}\times16\ \mathrm{s^2}\\ &=40\ \mathrm m \end{align}$$ is the distance covered. Now if we go back to our initial description of acceleration we see that in the first second it covers $5\ \mathrm m$, in the second second $10\ \mathrm m$, in the third second $15\ \mathrm m$, and in the fourth second $20\ \mathrm m$. Total distance covered in this case is $5\ \mathrm m+10\ \mathrm m+15\ \mathrm m+20\ \mathrm m=50\ \mathrm m$? $40\ \mathrm m\neq 50\ \mathrm m$. Why is there this disparity between the values? Can someone please explain? You say: If we consider a ball moving at an acceleration of $5\ \mathrm{m/s^2}$, over a time of 4 seconds, the distance covered by the ball in the first second is $5\ \mathrm m$. etc
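For reference, computing the per-second distances from $s = \frac12 a t^2$ itself (a quick check of my own) shows where the disparity comes from: starting from rest, the object gains velocity, not distance, at 5 per second, so the distances covered each second are 2.5, 7.5, 12.5 and 17.5 m, which do sum to 40 m:

```python
a = 5.0  # m/s^2, starting from rest

def s(t):
    """Distance from rest after t seconds under constant acceleration a."""
    return 0.5 * a * t ** 2

per_second = [s(t + 1) - s(t) for t in range(4)]
print(per_second)        # [2.5, 7.5, 12.5, 17.5]
print(sum(per_second))   # 40.0, matching s = (1/2) a t^2 at t = 4 s
```

The sequence 5, 10, 15, 20 in the question is the *velocity* at the end of each second (in m/s), mistakenly reused as the distance covered during that second.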
{ "domain": "physics.stackexchange", "id": 2359, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, classical-mechanics, acceleration, kinematics", "url": null }
Question # If $$\displaystyle 64 = x^{y}$$, where $$\displaystyle x > y, x\neq 4$$ and $$\displaystyle y\neq 1$$, then x + y = A 7 B 4 C 8 D 10 Solution ## The correct option is D $$10$$. $$64$$ can be written as $${64}^{1}, {8}^{2}, {4}^{3}, {2}^{6}$$. As $$x > y$$, the possible ways of writing $$64$$ are $${64}^{1}, {8}^{2}, {4}^{3}$$. Since $$x \neq 4$$, the remaining options are $${64}^{1}, {8}^{2}$$. Since $$y \neq 1$$, the only remaining option is $${8}^{2}$$. So $$x = 8, y = 2$$ and therefore $$x + y = 10$$.
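The elimination above can also be done by brute force (a quick sketch of my own):

```python
def representations(n):
    """All (x, y) with x**y == n and x > y, by brute force over bases (y >= 1)."""
    out = []
    for x in range(2, n + 1):
        y, p = 1, x
        while p < n:
            p *= x
            y += 1
        if p == n and x > y:
            out.append((x, y))
    return out

cands = representations(64)
print(cands)                                      # [(4, 3), (8, 2), (64, 1)]
valid = [(x, y) for x, y in cands if x != 4 and y != 1]
print(valid, sum(valid[0]))                       # [(8, 2)] 10
```

Note that $2^6$ is already excluded by $x > y$, exactly as in the written solution.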
{ "domain": "byjus.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9890130583409233, "lm_q1q2_score": 0.8015742151299965, "lm_q2_score": 0.8104788995148792, "openwebmath_perplexity": 3246.072841431077, "openwebmath_score": 0.9045971035957336, "tags": null, "url": "https://byjus.com/question-answer/if-displaystyle-64-x-y-where-displaystyle-x-y-x-neq-4-and-displaystyle-y/" }
python, genetic-algorithm self.sessionOrder = sessionOrder self.sessionLengthsChronological = sessionLengthsChronological self.sessionChangeQuanta = sessionChangeQuanta Many crucial methods belonging to this class depend upon knowing the order in which activities are scheduled and how long each activity is scheduled for. This method extracts and stores that information as attributes. One session is defined as a succession of quanta dedicated exclusively to one activity. They appear on schedules as many of the same number in an uninterrupted chain. def CalculateOrderCosts(self): sessionOrderCosts = [] for i in range(0, len(self.sessionOrder) - 1): sessionOrderCosts.append( activitiesInteractionMatrix[self.sessionOrder[i], self.sessionOrder[i + 1]]) self.sessionOrderCosts = sessionOrderCosts self.SessionOrderCostTotal = sum(sessionOrderCosts) def CaclulateTimeOfDayCosts(self): timeOfDayCostsByActivity = [] for i in range(0, len(self.list)): if self.list[i] == "-": timeOfDayCostsByActivity.append(0) else: timeOfDayCostsByActivity.append( activityTimeInteractionMatrix[self.list[i], quantaClassifications[i]]) self.timeOfDayCostsByActivity = timeOfDayCostsByActivity self.timeOfDayCostByActivityTotal = sum(timeOfDayCostsByActivity) These methods exploit the properties of the matrices created above to calculate the cost associated with the timing of activities. def CalculateSessionLengthCosts(self): sessionLengthsCosts = [] for i in self.sessionLengthsChronological: if i < 18: sessionLengthsCosts.append((18 - i) * 5) else: sessionLengthsCosts.append(0) self.sessionLengthCosts = sessionLengthsCosts self.sessionLengthCostsTotal = sum(sessionLengthsCosts)
{ "domain": "codereview.stackexchange", "id": 35993, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, genetic-algorithm", "url": null }
ros, navigation, sbpl-lattice-planner, sbpl, move-base Title: running sbpl_lattice_planner on Nvidia Jetson TK1 Hi all, I have a mobile robot which is navigating around a room, I already have the map of the room. I am using the navigation_stack of ROS. I am using rotary encoders for odometry. I am fusing the data from Rotary encoders and IMU using robot_pose_ekf. I am using amcl for localization and move_base for planning with sbpl_lattice_planner as the global planner. Everything seems to be working on my dell laptop (with intel processor). Now, when I port everything to Nvidia Jetson TK1, the sbpl_lattice_planner does not seem to be working (the default A* planner works). It is very weird because the exact same code works on the laptop. I highly doubt that it is happening because the time to come up with the plan for sbpl_lattice_planner is less in case of Nvidia Jetson. I feel the problem is there because Nvidia Jetson has an ARM processor whereas my laptop has an intel processor and therefore sbpl_lattice_planner is not able to come up with a plan (or a complete plan) in case of Nvidia Jetson but I am not able to figure out how to resolve this issue. Also, sometimes if I give a goal say 10 m away from the starting position, it only comes up with a plan till 2-3 m from the starting position. See image (I gave the goal on the top right of the room but the the plan is only till couple of meters): I am using sbpl and sbpl_lattice_planner. I have tried following but it was of no help: Decreasing number of SBPL primitives. Increasing the global_planner timeout. Decreasing the global_costmap resolution. Does anyone have any idea why is this happening and how can it be resolved? Let me know if you need more information from my side. Thanks in advance. Naman Kumar Originally posted by Naman on ROS Answers with karma: 1464 on 2015-09-07 Post score: 2
{ "domain": "robotics.stackexchange", "id": 22577, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, navigation, sbpl-lattice-planner, sbpl, move-base", "url": null }
research Title: Do "toy" robots move technology forwards? Over the last month, I saw many robots that don't have any real purpose, which made me ask myself: "Does this have any value?" I saw a dancing robot at CES, advanced Lego-based robots, and also robots built for very limited purposes. I saw ten-year-old children playing with robots, and competitions for them. Someone told me that this is just for education and logic spreading. In other cases, there were arguments like, "this is for informing people that everything is going forwards". I know that people will buy robotic vacuum cleaners because they think that they'll save some time, but these robotic cleaners are not very reliable and I see it only as marketing. Do these things (children's education, dancing robots, and other instances of selling a pig in a poke) have any value in terms of robotics, and are they really advancing the field as manufacturers say? It certainly does. Ever since they started writing fiction about robots, they imagined robots as intelligent beings among ourselves. No one thought of robots as mechanical arms that replace your jobs. So first of all, there is no reason why humans wouldn't want to make useless robots. You may have heard of the karakuri dolls from the 17th century, that serve tea (pretty useless, huh?): Or the digesting duck from the 18th century (even worse): You could argue that they don't make sense and that they did not bring the technology forward, but that is not entirely correct. There are many reasons why such robots are useful:
{ "domain": "robotics.stackexchange", "id": 96, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "research", "url": null }
rviz Title: How to convert lidar .bag data to point cloud Hello everyone, I want to create a point cloud from lidar data. I have a bag file of Velodyne lidar data from which I want to create a point cloud. Using rviz I can view the data properly but cannot generate the point cloud. I went through the tutorial http://wiki.ros.org/laser_assembler/Tutorials/HowToAssembleLaserScans but was not able to understand how to use the point cloud assembler. Please can anyone help me with this and simplify it for me... thank you Originally posted by Nikka on ROS Answers with karma: 3 on 2016-12-02 Post score: 0 The ROS Velodyne driver is different from other LIDARs in that it does not publish standard LaserScan messages, but VelodyneScan ones. These have to be converted to the standard PointCloud2 message format using an additional step. How this works is described on the velodyne_pointcloud wiki page. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-12-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Nikka on 2016-12-05: thank you stefan. After converting to the standard PointCloud2 message format, it is still not creating the point cloud. In rviz it gives me moving data; it's not creating a solid point cloud. Can you please help me with this? Is there anything else that I can try?
{ "domain": "robotics.stackexchange", "id": 26382, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rviz", "url": null }
Learning a logistic regression model • Defines a linear decision boundary • Discriminant functions: $g_1(\mathbf{x}) = g(\mathbf{w}^T\mathbf{x})$ and $g_0(\mathbf{x}) = 1 - g(\mathbf{w}^T\mathbf{x})$, where $f(\mathbf{x}, \mathbf{w}) = g(\mathbf{w}^T\mathbf{x})$, $\mathbf{x} = (x_1, \ldots, x_d)$ is the input vector, $z = w_0 + w_1 x_1 + \cdots + w_d x_d$, and $g(z) = 1/(1 + e^{-z})$ is a logistic function. Quadratic Discriminant Analysis (QDA) Quadratic Discriminant Analysis is a more general version of a linear classifier. It looks like this boundary is doing really poorly for non all-NBA players, but I want to see the confusion matrix to make sure. This is a little bit confusing (but on the other hand it increases the contrast of points on top of the background). An example of such a boundary is shown in Figure 11. In such cases quadratic discriminant analysis is required, which is based on the estimation of different covariance structures for the different groups. Below, the green line is QDA, our estimate, and the purple dashed line is the optimal Bayes boundary. Custom legend labels can be provided by returning the axis object(s) from the plot_decision_region function and then getting the handles and labels of the legend. This leads to a model known as quadratic discriminant analysis (QDA), since now the decision boundary is not linear but quadratic. Calculate the decision boundary for Quadratic Discriminant Analysis (QDA): I am trying to find a solution to the decision boundary in QDA. Chao Sima is a Research Assistant Professor in the Computational Biology Division of the Translational Genomics Research Institute in Phoenix, AZ. True or False: For classification, we always want to minimize the misclassification rate. The question was already asked and answered for linear discriminant analysis (LDA), and the solution provided by amoeba to compute this using the "standard Gaussian way" worked well.
As we increase dimensions, we exponentially reduce the number of observations near the point in consideration.
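The logistic-regression fragments above can be turned into a runnable sketch (the weights and inputs below are my own illustration):

```python
import math

def logistic(z):
    """g(z) = 1 / (1 + e^{-z}), the squashing function from the slide."""
    return 1.0 / (1.0 + math.exp(-z))

def f(x, w):
    """f(x, w) = g(w0 + w1*x1 + ... + wd*xd); the decision boundary sits
    at f = 0.5, i.e. where the linear form w^T x is zero."""
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return logistic(z)

w = [0.0, 1.0, -1.0]           # illustrative weights: boundary is x1 = x2
print(f([2.0, 2.0], w))        # on the boundary: 0.5
print(f([3.0, 1.0], w) > 0.5)  # True: positive side of the boundary
```

This is the linear-boundary case; QDA replaces the linear form $\mathbf{w}^T\mathbf{x}$ with a quadratic form, which is why its boundary can curve.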
{ "domain": "tuningfly.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9843363517478327, "lm_q1q2_score": 0.8386081185009281, "lm_q2_score": 0.8519528076067262, "openwebmath_perplexity": 1224.9299710586267, "openwebmath_score": 0.6168559193611145, "tags": null, "url": "http://tuningfly.it/saiv/qda-decision-boundary.html" }
I was studying some literature where, in an equation, the term $$\nabla p_i$$ appears, and it is further stated that $$\nabla$$ is a column-wise gradient and $$p_i = p_i(x,t)$$. What exactly is the meaning of a column-wise gradient? Further, it is seen in the literature that $$p_i$$ is a scalar quantity, and I found one definition from matrix calculus on Wikipedia (scalar by vector), where it can be observed that the derivative of a scalar $$p_i$$ with respect to an independent vector is a row vector $$\left[\frac{\partial p_i}{\partial x} \ \ \ \frac{\partial p_i}{\partial t}\right]$$ Further, since in general $$\nabla p_i$$ is the Jacobian of the scalar $$p_i$$, and from the definition of the Jacobian matrix and determinant it can be seen that $$\nabla p_i = \frac{\partial p_i}{\partial x_j} \ \ \ \text{for} \ \ j = 1,2 \ \ \ \text{and} \ \ \ x_1 = x, \ \ x_2 = t,$$ if $$i = 1,2,...,n$$ then $$\nabla p_i$$ will be an $$n \times 2$$ matrix: $$\nabla p_i = \begin{bmatrix} \frac{\partial p_1}{\partial x} & \frac{\partial p_1}{\partial t} \\ \frac{\partial p_2}{\partial x} & \frac{\partial p_2}{\partial t} \\ \vdots & \vdots \\ \frac{\partial p_n}{\partial x} & \frac{\partial p_n}{\partial t} \end{bmatrix}$$ Can we call this a column-wise gradient? If not, then what is a column-wise gradient and how can we express it in general form?
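Such an $n \times 2$ matrix of partials can be approximated numerically (a sketch of my own, using central differences and an illustrative $p$ with two components):

```python
def jacobian(p, x, t, h=1e-6):
    """Numeric n x 2 matrix of partials [dp_i/dx, dp_i/dt] for a vector-valued
    p(x, t) that returns a list of n scalars, via central differences."""
    px1, px0 = p(x + h, t), p(x - h, t)
    pt1, pt0 = p(x, t + h), p(x, t - h)
    return [[(a - b) / (2 * h), (c - d) / (2 * h)]
            for a, b, c, d in zip(px1, px0, pt1, pt0)]

# illustrative p with n = 2 components: p_1 = x*t, p_2 = x + 2t
p = lambda x, t: [x * t, x + 2 * t]
J = jacobian(p, x=3.0, t=5.0)
print(J)  # approximately [[5, 3], [1, 2]]
```

Each row is the (row-vector) gradient of one scalar component $p_i$; stacking them over $i$ gives the $n \times 2$ matrix written above.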
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9881308782546026, "lm_q1q2_score": 0.8008592426163195, "lm_q2_score": 0.8104789155369047, "openwebmath_perplexity": 318.9934138264622, "openwebmath_score": 0.8299051523208618, "tags": null, "url": "https://math.stackexchange.com/questions/3712457/what-is-the-meaning-of-column-wise-gradient-and-symmetric-gradient" }
# when does cos^2x=sin^2x? • Feb 4th 2010, 02:26 PM Amberosia32 when does cos^2x=sin^2x? when does cos^2 x=sin^2 x? • Feb 4th 2010, 02:32 PM Prove It Quote: Originally Posted by Amberosia32 when does cos^2 x=sin^2 x? You should know from the Pythagorean Identity that $\cos^2{x} + \sin^2{x} = 1$. So $\cos^2{x} = 1 - \sin^2{x}$. Substituting this into your original equation: $\cos^2{x} = \sin^2{x}$ $1 - \sin^2{x} = \sin^2{x}$ $1 = 2\sin^2{x}$ $\sin^2{x} = \frac{1}{2}$ $\sin{x} = \pm\frac{1}{\sqrt{2}}$ $x = \left \{ \frac{\pi}{4}, \frac{3\pi}{4}, \frac{5\pi}{4}, \frac{7\pi}{4} \right \} + 2\pi n$, where $n$ is an integer representing the number of times you have gone around the unit circle. • Feb 4th 2010, 02:48 PM Quote: Originally Posted by Amberosia32 when does cos^2 x=sin^2 x? $Cos^2x=Sin^2x$ $Cos^2x-Sin^2x=0$ $\left(Cosx+Sinx\right)\left(Cosx-Sinx\right)=0$ $Cosx=Sinx,\ or\ Cosx=-Sinx$ As Cosx gives the horizontal co-ordinate and Sinx gives the vertical co-ordinate of the unit circle centred at the origin,
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9886682444653241, "lm_q1q2_score": 0.8151193211149736, "lm_q2_score": 0.8244619220634456, "openwebmath_perplexity": 12880.009839372911, "openwebmath_score": 0.9242940545082092, "tags": null, "url": "http://mathhelpforum.com/trigonometry/127195-when-does-cos-2x-sin-2x-print.html" }
- Thank you for helping as well! I'll try not to forget the constant next time. –  yayu Apr 20 '14 at 16:14 If $f'(x)=4 \cdot x^{-\frac{1}{2}}$ then $f(x)=\int f'(x)\,dx$, which is equal to $8 \cdot x^{\frac{1}{2}}+c \tag{1}$ Here $c$ is some constant. Now, given that $f(1) = 12$: $f(1) = 8\cdot 1^{\frac{1}{2}} + c = 12$ Therefore we can say that $c=4 \tag{2}$ Therefore, from equations (1) & (2), the particular solution of the given equation is $f(x) = 8\cdot x^{\frac{1}{2}} + 4$
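The result can be verified numerically (a quick check of my own): the candidate $f$ should hit the initial condition exactly and its numerical derivative should match $f'$.

```python
def fprime(x):
    return 4 * x ** -0.5          # the given derivative, 4/sqrt(x)

def f(x):
    return 8 * x ** 0.5 + 4       # antiderivative 8*sqrt(x) plus the constant c = 4

h = 1e-6
print(f(1.0))  # 12.0, the initial condition
# central-difference derivative at x = 4 agrees with fprime(4) = 2
print(abs((f(4.0 + h) - f(4.0 - h)) / (2 * h) - fprime(4.0)) < 1e-6)  # True
```

Forgetting the constant would shift `f(1.0)` away from 12 while leaving the derivative check unchanged, which is exactly why the initial condition is needed.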
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9843363489908259, "lm_q1q2_score": 0.8001348433918594, "lm_q2_score": 0.8128673133042217, "openwebmath_perplexity": 441.3579853007627, "openwebmath_score": 0.9536800980567932, "tags": null, "url": "http://math.stackexchange.com/questions/761863/find-the-particular-solution-of-the-equation-that-satisfies-condition" }
finite-impulse-response, infinite-impulse-response, homework, adaptive-filters, equalization Title: Adaptive equalization vs inverse of transfer function I have the following equalization problem as shown in the figure below: Now I can compute the coefficients for my adaptive FIR filter c (dim(c) = N) as follows: $\mathbf{c_{opt}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\,\mathbf{h_{ideal}}$ where $\mathbf{H}$ is a convolution matrix with shifted vectors of $\mathbf{h}$ and $\mathbf{h_{ideal}}$ is chosen such that $x[n]=d[n]$ (delay-free equalizer). The channel impulse response is given as $\mathbf{h} = [1, 0.5]^T$ $\Rightarrow H(z) = 1+0.5 z^{-1}$ so the inverse of the system would be IIR: $1/H(z) = \frac{z}{z+0.5}$ Now the question is the following: What is the difference between the LS solution with an adaptive filter and direct inversion of the system? Is it just that one filter is FIR and the other one IIR? Therefore with the FIR filter we cannot reach full equalization and a residual error stays? Inverting a channel can only be done when the channel is a minimum phase system (trailing echoes only). A minimum phase system is characterized as having all zeros in the left half plane (in the s-plane; equivalently, for a sampled system in the z-plane, all zeros inside the unit circle). Inverting such a channel results in poles where every zero exists, and a causal system that has any poles in the right half plane (outside the unit circle) is not stable. So a minimum phase system has a stable causal inverse, while a mixed phase or maximum phase system does not.
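The FIR-vs-IIR residual can be seen directly for this channel: truncating the IIR inverse $1/H(z) = \sum_k (-0.5)^k z^{-k}$ to $N$ taps and convolving with $h$ leaves a single residual tap of size $2^{-N}$ (a sketch of my own; this truncated-inverse equalizer is used for illustration rather than the exact least-squares solution):

```python
def convolve(a, b):
    """Plain polynomial/sequence convolution."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

h = [1.0, 0.5]                        # channel H(z) = 1 + 0.5 z^-1
N = 5
c = [(-0.5) ** k for k in range(N)]   # first N taps of the IIR inverse 1/H(z)

equalized = convolve(h, c)
print(equalized)
# [1.0, 0.0, 0.0, 0.0, 0.0, 0.03125]: perfect except one residual tap of 2^-N
```

So a finite-length FIR equalizer cannot cancel the channel exactly, but because this channel is minimum phase the residual shrinks geometrically as N grows.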
{ "domain": "dsp.stackexchange", "id": 8376, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "finite-impulse-response, infinite-impulse-response, homework, adaptive-filters, equalization", "url": null }
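The least-squares design described in the question above can be sketched numerically. This is only an illustration: the equalizer length N = 20 and the zero-delay target are my own choices, not values from the question. The combined response of channel and equalizer approximates a unit impulse, with a small residual error that the finite FIR length cannot remove.

```python
import numpy as np

h = np.array([1.0, 0.5])           # channel impulse response, H(z) = 1 + 0.5 z^-1
N = 20                             # equalizer length (illustrative choice)

# Convolution matrix H: column k holds h shifted down by k samples,
# so H @ c equals the full convolution np.convolve(h, c).
M = len(h) + N - 1
H = np.zeros((M, N))
for k in range(N):
    H[k:k + len(h), k] = h

# Target: delay-free equalizer, i.e. h_ideal is a unit impulse at lag 0
h_ideal = np.zeros(M)
h_ideal[0] = 1.0

# Least-squares solution of the normal equations
c_opt = np.linalg.solve(H.T @ H, H.T @ h_ideal)

# Combined response channel * equalizer; ideally a unit impulse
combined = np.convolve(h, c_opt)
print(np.round(combined[:5], 4))
```

The exact IIR inverse would have taps $(-0.5)^k$; the LS FIR solution truncates this geometric tail, which is why the residual shrinks rapidly as N grows.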
electromagnetism, optics, electromagnetic-radiation, electric-fields, laser Title: What happens to the $E$ and $B$ fields at the edge of a laser beam? In an ideal plane wave, $E$ and $B$ fields run off to infinity in both directions along straight paths. I've always assumed the center of a laser beam looks like an ideal plane wave, with $E$ and $B$ fields oscillating as one would expect from the classical picture of EM waves. But then what happens at the edge of the beam? The "light" stops, but the field lines aren't allowed to—they need to either terminate on charges or form loops. They can't go on forever in a nice uniform way, because then the beam would be infinitely wide. So what do they do?

> I've always assumed the center of a laser beam looks like an ideal plane wave

This is a good model at the center of the beam.
{ "domain": "physics.stackexchange", "id": 51448, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, optics, electromagnetic-radiation, electric-fields, laser", "url": null }
classical-mechanics, soft-question Title: Physics of a fixed wheel moving on a flat surface held at distance from a center point Ok, so my boss is trying to make a car turntable. In essence, he has two boards that sit atop a rotating ring. He wants to put two wheels at each end of each board (8 wheels total). He thinks that you can angle the wheels properly (2d) such that they can be flat (axis is parallel to the floor) and the whole contraption moves smoothly. Take a look at this picture (Edit: copied below - Mark)
{ "domain": "physics.stackexchange", "id": 155, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, soft-question", "url": null }
context-free Title: Prove or disprove if L is CFL? Given $L=\{a^ib^jc^k | i\neq j \space and \space j=k\}$. Is this a CFL? How do I write a CFG for it, or prove it is not a CFL with a pumping lemma? Thanks. Suppose that $L$ were context-free. According to Ogden's lemma, there is a constant $p$ such that each word in $L$ with at least $p$ marked positions satisfies the constraints of the lemma. Consider the word $s = \underline{a^p}b^{p+p!}c^{p+p!}$, in which the underlined part is marked. According to Ogden's lemma, there is a decomposition $s = uvwxy$ in which $vx$ contains at least one $a$, and $uv^iwx^iy \in L$ for all $i \geq 0$. We now consider several cases: $v$ or $x$ contains two different characters. Choosing $i = 2$, we obtain a word not belonging to $a^*b^*c^*$, and so not belonging to $L$. Otherwise $v$ and $x$ each consist of repetitions of a single character. Since $vx$ contains at least one $a$, that $a$ lies in $v$ or in $x$; as $v$ precedes $x$ and all $a$s come first in $s$, in either case $v \in a^*$. This leaves three subcases: $x \in b^+$. Choosing $i = 0$ removes at least one $b$ but no $c$s, so the numbers of $b$s and $c$s differ and the word does not belong to $L$. $x \in c^+$. Symmetrically, choosing $i = 0$ yields a word with fewer $c$s than $b$s, which does not belong to $L$. $vx \in a^+$, say $vx = a^q$ with $1 \leq q \leq p$ (the whole $a$-block has only $p$ letters). Let $i = p!/q+1$; note that $q$ divides $p!$. Then $uv^iwx^iy = a^{p+p!}b^{p+p!}c^{p+p!} \notin L$.
{ "domain": "cs.stackexchange", "id": 12419, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "context-free", "url": null }
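The arithmetic behind the pumping count $i = p!/q + 1$ in the last case of the proof above can be sanity-checked directly: since $q \leq p$ divides $p!$, the pumped word always has exactly $p + p!$ letters $a$, matching the $b$- and $c$-counts. A small script (the concrete value of $p$ is an arbitrary choice):

```python
from math import factorial

p = 5
for q in range(1, p + 1):          # vx = a^q with 1 <= q <= p
    i = factorial(p) // q + 1      # the pumping count chosen in the proof
    a_count = p + (i - 1) * q      # number of a's in u v^i w x^i y
    # the pumped word is a^(p+p!) b^(p+p!) c^(p+p!): i = j, so it leaves L
    print(q, a_count, p + factorial(p))
```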
ros, 3d-object-recognition, roboearth Title: roboearth: record an object: can't receive point clouds Hi, Currently I'm working with roboearth, recording a 3D model of an object. I use the Kinect attached to the turtlebot. Instead of following the record_and_upload_object tutorial, I find my own way of starting the Kinect through turtlebot: roslaunch turtlebot_bringup minimal.launch roslaunch turtlebot_bringup kinect.launch Then I run rviz, set Point Cloud2's topic to "/camera/rgb/points", and Color Transformer to "RGB8", and I can see the colored scene. But the Color Transformer can't be set to "RGB8" if the topic is set to "/camera/depth/points", as said in the tutorial. Also there's no topic "/camera/depth_registered/points", even though I can be sure the depth registration is enabled: >rosparam get /openni_camera/depth_registration true In the roboearth stack, I modified several source code files as well: modified roboearth/object_scanning/re_object_recorder/src/mainwindow.cpp: original code: ros::param::get("/camera/driver/depth_registration", depth_registration_enabled); since the param "/camera/driver/depth_registration" doesn't exist in electric turtlebot, I changed it to : ros::param::get("/openni_camera/depth_registration", depth_registration_enabled); modified roboearth/object_scanning/ar_bounding_box/include/ar_kinect/ar_kinect.h: original code:
{ "domain": "robotics.stackexchange", "id": 8465, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, 3d-object-recognition, roboearth", "url": null }
angular-momentum, rotational-dynamics, moment-of-inertia, angular-velocity \\ & = m\vec r\cdot\left(\vec v\times \vec\omega \right) , \end{align} where I've used the cyclical property of the scalar triple product, because the cross product $\vec v\times \vec\omega$ is easy to calculate since $\vec v$ and $\vec\omega$ are orthogonal by construction. Putting in the definition $(4)$ for the velocity, we get a vector triple product, which is again easy to resolve: \begin{align} \vec\omega\cdot\vec L & = m\vec r\cdot\left((\vec \omega \times\vec r)\times \vec\omega \right) \\ & = m\vec r\cdot\left(\omega^2\vec r - (\vec\omega\cdot\vec r)\vec \omega \right) \\ & = m\left(\omega^2 r^2 - (\vec\omega\cdot\vec r)^2 \right) . \end{align} Here we're essentially done, because we know that $|\vec\omega\cdot\vec r|$ must be no larger than $r\omega$, so we get the desired $\vec\omega\cdot I\vec \omega = \vec\omega\cdot\vec L \geq 0$. Finally, if we have a cloud of particles or a continuous mass distribution, we simply sum or integrate over this inequality.
{ "domain": "physics.stackexchange", "id": 36405, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "angular-momentum, rotational-dynamics, moment-of-inertia, angular-velocity", "url": null }
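The final inequality in the derivation above, $\vec\omega\cdot\vec L = m(\omega^2 r^2 - (\vec\omega\cdot\vec r)^2) \geq 0$, is easy to spot-check numerically with random vectors (NumPy; the mass value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2.0
for _ in range(1000):
    r = rng.normal(size=3)                  # position of the particle
    w = rng.normal(size=3)                  # angular velocity vector
    v = np.cross(w, r)                      # v = w x r, as in definition (4)
    L = m * np.cross(r, v)                  # angular momentum L = m r x v
    lhs = np.dot(w, L)
    rhs = m * (np.dot(w, w) * np.dot(r, r) - np.dot(w, r) ** 2)
    assert np.isclose(lhs, rhs)             # matches the closed form
    assert lhs >= -1e-12                    # w . L >= 0 up to rounding
```

The Cauchy-Schwarz bound $|\vec\omega\cdot\vec r| \leq \omega r$ is exactly what makes the right-hand side nonnegative.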
python, algorithm, interview-questions, complexity Title: Sum to zero of all triples in a list - n^2 running time As part of an interview, I had to submit a code sample to solve the classical "triples sum to 0" problem. The reviewer said the algorithm needed to be in O(n²) time, and I was later informed that the code I wrote did not accomplish this. Can you guys help me identify where I went wrong? I've written comments explaining my thought process.

from collections import namedtuple

# Small struct to hold the values we need for computing unique sums
DoubleSum = namedtuple('DoubleSum', 'arg_sum int1 int2 int1_index int2_index')

def sum_zero(integers: [int]) -> [(int, int, int)]:
    """
    Given a list of integers, compute all triples that sum to 0
    :param integers: The integers to arrange
    :return: All triples that sum to 0, or [] if there are no sums
    """
    # Check for trivial issues
    if integers is None or len(integers) < 3:
        raise ValueError("Must have an input of at least 3 integers to check.")
    s_integers = sorted(integers)
    # Compute all unique possible sums of two integers
    # Should run in n^2
    sums = []
    for i in range(0, len(s_integers)):
        for j in range(i+1, len(s_integers)):
            int1 = s_integers[i]
            int2 = s_integers[j]
            sums.append(DoubleSum(int1+int2, int1, int2, i, j))
{ "domain": "codereview.stackexchange", "id": 18524, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, algorithm, interview-questions, complexity", "url": null }
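For reference, the standard O(n²) approach to 3SUM fixes one element and closes in with two pointers on the sorted remainder, avoiding the O(n²)-sized list of pair sums built above. This is a sketch, not necessarily what the reviewer had in mind:

```python
def three_sum_zero(nums):
    """Return all unique triples (a, b, c) with a + b + c == 0, in O(n^2) time."""
    s = sorted(nums)
    out = []
    for i in range(len(s) - 2):
        if i > 0 and s[i] == s[i - 1]:      # skip duplicate anchor values
            continue
        lo, hi = i + 1, len(s) - 1
        while lo < hi:
            total = s[i] + s[lo] + s[hi]
            if total < 0:
                lo += 1                     # sum too small: raise the low end
            elif total > 0:
                hi -= 1                     # sum too large: lower the high end
            else:
                out.append((s[i], s[lo], s[hi]))
                lo += 1
                hi -= 1
                while lo < hi and s[lo] == s[lo - 1]:
                    lo += 1                 # skip duplicate second elements
    return out
```

Sorting costs O(n log n); the outer loop runs n times with an O(n) two-pointer sweep each, giving O(n²) overall with O(1) extra space beyond the output.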
optics, geometric-optics Title: Relationship of refractive indices in three materials The above paragraph is trying to use the principle of least time to relate the refractive indices in three materials. How do we know that $v_2 = \frac{v_1}{v_3}$ and $v_3 = \frac{v_1}{v_2}$? (in the equation) with $v_1$, $v_2$ and $v_3$ be speed of light in air, water and glass respectively. What you have quoted is a simple manipulation of fractions: $$ n_{23} = \frac{v_2}{v_3} = \frac{1/v_3}{1/v_2} = \frac{v_1}{v_1} \cdot \frac{1/v_3}{1/v_2} = \frac{v_1/v_3}{v_1/v_2} = \frac{n_{13}}{n_{12}}. $$ The ratio of $v_2$ and $v_3$ is equal to the second-right-most fraction you see above. They are not necessarily equal to the numerator and the denominator respectively.
{ "domain": "physics.stackexchange", "id": 86258, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, geometric-optics", "url": null }
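The identity $n_{23} = n_{13}/n_{12}$ from the answer above can be checked with textbook light speeds for air, water, and glass (the numeric values are illustrative approximations, not from the question):

```python
v1, v2, v3 = 3.0e8, 2.25e8, 2.0e8   # approx. light speeds in air, water, glass (m/s)
n12 = v1 / v2                        # relative index, air -> water
n13 = v1 / v3                        # air -> glass
n23 = v2 / v3                        # water -> glass
print(n23, n13 / n12)                # the two values agree
```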
algorithms, shortest-path, searching, board-games, space-analysis } private static void bfs(Position current, int depth) { // Start from -2 to +2 range and start marking each location on the board for (int i = -2; i <= 2; i++) { for (int j = -2; j <= 2; j++) { Position next = new Position(current.x + i, current.y + j, depth); if (isValid(current, next)) { if (inRange(next.x, next.y)) { // chessboard.put(Arrays.toString(new int[] { next.x, next.y }), next); // Skip if next location is same as the location you came from in previous run if (current.equals(next)) continue; Position position = chessboard.get(Arrays.toString(new int[] { next.x, next.y })); if (position == null) { position = new Position(Integer.MAX_VALUE, Integer.MAX_VALUE, Integer.MAX_VALUE); } /* * Get the current position object at this location on chessboard. If this * location was reachable with a costlier depth, this iteration has given a * shorter way to reach */ if (position.depth > depth) { chessboard.put(Arrays.toString(new int[] { current.x + i, current.y + j }), new Position(current.x, current.y, depth)); // chessboard.get(current.x + i).set(current.y + j, new Position(current.x, // current.y, depth)); q.add(next); } } } } } } private static boolean isValid(Position current, Position next) { // Use Pythagoras theorem to ensure that a move makes a right-angled triangle // with sides of 1 and 2. 1-squared + 2-squared is 5. int deltaR = next.x - current.x; int deltaC = next.y - current.y; return 5 == deltaR * deltaR + deltaC * deltaC; }
{ "domain": "cs.stackexchange", "id": 13130, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, shortest-path, searching, board-games, space-analysis", "url": null }
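A compact Python version of the same idea, BFS over knight moves with a visited set, is easier to reason about than the mutable-board bookkeeping in the Java code above. This is a sketch with the 8x8 board size as an assumption; note the move set is generated by the same Pythagoras test used in `isValid`:

```python
from collections import deque

# All 8 knight moves: offsets (dr, dc) with dr^2 + dc^2 == 5
KNIGHT_MOVES = [(dr, dc) for dr in (-2, -1, 1, 2) for dc in (-2, -1, 1, 2)
                if dr * dr + dc * dc == 5]

def min_knight_moves(start, goal, n=8):
    """Fewest knight moves from start to goal on an n x n board, via BFS."""
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        for dr, dc in KNIGHT_MOVES:
            nr, nc = r + dr, c + dc
            if (nr, nc) == goal:
                return d + 1
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return -1   # unreachable (cannot happen on boards with n >= 5)
```

Marking squares as seen on enqueue (not dequeue) is what keeps the BFS O(n²) in time and space; the depth of the first visit is automatically minimal.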
ros-kinetic it isn't, and your second example shows either a non-functional node, or one that does not follow best practices.

Comment by kaike_wesley_reis on 2019-03-22: Do you have any link to see the best practices? And the node works... But what method do you recommend? Is the first one a good example of best practices?

Well, after learning more I understand:

1 - The code loops inside the `while not rospy.is_shutdown()` block or `rospy.spin()`. When I asked, I thought the loop covered all the code (starting from the imports all over again).

2 - It is not necessary.

3 - `while not rospy.is_shutdown()` gives more control if your node does complex tasks, and `rospy.spin()` just keeps the node alive in its thread (used for simpler ones, where the node only has simple tasks such as being a service or a subscriber).

`if __name__ == '__main__'` is good practice in Python generally, nothing related to ROS.

4 - Following this question, `rospy.spin()` is basically a `while not rospy.is_shutdown()` loop, so when I asked, my code was buggy mostly because I did something like:

while not rospy.is_shutdown(): .... rospy.spin()

So this is basically wrong (in my POV), because it is equivalent to:

while not rospy.is_shutdown(): .... while not rospy.is_shutdown():

Originally posted by kaike_wesley_reis with karma: 61 on 2019-04-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 32717, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-kinetic", "url": null }
php, object-oriented public function twistHandle() { echo 'You twist the gumball machine handle and a gumball is chosen.<br>'; } public function openDoor() { global $redGumballs; global $blueGumballs; global $whiteGumballs; global $greenGumballs; global $yellowGumballs; global $totalGumballs; $redSelector = $redGumballs; $blueSelector = $redSelector+$blueGumballs; $whiteSelector = $blueSelector+$whiteGumballs; $greenSelector = $whiteSelector+$greenGumballs; $yellowSelector = $greenSelector+$yellowGumballs; $gumballSelector = rand(1,$totalGumballs);
{ "domain": "codereview.stackexchange", "id": 7076, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, object-oriented", "url": null }
time-complexity, logic Title: Building a verifier for sentences involving addition over the natural numbers Consider the model $(\mathbb{N},+)$; that is, the natural numbers equipped with the addition relation, PLUS($x$,$y$,$z$), where PLUS($x$,$y$,$z$) is true iff $x + y = z$. Let Th$(\mathbb{N},+)$ be the theory of this model; that is, the set of true sentences in the language of the model. For example, the sentence $\forall x \exists y [x + x = y]$ is a true sentence in this model. Let's assume for simplicity that all of these sentences are in prenex normal form. I'm interested in coming up with either an (efficient) verifier for Th$(\mathbb{N},+)$ or a nondeterministic algorithm that decides Th$(\mathbb{N},+)$. In the former case, I'm wondering what a certificate could be. If all of the quantifiers in the input statement are existential, this isn't a problem: a certificate would just be an assignment of all the variables. However, things are made difficult when the input has a universal quantifier, since it isn't obvious how we can check that something holds for all members of an infinite set. $\left(\mathbb{N},+\right)$ is a model of Presburger arithmetic. Presburger arithmetic is a complete theory (see the answer here), so for any model $\mathcal M$ of Presburger arithmetic (let's denote this theory by $T$) and every first-order formula $\varphi$ in the language of $T$ it holds that: $\mathcal{M}\vDash \varphi \iff T \vdash \varphi$ To see why this follows from completeness, assume $\mathcal M\vDash \varphi$; then obviously $T\not\vdash \neg\varphi$, and thus by completeness $T\vdash\varphi$ (the $\Leftarrow$ implication is trivial).
{ "domain": "cs.stackexchange", "id": 8762, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "time-complexity, logic", "url": null }
ct.category-theory $$ That is, $obs$ takes the first two elements of a stream. Then, an F-coalgebra homomorphism would need to ensure that it preserves all the elements of the stream, whereas a weak homomorphism for $obs$ only needs to preserve the first two elements of the stream. In my research, this notion would be useful in order to show that one coalgebra is observationally consistent with another by showing that every finite linear observation function has a weak homomorphism from the first coalgebra to the second coalgebra. In other words, every finite linear observation on the first coalgebra can be reproduced on the second coalgebra. (What I mean by linear observation function feels mostly irrelevant, but for the sake of sharing... A linear observation function is more or less one that uses each state of the carrier set only once. I'm trying to model an oracle, and the user is not allowed to go back and pretend it never asked a question.) My questions are thus:
{ "domain": "cstheory.stackexchange", "id": 1238, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ct.category-theory", "url": null }
php, validation, pdo You should definitely not be printing raw thrown exception errors to the public. There is no reason to include a column in the result set if it is a column in your WHERE clause -- think about it, you are already guaranteeing what the value is so you don't need to return it. I recommend that you not retain personally identifying data in a session. Session hijacking is a thing. Ask yourself if you really need the email to kept in the session -- you probably don't, you can probably afford to use the user's id in the session and use that id to fetch the email if/when you actually need it. Your snippet might resemble this: switch ($_POST['action'] ?? '') { case 'login': login($pdo); break; default: redirectAndExit('Location: index.php'); }
{ "domain": "codereview.stackexchange", "id": 41063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, validation, pdo", "url": null }
# Sum over integer compositions Sorry if the question is trivial - are there closed form expressions or good approximations for the sum of a symmetric function taken over all integer compositions (into a given number of parts) of a number? More precisely, I'm interested in: $$S(n,k) = \sum_{a_1+ \cdots +a_k = n, \ \ a_i \geq 1} \phi_k(a_1,\dots,a_k)$$ where $\phi_k = \prod_i a_i^p$, e.g. for $p=-2$, but I'm curious even about $p=1$ I realize that I can bound this (using the AM-GM inequality) by replacing all terms by the most (un)balanced composition, but this seems quite weak as a bound. EDITED: partition → composition • You say "partitions" but your summation notation indicates "compositions". Which is it? Sep 19, 2013 at 14:09 • You are right, I need compositions. I'll change the question. Sep 19, 2013 at 14:27 Since the question is about compositions, there is a generating function approach when $\phi(a_1,\ldots,a_k)=\prod_i f(a_i)$ for some function $f$. Namely, $S(n,k)$ is the coefficient of $x^n$ in $\left(\sum_{i\ge 1} f(i)x^i\right)^k$. For example if $f(i)=i$ then $\sum_{i\ge 1} f(i)x^i=x/(1-x)^2$ so $S(n,k)$ is the coefficient of $x^n$ in $x^k/(1-x)^{2k}$, which is $$\binom{n+k-1}{2k-1}$$ if I got it right. Other small powers can be done the same way. More complicated $f$ are likely to be able to be handled asymptotically by standard analytic methods.
{ "domain": "mathoverflow.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9579122696813394, "lm_q1q2_score": 0.8234824509696382, "lm_q2_score": 0.8596637469145053, "openwebmath_perplexity": 289.5861729575874, "openwebmath_score": 0.9798166751861572, "tags": null, "url": "https://mathoverflow.net/questions/142588/sum-over-integer-compositions/159509" }
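For $p = 1$ the closed form $\binom{n+k-1}{2k-1}$ in the answer above is easy to verify by brute-force enumeration of compositions (small $n$, $k$ only; the ranges below are an arbitrary test choice):

```python
from itertools import product
from math import comb, prod

def S(n, k, p=1):
    """Sum of prod(a_i^p) over compositions a_1 + ... + a_k = n, a_i >= 1."""
    # each part is at least 1 and at most n - k + 1
    return sum(prod(a ** p for a in c)
               for c in product(range(1, n - k + 2), repeat=k)
               if sum(c) == n)

# check against the binomial closed form for p = 1
for n in range(2, 8):
    for k in range(1, n + 1):
        assert S(n, k) == comb(n + k - 1, 2 * k - 1)
print("closed form verified for n < 8")
```

For instance, $S(5,2)$ sums the products $1\cdot4, 2\cdot3, 3\cdot2, 4\cdot1$, giving $20 = \binom{6}{3}$.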
asymptotics, big-o-notation Title: If we have f(n) ∈ O(h(n)) and g(n) ∈ Ω(h(n)), does that mean that f(n) + g(n) ∈ Θ(h(n))? It is quite easy to prove that f(n) + g(n) ∈ Ω(h(n)), but I am having trouble with proving/disproving that f(n) + g(n) ∈ O(h(n)). Someone suggested that this question answers mine, which it doesn't. As I've written above, proving that f(n) + g(n) ∈ Ω(h(n)) is easy. I am having trouble disproving that f(n) + g(n) ∈ O(h(n)). Thanks for any help. "We know nothing about the upper bound" is a good intuition but it is not a formal proof that you can't hope to show $f(n) + g(n) \in O(h(n))$ if your only assumptions are $f(n) \in O(h(n))$ and $g(n) \in \Omega(h(n))$. Fortunately, a counterexample is easily obtained by considering, e.g., $f(n)=1$, $g(n)=n^2$, and $h(n)=n$.
{ "domain": "cs.stackexchange", "id": 21023, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "asymptotics, big-o-notation", "url": null }
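The counterexample above can be made concrete: with f(n) = 1, g(n) = n², h(n) = n, the ratio (f(n) + g(n))/h(n) grows without bound, so no constant c can witness f(n) + g(n) ≤ c·h(n), ruling out f(n) + g(n) ∈ O(h(n)):

```python
def f(n): return 1
def g(n): return n * n
def h(n): return n

# (f + g)/h is unbounded as n grows, so no constant c bounds it
ratios = [(f(n) + g(n)) / h(n) for n in (10, 100, 1000, 10**6)]
print(ratios)
```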
ros, ros-melodic, service
> In case I'm going to use an action server, I am going to implement a subscriber inside the server, because that's what I need. Is it right?

Yes, you'd add your subscriber to the action server.

> And is it possible to use it with Python?

actionlib also supports Python.

> I saw kobuki auto docking, but I didn't find an action server, only the client. So I don't know how they implement the server. If you know where it is, could you please provide me the link.

See kobuki_auto_docking/src/auto_docking_ros.cpp.

Originally posted by gvdhoorn with karma: 86574 on 2019-12-08

This answer was ACCEPTED on the original site

Post score: 2

Original comments

Comment by Yehor on 2019-12-08: Thank you! But can I clarify something? In case I'm going to use an action server, I am going to implement a subscriber inside the server, because that's what I need. Is it right? And is it possible to use it with Python? I saw kobuki auto docking, but I didn't find an action server, only the client. So I don't know how they implement the server. If you know where it is, could you please provide me the link. Because I will also need to set up a dock station, I am talking about the encoding of the signals which I have to send from IR. Thank you
{ "domain": "robotics.stackexchange", "id": 34104, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-melodic, service", "url": null }
waves, scattering, string, boundary-conditions, continuum-mechanics Title: For two strings tied together: Why does a sinusoidal incident wave give rise to sinusoidal reflected and transmitted waves? I am reading Griffiths' Intro to Electrodynamics (Section 9.1.3 Boundary Conditions: Reflection and Transmission) where he analyzes the case of an incident wave sent down a string that is tied to another string. The first string has uniform mass per unit length $\mu_1$ and the second has uniform mass per unit length $\mu_2$. He says the incident wave is a sinusoidal oscillation that extends (in principle) all the way back to $z=-\infty$ (defining $z$ the axis along the length of the string) and that the same goes for the reflected wave and transmitted wave (except $z=+\infty$ for the transmitted wave). I'm not entirely sure what's confusing me, but I'll start with this question: What guarantees that the transmitted and reflected waves are sinusoidal? Keep in mind that the boundary condition (e.g. the reflector) is merely an idealistic model. If the reflector is non-linear, then the boundary condition is also non-linear and may lead to harmonics (whose characteristics depend upon the true characteristics of the reflector, the amplitude of the incident wave, etc), leading to a reflection that isn't necessarily purely sinusoidal. As the string permits propagation in both directions (towards and away from the wave source), the boundary condition (e.g. the reflector) represented by the (implied idealistic) model constrains the reflected/incident/transmitted sinusoidal waves. See also: Is a reflected wave on a string of the same form as of the incident.
{ "domain": "physics.stackexchange", "id": 89828, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "waves, scattering, string, boundary-conditions, continuum-mechanics", "url": null }
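A compact way to see why the reflected and transmitted waves are forced to be sinusoidal with the same frequency is the continuity of the string at the knot. The LaTeX sketch below outlines the standard argument; the complex amplitude symbols $\tilde A_I, \tilde A_R, \tilde A_T$ are my own notation, and this follows Griffiths' treatment only in outline:

```latex
% Displacement must be continuous at the knot z = 0 for every time t:
%   f_I(0, t) + f_R(0, t) = f_T(0, t)
\[
  \tilde A_I e^{-i\omega_I t} + \tilde A_R e^{-i\omega_R t}
  = \tilde A_T e^{-i\omega_T t}
  \qquad \text{for all } t .
\]
% Complex exponentials with distinct frequencies are linearly
% independent, so this can hold for all t only if
\[
  \omega_R = \omega_T = \omega_I .
\]
% Hence a sinusoidal incident wave forces sinusoidal reflected and
% transmitted waves of the same frequency; only the amplitudes and
% phases can differ, and those are fixed by the two boundary
% conditions (continuity of displacement and of slope) at z = 0.
```

This is where the idealization enters: if the junction were nonlinear, the boundary condition would mix frequencies and generate harmonics, exactly as the answer notes.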
___________________________________________________________________________________ Comment Essentially an infinite compact subset of Bing’s Example G is the union of finitely many sets from finitely many $\mathcal{K}_p$. In some cases, when working with compact subsets of Example G, it is sufficient to work with sets from $\mathcal{K}_p$ for one arbitrary $p \in P$. See the next post for an example. ___________________________________________________________________________________ Reference 1. Boone, J. R., Some characterizations of paracompactness in k-spaces, Fund. Math., 72, 145-155, 1971. ____________________________________________________________________ $\copyright \ 2014 \text{ by Dan Ma}$
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9865717448632122, "lm_q1q2_score": 0.8042853394891827, "lm_q2_score": 0.815232489352, "openwebmath_perplexity": 48.31693883029408, "openwebmath_score": 0.9730486869812012, "tags": null, "url": "https://dantopology.wordpress.com/2014/01/28/compact-subspaces-of-bings-example-g/" }
immune-system, digestion In CD, deamidation of gluten by tissue transglutaminase (tTG) in the small-bowel lamina propria promotes presentation of gluten peptides (gliadin in wheat, secalin in rye and hordein in barley) by HLA-DQ2 or HLA-DQ8 dendritic cells to pathogenic local CD4+ T cells...The most widely used serological test is anti-tTG IgA (Medscape) 3) Can barley affect the gut differently than wheat? Possibly. Nutrition and Celiac Disease (Nutrients, 2014): There is very limited data looking at the effect of barley hordein or rye secalin on CD outcomes in the published literature (e.g., [66,67]), but evidence exists that these prolamins induce effects different to wheat gluten, at least at an immunologic level. 4) Do positive DGP and TTG test confirm celiac disease? The specificity of IgG deaminated gliadin peptide (DGP) is 98% and of IgA tissue transglutaminase (TTG) 95% (American Family Physician, 2014), so when both tests are positive, celiac disease is very likely. The final diagnosis is by histological examination of a tissue sample obtained by duodenal biopsy. 5) Is it possible that, in celiac disease, symptoms disappear after removing only beer, but not wheat and rye from the diet? It could be possible to have celiac disease confirmed by blood tests without any symptoms despite consuming wheat, barley and rye. Beer could trigger symptoms by irritating the bowel, not by gluten, but by alcohol or other substances, like in people with irritable bowel syndrome (American Journal of Gastroenterology, 2013).
{ "domain": "biology.stackexchange", "id": 10293, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "immune-system, digestion", "url": null }