quantum-field-theory, operators, s-matrix-theory, wick-theorem Now we are ready to derive the expression for the correlations in the interaction picture. $$\left<\phi_{1}(x_{1})\dots\phi_{n}(x_{n})\right>=\left<\Omega\right|\text{T}\,\hat{\phi}_{1}(x_{1})\dots\hat{\phi}_{n}(x_{n})\left|\Omega\right>=$$ $$=\frac{1}{e^{-2iE_{\Omega}T}\cdot\left|\left<0|\Omega\right>\right|^{2}}\cdot\left<0\right|\hat{U}(T,t_{0})\,\text{T}\left\{ \hat{U}(t_{0},x_{1}^{0})\,\hat{\phi}_{I\,1}(x_{1})\,\hat{U}(x_{1}^{0},t_{0})\dots\right\} \,\hat{U}(t_{0},-T)\left|0\right>.$$ We can glue together the evolution operators between interaction-picture field operators (inside the chronological ordering symbol) by using the composition law:
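The composition law invoked here is the standard group property of the evolution operator (stated as a reminder; it follows directly from the definition of $\hat U$):

```latex
\hat{U}(t_3, t_2)\,\hat{U}(t_2, t_1) = \hat{U}(t_3, t_1), \qquad t_3 \ge t_2 \ge t_1,
```

so each factor $\hat{U}(x_{i}^{0},t_{0})\,\hat{U}(t_{0},x_{j}^{0})$ between neighbouring field insertions merges into a single evolution operator, and the whole product collapses inside the time ordering.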
{ "domain": "physics.stackexchange", "id": 25053, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, operators, s-matrix-theory, wick-theorem", "url": null }
Where did I make mistakes? • The random element here is the confidence interval. The sample mean is a random variable and each instance of it generates a random interval. 95% of these random intervals will contain the true mean (in the frequentist sense - generate a large number of samples and their intervals etc). The true mean is not going anywhere but, potentially, 5% of the confidence intervals will not capture it. To say "95% probability that the population mean lies between a and b" begs the question of why that particular a and b is so special. – Paul Sep 6 at 9:05 • As Paul said above, you are using a pair of estimators as an interval estimator, to cover a deterministic but unknown parameter of interest. The estimators are random variables before they realize into $a, b$. The realizations $a, b$ are no longer random, and we usually call them the estimates. E.g., for a normal random variable $Z$ with mean $0$, we know that $\Pr\{Z > 0\} = 1/2$. Suppose you observe one realization of $Z$ as $0.5$. Then you will not say $\Pr\{0.5 > 0\} = 1/2$. – BGM Sep 6 at 10:47 The two comments on this question are good. I'll try to flesh them out a bit more into an answer. You're right that the interpretation of 95% confidence is as follows: if you collected many samples, and from each one generated a different confidence interval, then 95% of the intervals generated would capture the true mean $$\mu$$ inside them.
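The coverage interpretation is easy to see in a simulation. A minimal sketch (known-variance z-intervals for a normal population; all parameter values here are arbitrary examples):

```python
import random

def coverage(n_trials=2000, n=50, mu=10.0, sigma=2.0, seed=1):
    # Fraction of 95% z-intervals (sigma known) that capture the true mean.
    # Each sample yields a different random interval; roughly 95% of them
    # should contain mu, which is fixed throughout.
    random.seed(seed)
    hits = 0
    for _ in range(n_trials):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        xbar = sum(sample) / n
        half = 1.96 * sigma / n**0.5   # 95% half-width for the sample mean
        if xbar - half <= mu <= xbar + half:
            hits += 1
    return hits / n_trials

print(coverage())   # close to 0.95
```

Any single realized interval $(a, b)$ either contains $\mu$ or it does not; the 95% refers to the procedure across repetitions, not to one realized pair of numbers.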
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9706877658567787, "lm_q1q2_score": 0.8231119251689031, "lm_q2_score": 0.8479677545357568, "openwebmath_perplexity": 205.1707728045544, "openwebmath_score": 0.8392453789710999, "tags": null, "url": "https://math.stackexchange.com/questions/3345893/confidence-intervals-doubts-on-interpretations" }
sets Title: Determining whether a relation is the "union" of two other relations Given relations $P$ and $Q$ on $S$, what are the most efficient algorithms to find whether the relations satisfy the constraints $P \cup Q = S \times S$ and $P \cap Q = \emptyset$? If it helps, for my application the elements of $S$ are strings. Apologies if my question is not up to standard; this is my first question on Stack Exchange and it's been several years since I've done any formal math. The following should suffice: for each (x,y) in P: if (x,y) in Q, print "nope" and quit for each (x,y) in Q: if (x,y) in P, print "nope" and quit count the number of pairs in P and Q. if this sums to |S|^2, output "yup", otherwise output "nope". You could use any method. For instance, you could compute $P \cup Q$ directly, then compute $P \cap Q$ directly (say, using mergesort-like methods), and check that each has the desired value. All of these will have essentially the same asymptotic running time.
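The counting argument above can be sketched directly with hash sets (assuming the relations fit in memory as sets of pairs): disjointness plus matching cardinality is enough, since if $P$ and $Q$ are disjoint subsets of $S \times S$ with $|P| + |Q| = |S|^2$, their union must be all of $S \times S$.

```python
def is_partition(P, Q, S):
    # Check P ∩ Q = ∅ and P ∪ Q = S×S for relations given as sets of pairs.
    full = {(x, y) for x in S for y in S}
    return (not (P & Q)                       # disjoint
            and P <= full and Q <= full       # both are relations on S
            and len(P) + len(Q) == len(full)) # together they cover S×S

S = {"a", "b"}
P = {("a", "a"), ("b", "b")}
Q = {("a", "b"), ("b", "a")}
print(is_partition(P, Q, S))                  # True
print(is_partition(P, {("a", "a")} | Q, S))   # False: ("a","a") is in both
```

With hashing this runs in expected time linear in $|P| + |Q| + |S|^2$, matching the answer's claim that the approaches have essentially the same asymptotic cost.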
{ "domain": "cs.stackexchange", "id": 8464, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sets", "url": null }
conventions, units, notation, dimensional-analysis Title: Square bracket notation for dimensions and units: usage and conventions One of the most useful tools in dimensional analysis is the use of square brackets around some physical quantity $q$ to denote its dimension as $$[q].$$ However, the precise meaning of this symbol varies from source to source; there are a few possible interpretations and few strict guidelines. What conventions are there, who uses them, and when am I obliged to follow them? I had an extensive look around, and I turned up four conventions. This included a short poll of google, other questions on this and other sites, and multiple standards documents. (I make no claim of exhaustiveness or infallibility, by the way.) Using $[q]$ to denote commensurability as an equivalence relation. That is, if $q$ and $p$ have the same physical dimension $Q$, one might write $$[q]=[p]=[Q],$$ but no bracketed quantity is ever shown equal to an unbracketed symbol. Thus, if $v$ is a speed one might write $[v]=[L]/[T]$ or $[v]=[L/T]$ or $[v]=[L\,T^{-1}]$ or some equivalent construct. You can see $L$ and $T$ as denoting the dimension or just "some length" and "some time". To see how you would work without evaluating braces, here is a proof that the fine structure constant is dimensionless: $$ [\alpha]=\left[\frac{e^2/4\pi\epsilon_0}{\hbar c}\right]=\frac{[F\,r^2]}{[E/\omega][r/t]} =\frac{[F r][\omega t]}{[E]}=\frac{[E]}{[E]}[1]=[1] ,$$ so $\alpha$ and $1$ are commensurable. Some examples are this, this, this, or this.
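The "commensurability as an equivalence relation" reading can be made concrete by modelling a dimension as a map from base symbols to exponents; two quantities are commensurable exactly when their exponent maps are equal. A toy sketch (this representation is my own illustration, not any standard's):

```python
def dmul(a, b):
    # multiply two dimensions by adding exponents, dropping zero entries
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def dinv(a):
    # invert a dimension by negating every exponent
    return {k: -v for k, v in a.items()}

L, T = {"L": 1}, {"T": 1}
v = dmul(L, dinv(T))            # [v] = [L][T]^-1
print(v == {"L": 1, "T": -1})   # True
print(dmul(v, T) == L)          # True: [v][T] = [L]
```

A dimensionless quantity comes out as the empty map, mirroring the $[\alpha] = [1]$ conclusion of the fine-structure-constant computation.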
{ "domain": "physics.stackexchange", "id": 9499, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "conventions, units, notation, dimensional-analysis", "url": null }
• This is very helpful and thorough, thank you for this! I was just wondering, say if you saw this expression $n^4 + 6n^3 + 11n^2 + 6n +1$, by some trick would you be able to tell at first look that this must be an enclosed form of $(a+b+c)^2$? – Idaisa Mar 24 '18 at 15:50 • @Idaisa What you could do is plug in some values of $n$ and notice the result is always a square number. Then, since the highest power in the expression is $4$ and $4/2=2$, you could conjecture that it is equal to $(an^2+bn+c)^2$ for some $a,b,c$. Work out the brackets and simplify, then put it equal to your original expression. You'd get equations like $a^2=1$, $2ab=6$ and so on. Finally, solve for $a$ ,$b$ and $c$. – Mastrem Mar 24 '18 at 15:59 \begin{align}(\color{red}{n^2}+\color{blue}{3n+1})^2&=(\color{red}{n^2})^2+2\color{red}{n^2}(\color{blue}{3n+1})+(\color{blue}{3n+1})^2\\&=n^4+2(3n^3+n^2)+(3n)^2+2(3n)(1)+1^2\\&=n^4+6n^3+2n^2+9n^2+6n+1\\&=n^4+6n^3+11n^2+6n+1\end{align} • Oh, this makes a lot of sense! Thank you!! – Idaisa Mar 24 '18 at 15:37 • You are welcome :) – TheSimpliFire Mar 24 '18 at 15:38
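Both the expansion and Mastrem's detection trick are easy to check mechanically; a quick sketch:

```python
import math

def p(n):
    return n**4 + 6*n**3 + 11*n**2 + 6*n + 1

# the expansion above: p(n) == (n^2 + 3n + 1)^2 for every n tested
print(all(p(n) == (n*n + 3*n + 1)**2 for n in range(-50, 51)))   # True

# Mastrem's observation: plugging in values always yields a perfect square
print(all(math.isqrt(p(n))**2 == p(n) for n in range(0, 20)))    # True
```

The first check confirms the algebra; the second is the empirical clue that motivates conjecturing the form $(an^2 + bn + c)^2$ in the first place.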
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9752018354801187, "lm_q1q2_score": 0.8754340912266927, "lm_q2_score": 0.8976952866333484, "openwebmath_perplexity": 487.5758791971208, "openwebmath_score": 0.9685267210006714, "tags": null, "url": "https://math.stackexchange.com/questions/2704997/how-does-n4-6n3-11n2-6n-1-n2-3n-12" }
thermodynamics, cosmology, entropy, reversibility If the actual process was a reversible isothermal expansion, where heat transfer can occur between the system and surroundings (in contrast to an adiabatic expansion where heat transfer cannot occur), then the change in entropy of the surroundings will be $\Delta S_{surr}= -R\ln\frac{V_2}{V_1}$ and therefore $\Delta S_{univ}=0$. Hope this helps.
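A quick numerical illustration of the cancellation (per mole of ideal gas; the volume ratio here is an arbitrary example value):

```python
import math

R = 8.314          # J/(mol·K), gas constant
V1, V2 = 1.0, 2.0  # example volumes for the isothermal expansion

dS_sys = R * math.log(V2 / V1)    # entropy change of the gas
dS_surr = -R * math.log(V2 / V1)  # entropy change of the surroundings
print(dS_sys + dS_surr)           # 0.0 — ΔS_univ vanishes on the reversible path
```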
{ "domain": "physics.stackexchange", "id": 94673, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, cosmology, entropy, reversibility", "url": null }
c# public DbConnection CreateConnection() { var connection = factory.CreateConnection(); connection.ConnectionString = _connectionString; return connection; } public DbCommand CreateCommand(DbConnection connection) { var command = factory.CreateCommand(); command.Connection = connection; command.CommandType = CommandType.Text; command.CommandText = sql; return command; } public DbCommand CreateCommand(DbConnection connection, SqlParameter[] parameters) { var command = CreateCommand(connection); AddAnyParameters(command, parameters); return command; } private void AddAnyParameters(DbCommand command, SqlParameter[] parameters) { if (parameters == null) return; foreach (var parameter in parameters.Where(p => p != null)) { command.Parameters.Add(parameter); } } } And its usage from your example would look something like: private static void UpdateScanSchedule(DBData data, ScanSchedule schedule) { data.Query = "UPDATE ScanSchedules " + "SET GroupID = @GroupID, ScheduleType =@ScheduleType, " + "RunDays =@RunDays ,RunDate =@RunDate, Description = @Description, " + "RunTime = @RunTime, Ranges = @Ranges , Devices = @Devices, Excluded = @Excluded " + "WHERE ScanScheduleID = @ScanScheduleID";
{ "domain": "codereview.stackexchange", "id": 8074, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#", "url": null }
statistical-mechanics, quantum-information, density-operator Title: Two definitions of the density matrix? There seem to be two different definitions of density matrices in Physics. In Quantum Information we define the density matrix associated with a wave function $ | \psi \rangle$ as $| \psi \rangle \langle \psi | $. These represent pure states. Mixed states can be defined by taking convex combinations of these. In any case the density matrix depends on the state of the system and can evolve in time. In Quantum statistical mechanics the density matrix is defined as $e^{-\beta H}$, where $\beta$ is the inverse temperature. By this definition the density matrix depends on the Hamiltonian ($H$) and not on what state the system is in. I have two questions,
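The two definitions meet once the thermal operator is normalized: $\rho = e^{-\beta H}/Z$ is exactly a convex combination of energy-eigenstate projectors with Boltzmann weights, i.e. a mixed state in the quantum-information sense. A two-level sketch (diagonal $H$, so no matrix exponential is needed; the energies are arbitrary example values):

```python
import numpy as np

beta = 1.0
E = np.array([0.0, 1.0])              # eigenenergies of a toy two-level H
w = np.exp(-beta * E)
w /= w.sum()                          # Boltzmann weights: Z normalizes e^{-beta H}
rho_thermal = np.diag(w)              # e^{-beta H}/Z in the energy eigenbasis

# the same state built as a convex combination of pure-state projectors |n><n|
basis = np.eye(2)
rho_mixed = sum(wi * np.outer(v, v) for wi, v in zip(w, basis))
print(np.allclose(rho_thermal, rho_mixed))   # True
```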
{ "domain": "physics.stackexchange", "id": 30302, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "statistical-mechanics, quantum-information, density-operator", "url": null }
# A Rouché's Theorem application We have for example the problem: Show that all the roots of $f(z) = z^5 + 3z + 1$ are in the disk $|z| < 2.$ The solution to this is not so hard, as we can let $g(z)=-z^5$; this has $5$ roots in the disk $|z|<2$. Then for all $z\in \Gamma$, where $\Gamma$ is the circle $|z|=2$, $$|f(z)+g(z)|\le 3|z|+1=7<|g(z)|=32.$$ Clearly by Rouché's Theorem, $f$ has $5$ roots inside $|z|=2.$ Then comes the problem I am stuck on, unlike the one above, which I solved. Problem: Prove that $e^z+z^3$ has no root in $\{z:|z|<3/4\}$ and has three roots in $\{z:|z|<2\}$. My attempt: Following the method that I used above, if we let $f(z)=-z^3,$ then we have $g(z)=e^z+z^3$, which gives us on $|z|=2:$ $$|g(z)+f(z)|=|e^z|= e^x\le e^{2}<|f(z)|=2^3.$$ Then by Rouché's theorem, $f$ and $g$ have the same number of roots, and since $f(z)$ has $3$ roots (counting multiplicity), $g(z)$ must have $3$ roots inside the disk $|z|=2.$ Where I am stuck: I am stuck trying to show the case for when there are no roots inside the disk $|z|<3/4.$ I am assuming we work with $e^z$, since $e^z$ has no roots anywhere as it's never $0.$ I would appreciate the help.
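Both root counts can be sanity-checked numerically with the argument principle, $\frac{1}{2\pi i}\oint_{|z|=r} \frac{g'(z)}{g(z)}\,dz$, which counts the zeros inside the contour. A rough sketch (plain quadrature on circles of radius $2$ and $3/4$; a numerical check, not a proof):

```python
import numpy as np

def count_zeros(f, fprime, radius, n=20000):
    # argument principle: (1 / 2*pi*i) * contour integral of f'/f over |z| = radius
    t = np.linspace(0, 2*np.pi, n, endpoint=False)
    z = radius * np.exp(1j * t)
    dz = 1j * z * (2*np.pi / n)        # z'(t) dt for the circular parametrization
    total = np.sum(fprime(z) / f(z) * dz)
    return round((total / (2j*np.pi)).real)

g = lambda z: np.exp(z) + z**3
gp = lambda z: np.exp(z) + 3*z**2
print(count_zeros(g, gp, 2.0))    # 3
print(count_zeros(g, gp, 0.75))   # 0
```

The real zero of $e^z + z^3$ sits just outside $|z| = 3/4$ (between $-0.8$ and $-0.75$), which is why the inner count is zero but the bound in the exercise is tight.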
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9873750536904563, "lm_q1q2_score": 0.8249356023049983, "lm_q2_score": 0.8354835371034368, "openwebmath_perplexity": 101.9792958049392, "openwebmath_score": 0.8793485164642334, "tags": null, "url": "https://math.stackexchange.com/questions/2749994/a-rouch%C3%A9s-theorem-application" }
python, beginner, python-2.x, tic-tac-toe print '\n' "No numbers allowed" '\n' continue Your message isn't quite accurate. It isn't just numbers that aren't allowed. The user might type Johnny$. That contains no numbers, but the message implies that numbers are the only problem. I would suggest Only alphabetic characters are allowed. Your two loops are identical. I would suggest putting that code into a function and calling the function twice to get the two names. ... import time You shouldn't put your imports just before their first uses. You might think, "I want to add a pause before asking for the second name". Therefore, you put time.sleep(1) up there. You get a NameError! Of course it's because you didn't import time early enough. All imports should be at the beginning of the file. PEP 8, the Python style guide, has some useful rules about imports. def grid(): ... Function definitions should go near the top of the file. Usually, code is written like this: Imports Constants Functions and Classes Module level code Often, the module level code is written in an if __name__ == '__main__': block.1 global p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14 With that many variables, you should be using a list instead. If you have a list of strings, you can concatenate them all with "".join().2 There is no reason to use global variables anyway. In fact, there is no reason to split up the string into multiple variables. You should use triple-quoted strings.3 while ... def user_input(j,h): ... def replay(): ...
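A sketch of the suggested "one function, called twice" refactor (the names are hypothetical, and it is shown in Python 3 syntax even though the question is Python 2, so `print`/`input` differ):

```python
def is_valid_name(name):
    # accept only non-empty, purely alphabetic names
    return bool(name) and name.isalpha()

def get_player_name(prompt, read=input):
    # 'read' is injectable so the loop can be exercised without a terminal
    while True:
        name = read(prompt)
        if is_valid_name(name):
            return name
        print("Only alphabetic characters are allowed")

# player1 = get_player_name("Player 1 name: ")
# player2 = get_player_name("Player 2 name: ")
```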
{ "domain": "codereview.stackexchange", "id": 19431, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, beginner, python-2.x, tic-tac-toe", "url": null }
water, solubility Title: Does CO2 dissolve in water? First of all not a homework question, but one day it suddenly popped into my head while opening a bottle of soda and accidentally leaving a glass out for a while. I get that $\ce{CO2}$ in water is not the same as carbonic acid. However, this also raises the question: if $\ce{CO2}$ can combine with water (it does so in acid rain in the atmosphere, I would guess at normal temperature and pressure), then shouldn't dissolved $\ce{CO2}$ just form carbonic acid and hence become unusable for aquatic plants? And also, if carbonic acid does form in the atmosphere (again, I would guess at normal temperature and pressure; feel free to correct me!), why does the fizzing happen in soda bottles when opened in the first place; shouldn't carbonic acid be stable at normal conditions? Please do note that I talk about $\ce{CO2}$ dissolving under normal temperature and pressure. I feel like I'm missing something super basic and obvious here and I just can't put a finger on it. Thanks in advance! I want to extend Maurice's comment: The amount of $\ce{CO2}$ dissolved in water is proportional to the outer pressure. At $\pu{20 °C}$, 1 liter of water dissolves about $\pu{1.7 g}$ of $\ce{CO2}$ at normal pressure (1 atm). If the pressure is twice as large, the amount of dissolved $\ce{CO2}$ is twice as much, $\pu{3.4 g}$. To talk about solubility of gases in liquids, we use Henry's law, which states that: The amount of dissolved gas in a liquid is proportional to its partial pressure above the liquid. Mathematically, $$S_\text{g} = k_\text{H}\,P_\text{g}^\circ$$ where $S_\text{g}$ is the solubility of the gas, $k_\text{H}$ is the Henry's law constant, which is different for different gases, and $P_\text{g}^\circ$ is the partial pressure of the gas.
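The proportionality is a one-liner; here the 1.7 g/L figure from the answer is used as the constant (that value comes from the text above, not from a tabulated $k_\text{H}$):

```python
K = 1.7  # g of CO2 per liter of water per atm, at 20 °C (figure from the text)

def dissolved_co2(pressure_atm):
    # Henry's law: dissolved amount scales linearly with partial pressure
    return K * pressure_atm

print(dissolved_co2(1.0))  # 1.7
print(dissolved_co2(2.0))  # 3.4
```

Opening the bottle drops the partial pressure of $\ce{CO2}$ above the liquid, so the equilibrium dissolved amount drops and the excess escapes as fizz.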
{ "domain": "chemistry.stackexchange", "id": 17672, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "water, solubility", "url": null }
context-free, pushdown-automata Title: Creating a PDA for the language L = {$a^{m}$ $b^{n}$ : m $\neq$ n} I didn't find a DPDA for the language L = {$a^{m}$ $b^{n}$ : m $\neq$ n}, so I guess an NPDA is the only option. NPDAs are not very intuitive to me. The only solution I found online is: I don't really understand how it works, and also I'm not allowed to use final states. The only way an input is accepted is by emptying the stack (how we do it at my university). Thanks in advance! First of all, the answer you have provided is a DPDA and not an NPDA. "Z" denotes that your stack is empty, so you start by reading input, i.e. "a"s, and keep pushing them onto your stack; as soon as a "b" arrives, you start popping one "a" for every "b" that arrives (b,a|ε). Now there are two possibilities: (i) m > n: the number of "b"s is less than the number of "a"s, so after all the "b"s are consumed only "ε" is left in the string, and we reach the final state, i.e. (ε,a|a). (ii) n > m: in this case all the "a"s will be popped, so only "Z" will remain on the stack, and this will be the final state, i.e. (b,Z|Z). Hope this helps.
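When debugging an automaton for this language, a plain reference checker is handy for comparing against a PDA simulation (this just counts symbols; it is not itself a PDA):

```python
def in_language(s):
    # L = { a^m b^n : m != n } — a run of a's, then a run of b's, lengths unequal
    m = 0
    while m < len(s) and s[m] == 'a':
        m += 1
    rest = s[m:]
    n = len(rest)
    return all(c == 'b' for c in rest) and m != n

print(in_language("aab"))   # True  (m=2, n=1)
print(in_language("aabb"))  # False (m=n=2)
print(in_language("ba"))    # False (not of the form a^m b^n)
```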
{ "domain": "cs.stackexchange", "id": 15414, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "context-free, pushdown-automata", "url": null }
performance, algorithm, c, vectorization, sse if ((jj + pxShift) < -2) { currPx = _mm_set1_ps(mTmp[(ii * numRows)]); } else if ((jj + pxShift) < -1) { // currPx = _mm_set_ps(mI[(ii * numRows)], mI[(ii * numRows)], mI[(ii * numRows)], mI[(ii * numRows) + 1]); currPx = _mm_set_ps(mTmp[(ii * numRows) + 1], mTmp[(ii * numRows)], mTmp[(ii * numRows)], mTmp[(ii * numRows)]); } else if ((jj + pxShift) < 0) { // currPx = _mm_set_ps(mI[(ii * numRows)], mI[(ii * numRows)], mI[(ii * numRows) + 1], mI[(ii * numRows) + 2]); currPx = _mm_set_ps(mTmp[(ii * numRows) + 2], mTmp[(ii * numRows) + 1], mTmp[(ii * numRows)], mTmp[(ii * numRows)]); } else { currPx = _mm_loadu_ps(&mTmp[(ii * numRows) + jj + pxShift]); } currSum = _mm_add_ps(currSum, _mm_mul_ps(kernelWeight, currPx)); } _mm_store_ps(tmpVal, currSum); // Unpack Data in Transpose for (kk = 0; kk < SSE_STRIDE; kk++) { mO[((jj + kk) * numCols) + ii] = tmpVal[kk]; } } }
{ "domain": "codereview.stackexchange", "id": 27915, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, algorithm, c, vectorization, sse", "url": null }
In more geometric problems, we often use curves or functions to provide natural constraints. For instance, we could investigate which isosceles triangle that circumscribes a unit circle has the smallest area, which you can explore for yourself at http://gvsu.edu/s/9b. Or similarly, for a region bounded by a parabola, we might seek the rectangle of largest area that fits beneath the curve, as shown at http://gvsu.edu/s/9c. The next activity is similar to the latter problem. ##### Activity 3.4.4 Consider the region in the $x$-$y$ plane that is bounded by the $x$-axis and the function $f(x) = 25-x^2\text{.}$ Construct a rectangle whose base lies on the $x$-axis and is centered at the origin, and whose sides extend vertically until they intersect the curve $y = 25-x^2\text{.}$ Which such rectangle has the maximum possible area? Which such rectangle has the greatest perimeter? Which has the greatest combined perimeter and area? (Challenge: answer the same questions in terms of positive parameters $a$ and $b$ for the function $f(x) = b-ax^2\text{.}$) ##### Activity 3.4.5 A trough is being constructed by bending a $4 \times 24$ (measured in feet) rectangular piece of sheet metal. Two symmetric folds 2 feet apart will be made parallel to the longest side of the rectangle so that the trough has cross-sections in the shape of a trapezoid, as pictured in Figure 3.4.4. At what angle should the folds be made to produce the trough of maximum volume?
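For the area question in Activity 3.4.4, calculus gives the maximizing half-width $x = 5/\sqrt{3}$ for $A(x) = 2x(25 - x^2)$; a crude grid search confirms it numerically:

```python
def area(x):
    # rectangle with base 2x centered at the origin, height 25 - x^2
    return 2 * x * (25 - x * x)

# brute-force search over 0 < x < 5; calculus gives x = 5/sqrt(3) ≈ 2.8868
best = max((k / 10000 for k in range(1, 50000)), key=area)
print(round(best, 3))        # 2.887, i.e. 5/sqrt(3)
print(round(area(best), 3))  # 96.225, i.e. 500*sqrt(3)/9
```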
# Subsection 3.4.2 Summary • While there is no single algorithm that works in every situation where optimization is used, in most of the problems we consider, the following steps are helpful: draw a picture and introduce variables; identify the quantity to be optimized and find relationships among the variables; determine a function of a single variable that models the quantity to be optimized; decide the domain on which to consider the function being optimized; use calculus to identify the absolute maximum and/or minimum of the quantity being optimized.
{ "domain": "gvsu.edu", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.985936372130286, "lm_q1q2_score": 0.8172809234776766, "lm_q2_score": 0.8289388104343892, "openwebmath_perplexity": 313.8154072666012, "openwebmath_score": 0.6953738927841187, "tags": null, "url": "http://faculty.gvsu.edu/boelkinm/Home/AC/sec-3-4-applied-opt.html" }
c++, matrix, computational-geometry, graphics, c++17 template <int row_count, int column_count, typename Type> class MatrixContainer <row_count, column_count, Type, false> { public: Type matrix[row_count][column_count]; MatrixContainer() { for (int i = 0; i < row_count; i++) for (int j = 0; j < column_count; j++) matrix[i][j] = Type(0); } Type *operator [] (int index) { return matrix[index]; } ~MatrixContainer() {} }; template <int rows, int cols, typename Type> class MatrixToVectorContainer { public: using MatCont = MatrixContainer<rows, cols, Type, ((rows * cols) > MAX_MATRIX_SIZE)>; MatCont matrix; MatrixToVectorContainer() : matrix() {} ~MatrixToVectorContainer() {} }; template <typename Type> struct MatrixToVectorContainer <1, 1, Type> { using MatCont = MatrixContainer<1, 1, Type, ((1 * 1) > MAX_MATRIX_SIZE)>; union { MatCont matrix; Type array[1]; /// for compatibility with opengl union { Type x; Type r; }; }; MatrixToVectorContainer() : matrix() {} ~MatrixToVectorContainer() {} }; template <typename Type> struct MatrixToVectorContainer <2, 1, Type> { using MatCont = MatrixContainer<2, 1, Type, ((2 * 1) > MAX_MATRIX_SIZE)>; union { struct { MatCont matrix; Type array[2]; /// for compatibility with opengl union { Type x; Type r; }; union { Type y; Type g; }; }; }; MatrixToVectorContainer() : matrix() {} ~MatrixToVectorContainer() {} };
{ "domain": "codereview.stackexchange", "id": 30704, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, matrix, computational-geometry, graphics, c++17", "url": null }
So all of this is well done. But now I am not sure what the problem even is. $$\displaystyle g^{-1} f^{-1}(x)$$ is meaningless. Does it mean $$\displaystyle g^{-1}(x) * f^{-1}(x)$$ or $$\displaystyle g^{-1}(f^{-1}(x))$$? And is $$\displaystyle (fg)^{-1}(x)$$ supposed to represent a composition or a product of functions? EDIT: I see Dr. Peterson has beat me to it, but it does look as though there are some notation differences between your text and what I am used to. Last edited: #### Dr.Peterson ##### Elite Member '...even though you said you were doing the right thing' - I certainly wasn't that confident by a long chalk! (Is that a British idiom?) Anyway, so I fell into the trap of getting the order wrong; I was trying to avoid that but must have reversed it in my head ..zzzzzz.. By the way, I have indeed reproduced the notation used in the book. One needs a lot of patience to not be put off by one's own cognitive deficiencies when learning maths - many thanks. Let's rephrase that: "What you said you were doing was the right thing", namely, "applied the inverse of g to the inverse of f". As to notation, this must be another of many areas where notation varies, though I hadn't seen this one. That's why I started with "I'm not sure whether your book uses slightly non-standard notation"; I've learned not to say someone's notation, or even grammar, is "wrong", because it may be what they were taught. And I've never heard of a long chalk. We're deprived over here. #### Simonsky ##### Junior Member Let's rephrase that: "What you said you were doing was the right thing", namely, "applied the inverse of g to the inverse of f".
{ "domain": "freemathhelp.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9777138190064203, "lm_q1q2_score": 0.8126200495962483, "lm_q2_score": 0.831143054132195, "openwebmath_perplexity": 709.2376829933124, "openwebmath_score": 0.8134908080101013, "tags": null, "url": "https://www.freemathhelp.com/forum/threads/confusion-over-inverse-of-a-function-involution.111481/" }
special-relativity, photon-emission After many days of back and forth, it is now clear that what the OP wants is something else entirely. We now consider the problems as covered by AP French. For the situation of a photon being completely absorbed by an atom, we have $$\tag{10} \begin{pmatrix}+Q_0\\+Q_0/c \end{pmatrix}+ \begin{pmatrix}+mc^2\\0 \end{pmatrix}\Rightarrow \begin{pmatrix}+mc^2+Q_0\\+Q_0/c \end{pmatrix} $$ That is, the exact full SR velocity of the recoil of the atom is $v=c\frac{Q_0}{mc^2+Q_0}$, and this formula is in the book (and seems to be in the OP's question too). However, it is important and interesting to consider the invariant rest energy of the resulting atom, which is $\sqrt{(mc^2+Q_0)^2-Q_0^2}=\sqrt{m^2c^4+2mc^2Q_0}=mc^2+E_\text{excitation}$. For a rough estimate, consider that the rest energy of the Hydrogen atom is $938.27208816\times10^6\,\mathrm{eV}$ whereas the maximum energy that a Hydrogen atom can absorb and yet still stay an atom, the binding energy, is the famous $13.6\,\mathrm{eV}$, and you can immediately tell that $Q_0\ll mc^2$ for the above formula to be applicable, i.e. the recoil velocity of the atom is necessarily non-relativistic. Similarly, we can consider the emission of a photon. Now the excited atom is stationary and transitions to the ground state. The smart thing to do is to take the invariant rest energy from earlier and deduce what the new photon energy is. Namely, $$\tag{11} \begin{pmatrix}+\sqrt{m^2c^4+2mc^2Q_0}\\0 \end{pmatrix}\Rightarrow
{ "domain": "physics.stackexchange", "id": 99794, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, photon-emission", "url": null }
A normal vector to a curve or surface at a point is a vector perpendicular to it there; a unit normal additionally has magnitude 1. In $n$ dimensions, such a normal can be thought of as one of the vectors perpendicular to the analogue of a "plane" through the origin in that number of dimensions. To normalize a vector, divide it by its magnitude: this transforms any vector into a vector of length 1 without changing its direction. Remember not to confuse normalizing a vector (making a unit vector in the same direction as the vector) with computing a unit normal (making a unit vector perpendicular to a curve or surface). For a straight line such as $x + y = 2$, a normal vector can be read off from the equation, and one can then solve a system of equations to get the answer. In MATLAB, n = norm(X,p) returns the p-norm of matrix X, where p is 1, 2, or Inf; if p = 1, then n is the maximum absolute column sum of the matrix. Arc length satisfies $(ds)^2 = (dx)^2 + (dy)^2$. Worked example: to find the unit tangent, unit normal, and binormal vectors at $t=1$ for a curve with $$\vec r\,'(t) = \langle 1, 3\cos t, -3\sin t\rangle, \qquad \|\vec r\,'(t)\| = \sqrt{1 + 9\cos^2 t + 9\sin^2 t} = \sqrt{10},$$ first calculate the magnitude of $\vec r\,'(t)$ (here constant, $\sqrt{10}$) and divide by it to obtain the unit tangent.
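A minimal sketch of normalization and of the $\sqrt{10}$ example (pure Python; the curve is assumed to be one whose derivative is $\langle 1, 3\cos t, -3\sin t\rangle$):

```python
import math

def magnitude(v):
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    # scale a vector to length 1 without changing its direction
    m = magnitude(v)
    return [c / m for c in v]

def r_prime(t):
    # derivative of the example curve: <1, 3 cos t, -3 sin t>
    return [1.0, 3 * math.cos(t), -3 * math.sin(t)]

print(magnitude(r_prime(1.0)))   # sqrt(10) ≈ 3.1623, the same for every t
T = normalize(r_prime(1.0))      # unit tangent at t = 1
print(magnitude(T))              # 1.0
```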
{ "domain": "ariellafiori.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.986151391819461, "lm_q1q2_score": 0.8039426562443741, "lm_q2_score": 0.8152324915965391, "openwebmath_perplexity": 417.6234340594931, "openwebmath_score": 0.8306130170822144, "tags": null, "url": "http://ariellafiori.it/unit-normal-vector-calculator.html" }
stabilizer-code Title: The commutativity of $I$ and $Y$ in a stabilizer code Let $P_1 = \lbrace I, -I, iI, -iI, X, -X, iX, -iX, Y, -Y, iY, -iY, Z, -Z, iZ, -iZ\rbrace$. Let $P_n$ be the $n$-tensor fold of $P_1$. It is said that two operators either commute if $AB = BA$ or anti-commute if $AB = -BA$ for all $A,B \in P_n$. Let us have $n=1$ and $A=I$ and $B=Y$, then we have: \begin{align*} IY &\stackrel{\text{true}}{=} YI,\\ IY &\stackrel{\text{true}}{=} -YI. \end{align*} In other words, $I$ and $Y$ both commute and anti-commute. I have also added a matlab code snippet for completeness. I = [1 0; 0 1]; Y = [0 -i;i 0]; if isequal(I*Y,Y*I) disp('commute') end if isequal(I*Y,-Y*I) disp('ANTI-commute') end
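Translating the MATLAB check to numpy makes the outcome explicit: only the first condition actually holds, since $IY = Y$ while $-YI = -Y$, so $I$ and $Y$ commute and do not also anti-commute:

```python
import numpy as np

I = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

print(np.allclose(I @ Y, Y @ I))    # True  — commute
print(np.allclose(I @ Y, -Y @ I))   # False — do not anti-commute
```

Running the MATLAB snippet above would likewise print only 'commute'; $IY = -YI$ would require $Y = -Y$, which fails for any nonzero operator.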
{ "domain": "quantumcomputing.stackexchange", "id": 1870, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "stabilizer-code", "url": null }
immunology, immune-system, autoimmune As for why autoimmunity occurs, you also understood the basics; however, the devil is in the details: Wikipedia does have a much longer list of mechanistically different causes. While it's often sufficient to explain immune recognition using 'keys' (T cell receptor / B cell receptor = antibody) and their respective 'locks' (peptides bound to MHC / antigens), the exact process of immune cell activation becomes important in autoimmunity, where cells are wrongly activated. This means that there are many possible points at which the 'security system' of immune cell maturation can (or rather has to) fail to allow autoimmunity. As for the cross-reactivity of cells: the difference between the binding of B cells (via antibody/B cell receptor to protein surfaces) and T cells (via T cell receptor to peptide sequences) means that in general T cells are much more likely to show cross-reactivity - just because the possibility space of a 10-12 aa peptide is much smaller than that of a protein surface (even though the probability is still very low). Additionally - as stated in the comments to your question - both the immune activation pathway and the issue of cross-reactivity, while principally understood, are not 'completely solved'. The immune system is insanely complex just by itself, and the addition of interactions with both almost all human proteins AND the proteins of any kind of pathogen means that it will take quite a lot of time until researchers can figure out all the weird quirks caused by the 'wrong' combinations.
{ "domain": "biology.stackexchange", "id": 7253, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "immunology, immune-system, autoimmune", "url": null }
javascript Linkerator.prototype.getExternalLinks = function() { return [].slice.call( this.document.querySelectorAll("#content a[href]:not([href^='#']):not([href^='javascript:']), a[href].open-new") ).map(function(link) { return new Linkerator.Link(link, this.location); }, this).filter(function(link) { return link.hasHref() && link.isExternal(); }, this); }; function documentReady() { var linkerator = new Linkerator(document, window.location); linkerator.getExternalLinks().forEach(function(link) { link.onClick(function(evt) { if(evt.preventDefault) { evt.preventDefault(); } var href = this.getHref(); window.open(href); if(typeof ga === "function" && this.isPDF()) { ga("send", "event", "pdf", "click", href, {"hitCallback": function() {}}); } return false; }); }); } if(document.addEventListener) { document.addEventListener('DOMContentLoaded', documentReady); } else if(document.attachEvent) { document.attachEvent("onreadystatechange", function() { if(document.readyState === "complete") { documentReady(); } }); } else { window.onload = documentReady; }
{ "domain": "codereview.stackexchange", "id": 14825, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript", "url": null }
A bijective function (or bijection) is a function f: A → B that is both injective and surjective, i.e. both an injection and a surjection. Bijective functions are exactly the ones that can be inverted: a function is invertible if and only if it is a bijection, which is why bijective functions are also said to be invertible. Graphically, a curve that a vertical line crosses more than once is still a valid curve, but it is not a function; and if no horizontal line crosses the graph more than once, the function is (at the very least) injective. Other types of functions have stricter rules; to find out more you can read about injective, surjective and bijective functions, definitions that usually also work on sets with infinitely many elements. For a bijective function I can write an inverse such that composing the two returns each input value.
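For finite sets, these definitions can be checked mechanically. A small Python sketch (the helper names and the example dicts are illustrative, not from the text above):

```python
def is_injective(f):
    """No two inputs map to the same output."""
    values = list(f.values())
    return len(values) == len(set(values))

def is_surjective(f, codomain):
    """Every element of the codomain is hit."""
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    return is_injective(f) and is_surjective(f, codomain)

def invert(f):
    """Only meaningful when f is bijective on its codomain."""
    return {v: k for k, v in f.items()}

f = {1: "a", 2: "b", 3: "c"}   # bijective onto {a, b, c}
g = {1: "a", 2: "a", 3: "c"}   # not injective, hence not invertible

print(is_bijective(f, {"a", "b", "c"}))  # True
print(is_bijective(g, {"a", "b", "c"}))  # False
print(invert(f)["b"])                    # 2
```

The inverse of `g` would have to send "a" to both 1 and 2, which is exactly why only bijections can be inverted.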
{ "domain": "namescapes.us", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9796676487350494, "lm_q1q2_score": 0.8579801940168429, "lm_q2_score": 0.8757870029950159, "openwebmath_perplexity": 446.2498104846663, "openwebmath_score": 0.8679229617118835, "tags": null, "url": "http://namescapes.us/cad-iwkgj/e90175-what-is-bijective-function" }
This entails adding $$P$$ to $$L_k$$ as a new n-ary predicate symbol, and adding $$P(x_1, ... x_n) ↔ \varphi(x_1, ..., x_n)$$ to $$\Gamma_k$$ as the defining axiom for $$P$$. Thus, we can start with the formula $$\varphi(n) := \exists y(n = 2 \times y)$$ and extend the "basic" language with the new predicate $$Even(n)$$ and the theory with the defining axiom: $$Even(n) \leftrightarrow \exists y(n = 2 \times y)$$. This is "implicitly" universally quantified, i.e.: $$\forall n [Even(n) \leftrightarrow \exists y(n = 2 \times y) ]$$.

Comment

About the natural language, my "feeling" is that:

a number $$n$$ is even if it is divisible by $$2$$

is a "correct" form for a definition; "a number $$n$$" must be interpreted as "a number $$n$$ whatever", i.e. as having $$n$$ universally quantified. I would prefer to "read":

every number $$n$$ is even if it is divisible by $$2$$

as meaning: "for every number $$n$$, $$n$$ is even if it is divisible by $$2$$", which amounts to the same statement.

Question (1): when trying to prove "$$x$$ is even", what exactly should I do?

$$Even(n)$$ is defined as: $$\exists y(n = 2 \times y)$$; thus, proving that e.g. $$6 = 2 \times 3$$, by a rule of logic we can derive: $$\exists y(6 = 2 \times y)$$, which - by definition - is: $$Even(6)$$.

Question (2):
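The same definitional extension, and the answer to Question (1), can be written out in a proof assistant; a minimal Lean 4 sketch of the example above (the names are illustrative):

```lean
-- Extend the language with a new predicate via its defining axiom,
-- here realized as an ordinary definition over Nat.
def Even (n : Nat) : Prop := ∃ y, n = 2 * y

-- To prove Even 6, supply the witness y = 3;
-- rfl discharges 6 = 2 * 3 by computation.
example : Even 6 := ⟨3, rfl⟩
```

Supplying the witness and applying existential introduction is exactly the two-step derivation described in the answer to Question (1).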
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.958537725511933, "lm_q1q2_score": 0.8147294374655154, "lm_q2_score": 0.8499711756575749, "openwebmath_perplexity": 278.7568014830925, "openwebmath_score": 0.9103184938430786, "tags": null, "url": "https://math.stackexchange.com/questions/820673/semantics-and-logical-structure-in-definitions" }
# mod

Version:

Computes the modulo of the input elements.

## Syntax

c = mod(a, b)

## a

Dividend. a is a non-complex scalar or non-complex array of any dimension.

## b

Divisor. b is a non-complex scalar or non-complex array of the same dimension as a. If one input is an array, the other input can also be a scalar.

## c

Modulo of a and b. c has the same sign as b. c is a matrix of the same size as the larger of a and b. The output data type is the smaller data type of the two inputs. For example, if one input is a double-precision and the other input is a single-precision floating-point number, the output is a single-precision floating-point number.

## Calculation Formula

MathScript uses the following equation: mod(a, b) = a - b.*floor(a./b).

## Special Cases

If a is positive, mod(a, 1) is the fractional part of a. If the input elements have the same sign, mod(a, b) is equivalent to rem(a, b). If one input is an integer type, the other must be the same integer type or a double scalar type.

A = 5
B = 3
C = mod(A, B)

Where This Node Can Run:

Desktop OS: Windows

FPGA: Not supported
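The calculation formula is easy to reproduce outside MathScript. A small Python sketch (the function name is illustrative; Python's built-in % happens to follow the same sign-of-divisor convention):

```python
import math

def mathscript_mod(a, b):
    """mod(a, b) = a - b*floor(a/b); the result has the same sign as b."""
    return a - b * math.floor(a / b)

print(mathscript_mod(5, 3))     # 2
print(mathscript_mod(-5, 3))    # 1  (same sign as the divisor)
print(mathscript_mod(5, -3))    # -1
print(mathscript_mod(2.75, 1))  # 0.75 (fractional part of a positive a)
```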
{ "domain": "ni.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668679067632, "lm_q1q2_score": 0.8133272942470882, "lm_q2_score": 0.8289388083214156, "openwebmath_perplexity": 1049.6927923799713, "openwebmath_score": 0.5626770853996277, "tags": null, "url": "http://www.ni.com/documentation/en/labview-comms/latest/m-ref/mod/" }
r, data-cleaning Title: R: Revalue multiple special characters in a data.frame

R noob here.. I have the following data frame

>data
  Value Multiplier
1    15          H
2     0          h
3     2          +
4     2          ?
5     2          k

where the multiplier is of class factor. The values of K & k are 3, + is 5 and ? is 2. I have used

> data$Multiplier <- revalue(data$Multiplier, c("+"="5"))
> data$Multiplier <- revalue(data$Multiplier, c("?"="2"))
> data$Multiplier <- revalue(data$Multiplier, c("K"="3"))
> data$Multiplier <- revalue(data$Multiplier, c("k"="3"))

Is there a better way of doing it?

That seems pretty straightforward to me. I'm pretty new too, but in general I'm not sure if you can get better than one command. Though you could have combined all that:

> newValueVector <- c("+"="5", "?"="2", "K"="3", "k"="3")
> data$Multiplier <- revalue(data$Multiplier, newValueVector)
{ "domain": "datascience.stackexchange", "id": 477, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "r, data-cleaning", "url": null }
Covariance Estimates for Regression Parameters from Complex Sample Designs: Application of the Weighted Maximum Likelihood Estimator to Linear and Logistic Regression Analysis in Which Observations Might Not be Independent. Author: Barry V. Logistic regression is one of the most important techniques in the toolbox of the statistician and the data miner. A visualization of the weighted regression models is shown to the left. Note: the logits in the image were just for example, and not the calculated logits from the penguin example. Regression is all about fitting a low-order parametric model or curve to data, so we can reason about it or make predictions; in a regression, a lot of data is reduced and generalized into a few parameters. One variable is considered to be an explanatory variable, and the other is considered to be a dependent variable. In general, if there is a categorical variable with s categories, then you include. Recall the scorecard data set which contains Estimate separate linear regression models of the relationship between admission rate and cost for each type. In particular, if you use a weight variable in a regression procedure, you get a weighted regression analysis. Of course, we need to quantify what. from sklearn.model_selection import train_test_split. Linear regression is used to predict an outcome given some input value(s). Let's first look at the regression we did from the last section, the regression model predicting api00 from meals, ell and emer, and use the vif and tol options with the model statement. Roughly speaking, it is a form of weighted and reweighted least squares regression.
Three approaches to estimate regression parameters are provided in the WREG program: ordinary-least-squares (OLS), weighted-least-squares (WLS), and generalized-least-squares (GLS). Given $\beta$, define $R_i(\beta)$ as the rank (or midrank) of $Y_i - \beta X_i$ among $\{Y_j - \beta X_j\}$. Common estimation procedures that allow for survey weights in generalized linear mixed models require one unique survey-weight per sampling stage which are consequently
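For the simple one-predictor case, the WLS estimate has a closed form obtained by minimizing the weighted sum of squared residuals Σ wᵢ(yᵢ − a − bxᵢ)². A stdlib-only Python sketch (the data and weights below are made up for illustration):

```python
def weighted_linear_fit(x, y, w):
    """Closed-form WLS for y ~ a + b*x with per-point weights w."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted means
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y)) \
        / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    a = ybar - b * xbar
    return a, b

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]           # exactly y = 1 + 2x
a, b = weighted_linear_fit(x, y, [1.0, 2.0, 1.0, 0.5])
print(a, b)                        # ~1.0, ~2.0: an exact fit survives any weights
```

With noisy data the weights matter: down-weighted points pull the line less, which is the whole point of weighted regression.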
{ "domain": "data-wizard.de", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9886682471364099, "lm_q1q2_score": 0.8083099198872961, "lm_q2_score": 0.817574471748733, "openwebmath_perplexity": 833.588459763039, "openwebmath_score": 0.6430838108062744, "tags": null, "url": "http://data-wizard.de/weighted-linear-regression.html" }
wheeled-robot Title: Dynamic model of a tank-like robot

I am planning a tank-like robot for hobby purposes. I have a control engineering background; however, I have never applied it to robotics. I would like to test different control methods, namely MPC. I have seen a lot of publications regarding the kinematics and inverse kinematics of such a robot; however, I am wondering if somebody can point out something regarding the dynamics modelling of such a system, taking into account the forces, mass, etc.?

For building a dynamic model from scratch of any differential drive mobile robot (i.e., a tank), the best resource I've found so far is a paper by Dhaouadi and Hatab (PDF link) titled "Dynamic Modelling of Differential-Drive Mobile Robots using Lagrange and Newton-Euler Methodologies: A Unified Framework". It even includes some discussion of how to do the dynamic modeling of the actuators.
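As a rough starting point before the full Lagrange or Newton-Euler treatment in that paper, the planar dynamics of a differential-drive robot can be sketched as a force balance and a torque balance on top of the usual kinematics. All parameter values below are made-up illustrative numbers, and friction/slip are ignored:

```python
import math

def step(state, F_l, F_r, m=10.0, I=0.5, L=0.4, dt=0.01):
    """One Euler step of a planar tank-like robot.
    state = (x, y, theta, v, omega); F_l, F_r are left/right track thrusts.
    Dynamics: m*v' = F_l + F_r;  I*omega' = (F_r - F_l)*L/2  (L = track separation)."""
    x, y, th, v, om = state
    v  += (F_l + F_r) / m * dt
    om += (F_r - F_l) * L / (2 * I) * dt
    x  += v * math.cos(th) * dt     # kinematics driven by the dynamic states
    y  += v * math.sin(th) * dt
    th += om * dt
    return (x, y, th, v, om)

s = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):                # equal thrust: straight-line acceleration
    s = step(s, 1.0, 1.0)
print(s)                            # y and theta stay at 0, x grows
```

An MPC controller would use exactly such a discrete-time model as its prediction model, with (F_l, F_r) as the decision variables.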
{ "domain": "robotics.stackexchange", "id": 817, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "wheeled-robot", "url": null }
4. Sep 4, 2012

### vela

Staff Emeritus

You find the residue by evaluating the limit $$\lim_{z \to \pi}\ (z-\pi)\frac{1}{\sin z}.$$ Rewriting the limit in terms of $z-\pi$ is simply to make the evaluation easier. I just realized you could simply apply L'Hopital's rule to do that and avoid unnecessary complications. If you had a more complicated function, however, writing things in terms of $z-\pi$ is often less work than using L'Hopital's rule. But say you did it using the trig identity anyway. You should end up with $$\lim_{z\to\pi} \frac{z-\pi}{-\sin(z-\pi)}.$$ You might already recognize that limit, but if not, use the substitution $w=z-\pi$ to turn it into one you should recognize.
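A quick numerical sanity check on that limit: the residue of $1/\sin z$ at $z=\pi$ comes out to $-1$.

```python
import cmath, math

def f(z):
    # (z - pi) / sin z, whose limit at z = pi is the residue
    return (z - math.pi) / cmath.sin(z)

# Approach the pole; the values converge to -1.
for eps in (1e-3, 1e-5, 1e-7):
    print(f(math.pi + eps))
```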
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9683812354689082, "lm_q1q2_score": 0.8005721926533441, "lm_q2_score": 0.8267117983401363, "openwebmath_perplexity": 618.4790971615573, "openwebmath_score": 0.9062374830245972, "tags": null, "url": "https://www.physicsforums.com/threads/find-the-residues-of-the-function.633491/" }
c#, performance, linked-list, collections

        public Node next, previous;

        public bool IsFull
        {
            get { return Count >= MaxSize; }
        }

        public T RemoveLast()
        {
            int i = Count - 1;
            T t = this[i];
            RemoveAt(i);
            return t;
        }
    }
}

Performance

Here are some performance statistics. It's not scientific but it gives you a rough idea. Times are in milliseconds.

Adding 100000 integers (list): 1
Adding (unrolled linked list): 12
Finding the index of an integer at the end of 1000 integers (list): 10886
Finding (unrolled linked list): 18055
Inserting 100000 integers into the middle (list): 22694
Insertion (unrolled linked list): 2238
Deletion of 1000 items from the start (list): 2331
Deletion (unrolled linked list): 1

It looks like an unrolled linked list could be a good choice for a long list where there is a lot of insertion or deletion at places other than the end. It will have a lower memory footprint than a linked list (I've not tested that out, though).

A few things off the bat: you should consider prefixing private field names with underscores, so it is possible to distinguish them from local variables.

Edit: There are no "official" naming conventions regarding private members, not that I know of anyway. An underscore, however, is a somewhat accepted industry standard nowadays, mainly because a) it is used by Microsoft, b) it is used by ReSharper (by default), c) better IntelliSense support, and d) when it comes to choosing between this.name = name; and _name = name;, most people choose the latter.

I think this is pretty accurate:

IEnumerator<T> enumerator = enumerable.GetEnumerator();
while (enumerator.MoveNext())

Hmm, why don't you use foreach?

tuple.Item1[index - tuple.Item2] = value; - really hard to read.

public void Clear()
{
    first = null;
    count = 0;
}
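For readers unfamiliar with the structure under review, here is a minimal Python sketch of the core idea: each node holds a small array, and a full node is split in two on insert, which is what makes mid-list insertion cheap. The names are illustrative and do not mirror the C# code above:

```python
class Node:
    def __init__(self, cap=4):
        self.items, self.cap, self.next = [], cap, None

class UnrolledList:
    def __init__(self, cap=4):
        self.head = Node(cap)

    def insert(self, index, value):
        node = self.head
        while index > len(node.items) and node.next:   # walk to the right node
            index -= len(node.items)
            node = node.next
        if len(node.items) == node.cap:                # full: split into two nodes
            half = node.cap // 2
            new = Node(node.cap)
            new.items, node.items = node.items[half:], node.items[:half]
            new.next, node.next = node.next, new
            if index > len(node.items):
                index -= len(node.items)
                node = new
        node.items.insert(index, value)                # cheap: the array is tiny

    def to_list(self):
        out, node = [], self.head
        while node:
            out.extend(node.items)
            node = node.next
        return out

ul = UnrolledList()
for i, v in enumerate([1, 2, 3, 4, 5]):
    ul.insert(i, v)
ul.insert(2, 99)
print(ul.to_list())   # [1, 2, 99, 3, 4, 5]
```

Each insert only shifts elements within one small node, not the whole collection, which matches the good mid-list insertion numbers in the table above.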
{ "domain": "codereview.stackexchange", "id": 6111, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, performance, linked-list, collections", "url": null }
javascript, snake-game, webgl

<html>
<style>
canvas {
    position: fixed;
    left: 0;
    top: 0;
    width: 99%;
    height: 99%;
}
* {
    padding: 0px;
    margin: 0px
}
#score, #speed, #highscore, #maxspeed, #lag, #debug {
    position: fixed;
    z-index: 100;
    font-size: 20px;
    font-family: Verdana;
    left: 15px;
    width: 100%;
}
</style>
<div id="stats">
    <div id="debug"></div>
    <div id="score">Score: 0</div>
    <style id="scorestyle">
    #score {
        top: 10px;
        display: block;
    }
    </style>
    <div id="speed">Speed: 1</div>
    <style id="speedstyle">
    #speed {
        top: 30px;
        display: block;
    }
    </style>
    <div id="highscore">Highscore: 1</div>
    <style id="highscorestyle">
    #highscore {
        top: 30px;
        display: block;
    }
    </style>
    <div id="maxspeed">Highest Speed: 1</div>
    <style id="maxspeedstyle">
    #maxspeed {
        width: 100%;
        top: 50px;
        display: none;
    }
    </style>
    <span id="lag">Lag: 0ms</span>
    <style id="lagstyle">
    #lag {
        top: 70px;
        display: none;
    }
    </style>
    <div id="gameover" align="center">Game Over</div>
    <style>
    #gameover {
        position: absolute;
        z-index: 100;
        font-size: 60px;
        font-family: Verdana;
        margin: 0;
        top: 50%;
        left: 50%;
        opacity: 0;
        transform: translate(-50%, -50%);
    }
    </style>
</div>
<div id="canvas"></div>
<p id="p"></p>
{ "domain": "codereview.stackexchange", "id": 30557, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, snake-game, webgl", "url": null }
feynman-diagrams, perturbation-theory Title: Can Feynman diagrams be used to represent any perturbation theory?

In Quantum Field Theory and Particle Physics we use Feynman diagrams. But e.g. in Schwartz's textbook and here it is shown that they apply to more general cases, like general perturbation theory for differential equations. Can Feynman diagrams be used to represent any perturbation theory? My thought is that we might be able to use this for perturbation theory in coupled equation systems like we get in fluid dynamics or astrophysics. But is it still possible to write down Feynman rules as a pictorial representation? And if it does still work in this case, are there cases that exhibit a different algebraic structure that Feynman rules cannot represent? Note that this is not a question about perturbation theory in Quantum Field Theory (since there the use of Feynman diagrams is well known), but in a more general context. If the answer to the question is simply "no", could one at least give classes of perturbation theories that it applies to?

The answer is no in general. Not all perturbation theories can be organized using diagrammatic rules (example given below). The subtlety here is that it is not because you can write down diagrams to describe each term of the perturbation theory that you know what the diagrammatic rules are. What I mean here is that it might happen that after calculating a perturbation theory at a given order (using standard analytical approaches), you can then rewrite the result as a sum of diagrams, instead of an equation. But it is not because you know the diagrams at a given order that you immediately know what the diagrams of the next order will be, and in particular the numerical coefficients (what would be the symmetry factors in standard QFT perturbation theory).
This changes everything, because if you do not know the diagrammatic rules, you cannot easily write down all the diagrams at each order automatically, and you have to do the calculation the hard way. Of course, in standard QFTs, because the diagrams come from the average over a Gaussian measure, we know the diagrammatic rules, and Feynman diagrams are very useful to organize the perturbation theory.
{ "domain": "physics.stackexchange", "id": 29491, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "feynman-diagrams, perturbation-theory", "url": null }
c++, reinventing-the-wheel, integer

// Integer Type
LargeInt::LargeInt(long long x)
    : sign_(helper_templates::TGetSign(x)),
      length_(helper_templates::TCountDigits(x)),
      data_(new Digit[length_])
{
    if (sign_ == kSignNegative) {
        x = -x;
    }
    for (Index i = kFirstIndex; i < length_; i++) {
        Digit d = x % kRadix;
        data_[i] = d;
        x /= kRadix;
    }
}

/*
 * Destructor
 */
LargeInt::~LargeInt()
{
    delete [] data_;
}

/*
 * Operators
 */

// Assignment
LargeInt& LargeInt::operator=(const LargeInt& x)
{
    if (this != &x) {
        this->sign_ = x.sign_;
        this->length_ = x.length_;
        for (Index i = kFirstIndex; i < this->length_; i++) {
            this->data_[i] = x.data_[i];
        }
    }
    return *this;
}

// Negation
LargeInt LargeInt::operator-()
{
    return LargeInt(!this->sign_, this->length_, this->data_);
}

// Addition
LargeInt operator+(LargeInt x, const LargeInt& y)
{
    return x += y;
}
{ "domain": "codereview.stackexchange", "id": 18877, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, reinventing-the-wheel, integer", "url": null }
quantum-mechanics, operators, quantum-optics Title: How does the displacement operator act on number states $|n\rangle$? The displacement operator generates the coherent state out of the vacuum. $$\hat D(\alpha)|0\rangle = |\alpha\rangle$$ but I am wondering what the meaning of a displacement operator acting upon a number state with $n \neq 0$ is. For example, is $\hat D(\alpha)|1\rangle$ meaningful? I have not been able to find any mention of this online. It is indeed meaningful; they're called, surprisingly enough, displaced number states, and they've appeared on this site e.g. here and here. It's hard to say anything else about them without knowing what exactly you want to know about them, though.
{ "domain": "physics.stackexchange", "id": 45045, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, operators, quantum-optics", "url": null }
# Interval Notation

#### Casio

##### Member

If an example was written h(x) = 1/x and the function h has domain R excluding 0, could somebody please explain how the two open intervals include (- infinity, 0) and (0, infinity)?

#### masters

##### Active member

Hi Casio,

You already said the domain was all real numbers except 0. To represent that in interval notation you would use the union of the two intervals you have. $$(- \infty, 0) \cup (0, +\infty)$$ And this says the domain includes all real numbers less than 0 together with all real numbers greater than 0. Zero is excluded in the notation by using parentheses instead of brackets.

#### chisigma

##### Well-known member

An interval is 'open' if it doesn't include the extremes... $- \infty$ isn't a number and 0 isn't included... the same is for 0 and $+\infty$...

Kind regards

$\chi$ $\sigma$

#### Casio

##### Member

Thanks again for all replies; it's my confusion. Because I could see them written in the brackets it was confusing, and I couldn't understand why they are there. Although they are included in the brackets, and I can see them there, they are not included, which is what confused me.

#### HallsofIvy

##### Well-known member

MHB Math Helper

In general interval notation, "[" or "]" mean "include this endpoint" while "(" and ")" mean "do not include this endpoint". [a, b] means "all numbers between a and b, and a and b themselves". In set notation: $$[a, b]= \{ x| a\le x\le b\}$$
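The exclusion of the endpoint can also be expressed programmatically: the domain $(-\infty, 0) \cup (0, +\infty)$ is described by two strict inequalities, so 0 fails both. A tiny Python sketch:

```python
def in_domain(x):
    """x lies in (-inf, 0) or (0, +inf); strict inequalities exclude 0."""
    return x < 0 or x > 0

h = lambda x: 1 / x   # defined exactly on that domain

print(in_domain(-3.5), in_domain(0), in_domain(2))  # True False True
print(h(2))                                          # 0.5
```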
{ "domain": "mathhelpboards.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.978051745965256, "lm_q1q2_score": 0.8107450467105216, "lm_q2_score": 0.828938806208442, "openwebmath_perplexity": 1046.6659231785945, "openwebmath_score": 0.7799906134605408, "tags": null, "url": "https://mathhelpboards.com/threads/interval-notation.1061/" }
atomic-clocks, hyperfine-structure ***Edited to add: "historical reasons" might not seem like the best justification (which it isn't), but remember that, even now, a good number of PIs and faculty in atomic physics were grad students, postdocs, and early-career faculty when laser cooling was still new, so these historical reasons came at a time that shaped their thinking for the rest of their careers, and therefore the design of their experiments, the ones producing papers today. My PI still talks about the old dye laser systems sometimes. As a note, ${}^{85}$Rb actually has a negative background scattering length, which makes BECs unstable unless you use a Feshbach resonance at a fairly large magnetic field to make it positive. This puts it at a historical disadvantage. But there's actually a better reason not to use 85 in your specific application, atomic clocks. It's simple and already stated here on the thread: its ground-state hyperfine frequency of 3-ish GHz is smaller than 87's splitting of 6.8 GHz. This is why cesium, at over 9 GHz, is still the world's standard, and why the new strontium optical lattice clocks (which work at 429 THz) are the world-record holders. When you think about atomic clocks, the absolute first question you have to ask is "How fast can it go?" The questions of quality and stability are engineering questions; the fundamental oscillation frequency is something you can never get around except by changing the basic setup. You want to be reaching for stability that matches the fastest oscillation you can get, not settling for a slower clock that you can get really stable. The related statement I should make is a different way of thinking about the quality factor. This quantity is defined as $Q=\nu_0/\Delta\nu$. That is, for a given quality factor, a higher resonance frequency requires a higher linewidth. And since $Q$ is generally in the denominator of the instability, higher $Q$, means lower instability. 
This means higher resonance frequency gives lower instability... if you can keep the linewidth down.
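The point about $Q=\nu_0/\Delta\nu$ can be made numerical: for the same achievable linewidth, the optical transition simply starts with far more oscillations to divide by. A rough Python sketch (the 1 Hz linewidth is purely illustrative):

```python
def quality_factor(nu0, linewidth):
    return nu0 / linewidth

cs_hyperfine = 9.192631770e9   # Hz: the SI-defining Cs ground-state splitting
sr_optical   = 4.292280042e14  # Hz: approximate Sr optical clock transition
dnu = 1.0                      # Hz: same (illustrative) linewidth for both

print(quality_factor(cs_hyperfine, dnu))  # ~9.2e9
print(quality_factor(sr_optical, dnu))    # ~4.3e14, tens of thousands of times higher Q
```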
{ "domain": "physics.stackexchange", "id": 90470, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "atomic-clocks, hyperfine-structure", "url": null }
transformer, finetuning, text-generation, llm, question-answering

I made the same text generation model, but without the questions, and when I then asked it to write text after the prompt, the new text was not good enough: mostly too abstract or too far away from the essay, and a bit weird. Should I add eos_token as an argument of tokenizer.encode_plus() and also "end of sentence" [EOS] tokens in the input text as well? Would that make the model any better? Does the model give better answers when there are padding [PAD] tokens as the pad_token argument of tokenizer.encode_plus()? Which other tweaks and tricks should give better answers? What could help the most to get better text generation? Up to now, the text output of the text generation model is not good.

Fine-tuning code with just one file as the text input

I train the text generation model with the code that you find at How can you get a Huggingface fine-tuning model with the Trainer class from your own text where you can set the arguments for truncation and padding?:

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling, Trainer, TrainingArguments

model_name = "dbmdz/german-gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)

file_path = './myfile.txt'
bln_truncation = False
num_train_epochs = 1
per_device_train_batch_size = 1
save_steps = 10_000

dataset = load_dataset("text", data_files={"train": file_path})
block_size = 512

def tokenize_function(examples):
    return tokenizer(
        examples["text"],
        padding="max_length",
        truncation=bln_truncation)

tokenized_datasets = dataset.map(tokenize_function, batched=True)
{ "domain": "datascience.stackexchange", "id": 12130, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "transformer, finetuning, text-generation, llm, question-answering", "url": null }
quantum-mechanics, homework-and-exercises, lagrangian-formalism, quantum-optics 0 & \omega_{2} & \cdots & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & 0 & \cdots & \omega_{k}& \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots \end{bmatrix} = \dfrac{\pi}{q} \begin{bmatrix} 1 & 0 & \cdots & 0 & \cdots \\ 0 & 2 & \cdots & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & 0 & \cdots & k & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots \end{bmatrix} =\Omega^{\rm{T}}\left(q\right) \tag{05} \end{equation} \begin{equation} \phi\left(q,\dot{q}\right)\stackrel{\text{def}}{\equiv}\dfrac{\dot{q}}{q} \tag{06} \end{equation} We define also the real scalar below, something like the inner product of real vectors \begin{equation} \boldsymbol{<}\mathbf{Q},\mathbf{P}\boldsymbol{>}\stackrel{\text{def}}{\equiv} \sum_{k}Q_{k}P_{k} \tag{07} \end{equation} Under these definitions and using equations (A-01), see AUXILIARY SECTION, we have the following expressions (08) in place of the equations of motion (01) and (09) in place of (02): \begin{equation}
{ "domain": "physics.stackexchange", "id": 24247, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, homework-and-exercises, lagrangian-formalism, quantum-optics", "url": null }
electromagnetism, classical-mechanics, electrostatics, electric-fields Title: Conceptual question about cavity inside a sphere of uniform charge I understand why the E-Field isn't $0$ in the cavity (denoted by the white circle) and the red area denoting the uniform charge distribution. This is because of the Flux through the spherical cavity essentially not being zero, therefore $\nabla \cdot E \neq 0$ or equivalently $\oint_A\vec{E} \cdot\vec{da} \neq 0 $. However, then by this logic, shouldn't this mean that we have a charge enclosed in the cavity, which cannot be the case since it's a vacuum? My logic is that since $\oint_A\vec{E} \cdot\vec{da} = Q_{enc}/\epsilon_{0} \neq 0$ this then means there is a charge enclosed in the sphere. I know that we can "simulate" the boundary conditions by placing a sphere of -ve charge in place of the cavity. However, when one considers the problem itself, there is no charge inside the cavity, so does $\oint_A\vec{E} \cdot\vec{da} = Q_{enc}/\epsilon_{0}$ not hold here? If anyone could shed light and help me understand this it would be much appreciated. I think I see my logical flaw. $∮E⃗ ⋅da=0$, however this does not mean that there is no E-Field passing through, right? It only means that the FLUX is $0$. So Gauss' Law holds, but we must be careful. I'll leave this up nonetheless, in case anyone comes against the same question again.
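The resolution in the last paragraph (nonzero field but zero net flux) can be checked directly for a uniform field, which is what superposition gives inside the cavity. Summing $\vec{E}\cdot\hat{n}\,A$ over the six faces of a small cube (the field components below are arbitrary illustrative values):

```python
# A uniform field E through an axis-aligned cube of side a:
# opposite faces contribute equal and opposite flux, so the total is 0,
# consistent with Q_enc = 0 even though E != 0 everywhere inside.
E = (1.5, -0.3, 2.0)   # arbitrary uniform field components
a = 0.1                # cube side

normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
flux = sum(sum(Ei * ni for Ei, ni in zip(E, n)) * a**2 for n in normals)
print(flux)   # 0.0
```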
{ "domain": "physics.stackexchange", "id": 77705, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, classical-mechanics, electrostatics, electric-fields", "url": null }
fluid-dynamics, flow, porous-media separating $$ -\frac{\mu_n B_n}{\mu_g B_g} dp=\left[\frac{\mu_n B_n}{k A}q_{sc}+ \frac{c\beta}{\mu_g A^2}\rho_{g,sc} \mu_n B_n q_{sc}^2\right]dx$$ Integrating $$ -\mu_n B_n \int_{p_1}^{p_2}\frac{1}{\mu_g B_g} dp=\left[\frac{\mu_n B_n}{k A}q_{sc}+ \frac{c\beta}{\mu_g A^2}\rho_{g,sc} \mu_n B_n q_{sc}^2\right]\int_{0}^{L}dx$$ The gas formation volume factor, $B_g$, is defined as $$\tag{7} B_g=\frac{p_{sc}}{p}\frac{T}{T_{sc}}\frac{z}{z_{sc}}$$ Substituting Eq.7 into the integral on the left-hand-side (LHS) of Eq.6, we have $$I=-\mu_n B_n \int_{p_1}^{p_2}\frac{1}{\mu_g B_g}dp=-\frac{\mu_n z_n}{p_n}\int_{p_1}^{p_2}\frac{p}{\mu z}dp \ \ (\text{assume} \ T_n=T)$$ Using the initial reservoir pressure, $p_i$, as the "normalizing" pressure, $p_n$, we have
{ "domain": "physics.stackexchange", "id": 18877, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-dynamics, flow, porous-media", "url": null }
# Probability of Winning a Contest

This is my first question so apologies if it's unclear/vague. There exists a contest with me in it, and $5$ others, thus $6$ people in total, along with $5$ prizes. A person can only win one prize, and once they do, they're out of the contest. Winners are chosen at random. So, what is the chance of me winning a prize? My initial thought was: $\frac16$ chance initially, then $\frac15$ if I don't win the first time, $\frac14$ if I don't win the second, etc. as people are removed once they win, resulting in $\frac16+\frac15+\frac14+\frac13+\frac12 = 1.45$ which is $145$%. This is greater than $100$%, and obviously I am not guaranteed to win as I can be the $1$ loser, so how do you find the correct probability? Thanks!

Edit: All of you have been extremely helpful. Thank you!

- You have an implicit assumption that the probability of each person winning each prize is equal - this may not be the case for all contests, such as a raffle drawing where people may buy any number of tickets. (That's not a criticism, just a warning that the answers below won't always hold.) –  Patrick M Jul 3 '14 at 21:07

Well, if we are choosing $5$ winners at random out of $6$ people, then you have a $\dfrac{5}{6}$ probability of winning. However, your approach is correct - we can sum the individual probabilities to get the same result, but we have to be cautious. Consider the first two prizes given. As you say, we have a $\dfrac{1}{6}$ probability of winning the first prize, and then a $\dfrac{1}{5}$ chance of winning the second prize if we don't win the first prize. We need to take into account this part - you only have a $\dfrac{5}{6}$ chance to get to the second round in the first place. Using your method, this would give us a total probability of
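The corrected calculation is easy to verify exactly: weighting each round's win chance by the probability of still being in the contest makes every term $\frac16$, and the five terms sum to $\frac56$. A sketch with exact fractions:

```python
from fractions import Fraction

p_win, p_still_in = Fraction(0), Fraction(1)
for remaining in range(6, 1, -1):             # 6, 5, 4, 3, 2 people still in
    p_win += p_still_in * Fraction(1, remaining)
    p_still_in *= Fraction(remaining - 1, remaining)

print(p_win)        # 5/6
print(p_still_in)   # 1/6: the chance of being the one loser
```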
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9780517462851321, "lm_q1q2_score": 0.8426289450097493, "lm_q2_score": 0.8615382040983515, "openwebmath_perplexity": 407.69143471933023, "openwebmath_score": 0.8789778351783752, "tags": null, "url": "http://math.stackexchange.com/questions/855610/probability-of-winning-a-contest" }
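The 5/6 answer above can be checked mechanically. Below is a small sketch (the helper name `win_probability` is mine, not from the thread) that accumulates the per-round winning chances exactly with `fractions.Fraction`, mirroring the corrected telescoping argument:

```python
from fractions import Fraction

def win_probability(people, prizes):
    """Chance that one fixed person wins a prize when `prizes` winners
    are drawn one at a time, uniformly, without replacement."""
    p_win = Fraction(0)
    p_still_in = Fraction(1)      # probability of not having won yet
    remaining = people
    for _ in range(prizes):
        p_win += p_still_in * Fraction(1, remaining)
        p_still_in *= Fraction(remaining - 1, remaining)
        remaining -= 1
    return p_win

print(win_probability(6, 5))      # 5/6
```

Each round contributes exactly 1/6 once the chance of still being in the contest is factored in, which is why the naive sum 1/6 + 1/5 + ... overshoots.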
javascript, object-oriented, canvas Here is a copy of the revised code: console.log("Game starting..."); var ship = { name: "Enterprise", x: 125, y: 120, width: 50, height: 40, left: false, right: false, up: false, down: false, fire: false, firerate: 5, cfirerate: 0, moveInterval: 5, color: "#000000" }, map = { width: 300, height: 300, color: "#808080", drawInterval: 30 }, laser = { height: 20, moveInterval: 6, color: "#FF0000" }, lasers = [], keys = { left: 37, up: 38, right: 39, down: 40, fire: 90 //Z }, getKey = function(key) { for (var i in keys) { if (keys.hasOwnProperty(i)) { if (keys[i] === key) { return i }; } } }, eventValues = { keyup: false, keydown: true }, types = { right: 1, left: 2 }; var world = document.getElementById('world'); var cxt = world.getContext("2d"); $(document).bind('keydown keyup', function(e) { var key = getKey(e.keyCode); ship[key] = eventValues[e.type]; }); function createLaser(type) { var x = ship.x; if (type === types.right) { x += ship.width; } var y = laser.height + ship.y; return { type: type, x: x, y: y, } } function drawWorld() { cxt.fillStyle = map.color; cxt.fillRect(0, 0, map.width, map.height); }
{ "domain": "codereview.stackexchange", "id": 564, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, object-oriented, canvas", "url": null }
thermodynamics, statistical-mechanics, energy-conservation, entropy
The red point corresponds to the state in which the entropy of the composite system is maximized. Suppose that when the internal wall separating A and B was adiabatic, the internal energy of $A$ was $5\,\mathrm{J}$; then the corresponding point in the plot of total entropy vs. internal energy of A is the one shown in green. The Clausius inequality says that $dS\geq\int\frac{dq}{T}\tag{1}$. As our composite system is isolated, there is no heat exchange, thus $dq=0$. So $(1)$ becomes $dS\geq 0\tag{2}$ This says that any spontaneous process occurring inside this composite system (heat exchange between $A$ and $B$, as the walls are diathermal) increases the entropy. My question is: why does the equilibrium state correspond to maximum entropy, rather than to any entropy which is greater than the initial one but not maximal? As we can see, $(2)$ says that the entropy increases in such a process, but it says nothing about the equilibrium state reached by $A$ and $B$ corresponding to the maximum entropy of the composite system. Why does the equilibrium state not correspond to some point between green and red? Why does it correspond only to the red point in the above process? So, basically, why are the intermediate states in an $S$ vs. $U$ plot not equilibrium states during a spontaneous process?
The essence of your question is: why is the statement that the equilibrium state corresponds to maximum entropy equivalent to the thermodynamic inequality $\Delta S \geq 0$ for an isolated system? More than on mathematical manipulations, the answer depends on a conceptual understanding of the meaning of $\Delta S \geq 0$ and of thermodynamic equilibrium.
{ "domain": "physics.stackexchange", "id": 80636, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, statistical-mechanics, energy-conservation, entropy", "url": null }
c++, c++11, callback
int main()
{
    ResourceSharedPtr resource_ptr = std::make_shared<Resource>("/login", RequestMethod::GET, LoginMethodHandler);
    /* call to method handler */
    /* ideally, in a full-blown application, the logic would search for the path and method and call the appropriate handler */
    resource_ptr->GetMethodHandler()();
    path_map.insert(std::make_pair(resource_ptr->GetPath(),resource_ptr));
    return 0;
}
{ "domain": "codereview.stackexchange", "id": 24345, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++11, callback", "url": null }
Yes, it can be observed that in each case, the LCM of the given numbers is the product of these numbers. When two numbers are co-prime, their LCM is the product of those numbers. Also, in each case, LCM is a multiple of 3.
{ "domain": "careers360.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9873750514614409, "lm_q1q2_score": 0.8291323715243896, "lm_q2_score": 0.8397339696776499, "openwebmath_perplexity": 3741.2924265765605, "openwebmath_score": 0.860153079032898, "tags": null, "url": "https://learn.careers360.com/ncert/question-find-the-lcm-of-the-following-numbers-a-9-and-4-b-12-and-5-c-6-and-5-d-15-and-4/" }
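The observation can be checked with a short script. This is a sketch using the standard gcd-based LCM identity; the pairs are the ones from the exercise (9 and 4, 12 and 5, 6 and 5, 15 and 4):

```python
from math import gcd

def lcm(a, b):
    # lcm(a, b) * gcd(a, b) == a * b for positive integers
    return a * b // gcd(a, b)

for a, b in [(9, 4), (12, 5), (6, 5), (15, 4)]:
    assert gcd(a, b) == 1         # each pair is co-prime
    assert lcm(a, b) == a * b     # so the LCM is exactly the product
    assert lcm(a, b) % 3 == 0     # and each of these LCMs is a multiple of 3
    print(a, b, lcm(a, b))
```

Note the last assertion holds only because one number in each of these pairs happens to be a multiple of 3; it is a property of the particular pairs, not of co-prime numbers in general.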
python, pandas, dataframe Title: Find the consecutive zeros in a DataFrame and do a conditional replacement I have a dataset like this: Sample Dataframe import pandas as pd df = pd.DataFrame({ 'names': ['A','B','C','D','E','F','G','H','I','J','K','L'], 'col1': [0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0], 'col2': [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0]}) I'd like to replace some of the 0's in col1 and col2 with 1's, but not replace the 0's if three or more 0's are consecutive in the same column. How can this be done with pandas? Original Dataset: names col1 col2 A 0 0 B 1 0 C 0 0 D 1 0 E 1 1 F 1 0 G 0 1 H 0 0 I 0 1 J 1 0 K 0 0 L 0 0 Desired Dataset: names col1 col2 A 1 0 B 1 0 C 1 0 D 1 0 E 1 1 F 1 1 G 0 1 H 0 1 I 0 1 J 1 0 K 1 0 L 1 0 Consider the following approach: def f(col, threshold=3): mask = col.groupby((col != col.shift()).cumsum()).transform('count').lt(threshold) mask &= col.eq(0) col.update(col.loc[mask].replace(0,1)) return col In [79]: df.apply(f, threshold=3) Out[79]: col1 col2 names A 1 0 B 1 0 C 1 0 D 1 0 E 1 1 F 1 1 G 0 1 H 0 1 I 0 1 J 1 0 K 1 0 L 1 0 Step by step: In [84]: col = df['col2']
{ "domain": "datascience.stackexchange", "id": 1927, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, pandas, dataframe", "url": null }
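For reference, here is a self-contained version of the same run-length idea (the function name `fill_short_zero_runs` is mine; the logic follows the answer's groupby/transform mask). It assumes pandas is available:

```python
import pandas as pd

df = pd.DataFrame({
    'names': ['A','B','C','D','E','F','G','H','I','J','K','L'],
    'col1':  [0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0],
    'col2':  [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0]}).set_index('names')

def fill_short_zero_runs(col, threshold=3):
    # Give every run of equal values its own id, then measure run lengths.
    run_id = (col != col.shift()).cumsum()
    run_len = col.groupby(run_id).transform('size')
    # Flip only the zeros that sit in runs shorter than `threshold`.
    mask = col.eq(0) & run_len.lt(threshold)
    return col.mask(mask, 1)

result = df.apply(fill_short_zero_runs, threshold=3)
print(result)
```

The run of three zeros at G, H, I in col1 and the leading and trailing runs in col2 survive; every shorter zero run is overwritten with ones, reproducing the desired dataset.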
php, pdo
//do the required stuff
return $connection;
}
public function name ($name) {
$this->name=$name;
return $this;
}
//...
}
This is a really awesome pattern for building complex objects in larger systems and in tests. You could even reuse the builder and call create twice. Or change only one property and call create again.
{ "domain": "codereview.stackexchange", "id": 3168, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, pdo", "url": null }
nextflow To input the trimmed reads into a downstream process, just ensure that the input tuple matches the input set cardinality declared by the process. If the input declaration is not the same, you may need to apply one or more operators to your channel to make it fit. For your alignment process, you might not need to make any changes. Also, if your alignment process needs to declare one or more reference files, just make sure these are value channels. Most of the time, what you want is one queue channel and one or more value channels when you require multiple input channels, otherwise you might observe unusual behavior (e.g. see this question). The section on multiple input channels in the docs is well worth taking the time to read in my opinion. A dummy alignment process might look like: process ALIGN { tag { sample } debug true input: tuple val(sample), path(reads) path fasta path gtf script: def (r1, r2) = reads """ ls -g "${r1}" ls -g "${r2}" ls -g "${fasta}" ls -g "${gtf}" """ } And the updated workflow: workflow { read_pairs_ch = Channel.fromFilePairs( params.reads, checkIfExists:true ) transcriptome = file( params.transcriptome_file ) gtf = file( params.gtf_file ) TRIM( read_pairs_ch ) ALIGN( TRIM.out.trimmed_fastqs, transcriptome, gtf ) }
{ "domain": "bioinformatics.stackexchange", "id": 2332, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nextflow", "url": null }
## Solution Let the length of $AD$ be $x$, so that the length of $AB$ is $2x$ and $\text{[}ABCD\text{]}=2x^2$. Because $ABCD$ is a rectangle, $\angle ADC=90^{\circ}$, and so $\angle ADE=\angle EDF=\angle FDC=30^{\circ}$. Thus $\triangle DAE$ is a $30-60-90$ right triangle; this implies that $\angle DEF=180^{\circ}-60^{\circ}=120^{\circ}$, so $\angle EFD=180^{\circ}-(120^{\circ}+30^{\circ})=30^{\circ}$. Now drop the altitude from $E$ of $\triangle DEF$, forming two $30-60-90$ triangles. Because the length of $AD$ is $x$, from the properties of a $30-60-90$ triangle the length of $AE$ is $\frac{x\sqrt{3}}{3}$ and the length of $DE$ is thus $\frac{2x\sqrt{3}}{3}$. Thus the altitude of $\triangle DEF$ is $\frac{x\sqrt{3}}{3}$, and its base is $2x$, so its area is $\frac{1}{2}(2x)\left(\frac{x\sqrt{3}}{3}\right)=\frac{x^2\sqrt{3}}{3}$. To finish, $\frac{\text{[}\triangle DEF\text{]}}{\text{[}ABCD\text{]}}=\frac{\frac{x^2\sqrt{3}}{3}}{2x^2}=\boxed{\textbf{(A) }\frac{\sqrt{3}}{6}}$
{ "domain": "artofproblemsolving.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9884918536478772, "lm_q1q2_score": 0.8258686682590555, "lm_q2_score": 0.8354835350552603, "openwebmath_perplexity": 90.7904250616293, "openwebmath_score": 0.8814104795455933, "tags": null, "url": "https://artofproblemsolving.com/wiki/index.php?title=2014_AMC_10B_Problems/Problem_15&diff=prev&oldid=71124" }
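The answer's ratio can be sanity-checked numerically by placing the rectangle in coordinates with $D$ at the origin; this sketch uses $x = 1$ (the ratio is scale-invariant):

```python
from math import tan, radians, sqrt

x = 1.0                          # AD = x, AB = 2x
D = (0.0, 0.0)
E = (x * tan(radians(30)), x)    # ray DE makes 30 degrees with side DA
F = (x * tan(radians(60)), x)    # ray DF makes 60 degrees with side DA

# Shoelace formula for triangle DEF, simplified because D is the origin
area_DEF = 0.5 * abs(E[0] * F[1] - F[0] * E[1])
ratio = area_DEF / (2 * x * x)   # rectangle area is 2x^2

print(ratio, sqrt(3) / 6)        # the two values agree
```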
quantum-mechanics, hilbert-space, operators, harmonic-oscillator NB on comments (geeky) The above projection on energy (number) eigenstates is consistent, to the extent the action of $$ \hat X= (a+a^\dagger)/\sqrt{2}, \qquad \hat P= (a^\dagger - a) /\sqrt{2} $$ on $|n\rangle$ raises or lowers n by 1, and so maintains the projection on integer ns, as one may also confirm from the recursion relations of Hermite functions, linked above. So, the dynamics never slips into the unphysical superselection sectors projected out, e.g., with m = integer plus a noninteger constant, such as 1/2. Freak states such as $|1/2\rangle \equiv \sqrt{ \frac{a^\dagger}{(1/2)!}}|0\rangle= \sqrt{ \frac{2a^\dagger}{ \sqrt{\pi}}}|0\rangle $ orthogonal to the above integer n ones are permanently and safely excluded from consideration.
{ "domain": "physics.stackexchange", "id": 75453, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, hilbert-space, operators, harmonic-oscillator", "url": null }
hilbert-transform
$$x_+(t)\approx x(t)+j\sin(2\pi f_0 t)\Pi_a(t)=\Pi_a(t)e^{j2\pi f_0 t}\tag{6}$$
Obviously, the representation
$$x(t)=\text{Re}\left\{\Pi_a(t)e^{j2\pi f_0 t}\right\}\tag{7}$$
is always valid, but the complex-valued signal $\Pi_a(t)e^{j2\pi f_0 t}$ is not an analytic signal; it's just a good approximation of the analytic signal for large values of $f_0$. Note that for $x(t)=m(t)\cos(2\pi f_0t)$ with $m(t)$ a band-limited function, i.e. $M(f)=0$ for $|f|>B$, the complex-valued signal $m(t)e^{j2\pi f_0 t}$ is an analytic signal, as long as $f_0>B$. The problem with the function given in your question is that $\Pi_a(t)$ is not band-limited.
{ "domain": "dsp.stackexchange", "id": 3470, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "hilbert-transform", "url": null }
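The defining property being used here is that an analytic signal has no negative-frequency content. That is easy to see numerically: a small pure-Python DFT (quadratic-time, just for illustration) of the complex exponential $e^{j2\pi f_0 t}$ shows all of its energy in a single positive-frequency bin:

```python
from cmath import exp, pi

def dft(x):
    # Naive O(N^2) discrete Fourier transform, enough for a demo.
    N = len(x)
    return [sum(x[n] * exp(-2j * pi * k * n / N) for n in range(N))
            for k in range(N)]

N, k0 = 64, 8                                        # k0 cycles in N samples
z = [exp(2j * pi * k0 * n / N) for n in range(N)]    # e^{j 2 pi f0 t}, sampled

energy = [abs(c) ** 2 for c in dft(z)]
pos_energy = energy[k0]                  # the one occupied bin, energy N^2
neg_energy = sum(energy[N // 2 + 1:])    # bins N/2+1 .. N-1 are the negative
                                         # frequencies; for an analytic signal
                                         # they carry (numerically) no energy
print(pos_energy, neg_energy)
```

Multiplying by a band-limited envelope $m(t)$ only smears that spike over the band $[f_0-B,\,f_0+B]$, which stays on the positive side exactly when $f_0>B$; the rectangular pulse $\Pi_a(t)$ has no such band limit, which is why (6) is only approximate.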
• Nice manipulation +1 :), especially that floor equivalent. But I am wondering how we can find this without resorting to that floor. – SJ. Jun 4 '18 at 13:13
• @samjoe I wonder too. – Szeto Jun 4 '18 at 13:35
• Wow, thank you! Any tips on finding alternate forms of the floor function? – Mint Jun 4 '18 at 15:07
• @jiaminglimjm I discovered this form simply by playing around with functions.:) – Szeto Jun 4 '18 at 22:19
With Floor Function
Let $(1+\cos^2x )^{-1} = f(x)$. Now as you found, $$\int \frac{dx}{1+\cos^2 x} = \int \frac{\sec^2 x }{2+\tan^2 x} dx = \frac{1}{\sqrt{2} } \arctan\left(\frac{\tan x}{\sqrt 2}\right)$$ The issue is that the integral of a continuous function should be continuous. The one we found is discontinuous at all odd multiples of $\pi/2$. Let's analyse for $x\in [\tfrac{(2k-1)\pi}{2}, \tfrac{(2k+1)\pi}{2}]$. Then \begin{align} \int_{0}^{x} f(t) dt &= \int_{0}^{\pi/2}f(t) dt+\int_{\pi/2}^{3\pi/2}f(t) dt +\dots+ \int_{(2k-1)\pi/2}^{x}f(t) dt \\ &= \frac{\pi k}{\sqrt2} + \frac{1}{\sqrt 2}\arctan\left(\frac{\tan x}{\sqrt2}\right) \end{align}
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9736446479186301, "lm_q1q2_score": 0.8459285406527055, "lm_q2_score": 0.8688267762381844, "openwebmath_perplexity": 564.394025904595, "openwebmath_score": 0.9693926572799683, "tags": null, "url": "https://math.stackexchange.com/questions/2806904/continuous-antiderivative-of-frac11-cos2-x-without-the-floor-function" }
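The corrected, continuous antiderivative can be verified numerically. This sketch compares the closed form with the floor-style interval count $k$ against a plain trapezoid-rule integral from $0$ (the test points avoid the odd multiples of $\pi/2$ where $\tan$ blows up):

```python
from math import atan, tan, sqrt, pi, floor, cos

def F(x):
    # antiderivative of 1/(1 + cos^2 x), made continuous with a step term
    k = floor(x / pi + 0.5)      # index of the interval ((2k-1)pi/2, (2k+1)pi/2)
    return (k * pi + atan(tan(x) / sqrt(2))) / sqrt(2)

def numeric_integral(x, steps=200_000):
    # plain trapezoid rule for the integral of 1/(1 + cos^2 t) from 0 to x
    h = x / steps
    s = 0.5 * (1 / (1 + cos(0.0) ** 2) + 1 / (1 + cos(x) ** 2))
    for i in range(1, steps):
        s += 1 / (1 + cos(i * h) ** 2)
    return s * h

print(F(2.0), numeric_integral(2.0))    # the two values agree
```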
general-relativity, cosmology, space-expansion
Title: Why does the ant on a rubber rope paradox not work in our universe or a de Sitter universe?
The ant on a rubber rope paradox says that an ant moving along a rubber rope that can expand indefinitely will reach the other end of the rope regardless of how fast the rope expands and how slowly the ant moves. Why, then, do multiple sources say that there is a cosmic event horizon, and that two people separated by a distance exceeding the Hubble limit would not be able to meet?
In the ant-on-rope problem as stated in the Wikipedia article, the rope's expansion is linear. That means the ant's speed as a fraction of the rope length per unit time goes like $t^{-1}$, so the total fraction of the rope length it has traveled goes like $\log t$, which is a function that attains arbitrarily large values, but very slowly. It follows that the ant can reach the end of an arbitrarily long rope, but it may take a very long time.
If the rope's expansion is superlinear, that argument doesn't work. E.g., if it expands like $t^{1.001}$, the ant's speed goes like $t^{-1.001}$, and the integral of that goes like $-t^{-0.001}$, which approaches a maximum value as $t\to\infty$, so the ant can get stuck.
In ΛCDM cosmology, the expansion is superlinear.
{ "domain": "physics.stackexchange", "id": 96200, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, cosmology, space-expansion", "url": null }
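The contrast between the two integrals is easy to tabulate. In normalized units (ant speed and initial rope length set to 1), the fraction of the rope covered by time $T$ is $\ln T$ for linear expansion but $(1 - T^{-\epsilon})/\epsilon < 1/\epsilon$ for expansion like $t^{1+\epsilon}$:

```python
from math import log

def covered_linear(T):
    # integral of 1/t from 1 to T: unbounded, so the ant always finishes
    return log(T)

def covered_superlinear(T, eps=0.001):
    # integral of t^-(1+eps) from 1 to T: bounded above by 1/eps
    return (1.0 - T ** (-eps)) / eps

for T in (1e3, 1e6, 1e12, 1e100):
    print(T, covered_linear(T), covered_superlinear(T))
```

However long you wait, the superlinear column never exceeds 1000, so an ant needing to cover more than 1000 rope-lengths (in these units) gets stuck, while the linear column grows without bound.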
$$p_n = {2n-2 \over 2n-1} (p_{n-1} + {1 \over 2n-3} p_{n-2})$$ However, I'm not $$100\%$$ sure of my reasoning above, nor am I sure my answer matches the OP's answer and/or Ross's answer, so critiques are very welcome.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9828232924970204, "lm_q1q2_score": 0.8293997593561593, "lm_q2_score": 0.8438950986284991, "openwebmath_perplexity": 357.2889304760196, "openwebmath_score": 0.8477510809898376, "tags": null, "url": "https://math.stackexchange.com/questions/3383678/probability-of-at-least-one-matching-pair-for-n-pairs-of-socks" }
ros, rviz, 2dlaserscan, ros-kinetic Originally posted by metobom with karma: 27 on 2019-03-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by mgruhler on 2019-03-13: happy to see this fixed! So in essence, it was an incompatibilty caused by using the gazebo_ros_gpu_laser plugin with an incompatible graphics card... Did you post this upstream to the developers? If so, it makes sense to link against the respective issue here ... Comment by Essam_Ky96 on 2020-06-18: should i change my pc to run it ?? WT Comment by metobom on 2020-06-18: Intel GPU works too. When I get this error my Ubuntu setup was wrong and there was no Nvidia or Intel GPU seen by Ubuntu. Comment by lidarCheetah on 2021-04-08: can anyone tell me how to extract numerical data (distance and angle) from the lidar
{ "domain": "robotics.stackexchange", "id": 32553, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, rviz, 2dlaserscan, ros-kinetic", "url": null }
formal-languages, context-free, formal-grammars, proof-techniques
The terminal set is $\Sigma$. The start variable is not in the terminal set.
$$S \to \epsilon \mid SS \mid (_i\, S\, )_i \quad \text{for all } i > 1$$
But the form of that word is $(_i\, (_j\, )_i\, )_j$, which fails to match the definition. But I'm having trouble writing a formal proof of that.
Consider producing a leftmost derivation of ( [ ) ]. We can begin with $$ S\Rightarrow SS\stackrel{*}{\Rightarrow}SS\dotsc S $$ and to get the ( on the left we must eventually use the production $S\rightarrow(S)$ on the leftmost $S$, giving $$ S\stackrel{*}{\Rightarrow}(S)S\dotsc S $$ and then we'll have $$ S\stackrel{*}{\Rightarrow}(SS\dotsc S)S\dotsc S $$ and to get the [ as the second terminal on the left we must eventually use the production $S\rightarrow[S]$, giving $$ S\stackrel{*}{\Rightarrow}([S]SS\dotsc S)S\dotsc S\stackrel{*}{\Rightarrow}([SS\dotsc S]SS\dotsc S)S\dotsc S $$ and now we're stuck, since no production will give us the ) terminal we need on the left without introducing a ( first.
For a slightly different take, consider that $S$ can only derive strings in $D_2$, so to derive ( ] ) ] we must use the production $S\rightarrow (S)$, where the $S$ in the right term of the production must also derive a string in $D_2$, but that would imply $S$ derives ], which cannot be, since $]\notin D_2$.
{ "domain": "cs.stackexchange", "id": 3485, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-languages, context-free, formal-grammars, proof-techniques", "url": null }
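The non-membership claims used above (that ( [ ) ] and the lone ] are not in $D_2$) can be confirmed with the usual stack-based matcher; this sketch is just the standard balanced-bracket check, not part of the grammar argument itself:

```python
def in_dyck2(s):
    """Membership test for the Dyck language over the pairs () and []."""
    pairs = {')': '(', ']': '['}
    stack = []
    for ch in s:
        if ch in '([':
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            return False         # not a bracket symbol at all
    return not stack             # every opener must have been closed

print(in_dyck2('([])'), in_dyck2('([)]'), in_dyck2(']'))   # True False False
```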
quantum-gate, circuit-construction, universal-gates, gate-synthesis This works because if the control qubit is in state $|0\rangle$ the action on the target is $H^2=\mathbb{I}$, while for $|1\rangle$ it applies the circuit for $U$. For different $U$, in particular if it acts on several qubits, coming up with such a circuit might be cumbersome. Is there a recipe to obtain the circuit of $C_U$ given that you know how to build $U$? The question may not be entirely well-defined, in the sense that to ask for a way to compute $C(U)$ from a decomposition of $U$ you need to specify the set of gates that you are willing to use. Indeed, it is a known result that any $n$-qubit gate can be exactly decomposed using $\text{CNOT}$ and single-qubit operations, so that a naive answer to the question would be: just decompose $C(U)$ using single-qubit and $\text{CNOT}$s. A different interpretation of the question is the following: given $U$, can I compute $C(U)$ using a set of single-qubit operations and $\text{CNOT}$s not on the control qubit, and $\text{CNOT}$s with the control being the first qubit? This can be done generalising a result found in chapter four of Nielsen & Chuang. Let $U$ be a single-qubit gate. It can then be proved that $U$ can always be written as $U = e^{i\alpha} AXBXC$, where $X$ is the Pauli X gate, and $A, B$ and $C$ are single-qubit operations such that $ABC=I$ (see N&C for a proof). It follows that $$C(U)=\Phi_1(\alpha)A_2C(X)B_2C(X) C_2,$$
{ "domain": "quantumcomputing.stackexchange", "id": 31, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-gate, circuit-construction, universal-gates, gate-synthesis", "url": null }
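Whatever decomposition is used, the matrix being constructed is always the block-diagonal $C(U)=\operatorname{diag}(I,U)$. A quick pure-Python check (with $U=H$, matrices as nested lists) confirms the defining behaviour: identity on the $|0\rangle$ control branch and $U$ on the $|1\rangle$ branch:

```python
from math import sqrt

def controlled(U):
    """Block matrix diag(I, U) for a single-qubit gate U (2x2 nested lists)."""
    CU = [[0.0] * 4 for _ in range(4)]
    CU[0][0] = CU[1][1] = 1.0            # control |0>: do nothing
    for i in range(2):
        for j in range(2):
            CU[2 + i][2 + j] = U[i][j]   # control |1>: apply U
    return CU

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]
CH = controlled(H)

print(apply(CH, [1, 0, 0, 0]))   # |0>|0> is untouched: [1.0, 0.0, 0.0, 0.0]
print(apply(CH, [0, 0, 1, 0]))   # |1>|0> becomes |1>|+>: [0.0, 0.0, 0.707..., 0.707...]
```

Circuit identities like $C(U)=\Phi_1(\alpha)A_2C(X)B_2C(X)C_2$ are ways of realizing this matrix with a restricted gate set; multiplying out the right-hand side should reproduce exactly this block structure.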
c, tree, hash-map /*************************************************************************** * If p_map contains the key p_key, associates it with value p_value and * * returns the old value of that key. * ***************************************************************************/ void* map_t_put (map_t* p_map, void* p_key, void* p_value); /*************************************************************************** * Returns a positive value if p_key is mapped to some value in this map. * ***************************************************************************/ int map_t_contains_key (map_t* p_map, void* p_key);
{ "domain": "codereview.stackexchange", "id": 15453, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, tree, hash-map", "url": null }
ros [rosmake-1] Finished <<< bond ROS_NOBUILD in package bond [rosmake-2] Starting >>> rospy [ make ] [rosmake-0] Finished <<< pluginlib ROS_NOBUILD in package pluginlib [rosmake-1] Starting >>> opencv2 [ make ] [rosmake-3] Starting >>> bondcpp [ make ] [rosmake-2] Finished <<< rospy No Makefile in package rospy [rosmake-0] Starting >>> geometry_msgs [ make ] [rosmake-2] Starting >>> rosservice [ make ] [rosmake-3] Finished <<< bondcpp ROS_NOBUILD in package bondcpp [rosmake-2] Finished <<< rosservice No Makefile in package rosservice [rosmake-1] Finished <<< opencv2 ROS_NOBUILD in package opencv2 [rosmake-3] Starting >>> nodelet [ make ] [rosmake-2] Starting >>> dynamic_reconfigure [ make ] [rosmake-0] Finished <<< geometry_msgs No Makefile in package geometry_msgs [rosmake-1] Starting >>> bullet [ make ] [rosmake-0] Starting >>> sensor_msgs [ make ] [rosmake-2] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure [rosmake-3] Finished <<< nodelet ROS_NOBUILD in package nodelet [rosmake-0] Finished <<< sensor_msgs No Makefile in package sensor_msgs [rosmake-2] Starting >>> angles [ make ] [rosmake-0] Starting >>> rostest [ make ] [rosmake-1] Finished <<< bullet ROS_NOBUILD in package bullet [rosmake-3] Starting >>> roswtf [ make ]
{ "domain": "robotics.stackexchange", "id": 11266, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros", "url": null }
ds.algorithms, graph-theory, co.combinatorics
Graphs with bounded tree-width or clique-width
Edge dominating set and colorings on graphs with fixed clique-width (2001). Daniel Kobler, Udi Rotics
The algorithms here require a k-expression (an algebraic formula for constructing a graph with a bounded clique-width) as a parameter. For some graphs, this expression can be computed in linear time.
Yaroslav pointed out methods for counting colourings in bounded tree-width graphs; see his answer below.
These two study graph families where $k$ vertices or edges can be either added or deleted.
Parameterized complexity of vertex colouring (2003). Leizhen Cai.
Colouring can be solved in polynomial time when adding or deleting $k$ edges (for fixed $k$) in split graphs.
Parameterized coloring problems on chordal graphs (2006). Dániel Marx.
For fixed $k$, chordal graphs to which $k$ edges are added can be coloured in polynomial time.
Graphs not containing particular subgraphs
Deciding k-Colorability of P5-Free Graphs in Polynomial Time (2010). Chính T. Hoàng, Marcin Kamínski, Vadim Lozin, Joe Sawada, Xiao Shu.
3-colouring AT-free graphs in polynomial time (2010). Juraj Stacho.
Colouring quadtrees
{ "domain": "cstheory.stackexchange", "id": 51, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ds.algorithms, graph-theory, co.combinatorics", "url": null }
ros, slam, navigation, ros2, sensor-fusion the application consists of an indoor environment (mapped in advance) Let's review: you put "SLAM" in the question when you're not doing SLAM, and you included an outdoor image in your question when the application is indoor. Got it. Comment by Theodoro Cardoso on 2023-05-18: Well, I'm sure I could have done a better job explaining it and I apologize for the lack of clarity. I believed that updating the base map with obstacles would turn the localization-only problem into SLAM but that's apparently not the case. The point of this loosely formulated question was to get a better understanding of how to use sensors that are not in the robot to aid navigation results, hopefully getting a hint on where to start (packages, repositories, or any helpful material on the topic)
{ "domain": "robotics.stackexchange", "id": 38385, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, slam, navigation, ros2, sensor-fusion", "url": null }
Factoring can be done in different numerical domains. By default, it is carried out in the ring of integers Z:
(%i6) factor(x^8-41*x^4+400);
$\tag{%o6} \left( x-2\right) \, \left( x+2\right) \, \left( {{x}^{2}}-5\right) \, \left( {{x}^{2}}+4\right) \, \left( {{x}^{2}}+5\right)$
The following modification of this command executes the factorization in the Gaussian integers Z[i]:
(%i7) gfactor(x^8-41*x^4+400);
$\tag{%o7} \left( x-2\right) \, \left( x+2\right) \, \left( x-2\,\%i\right) \, \left( x+2\,\%i\right) \, \left( {{x}^{2}}-5\right) \, \left( {{x}^{2}}+5\right)$
It is possible to factor within other domains; it is enough to adjoin the appropriate roots (this is done by declaring the minimal polynomials that they satisfy). For instance, to factor in Z[sqrt(5)], we would do
(%i8) subst(a=sqrt(5),factor(x^8-41*x^4+400,a^2-5));
$\tag{%o8} \left( x-2\right) \, \left( x+2\right) \, \left( x-\sqrt{5}\right) \, \left( x+\sqrt{5}\right) \, \left( {{x}^{2}}+4\right) \, \left( {{x}^{2}}+5\right)$
Due to their importance in many topics, Maxima has a good number of functions related to polynomials:
{ "domain": "uaslp.mx", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.972830769252026, "lm_q1q2_score": 0.8085615346754277, "lm_q2_score": 0.8311430520409023, "openwebmath_perplexity": 8210.193559783387, "openwebmath_score": 0.9115439653396606, "tags": null, "url": "http://galia.fc.uaslp.mx/~jvallejo/Maxima%20Mini-Tour%2019-May-2019.html" }
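The factorization over Z can be double-checked at the coefficient level by multiplying the factors back together with a plain convolution (this is independent of Maxima; polynomials are coefficient lists, lowest degree first):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# (x-2)(x+2)(x^2-5)(x^2+4)(x^2+5), each factor lowest-degree-first
factors = [[-2, 1], [2, 1], [-5, 0, 1], [4, 0, 1], [5, 0, 1]]
product = [1]
for f in factors:
    product = poly_mul(product, f)

print(product)   # [400, 0, 0, 0, -41, 0, 0, 0, 1], i.e. x^8 - 41*x^4 + 400
```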
newtonian-mechanics, general-relativity, forces, spacetime
So, let's just get this straight. The book sitting on the table in front of me is accelerating upwards all the time? But when I push it off the table and it falls down, then as it falls down it is not accelerating? Is that what you're saying?
What I'm saying, and what any relativist would say, is that:
the book on the table has a non-zero proper acceleration
the falling book has a zero proper acceleration
And this is all we can say. The question of which has a non-zero three-acceleration (Newtonian acceleration) is meaningless because that quantity is not frame-invariant. The question of which has a non-zero proper acceleration is meaningful – even if the answer isn't what you expected.
{ "domain": "physics.stackexchange", "id": 47448, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, general-relativity, forces, spacetime", "url": null }
ros, turtlebot-gazebo, kobuki, ros-indigo
Param xml is
<param command="$(arg urdf_file)" name="robot_description"/>
The traceback for the exception was written to the log file
I think the important thing is rospkg.common.ResourceNotFound: kobuki_description
Please help me solve this. And YES, I have tried searching before asking, such as: this1 this2 this3
But they don't really give a solution/fix the problem for me. Any help will be appreciated
Originally posted by alienmon on ROS Answers with karma: 582 on 2016-09-09
Post score: 0
In my prev post, I uninstalled and reinstalled the turtlebot. It turns out that I ALSO had to uninstall kobuki before reinstalling, so
sudo apt-get remove turtlebot-*
sudo apt-get remove kobuki-*
sudo apt-get install ros-indigo-turtlebot ros-indigo-turtlebot-apps ros-indigo-turtlebot-interactions ros-indigo-turtlebot-simulator ros-indigo-kobuki-ftdi ros-indigo-rocon-remocon ros-indigo-rocon-qt-library ros-indigo-ar-track-alvar-msgs
It works for me!!
Originally posted by alienmon with karma: 582 on 2016-09-13
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 25718, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, turtlebot-gazebo, kobuki, ros-indigo", "url": null }
quantum-mechanics, terminology, foundations
On the other hand, it is important to realize that not all states in the joint state space are of this form, because entangled states are simply not separable as the tensor product of two individual system states. Those cannot be constructed explicitly within the joint state space given only the classes: for the entangled state given by the even superposition of (system 1 in $[u]$, system 2 in $[v]$) with (system 1 in $[w]$, system 2 in $[x]$), that phrase is not sufficient as a description, because in forming the class $$\left[\frac{1}{\sqrt{2}}(u\otimes v + w\otimes x)\right]$$ the choices of representatives from $[u]$ and $[w]$ (resp. $[v]$ and $[x]$) do affect which class you end up in.
This is, however, not an artifact of the math, and it has the physical backing that you've failed to specify a common phase reference for $[u]$ and $[w]$ (resp. $[v]$ and $[x]$), and you're not even able to form meaningful single-system superpositions $[u+w]$ or $[v+x]$ without that common phase reference.
What does that mean? Basically, that while the projective space $\mathbb{C} \mathbf{P}^{d-1}$ does trim out some 'unphysical' aspects of the description from the vector-space picture, that doesn't mean that you can discard the vector-space structure in $\mathbb{C}^{d}$, which is essential to formulate the full ontology of the system.
And finally, this brings me to my final comment on your question: an operator that if given two singletons/single points/values returns a Bloch sphere
{ "domain": "physics.stackexchange", "id": 50901, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, terminology, foundations", "url": null }
Re: In the coordinate plane a slope of the line K is 4 times the y-interce [#permalink] 22 Nov 2017, 12:22
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9715639636617014, "lm_q1q2_score": 0.8423806927537235, "lm_q2_score": 0.8670357529306639, "openwebmath_perplexity": 6758.148936717705, "openwebmath_score": 0.46220171451568604, "tags": null, "url": "https://gmatclub.com/forum/in-the-coordinate-plane-a-slope-of-the-line-k-is-4-times-the-y-interce-219828.html" }
c++, performance, strings, c++11, programming-challenge Title: Java vs C++ (JAVAC) Here's the problem for Java vs C++ (JAVAC): Java and C++ use different naming conventions: In Java a multiword identifier is constructed in the following manner: The first word is written starting from the small letter, and the following ones are written starting from the capital letter, no separators are used. All other letters are small. Examples of a Java identifier are: javaIdentifier, longAndMnemonicIdentifier, name, nEERC. In C++ a multiword identifier is constructed in the following manner: Use only small letters in their identifiers. To separate words they use underscore character ‘_’. Examples of C++ identifiers are: c_identifier, long_and_mnemonic_identifier, name Note: When identifiers consist of a single word, the Java and C++ naming conventions are identical: You are writing a translator that is intended to translate C++ programs to Java and vice versa. Of course, identifiers in the translated program must be formatted according to its language's naming conventions — otherwise people will never like your translator. The first thing you would like to write is an identifier translation routine. Given an identifier, it would detect whether it is a Java identifier or a C++ identifier and translate it to the other dialect. If it is neither, then your routine should report an error. Translation must preserve the order of words and must only change the case of letters and/or add/remove underscores. How can I improve this code? How can I make it faster? Are there better solutions?
#include <iostream>
#include <string>
#include <cctype>
{ "domain": "codereview.stackexchange", "id": 11575, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, strings, c++11, programming-challenge", "url": null }
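The identifier-translation routine described in the problem statement can be sketched outside the reviewed C++ as well. The following Python is my own illustration, not the code under review; the `"Error!"` marker string and the exact validity rules for leading capitals and stray underscores are assumptions, since the statement only says to "report an error".

```python
def translate_identifier(name: str) -> str:
    """Translate between Java camelCase and C++ snake_case identifiers.

    Returns "Error!" when the identifier follows neither convention.
    """
    has_underscore = "_" in name
    has_upper = any(c.isupper() for c in name)

    # An empty name, a leading capital, or a mix of both conventions is invalid.
    if not name or name[0].isupper() or (has_underscore and has_upper):
        return "Error!"

    if has_underscore:
        # C++ -> Java: "long_and_mnemonic_identifier" -> "longAndMnemonicIdentifier"
        words = name.split("_")
        if any(w == "" for w in words):   # catches "_x", "x_", "a__b"
            return "Error!"
        return words[0] + "".join(w.capitalize() for w in words[1:])

    # Java -> C++: insert "_" before each capital letter, then lowercase it.
    out = []
    for c in name:
        if c.isupper():
            out.append("_")
            out.append(c.lower())
        else:
            out.append(c)
    return "".join(out)
```

A single-word identifier such as `name` passes through unchanged, matching the note in the statement that the two conventions coincide for one word.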
botany, terminology, nomenclature Regnum Animale: the animals; Regnum Vegetabile: the plants; Regnum Lapideum: the minerals (you read it right). Note that, in this classification, "animals" correspond to what nowadays we call animals and protozoans, and "plants" correspond to what nowadays we call plants, algae, fungi and bacteria. You have to keep in mind that this book was first published in 1735, well before evolutionary biology was proposed in the XIX century and established in the XX century. Therefore, it is a book published when fixism was the current paradigm, full of mentions of the scala naturae. So, the plants (as well as the animals) showed a continuum of species, going from the lower plants (the bacteria) to the higher plants (the flowering ones). It's worth mentioning again that, by that time, bacteria were plants: Phylum Schyzophyta, to be more precise. Thus, we have "lower plants" and "higher plants", "lower animals" and "higher animals", as well as "lower minerals" and "higher minerals"! Unfortunately, this terminology is so embedded in the biological sciences that even today, as I mentioned, we struggle to get rid of it. Just drop "higher plants", whatever it means As your Wikipedia link says, "higher plants" is a synonym of vascular plants. However, there are a lot of problems here: First, this is a remnant of the scala naturae and, just because of that, should be avoided. Think of it as a meaningless term, just like "more evolved organism". Second, there is no clear and indisputable definition of what is a "higher" plant. Some authors used to define the "higher plants" as the Angiosperms only, or the seed plants (Angiosperms + Gymnosperms), or the vascular plants (Angiosperms, Gymnosperms and Pteridophyta). For instance, in lusophone biology books, a division into three groups was very common: lower plants: bacteria and algae; intermediate plants: bryophytes and pteridophytes; higher plants: gymnosperms and angiosperms.
{ "domain": "biology.stackexchange", "id": 12394, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "botany, terminology, nomenclature", "url": null }
c++, performance, recursion, lambda, c++20 template<class ExPo, template<class...> class Container, class Function, class... Ts> requires (std::is_execution_policy_v<std::remove_cvref_t<ExPo>> && is_iterable<Container<Ts...>> && !is_elements_iterable<Container<Ts...>>) // non-recursive version auto recursive_transform(ExPo execution_policy, const Container<Ts...>& input, const Function& f) { using TransformedValueType = decltype(f(*input.cbegin())); Container<TransformedValueType> output(input.size()); std::transform(execution_policy, input.cbegin(), input.cend(), output.begin(), f); return output; } template<class ExPo, template<class...> class Container, class Function, class... Ts> requires (std::is_execution_policy_v<std::remove_cvref_t<ExPo>> && is_elements_iterable<Container<Ts...>>) auto recursive_transform(ExPo execution_policy, const Container<Ts...>& input, const Function& f) { using TransformedValueType = decltype(recursive_transform(*input.cbegin(), f)); Container<TransformedValueType> output(input.size()); std::transform(execution_policy, input.cbegin(), input.cend(), output.begin(), [&](auto& element) { return recursive_transform(execution_policy, element, f); } ); return output; }
{ "domain": "codereview.stackexchange", "id": 40216, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, recursion, lambda, c++20", "url": null }
### Show Tags

19 May 2015, 12:37
we have here 4 (4 years) successive increases of 25% or *1,25 --> 1,25^4 * X = 6250, X = 2560
See MGMAT (Percents) for detailed explanation of such question types.....
e-GMAT Representative
Joined: 04 Jan 2015
Posts: 878
Re: Each year for 4 years, a farmer increased the number of trees in a [#permalink]

### Show Tags

20 May 2015, 03:32
BrainLab wrote:
we have here 4 (4 years) successive increases of 25% or *1,25 --> 1,25^4 * X = 6250, X = 2560
See MGMAT (Percents) for detailed explanation of such question types.....

Dear BrainLab

Perfect logic but for easier calculation, you may want to work with ratio here (1/4 increase per annum) instead of percentages (25% increase per annum).

Both convey the same thing but the equation $$(\frac{5}{4})^4*X = 6250$$ will take lesser time to solve (especially if you know that $$5^4 = 625$$) than $$(1.25)^4*X = 6250$$

Hope this was useful! Japinder
Math Expert
Joined: 02 Sep 2009
Posts: 44373
Re: Each year for 4 years, a farmer increased the number of trees in a [#permalink]

### Show Tags

18 Jan 2016, 23:36
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. Yes\n2. Yes\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 1, "lm_q1q2_score": 0.8577681049901036, "lm_q2_score": 0.8577681049901036, "openwebmath_perplexity": 1384.2064824380445, "openwebmath_score": 0.570276141166687, "tags": null, "url": "https://gmatclub.com/forum/each-year-for-4-years-a-farmer-increased-the-number-of-trees-in-a-135487.html" }
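The arithmetic in the thread above is easy to sanity-check in a few lines (my own verification, not part of the original posts): undoing four successive 25% increases from 6250 recovers the starting count of 2560.

```python
# Four successive 25% increases: final = X * (5/4)**4, so X = final / (5/4)**4.
final_count = 6250
growth = 5 / 4                  # a 25% increase per year

initial = final_count / growth**4
print(initial)                  # 2560.0

# Equivalently, using Japinder's hint that 5**4 = 625:
# 6250 * 4**4 / 5**4 = 10 * 256 = 2560, with exact integer arithmetic.
assert 6250 * 4**4 // 5**4 == 2560
```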
random, simulation, kotlin for(k in 0..mutation_rate_){ genes.flip(indices[k]) } } /* PRE: 'this' is a valid genome instance and 'age' is smaller or equal to genome_size * POST: Counts all the "bad genes" in genome_ up to the 'age'-th entry. * A gene is bad if the entry in the BitSet is set to 'true'. */ fun countBad(age: age_t): Int { return genes.get(0, age).cardinality() } companion object{ var genome_size: Int = 64 fun setMutationRate(age: age_t) { mutation_rate_ = age } private var mutation_rate_: age_t = 0 } } animal.kt package penna import kotlin.random.Random.Default.nextDouble class Animal(){ /* Animal class for the Penna simulation. * The Animal class has several private members: * 1) 'mutation_rate_', 'reproduction_age_' and 'threshold_' are all parameters * that stay constant for all animals of a population. * The respective values can all be retrieved and set with the corresponding * get and set functions. * 2) 'age_' represents the current age of the animal. By default construction it is set to 0. * 3) 'genome_' is a Genome class instance in which we will save the genome of an animal. * When constructed all genes are set to be good (aka false). * 4) 'pregnant_' is a variable of type bool and tells you if the animal is currently pregnant. * The status of each animal can be retrieved via the member function isPregnant(). */ // Default constructor private var age = 0 private var genome: Genome = Genome() private var pregnant: Boolean = false constructor(mum_genes: Genome): this(){ age = 0 genome = mum_genes pregnant = false } fun isPregnant(): Boolean { return pregnant } fun age(): Int { return age }
{ "domain": "codereview.stackexchange", "id": 38003, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "random, simulation, kotlin", "url": null }
c++, performance, algorithm, pathfinding, c++20 std::vector<Node> path_nodes; Node previous_node = target_node; path_nodes.push_back(target_node); while (true) { Node* next_node = parent_map[previous_node]; if (next_node == nullptr) { std::reverse(path_nodes.begin(), path_nodes.end()); return Path<Node, Weight>{path_nodes, weight_function}; } path_nodes.push_back(*next_node); previous_node = *next_node; } } template<typename Node = int, typename Weight = double> Path<Node, Weight> tracebackPath( const Node& touch_node, std::unordered_map<Node, Node*>& forward_parent_map, std::unordered_map<Node, Node*>& backward_parent_map, DirectedGraphWeightFunction<Node, Weight>& weight_function) { std::vector<Node> path_nodes; Node previous_node = touch_node; path_nodes.push_back(touch_node); while (true) { Node* next_node = forward_parent_map[previous_node]; if (next_node == nullptr) { std::reverse(path_nodes.begin(), path_nodes.end()); break; } path_nodes.push_back(*next_node); previous_node = *next_node; } Node* next_node = backward_parent_map[touch_node]; while (next_node != nullptr) { path_nodes.push_back(*next_node); next_node = backward_parent_map[*next_node]; } return Path<Node, Weight>{path_nodes, weight_function}; }
{ "domain": "codereview.stackexchange", "id": 42941, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, algorithm, pathfinding, c++20", "url": null }
quantum-mechanics, wavefunction, potential-energy, harmonic-oscillator &= |c_1|^2 \langle \phi_1| x^2 | \phi_1 \rangle \langle \phi_0 | \phi_0 \rangle + |c_2|^2 \langle \phi_0| x^2 | \phi_0 \rangle \langle \phi_1 | \phi_1 \rangle \end{align} In the last step I used that \begin{equation} \int \mathrm{d}y \; \phi_0^*(y) \phi_1(y) = \langle \phi_0 | \phi_1 \rangle = 0 \end{equation} and \begin{equation} \int \mathrm{d}y \; \phi_1^*(y) \phi_0(y) = \langle \phi_1 | \phi_0 \rangle = 0 \end{equation} because $|\phi_1\rangle$ and $|\phi_0\rangle$ are orthogonal states.
{ "domain": "physics.stackexchange", "id": 100124, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, wavefunction, potential-energy, harmonic-oscillator", "url": null }
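The orthogonality used in the last step above can be checked numerically. This is my own addition, in dimensionless oscillator units, using the standard ground and first excited eigenfunctions $\phi_0(y) = \pi^{-1/4} e^{-y^2/2}$ and $\phi_1(y) = \pi^{-1/4}\sqrt{2}\, y\, e^{-y^2/2}$:

```python
import math

def phi0(y):
    # Ground state of the harmonic oscillator (dimensionless units).
    return math.pi ** -0.25 * math.exp(-y * y / 2)

def phi1(y):
    # First excited state.
    return math.pi ** -0.25 * math.sqrt(2) * y * math.exp(-y * y / 2)

def integrate(f, lo=-10.0, hi=10.0, steps=20000):
    """Plain midpoint rule; the Gaussian integrands decay fast enough
    for this finite range to be an excellent approximation."""
    h = (hi - lo) / steps
    return sum(f(lo + (i + 0.5) * h) for i in range(steps)) * h

assert abs(integrate(lambda y: phi0(y) * phi1(y))) < 1e-9        # orthogonal
assert abs(integrate(lambda y: phi0(y) ** 2) - 1.0) < 1e-6       # normalized
assert abs(integrate(lambda y: phi1(y) ** 2) - 1.0) < 1e-6
print("phi0 and phi1 are orthonormal (numerically)")
```

The vanishing cross integral is exactly the $\langle \phi_0 | \phi_1 \rangle = 0$ used in the derivation.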
java, optimization, game, hangman int j = 0; String line = ""; for (j = 0; j < 64; j++) { wordLength[j] = wordList[j].length();// gets length of words }// end for int m = 0; // creates line first then put into .setText while (m < wordLength[level]) { line += "__ "; m++; }// end for jlLines.setText(line); tf.addActionListener(new ActionListener() { int wrong = 0; int right = 0;
{ "domain": "codereview.stackexchange", "id": 5471, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, optimization, game, hangman", "url": null }
c, hash-map /*hashfunction*/ /*gets a string as argument, returns the index inside the hash-array: 0 - 51. A string is match to the index of its first character: a-z or A-Z. If several words start with the same character, a linked-list joins each new key.*/ int hashfunction(char* name) { if(NULL == name){ return -1; /*name = NULL*/ } /*A-Z: ASCII: 65-90*/ if( name[0] - 97 <0 ){ return (name[0]-65); /*index 0 - 25 in the hash-array*/ } /*a-z: ASCII: 97-122*/ return (26+ name[0]-97); /*index 26 - 51 in the hash-array*/ }
{ "domain": "codereview.stackexchange", "id": 26988, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, hash-map", "url": null }
– Divides the figure window into m x n matrix of small axes and selects the p th. The beautiful Mandelbrot Set (pictured here) is based on Complex Numbers. This example shows how to create a variety of 3-D plots in MATLAB®. It's thrown into a mix of other questions. The basic imaginary unit is equal to the square root of -1. It works quite fine, exceptionally when it Comes to calculate the square root of a complex number. plot vector using complex numbers. Choose Math Help Item Calculus, Derivatives Calculus, Integration Calculus, Quotient Rule Coins, Counting Combinations, Finding all Complex Numbers, Adding of Complex Numbers, Calculating with Complex Numbers, Multiplying Complex Numbers, Powers of Complex Numbers. There is absolutely no other context to this question. Working with complex numbers in MATLAB is easy. I suspect that there is some irregularity in that. Once this step is complete, you should see your Excel file in the current folder section in MATLAB. Learn more about complex numbers, z palne, magnitude and phase response. Lab 1 should introduce students to MATLAB, m files, command window, workspace, arrays, multiplication, powers, exp, sum, component-wise operations, defining complex numbers, complex arrays, plot, abs, phase, for loop and repeated addition for computing sums. Just type your formula into the top box. If a number is real, then its real part equals itself. A menu should open up that will allow you to add x and y axis labels, change the range of the x-y axes; add a title to the plot, and so on. Download the le Complex. A function of a complex variable, w = f(z), can be thought in terms of its real components: We will demonstrate a number of ways to visualize the set of points (x, y, u, v) satisfying this equation. We have met a similar concept to "polar form" before, in Polar Coordinates, part of the analytical geometry section. A complex number, z, has the form x+iy, where x and y are real and i is. 
0000i >> B=A' (conjugate transpose) B = 1. In linear algebra of MATLAB we call these scalars. If the input vector contains complex numbers, MATLAB plots the real part of each element (on the horizontal axis) versus the imaginary part (on the
{ "domain": "freccezena.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9719924802053234, "lm_q1q2_score": 0.8162151000011526, "lm_q2_score": 0.8397339656668286, "openwebmath_perplexity": 660.3944330768784, "openwebmath_score": 0.5779609084129333, "tags": null, "url": "http://aapo.freccezena.it/how-to-plot-complex-numbers-in-matlab.html" }
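The core idea of the page above, that plotting a complex vector means plotting real parts against imaginary parts, carries over outside MATLAB too. A small Python sketch of the same idea (my addition, not from the page):

```python
# Each complex number z = x + iy becomes the point (x, y) in the plane.
zs = [1 + 2j, -0.5 + 1j, 2 - 1j, 1j * 1j]   # note: 1j * 1j == -1, a real number

xs = [z.real for z in zs]   # horizontal axis: real parts
ys = [z.imag for z in zs]   # vertical axis: imaginary parts

print(list(zip(xs, ys)))    # [(1.0, 2.0), (-0.5, 1.0), (2.0, -1.0), (-1.0, 0.0)]

# The basic imaginary unit squares to -1, as the page states:
assert 1j * 1j == -1
```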
c++, chess GitHub For the simulate move function: it is unclear if it operates on the real board, or on a copy of it. If it modifies the real board, it should undo its effects at the end. About looking up the kings in that super-complicated for-if construct: according to the other code snippet, a bishop knows its own location. I would assume this also applies to the kings, and if you could access them directly, their location would not be something to look up. I would suggest an array with dedicated location of the pieces, so you could both access the pieces directly, and also iterate over the array if necessary. I think the controlled squares could be collected in an array mirroring the board itself: when "ranged" pieces are relatively free, those vectors will grow large (also because of repeats), while a board-sized array has fixed size and constant-time access. It could be boolean, storing control by one player, or with 4 possible values it could encode control by both players. For simplifying a couple of things, including the bishop code too: in chess-programming there is a common trick of storing a 12*12 (first iteration) board, having a border of 2 "occupied" squares in every direction (2 because of the knight). This way the abundant checks for coordinates running out of the board can be spared, instead one will inevitably encounter a border "piece" when trying to index outside the playfield. The second iteration is having this trick taken forward using linear addressing of the table (instead of having a 2D array and 2 coordinates), then a 10x12 array is enough, as a single 2-square wide border catches both horizontal cases (over and "underflowing" a row).
{ "domain": "codereview.stackexchange", "id": 30783, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, chess", "url": null }
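The 10x12 "mailbox" trick described in that review can be sketched concretely. The following Python illustration is mine (sentinel values and helper names are hypothetical, not from the reviewed code): the board is a flat 120-element array whose border cells hold a sentinel, so knight-move generation needs no explicit bounds checks.

```python
OFFBOARD = -1   # sentinel stored in the border cells
EMPTY = 0

# 10 columns x 12 rows, addressed linearly; the playable 8x8 area
# occupies rows 2..9 and columns 1..8.
board = [OFFBOARD] * 120
for rank in range(8):
    for file in range(8):
        board[(rank + 2) * 10 + (file + 1)] = EMPTY

def square(file, rank):
    """Map 0-based (file, rank) to the linear 10x12 index."""
    return (rank + 2) * 10 + (file + 1)

# The classic knight offsets for a 10-wide row layout.
KNIGHT_OFFSETS = (-21, -19, -12, -8, 8, 12, 19, 21)

def knight_moves(index):
    """Knight targets from `index`; off-board jumps simply land on the
    sentinel border and are filtered out, with no coordinate checks."""
    return [index + d for d in KNIGHT_OFFSETS
            if board[index + d] != OFFBOARD]
```

From the corner a1 (`square(0, 0)` = 21) only the two legal targets survive; from a central square all eight offsets land on playable cells.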
special-relativity If the particle "feels" constant acceleration this means that in its proper frame we must have ${d v^a \over d t} = $ constant. We can turn this into a covariant statement for the 4-acceleration $w^i$. Namely, if ${d v^a \over d t} = $ constant, we have, in the proper frame, $w^i w_i = -\left({d v^a \over d t}\right)^2 = $ constant (remember that $w^0 = 0$). Now, $w^i w_i$ = constant must hold in every frame, because it's a Lorentz invariant quantity. This is how you can define uniformly accelerated motion in a covariant way. Letting \begin{equation} w^i w_i = - a^2 = \text{constant} \end{equation} we can solve this in a generic frame assuming motion in the $x$ axis. So we take $w^i = (w^0,w^1,0,0)$. Plugging into the above we find \begin{equation} (w^0)^2 - (w^1)^2 = - a^2 \implies \qquad w^0 = a \sinh{ \eta}, \qquad w^1 = a \cosh{ \eta} \end{equation} Now we have to relate this to the 4-velocity $u^i$. Assuming $u^i = (u^0, u^1, 0, 0)$, and imposing the mass-shell constraint $u^2 = 1$ we similarly get $u^0 = \cosh{\mu}$ and $u^1 = \sinh{\mu}$. Finally, \begin{equation} w^0 = {d u^0 \over d S} = {d \mu \over d S} \sinh{\mu}, \qquad w^1 = {d u^1 \over d S} = {d \mu \over d S} \cosh{\mu}. \end{equation} So we conclude \begin{equation}
{ "domain": "physics.stackexchange", "id": 86353, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity", "url": null }
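The hyperbolic parametrisation in that derivation is easy to sanity-check numerically (my own verification of the algebra, in units with $c = 1$ and metric signature $(+,-,-,-)$): for any rapidity, $w = (a\sinh\eta,\, a\cosh\eta, 0, 0)$ gives $w \cdot w = -a^2$, and $u = (\cosh\mu,\, \sinh\mu, 0, 0)$ gives $u \cdot u = 1$.

```python
import math

def minkowski_sq(v):
    """v . v with signature (+, -, -, -)."""
    t, x, y, z = v
    return t * t - x * x - y * y - z * z

a = 2.5   # proper acceleration; an arbitrary sample value
for eta in (0.0, 0.7, 1.3, 3.0):
    # 4-acceleration: squares to -a^2 for every eta.
    w = (a * math.sinh(eta), a * math.cosh(eta), 0.0, 0.0)
    assert math.isclose(minkowski_sq(w), -a * a, rel_tol=1e-12)

    # 4-velocity: cosh^2 - sinh^2 = 1 is the mass-shell constraint.
    u = (math.cosh(eta), math.sinh(eta), 0.0, 0.0)
    assert math.isclose(minkowski_sq(u), 1.0, abs_tol=1e-12)

print("w.w = -a^2 and u.u = 1 hold for all sampled rapidities")
```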
quantum-state To reiterate, quantum amplitudes are similar to classical probabilities. They are statistical features that are not directly stored by the systems that obey the statistics. The Holevo-Nayak theorem says that n qubits cannot store any more than n classical bits. That's the real answer to the question of how a qubit can encode or store infinite information, "theoretically" or otherwise. Answer: It can't.
{ "domain": "quantumcomputing.stackexchange", "id": 1041, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-state", "url": null }
inorganic-chemistry, aqueous-solution, isotope Title: Is there oxygen isotope exchange between dissolved CO2 and H2O? If you had a sample containing an elevated concentration of $\ce{H2^{18}O}$, and bubbled $\ce{C^{16}O2}$ through it, would some of the oxygen-18 isotope be transferred from water to carbon dioxide? I am aware that this occurs with hydrogen isotopes and am curious if it works for larger and heavier atoms such as oxygen. As Todd Minehardt points out, not only can the oxygen be exchanged, but this exchange is applied in aqueous geochemistry. The exchange occurs through the formation of carbonic acid, given in blue below: $\ce{CO2 + H2O <=> \color{blue}{H2CO3}}$ Once the carbonic acid molecule is formed, its oxygen atoms effectively become equivalent because they rapidly exchange hydrogen ions with the water solvent. So when the above (dynamically equilibrated) reaction is reversed, any one of the carbonic acid oxygens might end up as a water molecule, including atoms that were originally in the carbon dioxide.
{ "domain": "chemistry.stackexchange", "id": 16467, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, aqueous-solution, isotope", "url": null }
c++, game-of-life, opencv void nextRound() { std::vector <std::vector<bool>> ret(this->height, std::vector<bool>(width, false)); for (auto y = 0UL; y < this->cells.size(); y++) { for (auto x = 0UL; x < this->cells[y].size(); x++) { int aliveNs = this->aliveNeighbors(x, y); if (!cells[y][x]) { if (aliveNs == 3) { ret[y][x] = true; } } else { if (aliveNs < 2 || aliveNs > 3) { ret[y][x] = false; } else { ret[y][x] = true; } } } } this->cells = ret; } cv::Mat render() const { cv::Mat ret = cv::Mat::zeros(width * UPSAMPLING, height * UPSAMPLING, CV_8UC3); for (auto y = 0UL; y < this->cells.size(); y++) { for (auto x = 0UL; x < this->cells[y].size(); x++) { if (cells[y][x]) { cv::Vec3b color(random(0, 255), random(0, 255), random(0, 255)); for (auto kx = 1; kx < UPSAMPLING; kx++) { for (auto ky = 1; ky < UPSAMPLING; ky++) { ret.at<cv::Vec3b>(x * UPSAMPLING + kx, y * UPSAMPLING + ky) = color; } } } } } return ret; } };
{ "domain": "codereview.stackexchange", "id": 38367, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, game-of-life, opencv", "url": null }
c++, beginner, variadic, c++20 Sure, you could make a generic adapter using the same if constexpr chain as in this code, but are the defective types really that common? We can arrange for std::max() to return an lvalue by passing it an initialiser list of std::reference_wrapper for its arguments (the parameters must be taken by reference, otherwise we would return a dangling reference to local copies): template<typename... T> constexpr auto& ref_max(T&... args) { return std::max({std::ref(args)...,}).get(); } We now have #include <algorithm> #include <functional> #include <cassert> int main() { static_assert(std::max(1, 2) == 2); int a = 1; int b = 5; int c = 3; int d = 2; assert(std::max({a, b, c, d}) == b); ref_max(b, c, d) = 4; assert(b == 4); // This gives a compilation error because the static assertion failed // (void)std1::max(A{}, A{}); // This works auto const b_lessthan = [](const B& a, const B& b){ return !(a>=b); }; std::max({B{}, B{}, B{}, B{}, B{}}, b_lessthan); } Which isn't so very different from the main() in the question.
{ "domain": "codereview.stackexchange", "id": 34868, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, beginner, variadic, c++20", "url": null }
condensed-matter, solid-state-physics, superconductivity Title: How does the supercurrent expression $\vec{j}_s=-\frac{n_se^2}{m}\vec{A}$ arise in Coulomb gauge? The expression of the supercurrent in a superconductor is $$\vec{j}_s=-\frac{n_se^2}{m}\vec{A}$$ where $\vec{A}$ is the vector potential, $n_s$ is the number density of superconducting carriers and $e,m$ are the charge and mass of the electron. Wikipedia article of London equations notes that this equation suffers from the disadvantage that in this form $\vec{j}_s$ does not seem to be gauge invariant. However, it asserts that this expression is true only in the Coulomb gauge (${\rm div}~\vec{A}=0$). I want to show that this is true only in the Coulomb gauge. I started from the general expression of the supercurrent
{ "domain": "physics.stackexchange", "id": 66680, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "condensed-matter, solid-state-physics, superconductivity", "url": null }
# Checking whether a multivariable function is convex I have a simple question. I have a multi-variable function that I'm supposed to check whether convex or not. I know the definition for convexity as follows: The function $f(x)$ is convex if: $$f(\lambda x_1 + (1 - \lambda x_2)) \leq \lambda f(x_1) + (1 - \lambda )f(x_2)$$ But this is for the single variable case. How do I generalize it for multi-variable case? The author of this question seems to be showing that the function $f(x,y)$ is convex if, $$f(\lambda x_1 + (1 - \lambda x_2),\lambda y_1 + (1 - \lambda y_2) ) \leq \lambda f(x_1,y_1) + (1 - \lambda )f(x_2,y_2)$$ But it hasn't been explicitly written anywhere. Is this correct? EDIT: The two functions should be corrected as: $$f(\lambda x_1 + (1 - \lambda) x_2) \leq \lambda f(x_1) + (1 - \lambda )f(x_2)$$ $$f(\lambda x_1 + (1 - \lambda) x_2,\lambda y_1 + (1 - \lambda) y_2 ) \leq \lambda f(x_1,y_1) + (1 - \lambda )f(x_2,y_2)$$ • Yes, it is correct, but you need to fix parenthesis there, it is not $(1-\lambda x)$ but $(1-\lambda)x$. $\lambda$ ranges between zero and one. Also it only makes sense if the domain itself is convex – la flaca Feb 20 '17 at 16:17 • @laflaca thanks for pointing the error out. I corrected it. Please explain what you mean by domain being convex? I haven't heard of this before. many thanks. – PPGoodMan Feb 20 '17 at 16:22
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9770226260757067, "lm_q1q2_score": 0.8183736641500421, "lm_q2_score": 0.8376199714402813, "openwebmath_perplexity": 153.94991821891102, "openwebmath_score": 0.7396078109741211, "tags": null, "url": "https://math.stackexchange.com/questions/2153161/checking-whether-a-multivariable-function-is-convex" }
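The corrected two-variable definition can be illustrated with a quick numerical check (my own example, not from the thread): $f(x, y) = x^2 + y^2$ is convex, and sampling the inequality over a grid of points and $\lambda$ values finds no violation.

```python
def f(x, y):
    # A convex test function of two variables.
    return x * x + y * y

def convex_combination_holds(p1, p2, lam, tol=1e-12):
    """Check f(lam*p1 + (1-lam)*p2) <= lam*f(p1) + (1-lam)*f(p2)."""
    x1, y1 = p1
    x2, y2 = p2
    xm = lam * x1 + (1 - lam) * x2
    ym = lam * y1 + (1 - lam) * y2
    # tol absorbs floating-point rounding in the comparison.
    return f(xm, ym) <= lam * f(*p1) + (1 - lam) * f(*p2) + tol

points = [(-2.0, 1.0), (0.5, -3.0), (4.0, 4.0), (0.0, 0.0)]
lambdas = [0.0, 0.25, 0.5, 0.9, 1.0]
assert all(convex_combination_holds(p, q, lam)
           for p in points for q in points for lam in lambdas)
print("no violation found: consistent with convexity of x^2 + y^2")
```

Note that a check like this can only ever refute convexity at the sampled points; the definition quantifies over all pairs and all $\lambda \in [0, 1]$, and the domain itself must be convex, as the comment in the thread points out.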
Analyze the kinetics of a particle using cylindrical coordinates. , spherical harmonics) or special treatment of the coordinate singularities. You should verify the coordinate vector field formulas for spherical coordinates on page 72. This page covers cylindrical coordinates. ρ) and the positive x-axis (0 ≤ φ < 2π), z is the regular z-coordinate. Thus rf= u 12rzsin cos + u 2rz(cos2 sin2 )+u 3r2 sin cos : 1. →ω = ˙θˆez. A particle executes circular motion in the xy plane with constant speed v= 5m/s. 10 and the gradient and Laplacian of a scalar field and the divergence and curl of vector fields were derived in terms of these coordinates. There are a few basic important solutions and the rest are given in terms of powers of and Legendre polynomials in. •Need to specify a reference frame (and a coordinate system in it to actually write the vector expressions). in cylindrical coordinate: Any of the tools learned in Chapter 12 may be needed to solve a problem. , 2007) If end effects are neglected, the velocity distribution in the angular direction can be. Gradient Velocity of a particle Derivatives of Vectors Differential Forms 2-forms 3-forms Cylindrical Polar and Spherical. 1 cm for different radial resolutions NETL Workshop on Multiphase Flow Science, August 6-7 2013 5 Choice of Coordinate System Cartesian Cut-Cell Cylindrical Coordinates Simulations using the Cylindrical 3D coordinate. Triple Integrals in Cylindrical and Spherical Coordinates. Integration in cylindrical coordinates (r, \\theta, z) is a simple extension of polar coordinates from two to three dimensions. Velocity in Cylindrical Coordinates? A charged particle in a magnetic field is spiralling along a path defined in cylindirical coordinates by r = 1 m and theta = 2z rad (z is in meters). View cylindrical coordinates from COMPUTER S Com210 at University of Nairobi. 
As it moves around the tank with increasing velocity, the air pressure above and below the tank drops to zero, or possibly slightly below atmospheric pressure depending on the wind velocity. K), V r is the velocity in relation the r direction, V z is the velocity in relation the z direction, both in m/s, ρ is the specific mass (kg/m3), C p is the specific heat at constant pressure (J/kg. angle from the positive z axis. If the position vector of a
{ "domain": "daisytale.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9891815519849906, "lm_q1q2_score": 0.8177680555031247, "lm_q2_score": 0.8267117940706734, "openwebmath_perplexity": 603.2376859278611, "openwebmath_score": 0.8972199559211731, "tags": null, "url": "http://daisytale.it/qiyn/velocity-in-cylindrical-coordinates.html" }
3. Thanks a lot! But I guess that 6/18 in the denominator is a typo? Shouldn't it be 12/18, or am I wrong? 4. Originally Posted by mirrormirror Thanks a lot! But I guess that 6/18 in the denominator is a typo? Shouldn't it be 12/18, or am I wrong? You're wrong. The denominator is Pr(B) = Pr(B | A) Pr(A) + Pr(B | A') Pr(A') ....
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9825575162853136, "lm_q1q2_score": 0.816645842617027, "lm_q2_score": 0.8311430415844384, "openwebmath_perplexity": 454.6346135731354, "openwebmath_score": 0.5472453832626343, "tags": null, "url": "http://mathhelpforum.com/advanced-statistics/69191-probability-question.html" }
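The denominator being discussed is the law of total probability, $\Pr(B) = \Pr(B \mid A)\Pr(A) + \Pr(B \mid A')\Pr(A')$. A small worked example with made-up numbers (mine, not the thread's actual values, which aren't fully shown):

```python
from fractions import Fraction

# Hypothetical inputs, chosen only to illustrate the formula.
p_a = Fraction(1, 3)              # Pr(A)
p_b_given_a = Fraction(1, 2)      # Pr(B | A)
p_b_given_not_a = Fraction(1, 4)  # Pr(B | A')

# Law of total probability: Pr(B) = Pr(B|A)Pr(A) + Pr(B|A')Pr(A').
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
print(p_b)                        # 1/3

# Bayes' rule then inverts the conditioning using the same pieces.
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)                # 1/2
```

Using `Fraction` keeps every probability exact, which makes it easy to see which fraction belongs in the denominator.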
\require{cancel}
\begin{aligned}
\lim_{n \rightarrow \infty} \left| \frac{\frac{(3x-2)^{n+1}}{(n+1)3^{n+1}}}{\frac{(3x-2)^n}{n3^n}} \right| & = \lim_{n \rightarrow \infty} \left| \frac{(3x-2)^{n+1}}{(n+1)3^{n+1}}\frac{n3^n}{(3x-2)^n}\right| \\
& = \lim_{n \rightarrow \infty} \left| \frac{(3x-2)^n(3x-2)}{(n+1)3^n 3}\frac{n3^n}{(3x-2)^n}\right| \\
& = \lim_{n \rightarrow \infty} \left| \frac{\bcancel{(3x-2)^n}(3x-2)}{(n+1)\bcancel{3^n} 3}\frac{n \bcancel{3^n}}{\bcancel{(3x-2)^n}}\right| \\
& = \lim_{n \rightarrow \infty} \left| \frac{(3x-2)n}{3(n+1)}\right| \\
& = \lim_{n \rightarrow \infty} \left( \left|\frac{3x-2}{3} \right| \cdot \left| \frac{n}{n+1}\right| \right) \\
& = \left|\frac{3x-2}{3} \right| \cdot \lim_{n \rightarrow \infty} \left| \frac{n}{n+1}\right| =
{ "domain": "aorinevo.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9914225135725427, "lm_q1q2_score": 0.8173901132550556, "lm_q2_score": 0.8244619242200081, "openwebmath_perplexity": 128.69379200257384, "openwebmath_score": 0.9999959468841553, "tags": null, "url": "http://www.aorinevo.com/bcc/mat-281-calculus-ii/9-infinite-series/9-9-representation-of-functions-by-power-series/" }
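The limiting ratio $|3x-2|/3$ from that ratio-test computation can be confirmed numerically (a check I've added; the series $\sum (3x-2)^n/(n\,3^n)$ converges where this limit is below 1, i.e. for $-1/3 < x < 5/3$):

```python
def term(n, x):
    """n-th term of sum (3x-2)^n / (n * 3^n), n >= 1.

    Written as ((3x-2)/3)^n / n so that neither (3x-2)^n nor 3^n
    overflows on its own for large n."""
    return ((3 * x - 2) / 3) ** n / n

x = 1.2                        # any sample point with |3x-2| < 3
expected = abs(3 * x - 2) / 3  # the ratio-test limit |3x-2|/3

n = 1000
ratio = abs(term(n + 1, x) / term(n, x))
print(ratio, expected)         # the ratio approaches 0.5333... as n grows

# ratio equals expected * n/(n+1), so it is within expected/(n+1) of the limit.
assert abs(ratio - expected) < 1e-3
```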
ros-melodic, husky Title: Unable to subscribe to topics published by Husky I'm fairly confident this is a networking issue, but not sure exactly how to fix it. If I publish to a topic myself on the Husky, I can subscribe to it from a client. However, I can't subscribe to topics that the Husky creates during startup. For instance, when ssh'd into the Husky, I can use rostopic hz /imu/data and I see that the data is being published. However, if I do the same on the client, I don't get any information. On the client (~/.zshrc), I have set the following information: . /opt/ros/melodic/setup.zsh export ROS_IP=10.10.10.115 export ROS_MASTER_URI=http://10.10.10.111:11311 On the Husky (~/.bashrc) I have set the following: . /opt/ros/melodic/setup.bash export ROS_IP=10.10.10.111 In the Husky's /etc/ros/setup.bash, I have the following: # Mark location of self so that robot_upstart knows where to find the setup file. export ROBOT_SETUP=/etc/ros/setup.bash # Setup robot upstart jobs to use the IP from the network bridge. # export ROBOT_NETWORK=br0 # Insert extra platform-level environment variables here. The six hashes below are a marker # for scripts to insert to this file. ###### export LCM_DEFAULT_URL=udpm://239.255.76.67:7667?ttl=5 export HUSKY_IMU_XYZ='0 -0.15 0.065' export HUSKY_IMU_RPY='3.1415 0 0' export HUSKY_LASER_ENABLE=1 # Pass through to the main ROS workspace of the system. source /opt/ros/melodic/setup.bash # I added this in an attempt to make things work export ROS_IP=10.10.10.111
{ "domain": "robotics.stackexchange", "id": 36040, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic, husky", "url": null }
machine-learning, python, scikit-learn Title: How to find what values are assigned to labels that were encoded using LabelEncoder? places = ['India','France','India','Australia','Australia','India','India','France'] Here places is the DataFrame Series; now how can I find which label was encoded with which value, e.g. India = 0, Australia = 1, France = 2? This is OK for a few labels, but what if there are hundreds of labels in a huge dataset? Use the classes_ attribute of your LabelEncoder. For example: from sklearn import preprocessing le = preprocessing.LabelEncoder() le.fit(places) print(le.classes_) The index of the label in le.classes_ is the encoded value of the label. See another example here.
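LabelEncoder assigns codes in sorted order of the unique labels, so the full mapping can be recovered with `dict(zip(le.classes_, le.transform(le.classes_)))` no matter how many labels there are. A pure-Python sketch of the same idea (so it runs without scikit-learn installed):

```python
places = ['India', 'France', 'India', 'Australia', 'Australia', 'India', 'India', 'France']

# LabelEncoder sorts the unique labels and numbers them 0, 1, 2, ...
classes = sorted(set(places))                        # plays the role of le.classes_
mapping = {label: code for code, label in enumerate(classes)}
print(mapping)   # {'Australia': 0, 'France': 1, 'India': 2}
```

Note that, contrary to the guess in the question, India gets code 2 here, not 0, because the codes follow alphabetical order.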
{ "domain": "datascience.stackexchange", "id": 4560, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, python, scikit-learn", "url": null }
continuous-signals, fourier-series Title: Fourier series in continous time domain I want to ask Question about the Fourier series in continuous time domain. I am following signal and systems 2nd Edition by Alan Oppenheim. I have confusion in understanding the statement that Specifically, suppose that $x(t)$ is real and can be represent in the form 3.25. then since $x^*(t) = x(t)$, we obtain $$x(t) = \sum^{+\infty}_{k\ =\ -\infty} a^*_k e^{-jk\omega_0t}$$ Then it means the equation 3.25 is for both Real and imaginary? Equation 3.25 $$x(t) = \sum^{+\infty}_{k\ =\ -\infty} a_k e^{jk\omega_0t}$$ Under certain conditions, a $T$-periodic function can be represented by its Fourier series $$x(t)=\sum_{k=-\infty}^{\infty}a_ke^{jk\omega_0t}\tag{1}$$ with $\omega_0=2\pi/T$. The function $x(t)$ can be complex-valued, i.e. have non-zero real and imaginary parts. Note that generally the Fourier coefficients $a_k$ are also complex-valued (even for real-valued $x(t)$). Now if $x(t)$ is real-valued, i.e., $x(t)=x^*(t)$, you get $$x(t)=x^*(t)=\sum_{k=-\infty}^{\infty}a^*_ke^{-jk\omega_0t}= \sum_{k=-\infty}^{\infty}a^*_{-k}e^{jk\omega_0t}\tag{2}$$
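The conjugate-symmetry statement in (2) — for real $x(t)$, $a_{-k}=a_k^*$ — has a direct discrete analogue that is easy to verify with an FFT (a sketch using numpy):

```python
import numpy as np

# Any real-valued signal works; its DFT coefficients must satisfy X[-k] = conj(X[k]).
t = np.arange(64)
x = 1.0 + np.cos(2 * np.pi * t / 64) + 0.5 * np.sin(6 * np.pi * t / 64)

X = np.fft.fft(x)
for k in range(1, 32):
    # index -k is N - k, i.e. the "negative frequency" coefficient
    assert np.allclose(X[-k], np.conj(X[k]))
print("conjugate symmetry holds")
```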
{ "domain": "dsp.stackexchange", "id": 3155, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "continuous-signals, fourier-series", "url": null }
It need not be true that any of the fractions is actually a solution. Also note that as of Python 3, adding periods to integers to make them a float is no longer necessary. However, there is still one basic procedure that is missing from the algebra of complex numbers. (Buddy system! If there is no buddy, it must stay under the radical.) Find the indicated real nth root(s) of a negative number. De Moivre's theorem can be extended to roots of complex numbers, yielding the nth root theorem. n = 5, a = 0. Because n = 4 is even and a = 81 > 0, 81 has two real fourth roots, ±3. The 5th root of 1,024 is 4, as 4 × 4 × 4 × 4 × 4 is 1,024. Question 1: Simplify. When you're taking a time-bound exam like SSC CGL, SSC CPO, Railways Group D, RPF & ALP, this can drain you of your precious time. You can see there are many roots to this equation, and we want to be sure we get the $n^{th}$ root. Nth root calculator. V = 216 ft³. Find the real solution(s) of the equation. There might be some issues with this. Use nth roots in problem solving. Animal Population: The population P of a certain animal species after t months can be modeled by P = C(1. In mathematics, an nth root of a number x is a number r which, when raised to the power n, yields x. The number 0 (zero) has just one square root, 0 itself. The nth root calculator below will also provide a brute force rounded approximation of the principal nth root. Simplify the expression.
There is no result accuracy argument for Nth_Root, because the iteration is supposed to be monotonically descending to the root when it starts from above it.
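That monotone-descent property is easy to see in a minimal Newton iteration for the nth root (my own sketch, not the Nth_Root routine referred to above): started at or above the root, the iterates decrease, so the loop can stop as soon as an iterate fails to decrease — no separate accuracy argument is needed.

```python
def nth_root(a, n):
    """Newton's method for the positive n-th root of a > 0.

    x_{k+1} = ((n-1)*x_k + a / x_k**(n-1)) / n. Started above the root,
    the sequence decreases monotonically, so we stop when descent stops.
    """
    x = max(a, 1.0)                       # always >= a**(1/n) for a > 0
    while True:
        nxt = ((n - 1) * x + a / x ** (n - 1)) / n
        if nxt >= x:                      # no further descent: converged
            return x
        x = nxt

print(nth_root(1024, 5))   # ~4.0
```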
{ "domain": "cbeu.pw", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9905874095787602, "lm_q1q2_score": 0.8624026879101964, "lm_q2_score": 0.8705972633721708, "openwebmath_perplexity": 415.6410076463367, "openwebmath_score": 0.6347075700759888, "tags": null, "url": "http://ufap.cbeu.pw/find-the-indicated-nth-roots-of-a.html" }
ros, opencv, ros-kinetic, vision-opencv, gui if (temp.empty()) { ROS_INFO("Cannot read from webcam \n"); } cvtColor(temp, frame, CV_RGB2HSV); inRange(frame, Scalar(hue_val_min, saturation_val_min, value_val_min), Scalar(hue_val_max, saturation_val_max, value_val_max), mask); bitwise_and(frame, frame, temp, mask); // C++ has no keyword arguments: pass the mask positionally erode(temp, erosion_window, element_erosion); dilate(erosion_window, dilation_window, element_dilation); cvtColor(dilation_window, final, CV_HSV2RGB); // convert the dilated result, not the pre-dilation image msg = cv_bridge::CvImage(std_msgs::Header (), "bgr8", final).toImageMsg(); pub.publish(msg); ros::spinOnce(); } return 0; } // This function processes the variable from the erosion trackbar void erosion(int, void*) { int type = MORPH_RECT; switch(erosion_type) { case 1: type = MORPH_RECT; break; case 2: type = MORPH_CROSS; break; case 3: type = MORPH_ELLIPSE; break; // without break, every case fell through to MORPH_ELLIPSE } element_erosion = getStructuringElement(type, Size(2 * erosion_value + 1, 2 * erosion_value + 1), Point(erosion_value, erosion_value)); } // This function processes the variable from the dilation trackbar void dilation(int, void*) { int type = MORPH_RECT; switch(dilation_type) { case 1: type = MORPH_RECT; break; case 2: type = MORPH_CROSS; break; case 3: type = MORPH_ELLIPSE; break; } element_dilation = getStructuringElement(type, Size(2 * dilation_value + 1, 2 * dilation_value + 1), Point(dilation_value, dilation_value)); }
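For reference, erode with a rectangular structuring element just takes the minimum over each pixel's neighborhood; a minimal numpy sketch of what that operation computes on a binary image (an illustration of the morphology, not OpenCV's implementation):

```python
import numpy as np

def erode(img, k):
    """Binary erosion with a k x k all-ones element: a pixel survives
    only if its whole k x k neighborhood is 1 (zero padding outside)."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

img = np.zeros((6, 6), dtype=int)
img[1:5, 1:5] = 1              # a 4x4 block of ones
print(erode(img, 3))           # only the inner 2x2 survives
```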
{ "domain": "robotics.stackexchange", "id": 31046, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, opencv, ros-kinetic, vision-opencv, gui", "url": null }
multi-rotor Title: Prop orientation for tricopters This question stems from a previous question, where I asked why the prop orientation matters so much for a multirotor. But on further research† I found that these reasons need not apply to a tricopter — and then again, why? Are these reasons general for all multirotors with an odd number of motors? Or an even number? † This forum talks a lot about tricopters and prop orientations but nothing really answers the question. For a basic multicopter, in the absence of any other way to control yaw, orientation does matter. A quadrotor needs a balanced set of rotation directions so that it can easily control yaw direction, and at equilibrium, all propellers can rotate at the same speed. This is primarily because all rotors in a quadrotor are fixed (usually). The quadrotor can also control yaw by varying the speed of propellers that rotate in one direction versus another. Propeller orientation does not matter so much when the craft has some other method to control yaw. For example, helicopters have a tail rotor, which corrects for any yaw moment caused by the main propeller. Thus, if the main propeller were set to rotate in the opposite direction, the tail rotor could just push the other way to counter it. The standard tricopter has two front rotors and a third rear rotor. However, the rear rotor can change its axis: it tilts left or right. When it tilts left or right, a portion of its thrust acts in the yaw direction. Therefore, where the front rotors rotate in the same direction, the rear rotor must tilt more to counter the moment from the front rotors. Where the front rotors rotate in opposite directions, the rear rotor tilts less. This is explained in the video in message #4 of the link you posted. The reason why the controller does not even need to know the direction of rotation in the tricopter is that it is a feedback controller.
Thus, when it senses undesired yaw, it simply tilts the rear rotor in the opposite direction to correct the yaw direction.
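That feedback loop can be sketched as a toy proportional controller: the tail tilt is driven purely by the sensed yaw, so at equilibrium the tilt exactly balances the disturbing moment whatever its sign — the controller never needs to know the rotors' spin directions (all numbers are illustrative):

```python
def settle_yaw(disturbance, kp=0.5, steps=50):
    """Toy 1-D yaw model: an unknown-sign disturbing moment accumulates,
    and the tail-rotor tilt opposes whatever yaw is sensed."""
    yaw, tilt = 0.0, 0.0
    for _ in range(steps):
        yaw += disturbance - tilt     # plant: net moment integrates into yaw
        tilt = kp * yaw               # feedback: tilt proportional to sensed yaw
    return yaw, tilt

# Same controller, either spin direction: the settled tilt equals the disturbance.
print(settle_yaw(+1.0), settle_yaw(-1.0))
```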
{ "domain": "robotics.stackexchange", "id": 179, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "multi-rotor", "url": null }
javascript, functional-programming, compression return newArray.join(""); } compressString(signature); Style notes Not putting {, } around single line statement blocks is a bad habit. Always delimit statement code blocks. In compressCharacters newArray should be a constant. In groupCharacters arr should be a constant. Consistent naming is important. Your naming is all over the place. You call a string, characters (in compressCharacters), string (in compressString), signature, and e in the forEach. A character you call element. You abbreviate an array to arr in one function and call it newArray in another. Most of the names are describing the type and not the abstracted data that they hold. Don't add useless or redundant code. In compressCharacters you create a variable result that you do nothing with. Not to mention that forEach does not have a return defined. Also result in groupCharacters is never used. Don't declare variables outside the scope that they are to be used in. newSignature is only used inside reduce but declared outside the callback. Functional programming means that functions should not have side effects (change the state of anything outside the function's scope). The reduce callback uses the array arr, which breaks the no-side-effects rule. And the forEach pushes to arr, which is also outside the forEach callback's scope (use map or reduce in that case). Applying the above, you would get something like the following code. const groupRuns = str => [...str].reduce((groups, char) => { const last = groups.length - 1; if (last < 0 || groups[last][0] !== char) { groups.push(char) } else { groups[last] += char } return groups; }, []); const concatGroups = groups => groups.reduce((str, g) => str + (g.length > 2 ? g[0] + g.length : g) , ""); const compressString = str => concatGroups(groupRuns(str));
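The reviewed two-stage pipeline (group maximal runs, then emit "<char><count>" only when the run is longer than two characters) ports directly to other languages; a Python sketch of the same logic:

```python
def group_runs(s):
    """Split a string into maximal runs of one repeated character."""
    groups = []
    for ch in s:
        if groups and groups[-1][0] == ch:
            groups[-1] += ch
        else:
            groups.append(ch)
    return groups

def compress_string(s):
    # A run is only worth compressing when "<char><count>" is shorter,
    # i.e. when the run has more than two characters.
    return "".join(g[0] + str(len(g)) if len(g) > 2 else g
                   for g in group_runs(s))

print(compress_string("aaabbc"))   # a3bbc
```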
{ "domain": "codereview.stackexchange", "id": 32567, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, functional-programming, compression", "url": null }
25. ParthKohli Isn't the work done by friction independent of the magnitude of force anyway? 26. ganeshie8 Oh right, we don't need net force on block, just the friction 27. ParthKohli So our answer is coming out to be $\frac{2 \cdot 10}{6}\,J = \frac{10}{3}\,J$ 28. ganeshie8 don't trust me on these things, but wait, what is the numerical value of given F 29. ParthKohli No numerical value is given. :P 30. ganeshie8 oh then it shouldn't matter as you said haha 31. ParthKohli Tagging @Kainui to check our work. 32. ParthKohli The friction at each point is also different though. 33. ParthKohli We have done the integral assuming that friction remains the same at A and B and all other such points, right? 34. ParthKohli Again, I'm not sure. The integral may compensate for that, but still, I think that's our catch. 35. ParthKohli (diagram omitted) 36. ParthKohli Let's call the total mass $$M$$ instead of $$m$$. I'll use $$m$$ as the index (is that what you call it?) If the block has been pushed from 0 to $$x$$ and is pushed a further $$dx$$, $$m$$ varies from $$0$$ to $$M(1-x)$$. $$dW_f = \mu_k N\,dx = x\cdot dm\cdot g\cdot dx.$$ So net work: $$W_{net,\,f}=g \int_{0}^{1} \int_0^{M(1-x)} x\,dm\,dx.$$ 37. ganeshie8 do you have 20/3 in options 38. ParthKohli Yes - how did you get that? 39. ganeshie8 The trick is to call "x" as the length of block that is on the friction surface 40. Kainui
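Post 36's double integral can be checked numerically: the inner integral is just $x \cdot M(1-x)$, so the whole expression evaluates to $gM/6$. With $g = 10$ and $M = 2$ that gives $10/3$, matching post 27 (this checks the integral as written, not which setup — 10/3 or the 20/3 of post 37 — models the problem correctly):

```python
# W = g * integral_0^1 integral_0^{M(1-x)} x dm dx ; inner integral = x * M * (1 - x).
g, M = 10.0, 2.0
N = 100_000
dx = 1.0 / N
# Midpoint-rule sum over the outer integral.
W = g * sum((i + 0.5) * dx * M * (1 - (i + 0.5) * dx) for i in range(N)) * dx
print(W)   # close to g*M*(1/2 - 1/3) = 10/3
```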
{ "domain": "openstudy.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9609517028006207, "lm_q1q2_score": 0.8129098181383421, "lm_q2_score": 0.8459424295406087, "openwebmath_perplexity": 848.9151851810269, "openwebmath_score": 0.8917829394340515, "tags": null, "url": "http://openstudy.com/updates/556dad4de4b0c4e453fa1e06" }
Example 1: Input: s1 = "aabcc", s2 = "dbbca", s3 = "aadbbcbcac" Output: true Example 2: Input: s1 = "aabcc", s2 = "dbbca", s3 = "aadbbbaccc" Output: false Example 3: Input: s1 = "", s2 = "", s3 = "" Output: true https://www.algoexpert.io/questions/Interweaving%20Strings https://leetcode.com/problems/interleaving-string/ https://leetcode.com/problems/interleaving-string/discuss/326347/C-dynamic-programming-practice-in-August-2018-with-interesting-combinatorics-warmup """ """ ------------------------------------------------------------------------------------------------------------------------------------------------------ """ def interweavingStringsBF_(one, two, three): if len(three) != len(one) + len(two): return False return interweavingStringsHelperBF_(one, two, three, 0, 0, 0) def interweavingStringsHelperBF_(one, two, three, one_idx, two_idx, three_idx): if three_idx == len(three): return True
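For completeness, here is a self-contained brute-force version (my own sketch of the recursion the truncated helper above begins) that reproduces all three examples:

```python
def interweaving(one, two, three):
    """True iff `three` can be formed by interleaving `one` and `two`
    while preserving the order of characters within each string."""
    if len(three) != len(one) + len(two):
        return False

    def helper(i, j):
        k = i + j                      # next index of `three` to match
        if k == len(three):
            return True
        # Try consuming the next character of `three` from either string.
        if i < len(one) and one[i] == three[k] and helper(i + 1, j):
            return True
        return j < len(two) and two[j] == three[k] and helper(i, j + 1)

    return helper(0, 0)

print(interweaving("aabcc", "dbbca", "aadbbcbcac"))  # True
print(interweaving("aabcc", "dbbca", "aadbbbaccc"))  # False
print(interweaving("", "", ""))                      # True
```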
{ "domain": "paulonteri.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9559813501370535, "lm_q1q2_score": 0.8218225111577803, "lm_q2_score": 0.8596637487122111, "openwebmath_perplexity": 6090.381773024074, "openwebmath_score": 0.30743175745010376, "tags": null, "url": "https://dsa.paulonteri.com/Data%20Structures%20and%20Algorithms%2016913c6fbd244de481b6b1705cbfa6be/Recursion%2C%20DP%20%26%20Backtracking%20525dddcdd0874ed98372518724fc8753.html" }
# Ambiguity in dimensional analysis Tags: 1. Jan 19, 2016 ### DaTario Hi All, My question is twofold, as follows: 1) Why is the dimension of torque not the Joule, as it is Newton times meter? 2) Why can the derivative of the velocity with respect to the distance not be measured in Hertz? Thank you all, Best Regards, DaTario 2. Jan 19, 2016 ### bcrowell Staff Emeritus Torque does have units of joules, and torque also has units of newton-meters. Clear communication is the reason we avoid writing it with units of joules. 3. Jan 19, 2016 ### stockzahn @1: Although it seems that torque and energy have the same dimension, you can see the difference if you calculate with vectors: T = F × l, W = F ⋅ s. In the torque case the lever l is perpendicular to the force - no work is done. In the work case the displacement s is parallel to the force - the torque is zero. 4. Jan 19, 2016 ### Staff: Mentor The answer to question 2 is for the same reason that Ben gave for question 1. Chet 5. Jan 19, 2016 ### mfig I agree with the other replies: it is mostly for clarity that these units are named differently. But there also is a real difference. For example, energy is a scalar quantity, torque is not. There is no directionality expressed in saying the internal energy of 1 kg of steam is 2 Joules. However there is directionality expressed in saying you applied 2 Nm of torque to that nut. (Were you tightening or loosening the nut, for example?) Torque can also be seen to drag the hidden dimension of radians in from how it is defined. The units of torque can therefore be thought of as Joules per radian. (I personally have never liked the way radians and cycles are ignored as an explicit unit in physics, as I think it leads to confusions like these.) A similar consideration applies to your second question. Hz is actually cycles per second, not merely 1/s, though the cycles are usually ignored in unit statements, like radians. 
Cycles and radians in the compound units used in physics should be kept in mind to avoid confusion, even though it is not taught that way, in my opinion. Hope this helps!
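The vector distinction drawn above (T = F × l versus W = F ⋅ s) is quick to demonstrate numerically — with a perpendicular force there is torque but no work, and with a parallel force the reverse (numbers are illustrative):

```python
import numpy as np

r = np.array([2.0, 0.0, 0.0])    # lever arm / displacement, m
F = np.array([0.0, 3.0, 0.0])    # force perpendicular to r, N
Fp = np.array([3.0, 0.0, 0.0])   # force parallel to r, N

torque = np.cross(r, F)          # [0, 0, 6] N·m about z — yet r·F = 0: no work
work = np.dot(Fp, r)             # 6 J — yet Fp × r = 0: no torque
print(torque, work)
```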
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9697854146791214, "lm_q1q2_score": 0.8318510026854776, "lm_q2_score": 0.8577681104440172, "openwebmath_perplexity": 1550.3158272210114, "openwebmath_score": 0.8600427508354187, "tags": null, "url": "https://www.physicsforums.com/threads/ambiguity-in-dimensional-analysis.853020/" }
quantum-field-theory, particle-physics, vacuum, higgs, symmetry-breaking Questions: Why are only fields linear in creation and annihilation operators, i.e. fields which can be expanded as $$ \hat{\phi}(\vec{x},t) = \int c \cdot d^3p\left[\hat{a}(\vec{p}) \mathrm{e}^{-i(\vec{p}\cdot\vec{x}-E_pt)} + \hat{b}(\vec{p}) \mathrm{e}^{+i(\vec{p}\cdot\vec{x}-E_pt)}\right],$$ 'amenable to a particle interpretation'? In other words, why does a field which doesn't have such a linear expansion possibly not allow this 'particle interpretation'? As far as I understand the issue, the 'particle interpretation' of a field means simply that the excited states become identified with 'particles' in excited states (classically analogous to an electron with energy level above ground state). Why is it not possible to declare, in the same manner, that the 'excited states' are exactly the 'excited particles' for such a field, like in case of fields linear in creation/annihilation operators mentioned above? What are the obstructions? This leads to question 2: Seemingly, according to the quoted answer, the problem is that such a field may have a non-vanishing vacuum expectation value. But I don't understand why it is necessary in order to give a field a 'particle interpretation' with excited states that the field's vev vanishes. By definition, creation operators applied to the vacuum state define the 1-particle states of the field theory, and by iteration multiparticle states. All particle terminology in QFT is based on this definition. In order that the particle interpretation works and produces a multiparticle Fock space, the creation and annihilation operators must satisfy specific (anti)commutation rules in momentum space. A nonzero VEV spoils these rules when one defines these operators by the standard rules, whereas they are valid when one first subtracts the VEV from the field before applying these rules.
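The "subtract the VEV first" prescription can be written out explicitly (a standard sketch, shown for a real scalar field so the second operator is $\hat{a}^{\dagger}$): define the shifted field and expand it — not the original one — linearly, $$v \equiv \left<\Omega\right|\hat{\phi}(x)\left|\Omega\right>, \qquad \tilde{\phi}(x) \equiv \hat{\phi}(x) - v,$$ so that $\left<\Omega\right|\tilde{\phi}(x)\left|\Omega\right> = 0$ and $$\tilde{\phi}(\vec{x},t) = \int c \cdot d^3p\left[\hat{a}(\vec{p}) \mathrm{e}^{-i(\vec{p}\cdot\vec{x}-E_pt)} + \hat{a}^{\dagger}(\vec{p}) \mathrm{e}^{+i(\vec{p}\cdot\vec{x}-E_pt)}\right],$$ with $[\hat{a}(\vec{p}), \hat{a}^{\dagger}(\vec{p}\,')] = \delta^3(\vec{p}-\vec{p}\,')$, so that $\hat{a}^{\dagger}(\vec{p})\left|\Omega\right>$ again defines the one-particle states.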
{ "domain": "physics.stackexchange", "id": 75249, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, particle-physics, vacuum, higgs, symmetry-breaking", "url": null }
[in class we did this in the order iii, i, ii, and postponed iv = Paley till Wednesday.] As with 2-designs, we aim to describe, at least in some cases, which parameters (n,k,λ,μ) are feasible, and to describe an easy algebraic condition: Proposition 2.6 (p.33): If a strongly regular graph has parameters (n,k,λ,μ) then k (k-λ-1) = (n-k-1) μ. Proof: double count, of course. Fix x, count edges {y,z} where y is a neighbor of x but z isn't (and does not equal x either). There are k choices for y; for each of them there are k neighbors, of which one is x and λ are the common neighbors of x and y, so there are k-λ-1 choices for z. On the other hand there are n-k-1 choices for z, and for each of them there are μ choices for y, QED. You should verify that the parameters of the complement $\overline{G}$ satisfy this necessary condition iff those of G do, and that the parameters of the graphs we've seen so far (triangular, square lattice, $r \cdot K_m$, and Paley) satisfy it as well. Next time we'll introduce the adjacency matrix of a graph, and use it to obtain a further necessary condition on n,k,λ,μ that is considerably subtler and quite powerful -- e.g. it's the source of the result noted in the introductory lecture that if there is a Moore graph of degree d then d is in {2, 3, 7, 57}. Feb. 24 -- the adjacency matrix and the integrality condition (p.36-38)
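Proposition 2.6 can be spot-checked against two of the families mentioned: the Paley graph of order $q$ has parameters $(q, \frac{q-1}{2}, \frac{q-5}{4}, \frac{q-1}{4})$, and the triangular graph $T(m)$ has $(\binom{m}{2}, 2(m-2), m-2, 4)$ (a quick sketch):

```python
def feasible(n, k, lam, mu):
    # Proposition 2.6: k(k - lambda - 1) == (n - k - 1) * mu
    return k * (k - lam - 1) == (n - k - 1) * mu

paley = lambda q: (q, (q - 1) // 2, (q - 5) // 4, (q - 1) // 4)
triangular = lambda m: (m * (m - 1) // 2, 2 * (m - 2), m - 2, 4)

print(all(feasible(*paley(q)) for q in (5, 13, 17, 29)))      # primes = 1 mod 4
print(all(feasible(*triangular(m)) for m in range(4, 10)))
```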
{ "domain": "harvard.edu", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9907319853876677, "lm_q1q2_score": 0.8277402594148112, "lm_q2_score": 0.8354835330070839, "openwebmath_perplexity": 2557.5821634393883, "openwebmath_score": 0.8611697554588318, "tags": null, "url": "http://abel.math.harvard.edu/~elkies/M155.15/notes.html" }
quantum-state, measurement, pennylane This code generates the state $$\tag{1} |0\rangle \left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right) = \frac{|00\rangle}{\sqrt{2}} + \frac{|01\rangle}{\sqrt{2}}.$$ Given that the measurement is in the computational basis, your possible outcomes are: $|0\rangle \equiv |00\rangle, |1\rangle \equiv |01\rangle, |2\rangle \equiv |10\rangle, |3\rangle \equiv|11\rangle$. From (1) it is clear that qml.probs(wires=[0,1]) will yield an array [0.5, 0.5, 0, 0].
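The claimed output array follows directly from the amplitudes of (1), and can be reproduced with a plain state-vector computation (a sketch in numpy; no PennyLane needed):

```python
import numpy as np

zero = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

state = np.kron(zero, plus)        # |0> ⊗ (|0>+|1>)/sqrt(2) = (|00>+|01>)/sqrt(2)
probs = np.abs(state) ** 2         # computational-basis measurement probabilities
print(probs)                       # [0.5 0.5 0.  0. ]
```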
{ "domain": "quantumcomputing.stackexchange", "id": 3959, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-state, measurement, pennylane", "url": null }
regular-languages Title: Practical Applications of regular grammars A regular grammar is a mathematical object, $G$, with four components, $G = (N, Σ, P, S)$, where $N$ is a nonempty, finite set of nonterminal symbols, $Σ$ is a finite set of terminal symbols, or alphabet symbols, $P$ is a set of grammar rules, each one having one of the forms $A → aB$ or $A → a$, and $S$ is the start symbol. I want to know what the practical applications of these grammars are. I mean, where and how do we use them in the real world? It may also be helpful if someone tells me about the weaknesses of regular grammars. Thanks in advance. Regular grammars are more-or-less the same as NFAs, so you might as well ask what applications these have. Finite automata are used in compilers, for performing lexical analysis. Any course on compilation will contain ample information on this. As for the weaknesses of regular grammars, they only describe regular languages. In particular, they're not enough to parse programming languages, even superficially. We use context-free grammars for that. Any course on formal languages and automata will explain all of that very clearly.
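As a tiny illustration of the grammar–automaton correspondence the answer leans on: the right-linear grammar with rules S → aS and S → b generates exactly the regular language a*b, so a regex engine can stand in for its NFA (a sketch):

```python
import re

# S -> aS | b   generates { a^n b : n >= 0 }, which is the regex a*b.
def generated(word, depth=10):
    """Enumerate derivations S -> aS -> ... -> a^k b up to `depth` steps."""
    return word in {"a" * k + "b" for k in range(depth + 1)}

for w in ["b", "ab", "aaab"]:
    assert generated(w) and re.fullmatch(r"a*b", w)
for w in ["", "ba", "abb"]:
    assert not generated(w) and not re.fullmatch(r"a*b", w)
print("grammar and regex agree")
```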
{ "domain": "cs.stackexchange", "id": 6658, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "regular-languages", "url": null }