# Non-decreasing $f(n)$
When $f(n)$ is monotone non-decreasing, we have the obvious bounds $$f(n) \leq \sum_{m=c+1}^n f(m) \leq (n-c) f(n).$$ These bounds are best-possible in the sense that they are tight for some functions: the upper bound for constant functions, and the lower bound for step functions ($f(m) = 1$ for $m \geq n$ and $f(m) = 0$ for $m < n$). However, in many cases these estimates are not very helpful. For example, when $f(m) = m$, the lower bound is $n$ and the upper bound is $(n-c)n$, so they are quite far apart.
## Integration
A better estimate is given by integration: $$\int_c^n f(x) dx \leq \sum_{m=c+1}^n f(m) \leq \int_{c+1}^{n+1} f(x) dx.$$ For $f(m) = m$, this gives the correct value of the sum up to lower order terms: $$\frac{1}{2} n^2 - \frac{1}{2} c^2 \leq \sum_{m=c+1}^n m \leq \frac{1}{2} (n+1)^2 - \frac{1}{2} (c+1)^2.$$ When $f(m) = m$ we can calculate the sum explicitly, but in many cases explicit computation is hard. For example, when $f(m) = m\log m$ the antiderivative of $f$ is $(1/2) x^2\log x - (1/4) x^2$, and so $$\sum_{m=c+1}^n m\log m = \frac{1}{2} n^2 \log n \pm \Theta(n^2).$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9697854103128328,
"lm_q1q2_score": 0.8390816818236664,
"lm_q2_score": 0.865224072151174,
"openwebmath_perplexity": 568.4012933743602,
"openwebmath_score": 0.9995046854019165,
"tags": null,
"url": "https://cs.stackexchange.com/questions/2789/solving-or-approximating-recurrence-relations-for-sequences-of-numbers/24082"
} |
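The integral bounds in the section above are easy to check numerically. A minimal Python sketch (the helper name and the choices $f(m) = m\log m$, $c = 2$, $n = 1000$ are mine, for illustration):

```python
import math

def sum_vs_integral(f, F, c, n):
    """Compare S = sum_{m=c+1}^n f(m) against the bounds
    integral_c^n f <= S <= integral_{c+1}^{n+1} f,
    valid for monotone non-decreasing f; F is an antiderivative of f."""
    s = sum(f(m) for m in range(c + 1, n + 1))
    lower = F(n) - F(c)
    upper = F(n + 1) - F(c + 1)
    return lower, s, upper

# f(x) = x log x has antiderivative (1/2) x^2 log x - (1/4) x^2
f = lambda x: x * math.log(x)
F = lambda x: 0.5 * x * x * math.log(x) - 0.25 * x * x
lower, s, upper = sum_vs_integral(f, F, 2, 1000)
```

The gap between the two bounds is of order $f(n)$, i.e. lower order than the sum itself, matching the $\pm\Theta(n^2)$ error term quoted above.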
vba
Dim pastaDeTrabalho As Excel.Workbook
Set pastaDeTrabalho = aplicativo.Workbooks.Add
CreateSheets pastaDeTrabalho
NameSheets pastaDeTrabalho
End Sub
Private Sub CreateSheets(ByRef pastaDeTrabalho As Excel.Workbook)
CreateDailySheets pastaDeTrabalho, Month(Date)
CreateSummarySheet pastaDeTrabalho
End Sub
Private Sub CreateDailySheets(ByRef pastaDeTrabalho As Excel.Workbook, ByVal month As Integer)
Do Until pastaDeTrabalho.Sheets.Count = DaysInMonth(month)
pastaDeTrabalho.Sheets.Add
Loop
End Sub
Private Sub CreateSummarySheet(ByRef pastaDeTrabalho As Excel.Workbook)
pastaDeTrabalho.Sheets.Add
End Sub
Private Function DaysInMonth(ByVal month As Integer) As Integer
DaysInMonth = Day(DateSerial(Year(Date), month + 1, 1) - 1)
End Function
Private Sub NameSheets(ByRef pastaDeTrabalho As Excel.Workbook)
NameDailySheets pastaDeTrabalho, Month(Date)
NameSummarySheet pastaDeTrabalho, Month(Date)
End Sub
Private Sub NameDailySheets(ByRef pastaDeTrabalho As Excel.Workbook, ByVal month As Integer)
Dim day As Integer
For day = 1 To DaysInMonth(month)
pastaDeTrabalho.Sheets(day).Name = AssembleDailySheetName(day, month)
Next day
End Sub
Private Sub NameSummarySheet(ByRef pastaDeTrabalho As Excel.Workbook, ByVal month As Integer)
Dim index As Integer
index = GetSummarySheetIndex(month)
pastaDeTrabalho.Sheets(index).Name = AssembleSummarySheetName(month)
End Sub
Private Function GetSummarySheetIndex(ByVal month As Integer) As Integer
GetSummarySheetIndex = DaysInMonth(month) + 1
End Function | {
"domain": "codereview.stackexchange",
"id": 3853,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vba",
"url": null
} |
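The `DaysInMonth` trick above (day zero of the next month is the last day of the current month) ports directly to other languages. A rough Python equivalent, shown only to illustrate the same date arithmetic:

```python
from datetime import date, timedelta

def days_in_month(year: int, month: int) -> int:
    # Same idea as the VBA DateSerial trick: take the first day of the
    # next month and step back one day; its day-of-month is the answer.
    if month == 12:
        first_of_next = date(year + 1, 1, 1)
    else:
        first_of_next = date(year, month + 1, 1)
    return (first_of_next - timedelta(days=1)).day
```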
complexity-theory
Q: Philosophical question. Why do we need a PRG to be non-uniformly strong? Why can't we use a uniformly indistinguishable PRG? I saw the idea that the primary input $x$ to $A$ can be used as advice, but I still don't understand why we need it. The idea of a PRNG is to come up with a "deterministic" sequence that "looks" random, in the sense that programs getting access to it act (roughly) as if they had access to a completely random string. The PRNG takes a seed or a key, and outputs a larger pseudorandom sequence. When simulating some probabilistic algorithm, instead of choosing a completely random string, it is enough to choose a random seed and use a PRNG.
How would you use a PRNG to simulate (say) a BPP algorithm? Suppose we had a PRNG that takes a seed of length $L = O(\log n)$, and $\epsilon$-fools every BPP algorithm. (Pseudorandom sequences don't look completely random, but in this case, using them instead of a random sequence changes the probability of any event by at most $\epsilon$.) That means that if we pick the seed at random, then the BPP algorithm returns the correct answer with probability at least $2/3-\epsilon$ (supposing that the original success probability was $2/3$ - that depends on your definition of BPP). If $\epsilon < 1/3$, then we have the following property: | {
"domain": "cs.stackexchange",
"id": 1354,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory",
"url": null
} |
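To make the seed-based simulation above concrete: with seed length $L = O(\log n)$ there are only polynomially many seeds, so one can run the algorithm on every pseudorandom string and take a majority vote. A toy Python sketch (here the "PRNG" just reads the seed's bits directly and the "BPP algorithm" is a contrived odd/even test with error probability $1/4$; both are stand-ins, not real constructions):

```python
def seed_bits(seed: int, out_len: int) -> list:
    # Toy stand-in for a PRNG: the binary expansion of the seed itself.
    # A real PRNG would stretch a short seed into a much longer string.
    return [(seed >> i) & 1 for i in range(out_len)]

def randomized_algo(x: int, bits: list) -> bool:
    # Toy "BPP" decider for "x is odd": it answers wrongly exactly when
    # the first two random bits are both 1, i.e. with probability 1/4 < 1/3.
    correct = (x % 2 == 1)
    return (not correct) if (bits[0] == 1 and bits[1] == 1) else correct

def derandomized(x: int, seed_len: int = 6) -> bool:
    # Enumerate all 2^L seeds and take a majority vote; since the error
    # probability (1/4) is below 1/2, the vote is always correct.
    n_seeds = 2 ** seed_len
    votes = sum(randomized_algo(x, seed_bits(s, 8)) for s in range(n_seeds))
    return 2 * votes > n_seeds
```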
c, generics
typedef struct
{
size_t elem_size;
size_t elem_count;
size_t buffer_size;
void *buffer;
} Vector;
// Buffer handling
void vec_trim(Vector *vec);
bool vec_ensure_size(Vector *vec, size_t needed_size);
Vector vec_create(size_t elem_size, size_t start_size);
void vec_reset(Vector *vec);
void vec_destroy(Vector *vec);
// Insertion
void *vec_push(Vector *vec, void *elem);
void *vec_push_many(Vector *vec, size_t num, void *elem);
void *vec_push_empty(Vector *vec);
// Retrieval
void *vec_get(Vector *vec, size_t index);
void *vec_pop(Vector *vec);
void *vec_peek(Vector *vec);
// Attributes
size_t vec_count(Vector *vec);
vector.c:
#include <string.h>
#include "alloc_wrappers.h"
#include "vector.h"
#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define VECTOR_GROWTHFACTOR 0.5
void vec_trim(Vector *vec)
{
vec->buffer_size = vec->elem_count + 1;
vec->buffer = realloc_wrapper(vec->buffer, vec->elem_size * vec->buffer_size);
} | {
"domain": "codereview.stackexchange",
"id": 39404,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, generics",
"url": null
} |
c#, winforms, reflection
switch (p.PropertyType.Name)
{
case "String":
ctrl = new TextBox();
ctrl.Tag = "String";
ctrl.Size = new System.Drawing.Size(170, 20);
goto case "common";
case "Boolean":
ctrl = new CheckBox();
goto case "common";
case "Array[]":
Console.WriteLine("Array");
break;
case "Int32":
ctrl = new TextBox();
ctrl.Tag = "Int";
ctrl.Size = new System.Drawing.Size(40, 20);
goto case "common";
case "Double":
ctrl = new TextBox();
ctrl.Tag = "Double";
ctrl.Size = new System.Drawing.Size(40, 20);
goto case "common";
case "IList`1":
if (ClassExists(p.Name)) // ClassExists checks if the property is a class itself
{
ctrl = new Button();
ctrl.Text = "Add";
ctrl.Tag = p;
ctrl.AutoSize = true;
ctrl.Click += new EventHandler(AddButton_Click);
}
else
{
// TODO create gridview
}
goto case "common";
case "common":
ctrl.Anchor = AnchorStyles.Left;
ctrl.Name = name ?? GetPropertyAttributes(p);
ctrl.Font = new Font("Consolas", 9);
ctrl.KeyPress += new KeyPressEventHandler(textBoxKeyPress);
if (c != null)
{
tp.BackColor = c ?? Color.Black;
}
tp.Controls.Add(ctrl);
formControls.Add(tp); // List<Control>
break;
default:
ctrl = new Button();
ctrl.Text = "Add";
ctrl.Tag = p;
ctrl.AutoSize = true;
ctrl.Click += new EventHandler(AddButton_Click);
goto case "common";
}
} | {
"domain": "codereview.stackexchange",
"id": 15889,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, winforms, reflection",
"url": null
} |
c++, performance, strings, primes, factors
string findFactors(unsigned posInt, vector<unsigned> &factors)
{
std::stringstream factorsStr;
double sqrtInt = sqrt(static_cast<double>(posInt));
for (unsigned i = 1; i <= sqrtInt; i++)
{
if (posInt % i == 0)
{
factors.push_back(i);
unsigned x = posInt / i;
factorsStr << " " << i << " x " << x << "\n";
}
}
return factorsStr.str();
}
string findPrimeFactorization(unsigned posInt)
{
std::stringstream primesStr;
for (unsigned i = 2; i <= posInt; i++)
{
int powerDegree = 0;
while (posInt % i == 0)
{
posInt /= i;
powerDegree++;
}
if (powerDegree >= 1)
{
primesStr << i;
if (powerDegree > 1)
primesStr << '^' << powerDegree;
if (i <= posInt)
primesStr << " x ";
}
}
return primesStr.str();
} I have a couple of suggestions that you may wish to consider: | {
"domain": "codereview.stackexchange",
"id": 3488,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, strings, primes, factors",
"url": null
} |
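For reference, the same trial-division logic as `findPrimeFactorization` above reads naturally in Python; this is a sketch of the algorithm, not a drop-in replacement for the C++ code:

```python
def prime_factorization(n: int) -> str:
    # Trial division: divide out each i completely before moving on,
    # so only primes ever produce a nonzero exponent. Like the C++
    # original, the loop bound shrinks as factors are divided out.
    parts = []
    i = 2
    while i <= n:
        power = 0
        while n % i == 0:
            n //= i
            power += 1
        if power >= 1:
            parts.append(f"{i}^{power}" if power > 1 else str(i))
        i += 1
    return " x ".join(parts)
```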
By Theorem 11.3 of Postnikov's Permutohedra, associahedra, and beyond, the number of such $$(R_1, \ldots, R_m)$$ is a polynomial $$g_m(c_0,c_1, \ldots, c_m)$$ in the $$c_j$$, of degree $$m-1$$. So $$|S_{m,n}| = \sum_{c_0+\cdots + c_m=n} \frac{n!}{c_0! c_1! \cdots c_m!} \ g_m(c_0,c_1, \ldots, c_m). \qquad (2)$$ Let $$\partial_j$$ be $$\tfrac{\partial}{\partial x_j}$$. There is some polynomial $$h_m$$ in the $$\partial_j$$, of degree $$m-1$$, such that $$\left. h_m(\partial_0, \ldots, \partial_m) \left( x_0^{c_0} \cdots x_m^{c_m} \right) \right|_{(1,1,\ldots,1)} = g_m(c_0, c_1, \ldots, c_m). \qquad (3)$$ Combining (2), (3) and the multinomial expansion of $$(x_0+x_1+\cdots+x_m)^n$$, we obtain $$|S_{m,n}| = \left. h_m(\partial_0, \ldots, \partial_m) \left( x_0+x_1+\cdots+x_m \right)^n \right|_{(1,1,\ldots,1)} . \qquad (4)$$ Let $$h_m(t,t,\ldots,t) = \sum_{k=0}^{m-1} h_{m,k} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9822877044076956,
"lm_q1q2_score": 0.8651961443863452,
"lm_q2_score": 0.8807970826714614,
"openwebmath_perplexity": 182.6497470156561,
"openwebmath_score": 0.9416841864585876,
"tags": null,
"url": "https://math.stackexchange.com/questions/3235160/spanning-forests-of-bipartite-graphs-and-distinct-row-column-sums-of-binary-matr"
} |
c#, programming-challenge, breadth-first-search
for (int i = 0; i < 3; i++)
{
CollectionAssert.AreEqual(expected[i].ToArray(), result[i].ToArray());
}
}
public IList<IList<int>> LevelOrder(Node root)
{
IList<IList<int>> result = new List<IList<int>>();
Queue<Node> Q = new Queue<Node>();
if (root == null)
{
return result;
}
Q.Enqueue(root);
while (Q.Count > 0)
{
List<int> currentLevel = new List<int>();
int qSize = Q.Count;
for (int i = 0; i < qSize; i++)
{
var curr = Q.Dequeue();
currentLevel.Add(curr.val);
if (curr.children != null)
{
foreach (var child in curr.children)
{
Q.Enqueue(child);
}
}
}
result.Add(currentLevel);
}
return result;
}
}
} } Time and space complexity are both linear in the number of nodes in the tree. You can't do better than this with the API given. If the API were open to negotiation, I'd consider IEnumerable<IReadOnlyList<int>>, which would permit better memory characteristics for some trees (e.g. if the level-width stays reasonably constant rather than, say, growing exponentially). There is also no need for this to be an instance method.
I don't like this:
if (root == null)
{
return result;
} | {
"domain": "codereview.stackexchange",
"id": 34524,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, programming-challenge, breadth-first-search",
"url": null
} |
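The level-by-level queue trick used above (snapshot the queue size, then drain exactly that many nodes) is language-independent. A rough Python version of the same traversal (class and function names are mine):

```python
from collections import deque

class Node:
    def __init__(self, val, children=None):
        self.val = val
        self.children = children or []

def level_order(root):
    # Standard BFS; the inner loop drains exactly one level per pass
    # by remembering the queue size before enqueueing children.
    result = []
    if root is None:
        return result
    queue = deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):
            node = queue.popleft()
            level.append(node.val)
            queue.extend(node.children)
        result.append(level)
    return result
```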
javascript, html, canvas, quiz
window.addEventListener("keydown", check_expr_and_go_next, true);
}
main()
<html>
<canvas id="canvas" width="800" height="600" style="border:1px solid #000000;">
</canvas>
</html> I don't see a var anywhere in your code. That means that all of your variables are global variables.
Constructing an array of integers just to pick a random element seems wasteful. The following code scales better, and in my opinion, is easier to follow because it avoids the unnecessary intermediate array.
Asking whether an expression is single digit is a bit odd. I would have the caller evaluate the expression first — which also has the advantage of evaluating it just once.
I'm not convinced that a canvas is called for. Since it's all just text, the whole game can be implemented using the usual DOM elements. The styling should then be done using CSS.
The code supports the number keys along the top row of the keyboard, but it should support the numeric keypad as well.
/* Picks a random integer n such that min ≤ n < max. */
function randomInt(min, max) {
return min + Math.floor((max - min) * Math.random());
}
function sample(candidates) {
return candidates[randomInt(0, candidates.length)];
}
function genMathExpression() {
return randomInt(0, 20) + sample('+-*') + randomInt(0, 20);
}
function isSingleDigit(number) {
return 1 <= number && number <= 9;
}
function genSingleDigitExpression() {
do {
var expr = genMathExpression();
} while (!isSingleDigit(eval(expr)));
return expr;
} | {
"domain": "codereview.stackexchange",
"id": 13765,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, html, canvas, quiz",
"url": null
} |
c#, hash-map
Title: Finding the closest value and matching it with the correct item in the list I have a list of values with their IDs like this:
ID Value
A 1.2
B 4.2
C 7.2
D 10.2
In C#, I want to find the ID with the value closest to a given value, say 3.9. In this example, B will be the answer as it has the value (4.2) closest to the given value.
My solution to this problem involved following steps:
Put the values in a list of objects (where each object has an ID property and a value property) or a dictionary with string IDs and decimal values
Loop through the list or dictionary, computing the absolute difference between each value and the given value
Print the ID of the item whose difference is smallest (in this case B)
"domain": "codereview.stackexchange",
"id": 22651,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, hash-map",
"url": null
} |
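The three steps above collapse into a single pass that tracks the minimum absolute difference. A sketch in Python (a C# version would use the same idea with a loop or LINQ; the dictionary literal repeats the question's sample data):

```python
def closest_id(values: dict, target: float) -> str:
    # Single pass: pick the key whose value minimizes |value - target|.
    return min(values, key=lambda k: abs(values[k] - target))

values = {"A": 1.2, "B": 4.2, "C": 7.2, "D": 10.2}
```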
quantum-mechanics, quantum-information, quantum-computer, decoherence
Consider the following quantum channel (for arbitrary dimension $d,n$):
$$ T:\mathbb{C}^{d\times d}\to \mathbb{C}^{n\times n}: \rho \mapsto \operatorname{tr}(\rho) \sigma $$
where $\sigma$ is any quantum state. Using the Choi-Jamiolkowski isomorphism, you can easily see that this is indeed a completely positive map and by construction, it is trace-preserving.
Clearly, the spectrum of $\sigma$ is completely independent of the spectrum of $\rho$. | {
"domain": "physics.stackexchange",
"id": 24791,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-information, quantum-computer, decoherence",
"url": null
} |
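A quick numerical check of the channel's defining property, that the output is $\operatorname{tr}(\rho)\,\sigma$ regardless of $\rho$ (so the output spectrum is that of $\sigma$), using plain 2×2 matrices; the particular $\rho$ and $\sigma$ here are arbitrary examples:

```python
def trace(m):
    return m[0][0] + m[1][1]

def replacement_channel(rho, sigma):
    # T(rho) = tr(rho) * sigma: the channel discards rho entirely and
    # prepares the fixed state sigma, rescaled by tr(rho).
    t = trace(rho)
    return [[t * x for x in row] for row in sigma]

sigma = [[0.75, 0.0], [0.0, 0.25]]      # fixed output state
rho_plus = [[0.5, 0.5], [0.5, 0.5]]     # |+><+|, a pure state
rho_mixed = [[0.5, 0.0], [0.0, 0.5]]    # maximally mixed state
out1 = replacement_channel(rho_plus, sigma)
out2 = replacement_channel(rho_mixed, sigma)
```

Both inputs have unit trace, so both outputs are exactly $\sigma$: the map is trace-preserving and erases all information about the input state.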
electromagnetic-radiation, electrical-resistance, wavelength, electrical-engineering
Would I see a distorted signal at the source due to a reflection from the pi-filter? Could the reflection damage the source? If the power is not reflected, is it absorbed by the filter? where does it go?
Since we can often "get away" without terminating signals in low frequency designs, I hardly ever worry about reflections. I've typically only worried about them at high MHz signals, and when interfacing signals across backplanes, or long board interconnects. But, it occurs to me that there should be reflections all the time. How is it that they don't typically matter (in low frequency designs)? How is it that sources driving unmatched loads are not "blown up" more often due to these reflections? You always have a reflection from your termination but the real question is if it matters. Terminating a power amplifier with a sufficiently small impedance usually leads to sparks and smoke. If your connection to the termination is short, say less than $\lambda/10$, then the input impedance is very close to that of the termination, so blowing up the generator is then up to the selection of the load; but if the transmission line is long, then the input impedance will strongly depend on its length and unpleasant surprises may abound. See the input impedance section in https://en.wikipedia.org/wiki/Transmission_line
$$Z_{in}(\ell) = Z_0 \frac{Z_L + Z_0\mathfrak{j}\rm{tan}(\beta \ell)} {Z_0+Z_L \mathfrak{j}\rm{tan}(\beta \ell)}\\=Z_L\frac{1+(Z_0/Z_L)\mathfrak{j}\rm{tan}(\beta \ell)} {1+(Z_L/Z_0) \mathfrak{j}\rm{tan}(\beta \ell)}$$ | {
"domain": "physics.stackexchange",
"id": 73193,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetic-radiation, electrical-resistance, wavelength, electrical-engineering",
"url": null
} |
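The formula above is easy to exercise numerically: for an electrically short line ($\beta\ell \ll 1$) the input impedance collapses to $Z_L$, while at a quarter wavelength it transforms to $Z_0^2/Z_L$. A sketch with Python's complex arithmetic (the 50 Ω / 100 Ω values are arbitrary choices):

```python
import math

def z_in(z0: complex, zl: complex, beta_l: float) -> complex:
    # Z_in = Z0 (Z_L + j Z0 tan(beta*l)) / (Z0 + j Z_L tan(beta*l))
    t = math.tan(beta_l)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

Z0, ZL = 50.0, 100.0                       # 2:1 mismatch
electrically_short = z_in(Z0, ZL, 0.01)    # beta*l << 1: looks like Z_L
quarter_wave = z_in(Z0, ZL, math.pi / 2)   # transforms to Z0^2 / Z_L
matched = z_in(Z0, Z0, 1.234)              # matched line: Z_in = Z0 always
```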
Stirling's Approximation $$n!\sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$$ which means that $n!$ is asymptotically equivalent to $\sqrt{2\pi n}\left(\frac{n}{e}\right)^n$ as $n$ approaches $\infty$. If you're familiar with asymptotic formulas, then you'd also know that this implies that $$\lim_{n\to\infty} \frac{n!}{\sqrt{2\pi n}\left(\frac{n}{e}\right)^n} =1$$ Now, using the algebraic laws of limits, we have $$\lim_{n\to\infty} n!=\lim_{n\to\infty} \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$$ $$\lim_{n\to\infty} e^n=\lim_{n\to\infty} \sqrt{2\pi n}\left(\frac{n^n}{n!}\right)$$ $$\lim_{n\to\infty} e=\lim_{n\to\infty} (2\pi n)^{\frac{1}{2n}}\left(\frac{n}{(n!)^{\frac{1}{n}}}\right)$$ So now $$e=\left[\lim_{n\to\infty} e^{\ln(2\pi n)^{\frac{1}{2n}}}\right]\left[\lim_{n\to\infty} \frac{n}{(n!)^{\frac{1}{n}}}\right] =\left[\lim_{n\to\infty} e^{\frac{\ln(2\pi | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363526668351,
"lm_q1q2_score": 0.8516715176833952,
"lm_q2_score": 0.8652240825770432,
"openwebmath_perplexity": 348.2621632471697,
"openwebmath_score": 0.9115509986877441,
"tags": null,
"url": "https://math.stackexchange.com/questions/935490/the-limit-of-n1-n-n-as-n-to-infty/935538"
} |
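The limit derived above, $e = \lim_{n\to\infty} n/(n!)^{1/n}$, converges slowly (the correction factor is $(2\pi n)^{-1/(2n)}$), but it is easy to verify numerically. Working in log space via `lgamma` avoids huge-integer factorials (the helper name is mine):

```python
import math

def stirling_ratio(n: int) -> float:
    # n / (n!)^(1/n), computed as exp(ln n - ln(n!)/n);
    # math.lgamma(n + 1) = ln(n!) without forming n! itself.
    return math.exp(math.log(n) - math.lgamma(n + 1) / n)
```

The sequence increases toward $e \approx 2.71828$; even at $n = 5000$ it is only within a few thousandths, consistent with the slow $(2\pi n)^{-1/(2n)}$ correction.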
reaction-mechanism, computational-chemistry, software, reaction-control
Title: Python package for modelling chemical reactions I'm looking for a package that is compatible with Python 3 and can actually do work with reactions along with, for instance, referencing values. Ideally, it should also not be a package that will take 20 days to learn how to access one value.
More specifically: there should be a way to figure out all possible reactions a certain state could lead to, a way to manipulate lists of possible reactions and applicable reactions in certain states, and to apply reactions to states to get a resulting new state (this, of course, may require some coding on my part, but just an idea of what it should lead to - the more the package can do the better).
Thanks; I'd be glad to provide any clarification in the comments. This is not really my area of expertise, but a quick search for "python chemical reactions" revealed three hits I've never seen before that may be of interest, the first one being closest to what you want.
chempy
from chempy import ReactionSystem # The rate constants below are arbitrary
rsys = ReactionSystem.from_string("""2 Fe+2 + H2O2 -> 2 Fe+3 + 2 OH-; 42
2 Fe+3 + H2O2 -> 2 Fe+2 + O2 + 2 H+; 17
H+ + OH- -> H2O; 1e10
H2O -> H+ + OH-; 1e-4
Fe+3 + 2 H2O -> FeOOH(s) + 3 H+; 1
FeOOH(s) + 3 H+ -> Fe+3 + 2 H2O; 2.5""") # "[H2O]" = 1.0 (actually 55.4 at RT)
from chempy.kinetics.ode import get_odesys
odesys, extra = get_odesys(rsys)
from collections import defaultdict
import numpy as np
tout = sorted(np.concatenate((np.linspace(0, 23), np.logspace(-8, 1)))) | {
"domain": "chemistry.stackexchange",
"id": 8191,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reaction-mechanism, computational-chemistry, software, reaction-control",
"url": null
} |
• In part a), the use of $\vert f(m)+ \cdots + f(n-1) + \int_{n}^{m} f(x) dx \vert$ is because of the assumption (without loss of generality) that $n \geq m$ right? Also, did the integral with the reversed bounds turn into a summation in order to show that the end result would be the same as the initial assumption of $\vert a_n - a_m \vert$ ? Other than that, thank you very much, it is a very detailed post. – Jamil_V Sep 16 '13 at 2:36
• Yes I used the assumption of $n≥m$ (without loss of generality) to get $|a_{n} −a_{m}|=|f(m)+...+f(n−1)+\int_{m}^{n}f(x)dx|$. I separated the integral into a sum because I wanted to estimate $|\sum_{k=m}^{n-1}f(k)-\int_{m}^{n}f(x)dx|$. I suspected that the end estimate would look like $|f(n)−f(m)|$ because I did not use the hypothesis $\lim_{n\to\infty}f(n)=0$ anywhere. You're very welcome. If you have any more questions I'd be happy to help. – user71352 Sep 16 '13 at 3:21
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9817357184418847,
"lm_q1q2_score": 0.802642059220123,
"lm_q2_score": 0.8175744695262777,
"openwebmath_perplexity": 116.28658923924968,
"openwebmath_score": 0.9510442614555359,
"tags": null,
"url": "https://math.stackexchange.com/questions/494972/cauchy-sequences-and-integrals"
} |
acoustics
If the high-frequency modes break a symmetry that the low-frequency modes don't, then the rate of change of $a_h$ has to be proportional to $a_h$ itself. This changes the growth from linear to exponential, so starting with a nonzero value of $a_h$ really can make a significant difference. (However, in the absence of any warmup, energy can still be transferred to the asymmetric modes by asymmetric bumps on the gong's surface; it just takes longer to get started.) | {
"domain": "physics.stackexchange",
"id": 57889,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "acoustics",
"url": null
} |
nlp, sentiment-analysis, bert, language-model, text-classification
Title: What is purpose of the [CLS] token and why is its encoding output important? I am reading this article on how to use BERT by Jay Alammar and I understand things up until:
For sentence classification, we're only interested in BERT's output for the [CLS] token, so we select that slice of the cube and discard everything else.
I have read this topic, but still have some questions:
Isn't the [CLS] token at the very beginning of each sentence? Why is it that "we are only interested in BERT's output for the [CLS] token"? Can anyone help me get my head around this? Thanks! CLS stands for classification, and it's there to represent sentence-level classification.
In short, this tag was introduced to make BERT's pooling scheme work. I suggest reading up on this blog, where this is also covered in detail.
"domain": "datascience.stackexchange",
"id": 9745,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nlp, sentiment-analysis, bert, language-model, text-classification",
"url": null
} |
This chapter shows how to integrate functions of two or more variables. Definite integrals appear when one solves the area problem. A multiple integral is an integral in which the integrand involves a function of more than one variable and which requires for evaluation repetition of the integration process. Double integrals over rectangles relate directly to Riemann sums from single-variable calculus: the volume under the surface is approximated analytically and geometrically by sums over a grid of sample points (for example, lower-left sample points over a square region). Iterated integrals: Fubini's Theorem can be used to evaluate double integrals where the region of integration is a rectangle. Introduction to definite integrals and double integrals: the definite integral $\int_a^b f(x)\,dx$ is physically the area under the curve $y = f(x)$, the x-axis, and the two ordinates $x = a$ and $x = b$.
Multiple integration by parts: $\int u \, dv = uv - \int v \, du$. Just as with double integrals, the only trick is determining the limits on the iterated integrals. Double | {
"domain": "sissynene.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9890130586647623,
"lm_q1q2_score": 0.803936390061388,
"lm_q2_score": 0.8128673155708975,
"openwebmath_perplexity": 1002.5570864988887,
"openwebmath_score": 0.841425359249115,
"tags": null,
"url": "http://qxwx.sissynene.it/multiple-integral.html"
} |
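The Riemann-sum picture behind the double integral is easy to implement directly. A sketch approximating $\iint_{[0,1]^2} x^2 y \, dA = 1/6$ with a midpoint rule (the integrand and grid size are arbitrary choices for illustration):

```python
def double_riemann(f, a, b, c, d, n=200):
    # Midpoint-rule double sum over an n x n grid on [a,b] x [c,d];
    # this is the Riemann-sum definition behind Fubini's theorem.
    hx = (b - a) / n
    hy = (d - c) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx
        for j in range(n):
            y = c + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

approx = double_riemann(lambda x, y: x * x * y, 0.0, 1.0, 0.0, 1.0)
```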
# Math Help - I'm trying to understand why this is the domain of this function
1. ## I'm trying to understand why this is the domain of this function
I'm trying to find the domain for the function: square root of (9-x^2)...So what I do is solve for 9-x^2 >= 0 (b/c you can't have a negative number under a radical)...and get x is less than or equal to +3 and -3....
but the domain is supposedly [-3,3]...which would mean x > or = -3....but when I solved for it I got that x < or = -3...IT DOESN'T MAKE SENSE...
2. Originally Posted by jonjon1324
I'm trying to find the domain for the function: square root of (9-x^2)...So what I do is solve for 9-x^2 >= 0 (b/c you can't have a negative number under a radical)...and get x is less than or equal to +3 and -3....
but the domain is supposedly [-3,3]...which would mean x > or = -3....but when I solved for it I got that x < or = -3...IT DOESN'T MAKE SENSE...
Draw the graph of y = 9 - x^2. For what values of x is $y \geq 0$ ....?
3. Originally Posted by mr fantastic
Draw the graph of y = 9 - x^2. For what values of x is $y \geq 0$ ....?
Yeah, in the graph I can see that, but if I'm actually writing out the problem, and I find the square root of 9, how does the inequality flip from x <= + and -3 to x >= -3? | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9857180656553328,
"lm_q1q2_score": 0.8358571347390824,
"lm_q2_score": 0.8479677545357569,
"openwebmath_perplexity": 272.50361983417446,
"openwebmath_score": 0.8143748641014099,
"tags": null,
"url": "http://mathhelpforum.com/calculus/156550-i-m-trying-understand-why-domain-function.html"
} |
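The step that trips people up in the thread above is taking square roots of both sides of $x^2 \le 9$: it gives $|x| \le 3$, a single two-sided condition, not two separate one-sided ones. A quick check of the resulting domain $[-3, 3]$:

```python
def in_domain(x: float) -> bool:
    # sqrt(9 - x^2) is real exactly when 9 - x^2 >= 0, i.e. |x| <= 3
    return 9 - x * x >= 0
```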
c#, mvvm, xaml, rubberduck
private readonly ICommand _exportResultsCommand;
public ICommand ExportResultsCommand { get { return _exportResultsCommand; } }
private readonly NavigateCommand _navigateCommand;
public ICommand NavigateCommand { get { return _navigateCommand; } }
private bool _isBusy;
public bool IsBusy
{
get { return _isBusy; }
private set
{
_isBusy = value;
OnPropertyChanged();
}
}
public TestExplorerModelBase Model { get { return _model; } }
private void ExecuteRefreshCommand(object parameter)
{
if (_isBusy)
{
return;
}
IsBusy = true;
_model.Refresh();
SelectedItem = null;
IsBusy = false;
}
private void EvaluateCanExecute()
{
Dispatcher.CurrentDispatcher.Invoke(CommandManager.InvalidateRequerySuggested);
}
private bool CanExecuteRefreshCommand(object parameter)
{
return !IsBusy;
}
private void ExecuteRepeatLastRunCommand(object parameter)
{
IsBusy = true;
_testEngine.Run(_model.Tests.Where(test => test.Result.Outcome != TestOutcome.Unknown));
IsBusy = false;
EvaluateCanExecute();
}
private void ExecuteRunNotExecutedTestsCommand(object parameter)
{
IsBusy = true;
_testEngine.Run(_model.Tests.Where(test => test.Result.Outcome == TestOutcome.Unknown));
IsBusy = false;
EvaluateCanExecute();
}
private void ExecuteRunFailedTestsCommand(object parameter)
{
IsBusy = true;
_testEngine.Run(_model.Tests.Where(test => test.Result.Outcome == TestOutcome.Failed));
IsBusy = false;
EvaluateCanExecute();
} | {
"domain": "codereview.stackexchange",
"id": 15436,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, mvvm, xaml, rubberduck",
"url": null
} |
graph-colouring, set-cover, hypergraphs, hitting-set
The Probabilistic Method by Noga Alon and Joel H. Spencer (Chapter 13 in the 4th edition).
Geometric Discrepancy by Jiřı́ Matoušek. | {
"domain": "cstheory.stackexchange",
"id": 5715,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "graph-colouring, set-cover, hypergraphs, hitting-set",
"url": null
} |
nuclear-physics, nuclear-engineering
Title: Water-cooled fast neutron reactors Can anyone explain why fast neutron reactor designs use sodium/lead/salt cooling, instead of water (heavy/light)?
Is that because neutron absorption by water would not allow to break even in fuel cycle?
Will heavy water help here?
Or is it that water slows neutrons so efficiently that even if we reduce the amount of water inside the reactor (by increasing flow speed), it will still significantly lower neutron energy, while sodium does not slow down neutrons at all? First question: Na for fast reactors
Sodium is better for fast reactors because it has a lower total cross section than water. Fast reactors still have some moderation, and obviously all types have some neutron loss due to absorption from the moderator in addition to whatever other materials may be in the core. For numbers, I'm going to reference NIST:
H http://www.ncnr.nist.gov/resources/n-lengths/elements/h.html
Na http://www.ncnr.nist.gov/resources/n-lengths/elements/na.html
Sum up all of the cross sections for all types of reactions, which is the total scattering + absorption from that link. You find $82 \text{b}$ for Hydrogen and $3.8 \text{b}$ for Sodium. For water you add Oxygen. Combine that with density and atomic weight information to get the macroscopic cross sections. For water, we have $3.45 cm^{-1}$ and for Na we have $0.115 cm^{-1}$. These numbers are from my reference.
A fast reactor primarily wants the moderator to do nothing. Indeed, without a coolant or moderator the reaction works just fine... aside from not getting cooled. Even if a moderator wouldn't absorb any neutrons, it would muck with your intent, which is to get the fuel atoms (U, Th, Pu) to absorb fast neutrons from fission. The reasons you want fast neutrons to hit the fuel atoms include:
to breed new fuel, for which the isotope chain is only neutronically favorable with fast neutrons
some isotopes are only fissionable at fast energies, and at lower energies will have only a small fission fraction, meaning you couldn't sustain the chain reaction ($k<1$) with a thermal spectrum
"domain": "physics.stackexchange",
"id": 6552,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nuclear-physics, nuclear-engineering",
"url": null
} |
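The macroscopic cross sections quoted in the answer can be recomputed from density and atomic weight. A rough sketch (the exact values depend on which cross-section set is used, so the numbers here only approximately match those quoted above):

```python
N_A = 6.022e23  # Avogadro's number, mol^-1
BARN = 1e-24    # cm^2

def macroscopic_sigma(density_g_cm3, molar_mass, sigma_total_barns):
    """Sigma = n * sigma, where n is the number density of atoms/molecules."""
    n = density_g_cm3 * N_A / molar_mass   # cm^-3
    return n * sigma_total_barns * BARN    # cm^-1

# Water: 2 H (~82 b each) + 1 O (~4.2 b); density ~1 g/cm^3, M = 18 g/mol
sigma_water = macroscopic_sigma(1.0, 18.0, 2 * 82.0 + 4.2)
# Sodium: ~3.8 b total; liquid density ~0.97 g/cm^3, M = 23 g/mol
sigma_na = macroscopic_sigma(0.97, 23.0, 3.8)

print(f"water: {sigma_water:.2f} cm^-1, sodium: {sigma_na:.3f} cm^-1")
```

Whatever cross-section set is used, the conclusion is the same: water interacts with neutrons tens of times more often per centimetre than sodium.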
java, file, console, io
FYI, this is what it looks like when run:
Try the help system. Enter "stop" to end.
Topics:
- if
- while
Enter topic: if
if (condition) {
// if statements
} else {
// else statements
}
Enter topic: foo
Topic not found: foo
Enter topic: stop
Process finished with exit code 0
Based on text file HelpSystem.txt below - obviously could have more useful content, but this will do for now.
#if
if (condition) {
// if statements
} else {
// else statements
}
#while
while (condition) {
// statements
}
Interface
I don't think that instantiating a HelpSystem should have a side-effect of starting the interpreter loop. Either of these following designs would be better:
Instantiate, then run the loop separately
HelpSystem help = new HelpSystem();
help.run();
A static method
HelpSystem.run(); | {
"domain": "codereview.stackexchange",
"id": 19479,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, file, console, io",
"url": null
} |
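The `#topic` file format in the review can be parsed into a lookup table in a few lines. A language-neutral sketch in Python (the file layout is taken from `HelpSystem.txt` above; the function name is my own):

```python
def parse_help(text):
    """Split a help file on '#topic' header lines into a {topic: body} dict."""
    topics = {}
    current, lines = None, []
    for line in text.splitlines():
        if line.startswith("#"):
            if current is not None:
                topics[current] = "\n".join(lines).rstrip()
            current, lines = line[1:].strip(), []
        elif current is not None:
            lines.append(line)
    if current is not None:
        topics[current] = "\n".join(lines).rstrip()
    return topics

sample = """#if
if (condition) {
} else {
}
#while
while (condition) {
}"""
help_topics = parse_help(sample)
```

Lookups then become a simple `dict.get`, with a "Topic not found" message for missing keys.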
crystallography, structural-biology
In this picture, a cat was used to calculate structure factor amplitudes, and a Manx (cat without tail) was used to calculate structure factor phases. Then, they were combined and Fourier-transformed. The resulting (2Fo-Fc) electron density shows the tail at lower signal strength than the remainder of the cat, and there is some noise. The difference between the perfect image of a cat and the observed image is caused by the phases lacking the information of the tail. (See this slide for the entire pictorial explanation.)
Use of omit maps
[OP] But what I don't understand is when omit maps are used w.r.t. the source of the phases.
To reiterate the other answer, omit maps may be used whenever the phases of the structure factor are based on an imperfect model of the structure. If parts of the electron density are difficult to interpret, the model in this part is left out, and a new density is calculated. This density is not biased by a potentially incorrect model in this area, so it might help to correctly build that part. (Here is a slide on model bias.)
With the computer power available nowadays, you can also omit every part of the structure in sequence and piece back together the entire electron density (if you are not sure which part of the model is incorrect). See e.g. https://www.phenix-online.org/documentation/reference/composite_omit_map.html
Omit maps can be calculated no matter where initial phases came from. In the final stages of model building and refinement, the phases are usually obtained exclusively from the model, even if you start with experimental phases.
Additional aspects raised in the chat
[OP in comments] is the reason why omit mapping works because of the fact that every atom contributes to a diffraction spot?
Yes, exactly. The phases of each structure factor are almost correct, and the amplitudes are experimental (so unbiased but they do have experimental error, e.g. noise). It is worse to have a bad part of the model compared to leaving that part out all together. | {
"domain": "chemistry.stackexchange",
"id": 14558,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "crystallography, structural-biology",
"url": null
} |
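The cat/Manx demonstration can be reproduced in miniature on a 1-D "structure": take the Fourier amplitudes of one signal and the phases of another, then transform back. This toy sketch is my own construction (not from the cited slides) and uses a naive DFT so it stays self-contained:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

cat  = [0, 1, 1, 1, 1, 1, 1, 0]   # "full" structure (with tail)
manx = [0, 1, 1, 1, 1, 1, 0, 0]   # model missing one feature

F_cat, F_manx = dft(cat), dft(manx)
# observed amplitudes combined with model phases, as in the slide
mixed = [abs(a) * cmath.exp(1j * cmath.phase(b)) for a, b in zip(F_cat, F_manx)]
density = [z.real for z in idft(mixed)]
# In the crystallographic analogue, the omitted feature typically
# reappears in `density`, but at reduced weight (model bias).
```

Using a signal's own amplitudes and phases reconstructs it exactly; the bias only appears when the phases come from an imperfect model.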
homework-and-exercises, classical-mechanics
Title: What is the maximum mass that the airplane can have and still maintain enough lift to fly? A commercial airplane travels at a speed which is 85% of the speed of sound. The wings of the
airplane are designed such that the bottoms of the wings are flat and the tops of the wings are
curved so that air travelling above the wings follows a path which is 15% longer than the straight
path of the air below the wings. The wings are approximately rectangular in shape with a length
of 35 m and a width of 8 m. The thickness of the wings is negligible. What is the maximum mass
that the airplane can have and still maintain enough lift to fly? Aeroplanes fly by thrusting air downwards and by thus being borne up by the Newton's-Third-Law begotten upwards reaction force of the downthrusted air on the aeroplane. There are many excellent answers to the Physics SE question "What really allows airplanes to fly?" that you should read.
But basically the simplest estimates arise from calculating the ram pressure thrust upwards on the aeroplane given the above principle. The variables you need to know are density of the air at the height, the relative speed of the aeroplane to the air, the angle of attack that the wing makes with the velocity vector of the air relative to a frame comoving with the aeroplane and the scale factor that yields the effective surface area of the wing - which at subsonic speeds is considerably larger than the wing itself because the disturbance to the fluid flow pattern that arises from the wing is felt over a region that is considerably bigger than the wing. The last variable - effective area - can also be expressed as the wing's coefficient of lift.
To illustrate these points, we can do a back of the envelope estimation of ram pressure in this case: see my drawing below of a simple aerofoil with significant angle of attack being held stationary in a wind tunnel. This is the kind of analysis you should do to get an idea of your specific situation. Your air density is going to be rather less than that for the following calculation (commercial jetliners reach their top speed at heights of about 8000m): | {
"domain": "physics.stackexchange",
"id": 10377,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, classical-mechanics",
"url": null
} |
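For reference, the "equal transit time" calculation the homework statement is fishing for can be sketched as follows. Sea-level air density and speed of sound are assumed; as the answer explains, this model of lift is physically dubious, so treat the result as the textbook answer rather than real aerodynamics:

```python
rho = 1.225   # air density at sea level, kg/m^3 (assumption)
c = 343.0     # speed of sound at sea level, m/s (assumption)
g = 9.81      # m/s^2

v_below = 0.85 * c           # straight path under the wing
v_above = 1.15 * v_below     # 15% longer path, equal transit time assumed
area = 2 * 35.0 * 8.0        # two rectangular wings, m^2

# Bernoulli: pressure deficit on top of the wing
delta_p = 0.5 * rho * (v_above**2 - v_below**2)
max_mass = delta_p * area / g
print(f"max mass ~ {max_mass:.2e} kg")
```

The result comes out near $10^6$ kg, far above any real airliner, which is one symptom of the flaws the answer describes.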
Suppose $$f_n: X\to \mathbb{C}$$ for all $$n\in \mathbb{N}$$. Since $$f_n(x)$$ converges if and only if $$Re(f_n(x))$$ and $$Im(f_n(x))$$ both converge, we have $$\{x: \lim f_n\text{ exists}\}=\{x: \lim Re(f_n)\text{ exists}\}\cap \{x: \lim Im(f_n)\text{ exists}\}.$$ Both sets on the right are measurable by the previous argument, and hence so is their intersection. This gives the desired result.
Here's another way of doing it. Define
$$g:=\liminf_{n\to\infty} f_n$$
$$h:=\limsup_{n\to\infty} f_n$$
The functions $$g$$ and $$h$$ are both measurable.
Then note that
$$E:=\{x\in X: \lim_{n\to\infty} f_n(x) \text{ exists}\}=\{x\in X: g(x)=h(x)\}$$
As $$g$$ and $$h$$ are measurable, so is $$E$$.
Edit: If by "the limit exists" you mean also that it is finite, consider the set $$A=\{ x\in X: \limsup_{n\to\infty} f_n(x) <\infty\}$$ Note that $$A$$ is measurable, and thus so is $$E\cap A$$, which is what you want. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9808759660443167,
"lm_q1q2_score": 0.8064655512614494,
"lm_q2_score": 0.8221891239865619,
"openwebmath_perplexity": 133.3278387550752,
"openwebmath_score": 0.9793105125427246,
"tags": null,
"url": "https://math.stackexchange.com/questions/3371337/if-f-n-is-sequence-of-measurable-functions-on-x-then-x-lim-f-nx"
} |
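The liminf/limsup criterion can be illustrated numerically. A sketch of the idea with $f_n(x) = (-1)^n x$, whose pointwise limit exists only at $x = 0$ (the truncated tail min/max below stand in for liminf and limsup):

```python
def f(n, x):
    return (-1) ** n * x

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
N = 1000  # look at a tail window of the sequence

E = []  # approximate version of {x : lim f_n(x) exists}
for x in xs:
    tail = [f(n, x) for n in range(N, N + 50)]
    g, h = min(tail), max(tail)   # stand-ins for liminf / limsup
    if abs(h - g) < 1e-12:        # limit exists where liminf == limsup
        E.append(x)
```

For $x \neq 0$ the tail keeps oscillating between $-|x|$ and $|x|$, so only $x = 0$ survives.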
quantum-gate, measurement
Title: Matrix representation of a measurement It is well known that any operation on a quantum computer is described with a unitary matrix (a quantum gate) because quantum computing is reversible.
The only non-reversible operation is the measurement of a qubit. Therefore, I would expect that a measurement can be described by a non-unitary matrix (or even by a matrix that is not invertible).
A result of the measurement should be a probability distribution of all possible states of measured qubits.
My question: How does a matrix representing measurement of qubit(s) look like? There is no single matrix representing measurement. Projective measurements are represented by a set of orthogonal projectors. For example, measurement of a single qubit in the standard basis is represented by projectors
$$\pi_0=\begin{pmatrix}
1 &0\\
0 &0
\end{pmatrix}$$
and
$$\pi_1=\begin{pmatrix}
0 &0\\
0 &1
\end{pmatrix}$$
If a qubit was initially in state with density matrix $\rho$, then the post-measurement density matrix $\rho'$ is
$$\rho'=\sum_{i=0}^1\pi_i\rho\pi_i$$
Also, probability distribution of possible measurement outcomes can be obtained only by multiple measurements on an ensemble of identical systems; a single measurement gives no information about probabilities of possible measurement outcomes. | {
"domain": "quantumcomputing.stackexchange",
"id": 1153,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-gate, measurement",
"url": null
} |
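The projector formalism in the answer is easy to check numerically. A small sketch with plain Python lists, using the state $|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$, for which both outcomes are equally likely:

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

pi0 = [[1, 0], [0, 0]]
pi1 = [[0, 0], [0, 1]]
rho = [[0.5, 0.5], [0.5, 0.5]]   # density matrix of |+>

probs = [trace(matmul(p, rho)) for p in (pi0, pi1)]        # Born rule: tr(pi_i rho)
terms = [matmul(matmul(p, rho), p) for p in (pi0, pi1)]    # pi_i rho pi_i
rho_after = [[sum(t[i][j] for t in terms) for j in range(2)] for i in range(2)]
# rho_after is diagonal: the off-diagonal coherences are destroyed
```

Note the map $\rho \mapsto \sum_i \pi_i \rho \pi_i$ is linear but not unitary, matching the question's intuition.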
python, python-3.x, web-scraping, asynchronous, python-requests
@asyncio.coroutine
def get(*args, **kwargs):
"""
A wrapper method for aiohttp's get method. Taken from Georges Dubus' article at
http://compiletoi.net/fast-scraping-in-python-with-asyncio.html
"""
response = yield from aiohttp.request('GET', *args, **kwargs)
return (yield from response.read_and_close())
@asyncio.coroutine
def extract_text(url, sem):
"""
Given the url for a chapter, extract the relevant text from it
:param url: the url for the chapter to scrape
:return: a string containing the chapter's text
"""
with (yield from sem):
page = yield from get(url)
tree = etree.HTML(page)
paragraphs = tree.findall('.//*/div[@class="entry-content"]/p')[1:-1]
return url, b'\n'.join(map(etree.tostring, paragraphs))
def generate_links():
"""
Generate the links to each of the chapters
:return: A list of strings containing every url to visit
"""
start_url = 'https://twigserial.wordpress.com/'
base_url = start_url + 'category/story/'
tree = etree.HTML(requests.get(start_url).text)
xpath = './/*/option[@class="level-2"]/text()'
return [base_url + suffix.strip() for suffix in tree.xpath(xpath)]
@asyncio.coroutine
def run(links):
sem = asyncio.Semaphore(5)
fetchers = [extract_text(link, sem) for link in links]
return [(yield from f) for f in asyncio.as_completed(fetchers)] | {
"domain": "codereview.stackexchange",
"id": 17062,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, web-scraping, asynchronous, python-requests",
"url": null
} |
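The generator-based `@asyncio.coroutine`/`yield from` style shown above was deprecated in Python 3.8 and removed in 3.11. The same semaphore-bounded fan-out looks like this with native coroutines; the network call is replaced by a sleep so the sketch stays self-contained (in the real scraper this would be an aiohttp GET):

```python
import asyncio

async def fetch(url, sem):
    """Stand-in for an HTTP GET; the semaphore caps concurrency at 5."""
    async with sem:
        await asyncio.sleep(0.01)   # pretend network latency
        return url, f"<page for {url}>"

async def run(links):
    sem = asyncio.Semaphore(5)
    tasks = [fetch(link, sem) for link in links]
    # gather preserves input order, unlike as_completed in the original
    return await asyncio.gather(*tasks)

links = [f"https://example.invalid/chapter/{i}" for i in range(12)]
results = asyncio.run(run(links))
```

`asyncio.run` also replaces the manual event-loop setup the original code would have needed.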
the generalized $p$-norm of a vector or (numeric) matrix is returned by Norm[expr, p]. Calculate the norm of a vector in the plane. It is the distance that a taxi travels along the streets of a city that. Besides the familiar Euclidean norm based on the dot product, there are a number of other important norms that are used in numerical analysis. Computing the norm of a matrix. Remember, we can write a vector that starts at There is a problem though. When y is a centered unit vector, the vector β*y has L2 norm β. Example 4 Find a unit vector that has the same direction as the vector w = <-3, 5>. Vector norms. The associated norm is called the two-norm. The magnitude of the vector in Cartesian coordinates is the square root of the sum of the squares of its coordinates. The Frobenius norm of a vector coincides with its 2-norm. The L2-norm is the. Norms are 0 if and only if the vector is the zero vector. In molecular biology, a vector may be a virus or a plasmid that carries a piece of foreign DNA to a host cell. Frobenius norm. The length of a vector is a nonnegative number that describes the extent of the vector in space, and is sometimes referred to as the vector's magnitude or the norm. This returns a vector with the square root of each squared component, thus 1 2 3 instead of the Euclidean norm. This is a trivial function to write yourself: norm_vec <- function(x) sqrt(sum(x^2)) | {
"domain": "haus-cecilie.de",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9899864302631448,
"lm_q1q2_score": 0.8394765683893105,
"lm_q2_score": 0.8479677526147223,
"openwebmath_perplexity": 690.0616017798108,
"openwebmath_score": 0.842029333114624,
"tags": null,
"url": "http://fnsv.haus-cecilie.de/"
} |
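The hand-rolled Euclidean norm mentioned in the text (translated here from the R one-liner) and the generalized $p$-norm can be sketched in Python; for $p = 2$ on a 2-D vector this reduces to `math.hypot`:

```python
import math

def p_norm(v, p=2):
    """Generalized p-norm: (sum |v_i|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1 / p)

v = (3.0, 4.0)
euclidean = p_norm(v)        # 2-norm, the usual vector length
taxicab = p_norm(v, 1)       # 1-norm: distance along city streets
builtin = math.hypot(*v)     # same as the 2-norm for this vector
```

The $p = 1$ case is the "taxi" distance the text alludes to; all $p$-norms are zero exactly on the zero vector.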
\begin{proof}
In this proof all colimits are taken in the category of sets.
By Lemma \ref{lemma-preserve-products} we have
$\colim M_i \times \colim M_i = \colim (M_i \times M_i)$
hence we can use the maps $+ : M_i \times M_i \to M_i$
to define an addition map on $\colim M_i$. A straightforward
argument, which we omit, shows that the set $\colim M_i$ with this
addition is the colimit in the category of abelian groups.
\end{proof}
\begin{lemma}
\label{lemma-split-into-connected}
Let $\mathcal{I}$ be an index category, i.e., a category. Assume
that for every solid diagram
$$\xymatrix{ x \ar[d] \ar[r] & y \ar@{..>}[d] \\ z \ar@{..>}[r] & w }$$
in $\mathcal{I}$ there exists an object $w$ and dotted arrows
making the diagram commute. Then $\mathcal{I}$ is a (possibly empty)
disjoint union of categories satisfying the condition above and
the condition of Lemma \ref{lemma-preserve-products}.
\end{lemma}
\begin{proof}
If $\mathcal{I}$ is the empty category, then the lemma is true.
Otherwise, we define a relation on objects of $\mathcal{I}$ by
saying that $x \sim y$ if there exists a $z$ and
morphisms $x \to z$ and $y \to z$. This is an equivalence
relation by the assumption of the lemma. Hence $\Ob(\mathcal{I})$
is a disjoint union of equivalence classes. Let $\mathcal{I}_j$
be the full subcategories corresponding to these equivalence classes.
Then $\mathcal{I} = \coprod \mathcal{I}_j$ as desired.
\end{proof} | {
"domain": "columbia.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.98504290989414,
"lm_q1q2_score": 0.800709185887795,
"lm_q2_score": 0.8128673155708975,
"openwebmath_perplexity": 141.60728365076594,
"openwebmath_score": 0.9809542298316956,
"tags": null,
"url": "http://stacks.math.columbia.edu/tag/04AX"
} |
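The addition map used in the first proof can be written out explicitly. In the notation of that proof, it is the composition (a sketch, not part of the original text):

```latex
$$\colim M_i \times \colim M_i
  \xrightarrow{\ \cong\ } \colim (M_i \times M_i)
  \xrightarrow{\ \colim(+_i)\ } \colim M_i$$
```

where the first arrow is the isomorphism from Lemma \ref{lemma-preserve-products} and $+_i$ denotes the addition on $M_i$.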
or $f(x, y, z)$. Double Integrals in Cylindrical Coordinates 3. 'tiled' integral2 transforms the region of integration to a rectangular shape and subdivides it into smaller rectangular regions as needed. Calculating the double integral in the new coordinate system can be much simpler. Multiple Integrals 14.1 Double Integrals 4 This chapter shows how to integrate functions of two or more variables. Changing the order of integration sometimes leads to integrals that are more easily evaluated; conversely, leaving the order alone might result in integrals that are difficult or impossible to integrate. First, a double integral is defined as the limit of sums. In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane. Contour integration is closely related to the calculus of residues, a method of complex analysis. Consider, for example, a function of two variables $$z = f\left( {x,y} \right).$$ The double integral of function $$f\left( {x,y} \right)$$ is denoted by $\iint\limits_R {f\left( {x,y} \right)dA},$ where $$R$$ is the region of integration … Two examples; 2. Some Properties of Integrals; 8 Techniques of Integration. The technique involves reversing the order of integration. The idea is to evaluate each integral separately, starting with the inside integral. Integration Method Description 'auto' For most cases, integral2 uses the 'tiled' method. Double integrals are usually definite integrals, so evaluating them results in a real number.
Consequently, we are now ready to convert all double integrals to iterated integrals and demonstrate how the properties listed earlier can help us evaluate double integrals when the function $$f(x,y)$$ is more complex. The definite integral can be extended to functions of more than one variable. Volumes as | {
"domain": "kabero.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9773707993078212,
"lm_q1q2_score": 0.8345893935251338,
"lm_q2_score": 0.8539127566694177,
"openwebmath_perplexity": 693.7423234955559,
"openwebmath_score": 0.9358883500099182,
"tags": null,
"url": "http://kabero.com/1nr89fz/a56140-multiple-integrals-examples"
} |
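Evaluating the iterated integral "inside first" can be mimicked numerically. A midpoint-rule sketch for $\iint_R xy\,dA$ over $R = [0,1]\times[0,2]$, whose exact value is $1$:

```python
def double_integral(f, ax, bx, ay, by, n=200):
    """Midpoint rule: do the inner (y) sum first, then the outer (x) sum."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        inner = sum(f(x, ay + (j + 0.5) * hy) for j in range(n)) * hy
        total += inner * hx
    return total

approx = double_integral(lambda x, y: x * y, 0.0, 1.0, 0.0, 2.0)
```

Swapping the roles of x and y (changing the order of integration) gives the same value here, which is a useful sanity check when the order is legitimately interchangeable.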
beginner, comparative-review, swift, ios, core-data
}
}
}
}
}
func applicationWillResignActive(_ application: UIApplication) {
// Sent when the application is about to move from active to inactive state. This can occur for certain types of temporary interruptions (such as an incoming phone call or SMS message) or when the user quits the application and it begins the transition to the background state.
// Use this method to pause ongoing tasks, disable timers, and invalidate graphics rendering callbacks. Games should use this method to pause the game.
}
func applicationDidEnterBackground(_ application: UIApplication) {
// Use this method to release shared resources, save user data, invalidate timers, and store enough application state information to restore your application to its current state in case it is terminated later.
// If your application supports background execution, this method is called instead of applicationWillTerminate: when the user quits.
}
func applicationWillEnterForeground(_ application: UIApplication) {
// Called as part of the transition from the background to the active state; here you can undo many of the changes made on entering the background.
}
func applicationDidBecomeActive(_ application: UIApplication) {
// Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the background, optionally refresh the user interface.
}
func applicationWillTerminate(_ application: UIApplication) {
// Called when the application is about to terminate. Save data if appropriate. See also applicationDidEnterBackground:.
// Saves changes in the application's managed object context before the application terminates.
self.saveContext()
}
// MARK: - Core Data stack | {
"domain": "codereview.stackexchange",
"id": 32971,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, comparative-review, swift, ios, core-data",
"url": null
} |
4. Hello, Punch!
The equation of a curve is $\displaystyle y\:=\:(x-4)^2+(-1)$.
It shows that the curve has a minimum point of (4,-1).
Now, if I have a modulus of this equation, that is: $\displaystyle y \:=\:|(x-4)^2+(-1)|$,
. . the turning point would be (4,1).
Is the turning point a maximum point or minimum point now?
Visualize their graphs . . .
The graph of the parabola looks like this:
Code:
|
| * *
|
|
| * *
|
--+----*-------------*-----
| * *
| * *
| *
|
There is an absolute minimum at (4, -1).
There is no maximum.
With absolute values, any point below the $\displaystyle x$-axis
. . is reflected upward.
The graph of the modulus function is:
Code:
|
| * *
|
| *
| * * * *
| * *
--+----*-------------*-----
| 3 5
|
It has absolute minimums at (3,0) and (5,0).
. . and a relative (local) maximum at (4,1).
5. Originally Posted by Soroban
Hello, Punch!
Visualize their graphs . . .
The graph of the parabola looks like this:
Code:
|
| * *
|
|
| * *
|
--+----*-------------*-----
| * *
| * *
| *
|
There is an absolute minimum at (4, -1).
There is no maximum.
With absolute values, any point below the $\displaystyle x$-axis
. . is reflected upward.
The graph of the modulus function is: | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9833429614552197,
"lm_q1q2_score": 0.825746488577368,
"lm_q2_score": 0.83973396967765,
"openwebmath_perplexity": 2244.8287204266026,
"openwebmath_score": 0.87571781873703,
"tags": null,
"url": "http://mathhelpforum.com/geometry/124231-minor-clarification-modulus-graphs.html"
} |
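Soroban's reading of the modulus graph can be verified on a small grid (a quick numeric sketch of $y = |(x-4)^2 - 1|$):

```python
def f(x):
    return abs((x - 4) ** 2 - 1)

xs = [3 + i * 0.5 for i in range(5)]        # 3.0, 3.5, 4.0, 4.5, 5.0
values = {x: f(x) for x in xs}

zeros = [x for x, y in values.items() if y == 0]   # absolute minima
peak = max(values, key=values.get)                 # local maximum on [3, 5]
```

The reflection of the negative dip turns the old minimum at (4, -1) into a relative maximum at (4, 1), flanked by absolute minima at (3, 0) and (5, 0).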
spacetime, string-theory, spacetime-dimensions, cosmological-constant, fine-tuning
For instance, in a simple model in which the six dimensions are simply rolled up as circles of radius $r_i$, we get something like $G^{(10)} = \left(\prod_{i=4}^9 r_i\right) G^{(4)}$, but the rest of the resultant theory is nothing like the Standard Model. The general idea holds, though - the size of the compactified extra dimensions determines the ratio between the four- and ten-dimensional gravitational strengths.
The cosmological constant - the vacuum energy - is different: many stringy models preserve supersymmetry, but unbroken supersymmetry imposes zero ground-state energy, meaning supersymmetric models cannot straightforwardly generate a non-zero cosmological constant. Non-supersymmetric models can, but the quantum corrections to this vacuum energy are typically both large and hard to compute. It is not the generic size of the extra dimensions, but the actual details of the quantum theory generated, that determine this.
Finally, compactification is not the only thing that can generate scales. Braneworld models do not arrive at the effective four-dimensional theory solely through the dimensional reduction on compactified dimensions, but conceive of our universe as a "brane" inside a (not necessarily compactified) higher-dimensional space, and the restriction of physics to this brane then yields the effective theory.
For instance, in the Randall-Sundrum model, the "full" universe is five-dimensional and there's a gradient over this universe where the weak and the gravitational force are "equal" on one end, but the hierarchy between them emerges as one moves to the other end, where "our universe" sits. This resolves the hierarchy problem not through the specific geometry of the extra dimensions, but through the specific physics assumed to be realized in them. | {
"domain": "physics.stackexchange",
"id": 44129,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "spacetime, string-theory, spacetime-dimensions, cosmological-constant, fine-tuning",
"url": null
} |
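The toy relation $G^{(10)} = \left(\prod_i r_i\right) G^{(4)}$ for circle compactifications is a one-liner. A numeric sketch (the radii and units are arbitrary illustrative values, not physical ones):

```python
from math import prod

G10 = 1.0                                 # ten-dimensional coupling, toy units
radii = [2.0, 2.0, 2.0, 1.0, 1.0, 1.0]    # six compactification radii (made up)

# Larger compactified volume -> weaker effective 4D gravity
G4 = G10 / prod(radii)
```

The point is only the scaling: the bigger the compactified volume, the larger the hierarchy between the four- and ten-dimensional gravitational strengths.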
The corollary follows from the fact that any normal and pseudocompact space is countably compact (see here).
Remarks
The proof of Theorem 3 actually gives a more general result. Note that the second factor only needs to be paracompact and that every point has a countable base (i.e. first countable). The first factor $X$ has to be countably compact. The shrinking requirement for $X$ is flexible – if open covers of a certain size for $X$ are shrinkable, then open covers of that size for the product are shrinkable. We have the following corollaries.
Corollary 5
Let $X$ be a $\kappa$-shrinking and countably compact space and let $Y$ be a paracompact first countable space. Then $X \times Y$ is a $\kappa$-shrinking space.
Corollary 6
Let $X$ be a shrinking and countably compact space and let $Y$ be a paracompact first countable space. Then $X \times Y$ is a shrinking space.
____________________________________________________________________
Remarks | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9907319885346244,
"lm_q1q2_score": 0.8076769075174758,
"lm_q2_score": 0.8152324915965392,
"openwebmath_perplexity": 2702.333109678335,
"openwebmath_score": 1.0000100135803223,
"tags": null,
"url": "https://dantopology.wordpress.com/tag/metrizable-spaces/"
} |
electrostatics, charge, potential-energy, coulombs-law
Title: Doubts in understanding some concepts of potential energy Let us consider a system of charges in space. The potential energy of the system of charges is determined by the amount of work done by the external force to assemble the charges in that manner. But what is the potential energy of a particular charge in that system of charges? Does that question make any sense? Potential can't be defined for a single charge; it is always defined for a system, as far as I know. If there is a definition of the potential energy of a single charge, please mention it. Strictly speaking,
Potential energy of a charged particle at a point ( $\vec r $ ) is the amount of work done by the external force in bringing that charge from infinity to that particular point
Obviously, if there are no charges around (including static and in motion), the work done would be zero as the other charge would not experience any force.
Potential energy of a particular charge of the system ( q ) means you already had the other charges of your system in place and then you bring the concerned charge q, whose P.E. you want to find, from infinity to that point. It can also be calculated by taking the potential energy of the system of the other charges (excluding the charge q) and subtracting it from the potential energy of the whole system (including q) | {
"domain": "physics.stackexchange",
"id": 56399,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, charge, potential-energy, coulombs-law",
"url": null
} |
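The "subtract the sub-system energy" recipe in the last paragraph can be checked directly for point charges. A sketch with the Coulomb constant set to 1 and made-up charges/positions:

```python
from itertools import combinations
from math import dist

def pair_energy(charges):
    """Total potential energy: sum of q_i q_j / r_ij over all pairs (k = 1)."""
    return sum(q1 * q2 / dist(p1, p2)
               for (q1, p1), (q2, p2) in combinations(charges, 2))

others = [(1.0, (0.0, 0.0)), (-2.0, (1.0, 0.0))]   # charges already in place
q = (0.5, (0.0, 1.0))                              # the charge brought in last

# P.E. "of q": whole-system energy minus energy of the others alone
U_of_q = pair_energy(others + [q]) - pair_energy(others)

# Same number as summing q's interaction with each fixed charge
direct = sum(qi * q[0] / dist(pi, q[1]) for qi, pi in others)
```

Both routes give the work done against the field of the fixed charges when q is brought in from infinity.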
c++, complexity
1 7
2 3
8 7
10 5
100 2
Output:
4
1
2
1
1
And now, here's my solution:
#include <iostream>
#include <vector>
using namespace std;
int main()
{
int n;
cin >> n;
vector<pair<int, int> > c;
for (int i=0; i<n; i++) {
int a, b;
cin >> a >> b;
c.push_back(make_pair(a,b));
}
for (int i=0; i<n; i++) {
int a, b, r=1;
a = c[i].first;
b = c[i].second;
for (int j=i+1; j<n; j++) {
if (a+b >= c[j].first) {
if (c[j].first + c[j].second > a+b) {
a = c[j].first;
b = c[j].second;
}
r++;
}
}
cout << r << endl;
}
return 0;
}
The problem with this solution is in its quadratic O-complexity. Is it possible to reduce it somehow? I've been thinking of the divide and conquer approach, but I'm fairly new to the concept, and I have no idea if that's the right approach for this problem, and how I may implement that. I plan to think up a better algorithm later and post an answer on that, but for now, I'd like to offer a critique of your existing code.
Typically using namespace std; should be avoided in favor of either using the std:: qualifier or just importing the identifiers you plan to use at a function level. (E.g. inside of main, doing something like using std::cout;.)
You should verify that your input reading succeeds:
if (!(cin >> n)) { return EXIT_FAILURE; } | {
"domain": "codereview.stackexchange",
"id": 6814,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, complexity",
"url": null
} |
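When hunting for a sub-quadratic algorithm, it helps to have a reference implementation to test candidates against. A direct Python port of the quadratic loop in the question, run on the sample input so the outputs can be compared:

```python
def counts(pairs):
    out = []
    for i, (a, b) in enumerate(pairs):
        r = 1
        for aj, bj in pairs[i + 1:]:
            if a + b >= aj:             # next point is within current reach
                if aj + bj > a + b:     # extends the reach: jump to it
                    a, b = aj, bj
                r += 1
        out.append(r)
    return out

sample = [(1, 7), (2, 3), (8, 7), (10, 5), (100, 2)]
result = counts(sample)
```

Any faster algorithm can then be checked against `counts` on random inputs before porting it back to C++.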
is: 5 The argument of (3,4) is: 0.927295 polar() – It constructs a complex number from magnitude and phase angle. Absolute value & angle of complex numbers. This can be found using the formula − For complex number … ... And you could put that into your calculator. The unit circle is the circle of radius 1 centered at 0. Definition 21.4. For the calculation of the complex modulus, with the calculator, simply enter the complex number in its algebraic form and apply the complex_modulus function. Absolute value of a complex number. The absolute value of a complex number (also known as the modulus) is the distance of that number from the origin in the complex plane. Complex numbers consist of real numbers and imaginary numbers. By calling the static (Shared in Visual Basic) Complex.FromPolarCoordinates method to create a complex number from its polar coordinates. Let $z$ be a complex number. The argument of a complex number is the direction of the number from the origin or the angle to the real axis. Very simple, see examples: |3+4i| = 5 |1-i| = 1.4142136 |6i| = 6 abs(2+5i) = 5.3851648 Square root The inverse of the complex number z = a + bi is: Example 1: The argument of z (in many applications referred to as the "phase" φ) is the angle of the radius Oz with … Using the Pythagorean theorem (Re² + Im² = Abs²) we are able to find the hypotenuse of the right-angled triangle. Input the numbers in form: a+b*i, the first complex number, and c+d*i, the second complex number, where "i" is the imaginary unit. Of course, 1 is the absolute value of both 1 and –1, but it's also the absolute value of both i and | {
"domain": "co.za",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9825575142422757,
"lm_q1q2_score": 0.8144800611915987,
"lm_q2_score": 0.8289388146603365,
"openwebmath_perplexity": 712.5041068294182,
"openwebmath_score": 0.7335454225540161,
"tags": null,
"url": "http://analyticalsolutions.co.za/sx445b3a/absolute-value-of-complex-numbers-calculator-7fb181"
} |
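The magnitude-and-argument facts quoted above (|3+4i| = 5, argument ≈ 0.927295) map directly onto Python's `cmath`, mirroring the C++ `abs`/`arg`/`polar` trio:

```python
import cmath

z = 3 + 4j
mag, ang = cmath.polar(z)      # (|z|, arg z)
back = cmath.rect(mag, ang)    # rebuild z from its polar form

print(abs(z), cmath.phase(z))
```

`abs` gives the modulus (the Pythagorean hypotenuse), `cmath.phase` the argument, and `polar`/`rect` convert between the two representations.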
gazebo, simulation, urdf, simulator-gazebo
Title: how to make a joint rotate a specified angle
Hello, everyone! I am a novice in Gazebo. I want to simulate my robot arm in Gazebo. Here is part of my urdf file. | {
"domain": "robotics.stackexchange",
"id": 6600,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo, simulation, urdf, simulator-gazebo",
"url": null
} |
MnO4- + 8 H+ + 5e- --> Mn2+ + 4H2O (gained 5 electrons from reductant). The equivalent weight of KMnO4 when it is converted into K2MnO4 is (a) M (b) M/3 (c) M/5 (d) M/7. In Mn2O3, the oxidation state of Mn is +III. That's why an equivalent of a redox agent is its amount which liberates or accepts $\pu{1 mol}$ of electrons upon being oxidized or reduced, respectively, assuming electrons don't exist in solution on their own for a significant period of time. Eq. wt. of KMnO4 = 158.04/3 = 52.68 grams/equivalent. $\ce{0.5 H2 + e- -> H-}$ KMnO4, which is required for the titration of Na2C2O4 in an acidic environment, is prepared by dissolving 1,600 g KMnO4 in 250 mL water. Formula weight of KMnO4 is 158.04. Coding to search: 2 KMnO4 + 5 H2C2O4 + 3 H2SO4 = 2 MnSO4 + 10 CO2 + K2SO4 + 8 H2O | {
"domain": "parafarmaciasip.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9473810525948927,
"lm_q1q2_score": 0.8264453724789917,
"lm_q2_score": 0.8723473730188543,
"openwebmath_perplexity": 4129.517395354383,
"openwebmath_score": 0.39839452505111694,
"tags": null,
"url": "http://parafarmaciasip.com/fall-out-vyr/kmno4-equivalent-weight-dfbe55"
} |
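The equivalent-weight arithmetic scattered through the excerpt above can be collected into a short sketch; the molar mass 158.04 g/mol and the electron counts for each half-reaction are taken from the text, everything else is illustrative:

```python
# Equivalent weight of KMnO4 = molar mass / electrons gained per formula unit.
MOLAR_MASS_KMNO4 = 158.04  # g/mol

# Electrons gained by Mn in each medium, per the half-reactions in the text:
ELECTRONS = {
    "acidic (MnO4- -> Mn2+)": 5,
    "neutral (MnO4- -> MnO2)": 3,
    "alkaline (MnO4- -> MnO4^2-, i.e. K2MnO4)": 1,
}

def equivalent_weight(molar_mass, n_electrons):
    """Grams of oxidant that accept one mole of electrons."""
    return molar_mass / n_electrons

for medium, n in ELECTRONS.items():
    print(f"{medium}: {equivalent_weight(MOLAR_MASS_KMNO4, n):.2f} g/equiv")
```

This reproduces the 31.61 and 52.68 g/equivalent figures quoted in the row, and shows why the answer to the K2MnO4 question is M (one electron transferred).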
moveit, ros-melodic
modify the AllowedCollisionMatrix to enable and disable collisions between the lever and the links of your end effector, by obtaining it from the PlanningScene, modifying it, and the applying it e.g. via the PlanningSceneInterface
attach/detach the object to your end effector, so the lever moves with the end effector when you turn it. Remember that there is no notion of forces in MoveIt, and the PlanningScene and Gazebo are not linked intrinsically.
In Python, there are currently no convenience functions to set collisions, but a non-thread-safe version that should work is here. You can apply those commits and build from source (there are instructions online).
The AllowedCollisionMatrix may also get Python bindings in the future so that you can do the operations manually in the same way as you would in C++.
Originally posted by fvd with karma: 2180 on 2021-07-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 36686,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "moveit, ros-melodic",
"url": null
} |
java, swing, library
ConfirmDialog.java
package com.myname.gutil;
import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JComponent;
import javax.swing.JDialog;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.border.EmptyBorder;
/**
* A dialog that lets the user either confirm or cancel.
*
* Display the dialog to the user by calling the {@link #display()} method. The text on the option
* buttons can be changed by modifying the buttons returned by {@link #getAcceptButton()} and
* {@link #getCancelButton()}.
*
*/
public class ConfirmDialog extends JDialog {
/**
* The default accept button text.
*/
private static final String DEFAULT_ACCEPT_TEXT = "Accept";
/**
* The default cancel button text.
*/
private static final String DEFAULT_CANCEL_TEXT = "Cancel";
/**
* Whether the dialog is going to be accepted or not.
*/
private boolean accepted;
/**
* The main component of the dialog.
*/
private JComponent body;
/**
* The accept button.
*/
private JButton acceptButton;
/**
* The cancel button.
*/
private JButton cancelButton;
/**
* Creates a new confirm dialog without a body.
*/
public ConfirmDialog() {
this(null, null, null);
}
/**
* Creates a new confirm dialog without a body.
*
* @param title the title
*/
public ConfirmDialog(String title) {
this(title, null, null);
}
/**
* Creates a new confirm dialog.
*
* @param title the title
* @param body the main component
*/
public ConfirmDialog(String title, JComponent body) {
this(title, body, null);
} | {
"domain": "codereview.stackexchange",
"id": 30804,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, swing, library",
"url": null
} |
$4m^2n^2 + (m^4-2m^2n^2+n^4) = (m^2+n^2)^2$
$m^4+2m^2n^2+n^4=(m^2+n^2)^2$
$(m^2+n^2)^2 = (m^2 + n^2)^2$
-
HINT: By difference of squares $\rm\ (m^2+n^2)^2 - (m^2-n^2)^2 =\ (2\:m^2)\ (2\:n^2)\: =\: (2\:m\:n)^2$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9825575183283513,
"lm_q1q2_score": 0.814480058350351,
"lm_q2_score": 0.8289388083214156,
"openwebmath_perplexity": 645.2299012237852,
"openwebmath_score": 0.6792957186698914,
"tags": null,
"url": "http://math.stackexchange.com/questions/70808/how-to-prove-this-pythagorean-triple"
} |
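The algebra in the excerpt above says that $(2mn,\ m^2-n^2,\ m^2+n^2)$ is always a Pythagorean triple; a quick numerical check of the identity:

```python
def triple(m, n):
    """Euclid's formula: for m > n > 0, the triple (2mn, m^2-n^2, m^2+n^2)."""
    return 2 * m * n, m * m - n * n, m * m + n * n

def is_pythagorean(a, b, c):
    return a * a + b * b == c * c

# Check the identity for a grid of (m, n) pairs.
for m in range(2, 30):
    for n in range(1, m):
        assert is_pythagorean(*triple(m, n))
```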
python, comparative-review, interval, coordinate-system
Title: 1-D intersection of lines given begin and end points I have a fairly basic question here: I want to find if two lines on a 1-D plane intersect. I know of two simple ways to solve this, but I wanted to know if Python has a more elegant way to solve this.
x = [1, 10] # 1 = begin, 10 = end
y = [15, 20]
z = [5, 12]
#Method 1: Works. Is quick. Lots of typing.
def is_intersect_1(a, b):
bool_check = False
if a[0] <= b[0] <= a[1] or \
a[0] <= b[1] <= a[1] or \
b[0] <= a[0] <= b[1] or \
b[0] <= a[1] <= b[1]:
bool_check = True
return bool_check
is_intersect_1(x,y) # False
is_intersect_1(x,z) # True
#Method 2: Quicker to write. Simpler to read. Uses more memory and is slower.
def is_intersect_2(a, b):
bool_check = False
if set(range(a[0], a[1]+1)).intersection(set(range(b[0], b[1]+1))):
bool_check = True
return bool_check
is_intersect_2(x,y) # False
is_intersect_2(x,z) # True
Using explicit booleans is a common antipattern:
check = False
if condition:
check = True
return check
is equivalent to much more transparent (and preferable)
return condition
Segments do not intersect if a lies completely to the left, or completely to the right of b. Assuming that segments are themselves sorted:
def is_intersect_3(a, b):
return not (a[1] < b[0] or a[0] > b[1])
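A minimal check of the proposed `is_intersect_3` against the examples from the question, plus a float case that the `set(range(...))` version cannot handle:

```python
def is_intersect_3(a, b):
    # Segments (given as [start, end] with start <= end) overlap unless one
    # lies entirely to the left of the other.
    return not (a[1] < b[0] or a[0] > b[1])

assert not is_intersect_3([1, 10], [15, 20])   # x vs y in the question
assert is_intersect_3([1, 10], [5, 12])        # x vs z in the question
assert is_intersect_3([1, 10], [10, 12])       # shared endpoint counts
assert is_intersect_3([0.5, 1.5], [1.2, 2.0])  # works on floats too
```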
is_intersect_2 fails badly on floats | {
"domain": "codereview.stackexchange",
"id": 12575,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, comparative-review, interval, coordinate-system",
"url": null
} |
turing-completeness, intuition
Title: What is the simplest/smallest subset of an OOP language like C#/JavaScript that is Turing-complete? This question is not meant to be too theoretical or nitpicky. I have experience programming, so I'm trying to get a better understanding of what it takes to get Turing-completeness by using my intuition about practical programming languages.
Basically: What minimal elements of a language like Javascript do we need to have Turing completeness, and why? (I'm pretty sure we don't need interfaces or subclasses, etc).
Extra question: Is there a simple, natural polynomial-time reduction from a language like Javascript/C++/C# to a Turing machine? It takes very little to obtain Turing completeness. For example the While programming language and Counter Machines are Turing complete and are easily recognized as a subset (up to syntax) of all three languages.
The While language is also good for an intuition of Turing completeness. You need
Potentially infinite loops, because Turing complete computations may not terminate.
Condition-dependent behaviour, so that you can break out of loops (important: in contrast to for loops, the time after which the loop ends is not known in advance).
A data type with an infinite domain.
In both examples I gave above, the third ingredient is the type of integers of unbounded size. Arguably, it is not natively supported in C++ (I do not know about JavaScript or C#. A "normal" programming language that has it is Python 3.)
You promised not to nitpick, but if you did one would have to either
replace your languages by "conceptual" variants thereof where integers are unbounded, or
implement unbounded integers in a library (like GNU MP). When you do that, you inevitably use the "true" third ingredient of your languages: They achieve infinite-domain data through dynamic allocation. | {
"domain": "cs.stackexchange",
"id": 12443,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "turing-completeness, intuition",
"url": null
} |
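As a sketch of why the three ingredients listed in the answer above suffice, here is a hypothetical miniature While-style interpreter: unbounded Python integers supply the infinite domain, and a while-loop over a variable supplies both the potentially infinite loop and the condition-dependent behaviour. The statement syntax is invented here for illustration, not taken from any formal definition of the While language:

```python
def run_while(program, store):
    """Interpret a tiny While-style language.
    A program is a list of statements:
      ("inc", x)           -- x := x + 1
      ("dec", x)           -- x := max(x - 1, 0)
      ("while", x, body)   -- run body while variable x != 0
    The store maps variable names to (unbounded) non-negative integers.
    """
    for stmt in program:
        if stmt[0] == "inc":
            store[stmt[1]] += 1
        elif stmt[0] == "dec":
            store[stmt[1]] = max(store[stmt[1]] - 1, 0)
        elif stmt[0] == "while":
            _, var, body = stmt
            while store[var] != 0:
                run_while(body, store)
    return store

# y := 2 * x, destroying x -- a classic counter-machine program.
double = [("while", "x", [("dec", "x"), ("inc", "y"), ("inc", "y")])]
print(run_while(double, {"x": 5, "y": 0}))  # {'x': 0, 'y': 10}
```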
• $\sum_j x_{i,j} = q_i$, and
• $\sum_i w_i x_{i,j} \ge t$.
Everything proceeds as before. In this way, the number of variables fed to the ILP solver is greatly reduced, which will likely make the solving process a lot more efficient. I definitely recommend applying this optimization, if you want to solve the problem in practice.
• That's incredibly helpful, thank you so much! – JCL Nov 21 '15 at 15:04
Here is a complete MIP (Mixed Integer Programming) model that should do the trick. I just use some random data (weights) with 100 widgets and 50 possible bins. When solved the variable NumUsedBins gives the maximum number of bins and the variable x gives the assignment. The equation 'order' is to make sure we use lower numbered bins first. The strange statement about optcr is to tell the solver to solve to optimality (for very difficult problems you may want to stop at 5% or so).
With 1000 widgets this becomes somewhat difficult to solve to optimality.
The first thing I would do in order to solve this question is simplify it into a smaller problem: You have the weights and the min-weight requirement. Now try to match 2 weights in order to reach exactly the min-weight. My approach:
1. Store every number inside a set.
2. For each number a calculate b = min_weight - a. Then check if the set already has the number b in it.
Now in the second scenario the pairs don't have to exactly match the min-weight. My approach would be:
1. Store every number inside a set.
2. For each number a calculate its optimal partner b = min_weight - a. Then find the closest element matching b inside the set. In C++ you can find both partners (the one just below b and the one just above b) using std::set::lower_bound and std::set::upper_bound.
3. Sort all the matching pairs according to their wasted weight from lower to higher: a + b - min_weight = wasted_weight. Then always choose the best pair and reevaluate any new pairs that had a or b as partner. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806546550656,
"lm_q1q2_score": 0.8106575901476083,
"lm_q2_score": 0.8267117919359419,
"openwebmath_perplexity": 443.0305581765356,
"openwebmath_score": 0.3960328996181488,
"tags": null,
"url": "https://cs.stackexchange.com/questions/48510/help-wrapping-my-head-around-a-combinatorial-optimization-problem/50179"
} |
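The closest-partner search described in the last answer above can be sketched in Python, with the `bisect` module standing in for C++'s `std::set::lower_bound`. The function name and the greedy pop-the-heaviest strategy are illustrative choices, not from the original post:

```python
import bisect

def best_pairs(weights, min_weight):
    """Greedily pair weights so each pair sums to >= min_weight with
    little wasted weight, using binary search for the closest partner."""
    pool = sorted(weights)
    pairs = []
    while len(pool) >= 2:
        a = pool.pop()                      # heaviest remaining item
        target = min_weight - a             # ideal partner weight
        i = bisect.bisect_left(pool, target)
        if i == len(pool):                  # no partner reaches min_weight
            break
        b = pool.pop(i)                     # smallest adequate partner
        pairs.append((a, b))
    return pairs

pairs = best_pairs([9, 7, 6, 5, 3, 2], min_weight=10)
assert all(a + b >= 10 for a, b in pairs)
```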
python
def __getitem__(self, key):
_self = self._container
n = len(self)
if isinstance(key, slice):
s = key.start
e = key.stop
stp = key.step
r = False
if stp is None:
stp = 1
elif stp == 0:
raise ValueError()
elif stp < 0:
r = True
stp = abs(stp)
if s and not e:
sliced = []
_len = n - s #[3:]
if s < 0:
#[-1:]
_len = abs(s)
elif s >= n:
#[7:]
_len = n - s % n
for i in range(0, _len, stp):
index = (i + s) % n
sliced.append(_self[index])
elif e and not s:
sliced = []
_len = e #[:4]
if e <= 0:
#[:-1]
_len = n - abs(e) % n
for i in range(0, _len, stp):
index = i % n
sliced.append(_self[index])
elif s and e:
sliced = []
if s <= e:
if s < 0 and e < 0:
#[-3:-1]
#[-7:-1]
_len = abs(s) - abs(e)
elif s < 0 and 0 <= e:
#[-1:3]
#[-7:8]
if n + s % n > e:
_len = e - s
else:
_len = e - (n + s)
elif 0 <= s and 0 <= e:
#[1:3]
#[7:19]
_len = e - s
elif s > e:
if s < 0 and e < 0:
#[-1:-3]
raise ValueError()
elif s >= 0 and e < 0:
if n + e < s:
#[4:-3]
raise ValueError()
_len = (n + e) - s #[1:-1]
elif 0 <= s and 0 <= e:
if s >= n: | {
"domain": "codereview.stackexchange",
"id": 6760,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
c#, event-handling, delegates, e-commerce
For example:
class TestClass
{
public EventHandler ThisDelegate;
public event EventHandler ThisEvent;
private void TryThis()
{
if (this.ThisEvent != null)
{
// I can fire my own event
this.ThisEvent(this, EventArgs.Empty);
}
// I can clear my own event
this.ThisEvent = null;
}
}
class OtherClass
{
void Test()
{
var test = new TestClass();
// I can invoke test's delegate!
test.ThisDelegate(this, EventArgs.Empty);
// I can clear test's delegate!
test.ThisDelegate = null;
// But I can't do that to its event
test.ThisEvent(this, EventArgs.Empty); // Compiler error
test.ThisEvent = null; // Compiler error
}
}
So, by making the member an Event rather than a Multicast Delegate, you give it extra semantic meaning that is enforced by the compiler. You do this for similar reasons that you mark members as private. | {
"domain": "codereview.stackexchange",
"id": 4478,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, event-handling, delegates, e-commerce",
"url": null
} |
particle-physics, feynman-diagrams, subatomic
Title: What is a Parton Level Feynman Diagram? I am studying elementary particle physics and I am wondering what a parton level Feynman diagram is? My understanding is that partons are representations of the quark and gluon substructure of hadrons, and so I am assuming that any diagram that has a quark or gluon qualifies as a parton-level diagram, but I cannot find any reference to it in my notes or online.
Could anyone explain (with examples preferably), what a parton-level Feynman diagram is? The parton model was proposed by Feynman before the existence of quarks and gluons within the proton was established experimentally:
In particle physics, the parton model was proposed by Richard Feynman in 1969 as a way to analyze high-energy hadron collisions.
At that time the parton model was set up as a uniform distribution of partons within the nucleon, and calculations and a model were proposed to explain scattering cross sections. Deep inelastic experiments showed that there existed a hard core in the nucleon, disagreeing with the predictions of the parton model. The accumulation of such data by several experiments identified the cores as quarks, which had already been proposed by the SU(3) representations of the hadronic resonances.
It was later recognized that partons describe the same objects now more commonly referred to as quarks and gluons. Therefore a more detailed presentation of the properties and physical theories pertaining indirectly to partons can be found under quarks.
So at this stage the Feynman parton diagrams will be identical to the interactions of quarks and gluons according to the problem at hand. | {
"domain": "physics.stackexchange",
"id": 29268,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, feynman-diagrams, subatomic",
"url": null
} |
differential-geometry, coordinate-systems, tensor-calculus, vector-fields
Despite being one of the simplest possible manifolds, $\mathbb R^2$ is actually terrible from a pedagogical point of view precisely because it's so easy to get confused on this issue. The manifold $\mathcal M = \mathbb R^2$ is abstract; points $p\in \mathbb R^2$ consist of ordered pairs of real numbers $(a,b)$, but those numbers are not coordinates for $p$. We can introduce coordinates by defining a coordinate chart on some open neighborhood of $p$. For example, we might coordinatize the upper half-plane via the polar coordinate chart:
$$\pi : \mathbb R_+^2 \rightarrow \mathbb R \times(0,\pi)$$
$$(a,b) \mapsto \left(\sqrt{a^2+b^2},\sin^{-1}\left(\frac{b}{\sqrt{a^2+b^2}}\right)\right)$$
where the first coordinate is interpreted as the radial coordinate and the second as the angular coordinate.
Any function which is defined at the manifold level - e.g. some $f:\mathcal M \rightarrow \mathbb R$ - has a corresponding expression in each coordinate chart. For example, let $f:\mathcal M \rightarrow \mathbb R$ be defined by $(a,b)\mapsto a$. If we descend into the polar coordinate chart, we could consider the function
$$f_\pi: \mathbb R\times (0,\pi) \rightarrow \mathbb R$$
$$(r,\theta) \mapsto (f\circ \pi^{-1})(r,\theta) = f\big(r\cos(\theta),r\sin(\theta)\big) = r\cos(\theta)$$ | {
"domain": "physics.stackexchange",
"id": 77447,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "differential-geometry, coordinate-systems, tensor-calculus, vector-fields",
"url": null
} |
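The chart/function distinction in the row above can be checked numerically. Restricting to a, b > 0, where the asin-based angular coordinate is unambiguous, the coordinate expression $f_\pi = f\circ\pi^{-1}$ agrees with the abstract function $f(a,b)=a$:

```python
import math

def chart(a, b):
    """The polar coordinate chart from the text (valid here for a, b > 0)."""
    r = math.hypot(a, b)
    return r, math.asin(b / r)

def f(a, b):
    return a  # the manifold-level function (a, b) |-> a

def f_pi(r, theta):
    return r * math.cos(theta)  # its coordinate expression f o pi^{-1}

a, b = 3.0, 4.0
r, theta = chart(a, b)
assert math.isclose(f_pi(r, theta), f(a, b))  # both give 3.0
```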
javascript, i18n
Original UI/Theme/DefaultTheme.js:
import { createMuiTheme } from '@material-ui/core/styles';
// Theme for Material-UI components
const muiTheme = createMuiTheme({
overrides: {
MuiTypography: {
h6: {
fontWeight: 400,
},
},
MuiInput: {
underline: {
'&:before': {
borderBottom: `1px solid #BBBBBB`,
},
},
},
MuiIconButton: {
root: {
color: '#111',
},
},
MuiTab: {
textColorPrimary: {
color: '#fff',
},
root: {
paddingTop: 0,
paddingBottom: 0,
minHeight: 32,
},
},
MuiButtonBase: {
root: {
cursor: 'default',
},
},
MuiTableCell: {
sizeSmall: {
paddingTop: 0,
paddingBottom: 0,
},
},
MuiCheckbox: {
root: {
marginTop: -9,
marginBottom: -9,
},
},
MuiButton: {
root: {
borderRadius: 0,
fontWeight: 400,
},
},
},
});
const theme = {
muiTheme,
};
export default theme;
New UI/Theme/DefaultTheme.js:
import { createMuiTheme } from '@material-ui/core/styles';
import { derive, compose, blend } from '../../Utils/Object';
import { text, initial as i18n } from './i18n'; | {
"domain": "codereview.stackexchange",
"id": 39320,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, i18n",
"url": null
} |
A point of inflection or inflection point, abbreviated IP, is an x-value at which the concavity of the function changes. In other words, an IP is an x-value where the sign of the second derivative changes. It might also be how we'd describe Peter Brady's voice. Computing the first derivative of an expression helps you find local minima and maxima of that expression. Not every zero of the second derivative is an inflection point, so it is necessary to test values on either side of the candidate (for example x = 0) to make sure that the sign of the second derivative actually does change. The second derivative of a function may also be used to determine the general shape of its graph on selected intervals. A critical point becomes the inflection point if the function changes concavity at that point. To determine concavity, we need to find the second derivative f″(x). In the worked example, factoring gives e^x(4e^x − 1) = 0, so the sign change occurs where 4e^x = 1, that is at x = ln(1/4). In a second example, the inflection point is at x = 2, and this results in the graph being concave up on the right side of the inflection point. | {
"domain": "luxmicro.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9780517469248845,
"lm_q1q2_score": 0.8171461309877754,
"lm_q2_score": 0.8354835350552604,
"openwebmath_perplexity": 413.2545491078615,
"openwebmath_score": 0.5936368107795715,
"tags": null,
"url": "https://magazine-redux.luxmicro.net/19eyvwe7/5m3zqv.php?59a9b3=point-of-inflection-second-derivative"
} |
homework-and-exercises, black-holes, astrophysics, astronomy, event-horizon
Title: How do you calculate the black hole diameter of M87*? The recent data from the EHT Consortium on the size and mass of the central black hole of M87, named M87*, are telling us that the diameter of the event horizon should be ~1.5 light-days, or stated differently, ~42 ± 3 μas (micro-arc-seconds). They also tell us that the mass is ~ $6.6\times10^9 M_{\odot}$ (billion), and that it is located at a distance of about $16.8 ± 0.8$ Mpc.
How do you calculate the diameter using those fundamentals?
Where $R$ = distance from earth, $M$ = mass of M87* and using SI units please. The predicted radius of the photon ring is
$$ r_p = \sqrt{27} \frac{GM_{\rm BH}}{c^2}, $$
for a non-spinning black hole. The result is only slightly different for a fast spinning black hole. By dividing $r_p$ by the distance to the source $D$, we have a relationship between the angular radius and the black hole mass.
This is done in equation 1 of the fifth Event Horizon Telescope paper
$$ \theta_p = \frac{r_p}{D} = 18.8 \left(\frac{M_{\rm BH}}{6.2\times 10^9 M_{\odot}}\right) \left(\frac{D}{16.9\ {\rm Mpc}}\right)^{-1}\ {\rm microarcseconds}\ .$$
If the angular radius is measured to be 21 $\mu$arcsec, then this suggests a mass of 6.9 billion solar masses.
The difference between this and the final result of 6.5 billion solar masses is down to a more sophisticated modelling of the image using a radiative transfer model and a spinning black hole. | {
"domain": "physics.stackexchange",
"id": 57247,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, black-holes, astrophysics, astronomy, event-horizon",
"url": null
} |
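Plugging the numbers from the answer above into the photon-ring formula reproduces the quoted angular radius. The physical constants are standard SI values, not taken from the post:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
MPC = 3.086e22         # m
MUAS_PER_RAD = 180 / math.pi * 3600 * 1e6  # microarcseconds per radian

def photon_ring_muas(mass_solar, dist_mpc):
    """Angular radius of the photon ring, theta = sqrt(27) G M / (c^2 D)."""
    r_p = math.sqrt(27) * G * mass_solar * M_SUN / c**2
    return r_p / (dist_mpc * MPC) * MUAS_PER_RAD

print(photon_ring_muas(6.5e9, 16.8))  # roughly 20 microarcseconds
```

With the paper's fiducial values (6.2 billion solar masses at 16.9 Mpc) this returns about 18.8 μas, matching equation 1 quoted in the answer.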
newtonian-mechanics, friction, collision, computational-physics, software
Title: Bouncing ball with friction I am trying to write FreeFem code (same language as C++) for a bouncing ball, but I am not able to see the effect of the friction force. Each time the rigid ball hits the rigid ground, the horizontal velocity must be decreased. When I try it, the velocity is not decreased; the ball still bounces to a fixed height.
Any hint please; I will be thankful for any indication that helps me proceed in writing the code. Maybe the simplest model would be to assume that the ball keeps a specified fraction, $\alpha$, of its kinetic energy on each bounce, say 90%, and to require that the tangent to the ball's path just before each bounce makes the same angle with the vertical (or the line perpendicular to the ground, if the ground is not level) as the tangent to the path just after the bounce. That is, specify that the absolute value of the ratio of horizontal to vertical momentum is always constant when the ball is just above the ground.
This kinematic way of coming at it is more sensible than a dynamic method, I think: we don't really know what the force profile looks like during a bounce anyway. But if you want to specify forces and calculate a trajectory, you can get the same result by relating the frictional force to the normal force as follows:
$$\frac{F_{friction}}{F_{normal}}=\frac{1-\sqrt{\alpha}}{1+\sqrt{\alpha}}.$$
If you want to get more detailed, you can do calculations that include rotation of the ball and other factors, but I'd start with the above. | {
"domain": "physics.stackexchange",
"id": 49958,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, friction, collision, computational-physics, software",
"url": null
} |
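A sketch of the kinematic rule in the answer above: on each bounce, scale both velocity components by $\sqrt{\alpha}$ and flip the vertical one. That keeps exactly the fraction $\alpha$ of the kinetic energy and leaves the ratio of horizontal to vertical momentum unchanged:

```python
import math

def bounce(vx, vy, alpha):
    """Apply one bounce: keep fraction alpha of the kinetic energy and
    preserve |vx/vy|, by scaling both components by sqrt(alpha)."""
    s = math.sqrt(alpha)
    return s * vx, -s * vy

vx, vy, alpha = 3.0, -4.0, 0.9
vx2, vy2 = bounce(vx, vy, alpha)

ke_before = 0.5 * (vx**2 + vy**2)
ke_after = 0.5 * (vx2**2 + vy2**2)
assert math.isclose(ke_after / ke_before, alpha)     # 90% of KE kept
assert math.isclose(abs(vx2 / vy2), abs(vx / vy))    # same path angle
```

Iterating this rule inside any projectile integrator gives bounce heights (and horizontal speeds) that decay geometrically, which is the behaviour the question was missing.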
rostopic, image-transport
Title: How does ROS Topic data transfer work with large data streams?
I am studying ROS and I have the following question. When a node wants to transfer a relatively big data stream, for example, an image, how many TCP/IP connections are created to do such a transfer. One for the whole image or one per data packet?
Thank you in advance
Originally posted by Javier J. Salmerón García on ROS Answers with karma: 114 on 2013-10-16
Post score: 0
Each topic connection will usually have a single connection which will carry many images.
Originally posted by tfoote with karma: 58457 on 2013-11-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15879,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rostopic, image-transport",
"url": null
} |
ros, simulation, clock, publisher
Title: simulating sensor data, problem with rosbag and clock
Hello
I wrote a simple simulator that generates sensor data (ideal values plus some noise). My goal is to test my localization algorithms (EKF). It seems to work, but I am having problems with recording a bag file.
Here is how it works: given an acceleration profile, I integrate it to obtain ground truth (position, orientation, velocity, etc.). I then generate encoder and IMU data from that ground truth. Once all the data has been generated, I publish it sequentially on respective topics, together with clock messages.
Here some part of the code:
ros::init(argc, argv, "sensorSimulator");
bool realtime = false; //value obtained from command line or parameters...
float period = 0.01; //period of the simulation
// Generate the data
// (...)
vector<StampedData *> data = profile.generateData();
// Create publishers on the following topics
// sensor_msgs::Imu on /ms/imu/data
// rosgraph_msgs::Clock on /clock
// lowlevel::Encoders (custom) on /encoders
// fmutil::SimpleOdo (custom) on /ground_truth_linear
// nav_msgs::Odometry on /ground_truth
// tf::TransformBroadcaster
// (...)
// Publish the data
for( unsigned i=0; i<data.size() && ros::ok(); i++ )
{
rosgraph_msgs::Clock clockmsg;
clockmsg.clock = ros::Time(data[i]->time);
clockPub.publish( clockmsg ); | {
"domain": "robotics.stackexchange",
"id": 7232,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, simulation, clock, publisher",
"url": null
} |
javascript, to-do-list
** http://www.OhsikPark.com
** Feel free to talk me! o@ohsikpark.com
-------------------------------------------------------------
License: GNU General Public License
License URI: http://www.gnu.org/licenses/gpl-2.0.html | {
"domain": "codereview.stackexchange",
"id": 18496,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, to-do-list",
"url": null
} |
homework-and-exercises, kinematics
Title: Average Velocity A car travels 100 miles in 2 hours; it then completes the return leg of the journey. How fast must it travel on the return leg to average 100 mph over the total journey?
My thoughts on this are that it is impossible as if the total average was 100mph then the total time would be 2 hours but that can't be if the first leg took 2 hours.
Please tell me if I am missing something Are you missing something?
You probably are if this question was asked during a course on relativity. Anyway, this is a physics site and I'm going to make the question a bit more precise on the reference frames in which the measurements might have taken place:
We observe a car travel 100 miles in 2 hours, it then completes the return leg of the journey. How fast must it travel on the return leg for the driver to have done the full 200 miles in 2 hours?
The answer starts from the observation that during the first leg the driver will have aged $2\sqrt{1-\frac{v^2}{c^2}}$ hours, with $v/c \approx 50/670616629 \approx 7.5\times 10^{-8}$. That is about $5.6\times 10^{-15}$ hours short of 2 hours.
So, on the second leg the car should travel at a speed $v'$ such that the driver ages $\sqrt{1-\frac{v'^2}{c^2}} \frac{100\ \mathrm{mi}}{c}= 5.6\times 10^{-15}$ hr. It follows that $v'$ needs to be a fraction of about $7\times 10^{-16}$ short of the speed of light. | {
"domain": "physics.stackexchange",
"id": 8963,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, kinematics",
"url": null
} |
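The asker's classical intuition in the row above can be checked directly: no finite return speed makes the trip average reach 100 mph, because the first leg already used the entire time budget:

```python
def average_speed(v_return):
    """Classical average speed (mph) for 100 mi at 50 mph out
    and 100 mi back at v_return mph."""
    total_time = 2.0 + 100.0 / v_return  # hours
    return 200.0 / total_time

# The average approaches, but never reaches, 100 mph.
for v in (100, 1_000, 1_000_000, 1e12):
    assert average_speed(v) < 100.0
```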
# Express this linear optimization problem subject to a circular disk as a semidefinite program
I have to express the following problem as a semidefinite program
$$\begin{array}{ll} \text{minimize} & F(x,y) := x + y +1\\ \text{subject to} & (x-1)^2+y^2 \leq 1 \tag{1}\end{array}$$
Only affine equality conditions should be used. The hint was to examine the structure of $\mathbb{S}^2_+$, the cone of symmetric positive semidefinite matrices.
The characteristic polynomial of such a matrix is
$$C=\lambda^2-(a_{11}+a_{22})\lambda - a_{12}a_{21}$$
which has a similar form to the rewritten condition (1) $x^2-2x+y^2\leq0$. If $a_{11}=a_{22}=1,\lambda = x$ and $a_{12}=-a_{21}=y$ the characteristic polynomial would be $x^2-2x+y^2=0$. Is this useful?
My problem is, that I have no clue how to formulate a $\leq$ with equality conditions.
• I'm not sure what you mean with only equality conditions (surely you must have a positive semidefiniteness conditions). Hint: Schur complement – Johan Löfberg Nov 28 '14 at 19:28
• I think what he is referring to is the standard practice of using slack variables to convert inequalities to equations. That is: $x+y\le z$ becomes $x+y+s=z$, $s$ nonnegative. – Michael Grant Nov 28 '14 at 23:33
• In other words, he needs an SDP in primal, equality-constrained standard form. If I were not traveling I would answer it ;-) – Michael Grant Nov 29 '14 at 3:14 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9706877726405083,
"lm_q1q2_score": 0.8024790382907313,
"lm_q2_score": 0.8267118026095991,
"openwebmath_perplexity": 130.84129666215682,
"openwebmath_score": 0.934492290019989,
"tags": null,
"url": "https://math.stackexchange.com/questions/1042280/express-this-linear-optimization-problem-subject-to-a-circular-disk-as-a-semidef"
} |
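Following the Schur-complement hint in the comments above, one possible (unverified) way to write the disk constraint as a linear matrix inequality is the following sketch. With $u = (x-1,\ y)^T$, the Schur complement of the identity block gives $(x-1)^2 + y^2 \le 1 \iff \begin{pmatrix} I_2 & u \\ u^T & 1 \end{pmatrix} \succeq 0$, so the problem becomes:

```latex
% Sketch, not a worked solution: the disk as an LMI via the Schur complement.
\begin{array}{ll}
\text{minimize}   & x + y + 1 \\[2pt]
\text{subject to} &
\begin{pmatrix}
1   & 0 & x-1 \\
0   & 1 & y   \\
x-1 & y & 1
\end{pmatrix} \succeq 0.
\end{array}
```

To reach the equality-constrained standard form the comments describe, one can then introduce a matrix variable $Z \succeq 0$ and pin each entry of $Z$ to the corresponding affine expression above with equality constraints.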
I will illustrate a method of generalizing diagram chasing arguments to arbitrary abelian categories. So, let us start with the following argument in the case of abelian groups: suppose $h$ is monic, and we want to show $h'$ is monic. It suffices to show $\ker(h') = 0$. Thus, suppose we have $\bar b \in B / A$ such that $h'(\bar b) = 0$. Then $\bar b$ has some preimage $b \in B$. Now, we know that $h(b) \in \mathrm{im}(g)$ by the assumption that $h'(\bar b) = 0$. Thus, choose $a \in A$ such that $g(a) = h(b)$ (which happens to be unique). Then $h(f(a)) = g(a) = h(b)$, so $f(a) = b$ since $h$ is monic. This implies that $\bar b = \overline{f(a)} = 0$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9799765592953409,
"lm_q1q2_score": 0.8269974113127282,
"lm_q2_score": 0.8438950947024555,
"openwebmath_perplexity": 156.3756338657283,
"openwebmath_score": 0.9799960851669312,
"tags": null,
"url": "https://math.stackexchange.com/questions/2354953/the-induced-morphism-b-a-to-c-a-is-monic-epi-if-the-morphism-b-to-c-is-moni"
} |
quantum-mechanics, homework-and-exercises, wavefunction, probability
Title: Probability current calculations I have a question about the probability current density. Because I can't really understand the meaning of it (how can we relate something real like a current to something abstract such as probability), I don't really have a good intuition for it.
The question is part of a problem on a finite potential barrier $$ V\left(x\right)=\begin{cases}
0 & x<0\\
V_{0} & x>0
\end{cases} $$
An incident particel comes from the negative part with energy given energy $ E $.
What I'm trying to do is to find the probability current density for $x<0$ and $x>0$ for both cases $ E<V_0 $ and $E>V_0 $.
The wave function for the case $ E>V_0 $ is given by:
$$ \psi\left(x\right)=\begin{cases}
Ae^{ikx}+Be^{-ikx} & x<0\\
Ce^{iqx} & x>0
\end{cases},\thinspace\thinspace\thinspace\thinspace k=\sqrt{\frac{2mE}{\hbar^{2}}},\,\thinspace\thinspace q=\sqrt{\frac{2m\left(E-V_{0}\right)}{\hbar^{2}}} $$
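The currents themselves follow from $j = \frac{\hbar}{m}\operatorname{Im}\!\left(\psi^{*}\,\partial_x\psi\right)$. A numerical sketch for the $E>V_0$ case (natural units $\hbar = m = 1$ and the chosen energies are assumptions for illustration; $B$ and $C$ follow from matching $\psi$ and $\psi'$ at $x=0$):

```python
import numpy as np

hbar = m = 1.0          # natural units (an assumption for this sketch)
E, V0 = 2.0, 1.0        # illustrative energies with E > V0
k = np.sqrt(2 * m * E) / hbar
q = np.sqrt(2 * m * (E - V0)) / hbar

# Continuity of psi and psi' at x = 0 gives the standard amplitudes
A = 1.0
B = A * (k - q) / (k + q)
C = A * 2 * k / (k + q)

def current(psi, dpsi):
    """j = (hbar/m) * Im(psi* dpsi/dx)."""
    return (hbar / m) * np.imag(np.conj(psi) * dpsi)

x_neg, x_pos = -1.3, 0.7   # arbitrary sample points on each side
psi_L  = A * np.exp(1j * k * x_neg) + B * np.exp(-1j * k * x_neg)
dpsi_L = 1j * k * (A * np.exp(1j * k * x_neg) - B * np.exp(-1j * k * x_neg))
psi_R  = C * np.exp(1j * q * x_pos)
dpsi_R = 1j * q * C * np.exp(1j * q * x_pos)

j_L = current(psi_L, dpsi_L)   # equals (hbar k / m)(|A|^2 - |B|^2), x-independent
j_R = current(psi_R, dpsi_R)   # equals (hbar q / m)|C|^2
print(j_L, j_R)                # the two currents agree: probability is conserved
```

The equality of the two printed values is exactly the statement $R + T = 1$ for this barrier.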
And the wave function for the case $ E<V_0 $ is given by
$$ \psi\left(x\right)=\begin{cases}
Ae^{ikx}+Be^{-ikx} & x<0\\
Ce^{-\alpha x} & x>0
\end{cases},\thinspace\thinspace\thinspace\thinspace k=\sqrt{\frac{2mE}{\hbar^{2}}},\,\thinspace\thinspace\alpha=\sqrt{\frac{2m\left(V_{0}-E\right)}{\hbar^{2}}} $$ | {
"domain": "physics.stackexchange",
"id": 76584,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, probability",
"url": null
} |
# $\max\{a_1,a_2,\dots,a_n\}$ converges for a convergent sequence $a_n$
I am tackling the following question and want to be sure that my reasoning is fine.
Let $$a_n$$ be a convergent sequence s.t $$\displaystyle \lim_{n\to\infty}a_n=a$$. Let $$b_n\triangleq\max\{a_1,a_2,\dots,a_n\}$$ Prove that $$b_n$$ converges. Also, is it necessarily the case $$\displaystyle \lim_{n\to\infty}b_n=a$$?
My try:
As $$a_n$$ converges, it is bounded. Let $$M$$ be an upper bound of $$a_n$$. We note that it is also an upper bound of $$b_n$$ and that $$b_n$$ is monotonically increasing, thus $$b_n$$ converges.
I looked at the sequence $$a_n=\dfrac{1}{n}$$. We have $$\displaystyle \lim_{n\to\infty}a_n=0$$ and $$\forall n\in\mathbb{N}:\ b_n=1$$, i.e $$\displaystyle \lim_{n\to\infty}b_n=1\ne\lim_{n\to\infty}a_n$$
It seems too simple and I believe that I am missing something.
Any comment regarding the solution will be appreciated. In the case it is wrong I will be thankful for some hints in the right direction. Thanks.
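The proof strategy (boundedness plus monotonicity) and the counterexample can both be checked numerically; a quick sketch:

```python
from itertools import accumulate, islice

def running_max(seq):
    """Yield b_n = max(a_1, ..., a_n) lazily."""
    return accumulate(seq, max)

N = 10**5
# a_n = 1/n converges to 0, but b_n = a_1 = 1 for every n
b = list(islice(running_max(1 / n for n in range(1, N + 1)), N))
print(b[0], b[-1])        # 1.0 1.0  ->  lim b_n = 1 != lim a_n = 0

# for a non-decreasing sequence such as a_n = -1/n the two limits agree
c = list(islice(running_max(-1 / n for n in range(1, N + 1)), N))
print(c[-1])              # -1/N, close to lim a_n = 0
```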
• Sounds correct to me, I'd explain why $b_n$ is monotonically increasing but except for that seems completely fine. Nov 11 '18 at 15:16
• @YuvalGat, thanks, I will add it to my proof. Nov 11 '18 at 15:19 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9879462215484651,
"lm_q1q2_score": 0.8167467974796743,
"lm_q2_score": 0.8267117983401363,
"openwebmath_perplexity": 75.66718489112215,
"openwebmath_score": 0.9169761538505554,
"tags": null,
"url": "https://math.stackexchange.com/questions/2994000/max-a-1-a-2-dots-a-n-converges-for-a-convergent-sequence-a-n"
} |
dc.distributed-comp
$C$-condition: Every acceptor in $C$ (defined in the paper) has accepted a proposal with number in $m .. (n-1)$, and every proposal with number in $m .. (n-1)$ accepted by any acceptor has value $v$.
Now, Lamport introduces $P2^{c}$, which, along with the above $C$-condition, is sufficient for $P2^{b}$.
You can think of this induction process step by step:
The base case $m$ is trivially valid.
Given that step 1 is valid, the argument in "The induction step" above (of course with $P2^{C}$) shows that the proposals numbered $m+1$ has value $v$.
Given both step 1 and step 2 are valid, the same argument shows that the proposals numbered $m + 2$ has value $v$.
And so on.
This also answers your second question.
Your question (2): In reality, proposal numbered in $m..(n - 1)$ can have value other than v, isn't it?
No, it isn't. The Paxos algorithm guarantees that all the proposals with larger numbers than $m$ must have value $v$, given that $m$ is chosen with value $v$. | {
"domain": "cstheory.stackexchange",
"id": 2924,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "dc.distributed-comp",
"url": null
} |
ros, ethernet, embedded
Title: Embedded (non-ROS) -> ROS over Ethernet
I have an embedded device not supported by ROS but which runs a Linux variant.
I want to connect this to my ROS Groovy machine via Ethernet but not sure on the best approach - however I was thinking TCP would be ideal.
How should I connect these devices, and are there any examples I can work from?
I.e., how can I communicate between them over Ethernet?
Cheers.
Originally posted by anonymous8676 on ROS Answers with karma: 327 on 2013-11-11
Post score: 2
Started with ROSSerial but ran into problems as this was not designed for 32-bit systems.
Solution was to communicate using Python sockets:
http://www.binarytides.com/python-socket-programming-tutorial/
The embedded device was set up as a server, and the ROS machine connects as a client to send commands to the embedded device.
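A minimal sketch of that server/client pattern (the host, port, and `SET_SPEED` command format below are made-up placeholders; a real setup would use the device's address and whatever command protocol it expects):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007      # placeholders for the device's real address
ready = threading.Event()

def embedded_server():
    """Role of the embedded device: accept one client and acknowledge its command."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                   # signal that the server is accepting
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"ACK:" + data)

threading.Thread(target=embedded_server, daemon=True).start()
ready.wait(timeout=5)

# Role of the ROS machine: connect as a client and send a command
with socket.create_connection((HOST, PORT), timeout=5) as cli:
    cli.sendall(b"SET_SPEED 0.5")     # hypothetical command
    reply = cli.recv(1024)
print(reply)
```

On the ROS side the client code would typically live inside a node, translating received topic messages into these socket commands.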
Originally posted by anonymous8676 with karma: 327 on 2013-11-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16122,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ethernet, embedded",
"url": null
} |
group-theory, group-representations, lorentz-symmetry, classical-field-theory, poincare-symmetry
Title: Can the finite dimensional irreducible $(j_+,j_-)$ representations of the Lorentz group $SO(3,1)$ be unitary? Since the Lorentz group $SO(3,1)$ is non-compact, it doesn't have any finite dimensional unitary irreducible representation. Is this theorem really valid?
One can take complex linear combinations of the hermitian angular momentum generators $J_i^\dagger=J_i$ and the anti-hermitian boost generators $K_i^\dagger=-K_i$ to construct two hermitian generators $N_i^{\pm}=J_i\pm iK_i$. Then it can easily be shown that the complexified Lie algebra of $SO(3,1)$ is isomorphic to that of $SU(2)\times SU(2)$. Since the generators are now hermitian, the exponentiation of $\{iN_i^+,iN_i^-\}$ with real coefficients should produce finite dimensional unitary irreducible representations. The finite dimensional representations labeled by $(j_+,j_-)$ are therefore unitary.
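This algebraic structure can be verified numerically in the four-dimensional vector representation (a sketch; the mostly-minus metric and the generator formula $(M^{\mu\nu})^{\rho}{}_{\sigma} = i(\eta^{\mu\rho}\delta^{\nu}_{\sigma} - \eta^{\nu\rho}\delta^{\mu}_{\sigma})$ are convention choices):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus metric

def M(mu, nu):
    """(M^{mu nu})^rho_sigma = i*(eta^{mu rho} delta^nu_sigma - eta^{nu rho} delta^mu_sigma)."""
    out = np.zeros((4, 4), dtype=complex)
    for rho in range(4):
        for sigma in range(4):
            out[rho, sigma] = 1j * (eta[mu, rho] * (nu == sigma)
                                    - eta[nu, rho] * (mu == sigma))
    return out

J = [M(2, 3), M(3, 1), M(1, 2)]          # rotations: hermitian
K = [M(0, 1), M(0, 2), M(0, 3)]          # boosts: anti-hermitian
Np = [J[i] + 1j * K[i] for i in range(3)]   # N_i^+ = J_i + i K_i
Nm = [J[i] - 1j * K[i] for i in range(3)]   # N_i^- = J_i - i K_i

eps = np.zeros((3, 3, 3))                # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

comm = lambda a, b: a @ b - b @ a

hermitian = all(np.allclose(Np[i], Np[i].conj().T) for i in range(3))
closes = all(
    np.allclose(comm(Np[i], Np[j]), 2j * sum(eps[i, j, k] * Np[k] for k in range(3)))
    and np.allclose(comm(Np[i], Nm[j]), np.zeros((4, 4)))
    for i in range(3) for j in range(3)
)
print(hermitian, closes)   # True True: two commuting su(2)'s with hermitian generators
```

With this normalization the two copies satisfy $[N_i^{\pm}, N_j^{\pm}] = 2i\varepsilon_{ijk}N_k^{\pm}$ and $[N_i^{+}, N_j^{-}] = 0$.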
$\bullet$ Does it mean we have achieved finite dimensional unitary representations of $SO(3,1)$?
$\bullet$ If the $(j_+,j_-)$ representations are for some reason non-unitary (why, I do not understand), what is the need for considering such representations?
$\bullet$ Even if they are not unitary (for a reason I don't understand yet), they tell how classical fields transform such as Weyl fields, Dirac fields etc. So what is the problem even if they are non-unitary? The statement "Non-compact groups don't have finite-dimensional unitary representations" is a heuristic, not a fact. $(\mathbb{R},+)$ is a non-compact Lie group that has non-trivial finite-dimensional unitary representations. However, the Poincaré group and the Lorentz group really don't have any finite-dimensional unitary representations. | {
"domain": "physics.stackexchange",
"id": 35367,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "group-theory, group-representations, lorentz-symmetry, classical-field-theory, poincare-symmetry",
"url": null
} |
physical-chemistry, kinetic-theory-of-gases
\end{equation}
with $f_{\text{trans}}$, $f_{\text{rot}}$ and $f_{\text{vib}}$ being the quadratic degrees of freedom for translation, rotation and vibration respectively. It is important to note that only excited degrees of freedom contribute to $U$. Which ones are active depends heavily on temperature. Vibrations are usually not excited at room temperature (there are exceptions, e.g. iodine) - they usually need temperatures in the order of $1000 \, \text{K}$ but that varies greatly. Rotations are much easier to excite - I know of no example where they aren't excited at room temperature. Translations take very little energy to excite and can always be considered active. If all degrees of freedom are active you get: For linear molecules $f_{\text{trans}} = 3$, $f_{\text{rot}} = 2$ and $f_{\text{vib}} = 3N - 5$, where $N$ is the number of atoms in the molecule. For non-linear molecules $f_{\text{trans}} = 3$, $f_{\text{rot}} = 3$ and $f_{\text{vib}} = 3N - 6$.
Since $C_{v} = \frac{dU}{dT}$ you get
\begin{equation}
C_{v} = \frac{1}{2} R (f_{\text{trans}} + f_{\text{rot}} + 2 f_{\text{vib}}) \ .
\end{equation}
Substituting this into the equation for the ratio between $C_{p}$ and $C_{v}$ yields
\begin{equation}
\frac{C_{p}}{C_{v}} = \frac{2}{(f_{\text{trans}} + f_{\text{rot}} + 2 f_{\text{vib}})} + 1 \ .
\end{equation}
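Plugging in the active degrees of freedom for a few standard cases gives the familiar ratios (a sketch; vibrations are assumed frozen out at room temperature, per the discussion above):

```python
def gamma(f_trans, f_rot, f_vib=0):
    """C_p/C_v = 2/(f_trans + f_rot + 2*f_vib) + 1."""
    return 2 / (f_trans + f_rot + 2 * f_vib) + 1

print(gamma(3, 0))        # monatomic gas (e.g. He): 5/3
print(gamma(3, 2))        # linear molecule, vibrations frozen (e.g. N2): 7/5
print(gamma(3, 3))        # non-linear molecule, vibrations frozen: 4/3
print(gamma(3, 2, 1))     # diatomic with its one vibration active: 9/7
```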
Now, you only have to know how many degrees of freedom each of the molecules has and plug them into the equation. | {
"domain": "chemistry.stackexchange",
"id": 579,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, kinetic-theory-of-gases",
"url": null
} |
electromagnetic-induction
Title: Working principle of inverter I got a project on the working of an inverter from school. I know that a DC inverter has an alternator switch which constantly changes its direction, so that a magnetic field is produced in the primary coil, due to which a current is induced in the secondary coil and we get AC output. So according to all this, electromagnetic induction should be the working principle behind the DC inverter. But DC can't take part in EMI; I know an alternator is being used, but it doesn't feel right. I hope I am right up to this point. I am taking the following image as reference:
Please don't get mad at me if I got everything wrong. You are right. The alternator turns the DC of the battery into AC, which allows for changing magnetic fields in the primary. A transformer needs a changing magnetic flux (which is why "DC doesn't work") - the alternating switch keeps changing the direction of the current, so the flux keeps changing. This induces an e.m.f. in the secondary, and the result is an AC voltage on the output of the circuit.
The voltage wave form on the primary will be a square wave: the output waveform will be more complicated, depending on details of the resistance of the coils and the load. Obviously, the higher the switching frequency, the more regular the shape of the output: at very low frequencies, the output will "decay" between switching of the input alternator. | {
"domain": "physics.stackexchange",
"id": 34109,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetic-induction",
"url": null
} |
quantum-mechanics, operators, hilbert-space, notation, time-reversal-symmetry
$$\tag{2} f(\cdot)~=~\langle v | A(\cdot) \rangle.$$
The value $A^{\dagger}v\in H$ of the adjoint operator $A^{\dagger}$ at the vector $v\in H$ is by definition the unique vector $u\in H$, guaranteed by Riesz' representation theorem, such that
$$\tag{3} f(\cdot)~=~\langle u | \cdot \rangle.$$
In other words,
$$\tag{4} \langle A^{\dagger}v | w \rangle~=~\langle u | w \rangle~=~f(w)=\langle v | Aw \rangle. $$
It is straightforward to check that the adjoint operator $A^{\dagger}:H\to H$ defined this way becomes an $\mathbb{F}$-linear operator as well.
IV) Finally, let us return to OP's question and consider the definition of the adjoint of an antilinear operator. The definition will rely on the complex version of Riesz' representation theorem. Let $H$ be given a complex Hilbert space, and let $A:H\to H$ be an antilinear continuous operator. In this case, the above equations (2) and (4) should be replaced with
$$\tag{2'} f(\cdot)~=~\overline{\langle v | A(\cdot) \rangle},$$
and
$$\tag{4'} \langle A^{\dagger}v | w \rangle~=~\langle u | w \rangle~=~f(w)=\overline{\langle v | Aw \rangle}, $$
respectively. Note that $f$ is a $\mathbb{C}$-linear functional.
It is straightforward to check that the adjoint operator $A^{\dagger}:H\to H$ defined this way becomes an antilinear operator as well.
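Relation (4') can be checked in a finite-dimensional example: if $A\psi = M\bar\psi$ for some matrix $M$ (an antilinear map), then (4') forces $A^{\dagger}\psi = M^{T}\bar\psi$. A numerical sketch, using the convention that the inner product is antilinear in its first slot:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def ip(u, v):
    """<u|v>, antilinear in the first argument (physics convention)."""
    return np.vdot(u, v)

A    = lambda x: M @ np.conj(x)        # an antilinear operator
Adag = lambda x: M.T @ np.conj(x)      # its adjoint, as dictated by (4')

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = ip(Adag(v), w)                   # <A^dagger v | w>
rhs = np.conj(ip(v, A(w)))             # conjugate of <v | A w>
print(np.isclose(lhs, rhs))            # True
print(np.allclose(A(1j * v), -1j * A(v)))   # True: A is antilinear
```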
-- | {
"domain": "physics.stackexchange",
"id": 5477,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, hilbert-space, notation, time-reversal-symmetry",
"url": null
} |
python, sqlite
### How can I streamline the below code? ###
if df_id == "df1":
engine = create_engine("sqlite:///abc.db", echo=False)
connection = engine.connect()
pandas.DataFrame.to_sql(truck_services, name="Truck Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(feeder_services, name="Feeder Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(express_services, name="Express Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(other_services, name="Other Services", con=engine, if_exists="append")
connection.close()
if df_id == "df2":
engine = create_engine("sqlite:///defg_Fares.db", echo=False)
connection = engine.connect()
pandas.DataFrame.to_sql(truck_services, name="Truck Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(feeder_services, name="Feeder Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(express_services, name="Express Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(other_services, name="Other Services", con=engine, if_exists="append")
connection.close()
if df_id == "df3":
engine = create_engine("sqlite:///hijk_Fares.db", echo=False)
## Same thing
connection = engine.connect()
if df_id == "df4":
engine = create_engine("sqlite:///lmno_Fares.db", echo=False)
## Same thing
connection = engine.connect() | {
"domain": "codereview.stackexchange",
"id": 24005,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, sqlite",
"url": null
} |
python, macos, macos-lion, osx
File "/Library/Python/2.7/site-packages/rosinstall-0.5.30-py2.7.egg/EGG-INFO/scripts/rosinstall", line 234, in __init__
File "/Library/Python/2.7/site-packages/rosinstall-0.5.30-py2.7.egg/EGG-INFO/scripts/rosinstall", line 283, in load_yaml
File "/Library/Python/2.7/site-packages/rosinstall-0.5.30-py2.7.egg/EGG-INFO/scripts/rosinstall", line 223, in __init__
File "build/bdist.macosx-10.7-intel/egg/vcstools/vcs_abstraction.py", line 51, in __init__
File "build/bdist.macosx-10.7-intel/egg/vcstools/svn.py", line 66, in __init__
vcstools.vcs_base.VcsError: 'python-svn could not be imported. Please install python-svn. On debian systems sudo apt-get install python-svn' | {
"domain": "robotics.stackexchange",
"id": 8327,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, macos, macos-lion, osx",
"url": null
} |
tensorflow, keras, image-segmentation, u-net
if (pretrained_weights):
model.load_weights(pretrained_weights)
return model
and this is how I save the model:
model_checkpoint = tf.keras.callbacks.ModelCheckpoint('unet_ThighOuterSurfaceval.hdf5',monitor='val_loss', verbose=1, save_best_only=True)
model_checkpoint2 = tf.keras.callbacks.ModelCheckpoint('unet_ThighOuterSurface.hdf5', monitor='loss', verbose=1, save_best_only=True)
model = unet_no_dropout()
history = model.fit(genaug, validation_data=genval, validation_steps=len(genval), steps_per_epoch=len(genaug), epochs=80, callbacks=[model_checkpoint, model_checkpoint2]) I am answering my own question here. The only additional thing that I found was that the average accuracy across a batch of data was slightly higher if the amount of samples in the batch was lower. So the validation set was only 15% of the data, therefore the average accuracy was slightly lower than for 70% of the data. I don't know why the more samples you take the lower the average accuracy, and whether this was a bug in the accuracy calculation or it is the expected behavior. Either way, if you have the same problem, one suggestion is plotting average accuracy vs number of samples and see if this is the reason why you get a lower training accuracy. | {
"domain": "ai.stackexchange",
"id": 3003,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "tensorflow, keras, image-segmentation, u-net",
"url": null
} |
javascript
Title: View_messages "class" /**
* View_message
* Object Model
*/
var View_message = function(div)
{
this.div = document.getElementById(div);
};
View_message.prototype.messages =
{
empty: 'Please complete all fields',
empty_bm: 'Please enter both a title and url',
name: 'Only letters or dashes for the name field',
email: 'Please enter a valid email',
same: 'Please make emails equal',
taken: 'Sorry that email is taken',
pass: 'Please enter a valid password, 6-40 characters',
validate: 'Please contact <a class="d" href="mailto:support@host.com">support</a> to reset your password',
url: 'Please enter a valid url'
};
View_message.prototype.display = function(type)
{
this.div.innerHTML = this.messages[type];
};
And the Call
obj_view = new View_message('test_id');
obj_view.display('empty'); I believe you are overcomplicating things. Declaring a prototype, will not make your code any more robust or even faster. I'm sure you know, but actually something like the code below will be easier to read, more effective and also faster.
/**
* View_message
* Object Model
*/
var ViewMessage = function(div) {
this.div = document.getElementById(div);
this.messages = {
empty: 'Please complete all fields',
empty_bm: 'Please enter both a title and url',
name: 'Only letters or dashes for the name field',
email: 'Please enter a valid email',
same: 'Please make emails equal',
taken: 'Sorry that email is taken',
pass: 'Please enter a valid password, 6-40 characters',
validate: 'Please contact <a class="d" href="mailto:support@host.com">support</a> to reset your password',
url: 'Please enter a valid url'
}; | {
"domain": "codereview.stackexchange",
"id": 887,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript",
"url": null
} |
quantum-chemistry, computational-chemistry, molecular-orbital-theory, theoretical-chemistry, software
-------
DENSITY
-------
0 1 2 3 4 5
0 0.615003 -0.002499 -0.154014 -0.768633 -0.000000 0.000000
1 -0.002499 2.116969 -0.494672 -0.075355 0.000000 -0.000000
2 -0.154014 -0.494672 2.131170 0.425379 0.000000 0.000000
3 -0.768633 -0.075355 0.425379 0.986834 -0.000000 0.000000
4 -0.000000 0.000000 0.000000 -0.000000 2.000000 0.000000
5 0.000000 -0.000000 0.000000 0.000000 0.000000 2.000000
--------------
OVERLAP MATRIX
--------------
0 1 2 3 4 5
0 1.000000 0.045145 0.423474 -0.335755 0.000000 0.000000
1 0.045145 1.000000 0.237990 -0.000000 0.000000 0.000000
2 0.423474 0.237990 1.000000 0.000000 0.000000 0.000000
3 -0.335755 -0.000000 0.000000 1.000000 0.000000 0.000000
4 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000
5 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 | {
"domain": "chemistry.stackexchange",
"id": 13099,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-chemistry, computational-chemistry, molecular-orbital-theory, theoretical-chemistry, software",
"url": null
} |
ros, callback, custom-message, subscribe
include_directories(include ${catkin_INCLUDE_DIRS})
add_executable(hsvgmmN src/hsvgmmN.cpp)
target_link_libraries(hsvgmmN ${catkin_LIBRARIES})
add_dependencies(hsvgmmN compute_disparity_generate_messages_cpp)
target_link_libraries(hsvgmmN ${catkin_LIBRARIES})
target_link_libraries (hsvgmmN elas viso ${PCL_LIBRARIES} ${OpenCV_LIBS} )
find_package(message_generation)
catkin_package(CATKIN_DEPENDS message_runtime)
add_message_files(
DIRECTORY ../swahana_v2/msg
FILES rectified_image.msg
)
find_package(catkin REQUIRED COMPONENTS swahana_v2)
add_dependencies(hsvgmmN swahana_v2_generate_messages_cpp)
but, "checker" is not getting printed.
This means, there is still some problem with the callback function.
please help!
Originally posted by saikrishnagv on ROS Answers with karma: 11 on 2015-07-10
Post score: 0
Original comments
Comment by cyborg-x1 on 2015-07-13:
Have you had a look on rqt_graph like I wrote in the answer? Are the nodes connected by topic?
You need to add the old package as dependency to your new package.
Inside the CMakeLists.txt and as run and build dependency in package.xml
And in the Callback you need your const <package>::<message>::ConstPtr &msg
CMakeLists.txt:
find_package(catkin REQUIRED COMPONENTS
...
your_old_package
)
package.xml:
<run_depend>your_old_package</run_depend>
<build_depend>your_old_package</build_depend>
I guess you are also missing to link your node inside CMakeLists.txt with roscpp by:
add_executable(<executable_name> <dependencies>)
target_link_libraries(<executable_name> ${catkin_LIBRARIES}) | {
"domain": "robotics.stackexchange",
"id": 22146,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, callback, custom-message, subscribe",
"url": null
} |
javascript, mathematics
Now for my next approach
Everything you have seen above remains the same, but now I don't calculate the offsets. Instead I calculate the total number of elements.
Since I have my forwards variable set to my absolute difference... It is actually the length of my line segment from 2 to 5 on my number scale.
Then I can just subtract this segment from the number scale to get the reverse distance
1-2-3-4-5-6-7-8
| |
|~~3~~|
|~seg~|
1-2-3-4-5-6-7-8
|~~~~~~8~~~~~~|
|~ num-scale ~|
num-scale - seg
|~|~ 5 ~|~~~~~|
Everything else remains the same...
Hence here is the code (Yes, I also made this code to be series extensible):
function findClosestTwo(from, to, min, max){
let smaller = (from > to) ? to : from,
bigger = (from < to) ? to : from;
let totalNumberOfElements = max - min + 1;
let forwards = bigger - smaller,
backwards = totalNumberOfElements - forwards;
let result = (forwards < backwards) ? forwards : -backwards;
result = (from > to) ? -result : result;
return result;
} | {
"domain": "codereview.stackexchange",
"id": 42193,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, mathematics",
"url": null
} |
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
• Hint: Think about prime factorization – dbx Jun 10 '18 at 17:07
• Hint: the map $f : \Bbb N^2 \to \Bbb N, (n,a) \mapsto 2^{n}(2a+1) - 1$ is bijective (taking $n=0$ yields the even numbers), and $\Bbb N^2 \simeq \bigsqcup_{n \in \Bbb N} A_n$ where $A_n := \Bbb N$. Concretely, we have the sets $$f(A_n) = \{ 2^n(2a+1) - 1 \mid a \in \Bbb N \},$$ which give a countable partition of $\Bbb N$ into countable sets. – Watson Jun 10 '18 at 18:57
## 5 Answers
You can give an example by diddling with prime factorizations, as suggested. Or do this: Start with $$\Bbb N=A_1\cup B_1,$$where $A_1$ and $B_1$ are infinite and disjoint. Now write $$B_1=A_2\cup B_2.$$Etc.
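The bijection from the comment above, $f(n,a) = 2^n(2a+1) - 1$, gives a fully explicit partition of this kind, and it is easy to check by brute force (a quick sketch):

```python
def bucket_of(m):
    """Write m + 1 = 2**n * (2*a + 1); return (n, a).

    This inverts f(n, a) = 2**n * (2*a + 1) - 1, so n labels the set A_n
    containing m.
    """
    t, n = m + 1, 0
    while t % 2 == 0:
        t //= 2
        n += 1
    return n, (t - 1) // 2

N = 10_000
buckets = {}
for m in range(N):
    buckets.setdefault(bucket_of(m)[0], []).append(m)

# every m in 0..N-1 lands in exactly one A_n, so the A_n are disjoint and cover N
assert sum(len(v) for v in buckets.values()) == N
print(buckets[0][:5])   # [0, 2, 4, 6, 8]    n = 0: the even numbers
print(buckets[1][:5])   # [1, 5, 9, 13, 17]  n = 1: numbers congruent to 1 mod 4
print(buckets[2][:5])   # [3, 11, 19, 27, 35]
```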
Edit: That gives us an infinite sequence of disjoint infinite sets $A_j$. It may not hold that $\Bbb N=\bigcup A_j$. But that's no problem: If $B=\Bbb N\setminus\bigcup A_j\ne\emptyset$ just let $A_1'=A_1\cup B$ and $A_j'=A_j$ for $j>1$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692339078751,
"lm_q1q2_score": 0.8159910681258462,
"lm_q2_score": 0.8354835391516132,
"openwebmath_perplexity": 334.4184983928116,
"openwebmath_score": 0.8203248977661133,
"tags": null,
"url": "https://math.stackexchange.com/questions/2814852/produce-an-infinite-collection-of-sets-a-1-a-2-a-3-with-the-property-tha"
} |
quantum-mechanics, condensed-matter, computational-physics, wick-rotation
Time-evolved Block Decimation in imaginary time. This technique is closely related to DMRG, and again it's just a matter of having a good trial state (a Matrix Product State) and an efficient way of applying the thermal evolution operator (technical, but all the details are in http://arxiv.org/abs/quant-ph/0301063) | {
"domain": "physics.stackexchange",
"id": 4777,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, condensed-matter, computational-physics, wick-rotation",
"url": null
} |
c#, algorithm, binary-search
Also you are returning nothing and writing to console your result. You might want to return the index of the item whenever it is present or -1 otherwise. This way you can use your method wherever you want; also this way your method does just one thing, search for an element in an array, instead of two: search and protocol the result.
Lastly, you might want to avoid recursion, since it creates unnecessary overhead, especially since the iterative code for this problem is almost the same:
int BinarySearch<T>(T[] arr, T val) where T : IComparable
{
var lowerIndex = 0;
var upperIndex = arr.Length - 1;
while(lowerIndex <= upperIndex)
{
var middleIndex = lowerIndex + (upperIndex - lowerIndex) / 2;
var compareResult = arr[middleIndex].CompareTo(val);
if(compareResult == 0)
{
return middleIndex;
}
if(compareResult > 0)
{
upperIndex = middleIndex - 1;
}
else
{
lowerIndex = middleIndex + 1;
}
}
return -1;
}
Notice the use of IComparable Generic Types.
Now if you want to test this method, you should always test the edge cases (first element, last element, element not found) aside from any random element. This way you are sure, that your code works in both directions and yields a correct result if the element is not present:
int[] arr = { 23, 25, 34, 40, 65, 67, 87};
Console.WriteLine(BinarySearch(arr, 40)); //3
Console.WriteLine(BinarySearch(arr, 25)); //1
Console.WriteLine(BinarySearch(arr, 87)); //6
Console.WriteLine(BinarySearch(arr, 23)); //0
Console.WriteLine(BinarySearch(arr, 42)); //-1
As a last step you should use the previous Console.WriteLines and make actual test cases for your function. If anything to get used to test your code. | {
"domain": "codereview.stackexchange",
"id": 42732,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, algorithm, binary-search",
"url": null
} |
powershell
}
}
Function CreateLocal {
foreach ($computer in $servers) {
Get-ChildItem -Recurse -Force \\$computer\d$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#Select-String $connectionstring1 |
#This makes a backup copy before doing any rewrite
% {
$ConfigName = "Web.qa.Config"
$newpath = join-path $NewFolder $_.FullName.Replace("Web.config","")
md $newpath
$finaldestination = $newpath + $ConfigName
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Out-File $finaldestination
}
Get-ChildItem -Recurse -Force \\$computer\e$ -ErrorAction SilentlyContinue -Include $WebConfigFile |
Where-Object {$_.FullName -notlike "*Recycle.bin*"} |
Select-Object FullName |
#This makes a backup copy before doing any rewrite
% {
$ConfigName = "Web.qa.Config"
$newpath = join-path $NewFolder $_.FullName.Replace("Web.config","")
md $newpath
$finaldestination = $newpath + $ConfigName
(Get-Content $_.FullName).replace($connectionstring1, $connectionstring2) | Out-File $finaldestination
}
}
}
Function ConfigonServer {
Write-Host "CAUTION YOU ARE ABOUT TO WRITE NEW CONFIGS ON THE SERVERS"
$resp = Read-Host " Are you SURE you want to continue? (Y/[N])"
if ($Resp.ToUpper() -eq "N") {
Write-Host "Taking you back to Safety"
sleep 3
Menu
}
if ($Resp.ToUpper() -eq "Y") {
foreach ($computer in $servers) { | {
"domain": "codereview.stackexchange",
"id": 24258,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "powershell",
"url": null
} |
newtonian-mechanics, forces, classical-mechanics, rotational-dynamics, statics
Under the condition that the sum of the acting forces is zero the torque is zero relative to all points if and only if it is zero relative to one point.
The proof is done by computing the change of the torque under a change of the reference point (so that $\vec r \mapsto \vec r' = \vec d + \vec r$):
\begin{align*}
\sum_i \vec \tau'_i &= \sum_i \vec r'_i \times \vec F_i = \sum_i (\vec r_i + \vec d) \times \vec F_i \\
&= \sum_i \vec r_i \times \vec F_i + \vec d \times \underbrace{\left(\sum_i \vec F_i\right)}_{\vec F_T} \\
&= \sum_i \vec \tau_i + \vec d \times \left( \sum_i \vec F_i \right).
\end{align*}
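This shift formula is easy to verify numerically with random data (a unit-free sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
r = rng.standard_normal((n, 3))          # points where the forces act
F = rng.standard_normal((n, 3))
F[-1] = -F[:-1].sum(axis=0)              # enforce total force = 0
d = rng.standard_normal(3)               # shift of the reference point

tau_0 = np.cross(r, F).sum(axis=0)       # total torque about the original point
tau_d = np.cross(r + d, F).sum(axis=0)   # total torque about the shifted point
print(np.allclose(tau_0, tau_d))         # True: torque independent of reference point

# with a nonzero total force the torque shifts by d x F_T, as derived above
G = rng.standard_normal((n, 3))
shift = np.cross(d, G.sum(axis=0))
print(np.allclose(np.cross(r + d, G).sum(axis=0),
                  np.cross(r, G).sum(axis=0) + shift))   # True
```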
So the even stronger statement holds, that the sum of the torques changes by $\vec d \times \vec F_T$ under a change of reference point. If the sum of the forces acting on the body is zero, this means that the total torque does not change when the reference point is changed. | {
"domain": "physics.stackexchange",
"id": 49622,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, classical-mechanics, rotational-dynamics, statics",
"url": null
} |
Let $G$ be a simple undirected planar graph on $10$ vertices with $15$ edges. If $G$ is a connected graph, then the number of bounded faces in any embedding of $G$ on the plane is equal to
(A) 3
(B) 4
(C) 5
(D) 6
Does bounded faces mean cycles?
For any planar graph,
n(no. of vertices) - e(no. of edges) + f(no. of faces) = 2
f = e - n + 2 = 15 - 10 + 2 = 7
number of bounded faces = no. of faces -1
= 7 -1
= 6
So, the correct answer would be (**D**).
"number of bounded faces = no. of faces -1" is this formula ?
number of bounded faces = no. of faces -1 (external or unbounded face)
Number of edges in minimally connected graph: n-1
So, 10-1=9 (edges used to connect all vertices)
Remaining 15-9=6 edges can be used to connect any two vertices and form a bounded face.
So ans - (d) 6
Is this analogy correct? | {
"domain": "gateoverflow.in",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363494503271,
"lm_q1q2_score": 0.834685493406038,
"lm_q2_score": 0.847967764140929,
"openwebmath_perplexity": 1701.9507761903585,
"openwebmath_score": 0.5018686056137085,
"tags": null,
"url": "https://gateoverflow.in/49/gate2012_17"
} |
lagrangian-formalism, symmetry, field-theory, dirac-equation, charge-conjugation
On a same note one can ask what equation should ψC satisfy? Should it
satisfy the conjugated Dirac equation as \begin{align}
> (i\gamma^{\mu}\partial_{\mu} + m)\psi_{C} = 0? \end{align}
If you're using the "West Coast" metric $(+1,-1,-1,-1)$ then the equation that $\psi_C$ should satisfy is the one above with $m\to -m$. That is, the free Dirac equation is the same for $\psi$ and $\psi_C$. This is because the masses of the particle and anti-particle are identical, as first divined by Dirac. (If you're using the metric of the opposite sign, then you're equation is correct.) | {
"domain": "physics.stackexchange",
"id": 49580,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lagrangian-formalism, symmetry, field-theory, dirac-equation, charge-conjugation",
"url": null
} |
# Why do the addition of linear equations all pass through the same point
So I'm doing linear algebra right now and I have a question regarding addition of equations as part of Gauss' elimination algorithm. I understand why it's possible, as the LHS of one equation can be added/subtracted to another equation as long as its RHS is also added/subtracted. But I'm trying to better understand about the intersecting point of the equations.
Given two equations: $$x + 2y = 3$$ $$3x + 4y = 5$$
they intersect at point (-1, 2). When I add other equations which are combinations of these two equations (i.e. $-x - y = -1$), they also all pass through this point. Maybe I'm missing something, but could someone point out which property allows for this? Why does a new equation, composed of some combination of two others, resulting in an equation with a different slope and intercepts, still pass through that same point? What is so special about that point?
I'm a pretty visual learner, so any visual analogies would be much appreciated. Thanks for your time!
Suppose lines $$L_1: a_1x+b_1y=c_1$$ and $$L_2:a_2x+b_2y=c_2$$ pass through the common point $(p,q)$
Then $$AL_1+BL_2: (Aa_1+Ba_2)x+(Ab_1+Bb_2)y=Ac_1+Bc_2$$ does too because $$(Aa_1+Ba_2)p+(Ab_1+Bb_2)q=A(a_1p+b_1q)+B(a_2p+b_2q)=Ac_1+Bc_2$$
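The identity above can be checked numerically for the example lines (a quick Python sketch):

```python
import random

# x + 2y = 3 and 3x + 4y = 5 intersect at (p, q) = (-1, 2)
p, q = -1, 2
a1, b1, c1 = 1, 2, 3
a2, b2, c2 = 3, 4, 5

for _ in range(5):
    A, B = random.uniform(-10, 10), random.uniform(-10, 10)
    # coefficients of the combined line A*L1 + B*L2
    a, b, c = A*a1 + B*a2, A*b1 + B*b2, A*c1 + B*c2
    # every combination still passes through (p, q)
    assert abs(a*p + b*q - c) < 1e-9
print("all random combinations pass through (-1, 2)")
```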
It is perhaps easier to see writing the equations a bit differently (a form which makes it explicit that they pass through the point $(p,q)$ - equivalent to a change of variables to make $(p,q)$ the origin) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9702399060540359,
"lm_q1q2_score": 0.8042695156104974,
"lm_q2_score": 0.8289388125473629,
"openwebmath_perplexity": 144.99417194547277,
"openwebmath_score": 0.7840724587440491,
"tags": null,
"url": "https://math.stackexchange.com/questions/811301/why-do-the-addition-of-linear-equations-all-pass-through-the-same-point"
} |
java, time-limit-exceeded, cryptography, hashcode
if (findCollision(x1, x2) == true) {
break;
}
counter++;
}
System.out.println("\nNumber of trials: " + counter);
}
If I take only the first 24 bits of SHA-1, I can easily find a collision. However, I'm unable to find a collision for 36 bits despite running it for hours. Hence, I'm wondering what alternative way there is for me to find a collision with just 36 bits of SHA-1. I realized while typing this that you should look into the Birthday Attack, which will probably be much more elaborate than my answer, lol.
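A sketch of the birthday attack mentioned above (using Python's hashlib; truncated to 24 bits here so it finishes in seconds, but 36 bits works identically and needs on the order of \$\sqrt{2^{36}} = 2^{18} \approx 262144\$ hashes rather than \$2^{36}\$):

```python
import hashlib

BITS = 24  # truncation length; raise to 36 for the original problem

seen = {}
i = 0
while True:
    digest = hashlib.sha1(str(i).encode()).digest()
    # keep only the first BITS bits of the 160-bit digest
    prefix = int.from_bytes(digest, "big") >> (160 - BITS)
    if prefix in seen:
        print("collision between", seen[prefix], "and", i, "after", i + 1, "hashes")
        break
    seen[prefix] = i
    i += 1
```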
Considering you have 36 bits of data, there are \$2^{36} = 68719476736\$ possible values. By the Pigeonhole principle, if you compared the hashes of \$2^{36} + 1\$ different strings, you'd be guaranteed a collision.
So that's easy! All you need to do is run this (pseudocode):
hashset = (new hashset that uses your hashing algorithm)
for(i = 0; i < 68719476736 + 1; i++) {
if hashset.contains(string(i)) break;
hashset.add(string(i));
}
print("This took " + i + " tries!") | {
"domain": "codereview.stackexchange",
"id": 34579,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, time-limit-exceeded, cryptography, hashcode",
"url": null
} |
ros, python, gui
if __name__ == '__main__':
main()
Originally posted by sampreets3 with karma: 230 on 2021-12-22
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Reuben on 2022-08-19:
Hi, can you please show how you used the MultiThreadedExecutor to spin the node while using app.exec()? The above snippet initialises a executor but doesn't seem to use it
Comment by sampreets3 on 2022-08-22:
Hi, sorry I did not see your message earlier. I have not been working with the GUI for some time now, so I will start it up and post the snippet.
edit : Updated the answer to include the multi threaded executor
Comment by Reuben on 2022-08-23:
I did have an error with hmi_node.show() but once I changed it to HMI.show() its worked with my GUI node too. Thank you for the prompt response and for your help!
Comment by sampreets3 on 2022-08-23:
Yeah I was a bit pressed with my work so I did not have the time to test out the piece of code. I'm happy that it worked for you! I have incorporated the edit you mentioned in my response for future references.
Comment by markg on 2022-08-26:
Thank you for this post! I am working to set this up but I'm confused about the ROS node (MyGuiNode) that you are importing. Where is this defined? Is this a ros node that also inherits a ui object?
Comment by sampreets3 on 2022-08-28:
Hi @markg, yes this is my custom ROS node that I define. In the constructor, I inherit an ui object as self.ui, and then I can get access to all the attributes (buttons, inputs, etc) and use them in my publisher and subscriber calls | {
"domain": "robotics.stackexchange",
"id": 37275,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, python, gui",
"url": null
} |
fourier-transform
Title: What is the correct solution for Fourier transform of unit step signal? The unit step signal defined as
$$
u[n] = \begin{cases} 1, & n \geq 0 \\ 0, & n < 0 \end{cases}
$$
has three possible solutions for its Fourier domain representation depending on the type of approach. These are as follows -
The widely followed approach (Oppenheim Textbook)- calculating the Fourier transform of the unit step function from the Fourier transform of the signum function.
$$
F(u[n])=U(j\omega) = \pi\delta(\omega)+\frac{1}{j\omega}
$$
Fourier Transform calculated from the Z transform of the unit step function (Refer Proakis Textbook, Digital Signal Processing Algorithms and applications, pages 267,268 section 4.2.8 )
$$
U(j\omega) = \frac {e^{\frac{j\omega}{2}}}{2j\sin \frac{\omega}{2}}; \omega \neq 2\pi k; k=0,1,2,3...
$$
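Incidentally, the closed form above can be rewritten as $1/(1-e^{-j\omega})$, since $1-e^{-j\omega}=e^{-j\omega/2}\cdot 2j\sin(\omega/2)$; a quick numerical check:

```python
import cmath
import math

# check e^{jw/2} / (2j sin(w/2)) == 1 / (1 - e^{-jw}) away from w = 2*pi*k
for w in (0.3, 1.0, 2.5, -1.7):
    lhs = cmath.exp(1j * w / 2) / (2j * math.sin(w / 2))
    rhs = 1 / (1 - cmath.exp(-1j * w))
    assert abs(lhs - rhs) < 1e-12
print("closed forms agree for w != 2*pi*k")
```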
Fourier transform calculated by splitting into even and odd functions - followed in Proakis Textbook (Refer Proakis Textbook, Digital Signal Processing Algorithms and applications, page 618 section 8.1 )
$$
U(j\omega) = \pi\delta(\omega)+\frac{1}{1-e^{-j\omega}}
$$ | {
"domain": "dsp.stackexchange",
"id": 9904,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fourier-transform",
"url": null
} |
spectra, brown-dwarf, red-dwarf
Title: How do I understand a brown dwarf with a M-type spectrum? There are a large fraction of M dwarfs which are claimed to be brown dwarfs.
Why do we still use M-type and not create a new stellar type like L, T and Y? The main signature of M is TiO absorption bands. They appear in M-type brown dwarfs?
Is a spectrum enough to judge whether a celestial object is a brown dwarf or not? Or do we need to know its mass?
The earliest type is M3.5. Does that mean M-type objects later than M3.5 are necessarily brown dwarfs? Or is there a safe value, say M9, that is necessarily a brown dwarf? Is a brown dwarf necessarily colder than a red dwarf?
Should there be more brown dwarfs or red dwarfs?
The reason we do not need to create a new stellar type is the spectra of brown dwarfs are more like red dwarf's spectra? The spectral type of an object is almost entirely determined by the temperature of its photosphere. ie Saying something is type M3.5 is just a measure of its surface temperature. An M3.5 brown dwarf is at a very similar temperature to an M3.5 star.
Brown dwarfs begin their lives as hot balls of gas and gradually cool with time. They start off as M-type objects and then cool to become L-type and then eventually, T-type objects. Stars on the other hand, start their lives by cooling, but stabilise their temperatures at a mass-dependent value that corresponds to an M-type classification or even as cool as type L2 for the very lowest mass stars.
Given that stars and brown dwarfs appear to have been born throughout Galactic history, then a range of masses are possible for any spectral type. In order to use the surface temperature (or spectral type) as an indicator of whether something is a brown dwarf, then you also need to know its age. In general that is not known, unless the object can be demonstrated to be part of a cluster of stars (of known age). | {
"domain": "astronomy.stackexchange",
"id": 1952,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "spectra, brown-dwarf, red-dwarf",
"url": null
} |
python, python-3.x, web-scraping
You should use a requests.Session to make consecutive requests to the same server faster.
The authentication method is not very secure, although at least the password is not transmitted unencrypted due to the API using https.
if 'straightBets' in data.keys() is the same as if 'straightBets' in data.
If you have to check if an item is in a list before adding it, you probably want a set instead. However, since your bets are just dictionaries this is not possible. If one of the keys of the dictionaries is a unique identifier, though, you could just use a dictionary:
bets.update({bet['id']: bet for bet in data['straightBets']
if bet['betStatus'] != 'CANCELLED'})
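A minimal illustration with made-up bet data (assuming each bet dict carries a unique 'id' field):

```python
data = {"straightBets": [
    {"id": 7, "betStatus": "OPEN"},
    {"id": 8, "betStatus": "CANCELLED"},
    {"id": 7, "betStatus": "OPEN"},   # same id: overwrites instead of duplicating
]}

bets = {}
# cancelled bets are skipped; repeated ids collapse to one entry
bets.update({bet["id"]: bet for bet in data["straightBets"]
             if bet["betStatus"] != "CANCELLED"})
print(sorted(bets))  # [7]
```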
The requests module can directly work with a dictionary for the data. No need to json.dumps it into a string first.
You have a BetType enum, but don't use it when determining bet types. The same is true of the side. I think it would be easier if you used a dictionary:
SIDES = {"over": "Over", "under": "Under", "home": "TEAM1", "away": "TEAM2"}
Which you can then use like this:
def _determine_team_or_side(side):
"""Determines whether the bet is on totals or on spreads."""
side = side.lower()
try:
return SIDES[side]
except KeyError:
raise ValueError(f'side must be one of {SIDES.keys()}, {side} given.') | {
"domain": "codereview.stackexchange",
"id": 38963,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, web-scraping",
"url": null
} |
electromagnetism, vector-fields
Title: In Griffiths' Electrodynamics, Appendix A on Curl and Stokes' Theorem, is the books depiction of an infinitesimal loop correct? In Griffiths' Electrodynamics, there is a section in Appendix A where he sketches a proof of Stokes's theorem.
Consider a vector function $\mathbf{A}=A_u\mathbf{\hat{u}}+A_v\mathbf{\hat{v}}+A_w\mathbf{\hat{w}}$
Consider the following depiction of a point $(u,v,w)$ on a plane parallel to the uv-plane (ie $w$ is constant), and three other points obtained by way of infinitesimal displacements from $(u,v,w)$ (4th Ed, page 580):
The goal is to obtain curl in curvilinear coordinates starting from the line integral $\oint \vec{A} \cdot d\vec{l}$ "around the infinitesimal loop generated by starting at $(u,v,w)$ and successively increasing $u$ and $v$ by infinitesimal amounts, holding $w$ constant. The surface is a rectangle (at least, in the infinitesimal limit)..."
He then proceeds to actually compute each piece of the line integral. For the case of the line integral associated with the picture above, the $\hat{w}$ component of the curl is obtained.
My question is: is the depiction accurate? There are arrows going counterclockwise around the curved rectangle. But isn't the actual line integral on an actual rectangle, as in the red rectangle in the picture I drew below? The red parallelogram will coincide with the green one in the infinitesimal limit. | {
"domain": "physics.stackexchange",
"id": 86868,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, vector-fields",
"url": null
} |
beginner, r, timer
Title: R function for a timer with a beep I wrote a quick timer function that depends on the beepr package to provide the sound effect at the end of the time. I've never used a while-loop before, so I don't know if there's a more elegant way to write this (i.e., one that doesn't require repeating the same line of code twice?). I'd appreciate knowing if there is a better way:
timer <- function(interval, units) {
require(beepr)
t0 <- Sys.time()
stopwatch <- round(as.double(difftime(Sys.time(), t0, u = units)))
while(stopwatch < interval){
stopwatch <- round(as.double(difftime(Sys.time(), t0, u = units)))
}
beep(2)
}
timer(5, "secs") To avoid repeating code, you could have used:
done <- FALSE
while(!done){
done <- round(as.double(difftime(Sys.time(), t0, u = units))) >= interval
}
Note however that such a busy-wait while loop is quite taxing on your CPU. Instead, you should use the friendlier Sys.sleep function. It takes a number of seconds as input:
timer <- function(num_sec) {
Sys.sleep(num_sec)
if (require(beepr)) beep(2) else message("DONE!")
} | {
"domain": "codereview.stackexchange",
"id": 26664,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, r, timer",
"url": null
} |
particle-physics, angular-momentum, nuclear-physics, quantum-spin, strong-force
There is also the EMC effect (https://en.wikipedia.org/wiki/EMC_effect), which is observation that quark structure functions of the proton and neutron are modified in the nuclear environment...that is, protons in nuclear environments may be different from free protons.
And that's all effective field theory. A fundamental QCD based description is a long way off. | {
"domain": "physics.stackexchange",
"id": 78822,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, angular-momentum, nuclear-physics, quantum-spin, strong-force",
"url": null
} |
$$\begin{eqnarray*}\sum_{n=0}^{+\infty}\frac{(-1)^n (n+1)^3}{n!} &=& \frac{13}{2}+\sum_{n\geq 3}\frac{(-1)^n (n+1)^3}{n!}\\&=&\frac{13}{2}+\color{red}{1}\cdot\sum_{n\geq 3}\frac{(-1)^n}{(n-3)!}+\color{red}{6}\cdot\sum_{n\geq 3}\frac{(-1)^n}{(n-2)!}+\color{red}{7}\cdot\sum_{n\geq 3}\frac{(-1)^n}{(n-1)!}+\color{red}{1}\cdot\sum_{n\geq 3}\frac{(-1)^n}{n!}\\&=&\frac{13}{2}-\sum_{n\geq 0}\frac{(-1)^n}{n!}+6\cdot\sum_{n\geq 1}\frac{(-1)^n}{n!}-7\cdot\sum_{n\geq 2}\frac{(-1)^n}{n!}+\sum_{n\geq 3}\frac{(-1)^n}{n!}\\&=&\sum_{n\geq 0}(-1+6-7+1)\frac{(-1)^n}{n!}\\&=&-\sum_{n\geq 0}\frac{(-1)^n}{n!}=\color{red}{-\frac{1}{e}}.\end{eqnarray*}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9835969665221771,
"lm_q1q2_score": 0.8786564296437352,
"lm_q2_score": 0.8933094138654242,
"openwebmath_perplexity": 263.8368980489373,
"openwebmath_score": 0.99993896484375,
"tags": null,
"url": "https://math.stackexchange.com/questions/1423107/solving-the-infinite-series-1-frac231-frac332-frac433-cdo/1423113"
} |
The exponential distribution is a commonly used distribution in probability theory, statistics and reliability engineering. It models the time until some specific event occurs, or equivalently the time between subsequent events (or arrivals, as some call them) that occur continuously and independently at a constant average rate, governed by the rate parameter $\lambda$ (for example $f(t) = 0.5e^{-0.5t}$, $t \geq 0$). It is often used to model the "time until failure" of objects such as lightbulbs, or the lifetimes of radioactive atoms that spontaneously decay at a constant rate; in radioactive decay, the half-life is the median of the distribution. The exponential distribution is the special case of the Weibull distribution where $\beta = 1$, and it is the only continuous distribution with a constant failure rate. In R, the quantile and random-generation functions (e.g. rexp) take the rate as a parameter; if it is not given, it assumes the default value of 1. The mean $\theta = 1/\lambda$ tells us how often on average the events come. | {
"domain": "statesindex.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9896718483945665,
"lm_q1q2_score": 0.8044718987005797,
"lm_q2_score": 0.8128673155708976,
"openwebmath_perplexity": 517.6863820886438,
"openwebmath_score": 0.8786601424217224,
"tags": null,
"url": "http://statesindex.org/oven-beef-duwqwh/61dffc-exponential-distribution-rate"
} |
Ax = 0. (Optional) Use the results from each member to draw overall axial force, shear force and bending moment diagrams for the entire structure. A sudden increase or decrease in the shear force diagram between any two points indicates that there is (a) No loading between the two points (b) Point loads between the two points (c) U. If a concentrated force is applied to the free end of the beam (for example, a weight of mass m). Write shear and moment equations for the beams in the following problems. The rivets and bolts of an aircraft experience both shear and tension stresses. The shear force diagram gives the diagrammatic representation of shear forces at different points of the beam. Reactions in the free-body diagram of the structure. Bending Moments Diagram: At the ends of a simply supported beam the bending moments are zero. You will be able to utilise pre-existing pin-joints in structures to facilitate your analysis. The loading causes the shear and bending forces. CE 331, Fall 2007 Shear & Moment Diagrams Examples 2 / 7 2. 6 converts the given bending moment diagram into the. Multiple load scenarios result in multiple load effect (shear, moment, deflection) diagrams for any given member in a structure. Calculate revised shear. Semirigid diaphragms are meshed into quadrilateral plates within RAM Frame or Ram Concrete Analysis using the properties of the deck defined in RAM Modeler. The moment at a particular point 2 is whatever it was at an earlier point 1 plus the integral of the shear force diagram between these two points 1 and 2, okay? Next we are going to show examples of how to construct moment diagrams. Circular Cantilever Beam in Direct Tension and Bending Stress Equations and Calculator: Circular Cantilever Beam in Direct Tension. Consider the Bending Moment at Points A, B, C and D on the beam below; Shear Force Diagram.
Lecture Notes COSC321Haque 8 PDF_C8_b (Shear Forces and Bending Moments in Beams) Q6: A simply supported beam with a triangularly distributed downward load is shown in Fig. Knowing how to calculate and draw these diagrams is important for any engineer that deals with any type of structure because it is critical to know where large amounts of loads and bending are taking place on a beam so that you can make sure your structure can. The bending moment at these points will be zero. In designing the cantilever sections, both the dead and the live load | {
"domain": "versoteano.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.975946444307138,
"lm_q1q2_score": 0.8111510941039626,
"lm_q2_score": 0.8311430394931456,
"openwebmath_perplexity": 1067.4010244498343,
"openwebmath_score": 0.6429223418235779,
"tags": null,
"url": "http://versoteano.it/shear-and-bending-moment-diagrams-for-frames-examples.html"
} |
quantum-mechanics, hilbert-space, angular-momentum, spherical-harmonics
Title: Inner products with spherical harmonics in quantum mechanics Let $|l,m\rangle$ be a simultaneous eigenstate of operators $L^2$ and $L_z$ and we want to calculate $\langle l,m|\cos(\theta)|l,m'\rangle$ where $\theta$ is the angle $[0,\pi]$. It is true that in general $$\langle l,m|\cos(\theta)|l,m'\rangle=0 \tag{1}$$ for the same $l$? Why?
Does the fact that equation $(1)$ is valid have anything to do with parity? It is true in general, and $l$ doesn't have to be the same: the matrix element is zero because $\cos\theta$ doesn't depend on $\phi$, so the azimuthal wavefunction factorizes and we get a factor of $\delta_{m,m'}$.
More explicitly, the spherical harmonics can be written as $Y_{lm} = N_{lm} P_{lm}(\cos\theta) e^{im\phi}$, where $P_{lm}$ are the associated Legendre polynomials and $N_{lm}$ are normalization constants. Inserting this into the integral we get
$$\begin{align}
\langle l, m| \cos\theta |l', m'\rangle &= \int \left(N_{lm} P_{lm}(\cos\theta) e^{-im\phi}\right) \cos\theta \left( N_{l'm'} P_{l'm'}(\cos\theta) e^{im'\phi} \right) \sin\theta\, d\theta\, d\phi \end{align}$$ | {
"domain": "physics.stackexchange",
"id": 86817,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, hilbert-space, angular-momentum, spherical-harmonics",
"url": null
} |
# Ambiguity in the method applied for differential equations
1. Aug 5, 2016
### Faiq
1. The problem statement, all variables and given/known data
Why do we need two solutions to solve a 2nd order linear differential equation?
Let's consider a differential equation with equal roots for the auxiliary equation. The reason we can't use $y=Ae^{n_1x}+Be^{n_2x}$
as its general solution is that, since the roots are equal, we get a single solution $y=Ce^{nx}$.
So my question is: what's the problem with having one solution? After all, it's A SOLUTION.
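For the repeated-root case, the standard resolution is that $xe^{rx}$ is a second, independent solution alongside $e^{rx}$; a quick numerical sketch (with an arbitrarily chosen repeated root $r$) confirms both satisfy $y'' - 2ry' + r^2y = 0$:

```python
import math

r = 1.5  # the repeated root of the auxiliary equation m^2 - 2rm + r^2 = 0
x = 0.7  # an arbitrary evaluation point

def residual(y, yp, ypp):
    # y'' - 2r y' + r^2 y, which should vanish for any solution
    return ypp - 2*r*yp + r*r*y

E = math.exp(r*x)
# first solution y1 = e^{rx}
assert abs(residual(E, r*E, r*r*E)) < 1e-9
# second, independent solution y2 = x e^{rx}
assert abs(residual(x*E, (1 + r*x)*E, (2*r + r*r*x)*E)) < 1e-9
print("both e^{rx} and x e^{rx} solve the equation")
```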
2. Aug 5, 2016
### axmls
Are you asking why, mathematically, do second order ODEs have two solutions, or why in general would we want to try to get a second solution if we already have one?
The answer to the latter is: what if, given our initial conditions, the one solution we did find doesn't agree with what we find in the real world? We want to find the most general solution possible for it to actually match up with experiment. It is possible that one solution turns out to be always $0$ and that the other solution is the important one.
3. Aug 5, 2016
### Faiq
Oh thank you for answering the latter. Btw I am asking both questions
4. Aug 5, 2016
### Faiq
Thank you for explaining the second one. By the way I was asking both the questions
5. Aug 5, 2016
### Staff: Mentor
It helps to think of the connection between differential equations and linear algebra, with one connection being that the solution space for a given differential equation is a function space, a type of vector space. The solution space for a first-order DE is of dimension one, the solution space for a second-order DE is of dimension two, and in general, the solution space for an n-th order DE is of dimension n. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9621075701109192,
"lm_q1q2_score": 0.8099268260252755,
"lm_q2_score": 0.8418256452674008,
"openwebmath_perplexity": 261.13166770769794,
"openwebmath_score": 0.7684862017631531,
"tags": null,
"url": "https://www.physicsforums.com/threads/ambiguity-in-the-method-applied-for-differential-equations.881119/"
} |
ros
Title: How to get the line "source devel/setup.bash" to run after every time you catkin_make?
I have "source devel/setup.bash" added to my bashrc file, but sometimes, if I create a new package and then compile it for the first time with "catkin_make", then I find I need to open a new terminal in order to run the package because the terminal I already have open needs to source devel/setup.bash again.
Does anyone know how to get "source devel/setup.bash" to run every time I do catkin_make? Or otherwise how to solve this problem?
Originally posted by mysteriousmonkey29 on ROS Answers with karma: 170 on 2014-01-14
Post score: 4
Either you must run the above source command each time you open a new terminal window
or add it to your .bashrc file as follows.
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
I found the above instruction from this website:
https://github.com/MST-Robotics/MST_ROS_Catkin_Packages/wiki/New-Member-Tutorials
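Another option is a small hypothetical helper function in ~/.bashrc (the name cm is made up) that re-sources the workspace only when the build succeeds:

```shell
# Build, then re-source the workspace overlay only if the build succeeded.
# Assumes you run it from the root of your catkin workspace.
cm() {
    catkin_make "$@" && source devel/setup.bash
}
```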
Originally posted by urthkakao with karma: 108 on 2014-01-30
This answer was ACCEPTED on the original site
Post score: 7
Original comments
Comment by Will Chamberlain on 2016-11-06:
or
catkin_make && source devel/setup.sh
or even set up an alias for that e.g. in your ~/.bashrc
alias catkin_make_and_source='catkin_make && source devel/setup.sh' | {
"domain": "robotics.stackexchange",
"id": 16647,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
which one of them is correct and why so?
• First one is wrongly computed, the number of multisets of $k$ elements taken out of $n$ is $\binom{n + k - 1}{k}$ ($k$ stars with $n - 1$ bars separating them into $n$ groups; select position of the stars), here $\binom{3 + 5 - 1}{5} = \binom{7}{5} = \binom{7}{2}$. Both are right. – vonbrand Aug 12 '15 at 20:34
We need to decide how many $1$'s, how many $2$'s, and (therefore) how many $3$'s there will be. Imagine that there are $3$ kids, Kid1, Kid2, and Kid3. We want to distribute $5$ identical candies between them. The number of ways to do this is $\binom{7}{2}$. Alternately, the number is $\binom{7}{5}$.
Remark: It can be tricky to remember the relevant formulas. My memory for formulas has always been at best mediocre, and is not improving. The way I do it is to imagine giving out $8$ candies, at least one to each kid, and then taking away a candy from each. Intuitively clear Stars and Bars gives that there are $\binom{8-1}{3-1}$ ways to do it.
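The count can be confirmed by brute force (a quick sketch):

```python
from itertools import combinations_with_replacement
from math import comb

# multisets of size 5 drawn from 3 distinct values:
# stars and bars says C(3 + 5 - 1, 5) = C(7, 5) = C(7, 2)
count = sum(1 for _ in combinations_with_replacement(range(3), 5))
print(count, comb(7, 5), comb(7, 2))  # 21 21 21
```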
We have two increments to invoke and 6 possible places to invoke them (counting before $x_1$ and after $x_5$ as possible locations). Therefore we can use the stars-and-bars selection rule to see that ${7}\choose{2}$ is the answer. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787857334058,
"lm_q1q2_score": 0.826880652307128,
"lm_q2_score": 0.8376199572530448,
"openwebmath_perplexity": 234.81059074280356,
"openwebmath_score": 0.7847015261650085,
"tags": null,
"url": "https://math.stackexchange.com/questions/1101830/which-solution-is-correct-to-this-question-find-the-number-of-different-sequenc"
} |
robotic-arm, turtlebot
Originally posted by jorge with karma: 2284 on 2016-02-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by peterlin300 on 2016-03-28:
Hi! I am a ROS beginner, and I am very interested in MoveIt and microcontrollers. It seems you have great experience with ROS. Could you please give me some direction and help? How do I use MoveIt to send commands to a microcontroller to control my own manipulator? Thanks for any help! | {
"domain": "robotics.stackexchange",
"id": 23564,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "robotic-arm, turtlebot",
"url": null
} |
discrete-signals, kalman-filters, linear-algebra, least-squares
Title: Least Square Error Estimation: Conditions for $(A^TA)^{-1} = A^{-1}(A^T)^{-1}$? I am new to linear algebra and have this simple question...
in least square estimation... the best estimate for the equation $Ax = b$ is $x_{Estimated} = (A^t A)^{-1} A^t b$... the projection of $b$ on the column space of $A$ is $p = A (A^t A)^{-1} A^t b$, which is written as $p = P b$... $P$ is the projection matrix $= A (A^t A)^{-1} A^t$
now using the inverse rule $(AB)^{-1} = B^{-1} A^{-1}$... $P$ reduces to $I$... so is the projection matrix $P$ just the identity matrix $I$? Am I missing something basic, or is my concept not clear? Can we write $(A^t A)^{-1} = A^{-1} (A^t)^{-1}$ always? First of all, the minimum norm least square solution is $A^+b$, where $A^+$ is the pseudoinverse. Only when the left inverse $A_L^{-1}$ (or right inverse $A_R^{-1}$) exists, you have $A^+=A_L^{-1}=(A^TA)^{-1}A^T$ (or $A^+=A_R^{-1}=A^T(AA^T)^{-1}$). | {
"domain": "dsp.stackexchange",
"id": 469,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "discrete-signals, kalman-filters, linear-algebra, least-squares",
"url": null
} |