c#, programming-challenge, median
Title: Improving performance of HackerRank Median I'm trying to compete on HackerRank and my answer got accepted, but the times are not so good. A friend of mine submitted an answer in C# too but somehow made it a lot faster. I'm wondering what I can do to improve it.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

class Solution {
    static void Main(String[] args) {
        StringBuilder st = new StringBuilder();
        int N = Convert.ToInt32(Console.ReadLine());
        int[] x = new int[N];
        List<double> a = new List<double>();
        string[] s = new string[N];
        for (int i = 0; i < N; i++) {
            string tmp = Console.ReadLine();
            string[] split = tmp.Split(new Char[] { ' ', '\t', '\n' });
            s[i] = split[0].Trim();
            x[i] = Convert.ToInt32(split[1].Trim());
            bool r = true;
            if (s[i] == "r") {
                int index = a.BinarySearch(x[i]);
                if (index >= 0) {
                    a.RemoveAt(a.LastIndexOf(x[i]));
                } else {
                    r = false;
                }
            } else {
                var index = a.BinarySearch(x[i]);
                if (index < 0) index = ~index;
                a.Insert(index, x[i]);
            }
            if (!r || a.Count == 0) {
                st.AppendLine("Wrong!");
            } else {
                st.AppendLine(calcularModa(a).ToString());
            }
        }
        Console.WriteLine(st.ToString());
    }

    // calcularModa ("calculate mode") actually returns the median of the sorted list.
    static double calcularModa(List<double> a) {
        int i = a.Count / 2;
        if (a.Count % 2 != 0) {
            return a[i];
        } else {
            return (a[i - 1] + a[i]) / 2;
        }
    }
} | {
"domain": "codereview.stackexchange",
"id": 3709,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, programming-challenge, median",
"url": null
} |
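The dominant cost in the code above is `List.Insert`/`RemoveAt`, which shift O(n) elements on every operation. One common alternative for this kind of insert/delete/median workload (my suggestion, not necessarily the friend's actual solution) is a Fenwick (binary indexed) tree of value counts, which makes insert, delete, and k-th-smallest selection all O(log V). A minimal Python sketch, assuming values have been mapped to a bounded non-negative range:

```python
class FenwickMultiset:
    """Multiset over integers 0..max_val with O(log n) add/remove/k-th smallest."""

    def __init__(self, max_val):
        self.n = max_val + 1
        self.tree = [0] * (self.n + 1)
        self.size = 0

    def _update(self, i, delta):
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def _prefix(self, i):
        # Number of stored elements <= i.
        i += 1
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)
        return total

    def add(self, v):
        self._update(v, 1)
        self.size += 1

    def remove(self, v):
        # Mirrors the "Wrong!" case above: report failure if v is absent.
        if self._prefix(v) - (self._prefix(v - 1) if v else 0) == 0:
            return False
        self._update(v, -1)
        self.size -= 1
        return True

    def kth(self, k):
        # 1-based k-th smallest, by binary search on prefix counts.
        lo, hi = 0, self.n - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if self._prefix(mid) >= k:
                hi = mid
            else:
                lo = mid + 1
        return lo

    def median(self):
        m = self.size
        if m % 2:
            return float(self.kth(m // 2 + 1))
        return (self.kth(m // 2) + self.kth(m // 2 + 1)) / 2
```

The same structure translates directly to C#; negative inputs can be handled by offsetting or coordinate-compressing the values first.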
quantum-mechanics, quantum-field-theory, gauge-theory, conformal-field-theory, field-theory
Title: Interaction potential analysis from $\phi^4$ model In this paper, the authors consider a real scalar field theory in $d$-dimensional flat Minkowski space-time, with the action given by
$$S=\int d^d\! x \left[\frac12(\partial_\mu\phi)^2-U(\phi)\right],$$
where $U(\phi)$ is a general self-interaction potential. Then, the authors proceed by saying that for the standard $\phi^4$ theory, the interaction potential can be written as
$$U(\phi)= \frac{1}{8} \phi^2 (\phi -2)^2.$$
Why is this so? What is the significance of the cubic term present?
In this question Willie Wong answered by setting $\psi = \phi - 1$, why is that? Or why is this a gauge transformation?
Does anyone have a better argument to understand the interaction potential? It's not a gauge transformation, it's a field redefinition. Srednicki gives an example of this in exercise 10.5. In this exercise, a free field theory is turned into what looks like an interacting field theory by a field redefinition, however in perturbation theory, the scattering amplitudes vanish, confirming that the physics hasn't changed.
I suspect you will find the same here (though I haven't done it!) - if you compute scattering amplitudes for the 3-way vertices represented by the cubic terms resulting from this field redefinition, they should cancel. | {
"domain": "physics.stackexchange",
"id": 6635,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-field-theory, gauge-theory, conformal-field-theory, field-theory",
"url": null
} |
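The field redefinition in Willie Wong's answer can be checked directly; substituting $\phi = \psi + 1$ into the potential quoted above gives the familiar symmetric double well (my own check, not from the paper):

```latex
U(\phi) = \frac{1}{8}\,\phi^2(\phi-2)^2
\;\xrightarrow{\;\phi=\psi+1\;}\;
U = \frac{1}{8}\,(\psi+1)^2(\psi-1)^2
  = \frac{1}{8}\,\bigl(\psi^2-1\bigr)^2
  = \frac{1}{8}\,\bigl(\psi^4 - 2\psi^2 + 1\bigr),
```

so the shifted field sees the standard $\mathbb{Z}_2$-symmetric quartic with minima at $\psi = \pm 1$, and the cubic term in $\phi$ is just an artifact of expanding around $\phi = 0$ instead of the symmetric point.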
which makes sense since $f$ is continuous: $x\mapsto x^2\sin\left(\tfrac\pi x\right)$ is defined and continuous wherever $x\neq0$, so the remaining thing is to check that: $$\lim_{x\to0^-}x^2\sin\left(\tfrac\pi x\right)=0=\lim_{x\to0^+}x^2\sin\left(\tfrac\pi x\right).$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692264378963,
"lm_q1q2_score": 0.8468071414643852,
"lm_q2_score": 0.8670357563664174,
"openwebmath_perplexity": 192.7435907962689,
"openwebmath_score": 0.967206597328186,
"tags": null,
"url": "http://math.stackexchange.com/questions/865028/is-it-differentiable"
} |
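Both one-sided limits follow from the squeeze $|x^2\sin(\pi/x)| \le x^2$; a quick numeric sanity check of that bound (illustrative only, not from the original post):

```python
import math

def f(x):
    # x^2 * sin(pi / x), defined for x != 0
    return x * x * math.sin(math.pi / x)

# The squeeze bound |f(x)| <= x^2 forces f(x) -> 0 from both sides.
for x in [1e-2, -1e-2, 1e-4, -1e-4, 1e-6, -1e-6]:
    assert abs(f(x)) <= x * x
```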
• Well, I think that gets back to the distinction between the two formulations of the proof. I seem to be reading "there is no rational whose square is $12$" as saying that there does not exist an $x$ such that $x^2 = 12$, so $x = \pm \sqrt{12}$. So, that would require proving that $\sqrt{12} \not \in \mathbb{Q}$ and $- \sqrt{12} \not \in \mathbb{Q}$. Is this interpretation of the theorem incorrect? – Matt.P Aug 27 '18 at 8:50 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.982287697148445,
"lm_q1q2_score": 0.8053711433262251,
"lm_q2_score": 0.8198933425148213,
"openwebmath_perplexity": 196.29070550089983,
"openwebmath_score": 0.9216357469558716,
"tags": null,
"url": "https://math.stackexchange.com/questions/2895969/rudins-proof-that-nexists-x-in-mathbbq-x2-12"
} |
performance, sql, mysql
SELECT a1.fname, a1.lname, a2.fname, a2.lname, count(*) AS num_films
FROM Actor AS a1, Actor AS a2, Cast AS c1, Cast AS c2
WHERE c1.mid = c2.mid AND
a1.id = c1.pid AND
a2.id = c2.pid AND
a1.gender = 'M' AND
a2.gender = 'F'
GROUP BY a1.id, a2.id, a1.fname, a1.lname, a2.fname, a2.lname
HAVING COUNT(*) > 0
ORDER BY COUNT(*) DESC;
Alias notation
Your table aliases a1/a2 and c1/c2 are not good. In this very short query it may not matter; however, names for aliases, variables, parameters, etc. should be descriptive enough to say something about what they represent. Besides, with 4-5 character table names, is it really that much to type? I would use Actor1/Actor2 and Cast1/Cast2.
In fact, given the nature of your query, you could give them even more useful names... how about MaleAct/MaleCast and FemAct/FemCast!
Old style JOIN
This JOIN syntax should not be used:
FROM Actor AS a1, Actor AS a2, Cast AS c1, Cast AS c2
WHERE c1.mid = c2.mid AND
a1.id = c1.pid AND
a2.id = c2.pid AND
Instead:
FROM Actor AS MaleAct
INNER JOIN Cast AS MaleCast ON MaleAct.id = MaleCast.pid
INNER JOIN Cast AS FemCast ON MaleCast.mid = FemCast.mid
INNER JOIN Actor AS FemAct ON FemAct.id = FemCast.pid
Multiple counting
You compute COUNT(*) three times within your query. Since you're already selecting COUNT(*) into your result set as num_films, and MySQL allows you to use column aliases in GROUP BY, ORDER BY, and HAVING clauses, you can reference the alias instead.
This:
HAVING COUNT(*) > 0
ORDER BY COUNT(*) DESC;
Instead:
HAVING num_films > 0
ORDER BY num_films DESC; | {
"domain": "codereview.stackexchange",
"id": 9403,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, sql, mysql",
"url": null
} |
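Putting those suggestions together (and assuming the `Actor(id, fname, lname, gender)` and `Cast(mid, pid)` schema implied by the original query), the full rewrite might look like:

```sql
SELECT MaleAct.fname, MaleAct.lname, FemAct.fname, FemAct.lname, COUNT(*) AS num_films
FROM Actor AS MaleAct
INNER JOIN Cast AS MaleCast ON MaleAct.id = MaleCast.pid
INNER JOIN Cast AS FemCast ON MaleCast.mid = FemCast.mid
INNER JOIN Actor AS FemAct ON FemAct.id = FemCast.pid
WHERE MaleAct.gender = 'M'
  AND FemAct.gender = 'F'
GROUP BY MaleAct.id, FemAct.id, MaleAct.fname, MaleAct.lname, FemAct.fname, FemAct.lname
HAVING num_films > 0
ORDER BY num_films DESC;
```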
$\left( {f \circ g} \right) \circ h = f \circ \left( {g \circ h} \right)$: the composition of functions is associative. This is on my study guide and I can't figure out the proper way to do it: "Prove the composition of relations is an associative operation." The inclusion $YC \subseteq D$ is equivalent to $Y \subseteq D/C$, and the right residual is the greatest relation satisfying $YC \subseteq D$. A fork operator $(<)$ has been introduced to fuse two relations $c: H \to A$ and $d: H \to B$ into $c\,(<)\,d: H \to A \times B$. The composition of binary relations is associative, but not commutative. Consider the sets $A = \left\{ {a,b} \right\}$, $B = \left\{ {0,1,2} \right\}$, and $C = \left\{ {x,y} \right\}$. The relation $R$ between sets $A$ and $B$ is given by $R = \left\{ {\left( {a,0} \right),\left( {a,2} \right),\left( {b,1} \right)} \right\}$ | {
"domain": "tabvue.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812299938006,
"lm_q1q2_score": 0.8005721860597742,
"lm_q2_score": 0.8267117962054049,
"openwebmath_perplexity": 1378.5681441113877,
"openwebmath_score": 0.7744567394256592,
"tags": null,
"url": "http://tabvue.com/aditya-hitkari-ipoci/3c0b33-composition-of-relations-is-associative"
} |
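Associativity of relation composition is easy to verify computationally for small relations. A sketch, treating relations as sets of pairs with the left-to-right composition convention; the relations $S$ and $T$ below are hypothetical examples, only $R$ comes from the fragment above:

```python
def compose(R, S):
    """Composition R;S of binary relations: (a, c) whenever a R b and b S c."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

# R between A={a,b} and B={0,1,2}, as in the fragment above.
R = {("a", 0), ("a", 2), ("b", 1)}
S = {(0, "x"), (1, "y"), (2, "y")}   # hypothetical relation B -> C
T = {("x", "p"), ("y", "q")}         # hypothetical relation C -> D

# (R;S);T == R;(S;T)
assert compose(compose(R, S), T) == compose(R, compose(S, T))
```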
python, beginner, regex, csv
def main():
# Run this part only once at the start. From here
# change the directory to working folder and give the right filename (hdfcbk),
# if unsure what to do go to your folder and right click and copy the file here,
# it will look like /home/XYZ/.../Your_folder_name/hdfcbk
with open('hdfcbk', 'r') as smsFile:
data = smsFile.read()
data = data.split('\n')
main_data = data
regl = []
pat1 = 'INR (?P<Amount>(.*)) deposited to A\/c No (?P<AccountNo>(.*)) towards (?P<Towards>(.*)) Val (?P<Date>(.*)). Clr Bal is INR (?P<Balance>(.*)) subject to clearing.'
# TODO - Use much more descriptive names...no idea what's going on here without searching for a while
a, b, c = regex_search(pat1, main_data)
# Updating main_data to remaining messages
main_data = c
# Writing remaining sms to a file; you don't need to change the file name as it will be updated
# every time you run the script. Just look at the remaining sms and write new regexes.
with open('remaining_sms.txt', 'w') as fp:
fp.write('\n'.join('{}'.format(x) for x in main_data))
# Update the csv file
write_to_csv([a, b, c], 'hdfc_test_3.csv')
# Keeping all the regexes in one list, update the index number in [i, pat1]
regl.append([1, pat1]) | {
"domain": "codereview.stackexchange",
"id": 22644,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, regex, csv",
"url": null
} |
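The helper `regex_search` is not shown in the excerpt, so here is a guess at its shape (a hypothetical sketch, not the author's actual code): it splits a list of SMS strings into parsed rows (via the named groups) and the remaining unmatched messages.

```python
import re

# Same deposit pattern as pat1 above, split across lines for readability.
PAT_DEPOSIT = (r'INR (?P<Amount>.*) deposited to A/c No (?P<AccountNo>.*) '
               r'towards (?P<Towards>.*) Val (?P<Date>.*)\. '
               r'Clr Bal is INR (?P<Balance>.*) subject to clearing\.')

def regex_search(pattern, messages):
    """Return (parsed_rows, remaining_messages) for a list of SMS strings."""
    rx = re.compile(pattern)
    parsed, remaining = [], []
    for msg in messages:
        m = rx.search(msg)
        if m:
            parsed.append(m.groupdict())  # e.g. {'Amount': ..., 'Balance': ...}
        else:
            remaining.append(msg)
    return parsed, remaining
```

The sample message in the test below is made up for illustration.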
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Problem: Show that if $P(A_i)=1$ for all $i \ge 1$ then $P(\bigcap_{i=1}^{\infty}A_i)=1$.
What is strange about this question is the first part, $P(A_i)=1$ for all $i \ge 1$. If I'm understanding this correctly that's saying that $P(A_1)=1$, $P(A_2)=1$, ..., $P(A_n)=1$, for $n \ge i \ge 1$. This is only true if $A_1=A_2=\ldots=A_n$ because the sum of their probabilities (keeping in mind inclusion-exclusion of course) cannot be larger than 1. It's obvious that if this is the case then the intersection of all of them is 1 as well, but that's not the part that troubles me.
So is this a typo or am I misunderstanding the problem do you think?
Hi Jameson!
A proof should be based on the axioms and propositions of probability theory.
See wiki.
Let $B_n=\displaystyle\bigcap_{i=1}^{n}A_i$.
Then $P(\displaystyle \bigcap_{i=1}^{\infty}A_i)=\displaystyle \lim_{n \to \infty}P(B_n)$.
According to the sum rule, we have:
$P(B_n \cup A_{n+1})=P(B_n) + P(A_{n+1}) - P(B_n \cap A_{n+1})$
According to the monotonicity rule and the numeric bound rule we also have:
$1 = P(A_{n+1}) \le P(B_n \cup A_{n+1}) \le 1$ | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9648551566309688,
"lm_q1q2_score": 0.8181660680346886,
"lm_q2_score": 0.8479677622198947,
"openwebmath_perplexity": 617.92192589585,
"openwebmath_score": 0.8490941524505615,
"tags": null,
"url": "https://mathhelpboards.com/threads/strange-notation-or-typo.3302/"
} |
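Combining the two displayed facts (my own completion of the truncated argument): since $P(B_n \cup A_{n+1}) = 1$ and $P(A_{n+1}) = 1$, the sum rule gives

```latex
P(B_{n+1}) = P(B_n \cap A_{n+1})
           = P(B_n) + P(A_{n+1}) - P(B_n \cup A_{n+1})
           = P(B_n) + 1 - 1
           = P(B_n),
```

and since $P(B_1) = P(A_1) = 1$, induction yields $P(B_n) = 1$ for every $n$, so $P\big(\bigcap_{i=1}^{\infty} A_i\big) = \lim_{n \to \infty} P(B_n) = 1$ by continuity of probability.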
python, nlp, text-mining
Title: How to recognize a two part term when the space is removed? ("bigdata" and "big data") I'm not an NLP guy and I have this question.
I have a text dataset containing terms which go like, "big data" and "bigdata".
For my purpose both of them are the same.
How can I detect them in NLTK (Python)?
Or any other NLP module in Python? There is a nice implementation of this in gensim: http://radimrehurek.com/gensim/models/phrases.html
Basically, it uses a data-driven approach to detect phrases, ie. common collocations. So if you feed the Phrase class a bunch of sentences, and the phrase "big data" comes up a lot, then the class will learn to combine "big data" into a single token "big_data". There is a more complete tutorial-style blog post about it here: http://www.markhneedham.com/blog/2015/02/12/pythongensim-creating-bigrams-over-how-i-met-your-mother-transcripts/ | {
"domain": "datascience.stackexchange",
"id": 1650,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, nlp, text-mining",
"url": null
} |
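The data-driven idea behind gensim's Phrases can be sketched in a few lines of plain Python (a toy illustration of the approach, not gensim's actual API): count bigram frequencies over the corpus, then join frequent bigrams into single tokens.

```python
from collections import Counter

def learn_phrases(sentences, min_count=2):
    """Learn frequent bigrams; a toy stand-in for gensim's Phrases model."""
    bigrams = Counter()
    for sent in sentences:
        bigrams.update(zip(sent, sent[1:]))
    return {bg for bg, n in bigrams.items() if n >= min_count}

def join_phrases(sent, phrases):
    """Rewrite a tokenized sentence, fusing learned bigrams with an underscore."""
    out, i = [], 0
    while i < len(sent):
        if i + 1 < len(sent) and (sent[i], sent[i + 1]) in phrases:
            out.append(sent[i] + "_" + sent[i + 1])
            i += 2
        else:
            out.append(sent[i])
            i += 1
    return out
```

With "big_data" as a single token, "bigdata" and "big data" can then be normalized to the same form. Gensim's real model additionally uses a collocation score rather than a raw count threshold.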
# Statements about the set 2^M
#### mathmari
##### Well-known member
MHB Site Helper
Hey!!
1. Let $M:=\{7,4,0,3\}$. Determine $2^M$.
2. Prove or disprove $2^{A\times B}=\{A'\times B'\mid A'\subseteq A, B'\subseteq B\}$.
3. Let $a\neq b\in \mathbb{R}$ and $M:=2^{\{a,b\}}$. Determine $2^M$.
4. Is there a set $M$, such that $2^M=\emptyset$ ?
First of all how is $2^M$ defined? Is this the powerset?
#### Klaas van Aarsen
##### MHB Seeker
Staff member
First of all how is $2^M$ defined? Is this the powerset?
Hey mathmari !!
Yep.
So $2^{\{a,b\}}=\{\varnothing,\{a\},\{b\},\{a,b\}\}$.
#### mathmari | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9674102580527664,
"lm_q1q2_score": 0.838779298102543,
"lm_q2_score": 0.8670357701094303,
"openwebmath_perplexity": 1693.0803334677269,
"openwebmath_score": 0.9573047161102295,
"tags": null,
"url": "https://mathhelpboards.com/threads/statements-about-the-set-2-m.26834/"
} |
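For part 1, the powerset can be enumerated mechanically; a small sketch (my own illustration, not from the thread):

```python
from itertools import chain, combinations

def powerset(m):
    """2^M: the list of all subsets of M, via combinations of every size."""
    s = list(m)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]
```

Since $\varnothing \in 2^M$ for every $M$, $|2^M| = 2^{|M|} \ge 1$, which also answers question 4: no set has an empty powerset.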
Thanks.
• Binomial Theorem, do you know? – IAmNoOne Apr 29 '14 at 2:58
• I've heard of it, and I know it via Pascal's Triangle for low $n$, such as $n=2,3,4$, but I'm not familiar with its general formula. – Sujaan Kunalan Apr 29 '14 at 2:59
• I will post an answer then. – IAmNoOne Apr 29 '14 at 2:59
• You can find many posts about this here. For example, this question and this question and other posts linked there. – Martin Sleziak Sep 27 '15 at 20:35
There is an easy way to prove the formula.
Suppose you want to buy a burger and you have a choice of 6 different condiments for it - mustard, mayonnaise, lettuce, tomatoes, pickles, and cheese. How many ways can you choose a combination of these condiments for your burger?
Of course, you can choose all 6 condiments, 5 different condiments, 4 different condiments, and so on, down to choosing no condiments at all. So the obvious way to solve the problem is:
\begin{align}{{6}\choose{6}} + {{6}\choose{5}} + {{6}\choose{4}} + {{6}\choose{3}} + {{6}\choose{2}} + {{6}\choose{1}} + {{6}\choose{0}} = \boxed{64}\end{align}
But there is a better way. Imagine 6 spaces for 6 condiments:
_____ _____ _____ _____ _____ _____
For every space, there are $2$ possible outcomes: Yes or No, meaning the condiment was chosen or the condiment was not chosen. With $2$ possible outcomes for each space, there are $2^6 = \boxed{64}$ possible ways.
We know that both ways have foolproof logic and will both give identical answers no matter how many condiments there are. So this means we have proven:
\begin{align}\sum_{k = 0}^{n} \binom{n}{k} = 2^n\end{align} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9901401423688665,
"lm_q1q2_score": 0.8859669582450704,
"lm_q2_score": 0.894789454880027,
"openwebmath_perplexity": 224.06384668751375,
"openwebmath_score": 0.8196208477020264,
"tags": null,
"url": "https://math.stackexchange.com/questions/773686/why-does-2n-choose1n-choose2-ldotsn-choosen-1-11n"
} |
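The double-counting identity is easy to spot-check for small $n$ (a quick verification, not part of the original answer):

```python
from math import comb

# Summing n-choose-k over all k counts every subset once;
# the yes/no-per-item argument counts the same subsets as 2^n.
for n in range(11):
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
```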
computability, reductions, undecidability, halting-problem
$S$ is a Turing machine (i.e. its function is computable)
$S$ decides $A_{TM}$.
Arguably, 1) is clear; simulating other TMs given their indices/encodings is something TMs can do (thanks to the existence of a universal TM, which you should have seen a proof of already) and $S$ does little more. This is as close to a proof as you'll get with this form of "definition" of $S$; it's more of an idea, really.
For the second, note that $S$ always halts because $R$ always halts and
$\qquad\begin{align*}
S(\langle M,w \rangle) = 1 &\iff R(M,w) = 1 \land M(w) = 1 \\
&\iff M(w)\downarrow \land M(w) = 1 \\
&\iff w \in L_M \\
&\iff \langle M,w \rangle \in A_{TM} \;.
\end{align*}$
By definition, that means that $S$ decides $A_{TM}$.
Similar arguments usually work for this kind of proof; check our reference questions and other questions tagged computability+reductions. | {
"domain": "cs.stackexchange",
"id": 3050,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computability, reductions, undecidability, halting-problem",
"url": null
} |
ros, navigation
<arg name="odom_topic" value="$(arg odom_topic)"/>
<arg name="frame_id" value="$(arg frame_id)"/>
<arg name="rgbd_sync" value="$(arg rgbd_sync)"/>
<arg name="depth_topic" value="$(arg depth_topic)"/>
<arg name="rgb_topic" value="$(arg rgb_topic)"/>
<arg name="camera_info_topic" value="$(arg camera_info_topic)"/>
<arg name="approx_rgbd_sync" value="$(arg approx_rgbd_sync)"/>
<arg name="visual_odometry" value="$(arg visual_odometry)"/>
</include>
</launch>
What should I add to which part to solve this problem?
Is it possible that it is not the RTABMAP setting in the first place, but the Navigation stack setting?
Thank you.
Originally posted by donguri on ROS Answers with karma: 13 on 2022-12-04
Post score: 0
I asked the same question in the Official RTAB-Map Forum and the great matlabbe answered it.
http://official-rtab-map-forum.206.s1.nabble.com/Error-The-goal-sent-to-the-navfn-planner-is-off-the-global-costmap-td9216.html#a9226
Thank you very much, Mathieu!!!
Originally posted by donguri with karma: 13 on 2022-12-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38163,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, navigation",
"url": null
} |
quantum-mechanics, conservation-laws, quantum-optics
In Vector Spherical Harmonics and total angular momentum it is explained that vector spherical harmonics have a fixed value for total angular momentum $\boldsymbol{J}$ but are superpositions of orbital optical, $\boldsymbol{L}$, and spin optical, $\boldsymbol{S}$ angular momentum.
This squares with our intuition about the dipole radiation pattern depicted above. We can see that in parts of the pattern (top and bottom) we see circularly polarized light which we know carries SAM, but in other parts (horizontal plane) we see linearly polarized light that rotates as a function of azimuthal angle. We know this pattern carries OAM.
This superposition of differing states of $\boldsymbol{L}$ and $\boldsymbol{S}$ is the same as for the case of angular momentum addition in an atom. The states with well defined values for $J_z$ are superpositions of states with well defined values of $L_z$ and $S_z$. And so it is for the dipole radiation pattern.
So how does this come back to the paradox raised in the original question? The answer is that the atom begins with $\hbar$ of angular momentum in the $z$ direction. When it decays we know it gives up all of its angular momentum. It gives up its angular momentum to a state of light which has the mode pattern depicted above. This state of light has both spin and orbital angular momentum, but it has a well defined total angular momentum of $\hbar$. The intuition that has failed us and made us think there is a paradox was the intuition that "horizontally polarized light in the $xy$ plane can't carry the angular momentum in the $z$ direction as needed."
The problem with this intuition is that one must consider the overall state of the light field over all space to determine its angular momentum. | {
"domain": "physics.stackexchange",
"id": 80124,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, conservation-laws, quantum-optics",
"url": null
} |
java, beginner, android
final WebView webView1;
webView1 = new WebView(MainActivity.this);
webView1.setLayoutParams(new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT));
web_linearLayout.addView(webView1);
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view)
{
Document doc = Jsoup.parse(html_open);
doc.append(content2);
doc.append(html_close);
webView1.getSettings().setJavaScriptEnabled(true);
webView1.getSettings().setAllowFileAccess(true);
webView1.getSettings().setLoadsImagesAutomatically(true);
webView1.loadDataWithBaseURL("", String.valueOf(doc), "text/html", "utf-8", "");
}
});
}
if (content_type3.equals("text"))
{
if (content1.equals("NA"))
{
Log.d("Content3 blank","content blank");
}else
{
WebView webView1;
webView1 = new WebView(this);
webView1.setLayoutParams(new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT));
web_linearLayout.addView(webView1); | {
"domain": "codereview.stackexchange",
"id": 21671,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, android",
"url": null
} |
algorithms, algorithm-analysis, recurrence-relation, recursion, pseudocode
Why is 1 added to the numerators of the first two rational numbers?
Similarly, why is 2 added to the numerator of the last rational number?
Finally,
3)
C(n) = the exact number of returns from line 5.1 of RANDOM(n), recursive or not.
$$
C(1) = 0
$$
$$
C(n) = \frac {C(n)}{3} + \frac {C(n-1)}{3} + \frac {1+2*C(n-1)}{3}
$$
, which, after some algebra, comes out to be:
$$
C(1) = 0
$$
$$
C(n) = \frac 32 * C(n-1) + \frac 12
$$
, where $n>0$.
My main question for this answer is: why is 1 added to the last rational number in the expression? What does that 1 represent?
I am so lost, and desperate. Any attempt to shine light on these answers would be such a great help to me. Thank you. Firstly, there is one minor typo in the question. The "$T(n) =\frac 32 * T(n-1) + 1$" should be "$T(n) =\frac 32 * (T(n-1) + 1)$" since this equality is said to be obtained by simple algebra from the equality above it. However, this typo does not affect either the question or this answer.
The answer to all of your questions is so simple that I can hardly imagine you missed it. Well, on the other hand, it is indeed very easy to miss.
Let me give you a simple example to illustrate the answer. Suppose we have the following function, which computes the factorial.
function FACTORIAL(n)
if n = 1 then
return 1
else
return n * FACTORIAL(n-1)
end-if | {
"domain": "cs.stackexchange",
"id": 12241,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, algorithm-analysis, recurrence-relation, recursion, pseudocode",
"url": null
} |
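The simplified recurrence can be unrolled: writing $C(n) + 1 = \tfrac32\,(C(n-1) + 1)$ gives the closed form $C(n) = (3/2)^{\,n-1} - 1$ (my own derivation, worth double-checking). A quick numeric check:

```python
def c_recurrence(n):
    # C(1) = 0;  C(n) = (3/2) * C(n-1) + 1/2
    return 0.0 if n == 1 else 1.5 * c_recurrence(n - 1) + 0.5

def c_closed(n):
    # Conjectured closed form: C(n) = (3/2)^(n-1) - 1
    return 1.5 ** (n - 1) - 1

for n in range(1, 20):
    assert abs(c_recurrence(n) - c_closed(n)) < 1e-9
```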
fluid-dynamics, aerodynamics, drag, lift
Title: Uses of the Reynolds number I have seen a lot of places talking about the Reynolds number and how it is calculated, but I have never seen an equation that actually made use of this number to calculate lift, drag, or other aerodynamic properties.
So, what is this number actually used for? tl;dr - The equations you are looking for are the Navier-Stokes equations. The Reynolds number is a convenient non-dimensionalizing parameter in these equations. The Navier-Stokes equations are hard to solve, so there isn't an easy way to find drag (or lift) as a function of Reynolds number.
From the Wikipedia article for Reynolds number:
In fluid mechanics, the Reynolds number ($Re$) is a dimensionless number that gives a measure of the ratio of inertial forces to viscous forces and consequently quantifies the relative importance of these two types of forces for given flow conditions.
In addition, the incompressible Navier-Stokes equations, which govern continuum fluid flow, can be written in non-dimensional form such that the only parameter is the Reynolds number (ignoring body forces). This is very nice because it is the basis for the validity of wind tunnel testing.
Suppose we would like to measure the aerodynamics of the flow around a Boeing 747 that is landing. Two (at least) options exist:
Build your very own full size 747, instrument it, and fly it. (extremely expensive)
Build a small scale model of a 747, instrument it, test inside a wind tunnel (much less expensive) | {
"domain": "physics.stackexchange",
"id": 14266,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, aerodynamics, drag, lift",
"url": null
} |
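As an illustration of the non-dimensionalization argument, the Reynolds number itself is just $Re = \rho v L / \mu$. The numbers below (sea-level air, a rough landing speed and mean chord for a 747) are representative guesses of my own, not figures from the answer:

```python
def reynolds(rho, v, length, mu):
    """Re = rho * v * L / mu: ratio of inertial to viscous forces."""
    return rho * v * length / mu

# Rough full-scale figures: sea-level air, ~70 m/s landing speed, ~8 m mean chord.
re_full = reynolds(rho=1.225, v=70.0, length=8.0, mu=1.81e-5)

# A 1/20-scale wind-tunnel model matches Re only if v * L is preserved,
# e.g. by raising speed (or density, in a pressurized tunnel).
re_model = reynolds(rho=1.225, v=70.0 * 20, length=8.0 / 20, mu=1.81e-5)
assert abs(re_full - re_model) / re_full < 1e-12
```

Matching Re (here on the order of $10^7$) is exactly what makes the scale test dynamically representative of the full-size aircraft.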
Obviously you will switch, because you knew from the start that you were almost certainly wrong, with only a tiny probability of winning. In this case it's clear that switching is the same as winning.
Now coming to your question: initially you chose a gate, and the probability of you being right is $\frac{1}{3}$. The probability of the lottery being behind one of the other two gates is $\frac{2}{3}$, with each of those gates having a $\frac{1}{3}$ probability of hiding the prize money (i.e. the probability $\frac{2}{3}$ is equally distributed amongst the unchosen gates). $\because$ the total probability must be 1, once a gate which is empty is opened, the probability of you being correct is still $\frac{1}{3}$, but the remaining probability is redistributed among the other unopened gates (in this case, the single unopened gate), $\therefore$ the unopened gate gets a probability of $\frac{2}{3}$.
Hence you must switch to have a better chance of winning. Although you may lose if you chose the correct gate initially, the chances of that are $\frac{1}{3} < \frac{2}{3}$, the probability you get after switching.
| {
"domain": "gateoverflow.in",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9822877012965886,
"lm_q1q2_score": 0.8142564006926728,
"lm_q2_score": 0.8289388125473628,
"openwebmath_perplexity": 1198.169432880473,
"openwebmath_score": 0.6019632816314697,
"tags": null,
"url": "https://gateoverflow.in/215771/doubt"
} |
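The $\frac13$ vs. $\frac23$ argument is easy to confirm with a short simulation (my own illustration, not from the answer):

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Estimate the win probability of the switch/stay strategies over 3 doors."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # Host opens an empty, unpicked door.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials
```

Switching wins about two thirds of the time; staying wins about one third.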
If you don't want to verify the precise maximum you can bound as
$$0 \leqslant e^x - \left(1 + \frac{x}{n} \right)^n =e^x \left[1 - \left(1 + \frac{x}{n} \right)^ne^{-x}\right] \\ \leqslant e^x \left[1 - \left(1 + \frac{x}{n} \right)^n \left(1 - \frac{x}{n} \right)^n\right] \\ = e^x \left[1 - \left(1 - \frac{x^2}{n^2} \right)^n \right],$$
since $e^{x/n} > 1 + x/n$ which implies $e^{x} > (1 + x/n)^n$ for $x \in (-n,\infty).$
By Bernoulli's inequality, $(1 - x^2/n^2)^n \geqslant 1 - x^2/n$ and
$$0 \leqslant e^x - \left(1 + \frac{x}{n} \right)^n \leqslant \frac{e^xx^2}{n} \leqslant \frac{e}{n},$$
which enables you to prove uniform convergence on $[0,1]$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.978712650690179,
"lm_q1q2_score": 0.8554704493413514,
"lm_q2_score": 0.8740772368049822,
"openwebmath_perplexity": 263.70698307808743,
"openwebmath_score": 0.9416009187698364,
"tags": null,
"url": "https://math.stackexchange.com/questions/2392765/show-that-left-1-fracxn-rightn-is-uniformly-convergent-on-s-0-1"
} |
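The final $e/n$ bound can be sanity-checked numerically on $[0,1]$ (illustrative only):

```python
import math

def gap(x, n):
    # The quantity bounded above: e^x - (1 + x/n)^n, nonnegative on [0, 1].
    return math.e ** x - (1 + x / n) ** n

for n in [5, 50, 500]:
    sup = max(gap(k / 1000, n) for k in range(1001))
    assert 0 <= sup <= math.e / n  # matches the e/n uniform bound
```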
• A proper sub-field of $\Bbb R$ can be an ordered field according to an order that is not the usual order $<$ of $\Bbb R$. For example $\{a+b\sqrt 2\,:a,b\in \Bbb Q\}.$ For $a,b\in \Bbb Q$ let $a+b\sqrt 2\,>^*0\iff a-b\sqrt 2<0.$ – DanielWainfleet Jan 16 '19 at 23:06
• I think you meant $\psi$ rather than $f$. Please check if my reasoning is correct: For all $x\in F$ and $q\in\Bbb Q$: $\psi(x)-q=\psi(x)-\psi(q)$ [since $\psi|_{\Bbb Q}=\text{id}_{\Bbb Q}$] $=\psi(x-q)$. Then $x \ge q \iff x-q \ge 0 \iff x-q$ $=\sqrt {x-q} \cdot \sqrt {x-q} \iff \psi(x-q)=\psi(\sqrt {x-q} \cdot \sqrt {x-q})=\psi(\sqrt {x-q})\cdot \psi(\sqrt {x-q})=$ $(\psi(\sqrt {x-q}))^2 \ge 0 \iff \psi(x)-q=\psi(x-q) \ge 0$. To sum up, $x\ge q \iff \psi(x)-q \ge 0$ $\iff \psi(x)\ge q$.[...] – Abstract Analysis Jan 17 '19 at 1:54 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9808759621310287,
"lm_q1q2_score": 0.8432235048639305,
"lm_q2_score": 0.8596637469145053,
"openwebmath_perplexity": 129.39330932421777,
"openwebmath_score": 0.9746605753898621,
"tags": null,
"url": "https://math.stackexchange.com/questions/3075176/the-isomorphism-between-two-complete-ordered-fields-is-unique"
} |
special-relativity, kinematics, vectors, inertial-frames, lorentz-symmetry
\gamma \frac{\mathrm{d}x'^{3}}{\mathrm{d}t}\\
\end{array} } \right] = \left[ {\begin{array}{ccccc}
\gamma & -\beta \gamma & 0 & 0 \\
-\beta \gamma & \gamma & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array} } \right] \left[ {\begin{array}{ccccc}
\gamma \frac{\mathrm{d}x^{0}}{\mathrm{d}t}\\
\gamma \frac{\mathrm{d}x^{1}}{\mathrm{d}t}\\
\gamma \frac{\mathrm{d}x^{2}}{\mathrm{d}t}\\
\gamma \frac{\mathrm{d}x^{3}}{\mathrm{d}t}\\
\end{array} } \right] $$
I got the right results:
$$\gamma \frac{\mathrm{d}x'^{0}}{\mathrm{d}t} = \gamma \Big[ \gamma \frac{\mathrm{d}x'^{0}}{\mathrm{d}t}\Big] - \gamma \frac{v}{c}\Big[ \gamma\frac{\mathrm{d}x'^{1}}{\mathrm{d}t}\Big]$$
$$\gamma \frac{\mathrm{d}x'^{1}}{\mathrm{d}t} = \gamma \Big[ \gamma \frac{\mathrm{d}x'^{1}}{\mathrm{d}t}\Big] - \gamma \frac{v}{c}\Big[ \frac{\gamma\mathrm{d}x'^{0}}{\mathrm{d}t}\Big]$$ | {
"domain": "physics.stackexchange",
"id": 56956,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, kinematics, vectors, inertial-frames, lorentz-symmetry",
"url": null
} |
particle-physics, experimental-physics, data-analysis
Title: Optimizing Muon Lifetime Data I performed an experiment which measured the lifetime of a muon. The apparatus was very archaic and susceptible to disturbances, which means that there would inevitably be some unreasonable data recorded. I have received two pieces of advice on how to deal with this, either of which I would be happy to implement if I knew how to.
The first method is to cut/trim data points either manually or by using some algorithm. My question for going this way would be how to determine which points to exclude? There could clearly be some outliers such as if there was a point recorded at 1s when the expectation is only 2.2$\mu$s, but this is more difficult to determine as you start to exclude points closer to the expectation.
The second method would be to add some kind of weight to the fit. Here is a figure for reference: | {
"domain": "physics.stackexchange",
"id": 65678,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, experimental-physics, data-analysis",
"url": null
} |
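For the trimming route, one common automatic criterion (a suggestion of mine, not something from the question) is a robust median/MAD cut: unlike a mean/standard-deviation cut, a single absurd point like the 1 s event barely shifts the median or the median absolute deviation, so it cannot mask itself. The sample values below are made up:

```python
import statistics

def mad_clip(data, n_mads=5.0):
    """Drop points more than n_mads median-absolute-deviations from the median."""
    med = statistics.median(data)
    mad = statistics.median([abs(x - med) for x in data])
    if mad == 0:
        return list(data)  # degenerate case: more than half the points identical
    return [x for x in data if abs(x - med) <= n_mads * mad]
```

The threshold `n_mads` plays the role of the "how far from the expectation" judgment call; points near the bulk of the distribution survive, and only gross disturbances are cut.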
java, game, mvc, snake-game
private Deque<Point> snakeBody;
private final Point apple;
private final int width, height, scale;
private int r = 0;
private int g = 255;
private int b = 0;
private final Random random = new Random();
private Color snakeColor = new Color(0, 255, 0);
private Graphics2D g2d;
GamePanel(int width, int height, int scale, Deque<Point> snakeBody, Point apple) {
this.snakeBody = snakeBody;
this.width = width;
this.height = height;
this.scale = scale;
this.apple = apple;
}
@Override
public Dimension getPreferredSize() {
return new Dimension(width, height);
}
@Override
public Color getBackground() {
return Color.black;
}
@Override
public boolean isOpaque() {
return true;
}
/**
* Ensures GUI is painted when the window is moved or hidden.
*/
@Override
public void paintComponent(Graphics g) {
super.paintComponent(g);
g2d = (Graphics2D) g;
g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
paintDots();
paintApple();
paintSnake();
}
public void setSnakeBody(Deque<Point> snakeBody, Point apple) {
this.snakeBody = snakeBody;
} | {
"domain": "codereview.stackexchange",
"id": 31012,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, game, mvc, snake-game",
"url": null
} |
So $y > x$
Let $d-c = m>0$
$z = (a+d)(b+c)=$
$(a + c+m)(b+ d - m)=$
$(a+c)(b+d) + m[b+d - a-c] - m^2=$
$y + m[b-a +m] - m^2 =$
$y + m(b-a) > y$.
So $z > y$
So $z > y > x$.
....
Also there is AM-GM
If $j < k,m; k,m < n$ then $nj > km$ so
$(a+b) < (a+c)$ and $(b+d)< (c+d)$ so $(a+b)(c+d) > (a+c)(b+d)$... but ... I don't know. That doesn't have as "hands on" conviction. (Although its really the same thing. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9802808701643912,
"lm_q1q2_score": 0.8125928521917356,
"lm_q2_score": 0.8289388040954684,
"openwebmath_perplexity": 390.7325885722368,
"openwebmath_score": 0.8908186554908752,
"tags": null,
"url": "https://math.stackexchange.com/questions/2765067/which-is-larger-without-using-examples-of-numbers"
} |
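The claimed ordering is also cheap to stress-test numerically; a quick sketch (assuming, as the argument above does, $0 < a < b < c < d$):

```python
import random

random.seed(1)
for _ in range(10_000):
    # four distinct positive integers in increasing order
    a, b, c, d = sorted(random.sample(range(1, 1000), 4))
    x = (a + b) * (c + d)
    y = (a + c) * (b + d)
    z = (a + d) * (b + c)
    assert x < y < z        # the ordering proved above
print("x < y < z held in all 10000 trials")
```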
algorithms, data-structures, number-theory
I'm interested in an algorithm that uses less memory than the second solution (because $|S| \sim 10^6$) and still has somewhat faster queries than the first solution (the number of queries is also $\sim 10^6$). With the following techniques, I have been able to achieve a speedup of 100,000x over the brute force method of storing the dividends in an array and checking each dividend each time a divisor needs to be checked.
To do this, I use 10-20x as much working space as the brute force method when I am pre-processing the set of dividends. In the end, I use roughly 2-3x as much storage as the brute force method. I believe the working space requirement can be reduced substantially (see below) by the use of more complex algorithms.
To be specific about timing, with $\sim10^6$ dividends and divisors, the brute force method takes $0.02 \mu s$ for each dividend (that's the build) and $16{,}000{,}000 \mu s$ per divisor (that's the query) while my method takes $55 \mu s$ per dividend and the same time per divisor.
To create the structure, first make a trie of the dividends. Treat them as strings of prime factors, biggest factor first. Thus, 650 would be inserted into the trie as the string $\langle 13,5,5,2 \rangle$. Insert not only that string, but also all subsequences. In this case, that means to also insert:
$\langle \rangle$
$\langle 13 \rangle$
$\langle 13,5 \rangle$
$\langle 13,5,5 \rangle$
$\langle 13,5,5,2 \rangle$ (the first sequence)
$\langle 13,5,2 \rangle$
$\langle 13,2 \rangle$
$\langle 5 \rangle$
$\langle 5,5 \rangle$
$\langle 5,5,2 \rangle$
$\langle 5,2 \rangle$
$\langle 2 \rangle$ | {
"domain": "cs.stackexchange",
"id": 6709,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, data-structures, number-theory",
"url": null
} |
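A small Python sketch of the construction described above (the names and the tiny dividend set are mine): each dividend's prime factors are sorted largest-first, every sub-multiset of them is inserted as a trie path, and a divisor query is then a single root-to-node walk.

```python
from itertools import product

def factor_desc(n):
    """Prime factors of n, largest first, e.g. 650 -> (13, 5, 5, 2)."""
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return tuple(sorted(fs, reverse=True))

def submultisets(factors):
    """All sub-multisets of a factor tuple, kept in descending order."""
    primes = sorted(set(factors), reverse=True)
    counts = [factors.count(p) for p in primes]
    for choice in product(*[range(c + 1) for c in counts]):
        yield tuple(p for p, c in zip(primes, choice) for _ in range(c))

def build_trie(dividends):
    root = {}
    for n in dividends:
        for seq in submultisets(factor_desc(n)):
            node = root
            for p in seq:
                node = node.setdefault(p, {})
    return root

def divides_some(trie, d):
    """True iff d divides at least one dividend in the set."""
    node = trie
    for p in factor_desc(d):
        if p not in node:
            return False
        node = node[p]
    return True

trie = build_trie([650, 77])
print(divides_some(trie, 25))   # True: 25 | 650
print(divides_some(trie, 11))   # True: 11 | 77
print(divides_some(trie, 12))   # False
```

For 650 this inserts exactly the 12 sequences listed above. The memory savings the answer describes come from sharing prefixes across $\sim 10^6$ dividends, which this dict-of-dicts trie only approximates.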
Then [1] becomes:
$-\ln\left(\frac{\sqrt{4-x^2}}{2}\right) + C \;=\;-\bigg[\ln(\sqrt{4-x^2}) - \ln(2)\bigg] + C$
$= \;-\ln\left(\sqrt{4-x^2}\right) + \underbrace{\ln(2) + C}_{\text{This is a constant}} \;=\; -\ln\left(\sqrt{4-x^2}\right) + C$
$= \;-\ln\left(4-x^2\right)^{\frac{1}{2}} + C \;=\;-\tfrac{1}{2}\ln\left(4-x^2\right) + C$
3. Originally Posted by fattydq
The problem is the integral of x/(4-x^2), and I'm asked to solve it using a trig substitution despite there being no square root present in the problem, how can I go about doing this?
for the record, if the method of integration was not specified, a regular u-substitution of $u = 4 - x^2$ would take care of this guy with much less difficulty.
4. Originally Posted by Jhevon
for the record, if the method of integration was not specified, a regular u-substitution of $u = 4 - x^2$ would take care of this guy with much less difficulty.
Yep, I actually had to solve it that way first, and then in part 2 I was asked to solve it with a trig substitution, which did indeed prove to be much more painful | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.978384668497878,
"lm_q1q2_score": 0.8215828395874116,
"lm_q2_score": 0.8397339676722393,
"openwebmath_perplexity": 416.49049479716933,
"openwebmath_score": 0.9735257625579834,
"tags": null,
"url": "http://mathhelpforum.com/calculus/82358-solved-solving-indefinite-integral-trig-substitution.html"
} |
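For contrast, the $u$-substitution route mentioned in the thread, written out (standard calculus; the intermediate steps are mine):

```latex
\int \frac{x}{4-x^2}\,dx
  \;\overset{u\,=\,4-x^2}{=}\; -\frac{1}{2}\int \frac{du}{u}
  \;=\; -\frac{1}{2}\ln|u| + C
  \;=\; -\frac{1}{2}\ln\left|4-x^2\right| + C
```

since $du = -2x\,dx$, i.e. $x\,dx = -\tfrac12\,du$; this matches the trig-substitution result above up to the arbitrary constant.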
# Definition of $L^\infty$
It seems like different authors use different definitions for the space $L^\infty$. For example wikipedia starts with bounded functions, and define the seminorm $\lVert f \rVert_\infty$, and take quotients. When defining the seminorm, wikipedia takes the infimum over all $M \geq 0$ such that the set $\left\{x\in X\ |\ f(x)>M\right\}$ is null.
The books by Folland (Real Analysis) or Jones (Lebesgue Integration on Euclidean Space), on the other hand, starts with essentially bounded functions; $f$ is essentially bounded iff there is some $M\geq 0$ that makes $\left\{x\in X\ |\ f(x)>M\right\}$ a null set. Then they define the seminorm, take quotients, just like wikipedia.
The book by Cohn (Measure Theory) starts with bounded functions, but the seminorm differs! Here, the seminorm is given by the infimum over all $M\geq 0$ such that the set $\left\{x\in X\ |\ f(x)>M\right\}$ is locally null. When the given measure is $\sigma$-finite, the concept of locally null and null coincide, so this definition agrees with wikipedia's.
So the definition differs from books to books. There are at least 2*2=4 possibilities I guess;
• Which seminorm do you use? ($\left\{x\in X\ |\ f(x)>M\right\}$ is null? or locally null?...)
• What functions do you start with? (bounded functions? or those with finite seminorms?...)
Do the results concerning the properties of $L^p$ differs significantly when choosing a different definition for $L^\infty$ ? Do these differences affect the theory in a serious manner? Or, is it the case that, at least for familiar spaces such as $\mathbb{R}^n$ they all agree? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9702399060540359,
"lm_q1q2_score": 0.8042695156104973,
"lm_q2_score": 0.8289388125473628,
"openwebmath_perplexity": 237.44065267051593,
"openwebmath_score": 0.9422784447669983,
"tags": null,
"url": "https://math.stackexchange.com/questions/2088761/definition-of-l-infty"
} |
include the exponential term also: The limit of the ratio seems to converge to 1 (the "undefined" in the table is due to the b terms getting so small that the algorithm thinks it is dividing by 0), which we can verify: the limit comparison test says that in this case the two integrals either both converge or both diverge. If $0 \le f(x) \le g(x)$ for $x \ge a$, then $\int_a^\infty f(x)\,dx \le \int_a^\infty g(x)\,dx$. When we cannot evaluate an integral directly, we first try to determine whether it converges or diverges by comparing it to known integrals. Frequently we aren't concerned with the actual value of these integrals. Theorem 1 (Comparison Test). We are covering these tests for definite integrals now because they serve as a model for similar tests that check for convergence of sequences and series -- topics that we will cover in the next chapter. For example, one possible answer is BF, and another one is L. Partial Fractions 32 1. Improper integrals are definite integrals where one or both of the boundaries is at infinity, or where the integrand has a vertical asymptote in the interval of integration. Page 632 Numbers 74. By definition, these integrals can only be used to compute areas of bounded regions. Convergence test: direct comparison test. Remark: Convergence tests determine whether an improper integral converges or diverges. That is, there exists a positive number $M$, independent of the upper limit of integration, such that the partial integrals stay below $M$. If Maple is unable to calculate the limit of the integral, use a comparison test (either by plotting for direct comparison or by limit comparison if applicable). 7. Review of limits at infinity (two small plots, including $y = e^x$, omitted). Use the above comparison test to determine whether $\int_1^\infty \frac{1}{x^2}\,dx$ converges. 5. State the Comparison Test. Solutions; List of Topics that may. That means we need to find a function smaller than $1+e^x$. Give a reasonable "best" comparison function that you use in the comparison (by "best", we mean that the comparison function has known integral convergence properties, and is a reasonable upper or lower bound for the integrand we are evaluating).
Consider the following. These are all improper because the function being integrated is not finite at one of the limits of integration. Examples: $I = \int_1^\infty \frac{dx}{x^p}$, and $I = \int_0^1 \frac{dx}{x^p}$. Convergence test: direct comparison test. 10) | {
"domain": "pacianca.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918516137418,
"lm_q1q2_score": 0.8623082847681207,
"lm_q2_score": 0.8723473879530492,
"openwebmath_perplexity": 642.0331441711976,
"openwebmath_score": 0.9238064289093018,
"tags": null,
"url": "http://pacianca.it/improper-integrals-comparison-test.html"
} |
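A numerical illustration of the direct comparison test (my own example, not from the notes): the partial integrals of $e^{-x^2}$ over $[1,T]$ increase with $T$ but stay below $\int_1^\infty e^{-x}\,dx = 1/e$, because $e^{-x^2} \le e^{-x}$ for $x \ge 1$; increasing and bounded above forces convergence.

```python
import numpy as np

def partial_integral(f, a, b, n=200_000):
    """Trapezoid-rule estimate of the integral of f over [a, b]."""
    x = np.linspace(a, b, n)
    y = f(x)
    return float(0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x)))

f = lambda x: np.exp(-x ** 2)        # no elementary antiderivative
bound = 1.0 / np.e                   # exact value of int_1^inf e^{-x} dx

parts = [partial_integral(f, 1.0, T) for T in (2.0, 5.0, 10.0, 20.0)]
print(all(p < bound for p in parts))   # True: bounded above by 1/e
print(parts == sorted(parts))          # True: increasing in T
```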
fft, lowpass-filter
You are almost done. All you need to do now is take the $N+M-1$ length FFT of your filter, $h[n]$. Take the $N+M-1$ length FFT of your signal, $s[n]$. Multiply them together. Then take the $N+M-1$ length IFFT of this element-by-element product, and voilà, you have filtered your signal. | {
"domain": "dsp.stackexchange",
"id": 1099,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, lowpass-filter",
"url": null
} |
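A minimal NumPy sketch of that recipe (my own toy signal and filter): zero-pad both FFTs to length $N+M-1$ so the circular convolution equals the linear one, then compare against direct convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(128)      # signal, length N = 128
h = np.ones(9) / 9                # FIR low-pass (moving average), M = 9

L = len(s) + len(h) - 1           # N + M - 1 = 136
y_fft = np.fft.ifft(np.fft.fft(s, L) * np.fft.fft(h, L)).real
y_ref = np.convolve(s, h)         # direct linear convolution, same length

print(len(y_fft), np.allclose(y_fft, y_ref))   # 136 True
```

Taking `.real` at the end is safe here because both inputs are real, so any imaginary residue is pure round-off.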
saturn, albedo, enceladus
The second major point is that the geometric albedo is a measure of how well your particular surface reflected the light back to you compared to a reference surface. Imagine now you have two surfaces, one is the surface of Enceladus, the other is your "reference surface". This reference surface is an "idealized" reflector which means that it reflects every photon that hits it (making it have a bond albedo of 1). The key here though, is that it reflects light isotropically. What I mean by that is that the light doesn't get reflected in any preferred direction, but rather gets reflected in all directions equally. So now you have your 100 photons hitting your reference surface and all 100 photons get reflected, but because they're all reflected in random directions, only 10 photons actually get to your camera/eye/detector. What can happen with Enceladus though, is that the surface has just the right properties such that 100 photons hit the surface, and 14 photons get reflected to your detector because the surface preferentially reflects light in a specific direction. It's sending more photons to your detector than the reference surface. The geometric albedo is the ratio of how much light gets reflected to your detector by Enceladus over how much light gets reflected to your detector by the reference surface. In this case, $14/10 = 1.4$.
One small, additional point, is that this definition relies on your detector being in the same direction as your light source. In other words, the geometric albedo is a measure of how much your surface can retro-reflect (i.e., reflect back to the source of the photons) compared to the reference surface. Technically, the geometric albedo attains a maximum value when your surface is made up of retroreflectors. | {
"domain": "astronomy.stackexchange",
"id": 2236,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "saturn, albedo, enceladus",
"url": null
} |
using the arctangent. The same type of strategy can be employed to evaluate any integral in the form by using the substitution $x\to x\sqrt{a}$, or any integral in the form by making the substitution $x\to ax$. But now let's turn our attention to the third example that I proposed: This integral isn't in either of those forms, so how will we evaluate it? Well, the answer lies in another class of vanishing integrals. Recall that I already mentioned that as long as $pq=2$. But this can be generalized further... in fact, the integral where each $c_i$ is any arbitrary constant, as long as each pair of numbers $p_i,q_i$ has a product of $2$, satisfying $p_iq_i=2$ (again, this can be verified by making the substitution $x\to 1/x$). This implies that the integral for any value of $a$. Notice now that the integral we want to evaluate is equal to ...which may not look very promising at first, since it isn't exactly in the form that we've shown to vanish. But once we make the substitution $x\to x\sqrt{2}$, we're in business: | {
"domain": "dyer.me",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9953904285043392,
"lm_q1q2_score": 0.8229010048247074,
"lm_q2_score": 0.8267117919359419,
"openwebmath_perplexity": 167.9743432857637,
"openwebmath_score": 0.9462973475456238,
"tags": null,
"url": "https://franklin.dyer.me/post/65"
} |
matlab, fft, signal-analysis, fourier-series, least-squares
%% The method
N=length(x); %no of data
n=(1:N);
T=g*365*24*60; %time that the data covers
f1=scaleFreq*1/(365*24*60); %frequency of 1 yr oscillations
f2=scaleFreq*1/((365*24*60))/2; %frequency of 1/2 year oscillations
a1=f1*T;
a2=f2*T;
if(doWindow)
window = hanning(numel(x)).';
else
window = ones(size(x));
end
xWin = x.*window;%windowing
xMean = mean(xWin);%calculate dc
xWin = xWin-xMean;%subtract dc
buildFunkCos1 = cos((2*pi*a1*n)/N);
buildFunkCos2 = cos((2*pi*a2*n)/N);
buildFunkSin1 = sin((2*pi*a1*n)/N);
buildFunkSin2 = sin((2*pi*a2*n)/N);
A1=(1/N)*sum(xWin.*buildFunkCos1)
A2=(1/N)*sum(xWin.*buildFunkCos2)
B1=(1/N)*sum(xWin.*buildFunkSin1)
B2=(1/N)*sum(xWin.*buildFunkSin2)
if(doWindow)%do correction due to windowing
A1 = 2*A1;
A2 = 2*A2;
B1 = 2*B1;
B2 = 2*B2;
end
C1=2*sqrt(A1^2+B1^2)%amplitude
C2=2*sqrt(A2^2+B2^2)
fi1=atan(B1/A1)%phase shift
if(A1<0)
fi1 = fi1 + pi;
end
fi2=atan(B2/A2)
if(A2<0)
fi2 = fi2 + pi;
end | {
"domain": "dsp.stackexchange",
"id": 1033,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, fft, signal-analysis, fourier-series, least-squares",
"url": null
} |
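A Python/NumPy analogue of the projection step in the MATLAB above, for the clean case of an integer number of cycles in the window and no windowing (the toy signal is mine): project onto cosine and sine at the known frequency, then recover amplitude and phase just as the `C1`/`fi1` computations do.

```python
import numpy as np

N = 1000
n = np.arange(N)
k = 7                                  # integer number of cycles in the window
theta = 2 * np.pi * k * n / N
x = 3.0 * np.cos(theta - 0.5)          # amplitude 3, phase 0.5

# Projection onto cos/sin at the known frequency (as in A1, B1 above)
A = (1 / N) * np.sum(x * np.cos(theta))
B = (1 / N) * np.sum(x * np.sin(theta))

C = 2 * np.sqrt(A ** 2 + B ** 2)       # recovered amplitude, like C1
phi = np.arctan2(B, A)                 # recovered phase
print(round(C, 6), round(phi, 6))      # 3.0 0.5
```

`np.arctan2` resolves the quadrant directly, which is what the `if(A1<0) fi1 = fi1 + pi` branches in the MATLAB approximate.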
c++, calculator, physics
cin >> V2;
cout << "What is the value of n1?" << endl;
cin >> n1;
n2 = V2/(V1/n1);
return n2;
} else {
cout << "Please enter a valid option!" << endl;
avogOperation();
}
return 0;
} | {
"domain": "codereview.stackexchange",
"id": 24529,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, calculator, physics",
"url": null
} |
javascript, to-do-list
if (ev.target.classList.contains('todos__item__remove'))
t.remove(ev.target);
});
},
remove: function(btn) {
var item = this.get(btn.getAttribute('data-id'));
item.parentNode.removeChild(item);
this.updateDone(--this.done);
if (!--this.count) this.$summary.style.display = 'none';
this.updateCount();
},
check: function(btn) {
var item = this.get(btn.getAttribute('data-id'));
if (item.classList.contains('todos__item--completed')) {
btn.classList.remove('ion-ios-checkmark-outline');
btn.classList.add('ion-ios-circle-outline');
item.classList.remove('todos__item--completed');
this.updateDone(--this.done);
} else {
btn.classList.add('ion-ios-checkmark-outline');
btn.classList.remove('ion-ios-circle-outline');
item.classList.add('todos__item--completed');
this.updateDone(++this.done);
}
},
get: function(id) {
return document.getElementById('todos__item-' + id);
},
addItem: function(name) {
this.renderEl(name);
this.updateCount(this.count);
this.$summary.style.display = 'block';
},
updateCount: function(count) {
if (typeof count == 'undefined') count = this.count;
this.$count.innerText = count;
},
updateDone: function(doneCount) {
if (doneCount < 0) this.done = doneCount = 0;
this.$done.innerText = doneCount;
}, | {
"domain": "codereview.stackexchange",
"id": 18496,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, to-do-list",
"url": null
} |
So this essentially means $$f$$ has a roughly "parabolic down" shape. We want to ensure $$f(x) \le 0$$ whenever $$x \ge 3$$. We can, in fact, do even better. When is $$f(x) = 0$$? Checking the graph suggests it's about $$2.3$$; checking the easier $$x=2.5$$, for instance, we see $$f(x) < 0$$ there ($$f(2.5) \approx -0.51$$). And of course you can check $$f(2)$$ to see $$f(2) = 1 > 0$$, which ensures that $$f(x) = 0$$ for some $$x \in (2,2.5)$$ by the intermediate value theorem.
Since $$f'(x) < 0$$ for $$x \gtrsim 1.56$$, we're ensured there will be no zeroes $$x \gtrsim 1.56$$ as well. (After all, $$f$$ is continuous and differentiable on its domain, and its derivative has only the one real root. Being able to become positive again and violate the inequality would require that there be a "turning point" where $$f'(x)=0$$, or that $$f$$ suddenly "jumps" to above the $$x$$-axis.)
Thus, we know $$f(x) := x - (x-1)^e \le 0$$ whenever $$x \ge 2.5$$. We can return to our original inequality by reversing our steps: bring the $$(x-1)^e$$ to the other side, then take the logarithm of each side twice. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9830850872288504,
"lm_q1q2_score": 0.8336244633777384,
"lm_q2_score": 0.847967764140929,
"openwebmath_perplexity": 223.38519373523576,
"openwebmath_score": 0.9013097882270813,
"tags": null,
"url": "https://math.stackexchange.com/questions/3784551/proving-that-for-all-x-geq-3-log-log-x-leq-log-logx-1-1"
} |
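The sign claims are cheap to check numerically (my own quick sketch): $f(2) > 0$, $f(2.5) \approx -0.51$, and $f$ stays nonpositive on a large sample of $x \ge 2.5$.

```python
import numpy as np

f = lambda x: x - (x - 1) ** np.e

print(round(f(2.0), 2), round(f(2.5), 2))   # 1.0 -0.51
xs = np.linspace(2.5, 1e6, 200_000)
print(bool(np.all(f(xs) <= 0)))             # True
```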
steel, springs
Title: Spring degradation in hand grip trainer Consider a spring like this (this one is a CoC hand gripper):
They claim on their web site, that the spring will not degrade during time:
Do your grippers get weaker with time?
No, so don’t worry if your Captains of Crush No. 1 gripper is feeling easier to you now—it’s because you’re getting stronger! What is mistakenly called “seasoning” by some people is actually a weakening with use, and it’s a reflection of an under-designed gripper with a spring that is bending, which is why it’s not uncommon for low-quality grippers to get narrower and easier as they are used. On the other hand, Captains of Crush grippers hold their line for a lifetime of steady training.
There is also a lot of discussion on this topic, referencing some re-measurements, but none of that seems really trustable.
My questions, from engineering point of view:
Is that possible/probable? I mean, can the degradation be small enough (for the given brand) that the user should not care? How likely is it to be true?
Is it possible, as they claim, that "low-quality grippers" degrade significantly over time? How probable is that?
"domain": "engineering.stackexchange",
"id": 3054,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "steel, springs",
"url": null
} |
In other words, both conditional distributions are familiar. Using the Gibbs sampler, we construct the following iterations:
1. Sample $Y_i\sim Beta(a+X_i,b+n-X_i)$
2. Sample $X_{i+1}\sim Binomial(n,Y_i)$
Fix $a=2,b=5,n=30$, and then we obtain the $X$ samples under different $i$.
n = 30
a = 2
b = 5
sample_y = function(x, a, b, n) rbeta(length(x), a + x, b + n - x)
sample_x = function(y, a, b, n) rbinom(length(y), size = n, prob = y)
gibbs = function(x0, niter, a, b, n)
{
x = x0
for(i in 1:niter)
{
y = sample_y(x, a, b, n)
x = sample_x(y, a, b, n)
}
list(x = x, y = y)
}
set.seed(123)
x0 = rbinom(10000, size = n, prob = 0.5)
res1 = gibbs(x0, niter = 1, a = a, b = b, n = n)
res10 = gibbs(x0, niter = 10, a = a, b = b, n = n)
res100 = gibbs(x0, niter = 100, a = a, b = b, n = n)
In order to study the performance of the Gibbs sampler, we use the obtained samples to approximate the density function of $X$ in a specific iteration, and then compare it with the true density. Below shows the result under $i=1$, $i=10$, and $i=100$: | {
"domain": "statr.me",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9873750536904564,
"lm_q1q2_score": 0.8118090454966597,
"lm_q2_score": 0.8221891392358015,
"openwebmath_perplexity": 851.6283879447415,
"openwebmath_score": 0.7547512650489807,
"tags": null,
"url": "https://statr.me/2019/12/mcmc-notes-1/"
} |
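For readers outside R, the same two-step sampler in NumPy (a sketch of the iteration above, not the original code). The marginal of $X$ is Beta-Binomial$(n, a, b)$, whose mean is $na/(a+b) \approx 8.57$, so the pooled chains should land near that value.

```python
import numpy as np

rng = np.random.default_rng(123)
n, a, b = 30, 2, 5

def gibbs(x0, niter):
    """Run niter Gibbs sweeps on a vector of independent chains."""
    x = x0.copy()
    for _ in range(niter):
        y = rng.beta(a + x, b + n - x)   # Y | X ~ Beta(a + X, b + n - X)
        x = rng.binomial(n, y)           # X | Y ~ Binomial(n, Y)
    return x

x0 = rng.binomial(n, 0.5, size=10_000)   # arbitrary starting chains
x = gibbs(x0, 100)
print(round(float(x.mean()), 1))         # should be close to 30*2/7 ~ 8.57
```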
sum of the maximum sum path to reach from beginning of any array to end of any of the two arrays. Top Forums Shell Programming and Scripting Sum elements of 2 arrays excluding labels Post 303015114 by Don Cragun on Wednesday 28th of March 2018 06:34:58 AM. An index value of a Java two dimensional array starts at 0 and ends at n-1 where n is the size of a row or column. Given input array be,. In this solution dp* stores the maximum among all the sum of all the sub arrays ending at index i. Once the type of a variable is declared, it can only store a value belonging to this particular type. log10(a) Logarithm, base 10. This function subtracts when negative numbers are used in the arguments. (2-D maximum-sum subarray) (30 points) In the 2-D Maximum-Sum Subarray Prob- lem, you are given a two-dimensional m x n array A[1 : m,1: n of positive and negative numbers, and you are asked to find a subarray Ala b,c: 1 Show transcribed image text Expert Answer. C Program to read an array of 10 integer and find sum of all even numbers. if 2,3,4, 5 is the given array, {4,5,2,3} is also a possible array like other two. Our maximum subset sum is. SemanticSpace Technologies Ltd interview question: There is an integer array consisting positive numbers only. If any element is greater than max you replace max with. Enables ragged arrays. Array-2, Part I ”. I haven't gotten that far yet, I'm stuck just trying to print my two arrays, every time i try to print the first array it gives me the elements of the second array and it. All arrays must have the same number of rows and columns. i* n/2 – Or overlaps both halfs: i* n/2 j* • We can compute the best subarray of the first two types with recursive calls on the left and right half. Then we compare it with the other array elements one by one, if any element is greater than our assumed. I need to check an array of random integers (between 1 and 9) and see if any combination of them will add up to 10. The maximum product is formed by the | {
"domain": "lampedusasiamonoi.it",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.97594644290792,
"lm_q1q2_score": 0.8963875270376713,
"lm_q2_score": 0.9184802440252811,
"openwebmath_perplexity": 681.2426012969111,
"openwebmath_score": 0.3012574017047882,
"tags": null,
"url": "http://lampedusasiamonoi.it/yvgr/max-sum-of-2-arrays.html"
} |
If A is square, symmetric, and positive definite, then its eigenvalue and singular value decompositions are the same. But, as A departs from symmetry and positive definiteness, the difference between the two decompositions increases. In particular, the singular value decomposition of a real matrix is always real, but the eigenvalue decomposition of a real, nonsymmetric matrix might be complex.
For the example matrix
```A = [9 4; 6 8; 2 7];```
the full singular value decomposition is
```
[U,S,V] = svd(A)

U =
   -0.6105    0.7174    0.3355
   -0.6646   -0.2336   -0.7098
   -0.4308   -0.6563    0.6194

S =
   14.9359         0
         0    5.1883
         0         0

V =
   -0.6925    0.7214
   -0.7214   -0.6925
```
You can verify that `U*S*V'` is equal to `A` to within round-off error. For this small problem, the economy size decomposition is only slightly smaller.
```
[U,S,V] = svd(A,"econ")

U =
   -0.6105    0.7174
   -0.6646   -0.2336
   -0.4308   -0.6563

S =
   14.9359         0
         0    5.1883

V =
   -0.6925    0.7214
   -0.7214   -0.6925
```
Again, `U*S*V'` is equal to `A` to within round-off error.
### Batched SVD Calculation
If you need to decompose a large collection of matrices that have the same size, it is inefficient to perform all of the decompositions in a loop with `svd`. Instead, you can concatenate all of the matrices into a multidimensional array and use `pagesvd` to perform singular value decompositions on all of the array pages with a single function call. | {
"domain": "mathworks.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918529698321,
"lm_q1q2_score": 0.8300701837237089,
"lm_q2_score": 0.8397339656668287,
"openwebmath_perplexity": 343.3251088384055,
"openwebmath_score": 0.8030108213424683,
"tags": null,
"url": "https://ch.mathworks.com/help/matlab/math/singular-values.html"
} |
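The same decompositions in NumPy, for comparison (my translation, not from the MATLAB documentation): `full_matrices` toggles full versus economy size, and `np.linalg.svd` already works page-wise on stacked arrays, similar in spirit to `pagesvd`.

```python
import numpy as np

A = np.array([[9, 4], [6, 8], [2, 7]], dtype=float)

# Full SVD: U is 3x3, V is 2x2, two singular values
U, s, Vt = np.linalg.svd(A, full_matrices=True)
print(U.shape, Vt.shape)                 # (3, 3) (2, 2)
print(np.round(s, 4))                    # the 14.9359 and 5.1883 above

# Economy-size SVD: U shrinks to 3x2, matching svd(A,"econ")
Ur, sr, Vtr = np.linalg.svd(A, full_matrices=False)
print(Ur.shape)                          # (3, 2)
print(np.allclose(Ur @ np.diag(sr) @ Vtr, A))   # True: reconstruction

# Batched SVD, analogous to pagesvd: decompose a stack in one call
batch = np.stack([A, 2.0 * A])           # two 3x2 "pages"
Ub, sb, Vtb = np.linalg.svd(batch, full_matrices=False)
print(sb.shape)                          # (2, 2): one row of values per page
```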
java, beginner, api, collections, client
Title: Get application credentials from linked array obtained from Cloud Foundry API I am new to Java and I use the following code to retrieve parameters
from a linked hash map from the Cloud Foundry API. My question is if there is a better way to do it in Java?
public HashMap<String, String> getUserData(String Url, String org, String space, String app) throws MalformedURLException {
URL endpoint = new URL(Url);
CloudFoundryOperations ops = sm.getCurrentUserCfClient(endpoint, org, space);
GetApplicationEnvironmentsRequest request = GetApplicationEnvironmentsRequest.builder()
.name(app)
.build();
ApplicationEnvironments environments = ops.applications()
.getEnvironments(request)
.block();
Map<String, Object> mapSystemProvided = environments.getSystemProvided();
LinkedHashMap<String, Object> SERVICES = (LinkedHashMap<String, Object>) mapSystemProvided.get("SERVICES");
ArrayList<Object> apps = (ArrayList<Object>) SERVICES.get("apps");
LinkedHashMap<String, Object> appsList = (LinkedHashMap<String, Object>) apps.get(0);
LinkedHashMap<String, Object> credentials = (LinkedHashMap<String, Object>) appsList.get("credentials");
HashMap<String, String> parameters = new HashMap<String, String>();
parameters.put(ID, credentials.get(ID).toString());
parameters.put(DEFAULTS, credentials.get(DEFAULTS).toString());
return parameters;
Consider breaking this into smaller, reusable methods. E.g.
public Map<String, Object> getCredentials(String url, String org, String space, String app)
throws MalformedURLException {
GetApplicationEnvironmentsRequest request = GetApplicationEnvironmentsRequest.builder()
.name(app)
.build(); | {
"domain": "codereview.stackexchange",
"id": 25078,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, api, collections, client",
"url": null
} |
quantum-mechanics, wavefunction, atomic-physics, orbitals
As to your final question, it's actually ill-posed, and it depends sensitively on what you mean by "be in a state".
If you mean that the system's quantum state is the given one, then obviously not - the wavefunctions are different.
If you mean whether you can form a superposition of the two states, then yes, this is perfectly possible (but in this instance not normally helpful).
If you mean that measuring in the given basis will give a nonzero result, then yes: the $p_x$ orbital has nonzero support on $p_+$, and vice versa. | {
"domain": "physics.stackexchange",
"id": 47208,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, wavefunction, atomic-physics, orbitals",
"url": null
} |
symmetry, boundary-conditions, elasticity, continuum-mechanics, stress-strain
I am trying to understand the boundary conditions at $r_{max}$ and $z_{max}$. It isn't very clear to me. In the following, I will write down how I understand the system. Please correct me where I'm wrong.
External radial boundary condition
At the external radial boundary, the stress vector is equal to zero. Mathematically, this can be written as:
$$ \vec P_n(r_{max}, \theta, z) =
(\sigma _{rr}n_r + \sigma _{r \theta}n_{\theta} + \sigma _{rz}n_z)\vec r_0 +
(\sigma _{r \theta}n_r + \sigma _{\theta \theta}n_{\theta} + \sigma _{\theta z}n_z)\vec \theta _0 +
(\sigma _{rz}n_r + \sigma _{\theta z}n_{\theta} + \sigma _{zz}n_z)\vec z_0 = 0 \tag 1$$
Also, the longitudinal and angular component of the unit normal vector of that surface is equal to zero, while the radial component is equal to one:
$$\vec n = (n_r, n_{\theta}, n_z) = (1, 0, 0) \tag 2$$
Since the problem is axially symmetric, all terms containing derivatives with respect to the angular coordinate are zero and the angular displacement is zero. Because the displacement terms of $\sigma _{r \theta}$ and $\sigma _{\theta z}$ have these properties, both of these shear stresses are equal to zero. So now, the stress vector can be written as:
$$ \vec P_n(r_{max}, \theta, z) = \sigma _{rr} \vec r_0 + \sigma _{rz} \vec z_0 = 0 \tag 3$$
Here is an illustration of the vectors: | {
"domain": "physics.stackexchange",
"id": 95107,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "symmetry, boundary-conditions, elasticity, continuum-mechanics, stress-strain",
"url": null
} |
computer-architecture, cpu-cache, cpu-pipelines
Title: What does the processor do while waiting for a main memory fetch Assuming L1 and L2 cache requests result in misses, does the processor stall until main memory has been accessed?
I heard about the idea of switching to another thread; if so, what is used to wake up the stalled thread? Memory latency is one of the fundamental problems studied in computer architecture research.
Speculative Execution
Speculative execution with out-of-order instruction issue is often able to find useful work to do to fill the latency during an L1 cache hit, but usually runs out of useful work after 10 or 20 cycles or so. There have been several attempts to increase the amount of work that can be done during a long-latency miss. One idea was to try to do value prediction (Lipasti, Wilkerson and Shen, (ASPLOS-VII):138-147, 1996). This idea was very fashionable in academic architecture research circles for a while but seems not to work in practice. A last-gasp attempt to save value prediction from the dustbin of history was runahead execution (Mutlu, Stark, Wilkerson, and Patt (HPCA-9):129, 2003). In runahead execution you recognize that your value predictions are going to be wrong, but speculatively execute anyway and then throw out all the work based on the prediction, on the theory that you'll at least start some prefetches for what would otherwise be L2 cache misses. It turns out that runahead wastes so much energy that it just isn't worth it. | {
"domain": "cs.stackexchange",
"id": 3240,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computer-architecture, cpu-cache, cpu-pipelines",
"url": null
} |
3 Let $w\in S^*$. Since $(S^*)^*$ contains all concatenations of any finite number of words from $S^*$, it certainly contains $w$, which is a concatenation of just one word from $S^*$. So $S^*\subseteq(S^*)^*$. Conversely, let $w\in(S^*)^*$. By definition we have $$w=w_1w_2\cdots w_n$$ for some $n\ge0$, where each $w_j$ is in $S^*$. By definition again, ...
3 Suppose that I’ve a one-state machine $E_a$ that recognizes $a^*$ and another, $E_b$, that recognizes $b^*$; combining them in the second way would result in a one-state machine that recognized $(a\lor b)^*$, not $a^*\lor b^*$. This violates both (2) and (3), but you can use similar ideas to show that violating either of them individually can produce ...
3 There are algorithms for converting a DFA to a regular expression, but with a simple DFA like this one can do it in ad hoc fashion. I’ll illustrate this in some detail as a model for thinking about such problems. The key here is what has to happen if you reach $q_1$. Suppose that you reach $q_1$; you’ll have to get back to $q_0$, but before that you can ...
3 Let $A = R_{1}$ and $B = R_{2}$. Let $M(A)$ and $M(B)$ be the finite state machines accepting $A$ and $B$ respectively. Define $M(A \times B)$ to be the machine with states $Q_{A} \times Q_{B}$, transition function $\delta_{A} \times \delta_{B}$ and final states $F_{A} \times F_{B}$. Argue that this machine captures $A \times B$ exactly. Without loss of ...
3 (a) The requirement boils down to saying that once you get $xx$, you can never get a $y$. Your regular expression generates only valid strings, but it doesn’t generate all valid strings. For instance, it doesn’t generate $xyxx$. You want something like
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692298333416,
"lm_q1q2_score": 0.8117518363481158,
"lm_q2_score": 0.8311430436757312,
"openwebmath_perplexity": 230.7564990045687,
"openwebmath_score": 0.9002066850662231,
"tags": null,
"url": "http://math.stackexchange.com/tags/regular-expressions/hot"
} |
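The product construction sketched in the fourth answer above (states $Q_A \times Q_B$, componentwise transitions, final states $F_A \times F_B$, which yields the intersection of the two languages) can be illustrated in a few lines of Python. The dictionary encoding of a DFA and the two example machines are my own assumptions for the sketch, not part of the original answer.

```python
def product_dfa(A, B):
    """Product construction: run DFAs A and B in lockstep.

    With finals = F_A x F_B, the product accepts the intersection
    of the two languages."""
    states = [(p, q) for p in A["states"] for q in B["states"]]
    delta = {((p, q), s): (A["delta"][(p, s)], B["delta"][(q, s)])
             for (p, q) in states for s in A["alphabet"]}
    finals = {(p, q) for (p, q) in states
              if p in A["finals"] and q in B["finals"]}
    return {"states": states, "alphabet": A["alphabet"], "delta": delta,
            "start": (A["start"], B["start"]), "finals": finals}

def accepts(M, word):
    state = M["start"]
    for sym in word:
        state = M["delta"][(state, sym)]
    return state in M["finals"]

# Example machines (my own): A accepts an even number of a's,
# B accepts words ending in b, both over the alphabet {a, b}.
A = {"states": {0, 1}, "alphabet": "ab",
     "delta": {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1},
     "start": 0, "finals": {0}}
B = {"states": {0, 1}, "alphabet": "ab",
     "delta": {(0, "a"): 0, (1, "a"): 0, (0, "b"): 1, (1, "b"): 1},
     "start": 0, "finals": {1}}

P = product_dfa(A, B)
print(accepts(P, "aab"))   # True: two a's and ends in b
print(accepts(P, "ab"))    # False: odd number of a's
```

The same construction with `finals = {(p, q) | p in F_A or q in F_B}` would give the union instead.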
signal-analysis, frequency-spectrum, decimation
To test and validate against these effects (regardless of final frequency plan and bandwidth used), consider using a single tone at the frequency of interest and measure the “conversion gain” from input to output. Then put this tone at the image frequency locations and measure the image suppression. For example, to confirm the -6 KHz balance try a tone just at 6.25KHz and see how much appears at 0.75 KHz in the output as it should versus how much appears at 0.25 KHz which is due to quadrature amplitude and phase imbalance in either of the two multiplier stages. (I suspect this should be quite good if done digitally but worth confirming). Then put a tone just at 4.75KHz to test the filter rejection. This will end up at -1.25 in the middle stage which will be rejected by the low pass filter (to some degree) and whatever is not rejected will shift to -0.75KHz after the second stage multiplier, and when you take the real of that will also be at +/-0.75KHz. (From this we also see as currently done, the appropriate filtering as we approach the 5.5 KHz edge is not feasible).
The best place to do decimation is after the filters and prior to translating to a higher frequency. Although depending on decimation ratios and approach it may not make any difference. To decimate properly, understand where all the aliasing frequency bands are given the sampling rate and decimation ratio and ensure there is sufficient rejection in these zones specifically. See this post for further details on aliasing bands with decimation.
If the operations in the above spectrum plots are confusing, this post provides further detailed insight into the frequency translation of complex signals. | {
"domain": "dsp.stackexchange",
"id": 11100,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "signal-analysis, frequency-spectrum, decimation",
"url": null
} |
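The advice above about knowing where the aliasing bands land before decimating can be checked numerically. This sketch uses my own toy rate plan (48 kHz decimated by 4), not the 6.25 kHz / 0.75 kHz numbers from the answer: a tone above the post-decimation Nyquist folds to a predictable frequency when no anti-alias filter is applied.

```python
import numpy as np

fs, M = 48_000, 4            # original rate and decimation factor (toy values)
fs_dec = fs // M             # 12 kHz after decimation, Nyquist at 6 kHz
f_tone = 7_000               # above the new Nyquist, so it will alias

t = np.arange(0, 0.1, 1 / fs)            # 0.1 s of signal
x = np.cos(2 * np.pi * f_tone * t)

y = x[::M]                               # decimate with NO anti-alias filter
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs_dec)
peak = freqs[np.argmax(spec)]            # where the tone actually shows up

# predicted fold: wrap f_tone into (-fs_dec/2, fs_dec/2], take the magnitude
pred = abs((f_tone + fs_dec / 2) % fs_dec - fs_dec / 2)
print(peak, pred)            # both 5000.0: the 7 kHz tone lands at 5 kHz
```

Running the same check with the tone inside the rejection zones of the intended decimation filter shows directly how much of it leaks into the band of interest.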
quantum-mechanics, energy, variational-principle, eigenvalue
But what happens if I take a trial wave function that cannot be expressed in this basis? For instance, say I have a square well from $0$ to $L$ and I have a test function which is normalized but nonzero at some location with $x>L$. Obviously this is a stupid trial function and it is not a possible solution. I will not be able to find $|E|$ to check if the functional is minimized. But what part of the math above will break down? Where does the variational principle say, for instance, that in addition to being normalizable, the test function must be zero in quantum mechanically disallowed regions? The professor's derivation shows that any trial solution will have energy larger than the ground state energy corresponding to the lowest energy eigenstate. But the fact that the ground state energy is lower than any trial function's can even be treated as a definition. If there exists a lowest energy and a lowest energy wavefunction, of course all other wavefunctions will provide an upper bound on that lowest energy.
The goal is to try out many trial solutions with the aim of getting closer to the ground state energy. Each trial solution provides an upper bound.
Trying out any test wavefunction which is nonzero when $V = \infty$ will add unwelcome energy. By the variational method, you'd simply prefer a lower energy trial function, which you can always achieve by moving that portion of the trial wavefunction to a place where $V$ is finite.
The professor's derivation broke down because you considered an infinite energy wavefunction. The variational principle prevents non-zero $V=\infty$ test functions because (as you pointed out) such functions would be obviously unproductive in forming upper bounds to the ground energy state. | {
"domain": "physics.stackexchange",
"id": 81153,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, energy, variational-principle, eigenvalue",
"url": null
} |
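The claim above, that every trial solution provides an upper bound on the ground-state energy, is easy to check numerically. A minimal sketch (my own example, in units with $\hbar = m = 1$ and well width $L = 1$): the trial function $\psi(x) = x(1-x)$ vanishes at the walls, and its energy expectation lands just above the exact ground-state energy $\pi^2/2$.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 200_001)
dx = x[1] - x[0]

psi = x * (L - x)                    # trial wavefunction, zero at both walls
dpsi = np.gradient(psi, x)           # numerical derivative

# <H> = (1/2) * integral |psi'|^2 dx / integral |psi|^2 dx
# (only the kinetic term contributes inside the well, where V = 0)
e_trial = 0.5 * np.sum(dpsi**2) * dx / (np.sum(psi**2) * dx)

e_ground = np.pi**2 / 2              # exact ground-state energy, about 4.9348
print(e_trial, e_ground)             # e_trial is about 5.0, an upper bound
```

A trial function that spilled past $x = L$, where $V = \infty$, would pick up an infinite potential term in the numerator, which is the quantitative version of the "unwelcome energy" remark above.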
c++, reinventing-the-wheel, c++20
#define BOOST_TEST_MODULE AnyTests
#include <boost/test/included/unit_test.hpp>
#include <boost/test/data/monomorphic.hpp>
#include <boost/test/data/test_case.hpp>
#include "Any.h"
struct A {
int age;
std::string name;
double salary;
A(int age, std::string name, double salary)
: age(age), name(std::move(name)), salary(salary) {}
};
BOOST_AUTO_TEST_CASE(default_constructor_test) {
Any a;
BOOST_CHECK(!a);
BOOST_CHECK(!a.has_value());
}
BOOST_AUTO_TEST_CASE(basic_int_test) {
Any a{7};
BOOST_CHECK_EQUAL(AnyCast<int>(a), 7);
BOOST_CHECK(a.has_value());
}
BOOST_AUTO_TEST_CASE(in_place_type_test) {
Any a(std::in_place_type<A>, 30, "Ada", 1000.25);
BOOST_CHECK_EQUAL(AnyCast<A>(a).age, 30);
BOOST_CHECK_EQUAL(AnyCast<A>(a).name, "Ada");
BOOST_CHECK_EQUAL(AnyCast<A>(a).salary, 1000.25);
}
BOOST_AUTO_TEST_CASE(bad_cast_test) {
Any a{7};
BOOST_CHECK_THROW(AnyCast<float>(a), bad_any_cast);
}
BOOST_AUTO_TEST_CASE(type_change_test) {
Any a{7};
BOOST_CHECK_EQUAL(AnyCast<int>(a), 7);
BOOST_CHECK_THROW(AnyCast<std::string>(a), bad_any_cast);
a = std::string("hi");
BOOST_CHECK_EQUAL(AnyCast<std::string>(a), "hi");
BOOST_CHECK_THROW(AnyCast<int>(a), bad_any_cast);
} | {
"domain": "codereview.stackexchange",
"id": 45017,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, reinventing-the-wheel, c++20",
"url": null
} |
ros, navigation, base-global-planner
Title: following a boustrophedon path in a given room with obstacles
Hi all,
I have a mobile robot which is navigating around a room, I already have the map of the room. I am using the navigation_stack of ROS. I am using rotary encoders for odometry. I am fusing the data from rotary encoders and IMU using robot_pose_ekf. I am using amcl for localization and move_base for planning. Now, I have to write a complete coverage path planning algorithm and I am following this paper and I would like to ask what is the best way to generate the Boustrophedon path (simple forward and backward motions) in a cell (can be rectangular, trapezium, etc.) with no obstacles? If someone can suggest how to implement it in ROS, that will be great.
Update:
In cases like shown here (taken from here):
To come up with divisions in the 2nd or 3rd cell (center top or center bottom), I don't know whether knowing all the corner points will be enough (I might be wrong) or should we have all the boundary points (If yes, I am not sure how exactly to find it). Does anyone have any idea how to generate a boustrophedon path in a cell like this?
Please let me know if you need more information from me. Any help will be appreciated.
Thanks in advance.
Naman Kumar
Originally posted by Naman on ROS Answers with karma: 1464 on 2015-07-13
Post score: 0
This should be pretty simple, just divide one direction of your cell by the coverage width of the robot and create path goals at the start and end of each division along the other direction.
Originally posted by dornhege with karma: 31395 on 2015-07-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 22167,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, navigation, base-global-planner",
"url": null
} |
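The accepted answer's recipe, dividing one direction of the cell by the robot's coverage width and creating path goals at the start and end of each lane, might look like this for an axis-aligned rectangular cell. The function and its parameters are my own illustration, not part of the ROS navigation stack.

```python
import math

def boustrophedon_waypoints(x_min, x_max, y_min, y_max, coverage_width):
    """Forward/backward sweep over a rectangular cell.

    Divide the x-extent into lanes no wider than `coverage_width` and emit
    goals at the start and end of each lane, alternating the y-direction."""
    n_lanes = max(1, math.ceil((x_max - x_min) / coverage_width))
    lane_w = (x_max - x_min) / n_lanes
    waypoints = []
    for i in range(n_lanes):
        x = x_min + (i + 0.5) * lane_w           # lane centerline
        if i % 2 == 0:                           # sweep up, then down, ...
            waypoints += [(x, y_min), (x, y_max)]
        else:
            waypoints += [(x, y_max), (x, y_min)]
    return waypoints

print(boustrophedon_waypoints(0, 2, 0, 3, 1.0))
# [(0.5, 0), (0.5, 3), (1.5, 3), (1.5, 0)]
```

Each consecutive pair of points can then be sent as goals to move_base; for the non-rectangular cells in the update, the same idea applies lane by lane once the lane's y-extent is clipped against the cell boundary.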
Update: John D. Cook has a related post in a different context (probability and statistics).
1. Some additional links for the readers that do not know which consequences such errors can have:
http://en.wikipedia.org/wiki/Ariane_5_Flight_501#Launch_failure
http://en.wikipedia.org/wiki/MIM-104_Patriot#Failure_at_Dhahran
Best regards,
Thomas
1. Thomas, thanks for the links. I was unaware of either example.
2. One minor error in your posting- it's not that floating point addition isn't commutative (it is), but rather that it isn't associative. That is, a+b and b+a will always give you the same answer, but (a+b)+c and a+(b+c) may give different answers due to rounding of the intermediate result.
1. Oops! You are of course correct. (Just heard a knock on the door. I think they've come to rescind my math degrees.)
3. One of the areas of OR where this might matter is concerned with steady state probabilities for queue models. Solving the equations in many cases leads to a recurrence relation p_{n+1}=(some expression)*p_n, and then an explicit formula for p_0.
However, for multiple server queues with limited waiting room, p_0 may be very small, and this made the recurrence subject to numerical errors of the kind discussed. I found that a good way round the problem was to find N such that p_N was maximal, and calculate that explicitly, then use the recurrence to find the other steady state probabilities.
1. David: Thank you for the comment. I taught those recurrence relations in a previous life (not that I recall them precisely), but had very limited experience with applying them, so it had not occurred to me that rounding errors could seriously degrade the results. Working outward from the largest probability is an interesting idea. I'm not sure it would have occurred to me. | {
"domain": "blogspot.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692277960746,
"lm_q1q2_score": 0.8052265950877174,
"lm_q2_score": 0.824461928533133,
"openwebmath_perplexity": 604.5381239424987,
"openwebmath_score": 0.5981199145317078,
"tags": null,
"url": "http://orinanobworld.blogspot.com/2013/08/numerical-analysis-size-matters.html"
} |
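David's trick from the comments, anchoring the recurrence at the index $N$ where $p_N$ is maximal and working outward, can be sketched for an M/M/c/K queue. The model choice and parameters are my own illustration of the idea.

```python
def mmck_steady_state(lam, mu, c, K):
    """Steady-state probabilities p_0..p_K of an M/M/c/K queue.

    Uses the recurrence p_n = a_n * p_{n-1} with a_n = lam / (min(n, c) * mu),
    anchored at the maximal probability to avoid underflowing a tiny p_0."""
    a = [lam / (min(n, c) * mu) for n in range(1, K + 1)]  # a[n-1] = p_n / p_{n-1}
    N = 0
    while N < K and a[N] >= 1.0:   # probabilities grow while the ratio is >= 1
        N += 1
    r = [0.0] * (K + 1)
    r[N] = 1.0                     # anchor the recurrence at the peak
    for n in range(N, 0, -1):      # work downward from the peak ...
        r[n - 1] = r[n] / a[n - 1]
    for n in range(N + 1, K + 1):  # ... and upward
        r[n] = r[n - 1] * a[n - 1]
    total = sum(r)
    return [p / total for p in r]

p = mmck_steady_state(lam=2.0, mu=1.0, c=3, K=5)
print(sum(p))          # about 1.0
print(p[1] / p[0])     # 2.0, matching the direct recurrence lam/mu
```

Starting from `r[N] = 1` keeps every relative probability within a modest dynamic range, whereas starting from `p_0` can underflow for large multi-server systems.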
I assume the answer is no, yet I'm looking for my mistake, and if possible, to elaborate on where exactly I am wrong in here.
Consider a star on $$n-4$$ vertices together with a clique on $$4$$ vertices.
• Yes, I see what the problem is. Would you say that it's correct to assume that $G$ is $3$-colorable iff the induced subgraph among all vertices with degree at least $\sqrt{\left| V\right|}$ can be colored as described by Wigderson's algorithm? Oct 13, 2021 at 11:38
• Actually, I can answer my own question: many cliques of size $4$ which are separate to one another. The graph is not $3$-colorable, but the induced subgraph is empty. Oct 13, 2021 at 11:39 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.972414724168629,
"lm_q1q2_score": 0.8124364892965151,
"lm_q2_score": 0.8354835330070839,
"openwebmath_perplexity": 259.50484815855384,
"openwebmath_score": 0.7309628129005432,
"tags": null,
"url": "https://cs.stackexchange.com/questions/144735/attempting-to-verify-the-colorability-using-wigdersons-algorithm"
} |
quantum-mechanics, operators, semiclassical, matrix-elements, born-oppenheimer-approximation
$$
\vec d_{ij}(R) = \langle \phi_i(r,R)| \nabla_R \phi_j(r,R)\rangle_r
$$
where $\phi_{i/j}(r,R)$ are adiabatic electronic states, i.e. eigenstates of the electronic Hamiltonian within the Born-Oppenheimer approximation. You could use either matrix element, since the physical results would necessarily contain combinations of $O_{fi}$ and $O_{if}$ - either magnitude squared (like in the Fermi golden rule, where the probability of transition is proportional to $|O_{fi}|^2$) or the real/imaginary part of the matrix element (as, e.g., in higher order Fermi golden rules, expressions for the density-of-states, etc.)
This is a nice flashback to the basics of quantum mechanics: the difference between a probability and the probability amplitude, the diagonal and non-diagonal matrix elements of operators, etc. | {
"domain": "physics.stackexchange",
"id": 74061,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, semiclassical, matrix-elements, born-oppenheimer-approximation",
"url": null
} |
python, python-3.x, multithreading
print(f'Current velocities: {self.dict_velocity}')
self.add_pos_to_out()
self.evaluate_all_particles()
self.update_all_particles()
print('All particles go!')
with self.condition_wait:
self.condition_wait.notify_all()
i += 1
threadLock.release()
# time.sleep(0.2)
time.sleep(0.02)
with self.condition_wait:
self.condition_wait.notify_all()
print(f'Current positions: {self.dict_shared_new_position}')
print(f'Error: {self.err_best_g}')
def evaluate_all_particles(self):
for i in range(NUMBER_OF_PARTICLES):
if self.dict_shared_errors[i] < self.err_best_g or self.err_best_g == -1:
self.pos_best_g = list(self.dict_shared_new_position[i])
self.err_best_g = float(self.dict_shared_errors[i])
def update_all_particles(self):
for i in range(NUMBER_OF_PARTICLES):
self.update_velocity(i)
self.update_position(i)
def add_pos_to_out(self):
for i in range(NUMBER_OF_PARTICLES):
self.output_pos[i] = np.vstack((self.output_pos[i], self.dict_shared_new_position[i]))
def update_velocity(self, i):
w = 0.5 # constant inertia weight (how much to weigh the previous velocity)
c1 = 1 # cognitive constant
c2 = 2 # social constant
for j in range(0, NUM_DIMENSIONS):
r1 = random.random()
r2 = random.random()
vel_cognitive = c1 * r1 * (self.dict_best_positions[i][j] - self.dict_shared_new_position[i][j]) | {
"domain": "codereview.stackexchange",
"id": 41526,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, multithreading",
"url": null
} |
homework-and-exercises, forces, acceleration, vectors
Title: What is this vector problem asking to find, the magnitude of the force F or the component of it that goes along the ramp? A shopper pushes a 7.5-kg shopping cart up a 13° incline. Find the magnitude of the horizontal force, F, needed to give the cart an acceleration of 1.41 m/s$^2$.
I ask this because the solution to this problem is $7.5[1.41 + 9.81\sin(13)] = 27.1$, which describes the component of the force along the direction of the ramp.
However, the question explicitly states to find the magnitude of the "horizontal" force, which sounds to me a lot like it's asking for the entire magnitude of the force, not just a component of it. The answer in that case would be, $\sqrt{27.1^2 + (7.5\cdot9.81\cos(13))^2} = 76.7$, would it not? Yet, this answer is not the one shown in the solutions. Use the directions up the ramp and perpendicular to the ramp as the simplest coordinate system.
Find the component of the vertical force of gravity that acts down the ramp.
Find the component of the horizontal force F that acts up the ramp.
Find an expression for the net force up the ramp, and equate to the mass of the cart times the acceleration of the cart up the ramp. Solve for F. | {
"domain": "physics.stackexchange",
"id": 10924,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, forces, acceleration, vectors",
"url": null
} |
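Carrying out the steps of the answer numerically (my own worked arithmetic): the along-ramp balance is $F\cos\theta - mg\sin\theta = ma$, so the quoted 27.1 N is the ramp component $F\cos\theta$, and the magnitude of the horizontal force itself comes out slightly larger.

```python
import math

m, a, g = 7.5, 1.41, 9.81
theta = math.radians(13)

# net push required up the ramp, over and above gravity's down-ramp component
f_along = m * (a + g * math.sin(theta))
# horizontal force whose up-ramp component equals f_along
F = f_along / math.cos(theta)

print(round(f_along, 1))   # 27.1, the textbook's intermediate value
print(round(F, 1))         # 27.8, the magnitude of the horizontal force
```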
newtonian-gravity, symmetry, electric-fields
Title: What is the gravitational/electric/inverse-square field inside a cylinder? I've read from the shell theorem that an inverse-square potential has zero field inside a spherical shell. What about the field inside a cylinder? Are objects inside a long cylinder attracted to the center, or to the sides? Is there a simple analytic form for the field in a (possibly infinite) cylinder?
(Edit: To be more precise I think I should have said that electric charge is evenly distributed over the surface of the cylinder - I think this allows us to use the same result for a gravitational field as well as an electric field. I guess that's not a very realistic assumption for electrical applications, since charge tends to redistribute itself to create a constant potential (at least if the surface is conducting). Also, my original intention was to ask about an open-ended cylinder, but I think it doesn't matter so much if the cylinder is long.) The field inside a hollow infinite cylinder is $0$, just like the field inside a hollow sphere.
This is because of Gauss' law: the flux of the electric field $\vec E$ through any closed surface $S$ is
$$\Phi = \int_S \vec E \cdot d \vec S = \frac Q {\epsilon_0}$$
Where $Q$ is the charge inside the volume enclosed by the surface.
Let $R$ be the radius of our cylinder and let's take a cylindrical surface with radius $r<R$ coaxial to it. Since the charge is all on the surface we will have
$$\Phi = \int_S \vec E \cdot d \vec S = 0$$
Now since the cylinder is infinite the field must be directed in the radial direction for symmetry, so that $\vec E \cdot d \vec S = E d S$, hence
$$\Phi = E \int_S d S = E S = 0$$
Which means that $\vec E$ must be $0$.
The same is true also for the gravitational field.
If the cylinder is not infinite, the last part of the argument is not valid and the field should be 0 only at the exact center of the cylinder if I'm correct. | {
"domain": "physics.stackexchange",
"id": 31134,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-gravity, symmetry, electric-fields",
"url": null
} |
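The Gauss-law argument above can be cross-checked by brute force: model the charged infinite cylinder as many parallel infinite line charges placed on a circle (the field of each line falls off as $1/r$ and points radially), and sum their fields at an interior point. The discretization and numbers are my own illustration.

```python
import math

R, M = 1.0, 2000                       # cylinder radius, number of line charges

def field_at(px, py):
    """Sum the 2-D fields, each proportional to (p - c)/|p - c|^2,
    of M infinite lines placed evenly on the circle of radius R."""
    ex = ey = 0.0
    for k in range(M):
        phi = 2 * math.pi * k / M
        dx, dy = px - R * math.cos(phi), py - R * math.sin(phi)
        r2 = dx * dx + dy * dy
        ex += dx / r2
        ey += dy / r2
    return ex, ey

ex, ey = field_at(0.3, -0.2)           # an off-center interior point
print(math.hypot(ex, ey))              # essentially 0: interior field vanishes

ex, ey = field_at(2.0, 0.0)            # an exterior point for contrast
print(math.hypot(ex, ey))              # about M/2, as if all charge sat on the axis
```

The exterior value matches the other standard Gauss-law result: outside, the cylinder looks like a single line charge on its axis.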
newtonian-mechanics, orbital-motion, planets, celestial-mechanics
Title: What is the second $r$ in this equation for the Two Body Problem? $$r=\frac{\left(r^2\frac{\mathrm d\theta}{\mathrm dt}\right)^2}{\frac{Gm_2^3}{\left(m_1+m_2\right)^2}\left(1+e\cos\theta\right)}$$
I have this equation for the radial distance of a planet from the barycenter. But I don't understand why there is an $r$ on both sides, the booklet from which this originates states that the $r^2\,\mathrm d\theta/\mathrm dt$ is a constant, but what should it represent and how do I obtain this constant? It is the angular momentum per unit mass, $L$
$$
L = r^2\dot\theta = r^2 \frac{{\rm d}\theta}{{\rm d}t}
$$
In a central potential (e.g., Kepler's potential) this is a conserved quantity. If at any point you know the position (${\bf x}$) and velocity (${\bf v}$) of the test mass then you can calculate it as
$$
{\bf L} = {\bf r}\times {\bf v}
$$ | {
"domain": "physics.stackexchange",
"id": 54196,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, orbital-motion, planets, celestial-mechanics",
"url": null
} |
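That $r^2\,\mathrm d\theta/\mathrm dt$ (equivalently $|{\bf r}\times{\bf v}|$) stays constant along the orbit can be verified numerically. A kick-drift-kick (leapfrog) sketch in my own units with $GM = 1$; for a central force the kicks are along ${\bf r}$ and the drift leaves ${\bf r}\times{\bf v}$ unchanged, so this integrator conserves the angular momentum essentially to machine precision.

```python
def kepler_leapfrog(x, y, vx, vy, dt, steps):
    """Integrate 2-D motion under a = -r/|r|^3 and track L = x*vy - y*vx."""
    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3
    L_values = []
    ax, ay = acc(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay     # half kick (along r)
        x += dt * vx;        y += dt * vy            # drift
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay     # half kick
        L_values.append(x * vy - y * vx)
    return L_values

# Start at perihelion of a bound, eccentric orbit (my own initial conditions)
L = kepler_leapfrog(x=1.0, y=0.0, vx=0.0, vy=1.2, dt=0.01, steps=5000)
print(max(L) - min(L))    # tiny: r^2 dtheta/dt is conserved to roundoff
```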
electric-circuits
Title: If the electric potential is location dependent, why do charges lose it when passed through a heavy resistor in a circuit? I really cannot understand this: I know through reading that unlike electric potential energy, which is charge dependent, the electric potential is purely location dependent. For example: If at a point, the electric potential is $5V$, then $1C$ of charge will have $5J$ of energy, and $10C$ of charge will have $50J$ of energy in a particular electric field. So, this is my question:
If electric potential is dependent upon position within the electric field, then why do we say
that upon passing through a heavy resistor (like a bulb) in a circuit,
the charge loses electric potential? Isn't this incorrect? Shouldn't a charge lose energy, instead of potential, because potential is lost constantly as the charge moves through the circuit?
And one more question: When a charge moves through a bulb (or any heavy
resistor), what happens to the electric field?
Suppose, a charge moves through a bulb, loses some energy, and again, due to the electric field, gains some energy back. | {
"domain": "physics.stackexchange",
"id": 28349,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electric-circuits",
"url": null
} |
## 2 Answers
It's the same reason as why $\sqrt{(-1)^2}\neq -1$, no need to involve complex numbers in this case (to complicate our lives). When we consider the square root as a function $\sqrt{\,\cdot\,}\colon\mathbb R_{\geq 0}\to \mathbb R_{\geq 0}$, we define it as a function with property $(\sqrt x)^2=x$. On the other hand, $\sqrt{x^2}\neq x$ in general, it fails whenever $x<0$. We actually have $\sqrt{x^2} = |x|$.
The square root function is defined as being positive. As you likely know, $i^2$ is, by definition, negative.
Given the above, $\sqrt{x}=-3$ would have no solutions.
This is different from solving for $x^2 = 9$. There are two solutions,one of which is $-3$.
• If I had written it as $x^{1/2}=-3$, would it be any different? May 9 '17 at 4:53 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692339078752,
"lm_q1q2_score": 0.8007646027201176,
"lm_q2_score": 0.8198933425148213,
"openwebmath_perplexity": 152.83351355650478,
"openwebmath_score": 0.9530980587005615,
"tags": null,
"url": "https://math.stackexchange.com/questions/2272572/why-does-sqrti4-neq-i2"
} |
waves, interference, diffraction
Title: Single slit diffraction $a\sin\theta=m\lambda$ - why does $m$ have to be an integer? For example, when $a\sin\theta=3\lambda$, we rewrite this as $\frac{a}{6}\sin\theta=\frac{\lambda}{2}$ and explain that the waves from every pair of point source whose path difference is $\frac{a}{6}\sin\theta$ will destructively interfere, leading to a dark fringe.
Since each element of area of the slit opening can be considered as a source
of secondary waves according to Huygens' principle, what if we divided the slit into five strips of equal width, and say that two point sources, whose path difference $\frac{a}{5}\sin\theta$ is equal to $\frac{\lambda}{2}$ will destructively interfere to form a dark fringe, meaning that $a\sin\theta=\frac{5\lambda}{2}$. What is the reason behind why we can't do this? Yes. You may divide the slit into 5 segments. Each segment has a $\lambda /2 $ optical path difference to the next segment, as shown in the following figure.
Thus, the contribution of segment A (to the diffraction amplitude) cancels that of segment B, segment C cancels segment D. But, you still have a contribution from E left.
Therefore, $d \sin\theta = \frac{5}{2} \lambda$ is not a condition for dark fringe. | {
"domain": "physics.stackexchange",
"id": 80802,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, interference, diffraction",
"url": null
} |
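The 'one segment left over' argument can be made quantitative by summing Huygens phasors across the slit. In this sketch (my own normalization, with the edge-to-edge path difference $a\sin\theta$ measured in wavelengths), $a\sin\theta = 3\lambda$ gives complete cancellation, while at $a\sin\theta = \frac{5}{2}\lambda$ the contribution of the fifth segment survives.

```python
import numpy as np

def slit_amplitude(path_diff_in_wavelengths, n_sources=100_000):
    """|sum of Huygens phasors| across the slit, normalized so the
    central maximum (zero path difference) has amplitude 1."""
    x = (np.arange(n_sources) + 0.5) / n_sources   # source positions in [0, 1)
    phase = 2 * np.pi * path_diff_in_wavelengths * x
    return abs(np.exp(1j * phase).mean())

a0 = slit_amplitude(0.0)      # central maximum, = 1 by normalization
a3 = slit_amplitude(3.0)      # a*sin(theta) = 3*lambda: complete cancellation
a25 = slit_amplitude(2.5)     # a*sin(theta) = 5*lambda/2: one fifth survives
print(a3)                     # essentially 0: a true dark fringe
print(a25)                    # about 2/(5*pi) = 0.127, clearly not dark
```

The surviving amplitude $2/(5\pi)$ is exactly what a single uncancelled fifth of the slit contributes, matching the A-cancels-B, C-cancels-D, E-remains picture.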
electricity, electric-circuits
This means that the resistance and therefore the voltage are also time dependent.
However, note the important fact that the right hand side goes to zero for a certain value of $T$.
This means that the bulb will heat up as time passes, but will eventually reach an equilibrium temperature $T^*$.
We can therefore assume that there exists a function $g$ such that $T^* = g(I)$.
At this point note that if we have a constant voltage source instead of a constant current source we would have had
$$ \frac{dT}{dt} \propto \frac{V^2}{R_0(1 + \alpha (T - T_0))} - CT^4 $$
but the point about having an equilibrium temperature and a function $T^* = g(V)$ would still hold.
Comments on the equations in OP
The point is that your equation
$$R = R_0(1 + \alpha g(P))$$
should probably be more like
$$R = R_0(1 + \alpha g(I)) \qquad \text{or} \qquad R = R_0(1 + \alpha g(V)) $$
and this only holds when the system is in equilibrium.
For a light bulb I'd guess that equilibrium is reached within a few seconds, but I don't really know for sure.
Anyway, in the equilibrium case $V$ is some implicit function of $I$ (or vice versa for a constant voltage source) so you can still write
$$\text{unknown function}(IV) = \frac{1}{\alpha} \left( V/I - R_0 \right)$$
as you already did. | {
"domain": "physics.stackexchange",
"id": 22750,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electricity, electric-circuits",
"url": null
} |
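The equilibrium temperature $T^* = g(I)$ discussed above can be found numerically by bisecting on heating minus cooling. All constants below are made-up illustrative values, not real bulb data.

```python
def equilibrium_temperature(I, R0=10.0, alpha=4.5e-3, T0=293.0, C=5e-12,
                            lo=293.0, hi=6000.0, iters=200):
    """Solve I^2 * R0*(1 + alpha*(T - T0)) = C*T^4 for T by bisection.

    All parameter values are toy numbers chosen only so a root exists
    in the bracket [lo, hi]."""
    def net_heating(T):
        return I * I * R0 * (1 + alpha * (T - T0)) - C * T ** 4
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if net_heating(mid) > 0:   # still heating up: equilibrium is hotter
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T_star = equilibrium_temperature(I=1.0)
print(round(T_star))              # about 2056 K with these toy constants
```

Tabulating `equilibrium_temperature` over a range of currents is one way to build the function $g(I)$ that the answer treats abstractly.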
molecular-genetics, homework
Title: Problem involving genetic interactions in yeast I'm having difficulty with the following problem:
In yeasts, genes MEC1 and SGS1 favor survival in response to HU
(hydroxyurea). In the figure below, Δ indicates homozygosis for the
mutant allele that causes loss of function. mec1-100 yeasts are
heterozygous (one MEC1 allel is wild and the other is mutant).
a) What genetic interaction do genes SGS1 and MEC1 maintain?
b) How do you explain the results at the molecular level?
c) How do you explain the difference in phenotype between the homozygous and heterozygous yeasts for the allele that causes loss of
function for the MEC1 gene?
d) Why would you combine the sgs1Δ mutant with mec1-100, instead of combining it with mec1Δ? | {
"domain": "biology.stackexchange",
"id": 11801,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "molecular-genetics, homework",
"url": null
} |
control, robotic-arm
Application program: Hobby robots usually have a "firmware" running on the Arduino which determines their behavior. Robot controllers are more like a PC. They have an operating system and only execute something if somebody writes a program that is called. That program is the application program and is written in a robot manufacturer specific language (e.g. KRL for KUKA, RAPID for ABB; the Java language can be used to program the KUKA iiwa, their lightweight robot arm).
Tech support: one feature that hobby robots do not have is tech support. Industrial robot manufacturers offer courses, hotline telephone support in case there are difficulties in using/programming their robot. Furthermore, industrial robots come with necessary certificates (CE, UL, etc.) for industrial use.
If you would like to build a robot controller which is at least comparable to today's industrial robot controllers, starting from scratch, you would need: | {
"domain": "robotics.stackexchange",
"id": 1342,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "control, robotic-arm",
"url": null
} |
async-await, kotlin
sb.toString()
}
call.respondText("Hello again! Your key is ${key.await()}", ContentType.Text.Plain)
}
}
}
I'm wrapping my potentially expensive synchronous code in async to get a Deferred and then await it.
I think it works because I'm able to do simultaneous request to /hello and other routes.
Is this the correct way to do it? Do I even need to bother, or can I write synchronous code in ktor handlers? Thank you. To start, I don't know Ktor, so what I'm going to tell you applies when Ktor doesn't do additional stuff, which quite probably is not the case.
Coroutines are managers for threads: they tell threads what to do and when to do it.
When you write a coroutine, every time you come across a suspend-keyword, the thread that executes the task asks the coroutine what to do next.
This is great if you have to work with triggers:
A database-call
A with triggers the threads
Another thread that returns something.
Instead of waiting, the coroutine can tell the thread to do something else and, once the trigger is fulfilled, to continue with the suspended task.
In your code, you introduce a side-task by adding the async-keyword.
The next thing you do is telling to wait on this side-task.
This means that apart from adding a suspend-keyword, it does nothing.
So, a coroutine is not for computation, but for waiting. However, coroutines can manage more than one thread. Giving the side-task to a side-thread and waiting for that side-thread to finish is something that coroutines are great for.
Therefore, your code could be better written as:
/**
* A dedicated context for sample "compute-intensive" tasks.
*/
val compute = newFixedThreadPoolContext(4, "compute")
get("/"){
val key = withContext(compute){
//heavy computation
}
call.respond(key.toString())
} | {
"domain": "codereview.stackexchange",
"id": 36944,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "async-await, kotlin",
"url": null
} |
# Divisibility tricks
Based on personal experience, about half of our senior math majors never saw the basic divisibility rules (like adding the digits to check if a number is a multiple of 3 or 9) when they were children. I guess it’s also possible that some of them just forgot the rules, but I find that hard to believe since they’re so simple and math majors are likely to remember these kinds of tricks from grade school. Some of my math majors actually got visibly upset when I taught these rules in my Math 4050 class; they had been part of gifted and talented programs as children and would have really enjoyed learning these tricks when they were younger.
Of course, it’s not my students’ fault that they weren’t taught these tricks, and a major purpose of Math 4050 is addressing deficiencies in my students’ backgrounds so that they will be better prepared to become secondary math teachers in the future.
My guess that the divisibility rules aren’t widely taught any more because of the rise of calculators. When pre-algebra students are taught to factor large integers, it’s no longer necessary for them to pre-check if 3 is a factor to avoid unnecessary long division since the calculator makes it easy to do the division directly. Still, I think that grade-school students are missing out if they never learn these simple mathematical tricks… if for no other reason than to use these tricks to make factoring less dull and more engaging.
# A mathematical magic trick
In case anyone’s wondering, here’s a magic trick that I did my class for future secondary math teachers while dressed as Carnac the Magnificent. I asked my students to pull out a piece of paper, a pen or pencil, and (if they wished) a calculator. Here were the instructions I gave them: | {
"domain": "meangreenmath.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9898303437461133,
"lm_q1q2_score": 0.8115553047004938,
"lm_q2_score": 0.8198933381139645,
"openwebmath_perplexity": 423.3035216612121,
"openwebmath_score": 0.7205690145492554,
"tags": null,
"url": "https://meangreenmath.com/tag/divisibility/page/2/"
} |
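The digit-sum rule mentioned above translates directly into code: a number is divisible by 3 (or by 9) exactly when its digit sum is. A quick sketch that also checks the trick against direct division:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def divisible_by_3(n):
    """Repeatedly add digits; the single-digit result settles divisibility by 3."""
    while n >= 10:
        n = digit_sum(n)
    return n in (0, 3, 6, 9)

# The trick agrees with direct division across a range of numbers
assert all(divisible_by_3(n) == (n % 3 == 0) for n in range(10_000))
print(divisible_by_3(123456))   # True: 1+2+3+4+5+6 = 21, then 2+1 = 3
```

The same loop with `n in (0, 9)` as the final check gives the rule for 9; both work because $10 \equiv 1 \pmod 3$ and $\pmod 9$.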
general-relativity, cosmology, mass, redshift
In special relativity we used to say the mass grows as the velocity gets closer to $c$. Nowadays one says in both SR and GR that the mass of an elementary particle is an invariant, the rest is momentum.
Now for the second, how or why mass energy is not conserved in GR. For energy you already know. For mass, it is conserved for elementary particles, but for composite or macroscopic bound bodies the mass can change because the binding energy, internal momentum, or simplistically gravitational potential energy, can contribute to the mass and the energy in this effective view (the total mass or energy, ignoring the external momentum, of the composite body). Thus, when the two black holes merged in 2015, the mass of the final black hole was 3 solar masses less than the sum of their masses. The 3 solar masses were lost to gravitational radiation.
And in general, GR does not have a covariant measure of the energy of gravitational waves, or of the total energy in spacetime. For highly dynamic spacetimes or events it is not generally conserved. For asymptotically flat spacetimes you can get global (but not local) conservation, i.e., from the flux at infinity (or far enough away, as with us and the black holes).
If you are not completely sure what is meant to be included in the energy and mass, such as in the simple cases I described above, you have to take the full stress-energy tensor as the gravitational source, which includes energy, momentum, angular momentum, and stress. | {
"domain": "physics.stackexchange",
"id": 32785,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, cosmology, mass, redshift",
"url": null
} |
This can be compiled and optimized using GCC as follows.
$ gcc -O3 -fopenmp -o problem-154-omp problem-154-omp.c
When executed on a 16-core machine, we get the following result.
$ time ./problem-154-omp
result: 479742450
real    0m1.487s
This appears to be the fastest solution currently known, according to the forum of solutions on Project Euler. The CPUs on the 16-core machine are pretty weak by modern standards; when running on a single core of a newer Intel Core i7, the result is returned in about 4.7 seconds. | {
"domain": "jasonbhill.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9875683469514965,
"lm_q1q2_score": 0.8508278618124334,
"lm_q2_score": 0.8615382058759129,
"openwebmath_perplexity": 1737.7390952399246,
"openwebmath_score": 0.4279261529445648,
"tags": null,
"url": "http://code.jasonbhill.com/c/project-euler-154/"
} |
algorithms, formal-languages, optimization, context-free, formal-grammars
Title: If you have a smallest grammar approximation, do you immediately have a CFG inference algorithm? The smallest grammar problem is to find the smallest CFG that generates a single given string. So given a finite list of language samples, known to all lie in some CFG, can we, using the (approximated) smallest grammars of each respective sample, compute an approximate smallest CFG for the language?
I want to make a parser generator that automatically detects a grammar given some samples of programming languages. This is extremely optimistic.
The first problem you face is that an approximate smallest grammar will at best help you find strings which occur frequently. It won't distinguish keywords from names, and it won't isolate words. (Note for reference that Lempel-Ziv has a good approximation ratio. Charikar, Lehman, Liu, Panigrahy, Prabhakaran, Sahai, & Shelat (2005) The smallest grammar problem, Information Theory, IEEE Transactions on, 51(7), 2554-2576).
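To make the Lempel-Ziv connection concrete (my own sketch, not from the answer; this is the LZ78 variant, each of whose phrases is "a previously seen phrase plus one new character", i.e. essentially one straight-line grammar rule per phrase):

```python
def lz78_parse(s: str) -> list:
    """Greedy LZ78 factorization of s into distinct phrases."""
    phrases, seen = [], {""}
    cur = ""
    for ch in s:
        if cur + ch in seen:
            cur += ch           # keep extending a known phrase
        else:
            phrases.append(cur + ch)
            seen.add(cur + ch)  # register the new phrase
            cur = ""
    if cur:                     # trailing suffix that matched a known phrase
        phrases.append(cur)
    return phrases

# The phrases always concatenate back to the input string.
s = "aaabbabaabaaabab"
assert "".join(lz78_parse(s)) == s
```

Frequently repeated substrings end up as long phrases, but note how this says nothing about keywords versus names, which is exactly the first problem above.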
The second, related, problem you face is tokenisation. Programming language grammars deal in tokens, not in characters. Just the issue of tabs vs spaces in the samples is likely to mess things up. This might be finessed by making assumptions about tokenisation and then finding approximate smallest grammars over the language of tokens, but be prepared for nasty surprises. E.g. string literals in C# are definitely non-trivial, and you might not have many instances of @-strings from which to infer.
The third problem is that some languages (particularly those designed for code-golf) have unusual tokenisation and so little grammar that the tokenisation is where the real difficulty lies. E.g. identifying the tokenisation of CJam operators is going to be hard. You might do slightly better if instead of working from a grammar per sample you concatenate the samples, separated by a token which doesn't occur in any sample, and then find an approximate shortest grammar for that super-sample. At least that way you might have sufficiently many occurrences of the more common multi-character tokens to pick them out as symbols. | {
"domain": "cs.stackexchange",
"id": 12985,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, formal-languages, optimization, context-free, formal-grammars",
"url": null
} |
structural-engineering, solid-mechanics, machine-design, aircraft-design, fatigue
Title: Infinite life of a material, Fatigue Strength After 10^6 cycles a component is considered to have infinite life. What makes 10^6 cycles a deciding factor for infinite life? Why not 10^8? Studies on fatigue life estimation were first done on steel axles of trains and continued for other steel constructions. The majority of fatigue publications have been based on fatigue estimations for steel, and it has become a "standard" that $10^6$ cycles is "infinite life", because the Stress vs. Number-of-cycles (S-N or Wöhler) curve flattens out after $10^6$ cycles for most steel alloys, with minimal impact on life after more cycles.
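For reference (standard textbook notation, not from the original answer), the finite-life portion of the S-N curve is commonly fit with Basquin's relation

$$\sigma_a = \sigma_f' \left(2N_f\right)^b,$$

where $\sigma_a$ is the stress amplitude, $N_f$ the number of cycles to failure, and $\sigma_f'$, $b$ are material constants; the claimed "infinite life" regime is where this curve flattens out into the endurance limit.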
However, new research in the field has started looking into "Very High Cycle Fatigue" (VHCF), which is beyond the "infinite life" region. See the article by A. Sharma et al. [1] for more info. In short, there is no such thing as "infinite life" for many alloys and more research is required; the $10^6$ rule has only ever been a rule of thumb.
[1] A. Sharma, M. C. Oh, B. Ahn, “Recent advances in very high cycle fatigue behavior of metals and alloys – a review”, Metals, vol. 10, no. 9, pp. 1200, Sep. 2020, doi: 10.3390/met10091200. | {
"domain": "engineering.stackexchange",
"id": 5127,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "structural-engineering, solid-mechanics, machine-design, aircraft-design, fatigue",
"url": null
} |
quantum-mechanics, research-level, adiabatic, berry-pancharatnam-phase
with (I adapted Berry's notation to mine)
$$\mathbf{V}_{n}\left(\mathbf{R}\right)=\Im\left\{ \sum_{m\neq n}\dfrac{\left\langle n\left(\mathbf{R}\right)\right|\nabla_{\mathbf{R}}H\left(\mathbf{R}\right)\left|m\left(\mathbf{R}\right)\right\rangle \times\left\langle m\left(\mathbf{R}\right)\right|\nabla_{\mathbf{R}}H\left(\mathbf{R}\right)\left|n\left(\mathbf{R}\right)\right\rangle }{\left(\varepsilon_{m}\left(\mathbf{R}\right)-\varepsilon_{n}\left(\mathbf{R}\right)\right)^{2}}\right\} $$
for a trajectory along the curve $C$ in the parameter space $\mathbf{R}\left(s\right)$. In particular, Berry defines the adiabatic evolution as following from the Hamiltonian $H\left(\mathbf{R}\left(s\right)\right)$, so a parametric evolution with respect to time $s$. These are eqs.(9) and (10) in the Berry's paper.
Later on (section 3), Berry argues that | {
"domain": "physics.stackexchange",
"id": 26532,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, research-level, adiabatic, berry-pancharatnam-phase",
"url": null
} |
javascript, jquery, html, css
var summableAndResult = {
".pr1": "#result1",
".pr2": "#result2",
".pr3": "#result3",
".pr4": "#result4"
};
function calculateSum ($list) {
var sum = 0;
$list.each(function() {
var value = $(this).text();
// test isNaN only if string is not empty
if (value.length != 0 && !isNaN(value)) {
sum += parseFloat(value);
}
});
return sum;
}
$(function(){ | {
"domain": "codereview.stackexchange",
"id": 21024,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, html, css",
"url": null
} |
python, python-3.x
elif cmd == 'exit':
break
else:
print('''List Of Possible Commands:\
\nraise guess
\nreveal
\nexit
''')
continue
if guess == 'cmd':
continue
if is_not_valid or is_not_one_letter or is_guessed:
wars_rem -= 1
print_war_msg(wars_rem, g_rem, is_not_valid, is_not_one_letter)
else:
l_guessed.append(guess)
if guess in secret_word:
c_letters.append(guess)
print('Good Guess!')
print()
else:
if guess in 'aeiou':
g_rem -= 2
print('You lose two guesses because you guessed a wrong vowel')
else:
g_rem -= 1
if guess not in 'aeiou' :
print('Oops, that letter is not in my word')
print()
if wars_rem == 0:
g_rem -= 1
wars_rem = 3
end_game_msg(g_rem, secret_word)
list_of_words = ['hello', 'say', 'zebra', 'jake', 'man', 'human', 'god', 'fuel', 'car', 'sea', 'cat', 'dog', 'building']
secret_word = random.choice(list_of_words)
hangman(secret_word)
Any recommendations on ways I can improve this?
Avoid mixing import styles
from string import ascii_lowercase
import random
Either way of importing is fine. But try and stick with one style
rather than mixing different ones. So:
from string import ascii_lowercase
from random import choice | {
"domain": "codereview.stackexchange",
"id": 37012,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x",
"url": null
} |
ocean, oceanography, ocean-currents, tides
Let's consider, as a first cut, the two tide-producing forces associated with just the Moon-Earth system. The two are the centrifugal force of Earth about the center of mass of the Moon-Earth system and the gravitational attraction due to the Moon. For the point closest to the Moon:
$$ Force = \frac{GM_1M_2}{(R-r)^2} - \frac{GM_1M_2}{R^2} $$ where $R$ is the distance between Earth and Moon and $r$ is Earth's radius. It gets a bit more complex for other points on the surface because instead of $R$ you have to use $$R \pm r\cos(\mathrm{lat})$$ for the gravitational force.
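Numerically (a rough sketch with approximate constants that I am supplying, not values from the answer), the difference above is tiny compared with the direct lunar pull, and is well approximated by the familiar leading term $2GM_1M_2\,r/R^3$:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M1 = 7.35e22     # mass of the Moon, kg (approximate)
M2 = 1.0         # 1 kg test mass on Earth's surface
R = 3.84e8       # mean Earth-Moon distance, m (approximate)
r = 6.37e6       # Earth's radius, m (approximate)

# Exact tide-raising force at the sub-lunar point, per the formula above:
F_tidal = G * M1 * M2 / (R - r) ** 2 - G * M1 * M2 / R ** 2

# Leading-order approximation for r << R:
F_approx = 2 * G * M1 * M2 * r / R ** 3

# Both come out near 1.1e-6 N per kg, millions of times weaker
# than Earth's own surface gravity.
assert abs(F_tidal - F_approx) / F_tidal < 0.05
```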
Ultimately, it comes down to a set of horizontal forces that tend to concentrate water in two theoretical points: one, the closest to the Moon and two, the farthest from the Moon. An equilibrium state would be reached, called equilibrium tide, that results in an ellipsoid with its two bulges directed toward and away from the Moon.
In practice, this equilibrium ellipsoid does not develop because of the rotation of Earth, the Coriolis acceleration and the fact that the tidal wave feels the ocean bottom and it is subject to friction. Going beyond the equilibrium theory of the tide, the dynamic theory of tides (developed by Euler, Laplace and Bernoulli) includes all these concepts (friction, inertia, Coriolis, land masses,...) and provides a better approximation to the observed tides.
My recommendation for understanding tidal constituents is to use a decent tidal harmonic analysis package like t_tide for Matlab, which is likely the tool that NOAA uses to generate their harmonic constituents in the first place. If you really want to get into the details, then the best option is to go to what is still likely the best source available: the 1973 book "The Analysis of Tides" by G. Godin.
Pawlowicz, R., B. Beardsley, and S. Lentz, "Classical Tidal Harmonic Analysis Including Error Estimates in MATLAB using t_tide", Computers and Geosciences, 28, 929-937 (2002). | {
"domain": "earthscience.stackexchange",
"id": 355,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ocean, oceanography, ocean-currents, tides",
"url": null
} |
linear-bearing
By removing the excess constraint, we appeared to remove the tendency for a perturbation in one air bearing to affect another bearing and then ripple through to the other bearings.
Depending on your typical direction of motion and the flatness of your surface, you may find that a three point bearing system may work better for your system:
You may also want to consider purchasing commercial air-bearing modules and attaching them to a frame (at least for a prototype) rather than attempting to manufacture your own air bearings. That way you can leverage the knowledge and support provided by the air-bearing manufacturer.
One other point is that we used ordinary Polyimide tape on our air bearings. We considered various more permanent methods, but in the end decided that being able to easily remove old, scarred or scored tape and replace it with fresh tape quickly and easily made the most sense for our application. | {
"domain": "robotics.stackexchange",
"id": 151,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "linear-bearing",
"url": null
} |
physical-chemistry, kinetics
This gives me:
$$[\ce{A}]\mathrm e^{(k_\mathrm a+k_\mathrm b)t}=k_\mathrm b[\ce{A}]_0\frac{\mathrm e^{(k_\mathrm a+k_\mathrm b)t}}{k_\mathrm a+k_\mathrm b}+c\tag{6}$$
But this doesn't give me the final result if I put in the boundary conditions. Where have I gone wrong? Third equation looks right to me. That's a non-homogeneous (some would say heterogeneous) first-order differential equation.
I think the heterogeneity is what is giving you problems. Those equations have a general solution which is a sum of a "complementary" solution and a "particular" solution. Don't try to use any boundary conditions until after you have combined particular and complementary solutions.
$$\frac{d[\ce{A}]}{dt}+(k_\mathrm a+k_\mathrm b)[\ce{A}]=k_\mathrm b[\ce{A}]_0 \tag{3}$$
The "complementary solution" is the solution to the corresponding homogeneous equation $\frac{d[\ce{A}]}{dt}+(k_\mathrm a+k_\mathrm b)[\ce{A}]=0$, which is $[\ce{A}]_\mathrm c=K e^{-(k_\mathrm a+k_\mathrm b)t}$, where $K$ is an unknown constant of integration.
The "particular solution" is a polynomial in $t$ of the same degree as the hetereogeneous term, which in this case is a polynomial of degree 0, $[\ce{A}]_\mathrm p=C$, where $C$ is an undetermined constant. We solve for this constant by substitution into the original equation: | {
"domain": "chemistry.stackexchange",
"id": 6881,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, kinetics",
"url": null
} |
ros, remote, teleop, ros-kinetic
ROS_HOSTNAME=localhost (was set by apt-get installation or something else)
teleop was started: roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch
rostopic echo on machine1 does not show anything unless I start teleop on machine 1.
During start of teleop on machine2 this log is printed on the terminal:
marco@ros-pi:~$ roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch
... logging to /home/marco/.ros/log/ec0d656e-0ab1-11e8-adf6-247703a699e0/roslaunch-ros-pi-9040.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://localhost:35055/
SUMMARY
========
PARAMETERS
* /rosdistro: kinetic
* /rosversion: 1.12.12
NODES
/
turtlebot3_teleop_keyboard (turtlebot3_teleop/turtlebot3_teleop_key)
ROS_MASTER_URI=http://192.168.2.121:11311
process[turtlebot3_teleop_keyboard-1]: started with pid [9049]
Control Your Turtlebot3!
---------------------------
Moving around:
w
a s d
x
w/x : increase/decrease linear velocity
a/d : increase/decrease angular velocity
space key, s : force stop
Originally posted by Boregard on ROS Answers with karma: 46 on 2018-02-04
Post score: 0
Original comments
Comment by gvdhoorn on 2018-02-11:\
ROS_HOSTNAME=localhost
This is almost never a good idea. I'd remove it.
(was set by apt-get installation or something else) | {
"domain": "robotics.stackexchange",
"id": 29949,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, remote, teleop, ros-kinetic",
"url": null
} |
particle-physics, experimental-physics, history, antimatter
Title: Positron discovery I was reading about Carl David Anderson's discovery of the positron and I have a question.
I understand why it was classified as a positive electron (due to its curvature direction and the amount of ionized vapor), but one thing I didn't understand was: why did it appear from below the chamber, as opposed to other particles? As an old-time experimental particle physicist, having scanned thousands of (different bubble chamber) pictures, I think the answer lies in this quote from rob's link:
A special feature of Anderson’s experiment was the introduction of a 6 mm lead plate into the chamber, making it possible to decide if a particle was moving upwards or downwards. This was possible because the particle will get a decreased radius of curvature when it penetrates the lead plate and loses energy. Millikan had opposed the introduction of the plate. He deemed it unnecessary because of his conviction that cosmic rays come from above, and nothing originating from the bottom of the chamber would be observed
The lead plate was left in the experiment and Millikan was wrong, because there are secondaries from high energy cosmic rays coming through the earth,mainly due to high energy cosmic neutrinos but also other backgrounds can exist.
Looking at the two curvatures one can see by eye that the curvature at the bottom is smaller than the one at the top; from energy conservation, the track is entering from the bottom. If I were writing the original paper I would also show in parallel another frame where a track entering from the bottom is identified from the ionization as a proton or electron, for contrast. But I suppose that after it was established that the lead plate was crucial for determining direction, they did not see the need to stress the fact. | {
"domain": "physics.stackexchange",
"id": 78477,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, experimental-physics, history, antimatter",
"url": null
} |
c++, c++11
enum Options
{
NONE = 0,
UPPER = 1,
LOWER = 2,
DIGIT = 4,
PUNCTUATION = 8
};
Options usage = NONE;
double minLen = MIN_LENGTH;
double maxLen = MIN_LENGTH * (MAX_LENGTH_OFFSET + 1);
int numOptions = 0;
size_t maxRange = 1;
void CalcMaxRange();
void AddRange(const size_t);
string MakeSubstring(string*, size_t, uniform_int_distribution<int>&, mt19937_64&);
public:
PasswordGenerator(size_t minLength, bool lower, bool upper, bool digit, bool punct);
PasswordGenerator();
~PasswordGenerator();
string Generate();
};
Body
#include "PasswordGenerator.h"
#include <cmath>
#include <functional>
#include <sstream>
#include <algorithm>
using std::bind;
using std::stringstream;
using std::random_shuffle;
using std::random_device;
using std::pow;
const double PasswordGenerator::MAX_LENGTH_OFFSET = .25;
void PasswordGenerator::CalcMaxRange()
{
maxRange = stringsSets.size();
for (auto s : stringsSets)
{
size_t rangeTemp = s.size();
AddRange(rangeTemp);
}
return;
}
void PasswordGenerator::AddRange(const size_t range)
{
if ( range != 0 && maxRange % range != 0)
{
maxRange *= range;
}
}
PasswordGenerator::PasswordGenerator(size_t minLength = MIN_LENGTH, bool lower = true, bool upper = true, bool digit = true, bool punct = true)
{
CalcMaxRange();
if (minLength > MIN_LENGTH)
{
minLen = minLength;
maxLen = minLength * (1 + MAX_LENGTH_OFFSET);
} | {
"domain": "codereview.stackexchange",
"id": 28610,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11",
"url": null
} |
algorithm, object-oriented, game, objective-c, ai
#import "BZGridTools.h"
static const int kNumOrbsPerRow = 9;
@implementation DMMatchFinder {
DMGameBoard *_board;
NSMutableArray *_allPossibleMatches;
}
#pragma mark - Initialization
+(DMMatchFinder *) finderWithBoard:(DMGameBoard *)board {
DMMatchFinder *finder = [[DMMatchFinder alloc]initWithBoard:board];
return finder;
}
-(id) initWithBoard:(DMGameBoard *)board {
self = [super init];
if (self) {
_board = [DMGameBoard boardWithBoard:board];
_allPossibleMatches = [self allPotentialMatches];
}
return self;
}
#pragma mark - Best Match Levels
-(DMMatch *) bruteForceRandomMatch {
DMMatch *bruteForceRandomMatch = nil;
//without the shuffle it will favor the bottom lefthand side of the board for matches
[self shuffleArray:_allPossibleMatches];
for (DMMatch *match in _allPossibleMatches) {
DMBoardEval *boardEval = [[DMBoardEval alloc]initWithOrbCalculator:nil];
boardEval.board = _board;
if ([boardEval swapHasMatchesForPosition:match.firstPosition secondPosition:match.secondPosition]) {
bruteForceRandomMatch = match;
break;
}
}
return bruteForceRandomMatch;
}
-(DMMatch *) bestMatch {
DMMatchBoardEvalPair *bestMatchBoardEvalPair = [self bestMatchBoardEvalPairForPairs:[self allMatchBoardEvalPairsForBoard:_board]];
return bestMatchBoardEvalPair.match;
}
-(DMMatch *) bestMatchTwoForwardFast {
DMMatch *bestMatch = nil;
int highestScore = 0; | {
"domain": "codereview.stackexchange",
"id": 14310,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm, object-oriented, game, objective-c, ai",
"url": null
} |
ros, gazebo-plugin
<collision name="cylinder2_geom">
<geometry>
<cylinder radius="0.3" length="1.0"/>
</geometry>
</collision>
<visual name="cylinder2_vis">
<geometry>
<cylinder radius="0.3" length="1.0"/>
</geometry>
<material script="Gazebo/Grey"/>
</visual>
</link>
</model>
<light type="directional" name="sun" cast_shadows="true">
<origin pose="0 0 10 0 0 0"/>
<diffuse rgba="1.0 1.0 1.0 1"/>
<specular rgba=".1 .1 .1 1"/>
<attenuation range="10" constant="0.8" linear="0.01" quadratic="0.0"/>
<direction xyz="0 .2 -1.0"/>
</light>
</world>
</gazebo>
the problem is that when i type : rostopic echo bumper_states
this error appears on the roslaunch terminal :
Error [ContactSensor.cc:237] Contact Sensor[contact_sensor] has no collision[cylinder1_geom]
one for each clock cycle.
I don't really know where I'm going wrong. I'm so sorry for the silliness of the question, but I'm new to this world, so even simple things are not obvious to me. Thanks a lot in advance.
Originally posted by tommy on ROS Answers with karma: 29 on 2012-10-26
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11523,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo-plugin",
"url": null
} |
ros-lunar
But the robot doesn't move.
Do you know why the robot doesn't move? Well, it moves: when I press the down arrow key, the robot base moves to the ground.
All the source code is in Github.
UPDATE
I get this warnings when I launch Gazebo:
DiffDrive(ns = //): missing <rosDebugLevel> default is na
DiffDrive(ns = //): missing <publishWheelTF> default is false
DiffDrive(ns = //): missing <publishWheelJointState> default is false
DiffDrive(ns = //): missing <wheelAcceleration> default is 0
DiffDrive(ns = //): missing <wheelTorque> default is 5
DiffDrive(ns = //): missing <odometrySource> default is 1
GazeboRosDiffDrive Plugin (ns = ) missing <publishTf>, defaults to 1
This is the rosrun rqt_graph rqt_graph output:
Originally posted by Elric on ROS Answers with karma: 71 on 2018-09-05
Post score: 0
Original comments
Comment by stevemacenski on 2018-09-05:
Try publishing more than once, ie remove --1 and add -r 10 (rate at 10 hz)
These things don't latch unless you tell them to, so just hitting it once probably won't result in motion. It's been a while since I did the turtle demo, but I think you have to hold the key or press it a few times to get moving
The problem was the Gazebo version. All ROS packages were up to date but Gazebo version was 7.0.0.
I added the Gazebo repository following this instructions and then, I updated it to version 7.14.
And now it moves!!!
Originally posted by Elric with karma: 71 on 2018-09-07
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by stevemacenski on 2018-09-07:
You should mark an answer as correct then so it removes from the queue of unanswered questions :) | {
"domain": "robotics.stackexchange",
"id": 31719,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-lunar",
"url": null
} |
let's look at (a):
then (z+k)^2 - z^2 = 2, so
z^2 + 2kz + k^2 - z^2 = 2, that is:
2kz + k^2 = 2.
k divides the LHS, so k divides the RHS, that is k = -1,-2,1, or 2.
if k = -1:
-2z + 1 = 2, the LHS is odd, the RHS is even.
if k = 1:
2z + 1 = 2, same problem. so k must be -2 or 2.
if k = -2:
-4z + 4 = 2, so -2z + 2 = 1, the LHS is even, the RHS is odd.
if k = 2:
4z + 4 = 2, so 2z + 2 = 1, the LHS is again even, and the RHS is odd.
so none of our 4 possibilities actually work, so (a) is impossible.
i leave it to you to show (b) is impossible.
this shows that |x^2 - z^2| cannot be 2, and therefore must be less than 2. so xVz, and we are done.
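A brute-force check over a window of integers supports the argument (my own sketch; it assumes V is the relation given by |x^2 - y^2| ≤ 2 on the integers, with the non-strict inequality noted later in the thread):

```python
def V(x, y):
    """xVy iff |x^2 - y^2| <= 2 (non-strict)."""
    return abs(x * x - y * y) <= 2

nums = range(-30, 31)

# Symmetry and reflexivity are clear; check transitivity by brute force:
assert all(V(x, z)
           for x in nums for y in nums for z in nums
           if V(x, y) and V(y, z))

# The equivalence classes are {n, -n} for |n| >= 2, plus {-1, 0, 1}.
assert all(V(x, y) == (x == y or x == -y or (abs(x) <= 1 and abs(y) <= 1))
           for x in nums for y in nums)
```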
5. ## Re: Equivalence relation
Originally Posted by Deveno
suppose that xVy and yVz.
then |x^2 - y^2| < 2, and |y^2 - z^2| < 2.
The inequalities should be non-strict.
I suggest proving first that xVy iff $x=\pm y$ or $x\pm 1 = y = 0$ or $y\pm 1 = x = 0$.
6. ## Re: Equivalence relation
Originally Posted by emakarov
The inequalities should be non-strict.
I suggest proving first that xVy iff $x=\pm y$ or $x\pm 1 = y = 0$ or $y\pm 1 = x = 0$.
Yes, I had noticed that also. This is a really interesting equivalence relation. It is hard for me to believe that I had never seen it anywhere. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692325496974,
"lm_q1q2_score": 0.8281840216779736,
"lm_q2_score": 0.8479677602988602,
"openwebmath_perplexity": 1686.3792428525328,
"openwebmath_score": 0.7736205458641052,
"tags": null,
"url": "http://mathhelpforum.com/discrete-math/208167-equivalence-relation.html"
} |
...
etc.
I would appreciate it if someone could present such a formula and explain the reasoning behind it. Also, please illustrate how the formula can be applied to the above examples.
Thank you.
• If you have a formula for the first one, you have it for all of them by simply factoring out the correct power of $r$. And this formula is well known. – Mathematician 42 Nov 7 '16 at 0:29
• @Mathematician42 Can you please elaborate? I have the formula $\dfrac{a-ar^n}{1-r}$. – The Pointer Nov 7 '16 at 0:30
• If $|r|<1$, then $\sum_{n\geq 0} ar^n=a\sum_{n\geq 0}r^n=\frac{a}{1-r}$. Now notice that $\sum_{n\geq m}ar^n=ar^m\sum_{n\geq 0}r^{n}$. – Mathematician 42 Nov 7 '16 at 0:32
• @Mathematician42 I do not understand the second part of your explanation. – The Pointer Nov 7 '16 at 0:35
• We have $\sum_{n\geq m}ar^n=ar^m\sum_{n\geq 0}r^{n}=r^m(a\sum_{n\geq 0}r^n)=r^m\frac{a}{1-r}$. If you understand why $\sum_{n\geq 0}r^n=\frac{1}{1-r}$ when $|r|<1$, then there should be no problem in understanding everything else I'm saying. – Mathematician 42 Nov 7 '16 at 0:39
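The formulas traded in these comments are easy to sanity-check numerically (my own sketch, not part of the thread):

```python
a, r = 3.0, 0.5   # arbitrary test values with |r| < 1

def finite_sum(N):
    """Direct sum of a*r^n for n = 0 .. N-1."""
    return sum(a * r ** n for n in range(N))

# Finite formula: a(1 - r^N) / (1 - r)
assert abs(finite_sum(20) - a * (1 - r ** 20) / (1 - r)) < 1e-12

def tail_sum(m, terms=200):
    """Partial sum approximating the infinite tail sum_{n >= m} a r^n."""
    return sum(a * r ** n for n in range(m, m + terms))

# Shifted infinite series: sum_{n >= m} a r^n = r^m * a / (1 - r)
for m in (0, 1, 5):
    assert abs(tail_sum(m) - r ** m * a / (1 - r)) < 1e-12
```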
In general, you have the finite geometric series given by $$\sum\limits_{n=0}^{N-1}ar^n = \frac{a(1-r^N)}{1-r}.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787883738261,
"lm_q1q2_score": 0.8711140796646355,
"lm_q2_score": 0.8824278741843883,
"openwebmath_perplexity": 279.0794507593227,
"openwebmath_score": 0.9252971410751343,
"tags": null,
"url": "https://math.stackexchange.com/questions/2002780/what-is-the-general-formula-for-a-convergent-infinite-geometric-series"
} |
# A simple finite combinatorial sum I found, that seems to work, would have good reasons to work, but I can't find in the literature.
I was doing a consistency check for some calculations I'm performing for my master's thesis (roughly, about a problem in discrete Bayesian model selection), and it turns out that my choice of priors is only consistent if this identity is true:
$$\sum_{k=0}^{N}\left[ \binom{k+j}{j}\binom{N-k+j}{j}\right] = \binom{N+2j+1}{2j+1}$$
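A quick brute-force check (my own sketch, using Python's `math.comb`; not part of the original question) confirms the identity on small cases. It is, in fact, the convolution of two negative-binomial series, $(1-x)^{-(j+1)}\cdot(1-x)^{-(j+1)}=(1-x)^{-(2j+2)}$, read off at the coefficient of $x^N$:

```python
from math import comb

def lhs(N, j):
    return sum(comb(k + j, j) * comb(N - k + j, j) for k in range(N + 1))

def rhs(N, j):
    return comb(N + 2 * j + 1, 2 * j + 1)

# The identity holds on every small case tested.
assert all(lhs(N, j) == rhs(N, j) for N in range(30) for j in range(10))
print(lhs(5, 2), rhs(5, 2))  # 252 252
```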
Now, this seems to work for small numbers, but I searched for it a lot in the literature and I can't find it (I have a physics background, so probably my knowledge of the proper "literature" is the problem here). Nor can I demonstrate it! Has anyone of you seen this before? Can this be rewritten equivalently in some more commonly seen way? Can it be proven right... or wrong? Thanks in advance! :) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.984810949854636,
"lm_q1q2_score": 0.8028498755597621,
"lm_q2_score": 0.8152324826183822,
"openwebmath_perplexity": 445.6855312652387,
"openwebmath_score": 0.7837901711463928,
"tags": null,
"url": "https://math.stackexchange.com/questions/3028345/a-simple-finite-combinatorial-sum-i-found-that-seems-to-work-would-have-good-r"
} |
supervised-learning
Title: Class imbalance and "all zeros" one-hot encoding? I tried this example for a multi class classifier, but when looking at the data I realized two things:
There are many examples of "all zeros" vectors, that is, messages that don't belong in any classification.
These all-zeros are actually the majority, by far.
Is it valid to have an all-zeros output for a certain input? I would guess a Sigmoid activation would have no problems with this, by simply not trying to force a one out of all the "near zero" outputs.
But I also think an "accuracy" metric will be skewed too optimistically: if all outputs are zero 90% of the time, the network will quickly overfit to always outputting 0, and get a 90% score. I understand your questions as follows:
Is it valid to have an all-zeros output for a certain input?
Yes its possible in some cases like:
In the tutorial "jigsaw-toxic-comment-classification-challenge", the data is taken from Wikipedia comments. Generally there is no toxic behavior in people's comments; it is very rare that a person posts "bad" comments on an informative source like Wikipedia, and this might lead to all the labels being zero for a particular example.
In a single-label classification task, like predicting whether a person has a rare disease, the dataset would contain labels that are mostly "zero" (no disease), i.e. heavily skewed towards zero.
This happens in datasets where the positive output to be predicted is a very rare case.
I guess your second question is whether "accuracy" is a good way of evaluating a model for this type of problem.
You are right, in such a case even a simple program/model that outputs zeros for all the inputs would achieve an accuracy of more than 90%, so here accuracy is not a good metric to evaluate the model on.
You should go through the metrics f1_score, recall, and precision, which are ideal for this type of problem.
Basically, here we are interested in "out of those which are predicted positive, how many are really positive" (precision) and "out of those which should have been predicted positive, how many are really predicted positive" (recall).
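To make this concrete (my own sketch, not from the original answer): on a 90/10 imbalanced set, a model that always predicts "zero" gets high accuracy but zero precision, recall and F1.

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F1 and accuracy from a binary confusion matrix."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# An "always predict zero" model on 100 examples, 10 of them positive:
p, r, f1, acc = metrics(tp=0, fp=0, fn=10, tn=90)
print(acc, f1)  # 0.9 0.0 -- 90% accuracy, yet the model found nothing
```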
If my definitions seem confusing, please go through the link below:
f1_score/recall/precision
Hope I was helpful | {
"domain": "ai.stackexchange",
"id": 1348,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "supervised-learning",
"url": null
} |
salt, process-chemistry
Title: Monosodium Diphosphate (NaH3P2O7) Preparation I looked on wikipedia for an answer but I couldn't even find a page for this salt, a subsequent google search didn't further enlighten me.
Does any one know how monosodium diphosphate can be prepared? $\ce{NaH3P2O7}$ is an intermediate acid-salt formed during the neutralization of pyrophosphoric acid ($\ce{H4P2O7}$) which becomes unstable as pH rises and leads to formation of trisodium diphosphate, $\ce{Na3HP2O7}$:
$$\ce{H4P2O7 + NaOH ->[pH < 3] NaH3P2O7 + H2O}$$
$$\ce{NaH3P2O7 + NaOH ->[pH = 3-9] Na2H2P2O7 + H2O}$$
$$\ce{Na2H2P2O7 + NaOH ->[pH > 9] Na3HP2O7 + H2O}$$
This acid-salt can also be produced by the reaction of trisodium phosphate and pyrophosphoric acid:
$$\ce{Na3PO4 + H4P2O7 -> NaH3P2O7 + Na2HPO4}$$
Notes and references:
Study of crystallization processes in some rare earth vanadate, tungstate and phosphate systems under hydrothermal conditions, K. Byrappa and B. Nirmala, Indian J. Phys. 73A (5), 621-632 (1999) (PDF)
Its CAS number is 13847-74-0
The first dissociation of pyrophosphoric acid gives the anion, $\ce{H3P2O7^-}$. The dissociation constant is pretty large compared to 2nd, 3rd, and 4th dissociation of the acid.
$$\ce{H4P2O7 + H2O <=> H3O+ + H3P2O7¯ ~~~~ K_{a1} \sim 10^{–1}}$$ | {
"domain": "chemistry.stackexchange",
"id": 14516,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "salt, process-chemistry",
"url": null
} |
comets
Title: Was comet ISON in a Kepler orbit prior to disintegrating? Was comet ISON in a Kepler orbit prior to disintegrating? If so, is it possible over time that Kepler orbits can be altered by increases or decreases in the mass of celestial bodies?
For instance, if ISON was in a Kepler orbit and it lost mass, did it maintain its orbit and disintegrate in its orbital track, or did it alter its track due to the loss of mass? To your first question: yes, it was in a Kepler orbit (this interactive model shows just how it appeared). ISON never completed a full circuit around the Sun, since this was its first time coming out of the Oort Cloud and it disintegrated.
Further answering: the orbit of a body around another in space is not affected by its own mass. Remember the experiment that proves two objects fall at the same speed regardless of their mass (here the experiment is made on the Moon with a feather and a hammer), and remember that orbiting is just falling at a certain speed and angle with respect to another body. | {
"domain": "astronomy.stackexchange",
"id": 159,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "comets",
"url": null
} |
python, python-3.x, random
Can the script be improved further with respect to use of Python idioms?
Since the generated file should again be parsed as input for testing, is writing to a file and then opening it again to test the scripts the best way? Is the written file format the best one for parsing again as input?
unif.seed_value is very un-Pythonic. Instead use a class.
Rather than using getattr(unif, "seed_value", 0), you can set self.seed_value at object instantiation and use that as the default; when it's falsy, fall back to seed.
math.floor is mostly the same as int. If you're using this on division, you can change it to floor divisions using //.
numpy.int32 is odd, it adds a dependency to numpy that shouldn't be needed. Instead you can use & or %, this also means that it doesn't throw an error if the number is greater than (1 << 31) - 1.
unif.seed_value/float(m) is the same as unif.seed_value/m in Python 3. In Python 2 it'd do floor division, unless one number is a float.
generate_flow_shop shouldn't need to take output, that should be implemented in main.
generate_flow_shop should take a single problem rather than the list of problems.
You can use tuple unpacking to give all values in the problem a name.
I heavily recommend str.format over the old % formatting operator. It has fewer bugs, and is enhanced in 3.6 with f-strings.
You can use _ as a 'throw away variable' in Python, it's somewhat a standard and lets developers ignore that variable, greatly increasing readability.
unif uses some fairly odd math, and so I decided to go for a more succinct approach.
And so I'd change your code to: (note, it's untested)
class Random:
def __init__(self):
self.seed_value = None
def unif(self, seed, low, high):
seed_value = self.seed_value or seed
a = 16807
b = 127773
c = 2836
m = (1 << 31) - 1 | {
"domain": "codereview.stackexchange",
"id": 25781,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, random",
"url": null
} |
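The reviewer's suggested rewrite is cut off above. As a point of reference, here is my own self-contained sketch (an assumption about the intended completion, not the reviewer's actual code) of the Lehmer "minimal standard" generator that the constants a=16807, b=127773, c=2836, m=2^31-1 implement, using Schrage's factorization m = a*b + c to avoid overflow in fixed-width arithmetic (in Python this is purely illustrative, since ints don't overflow):

```python
class Lehmer:
    """Minimal-standard Lehmer generator: seed_{k+1} = 16807 * seed_k mod (2**31 - 1)."""

    A, B, C, M = 16807, 127773, 2836, (1 << 31) - 1  # note M == A * B + C

    def __init__(self, seed=1):
        self.seed_value = seed

    def _next(self):
        # Schrage's trick: compute (A * seed) % M without a large intermediate.
        hi, lo = divmod(self.seed_value, self.B)
        self.seed_value = self.A * lo - self.C * hi
        if self.seed_value <= 0:
            self.seed_value += self.M
        return self.seed_value

    def unif(self, low, high):
        """A uniform draw in [low, high) derived from the next raw seed."""
        return low + (high - low) * self._next() / self.M

rng = Lehmer(seed=1)
print(rng._next())  # 16807, the classic first output for seed 1
```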
java, beginner, object-oriented, snake-game
public int getRandomYPos(){
return randomYPos;
}
}
Snake.java
import java.awt.*;
import java.util.LinkedList;
public class Snake{
private LinkedList<Point> body; // list holding points(x,y) of snake body
private Point head;
private static Direction headDirection;
private static Point tailCell;
private static boolean hasEatenApple = false;
public Snake() {
body = new LinkedList<>();
}
public void createSnake(int x, int y) {
//creating 3-part starting snake
body.addFirst(new Point(x,y));
body.add(new Point(x - 1, y));
body.add(new Point(x - 2, y));
headDirection = Direction.RIGHT;
tailCell = body.getLast();
}
public void clearBody() {
body.clear();
}
public void changeDirection(Direction theDirection) {
if (theDirection != headDirection.opposite())
this.headDirection = theDirection;
}
//updating localisation of snake
public void update() {
addPartOfBody(headDirection.getX(), headDirection.getY());
}
private void addPartOfBody(int x, int y) {
head = body.get(0);
body.addFirst(new Point(head.getX() + x, head.getY() + y));
tailCell = body.getLast();
if (hasEatenApple == false) {
body.removeLast();
} else {
hasEatenApple = false;
}
}
public LinkedList<Point> getBody() {
return (LinkedList<Point>) body.clone();
}
public Point getTailCell(){return tailCell;}
public void eat() {
hasEatenApple = true;
}
}
Point.java
public class Point {
private int x;
private int y;
public Point(int x, int y) {
this.x = x;
this.y = y;
}
public int getX() {
return x;
} | {
"domain": "codereview.stackexchange",
"id": 23042,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, object-oriented, snake-game",
"url": null
} |
cosmology, antimatter
Title: Could the missing antimatter lie outside the observable universe? While I was reading a similar question asking if other galaxy could be made of antimatter, to which the answer was: if they were, we should detect the radiation from matter interacting with antimatter on that sort of scale. But what if the missing antimatter lies outside the observable universe. Wouldn't that result in the matter dominated observable universe we live in, without any of the radiation from the antimatter lying outside it being detected? The problem with talking about things that lie outside the observable universe is that they are, by definition, not observable. There could be invisible pink unicorns out there, and you cannot prove they don't exist.
So yes, it is possible the missing antimatter lies outside the observable universe, but in itself this is not a powerful statement. You need something more: a mechanism to explain why all the antimatter got "out there". If the mechanism is well-motivated, makes testable predictions within the observable universe, and is eventually accepted as true, then it might solve the baryon asymmetry problem as well.
In the same way, we rely on GR to calculate what happens within the event horizon of black holes, and we are reasonably confident in the results even though we cannot observe inside a black hole. | {
"domain": "physics.stackexchange",
"id": 77242,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, antimatter",
"url": null
} |
a professional dart player, has an 80% chance of hitting the bullseye on a dartboard with any throw. I tried using all the common factors for numbers 4, 9, and 13 but I cannot come up with a. Algebra 1 Common Core Answers Chapter 12 Data Analysis and Probability Exercise 12. Garrett throws a dart at a circular dart board. Some special discrete probability distributions. Bernoulli random variable: it is a variable that has 2 possible outcomes: "success" or "failure". Round your answer to the nearest hundredth. Slim, a professional dart player, has an 80% chance of hitting the bullseye on a dartboard with any throw. pi * 10^2 = 100pi. P(white or black) 2. The power of Monte Carlo integration is that the error, given by the width of the distribution in the above plot, scales with the square root of the number of darts thrown, independent of the number of dimensions in the integral. What is the probability your dart will land closer to the center than closer to an edge? Prerequisites. It is as if B has become the entire sample space (i.e., the universe of outcomes). And so we see that c = 65. For each of the five specific outcomes, you'll need to calculate the probability of achieving that outcome, the number of darts thrown to achieve that outcome, and the winnings for that outcome. pi * 20^2 = 400pi. Example: A dart is thrown at random onto a board that has the shape of a circle as shown below. The probability the dart will hit the. Give your answer as a fraction, decimal and percent. Let X and Y denote the x and y coordinates of the landing location of the dart. Find the probability that the dart lands in the shaded circular region. 05 otherwise. radius of circle A = 2 in. There is a very simple and very important rule relating P(A) and P(not A), linking the probability of any event happening with the probability of that same event not happening. Answer: there is one "3" on a die and. The sum of the probabilities of all outcomes in a sample space is 1.
If you throw 3 times, what is the probability that you will strike the bullseye all 3 times? => 16 - 12. Geometric probability example: suppose you are considering the probability of hitting a target on a dart board. The | {
"domain": "clodd.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.986151392226466,
"lm_q1q2_score": 0.8085389568582907,
"lm_q2_score": 0.8198933381139645,
"openwebmath_perplexity": 835.8275025840575,
"openwebmath_score": 0.5500837564468384,
"tags": null,
"url": "http://clodd.it/cfop/probability-darts-answers.html"
} |
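The Monte Carlo remark in the passage above (the error scales with the square root of the number of darts thrown) can be illustrated with the classic dart-throwing estimate of pi. This sketch is mine, not part of the scraped page:

```python
import random

def estimate_pi(n_darts):
    """Throw darts uniformly at the unit square; the fraction landing inside
    the quarter circle x^2 + y^2 <= 1 approximates pi/4."""
    hits = 0
    for _ in range(n_darts):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_darts

random.seed(0)  # fixed seed so the run is reproducible
for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))  # the error shrinks roughly like 1/sqrt(n)
```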
Definition: A set is a collection of distinct objects, each of which is called an element of S. For a potential element, we denote its membership in and lack thereof by the infix symbols $\in$, $\notin$, respectively. Cardinality: recall (from lecture one!) that the cardinality of a set is the number of elements in the set. Hence by the theorem above $m \le n$. On the other hand, $f^{-1} \circ g: \mathbb{N}_n \to \mathbb{N}_m$. Well, only countably many subsets are finite, so only countably many are co-finite. But even though there is a Here, the null set is a proper subset of A. Thus, there are at least $2^\omega$ such bijections. For infinite $\kappa$ one has $\kappa! = 2^\kappa$. In this case the cardinality is denoted by $\aleph_0$ (aleph-naught) and we write $|A| = \aleph_0$. A set A is said to be countably infinite or denumerable if there is a bijection from the set N of natural numbers onto A. Number of functions from one set to another: let X and Y be two sets having m and n elements respectively. This is a program which finds the number of transitive relations on a set of a given cardinality. The cardinality of a set X is a measure of the "number of elements of the set". Continuing, $|F^T| = n^n$ because unlike the bijections… If A and B are arbitrary finite sets, prove the following: (a) $n(A \cup B) = n(A) + n(B) - n(A \cap B)$ (b) $n(A - B) = n(A) - n(A \cap B)$ 8. Thus, there are exactly $2^\omega$ bijections. [ $P_i \neq \{ \emptyset \}$ for all $0 < i \le n$ ]. The union of the subsets must equal the entire original set. You can do it by taking | {
"domain": "minhcuongvj.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9752018455701407,
"lm_q1q2_score": 0.8083826578003334,
"lm_q2_score": 0.8289388104343892,
"openwebmath_perplexity": 442.3231220788641,
"openwebmath_score": 0.7859933972358704,
"tags": null,
"url": "http://minhcuongvj.com/w5kzgd/number-of-bijections-on-a-set-of-cardinality-n-a0be25"
} |
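As a concrete check of the finite case mentioned above (my own example, not from the scraped page): the number of bijections between two sets of cardinality n is n!, which can be verified by brute-force enumeration:

```python
from itertools import permutations
from math import factorial

def count_bijections(a, b):
    """Count bijections from set a onto set b by enumerating assignments;
    the count is nonzero only when |a| == |b|."""
    if len(a) != len(b):
        return 0
    # Each permutation of b, paired with a fixed ordering of a, is one bijection.
    return sum(1 for _ in permutations(b, len(a)))

a = {1, 2, 3, 4}
b = {'w', 'x', 'y', 'z'}
print(count_bijections(a, b), factorial(4))  # both are 24
```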
Luiz is right about your proof. To fix that, consider the following:
Let $f \in c_0^*$, then define $y = [y_1, y_2, \dots, y_n, \dots]$ by $y_i = f(e_i)$ for all $i$. Observe that if $f \in c_0^*$, then being a bounded linear functional we have that $\sup \{f(x) : x \in c_0 \text{ and } \|x\| = 1 \} = M < \infty$. In particular, the limsup taken over the family of elements $\{x_n\}_{n=1}^{\infty}$, where $x_n = [\operatorname{sgn}f(e_1), \operatorname{sgn}f(e_2), \dots, \operatorname{sgn}f(e_n), 0, 0, \dots]$, is finite. Thus $$\sum_{i=1}^{\infty} |y_i| = \limsup_{n} f(x_n) < \infty.$$ Consequently, $y \in \ell^{1}$ and we now have a method of defining a $y \in \ell^{1}$ from an $f \in c_{0}^*$, so that your mapping $\varphi : \ell^{1} \rightarrow c_{0}^*$ now has an inverse. Since you showed it preserves norms, and we now have that it has an inverse, it must be an isometry. Thus, you created an isometric embedding of $\ell^{1}$ onto $c_0^*$ in order to show the two vector spaces are "equal". | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9840936082881853,
"lm_q1q2_score": 0.8135617945421837,
"lm_q2_score": 0.8267117962054048,
"openwebmath_perplexity": 95.101298468121,
"openwebmath_score": 0.9919639229774475,
"tags": null,
"url": "https://math.stackexchange.com/questions/678911/the-dual-space-of-c-is-ell1"
} |
# What is a precise definition of soundness?
I'm trying to better understand soundness, especially in contrast to semantic consistency. Here is what I've put together so far:
Soundness: A theory is sound if all theorems are true under all possible interpretations.
Semantic Consistency: A theory is semantically consistent if there exists an interpretation for which all theorems are true.
Note that I'm aware that consistency usually refers to syntactic consistency (namely that the theory cannot derive both $$\varphi$$ and $$\lnot \varphi$$, for any formula $$\varphi$$).
I'm primarily confused about the definition of soundness because it feels like a very hard property to satisfy. Take, for example, Peano Arithmetic. One axiom states that $$0$$ is not the successor of any number. I think I can find an interpretation that makes that false, for example, modular arithmetic. I'm fairly sure, though, that Peano Arithmetic is considered a sound theory so I suspect that I'm just confused about the definition of soundness.
Any help would be appreciated. Thanks.
Your definition of soundness is incorrect. A theory is sound if it contains no sentences which are false with respect to a specific structure (or class of structures) of interest.
Now as an unfortunate matter of practice we generally only consider soundness when that specific context is understood in the background, so we omit it. For example, when we say
Peano arithmetic is sound
what we really mean is
Peano arithmetic is sound with respect to the structure $$(\mathbb{N};+,\times)$$; that is, for each axiom $$\varphi$$ of Peano arithmetic we have $$(\mathbb{N};+,\times)\models\varphi$$.
Meanwhile, Peano arithmetic is not sound with respect to the unique $$1$$-element $$\{+,\times\}$$-structure (basically, the trivial ring).
I think it's actually helpful to step a ways back and rephrase both soundness and consistency in a more general way. Specifically, given a class of structures $$\mathbb{K}$$ and a theory $$T$$, we say: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731158685838,
"lm_q1q2_score": 0.8346720197657481,
"lm_q2_score": 0.8670357632379241,
"openwebmath_perplexity": 454.4651767587068,
"openwebmath_score": 0.856113076210022,
"tags": null,
"url": "https://math.stackexchange.com/questions/4097507/what-is-a-precise-definition-of-soundness"
} |
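As a toy illustration of "sound with respect to a structure" (my own sketch, not from the answer): the Peano axiom the questioner mentions, "0 is not the successor of any number", holds under the ordinary successor on an initial segment of the naturals but fails under a mod-5 successor, exactly as the question suspected:

```python
def axiom_zero_not_successor(domain, succ):
    """Evaluate 'forall x in domain: succ(x) != 0' in a given interpretation."""
    return all(succ(x) != 0 for x in domain)

# Ordinary successor on an initial segment of N: no element maps to 0.
print(axiom_zero_not_successor(range(100), lambda x: x + 1))        # True

# Mod-5 arithmetic: the successor wraps around, so succ(4) == 0.
print(axiom_zero_not_successor(range(5), lambda x: (x + 1) % 5))    # False
```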
java, chess
// all possible moves in the up negative diagonal
for (int j = offSetCol - 1, i = offSetRow - 1; j > -1 && i > 0; j--, i--) {
String squareNote = String.valueOf((char) ('a' + j) + "" + (i));
Square square = getSquare(squareNote);
if (square.getSquarePiece() == null) {
;
} else if (square.getSquarePiece().getColor() == lastMovedColor
|| !square.getSquarePiece().getType().equals("B")
&& !square.getSquarePiece().getType().equals("Q")) {
break;
} else if (square.getSquarePiece().getColor() != lastMovedColor
&& (square.getSquarePiece().getType().equals("Q")
|| square.getSquarePiece().getType().equals("B"))) {
leftInCheck = true;
break;
}
} | {
"domain": "codereview.stackexchange",
"id": 28306,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, chess",
"url": null
} |
Let $C$ be centered at $c$ with radius $r$; then we have $z=re^{i\theta}+c=c+r\cos(\theta)+ri\sin(\theta)$, and its conjugate becomes $c+r\cos(\theta)-ri\sin(\theta)=c+re^{-i\theta}$. Therefore we have $k-\overline{z}=k-c-re^{-i\theta}$. To normalize it we multiply by the conjugate $(k-c-r\cos(\theta))-ri\sin(\theta)=k-c-re^{i\theta}$. The result is $(k-c-r\cos(\theta))^{2}+r^{2}\sin^{2}\theta$. Rearranging gives $$k^{2}+c^{2}+r^{2}-2kc-2(k-c)r\cos(\theta)=A-2Br\cos(\theta),\quad A=k^{2}+c^{2}+r^{2}-2kc,\; B=k-c$$
So we have $$\int_{C}\frac{1}{k-\overline{z}}\,dz=\int^{2\pi}_{0}\frac{k-c-re^{i\theta}}{(k-c-re^{-i\theta})(k-c-re^{i\theta})}\,d\!\left(re^{i\theta}\right)=ri\int^{2\pi}_{0}\frac{Be^{i\theta}-re^{2i\theta}}{A-2Br\cos(\theta)}d\theta$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9669140235181257,
"lm_q1q2_score": 0.805753231346118,
"lm_q2_score": 0.8333245891029456,
"openwebmath_perplexity": 241.52240493878756,
"openwebmath_score": 0.9584892988204956,
"tags": null,
"url": "http://math.stackexchange.com/questions/276803/integrate-int-c-frac1r-barzdz-conflicting-answers"
} |
slam, navigation, rviz, hector-slam
Originally posted by Ben12345 on ROS Answers with karma: 98 on 2017-01-25
Post score: 0
Original comments
Comment by gvdhoorn on 2017-01-25:
Please tell us what platform this is, and how you installed ROS. From your other questions I gather this could be a raspberry pi, which could be important.
Comment by Ben12345 on 2017-01-25:
Hi, Ubuntu 16.04 running in Virtual box 5.0.30 on a mac. No raspberry pi.
Ros was installed via the installation guidelines presented on the website - i.e sudo apt-get install etc etc. It was desktop full.
Comment by gvdhoorn on 2017-01-25:
I'm not saying it's definitely the problem, but running RViz in vbox has been problematic before. Have you installed the relevant video drivers / extensions?
Also: are there more lines to the error you posted above?
And: can you rosrun rviz rviz by itself?
Comment by Ben12345 on 2017-01-25:
Hmm that's interesting, i had a few issues with the serial from a VM Ros node about a year ago so that would make sense. I'll give it a try on my dual boot tomorrow.
No i cannot run Rviz by itself, i get a segmentation fault and core dump.
No additional text from the original question.
Thanks
Comment by Ben12345 on 2017-01-25:
Oh and 3D acceleration is enabled, and i'm running OpenGL 2.1 within the VM
Comment by gvdhoorn on 2017-01-26:
Ok. May I suggest you update the title of your question then? The current title seems to suggest this is a problem with hector_slam, but it looks more like an RViz<->VBox problem.
Ok so it seems the problem is with RVIZ and a virtual machine, as this works under a real machine.
A solution for anyone wanting to run Rviz in a virtual box VM is to disable 3D acceleration, i.e switch the VM off, go into settings and uncheck the box for 3D acceleration, i disabled 2D acceleration too.
I imagine there will be a performance hit with any graphics, but something is better than nothing. | {
"domain": "robotics.stackexchange",
"id": 26829,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "slam, navigation, rviz, hector-slam",
"url": null
} |
ros
139 | template <typename PointT, traits::HasNormal<PointT> = true> inline bool
| ^
/usr/include/pcl-1.10/pcl/common/point_tests.h:140:3: error: redefinition of ‘template<class PointT, <typeprefixerror><anonymous> > bool pcl::isNormalFinite(const PointT&)’
140 | isNormalFinite (const PointT& pt) noexcept
| ^~~~~~~~~~~~~~
/usr/include/pcl-1.10/pcl/common/point_tests.h:121:3: note: ‘template<class PointT, <typeprefixerror><anonymous> > constexpr bool pcl::isNormalFinite(const PointT&)’ previously declared here
121 | isNormalFinite (const PointT&) noexcept
| ^~~~~~~~~~~~~~ | {
"domain": "robotics.stackexchange",
"id": 36141,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
waves, acoustics, fourier-transform
Title: What does a sound wave's frequency content tell us about the motion of particles in the wave? The Fourier transform of a quasi-periodic pressure wave traveling through air gives that wave's frequency content. And it should be harmonic (right?).
If it has 5dB of 440Hz, what does that tell me about the air particles in the original wave?
Can I infer anything about particle motion from the frequency content? Short Answer: For simple waves (described below), the maximum particle velocity is independent of the frequency and depends only on the pressure amplitude. The maximum particle displacement does depend on the frequency. However, these simple waves are very limiting, so a deeper analysis is probably needed for any particular situation. (It is possible that these features are not specifically what was desired, but "the motion of the particles" is rather vague, so this is what I decided to focus on.)
Longer Answer:
For a progressive wave (the wave passes you and there is no echo) made of only one frequency you may write the pressure $p$ as
$$p=A\cos\left[2\pi f(t-x/c)+\phi\right],$$
where $A$ is the amplitude, $f$ is the frequency, $t$ is time, $x$ is position, $c$ is the wave speed, and $\phi$ is some phase constant. The level (the "decibels") is then given by
$$L=20\log_{10}\frac{A}{p_0},$$
where $p_0$ is some reference pressure (in air this is 20 $\mu$Pa). Finally, we obtain from conservation of momentum
$$\rho\ddot u = \frac{2\pi f}{c}A\sin\left[2\pi f(t-x/c)+\phi\right],$$
where $\rho$ is the air density, $u$ is the particle displacement (how far it has moved from its rest position), and the dots denote time derivatives. You can easily integrate in time once to obtain | {
"domain": "physics.stackexchange",
"id": 85218,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, acoustics, fourier-transform",
"url": null
} |
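To make the short answer above concrete, here is a small numeric sketch (mine; the air density rho = 1.21 kg/m^3 and sound speed c = 343 m/s are assumed values, and A/(rho*c) is the standard plane-wave relation for peak particle velocity): the peak velocity depends only on the pressure amplitude, while the peak displacement also depends on frequency:

```python
import math

RHO = 1.21     # air density, kg/m^3 (assumed)
C = 343.0      # speed of sound, m/s (assumed)
P0 = 20e-6     # reference pressure in air, Pa

def pressure_amplitude(level_db):
    """Invert L = 20*log10(A/p0) to recover the pressure amplitude A."""
    return P0 * 10 ** (level_db / 20)

def peak_velocity(level_db):
    """Peak particle velocity of a plane progressive wave: A / (rho * c)."""
    return pressure_amplitude(level_db) / (RHO * C)

def peak_displacement(level_db, freq_hz):
    """Peak particle displacement: velocity amplitude / (2*pi*f)."""
    return peak_velocity(level_db) / (2 * math.pi * freq_hz)

# 94 dB corresponds to a pressure amplitude of about 1 Pa:
print(pressure_amplitude(94))        # about 1.00 Pa
print(peak_velocity(94))             # about 2.4e-3 m/s, at any frequency
print(peak_displacement(94, 440.0))  # about 8.7e-7 m at 440 Hz
```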
php, beginner, object-oriented, classes, database
Title: Deleting a user from a database I was hoping someone could tell me whether I'm doing this correctly. I know there are other questions like this, but everyone codes differently and I just feel that I'm not doing this right. I'm trying to make a modular OOP site, so all the files will be called through the one index file, which will then route the user to the correct page. All the code below works correctly (I have tested it), but I was curious about a few things.
Is this the correct way of using OOP and classes (mainly the DB
connection)?
Why (if it is correct) does NetBeans for PHP not pick up the functions? See images below.
db.php (DB connection file)
$host = "localhost";
$username = "root";
$password = "";
$database = "als";
$errors = new errors();
try {
$db = new PDO("mysql:host=$host;dbname=$database", $username, $password);
} catch(PDOException $ex) {
trigger_error($ex->getMessage());
$errors->addLog($ex->getPrevious(), true);
exit;
}
User.php (class)
class user {
private $db = null;
public function __construct() {
global $db;
$this->db = $db;
}
public function deleteUser($ID) {
if(isset($ID) && !empty($ID) && is_int($ID) == TRUE) {
$deleteUser = $this->db->prepare("DELETE FROM `users` WHERE userid=?");
$deleteUser->bindParam(1, $ID, PDO::PARAM_INT);
$deleteUser->execute();
} else {
return FALSE;
}
}
}
index.php (main file)
include 'config/autoloader.php';
include 'config/db.php';
$user = new user(); | {
"domain": "codereview.stackexchange",
"id": 9761,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, beginner, object-oriented, classes, database",
"url": null
} |
(vii) Similarly as $0<\alpha<1$ rivers are horizontal.
Remark. Note the similarity to pendulum (see f.e. http://weyl.math.toronto.edu/MAT244-2011S-forum/index.php?topic=126.msg450#msg450) where there are "normal" oscillations and fast rotations when pendulum goes over top point.
« Last Edit: March 30, 2013, 05:22:58 AM by Victor Ivrii »
#### Alexander Jankowski
• Full Member
• Posts: 23
• Karma: 19
##### Re: Easter challenge
« Reply #9 on: March 29, 2013, 09:20:39 PM »
That is a good explanation. I also noted another of my mistakes... The equations of the separatrices are $$y = ±x + n\pi,$$ not $y = ±\pi x + n\pi$. Thank you! | {
"domain": "toronto.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9825575147530352,
"lm_q1q2_score": 0.8187893433523604,
"lm_q2_score": 0.8333245953120233,
"openwebmath_perplexity": 1334.707267608598,
"openwebmath_score": 0.8989298939704895,
"tags": null,
"url": "https://forum.math.toronto.edu/index.php?PHPSESSID=2h9fj04alq45skvg6snedf70k1&topic=281.0"
} |
# The Stacks Project
## Tag 032N
Lemma 10.155.12. Let $R$ be a Noetherian domain with fraction field $K$ of characteristic $p > 0$. Then $R$ is N-2 if and only if for every finite purely inseparable extension $K \subset L$ the integral closure of $R$ in $L$ is finite over $R$.
Proof. Assume the integral closure of $R$ in every finite purely inseparable field extension of $K$ is finite. Let $K \subset L$ be any finite extension. We have to show the integral closure of $R$ in $L$ is finite over $R$. Choose a finite normal field extension $K \subset M$ containing $L$. As $R$ is Noetherian it suffices to show that the integral closure of $R$ in $M$ is finite over $R$. By Fields, Lemma 9.27.3 there exists a subextension $K \subset M_{insep} \subset M$ such that $M_{insep}/K$ is purely inseparable, and $M/M_{insep}$ is separable. By assumption the integral closure $R'$ of $R$ in $M_{insep}$ is finite over $R$. By Lemma 10.155.8 the integral closure $R''$ of $R'$ in $M$ is finite over $R'$. Then $R''$ is finite over $R$ by Lemma 10.7.3. Since $R''$ is also the integral closure of $R$ in $M$ (see Lemma 10.35.16) we win. $\square$
The code snippet corresponding to this tag is a part of the file algebra.tex and is located in lines 42778–42784 (see updates for more information). | {
"domain": "columbia.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9793540674530016,
"lm_q1q2_score": 0.8096435518625082,
"lm_q2_score": 0.8267117876664789,
"openwebmath_perplexity": 164.3875458562411,
"openwebmath_score": 0.9809995293617249,
"tags": null,
"url": "http://stacks.math.columbia.edu/tag/032N"
} |