| text | source |
|---|---|
volcanology, human-influence, volcanic-hazard
diverting water, building dams, etc., and also by human-induced climate change contributing to rising sea levels. Increased hydrostatic pressure in the ground 'lubricates' fault planes, causing smaller but more frequent earthquakes. In and around volcanoes there is more groundwater, which increases water vapour in and near magma chambers, and hence increases the activity of volatiles, contributing to volcanic eruptions. | {
"domain": "earthscience.stackexchange",
"id": 782,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "volcanology, human-influence, volcanic-hazard",
"url": null
} |
Graphical illustration:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> N = 8
>>> y = np.zeros(N)
>>> x1 = np.linspace(0, 10, N, endpoint=True)
>>> x2 = np.linspace(0, 10, N, endpoint=False)
>>> plt.plot(x1, y, 'o')
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.plot(x2, y + 0.5, 'o')
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.ylim([-0.5, 1])
(-0.5, 1)
>>> plt.show() | {
"domain": "readthedocs.io",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9407897492587142,
"lm_q1q2_score": 0.8087628392152412,
"lm_q2_score": 0.8596637451167997,
"openwebmath_perplexity": 7937.9742608749075,
"openwebmath_score": 0.24341653287410736,
"tags": null,
"url": "https://jax.readthedocs.io/en/4510-2/_autosummary/jax.numpy.linspace.html"
} |
c++, strings, file
return 0;
}
Output image: Overall
You seem to have stuffed everything into a single function, which makes the code harder to follow than necessary. A couple of functions to break things up into manageable units would probably be a good idea (it's part of self-documenting code).
Your naming convention is also a bit shoddy.
string str, str2, strn, tab[10000], tab2[10000];
int i, k, j, n, l, tabl;
char c = 179;
vector<int> tabs;
vector<string> stringi;
None of these names conveys any meaning about what it is used for. Just like functions, variables should be given meaningful names so that the code becomes self-explanatory to read.
std::string inputLine;
std::getline(std::cin, inputLine);
There is no real reason to use built-in C-arrays. std::vector and std::array are always going to be a better alternative (unless you are yourself building a container).
Basic Code Review
Prefer to use C++ header files:
#include <math.h>
// Prefer
#include <cmath> | {
"domain": "codereview.stackexchange",
"id": 10038,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, strings, file",
"url": null
} |
python, python-3.x, time-limit-exceeded, graph, depth-first-search
Sample Output
3
7
testCases = int(input())
for x in range(testCases):
temp = input().split()
N = int(temp[0])
M = int(temp[1])
nodes = []
edges = []
for _ in range(M):
temp = input().split()
A = int(temp[0])
B = int(temp[1])
edges.append([A, B])
counter = 0
for y in range(N):
counter += 1
nodes.append(counter)
hashmap = {}
for h in range(len(nodes)):
neighbours = []
for j in range(len(edges)):
if edges[j].__contains__(nodes[h]):
index_of_node = edges[j].index(nodes[h])
if index_of_node == 0:
neighbours.append(edges[j][1])
hashmap[h + 1] = neighbours
else:
neighbours.append(edges[j][0])
hashmap[h + 1] = neighbours
current_group = 0
highest_group = 0 | {
"domain": "codereview.stackexchange",
"id": 39614,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, time-limit-exceeded, graph, depth-first-search",
"url": null
} |
python, object-oriented, python-3.x, tkinter, tk
Title: Creating and calling objects sequentially in Tkinter with Python 3 I am trying to build a very simple user interface in Python 3 using Tkinter, to be used on Windows, and have written code which is not efficient - it is excessively long for no good reason. I am looking for advice, perhaps even conceptual, about how to make this short and elegant.
Here is a simple excerpt from the code, for only 2 periods. I have many periods, and have created each of them as its own class, which is the approach I am trying to replace. I presume that we should have just one Period class, objects of which will be called sequentially. I do not understand how to do this with Tkinter.
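One way to collapse the per-period classes into a single reusable one (a sketch with hypothetical names — `Period` takes its differences as constructor data, and the `Game` controller shows instances one after another by raising frames):

```python
import tkinter as tk
from tkinter import ttk

class Period(ttk.Frame):
    """One reusable frame; per-period differences come in as data."""
    def __init__(self, parent, controller, number):
        super().__init__(parent)
        self.number = number
        ttk.Label(self, text=f"Period {number}").pack()
        ttk.Button(self, text="Next",
                   command=controller.next_period).pack()

class Game(tk.Tk):
    def __init__(self, n_periods=5):
        super().__init__()
        container = ttk.Frame(self)
        container.pack(fill="both", expand=True)
        # all periods come from the same class, stacked in the same grid cell
        self.periods = [Period(container, self, i + 1)
                        for i in range(n_periods)]
        for p in self.periods:
            p.grid(row=0, column=0, sticky="nsew")
        self.current = 0
        self.periods[0].tkraise()

    def next_period(self):
        # raise the next Period frame, if any
        if self.current + 1 < len(self.periods):
            self.current += 1
            self.periods[self.current].tkraise()

if __name__ == "__main__":
    Game().mainloop()
```

The key idea is `tkraise()`: all the frames exist at once, and the controller just decides which one is on top.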
import tkinter as tk
from tkinter import ttk
class Game(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs) | {
"domain": "codereview.stackexchange",
"id": 23212,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, object-oriented, python-3.x, tkinter, tk",
"url": null
} |
java, algorithm, interview-questions
if (spouce.isPresent()) {
System.out.println(relation[1] + "=" + tree.fetchSpouce(name).getName());
}
break;
}
} else {
String name1 = input[0].split("=")[1];
String name2 = input[1].split("=")[1];
RelationType type1 = RelationType.valueOf(input[0].split("=")[0].toUpperCase());
RelationType type2 = RelationType.valueOf(input[1].split("=")[0].toUpperCase());
tree.addPerson(name1, type1, name2, type2);
System.out.println("Welcome to the family, " + name2 + "!");
}
} | {
"domain": "codereview.stackexchange",
"id": 33198,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, interview-questions",
"url": null
} |
The odds of rolling two dice and the sum being greater than 9 are 6 to 30. Find the probability of getting two numbers whose sum is greater than 10. When two six-sided dice are tossed, there are 36 possible outcomes as shown. The game is designed such that you throw a pair of dice and get money back or lose money. In this way, the difference value for any roll of the two dice will always be positive or 0. Two different dice are thrown together. So the probability of not getting 7 or 11 is 7/9. Sum of dice when three dice are rolled together: if 1 appears on the first die, 1 on the second die and 1 on the third die. When two dice are rolled, we get 36 possible outcomes like (1,1), (1,2), (1,3), (1,4), (1,5), (1,6), ... Throwing dice more than once. If a pair of dice are rolled 5 times, what is the probability of getting a sum of 5 every time? Example: Roll two 6-sided dice. That takes care of the winning or losing probabilities for the naturals (7,11) and the craps (2,3,12) | {
"domain": "farmaciacoverciano.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9859363750229257,
"lm_q1q2_score": 0.8399712517041412,
"lm_q2_score": 0.8519527963298946,
"openwebmath_perplexity": 208.79297532556296,
"openwebmath_score": 0.6622301936149597,
"tags": null,
"url": "http://farmaciacoverciano.it/dhti/when-two-dice-are-rolled-find-the-probability-of-getting-a-sum-of-5-or-6.html"
} |
Taking $n \to \infty$, by the squeezing theorem we get
$$s_n \to \frac{1}{2}\phi(0) = \frac{1}{2}f'(0).$$
• @OpenBall, It is a rather straightforward consequence of continuity. For any $\epsilon > 0$, there exists $\delta > 0$ such that $|\phi(x) - \phi(0)| < \epsilon$ whenever $|x| < \delta$, from which we deduce that $$\phi(0) -\epsilon \leq \inf_{[0,1/n]}\phi \leq \sup_{[0,1/n]}\phi \leq \phi(0) + \epsilon, \qquad n > \delta^{-1}.$$ This is enough to conclude. – Sangchul Lee Jan 13 '17 at 20:26
• Yes, it's all clear. Thanks. – user384138 Jan 13 '17 at 20:28
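The limit is easy to sanity-check numerically (a sketch, taking $f(x) = \sin x$, so $f(0) = 0$ and $f'(0) = 1$, hence the expected limit is $1/2$):

```python
import math

def s(n, f=math.sin):
    # s_n = sum_{k=0}^{n} f(k / n^2)
    return sum(f(k / n**2) for k in range(n + 1))

for n in (10, 100, 1000):
    print(n, s(n))  # approaches f'(0)/2 = 0.5
```

For $n = 1000$ the sum is already within about $5 \times 10^{-4}$ of $1/2$, consistent with the $\tfrac{n(n+1)}{2n^2}$ leading behaviour.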
Using the definition of the derivative + using riemann sum to rewrite the sum as an integral yields $\frac{1}{2}f'(0)$.
• Can you detail a bit more? – Clement C. Jan 16 '17 at 19:12
An answer (inspired from that of the book cited as reference in the question).
• First, let us try with a few simple examples: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9873750484894196,
"lm_q1q2_score": 0.8291323729888673,
"lm_q2_score": 0.8397339736884712,
"openwebmath_perplexity": 258.74892103222925,
"openwebmath_score": 0.9327635765075684,
"tags": null,
"url": "https://math.stackexchange.com/questions/2096624/limit-of-the-sequence-left-sum-k-0n-f-left-frackn2-right-right-n/2099304"
} |
$$\frac{BE}{DF} = \frac{BD}{DC} = \frac{DE}{CF} = \frac{c}{a} = \frac{d}{e} = \frac{b}{f} = \frac{6}{5}$$
Thus, $$\frac{d}{e} = \frac{6}{5}$$ and $$\frac{b}{f} = \frac{6}{5}$$ as well.
If perimeters of $$\triangle ABC$$ and $$\triangle CDF$$ are $$P_{\triangle ABC}$$ and $$P_{\triangle CDF}$$, respectively:
$$P_{\triangle ABC} = (a + c) + (d+e) + (f+b) = \left(a + \frac{6}{5}a\right) + \left(e + \frac{6}{5}e\right) + \left(f + \frac{6}{5}f\right)\\ = \frac{11}{5}\left(a + e + f\right) = \frac{11}{5}P_{\triangle CDF}$$
Similarly, if perimeters of $$\triangle BDE$$ and $$\triangle CDF$$ are $$P_{\triangle BDE}$$ and $$P_{\triangle CDF}$$, respectively: $$P_{\triangle BDE}= c+d+b = \frac{6}{5}a + \frac{6}{5}e + \frac{6}{5}f = \frac{6}{5}\left(a+e+f\right) = \frac{6}{5}P_{\triangle CDF}$$
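Both perimeter ratios can be sanity-checked numerically (a sketch; $a, e, f$ are the sides of $\triangle CDF$, chosen arbitrarily, with the paired sides scaled by the common ratio $6/5$):

```python
from fractions import Fraction

r = Fraction(6, 5)                                # common ratio c/a = d/e = b/f
a, e, f = Fraction(5), Fraction(7), Fraction(9)   # arbitrary sides of triangle CDF
c, d, b = r * a, r * e, r * f

P_CDF = a + e + f
P_ABC = (a + c) + (d + e) + (f + b)
P_BDE = c + d + b

print(P_ABC / P_CDF)  # 11/5
print(P_BDE / P_CDF)  # 6/5
```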
Yet, the question is asking for the ratio of $$P_{\triangle ABC}$$ and $$P_{\triangle CDE}$$, which cannot be found using the given data. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9850429103332288,
"lm_q1q2_score": 0.8143465958437853,
"lm_q2_score": 0.8267117983401364,
"openwebmath_perplexity": 189.37131441885776,
"openwebmath_score": 0.8208692669868469,
"tags": null,
"url": "https://math.stackexchange.com/questions/2334941/question-based-on-geometryarea"
} |
homework-and-exercises, newtonian-mechanics, free-body-diagram, centripetal-force, centrifugal-force
Title: Cyclist leaning when rounding a bend Why?
Now here's the question that got me pondering
A cyclist rounds a bend, The surface of the road is horizontal. The cyclist is forced to lean at an angle of $20^\circ$ to the vertical to 'only just' take the bend successfully. The total sideways frictional force on the tyres is 360 N. The cycle has a mass of 20 kg. What is the mass of the cyclist? (Answer: 78.9 kg)
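For reference, both numbers can be reproduced from the force balance $\tan\theta = F/(Mg)$, where $M$ is the combined mass of cyclist and bike (a sketch; the book's 78.9 kg follows from taking $g = 10\ \mathrm{m/s^2}$, while $g = 9.81\ \mathrm{m/s^2}$ gives the ≈80.8–80.9 kg figure):

```python
import math

F = 360.0                 # sideways frictional force, N
theta = math.radians(20)  # lean angle from the vertical
m_bike = 20.0             # kg

for g in (10.0, 9.81):
    M = F / (g * math.tan(theta))   # combined mass from tan(theta) = F / (M g)
    print(g, round(M - m_bike, 1))  # cyclist's mass: 78.9 with g=10, ~80.8 with g=9.81
```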
The trouble I'm having is that I don't understand why the cyclist has to lean to begin with, I tried drawing a free body diagram and equating the torques on the cyclist, but I get 80.9 kg as my answer instead. I also tried resolving the forces like shown here, which curiously also got me 80.9 kg. $g = 9.81\ \mathrm{m/s^2}$. You approximated $g = 10$ when converting masses to weights, so your answer for (mass of cyclist plus mass of bike) is $10/9.81$ of the total mass. | {
"domain": "physics.stackexchange",
"id": 82540,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, free-body-diagram, centripetal-force, centrifugal-force",
"url": null
} |
c++, algorithm, c++14, combinatorics
First, it's better to always use braces with if and other control structures to avoid small and silent errors in the future. But in this case, you can actually get rid of the condition altogether:
return myPair.first == first_map.end()
&& myPair.second == second_map.end();
Having to deal with std::pair whenever you want to use std::map and friends can be bothersome. Fortunately, there are ways to hide them. For example, you can change this line:
temp_unordered_map.insert(std::pair<T1,int>(x,1));
into this one:
temp_unordered_map.emplace(x, 1);
The function make_unordered_map does not tell much more than its return type. The name count_occurrences would probably be more suitable and provide more information.
If you plan to keep your function working only for std::array, then you can actually get rid of this runtime check:
if( N1 != N2) | {
"domain": "codereview.stackexchange",
"id": 12807,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, c++14, combinatorics",
"url": null
} |
a). Prove the sequences $\{\overline{b}_n\}$ and $\{\underline{b}_n\}$ both converge.
Given that $\{a_n\}$ is a bounded sequence and $\overline{b}_n = \sup T_n$ and $\underline{b}_n = \inf T_n$, then we have for the supremum, by definition, that there exists a least upper bound, call it $\overline{m}$, such that $x \leq \overline{m}$ for all $x \in \{a_n\}$. Similarly, there exists a greatest lower bound $\underline{m}$ such that $x \geq \underline{m}$ for all $x \in \{a_n\}$. This is where I am stuck, because I want to say that since $\overline{b}_n$ is given to be our $\sup T_n$, we have a monotone sequence which converges by the Completeness Property. But is this a safe assumption at this point? This doubt is the same for the latter case, except we change our argument so that it states it using $\underline{b}_n$ as the $\inf T_n$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9895109102866638,
"lm_q1q2_score": 0.8135651214374139,
"lm_q2_score": 0.8221891370573388,
"openwebmath_perplexity": 125.72307160831211,
"openwebmath_score": 0.9589003920555115,
"tags": null,
"url": "https://math.stackexchange.com/questions/504219/lim-sup-and-lim-inf"
} |
java, object-oriented, game, playing-cards
break; // end case "deal"
case "hit":
if(!isPlayerDone)
deck = hit(deck, playersHand, splitHand, dealersHand);
else
System.out.print("You must deal cards first!\n\n");
break; // end case "hit"
case "stand":
if(!isPlayerDone)
{
isPlayerDone = true;
deck = stand(deck, playersHand, splitHand, dealersHand);
} // end if()
else
System.out.print("You must deal cards first!\n\n");
break; // end case "stand" | {
"domain": "codereview.stackexchange",
"id": 2885,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented, game, playing-cards",
"url": null
} |
quantum-field-theory, differential-geometry, gauge-theory, topological-field-theory, chern-simons-theory
($\delta$ is half the sum of the positive roots). $\mathcal{H}_{\lambda}$ are the integrable loop group representations of highest weight $\lambda$.
As explained by Elitzur, Moore, Schwimmer and Seiberg, the addition of a Wilson loop in the path integral is equivalent to adding a source term in the action consisting of the Kirillov-Kostant-Souriau action on $G/T$ at the insertion point. (This result is sometimes attributed to Diakonov and Petrov; see an elaboration in Alekseev, Chekeres and Mnev, equation 3.6.)
The gauge freedom in the Kirillov-Kostant-Souriau action doesn't remove the holonomy completely but restricts it to a conjugacy class determined completely by the representation $\lambda$ at the insertion point; see Murayama, equation (6.10), where the diagonalization is performed explicitly. In the general case the residual holonomy is given by:
$$e^{\frac{2\pi i}{k} \lambda . H}$$
The solution is still of the form given above, with: | {
"domain": "physics.stackexchange",
"id": 59427,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, differential-geometry, gauge-theory, topological-field-theory, chern-simons-theory",
"url": null
} |
robotic-arm, programming-languages
The manual ev+ programming pdf is here.
I appreciate any idea to workaround this error. It has been a while since I worked in V+, the predecessor to EV+, but I think I can answer some questions.
The base instruction allows you shift the robot world frame in X, Y, Z, and rotate about Z, but it does not allow you to rotate about X or Y so it cannot invert the arm.
The Frame instruction allows you to very accurately define the orientation of a transformation. You could use this create an inverted transformation and set your locations relative to it, but I don't think that is the issue you are seeing. | {
"domain": "robotics.stackexchange",
"id": 2390,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "robotic-arm, programming-languages",
"url": null
} |
fluid-dynamics
Title: Is Mass Flow an Additive Property? Mass ($m$) is an additive property in the sense that the total mass within a system can be simply determined by adding up the mass of each individual substance that's in it.
However, if two mass flows ($m/t$) of different liquids were to be mixed to form a single flow, would the resulting mass flow be equal to the sum of each individual mass flow? (Assuming that by "mass flow" you mean the mass flow rate $\dot m$)
Yes. Since mass is a conserved quantity, it obeys the continuity equation in the form
$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf j =0$$
where $\rho$ is the mass density and $\mathbf j$ is the mass flux.
As a consequence, if two flows with mass flow rates $\dot m_1, \dot m_2$ mix in a single flow $\dot M$, you will have
$$\dot m_1 + \dot m_2 = \dot M$$
If this wasn't true, it would mean that some mass was lost or created during the mixing. | {
"domain": "physics.stackexchange",
"id": 45773,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics",
"url": null
} |
quantum-mechanics, operators, quantum-information, quantum-entanglement, density-operator
Separable states are also called classically correlated states. A state $\rho$ is said to be separable if it can be written as a convex combination of factorized states, that is $$ \rho = \sum_i p_i \rho_A^{(i)} \otimes \rho_B^{(i)} , $$ with $p_i>0$ and $\sum_i p_i = 1$. In this case the expected value of $O_AO_B$ is $$ \left< O_A O_B \right>_\rho = \sum_i p_i \left< O_A \right>_{\rho_A^{(i)}} \left< O_B \right>_{\rho_B^{(i)}} . $$
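The factorization of $\left< O_A O_B \right>$ over a separable state can be verified numerically (a sketch with NumPy; the two product components and the Pauli observables are arbitrary choices for illustration):

```python
import numpy as np

OA = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z on subsystem A
OB = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X on subsystem B

def proj(v):
    """Density matrix |v><v| for a (normalized) state vector v."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

p = [0.3, 0.7]
rho_A = [proj([1, 0]), proj([0, 1])]   # |0>, |1>
rho_B = [proj([1, 1]), proj([1, -1])]  # |+>, |->

# separable state: convex combination of product states
rho = sum(pi * np.kron(a, b) for pi, a, b in zip(p, rho_A, rho_B))

lhs = np.trace(rho @ np.kron(OA, OB)).real
rhs = sum(pi * np.trace(a @ OA).real * np.trace(b @ OB).real
          for pi, a, b in zip(p, rho_A, rho_B))
print(lhs, rhs)  # both 1.0 for these choices
```

The agreement is exact because $\operatorname{tr}\!\left[(A \otimes B)(C \otimes D)\right] = \operatorname{tr}(AC)\operatorname{tr}(BD)$.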
This time the two subsystems are correlated, but we can say that this correlation is "classical" in some sense: the statistics of the system can be interpreted as if we knew that | {
"domain": "physics.stackexchange",
"id": 62264,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, quantum-information, quantum-entanglement, density-operator",
"url": null
} |
# General formula for factoring polynomial
I need help finding a general formula for factoring these two polynomials: $$P_{2n}(x) = x^{2n} \pm 1$$ $$P_{2n+1}(x) = x^{2n+1} \pm 1$$ where $n$ is an integer. So the goal is to find formulas for the odd and even cases. I tried experimenting with real numbers to find a pattern, but that failed. I only know that complex roots always come in conjugate pairs.
• Do you want to factor these polynomials over $\mathbb C$, over $\mathbb R$, or over an arbitrary ring? – Henning Makholm Oct 12 '16 at 20:33
• over C - complex numbers – Martin Morris Oct 12 '16 at 20:37
Hint:
let us do it for $x^{2n}-1$
we have to solve the equation
$z^{2n}=1$ with $z=e^{it}$
so $e^{2int}=e^{2ik\pi}$
which gives the roots
$z_k=e^{i\frac{k\pi}{n}}$ with
$k\in \{0,1,2,...2n-1\}$.
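The claimed roots can be checked numerically (a sketch for $n = 3$, i.e. $x^6 - 1$):

```python
import cmath

n = 3
roots = [cmath.exp(1j * k * cmath.pi / n) for k in range(2 * n)]

# each z_k satisfies z^(2n) = 1
assert all(abs(z**(2 * n) - 1) < 1e-12 for z in roots)

# and the product of (x - z_k) over all roots reproduces x^(2n) - 1
x = 1.7
prod = 1.0
for z in roots:
    prod *= (x - z)
print(abs(prod - (x**(2 * n) - 1)))  # ~0, up to floating-point error
```

Pairing each root with its complex conjugate is what produces the real quadratic factors $x^2 - 2x\cos(k\pi/n) + 1$ below.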
$x^{2n}-1= (x-1)(x+1)\left(x^2-2x\cos\frac{\pi}{n}+1\right)\cdots\left(x^2-2x\cos\frac{(n-1)\pi}{n}+1\right)$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.964321450147636,
"lm_q1q2_score": 0.827164184647555,
"lm_q2_score": 0.8577681068080748,
"openwebmath_perplexity": 189.58159885165836,
"openwebmath_score": 0.6915885806083679,
"tags": null,
"url": "https://math.stackexchange.com/questions/1965845/general-formula-for-factoring-polynomial"
} |
multi-tasking, shared-memory, multithreaded
Title: Help understanding Peterson's Algorithm for N Processes I read the Wikipedia article:
https://en.m.wikipedia.org/wiki/Peterson%27s_algorithm
What I don't understand is how the algorithm is guaranteed to work when the processor or processors are switching between threads. What stops simultaneous advancement? The rough intuition is that at least one contending process becomes stuck at each step of the stairway to heaven (AKA the critical section) when there are enough contenders present. Consequently there's at most one of them left at the top rung of the ladder.
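The N-process generalization on that Wikipedia page (the filter algorithm) can be sketched directly in Python, using the page's variable names `level` and `last_to_enter`. Each thread must climb all N−1 rungs, and the waiting condition is exactly the "stuck on the stairway" intuition above. This is a sketch, not a production lock: the algorithm needs sequentially consistent memory, which plain Python threads happen to get via the GIL.

```python
import sys
import threading

sys.setswitchinterval(5e-4)  # preempt busy-waiting threads more often (speeds up the demo)

N = 3                         # number of competing threads
level = [-1] * N              # level[i]: highest rung thread i has climbed to
last_to_enter = [0] * (N - 1) # last thread to arrive at each rung

def lock(i):
    # climb rungs 0 .. N-2; at each rung, wait while we were the last
    # to arrive AND some other thread is at this rung or higher
    for l in range(N - 1):
        level[i] = l
        last_to_enter[l] = i
        while last_to_enter[l] == i and any(
                k != i and level[k] >= l for k in range(N)):
            pass  # busy-wait

def unlock(i):
    level[i] = -1

counter = 0

def worker(i):
    global counter
    for _ in range(100):
        lock(i)
        counter += 1          # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 300 if mutual exclusion held
```

If the lock were removed, the unsynchronized increments could interleave and the final count could fall short.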
Having said that, I do remember many students struggling with grasping the finer details. It is a subtle algorithm. What could help is to model and simulate it, say, in Promela, where you could also check exhaustively whether certain properties (expressed as runtime assertions or LTL formulas) hold for the model. See below for such a model. The spin homepage should get you started with that tool. | {
"domain": "cs.stackexchange",
"id": 21663,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "multi-tasking, shared-memory, multithreaded",
"url": null
} |
quantum-mechanics, quantum-field-theory, operators, hilbert-space
C.C. Gerry & P.L. Knight, Introductory Quantum Optics, 2004; eq. (2.45). There are two different types of creation and annihilation operators:
One is the bosonic set, which obeys the commutation relation $[a,a^\dagger]=1$ (and trivially $[a,a]=0=[a^\dagger,a^\dagger]$), and for which you can define quadratures via $q=\frac12(a+a^\dagger)$ and $p=\frac{1}{2i}(a-a^\dagger)$ which obey $[q,p]=i/2$, i.e. $a=q+ip$ and $a^\dagger=q-ip$ can be seen as the ladder operators for a harmonic oscillator with Hamiltonian $H=\frac12(p^2+q^2)$.
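These bosonic relations are easy to check numerically on a truncated Fock space (a sketch; the truncation corrupts only the last diagonal entry of the commutator, while $a = q + ip$ is an exact matrix identity at any dimension):

```python
import numpy as np

N = 8  # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N, dtype=float)), k=1)  # annihilation operator
ad = a.conj().T                                          # creation operator

# [a, a†] = 1 holds except for the truncation artifact in the corner
comm = a @ ad - ad @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))

# quadratures, and the reconstruction a = q + i p
q = (a + ad) / 2
p = (a - ad) / (2j)
assert np.allclose(q + 1j * p, a)
print("checks passed")
```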
The other is the fermionic set, which obeys the anticommutation relations $\{c,c^\dagger\}=1$, $\{c,c\}=0=\{c^\dagger,c^\dagger\}$, from which follows that $c^2=0=(c^\dagger)^2$. | {
"domain": "physics.stackexchange",
"id": 44162,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-field-theory, operators, hilbert-space",
"url": null
} |
c++, beginner, vectors
// We need to ensure that the last entry doesn't equal to zero
if (vec[dimension-1] == 0){
while (vec[dimension-1] == 0){
vec[dimension-1] = randint(-MAXRANGE,MAXRANGE);
}
}
}
Several problems:
Containers from the standard library are generally preferred over raw C arrays. Use std::vector instead.
You should really be generating float values instead of int values.
Using out-parameters is less idiomatic than using the return value.
Retrying is not a good strategy to ensure that the last entry is non-zero. Generating a number in [-maxrange, 0) and then having a 50% chance of negating the sign is probably better.
You can use standard algorithms (available in header <algorithm>) to simplify the code.
Here's how I would fix these problems (without using randint). I have added some comments to help you understand.
constexpr float maxrange = 1000.0f;
std::mt19937 engine{std::random_device{}()}; | {
"domain": "codereview.stackexchange",
"id": 36010,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, vectors",
"url": null
} |
# Probability that an $80\%$-truthful person actually rolled a $6$ [duplicate]
A person, $A$, speaks the truth $4$ out of $5$ times. The person throws a die and reports that he obtained a $6$. What is the probability that he actually rolled a $6$?
I know there is a similar question like this but my doubts are different from it and also I want to identify and solve total probability theorem questions so I posted a side doubt also. In my attempt, I defined the events
\begin{align*} E_1&: \text{The person tells the truth.} \\ E_2&: \text{The person lies.} \\ E_3&: \text{The person reports that the die landed on a 6.} \end{align*}
I noted that $P(E_1)=\frac{4}{5}$, $P(E_2)=\frac{1}{5}$, $P(E_3|E_1)=6^{-1}$ and $P(E_3|E_2)=0$ and obtained
\begin{align*} P(E_3) = \frac{4}{5} \cdot \frac{1}{6} + \frac{1}{5} \cdot 0 = \frac{2}{15}. \end{align*} However, the correct answer is, $\frac{4}{9}$. What did I do wrong? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9830850862376966,
"lm_q1q2_score": 0.8127280311704435,
"lm_q2_score": 0.8267117898012104,
"openwebmath_perplexity": 529.4559058296386,
"openwebmath_score": 0.9332467913627625,
"tags": null,
"url": "https://math.stackexchange.com/questions/2520903/probably-that-an-80-truthful-person-actually-rolled-a-6/2520938"
} |
You still have to check whether the left and right limits at the integers become the same once we add these.
Let $n\in\Bbb Z$, then, if $x\to n$ from below, then as $\lfloor x\rfloor=n-1$ for $x<n$ near, we have $$\lim_{x\to n^-}\lfloor x\rfloor=n-1$$ And similarly, $\lim_{x\to n^+}\lfloor x\rfloor=n$. (This in itself shows why $\lim_{x\to n}\lfloor x\rfloor$ doesn't exist.) For the other one we have $$\lim_{x\to n^-} \sqrt{x-\lfloor x\rfloor}=1 \\ \lim_{x\to n^+} \sqrt{x-\lfloor x\rfloor}=0$$ So, adding these: $$\lim_{x\to n^-} \lfloor x\rfloor+\sqrt{x-\lfloor x\rfloor}=n-1+1=n \\ \lim_{x\to n^+} \lfloor x\rfloor+\sqrt{x-\lfloor x\rfloor}=n+0=n \, .$$ See it also on WolframAlpha. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211597623861,
"lm_q1q2_score": 0.8246425935085447,
"lm_q2_score": 0.8459424431344437,
"openwebmath_perplexity": 362.434965023431,
"openwebmath_score": 0.9732825756072998,
"tags": null,
"url": "http://math.stackexchange.com/questions/341971/limit-of-fx-lfloor-x-rfloor-sqrtx-lfloor-x-rfloor"
} |
javascript, jquery, plugin
$(this).find(defaults.selector).on('click', function(e) {
e.preventDefault();
var $parent = $(this).parent();
if ( $parent.hasClass(defaults.closedClassName) ) {
$('.'+defaults.openedClassName)
.removeClass(defaults.openedClassName)
.addClass(defaults.closedClassName)
.animate( {
height : options.minHeight
}, defaults.speed );
$parent.removeClass(defaults.closedClassName)
.addClass(defaults.openedClassName)
.animate({
height : originalHeight
}, defaults.speed);
} else if ( $parent.hasClass(defaults.openedClassName) ) { | {
"domain": "codereview.stackexchange",
"id": 4111,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, plugin",
"url": null
} |
c++, multithreading, http, server, web-services
Very Simple Job and Queue
/* Job object
 * sockfd: The stream file descriptor to send the reply on.
* fileName: The name of the file with data to be sent.
*/
struct Job
{
int sockfd;
std::string fileName;
Job* next;
Job(int sockfd, std::string const& fileName)
: sockfd(sockfd)
, fileName(fileName)
, next(NULL)
{}
~Job()
{
if (sockfd)
{
::close(sockfd);
}
}
};
class WorkQueue
{
Job* head;
Job* tail;
SimpleMutex access;
SimpleCondition condition;
bool finished; | {
"domain": "codereview.stackexchange",
"id": 18881,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, multithreading, http, server, web-services",
"url": null
} |
Intersection. In 3D, three planes can intersect (or not) in the following ways: All three planes are parallel. Finally we substituted these values into one of the plane equations to find the . Given 3 unique planes, they intersect at exactly one point! The planes will then form a triangular "tube" and pairwise will intersect at three lines. Sometimes. The intersection of the three planes is a point. We know a point on the line is (1;3… yes, three planes can intersect in one point. Three planes can intersect as a point or as a line. Was the question about three planes? For example, the xy-plane and the zw-plane intersect at the origin (0,0,0,0). Planes have a pretty special property. Using that, we only need to create one line to find the other point. Where those axes meet is considered (0, 0, 0), or the origin of the coordinate space. The intersection of the three planes is a line. I would not confront your teacher but would recheck the question and if it asks about two planes | {
"domain": "com.br",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9740426420486941,
"lm_q1q2_score": 0.8373491525712106,
"lm_q2_score": 0.8596637523076224,
"openwebmath_perplexity": 656.6382688484786,
"openwebmath_score": 0.2999741733074188,
"tags": null,
"url": "http://galvaoeadvogados.com.br/overtraining-dehydration-vxalxhx/7ab4fb-can-three-planes-intersect-at-one-point"
} |
java, strings, unit-testing
Title: Building string with max length I need to run an evaluation for a large (1 million) number of components and build an error string with comma-separated error codes. As the final string is to be persisted in an RDBMS TABLE.COLUMN with a MAX length of 255, I need to truncate the string (not abruptly) and append an ellipsis to indicate there were more failures.
Here is what I have
public class ErrorCodeStringBuilder {
private boolean isFirst = true;
private boolean isFull = false;
private StringBuilder stringBuilder;
private int MAX_CAP = 250;
private int curLength = 0;
private String ELLIPSES = "...";
public ErrorCodeStringBuilder() {
stringBuilder = new StringBuilder(20);
}
public void append(String str) {
curLength = curLength + str.length();
if (curLength >= MAX_CAP) {
isFull = true;
}
checkAndAppend(str);
} | {
"domain": "codereview.stackexchange",
"id": 37465,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, strings, unit-testing",
"url": null
} |
algorithms
Algorithm so far:
I sorted the points by the polar angle they make with the initial point P. The approach is to start with a minimum degree. I am adding them to a binary search tree which has a custom comparator that compares each inserted segment with the other segments in the tree. If the segment we're inserting is $T$ and $O$ is the segment being compared, I'm checking whether $PT_{start}$ and $O$ intersect. If they don't intersect, then $T$ should be before $O$ in the BST. That segment is visible. The problem is when I encounter lines like in the pictures above. It marks right segments as visible because there is no first line in the BST, so it can't check them against the biggest one on the left. I have no idea what my starting point should be or how to distinguish those cases. You can "split" each of those lines (whose endpoint you see before their starting point) into $2$ lines: one totally below the $x$ axis, and one totally above it. | {
"domain": "cs.stackexchange",
"id": 18596,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms",
"url": null
} |
ros, rviz
Originally posted by Shilpan with karma: 96 on 2017-03-14
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 27244,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rviz",
"url": null
} |
optics
That means that in "heavy rain" you expect to see a rainbow with a depth up to 3 km. As the rain gets heavier, the depth reduces; if it's lighter, it can be more.
The above is very approximate... | {
"domain": "physics.stackexchange",
"id": 36519,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "optics",
"url": null
} |
# Find an infinite set $S$ and a function $g : S \to S$ that is surjective but not injective.
Find an infinite set $S$ and a function $g : S \to S$ that is surjective but not injective.
This is all that is given in the problem. Should I fix $S$ to be a certain set, like the integers, or natural numbers, and work from there? Or should I just create a function, like $f(x) = x^2$? Any guidance would be appreciated.
What about this: Let $S$ be $\mathbb{R}$, and let $g : S \to S$ be defined by $f(x) = x^2$. This is surjective but not injective.
EDIT 11/27, 3:53pm CT: I totally confused myself about what it means to be surjective. I'm not sure what I was thinking. Clearly, $f(x) = x^2$ is not surjective since each y-coordinate is not mapped to. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.960361157495521,
"lm_q1q2_score": 0.8442984118152703,
"lm_q2_score": 0.8791467722591728,
"openwebmath_perplexity": 234.5671390055516,
"openwebmath_score": 0.9181591272354126,
"tags": null,
"url": "https://math.stackexchange.com/questions/2540248/find-an-infinite-set-s-and-a-function-g-s-to-s-that-is-surjective-but-not/2540343"
} |
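One concrete choice (an assumption on my part, not the only possible answer): take $S = \mathbb{N}$ and $g(n) = \lfloor n/2 \rfloor$, which is surjective (every $k$ is hit by $2k$) but not injective ($2k$ and $2k+1$ collide). A quick sanity check in Python:

```python
def g(n):
    """g : N -> N with g(n) = floor(n / 2); surjective but not injective."""
    return n // 2

print(g(4), g(5))                              # both inputs map to 2: not injective
print(all(g(2 * k) == k for k in range(10)))   # every k has preimage 2k: surjective
```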
electrostatics, symmetry
Now, suppose the solution involved the electric field pointing in, say, the "up" direction (toward the charge at the top of the triangle). Now again imagine rotating the configuration of charges by 120 degrees. The electric field vector will now be pointing diagonally downward. But this is a contradiction -- for a given configuration of charges, there is only one possible electric field, yet we have shown that there are two allowed electric fields for the same problem. The resolution is that the electric field must have been zero.
In other words, the original problem does not change if we rotate the charges by 120 degrees. Therefore, the solution (ie, the electric field) must also not change if we rotate the charges by 120 degrees. In this context, the only possible electric field is zero, since any non-zero vector will change under a 120 degree rotation. | {
"domain": "physics.stackexchange",
"id": 97673,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, symmetry",
"url": null
} |
The elevator optimization could target waiting time, cost, and routing. All of these need to learn the passenger traffic flow/pattern in the building -- for example, statistical forecasts of entering and exiting passengers per floor and direction at fifteen-minute intervals. The three traffic components in a building, i.e. incoming, outgoing and inter-floor, are forecast.
Solution:
The main idea is that idle elevators should automatically move to positions that best serve the usage relationships/patterns through the day.
• Morning/post lunch: One-to-many relationship. All elevators return to the 1st floor automatically when idle. Most elevators can even go non-stop down to that floor, bypassing other calls. | {
"domain": "codebycase.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9930961607027984,
"lm_q1q2_score": 0.8254049677460618,
"lm_q2_score": 0.831143045767024,
"openwebmath_perplexity": 1091.9789664411348,
"openwebmath_score": 0.5078982710838318,
"tags": null,
"url": "https://codebycase.com/algorithm/a14-math-and-logic-puzzles.html"
} |
automata, finite-automata
Alphabet: Σ is {(0 0), (0 1), (1 0), (1 1)}
Definition of Δ to strings recursively:
Δ*(q, ε) = q for all q ∈ Q
Δ*(q, xa) = Δ(Δ*(q, x), a) for all q ∈ Q, x ∈ Σ*, a ∈ Σ
Given a string, s ∈ Σ*, we define left(s) and right(s) to be the numbers represented in custom binary numbers by the reverse of the left and right rows of bits in s. For example, if
s = (1 0)(0 0)(0 1)(1 0)(0 0)(1 1)
then left(s) = n(101001) = 19 and right(s) = n(100100) = 16, where
$n(s) = \sum_{i=1}^{L} s_i \cdot F_i$
where s is of length L,
and $s = s_L s_{L-1} s_{L-2} \ldots s_1$ (so the last character of s is $s_1$) for the s in n(s),
also the Fibonacci numbers start off as follows:
$F_0$ = 1
$F_1$ = 1
$F_n$ = $F_{n-1}$ + $F_{n-2}$ for n >= 2
Questions:
(a) If s is a string of length L and Δ*(A, s) = B, state a very simple arithmetic expression (in terms of L) for left(s).
(b) If s is a string of length L and Δ*(A, s) = C, state a very simple arithmetic expression (in terms of L) for left(s).
(c) Prove by induction on n: | {
"domain": "cs.stackexchange",
"id": 3380,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "automata, finite-automata",
"url": null
} |
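The Fibonacci-base numbering $n(s)$ defined above can be checked against the worked example with a short Python sketch:

```python
def n(s):
    """n(s) = sum_{i=1}^{L} s_i * F_i, where the string is written
    s = s_L s_{L-1} ... s_1 (so its LAST character is s_1) and the
    Fibonacci numbers start F_0 = F_1 = 1, F_n = F_{n-1} + F_{n-2}."""
    L = len(s)
    F = [1, 1]
    while len(F) <= L:
        F.append(F[-1] + F[-2])
    return sum(int(bit) * F[L - idx] for idx, bit in enumerate(s))

# reversed left and right rows of s = (1 0)(0 0)(0 1)(1 0)(0 0)(1 1)
print(n("101001"), n("100100"))  # 19 16
```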
lagrangian-formalism, condensed-matter, field-theory, action, noethers-theorem
Title: Missing minus sign when taking derivative I'm trying to understand to get the following formula (first formula on pg 33) in Altland Simons second edition:
$$\Delta S \simeq \int d^m x \, (1 + \partial_{x_\mu} \, (\omega_a \, \partial_{\omega_a} \, x_{\mu} \, )) \, L \, (\phi^i + F_a^i \omega_a \, , \, (\delta_{\mu \nu} - \partial_{x_{\mu}}\, (\omega_a \, \partial_{\omega_a} \, x_{\nu} \, )) \, \partial_{x_{\nu}} \, (\phi^i + F_a^i \, \omega_a)) \\
- \int d^m x \, L \, (\phi^i(x), \, \partial_{x_{\mu}} \, \phi^i (x))$$
Specifically how do we get the expression $\delta_{\mu \nu} - \partial_{x_{\mu}}\, (\omega_a \, \partial_{\omega_a} \, x_{\nu} \, )$?
This is related to the PSE question here, but I'm confused about the step before the question asked in that link. The summary given there is great and I have included it below: | {
"domain": "physics.stackexchange",
"id": 83409,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lagrangian-formalism, condensed-matter, field-theory, action, noethers-theorem",
"url": null
} |
speed-of-light, si-units, metrology, length
Consider the Magnetic Flux Density (B-field) and the Magnetic Field Intensity (H-field). The first is measured in tesla and the second in ampere per meter. They are arithmetically independent (because tesla cannot be written solely from ampere per meter). We have the relation $B = \mu H$, where $\mu$ is measured in $T \cdot m \cdot A^{-1}$. Well, instead, we could have $T = A \cdot m^{-1}$ and have $\mu$ dimensionless, had science developed in another way.
Consider the Electric Charge and the Electric Current. The first is measured in coulomb and the second in ampere. They are arithmetically dependent: $C = A \cdot s$. We have the relation $i = \frac{dq}{dt}$, or for the sake of the argument, $i = \alpha \frac{dq}{dt}$ where $\alpha = 1$ is dimensionless. Well, instead, we could have $C$ and $A$ arithmetically independent, and have $\alpha$ be measured in $A \cdot s \cdot C^{-1}$ had science developed in another way. | {
"domain": "physics.stackexchange",
"id": 39682,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "speed-of-light, si-units, metrology, length",
"url": null
} |
neuroscience
Update: From a bit of research, it turns out that the company selling the headphones is not "confused" as I politely offered. I don't think this site is the appropriate forum to refute their research or claims. Suffice to say that the retina is the only part of the human brain shown to be photosensitive. | {
"domain": "biology.stackexchange",
"id": 101,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neuroscience",
"url": null
} |
evolution, mosquitoes
It has been found that single-letter differences in people's genomes at specific locations affect their attractiveness to mosquitoes. This means some humans might have evolved to be less attractive to mosquitoes. Does any proof exist regarding that? We now know based on a twin study that attractiveness to mosquitoes is heritable (with a fairly high heritability of 0.62). Based on a genome-wide association study (GWAS) [which is probably the study referenced in the youtube video] we have found some of the genetic loci that control attractiveness; in particular, 15 single-nucleotide polymorphisms (SNPs) relating to (self-reported) attractiveness were discovered in the latter study. These SNPs were in regions of the genome related to the immune system (which isn't surprising since immune system genes are closely linked to human odor profiles). | {
"domain": "biology.stackexchange",
"id": 8306,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "evolution, mosquitoes",
"url": null
} |
ros2
Title: How to generate custom control sets for Smac State Lattice Planner?
https://navigation.ros.org/configuration/packages/configuring-smac-planner.html
Link shows " See the Smac Planner package to generate custom control sets for your vehicle ". But was not able to find how to generate custom json file (lattice_filepath argument in plugin nav2_smac_planner/SmacPlannerLattice).
Any leads will be helpful!
Thank you.
Originally posted by jainr on ROS Answers with karma: 19 on 2021-12-26
Post score: 0
Here is something described about the control sets:
link text
But I also haven't really understood how to properly make my own custom ones. I tried to define the output.json file manually, but with unsuccessful results (at least the planner didn't behave as I expected)
Originally posted by ahopsu with karma: 83 on 2022-02-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37289,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros2",
"url": null
} |
natural-language-processing
Notice that the sixth level is not words or sentences. Words and sentences are later developments in human language and occur later in an individual's childhood. Our educational systems are largely word oriented, but people do not talk or listen in words. There is no signal for "way" in the brain when someone hears, "No way!" which, in the current semantic state of U.S. English is a single linguistic element that means, "What you just said is very surprising to me." Two written words represent this one linguistic element.
Conversely, there are two distinct signal representations for, "wanted", specifically, "want," and "-ed." The second of the two is reused for, "planted." The -ed ending is not relearned for every verb.
Consider these lines out of a dialog. | {
"domain": "ai.stackexchange",
"id": 853,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "natural-language-processing",
"url": null
} |
database-theory, database-concurrency
Title: How are blind writes recoverable in a transaction schedule? Consider the following schedule -
T1 T2
R(A)
W(A)
R(A)
W(A)
Commit
Commit
I understand that this schedule is non-recoverable, because if a failure occurs between the two commits, then we can't rollback the operations performed by T2, as they would already have been committed.
But, if we change this schedule to
T1 T2
R(A)
W(A)
R(A)
W(A)
Commit
Commit | {
"domain": "cs.stackexchange",
"id": 21370,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "database-theory, database-concurrency",
"url": null
} |
• There are 2 ways to shade two center squares.
• There are 2 ways to shade two corner squares.
• After shading one of the edge squares, there are 6 ways to shade another edge square. (The seventh, with the two shaded squares separated by a knight's move, is equivalent under $90^\circ$ rotation to one of the others.)
• After shading a center square, there are 3 ways to shade a corner square.
• After shading a center square, there are 4 ways to shade an edge square.
• After shading a corner square, there are 4 ways to shade an edge square.
That's 21 shadings total, which checks. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9838471628097781,
"lm_q1q2_score": 0.8155490967887863,
"lm_q2_score": 0.8289388104343892,
"openwebmath_perplexity": 275.1820406734542,
"openwebmath_score": 0.7285181283950806,
"tags": null,
"url": "https://math.stackexchange.com/questions/744161/counting-shaded-squares"
} |
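The total of 21 can be verified by brute force, assuming the problem is two shaded squares in a 4x4 grid, counted up to all eight symmetries of the square (rotations and reflections; with rotations alone the count would come out differently):

```python
from itertools import combinations

N = 4  # assumed 4x4 grid

def symmetries(cells):
    """Yield the image of a cell set under all 8 symmetries of the square."""
    pts = cells
    for _ in range(4):
        pts = frozenset((c, N - 1 - r) for (r, c) in pts)  # rotate 90 degrees
        yield pts                                          # a pure rotation
        yield frozenset((r, N - 1 - c) for (r, c) in pts)  # ...composed with a reflection

def canonical(cells):
    """Lexicographically smallest representative of the symmetry orbit."""
    return min(tuple(sorted(s)) for s in symmetries(frozenset(cells)))

grid = [(r, c) for r in range(N) for c in range(N)]
shadings = {canonical(pair) for pair in combinations(grid, 2)}
print(len(shadings))  # 21
```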
c#, strings, array
Title: Compare the index of each char in a string in an alphabet array to a range of numbers I saw this codegolf challenge and I set out to try and write a solution for it. I'm nowhere near an expert codegolfer (or programmer), but it was an interesting exercise.
Now I'm wondering how to improve my code, as it feels really bulky (especially compared to some of the answers to the challenge). Note that I'm not looking for ways to golf this code, I'm merely looking for general improvements and optimizations, hints and tips.
I hope the title is clear as to what the program does, it was pretty hard to describe!
Anyway, what my program does is pretty simple: | {
"domain": "codereview.stackexchange",
"id": 10800,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, strings, array",
"url": null
} |
cc.complexity-theory, physics
Title: Formal notion for energy complexity of computational problems Computational complexity includes the study of the time or space complexity of computational problems. From the perspective of mobile computing, energy is a very valuable computational resource. So, is there a well-studied adaptation of Turing machines that accounts for the energy consumed during the execution of algorithms? Also, are there established energy-complexity classes for computational problems?
References are appreciated. Is there a well studied adaptation of Turing machines that account for the energy consumed during the execution of algorithms? No!
But maybe you could come up with one. It's possible you could divide the Turing machine steps into reversible and non-reversible (the non-reversible ones are where information is lost). Theoretically, it is only the non-reversible steps that cost energy. A cost of one unit of energy for each bit that is erased would theoretically be the right measure. | {
"domain": "cstheory.stackexchange",
"id": 5029,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cc.complexity-theory, physics",
"url": null
} |
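As a toy illustration of the "one unit of energy per erased bit" proposal above (purely a cartoon of the accounting idea, not an established model):

```python
def erasure_cost(tape, writes):
    """Replay write operations on a tape and charge one energy unit
    whenever a write destroys information (overwrites a different bit).
    Writes that leave the bit unchanged are treated as free; this is
    only a sketch of Landauer-style accounting for irreversible steps."""
    tape = list(tape)
    energy = 0
    for pos, bit in writes:
        if tape[pos] != bit:
            energy += 1      # irreversible step: the old bit is lost
        tape[pos] = bit
    return energy

print(erasure_cost([0, 1, 1, 0], [(0, 1), (1, 1), (2, 0)]))  # 2
```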
turing-machines, reductions, undecidability
Title: Reduce ATM to the language of TM encodings concatenated by a string where the TM accepts both the string and its reverse Prove that the language $LM =\{\langle M,x\rangle\mid \ M \text{ accepts }x\text{ and rev}(x) \}$, where $\mathrm{rev}(x)$ is the reverse of the string $x$, is undecidable with a reduction from $A_{\mathrm{TM}}$. Note that the empty string belongs to $LM$. Hint
Forget the similarity between $LM$ and $A_{\mathrm{TM}}$, and for an input $\langle M,x\rangle$ of the reduction, try to build a new TM $N$ such that $M$ accepts $x$ if and only if $N$ accepts $0$, i.e. $\langle N,0\rangle\in LM$.
Complete answer
Given $\langle M,x\rangle$, build a TM $N_{M,x}$ that on any input always simulates $M$ on $x$. Therefore if $M$ accepts $x$, $N_{M,x}$ accepts everything; otherwise $N_{M,x}$ accepts nothing. Now we can see $M$ accepts $x$ if and only if $N_{M,x}$ accepts $0$, i.e. $\langle N_{M,x},0\rangle\in LM$. This completes the reduction. | {
"domain": "cs.stackexchange",
"id": 11050,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "turing-machines, reductions, undecidability",
"url": null
} |
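The machine $N_{M,x}$ built in the reduction can be sketched as a higher-order function, treating a TM as a black-box predicate (the names here are illustrative only):

```python
def make_N(M, x):
    """Build N_{M,x}: a 'machine' that ignores its own input and
    simulates M on the fixed input x. It accepts every string
    (in particular 0 and rev(0) = 0) iff M accepts x."""
    def N(_input):
        return M(x)
    return N

# toy stand-in for a TM: M accepts strings containing "ab"
M = lambda w: "ab" in w
N = make_N(M, "xaby")
print(N("0"), N(""), N("anything"))  # True True True
```

(In the real reduction, of course, M may loop rather than halt; the point is only the acceptance equivalence.)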
correctness-proof, network-flow
How can we have $(u,v)\notin E_f$ and $(u,v)\in E_{f'}$? The augmentation must have increased the flow from $v$ to $u$. The Edmonds-Karp algorithm always augments flow along shortest paths, and therefore the shortest path from $s$ to $u$ in $G_f$ has $(v,u)$ as its last edge.
The edge set $E_f$ is before the first augmentation and $E_{f'}$ is after it. The first augmentation must remove the edge $(v,u)$ and put the reverse edge $(u,v)$ into the edge set $E_{f'}$, but not the same edge $(v,u)$. That is why I ask how we can have $(u,v)$ in $E_{f'}$.
$(u,v)\notin E_f$ and $(u,v)\in E_{f'}$ mean that the newly augmentation augments flow (a cancellation flow for $(u,v)$) along the edge $(v, u)$. Recall that, augmentation happens on the shortest path, which means that $(v,u)$ is on the shortest path $p$ from $s \leadsto v \rightarrow u \leadsto t$. If $p$ overlaps with the shortest path from $s \leadsto u$, $\delta _{f}\left ( s,v \right ) = \delta _{f}\left ( s,u \right )-1$ holds. However I am not sure now that the overlapping does happen. | {
"domain": "cs.stackexchange",
"id": 21438,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "correctness-proof, network-flow",
"url": null
} |
astrophysics, astronomy, wavelength, doppler-effect, redshift
The two wavelengths above have been very precisely measured for sodium atoms at rest in the lab framework. In addition, the ratio of the two wavelengths has been calculated: $1.00101427$.
If the sodium is moving towards or away from the observer at some unknown speed, the two emission lines will both be Doppler shifted by the same factor, but the ratio will stay the same!
So, if an astronomer takes a spectrum of a distant star and sees two very close, strong, emission or absorption lines, he/she will calculate the ratio of the two wavelengths. If the result is the same as the ratio above, then the original wavelengths are known, and the observed wavelengths, via the Doppler shift, will produce a velocity of recession or approach. | {
"domain": "physics.stackexchange",
"id": 47934,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "astrophysics, astronomy, wavelength, doppler-effect, redshift",
"url": null
} |
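The doublet-ratio method can be sketched in a few lines of Python. This assumes the sodium D-line wavelengths and ratio quoted above and the non-relativistic approximation $v \approx cz$; the tolerance and function name are illustrative:

```python
C = 299_792.458                      # speed of light, km/s

# rest-frame sodium D-line wavelengths (nm) and their invariant ratio
REST_D2, REST_D1 = 588.9950, 589.5924
REST_RATIO = REST_D1 / REST_D2       # ~1.00101427

def radial_velocity(obs_short, obs_long, tol=1e-6):
    """Identify a Doppler-shifted sodium doublet by its wavelength ratio
    (unchanged by the shift) and return the radial velocity in km/s
    (positive = receding), using the non-relativistic v = c*z."""
    if abs(obs_long / obs_short - REST_RATIO) > tol:
        raise ValueError("lines do not match the sodium doublet ratio")
    z = obs_short / REST_D2 - 1      # both lines give the same redshift z
    return C * z

# a source receding at 300 km/s shifts both lines by the same factor
factor = 1 + 300 / C
print(round(radial_velocity(REST_D2 * factor, REST_D1 * factor)))  # 300
```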
c#, performance, unity3d
if (x.shader.name != y.shader.name) return false;
var xHasColor = x.HasProperty("_Color");
var yHasColor = y.HasProperty("_Color");
if (xHasColor != yHasColor) return false;
if (xHasColor && x.color != y.color) return false;
var xHasTexture = x.HasProperty("_MainTex");
var yHasTexture = y.HasProperty("_MainTex");
if (xHasTexture != yHasTexture) return false;
if (xHasTexture && x.mainTexture != y.mainTexture) return false;
return true;
}
public int GetHashCode(Material mat)
{
unchecked
{
var hasColor = mat.HasProperty("_Color");
var hasTexture = mat.HasProperty("_MainTex");
var hashCode = mat.shader.name.GetHashCode();
hashCode = (hashCode * 397) ^ (hasTexture ? mat.mainTexture.GetHashCode() : 0);
hashCode = (hashCode * 397) ^ (hasColor ? mat.color.GetHashCode() : 0);
return hashCode;
}
}
} | {
"domain": "codereview.stackexchange",
"id": 11398,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, unity3d",
"url": null
} |
condensed-matter, solid-state-physics, thermoelectricity, charge-carrier, hall-effect
If the energy dependence of the relaxation time is unimportant, then the sign of the thermopower is determined by the sign of the effective mass, averaged over the Fermi surface, i.e., by whether the carriers are electrons or holes...
However, the thermopower is not a very valuable probe of the fundamental electronic properties of a metal; the energy dependence of $\tau$ is not well understood, the validity ... depends on the relaxation time approximation, and, most important, vibrations of the lattice can affect the transport of thermal energy in a way that makes it very difficult to achieve an accurate theory of the thermopower. (p. 258) | {
"domain": "physics.stackexchange",
"id": 55372,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "condensed-matter, solid-state-physics, thermoelectricity, charge-carrier, hall-effect",
"url": null
} |
0
• Perhaps you can use the new AsymptoticDSolveValue or AsymptoticIntegrate in Mma 10.3 to show that any perturbation from the 1/R^2 solution makes the result worse, showing the stationarity of your solution. – Thies Heidecke Apr 4 '18 at 21:04
• Mathematica can't solve this kind of integro-equation. If you execute: DSolve[Integrate[(r^2/(2*r0^2) - 1/2 - R^2/(2*r0^2))*f[R], {R, r - r0, r + r0}, Assumptions -> {r > r0 > 0}] == a, f[x], x] /. a -> 0 it gives a warning message: Supplied equations are not differential or integral equations of the given functions – Mariusz Iwaniuk Apr 4 '18 at 21:57
• In general Mathematica cannot solve functional equations directly, however with a bit of insight it can be quite helpful also in solving integral equations. – Artes Apr 5 '18 at 3:17
• @Artes I clicked on "unaccept" by mistake. I did accept it, as I said in the comment below your answer. But I am still trying to understand it fully. – Tigran Aivazian Apr 5 '18 at 16:05
## 1 Answer | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812327313545,
"lm_q1q2_score": 0.800572190390176,
"lm_q2_score": 0.8267117983401362,
"openwebmath_perplexity": 900.5328778960726,
"openwebmath_score": 0.7816450595855713,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/170445/how-do-i-solve-an-integral-equation-related-to-the-newtonian-gravity"
} |
random, bash, gui
xdotool search --name "$1" windowactivate --sync
}
# we need to keep track of these two variables used by mouse_click function
previous_rand=10
operation_add=true
function mouse_click {
# 1. invert the operation_add boolean value;
# it seems Bash does not have inbuilt command for that
# 2. operation_add determines whether we will be adding or
# subtracting the random number later
if [[ $operation_add == true ]]; then
operation_add=false
else
operation_add=true
fi
# 3. generate random number between 0 and 7, inclusive;
# if the generated number is the same as the previous_rand,
# generate until it is different
# 4. rand will later be used as pixel offset from the given coordinates
rand=$(random_number 0 7)
while [[ $rand == $previous_rand ]]; do
rand=$(random_number 0 7)
done | {
"domain": "codereview.stackexchange",
"id": 20260,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "random, bash, gui",
"url": null
} |
fft, frequency-spectrum
The 0 Hz or DC component of an FFT result appears double height because it includes its own mirrored value.
Unless your sine waves are exactly periodic within the FFT length, you will need to interpolate the true amplitudes, since their peaks will fall between the FFT result bin centers.
The IDFT and IFFT can be used as signal generators, although great care must be taken with phase for any frequencies that are not exactly periodic in the FFT's length. See Phase Vocoder analysis/synthesis for details. | {
"domain": "dsp.stackexchange",
"id": 1138,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, frequency-spectrum",
"url": null
} |
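The point about energy being split between a bin and its mirror, while the DC value stands alone at full height, shows up in a short NumPy experiment (the signal here is chosen to be exactly periodic in the FFT length):

```python
import numpy as np

N = 64
t = np.arange(N)
x = 1.0 + np.cos(2 * np.pi * 4 * t / N)  # DC offset 1 plus a unit sinusoid at bin 4
X = np.abs(np.fft.fft(x)) / N

# the sinusoid's unit amplitude is split between bins 4 and N-4 (0.5 each),
# while the DC value sits alone in bin 0 at its full height of 1.0
print(np.round(X[[0, 4, N - 4]], 3))
```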
java
I don't quite like it because I duplicate the best == null clause. Would it be better in these types of situation to duplicate the nodes.put() instead? Or is there some third, more readable, way without duplicates? I think it's a matter of taste and preference. In either case, you would have to duplicate something. But, there's something that drew my attention. I don't claim it to be correct or be a solution to your problem. Hard to say without knowing the context and data structures that you use. Anyways, here we go... | {
"domain": "codereview.stackexchange",
"id": 3355,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java",
"url": null
} |
text-mining, feature-extraction, text
Use a better stopwords vocabulary. If you still have words like "to" and "at", then you are either not removing stopwords or using a lousy vocabulary. Try using the Spanish stopwords from nltk:
from nltk.corpus import stopwords
stopwords.words('spanish')
Use max_df < 1. This will truncate words that appear in more than that percentage number of documents.
The TF-IDF part that punishes common words (transversal to all documents) is the IDF part of TF-IDF, which means inverse document frequency. Several functions may be used as your IDF function. Usually, IDF=$\log\frac{\text{#documents}}{\text{#documents where word appears}}$ is used. You could try a more punitive IDF function. sklearn does not seem to allow you to specify it, but you can use nltk or gensim or easily implement your own TF-IDF vectorization. It needs no more than five lines of code. | {
"domain": "datascience.stackexchange",
"id": 990,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "text-mining, feature-extraction, text",
"url": null
} |
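As the answer says, a TF-IDF vectorizer takes only a few lines. Here is one minimal version using the $\log(N/\mathrm{df})$ weighting from the answer (the exact normalization differs from sklearn's):

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per doc,
    with tf = count / doc length and idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c / len(doc) * idf[t] for t, c in Counter(doc).items()}
            for doc in docs]

weights = tfidf([["gato", "negro"], ["gato", "blanco"]])
print(weights[0]["gato"])   # 0.0 -- a word in every document is fully punished
```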
python, beginner, python-3.x, neural-network
def predict(self,X):
    s=X.dot(self.w_)+self.b_
    y_hat=list(map(lambda x: (1, -1)[x<0],s))
    return y_hat
Style
Python comes with an official Style Guide, often just called PEP8, and especially as a beginner it's a good starting point to get going. Of course coding style is often a matter of choice; however, there are aspects you should definitely follow.
Whitespace
As you know, Python's code structure is built upon indentation, so some parts are already language-defined. The style guide also has recommendations on how to use whitespace within statements and expressions. For example, , should always be followed by a single space character, while = is preceded and followed by a single space when used in assignments (there should be no whitespace around = if used in keyword arguments to functions such as foo(batz='bar')). So you would go from
gradL_W,gradL_b=self.backward(y_i,y_hat,x_i)
self.w_,self.b_=self.update(gradL_W,gradL_b) | {
"domain": "codereview.stackexchange",
"id": 34495,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-3.x, neural-network",
"url": null
} |
-
Dividing the $k$th column of the lower triangular matrix $T$ (OEIS A111492) in Andrej Bauer's answer by $(k-1)!$ for each column generates A135278 (the $f$-vectors, or face-vectors for the $n$-simplexes). Then ignoring the first column gives A104712, so $T$ acting on the column vector $(-0,d,-d^2/2!,d^3/3!,...)$ gives the Euler classes for hypersurfaces of degree $d$ in $CP^n$. (See Daniel Dugger, A Geometric Introduction to K-Theory, pg. 168.)
$T$ also has relations to the number of permutations of the symmetric group $S_n$ that are pure $k$-cycles, colored forests of "naturally-grown" trees, disposition of flags on flagpoles, the colorings of the vertices of the complete graphs $K_n$, encoded in their chromatic polynomials (see A130534), and the commutator $[log(D), x^nD^n]=d(x^nD^n)/d(xD)$ for $D=d/dx$ (cf. A238363).
Update (Apr 26 and May 20 2014): | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9719924777713886,
"lm_q1q2_score": 0.8162150999065391,
"lm_q2_score": 0.8397339676722393,
"openwebmath_perplexity": 357.06024019774816,
"openwebmath_score": 0.8864367604255676,
"tags": null,
"url": "http://mathoverflow.net/questions/78540/a-sum-involving-derivatives-of-vandermonde?sort=oldest"
} |
$Y(\theta, \phi)$ are the spherical harmonics, with a normalization constant multiplying the solution as described so far to make independent spherical harmonics orthonormal: $Y_\ell^m(\theta, \phi) = \sqrt{\frac{2\ell + 1}{4\pi}\,\frac{(\ell - m)!}{(\ell + m)!}}\, P_\ell^m(\cos\theta)\, e^{im\phi}$. Spherical harmonics 2020 1 Problems with spherical symmetry: spherical harmonics. Suppose our potential problem has spherical boundaries. 1) The presence of the W-factor serves to destroy separability except in favorable special cases. Spherical harmonics describe the angular part of a particle's motion when it's bound in a spherically isotropic potential. Any function $f(\theta, \phi)$ can be expanded in terms of spherical harmonics: $f(\theta,\phi) = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} A_{\ell m} Y_{\ell m}(\theta,\phi)$, where $A_{\ell m} = \int_0^{2\pi}\!\int_0^{\pi} f(\theta,\phi)\, Y_{\ell m}^{*}(\theta,\phi)\, \sin\theta \, d\theta \, d\phi$. There are several useful special cases for spherical harmonics that we should keep in mind. Then we would like to solve the problem in spherical coordinates. Their attractive properties with regard to rotations make them an intuitive and convenient choice as basis functions when searching in a | {
"domain": "rudina.sk",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.980580656357574,
"lm_q1q2_score": 0.8084514232522176,
"lm_q2_score": 0.8244619328462579,
"openwebmath_perplexity": 1342.0224781602496,
"openwebmath_score": 0.6324329972267151,
"tags": null,
"url": "https://www.rudina.sk/l2r6z6/93e0b2-spherical-harmonics-for-dummies"
} |
quantum-mechanics, bosons, quantum-statistics
$$\text{two heads}, \text{two tails}, \text{one each}$$
and hence in the microcanonical ensemble (where each distinct quantum state is equally probable) there is a $2/3$ chance of double occupancy, not $1/2$. That's what people mean when they say bosons "clump up", though it's not really a consequence of bosonic statistics, just a consequence of the particles being identical. Whenever a system of bosonic particles is in thermal equilibrium, there exist fewer states with the bosons apart than you would naively expect, if you treated them as distinguishable particles, so you are more likely to see them together. | {
"domain": "physics.stackexchange",
"id": 56994,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, bosons, quantum-statistics",
"url": null
} |
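The state-counting in this answer is easy to reproduce: for identical bosons the states are multisets, while for distinguishable particles they are ordered pairs. A quick enumeration in Python:

```python
from itertools import combinations_with_replacement, product

# two identical bosons in two modes: states are multisets
boson_states = list(combinations_with_replacement("HT", 2))
together = sum(1 for a, b in boson_states if a == b)
print(together, "of", len(boson_states))   # 2 of 3 -> double occupancy 2/3

# two distinguishable coins: states are ordered pairs
coin_states = list(product("HT", repeat=2))
print(sum(1 for a, b in coin_states if a == b), "of", len(coin_states))  # 2 of 4 -> 1/2
```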
ros, geometry, transform-broadcaster, transform
Originally posted by tfoote with karma: 58457 on 2012-03-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by thebyohazard on 2013-07-26:
Is this still the case with groovy, or has such a patch been written? | {
"domain": "robotics.stackexchange",
"id": 8625,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, geometry, transform-broadcaster, transform",
"url": null
} |
electrostatics, electric-fields
be greatly appreciated! You've written it out yourself: "$V= -E_0z+ C$". $z= r\cos\theta$, so $V\sim -E_0 r$ as $r\rightarrow \infty$. Then we do have to deal with the $A_l$ terms, but it doesn't mean all of them. We want to match the coefficients of $A_l$ to $V$, which means $A_1 \neq 0$ and $A_l = 0$ for $l\neq1$. | {
"domain": "physics.stackexchange",
"id": 56906,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, electric-fields",
"url": null
} |
electricity, hydrogen, molecules
Title: Do H$_2$ fuel cells function the same as H? Are the two hydrogen atoms of the molecule broken apart first? Or does the $H_2$ react directly with the dielectric/anode/cathode to produce electricity?
i.e.
http://www.fuelcellstore.com/horizon-aerostak-a200 In the Standard Hydrogen electrode (SHE) the electrode used is made of platinum which is inert and does not participate in any reactions occurring in the electrochemical cell but it provides its surface for conduction of electrons.
The following reaction takes place if SHE acts as cathode,
2H+ (aq) + 2e -> H2 (g) {reduction half reaction}
And if SHE acts as anode then the reaction taking place is,
H2 (g) -> 2H+ (aq) + 2e {oxidation half reaction}.
In case of a fuel cell,
At Anode: 2H2 (g) + 4OH- (aq) -> 4H2O (l) + 4e
At Cathode: O2 (g) + 2H2O (l) + 4e -> 4OH- (aq)
Overall Reaction : 2H2 (g) + O2 (g) -> 2H2O (l)
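As a numeric aside, the driving voltage of the overall reaction can be estimated from tabulated standard electrode potentials. The values below are commonly quoted textbook numbers, used here as assumptions:

```python
# Alkaline fuel cell, standard conditions.
# Cathode (reduction): O2 + 2H2O + 4e- -> 4OH-,       E ~ +0.401 V
# Anode couple, written as a reduction:
#   2H2O + 2e- -> H2 + 2OH-,                           E ~ -0.828 V
E_cathode = 0.401
E_anode = -0.828
E_cell = E_cathode - E_anode   # cell EMF = cathode minus anode
print(f"standard cell EMF = {E_cell:.3f} V")  # 1.229 V
```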
"domain": "physics.stackexchange",
"id": 37280,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electricity, hydrogen, molecules",
"url": null
} |
nlp
Bag of Words (usually with tf-idf weights) is a simple but quite efficient representation for classification based on the text topic or similar, assuming the classes are reasonably distinct from each other.
Word embeddings are a more advanced option for semantic-based classification. They can handle more subtle semantic relations but require being trained on a large corpus. Using pre-trained embeddings can be a solution, but then there is a risk that the corpus they were trained on is not perfectly suited to the dataset at hand.
N-grams models can be used in many different ways but are often chosen when the classification involves syntax and/or writing style. Note that the higher the value $n$, the larger the training corpus needs to be, this can also be taken into account in the choice.
I might have around 40 categories and then around a same number of sub-categories upto 4 levels. | {
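The bag-of-words option with tf-idf weights can be sketched in a few lines of plain Python; the toy corpus below is invented purely for illustration:

```python
import math
from collections import Counter

docs = ["the cat sat", "the dog barked", "the cat chased the dog"]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def tfidf(tokens):
    # term frequency times inverse document frequency
    tf = Counter(tokens)
    return {w: tf[w] * math.log(N / sum(w in doc for doc in tokenized))
            for w in tf}

weights = tfidf(tokenized[0])
# "the" occurs in every document, so its idf = log(3/3) = 0
# and it carries no weight, while rarer words score higher.
print(weights["the"], weights["cat"])
```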
"domain": "datascience.stackexchange",
"id": 6831,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nlp",
"url": null
} |
Proof. First proof. It is clear that (5), (6), and (7) are equivalent. It is clear that (4) and (8) are equivalent as every quasi-compact open is a finite union of standard opens. The implication (7) $\Rightarrow$ (4) follows from Lemma 10.25.4. Assume (4) holds. Let $\mathfrak p, \mathfrak p'$ be distinct primes of $R$. Choose an $f \in \mathfrak p'$, $f \not\in \mathfrak p$ (if needed switch $\mathfrak p$ with $\mathfrak p'$). Then $\mathfrak p' \not\in D(f)$ and $\mathfrak p \in D(f)$. By (4) the open $D(f)$ is also closed. Hence $\mathfrak p$ and $\mathfrak p'$ are in disjoint open neighbourhoods whose union is $X$. Thus $X$ is Hausdorff and totally disconnected. Thus (4) $\Rightarrow$ (2) and (3). If (3) holds then there cannot be any specializations between points of $\mathop{\mathrm{Spec}}(R)$ and we see that (5) holds. If $X$ is Hausdorff then every point is closed, so (2) implies (6). Thus (2), (3), (4), (5), (6), (7) and (8) are equivalent. Any profinite space is Hausdorff, | {
"domain": "columbia.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9916842227513688,
"lm_q1q2_score": 0.8084532020164499,
"lm_q2_score": 0.8152324938410784,
"openwebmath_perplexity": 195.41252969865548,
"openwebmath_score": 0.9942970871925354,
"tags": null,
"url": "https://stacks.math.columbia.edu/tag/00ER"
} |
react.js, jsx
About variable names: data is a very bad name for a variable, you should avoid it. Believe me, I know, as I used it so many times before eventually figuring out that I shouldn't.
Maybe you can use allItems instead.
About the method logic.
Array.filter() will always return an array even empty. So your check:
if (searchedListItems) {
is always true, you should:
if (searchedListItems.length > 0) {
About: listToRender.remove(listToRender, itemToRemove)
listToRender should be an empty list, no way there are values inside, according to your code. Anyway I'm sure you never arrive here, because javascript Array does not have the method remove().
I think you can just improve your filter, if you want to remove more items.
this.setState({
showDropdown: true,
searchInput: value,
searchedList: this.state.allItems
.filter(dataItem =>
(dataItem.includes(value) ||
this.state.searchedListItems.indexOf(dataItem) !== -1))
}); | {
"domain": "codereview.stackexchange",
"id": 26854,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "react.js, jsx",
"url": null
} |
python, python-3.x, simulation, pyqt
self.initUI()
def initUI(self):
self.die1 = QLabel(self)
self.die1.setPixmap(QPixmap(choice(DICE)).scaled(*SCALED_PARMS))
self.die1.move(0.5, 0.5)
self.die2 = QLabel(self)
self.die2.setPixmap(QPixmap(choice(DICE)).scaled(*SCALED_PARMS))
self.die2.move(162, 0.5)
self.die2.setVisible(False)
self.btn = QPushButton('Roll', self)
self.btn.setFont(QFont('SansSerif', 20))
self.btn.setToolTip('Click to Roll Die')
self.btn.clicked.connect(self.rolldie)
self.btn.resize(166, 43)
self.btn.move(-2, 161)
self.dice_amount = QComboBox(self)
self.dice_amount.addItems(['1', '2'])
self.dice_amount.activated[str].connect(self.dice_amount_changed)
self.dice_amount.move(135, -2) | {
"domain": "codereview.stackexchange",
"id": 27180,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, simulation, pyqt",
"url": null
} |
Demo 10.3.4
Here is a $$4\times 4$$ Chebyshev differentiation matrix.
t,Dₓ = FNC.diffcheb(3,[-1,1])
Dₓ
4×4 Matrix{Float64}:
-3.16667 4.0 -1.33333 0.5
-1.0 0.333333 1.0 -0.333333
0.333333 -1.0 -0.333333 1.0
-0.5 1.33333 -4.0 3.16667
We again test the convergence rate.
f = x -> x + exp(sin(4*x));
dfdx = x -> 1 + 4*exp(sin(4*x))*cos(4*x);
d2fdx2 = x -> 4*exp(sin(4*x))*(4*cos(4*x)^2-4*sin(4*x));
n = 5:5:70
err1 = zeros(size(n))
err2 = zeros(size(n))
for (k,n) in enumerate(n)
t,Dₓ,Dₓₓ = FNC.diffcheb(n,[-1,1])
y = f.(t)
err1[k] = norm( dfdx.(t) - Dₓ*y, Inf )
err2[k] = norm( d2fdx2.(t) - Dₓₓ*y, Inf )
end
Since we expect a spectral convergence rate, we use a semi-log plot for the error.
plot(n,[err1 err2],m=:o,label=[L"f'" L"f''"],
xaxis=(L"n"), yaxis=(:log10,"max error"),
title="Convergence of Chebyshev derivatives") | {
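For readers following along without Julia, an equivalent differentiation matrix can be built in Python with NumPy. This is a sketch using Trefethen's classic construction, not the `FNC.diffcheb` code itself, so the node ordering may differ:

```python
import numpy as np

def cheb(n):
    # Chebyshev points x_j = cos(pi*j/n) on [-1, 1] and the (n+1)x(n+1)
    # differentiation matrix, following Trefethen's cheb.m recipe.
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))   # diagonal via negative row sums
    return D, x

D, x = cheb(3)
# With n+1 = 4 points, differentiation is exact for cubics:
print(np.max(np.abs(D @ x**3 - 3 * x**2)))  # on the order of machine epsilon
```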
"domain": "tobydriscoll.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9898303419461302,
"lm_q1q2_score": 0.8022366175748555,
"lm_q2_score": 0.8104789109591832,
"openwebmath_perplexity": 1928.5445266069187,
"openwebmath_score": 0.7715263962745667,
"tags": null,
"url": "https://tobydriscoll.net/fnc-julia/bvp/diffmats.html"
} |
np-hard, check-my-answer
Title: Is finding the union of all minimum hitting sets NP-hard? Let's start with the well-known minimum hitting set problem (known to be NP-hard): given some collection of sets: $U = \{S_1, S_2, S_3\} = \{\{1, 2, 5, 9\}, \{1,2,7\}, \{42, 13, 23, 1, 2\}\}$ for example, we wish to find some minimum cardinality set $H$ composed of elements from the union of all elements of $U$ such that $H \cap S \neq \emptyset, \forall S \in U$. In the above example, it's easy to see that $H = \{1\}$ or $H = \{2\}$.
Now, suppose that instead of returning a minimum hitting set, we wish to return a union of all minimum hitting sets. In the above example, that would be $\{1,2\}$. Note that this is itself a hitting set, but it is no longer a minimum hitting set. Given that finding a minimum hitting set is NP-hard, is finding the union of all minimum hitting sets also NP-hard? | {
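For small instances the object in question can be computed by brute force, which at least makes the definition concrete. This is exponential time, of course, and says nothing about hardness:

```python
from itertools import chain, combinations

def union_of_min_hitting_sets(sets):
    universe = sorted(set(chain.from_iterable(sets)))
    for k in range(1, len(universe) + 1):
        # all minimum hitting sets share the same (smallest feasible) size k
        hitters = [set(c) for c in combinations(universe, k)
                   if all(set(c) & s for s in sets)]
        if hitters:
            return set().union(*hitters)
    return set()

U = [{1, 2, 5, 9}, {1, 2, 7}, {42, 13, 23, 1, 2}]
print(union_of_min_hitting_sets(U))  # {1, 2}
```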
"domain": "cs.stackexchange",
"id": 19994,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "np-hard, check-my-answer",
"url": null
} |
python, chirp
# to look at the results:
import matplotlib.pyplot as plt
plt.plot(t, linear_chirp_example)
plt.show()
plt.plot(t, exp_chirp_example)
plt.show()
*NOTE that the instantaneous frequency at any given t is df(t)/dt! | {
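The plotting snippet assumes the chirps were already generated (for example with `scipy.signal.chirp`); the linear case can also be synthesized directly by integrating the instantaneous frequency. The parameters below are invented for illustration:

```python
import math

f0, f1, t1 = 6.0, 1.0, 10.0   # start/end frequency (Hz) and end time (s)
n = 5001
t = [i * t1 / (n - 1) for i in range(n)]

# phase(t) = 2*pi * integral of f(tau) dtau, with
# f(tau) = f0 + (f1 - f0) * tau / t1, so d(phase)/dt / (2*pi)
# is exactly the instantaneous frequency f(t).
linear_chirp_example = [
    math.cos(2 * math.pi * (f0 * ti + (f1 - f0) * ti * ti / (2 * t1)))
    for ti in t
]
print(linear_chirp_example[0])  # cos(0) = 1.0
```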
"domain": "dsp.stackexchange",
"id": 7441,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, chirp",
"url": null
} |
newtonian-mechanics, forces, acceleration, speed
2)That increase in magnitude is a small quantity but it is getting added at every instance so its integration is non zero. So why are we ignoring it. The equation you are thinking of isn't actually $$V_i + a \ dt = V_f$$ but rather $$\vec V_i + \vec a \ \Delta t = \vec V_f$$ Now, the second equation is only valid if $\vec a$ is constant over the time interval $\Delta t$, which is not the case, but we will take care of that later by taking the limit as $\Delta t \to 0$. | {
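The reason the accumulated magnitude error can be ignored is visible numerically: Euler-stepping a velocity with a perpendicular acceleration gains speed by a factor $\sqrt{1+(\Delta t)^2}$ per step (here with unit angular rate), and the total spurious gain vanishes as $\Delta t \to 0$. A sketch with $|\vec a| = |\vec v|$:

```python
def final_speed(dt, T=1.0):
    # Euler-integrate v' = a with a always perpendicular to v and |a| = |v|
    # (uniform circular motion). Exact answer: the speed stays 1 forever.
    vx, vy = 1.0, 0.0
    for _ in range(round(T / dt)):
        ax, ay = -vy, vx                 # rotate v by 90 degrees
        vx, vy = vx + ax * dt, vy + ay * dt
    return (vx * vx + vy * vy) ** 0.5

for dt in (0.1, 0.01, 0.001):
    print(dt, final_speed(dt))           # spurious speed gain shrinks with dt
```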
"domain": "physics.stackexchange",
"id": 91018,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, acceleration, speed",
"url": null
} |
For the sampling distribution of the sample mean, we learned how to apply the Central Limit Theorem when the underlying distribution is not normal. In answer to the question "How large is large?": for values of p close to .5, the number 5 on the right side of these inequalities may be reduced somewhat, while for more extreme values of p (especially for p < .1 or p > .9) the value 5 may need to be increased. The normal approximation to the binomial distribution for intervals of values is usually improved if cutoff values are modified slightly. The sum of the probabilities in this table will always be 1. Tutorial on the normal approximation to the binomial distribution. Recall that the binomial
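The effect of modifying the cutoff values (the continuity correction) is easy to check numerically with nothing beyond the standard library; the parameters are arbitrary:

```python
import math

def binom_cdf(k, n, p):
    # exact P(X <= k) for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

n, p, k = 50, 0.4, 20
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
exact = binom_cdf(k, n, p)
plain = normal_cdf(k, mu, sigma)            # unmodified cutoff
corrected = normal_cdf(k + 0.5, mu, sigma)  # cutoff shifted by 1/2
print(exact, plain, corrected)              # the corrected value lands closer
```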
"domain": "qien.eu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347838494567,
"lm_q1q2_score": 0.8077468043931948,
"lm_q2_score": 0.8289388040954683,
"openwebmath_perplexity": 325.9838073415349,
"openwebmath_score": 0.8606991171836853,
"tags": null,
"url": "https://qien.eu/the-blackwater-gqp/article.php?page=09782a-normal-approximation-to-binomial-formula"
} |
python, game, console
if choiceForSetup.upper()=='Y':
#display empty grid
myGridClass.populateGrid()
myGridClass.displayGrid()
setupNavy('manual',myGridClass,sortedShipList)
myGridClass.populateGrid()
else:
setup='Y'
while setup=="Y":
setupNavy('random',myGridClass,sortedShipList)
myGridClass.populateGrid()
myGridClass.displayGrid()
if numberOfPlayers==1:
con=raw_input("Are you satisfied with your location of your ships ('Y' or 'N')? ")
else:
con="Y"
if con.upper()=="Y":
setup="N"
else:
myGridClass.gridValuesUsed=[]
myGridValues=myGridClass.resetGridValues()
for whosTurn in turnList:
print "\n\n%s turn"%whosTurn | {
"domain": "codereview.stackexchange",
"id": 4083,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, game, console",
"url": null
} |
php, php5, api
if(empty($attributes)){
$attributes[] = 'Eic_State';
}
$data = array ('queueIds' => array(
array('queueType' => $this->_queueType,
'queueName' => $this->_queueName)
),
'attributeNames' => $attributes
);
$data = $this->_sendRequest('PUT', 'messaging/subscriptions/queues/' . $this->_subscriptionId , $data, $httpCode);
if( $this->_debug){
new showVar($data, false, 'HTTP Code: ' . $httpCode);
}
if($httpCode == 200 || $httpCode == 201){
$this->_isSubscribledToQueue = true;
return true;
} elseif($httpCode == 401){
$this->_reconnect();
$this->_subscribeToQueue($id);
}
return false;
} | {
"domain": "codereview.stackexchange",
"id": 13767,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, php5, api",
"url": null
} |
beginner, go, url
Strings can be compared directly with !=, no need to call strings.Compare.
SaveAsFile has a lot of string arguments, and some of them are optional. If a caller calls SaveAsFile(url, digest, "", ""), it's not obvious that the empty strings are the username and password. | {
"domain": "codereview.stackexchange",
"id": 42906,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, go, url",
"url": null
} |
fluid-dynamics, pressure
Title: Why do some carbonated drinks fizz more than others? Wired magazine ran an article this month on carbonation in soft drinks.
If all soft drinks are manufactured effectively identically, why do some types fizz more than others?
For example, root beer is always extremely fizzy (and laces well).
In similar fashion, lemon-lime drinks like Sprite and Sierra Mist are very fizzy - but do not lace like root beer.
Compare those to diet colas, which fizz slowly and/or minimally, and one is left to ponder.
What is it about some types of drinks that makes them hold and release their carbonation differently? Principally, the solubility of carbon dioxide in the solution.
Different pressures and temperatures will affect the rate at which the carbon dioxide comes out of solution thereby giving the illusion of different "fizziness" of a given drink. | {
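Henry's law plus its van 't Hoff temperature dependence makes the pressure/temperature point quantitative. The constants below are commonly quoted textbook values for CO2 in water, used here as assumptions:

```python
import math

kH_298 = 0.034   # mol/(L*atm) at 298 K
C = 2400.0       # K, van 't Hoff temperature-dependence constant

def dissolved_co2(p_atm, T):
    # Henry's law: concentration is proportional to partial pressure,
    # with a temperature-corrected Henry constant.
    kH = kH_298 * math.exp(C * (1.0 / T - 1.0 / 298.0))
    return kH * p_atm

print(dissolved_co2(3.0, 278))  # cold, pressurized bottle: much CO2 in solution
print(dissolved_co2(1.0, 298))  # open glass at room temperature: far less
```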
"domain": "physics.stackexchange",
"id": 1617,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, pressure",
"url": null
} |
filters, filter-design, infinite-impulse-response, digital-filters
a0 = 1.0; // = 1.0
a1 = -pole1 - pole2; // = -0.931176
a2 = pole1 * pole2; // = 0
b0 = 1.0; // = 1.0
b1 = -zero1 - zero2; // = -1.731986
b2 = zero1 * zero2; // = 0.733838
Tried to google a "ready to use" solution for this, but instead of source code I only found a few papers I could use for correction.
Papers:
Jackson
Mecklenbräuker
Nelatury | {
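The coefficients quoted above can be sanity-checked by evaluating the transfer function $H(z) = (b_0 + b_1 z^{-1} + b_2 z^{-2})/(a_0 + a_1 z^{-1} + a_2 z^{-2})$ on the unit circle. A pure-Python sketch using those numbers:

```python
import cmath

b = [1.0, -1.731986, 0.733838]   # from -zero1 - zero2 and zero1 * zero2
a = [1.0, -0.931176, 0.0]        # from -pole1 - pole2 and pole1 * pole2

def biquad_response(w):
    # frequency response at normalized angular frequency w (rad/sample)
    z = cmath.exp(1j * w)
    num = sum(bk * z ** (-k) for k, bk in enumerate(b))
    den = sum(ak * z ** (-k) for k, ak in enumerate(a))
    return num / den

for w in (0.0, 0.1, 1.0, 3.0):
    print(w, abs(biquad_response(w)))
```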
"domain": "dsp.stackexchange",
"id": 3638,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, filter-design, infinite-impulse-response, digital-filters",
"url": null
} |
java, cryptography, aes
/**
* Returns the plaintext of a given RSA-encrypted string and a private RSA key.
* @param ciphertext an RSA-encrypted byte array
* @param key a private RSA key
* @return plaintext
*/
public static String decrypt(byte[] ciphertext, PrivateKey key){
byte[] plaintext = null;
try {
Cipher cipher = Cipher.getInstance("RSA/ECB/OAEPPadding");
cipher.init(Cipher.DECRYPT_MODE, key);
plaintext = cipher.doFinal(ciphertext);
} catch (Exception e) {
e.printStackTrace();
}
return new String(plaintext);
}
} Three comments:
Firstly, don't catch exceptions like that, especially not all
exceptions. Ideally you want to be notified as early as possible about
errors, not when the next call fails, so I don't see a good reason to
put the catch there. If you really want to transform the exception
into another return value (or exception) then make that more explicit. | {
"domain": "codereview.stackexchange",
"id": 19002,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, cryptography, aes",
"url": null
} |
the symbol large and easier to read. All the versions of this article: < français > Here are a few examples to write matrices quickly. The formulas for the first few values of. These are not guaranteed to work in MathJax but are a good place to start. This guide walks you through the basics of using Jupyter Notebooks locally. Use MathJax to format equations. That'll give you many lists and tips. The align environment will align formulas at the ampersand & symbol. Integration as summation: Introduction. On this leaflet we explain integration as an infinite sum. You can also convert back to LaTeX to edit the equation. Here are a few examples: to write the index n on the right side of the sum symbol, while the limits of the summation remain above and below. June 2014 by tom 7 Comments. This is important, e.g. this formula reflects the linearity of finite sums. LaTeX (transliterated "lah-tekh") is a typesetting system based on TeX, developed by the American computer scientist Leslie Lamport in the early 1980s.
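A minimal compilable example pulling these pieces together (sum-limit placement and the align environment):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Display style puts the limits above and below the summation sign;
% \nolimits (or inline/text style) moves them to the side.
\[ \sum_{n=1}^{N} a_n \qquad \sum\nolimits_{n=1}^{N} a_n \]
% The align environment lines formulas up at the ampersand:
\begin{align}
  \sum_{n=1}^{N} (a_n + b_n) &= \sum_{n=1}^{N} a_n + \sum_{n=1}^{N} b_n
\end{align}
\end{document}
```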
"domain": "puntoopera.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9504109728022221,
"lm_q1q2_score": 0.8402006294041525,
"lm_q2_score": 0.8840392771633078,
"openwebmath_perplexity": 1388.7920082618878,
"openwebmath_score": 0.952876627445221,
"tags": null,
"url": "http://puntoopera.it/onxt/latex-summation.html"
} |
asymptotics, recurrence-relation
Title: Comparing two recurrence relations w.r.t. asymptotic growth I have two functions $T_1(n),T_2(n)$. How do I decide which is asymptotically faster?
One is given by the recurrence relation
$$ T_1(n) = \sqrt{n} T_1(\sqrt{n}) + 3 n, \quad T_1(1) = T_1(2) = 1. $$
The other is given by the recurrence relation
$$ T_2(n) = 3 T_2(n/3) + 2n \log n, \quad T_2(1) = T_2(2) = 1. $$
For the first function I guess there is $O(\sqrt{n} \cdot \sqrt{n})$ for loop, and $O(n)$ for the $c$; which becomes $O(n^2)$ in total.
For the second one, the Master's theorem is applicable, but as I assume the complexity becomes $O(n \cdot n \log n) \Leftrightarrow O(n^2 \cdot \log n) \Rightarrow O(n)$ for loop, and $O(n\log n)$ for $c$.
So if I am comparing both $O()$'s, we can in total see that
$$ O(n^2) < O(n^2 \cdot \log n) $$ | {
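Evaluating both recurrences numerically makes the comparison concrete. The standard closed forms (a sketch, not a proof) are $T_1(n)=\Theta(n\log\log n)$ and, by the master theorem (case 2), $T_2(n)=\Theta(n\log^2 n)$, so $T_2$ grows faster:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T1(n):
    if n <= 2:
        return 1.0
    # non-integer argument rounded for the numerical experiment
    return math.sqrt(n) * T1(round(math.sqrt(n))) + 3 * n

@lru_cache(maxsize=None)
def T2(n):
    if n <= 2:
        return 1.0
    return 3 * T2(n // 3) + 2 * n * math.log(n)

# T2/T1 keeps growing, consistent with n log^2 n vs n log log n
for k in (6, 8, 10):
    n = 3 ** k
    print(n, T2(n) / T1(n))
```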
"domain": "cs.stackexchange",
"id": 7634,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "asymptotics, recurrence-relation",
"url": null
} |
cc.complexity-theory, fractals
Just added: Would we be compelled to redefine some essential property of the Sierpinski triangle if the Walsh transform or Sierpinski-triangle transform were shown to be fully linear?
http://en.wikipedia.org/wiki/Walsh_matrix Blum, Shub, and Smale proved that membership in the Mandelbrot set is undecidable in the Real RAM model of computation (known in some upstart circles as the BSS model).
The high-level argument is one sentence long: Any Real RAM computable set is the countable union of semi-algebraic sets, so its boundary has Hausdorff dimension 1, but the boundary of the Mandelbrot set has Hausdorff dimension 2. By the same argument, almost every interesting fractal is uncomputable in the real-RAM model. | {
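The practical upshot is that membership is only semi-decidable from the escape side: if $|z|$ ever exceeds 2 the orbit provably diverges, so "outside" is verifiable, but "inside" can never be confirmed by finitely many iterations (sketch):

```python
def escapes(c, max_iter=1000):
    # Escape-time test for the Mandelbrot iteration z -> z^2 + c.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return True
    return False

print(escapes(1 + 0j))   # True: 0, 1, 2, 5, ... diverges
print(escapes(-1 + 0j))  # False: 0, -1, 0, -1, ... stays bounded
```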
"domain": "cstheory.stackexchange",
"id": 1981,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cc.complexity-theory, fractals",
"url": null
} |
thought-experiment
Then the bubble would have no preference for any position in the level and it would stay wherever it happened to be first when the construction of this hypothetical level was completed.
Dropping Assumptions
But let's drop those assumptions one by one and see what happens:
1. Earth is not a perfect, smooth sphere
Without having to do any analysis of Earth's crust and mantle, we could tell that there are places on earth with more material in them. We don't even have to know anything about density, but it's straightforward to observe, even if everything on earth (not including the atmosphere here) was made up of the same exact material, that there are places with just more stuff.
Standing on top of Mount Everest (~29,000 ft above sea level), you have more stuff between you and Earth's geometric center than you do if you were standing on the shores of the Dead Sea (~400 ft below sea level).
"domain": "physics.stackexchange",
"id": 76842,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thought-experiment",
"url": null
} |
over two periods are vertical asymptotes for the best answers search! Period of a Stretched or Compressed tangent function asymptotes in red is all real numbers except whenever cos( ). Is given in exercise ( 1 ) it starts at 0, up... Cosine and tangent functions, say, y = Acot ( Bx ) and heads! Harder, but they're there like discontinue curve because for certain values tangent is periodic meaning! Information about this is the height from highest to lowest points and divide by... Be limited to -90º ≤ x ≤ 360º unlike sine and cosine curve function does play... Domain, Range and vertical shift 1 more information about this is given the. The standard period of the first period function, where the tangent function is radians a tan ;! X ≤ 90º tangent graphs, it is often necessary to determine a stretch! Stay Home, stay Safe and keep learning!!! Theorem of calculus the
"domain": "iita.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9504109784205502,
"lm_q1q2_score": 0.8290885345070689,
"lm_q2_score": 0.8723473879530492,
"openwebmath_perplexity": 1179.9577146280094,
"openwebmath_score": 0.773389995098114,
"tags": null,
"url": "http://bulletin.iita.org/stephanie-tsicos-orlwmt/s0qho.php?a53d6b=tangent-graph-period"
} |
46 ] matching number for the graph. Tutorial is designed for beginners and professionals both. Chromatic number Kn = n. What is the matching number for the following? From the above graph is 3 isomorphic if there exists a one-to-one correspondence between node... Components are independent and not connected to each other graph are different cities around world... Referred to as vertices, vertexes or nodes, that contain information regarding different objects. The unqualified term "graph" usually refers to a simple graph. May be either connected or disconnected! And deg ( n2 ) = 4 shown in figure 1.3 are isomorphic to one another must 1! Given size or disconnected, it's a directed - weighted graph of vertices odd... On regarding different objects weighted graph contains n(n-1) edges! Has exactly nr/2 edges with the connections themselves referred to as edges! Is an important branch of computer science and discrete math; fewer edges to connect than the right... - weighted graph depending on
"domain": "butterflymodels.pl",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9763105300791785,
"lm_q1q2_score": 0.8317704898571743,
"lm_q2_score": 0.8519528000888386,
"openwebmath_perplexity": 734.7648088697057,
"openwebmath_score": 0.5919721722602844,
"tags": null,
"url": "http://butterflymodels.pl/f9a3v3f/graph-theory-examples-7d6169"
} |
black-hole, stellar-evolution, neutron-star, binary-star
Title: Binary pairings that haven't been discovered yet? Question: Are there any kinds of binary pairings that haven't been discovered yet? Any that are particularly significant, or that might shed some light on binary stellar evolution or theories of capture processes, that are sought, but so far no examples have been found or at least suspected?
By "pairing" I mean a binary object where each is a type of star or a black hole.
Stars can be anything from brown dwarfs to neutron stars. Black holes should be very roughly comparable to stars in size. (I'm not asking about a star in orbit around a supermassive black hole in the center of a galaxy)
For example, recent observations of gravitational wave events have suggested a pair of black holes merging, and a pair of neutron stars merging. tl;dr Yes, there have been theorized, and entirely possible, binary systems that have not been observed. One such thing is a TZO, or a Thorne–Żytkow object. This is a neutron star-red giant binary.
"domain": "astronomy.stackexchange",
"id": 3519,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "black-hole, stellar-evolution, neutron-star, binary-star",
"url": null
} |
c++, linked-list
while (begin != end)
{
ForwardList_NodeBase<T>* newNode = new ForwardList_Node<T>{ *begin };
lastNewNode->next = newNode;
lastNewNode = newNode;
++begin;
}
if (firstAfterPush == nullptr)
{
_back = lastNewNode;
}
else
{
lastNewNode->next = firstAfterPush;
}
}
template<class T>
void ForwardList<T>::pop_front()
{
if (_beforeBegin.next)
{
ForwardList_NodeBase<T>* oldFront = _beforeBegin.next;
_beforeBegin.next = oldFront->next;
delete oldFront;
// do i need this? we wont be accessing back when front is nullptr
if (_beforeBegin.next == nullptr)
{
_back = nullptr;
}
}
} | {
"domain": "codereview.stackexchange",
"id": 27293,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, linked-list",
"url": null
} |
You already know that $$\frac a{a-2}\cdot \frac b{b-2} \cdot \frac c{c-2}=3$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226254066618,
"lm_q1q2_score": 0.8010543460572741,
"lm_q2_score": 0.8198933425148213,
"openwebmath_perplexity": 305.3066542613289,
"openwebmath_score": 0.9284040927886963,
"tags": null,
"url": "https://math.stackexchange.com/questions/3006715/complex-root-of-equation-1/3006728"
} |
Yes, there is. The trick is Taylor expansion. If you wanna shift $p(x)=x^2-1$ to $p(x+1)$, then take Taylor expansion at point $x-1$ with $\Delta x=x-(x-1)=1$. \begin{align}p(x)&=p(x-1)+p'(x-1)\Delta x+\frac12p''(x-1)\Delta x^2\\&=(x-1)^2-1+2(x-1)+1\\&=(x-1)^2+2(x-1)\end{align} Don't expand it. Just replace $x$ by $x+1$, we have $$p(x+1)=x^2+2x$$
Note this trick is only valid for analytic functions (maybe not even all of them?). But I think it is faster than substitution only for polynomials, since their Taylor expansion has finite order and their derivatives are easy to compute.
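For polynomials given as coefficient lists, the same finite Taylor expansion is just the binomial theorem. A pure-Python sketch:

```python
from math import comb

def taylor_shift(coeffs, h):
    # coeffs[i] is the coefficient of x**i; returns the coefficients of
    # p(x + h) via the finite Taylor/binomial expansion.
    out = [0] * len(coeffs)
    for i, c in enumerate(coeffs):
        for k in range(i + 1):
            out[k] += c * comb(i, k) * h ** (i - k)
    return out

# p(x) = x^2 - 1  ->  p(x + 1) = x^2 + 2x
print(taylor_shift([-1, 0, 1], 1))   # [0, 2, 1]
```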
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9693241965169939,
"lm_q1q2_score": 0.8295972182443356,
"lm_q2_score": 0.8558511396138365,
"openwebmath_perplexity": 506.3072311466235,
"openwebmath_score": 0.8051320314407349,
"tags": null,
"url": "https://math.stackexchange.com/questions/694565/polynomial-shift"
} |
sleep
Science magazine, in the 125th anniversary issue, made a list of 125 big questions that remain to be answered by scientists. One of those questions was "Why do we sleep?" The role of sleep remains elusive but many ideas have been put forth about the role of and the evolution of sleep. The leading idea seems to be memory consolidation. This article by Kavanau (1997) reviews ideas about the function of sleep and why it evolved. He suggests that sleep and memory evolved as as ways to improve or maintain effective connections among the nerve cells in the brain. Some circuits are used frequently, such as those used to process sensory information like vision. Other circuits are used less frequency, such as circuits that can be used to store memories. Sleep may allow those infrequently used circuits to be activated and used without causing conflict with the circuits used during "restful waking." In other words, sleep may allow the lesser used circuits to be exercised, which can help to | {
"domain": "biology.stackexchange",
"id": 2751,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "sleep",
"url": null
} |
The statement is a self-contradiction. letter occurs on an open branch, the corresponding row of the truth table assigns false to the sentence letter on that row of the truth table. Show that p -> q, where "->" is the conditional. Is q implies p true for this row? Does true imply true? Yeah. A proposition P is a tautology if it is true under all circumstances. It is not possible for both P and NOT P to be true. Tautologies, Contradictions, and Satisfiability. A tautology (Taut) is a PropCalc formula such that every row of its truth table is 1. A proposition that is neither a tautology nor a contradiction is called a contingency. If you reach a contradiction, then you know it can't. Contradictions are never true. Consistency and Contradiction. If there are contradictory configurations, you can look into the cases that belong to those configurations and assess whether contradictions can be
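Truth-table classification is mechanical: enumerate every row and check. A sketch with propositions as Python lambdas:

```python
from itertools import product

def classify(f, nvars):
    # Evaluate the proposition on every row of its truth table.
    rows = [f(*vals) for vals in product([False, True], repeat=nvars)]
    if all(rows):
        return "tautology"
    if not any(rows):
        return "contradiction"
    return "contingency"

print(classify(lambda p: p or not p, 1))      # tautology
print(classify(lambda p: p and not p, 1))     # contradiction
print(classify(lambda p, q: not p or q, 2))   # contingency: p -> q
```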
"domain": "chiavette-usb-personalizzate.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.970239907775086,
"lm_q1q2_score": 0.8545844878495524,
"lm_q2_score": 0.8807970904940926,
"openwebmath_perplexity": 467.3208727966209,
"openwebmath_score": 0.5699777007102966,
"tags": null,
"url": "http://insd.chiavette-usb-personalizzate.it/contradiction-truth-table.html"
} |
ros, ros2, services
Originally posted by Kenji Miyake on ROS Answers with karma: 307 on 2021-05-01
Post score: 1
The asynchronicity indeed ensures that your node will not be stalled until a response comes back in ROS 2. If you don't require progress or the option to cancel the request then there will be no issue in your example.
Actually IMO in ROS 2 it's better to think that way about services: you make a request and you don't know how long you will wait for it, you just make sure to handle the response in a callback. As several questions on here show (e.g. this one), trying to stick to the ROS 1 thought model is not well supported.
Originally posted by sgvandijk with karma: 649 on 2021-05-02
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 36390,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros2, services",
"url": null
} |
java
}
public class LineData {
private static final String COMMA = ",";
private String key;
private String value;
private boolean validLine;
public LineData(String line) {
String[] pair = line.split(COMMA);
if (validateLine(pair)) {
key = pair[ArrayEnum.KEY.getIndex()].trim();
value = pair[ArrayEnum.VALUE.getIndex()].replaceFirst("^0*","").trim();
setValidLine(true);
} else {
setValidLine(false);
}
}
/**
* The method return false under below condition
* 1. If the given array is null or the array length is not 2
* 2. If the first element of the array is empty string
* 3. If the second element of the array is not positive number or zero
*
* @param pair
* @return
*/
public boolean validateLine(String[] pair) { | {
"domain": "codereview.stackexchange",
"id": 13835,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java",
"url": null
} |
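The Java snippet above is cut off before the body of `validateLine`; based on its javadoc, the three checks could look like the following sketch (written in Python for brevity — the names mirror the Java, they do not reproduce it):

```python
def validate_line(pair):
    """Mirror of the documented rules:
    1. reject a None array or one whose length is not 2
    2. reject an empty key (first element)
    3. reject a value (second element) that is not a positive number or zero
    """
    if pair is None or len(pair) != 2:
        return False
    key, value = pair[0].strip(), pair[1].strip()
    if not key:
        return False
    return value.isdigit()   # digits only => a non-negative integer

print(validate_line(["apple", "007"]))  # True  (leading zeros are fine)
print(validate_line(["", "3"]))         # False (empty key)
print(validate_line(["apple", "-1"]))   # False (not a non-negative number)
```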
electromagnetism, definition, magnetostatics
Title: Why is magnetostatics defined as $\frac{\partial \rho}{\partial t} = 0$? I don't see why the idea of steady currents (i.e. magnetostatics) implies that charge density $\rho(\vec{r},t)$ has no explicit time dependence.
Is it just coming from magnetostatics being defined as $ \vec{\nabla} \cdot \vec{J}:= 0$ (I don't see why this would be true either) and because of the continuity equation $\implies \frac{\partial\rho}{\partial t} = 0$.
Introduction to Electrodynamics, D.J. Griffiths section 5.2.1 - Steady Currents Magnetostatics is, in some sense, a toy concept taught to students in preparation for the formal magneto-quasi-static (MQS) approximation. The purpose of the MQS approximation is to decouple the electrical field from the magnetic field. This is done by setting $\frac{\partial}{\partial t} \vec E \approx 0$ so that Ampere's law becomes $\nabla \times \vec H \approx \vec J$ (see http://web.mit.edu/6.013_book/www/chapter3/3.2.html ) | {
"domain": "physics.stackexchange",
"id": 76623,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, definition, magnetostatics",
"url": null
} |
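The link the question suspects is exactly the continuity equation; spelled out:

```latex
% Charge conservation (continuity equation):
\frac{\partial \rho}{\partial t} + \vec{\nabla} \cdot \vec{J} = 0
% so the two defining conditions of magnetostatics are equivalent:
\vec{\nabla} \cdot \vec{J} = 0
\;\;\Longleftrightarrow\;\;
\frac{\partial \rho}{\partial t} = 0 .
```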
python, python-3.x
"8a": "Linux Kernel Partition (used by AiR-BOOT)",
"8b": "Legacy Fault Tolerant FAT32 volume",
"8c": "Legacy Fault Tolerant FAT32 volume using BIOS extd INT 13h",
"8d": "Free FDISK 0.96+ hidden Primary DOS FAT12 partition",
"8e": "Linux Logical Volume Manager partition",
"90": "Free FDISK 0.96+ hidden Primary DOS FAT16 partition",
"91": "Free FDISK 0.96+ hidden DOS extended partition",
"92": "Free FDISK 0.96+ hidden Primary DOS large FAT16 partition",
"93": "Hidden Linux native partition | Amoeba",
"94": "Amoeba bad block table",
"95": "MIT EXOPC native partitions",
"96": "CHRP ISO-9660 filesystem",
"97": "Free FDISK 0.96+ hidden Primary DOS FAT32 partition",
"98": "Free FDISK 0.96+ hidden Primary DOS FAT32 partition (LBA) | Datalight ROM-DOS Super-Boot Partition",
"99": "DCE376 logical drive",
"9a": "Free FDISK 0.96+ hidden Primary DOS FAT16 partition (LBA)",
"domain": "codereview.stackexchange",
"id": 20789,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x",
"url": null
} |
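A table like the one above is typically used as a straight hex-code lookup. A minimal sketch — the dict below is a two-entry excerpt for illustration, not the full table:

```python
# Hypothetical excerpt of the partition-type table shown above.
PARTITION_TYPES = {
    "8e": "Linux Logical Volume Manager partition",
    "96": "CHRP ISO-9660 filesystem",
}

def describe(type_byte):
    """Look up an MBR partition-type byte, given as an int or a two-digit hex string."""
    key = format(type_byte, "02x") if isinstance(type_byte, int) else type_byte.lower()
    return PARTITION_TYPES.get(key, "Unknown partition type")

print(describe(0x8E))   # accepts an int...
print(describe("96"))   # ...or a hex string
```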
machine-learning, autoencoder, vae
Then, we feed those z's to decoder, and receive images,
sample = model.decode(sample).cpu()
Finally, we embed z's into 2D dimension using t-SNE, or use 2D dimension for z and plot directly.
Here is an illustration for the second case (drawn by the one and only paint):
As you see, the mean and variances are completely bypassed, we directly give the random z's to decoder.
The referenced article says the same thing, but less obvious:
Below you see 64 random samples of a two-dimensional latent space of MNIST digits that I made with the example below, with ZDIMS=2
and
VAE has learned a 20-dimensional normal distribution for any input digit
ZDIMS = 20
...
self.fc21 = nn.Linear(400, ZDIMS) # mu layer
self.fc22 = nn.Linear(400, ZDIMS) # logvariance layer
which means it only refers to the z vector, bypassing mean and variance vectors. | {
"domain": "datascience.stackexchange",
"id": 4971,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, autoencoder, vae",
"url": null
} |
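The sampling step described above — draw z's directly from the prior and hand them to the decoder, bypassing the encoder's mean/variance outputs — can be sketched without a full VAE. The `decode` below is a hypothetical stand-in for `model.decode`, not a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
ZDIMS = 2                       # 2-D latent space, as in the plotted example

# Draw 64 latent codes straight from the standard-normal prior p(z):
z = rng.standard_normal((64, ZDIMS))

def decode(z):
    """Hypothetical stand-in for model.decode: maps latents to flat 28x28 'images'."""
    W = rng.standard_normal((z.shape[1], 28 * 28))
    return z @ W                # a real decoder would be a trained neural network

samples = decode(z)
print(samples.shape)            # (64, 784) -- one flat MNIST-sized image per z
```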
measurements, tools
This all works fine until the gap is opened so that the handle goes through a full turn. That's where the second, more coarse scale comes in. This scale is revealed as the gap is increased. Usually there is one line per handle turn, and these are usually calibrated to come out to a convenient multiple. For example, if each handle turn opens the gap 100 mils, then there is a line every 100 mils on the fixed coarse scale.
You have to know what the coarse scale multiplier is, but that's easy to see by rotating the handle one turn. Let's say the coarse scale is marked every 100 mils and the handle in mils. Then for example, if you can see 3 lines of the coarse scale revealed and the handle scale is at 37, then the measurement is 337 mils, or 0.337 inch. | {
"domain": "engineering.stackexchange",
"id": 422,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "measurements, tools",
"url": null
} |
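The reading rule in the last paragraph is just coarse lines times the per-turn pitch, plus the handle mark. A quick check of the worked example, with names chosen here for illustration:

```python
def micrometer_reading(coarse_lines, handle_mark, pitch_mils=100):
    """Combine the fixed coarse scale with the rotating handle scale.

    coarse_lines: full-turn lines revealed on the fixed scale
    handle_mark:  the mil reading on the handle (thimble) scale
    pitch_mils:   gap opened per full handle turn (100 mils in the example)
    """
    return coarse_lines * pitch_mils + handle_mark

mils = micrometer_reading(coarse_lines=3, handle_mark=37)
print(mils, "mils =", mils / 1000, "inch")   # 337 mils = 0.337 inch
```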
gauge-theory, gauss-law, variational-principle, constrained-dynamics
You could be studying $\delta S =0$ on the quotient $C=A/G$ only but it would be cumbersome and $\delta S$ would be physically and literally the same thing as it is on $A$. It's the very point of introducing degrees of freedom which include one redundant one (because of the gauge symmetry) to simplify the picture. In fact, the equations of motion from $\delta S= 0$ on $A$ are manifestly Lorentz-covariant etc. If you were trying to parameterize the space $C=A/G$ by some fields, you would probably have to impose some Lorentz-breaking or otherwise unnatural conditions, e.g. $A_0=0$, and the whole formalism would lose the manifest Lorentz symmetry even though the actual phenomena, when looked at properly, would still obey the laws of relativity. | {
"domain": "physics.stackexchange",
"id": 4640,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gauge-theory, gauss-law, variational-principle, constrained-dynamics",
"url": null
} |
python, classes
# Lay-out
subfig.suptitle(side + ' side')
for ax, location in zip(axes[0], locations):
ax.set_title(location)
for ax, component in zip(axes[:,0], components):
ax.set_ylabel(component, size='large')
plt.show()
if __name__ == "__main__":
datafolder = r"Sample data"
metadata_filepath = r"Sample data/metadata.xlsx"
tc = TestCampaign(datafolder, metadata_filepath)
for run_id in range(3):
tr = tc.load_run_by_id(run_id)
tr.plot_data() | {
"domain": "codereview.stackexchange",
"id": 44991,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, classes",
"url": null
} |
William Reply
Mr. Shamir employs two part-time typists, Inna and Jim, for his typing needs. Inna charges $10 an hour and can type 6 pages an hour, while Jim charges $12 an hour and can type 8 pages per hour. Each typist must be employed at least 8 hours per week to keep them on the payroll. If Mr. Shamir has at least 208 pages to be typed, how many hours per week should he employ each typist to minimize his typing costs, and what will be the total cost?
Chine Reply
At De Anza College, 20% of the students take Finite Mathematics, 30% take Statistics and 10% take both. What percentage of the students take Finite Mathematics or Statistics?
Chalton Reply | {
"domain": "jobilize.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9911526446557306,
"lm_q1q2_score": 0.8033083183035012,
"lm_q2_score": 0.810478913248044,
"openwebmath_perplexity": 1032.6150173226338,
"openwebmath_score": 0.6382226347923279,
"tags": null,
"url": "https://www.jobilize.com/online/course/0-12-probability-applied-finite-mathematics-by-openstax?qcr=www.quizover.com&page=1"
} |
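For the two exercises quoted above, quick numeric checks: a brute-force search over whole-hour schedules for the typing problem (assuming integer hours, which is sufficient here since the LP optimum falls on integers), and inclusion–exclusion for the De Anza question:

```python
# Typing-cost problem: minimize 10*inna + 12*jim
# subject to 6*inna + 8*jim >= 208 pages, inna >= 8, jim >= 8.
best = min(
    (10 * inna + 12 * jim, inna, jim)
    for inna in range(8, 60)
    for jim in range(8, 60)
    if 6 * inna + 8 * jim >= 208
)
cost, inna, jim = best
print(f"Inna {inna} h, Jim {jim} h, cost ${cost}")  # Inna 8 h, Jim 20 h, cost $320

# De Anza problem: P(F or S) = P(F) + P(S) - P(F and S)
p = 0.20 + 0.30 - 0.10
print(f"{p:.0%}")   # 40%
```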
reinforcement-learning, deep-rl, dqn, experience-replay, mini-batch-gradient-descent
Sample random minibatch of transitions $\left(\phi_{j}, a_{j}, r_{j}, \phi_{j+1}\right)$ from $D$
they mean multiple transitions in the minibatch (with a minibatch size of $1$ being a special case), and they use the index $j$ to collectively refer to the entire set of indices in that randomly sampled minibatch. It's not one particular number / index, $j$ is a set of indices. When further lines of code do something with index $j$, they actually do something with all indices $j$. | {
"domain": "ai.stackexchange",
"id": 550,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, deep-rl, dqn, experience-replay, mini-batch-gradient-descent",
"url": null
} |
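The "sample random minibatch of transitions from D" line is the classic experience-replay step. A minimal sketch with a plain list as the replay buffer D — the names and dummy data here are illustrative, not from the paper's code:

```python
import random

random.seed(0)

# Replay buffer D of transitions (phi_j, a_j, r_j, phi_{j+1}) -- dummy entries here.
D = [(f"phi{t}", t % 4, float(t), f"phi{t + 1}") for t in range(1000)]

BATCH = 32
minibatch = random.sample(D, BATCH)   # j ranges over 32 random indices at once

# "Do something with index j" then means looping over the whole sampled set of j's:
rewards = [r for (_phi, _a, r, _phi_next) in minibatch]
print(len(minibatch), len(rewards))   # 32 32
```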
homework-and-exercises, reflection, geometric-optics
Lay the mirror on the floor.
Look down at it from above and move your head around until you see an image of your eye.
Hold a pin (a finger will do) between your eye and the mirror and move it until it is covering the image of your eye.
At the same time the image of the pin will appear.
Move the pin up and down until the pin and its image are the same size and do not move relative to one another as you move your head from side to side - a position of no-parallax between the pin and its image.
The distance between the pin and the mirror is the radius of curvature of the mirror.
If you accumulate some data it might determine if there is some semblance of order amongst the manufactures of these mirrors.
Unfortunately I only have a $2000\rm R$ (measured using no parallax) mirror in the house and it is not labelled as to what its magnification is. | {
"domain": "physics.stackexchange",
"id": 38713,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, reflection, geometric-optics",
"url": null
} |
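The no-parallax trick works because an object at the centre of curvature images back onto itself. A quick check with the mirror equation 1/do + 1/di = 1/f, f = R/2 (treating the shaving mirror as spherical, with the answerer's measured R = 2000):

```python
def image_distance(d_object, radius):
    """Mirror equation 1/do + 1/di = 1/f with f = R/2 for a concave mirror."""
    f = radius / 2
    return 1 / (1 / f - 1 / d_object)

R = 2000.0   # measured radius of curvature from the answer above
print(image_distance(R, R))   # 2000.0 -- the image lands back on the object
```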
forces, torque, stress-strain
And this information corresponds to shearing stress...I understand that net force will be zero but how is net torque zero?
I tried using the right-hand rule and got the torques due to both forces in the same direction (hence a non-zero torque); perhaps I'm doing it wrong... Your textbook, like most textbooks, is, I think, confusing on this point. If all the forces involved are parallel to the surfaces, then you need two pairs of forces for equilibrium.
"domain": "physics.stackexchange",
"id": 75379,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, torque, stress-strain",
"url": null
} |
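The answer's point — a single pair of tangential forces balances force but not torque, so a complementary pair is required — in symbols:

```latex
% A single pair of shear forces F on faces separated by h forms a couple:
\tau_{\text{net}} = F\,h \neq 0 .
% Equilibrium therefore requires a complementary pair F' on the other two faces
% (separation w), chosen so that the couples cancel:
F\,h = F'\,w \quad\Longrightarrow\quad \tau_{\text{net}} = F\,h - F'\,w = 0 .
```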