| text | source |
|---|---|
What exactly does it mean for the point to be uniformly distributed?
It means that every point of the circle has the same probability of being picked/generated by the function.
## Discussion
Before discussing solutions it is worth mentioning that the fact that the circle is centered at $$(x,y)$$ makes very little difference and we can continue our discussion as if it were centered at $$(0,0)$$. This is the case because all the points we generate can then be translated to $$(x,y)$$ by simply adding $$x$$ and $$y$$ to the $$x$$-coordinate and $$y$$-coordinate of the generated point.
### Polar Coordinates - The wrong approach
Let’s start by discussing an intuitive, but ultimately incorrect, approach. One might think that in order to pick a point in the circle it is sufficient to
1. Pick a random angle $$\theta \in [0, 2\pi[$$
2. Pick a random radius $$\overline{r} \in [0,r]$$ | {
"domain": "davidespataro.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9783846678676151,
"lm_q1q2_score": 0.850046803550406,
"lm_q2_score": 0.8688267830311355,
"openwebmath_perplexity": 434.14399494217605,
"openwebmath_score": 0.750119686126709,
"tags": null,
"url": "https://davidespataro.it/codinginterviewessentials/18.Generate_points_in_circle_uniformly.html"
} |
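As a point of comparison, here is a minimal Python sketch of a sampler that *is* uniform in area (an assumption beyond the excerpt above, which cuts off before the fix): the radius is drawn as $r\sqrt{u}$ with $u$ uniform in $[0,1]$, since drawing it as $u \cdot r$ over-weights points near the center.

```python
import math
import random

def sample_uniform_in_disk(radius=1.0):
    # angle: uniform on [0, 2*pi)
    theta = random.uniform(0.0, 2.0 * math.pi)
    # radius: sqrt of a uniform variate, because enclosed area grows like r^2
    rad = radius * math.sqrt(random.random())
    return rad * math.cos(theta), rad * math.sin(theta)
```

A translation to the circle centered at $(x, y)$ is then just adding $x$ and $y$ to the returned coordinates.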
algorithms, permutations, stacks
EDIT
Thanks all for your answers, it was fun!
I've now implemented the algorithm in my Forth compiler. Ok here's my attempt 2, which won't construct the sequence of moves, but at least proves what the optimal number of moves is and gives an indicator of how to construct the sequence. I'm addressing the inverse problem of turning "σ(1)σ(2)…σ(n)" into "12…n" using the moves "insert the current leftmost element somewhere different in the array", but they are equivalent problems because if we choose a prefix $\sigma(1)\sigma(2)\ldots \sigma(i)$ and right-cycle it, then we can reverse that by taking $\sigma(i)$ (now the leftmost element) and inserting it at position $i$. Likewise, rather than start from the identity, we end at the identity. | {
"domain": "cs.stackexchange",
"id": 15104,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, permutations, stacks",
"url": null
} |
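The two move types in that reduction can be sketched in Python (hypothetical helper names) to check that they really are mutual inverses:

```python
def right_cycle_prefix(seq, i):
    # right-cycle the prefix of length i: its last element moves to the front
    return [seq[i - 1]] + seq[:i - 1] + seq[i:]

def insert_leftmost(seq, i):
    # remove the leftmost element and re-insert it so it lands at position i
    # (1-based), i.e. at index i - 1
    return seq[1:i] + [seq[0]] + seq[i:]
```

For example, right-cycling the length-3 prefix of `[1, 2, 3, 4, 5]` gives `[3, 1, 2, 4, 5]`, and inserting the new leftmost element `3` back at position 3 restores the original list.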
### Units of Ring of Mappings with Unity
Let $\struct {R, +, \circ}$ be a ring with unity $1$.
Let $f : S \to U_R$ be a mapping into the set of units $U_R$ of $R$.
Then $f$ is a unit in the ring of mappings from $S$ to $R$
and:
the inverse of $f$ is the mapping defined by:
$f^{-1} \in R^S : \forall x \in S: \map {\paren {f^{-1} } } x = \paren {\map f x}^{-1}$
### Commutativity of Ring of Mappings
Let $\struct {R, +, \circ}$ be a commutative ring.
From Structure Induced by Commutative Ring Operations is Commutative Ring, the ring of mappings from $S$ to $R$ is a commutative ring.
## Also denoted as
It is usual to use the same symbols for the induced operations on the ring of mappings from $S$ to $R$ as for the operations that induce them. | {
"domain": "proofwiki.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806529525571,
"lm_q1q2_score": 0.8295147837277319,
"lm_q2_score": 0.8459424334245618,
"openwebmath_perplexity": 169.3990719259325,
"openwebmath_score": 0.8983725309371948,
"tags": null,
"url": "https://proofwiki.org/wiki/Definition:Ring_of_Mappings"
} |
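As a quick check of the pointwise formula (writing $\circ$ also for the induced operation on $R^S$, per the remark on notation): for every $x \in S$,

```latex
\map {\paren {f \circ f^{-1} } } x
  = \map f x \circ \map {\paren {f^{-1} } } x
  = \map f x \circ \paren {\map f x}^{-1}
  = 1
```

so $f \circ f^{-1}$ is the constant mapping at $1$, which is the unity of the ring of mappings from $S$ to $R$.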
c#, performance, programming-challenge, strings, hash-map
if (x.Length != y.Length)
{
return false;
}
for (int i = 0; i < x.Length; i++)
{
if (x[i] != y[i])
{
return false;
}
}
return true;
}
public int GetHashCode(char[] obj)
{
return 0;
}
}
}
Please review for performance. I don't like copying the dictionary into IList<IList<string>>. Is there a better way to do so, considering this is the desired API defined by the question? Also: deriving from IEqualityComparer versus EqualityComparer.
The MSDN docs say the following: | {
"domain": "codereview.stackexchange",
"id": 35706,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, programming-challenge, strings, hash-map",
"url": null
} |
standard error of the sampling distribution of the sample proportion
Standard Error Of The Sampling Distribution Of The Sample Proportion. If samples of size $n$ are repeatedly randomly drawn from a population and the proportion of successes in each sample, $\widehat p$, is recorded, then the distribution of the sample proportions (i.e. the sampling distribution of $\widehat p$) can be approximated by a normal distribution, given that both $n \times p$ and $n \times (1-p)$ are sufficiently large. This is known as the Rule of Sample Proportions. Note that some textbooks use a different minimum for these products. | {
"domain": "winaudit.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.986777175136814,
"lm_q1q2_score": 0.8044528173522394,
"lm_q2_score": 0.8152324938410783,
"openwebmath_perplexity": 1098.8971632177004,
"openwebmath_score": 0.925527811050415,
"tags": null,
"url": "http://winaudit.org/guides/sample-proportion/standard-error-of-sample-proportion-formula.html"
} |
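The standard error itself is $\sqrt{p(1-p)/n}$; as a one-line sketch (helper name is illustrative):

```python
import math

def proportion_se(p, n):
    # standard error of the sampling distribution of p-hat
    # for samples of size n from a population with success proportion p
    return math.sqrt(p * (1.0 - p) / n)
```

For instance, with $p = 0.5$ and $n = 100$ the standard error is $0.05$.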
quantum-mechanics, hilbert-space, conventions, complex-numbers, coherent-states
Title: Normalization of overcomplete coherent states as basis

In a complete orthonormal basis $\{ | x \rangle \}$, we often use the completeness relation:
$$\sum_{x} | x \rangle \langle x | = \mathbb{I}$$
if the basis is continuous we use the natural extension
$$\int | x \rangle \langle x | dx = \mathbb{I}.$$
This makes sense only if the choice of basis is complete. What if it is overcomplete? As an example, consider the overcomplete basis of coherent states. How does one construct the identity from these? I have seen the normalization
$$\frac{1}{\pi}\int | \alpha \rangle \langle \alpha | d^2\alpha = \mathbb{I}$$
(the $d^2$ implies integration over the real and imaginary parts of $\alpha$ separately.)
How does one derive the $\frac{1}{\pi}$ factor?
I thought you could get it by computing
$$Tr(\int | \alpha \rangle \langle \alpha | d^2\alpha) = \int Tr(| \alpha \rangle \langle \alpha |) d^2\alpha = \int \langle \alpha | \alpha \rangle d^2\alpha,$$ | {
"domain": "physics.stackexchange",
"id": 69465,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, hilbert-space, conventions, complex-numbers, coherent-states",
"url": null
} |
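A standard way to get the $\frac{1}{\pi}$ (a sketch, using the number-basis expansion $|\alpha\rangle = e^{-|\alpha|^2/2} \sum_n \frac{\alpha^n}{\sqrt{n!}} |n\rangle$ and polar coordinates $\alpha = r e^{i\theta}$): the angular integral kills the off-diagonal terms, $\int_0^{2\pi} e^{i(n-m)\theta}\, d\theta = 2\pi \delta_{nm}$, and then

```latex
\int | \alpha \rangle \langle \alpha |\, d^2\alpha
  = \sum_{n,m} \frac{| n \rangle \langle m |}{\sqrt{n!\, m!}}
      \int e^{-|\alpha|^2} \alpha^n (\alpha^*)^m \, d^2\alpha
  = \sum_{n} \frac{| n \rangle \langle n |}{n!}
      \, 2\pi \int_0^\infty e^{-r^2} r^{2n+1} \, dr
  = \sum_{n} \frac{| n \rangle \langle n |}{n!} \, \pi\, n!
  = \pi\, \mathbb{I},
```

since $\int_0^\infty e^{-r^2} r^{2n+1}\, dr = \tfrac{n!}{2}$ (substitute $u = r^2$). Dividing by $\pi$ gives the stated resolution of the identity.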
machine-learning, python, data-mining, sentiment-analysis, twitter
Now, you mentioned bigrams and unigrams. An n-gram (e.g. a 1-gram, or unigram) is just a sequence of n tokens. So what we produced a while ago was just a unigram list of tokens. If you want bigrams, then you'd take 2 tokens at a time. An example output of a bigram list of tokens would be
['I do', 'do not', 'not like', 'like the', 'the views', 'views of', 'of @Candidate1', '@Candidate1 on', 'on #Topic1', '#Topic1 .', '. Too', 'Too conservative', 'conservative !', '! !', '! I', "I can't", "can't stand", 'stand it', 'it !',] | {
"domain": "datascience.stackexchange",
"id": 3124,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, python, data-mining, sentiment-analysis, twitter",
"url": null
} |
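Producing such n-grams is a one-liner; here is a minimal sketch (the helper name `ngrams` is illustrative, not from any particular library):

```python
def ngrams(tokens, n):
    # slide a window of length n across the token list and join each window
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
```

With `n=1` this reproduces the unigram list; with `n=2` it yields bigrams like the example above.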
geology, mantle, drilling
See page 102 in the linked document.
Typical rotary drill heads designed for vertical drilling can be steered to a small degree by using a simple concept: point the bit in the direction that one wants to drill. A common way to achieve this is by the use of a bend near the bit in a downhole steerable mud motor. The bend points the bit in a direction different from the axis of the well bore.
Spinning the drill head at a different rate than the drill stem allows the bit to drill in the direction it points.
Reference: PDF document gives an excellent technical summary of the project. | {
"domain": "earthscience.stackexchange",
"id": 999,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "geology, mantle, drilling",
"url": null
} |
$(n-1)^{3}$ This upper limit of nine cubes cannot be reduced because, for example, 23 cannot be written as the sum of fewer than nine positive cubes: $23 = 2 \cdot 2^3 + 7 \cdot 1^3$. It is conjectured that every integer (positive or negative) not congruent to ±4 modulo 9 can be written as a sum of three (positive or negative) cubes in infinitely many ways. | {
"domain": "thecorporategiveaways.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9615338035725359,
"lm_q1q2_score": 0.8074326036380256,
"lm_q2_score": 0.8397339756938818,
"openwebmath_perplexity": 1015.8618147157688,
"openwebmath_score": 0.6819213032722473,
"tags": null,
"url": "https://thecorporategiveaways.com/journal/cube-rule-math-8ba591"
} |
machine-learning, class-imbalance, kaggle, smote
CrossValidated: https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he
Frank Harrell: https://twitter.com/f2harrell/status/1062424969366462473
Abhishek: https://twitter.com/abhi1thakur/status/1480525555527258122?t=guznAsPg_LbF_H-Qh1tOpg&s=08
Carlos Mougan https://twitter.com/CarlosMougan/status/1475756319999205377
JFPuget https://twitter.com/JFPuget/status/1475769513480179717
(UPDATE)
I found a paper that studies this problem:
"To SMOTE or not to SMOTE" https://arxiv.org/abs/2201.08528 | {
"domain": "datascience.stackexchange",
"id": 11993,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, class-imbalance, kaggle, smote",
"url": null
} |
organic-chemistry, stoichiometry, titration, uv-vis-spectroscopy, ligand-field-theory
I have no idea what to do.
Thank you. I can advise only on general methods; I cannot say what you can afford with respect to the particular solutions, regarding their stability and interaction.
If the volume of available solutions is not the limiting factor, the easiest way is to prepare extra solution mixture for each measurement. | {
"domain": "chemistry.stackexchange",
"id": 17129,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, stoichiometry, titration, uv-vis-spectroscopy, ligand-field-theory",
"url": null
} |
Next, $3*8+1=25$, so $a_{25} = 2a_8+1 = 15$, and in fact $a_n=15$ for each $n$ between $25$ and $40$ inclusive. Continue in that way to see that $a_n=31$ for each $n$ between $76$ and $121$, $a_n=63$ for each $n$ between $229$ and $364$, and $a_n=127$ for each $n$ between $688$ and $1093$ (note $3 \cdot 229 + 1 = 688$ and $3 \cdot 364 + 1 = 1093$). In particular, $a_{1000} = 127.$
#### chisigma
##### Well-known member
Hi chisigma, thank you so much for replying to this problem. But do you mean to say that we could, in the next step, determine the integer value of $a_{1000}$?
We can extrapolate the result writing...
$\displaystyle \ln (y+1) \sim \ln 2x \frac{\ln 2}{\ln 3} \sim .6309 \ln 2x$ (1)
... and that leads to...
$a_{121} \sim 30.9$
$a_{364} \sim 62.9$
$a_{1093} \sim 126.9$
... so that we could conclude that...
$a_{1000} \sim 119.9$
Unfortunately the 'extra information' $a_{2001}= 200$ is not coherent with (1) because it should be $a_{2001} \sim 186.4$ ...
Kind regards
$\chi$ $\sigma$ | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9658995742876885,
"lm_q1q2_score": 0.8006716461500398,
"lm_q2_score": 0.8289388125473629,
"openwebmath_perplexity": 628.4474312070223,
"openwebmath_score": 0.9056928157806396,
"tags": null,
"url": "https://mathhelpboards.com/threads/find-the-value-of-a_-1000.4297/"
} |
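The block propagation described in the thread can be sketched in Python (assuming, as stated, that each constant block $[l, h]$ with value $v$ maps to $[3l+1,\, 3h+1]$ with value $2v+1$):

```python
def a(n):
    # constant blocks: [lo, hi] with value v maps to
    # [3*lo + 1, 3*hi + 1] with value 2*v + 1
    lo, hi, v = 25, 40, 15          # seed block: a_n = 15 for 25 <= n <= 40
    while n > hi:
        lo, hi, v = 3 * lo + 1, 3 * hi + 1, 2 * v + 1
    if lo <= n <= hi:
        return v
    raise ValueError("n is not inside one of the listed constant blocks")
```

Iterating gives the blocks $[76, 121] \mapsto 31$, $[229, 364] \mapsto 63$, $[688, 1093] \mapsto 127$, and since $688 \le 1000 \le 1093$ the sketch confirms $a_{1000} = 127$.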
jupiter, newtonian-telescope, telescope-lens
Collimation - look this up, it just means adjusting the two mirrors
so your eye is looking right down the tube in a straight line. For a
long focal length scope like yours, it's not likely to be a problem
unless one of them is wildly out of line.
Tube currents/thermal behaviour - on almost any cold night, when
the tube and main mirror are still warm from being indoors, rising
air currents in the tube will mess up your image and make it
shimmery, at high power. Low power will look OK. It might take an hour or so for the image to improve (a guess)
Anyway, in summary my guess is changing the eyepieces straight away won't make a dramatic difference. Eyepiece makers will say otherwise of course :) | {
"domain": "astronomy.stackexchange",
"id": 1407,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "jupiter, newtonian-telescope, telescope-lens",
"url": null
} |
stoichiometry
I want to find the molar mass of the compound. I have tried so far:
$$ m = \pu{3.26 g} = \pu{0.00326 kg}$$
Since it has $3$ atoms of $\ce{Fe}$ and $4$ atoms of an unknown substance, therefore:
$$3 + 4 = 7~\text{atoms},\\
\pu{1 mol} = \pu{6.022* 10^23 atoms}\\
\frac{7}{\pu{6.022* 10^23}} = \pu{1.16 * 10^-23}$$
As we know: $M = m / n,$ I tried to divide $0.00326$ by $\pu{1.16 * 10^-23}$ and I obtained $\pu{2.79429 * 10^19}$, but the correct answer is $\pu{231.43 g/mol}$.
What have I done wrong?

Let $\ce Z$ denote the unknown element.
The total amount of iron atom in the compound is,
$$n(\ce{Fe})=\frac{\pu{2.36g}}{\pu{55.845 g\cdot mol^{-1}}}=\pu{0.0423 mol},$$
The molecule consists of 3 iron atoms and 4 other atoms, so the amount of the unknown atom is,
$$n(\ce Z)=\pu{0.0423 mol}\times \frac43=\pu{0.0564 mol},$$
The molar mass of $\ce Z$ is,
$$M(\ce Z)=\frac{\pu{3.26g}-\pu{2.36g}}{\pu{0.0564 mol}}=\pu{15.97 g\cdot mol^{-1}}$$
So $\ce Z$ is $\ce{O}$. The molar mass of the compound is then $3 \times \pu{55.845 g\cdot mol^{-1}} + 4 \times \pu{15.97 g\cdot mol^{-1}} \approx \pu{231.43 g\cdot mol^{-1}}.$ | {
"domain": "chemistry.stackexchange",
"id": 9544,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "stoichiometry",
"url": null
} |
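The answer's arithmetic can be checked with a short script (variable names are illustrative):

```python
M_FE = 55.845                      # g/mol, molar mass of iron
m_total, m_fe = 3.26, 2.36         # g, sample mass and mass of Fe in it
n_fe = m_fe / M_FE                 # mol of Fe, about 0.0423
n_z = n_fe * 4 / 3                 # mol of unknown Z: 4 atoms per 3 Fe
M_z = (m_total - m_fe) / n_z       # about 15.97 g/mol, i.e. oxygen
M_compound = 3 * M_FE + 4 * M_z    # about 231.43 g/mol
```

Note the asker's mistake: dividing the sample mass by the mass of 7 atoms gives the number of formula units, not the molar mass; molar mass requires the amount in moles, as computed here.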
## Is every convergent sequence Cauchy?
Every convergent sequence (with limit s, say) is a Cauchy sequence, since, given any real number ε > 0, beyond some fixed point, every term of the sequence is within distance ε/2 of s, so any two terms of the sequence are within distance ε of each other.
## How do you find if a function is bounded?
If f is real-valued and f(x) ≤ A for all x in X, then the function is said to be bounded (from) above by A. If f(x) ≥ B for all x in X, then the function is said to be bounded (from) below by B. A real-valued function is bounded if and only if it is bounded from above and below.
What does it mean when a function is bounded below?
Functions Bounded Below. Definition: A function f is bounded below if there is some number b that is less than or equal to every number in the range of f. Answers is in terms of y-values. Any such number b is called a lower bound of f.
### Is ln x bounded?
For $1 \le x < \infty$, we know $\ln x$ can be bounded as follows: $\ln x \le \frac{x-1}{\sqrt{x}}$. | {
"domain": "technical-qa.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534349454033,
"lm_q1q2_score": 0.8069403441874828,
"lm_q2_score": 0.8221891283434876,
"openwebmath_perplexity": 580.5350384796975,
"openwebmath_score": 0.8924967646598816,
"tags": null,
"url": "https://technical-qa.com/can-a-sequence-be-non-decreasing-and-non-increasing/"
} |
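That bound can be spot-checked numerically (a sanity check, not a proof):

```python
import math

# check ln(x) <= (x - 1) / sqrt(x) at a few points of [1, infinity);
# equality holds at x = 1, and the gap widens as x grows
for x in (1.0, 1.5, 2.0, 10.0, 100.0, 1e6):
    assert math.log(x) <= (x - 1.0) / math.sqrt(x)
```

Of course $\ln x$ itself is unbounded on $[1, \infty)$; the inequality only controls *how fast* it grows.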
python, python-3.x, game, dice
elif NoNumber == True:
print("\nYour password does not have numbers.")
if NoSymbol == True:
print("\nYour password does not have symbols, please try again.")
N = 0
N2 = 1
elif placeholder == True:
placeholder1 = True
N = 0
N2 = 1
elif NoSymbol == True:
print("\nYour password does not have symbols, please try again.")
N = 0
N2 = 1
elif NoLowercase == True:
print("\nYour password does not have lowercase letters.") | {
"domain": "codereview.stackexchange",
"id": 36625,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, game, dice",
"url": null
} |
context-free, formal-grammars, parsers, lr-k
I am confused because Wikipedia cites this grammar as an example of an LR(0) grammar and constructs an LR(0) parsing table for this grammar, while Grammophone reports shift-reduce conflicts and marks some cells in the grammar's LR(0) parsing table red. (The dots are not terminals; rather, they denote end of production; Grammophone requires this.)
What's going on here? Before applying the LR algorithm to a grammar, the grammar must be "augmented" by adding the rule
$S' \to S \$$
where $S$ is the original start symbol and $S'$ and $\$$ are symbols not in the grammar. $S'$ becomes the start symbol for the augmented grammar, and the string to be parsed is augmented by appending a $\$$ at the end. A reduce action for this newly added rule is written as "accept", which has the side effect of terminating the parse.
The table shown in the Wikipedia article is for the augmented grammar, as you can see from its parsing table, which includes "accept" actions and the $\$$ end marker symbol. | {
"domain": "cs.stackexchange",
"id": 14173,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "context-free, formal-grammars, parsers, lr-k",
"url": null
} |
php, laravel, eloquent
/**
* sendEmail
* Function mapped to Laravel route. Defines variable arrays and calls Email Class executeEmail.
*
* @param Request $request Request object passed via AJAX from client.
*/
public function sendEmail(Request $request) {
try {
$templateConfig = new TemplateConfiguration(
array(
'templateName'=>$request->input('emailTemplate'),
'companyName'=>$request->input('companyText'),
'projectName'=>$request->input('projectData')['projectName'],
'projectId'=>intval($request->input('projectData')['projectId'])
)
);
$currentProject = json_decode(Project::where('PRJ_Id',$templateConfig->getProjectId())->get(),true);
$periodInWeeks = 4;
$emailConfig = new EmailConfiguration(
array(
'host'=>$request->input('mailServerText'),
'port'=>$request->input('mailPortText'), | {
"domain": "codereview.stackexchange",
"id": 21237,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, laravel, eloquent",
"url": null
} |
newtonian-mechanics, forces, collision
Title: What's the safest place to sit on a bus in case of a crash? We shall talk about the average city bus.
Average bus -> ~13000 kg. I checked multiple sources, not only the one given. Passengers' weight is neglected, given the heavy rounding of the bus weight anyway.
Let's consider multiple probable cases.
Bus crashes into something stationary in front of it. Bus probably tried to stop beforehand.
Bus crashes into something moving. E.g. A car that suddenly came from left/right.
A light car crashes into the bus from left/right. In the front part of the bus.
A light car crashes into the stationary bus from behind.
From multiple searches I might put the average car weight at ~2000 kg | {
"domain": "physics.stackexchange",
"id": 47531,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, collision",
"url": null
} |
c++, programming-challenge, c++11
Keep in mind the Single Responsibility Principle:
The Single Responsibility Principle states:
that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by that module, class or function. | {
"domain": "codereview.stackexchange",
"id": 37129,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, programming-challenge, c++11",
"url": null
} |
newtonian-mechanics, projectile, free-fall
If you throw a ball 3 meters above your head, your hand stops applying a force to the ball at the moment the ball leaves your hand. At that moment, there are no forces acting on the ball any more except for gravity, so it meets the definition of being in "free fall", even though at that point the direction of the ball's velocity is away from the Earth, and hence the ball isn't "falling" as that word is used in common parlance.
To answer your follow-up question, no, the ball doesn't "carry force with it" after it leaves your hand. After the ball leaves your hand, the ball has kinetic energy and momentum and an upward velocity due to the force that had been on the ball, but there is no force acting on the ball any more. | {
"domain": "physics.stackexchange",
"id": 83595,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, projectile, free-fall",
"url": null
} |
robotic-arm, ros, moveit, ros-melodic, robot
My suggestion would be to somehow wrap the "clearpath servos" in a FollowJointTrajectory action server. MoveIt can interface with that directly (using a MoveItSimpleControllerManager) making the whole system almost trivial to integrate (if you're using ur_modern_driver for the UR10, that also has a FollowJointTrajectory action server). Use ros_control for the servo interface, then configure it with a joint_trajectory_controller.
One thing to keep in mind is that with two action servers, motions will not be synchronised between the gantry and the UR10.
If that would be desired/required, you would have to probably convert ur_modern_driver (assuming you're using that) into something compatible with the combined_robot_hw variant of hardware_interfaces. That would allow you to expose all joints of the gantry+robot as a single set to the joint_trajectory_controller. ur_modern_driver was not written with this in mind, so that may not be straightforward. | {
"domain": "robotics.stackexchange",
"id": 33690,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "robotic-arm, ros, moveit, ros-melodic, robot",
"url": null
} |
The $L_1$ family: used for features where absolute differences should be measured accurately. Minkowski Distance. This distance is a generalization of the $L_1$, $L_2$, and max distances. Minkowski distance is used for distance similarity of vectors. Given $\delta: E\times E \longrightarrow \mathbb{R}$ a distance function between elements of a universe set $E$, the Minkowski distance is a function $MinkowskiDis:E^n\times E^n \longrightarrow \mathbb{R}$ defined as $MinkowskiDis(u,v)=\left(\sum_{i=1}^{n}\delta(u[i],v[i])^p\right)^{1/p},$ where $p$ is a positive integer. Note that either of X and Y can be just a single vector -- then the colwise function will compute the distance between this vector and each column of the other parameter. Note that Manhattan distance is also known as city block distance. Let's say we want to calculate the distance, $d$, | {
"domain": "scandex.lt",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9504109728022221,
"lm_q1q2_score": 0.8274252063598478,
"lm_q2_score": 0.8705972784807408,
"openwebmath_perplexity": 1662.0202093542532,
"openwebmath_score": 0.8961589336395264,
"tags": null,
"url": "http://scandex.lt/2xv8b67n/b9dae4-minkowski-distance-in-r"
} |
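A plain-Python sketch with $\delta(a, b) = |a - b|$ (function name is illustrative):

```python
def minkowski(u, v, p):
    # p = 1 gives Manhattan (city block) distance, p = 2 Euclidean;
    # as p grows the result approaches the max (Chebyshev) distance
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)
```

For the vectors `[0, 0]` and `[3, 4]`, `p=1` gives `7` and `p=2` gives the Euclidean distance `5`.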
computability, turing-machines, computable-analysis
(Calculation of approximations) There is a Turing machine $M$ which on input $n$ terminates and outputs a pair of integers $(a, b)$ such that $|x - a/b| < 2^{-n}$.
(Calculation of digits) There exists a Turing machine $M$ which runs forever and writes out the digits of $x$ on an infinite write-once tape. That is, once it writes a digit, it cannot change it.
(Calculation of neighborhoods) There exists a Turing machine $M$ which on input $(p,q)$, where $p$ and $q$ are rational numbers, terminates if, and only if, $p < x < q$.
There are many other equivalent definitions.
We can also ask about various other kinds of computability, and we shall discover a hierarchy of classes of reals; see for instance X. Zheng's Classification of the Computable Approximations by Divergence Boundings. One can also study subclasses of computable reals, see again X. Zheng's work.
For instance, we can try these: | {
"domain": "cstheory.stackexchange",
"id": 3860,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computability, turing-machines, computable-analysis",
"url": null
} |
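The first definition can be illustrated for $x = \sqrt 2$ by interval bisection (a sketch; `sqrt2_approx` is an illustrative name, and exact rationals stand in for the machine's integer pair output):

```python
from fractions import Fraction

def sqrt2_approx(n):
    # definition 1 for x = sqrt(2): return a rational within 2**-n of x,
    # found by bisecting [1, 2] on the sign of mid*mid - 2
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo >= Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo  # a/b with |sqrt(2) - a/b| < 2**-n
```

Since $\sqrt 2$ stays inside $[lo, hi]$ at every step and the interval halves each iteration, the returned rational meets the $2^{-n}$ bound.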
a system of equations. Here we are using SciPy. Solving a System of Equations with NumPy / SciPy: with one simple line of Python code, after a few lines to import NumPy and define our matrices, we can get a solution for $X$. To solve a system of ODEs, one can set up a Python script with `solve_ivp` as the ODE solver. All MATLAB® ODE solvers can solve systems of equations of the form $y' = f(t, y)$, or problems that involve a mass matrix, $M(t, y)\, y' = f(t, y)$. An example of a simple numerical solver is the Euler method. Import the required modules with `import numpy as np` and `import matplotlib.pyplot as plt`; `%matplotlib inline` makes plots appear inside the notebook. Consider the following equation: $\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}$. The `scipy.integrate` package | {
"domain": "martinezgebaeudereinigungkoeln.de",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9518632234212403,
"lm_q1q2_score": 0.8032727162840214,
"lm_q2_score": 0.843895106480586,
"openwebmath_perplexity": 457.9827500926302,
"openwebmath_score": 0.5354330539703369,
"tags": null,
"url": "http://martinezgebaeudereinigungkoeln.de/solve-system-of-equations-python.html"
} |
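A minimal example of the "one line" solve for a small linear system (illustrative values, not from the page):

```python
import numpy as np

# illustrative 2x2 system: 3*x1 + x2 = 9 and x1 + 2*x2 = 8
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)  # the one line: solves A @ x = b
```

Here the solution is $x_1 = 2$, $x_2 = 3$, which can be verified by substitution.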
javascript, html, css, json, pagination
.right{
width: calc(100% - 100px);
float: right;
display: flex;
flex-wrap: wrap;
justify-content: flex-start;
align-items: center;
min-height: 90px;
}
.line{
width: 100%;
}
<body id="body">
<script id="item" type="text/template">
<div class="item-inner">
<div class="left">__CharCode__</div>
<div class="right">
<div class="line name">__Name__</div>
<div class="line value">__Value__</div>
</div>
</div>
</script>
</body>
PLUNKER window.onscroll should be throttled. See more info here.
if (xhr.status === 200) {
/* main code*/
} else {
/*error handling*/
} | {
"domain": "codereview.stackexchange",
"id": 28039,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, html, css, json, pagination",
"url": null
} |
performance, algorithm, c, memory-optimization, compression
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include "../include/huffman.h"
/* Interface functions */
int huffman_encode(uint8_t * input, uint8_t ** output, uint32_t decompressed_length)
{
size_t freq[256] = { 0 };
uint16_t encoded_bytes = 0;
/* Frequency analysis */
for(size_t i = 0; i < decompressed_length; i++)
freq[input[i]]++;
for(uint16_t i = 0; i < 256; i++)
if(freq[i])
encoded_bytes++;
/* Handle strings with either one unique byte or zero bytes */
if(!encoded_bytes) {
return INPUT_ERROR;
} else if(encoded_bytes == 1) {
for(uint16_t i = 0; i < 256; i++) {
if(freq[i]) {
++freq[i > 0 ? i - 1 : i + 1];
}
}
}
/* Construct a Huffman tree from the frequency analysis */
huffman_node_t * head_node = NULL; | {
"domain": "codereview.stackexchange",
"id": 38850,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, algorithm, c, memory-optimization, compression",
"url": null
} |
python, beginner, python-2.x, time-limit-exceeded
if past_iter > 6:
...
return 6 - past_iter + 10
...
return -1 * (6 - past_iter + 10)
Where did all of these numbers come from? What are is_leaf() and value() doing? At the least, leave comments to explain the numbers. It would be better if you didn't use magic numbers.
Whenever you have if ...: return True else: return False, you can change it to return ... or return bool(...). In is_leaf(), you have such a pattern. I'll give an example:
if x:
return True
else:
return False
If x is True, we return True. If x is False, we return False. Do you see a pattern, we are just returning whatever x is. You might need to use bool(...), however, if x is not either True or False, but is Truthy or Falsey. That is, if x is 4, we don't want to return 4; we want to return True. Similarly, if x is 0, we want to return False.
return child_nodes[child_node_values.index(max(child_node_values))] | {
"domain": "codereview.stackexchange",
"id": 19755,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-2.x, time-limit-exceeded",
"url": null
} |
ros
Originally posted by KruseT with karma: 7848 on 2012-10-08
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Lorenz on 2012-10-08:
Relevant link
Comment by jbohren on 2012-10-09:
Except some people are still getting used to the idea of workspaces, remember how long it took me to drink the kool-aid... | {
"domain": "robotics.stackexchange",
"id": 11258,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
will become less frequent as $n$ becomes large. Now, recall that for almost sure convergence, we're analyzing the statement that the realized sequence itself converges: there won't be any failures (however improbable) in the averaging process. This last guy explains it very well.

Difference between a.s. convergence and convergence in probability:

- Almost sure convergence implies that almost all sequences converge.
- Convergence in probability does not imply convergence of sequences.
- Latter example: $X_n = X_0 Z_n$, where $Z_n$ is Bernoulli with parameter $1/n$. One can show it converges in probability, $P(|X_n - X_0| < \varepsilon) = 1 - \frac{1}{n} \to 1$, but for almost all sequences $\lim_{n\to\infty} x_n$ does not exist.

Almost sure convergence does not imply complete convergence. Some people also say that a random variable converges almost everywhere to indicate almost sure convergence. In this section we shall consider some of the most important modes of convergence: convergence in $L^r$, convergence in probability, and convergence with probability one (a.k.a. almost sure convergence). By itself the strong | {
"domain": "marinus.pl",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226314280634,
"lm_q1q2_score": 0.8098919819923017,
"lm_q2_score": 0.8289388146603365,
"openwebmath_perplexity": 675.5514698122238,
"openwebmath_score": 0.9525541067123413,
"tags": null,
"url": "http://marinus.pl/kq8nrwq/almost-sure-convergence-vs-convergence-in-probability-90ad89"
} |
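The Bernoulli example above can be simulated to see why convergence in probability does not give almost sure convergence: the per-term failure probability 1/n shrinks, yet failures keep occurring arbitrarily late. A small illustrative sketch:

```python
import random

random.seed(0)

def last_failure(n_max):
    """Draw independent Z_n ~ Bernoulli(1/n) and return the last n
    at which Z_n = 1 (a 'failure').  P(Z_n = 1) = 1/n -> 0, so
    Z_n -> 0 in probability; but sum(1/n) diverges, so by the second
    Borel-Cantelli lemma failures occur infinitely often a.s."""
    last = 0
    for n in range(1, n_max + 1):
        if random.random() < 1.0 / n:
            last = n
    return last

late = last_failure(100_000)   # typically a large n: no settling down
assert 0 <= late <= 100_000
```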
python, beginner, database
def update(self, _id, **updates):
"""Updates a row of the table. Updates must map to the table definition."""
with Session(self.engine) as session:
entry = self.read(_id)
for key, val in updates.items():
setattr(entry, key, val)
session.add(entry)
session.commit()
def delete(self, _id):
"""Delete a row of the table."""
with Session(self.engine) as session:
entry = self.read(_id)
session.delete(entry)
session.commit()
test_db.py
import sqlite3
import pytest
from projects.db import DB, Account, Project
@pytest.fixture
def account_db(tmp_path):
db = DB(url=f"sqlite:///{tmp_path}/database_account.db", table=Account, echo=True)
db.create_metadata() | {
"domain": "codereview.stackexchange",
"id": 45209,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, database",
"url": null
} |
• @Aksakal The 1st ex. would also work if a 100 changed to a -100. Likewise in the 2nd a number at the median could shift by 10. I felt adding a new value was simpler and made the point just as well. Jan 8 at 20:39
• (I upvoted for the opening sentence "I'm going to say no, there isn't a proof the median is less sensitive than the mean since it's not always true. At least not if you define "less sensitive" as a simple "always changes less under all conditions". ", and because you recognized your own counterexample "seems like very fake data.")
– Stef
Jan 9 at 14:59 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232940063591,
"lm_q1q2_score": 0.8103003799056899,
"lm_q2_score": 0.8244619199068831,
"openwebmath_perplexity": 456.75374665749894,
"openwebmath_score": 0.8301829099655151,
"tags": null,
"url": "https://stats.stackexchange.com/questions/559659/why-is-the-median-less-sensitive-to-extreme-values-compared-to-the-mean"
} |
python, strings, datetime, beautifulsoup
Title: Remove (not extract) timestamp from HTML string I've got strings coming from irregularly and ugly formatted HTML sites, that contain a timestamp. I am interested in removing the timestamp entirely and get all the rest.
from bs4 import BeautifulSoup
date1 = '<P><SPAN STYLE="font-family: Univers" STYLE="font-size: 11pt"><STRONG></STRONG></SPAN><SPAN STYLE="font-family: Univers" STYLE="font-size: 11pt">10:15 AM ARVIND KRISHNAMURTHY, Northwestern University</SPAN></P>'
date2 = """<tr><td style="width:1.2in;padding:0in 5.4pt 0in 5.4pt" valign="top" width="115"><p class="MsoNormal"><span style="font-size:11.0pt;font-family:Univers"><span style="mso-spacerun: yes"> </span>8:45 a.m.<o:p></o:p></span></p></td><td style="width:5.45in;padding:0in 5.4pt 0in 5.4pt" valign="top" width="523"><p class="MsoNormal"><span style="font-size:11.0pt;font-family:Univers">RICARDO CABALLERO, MIT and NBER<o:p></o:p></span></p></td></tr>""" | {
"domain": "codereview.stackexchange",
"id": 16506,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, strings, datetime, beautifulsoup",
"url": null
} |
bash, awk
Title: How to extract all paired-end fastq.gz files from multiple subdirectories to a single folder I have all file names for each paired-end fastq pair as shown below. Could you please suggest how I can extract all fastq.gz files into a single folder?
sample name:
188833984]$ ls
Leaf_T1_FD_R10_L001-ds.3b884c360b1e4ae185408a613b90a3bc
Leaf_T1_FD_R2_L001-ds.7db8eb7e3426486db549426601b3a0bd
Leaf_T1_FD_R3_L001-ds.147177ecc03a46ccbbce3162d1185a0a
pair end file within sample name Leaf_T1_FD_R10_L001-ds.3b884c360b1e4ae185408a613b90a3bc
188833984]$ cd Leaf_T1_FD_R10_L001-ds.3b884c360b1e4ae185408a613b90a3bc
Leaf_T1_FD_R10_L001-ds.3b884c360b1e4ae185408a613b90a3bc]$ ls
Leaf-T1-FD-R10_S73_L001_R1_001.fastq.gz
Leaf-T1-FD-R10_S73_L001_R2_001.fastq.gz | {
"domain": "bioinformatics.stackexchange",
"id": 1022,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bash, awk",
"url": null
} |
kinetics
What amounts of $\ce{A_2}$, $\ce{B_2}$, and $\ce{AB}$ can be expected to be produced?
To obtain the amounts, should probability theory be used? E.g., amount of $\ce{AB}$ equals to probability that species $\ce{A}$, $\ce{B}$ will interact ("collide" or similar interpretation).
Assume the rates of the reactions are equal. Well, if you assume the rates are known and the reactions' order follows from stoichiometry (e.g. if they are elementary reactions), you can put the chemical kinetics into simple equations:
$$\frac{\mathrm da}{\mathrm dt} = -k_1 a(t)^2 - k_3 a(t)b(t) $$
$$\frac{\mathrm db}{\mathrm dt} = -k_2 b(t)^2 - k_3 a(t)b(t) $$
($t$ here being time, not temperature).
Knowing initial amounts or concentrations $a_0=a(t=0)$ and $b_0=b(t=0)$, you can pretty much integrate the system to find out what happens. | {
"domain": "chemistry.stackexchange",
"id": 67,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "kinetics",
"url": null
} |
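The coupled ODE system above can be integrated numerically; here is a minimal forward-Euler sketch with hypothetical rate constants and initial amounts (k1 = k2 = k3 = 1, a0 = b0 = 1 are assumptions, not from the original answer), tracking the product amounts alongside a(t) and b(t):

```python
# da/dt = -k1*a^2 - k3*a*b ;  db/dt = -k2*b^2 - k3*a*b
k1 = k2 = k3 = 1.0          # hypothetical, equal rate constants
a, b = 1.0, 1.0             # initial amounts a0, b0
a2 = b2 = ab = 0.0          # amounts of A2, B2, AB produced
dt, steps = 1e-4, 100_000
for _ in range(steps):
    r1 = k1 * a * a         # A consumed by 2A -> A2
    r2 = k2 * b * b         # B consumed by 2B -> B2
    r3 = k3 * a * b         # A and B consumed by A + B -> AB
    a2 += (r1 / 2) * dt     # each A2 uses two A
    b2 += (r2 / 2) * dt
    ab += r3 * dt
    a -= (r1 + r3) * dt
    b -= (r2 + r3) * dt

# mass balance: every A is either free, in A2 (twice), or in AB
assert abs((a + 2 * a2 + ab) - 1.0) < 1e-3
```

With equal rate constants and equal initial amounts the system stays symmetric, so a(t) = b(t) and a2 = b2 throughout.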
python, performance, parsing, logging
Title: Splitting a CAN bus log in .asc format I've written a quick script for a coworker to split a large CAN log into smaller chunks. (If you're not familiar with CAN, it's a communication protocol used by the ECUs in many cars.) I know where to split because I've inserted dummy CAN messages (with ID 0x00) at the start of each section, and one at the end of testing (which may be somewhere in the middle of the log) to tell me when to stop reading.
The log is in .asc or .csv format, and can be several gigabytes in size. Currently I can process a 1.5GB file in about 40 seconds, but I'm sure that can be improved. I'm looking more for advice on how to speed this up than to make it more Pythonic, but of course criticism is welcome in both areas.
Note: titles is a dictionary mapping section numbers to a particular string that needs to be added to the filename before saving. I can add the code for generating these, but I don't believe it's as relevant. | {
"domain": "codereview.stackexchange",
"id": 20643,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, parsing, logging",
"url": null
} |
c#, performance, file, compression
//Create the zip file name
for (int i = 0; i < randFileName.Length; i++)
{
randFileName[i] = chars[random.Next(chars.Length)];
}
string finalString = new String(randFileName);
Say("Starting file extraction..");
string day = DateTime.Now.ToString("MM-dd-yy ");
string userName = Environment.UserName;
string startDir = $"c:/users/{userName}/test_folder";
string zipDir = $"c:/users/{userName}/archive/{day}{finalString}.zip";
string dirName = $"c:/users/{userName}/archive"; | {
"domain": "codereview.stackexchange",
"id": 20102,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, file, compression",
"url": null
} |
$$\frac{\sqrt{27}}{\sqrt{x^{10}}} = \frac{3\sqrt{3}}{|x^5|}$$
However, using the product rule for absolute value and the fact that $$x^4 > 0$$, $$|x^5| =|x^4||x| = x^{4}|x|$$ and
$$\frac{3\sqrt{3}}{|x^5|} = \frac{3\sqrt{3}}{x^{4}|x|}$$
Finally, we are given that x < 0, so |x| = −x and we can write
$$\frac{3\sqrt{3}}{x^{4}|x|} = \frac{3\sqrt{3}}{x^{4}(−x)} = −\frac{3\sqrt{3}}{x^5}$$.
Exercise $$\PageIndex{1}$$
Use a calculator to first approximate $$\frac{\sqrt{5}}{\sqrt{2}}$$. On the same screen, approximate $$\sqrt{\frac{5}{2}}$$. Report the results on your homework paper.
Both $$\frac{\sqrt{5}}{\sqrt{2}} = \sqrt{\frac{5}{2}} \approx 1.58113883$$
Exercise $$\PageIndex{2}$$
Use a calculator to first approximate $$\frac{\sqrt{7}}{\sqrt{5}}$$. On the same screen, approximate $$\sqrt{\frac{7}{5}}$$. Report the results on your homework paper.
Exercise $$\PageIndex{3}$$ | {
"domain": "libretexts.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9911526459624252,
"lm_q1q2_score": 0.8302092449775559,
"lm_q2_score": 0.8376199653600372,
"openwebmath_perplexity": 356.65065799553645,
"openwebmath_score": 0.9020271897315979,
"tags": null,
"url": "https://math.libretexts.org/Bookshelves/Algebra/Intermediate_Algebra_(Arnold)/09%3A_Radical_Functions/9.03%3A_Division_Properties_of_Radicals"
} |
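The calculator checks in the exercises above can be reproduced in a few lines; this is simply a numerical confirmation of the quotient rule for radicals:

```python
import math

# sqrt(a)/sqrt(b) == sqrt(a/b) for positive a, b
assert abs(math.sqrt(5) / math.sqrt(2) - math.sqrt(5 / 2)) < 1e-12
assert abs(math.sqrt(5) / math.sqrt(2) - 1.58113883) < 1e-8
assert abs(math.sqrt(7) / math.sqrt(5) - math.sqrt(7 / 5)) < 1e-12
```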
error-correction
\begin{equation}
\{X_1 X_2, X_3 X_4, X_5 X_6\}
\end{equation}
Here I've labeled the qubits on the vertices from one to six. We can multiply these three stabilizers to find a weight six stabilizer: $$S_2 = X_1 X_2 X_3 X_4 X_5 X_6$$. This is the stabilizer on the hexagon.
Now if we measure ZZ (or YY) on the three type 1 edges, $S_2$ is still a stabilizer, because it commutes with each of the three measured operators ($Z_2Z_3, Z_4Z_5, Z_6Z_1$).
The qubits aren't stabilized by type 0 edges anymore because they don't commute with the measured operators. For example, $X_1 X_2$ and $Z_2 Z_3$ don't commute. | {
"domain": "quantumcomputing.stackexchange",
"id": 4453,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "error-correction",
"url": null
} |
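The commutation claims above are easy to verify mechanically: two Pauli strings commute iff they differ (with both non-identity) on an even number of qubits. A small sketch (single-letter site labels only; phases are not tracked):

```python
def commutes(p, q):
    """Pauli strings (e.g. 'XXIIII') commute iff the number of
    positions where both are non-identity and different is even."""
    anti = sum(1 for a, b in zip(p, q)
               if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

S2 = "XXXXXX"                              # X1 X2 X3 X4 X5 X6 on the hexagon
measured = ["IZZIII", "IIIZZI", "ZIIIIZ"]  # Z2Z3, Z4Z5, Z6Z1

assert all(commutes(S2, m) for m in measured)   # S2 stays a stabilizer
assert not commutes("XXIIII", "IZZIII")         # X1X2 vs Z2Z3 anticommute
```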
c++, game, sfml
//Version
std::string ver = "V0.3";
//Create Text and font variables
sf::Text text;
sf::Font font;
// for eliminating magic numbers
enum class Players
{
PlayerOne,
PlayerTwo
};
namespace Screen
{
enum Size
{
Width = 800,
Height = 600
};
}
//Make this class Drawable
class Paddel : public sf::Drawable
{
// member data preferred to start with the m prefix
float mSpeed;
sf::Vector2f mBorder;
sf::Vector2f mPosition;
sf::Vector2u mScreenSize;
sf::RectangleShape mShape;
public:
Paddel(Screen::Size screenSize, Players player)
// member data preferred to be initialized in the constructor's initializer list
: mScreenSize(Screen::Width, Screen::Height)
, mBorder(8, 6)
, mSpeed(5.f)
{
sf::Vector2f size = sf::Vector2f(20, 100); | {
"domain": "codereview.stackexchange",
"id": 12218,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, game, sfml",
"url": null
} |
np-hardness, tsp
Upper bound on query complexity and run time
Consider the following dynamic-programming algorithm.
For every subset $S\subseteq [n]$, define
$M(S)$ to be the minimum cost of any permutation of the indices in $S$.
The following recurrence holds:
$$M(S) = \begin{cases}
0 & ~(S=\emptyset) \\
\min_{i\in S} f\big(M(S\setminus\{i\}), i\big) & ~(S\ne \emptyset).
\end{cases}$$
The recurrence holds because of the second monotonicity requirement,
that is, $L \mapsto f(L, i)$ is non-decreasing for any fixed $i$.
(Because of this property,
in any minimum cost permutation $(\pi_1, \pi_2, \ldots, \pi_n)$ of $[n]$,
any prefix $(\pi_1, \pi_2, \ldots, \pi_k)$
can be replaced by the reordering of the prefix
that gives minimum cost $M(\{\pi_1, \pi_2, \ldots, \pi_k\})$,
without increasing the cost of $\pi$.)
The query complexity is $\sum_{S\subseteq [n]} |S| = n 2^{n-1}$.
The running time is proportional to this. | {
"domain": "cstheory.stackexchange",
"id": 5776,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "np-hardness, tsp",
"url": null
} |
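The recurrence above translates directly into a bitmask dynamic program; a minimal sketch, assuming the cost function f is supplied as a callable (names here are illustrative):

```python
from itertools import combinations

def min_cost(n, f):
    """Compute M([n]) via M(S) = min_{i in S} f(M(S \\ {i}), i),
    with M(empty) = 0 and subsets encoded as bitmasks."""
    M = {0: 0}
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            mask = 0
            for i in subset:
                mask |= 1 << i
            M[mask] = min(f(M[mask ^ (1 << i)], i) for i in subset)
    return M[(1 << n) - 1]

# Example: f(L, i) = L + (i + 1) is non-decreasing in L, and every
# order gives the same total, so the minimum is 1 + 2 + 3 + 4 = 10.
assert min_cost(4, lambda L, i: L + i + 1) == 10
```

The number of f-evaluations is exactly sum over subsets of |S| = n·2^(n-1), matching the query complexity stated above.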
quantum-mechanics, time-evolution, non-linear-systems
If so, could you please give me a quick description of such a system, its evolution equation, and some references? The conventional formalism of QM relies heavily on the theory of linear operators (spectral theorem,...), which would be hard to justify unless the linear structure on the Hilbert space is physically unambiguous, and in particular preserved under the time-evolution (see however udrv's comment below on a non-linear, yet consistent, quantum evolution).
While the Lagrangians used for interacting field theories (eg. the standard model) do lead to non-linear PDEs for the "wave-function", these equations are pathological in the context of QM (in particular, they do not support a healthy probabilistic interpretation, although this is not solely due to their non-linearity), and one has to go to QFT: roughly, quantizing a second time the wave-function allows to recover a linear system that supports the usual quantum interpretation. | {
"domain": "physics.stackexchange",
"id": 34346,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, time-evolution, non-linear-systems",
"url": null
} |
Theorem: If a sequence of real numbers is increasing and bounded above, then its supremum is its limit.
Proof: Let $$(a_n)_{n\in\mathbb N}$$ be a sequence of real numbers, and let $$\{a_n\}$$ be the set of terms in $$(a_n)_{n\in\mathbb N}$$. By the least-upper-bound property of the real numbers, $$l=\sup\{a_n\}$$ exists in $$\mathbb R$$. Now, for every $$\pmb{\varepsilon>0}$$ there is an $$\pmb{N>0}$$ such that $$\pmb{a_N>l-\varepsilon}$$, as otherwise $$\pmb{l-\varepsilon}$$ would be an upper bound of $$\pmb{\{a_n\}}$$, which contradicts the definition of $$\pmb{l}$$. Since $$(a_n)_{n\in\mathbb N}$$ is increasing, if $$n>N$$, then $$l\ge a_n\ge a_N>l-\varepsilon$$, and so $$|l-a_n|<\varepsilon$$. Hence, by the definition of the limit of a sequence, $$a_n\to\sup\{a_n\}$$ as $$n\to\infty$$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9585377284730286,
"lm_q1q2_score": 0.8166289035649364,
"lm_q2_score": 0.8519528019683106,
"openwebmath_perplexity": 173.8768554607002,
"openwebmath_score": 0.8441201448440552,
"tags": null,
"url": "https://math.stackexchange.com/questions/4276505/is-it-really-bad-style-to-write-proofs-by-contrapositive-as-proofs-by-contradict"
} |
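A quick numerical illustration of the theorem, using the increasing bounded sequence a_n = 1 - 1/n whose supremum is 1:

```python
sup = 1.0
a = [1 - 1 / n for n in range(1, 10_001)]

eps = 1e-3
# some term exceeds sup - eps (otherwise sup - eps would bound the set)...
N = next(i for i, x in enumerate(a) if x > sup - eps)
# ...and, since the sequence is increasing, so do all later terms
assert all(x > sup - eps for x in a[N:])
assert abs(a[-1] - sup) < 1e-3
```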
The trick is to think of limit a bit differently: A limit is the value a function would have at a point were it continuous at that point, with nothing else being different.
You can see this by comparing the epsilon-delta definitions of the two concepts. We say $$f$$ is continuous at $$c$$, if $$f$$ is defined at $$c$$ and for every $$\epsilon > 0$$ there is a $$\delta > 0$$ such that
$$|f(x) – f(c)| < \epsilon$$ whenever $$0 < |x – c| < \delta$$.
Likewise, $$f$$ has limit $$L$$ at $$c$$ if for every $$\epsilon > 0$$ there is a $$\delta > 0$$ such that
$$|f(x) – L| < \epsilon$$ whenever $$0 < |x – c| < \delta$$.
Thus, if we have a function with isolated gaps and there is a different function $$f^{*}$$ such that
1. $$f^{*}(x) = f(x)$$ wherever $$f(x)$$ is defined and continuous,
2. $$f^{*}$$ is defined and continuous everywhere, i.e. has domain $$\mathbb{R}$$ (versus the original function having domain $$\mathbb{R}$$ minus a set of isolated points), | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9820137942490252,
"lm_q1q2_score": 0.8566571569404339,
"lm_q2_score": 0.8723473763375643,
"openwebmath_perplexity": 255.68626743071,
"openwebmath_score": 0.8391028046607971,
"tags": null,
"url": "https://math.stackexchange.com/questions/462199/why-does-factoring-eliminate-a-hole-in-the-limit"
} |
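A standard worked example of this idea (added here for illustration): factoring $$f(x) = \frac{x^2-1}{x-1}$$ removes its hole at $$x = 1$$ by exhibiting the continuous extension $$f^{*}$$:

```latex
f(x) = \frac{x^2 - 1}{x - 1} = \frac{(x-1)(x+1)}{x-1} = x + 1
       \quad (x \neq 1),
\qquad
f^{*}(x) = x + 1 \quad (x \in \mathbb{R}).
% f^* agrees with f wherever f is defined, is continuous everywhere,
% and so  \lim_{x \to 1} f(x) = f^{*}(1) = 2.
```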
performance, c, reinventing-the-wheel, vectors
static const size_t INITIAL_VECTOR_CAPACITY = 100;
static const size_t MAX_VECTOR_SIZE = UINTMAX_MAX - 1;
gcl_vector *gcl_vector_init()
{
return calloc(1, sizeof(gcl_vector));
}
static GCLError gcl_vector_realloc(gcl_vector *v, size_t newElemCount)
{
void *newData = realloc(v->data, newElemCount * sizeof(*(v->data)));
if (!newData) {
return eFailedAlloc;
}
v->data = newData;
v->capacity = newElemCount;
return eNoErr;
}
static size_t gcl_vector_find_index(gcl_vector *v, void *elem, GCLIsEqual isEqual)
{
for (size_t i = 0; i < v->size; ++i) {
if (isEqual(elem, v->data[i])) {
return i;
}
}
return v->size + 1;
}
GCLError gcl_vector_push_back(gcl_vector *v, void *elem)
{
if (v->size >= gcl_vector_max_size()) {
return eInvalidOperation;
} | {
"domain": "codereview.stackexchange",
"id": 19151,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, c, reinventing-the-wheel, vectors",
"url": null
} |
statistical-mechanics, hamiltonian-formalism, hamiltonian, phase-space
(1) For the microcanonical ensemble, the energy of the system is fixed and, by the principle of 'equal a priori probability', each microstate is equally likely; therefore $\rho=\text{constant}$ throughout the relevant region of phase space. The exact value of this constant is not important, since in statistical mechanics all we care about is the relative chance or probability; you can always normalize it in the end. More precisely, for the average value of some physical quantity $f(p,q)$, we have
$$<f>=\frac{\int dqdp f(p,q)\rho}{\int dqdp \rho}=\frac{\int dqdp f(p,q)}{\int dqdp},$$
since $\rho$ is constant in this case.
(2) For canonical ensemble, which you might be more interested in, the density $\rho(p,q)$ is no longer uniform, instead we have
$$\rho(p,q)\propto exp(-H(p,q)/kT),$$ | {
"domain": "physics.stackexchange",
"id": 21812,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, hamiltonian-formalism, hamiltonian, phase-space",
"url": null
} |
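The canonical average above can be sketched for a toy discrete system; the energy levels and kT = 1 used here are hypothetical, chosen only to illustrate the Boltzmann weighting:

```python
import math

def canonical_average(f_vals, energies, kT=1.0):
    """<f> = sum_i f_i exp(-E_i/kT) / sum_i exp(-E_i/kT),
    the discrete analogue of the phase-space integrals above."""
    weights = [math.exp(-E / kT) for E in energies]  # rho ~ exp(-H/kT)
    Z = sum(weights)                                 # partition function
    return sum(f * w for f, w in zip(f_vals, weights)) / Z

E = [0.0, 1.0, 2.0]                       # hypothetical energy levels
avg_E = canonical_average(E, E, kT=1.0)   # average energy <H>
assert 0.0 < avg_E < 1.0                  # low-lying states dominate
```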
c++, game, library
Global variables and access
It seems like a bad idea for every object to have access to every other object through s_allInstances_.
Similarly, every graphics object can access and affect every other object through graphicalObjects_.
Even the moveMyInstance* functions are potentially unsafe, and might lead to a lot of "churn" if two objects don't agree on how they should be ordered.
Similarly the pause button decides by itself to clear the entire set of graphicalObjects_.
Perhaps we could delete the InstanceTracker class, and simply have the GameEngine contain two vectors: std::vector<LogicHandler*> logicHandlers_; and std::vector<GraphicalHandler*> graphicalHandlers_;.
The engine could sort these objects as necessary. | {
"domain": "codereview.stackexchange",
"id": 41023,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, game, library",
"url": null
} |
electromagnetic-radiation, speed-of-light, frequency, wavelength
The second question is contradictory; maximum frequency -> minimum wavelength.
I am asking the very opposite;
What is the minimum frequency and maximum wavelength of electromagnetic radiation?
The lowest measured/defined seems to be 3 Hz; ELF waves. That corresponds to a wavelength of one third of the numerical value of the speed of light: ~100,000,000 m.
But this can't be the physical limit for the wavelength.
Does such a physical limit for the wavelength exist? (Similar limit like the speed of light is for velocity). There is no theoretical physical limit on the wavelength, though there are some practical limits on the generation of very long wavelengths and their detection.
To generate a long wavelength requires an aerial of roughly one wavelength in size. The accelerated expansion of the universe due to dark energy means the size of the observable universe is tending to a constant, and that will presumably make it hard to generate any wavelengths longer than this size. | {
"domain": "physics.stackexchange",
"id": 76749,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetic-radiation, speed-of-light, frequency, wavelength",
"url": null
} |
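The 3 Hz figure above follows directly from the relation wavelength = c / frequency; a quick check:

```python
c = 299_792_458.0                 # speed of light in vacuum, m/s

def wavelength(freq_hz):
    return c / freq_hz

# 3 Hz ELF wave: roughly a third of c's numerical value, ~100,000 km
assert abs(wavelength(3.0) - c / 3.0) < 1e-6
assert 9.9e7 < wavelength(3.0) < 1.01e8
```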
java, design-patterns, mvc, swing
Full project Extract the busy-work code of transforming the List<Message> to DefaultListModel into another method.
From what I'm seeing I would think about the concepts you want to express in your code vis-à-vis some technical adherence to "encapsulation". If you want Page users to have no concept of an embedded Message in a Page, fine; but as it stands I don't think Message is inadequately encapsulated. Alternatively, if there are some Message public getters that you simply do not want Page users to access, then that's an OK rationale too.
In this case the alternative is, in the Page class, a public List<String> Page.getSubjects() - and likewise for all Message properties you want public. If, as far as Page users are concerned, the message's subject is the message, then rename the method: Page.getMessages(); but it still returns a List<String>. | {
"domain": "codereview.stackexchange",
"id": 3527,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, design-patterns, mvc, swing",
"url": null
} |
quantum-mechanics, models, determinism
Title: Why do people categorically dismiss some simple quantum models? Deterministic models. Clarification of the question:
The problem with these blogs is that people are inclined to start yelling at each other. (I admit, I got infected and it's difficult not to raise one's electronic voice.) I want to ask my question without an entourage of polemics.
My recent papers were greeted with scepticism. I've no problem with that. What disturbs me is the general reaction that they are "wrong". My question is summarised as follows:
Did any of these people actually read the work and can anyone tell me where a mistake was made? | {
"domain": "physics.stackexchange",
"id": 4344,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, models, determinism",
"url": null
} |
c#, beginner, game, community-challenge, rock-paper-scissors
In your GetRandomOption you can inline the creation of the Random object with your call to get a random option, which would save you a whole line of code :P
public static string GetRandomOption(List<string> options)
{
return options[new Random().Next(0, options.Count)];
}
The Choice method does not need to use the { brackets } because switch statements end when there is a break or a return. So you can write it like this
Console.WriteLine(prompt);
switch (Console.ReadKey(true).Key)
{
case ConsoleKey.Y:
Console.Write("Y\n");
return true;
case ConsoleKey.N:
Console.Write("N\n");
return false;
} | {
"domain": "codereview.stackexchange",
"id": 6513,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, beginner, game, community-challenge, rock-paper-scissors",
"url": null
} |
python, python-3.x
Title: Creating an object using an API call I'm looking to create a wrapper for an API, mainly for learning purposes at work as "The IT guy". I've been trying to follow the PEP 8 styling guide and make my docstring as detailed as possible.
I believe my code looks quite sluggish. I've tried to avoid duplicate code and multiple line sets of logic.
Here's my class initialization:
class API:
def __init__(self, environment, client_code, api_key):
""" __init__()
Instantiate an instance of API.
Parameters:
environment(string): The runtime environment. Acceptable arguments: [sandbox] [live]
client_code(string): Your client code, found the settings page of your account
api_key(string): The API key found in the settings page of your account
"""
self.base_url = 'https://%s.apiprovider.co.uk/api/v3/client/%s/' %\
(environment, client_code) | {
"domain": "codereview.stackexchange",
"id": 35576,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x",
"url": null
} |
general-relativity, gravity, equivalence-principle
In fact, there's a mathematical way to state this: given any geodesic of a free-falling body (i.e. worldline of a free-falling body), there exist coordinates for which the metric tensor $g_{\mu\nu}$ is exactly in the form of the Minkowski metric $\eta_{\mu\nu}$ at every point of the geodesic. At points near the geodesic, $g_{\mu\nu}$ is not exactly $\eta_{\mu\nu}$, but it looks more and more like $\eta_{\mu\nu}$ in the limit as you approach a point of the geodesic. These coordinates are called Fermi normal coordinates. | {
"domain": "physics.stackexchange",
"id": 84374,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, gravity, equivalence-principle",
"url": null
} |
c++, performance, algorithm, c++17, numerical-methods
Due to lack of expertise, I will not comment on possible mathematical improvements or multithreading.
Clear interface
I am a bit confused by the Integrator class. The usage as shown in your main is as expected, but why are dx_, dy_ and integral_ member variables, which can be accessed, but do not contain any meaningful content (or are even uninitialised, for integral_!) until evaluate() or integrate() was called?
If this is meant to be some kind of result caching, then it should happen completely internally, maybe with an std::optional<double> integral_, which is set the first time something is calculated and then returned the next time. Also, both functions should not share the cached result. Since this is nothing but a wild guess, I’ll assume the smallest sensible interface as depicted by the main in the following.
struct Limits | {
"domain": "codereview.stackexchange",
"id": 38406,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, algorithm, c++17, numerical-methods",
"url": null
} |
python, python-3.x, cryptography, email
print(clear_msg)
input('Press any key to return to the menu.')
except imaplib.IMAP4.error:
print("Error please try again. (005)")
def delete_email(imap_server_instance):
"""Moves email to deleted folder and deletes."""
uids = []
try:
imap_server_instance.select('inbox')
_, data = imap_server_instance.uid('search', None, 'ALL')
data = data[0].split()
for item in data:
# Loops through all emails in the inbox and displays their UID and who sent each one.
_, email_data = imap_server_instance.uid('fetch', item, '(RFC822)')
raw = email_data[0][1]
msg = email.message_from_bytes(raw)
uids.append(item.decode())
print('UID:', item.decode(), end=" From: ")
print(msg['From'])
except imaplib.IMAP4.error:
print("Error please try again. (005)") | {
"domain": "codereview.stackexchange",
"id": 37899,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, cryptography, email",
"url": null
} |
Hmmm.. Surprisingly, just interchanging the for loops as given in the solution section is getting AC. How does it make a difference whether we iterate over {K} -> {H} or {H} -> {K}? Interesting and frustrating!!! (12 Dec '12, 23:04) @prakash1529 Maybe taking into account cache misses in the target system, your observation makes sense :) (04 Nov '14, 15:21) | {
"domain": "codechef.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9787126463438262,
"lm_q1q2_score": 0.8155853239924967,
"lm_q2_score": 0.8333245994514082,
"openwebmath_perplexity": 2841.5372820089574,
"openwebmath_score": 0.7036876678466797,
"tags": null,
"url": "https://discuss.codechef.com/questions/4443/dboy-editorial"
} |
Showing field extension $$\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5})/\mathbb{Q}$$ degree 8 [duplicate]
Other related questions are:
The square roots of different primes are linearly independent over the field of rationals
I'm sorry for duplicating this question. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534322330059,
"lm_q1q2_score": 0.8380755967388409,
"lm_q2_score": 0.853912747375134,
"openwebmath_perplexity": 174.4202964717395,
"openwebmath_score": 0.900250256061554,
"tags": null,
"url": "https://math.stackexchange.com/questions/3092833/prove-that-mathbbq-sqrt2-sqrt3-sqrt5-mathbbq-8"
} |
neural-networks, deep-learning, convolutional-neural-networks, residual-networks, vgg
Given these "simple" (always the same) images and only two classes, is the VGG model probably a better base than a modern network like ResNet or Xception? Or is it more likely that I messed something up with my model or simply got the training/hyperparameters wrong? VGG is a more basic architecture which uses no residual blocks. ResNet usually performs better than VGG due to its greater depth and residual approach. Given that ResNet-50 can get 99% accuracy on MNIST and 98.7% accuracy on CIFAR-10, it should probably achieve better results than a VGG network. Also, the validation accuracy should not be 100%. You could try increasing the size of your validation set to get a more reliable validation accuracy. A VGG network should perform worse than ResNet in most scenarios, but experimenting is the way to go. Try and experiment more to get a method that works for your data. Hope that I can help you and have a nice day! | {
"domain": "ai.stackexchange",
"id": 2467,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-networks, deep-learning, convolutional-neural-networks, residual-networks, vgg",
"url": null
} |
food, digestive-system, digestion, gastroenterology, stomach
Furthermore the study shows that the IMMC gets triggered only after all digestible matter has left the stomach:
The longer GRT of the Heidelberg capsule compared with the t1/2 of the 99mTC-DTPA is consistent with the finding that large nondigestible solids are emptied by the IMMC once all of the digestible materials have passed through the pylorus into the duodenum
and
the interdigestive migrating myoelectric complex can be markedly delayed by frequent feedings with solids, and the interdigestive migrating myoelectric complex is delayed by both liquid and solid meals
and
Feeding has been shown to interrupt the IMMC; resumption of myoelectric activity is necessary for the passage of large nondigestible particles such as the Heidelberg capsule | {
"domain": "biology.stackexchange",
"id": 10045,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "food, digestive-system, digestion, gastroenterology, stomach",
"url": null
} |
c#, .net, playing-cards, shuffle
"9-of-Clubs", "9-of-Hearts", "9-of-Spades", "9-of-Diamonds",
"8-of-Clubs", "8-of-Hearts", "8-of-Spades", "8-of-Diamonds",
"7-of-Clubs", "7-of-Hearts", "7-of-Spades", "7-of-Diamonds",
"6-of-Clubs", "6-of-Hearts", "6-of-Spades", "6-of-Diamonds",
"5-of-Clubs", "5-of-Hearts", "5-of-Spades", "5-of-Diamonds",
"4-of-Clubs", "4-of-Hearts", "4-of-Spades", "4-of-Diamonds",
"3-of-Clubs", "3-of-Hearts", "3-of-Spades", "3-of-Diamonds",
"2-of-Clubs", "2-of-Hearts", "2-of-Spades", "2-of-Diamonds",}; | {
"domain": "codereview.stackexchange",
"id": 42856,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, .net, playing-cards, shuffle",
"url": null
} |
pressure, nature
Title: Jumping sewer lid - WHY? Intro: Few hours ago, there was a storm. We heard some constant banging which couldn't be explained by thundering. Then we found out, it was a sewer lid jumping. Maybe it's normal in other parts of the world, but for me it was like the first time in my life.
I've captured the video.
The question is what was causing this to happen. It's kind of clear that it was air pressure so strong that it was capable of lifting this metal lid. But where did this air pressure appear? | {
"domain": "physics.stackexchange",
"id": 50823,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pressure, nature",
"url": null
} |
newtonian-mechanics, newtonian-gravity, reference-frames, inertial-frames, earth
EDIT: Gravity on the disk is assumed to be uniformly perpendicular to the disk, such as most flat-Earth theories suggest.
N.B. I do not believe the Earth to be flat, this question is purely out of interest. Yes, it would work, but $\omega$ would be the same at all points on the disc, and equal to the rate of rotation of the disc itself. So although the pendulum would precess, you could very easily know if you were on a flat disc by moving it around and comparing precession rates, | {
"domain": "physics.stackexchange",
"id": 50118,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, newtonian-gravity, reference-frames, inertial-frames, earth",
"url": null
} |
waves, acoustics, interference
There is no interference happening here. The two sources do not maintain a constant phase difference. When interference occurs (with a constant phase relation between the two sources), you will have a net intensity of $(E_1 + E_2)^2$, which is four times the intensity of either source if they are equal. In the destructive case, the net result gives $0$ intensity (for a phase difference of $\pi$). However, when there is no constant phase relation, the phase difference would be randomly distributed between $0$ and $2\pi$, and in this case, you just have the average intensities adding up, to give $\langle E_1^2 \rangle + \langle E_2^2 \rangle$, which is twice the intensity of either source if they are equal. That's what you hear, and mistake for constructive interference.
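The two regimes can be checked with a quick numerical average (a sketch with unit amplitudes; the Monte Carlo sample size is purely illustrative):

```python
import cmath
import math
import random

E = 1.0  # equal field amplitudes for both sources

# constant phase relation (in phase): amplitudes add first, then square
I_coherent = (E + E) ** 2  # four times the single-source intensity

# no constant phase relation: average the instantaneous intensity
# |E1 + E2*e^{i*phi}|^2 over phases phi uniform in [0, 2*pi)
rng = random.Random(0)
n = 200_000
I_incoherent = sum(
    abs(E + E * cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))) ** 2
    for _ in range(n)
) / n

print(I_coherent)    # 4.0
print(I_incoherent)  # close to 2.0: the intensities simply add
```

Since $|E + E e^{i\phi}|^2 = 2E^2 + 2E^2\cos\phi$ and $\cos\phi$ averages to zero over a uniform phase, the incoherent mean is exactly twice the single-source intensity.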
"domain": "physics.stackexchange",
"id": 14120,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, acoustics, interference",
"url": null
} |
algorithm, c, linked-list, matrix
source_node = matrix->rotations_list_heads[rotation_list_index];
target_node = source_node;
size_t i;
for (i = 0; i != count; i++)
{
matrix->buffer[i] = rotable_char_matrix_get(matrix,
source_node->x,
source_node->y);
source_node = source_node->next;
}
for (i = 0;
i != matrix->rotation_list_lengths[rotation_list_index] - count;
i++)
{
rotable_char_matrix_set(matrix,
target_node->x,
target_node->y,
rotable_char_matrix_get(matrix,
source_node->x,
source_node->y));
target_node = target_node->next;
source_node = source_node->next;
} | {
"domain": "codereview.stackexchange",
"id": 29489,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm, c, linked-list, matrix",
"url": null
} |
java, game, parsing, gui, dice
private void clearOutput() {
outputArea.setText("");
if (armiesBox.isVisible()) {
warFields[MAIN_GROUP].setText("");
warFields[OPPOSING_GROUP].setText("");
}
}
private class ButtonListener implements ActionListener {
@Override
public void actionPerformed(ActionEvent e) {
Object source = e.getSource();
if (source == rollButton) {
handleRoll();
} else if (source == clearButton) {
clearOutput();
} else {
throw new UnsupportedOperationException("Unsupported button: " + source);
}
}
private void handleRoll() {
String[][] resultLines;
StringBuilder appendText = new StringBuilder(); | {
"domain": "codereview.stackexchange",
"id": 19753,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, game, parsing, gui, dice",
"url": null
} |
$\begin{xy} \xymatrix { G \ar[d]^{\pi}\ar[r]^{f} & A \\ G/[G,G] \ar[ur]^{\overline{f}}&\\ } \end{xy}$
where $\pi:G\to G/[G,G]$ is the quotient map. In other words, every homomorphism from $G$ to an abelian group $A$ “comes from” a homomorphism from $G/[G,G]$ to $A$.
This is the first isomorphism theorem once we observe that $G/[G,G]$ is abelian and $[G,G]$ is in the kernel of every map to an abelian group.
The commutator subgroup of $\GL_{2}(\R)$ is $\SL_{2}(\R)$. (This is proved by somewhat painful calculations.)
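The commutator subgroup can be spot-checked computationally on a small finite group. A sketch in Python using $S_3$, whose commutator subgroup is $A_3$ (so the abelianization $S_3/A_3 \cong \mathbb{Z}/2$ is indeed abelian):

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

# the set of commutators g h g^{-1} h^{-1}; for S3 this set is already
# a subgroup, namely A3 = {identity, the two 3-cycles}
commutators = {compose(compose(g, h), compose(inverse(g), inverse(h)))
               for g in S3 for h in S3}
print(sorted(commutators))  # [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
```

In general the set of commutators only generates $[G,G]$; it happens to be closed under the group operation here.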
"domain": "jeremy9959.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9910145723089616,
"lm_q1q2_score": 0.823674870028375,
"lm_q2_score": 0.831143045767024,
"openwebmath_perplexity": 34.902528165913616,
"openwebmath_score": 0.9832044243812561,
"tags": null,
"url": "https://jeremy9959.net/Math-5210/notes/Five.html"
} |
java, performance, chess
if(Move.encodeMove(Move.decodeMove(move)) != move) {
throw new RuntimeException("ENCODE DECODE FAILED");
}
}
public static void main(String[] args) {
benchmark(100000000);
}
} By measuring with JMH I found your decoder to be much slower than the encoder. Contrary to your conclusion that the bottleneck is array allocation, I find that the difference between one static array and a new array each time has only a secondary effect. Compare these results (single static array):
MeasureEncoding.decodeBitfield avgt 5 5.896 ± 0.091 ns/op
MeasureEncoding.decodeConstantin avgt 5 44.287 ± 1.217 ns/op
MeasureEncoding.decodeMarco13 avgt 5 7.256 ± 0.240 ns/op
MeasureEncoding.encodeBitfield avgt 5 8.637 ± 0.279 ns/op
MeasureEncoding.encodeConstantin avgt 5 8.942 ± 0.207 ns/op | {
"domain": "codereview.stackexchange",
"id": 14600,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, performance, chess",
"url": null
} |
quantum-field-theory, cosmology, cosmological-inflation
Title: Why is slow-roll preferred over non-slow-roll inflation? Wikipedia says that in new inflation, the slow-roll conditions must be satisfied for inflation to occur.
What is fast-roll or non-slow-roll inflation and why is slow-roll preferred? Slow-roll inflation is quantified in terms of a sequence of parameters which measure the local smoothness of the inflaton potential. The lowest-order parameter is $\epsilon \propto (V'/V)^2$, the next-lowest is $\eta \propto V''/V$, and so on. Higher-order parameters measure ever more localized kinks in the potential.
The values of $\epsilon$, $\eta$, etc are constrained by a few different things. For one, inflation needs to last long enough to solve the horizon and flatness problems. Inflation occurs as long as $\epsilon < 1$, and its rate of change is
$$\frac{{\rm d}\epsilon}{{\rm d}N} = 2\epsilon (\eta - \epsilon)$$ | {
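As a toy check of this evolution equation (not from the answer above — $\eta$ is held constant purely for illustration, whereas in a real model it evolves too), a forward-Euler integration shows $\epsilon$ relaxing to the fixed point $\epsilon = \eta$:

```python
# forward-Euler integration of d(eps)/dN = 2*eps*(eta - eps)
eta, eps, dN = 0.02, 0.001, 0.01   # illustrative values; eta held fixed
for _ in range(100_000):           # 1000 e-folds, far more than needed
    eps += 2.0 * eps * (eta - eps) * dN
print(round(eps, 4))               # 0.02 — the fixed point eps = eta
```

The equation is logistic in $\epsilon$ with rate $2\eta$, so for constant $\eta < 1$ the flow settles at $\epsilon = \eta$ rather than growing to $1$; ending inflation requires $\eta$ (and the higher parameters) to change along the trajectory.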
"domain": "physics.stackexchange",
"id": 50840,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, cosmology, cosmological-inflation",
"url": null
} |
python, beginner, python-3.x, number-guessing-game
return new_game.lower()
def start_game():
"""This is the main loop that runs the app.
"""
highscore = 0
while True:
print_header()
number_to_guess = generate_number_to_guess(LOWER_NUMBER,
HIGHEST_NUMBER)
guess = 0
count = 0
while guess != number_to_guess:
guess = user_input()
count += 1
if guess < number_to_guess:
print("It's higher")
elif guess > number_to_guess:
print("It's lower")
else:
print(
f'\nYou guessed the right number and needed {count} tries')
if count < highscore or highscore == 0:
highscore = count | {
"domain": "codereview.stackexchange",
"id": 34510,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-3.x, number-guessing-game",
"url": null
} |
slam, navigation, ros-kinetic, rasbperrypi, hector-slam
Original comments
Comment by ahendrix on 2017-10-24:
Building on the Raspberry Pi can take hours or days for complex software like hector_slam. How long are you waiting for catkin_make to finish? Is there any CPU activity while it seems stuck?
Comment by julimen5 on 2017-10-24:
not waiting too much. But it freezes everything, I can't even move the mouse. Are you telling me I should wait longer?
I opened just one terminal with this command. Tried to echo 3 /proc/sys/vm/drop_caches but I get permission denied.
Comment by rmck on 2017-10-24:
I've had a lot of success by adding additional 1gb swap space and ssh'ing into the pi rather than using the GUI. I run "catkin_make -j1", compilation typically takes 2-3 minutes.
Are you building from source? Hector-Slam should be available as an installable package for 16.04.
Comment by julimen5 on 2017-10-24:
It has been compiling for an hour. May be I'll try to disable GUI and do what you are telling me to do. | {
"domain": "robotics.stackexchange",
"id": 29175,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "slam, navigation, ros-kinetic, rasbperrypi, hector-slam",
"url": null
} |
python, algorithm, recursion, combinatorics, depth-first-search
# start DFS from the root node
result = []
nsum_recursive(arr, [], 0, result)
return result def nsum(arr, n, val):
A documentation string here is important. The names of the variables are so vague that even knowing the task in advance, it is difficult to understand their meaning.
if arr is None or len(arr) < n:
return []
This check is not very useful. It does not decrease the algorithm's complexity. In particular the check arr is None is wrong... you want the function to fail if arr is not a list, otherwise you hide possible errors in the caller's code.
# first sort the array by indice, so that we can search sequentially
# with early stopping
sorted_indice = sorted(xrange(len(arr)), key=lambda i: arr[i])
size = len(arr) | {
"domain": "codereview.stackexchange",
"id": 9763,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, algorithm, recursion, combinatorics, depth-first-search",
"url": null
} |
cosmology, space-expansion, redshift
Title: Why are physicists not more concerned that there are too many explanations for redshift in the universe? There are speculative explanations for red shift such as the tired light theory, but I am not referring to those. There are three mainstream explanations
Red shift due the expansion of the universe giving rise to a Doppler effect.
Cosmological red shift. The red shift is due to the stretching of light as the universe expands. Numerically, this seems to explain all of red shift, leaving no room for the other explanations.
Gravitational time dilation. Time runs slower as the force of gravity, or the gravitational field, gets stronger. Slower time equates to a reduced frequency, which is a red shift. There is definitely gravitational red shift at the local scale, but I think there is also at the cosmological scale. As the universe gets younger, the density increases, as does the force of gravity, or at least the gravitational field.
"domain": "physics.stackexchange",
"id": 100187,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, space-expansion, redshift",
"url": null
} |
biochemistry, proteins
Gu, the more stable the protein is to denaturation.
If a protein unfolds reversibly it may be fully unfolded and inactive
at high temperatures, but once it cools to room temperature, it will
refold and fully recover activity. In the case of irreversible or
slowly unfolding proteins, it is kinetic stability or the rate of
unfolding that is important. A protein that is kinetically stable will
unfold more slowly than a kinetically unstable protein. In a
kinetically stable protein, a large free energy barrier to unfolding
is required and the factors affecting stability are the relative free
energies of the folded (Gf) and the transition state (Gts) for the
first committed step on the unfolding pathway. Irreversible loss of
protein folded structure is represented by: F <-> U -> I, where I is
inactive due to aggregation, disulphide exchange, proteolysis,
irreversible subunit dissociation, chemical degradation, etc... | {
"domain": "biology.stackexchange",
"id": 3023,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "biochemistry, proteins",
"url": null
} |
transcription
Title: Anticodon Translation Question | {
"domain": "biology.stackexchange",
"id": 12313,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "transcription",
"url": null
} |
rust
I took a hopefully more efficient approach to the task than sorting the list, by using the median-of-medians algorithm and quickselect, which I largely copied from Wikipedia but slightly modified to also count each element. To do this I used a point in the algorithm where it already loops over every element. However, I'm unsure whether my use of a closure to do this might slow things down, since later iterations are passed a no-op closure.
I used the proptest crate to create property-based tests for my code.
Here are the source files from my project:
lib.rs
mod common;
mod select_and_iterate;
#[cfg(test)]
mod test;
use std::{
collections::{HashMap, HashSet},
hash::Hash,
};
use crate::{common::noop, select_and_iterate::select_and_iterate};
#[derive(Debug, PartialEq, Eq)]
pub enum Median<T> {
At(T),
Between(T, T),
}
#[derive(Debug, PartialEq, Eq)]
pub struct Mode<T: Eq + Hash>(HashSet<T>); | {
"domain": "codereview.stackexchange",
"id": 44282,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rust",
"url": null
} |
solid-state-physics, semiconductor-physics, equilibrium
Title: Is electron-hole pair thermal equilibrium in semiconductors a dynamic or static equilibrium? In my textbook, it is mentioned that electron hole pairs in semiconductors are formed due to thermal energy and at a given temperature the number of electrons and holes is constant. The formula given is
$$n(e)\cdot n(h)=\text{constant}$$
where $n(e)$ is the number density of electrons and $n(h)$ is the number density of holes.
Is this a dynamic equilibrium (e.g., chemical equilibrium, reaction for production of ammonia, etc.) where electron-hole pairs are continuously forming due to thermal energy and recombining or is it a static equilibrium?
The expression is also suspiciously similar to expression of equilibrium constant in chemistry. Doping is also similar-looking to addition of a strong acid or base to water. Yes, electron-hole pair generation and recombination is a dynamic equilibrium. In fact the equation you mentioned in the question, i.e.
$$n_e \times n_h= \text{constant}$$ | {
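As a numerical illustration of why the product stays constant while the individual densities shift under doping (the doping level is hypothetical, and $n_i \approx 1.5\times10^{10}\ \mathrm{cm^{-3}}$ is a commonly quoted figure for silicon at 300 K):

```python
n_i = 1.5e10   # cm^-3, intrinsic carrier density (illustrative Si value)
N_D = 1.0e16   # cm^-3, donor doping level (hypothetical)

n_e = N_D              # electrons ~ donor density when N_D >> n_i
n_h = n_i**2 / n_e     # holes are suppressed so that n_e * n_h stays n_i**2
print(f"{n_h:.3e}")    # 2.250e+04 — many orders of magnitude below n_e
```

This is the law of mass action: generation and recombination balance dynamically, so raising one carrier density (by doping) pushes the other down until the product returns to $n_i^2$.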
"domain": "physics.stackexchange",
"id": 62811,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "solid-state-physics, semiconductor-physics, equilibrium",
"url": null
} |
type(a[1])
numpy.int32
a = [1,2,3,4,5.0]
a = np.array(a)
type(a[1])
numpy.float64
If you want to get the desired element type, then you will have to ask numpy for it explicitly,
a = [1,2,3.5,4.9,5.0]
a = np.array(a, int) # convert all elements in the list to integer
a
array([1, 2, 3, 4, 5])
You can see the full list of input arguments to np.array function here.
A similar function np.zeros_like(c) generates an array of zeros where the length of the generated array is that of the input array c and the element type is the same as those in c.
b = [1,2,3,4,5,6,7]
a = np.zeros_like(b)
a
array([0, 0, 0, 0, 0, 0, 0])
Often one wants an array of $n$ evenly spaced values in an interval $[p,q]$. The numpy function linspace creates such arrays,
a = np.linspace(1, 100, 53)
a | {
"domain": "cdslab.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9615338123908151,
"lm_q1q2_score": 0.8012697830497202,
"lm_q2_score": 0.8333246035907932,
"openwebmath_perplexity": 1516.4741989630986,
"openwebmath_score": 0.38048723340034485,
"tags": null,
"url": "https://www.cdslab.org/python/notes/scientific-computing/vectorization/vectorization.html"
} |
recurrence-relation, recursion, discrete-mathematics
Title: How to solve F(n)=F(n-1)+F(n-2)+f(n) recursive function? Like in the title the following equation:
$F(n)=F(n-1)+F(n-2)+f(n)$, with $F(0)=0, F(1)=1$
$f(n)=f(n-1)+f(n-2)$, with $f(0)=0, f(1)=1$
I don't know how to solve this. The $f(n)$ is basically just $F(n)$, but
then I have
$$F(n)=F(n-1)+F(n-2)+F(n) \Rightarrow F(n-1)+F(n-2)=0$$
and I cannot go anywhere from this. $f(n)$ is the well-known Fibonacci sequence.
Let $\alpha=\frac{1+\sqrt5}2$ be the golden ratio and $\phi=\frac{1-\sqrt5}2$. It is shown here that
$$f(n)=(\alpha^n-\phi^{n})/\sqrt5$$
Gnasher729 conjectured that $F(n) \approx 0.72 * n * f(n)$. Following that clue, we can find the following identity holds for all cases we tested by trial and error.
$$F(n)= nf(n) - (n-2)f(n-2) + (n-4)f(n-4) - (n-6)f(n-6) + \cdots$$
where the sequence of summands goes on as long as the summand makes sense. | {
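The identity can be tested programmatically against the direct recurrence (a sketch; the function names are mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):                    # the Fibonacci sequence f(n)
    return n if n < 2 else f(n - 1) + f(n - 2)

@lru_cache(maxsize=None)
def F(n):                    # F(n) = F(n-1) + F(n-2) + f(n), F(0)=0, F(1)=1
    return n if n < 2 else F(n - 1) + F(n - 2) + f(n)

def F_identity(n):
    """n*f(n) - (n-2)*f(n-2) + (n-4)*f(n-4) - ... while n-2j >= 0."""
    return sum((-1) ** j * (n - 2 * j) * f(n - 2 * j)
               for j in range(n // 2 + 1))

print(all(F(n) == F_identity(n) for n in range(80)))  # True
```

For example $F(4) = 4f(4) - 2f(2) = 12 - 2 = 10$, matching $F(3)+F(2)+f(4) = 5+2+3$.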
"domain": "cs.stackexchange",
"id": 12738,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "recurrence-relation, recursion, discrete-mathematics",
"url": null
} |
java
public class BinaryPuzzleSolver {
// NOTE: each binary number is an int[]
public boolean solve(BinaryPuzzle binaryPuzzle) {
int[][] binaryPuzzleValues = binaryPuzzle.getValues();
List<int[]> binaryNumbers = generateAllBinaryNumbersWithBitSizeOf(
binaryPuzzleValues.length);
binaryNumbers.removeIf(this::binaryNumberInvalidBecauseOfRules);
HashMap<Integer, List<int[]>> matchingBinaryNumbersForBinaryPuzzleRows =
mapAllMatchingBinaryNumbersToEachBinaryNumberInsideBinaryPuzzleBasedOnSetValues(
binaryNumbers, binaryPuzzleValues);
return solveByTryingOutAllPossibilitiesAndCheckingAgainstBinaryPuzzleRules(
matchingBinaryNumbersForBinaryPuzzleRows, binaryPuzzleValues);
} | {
"domain": "codereview.stackexchange",
"id": 27875,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java",
"url": null
} |
ros, catkin-make, ros-kinetic, face-recognition, rosdistro
/home/horc/catkin_ws/src/procrob_functional/src/face_recognition_lib.cpp:375:45: error: ‘cvEigenDecomposite’ was not declared in this scope
projectedTrainFaceMat->data.fl + i*offset);
^
/home/horc/catkin_ws/src/procrob_functional/src/face_recognition_lib.cpp: In member function ‘void FaceRecognitionLib::doPCA()’:
/home/horc/catkin_ws/src/procrob_functional/src/face_recognition_lib.cpp:553:3: error: ‘CV_EIGOBJ_NO_CALLBACK’ was not declared in this scope
CV_EIGOBJ_NO_CALLBACK,
^
/home/horc/catkin_ws/src/procrob_functional/src/face_recognition_lib.cpp:558:23: error: ‘cvCalcEigenObjects’ was not declared in this scope
eigenValMat->data.fl);
^
procrob_functional/CMakeFiles/face_recognition_lib.dir/build.make:62: recipe for target 'procrob_functional/CMakeFiles/face_recognition_lib.dir/src/face_recognition_lib.cpp.o' failed | {
"domain": "robotics.stackexchange",
"id": 30088,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, catkin-make, ros-kinetic, face-recognition, rosdistro",
"url": null
} |
# Why is the focus of the parabola not within the parabola in the following result?
So I'm going through my book and trying to solve the following question:
Find the equation of the parabola which is symmetric about the y-axis and passes through the point (2,-3).
Since it passes through (2,-3), we can assume that the parabola opens downwards, and hence use the equation $x^2 = -4ay$.
Plugging in the values though you'd get $4 = -4(-3)a$ or $a = \frac13$
But this implies that the focus is at $(0, \frac13)$, which is clearly not within the parabola. How is this possible / where did I go wrong?
• Since the parabola is upside down, the focus is at $(0,-a)$ – David Quinn Feb 19 '16 at 16:58
• You seem to be assuming the parabola passes through $(0,0)$ – Henry Feb 19 '16 at 16:59
• There is no reason to assume it opens downward. $y=A x^2+B$, where $-3=4 A+B$, and $A\ne 0$, is a parabola thru $(2,-3)$. – DanielWainfleet Feb 19 '16 at 17:57 | {
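As a numerical sanity check of the correction in the comments (a sketch: with $a = 1/3$, the downward-opening parabola $x^2 = -4ay$ has focus $(0,-a)$ and directrix $y = a$, so the focus–directrix distances must agree):

```python
import math

a = 1 / 3
x, y = 2, -3
assert math.isclose(x**2, -4 * a * y)   # (2, -3) lies on x^2 = -4ay

d_focus = math.hypot(x - 0, y - (-a))   # distance to the focus (0, -a)
d_directrix = abs(y - a)                # distance to the directrix y = a
print(d_focus, d_directrix)             # both 10/3 ≈ 3.333...
```

The equality of the two distances confirms the focus sits at $(0,-1/3)$, inside the downward-opening parabola, resolving the apparent contradiction.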
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9728307708274401,
"lm_q1q2_score": 0.8268781139960641,
"lm_q2_score": 0.849971175657575,
"openwebmath_perplexity": 279.0368847594172,
"openwebmath_score": 0.8267959952354431,
"tags": null,
"url": "https://math.stackexchange.com/questions/1663138/why-is-the-focus-of-the-parabola-not-within-the-parabola-in-the-following-result"
} |
python, file
How I can make it better?
from collections import namedtuple
Point = namedtuple('Point ', ['x', 'y'])
def read():
ins = open("PATH_TO_FILE", "r")
array = []
first = True
expected_length = 0
for line in ins:
if first:
expected_length = int(line.rstrip('\n'))
first = False
else:
parsed = line.rstrip('\n').split ()
array.append(Point(int(parsed[0]), int(parsed[1])))
if expected_length != len(array):
raise NameError("error on read")
return array Another improvement is to use open as a context manager so that you don't have to remember to .close() the file object even if there are errors while reading.
def read():
with open("FILE", "r") as f:
array = []
expected_length = int(f.next())
for line in f:
parsed = map(int, line.split())
array.append(Point(*parsed))
if expected_length != len(array): | {
"domain": "codereview.stackexchange",
"id": 2841,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, file",
"url": null
} |
ds.algorithms, big-picture, application-of-theory
Conflict Driven Clause Learning | {
"domain": "cstheory.stackexchange",
"id": 2355,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ds.algorithms, big-picture, application-of-theory",
"url": null
} |
c#, linq, file-system, wpf, xaml
Title: Categorize episode-file names - Follow up This is a follow up to this question.
I have implemented changes to the code as they were suggested in the previous question and have made a few changes of my own.
But it has been mentioned that the code snippet posted in the previous question was not sufficient to considerably affect code performance. So here, I'm posting all relevant code relating to the project.
I'd appreciate help with a few more things. They are after all the code (for a better context).
Category.cs
using System.Collections.ObjectModel; | {
"domain": "codereview.stackexchange",
"id": 18059,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, linq, file-system, wpf, xaml",
"url": null
} |
c#, .net, console, entity-component-system
public Nameable(string name)
{
Name = name;
}
}
Breedable.cs
Component providing the ability to breed objects and have a gender:
[Flags]
public enum Breeds
{
Lion = 1,
Tiger = 2,
}
public enum Gender
{
Male,
Female,
}
public class Breedable : IAnimalComponent
{
public Breeds Breed { get; }
public Gender Gender { get; }
public Breedable(Gender gender, Breeds breed)
{
Breed = breed;
Gender = gender;
}
//Should I return 'Animal' with attached 'Breedable' component instead?
public Breedable Propagate(Breedable other)
{
if (Gender == other.Gender)
{
//Would it be better to just return null?
throw new InvalidOperationException(nameof(other));
}
var genders = new[] {Gender.Male, Gender.Female};
return new Breedable(genders[new Random().Next(0, genders.Length)], Breed | other.Breed);
}
} | {
"domain": "codereview.stackexchange",
"id": 26567,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, .net, console, entity-component-system",
"url": null
} |
c++, tree, iterator
Iterator
The iterator object is supposed to be a very cheap object to maintain and copy. As a result it feels strange to have a move operator. I don't think you will find many people move iterators around.
TreeIterator(TreeIterator&&);
TreeIterator<T>& operator=(TreeIterator<T>&&);
You allow increment but not decrement. So this is a forward iterator only.
TreeIterator<T>& operator++();
TreeIterator<T> operator++(int);
I see a normal iterator and thus normal access. You usually also want a const iterator with const accesses to the data. I see that you give const version of operator-> but not operator*.
T& operator*();
T* operator->();
const T* operator->() const;
Missing functionality: | {
"domain": "codereview.stackexchange",
"id": 24002,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, tree, iterator",
"url": null
} |
• Write $x^2+6x+9$ as the square of a linear term. – David Mitra Jan 18 '14 at 11:37
Note that $$x^2+2ax+a^2=(x+a)^2.$$
So, in your case, $$x^2+6x+9=x^2+2\cdot 3x+3^2=(x+3)^2.$$
In general, \begin{align}ax^2+bx+c&=a\left(x^2+\frac{b}{a}x\right)+c\\&=a\left\{x^2+\frac{b}{a}x+\left(\frac{b}{2a}\right)^2-\left(\frac{b}{2a}\right)^2\right\}+c\\&=a\left\{\left(x+\frac{b}{2a}\right)^2-\left(\frac{b}{2a}\right)^2\right\}+c\\&=a\left(x+\frac{b}{2a}\right)^2-a\left(\frac{b}{2a}\right)^2+c.\end{align}
• do you mean $r=-3$? – nadia-liza Jan 18 '14 at 11:39
• Sorry but how did you get r? – Sophia Jan 18 '14 at 11:39
• @Sophia: $r=6/2$. Note that $x^2+2ax+a^2=(x+a)^2$. In other word, $r$ is the half of the coefficient of $x$. – mathlove Jan 18 '14 at 11:46
• @nadia-liza: No. I think $r=3$. – mathlove Jan 18 '14 at 11:55
If you don't understand / remember the algorithm, then ignore the algorithm and simply solve for things.
You want
$$x^2 + 6x + 7 = (x+r)^2 + s$$ | {
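Matching coefficients gives $r = 6/2 = 3$ and $s = 7 - r^2 = -2$; a quick numerical spot-check of the resulting identity (illustrative):

```python
r = 6 / 2       # half the coefficient of x
s = 7 - r**2    # what is left over: 7 - 9 = -2

# check x^2 + 6x + 7 == (x + r)^2 + s at many points
for x in range(-50, 51):
    assert x**2 + 6 * x + 7 == (x + r) ** 2 + s
print(r, s)  # 3.0 -2.0
```

Two quadratics that agree at more than two points are identical, so agreement on this range already proves the identity.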
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.978051749483894,
"lm_q1q2_score": 0.82130327422149,
"lm_q2_score": 0.8397339656668287,
"openwebmath_perplexity": 229.34512109510297,
"openwebmath_score": 0.8341652750968933,
"tags": null,
"url": "https://math.stackexchange.com/questions/642524/highschool-precalculus-completing-the-square-with-r"
} |
15. Sep 3, 2016
### JonnyG
If a function is given then the topology for the domain and codomain must also be given. Then continuity is defined in terms of preimages of open sets being open as well.
Now, regardless of which topology $[a,b]$ is equipped with, continuity makes sense at the end points. A function $f: [a,b] \rightarrow \mathbb{R}$ is continuous at $a$ if for each open set $V$ containing $f(a)$, there is an open set $U \ni a$ such that $U \subset f^{-1}(V)$. A similar definition is given for continuity at $b$. You'll notice that this definition actually makes sense for any point $x \in [a,b]$. So we can talk about continuity at the end points of a closed interval and in fact it is not, in any sense, a sort of "special" continuity. It's the same old definition.
16. Sep 4, 2016
### Stephen Tashi | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9688561676667173,
"lm_q1q2_score": 0.8094633779066694,
"lm_q2_score": 0.8354835371034368,
"openwebmath_perplexity": 362.59136521280755,
"openwebmath_score": 0.8383511304855347,
"tags": null,
"url": "https://www.physicsforums.com/threads/continuous-function.883693/"
} |
java, xml
public class Word extends FileElement<Line, FileElement.Void> {
private final String content;
private final boolean strong;
public Word(final String id, final BoundingBox boundingBox, final String content, final boolean strong) {
super(id, boundingBox);
this.content = Objects.requireNonNull(content);
this.strong = strong;
}
public static final Word of(final Element element) {
Objects.requireNonNull(element);
String elementId = element.getAttributeValue("id");
String elementTitle = element.getAttributeValue("title");
Title title = new Title(elementTitle);
Element child = element.getChild("strong");
return new Word(
elementId,
title.getBoundingBox().orElseThrow(() -> new IllegalStateException("No bounding box present in: " + elementTitle)),
child == null ? element.getText() : child.getText(),
child != null
);
} | {
"domain": "codereview.stackexchange",
"id": 7336,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, xml",
"url": null
} |
electromagnetism, particle-physics, experimental-physics, particle-detectors
Geiger counter
Photomultiplier tube or microchannel plate (being used to detect atoms or electrons, not photons)
cloud chamber: a particle passes through a supersaturated vapour and collisionaly ionises molecules along its path; the resulting ions attract water molecules which form drops which grow. True, we do normally photograph the drops, but the drops themselves indicate that the particle passed by.
bubble chamber, similar to cloud chamber but now it is bubbles forming in a superheated liquid
Millikan oil drop experiment: we detect the arrival of electrons by observing that the oil drop becomes more strongly influenced by an applied static field
particle detectors like charge-coupled-devices (CCDs) in which the arriving particle deposits energy, allowing promotion of an electron in a semiconductor across a band gap. This is quite like the photoelectric effect but no photons need be involved. | {
"domain": "physics.stackexchange",
"id": 73595,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, particle-physics, experimental-physics, particle-detectors",
"url": null
} |
visualization, matplotlib, seaborn, python-3.x
Title: What does the term 'Facet' in Seaborn FacetGrid imply? I am a newbie to data science. I have a very basic understanding of Seaborn. My understanding is that it is used to plot grids. That said, I intend to have a better understanding of 'FacetGrid' and the term 'facet'.
FacetGrid is a multi-plot grid for plotting conditional relationships.
FacetGrid object takes a DataFrame as input and the names of the
variables that will form the row, column, or hue dimensions of the
grid. The variables should be categorical and the data at each level
of the variable will be used for a facet along that axis. | {
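To make the term concrete without drawing anything: each facet corresponds to the subset of rows for one level of the categorical variable, which is what `FacetGrid(df, col="day")` would map to one panel. A minimal pandas sketch with made-up data (not seaborn's built-in tips dataset):

```python
import pandas as pd

# Toy data; "day" is the categorical variable we facet on (hypothetical values).
df = pd.DataFrame({
    "day":  ["Thu", "Thu", "Fri", "Sat", "Sat", "Sat"],
    "bill": [10.0, 12.5, 8.0, 20.0, 15.5, 18.0],
})

# Each facet is simply the subset of rows for one level of the variable;
# seaborn's FacetGrid(df, col="day") would draw one panel per such subset.
facets = {day: sub for day, sub in df.groupby("day")}

assert set(facets) == {"Thu", "Fri", "Sat"}
assert len(facets["Sat"]) == 3
```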
"domain": "datascience.stackexchange",
"id": 7245,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "visualization, matplotlib, seaborn, python-3.x",
"url": null
} |
ros, catkin-make, catkin, ros-kinetic
As a result, the directories build, devel, and src are empty.
Do you have any idea to solve this problem?
Thanks.
Originally posted by ogd on ROS Answers with karma: 26 on 2021-02-05
Post score: 1
The problem was solved after adding the package below to the Yocto build:
catkin-dev
Originally posted by ogd with karma: 26 on 2021-02-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 36047,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, catkin-make, catkin, ros-kinetic",
"url": null
} |
beginner, c, console, windows, text-editor
while (key != 'z') {
printLines(lines);
printf_s("what key do you want?(o for options) ");
scanf_s("%c", &key);
clearBuffer();
switch (key) {
case 'o': showOptions();
break;
case '1': moveCursor(1);
break;
case 'q': moveCursor(0);
break;
case 'd': deleteLine(&lines);
break;
case 'c': changeLine(&lines);
break;
case 'a': addLine(&lines);
break;
case 'z': printf_s("bye bye!\n");
}
}
return 0;
}
malloc doesn't need a cast. In fact, casting the result of malloc may lead to hard-to-find problems.
sizeof(char) is guaranteed to be 1.
Prefer taking sizeof expression rather than sizeof(type). The reason is that such code remains valid even if the type changes. In your case,
*lines = malloc(sizeof(**lines) * (_maxLines+2)); | {
"domain": "codereview.stackexchange",
"id": 29767,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, c, console, windows, text-editor",
"url": null
} |
java, beginner, tree
@Override
public int compareTo(Node o) {
if(o.getData() > this.data)
return -1;
if(o.getData() < this.data)
return 1;
return 0;
}
This can be simplified to (Java 7 or later):
@Override
public int compareTo(Node o) {
return Integer.compare(this.data, o.getData());
}
Common constructor
Your Node class has a constructor taking the data, left, and right nodes, but you never supply real values for the left and right nodes, only null. You should probably remove those parameters from the constructor, and just default them to null. Alternatively, have an additional constructor like:
public Node(int data) {
this(data, null, null);
}
Node permissions
Your node is declared class Node ... which means only classes in the same package as yours can see the Node class, it is not 'public'.
Unfortunately, your Tree class is public: public class Tree and it has the method:
public Node getRoot() {
return root;
} | {
"domain": "codereview.stackexchange",
"id": 9982,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, tree",
"url": null
} |
laser, android
Title: use point to present laserscan data?
Hi,
I am new to OpenGL, and I have some questions about presenting laser scan data on Android. In the tutorial code (android_tutorial_teleop), a triangle fan shape is used to present the laser scan data. Why is a triangle fan chosen here? Could we just use a point shape instead?
Originally posted by ira on ROS Answers with karma: 106 on 2012-06-17
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9814,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "laser, android",
"url": null
} |
neural-networks
Title: Is it possible to add "memory" to a neural network? Suppose I have a NN with one hidden layer, 10 inputs and 5 outputs, intended to be used as a (for example) game-bot AI. Would it make any sense to add, say, 5 (insert any number here) more inputs and outputs, and directly link these outputs from a previous step to these additional inputs of the next?
I imagine it could work as a RAM of sorts, making it possible to "save" some data that the network finds useful and use it later, therefore enabling the network to make decision not only on the current situation, but also on whatever it may "remember".
But maybe it's better to just hard-code these variables as actual variables and write only what the programmer decided to, such as "enemies killed", without the network being able to save whatever it finds useful? Yes it is possible. | {
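What the question describes is essentially a simple recurrent network: the extra outputs of one step are concatenated onto the inputs of the next. A minimal numpy sketch of just the wiring (all sizes and weights are hypothetical, and no training is shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes from the question: 10 "real" inputs, 5 "real" outputs,
# plus 5 extra units whose outputs are fed back as inputs next step.
N_IN, N_OUT, N_MEM, N_HID = 10, 5, 5, 8

W1 = rng.standard_normal((N_HID, N_IN + N_MEM)) * 0.1
W2 = rng.standard_normal((N_OUT + N_MEM, N_HID)) * 0.1

def step(obs, memory):
    """One forward pass; the last N_MEM outputs become next step's memory."""
    x = np.concatenate([obs, memory])
    h = np.tanh(W1 @ x)
    y = np.tanh(W2 @ h)
    return y[:N_OUT], y[N_OUT:]      # (actions, new memory)

memory = np.zeros(N_MEM)
for _ in range(3):                    # the loop threads memory through time
    actions, memory = step(rng.standard_normal(N_IN), memory)

assert actions.shape == (N_OUT,) and memory.shape == (N_MEM,)
```

This is exactly the "RAM of sorts" idea: nothing forces the network to store anything meaningful in the memory units, but training can shape what gets written there.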
"domain": "cs.stackexchange",
"id": 7130,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-networks",
"url": null
} |
condensed-matter
So, the conclusion is that if we want to use the Hubbard model (the basis of $+U$ methods), we first need to construct such localized states, $\phi(\mathrm{r})$, that they satisfy several criteria: | {
"domain": "physics.stackexchange",
"id": 67892,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "condensed-matter",
"url": null
} |
algorithm-analysis, formal-methods
Title: Maximum value a variable can hold without documentation Suppose we work with some particular programming language (like C++) on some particular computer. Furthermore, we want to know which values are minimum and maximum for some particular numeric data type of this language. We can't use any specific tools of our language (like numeric_limits), and have no access to language or system documentation. Specifically, we don't know the lengths of our words. Is there an efficient and rather precise way to find out the min/max we want to know?
We can certainly write some loop starting with zero and adding one at each step and wait for an overflow. But it is certainly not an efficient way. We can write some loop and check some condition like $\log(\exp(n)) \stackrel{?}{=} n$, but I'm not sure whether this will answer our question. The first step should be
x = 1
count = 0
while (x*2 > x)
    x = x*2
    count++
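The doubling loop can be simulated by masking to a fixed width (a sketch of the idea; Python's own ints are unbounded, so the wraparound that happens by itself on real hardware is emulated with a mask):

```python
def probe_max(bits):
    """Simulate the doubling probe on a `bits`-wide unsigned integer."""
    mask = (1 << bits) - 1
    x, count = 1, 0
    while ((x * 2) & mask) > x:   # doubling stops once it wraps past the max
        x = (x * 2) & mask
        count += 1
    # x now holds the top power of two; the maximum is x with all
    # lower bits set as well, and count+1 is the word width in bits.
    return x | (x - 1), count + 1

assert probe_max(32) == (2**32 - 1, 32)
assert probe_max(16) == (2**16 - 1, 16)
```

Note this relies on overflow wrapping around, which holds for unsigned types; for signed types in C the overflow is undefined behaviour, so the probe would have to be done in an unsigned type first.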
"domain": "cs.stackexchange",
"id": 3336,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm-analysis, formal-methods",
"url": null
} |
-
Since the OP says that $n$ is known, I don't think it is necessary to have a long stretch of zeroes anywhere. You may be thinking of polynomial multiplication (or convolution of coefficients using the fast Fourier transform), which does need long strings of zeroes (zero-padding) because the $n$-point DFT will give polynomial multiplication modulo $(x^n-1)$. – Dilip Sarwate Apr 20 '12 at 19:01
@Dilip: apparently I wasn't paying sufficient attention. I shall fix; thanks! – J. M. Apr 23 '12 at 16:22
Method 1 works only if you don't have a constant term in your polynomial, but that's easy enough to filter out. Are there any conditions on method 2 I should be aware of before I try it out? – wxffles Apr 23 '12 at 20:36
@wxffles: actually, it does work; it's a Vandermonde system that you're solving, after all. What were you working on that prompted this observation of yours? – J. M. Apr 23 '12 at 20:40 | {
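Method 2 referred to in the thread — evaluating the black-box polynomial at $n$ distinct points and solving the resulting Vandermonde system — can be sketched as follows (the example polynomial is made up for illustration):

```python
import numpy as np

def recover_coeffs(black_box, n):
    """Recover the n coefficients of an unknown degree-(n-1) polynomial
    from n evaluations by solving the Vandermonde system V a = y."""
    xs = np.arange(1, n + 1, dtype=float)     # any n distinct points work
    V = np.vander(xs, n, increasing=True)     # columns: x^0, x^1, ..., x^(n-1)
    ys = np.array([black_box(x) for x in xs])
    return np.linalg.solve(V, ys)

# Hypothetical black box: p(x) = 3 + 2x + x^2
coeffs = recover_coeffs(lambda x: 3 + 2 * x + x * x, 3)
assert np.allclose(coeffs, [3, 2, 1])
```

For large $n$ the equally spaced points make the Vandermonde matrix ill-conditioned, so in practice one would pick better-spread evaluation points (e.g. Chebyshev nodes) or use an interpolation routine directly.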
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9845754492759499,
"lm_q1q2_score": 0.8288408609476049,
"lm_q2_score": 0.8418256432832333,
"openwebmath_perplexity": 355.13842966440666,
"openwebmath_score": 0.9825674891471863,
"tags": null,
"url": "http://math.stackexchange.com/questions/134212/determine-the-coefficients-of-an-unknown-black-box-polynomial/134436"
} |
t1 is the time at the start; and t2 is the time at the end.
It might help to think of this equation as:
$$v_{average} = (x_{final}-x_{start})/(t_{final}-t_{start})$$
Also, for many problems (not all, so be careful) you can choose t_start = 0. That's when you start the stopwatch; t_end is when you stop the stopwatch, and $\Delta t$ is what the stopwatch reads.
5. Aug 31, 2014
### ehild
The route consists of two stages. You can take the initial position at the house as xi=0 and the initial time ti=0. First you walk due East: your displacement is Δx1=55 m, and it takes Δt1=25 s, so your position is x1=55 m from the house. Then you turn back, moving West: your displacement is negative, Δx2=-40 m, and it takes Δt2=47 s. Your position is now x2=Δx1+Δx2=55-40=15 m measured to the East of the house. Your total displacement is Δx=15 m to the East, and you walked for 25+47=72 s in total. The average velocity is Vav=Δx/Δt = 15/72 ≈ 0.21 m/s, and it points East.
ehild | {
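The two-stage calculation above reduces to a few lines (taking East as positive):

```python
# The two-stage walk from the post: 55 m East in 25 s, then 40 m West in 47 s.
dx1, dt1 = 55.0, 25.0
dx2, dt2 = -40.0, 47.0   # westward displacement is negative

displacement = dx1 + dx2           # net displacement: 15 m East
elapsed = dt1 + dt2                # total walking time: 72 s
v_avg = displacement / elapsed     # average velocity = Δx / Δt

assert displacement == 15.0 and elapsed == 72.0
```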
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.972414716174355,
"lm_q1q2_score": 0.8060723001291507,
"lm_q2_score": 0.8289388125473628,
"openwebmath_perplexity": 1157.2423711047575,
"openwebmath_score": 0.4665842056274414,
"tags": null,
"url": "https://www.physicsforums.com/threads/finding-average-velocity.768317/"
} |