| text (string, 1–1.11k chars) | source (dict) |
|---|---|
gravity, magnetic-fields
Albert Einstein (1879-1955) was inspired by Mach's insight and used it as the basis of his general theory of relativity, founded on the Machian idea that all motion was relative. If the relative rotation of a shell of stars produced a rotational dragging effect on its contents, and there was nothing special about those stars, then the relative rotation of the contents should also in turn produce a dragging effect on the starfield. A rotating star or planet should also exert a drag on nearby matter and light. | {
"domain": "physics.stackexchange",
"id": 63022,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gravity, magnetic-fields",
"url": null
} |
has probability $β$, where $α = cβ$. (That is, the district attorney supposes that each recently released convict is $c$ times as likely to be the crime’s perpetrator as is each town member who is not a recently released convict.) When the DNA that is analyzed is compared against the database of the ten thousand ex-convicts, it turns out that A. J. Jones is the only one whose DNA matches the profile. Assuming that the district attorney’s estimate of the relationship between $α$ and $β$ is accurate, what is the probability that A. J. is guilty? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.988312740283122,
"lm_q1q2_score": 0.8319870181587443,
"lm_q2_score": 0.8418256532040708,
"openwebmath_perplexity": 603.3526892855305,
"openwebmath_score": 0.7469279170036316,
"tags": null,
"url": "https://math.stackexchange.com/questions/1751938/probability-of-a-man-being-guilty"
} |
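The district attorney's update is a Bayes'-rule calculation. The full problem statement (town size, DNA match probability for an innocent person) is not shown above, so the numbers below are purely illustrative, not the solution to this exercise:

```python
def posterior_guilt(prior, match_prob_innocent):
    """P(guilty | DNA match) by Bayes' rule, assuming the guilty
    person matches the profile with probability 1."""
    evidence = prior * 1.0 + (1 - prior) * match_prob_innocent
    return prior / evidence

# Hypothetical numbers: prior guilt 0.001, innocent-match rate 1e-6.
p = posterior_guilt(prior=0.001, match_prob_innocent=1e-6)
```

Even a tiny prior yields a posterior near 1 when the innocent-match rate is far smaller than the prior, which is the general shape of the DNA-match argument.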
Comment #2780 by Pieter Belmans (site) on August 20, 2017 at 10:12 am UTC
It's not very important, but it would be better to write \not\mid compared to \not|: compare $\not\mid$ to $\not|$.
Comment #2889 by Johan (site) on October 7, 2017 at 2:31 pm UTC
Thanks. Fixed here.
| {
"domain": "columbia.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9875683496956436,
"lm_q1q2_score": 0.8119679670700205,
"lm_q2_score": 0.822189134878876,
"openwebmath_perplexity": 121.89784745116994,
"openwebmath_score": 0.9696638584136963,
"tags": null,
"url": "https://stacks.math.columbia.edu/tag/09H0"
} |
python
if not creator_name and not creator_pseudonym:
    raise ValueError('At least one of `creator_name` and `creator_pseudonym` should be provided.')
if creator_name and creator_pseudonym:
    raise ValueError('Only one of `creator_name` and `creator_pseudonym` should be provided.')
if licence not in ACCEPTABLE_LICENSES:
    raise ValueError('Licence not acceptable') | {
"domain": "codereview.stackexchange",
"id": 35784,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
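The fragment above begins mid-function. A minimal self-contained sketch of the same validation — the first guard condition and the licence set are inferred from the error messages, not shown in the excerpt:

```python
ACCEPTABLE_LICENSES = {"CC0", "CC-BY", "MIT"}  # hypothetical set, not from the post

def validate(creator_name=None, creator_pseudonym=None, licence=None):
    """Require exactly one of creator_name / creator_pseudonym, plus a known licence."""
    if not creator_name and not creator_pseudonym:
        raise ValueError('At least one of `creator_name` and `creator_pseudonym` should be provided.')
    if creator_name and creator_pseudonym:
        raise ValueError('Only one of `creator_name` and `creator_pseudonym` should be provided.')
    if licence not in ACCEPTABLE_LICENSES:
        raise ValueError('Licence not acceptable')
    return True
```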
# Find prime p s.t. a^4-b^4=p
1. Jul 19, 2008
### foxjwill
1. The problem statement, all variables and given/known data
Find all primes p such that $$\exists a,b \in \mathbf{Z}$$ such that $$a^4-b^4=p$$.
2. Relevant equations
3. The attempt at a solution
For simplicity, we can limit a and b to the positive integers.
Factoring, we have $$p=(a^2+b^2)(a-b)(a+b)$$. By the unique factorization theorem, we are limited to three cases:
(1) $$a+b=1$$ and $$a-b=1$$, which gives $$a=1$$ and $$b=0$$, so p must be 1. But since 1 is not a prime, case 1 is eliminated.
(2) $$a^2+b^2=1$$ and $$a-b=1$$, which gives $$a^2+b^2-2ab=1$$ and then $$-2ab=0$$. Again, we are left with $$a=1$$ and $$b=0$$, so case 2 is eliminated.
(3) $$a^2+b^2=1$$ and $$a+b=1$$, which gives $$a^2+b^2+2ab=1$$ and then $$2ab=0$$. Again, we are left with $$a=1$$ and $$b=0$$, so case 3 is eliminated.
Therefore, no primes satisfy the equation. Q.E.D.
Is my proof valid? If it is, is there a "more elegant" proof? | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211612253742,
"lm_q1q2_score": 0.8058961427707605,
"lm_q2_score": 0.8267117855317474,
"openwebmath_perplexity": 697.0567302499984,
"openwebmath_score": 0.9337755441665649,
"tags": null,
"url": "https://www.physicsforums.com/threads/find-prime-p-s-t-a-4-b-4-p.245744/"
} |
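The conclusion can be sanity-checked numerically: a brute-force search over small $a > b \ge 0$ should find no prime values of $a^4 - b^4$, consistent with the factorization argument above.

```python
def is_prime(n):
    """Trial-division primality test (fine for this small search)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Search a^4 - b^4 over 0 <= b < a < 100; the proof predicts no primes.
hits = [(a, b) for a in range(1, 100) for b in range(a)
        if is_prime(a**4 - b**4)]
```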
php, object-oriented, mvc, ajax, pdo
$save = $core->insertSupplier($data);
}
if(isset($_POST['action']) && $_POST['action'] === 'editSupplier' ){
$id = filter_var($_POST['id'],FILTER_SANITIZE_NUMBER_INT);
$code = filter_var($_POST['codice_interno'],FILTER_SANITIZE_STRING);
$name = filter_var($_POST['nome_fornitore'],FILTER_SANITIZE_STRING);
$piva = filter_var($_POST['p_iva'],FILTER_SANITIZE_NUMBER_INT);
$tel = filter_var($_POST['tel'],FILTER_SANITIZE_NUMBER_INT);
$fax = filter_var($_POST['fax'],FILTER_SANITIZE_NUMBER_INT);
$email = filter_var($_POST['email'],FILTER_SANITIZE_STRING);
$indirizzo = filter_var($_POST['indirizzo'],FILTER_SANITIZE_STRING);
$citta = filter_var($_POST['citta'],FILTER_SANITIZE_STRING);
$cap = filter_var($_POST['cap'],FILTER_SANITIZE_NUMBER_INT);
$provincia = filter_var($_POST['provincia'],FILTER_SANITIZE_STRING); | {
"domain": "codereview.stackexchange",
"id": 30638,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, object-oriented, mvc, ajax, pdo",
"url": null
} |
beginner, c, formatting, io
Title: K&R C book, Exercise 1-21: Replace tabs with spaces I'm new to C, just started reading K&R C book, and am working through exercises. This is my solution to 1-21, and as far as I tested it works.
Anything that I'm doing wrong, or that it's not idiomatic C? What can be improved ?
Exercise 1-21. Write a program entab that replaces strings of blanks
by the minimum number of tabs and blanks to achieve the same spacing.
Use the same tab stops as for detab. When either a tab or a single
blank would suffice to reach a tab stop, which should be given
preference?
#include <stdio.h>
#define TAB_SIZE 4
int main_entab() {
int c;
int character_count = 0;
int whitespace_count = 0;
while ((c = getchar()) != EOF) {
if (c == ' ') {
// A _ B _ | _ C _ _ |
int numberOfCharsToReachTabStop = (TAB_SIZE - ((character_count + whitespace_count) % TAB_SIZE));
whitespace_count++; | {
"domain": "codereview.stackexchange",
"id": 33524,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, c, formatting, io",
"url": null
} |
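A complete sketch of the entab logic, in Python rather than the poster's C, assuming the input contains only spaces and printable characters. Pending blanks are flushed as tabs plus spaces, and (answering the exercise's closing question) a single blank is preferred over a tab when one character suffices to reach the stop:

```python
def entab(line, tab_size=4):
    """Replace runs of blanks with the minimum number of tabs and blanks
    that reach the same column (tab stops every tab_size columns)."""
    out, col, blanks = [], 0, 0

    def flush():
        nonlocal col, blanks
        while blanks > 0:
            to_stop = tab_size - (col % tab_size)
            if blanks >= to_stop and to_stop > 1:
                out.append('\t')      # one tab covers the whole gap to the stop
                col += to_stop
                blanks -= to_stop
            else:
                out.append(' ')       # a single blank is preferred when one suffices
                col += 1
                blanks -= 1

    for ch in line:
        if ch == ' ':
            blanks += 1               # defer output until we know the run length
        else:
            flush()
            out.append(ch)
            col += 1
    flush()                           # handle trailing blanks
    return ''.join(out)
```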
continuum-mechanics
$ \displaystyle \int_{V^0} \dfrac{\partial }{\partial t}\bigg|_{\mathbf{r_0}} ( \rho J ) = 0$
$\displaystyle \int_{V^0} (\rho J) \dfrac{\partial }{\partial t}\bigg|_{\mathbf{r_0}} \mathbf{u} = \int_{V^0} \rho J \mathbf{g} + \oint_{\partial V^0} \mathbf{\hat{n}^0} \cdot \mathbb{P} = \int_{V^0} \rho J \mathbf{g} + \int_{V^0} \nabla_0 \cdot \mathbb{P} $,
having used Nanson's formula to transform the surface integral, where $\mathbb{P}$ is the nominal stress tensor, $\nabla_0 \cdot$ is the divergence in the reference configuration, and $J$ is the determinant of the gradient of the transformation from the reference to the physical coordinates; we also exploit the mass conservation $\frac{\partial}{\partial t} \big|_{\mathbf{r_0}} (\rho J) = 0$ to write $\rho(\mathbf{r_0},t) J(\mathbf{r_0},t) = \overline{\rho}(\mathbf{r_0})$, constant in time. | {
"domain": "physics.stackexchange",
"id": 92166,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "continuum-mechanics",
"url": null
} |
machine-learning, statistics
data: iris$Sepal.Length[iris$Species == "setosa"] and iris$Sepal.Length[iris$Species == "virginica"]
W = 38.5, p-value < 2.2e-16
alternative hypothesis: true location shift is not equal to 0
Regression:
# Simple linear regression
summary(lm(Sepal.Length~Species, data=iris))
# p-values are smaller than 0.05 which means each factor's contribution is statistically different from the intercept
Result:
Call:
lm(formula = Sepal.Length ~ Species, data = iris)
Residuals:
Min 1Q Median 3Q Max
-1.6880 -0.3285 -0.0060 0.3120 1.3120
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.0060 0.0728 68.762 < 2e-16 ***
Speciesversicolor 0.9300 0.1030 9.033 8.77e-16 ***
Speciesvirginica 1.5820 0.1030 15.366 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 | {
"domain": "datascience.stackexchange",
"id": 6066,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, statistics",
"url": null
} |
c++, entity-component-system
void addEntity(const Entity32Bit entity)
{
assert(!entityInSet(entity));
changeIndex(entity, mEDS[entity.type()].size());
mEDS[entity.type()].push_back(entity);
}
void deleteEntity(const Entity32Bit entity)
{
assert(entityInSet(entity));
//change last member in group to point to deleted component;
changeIndex(*(mEDS[entity.type()].end() - 1), getIndex(entity));
//swapComponent + delete EDS
mEDS[entity.type()][getIndex(entity)] = *(mEDS[entity.type()].end() - 1);
mEDS[entity.type()].pop_back();
//clear entity in sparse
changeIndex(entity, _UI32_MAX);
}
uint32_t totalSize()
{
uint32_t size = mEDS[1].size(); //mEDS[0] is always empty; matches the uint32_t return type
for (uint32_t i = 2; i < MAX_ET_ID; ++i)
{
size += mEDS[i].size();
}
return size;
}
void resizeSparse(ET_ID id, uint32_t size)
{
mSparses[id].resize(size); | {
"domain": "codereview.stackexchange",
"id": 44389,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, entity-component-system",
"url": null
} |
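`deleteEntity` above is the classic sparse-set "swap-and-pop": the dense array stays packed by moving its last element into the freed slot and updating the sparse index. A minimal Python sketch of the same idea (a dict stands in for the sparse array):

```python
class DenseSet:
    """Packed array + sparse index, deleting via swap-and-pop."""

    def __init__(self):
        self.dense = []       # entities, stored contiguously
        self.sparse = {}      # entity -> index into dense

    def add(self, entity):
        assert entity not in self.sparse
        self.sparse[entity] = len(self.dense)
        self.dense.append(entity)

    def delete(self, entity):
        idx = self.sparse.pop(entity)
        last = self.dense.pop()
        if last != entity:            # move the last element into the hole
            self.dense[idx] = last
            self.sparse[last] = idx
```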
$\tan^{-1} ( x_1\sqrt{2} - 1 ) + \tan^{-1}( x_1 \sqrt{2} + 1) = \displaystyle \frac{\pi}{2}$,
$\tan^{-1} ( x_2\sqrt{2} - 1 ) + \tan^{-1}( x_2 \sqrt{2} + 1) = -\displaystyle \frac{\pi}{2}$.
I will now show that $x_1 = 1$ and $x_2 = -1$. Indeed, it’s apparent that these have to be the two transition points because these are the points where $\displaystyle \frac{x \sqrt{2}}{1 - x^2}$ is undefined. However, it would be more convincing to show this directly.
To show that $x_1 = 1$, I need to show that
$\tan^{-1} (\sqrt{2} - 1 ) + \tan^{-1}( \sqrt{2} + 1) = \displaystyle \frac{\pi}{2}$.
I could do this with a calculator…
…but that would be cheating.
Instead, let $\alpha = \tan^{-1} (\sqrt{2} - 1 )$ and $\beta = \tan^{-1} (\sqrt{2} + 1 )$, so that
$\tan \alpha = \sqrt{2} - 1$,
$\tan \beta = \sqrt{2} + 1$.
Indeed, by SOHCAHTOA, the angles $\alpha$ and $\beta$ can be represented in the figure below: | {
"domain": "meangreenmath.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9877587261496031,
"lm_q1q2_score": 0.8005576076167914,
"lm_q2_score": 0.8104789018037399,
"openwebmath_perplexity": 130.9473513557711,
"openwebmath_score": 0.9300140142440796,
"tags": null,
"url": "https://meangreenmath.com/2015/10/09/the-antiderivative-of-1x41-part-9/"
} |
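The "calculator" check can at least be scripted; it verifies the identity numerically without replacing the geometric argument. In fact $\tan^{-1}(\sqrt{2}-1) = \pi/8$ and $\tan^{-1}(\sqrt{2}+1) = 3\pi/8$, which sum to $\pi/2$:

```python
import math

alpha = math.atan(math.sqrt(2) - 1)   # pi/8
beta = math.atan(math.sqrt(2) + 1)    # 3*pi/8
total = alpha + beta                  # should equal pi/2
```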
# Average minimum distance between $n$ points generated i.i.d. uniformly in the ball
Let $U \in \mathbb{R}^3$ be distributed uniformly in the Ball in $\mathbb{R}^3$ centered at zero. That is $U \sim f_U(u)= \frac{1}{ \frac{4}{3} \pi R^3}$ for all $\|u\|\le R$ where $R$ is the radius of the ball.
Now suppose we generate $n$ points i.i.d. according to the distribution of $U$.
My question is: can we compute the expected minimum distance between the generated points, that is \begin{align} E\left[ \min_{i,j\in \{1,2,\ldots,n\},\, i \neq j} \| U_i-U_j\| \right], \end{align} where $\| U_i-U_j\|$ is the Euclidean distance.
This question is related to a number of other questions. For example, Average distance between two random points in a square
Average minimum distance between $n$ points generate i.i.d. with uniform dist.
I feel that this question should have been addressed before but not sure where to look. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226267447513,
"lm_q1q2_score": 0.8204390849025918,
"lm_q2_score": 0.8397339656668287,
"openwebmath_perplexity": 601.8218367521381,
"openwebmath_score": 0.9141703844070435,
"tags": null,
"url": "https://math.stackexchange.com/questions/2005775/average-minimum-distance-between-n-points-generate-i-i-d-uniformly-in-the-bal"
} |
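Even without a closed form, the expectation is easy to estimate by simulation. A rough Monte Carlo sketch (rejection sampling for uniform points in the ball; the sample counts are arbitrary):

```python
import math
import random

def min_pairwise_distance_ball(n, R=1.0, rng=random):
    """One sample of min_{i<j} ||U_i - U_j|| for n i.i.d. uniform
    points in the radius-R ball in R^3 (rejection sampling)."""
    pts = []
    while len(pts) < n:
        p = tuple(rng.uniform(-R, R) for _ in range(3))
        if sum(c * c for c in p) <= R * R:
            pts.append(p)
    return min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])

random.seed(0)
samples = [min_pairwise_distance_ball(10) for _ in range(200)]
estimate = sum(samples) / len(samples)  # Monte Carlo estimate of the expectation
```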
homework-and-exercises, newtonian-mechanics, forces, friction
For bullet point 2... Is it because of the presence of the resistive forces?
The question doesn't really make sense. Is the propulsion force larger than the net force? Yes of course, since the net force could for example be zero.
If you mean that the propulsion force must be bigger than the resistive force, then your answer is fine and correct. | {
"domain": "physics.stackexchange",
"id": 40298,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, forces, friction",
"url": null
} |
## Tips
• If dsolve cannot find an explicit or implicit solution, then it issues a warning and returns the empty sym. In this case, try to find a numeric solution using the MATLAB® ode23 or ode45 function. Sometimes, the output is an equivalent lower-order differential equation or an integral.
• dsolve does not always return complete solutions even if 'IgnoreAnalyticConstraints' is false.
• If dsolve returns a function that has different one-sided limits at x0 and you specify the condition y(x0), then dsolve treats the condition as a limit from the right, $\lim_{x \to x_0^+} y(x)$.
## Algorithms
If you do not set 'IgnoreAnalyticConstraints' to false, then dsolve applies these rules while solving the equation:
• log(a) + log(b) = log(a·b) for all values of a and b. In particular, the following equality is applied for all values of a, b, and c:
(a·b)^c = a^c·b^c.
• log(a^b) = b·log(a) for all values of a and b. In particular, the following equality is applied for all values of a, b, and c:
(a^b)^c = a^(b·c). | {
"domain": "mathworks.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9881308786041315,
"lm_q1q2_score": 0.8653921711289246,
"lm_q2_score": 0.8757869932689566,
"openwebmath_perplexity": 1471.9728524477512,
"openwebmath_score": 0.7705014944076538,
"tags": null,
"url": "https://it.mathworks.com/help/symbolic/dsolve.html"
} |
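These identities are the "analytic constraints": they hold for positive reals but can fail off the principal branch, which is why the option matters. For example, with $a = b = -1$ and $c = 1/2$, $(a\,b)^c = 1$ while $a^c\,b^c = i \cdot i = -1$:

```python
import cmath

a = b = -1
c = 0.5
lhs = (a * b) ** c                      # ((-1)*(-1))**0.5 = 1
rhs = cmath.exp(c * cmath.log(a)) * cmath.exp(c * cmath.log(b))
# principal branch: (-1)**0.5 = i, so rhs = i*i = -1, not lhs
```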
forces, diffusion
When we consider a set of particles, each can collide with the others, and the particles move from the more concentrated region to the less concentrated one.
Now take a vacuum tube, keep a single particle in it at absolute zero, and then slowly increase the temperature: the particle starts to move with thermal motion. There is no way for any collision to occur or for anything else to vary its momentum, so which of the four fundamental forces drives this motion? If the single particle is in a vacuum tube, then there really isn't a way to define the temperature. Temperature is a concept that only applies to collections of a very large number of particles, since temperature is somewhat a measure of the average kinetic energy of a system.$^*$ But your single particle already has a well-defined kinetic energy. We don't (can't?) apply the ideas of statistical mechanics to it. | {
"domain": "physics.stackexchange",
"id": 60905,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, diffusion",
"url": null
} |
vba, ms-access
That sub looks like this:
Public Sub SaveControlPositionsToTags(frm As Form)
Dim ctl As Control
Dim ctlLeft As String
Dim ctlTop As String
Dim ctlWidth As String
Dim ctlHeight As String
For Each ctl In frm.Section(acDetail).Controls
'Calculate the percentages and store them as strings
ctlLeft = CStr(Round(ctl.Left / frm.Width, 2))
ctlTop = CStr(Round(ctl.Top / frm.Section(acDetail).Height, 2))
ctlWidth = CStr(Round(ctl.Width / frm.Width, 2))
ctlHeight = CStr(Round(ctl.Height / frm.Section(acDetail).Height, 2))
'Store the percentages for each control in its "Tag" property
ctl.Tag = ctlLeft & ":" & ctlTop & ":" & ctlWidth & ":" & ctlHeight
Next
End Sub | {
"domain": "codereview.stackexchange",
"id": 23200,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vba, ms-access",
"url": null
} |
/*****************************************************************************/
/* create a list [p(0,d),p(1,d),p(2,d), ... ,p(n,d)] and return pointer */
/*****************************************************************************/
unsigned long * p_list(unsigned long n, unsigned long d) {
unsigned long i;
unsigned long * powers = malloc((n+1)*sizeof(unsigned long));
for (i=0;i<=n;i++) powers[i] = p(i,d);
return powers;
}
/*****************************************************************************/
/* main */
/*****************************************************************************/
int main(int argc, char **argv) {
unsigned long k1, k2, k3, a;
unsigned long long result = 0;
unsigned long * p2 = p_list(200000, 2);
unsigned long * p5 = p_list(200000, 5); | {
"domain": "jasonbhill.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9873750481179169,
"lm_q1q2_score": 0.8291323726769038,
"lm_q2_score": 0.8397339736884711,
"openwebmath_perplexity": 3230.630958870691,
"openwebmath_score": 0.5202064514160156,
"tags": null,
"url": "http://code.jasonbhill.com/category/cython/"
} |
c#, beginner, database, ado.net
if (parameterValues?.Length > 0)
SetCommandParameters(command, parameterValues);
using (IDataReader reader = command.ExecuteReader()) {
var indices = Enumerable.Range(0, reader.FieldCount).ToArray();
while (reader.Read()) {
yield return indices.Select(i => reader[i]).ToArray();
}
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 36355,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, beginner, database, ado.net",
"url": null
} |
Limits give all students of mathematics a lot of trouble. There are similar definitions for one-sided limits, as well as for limits approaching $-\infty$. A limit fails to exist when the function increases without bound as $x \rightarrow a$; this happens in the above example, where the denominator evaluates to $0$. There are ways of determining limit values precisely without using L'Hospital's rule, but those techniques are covered in later lessons. For now, write $\lim_{x \to a} f(x) = L$, where $L$ is an arbitrary number, and identify when the limit does not exist as $x$ moves toward some value. | {
"domain": "eliot.cz",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226260757067,
"lm_q1q2_score": 0.8098919775555254,
"lm_q2_score": 0.8289388146603365,
"openwebmath_perplexity": 1618.428281974271,
"openwebmath_score": 0.9772645235061646,
"tags": null,
"url": "https://www.eliot.cz/claude-earl-gib/limit-of-a-function-187449"
} |
equalization, viterbi-algorithm
The trouble with an equalizing filter is it compensates for dips in the channel frequency response by amplifying those frequencies. Unavoidably, the noise is also amplified.
There's a tradeoff to be made: the flatter the frequency response is made, the lower the error due to ISI, but the higher the error due to noise. There's an optimal amount of equalization which minimizes the error due to noise and ISI. An equalizer that tries to hit this sweet spot is a minimum mean square error (MMSE) equalizer. | {
"domain": "dsp.stackexchange",
"id": 5355,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "equalization, viterbi-algorithm",
"url": null
} |
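The tradeoff can be made concrete for a single frequency bin. With channel response $H$ and unit signal power, a zero-forcing tap is $1/H$, while the MMSE tap $H^*/(|H|^2 + \sigma^2)$ deliberately backs off in the dips. The numbers below are illustrative:

```python
def zf_tap(H):
    """Zero-forcing: invert the channel exactly (amplifies noise in dips)."""
    return 1 / H

def mmse_tap(H, noise_var):
    """Per-bin MMSE tap, assuming unit signal power."""
    return H.conjugate() / (abs(H) ** 2 + noise_var)

H = 0.1 + 0j            # a deep dip in the channel response (hypothetical)
noise_var = 0.01
zf_gain = abs(zf_tap(H))                  # ~10: full inversion, noise x10
mmse_gain = abs(mmse_tap(H, noise_var))   # ~5: under-equalizes at the dip
```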
classification, predictive-modeling, probability, binary
Title: Why do predicted probabilities from this binary classifier not sum up to 1? I have a C5.0 model that is trained to predict a binary class (C1/C2) on a dataset with 20 features. The model is configured to perform boosting (10 trials) and it has a misclassification cost function (100:1, where 100 is the cost for misclassifying a Negative sample as Positive and 1 is the cost for misclassifying a Positive sample as Negative).
Looking at the predicted probabilities generated by the model, I can see that they range from 0 to 1 for each class. That is, I have instances where the predicted class (C1) has a probability lower than 0.5 (for example: predicted class=C1 and predicted probability=0.1). This is where the question arises: if P(C1) < 50%, why is it classified as C1 (there are only two classes, C1 and C2)? | {
"domain": "datascience.stackexchange",
"id": 1663,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classification, predictive-modeling, probability, binary",
"url": null
} |
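This behavior is expected for a cost-sensitive classifier: it picks the class with the lower expected misclassification cost, not the higher probability. A sketch of that decision rule — the mapping of the 100:1 costs onto the classes is assumed here, since the post's wording is ambiguous:

```python
def min_cost_class(p_c1, cost_miss_c1=100.0, cost_miss_c2=1.0):
    """Choose the class with the lower expected misclassification cost.
    cost_miss_c1: cost of predicting C2 when the truth is C1 (assumed mapping)."""
    cost_if_predict_c1 = (1 - p_c1) * cost_miss_c2   # wrong when truth is C2
    cost_if_predict_c2 = p_c1 * cost_miss_c1         # wrong when truth is C1
    return "C1" if cost_if_predict_c1 <= cost_if_predict_c2 else "C2"

min_cost_class(0.1)   # "C1": even at P(C1)=0.1, predicting C2 risks the 100x cost
```

With these costs the decision threshold moves from 0.5 down to roughly 1/101, which is exactly why a class can be predicted at probability 0.1.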
java, performance
Now, let's analyze your current code and spot where the slowness is coming from. The goal of the method is:
to find the "winner" (defined by the String that appears more than collection.size()/2).
To do that, you are looping over the list, and for each element, you are looping a second time over the list and counting equals element. That is the slow part: for a list of size n, you just made your approach O(n2). You are traversing the list again and again to find the equal elements.
How to improve that? Look at it this way: the goal is to count the number of times the String appears in the list. To do that, we just need to loop over that list only once, but remember the current count.
Take the following list:
A --- B --- A --- D | {
"domain": "codereview.stackexchange",
"id": 19659,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, performance",
"url": null
} |
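The single-pass idea described above, sketched in Python for brevity: count every element in one traversal, then check whether the top count exceeds half the list size.

```python
from collections import Counter

def find_winner(items):
    """Return the element appearing more than len(items)/2 times, else None."""
    if not items:
        return None
    counts = Counter(items)                 # one traversal: O(n) instead of O(n^2)
    value, count = counts.most_common(1)[0]
    return value if count > len(items) / 2 else None
```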
protein-structure, pdb
Title: How to select high quality structures from the Protein Data Bank? Models of structures deposited in the Protein Data Bank vary in the quality, depending both on the data quality and expertise and patience of the person who built the model. Is there a well-accepted subset of the PDB entries that has only "high quality" structures? Ideally these structures would be representative for classes of proteins in the whole PDB.
based on a real question from biology.SE There is a very nice database, pdbcull (also known as the PISCES server in the literature). It filters the PDB for high resolution and reduced sequence identity. It also seems to be updated regularly. Depending on the cut-offs, you get between 3000 and 35000 structures.
If you are specifically interested in rotamers, you may want to look at top8000 instead, where they have checked for high resolution, and good MolProbity scores. They also provide a rotamer database. | {
"domain": "bioinformatics.stackexchange",
"id": 99,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "protein-structure, pdb",
"url": null
} |
quantum-mechanics, quantum-measurements
Title: Question about Projective Value Measurements Nielsen and Chuang define a projective measurement as an observable $M$ which has spectral decomposition
$$\sum_m mP_m$$
Where the $m$'s are $M$'s eigenvalues and each $P_m$ is a projection on to $M's$ $m-$eigenspace.
We're given that $M$ itself is self-adjoint, but my question is: are the $P_m$'s also self-adjoint? Or not necessarily?
As a follow-up if the answer is no: how should I understand $P_m^\dagger$?
Spectral projectors are always self-adjoint. If you're familiar with Dirac's notation, one way to see it is that (apart from degeneracy),
$$
P_{m}=\left|m\right\rangle \left\langle m\right|
$$
More in the abstract, linear projectors are defined as endomorphisms for which,
$$
P^{2}=P
$$
So their spectrum is very simple:
$$
P\left|\psi\right\rangle =\varepsilon\left|\psi\right\rangle
$$
$$
\Rightarrow\varepsilon^{2}=\varepsilon
$$
$$
\Rightarrow\varepsilon=1\textrm{ or }0
$$ | {
"domain": "physics.stackexchange",
"id": 76853,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-measurements",
"url": null
} |
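The claim is easy to check numerically for a concrete $P = |m\rangle\langle m|$: the matrix is both idempotent ($P^2 = P$) and self-adjoint ($P = P^\dagger$). A small sketch without external libraries, where the state $|m\rangle$ is an arbitrary example:

```python
import math

def outer(v):
    """P = |v><v| for a normalized complex vector v: P[i][j] = v_i * conj(v_j)."""
    return [[vi * vj.conjugate() for vj in v] for vi in v]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

v = [1 / math.sqrt(2), 1j / math.sqrt(2)]   # an example normalized |m>
P = outer(v)
# P satisfies P^2 = P (idempotent) and P = dagger(P) (self-adjoint)
```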
javascript, performance, angular.js
$scope.events = [];
$rootScope.$on('Log', function(event, data) {
if ($scope.events[0] !== undefined && $scope.events[0].type === "log" && $scope.events[0].data === "Log :: " + data) {
$scope.events[0].count++;
} else {
$scope.events.splice(0, 0, {
type: 'log',
data: "Log :: " + data,
count: 1
});
}
});
$rootScope.$on('Error', function(event, data) {
if ($scope.events[0] !== undefined && $scope.events[0].type === "error" && $scope.events[0].data === "Error :: " + data) {
$scope.events[0].count++;
} else {
$scope.events.splice(0, 0, {
type: 'error',
data: "Error :: " + data,
count: 1
});
}
});
}
};
}])
})(); | {
"domain": "codereview.stackexchange",
"id": 19114,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, performance, angular.js",
"url": null
} |
java, object-oriented, game
// "ko rule": previous position can't be repeated
if ((itsBlacksTurn && previousBlackPosition.equals(stones))
|| (!itsBlacksTurn && previousWhitePosition.equals(stones))) {
System.out.println("true");
stones = previousBlackPosition;
return false;
}
savePosition();
changePlayer();
lastMove = newStone;
return true;
}
/**
* Saves position so we can check violations of "ko rule".
*/
private void savePosition() {
if (itsBlacksTurn) {
previousBlackPosition = new HashMap<>(stones);
} else {
previousWhitePosition = new HashMap<>(stones);
}
}
private boolean isOccupied(GoPoint gp) {
return stones.get(gp) != StoneColor.NONE;
}
private GoPoint getPointAt(int row, int col) {
return new GoPoint(row, col);
} | {
"domain": "codereview.stackexchange",
"id": 14285,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented, game",
"url": null
} |
electromagnetism, electric-current
In general, total current density constant in time implies (due to Maxwell's equations) this equation for the magnetic field:
$$
\frac{1}{c^2}\frac{\partial^2 \mathbf B}{\partial t^2} - \Delta \mathbf B = 0.
$$
This well-known equation has solutions that are waves, meaning any possible magnetic field pattern can keep its size and form while it travels through space with speed of light, or it can be an oscillating pattern that does not travel (a stationary wave). This kind of wave magnetic field can exist, and make magnetic field change in time, even if current is constant in time.
However, when this kind of field is present, we tend to look for its source, and find it in some other current that varies in time, nearby or more distant. A constant current is never regarded as a source of a wave field that is time-dependent, so we look further and always find some plausible source where the current is not stationary anymore (an inductor, an antenna, a distant hot gas or dust, etc.). | {
"domain": "physics.stackexchange",
"id": 98628,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, electric-current",
"url": null
} |
mechanical-engineering, fluid-mechanics, thermodynamics, fluid
Title: Why do we need continuum approximation in fluid mechanics? We know that a fluid in reality is not continuous. It has spaces and voids between atoms and molecules.
Continuum approximation is a famous approximation that is taken in any fluid mechanics textbook. It says that even though the fluid has spaces and voids it can be assumed to behave as a continuous media.
Why do we need to assume that a fluid is a continuous media? That is, what was the problem that we were facing when it was not continuous? Materials were intuitively uniform for 60,000 years. A few people started guessing they might be "atomic" about 3000 years ago. They only became rigorously atomic about two hundred years ago. And they only got a rigorous continuum model about one hundred years ago. But they were being treated as such on an ad hoc basis long before then.
There isn't any conflict between the continuum model and the atomic viewpoint. There never was. The two developed in concert. | {
"domain": "engineering.stackexchange",
"id": 4935,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, fluid-mechanics, thermodynamics, fluid",
"url": null
} |
immunology
Title: How is antibody production stopped? Once clonal selection is done, B cells would start dividing and producing antibodies. So, after an antigen is eliminated, what stops the division of B cells and antibody production? Upon activation plasma B cells upregulate death receptors as part of being activated. Presence of the antigen overcomes the death signal, thus the cell survives. When antigen is lost, the death signals overcome the survival signals as there is no antigen, so the B cell dies via apoptosis.
Memory B cells don't do this. They survive and continue to produce antibodies for years, although this slowly wanes. Antibodies have to be produced as antibody mediated immune mechanisms are extremely important for memory and the reason why a subsequent infection is cleared so rapidly. Furthermore for the example of HPV, antibodies induced by the vaccine prevent the virus even infecting cells (so called neutralising antibodies). | {
"domain": "biology.stackexchange",
"id": 1238,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "immunology",
"url": null
} |
c#
[MarshalAs(UnmanagedType.ByValArray, SizeConst=1)]
internal byte[] Data;
}
}
There isn't really a standard on this, but I prefer to keep simple getters on a single line. Also, keep fields at the top of the class and use camelCase for private fields:
// properties and fields
partial class DiskGeometry
{
private CubicAddress maximumCubicAddress;
private long maximumLinearAddress;
private DWORD bytesPerCylinder;
private LARGE_INTEGER diskSize;
private DISK_GEOMETRY geometry;
public MEDIA_TYPE MediaType
{
get { return geometry.MediaType; }
}
public String MediaTypeName
{
get { return Enum.GetName(typeof(MEDIA_TYPE), this.MediaType); }
}
public override long Cylinder
{
get { return geometry.Cylinders; }
}
public override uint Head
{
get { return geometry.TracksPerCylinder; }
}
public override uint Sector
{
get { return geometry.SectorsPerTrack; }
} | {
"domain": "codereview.stackexchange",
"id": 3342,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#",
"url": null
} |
c, ascii-art, hangman, c99
size_t num_prev_missing = word_len;
count_missing_letters(word_to_guess, print_char);
fputs("\nPick a letter: ", stdout);
int chosen_letter;
while ((chosen_letter = getchar()) != EOF) {
// Consume newline and other white-space characters
if (isspace(chosen_letter)) {
continue;
}
if (!isalpha(chosen_letter)) {
puts("Please enter a valid letter.");
continue;
}
chosen_letter = tolower(chosen_letter);
size_t letter_pos = dst_from_a(chosen_letter);
if (letters[letter_pos] != (char) HIDDEN_LETTER) {
puts("Please pick a different letter");
continue;
}
letters[letter_pos] = (char) chosen_letter;
size_t num_missing = count_missing_letters(word_to_guess, print_char);
if (num_missing == num_prev_missing) {
tries++;
}
num_prev_missing = num_missing; | {
"domain": "codereview.stackexchange",
"id": 33226,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, ascii-art, hangman, c99",
"url": null
} |
operating-systems
The "stack" can be set to a minimum amount, and then gradually grown as things are pushed onto it.
When the program starts, it will ask for more heap memory using system calls, and the OS will allocate the required physical memory (if available, of course), and map it to the process address space. If the OS uses virtual memory, some of the allocated memory can also be swapped to disk if there's need for that. | {
"domain": "cs.stackexchange",
"id": 11820,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "operating-systems",
"url": null
} |
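To make the heap-growth step above concrete, here is a minimal Python sketch (assuming a system where Python's `mmap` module is available): an anonymous mapping asks the OS for memory not backed by a file, and the kernel typically commits physical pages lazily, on first write.

```python
import mmap

# Ask the OS for one page (4 KiB) of anonymous memory, i.e. memory not
# backed by a file. The kernel maps it into the process address space;
# physical frames are typically committed lazily, on first write.
page = mmap.mmap(-1, 4096)

page[:5] = b"hello"      # first write: the OS commits a physical frame
data = bytes(page[:5])
page.close()             # unmap: the OS can reclaim the physical memory

print(data)              # b'hello'
```

The same lazy-commit idea is why a large allocation can succeed even when physical memory is scarce: pages are only backed (or swapped) as they are actually touched.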
c#, algorithm, object-oriented, design-patterns, graph
foreach (var neighbor in vertices[smallest])
{
var alt = distances[smallest] + neighbor.Value;
if (distances.ContainsKey(neighbor.Key) && alt < distances[neighbor.Key])
{
distances[neighbor.Key] = alt;
previous[neighbor.Key] = smallest;
}
}
}
// Meaning we have calculated the shortest path upto destination node, so no need to
// calculate shortest path for remaining nodes like in original Dijkstra.
if (smallest == finish)
{
while (previous.ContainsKey(smallest))
{
path.Push(smallest);
smallest = previous[smallest];
}
path.Push(start);
// Adding the shortest path distance as last element.
// I've used string interpolation here also.
path.Push($": {distances[finish]}");
} | {
"domain": "codereview.stackexchange",
"id": 27779,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, algorithm, object-oriented, design-patterns, graph",
"url": null
} |
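The early-exit idea in the C# snippet above — stop as soon as the destination node is settled, then walk `previous` back to reconstruct the path — can be sketched in Python (the graph shape and function name here are illustrative, not from the original code):

```python
import heapq

def dijkstra_early_exit(graph, start, finish):
    """Shortest path that stops as soon as `finish` is settled.

    `graph` maps node -> {neighbor: edge_weight}; weights assumed non-negative.
    """
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == finish:                      # early exit, as in the C# version
            path = [u]
            while u in prev:                 # walk predecessors back to start
                u = prev[u]
                path.append(u)
            return list(reversed(path)), d
        for v, w in graph.get(u, {}).items():
            alt = d + w
            if alt < dist.get(v, float("inf")):
                dist[v] = alt
                prev[v] = u
                heapq.heappush(heap, (alt, v))
    return None, float("inf")

g = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
print(dijkstra_early_exit(g, "A", "C"))     # (['A', 'B', 'C'], 3)
```

Once the target is popped from the priority queue its distance is final, so the remaining nodes never need to be settled.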
homework-and-exercises, kinematics, geometry, relative-motion
I assumed that point $A$ is the origin i.e., $A\equiv (0,0)$ and $B\equiv (0,w)$. Let's say the swimmer is at point $P\equiv (x,y)$ at some point of time. At that point, he makes an angle $\theta$ with the horizontal with respect to point $B$. Therefore $\cos\theta=\frac x{\sqrt{x^2+(w-y)^2}}$ and $\sin\theta=\frac {w-y}{\sqrt{x^2+(w-y)^2}}$. Now I got the following two equations:-$$\frac {dx}{dt}=v-u\cos\theta=v-\frac {ux}{\sqrt{x^2+(w-y)^2}}\quad and \quad\frac {dy}{dt}=u\sin\theta=\frac {u(w-y)}{\sqrt{x^2+(w-y)^2}}$$Dividing them I got this differential equation:-$$\frac {dy}{dx}=\frac {u(w-y)}{v\sqrt{x^2+(w-y)^2}-ux}$$How do I solve this differential equation? Is there any solution that doesn't use calculus in such type of questions? You may also use the fact that $$\frac d{dt}(\sqrt{x^2+(w-y)^2})=v\cos\theta-u=\frac {vx}{\sqrt{x^2+(w-y)^2}}-u$$ Let me suppose the swimmer begins at a point $I = (\omega, 0)$ and that your target is the origin $ F = (0,0)$ - I think some of the | {
"domain": "physics.stackexchange",
"id": 22480,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, kinematics, geometry, relative-motion",
"url": null
} |
keras
TypeError: ('Bad input argument to theano function with name "mlp1_visualize_weights.py:131" at index 0 (0-based). \nBacktrace when that variable is created:\n\n File "mlp1_visualize_weights.py", line 213, in <module>\n mlp_repeat(X, Y, Xtest, Ytest, params_to_use, weights_file)\n File "mlp1_visualize_weights.py", line 125, in mlp_repeat\n model_created = mlp_model(hid_dim=hid_val, lr=lrate, reg_val=reg, momentum=moment, nest=nestval, optimizer=optim)\n File "mlp1_visualize_weights.py", line 105, in mlp_model\n model.add(Dense(units=hid_dim, input_dim=X.shape[1], kernel_initializer=\'he_uniform\', activation=\'relu\', W_regularizer=l2(reg_val), b_regularizer=l2(reg_val)))\n File "/mnt/data/caugusta/pkgs/anaconda2/lib/python2.7/site-packages/keras/models.py", line 426, in add\n dtype=layer.dtype, name=layer.name + \'_input\')\n File "/mnt/data/caugusta/pkgs/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 1392, in Input\n input_tensor=tensor)\n File | {
"domain": "datascience.stackexchange",
"id": 3961,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "keras",
"url": null
} |
water, friction
Now you may be wondering why $\mu_k$ is smaller for water than concrete. Kinetic friction is primarily caused by chemical bonding between a surface and the object skidding on that surface. In classical mechanics however, it's not necessary to understand the chemical properties of the surfaces being studied. It's just accepted that some surfaces bond stronger than others, with $\mu_k$ being measured for different materials by experiment. | {
"domain": "physics.stackexchange",
"id": 14780,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "water, friction",
"url": null
} |
java, javascript, react.js, spring-mvc, maven
@RequestMapping("/data")
public List<Sale> get(@RequestParam(value = "size", required = false, defaultValue = "3") int size) {
return buildSalesList(size);
}
private List<Sale> buildSalesList(int size) {
return IntStream.range(0, size)
.mapToObj(item -> {
LocalDate startDate = LocalDate.ofYearDay(CURRENT_YEAR, 1);
LocalDate endDate = LocalDate.now();
List<LocalDate> datesBetween = getDatesBetween(startDate, endDate);
return Sale.create(
UUID.randomUUID().toString(),
new Random().nextInt(TOMATOES_UPPER_BOUND),
Provider.values()[new Random().nextInt(PROVIDERS_COUNT)].getStrValue(),
toEpochMilli(datesBetween.get(new Random().nextInt(datesBetween.size())))
);
})
.collect(toList());
}
} | {
"domain": "codereview.stackexchange",
"id": 29315,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, javascript, react.js, spring-mvc, maven",
"url": null
} |
$F=\{ c \in I: W_c \in G \}$
The set $F$ can be further described as follows:
$\displaystyle \begin{aligned} F&=\{ c \in I: W_c \in G \} \\&=\{ c \in I: \forall \ h \in H, \ W_c(h)=f_h(c)=k(h) \ne 0 \} \\&=\{ c \in I: \forall \ h \in H, \ c \in I-O_{h,k(h)} \} \\&=\bigcap_{h \in H} (I-O_{h,k(h)}) \\&=I-\bigcup_{h \in H} O_{h,k(h)}=I-I =\varnothing \end{aligned}$
The last step is $\varnothing$ because $\{ O_{h,k(h)}: h \in H \}$ is a cover of $I$. The fact that $F=\varnothing$ means that $G$ is an open subset of $Y$ containing the point $k$ such that $G$ contains no point of $W$.
Case 2. $k(r) = 0$ for some $r \in I$.
Since $k \notin W$, $k \ne W_x$ for all $x \in I$. In particular, $k \ne W_r$. This means that $k(t) \ne W_r(t)$ for some $t \in I$. Define the open set $G$ as follows:
$G=\{ b \in Y: b(r)=0 \text{ and } b(t)=k(t) \}$ | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9833429619433693,
"lm_q1q2_score": 0.8278033273593683,
"lm_q2_score": 0.8418256492357358,
"openwebmath_perplexity": 239.6477656412658,
"openwebmath_score": 0.995428204536438,
"tags": null,
"url": "https://dantopology.wordpress.com/tag/baire-category-theorem/"
} |
ros, ar-pose, ros-fuerte, include
Title: ar_pose/ARMarkers.h not found
Hi, I am trying to include the ar_pose/ARMarkers.h header in a ROS node with
#include <ar_pose/ARMarkers.h> | {
"domain": "robotics.stackexchange",
"id": 14562,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ar-pose, ros-fuerte, include",
"url": null
} |
convolutional-neural-networks, tensorflow, keras, bayesian-deep-learning, uncertainty-quantification
This problem is also called out-of-distribution (OOD) detection, and again it can be done with BNNs, but unfortunately training a full BNN is intractable, so we use approximations.
As a reference, one of these approximations is Deep Ensembles, which train several instances of a model in the same dataset and then average the softmax probabilities, and has good out of distribution detection properties. Check the paper here, in particular section 3.5 which shows results for OOD based on entropy of the ensemble probabilities. | {
"domain": "ai.stackexchange",
"id": 1675,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "convolutional-neural-networks, tensorflow, keras, bayesian-deep-learning, uncertainty-quantification",
"url": null
} |
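The averaging-plus-entropy recipe from Section 3.5 of that paper can be sketched with plain Python (toy numbers, not real model outputs): average the members' softmax vectors, then score the input by the entropy of the average — disagreement between members flattens the average and raises the entropy.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def ensemble_probs(member_probs):
    """Average the softmax outputs of the ensemble members."""
    n = len(member_probs)
    return [sum(m[k] for m in member_probs) / n
            for k in range(len(member_probs[0]))]

# In-distribution input: members agree -> low-entropy average.
agree = [[0.9, 0.05, 0.05], [0.85, 0.1, 0.05], [0.9, 0.05, 0.05]]
# OOD-like input: members disagree -> the average flattens, entropy rises.
disagree = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]]

assert entropy(ensemble_probs(disagree)) > entropy(ensemble_probs(agree))
```

In practice one would threshold this entropy score to flag inputs the ensemble should not be trusted on.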
Problem 10 (Idealised sieving) Let ${z, D \geq 1}$ (we refer to ${z}$ as the sifting level and ${D}$ as the level of distribution), let ${g}$ be a multiplicative function with ${0 \leq g(p) \leq 1}$, and let ${{\mathcal D} := \{ d|P(z): d \leq D \}}$. How small can one make the quantity
$\displaystyle \sum_{d \in {\mathcal D}} \lambda^+_d g(d) \ \ \ \ \ (14)$
for a sequence ${(\lambda^+_d)_{d \in {\mathcal D}}}$ of upper bound sieve coefficients, and how large can one make the quantity
$\displaystyle \sum_{d \in {\mathcal D}} \lambda^-_d g(d) \ \ \ \ \ (15)$
for a sequence ${(\lambda^-_d)_{d \in {\mathcal D}}}$ of lower bound sieve coefficients? | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9886682444653242,
"lm_q1q2_score": 0.8012947666000314,
"lm_q2_score": 0.8104789155369048,
"openwebmath_perplexity": 248.19796379327,
"openwebmath_score": 0.9931489825248718,
"tags": null,
"url": "https://terrytao.wordpress.com/2015/01/21/254a-notes-4-some-sieve-theory/"
} |
definition, error-analysis, statistics
The systematic uncertainty was estimated by taking into account the uncertainty of the target position along the beam line, which was estimated to be ±2 mm and may cause an uncertainty of ±0.06 MeV in $m_X c^2$. The uncertainty of the position of the beam spot perpendicular to the beam axis was estimated to be, in the worst case, also ±2 mm, which may cause a shift in the invariant mass $m_X c^2$ of ±0.15 MeV. The whole systematic error was conservatively estimated as ±0.20 MeV on $m_X c^2$.
So in the end, you try to make as good a measurement as you can, and quantify your uncertainty in the systematic error. | {
"domain": "physics.stackexchange",
"id": 63324,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "definition, error-analysis, statistics",
"url": null
} |
evolution
A giraffe's heart has special adaptations to enable it to pump blood up the animal's long neck to its head.
A giraffe's heart has the formidable task of pumping blood at high enough pressure so that it can flow up the giraffe's neck to the brain.
To accomplish this, a giraffe's heart is specially adapted.
It can weigh up to 10 kg (22 lb) and generates twice the blood pressure of other large mammals.
Having enough blood pressure to pump blood to the brain when the giraffe's neck is extended upward is one challenge, but when the animal lowers its head it risks injury due to excessive blood pressure.
To counter this, giraffes have a pressure-regulating system known as the rete mirabile which restricts the amount of blood that rushes towards the brain when the giraffe lowers its head. | {
"domain": "biology.stackexchange",
"id": 673,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "evolution",
"url": null
} |
cell-biology, meiosis, mitosis
Title: Mitosis versus Meiosis I: What's the difference? At the end of mitosis, one cell has divided into two diploid cells. But at the end of meiosis I, there are two haploid cells. How are the two processes different to produce these two types of cells? At the start, all the cells are 2n, diploid cells. By far the largest difference between meiosis I and mitosis is that mitosis results in genetically identical, diploid somatic cells. Meiosis, in its entirety, results in gametes with haploid genetic information, but the genetic information is not identical due to crossing-over events that happened during meiosis I. | {
"domain": "biology.stackexchange",
"id": 4857,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cell-biology, meiosis, mitosis",
"url": null
} |
reinforcement-learning, actor-critic
I will describe it to you as simply as possible so you see the link. A neural network, in a very broad sense, consists of nested functions. The function that contains all the others is the one at your output layer. In the case of stochastic policy gradients this is your Boltzmann (softmax) function. So your output layer will take all the previous layer's outputs and pass them through the Boltzmann function. In the case of a NN, the parametrization comes from all previous layers.
The blog link I sent you describes a very nice and simple example of a vanilla Policy Gradient with NN (REINFORCE algorithm). By using the code (plus your understanding on feedforward networks) you will see that the gradients are multiplied by the reward. It is a very good exercise! | {
"domain": "datascience.stackexchange",
"id": 2310,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, actor-critic",
"url": null
} |
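A minimal REINFORCE sketch in plain Python (a two-armed bandit with a softmax policy — an illustrative toy, not the blog's code) shows the key point above: the log-policy gradient is multiplied by the reward.

```python
import math, random

random.seed(0)
logits = [0.0, 0.0]      # policy parameters for a two-armed bandit
rewards = [0.0, 1.0]     # arm 1 always pays, arm 0 never does
lr = 0.1

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

for _ in range(500):
    p = softmax(logits)
    a = 0 if random.random() < p[0] else 1   # sample an action from the policy
    r = rewards[a]
    # REINFORCE: grad of log pi(a) w.r.t. the logits is (one_hot(a) - p);
    # the update is that gradient multiplied by the reward.
    for k in range(2):
        logits[k] += lr * r * ((1.0 if k == a else 0.0) - p[k])

assert softmax(logits)[1] > 0.9              # policy learned to prefer arm 1
```

Because the gradient is scaled by the reward, actions that paid off are made more probable, and zero-reward actions leave the parameters untouched.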
mfcc, pitch
Thanks in advance. MFCCs are related strictly to spectrum shape and, what's more, they rely on the mel scale. Additionally, each coefficient is a DCT value obtained from cosinusoids fitted to the log-energies. Thus interpretation of pitch from them would be troublesome for you.
Chord recognition is usually based on chromagram calculation or so called chroma features. Check out work and publications of guys from QMUL, i.e: Real-time Chord Recognition for live performance - in here you will find another references.
You might also find their software useful: Sonic Visualiser together with lots of so-called Vamp plugins. You can check out the code and even use the command-line tool to extract the features you want. | {
"domain": "dsp.stackexchange",
"id": 2031,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mfcc, pitch",
"url": null
} |
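As a rough sketch of what a chroma feature does (illustrative only — real chromagram implementations such as the QMUL Vamp plugins work on full spectra rather than hand-picked peaks): fold spectral energy into 12 pitch classes, so octave-separated occurrences of the same note land in the same bin.

```python
import math

def pitch_class(freq_hz, ref_a4=440.0):
    """Map a frequency to one of 12 chroma bins (0 = A, 4 = C#, 7 = E, ...)."""
    return round(12 * math.log2(freq_hz / ref_a4)) % 12

def chroma(peaks):
    """Fold detected spectral peaks (freq_hz, magnitude) into a 12-bin chroma vector."""
    bins = [0.0] * 12
    for f, mag in peaks:
        bins[pitch_class(f)] += mag
    return bins

# An A major triad (A, C#, E), with A present in two octaves:
peaks = [(220.0, 1.0), (277.18, 0.8), (329.63, 0.9), (440.0, 0.5)]
c = chroma(peaks)
assert c[0] == 1.5       # both A3 and A4 fold into the same chroma bin
```

Chord recognition then compares such 12-bin vectors against chord templates, which is why chroma (unlike MFCCs) is the natural feature for that task.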
keras, tensorflow, multiclass-classification, metric, tokenization
print('Summary:\n', model.summary())
return model The reason accuracy kept decreasing after every epoch was that the metrics function and loss function were not set to SparseCategoricalCrossentropy(). The output may have been NaN because the probability of a text matching a certain label was always extremely low. Using CategoricalCrossEntropy() as the loss function resulted in an error saying "Shapes (None, 1) and (None, 5) are incompatible."
The proper metric for this problem is SparseCategoricalCrossentropy() since the output of the final layer is a set of probabilities while the true label is just one integer. Using other metrics may also have caused NaN or very low accuracy. Using the Categorical Accuracy metric resulted in training accuracy below one percent. | {
"domain": "datascience.stackexchange",
"id": 11982,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "keras, tensorflow, multiclass-classification, metric, tokenization",
"url": null
} |
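The difference between the two losses discussed above can be sketched in plain Python: both compute the same cross-entropy, but the sparse variant takes a single integer label (shape `(None, 1)`) while the categorical variant needs a one-hot vector (shape `(None, num_classes)`) — which is exactly the `(None, 1)` vs `(None, 5)` shape mismatch in the error message.

```python
import math

def categorical_ce(one_hot, probs):
    """CategoricalCrossentropy-style loss: target is a one-hot vector."""
    return -sum(t * math.log(p) for t, p in zip(one_hot, probs))

def sparse_categorical_ce(label, probs):
    """SparseCategoricalCrossentropy-style loss: target is an integer index."""
    return -math.log(probs[label])

probs = [0.1, 0.6, 0.1, 0.1, 0.1]     # softmax output over 5 classes
assert math.isclose(categorical_ce([0, 1, 0, 0, 0], probs),
                    sparse_categorical_ce(1, probs))
```

Either loss yields the same value; the choice just has to match how the labels are encoded in the dataset.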
ros, ros-master-uri
Title: snapcraft with different ROS_MASTER_URI
Hello,
I am trying to set a different ROS_MASTER_URI for my snap.
My package is very simple. It only has one launch file, to launch the usb-cam package.
My problem is that I cannot change the MASTER_URI when running the snap.
Here is my snapcraft.yaml file
parts:
workspace:
plugin: catkin
rosdistro: lunar
catkin-packages: [main_launch]
apps:
usbcam:
command: export ROS_MASTER_URI=http://192.168.0.28:11311/ roslaunch main_launch usb_cam_launch.launch
plugs: [network, network-bind]
But I got this:
auto-starting new master
process[master]: started with pid [4297]
ROS_MASTER_URI=http://localhost:11311
EDITS:
My bashrc file already set MASTER_URI, but it does not affect snap:
export ROS_HOSTNAME=192.168.0.6
export ROS_MASTER_URI=http://192.168.0.28:11311/
source /opt/ros/lunar/setup.bash
source ~/Workspace/devel/setup.bash
Furthermore, how can I pass arguments to my snap command.
Thanks in advance. | {
"domain": "robotics.stackexchange",
"id": 29554,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-master-uri",
"url": null
} |
spectroscopy, spectra
So you may be asking why I'm talking about the Balmer series here. The reason I'm talking about it is because the Balmer series is a very easily observable series given how prevalent the lines are in spectra (which comes down to the prevalence of Hydrogen and the likelihood of this transition occurring) and how strong (i.e., how prominent the line is and thus how easy it is to measure) the Balmer lines can be. Because they're so abundant in many (but not all!) stars, and because they're so easy to observe, they make a good set of wavelengths to watch out for and track the metrics of across a wide variety of stars. | {
"domain": "astronomy.stackexchange",
"id": 5563,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "spectroscopy, spectra",
"url": null
} |
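For a concrete sense of where the Balmer lines fall, the Rydberg formula for hydrogen gives the wavelengths of transitions down to the n = 2 level (a standard textbook formula, not taken from the quoted answer):

```python
# Rydberg formula for hydrogen: 1/lambda = R_H * (1/2**2 - 1/n**2) for n >= 3,
# which generates the Balmer series (transitions ending on the n = 2 level).
R_H = 1.09678e7          # Rydberg constant for hydrogen, in 1/m

def balmer_wavelength_nm(n):
    inv_lambda = R_H * (1.0 / 4.0 - 1.0 / n**2)
    return 1e9 / inv_lambda          # convert metres to nanometres

for n, name in [(3, "H-alpha"), (4, "H-beta"), (5, "H-gamma")]:
    print(f"{name}: {balmer_wavelength_nm(n):.1f} nm")
```

All of these land in the visible band, which is part of why the Balmer lines are so convenient to observe from the ground.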
organic-chemistry, experimental-chemistry, surface-chemistry
The coupling group is what allows the silane to attach to a surface and is found bound to the silicon atom.
Typical couplers are hydrolyzable groups like alkoxy, acyloxy, or halogen.
These don't generally influence the hydrophobicity of the resulting surface directly, though they will affect how well the silane adheres to a given surface, the manner and speed with which it can be applied, and the deposition density.
Alkoxy and acyloxy groups can be applied from aqueous solutions and are generally less reactive, but halogen groups can be deposited from anhydrous solvents or the vapour phase quite quickly.
The linker is typically just a short alkyl chain between the silicon atom and the functional group to reduce steric interactions between adjacent silanes.
The functional group is what has the greatest influence on hydrophobicity.
Many different functional groups can be added, some even conferring hydrophilicity instead, if desired. | {
"domain": "chemistry.stackexchange",
"id": 4761,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, experimental-chemistry, surface-chemistry",
"url": null
} |
Solutions to Homework 3 Section 3. equations applied to a porous channel, and subject to a similarity transformation. Create an animation to visualize the solution for all time steps. 1D Wave Equation - the vibrating string The Vibrating String II. Answered: darova on 4 Jul 2019 Accepted Answer: darova. Once we derive Laplace’s equation in the polar coordinate system, it is easy to represent the heat and wave equations in the polar coordinate system. 3: Solution Using Separation of Variables 19. A method for the solution of a certain class of nonlinear partial differential equations by the method of separation of variables is presented. Follow 45 views (last 30 days) Youssef FAKHREDDINE on 4 Jul 2019. 1 "Blow-up, compactness and (partial) regularity in Partial Differential Equations" Lecture 1 Christophe Prange (CNRS Researcher) Method of Separation of Variables: Analytical Solutions of Partial Differential Equations Using the Method of Separation of Variables to solve 1st order PDEs | {
"domain": "chicweek.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9923043544146899,
"lm_q1q2_score": 0.8434301006175603,
"lm_q2_score": 0.8499711775577736,
"openwebmath_perplexity": 556.7745988916529,
"openwebmath_score": 0.808579683303833,
"tags": null,
"url": "http://zgrk.chicweek.it/solution-of-wave-equation-by-separation-of-variables-pdf.html"
} |
c#, asynchronous, networking
/// <summary>
/// Wraps a given content object and header information into a NetworkMessage instance, and enqueues it accordingly.
/// </summary>
public void EnqueueMessage<T>(T content, Encryption encryption)
{
if (State > ProtocolState.Operational) { return; }
byte[] serializedContent;
switch (encryption)
{
case Encryption.AES:
serializedContent = Serializer.SerializeContent(content, sharedPrivateKey);
break;
case Encryption.RSA:
serializedContent = Serializer.SerializeContent(content, partnerKeypair);
break;
default:
serializedContent = Serializer.SerializeContent(content);
break;
} | {
"domain": "codereview.stackexchange",
"id": 42688,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, asynchronous, networking",
"url": null
} |
lagrangian-formalism, conservation-laws, symmetry, action, noethers-theorem
Related: one, two, three, four, five.
The assumption in Noether's (first) theorem is an off-shell$^1$ quasisymmetry of the action $S$. It leads to an off-shell Noether identity
$$d_{\mu} J^{\mu} ~\equiv~ - \frac{\delta S}{\delta\phi^{\alpha}} Y_0^{\alpha}. \tag{A}$$
Here $J^{\mu}$ is the full Noether current, which is necessarily non-trivial; and $Y_0^{\alpha}$ is a (vertical) symmetry generator. The off-shell identity (A) in turn implies an on-shell continuity equation/conservation law.
An on-shell quasisymmetry of the action $S$ is a tautology. It does not have an associated continuity equation/conservation law. Even a strict symmetry of the action $S$ (or the Lagrangian density ${\cal L}$) on-shell does not have an associated continuity equation/conservation law.$^2$
OP is only considering so-called vertical transformations $\delta\phi$, i.e. $\delta x^{\mu}=0$, which carries certain simplifications in the form of the Noether current. | {
"domain": "physics.stackexchange",
"id": 52984,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lagrangian-formalism, conservation-laws, symmetry, action, noethers-theorem",
"url": null
} |
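To spell out the step from identity (A) to a conservation law: imposing the equations of motion makes the right-hand side vanish, sketched in LaTeX as

```latex
\frac{\delta S}{\delta\phi^{\alpha}} ~\approx~ 0
\qquad\Longrightarrow\qquad
d_{\mu} J^{\mu} ~\approx~ 0 ,
```

where $\approx$ denotes equality modulo the equations of motion (on-shell).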
acid-base, aqueous-solution, salt
Your teacher says that "before it is $\ce{NaOH}$, it was $\ce{NaO^-}$". No. First, $\ce{NaO^-}$ does not exist and cannot exist. Second, $\ce{NaOH}$ has not been created out of an ion. Sodium hydroxide is usually produced from sodium metal after reaction with water according to: $$\ce{2 Na + 2 H2O -> 2 NaOH + H2}$$ This equation may also be written in order to show the ions:
$$\ce{2 Na + 2 H2O -> 2 Na^+ + 2 OH^- + H2 }$$
Your teacher said that $\ce{NaOH}$ is the conjugate acid of the base $\ce{NaO^-}$. No ! it is impossible to find a structure for $\ce{NaO^-}$, as $\ce{Na}$ cannot be covalently bound to an oxygen atom. $\ce{Na}$ can only be bound to an oxygen atom by an ionic bond, and this bond will be immediately broken when dissolving it into water. Atoms included inside a polyatomic ion are always bound by covalent bonds, never by ionic bonds. | {
"domain": "chemistry.stackexchange",
"id": 14982,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "acid-base, aqueous-solution, salt",
"url": null
} |
python
You can do a great deal more with str.format, but I'll leave it up to you to figure that out, as there's too much for this one post.
Getting rid of the system calls
I find all of these lines of code to be absolutely horrid:
os.system('cls')
os.system('')
os.system('color a')
os.system('title Quadratic Equation Solver')
os.system('pause >nul')
Especially this one in particular:
os.system('pause >nul')
There are many things wrong here. For starters, the call to os.system has a long startup and "cooldown" period, i.e., it takes a fair amount of time. Secondly, all the commands you're running, like cls, color, or pause, are very Windows-specific. What happens if someone wants to run this on a Linux-based system? Most of these system calls are purely stylistic and unnecessary anyway.
On the matter of this line:
os.system('pause >nul') | {
"domain": "codereview.stackexchange",
"id": 15567,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
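A portable sketch of what the review recommends (function names are illustrative): pick the clear command per platform, and replace `pause` with Python's built-in `input`, which needs no shell at all.

```python
import os

def clear_command(os_name=os.name):
    """Console-clear command for the given platform ('nt' means Windows)."""
    return "cls" if os_name == "nt" else "clear"

def clear_screen():
    """Cross-platform replacement for os.system('cls')."""
    os.system(clear_command())

def pause():
    """Portable replacement for os.system('pause >nul')."""
    input("Press Enter to continue...")
```

This keeps the two genuinely useful behaviours (clearing the screen and waiting for the user) while dropping the Windows-only `color` and `title` calls entirely.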
# Check if $\ln(x), x > 0$ is uniformly continuous
Check if $\ln(x), x > 0$ is uniformly continuous
My only idea on solving this was to use the definition of uniform continuity. Namely, I need to show that for all $\epsilon >0$ there exists a $\delta >0$ such that for all $x_1, x_2$ in the domain of the function $|x_1-x_2| < \delta \Rightarrow |f(x_1)-f(x_2)|<\epsilon$
I have never before proved anything like this and I have no idea whether or not this function is uniformly continuous. Anyway, looking at the tremendous slope close to zero I am inclined to believe that this function is not uniformly continuous.
I want to show that $$(\exists \epsilon>0)(\forall \delta>0)(\exists x_1, x_2)\big(|x_1-x_2| < \delta \ \text{ and } \ |f(x_1)-f(x_2)| \ge \epsilon\big)$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9752018362008348,
"lm_q1q2_score": 0.8040167823770255,
"lm_q2_score": 0.8244619242200082,
"openwebmath_perplexity": 125.09915496519153,
"openwebmath_score": 0.9563086032867432,
"tags": null,
"url": "https://math.stackexchange.com/questions/2554314/check-if-lnx-x-0-is-uniformly-continuous"
} |
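A quick numerical check of the intuition above (a sketch, not a proof): fix $\epsilon = \ln 2$; for any $\delta$, the pair $(x, x/2)$ with small $x$ lies within $\delta$, yet $|\ln x - \ln(x/2)| = \ln 2$ always, because of the steep slope near zero.

```python
import math

# Witness for non-uniform continuity of ln on (0, infinity):
# eps = ln 2 works for every delta, using the pair (x, x/2) with x < delta.
eps = math.log(2)
for delta in (1.0, 0.1, 1e-6):
    x = min(delta, 1.0)                     # then |x - x/2| = x/2 < delta
    gap = abs(math.log(x) - math.log(x / 2))
    assert abs(x - x / 2) < delta           # the points are within delta
    assert abs(gap - eps) < 1e-9            # yet the values differ by ln 2
```

No matter how small $\delta$ is made, the function values stay $\ln 2$ apart, so no single $\delta$ can serve all points of the domain.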
## Solution 2
We can write $21!$ as its prime factorization: $$21!=2^{18}\times3^9\times5^4\times7^3\times11\times13\times17\times19$$
Each exponent of these prime numbers is one less than the number of factors at play here. This makes sense; $2^{18}$ is going to have $19$ factors: $2^0, 2^1, 2^2,...\text{ }2^{18}$, and the other exponents will behave identically.
In other words, $21!$ has $(18+1)(9+1)(4+1)(3+1)(1+1)(1+1)(1+1)(1+1)$ factors.
We are looking for the probability that a randomly chosen factor of $21!$ will be odd, i.e., a number that does not have $2$ as a factor.
From our earlier observation, the only factors of $21!$ that are even are ones with at least one multiplier of $2$, so our probability of finding an odd factor becomes the following: $$P(\text{odd})=\dfrac{\text{number of odd factors}}{\text{number of all factors}}=\dfrac{(9+1)(4+1)(3+1)(1+1)(1+1)(1+1)(1+1)}{(18+1)(9+1)(4+1)(3+1)(1+1)(1+1)(1+1)(1+1)}=\dfrac{1}{(18+1)}=\boxed{\dfrac{1}{19}}$$ | {
"domain": "artofproblemsolving.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.991152642826358,
"lm_q1q2_score": 0.8056755903418841,
"lm_q2_score": 0.8128673178375735,
"openwebmath_perplexity": 156.66332648525005,
"openwebmath_score": 0.8930972814559937,
"tags": null,
"url": "https://artofproblemsolving.com/wiki/index.php?title=2017_AMC_12B_Problems/Problem_16&diff=cur&oldid=83885"
} |
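The $\frac{1}{19}$ answer is easy to verify computationally (a sketch using Legendre's formula for the exponent of a prime in $n!$):

```python
from fractions import Fraction

def legendre(n, p):
    """Exponent of prime p in n! (Legendre's formula)."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

primes = [2, 3, 5, 7, 11, 13, 17, 19]
exps = {p: legendre(21, p) for p in primes}

total = 1                       # total number of divisors of 21!
for e in exps.values():
    total *= e + 1

odd = 1                         # divisors using only the odd primes
for p, e in exps.items():
    if p != 2:
        odd *= e + 1

assert exps[2] == 18
assert Fraction(odd, total) == Fraction(1, 19)
```

Since the odd-prime part of the divisor count cancels, only the $18+1$ choices for the power of $2$ matter, which is exactly the $\frac{1}{19}$ in the solution.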
general-relativity, gravity
It is illustrating how thinking in terms of potential, as we do in Newtonian physics, is equivalent mathematically in thinking in terms of space distortion. | {
"domain": "physics.stackexchange",
"id": 28549,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, gravity",
"url": null
} |
parsing, perl
use strict;
use warnings;
my @srcext = ("*.cpp", "*.c", "*.h", "*.hpp", "*.ino", "*.cxx", "*.cc");
my $commLines = 0; # Lines that start with a comment symbol
my $bothLines = 0; # Lines that have a comment and code
my $newLines = 0; # Lines that are just whitespace
my $codeLines = 0; # Lines of code - lines that don't fit in another space
my $totLines = 0; # Total lines of code
my $srcBytes = 0; # Total Number of bytes in src code
my $fCount = 0; # Number of files read
my $files = '';
for my $ext (@srcext) {
    $files .= `find . -name "$ext"`;
}
my @inputs = split("\n", $files);
for my $input (@inputs) {
    my $prev = $totLines;
    countLines($input);
    printf("Read %d ln. In %s\n", $totLines - $prev, $input);
    $fCount++;
}
printResults(); | {
"domain": "codereview.stackexchange",
"id": 14287,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "parsing, perl",
"url": null
} |
homework-and-exercises, newtonian-mechanics, classical-mechanics, spring, elasticity
Title: Regarding the Hooke's law and how it can be described using stress and strain Can someone kindly explain to me the how the relationship marked as (2) below is obtained from the relationship marked as (1).
In our Physics class, our lecture wrote down the following relationship,
According to the Hooke's law,
'F' is proportional to 'e' -----------(1)
From this we can write that,
'F/A' is proportional to 'e/l' ----------(2)
Hence we can write,
F/A = E*(e/l) ----------(3) | {
"domain": "physics.stackexchange",
"id": 77004,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, spring, elasticity",
"url": null
} |
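A small numeric sketch of relation (3) (the material values are assumed for illustration, not from the question): given a force over an area, the stress $F/A$ and an assumed Young's modulus $E$ determine the strain $e/l$ and hence the extension.

```python
# Illustrative values: a steel rod in tension (E ~ 200 GPa is an assumption).
E = 200e9          # Young's modulus, Pa
A = 1e-4           # cross-sectional area, m^2
l = 2.0            # original length, m
F = 10_000.0       # applied force, N

stress = F / A                 # F/A
strain = stress / E            # from (3): F/A = E*(e/l)  =>  e/l = (F/A)/E
e = strain * l                 # extension

print(f"stress = {stress:.3e} Pa, strain = {strain:.3e}, "
      f"extension = {e * 1000:.3f} mm")
```

Dividing force by area and extension by length is what turns the sample-specific law (1) into the material property (3), since $E$ no longer depends on the specimen's size.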
beginner, object-oriented, multithreading, vba, asynchronous
BusyThread = taskThread.Name
'iterate open tasks
openTaskCount = openTaskCount + 1
'execute task
If passesArguments Then
'pop appropriate item from queue
Set taskArguments = iterableQueue.Dequeue
taskThread.Execute taskArguments
Else
taskThread.Execute
End If
Case InstructionType.mltQuit
'quit then do nothing
Me.Quit
instructionVal.instructionBody = mltDoNothing
Case InstructionType.mltDoNothing
'do nothing
Case Else
Err.Raise 5 'invalid argument
End Select
'call self until no instruction
If instructionVal.instructionBody <> mltDoNothing Then
Debug.Assert loopcount < maxThreads * 3 + 5 'max loop should be open all threads then run all tasks + a little
doInstructions loopcount:=loopcount + 1 'watch for infinite loop
End If
End Sub | {
"domain": "codereview.stackexchange",
"id": 29951,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, object-oriented, multithreading, vba, asynchronous",
"url": null
} |
c#, performance, iteration
Title: Generate Braille characters from a dictionary Problem
I would like to generate Braille characters from a dictionary, as below, which stores the n-th active bits:
private static readonly int BrailleStringSize = 6; | {
"domain": "codereview.stackexchange",
"id": 25525,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, iteration",
"url": null
} |
refrigeration
I believe the purpose of the fan is to circulate the cold air, and the coils are what generate the cold air. I don't believe the ice build-up is normal. From an engineering standpoint, how is this supposed to work? What would cause the ice build-up? What design features are present in a refrigerator to prevent ice build-up? Excessive ice build-up can occur when the refrigeration system operates continuously for prolonged periods - hours. This can make the coils very cold and cause ice to form on them. By operating continuously, the cooling process never stops and ice just accumulates. The system needs to stop so the ice can melt and the resultant water evaporate.
This may be due to a poorly designed refrigeration system or cold air continuously leaking from inside the refrigerator, so the inside of the refrigerator never reaches a stable cold temperature. | {
"domain": "engineering.stackexchange",
"id": 1771,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "refrigeration",
"url": null
} |
c++, beginner, vectors
// NRVO, no copying, no performance degradation
return result;
}
Also, double is generally used instead of float in C++ unless you have a good reason.
void generate_orthogonal (float *vector, float orthogonal_vector[], int dimension){
float last_entry;
float dot_product = 0;
for (int i = 0; i < dimension -1; i++){
orthogonal_vector[i] = randint(-MAXRANGE,MAXRANGE);
}
for (int i = 0; i < dimension -1; i++){
dot_product = dot_product + (vector[i] * orthogonal_vector[i]);
}
last_entry = -(dot_product/vector[dimension-1]);
orthogonal_vector[dimension-1] = last_entry;
}
Some problems, in addition to the aforementioned ones: | {
"domain": "codereview.stackexchange",
"id": 36010,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, vectors",
"url": null
} |
newtonian-gravity, planets, tidal-effect
Title: Do tidal forces affect the poles? In many tidal forces illustrations it seems like the poles (with respect to the satellite's rotation) are drawn towards the center of the body.
I can't understand why that is.
Is it because the body is assumed to be elastic to some degree? Would a small object placed at pole but not connected to it feel this increase in gravity?
By Krishnavedala - Own work, CC BY-SA 3.0 Keep in mind these two things:
"domain": "physics.stackexchange",
"id": 41695,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-gravity, planets, tidal-effect",
"url": null
} |
matlab, finite-impulse-response, scipy, parks-mclellan
Title: Remez function equivalency between Matlab and Scipy I've read the similar question Find the equivalent of this python remez specs in C++ remez or Matlab firpm, which describes a different problem.
In Matlab, I have following remez and firpm function calls:
remez(10,[0 .1 1-.1 1],[1 1 0 0])
firpm(10,[0 .1 1-.1 1],[1 1 0 0])
and in Scipy, I can achieve the same filter by calling:
signal.remez(11, bands=[0, .1, 1.0 - .1, 1], desired=[1, 0], fs=2)
All three function calls return the same coefficients:
ans =
0.0066 -0.0000 -0.0510 0.0000 0.2944 0.5000 0.2944 0.0000 -0.0510 -0.0000 0.0066
The problem is: when I attempt to sharpen the corner frequency of the half-band filter by changing Δf = .1 to Δf = .01, the output from Scipy is an array of NaN.
remez(10,[0 .01 1-.01 1],[1 1 0 0])
firpm(10,[0 .01 1-.01 1],[1 1 0 0])
both Matlab calls return the same filter coefficients:
ans = | {
"domain": "dsp.stackexchange",
"id": 10935,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, finite-impulse-response, scipy, parks-mclellan",
"url": null
} |
reinforcement-learning, keras, actor-critic-methods, advantage-actor-critic
A2C Loss Function
A very crucial part of A2C implementation that I missed is the custom loss function that takes the advantage into account. The loss function multiplies the advantage by the negative log of the current probability of the action that was selected.
The trick is that if the advantage is negative, the loss function will switch sign, so the gradients will be applied to the opposite direction.
In one dimension it's easier to understand. Let's say my target prediction is 1 and my actual prediction is 0.6. A simple loss could be defined as target - prediction, in this case 0.4, so future predictions will be closer to one. If my prediction was 1.4, then the loss would be -0.4. A negative loss would mean predicting a lower result in the future, and a positive loss would mean predicting a higher result in the future.
If the sign of the loss function is switched, the prediction will actually move away from 1. | {
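The sign-switching behaviour described above can be sketched in a few lines of plain Python. This is a minimal illustration of the idea, not the actual Keras custom loss from the post; the `eps` clamp is my own addition to avoid `log(0)`:

```python
import math

def policy_loss(selected_prob, advantage, eps=1e-8):
    """Advantage-weighted negative log-probability of the selected action.

    With a positive advantage, minimizing this loss pushes the
    probability of the selected action up; a negative advantage
    flips the sign, so the gradient pushes the probability down.
    """
    return -math.log(max(selected_prob, eps)) * advantage
```

For instance, with advantage 1 the loss shrinks as the selected action's probability approaches 1, and with advantage -1 the loss becomes negative, reversing the update direction.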
"domain": "ai.stackexchange",
"id": 1803,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, keras, actor-critic-methods, advantage-actor-critic",
"url": null
} |
neural-network
And that is what your training data should look like
If it is still too complicated, I recommend you play around with this front-end neural network library for the browser: Neataptic. It is very easy to fiddle around with. Example
"domain": "datascience.stackexchange",
"id": 1743,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-network",
"url": null
} |
electric-fields, dimensional-analysis
Title: Can I use the concept of dimensional analysis in problems of vector analysis? For example if I have Gauss' law: $\nabla \cdot \mathbf{D} = \rho_v$ how can I get one side from the other dimensionally?
Same question goes for rotation and generally for operators.
Please explain downvotes if there's any! I'm not the kind to downvote back. Firstly, recall the limit definition of a derivative, namely
$$\frac{df}{dt} = \lim_{\delta t \to 0}\frac{f(t+\delta t)-f(t)}{\delta t}$$
if taken with respect to time. From this, it is immediately clear that $\frac{df}{dt}$ has dimensions of $f$ over dimensions of time, so $\frac{d}{dt}$ can be thought of as having dimensions $[T]^{-1}$. | {
"domain": "physics.stackexchange",
"id": 38763,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electric-fields, dimensional-analysis",
"url": null
} |
electromagnetism, optics, visible-light, geometric-optics, polarization
Title: what polarisation is assumed when we talk about anti-reflection coating? When we talk about anti-reflection coating, usually what polarisation ('s' or 'p') are we referring to? Is there a big difference considering that the Fresnel coefficients are different for the two cases?
And if I have no prior knowledge of the property of the light source, and I just assume it is unpolarized, then how do I know which polarisation I should use, or what relative proportion of the two?
Thanks Usually this is for normal incidence, in which case polarization does not matter. If you need oblique incidence you can't get a perfect match unless you know the polarization.
"domain": "physics.stackexchange",
"id": 42299,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, optics, visible-light, geometric-optics, polarization",
"url": null
} |
digital-filters
Title: Expression for stopband deviation of a digital filter Beginning from the basic definition of a decibel, which expresses the ratio of two amplitudes as $20\log_{10}(A_{2}/A_{1})$, how do we arrive at the expression $-20\log_{10}(\delta_{s})$, measured in dB, for the stopband deviation of a digital filter? The stop band ripple, $\delta_s$, is measured with respect to the pass band amplitude (usually 1).
That means:
$$
20 \log_{10}\left (\frac{\delta_s }{ 1 } \right) = 20 \log_{10}\left (\delta_s \right)
$$
but this is the stop band gain and most people think about the stop band in terms of attenuation. So it's more usual to have:
$$
-20 \log_{10}\left (\delta_s \right)
$$
Because, for a good filter, $\delta_s \ll 1$, $\log_{10}(\delta_s)$ is negative, so the minus sign just makes the number positive (and an attenuation rather than a gain). | {
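As a quick numeric illustration (my own example, with a hypothetical ripple value):

```python
import math

def stopband_attenuation_db(delta_s):
    """Stop band attenuation in dB for a linear ripple delta_s < 1.

    The leading minus sign turns the negative stop band gain in dB
    into a positive attenuation figure, as explained above.
    """
    return -20.0 * math.log10(delta_s)

print(stopband_attenuation_db(0.001))  # a ripple of 0.001 gives 60 dB of attenuation
```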
"domain": "dsp.stackexchange",
"id": 12454,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "digital-filters",
"url": null
} |
c, parsing, functional-programming
static parser
ebnf_grammar( void ){
if( !regex_parser ) regex_parser = regex_grammar();
parser spaces = many( anyof( " \t\n" ) );
parser defining_symbol = thenx( chr( '=' ), spaces );
parser choice_symbol = thenx( chr( '|' ), spaces );
parser terminating_symbol = thenx( chr( ';' ), spaces );
parser name = some( either( anyof( "-_" ), alpha() ) );
parser identifier = thenx( name, spaces );
parser terminal =
bind(
thenx( either( thenx( xthen( chr( '"'), many( noneof("\"") ) ), chr( '"') ),
thenx( xthen( chr('\''), many( noneof( "'") ) ), chr('\'') ) ),
spaces ),
Operator( NIL_, make_matcher ) );
parser symb = bind( identifier, Operator( NIL_, symbolize ) );
parser nonterminal = symb;
parser expr = forward();
{
parser factor = ANY( terminal,
nonterminal,
bind( xthen( then( chr( '[' ), spaces ),
thenx( expr, | {
"domain": "codereview.stackexchange",
"id": 43456,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, parsing, functional-programming",
"url": null
} |
homework-and-exercises, electromagnetism, electricity, electric-fields, gauss-law
Title: Electric flux through a cylinder's side
We have a cylinder with radius $R$, as shown above, and a point charge $Q$. We want to calculate the electric flux passing through the cylinder's side surface, excluding its caps.
Is there a way to do this with Gauss's formula? Or should I double integrate? The total flux out of the cylinder must be zero. You can use Gauss's law (reduced with solid angles) to find the flux in and out through spherical caps at each end of the cylinder. The difference must go out through the sides. | {
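The solid-angle bookkeeping the answer describes can be written out; a sketch, assuming the point charge sits outside the cylinder on its axis:

```latex
% Flux through a spherical cap that subtends solid angle \Omega at the
% charge (Gauss's law reduced with solid angles), with \theta the
% half-angle of the cone from the charge to the cap's rim:
\Phi_{\text{cap}} = \frac{Q}{\varepsilon_0}\,\frac{\Omega}{4\pi},
\qquad \Omega = 2\pi\,(1-\cos\theta).
% Since the total flux through the closed surface is zero, the flux
% through the side is the difference of the two cap fluxes:
\Phi_{\text{side}} = \Phi_{\text{cap,near}} - \Phi_{\text{cap,far}}.
```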
"domain": "physics.stackexchange",
"id": 77306,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, electromagnetism, electricity, electric-fields, gauss-law",
"url": null
} |
fl.formal-languages, automata-theory, omega-language
Locally testable $\omega$-languages (the notion of locally testable languages was originally introduced for finite words and later extended to infinite words [2]). The language $L(F)$ is an example.
Factors of biinfinite words [1].
First order logic of one successor [3].
Sofic shifts and subshifts of finite type in symbolic dynamics [4].
Question 4. A few references:
[1] D. Beauquier and M. Nivat, About rational sets of factors of a bi-infinite word, LNCS 194, (1985) 33-42.
[2] J.P. Pécuchet Étude syntaxique des parties reconnaissables de mots infinis, Theoret. Comput. Sci. 56 (1988) 231-248.
[3] The expressive power of existential first order sentences of Büchi's sequential calculus.
[4] M.P. Béal and D. Perrin, Symbolic Dynamics and Finite Automata. | {
"domain": "cstheory.stackexchange",
"id": 2252,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fl.formal-languages, automata-theory, omega-language",
"url": null
} |
Summary of the convergence tests from Chapter 11 (ht Libby Runte). Review problems from your textbook: Integration plus L'Hospital's Rule, page 579, #1-15, 33-37, 73-87 odds. Comparison Test for Integrals. Theorem: If $f$ and $g$ are continuous functions with $f(x) \ge g(x) \ge 0$ for $x \ge a$, then (a) if $\int_a^\infty f(x)\,dx$ is convergent, then $\int_a^\infty g(x)\,dx$ is convergent. Example: $\int_4^\infty \frac{e^{-y}}{y}\,dy$. Hence the comparison test implies convergence. The limit comparison test gives us another strategy for situations like Example 3. The last convergence tool we have is the comparison test.
"domain": "pacianca.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918516137418,
"lm_q1q2_score": 0.8623082847681207,
"lm_q2_score": 0.8723473879530492,
"openwebmath_perplexity": 642.0331441711976,
"openwebmath_score": 0.9238064289093018,
"tags": null,
"url": "http://pacianca.it/improper-integrals-comparison-test.html"
} |
perl
substr("abcdef", 0, 3) . substr("abcdef", 3+1);
substr takes a start index and a length. Care has to be taken that the substr starting index is a legal index, which is relevant when we are trying to remove the last character.
sub remove_letters_by_index {
my ($string, $index) = @_;
return substr($string, 0, $index) if $index + 1 >= length $string;
return substr($string, 0, $index) . substr($string, $index + 1);
}
We could also just ask Perl to delete a specific character in a string:
sub remove_letters_by_index {
my ($string, $index) = @_;
substr($string, $index, 1) = '';
return $string;
}
With arrays, two similar solutions are possible. We can use the splice function to remove elements from a list:
sub remove_letters_by_index {
my ($string, $index) = @_;
my @letters = split //, $string;
splice @letters, $index, 1;
return join '', @letters;
} | {
"domain": "codereview.stackexchange",
"id": 14889,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "perl",
"url": null
} |
telescope, newtonian-telescope
Instead, I'm going to describe your main options, and let you choose. Be aware that you'll make the choice while still not knowing much about optics. So, in a sense, it will be just the beginning of a learning journey, and you'll have to keep learning and trying things out. Hope you like that sort of thing, because it's kind of mandatory in this hobby.
Also, there are two kinds of people in this field: those who enjoy using the instruments, and those who enjoy making the instruments. I'm 25% of the former, 75% of the latter kind (I make telescopes and mirrors, and I do some optics design). So that's the perspective you'll get. | {
"domain": "astronomy.stackexchange",
"id": 2733,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "telescope, newtonian-telescope",
"url": null
} |
Fundamental theorem of calculus. When is the Net Change Theorem used? Iterated integral, parallel transport, holonomy. This worksheet can work as a starter before introducing the integration topic. In calculus, integration by substitution, also known as u-substitution or change of variables, is a method for evaluating integrals in which a new variable stands in for some expression. The second fundamental theorem of calculus holds for a continuous function on an open interval and any point in it, and states that the function defined by the integral (antiderivative) at each point in the interval has the original function as its derivative. Fubini's theorem allows the order of integration to be changed in iterated integrals. This video covers the method of complex integration and proves Cauchy's Theorem when the complex function has a continuous derivative.
"domain": "novaimperia.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787830929848,
"lm_q1q2_score": 0.8410297302997354,
"lm_q2_score": 0.8519528019683105,
"openwebmath_perplexity": 561.1808117201255,
"openwebmath_score": 0.8674394488334656,
"tags": null,
"url": "http://nvmu.novaimperia.it/integration-theorems.html"
} |
c++, c++11, linked-list
iterator begin() { return iterator{mFirst.get()}; }
iterator end() { return iterator{mLast}; }
This will additionally allow everybody to use the standard library algorithms with your class. You will want a const_iterator as well, to allow for iteration over a const List<T>.
Use References
Right now add() takes a T and first() returns a T. For something like int, the former is OK, but the latter is still questionable. What if you want to modify the list elements? That seems like a useful operation. To that end, you should add:
void add(T const& );
void add(T&& );
T& first();
T const& first() const;
You may also want to add an emplace, to construct values in-place:
template <typename... Args>
void emplace(Args&&...); | {
"domain": "codereview.stackexchange",
"id": 16486,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, linked-list",
"url": null
} |
php, optimization, object-oriented, controller
// set custom error response
$rules['firstname'][0]->setError('First name too long');
$rules['firstname'][1]->setError('First name, only alpha letters allowed');
$rules['lastname'] [0]->setError('Last name too long');
$rules['lastname'] [1]->setError('Last name, only alpha letters allowed');
$rules['email'] [0]->setError('Email invalid');
$rules['cfmEmail'] [0]->setError('Email invalid');
$rules['cfmEmail'] [1]->setError('Email doesn\'t match');
$rules['password'] [0]->setError('Password must be at least 6 characters');
// instantiate and initialize v, with custom error response for required fields
$v = new Validator(); | {
"domain": "codereview.stackexchange",
"id": 8492,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, optimization, object-oriented, controller",
"url": null
} |
python, combinatorics
By convention, global constants should be named in ALL_CAPS
manager = Manager()
word_list = manager.list()
def new_combination(i, dictionary):
comb_list = itertools.permutations(letter_list,i)
for item in comb_list:
item = ''.join(item)
if(item in dictionary):
You don't need the parens. Also, dictionary in your code is a list. Checking whether an item is in a list is rather slow, use a set for fast checking.
word_list.append(item)
Having a bunch of different processes constantly adding onto a single list, performance will decrease. You are probably better off adding onto a separate list and then using extend to combine them at the end.
return
There is no point in an empty return at the end of a function
def make_dictionary():
I'd call this read_dictionary, since you aren't really making the dictionary
my_list = [] | {
"domain": "codereview.stackexchange",
"id": 2241,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, combinatorics",
"url": null
} |
sql, mysql, finance
create table partypayment (
id INT(10) PRIMARY KEY
,partyId INT(4) references partymaster(id)
,partyBillId INT(10) references partybill(id)
,cashPaid INT(10)
,chequePaid INT(10)
,chequeBankName varchar(50)
,chequeNumber INT(10)
,chequeDate date
,paymentDate date
); Do you really need the subquery?
SELECT
bill.id as bill_id,
master.id as party_id,
master.name as party_name,
`amountExcVat`,
bill.vat,
amountExcVat + bill.vat as 'amtIncVat',
`purchasedDate`,
bill.status,
SUM(payment.cashPaid) + SUM(payment.chequePaid) as 'Amount Paid(cash+chq)'
FROM `partybill` as bill, `partymaster` as master, partypayment as payment
WHERE
party_id = partyId AND
bill_id = partyBillId AND
bill.partyId = master.id AND
bill.partyId = payment.id AND
UPPER(MONTHNAME(`purchasedDate`)) = 'OCTOBER' AND
YEAR(`purchasedDate`) = '2013'
GROUP BY partyId, partyBillId
"domain": "codereview.stackexchange",
"id": 4864,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "sql, mysql, finance",
"url": null
} |
c
for (i3 = (elemc2-elemc1-1); i3 < elemc2; i3++){
outputline[i3] = buf[i4];
i4++;
}
}//default uniq
else{
group[i5]++;//increment when two strings are the same
} | {
"domain": "codereview.stackexchange",
"id": 42173,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c",
"url": null
} |
vba, excel, powerpoint
Title: Update PowerPoint graphs from Excel file ' 1. Run from PPT and open an Excel file
' 2. For each slide look for Charts -> Charts with titles -> Charts whose title include "iq_" followed by a number
' 3. Find those words and numbers in the opened Excel file after splitting and re-formatting the string.
' 4. Grab values from the column and store them in smallArray; repeat for all "iq_"s on the chart
' 5. Activate the PowerPoint chart's "Edit Data", which pulls up a non-linked Excel worksheet.
' 6. Paste the table into "Edit Data" in PowerPoint.
' 7. Format chart numbers and color code/3D bezel the chart bars
' 8. Repeat for every slide
"domain": "codereview.stackexchange",
"id": 27948,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vba, excel, powerpoint",
"url": null
} |
pcl, transform
printf ("Cloud: width = %d, height = %d\n", pcl_in->width, pcl_in->height);
BOOST_FOREACH (const pcl::PointXYZ& pt, pcl_in->points)
printf ("\t(%f, %f, %f)\n", pt.x, pt.y, pt.z);
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "sub_pcl");
ros::NodeHandle nh;
ros::Subscriber sub = nh.subscribe<PointCloud>("points2", 1, callback);
tf_pub = nh.advertise<PointCloud> ("tf_points2", 1);
tf_listener = new tf::TransformListener();
ros::spin();
//delete tf_listener;
//return 0;
}
This results in the following error.
[ERROR] [1381659561.691872448]: Frame id /pcl does not exist! Frames (3): Frame /world_marker exists with parent /world.
Frame /world exists with parent NO_PARENT. | {
"domain": "robotics.stackexchange",
"id": 15844,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pcl, transform",
"url": null
} |
python, classes, singleton
def _dispatch_request(self, signal: str) -> Dict[str, Union[str, Dict[str, Any]]]:
    match signal:
        case "OPEN_LONG":
            request_detail = self.bybit_client.open_market_long()
        case "OPEN_SHORT":
            request_detail = self.bybit_client.open_market_short()
        case "CLOSE_LONG":
            request_detail = self.bybit_client.close_market_long()
        case "CLOSE_SHORT":
            request_detail = self.bybit_client.close_market_short()
        case _:
            raise ValueError(f"Unknown signal: {signal}")
    return request_detail
"domain": "codereview.stackexchange",
"id": 43931,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, classes, singleton",
"url": null
} |
clustering
How can I make fviz_cluster function working?
Update
Filed a bug: https://github.com/kassambara/factoextra/issues/62 I actually found an answer just accidentally. It is very unfortunate that factoextra documentation does not explicitly say that data parameter should be of data.frame type. In my case rtsne$Y was a matrix. The error that factoextra is giving is very uninformative.
Here is the code that ultimately worked:
fviz_cluster(db,
-----> data = as.data.frame(rtsne$Y),
stand = FALSE,
ellipse = FALSE, show.clust.cent = FALSE,
geom = "point",palette = "jco", ggtheme = theme_classic()) | {
"domain": "bioinformatics.stackexchange",
"id": 405,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "clustering",
"url": null
} |
c#, winforms
#region Component Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.SuspendLayout();
//
// EnhancedListView
//
this.ColumnClick += new System.Windows.Forms.ColumnClickEventHandler(this.EnhancedListView_ColumnClick);
this.ResumeLayout(false);
}
#endregion
}
}
namespace ChangeThisNamespace
{
public partial class EnhancedListView : ListView
{
enum lvEvents : uint
{
ItemAdded = 0x104D,
ItemRemoved = 0x1008
}
private ListViewColumnSorter lvwColumnSorter; | {
"domain": "codereview.stackexchange",
"id": 6280,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, winforms",
"url": null
} |
• Well, the complement is a subspace, so you get an element that is orthogonal to every element of U. – zagortenay333 Oct 15 '18 at 15:25
• Not sure I'm following you. – zagortenay333 Oct 15 '18 at 15:40
• Ok, how is the orthogonal complement defined there? I believe that you would end up with the orthogonal complement of the linear span. So, as you write, in your case it would be a line and not a plane. – Christian Oct 15 '18 at 15:58
• The orthogonal complement is defined as the set of all vectors in V that are orthogonal to all vectors of the subset U. Again, the issue is that, as far as I understand, if a line not going through the origin is interpreted as the set of position vectors (from origin to point on line), then no plane through the origin can be orthogonal to any vector of the line! So Axler must interpret the line as position vectors with respect to the offset origin point, but that's kinda weird. – zagortenay333 Oct 15 '18 at 16:25 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9458012732322215,
"lm_q1q2_score": 0.8000934361242855,
"lm_q2_score": 0.845942439250491,
"openwebmath_perplexity": 152.67937317440197,
"openwebmath_score": 0.684394896030426,
"tags": null,
"url": "https://math.stackexchange.com/questions/2956611/orthogonal-complement-of-a-subset-or-subspace"
} |
kinematics, friction, torque
This contact can only supply forces of equal and opposite magnitudes on the two bodies. No torque is transferred through the contact, because the contact normal is the line of action of the force. Typically the normal force $N$ is calculated based on the fact the contact cannot interpenetrate, and thus the speed of the contact point on each body must match along the contact normal.
In the tangent direction relative speed is allowed (slipping), and this may or may not result in sliding friction. Sliding friction opposes motion and has magnitude $F = \mu N$.
A special case exists when the friction coefficient is high enough that the body jams in place. This can happen if there is a friction force with $|F| < \mu N$ that results in no slip between the bodies.
"domain": "physics.stackexchange",
"id": 72565,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "kinematics, friction, torque",
"url": null
} |
java, benchmarking
public static void main(String[] args) {
new BenchmarkRunner(new SimpleSortingDemo()).run();
}
@MeasureTime
public void bubbleSort() {
BubbleSort.sort(new ArrayList<Integer>(shuffledList));
}
@MeasureTime
public void insertionSort() {
InsertionSort.sort(new ArrayList<Integer>(shuffledList));
}
}
If you want to test drive it in your own projects,
the GitHub project page explains nicely the steps to get started.
I'd like a review in terms of everything, but here are some points you might want to pick on:
What would you do differently?
Is there a way to make the library easier to use?
Is the implementation of BenchmarkRunner clear and natural?
Is it adequate the way it measures the execution time? | {
"domain": "codereview.stackexchange",
"id": 11223,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, benchmarking",
"url": null
} |
infinite-impulse-response, scipy, embedded-systems
CMSIS requires an array whose length is a multiple of five. Each group of five values holds the coefficients b0, b1, b2, a1, and a2 for one filter stage: "Coefficients b0, b1 and b2 multiply the input signal x[n] and are referred to as the feedforward coefficients. Coefficients a1 and a2 multiply the output signal y[n] and are referred to as the feedback coefficients. Pay careful attention to the sign of the feedback coefficients. Some design tools use the difference equation"
Scipy's formats seem incompatible: the numerator/denominator form uses "b" and "a" terminology, but returns two arrays: a numerator array of length 6 and a denominator array of length 6.
SOS format also returns arrays of length 6.
This is in contrast to FIR, where there's a 1-to-1 mapping, i.e. both use an array of coefficients corresponding to a convolution kernel. IIR seems more diverse by comparison.
scipy.signal.iirdesign | {
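For reference, a small conversion sketch (my own, based on the difference-equation convention quoted above): scipy's SOS rows are [b0, b1, b2, a0, a1, a2] with a0 normalized to 1, while the CMSIS difference equation adds the feedback terms, so a1 and a2 must be negated:

```python
def sos_to_cmsis(sos_rows):
    """Convert scipy-style SOS rows [b0, b1, b2, a0, a1, a2] (a0 == 1)
    into a flat CMSIS-style list [b0, b1, b2, -a1, -a2] per stage.

    CMSIS uses y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
                      + a1*y[n-1] + a2*y[n-2],
    i.e. the feedback coefficients appear with the opposite sign
    compared to scipy's convention, hence the negation.
    """
    coeffs = []
    for b0, b1, b2, a0, a1, a2 in sos_rows:
        if abs(a0 - 1.0) > 1e-12:
            raise ValueError("expected normalized a0 == 1")
        coeffs.extend([b0, b1, b2, -a1, -a2])
    return coeffs

# One hypothetical second-order section:
print(sos_to_cmsis([[0.1, 0.2, 0.1, 1.0, -1.5, 0.6]]))
# -> [0.1, 0.2, 0.1, 1.5, -0.6]
```

The hypothetical coefficients here are for illustration only; in practice the rows would come from something like `scipy.signal.iirdesign(..., output='sos')`.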
"domain": "dsp.stackexchange",
"id": 10652,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "infinite-impulse-response, scipy, embedded-systems",
"url": null
} |
matlab, filter-design, lowpass-filter
Panel A is one of the data segments plotted with its filtered signal. It looks good. This is what I expect and Panel B is its signed residual which is the filtered signal minus the original signal. I see there is no DC offset and no linear trend so I am happy.
Panel C is another data segment plotted with its filtered result and there is an obvious offset between the two. The filtered signal "doesn't run through" the signal like how I think it should. Panel D is its residual and we confirm that the mean is not zero. And sometimes (not shown here) I see a linear trend as well in the residual. | {
"domain": "dsp.stackexchange",
"id": 896,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, filter-design, lowpass-filter",
"url": null
} |
c++, c++11, thread-safety, reinventing-the-wheel
template <typename ReturnType>
void WorkQueue::execute_and_set_data(const DataPointer<ReturnType> &data) {
data->first.set_value(data->second());
}
These functions use some template types:
template <typename ReturnType>
using PromiseFunctionPair =
std::pair<std::promise<ReturnType>, std::function<ReturnType()>>;
template <typename ReturnType>
using DataPointer = std::shared_ptr<PromiseFunctionPair<ReturnType>>;
And they replace the call in the try-catch block. Because of the nontrivial dependency between ReturnType and these types, they cannot be used for automatic template parameter deduction:
try {
execute_and_set_data<ReturnType>(data);
}
However, I would recommend replacing PromiseFunctionPair with a templated struct that has better names for its members than first and second; this would also allow automatic template parameter deduction.
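A minimal sketch of what that replacement could look like (the struct and member names are illustrative, not taken from the original code):

```cpp
#include <functional>
#include <future>
#include <memory>

// A small struct instead of std::pair, so the members are self-documenting.
// ReturnType appears directly in the parameter type of the function below,
// which lets the compiler deduce it at the call site.
template <typename ReturnType>
struct Task {
    std::promise<ReturnType> promise;    // was the pair's .first
    std::function<ReturnType()> work;    // was the pair's .second
};

template <typename ReturnType>
void execute_and_set_data(const std::shared_ptr<Task<ReturnType>> &data) {
    data->promise.set_value(data->work());
}
```

With this shape, `execute_and_set_data(data)` compiles without spelling out `<ReturnType>` explicitly.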
Missing includes
You are using std::vector without #include <vector>
You are using assert without #include <cassert>
Typo | {
"domain": "codereview.stackexchange",
"id": 9127,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, thread-safety, reinventing-the-wheel",
"url": null
} |
c++, design-patterns, graph
Graph() : TraitBase() {}
void add_vertex(const vertex_type &v) { TraitBase::add_vertex(v); }
void add_edge(const vertex_type &src, const vertex_type &dst) {
TraitBase::add_edge(src, dst);
}
auto adj(const vertex_type &src) { return TraitBase::adj(src); }
auto adj(const vertex_type &src) const { return TraitBase::adj(src); }
const auto &vertices() const noexcept { return TraitBase::vertices(); }
[[nodiscard]] auto size() const noexcept {
return TraitBase::vertices().size();
}
const auto &edges() const noexcept { return TraitBase::edges(); }
const auto &out_edges() const noexcept { return TraitBase::out_edges(); }
bool has_vertex(const vertex_type &src) const noexcept {
return TraitBase::has_vertex(src);
}
bool has_edge(const edge_type &edge) const noexcept {
return TraitBase::has_edge(edge);
} | {
"domain": "codereview.stackexchange",
"id": 43621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, design-patterns, graph",
"url": null
} |
\cdot P(\overline{b_2}\vert\overline{a_2})\cdot P(\overline{a_2}).$$ Now, we have three probabilities to calculate: $$P(\overline{a_2})=1-P(a_2)=\frac{4}{5}.$$ $$P(\overline{b_2}\vert\overline{a_2})=1-P(b_2\vert\overline{a_2})=\frac{2}{3}$$ $$P(a_3\vert\overline{a_2}\cap\overline{b_2})=\frac{1}{2}.$$ Explanation for the last one: If $$\overline{a_2}$$, then A got two balls of different colors on the table. So, A has four balls left, of which two would lead to a win. Now we can calculate our wanted probability: $$P(a_2)+P(a_3\cap\overline{a_2}\cap\overline{b_2})=\frac{1}{5}+\frac{1}{2}\cdot\frac{2}{3}\cdot\frac{4}{5}=\frac{7}{15}.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9793540644400346,
"lm_q1q2_score": 0.8118245890323078,
"lm_q2_score": 0.828938806208442,
"openwebmath_perplexity": 377.1822568268263,
"openwebmath_score": 0.7530533671379089,
"tags": null,
"url": "https://math.stackexchange.com/questions/3176202/a-and-b-play-a-game-with-colored-balls-a-starts-with-6-balls-2-orange-2-yello/3176294"
} |
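The closing arithmetic in the excerpt above can be double-checked with exact rational arithmetic; this only verifies the computation, not the underlying probability model:

```python
from fractions import Fraction

# P(a2) + P(a3 | not a2, not b2) * P(not b2 | not a2) * P(not a2)
p = Fraction(1, 5) + Fraction(1, 2) * Fraction(2, 3) * Fraction(4, 5)
assert p == Fraction(7, 15)
```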
solution from its state at time t to its new state at t + dt. heat transfer between different regions in a 2D domain. I have already implemented the finite difference method but it is slow (100,000 simulations take 30 minutes). In general, the nonlinear heat equation admits exact solutions of the form $\begin{array}{ll} w=W(kx-\lambda t)& (\hbox{traveling-wave solution}),\\ w=U(x/\!\sqrt t\,)& (\hbox{self-similar solution}), \end{array}$ where $$W=W(z)$$ and $$U=U(r)$$ are determined by ordinary differential equations, and $$k$$ and $$\lambda$$ are arbitrary constants. The following example illustrates the case when one end is insulated and the other has a fixed temperature. Working on a range of operations, from addition to division, this gets your child acquainted with algebra and starts them on the road to understanding expressions and equations. The heat equation is $\partial u/\partial t = \nabla\cdot\nabla u$. More than just an online equation solver. Solutions of the heat equation are sometimes | {
"domain": "lasalsese.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9863631623133666,
"lm_q1q2_score": 0.8087125814009092,
"lm_q2_score": 0.8198933337131076,
"openwebmath_perplexity": 703.8732531495101,
"openwebmath_score": 0.6489722728729248,
"tags": null,
"url": "http://ygig.lasalsese.it/2d-heat-equation-solver.html"
} |
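The finite-difference approach mentioned in the excerpt above can be sketched as a vectorized explicit (FTCS) solver in 1D; looping over grid points in pure Python is a common reason such solvers feel slow, so this uses NumPy slicing instead. Grid size, step ratio, and the boundary temperature are illustrative choices, and the boundary conditions match the case described in the text (one end insulated, the other held at a fixed temperature):

```python
import numpy as np

def heat_step(u, r):
    """One FTCS step for u_t = u_xx; r = dt/dx**2 must be <= 0.5 for stability."""
    new = u.copy()
    new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    new[0] = new[1]      # insulated left end: zero-gradient (mirror) condition
    return new           # new[-1] keeps its value: fixed-temperature right end

n, r = 51, 0.4           # illustrative grid size and step ratio
u = np.zeros(n)
u[-1] = 1.0              # right end held at a fixed temperature
for _ in range(20000):
    u = heat_step(u, r)
# The profile relaxes toward the steady state u == 1 everywhere, since the
# insulated end eventually takes on the fixed-end temperature.
```

The same slicing idea extends to 2D by updating interior points with `u[1:-1, 1:-1]` neighbor sums instead of nested loops.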
html, css
left: 0;
}
h1 {
font-size: 3rem;
}
h2 {
font-size: 2.3rem;
}
}
/* Tablet 768px */
@media only screen and (max-width: 768px) {
#showcase .product-img,
#showcase .product-video {
top: 10%;
left: 0;
}
#pricing {
height: auto;
}
#pricing .flex-row {
flex-direction: column;
}
#pricing .flex-row,
.price-tag {
width: 100%;
}
.price-tag {
padding: 2rem;
}
#newsletter .flex-row,
#newsletter form {
width: 100%;
}
#newsletter form {
padding: 2rem 2rem;
}
} | {
"domain": "codereview.stackexchange",
"id": 35761,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "html, css",
"url": null
} |
planet, solar-system, orbital-mechanics, 9th-planet, space-probe
In a simple system, retrieving information about the orbits is fairly straightforward (although I will omit the rigorous demonstration for brevity). The issues arise when the system becomes more complicated.
One source of confound is resonant orbits. Certain resonances would defy deconvolution because the frequencies of their respective influences on the barycenter are synchronous. Furthermore, multiple resonances can't necessarily be disambiguated: the frequency pattern in the motion of the barycenter caused by 2 bodies in a resonance could be replicated by 3 bodies in a resonance. Pluto and Neptune are in a 2:3 resonance, so I don't think we can immediately dismiss the possibility of a theoretical Planet 9 being in a resonance of some kind and that this could hinder deconvolving its effect on the barycenter. | {
"domain": "astronomy.stackexchange",
"id": 4586,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "planet, solar-system, orbital-mechanics, 9th-planet, space-probe",
"url": null
} |
localization, navigation, odometry, 3d-slam, stereo-camera
Title: 6DOF Localization with stereo camera against RGBD dense point cloud | {
"domain": "robotics.stackexchange",
"id": 26121,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "localization, navigation, odometry, 3d-slam, stereo-camera",
"url": null
} |