java, object-oriented, primes
Bugs in PrimeNumberClassic
There are multiple bugs in this class: 2 is reported as not prime, while every number <= 1 is considered prime :(
Some large numbers may result in an infinite loop! (I'm going into more detail about that at the end of my post.)
Before considering the rest of my reviews, you should :
correct them
make unit tests (check out JUnit) for your two classes covering some cases (2, several even numbers other than 2, then 3, 5, 7, 31, 277, some really big numbers, and a negative number, for example)
Consider using bigger numbers
ints aren't that big; if you plan on doing math calculations, you should consider switching to long or BigInteger.
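To illustrate the fixes, here is a primality check (sketched in Python, since the original Java class isn't shown in full) that gets the buggy cases above right: 2 is prime, anything <= 1 is not, and the trial-division loop is bounded so it always terminates:

```python
def is_prime(number):
    """Trial division; terminates because the divisor is bounded by sqrt(number)."""
    if number <= 1:
        return False  # 0, 1 and negatives are not prime
    if number == 2:
        return True   # 2 is the only even prime
    if number % 2 == 0:
        return False
    divisor = 3
    while divisor * divisor <= number:  # bounded, so no infinite loop
        if number % divisor == 0:
            return False
        divisor += 2
    return True

# the test cases suggested above
assert is_prime(2) and is_prime(3) and is_prime(5) and is_prime(7)
assert is_prime(31) and is_prime(277)
assert not is_prime(4) and not is_prime(100)
assert not is_prime(1) and not is_prime(-7)
```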
Well now, the real review :
Review of PrimeNumber
I don't really get what this class is about. The name makes it sound like it's storing a prime number, but it's not... it actually looks like a factory of some sort.
I'd consider removing it unless you want to add more features (in which case that may be the subject of a future, follow-up question ^^).
Review of PrimeAbstract
/**
* abstract class of prime number object
*/
abstract class PrimeAbstract {
public abstract boolean isPrime(int number);
}
Why is this an abstract class? It should clearly be an interface; also, the Javadoc is pretty useless.
The name is not very good... why PrimeAbstract? Aren't names like PrimeNumberFinder or PrimeNumberDetector closer to the intent?
Review of PrimeAbstract's children
The code in the implementation really needs to breathe!
Put some spaces in it ;) it's tiring to read as it is now.
As a rule of thumb, put spaces around the = sign (as well as compound operators like += of course), around the ? operator and its operands, and between comparison operators and their operands.
if (number%2==0) return false; | {
"domain": "codereview.stackexchange",
"id": 26321,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented, primes",
"url": null
} |
3. How many numbers that are not divisible by 6 divide evenly into 264,600?
(A) 9
(B) 36
(C) 51
(D) 63
(E) 72
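A brute-force check of question 3 (my own sketch, not part of the original set): 264,600 = 2³·3³·5²·7² has 4·4·3·3 = 144 divisors, of which 3·3·3·3 = 81 are divisible by 6, leaving 63, answer (D).

```python
n = 264_600
divisors = [d for d in range(1, n + 1) if n % d == 0]
not_div_by_6 = [d for d in divisors if d % 6 != 0]
print(len(not_div_by_6))  # 63, answer (D)
```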
4. A certain quantity is measured on two different scales, the R-scale and the S-scale, that are related linearly. Measurements on the R-scale of 6 and 24 correspond to measurements on the S-scale of 30 and 60, respectively. What measurement on the R-scale corresponds to a measurement of 100 on the S-scale?
(A) 20
(B) 36
(C) 48
(D) 60
(E) 84
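Question 4 is just a line through two points: the slope is (60 − 30)/(24 − 6) = 5/3, so S = (5/3)R + 20, and S = 100 gives R = 48, answer (C). A numeric check (my own sketch):

```python
# S = m*R + c through (R, S) = (6, 30) and (24, 60)
m = (60 - 30) / (24 - 6)      # 5/3
c = 30 - m * 6                # 20
r_for_100 = (100 - c) / m
print(r_for_100)  # ~48, answer (C)
```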
5. Mrs. Smith has been given film vouchers. Each voucher allows the holder to see a film without charge. She decides to distribute them among her four nephews so that each nephew gets at least two vouchers. How many vouchers has Mrs. Smith been given if there are 120 ways that she could distribute the vouchers?
(A) 13
(B) 14
(C) 15
(D) 16
(E) more than 16
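Question 5 is stars and bars: with each of 4 nephews getting at least 2 of n vouchers, the count is C(n − 5, 3), and C(10, 3) = 120 gives n = 15, answer (C). A brute-force cross-check (my own sketch):

```python
from itertools import product

def ways(n):
    # count distributions of n vouchers among 4 nephews, each getting >= 2
    return sum(1 for a, b, c in product(range(2, n + 1), repeat=3)
               if n - a - b - c >= 2)

n = next(n for n in range(8, 30) if ways(n) == 120)
print(n)  # 15, answer (C)
```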
6. This year Henry will save a certain amount of his income, and he will spend the rest. Next year Henry will have no income, but for each dollar that he saves this year, he will have 1 + r dollars available to spend. In terms of r, what fraction of his income should Henry save this year so that next year the amount he has available to spend will be equal to half the amount that he spends this year?
(A) 1/(r+2)
(B) 1/(2r+2)
(C) 1/(3r+2)
(D) 1/(r+3)
(E) 1/(2r+3)
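For question 6, if Henry saves fraction s, next year he can spend s(1 + r), which must equal half of this year's spending, (1 − s)/2; solving 2s(1 + r) = 1 − s gives s = 1/(2r + 3), answer (E). A spot check at sample rates (my own sketch):

```python
# check s = 1/(2r + 3) satisfies s*(1 + r) == (1 - s)/2 for a few sample rates
for r in (0.0, 0.5, 1.0, 0.07):
    s = 1 / (2 * r + 3)
    assert abs(s * (1 + r) - (1 - s) / 2) < 1e-12
print("identity holds")
```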
7. Before being simplified, the instructions for computing income tax in Country R were to add 2 percent of one's annual income to the average (arithmetic mean) of 100 units of Country R's currency and 1 percent of one's annual income. Which of the following represents the simplified formula for computing the income tax, in Country R's currency, for a person in that country whose annual income is I?
(A) 50+I/200
(B) 50+3I/100
(C) 50+I/40
(D) 100+I/50
(E) 100+3I/100 | {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 1,
"lm_q1q2_score": 0.8198933425148213,
"lm_q2_score": 0.8198933425148213,
"openwebmath_perplexity": 1341.3597371252881,
"openwebmath_score": 0.6643107533454895,
"tags": null,
"url": "https://gmatclub.com/forum/new-set-of-good-ps-85440.html"
} |
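Question 7 above simplifies as tax = (100 + 0.01I)/2 + 0.02I = 50 + 0.025I = 50 + I/40, answer (C). A quick numeric check (my own sketch):

```python
def tax(income):
    # average of 100 currency units and 1% of income, plus 2% of income
    return (100 + 0.01 * income) / 2 + 0.02 * income

for income in (0, 1000, 40000):
    assert abs(tax(income) - (50 + income / 40)) < 1e-9
print("formula matches 50 + I/40")
```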
symmetry, standard-model, symmetry-breaking, higgs
Title: What are the Generators of the electroweak interaction after symmetry breaking? (SM)
In the standard model (omitting the QCD part), we start off with the set of generators
$T_1$, $T_2$, $T_3$, $Y$
for the four-parametric gauge group $SU(2)_L \times U(1)_Y$.
We then define a new generator $Q= T_3+Y$ and make the transition to the four-parametric gauge group $SU(2)_? \times U(1)_Q$.
What are, aside from $Q$, the new generators for this "new" gauge group?
$?$ , $?$ , $?$ , $Q$
Do we still use the $T$'s we used in $SU(2)_L$? That means the left factor in the group product is still the same as before the symmetry breaking?
My motivation for asking is the observation that in $SU(2)_L \times U(1)_Y$, the four generators are orthogonal and a basis for the space of all complex self-adjoint matrices.
The set of $T_1$, $T_2$, $T_3$, $Q$, while still a basis, is however not orthogonal, since
$(T_3 | Q) = (T_3 | T_3 + Y) = (T_3 | T_3) \neq 0$
It would seem that we would probably want to preserve that orthogonality property and thus not use $T_3$ as a generator after symmetry breaking. Well, after symmetry breaking, all that remains is electromagnetic $U(1)$, so the only generator that is truly a symmetry generator is $Q$.
The fermions couple to the "Higgs" via the Yukawa coupling: | {
"domain": "physics.stackexchange",
"id": 10564,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "symmetry, standard-model, symmetry-breaking, higgs",
"url": null
} |
c, assembly, brainfuck, compiler
int main(int argc, char **argv) {
code = argv[1];
puts("xor di, di\n"
"setup_loop:\n"
"mov byte [tape + di], 0\n"
"add di, 1\n"
"cmp di, 101\n"
"jne setup_loop\n"
"xor di, di"); // sets up the tape with all 0's
int loop_count = 0; // to keep track of asm subroutines for [ and ]
for (int i = 0; code[i] != '\0'; i++) {
switch(code[i]) {
case '+':
printf("add byte [tape + di], %d\n", get_amt_to_change('+'));
break;
case '-':
printf("sub byte [tape + di], %d\n", get_amt_to_change('-'));
break;
case '>':
printf("add di, %d\n", get_amt_to_change('>'));
break;
case '<':
printf("sub di, %d\n", get_amt_to_change('<'));
break;
case '.':
puts("mov ah, 0Eh\n"
"mov al, byte [tape +di]\n"
"int 10h");
break;
case '[':
printf("cmp byte [tape + di], 0\n"
"je end_loop%d\n"
"start_loop%d:\n", loop_count, loop_count);
break;
case ']':
printf("cmp byte [tape + di], 0\n"
"jne start_loop%d\n"
"end_loop%d:\n", loop_count, loop_count);
loop_count++; // to not repeat subroutine names
break;
}
}
puts("jmp $\nsection .bss\ntape resb 100"); // a 100 byte tape | {
"domain": "codereview.stackexchange",
"id": 12533,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, assembly, brainfuck, compiler",
"url": null
} |
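The get_amt_to_change helper called in the compiler above isn't shown; presumably it run-length-counts consecutive identical commands, so that +++ emits one `add byte [tape + di], 3` instead of three separate increments. A sketch of that idea (in Python, names hypothetical, not the author's code):

```python
def amount_to_change(code, i, ch):
    """Count the run of consecutive `ch` commands starting at index i."""
    n = 0
    while i + n < len(code) and code[i + n] == ch:
        n += 1
    return n

assert amount_to_change("+++>-", 0, "+") == 3
assert amount_to_change("+++>-", 3, ">") == 1
```

The C version would also need to advance `i` past the counted run so the main loop doesn't re-emit the same commands.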
y-axis. Below is the step-by-step descriptive logic to check for a symmetric matrix. Here Aᵀ denotes the transpose of A: if A = (a_ij) is a (not necessarily square) matrix, the transpose of A, denoted Aᵀ, is the matrix with (i, j) entry a_ji. See Matrix Analysis, Second edition, Classics in Applied Mathematics. Copositive programming gained fame when Burer showed that hard nonconvex problems can be formulated as completely-positive programs. matrixcalc: collection of functions for matrix calculations.
The resulting EIG rank test is easy to formulate under stronger Assumptions (A), and becomes more involved when Assumptions (A*) are used.
If A = [[4, 2x − 3], [x + 2, x + 1]] is symmetric, then what is x equal to? S has the form S = AᵀA with independent columns in A.
In R, there are two functions for accessing the lower and upper triangular parts of a matrix, called lower.tri() and upper.tri() respectively. The current version of the code can only generate a symmetric or nonsymmetric matrix of arbitrary size, with eigenvalues distributed according to a normal distribution whose mean and standard deviation are specified by the user (subroutines R8SYMM_GEN and R8NSYMM_GEN). TEST_EIGEN is a FORTRAN90 library which generates eigenvalue tests.
AA' is always a symmetric matrix for any square matrix A. The matrix method is used inside eigen by default to test symmetry of matrices up to rounding error, using all.equal. Pivots are the first non-zero element in each row of a matrix that is in row-echelon form.
A matrix can be tested to see if it is symmetric using the Wolfram Language code: SymmetricQ[m_List?MatrixQ] := (m === Transpose[m]). Written explicitly, the elements of a symmetric matrix satisfy a_ij = a_ji. | {
"domain": "kaliagenova.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9609517095103499,
"lm_q1q2_score": 0.8260953558391085,
"lm_q2_score": 0.8596637559030338,
"openwebmath_perplexity": 567.5526870498367,
"openwebmath_score": 0.5458916425704956,
"tags": null,
"url": "http://www.kaliagenova.com/aq8vj/test-for-symmetric-matrix-f4dd47"
} |
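The rounding-tolerant symmetry test mentioned above (the all.equal approach used inside eigen) can be sketched in plain Python:

```python
def is_symmetric(m, tol=1e-8):
    """True if square matrix m equals its transpose up to rounding error."""
    n = len(m)
    return all(abs(m[i][j] - m[j][i]) <= tol
               for i in range(n) for j in range(n))

a = [[1.0, 2.0], [2.0 + 1e-12, 3.0]]   # symmetric up to rounding
b = [[1.0, 2.0], [5.0, 3.0]]           # not symmetric
assert is_symmetric(a)
assert not is_symmetric(b)
```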
beginner, php, array, statistics
/**
*
 * @return a number equal to the squared deviation of a value from the mean
*/
public static function getMeanSquare($x, $mean)
{
return pow($x - $mean, 2);
}
/**
*
 * @return a number equal to the sample standard deviation of the values of an array
*/
public static function getStandardDeviation($array)
{
if (count($array) < 2) {
return ConstEQ::NEAR_ZERO_NUMBER;
} else {
    $mean = array_sum($array) / count($array);
    $squares = array_map("ST::getMeanSquare", $array, array_fill(0, count($array), $mean));
    return sqrt(array_sum($squares) / (count($array) - 1));
}
}
/**
*
* @return a number equal to covariance of values of two arrays
*/
public static function getCovariance($valuesA, $valuesB)
{
// sizing both arrays the same, if different sizes
$no_keys = min(count($valuesA), count($valuesB));
$valuesA = array_slice($valuesA, 0, $no_keys);
$valuesB = array_slice($valuesB, 0, $no_keys);
// if size of arrays is too small
if ($no_keys < 2) {return ConstEQ::NEAR_ZERO_NUMBER;}
// Use library function if available
if (function_exists('stats_covariance')) {return stats_covariance($valuesA, $valuesB);}
$meanA = array_sum($valuesA) / $no_keys;
$meanB = array_sum($valuesB) / $no_keys;
$add = 0.0;
for ($pos = 0; $pos < $no_keys; $pos++) {
$valueA = $valuesA[$pos];
if (!is_numeric($valueA)) {
trigger_error('Not numerical value in array A at position ' . $pos . ', value=' . $valueA, E_USER_WARNING);
return false;
} | {
"domain": "codereview.stackexchange",
"id": 34513,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, php, array, statistics",
"url": null
} |
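The getStandardDeviation method above computes the sample standard deviation (dividing by n − 1). The same pipeline, translated to Python and cross-checked against the standard library (the 1e-9 return value is a stand-in for ConstEQ::NEAR_ZERO_NUMBER):

```python
import statistics

def sample_std(values):
    n = len(values)
    if n < 2:
        return 1e-9  # stand-in for ConstEQ::NEAR_ZERO_NUMBER
    mean = sum(values) / n
    squares = [(x - mean) ** 2 for x in values]   # getMeanSquare per element
    return (sum(squares) / (n - 1)) ** 0.5

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
assert abs(sample_std(data) - statistics.stdev(data)) < 1e-12
```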
performance, c, memory-optimization
/* read blocks of samples from WAVE file and feed to encoder */
fprintf(stdout, "Encoding: ");
size_t left = (size_t)totalSamples;
while(left)
{
size_t need = (left>READSIZE ? (size_t)READSIZE : (size_t)left);
if (fread(buffer, channels * (bps / 8), need, fin) != need)
{
return fprintf(stderr, "ERROR: reading from WAVE file\n");
}
/* convert the packed little-endian 16-bit PCM samples from WAVE into an interleaved FLAC__int32 buffer for libFLAC */
size_t i;
for(i = 0; i < need*channels; i++)
{
/* inefficient but simple and works on big- or little-endian machines */
pcm[i] = (FLAC__int32)(((FLAC__int16)(FLAC__int8)buffer[2 * i + 1] << 8) | (FLAC__int16)buffer[2 * i]);
}
/* feed samples to encoder */
if (!FLAC__stream_encoder_process_interleaved(encoder, pcm, need))
{
return fprintf(stderr, "ERROR: processing WAVE file: %s\n", FLAC__StreamEncoderStateString[FLAC__stream_encoder_get_state(encoder)]);
}
left -= need;
}
if (!FLAC__stream_encoder_finish(encoder))
{
return fprintf(stderr, "ERROR: finishing encoder: %s\n", FLAC__StreamEncoderStateString[FLAC__stream_encoder_get_state(encoder)]);
} | {
"domain": "codereview.stackexchange",
"id": 5620,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, c, memory-optimization",
"url": null
} |
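The byte-twiddling in the conversion loop above builds a signed 16-bit sample from two little-endian bytes: cast the high byte to a signed 8-bit value, shift it left, and OR in the low byte. The same operation in Python, checked against struct:

```python
import struct

def pcm16_le(lo, hi):
    """Sign-extend the high byte, shift, and OR in the low byte (as the C loop does)."""
    signed_hi = hi - 256 if hi >= 128 else hi   # the (FLAC__int8) cast
    return (signed_hi << 8) | lo

assert pcm16_le(0x34, 0x12) == struct.unpack('<h', b'\x34\x12')[0]   # 4660
assert pcm16_le(0xFF, 0xFF) == struct.unpack('<h', b'\xff\xff')[0]   # -1
```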
gravitational-waves
Title: Do gravitational waves have an effect on gravitational forces between two objects?
I'm pretty new to the subject of gravitational waves. Do gravitational waves have an effect on gravitational forces between two objects? If so, how? There are related answers such as Gravitational Waves and Effect of Gravitational Waves, the latter of which has a single answer with 3 downvotes.
I would like to attempt a new answer as there is an aspect of the phenomenon that they have not illustrated, and the OP asks specifically about forces. If I find this is a duplicate, I will of course remove it, as there are plenty of related articles on the topic on this site.
I'm pretty new to the subject of gravitational waves. Do gravitational waves have an effect on gravitational forces between two objects? If so, how?
Gravitational waves distort spacetime, which will affect the relative positions of the two bodies. If they are far away from each other, this will have only a minor effect, but if they are close together, as in the case of two black holes in a mutual orbit, then the gravitational forces can alter significantly.
On the subject of "forces", please read Bob Bee's answer, which emphasizes the spacetime curvature aspect.
Image Source: LIGO Black Hole Merger
A video of the simulated merger of 2 black holes illustrated above
The effect of a cross-polarized gravitational wave on a ring of particles
The effect of a plus-polarized gravitational wave on a ring of particles.
As a gravitational wave passes an observer, that observer will find spacetime distorted by the effects of strain. Distances between objects increase and decrease rhythmically as the wave passes, at a frequency corresponding to that of the wave.
If the source of the waves is relatively close enough, then the spacetime distortion may be sufficient to affect the two bodies in such a way that they increase/decrease the spacetime curvature in their vicinity. | {
"domain": "physics.stackexchange",
"id": 33196,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gravitational-waves",
"url": null
} |
thermodynamics, relativity
$$\begin{align} &\hphantom{=}\mathrm{let\ } p= mc\sinh\phi,\ \mathrm{and\ } x=\frac{m c^2}{kT} \\
&= \frac{c^2 x}{K_2\left(x\right)} \int_0^\infty \frac{\sinh^4 \phi}{\cosh \phi} \operatorname{e}^{-x\cosh\phi} \operatorname{d}\phi \\
& = \frac{c^2 x}{K_2\left(x\right)} \left[\operatorname{Ki}_{-2}(x) - 2\operatorname{Ki}_{-1}(x) + \operatorname{Ki}_1(x)\right],\end{align}$$ where $\operatorname{Ki}_\alpha(x)$ is the Bickley function. | {
"domain": "physics.stackexchange",
"id": 33412,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, relativity",
"url": null
} |
dna-sequencing, molecular-evolution, primer
Title: How to design internal primers?
I have sequences of the Cytochrome c oxidase I (COI) gene from several populations of Peringia ulvae (Mollusca; Gastropoda; Littorinimorpha; Hydrobiidae). I need additional COI sequences from a specific population. When using the same primers and PCR protocol as for the other populations, I do not get any PCR product for the specific population. My PI suggested that I design internal COI primers for Peringia ulvae. How can I do that? Should I use the COI sequences from the other populations?
If you are not getting amplification, there are multiple things to revisit before you design internal primers. You should try troubleshooting your PCR reactions before designing and ordering internal primers. If you are doing the PCRs wrong, it is likely you will run into the same problems again with internal primers.
There are excellent guides to troubleshooting PCRs; just a Google search for "troubleshooting PCR" should give you a lot of material to go through. This one is from NEB, but there are several others, each manufacturer having their own, so you could refer to the one whose reagents you are using.
Primers generally give good reads from the 5' end for about 600-700 bp, after which the quality of base calling deteriorates. So if your target amplicon is above 1500 bp or so, you will need internal primers, because the 'external' primers would give you good sequence for only about 1400 bp (600-700 bp from each direction, forward and reverse). Internal primers can thus be used to sequence the regions that cannot be reached by the main (external) forward and reverse primers. The published papers from which you took the external primers will most likely also list internal primers, if COI is long enough to require them.
Otherwise, you would need to align the existing sequences from the other populations, and look through regions of the alignment that are conserved which could act as potential priming sites. There are other issues to be taken care of before you can consider a 'conserved region' a 'priming site' and go ahead and order primers. Again there are excellent primer design tutorials available, just a google search away. | {
"domain": "biology.stackexchange",
"id": 6475,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "dna-sequencing, molecular-evolution, primer",
"url": null
} |
electromagnetism, magnetic-fields, mathematics
you need to compute the X,Y,Z values of B in the NED reference frame (North,East,Down) so you need the basis vectors first:
Down = a/|a| // gravity points down
North = B/|B| // north is close to B direction
East = cross(Down,North) // East is perpendicular to Down and North
North = cross(East,Down) // north is perpendicular to Down and East, this should convert North to the horizontal plane
You should render them to visually check that they point in the correct directions; if not, negate them by reordering the cross operands (I might have the order wrong; I'm used to using an Up vector instead). Now just convert B to NED:
X = dot(North,B)
Y = dot(East,B)
Z = dot(Down,B)
And now you can compute the H
H = sqrt( X*X +Y*Y ) | {
"domain": "physics.stackexchange",
"id": 55603,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, magnetic-fields, mathematics",
"url": null
} |
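The basis construction above, written out in plain Python (vector names as in the pseudocode; the sample accelerometer and field values are my own, for illustration):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def norm(u):
    m = math.sqrt(dot(u, u))
    return tuple(a / m for a in u)

def to_ned(a, B):
    down = norm(a)                    # gravity points down
    north = norm(B)                   # north is close to B's direction
    east = norm(cross(down, north))   # perpendicular to down and north
    north = norm(cross(east, down))   # project north into the horizontal plane
    return dot(north, B), dot(east, B), dot(down, B)

X, Y, Z = to_ned((0.0, 0.0, 9.8), (1.0, 0.0, 0.5))
H = math.sqrt(X*X + Y*Y)
assert abs(Y) < 1e-9   # B's horizontal part lies entirely along north
```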
performance, file-system, search, powershell
Title: PowerShell: search millions of files as fast as possible
I once asked a similar question, but in C#. Now I have the same problem in PowerShell.
What is the fastest way, to search files newer than 15 minutes, in a file system with more than 1 million files?
Is there any faster way than using pipe?
Get-ChildItem -Path $path -Recurse | Select Name, PSIsContainer, Directory, LastWriteTime, Length | where {($_.LastWriteTime -gt (Get-Date).AddMinutes(-15))}
I already cut off some attributes to minimize the object size. It still takes ages.
First, you don't need to call Get-Date for every file. Just call it once at the beginning:
$t = (Get-Date).AddMinutes(-15)
Get-ChildItem -Path $path -Recurse |
Select Name, PSIsContainer, Directory, LastWriteTime, Length |
where {($_.LastWriteTime -gt $t)}
That saves about 10% (as measured by Measure-Command).
Secondly, you don't need to call Select-Object for each file either. Just change the processing order:
$t = (Get-Date).AddMinutes(-15)
Get-ChildItem -Path $path -Recurse |
where {($_.LastWriteTime -gt $t)} |
Select Name, PSIsContainer, Directory, LastWriteTime, Length
Thirdly, try increasing the buffer size using the OutBuffer parameter:
$t = (Get-Date).AddMinutes(-15)
Get-ChildItem -Path $path -Recurse -OutBuffer 1000 |
where {($_.LastWriteTime -gt $t)} |
Select Name, PSIsContainer, Directory, LastWriteTime, Length
I've used 1000, but you can experiment with the value.
Those three changes reduced the running time to under one half on my system. | {
"domain": "codereview.stackexchange",
"id": 24497,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, file-system, search, powershell",
"url": null
} |
java, object-oriented, swing, graphics
I tried to implement a Circle factory singleton in order not to repeat code. Are there problems? Is this worth it?
In a larger program, factories make sense. In this case, it makes no sense at all; the code needed to support the factory is larger than what the factory saves. Consider this version (which does the same thing):
for (double i : circles) {
g2.setColor(colors[count++]);
g2.draw(new Ellipse2D.Double(200 - i / 2, 200 - i, i, i));
}
It is perfectly legal, and in fact desirable, to instantiate anonymous, short-lived objects that are passed directly into the parameter list of the function, which is essentially what you're doing anyways. This lets people know, clearly, that you're simply creating a circle, instead of having to refer to the factory's source code to get an answer as to what is going on.
In general, are there ways to make my code more succinct?
I believe I've outlined most of the problems in the prior answers, but let's go over them:
Drop the use of the factory, it's simply overkill in this case.
Drop the serial ids, especially in anonymous inner classes, as it is unlikely you'll ever serialize them. Warnings are okay, but you can use suppression annotations if you don't like the yellow squiggles (and I personally hate them; they're a guilt trip).
Use just one component to draw onto (or, one to manage the current view state, if you prefer).
Don't hide then show the frame. Instead, call invalidate() to force a repaint. No flickering.
Loading graphics and stuff as an initialization generally makes sense, but not here, because you're just wasting memory (although, not a ton, since this program only weighs in at a few KB, but, the principle remains). You should defer generating the two components until the last possible moment.
Generally speaking, your main app can be the JFrame; this is a normal design. | {
"domain": "codereview.stackexchange",
"id": 6130,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented, swing, graphics",
"url": null
} |
pytorch, weights, linear-regression, tensor, bias
Title: Not able to understand PyTorch Tensor (Weight & Biases) Size for Linear Regression
Below are the two tensors:
[[ 73., 67., 43.],
 [ 91., 88., 64.],
 [ 87., 134., 58.],
 [102., 43., 37.],
 [ 69., 96., 70.]]

[[ 56., 70.],
 [ 81., 101.],
 [119., 133.],
 [ 22., 37.],
 [103., 119.]]
These are the weights and biases that are added:
Weights and biases
w = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, requires_grad=True) | {
"domain": "ai.stackexchange",
"id": 3146,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pytorch, weights, linear-regression, tensor, bias",
"url": null
} |
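The sizes fit because the model computes preds = x @ w.T + b: the inputs are 5×3, so w must be 2×3 (one row of 3 weights per output column) and b must have length 2 (one bias per output column), giving 5×2 predictions that match the 5×2 target tensor. A plain-Python shape check (no PyTorch needed; the weight values here are arbitrary stand-ins for torch.randn):

```python
x = [[73., 67., 43.], [91., 88., 64.], [87., 134., 58.], [102., 43., 37.], [69., 96., 70.]]
w = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]   # shape (2, 3), like torch.randn(2, 3)
b = [1.0, 2.0]                            # shape (2,),  like torch.randn(2)

# preds[i][j] = sum_k x[i][k] * w[j][k] + b[j], i.e. x @ w.T + b
preds = [[sum(xi[k] * wj[k] for k in range(3)) + bj
          for wj, bj in zip(w, b)] for xi in x]
assert len(preds) == 5 and len(preds[0]) == 2   # matches the 5x2 target tensor
```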
My question is: is there any general method through which we don't have to consider all these cases separately and can prove the convergence of the series in more generality?
marked as duplicate by Gabriel Romon, Martin R, Sil, Siméon, Sangchul Lee (real-analysis) May 2 '18 at 18:54 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9857180677531124,
"lm_q1q2_score": 0.8564182527596524,
"lm_q2_score": 0.868826777936422,
"openwebmath_perplexity": 322.0400184138422,
"openwebmath_score": 0.8820814490318298,
"tags": null,
"url": "https://math.stackexchange.com/questions/2763504/convergence-of-a-n2-sqrta-n-sqrta-n1"
} |
keras, tensorflow, dataset, training
batch_normalization_152 (BatchN (None, 512, 512, 64) 256 conv2d_136[0][0]
__________________________________________________________________________________________________
conv2d_transpose_24 (Conv2DTran (None, 1024, 1024, 6 36928 batch_normalization_152[0][0]
__________________________________________________________________________________________________
concatenate_24 (Concatenate) (None, 1024, 1024, 1 0 conv2d_transpose_24[0][0]
conv2d_113[0][0]
__________________________________________________________________________________________________
batch_normalization_153 (BatchN (None, 1024, 1024, 1 512 concatenate_24[0][0] | {
"domain": "datascience.stackexchange",
"id": 10173,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "keras, tensorflow, dataset, training",
"url": null
} |
algorithms, optimization, approximation, heuristics, packing
Title: What is the approximation ratio of this bin-packing algorithm?
Consider the following algorithm for bin packing:
Initially, sort the items by their size.
Put the largest item in a new bin.
Fill the bin with small items in ascending order of size, up to the largest item that fits.
Close the bin. If some items remain, go back to step 1. | {
"domain": "cs.stackexchange",
"id": 18805,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, optimization, approximation, heuristics, packing",
"url": null
} |
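The four steps above can be sketched as follows (one greedy reading of step 3: add small items in ascending order while they still fit; the bin capacity is a parameter):

```python
def pack(items, capacity):
    remaining = sorted(items)           # step 1: sort the items by size
    bins = []
    while remaining:
        load = remaining.pop()          # step 2: largest item opens a new bin
        current = [load]
        i = 0
        while i < len(remaining):       # step 3: small items in ascending order
            if load + remaining[i] <= capacity:
                load += remaining[i]
                current.append(remaining.pop(i))
            else:
                i += 1
        bins.append(current)            # step 4: close the bin, repeat
    return bins

bins = pack([7, 6, 3, 2, 1], capacity=10)
assert len(bins) == 2
```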
03 Oct 2019, 11:34
OFFICIAL EXPLANATION
Hi All,
We're told that each of 60 cars is parked in one of three empty parking lots. After all of the cars have been parked, the largest lot holds 8 more cars than the middle lot and 16 more cars than the smallest lot. We're asked for the number of cars that are in the LARGEST lot. This is an example of a 'System' question; it can be solved Algebraically, but it can also be solved rather quickly by TESTing THE ANSWERS...
Based on the information that we're given, the three parking lots clearly each end up holding a different number of cars. We're asked for the LARGEST number of the three, so we should look to TEST one of the larger answers first. Let's TEST Answer D...
IF....the largest lot holds 28 cars....
then the middle lot holds 28 - 8 = 20 cars...
and the smallest lot holds 28 - 16 = 12 cars...
Total = 28 + 20 + 12 = 60 cars
This is an exact MATCH for what we were told, so this MUST be the answer!
GMAT assassins aren't born, they're made,
Rich
GMAT Club Legend
Joined: 12 Sep 2015
Posts: 4234
Re: Each of 60 cars is parked in one of three empty parking lots [#permalink]
03 Oct 2019, 12:39
Top Contributor
EMPOWERgmatRichC wrote:
EMPOWERgmat PS Series:
Block 1, Question 5
Each of 60 cars is parked in one of three empty parking lots. After all of the cars have been parked, the largest lot holds 8 more cars than the middle lot and 16 more cars than the smallest lot. How many cars are in the largest lot?
A. 12
B. 20
C. 22
D. 28
E. 30
Let x = number of cars in the LARGEST lot | {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464499040092,
"lm_q1q2_score": 0.8643288913231058,
"lm_q2_score": 0.8856314723088733,
"openwebmath_perplexity": 3974.236522246419,
"openwebmath_score": 0.2563014328479767,
"tags": null,
"url": "https://gmatclub.com/forum/each-of-60-cars-is-parked-in-one-of-three-empty-parking-lots-306824.html"
} |
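The algebraic route started above ("Let x = number of cars in the LARGEST lot") finishes quickly: x + (x − 8) + (x − 16) = 60, so 3x = 84 and x = 28, matching the answer found by TESTing. A one-line check:

```python
# x = largest lot; middle = x - 8; smallest = x - 16; total = 60
x = next(x for x in range(16, 61) if x + (x - 8) + (x - 16) == 60)
print(x)  # 28, answer (D)
```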
c#, performance, strings, pagination
private int pagedepth;
private long pagesize;
private long mpagesize; // https://stackoverflow.com/questions/11040646/faster-modulus-in-c-c
private int currentPage = 0;
private int currentPosInPage = 0; | {
"domain": "codereview.stackexchange",
"id": 33769,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, strings, pagination",
"url": null
} |
c++, beginner, file, caesar-cipher
~Caesar() {};
};
int main()
{
string filePath;
bool isRunning = true;
int userChoice;
Caesar *caesar = new Caesar;
while (isRunning)
{
userChoice = caesar->caesarInterface(caesar);
if (userChoice == 1)
{
cout << "Type the .txt file path." << endl;
cin >> filePath;
caesar->caesarEncrypt(caesar, filePath);
system("pause");
isRunning = false;
}
if (userChoice == 2)
{
cout << "Type the .txt file path." << endl;
cin >> filePath;
caesar->caesarDecrypt(caesar, filePath);
system("pause");
isRunning = false;
}
if (userChoice == 3) isRunning = false;
}
return 0;
}
What is system("CLS"), system("pause")? That is not C++.
Your top function uses exceptions for normal flow control, which is completely unnecessary.
try
{
cin >> userChoice;
if (obj->userChoice == 1) return obj->userChoice;
if (obj->userChoice == 2) return obj->userChoice;
if (obj->userChoice == 3) return obj->userChoice;
else throw 199;
}
catch (int a)
{
cout << "No such choice... error: 199" << endl;
cout << "Please try again." << endl;
obj->userChoice = 3;
system("pause");
return obj->userChoice;
}
You also duplicate the entire code for each case when they are identical.
Now this is weird and I don’t follow: you read into userChoice with no qualification or receiver on that name. But you then use the value from obj->userChoice which is a different variable.
OK, here is the only call to it:
userChoice = caesar->caesarInterface(caesar); | {
"domain": "codereview.stackexchange",
"id": 30766,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, file, caesar-cipher",
"url": null
} |
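The exception-free alternative the review is pointing at is a plain validation loop. Sketched in Python for brevity (this reads candidates from a list so it runs standalone; real code would read from standard input, and the same shape works in C++ with cin and a while loop):

```python
def read_choice(inputs, valid=(1, 2, 3)):
    """Return the first input that parses as a valid menu option; no exceptions needed."""
    for raw in inputs:
        if raw.strip().isdigit() and int(raw) in valid:
            return int(raw)
        # invalid entry: in interactive code, print "Please try again." and re-prompt
    return None

assert read_choice(["x", "9", "2"]) == 2
```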
c++, c++14, xml, stream, qt
Title: Using QXmlStreamReader to read configuration file of key-value pairs Background
The following XML snippet contains configuration parameters for our application as key-value pairs. (Although using INI files for this purpose might be better, I have to use XML files.)
<configuration>
<parameter id="serial_port_name">ttyS0</parameter>
<parameter id="serial_port_baud_rate">9600</parameter>
<parameter id="check_period">1000</parameter>
<parameter id="locale">hu_HU</parameter>
<!-- ... -->
</configuration>
I would like to read the configuration parameters and store them inside a QHash<QString, QString> object with minimal overhead, therefore I've chosen the QXmlStreamReader for iterating through XML elements.
Current code
The toConfigurationMap() method of the following class iterates through the elements of XML file and when a <configuration> element is found, starts another iteration on the level of <parameter> elements to parse them.
class ConfigurationReader {
public:
explicit ConfigurationReader(QFile& input) : reader(&input) {
}
QHash<QString, QString> toConfigurationMap() {
QHash<QString, QString> output;
for(auto token = reader.tokenType(); !reader.atEnd();
token = reader.readNext()) {
if(token != QXmlStreamReader::StartElement ||
reader.name() != "configuration")
continue;
while(reader.readNextStartElement()) {
if(reader.name() == "parameter") {
QString id = reader.attributes().value("id").toString();
output.insert(id, reader.readElementText());
}
else {
reader.skipCurrentElement();
}
}
}
return output;
}
private:
QXmlStreamReader reader;
}; | {
"domain": "codereview.stackexchange",
"id": 31618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++14, xml, stream, qt",
"url": null
} |
python, statistics, django
Build a single queryset to annotate hours worked this month
Cache the result of the aggregation to not recompute it for each object
Annotate the queryset again to get the turnover
The view could start with something like
hours_per_machine = L3.objects.annotate(hours_worked=<whatever>)
btc_to_hour = compute_hour_ratio(hours_per_machine.aggregate(Sum('hours_worked'))['hours_worked__sum'])
machine_turnover = hours_per_machine.annotate(turnover=F('hours_worked') * btc_to_hour)
for machine in machine_turnover:
# do something with machine.turnover
Where
def compute_hour_ratio(hours_worked):
response = requests.get(
'https://api.nicehash.com/api',
params={
'method': 'balance',
'id': '123456',
'key': '01234567-abcd-dcba-abcd-0123456789abc',
})
response.raise_for_status()
balance = response.json()['result']['balance_confirmed']
return balance / hours_worked
To answer your comment about result caching:
Django comes with an impressively configurable cache system. So you may want to take advantage of it. First off, choose your cache backend and configure it properly (let's say, store the value in the database). Then, in your view, you could query the cache manually:
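The pattern behind Django's `cache.get_or_set` can be sketched without any Django dependency; everything below (the `_store` dict, the `timeout` default) is illustrative, not Django's actual internals:

```python
import time

# Dependency-free sketch of the pattern behind Django's cache.get_or_set:
# recompute the value only when the cached copy is missing or expired.
_store = {}

def get_or_set(key, compute, timeout=600, now=time.monotonic):
    entry = _store.get(key)
    if entry is not None:
        value, expires_at = entry
        if now() < expires_at:
            return value  # fresh cached value: skip recomputation
    value = compute()  # recompute (e.g. call compute_hour_ratio) and cache it
    _store[key] = (value, now() + timeout)
    return value
```

With Django itself, the equivalent call would be roughly `cache.get_or_set('btc_balance', fetch_balance, 600)`.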
from django.core.cache import cache | {
"domain": "codereview.stackexchange",
"id": 28159,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, statistics, django",
"url": null
} |
waves
Let's assume that we are talking about a scalar field propagating in 1-D. In other words, we are talking about the behavior of a scalar function $u(x,t): \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ that obeys some kind of PDE. We know that $u(x,t)$ will admit a Fourier representation:
$$
u(x,t) = \frac{1}{2 \pi} \iint \tilde{u}(k,\omega) e^{-i k x} e^{-i \omega t} \, dk \, d \omega
$$
The dispersion relation given implies that waves can only propagate when $(\omega^2 - c_a^2 k^2)(\omega^2 - c_b^2 k^2) = 0$; this implies that the support of $\tilde{u}$ is confined to this surface, and that therefore
$$
(\omega^2 - c_a^2 k^2)(\omega^2 - c_b^2 k^2) \tilde{u}(k, \omega) = 0.
$$
Taking the inverse Fourier transform of this equation, we have that
$$
\frac{1}{2 \pi} \iint (\omega^2 - c_a^2 k^2)(\omega^2 - c_b^2 k^2) \tilde{u}(k,\omega) e^{-i k x} e^{-i \omega t} \, dk \, d \omega = 0 \\ | {
"domain": "physics.stackexchange",
"id": 50539,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves",
"url": null
} |
c++, beginner, algorithm, programming-challenge, c++17
"dlvikolv","lvlkiovd","lovdiklv","kvdllvio","lvioklvd","ilvvdolk","dkvvlilo","lkvivdol","iklvvdol","ildvovkl","klvolidv","vvdolilk","kvdlvoil","kolvidvl","vkdoilvl","vdikolvl","ldioklvv","ovvlldki","vlkiodlv","okllivvd","lvikvldo","ovikldlv","lvdkvlio","lvdkivlo","kliovlvd","illkodvv","llvoidvk","loklivdv","okdllviv","dvlvoikl","llokidvv","lvldvkoi","kdvolvli","ldolvvki","vkiolvdl","klvdolvi","livklvod","olvvidkl","ovidvlkl","vldkolvi","lovvkldi","vokdilvl","likdvlvo","ovlvilkd","lkoildvv","vllo | {
"domain": "codereview.stackexchange",
"id": 38967,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, algorithm, programming-challenge, c++17",
"url": null
} |
it. Given the first term and the common ratio of a geometric sequence, find the first five terms and the explicit formula. The Sum of a Geometric Sequence. We will just need to decide which form is the correct form. As a check, $$\frac{u_5}{u_3}=4=r^2$$ so this does work. Access this finite geometric series worksheet prepared for high school students. Determine the common ratio and. Any term = constant = r. This ratio is called _____. The sum of an infinite geometric series is 24, and the sum of the first 200 terms of the series is also 24. Geometric Series Questions (b) Find, to 2 decimal places, the difference between the 5th and 6th terms. A geometric series is the sum of the terms of a geometric sequence. Work out the missing term in this geometric sequence. How do we find the nth term of a geometric sequence? 1. For example, if I know that the 10th term of a geometric sequence is 24, and the 9th term of the sequence is 6, I can find the common ratio by dividing the 10th term by the 9th term: 24 / 6 = 4. Common Ratio. If there are 160 ants in the initial population, find the number of ants. Given $u_5=1280$ and $u_8=81920$, find the geometric sequence. This sequence starts at 10 and has a common ratio of 0. 2. $r = \sqrt{4} = 2$. Find (1) the common ratio, (2) the ninth term, (3) a recursive rule for the nth term, and (4) an explicit rule. Find the common ratio of the geometric sequence. Geometric progression is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed non-zero number called the common ratio. And each time I'm multiplying it by a common number, and that number is often called the common ratio. Finally, use the rule to find the tenth term in the sequence. Also, a geometric sequence has p as its common ratio. 
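The rules described above (nth term from the first term and the ratio; ratio from two consecutive terms) are easy to state as a small sketch:

```python
def nth_term(a1, r, n):
    """nth term of a geometric sequence: a_n = a1 * r**(n - 1)."""
    return a1 * r ** (n - 1)

def common_ratio(prev_term, next_term):
    """Common ratio obtained by dividing any term by the previous one."""
    return next_term / prev_term
```

For the worked example above, `common_ratio(6, 24)` returns the ratio 4; and from $u_5=1280$, $u_8=81920$ one gets $r^3=64$, so $r=4$ and the first term is $a_1 = 1280/4^4 = 5$.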
If you’re good at finding patterns, then you’ll | {
"domain": "dwvu.pw",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9857180669140007,
"lm_q1q2_score": 0.8104466803757999,
"lm_q2_score": 0.8221891305219504,
"openwebmath_perplexity": 192.78418523772632,
"openwebmath_score": 0.7307733297348022,
"tags": null,
"url": "http://wzel.dwvu.pw/find-the-common-ratio-of-the-geometric-sequence.html"
} |
quantum-mechanics, renormalization, scattering, many-body, dirac-delta-distributions
If we iterate the Born series, or sum 2-particle bubbles, we encounter divergences. We wish to introduce a regulator to remove divergences, and renormalize to
express the bare potential in terms of $a$.
In the Schroedinger equation we can use the pseudo-potential method explained, for example, in Kerson Huang's book. The trick is that the low energy scattered wave is $\psi\sim (1-a/r)$.
Then the operator $\partial_r(r\psi)$ removes the scattered wave, and the Born approximation in the 2-body channel is exact. Huang explains how to apply this to $N$-body wave functions.
In diagrammatic perturbation theory we can impose a momentum space cutoff (the regulator), and compute the 2-particle bubble. The solution to the Schroedinger equation corresponds to summing the bubbles, which is a geometric series. Now we set $k=0$, and demand agreement with the tree level result. This gives your relation between $g$ and $g_0$ (renormalization). In a two component Fermi gas there are no further divergences, and this step fully renormalizes $N$-body perturbation theory. By the way: If you want a regulator that behaves exactly like the pseudo-potential, and removes the zero external momentum bubble, you can use dimensional regularization.
For bosons, fermions with more than 2 components, or finite range potentials, additional divergences appear.
Historically, the ground state energy of the weakly interacting Fermi gas was first computed using the pseudo-potential (by Lee and Yang), but modern presentations typically use the renormalized cutoff theory. The result is the same. | {
"domain": "physics.stackexchange",
"id": 71242,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, renormalization, scattering, many-body, dirac-delta-distributions",
"url": null
} |
@Chan: In order to say that $b$ is a quadratic residue, there has to exist some integer $w$ such that $w^2\equiv b\pmod{p}$. What you call $w$ is completely irrelevant. You can call it $w$; you can call it $z$; you can call it $x$; you can call it "Steve". "Steve" is then a "witness to the fact that $b$ is a quadratic residue." Steve shows that $b$ is a square modulo $p$. If $x$ is the name of the witness to the fact that $a$ is a quadratic residue, then it turns out that $x^{-1}$, the modular inverse of $x$, is a witness for $b$. You are missing the point because of the notation. – Arturo Magidin Apr 13 '11 at 20:18 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9857180658651109,
"lm_q1q2_score": 0.8012579936409032,
"lm_q2_score": 0.8128673110375458,
"openwebmath_perplexity": 219.520969198584,
"openwebmath_score": 0.9736251831054688,
"tags": null,
"url": "http://math.stackexchange.com/questions/32833/prove-that-if-ab-equiv-1-pmodp-and-a-is-quadratic-residue-mod-p-then"
} |
ros-kinetic
ndt_gpu will not be built, CUDA was not found.
---
Finished <<< ndt_gpu [2.44s]
Starting >>> rosinterface
Finished <<< kitti_player [6.32s]
Starting >>> runtime_manager
--- stderr: pcl_omp_registration
In file included from /home/quanfayan/autoware.ai/src/autoware/core_perception/pcl_omp_registration/include/pcl_omp_registration/ndt.h:45:0,
from /home/quanfayan/autoware.ai/src/autoware/core_perception/pcl_omp_registration/src/ndt.cpp:46:
/home/quanfayan/autoware.ai/src/autoware/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:181:23: error: expected identifier before string constant
PCL_DEPRECATED ("[pcl::registration::Registration::setInputCloud] setInputCloud is deprecated. Please use setInputSource instead.")
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/quanfayan/autoware.ai/src/autoware/core_perception/pcl_omp_registration/include/pcl_omp_registration/registration.h:181:23: error: expected ‘,’ or ‘...’ before string constant | {
"domain": "robotics.stackexchange",
"id": 35590,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-kinetic",
"url": null
} |
first and last terms in the sum are halved. The reason the $\sin(N/2 t)$ term is missing is that this function vanishes when evaluated at $t_j$, so that there is no contribution of this mode to the coefficient $c_{N/2}$. | {
"domain": "chebfun.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9911526449170697,
"lm_q1q2_score": 0.8237896364075763,
"lm_q2_score": 0.831143054132195,
"openwebmath_perplexity": 1450.5721992579056,
"openwebmath_score": 0.911601722240448,
"tags": null,
"url": "http://www.chebfun.org/docs/guide/guide11.html"
} |
quantum-mechanics, operators, harmonic-oscillator
Title: Similarity between unitary operators and ladder operators I observed a similarity. Is this a co-incidence?:
$$(I+\epsilon P)|x\rangle =|x+\epsilon\rangle$$
And,
$$(X+iP)|n\rangle=A_n|n+1\rangle$$
Here, $|x\rangle$ is an eigenfunction of position. $|n\rangle$ is an eigenfunction of the Hamiltonian $X^2+P^2$
The similarity that I observe is that $(I+\epsilon P)$ works as an "infinitesimal ladder operator" for $|x\rangle$
Is the motivation for ladder operators rooted in this similarity? If so, how? Can ladder operators be systematically derived by exploiting this similarity? I've only seen ladder operators introduced abruptly as a "mathematical trick".
EDIT There's one more clue. The first equation works only because of the value of the commutator $[X,P]$. The second equation also works because of the commutator $[a,H]$
Given all these clues, can we systematically motivate ladder operators? The defining equation for a ladder (or lowering) operator $\hat{L}$ with respect to some observable $\hat{X}$ is* $$[\hat{X},\hat{L}] = -\Delta \hat{L},$$ where $\Delta$ is some difference in eigenvalues of $\hat{X}$. It follows that if $|\xi\rangle$ is some eigenvector of $\hat{X}$ with eigenvalue $\xi$, then $\hat{L}|\xi\rangle$ is another eigenvector with eigenvalue $\xi-\Delta$, since | {
"domain": "physics.stackexchange",
"id": 86088,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, harmonic-oscillator",
"url": null
} |
If you run the command again, you won’t see the download part and the command will be executed very fast.
We can ask Docker to give us a shell using the following command
`$ docker run -it feelpp/feelpp-env`
It provides a shell prompt from inside the container, very similar to what you obtain when logging in with `ssh` on a remote machine. The flags `-i` and `-t` tell Docker to provide an interactive session (`-i`) with a TTY attached (`-t`).
#### 10.1.1. Feel++ Container System
The Feel++ Container System (FCS) is organized in layers and provides a set of images.
#### 10.1.2. Naming
The naming convention of the FCS allows the user to know where the images come from and where they are stored on the Docker Hub. The name of an image is built as follows
`feelpp/feelpp-<component>[:tag]`
where
• `feelpp/` is the namespace of the image and the organization name
• `feelpp-<component>` is the image name for a Feel++ component
• `[:tag]` is an optional tag for the image, by default set to `:latest`
Feel++ images (components) are defined as layers in the FCS in the table below.
Table 6. Table of the current components of the FCS
| Component | Description | Built From |
|---|---|---|
| `feelpp-env` | Execution and programming environment | `<OS>` |
| `feelpp-libs` | Feel++ libraries and tools | `feelpp-env` |
| `feelpp-base` | Feel++ base applications | `feelpp-libs` |
| `feelpp-toolboxes` | Feel++ toolboxes | `feelpp-base` |
Note: `feelpp-env` depends on an operating system image `<OS>`; the recommended and default `<OS>` is Ubuntu 16.10. In the future, we will build upon the next Ubuntu LTS or Debian Stable releases.
#### 10.1.3. Tags
By default, the `:latest` tag is assumed in the name of the images, for example when running
`$ docker run -it feelpp/feelpp-base` | {
"domain": "feelpp.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9559813538993888,
"lm_q1q2_score": 0.8163246661246598,
"lm_q2_score": 0.8539127492339909,
"openwebmath_perplexity": 13718.21862802429,
"openwebmath_score": 0.26109886169433594,
"tags": null,
"url": "http://book.feelpp.org/user-manual/"
} |
special-relativity, lagrangian-formalism, metric-tensor, differentiation, covariance
Title: How does the $\partial _{\mu} (\frac{\partial L}{\partial [\partial _{\mu} \phi]})$ term expand into a sum? From QFT Demystified page 31:
This term is from the Euler-Lagrange equation of a scalar field. How does this expand into a sum?
Do we just sum over all $\mu$ from $\mu =0$ to 3, or are we supposed to subtract the spatial terms in the expanded sum?
I believe it should just be a simple sum. This is because the action didn't differentiate between space and time derivatives at all (the Lagrangian being $L(\phi, \partial _{\mu} \phi)$). So it doesn't make sense for the spatial terms to end up with a minus sign in the Euler-Lagrange equation. However, the book I'm reading says the expanded sum has minus signs for the spatial terms.
In general, when do we simply sum over repeated indices and when do we have to use minus signs? The expansion of $\partial ^{\mu} \partial_ {\mu} \phi$ has minus signs. But many other times, we just do a simple sum over repeated indices. I think the second equation with the spatial minus signs is simply wrong. All signs should be plus. There are minus signs in the wave equation, of course, but these come from minus signs in the Lagrangian density:
$$
L= \frac 12 \partial^\mu \phi \partial_\mu\phi\\= \frac 12 g^{\mu\nu} \partial_\nu \phi \partial_\mu\phi\\= \frac 12 [(\partial_t \phi)^2 - (\partial_x \phi)^2 -(\partial_y \phi)^2 -(\partial_z \phi)^2].
$$ | {
"domain": "physics.stackexchange",
"id": 87030,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, lagrangian-formalism, metric-tensor, differentiation, covariance",
"url": null
} |
c#, comparative-review, winforms
if
for (int i = 0; i < length; i++)
{
CameraStream stream = null;
var id = (Guid)CameraGridView.Rows[i].Cells[0].Value;
var Camerasettings = Settings.GetCameraSettings(id);
if (i == 0)
{
var PicBoxes = new List<PictureBox> { Camera1a, Camera1b };
var Labels = new List<Label> { Camera1aLbl, Camera1bLbl };
stream = new CameraStream(Camerasettings, PicBoxes, Labels);
}
if (i == 1)
{
var PicBoxes = new List<PictureBox> { Camera2a, Camera2b };
var Labels = new List<Label> { Camera2aLbl, Camera2bLbl };
stream = new CameraStream(Camerasettings, PicBoxes, Labels);
}
if (i == 2)
{
var PicBoxes = new List<PictureBox> { Camera3a, Camera3b };
var Labels = new List<Label> { Camera3aLbl, Camera3bLbl };
stream = new CameraStream(Camerasettings, PicBoxes, Labels);
}
if (i == 3)
{
var PicBoxes = new List<PictureBox> { Camera4a, Camera4b };
var Labels = new List<Label> { Camera4aLbl, Camera4bLbl };
stream = new CameraStream(Camerasettings, PicBoxes, Labels);
}
if (i == 4)
{
stream = new CameraStream(Camerasettings, Camera5, Camera5Lbl);
}
if (i == 5)
{
stream = new CameraStream(Camerasettings, Camera6, Camera6Lbl);
}
if (i == 6)
{
stream = new CameraStream(Camerasettings, Camera7, Camera7Lbl);
}
if (i == 7)
{
stream = new CameraStream(Camerasettings, Camera8, Camera8Lbl);
}
if (i == 8)
{
stream = new CameraStream(Camerasettings, Camera9, Camera9Lbl);
}
cameraStreams.Add(stream);
stream.Start(); | {
"domain": "codereview.stackexchange",
"id": 19807,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, comparative-review, winforms",
"url": null
} |
python
Title: Group counter for ranges of values I have this grouping thing, which would be a switch case if it weren't for the ranges, or a Counter of some sort, but as there are those ranges I don't know how to implement this any more efficiently.
def getGroups(user_array):
# group1 is group 1-10 x
group1 = 0
# group2 is group 10-50 x
group2 = 0
# group3 is group 50-100 x
group3 = 0
# group4 is group 100-200 x
group4 = 0
# group5 is group 200-500 x
group5 = 0
# group6 is group 500 - 1000 x
group6 = 0
# group7 is group 1000+ x
group7 = 0
for user in user_array:
if user.x_count == 0:
pass
elif user.x_count <= 10:
group1 += 1
elif user.x_count <= 50:
group2 += 1
elif user.x_count <= 100:
group3 += 1
elif user.x_count <= 200:
group4 += 1
elif user.x_count <= 500:
group5 += 1
elif user.x_count <= 1000:
group6 += 1
else:
group7 += 1
return [group1, group2, group3, group4, group5, group6, group7] If your data is strictly integer values, you can use user.x_count in range(...) to test whether or not the user.x_count value is a member of the range(...) set. Ie)
def getUsers(user_array):
group1 = sum(1 for user in user_array if user.x_count in range(1, 11))
group2 = sum(1 for user in user_array if user.x_count in range(11, 51))
# ... etc ... | {
"domain": "codereview.stackexchange",
"id": 34099,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
observational-astronomy, black-hole, galaxy, dark-matter, gravitational-lensing
PPS With regards to galaxies that don't have dark matter - the NGC 1052-DF4 dark-matter-free dwarf galaxy, which was (similar to NGC 1052-DF2, which is the dwarf satellite galaxy of an elliptical galaxy NGC 1052 in the constellation Cetus) originally thought to be also a satellite galaxy of NGC 1052 (as NGC 1052-DF2 is), but later being identified as located closer to the NGC 1035 galaxy (rather than to NGC 1052), per study led by Mireia Montes, firstly lost its dark matter content, and now is in the last stages of being ripped apart 4. The short answer is that we have no idea whether there is a massive black hole at the center of this galaxy, and no real hope of finding out (in the absence of possibly detecting, e.g., X-ray emission from an active nucleus). | {
"domain": "astronomy.stackexchange",
"id": 6023,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "observational-astronomy, black-hole, galaxy, dark-matter, gravitational-lensing",
"url": null
} |
The real part of the second eigenvalue is 2. The following worksheet is designed to analyse the nature of the critical point (when ) and solutions of the linear system X'=AX. The phase portrait represents the trajectories of two variables, x and y, whose state at time t is represented by the coordinate (x(t), y(t)) on the Cartesian plane. In class we sketched (by hand) the phase portrait for the second system of nonlinear ODEs by linearization via the Jacobian matrix. Check the fixed point (0, 0): the real part of the first eigenvalue is -1. This has rank 1 and the phase portrait is degenerate, as the Mathlet says. The phase portrait is a plot of a vector field which qualitatively shows how the solutions to these equations will go from a given starting point. It certainly appears that the critical point (3,2) is asymptotically stable. The trajectory can be dragged by moving the cursor with the mouse key depressed. A phase plane portrait is a very important tool to study the behavior and stability of a non-linear system. A phase portrait is a plot of multiple phase curves corresponding to different initial conditions in the same phase plane. Now we have Matlab that does a lot of this work for us. The last example of a nonhyperbolic system occurs when the matrix has two zero eigenvalues, but only one linearly independent eigenvector. Phase Portraits of Nonlinear Systems. We say the phase portraits of z(k+1) = Az(k) and z(k+1) = Jz(k) are affine equivalent if A and J are similar. Deselect the [Companion Matrix] option, so you can set all four entries in the matrix. 
Should it be an additional data set then you can consider it like z-data, plotted on the | {
"domain": "mariasaso.de",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9852713883126862,
"lm_q1q2_score": 0.8252829801580173,
"lm_q2_score": 0.8376199592797929,
"openwebmath_perplexity": 682.0292663282,
"openwebmath_score": 0.6334644556045532,
"tags": null,
"url": "https://www.mariasaso.de/phase-portrait-plotter-matrix.html"
} |
cosmology, big-bang, distance
To find the current($t_0$) conformal time, we can use the above equation, for $z=0$
$$\eta=H_0^{-1}\int_{z=0}^{\infty}\frac {dz} {E(z)}$$
$$\eta=H_0^{-1}\int_{0}^{\infty}\frac {dz} {\sqrt{\Omega_{\Lambda}+\Omega_m(1+z)^3+\Omega_r(1+z)^4+\Omega_{\kappa}(1+z)^2}}$$
For the current values of $\Omega_{\Lambda}=0.69$, $\Omega_m=0.31$,$\Omega_{\kappa}= \Omega_r=0$
we have,
$$\eta=H_0^{-1}\int_{0}^{\infty}\frac {dz} {\sqrt{\Omega_{\Lambda}+\Omega_m(1+z)^3}}$$
$$\eta=H_0^{-1}\int_{0}^{\infty}\frac {dz} {\sqrt{0.69+0.31(1+z)^3}}$$
If we take $H_0=70km/s/Mpc$ then $1/H_0=1/(70\times 3.2408\,10^{-20})=4.4133353\,10^{17}s$
And the integral gives, $$\int_{0}^{\infty}\frac {dz} {\sqrt{0.69+0.31(1+z)^3}}=3.266054427285631$$ | {
"domain": "physics.stackexchange",
"id": 55080,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, big-bang, distance",
"url": null
} |
ros
Title: I am looking at getting fpga to generate pwm and quadrature decoders for dc motor and then interface fpga to ROS
Please, I want to find out if an FPGA can interface with ROS. Any previous work on that? I really need guidance on that. Thanks
Originally posted by Glory on ROS Answers with karma: 76 on 2014-07-31
Post score: 0
I'm not aware of an existing FPGA to ROS framework.
It shouldn't be terribly difficult to do, but an FPGA is probably overkill for most applications.
There are a number of implementation details that you'll have to work out:
You'll have to come up with a way to wire your FPGA to your ROS computer. Serial, USB and Ethernet may all be choices here.
You'll need to find or write a protocol that defines how the FPGA communicates with the computer. This could be a simple serial protocol, or it could be UDP over Ethernet, or something else.
You'll need to program the FPGA appropriately. This is usually done in Verilog or VHDL. This will include writing the appropriate hardware to generate your PWM signals, read your encoders, and communicate with your PC over the chosen protocol.
You'll need to write a ROS node which communicates with your FPGA and publishes and subscribes to ROS topics.
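For step 2, the protocol can be as simple as fixed-size frames with a start byte and a checksum. The wire format below (a PWM command plus two encoder counts) is purely hypothetical, just to show the idea:

```python
import struct

# Hypothetical wire format: a start byte, a little-endian payload carrying
# one PWM command (int16) and two encoder counts (int32), and a 1-byte
# additive checksum over the payload.
START = 0xA5

def encode_frame(pwm, enc_left, enc_right):
    payload = struct.pack('<hii', pwm, enc_left, enc_right)
    checksum = sum(payload) & 0xFF
    return bytes([START]) + payload + bytes([checksum])

def decode_frame(frame):
    if frame[0] != START or (sum(frame[1:-1]) & 0xFF) != frame[-1]:
        raise ValueError('bad frame')
    return struct.unpack('<hii', frame[1:-1])
```

The FPGA side would implement the same framing in Verilog/VHDL, and the ROS node would decode frames as they arrive and publish the encoder counts.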
Unless you really need the flexibility or speed of an FPGA, there are better, pre-built solutions for interfacing ROS to motors and encoders such as ros_arduino_bridge
Originally posted by ahendrix with karma: 47576 on 2014-07-31
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18845,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
cc.complexity-theory, reference-request, application-of-theory, ho.history-overview, ce.computational-finance
So in sum, while it seems like a worthwhile goal to help make economists aware of the results regarding complexity in economics (especially as some do take interest), I am not sure that we are in a position to argue that they should take much notice or change their approach; and I think a strong scientific argument would require more data rather than just philosophy. | {
"domain": "cstheory.stackexchange",
"id": 2073,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cc.complexity-theory, reference-request, application-of-theory, ho.history-overview, ce.computational-finance",
"url": null
} |
coding-theory
Title: How many sequences in a prefix code can be compressed by m bits? I have a little understanding problem with Appendix A ("Universal Codes") in the paper "Shannon Information and Kolmogorov complexity" by Gründwald and Vitanyi (Link).
At the end of page 50, they say something corresponding to:
For each Prefix-code, the fraction of sequences of length $n$ that can be compressed by more than $m$ bits is less than $2^{-m}$.
Either this is a writing mistake or I should make a break from reading.
It is easy to see that the fraction of strings of length $n$ with compression length smaller than $m$ is smaller than $2^{m-n}$ for a prefix code, because there are simply not enough "shorter" code words. Did they mean this?
It is also easy to understand, that the Kraft-Inequality $\sum_{x\in X}2^{-l(x)}\leq1$ holds for a prefix code with source word set $X$ and the compression length $l(x)$ for strings $x\in X$.
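The bound quoted from the paper is easy to check numerically with a small counting sketch (the function names are mine):

```python
def shorter_codewords(n, m):
    """Number of binary strings of length < n - m, i.e. the possible
    codewords that would compress an n-bit string by more than m bits."""
    return sum(2**i for i in range(n - m))  # equals 2**(n - m) - 1

def max_compressible_fraction(n, m):
    """Upper bound on the fraction of n-bit strings an injective code
    can compress by more than m bits."""
    return shorter_codewords(n, m) / 2**n
```

For every $n > m \ge 0$ this fraction stays below $2^{-m}$, which is exactly the quoted statement.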
Is that only a writing mistake? Respectively can you tell me an explanation? There is no problem with the statement. An encoding $f$ of the set of strings of length $n$ is a map from $\{0,1\}^n$ to $\{0,1\}^*$ which is injective (one-to-one). Such a mapping compresses a string $x$ by more than $m$ if $|f(x)| < |x| - m$. In our case, all the strings have length $n$, so a string $x \in \{0,1\}^n$ is compressed by more than $m$ if $|f(x)| < n-m$. There are $\sum_{i=0}^{n-m-1} 2^i = 2^{n-m}-1 < 2^{n-m}$ such strings, and since $f$ is injective, the fraction of strings compressed by more than $m$ is less than $2^{n-m}/2^n = 2^{-m}$. | {
"domain": "cs.stackexchange",
"id": 2299,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "coding-theory",
"url": null
} |
gazebo
if(_model->GetJointCount() == 0)
{ std::cerr << "Invalid joint count, Velodyne plugin not loaded\n"; return; }
this->model = _model;
this->joint = _model->GetJoints()[0];
this->pid = common::PID(0.1,0,0);
}
private: physics::ModelPtr model;
private: physics::JointPtr joint;
private: common::PID pid;
};
Originally posted by hari1234 on Gazebo Answers with karma: 56 on 2016-08-31
Post score: 0
That tutorial is known to have a lot of problems. I made a few changes to fix it, but the changes need approvals before they can go on the live site. One of the problems I solved was the one you mention in this question.
If you're willing, you could try the changed tutorials here and report any problems you find on the comments section? Instead of using the usual gazebo tutorials website, you can click on the links numbered 1 - 6. If you don't find any problems, you can press the approve button. We need 2 approvals to get it live.
Thanks!
Originally posted by chapulina with karma: 7504 on 2016-08-31
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by hari1234 on 2016-08-31:
I still get the same error. It shows jointcontroller class has no member function called SetVelocity. until above lines it works fine but as i add other two, error comes. i'm using gazebo2.2.
Comment by hari1234 on 2016-09-01:
Ok now after i upgraded the gazebo version from 2 to 7 with ros indigo it is working fine with the extra codes mentioned in answer. And i really like this new version, it is working very smoothly than previous one.
Comment by chapulina on 2016-09-01:
Thanks for looking into it! | {
"domain": "robotics.stackexchange",
"id": 3978,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo",
"url": null
} |
You have $x>0$ and $\Bbb Q_x=\{q\in\Bbb Q:q\le x\}$. Your first step, proving that $\Bbb Q_x\ne\varnothing$, doesn’t quite work the way you’ve stated it, because you never specified what $m$ is. You can turn it into a legitimate proof by specifying $m=1$, for instance:
Since $x>0$, $\frac1x\in\Bbb R$, and by the Archimedean property there is an $n\in\Bbb Z^+$ such that $n\ge\frac1x$. But then $\frac1n\le x$, and $\frac1n\in\Bbb Q$, so $\frac1n\in\Bbb Q_x$, which is therefore not empty.
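As a concrete illustration of that Archimedean step (not part of the proof itself), one can compute such an $n$ explicitly:

```python
import math
from fractions import Fraction

def rational_in_Qx(x):
    """Given x > 0, return a rational q = 1/n with 0 < q <= x,
    witnessing that Q_x is nonempty (the Archimedean argument above)."""
    n = math.ceil(1 / Fraction(x))  # a positive integer n with n >= 1/x
    return Fraction(1, n)
```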
This is working way too hard, though: since $x>0$, $0\in\Bbb Q_x$, and therefore $\Bbb Q_x\ne\varnothing\,$! (However, the argument will be useful later, so it’s not really wasted effort after all.) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9715639702485929,
"lm_q1q2_score": 0.80962815433799,
"lm_q2_score": 0.8333245973817158,
"openwebmath_perplexity": 79.15503258920864,
"openwebmath_score": 0.9721837043762207,
"tags": null,
"url": "https://math.stackexchange.com/questions/226793/show-that-mathbbq-is-dense-in-the-real-numbers-using-supremum"
} |
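The Archimedean step in the argument above is easy to illustrate numerically: for any $x>0$ there is a positive integer $n$ with $n\ge\frac1x$, and then $\frac1n\le x$. A minimal Python sketch (the function name is my own, not from the post):

```python
import math

def rational_below(x):
    """Given x > 0, return a positive integer n with 1/n <= x,
    exhibiting a rational 1/n in the set Q_x = {q in Q : q <= x}."""
    if x <= 0:
        raise ValueError("x must be positive")
    # Archimedean property: some integer n satisfies n >= 1/x.
    n = max(1, math.ceil(1 / x))
    assert 1 / n <= x
    return n

print(rational_below(0.3))  # -> 4, since 1/4 = 0.25 <= 0.3
```

As the answer notes, this is more work than simply observing $0\in\Bbb Q_x$, but the same construction reappears in the density argument later.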
reflection
Title: Why the transmitted pulse in pulse-echo technique cannot be too long? Question. An ultrasound pulse-echo technique is used to produce an image by reflection from many boundaries. If the transmitted pulse is too long, the image produced is of poor quality. Why?
My attempt. If the transmitted pulse is too long, then the pulse contains many wavelengths. After that, I would receive pulses containing the same number of wavelengths but with lower intensity. Then I am stuck. I even wonder why the transmitted pulse cannot be replaced by continuous ultrasound, so that we could have a continuous reflected image. Any kind of help would be appreciated! This has more to do with signal processing than physics per se. A pulse is usually a chirp, i.e. an increasing or decreasing frequency signal. You could dilate it, but if you keep all subbands active all the time then you have no way to determine a start or an end to the band signal, so no way to calculate delays either. Delays are what is used to compute distances in this type of imaging. The longer the sound takes to feed back to the sampler, the farther the target is assumed to be located. So you need a way to precisely determine the timing of what comes in and back. Also, some frequencies can be absorbed by the material, so the pulse is designed to send a wide range of frequencies at once. But it has to be short so you know what pulse number you are going to listen to next. Optimizations can be made so as to send multiple pulses before the first comes back; it just gets more complicated to make sure which is which. | {
"domain": "physics.stackexchange",
"id": 71099,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reflection",
"url": null
} |
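The answer above hinges on turning round-trip delays into distances. A minimal sketch of that computation (the 1540 m/s speed of sound in soft tissue is a typical textbook value, not a figure from the post):

```python
def echo_distance(delay_s, speed_m_s=1540.0):
    """Distance to a reflecting boundary from the round-trip echo delay.
    The pulse travels out and back, hence the factor of 2."""
    return speed_m_s * delay_s / 2.0

# A 0.1 ms round trip at 1540 m/s puts the boundary about 7.7 cm deep.
print(round(echo_distance(1e-4), 6))  # -> 0.077
```

The need for a short pulse follows from the same picture: echoes from two boundaries can only be told apart if their delays differ by more than the pulse duration.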
quantum-field-theory, quantum-chromodynamics, yang-mills, instantons
Title: Are theta vacua topologically protected? In discussions of Yang-Mills instantons it is often stated that one should sum in the path integral over all contributions of fluctuations around all the topologically distinct vacua labelled by winding number $n$.
Usually there follows a discussion on $\theta$-vacua, which are basically a linear combination of $n$-vacua, in the sense that $\lvert \theta \rangle = \sum e^{in\theta} \lvert n \rangle$ and that the procedure of summing over all fluctuations around instanton contributions in the path integral could be replaced by applying the usual rules to a Lagrangian where the term $ \sim \theta (*F, F)$ is added, where $(.,.)$ denotes the Cartan inner product.
I don't understand how this can be. Doing this, one seems to replace the path integral as a sum over contributions from different $n$-vacua by just the path integral expanded in a unique $\theta$-vacuum. Indeed, this seems to come down to summing over a particular linear combination of $n$-vacua. But WHY should you take into account only a particular $\theta$-vacuum, even though the $\theta$-vacua are inequivalent (they have different energy density)?
Why should e.g. QCD not just be studied in the true vacuum (the one with the lowest energy density) which is just $\theta=0$? Is this because, the $\theta$-vacua, too are topologically protected?
the procedure of summing over all fluctuations around instanton contributions in the path integral could be replaced by applying the usual rules to a Lagrangian where the term $\sim \theta (*F, F)$ is added, where $(.,.)$ denotes the Cartan inner product. | {
"domain": "physics.stackexchange",
"id": 57548,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, quantum-chromodynamics, yang-mills, instantons",
"url": null
} |
ros, gazebo, rviz, librviz
Title: How to initialize robot's position in RVIZ?
I'm using the Nav2d simulator, where in RViz the robot's position is set to X=0 and Y=0. I want to initialize the position to another coordinate each time I start the launch file.
I have been looking in the tutorial2.rviz file and changed some of the parameters, but it didn't change the position. I'm wondering whether, if the robot's position is not set in the tutorial2.rviz file, the program starts the position by itself at x=0, y=0.
So my question is where can I find the "parameter name" for x and y position of robot where I can add in tutorial2.rviz file?
I would appreciate help, since it will solve many of my problems.
Originally posted by RosUser on ROS Answers with karma: 81 on 2016-04-15
Post score: 0
I have found it! The robot's position in RViz can be changed in the tutorial.yaml file.
Originally posted by RosUser with karma: 81 on 2016-04-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jayess on 2017-08-02:
This doesn't answer the question of how to set the initial pose. It just states that it's possible. How did you go about solving your problem? | {
"domain": "robotics.stackexchange",
"id": 24381,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo, rviz, librviz",
"url": null
} |
Observe that $2$ and $4$ are not relatively prime. In general, assume that $(a,b)$ is an element in the group $\mathbb{Z}_m\times\mathbb{Z}_n$, and let $c=\operatorname{lcm}(m,n)$. Then $(a,b)^c=(0,0)$, so the order of $(a,b)$ is a divisor of $c$. But $c$ is strictly smaller than the number of elements $mn$ if $m$ and $n$ are not relatively prime, so the group cannot be cyclic.
A good theorem to use in checking the direct product of cyclic groups is the following:
$$\mathbb Z_m\times \mathbb Z_n \cong \mathbb Z_{(mn)} \; \underbrace{\iff}_{\text{ IF AND ONLY IF }\;} \gcd(m, n) = 1$$
This holds for any number of factors: $\mathbb Z_{n_1} \times \mathbb Z_{n_2} \times \cdots \times \mathbb Z_{n_n} \cong \mathbb Z_{n_1\cdot n_2\cdots n_n}$ if and only if the $n_i$ are pairwise relatively prime.
• $\Bbb Z_{mn}$ is always cyclic, with generator $1$. (So your first equality is false) – Pedro Tamaroff Nov 3 '13 at 13:33
• @Pedro I was just reminding the OP that $\mathbb Z_{mn}$ is always cyclic. Lighten up. – Namaste Nov 3 '13 at 13:38
• What do you mean? – Pedro Tamaroff Nov 3 '13 at 13:38
• @Pedro My earlier "aside" that $\mathbb Z_{mn}$ is cyclic was not intended to be dependent on $\gcd(m, n)$; it was simply a parenthetical remark that $\mathbb Z_{mn}$ is cyclic. – Namaste Nov 3 '13 at 13:41 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9678992942089575,
"lm_q1q2_score": 0.8302331363764239,
"lm_q2_score": 0.8577680977182187,
"openwebmath_perplexity": 151.84538537881303,
"openwebmath_score": 0.8193814754486084,
"tags": null,
"url": "https://math.stackexchange.com/questions/550107/show-that-mathbbz-2-times-mathbbz-4-is-not-a-cyclic-group/550136"
} |
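The criterion in the theorem above can be brute-force checked for small $m, n$: the product group is cyclic exactly when some element has order $mn$. A small sketch in additive notation, so the $c$-fold sum plays the role of $(a,b)^c$:

```python
from math import gcd

def order(a, b, m, n):
    """Order of the element (a, b) in Z_m x Z_n (written additively)."""
    x, y, k = a % m, b % n, 1
    while (x, y) != (0, 0):
        x, y, k = (x + a) % m, (y + b) % n, k + 1
    return k

def is_cyclic(m, n):
    """Z_m x Z_n is cyclic iff some element has order m*n."""
    return any(order(a, b, m, n) == m * n
               for a in range(m) for b in range(n))

print(is_cyclic(2, 4))  # -> False, since gcd(2, 4) = 2
print(is_cyclic(3, 4))  # -> True, since gcd(3, 4) = 1

# The theorem: cyclic exactly when gcd(m, n) = 1.
assert all(is_cyclic(m, n) == (gcd(m, n) == 1)
           for m in range(1, 7) for n in range(1, 7))
```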
newtonian-mechanics, harmonic-oscillator, terminology, definition, oscillators
$$\theta(t)=A\cos\left(\sqrt{\frac{g}{l}}t-\frac{\pi}{2}\right)=A\sin\left(\sqrt{\frac{g}{l}}t\right).$$ | {
"domain": "physics.stackexchange",
"id": 82503,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, harmonic-oscillator, terminology, definition, oscillators",
"url": null
} |
nuclear-engineering
Title: Stabilizing Radioactive Fission Products So most fission products have very long half-lives. But I noticed that if a neutron is added, the half-life becomes a fraction of what it originally was, so could this be a possible method of getting rid of nuclear waste faster? Would the neutron be absorbed by the nucleus? If not, why? Actually, most fission products have quite short half-lives. So-called spent fuel from civilian reactors is first stored for some months. This 'cooling down' period ensures the decay of most short-lived isotopes into more stable ones.
The cooled waste stream is then subjected to reprocessing with the purpose of recovering Plutonium and Uranium.
One residual waste stream contains residual Plutonium, Neptunium, Americium and Curium isotopes. Together these belong to the chemical group of the Actinides. These heavyweight elements arise when Uranium undergoes (multiple) neutron captures without subsequent fission.
Such isotopes, as you mentioned, have very long half lives which causes long term problems of safe storage. One way of dealing with them would be to use them as fissile materials (absorption of neutron, followed by fission) in so-called Actinide burners (reactors specifically designed to run on this kind of material). Development of these reactors has met with very tough technological challenges and as far as I know none are in commercial use at the moment. | {
"domain": "physics.stackexchange",
"id": 25154,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nuclear-engineering",
"url": null
} |
human-biology, human-anatomy
Title: Do decibels matter if the sound is outside human hearing? I bought a dog training device that emits sound at 25K hertz and 130 dB. Can this damage anyone’s hearing since they can’t hear it? Also is it possible for an infant to hear in the 25K hertz range? Thanks Your dog training device produces what is called ultrasonic noise. Children can hear in the 25 kHz range, though they may not develop that ability until they are school aged.
This review has a good overview of the literature around ultrasonic noise. They draw the conclusion that it's hard to isolate the effect of a particular frequency or frequencies exactly, but there is some inconclusive evidence that noise at the frequency and dB of your dog training device could cause hearing damage and have other negative effects. In several countries, you're above the admissible range, but this is for longer term exposure. | {
"domain": "biology.stackexchange",
"id": 8846,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "human-biology, human-anatomy",
"url": null
} |
c#, performance, php
We use a highly optimized built-in class for converting a hexadecimal description of a byte array into that byte array. Much simpler, much faster.
Then, we skip the MemoryStream entirely, since extra wrappers can only slow things down, and streams force us into sequential operation.
Finally, we use Parallel.For to process each block, passing it through rsa.Decrypt, and storing the output at the same array index the input came from.
The result's no longer a string; you can turn it into one after the end of the parallel for if you need to, but generally a byte array is better for storing arbitrary data anyway.
If your block operation changes the size (how does that work? and how did you figure out how much to stick into each block on the encryption side?) then
int blockSize = rsa.KeySize >> 3;
int blockCount = 1 + (encryptedBytes.Length - 1) / blockSize;
var decryptedChunks = new byte[blockCount][];
Parallel.For(0, blockCount, (i) => {
var offset = i * blockSize;
var buffer = new byte[Math.Min(blockSize, encryptedBytes.Length - offset)];
Buffer.BlockCopy(encryptedBytes, offset, buffer, 0, buffer.Length);
decryptedChunks[i] = rsa.Decrypt(buffer, false);
});
var decryptedBytes = decryptedChunks.SelectMany(x => x).ToArray();
Note that the parallelization will only work if your rsa.Decrypt method is thread-safe/stateless. If it's not, find one that is. | {
"domain": "codereview.stackexchange",
"id": 23650,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, php",
"url": null
} |
game, unit-testing, objective-c
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeWild]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeWild]];
NSMutableArray *matches = [_boardEval matchesForOrbs:customArray];
XCTAssert(matches.count == 1, @"Pass");
} | {
"domain": "codereview.stackexchange",
"id": 11771,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "game, unit-testing, objective-c",
"url": null
} |
electric-fields, potential, capacitance
Title: Discontinuity of electric potential in parallel plate capacitor Using Gauss' law, an idealisation of infinitely large plates and symmetry arguments one can show that the electric field of a parallel plate capacitor vanishes outside the plates and is constant between them.
Let us assume that one plate lies on the $xy$-plane, the other a distance $d$ above it (ie. at $z=d$).
Using the fact that the negative gradient of the electric potential equals the electric field, after integrating one finds that the potential between the plates is linearly dependent on $z$ plus an integration constant, say $c_2$. Outside it is constant (say with $c_1$ for $z>d$ and $c_3$ for $z<0$).
I read quite often in textbooks that after integrating a conservative vector field, we can explicitly calculate the integration constants by using the boundary conditions that come from the continuity of the potential and the fact that the potential vanishes at infinity.
When we apply this concept to the parallel plate capacitor however, it results in a contradiction; namely the only solution satisfying the conditions is the zero potential on the whole space, but then the electric field in turn must be zero everywhere.
I don't seem to be able to find the cause of this contradiction: does it come to existence by the idealisation argumentation, or can't we always say that the scalar potential of a conservative vector field must be continuous everywhere?
It seems that the arguments above work perfectly fine for spherical and cylindrical capacitors, which makes me even more perplexed.
Thanks for any insight! One has to be careful with boundary conditions. In general, for finite volume charge distributions ($\rho$ in $\mathrm{C\,m^{-3}}$), the potential is continuous and vanishes at infinity.
However, in some problems, in particular when there are infinite distributions such as an infinite plane, the potential doesn't vanish at infinity.
When charges are distributed on a surface (density $\sigma$ in $\mathrm{C\,m^{-2}}$), the electric field is not continuous, and the gap can be computed with | {
"domain": "physics.stackexchange",
"id": 50058,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electric-fields, potential, capacitance",
"url": null
} |
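The contradiction described in the question can be made concrete with a piecewise potential. Taking a uniform field $E_0$ in $+z$ between the plates ($0 \le z \le d$) and zero outside, continuity at $z=0$ and $z=d$ fixes all but one additive constant, and the two constant values at $z<0$ and $z>d$ necessarily differ by $E_0 d$, so the potential cannot vanish at both infinities unless $E_0=0$. A sketch (the function name and default values are mine):

```python
def potential(z, d=1.0, E0=1.0, V_below=0.0):
    """Potential of the idealized parallel-plate capacitor with field E0
    (in +z) between z = 0 and z = d, and zero field outside.
    Continuity at z = 0 and z = d fixes everything except the additive
    constant V_below; note V(+inf) = V_below - E0*d != V_below = V(-inf),
    so V cannot vanish at both infinities unless E0 = 0."""
    if z <= 0:
        return V_below                 # constant c3
    if z <= d:
        return V_below - E0 * z        # linear region: -dV/dz = E0
    return V_below - E0 * d            # constant c1

assert potential(-1.0) == potential(0.0)   # continuous at z = 0
assert potential(1.0) == potential(2.0)    # continuous at z = d
assert potential(2.0) != potential(-1.0)   # the two "infinities" disagree
```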
EXCEPT SELECT * FROM B
Unix shell
comm -23 a b [24]
grep -vf b a # less efficient, but works with small unsorted sets
## References
1. ^ a b c Halmos (1960) p.17
2. ^ Devlin (1979) p. 6.
3. ^ Bourbaki p. E II.6
4. ^ [1] The Comprehensive LaTeX Symbol List
5. ^ [2] clojure.set API reference
6. ^ Common Lisp HyperSpec, Function set-difference, nset-difference. Accessed on September 8, 2009.
7. ^ Set.difference<'T> Function (F#). Accessed on July 12, 2015.
8. ^ Set.( - )<'T> Method (F#). Accessed on July 12, 2015.
9. ^ Array subtraction, data structures. Accessed on July 28, 2014.
11. ^ Set (Java 2 Platform SE 5.0). JavaTM 2 Platform Standard Edition 5.0 API Specification, updated in 2004. Accessed on February 13, 2008.
12. ^ [3]. The Standard Library--Julia Language documentation. Accessed on September 24, 2014
13. ^ Complement. Mathematica Documentation Center for version 6.0, updated in 2008. Accessed on March 7, 2008.
14. ^ Setdiff. MATLAB Function Reference for version 7.6, updated in 2008. Accessed on May 19, 2008.
16. ^ [4]. GNU Octave Reference Manual
17. ^ PARI/GP User's Manual
18. ^ PHP: array_diff, PHP Manual
19. ^ a b [5]. Python v2.7.3 documentation. Accessed on January 17, 2013.
21. ^ [6]. The Racket Reference. Accessed on May 19, 2015.
22. ^ Class: Array Ruby Documentation
23. ^ a b scala.collection.Set. Scala Standard Library 2.11.7, Accessed on July 12, 2015.
24. ^ comm(1), Unix Seventh Edition Manual, 1979. | {
"domain": "wikipedia.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9898303419461302,
"lm_q1q2_score": 0.8115553054027504,
"lm_q2_score": 0.819893340314393,
"openwebmath_perplexity": 1323.3048270847241,
"openwebmath_score": 0.835507869720459,
"tags": null,
"url": "https://en.wikipedia.org/wiki/Set_difference"
} |
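Alongside the SQL and Unix-shell forms above, the Python entry ([19] in the reference list) computes the same set difference, either with the `-` operator or with `difference`:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5}

# Elements of a that are not in b (A \ B).
assert a - b == {1, 2}
assert a.difference(b) == {1, 2}

# Set difference is not symmetric: B \ A is generally different.
assert b - a == {5}
```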
c#, calculator, wpf
Title: Simple calculator in C# WPF I'm beginning to learn WPF and wanted to do a quick exercise. The code is as simple as it gets but I tried to evaluate each and every possible edge case. Any improvement is welcome, thanks.
C#:
using System;
using System.Globalization;
using System.Windows;
using System.Windows.Controls;
namespace Calculator
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
double firstNumber, secondNumber, resultNumber = 0;
bool calcDone = false;
Operations operation = Operations.None;
string separator = CultureInfo.CurrentCulture.NumberFormat.NumberDecimalSeparator;
public MainWindow()
{
InitializeComponent();
//Assign to the decimal button the separator from the current culture
dec.Content = separator;
}
//List the possible numeric operations
private enum Operations
{
None,
Division,
Multiplication,
Subtraction,
Sum
}
//Manage number buttons input
private void NumberButton_Click(object sender, RoutedEventArgs e)
{
Button button = (Button)sender;
if (calcDone) //calculation already done
{
result.Content = $"{button.Content}";
calcDone = false;
}
else //calculation not yet done
{
if (result.Content.ToString() == "0")
{
result.Content = $"{button.Content}";
}
else
{
result.Content = $"{result.Content}{button.Content}";
}
}
}
//Manage operation buttons input
private void OperationButton_Click(object sender, RoutedEventArgs e)
{
Button button = (Button)sender; | {
"domain": "codereview.stackexchange",
"id": 34533,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, calculator, wpf",
"url": null
} |
newtonian-mechanics, classical-mechanics, projectile, rotational-kinematics, coriolis-effect
Due to the earth's rotation, as seen by an observer on earth (a non-inertial reference frame), the ball experiences fictitious forces, one of which is the Coriolis force. The Coriolis force appears when an object has a velocity in the rotating frame of reference, here the velocity of the ball relative to the earth. The Coriolis force is $-2m\,\vec{\omega}\times\frac{d^*\vec{r}}{dt}$, where $m$ is the mass of the object (here, the ball), $\vec{\omega}$ is the angular velocity of the rotating frame of reference, $\vec{r}$ is the position of the object (in either frame) and $\frac{d^*\vec{r}}{dt}$ is the velocity of the object in the rotating frame of reference. The Coriolis force, hence the sideways deflection, depends on the latitude where the ball is thrown. In the non-inertial reference frame, the angular momentum of the ball relative to the center of the earth is not zero due to the Coriolis force.
(As an aside, the angular momentum depends on the point about which it is evaluated.)
See a physics mechanics textbook such as Symon, Mechanics, and the question/response @mmesser references in a comment above. | {
"domain": "physics.stackexchange",
"id": 93973,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, classical-mechanics, projectile, rotational-kinematics, coriolis-effect",
"url": null
} |
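The formula quoted above, $F = -2m\,\vec{\omega}\times\vec{v}$, can be evaluated directly. The sketch below uses an illustrative mass, speed, and latitude of my own choosing (none of these numbers appear in the post):

```python
import math

def coriolis_force(m, omega, v):
    """F = -2 m (omega x v); all vectors are (x, y, z) tuples."""
    cx = omega[1] * v[2] - omega[2] * v[1]
    cy = omega[2] * v[0] - omega[0] * v[2]
    cz = omega[0] * v[1] - omega[1] * v[0]
    return (-2.0 * m * cx, -2.0 * m * cy, -2.0 * m * cz)

# Local frame: x east, y north, z up, at latitude 45 degrees north.
lat = math.radians(45.0)
Omega = 7.292e-5  # Earth's angular speed in rad/s
omega = (0.0, Omega * math.cos(lat), Omega * math.sin(lat))

# A 0.145 kg ball thrown due east at 40 m/s.
F = coriolis_force(0.145, omega, (40.0, 0.0, 0.0))
# F[1] < 0: deflection to the right (south); F[2] > 0: slight upward push.
assert F[0] == 0.0 and F[1] < 0.0 and F[2] > 0.0
```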
# Calculate the ratio of the side length of a tetrahedron to the side length of the tetrahedron which its centroids form [duplicate]
$OABC$ is a regular tetrahedron. Let $E$, $F$, $G$, $H$ be the centroids of the triangles $OBA$, $OCB$, $OAC$, $ABC$, respectively. You are given that EFGH is also a regular tetrahedron. Using $OA=a$, $OB=b$, $OC=c$, find the ratio of the side length of tetrahedron $EFGH$ to the side length of tetrahedron $OABC$.
Show using vectors. Please give full proof to the answer.
Link to the picture of the tetrahedron used in this question: http://imgur.com/SFzGA5E
The correct answer is a 1:3 ratio; however, I am unsure how this is calculated.
## marked as duplicate by Blue, Andrew D. Hwang, user26857, Christopher, Mike Pierce Jun 1 '15 at 14:42
This question was marked as an exact duplicate of an existing question.
• can you please share your ideas as to how you approached this problem – happymath Jun 1 '15 at 8:55
• Welcome to Math.SE. Thoughtful questions, even homework-related, are welcome. However, you should not expect others to do your homework for you, and it's inappropriate to post your homework verbatim. (That may not be what you're doing, but all people here can go by is appearances.) Instead, please try to ask questions about specific places where you're stuck, or about particular concepts you don't understand. (Here's a hint for your question: A regular tetrahedron can be inscribed in a cube by "taking alternate vertices".) – Andrew D. Hwang Jun 1 '15 at 10:20
• Albert beat you to it. – Blue Jun 1 '15 at 11:35 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9904406006529085,
"lm_q1q2_score": 0.8188089301148509,
"lm_q2_score": 0.8267117983401364,
"openwebmath_perplexity": 349.31478567902985,
"openwebmath_score": 0.8595666289329529,
"tags": null,
"url": "https://math.stackexchange.com/questions/1307580/calculate-the-ratio-of-the-side-length-of-a-tetrahedron-to-the-side-length-of-th"
} |
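The accepted 1:3 ratio above can be verified with a direct vector computation: each face centroid is the average of three vertices, and each edge of $EFGH$ comes out one third of an edge of $OABC$. A numeric sketch (the cube-vertex coordinates for a regular tetrahedron are my own choice, following the hint in the comments):

```python
def centroid(*pts):
    """Centroid of a set of points given as coordinate tuples."""
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def dist(p, q):
    """Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# A regular tetrahedron: alternate vertices of a cube.
O, A, B, C = (1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)

E = centroid(O, B, A)   # centroid of face OBA
F = centroid(O, C, B)   # centroid of face OCB
G = centroid(O, A, C)   # centroid of face OAC
H = centroid(A, B, C)   # centroid of face ABC

ratio = dist(E, F) / dist(O, A)
print(round(ratio, 12))  # -> 0.333333333333
```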
I found this a little tricky. The best solution I could find was to let:
$\tan^{-1}\frac{3}{4} = x$ so $\tan x = \frac{3}{4}$
Then let x = 2y so that $\tan 2y = \frac{3}{4}$
Solve for y in the form $y = \tan^{-1}z$, where z is something you have to find. Only one value is admissible. Express x as 2y = $2\tan^{-1}z.$
Now observe that $\frac{1 + \tan w}{1 - \tan w} = \tan(w + \frac{\pi}{4})$. Use that to find an alternative form for $\tan^{-1}z$, which will allow you to find x, in the form you need.
There might be a simpler way (indeed, it might start with an alternative solution of the integral), but I can't immediately find one.
Last edited: May 17, 2014
3. May 17, 2014
### AlephZero
To show the answers are the same, you have to show $2 \tan^{-1} 2 + \sin^{-1}\displaystyle\frac 4 5 = \pi$.
From a 3-4-5 triangle, $\sin^{-1}\displaystyle\frac 4 5 = \tan^{-1}\displaystyle\frac 4 3$.
From $\tan^{-1}a + \tan^{-1}b = \tan^{-1}\displaystyle\frac{a+b}{1-ab}$,
$2 \tan^{-1} 2 = \tan^{-1}\displaystyle\frac {-4} 3 = \pi - \tan^{-1}\displaystyle\frac 4 3$.
QED.
4. May 22, 2014
### sooyong94 | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806484125338,
"lm_q1q2_score": 0.8604367540940261,
"lm_q2_score": 0.8774767842777551,
"openwebmath_perplexity": 928.686408494233,
"openwebmath_score": 0.8789370656013489,
"tags": null,
"url": "https://www.physicsforums.com/threads/integrating-arcsech-x.754086/"
} |
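AlephZero's identity can be checked numerically before (or after) proving it; a quick sanity check:

```python
import math

# 2*atan(2) + asin(4/5) should equal pi.
lhs = 2 * math.atan(2) + math.asin(4 / 5)
print(abs(lhs - math.pi) < 1e-12)  # -> True

# The 3-4-5 triangle step: asin(4/5) equals atan(4/3).
assert abs(math.asin(4 / 5) - math.atan(4 / 3)) < 1e-12
```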
machine-learning, recommendation-systems
$$c_i = {a_i \over a_1 + \dots + a_n}$$
where
$$a_i = {1 \over \text{Var}(X_i)}.$$
Is a linear combination optimal (as opposed to other, nonlinear methods of combining the predictions)? In this situation, if we additionally know that the errors made by each recommendation algorithm have a Gaussian distribution, then yes, a linear combination is optimal. If the errors have a different distribution, then some other combination might be better.
As you can see, this will depend on the loss function we select, and on other assumptions. That said, in practice, a linear combination (i.e., a weighted average) is often a reasonable choice.
There's lots written about this sort of thing in the statistics literature. For instance, the above discussion is just a restatement of the fact that the average is the optimal estimator of the mean of a Gaussian distribution (i.e., it minimizes the mean squared error). See, e.g., https://en.wikipedia.org/wiki/Estimator, https://en.wikipedia.org/wiki/Mean_squared_error, https://stats.stackexchange.com/q/81571/2921, https://stats.stackexchange.com/q/97765/2921, https://stats.stackexchange.com/q/48864/2921, https://math.stackexchange.com/q/9032/14578. | {
"domain": "cs.stackexchange",
"id": 9244,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, recommendation-systems",
"url": null
} |
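The coefficients above are inverse-variance weights. A quick sketch of combining two predictors this way (the example numbers are made up for illustration):

```python
def inverse_variance_combine(preds, variances):
    """Weighted average with a_i = 1/Var(X_i) and c_i = a_i / sum(a).
    This is the variance-minimizing linear combination of unbiased
    estimates (and optimal under Gaussian errors, as noted above)."""
    a = [1.0 / v for v in variances]
    total = sum(a)
    weights = [ai / total for ai in a]
    combined = sum(w * p for w, p in zip(weights, preds))
    return combined, weights

# Two predictors: one precise (Var = 1), one noisy (Var = 4).
est, w = inverse_variance_combine([3.0, 4.0], [1.0, 4.0])
print(w)              # -> [0.8, 0.2]  (the low-variance predictor dominates)
print(round(est, 10)) # -> 3.2
```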
cosmology, cosmic-microwave-background
Title: Since the microwave background radiation came into being before stars, shouldn't all existing stars (given sufficient equipment) be visible? It's commonly said that, due to the rapid expansion of the universe, it is possible that there are objects (such as stars) that could be beyond our field of vision, given the finite speed of light. Since it is possible that space is expanding faster than the speed of light, that makes sense. But, if we can see microwave background radiation (MBR) in whatever direction we look and if MBR is from an era prior to the creation of stars, when we see the MBR, we are seeing something that existed before the creation of stars. We are seeing something that is older and further than stars. If we can see IT, then must we not be seeing all the stars which came after it, because they would be younger and closer?
The diagram should clarify. It is qualitative, and uses space and time coordinates chosen so that light rays are diagonal for simplicity. Thus it does not directly show the expansion of the universe.
The green line is our past light cone. Note that what we see in the sky is what's on it (not inside it). Of course, space is 3D (rather than 1D), so we can see beyond stars by looking between them, but the CMB was emitted everywhere, so we cannot see anything beyond that.
The diagram shows that there are stars we cannot see because they are too far away for their light to reach us yet. | {
"domain": "physics.stackexchange",
"id": 77517,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, cosmic-microwave-background",
"url": null
} |
# ArcTan(2) a rational multiple of $\pi$?
Consider a $2 \times 1$ rectangle split by a diagonal. Then the two angles at a corner are ArcTan(2) and ArcTan(1/2), which are about $63.4^\circ$ and $26.6^\circ$. Of course the sum of these angles is $90^\circ = \pi/2$.
I would like to know if these angles are rational multiples of $\pi$. It doesn't appear that they are, e.g., $(\tan^{-1} 2 )/\pi$ is computed as
0.3524163823495667258245989237752594740488654761130821054000776871372885232139736632682857010522101960
to 100 decimal places by Mathematica. But is there a theorem that could be applied here to prove that these angles are irrational multiples of $\pi$? Thanks for ideas and/or pointers!
(This question arose thinking about Dehn invariants.) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.974434786819155,
"lm_q1q2_score": 0.801169692247953,
"lm_q2_score": 0.8221891327004132,
"openwebmath_perplexity": 164.01330212639883,
"openwebmath_score": 0.9378917217254639,
"tags": null,
"url": "http://math.stackexchange.com/questions/79861/arctan2-a-rational-multiple-of-pi"
} |
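The quoted decimal is easy to reproduce to double precision (for the irrationality itself, Niven's theorem is the standard pointer: if $\theta/\pi$ is rational and $\tan\theta$ is rational, then $\tan\theta\in\{0,\pm1\}$, which rules out $\tan\theta=2$):

```python
import math

ratio = math.atan(2) / math.pi
print(ratio)  # agrees with the Mathematica digits above to double precision
```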
quantum-mechanics, wavefunction, schroedinger-equation, education, linear-algebra
I actually think there are many advantages to starting from the Heisenberg picture (or even the path integral), or at least learning about them earlier than is taught in, say, Griffiths' book. For example, in my experience, many students have difficulty correctly generalizing the Schrodinger picture from one particle to two particles (there is a temptation to think of two wavefunctions rather than one wavefunction on a larger space), and I think that generalization is easier in the Heisenberg picture or using path integrals.
To summarize, independent of subjective notions of which formalism is "more intuitive," I think being able to solve the Hydrogen atom (Coulomb potential) with relatively little new-to-the-students mathematics is a major advantage to using the Schrodinger picture in a first quantum mechanics course. | {
"domain": "physics.stackexchange",
"id": 92998,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, education, linear-algebra",
"url": null
} |
sds-page, experimental
Title: What would cause proteins to get stuck in the stacking layer of a SDS-Page gel Typically when proteins aggregate, they will get stuck at the top of the well. However, we're seeing some protein aggregate in the stacking layer even when we're treating the loading volume with DTT.
One peculiar attribute of this experiment is that we're trying to carry out a Cu(I)-catalyzed azide-alkyne Click reaction. Without the Cu(I), the proteins run normally. However, after the Click reaction, while we do see some of our expected clicked product, one of our proteins is disappearing into the top band. Our hypothesis is that either the copper(I) is changing the migration or oxidative damage from the Cu(I) -> Cu(II) transition is altering the protein.
Returning to the original question, what would cause a protein to stop at the stacking layer vs. at the combs?
[edit]
According to my labmate who was having this problem, the protein was cross-linking with itself to create fairly sizeable polymers. We also saw ladders of the protein with ascending size. Spinning the clicked reaction removed the issue but also resulted in a loss of the protein. It sounds like he wasn't treating with a sufficient amount of DTT to break up the mixture.
This unfortunately still doesn't differentiate between proteins stuck at the combs vs. stuck at the stacking layer. Sounds like the copper is cross-linking the protein or creating aggregates that the SDS buffer can't break up. Add EDTA to your loading buffer before you cook it? | {
"domain": "biology.stackexchange",
"id": 525,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "sds-page, experimental",
"url": null
} |
urdf, ros-hydro
<!-- Robot Model -->
<arg name="model" default="<my_robot>" />
<!-- Robot Self Filter -->
<node pkg="robot_self_filter" name="robot_self_filter" type="self_filter" respawn="true" output="screen">
...
</node>
<!-- Specify URDF file -->
<param name="robot_description"
textfile="$(find robot_self_filter)/urdf/$(arg model).urdf"/>
</launch>
Also, for others, you might get this warning:
[ WARN] [1439998460.539715578]: Self see links need to be an array
when specifying the self_see_links parameter in the launch file. One thing you can try is 1) create a yaml file, 2) add the following array of dictionaries:
self_see_links: [{"name":"l_upper_arm_link","padding":0.02,"scale":0.1},
{"name":"l_upper_arm_roll_link","padding":0.02,"scale":0.1},
{"name":"l_elbow_flex_link","padding":0.02,"scale":0.1},
{"name":"l_elbow_flex_link","padding":0.02,"scale":0.1}]
and 3) load it in the launch file by replacing "< param name="self_see_links" type="string"... />" with the following:
<!-- Load self_see_links -->
<rosparam command="load" file="$(find robot_self_filter)/launch/<my_self_see_links>.yaml"/>
Originally posted by kit_viz with karma: 26 on 2015-08-19
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 18365,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "urdf, ros-hydro",
"url": null
} |
### In this section we give a quick review of summation notation. Summation notation is heavily used when defining the definite integral and when we first talk about determining the area between a curve and the x-axis.
In calculus, summation or sigma (Σ) notation represents adding many values together. The "a" in the sigma notation says that you sum all of the values of a; in other words, you're adding up a series of values: $a_1, a_2, a_3, \dots, a_n$. The letter $i$ is the index of summation. The limits of summation are often understood to mean $i = 1$ through $n$, in which case the notation below and above the summation sign is omitted. So the expression $\sum x_i$ means sum the values of $x$, starting at $x_1$ and ending with $x_n$, and $\sum x_i^2$ means sum the squared values of $x$. That symbol is the capital Greek letter sigma, and so the notation is sometimes called sigma notation instead of summation notation. In $\sum_{k=1}^{n}$, the $k$ is called the index of summation, $k=1$ is the lower limit of the summation, and $k=n$ (although the $k$ is only written once) is the upper limit of the summation. Summation notation is a simple method for indicating the sum of a finite (ending) number of terms in a sequence. When using sigma notation, the variable defined below the Σ is called the index of summation; it is set equal to the lower limit of summation, which is the number used to generate the first term in the series. The number above the sigma, called the upper limit of summation, is the number used to generate the last term in the series.
## Use summation notation to express the sum of all numbers; use summation notation to express the sum of a subset. The index variable i goes from 1 to 3.
Understand how to represent a mathematical series; Understand how indices are represented; Understand how to represent a summation with sigma notation | {
"domain": "netlify.app",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226334351969,
"lm_q1q2_score": 0.8010543483400798,
"lm_q2_score": 0.8198933381139645,
"openwebmath_perplexity": 395.39551297137973,
"openwebmath_score": 0.9918047785758972,
"tags": null,
"url": "https://topbinhxqfznm.netlify.app/cordier67502kyzo/the-index-of-summation-notation-sup.html"
} |
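As a concrete illustration of the notation reviewed above (an editorial addition, not part of the original page), the sums $\sum_{i=1}^{n} x_i$ and $\sum_{i=1}^{n} x_i^2$ map directly onto a loop:

```python
# Sigma notation as a loop: add f(x_i) for i = 1 .. n.
def sigma_sum(x, f=lambda v: v):
    total = 0
    for value in x:
        total += f(value)
    return total

x = [2, 3, 5]
plain = sigma_sum(x)                     # x_1 + x_2 + x_3 = 10
squares = sigma_sum(x, lambda v: v * v)  # x_1^2 + x_2^2 + x_3^2 = 38
```

The function argument plays the role of the term being summed under the Σ sign, and the sequence bounds supply the lower and upper limits.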
organic-chemistry, nitro-compounds, aromaticity
What we can clearly see is that there is one totally symmetric bonding orbital, two degenerate (orthogonal) slightly bonding (or non-bonding) orbitals, and another totally symmetric anti-bonding orbital.
The remarkable stabilisation comes from the lowest lying π orbital, which is delocalised over all atoms, just like in the more obvious cases like benzene. Additionally, such compounds often retain their planarity by undergoing substitution reactions, rather than addition reactions.
Pro and contra of Y-aromaticity
The concept itself allows the discussion of certain electronic properties in quite simple terms. It is related enough to the general concept of aromaticity so that it can easily be understood at the same level of teaching. Like aromaticity, it allows predictions - to a certain degree - about the reactivity of such compounds.
Peter Grund makes the following assessment, arguing in favour of the concept of acyclic aromaticity.[3]
Despite recent trends, it would appear useful to designate systems as "aromatic" if they represent a delocalized system exhibiting peculiar chemical stability, in order to focus attention on such systems even when they are buried in complex structures. Since guanidine and its derivatives appear to possess really exceptional stability, and since their physical and chemical properties appear to be dominated by the tendency to retain the closed-shell, Y-delocalized 6 pi-electron configuration, we suggest that it is profitable to consider such substances as possessing a special type of "aromatic character," despite their acyclic nature. | {
"domain": "chemistry.stackexchange",
"id": 3930,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, nitro-compounds, aromaticity",
"url": null
} |
newtonian-mechanics, kinematics, potential
Title: Potential Gradient relation I was just wondering why exactly the force is the negative gradient of the potential, as shown by the relation $${\bf F}=-{\bf\nabla}V$$ I know that this equation only holds for conservative forces, where the potential only depends on the position of an object. The only problem I have is why there is a minus sign, and what the minus sign means in a physical context. Well, despite the fact that the other answers are interesting, I'd prefer going directly to the "origin" of this thing, rather than its useful physical meaning.
The key idea is that
The minus sign is just a convention.
Forces do exist, and we define the potential based on them, not vice-versa.
By the way, I am committing the mistake I hate: we are talking about potentials. But actually, the relation is
$$ \vec{F}=-\vec{\nabla}E_p$$
Where $E_p$ is the potential energy. You'll say: "that's what I wrote", yes, but we are lazy and we say "potential", but we write potential energy. I curse the person who decided to use $V$ for potential energy, when it was being used for "potential", as in Volts of electric potential. Volts $\neq$ joules, you know.
Well, sorry, back to the topic. The thing is that, the only thing we know is:
A Conservative force is such that the work done by it does not depend on the path.
But that implies that we can write work as "one function", evaluated at the final point, minus the same function evaluated at the initial point: | {
"domain": "physics.stackexchange",
"id": 53026,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, kinematics, potential",
"url": null
} |
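The sign convention discussed above can be checked numerically. The sketch below is an editorial illustration (the spring constant and test point are arbitrary choices): for $E_p(x) = \frac{1}{2}kx^2$, the force recovered from $F = -\mathrm{d}E_p/\mathrm{d}x$ by a central finite difference is the restoring force $-kx$, pointing back toward the minimum of the potential energy.

```python
def potential_energy(x, k=3.0):
    # Spring potential E_p = (1/2) k x^2  (k = 3.0 is an arbitrary choice)
    return 0.5 * k * x * x

def force_from_potential(x, k=3.0, h=1e-6):
    # F = -dE_p/dx, computed via a central finite difference
    return -(potential_energy(x + h, k) - potential_energy(x - h, k)) / (2 * h)

# At x = 2 the exact force is -k*x = -6.0: negative, i.e. pointing back
# toward the minimum of E_p, which is exactly what the minus sign encodes.
approx = force_from_potential(2.0)
```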
Hint: Find the square of $1+\sqrt{3}$.
Since $(4+2\sqrt3)(4-2\sqrt3)=16-12=4$, try squaring: \begin{align} \left(\sqrt{4+2\sqrt3}+\sqrt{4-2\sqrt3}\right)^2 &=(4+2\sqrt3)+(4-2\sqrt3)+2\sqrt{(4+2\sqrt3)(4-2\sqrt3)}\\ &=8+2\sqrt{16-12}\\[6pt] &=12 \end{align} Therefore, $\sqrt{4+2\sqrt3}+\sqrt{4-2\sqrt3}=2\sqrt3$
Write as $\sqrt{4+2\sqrt{3}} = a+b\sqrt{3}$. Now square both sides, equate real and radical part. This gives two equations in $a$ and $b$. Now eliminate $a$, solve for $b$. Goes perfect. Same for the other term.
| {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.985496420308618,
"lm_q1q2_score": 0.8169162240909907,
"lm_q2_score": 0.8289388040954683,
"openwebmath_perplexity": 1999.5378458537025,
"openwebmath_score": 1.0000046491622925,
"tags": null,
"url": "http://math.stackexchange.com/questions/383975/simplifying-sqrt42-sqrt3-sqrt4-2-sqrt3"
} |
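Both answers above can be sanity-checked numerically; the snippet below (an editorial addition) confirms $\sqrt{4+2\sqrt3}+\sqrt{4-2\sqrt3}=2\sqrt3$ and, per the hint, that $(1+\sqrt3)^2 = 4+2\sqrt3$:

```python
import math

s = math.sqrt(3)

# The hint: (1 + sqrt(3))^2 = 1 + 2*sqrt(3) + 3 = 4 + 2*sqrt(3),
# so sqrt(4 + 2*sqrt(3)) = 1 + sqrt(3); likewise (sqrt(3) - 1)^2 = 4 - 2*sqrt(3).
total = math.sqrt(4 + 2 * s) + math.sqrt(4 - 2 * s)
hint_square = (1 + s) ** 2
```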
beginner, c
Title: Automatically scan and resize buffer in C I have a simple function that reads character by character from stdin, resizing the buffer whenever needed. The implementation will only allow 256 characters to be read, but that can easily be modified. Are there any obvious problems with this function, i.e. does it rely on undefined behaviour? And how can performance be improved?
void scan(char **buffer) {
char *newBuffer;
unsigned char i = 0;
unsigned char size = 1;
*buffer = malloc(16);
(*buffer)[0] = 0;
while (1) {
(*buffer)[i] = getchar();
if ((*buffer)[i] == '\n') {
(*buffer)[i] = 0;
return;
}
if (i >= (size * 16)) {
size++;
newBuffer = realloc(*buffer, size * 16);
*buffer = newBuffer;
}
i++;
}
} Missing includes
I think you need to include <stdio.h> and <stdlib.h> for successful compilation.
Always check that allocations succeed
Look at
*buffer = malloc(16);
(*buffer)[0] = 0;
If malloc() returns a null pointer, then the assignment to its first element is undefined behaviour. Your program could crash, destroy your system, or (if you're unlucky) appear to work.
Always check that input succeeds
If getchar() returns EOF, we should stop reading. Note that by storing the result in a char, we lose the ability to distinguish EOF from valid input.
Avoid output-only parameters
Why do we return void, and instead write our result to a supplied pointer argument? I could understand accepting an argument if we were to re-use a buffer passed in, but we just discard it. I'd write this as
/* caller must release allocated memory with free() */
char *scan(void) | {
"domain": "codereview.stackexchange",
"id": 32557,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, c",
"url": null
} |
$$L < 1$$
the series $$\sum a_n$$ converges absolutely, if $$L>1$$ the series diverges, and if $$L=1$$ this test gives no information.
Proof.
The example above essentially proves the first part of this, if we simply replace $$1/5$$ by $$L$$ and $$1/2$$ by $$r$$. Suppose that $$L>1$$, and pick $$r$$ so that $$1 < r < L$$. Then for $$n\ge N$$, for some $$N$$, $${|a_{n+1}|\over |a_n|} > r \quad \hbox{and}\quad |a_{n+1}| > r|a_n|.$$ This implies that $$|a_{N+k}|>r^k|a_N|$$, but since $$r>1$$ this means that $$\lim_{k\to\infty}|a_{N+k}|\not=0$$, which means also that $$\lim_{n\to\infty}a_n\not=0$$. By the divergence test, the series diverges.
$$\square$$
To see that we get no information when $$L=1$$, we need to exhibit two series with $$L=1$$, one that converges and one that diverges. It is easy to see that $$\sum 1/n^2$$ and $$\sum 1/n$$ do the job.
Example 11.7.2 | {
"domain": "libretexts.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9873750507184356,
"lm_q1q2_score": 0.8118090366002171,
"lm_q2_score": 0.8221891327004133,
"openwebmath_perplexity": 175.8374803024278,
"openwebmath_score": 0.9614256620407104,
"tags": null,
"url": "https://math.libretexts.org/Bookshelves/Calculus/Map%3A_Calculus_-_Early_Transcendentals_(Stewart)/11%3A_Infinite_Sequences_And_Series/11.06%3A_Absolute_Convergence_and_the_Ratio_and_Root_Test"
} |
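The $L=1$ case is easy to illustrate numerically (editorial addition): both $\sum 1/n^2$ and $\sum 1/n$ have term ratios tending to $1$, yet one converges and the other diverges, which is exactly why the ratio test gives no information there.

```python
import math

inv_sq = lambda n: 1.0 / (n * n)   # terms of sum 1/n^2 (converges)
inv_harm = lambda n: 1.0 / n       # terms of sum 1/n   (diverges)

N = 10**5

# Both term ratios a_{n+1}/a_n tend to L = 1, so the ratio test is silent:
r_sq = inv_sq(N + 1) / inv_sq(N)
r_harm = inv_harm(N + 1) / inv_harm(N)

# Yet the partial sums behave completely differently:
partial_sq = sum(inv_sq(n) for n in range(1, N + 1))      # stays bounded
partial_harm = sum(inv_harm(n) for n in range(1, N + 1))  # grows like ln N

zeta2 = math.pi ** 2 / 6   # the known limit of sum 1/n^2
```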
• No. It is for $\frac{\partial \mathrm{trace}((S^T S)^{-1})}{\partial S}$ Feb 26, 2016 at 21:57
• @user2987 see update, seems that it's nothing but chain rule. And in general you will have $\partial_{S}Tr[(S^TS)^{-n}] = -2nS(S^TS)^{-n-1}$ for $n>0$ Feb 26, 2016 at 22:07
Define a new symmetric matrix variable \eqalign{ X &= (S^TS)^{-1} \cr dX &= -X\,\,d\big(S^T\!S\big)\,\,X \cr &= -X\,\,(dS^TS+S^TdS)\,\,X \cr &= -2\,X\,\,{\rm sym}(S^TdS)\,\,X \cr } Write the function using the Frobenius Inner Product and this new variable. Then finding the differential and gradient is pretty easy. \eqalign{ f &= X:X \cr\cr df &= 2\,X:dX \cr &= -4\,X:X\,{\rm sym}(S^T\,dS)\,X \cr &= -4\,X^3:{\rm sym}(S^T\,dS) \cr &= -4\,{\rm sym}(X^3):S^T\,dS \cr &= -4\,X^3:S^T\,dS \cr &= -4\,SX^3:dS \cr\cr \frac{\partial f}{\partial S} &= -4\,SX^3 \cr &= -4\,S(S^TS)^{-3} \cr } | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9877587268703082,
"lm_q1q2_score": 0.8029167892206525,
"lm_q2_score": 0.8128673201042492,
"openwebmath_perplexity": 2943.0001837574187,
"openwebmath_score": 1.0000100135803223,
"tags": null,
"url": "https://math.stackexchange.com/questions/1673554/what-is-the-derivative-of-mathrmtracest-s-2-w-r-t-s"
} |
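The final formula $\partial f/\partial S = -4\,S(S^TS)^{-3}$ can be verified by finite differences. The sketch below is an editorial addition using plain-Python $2\times2$ matrix algebra and an arbitrary test matrix $S$:

```python
# Check  d/dS Tr((S^T S)^{-2}) = -4 S (S^T S)^{-3}  by finite differences.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def f(S):
    X = inv2(matmul(transpose(S), S))   # X = (S^T S)^{-1}
    return trace(matmul(X, X))          # f = Tr(X^2) = X : X

def analytic_grad(S):
    X = inv2(matmul(transpose(S), S))
    X3 = matmul(X, matmul(X, X))
    G = matmul(S, X3)
    return [[-4.0 * G[i][j] for j in range(2)] for i in range(2)]

S = [[2.0, 0.5], [-1.0, 3.0]]
grad = analytic_grad(S)

# Central finite difference in each entry of S.
h = 1e-6
numeric = [[0.0, 0.0], [0.0, 0.0]]
for i in range(2):
    for j in range(2):
        Sp = [row[:] for row in S]; Sp[i][j] += h
        Sm = [row[:] for row in S]; Sm[i][j] -= h
        numeric[i][j] = (f(Sp) - f(Sm)) / (2 * h)
```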
Conflicting answers when using Complements Principle and the Inclusion-Exclusion Principle
The question I'm looking at is:
Andy, Bill, Carl and Dave are 4 students on a team of 10. 5 must be chosen for a tournament; how many teams can be picked if Andy, Bill, Carl or Dave must be on the team?
Using the inclusion-exclusion principle:
Let $A_1 =$ teams with Andy, $A_2 =$ teams with Bill, etc.
$$|A_i| = {9 \choose 4}= 126$$
$$|A_i \cap A_j| = {8 \choose 3} = 56\text{ for }i \neq j$$
$$|A_i \cap A_j \cap A_k| = {7 \choose 2} =21\text{ for }i \neq j \neq k$$
$$|A_1 \cap A_2 \cap A_3 \cap A_4| = {6 \choose 1} = 6$$
So then $|A_1 \cup A_2 \cup A_3 \cup A_4| = 4(126) - 6(56) + 3(21) - 6 = 225$
But when I use the complements principle to subtract all teams without Andy, Bill, Carl, and Dave from all teams I get:
$${10 \choose 5} - \displaystyle{6 \choose 5} = 252 - 6 = 246$$
which is not the same. So I'm clearly doing something wrong with one of these but I don't know which one is wrong. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9783846659768267,
"lm_q1q2_score": 0.8482944872017785,
"lm_q2_score": 0.8670357546485408,
"openwebmath_perplexity": 694.9107353729173,
"openwebmath_score": 0.8511465191841125,
"tags": null,
"url": "https://math.stackexchange.com/questions/347765/conflicting-answers-when-using-complements-principle-and-the-inclusion-exclusion"
} |
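The disagreement above can be settled by brute force (editorial addition): the complements answer, $246$, is the correct one. In the inclusion-exclusion attempt the coefficients should be the binomial coefficients $\binom41, \binom42, \binom43, \binom44 = 4, 6, 4, 1$, giving $4(126) - 6(56) + 4(21) - 1(6) = 246$ as well.

```python
from itertools import combinations
from math import comb

players = range(10)          # players 0..3 stand for Andy, Bill, Carl, Dave
special = {0, 1, 2, 3}

# Count all 5-person teams containing at least one of the four students.
brute_force = sum(1 for team in combinations(players, 5) if special & set(team))

complements = comb(10, 5) - comb(6, 5)                                   # 252 - 6
incl_excl = 4 * comb(9, 4) - 6 * comb(8, 3) + 4 * comb(7, 2) - comb(6, 1)
```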
ros
Title: How install SwissRanger?
Hi, I'm starting to use ROS, and I need to install the drivers for the SwissRanger, but I don't know how.
This link doesn't have the link to download the drivers:
http://www.ros.org/wiki/swissranger_camera
Thanks
Originally posted by Alberto Rivera on ROS Answers with karma: 1 on 2011-04-20
Post score: 0
Take a look at the Tutorial, it goes through all the steps of setting up the driver.
As a quick note, most ROS stacks don't have a download link - you're going to be expected to download the source (via svn or hg or git) and compile (or cross-compile) it to your particular system.
Originally posted by rbtying with karma: 73 on 2011-04-21
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 5410,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
algorithms, sorting, quicksort
Title: Stability of QuickSort Algorithm Def: a sorting algorithm is stable if it preserves the relative order of equal-valued elements while sorting, as the following shows:
So for this QuickSort algorithm:
public class QuickSort {
public static int[] sortedData;
public static void sort(int[] data, int low, int high){
sortedData = data;
int i = low;
int j = high;
int mid = low+(high-low)/2;
int pivot = data[mid];
if(i>j) return;
while(i<=j){
while(data[i]<pivot)
i++;
while(data[j]>pivot)
j--;
if(i<=j){
int temp = data[i];
data[i] = data[j];
data[j] = temp;
i++;
j--;
}
}
if(low<j)
sort(data, low, j);
if(high>i)
sort(data, i, high);
}
}
Problem: We can make it stable by changing the condition while(i<=j) to while(i<j). What do you think? What advantages would this bring? It will reduce time by a constant factor, and by $O(n)$ in the worst case if all elements are equal; other than that, I am not sure what the advantages of a stable algorithm are. One huge advantage of a stable sorting algorithm is that a user is able to first sort a table on one column, and then by another.
Say that you have a website like Wikipedia with some tabular data, say a list of sorting algorithms, two of the columns being year discovered and name. If you want that table sorted by year, and then alphabetically by name, you can sort the table first by name, then by year.
This is only guaranteed to work with stable sorting. | {
"domain": "cs.stackexchange",
"id": 19151,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, sorting, quicksort",
"url": null
} |
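The two-pass trick described in the answer depends on stability. Python's built-in sorted is guaranteed stable, so the idea can be demonstrated directly (editorial sketch; the table rows are illustrative):

```python
# Illustrative table rows: (name, year discovered)
algorithms = [
    ("Quicksort", 1959),
    ("Merge sort", 1945),
    ("Heapsort", 1964),
    ("Shellsort", 1959),
]

# Sort by the secondary key (name) first, then by the primary key (year).
# Because sorted() is stable, rows sharing a year keep the alphabetical
# order established by the first pass.
by_name = sorted(algorithms, key=lambda row: row[0])
by_year_then_name = sorted(by_name, key=lambda row: row[1])
```

With an unstable sort, the second pass could scramble the alphabetical order within each year, which is exactly the failure mode the answer warns about.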
cc.complexity-theory, computability, na.numerical-analysis, computing-over-reals, computable-analysis
Title: How are real numbers specified in computation? This may be a basic question, but I've been reading and trying to understand papers on such subjects as Nash equilibrium computation and linear degeneracy testing and have been unsure of how real numbers are specified as input. E.g., when it's stated that LDT has certain polynomial lower bounds, how are the real numbers specified when they are treated as input? I disagree with your accepted answer by Kaveh. For linear programming and Nash equilibria, floating point may be acceptable. But floating point numbers and computational geometry mix very badly: the roundoff error invalidates the combinatorial assumptions of the algorithms, frequently causing them to crash. More specifically, a lot of computational geometry algorithms depend on primitive tests that check whether a given value is positive, negative, or zero. If that value is very close to zero and floating point roundoff causes it to have the wrong sign, bad things can happen.
Instead, inputs are often assumed to have integer coordinates, and intermediate results are often represented exactly, either as rational numbers with sufficiently high precision to avoid overflow or as algebraic numbers. Floating point approximations to these numbers may be used to speed up the computations, but only in situations where the numbers can be guaranteed to be far enough away from zero that the sign tests will give the right answers.
In most theoretical algorithms papers in computational geometry, this issue is sidestepped by assuming that the inputs are exact real numbers and that the primitives are exact tests of the signs of roots of low-degree polynomials in the input values. But if you are implementing geometric algorithms then this all becomes very important. | {
"domain": "cstheory.stackexchange",
"id": 246,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cc.complexity-theory, computability, na.numerical-analysis, computing-over-reals, computable-analysis",
"url": null
} |
graphs, minimum-spanning-tree
Title: Can a graph with more than |V| - 1 edges have the maximum weighted edge as part of its MST? Consider a graph G with |V| vertices and more than |V|-1 edges: is it possible for an edge to be the 'heaviest' and also be unique, but still be part of the graph's MST? If so, in which cases is it applicable?
From what I've been thinking so far, if the maximum edge is the only edge that connects some vertex with the rest of the graph, it has to be part of the MST regardless; but that holds regardless of the constraint of having more than |V| - 1 edges.
So am I correct to assume that, with more than |V| - 1 edges, you can still have the heaviest edge as part of the MST? Also, why is the constraint of having more than |V| - 1 edges applied here? What purpose does it serve?
Thanks. Consider the following example:
You can clearly see that the heaviest edge (connecting vertices 3 and 4) is in any MST of this graph. In general, the heaviest edge (assuming it is unique) will be part of the MST of a graph if and only if it is a bridge (meaning that if we remove that edge the graph gets disconnected). The condition $|E| \geq |V| - 1$ doesn't really change anything. However, in order for MST to make sense, we need the graph to be connected. Note that $|E| \geq |V| - 1$ does not guarantee that. | {
"domain": "cs.stackexchange",
"id": 18241,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "graphs, minimum-spanning-tree",
"url": null
} |
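The bridge criterion from the answer can be checked with a small Kruskal implementation (editorial sketch): in the graph below, the unique heaviest edge is a bridge to vertex 3, so every MST must contain it.

```python
def kruskal(n, edges):
    """MST of a connected graph on vertices 0..n-1; edges are (w, u, v)."""
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Triangle 0-1-2 plus vertex 3 hanging off the unique heaviest edge
# (weight 10); that edge is a bridge, so it must be in the MST.
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (10, 2, 3)]
mst = kruskal(4, edges)
```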
Decide the seating order of the people, starting from one of the brothers, say Ivan. Then position the other brother, Alexei, in one of the two slots (fourth and fifth) that fulfill the "separated by two others" condition - $2$ options. Then with Ivan and Alexei resolved, order the remaining five people in one of $5!=120$ ways. Finally add the empty chair to the right of someone, $7$ options, giving $2\cdot 120\cdot 7 = 1680$ options.
$\underline{Get\;the\;bothersome\;empty\;chair\;out\;of\;the\;way\;\;as\;a\;marker\;at\;the\;12\;o'clock\;position}$
• Brother $A$ has $7$ choices of seats
• Brother $B$ now has only $2$ choices (one clockwise and one anticlockwise of $A$ )
• the rest can be permuted in $5!$ ways
• Thus $7\cdot2\cdot5!\;$ways | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9830850852465429,
"lm_q1q2_score": 0.8127280387455357,
"lm_q2_score": 0.8267117983401363,
"openwebmath_perplexity": 866.1971131004757,
"openwebmath_score": 0.612453043460846,
"tags": null,
"url": "https://math.stackexchange.com/questions/2421917/combinatorics-seating-7-people-around-table-with-8-seats-two-people-have-to-b"
} |
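Both counting arguments above can be confirmed by brute force (editorial addition, under the assumption that the empty chair does not count as one of the "two others" between the brothers). Fix the empty chair at the 12 o'clock seat, as in the second solution, and permute the 7 people over the remaining seats:

```python
from itertools import permutations
from math import factorial

people = ["Ivan", "Alexei", "P3", "P4", "P5", "P6", "P7"]

def separated_by_two(order):
    # order = people in seats 1..7 clockwise, with the empty chair fixed
    # at seat 0.  Skipping the empty chair, the people form a 7-cycle, so
    # "two others between the brothers" means a cyclic gap of 3 seats in
    # one direction (equivalently 4 in the other).
    gap = (order.index("Alexei") - order.index("Ivan")) % 7
    return gap in (3, 4)

count = sum(separated_by_two(p) for p in permutations(people))
formula = 7 * 2 * factorial(5)
```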
python, performance
Title: Express a number as a sum of powers of 2 This is my function:
def two_powers(num):
powers = []
i = 1
while i <= num:
if i & num:
powers.append(i)
i <<= 1
return powers
I have Python 3.6 (Windows 10 64-bit). I want the result in the form of a list. My problem statement is to express an integer (num) as a sum of powers of 2.
Do I have to create a list in the beginning and then append to it each time? Can I return a list directly without creating it in the beginning?
This will speed up my execution time, right? It is probably cheating, but the bin(x) function would do most of the heavy lifting, converting x into a string of bits. Iterate in the reverse direction (via [::-1]) to match bits with their proper two-to-the-power-of index, select only those indices where the bit is "1", and create the list with list comprehension. It could be done in one statement.
def two_powers(num):
return [ 1 << idx for idx, bit in enumerate(bin(num)[:1:-1]) if bit == "1" ]
Note: bin() actually returns a string prefixed with "0b". The above code skips the prefix, by using an end index in the slice: [:1:-1].
As @200_success mentions, creating and decimating the binary string might not be the most efficient approach. A bit of research turned up int.bit_length() which can be used to determine an upper bound in the range() for list comprehension. Improved solution:
def two_powers(num):
return [ 1 << idx for idx in range(num.bit_length()) if num & (1 << idx) ]
Timing for the original method, Harold's, and my method, on 32 & 64 bit numbers, with most significant bit set, for various density of 1 bits: | {
"domain": "codereview.stackexchange",
"id": 31576,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance",
"url": null
} |
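For reference, the three versions discussed in this thread (the original loop, the bin-based one-liner, and the bit_length variant) can be checked against each other over a range of inputs (editorial addition):

```python
def two_powers_loop(num):
    # the original implementation from the question
    powers = []
    i = 1
    while i <= num:
        if i & num:
            powers.append(i)
        i <<= 1
    return powers

def two_powers_bin(num):
    # the bin()-based version; [:1:-1] reverses and drops the "0b" prefix
    return [1 << idx for idx, bit in enumerate(bin(num)[:1:-1]) if bit == "1"]

def two_powers_bitlen(num):
    # the bit_length()-based version
    return [1 << idx for idx in range(num.bit_length()) if num & (1 << idx)]

all_agree = all(
    two_powers_loop(n) == two_powers_bin(n) == two_powers_bitlen(n)
    for n in range(2048)
)
```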
java, file
} You should move most of your code away from the main function for maintainability and testability.
Your variable names (seguir) should be in English, which is the de-facto language of programming; it's fine for your user interface text to be local.
You should have your stream resources in a try-with-resources or a try-finally.
This program is a rough clone of the more pager (and its much more complicated big sister, the less pager). There are features of those pagers you should emulate, mostly: after every page, you should be able to read a section of the file uninterrupted by Para continuar prompts. Stock Java does not make this easy. So long as you don't use third-party libraries, the best thing to do is probably just print your prompt once, and then never again. Rely on the return character from the user to feed the last line of the page. This way, the file will appear as it does on disk.
It's not necessary to close both a file reader and its associated buffered reader; only close the latter.
It's easy enough to make this a much more useful program by removing the hard-coded filename and accepting it as the first command-line parameter.
Suggested
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.Scanner;
public class Main {
public static class Pager implements AutoCloseable {
private final Scanner in = new Scanner(System.in);
private final BufferedReader reader;
private final int blockSize;
public Pager(String filename, int blockSize) throws FileNotFoundException {
FileReader file = new FileReader(filename);
reader = new BufferedReader(file);
this.blockSize = blockSize;
}
public void run() throws IOException {
while (true) {
for (int i = 1; i < blockSize; ++i) {
String line = reader.readLine();
if (line == null) return;
System.out.println(line);
} | {
"domain": "codereview.stackexchange",
"id": 45065,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, file",
"url": null
} |
which yields the linear system of equations
\begin{align*} -2a_1 + 4a_2 - a_3 - a_4 - a_5 &= 0\\ 3a_1 - 2a_2 - 2a_3 + a_4 + 2a_5&= 0\\ 4a_1 + 2a_2 - 2a_3 - 2a_5&=0\\ -a_1 + 2a_3 - a_4 &= 0\\ 3a_1 - a_2 + 2a_3 - a_5 &= 0\\ -2a_1 + a_2 + 2a_3 + 2a_4 - 2a_5 &= 0\text{.} \end{align*}
By row-reducing the associated $$6\times 5$$ homogeneous system, we see that the only solution is $$a_1 = a_2 = a_3 = a_4 = a_5 = 0\text{,}$$ so these matrices are a linearly independent subset of $$M_{23}\text{.}$$
##### M23.
Determine if the matrix $$A$$ is in the span of $$S\text{.}$$ In other words, is $$A\in\spn{S}\text{?}$$ If so write $$A$$ as a linear combination of the elements of $$S\text{.}$$ | {
"domain": "runestone.academy",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9888419690807407,
"lm_q1q2_score": 0.8014355644746147,
"lm_q2_score": 0.810478913248044,
"openwebmath_perplexity": 281.1879529503222,
"openwebmath_score": 0.9999170303344727,
"tags": null,
"url": "https://runestone.academy/ns/books/published/fcla/section-MO.html"
} |
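The row reduction asserted above can be reproduced in exact rational arithmetic (editorial sketch): the $6\times 5$ coefficient matrix has rank $5$, so the homogeneous system admits only the trivial solution.

```python
from fractions import Fraction

# Coefficient matrix of the homogeneous system in a_1 .. a_5 above.
A = [[Fraction(x) for x in row] for row in [
    [-2,  4, -1, -1, -1],
    [ 3, -2, -2,  1,  2],
    [ 4,  2, -2,  0, -2],
    [-1,  0,  2, -1,  0],
    [ 3, -1,  2,  0, -1],
    [-2,  1,  2,  2, -2],
]]

def rank(M):
    # Gauss-Jordan elimination with exact fractions; returns the rank.
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                factor = M[i][col] / M[r][col]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

only_trivial_solution = rank(A) == 5
```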
quantum-field-theory, quantum-spin, dirac-equation
\end{align}
If we act with this on the ground state $| 0 \rangle$ then the $b$ terms vanish and so we get
\begin{align}
[J_z, a_{ 0}^{s\dagger}] | 0 \rangle= &\, \int d^3 x \, \int \frac{d^3p\, d^3q}{(2\pi)^6} \frac{1}{4\sqrt{ E_{ p} E_{ q}}} \sum_{r, r'} e^{+i ( p - q)\cdot x}
u^{r\dagger}( q) \Sigma^3 u^{r'}( p)[a_{ q}^{r \dagger}a_{ p}^{r'} , a_0^{s\dagger} ] | 0 \rangle \nonumber\\
=&\,\int d^3 x \, \int \frac{d^3p\, d^3q}{(2\pi)^6} \frac{1}{4\sqrt{ E_{ p} E_{ q}}} \sum_{r, r'} e^{+i ( p - q)\cdot x}
u^{r\dagger}( q) \Sigma^3 u^{r'}( p)a_{ q}^{r \dagger} (2\pi)^3 \delta^3( p) \delta^{r's} | 0 \rangle \nonumber\\
=&\,\int d^3 x \, \int \frac{d^3q}{(2\pi)^3} \frac{1}{4 E_{ q} } \sum_{r} e^{-i q \cdot x}
u^{r\dagger}( q) \Sigma^3 u^{s}( 0)a_{ q}^{r \dagger} | 0 \rangle
\end{align} | {
"domain": "physics.stackexchange",
"id": 79324,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, quantum-spin, dirac-equation",
"url": null
} |
c, homework, search
*result = best ^ mask;
return GOOD;
}
int main() {
byte arr1[] = {0, 1, 2, 3, 4};
byte arr2[] = {200, 100, 222};
byte minmax = 13;
assert(MaxOrMinValue(arr1, sizeof arr1, &minmax, Min) == GOOD);
assert(minmax == 0);
assert(MaxOrMinValue(arr1, sizeof arr1, &minmax, Max) == GOOD);
assert(minmax == 4);
assert(MaxOrMinValue(arr2, 0, &minmax, Max) == BAD);
assert(minmax == 4); // unchanged
assert(MaxOrMinValue(arr2, 1, &minmax, Max) == GOOD);
assert(minmax == 200);
minmax = 123; // to make sure it is overwritten again
assert(MaxOrMinValue(arr2, 2, &minmax, Max) == GOOD);
assert(minmax == 200);
assert(MaxOrMinValue(arr2, 3, &minmax, Max) == GOOD);
assert(minmax == 222);
// Passing neither Min nor Max is bad.
assert(MaxOrMinValue(arr2, 3, &minmax, 123) == BAD);
assert(minmax == 222); // unchanged
}
Admittedly, using a bitmask to combine the min and max algorithms is tricky. Without using this trick, the code might look like this:
int MaxOrMinValue(const byte *src, uint src_size, byte *result, MaxOrMin mom) {
if (src_size == 0) {
return BAD;
}
if (mom != Min && mom != Max) {
return BAD;
} | {
"domain": "codereview.stackexchange",
"id": 33489,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, homework, search",
"url": null
} |
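Since the full C function is not shown above, the following is an editorial sketch of the XOR-mask idea behind best ^ mask (the particular mask values 0x00/0xFF are assumptions): for an unsigned byte x, x ^ 0xFF equals 255 - x, which reverses the ordering, so a single "find the maximum" loop can serve for both Max and Min.

```python
def max_or_min(values, mask):
    # One maximum-finding loop does double duty: with mask = 0x00 it
    # returns the maximum; with mask = 0xFF (an order-reversing map on
    # bytes, since v ^ 0xFF == 255 - v) it returns the minimum.
    best = values[0] ^ mask
    for v in values[1:]:
        if (v ^ mask) > best:
            best = v ^ mask
    return best ^ mask   # undo the mask, as in "best ^ mask" above

data = [200, 100, 222, 0, 13]
maximum = max_or_min(data, 0x00)
minimum = max_or_min(data, 0xFF)
```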
ros-fuerte, dynamic-reconfigure
Additional Issues
It appears there are at least two more cases in which dynamic reconfigure param names can cause compilation issues.
When the class "DEFAULT" is added by the cfg generator:
class DEFAULT
{
public:
DEFAULT()
{
state = true;
name = "Default";
}
...
// variables from user-specified fields
...
// internal variables
bool state;
std::string name;
}groups;
the variables "state" and "name" appear to always be created, and will therefore conflict with any user-created variables with the same name.
To avoid these issues, it's probably a good idea to manually append "_param" or something similar to the name of all dynamic_reconfigure fields in .cfg files; however, as with the previous part of this issue, it would be great if this stuff would just work out-of-the-box.
Originally posted by ekaszubski on ROS Answers with karma: 101 on 2012-10-24
Post score: 4
Original comments
Comment by jbohren on 2012-11-06:
Wow, also it looks like there's no issue tracker for dynamic_reconfigure.
Comment by jbohren on 2012-11-06:
I just e-mailed Ze'ev about it, I'll post here when he gets back to me.
Comment by ekaszubski on 2012-11-07:
Thank you, sir :)
Comment by 130s on 2012-11-09:
Btw it is a little surprising that Names http://goo.gl/fqZut doesn't ban a single character parameter isn't it?
Comment by jbohren on 2012-11-11:
Well single character parameters shouldn't be a problem. Meanwhile, still nothing from Ze'ev.
Comment by Dave Coleman on 2013-07-08:
i just ran into this same issue, documenting it on ros.org...
This issue is resolved in https://github.com/ros/dynamic_reconfigure/pull/26 and released in dynamic_reconfigure v1.5.37. The fix will be available in the Hydro and Indigo Debian packages in the next few days. | {
"domain": "robotics.stackexchange",
"id": 11501,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-fuerte, dynamic-reconfigure",
"url": null
} |
radioactivity, conventions
where $P$ refers to the probability density that the lifetime was $t$, which can be calculated by integration by parts. So the average "age at death" for a large ensemble of nuclei will really be equal to the $t_0$ that appears in the exponent of $\exp(-t/t_0)$.
"domain": "physics.stackexchange",
"id": 11592,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "radioactivity, conventions",
"url": null
} |
c++, neural-network
            actPD = 0;
        }
    }
    for (size_t I = 0; I < Inlayer.size(); I++)
    {
        double PDval{};
        for (size_t hw = 0; hw < Inlayer[I].weights.size(); hw++)
        {
            PDval = Hidlayers[0][hw].preactvalPD * Inlayer[I].val;
            double biasPD = Hidlayers[0][hw].preactvalPD;
            if (descenttype == 2)
            {
                Inlayer[I].weightderivs[hw] += PDval;
                Hidlayers[0][hw].biasderiv += biasPD;
            }
            else
            {
                Inlayer[I].weightderivs[hw] = PDval;
                Hidlayers[0][hw].biasderiv = biasPD;
            }
            if (optimizerformula == 1)
            {
                calcema(0, hw, 0, "Hidlayer", "Bias");
                calcema(0, I, hw, "Inlayer", "Weight");
            }
            else if (optimizerformula == 3)
            {
                calcadam(0, hw, 0, "Hidlayer", "Bias");
                calcadam(0, I, hw, "Inlayer", "Weight");
            }
        }
    }
}
void Net::Updateweights()
{
    for (size_t I = 0; I < Inlayer.size(); I++)
    {
        double PD{};
        for (size_t iw = 0; iw < Inlayer[I].weights.size(); iw++)
        {
            if (optimizerformula == 2)
            {
                PD = (Inlayer[I].weightderivs[iw] * -1.0) - (Lambda * regularize(Inlayer[I].weights[iw]));
                Inlayer[I].weights[iw] = Inlayer[I].weights[iw] + (Alpha * PD);
            }
            else if (optimizerformula == 1)
            { | {
"domain": "codereview.stackexchange",
"id": 40542,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, neural-network",
"url": null
} |
i.e. $$a_n=\frac{\sin\frac{3(n+1)\pi}4}{2^{\frac{n+1}2}}.$$ Those infos about Chebyshev polynomials can be found here: https://en.wikipedia.org/wiki/Chebyshev_polynomials | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.973240714486111,
"lm_q1q2_score": 0.800187945312572,
"lm_q2_score": 0.8221891392358014,
"openwebmath_perplexity": 262.1671974511928,
"openwebmath_score": 0.8817855715751648,
"tags": null,
"url": "https://math.stackexchange.com/questions/1569168/write-frac-11z2-as-a-power-series-centered-at-z-0-1"
} |
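A quick numerical sanity check of the closed form above (my own sketch): summing $a_n (z-1)^n$ with $a_n=\sin\frac{3(n+1)\pi}4/2^{\frac{n+1}2}$ should reproduce $f(z)=\frac1{1+z^2}$ inside the radius of convergence $|z-1|<\sqrt2$.

```python
import math

def a(n):
    # claimed coefficient of (z - 1)^n
    return math.sin(3 * (n + 1) * math.pi / 4) / 2 ** ((n + 1) / 2)

def series(z, terms=60):
    # partial sum of the power series centered at z0 = 1
    w = z - 1.0
    return sum(a(n) * w ** n for n in range(terms))
```

At z = 1.3 (inside the radius of convergence) the partial sum matches 1/(1+z^2) to high precision, and a(0) = 1/2 = f(1), as it must.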
python, validation
With all that in mind, it could look like this, note the nested loop is
still there, but it already looks much cleaner:
def stca_list(warning_or_alert, aircraft1=None, aircraft2=None, t_sep=None, t_min_nm=None, c_tracks=None):
    aircraft_str = aircraft1 and '{} - {}'.format(aircraft1, aircraft2)
    if warning_or_alert not in ('warning', 'alert'):
        errorhandling.stca_list_exception('Warning or Alert Error: Choose either warning or alert, [%s] does not exist.' % warning_or_alert)
    is_warning = warning_or_alert == 'warning'
    expected_color = YELLOW if is_warning else RED
    stca_list_dialog = squishtest.waitForObject("{type='isds::StcaListDialog' unnamed='1' visible='1' windowTitle='STCA List'}")
    for si in squishtest.object.children(stca_list_dialog):
        if squishtest.className(si) != 'QTableWidget':
            continue
        children = squishtest.object.children(si)
        for i, ssi in enumerate(children):
            if squishtest.className(ssi) != 'QModelIndex':
                continue
            for stca_list_value in children[i+1:]:
                if aircraft_str != stca_list_value.text:
                    errorhandling.stca_list_exception('Wrong value: expected [%s] but was [%s]' % (aircraft_str, stca_list_value.text))
                if stca_list_value.foregroundColor != expected_color:
                    errorhandling.stca_list_exception('Error Warning: expected [%s] got [%s]' % ('alert' if is_warning else 'warning', warning_or_alert))
                if t_sep:
                    utils.time_in_range(t_sep, children[i+2].text) | {
"domain": "codereview.stackexchange",
"id": 16084,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, validation",
"url": null
} |
group-theory, representation-theory, lie-algebra
$$
The Cartan elements are diagonal with
$$
h_k=\cases{1& at row $k$ and column $k$\\
-1 & at row $k+1$ and column $k+1$\\
0& otherwise}\, . \tag{2}
$$
Note that these differ from the standard diagonal Gell-Mann matrices. With this convention, $C_{ij}$ with $i<j$ is a raising operator, and the highest-weight state of a representation is the unique state killed by all raising operators; the eigenvalues of the $h_k$ on that state are its weights, and the Dynkin labels are just these weights, which can be shown to be non-negative integers. In this convention the Dynkin label for an $su(2)$ irrep is $2j$ (which is necessarily an integer).
This definition also makes it obvious that the $N$-dimensional vector
$(1,0,\ldots)$ is a highest weight for the $su(N)$ irrep $(1,0,\ldots)$. The other $N-1$ vectors span the $N$-dimensional space and the matrices resulting from (1) and (2) are used to define the fundamental representation of $su(N)$. | {
"domain": "physics.stackexchange",
"id": 61431,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "group-theory, representation-theory, lie-algebra",
"url": null
} |
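Regarding the $su(N)$ answer above, a concrete illustration of (1) and (2) (my own addition, assuming $C_{ij}$ denotes the usual matrix unit with a single 1 in row $i$, column $j$): the $N=2$ case reads

```latex
C_{12} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
h_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
```

The vector $(1,0)^T$ is annihilated by the raising operator $C_{12}$ and has $h_1$-eigenvalue $1$, so its Dynkin label is $1 = 2j$ with $j=\tfrac12$: the fundamental representation of $su(2)$.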
asymptotics, landau-notation
Title: Equivalent definitions of big O
Let $A = \{ g(n) \mid \exists c,n_0 \, \forall n \ge n_0\colon g(n) \le cf(n) \}$, and $B = \{ g(n) \mid \exists c,n_0 \, \forall n \geq n_0 \colon g(n) < cf(n) \}$.
Prove $A = B$.
My solution:
Let $f(n)$ and $g(n)$ be functions from $\mathbb{N}$ to $\mathbb{N}$.
$g(n)\le cf(n)$ for all $n > n_0$.
$g(n) = O(f(n))$ means $\exists c, n_0 \forall n\colon n > n_0 \Rightarrow g(n) \le cf(n)$.
To prove A, choose values for $c$ and $n_0$ and prove that $n > n_0$ implies $g(n) \le cf(n)$.
Choose $n_0 = 1$.
Assuming $n>1$, find a $c$ such that $g(n)/f(n) \le cf(n)/f(n) = c$.
This shows that $n>1$ implies $g(n)\le cf(n)$.
$n>1$ implies $1<n$, $n<n^2$, $n^2<n^3$, and so on. | {
"domain": "cs.stackexchange",
"id": 9296,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "asymptotics, landau-notation",
"url": null
} |
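Regarding the big-O question above, a direct two-inclusion argument can be sketched as follows (my own sketch, under the standing assumption that $f(n)\ge 1$ for all $n\ge n_0$, which holds e.g. for eventually positive $f:\mathbb{N}\to\mathbb{N}$):

```latex
B \subseteq A:\ g(n) < c f(n) \implies g(n) \le c f(n),
\text{ so the same witnesses } (c, n_0) \text{ work.} \\
A \subseteq B:\ g(n) \le c f(n) \text{ and } f(n) \ge 1
\implies g(n) < c f(n) + f(n) = (c+1) f(n),
\text{ so } (c+1, n_0) \text{ are witnesses.}
```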
statistical-mechanics, probability
Title: Probability of finding n particles in a volume v I'm trying to calculate the probability of finding $n$ particles in a certain volume $v$ in a system with a total of $N$ particles and total volume of $V$. My problem is that I've tried two approaches which both seem valid to me, but give differing answers.
One approach is to use binomial probability, where the probability of success (particle in the volume of interest) is $\frac{v}{V}$. Furthermore, the particles are indistinguishable, so it doesn't matter the order of "successes" and "failures". This gives:
$P=(1-\frac{v}{V})^{N-n}\,(\frac{v}{V})^{n}\,\frac{N!}{(N-n)!n!}$
My other approach is to say to start saying that any configuration (remembering particles are indistinguishable) has equal probability and so the probability for our event is simply $P=\frac{\mathrm{\#\ of\ configurations\ with\ n\ particles\ in\ the\ cell}}{\mathrm{\#\ of\ configurations}}$. Now from combinatorics, the number of configurations is $\binom{N+\frac{V}{v}-1}{N}$, and the number of configurations with $n$ particles in $v$ is $\binom{N-n+\frac{V}{v}-2}{N-n}$. This gives a probability: | {
"domain": "physics.stackexchange",
"id": 11159,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, probability",
"url": null
} |
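The disagreement between the two approaches above can be seen numerically (my own sketch; the helper names are mine). For small $N$ and an integer number of cells $V/v$:

```python
from math import comb

def p_binomial(n, N, p):
    # independent placement of classical particles, p = v / V
    return comb(N, n) * p ** n * (1 - p) ** (N - n)

def p_configs(n, N, cells):
    # every indistinguishable-particle configuration equally likely; cells = V / v
    return comb(N - n + cells - 2, N - n) / comb(N + cells - 1, N)
```

Both distributions sum to 1, but they are not equal: weighting all configurations equally is Bose–Einstein-style counting, not the independent-placement model underlying the binomial formula, so the two need not agree.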
file-formats, python, gff3, sequence-annotation
MAKER
The MAKER annotation workflow (paper, software) is a pretty commonly used gene annotation tool, and produces GFF3 output like this.
scaffold_12 maker gene 652527 655343 . + . ID=maker-scaffold_12-augustus-gene-0.959;Name=maker-scaffold_12-augustus-gene-0.959
scaffold_12 maker mRNA 652527 655343 . + . ID=maker-scaffold_12-augustus-gene-0.959-mRNA-1;Parent=maker-scaffold_12-augustus-gene-0.959;Name=maker-scaffold_12-augustus-gene-0.959-mRNA-1;_AED=0.24;_eAED=0.18;_QI=0|0|0|0.66|0.5|0.33|3|0|218
scaffold_12 maker exon 652527 652817 . + . ID=maker-scaffold_12-augustus-gene-0.959-mRNA-1:exon:1203;Parent=maker-scaffold_12-augustus-gene-0.959-mRNA-1
scaffold_12 maker exon 654877 655170 . + . ID=maker-scaffold_12-augustus-gene-0.959-mRNA-1:exon:1204;Parent=maker-scaffold_12-augustus-gene-0.959-mRNA-1
scaffold_12 maker exon 655272 655343 . + . ID=maker-scaffold_12-augustus-gene-0.959-mRNA-1:exon:1205;Parent=maker-scaffold_12-augustus-gene-0.959-mRNA-1
scaffold_12 maker CDS 652527 652817 . + 0 ID=maker-scaffold_12-augustus-gene-0.959-mRNA-1:cds;Parent=maker-scaffold_12-augustus-gene-0.959-mRNA-1 | {
"domain": "bioinformatics.stackexchange",
"id": 177,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "file-formats, python, gff3, sequence-annotation",
"url": null
} |
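Since the question above is tagged python, here is a minimal parsing sketch of my own (not MAKER code; note that GFF3 columns are tab-separated, although the tabs were flattened to spaces in the listing):

```python
def parse_gff3_line(line):
    """Split one GFF3 feature line into a dict, with attributes parsed."""
    fields = ["seqid", "source", "type", "start", "end",
              "score", "strand", "phase", "attributes"]
    rec = dict(zip(fields, line.rstrip("\n").split("\t")))
    rec["attributes"] = dict(
        kv.split("=", 1) for kv in rec["attributes"].split(";") if kv
    )
    return rec

# First gene line from the MAKER output above, tabs restored:
line = ("scaffold_12\tmaker\tgene\t652527\t655343\t.\t+\t.\t"
        "ID=maker-scaffold_12-augustus-gene-0.959;"
        "Name=maker-scaffold_12-augustus-gene-0.959")
rec = parse_gff3_line(line)
```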
laser, ros-kinetic
2.7093093395233154, 2.739572763442993, 2.770573377609253, 2.802337646484375, 2.8348937034606934, 2.8682701587677, 2.902498245239258, 2.9376096725463867, 2.9736385345458984, 3.010620355606079, 3.0485925674438477, 3.087594509124756, 3.1276679039001465, 3.168856382369995, 3.2112061977386475, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, | {
"domain": "robotics.stackexchange",
"id": 34591,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "laser, ros-kinetic",
"url": null
} |
11
In general, one cannot get explicit analytic solutions of transcendental equations in terms of radicals. This is also the case for univariate higher-order polynomial equations. On the other hand, since Mathematica 7 we can find exact solutions (in terms of Root objects) of a wide range of (univariate) transcendental equations; for a more detailed discussion of ...
11
Since you're working with vectors, just let Mathematica know that these are vectors. Some other systems (MATLAB and its relatives in particular) have the limitation that they can only work with matrices, forcing you to distinguish between row vector and column vectors and keep transposing. This is not necessary nor convenient in Mathematica. In[1]:= ...
10
If you need to work with a set of variables symbolically, but you also need to substitute in values for them occasionally, a good approach is to use a rule list: values = {a -> 0.04, L1 -> 1, L0 -> 1} If the symbols have no values assigned, you can use them normally in symbolic calculations: L[s_, L0_, L1_, a_] := L1 + L0/(1 + s/a) D[L[s, L0, L1, ...
| {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9343951643678381,
"lm_q1q2_score": 0.8230125392231699,
"lm_q2_score": 0.8807970873650403,
"openwebmath_perplexity": 1305.4646071212592,
"openwebmath_score": 0.5352917909622192,
"tags": null,
"url": "http://mathematica.stackexchange.com/tags/symbolic/hot?filter=all"
} |
thermodynamics, statistical-mechanics, entropy
Title: Understanding entropy production I found this concept of entropy production in wikipedia.
Mainly I am trying to figure out the formula the Clausius formula for entropy production. What exactly are the terms involved?
For what process is $S-S_o$ written?
For what process is $\int \frac{dQ}{T}$ written? What is the $T$ in this expression? The entropy change of a system is the sum of two parts:
Entropy transferred from the surroundings to the system (across the interface with the surroundings) as a result of heat flow, and given by $\int{\frac{dq}{T_B}}$, where dq is the differential heat flow across the boundary interface between the system and surroundings and $T_B$ is the temperature at the boundary through which this same heat flow takes place.
Entropy generation $\sigma$ within the system as a result of irreversibility driven by internal viscous friction, internal conductive heat transfer, and internal mass diffusion. In a reversible process, this contribution to the entropy change is zero, and, in an irreversible process, this contribution is always positive.
So, $$\Delta S=\int{\frac{dq}{T_B}}+\sigma$$or, expressed as an inequality,
$$\Delta S\geq \int{\frac{dq}{T_B}}$$Also, in a reversible process, the system and surroundings temperatures are equal, so that, at the boundary, $T_B=T$, where T is the system temperature. | {
"domain": "physics.stackexchange",
"id": 71750,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, statistical-mechanics, entropy",
"url": null
} |
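A worked numerical example of this split (my own; it treats the system itself as a reservoir at constant temperature $T$, so its entropy change is simply $Q/T$): heat $Q$ crosses a boundary held at $T_B = 400\,\mathrm{K}$ into a system at $T = 300\,\mathrm{K}$.

```python
Q = 1000.0   # J, heat entering the system
T_B = 400.0  # K, boundary (surroundings-side) temperature
T = 300.0    # K, system temperature, assumed uniform and constant

dS = Q / T                # entropy change of the system
transferred = Q / T_B     # entropy carried in across the boundary
sigma = dS - transferred  # entropy generated inside the system
```

Here sigma comes out positive (about 0.83 J/K), as required for an irreversible transfer; it vanishes only when T_B = T.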
c++, memory-management, c++17
Your comments/critiques are most welcome.
Looking at this, you have a single static with all the memory packed together and all bits to indicate 'free/available'. I don't see a way to improve the memory usage of these bits. If MEMSIZE were variable, you could consider other techniques, though in this case the bitset looks the most efficient.
Looking at the allocator requirements, I believe all required elements are available. You could still add type defs like size_type, difference_type, propagate_on_container_move_assignment, is_always_equal ... to improve some usage by std::vector. These are optional and the provided example on the page doesn't have them either.
= default is a very good choice. Normally, this defaulted method becomes noexcept on its own. You could add it explicitly, though if you then change the class so that the default behavior is no longer noexcept, it makes the method deleted.
You are correct, all instances of Allocator can be considered equal. (See the is_always_equal typedef I've mentioned before)
As your allocators don't have state, you don't need a static variable for them. You could create them when needed. With some CRTP you could reduce the amount of code needed in the classes using it.
Some other random remarks:
Storage::operator<< could use a range-based for loop
Your allocate function could be optimized, as you don't have to check every combination. (aka: if you encounter 5 free elements and you need to allocate 10, you can jump to just after that first used element)
You could replace some of the for-loops with std::all_of/std::none_of (or, if you implement the previous suggestion, std::find)
You don't have out-of-bounds protection in the inner loop. (If you are at index 76 and need to allocate 10 elements)
Why would you decrement n in the new operator for an array? | {
"domain": "codereview.stackexchange",
"id": 39635,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, memory-management, c++17",
"url": null
} |
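To make the allocate optimization (and the missing bounds check) from the review above concrete, here is a language-neutral sketch of my own in Python, not the reviewed C++ code: when the scan hits a used slot, the search restarts just past that slot instead of advancing one index at a time, and it never runs past the end of the bitmap.

```python
def find_free_run(used, n):
    """Index of the first run of n free (False) slots, or None."""
    i = 0
    limit = len(used) - n  # bounds check: last viable starting index
    while i <= limit:
        blocked = -1
        for j in range(i, i + n):
            if used[j]:
                blocked = j
                break
        if blocked < 0:
            return i
        i = blocked + 1  # skip past the used slot entirely
    return None

used = [False] * 10
used[2] = True
```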
general-relativity, black-holes, metric-tensor, event-horizon, causality
For spacetime points inside a black hole no causal trajectory (such as a null geodesic) could reach $\mathscr{I}^+$, while for the points “inside” a normal light cone there are null geodesics reaching this null infinity. So the definition of a black hole region in [1] is: $B= M - J^-(\mathscr{I}^+)$, or “all the points of a manifold $M$ that do not lie in the past of $\mathscr{I}^+$” (in other words, all the events from which no signal could ever escape to infinity). In contrast, for an asymptotically flat spacetime without an event horizon all points of spacetime are in the past of $\mathscr{I}^+$; in other words, $M=J^-(\mathscr{I}^+)$.
Note, that the existence of singularities inside of a black hole is not a necessity, but an artefact of an “ordinary” general relativity with a particular matter content. One could consider modified theories of relativity which do not have singularities but have black holes in the sense outlined above. | {
"domain": "physics.stackexchange",
"id": 58089,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, black-holes, metric-tensor, event-horizon, causality",
"url": null
} |
java, array
Title: Array is Balanced Array or Not A balanced array is defined to be an array where for every value n in the array, -n also is in the array.
{-2, 3, 2, -3} is a balanced array.
So is {-2, 2, 2, 2}. But {-5, 2, -2} is not because 5 is not in the array.
I wrote the following class BalancedArray which satisfied above condition.
public class BalancedArray {
    public static void main(String args[]) {
        System.out.println("The result is: " + isBalanced(new int[]{-2, 3, 2, -3}));
    }
    public static boolean isBalanced(int[] a){
        boolean status = false;
        for(int i = 0; i < a.length; i++) {
            if(a[i] > 0) {
                for(int j = i+1; j < a.length; j++) {
                    if(a[i] == Math.abs(a[j])) {
                        status = true;
                    }
                }
            } else if(a[i] < 0) {
                for(int k = i+1; k < a.length; k++) {
                    if(Math.abs(a[i]) == a[k]) {
                        status = true;
                    }
                }
            }
            System.out.println(status);
            if(status) {
                status= true;
            } else {
                status = false;
                break;
            }
        }
        return status;
    }
}
Is this the proper way to check for a balanced array, or am I missing some conditions to check in the array? From comment to answer
This isn't doing what you assume. Passing an array { 2, 2 } will result in true. This is because in the inner loops you don't check whether the values are "opposite", meaning for a value > 0 in the outer loop you don't check that the value is < 0 in the inner loop.
The if condition
if(status) {
    status= true;
} else {
    status = false;
    break;
}
doesn't buy you anything but adds noise to the code. Simply write
if (!status) {
    break;
}
because there is no need to set status to true if that's the value anyway. | {
"domain": "codereview.stackexchange",
"id": 18080,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, array",
"url": null
} |
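For contrast with the quadratic Java version reviewed above, the whole check collapses to one pass over a set; a Python sketch of my own (it also gets the { 2, 2 } case right, returning False):

```python
def is_balanced(values):
    """True iff for every value v present, -v is also present."""
    seen = set(values)
    return all(-v in seen for v in seen)
```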
• How did you calculate this? – Antonio Vargas Jan 17 '13 at 20:24
For probabilists' Hermite polynomials: The Hermite polynomials are the orthogonal polynomials corresponding to the weight function $w(x) = e^{-x^2/2}$. This means that $\int_{-\infty}^{\infty} H_n(x)H_m(x)e^{-x^2/2} \, dx = 0$ whenever $n \not= m$ (or equivalently, $\int_{-\infty}^{\infty} H_n(x) P(x) e^{-x^2/2} \, dx = 0$ for any polynomial $P$ of degree less than $n$). Since $H_0(x) = 1$, it follows that $$\int_{-\infty}^{\infty} H_n(x)e^{-x^2/2} \, dx = \int_{-\infty}^{\infty} H_n(x)H_0(x)e^{-x^2/2} \, dx = 0$$ for all $n > 0$. The only time this integral is non-zero is when $n = 0$, in which case $$\int_{-\infty}^{\infty} H_0(x)e^{-x^2/2} \, dx = \int_{-\infty}^{\infty} e^{-x^2/2} \, dx = \sqrt{2\pi}.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9796676442828173,
"lm_q1q2_score": 0.8346305890206668,
"lm_q2_score": 0.8519527963298947,
"openwebmath_perplexity": 249.465703203087,
"openwebmath_score": 0.9934238791465759,
"tags": null,
"url": "https://math.stackexchange.com/questions/280945/integral-of-hermite-polynomial-multiplied-by-exp-x2-2"
} |
security, common-lisp
Title: Generating hard to guess session tokens Does this code create sufficiently hard-to-guess session tokens, assuming the server and client are communicating over HTTPS?
Take 2 (thanks to this crypto.SE answer):
(ql:quickload (list :ironclad :cl-base64))
(let ((prng (ironclad:make-prng :fortuna)))
  (defun new-session-token ()
    (cl-base64:usb8-array-to-base64-string
     (ironclad:random-data 32 prng) :uri t))) | {
"domain": "codereview.stackexchange",
"id": 7718,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "security, common-lisp",
"url": null
} |
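For comparison with the Common Lisp version above, a Python sketch of my own using the standard library's secrets module, which does the equivalent work: 32 CSPRNG bytes, URL-safe base64, padding stripped.

```python
import secrets

def new_session_token():
    # 32 random bytes -> 43-character URL-safe base64 string
    return secrets.token_urlsafe(32)
```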
$\text{(ii) }\;1 - P(A\cap B\cap C) \;= \;0.88$ . . . . Right!
(b) (i) only the New York flight is full. (ii) exactly one of the three flights is full.
$\text{(i) }\;P(A \cap \overline{B} \cap \overline{C}) \;=\;(0.6)(0.5)(0.6) \;=\;0.18$
$\text{(ii)}\;\begin{array}{ccccc}P(A \cap \overline{B} \cap \overline{C}) & = & (0.6)(0.5)(0.6) & = & 0.18 \\
P(\overline{A} \cap B \cap \overline{C}) &=& (0.4)(0.5)(0.6) &=& 0.12 \\
P(\overline{A} \cap \overline{B} \cap C) &=& (0.4)(0.5)(0.4) &=& 0.08\end{array}$
$P(\text{exactly one full}) \;=\;0.18 + 0.12 + 0.08 \;=\;0.38$ | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9899864270133119,
"lm_q1q2_score": 0.8070691015408177,
"lm_q2_score": 0.8152324915965392,
"openwebmath_perplexity": 1399.6868085917135,
"openwebmath_score": 0.5818556547164917,
"tags": null,
"url": "http://mathhelpforum.com/advanced-statistics/27123-probability-problem.html"
} |
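The arithmetic in the answer above checks out numerically. Reading the per-flight "full" probabilities off the factors shown (my inference: $P(A)=0.6$, $P(B)=0.5$, $P(C)=0.4$):

```python
pA, pB, pC = 0.6, 0.5, 0.4  # P(each flight is full), inferred from the factors

only_new_york = pA * (1 - pB) * (1 - pC)
exactly_one = (pA * (1 - pB) * (1 - pC)
               + (1 - pA) * pB * (1 - pC)
               + (1 - pA) * (1 - pB) * pC)
```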
The ratio (10/3t)/(7/3t) = 10/7, so the work (rate) ratio is 10 to 7.
Since the time ratio is the inverse of the work ratio, the answer is 7 to 10.
Bunuel wrote:
Machine X can complete a job in half the time it takes Machine Y to complete the same job, and Machine Z takes 50% longer than Machine X to complete the job. If all three machines always work at their respective, constant rates, what is the ratio of the amount of time it will take Machines X and Z to complete the job to the amount of time it will take Machines Y and Z to complete the job?
A. 5 to 1
B. 10 to 7
C. 1 to 5
D. 7 to 10
E. 9 to 10
Kudos for a correct solution.
Originally posted by TudorM on 03 Feb 2015, 22:53.
Last edited by TudorM on 03 Feb 2015, 23:29, edited 2 times in total.
Manager
Joined: 17 Dec 2013
Posts: 58
GMAT Date: 01-08-2015
Re: Machine X can complete a job in half the time it takes Machine Y to co
04 Feb 2015, 04:14
x=0.5t
y=t
z=0.75t
t=4hours so we get:
x=2
y=4
z=3
x+z=1/2+1/3=5/6 -> they need 6/5 hours
y+z=1/4+1/3=7/12 -> they need 12/7 hours
ratio is 6/5 divided by 12/7, or multiplied by 7/12 -> we get 7/10
Math Expert
Joined: 02 Aug 2009
Posts: 6961
Re: Machine X can complete a job in half the time it takes Machine Y to co
| {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 1,
"lm_q1q2_score": 0.8757869819218865,
"lm_q2_score": 0.8757869819218865,
"openwebmath_perplexity": 2657.5918376725567,
"openwebmath_score": 0.6865741610527039,
"tags": null,
"url": "https://gmatclub.com/forum/machine-x-can-complete-a-job-in-half-the-time-it-takes-machine-y-to-co-192604.html"
} |
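Checking the ratio from the thread above with exact arithmetic (my own sketch; Y's solo time is normalized to 1 job-time):

```python
from fractions import Fraction as F

t_y = F(1)           # Y's time for one job
t_x = t_y / 2        # X takes half of Y's time
t_z = t_x * F(3, 2)  # Z takes 50% longer than X

t_xz = 1 / (1 / t_x + 1 / t_z)  # X and Z working together
t_yz = 1 / (1 / t_y + 1 / t_z)  # Y and Z working together
ratio = t_xz / t_yz
```

The ratio comes out to exactly 7/10, matching answer choice D.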