javascript, algorithm, chess
function checkUpLeft(row, col) {
    if (row >= 0 && col >= 0) {
        if (isObstacle(row, col)) {
            return;
        } else {
            counter++;
            checkUpLeft(row - 1, col - 1);
        }
    }
}
function checkUpRight(row, col) {
    if (row >= 0 && col < boardLength - 1) {
        if (isObstacle(row, col)) {
            return;
        } else {
            counter++;
            checkUpRight(row - 1, col + 1);
        }
    }
}
function checkDownRight(row, col) {
    if (row < boardLength - 1 && col < boardLength - 1) {
        if (isObstacle(row, col)) {
            return;
        } else {
            counter++;
            checkDownRight(row + 1, col + 1);
        }
    }
}
function checkDownLeft(row, col) {
    if (row < boardLength - 1 && col >= 0) {
        if (isObstacle(row, col)) {
            return;
        } else {
            counter++;
            checkDownLeft(row + 1, col - 1);
        }
    }
}
checkUp(x-1,y);
checkDown(x+1,y);
checkLeft(x,y-1);
checkRight(x,y+1);
checkUpLeft(x-1,y-1);
checkUpRight(x-1,y+1);
checkDownRight(x+1,y+1);
checkDownLeft(x+1,y-1);
return counter;
};
queensMenace(8, [4,5], [[2,3], [7,4]]);
What about defining checkDelta?
function checkDelta(dx, dy) {
    // One sketch of the completed helper (assuming boardSize, isObstacle,
    // and counter from the code above): walk from (x, y) in steps of (dx, dy).
    var r = x + dx, c = y + dy;
    while (0 <= r && r < boardSize && 0 <= c && c < boardSize && !isObstacle(r, c)) {
        counter++;
        r += dx; c += dy;
    }
}
This function should make your code really simple. | {
"domain": "codereview.stackexchange",
"id": 29225,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, algorithm, chess",
"url": null
} |
vcf, allele-frequency
And by "hit" I mean an aligned sequence fragment that includes that coordinate.
To get data on how many reads contributed to calling a specific allele at a given position, one would use the DP field, for example, which isn't included in the Kaviar VCF.
The AC, AN, and AF fields in a VCF file are meant to be used to show the allele's frequency in the context of all of the samples used when making that VCF file.
Investigation process
This is one line from the Kaviar vcf file:
1 10230 rs200279319 AC AA,A . . AF=0.0002654,0.0048525;AC=7,128;AN=26378;END=10231
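As a quick sanity check, the AF values in this line are exactly AC/AN. A sketch of parsing the INFO column (assuming the standard `KEY=VALUE;...` formatting):

```python
# Parse the INFO column of the Kaviar line above and sanity-check that
# AF is just AC/AN (allele count over total number of alleles).
info = "AF=0.0002654,0.0048525;AC=7,128;AN=26378;END=10231"
fields = dict(kv.split("=") for kv in info.split(";"))

ac = [int(v) for v in fields["AC"].split(",")]
an = int(fields["AN"])
af = [float(v) for v in fields["AF"].split(",")]

for count, freq in zip(ac, af):
    assert abs(count / an - freq) < 1e-6  # AF is AC/AN, rounded
```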
If we look at the README attached with the Kaviar vcf file, it says,
"Kaviar is a compilation of SNVs, indels, and complex variants observed in humans, designed to facilitate testing for the novelty and frequency of observed variants."
How many humans though? If we go to the Kaviar website it says:
Kaviar contains 162 million SNV sites (including 25M not in dbSNP) and incorporates data from 35 projects encompassing 77,781 individuals (13.2K whole genome, 64.6K exome).
If we look at the Kaviar VCF header we see that the fields are defined:
##INFO=<ID=AF,Number=A,Type=Float,Description="Allele Frequency">
##INFO=<ID=AC,Number=A,Type=Integer,Description="Allele Count">
##INFO=<ID=AN,Number=1,Type=Integer,Description="Total number of alleles in data sources"> | {
"domain": "bioinformatics.stackexchange",
"id": 1987,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vcf, allele-frequency",
"url": null
} |
capacitance, voltage
Title: Where does the maximum voltage come from in a vacuum capacitor? If too much voltage is placed across a normal capacitor, the dielectric material will break down and the capacitor will fail; this makes sense to me.
That being said, some capacitors are designed to use a high vacuum as the dielectric, and yet these capacitors still have maximum safe voltages, albeit much higher than those of a conventional capacitor. What causes a vacuum capacitor to fail at high voltages, if not a failure of the dielectric?
The breakdown is a two-step process. When the voltage becomes high enough you get field emission from the surface of the capacitor plates. With perfectly smooth surfaces this is all that happens, but with real surfaces you get points where the emission is enhanced due to surface irregularities. Current flowing at those points causes local heating, which enhances emission, and you get a feedback loop resulting in a spark between the surfaces. | {
"domain": "physics.stackexchange",
"id": 50093,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "capacitance, voltage",
"url": null
} |
complexity-theory, complexity-classes, intuition
First, we should say that these two classes might be equal. They seem more likely to be different but classes sometimes turn out to be the same: for example, in 2004, Reingold proved that symmetric logspace is the same as ordinary logspace; in 1987, Immerman and Szelepcsényi independently proved that NL$\;=\;$co-NL (and, in fact, that NSPACE[$f(n)$]$\;=\;$co-NSPACE[$f(n)$] for any $f(n)\geq \log n$).
But, at the moment, most people believe that PSPACE and EXP are different. Why?
Let's look at what we can do in the two complexity classes. Consider a problem in PSPACE. We're allowed to use $p(n)$ tape cells to solve an input of length $n$ but it's hard to compare that against EXP, which is specified by a time bound. | {
"domain": "cs.stackexchange",
"id": 3812,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, complexity-classes, intuition",
"url": null
} |
thermodynamics, energy, entropy
Title: Where do I thermodynamically see the energy and entropy of a ball bouncing on the ground until it stops? I am trying to clarify some of the concepts that I learned so far in my thermodynamic adventure.
For that purpose I will set up a salad with those concepts, so I can see not only their isolated definition but the practical relationship between them all.
SCENARIO:
Suppose I am in a room and I have a ball in my left hand.
Suppose the ball is my thermodynamic system because it's the thing I want to analyze. Hence the room is the environment.
Suppose I let the ball fall free.
THERMODYNAMIC PERSPECTIVE:
Potential energy is gravity pulling the ball downwards. All this potential energy is part of the internal energy of the system.
Part of it, because another part is made of the kinetic energy of the system's particles (be it a gas or be it a whatever).
It will hence swim down across the air, bounce on the ground, and swim back up...
...A few times, while losing its internal energy as heat flow that goes into the surroundings.
Is there any work flow while the ball is swimming down in contact with the air? - I say swim because it represents better that there is an opposition with the air, falling sounds too easy -
There is a positive work flow when the ball is touching the ground, because it is compressing its volume, and hence there is work being done on the system.
There is a negative work flow when the ball is bouncing back from the ground because it is expanding again, and hence the system is doing the work.
Entropy being a measure of the system's freedom to arrange itself among its many possible micro-states, where can I see this entropy?
While falling, the room gets the heat flow, so the room increases its entropy while the ball itself is losing entropy?
But while "swimming up" against gravity, is the ball gaining entropy again (not as much as the previous bounce, because it doesn't get as high)?
What if the room is very hot and the ball is very cold? If heat is not flowing out of the ball, is it still losing internal energy while falling? If the room was exaggeratedly hot in contrast to the ball, would that help the ball gain more energy and hence bounce higher than before? | {
"domain": "physics.stackexchange",
"id": 32987,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, energy, entropy",
"url": null
} |
resource-recommendations, standard-model, effective-field-theory, hadron-dynamics
Title: A pedagogical exposition of the hadron physics? I am looking for a textbook/lecture notes/etc. on the basics of hadron physics.
I wish to understand how to construct the effective Lagrangian for pions and nucleons starting from the QCD Lagrangian. In other words, from
$$\mathcal{L}_{QCD}=i\bar{Q}\hat{D}Q+(\text{mass terms})+ (\text{gauge fields})$$
derive
$$\mathcal{L}_{hadrons}=-\frac{f_{\pi}^2}4Tr(\partial^\mu U\partial_\mu U^{\dagger})+i\bar{N}\hat{D}N+\text{(gauge interactions)+(higher-order terms)}$$
I've read Peskin and Schroeder's and Srednicki's textbooks and sort of understand the general idea, but I am still missing a lot of important points. Among the unclear things are the following:
What is the exact quantitative relation between the quark fields and the hadron fields? To my understanding, this is not a direct one. However, in order to make computations, something more than "hadrons are composed out of appropriate combinations of quarks" is needed.
How do we construct terms in the hadron Lagrangian? It is relatively clear to me how the pion terms appear, but the nucleon ones are confusing. For example, in Srednicki's book there are two different bases for nucleons, non-trivially related to each other, and it is not clear to me why we interpret one of them and not the other as "real" protons and neutrons. | {
"domain": "physics.stackexchange",
"id": 23258,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "resource-recommendations, standard-model, effective-field-theory, hadron-dynamics",
"url": null
} |
Maybe there is another way to do this without taking the derivative? :/
• The tangent cone method sends $(1,1)$ to the origin: $((y+1)^2+(x+1)^2-3)^3-(x+1)^3(y+1)^3+(y+1)^2+(x+1)^2$ expands to $y^6+6y^5+3x^2y^4+6xy^4+9y^4-x^3y^3+9x^2y^3+21xy^3-5y^3+3x^4y^2+9x^3y^2+9x^2y^2+3xy^2-11y^2+6x^4y+21x^3y+3x^2y-33xy+5y+x^6+6x^5+9x^4-5x^3-11x^2+5x$; taking the linear terms gives $5y+5x=0$, i.e. $(y-1)=-(x-1)$, or $y=-x+2$. Mar 4 '20 at 13:52
If you want a solution without derivatives for this specific problem:
Notice that the equation is symmetric in $$x$$ and $$y$$. This means the curve intersects the line $$y=x$$ perpendicularly, i.e. its tangent line at $$(a,a)$$ is $$y+x=2a$$.
For this case $$a=1$$, so the tangent line is $$y+x=2$$.
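A quick cross-check with implicit differentiation (a sketch; the curve and point are reconstructed from the tangent-cone comment above):

```python
import sympy as sp

x, y = sp.symbols("x y")
# Curve reconstructed from the expansion above:
# (x^2 + y^2 - 3)^3 - x^3*y^3 + x^2 + y^2 = 0, which passes through (1, 1).
F = (x**2 + y**2 - 3)**3 - x**3 * y**3 + x**2 + y**2
assert F.subs({x: 1, y: 1}) == 0

slope = (-sp.diff(F, x) / sp.diff(F, y)).subs({x: 1, y: 1})  # dy/dx = -F_x / F_y
print(slope)  # -1, so the tangent line at (1, 1) is y = -x + 2
```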
• This is a very insightful solution. +1 Mar 4 '20 at 14:22
Your derivative is not computed properly. You need to differentiate both sides implicitly with respect to $$x$$ and use chain and product rules.
Here, I'm representing the first derivative (general form) as $$y'$$: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9697854155523791,
"lm_q1q2_score": 0.8123116202851208,
"lm_q2_score": 0.8376199592797929,
"openwebmath_perplexity": 192.80209346141018,
"openwebmath_score": 0.8974630236625671,
"tags": null,
"url": "https://math.stackexchange.com/questions/3569103/how-find-tangent-line-of-the-given-curve-at-this-point"
} |
ros, buiding-errors
new_data[t] = (double)ros::Time::now().toSec();
ROS_DEBUG("Error: [x: %f y: %f z: %f]", new_data[x], new_data[y], new_data[z]);
// Integrate the data
double deltaT = (new_data[t] - old_data[t]);
integration[x] += new_data[x] * deltaT;
integration[y] += new_data[y] * deltaT;
integration[z] += new_data[z] * deltaT;
ROS_DEBUG("Integration: [deltaT: %f x: %f y: %f z: %f]", deltaT, integration[x], integration[y], integration[z]);
derivative[x] = (new_data[x] - old_data[x])/deltaT;
derivative[y] = (new_data[y] - old_data[y])/deltaT;
derivative[z] = (new_data[z] - old_data[z])/deltaT;
ROS_DEBUG("Derivative: [deltaT: %f x: %f y: %f z: %f]", deltaT, derivative[x], derivative[y], derivative[z]);
// Calculate the PID values
pid[x] = K_p[x] * new_data[x] + K_d[x] * derivative[x] + K_i[x] * integration[x];
pid[y] = K_p[y] * new_data[y] + K_d[y] * derivative[y] + K_i[y] * integration[y];
pid[z] = K_p[z] * new_data[z] + K_d[z] * derivative[z] + K_i[z] * integration[z]; | {
"domain": "robotics.stackexchange",
"id": 11718,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, buiding-errors",
"url": null
} |
gazebo, simulation, rosjava
INFO: Subscriber registered: Subscriber<Topic<TopicIdentifier</clock>,
TopicDescription<rosgraph_msgs/Clock, a9c97c1d230cfc112e270351a944ee47>>>
May 20, 2012 12:43:28 PM org.ros.internal.node.RosoutLogger info
Exception in thread "pool-1-thread-9" java.lang.NullPointerException at
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187) at
org.ros.time.ClockTopicTimeProvider.getCurrentTime(ClockTopicTimeProvider.java:58) at
org.ros.internal.node.DefaultNode.getCurrentTime(DefaultNode.java:377) at
org.ros.internal.node.RosoutLogger.publish(RosoutLogger.java:58) at
org.ros.internal.node.RosoutLogger.info(RosoutLogger.java:134) at
de.se.rosjava.Main.onStart(Main.java:28) at
org.ros.internal.node.DefaultNode$5.run(DefaultNode.java:511) at
org.ros.internal.node.DefaultNode$5.run(DefaultNode.java:508) at
org.ros.concurrent.ListenerCollection$1.run(ListenerCollection.java:117) at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at
java.lang.Thread.run(Thread.java:679) ^C | {
"domain": "robotics.stackexchange",
"id": 9463,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo, simulation, rosjava",
"url": null
} |
c#, security, aes
plainText = RijndaelSimple.Decrypt(cipherText,
passPhrase,
saltValue,
hashAlgorithm,
passwordIterations,
initVector,
keySize);
Console.WriteLine(String.Format("Decrypted : {0}", plainText));
}
}
This code uses the obsolete PasswordDeriveBytes class; use the Rfc2898DeriveBytes class instead (thanks @tom for highlighting this issue):
Rfc2898DeriveBytes password = new Rfc2898DeriveBytes(
passPhrase,
saltValueBytes,
passwordIterations);
Also, even though IV (initVectorBytes) may be publicly stored it should not be reused for different encryptions. You can derive it from pseudo-random bytes:
byte[] initVectorBytes = password.GetBytes(symmetricKey.BlockSize / 8);
Other than that, the encryption/decryption looks properly implemented, and I completely agree with the original code's author that the initialization and duplicated steps should be moved to the constructor. That will reduce the number of parameters in the Encrypt/Decrypt methods to one: the actual payload.
Depending on your specifics you can also expose methods accepting and returning encrypting/decrypting streams for large encryption volumes if necessary. | {
"domain": "codereview.stackexchange",
"id": 3001,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, security, aes",
"url": null
} |
lab-techniques, gel-electrophoresis, staining, purification, sds-page
(2) Why are there large amounts of streaking, especially in lanes 5, 8, 9, 10, and 11?
(3) I isolated my protein of interest because it was a fusion protein with GST (or just GST alone), and then eluted the protein off the glutathione beads via excess glutathione wash. Theoretically, the lanes for the elutions should be free of everything except the protein of interest and glutathione then. This mostly holds true for lanes 4 and 9, which should only have GST and glutathione. However, lanes 7 and 13 should only have GST-NFAT and glutathione. The lanes are not as clear however and seem to have stained numerous other proteins. What is going on?
(4) I was told by a professor that the second band in lane 13 (the GST-NFAT lane) was a degradation product. How can this be? The band does not correspond to the weights of NFAT alone or GST alone. Also, if the NFAT-GST was degraded, wouldn't there be two bands of degradation product?
(5) Why do the GST-alone lanes have what looks like doublet-bands? (Lanes 4, 10).
(6) If we eluted the proteins of interest with excess glutathione, which has a very low molecular weight, why doesn't it show up on the gel? Is it possible that it ran off before everything else?
(7) I have a list of the weights of the bands of the molecular marker. Yet, there is an extra band at the bottom that does not show up on my list. Is it possible my list is incomplete, or is there something else happening?
Finally, I have attached a picture of a failed SDS-PAGE gel. | {
"domain": "biology.stackexchange",
"id": 5200,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lab-techniques, gel-electrophoresis, staining, purification, sds-page",
"url": null
} |
structural-engineering, structural-analysis
Title: What is the difference between base shear and pseudo lateral load in seismic analysis of buildings? What is the difference between base shear and pseudo lateral load in the seismic analysis of buildings, or are they the same?
Base shear is the result of any type of analysis, including pseudo-lateral-load and seismic analysis. | {
"domain": "engineering.stackexchange",
"id": 3294,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "structural-engineering, structural-analysis",
"url": null
} |
quantum-mechanics, quantum-information, measurements, observables, quantum-measurements
Take an observable $A=\sum_a a\cdot P_a$ where $p_{\mathcal{A}}^0(a)=\langle\mathcal{A}|P_a|\mathcal{A}\rangle$ is the probability of finding measurement result $a$ when measuring the state $|\mathcal{A}\rangle$ (and likewise for $\mathcal{D}$). This measurement so far obviously isn't coarse grained, but gives a discrete probability distribution.
Now, to achieve coarse-grainedness, one (graphically speaking) puts gaussian density functions on top of the discrete "peaks", turning the probability distribution continuous. Formally, one calculates $p_{\mathcal{A}}^{\sigma}(x)=\sum_a g_{a,\sigma}(x)p_{\mathcal{A}}^{0}(a)$, where $g$ is the gaussian density function with mean $a$ and width $\sigma$. The $\sigma$ gives the coarse-grainedness or resolution of the measurement.
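Numerically, the smoothing step can be sketched like this (with a made-up spectrum and outcome probabilities, not tied to any particular observable):

```python
import numpy as np

def gaussian(xs, mean, sigma):
    return np.exp(-((xs - mean) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def coarse_grain(outcomes, probs, sigma, xs):
    """p^sigma(x) = sum_a g_{a,sigma}(x) * p^0(a): a Gaussian density on each peak."""
    return sum(p * gaussian(xs, a, sigma) for a, p in zip(outcomes, probs))

# Made-up discrete spectrum and outcome probabilities (assumptions for the demo):
a_vals = [-1.0, 0.0, 2.0]
p0 = [0.2, 0.5, 0.3]

xs = np.linspace(-10.0, 10.0, 20001)
p_sigma = coarse_grain(a_vals, p0, sigma=0.5, xs=xs)
total = p_sigma.sum() * (xs[1] - xs[0])  # Riemann sum of the smoothed density
print(total)  # ~1.0: the coarse-grained distribution is still normalized
```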
One then carries on in defining $$P^\sigma[|\mathcal{A}\rangle,|\mathcal{D}\rangle]=\frac{1}{2}+\frac{1}{4}\int_{-\infty}^{\infty}|p_{\mathcal{A}}^{\sigma}(x)-p_{\mathcal{D}}^{\sigma}(x)|\text{d}x$$ as the probability of being able to tell the states apart by said measurement. | {
"domain": "physics.stackexchange",
"id": 95105,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-information, measurements, observables, quantum-measurements",
"url": null
} |
python, django
class BusinessType(models.Model):
name = models.CharField(max_length=100) # e.g., 'Restaurant' or 'Sport Fitness'
possible_tags = models.ManyToManyField('Tag')
class Tag(models.Model):
base_name = models.CharField(max_length=100) # e.g. 'bulgarian_kitchen'
description = models.ManyToManyField('Text')
You still have access to all the same information, but it's all configurable through the database rather than through your code. E.g., to get a restaurant and check if it has an Italian kitchen,
restaurant = Business.objects.get(name__language='en', name__text='YumBob', kind__name='Restaurant')
has_italian_kitchen = restaurant.tags.filter(base_name='italian_kitchen').exists()
or to find all beauty salons that offer laser epilation,
epilators = Business.objects.filter(kind__name='Beauty Salon', tags__base_name='laser_epilation')
You can also dynamically create forms, etc. for each business type by instantiating a base Form and then adding boolean fields to it for each tag in my_business.kind.possible_tags. And, of course, you could create sidecar tables for particular business types to store additional, non-boolean fields. E.g.,
class RestaurantExtraData(models.Model):
business = models.OneToOneField('Business', primary_key=True, on_delete=models.CASCADE)
num_seats = models.IntegerField()
This way, you minimize the amount of special code required for different businesses, but retain the ability to add extra information where necessary. All of this together avoids your need for the stacked if/else statements and the large quantities of hard-coded strings in your views and templates, and allows you to configure everything using data rather than code. | {
"domain": "codereview.stackexchange",
"id": 36619,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, django",
"url": null
} |
Does that help to see how the spring force acting on m1 will change with time? Will that help in finding the time dependence of the normal force acting on m1?
5. Aug 20, 2013
### Saitama
Yes, it will be SHM.
No.
Won't the normal force be equal to the spring force for the time $m_1$ is attached to wall?
6. Aug 20, 2013
### TSny
Yes, that's correct.
7. Aug 20, 2013
### Saitama
Can you please check my equations in post #3?
8. Aug 20, 2013
### TSny
Looks like the equations are correct to me. But I don't really like using $x$ for the position of mass 2 relative to the wall. Would you mind using $x_2$ instead? That way, you can let $x$ represent the amount of stretch of the spring from its natural length. Then you can just write Hooke's law in the usual form $F = -kx$. Note that $x_2 = l+x$.
The reason for harping on the notation is that you are going to be interested in the external force from the wall which, as you already said, is determined by the spring force which is most simply expressed in terms of $x$.
Also, you should be able to write down the time dependence of $x$ immediately from your knowledge of simple harmonic motion. (At least up to the time that m1 leaves the wall.) No need to set up and solve a differential equation, unless you just want to.
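For concreteness: if the spring starts compressed by an amount $x_0$ at $t=0$ (an assumed initial condition, since the original problem statement isn't quoted in this excerpt), the SHM result can be written down directly as $x(t) = -x_0\cos(\omega t)$ with $\omega = \sqrt{k/m_2}$. The wall then pushes on m1 with $N(t) = -kx(t) = kx_0\cos(\omega t)$, which first reaches zero at $\omega t = \pi/2$, the moment the spring returns to its natural length and m1 leaves the wall.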
9. Aug 20, 2013
### TSny
Just realized that since you can write down the stretch of the spring $x(t)$ from knowledge of SHM, you can then easily get $x_2(t)$ and therefore $x_{CM}(t)$ up until m1 leaves the wall. After that, the motion of the CM is simple.
I solved it originally by working with the external force from the wall. But that's not necessary. Sorry for not seeing that earlier.
10. Aug 21, 2013
### Saitama | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9697854146791214,
"lm_q1q2_score": 0.8038927680207317,
"lm_q2_score": 0.8289388104343892,
"openwebmath_perplexity": 377.23858249326395,
"openwebmath_score": 0.6157432794570923,
"tags": null,
"url": "https://www.physicsforums.com/threads/two-blocks-and-a-spring-system-finding-equation-of-motion.706435/"
} |
homework-and-exercises, units, mass-energy, dimensional-analysis, spacetime-dimensions
We can also see this more generally by the formal definition of work, which is the definition of change in energy:
$$W=\int_C \vec{F}\cdot d\vec{s}$$
for an object being acted on by a force $\vec{F}$ moving along a path $C$ with arclength parameter $d\vec{s}$. This is the formal, most general definition, and holds no matter how many dimensions of space you have. Once again, you can see that this definition involves a dot product. The dot product takes two vectors in any number of dimensions and outputs a one-dimensional quantity (i.e. a number). The work only cares about one component of the force, namely, the one pointing along the one-dimensional path that the particle is taking. No matter how many spatial dimensions this one-dimensional path is embedded in, the work only cares about things happening along one of those dimensions. As such, the units of work should always be the same, and, since work has the same units as energy (otherwise we wouldn't be able to add them together), the units of energy should also always be the same. | {
"domain": "physics.stackexchange",
"id": 52452,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, units, mass-energy, dimensional-analysis, spacetime-dimensions",
"url": null
} |
coordinate-systems, conformal-field-theory, unruh-effect
$$x^\mu = 2R \frac{X^\mu - b^\mu X^2}{1-2b \cdot X + b^2 X^2} + Rb^\mu,$$
with $b^\mu = (0,-1,\vec{0})$.
I picked the minus sign in the definition of b to make the fact that it's made of a special conformal transformation manifest.
I found it in Rudolf Haag's book Local Quantum Physics in section V.4.2.
EDIT: Also notice that the equation makes no dimensional sense; the coordinates on the right should actually be $\frac{X^\mu}{2R}$.
Absorbing this sign into the parametrising vector $b^\mu$ gives exactly Casini-Huerta-Myers' transformation except for a sign.
[Chronology note: this edit came after OP's comment.] | {
"domain": "physics.stackexchange",
"id": 29450,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "coordinate-systems, conformal-field-theory, unruh-effect",
"url": null
} |
Option (B)
Math Expert
Joined: 02 Sep 2009
Posts: 58410
Re: The amounts of time that three secretaries worked on a [#permalink]
07 Feb 2012, 05:05
2
fiendex wrote:
The amounts of time that three secretaries worked on a special project are in the ratio of 1 to 2 to 5. If they worked a combined total of 112 hours, how many hours did the secretary who worked the longest spend on the project?
A) 80
B) 70
C) 56
D) 16
E) 14
Since the amounts of time that the secretaries worked are in the ratio of 1 to 2 to 5, the times are $$x$$, $$2x$$, and $$5x$$ for some number x --> x+2x+5x=8x=112 --> x=14 --> the secretary who worked the longest spent 5x=70 hours.
Manager
Joined: 10 Jan 2010
Posts: 138
Location: Germany
Concentration: Strategy, General Management
Schools: IE '15 (M)
GPA: 3
WE: Consulting (Telecommunications)
Re: The amounts of time that three secretaries worked on a [#permalink]
07 Feb 2012, 06:53
1
All three secretaries worked 112h and with a ratio of 1to2to5
8x = 112h
x = 14h
5*14h = 70h --> B
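The same arithmetic as a quick check:

```python
total_hours = 112
unit = total_hours // (1 + 2 + 5)  # one ratio part: 112 / 8 = 14
longest = 5 * unit                 # the secretary with 5 parts worked longest
print(unit, longest)  # 14 70
```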
Math Expert
Joined: 02 Sep 2009
Posts: 58410
Re: The amounts of time that three secretaries worked on a [#permalink]
23 May 2017, 02:48
fiendex wrote:
The amounts of time that three secretaries worked on a special project are in the ratio of 1 to 2 to 5. If they worked a combined total of 112 hours, how many hours did the secretary who worked the longest spend on the project? | {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9793540716711546,
"lm_q1q2_score": 0.8733987855511554,
"lm_q2_score": 0.8918110526265554,
"openwebmath_perplexity": 7088.721673432217,
"openwebmath_score": 0.6598981022834778,
"tags": null,
"url": "https://gmatclub.com/forum/the-amounts-of-time-that-three-secretaries-worked-on-a-127170.html"
} |
algorithm, c, graph, pathfinding, dijkstra
if (dary_heap_add(p_open_backward,
target_vertex_id,
0.0) != RETURN_STATUS_OK) {
CLEAN_SEARCH_STATE;
TRY_REPORT_RETURN_STATUS(RETURN_STATUS_NO_MEMORY);
return NULL;
}
if (distance_map_put(p_distance_forward,
source_vertex_id,
0.0) != RETURN_STATUS_OK) {
CLEAN_SEARCH_STATE;
TRY_REPORT_RETURN_STATUS(RETURN_STATUS_NO_MEMORY);
return NULL;
}
if (distance_map_put(p_distance_backward,
target_vertex_id,
0.0) != RETURN_STATUS_OK) {
CLEAN_SEARCH_STATE;
TRY_REPORT_RETURN_STATUS(RETURN_STATUS_NO_MEMORY);
return NULL;
}
if (parent_map_put(p_parent_forward,
source_vertex_id,
source_vertex_id) != RETURN_STATUS_OK) {
CLEAN_SEARCH_STATE;
TRY_REPORT_RETURN_STATUS(RETURN_STATUS_NO_MEMORY);
return NULL;
}
if (parent_map_put(p_parent_backward,
target_vertex_id,
target_vertex_id) != RETURN_STATUS_OK) {
CLEAN_SEARCH_STATE;
TRY_REPORT_RETURN_STATUS(RETURN_STATUS_NO_MEMORY);
return NULL;
}
/* End: initialize the state. */
/* Main loop: */
while (dary_heap_size(p_open_forward) > 0 &&
dary_heap_size(p_open_backward) > 0) {
if (p_touch_vertex_id) {
/* There is somewhere a vertex at which both the search
frontiers are meeting: */
temporary_path_length =
distance_map_get(
p_distance_forward,
dary_heap_min(p_open_forward))
+
distance_map_get(
p_distance_backward,
dary_heap_min(p_open_backward)); | {
"domain": "codereview.stackexchange",
"id": 43105,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm, c, graph, pathfinding, dijkstra",
"url": null
} |
virology, retrovirus, treatment, gene-therapy
Title: Using viruses to treat altered or misconfigured DNA Consider how a Retrovirus can modify existing cell DNA to 'execute instructions' on its behalf.
I wondered: Why can we not utilize lab-generated viruses to infect sick patients with a 'healthy' virus that would rewrite bad segments of DNA with something either correct or non-malus? This of course is assuming that we have a solid understanding of the relationship between genomes and precise viral functions, additionally that we had some sort of gene replacement functionality.
Consider how a Retrovirus can modify existing cell DNA to 'execute instructions' on its behalf.
Not only retroviruses do that. Actually, pretty much all viruses use the machinery of the host cell on their behalf.
Why can we not utilize lab-generated viruses to infect sick patients with a 'healthy' virus that would rewrite bad segments of DNA with something either correct or non-malus?
We actually do that. We use viruses (incl. retrovirus but not only) to insert DNA segments into eukaryote genome. See vectors in gene therapy > viruses
Retrovirus to reverse another retrovirus
Note that I don't really understand your title, so I ignored it a bit, focusing on the content of the post. | {
"domain": "biology.stackexchange",
"id": 7865,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "virology, retrovirus, treatment, gene-therapy",
"url": null
} |
organic-chemistry, stability, hybridization
Title: Comparing stability of 2-propenyl cation and 1-propyl cation
Compare the stability of 2-propenyl cation and 1-propyl cation:
$$\underset{\text{2-propenyl cation}}{\ce{CH2=CH-CH2+}}\qquad \underset{\text{1-propyl cation}}{\ce{CH3-CH2-CH2+}}$$
In the 2-propenyl cation we get resonance, so it seems to be more stable; but the double bond (or, more precisely, the $\mathrm{sp}^2$ carbon) in the 2-propenyl cation introduces a more electronegative carbon atom, which makes the cation less stable.
In the case of the 1-propyl cation, the cation is stabilized by the inductive effect, but we know resonance is more stabilising than the inductive effect.
I would answer that the 1-propyl cation is more stable than the 2-propenyl cation, because the $\mathrm{sp}^2$ (more electronegative) carbon in the 2-propenyl cation makes it less stable.
I'm really confused which factor is more effective: stability of resonance or destabilisation due to the $\mathrm{sp}^2$-hybridized carbon in 2-propenyl cation.
What would be a valid zero-order approximation for this?
Stability order: $2$-propenyl cation > $1$-propyl cation.
Resonance has way more effect on the cation than that tiny-whiny little weak $\mathrm {sp}^2$ hybridised carbon | {
"domain": "chemistry.stackexchange",
"id": 14470,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, stability, hybridization",
"url": null
} |
quantum-field-theory, gauge-theory, representation-theory
The redundancy is that, like usual in physics, we tend to use components to describe stuff, so if $\psi(p)$ is a section of the vector bundle, eg. it is a field that maps to a point $p$ an element of $E_p$, and $e_A(p)$ is a local basis field of the vector bundle, then you can decompose $\psi$ as $$ \psi(p)=\psi^A(p)e_A(p), $$ where the index $A$ is often called an "internal index" (and its range does not have to be the same as the spacetime manifold's dimension).
Now, what matters is $\psi$, not the components of $\psi$, so if you make a change of basis $e_A(p)\mapsto g_A(p)=M^B_{\ \ A}(p)e_B(p)$ for some invertible matrix $M$, the new components $\tilde{\psi}^A(p)=(M^{-1})^A_{\ \ B}(p)\psi^B(p)$ will describe the same physics as the old components.
As it usually happens, there is some additional structure imposed on the $E_p$s, which select some preferred bases, like there is a sense of "orthonormality" in each $E_p$, and you want only to use "orthonormal bases", so usually the matrices $M$ are not just invertible matrices, but are elements of some more restrictive group, but the point remains - that there are physically equivalent choice of bases, which are switched around by elements of a Lie-group. This is the redundancy.
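This basis/component see-saw is plain linear algebra, and can be sketched numerically (hypothetical 2×2 numbers; nothing here is specific to any gauge group): the invariant object $\psi=\psi^A e_A$ is unchanged when the basis and the components transform oppositely.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmat(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E = [[1.0, 1.0],              # basis vectors e_A stored as columns
     [0.0, 1.0]]
psi_comp = [2.0, 3.0]         # components psi^A
psi = matvec(E, psi_comp)     # the invariant section psi = psi^A e_A

M = [[2.0, 1.0],              # invertible change of basis (det M = 1)
     [1.0, 1.0]]
Minv = [[1.0, -1.0],
        [-1.0, 2.0]]

G = matmat(E, M)                    # new basis g_A = M^B_A e_B
new_comp = matvec(Minv, psi_comp)   # new components (M^-1)^A_B psi^B

# same physical object in either basis
assert all(abs(p - q) < 1e-12 for p, q in zip(matvec(G, new_comp), psi))
```

Restricting $M$ to a subgroup (orthogonal, unitary, …) changes nothing in this check; it only narrows which bases count as "preferred".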
In this sense, none of the gauge fields are 4-vectors, unless the internal vector bundle is the tangent bundle itself. Which is true in general relativity, BUT in GR we do not have any dynamical vector fields in the theory. | {
"domain": "physics.stackexchange",
"id": 31789,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, gauge-theory, representation-theory",
"url": null
} |
# The quotient when a certain number is divided by 2/3 is 9/2. What is
Director
Status: I don't stop when I'm Tired,I stop when I'm done
Joined: 11 May 2014
Posts: 537
GPA: 2.81
WE: Business Development (Real Estate)
Updated on: 17 Jun 2017, 13:03
Question Stats:
84% (00:36) correct 16% (00:38) wrong based on 766 sessions
| {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9681411498292727,
"lm_q1q2_score": 0.8322758449234843,
"lm_q2_score": 0.8596637433190939,
"openwebmath_perplexity": 7479.736597002335,
"openwebmath_score": 0.6545032262802124,
"tags": null,
"url": "https://gmatclub.com/forum/the-quotient-when-a-certain-number-is-divided-by-2-3-is-9-2-what-is-242867.html"
} |
# Integral involving the CDF of normal distribution
I am doing some research and got stuck in solving the following integral (which I am not sure whether it has a closed form solution or not, I hope it has:))
Here is the integral:
$\int_{-\infty}^{+\infty} e^{-(x-a)^2}N(cx+d)dx$
where $N(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{z^2}{2}}dz$
any help would greatly be appreciated. (by the way I am not a grad student in math, so please be specific in answers) Thanks a lot.
When $e^{-(x-a)^2}$ is the derivative of $N(cx+d)$, i.e. $c=\sqrt{2}$,$d=a\sqrt{2}$, then it has a closed form solution of $\frac{\sqrt{\pi}}{8}$, which you can obtain using integration by parts. For the more general case I am unsure, but the indefinite integral doesn't work in any CAS. – Lucas Jun 8 '13 at 1:33
If $a=0$ and $d=0$, then, for any $c$, the solution is: $\frac{\sqrt{\pi }}{2}$ – wolfies Jun 8 '13 at 1:42
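These special cases can be checked numerically, and they are consistent with a candidate closed form (my own conjecture, stated here as an assumption to verify: writing $e^{-(x-a)^2}$ as $\sqrt{\pi}$ times the density of a normal with mean $a$ and variance $1/2$ suggests $\int e^{-(x-a)^2}N(cx+d)\,dx = \sqrt{\pi}\,N\big((ca+d)/\sqrt{1+c^2/2}\big)$ for $c>0$, which indeed reduces to $\sqrt{\pi}/2$ when $a=d=0$):

```python
import math

def N(x):  # standard normal CDF, as defined in the question
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lhs(a, c, d, lo=-15.0, hi=15.0, n=60000):
    # trapezoidal approximation of the integral; the integrand is
    # negligible outside [-15, 15] because of the Gaussian factor
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        x = lo + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(-(x - a) ** 2) * N(c * x + d)
    return total * h

def rhs(a, c, d):  # conjectured closed form (see lead-in; treat as an assumption)
    return math.sqrt(math.pi) * N((c * a + d) / math.sqrt(1.0 + c * c / 2.0))

assert abs(lhs(0.0, 1.0, 0.0) - math.sqrt(math.pi) / 2.0) < 1e-6  # a=d=0 case
assert abs(lhs(0.3, 1.2, -0.4) - rhs(0.3, 1.2, -0.4)) < 1e-6
```

The parameter values in the last check are arbitrary test numbers, not from the question.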
In what follows, I am assuming that $c > 0$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9799765587105448,
"lm_q1q2_score": 0.8101581852747594,
"lm_q2_score": 0.8267118004748677,
"openwebmath_perplexity": 73.20363002752423,
"openwebmath_score": 0.9941381216049194,
"tags": null,
"url": "http://math.stackexchange.com/questions/414355/integral-involving-the-cdf-of-normal-distribution"
} |
data-structures, graphs, proof-techniques, combinatorics, binary-trees
Title: How can I prove that a complete binary tree has $\lceil n/2 \rceil$ leaves? Given a complete binary tree with $n$ nodes. I'm trying to prove that a complete binary tree has exactly $\lceil n/2 \rceil$ leaves.
I think I can do this by induction.
For $h(t)=0$, the tree is empty. So there are no leaves and the claim holds for an empty tree.
For $h(t)=1$, the tree has 1 node, that also is a leaf, so the claim holds.
Here I'm stuck, I don't know what to choose as induction hypothesis and how to do the induction step. If the statement you're trying to prove by induction is
For all positive integers $n$, a complete binary tree with $n$ nodes has $\lceil{n/2}\rceil$ leaves.
then the induction hypothesis must be
For all positive integers $k<n$, a complete binary tree with $k$ nodes has $\lceil{k/2}\rceil$ leaves.
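The claim itself is easy to check empirically before proving it (a sketch; "complete" is taken here in the heap sense, i.e. every level full except possibly the last, filled left to right, so the nodes can be numbered $1..n$ with children $2i$ and $2i+1$):

```python
import math

def leaves_in_complete_tree(n):
    # node i is a leaf iff its left child 2i does not exist
    return sum(1 for i in range(1, n + 1) if 2 * i > n)

# the claim holds for every size we try
assert all(leaves_in_complete_tree(n) == math.ceil(n / 2) for n in range(1, 500))
```

In this numbering the leaf count is $n-\lfloor n/2\rfloor=\lceil n/2\rceil$ directly, which is essentially the non-inductive proof.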
Similarly, if the statement you're trying to prove by induction is
For all positive integers $n$, a hoosegow with $n$ whatsits has $2^{\lfloor\sqrt{n}\rceil!}\cdot n^\pi$ nubbleframets.
then the induction hypothesis must be
For all positive integers $k<n$, a hoosegow with $k$ whatsits has $2^{\lfloor\sqrt{k}\rceil!}\cdot k^\pi$ nubbleframets. | {
"domain": "cs.stackexchange",
"id": 267,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "data-structures, graphs, proof-techniques, combinatorics, binary-trees",
"url": null
} |
c#, parsing, antlr
ReturnType = returnTypeClause == null
? ReservedKeywords.Variant
: returnTypeClause.type().GetText();
}
}
public IMemberContext MemberContext{get {return (IMemberContext)base.Context;}}
public string ReturnType { get; private set;}
public bool IsImplicitReturnType { get; private set; }
}
And magic is done, all the mess is gone. Let the Context object be the majesty of your class, let it proudly be the object that you may easily inspect to get those nice properties values while debugging, let it rule your class ;)!
I don't know about the details of your project but let's imagine that you could say:
"But it doesn't make sense that my ParserRuleContext implements that interface, because not all rules have accessibility etc...". You could also pass the same object twice in the same ctor, one time for the rule, other time for the Context:
public class ProcedureNode : Node
{
public ProcedureNode(ParserRuleContext ruleContext, IMemberContext memberContext, string scope, string localScope)
: base(ruleContext, scope, localScope)
{
MemberContext = memberContext;
if (memberContext.AsType != null)
{
var returnTypeClause = memberContext.AsType();
IsImplicitReturnType = returnTypeClause == null;
ReturnType = returnTypeClause == null
? ReservedKeywords.Variant
: returnTypeClause.type().GetText();
}
}
public IMemberContext MemberContext{get; private set;}
public string ReturnType { get; private set;}
public bool IsImplicitReturnType { get; private set; }
}
Not the best, but still it's way better than your previous attempt IMHO. | {
"domain": "codereview.stackexchange",
"id": 11809,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, parsing, antlr",
"url": null
} |
quantum-mechanics, homework-and-exercises, hamiltonian, time-evolution
&=e^{\frac{-iH(t-t_0)}{\hbar}}\psi(x,t)
\end{align}
$$
we only need to know the value of $H$ at $t=t_0$. If this is true, then $U(t,t_0)=\exp\left[\frac{-iH(t-t_0)}{\hbar}\right]$ should hold whether $H$ is time-dependent or not. What have I done wrong here? The first remark is that, at a rigorous level, you are not allowed to do all those manipulations freely. However, let's suppose for a moment that you would, for everything is extremely regular and well-behaved.
The (omitted) starting hypothesis is that
$$i\partial_t\psi(t)=H(t)\psi(t)\; .$$
If we iterate the derivation, we do not get simply $H(t)^2\psi(t)$, but rather (this is a simple application of the product rule, that actually works also in this case)
$$(i\partial_t)^2\psi(t)=i\dot{H}(t)\psi(t)+H(t)^2\psi(t)\; .$$
As we can easily see, this is where the OP's argument goes wrong, since the derivative of $H(t)$ does not vanish in general for time dependent operators.
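A scalar toy example makes the extra $i\dot{H}$ term concrete (a sketch, not the operator-valued case: for scalar $H(t)=t$ with $\hbar=1$, the Schrödinger equation is solved exactly by $\psi(t)=e^{-it^2/2}$, and the identity can be checked by finite differences):

```python
import cmath

# H(t) = t, hbar = 1, so psi(t) = exp(-i t^2 / 2) solves i dpsi/dt = H(t) psi
psi = lambda t: cmath.exp(-1j * t * t / 2.0)

t, h = 0.7, 1e-4
second_deriv = (psi(t + h) - 2.0 * psi(t) + psi(t - h)) / (h * h)
left = -second_deriv                      # (i d/dt)^2 psi = -psi''
right = 1j * psi(t) + t * t * psi(t)      # i*Hdot*psi + H(t)^2*psi, with Hdot = 1
assert abs(left - right) < 1e-5           # the i*Hdot term is essential:
assert abs(left - t * t * psi(t)) > 0.5   # H^2*psi alone misses by |i*psi| = 1
```

Here the sample point $t=0.7$ and step $h$ are arbitrary test choices.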
I want to remark again, however, that this is not the proper way of dealing with these type of time-dependent equations. The proper way is, however, very complicated and it requires a lot of advanced functional analysis. If you are curious, the most common method is due to T.Kato, and can be found e.g. in this book. | {
"domain": "physics.stackexchange",
"id": 25860,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, homework-and-exercises, hamiltonian, time-evolution",
"url": null
} |
frequency-response, minimum-phase
The pole contributes a phase change of $-\pi/2$ as $\omega$ moves from zero to infinity. The left half-plane zero of $H_1(s)$ contributes a phase change of $\pi/2$, resulting in a net phase change of zero, whereas the right half-plane zero of $H_2(s)$ contributes a phase change of $-\pi/2$, resulting in a total phase change of $-\pi$.
Another way to see the same thing is to note that any causal and stable transfer function can be written as the product of the minimum-phase transfer function with the same magnitude and a causal and stable allpass:
$$H(s)=H_m(s)H_a(s)\tag{4}$$
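As an illustration of the factorization $(4)$ (a first-order sketch with assumed transfer functions, echoing the zero-flipping example above: $H_1$ has its zero at $s=-a$ in the left half-plane, $H_2$ at $s=+a$):

```python
import cmath

a, b = 1.0, 2.0
H1 = lambda s: (s + a) / (s + b)    # minimum-phase
H2 = lambda s: (a - s) / (s + b)    # same magnitude, right half-plane zero
Ha = lambda s: (a - s) / (a + s)    # causal, stable allpass; H2 = H1 * Ha

for w in (0.1, 0.5, 1.0, 2.0, 5.0, 20.0):
    s = 1j * w
    assert abs(abs(H1(s)) - abs(H2(s))) < 1e-12  # identical magnitudes
    assert abs(abs(Ha(s)) - 1.0) < 1e-12         # |Ha(jw)| = 1
    assert cmath.phase(Ha(s)) <= 0.0             # non-positive allpass phase
    assert abs(H2(s) - H1(s) * Ha(s)) < 1e-12    # the decomposition (4)
```

The phase of this allpass is $-2\arctan(\omega/a)$, which is exactly the extra phase lag of the non-minimum-phase system.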
It can be shown that the phase of a causal and stable allpass is always non-positive for $\omega\in[0,\infty)$, and, consequently, the phase lag of the minimum-phase system is always less than or equal to the phase lag of any other causal and stable system with the same magnitude response. | {
"domain": "dsp.stackexchange",
"id": 6978,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "frequency-response, minimum-phase",
"url": null
} |
statistical-mechanics, phase-space, partition-function
All of this discussion has been classical, which is fine, because we'd like to be able to do statistical mechanics classically. However, you could also ask why this choice of measure on classical phase space yields statistical results close to what we'd derive using quantum statistical mechanics. There are a few different approaches to the foundations of quantum statistical mechanics, e.g. https://arxiv.org/pdf/quant-ph/0511225.pdf. Anyway, I think the key feature of the measure $dx \, dp$ that makes classical statistical mechanical results agree with quantum statistical mechanical results is that the number of orthogonal quantum states corresponding to some small region of phase space is approximately proportional to the volume of the region as measured using $dx \, dp$. | {
"domain": "physics.stackexchange",
"id": 44132,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, phase-space, partition-function",
"url": null
} |
special-relativity, spacetime
Edit
Marek's answer is really good (I suggest you read it and its references now!), but not quite what I was thinking of.
I'm looking for an answer (or a reference) that shows the correspondence using only/mainly simple algebra and geometry. An argument that a smart high school graduate would be able to understand. I will first describe the naive correspondence that is assumed in usual literature and then I will say why it's wrong (addressing your last question about hidden assumptions) :)
The postulate of relativity would be completely empty if the inertial frames weren't somehow specified. So here there is already hidden an implicit assumption that we are talking only about rotations and translations (which imply that the universe is isotropic and homogeneous), boosts and combinations of these. From classical physics we know there are two possible groups that could accommodate these symmetries: the Galilean group and the Poincaré group (there is a catch here I mentioned; I'll describe it at the end of the post). Constancy of the speed of light then implies that the group of automorphisms must be the Poincaré group and consequently, the geometry must be Minkowskian.
[Sidenote: how to obtain geometry from a group? You look at its biggest normal subgroup and factor by it; what you're left with is a homogeneous space that is acted upon by the original group. Examples: $E(2)$ (symmetries of the Euclidean plane) has the group of (improper) rotations $O(2)$ as the normal subgroup and $E(2) / O(2)$ gives ${\mathbb R}^2$. Similarly $O(1,3) \ltimes {\mathbb R}^4 / O(1,3)$ gives us Minkowski space.]
The converse direction is trivial because it's easy to check that the Minkowski space satisfies both of Einstein postulates. | {
"domain": "physics.stackexchange",
"id": 14409,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, spacetime",
"url": null
} |
c++, generics
template <typename Container>
struct ContainerAppender{
Container operator()(Container const& lhs, Container const& rhs)
{
Container dst;
std::merge(std::begin(lhs), std::end(lhs),
std::begin(rhs), std::end(rhs),
std::back_inserter(dst));
return dst;
}
};
template <typename Container>
struct monoid<Container, ContainerAppender<Container>> {
static Container mempty(){ return Container{}; }
static Container mappend(Container const& lhs,
Container const& rhs){
return ContainerAppender<Container>{}(lhs, rhs);
}
static constexpr bool is_instance = true;
using type = Container;
};
template <typename T, typename BinaryOp>
constexpr bool is_monoid_v = monoid<T, BinaryOp>::is_instance;
template <typename T, typename BinaryOp>
using is_monoid = std::enable_if_t<is_monoid_v<T, BinaryOp>>;
template<template <typename> typename BinaryOp,
typename ForwardIt,
typename = is_monoid<typename ForwardIt::value_type,
BinaryOp<typename ForwardIt::value_type>>>
auto mconcat(const ForwardIt begin, const ForwardIt end)
{
using T = typename ForwardIt::value_type;
T out{monoid<T, BinaryOp<T>>::mempty()};
for(ForwardIt it = begin; it != end; it++){
out = monoid<T, BinaryOp<T>>::mappend(out,*it);
}
return out;
} | {
"domain": "codereview.stackexchange",
"id": 30258,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, generics",
"url": null
} |
organic-chemistry, reaction-mechanism, wittig-reactions, stereoselectivity
References
Vedejs, E.; Marth, C.F. Mechanism of the Wittig reaction: the role of substituents at phosphorus. J. Am. Chem. Soc. 1988, 110 (12), 3948–3958. DOI: 10.1021/ja00220a037.
Vedejs, E.; Fleck, T. J. Kinetic (not equilibrium) factors are dominant in Wittig reactions of conjugated ylides. J. Am. Chem. Soc. 1989, 111 (15), 5861–5871. DOI: 10.1021/ja00197a055.
Aggarwal, V. K.; Fulton, J. R.; Sheldon, C. G.; de Vicente, J. Generation of Phosphoranes Derived from Phosphites. A New Class of Phosphorus Ylides Leading to High E Selectivity with Semi-stabilizing Groups in Wittig Olefinations. J. Am. Chem. Soc. 2003, 125 (20), 6034–6035. DOI: 10.1021/ja029573x.
Robiette, R.; Richardson, J.; Aggarwal, V. K.; Harvey, J. N. On the Origin of High E Selectivity in the Wittig Reaction of Stabilized Ylides: Importance of Dipole−Dipole Interactions. J. Am. Chem. Soc. 2005, 127 (39), 13468–13469. DOI: 10.1021/ja0539589.
Robiette, R.; Richardson, J.; Aggarwal, V. K.; Harvey, J. N. Reactivity and Selectivity in the Wittig Reaction: A Computational Study. J. Am. Chem. Soc. 2006, 128 (7), 2394–2409. DOI: 10.1021/ja056650q. | {
"domain": "chemistry.stackexchange",
"id": 8752,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, reaction-mechanism, wittig-reactions, stereoselectivity",
"url": null
} |
quantum-mechanics, electromagnetism, electrons, molecules
When talking about a molecule the concept of a single atom is not valid with respect to electron distribution.
You need to distinguish between the (notional) position of the atom defined by the mass of the atom, made up almost entirely of a quite localized nucleus and the atom as a nucleus with electrons around it, where the electron cloud cannot be as well localized.
Thus you can identify an atom in a molecule in the sense of a well defined nuclear center with an associated nucleus and position, but it is not at all as simple to define the distribution of the electrons associated with each atom. As electrons cannot be distinguished it makes no sense to talk about an electrically neutral atom in a molecule. Only the overall neutrality of the molecule is meaningful.
does the repulsion inside the molecule between the individual atoms come from part PEP and part EM repulsion or just PEP?
When constructing a wavefunction for electrons in an atom or molecule you would use a form that allows the PEP to be "embedded" in the structure of the wavefunction. See for example this Wikipedia page on the Slater determinant.
So we don't explicitly have a force as a result of the PEP, but in order to satisfy the mathematical requirements the PEP imposes on the wavefunction, the effect of the PEP on energy levels falls out of solving the equations with these "embedded" structures.
Typically we cannot analytically calculate wavefunctions like these and rely on numerical methods instead. Calculating the effect of the PEP in isolation is therefore not practical. To do so would require calculating with an entirely different wavefunction structure which does not incorporate PEP principles, and you could not meaningfully relate the two sets of results. I'm not sure you could even say that the PEP is always repulsive in effect in a molecule.
So the results depends on the effects of electrical attraction (between electrons of nuclei), electrical repulsion (between electrons) and the effects of the PEP via the mathematical requirements for the structure of the wavefunction. | {
"domain": "physics.stackexchange",
"id": 59483,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, electromagnetism, electrons, molecules",
"url": null
} |
bash, shell, sh
6. Shell builtins
If available, prefer them over external commands.
For example, if you use a recent version of bash and portability is not a big deal, you could rewrite uppercased_name=`sed -e "s/\b\(.\)/\u\1/g" <<< "${name}"` as uppercased_name="${name^}"
7. Arguments parsing
First, I recommend you to read the Utility Conventions chapter of the POSIX specification. It's useful to understand how argument syntax works in standard utilities and serves as a guide to the rules you should follow when you create some program.
The goal is to make sure your script follows the program [options] [operands] syntax. I mean, instead of relying on the position of arguments to assign variables (as in name=${1}), your script should parse --name foo (or something similar) to change the value of the name variable.
There are basically two approaches:
Using utilities like getopts
Pros:
getopts is a POSIX builtin command.
It can handle things like -abc without effort.
Easy and intuitive.
Cons:
Can't handle long options.
Writing your own parser
Pros:
Full control of what and how it is parsed.
Handling of long options.
Could implement non-standard syntax.
Cons:
It's harder to handle things like -abc and edge cases.
Greg's and Bash Hackers' (1, 2) wikis have tutorials for both approaches, so check them out.
I've also written an example script to show how to do manual parsing and colored output: | {
"domain": "codereview.stackexchange",
"id": 28565,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bash, shell, sh",
"url": null
} |
c#, statistics
public DrawInfo(int totalBalls, int ballsDrawn)
{
this.TotalBalls = totalBalls;
this.BallsDrawn = ballsDrawn;
}
}
Quick remarks:
Don't abbreviate needlessly: di.
Why assign totalBallsFactorialSum and ballsDrawnFactorialSum, when you are only using them once? You're not even consistent: in the case of Factorial((di.TotalBalls - di.BallsDrawn)) you don't assign the result to a variable.
Don't overdo it with the brackets: there's no point for the inner ones in Factorial((di.TotalBalls - di.BallsDrawn)).
JackpotWinningOdds doesn't follow the capitalization conventions.
Why is it called FindJackpotWinningOdds? Wouldn't CalculateJackpotOdds be better?
The this in this.TotalBalls = totalBalls; and this.BallsDrawn = ballsDrawn; is superfluous.
TotalBalls and BallsDrawn should be private set;.
Why even assign the result to JackpotWinningOdds? This whole method can be reduced to a one-liner, though perhaps it would be best to split it over multiple lines to increase legibility:
return (int)(
Factorial(di.TotalBalls)
/ (
Factorial(di.BallsDrawn)
* Factorial(di.TotalBalls - di.BallsDrawn)
)
);
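For reference, the quantity being computed is just the binomial coefficient $\binom{n}{k} = n!/(k!\,(n-k)!)$; a Python sketch of the same calculation (the 6-of-49 figure below is the standard lottery example, not taken from the post):

```python
import math

def jackpot_odds(total_balls, balls_drawn):
    # n! / (k! * (n - k)!) -- "n choose k", as in the C# one-liner above
    return math.factorial(total_balls) // (
        math.factorial(balls_drawn) * math.factorial(total_balls - balls_drawn))

assert jackpot_odds(49, 6) == 13983816          # the classic 6-of-49 lottery
assert jackpot_odds(49, 6) == math.comb(49, 6)  # built in since Python 3.8
```

Using integer division (or math.comb) avoids the cast-and-divide dance with BigInteger entirely.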
This method could even just be a method on DrawInfo -- together with BigInteger Factorial(BigInteger i), of course, and Factorial() could then even be a private method. | {
"domain": "codereview.stackexchange",
"id": 18251,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, statistics",
"url": null
} |
and the diameter, d, is a line segment from one side of a circle through the center and out to the other side of the circle, it follows that a radius is 1/2 a diameter. The diameter is twice the length of the radius; the radius is equal to half the diameter. Radius refers to the line segment from the center of the circle to an endpoint on the circle, and if you draw two opposing line segments from the circle's origin to the edge, you just drew the diameter. For example, if your diameter is 4, your radius is 2. If you don't have the radius or the circumference, divide the area of the circle by π and then find that number's square root to get the radius. As nouns, the difference between diameter and radius is that diameter is (geometry) any straight line between two points on the circumference of a circle that passes through the centre of the circle, while radius is (anatomy) the long bone in … Example: find the radius, circumference, and area of a circle if its diameter is equal to 10 feet in length. This also covers solving for the diameter and radius when given the circumference.
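These relations (r = d/2, C = πd = 2πr, A = πr²) can be sketched in a few lines (an illustration, not the page's own calculator):

```python
import math

def circle_from_diameter(d):
    """Given the diameter d, return (radius, circumference, area)."""
    r = d / 2
    return r, 2 * math.pi * r, math.pi * r ** 2

r, c, a = circle_from_diameter(10)   # the 10-foot example
assert r == 5.0
assert abs(c - 10 * math.pi) < 1e-12
assert abs(a - 25 * math.pi) < 1e-12
```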
Use the calculator above to calculate the properties of a circle. Practice naming circles and calculating the circumference, radius, diameter, and area of a circle. Just remember to divide the diameter by two to get the radius. A goal was to | {
"domain": "swagger-media.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9372107914029487,
"lm_q1q2_score": 0.8191946080933685,
"lm_q2_score": 0.8740772253241803,
"openwebmath_perplexity": 1141.1894267134733,
"openwebmath_score": 0.7430127859115601,
"tags": null,
"url": "http://swagger-media.com/frheb/5cc0a3-diameter-to-radius"
} |
visualization
Title: How to drop points from a data series for presentation? We monitor long running industrial engines and we have data series that we want to present on a line chart on a web page. For instance, we have sensors that monitor the oil temperature and pressure on the engine.
There are several other similar data series on the components of the equipment.
The objective is to have a human operator identify deviations in engines, for post-analysis. Our chart will display 24 or 48 hours of engine operation and the operator may identify peaks in temperature or pressure, or on the other measurements.
As such, it is a large amount of data to present on the chart on the web page, and we're starting to hit limitations in several places.
At 24 hours * 3600 seconds/hour * 1 data point/second = 86400 data points on the chart.
This amount of points is slowing down the rendering of the web page, and is a lot of data to transfer.
We want to reduce the count of data points presented on the chart, without losing much context. So I ask:
How can I drop data points without losing much precision?
What techniques are usually applied in this scenario?
A first (naive) thought was to group them in 5-second windows and only return one data point to represent the 5-second window on the chart;
should I average the points in the window?
should I consider the maximum in the window?
Are there other techniques than grouping data points in window, to reduce the loss of meaning for the monitoring? Indeed this problem isn't a very simple one to deal with, despite looking very easy to conceptualise. There exist a certain number of techniques for "reducing" the number of points of timeseries, one being called "downsampling".
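The 5-second-window idea from the question is a one-liner (a sketch; whether to use the mean or the max depends on what must survive the reduction, and for peak-spotting the max is the safer aggregate):

```python
def downsample(values, window=5, agg=max):
    """Collapse each fixed-size window of samples into a single point.

    agg=max preserves peaks (useful when operators scan for spikes);
    a mean smooths sensor noise but can hide short-lived excursions.
    """
    return [agg(values[i:i + window]) for i in range(0, len(values), window)]

mean = lambda w: sum(w) / len(w)
assert downsample([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 5, max) == [5, 10]
assert downsample([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 5, mean) == [3.0, 8.0]
assert len(downsample(list(range(86400)))) == 17280   # 5x fewer points to plot
```

More sophisticated schemes pick a representative point per bucket rather than a fixed aggregate; the thesis linked in this answer discusses such methods (I believe it is the largest-triangle-three-buckets work).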
A little litterature : https://skemman.is/bitstream/1946/15343/3/SS_MSthesis.pdf
hope this helps,
Cheers | {
"domain": "datascience.stackexchange",
"id": 10937,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "visualization",
"url": null
} |
javascript, unit-testing
presenter.showForm(callback);
expect(testPage.html()).to.equal(expectedPage.html());
});
I am trying to test that when presenter.showForm is called, the returned html is successfully appended. The problem is, I'm not sure if it's a valid test, as pretty much everything is fake.
I'm toying with the idea of abstracting out the call to $('body').append(html); to another public function of the module or perhaps even to another module so that
the call presenter.showForm(callback); can use the real function but I'm not sure?
I hope that all makes sense.
Can someone review and advise please? I'm not quite sure what you mean by "abstracting out the call...", but given that the "production module" actually looks like that, then I'd say that you have a perfectly valid little test there.
It is good to hear that you really are interested in making sure that your test cases are testing the right things. Unit tests are too often something that programmers write simply because they are forced to do it, and they are therefore happy enough when the tests don't fail - not pausing to consider that it might have passed because the test was incorrect, tested the wrong criteria, or did not in fact test anything at all.
Using fake data is what you normally do at this level of functionality, so you are good there. You might generally want to consider using more than one set of test data, but in this case I would say that it is not necessary. | {
"domain": "codereview.stackexchange",
"id": 4183,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, unit-testing",
"url": null
} |
ros, embedded, roscore
Originally posted by Patrick Bouffard with karma: 2264 on 2011-03-20
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by mjcarroll on 2011-03-20:
Almost certainly the case. Beagleboard is an ARM board. There has been some work started over here: http://dave.limoilux.org/trac/wiki/ROS/CrossCompile/arm but it's far from complete. You can probably also work off the gumROS guide: http://www.ros.org/wiki/gumros as a starter. | {
"domain": "robotics.stackexchange",
"id": 5141,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, embedded, roscore",
"url": null
} |
classical-mechanics, angular-momentum, angular-velocity
Title: Transformation of linear into angular momentum Imagine you have this system:
It is composed of a disk that can rotate around a fixed axis $O$, and a particle with mass $m$ and velocity $v_0$
that crashes into the disk at a distance $D$ from the center of the disk.
At first glance I thought that the movement would decompose into a tangential velocity and into a normal velocity (which I assume would be absorbed by the system). Then you could easily see:
$$
\sin(\theta) = \frac{d}{r} \\
v_t = \omega r = v_0 \frac{d}{r} \\
\omega = v_0 \frac{d}{r^2}
$$
My question is: Is this line of reasoning right? And how would you express this as a conservation of momentums, taking into account that the particle crashes at a distance $d$?
Something like this:
$$
m v_0 = M \omega r + m v_0'
$$ In absence of friction at the axis of rotation, the hinge at the axis of rotation only introduces a force constraint in the system as an external force to the system composed by the mass and the disk.
Thus, you can readily use the conservation of angular momentum of the system, w.r.t. the pole $O$.
Comparing the state before the collision, with the condition after the collision we get
$m v_0 d = I_{disk} \omega + m r v_{\theta}$,
with $r$ the distance from the point $O$ of the point where the mass stops into the disk after the collision, and $v_{\theta} = r \omega$ the velocity (only tangential, due to the rotational motion) of the mass rotating with the disk. If the mass sticks to the outer surface of the disk, we get
$m v_0 d = I_{disk} \omega + m R^2 \omega = [I_{disk} + m R^2 ] \omega$
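As a numeric sanity check of the last relation (a sketch with assumed values, taking $I_{disk} = \frac{1}{2} M R^2$ for a uniform disk):

```python
# Check m*v0*d = (I_disk + m*R^2) * omega for a mass sticking to the rim
# of a uniform disk. All numeric values below are assumed for illustration.
m, M, R = 1.0, 2.0, 1.0   # particle mass, disk mass, disk radius
v0, d = 3.0, 0.5          # incoming speed, impact parameter

I_disk = 0.5 * M * R**2
omega = m * v0 * d / (I_disk + m * R**2)
print(omega)  # → 0.75

# Angular momentum about O is the same before and after:
print(m * v0 * d, (I_disk + m * R**2) * omega)  # 1.5 1.5
```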
Particular cases. Let's have a look at two particular cases: | {
"domain": "physics.stackexchange",
"id": 91080,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, angular-momentum, angular-velocity",
"url": null
} |
oscillators, differential-equations, coupled-oscillators
Title: Normal mode of a coupled pendulum: why the constant $\psi_1$, $\psi_2$ I need to solve a problem that tells me to find out the motion of both the pendulums that appear in the first 45 seconds of this video
I think this kind of motion is described by a system of differential equation of the form:
$$\ddot{x} + \omega^2x = \epsilon y$$
$$\ddot{y} + \omega^2 y = \epsilon x$$
where all constants like the mass of the pendulum and so on are missing and $x$ describes the motion of the first pendulum and $y$ the motion of the second one.
To solve the problem one needs to assume $\epsilon$ very small and $\epsilon \lt \omega^2$.
I've tried to solve this problem analytically, but it was a little too complicated, so I tried the physical approach by following the example given here under the section coupled oscillator.
I think that I've understood almost everything except the fact that when we evaluate the normal modes we add the constants $\psi_1$ and $\psi_2$ and get the solutions
$$\vec{\nu}_1 = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix}\cos(\omega_1 t + \psi_1)$$
$$\vec{\nu}_2 = c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix}\cos(\omega_2 t + \psi_2)$$
My question is: why do we have those two constants $\psi_1, \psi_2$? I understand the fact that we are interested only in the real-valued part of the solution $Ae^{i\omega t}$, but I don't understand where these constants come from. Is there a physical, or better, mathematical explanation for this? The general solution of your second-order differential equation is not $Ae^{i\omega t}$. In fact, a general solution is:
$$f_t=Ae^{i\omega t}+Be^{-i\omega t}.$$ For a real solution $B$ must equal $\bar{A}$, and writing $A = \frac{c}{2}e^{i\psi}$ gives $f_t = c\cos(\omega t + \psi)$: the phase $\psi$ is simply the second constant of integration, fixed by the initial conditions. | {
"domain": "physics.stackexchange",
"id": 13607,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "oscillators, differential-equations, coupled-oscillators",
"url": null
} |
ros, lidar, rplidar, sensor
Title: Anybody knows how to change the rplidar motor rate?
I bought a rplidar laser and I want to control the frequency or rate of my sensor. Anybody knows how I can do this?
Thanks in advance.
Carlos
Originally posted by ingcavh on ROS Answers with karma: 23 on 2015-09-11
Post score: 0
http://www.seeedstudio.com/document/pdf/rplidar_devkit_manual_en.pdf - you control the motor PWM
Mark
Originally posted by MarkyMark2012 with karma: 1834 on 2015-09-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22619,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, lidar, rplidar, sensor",
"url": null
} |
quantum-field-theory, experimental-physics, electromagnetic-radiation, scattering, photons
Title: Scattering of light by light: experimental status Scattering of light by light does not occur in the solutions of Maxwell's equations (since they are linear and EM waves obey superposition), but it is a prediction of QED (the most significant Feynman diagrams have a closed loop of four electron propagators).
Has scattering of light by light in vacuum ever been observed, for photons of any energy? If not, how close are we to such an experiment? This was demonstrated by "Experiment 144" at SLAC in 1997. Here is a list of publications from that project, for instance "Positron Production in Multiphoton Light-by-Light Scattering", whose abstract reads:
A signal of 106±14 positrons above background has been observed in collisions of a low-emittance 46.6 GeV electron beam with terawatt pulses from a Nd:glass laser at 527 nm wavelength in an experiment at the Final Focus Test Beam at SLAC. The positrons are interpreted as arising from a two-step process in which laser photons are backscattered to GeV energies by the electron beam followed by a collision between the high-energy photon and several laser photons to produce an electron-positron pair. These results are the first laboratory evidence for inelastic light-by-light scattering involving only real photons. | {
"domain": "physics.stackexchange",
"id": 38481,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, experimental-physics, electromagnetic-radiation, scattering, photons",
"url": null
} |
memory-access, performance, matrix-multiplication
$$
One more time, the schoolbook algorithm traverses across A and down X, so one of those traversals will always be misaligned with the memory layout.
Conclusion: Memory ordering doesn't matter.
Additional consideration: strings & text
ASCII (or similar) strings are most frequently read across-and-down. There's a lot more to consider since a multidimensional array of characters could be ragged (different length rows, e.g. in storing the lines of a book), but the usual traversal pattern at least suggests a preference for row-major ordering.
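Concretely, with row-major layout the element at (row, col) of a width-w array sits at a single flat offset, so reading text across-and-down touches memory sequentially (a sketch):

```python
def flat_index(row, col, width):
    # Row-major: consecutive columns within a row are adjacent in memory.
    return row * width + col

width = 80  # e.g. fixed-width lines of text
print(flat_index(3, 10, width))                             # → 250
# Stepping across a row moves by 1; stepping down a column jumps by width:
print(flat_index(3, 11, width) - flat_index(3, 10, width))  # → 1
print(flat_index(4, 10, width) - flat_index(3, 10, width))  # → 80
```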
Conclusion: Row-major ordering is preferred. | {
"domain": "cs.stackexchange",
"id": 20297,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "memory-access, performance, matrix-multiplication",
"url": null
} |
c++, c++11, image, graphics
void Image::setRect(const uint8_t * __restrict image, const uint32_t xOffset, const uint32_t yOffset,
const uint32_t rectWidth, const uint32_t rectHeight, const bool topLeft)
{
// Validation:
assert(isValid());
assert(image != nullptr);
assert(rectHeight != 0 && rectWidth != 0);
const size_t pixelSize = PixelFormat::sizeBytes(pixelFormat);
uint8_t tmpPixel[bigPixelSizeBytes];
if (topLeft)
{
// Use the top-left corner of the image as the (0,0) origin.
// Invert Y:
uint32_t maxY = (getHeight() - 1);
for (uint32_t y = 0; y < rectHeight; ++y)
{
for (uint32_t x = 0; x < rectWidth; ++x)
{
std::memcpy(tmpPixel, (image + (x + y * rectWidth) * pixelSize), pixelSize);
setPixelAt(x + xOffset, maxY - yOffset, tmpPixel);
}
--maxY;
}
}
else
{
// Use the bottom-left corner of the image as the (0,0) origin.
// Default Y:
for (uint32_t y = 0; y < rectHeight; ++y)
{
for (uint32_t x = 0; x < rectWidth; ++x)
{
std::memcpy(tmpPixel, (image + (x + y * rectWidth) * pixelSize), pixelSize);
setPixelAt(x + xOffset, y + yOffset, tmpPixel);
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 10377,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, image, graphics",
"url": null
} |
Why is this? Let’s look at lines from the center of $$\C$$. Here we are plotting the continuous cosine function with dotted lines, with filled circles to represent the discrete samples we took to fill the row of $$\C$$:
>>> center_rows = [N / 2. - 1, N / 2., N / 2. + 1]
>>> fig = dftp.plot_cs_rows('C', N, center_rows)
>>> fig.suptitle('Rows $N / 2 - 1$ through $N / 2 + 1$ of $\mathbf{C}$',
... fontsize=20)
<...>
(png, hires.png, pdf)
The first plot in this grid is for row $$k = N / 2 - 1$$. This row starts sampling just before the peak and trough of the cosine. In the center is row $$k = N / 2$$ of $$\C$$. This is sampling the cosine wave exactly at the peak and trough. When we get to the next row, at $$k = N / 2 + 1$$, we start sampling after the peak and trough of the cosine, and these samples are identical to the samples just before the peak and trough, at row $$k = N / 2 - 1$$. Row $$k = N / 2$$ is sampling at the Nyquist sampling frequency, and row $$k = N / 2 + 1$$ is sampling at a frequency lower than Nyquist and therefore it is being aliased to the same apparent frequency as row $$k = N / 2 - 1$$.
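You can confirm numerically that the two rows on either side of Nyquist produce identical samples (a small sketch; the cosine rows of $$\mathbf{C}$$ are assumed to be $$\cos(2 \pi k n / N)$$):

```python
import numpy as np

N = 32
n = np.arange(N)

def cos_row(k):
    # Row k of the cosine part of the DFT basis (assumed cos(2*pi*k*n/N)).
    return np.cos(2 * np.pi * k * n / N)

# Sampling just below and just above Nyquist gives identical samples:
print(np.allclose(cos_row(N // 2 - 1), cos_row(N // 2 + 1)))  # → True
```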
This might be more obvious plotting rows 1 and N-1 of $$\C$$:
>>> fig = dftp.plot_cs_rows('C', N, [1, N-1])
>>> fig.suptitle('Rows $1$ and $N - 1$ of $\mathbf{C}$',
... fontsize=20)
<...>
(png, hires.png, pdf)
Of course we get the same kind of effect for $$\S$$: | {
"domain": "github.io",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.985718064816221,
"lm_q1q2_score": 0.8081836767704107,
"lm_q2_score": 0.8198933403143929,
"openwebmath_perplexity": 889.776216363277,
"openwebmath_score": 0.8864856362342834,
"tags": null,
"url": "https://matthew-brett.github.io/teaching/fourier_basis.html"
} |
# A Pattern in the Quadratic Formula!
I was going about finding the pattern of integers $$a$$ such that $$x+a=x^2$$ has at least one rational solution. Using the quadratic formula, the roots boiled down to $$\frac{1\pm\sqrt{1+4a}}{2}$$. Thus, this occurs when $$4a+1$$ is the square of an integer.
I knew that odd squares can be expressed as $$4a+1$$, which is not very hard to prove.
Consider $$y^2-1$$, for some odd integer $$y$$. Now, this can be factorized into $$(y+1)(y-1)$$. Now, $$2|y+1,2|y-1$$, since $$y$$ is odd. Thus, $$4|y^2-1$$, or any odd $$y^2$$ can be expressed as $$4a+1$$.
Now, I was making a table of odd integers and their squares to find any patterns. Only after $$25$$, did I realise a pattern.
For $$49$$, $$a=12$$.
For $$81$$, $$a=20$$.
For $$121$$, $$a=30$$.
For $$169$$, $$a=42$$.
I realised a pattern, in which the difference between consecutive values of $$a$$ are in an arithmetic progression with difference of $$2$$.
I tested my theory, and found it to comply, as for $$225$$, $$a=56$$, which I predicted.
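Since $$x+a=x^2$$ gives $$a = x^2 - x = x(x-1)$$ for an integer root $$x$$, the pattern and its constant second difference of $$2$$ are easy to verify with a quick sketch:

```python
# a-values for which x + a = x**2 has an integer solution x:
# a = x*(x - 1), giving 2, 6, 12, 20, 30, 42, 56, ...
a_vals = [x * (x - 1) for x in range(2, 10)]
print(a_vals)        # → [2, 6, 12, 20, 30, 42, 56, 72]

diffs = [b - a for a, b in zip(a_vals, a_vals[1:])]
second_diffs = [b - a for a, b in zip(diffs, diffs[1:])]
print(second_diffs)  # all 2s: the differences form an AP with step 2
```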
Is there a proof for this theory? Did I stumble upon some intricate pattern?
Note by Nanayaranaraknas Vahdam
2 years, 11 months ago
From the above question, $$a = x^{2} - x = x(x-1)$$. For the positive integer examples you're interested in, that gives the common pattern of $$2,6,12,20,30,42,...$$ · 2 years, 11 months ago | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9833429619433692,
"lm_q1q2_score": 0.8278033254082511,
"lm_q2_score": 0.8418256472515683,
"openwebmath_perplexity": 240.38455794351555,
"openwebmath_score": 0.9377146363258362,
"tags": null,
"url": "https://brilliant.org/discussions/thread/a-pattern-in-the-quadratic-formula/"
} |
machine-learning, neural-network, deep-learning, unsupervised-learning, rbm
This is equivalent to using binornd, but is probably faster. For example to generate a random number that is 1 with p=0.9, you get a random number from [0,1]. Now, in 90% of the cases, this random number is smaller than 0.9, and in 10% of the cases it is larger than 0.9. So to get a random number that is 1 with p=0.9, you can call 0.9 > rand(1) - which is exactly what they do.
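The same vectorized trick in NumPy, as a sketch (`hidden_probs` below is an assumed example vector of activation probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_probs = np.array([0.9, 0.1, 0.5, 0.99])

# One fresh uniform sample per hidden unit, per iteration:
hidden_states = (hidden_probs > rng.random(hidden_probs.shape)).astype(int)
print(hidden_states)  # e.g. [1 0 1 1] -- each unit is 1 with its own probability

# Empirically, the mean over many draws approaches the probabilities:
samples = hidden_probs > rng.random((100_000, hidden_probs.size))
print(samples.mean(axis=0))  # ≈ [0.9, 0.1, 0.5, 0.99]
```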
tl;dr: Pick a new random number from the range [0,1] for each hidden unit in each iteration. Compare it to your probability with hidden_probs > rand(1,H) to make it binary. | {
"domain": "datascience.stackexchange",
"id": 1317,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, neural-network, deep-learning, unsupervised-learning, rbm",
"url": null
} |
statistical-mechanics
The integral of cosines and sines of a Brownian motion
To get the Smolochowski limit, all you need evaluate the average square distance traveled after time t:
$\langle |\Delta X|^2 \rangle = |V_0|^2\langle \int_0^T \int_0^T e^{i\Omega (t - t') + i (B(t)-B(t'))}\, dt\, dt' \rangle = |V_0|^2\int_0^T\int_0^T e^{i\Omega(t-t')} \langle e^{i(B(t)-B(t'))}\rangle\, dt\, dt'$
So the irreducible quantity to solve this problem is:
$\langle e^{i(B(t) - B(t'))} \rangle = G(t-t')$
$G(t)$ is, by translation invariance, the expected value of
$G(t) = \langle e^{iB(t)} \rangle $
This can be evaluated by Feynman diagrams: expand the exponential in powers, and note that B(t) is gaussian with width t, so all its moments are known. The odd moments are zero, and the even moments are given by products of odd numbers (pairing combinations):
$\langle B(t)^{2n} \rangle = 1 \cdot 3 \cdot 5 ...\cdot (2n-1) t^n$
So the power series for the exponential only has even-power terms
$G(t) = \sum_{n=0}^\infty {(-1)^n \langle B(t)^{2n}\rangle\over (2n)!} = \sum_{n=0}^\infty {(-1)^n t^n\over 2^n\, n!} = e^{-{t\over 2}}$
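The closed form $\langle e^{iB(t)}\rangle = e^{-t/2}$ is also easy to confirm by Monte Carlo, since $B(t)$ is just a mean-zero Gaussian with variance $t$ (a sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
t = 1.0
# B(t) is Gaussian with mean 0 and variance t:
B = rng.normal(0.0, np.sqrt(t), size=200_000)

estimate = np.exp(1j * B).mean()
print(estimate.real, np.exp(-t / 2))  # both ≈ 0.6065
```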
And this means by symmetry that
$G(t-t') = e^{-{1\over 2} |t-t'|}$ | {
"domain": "physics.stackexchange",
"id": 1481,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics",
"url": null
} |
algorithm-analysis, runtime-analysis, loops
Title: Nested Loop Time Complexity vs Successive Loops An algorithm has a calculation whose worst case time complexity $T(x)=O(x^2)$ that outputs solutions $y$ & $z$ for input $x$ (the algorithm's worst case time complexity is greater than quadratic time):
for (i = 1; i <= x; i++)
{
for (j = 1; j <= i; j++)
{
// Some quadratic time calculation of x that gives y & z
}
}
A different algorithm using successive loops is created for a given $x$ that outputs the same $y$ & $z$:
for (i = 1; i <= x; i++)
{
// Some quadratic time calculation of x that gives y
}
for (i = 1; i <= x; i++)
{
// Some quadratic time calculation of x that gives z
}
Is the time complexity of the second algorithm $T_s(x)=O(x^2)$? I think that the first algorithm's running time is actually $\mathcal{O}(x^4)$.
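A quick sketch that counts how often the inner body executes makes the bound concrete (the body itself is assumed to cost $O(x^2)$, which multiplies the count):

```python
def inner_iterations(x):
    # Executions of the inner body in the nested version.
    return sum(1 for i in range(1, x + 1) for j in range(1, i + 1))

x = 100
print(inner_iterations(x), x * (x + 1) // 2)  # → 5050 5050
# x*(x+1)/2 = Theta(x^2) executions of an O(x^2) body gives O(x^4) overall;
# the successive version executes its body only 2*x times.
```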
The outer for loop executes exactly $x$ times,
the inner for loop executes at most $x$ times during every one of these $x$ iterations of the outer loop | {
"domain": "cs.stackexchange",
"id": 9415,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm-analysis, runtime-analysis, loops",
"url": null
} |
algorithms, graphs, np-complete, linear-programming
f is flow, c is capacity, u and v are vertices, d is demand, s is source, t is target(sink). We are given k commodities. i corresponds to ith commodity.
Now how do these constraints guarantee that commodity i will end up only at its target (ti) with its respective demand (di)?
Why It works for single commodity case
In the single-commodity case it includes all the constraints. This is popularly known as the Maximum Flow problem. The first constraint is the capacity constraint, the second one is flow conservation. The fourth implies the flow is non-negative.
Why There is a Problem for the multi-commodity case?
There is no constraint on the targets which guarantees that each commodity's flow will only go into its respective target.
For instance, this seems like a simple counterexample: The linear programming formulation from CLRS is fine. There is no problem with it.
Your purported counterexample isn't a valid counterexample -- it doesn't satisfy all of the inequalities. Take a closer look at the specification of the second inequality. For instance, when we are dealing with commodity $a$ ($i=a$) it says that we should have
$$\sum_{v \in V} f_{auv} - \sum_{v \in V} f_{avu} = 0$$
for all $u \in V \setminus \{s_a,t_a\}$. In particular, consider the case $u = t_b$. We have
$$\sum_{v \in V} f_{at_bv} - \sum_{v \in V} f_{avt_b} = 0 - 2 \ne 0,$$
so the inequality is violated.
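As a sketch, the conservation check that the second constraint encodes is easy to mechanize once the per-commodity flows are tabulated (edge names and flow values below are hypothetical):

```python
# f[(u, v)] holds the flow of one commodity on edge (u, v).
def conserved_at(f, u, vertices):
    out_flow = sum(f.get((u, v), 0) for v in vertices)
    in_flow = sum(f.get((v, u), 0) for v in vertices)
    return out_flow == in_flow

# Hypothetical flow of commodity a routed s_a -> t_b -> t_a (2 units):
V = ["sa", "sb", "ta", "tb"]
f_a = {("sa", "tb"): 2, ("tb", "ta"): 2}
print(conserved_at(f_a, "tb", V))  # → True: 2 units in, 2 units out
print(conserved_at(f_a, "ta", V))  # → False: t_a is commodity a's sink
```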
To put it in English: the second inequality requires conservation of flow for each commodity, except at that commodity's source and sink. For example, for commodity $a$, we must have conservation of flow at every vertex except $s_a$ and $t_a$. In your proposed counterexample, conservation of flow for commodity $a$ is violated at vertices $s_b$ and $t_b$; that's not allowed. | {
"domain": "cs.stackexchange",
"id": 6802,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs, np-complete, linear-programming",
"url": null
} |
ros, slam, navigation, ros-lunar, rtabmap
Originally posted by ChriMo with karma: 476 on 2018-01-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Markovicho on 2018-01-22:
Thanks a lot !!!
This worked:
Create catkin Workspace :
http://wiki.ros.org/catkin/Tutorials/create_a_workspace
Follow build instructions on
:https://github.com/introlab/rtabmap_ros#rtabmap_ros-
Adding ~/catkin_ws/devel/setup.bash to ~/.bashrc | {
"domain": "robotics.stackexchange",
"id": 29795,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, slam, navigation, ros-lunar, rtabmap",
"url": null
} |
newtonian-mechanics, tensor-calculus, stress-energy-momentum-tensor
Title: Cauchy Stress Tensor Components from Forces $(x,y,z)$ Background: I'm using numerical modeling software to investigate fluid-structure interaction. One of the outputs I get from the model are the forces imposed by the fluid on the solid object (given as $\vec{F}_x,\vec{F}_y,\vec{F}_z$), and these forces are available for a given elemental area. Here's an example of the elemental area (bounded by the black box) with forces acting on the red point: | {
"domain": "physics.stackexchange",
"id": 35374,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, tensor-calculus, stress-energy-momentum-tensor",
"url": null
} |
c#, repository
Title: Building multi-source Repositories and Units of Work I am working in a shop where we tie into multiple different vendors to share data. I am also tasked with "bringing the code base up to the 4.x framework". To start, I understand that Entity Framework is a Repository/UoW pattern and really does not gain anything by wrapping it in home-spun Repository/UoW patterns. While I will eventually add EF at a later date, I don't have it in the existing code base. Instead, I have:
ADO.NET
OLEDB
Web Service (SOAP)
Web Socket (and a whole bunch of ugly)
And that's what I've picked out so far...
So, I am trying to build a basic Repository and UoW from the bottom up that will, eventually, present the same way to the developer in the end. What I have so far are 3 projects:
Me.Data // Has all the base interfaces and abstract classes
Me.Data.Cars // Handles the data via a SOAP based web service
Me.Data.Symitar // Handles the data via a web socket
Starting from the bottom:
NAMESPACE : Me.Data
IRepository
public interface IRepository<T, in TId>
{
T Insert(T entity);
T Update(T entity);
T Delete(T entity);
IList<T> Get(Expression<Func<T, bool>> predicate);
IList<T> GetAll();
T GetById(TId id);
}
IRepositoryWebService
public interface IRepositoryWebService<T, in TId> : IRepository<T, TId> where T : class
{
T Get(T entity);
IList<T> GetList(T entity);
}
RepositoryWebService
public abstract class RepositoryWebService<T, TId> : IRepositoryWebService<T, TId> where T : class
{
protected IUnitOfWork UnitOfWork;
protected RepositoryWebService(IUnitOfWork unitOfWork) { UnitOfWork = unitOfWork; } | {
"domain": "codereview.stackexchange",
"id": 10457,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, repository",
"url": null
} |
energy, molecular-orbital-theory, photochemistry, ionization-energy
Title: "Exactly Equal" and "At Least" in electron excitation When we examine IR spectra, we see troughs corresponding to absorption at exactly a specific frequency that corresponds to the energy needed to stretch certain bonds (although translational motions and intermolecular forces can broaden the accepted frequency).
Coordination compounds absorb light at exactly the frequency corresponding to the crystal-field splitting energy.
However, when we talk about energy from photons required to break a bond, we say "at least" the frequency instead of "exactly" in the examples above. Take $\ce{Cl-Cl}$ for example, we say that the photon must have at least a frequency of 607 THz (or wavelength of no more than 496 nm). There is still a definite energy involved here: 242 kJ/mol to promote an electron from a bonding MO in the molecule to an anti-bonding MO.
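The 607 THz / 496 nm threshold is just the per-photon bond energy converted with $E = h\nu = hc/\lambda$ (a quick check with rounded constants):

```python
# 242 kJ/mol Cl-Cl bond energy -> threshold wavelength per photon.
N_A = 6.022e23          # Avogadro's number, 1/mol
h = 6.626e-34           # Planck constant, J*s
c = 2.998e8             # speed of light, m/s

E_photon = 242e3 / N_A          # J per photon, ≈ 4.02e-19 J
wavelength = h * c / E_photon   # ≈ 4.94e-7 m
print(wavelength * 1e9)         # ≈ 494 nm, matching the quoted ~496 nm
```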
As another example, when we talk about ionizing radiation, we say that there is at least a certain amount of energy needed to ionize a compound. For example, my textbook says that a photon needs "at least 1216 kJ/mol" to ionize water. There is still a definite energy level involved: bringing the bonding electron from its negative potential energy MO to 0.
In all of these examples, definite energies were involved. Why is it that sometimes we say that that a photon needs to have exactly the energy needed, and other times the minimum energy needed? When you ionize or break something, the resulting particles (an electron and an ion, or maybe two atoms, or whatever) would fly away from each other with arbitrary speed. Their kinetic energy would absorb any excess energy of the photon. This is the "at least" case.
On the other hand, when you excite some particle (molecule or whatever) in such a way that it remains in one piece, then you can't send it flying, because the momentum conservation forbids that. This is the "exactly equal" case.
Surely this is a simplification; in fact, a photon may excite a molecule using only a part of its energy, while keeping the rest of it to itself (Raman scattering), but that's another story. | {
"domain": "chemistry.stackexchange",
"id": 5428,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "energy, molecular-orbital-theory, photochemistry, ionization-energy",
"url": null
} |
c, stack
There may be a typo in the comment in 'arrayImpl.c', which refers to the file as 'arrayImple.c'; which is it?
It seems strange to me that stackImpl.c includes 'list.h' but not 'stack.h'. | {
"domain": "codereview.stackexchange",
"id": 38235,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, stack",
"url": null
} |
slam, navigation, ros-kinetic, rtabmap
Title: rtabmap_ros: Is it possible to continue running SLAM when a sensor is disabled?
Hardware: Turtlebot 2 (Kobuki), Hokuyo URG-04LX, PrimeSense RGB-D
Software: ROS Kinetic (ubuntu 16.04), turtlebot_bringup, turtlebot_description, urg_node, openni2_camera, rtabmap_ros
Context: I am developing an algorithm for fault tolerance in mobile robots, in which we aim to compensate for faulty sensors at run-time. The experiments require me to disconnect a sensor and verify the fault's effects in the robot's operation, and later if the algorithm is correctly compensating them.
Problem: I need to test the effects of a faulty sensor in the rtabmap_ros package (i.e., how it affects SLAM), however when I disconnect a sensor to simulate a failure (either laser or camera), the package stops transmitting map and localization data, as it probably should. When the sensor is reconnected, the package resumes operation. I have tried to change the "subscribe_depth" and "subscribe_scan" parameters, then calling the "update_parameters" service, as well as the "reset" and "resume" services, all to no avail.
Question: Is it possible for RTABMAP to continue performing SLAM in the event of a sensor disconnection? Could it be a simple tweak in the source code, to not check for a certain flag? If this is not possible with RTABMAP, are there any recommendations for packages which allow for this behavior?
Thanks in advance! | {
"domain": "robotics.stackexchange",
"id": 32471,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "slam, navigation, ros-kinetic, rtabmap",
"url": null
} |
python, project-euler
However, solving Project Euler problem 40 only relies on you having the correct result for d(10**i), and this you got right.
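For reference, here is one direct way to compute d(n) (a sketch, independent of the code under review):

```python
def champernowne_digit(n):
    """Return d(n): the n-th digit (1-indexed) of 0.123456789101112..."""
    k = 1                                  # digit-length of current numbers
    while n > 9 * 10 ** (k - 1) * k:       # skip all k-digit numbers
        n -= 9 * 10 ** (k - 1) * k
        k += 1
    offset = n - 1
    number = 10 ** (k - 1) + offset // k
    return int(str(number)[offset % k])

print([champernowne_digit(10 ** i) for i in range(3)])  # → [1, 1, 5]
```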
2. Improving your solution
There are no docstrings, so even though I have solved Project Euler problem 40 myself, it took me a while to figure out what your functions are doing. Even if you struggle to pick good names for functions (and this can be very hard), you can always make things clearer with a brief description and some examples.
There are also no doctests. Even a handful of doctests might well have found the bug.
The discipline of writing down what each function actually does will make it easier for you to choose good names. For example, once you have written the docstring for your function levels, it should be clear that a better name would be something like total_digits:
def total_digits(n):
"""
Return the total number of digits among all the `n`-digit decimal
numbers.
>>> total_digits(1) # 1 to 9 (note that 0 is not included)
9
>>> total_digits(2) # 10 to 99
180
"""
return n * 9 * 10 ** (n - 1) | {
"domain": "codereview.stackexchange",
"id": 2338,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, project-euler",
"url": null
} |
c, array, pointers
//0 if it is equal, and -1 if it is smaller, parray will be sorted smallest to greatest
for (int j = 1; j < parr->length; j++)
for (int i = parr->offset+j; (i > parr->offset)&&(comp((const void**)&parr->data[i-1], (const void**)&parr->data[i]) > 0); i--) {
void* temp = parr->data[i];
parr->data[i] = parr->data[i-1];
parr->data[i-1] = temp;
}
}
PADEF void parraySortStandard (struct parray* parr, int(*comp)(const void**, const void**)) {
//same as parraySortInsert but uses the C standard qsort function for sorting instead
qsort(&parr->data[parr->offset], parr->length, sizeof(void*), (int(*)(const void*, const void*))comp);
} | {
"domain": "codereview.stackexchange",
"id": 29845,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, array, pointers",
"url": null
} |
D\cdot T^{-1} \qquad \leftrightarrow \qquad D=T^{-1}\cdot H\cdot T$$ where $D$ is diagonal with the eigenvalues as diagonal entries, and $T$ is a matrix constructed using the eigenvectors of $H$, arranged in columns. This way $$H^2= T\cdot D\cdot T^{-1} \cdot T\cdot D\cdot T^{-1} = T\cdot D^2\cdot T^{-1}$$ and by induction $$H^n= T\cdot D^n\cdot T^{-1}$$ so that $$\exp(i t H)= T\cdot \exp(i t D)\cdot T^{-1}\, .$$ In your specific case, $$D=\hbox{diagonal}(1,-1,1)\, ,\qquad \exp(i t D)= \left( \begin{array}{ccc} e^{i t} & 0 & 0 \\ 0 & e^{-i t} & 0 \\ 0 & 0 & e^{i t} \\ \end{array} \right)\, ,$$ so it only remains to multiply by $T$ and $T^{-1}$, but you already have the eigenvectors. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.975576912786245,
"lm_q1q2_score": 0.8065209314930665,
"lm_q2_score": 0.8267117855317474,
"openwebmath_perplexity": 240.49646344915672,
"openwebmath_score": 0.9735682606697083,
"tags": null,
"url": "https://physics.stackexchange.com/questions/303632/matrix-mechanics"
} |
urdf, xacro
<geometry>
<mesh>
<scale>0.001 0.001 0.001</scale>
<uri>model://.....stl</uri>
</mesh>
</geometry>
<material>
<script>
<uri>__default__</uri>
<name>__default__</name>
</script>
</material>
</visual>
<gravity>1</gravity>
<velocity_decay/>
<self_collide>0</self_collide>
</link>
</model>
</sdf> | {
"domain": "robotics.stackexchange",
"id": 4202,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "urdf, xacro",
"url": null
} |
newtonian-mechanics, newtonian-gravity, orbital-motion, simulations
Title: What velocity do I need to get the moon's orbit perfectly aligned with the earth's center I have a simulation for universal gravity with two objects. How can I calculate the velocity or instantaneous force I would need to apply to object 2 (the moon) to get it to orbit so that the origin of the moon's path around object 1 (the earth) is at the origin of the earth.
NOTE: The moon is already raised above the earth at the start of the simulation.
Given: Fg, G, m1, m2, r, theta This is what I was looking for. | {
"domain": "physics.stackexchange",
"id": 23458,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, simulations",
"url": null
} |
# Proving ${ 2n \choose n } = 2^n \frac{ 1 \cdot 3 \cdot 5 \cdots (2n-1)}{n!}$
How to prove this binomial identity :
$${ 2n \choose n } = 2^n \frac{ 1 \cdot 3 \cdot 5 \cdots (2n-1)}{n!}$$
The left hand side arises while solving a standard binomial problem the right hand side is given in the solution , I checked using induction that this is true but I am inquisitive to prove it in a rather general way.
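A quick numeric check of the identity for small n (a sketch):

```python
from math import comb, factorial

def rhs(n):
    # 2^n * (1 * 3 * 5 * ... * (2n-1)) / n!
    odd_product = 1
    for k in range(1, 2 * n, 2):
        odd_product *= k
    return 2 ** n * odd_product // factorial(n)

print(all(comb(2 * n, n) == rhs(n) for n in range(1, 20)))  # → True
```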
EDIT: I have gone through all of the answers posted here,I particularly liked Isaac♦ answers after which it was not much difficult for me to figure out something i would rather say an easy and straight algebraic proof, I am posting it here if somebody needs in future :
$${ 2n \choose n } = \frac{(2n)!}{n! \cdot n!}$$
$$= \frac{ 1 \cdot 2 \cdot 3 \cdots n \cdot (n+1) \cdots (2n-1)\cdot (2n) }{n! \cdot n!}$$
$$= \frac{ [1 \cdot 3 \cdot 5 \cdots (2n-1)] \cdot [2 \cdot 4 \cdot 6 \cdots (2n)]}{n! \cdot n!}$$
$$= \frac{ [1 \cdot 3 \cdot 5 \cdots (2n-1)] \cdot [(2\cdot 1) \cdot (2\cdot 2) \cdot (2\cdot 3) \cdots (2\cdot n)]}{n! \cdot n!}$$
$$= \frac{ [1 \cdot 3 \cdot 5 \cdots (2n-1)] \cdot [ 2^n \cdot (n)! ]}{n! \cdot n!}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9755769074605843,
"lm_q1q2_score": 0.8043260227651283,
"lm_q2_score": 0.824461932846258,
"openwebmath_perplexity": 318.17381054448197,
"openwebmath_score": 0.9035852551460266,
"tags": null,
"url": "http://math.stackexchange.com/questions/7778/proving-2n-choose-n-2n-frac-1-cdot-3-cdot-5-cdots-2n-1n"
} |
arduino, laser, ros-kinetic, rosserial-arduino
And also wiki/rosserial/Overview - Limitations - Arrays.
Originally posted by gvdhoorn with karma: 86574 on 2018-02-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by mattMGN on 2018-02-25:
ranges_length and intensities_length were the point!
Thank you
Comment by kartiksoni on 2022-09-02:
hey can u show me where and how exactly did u add the ranges_length and intensities_length? Thank you. | {
"domain": "robotics.stackexchange",
"id": 30142,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "arduino, laser, ros-kinetic, rosserial-arduino",
"url": null
} |
strings, haskell
But this is very slow; you can see from the repeated usage of last, (++), and reverse that the actual state that matters (which character is next in the sequence) is getting parsed out of the previous state anew with every iteration. You are relying on repeating many \$O(n)\$ operations, which ends up causing your code to have a terrible constant factor performance hit.
The key insight or trick is to take advantage of lazy evaluation to produce an infinite stream of values which can hide all of the messy state machinery you need away in some thunks.
stringSequence :: [Char] -> [String]
We start with a clean slate and a new type signature. In this case we have a function that takes a sequence of Char values, and returns a list (which we know will be infinite) of String values made by applying the sequence generating procedure to the Chars. The magic obviously happens in the sequence generating procedure, so let's concentrate on that.
stringSequence chars = map (:"") chars
Starting is easy, we simply make a String out of each Char in our given sequence. Next step is to generate the two character Strings.
stringSequence chars =
let
strings = map (:"") chars
front = strings
next = concatMap (\f -> map (f ++) strings) front
in
front ++ next
This is the biggest jump I'll make, so make sure you understand it. The two character strings are made by taking each string we've already generated and appending a Char from the sequence to it (for convenience, strings makes it easy to combine previous values generated by the sequence generating function and the next element of the given Chars).
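As an aside (my own addition, not part of the review): the infinite-stream trick that lazy evaluation provides here can be mimicked in Python with a generator, which may help readers less used to Haskell. The order of results matches the Haskell construction: all length-1 strings first, then length-2, and so on.

```python
from itertools import count, islice, product

def string_sequence(chars):
    """Yield every string over `chars`: length 1 first, then length 2, ..."""
    for n in count(1):
        for combo in product(chars, repeat=n):
            yield "".join(combo)

print(list(islice(string_sequence("ab"), 6)))  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```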
stringSequence chars =
let
strings = map (:"") chars
front = strings
next = concatMap (\f -> map (f ++) strings) front
after = concatMap (\s -> map (s ++) strings) next
in
front ++ next ++ after | {
"domain": "codereview.stackexchange",
"id": 14975,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "strings, haskell",
"url": null
} |
# Zeros of Holomorphic (upper bound)
I'm looking at old complex analysis exams and am stuck on the following. Suppose f(z) is holomorphic on $D(0,2)$ and continuous on its closure.
Suppose the $|f(z)|\le 16$ on the closure of $D(0,2)$ and is non-constant and |f(0)|=1. Show $f$ cannot have more than 4 zeros in $D(0,1)$ .
I found this technique Upper bound for zeros of holomorphic function , but it doesn't seem to apply to this problem.
-
Do you know Rouché's theorem? Compare $f$ with $z^4$. – mrf Oct 28 '12 at 16:25
Davide - thanks. – Digital Gal Oct 28 '12 at 16:25
I considered Rouche but it does not seem to be helpful for this problem since there is so little information about $f$ – Digital Gal Oct 28 '12 at 16:27
I added an additional condition |f(0)|=1 – Digital Gal Oct 28 '12 at 16:28
I believe a solution might be to define $g: D(0,4) \rightarrow \mathbb{C}$ as $g(z)=f(\sqrt{z})$. And then apply the technique here math.stackexchange.com/questions/21437/… to $g$. – Digital Gal Oct 28 '12 at 17:19
If $\alpha_1, \dotsc, \alpha_m$ are zeroes of $f$ on $D(0,1)$ let
$$p(z) = 2^m\prod_{j=1}^m \frac{z-\alpha_j}{4 - \overline{\alpha}_j z}.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9808759660443166,
"lm_q1q2_score": 0.8317511941732835,
"lm_q2_score": 0.8479677583778258,
"openwebmath_perplexity": 351.20351048164474,
"openwebmath_score": 0.8919418454170227,
"tags": null,
"url": "http://math.stackexchange.com/questions/222824/zeros-of-holomorphic-upper-bound"
} |
haskell, functional-programming
main :: IO ()
main = print
[ sum
[ ( x * cos angle + y * sin angle
, -x * sin angle + y * cos angle
)
| (t, (x, y)) <- ls
, let angle = 2 * pi * t * k / genericLength ls
]
| (k, ls) <- zip [0..] $ tails [(1,2), (3,4)]
]
Edit: Data.Complex specializes in this sort of math:
import Data.Complex
main :: IO ()
main = print
[ sum [z / cis angle ** t | (t, z) <- ls]
| (k, ls) <- zip [0..] $ tails [1:+2, 3:+4]
, let angle = 2 * pi * k / genericLength ls
] | {
"domain": "codereview.stackexchange",
"id": 37220,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "haskell, functional-programming",
"url": null
} |
molecular-biology
Because most point mutations are deleterious or neutral, the random point mutation rate must be low and the accumulation of beneficial mutations and the evolution of a desired function is relatively slow in such experiments. For example, the evolution of a fucosidase from a galactosidase required five rounds of shuffling and screening before a >10-fold improvement in activity was detected4. Naturally occurring homologous sequences are pre-enriched for 'functional diversity' because deleterious variants have been selected against over billions of years of evolution...
Although shuffling of a single gene creates a library of genes that differ by only a few point mutations1, 2, 3, 4, 5, 6, the block-exchange nature of family shuffling creates chimaeras that differ in many positions. For example, in previous work a single beta-lactamase gene was shuffled for three cycles, yielding only four amino-acid mutations3, whereas a single cycle of family shuffling of the four cephalosporinases resulted in a mutant enzyme which differs by 102 amino acids from the Citrobacter enzyme, by 142 amino acids from the Enterobacter enzyme, by 181 amino acid from the Klebsiella enzyme and by 196 amino acids from the Yersinia enzyme. The increased sequence diversity of the library members obtained by family shuffling results in a 'sparse sampling' of a much greater portion of sequence space15, the theoretical collection of all possible sequences of equal length, ordered by similarity (Fig. 4). Selection from 'sparse libraries' allows rapid identification of the most promising areas within an extended sequence landscape (a multidimensional graph of sequence space versus function) | {
"domain": "biology.stackexchange",
"id": 5481,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "molecular-biology",
"url": null
} |
javascript
Title: Namespacing patterns After finishing this article: Lessons From A Review Of JavaScript Code,
I was wondering if my namespace
/**
* The primary namespace object
* @type {Object}
* @alias BDDS
*/
if(!window['BDDS']) {
window['BDDS'] = {};
}
if attached to the window, is this snippet invalid,
according to the following?
Problem 9
Problem: The namespacing pattern used is technically invalid.
Feedback: While namespacing is implemented correctly across the rest
of the application, the initial check for namespace existence is
invalid. Here’s what you currently have:

if ( !MyNamespace ) {
    MyNamespace = { };
}

The problem is that !MyNamespace will throw a ReferenceError, because
the MyNamespace variable was never declared. A better pattern would
take advantage of boolean conversion with an inner variable
declaration, as follows:

if ( !MyNamespace ) {
    var MyNamespace = { };
}

//or
var myNamespace = myNamespace || {};

// Although a more efficient way of doing this is:
// myNamespace || ( myNamespace = {} );
// jsPerf test: http://jsperf.com/conditional-assignment

//or
if ( typeof MyNamespace == 'undefined' ) {
    var MyNamespace = { };
}
This could, of course, be done in numerous other ways. If you’re
interested in reading about more namespacing patterns (as well as some
ideas on namespace extension), I recently wrote “Essential JavaScript
Namespacing Patterns.” Juriy Zaytsev also has a pretty comprehensive
post on namespacing patterns.

No, your code is correct.
Negating a reference only throws a ReferenceError if it's not declared.
window is defined, and window["not_defined"] just returns undefined for properties that don't exist. | {
"domain": "codereview.stackexchange",
"id": 807,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript",
"url": null
} |
tex
\cvsubsection{Pvt. Invest. Advisor/Sol Strategies}
{\emph{Investment/Strategic Planning} \sbullet New York, NY}
{April, 2007 - October, 2012}
\begin{itemize}
\item Advised Sol Strategies on Firm Strategy, Business Development, Cash-Flow Management, and Billing Policy
\item Consulted on Strategy and Wrote Financial, Investment, and Business Plans and Grant Applications
\end{itemize}
\cvsubsection{Florida State University}
{\emph{Research Assistant} \sbullet Tallahassee, FL}
{August, 2007 - April, 2008}
\begin{itemize}
\item Teaching Fellow, Graded for Mergers and Acquisitions, Assisted in Data Collection, Research, and Proctoring Exams
\item Programmed in SAS, Stata, SPSS, and R and Performed Regressions on Econometric Data
\end{itemize}
\cvsubsection{Merrill Lynch}
{\emph{Financial Advisor} \sbullet Pensacola, FL}
{May, 2006 - April, 2007}
\begin{itemize}
\item Hosted Speakers, Brought in \$3 Million in Accounts, and Serviced More Than 100 House Accounts
\item Executed Trades and Limit Orders on Exchange Traded Funds, Stocks, Options, and Auction Rate Securities
\end{itemize}
\cvsubsection{Ameriprise Financial Services}
{\emph{Financial Advisor} \sbullet New York, NY}
{January, 2004 - August, 2005}
\begin{itemize}
\item Gave Seminars, Sold Financial Plans, Met Sales Goals, and Applied Monte Carlo Simulation \& Modern Portfolio Theory
\item Series 7 Securities, Series 66 Investment Advisor, Life Insurance, Health Insurance, and Variable Annuity Licensed
\end{itemize}
\cvsection{Education}
\cvsubsection{University Of West Florida}
{\emph{Master of Business Administration} \sbullet College of Business \sbullet Pensacola, FL}
{August 2010} | {
"domain": "codereview.stackexchange",
"id": 23597,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "tex",
"url": null
} |
The conditional probability of interest is:
\begin{aligned} \mathbb{P}(W=w | W \geqslant 1) &= \frac{\mathbb{P}(W=w, W \geqslant 1)}{\mathbb{P}(W \geqslant 1)} \\[6pt] &= \frac{\mathbb{P}(W=w) \cdot \mathbb{I}(w \geqslant 1)}{1 - \mathbb{P}(W = 0)} \\[6pt] &= \frac{\mathbb{P}(W=w)}{1 - \mathbb{P}(W = 0)} \cdot \mathbb{I}(w \geqslant 1) \\[6pt] &= \frac{{N \choose w} \Big( \frac{1}{k} \Big)^w \Big( 1-\frac{1}{k} \Big)^{N-w}}{1 - \Big( 1-\frac{1}{k} \Big)^{N}} \cdot \mathbb{I}(w \geqslant 1) \\[6pt] &= {N \choose w} \cdot \frac{(k-1)^{N-w}}{k^N - ( k-1)^{N}} \cdot \mathbb{I}(w \geqslant 1). \\[6pt] \end{aligned}
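A quick numeric check of this result (my own sketch): the final expression should agree with the direct definition $\mathbb{P}(W=w)/(1-\mathbb{P}(W=0))$ for $W \sim \text{Binomial}(N, 1/k)$, and should sum to 1 over $w \geqslant 1$.

```python
from math import comb

def cond_pmf(w, N, k):
    """P(W = w | W >= 1), per the simplified expression above."""
    if w < 1:
        return 0.0
    return comb(N, w) * (k - 1) ** (N - w) / (k ** N - (k - 1) ** N)

N, k = 12, 5
p = 1 / k
binom = lambda w: comb(N, w) * p ** w * (1 - p) ** (N - w)

# Agrees with P(W=w) / (1 - P(W=0)) for every w >= 1, and sums to 1
for w in range(1, N + 1):
    assert abs(cond_pmf(w, N, k) - binom(w) / (1 - binom(0))) < 1e-12
assert abs(sum(cond_pmf(w, N, k) for w in range(1, N + 1)) - 1) < 1e-12
```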
This is the same as the expression you have derived in your update, but it is a simpler form. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9861513875459069,
"lm_q1q2_score": 0.8085389530207316,
"lm_q2_score": 0.8198933381139646,
"openwebmath_perplexity": 355.32491773510765,
"openwebmath_score": 0.99822598695755,
"tags": null,
"url": "https://stats.stackexchange.com/questions/341862/a-generalized-boy-or-girl-problem"
} |
The singular Gaussian distribution is the push-forward of a nonsingular distribution in a lower-dimensional space. Geometrically, you can take a standard Normal distribution, rescale it, rotate it, and embed it isometrically into an affine subspace of a higher dimensional space. Algebraically, this is done by means of a Singular Value Decomposition (SVD) or its equivalent.
Let $\Sigma$ be the covariance matrix and $\mu$ the mean in $\mathbb{R}^n$. Because $\Sigma$ is non-negative definite and symmetric, the SVD will take the form
$$\Sigma = U \Lambda^2 U^\prime$$
for an orthogonal matrix $U\in O(n)$ and a diagonal matrix $\Lambda$. $\Lambda$ will have $m$ nonzero entries, $0\le m \le n$.
Let $X$ have a standard Normal distribution in $\mathbb{R}^m$: that is, each of its $m$ components is a standard Normal distribution with zero mean and unit variance. Abusing notation a little, extend the components of $X$ with $n-m$ zeros to make it an $n$-vector. Then $U\Lambda X$ is in $\mathbb{R}^n$ and we may compute
$$\text{Cov}(U\Lambda X) = U \Lambda\text{Cov}(X) \Lambda^\prime U^\prime = U \Lambda^2 U^\prime = \Sigma.$$
Consequently
$$Y = \mu + U\Lambda X$$
has the intended Gaussian distribution in $\mathbb{R}^n$.
It is of interest that this works when $n=m$: that is to say, this is a (standard) method to generate multivariate Normal vectors, in any dimension, for any given mean $\mu$ and covariance $\Sigma$ by using a univariate generator of standard Normal values.
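A NumPy sketch of this construction (my own addition; the rank-2 covariance below is an arbitrary example, not the one used for the figures):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary rank-2 (hence singular) covariance on R^3, plus a mean
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
Sigma = A @ A.T                       # non-negative definite, rank 2
mu = np.array([1.0, -1.0, 0.5])

# SVD of the symmetric PSD matrix: Sigma = U diag(s) U'
U, s, _ = np.linalg.svd(Sigma)
Lam = np.sqrt(s)                      # Lambda; the last entry is ~0

# Y = mu + U Lam X, with X standard normal (rows are samples here)
X = rng.standard_normal((200_000, 3))
Y = mu + (X * Lam) @ U.T

# Empirical mean and covariance recover mu and Sigma
assert np.allclose(Y.mean(axis=0), mu, atol=0.02)
assert np.allclose(np.cov(Y.T), Sigma, atol=0.05)
```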
As an example, here are two views of a thousand simulated points for which $n=3$ and $m=2$:
The second view, from edge-on, demonstrates the singularity of the distribution. The R code that produced these figures follows the preceding mathematical exposition. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9877587272306607,
"lm_q1q2_score": 0.8650662425941783,
"lm_q2_score": 0.8757869900269366,
"openwebmath_perplexity": 318.70517787226265,
"openwebmath_score": 0.8128841519355774,
"tags": null,
"url": "https://stats.stackexchange.com/questions/159313/generating-samples-from-singular-gaussian-distribution/159322"
} |
inorganic-chemistry, molecular-structure, halides, noble-gases
Title: Structure of xenon hexafluoride
The central atom has a hybridization of $\mathrm{sp^3d^3}$. Thus, its structure should be pentagonal bipyramidal.
Why is it not that but a distorted octahedron? This is one of the many reasons why hybridisation including d-orbitals fails for main-group elements.
Xenon in $\ce{XeF6}$ is not hybridised at all. Instead of invoking populated core d-orbitals or energetically removed d-orbitals (remember the aufbau principle: the next shell’s s-orbital has a lower energy than the d-orbitals you are proposing to include in hybridisation!) xenon just offers its three p-orbitals $\mathrm{p}_x, \mathrm{p}_y$ and $\mathrm{p}_z$ for four-electron-three-centre bonds. These 4e3c bonds can be understood using the following two mesomeric structures:
$$\ce{F^-\bond{...}Xe^+-F <-> F-Xe^+\bond{...}F-}$$
Each $\ce{Xe-F}$ bond has a bond order of ½, and for each fluorine there is another with a bond angle $\angle(\ce{F-Xe-F}) \approx 180^\circ$ as part of the same 4e3c bond.
Also note that this means that xenon’s lone pair is comfortably located in the $\mathrm{5s}$ orbital. | {
"domain": "chemistry.stackexchange",
"id": 6982,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, molecular-structure, halides, noble-gases",
"url": null
} |
2. We must swap the rows to put a non-zero in the top left position (this is the partial pivoting step). Swapping the first and second rows gives the matrix
$$\begin{bmatrix} 1 & -4 & 2\\ 0 & -1 & 2\\ -2 & 5 & -4 \end{bmatrix}.$$
We carry out one row operation to eliminate the non-zero in the bottom left entry as follows
$$\begin{bmatrix} 1 & -4 & 2\\ 0 & -1 & 2\\ -2 & 5 & -4 \end{bmatrix} \xrightarrow{\,R3 + 2\times R1\,} \begin{bmatrix} 1 & -4 & 2\\ 0 & -1 & 2\\ 0 & -3 & 0 \end{bmatrix}$$
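The steps described here can be sketched in NumPy (my own addition). Note that the text swaps in the first row with a non-zero pivot; production partial pivoting would instead pick the largest-magnitude pivot, but this sketch mirrors the text and carries the elimination through to upper-triangular form:

```python
import numpy as np

def forward_eliminate(M):
    """Forward elimination, swapping in the first row with a non-zero pivot
    (the simple pivoting used in the text)."""
    M = M.astype(float).copy()
    n = M.shape[0]
    for col in range(n - 1):
        if M[col, col] == 0:
            # Swap with the first lower row that has a non-zero in this column
            p = col + next(r for r in range(n - col) if M[col + r, col] != 0)
            M[[col, p]] = M[[p, col]]
        for row in range(col + 1, n):
            M[row] -= (M[row, col] / M[col, col]) * M[col]
    return M

M = np.array([[0, -1, 2], [1, -4, 2], [-2, 5, -4]])
print(forward_eliminate(M))  # first two rows match the text's intermediate matrix
```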
Next we use the middle element to eliminate the non-zero value underneath it. This gives | {
"domain": "github.io",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9840936092211993,
"lm_q1q2_score": 0.8441241085254859,
"lm_q2_score": 0.8577681031721325,
"openwebmath_perplexity": 257.27628495573083,
"openwebmath_score": 0.8317229151725769,
"tags": null,
"url": "https://bathmash.github.io/HELM/30_2_gauss_elim-web/30_2_gauss_elim-webse2.html"
} |
terminology, programming-languages, semantics
Title: What is pass-by-value-result, and what are its advantages and disadvantages? I have searched on Google, but I can't quite understand what pass-by-value-result is. What are the advantages and disadvantages of using pass-by-value-result?
If I have a program code like this, what will be the result when parameters are passed by value-result?
#include <stdio.h>
int a=2;
int b=1;
void fun(int x, int y) {
b=x+y;
x=a+y;
y=b+x;
} | {
"domain": "cs.stackexchange",
"id": 2609,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "terminology, programming-languages, semantics",
"url": null
} |
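Returning to the pass-by-value-result question above: under value-result, each actual argument is copied into the formal parameter on entry, and the formal's final value is copied back into the actual on return. The snippet omits its call site, so the call fun(a, b) below is my own assumption, purely to illustrate the mechanics in Python:

```python
def fun_value_result(env, actual_x, actual_y):
    """Simulate the C body of fun under pass-by-value-result:
    copy the actuals in, run the body, copy x and y back out on return."""
    x, y = env[actual_x], env[actual_y]   # copy in
    env["b"] = x + y                      # b = x + y  (b is global)
    x = env["a"] + y                      # x = a + y
    y = env["b"] + x                      # y = b + x
    env[actual_x], env[actual_y] = x, y   # copy back on return
    return env

# Hypothetical call fun(a, b) with globals a=2, b=1
print(fun_value_result({"a": 2, "b": 1}, "a", "b"))  # {'a': 3, 'b': 6}
```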
expansion, observable-universe
Title: What is the present day radius of the observable universe? The radius of the observable universe is generally taken to be ~45 billion light years. However, we see distant galaxies as they were many millions of years in the past, so there are two ways I can interpret this radius value:
1) Light that is reaching Earth for the first time today was emitted by ancient galaxies that were initially a lot closer, but are now estimated to be 45 billion light years away from us.
2) Light that is reaching Earth for the first time today was emitted by ancient galaxies that were initially 45 billion light years away, and are now a lot further.
Which interpretation, if they're not both wrong, is correct? The answer is basically #1. The observable universe contains objects whose light has taken up to the age of the universe to travel to us.
Thanks to the expansion of the universe, the things that emitted that light are about 46 billion light years away now.
The exact figure depends on the details of the cosmological parameters and also whether you take the first time that the light can have started out as being the "epoch of recombination" (when the cosmic microwave background was emitted and the universe became transparent to radiation) or the big-bang itself (which would add about 0.5 billion light years to the figure).
Light emitted just after the epoch of recombination has a redshift of about 1100. That means the scale factor has increased by a similar amount. Thus something that is now 46 billion light years away was about 42 million light years away when the light was emitted (of course neither the Earth or our Galaxy existed then).
The answer cannot be #2 since light from something 45 billion light years away cannot have reached us in the age of the universe. | {
"domain": "astronomy.stackexchange",
"id": 2842,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "expansion, observable-universe",
"url": null
} |
c, embedded
/*********************************************************************
* TYPEDEFS
*/
/*********************************************************************
* CONSTANTS
*/
/*********************************************************************
* PUBLIC VARIABLES
*/
/*********************************************************************
* PUBLIC FUNCTIONS
*/
bool Heater_Init (void);
bool Heater_SetPower (uint8_t powerPercentage); | {
"domain": "codereview.stackexchange",
"id": 40771,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, embedded",
"url": null
} |
algorithms, recursion
&\quad\text{return $2z$}\\\\
&\text{else:}\\\\
&\quad\text{return $x + 2z$}\\\\
\end{align}
The operation of the multiplication algorithm can be understood as follows:
$$
x.y=\begin{cases} 2(x.\lfloor y/2\rfloor)\text{ if }y\text{ is even}\\\\x+2(x.\lfloor y/2\rfloor)\text{ if }y\text{ is odd} \end{cases}
$$
which can be verified as,
\begin{align}
y&=2n\implies 2(x.\lfloor y/2\rfloor)=2(x.\lfloor n\rfloor)=2xn=x.2n=xy\\\\
y&=2n+1\implies x+2(x.\lfloor y/2\rfloor)=x+2(x.\lfloor (2n+1)/2\rfloor)=x+2(x.n)=x+2xn=x(1+2n)=xy
\end{align}
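The multiplication recursion just verified can be transcribed directly; a Python sketch (my own addition):

```python
def multiply(x, y):
    """x * y for y >= 0, via the halving recursion above."""
    if y == 0:
        return 0
    z = multiply(x, y // 2)
    if y % 2 == 0:
        return 2 * z          # y even:  x*y = 2 * (x * floor(y/2))
    return x + 2 * z          # y odd:   x*y = x + 2 * (x * floor(y/2))

assert all(multiply(x, y) == x * y for x in range(-20, 40) for y in range(40))
```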
But is it possible to understand the given division algorithm in a similar way ? Suppose $x$ is divided by $y$, and the quotient is $q$ and remainder is $r$. Then, it means the following:
$$x = q \cdot y + r \quad \quad \textrm{where} \quad r < y. \quad \quad (1)$$
Consider the case when $x = even$. Then, $x = 2x'$. Suppose that $x'$ when divided by $y$ gives quotient $q'$ and remainder $r'$. Then, we have:
$$x = 2x' = 2 \cdot (q'\cdot y + r') \quad \quad \textrm{where} \quad r' < y. \quad \quad (2)$$
By equation $(1)$, we have | {
"domain": "cs.stackexchange",
"id": 21448,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, recursion",
"url": null
} |
filters, discrete-signals, z-transform, poles-zeros, matched-filter
Title: Plane Settings of the Matched $z$-transform Method I've come across that the matched $z$-transform maps poles of the $s$-plane design to locations in the $z$-plane. My question is, what is the $s$-plane and what does this mean? I'm aware that the variable $s$ can often be used to denote the complex plane but the fact that this has been used in conjunction with the $z$ variable has confused me slightly. The $s$-plane is the complex plane associated with the Laplace transform, i.e., with transfer functions of continuous-time systems, whereas the $z$-plane is the complex plane associated with the $\mathcal{Z}$-transform, i.e., with transfer functions of discrete-time systems. The matched $Z$-transform is one way of transforming continuous-time systems to discrete-time systems.
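Concretely, the matched $z$-transform maps each $s$-plane pole or zero $s = p$ to the $z$-plane point $z = e^{pT}$, with $T$ the sampling period. A tiny Python illustration (the numbers are arbitrary):

```python
import cmath

T = 0.01                          # sampling period (arbitrary choice)
s_pole = -2 + 3j                  # a stable s-plane pole: Re(s) < 0
z_pole = cmath.exp(s_pole * T)    # matched z-transform mapping z = e^{sT}

# Stable s-plane poles (left half-plane) land inside the unit circle
assert abs(z_pole) < 1
print(z_pole)
```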
Take a look at this answer for more information about transformations from continuous-time to discrete-time. | {
"domain": "dsp.stackexchange",
"id": 7436,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, discrete-signals, z-transform, poles-zeros, matched-filter",
"url": null
} |
ros, python, rviz, gentoo
This did the trick -- many thanks William!
Originally posted by jdbrody on ROS Answers with karma: 28 on 2015-07-09
Post score: 0
This is most likely a bug in the Python CMake infrastructure which doesn't respect Gentoo's eselect tool's choice. You can explicitly pass the right Python executable and related variables directly to catkin and that should work around the issue, similar to this:
https://gist.github.com/mikepurvis/9837958#file-gistfile1-sh-L59
That is for OS X, but it's the same idea. If you want to report it to Gentoo, then you should be able to reproduce by using cmake --find-package and show that even though eselect has Python2.7 selected, CMake finds Python3.4.
Originally posted by William with karma: 17335 on 2015-07-09
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jdbrody on 2015-07-09:
Thanks so much William! Please excuse the dumb question, but what argument should I pass to cmake --find-package? It chokes without one and none of the things I tried seemed right!
Comment by William on 2015-07-09:
I think it's PythonLibs. After that you have to add more args but it prompts you for those. | {
"domain": "robotics.stackexchange",
"id": 22133,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, python, rviz, gentoo",
"url": null
} |
javascript, css, sass
_timeout = setTimeout(function() {
// Update dimensions
viewportHeight = document.documentElement.clientHeight;
// Update the scroll percentage
_updatePercent();
}, 250);
});
};
return {
init: init,
getPercent: getPercent,
updateStyles: function(stylesObj) {
Object.assign(_el.style, stylesObj);
}
};
}(window, document));
// Usage
progr.init(); | {
"domain": "codereview.stackexchange",
"id": 21445,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, css, sass",
"url": null
} |
particle-physics, pauli-exclusion-principle, neutrons, neutron-stars
An antisymmetric spatial wavefunction would have a node, like an atomic p orbital, like the electric potential around a dipole antenna. This is a higher energy state than a simple spherical blob, a Gaussian. Given the range of internuclear forces, this nodal antisymmetric wavefunction has more energy than if the two nucleons just stayed apart. The point is that the radial or angular kinetic energy has to be either "zero" or some quantized value that exceeds "escape velocity". So forget that part of the system wavefunction being antisymmetric.
BTW, we don't have separate spatial wavefunctions for the two nucleons - whatever one does, the partner does the exact opposite, like a two-body celestial mechanics problem. They orbit a common barycenter.
The spin part could be antisymmetric. This is a bit tricky. If particle #1 is up and #2 is down, we can write "UD". There is also "DU". We form the spin part of the wavefunction for the pair as UD-DU. We could instead choose UD+DU but note this is symmetric. So are UU and DD. Just how UD-DU differs from UD+DU may mystify beginners in quantum mechanics, but it's important, and it's how physical matter works whether we humans like it or not. (You might also see where the 'u' and 'd' quarks got their names. The quark idea came along years after isospin.)
Neither D nor U is really a wave or a function; they're at most just rows and columns in matrices if you must represent them in familiar math. Otherwise quantum physicists just deal with these symbolically. Still the jargon is "wavefunction" - we silly humans and our primitive scientific language! | {
"domain": "physics.stackexchange",
"id": 12703,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, pauli-exclusion-principle, neutrons, neutron-stars",
"url": null
} |
palaeontology, herpetology
Title: How big can cold-blooded animals get? It seems impossible to have reptiles the size of dinosaurs, just because they are really big! Did they have different systems of maintaining body temperature, or maybe they weren't the exact type of animals that we today call reptiles? The answer is quite simple, as from @Alan Boyd's link: they are cold-blooded and thus can't go out to hunt in the cold; they need to stay put till they get some prey.
So, it mainly depends on the outside temperature. I found this interesting paper on the relation of body size and latitude.
Body sizes of poikilotherm vertebrates at different latitudes
Maximum sizes of 12,503 species of poikilotherm vertebrates were
analyzed for latitudinal trends, using published data from 75 faunal
studies. A general trend appears which may be summarized by the rule
"among fish and amphibian faunas the proportion of species with large
adult size tends to increase from the equator towards the poles". The
rule holds for freshwater fish, deepsea fish, anurans, urodeles, and
marine neritic fish (arranged roughly in order of decreasing clarity of
the trend). In general the rule applies not only within these groups
of families but also within single families. In reptile groups, the
rule holds weakly among snakes and not at all among lizards or
non-marine turtles. Possible explanations include an association
between small size and greater specialization in the tropics; the
possibility in poikilo-therms of heat conservation or of some other
physiological process related to surface/volume ratio; selection for
larger size in regions subject to winter food shortages; and an
association between large adult size and high reproductive potential
in cold regions. Other suggestions can be advanced, but all are
conjectural and few are subject to test. Global size - latitude trends
should be looked for in other living groups.
Cite: Lindsey, C. C., 1966: Body sizes of poikilotherm vertebrates at
different latitudes. Evolution: 456-465
Now let's compare some of the largest cold-blooded animals:
Reptiles
Amphibians
Fishes (Pisces) | {
"domain": "biology.stackexchange",
"id": 2449,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "palaeontology, herpetology",
"url": null
} |
To see that the minimum value $xy=\frac14$ is attained, check that if $x= \frac14(\sqrt{15} + \sqrt{11})$, $y= \frac14(\sqrt{15} - \sqrt{11})$ and $z= \frac12\sqrt{15}$, then the original equations are satisfied, and $xy = \frac14$.
To obtain those values for $x$, $y$ and $z$, notice that the previous equations imply that to attain the minimum value $\frac14$ for $u$, we must have $v=z=\frac12\sqrt{15}$. In terms of $x$ and $y$, this says that $x+y = \frac12\sqrt{15}$. Also, $xy = \frac14$. This means that $x$ and $y$ are the roots of the quadratic equation $\lambda^2 - \frac12\sqrt{15}\lambda + \frac14 = 0$, giving the values for $x$ and $y$ listed above.
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Let me show you how the method of the Lagrange Multipliers would work.
You will probably learn it sooner or later anyway and it may interest you.
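Before setting it up, a quick numeric check (my own addition) that the minimizer quoted earlier really satisfies both constraints and attains $xy = \frac14$:

```python
from math import sqrt, isclose

x = (sqrt(15) + sqrt(11)) / 4
y = (sqrt(15) - sqrt(11)) / 4
z = sqrt(15) / 2

assert isclose(x ** 2 + y ** 2 + z ** 2, 7)   # x^2 + y^2 + z^2 = 7
assert isclose(x * y + x * z + y * z, 4)      # xy + xz + yz = 4
assert isclose(x * y, 0.25)                   # the minimum value of xy
print("constraints satisfied; xy =", x * y)
```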
We'll define the function $F(x,y,z,\lambda,\mu)$ as follows:
$$F(x,y,z,\lambda,\mu) = xy + \lambda(x^2+y^2+z^2-7) + \mu(xy+xz+yz-4)$$ | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9857180631379971,
"lm_q1q2_score": 0.8104466686818101,
"lm_q2_score": 0.822189121808099,
"openwebmath_perplexity": 298.8340266958963,
"openwebmath_score": 0.9169202446937561,
"tags": null,
"url": "https://mathhelpboards.com/threads/find-minimum-value-of-xy.6650/"
} |
The penultimate equality follows from integration by parts.
- 4 years, 11 months ago
@Ruben Doornenbal Can we have problem 24?
- 4 years, 11 months ago
Sir, can you elaborate? I didn't understand this one @Ruben Doornenbal
- 4 years, 10 months ago
@U Z The idea is to expand the factor $\displaystyle \frac{1}{1+u^2}$ in a geometric series and interchange summation and integration. The last equality is just the definition of Catalan's constant.
- 4 years, 10 months ago
@Ronak Agarwal @Ruben Doornenbal The answer given is wrong!!! It should be -G!!! You must have forgotten the negative sign.......
- 1 year, 1 month ago
$Problem\quad 30$
Find $\displaystyle \int _{ 1 }^{ \infty }{ \frac { x-\left\lfloor x \right\rfloor -0.5 }{ x } dx }$
- 4 years, 11 months ago | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9566342061815148,
"lm_q1q2_score": 0.8410218609661799,
"lm_q2_score": 0.8791467580102418,
"openwebmath_perplexity": 3756.1208070780667,
"openwebmath_score": 0.9985517263412476,
"tags": null,
"url": "https://brilliant.org/discussions/thread/brilliant-integration-contest-season-1-part-2/"
} |
c, strings, memory-management, pointers
char** destroy_strarr(char** strarr, int length)
{
// Free all strings inside an array of strings and the array itself
int index = 0;
while (index < length)
{
// Free the elements and assign the pointer to NULL
free(strarr[index]);
strarr[index++] = NULL;
}
// Free the array itself and assign to NULL
free(strarr);
strarr = NULL;
return strarr;
}
Here's the corresponding stringfuncs.h
#pragma once
/*
Take string input from user
Pass in a string prompt to display to the user prior to input
Returns a pointer to the input string
*/
char* get_string(const char* prompt);
/*
Split given string by delimiter into an array of strings
Pass in the address of a variable to store the length of the array
Returns a pointer to the array of strings
*/
char** split_string(const char delimiter, const char* string, int* length);
/*
Free all the memory used by an array of strings
Assigns all the string elements as NULL
Returns NULL on success
*/
char** destroy_strarr(char** strarr, int length);
And example use-
#include<stdio.h>
#include<stdlib.h>
#include "stringfuncs.h"
int main()
{
int length;
char* input = get_string("> ");
char** strarr = split_string(' ', input, &length);
strarr = destroy_strarr(strarr, length);
free(input);
input = NULL;
return 0;
}
Primarily concerned about split_string and get_string, the rest are helpers.
Note: This targets C only, not C++
whether or not the implementation is efficient and memory safe. | {
"domain": "codereview.stackexchange",
"id": 37978,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, strings, memory-management, pointers",
"url": null
} |
formal-languages, context-free, pumping-lemma
Title: Proving that $\{u\#v\#w \mid u,v,w \in \{a,b,c\}^*, |u|_a = |v|_b = |w|_c\}$ isn't context-free I have a question about the pumping lemma for context-free languages.
I understand the conditions of the pumping lemma.
Assume $L$ is context-free. Let $n>0$ be the pumping length given by the lemma.
$z=uvwxy$ with conditions:
$|vwx|\leq n$
$|vx|\geq 1$
$\forall i \geq 0,\ uv^iwx^iy \in L$ | {
"domain": "cs.stackexchange",
"id": 17761,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "formal-languages, context-free, pumping-lemma",
"url": null
} |
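A standard route to finish this argument (sketched here under the usual conventions; the specific witness string $z = a^n \# b^n \# c^n$ is my choice, not from the question) is:

```latex
\textbf{Sketch.} Assume $L$ is context-free with pumping length $n$ and take
$z = a^n \# b^n \# c^n \in L$. Write $z = uvwxy$ with $|vwx| \le n$ and $|vx| \ge 1$.
\begin{itemize}
  \item If $v$ or $x$ contains a $\#$, then $uv^2wx^2y$ has more than two $\#$'s,
        so it is not in $L$.
  \item Otherwise $v$ and $x$ lie inside at most two adjacent blocks (since
        $|vwx| \le n$), so pumping down ($i = 0$) changes at most two of the three
        letter counts of the segments while leaving the third fixed. Since
        $|vx| \ge 1$, at least one count strictly decreases, so the three counts
        are no longer all equal and $uwy \notin L$.
\end{itemize}
In every case some pumped string leaves $L$, contradicting the pumping lemma.
```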
c#, image, winforms, graphics, genetic-algorithm
((System.ComponentModel.ISupportInitialize)(this.pictureBox4)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox5)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox6)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox7)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox8)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox9)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox10)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox11)).BeginInit();
this.SuspendLayout();
//
// pictureBox1
//
this.pictureBox1.Location = new System.Drawing.Point(0, 0);
this.pictureBox1.Name = "pictureBox1";
this.pictureBox1.Size = new System.Drawing.Size(255, 255);
this.pictureBox1.TabIndex = 0;
this.pictureBox1.TabStop = false;
//
// OpenImageButton
//
this.OpenImageButton.Location = new System.Drawing.Point(12, 261);
this.OpenImageButton.Name = "OpenImageButton";
this.OpenImageButton.Size = new System.Drawing.Size(75, 23);
this.OpenImageButton.TabIndex = 1;
this.OpenImageButton.Text = "Open Image";
this.OpenImageButton.UseVisualStyleBackColor = true;
this.OpenImageButton.Click += new System.EventHandler(this.OpenImage);
//
// openFileDialog1
// | {
"domain": "codereview.stackexchange",
"id": 33349,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, image, winforms, graphics, genetic-algorithm",
"url": null
} |
acid-base, equilibrium, aqueous-solution
Rearranging the terms to get $\ce{[H+]}$,
$$[H^+]=\sqrt{\frac{\frac{[NH_4^+]\cdot k_w}{k_b}+k_{a2}[HCO_3^-]}{1+\frac{[HCO_3^-]}{k_{a1}}}}$$
$$[H^+]=\sqrt{\frac{k_{a1}\cdot \big(\frac{[NH_4^+]\cdot k_w}{k_b}+k_{a2}[HCO_3^-]\big)}{k_{a1}+[HCO_3^-]}}$$
Now we will have to make two assumptions. First, let's assume that $k_{a1}\ll[HCO_3^-]$ so that $k_{a1}+[HCO_3^-]\approx[HCO_3^-]$
$$[H^+]=\sqrt{\frac{k_{a1}\cdot \big(\frac{[NH_4^+]\cdot k_w}{k_b}+k_{a2}[HCO_3^-]\big)}{[HCO_3^-]}}$$
If you assume that neither hydrolysis nor the dissociation goes too far, you can assume the concentration of bicarbonate ion to be equal to the initial concentration (say C). Since Ammonium bicarbonate dissociates to give two products in 1:1 ratio, concentration of ammonium produced is equal to the concentration of bicarbonate ion produced. | {
"domain": "chemistry.stackexchange",
"id": 4489,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "acid-base, equilibrium, aqueous-solution",
"url": null
} |
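With $[NH_4^+] = [HCO_3^-]$ cancelling as described above, the expression reduces to $[H^+] \approx \sqrt{k_{a1}\left(\frac{K_w}{K_b} + k_{a2}\right)}$. Plugging in constants gives a pH near 7.8 (the numeric constants below are common textbook values, an assumption on my part rather than values from the original answer):

```python
import math

# Common textbook equilibrium constants (assumptions, not from the original answer)
ka1 = 4.3e-7    # first ionization of H2CO3
ka2 = 4.8e-11   # ionization of HCO3-
kb = 1.8e-5     # NH3 + H2O -> NH4+ + OH-
kw = 1.0e-14    # autoionization of water

# Simplified expression after the concentration terms cancel
h = math.sqrt(ka1 * (kw / kb + ka2))
print(f"pH ≈ {-math.log10(h):.2f}")  # → pH ≈ 7.79
```

This matches the observation that ammonium bicarbonate solutions are mildly basic, with the pH essentially independent of concentration under these assumptions.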
where $m = 1$ to $n-2$. Thus, by inclusion-exclusion, a probability of a run of 3 heads is
$$P(\bigcup_{k=1}^{n-2} \bigcap_{i=0}^2 H_{k+i}) = P(\bigcup_{k=1}^{n-2} A_k)$$
$$= \sum_{i=1}^{n-2} P(A_i) - \sum_{1 \le i < j \le n-2} P(A_i \cap A_j) + \sum_{1 \le i < j < k \le n-2} P(A_i \cap A_j \cap A_k)$$
Now, $P(A_i \cap A_j \cap A_k) = 0$ unless $i+2=j+1=k$ in which case we have $P(A_i \cap A_j \cap A_k) = P(H_k)$
Similarly, $P(A_i \cap A_j) = 0$ unless $i+1=j$ or $i+2=j$ in which cases we respectively have $P(A_i \cap A_j) = P(H_j \cap H_{j+1})$ and $P(A_i \cap A_j) = P(H_j)$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9840936101542133,
"lm_q1q2_score": 0.8478342520526447,
"lm_q2_score": 0.8615382147637196,
"openwebmath_perplexity": 135.75946625984267,
"openwebmath_score": 0.9322981834411621,
"tags": null,
"url": "https://math.stackexchange.com/questions/2749916/probability-of-a-run-of-3-heads-when-i-flip-a-coin-n-times"
} |
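The inclusion–exclusion computation above can be sanity-checked by brute-force enumeration (a sketch; the choice of $n$ and the fair coin are my assumptions):

```python
from itertools import product
from fractions import Fraction

def prob_run_of_3(n):
    """Probability that n fair coin flips contain a run of 3 heads, by enumeration."""
    hits = sum(1 for flips in product("HT", repeat=n) if "HHH" in "".join(flips))
    return Fraction(hits, 2 ** n)

# For n = 5, exactly 8 of the 32 sequences contain HHH.
print(prob_run_of_3(5))  # → 1/4
```

Exhaustive enumeration is exponential in $n$, so it only serves as a check of the closed-form answer for small $n$.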
mechanical-engineering, systems-engineering, euler
Title: Euler's formula with different coefficients and constants Hi and thank you in advance. I understand that one of Euler's formulas states:
cos(x) + isin(x) = e^ix
I know that if you were to have the same coefficient attached to sin and cos the following would hold true:
4*cos(x) + 4*isin(x) = 4*e^ix or z*cos(x) + z*isin(x) = z*e^ix
Additionally a constant could be applied inside the trig term and this would hold true:
cos(3x) + isin(3x) = e^3ix or cos(xt) + isin(xt) = e^ixt
I have trouble understanding how we can utilize this though say if any of the above coefficients were to not be uniform.
Say:
4*cos(x) + 3*isin(x) = ?
If you were to say the above is equivalent to 7e^ix, this is not true. How can you use Euler's formula with different coefficients? Is it possible?
I ask this because a fellow student used the following: | {
"domain": "engineering.stackexchange",
"id": 5141,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, systems-engineering, euler",
"url": null
} |
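One standard way to handle unequal coefficients is to split the expression into two counter-rotating exponentials, using a cos(x) + i*b sin(x) = ((a+b)/2) e^(ix) + ((a-b)/2) e^(-ix). A quick numerical check (the specific values of a, b, x are illustrative):

```python
import cmath

a, b, x = 4.0, 3.0, 0.7  # illustrative values

lhs = a * cmath.cos(x) + 1j * b * cmath.sin(x)
rhs = (a + b) / 2 * cmath.exp(1j * x) + (a - b) / 2 * cmath.exp(-1j * x)
assert abs(lhs - rhs) < 1e-12  # the two-exponential identity holds

# ...whereas collapsing to a single 7*e^(ix) does not:
assert abs(lhs - 7 * cmath.exp(1j * x)) > 1.0
```

So an expression with unequal coefficients cannot be written as a single c·e^(ix), but it can always be written as a sum of e^(ix) and e^(-ix) terms.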
inorganic-chemistry, molecular-structure, radicals
Title: Structure of Br3O8 What is the structure of $\ce{Br3O8}$? It has an odd number of electrons; does that make it a free radical?
The structure given in my book shows
Where did the 7th electron of the central atom go?
Picture from book (Pg 265, NCERT Chemistry Part II, class 11): The electron didn't go anywhere. It's in an unhybridized p orbital on the central bromine, and yes, $\ce{Br3O8}$ is a free radical. That is why it decomposes above −80 °C.$^{[1]}$
$^{[1]}$ Cotton, F. A. Progress in Inorganic Chemistry - Volume 2; Interscience Publishers: New York, NY, 1960. | {
"domain": "chemistry.stackexchange",
"id": 7474,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, molecular-structure, radicals",
"url": null
} |
machine-learning, r, accuracy, overfitting, gbm
Title: How to determine if my GBM model is overfitting? Below is a simplified example of a h2o gradient boosting machine model using R's iris dataset. The model is trained to predict sepal length.
The example yields an r2 value of 0.93, which seems unrealistic. How can I assess if these are indeed realistic results or simply model overfitting?
library(datasets)
library(h2o)
# Get the iris dataset
df <- iris
# Convert to h2o
df.hex <- as.h2o(df)
# Initiate h2o
h2o.init()
# Train GBM model
gbm_model <- h2o.gbm(x = 2:5, y = 1, df.hex,
ntrees=100, max_depth=4, learn_rate=0.1)
# Check Accuracy
perf_gbm <- h2o.performance(gbm_model)
rsq_gbm <- h2o.r2(perf_gbm)
---------->
> rsq_gbm
[1] 0.9312635 The term overfitting means the model is learning relationships between attributes that only exist in this specific dataset and do not generalize to new, unseen data.
Just by looking at the model accuracy on the data that was used to train the model, you won't be able to detect if your model is or isn't overfitting.
To see if you are overfitting, split your dataset into two separate sets:
a train set (used to train the model)
a test set (used to test the model accuracy)
A 90% train, 10% test split is very common. Train your model on the train set and evaluate its performance on both the train and the test set. If the accuracy on the test set is much lower than the model's accuracy on the train set, the model is overfitting.
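As a language-agnostic illustration of that check (a Python sketch rather than R/h2o; the 1-nearest-neighbour model is deliberately overfit-prone because it memorizes the training data, and all the data here is synthetic):

```python
import random

random.seed(0)  # reproducible split

def make_point():
    x = random.random()
    return x, x + random.gauss(0, 0.1)  # y = x + noise

data = [make_point() for _ in range(100)]
train, test = data[:90], data[90:]      # 90% train / 10% test

def predict(x, train_set):
    # 1-nearest-neighbour: return the y of the closest training x.
    # This model memorizes the training data, so it is overfit-prone.
    return min(train_set, key=lambda p: abs(p[0] - x))[1]

def mse(points, train_set):
    return sum((predict(x, train_set) - y) ** 2 for x, y in points) / len(points)

train_mse = mse(train, train)  # exactly 0: each training x recalls its own y
test_mse = mse(test, train)    # clearly worse: the train/test gap flags overfitting
```

The same pattern applies to the h2o GBM: pass a held-out frame (or use `nfolds`) and compare train vs. validation metrics rather than trusting the training-set r2 alone.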
You can also use cross-validation (e.g. splitting the data into 10 sets of equal size, for each iteration use one as test and the others as train) to get a result that is less influenced by irregularities in your splits. | {
"domain": "datascience.stackexchange",
"id": 3542,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, r, accuracy, overfitting, gbm",
"url": null
} |
special-relativity
Title: Behavior of mass approaching the speed of light I would like some clarification on the behavior of relativistic mass as it approaches the speed of light. I am a high school student with an interest in physics so chances are I will have a hard time understanding any formal math in special relativity but I appreciate it anyway.
In special relativity, an object at any non-zero velocity (within the universal speed limit) experiences a length contraction. I would like to know how a mass behaves when an object approaches high speeds, as relativistic mass increases and the length decreases (contracts). Thanks!
In special relativity, an object at any non-zero velocity (within the
universal speed limit) experiences a length contraction.
This isn't actually correct. The object does not experience length contraction since the object is at rest with respect to itself.
It is correct to say that, in an inertial reference frame (IRF) in which the object is uniformly moving, the observed length, in the direction of the motion, will be contracted from the length in the IRF in which the object is at rest.
But the object does not experience length contraction since uniform motion is relative. There are an infinity of relatively moving IRFs in which the object is in relative motion and each one observes a different length contraction.
I would like to know how a mass behaves when an object approaches high
speeds,
Likewise, a mass is at rest with respect to itself. In an IRF in which the mass is uniformly moving, the total energy of the mass is given by
$$E = \sqrt{(pc)^2 + (mc^2)^2} = \gamma mc^2$$
where
$$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$$
In the reference frame in which the momentum is zero (particle is at rest), this simplifies to
$$E = mc^2$$
That's really all there is to it. The invariant mass $m$ is the same for all observers.
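A small numerical illustration of how $\gamma$ (and hence the total energy $\gamma mc^2$ in the moving frame) grows without bound as $v \to c$ (the sample speeds are chosen for illustration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def total_energy(m, v):
    """Total energy gamma * m * c^2 in joules for rest mass m (kg) at speed v."""
    return gamma(v) * m * C ** 2

for frac in (0.0, 0.6, 0.9, 0.99, 0.999):
    print(f"v = {frac:5.3f} c  ->  gamma = {gamma(frac * C):8.3f}")
```

Note that $m$ stays fixed throughout: only $\gamma$ changes with the chosen inertial frame, which is exactly the point about invariant mass made above.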
There is a notion of relativistic mass which is just $\gamma m$ but some question if this notion is useful. From the Wikipedia article "Mass in special relativity": | {
"domain": "physics.stackexchange",
"id": 20098,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity",
"url": null
} |
As for Dominic's second question, I can't seem to find an answer anywhere. But the authors of the above paper do still give us something:
Corollary to Proposition 3.1: If $\tau$ is a Hausdorff topology on $\mathbb{R}$ such that the only sets common to $\tau$ and the usual topology on $\mathbb{R}$ are the co-finite sets, then $\tau$ is countably compact and contains no non-trivial convergent sequences.
I was able to access the Shakhmatov-Tkachenko-Wilson paper here, and I also found some very interesting related information here, on some slides from a talk of Blaszczyk.
• Beautiful, thanks Will, also the comments concerning $\mathbb{R}$! Mar 22, 2016 at 19:47
If $\tau_1$ and $\tau_2$ meet your requirement, then they are called $T_1$-independent. The following paper may contain useful information: Shakhmatov, D.; Tkachenko, M.; Wilson, R. G. Transversal and $T_1$-independent topologies. Houston J. Math. 30 (2004), no. 2, 421–433. MR2852951. | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9919380070417539,
"lm_q1q2_score": 0.841131446375417,
"lm_q2_score": 0.8479677564567913,
"openwebmath_perplexity": 298.57360048270914,
"openwebmath_score": 0.9379008412361145,
"tags": null,
"url": "https://mathoverflow.net/questions/234245/t-2-topologies-that-are-as-disjoint-as-possible/234354"
} |
(Don't confuse this symbol with the letter “u.”). Because you are editing in the cloud, you can easily collaborate with colleagues, import images, and share your diagrams digitally or via print. This post was originally published on September 11, 2018, and updated most recently on July 26, 2020. There are more than 30 symbols used in set theory, but only three you need to know to understand the basics. A Venn diagram serves as a graphical representation of two sets within set theory. What do health care, filmmaking, and the retail industries have in common? In making a Venn diagram, you may also want to consider what is not represented in a set. Ideation methods to turn your design team into a creative powerhouse. The intersection shows what items are shared between categories. In fact, the following three are the perfect foundation. We’ll be using Lucidchart to build our examples because it’s easy to use and completely free. From the example above, the set is the choices the unnamed group make for their drink preferences. Z = {1, 2, 3, 6, 9, 18} Finally, there is one more important set – the universal set. Venn diagrams usually consist of a rectangle which shows the universal set, U or total sample space, and circles which show sets or events. We ask three people what drinks they like. It can also be used to illustrate various rules of probability theory very simply. Use the series of Venn diagram templates on Cacoo as a jumping-off point. Now that you know the Venn diagram symbols, read how to make one! There are eight regions that our restaurants could occupy. This set diagram can be used to visualize set relationships such as intersections. Each circle or ellipse represents a category.
In our example diagram, the teal area (where green and purple overlap) represents the intersection of A and B which we notate as A ∩ B. These three people, whom | {
"domain": "jaromirstetina.cz",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.95598134762883,
"lm_q1q2_score": 0.8253862681134566,
"lm_q2_score": 0.8633916029436189,
"openwebmath_perplexity": 2687.106904132622,
"openwebmath_score": 0.32489916682243347,
"tags": null,
"url": "https://jaromirstetina.cz/b62l8/7c1d22-venn-diagram-symbol"
} |
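The intersection, union, and complement ideas discussed above map directly onto Python's built-in set type (the concrete drink sets are illustrative):

```python
A = {"coffee", "tea", "juice"}  # drinks one group likes
B = {"tea", "juice", "soda"}    # drinks another group likes

print(A & B)   # intersection, A ∩ B: items shared between categories
print(A | B)   # union, A ∪ B: everything in either category
print(A - B)   # items in A but not in B
```

Each printed set corresponds to a region of the two-circle Venn diagram: the overlap, the whole of both circles, and the non-overlapping part of one circle.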
discrete-signals, signal-analysis, frequency-spectrum, power-spectral-density, spectrogram
Power is Energy delivered over a time interval, in other words energy normalized to time.
Consistent with this: Joules is a unit of energy, and Watts (a unit of power) is Joules/sec.
In signal processing we use the terms "Energy Signals" and "Power Signals", with power taken as the squared magnitude of the (possibly complex) waveform regardless of the actual units of the time-domain waveform. We can see the analogy to power from electrical circuits with a normalized resistance, where power is defined as:
$$P(t)= \frac{V(t)^2}{R}$$
$$P(t) = I(t)^2 R$$
Here in units of Watts, given current in Amps, voltage in Volts, and resistance in Ohms.
Thus if the resistance is normalized to $R=1$, we get the power in either case. With the units of time included above, we also see intuitively how the general complex conjugate product is "instantaneous" power, given the actual power formulas above provide the power dissipated at any instant in time. This is consistent with "Instantaneous Power" as given in the table above.
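As a quick numerical check of the normalized ($R = 1$) power idea: the average of the instantaneous power of a sinusoid over a full period is $A^2/2$ (the sampling grid below is my choice for illustration):

```python
import math

A, N = 2.0, 1000  # amplitude and samples per full period (illustrative)

samples = [A * math.sin(2 * math.pi * k / N) for k in range(N)]
inst_power = [v * v for v in samples]  # v(t)^2 / R with R = 1
avg_power = sum(inst_power) / N        # -> A^2 / 2 = 2.0 here

print(avg_power)
```

Because the average power converges to a finite value while the total energy (the un-normalized sum) grows without bound as more periods are included, the everlasting sinewave is a power signal rather than an energy signal.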
Applicable to waveforms in signal processing we have the following related terms: "Energy Signals" and "Power Signals".
Energy Signal: A waveform that is non-zero for a finite duration. This is because such a waveform has finite total energy.
Power Signal: A waveform that extends with non-zero values to infinity, but converges when normalized to time. Such a waveform has infinite energy, but in the case that its average power is bounded, it is referred to as a Power Signal. A sinewave extending for all time is an example of a power signal. | {
"domain": "dsp.stackexchange",
"id": 11063,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "discrete-signals, signal-analysis, frequency-spectrum, power-spectral-density, spectrogram",
"url": null
} |
c#, beginner, object-oriented
</Team>
<Team xmlns="http://xmlsoccer.com/Team">
<Team_Id>561</Team_Id>
<Name>Partick</Name>
<Country>Scotland</Country>
<Stadium>Firhill Stadium</Stadium>
<HomePageURL>http://www.ptfc.co.uk/</HomePageURL>
<WIKILink>http://en.wikipedia.org/wiki/Partick_Thistle_F.C.</WIKILink>
</Team> What @RubberDuck said
plus... I want to be clear that your Leagues / Teams / Players collections etc. - your domain / business objects, should work amongst themselves only. That DAL will create these. Then methods like FindByTeamName() are trivial because we're past all that XML stuff.
public class League {
protected List<Team> Teams { get; set; } | {
"domain": "codereview.stackexchange",
"id": 15888,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, beginner, object-oriented",
"url": null
} |
When two dice are thrown simultaneously, the possible outcomes are:
(1,1), (1,2), (1,3), (1,4), (1,5), (1,6)
(2,1), (2,2), (2,3), (2,4), (2,5), (2,6)
(3,1), (3,2), (3,3), (3,4), (3,5), (3,6)
(4,1), (4,2), (4,3), (4,4), (4,5), (4,6)
(5,1), (5,2), (5,3), (5,4), (5,5), (5,6)
(6,1), (6,2), (6,3), (6,4), (6,5), (6,6)
Number of all possible outcomes = 36
(i) Let E1 be the event in which 5 will not come up on either of them. Then, the favourable outcomes are: (1,1), (1,2), (1,3), (1,4), (1,6), (2,1), (2,2), (2,3), (2,4), (2,6), (3,1), (3,2), (3,3), (3,4), (3,6), (4,1), (4,2), (4,3), (4,4), (4,6), (6,1), (6,2), (6,3), (6,4) and (6,6).
Number of favourable outcomes = 25
∴ P (5 will not come up on either of them) = P (E1) = $\frac{25}{36}$
(ii) Let E2 be the event in which 5 will come up on at least one. Then the favourable outcomes are: | {
"domain": "byjus.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9945307244286673,
"lm_q1q2_score": 0.8287669114015197,
"lm_q2_score": 0.8333245932423308,
"openwebmath_perplexity": 4441.393879949012,
"openwebmath_score": 0.32838594913482666,
"tags": null,
"url": "https://byjus.com/question-answer/two-dice-are-thrown-simultaneously-what-is-the-probability-that-i-5-will-not-come-2/"
} |
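Both parts of the dice question enumerate cleanly in code (a sketch mirroring the counting above):

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely pairs

no_five = [o for o in outcomes if 5 not in o]
at_least_one_five = [o for o in outcomes if 5 in o]

print(Fraction(len(no_five), 36))             # → 25/36
print(Fraction(len(at_least_one_five), 36))   # → 11/36
```

Note the two probabilities are complementary: 25/36 + 11/36 = 1, which is a quick consistency check on the enumeration.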
python, python-3.x, unit-testing, stack
if __name__ == '__main__':
unittest.main()
Am I leaving anything out? Is 16 tests for a stack overkill? What are the best practices for something like this? There are certainly some good tests here. In a related question for Java, the answer by Phil Wright states:
Here are some tests I'd perform at the very least. There may be more
you'd want:
Create an empty Stack. Test that its size is 0.
Push an element onto the stack. Test that its size is now 1.
Push another element onto the stack. Test that its size is now 2.
Pop an element from the stack. Test that it matches the 2nd pushed value. Check that the size of the stack is now 1.
Pop an element from the stack. Test that it matches the 1st pushed value. Check that the size of the stack is 0.
Attempt to pop an element from the stack. You should receive an ArrayIndexOutOfBounds exception.
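Those checks translate almost line-for-line into Python's unittest (the Stack here is a hypothetical list-backed implementation, not the one under review; in Python the analogous failure when popping an empty stack is IndexError rather than Java's ArrayIndexOutOfBounds):

```python
import unittest

class Stack:
    """Minimal list-backed stack (illustrative)."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # raises IndexError when empty

    def size(self):
        return len(self._items)

class TestStack(unittest.TestCase):
    def test_push_pop_sequence(self):
        s = Stack()
        self.assertEqual(s.size(), 0)
        s.push("first")
        self.assertEqual(s.size(), 1)
        s.push("second")
        self.assertEqual(s.size(), 2)
        self.assertEqual(s.pop(), "second")
        self.assertEqual(s.size(), 1)
        self.assertEqual(s.pop(), "first")
        self.assertEqual(s.size(), 0)
        with self.assertRaises(IndexError):
            s.pop()

# run with: python -m unittest this_module
```

A single sequential test like this covers all six bullets; whether you split it into one test per bullet is a readability trade-off.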
At a quick glance, you do most (if not all of these tests). So in some sense the coverage is good.
The amount of testing may be overkill. You have to realize you can't test every possible thing. With that in mind your tests should:
Find current issues with your code.
Nip likely future issues.
Prevent errors you've had in the past from coming back.
The past, present, and future. So for example, your function test_peek_push3_pop1 may have a good reason for being there. Maybe you had designed a stack before and had issues with pushing 3, popping 1, and then peeking. I've never personally had such an issue. It seems like a weird test case; I wouldn't consider it, but maybe you have. If you have a compelling reason to have that test you should explain your reasoning in a comment so I can understand the purpose of the test. Ultimately:
Think and comment WHY you have a test. | {
"domain": "codereview.stackexchange",
"id": 22141,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, unit-testing, stack",
"url": null
} |