special-relativity, field-theory, gauge-theory, representation-theory, degrees-of-freedom
$\langle0|A^\mu (x)|p,\lambda\rangle$
Answer 2 and 3
As also explained in the answer linked in the comment, that Lagrangian is equivalent to the sum of four independent Lagrangians for four independent scalar fields. One of these scalars appears with the "wrong" sign in the Lagrangian, so you get states with negative energies. As the book also notes a bit later, just by looking at the equations of motion, having four scalars or a 4-vector is exactly the same thing: nothing is telling you that $A^\mu$ must transform in a particular way under a Lorentz transformation. The third problem is that you have too many degrees of freedom.
The situation is different when you add the other term with the two mixed derivatives
$$A^\mu \partial_\mu \partial_\nu A^\nu$$
Under a Lorentz transformation you get two $\Lambda^{-1}$ matrices from the two partial derivatives. If the field then transforms as $A\to \Lambda A$, those two matrices cancel and you end up with what you started with: that piece is Lorentz invariant. The Lagrangian you had before was invariant even if, under a Lorentz transformation $x\to \Lambda x$, the field did not transform at all ($A\to A$).
The point you need to understand from that chapter is that the free equations of motion for fields are not something you can build by hand or tweak at will by adding new pieces or modifying old ones. Free equations of motion are 100% fixed by group theory. Even the more complicated-looking equations, for example for a massive graviton, are just a sum of projectors onto the right Poincaré representation for the particle you want to describe. In the vector case, for example, the equations of motion are equivalent to
$$(\square + m^2)A^\mu = 0 \\
\partial^\mu A_\mu = 0$$ | {
"domain": "physics.stackexchange",
"id": 53127,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, field-theory, gauge-theory, representation-theory, degrees-of-freedom",
"url": null
} |
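For completeness, a standard derivation not spelled out in the excerpt: the two conditions follow from the single Proca equation of motion. Taking its divergence kills the antisymmetric field-strength term and forces the constraint:

$$\partial_\mu F^{\mu\nu}+m^2 A^\nu=0 \quad\Rightarrow\quad \partial_\nu\partial_\mu F^{\mu\nu}+m^2\,\partial_\nu A^\nu = m^2\,\partial_\nu A^\nu=0,$$

and substituting $\partial_\mu A^\mu=0$ back into $\partial_\mu F^{\mu\nu}=\square A^\nu-\partial^\nu\partial_\mu A^\mu$ reduces the equation of motion to $(\square+m^2)A^\nu=0$.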
____________________________________________________________________
Bing’s Example G is Completely Normal
The proof showing that Bing's Example G is normal can be modified to show that it is completely normal. First, some definitions. Let $X$ be a space. Let $A \subset X$ and $B \subset X$. The sets $A$ and $B$ are separated sets if $A \cap \overline{B}=\varnothing=\overline{A} \cap B$. Essentially, two disjoint sets are separated sets if and only if neither of them contains limit points (i.e. accumulation points) of the other set. A space $X$ is said to be completely normal if for every two separated sets $A$ and $B$ in $X$, there exist disjoint open subsets $U$ and $V$ of $X$ such that $A \subset U$ and $B \subset V$. Any two disjoint closed sets are separated sets. Thus any completely normal space is normal. It is well known that for any regular space $X$, $X$ is completely normal if and only if $X$ is hereditarily normal. For more about complete normality, see [3] and [6].
Let $H_1 \subset F$ and $H_2 \subset F$ such that $H_1 \cap \overline{H_2}=\varnothing=\overline{H_1} \cap H_2$. We consider two cases. One is that one of $H_1$ and $H_2$ is a subset of $F-F_P$. The other is that both $H_1 \cap F_P \ne \varnothing$ and $H_2 \cap F_P \ne \varnothing$. | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9911526443943918,
"lm_q1q2_score": 0.8033083158230812,
"lm_q2_score": 0.8104789109591831,
"openwebmath_perplexity": 104.48403738152078,
"openwebmath_score": 0.9783799052238464,
"tags": null,
"url": "https://dantopology.wordpress.com/tag/perfectly-normal-space/"
} |
c, generics, quick-sort
return 1;
}
compare
int cmp(const void *a, const void *b)
{
const int *A = a, *B = b;
// A > B = 1, A < B = -1, (A == B) = 0
return (*A > *B) - (*A < *B);
}
byteSwap
/*
This function swaps the two objects byte by byte, one byte per iteration.
*/
void byteSwap(void *a, void *b, size_t memSize)
{
char tmp;
char *aa = a, *bb = b;
do
{
tmp = *aa;
*aa++ = *bb;
*bb++ = tmp;
}
while(--memSize > 0);
}
Don't guess at compatibility
/*
Throughout this source code, if you see an unsigned int, more than likely
I'm using an unsigned int because I'm comparing to a variable of type size_t
*/
unsigned int i;
Why not just use size_t?
size_t i;
As it stands, this has the worst of both worlds. You are using unsigned int because it usually matches size_t. But it doesn't always match size_t, and it doesn't allow you to set i to negative values. So you may have to pay a conversion penalty when it doesn't match, and you have to accept the limitations of an unsigned type regardless. If you use size_t instead, you still have to accept those limitations, but at least you are guaranteed not to have to do a conversion.
Know your bounds
for(i = 0; i < nitems; i++)
{
j = i;
while(j > 0 && cmp(&carray[j * memSize], &carray[(j - 1) * memSize]) < 0)
Consider
for (i = 1; i < nitems; i++)
{
j = i;
while(j > 0 && cmp(&carray[j * memSize], &carray[(j - 1) * memSize]) < 0) | {
"domain": "codereview.stackexchange",
"id": 21830,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, generics, quick-sort",
"url": null
} |
particle-physics, higgs
Of course the list is incomplete. There are a lot of different "mixtures" between those models, usually with some new funny names.
Here is a nice recent reference that reviews some of the mentioned models, going more deeply into some technical details.
"domain": "physics.stackexchange",
"id": 476,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, higgs",
"url": null
} |
Σ = [0.5 0.4;
0.4 0.6]
Out[139]:
2×2 Array{Float64,2}:
0.5 0.4
0.4 0.6
Note that all eigenvalues of $A$ lie inside the unit disc.
In [140]:
maximum(abs, eigvals(A))
Out[140]:
0.9
Let’s compute the asymptotic variance
In [141]:
our_solution = compute_asymptotic_var(A, Σ)
Out[141]:
2×2 Array{Float64,2}:
0.671228 0.633476
0.633476 0.858874
Now let’s do the same thing using QuantEcon’s solve_discrete_lyapunov() function and check we get the same result.
In [142]:
using QuantEcon
In [143]:
norm(our_solution - solve_discrete_lyapunov(A, Σ * Σ'))
Out[143]:
3.883245447999784e-6
| {
"domain": "quantecon.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9637799472560582,
"lm_q1q2_score": 0.8266996972331669,
"lm_q2_score": 0.8577681031721325,
"openwebmath_perplexity": 4110.324789561307,
"openwebmath_score": 0.6695890426635742,
"tags": null,
"url": "https://julia.quantecon.org/getting_started_julia/fundamental_types.html"
} |
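The compute_asymptotic_var routine checked above is not shown in this excerpt. A minimal sketch of the standard approach is to iterate the Lyapunov map $S \mapsto A S A' + \Sigma \Sigma'$ to its fixed point, which converges when all eigenvalues of $A$ lie inside the unit disc. The sketch below is Python/NumPy rather than Julia, and the matrix A is a made-up stable example, since the excerpt does not show the one used:

```python
import numpy as np

def compute_asymptotic_var(A, Sigma, tol=1e-12, max_iter=10_000):
    """Fixed point of S = A S A' + Sigma Sigma', found by naive iteration.
    Converges when the spectral radius of A is below one."""
    B = Sigma @ Sigma.T
    S = np.zeros_like(B)
    for _ in range(max_iter):
        S_next = A @ S @ A.T + B
        if np.max(np.abs(S_next - S)) < tol:
            return S_next
        S = S_next
    return S

A = np.array([[0.9, 0.0], [0.0, 0.5]])      # hypothetical stable A (not from the excerpt)
Sigma = np.array([[0.5, 0.4], [0.4, 0.6]])  # the Σ shown in the listing above
S = compute_asymptotic_var(A, Sigma)
```

The same fixed point is what solve_discrete_lyapunov(A, Σ * Σ') computes directly, which is why the norm of the difference above is tiny.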
homework-and-exercises, rotational-dynamics
Title: Speed of rotating, falling masses
I'm having trouble understanding this problem that was on an AP Physics 1 sample questions page:
I know that the answer is C, but my question is: Why does the graph have those curves in it? Is it due to gravity changing the rotation speed, or tension, or what? It would be awesome if someone explained really clearly, since I am a physics beginner.
Why does the graph have those curves in it? Is it due to gravity changing the rotation speed, or tension, or what ?
If you closely watch the center of the massless rod, which is the center of mass of the system, you will note that as soon as a ball goes "over the top" in its rotation, it is rotating toward the earth as it is falling, so its measured velocity is rapidly increasing. When that ball passes the "bottom" of the circle that it is rotating through, it is falling towards the earth while rotating away from the earth, so the measured velocity is "flat" during this interval. The ball on the other end of the stick is doing the opposite, so the measured velocities from the two balls are "out of phase" with each other, indicated by a "peak" from one ball and a "trough" from the other at certain instants in time.
Now, for my criticism of the problem. Many problems in AP Physics 1 are written in a way that seems to be intended to show the cleverness of the question writer. These questions are often at a very high conceptual level, and unfortunately, there are high schools where the students taking this course are seeing physics for the first time, so it is very difficult to get those students to think at such a high conceptual level. In addition, many of the questions are ambiguous or have hidden assumptions in them, including the question that you posted. The graph shows that the center of mass of the object is falling with a linearly increasing velocity, which indicates that the velocity measuring device is stationary with respect to the ground, but you don't get this information in the problem statement. Obviously for a 1st year physics student, such information would be potentially helpful when said student is attempting to decipher the wording of the problem such that the underlying physics can be interpreted. | {
"domain": "physics.stackexchange",
"id": 42411,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, rotational-dynamics",
"url": null
} |
general-relativity, gravity, mass, equivalence-principle, inertia
Title: Why did we expect gravitational mass and inertial mass to be different?
I've read many times that the fact that gravitational mass is equal to inertial mass (as far as we can tell) used to be a puzzle. I believe that Einstein explained this by showing that gravity is itself just an inertial force.
When I first encountered this concept, I thought "isn't there just one property called $m$ and it just appears in different equations (e.g. Newton's second law and the law of gravitation)? In a similar way that (say) frequency appears in many different equations."
Obviously I am thinking about this in the wrong way, but does anyone have a good way to explain why so that I can understand it?
"isn't there just one property called m and it just appears in
different equations (e.g. Newton's second law and the law of
gravitation)? In a similar way that (say) frequency appears in many
different equations."
There IS indeed just one property called m which appears in both the equations.
The point is that there is no intuitive reason why this should be the case.
Forget the term mass for a second and just think in terms of the properties of an object. One property of an object determines how strong the object's gravity is.
The other property determines how much acceleration it experiences under a given force.
There is no obvious reason why these two properties should be the same.
But we observe in daily life that these two ARE the same.
That is what Einstein was able to explain, i.e., why these two are the same.
EDIT: A good example to compare and contrast is to think about the forces between 2 electrically charged objects, as pointed out by Arthur's answer to this question. One property of the object (namely the charge) determines the amount of attractive/repulsive force. There is no reason why this property that determines the magnitude of a force would be the same as the property that determines how the object would move under a given force. And indeed these properties are not the same.
But in case of gravity, we observe, that these properties are the same. | {
"domain": "physics.stackexchange",
"id": 78051,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, gravity, mass, equivalence-principle, inertia",
"url": null
} |
Dynamic Programming - Hotel Problem
Dynamic programming is a very general technique for solving optimization problems: it breaks a problem into several subproblems and saves each result so that it never has to be computed again. Two elements make it applicable. Optimal substructure: a solution is optimal only if its solutions to the subproblems are optimal. And the subproblems overlap. In the reinforcement-learning setting, dynamic programming assumes full knowledge of the MDP and is used for planning in an MDP; for prediction the input is an MDP $\langle S, A, P, R, \gamma\rangle$ and a policy $\pi$, or an MRP $\langle S, P^\pi, R^\pi, \gamma\rangle$.
In the hotel problem, the only places you are allowed to stop are at the given hotels, but you can choose which of the hotels you stop at. The recursive formula starts from $d(0) = 0$, where $0$ is the starting position, and a state is extended by candidates such as $dp(i-1,\,j-1) + (a(i) - a(i-1))^2$ and $dp(i-2,\,j-1) + (a(i) - a(i-2))^2$, as long as $i-n \ge j-1$.
The dynamic programming approach consists of three steps for solving a problem. (Find out how good a policy π | {
"domain": "produktninja.de",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9817357184418848,
"lm_q1q2_score": 0.843962615019914,
"lm_q2_score": 0.8596637559030338,
"openwebmath_perplexity": 1069.6046432631638,
"openwebmath_score": 0.24513642489910126,
"tags": null,
"url": "http://zclf.produktninja.de/e51m"
} |
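To make the hotel recurrence concrete, here is a sketch of a common textbook version in Python. The $(200-x)^2$ daily penalty and the helper name are assumptions, since the excerpt's exact cost function is garbled:

```python
def min_penalty(hotels, ideal=200):
    """Hotels sit at increasing mile posts; a day covering x miles costs
    (ideal - x)**2.  dp[i] is the cheapest way to reach hotel i, and
    dp[0] = 0 encodes the starting position at mile 0."""
    a = [0] + list(hotels)
    n = len(a)
    INF = float("inf")
    dp = [0.0] + [INF] * (n - 1)
    for i in range(1, n):
        for k in range(i):          # try every possible previous stop k
            dp[i] = min(dp[i], dp[k] + (ideal - (a[i] - a[k])) ** 2)
    return dp[-1]

best = min_penalty([190, 420, 550, 660])   # → 2600 (stop at 190, 420, 660)
```

The two candidate terms quoted in the text are exactly the k = i-1 and k = i-2 cases of this inner minimum.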
c++, tic-tac-toe
numbers[row][col] = play;
}
And so on...
That should be enough to keep you busy refactoring for a while. The most important recommendations are:
Make a class for the board. That class can do all the busywork to keep things sane and avoid bugs.
Don't use global variables. Have your functions take parameters and return values instead.
Break tasks into functions, and reuse them. Each function should have one job; functions that do multiple things should be broken up. Look especially for opportunities to reuse code. For example, whether it's a human player or AI player, there are a ton of things that need to be done for both, like checking whether a move is valid, actually making a move, and checking for a win. | {
"domain": "codereview.stackexchange",
"id": 30669,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, tic-tac-toe",
"url": null
} |
Probability distributions are mathematical models that assign probability to a random variable. The phrase "Monte Carlo methods" was coined in the beginning of the 20th century, and refers to the famous casino in Monaco, a place where random samples indeed play an important role. We argue that Monte Carlo algorithms are ideally suited to parallel computing, and that "parallel Monte Carlo" should be more widely used. Monte Carlo analysis runs thousands of scenarios and gives you the probability of a certain event occurring or not occurring: instead of using point estimates to say we will have 4 loss events over the next year, each costing us $300,000, we define ranges for these inputs and let the Monte Carlo simulation identify tens of thousands of possible outcomes. We suppose that for any given value of x, the probability density function f(x) can be computed. The number of times the event occurs divided by the number of times the conditions are generated should be approximately equal to P. In future articles we will consider Metropolis-Hastings, the Gibbs sampler, Hamiltonian MCMC and the No-U-Turn sampler.
Both MCMC and crude Monte Carlo techniques work because the long-run proportion of simulations that are equal to a given outcome will be equal* to the modelled probability of that outcome. In realistic models, this probability is very hard to estimate, because exact simple analytical formulas are not available. Diffusion via Monte Carlo (Lab 13, Physics 430): d) What is the average distance of the walkers from the origin? How would you calculate that? e) What do your plots tell you about the average distance of the walkers | {
"domain": "frenca.pw",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806512500485,
"lm_q1q2_score": 0.8373303270456218,
"lm_q2_score": 0.8539127566694178,
"openwebmath_perplexity": 675.1051895226026,
"openwebmath_score": 0.7479438781738281,
"tags": null,
"url": "http://abgj.frenca.pw/monte-carlo-probability-matlab.html"
} |
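The long-run proportion idea above can be illustrated in a few lines (Python here for brevity, although the surrounding text mentions Matlab; the event and sample size are made up):

```python
import random

# Crude Monte Carlo: the long-run proportion of trials satisfying an event
# approximates its probability.  Here the event is "two dice sum to 7",
# whose exact probability is 6/36 = 1/6.
rng = random.Random(42)          # fixed seed for reproducibility
n = 200_000
hits = sum(rng.randint(1, 6) + rng.randint(1, 6) == 7 for _ in range(n))
p_hat = hits / n                 # close to 1/6; standard error ≈ 0.0008
```

The standard error shrinks like $1/\sqrt{n}$, which is why the estimate above lands within about a thousandth of the true value.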
java, statistics
LinearEquation fitFunction;
ArrayList<Float> X, Y;
//--------------------------------------------------------------------------------------------------------------
// Constructors
// --------------------------------------------------------------------------------------------------------------
public XYSample() {
initSample();
}
public XYSample(ArrayList<Pair<Float, Float>> data){
initSample();
addValues(data);
}
public XYSample(ArrayList<Float> xData, ArrayList<Float> yData) {
initSample();
addValues(xData, yData);
}
private void initSample(){
size = 0;
//Initialize List
X = new ArrayList<Float>();
Y = new ArrayList<Float>();
//Initialize comparator values
xMin = Float.MAX_VALUE;
yMin = Float.MAX_VALUE;
xMax = -Float.MAX_VALUE; // Float.MIN_VALUE is the smallest positive float, not the most negative value
yMax = -Float.MAX_VALUE;
} | {
"domain": "codereview.stackexchange",
"id": 28313,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, statistics",
"url": null
} |
The discrete Poisson equation is frequently used in numerical analysis as a stand-in for the continuous Poisson equation, although it is also studied in its own right as a topic in discrete mathematics. It arises, for instance, to describe the potential field caused by a given charge or mass density distribution; with the potential field known, one can then calculate the gravitational or electrostatic field. (We assume here that there is no advection of Φ by the underlying medium.) The computational region is a rectangle, with homogeneous Dirichlet boundary conditions applied along the boundary. Moreover, the equation appears in numerical splitting strategies for more complicated systems of PDEs, in particular the Navier-Stokes equations; a compact and fast Matlab code solving the incompressible Navier-Stokes equations on rectangular domains is mit18086_navierstokes. A partial semi-coarsening multigrid method is developed to solve the 3D Poisson equation, and one can also find the optimal relaxation parameter for the SOR method. Note that the Gaussian solution corresponds to a vorticity distribution that depends only on the radial variable. In mathematics, Poisson's equation is a partial differential equation of elliptic type with broad utility in mechanical engineering and theoretical physics. Poisson's equation is $\Delta\varphi = f$, where $\Delta$ is the Laplace operator, and $f$ and $\varphi$ are real or complex-valued functions on a manifold. The book NUMERICAL RECIPES IN C, 2ND EDITION (by Press, Teukolsky, Vetterling & Flannery) presents a recipe for solving a discretization of the 2D Poisson equation numerically by Fourier transform ("rapid solver"). For the solution of the Laplace and Poisson equation, see Guenther & Lee, §5.
The Poisson Library uses the standard five-point finite difference approximation on this mesh to compute the approximation to the solution. Finally, the values can be reconstructed from Eq. The computational region is a rectangle, with Dirichlet boundary conditions applied along the boundary, and the Poisson equation applied inside. We will consider a number of cases where fixed conditions are imposed upon the boundary. I use a center difference for the second-order derivative. Multigrid: this GPU-based script draws | {
"domain": "marcodoriaxgenova.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9923043544146898,
"lm_q1q2_score": 0.8673506481763438,
"lm_q2_score": 0.8740772368049822,
"openwebmath_perplexity": 682.703101573481,
"openwebmath_score": 0.7900996208190918,
"tags": null,
"url": "http://marcodoriaxgenova.it/jytu/2d-poisson-equation.html"
} |
rviz
def init_marker(self, index=0, z_val=0):
    self.marker_object = Marker()
    self.marker_object.header.frame_id = "/camera_link"
    self.marker_object.header.stamp = rospy.get_rostime()
    self.marker_object.ns = "mira"
    self.marker_object.id = index
    self.marker_object.type = Marker.SPHERE
    self.marker_object.action = Marker.ADD
    my_point = Point()
    my_point.z = z_val
    self.marker_object.pose.position = my_point
    self.marker_object.pose.orientation.x = 0
    self.marker_object.pose.orientation.y = 0
    self.marker_object.pose.orientation.z = 0.0
    self.marker_object.pose.orientation.w = 1.0
    self.marker_object.scale.x = 0.05
    self.marker_object.scale.y = 0.05
    self.marker_object.scale.z = 0.05
    self.marker_object.color.r = 1.0
    self.marker_object.color.g = 0.0
    self.marker_object.color.b = 0.0
    # The alpha channel has to be set, otherwise the marker is transparent
    self.marker_object.color.a = 1.0
    # Duration(0) keeps the marker forever; otherwise it is seconds before disappearing
    self.marker_object.lifetime = rospy.Duration(0)

def update_position(self, position):
    self.marker_object.pose.position = position
    self.marker_object_publisher.publish(self.marker_object)

class BallDetector(object):
    def __init__(self):
        self.rate = rospy.Rate(1)
        self.save_camera_values()
        rospy.Subscriber('/blobs', Blobs, self.redball_detect_callback)
        self.markerbasics_object = MarkerBasics() | {
"domain": "robotics.stackexchange",
"id": 29163,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rviz",
"url": null
} |
# Help me understand events/sample space
## Homework Statement
1.
Suppose that A, B, and C are 3 independent events such that Pr(A)=1/4, Pr(B)=1/3 and Pr(C)=1/2.
a. Determine the probability that none of these events will occur.
Is it just:
(1-P(a))(1-P(b))(1-P(c)) = 3/4 * 2/3 * 1/2 = 1/4
## The Attempt at a Solution
I tried to do 1. another way:
The probability that all theses events will occur: 1/4 * 1/3 * 1/2 = 1/24
1-(1/24) = 23/24
Obviously this is wrong. Is the reason it is wrong that the complement of "all of these events will occur" is "not all of these events will occur," which is not the same as "none of these events will occur"?
None of these events occurring is included in the complement 1-(1/24), but so are the cases where exactly 1 of the events occurs, where exactly 2 occur, etc.
Am I right in my reasoning?
PeroK
The (1) is correct.
For reference, in general the answer can be found by calculating a multinomial distribution.
In (3), the 23/24 probability is the sum of "no events", "A only", "B only", "C only", "A&B", "A&C", and "B&C".
| {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9728307653134903,
"lm_q1q2_score": 0.8307125951566029,
"lm_q2_score": 0.8539127510928476,
"openwebmath_perplexity": 729.3773837735429,
"openwebmath_score": 0.5917144417762756,
"tags": null,
"url": "https://www.physicsforums.com/threads/help-me-understand-events-sample-space.965090/"
} |
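The reasoning above can be checked by enumerating all eight outcomes and multiplying per-event probabilities (valid by independence):

```python
from itertools import product

pA, pB, pC = 1/4, 1/3, 1/2
probs = {}
for a, b, c in product([False, True], repeat=3):
    # Probability of this exact pattern of occurrences, by independence.
    probs[(a, b, c)] = ((pA if a else 1 - pA) *
                       (pB if b else 1 - pB) *
                       (pC if c else 1 - pC))

p_none = probs[(False, False, False)]       # 3/4 * 2/3 * 1/2 = 1/4
p_not_all = 1 - probs[(True, True, True)]   # 23/24: "not all", a different event
```

The enumeration makes the distinction explicit: p_not_all collects seven of the eight outcomes, only one of which is "none occur".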
This does not change even if we change the extras $4'$ and $3'$ to $40'$ and $30'$! A $40'$ wide extra "flank" along the $12'$ side plus a $30'$ flank along the $10'$ side are bigger than vice versa.
Another simple method particularly suited to a deliberately 'symmetric' problem like this: $35{,}043\times 25{,}430=(25{,}043\times25{,}430)+(10{,}000\times25{,}430)$ while $25{,}043\times35{,}430=(25{,}043\times25{,}430)+(10{,}000\times25{,}043)$, and written out this way it's clear that the former has to be larger.
A = 35,043 × 25,430 = 35,043 × 25,043 + 35,043 × 387
B = 35,430 × 25,043 = 35,043 × 25,043 + 25,043 × 387
so A is bigger, since 35,043 × 387 > 25,043 × 387
$\begin{eqnarray}{\bf Hint}\quad 35043 \times 25430 &-\,&\ \, 35430 \times 25043 \\ A\ (B\! +\! N)&-\,& (A\!+\!N)\ B\ =\ (A\!-\!B)\,N > 0\ \ \ {\rm by}\ \ \ A > B,\ N> 0\end{eqnarray}$
from the first equation we get:
(35,000 + 43) * (25,000 + 430) =
= 35,000 * 25,000 + 430*35,000 + 43*25,000 + 43*430 ( let it be a)
from the second second equation we get:
(35,000 + 430)*(25,000 + 43) =
= 35,000 * 25,000 + 43 * 35,000 + 430 * 25,000 + 43 * 430 (let it be b)
suppose that a > b (1) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9773707986486796,
"lm_q1q2_score": 0.820731458636241,
"lm_q2_score": 0.8397339676722393,
"openwebmath_perplexity": 614.730734444011,
"openwebmath_score": 0.7158011794090271,
"tags": null,
"url": "https://math.stackexchange.com/questions/408759/which-one-is-bigger-35-043-%C3%97-25-430-or-35-430-%C3%97-25-043"
} |
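All of these decompositions agree with a direct check: both products share the term $35{,}043 \times 25{,}043$, and the leftover factor $387$ multiplies $35{,}043$ in one case and $25{,}043$ in the other:

```python
a = 35_043 * 25_430
b = 35_430 * 25_043
# The difference is the common leftover 387 times the gap between the
# two big factors: 387 * 10_000 = 3_870_000.
difference = a - b
```
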
error-correction, stim, surface-code, fault-tolerance
Title: Effect of too many syndrome measurement cycles on surface code threshold Assume I have a surface code with distance $d$ and an i.i.d error model with both single qubit depolarization and measurement errors, both with probability $p$.
In this case, one usually repeats the syndrome measurement cycles $d$ times to get the threshold. But what happens if one repeats the syndrome measurement cycles for $m*d$ times, with $2\le m$?
When I simulated this using stim, I found that the threshold decreases as $m$ increases, and the logical error rate increases with $m$. Is this a known phenomenon, and why this happens? Or maybe I have a problem with my simulations? Where can I find a discussion of this topic in the literature?
The logical error rate increases with m
If you run for longer, there's more time for errors to occur. You need to do a conversion of the per-shot error rate to a per-round error rate or per-$d$-round (per-quop) error rate, e.g. using sinter.shot_error_rate_to_piece_error_rate, if you want to compare different numbers of rounds. Also you want enough rounds to amortize away boundary effects from the start and end of the experiment; like $3 \cdot d$ or $4 \cdot d$ rounds.
I found that the threshold decreases as m increases
You need to use larger values of $d$. If your error unit is a number of rounds linear in $d$, you will see the crossing point where different noise strengths have the same logical error rate vary a lot with $d$ when $d$ is small. For example, when using $r=2d$ the $d=5$ experiment is nearly twice as long as the $d=3$ experiment, whereas the $d=31$ experiment is basically the same length as the $d=29$ experiment.
In any case the threshold is the wrong number to estimate. The threshold is where the code doesn't work; you want to know numbers where the code does work. Behavior near threshold is qualitatively different from behavior well below threshold, where quantum computers will actually run. If you focus on near-threshold behavior, you will learn the wrong lessons about what works well and what works poorly. | {
"domain": "quantumcomputing.stackexchange",
"id": 4993,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "error-correction, stim, surface-code, fault-tolerance",
"url": null
} |
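The per-shot to per-piece conversion mentioned above has a simple closed form under the assumption that each piece independently flips the logical observable; this is the model behind sinter's helper. A sketch:

```python
def shot_error_rate_to_piece_error_rate(p_shot, pieces):
    """If each of `pieces` rounds independently flips the logical observable
    with probability p, the whole shot flips with probability
    P = (1 - (1 - 2p)**pieces) / 2 (odd number of flips).  Invert that
    to recover the per-piece rate p from the observed per-shot rate P."""
    return (1 - (1 - 2 * p_shot) ** (1 / pieces)) / 2

# Round-trip sanity check: a 1% per-piece rate over 10 pieces.
p_piece = 0.01
p_shot = (1 - (1 - 2 * p_piece) ** 10) / 2
recovered = shot_error_rate_to_piece_error_rate(p_shot, 10)
```

With this conversion, experiments run for $2d$, $3d$, or $4d$ rounds become directly comparable, which is the point of the advice above.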
homework-and-exercises, quantum-field-theory, lagrangian-formalism, noethers-theorem, stress-energy-momentum-tensor
\end{array}
$$
The integration over $x$ yields two kinds of combinations: $a_p^\dagger a_{p^{\prime}}(2\pi)^3\delta(\vec{p}-\vec{p}^{\prime}),b_p b_{p^{\prime}}^\dagger(2\pi)^3\delta(\vec{p}-\vec{p}^{\prime})$ and $a_p^{\dagger} b_{p^{\prime}}^\dagger(2\pi)^3\delta(\vec{p}+\vec{p}^{\prime}),b_p a_{p^{\prime}}(2\pi)^3\delta(\vec{p}+\vec{p}^{\prime})$. The expectation value of the latter ones in any momentum eigenstate is obviously zero, so they have no contribution to the Hamiltonian. Integrating over $p^{\prime}$ and using the relation $\omega_p^2=|\vec{p}|^2+m^2$ we'll get
$$
H=\int \frac{d^3p}{(2\pi)^32\omega_p}\omega_p(a_p^\dagger a_p+b_p b_p^\dagger)
$$ | {
"domain": "physics.stackexchange",
"id": 26461,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, quantum-field-theory, lagrangian-formalism, noethers-theorem, stress-energy-momentum-tensor",
"url": null
} |
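One more standard step, not shown in the excerpt: using the commutator implied by the relativistic normalization of the measure, $[b_p, b_{p'}^\dagger] = (2\pi)^3\,2\omega_p\,\delta^3(\vec p - \vec p^{\,\prime})$, the $b_p b_p^\dagger$ term is normal-ordered at the cost of an infinite constant, which is dropped:

$$H = \int \frac{d^3p}{(2\pi)^3 2\omega_p}\,\omega_p\left(a_p^\dagger a_p + b_p^\dagger b_p\right) + \text{const.}$$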
rna-seq, gene-expression, read-mapping, featurecounts
Title: Featurecount output .txt file from bam file
After running this command:
featureCounts -p -s 2 -a $genome -o $dir"/"$specie"/count_table/"$value".txt" $output_loc -T 8
on the BAM output of HISAT (output_loc), I got this output:
Now, does the last column of this file show the gene count value?
And if I merge the same column of different samples, how should I go through the normalization step?
Thanks a lot for your help! The first 6 columns in the standard featureCounts output represent what is in the column names. All columns after that (starting at 7) represent the counts for the sample(s). If you use a single bam file as input then it's one column; if you use many bams as input then it is one column per bam. Whether the counts are gene-level, exon-level, transcript-level or anything else depends on how you run the tool. Typically, with default settings, it is the raw gene-level counts.
Generally, you can sum up technical replicates (=lane/sequencing replicates to increase depth) prior to running normalization. | {
"domain": "bioinformatics.stackexchange",
"id": 2647,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rna-seq, gene-expression, read-mapping, featurecounts",
"url": null
} |
c++, beginner, c++11, hash-map, homework
/**
 * @brief Prompts the user to enter a valid integer corresponding to one of the values in the menu.
 * The user is prompted to enter the input again if it's not a number.
*
* @return The processed input as an integer.
*/
int get_input() {
int const MIN = 1;
int const MAX = 10;
int choice = 0;
std::cout << "\n[Enter]: ";
while (true) {
try {
std::cin >> choice;
if (std::cin.fail()) { // std::cin.fail() returns true if the input is not an integer
/// @link https://cplusplus.com/forum/beginner/2957/
std::cin.clear(); // clear error flags
std::cin.ignore(10000, '\n'); // ignore up to 10000 characters or until a newline is encountered
throw std::invalid_argument("[Invalid input]");
}
else if (choice < MIN || choice > MAX) {
throw std::out_of_range("[Input out of range. Please enter an integer between 1 and 10]");
}
else {
return choice;
}
}
catch (const std::exception& error) {
std::cout << error.what() << std::endl;
std::cout << "[Re-enter]: ";
}
}
}
/** @name goodbye()
* @brief The function prompts the user goodbye
* @remark Handles UI
* @return void-type
*/
void goodbye() {
std::cout << "\n\nGoodbye!\n\n";
}
/**
* @brief clears screen
*/
static inline void clear_screen() {
#define SLEEP std::this_thread::sleep_for(std::chrono::milliseconds(500))
SLEEP;
std::system("clear");
SLEEP;
} | {
"domain": "codereview.stackexchange",
"id": 44705,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, c++11, hash-map, homework",
"url": null
} |
ros, gazebo, ros-melodic
Title: How can I run this Gazebo simulation? The launch file is not seen as a launch file
Firstly I'm really really new to ROS. I need to import this world and get it open in Gazebo for a simulation: (https://github.com/aws-robotics/aws-robomaker-hospital-world)
but when it's time to launch I get the error that the launch file is not a launch file. So I wondered if I did the wrong steps before launching. I'm using ROS Development Studio for the terminal and Gazebo. I tried with another project my co-worker sent me and it worked; here are the steps I took:
git clone the project into simulation_ws src
then go back to simulation_ws and enter "catkin_make"
then I enter "source devel/setup.bash"
then I enter roslaunch "file name" "launch file name"
Open Gazebo window where robot shows up
but this doesn't work with the project in the link above, so I must be missing something in the steps I took since I'm such a newbie. On the readme it says to include some code in the launch file but when you check one of them the line is already there so I've no idea what that means.
What do you all think?
*edit
here is my input commands:
cd simulation_ws
cd src
git clone https://github.com/aws-robotics/aws-robomaker-hospital-world.git
cd ..
catkin_make
source devel.setup.bash
roslaunch aws-robomaker-hospital-world hospital.launch
the terminal output is:
RLException: [hospital.launch] is neither a launch file in package [aws-robomaker-hospital-world] nor is [aws-robomaker-hospital-world] a launch file name The traceback for the exception was written to the log file
Originally posted by ROSNewbie on ROS Answers with karma: 5 on 2020-12-16
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35883,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo, ros-melodic",
"url": null
} |
organic-chemistry, biochemistry, materials
Title: Why does a protein crystallize with a specific salt but not with another?

So I am conducting an experiment and I do not understand this super fundamental part.
I know salting out works via solubility, and I know crystallization works via ordered structures. But could someone explain to me why one salt allows a crystal to form, but another salt doesn't. A component of one of my salts (Mg) has been found to be important to my protein in optimization and activity, but why would another cation with the same anion (I know the anion is not the problem) discourage the formation of crystals. If this is a clearly stupid question I apologize.
EDIT: Looking for theory. I have already crystallized everything, and have CRYSTALS. Will be performing X-ray crystallography. This post was more to get me on the right track in thinking.

It would be a lie to postulate we knew much about the formation of protein crystals. Yes, we know the basic theory — that we need a supersaturated solution in which the protein is not denatured and which would then slowly evaporate until hopefully a single crystal starts growing — but we don’t have the slightest of clues of what exactly helps and what doesn’t help.
This is why protein crystallography is still trial and error (and nothing else). You try thousands of different conditions in an automated manner, select those with promising results for further examination and finally — if and only if you are lucky — end up with crystals suitable for X-ray diffraction.
Therefore, we don’t know and nobody knows why magnesium chloride might help the crystallisation of your protein but sodium chloride, calcium chloride, aluminium chloride or magnesium bromide don’t.
We don’t know and nobody knows why a $\pu{1.3M}$ solution of magnesium chloride might help the crystallisation but a $\pu{1.4M}$ or a $\pu{1.5M}$ solution don’t.
We don’t know and nobody knows why $\pu{23^\circ C}$ might be a good temperature for crystallisation but $\pu{21^\circ C}$ and $\pu{25^\circ C}$ aren’t.
And so on and so forth. | {
"domain": "chemistry.stackexchange",
"id": 9125,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, biochemistry, materials",
"url": null
} |
ros, opencv
Title: ROS Answers SE migration: OpenCV Patent
I found that some algorithms included in openCV are patented:
SIFT
SURF
I'm not sure if there is any other algorithm patented.
I find this at sift.cpp:
Note that restrictions imposed by this patent (and possibly others)
exist independently of and may be in conflict with the freedoms granted
in this license, which refers to copyright of the program, not patents
for any methods that it implements. Both copyright and patent law must
be obeyed to legally use and redistribute this program and it is not the
purpose of this license to induce you to infringe any patents or other
property right claims or to contest validity of any such claims. If you
redistribute or use the program, then this license merely protects you
from committing copyright infringement. It does not protect you from
committing patent infringement. So, before you do anything with this
program, make sure that you have permission to do so not merely in terms
of copyright, but also in terms of patent law.
Are these algorithms included in any ROS package?
Can we use them in commercial applications?
Thank you
Originally posted by smerino on ROS Answers with karma: 108 on 2012-05-22
Post score: 1
The OpenCV license allows its use in commercial applications. However, the SIFT and SURF algorithms are patented. If you use one in a commercial application, you may be open to a patent suit. Here's a pertinent thread from the opencv-users list. If you need to use SIFT or SURF you should protect yourself by contacting the patent owner to find out if you need to pay royalties.
EDIT:
I am not a lawyer, so I'm not going to claim to be 100% correct, but I do know that the purpose of a patent is to ensure that the creator of the work receives proper recognition and compensation for his/her work. My interpretation is such that if you make money using a patented algorithm, you are responsible for compensating the patent holder appropriately. The patent holder has the right to decide what counts as "fair" compensation. Some patent holders just want to receive credit in the form of a "thank you", but some want money. Only the patent holder can make that decision, since the work belongs to him/her. | {
"domain": "robotics.stackexchange",
"id": 9494,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, opencv",
"url": null
} |
quantum-mechanics, mathematical-physics, operators, hilbert-space
\begin{equation}\langle\varepsilon^\prime|\varepsilon\rangle = \left\{\begin{array}{lc} 1& \varepsilon = \varepsilon^\prime \\ 0 & \varepsilon\ne\varepsilon^\prime\end{array}\right.\end{equation} Let us say that there is another set of coefficients $a_\varepsilon^\prime$ such that $|P\rangle = \sum a_\varepsilon^\prime|\varepsilon\rangle$. Subtracting these two expressions for $|P\rangle$ we find
\begin{equation} 0 = \sum (a_\varepsilon - a_\varepsilon^\prime)|\varepsilon\rangle\end{equation} Multiplying by $\langle\varepsilon^\prime|$ we find that all the terms go to 0 except
\begin{equation} 0 = (a_{\varepsilon^\prime}-a_{\varepsilon^\prime}^\prime)\end{equation} So it turns out $a_{\varepsilon^\prime} = a_{\varepsilon^\prime}^\prime$ and since this must be true for each $\varepsilon$, the expression for $|P\rangle$ was unique.
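Spelled out, the continuous analogue of the same subtraction step is (a sketch, using the $\delta$-normalisation stated below):
\begin{equation} 0 = \int d\varepsilon\, (a(\varepsilon) - a^\prime(\varepsilon))\,|\varepsilon\rangle \end{equation} and multiplying by $\langle\varepsilon^\prime|$ gives
\begin{equation} 0 = \int d\varepsilon\, (a(\varepsilon) - a^\prime(\varepsilon))\,\delta(\varepsilon^\prime - \varepsilon) = a(\varepsilon^\prime) - a^\prime(\varepsilon^\prime).\end{equation}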
With the integral included the discrete coefficients are replaced by functions in the integral $a(\varepsilon)$ and the normalisation is $\langle\varepsilon^\prime|\varepsilon\rangle = \delta(\varepsilon^\prime - \varepsilon)$ but otherwise the argument is unchanged. | {
"domain": "physics.stackexchange",
"id": 82638,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, mathematical-physics, operators, hilbert-space",
"url": null
} |
logic, binary-arithmetic
So what makes these circuits work? Consider the half-adder. The inputs a and b represent some arbitrary values coming from the outside. The first portion of the half-adder is a XOR gate. Look in the table for what happens when two true values (1's) are applied: the output is 0, so the sum is zero. Now consider the AND gate: what happens when two true values are applied to it? The output is 1. So we derive the sum from the output of the XOR gate (which is 0), and we derive the $C_{out}$ from the AND gate, which is 1 when A and B are both 1. | {
"domain": "cs.stackexchange",
"id": 2495,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "logic, binary-arithmetic",
"url": null
} |
• The PIN is randomly generated, sorry for not clarifying that.
– user600210
Oct 13 '18 at 20:03
For every collection of four distinct digits you choose, precisely one arrangement of those four will be in ascending order.
From a set of $$\{0, 1, \cdots, 9\}$$, choose any subset of 4 numbers. Such subset is in 1-to-1 correspondence with a 4-digit pin with increasing digits.
$$\sum\limits_{a_1=0}^6 \sum\limits_{a_2=a_1+1}^7 \sum\limits_{a_3=a_2+1}^8 \sum\limits_{a_4=a_3+1}^9 1 = 210$$
and the total number of possible pins (allowing digit repetition) is of course $$10^4$$.
Hence: $$P = 0.021$$.
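A brute-force enumeration (Python, for illustration) confirms both the count of 210 and the probability:

```python
from itertools import product

# Enumerate all 10^4 PINs (digits may repeat, leading zeros allowed)
pins = list(product(range(10), repeat=4))

# A PIN qualifies if its digits are strictly increasing
increasing = [p for p in pins if p[0] < p[1] < p[2] < p[3]]

print(len(increasing))              # 210, i.e. C(10, 4)
print(len(increasing) / len(pins))  # 0.021
```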
Note this is the same value as found by @Théophile and others. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9830850904500993,
"lm_q1q2_score": 0.8234517053861784,
"lm_q2_score": 0.8376199714402812,
"openwebmath_perplexity": 136.97647497936993,
"openwebmath_score": 0.9119436740875244,
"tags": null,
"url": "https://math.stackexchange.com/questions/2954213/probability-of-having-a-4-digit-pin-number-with-strictly-increasing-digits"
} |
python, performance, memory-optimization
for x in arr:
if x<0 or x>= len(arr):
return False
if bit_arr[x] == True:
return False
bit_arr[x] = True
return True
METHODS = (
op,
jh_a,
# jh_b, # fails for [0 0 3 3]
jh_c, jh_d, jh_e,
harith_a, harith_b, harith_c,
r_enum, r_functional,
ahall_a, ahall_b, ahall_c,
sudix_b,
)
def test() -> None:
examples = (
# (), # arguably appropriate here, but OP coded the opposite
(0,),
(1, 0, 2),
(2, 1, 0),
(0, 1, 2, 3, 4),
)
counterexamples = (
(1,),
(2, 1),
(1, 2, 3),
(0, 0, 3, 3), # This disqualifies jh_b
(-1, 0, 1, 2),
)
for method in METHODS:
for example in examples:
assert method(list(example)), f'{method.__name__} failed example'
for example in counterexamples:
assert not method(list(example)), f'{method.__name__} failed counterexample'
def benchmark() -> None:
rand = default_rng(seed=0)
def make_shuffle(n: int) -> np.ndarray:
series = np.arange(n)
rand.shuffle(series)
return series
rows = []
kinds = (
('repeated_0', lambda n: np.full(shape=n, fill_value=0)),
('repeated_hi', lambda n: np.full(shape=n, fill_value=n + 1)),
('sorted', lambda n: np.arange(n)),
('shuffled', make_shuffle),
)
for kind, make_input in kinds:
sizes = (10**np.linspace(0, 4, num=6)).astype(int) | {
"domain": "codereview.stackexchange",
"id": 45469,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, memory-optimization",
"url": null
} |
reinforcement-learning, comparison, probability-distribution, kl-divergence, total-variational-distance
It is asymmetric, i.e., in general, $D_{KL}(q, p) \neq D_{KL}(p, q)$ (where $p$ and $q$ are p.d.s); consequently, the KL divergence cannot be a metric (because metrics are symmetric!)
It is always non-negative
It is zero if and only if $p = q$.
It is unbounded, i.e. it can be arbitrarily large; so, in other words, two probability distributions can be infinitely different, which may not be very intuitive: in fact, in the past, I used the KL divergence and, because of this property, it wasn't always clear how I should interpret the KL divergence (but this may also be due to my not extremely solid understanding of this measure).
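A quick numerical sketch (plain Python; discrete distributions, natural logarithm) illustrates the asymmetry and the contrast with the bounded total variation distance:

```python
from math import log

def kl(p, q):
    """D_KL(p || q) for discrete distributions given as lists of
    probabilities. Assumes q is nonzero wherever p is nonzero."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tv(p, q):
    """Total variation distance: half the L1 distance, always in [0, 1]."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p = [0.9, 0.1]
q = [0.5, 0.5]
print(kl(p, q), kl(q, p))  # two different values: KL is asymmetric
print(kl(p, p), tv(p, p))  # both 0.0 when the distributions coincide
print(tv(p, q))            # 0.4, and never larger than 1
```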
and how is it different from what a $D_{TV}$ between the same two policies tells you? | {
"domain": "ai.stackexchange",
"id": 2352,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, comparison, probability-distribution, kl-divergence, total-variational-distance",
"url": null
} |
newtonian-mechanics, forces, vectors
How do the force vectors end up rolling the bike up and over vertical when the rear wheel loses contact with the road? I would have expected the bike to slide such that my left side would hit the ground instead of the right side.
Did dropping my head to peek under my left arm pit to see what happened affect the forces and help flip me to the right? i.e., would my chances of counteracting the flip have improved had I kept looking forward? Here is how this happens. I'll begin with the example of a skidding car, which is analogous and easier to visualize.
When the rear wheels of a car lose traction in a (for example) sharp left turn, the rear end stops tracking the turn and instead wishes to continue on a path tangent to the turn curve at that point where the tires first broke loose. Since the rest of the car is still tracking left in the turn, the car rotates towards the left about its vertical axis and rear end swings out to the right. This swings the front end to the left, making the turn sharper, and as the front end tracks harder into the now-sharper turn, the rear end swings out more violently and the car "spins out".
On a bike, when the rear wheel lets go while the front wheel is still grabbing the road, the rear end begins to swing out like in the car example and the front end gets steered more steeply into a sharper turn. However, because the bike has only two wheels, something different happens:
The front wheel's tire contact point on the road is below the bike's center of mass, which wishes to continue in a straight line down the road. The friction force acting at the contact pad and the momentum vector of the center of mass thereby create a rolling moment which acts to tip the bike over to the right and the bike crashes over the high side.
Excellent high-speed videos of both high-side and low-side bike crashes can be seen on youtube, posted there by Rnickymouse. | {
"domain": "physics.stackexchange",
"id": 63994,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, vectors",
"url": null
} |
string-theory, mathematical-physics, conformal-field-theory, singularities, quantum-anomalies
--
$^1$ Tong's trick (4.37) suggests another route: Let us instead consider the $\partial X \bar{\partial}X$ OPE
$$\begin{align}
\left. \begin{array}{c}
{\cal R} \partial_zX(z,\bar{z})
\partial_{\bar{w}}X(w,\bar{w})\cr\cr
{\cal R} \partial_{\bar{z}}X(z,\bar{z}) \partial_wX(w,\bar{w})\end{array}\right\}
~=~&\frac{\alpha^{\prime}}{2}\frac{\varepsilon}{(|z-w|^2+\varepsilon)^2}+\ldots
\cr
~\stackrel{(4.2d)}{=}&~\frac{\alpha^{\prime}\pi}{2}\delta^2(z\!-\!w,\bar{z}\!-\!\bar{w}) +\ldots. \end{align}$$
It is comforting that the regularization $\varepsilon>0$ correctly predicts that the leading singularity is a 2D Dirac delta distribution.
Then the $T\bar{T}$ OPE becomes
$$ \begin{align} {\cal R}T_{zz}(z,\bar{z})&T_{\bar{w}\bar{w}}(w,\bar{w})\cr
~=&~\frac{c}{2}\frac{\varepsilon^2}{(|z-w|^2+\varepsilon)^4}\cr | {
"domain": "physics.stackexchange",
"id": 53921,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "string-theory, mathematical-physics, conformal-field-theory, singularities, quantum-anomalies",
"url": null
} |
python, beginner, tree
print("")
elements = binarytree.postOrderTraverse()
print("Post order traversal: ", end="")
for val in elements:
print(val, end=" ")
print("")

PEP 8 says
You have minor issues with the code layout:
PEP 8 requires two empty lines before the class definition,
Modules os and math are not used,
In addValue the return statement is unnecessary,
PEP 8 proposes the method names in this_format; rename addValue to add_value.
Improving the code
I suggest you try to make use of a visitor pattern: in your traversal methods, add a new argument taking a function that is called with the current binary tree node. I had this in mind:
from __future__ import print_function
class BinaryTree:
def __init__(self):
self.__left = None
self.__right = None
self.__value = None
def add_value(self, value):
if self.__value is None:
self.__value = value
elif value > self.__value:
if self.__right is None:
self.__right = BinaryTree()
self.__right.add_value(value)
elif value < self.__value:
if self.__left is None:
self.__left = BinaryTree()
self.__left.add_value(value)
def pre_order_traverse(self, visitor_func=None):
"""
Performs a pre-order traversal of the tree.
Returns the list of elements
"""
if self.__value is not None:
# Visit the root node first
if visitor_func:
visitor_func(self.__value)
# Then visit the left node if present
if self.__left is not None:
self.__left.pre_order_traverse(visitor_func)
# Then visit the right node if present
if self.__right is not None:
self.__right.pre_order_traverse(visitor_func) | {
"domain": "codereview.stackexchange",
"id": 23716,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, tree",
"url": null
} |
java, optimization, algorithm
After waiting for two whole days in vain, I've decided to answer my own question. In these two days, I've tried several methods and made certain interesting observations. Here, I shall present them in a logical order.
First, the Euclidean algorithm
Here is the implementation of the "Modern Euclidean Algorithm" (Algorithm A, TAOCP Vol - 2, Pg - 337, Section - 4.5.2) in Java:
/**
* Returns the GCD (Greatest Common Divisor, also known as the GCF -
* Greatest Common Factor, or the HCF - Highest Common Factor) of the two
* (signed, long) integers given.
* <p>
* It calculates the GCD using the Euclidean GCD algorithm.
*
* @param u The first integer (preferably the larger one).
* @param v The second integer (preferably the smaller one).
* @return The GCD (or GCF or HCF) of {@code u} and {@code v}.
*/
public static long euclideanGCD(long u, long v) {
// Corner cases
if (u < 0) u = -u;
if (v < 0) v = -v;
if (u == 0) return v;
if (v == 0) return u;
// Correction
if (u < v) {
long t = u;
u = v;
v = t;
}
// A1
while (v != 0) {
// A2
long r = u % v;
u = v;
v = r;
}
return u;
} | {
"domain": "codereview.stackexchange",
"id": 12252,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, optimization, algorithm",
"url": null
} |
java, algorithm, graph, ai, breadth-first-search
/**
* The amount of places in the boat.
*/
private final int boatCapacity;
/**
* The location of the boat.
*/
private final BoatLocation boatLocation;
/**
* Constructs this state.
*
* @param missionaries amount of missionaries at a bank.
* @param cannibals amount of cannibals at the same bank.
* @param totalMissionaries total amount of missionaries.
* @param totalCannibals total amount of cannibals.
* @param boatCapacity total amount of places in the boat.
* @param boatLocation the location of the boat.
*/
public StateNode(int missionaries,
int cannibals,
int totalMissionaries,
int totalCannibals,
int boatCapacity,
BoatLocation boatLocation) {
Objects.requireNonNull(boatLocation, "Boat location is null.");
checkTotalMissionaries(totalMissionaries);
checkTotalCannibals(totalCannibals);
checkMissionaryCount(missionaries, totalMissionaries);
checkCannibalCount(cannibals, totalCannibals);
checkBoatCapacity(boatCapacity);
this.missionaries = missionaries;
this.cannibals = cannibals;
this.totalMissionaries = totalMissionaries;
this.totalCannibals = totalCannibals;
this.boatCapacity = boatCapacity;
this.boatLocation = boatLocation;
}
/**
* Creates the source state node.
*
* @param totalMissionaries the total amount of missionaries.
* @param totalCannibals the total amount of cannibals.
* @param boatCapacity the total amount of places in the boat.
* @return the initial state node.
*/
public static StateNode getInitialStateNode(int totalMissionaries,
int totalCannibals,
int boatCapacity) {
return new StateNode(totalMissionaries,
totalCannibals,
totalMissionaries,
totalCannibals,
boatCapacity,
BoatLocation.SOURCE_BANK);
} | {
"domain": "codereview.stackexchange",
"id": 15132,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, graph, ai, breadth-first-search",
"url": null
} |
javascript, php, strings, functional-programming, converting
Without knowing more about your intended use case, I'd suggest something much simpler, along the lines of:
function fn_to_string($fn) {
$r = new ReflectionFunction($fn);
$file = $r->getFileName();
if (!is_readable($file)) {
return '';
}
$lines = file($file);
$start = $r->getStartLine() - 1;
$length = $r->getEndLine() - $start;
return implode('', array_slice($lines, $start, $length));
} | {
"domain": "codereview.stackexchange",
"id": 8003,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, php, strings, functional-programming, converting",
"url": null
} |
Thanks for your suggestions. I've formatted my note for it to look more presentable.
- 3 years, 6 months ago
Great! This is much better! I can now clearly see the statement that you want to prove.
The proof currently only deals with the special case of $k = 1$ and $n-1$. It would be helpful to write up the more general version.
Staff - 3 years, 6 months ago
Actually, I'm not through with binomials as of now. And without it I think for a general $k^{th}$ derivative, it would be not such a great looking derivation.
- 3 years, 6 months ago
Can you provide me the proof for general $k^{th}$ derivative. It would be very helpful.
- 3 years, 6 months ago
I got the proof. Making changes. :)
- 3 years, 6 months ago | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9919380072800831,
"lm_q1q2_score": 0.8132833530775382,
"lm_q2_score": 0.8198933271118222,
"openwebmath_perplexity": 440.9581424654395,
"openwebmath_score": 0.8929882645606995,
"tags": null,
"url": "https://brilliant.org/discussions/thread/relation-between-roots-and-derivatives/"
} |
ruby, rspec, hash-map
The Code
class CashRegister
  attr_reader :coins

  def coins=(coins=[25,10,5,1])
    if (
      coins.class != Array ||
      coins.map { |coin| coin.class.ancestors.include?(Integer) }.include?(false)
    )
      raise Exception
    end

    @optimal_change = Hash.new do |hash, key|
      hash[key] =
        if (key < coins.min)
          Change.new(coins)
        elsif (coins.include?(key))
          Change.new(coins).add(key)
        else
          coins.map do |coin|
            hash[key - coin].add(coin)
          end.reject do |change|
            change.value != key
          end.min { |a,b| a.count <=> b.count }
        end
    end

    @coins = coins
  end

  alias :initialize :coins=

  def make_change(amount)
    return(@optimal_change[amount])
  end
end

class Change < Hash
  def initialize(coins)
    coins.map do |coin|
      self.merge!({coin => 0})
    end
  end

  def add(coin)
    self.merge({coin => self[coin] + 1})
  end

  def value
    self.map do |key, value|
      key.to_i * value
    end.reduce(:+)
  end

  def count
    self.values.reduce(:+)
  end
end

Duck typing - the ruby language is duck-typed, and should be written that way. Checks on an object's class are frowned upon. If something wants to be an integer, and is prepared to go the distance - don't discourage it! I understand that the requirement says 'should throw an error if it initialized with an argument that is not an array of integers', so I guess for the sake of argument asking coins.is_a?(Array) is OK, but for the numbers themselves, a more rubyish way of asking would be:
raise Exception unless coins.is_a?(Array) && coins.all?(&:integer?) | {
"domain": "codereview.stackexchange",
"id": 6308,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ruby, rspec, hash-map",
"url": null
} |
java, beginner, array
public static void main(String args[]) {
try {
BufferedReader input = new BufferedReader(new InputStreamReader(
System.in));
int totalFriends = Integer.parseInt(input.readLine());
int rounds = Integer.parseInt(input.readLine());
int[] roundNumber = new int[rounds];
ArrayList<Integer> friendList = new ArrayList<Integer>(totalFriends);
// create my list of friends
for (int i = 0; i < totalFriends; i++) {
friendList.add(i + 1);
}
// create an array for the number of rounds to cut, and how to cut
// each time
for (int i = 0; i < rounds; i++) {
roundNumber[i] = Integer.parseInt(input.readLine());
}
System.out.println("Rounds: " + Arrays.toString(roundNumber));
// for each round, do this
for (int h = 0; h < rounds; h++) {
System.out.println("Removing every " + roundNumber[h]
+ " number");
System.out.println("Starting with: " + friendList);
int indexToRemove = 0;
//the for loop here tells it to exit when the indexToRemove is larger
//than the friendlist. I have to say +roundNumber[h] - 1, because the indexToRemove is
//from the last iteration, not the one I want it to check.
for (int i = 0; (indexToRemove + (roundNumber[h] - 1)) < friendList
.size(); i++) {
indexToRemove = ((roundNumber[h] - 1) + (i * (roundNumber[h] - 1)));
System.out.println("Removing index: " + indexToRemove);
friendList.remove(indexToRemove);
System.out.println(friendList);
}
} | {
"domain": "codereview.stackexchange",
"id": 12044,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, array",
"url": null
} |
• Note that $\phi (x) = \phi (\frac{x_1}{r}, \frac{x_2}{r}, \frac{x_3}{r})$, so $\phi(tx) = \phi (\frac{tx_1}{tr}, \frac{tx_2}{tr}, \frac{tx_3}{tr}) = \phi (\frac{x_1}{r}, \frac{x_2}{r}, \frac{x_3}{r}) = \phi(x)$. Thus the value of $\phi$ is the same along each ray $\{tx : t>0\}$. @LebronJames – user99914 Oct 8 '15 at 11:11
• No! @LebronJames it is not known if $\nabla \phi = 0$, I only know that along the $r$ direction, $\nabla_r \phi = 0$, but it is sufficient to show that $\nabla \phi \cdot F = 0$ as $F$ is parallel to the $r$-direction. – user99914 Oct 9 '15 at 0:33
• $\nabla \phi$ is a vector, and it has no $F$ component as $\phi$ is constant along that direction. Basically we are using the formula $\nabla_V \phi = \nabla \phi \cdot V$ for all fixed vector $V$ ($\nabla_V \phi$ is the directional derivative of $\phi$ along $V$. @LebronJames – user99914 Oct 9 '15 at 0:47
• Ahhhh...that's so pretty. Yes, of course. That is an awesome argument. Thanks a ton for your time and patience @JohnMa. I learned a lot from you on this problem. Have a great night :-) – User001 Oct 9 '15 at 0:51 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211575679041,
"lm_q1q2_score": 0.8398456711409327,
"lm_q2_score": 0.8615382058759129,
"openwebmath_perplexity": 179.66033735338357,
"openwebmath_score": 0.9602752327919006,
"tags": null,
"url": "https://math.stackexchange.com/questions/1468270/how-to-interpret-the-integrand-in-this-surface-integral"
} |
MAT244-2013S > Easter and Semester End Challenge
Easter challenge
(1/2) > >>
Victor Ivrii:
Draw phase portraits :
\begin{gather}
\left\{\begin{aligned}
&x'=-\sin(y),\\
&y'= \sin (x);
\end{aligned}\right. \tag{a}\\
\left\{\begin{aligned}
&x'=-\sin(y),\\
&y'= \,2\sin (x);
\end{aligned}\right. \tag{b}
\end{gather}
Explain the difference between the portraits and the reason for it
Hareem Naveed:
Attached are the two phase portraits.
In terms of difference between the two; they are level curves of the following functions:
$$H_{a}(x,y) = \cos(y)+\cos(x) \\ H_{b}(x,y) = \cos(y) + 2\cos(x)$$
Level curves are also attached.
From the level curves, in a, the centres are hemmed in by 2 defined separatrices, not so in b where there is only one.
How could I formalize these statements? Intuitively, I can "see" the answer.
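One numerical way to back up the claim that orbits are level curves: integrate both systems and check that the corresponding $H$ stays constant along the computed trajectory (a plain-Python RK4 sketch; the initial point and step size are illustrative):

```python
from math import sin, cos

def rk4_step(f, state, h):
    """One classical Runge-Kutta step for the autonomous system s' = f(s)."""
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + h * (a + 2 * b + 2 * c + d) / 6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# System (a): x' = -sin y, y' = sin x, with H_a = cos y + cos x
f_a = lambda s: [-sin(s[1]), sin(s[0])]
H_a = lambda s: cos(s[1]) + cos(s[0])

# System (b): x' = -sin y, y' = 2 sin x, with H_b = cos y + 2 cos x
f_b = lambda s: [-sin(s[1]), 2.0 * sin(s[0])]
H_b = lambda s: cos(s[1]) + 2.0 * cos(s[0])

for f, H in ((f_a, H_a), (f_b, H_b)):
    state = [1.0, 0.5]
    h_start = H(state)
    for _ in range(2000):
        state = rk4_step(f, state, 0.01)
    # H stays (numerically) constant: the orbit lies on a level curve of H
    print(abs(H(state) - h_start))
```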
Victor Ivrii:
--- Quote from: Hareem Naveed on March 29, 2013, 11:38:46 AM ---Attached are the two phase portraits.
In terms of difference between the two; they are level curves of the following functions:
$$H_{a}(x,y) = \cos(y)+\cos(x) \\ H_{b}(x,y) = \cos(y) + 2\cos(x)$$
Level curves are also attached.
From the level curves, in a, the centres are hemmed in by 2 defined separatrices, not so in b where there is only one.
How could I formalize these statements? Intuitively, I can "see" the answer.
--- End quote --- | {
"domain": "toronto.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9808759638081522,
"lm_q1q2_score": 0.8375825000294991,
"lm_q2_score": 0.8539127585282744,
"openwebmath_perplexity": 709.0796182106601,
"openwebmath_score": 0.834181547164917,
"tags": null,
"url": "https://forum.math.toronto.edu/index.php?PHPSESSID=k441o32850lkospornhudrbhs7&topic=281.0;wap2"
} |
c#, bitwise, serialization, stream
It works by determining how many bits are actually required to store a number based on the min and max (e.g. 63 needs 6 bits, 64 needs 7 bits).
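The bit-width rule described above can be sketched as follows (Python for illustration; `bits_required` is a hypothetical helper, not part of the C# class shown):

```python
def bits_required(minimum, maximum):
    """Number of bits needed to store any integer in [minimum, maximum],
    encoded as an offset from the minimum."""
    span = maximum - minimum          # largest offset to represent
    return max(1, span.bit_length())  # e.g. span 63 -> 6 bits, 64 -> 7

print(bits_required(0, 63))        # 6
print(bits_required(0, 64))        # 7
print(bits_required(-1000, 1000))  # 11 bits for 2001 distinct values
```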
Example use (as requested):
BitStream bs = new BitStream();
int min1 = -1000, max1 = 1000, num1 = 287;
float min2 = 0f, max2 = 50f, num2 = 16.78634f;
double min3 = double.MinValue, max3 = double.MaxValue, num3 = 9845216.1916526;
byte fltPrec = 2;
byte dblPrec = 0; | {
"domain": "codereview.stackexchange",
"id": 20857,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, bitwise, serialization, stream",
"url": null
} |
programming, q#
Where only the line indicating the usage of the Microsoft.Quantum.Arithmetic library and the code block using the AddI($\cdot$) operation (or function?) have been added to the tutorial code. This results in an error:
error QS5022: No identifier with that name exists.
While I am sure that this is not a Q# issue, I do wonder if anybody has stumbled upon a similar problem, or knows how to fix it?
Solution: Using any of the example code from GitHub, all functionalities that I was able to add work without any problems. Using such a sample project I can just delete all the code and rewrite it, which works well.
Question: While copy-paste-delete using sample projects is a viable approach to set up new projects, I do wonder what I am missing in the "normal" project setup? (I tried the fix in Visual Studio, but I'm pretty sure that will work with command line as well)
You need to add a reference to the Microsoft.Quantum.Numerics NuGet package in your project, as described here. That is the package in which AddI is defined, and by default a newly created Q# project doesn't include this reference.
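For reference, a project file with the reference in place might look roughly like this (a sketch only; the SDK and package version numbers below are placeholders, not taken from the question — only the `Microsoft.Quantum.Numerics` package name comes from the answer):

```xml
<Project Sdk="Microsoft.Quantum.Sdk/0.0.0-placeholder">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- This is the line a default Q# project is missing. -->
    <PackageReference Include="Microsoft.Quantum.Numerics" Version="0.0.0-placeholder" />
  </ItemGroup>
</Project>
```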
If you copy an existing project from Numerics section of the samples, it will already include the reference to this package (see, for example, this .csproj file). | {
"domain": "quantumcomputing.stackexchange",
"id": 915,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming, q#",
"url": null
} |
parseval
Title: Parseval for a continuous but time-limited signal I have a question about the Parseval relation written here
https://en.wikipedia.org/wiki/Parseval%27s_theorem (In the chapter Notation used in physics).
If I have a signal that is continuous but time-limited (so it does not go from $-\infty$ to $\infty$ but from $0$ to $T$), can the Parseval theorem be applied? Sure. You can just integrate in the time domain from $0$ to $T$, since the area outside $[0, T]$ is 0.
Please note that you still must integrate from $-\infty$ to $+\infty$ in the frequency domain since finite support in the time domain implies infinite support in the frequency domain (and vice versa).
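The discrete analogue is easy to check numerically (a sketch, pure Python; with the DFT convention below, Parseval reads $\sum_n |x_n|^2 = \frac{1}{N}\sum_k |X_k|^2$):

```python
import cmath

def dft(x):
    # Plain O(N^2) discrete Fourier transform, X_k = sum_n x_n e^{-2*pi*i*k*n/N}.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0.0, 1.0, 2.5, -0.5, 1.0, 0.0, 0.0, 0.0]   # signal supported on a finite window
X = dft(x)
energy_time = sum(abs(v)**2 for v in x)
energy_freq = sum(abs(v)**2 for v in X) / len(x)
```

The two energies agree to machine precision, mirroring the continuous statement.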
Things are different if the function is periodic. | {
"domain": "dsp.stackexchange",
"id": 7676,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "parseval",
"url": null
} |
newtonian-mechanics, forces, computational-physics, discrete
&v_{n+1} \, = \, v_0 \, \frac{\tilde{v}_n}{|\tilde{v}_n|}
\end{align}
As you can see here, just like in the smooth case, the velocities $v_{n+1}$ and $v_n$ are coplanar with the normal gradient vector $\nabla f(x_{n+1})$. And again, you can see that the velocity evolves only along the normal gradient vector $\nabla f(x_{n+1})$, which is the discrete analogue of the normal force redirecting the velocity in the smooth case.
By executing steps 1, 2 and 3 we obtain the new position and velocity of the geodesic flow
$$\big(\,x_{n+1},\, v_{n+1}\,\big)$$
By construction, the new pair also satisfies the geodesic restrictions
\begin{align}
&f(x_{n+1}) \, = \, c\\
&\nabla f(x_{n+1})^T\,v_{n+1} =\, 0\\
&|v_{n+1}| \, = \, v_0
\end{align}
By iterating steps 1, 2, 3 you get a discrete analogue of the geodesic flow on $M_c$. And I think the result will have a fairly good behaviour and will emulate many of the properties of the smooth geodesic flow.
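The three steps can be sketched for the simplest possible case, the unit sphere $f(x) = |x|^2 = 1$ (a hedged pure-Python sketch, not the author's ellipsoid code; for the sphere the Newton projection of step 2 collapses to a plain normalization):

```python
import math

def normalize(p):
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def geodesic_step(x, v, h, speed):
    # Step 1: tentative Euler move, which leaves the surface.
    x_tent = tuple(xi + h * vi for xi, vi in zip(x, v))
    # Step 2: orthogonal projection back onto f(x) = 1
    # (for the sphere this is just renormalization).
    x_new = normalize(x_tent)
    # Step 3: remove the normal component of v and rescale to speed v0.
    n_hat = x_new                     # unit normal at x_new on the sphere
    v_tan = tuple(vi - dot(v, n_hat) * ni for vi, ni in zip(v, n_hat))
    v_new = tuple(speed * c for c in normalize(v_tan))
    return x_new, v_new

x, v, v0 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1.0
for _ in range(1000):
    x, v = geodesic_step(x, v, 0.01, v0)
```

After any number of iterations the pair still satisfies the three geodesic restrictions: $f(x)=1$, $\nabla f(x)^T v = 0$, $|v| = v_0$.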
Edit. As a test, I implemented this method for the case of the geodesic flow on a 3D ellipsoid. I chose an ellipsoid whose axes are aligned with the coordinate axes. I implemented the method using a fixed Jacobian for Newton's method when generating the orthogonal projection of the intermediate point onto the surface of the ellipsoid. It works quite well, so for nice surfaces there is probably no need to calculate a Hessian, which is good news.
import numpy as np
import matplotlib.pyplot as plt | {
"domain": "physics.stackexchange",
"id": 80543,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, computational-physics, discrete",
"url": null
} |
is $\left(\frac{1}{2}\right)^5$, since the tosses are independent. When we ran this program with $n = 1000$, we obtained 494 heads. The probability of the first candidate getting selected is 0. The expectation of the number of heads in 1 5 tosses of a coin is 2 x. To do this, type display Binomial(10,5,. Because there are two ways to get all five of one kind (all heads or all tails), multiply that by 2 to get 1/16. Let $Q_n$ denote the probability that no run of 3 consecutive heads appears in $n$ tosses of a fair coin. What is the probability of getting 2 heads in 10 tosses? Note that in 20 tosses, we obtained 5 heads and 15 tails. What is the probability of getting 5 heads in 10 tosses? 252/1,024. The probability of getting a head in a single toss. 0.81 is the probability of getting 2 Heads in 5 tosses. A weather forecaster states that the probability that it will snow tomorrow is $\frac{3}{7}$. Find the probability of: a) getting a head and an even number, b) getting a head or tail and an odd number. 2 consecutive heads can happen in the following ways: HH(HorT)(HorT)(HorT), (HorT)HH(HorT)(HorT), (HorT)(HorT)HH(HorT), (HorT)(HorT)(HorT)HH. And so in this instance the probability of success of getting a heads is going to be 1/2, or 0.5, for heads (and likewise for tails). (d) n = 100, p = 0_. When tossing a coin, there are 2 distinct possibilities: heads or tails. That is, it's the probability of not getting a specific sequence of heads and tails (in this case, TTTTT) in 5 coin tosses. The probability to get 5 heads in 5 tosses represents, actually, the probability of 5 heads in a row (3.125%). P(A) = 4/8 = 0.5. Let A be the probability of getting exactly 2 Heads in 3 coin tosses. We express probability as a number between 0 and 1. The probabilities are: exactly 2 heads: | {
"domain": "fleckundzimmermann.de",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918495796061,
"lm_q1q2_score": 0.8149738923801968,
"lm_q2_score": 0.8244619242200081,
"openwebmath_perplexity": 198.6753487030207,
"openwebmath_score": 0.8667263984680176,
"tags": null,
"url": "http://fleckundzimmermann.de/probability-of-getting-2-heads-in-5-tosses.html"
} |
-
Thanks, that explained alot! Also can you take a look at the second one too? Thanks! – Aayush Agrawal Sep 1 '12 at 11:10
@Gigili: I appended your edit.tnx – Zeta.Investigator Sep 1 '12 at 11:42
Rearranging the equation, $3x^2-8x-(2k+1)=0$
If $a,7a$ are the solutions, $a+7a=\frac{8}{3}\implies a=\frac{1}{3}$
So, $a\cdot 7a=-\frac{2k+1}{3}\implies \frac{7}{9}=-\frac{2k+1}{3}\implies k=-\frac{5}{3}$
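A quick numerical check (not part of the original answer): with $k=-\frac{5}{3}$, the quadratic $3x^2-8x-(2k+1)=0$ should have roots $a$ and $7a$ with $a=\frac{1}{3}$:

```python
import math

k = -5/3
a_, b_, c_ = 3.0, -8.0, -(2*k + 1)          # coefficients of 3x^2 - 8x - (2k+1)
disc = math.sqrt(b_*b_ - 4*a_*c_)
roots = sorted([(-b_ - disc) / (2*a_), (-b_ + disc) / (2*a_)])
# roots should come out as 1/3 and 7/3, i.e. a and 7a
```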
-
Hint: Study this link. Vieta's formulas may prove useful later in your mathematical life.
-
Thanks, i will take a look at that. Indian education system is hugely strict though, if you use a concept that isnt in your book you get a zero D: – Aayush Agrawal Sep 1 '12 at 11:15
Ok I see, for a second degree polynomial over the real numbers (say) the formulas are not that hard to derive. Assume $r_1$ and $r_2$ are the roots of a monic polynomial $p(x)$ over $\mathbb{R}$ then $p(x)=(x-r_1)(x-r_2)=x^2-(r_1+r_2)x+r_1r_2$. (Why?) Now it is just a matter of comparing coefficients. – user22705 Sep 1 '12 at 11:30 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9802808718926533,
"lm_q1q2_score": 0.8292611957268338,
"lm_q2_score": 0.8459424431344437,
"openwebmath_perplexity": 632.031721814634,
"openwebmath_score": 0.9353398084640503,
"tags": null,
"url": "http://math.stackexchange.com/questions/189605/small-math-help-with-polynomials?answertab=votes"
} |
operators, group-theory, commutator, lie-algebra
Title: Different definitions of commutator in operator theory/quantum mechanics vs. in group theory In group theory, the commutator of two elements $g$ and $h$ in a group is defined as $$[g,h]=ghg^{-1}h^{-1}$$
However, in quantum mechanics, we always see commutator relation between two operators $A$ and $B$, which should belong to some group, to be defined as
$$[A,B]=AB-BA.$$
How to reconcile these two seemingly different definitions? I understand that operators in QM are in the Hilbert space, which has an additional vector space structure besides the group structure. However, since all vector spaces are groups, shouldn’t the vector space commutator also satisfy the group commutator definition? Operators form a ring, not a group. This is because the operator sum, $\hat{N} + \hat{O}$, makes sense, just as much as the operator composition or product, $\hat{N} \hat{O}$, and it satisfies all the usual properties.
The commutator definition used in quantum theory is the ring commutator.
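A concrete finite-dimensional illustration (an assumed toy example with $2\times 2$ matrices, not from the original answer): the ring commutator $AB-BA$ needs only addition and multiplication, so it is defined even for a non-invertible projection, whereas the group commutator $ghg^{-1}h^{-1}$ would require an inverse that does not exist.

```python
def matmul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def ring_commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

P = [[1.0, 0.0], [0.0, 0.0]]    # projection operator: det P = 0, not invertible
B = [[0.0, 1.0], [1.0, 0.0]]
C = ring_commutator(P, B)        # perfectly well defined despite det P = 0
det_P = P[0][0] * P[1][1] - P[0][1] * P[1][0]
```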
The group-theoretic definition only works if the objects in question are invertible, since you need to take their $-1$th compositional power. Group elements must be invertible by definition. But there is no stipulation that all quantum operators must be invertible - e.g. projection operators are, essentially by definition, not so. | {
"domain": "physics.stackexchange",
"id": 94392,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "operators, group-theory, commutator, lie-algebra",
"url": null
} |
differential-geometry, coordinate-systems, tensor-calculus, differentiation, vector-fields
Immediately thereafter, it is shown that derivative operators exist by considering the "ordinary derivative operator" $\partial_a$ in some coordinate system. However, I am not sure why ordinary derivatives will satisfy condition 4. That is, I am not sure why ordinary derivative operators will agree in their action on scalar fields.
For example, suppose our manifold is the plane $\mathbb{R}^2,$ with one coordinate system being the usual cartesian coordinate system, and the other coordinate system being the usual polar coordinate system.
Consider the function $f(x,y) = xy$ and the tangent vector $\hat{y}$. We write the ordinary derivative operator as $\frac{\partial}{\partial x} \hat{x} + \frac{\partial}{\partial y} \hat{y}.$ Then $t(f) = \hat{y} \cdot (y \hat{x} + x\hat{y}) = x.$
Now consider this in the polar coordinate system. $\tilde{f} (r, \theta) = r^2 \cos \theta \sin \theta.$ Consider the same tangent vector $\hat{y} = \sin\theta \hat{r} + \cos \theta \hat{ \theta}.$ Now consider the ordinary derivative operator $\frac{\partial}{\partial r} \hat{r} + \frac{\partial}{\partial \theta} \hat{\theta}.$ Note that this is different than what you would get if you wanted to consider what the gradient is in polar coordinates.
Then
$$t(\tilde{f}) = (\sin\theta \hat{r} + \cos \theta \hat{ \theta}) \cdot (2r \cos \theta \sin \theta \hat{r} + r^2(\cos^2 \theta - \sin^2 \theta) \hat{\theta}).$$
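Evaluating both expressions at a sample point makes the discrepancy explicit (an assumed numerical illustration; the point $(r,\theta)=(2, 0.3)$ is arbitrary):

```python
import math

def t_cartesian(x, y):
    # y-hat applied to f(x, y) = x*y via the Cartesian ordinary derivative:
    # y-hat . (y x-hat + x y-hat) = x.
    return x

def t_polar(r, th):
    # Same tangent vector t = sin(th) r-hat + cos(th) th-hat, but using the
    # polar ordinary derivative of f~(r, th) = r^2 cos(th) sin(th).
    return (math.sin(th) * 2 * r * math.cos(th) * math.sin(th)
            + math.cos(th) * r * r * (math.cos(th)**2 - math.sin(th)**2))

r, th = 2.0, 0.3
x, y = r * math.cos(th), r * math.sin(th)
# t_cartesian(x, y) and t_polar(r, th) give different numbers at this point.
```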
But note that $t(f) \neq t(\tilde{f}).$ | {
"domain": "physics.stackexchange",
"id": 76266,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "differential-geometry, coordinate-systems, tensor-calculus, differentiation, vector-fields",
"url": null
} |
c, linked-list, stack
Title: Implementing a stack using a linked-list In my intro CS class we're reviewing data structures. I'm currently working on implementing a stack using a linked list (LIFO) in C. I'd appreciate a review of the implementation as well as of my understanding of how a stack should work.
// This program is implementation of stack data structure
// via linked list (LAST IN FIRST OUT STRUCTURE) LIFO
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
typedef struct node
{
char string[20];
struct node *next;
} node;
node *push(char *element, node *head);
node *pop(node *head);
void destroy_stack(node *p);
int main(void)
{
// create new stack
node *stack = NULL;
// push 6 "functions" to the stack
char *function[6] = {"first funct", "second funct", "third funct",
"fourth funct", "fifth funct", "sixth funct"};
for (int i = 0; i < 6; i++)
{
printf("function is : %s\n",function[i]);
stack = push(function[i], stack);
if (!stack)
{
fprintf(stderr,"Not enough memory space for new list");
return 1;
}
}
// display the stack
for (node *temp = stack; temp != NULL; temp = temp -> next)
{
printf("Elements of the stack are: %s\n", temp -> string);
}
// pop the elements from the stack
while (stack != NULL)
{
printf("Popped element is: %s\n", stack -> string);
stack = pop(stack);
}
destroy_stack(stack);
return 0;
}
node *push(char *element, node *head)
{
// create space for new element on stack
node *temp = malloc(sizeof(node));
if (!temp)
{
return NULL;
} | {
"domain": "codereview.stackexchange",
"id": 32196,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, linked-list, stack",
"url": null
} |
ecology, marine-biology, ichthyology, life-history
(American eel, Anguilla rostrata, picture from Wikipedia in public domain)
As a fun side note, captive eels can become very old: earlier this year a European eel more than 155 years old that was kept in a well died (see news story) - the oldest known eel as far as I know. Eels were often put in wells because they were believed to keep the well water clean.
Semelparity is also found in other fish species, for instance in the Smelt (Osmeridae) family, where many species are fished commercially. Some (most?) species in the family have multiyear life histories, but some, e.g. the Delta smelt (Hypomesus transpacificus), only live for a single year. Similarly to Salmon, Smelt also migrate from sea to freshwater to spawn, i.e. an anadromous life history.
Overall, less than 1% of teleost fish are semelparous according to Finch (1994) - see link for more examples of semelparous fish species and background. However, this figure is most likely uncertain, given that we know very little about the life history of many marine species. In some cases, you can also find a range of life-history strategies from semelparous to partially or totally iteroparous within the same species, depending on ecological context. Nevertheless, the low overall proportion of semelparous fish means that semelparity is generally a rare life history strategy in human exploited fish species.
(Delta smelt, Hypomesus transpacificus, picture from Wikipedia in public domain) | {
"domain": "biology.stackexchange",
"id": 3429,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ecology, marine-biology, ichthyology, life-history",
"url": null
} |
Here’s an interesting option from Encyclopedia Britannica:
Tangent, in geometry, straight line (or smooth curve) that touches a given curve at one point; at that point the slope of the curve is equal to that of the tangent. A tangent line may be considered the limiting position of a secant line as the two points at which it crosses the curve approach one another.
This definition is from a geometrical perspective, but the last sentence refers to limits and thus enters the domain of calculus. Note that in the case of a tangent line at an endpoint, the two points that approach one another are necessarily restricted to one side of the intersection, thus this remark takes us again to the concept of a one-sided limit.
Here are some secant lines approaching the tangent:
You will notice that I have written more broadly than you, about tangent lines in general and not just horizontal tangent lines. I’m contending that a curve that is continuous on [a, b] can reasonably and consistently be said to have tangent lines at a and b; and either of these tangent lines could be horizontal.
If tangents at endpoints of the domain make sense, then there can specifically be a horizontal tangent there, as in this case. (And nothing has been said here of the existence of a vertical tangent line at the other end, where the derivative is more decisively nonexistent!)
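As a small numeric illustration (an assumed example function, not from the original discussion): a one-sided derivative at an endpoint is just the limit of secant slopes taken from one side, exactly as the Britannica remark suggests.

```python
def right_derivative(f, a, h=1e-6):
    # Slope of a secant restricted to the right of a; as h -> 0 this
    # approaches the one-sided derivative f'(a+).
    return (f(a + h) - f(a)) / h

f = lambda x: x * (2 - x)              # think of f as defined on [0, 2]
slope_at_left_endpoint = right_derivative(f, 0.0)   # should approach 2
```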
Amia replied with a suggested concept:
Thanks a lot Dr Rick,
I want to ask if there is a definition for half of a tangent? So we can assume that the tangent on the end points is half tangent line.
I suppose you’re thinking that a half tangent line to a curve would be defined as having slope equal to a one-sided derivative of the curve, leaving the term “tangent line” to be defined only for full derivatives.
Let me observe that the concept we’re considering will apply to more than the case of an endpoint of the domain of the function. I’m thinking of points where the slope of the curve has a discontinuity, such as x = 2 for the function f(x) = |x^2 – 4| + 1. You might then wish to say that there are two “half tangent lines”, one “left-tangent” and the other “right-tangent” to the curve. | {
"domain": "themathdoctors.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9867771751368138,
"lm_q1q2_score": 0.8113174762432728,
"lm_q2_score": 0.8221891392358015,
"openwebmath_perplexity": 321.4405209782946,
"openwebmath_score": 0.8534117341041565,
"tags": null,
"url": "https://www.themathdoctors.org/limits-and-derivatives-on-the-edge/"
} |
algorithms, hash, searching
So now, if an item is in both arrays, it is inserted into the hash.
The issue, however, is that the complexity is still linear, so I'm wondering how I can do better.
I'd like the question to be as general as possible but for my specific use case it is likely that the first char of startAt and endAt are the same. So { startAt: 'ab', endAt: 'u23' } is not likely. Use an interval tree. It is designed to support exactly this. In particular, the query "find all ranges that contain this value" is known as a stabbing query, and it can be answered in essentially $O(\log n)$ time. | {
"domain": "cs.stackexchange",
"id": 9756,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, hash, searching",
"url": null
} |
The nauty tool includes the program geng, which can generate all non-isomorphic graphs with various constraints (including on the number of vertices, edges, connectivity, biconnectivity, triangle-freeness and others). Can we find an algorithm whose running time is better than the above algorithms? (b) Draw all non-isomorphic simple graphs with four vertices. Can we do better? $a(5) = 34$; A000273 - OEIS gives the corresponding number of directed graphs; $a(5) = 9608$. Graph Isomorphism in Quasi-Polynomial Time, Laszlo Babai, University of Chicago, Preprint on arXiv, Dec. 9th 2015. An isomorphic mapping of a non-oriented graph to another one is a one-to-one mapping of the vertices and the edges of one graph onto the vertices and the edges, respectively, of the other, the incidence relation being preserved. (b) Draw 5 connected non-isomorphic graphs on 5 vertices which are not trees. In general, if two graphs are isomorphic, they share all "graph theoretic" properties, that is, properties that depend only on the graph. In particular, it's OK if the output sequence includes two isomorphic graphs, if this helps make it easier to find such an algorithm or enables more efficient algorithms, as long as it covers all possible graphs. Definition 6. There is a closed-form numerical solution you can use. Prove that they are not isomorphic. For example, both graphs are connected, have four vertices and three edges. Yes.
There is a paper from the early nineties dealing with exactly this question: Efficient algorithms for listing unlabeled graphs by Leslie Goldberg. Graph theory: (a) Find the chromatic number | {
"domain": "helicehelas.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9693242009478238,
"lm_q1q2_score": 0.8160019747828126,
"lm_q2_score": 0.8418256492357358,
"openwebmath_perplexity": 710.2381753349516,
"openwebmath_score": 0.6111965775489807,
"tags": null,
"url": "http://www.helicehelas.com/rqn4h5e/non-isomorphic-graphs-with-5-vertices-86d375"
} |
java, algorithm, tree, data-mining
this.lotteryConfiguration =
Objects.requireNonNull(
lotteryConfiguration,
"lotteryConfiguration == null");
this.root = Objects.requireNonNull(root, "The root node is null.");
this.length = this.lotteryConfiguration.getLotteryRowLength();
}
/**
* Constructs a missing rows generator with given lottery configuration.
*
* @param lotteryConfiguration the lottery configuration.
*/
public MissingLotteryRowsGenerator(
final LotteryConfiguration lotteryConfiguration) {
this(lotteryConfiguration, new IntegerTreeNode());
}
/**
* Adds a list of lottery rows to this generator.
*
* @param lotteryRows the lottery rows to add one by one.
* @return this generator for chaining.
*/
public MissingLotteryRowsGenerator
addLotteryRows(final List<LotteryRow> lotteryRows) {
for (final LotteryRow lotteryRow : lotteryRows) {
addLotteryRow(lotteryRow);
}
return this;
}
/**
* Adds a single lottery row to this generator.
*
* @param lotteryRow the lottery row to add.
* @return this generator for chaining.
*/
public MissingLotteryRowsGenerator
addLotteryRow(final LotteryRow lotteryRow) {
Objects.requireNonNull(lotteryRow, "lotteryRow == null");
checkLotteryRow(lotteryRow);
IntegerTreeNode node = root;
for (int i = 0, sz = this.length; i < sz; i++) {
final IntegerTreeNode nextNode;
final int number = lotteryRow.getNumber(i);
if (node.children == null) {
node.children = new TreeMap<>();
}
if (!node.children.containsKey(number)) {
node.children.put(number, nextNode = new IntegerTreeNode());
if (i < sz - 1) {
nextNode.children = new TreeMap<>();
}
} else {
nextNode = node.children.get(number);
}
node = nextNode;
}
return this;
} | {
"domain": "codereview.stackexchange",
"id": 38068,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, tree, data-mining",
"url": null
} |
metal, intermolecular-forces, hydrolysis
Title: Reasons for solid or liquid soap Soap is made by a saponification reaction, where a fat reacts with hydroxide ions to form a surfactant and glycerol.
To make a solid soap $\ce{NaOH}$ is used, while $\ce{KOH}$ is used for liquid soaps.
I don't understand why the alkali metal has such a great impact on the state of matter. Usually the argument is based on intermolecular interactions, such as van der Waals forces or hydrogen bonds, but if the same fat is used once with $\ce{NaOH}$ and once with $\ce{KOH}$, the resulting surfactants are basically the same, so the interactions shouldn't differ too much.
The only reason I could think of is the size of the alkali metal. Potassium has an atomic radius of 231 pm which is quite a bit more than the radius of sodium, being 186 pm. But why the atomic radius should have an impact on the state of matter of the soap is still unclear to me. Maybe I'm also completely wrong with this assumption.
A while ago, this question was already asked here on this forum, but I wonder if the given explanation is the only reason for the different states of matter. It's absolutely true that the reactivity of the alkali metals increases from top to bottom, but can this alone explain the phenomenon? @rch provides the solubility of $\ce{NaOH}$ and $\ce{KOH}$ to back up his answer, but I don't think that this is sufficient. You cannot simply replace the hydroxide ion with a fatty acid and assume that there are no substantial changes in the reaction behavior, can you?
Can anyone explain this in more detail? I don't know if anyone is still looking for the answer, but here I go anyway, because I spent 40 minutes researching this for an assignment.
TL;DR - The better solubility of potassium salts is the key factor, but not in the way one would initially suspect. Industrial processes and the efficiency of large-scale soap making explains the choice. | {
"domain": "chemistry.stackexchange",
"id": 14039,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "metal, intermolecular-forces, hydrolysis",
"url": null
} |
c#, .net, mvvm, async-await, windows-phone
Title: Using list of tasks to obtain and cache data I am developing a Windows Phone app with the Prism framework (MVVM).
I use data caching. To get data I use proxy service. Proxy service creates two tasks:
the first task receives data from a web service and updates the data in the local database (SQLite);
the second task receives data from the local database.
The ViewModel launches the tasks and displays the data on the UI.
It works, but I think it can be done much better. Any ideas?
ViewModel
public class UserProfilePageViewModel : ViewModel, IUserProfilePageViewModel
{
private readonly IUserServiceProxy _userServiceProxy;
private IUserItemViewModel _userProfile;
public IUserItemViewModel UserProfile
{
get { return _userProfile; }
set { SetProperty(ref _userProfile, value); }
}
public UserProfilePageViewModel(IUserServiceProxy userServiceProxy)
{
if (userServiceProxy == null) { throw new ArgumentNullException("The userServiceProxy can't be null"); }
_userServiceProxy = userServiceProxy;
}
public async override void OnNavigatedTo(object navigationParameter, Windows.UI.Xaml.Navigation.NavigationMode navigationMode, Dictionary<string, object> viewModelState)
{
_userId = navigationParameter as string;
await Load(navigationParameter as string);
}
public async Task Load(string id)
{
var taskList = _userServiceProxy.GetUserTaskList(id);
while (taskList.Count > 0)
{
var t = await Task.WhenAny(taskList);
taskList.Remove(t);
try
{
var result = await t;
//update UI
UserProfile.Model = result.Result;
}
catch (OperationCanceledException) { }
catch (Exception) { }
}
await Task<User>.WhenAll(taskList);
}
} | {
"domain": "codereview.stackexchange",
"id": 12000,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, .net, mvvm, async-await, windows-phone",
"url": null
} |
regressions including ridge, LASSO, and elastic net. Python, data science. eps=1e-3 means that alpha_min / alpha_max = 1e-3. One critical technique that has been shown to keep a model from overfitting is regularization. Simply put, if you plug in 0 for alpha, the penalty function reduces to the L2 (ridge) term. ElasticNet regularization applies both L1-norm and L2-norm regularization to penalize the coefficients in a regression model. Prostate cancer data are used to illustrate our methodology in Section 4. The post covers: "Alpha:{0:.4f}, R2:{1:.2f}, MSE:{2:.2f}, RMSE:{3:.2f}", Regression Model Accuracy (MAE, MSE, RMSE, R-squared) Check in R, Regression Example with XGBRegressor in Python, RNN Example with Keras SimpleRNN in Python, Regression Accuracy Check in Python (MAE, MSE, RMSE, R-Squared), Regression Example with Keras LSTM Networks in R, Classification Example with XGBClassifier in Python, Multi-output Regression Example with Keras Sequential Model, How to Fit Regression Data with CNN Model in Python. So the loss function changes to the following equation. zero_tol float. In today’s tutorial, we will grasp this technique’s fundamentals, which have been shown to work well to prevent our model from overfitting. Elastic Net — a mixture of both Ridge and Lasso. elasticNetParam corresponds to $\alpha$ and regParam corresponds to $\lambda$. scikit-learn provides elastic net regularization for linear (Gaussian) models. | {
"domain": "xn--peasanduzelai-jkb.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9648551525886193,
"lm_q1q2_score": 0.8162119156830435,
"lm_q2_score": 0.8459424334245618,
"openwebmath_perplexity": 2423.026905745403,
"openwebmath_score": 0.601301908493042,
"tags": null,
"url": "http://xn--peasanduzelai-jkb.net/yu40c4zx/6a2sg.php?e4281d=polyurethane-resin-suppliers"
} |
fourier-transform, z-transform, laplace-transform
Similarly, you can also apply a Laplace transform to it:
$$ Y(s) = \mathscr{L}\{ y(t) \} = \mathscr{L}\{ h(t) \ast x(t) \} $$
$$ Y(s) = \mathscr{L}\{ h(t) \} \cdot \mathscr{L}\{ x(t) \} = H(s)\cdot X(s) $$
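The discrete counterpart can be checked numerically (a sketch, pure Python): the DFT turns circular convolution into pointwise multiplication, mirroring $Y(s)=H(s)\,X(s)$.

```python
import cmath

def dft(x):
    # Plain O(N^2) discrete Fourier transform.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circular_convolve(h, x):
    # y[n] = sum_m h[m] x[(n - m) mod N]
    N = len(x)
    return [sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)]

h = [1.0, 0.5, 0.25, 0.0]
x = [2.0, -1.0, 0.0, 3.0]
conv_spectrum = dft(circular_convolve(h, x))        # DFT{h * x}
product_spectrum = [H * X for H, X in zip(dft(h), dft(x))]  # H[k] X[k]
```

The two spectra agree bin by bin, which is the convolution theorem in its discrete form.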
And exactly the same happens for discrete-time LTI systems. | {
"domain": "dsp.stackexchange",
"id": 8466,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fourier-transform, z-transform, laplace-transform",
"url": null
} |
# Why does $\sqrt{i^4} \neq i^2$.
I was looking at a problem $\sqrt{x}=-3$, and I had at first thought $x=9 i^4$ was a solution. ($\sqrt{9 i^4}=3i^2=-3$)
Though I then realized that this would cause some problems.
For example using this, we would have $\sqrt{i^4}=i^2=-1$. While on the other hand $\sqrt{i^4}=\sqrt{1}=1$.
I checked Wolfram and it says that $\sqrt{i^4} \neq i^2$ (also $(i^4)^{1/2}$). Could any one explain to me why we can't treat the exponents of $i$ this way?
Is it possible to algebraically show that $\sqrt{x}=-3$ has no solutions?
I am trying to learn some complex analysis and this made me realize that I might have some really bad intuition on complex numbers.
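The clash can also be seen numerically; Python's `**` for complex numbers uses the principal branch, so it reproduces Wolfram's behavior:

```python
z = 1j
lhs = (z**4) ** 0.5    # (i^4)^(1/2) = 1^(1/2) = 1 on the principal branch
rhs = z**2             # i^2 = -1
# lhs and rhs disagree: the rule (z^a)^b = z^(ab) fails here.
```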
• sqrt isn't a defined function on the complex plane May 9 '17 at 4:40
• The rule $(z^a)^b=z^{ab}$ does not always hold on the complex plane if $a$ and $b$ are not both integral. May 9 '17 at 4:43
• $\sqrt{x}=-3$ can have solutions but they depend on your definition of the square-root in the complex plain. In particular, you have to define a branch cut from $z=0$ to $z=\infty$ in order to render $\sqrt\cdot$ a complex function. May 9 '17 at 4:46
• This is almost the same as math.stackexchange.com/questions/49169/… . The answers there may help. May 9 '17 at 4:46
• Thank you very much for the replies. May 9 '17 at 4:48
## 2 Answers | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692339078752,
"lm_q1q2_score": 0.8007646027201176,
"lm_q2_score": 0.8198933425148213,
"openwebmath_perplexity": 152.83351355650478,
"openwebmath_score": 0.9530980587005615,
"tags": null,
"url": "https://math.stackexchange.com/questions/2272572/why-does-sqrti4-neq-i2"
} |
For the cutoff version:
We can get a subtraction-free formula for the cutoff version, which should be sufficient to get asymptotics, by the same idea that gives a simple bijective proof of the identity that Carlo Beenakker mentioned. That is:
$$k^p$$ counts maps from a $$p$$-element set $$[p]$$ to a $$k$$-element set
Thus $$\binom{n}{k} k^p$$ counts pairs of a $$k$$-element subset $$S$$ of an $$n$$-element set $$[n]$$ with a map from $$[p]$$ to $$S$$. In other words, it counts maps $$f$$ from $$[p]$$ to $$[n]$$ together with a $$k$$-element subset $$S$$ of $$[n]$$ containing the image of $$f$$.
So $$\sum_{k=0}^d (-1)^k \binom{n}{k} k^p$$ is the sum over maps $$f: [p] \to [n]$$ of the sum over subsets $$S$$ of $$[n]$$, containing the image of $$f$$, of size at most $$d$$, of $$(-1)^{|S|}$$. We may assume the image of $$f$$ has size $$\leq d < n$$ and thus that there is some element $$e$$ not in the image of $$f$$. We can cancel each subset $$S$$ with $$e\notin S$$ against $$S \cup \{e\}$$, as these have opposite signs. The only subsets that fail to cancel are those that have size exactly $$d$$ and do not contain $$e$$, of which there are $$\binom{n - | \operatorname{Im}(f) | -1}{ d - |\operatorname{Im}(f)| }$$. | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9787126475856414,
"lm_q1q2_score": 0.8112928875067629,
"lm_q2_score": 0.8289387998695209,
"openwebmath_perplexity": 135.47193198722653,
"openwebmath_score": 0.9537872076034546,
"tags": null,
"url": "https://mathoverflow.net/questions/370471/are-there-any-identities-for-alternating-binomial-sums-of-the-form-sum-k-0/370475"
} |
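The subtraction-free formula derived above can be checked by brute force for small parameters. The sketch below (plain Python; assumes $d<n$ so that some element $e$ avoids the image, and skips maps whose image is larger than $d$, since those contribute nothing to either side) compares the alternating sum with the map-counting formula:

```python
from itertools import product
from math import comb

def lhs(n, d, p):
    # the cutoff alternating sum
    return sum((-1) ** k * comb(n, k) * k ** p for k in range(d + 1))

def rhs(n, d, p):
    # (-1)^d times the number of uncancelled subsets, summed over all
    # maps f: [p] -> [n]; maps with |Im(f)| > d contribute nothing
    total = 0
    for f in product(range(n), repeat=p):
        m = len(set(f))          # |Im(f)|
        if m <= d:
            total += comb(n - m - 1, d - m)
    return (-1) ** d * total

for n in range(2, 6):
    for d in range(1, n):        # d < n, so some e avoids the image
        for p in range(1, 4):
            assert lhs(n, d, p) == rhs(n, d, p)
```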
forces, vectors, geometry, linear-algebra, displacement
Title: How & why does the law of vector addition work? Our teacher explained vector addition to us. He explained to us the triangle law of vector addition.
I have two questions:
He said the vector $\vec{R}$ is the resultant vector, meaning that instead of going through $\vec{A}$ and then $\vec{B}$ we could have gone directly through $\vec{R}$. I don't really understand this. It is easy to grasp intuitively when talking about displacement, but bringing forces and other vector quantities into this picture is really hard to understand. So how does the triangle law work for adding forces?
The head-tail combining rule is confusing and appears as a trick to enable memorization. So if there is a triangle law of vector addition, then why is there a need for the parallelogram law of vector addition, when both are talking about adding entities with directions and magnitudes?
Also, how does one understand force vector addition using the parallelogram law (I believe this question will be answered when the first question is answered)?
Edit: I am new here, so I don't know how to add images to the question. Also please don't answer this question mathematically, please answer in a way that vector additions make sense intuitively and become easy to imagine. As you say, it is easy to understand when talking about displacement.
Then you only need to recognise that a displacement vector is the same vector wherever it appears on the plane.
(this also shows the parallelogram law. This is not different from the triangle law, just a different way of thinking of it).
So, to apply this to forces, and other vector quantities, you only have to recognise that it does not matter where you put the vectors on the diagram. Of course, it only applies to questions where only the vector properties of the forces are relevant -- like when you are calculating the net force acting in a particular direction. It does not apply when you need to take moments, and have to describe the force acting at a point. | {
"domain": "physics.stackexchange",
"id": 65548,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, vectors, geometry, linear-algebra, displacement",
"url": null
} |
beginner, c, vigenere-cipher
Title: Program based on Vigenère’s cipher I would like to get some helpful thoughts on this and if there is a way to break it. Is there any point to not use fgets in that form?
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#define GETNAME_SIZE 100
char *getPhrase() {
    int size;
    char phrase[GETNAME_SIZE];
    char *phraseHolder;
    printf("Enter a phrase:");
    fflush(stdout);
    if (fgets(phrase, sizeof phrase, stdin) == NULL) {
        return NULL;
    }
    size = strlen(phrase) + 1;
    phraseHolder = malloc(size);
    if (phraseHolder == NULL) {
        return NULL;
    }
    strcpy(phraseHolder, phrase);
    phraseHolder[strcspn(phraseHolder, "\n")] = '\0';
    printf("The phrase you introduced is: <%s>\n", phraseHolder);
    return phraseHolder;
}
char *getChangedPhrase(char *phrase) {
    int size, i, codeSize;
    int j = 0;
    char *code;
    int pace;
    size = strlen(phrase) + 1;
    if (size < 2) {
        printf("Enter a bigger phrase!");
        return NULL;
    }
    code = malloc(size);
    if (code == NULL) {
        return NULL;
    }
    fflush(stdout);
    printf("Enter the code that you want to be used with your phrase!(only letters)\n"); | {
"domain": "codereview.stackexchange",
"id": 23470,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, c, vigenere-cipher",
"url": null
} |
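For reference, the cipher that code like this builds toward can be sketched compactly in Python (a classical Vigenère over the 26 letters; this is an illustration of the algorithm, not a drop-in replacement for the C program):

```python
def vigenere(text, key, decrypt=False):
    # classical Vigenère: shift each letter by the matching key letter;
    # non-letters pass through and do not advance the key position
    out, j = [], 0
    for ch in text:
        if ch.isalpha():
            shift = ord(key[j % len(key)].lower()) - ord('a')
            if decrypt:
                shift = -shift
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
            j += 1
        else:
            out.append(ch)
    return ''.join(out)

# classic textbook test vector
assert vigenere("attackatdawn", "lemon") == "lxfopvefrnhr"
assert vigenere("lxfopvefrnhr", "lemon", decrypt=True) == "attackatdawn"
```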
More generally, using this method one can show that for any $n \geq 1$, we have $$R(\exp(x+x^p/p+\cdots + x^{p^n}/p^n)) = R(\exp(x^{p^{n+1}}/p^{n+1})).$$ Even though I understand the details involved in this calculation, I don't know if there is some more general theory underlying these examples. It may be helpful to share similar examples that you know. For instance, is it always the case that $$R(\exp(f(x))) > R(\exp(x)),$$ given $f(x) \in x\mathbb{C}_p[[x]]$ has a nonzero root $\alpha \in \mathbb{C}_p$ of absolute value $R(\exp(x)) = (1/p)^{1/(p-1)}$ and no non-zero roots of smaller absolute value?
• I may be able to offer a partial answer here, but it’ll have to wait till morning at best. Feb 7 '16 at 5:58
• Have you looked at "Rank one solvable p-adic differential equations and finite Abelian characters via Lubin–Tate groups" by Andrea Pulita? The abstract starts with "We introduce a new class of exponentials of Artin–Hasse type, called π-exponentials". Feb 7 '16 at 12:35
• @LaurentBerger: That looks interesting. I haven't looked at it, but I will surely do. Thank you! Feb 7 '16 at 22:29 | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9863631643177029,
"lm_q1q2_score": 0.8497895561883252,
"lm_q2_score": 0.861538211208597,
"openwebmath_perplexity": 110.69966932048736,
"openwebmath_score": 0.9369610548019409,
"tags": null,
"url": "https://mathoverflow.net/questions/230412/when-does-the-radius-of-convergence-of-the-product-of-two-p-adic-power-series"
} |
physical-chemistry, solutions
Title: What is the difference between partial pressure and vapour pressure? I was looking at Henry's law and Raoult's law constants and there seem to be lots of equations involved.
Henry's law involves partial pressure and the latter involves the vapor pressure.
Wondering what the difference is?
In a mixture of gases, each gas has a partial pressure which is the hypothetical pressure of that gas if it alone occupied the volume of the mixture at the same temperature.
What does this mean? For example, if we have a mixture of gases $A$, $B$ and $C$ in an isolated room, then, according to Dalton's law, the pressure exerted by the gases will be the sum of their partial pressures :
$$P = p_A + p_B + p_C$$
where $p_A$, $p_B$ and $p_C$ are the partial pressures of each gas. Also, if we have $a$ moles of $A$, $b$ moles of $B$ and $c$ moles of $C$, we can express the partial pressure of each gas as below:
$$p_A = \frac{a}{a+b+c} P$$
$$p_B = \frac{b}{a+b+c} P$$
$$p_C = \frac{c}{a+b+c} P$$
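As a quick numerical illustration of these mole-fraction formulas (the mole numbers and total pressure below are made-up example values, not from the original question):

```python
a, b, c = 2.0, 3.0, 5.0     # moles of A, B, C (made-up values)
P = 101325.0                # total pressure in Pa (1 atm)

n_total = a + b + c
p_A = (a / n_total) * P     # mole fraction of A times total pressure
p_B = (b / n_total) * P
p_C = (c / n_total) * P

# Dalton's law: the partial pressures sum back to the total pressure
assert abs((p_A + p_B + p_C) - P) < 1e-6
assert abs(p_A - 20265.0) < 1e-6   # 20% of 101325 Pa
```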
Vapor pressure or equilibrium vapor pressure is the pressure exerted by a vapor in thermodynamic equilibrium with its condensed phases (solid or liquid) at a given temperature in a closed system. | {
"domain": "chemistry.stackexchange",
"id": 3046,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, solutions",
"url": null
} |
ros, ros-hydro, ubuntu-precise, ubuntu
Title: jsk-ros-pkg "configuring incomplete" errors occured
After installing jsk_common, which included the install of jsk_recognition and jsk_libfreenect2, I did a source ~/ros/hydro/devel/setup.bash, then compiled by executing catkin_make in the catkin work folder ~/ros/hydro. After considerable script activity, the following error message appeared.
[roseus.cmake] compile installed package sound_play
-- Using these message generators: gencpp;geneus;genlisp;genpy
-- +++ processing catkin package: 'jsk_2014_06_pr2_drcbox'
-- ==> add_subdirectory(jsk-ros-pkg/jsk_demos/jsk_2014_06_pr2_drcbox)
-- +++ processing catkin package: 'jsk_rosjava_messages'
-- ==> add_subdirectory(jsk-ros-pkg/jsk_smart_apps/jsk_rosjava_messages)
-- Configuring incomplete, errors occurred!
Invoking "cmake" failed.
No other error messages. Is there a log file to review for more detail?
openjdk-6-jre and openjdk-7-jre are installed
Originally posted by RobotRoss on ROS Answers with karma: 141 on 2014-09-11
Post score: 0
What exactly did you do around "After considerable script activity"? In particular, what was the last command you ran?
Also, many of jsk-* packages are actively maintained and available as DEB. Installable by apt-get install ros-hydro-jsk-common for example.
Originally posted by 130s with karma: 10937 on 2014-09-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19378,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-hydro, ubuntu-precise, ubuntu",
"url": null
} |
algorithms, algorithm-analysis
Title: How can I prove algorithm correctness? How can I prove algorithm correctness?
When I face a problem and come up with a solution, the only way I know to check whether it is valid is by trying some test cases. If they pass through the algorithm and produce the expected output, then my algorithm is most probably correct.
But obviously this does not hold all the time, because I may forget some corner cases, or it may be hard to figure out all the test cases.
So how can I prove mathematically whether my algorithm produces the expected output or not?
For example, consider the program below.
You’re given a read only array of $n$ integers.
Find out if any integer occurs more than $n/3$ times in the array in linear time and constant additional space.
Algorithm:
We will use an array of size 3 to count occurrences of numbers; call it count. We add numbers to our count array along with their counts. If we reach the size of the count array, we decrement the count of each number by one. If a number's count becomes zero, it can be safely removed from the count array.
Here is an example:
Input: 4 3 3 7 2 3 4 5
count arr (4,1) 4 as first element and 1 is its count till now
count arr (4,1)(3,1)
count arr (4,1)(3,2)
count arr (4,1)(3,2)(7,1). Here we reach the max allowed size for count, so we need to decrement each count by one; if a count reaches zero, its item is removed from our count array, so count arr becomes
count arr (3,1). We will proceed with the next element in the array, which is 2.
count arr (3,1)(2,1)
count arr (3,2)(2,1)
and so on. At the end count arr will be (3,1)(5,1). | {
"domain": "cs.stackexchange",
"id": 4838,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, algorithm-analysis",
"url": null
} |
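The counting scheme traced above is a generalization of the Boyer–Moore majority vote (essentially the Misra–Gries algorithm). A minimal Python sketch; note that it uses the standard two-counter bookkeeping (slightly different from the size-3 array in the trace) and adds the verification pass needed to confirm a candidate really occurs more than n/3 times:

```python
def more_than_third(arr):
    # First pass (Misra-Gries with two counters): any element occurring
    # more than n/3 times is guaranteed to survive as a candidate.
    counts = {}
    for x in arr:
        if x in counts:
            counts[x] += 1
        elif len(counts) < 2:
            counts[x] = 1
        else:
            # a third distinct value arrived: decrement everyone,
            # dropping entries whose count hits zero
            counts = {y: c - 1 for y, c in counts.items() if c > 1}
    # Second pass: verify candidates really exceed n/3 (candidates may
    # survive the first pass without actually qualifying).
    n = len(arr)
    return [x for x in counts if arr.count(x) > n / 3]

print(more_than_third([4, 3, 3, 7, 2, 3, 4, 5]))  # [3]
```

On the example input, 5 survives the first pass as a candidate but is eliminated by verification; only 3 (which occurs 3 times out of 8) qualifies.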
4. Jul 7, 2008
### rock.freak667
Re: Show if A,B,C are invertible matrices of same size....
Multiply both sides by the matrix ABC
5. Jul 7, 2008
Re: Show if A,B,C are invertible matrices of same size....
So
$D=(ABC)^{-1}$
$\Rightarrow D(ABC)=(ABC)^{-1}(ABC)$
Well, I think I see where this is going. And I think that the only reason this works is because we are assuming that the product (ABC) IS invertible. Which brings me back to my original point. In order to show how the multiplication MUST be carried out, we must first SHOW or assume without proof that (ABC) is in fact invertible, since our argument will be based on the fact that $(ABC)(ABC)^{-1}=\mathrm{Id}$.
6. Jul 7, 2008
### rock.freak667
Re: Show if A,B,C are invertible matrices of same size....
Well if A,B,C are nxn matrices then ABC is an nxn matrix.
and for the matrix ABC to be invertible $det(ABC) \neq 0$
and det(ABC)=det(A)*det(B)*det(C)
but the matrices A,B,C are invertible.
7. Jul 7, 2008
### Defennder
Re: Show if A,B,C are invertible matrices of same size....
Another way you could show that a product of two matrices A and B are invertible is by showing that there exists some matrix which when multiplied to AB on the left and on the right gives the identity matrix:
Suppose A and B are invertible, then:
$$AB(B^{-1}A^{-1}) = I$$ for multiplying on the right
$$B^{-1}A^{-1}AB = I$$ for multiplying on the left.
In both cases this reduces to I, so $$B^{-1}A^{-1}$$ is the inverse of AB.
Now make use of this result to prove your question. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9845754492759498,
"lm_q1q2_score": 0.8688168161123153,
"lm_q2_score": 0.8824278695464501,
"openwebmath_perplexity": 1168.921472130163,
"openwebmath_score": 0.7810722589492798,
"tags": null,
"url": "https://www.physicsforums.com/threads/show-if-a-b-c-are-invertible-matrices-of-same-size.243766/"
} |
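Defennder's identity $(AB)^{-1}=B^{-1}A^{-1}$ and the determinant product rule used above are easy to check numerically. A 2×2 sketch with hand-rolled helpers (no linear-algebra library assumed; the matrices are arbitrary invertible examples):

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv(X):
    # explicit 2x2 inverse (valid because det is nonzero here)
    d = det(X)
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

A = [[2.0, 1.0], [1.0, 1.0]]   # det(A) = 1
B = [[3.0, 0.0], [1.0, 2.0]]   # det(B) = 6

lhs = inv(matmul(A, B))        # (AB)^-1
rhs = matmul(inv(B), inv(A))   # B^-1 A^-1  (note the reversed order)

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert abs(det(matmul(A, B)) - det(A) * det(B)) < 1e-12
```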
electromagnetism, waves
Title: Why don't waves erase out each other when looking onto a wall? If I stand exactly in front of a colorful wall, I imagine the light waves they emit, and they receive should randomly double or erase out each other.
So as a result, I imagine I should see a weird combination of colors, or a full-black/full-white/very lightly perception of the wall, when all the light waves that the wall receives and emits cancel out each other or double each other.
Why doesn't that actually happen? Any time I look at a wall, I never see the wall "cancel out" of my perception. The same goes for radio waves. Shouldn't radio waves fail to work at all? There are so many sources from which they could reflect and cancel out or interfere with each other...

1) First let us separate colour perception from frequency. Individual frequencies have a color correspondence but the colour the human eye perceives is another story.
2) White light, such as sunlight, is composed of many frequencies.
When the impinging wave hits a wall it can be
a) reflected
b) absorbed
c) scattered incoherently
In order for the light waves to cancel out each other or double each other the photons have to be, within the uncertainty principle, superimposed in time and space. Sometimes it happens, but the probability is small. That is one of the reasons why a reflected beam can never have the same strength as its original beam. If the frequency is the same the probability will be higher than if the frequencies come from a random palette.
This superposition can be achieved with lasers, where there is control of frequency and the beam is coherent, i.e. the phases are preserved upon reflection. A hologram is an example of superposition of same-frequency light to create a three dimensional shape by peaks and dips.
Edit: From a disappeared question the following comment is worth adding:
You can perceive all colours even if only two frequencies are shining on an object. Also in this decade, Land first discovered a two-color system for projecting the entire spectrum of hues with only two colors of projecting light (he later found more specifically that one could achieve the same effect using very narrow bands of 500 nm and 557 nm light). | {
"domain": "physics.stackexchange",
"id": 23840,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, waves",
"url": null
} |
c#, linq, weekend-challenge, playing-cards
Title: Poker Hand Evaluator, take 2 This is following up on my previous attempt, which was admittedly done fast and not-so-well. This code attempts to allow comparing two poker hands to determine a winner, not only evaluating a given set of cards. (this was actually edited - see original take 2 here).
I'm abstracting the dirt away into an interface... and yet it doesn't look very clean to me, but maybe it's just because I don't use Tuple<T1,T2> very often. T1 is a bool indicating whether the hand is matched, T2 contains the winning cards.
public interface IPokerHandEvaluator
{
Tuple<bool, IEnumerable<PlayingCard>> IsPair(IEnumerable<PlayingCard> cards);
Tuple<bool, IEnumerable<PlayingCard>> IsTwoPair(IEnumerable<PlayingCard> cards);
Tuple<bool, IEnumerable<PlayingCard>> IsThreeOfKind(IEnumerable<PlayingCard> cards);
Tuple<bool, IEnumerable<PlayingCard>> IsFourOfKind(IEnumerable<PlayingCard> cards);
Tuple<bool, IEnumerable<PlayingCard>> IsFlush(IEnumerable<PlayingCard> cards);
Tuple<bool, IEnumerable<PlayingCard>> IsFullHouse(IEnumerable<PlayingCard> cards);
Tuple<bool, IEnumerable<PlayingCard>> IsStraight(IEnumerable<PlayingCard> cards);
Tuple<bool, IEnumerable<PlayingCard>> IsStraightFlush(IEnumerable<PlayingCard> cards);
Tuple<bool, IEnumerable<PlayingCard>> IsRoyalFlush(IEnumerable<PlayingCard> cards);
} | {
"domain": "codereview.stackexchange",
"id": 24476,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, linq, weekend-challenge, playing-cards",
"url": null
} |
digital-communications, bandwidth
For both schemes: if you double the symbol period for the QASK scheme, you are effectively stretching out the signal (or, in other words, halving the frequency of each component), leaving you with something of the form
$$s_1(t) = a_1 \sin(2 \pi 50 t) + \dots + a_i \sin(2 \pi 75 t) + \dots + a_n \sin(2 \pi 100 t)$$
now $s_1(t)$ has a bandwidth that is half of the bandwidth of $s(t)$. You can just shift $s_1(t)$ to the previous centre frequency and have the same centre frequency whilst using half the bandwidth. Hope the point is a bit clearer now. | {
"domain": "dsp.stackexchange",
"id": 4745,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "digital-communications, bandwidth",
"url": null
} |
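The time-scaling argument above (stretching a signal by two halves every component frequency, and hence the bandwidth) can be illustrated with a naive DFT. Single tones are used for simplicity; the 100 Hz and 50 Hz values echo the components mentioned in the answer:

```python
import cmath, math

def peak_freq(samples, fs):
    # naive DFT; return the frequency of the bin with the largest magnitude
    N = len(samples)
    mags = [abs(sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]
    return mags.index(max(mags)) * fs / N

fs, N = 400, 400                                       # 1 second of samples
t = [n / fs for n in range(N)]
s  = [math.sin(2 * math.pi * 100 * ti) for ti in t]    # symbol period T
s1 = [math.sin(2 * math.pi * 50 * ti) for ti in t]     # period 2T: stretched

assert peak_freq(s, fs) == 100.0   # original component
assert peak_freq(s1, fs) == 50.0   # stretched by 2: frequency halves
```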
black-holes, reference-frames, observers, hawking-radiation
Title: Intensity of Hawking radiation for different observers relative to a black hole Consider three observers in different states of motion relative to a black hole:
Observer A is far away from the black hole and stationary relative to it;
Observer B is suspended some distance above the event horizon on a rope, so that her position remains constant with respect to the horizon;
Observer C is the same distance from the horizon as B (from the perspective of A), but is freefalling into it.
All of these observers should observe Hawking radiation in some form. I am interested in how the spectra and intensity of the three observations relate to one another.
My previous understanding (which might be wrong, because I don't know how to do the calculation) was that if you calculate the radiation that B observes, and then calculate how much it would be red shifted as it leaves the gravity well, you arrive at the spectrum and intensity of the Hawking radiation observed by A. I want to understand how the radiation experienced by C relates to that observed by the other two.
The radiation fields observed by B and C are presumably different. B is being accelerated by the tension in the rope, and is thus subject to something like the Unruh effect. C is in freefall and therefore shouldn't observe Unruh photons - but from C's point of view there is still a horizon ahead, so presumably she should still be able to detect Hawking radiation emanating from it. So I would guess that C observes thermal radiation at a lower intensity than B, and probably also at a lower temperature (but I'm not so sure about that).
So my question is, am I correct in my understanding of how A and B's spectra relate to one another, and has anyone done (or would anyone be willing to do) the calculation that would tell us what C observes? References to papers that discuss this would be particularly helpful.

This paper discusses these issues in a fairly comprehensible way. Faraway observers (like your observer A) see thermal Hawking radiation with an effective temperature given by the Hawking temperature
$$T_H := \frac{\hbar c^3}{8 \pi G M k_B},$$
where $M$ is the black hole's mass. | {
"domain": "physics.stackexchange",
"id": 37529,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "black-holes, reference-frames, observers, hawking-radiation",
"url": null
} |
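For scale, the Hawking temperature seen by a faraway observer like A can be evaluated directly from the formula above. The sketch below plugs in standard SI constants and one solar mass (an assumed example mass):

```python
import math

# standard SI constants; M is an assumed example mass (one solar mass)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
k_B  = 1.380649e-23      # J / K
M    = 1.989e30          # kg

T_H = hbar * c**3 / (8 * math.pi * G * M * k_B)
print(f"T_H = {T_H:.1e} K")   # prints T_H = 6.2e-08 K
```

A solar-mass black hole is therefore vastly colder than the 2.7 K cosmic microwave background, which is why its Hawking radiation is unobservable in practice.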
c#, multithreading, thread-safety, lazy
The infamous GetValueAsync can then be rewritten completely. Each caller thread can provide its own cancellationToken. Use it to acquire a lock on _mutex. Once we have the lock, check whether _value is already memoized. If not (!_memoized), execute _valueFactory and memoize the value. Now, we perform another timeout check for the calling thread with ThrowIfCancellationRequested. Even though we have the value available now, the caller might still have timed out, so let him know. Don't forget to release the mutex.
public async Task<T> GetValueAsync(CancellationToken cancellationToken)
{
    await _mutex.WaitAsync(cancellationToken);
    try
    {
        if (!_memoized)
        {
            _value = await _valueFactory(cancellationToken).ConfigureAwait(false);
            _memoized = true;
            cancellationToken.ThrowIfCancellationRequested();
        }
        return _value;
    }
    finally
    {
        _mutex.Release();
    }
}
We should allow for a convenience overload if no cancellation support is required for a given caller.
public async Task<T> GetValueAsync() => await GetValueAsync(CancellationToken.None);
And since we comply to the concept of Lazy we should also provide a synchronous property Value.
public T Value => GetValueAsync().Result;
Refactored Code
public sealed class CancelableAsyncLazy<T>
{
    private readonly Func<CancellationToken, Task<T>> _valueFactory;
    private volatile bool _memoized;
    private readonly SemaphoreSlim _mutex;
    private T _value;

    public CancelableAsyncLazy(Func<CancellationToken, Task<T>> valueFactory)
    {
        _valueFactory = valueFactory
            ?? throw new ArgumentNullException(nameof(valueFactory));
        _mutex = new SemaphoreSlim(1, 1);
    }

    public T Value => GetValueAsync().Result; | {
"domain": "codereview.stackexchange",
"id": 35230,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, multithreading, thread-safety, lazy",
"url": null
} |
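As a cross-language aside (an illustrative sketch, not part of the review): the same memoize-under-a-lock pattern can be written with Python's asyncio, where per-caller timeouts come from wait_for. Note the cancellation semantics differ slightly from the C# version, since asyncio.wait_for cancels the waiting coroutine itself, whereas the C# code completes memoization and then throws:

```python
import asyncio

class CancelableAsyncLazyPy:
    # run the async factory at most once, under a lock; each caller
    # may supply its own timeout via asyncio.wait_for
    def __init__(self, factory):
        self._factory = factory
        self._lock = asyncio.Lock()
        self._memoized = False
        self._value = None

    async def get(self, timeout=None):
        async def _inner():
            async with self._lock:
                if not self._memoized:
                    self._value = await self._factory()
                    self._memoized = True
                return self._value
        return await asyncio.wait_for(_inner(), timeout)

calls = 0

async def factory():
    global calls
    calls += 1
    return 42

async def main():
    lazy = CancelableAsyncLazyPy(factory)
    first = await lazy.get()
    second = await lazy.get(timeout=1.0)   # memoized: factory is not re-run
    return first, second

result = asyncio.run(main())
print(result, calls)   # (42, 42) 1
```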
genome, repeat-elements, sequence-analysis
Title: Is it wise to use RepeatMasker on prokaryotes? I'm looking for a way to identify low complexity regions and other repeats in the genome of Escherichia coli. I found that RepeatMasker may be used for example when drafting genomes of prokaryotes (E. coli example). But RepeatMasker works on a limited dataset of species, neither of them being prokaryotes. By default, when running RepeatMasker, if no species is specified, it will compare with homo sapiens data.
This seems rather inadequate, but the most relevant alternative, PRAP, requires a "dead" tool (VisCoSe, by Michael Spitzer).
Is it still wise to use RepeatMasker on Escherichia coli?
If yes, which settings would maximise relevance?

If I understood your question correctly, you want to mask those regions in a (FASTA?) genome. I think you could identify those regions using mummer and mask them using bedtools.
# align genome against itself
nucmer --maxmatch --nosimplify genome.fasta genome.fasta
# select repeats and convert the coordinates to bed format
show-coords -r -T -H out.delta | awk '{if ($1 != $3 && $2 != $4) print $0}' | awk '{print $8"\t"$1"\t"$2}' > repeats.bed
# mask those bases with bedtools
bedtools maskfasta -fi genome.fasta -bed repeats.bed -fo masked.fasta
Have a look at nucmer and bedtools maskfasta options to fine-tune your analysis. | {
"domain": "bioinformatics.stackexchange",
"id": 223,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "genome, repeat-elements, sequence-analysis",
"url": null
} |
computability, turing-machines, undecidability, decision-problem, halting-problem
In this proof, I am confused about what $x*code(x)$ would correspond to. Is it the input followed by the coded input? Why would we want to do so?

Let $H=\{(\langle M\rangle, x)\mid M, \text{ when given input }x, \text{ halts}\}$. In other words, the language $H$ consists of all pairs of TM descriptions and words, such that the TM halts on that word.
Let $SA=\{\langle M\rangle\mid M, \text{ when given its own description, halts}\}$. You're given the fact that $SA$ is undecidable.
Suppose, to the contrary, that $H$ was decidable, so that there was a TM $M_1$ such that when given a pair, $(\langle M\rangle, x)$, of a TM $M$, and a word $x$
$$
M_1((\langle M\rangle, x)) = \begin{cases}\text{accept}, & M(x) \text{ halts}\\
\text{reject}, & M(x) \text{ doesn't halt}\end{cases}
$$
We'll use this to build a decider, $M_2$, for SA, establishing the desired contradiction. Define
M2(<M>) =
if M1((<M>,<M>)) = accept ; does M halt on its own description?
return M(<M>) ; if so, this will always halt
else
return reject
Now if $M$ halts on its own description, the if test will detect that and so we can simulate the action of $M$ on $\langle M\rangle$, be sure it halts, and return the result, either accept or not. If $M$ doesn't halt on its own description, we reject. The upshot is that we'll have built a decider, $M_2$ for $SA$, so have a contradiction, and hence our assumption that $H$ was decidable, must be false. | {
"domain": "cs.stackexchange",
"id": 3644,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computability, turing-machines, undecidability, decision-problem, halting-problem",
"url": null
} |
Why do we not use here that $\overline{Y}_B=p \overline{Y_1} + (1-p) \overline{Y_0}$ ?
From $E(\overline {Y_A}) = E(Y)$ and $E(\overline {Y_B}) = E(Y)$ we get that both estimators are unbiased, right?
To check which estimator is better we have to compare the two variances, right? The variance of estimator A is equal to the variance of the median of the amounts of money. Does this mean that this is better than the variance of estimator B?
Or can we not compare them?
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Does it hold that $E(\overline {Y_A}) = E(Y)$ and $E(\overline {Y_B}) = E(Y)$ because $\overline {Y_A}$ and $\overline {Y_B}$ describes respectlively the average amount of money?
Why do we not use here that $\overline{Y}_B=p \overline{Y_1} + (1-p) \overline{Y_0}$ ?
If follows mathematically.
Let's go through the steps for B, using indeed the formula for $\overline{Y_B}$.
$$E(\overline{Y_B})=E\left(p\overline{Y_1}+(1-p)\overline{Y_0}\right) =pE(\overline{Y_1})+(1-p)E(\overline{Y_0}) =p\mu_1 + (1-p)\mu_0 =\mu = E(Y)$$
Yes?
From $E(\overline {Y_A}) = E(Y)$ and $E(\overline {Y_B}) = E(Y)$ we get that both estimators are unbiased, right?
Yep. | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9902915246646303,
"lm_q1q2_score": 0.8119334260423168,
"lm_q2_score": 0.8198933403143929,
"openwebmath_perplexity": 733.596048834762,
"openwebmath_score": 0.9828346967697144,
"tags": null,
"url": "https://mathhelpboards.com/threads/expected-values-and-variances.24037/"
} |
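The unbiasedness computation above is easy to confirm by simulation. The sketch below assumes a concrete two-stratum setup (the weight, means, spreads, and sample sizes are all made-up values): repeatedly draw samples from both strata, form $\overline{Y}_B = p\overline{Y_1}+(1-p)\overline{Y_0}$, and check that its average over many replications is close to $\mu$:

```python
import random

random.seed(0)
p, mu0, mu1 = 0.3, 10.0, 20.0        # assumed stratum weight and means
mu = p * mu1 + (1 - p) * mu0         # population mean: 13.0

def y_bar_B(n1=50, n0=50):
    # one realization of the stratified estimator
    y1 = [random.gauss(mu1, 2.0) for _ in range(n1)]
    y0 = [random.gauss(mu0, 2.0) for _ in range(n0)]
    return p * (sum(y1) / n1) + (1 - p) * (sum(y0) / n0)

reps = 5000
avg = sum(y_bar_B() for _ in range(reps)) / reps
assert abs(avg - mu) < 0.05          # consistent with E(Y_B) = mu
```

The Monte Carlo standard error here is a few thousandths, so the 0.05 tolerance is a comfortable margin.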
c++, simulation
if (component == nullptr) {
throw std::invalid_argument{
"The target component is not present in the circuit."
};
}
return component;
}
bool isDoubleInputPinComponent(AbstractCircuitComponent* component) {
return dynamic_cast<AbstractDoubleInputPinCircuitComponent*>
(component) != nullptr;
}
bool isSingleInputPinComponent(AbstractCircuitComponent* component) {
return dynamic_cast<AbstractSingleInputPinCircuitComponent*>
(component) != nullptr;
}
void checkIsSingleInputGate(AbstractCircuitComponent* gate) {
if (dynamic_cast<AbstractSingleInputPinCircuitComponent*>(gate)
== nullptr) {
throw std::logic_error{
"A single input pin is expected here."
};
}
}
void checkIsDoubleInputGate(AbstractCircuitComponent* gate) {
if (dynamic_cast<AbstractDoubleInputPinCircuitComponent*>(gate)
== nullptr) {
throw std::logic_error{
"A double input pin is expected here."
};
}
}
void checkIsDagInForwardDirection() {
std::unordered_map<AbstractCircuitComponent*, Color> colors;
for (AbstractCircuitComponent* component : m_component_set) {
colors[component] = Color::WHITE;
}
for (AbstractCircuitComponent* component : m_input_gates) {
if (colors[component] == Color::WHITE) {
dfsForwardVisit(component, colors);
}
}
}
void checkIsDagInBackwardDirection() {
std::unordered_map<AbstractCircuitComponent*, Color> colors;
for (AbstractCircuitComponent* component : m_component_set) {
colors[component] = Color::WHITE;
} | {
"domain": "codereview.stackexchange",
"id": 27816,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, simulation",
"url": null
} |
bash, console, file-system, perl
sub process_double_args {
my ($list, $cmd, $tag) = @_;
my $cmd_regex = "^" .
DSConstants::COMMAND_ADD_SHORT . "|" .
DSConstants::COMMAND_ADD_LONG . "|" .
DSConstants::COMMAND_ADD_WORD . "|" .
DSConstants::COMMAND_REMOVE_SHORT . "|" .
DSConstants::COMMAND_REMOVE_LONG . "|" .
DSConstants::COMMAND_REMOVE_WORD . "|" .
DSConstants::COMMAND_UPDATE_PREVIOUS . "\$";
if ($cmd !~ /$cmd_regex/) {
die "$cmd: command not recognized.";
}
for ($cmd) {
$_ eq DSConstants::COMMAND_ADD_SHORT && add_tag($list, $tag, getcwd());
$_ eq DSConstants::COMMAND_ADD_LONG && add_tag($list, $tag, getcwd());
$_ eq DSConstants::COMMAND_ADD_WORD && add_tag($list, $tag, getcwd());
$_ eq DSConstants::COMMAND_REMOVE_SHORT && remove_tag($list, $tag);
$_ eq DSConstants::COMMAND_REMOVE_LONG && remove_tag($list, $tag);
$_ eq DSConstants::COMMAND_REMOVE_WORD && remove_tag($list, $tag);
my $update_dir = $tag;
$_ eq DSConstants::COMMAND_UPDATE_PREVIOUS && update_previous($list, $update_dir);
}
} | {
"domain": "codereview.stackexchange",
"id": 42944,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bash, console, file-system, perl",
"url": null
} |
organic-chemistry, carbonyl-compounds, nucleophilic-substitution
Title: Rate of formation of hydrate in a carbonyl compound When water, in the presence of an acid, is added to a carbonyl compound, it leads to the formation of its hydrate. But how do we determine the rate of reaction? Is it done by checking the amount of partial positive charge on the carbonyl carbon (due to electron-withdrawing groups attached to it)?

The formation of the hydrate of any carbonyl compound has as its rate-determining step the nucleophilic addition of a water molecule to the electrophilic carbon atom of the carbonyl group.
Hence, any electron withdrawing group, like fluoro or nitro, attached in the chain next to the carbonyl group will favor the product side of the equilibrium.
On the other hand, extra branching at the alpha position due to methyl/t-butyl groups will cause steric hindrance, while electron releasing groups will reduce the carbon's electrophilicity, both favoring the reactant side of the equilibrium instead.
It is important to note that while the hydration of alkenes goes to completion, the hydration of carbonyls does not. Hence, ignoring a few exceptions like formaldehyde, ninhydrin, chloral, etc., an equilibrium is set up during hydration of carbonyls. Therefore, it is better to talk about the position of equilibrium in this process rather than the rate of the reaction, as correctly pointed out by @Mithoron. | {
"domain": "chemistry.stackexchange",
"id": 9660,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, carbonyl-compounds, nucleophilic-substitution",
"url": null
} |
The geometric mean is the appropriate average when the data are products or exponential in nature; it equals the arithmetic mean only when all values in the data set are equal, and is lower otherwise. In Python it can be computed with statistics.geometric_mean(). For a geometric sequence, the common ratio can be found by dividing any two consecutive terms. Licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. | {
"domain": "800truckhelp.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9572778000158576,
"lm_q1q2_score": 0.8211223644686977,
"lm_q2_score": 0.8577681049901037,
"openwebmath_perplexity": 677.440489187798,
"openwebmath_score": 0.5607569217681885,
"tags": null,
"url": "http://800truckhelp.com/b197z/viewtopic.php?cf5c32=how-to-find-geometric-mean"
} |
ds.algorithms
Title: Maximum subarray problem with weights The maximum sum subarray problem involves finding a contiguous subarray with the largest sum, within a given one-dimensional array $A[1...n]$ of numbers. Formally, the task is to find indices $i$ and $j$ with $1 \le i \le j \le n$ s.t. the sum $\sum_{x=i}^j A[x]$ is as large as possible.
It is well-known that this problem can be solved in linear time $O(n)$.
I'm trying to solve a variation of this particular problem. In addition to array $A[1...n]$ we are also given an array $W[1...n]$ where $W[i]$ gives the weight of the ith item. The items are ordered in increasing weight, so $W[i] \leq W[j]$ if $i<j$. Moreover, all values in $W$ and $A$ are larger than 0, and $A[i] \geq W[i]$ for all $i=1...n$. Objective: find a contiguous subarray that maximizes $\sum_{x=i}^j (A[x]-W[j])$.
Here's a numerical example
i W A
1 6 14
2 7 12
3 8 10
4 9 10
5 12 18
6 13 16
7 14 25
8 18 22
9 19 26
10 20 23
The solution to the above example would be: i=5, j=7, with a score of: $A[5]-W[7]+A[6]-W[7]+A[7]-W[7]=18-14+16-14+25-14=17$
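The worked example can be verified with a small brute-force sketch (illustrative code, not part of the original question; the arrays are copied from the table above):

```python
# Brute-force evaluation of the weighted objective sum_{x=i..j} (A[x] - W[j]).
W = [6, 7, 8, 9, 12, 13, 14, 18, 19, 20]
A = [14, 12, 10, 10, 18, 16, 25, 22, 26, 23]

def best_subarray(A, W):
    n = len(A)
    best = (float("-inf"), -1, -1)
    for i in range(n):
        for j in range(i, n):
            # Every element of A[i..j] pays the largest weight in the window,
            # which is W[j] because W is sorted in increasing order.
            score = sum(A[x] - W[j] for x in range(i, j + 1))
            best = max(best, (score, i + 1, j + 1))  # 1-based indices
    return best

print(best_subarray(A, W))  # (17, 5, 7), matching the worked example
```

Tuple comparison makes `max` return the best score together with its 1-based indices; this naive search is $O(n^3)$ (or $O(n^2)$ with running sums) and serves only as a correctness baseline.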
To solve this problem, I came up with the following $O(n^2)$ algorithm:
best_score= -1
best_i = best_j = -1 | {
"domain": "cstheory.stackexchange",
"id": 5022,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ds.algorithms",
"url": null
} |
astrophysics, astronomy, star-clusters
Title: Why are the absolute magnitudes in M5 galaxy so puny? Wikipedia gives the following as the HR diagram for M5:
The stars at the base of the red giant branch have absolute visual magnitudes of 15? That seems way, way too dim. The sun's absolute magnitude according to wikipedia is 4.83, although it doesn't state in what filter that measurement was taken. What is going on here? It is a simple mistake. According to Layden et al. (2005), the distance to M5 is 7.76 kpc and has a V-band extinction of 0.11 mag. You need to subtract 14.56 mag from the y-axis to get the absolute magnitude.
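The 14.56 mag correction quoted above is just the distance modulus plus the extinction; a quick check (input values from the answer, computation added here for illustration):

```python
import math

d_pc = 7760.0  # distance to M5 from Layden et al. (2005), in parsecs
A_V = 0.11     # V-band extinction, in magnitudes

# Distance modulus mu = 5*log10(d / 10 pc); converting apparent to absolute
# magnitude removes both the modulus and the extinction: M = m - mu - A_V.
mu = 5 * math.log10(d_pc / 10.0)
print(round(mu + A_V, 2))  # 14.56
```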
As an aside, I did eventually find the incorrectly labelled diagram here. The "author", Lithopsian, gives no reference to where the data came from and claims it as their own work! Caveat emptor. I would stick to diagrams published in reputable journals. | {
"domain": "physics.stackexchange",
"id": 41216,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "astrophysics, astronomy, star-clusters",
"url": null
} |
algorithms, graphs
So we'll need to work harder. Well, let's try to prove it inductively. Let $\#V(G)$ be the number of nodes in a graph $G$, $\#E(G)$ the number of edges and $\#C(G)$ the number of elementary cycles that aren't self edges. I assert that if $G$ is unipathic and not empty then $\#C(G) \le \#V(G)-1$.
For a graph with one or two nodes, this is obvious. Suppose the assertion holds for all graphs such that $\#V(G) < n$ and let $G$ be a unipathic graph with $n$ nodes. If $G$ has no cycle, $0 = \#C(G) < \#V(G)$, case closed. Otherwise, let $(a_1,\ldots,a_m)$ be an elementary cycle. | {
"domain": "cs.stackexchange",
"id": 75,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs",
"url": null
} |
c++
// or
PNGFileReader png2;
png2.decompress_png_to_raw("Plop.png");
What I would rather see is an object that is full constructed:
PNGFileReader png1("File");
PNGFileReader png2(vectorOfRawData);
Then you can perform actions on the objects. | {
"domain": "codereview.stackexchange",
"id": 1560,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++",
"url": null
} |
quantum-mechanics, mathematical-physics, operators, momentum, hilbert-space
It's also important to note that the Hamiltonian $H$ in this case is given by the Friedrichs extension of $$p_0^2=-\frac{d^2}{dx^2}\\
\mathcal{D}(p_0^2)=\{\psi\in\mathcal{H}^2[0,1]\,|\,\psi(0)=\psi'(0)=0=\psi'(1)=\psi(1)\}$$
$H$ cannot be the square of any $p_\alpha$, since the domains do not match.
Edit: As pointed out by @jjcale, one way to define the momentum in this case is $p=\sqrt{H}$, but clearly, the action of $p$ can't be a derivative, because it has the same eigenfunctions as $H$, which are of the form $\psi_k(x)=\sin \pi kx$. This illustrates the fact that it's not related to spatial translations as stated above.
Edit 2: There is a proof that the Friedrichs extension is the one with Dirichlet boundary conditions in Simon's Vol. II, section X.3.
The domains defined by the spectral theorem are indeed $\{\psi: p_\alpha\psi\in\mathcal{D}(p_\alpha)\}$. To see this, realize that in this case, since the spectrum is purely point, by the spectral theorem, we have
$$p_\alpha=\sum_{n\in \Bbb{Z}}\lambda_{\alpha,n}P_n,$$
where $\lambda_{\alpha,n}$ are the eigenvalues associated to the normalized eigenvectors $\psi_{\alpha,n}$, and $P_n=\psi_n\langle\psi_n,\cdot\rangle$ are the projections in each eigenspace. The domain $\mathcal{D}(p_\alpha)$ is then given by the vectors $\xi$, such that | {
"domain": "physics.stackexchange",
"id": 17159,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, mathematical-physics, operators, momentum, hilbert-space",
"url": null
} |
java, performance, pdf
write.close();
}catch(IOException e){
System.err.println("Error: unable to open file for output: " + out);
}
}
}catch (IOException e){
System.err.println("Error: unable to open file for output: " + out);
}
}
doc.close();
} | {
"domain": "codereview.stackexchange",
"id": 21353,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, performance, pdf",
"url": null
} |
gravitational-waves, applied-physics
This is also a similar question to Applications of physics beyond QFT but I am specifically interested in gravitational waves in general, not quantum physics theory. There are no foreseeable applications of gravitational waves.
The gravitational waves that were detected last year... granted, they were produced quite a distance away, but their production required two roughly 30 solar mass black holes to coalesce, converting roughly 3 solar masses worth of mass into gravitational wave energy, producing gravitational waves at a power level that, at its peak, briefly outshone all the electromagnetic radiation from the entire visible universe.
And that, we were able to detect, just barely, with two gigantic detectors that were situated half a continent apart.
Electromagnetic waves were used widely long before Maxwell's theory, e.g., in the form of visible light (even if it was not recognized before Maxwell that these are, in fact, electromagnetic waves). But the specific predictions of Maxwell's theory, which allowed electromagnetic waves to be produced directly using electric and/or magnetic equipment, were indeed tested initially just to validate the theory. Here is a quote from Heinrich Hertz, who conducted the first such experiment, a quarter century after Maxwell's prediction: "It's of no use whatsoever [...] this is just an experiment that proves Maestro Maxwell was right—we just have these mysterious electromagnetic waves that we cannot see with the naked eye. But they are there." | {
"domain": "physics.stackexchange",
"id": 35302,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gravitational-waves, applied-physics",
"url": null
} |
space-telescope, instruments, ligo
Title: At the intersection of engineering and astronomy in its structure as a scientific discipline Astronomy is the comprehensive study of what lies beyond the Earth. Modern astronomy (I relied on classifications from here and here) is divided into large sections (astrophysics, astrogeology, astrobiology, astrometry). In addition to theoretical and computational topics, astronomical instruments are also important for studying celestial bodies. In fact, many branches of astronomy are closely related to each other, so such classifications are somewhat arbitrary. While the close connection between the usual sections is very noticeable, the connection of these same sections with the engineering of astronomical instruments is not so apparent. What we're talking about here:
on the one hand, any astronomical instrument (for example,
optical telescope, infrared and gamma detectors, cosmic
ray detectors, space observatory, etc.) interacts with the space
environment and must perform its work correctly, and therefore the
development engineers of these devices must also understand the
physical processes underlying the corresponding phenomena.
on the other hand, all these aspects are presented in the form of a
series of technical requirements (power supply, accuracy,
throughput, permissible modes, autonomy, etc.). This is
where the interaction between astronomers and engineers ends;
engineers do not touch physical processes and develop instruments to
meet technical requirements. | {
"domain": "astronomy.stackexchange",
"id": 7150,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "space-telescope, instruments, ligo",
"url": null
} |
c#, design-patterns, interview-questions, observer-pattern
public class TextMessage : IMessage
{
private readonly string _text;
public TextMessage(string text)
{
_text = text;
}
public string Print()
{
return _text;
}
}
public class Image : IMessage
{
private readonly uint _width;
private readonly uint _height;
public Image(uint width, uint height)
{
_width = width;
_height = height;
}
public string Print()
{
return string.Format("Image width:{0} height {1}", _width, _height);
}
}
/// <summary>
/// this class handles all of the different observers, observers listen to IObservables..
/// </summary>
/// <typeparam name="T"></typeparam>
public class NewsChannel : IObservable<IMessage>
{
private readonly List<IObserver<IMessage>> _observers;
public NewsChannel()
{
_observers = new List<IObserver<IMessage>>();
}
public IDisposable Subscribe(IObserver<IMessage> observer)
{
if (!_observers.Contains(observer))
{
_observers.Add(observer);
}
return new Unsubscriber<IMessage>(_observers, observer);
}
/// <summary>
/// send a message of certain type to all of the observers
/// </summary>
/// <param name="message"></param>
public void SendMessage(IMessage message)
{
foreach (var observer in _observers)
{
if (message != null)
{
observer.OnNext(message);
}
else
{
observer.OnError(new ArgumentNullException());
}
}
}
public void EndMessages()
{
foreach (var observer in _observers)
{
observer.OnCompleted();
}
_observers.Clear();
}
}
/// <summary>
/// this also can be a private class inside the NewsChannel class
/// </summary>
/// <typeparam name="T"></typeparam> | {
"domain": "codereview.stackexchange",
"id": 35694,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, design-patterns, interview-questions, observer-pattern",
"url": null
} |
haskell, monads, telegram
downloadFile :: String -> HandlerAction (String, L.ByteString)
downloadFile fileId = do
result <- Telegram.downloadFile fileId
maybe (throwError "Won't download from Telegram") return result
-- Private
askContext :: HandlerAction HandlerContext
askContext = ask
newtype HandlerT m a = HandlerT
{ runHandlerT :: ExceptT String (
ReaderT HandlerContext
m) a
} deriving newtype ( Applicative
, Functor
, Monad
, MonadIO
, MonadReader HandlerContext
, MonadError String
)
deriving anyclass ( MonadSession
, MonadTelegram
, MonadDropbox
, MonadLogger
)
instance MonadTrans HandlerT where
lift = HandlerT . lift . lift
postMessage :: MonadTelegram m
=> (TTypes.PostMessage -> TTypes.PostMessage)
-> HandlerContext
-> m Int
postMessage initializer context =
let chatId = userId context
originalId = messageId context
in Telegram.sendMessage $ initializer $
TTypes.PostMessage { TTypes.chat_id = chatId
, TTypes.reply_to_message_id = Just originalId
, TTypes.reply_markup = Nothing
, TTypes.text = ""
}
mapAnswers :: [[String]] -> [[TTypes.InlineKeyboardButton]]
mapAnswers = (map . map) (\answer -> TTypes.InlineKeyboardButton
{ text = answer
, callback_data = answer
})
sendQuestion :: MonadTelegram m
=> String
-> [[TTypes.InlineKeyboardButton]]
-> HandlerContext
-> m Int
sendQuestion question keyboard =
let initialize message = message { TTypes.text = question
, TTypes.reply_markup = Just $ TTypes.InlineKeyboardMarkup
{ inline_keyboard = keyboard }
}
in postMessage initialize | {
"domain": "codereview.stackexchange",
"id": 39399,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "haskell, monads, telegram",
"url": null
} |
SOLVING RECURRENCES 1.2 The Tree Method The cost analysis of our algorithms usually comes down to finding a closed form for a recurrence. Now that we know the three cases of the Master Theorem, let us practice one recurrence for each of the three cases. For example, we can ignore floors and ceilings when solving our recurrences, as they usually do not affect the final guess. Steps of Recursion Tree method. Affects the level TC. Use induction to show that the guess is valid. For Example, the Worst Case Running Time T(n) of the MERGE SORT Procedures is described by the recurrence. There are mainly three steps in the recursion tree method. Construct a recursion tree from the recurrence relation at hand. When implemented well, it can be somewhat faster than merge sort and about two or three times faster than heapsort. Quicksort is an in-place sorting algorithm. Developed by British computer scientist Tony Hoare in 1959 and published in 1961, it is still a commonly used algorithm for sorting. Visit the current node data in the postorder array before exiting from the current recursion. Final Exam Computer Science 112: Programming in C++ Status: Computer Science 112: Programming in C++ Course Practice. 1. Recursion Tree 2. Substitution Method - guess runtime and check using induction 3. Master Theorem 3.1 Recursion Tree Recursion trees are a visual way to devise a good guess for the solution to a recurrence, which can then be formally proved using the substitution method. Types Of Problem We can solve using the Recursion Tree Method: Cost Of Root Node will be Maximum. 9.
def foo():
    s = 0
    i = 0
    while i < 10:
        s = s + i
        i = i + 1
    return s
print foo()
A recurrence relation is an equation or inequality that describes a function in terms of its value on smaller inputs or as a function of preceding (or lower) terms. The recursion-tree method can be unreliable, just like any method that uses ellipses (…). First step is to write the above recurrence relation in a characteristic equation form.
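As a quick sanity check on the merge sort recurrence mentioned above, one can evaluate it directly and compare against the Master Theorem's prediction (an illustrative sketch, not from the original notes):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Merge-sort style recurrence T(n) = T(ceil(n/2)) + T(floor(n/2)) + n,
    # with base case T(1) = 1; floors and ceilings handled explicitly.
    if n <= 1:
        return 1
    return T((n + 1) // 2) + T(n // 2) + n

# Master Theorem (case 2) predicts T(n) = Theta(n log n); for powers of two
# the exact closed form is n*log2(n) + n, so the ratio below tends to 1.
for n in (2**5, 2**10, 2**20):
    print(n, T(n) / (n * math.log2(n)))
```

This also illustrates why floors and ceilings can usually be ignored: they change the value only by lower-order terms.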
The recursion tree method is good for generating guesses for the substitution method. There are three main methods for solving recurrences. In fact in CLRS | {
"domain": "medetorax.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347853343058,
"lm_q1q2_score": 0.8358390757096572,
"lm_q2_score": 0.8577681013541613,
"openwebmath_perplexity": 687.324414298076,
"openwebmath_score": 0.709050178527832,
"tags": null,
"url": "https://www.medetorax.com/uncw/university/49905207835509e5305"
} |
motor
Title: Diagnosing a lower torque on one motor of a Zumo Robot I have a Zumo Shield. It is small and tracked, and contains two micro metal gear motors soldered to it.
In theory, these motors are identical and do seem to rotate at the same speed. However, the left side seems to have considerably less torque than the right. The left side will stall on small obstacles and the robot ends up veering off course.
As another test of turning force: if I set both motors to spin at the same speed while the robot is off the floor, I can put my finger on the track and cause the left to stop with a lot less pressure than the right.
I know it's never going to be perfect and some sort of feedback loop will be required to keep it straight; however, there is a considerable difference between the two.
What is the likely cause of this difference? Soldering? Motors? Gearing? Motor drivers?
How can I go about diagnosing and therefore fixing this problem? This is not a complete answer, but one generic technique for diagnosing differences between devices which should be identical is divide and conquer. Swap items until you see where the problem follows.
For instance, if you swap the motors and the problem stays with the motor, you probably have a faulty motor. If a different motor with the same amplifier exhibits the same problem, it's probably the motor driver. If swapping them and neither of them works properly, or both work, it may be (or have been) the soldering.
Once you have identified the subsystem which is at fault, you can then continue to investigate and find a more specific subsystem or component.
For instance, if it appears to be the motor drivers, you can try swapping the amplifier chips. If the problem follows the chip, replace the faulty one, if both chips work as well as each other, try measuring voltages across the chip, current flowing through the chip, the resistances seen between the pins of the chip and the motor and power supply connectors etc.
Similarly with the motor module, you can try swapping wheels (as you've done) gearboxes, encoders (if they have them) and so on. | {
"domain": "robotics.stackexchange",
"id": 2034,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "motor",
"url": null
} |
geophysics, climate-change, glaciology
Title: When will the Final Ice Age happen? As the Sun's luminosity slowly rises, the Earth's surface temperature will climb. Will Earth ever be too warm to have any more glacial periods? If so, when will that be?
Edit: The existing answer misunderstood the question. I'd like some boundary conditions, e.g. "when the sun's luminosity is 10% higher and the Earth becomes a 'moist greenhouse'" and "when the Sun turns Earth into a cinder". Those conditions would imply that the last Ice Age will happen 1-4 billion years from now. A better estimate would be awesome. Unless there's something I'm missing, with Stefan-Boltzmann and some reasonable assumptions (e.g. linear increase in luminosity) it should be possible to get a back-of-the-envelope calculation for this. In my opinion, there are three primary factors. There's a difference between ice ages and ice age periods. Milankovitch cycles appear to play a key role in the forming and receding of individual ice ages, but what they don't appear to do is trigger ice age periods.
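The Stefan-Boltzmann back-of-the-envelope estimate suggested in the question's edit could be sketched like this (every parameter value here is an assumption, including the linear 10%-per-Gyr luminosity growth and a fixed albedo, and greenhouse feedbacks are ignored):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L0 = 3.828e26      # present solar luminosity, W
D = 1.496e11       # Earth-Sun distance, m
ALBEDO = 0.3       # assumed Bond albedo, held fixed

def t_eq(gyr_from_now):
    """Equilibrium temperature of Earth, no greenhouse effect modeled."""
    L = L0 * (1.0 + 0.1 * gyr_from_now)  # assumed linear luminosity increase
    return (L * (1 - ALBEDO) / (16 * math.pi * SIGMA * D**2)) ** 0.25

# The present-day value comes out near the textbook ~255 K; the greenhouse
# effect (not modeled) lifts the real surface temperature above this.
for t in (0, 1, 2, 3):
    print(t, "Gyr:", round(t_eq(t), 1), "K")
```

Such a sketch only bounds the problem; as the rest of the answer argues, ice age periods depend on factors (continent placement, CO2) that no equilibrium-temperature formula captures.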
The modern Quaternary ice age period began about 2.58 million years ago. Milankovitch cycles likely began long before then, so it's unlikely that Milankovitch cycles triggered the period, only that they play a role in the cycle within the period.
Same is likely true for solar maximums and minimums. They come and go, but they aren't likely the drivers of ice age periods.
Timeline of glaciation
The Quaternary, the Karoo and the Andean-Saharan ice ages all happened in the last 450 million years, separated by long intervals, so the cause should be looked at on longer timescales. Milankovitch cycles operate on 26,000-, 42,000- and 100,000-year periods, much too short to drive changes over millions of years.
OK - the 3 things.
Land and ocean placement,
Solar output (long term, not short term sunspot changes), and
CO2
Land and Ocean Placement | {
"domain": "earthscience.stackexchange",
"id": 540,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "geophysics, climate-change, glaciology",
"url": null
} |
homework-and-exercises, thermodynamics, ideal-gas, textbook-erratum
Title: Confusion about number of molecules in a gas I have solved the following exercise but the answer I get is different from the one stated in the book I am using: I can't see what I am doing wrong so I would be grateful if someone pointed it out to me, thanks.
"Consider a box of volume $1.5L$ full of nitrogen gas which exerts a pressure of $3 atm$ on the walls of the box. The translational kinetic energy of the nitrogen molecules of the gas is $6.42\cdot 10^{-28}J$. Find the number of nitrogen molecules contained in the box"
My solution:
Perfect gas law ($n=$ number of moles, $N=$number of molecules) $$PV=nRT\Rightarrow n=\frac{PV}{RT}\Rightarrow N=\frac{PV}{RT}N_A\overset{R=k_bN_A}{=}\frac{PV}{k_bT}\overset{K=\frac{3}{2}k_bT}{=}\frac{3PV}{2K}=\frac{3\cdot(303975Pa)(0.0015m^3)}{2\cdot6.42\cdot 10^{-28}J}=1.06\cdot 10^{30}$$ but the book says the correct answer is $1.06\cdot 10^{23}$. I guess there is a typo in the book, most likely in the number for translational kinetic energy. If you try to find temperature from the provided number, it will be about $3\cdot10^{-5}$ K - even if you manage to reach such temperature, nitrogen would be solid. They probably meant to have $6.42\cdot10^{-21}$ J for translational kinetic energy which corresponds to $310$ K and then it is clear that you should have some fraction of a mole (remember, one mole at normal conditions takes up 22.4 liters). | {
"domain": "physics.stackexchange",
"id": 77541,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, thermodynamics, ideal-gas, textbook-erratum",
"url": null
} |
Your current calculations assume that the force due to air pressure is acting against the force due to hydrostatic pressure on the lid. Is this assumption really true?
Remember, air pressure is acting on the all exterior surfaces of the barrel, and air pressure is also acting on the column of water in the pipe.
Sorry, will try again.
Gold Member
I am not that good at drawing forces, sorry. Here is my attempt:
orange - water's force on the lid
black - air's force on the lid
grey - forces of no interest
I guess that the main problem here is to realize whether the air pressure on the column of water translates into more water pressure on the lid.
I am sorry but I have no idea how to turn that into a calculation. Thank you for your help so far!
SteamKing
Staff Emeritus
Homework Helper
I am not that good at drawing forces, sorry. Here is my attempt:
orange - water's force on the lid
black - air's force on the lid
grey - forces of no interest
I guess that the main problem here is to realize whether the air pressure on the column of water translates into more water pressure on the lid.
Take the calculation in steps.
How would you calculate the force due to air pressure acting on the outside of the barrel (just the top), assuming that there was no water inside the barrel?
I am sorry but I have no idea how to turn that into a calculation. Thank you for your help so far!
Why not? You did it when the fluid in the column was water.
You already know what the pressure of the atmosphere is. The pressure of the atmosphere and the pressure of the water in the pipe can be added together, since pressure is a scalar quantity.
[EDIT] Remember, we are interested only in the forces acting on the lid of the barrel. The forces acting on the sides of the barrel and of the pipe cancel out and are of no interest anyway, since they are not acting on the barrel lid.
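The scalar addition of pressures just described can be put into numbers; the column height and lid area below are assumed purely for illustration:

```python
RHO = 1000.0      # density of water, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2
H = 2.0           # assumed height of the water column in the pipe, m
P_ATM = 101325.0  # atmospheric pressure, Pa
LID_AREA = 0.25   # assumed area of the barrel lid, m^2

# Pressure is a scalar, so at the underside of the lid the atmospheric
# pressure transmitted through the pipe simply adds to the hydrostatic term.
p_lid = P_ATM + RHO * G * H

# The atmosphere also pushes down on the outside of the lid, so the
# atmospheric contributions cancel and only the water-column term remains.
f_net = (p_lid - P_ATM) * LID_AREA
print(f_net)  # 4905.0 N, i.e. rho*g*h*area
```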
Rectifier
Chestermiller
Mentor
Your equation for the fluid pressure should include the atmospheric pressure on the fluid at the top of the glass pipe :
$$p=p_{atm}+dgh$$
Chet
Rectifier
Gold Member
Take the calculation in steps.
How would you calculate the force due to air pressure acting on the outside of the barrel (just the top), assuming that there was no water inside the barrel? | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9615338123908151,
"lm_q1q2_score": 0.815349673344664,
"lm_q2_score": 0.8479677602988602,
"openwebmath_perplexity": 1311.849025554496,
"openwebmath_score": 0.9931880235671997,
"tags": null,
"url": "https://www.physicsforums.com/threads/hydrostatic-pressure-barrel-vs-small-cylinder-of-water.831332/"
} |
incomplete
I think $$f(x)=x+\sum_{i=0}^n x\%2^i-a[x\neq0]$$ where $\%$ is the sawtooth, $[x\neq0]$ is the Iverson bracket, $n\in\mathbb{N}$ and $2^{n+1}>a\in\mathbb{N}$ satisfies $$\lvert S(f)\rvert=2^{n+1}+a.$$ However, I just can't seem to prove that $f$ is surjective. If someone can prove that $f$ is surjective, then I'll present the rest of the proof.
• I have doubt in your last line. It seems like $(x+y)\%\alpha-x\%\alpha-y\%\alpha$ can be $0$ or $-\alpha$, not anything else... – guest Apr 28 '17 at 5:31
• You're right, thanks. – Lawrence C. Apr 30 '17 at 10:35 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9845754492759499,
"lm_q1q2_score": 0.8026578900482598,
"lm_q2_score": 0.8152324848629214,
"openwebmath_perplexity": 929.0682563624706,
"openwebmath_score": 1.0000100135803223,
"tags": null,
"url": "https://math.stackexchange.com/questions/2239928/how-many-elements-can-the-set-sf-fxy-fx-fy-x-y-in-r-have"
} |
c#, asp.net-mvc, razor, asp.net-mvc-2
Title: MVC binding list of values to checkboxlist in an efficient way I have this code which generates and shows list of selectable items that user can post to controller in order to save into database.
Model looks like this:
public class DeliveryAddressSelectionModel
{
public List<DeliveryAddress> DeliveryAddresses { get; set; }
public List<int> SelectedAddressIDs { get; set; }
}
public class DeliveryAddress
{
public int ID { get; set; }
public string Location { get; set; }
}
Controller looks like
[HttpGet]
public ActionResult Edit()
{
var model = new DeliveryAddressSelectionModel();
model.DeliveryAddresses = AddressDetails.GetAvailable(); //Get from EF
model.SelectedAddressIDs = new List<int>(); //Populate this list with Address IDs for pre-selection
return View(model);
}
[HttpPost]
public ActionResult Edit(DeliveryAddressSelectionModel model)
{
//save the model.SelectedAddressIDs in DB
return RedirectToAction("Index");
}
And finally view looks like this:
<form method="post">
<ul>
@foreach(var address in Model.DeliveryAddresses)
{
<li>
<input id="address@(address.ID)" type="checkbox"
name="SelectedAddressIDs"
value="@address.ID"
@(Model.SelectedAddressIDs.Contains(address.ID) ? "checked" : "")
/>
<label for="address@(address.ID)">@address.Location</label>
</li>
}
</ul>
<input type="submit" value="Save Address"/>
</form> | {
"domain": "codereview.stackexchange",
"id": 17479,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, asp.net-mvc, razor, asp.net-mvc-2",
"url": null
} |
particle-physics, standard-model, higgs, electroweak
The electroweak interactions are entirely symmetric between the 3 families, so there is a completely exact SU(3) unbroken to all orders. The SU(6)xSU(6) breaking makes a collection of massless Goldstone bosons, massless pions. The number of massless pions is the number of generators of SU(6), which is 35. Of these, 8 are exactly massless, while the rest get small masses from electroweak interactions (but 3 of the remaining 27 go away into W's and Z's by Higgs mechanism, see below). The 8 massless scalars give long-range nuclear forces, which are an attractive inverse square force between nuclei, in addition to gravity.
The hadrons are all nearly exactly symmetric under flavor SU(6) isospin, and exactly symmetric under the SU(3) subgroup. All the strongly interacting particles fall into representation of SU(6) now, and the mass-breaking is by terms which are classified by the embedding of SU(3) into SU(6) defined by rotating pairs of coordinates together into each other.
The pions and the nucleons are stable, the pion stability is ensured by being massless, the nucleon stability by approximate baryon number conservation. At least the lowest energy SU(3) multiplet
The condensate order-parameter involved in breaking the chiral SU(6) symmetry of the quarks is $\sum_i \bar{q}_i q_i$ for $q_i$ an indexed list of the quark fields u,d,c,s,t,b. The order parameter is just like a mass term for the quarks, and I have already diagonalized this order parameter to find the mass states. The important thing about this condensate is that the SU(2) gauge group acts only on the left-handed part of the quark fields, and the left-handed and right handed parts have different U(1) charge. So the condensate breaks the SU(2)xU(1) gauge symmetry. | {
"domain": "physics.stackexchange",
"id": 3941,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, standard-model, higgs, electroweak",
"url": null
} |
def mset_choose(s, d):
    r"""Compute the "multiset coefficient" :math:`\binom{s}{d}`."""
A = PolynomialRing(QQ, len(s), 'A').gens()
mono = prod(a^i for a, i in zip(A, s))
Z = multiset_cycle_index(list_to_multiset(d))
return Z.expand(len(A), A).coefficient(mono)
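As a sanity check on small inputs (my own addition, not part of the original post), the same count can be obtained by brute force in plain Python: treat the elements as distinguishable, split every ordering into blocks of the requested sizes, and collapse duplicates:

```python
from itertools import permutations

def mset_partitions_brute(s, d):
    # Count distinct partitions of the multiset s into parts of sizes d
    # by enumerating all orderings and de-duplicating the resulting
    # (sorted) collections of parts. Exponential; small inputs only.
    seen = set()
    for perm in permutations(s):
        parts, i = [], 0
        for size in d:
            parts.append(tuple(sorted(perm[i:i + size])))
            i += size
        seen.add(tuple(sorted(parts)))
    return len(seen)

# {1,1,2,2} into two pairs: {1,1}{2,2} and {1,2}{1,2}
print(mset_partitions_brute((1, 1, 2, 2), (2, 2)))  # 2
```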
import sys

if __name__ == '__main__':
if len(sys.argv) != 3:
        print("Usage: %s 's_1, s_2, ..' 'd_1, d_2, ..'" % sys.argv[0])
print("Outputs the number of ways the multiset s can be partitioned into multisets of sizes d_i.")
sys.exit(1)
    s = [int(t) for t in sys.argv[1].replace(',', ' ').split()]
    d = [int(t) for t in sys.argv[2].replace(',', ' ').split()]
if sum(s) != sum(d):
print("The sum of the elements of s must equal the sum of the elements of d")
sys.exit(1)
print(mset_choose(s, d)) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.983596967003007,
"lm_q1q2_score": 0.8280171572852281,
"lm_q2_score": 0.8418256512199033,
"openwebmath_perplexity": 600.3642210983606,
"openwebmath_score": 0.684150218963623,
"tags": null,
"url": "https://math.stackexchange.com/questions/2856255/partitioning-a-multiset-into-multisets-of-fixed-sizes"
} |
zx-calculus
Title: ZX-Calculus: understand clifford+T/general ZX rules This paper that proves the completeness of the ZX-Calculus introduces different gates:
and
However, they seem very cryptic to me (except maybe the rule E). What is the intuition (what they mean, and how they were obtained) behind these rules? I guess some of them (maybe the last one) have something to do with trigonometry, but it's not really trivial when looking at them.
And is there a way to "decompose" them to remember/understand them easily? I can indeed see some structures that appear in several places (like a red node with one or two attached nodes with same angles), but it's still a bit hard for me to make sense of this… Since this paper, there have been several different axiomatisations, arguably simpler than this one. For instance in: https://arxiv.org/pdf/2007.13739.pdf and https://arxiv.org/pdf/1812.09114.pdf. In the last one, in particular, all these rules (except (E)) are replaced by: | {
"domain": "quantumcomputing.stackexchange",
"id": 2829,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "zx-calculus",
"url": null
} |
python, python-3.x, database
def __encrypt_db(self) -> None:
"""
Encrypts the database with Fernet.
"""
with open(self.file_name, 'rb') as db_file:
db = db_file.readline()
encrypted = self.fernet.encrypt(db)
with open(self.file_name, 'wb') as db_file:
db_file.write(encrypted)
def __decrypt_db(self) -> None:
"""
Decrypts the database with Fernet.
"""
with open(self.file_name, 'rb') as db_file:
db = db_file.readline()
decrypted = self.fernet.decrypt(db)
with open(self.file_name, 'wb') as db_file:
db_file.write(decrypted)
def __db_empty(self) -> bool:
"""
    Determines if the database is empty.
"""
with open(self.file_name, "r") as db_file:
return not db_file.readlines()
def __repr__(self):
return f"DB: {self.name}"
class PasswordLengthError(Exception):
"""
Raised when the user enters a password less than 32 characters long.
"""
def __init__(self, message):
super().__init__(message)
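One plausible reading of the 32-character requirement (my assumption, not stated in the post): Fernet keys are 32 raw bytes encoded as url-safe base64, so a 32-character password can serve directly as key material. A minimal sketch of that mapping, with a hypothetical helper name:

```python
import base64

def password_to_fernet_key(pw: str) -> bytes:
    # Assumption: the password is used directly as the 32 bytes of key
    # material that Fernet expects, encoded as url-safe base64.
    raw = pw.encode("utf-8")
    if len(raw) != 32:
        raise ValueError("password must be exactly 32 bytes long")
    return base64.urlsafe_b64encode(raw)

key = password_to_fernet_key("zSLfLhAvjhmX6CrzCbxSE2dzXEZaiOfO")
print(len(key))  # 44 -- an encoded Fernet key is 44 base64 characters
```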
Below is an example file of how an average user would work with this database:
test_db.py
from lindb import LinDB
# Example password 32 characters long #
pw = "zSLfLhAvjhmX6CrzCbxSE2dzXEZaiOfO"
db = LinDB("DB_TEST", pw=pw)
# Decrypts the file if the password is correct #
db.connect()
# Start inserting pairs #
db.insert({"Ben": 16})
db.insert({"Hannah": 17})
db.insert({"Will": 18})
# Query database and display results # | {
"domain": "codereview.stackexchange",
"id": 38888,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, database",
"url": null
} |
convolution
or $h[n-m] = h[n-m-N]$) to get the argument into the range for which
you know the value of $h[\cdot]$.
Also, note that $(1)$ holds for all integers $n$, but we don't
need to calculate more than $N$ sums like $(1)$ because
$y[n]$ is also a periodic sequence of period $N$ and so we have
for any integer
$M$ that $y[M] = y[M \bmod N]$ where, of course, $0 \leq M \bmod N \leq N-1$.
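The wrap-around rule described above can be sketched in code, reducing every index with mod $N$ so $h[\cdot]$ is only evaluated on $0..N-1$ (the example values are mine, not from the answer):

```python
N = 3
x = [1, 2, 3]
h = [4, 5, 6]

def circ_conv(x, h, N):
    # y[n] = sum_m x[m] * h[(n - m) mod N]; the modulo implements the
    # periodicity h[n - m] = h[n - m - N].
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

print(circ_conv(x, h, N))  # [31, 31, 28]
```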
Exercise: write out the above formula explicitly, meaning
no summations, for $n = 0, 1, 2$ and proceed from there. Go on;
you can do it. There are only three sums of three terms each. | {
"domain": "dsp.stackexchange",
"id": 731,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "convolution",
"url": null
} |
quantum-mechanics, quantum-field-theory, standard-model, gauge-theory
Title: Is the $U(1)$ in the standard model identified with quantum-mechanical phase? I think there's a tension between two claims I've read:
The standard model is Yang-Mills theory with gauge group $SU(3) \times SU(2) \times U(1)$. Here the $U(1)$ factor is data on the same level as the other two factors, and has nothing at all to do with quantum mechanics. For instance, it's present even in the classical theory.
I've heard it claimed that charge conservation corresponds via Noether's theorem to the gauge symmetry of quantum mechanics given by the fact that the overall phase of a quantum mechanical wavefunction can never be measured; only relative phases can be measured. For instance, this is how I interpret the discussion in Connection to gauge invariance section of the Wikipedia article on charge conservation.
From the second claim, it seems like electromagnetism has a "privileged" relationship to quantum mechanics -- its gauge group comes directly from the postulates of quantum mechanics in a way that the gauge groups of the weak and strong forces do not. This makes it puzzling to me that all three gauge groups appear to be treated on "equal footing" in the standard model. It also seems strange (or at least very ironic) that electromagnetism should have such a rich classical limit if it arises directly from the postulates of quantum mechanics.
Let me try to summarize my confusion in a few more pointed
Questions:
Is the $U(1)$ factor in the Standard Model gauge group really "identified" with the gauge symmetry of quantum mechanical phase?
If not, is it still somehow correct to claim that charge conservation is related to the gauge symmetry of the phase of quantum mechanical wavefunctions?
If so, does this somehow give electromagnetism a "privileged" role in quantum theory? For instance, is it impossible to formulate a quantum field theory which doesn't include any form of electromagnetism? And how does this work out mathematically?
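The unobservability of the overall phase referenced above can be illustrated numerically (my own sketch, not part of the original question): measurement probabilities $|\langle a|\psi\rangle|^2$ are unchanged when the whole state is multiplied by $e^{i\theta}$.

```python
import cmath

psi = [1 / 2 ** 0.5, 1j / 2 ** 0.5]   # a normalized two-level state
theta = 0.7
phased = [cmath.exp(1j * theta) * c for c in psi]

# Probabilities in the computational basis are identical up to float error.
probs = [abs(c) ** 2 for c in psi]
probs_phased = [abs(c) ** 2 for c in phased]
print([round(p, 12) for p in probs] == [round(p, 12) for p in probs_phased])  # True
```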
Is the U(1) factor in the Standard Model gauge group really
"identified" with the gauge symmetry of quantum mechanical phase? | {
"domain": "physics.stackexchange",
"id": 59324,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-field-theory, standard-model, gauge-theory",
"url": null
} |