text stringlengths 1 1.11k | source dict |
|---|---|
atmospheric-radiation, energy-balance, radiation-balance
For example, I'm imagining a column with a single spherical cloud in it, and the sun directly overhead, but no clouds in any nearby columns. In such a situation, wouldn't there be a horizontal radiative flux divergence, i.e. a net horizontal radiative flux out of the column? Would this effect still have a negligible impact on net column heating, or does nothing like this occur in real atmospheres? Yes, there are horizontal radiation fluxes. These can change the heating rate by 10-40 K hr$^{-1}$. They also change depending on the extent of the atmosphere being considered. One can also imagine that they are a bit stronger around sunrise and sunset, when the sun is not directly overhead and the beam has a longer horizontal path.
There is work, such as the Neighboring Column Approximation, which tries to get around that. There is another paper, which I can't find right now, that also tries to resolve this issue. | {
"domain": "earthscience.stackexchange",
"id": 2074,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "atmospheric-radiation, energy-balance, radiation-balance",
"url": null
} |
homework-and-exercises, string-theory, gauge-theory
$$\begin{align}
\partial_\mu \left( \frac{F}{1-F^2} \right)^{\mu\nu}
&=
\left( \frac{F}{1-F^2} \right)^{\mu\rho} \partial_\mu F_{\rho\sigma} \left( \frac{F}{1-F^2} \right)^{\sigma\nu}
\\ &\qquad+ \left( \frac{1}{1-F^2} \right)^{\mu\rho} \partial_\mu F_{\rho\sigma} \left( \frac{1}{1-F^2} \right)^{\sigma\nu} \end{align}$$ | {
"domain": "physics.stackexchange",
"id": 9254,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, string-theory, gauge-theory",
"url": null
} |
quantum-algorithms, grovers-algorithm, textbook-and-exercises
Title: Calculate $\langle x | D | y \rangle$ for arbitrary $x,y \in \{0,1\}^n$ We are considering Grover's algorithm with a search space of size $2^n$ for an arbitrary integer $n$, and a unique marked element $x_0$.
Question: Calculate $\langle x | D | y \rangle$ for arbitrary $x,y \in \{0,1\}^n$
Answer: Using the expression $D = -(I-2|+^n\rangle\langle+^n|)$, we have
$$\langle x | D | y \rangle =
\begin{cases}
\frac{2}{N}-1 &\quad\text{if } x=y\\
\frac{2}{N} &\quad\text{if } x \neq y
\end{cases}
$$
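This can also be checked numerically; a quick sketch (not from the original notes, with an arbitrary choice of $n$):

```python
import numpy as np

n = 3                                   # arbitrary number of qubits for the check
N = 2 ** n                              # search-space size
plus = np.full((N, 1), 1 / np.sqrt(N))  # |+^n>: uniform superposition over all N basis states
D = 2 * (plus @ plus.T) - np.eye(N)     # D = -(I - 2|+^n><+^n|)

# Diagonal entries: <x|D|x> = 2/N - 1
assert np.allclose(np.diag(D), 2 / N - 1)
# Off-diagonal entries: <x|D|y> = 2/N for x != y
assert np.allclose(D[~np.eye(N, dtype=bool)], 2 / N)
```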
How has the equality $D = -(I-2|+\rangle\langle+|)$ been derived? It's from these notes: https://people.maths.bris.ac.uk/~csxam/teaching/qc2020/lecturenotes.pdf
How do I derive the case split? I cannot see where to start evaluating this. Grover's diffusion operator $D$ can be written as $H^{\otimes n}U_0H^{\otimes n}$, where $U_0$ is the following matrix $$\begin{bmatrix}-1 & 0 & 0 & \dots & 0 \\ 0 & 1 & 0 & \dots & 0
\\ \vdots & \vdots & \ddots & & \vdots \\ 0 & 0 & \dots & & 1 \end{bmatrix}$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 1734,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-algorithms, grovers-algorithm, textbook-and-exercises",
"url": null
} |
visible-light, photons, astronomy, galaxies
Question: | {
"domain": "physics.stackexchange",
"id": 57426,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "visible-light, photons, astronomy, galaxies",
"url": null
} |
navigation, odometry, turtlebot, amcl
Title: amcl not publishing map -> odom
After trying to simulate turtlebot navigation on STDR instead of Stage, I always run into missing-transform problems and navigation can't start. I ran view_frames and put a link to the resulting pdf.
pdf file of TF tree
The particularity of STDR is that it publishes its own map and doesn't need map_server (from what I could understand). I also had to manually add some transforms to avoid a lot of remapping:
odom -> base_footprint, which is just a copy of map_static -> robot0
and
base_link -> base_laser_link, which is a copy of robot0 -> robot0_laser_0
When I run everything, I get the following error:
Waiting on transform from base_footprint to map to become available before running costmap, tf error: Could not find a connection between 'map' and 'base_footprint' because they are not part of the same tree. Tf has two or more unconnected trees. | {
"domain": "robotics.stackexchange",
"id": 19518,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, odometry, turtlebot, amcl",
"url": null
} |
electromagnetism, quantum-electrodynamics, pair-production
Title: Why does pair production only occur in an electric but not a magnetic field? I recently read that photon "decay" (if you can call it that) in an external field occurs only in an electric field but not in a magnetic field. The reason given is that the Euler-Heisenberg Lagrangian is real-valued for a magnetic field, whereas it has a non-vanishing imaginary part in the case of an electric field (the calculation is quite lengthy, so I won't repeat it here). That explanation is great, but I was wondering if there is also an intuitive physical reason why photons only produce $e^- e^+$ pairs in an electric field? | {
"domain": "physics.stackexchange",
"id": 81295,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, quantum-electrodynamics, pair-production",
"url": null
} |
6. SOLVED BY Samy_A For any open $A\subseteq \mathbb{R}$, it holds that if $f^\prime(x) = g^\prime(x)$ for each $x\in A$, then there is some $C\in \mathbb{R}$ such that $f(x) = g(x) + C$.
7. SOLVED BY fresh_42 If $(a_n)_n$ is a sequence such that for each positive integer $p$ it holds that $\lim_{n\rightarrow +\infty} a_{n+p} - a_n = 0$, then $a_n$ converges.
8. SOLVED BY jbriggs444 There is no function $f:\mathbb{R}\rightarrow \mathbb{R}$ whose graph is dense in $\mathbb{R}^2$.
9. SOLVED BY andrewkirk If $f:\mathbb{R}^2\rightarrow \mathbb{R}$ is a function such that $\lim_{(x,y)\rightarrow (0,0)} f(x,y)$ exists and is finite, then both $\lim_{x\rightarrow 0}\lim_{y\rightarrow 0} f(x,y)$ and $\lim_{y\rightarrow 0}\lim_{x\rightarrow 0} f(x,y)$ exist and are finite.
10. SOLVED BY andrewkirk If $A$ and $B$ are connected subsets of $[0,1]\times [0,1]$ such that $(0,0),(1,1)\in A$ and $(1,0), (0,1)\in B$, then $A$ and $B$ intersect. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9458012762876287,
"lm_q1q2_score": 0.8076317680637863,
"lm_q2_score": 0.8539127492339909,
"openwebmath_perplexity": 381.1466747406665,
"openwebmath_score": 0.9037920236587524,
"tags": null,
"url": "https://www.physicsforums.com/threads/micromass-big-counterexample-challenge.869194/"
} |
special-relativity, speed-of-light, inertial-frames, maxwell-equations, lorentz-symmetry
would become infinitely large unless $E_0=0$. This is to say that the only possible way a particle can travel at the speed of light is for the particle to be massless (i.e., $E_0 = 0$). On the other hand, the only way for a massless particle to have finite energy and momentum is to travel at the speed of light. So a massless particle has to travel at the speed of light, and, conversely, a particle travelling at the speed of light has to be massless. But this is something we derive from Special Relativity - not something we postulate to derive Special Relativity. | {
"domain": "physics.stackexchange",
"id": 41087,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, speed-of-light, inertial-frames, maxwell-equations, lorentz-symmetry",
"url": null
} |
electrostatics, electric-fields, polarization, linear-algebra, dielectric
Title: Eigenvectors in anisotropic media I have several questions:
1) First, when the susceptibility tensor is diagonalized, I don't understand the physical significance of the off-diagonal terms being zero.
$$P_x=\epsilon_0\chi_{11}E_x, P_y=\epsilon_0\chi_{22}E_y, P_z=\epsilon_0\chi_{33}E_z$$
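To make the setup concrete, a small numerical sketch (the $\chi_{ii}$ values are made up, and the scalar $\epsilon_0$ is dropped since it does not affect directions): along a principal axis $\mathbf{P}$ is parallel to $\mathbf{E}$; off-axis it is not.

```python
import numpy as np

# Diagonal susceptibility in its principal-axis frame (illustrative values)
chi = np.diag([1.0, 2.0, 3.0])

E_axis = np.array([1.0, 0.0, 0.0])   # E along a principal axis
P_axis = chi @ E_axis
assert np.allclose(np.cross(P_axis, E_axis), 0.0)      # P parallel to E

E_skew = np.array([1.0, 1.0, 0.0])   # E not along a principal axis
P_skew = chi @ E_skew                # components scaled by different chi_ii
assert not np.allclose(np.cross(P_skew, E_skew), 0.0)  # P no longer parallel to E
```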
2) As I understand it, D has a different direction from E because the components of E are multiplied by different refractive indices, so to find the eigenvectors of D we use basis vectors, two of which lie in the phase plane and the third perpendicular to it. Is this basis the same as that in the first question, or does it depend on K? | {
"domain": "physics.stackexchange",
"id": 65419,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, electric-fields, polarization, linear-algebra, dielectric",
"url": null
} |
rviz
Originally posted by felix k on ROS Answers with karma: 1650 on 2012-03-05
Post score: 1
The Fuerte versions of RViz and ros_gui both depend on the new little stack called python_qt_binding. Only a Fuerte version of python_qt_binding exists; it did not exist in Electric.
So if you are trying to build the latest rviz and ros_gui code (Fuerte versions) under Electric, it just won't work.
However, Fuerte is coming along pretty well these days, we are close to releasing it. If you do
sudo apt-get install ros-fuerte-visualization | {
"domain": "robotics.stackexchange",
"id": 8487,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rviz",
"url": null
} |
Construct and compare linear, quadratic, and exponential models and solve problems. Then determine the growth/decay factor and growth/decay rate. (e is the base of the natural logarithm.) All exponential growth and decay functions can be represented by the equation y = ka^x: for exponential growth, a > 1; for exponential decay, 0 < a < 1. The value of a, called the multiplier, is the scale factor. Understand the inverse relationship between exponents and logarithms. Connect the points to form a smooth curve. From population growth and continuously compounded interest to radioactive decay and Newton's law of cooling, exponential functions are ubiquitous in nature. For example, given y = a(0.85)^t, identify the initial amount, decay factor, and percent decrease. | {
"domain": "clandiw.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9877587268703083,
"lm_q1q2_score": 0.8335647540503551,
"lm_q2_score": 0.8438951045175643,
"openwebmath_perplexity": 1466.0538707893068,
"openwebmath_score": 0.4987727105617523,
"tags": null,
"url": "http://clandiw.it/fknf/exponential-growth-and-decay-practice-pdf.html"
} |
c#, wpf
Title: Resx Translation Helper, V.2.0 Remove Files Window Related to Resx Translation Helper, V.2.0
This is the code for the Remove Files window. I was able to keep code out of the code-behind here, but I have a feeling I botched something else.
IDisplayOpenFiles.cs:
public interface IDisplayOpenFiles
{
object DataContext { set; }
event EventHandler Closed;
void Show();
void Close();
} | {
"domain": "codereview.stackexchange",
"id": 14317,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, wpf",
"url": null
} |
java, algorithm, sorting, radix-sort
// Make the counter map accumulative:
for (int i = 1; i != NUMBER_OF_COUNTERS; i++) {
counterMap[i] += counterMap[i - 1];
}
// Build the buffer array (which will end up sorted):
for (int i = toIndex - 1; i >= fromIndex; i--) {
int index = extractCounterIndex(array[i], byteIndex);
buffer[counterMap[index]-- - 1] = array[i];
}
// Just copy the buffer to the array:
System.arraycopy(buffer,
0,
array,
fromIndex,
buffer.length);
}
/**
* Sorts the {@code array[fromIndex ... toIndex - 1]} by most significant
* bytes that contain the sign bits.
*
* @param array the array to sort.
* @param buffer the buffer array.
* @param counterMap the counter map. We pass this array in order not to | {
"domain": "codereview.stackexchange",
"id": 45504,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, sorting, radix-sort",
"url": null
} |
c++, c++11, reinventing-the-wheel, pointers
Title: My implementation for std::unique_ptr I just finished learning about move semantics and realized that a nice practical example for this concept is unique_ptr (it cannot be copied, only moved).
For learning purposes, and as a personal experiment, I proceed to try to create my implementation for a smart unique pointer:
template<typename T>
class unique_ptr {
private:
T* _ptr;
public:
unique_ptr(T& t) {
_ptr = &t;
}
unique_ptr(unique_ptr<T>&& uptr) {
_ptr = std::move(uptr._ptr);
uptr._ptr = nullptr;
}
~unique_ptr() {
delete _ptr;
}
unique_ptr<T>& operator=(unique_ptr<T>&& uptr) {
if (this == &uptr) return *this;
_ptr = std::move(uptr._ptr);
uptr._ptr = nullptr;
return *this;
}
unique_ptr(const unique_ptr<T>& uptr) = delete;
unique_ptr<T>& operator=(const unique_ptr<T>& uptr) = delete;
}; | {
"domain": "codereview.stackexchange",
"id": 41715,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, reinventing-the-wheel, pointers",
"url": null
} |
VectorAngle @@@ {{{2.7432, 0., 0.}, {-2.743199, 0., 0.}},
{{2.7432000016, 0., 0.}, {-2.743199992, 0., 0.}}} // InputForm
{3.141592653589793, 3.141592653589793}
I think this is a bug.
You get the correct result from
vectorAngle[vec1_, vec2_] :=
ArcCos[vec1.vec2/(Norm[vec1] Norm[vec2])]
vectorAngle[{-2.7432000000000016, 0.,
0.}, {2.743199999999973, 0., 0.}]
(* ==> 3.14159 *)
This function is just a manual implementation of the definition as stated in the documentation of VectorAngle. Therefore, the current built-in VectorAngle does not appear to be implemented as described in its documentation. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9019206659843132,
"lm_q1q2_score": 0.8183236446406289,
"lm_q2_score": 0.9073122232403329,
"openwebmath_perplexity": 3376.8984388542394,
"openwebmath_score": 0.4402133822441101,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/57377/complex-result-for-real-vectors-in-vectorangle"
} |
comb(k,p) = k!/(p! (k-p)!). Following are the first 6 rows of Pascal's Triangle. Sample Pascal's triangle: each number is the sum of the two numbers above it. Define base cases. Run an inner loop from j = 1 to j = {previous row size} to calculate each element of a row of the triangle. Each subsequent row is created by adding the number above and to the left to the number above and to the right, treating empty elements as 0. Pascal's triangle is a number triangle with numbers arranged in staggered rows | {
"domain": "ftsamples.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9632305360354472,
"lm_q1q2_score": 0.802683688659558,
"lm_q2_score": 0.8333245870332531,
"openwebmath_perplexity": 1349.2390573447963,
"openwebmath_score": 0.34614333510398865,
"tags": null,
"url": "https://ftsamples.com/b2r3o3g6/pascal-triangle-dynamic-programming-python-f2fc73"
} |
bash, file-system
Title: Is there a 'better' way to find files from a list in a directory tree I have created a list of files using find, foundlist.lst.
The find command is simply find . -type f -name "<search_pattern>" > foundlist.lst
I would now like to use this list to find copies of these files in other directories.
The 'twist' in my requirements is that I want to search only for the 'base' of the file name. I don't want to include the extension in the search.
Example:
./sort.cc is a member of the list. I want to look for all files of the pattern sort.*
Here is what I wrote. It works. It seems to me that there is a more efficient way to do this.
./findfiles.sh foundfiles.lst /usr/bin/temp | {
"domain": "codereview.stackexchange",
"id": 5358,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bash, file-system",
"url": null
} |
$\frac{1}{3},...\frac{2^{2m}}{3^{2m+1}}, m=0,1,2,...$ which represents probabilities of winning the game (picking the red disc) by the person who makes the first move.
Total probability is the sum
$\Sigma_{m=0}^{\infty}\frac{2^{2m}}{3^{2m+1}}=\Sigma_{m=0}^{\infty}\left(\frac{2^2}{3^2}\right)^m\frac{1}{3}=\frac{1}{3}\Sigma_{m=0}^{\infty}\left(\frac{4}{9}\right)^m=\frac{1}{3}\cdot\frac{1}{1-4/9}=\frac{3}{5}$
Then the probability of the other person winning is 1-3/5=2/5
Since 3/5>2/5, I would choose to start. (makes sense since I can pick up the red disc on the first move already, without giving the other person a chance to make a single move)
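The series and the two probabilities can be sanity-checked numerically (a sketch, not part of the original solution):

```python
from fractions import Fraction

# Partial sum of sum_{m>=0} (1/3)*(4/9)^m, the first player's winning probability
total = sum(Fraction(1, 3) * Fraction(4, 9) ** m for m in range(100))

assert abs(total - Fraction(3, 5)) < Fraction(1, 10**15)  # converges to 3/5
assert 1 - Fraction(3, 5) == Fraction(2, 5)               # second player's probability
```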
4. Originally Posted by Volga
i. If Y is the number of turns in the game, this is also the number of rounds until the red disc comes up for the first time. I claim that $Y{\sim}Geometric(p)$ and its mass function is $f_Y(y)=(1-p)^{y-1}p$, y=1,2,... ie the number of trials until the first success (success=red disc comes up) | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9777138092657495,
"lm_q1q2_score": 0.8405051201987317,
"lm_q2_score": 0.8596637505099168,
"openwebmath_perplexity": 565.931323784791,
"openwebmath_score": 0.8573198914527893,
"tags": null,
"url": "http://mathhelpforum.com/advanced-statistics/170185-conditional-distributions-another-interesting-question.html"
} |
(enclosing all the three excircles (see for example [3, 6, 9]), and three. A line not on the sphere but through its center connecting the two poles may be called the axis of rotation. The vertices of the blue shape are the centers of the three circles. A tangent to a circle is perpendicular to the radius at the point of tangency. Extend the three semicircles to full circles. No Kimberling centers lie on any of the tangent circles. From the diagram, (distance between the centres)^2. The tangent to C at the point A(a, f(a)) is the line through A whose slope is f'(a). True or False: the tangent to a circle is perpendicular to the radius drawn to the point of tangency. I have a question about creating a circle in a sketch that's tangent to 3 other curves (circles, lines, whatever). Thus ED is perpendicular to FD and AC to FB. Find the length of the radius of each circle. This option is useful when inscribing the Circle within a | {
"domain": "fnaarccuneo.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464485047915,
"lm_q1q2_score": 0.8153871968170521,
"lm_q2_score": 0.8354835432479663,
"openwebmath_perplexity": 369.11882035919484,
"openwebmath_score": 0.6239675283432007,
"tags": null,
"url": "http://ahoy.fnaarccuneo.it/circle-tangent-to-three-circles.html"
} |
waves
If we use that definition, in both cases you will have a wave, because there is a perturbation that moves and transfers energy without mass transport. But is not a sinusoidal wave, that is one that has a sine shape with endless peaks and valleys. However, Fourier's theorem shows that any shape can be decomposed into a collection of sinusoidal waves of different frequencies. What this means is that in your example the wave has multiple frequencies (an infinite number to be more precise) or wavelengths. | {
"domain": "physics.stackexchange",
"id": 17765,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves",
"url": null
} |
ros, pcl, gdb, pcl-1.7, rgbdslamv2
Starting program: /home/nuno/AIMAVProject_hydro_ws/devel/lib/rgbdslam/rgbdslam __name:=rgbdslam __log:=/home/nuno/.ros/log/243e0bb4-ff85-11e3-ac9d-0090f5eb9422/rgbdslam-2.log
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7ffff7ffa000
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffd3b3c700 (LWP 4859)]
[New Thread 0x7fffd333b700 (LWP 4860)]
[New Thread 0x7fffd2b3a700 (LWP 4861)]
[New Thread 0x7fffd2339700 (LWP 4862)]
[New Thread 0x7fffd1b38700 (LWP 4863)]
[New Thread 0x7fffd1337700 (LWP 4864)]
[New Thread 0x7fffd0b36700 (LWP 4865)]
[New Thread 0x7fffd0335700 (LWP 4866)] | {
"domain": "robotics.stackexchange",
"id": 18428,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, pcl, gdb, pcl-1.7, rgbdslamv2",
"url": null
} |
java
public methods
The only method you call in your tester class (which should start with a capitalized T) is isValid(). Reduce the visibility of all the other methods: depending on how the type is used, the user of the API will otherwise see too many methods when they only need one. Besides that, it's a common principle to hide as much of the implementation as you can, also known as 'information hiding'.
Hope this helps,
slowy | {
"domain": "codereview.stackexchange",
"id": 27451,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java",
"url": null
} |
of a dynamic programming problem. Rod Cutting: Here, we are going to learn how to maximize profit by cutting a rod with dynamic programming. Given a rod of length n inches and a table of prices Pi for i = 1, 2, 3, ..., n, determine the maximum revenue Rn obtainable by cutting up the rod and selling the pieces. Description: In this article we are going to see how to maximize profit by cutting a rod with dynamic programming. Like other typical Dynamic Programming (DP) problems, recomputation of the same subproblems can be avoided by constructing a temporary array val[] in a bottom-up manner. In this tutorial we shall learn about the rod cutting problem. We will solve this problem using a dynamic programming approach. The solution to this recursion can be shown to be T(n) = 2^n, which is still exponential behavior. Solution using Recursion (Naive Approach): we can cut a rod of length l at position 1, 2, 3, …, l-1 (or make no cut at all). | {
"domain": "com.br",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9780517475646369,
"lm_q1q2_score": 0.8085669149231183,
"lm_q2_score": 0.8267117940706734,
"openwebmath_perplexity": 799.8505112065792,
"openwebmath_score": 0.5053479075431824,
"tags": null,
"url": "http://autoconfig.jnet3.com.br/white-sideboard-flz/viewtopic.php?page=kawai-kdp-110-review-b41546"
} |
ros, rviz, urdf, robot
Title: Robot in Rviz flashes
I am using my URDF file (for my custom robot) for simulation in RViz, then publishing the robot state (based on the odometry data coming from the real robot). The robot in RViz moves in accordance with the real robot, but it keeps on flashing.
Any reason why this happens?
Originally posted by sumanth on ROS Answers with karma: 86 on 2014-08-12
Post score: 1
This might be due to a bad tf issue (if you have 2 different parents for the robot, for example). Can you make sure the published tf are OK?
rosrun tf view_frames # then look at the generated frames.pdf
roswtf # might also tell you if there's an issue with the tf tree.
Originally posted by Ugo with karma: 1620 on 2014-08-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sumanth on 2014-08-13:
When I run roswtf I get the following error:
ERROR TF multiple authority contention: | {
"domain": "robotics.stackexchange",
"id": 19014,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rviz, urdf, robot",
"url": null
} |
Hence, for $x= \frac{\pi}{2}$, we have...
$e^{i \frac{\pi}{2}} = \cos(\frac{\pi}{2}) + i \sin(\frac{\pi}{2})$
If you know your trigonometry, then you'll know that...
$\cos(\frac{\pi}{2}) = 0$
... and...
$\sin(\frac{\pi}{2}) = 1$
Hence, the equation becomes...
$e^{i\frac{\pi}{2}} = 0 + i (1) = i$
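A quick numerical check of this (a sketch, not part of the original post):

```python
import cmath

# e^{i*pi/2} should equal i
z = cmath.exp(1j * cmath.pi / 2)
assert abs(z - 1j) < 1e-12

# Euler's formula at x = pi/2: cos(pi/2) + i*sin(pi/2) = i
assert abs(cmath.cos(cmath.pi / 2) + 1j * cmath.sin(cmath.pi / 2) - 1j) < 1e-12
```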
5. You've done this a few times on threads to which I have contributed. (sandwiched my post with duplicates of yours)
I don't mean to be redundant- it's kinda slow typing laTex on an iPhone.
Anyhow, I can't think of a legitimate reason why you would create an exact duplicate post, then edit (almost delete) your previous post to ".".
It's just obnoxious in my book
Regardless, the question is answered.
6. Originally Posted by TheChaz
You've done this a few times on threads to which I have contributed. (sandwiched my post with duplicates of yours)
I don't mean to be redundant- it's kinda slow typing laTex on an iPhone. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787864878117,
"lm_q1q2_score": 0.8138913303725647,
"lm_q2_score": 0.824461932846258,
"openwebmath_perplexity": 838.5096795261672,
"openwebmath_score": 0.8395508527755737,
"tags": null,
"url": "http://mathhelpforum.com/trigonometry/175892-why-i-e-i-pi-2-a.html"
} |
php, html, sql, mysql, pdo
It's possible that the index should contain ref_id rather than ref. You don't really provide enough context to tell.
This is basic normalization. Database tables should not contain columns of duplicate information (e.g. numbered columns). Such columns should be moved into their own table so that you don't have to modify the database structure every time you change the business rule.
To make this work, you need at least two SQL queries, one for each table. You have to get the posts_id after the first one. For updates, you would do so with a SELECT (possibly embedded in the query). For inserts, you could do $conn->lastInsertId() after inserting the post. For updates, you might want to delete from the post_refs table and then do an insert. E.g. something like
SELECT id AS posts_id FROM posts WHERE post = :post
If no rows, do
INSERT INTO posts (post) VALUES (:post)
If there are rows, do
DELETE FROM post_refs WHERE posts_id = :posts_id | {
"domain": "codereview.stackexchange",
"id": 29768,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, html, sql, mysql, pdo",
"url": null
} |
quantum-mechanics, wavefunction, hilbert-space, schroedinger-equation
the (Time-Dependent) Schrödinger equation, why can't we use the $\psi_n$'s of say the infinite square well to construct general solutions $\Psi$ of the delta-function well, finite potential well, free particle, etc. Why are we always solving the time-independent Schrödinger equation when we can just use the energy eigenfunctions of the infinite square well? Before I begin let me pause to observe that there is a slight lie in the following words (and, more or less, in the entire undegraduate curriculum) that one discovers when one gets very mathematically involved; right now it's present in this Wikipedia article as "subtleties of the unbounded case"... our "Hermitian" operators are not in general well-defined for all the states that we'd like. This becomes especially important as we look at position and momentum eigenstates -- very often states get unnormalizable and physics can get somewhat clunky in terms of these. | {
"domain": "physics.stackexchange",
"id": 42872,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, wavefunction, hilbert-space, schroedinger-equation",
"url": null
} |
php, object-oriented, wordpress
/**
* {@inheritdoc}
*/
public function init(array $values)
{//or implement the setBulk I've shown earlier
if (isset($values['dateReferrer']))
$this->setDateReferrer($values['dateReferrer']);
if (isset($values['authorReferrer']))
$this->setAuthorReferrer($values['authorReferrer']);
return $this;
}
/**
* @param \DateTime|string $dr
* @return $this
*/
public function setDateReferrer($dr)
{
if (!$dr instanceof \DateTime)
{
$dr = new \DateTime($dr);
}
$this->dateReferrer = $dr;
return $this;
}
//implement rest of the interface here...
} | {
"domain": "codereview.stackexchange",
"id": 11997,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, object-oriented, wordpress",
"url": null
} |
everyday-chemistry, home-experiment, combustion
There are two types of devices with flames: ones where the fuel is premixed with air, which usually produce a bluish flame, and ones where the fuel is not premixed with air but burns as it mixes, which may produce a bright yellow flame.
For many organic compounds, especially ones that require a large amount of oxygen per molecule to fully burn, if the fuel is not premixed with enough air, the flame usually contains an oxygen-deficient region where tiny particles of carbon and molecular hydrogen are formed. These carbon particles, since they are heated, emit thermal light and are responsible for the bright, non-transparent orange-yellow of organic flames. A typical example of such a flame is the flame of a candle. It is a flame of heavy hydrocarbons or fatty acids (C20+), which require tens of molecules of oxygen per molecule of fuel to fully burn. | {
"domain": "chemistry.stackexchange",
"id": 1779,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "everyday-chemistry, home-experiment, combustion",
"url": null
} |
ros, services, ros-kinetic, cmake
"/opt/ros/kinetic/share/genmsg/cmake/pkg-genmsg.cmake.em") returned error
code 1
Call Stack (most recent call first):
/opt/ros/kinetic/share/catkin/cmake/em_expand.cmake:25 (safe_execute_process)
/opt/ros/kinetic/share/genmsg/cmake/genmsg-extras.cmake:303 (em_expand)
visualize_waypoints/CMakeLists.txt:76 (generate_messages) | {
"domain": "robotics.stackexchange",
"id": 33699,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, services, ros-kinetic, cmake",
"url": null
} |
Again, please check the Math because I did this quickly. At the very least this post gives a way to simplify the original equation.
-Dan
7. Originally Posted by topsquark
(Sigh) ...
$9x'^2+y'^2+2\sqrt2 y' + 2 = 0$.
We may complete the square on the y' variable and I got a form that reads:
$9x'^2+(y'^2+\sqrt2 /3)^2 + 28/36 = 0$
...
Again, please check the Math because I did this quickly. At the very least this post gives a way to simplify the original equation.
-Dan
Hello,
that's really brilliant!
At your last step you didn't realize that $y'^2+2\sqrt2 y' + 2$ is already a complete square. Now your equation reads:
$(3x')^2+(y'+\sqrt2)^2 = 0$.
So you have a sum of two squares which must be zero; that is only possible when both squares vanish, i.e. at the single point $x'=0$, $y'=-\sqrt2$.
Greetings
EB | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.964321450147636,
"lm_q1q2_score": 0.8137861509608685,
"lm_q2_score": 0.8438951045175643,
"openwebmath_perplexity": 414.22936323683854,
"openwebmath_score": 0.7963691353797913,
"tags": null,
"url": "http://mathhelpforum.com/algebra/2456-equation-help.html"
} |
cosmology, universe, big-bang, faster-than-light, space-expansion
Title: Superluminal expansion of the early universe: how is this possible? Is this a postulate? I get the expansion of the universe, the addition of discrete bits of spacetime between me and a distant galaxy, until very distant parts of the universe are moving, relative to me, faster than the speed of light. But the other part sure seems like a giant steaming load of convenience. I think I get the reasons why: they don't have any other way to explain the size of the universe, and so on. It just seems it would be much easier to say the universe is eternal and cyclical in nature and call it a day! There are rules about things moving through space faster than light; there is no rule about space expanding faster than light.
As long as it can't be used to transfer information then there is no problem with relativity. | {
"domain": "physics.stackexchange",
"id": 97496,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, universe, big-bang, faster-than-light, space-expansion",
"url": null
} |
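The answer's point can be made quantitative with Hubble's law, v = H0 * d. A minimal sketch, assuming a round illustrative value H0 = 70 km/s/Mpc:

```python
# Recession speed under Hubble's law v = H0 * d: galaxies beyond the
# Hubble radius c/H0 recede faster than light, with no conflict with
# relativity, because nothing moves *through* space that fast.
# H0 = 70 km/s/Mpc is an assumed round value for illustration only.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per Mpc

def recession_speed(distance_mpc):
    return H0 * distance_mpc  # km/s

hubble_radius = C_KM_S / H0   # ~4283 Mpc
print(round(hubble_radius))
print(recession_speed(5000) > C_KM_S)  # True: superluminal recession
```

Anything beyond roughly 4300 Mpc (for this H0) recedes superluminally today, yet no signal or object ever overtakes light locally.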
java, object-oriented, simulation, game-of-life
public int getNeighborCells(int x, int y) {
int neighbors = 0;
for (int i = x - 1; i <= x + 1; i++) {
for (int j = y - 1; j <= y + 1; j++) {
if (world[i][j] == ALIVE && (x != i || y != j))
neighbors += 1;
}
}
return neighbors;
}
public void createRandomWorld() {
for (int y = 0; y < WORLDSIZE; y++) {
for (int x = 0; x < WORLDSIZE; x++) {
if (Math.random() > 0.9) {
world[x][y] = ALIVE;
} else
world[x][y] = DEAD;
}
}
}
public int[][] applyRules() {
for (int i = 1; i < WORLDSIZE - 1; i++) {
for (int j = 1; j < WORLDSIZE - 1; j++) {
int neighbors = getNeighborCells(i, j);
if (world[i][j] == ALIVE) {
if ((neighbors < 2) || (neighbors > 3)) {
worldCopy[i][j] = DEAD;
} | {
"domain": "codereview.stackexchange",
"id": 31423,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented, simulation, game-of-life",
"url": null
} |
java, console, connect-four
return true;
}
}
// Check the diagonal patterns (from top-right to bottom-left):
for (int startY = 0; startY < verticalPatterns; ++startY) {
next_pattern:
for (int startX = WINNING_PATTERN_LENGTH - 1;
startX < getWidth();
startX++) {
for (int offset = 0;
offset < WINNING_PATTERN_LENGTH;
offset++) {
if (board[startY + offset][startX - offset] != color) {
continue next_pattern;
}
}
return true;
}
}
// No victory yet for color 'color'.
return false;
}
public int getColumnHeight(final int x) {
int height = 0; | {
"domain": "codereview.stackexchange",
"id": 20674,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, console, connect-four",
"url": null
} |
life, definitions
This property is (arguably) necessary, but not sufficient, for life — things can be 'conscious' (respond to stimuli) but not be alive, but they can't be alive if they don't respond to stimuli.
This property is more typically called irritability. The Khan Academy article on "What is Life?" gives this property under the section "Response" (along with the other properties "Organization", "Metabolism", "Homeostasis", "Growth", "Reproduction", "Evolution"). (You can also find it in lots of biology textbooks in this Google Books search.) The text you refer to uses similar criteria (growth, reproduction, metabolism, cellular organization, 'consciousness').
As Khan Academy also says, it's hard to be precise: | {
"domain": "biology.stackexchange",
"id": 10252,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "life, definitions",
"url": null
} |
beginner, go
If there was a concern about performance, profiling would be in order of course, I'm not gonna make any comments about that otherwise. | {
"domain": "codereview.stackexchange",
"id": 41493,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, go",
"url": null
} |
special-relativity, time-dilation, doppler-effect, redshift, gravitational-redshift
Title: Does a blueshift mean that time goes faster? This is a follow-up question to this answer.
The assumption in this answer is that time dilation always causes a small redshift when an observer looks at an object moving at a significant fraction of the speed of light when not taking into account the shifts caused by the directional Doppler effect.
So, if time going slower always causes a redshift, does that mean that if we see a blueshift it means that time appears to move faster?
In other words, if B, that is far away from A, moves towards A really fast, A will appear to be blueshifted to B due to the relativistic doppler effect and thus B will see A's time moving faster?
The confusion I have is linking the concepts of redshifts and blueshifts with time going slower and faster.
So, if time going slower always causes a redshift, does that mean that if we see a blueshift it means that time appears to move faster? | {
"domain": "physics.stackexchange",
"id": 85229,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, time-dilation, doppler-effect, redshift, gravitational-redshift",
"url": null
} |
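The distinction the question is circling can be sketched numerically: for an approaching source, the relativistic Doppler factor sqrt((1+β)/(1-β)) exceeds 1 (blueshift), even though the time-dilation factor 1/γ alone is less than 1. A minimal check, with β = 0.6 as an arbitrary example:

```python
import math

def doppler_factor(beta):
    """Observed/emitted frequency ratio for a source approaching at v = beta*c."""
    return math.sqrt((1 + beta) / (1 - beta))

def gamma(beta):
    """Lorentz factor 1/sqrt(1 - beta^2)."""
    return 1 / math.sqrt(1 - beta**2)

beta = 0.6
# Pure time dilation alone would slow the source's clock by 1/gamma = 0.8,
# but the shrinking light-travel time wins: the net factor is 2.0 (blueshift).
print(1 / gamma(beta))       # 0.8
print(doppler_factor(beta))  # 2.0
```

So B really does *see* A's clock ticks arrive sped up; the blueshift combines time dilation with the decreasing light-travel time, and the latter dominates for approach.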
newtonian-mechanics, forces, classical-mechanics, centripetal-force
Title: Kinetic Energy and Centripetal Force I know since no work is done by the centripetal force, kinetic energy is constant; but does that mean it if the kinetic energy was increased or decreased it would have no effect on the centripetal force? It would have no effect on the force, which in general has a separate cause and does not depend on speed or alike (it may depend on gravity or friction or alike depending on the situation).
The centripetal force is:
$$F_c=m\frac{v^2} r$$
Decreasing the kinetic energy $K=\frac12 mv^2$ of the circulating object means decreasing either $m$ or $v$. Doing either would mathematically seem to decrease $F_c$ (one linearly and the other quadratically). But we know that doesn't happen. What happens instead is that $r$ changes accordingly.
The conclusion is that a change in the kinetic energy of a circulating object changes its orbit, but not the centripetal force pulling on it. | {
"domain": "physics.stackexchange",
"id": 48109,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, classical-mechanics, centripetal-force",
"url": null
} |
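The answer's conclusion can be sketched with numbers: holding the force fixed and solving $F_c = mv^2/r$ for the radius shows how a change in speed (kinetic energy) changes the orbit instead. The mass and force below are arbitrary illustrative values:

```python
# For a fixed centripetal force F, changing the kinetic energy (via v)
# changes the orbit radius r = m * v^2 / F, not the force itself.
def orbit_radius(m, v, F):
    return m * v**2 / F

m, F = 2.0, 8.0                 # kg, N (assumed values)
r1 = orbit_radius(m, 1.0, F)    # 0.25 m
r2 = orbit_radius(m, 2.0, F)    # 1.0 m: doubling v quadruples r
print(r1, r2)
```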
java, multithreading, thread-safety, zeromq
for (int i = 0; i < THREADS; i++) {
executor.submit(new ClientTask(endTime));
}
// wait for termination
executor.shutdown();
executor.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
And below is ClientTask class -
public class ClientTask implements Runnable {
private static final Random random = new Random();
private static final String CHARACTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
private final long endTime;
public ClientTask(long endTime) {
this.endTime = endTime;
}
@Override
public void run() { | {
"domain": "codereview.stackexchange",
"id": 10511,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, multithreading, thread-safety, zeromq",
"url": null
} |
scikit-learn, multiclass-classification
Finally, it's important to keep in mind that there may be limitations to what can be achieved with the available data. If the underlying patterns are inherently complex or noisy, it may be difficult to achieve high levels of accuracy. In such cases, it's important to carefully evaluate the performance metrics and determine what level of performance is acceptable for your use case (compare your results with a dumb (random) classifier). | {
"domain": "datascience.stackexchange",
"id": 11534,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "scikit-learn, multiclass-classification",
"url": null
} |
materials, structural-engineering, stresses
Title: Why do we even use engineering stress? Surprisingly this hasn't been asked before, so I must be missing something simple.
We use engineering stress and engineering strain in the equation Stress = (Young's modulus) × (strain). This equation is used in the analysis of bending beams, twisting shafts, and buckling. So the final equations of bending $(\frac{M}{I} = \frac{\sigma}{y})$ and torsion $(\frac{T}{J} = \frac{\tau}{r})$ will give us the value of engineering stress, not the true stress.
Why are we considering engineering stress instead of true stress when we know it will not give the correct value of stress?
Some things I read are:
Difficult to measure.
Not that much of a difference and we can just apply a Factor of Safety.
"We don't consider materials to change their cross-sectional area after loading, since we design to have no plastic deformation the elastic region is most important, therefore what happens after the proportional limit is not important" | {
"domain": "engineering.stackexchange",
"id": 513,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "materials, structural-engineering, stresses",
"url": null
} |
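The "not that much of a difference" point from the question can be quantified: under the usual constant-volume assumption, true stress equals engineering stress times (1 + engineering strain). A sketch with assumed numbers:

```python
# Why engineering stress is "good enough" in the elastic region:
# assuming constant volume, true stress = engineering stress * (1 + strain).
def true_stress(eng_stress, eng_strain):
    return eng_stress * (1 + eng_strain)

# A steel near yield (assumed 250 MPa at strain ~0.002): the two stresses
# differ by only ~0.2%, well inside any factor of safety.
s_eng = 250.0  # MPa, assumed yield stress
print(true_stress(s_eng, 0.002))  # 250.5 MPa
# At a large plastic strain of 20% the gap is no longer negligible:
print(true_stress(s_eng, 0.20))   # 300.0 MPa
```

This is exactly the third quoted argument: within the elastic region the area change is tiny, so the engineering value is effectively the true one.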
However, apparently the solution is given by this:
Clearly this is not the same answer. I'm not entirely sure which is right; if I've gone wrong somewhere in my solution, where was it?
• There is a distinction between exactly a 75/25 split (deterministic), and a probability .75 of one and .25 of the other. In the latter case you have to account for the variance of the Bernoulli split (as per @drhab's Answer. – BruceET Feb 27 '18 at 9:26
## 1 Answer
To make things clear suppose that $X\sim N(\mu,\sigma^2)$ and $Y\sim N(\mu,\sigma^2)$ so both the same.
Then there is no split up and it would be just a matter of finding $P(W>5000)$ where $W\sim N(\mu,\sigma^2)$.
But it seems you would go for finding $P(0.25X+0.75Y>5000)$.
Now observe that the variance of $0.25X+0.75Y$ is not $\sigma^2$ (as it should) but is $\frac{10}{16}\sigma^2$.
The correct route is working with $W=BX+(1-B)Y$ where $B\sim\text{Bernoulli}(0.25)$ and where $B,X,Y$ are independent. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363499098282,
"lm_q1q2_score": 0.8001348486012292,
"lm_q2_score": 0.8128673178375734,
"openwebmath_perplexity": 160.86405866708088,
"openwebmath_score": 0.9522947072982788,
"tags": null,
"url": "https://math.stackexchange.com/questions/2668742/probability-of-mixture-of-normal-random-variables"
} |
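The answer's variance claim is easy to verify by simulation. A sketch assuming X and Y are independent standard normals with equal means:

```python
import random

# Monte Carlo check of the answer's point: with X, Y ~ N(mu, sigma^2),
# the random mixture W = B*X + (1-B)*Y (B ~ Bernoulli(0.25)) has variance
# sigma^2, while the deterministic blend 0.25*X + 0.75*Y has variance
# (10/16)*sigma^2 = 0.625*sigma^2.
random.seed(0)
mu, sigma, n = 0.0, 1.0, 200_000

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

mix, blend = [], []
for _ in range(n):
    x = random.gauss(mu, sigma)
    y = random.gauss(mu, sigma)
    b = 1 if random.random() < 0.25 else 0
    mix.append(b * x + (1 - b) * y)
    blend.append(0.25 * x + 0.75 * y)

print(round(var(mix), 2))    # ~1.0
print(round(var(blend), 2))  # ~0.62  (= 10/16)
```

With equal means the Bernoulli split contributes no extra variance, so W keeps the full sigma^2 while the deterministic blend shrinks it.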
python, beginner, game, role-playing-game
clear()
skills()
elif option == "back":
game()
else:
input("Invalid command\n")
skillupgrade()
except:
input("No skill with this name")
skillupgrade() | {
"domain": "codereview.stackexchange",
"id": 42192,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, game, role-playing-game",
"url": null
} |
soft-question, field-theory
Title: Do the equations on this piece of art have physical significance Someone I know owns a piece of art, which is shown in the figure.
$$[\varphi_\alpha(x), \varphi_\beta(y)]= -i\Delta_{\alpha\beta}(x-y)$$
and
$$U[\sigma,\sigma_0]= I-i\int_{\sigma_0}^\sigma\mathcal{H}'(x')U[\sigma',\sigma_0]\,\mathrm dx'\,. $$
I have some theories on what the physical significance (or lack there of) of the particular symbols are, but would like to see if meaning jumps out at anyone. | {
"domain": "physics.stackexchange",
"id": 28302,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "soft-question, field-theory",
"url": null
} |
homework-and-exercises, electrical-resistance, power
Title: Parallel Resistors (special cases formula)
1) A car's rear window defroster uses $n=15$ strips of resistive wire in a parallel arrangement. If the total resistance is 1.4 ohms, what is the resistance of one wire?
Solution: I rearranged the formula and got the following answer $$ R= nR_{T} = (15)(1.4 \Omega) = 21 \Omega .$$
2) Question : What is the total power dissipated in the defroster if 12 V is applied to it? | {
"domain": "physics.stackexchange",
"id": 17278,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, electrical-resistance, power",
"url": null
} |
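Both parts of the defroster problem reduce to one-line formulas. A sketch using the question's numbers (12 V is the value given in part 2):

```python
# n equal resistors R in parallel give total R_T = R / n, so R = n * R_T;
# power dissipated at voltage V is P = V^2 / R_T.
n, R_T, V = 15, 1.4, 12.0

R_single = n * R_T       # 21 ohms, matching the worked answer to part 1
P_total = V**2 / R_T     # ~102.9 W for part 2
print(R_single, round(P_total, 1))
```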
Question 14 Explanation:
Randomized quicksort has expected time complexity O(n log n), but the worst-case time complexity remains the same. In the worst case, the random function can pick the index of a corner element every time.
Question 15
Consider any array representation of an n element binary heap where the elements are stored from index 1 to index n of the array. For the element stored at index i of the array (i <= n), the index of the parent is
A i - 1 B floor(i/2) C ceiling(i/2) D (i+1)/2
Heap GATE-CS-2001
Discuss it | {
"domain": "geeksforgeeks.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.962673111584966,
"lm_q1q2_score": 0.8182443945985391,
"lm_q2_score": 0.8499711737573762,
"openwebmath_perplexity": 1730.2345813167385,
"openwebmath_score": 0.4198146164417267,
"tags": null,
"url": "http://quiz.geeksforgeeks.org/gate-cs-2001/"
} |
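The relation behind option B in Question 15 can be checked directly: with 1-indexed storage, the children of node i sit at 2i and 2i+1, so the parent of node i is floor(i/2). A quick sketch:

```python
# For a 1-indexed binary heap stored in an array, children of node i are
# at indices 2*i and 2*i + 1, so the parent of node i (i > 1) is i // 2,
# i.e. floor(i/2), option B above.
def parent(i):
    return i // 2

# Children 2 and 3 both map back to the root at index 1:
assert parent(2) == 1 and parent(3) == 1
# Node 7 is the right child of node 3:
assert parent(7) == 3
print([parent(i) for i in range(2, 8)])  # [1, 1, 2, 2, 3, 3]
```

Note that (i+1)/2 (option D) agrees with floor(i/2) only for odd i, which is why B is the answer.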
optics, geometric-optics
Objects emit light in every direction. When we look at an object, we only see the light that travels directly from the object into our eye in a straight line. What we would like to do is bend some of the light that does not reach our eyes so that it does reach our eyes. This way the apparent brightness of the object goes up, and we can see the object more clearly. The lens is constructed to make this happen. | {
"domain": "physics.stackexchange",
"id": 23727,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "optics, geometric-optics",
"url": null
} |
c#, visitor-pattern
So when a Person says visitor.Visit(this), it can only be the first version, because this is a Person, and when an Animal says it, it can only be the second.
With these changes, we arrive at a program which is functionally identical to your original, but without a cast in sight - dynamic dispatch has taken their place.
With the Visitor side of things looked at, some points on the rest of the code:
Initializing Lists
In Main, you're creating a list specifically so you can add its elements to a second list. You can just use the second list directly. Additionally, when using a parameterless constructor and an initializer list, you can omit the constructor brackets:
List<Creature> creatures = new List<Creature>
{
new Person { Name = "Frank" },
new Person { Name = "Tony" },
new Person { Name = "Amy" },
new Animal { Name = "Bubbles" },
new Animal { Name = "Max" }
}; | {
"domain": "codereview.stackexchange",
"id": 19120,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, visitor-pattern",
"url": null
} |
Therefore $$A$$ is the product of lower triangular matrices, each of them having a lower triangular inverse; hence $$A^{-1}$$ is lower triangular as well. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9859363717170516,
"lm_q1q2_score": 0.8060764148580296,
"lm_q2_score": 0.817574478416099,
"openwebmath_perplexity": 194.93252795058538,
"openwebmath_score": 0.9505676031112671,
"tags": null,
"url": "https://math.stackexchange.com/questions/3050100/if-a-matrix-a-is-both-triangular-and-unitary-then-it-is-diagonal"
} |
classical-mechanics, lagrangian-formalism, differential-geometry, coordinate-systems
Title: Generalized vs curvilinear coordinates I am taking the course "Analytical Mechanics" (from on will be called "AM") this semester. In our first lecture, my professor introduced the notion of generalized coordinates. As he presented, we use generalized coordinates when we have constraints on the system (i.g. being on a sphere). When we have $M$ particles (in 3D space) we need to use $3M$ coordinates, unless we have $p$ independent constraints which now enable us to describe our system with $3M-p$ coordinates. | {
"domain": "physics.stackexchange",
"id": 98917,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, lagrangian-formalism, differential-geometry, coordinate-systems",
"url": null
} |
rviz, urdf
Title: In URDF how do I get more than one geometry element in a link?
I would like to build a simple robot model in URDF with just visualization and display it in rviz. I just have one tf frame for the body (it is a quadrotor), so how can I use the same frame for more than one geometry element?
Or is it possible to take a 3D model and just use that?
Originally posted by TommyP on ROS Answers with karma: 1339 on 2013-02-21
Post score: 0
Short answer is that you currently can't do exactly what you're saying.
The easiest way to get around it is to create a second link with an appropriate offset, connected with a fixed joint.
This does create an extra frame which will need to be published. This is most often handled with robot_state_publisher.
Originally posted by David Lu with karma: 10932 on 2013-02-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 12993,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rviz, urdf",
"url": null
} |
java, swing, jdbc
headers = buildHeaders(rs);
content = buildContent(rs);
} catch (SQLException e) {
e.printStackTrace();
}
return new TableContent(headers, content);
}
private Vector<String> buildHeaders(final ResultSet rs) throws SQLException {
Vector<String> headers = new Vector<String>();
int col = rs.getMetaData().getColumnCount();
for (int i = 1; i <= col; i++) {
headers.add(rs.getMetaData().getColumnName(i));
}
return headers;
}
private Vector<Vector<String>> buildContent(final ResultSet rs) throws SQLException {
Vector<Vector<String>> content = new Vector<Vector<String>>();
while (rs.next()) {
int col = rs.getMetaData().getColumnCount();
Vector<String> newRow = new Vector<String>(col); | {
"domain": "codereview.stackexchange",
"id": 7045,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, swing, jdbc",
"url": null
} |
java, beginner, swing, layout
btnDamage.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
intBIResult = 0;
intBIResult = biUnit5.compareTo(biConstantOne);
if (intBIResult == 0 || intBIResult == 1) {
biDamageOutput = biDamageOutput.add(biUnit1.add(biConstantOne));
biDamageOutput = biDamageOutput.add(biUnit2.multiply(biConstantTen));
biDamageOutput = biDamageOutput.add(biUnit3.multiply(biConstantHundred));
biDamageOutput = biDamageOutput.add(biUnit4.multiply(biConstantThousand));
biDamageOutput = biDamageOutput.add(biUnit5.multiply(biConstantTenThousand));
} else {
intBIResult = 0;
intBIResult = biUnit4.compareTo(biConstantOne); | {
"domain": "codereview.stackexchange",
"id": 19683,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, swing, layout",
"url": null
} |
thermodynamics
Thanks! This is a case where you want to determine a property in a composite using some form of a rule of mixtures.
$$ P^n = \sum_j f_j P_j^n $$
Here, $P$ is the composite property, $f_j$ is typically the volume fraction, and $P_j$ is the property of the component $j$ in the mixture. When $n = 1$, this is the simple rule of mixtures. When $n = -1$, this is the inverse rule of mixtures.
An example of this rule applied to fiber composites is given at the link below.
https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Rule_of_mixtures.html
For heat capacity, the formulation is written using mass specific heat capacity and mass fraction. The link below gives an explanation and a calculator.
https://thermtest.com/rule-of-mixtures-calculator | {
"domain": "engineering.stackexchange",
"id": 2765,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics",
"url": null
} |
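A sketch of the rule of mixtures above, with illustrative (assumed) fiber and matrix moduli for a glass-fiber/epoxy composite:

```python
# Rule of mixtures for a composite property: P_c^n = sum_j f_j * P_j^n,
# with n = 1 (simple rule, Voigt upper bound) and n = -1 (inverse rule,
# Reuss lower bound). The moduli below are assumed round values in GPa.
def mixture(fractions, props, n=1):
    return sum(f * p**n for f, p in zip(fractions, props)) ** (1 / n)

f = [0.6, 0.4]        # volume fractions (fiber, matrix)
E = [70.0, 3.0]       # Young's moduli, GPa (assumed)

E_upper = mixture(f, E, n=1)    # 43.2 GPa: loading along the fibers
E_lower = mixture(f, E, n=-1)   # ~7.0 GPa: loading across the fibers
print(round(E_upper, 1), round(E_lower, 1))
```

For heat capacity, as noted above, the same `n = 1` form is used with mass fractions and mass-specific heat capacities in place of volume fractions and moduli.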
c++, beginner
Title: Generating a 2-variable truth table and performing boolean arithmetic My code currently generates a 2-variable truth table and lets the user select to AND/OR/NOT the variables. I was looking for advice on how to make it more concise, handle bad inputs better, and ignore case while going through.
CODE:
#include <iostream>
#include <string>
#include <iomanip>
using namespace std;
int main() {
bool p[4] = { true, true, false, false };
bool q[4] = { true, false, true, false };
cout << "Do you want to AND or OR the two propositional variables?" << endl;
string andor;
cin >> andor;
cout << "Do you want to NOT p? Y/N" << endl;
string ansp;
cin >> ansp;
cout << "Do you want to NOT q? Y/N" << endl;
string ansq;
cin >> ansq; | {
"domain": "codereview.stackexchange",
"id": 18803,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner",
"url": null
} |
programming-challenge, haskell, combinatorics, computational-geometry, monads
Title: Finding minimum scalar product using ST Monad Inspired by a Stack Overflow question, I decided to practice my Haskell by taking a crack at the Google Code Jam's Minimum Scalar Product problem:
Given two vectors \$\mathbf{v_1}=(x_1,x_2,\ldots,x_n)\$ and \$\mathbf{v_2}=(y_1,y_2,\ldots,y_n)\$. If you could permute the coordinates of each vector, what is the minimum scalar product \$\mathbf{v_1} \cdot \mathbf{v_2}\$?
Constraints:
\$100 \le n \le 800\$
\$-100000 \le x_i, y_i \le 100000\$
I'm not claiming any algorithmic awesomeness (this is just a reference implementation for checking correctness later).
This is my first time using Vectors and the ST monad, so what I really want is a sanity check that I'm using both correctly, and that I'm using the correct tools for the job.
module MinimumScalarProduct where | {
"domain": "codereview.stackexchange",
"id": 21004,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming-challenge, haskell, combinatorics, computational-geometry, monads",
"url": null
} |
beginner, rust
if a5 > e5 {
break;
} | {
"domain": "codereview.stackexchange",
"id": 44850,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, rust",
"url": null
} |
java, swing, text-editor
UndoManager undomanager;
public static void main(String[] args) {
new Notepad();
}
Notepad(){
frame = new JFrame(Title);
textarea = new JTextArea();
frame.setSize(1000, 750);
frame.setLocationRelativeTo(null);
frame.setJMenuBar(menubar = addmenu());
frame.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE);
textarea.setLineWrap(true);
textarea.setWrapStyleWord(true); | {
"domain": "codereview.stackexchange",
"id": 29188,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, swing, text-editor",
"url": null
} |
quantum-mechanics, electrons, atomic-physics, orbitals
Title: How are line spectra explained after rejecting/improving Bohr's theory? I learned that Bohr explained line spectra by postulating that electrons can only be at certain discrete distances from the nucleus. Later, this theory was refuted/improved by de Broglie and Schrödinger. Since their theories, electrons were seen as standing waves and we can only know where they will probably be. The regions with $90\%$ probability are called orbitals. But how can line spectra be explained if electrons are not restricted to discrete distances but rather to orbital regions?
By the way, am I getting the idea of orbitals correctly ? Is it correct to see it as a region with a high probability of finding an electron as a standing wave?
Since their theories, electrons were seen as standing waves and we can only know where they will probably be. The regions with $90\%$ probability are called orbitals.
Not quite. The orbital is the standing wave itself. | {
"domain": "physics.stackexchange",
"id": 53408,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, electrons, atomic-physics, orbitals",
"url": null
} |
Now P(5)=P(4)+int_4^5 f(t)dt=4-1/2*1*4=2.
P(6)=P(5)+int_5^6 f(t)dt=2+1/2*1*4=4.
Finally, P(7)=P(6)+int_6^7 f(t)dt, where int_6^7 f(t)dt is the area of a rectangle with sides 1 and 4. So, P(7)=4+1*4=8.
Sketch of P(x) is shown below.
Example 2. If P(x)=int_1^x t^3 dt , find a formula for P(x) and calculate P'(x).
Using part 2 of fundamental theorem of calculus and table of indefinite integrals we have that P(x)=int_1^x t^3 dt=(t^4/4)|_1^x=x^4/4-1/4.
Now, P'(x)=(x^4/4-1/4)'=x^3. We see that P'(x)=f(x) as expected due to first part of Fundamental Theorem.
Example 3. Find the derivative of P(x)=int_0^x sqrt(t^3+1)dt.
Using the first part of the fundamental theorem of calculus, we have that P'(x)=sqrt(x^3+1).
Example 4. Find d/(dx) int_2^(x^3) ln(t^2+1)dt.
Here we have composite function P(x^3). To find its derivative we need to use Chain Rule in addition to Fundamental Theorem.
Let u=x^3 then (du)/(dx)=(x^3)'=3x^2. | {
"domain": "emathhelp.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9950945944397533,
"lm_q1q2_score": 0.8226564310665383,
"lm_q2_score": 0.8267117876664789,
"openwebmath_perplexity": 1507.8173915170753,
"openwebmath_score": 0.9498471021652222,
"tags": null,
"url": "http://www.emathhelp.net/notes/calculus-2/definite-integral/the-fundamental-theorem-of-calculus/"
} |
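Example 2 above can be verified numerically: the integral P(x)=int_1^x t^3 dt should match x^4/4-1/4, and its derivative should recover f(x)=x^3. A sketch using a midpoint-rule approximation:

```python
# Numeric check of Example 2: P(x) = integral from 1 to x of t^3 dt.
def P(x, steps=100_000):
    # simple midpoint-rule approximation of the integral
    h = (x - 1) / steps
    return sum((1 + (i + 0.5) * h) ** 3 for i in range(steps)) * h

x = 2.0
closed_form = x**4 / 4 - 1 / 4                     # 3.75
numeric_deriv = (P(x + 1e-4) - P(x - 1e-4)) / 2e-4  # ~ x^3 = 8.0
print(round(P(x), 4), closed_form, round(numeric_deriv, 2))
```

The midpoint sum agrees with the closed form x^4/4 - 1/4, and the central difference recovers P'(x)=x^3, just as the first part of the Fundamental Theorem predicts.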
c
I hope this is all true, because the story is almost too good: I really wonder how he got at this solution; it is correct, but there is a much smaller one.
It is suggested you write it as if for Mr. Babbage himself, who seems to have a clever pen-and-paper method, and who knows the basics (only the basics, but very well).
I found some rare programs that only check roots ending in 4 or 6, since a square can only end in 6 if its root ends in 4 or 6. I extended this to the endings 64 and 36, which both give squares ending in 96.
Here is my code. I turned it around and first calculate an approximate root for every number with that ending. At least I treat it as approximate.
My main question is about the rounding and the way I assign and use babb and diff. Before I used round() it was not really working; now I have removed all casts and it seems to work.
/* "Babbage Problem"
= (Smallest) number whose square ends in ...269696 ? */
/* The root must end on 4 or 6 to give ending 6, | {
"domain": "codereview.stackexchange",
"id": 40543,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c",
"url": null
} |
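For reference alongside the C program under review, here is a brute-force sketch (in Python rather than the post's C) that finds the smallest root directly, which is useful for checking any cleverer ending-based search:

```python
# Babbage problem: smallest positive integer whose square ends in 269696.
# Plain brute force over every candidate; the ending-based filters in the
# post are optimizations of this same search.
def smallest_babbage():
    n = 1
    while n * n % 1_000_000 != 269_696:
        n += 1
    return n

n = smallest_babbage()
print(n, n * n)   # 25264 638269696
```

The smallest root found this way is 25264, confirming the post's remark that there is a much smaller solution than the one in the original story.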
c#, design-patterns
return messenger.InStateMessages();
}
private IEnumerable<string> GetBeforeStateEnterMessages(IState gameState)
{
var messenger = gameState as IMessenger;
if (messenger == null) { return Enumerable.Empty<string>(); }
return messenger.BeforeStateEnterMessages();
}
public IEnumerable<string> ChangeState()
{
if (_gameState == null)
{
_gameState = new SetupMax();
return GetBeforeStateEnterMessages(_gameState);
}
if (_gameState is SetupMax)
{
guessingRange.SetMax(_gameState.Value);
_gameState = new SetupMin(); | {
"domain": "codereview.stackexchange",
"id": 13862,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, design-patterns",
"url": null
} |
time, software, pulsar
I am also unsure where the data for the other clocks would be publicly available. I am hoping a scientist who has done a similar calculation can point me in the right direction. So, after some discussion and paper reading I learned of the sigma_z calculation in this paper by Matsakis, Taylor, and Eubanks. Several astronomers recommended this to me. I was also told to try out Stable32, although I'm not sure if that will work for pulsars. The code for a sigma_z calculation can be found here: sigma_z calculation. I believe this is the best method for doing what I asked. | {
"domain": "astronomy.stackexchange",
"id": 3871,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "time, software, pulsar",
"url": null
} |
I have made some slight progress in finding the asymptotic distribution of $$T_2$$. By CLT, $$\frac{1}{n}\sum(X_i^2)\sim AN(\sigma^2,Var(X^2)/n)$$. Then, by letting $$g(z)=\sqrt{z}$$, Delta Method yields $$g(\frac{1}{n}\sum(X_i^2))=T_2 \sim AN(\sigma, \frac{Var(X^2)}{n}\cdot\frac{1}{4\sigma^2})$$. However, I do not know how to evaluate $$Var(X^2)$$.
I used a similar approach for finding the asymptotic distribution of $$T_1$$. Since $$|X| \sim HN(\sigma)$$, we have $$\bar{|X|} \sim AN(\frac{\sigma\sqrt{2}}{\sqrt{\pi}},\frac{\sigma^2(1-2/\pi)}{n})$$ from CLT. Then, we use Delta Method with $$g(z)=z\sqrt{\pi/2}$$ to get $$T_1 \sim AN(\sigma, \frac{\sigma^2\pi(1-2/\pi)}{2n})$$. How can I complete this problem? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9621075690244281,
"lm_q1q2_score": 0.8079144024055342,
"lm_q2_score": 0.839733963661418,
"openwebmath_perplexity": 337.1374179168605,
"openwebmath_score": 0.9999644756317139,
"tags": null,
"url": "https://stats.stackexchange.com/questions/461169/how-can-i-find-the-asymptotic-relative-efficiency-of-two-quantities-estimating"
} |
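To close the gap in the question above: for $X\sim N(0,\sigma^2)$ one has $E[X^4]=3\sigma^4$, so $Var(X^2)=3\sigma^4-\sigma^4=2\sigma^4$ and $T_2 \sim AN(\sigma, \sigma^2/(2n))$; the ratio of asymptotic variances is then $Var(T_1)/Var(T_2) = \pi(1-2/\pi) = \pi-2 \approx 1.14$. A quick Monte Carlo sketch checks the $Var(X^2)$ claim (standard library only; sample size and seed are arbitrary choices):

```python
import random
import math

# Assumption: X ~ N(0, sigma^2); then E[X^4] = 3 sigma^4, so
# Var(X^2) = 3 sigma^4 - sigma^4 = 2 sigma^4, giving T2 ~ AN(sigma, sigma^2/(2n)).
sigma = 2.0
var_x2_theory = 2 * sigma ** 4  # = 32 for sigma = 2

random.seed(0)
samples = [random.gauss(0, sigma) ** 2 for _ in range(200_000)]
mean = sum(samples) / len(samples)
var_x2_mc = sum((s - mean) ** 2 for s in samples) / len(samples)

# Asymptotic relative efficiency: Var(T1)/Var(T2) = pi - 2 (> 1, so T2 wins)
are = math.pi - 2
```

So under normality $T_2$ is the more efficient of the two estimators.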
automata, finite-automata
When two decimal digits are added, there are only two possible carries: $0$ or $1$. If we already have a carry, this is still true (the limit cases are $x_{i} = y_{i} = c = 0$ and $x_{i} = y_{i} = 9$, $c = 1$). To put this another way, the output of adding two decimal digits and a carry bit is a two digit number $c' z_{i}$, where $c'$ is the carry bit for the next addition and $z_{i}$ is the decimal output digit.
To give an example of the result, here's the transducer for the binary case. Each transition gives the two input digits, the output digit and target state of the transition. Remember that each transition implicitly includes the carry bit in the addition: | {
"domain": "cs.stackexchange",
"id": 5527,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "automata, finite-automata",
"url": null
} |
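The carry-as-state idea can be sketched directly in code; here is a hypothetical Python rendition of the decimal transducer, reading digit pairs least-significant first, where the only state carried between steps is the carry bit:

```python
def add_by_transducer(xs, ys):
    """Add two equal-length decimal digit strings (least significant digit
    first). The transducer state is just the carry bit, 0 or 1."""
    carry = 0
    out = []
    for x, y in zip(xs, ys):
        total = int(x) + int(y) + carry
        out.append(str(total % 10))  # output digit z_i
        carry = total // 10          # next state c'
    if carry:
        out.append('1')              # final carry becomes a leading digit
    return ''.join(out)

# 479 + 83: reverse the operands to "974" and "380"
result = add_by_transducer("974", "380")  # digits of 562, reversed
```

Reversing `result` gives `"562"`, i.e. 479 + 83.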
# Finding the derivative of a function using the Product Rule
I'm home teaching myself calculus because I'm 16 and therefore too young to take an actual class with a teacher, so I apologise if this seems simple.
I understand the definition of the Product Rule and its formula:
"If a function $h(x)=f(x)\times g(x)$ is the product of two differentiable functions $f(x)$ and $g(x)$, then $h'(x) = f(x)\times g'(x)+f'(x)\times g(x)$".
I did a question to find the derivative of $g(x) = (2x+1)(x+4)$ using the Product Rule.
Now on the solutions sheet it says I must begin by writing:
$g'(x)=(2x+1){\bf (1)}+{\bf (2)}(x+4)$
What confuses me are the terms that I have put in bold. (the terms $(1)$ and $(2)$). I believe the term $(1)$ is $g'(x)$ from the formula and the term $(2)$ is $f'(x)$ from the formula.
How am I supposed to know these 2 terms? Am I supposed to find the derivative of $(2x+1)$ and $(x+4)$ before going on to the question?
I also apologise if this is quite messy. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347838494568,
"lm_q1q2_score": 0.8339711274023511,
"lm_q2_score": 0.8558511469672594,
"openwebmath_perplexity": 130.94169983363867,
"openwebmath_score": 0.9787867665290833,
"tags": null,
"url": "http://math.stackexchange.com/questions/170489/finding-the-derivative-of-a-function-using-the-product-rule/170503"
} |
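A quick way to see that the two bold terms are indeed the derivatives of the factors: with factors $2x+1$ (derivative $2$) and $x+4$ (derivative $1$), the product rule gives $g'(x)=(2x+1)(1)+(2)(x+4)=4x+9$, which a finite-difference check confirms (hypothetical sketch):

```python
def g(x):
    return (2 * x + 1) * (x + 4)

def g_prime(x):
    # product rule: f(x) * g'(x) + f'(x) * g(x) with factors 2x+1 and x+4
    return (2 * x + 1) * 1 + 2 * (x + 4)  # simplifies to 4x + 9

# central finite-difference check at x = 3
h = 1e-6
numeric = (g(3 + h) - g(3 - h)) / (2 * h)
exact = g_prime(3)  # 4*3 + 9 = 21
```

So yes: you differentiate each factor first, then plug those derivatives into the formula.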
The mean, median and mode do not reveal the actual dispersion in the data we have. The variance (the average of the squared deviations from the mean) and the standard deviation both measure how far, on average, the values lie away from their mean. Variance is expressed in the square of the data's unit (σ² = σ×σ), while the standard deviation is expressed in the same unit as the data itself; for the data above, the standard deviation is 2. Different formulas apply when working with a population versus a sample: the sample variance will always be just a bit larger than the value given by VARP, which divides by the full count. With the method of step deviation we are familiar with a shortcut method for calculation of | {
"domain": "institutionalmarkets.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9793540740815275,
"lm_q1q2_score": 0.8203265216375724,
"lm_q2_score": 0.837619961306541,
"openwebmath_perplexity": 461.9927051292846,
"openwebmath_score": 0.8864527344703674,
"tags": null,
"url": "http://institutionalmarkets.com/l8nwzpk/variance-vs-standard-deviation-4bd4f2"
} |
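The population-vs-sample distinction discussed above can be pinned down in a few lines (illustrative data only):

```python
def variance(data, population=True):
    """Average squared distance from the mean; divide by n for the
    population variance (Excel's VARP) or by n-1 for the sample variance."""
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)
    return ss / n if population else ss / (n - 1)

data = [2, 4, 4, 4, 5, 5, 7, 9]          # mean is 5
pop_var = variance(data)                  # 4.0
std_dev = pop_var ** 0.5                  # 2.0, same unit as the data
sample_var = variance(data, population=False)  # n-1 divisor: slightly larger
```

The standard deviation (2.0 here) is directly comparable to the data values, which is why it is usually reported instead of the variance (4.0, in squared units).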
java, mathematics, combinatorics
for (GameSquare square : this.squares) {
hashCode = hashCode * prime + square.hashCode();
}
return hashCode;
}
@Override
public boolean equals(Object other) {
// self check
if (this == other) {
return true;
}
// null check
if (other == null) {
return false;
}
// type check and cast
if (getClass() != other.getClass()) {
return false;
}
Rule otherResultSet = (Rule) other;
boolean sizesAreEqual = this.squares.size() == otherResultSet.squares.size();
if (!sizesAreEqual) {
return false;
}
// Don't care about order
return this.squares.containsAll(otherResultSet.squares) && this.getResultsEqual() == otherResultSet.getResultsEqual();
}
} | {
"domain": "codereview.stackexchange",
"id": 36841,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, mathematics, combinatorics",
"url": null
} |
equilibrium, pressure
Title: Pressure before equilibrium = Pressure during equilibrium? For example, let's say we have the equation $$\ce{PCl_3(g) + Cl_2(g) -> PCl_5(g)},$$ and the temperature is held constant.
Would the pressure in the container when equilibrium is reached be greater than, less than, or equal to the pressure when the $\ce{PCl3}$ and the $\ce{Cl2}$ are initially mixed?
I have a few ideas, but they all come up with different conclusions. | {
"domain": "chemistry.stackexchange",
"id": 17480,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "equilibrium, pressure",
"url": null
} |
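One commonly used line of reasoning, assuming ideal-gas behaviour at constant volume and temperature: pressure is proportional to the total number of moles of gas, and the forward reaction turns two moles of gas into one, so any forward progress lowers the total. A sketch with hypothetical initial amounts:

```python
# Ideal gas at constant V, T: P is proportional to total moles n.
# PCl3 + Cl2 -> PCl5 converts 2 mol of gas into 1 mol, so forward
# progress x reduces the mole count from n0 to n0 - x.
n_pcl3, n_cl2 = 1.0, 1.0       # hypothetical initial moles
n_total_initial = n_pcl3 + n_cl2

x = 0.6                        # hypothetical extent of reaction at equilibrium
n_total_eq = (n_pcl3 - x) + (n_cl2 - x) + x   # = 2 - x

pressure_ratio = n_total_eq / n_total_initial  # < 1: pressure drops
```

Under these assumptions the equilibrium pressure is lower than the initial pressure for any forward extent x > 0.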
homework-and-exercises, newtonian-mechanics, newtonian-gravity, estimation, escape-velocity
$$v_e = \sqrt{\frac{2GM}{r}}$$
Imagine you play this scenario in reverse. You have a bullet and a gun, a zillion light years apart, motionless with respect to one another. You watch and wait, and after a gazillion years you notice that they're moving towards one another due to gravity. (To simplify matters we'll say the gun is motionless and the bullet is falling towards the gun). After another bazillion years you've followed the bullet all the way back to the gun, and you notice that they collide at 0.001 m/s. You check your sums and you work out that this is about right, given that if the gun was as massive as the Earth's 5.972 × 10$^{24}$ kg, the bullet would have collided with it at 11.2 km/s. Escape velocity is the final speed of a falling body that starts out at an "infinite" distance. If you launch a projectile from Earth with more than escape velocity, it ain't ever coming back. | {
"domain": "physics.stackexchange",
"id": 24767,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, estimation, escape-velocity",
"url": null
} |
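Plugging Earth's numbers into the formula above reproduces the familiar escape speed of about 11.2 km/s:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
r_earth = 6.371e6      # m (mean radius)

# v_e = sqrt(2 G M / r)
v_escape = math.sqrt(2 * G * M_earth / r_earth)   # ~1.12e4 m/s
```

Reversing the scenario in the answer, a body falling from rest at infinity hits the surface at this same speed.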
beginner, c, console, math-expression-eval, makefile
if(inst[end] == ')')
{
puts("Error: Unbalanced or unexpected parenthesis or bracket");
mkoperandempty(&num);
mkoperationemtpy(&op);
return 1;
}
if(inst[end] =='(')
{
start=end+1;
int bracketcounter =1;
while(bracketcounter != 0)
{
end++;
if(inst[end] == '(') bracketcounter++;
else if(inst[end] == ')' ) bracketcounter--;
if(end ==len)
{
puts("Error: Unbalanced or unexpected parenthesis or bracket");
mkoperandempty(&num);
mkoperationemtpy(&op);
return 1;
}
}
inst[end] ='\0';
int temp;
if(parse(&inst[start],&temp)!=0)
{
mkoperandempty(&num);
mkoperationemtpy(&op);
return 1 ; | {
"domain": "codereview.stackexchange",
"id": 12631,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, c, console, math-expression-eval, makefile",
"url": null
} |
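The bracketcounter logic above can be isolated into a small sketch (Python here for brevity): walk forward keeping a depth counter, return the index of the matching parenthesis, or signal an error when the input runs out while still unbalanced:

```python
def find_matching(expr, start):
    """Return the index of the ')' matching the '(' at expr[start],
    or -1 if the parentheses are unbalanced (same counter idea as above)."""
    assert expr[start] == '('
    depth = 1
    i = start
    while depth != 0:
        i += 1
        if i == len(expr):
            return -1              # ran off the end: unbalanced input
        if expr[i] == '(':
            depth += 1
        elif expr[i] == ')':
            depth -= 1
    return i

m = find_matching("1*(2+(3-4))", 2)   # index of the final ')'
```

The slice between `start + 1` and the returned index is then the sub-expression handed to the recursive `parse` call.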
function y. First, we solve the homogeneous equation y'' + 2y' + 5y = 0. An advantage of the proposed method over series methods like that of Frobenius is, in one word, ease. Example: g'' + g = 1. I didn't include them in this post, but I have edited it now. The highest power attained by the derivative in the equation is referred to as the degree of the differential equation. In earlier sections, we discussed models for various phenomena; these led to differential equations whose solutions, with appropriate additional conditions, describe the behavior of the systems involved, according to these models. To numerically solve a differential equation with higher-order terms, it can be broken into multiple first-order differential equations as shown below. If y is some exponential form of x, say $y = e^{a x}$, then all terms get the same factor $e^{a x}$. Kirchhoff's voltage law says that the sum of these voltage drops is equal to the supplied voltage: $L \frac{dI}{dt} + RI + \frac{Q}{C} = E(t)$. APPLICATIONS OF SECOND-ORDER | {
"domain": "edra.pw",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9896718459575752,
"lm_q1q2_score": 0.809130441062361,
"lm_q2_score": 0.8175744761936438,
"openwebmath_perplexity": 359.05057508222586,
"openwebmath_score": 0.7273454070091248,
"tags": null,
"url": "http://huup.edra.pw/second-order-differential-equation-solver.html"
} |
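As a concrete illustration of breaking a higher-order equation into first-order ones: y'' + 2y' + 5y = 0 has characteristic roots −1 ± 2i, so with y(0) = 1, y'(0) = −1 the exact solution is e^(−t)·cos 2t. A hand-rolled RK4 integration of the equivalent first-order system reproduces it (sketch; the step size is an arbitrary choice):

```python
import math

def f(t, u):
    # y'' + 2y' + 5y = 0 rewritten as the first-order system
    # y' = v,  v' = -2v - 5y
    y, v = u
    return (v, -2 * v - 5 * y)

def rk4(u, t, h, steps):
    """Classic fourth-order Runge-Kutta on a 2-component system."""
    for _ in range(steps):
        k1 = f(t, u)
        k2 = f(t + h / 2, [u[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f(t + h / 2, [u[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f(t + h, [u[i] + h * k3[i] for i in range(2)])
        u = [u[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return u

# y(0) = 1, y'(0) = -1  ->  exact solution y(t) = exp(-t) * cos(2t)
y1, _ = rk4([1.0, -1.0], 0.0, 0.001, 1000)   # integrate to t = 1
exact = math.exp(-1) * math.cos(2)
```

The same reduction-to-a-system step is what general-purpose ODE solvers expect as input.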
electromagnetism, general-relativity, theory-of-everything
Second, there's a physical insight of background independence: space and time aren't some static arena inhabited by fields, but rather have the same dynamical properties that fields have.
Third, there is no place for evolution in external time in the background independent context, because there is no external time. The implications are far-reaching: no general energy conservation (except in particular solutions), no Hamiltonian, no unitarity in the quantum theory. This is known as the problem of time. This doesn't indicate that background independent theories are unphysical, however, just that we have to utilize completely different techniques in order to derive predictions. E.g. the background independent dynamics is described in terms of constraints.
Making electromagnetism background independent
It is really easy to make Maxwell's theory background independent. We just have to couple it to gravity: | {
"domain": "physics.stackexchange",
"id": 41425,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, general-relativity, theory-of-everything",
"url": null
} |
Before closing this section, let's look at one more example of a base rate fallacy. The base rate of left-handed individuals in a population is 1 in 10 (10%). In this case, 600 people will receive a true-positive result. Base rate fallacy, or base rate neglect, is a cognitive error whereby too little weight is placed on the base, or original, rate of possibility (e.g., the probability of A given B). Pregnancy tests, drug tests, and police data often determine life-changing decisions, policies, and access to public goods. Rather than integrating general information and statistics with information about an individual case, the mind tends to ignore the former and focus on the latter. In the example, the stated 95% accuracy of | {
"domain": "senacmoda.info",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9780517430863701,
"lm_q1q2_score": 0.8313157973590329,
"lm_q2_score": 0.8499711832583696,
"openwebmath_perplexity": 1437.6115724276058,
"openwebmath_score": 0.6456455588340759,
"tags": null,
"url": "http://www.senacmoda.info/xrkzhche/archive.php?page=bcaafe-base-rate-fallacy-example"
} |
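The arithmetic behind the fallacy is a one-line application of Bayes' rule. With hypothetical numbers (1% base rate, a 95%-sensitive and 95%-specific test), a positive result still leaves only about a 16% chance of actually having the condition, because the false positives from the large unaffected group swamp the true positives:

```python
# Hypothetical numbers: 1% base rate, test 95% sensitive, 95% specific.
base_rate = 0.01
sensitivity = 0.95   # P(positive | condition)
specificity = 0.95   # P(negative | no condition)

# total probability of a positive result
p_pos = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)

# Bayes' rule: P(condition | positive)
ppv = sensitivity * base_rate / p_pos
```

Ignoring `base_rate` and reading "95% accurate" as "95% chance I have it" is exactly the base rate fallacy.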
24. Suppose y, x1, and x2 have a joint normal distribution with mean vector μ = [1, 2, 4]
and covariance matrix Σ = [[2, 3, 1], [3, 5, 2], [1, 2, 6]] (ordered as [y, x1, x2]).
(a) Compute the intercept and slope in the function E[y|x1], Var[y|x1], and the coefficient of determination in this regression. (Hint: See Section 3.10.1.)
(b) Compute the intercept and slopes in the conditional mean function, E[y|x1, x2]. What is E[y|x1=2.5, x2=3.3]? What is Var[y|x1=2.5, x2=3.3]?
First, for normally distributed variables, we have from (3-102) that the coefficient of determination is
Cov[y,x]{Var[x]}^{-1}Cov[x,y] / Var[y]. We may just insert the figures above to obtain the results.
The slopes are {Var[x]}^{-1}Cov[x,y] = [[5, 2], [2, 6]]^{-1} [3, 1]' = [.6154, -.03846]', so
E[y|x1,x2] = -.4615 + .6154 x1 - .03846 x2, and Var[y|x1,x2] = 2 - (.6154, -.03846)(3, 1)' = .1923. E[y|x1=2.5, x2=3.3] = 1.3017. The conditional variance is not a function of x1 or x2.
25. What is the density of y = 1/x if x has a chi-squared distribution? | {
"domain": "rhayden.us",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9890130576932458,
"lm_q1q2_score": 0.824168901966625,
"lm_q2_score": 0.8333245911726382,
"openwebmath_perplexity": 1170.1316471991727,
"openwebmath_score": 0.8116089105606079,
"tags": null,
"url": "https://www.rhayden.us/regression-model-2/probability-and-distribution-theory.html"
} |
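The quoted slopes and conditional variance can be reproduced from the partition the computation implies — Var[y] = 2, Cov[y, (x1, x2)] = (3, 1), Var[(x1, x2)] = [[5, 2], [2, 6]]. These values are recovered from the worked numbers in the source, so treat the partition as an assumption:

```python
# Partition implied by the worked answer (assumed, reconstructed):
var_y = 2.0
cov_yx = [3.0, 1.0]
var_x = [[5.0, 2.0], [2.0, 6.0]]

# 2x2 inverse by hand
det = var_x[0][0] * var_x[1][1] - var_x[0][1] * var_x[1][0]   # 26
inv = [[ var_x[1][1] / det, -var_x[0][1] / det],
       [-var_x[1][0] / det,  var_x[0][0] / det]]

# slopes = Var[x]^{-1} Cov[x, y]
b1 = inv[0][0] * cov_yx[0] + inv[0][1] * cov_yx[1]   # ~0.6154
b2 = inv[1][0] * cov_yx[0] + inv[1][1] * cov_yx[1]   # ~-0.03846

# conditional variance = Var[y] - slopes' Cov[x, y]
cond_var = var_y - (b1 * cov_yx[0] + b2 * cov_yx[1])  # ~0.1923
```

Both outputs match the solution's .6154, −.03846 and .1923 figures.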
performance, strings, linux, assembly, amd64
Also, just before translating to assembly, I like to rename variables with their register equivalents in assembly. This makes it easier for me to translate. Here's the code for that (again C).
void remove_leading_zeroes(char *rdi) {
char *rsp = (char *)malloc(strlen(rdi) + 1);
bool cl = false;
size_t r8 = 0;
size_t r9 = 0;
do {
if (rdi[r9] != '0')
cl = true;
if (cl) {
rsp[r8] = rdi[r9];
r8++;
}
r9++;
} while (r9 < strlen(rdi));
if (!cl) {
rsp[r8] = '0';
r8++;
}
rsp[r8] = 0;
r9 = 0;
do {
rdi[r9] = rsp[r9];
r9++;
} while (r9 < r8);
rdi[r9] = 0;
free(rsp);
}
And finally, the assembly code I wrote. I'm an amateur, so any suggestions on how to make this faster, or any good practice I missed would be awesome.
extern strlen
section .text
global _remove_leading_zeroes
_remove_leading_zeroes:
; Input:
; - char *number -> rdi. The result will be stored in the same string. | {
"domain": "codereview.stackexchange",
"id": 43808,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, strings, linux, assembly, amd64",
"url": null
} |
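For contrast with the assembly, the same contract fits in a few lines of Python (not a performance claim, just a reference implementation of the behaviour the C/asm versions aim for):

```python
def remove_leading_zeroes(s):
    """Strip leading '0' characters, but leave a single '0'
    when the input is all zeros - same contract as the C version."""
    stripped = s.lstrip('0')
    return stripped if stripped else '0'

a = remove_leading_zeroes("000123")
b = remove_leading_zeroes("0000")
```

A reference implementation like this is also handy as an oracle when testing the assembly version against random inputs.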
A hexagon is a polygonal figure having six sides and six angles: a closed shape bounded by a finite chain of straight lines. You should remember that the interior angles of an n-sided polygon always sum to (n − 2)·180°; for a hexagon we get (6 − 2)·180° = 4·180° = 720°. In a regular hexagon each interior angle is therefore 720°/6 = 120° (for comparison, each interior angle of a regular pentagon is 108°). The angles formed outside the polygon, called exterior angles, always add up to one full 360° turn. Test yourself in this quick quiz, or use the printable reference sheet for interior and exterior angles; in a regular hexagon all the interior | {
"domain": "strategicrecruitingservices.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347838494567,
"lm_q1q2_score": 0.8223207399365141,
"lm_q2_score": 0.8438951005915208,
"openwebmath_perplexity": 596.3082293200838,
"openwebmath_score": 0.304522305727005,
"tags": null,
"url": "https://strategicrecruitingservices.com/7vf19f/8e08e4-hexagon-interior-angles"
} |
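The formulas the passage circles around, in executable form:

```python
def interior_angle_sum(n):
    # an n-sided polygon splits into n - 2 triangles of 180 degrees each
    return (n - 2) * 180

hexagon_sum = interior_angle_sum(6)          # 720
regular_hexagon_angle = hexagon_sum / 6      # 120, for a regular hexagon
pentagon_angle = interior_angle_sum(5) / 5   # 108, for a regular pentagon
exterior_each = 360 / 6                      # exterior angles always total 360
```

The exterior-angle rule (always 360° in total) holds for any convex polygon, regular or not.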
php
Not good (mostly for debugging):
by storing a literal values we create extra overhead by storing
variable names in new variables;
worsen debugging by add one more step to find the object containing debug information;
we need to support additional dictionary; possible data inconsistency if someone adds new value to database but forget to update it in the model.
Subjective:
improves/reduces code readability.
The question is: what is the best way to store such parameter names? Or maybe to use literals.
keep the value in one place, which makes it simpler to change
You will never change it (why would you?) so this is not a valid reason. And if you did change it then search/replace works fine.
keep available property dictionary in one place (in the model) | {
"domain": "codereview.stackexchange",
"id": 2038,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php",
"url": null
} |
quantum-mechanics, wavefunction
Finally, you get the frequencies directly from the amplitudes, since they have the form $e ^ {-{i (\omega t - k x)}} \rightarrow e ^ {-{i/ \hbar (E t - p x)}}$, which is the typical form of waves. Having the frequencies equal then means equating the corresponding terms in both exponents, which happen to be the energies. | {
"domain": "physics.stackexchange",
"id": 23699,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, wavefunction",
"url": null
} |
c#, wpf
I'm not sure I see the need for your static "messenger" class, and I personally really dislike statics / singletons because they tend to couple stuff unnecessarily and cause headaches with unit testing. All the type switching is not great; I tend to classify type checking as smelly code.
Your basic requirement is that for a given model object type, you want a single place to fetch it from so you can cache instances. This is pretty much a simple Repository.
You could create a generic Repository<T> class, and create an instance for each type you want to cache. If a VM requires access to a type, then the appropriate repository is supplied to the VM constructor. No singletons, no statics, no switching on type. Just a simple class that is passed around. | {
"domain": "codereview.stackexchange",
"id": 21297,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, wpf",
"url": null
} |
electromagnetism, dielectric
Some of the energy that the secondary wave gives back is directed not in the original propagation direction but backwards: a bit of the light is reflected. (This also happens for conductors, almost perfectly, which is why metals are shiny.)
The secondary wave isn't completely in sync with the incoming one, but generally has a bit of a phase lag (like when you shake a pendulum). As a result, the wave looks “delayed”, as if it has travelled a longer way through the material than it actually has – from an outside point of view that means the wavelength is shortened despite constant frequency. But because the wavefront still has to be in sync everywhere, you'll get refraction. | {
"domain": "physics.stackexchange",
"id": 49192,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, dielectric",
"url": null
} |
c++, performance
int main()
{
constexpr std::size_t length = 200'000'000;
int* src = new int[length];
for (std::size_t i = 0; i < length; i++)
src[i] = static_cast<int>(i);
int* dst = new int[length];
std::vector<TimedTest*> tests;
tests.push_back(new TimedTestMemCpy("memcpy"));
tests.push_back(new TimedTestStdCopy("std::copy"));
tests.push_back(new TimedTestSimpleLoop("simpleLoop"));
tests.push_back(new TimedTestPointerCopy("pointerCopy"));
tests.push_back(new TimedTestOMPCopy("OMPCopy"));
std::cout << std::setw(5) << "Test#";
for (auto test : tests)
std::cout << std::setw(12) << test->name << std::setw(9) << "Avg";
std::cout << "\n";
for (int i = 0; i < 100; i++)
{
std::cout << std::setw(5) << i;
for (auto test : tests)
{
test->run(dst, src, length);
std::cout << std::setw(12) << test->time << std::setw(9) << test->average();
}
std::cout << "\n";
} | {
"domain": "codereview.stackexchange",
"id": 42252,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance",
"url": null
} |
• Oh ok that makes sense now, thanks for your help! – VakarianWrex Oct 13 '14 at 14:50
• @bloodruns4ever Don't need to be confused: $\vdash (\neg A \rightarrow B) \rightarrow (\neg A \rightarrow \neg \neg B)$ and $(\neg A \rightarrow B) \vdash (\neg A \rightarrow \neg \neg B)$ are actually equivalent statements (see deduction theorem). – Bruno Bentzen Oct 13 '14 at 14:51
• @bloodruns4ever The idea of the proof is exactly the one Mauro described above in his useful comment. Good work. – Bruno Bentzen Oct 13 '14 at 14:54 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9688561685659696,
"lm_q1q2_score": 0.8094633786579799,
"lm_q2_score": 0.8354835371034368,
"openwebmath_perplexity": 739.877063087206,
"openwebmath_score": 0.9589228630065918,
"tags": null,
"url": "https://math.stackexchange.com/questions/970760/deducing-lnot-b-to-a-from-lnot-a-to-b-using-hilbert-deductive-system"
} |
python, performance, python-2.x, community-challenge
Now that we're actually generating valid test files, let's look at the rest of your code.
It smells funny to me that your Topography class takes a filename. I'd expect a Topography to take data, and provide a classmethod that loads it from a file. This also means that with testing you don't need to go through the intermediate file. As an aside, your implementation is buggy (and your test files are invalid). They should have the size of the matrix as the first line, and your test files don't have that (and your code doesn't handle it). | {
"domain": "codereview.stackexchange",
"id": 21593,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, python-2.x, community-challenge",
"url": null
} |
fft, signal-detection
A simple example: if you are looking for the presence of a pulse against a background of noise, you might decide to set a threshold somewhere above the "typical" noise level and decide to indicate presence of the signal of interest if your detection statistic breaks above threshold. Want a really low false-alarm probability? Set the threshold high. But then, the probability of detection might decrease significantly if the elevated threshold is at or above the expected signal power level!
To visualize the $P_d$ / $P_{fa}$ relationship, the two quantities are often plotted against one another on a receiver operating characteristic curve. Here's an example from Wikipedia: | {
"domain": "dsp.stackexchange",
"id": 270,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, signal-detection",
"url": null
} |
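For a toy Gaussian threshold detector the trade-off is easy to compute with tail probabilities. Assume unit-variance noise and a signal that shifts the mean by `mu`; then $P_{fa} = Q(\tau)$ and $P_d = Q(\tau - \mu)$, where $Q$ is the Gaussian tail function (signal level and thresholds below are arbitrary illustrations):

```python
import math

def q(x):
    """Gaussian tail probability P(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

mu = 2.0        # hypothetical signal level, in noise-standard-deviation units
tau = 3.0       # high threshold: very few false alarms...

p_fa = q(tau)          # ~0.00135
p_d = q(tau - mu)      # ...but only ~0.16 probability of detection

tau_low = 1.0              # lowering the threshold trades the two off
p_fa_low = q(tau_low)      # ~0.159
p_d_low = q(tau_low - mu)  # ~0.841
```

Sweeping `tau` and plotting `p_d` against `p_fa` traces out exactly the ROC curve mentioned above.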
evolution, cell-biology, microbiology, plant-physiology, phycology
Here is a classification of algae, taken from Robert Edward Lee, Phycology, 4th edition, in a simplified diagram. Four basic types of photosynthetic structure are found in algae, according to which the classification has been done.
Classification of algae, after Lee.
(Representation concept courtesy of a note shared with me by a college friend several years ago.)
Now, how exactly did these structures develop? | {
"domain": "biology.stackexchange",
"id": 5981,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "evolution, cell-biology, microbiology, plant-physiology, phycology",
"url": null
} |
generics, typescript
console.log(foldl((a, b) => (a + b), 0, [1,2,3,4,12,14,15,16]));
I consider that significantly more readable. You can do the same thing for MMap and Filter.
Falsey bug You use return !head ? base : foldl.... This will not run if head is falsey. For example, foldl((a, b) => (a + b), 0, [1,2,3,4,12,14,15,16]) results in 67, but foldl((a, b) => (a + b), 0, [0, 1,2,3,4,12,14,15,16]) results in 0, which probably isn't desirable. Check the array's length instead:
function foldl<T>(
f: (a: T, b: T) => T,
base: T,
arr: T[]
): T {
if (!arr.length) {
return base;
}
const [head, ...rest] = arr;
return foldl(f, f(base, head), rest);
}
This applies to MMap and Filter as well.
DRYer filter To fix the falsey bug and make Filter a bit cleaner, you can replace:
const [head, ...rest] = arr;
return !head ? [] :
(f(head) ? [head, ...Filter(f, rest)] : [...Filter(f, rest)]) | {
"domain": "codereview.stackexchange",
"id": 39627,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "generics, typescript",
"url": null
} |
c++, c++11, sorting, quick-sort
Technicalities
Use type deduction instead of int for the iterator differences. It's highly unlikely that you'll actually have a range larger than the capacity of an int, but it's always better to be on the safe side.
Support for #pragma once is pretty widespread now, but it's still not standard. It's completely acceptable to use it as long as you're aware of the trade offs, but deviation from the standard should never be taken lightly.
auto value = *medianOfThree(begin, end - 1); | {
"domain": "codereview.stackexchange",
"id": 12863,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, sorting, quick-sort",
"url": null
} |
error-handling, swift, number-systems
(also making the var found obsolete.)
Mutating the temporary string can be avoided by working with a SubString (which is a kind of view into the original string) and
only updating the current search position:
var pos = original.startIndex
while pos != original.endIndex {
let subString = original[pos...]
// ...
pos = original.index(pos, offsetBy: glyph.count)
}
Naming: This is very opinion-based, here are my opinions:
Declare the function as func romanToInt(_ roman: String),
so that it is called without (external) argument name:
romanToInt("MMXIV").
Rename var int to var value.
dict is also a non-descriptive name (and it is not even a dictionary), something like glyphsAndValues might be a better choice. | {
"domain": "codereview.stackexchange",
"id": 29185,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "error-handling, swift, number-systems",
"url": null
} |
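For comparison, the greedy glyph-table scan the review describes looks roughly like this in Python. The glyph ordering matters: each two-character subtractive form is listed before its one-character prefix, so the greedy match picks it up first:

```python
GLYPHS = [("M", 1000), ("CM", 900), ("D", 500), ("CD", 400),
          ("C", 100), ("XC", 90), ("L", 50), ("XL", 40),
          ("X", 10), ("IX", 9), ("V", 5), ("IV", 4), ("I", 1)]

def roman_to_int(roman):
    """Greedy left-to-right scan over the glyph table, advancing the
    current position by the matched glyph's length each time."""
    value, pos = 0, 0
    while pos < len(roman):
        for glyph, glyph_value in GLYPHS:
            if roman.startswith(glyph, pos):
                value += glyph_value
                pos += len(glyph)   # like pos = index(pos, offsetBy:) above
                break
        else:
            raise ValueError("invalid glyph at position %d" % pos)
    return value

n = roman_to_int("MMXIV")   # 2014
```

Keeping the position index and slicing a view is the same trick as the `SubString` suggestion for the Swift version.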
c++, beginner, c++17, game-of-life, sfml
if(texture.loadFromFile(filename)) {
this->textures[name] = texture;
}
}
sf::Texture *AssetManager::getTexture(std::string name) {
return &this->textures.at(name);
}
LifeState.cpp
#include <memory>
#include <iostream>
#include "LifeState.hpp"
#include <SFML/System/Clock.hpp>
void LifeState::init(GameDataRef &data) {
this->data = data;
auto size = this->data->assets.getTexture("tile")->getSize();
int width = this->data->window.getSize().x / size.x;
int height = this->data->window.getSize().y / size.y;
auto boolean = [](int x, int y, int height, int width) { return false; };
this->currentState.fill(height, width, boolean);
this->nextState.fill(height, width, boolean);
int posX = 0;
int posY = 0;
sf::Texture* tile = this->data->assets.getTexture("tile"); | {
"domain": "codereview.stackexchange",
"id": 35416,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, c++17, game-of-life, sfml",
"url": null
} |
memory-access, performance, matrix-multiplication
Title: Are there absolute reasons to prefer row/column-major memory ordering? I've heard it said that "fortran uses column-major ordering because it's faster" but I'm not sure that's true. Certainly, matching column-major data to a column-major implementation will outperform a mixed setup, but I'm curious if there's any absolute reason to prefer row- or column-major ordering. To illustrate the idea, consider the following thought experiment about three of the most common (mathematical) array operations:
Vector-vector inner products
We want to compute the inner product between two equal-length vectors, a and x:
$$
b = \sum_i a_i x_i.
$$
In this case, both a and x are "flat"/one-dimensional and accessed sequentially, so there's really no row- or column-major consideration.
Conclusion: Memory ordering doesn't matter.
Matrix-vector inner products
$$
b_i = \sum_j A_{ij} x_j
$$ | {
"domain": "cs.stackexchange",
"id": 20297,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "memory-access, performance, matrix-multiplication",
"url": null
} |
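The stride arithmetic underlying the whole question, as a sketch: in row-major storage the column index j is the fast (stride-1) direction, in column-major it is the row index i. For the matrix-vector product b_i = Σ_j A_ij x_j, the inner loop runs over j, so a row-major layout makes that loop contiguous:

```python
def row_major_index(i, j, ncols):
    return i * ncols + j     # stepping j moves by 1 (contiguous)

def col_major_index(i, j, nrows):
    return i + j * nrows     # stepping i moves by 1 (contiguous)

# flatten a 2x3 matrix A both ways
A = [[1, 2, 3],
     [4, 5, 6]]
row_major = [A[i][j] for i in range(2) for j in range(3)]   # [1,2,3,4,5,6]
col_major = [A[i][j] for j in range(3) for i in range(2)]   # [1,4,2,5,3,6]

picked = row_major[row_major_index(1, 2, 3)]   # A[1][2] == 6
```

Neither layout is absolutely faster; what matters is whether the innermost loop of the operation at hand walks the stride-1 index.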
ros, compile, library, ros-groovy, roscpp
Original comments
Comment by ahendrix on 2014-11-03:
I suspect there's something in your cmakelists that is causing the library search to fail. Please edit your question to include the CMakeLists.txt from your halcon package.
Comment by Ruud on 2014-11-04:
The CMakeLists.txt does not search for or include pthread. I did find a dependency in a third-party source file that includes "pthread.h", which is nowhere to be found in the include directories. Apparently the compiler looks for it and sometimes finds it, sometimes not.
Comment by Dirk Thomas on 2014-11-04:
As the error message says the library is used by the roscpp package (/opt/ros/groovy/share/roscpp/cmake/roscppConfig.cmake). I can't see any reason why the behavior should be non-deterministic. You might want to post the full code of your example (e.g. in a GitHub repo) for others to take a look. | {
"domain": "robotics.stackexchange",
"id": 19934,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, compile, library, ros-groovy, roscpp",
"url": null
} |
c++, c++11, multithreading, template, queue
MT_FIFO(const MT_FIFO& rhs)
{
std::lock(mutex_, rhs.mutex_);
std::lock_guard<std::mutex> lhs_lock(mutex_, std::adopt_lock);
std::lock_guard<std::mutex> rhs_lock(rhs.mutex_, std::adopt_lock);
wait_flag_ = rhs.wait_flag_.load();
this->copy_fifo(rhs);
}
MT_FIFO& operator=(const MT_FIFO& rhs)
{
if(this == &rhs)
{
return *this;
}
std::lock(mutex_, rhs.mutex_);
std::lock_guard<std::mutex> lhs_lock(mutex_, std::adopt_lock);
std::lock_guard<std::mutex> rhs_lock(rhs.mutex_, std::adopt_lock);
wait_flag_ = rhs.wait_flag_.load();
this->copy_fifo(rhs);
return *this;
}
bool try_pop(T *data)
{
std::lock_guard<std::mutex> lock(mutex_);
return this->pop_one_(data);
}
void wait_off() {wait_flag_ = false; cv_.notify_all();}
void wait_on() {wait_flag_ = true;} | {
"domain": "codereview.stackexchange",
"id": 25526,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, multithreading, template, queue",
"url": null
} |
c++, performance, c++11, union-find
or at the very least:
typedef long long ll; | {
"domain": "codereview.stackexchange",
"id": 29218,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, c++11, union-find",
"url": null
} |
ct.category-theory, edit-distance
Untyped Trees: Think of s-expressions only. The tree-edit-distance between two
trees is the string-edit-distance between the preorder traversals of said trees. See, for example, the papers by Demaine et al. or by Pawlik and Augsten.
Typed Trees: Patches over Abstract Syntax Trees that are guaranteed to preserve
the well-typedness of the object, ie, applying a patch will always yield a valid AST.
Under the typed umbrella, there are fewer edit operations one can consider. Substitution, for example, doesn't make sense. Nevertheless, there exists a diff over the preorder traversal of the trees by Lempsink et al., which was later extended by Vassena. I am currently focusing on approaches that move away from edit scripts because of the very problems I pointed out earlier, such as our latest work, or some earlier work which tries to take advantage of the structure of the type of the values being "patched". | {
"domain": "cstheory.stackexchange",
"id": 4705,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ct.category-theory, edit-distance",
"url": null
} |
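A tiny sketch of the untyped approach described above, with illustrative names: flatten each s-expression to its preorder traversal, then take the sequence edit distance of the results. (As a caveat, this sequence distance is generally only an approximation of a true tree edit distance.)

```python
def preorder(tree):
    """Preorder traversal of a nested-tuple s-expression."""
    if isinstance(tree, tuple):
        head, *children = tree
        out = [head]
        for c in children:
            out.extend(preorder(c))
        return out
    return [tree]

def edit_distance(a, b):
    """Classic Levenshtein distance over two sequences (one-row DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete x
                           cur[j - 1] + 1,       # insert y
                           prev[j - 1] + (x != y)))  # match / relabel
        prev = cur
    return prev[-1]

t1 = ("add", "x", ("mul", "y", "z"))
t2 = ("add", "x", ("mul", "y", "w"))
print(edit_distance(preorder(t1), preorder(t2)))  # one relabel -> 1
```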
quantum-field-theory, perturbation-theory
Title: Why do we need the coupling to be small when doing perturbative QFT calculations? I don't really understand why, when we calculate, say, the 2-point Green's function in a scalar QFT with interaction $\lambda \phi^4$, we need the coupling constant $\lambda$ to be small.
Everywhere I look it seems to be a result of having factors of the form $e^{\int \mathcal{L}_{\text{INT}}}$ and then requiring that $\mathcal{L}_{\text{INT}} = \lambda \phi^4$ be sufficiently small such that we can expand the exponential into its power series, and so we take $\lambda$ to be sufficiently small.
I don't understand why we need this assumption given the exponential is analytic everywhere and thus its Taylor series is well defined and equal to the exponential even when $\mathcal{L}_{\text{INT}} $ is large. We could therefore expand without this assumption, no? At what point are we using $\lambda \ll 1$?
Take for example the explanation at the beginning of the following notes on page 26 | {
"domain": "physics.stackexchange",
"id": 61580,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, perturbation-theory",
"url": null
} |
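The subtlety is not the Taylor series of the exponential (which indeed converges everywhere), but the interchange of that series with the functional integral: after integrating term by term, the resulting power series in $\lambda$ is only asymptotic. A standard zero-dimensional toy model makes this concrete:

```python
from math import prod

# Zero-dimensional toy "path integral" for the phi^4 interaction:
#     Z(lam) = (2*pi)**-0.5 * Integral exp(-x**2/2 - lam*x**4) dx.
# Expanding exp(-lam*x**4) in its (everywhere-convergent) Taylor series
# and integrating term by term uses the Gaussian moments
#     <x**(4n)> = (4n-1)!!,
# giving the perturbative series  sum_n (-lam)**n * (4n-1)!! / n!.
# Its terms eventually GROW for any lam > 0: the series is asymptotic,
# so truncations are useful only while lam is small.

def odd_double_factorial(m):
    """m!! for odd m: the product of the odd integers up to m."""
    return prod(range(1, m + 1, 2))

def series_term(n, lam):
    """Magnitude of the n-th term of the perturbative expansion of Z."""
    return odd_double_factorial(4 * n - 1) / prod(range(1, n + 1)) * lam ** n

lam = 0.01
terms = [series_term(n, lam) for n in range(1, 26)]
smallest = min(range(len(terms)), key=terms.__getitem__)
print("terms shrink up to order", smallest + 1, "and then diverge")
```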
quantum-mechanics, scattering
Rayleigh scattering is mainly elastic scattering from small particles whose size is less than the wavelength of the photon. The scattering can occur off atoms or molecules, and for molecules the scattering can be inelastic, with a change of rotational energy of the molecule.
Compton scattering is inelastic scattering of a photon from a free charged particle. If the charged particle is a bound electron then the energy of the photon must be much greater than the binding energy of the electron.
Side note: Rayleigh scattering is the small-particle limit of Mie scattering. Mie theory explains in particular the white colour of objects which are made of particles of size greater than the typical wavelength: milk, clouds, chemical powders...
To add to the answer there is Thomson (no "p") scattering which is the elastic scattering of electromagnetic radiation by a free charged particle, as explained by classical electromagnetism. | {
"domain": "physics.stackexchange",
"id": 28192,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, scattering",
"url": null
} |
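The inelasticity of Compton scattering described above is quantified by the wavelength shift $\Delta\lambda = (h/m_e c)(1-\cos\theta)$; a quick numerical illustration (constants are rounded CODATA values):

```python
from math import cos, pi

H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837e-31      # electron mass, kg
C = 2.99792458e8         # speed of light, m/s

def compton_shift(theta):
    """Photon wavelength change (in metres) for scattering angle theta."""
    return H / (M_E * C) * (1 - cos(theta))

# Forward scattering shifts nothing; backscattering (theta = pi) gives
# the maximum shift, twice the Compton wavelength h/(m_e c) ~ 2.43 pm.
print(compton_shift(0.0), compton_shift(pi))
```

The shift is a few picometres at most, which is why it matters for X-rays and gamma rays but is negligible for visible light.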
vector-fields, fluid-statics, surface-tension
$\oint_{C_\pi} dl\, \mathbf{\hat{n}}\cdot\mathbf{\hat{n}}=\oint_{C_\pi} dl=\text{circumference}$
since $\mathbf{\hat{n}}$ is the normalized normal. By the way, the next integral, $\iint_{W_\pi} dA \,\boldsymbol{\nabla}\cdot\mathbf{n}$, makes no sense at all in this case. The normal to the edge is only defined on the edge. How do you evaluate it in the bulk?
Perhaps you meant something like:
$\oint_{C_\pi} dl\,\mathbf{\hat{n}}\cdot\mathbf{F}$, for some generic field $\mathbf{F}$.
In this case $\oint_{C_\pi} dl\,\mathbf{\hat{n}}\cdot\mathbf{F}=\int_{W_\pi} d^2 r\, \boldsymbol{\nabla}\cdot\mathbf{F}$
But also, if you invert the boundary normal ($\mathbf{\hat{n}}\to -\mathbf{\hat{n}}$) and assume that $\mathbf{F}$ vanishes at infinity then the edge integral may be taken as the edge integral of $\mathbb{R}^2$ with $W_\pi$ removed: | {
"domain": "physics.stackexchange",
"id": 59819,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vector-fields, fluid-statics, surface-tension",
"url": null
} |
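The 2D divergence-theorem identity $\oint_{C_\pi} dl\,\mathbf{\hat{n}}\cdot\mathbf{F}=\int_{W_\pi} d^2 r\,\boldsymbol{\nabla}\cdot\mathbf{F}$ can be sanity-checked numerically on the unit disk, with the illustrative field $\mathbf{F}=(x,y)$ (so $\boldsymbol{\nabla}\cdot\mathbf{F}=2$):

```python
from math import cos, sin, pi

N = 2000                      # boundary samples
dt = 2 * pi / N

# Boundary term: on the unit circle the outward normal at angle t is
# (cos t, sin t) and F there is also (cos t, sin t), so n_hat . F = 1.
boundary = sum(
    (cos(k * dt) * cos(k * dt) + sin(k * dt) * sin(k * dt)) * dt
    for k in range(N)
)

# Area term: div F = 2 is constant; integrate in polar coordinates
# (dA = r dr dtheta) with a midpoint rule in r.
M = 400
dr = 1.0 / M
area = sum(2 * (i + 0.5) * dr * dr for i in range(M)) * 2 * pi

print(boundary, area)  # both equal 2*pi ~ 6.2832
```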
java, rags-to-riches
I put this together in a unit test. Here's the full file:
import org.junit.Assert;
import org.junit.Test;
public class ThreeInARow {
/**
* Determine whether three <code>int</code> values can be arranged in to an incrementing sequence.
*
* @param a the first value
* @param b the second value
* @param c the third value
* @return true if there is an order of the three inputs which makes them sequential
*/
public static final boolean isSequential(int a, int b, int c) {
final int x = Math.abs(a - b);
final int y = Math.abs(b - c);
final int z = Math.abs(a - c);
return x + y + z == 4 && x * y * z == 2;
}
private static final int[] FROM = {Integer.MIN_VALUE, Integer.MIN_VALUE + 3, -5, -4, -3, -2, -1,
0, 1, 2, 3, 100, Integer.MAX_VALUE - 2, Integer.MAX_VALUE};
@Test
public void testGoodBlocks() { | {
"domain": "codereview.stackexchange",
"id": 29291,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, rags-to-riches",
"url": null
} |
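The arithmetic trick in `isSequential` can be cross-checked against a brute-force definition (sketched in Python for brevity). For a run $n, n+1, n+2$ the three pairwise absolute differences are always $\{1, 1, 2\}$, so their sum is 4 and their product is 2; and since the largest difference equals the sum of the other two, no other integer triple satisfies both conditions (ignoring overflow):

```python
from itertools import permutations

def is_sequential(a, b, c):
    """Port of the Java trick: differences sum to 4 and multiply to 2."""
    x, y, z = abs(a - b), abs(b - c), abs(a - c)
    return x + y + z == 4 and x * y * z == 2

def brute(a, b, c):
    """Direct definition: some ordering is p, p+1, p+2."""
    return any(q == p + 1 and r == q + 1
               for p, q, r in permutations((a, b, c)))

vals = range(-6, 7)
assert all(is_sequential(a, b, c) == brute(a, b, c)
           for a in vals for b in vals for c in vals)
print("trick matches brute force on the sample range")
```

Note that the Python version sidesteps the `Integer.MIN_VALUE` overflow cases that the Java `FROM` array is clearly designed to probe.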
#### Pranav
##### Well-known member
Evaluate $\dfrac{1}{3^2+1}+\dfrac{1}{4^2+2}+\dfrac{1}{5^2+3}+\cdots$
Notice that the given sum can be written as:
$$\sum_{r=1}^{\infty} \frac{1}{(r+2)^2+r}=\sum_{r=1}^{\infty} \frac{1}{r^2+5r+4}=\sum_{r=1}^{\infty} \frac{1}{(r+4)(r+1)}$$
$$=\frac{1}{3}\left(\sum_{r=1}^{\infty} \frac{1}{r+1}-\frac{1}{r+4}\right)$$
$$=\frac{1}{3}\left(\sum_{r=1}^{\infty}\int_0^1 x^r-x^{r+3}\,dx\right)=\frac{1}{3}\left( \sum_{r=1}^{\infty} \int_0^1 x^r(1-x^3)\,dx\right)$$
$$=\frac{1}{3}\int_0^1 (1-x^3)\frac{x}{1-x}\,dx = \frac{1}{3}\int_0^1 x(x^2+x+1)\,dx=\frac{1}{3}\int_0^1 x^3+x^2+x \,dx$$
Evaluating the definite integral gives:
$$\frac{1}{3}\cdot \frac{13}{12}=\frac{13}{36}$$
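A quick numerical cross-check of this closed form: partial sums of $\sum_{r\ge 1} 1/((r+2)^2+r)$ should approach $13/36$.

```python
# Sum the series directly; the telescoping tail shrinks like 1/n,
# so 200000 terms are ample for 4-5 digits of agreement.
partial = sum(1.0 / ((r + 2) ** 2 + r) for r in range(1, 200001))
print(partial, 13 / 36)  # both ~ 0.3611
```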
#### anemone
##### MHB POTW Director
Staff member | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9904406023459218,
"lm_q1q2_score": 0.8358279678581253,
"lm_q2_score": 0.8438950966654774,
"openwebmath_perplexity": 10322.888516076788,
"openwebmath_score": 0.909170389175415,
"tags": null,
"url": "https://mathhelpboards.com/threads/evaluate-the-sum-to-infinity.9432/"
} |