| text | source |
|---|---|
sensor-fusion
That's it! That's all that you do to get the quaternion.
Now, as I mentioned earlier, the accelerometers give an absolute orientation (in roll and pitch only!!), so you can use those values to correct for long-term drift in the results you get from using the gyroscope. For more information on how to do this, try checking out the Madgwick Filter. (Final comment: I was going to link you directly to Sebastian Madgwick's page, but it looks like it's down. Not sure what happened, but I'd get the paper I linked sooner rather than later if you're interested, in case he's taking down his IP.)
In response to your questions
OP asks (paraphrased) - "A quaternion $q$ represents a pose, so there is no angular rate information, so how does quaternion multiplication $q\otimes S$ make sense units-wise? If I consider $q$ to have units of rate, then the numeric integration doesn't make sense."
My response - I wish I could explain the physical meaning of the quaternion math, but I can't; I don't know how to. All I can say is, the formula for the quaternion's rate of change is $dq/dt = \frac{1}{2}q\otimes S$, and that you then perform numeric integration by $q = q + (dq/dt)\,dt$.
Quaternions are unitless, so that may help your conceptual understanding. Consider the quaternion rate equation to be "scaling" the gyro rates to fit the quaternion format. That is, the term $\frac{1}{2}q\otimes$ converts $S$ to $dq/dt$.
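A minimal numeric sketch of this integration step (my own illustration, not from the answer; function names are made up, and it assumes the gyro rates are already in rad/s):

```python
import math

def quat_mult(q, r):
    # Hamilton product of two quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    """One Euler step of dq/dt = (1/2) q (x) S, where
    S = (0, wx, wy, wz) packs the gyro rates in rad/s."""
    S = (0.0,) + tuple(omega)
    q_dot = tuple(0.5 * c for c in quat_mult(q, S))
    q = tuple(qi + di * dt for qi, di in zip(q, q_dot))
    norm = math.sqrt(sum(c * c for c in q))
    return tuple(c / norm for c in q)  # renormalize to unit length
```

Renormalizing after each step keeps numeric-integration drift from pushing $|q|$ away from 1, which is another common source of instability.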
If you're having stability issues, the very first thing I would check is that you are actually passing your angular rates as radians per second and not degrees per second. Based on your math above, it looks like your gyro outputs in units of deg/s. | {
"domain": "robotics.stackexchange",
"id": 1138,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "sensor-fusion",
"url": null
} |
calibration, camera-calibration
(ok, downsampled_corners, board) = self.get_corners(scrib, refine = True)
File "/opt/ros/groovy/lib/python2.7/dist-packages/camera_calibration/calibrator.py", line 367, in get_corners
(ok, corners) = _get_corners(img, b, refine)
File "/opt/ros/groovy/lib/python2.7/dist-packages/camera_calibration/calibrator.py", line 156, in _get_corners
(ok, corners) = cv.FindChessboardCorners(mono, (board.n_cols, board.n_rows), cv.CV_CALIB_CB_ADAPTIVE_THRESH | cv.CV_CALIB_CB_NORMALIZE_IMAGE | cv2.CALIB_CB_FAST_CHECK)
error: blockSize % 2 == 1 && blockSize > 1 | {
"domain": "robotics.stackexchange",
"id": 13212,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "calibration, camera-calibration",
"url": null
} |
• I think the gif doesn't really fit this site, though I can't say if there is a specific rule against such things – Yuriy S Oct 10 '16 at 11:27
• @YuriyS: Hmm ... I am sorry to learn that people may find it inappropriate! That was by no means my intention! Do you think it would work better by removing the gif and leaving the link and the figure of speech, or are all of those obfuscating the points I tried to make? – String Oct 10 '16 at 12:21
• @String: your link to a cartoon movie of a dead animal being bludgeoned is disproportionate and offensive. – Rob Arthan Mar 12 '17 at 1:27
• @RobArthan: Sorry, in some parts of the world such a cartoon would be considered merely a funny way to illustrate the saying about beating a dead horse. No offense intended, only a light tone. I cannot help that people do take offense, so I have removed it. Still it puzzles me how a cartoon matching the content of a saying would offend. I am from Denmark, after all. – String Mar 12 '17 at 8:11
• @String: on MSE it's easier just to use neutral language. As I am English (after all), may I point out that the cliched phrase is actually "flogging a dead horse" and it doesn't have the connotations you think it does (it's not "explaining to death", it's using a tired old argument that has lost all interest or relevance). Your cartoon doesn't help with that. – Rob Arthan Mar 13 '17 at 1:55
1. Well... yes... it does break down. You assumed that there are only 6 primes and reached a contradiction. You've successfully proved that there aren't only 6 primes.
2. Under the assumption that there are only 6 primes, 30,031 isn't factorizable. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9780517450056274,
"lm_q1q2_score": 0.8646361651533242,
"lm_q2_score": 0.8840392848011833,
"openwebmath_perplexity": 240.20242253351205,
"openwebmath_score": 0.8419486880302429,
"tags": null,
"url": "https://math.stackexchange.com/questions/1960055/proof-of-infinitely-many-prime-numbers/1960060"
} |
javascript, css, animation, sass
Title: "Star Field" Animation with JavaScript and Sass-CSS I made this animation over the weekend as a "just for fun" thing, and to play with various techniques I've seen in other people's code.
I think it works quite alright.
Nevertheless I would appreciate comments about:
How to structure / shorten the JavaScript- and Sass-code better?
How could the animation be improved so that it runs more smoothly and more "naturally"?
How to improve the responsiveness?
Looking forward to reading your comments and suggestions.
'use strict';
var field = document.querySelector('.field');
var starCount = 1000;
function addStar(parent, maxX, maxY) {
    var x = Math.floor(Math.random() * (maxX + 1));
    var y = Math.floor(Math.random() * (maxY + 1));
    var randomNumber = Math.random();
    var star = document.createElement('div');
    var starTop = document.createElement('div');
    var starBottom = document.createElement('div');
    var styleValue = 'left: ' + x + 'px; top: ' + y + 'px;';
    var starKind = '';
    switch (true) {
        case randomNumber < 0.25:
            starKind = 'large-star';
            break;
        case randomNumber < 0.5:
            starKind = 'medium-star';
            break;
        case randomNumber < 0.75:
            starKind = 'small-star';
            break;
        default:
            starKind = 'tiny-star';
    }
    starTop.classList.add(starKind + '-top');
    starBottom.classList.add(starKind + '-bottom');
    star.appendChild(starTop);
    star.appendChild(starBottom);
    star.setAttribute('style', styleValue);
    star.classList.add('star');
    star.classList.add(starKind);
    parent.appendChild(star);
    return star;
} | {
"domain": "codereview.stackexchange",
"id": 22395,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, css, animation, sass",
"url": null
} |
[Scrambled SEO-page text about *Differential Equations with Boundary Value Problems*, 3rd edition; the prose is shuffled beyond recovery. Recoverable fragments reference: the textbook by Asmar, Nakhle H. (ISBN 9780486807379); a solution manual for *Applied Partial Differential Equations with Fourier Series and Boundary Value Problems*, 5th edition, by Richard Haberman; *Partial Differential Equations and Boundary-Value Problems with Applications*, 3rd edition, by Mark Pinsky; and a discussion of applying boundary conditions instead of initial conditions when solving boundary value problems.] | {
"domain": "tracxsystems.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9825575157745541,
"lm_q1q2_score": 0.8055923614973635,
"lm_q2_score": 0.8198933381139645,
"openwebmath_perplexity": 553.1635635023474,
"openwebmath_score": 0.5857695937156677,
"tags": null,
"url": "http://beta.tracxsystems.com/view/sakura-pigma-fvgan/page.php?id=d5c41d-differential-equations-with-boundary-value-problems-3rd-edition"
} |
cc.complexity-theory, space-bounded, exp-time-algorithms, space-complexity
How do they know that $\inf_{n\to \infty} \frac{S(cn) + cn}{2^n} > 0$? Somehow the order of quantifiers in the statement of the fact seems not to match that kind of deduction. It does not say that if we have any machine with $S'(n)$ which also accepts $L$ then we must have
$$
\inf_{n\to \infty} \frac{S'(n)}{S(n)} > 0.
$$
It just says that if this equals zero, then we can choose some language accepted in space bound $S(n)$ but not in space bound $S'(n)$, so it depends on the initially chosen $S'(n)$. But the argument of the proof does not seem to use it that way. Also the introduction "Let $L$ be as in Fact 2 [...]" seems problematic: for what space bound function $S'(n)$ in the numerator is $L$ chosen?
So how does this argument work? What am I missing here? Can someone please explain?
EDIT: Just let me add that if I accept $\inf_{n\to \infty} \frac{S(cn) + cn}{2^n} > 0$, then the claim that $S(n) \ge d^n$ for some $d > 1$ and all multiples of $c$ is clear to me. For then we know for some $\varepsilon > 0$ we have
$$
S(cn) > 2^n\varepsilon - cn
$$
for all $n > N$ for some $N$. Write with $d > 1$ then $2^n = (d + \delta)^n \ge d^n + n d^{n-1}\delta \ge d^n + | {
"domain": "cstheory.stackexchange",
"id": 4591,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cc.complexity-theory, space-bounded, exp-time-algorithms, space-complexity",
"url": null
} |
c++, c++11, overloading
The result when running your main.cpp is:
4685
Move ctor
Move assign
468
Showing that yes, the move ctor and move assignment operator were actually used. | {
"domain": "codereview.stackexchange",
"id": 19205,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, overloading",
"url": null
} |
dynamics
The cross, being an unactuated bearing, cannot support a moment. If it weren't for the faces or end-caps colliding with one another, the entire assembly would swing completely down:
This is what I meant when I said the cross moves - you have one joint between the base and the cross, and then you have another joint between the cross and the end-effector cylinder. The complex part here isn't the joint definitions, it's the addition of the constraint. You don't have one pictured here, so I'm not sure what you're expecting is going to hold the end-effector up.
You either need a bearing or support of some kind for the end-effector or you need to actuate the universal/Cardan joint. | {
"domain": "robotics.stackexchange",
"id": 2120,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "dynamics",
"url": null
} |
thermodynamics, energy, temperature, kinetic-theory
If you shine a bright light - or microwave oven, or laser, or maser or incandescent light, or other strong EM energy source - onto or through an object, the photons have an opportunity to interact with the electrons in the object. If they don't interact with the object, they pass through it without reflection or absorption and the object is clear at that wavelength.

If the incoming photons have wavelengths / energies which happen to be resonant with allowed transitions in the atoms / molecules they hit, some of the photons' energy will be absorbed into the atoms. The details depend on which transitions are stimulated by the radiation, but generally you can either raise electrons' potential energy on absorption or add energy to rotational and vibrational states. Rotational and vibrational energy are directly components of kinetic energy, so immediately increase the temperature of the absorbing substance. Energy absorbed into electronic energy states may either be re-emitted or get redistributed into rotational/vibrational modes, depending on the structure of the absorbing molecules, how much they interact with neighboring atoms, and overlap between the electronic and rovibrational transitions.

If the incident light is re-emitted immediately at the same wavelength, it's reflected. Metals are shiny because they have large numbers of free electrons that efficiently re-emit incident light at many wavelengths. If all the light is reflected or transmitted without being absorbed or redistributed among molecular motions, no heating occurs.

The details of what happens (absorption vs reflection vs transmission and which mechanical mode is stimulated) are going to depend on things like the density of "free" electrons, the polarizability of the atoms' / molecules' electron clouds, the frequency of the incident light, the masses of the atoms, the polarization of the incident photons, etc. 
However, the core physics involved is that an EM wave ("light" / "photon") contains an oscillating electric field (and an oscillating magnetic field, but usually the electric component is more important). When it hits a molecule, the electrons and protons feel opposing forces due to the electric field. This sets up an electric dipole in the individual atoms, and that dipole then feels a force in response to the still-present, still-changing electromagnetic wave. The electrons have much lower masses than the nuclei, so respond more quickly, but still not instantaneously - resulting in the electron | {
"domain": "physics.stackexchange",
"id": 91576,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, energy, temperature, kinetic-theory",
"url": null
} |
(a)
$\begin{cases} S(i,j) = S(i-1,j) \text{ or } S(i-1,j-w_i)\\ S(0,j) = 0\\ S(i,0) = 1 \end{cases}$
(b) If $m$ is stored in $k$ bits, then $m$ can be as large as $2^k$, so the time complexity $O(2^k \times n)$ is not polynomial in the input size.
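The recurrence in (a) can be sketched directly (my illustration; the exam only gives the recurrence, so the `weights`/table layout here is an assumption matching the usual subset-sum formulation, where S(i, j) asks whether some subset of the first i weights sums to j):

```python
def subset_sum(weights, target):
    """S[i][j] = True iff some subset of the first i weights sums to j.
    Recurrence: S[i][j] = S[i-1][j] or S[i-1][j - w_i]."""
    n = len(weights)
    S = [[False] * (target + 1) for _ in range(n + 1)]
    S[0][0] = True                      # the empty subset sums to 0
    for i in range(1, n + 1):
        w = weights[i - 1]
        for j in range(target + 1):
            S[i][j] = S[i - 1][j] or (j >= w and S[i - 1][j - w])
    return S[n][target]
```

The double loop makes the $O(m \times n)$ (pseudo-polynomial) running time visible: the inner dimension is the numeric value of the target, not its bit length.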
13. (6%) Testing gifted or mediocre: m students take an exam which has n questions. Gifted students get all n answers right. Mediocre students get less than n/2 answers right. Grade all the exams, giving all gifted students an ‘A’ and all mediocre students a ‘C’.
Algorithm 1:
1. For each student, grade at most the first n/2 questions in order – stop as soon as you see a wrong answer.
2. If you’ve seen a wrong answer, give grade ‘C’. Otherwise give grade ‘A’.
Algorithm 2:
1. For each student, choose 10 questions at random and grade them.
2. If you’ve seen a wrong answer, give grade ‘C’. Otherwise give grade ‘A’.
Algorithm 3:
1. For each student, repeatedly choose a question at random and grade it, until you have graded n/2 correct answers or seen a wrong answer.
2. If you’ve seen a wrong answer, give grade ‘C’. Otherwise give grade ‘A’.
Explain the correctness and the running time of these three algorithms.
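Algorithm 1 above can be sketched as follows (my illustration; the `answers`/`key` list representation is an assumption not given in the problem):

```python
def grade_algorithm_1(answers, key):
    """Algorithm 1: grade at most the first n/2 questions in order,
    stopping as soon as a wrong answer is seen."""
    n = len(key)
    for i in range(n // 2):
        if answers[i] != key[i]:
            return 'C'  # saw a wrong answer
    return 'A'          # first n/2 answers all correct
```

This is correct because a mediocre student gets fewer than n/2 answers right in total, so the first n/2 answers cannot all be correct; it grades at most n/2 questions per student.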
14. (6%)
(a) What is an optimal Huffman code for the set of frequencies, $\{1,1,2,3,5,8\},$ based on the first six Fibonacci numbers?
(b) Generalize your answer to find the optimal code when the frequencies are the first n Fibonacci numbers. | {
"domain": "eecsmt.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9808759649262344,
"lm_q1q2_score": 0.8486775034663451,
"lm_q2_score": 0.8652240791017536,
"openwebmath_perplexity": 1704.955713403054,
"openwebmath_score": 0.6259288191795349,
"tags": null,
"url": "https://eecsmt.com/graduate-school/exam/104-nthu-cs-ds/"
} |
ros, ros-melodic, trac-ik, arm-kinematics
Title: Closing kinematic loop with trac_ik
Hello List,
I am creating a simulation of a rower in a boat using ROS, Gazebo and trac_ik.
There are three kinematic loops in the robot's description. These cannot be described in URDF, so they must be closed when generating the SDF from it.
I use ik_fast to find the correct joint values for this. I created stubs on both sides of the (still) broken arms and ask trac_ik how to close them.
The project can be found on my GitHub page.
The code resides in the boot3_description directory.
Please see this screenshot of the rower in rviz.
The loop with the left arm is closed, but the right arm not yet.
Also note the stub-links that have to be placed on top of each other.
I created a program, ik_1.py, that uses the joint state controller and trac_ik to calculate the correct values of the arm joints to close the loop.
The new initial joint values are written in param.xacro to be used later.
Now my question.
The program works perfectly when closing the left arm, but does NOT yield the proper values for the right arm.
trac_ik finds a solution, but when the new values are used the right arm is not connected.
I can't find any difference in the two chains, apart, of course, from the different values for the other arm.
How can trac_ik find a solution that is clearly wrong?
Hopefully someone can shed some light on this.
UPDATE: I was confused, I use trac_ik, not ik_fast.
Thanks in advance, Sietse
Originally posted by Sietse on ROS Answers with karma: 168 on 2019-03-07
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 32606,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-melodic, trac-ik, arm-kinematics",
"url": null
} |
Posted by: Tim Campion on March 28, 2016 3:03 PM | Permalink | Reply to this
### Re: The Most Common Prime Gaps
Yes, that’s the pattern, and the answer to Jesse’s question.
Posted by: John Baez on March 28, 2016 5:07 PM | Permalink | Reply to this
### Re: The Most Common Prime Gaps
The news seems to be trickling out. I got this email today:
PRIME NEWS: Last Digits 1,3,7 and 9 permeates significantly throughout primes whole complex.
Beeing the significant most frequent digits in primes, Last Digits 1,3,7 and 9 permeates throughout its whole complex.
The digits are not uniformly distributed but Benfordian, indicating primes are governed by a flux combination of Benfords Law and Last Digits. Hence consecutive LD=1 has greater probability. When sizing prime data-set, researchers should be aware of the importance of complete and fair rounds of first digits 1-9. Otherwise; the results may be biased.
You can see the numerical evidence, distributions and explanations in “primes” on www.stringotype.com.
It is hypothesized that the result corresponds to the dimensionless Fine Structure Constant - a fractal order in nature. Base 10 number system maps the order with zero skewness, giving rise to many circadian rythms beeing purely reflected in numbers. Hence digits corresponds to primary respondents and can possibly be intepreted as integrals of functional processes.
Best regards
Terje Dønvold
Oslo
Posted by: John Baez on March 30, 2016 7:54 PM | Permalink | Reply to this
Post a New Comment | {
"domain": "utexas.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211619568682,
"lm_q1q2_score": 0.8434387472247115,
"lm_q2_score": 0.865224084314688,
"openwebmath_perplexity": 1067.0095199908965,
"openwebmath_score": 0.6767888069152832,
"tags": null,
"url": "https://golem.ph.utexas.edu/category/2016/03/the_most_common_prime_gaps.html"
} |
python, beginner, sorting, quick-sort
Title: Quicksort implementation in Python I have written an implementation of Quicksort in Python. I am new to Python. Any suggestions for improvement or criticism on my use of Python?
from random import shuffle

def partition(a, lo, hi):
    i, j, v = lo+1, hi, a[lo]
    while(True):
        while(a[i] < v):
            i += 1
            if (i == hi): break
        while(a[j] > v):
            j -= 1
            if (j == lo): break
        if (i >= j): break
        a[i], a[j] = a[j], a[i]
    a[lo], a[j] = a[j], a[lo]
    return j

def sort(a, lo, hi):
    if (hi <= lo):
        return
    q = partition(a, lo, hi)
    sort(a, lo, q-1)
    sort(a, q+1, hi)
    assert isSorted(a, lo, hi)

def quick_sort(a):
    shuffle(a)
    sort(a, 0, len(a)-1)
    assert isSortedArray(a)

def isSorted(a, lo, hi):
    for i in range(lo, hi):
        if a[i+1] < a[i]:
            return False
    return True

def isSortedArray(a):
    for i in range(0, len(a)-1):
        if a[i+1] < a[i]:
            return False
    return True

When describing quicksort partitioning, your v is typically called the "pivot". The code would be clearer if you named the variable according to that convention.
You always choose a[lo] as the pivot. However, that produces pathological performance when the input array is already sorted.
I would prefer to see
while(a[i] < v):
    i += 1
    if (i == hi): break
… written as
while i < hi and a[i] < pivot:
    i += 1 | {
"domain": "codereview.stackexchange",
"id": 10376,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, sorting, quick-sort",
"url": null
} |
kalman-filters, matrix
Title: Why is this matrix invertible in the Kalman gain? In the wikipedia article about Kalman filters, the well-known expression of the matrix of Kalman gains is given:
$$ \mathbf {K} _{k}=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\text{T}}\mathbf {S} _{k}^{-1} $$
with
$$\mathbf{S}_k=\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\text{T}}+\mathbf {R} _{k}.$$
I understand that $\mathbf{R}_k$, as a covariance matrix, can be asked to be non-singular: it is reasonable to believe that no variance is zero. But this does not answer my question: why is $\mathbf{S}_k$ invertible? Note that $\mathbf{P} _{k\mid k-1}$, just like $\mathbf{R}_k$, is also a covariance matrix, and for this reason it is (at least) positve semi-definite, i.e., $\mathbf{y}^T\mathbf{P}_{k\mid k-1}\mathbf{y}\ge 0$ for $\mathbf{y}\neq\mathbf{0}$. Now set $\mathbf{y}=\mathbf{H}_k^T\mathbf{x}$ to see that also $\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\text{T}}$ is at least positive semi-definite (positive definite if $\mathbf{P}_{k\mid k-1}$ is positive definite and $\mathbf{H}_k$ has full rank). Finally, note that the sum of two positive semi-definite matrices is positive semi-definite. | {
"domain": "dsp.stackexchange",
"id": 4177,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "kalman-filters, matrix",
"url": null
} |
organic-chemistry, reaction-mechanism, aromatic-compounds, synthesis
Title: How to make 1,1-diphenyl-1-butene from benzophenone and 1-bromopropane? I would like to know a method for synthesizing
1,1-diphenyl-1-butene $(\ce{C16H16})$ from benzophenone $(\ce{(C6H5)2CO},$ generally abbreviated $\ce{Ph2CO})$ and 1-bromopropane $(\ce{CH3CH2CH2Br}).$
I think that the carbonyl group which is a functional group composed of a carbon atom double-bonded to an oxygen atom $(\ce{C=O})$ of benzophenone will react to the alkyl halide.
Because oxygen is more electronegative than carbon, carbonyl compounds often have resonance structures which affect their reactivity. This relative electronegativity draws electron density away from carbon, increasing the bond's polarity, therefore making carbon an electrophile (i.e. slightly positive).
But I have no idea how it reacts and proceeds to work. Can anyone help me understand the detailed mechanisms and reaction procedures to make 1,1-diphenyl-1-butene from benzophenone and 1-bromopropane?

I present another (almost) two-step synthesis, involving the famous Wittig reaction. The first step details the preparation of a phosphonium ylide, which subsequently is allowed to react with the carbonyl compound (benzophenone in our case) to yield the final product, 1,1-diphenyl-1-butene.
For mechanistic details, visit NotEvans.'s answer to Which is the currently accepted mechanism of a Wittig reaction?. | {
"domain": "chemistry.stackexchange",
"id": 12267,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, reaction-mechanism, aromatic-compounds, synthesis",
"url": null
} |
nuclear-physics, radioactivity, differential-equations
Title: Can the decay rate of nuclear decay be proportional to the second/third exponent of number of nuclei? An equation we all come across in high-school physics:
$$\frac{-dN}{dt}= kN$$ where $N$ is the number of nuclei left.
Is this always true for spontaneous nuclear decays? In chemistry, we find second- and third-order reactions. Similarly, has anyone found a spontaneous decay where the decay rate is proportional not to the first power of the number of remaining nuclei, but to the second or third power?

The nuclear forces responsible for radioactive decay are short-ranged and so isolated from other forces (such as the much weaker E&M forces) that this results in the exponential decay law. I believe there have been experiments that demonstrate that these decays can be slightly affected by exposure to very strong E&M fields, but this is a special circumstance that normally does not occur. I would have to Google to find references to those experiments. I believe the experiments were conducted on nuclear isomers.
Edit: The experiments that I was remembering took place between 1998 and 2007, but a Google search reveals that those experiments have now been discredited. You may read about this episode here. The search terms that I used were: nuclear, isomer, decay, stimulated. If you follow the links resulting from this search, you may find the original sources. | {
"domain": "physics.stackexchange",
"id": 41634,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nuclear-physics, radioactivity, differential-equations",
"url": null
} |
quantum-mechanics, quantum-information, hilbert-space, measurement-problem, density-operator
However, POVMs (or maybe general measurements) are the "right" way to think about measurements. The paradigm of open quantum systems, which is very important for real-world experiments, is inherently built into POVMs, and they also tell us why measurements sometimes seem not to be repeatable in the lab. So POVMs are not some theoretical construct floating in philosophy space (closed quantum systems), but more operational descriptions of measurements. In addition, they are better to work with when describing real-world situations.
As a final note: General measurements are not considered heavily in the literature. Peter Shor was so kind as to point out an (old) example of their use with this Peres–Wootters paper (paywall!). Usually, however, I find that people work with POVMs instead of general measurements.
"domain": "physics.stackexchange",
"id": 22067,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-information, hilbert-space, measurement-problem, density-operator",
"url": null
} |
$$x^3 ( 1 - 4x^2) < 0$$
$$x^3(1 - 2x) (1 + 2x) < 0$$
$$4x^3(x - 1/2)(x + 1/2) > 0$$ (Notice the flipped sign. We multiplied both sides by -1 to convert 1/2 - x to x - 1/2)
Now the transition points are 0, -1/2 and 1/2 so put + in the rightmost region.
The solution will be x > 1/2 or -1/2 < x < 0.
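As a quick numeric spot-check (my addition, not part of the original post), sampling one point from each region delimited by the transition points confirms the sign pattern:

```python
def f(x):
    # left-hand side of the original inequality x^3 (1 - 4x^2) < 0
    return x**3 * (1 - 4 * x**2)

# One sample point per region delimited by -1/2, 0, 1/2:
assert f(-0.75) > 0   # x < -1/2: not a solution
assert f(-0.25) < 0   # -1/2 < x < 0: solution
assert f(0.25) > 0    # 0 < x < 1/2: not a solution
assert f(0.75) < 0    # x > 1/2: solution
```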
Check out these posts discussing such complications:
http://www.veritasprep.com/blog/2012/06 ... e-factors/
http://www.veritasprep.com/blog/2012/07 ... ns-part-i/
http://www.veritasprep.com/blog/2012/07 ... s-part-ii/
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog | {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9653811611608241,
"lm_q1q2_score": 0.8534348638996897,
"lm_q2_score": 0.8840392771633079,
"openwebmath_perplexity": 2382.6171032034335,
"openwebmath_score": 0.5893213152885437,
"tags": null,
"url": "http://gmatclub.com/forum/inequalities-trick-91482.html?kudos=1"
} |
python, beginner, python-3.x, programming-challenge
Title: Project Euler # 20 factorial digit sum in Python
n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27. Find the sum of the digits in the number 100!
def fact(n):
    """returns factorial of n."""
    if n <= 1:
        return 1
    return n * fact(n - 1)

def count_digits(n):
    """Assumes n > 1.
    returns sum of digits of n's factorial."""
    factorial = fact(n)
    total = 0
    for digit in str(factorial):
        total += int(digit)
    return total

if __name__ == '__main__':
    print(count_digits(100))

The standard-library module math already contains a factorial function. On my machine it is about 20 times faster than your function using n = 100. It also does not suffer from stack size limitations as yours does (try computing fact(3000)).
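For comparison, that standard-library route makes the whole exercise a short function (a sketch I added; the function name is my own):

```python
import math

def digit_sum_of_factorial(n):
    # Sum the decimal digits of n! using the stdlib factorial
    return sum(int(d) for d in str(math.factorial(n)))
```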
Alternatively you could learn about memoizing, which will help you in many Project Euler problems. Here it would be useful if you had to evaluate the factorial of many numbers (and even better if the numbers are increasing).
from functools import wraps

def memoize(func):
    cache = func.__cache = {}
    @wraps(func)
    def wrapper(*args, **kwargs):
        key = args, frozenset(kwargs.items())
        if key in cache:
            ret = cache[key]
        else:
            ret = cache[key] = func(*args, **kwargs)
        return ret
    return wrapper

@memoize
def fact(n):
    ...
Note that this decorator only works if your arguments are hashable (so no lists for example). | {
"domain": "codereview.stackexchange",
"id": 40811,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-3.x, programming-challenge",
"url": null
} |
c++, template
template <typename T>
struct is_vector3 : std::false_type {};
template <typename T>
struct is_vector3<Vec3<T>> : std::true_type {};
template <typename T>
struct is_vector4 : std::false_type {};
template <typename T>
struct is_vector4<Vec4<T>> : std::true_type {};
// -- Unary operators --
template <typename T> static inline
std::enable_if_t<is_vector<T>::value, T>
operator+(const T& v)
{
    return v;
}

template <typename T> static inline
std::enable_if_t<is_vector<T>::value, T>
operator-(const T& v)
{
    T result(uninitialize);
    for (unsigned int i = 0; i < T::size; i++)
        result[i] = -v[i];
    return result;
}

// -- Binary operators --
template <typename T> static inline
std::enable_if_t<is_vector<T>::value, T>
operator+(const T& v, const typename T::value_type& s)
{
    T result(uninitialize);
    for (unsigned int i = 0; i < T::size; i++)
        result[i] = v[i] + s;
    return result;
}

template <typename T> static inline
std::enable_if_t<is_vector<T>::value, T>
operator+(const typename T::value_type& s, const T& v)
{
    T result(uninitialize);
    for (unsigned int i = 0; i < T::size; i++)
        result[i] = s + v[i];
    return result;
} | {
"domain": "codereview.stackexchange",
"id": 29789,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, template",
"url": null
} |
electromagnetism, lagrangian-formalism
The answer is Yes. Below we will show this. Let the speed of light be $c=1$ from now on.
1) Field variables. The model has $2\times 3=6$ gauge potential fields ${\cal A}^a_i(\vec{x},t)$. Here $i=1,2,3$ are three spatial directions, and $a=1,2$ is an internal $SO(2)$ index. The gauge potential transforms
$${\cal A}^a_i\to \sum_{b=1}^2M^a{}_b {\cal A}^b_i $$
in the 2-dimensional fundamental representation of $SO(2)$, where
$$M^a{}_b =\left[\begin{array}{cc} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) &\cos(\theta)\end{array}\right] \in SO(2), \qquad \sum_{b,c=1}^2 (M^{t})_a{}^b g_{bc} M^c{}_d = g_{ad}, \qquad g_{ab}\equiv \delta_{ab}.$$
The magnetic field $\vec{B}$ and electric field $\vec{E}$ are given by the curl of the gauge potential $\vec{\cal A}^a$,
$$\vec{\cal B}^a := \vec{\nabla} \times \vec{\cal A}^a, \qquad a=1,2, $$
where
$$ \vec{B}\equiv\vec{\cal B}^1 \qquad \mathrm{and} \qquad \vec{E}\equiv \vec{\cal B}^2.$$
It is easy to check that
$${\cal B}_i^a\to \sum_{b=1}^2 M^a{}_b {\cal B}_i^b$$ | {
"domain": "physics.stackexchange",
"id": 976,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, lagrangian-formalism",
"url": null
} |
def func(x, p1, p2):
    return p1*np.cos(p2*x) + p2*np.sin(p1*x)

popt, pcov = curve_fit(func, xdata, ydata, p0=(1.0, 0.2))
The variable popt contains the fit parameters
array([ 1.88184732, 0.70022901])
We need to do a little more work to get the sum of squared residuals
p1 = popt[0]
p2 = popt[1]
residuals = ydata - func(xdata, p1, p2)
fres = sum(residuals**2)
which gives
0.053812696547933969
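The same residual computation works for any model function; a dependency-free sketch (the linear model and data here are made up for illustration):

```python
def sum_squared_residuals(model, params, xdata, ydata):
    # Same quantity as fres above: sum over (y_i - f(x_i))^2
    return sum((y - model(x, *params)) ** 2 for x, y in zip(xdata, ydata))

line = lambda x, a, b: a * x + b
print(sum_squared_residuals(line, (2.0, 0.0), [0, 1, 2], [0.0, 2.0, 5.0]))  # 1.0
```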
1. Thanks a lot for the clear information and examples. I have a question you could probably shed some light on. Since I started my Ph.D. I decided to use Python (numpy, scipy, etc.) as my main scientific software tool. So far I am very pleased with the decision I made; however, now I am trying to solve a nonlinear optimisation problem which basically consists in fitting some data to a cascade of linear filters and some static nonlinearities. I put together a script in Python which uses “scipy.optimize.leastsq()”. I haven’t been able to get an acceptable fit so far and the speed is not great either. So the question is: in your experience would you say that it is a good option to use this function, or are the Matlab ones better quality? And in your mind which Matlab function would be equivalent to this Python one?
2. Hi Carlos
I’ve never done a speed/quality comparison between these optimisation functions on different systems I’m afraid. All I can say is that I’ve not had a problem with the Python ones so far.
Best Wishes,
Mike
3. Hello Mike,
Thanks for your prompt answer. Perhaps in the near future I will carry out some comparison tests. If I get any significant/interesting results I will share them here.
Cheers,
Carlos.
4. Hi,
you can use the full_output flag (borrowed from the scipy.optimize.leastsq function) to obtain further information about the solution: | {
"domain": "walkingrandomly.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731137267749,
"lm_q1q2_score": 0.8864194675162246,
"lm_q2_score": 0.920789679151471,
"openwebmath_perplexity": 1504.927461305425,
"openwebmath_score": 0.5015531182289124,
"tags": null,
"url": "https://walkingrandomly.com/?p=5215"
} |
Using @anderstood answer we can play with graphics to make the surface transparent and plot ball movement:
V[q1_, q2_] := m/2 q1 q1 + k/2 q2 q2
surf = Plot3D[
V[q1, q2] /. {m -> 1, k -> 3, \[Omega] -> Sqrt[k/m]}, {q1, -5,
5}, {q2, -5, 5},
RegionFunction ->
Function[{q1, q2}, m/2 q1^2 + k/2 q2^2 <= 12 /. {m -> 1, k -> 3}],
Mesh -> None,
ColorFunction ->
Function[{z}, Opacity[0.4, #] &@ColorData["TemperatureMap"][z]]]
traj[t_] :=
Evaluate[Flatten@sol /. \[Omega] -> Sqrt[k/m] /. {m -> 1,
k -> 3} /. {p10 -> 3, p20 -> 1, q20 -> -1.5, q10 -> 2}]
frames = Table[
Show[surf, Graphics3D@{Red, Ball[traj[t], 0.2]}], {t, 0, 10,
0.1}];
Export["Documents/animBall.gif", frames] | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.976310526632796,
"lm_q1q2_score": 0.84293832428746,
"lm_q2_score": 0.863391617003942,
"openwebmath_perplexity": 7111.308929883166,
"openwebmath_score": 0.31206706166267395,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/133773/plot-particle-motion-in-potential"
} |
ros, ros-kinetic, ros-canopen
* /lh_arm/pos_based_pos_traj_controller_arm_lh/joints: ['lh_arm_0_joint'...
* /lh_arm/pos_based_pos_traj_controller_arm_lh/publish_rate: 50
* /lh_arm/pos_based_pos_traj_controller_arm_lh/required_drive_mode: 7
* /lh_arm/pos_based_pos_traj_controller_arm_lh/type: position_controll...
* /lh_arm/robot_description: <?xml version="1....
* /robot_description: <?xml version="1....
* /rosdistro: kinetic
* /rosversion: 1.12.14 | {
"domain": "robotics.stackexchange",
"id": 33167,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-kinetic, ros-canopen",
"url": null
} |
photons, diffraction, x-rays, x-ray-crystallography, braggs-law
Title: Is the Compton effect observed during regular x-ray diffraction?
Here we have the standard setup to observe the Compton effect. From what I understand, due to the particle-like nature of light, when the x-ray photons collide with the electrons in the scatterer they lose energy and thus have a larger wavelength.
I was just wondering: if we were to remove the scatterer and just shine a narrow beam of x-ray photons on the crystal, would we still observe a Compton effect? Even in that case we would still have the x-ray photons striking the electrons in the crystal, so is the same loss of energy not present there as in the case of the photons striking the scatterer first?
One reason I think we will not observe the effect is that the electrons in the atom are tightly bound, so the photons colliding with the electrons have a very small change in their overall energy, as the electrons have the effective mass of the entire atom and thus gain little energy during such a collision. Although, this is just a guess. Yes, in fact Compton scattering is often an annoyance in x-ray diffraction when you want to study the diffuse background from crystal defects and disorder. The amount of Compton scattering is highly dependent on the x-ray energy, and is highest when it reaches the energy scale of the electron rest mass of 511 keV.
Below is a plot showing the various contributions to x-ray interactions with matter as a function of energy. Note how Compton scattering begins dominating at higher energy. "Thomson scattering" is what you would call ordinary x-ray diffraction. Photoelectric contributions amount to absorption of the x-rays, so they don't show up directly in the scattered beam, but they explain why most x-rays are absorbed rather than scattered.
I should add though, Compton scattering can actually be useful in studying crystals, as it can measure the momentum of electrons in the crystal. In a sense you can then reconstruct both the charge density and momentum density of electrons by combining x-ray scattering in the Thomson and Compton regimes. | {
"domain": "physics.stackexchange",
"id": 58271,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "photons, diffraction, x-rays, x-ray-crystallography, braggs-law",
"url": null
} |
scikit-learn, feature-scaling, encoding, serialisation
Title: Are scalers or encoders supposed to be serialized along with trained models? Consider the very basic example below:
X = data.drop("Price", axis = 1)
y = data["Price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1)
scaler = MinMaxScaler()
model = LinearRegression()
scaler.fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)
model.fit(X_train_s, y_train)
model.score(X_test_s, y_test)
The above code simply splits the data into inputs and outputs, for training and testing, scales the inputs using the scaler object, uses the data to train a Linear Regression model, and then test the model.
Now suppose I am satisfied with the results, so I can serialize the model to a joblib file, but any data that goes in has to be scaled first, right? So, should I do something like below?
joblib.dump((scaler, model), "scaler_and_model.joblib") | {
"domain": "datascience.stackexchange",
"id": 11798,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "scikit-learn, feature-scaling, encoding, serialisation",
"url": null
} |
# A weird subset of $\mathbb R^2$
Is there a path-connected subset of $\mathbb R^2$ such that any path connecting 2 distinct points in that subset has infinite length? I am told that there is such a set, but I don't know what it is. Thank you.
-
Something like a Koch snowflake should work. – Chris Eagle Feb 13 '12 at 20:33
There is, for example almost every path of a standard Brownian motion. – Did Feb 13 '12 at 20:34
Would $\{x \}$ count? – Matt Feb 13 '12 at 20:36
@Matt I don't believe that's in the spirit of the question. – Austin Mohr Feb 13 '12 at 20:46
The graph of any continuous nowhere differentiable function $\mathbb R\to\mathbb R$ is an example (or any continuous function that is not of bounded variation on any subinterval).
If $f:[a,b]\to\mathbb R$ is of bounded variation, then $f'$ exists in $(a,b)$ except on a set of measure $0$. Thus if $f:\mathbb R\to\mathbb R$ is nowhere differentiable, then $f$ is not of bounded variation on any subinterval.
Suppose that $f:\mathbb R\to\mathbb R$ is continuous and nowhere differentiable. Then $G=\{(x,f(x)):x\in\mathbb R\}\subset \mathbb R^2$ is path connected, and if $a<b$, then the length of any path in $G$ connecting $(a,f(a))$ to $(b,f(b))$ is bounded below by the total variation of $f$ on $[a,b]$, hence is infinite.
Exercise 26 in Chapter 3 on page 49 of Wheeden and Zygmund's Measure and integral gives a hint of a way to construct a continuous function on $[0,1]$ that is not of bounded variation on any subinterval, with no reference to differentiation, using a modification of the construction of the Cantor function. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9825575157745542,
"lm_q1q2_score": 0.8144800562334095,
"lm_q2_score": 0.8289388083214156,
"openwebmath_perplexity": 144.080486923193,
"openwebmath_score": 0.8751280903816223,
"tags": null,
"url": "http://math.stackexchange.com/questions/109023/a-weird-subset-of-mathbb-r2"
} |
electrostatics
The complication that you have highlighted arises because the book uses $\hat x$ as the unit vector.
The displacement from the origin is then $\vec x = x \hat x$ and $x$, the component of $\vec x$ in the $\hat x$ direction, can be either positive or negative but because of the $x^2$ term the direction information seems to be lost.
$\vec E = E_{\rm x} \hat x$ where $E_{\rm x}$ is the component of the electric field in the $\hat x$ direction.
If one writes the electric field as $\vec E = E_{\rm x} \hat x= \dfrac{kQ}{|x^3|}x\hat x$ then $\dfrac{kQ}{|x^3|}x$ is the component of the electric field in the $\hat x$ direction.
If you use this equation then the signs take care of themselves and you get the correct sign for the component of the electric field in the $\hat x$ direction as the sign of that component is determined by the sign of the product $Qx$.
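That sign rule is simple enough to write down as a throwaway check (a sketch, not from the textbook; only the sign of the product Q*x matters):

```python
def field_direction(Q, x):
    # Direction of the x-component of the field on the axis,
    # determined by the sign of the product Q*x
    return "+x" if Q * x > 0 else "-x"

assert field_direction(+1.0, +2.0) == "+x"
assert field_direction(+1.0, -2.0) == "-x"
assert field_direction(-1.0, +2.0) == "-x"
assert field_direction(-1.0, -2.0) == "+x"
```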
$Q$ positive and $x$ positive $\Rightarrow$ electric field direction is $+\hat x$
$Q$ positive and $x$ negative $\Rightarrow$ electric field direction is $-\hat x$
$Q$ negative and $x$ positive $\Rightarrow$ electric field direction is $-\hat x$
$Q$ negative and $x$ negative $\Rightarrow$ electric field direction is $+\hat x$
as per the diagrams from your textbook. | {
"domain": "physics.stackexchange",
"id": 47417,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics",
"url": null
} |
scikit-learn, random-forest
Title: Explaining feature_importances_ in Scikit Learn RandomForestRegressor For a project, I used the feature_importances_ attributes from the RandomForestRegressor. Everything works well but I don't know how to explain why one feature is more important than another. I mean I know that the higher the score is the higher the importance, but I don't understand how it is calculated.
For exemple, if a variable as a score of 0.35 what does it mean?
I would appreciate if someone could explain me how it works!
Thanks! scikit-learn's RandomForestRegressor feature importance is computed in each tree composing the forest. You can find the source code here (starting at line 1053).
What it does is, for each node in the tree where the split is made on the feature, it subtracts each child node's (left and right) impurity value from the parent node's impurity value. If impurity decreases a lot (meaning the feature performs an efficient split), it basically gives a high score. Of course, all of that is weighted by how many samples the split affects: a split between two individuals can give a high impurity decrease, but it is trivial, and much easier to achieve than one between two large populations.
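The weighting can be sketched for a single split (function name and numbers are illustrative, not scikit-learn's actual code):

```python
def split_importance(n_parent, imp_parent, n_left, imp_left,
                     n_right, imp_right, n_total):
    # Impurity decrease of one split, weighted by the fraction
    # of all samples that reach the parent node.
    children = (n_left * imp_left + n_right * imp_right) / n_parent
    return (n_parent / n_total) * (imp_parent - children)

# Same "clean" split, but over 100 samples vs. over only 2 samples:
big = split_importance(100, 0.5, 50, 0.2, 50, 0.2, n_total=200)
tiny = split_importance(2, 0.5, 1, 0.2, 1, 0.2, n_total=200)
print(big, tiny)  # the large-sample split scores ~50x higher
```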
Once the feature importance has been determined for each tree, it is summed up and normalized so that the feature_importances_ vector sums up to 1.
It might introduce some biases; I guess that could be the case if variables are not scaled, for instance. However it is quite easy to compute (you just have to read impurity values from the tree), so I guess this is why it is provided by default. But the method is not unequivocal; there are other methods out there that you can implement more or less manually:
shuffling a feature's values in the dataset
reverting the result of each test based on the feature
... and probably a few other approaches. All those will give you other scores that may be of help. | {
"domain": "datascience.stackexchange",
"id": 6211,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "scikit-learn, random-forest",
"url": null
} |
# Is there an inequality operator (Less, Greater) for lists of elements (as opposed to elements)?
Like multi-column sorting. For example,
{1,3,5} > {1,2,5}
would return True, while
{1,3,5} > {1,5,2}
would return False.
I'm betting there's a simple term for this and that term would lead straight to some function, but I can't think of the term. It'd be a trivial function to write, but there must be something built-in...
(I'm looking for a predicate I can supply to this PriorityQueue implementation.)
• Tiebreaking is a term that just came to mind, but searching it yields no results. – Andrew Cheong Mar 19 '15 at 6:29
• Look at OrderedQ... – ciao Mar 19 '15 at 6:35
• Agh, that was what I was looking for. Still not well-versed in $Mathematica$. I didn't think to read through the *Q functions, even after thinking the word "Predicate"—I guess I'd only seen more primitive *Q functions so didn't think I'd find my answer there. In case this question might help someone else searching the terms I've used, could you post your comment as an answer? Thanks. – Andrew Cheong Mar 19 '15 at 6:38
• It's been a long day. Thanks, I'll fix. – Andrew Cheong Mar 19 '15 at 8:07
• what about {1,3,5} > {1,2,7}? You could make a case for either answer. Or {1,3,5} > {1,1,3,5} ? – ControlAltDel Mar 19 '15 at 12:21
I propose using Order, assuming equal-length lists.
Order[{1, 3, 5}, {1, 3, 4}]
Order[{1, 3, 5}, {1, 5, 2}]
Order[{1, 3, 5}, {1, 3, 5}]
-1
1
0
You can assign an infix operator if you wish:
CirclePlus = Order; | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9465966641739773,
"lm_q1q2_score": 0.839824913087131,
"lm_q2_score": 0.8872045981907006,
"openwebmath_perplexity": 3468.9200969363237,
"openwebmath_score": 0.38250893354415894,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/77668/is-there-an-inequality-operator-less-greater-for-lists-of-elements-as-oppose/77670"
} |
galaxy, redshift, spectroscopy, spectra, luminosity
Title: How to convert theoretical template spectrum from luminosity density to flux density units? I'm working with galaxy spectral templates (e.g., Bruzual & Charlot 2003) which seem to always come with y-axis units of $L_{\odot}$/A and x-axis units of Angstroms. Thus the y-axis is a luminosity density instead of a flux density. In contrast, observationally, we tend to always work with spectra that have y-axis units of flux density ($F_{\lambda}$ in erg/s/cm$^2$/A or $F_{\nu}$ in erg/s/cm$^2$/Hz). Similarly, spectral energy distributions (SEDs) from photometry tend to have $\lambda F_{\lambda}$ or $\nu F_{\nu}$ such that the y-axis is flux, not flux density.
How do I convert a theoretical template spectrum from units of luminosity density ($L_{\odot}$/A) to flux density (erg/s/cm$^2$/A)?
For context, I want to fit spectral templates to an observed SED. The observed SED is for an object at a redshift $z$, so I think I can either convert the templates to flux density units, or I can convert my observed SED to luminosity density units. I feel like working in flux density units is more natural -- plus I'm not sure if multiplying the observed SED y-axis values by $4\pi D^2$ (D is distance of object) and x-axis (wavelengths) by $1/(1+z)$ would be sufficient (e.g., normalization concerns). Use this equation:
$$F_\nu = \frac{L_\nu}{4\pi D_L^2}.$$ | {
"domain": "astronomy.stackexchange",
"id": 2124,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "galaxy, redshift, spectroscopy, spectra, luminosity",
"url": null
} |
earth, atmospheric-science, popular-science, fluid-statics
Title: Would a pipe from the surface to the Earth's exosphere suck all atmosphere to the space? If I built a tube from Earth's surface to the exosphere, would all the air be sucked out to space?
If this pipe reached to a big planet, like Jupiter, would its gravity through the pipe suck our atmosphere?
If one end of the pipe was at the Earth's core, and the other in the exosphere, would the magma go there, like in a giant volcano?
No, it would not be sucked off, for the same reason that the earth has an atmosphere to begin with: gravity.
No, for the same reason that Jupiter doesn't have a noticeable pull on you: the strength of gravity decreases with the inverse square of distance.
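A rough order-of-magnitude check of point 2 (approximate values for Jupiter's mass and its closest approach to Earth are assumed):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_jup = 1.9e27       # Jupiter's mass, kg (approx.)
r = 5.9e11           # Earth-Jupiter distance at closest approach, m (approx.)

g_jup_here = G * M_jup / r**2
print(g_jup_here)          # roughly 4e-7 m/s^2
print(9.81 / g_jup_here)   # Earth's own surface gravity wins by a factor of ~10^7
```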
No, Gravity is too strong.
Your misconception seems to be coming from the idea of a vacuum and a straw. The vacuum itself is not what causes the sucking. It is the atmospheric pressure that causes sucking. | {
"domain": "physics.stackexchange",
"id": 24809,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "earth, atmospheric-science, popular-science, fluid-statics",
"url": null
} |
electromagnetism, electricity
Title: How do I find the polarity of a U-shaped electromagnet? How do I find the polarity of a U-shaped electromagnet?
Current flowing clockwise --> South Pole
Current flowing anticlockwise --> North Pole
However the direction of flow of current changes when seen from the top as compared to the bottom. From the examples, I find that one should view the direction from the bottom. So why is this so?
In this image:
I believe the polarity at P is North and that at Q is South. Am I right?
Is there any other method to determine the polarity of a U-shaped electromagnet? The current direction $I$ is from the positive terminal of the voltage source to the negative terminal.
Look end on along the axis of the electromagnet.
Clockwise current $I$ direction $\Rightarrow$ south pole
Anticlockwise current $I$ direction $\Rightarrow$ north pole | {
"domain": "physics.stackexchange",
"id": 50421,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, electricity",
"url": null
} |
the initial conditions are defined as:. The stability condition (1. Generic solver of parabolic equations via finite difference schemes. As of now a small portion of possible inputs is implemented; one can change: - the mesh file - the geometry file - introduce more/different Dirichlet boundary conditions (different geometry or values). The geometries used to specify the boundary conditions are given in the square_1x1. differential equations, Heat conduction, Dirichlet and Neumann boundary Conditions I. 2) can be derived in a straightforward way from the continuity equation, which states that a change in density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Every node and every side of the rectangle must be common with adjacent elements except for sides on the boundaries. The Neumann boundary condition is a type of boundary condition, named after Carl Neumann (1832-1925, figure 3). First Problem: Slab/Convection. Use a routine in LAPACK to solve the tridiagonal system of linear equations (e.g., dgtsv). The Domain Dimension—1D, 2D, and 3D. PDEs are complex enough in 1D: string, acoustic duct, beam, chemical tubular reactor, etc. 1 meters, but zero for r>0. The heat equation is also widely used in image analysis (Perona & Malik 1990) and in machine learning as the driving theory behind scale-space or graph Laplacian methods. Laplace boundary value problem on a rectangle. One should pick the homogeneous Neumann boundary conditions (8) du(x)/dn = 0 on the boundary. If the temperature distribution on the boundary is enforced to be g(x) then one should pick the Dirichlet boundary condition (3). We are interested in solving the heat equation with mass being created at a random point chosen with distribution in D and dissipated on the boundary in such a way that the total mass increases in time, leading to a super-critical regime.
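A minimal 1D illustration of one explicit finite-difference step with homogeneous Neumann (zero-flux) ends, sketched under the usual stability condition r = alpha*dt/dx^2 <= 1/2 (illustrative only, not the solver described above):

```python
def heat_step(u, alpha, dx, dt):
    # One explicit Euler step of u_t = alpha * u_xx with zero-flux ends,
    # using mirrored ghost points u[-1] = u[1] and u[n] = u[n-2].
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    n = len(u)
    new = list(u)
    for i in range(n):
        left = u[i-1] if i > 0 else u[1]
        right = u[i+1] if i < n-1 else u[n-2]
        new[i] = u[i] + r * (left - 2*u[i] + right)
    return new

print(heat_step([0.0, 1.0, 0.0], alpha=1.0, dx=1.0, dt=0.25))  # [0.5, 0.5, 0.5]
```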
Another way of viewing the Robin boundary conditions is that they typify physical situations where the boundary "absorbs" some, but not all, of the energy, heat, mass…, being transmitted through it. Louise Olsen-Kettle, The University of Queensland, School of Earth Sciences, Centre for Geoscience Computing. Note that this is true
"domain": "auit.pw",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9886682478041813,
"lm_q1q2_score": 0.8128722891905571,
"lm_q2_score": 0.8221891327004133,
"openwebmath_perplexity": 666.6580393036137,
"openwebmath_score": 0.8180767297744751,
"tags": null,
"url": "http://qxvy.auit.pw/2d-heat-equation-neumann-boundary-conditions.html"
} |
reinforcement-learning, q-learning, variance
This is incorrect. There is not really such a thing as "the reward for the current state" in the general case of an MDP. If you mean that $V(S_t)$ should include the value of $R_t$, then this is still wrong, given David Silver's use of the conventions for time step indexing. It is possible to associate immediate reward with either the current time step, leading to the sequence $S_0, A_0, R_0, S_1, A_1, R_1$, etc., or you can use the convention of immediate reward being on the next time step: $S_0, A_0, R_1, S_1, A_1, R_2$, etc. David Silver (and Sutton & Barto's book) uses the latter convention.
Under that convention:
$$V(s) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty}
\gamma^{k}R_{t+k+1}|S_t=s]$$
$$Q(s,a) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty} \gamma^{k}R_{t+k+1}|S_t=s, A_t=a]$$
You can see that the first term in the expansion of the sum for both Q(s,a) and V(s) is $R_{t+1}$. If you changed the convention, then both would include the equivalent value, but would be labelled $R_{t}$ in any formula.
Q and V do not differ in which time steps they sum reward over. They may differ in the value of $R_{t+1}$ because $V(s)$ assumes following the policy $\pi$ when selecting $A_t$ whilst $Q(s,a)$ uses the value $a$ supplied as a parameter for $A_t$, which can be different.
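The indexing convention is easy to pin down with a toy return computation (a sketch; `rewards[k]` stands for $R_{t+k+1}$):

```python
def discounted_return(rewards, gamma):
    # G_t = sum_k gamma^k * R_{t+k+1}, with rewards[k] = R_{t+k+1}
    return sum(gamma ** k * r for k, r in enumerate(rewards))

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```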
how can we be certain what to subtract from what, such that our Advantage is always positive? | {
"domain": "datascience.stackexchange",
"id": 11307,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, q-learning, variance",
"url": null
} |
audio, discrete-signals
Title: Getting loudness of a track with RMS I'm trying to calculate the loudness of an audio track I have stored in a buffer. The buffer contains PCM data of the signal and I want to get how 'loud' it is by using the Root Mean Square (RMS). I assume I can do this in the time domain instead of having to switch to the frequency domain. What would be the pseudo-code for doing this?
Would I simply sample for one second (audio[0] - audio[44099], audio[44100] - audio[88199], etc.) and calculate the RMS of those values? So, for example, would I do this:
$$RMS = \sqrt{\frac{\text{audio}[0]^2 + \text{audio}[1]^2 + \text{audio}[2]^2.....\text{audio}[44099]^2}{44100}}$$
for each second? Another thing is that the RMS value is not very well correlated with perceived loudness. You might want to consider calling it level or volume instead.
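That formula, applied block by block, can be sketched as follows (assuming floating-point PCM samples and a 44.1 kHz rate by default):

```python
import math

def rms_per_block(audio, block_size=44100):
    # RMS of each consecutive block of samples, as in the formula above.
    out = []
    for start in range(0, len(audio), block_size):
        block = audio[start:start + block_size]
        out.append(math.sqrt(sum(s * s for s in block) / len(block)))
    return out

print(rms_per_block([0.5, -0.5, 0.5, -0.5], block_size=4))  # [0.5]
```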
There is something called equal loudness contours which quantifies how sensitive the ear is to one particular frquency compared to another frequency, see the Wikipedia article. These curves are level dependent.
For instance, the ear is very sensitive to a 1kHz tone compared to a 100Hz tone, as shown in this image (horizontal axis is frequency in Hz):
One of the relative simple things you can do is to filter your PCM data with an inverted equal loudness curve. Or you can apply the standard A weighting, see the Wikipedia Weighting Filter article. Then you can compute the RMS value of the output of the equal loudness weighted filter. | {
"domain": "dsp.stackexchange",
"id": 37,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "audio, discrete-signals",
"url": null
} |
java, multithreading, thread-safety, http
Supplier<Set<String>> httpRequestSupplier = () -> getElementsFromHttpRequest();
startAsynchronousElementSource(consumer, httpRequestSupplier);
The final statement produces the output on the console:
elementsStream.map(e -> e + "_MAPPED").forEach(System.out::println);
Where is the synchronisation???
The work of synchronisation is done by the "LinkedBlockingDeque". As we only delegate to ONE synchronized method of the "LinkedBlockingDeque" in "getElement" and "registerElement" we do not need our own synchronisation. | {
"domain": "codereview.stackexchange",
"id": 24160,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, multithreading, thread-safety, http",
"url": null
} |
biochemistry
Hence the substrate-binding site of α-amylase does not have access to the residues that need to bind for it to perform hydrolysis of glycogen, and, indeed, the enzyme that breaks down glycogen — glycogen phosphorylase — is specific for these free ends.
The α-amylases that can hydrolyse both α-1,4 and α-1,6 glycosidic links are quite few compared with those with specificity to one or the other type of linkage (see Table 2 of the MacGregor review, if you can obtain access to it). The impression obtained from following up two of the examples there is that the enzymes involved can exist in alternative conformations, the correct one of which is triggered by the substrate. An example is the glycogen debranching enzyme, the studies of which in Sulfolobus solfataricus and Candida glabrata can be read freely on-line. Although somewhat less directly relevant, the example of a Thermoactinomyces vulgaris neopullulanase is another variation on this theme. | {
"domain": "biology.stackexchange",
"id": 8225,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "biochemistry",
"url": null
} |
algorithms, graphs, hamiltonian-path
Title: Detecting Hamiltonian path in a graph There are various methods to detect hamiltonian path in a graph.
Brute force approach. i.e. considering all permutations T(n)=O(n*n!)
Backtracking T(n)=O(n!)
Using Dynamic programming T(n)=O(2^n * n^2)
Now, there is another method using topological sort. Topological sort has an
interesting property: that if all pairs of consecutive vertices in the sorted order are connected by
edges, then these edges form a directed Hamiltonian path in the DAG. If a Hamiltonian path
exists, the topological sort order is unique. Also, if a topological sort does not form a
Hamiltonian path, the DAG will have two or more topological orderings.
Approximation Algorithm: Compute a topological sort and check if there is an edge between each
consecutive pair of vertices in the topological order.
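The check itself is linear-time; a sketch (graph given as an edge set plus a precomputed topological order):

```python
def topo_order_is_hamiltonian_path(order, edges):
    # A Hamiltonian path exists in the DAG iff every consecutive
    # pair in the topological order is joined by an edge.
    return all((u, v) in edges for u, v in zip(order, order[1:]))

print(topo_order_is_hamiltonian_path(["a", "b", "c"],
                                     {("a", "b"), ("b", "c")}))   # True
print(topo_order_is_hamiltonian_path(["a", "b", "c"],
                                     {("a", "b"), ("a", "c")}))   # False
```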
I have a doubt: why is it considered an approximate algorithm? Wouldn't it give the correct output every time? What are the cases when it won't give the correct output? Hamiltonian Path in a DAG is easy to solve: you can find the longest path in $O(|V|+|E|)$ time using the critical path algorithm: https://en.wikipedia.org/wiki/Longest_path_problem#Acyclic_graphs_and_critical_paths. For unweighted DAGs, this sounds like essentially the same thing as your topological sort.
Hamiltonian Path is NP-hard on digraphs with cycles, and on undirected graphs. | {
"domain": "cs.stackexchange",
"id": 11405,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs, hamiltonian-path",
"url": null
} |
c#, optimization
private ulong GetUlongProperty(string propertyName, string tableName) {
...
}
}
Then use that class from Win32ComputerSystem and Win32OperatingSystem:
public class Win32ComputerSystem
{
private PropertyGetter propertyGetter;
public Win32ComputerSystem(PropertyGetter propertyGetter)
{
this.propertyGetter = propertyGetter; // TODO: check null
}
public string GetName()
{
return propertyGetter.GetStringProperty("Name", "Win32_ComputerSystem");
}
...
}
See also: Effective Java, Second Edition, Item 16: Favor composition over inheritance
In the PropertyGetter class you could eliminate some duplication from the Get*Property methods if you extract the duplicated logic into private methods.
Another note: returning 0 or an error message instead of the expected value seems a little bit dangerous.
if (!enu.MoveNext()) return 0;
...
if (!enu.MoveNext()) return "Unable to retrieve " + propertyName + " from Win32_ComputerSystem!";
If that's an exceptional case an exception might be better. Clients of these classes might use these values as valid data and you just postpone the error which makes debugging harder. (The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas: Dead Programs Tell No Lies.) | {
"domain": "codereview.stackexchange",
"id": 6461,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, optimization",
"url": null
} |
java, swing, traveling-salesman
MainFrame f = new MainFrame();
}
});
}
public JFrame frame = new JFrame();
public JPanel panel = new JPanel(new BorderLayout());
private TSPDrawer tsp;
public MainFrame() {
tsp = new TSPDrawer();
JButton button1 = new JButton("Start Simulation");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.add(panel);
panel.setBackground(Color.white);
frame.add(button1, BorderLayout.NORTH);
button1.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
tsp.startSimulation(true);
}
});
panel.add(tsp);
frame.pack();
frame.setVisible(true);
}
public class TSPDrawer extends JPanel {
private Timer timer;
private int displayNoOfSteps = 0;
public Solution solution = null;
private int noOfFrames;
public TSPDrawer() {
setOpaque(false);
timer = new Timer(80, new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
solution = new Solution(Solution.getPlayback(),displayNoOfSteps);
noOfFrames = Solver.getPoints().size();
if (displayNoOfSteps<noOfFrames+1) {
repaint();
displayNoOfSteps++;
}
else
startSimulation(false);
}
});
timer.setRepeats(true);
timer.setCoalesce(true);
}
@Override
public Dimension getPreferredSize() {
return new Dimension(1000, 600);
}
protected void paintComponent(Graphics g) {
super.paintComponent(g);
if (!(solution==null)) {
solution.draw(g);
}
} | {
"domain": "codereview.stackexchange",
"id": 14215,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, swing, traveling-salesman",
"url": null
} |
computational-geometry, doubly-connected-edge-list
1: I will assume that you just have a graph for now, but perhaps a simplicial complex might be a better description? I don't think it matters much for the discussion here, though. | {
"domain": "cs.stackexchange",
"id": 16913,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computational-geometry, doubly-connected-edge-list",
"url": null
} |
Now I need to show that this works. Let $p(x,y)$ be the probability that $D_x$ beats $D_y$ before one of these edge iterations, and let $p'(x,y)$ be the probability that $D'_x$ beats $D'_y$ after one of these edge iterations.
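The algebraic identity used below, $\frac{k^2}{2(k+2)^2} + \frac{2}{k+2} = \frac12 + \frac{2}{(k+2)^2}$, can be sanity-checked exactly with rational arithmetic:

```python
from fractions import Fraction

# p'(a,b) = k^2/(k+2)^2 * p(a,b) + 2/(k+2), with p(a,b) = 1/2 for the old dice
for k in (Fraction(n) for n in range(1, 100)):
    p_new = k**2 / (k + 2)**2 * Fraction(1, 2) + 2 / (k + 2)
    assert p_new == Fraction(1, 2) + 2 / (k + 2)**2   # strictly exceeds 1/2
print("identity holds for k = 1..99")
```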
First look at $D'_a$ and $D'_b$. $D'_a$ has a $\frac{2}{k+2}$ chance of rolling $m+3$ and winning over any $D'_b$ roll. $D'_a$ can also win half the time when both dice roll numbers appearing on their old versions, which occurs with probability $\frac{k^2}{(k+2)^2}$. \begin{align} p'(a,b) & = \frac{k^2}{(k+2)^2} p(a,b) + \frac{2}{k+2} \\ & = \frac{k^2}{2(k+2)^2} + \frac{2}{k+2} \\ & = \frac{k^2 + 4k + 8}{2(k+2)^2} \\ & = \frac{(k+2)^2 + 4}{2(k+2)^2} \\ & = \frac12 + \frac{2}{(k+2)^2} \text{ so } D'_a \text{ beats } D'_b. \end{align} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9861513869353992,
"lm_q1q2_score": 0.8085389590300468,
"lm_q2_score": 0.8198933447152497,
"openwebmath_perplexity": 378.5845929103986,
"openwebmath_score": 0.9812066555023193,
"tags": null,
"url": "https://math.stackexchange.com/questions/1624550/can-you-create-non-transitive-dice-for-any-finite-graph/1624632"
} |
floating-point, numerical-analysis
$L$ -- lower
$U$ -- upper
For example, suppose $F(10,3,-10,10)$
$4.00\times10^5$ represents the number $400000$
But how are we going to represent $0$ in such a case?
Are we going to give $0$ directly or $0.00\times10^0$? Note: In the interest of making this somewhat self-contained, I am using terminology from the most recent versions of the IEEE-754 standard. Prior to 2008, "subnormal numbers" were called "denormal numbers", and "binary32" was called "single precision". Some textbooks/papers/etc may use the old terms.
The representation that you are talking about here is called, in IEEE-754, normal numbers. A normal number is one which has a single nonzero digit on the left-hand side of the radix point (i.e. decimal point or binary point) of its mantissa.
The representation for zero uses a slightly different representation, namely, subnormal numbers.
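One can inspect the binary32 encoding of zero directly (a sketch using Python's `struct` module; the field widths follow the binary32 layout):

```python
import struct

def binary32_fields(x):
    """Split a float's binary32 encoding into (sign, exponent, significand)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF    # 8-bit exponent field
    significand = bits & 0x7FFFFF     # 23-bit significand field
    return sign, exponent, significand

print(binary32_fields(0.0))    # (0, 0, 0): exponent field 0 => subnormal encoding
print(binary32_fields(-0.0))   # (1, 0, 0): negative zero differs only in the sign bit
print(binary32_fields(1.0))    # (0, 127, 0): a normal number with biased exponent
```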
Taking binary32 as our example, there are three fields:
The sign bit, which is 1 bit in size.
The exponent field, which is 8 bits in size.
The significand field, which is 23 bits in size. | {
"domain": "cs.stackexchange",
"id": 17136,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "floating-point, numerical-analysis",
"url": null
} |
beginner, programming-challenge, rust
Title: ARC 067 - read ints and find the best choices I just started learning Rust, and this is my solution to a problem from AtCoder's Regular Contest #067, D - Walk and Teleport.
The problem can be summarized as follows:
There are N towns in a line, and you are about to visit all of them.
Town i is located at the point with coordinate xi.
You can travel either by walking or by teleporting. (You can combine them)
Walking costs you a * |Δx|, where a is a constant given and Δx is the distance you are about to travel.
Teleporting to any location costs you b, regardless of the distance you travel.
Given a, b and a sorted list of coordinates (xi), find the minimum cost to travel all towns.
Input is given in the form of
N a b
x1 x2 ... xN
where N is the number of cities and a, b, xi are as described above. Again, xi is sorted. All a, b, xi are integers in the range of [1, 10^9].
My speculation is that you can just travel from left to right, and choose the cheaper way to travel to the next town (a*(xj+1-xj) versus b).
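That greedy idea can be sketched independently of Rust (a Python sketch; the sample values below are illustrative):

```python
def min_travel_cost(a, b, xs):
    """Greedy: for each adjacent pair of towns, walk if a*gap < b, else teleport."""
    return sum(min(a * (x2 - x1), b) for x1, x2 in zip(xs, xs[1:]))

# gaps are 1, 3, 2; costs are min(2,5)=2, min(6,5)=5, min(4,5)=4, total 11
print(min_travel_cost(2, 5, [1, 2, 5, 7]))  # 11
```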
This is my code.
use std::io;
use std::cmp::min;
fn main (){
let input_one = read_ints();
let a = input_one[1]; let b = input_one[2];
let xs = read_ints();
println!("{}", solve(&a,&b,&xs));
}
fn distances(xs:&[u64]) -> Vec<u64>{
// Find the distances between adjacent towns
xs.iter().zip(xs.iter().skip(1)).map(|(a,b)| b-a).collect()
} | {
"domain": "codereview.stackexchange",
"id": 24762,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, programming-challenge, rust",
"url": null
} |
ros
Title: Selling robots with ROS
Hi all,
My apologies if this topic has been mentioned before. I would like to know if the BSD license would allow me to sell a ROS-based product or a modification of it without being sued? Any additional information is very welcome.
Regards,
Renato Samperio.
Originally posted by Renato Samperio on ROS Answers with karma: 21 on 2012-08-31
Post score: 3
!!! NOTICE !!!
Not all code in the ROS ecosystem has been released under BSD or similarly permissive licenses. The default license created by roscreate-pkg is "BSD" but some people do not know or do not care what this means, and they can be careless with the code they use or link against in that package.
If you are concerned about licensing, make sure that the contents of the package and all of its dependencies are also covered under the appropriate license for your application.
The answer for the ROS core, however, is YES. In fact, the entire motivation for pushing that the ROS core and most of the mantle is BSD-licensed is to enable commercialization of ROS-based robots. You can even build and sell proprietary software built on top of BSD-licensed code.
Also it might be useful to link people to the ROS developer's guide, which goes over some of these issues.
Originally posted by jbohren with karma: 5809 on 2012-08-31
This answer was ACCEPTED on the original site
Post score: 13
Original comments
Comment by Renato Samperio on 2012-09-17:
Dear Jonathan, Many thanks for your answer and links. They are very useful. | {
"domain": "robotics.stackexchange",
"id": 10845,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
javascript, game, canvas
function drawEmpty() { // draw empty table and handle click on empty table
var ctx = this.ctx;
ctx.drawImage(emptyTableImage, 0, 0);
if (this.mouse.over) {
ctx.globalCompositeOperation = "lighter";
ctx.globalAlpha = TABLE.empty.highlightAmount;
ctx.drawImage(emptyTableImage, 0, 0);
ctx.globalAlpha = 1;
ctx.globalCompositeOperation = "source-over";
            if (!helpItemsUsed.empty) { // show help if the help action has not yet been done
drawHelpText(ctx, TABLE.help.empty, TABLE.empty);
}
this.cursor = TABLE.empty.cursor;
if (this.mouse.button === 1) { // bit field
this.buttonDown = true;
} else if (this.buttonDown) {
this.active = true;
setTimeout(addTable, TABLE_REFRESH_DELAY);
this.buttonDown = false;
            helpItemsUsed.empty = true; // flag this help as not needed as the user has completed that task
}
} else {
this.cursor = "default";
}
}
// create the mouse interface for a table
function createMouse(table) {
var mouse = {
x: 0,
y: 0,
over: false,
table: table,
element: table.div,
button: 0
};
mouse.event = mouseEvent.bind(mouse);
mouse.start = function() {
MOUSE.events.forEach(n => {
this.element.addEventListener(n, this.event);
});
}
mouse.remove = function() {
MOUSE.events.forEach(n => {
this.element.removeEventListener(n, this.event);
});
}
return mouse;
} | {
"domain": "codereview.stackexchange",
"id": 21595,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, game, canvas",
"url": null
} |
homework-and-exercises, electromagnetism, tensor-calculus
$F_{\mu\nu}\equiv\partial_\mu A_\nu-\partial_\nu A_\mu$ is the Electromagnetic Tensor, $\tilde F_{\mu\nu}\equiv -\frac i2 \epsilon_{\mu\nu\rho\sigma}F^{\rho\sigma}$ its dual tensor, $\epsilon^{\mu\nu\rho\sigma}$ is the Levi-Civita symbol and $A_\mu$ is a $4$-vector field (the $4$-potential).
I've tried this:
By definition
$$
F_{\mu\nu}\tilde F^{\mu\nu}=-\frac i2 \epsilon^{\mu\nu\rho\sigma}(\partial_\mu A_\nu -\partial_\nu A_\mu)F_{\rho\sigma}.
$$
Using the Antisymmetry property of $\epsilon$ we have
$$
F_{\mu\nu}\tilde F^{\mu\nu}=-i \epsilon^{\mu\nu\rho\sigma}(\partial_\mu A_\nu)F_{\rho\sigma}.
$$
With the product rule,
$$
F_{\mu\nu}\tilde F^{\mu\nu}=-i \epsilon^{\mu\nu\rho\sigma}\left(\partial_\mu ( A_\nu F_{\rho\sigma})-A_\nu\partial_\mu F_{\rho\sigma}\vphantom{\frac yy}\right).
$$ | {
"domain": "physics.stackexchange",
"id": 50890,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, electromagnetism, tensor-calculus",
"url": null
} |
particle-physics, pair-production
Title: Can pair production produce any particle/anti-particle pair? Hopefully this one'll be a fairly easy answer. My understanding of pair production is that it is a simple transfer of momentum/energy in which an incident particle loses energy after a scattering interaction with a nucleus (or other more massive (?) particle) and that this energy is converted to a particle/antiparticle pair. Pretty much all of the sources that I've consulted (Introduction to Elementary Particles by Griffiths, Mando & Ronchi 1952, etc.) talk about pair production of electrons, muons, and perhaps pions (~ 0.5, 105, and 137 MeV/$c^2$ in mass, respectively); but I've only found a very limited number of sources that mention higher energy pair production.
My basic question is whether it's theoretically possible for any particle/anti-particle pair (e.g. $D\overline{D}$ meson pair production) to be produced by a collision of sufficient energy within the constraint that all conserved quantum numbers must sum to 0? Or is there some upper limit imposed by theory?
Obviously there is a practical upper limit, based upon the maximum energy of incident particle radiation observed in nature, or generated in a particle accelerator, but is there a theoretical upper limit to the particle mass of pair production?
Thanks!
-D. Hodge
but I've only found a very limited number of sources that mention higher energy pair production.
Well, we do get antiproton beams, and those are produced in scattering off nucleons with enough energy so that the particle pair can appear.
The limit to pair production is the limit of the available energy. When the masses become large, the probability of producing them in pairs is small, but here is an experimental study of $W^+W^-$ production at the LHC, testing the predictions of the theory. And a discussion of the even more massive top-antitop creation, which from conservation of topness has to be generated in particle-antiparticle pairs.
So yes, only the available energy limits the pairs that can be produced in pair production. At those energies, the fact that they are pair-produced is only a small part of the study of the processes examined for fitting (or not fitting) the Standard Model calculations, so not many experiments have the accuracy to do so. That is why there is not much in the literature, except for specific studies. | {
"domain": "physics.stackexchange",
"id": 47681,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, pair-production",
"url": null
} |
and
$$\log_{1/5}4>\log_{1/2}5>\log_{1/3}27.$$
• You can tighten the bounds on $\log_2 5$. In fact $2^2 < 5 < 2^3$ – Martin Bonner supports Monica Sep 18 '18 at 12:09
• Ah. Now I understand. You can tighten the bounds - but you don't need to – Martin Bonner supports Monica Sep 18 '18 at 12:12
• @MartinBonner: exactly. – Yves Daoust Sep 18 '18 at 13:26 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232894783426,
"lm_q1q2_score": 0.801229479080536,
"lm_q2_score": 0.8152324915965392,
"openwebmath_perplexity": 733.0269367574678,
"openwebmath_score": 0.841733455657959,
"tags": null,
"url": "https://math.stackexchange.com/questions/2921146/sort-those-3-logarithmic-values-without-using-calculator"
} |
beginner, haskell, parsec
evalIsZero :: Term -> Term
evalIsZero TmZero = TmTrue
evalIsZero term
| isNumerical term = TmFalse
| otherwise = TmError
evalPred :: Term -> Term
evalPred TmZero = TmZero
evalPred t@(TmSucc subterm) = t
evalPred _ = TmError
evalSucc :: Term -> Term
evalSucc term
| isNumerical term = TmSucc term
| otherwise = TmError
Note the complex terms are farmed off to their own functions. This makes testing easier. Especially as the runtime gets more complex.
You asked about main's type. If you use forM_ you can define it as main :: IO ().
As for Applicative:
functionParser :: String -> (Term -> Term) -> GenParser Char st Term
functionParser name funcTerm = funcTerm <$> (string (name ++ "(")
*> arithParser
<* char ')')
ifParser :: GenParser Char st Term
ifParser = TmIf <$> (string "if" *> spaces *> arithParser)
<*> (spaces *> string "then" *> spaces *> arithParser)
<*> (spaces *> string "else" *> spaces *> arithParser)
Also, the last 'try' should not be used. 'try' means to attempt a parse and not consume the input if it fails. The last parse action is the end of the parse, so there is no need to leave the input in the parser. | {
"domain": "codereview.stackexchange",
"id": 6735,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, haskell, parsec",
"url": null
} |
Alternatively, at the last step, one could “solve” $$2x = x$$ by subtracting x from both sides, yielding $$x = 0$$: that is, every number (since x was unspecified) is equal to zero. That would include 1 = 0.
What went wrong?
First, as I read it, when he differentiated (using a nonstandard notation “dx” apparently meaning “d/dx”) $$\underbrace{x + x + \dots + x}_{x\text{ times}}$$, he just differentiated each x to get 1, without considering that the number of terms is not constant. If you try to justify this by going to the definition of the derivative, you have to take the difference $$f(x + \Delta x)-f(x)$$, which here becomes the difference of sums of different numbers of terms. Doctor Rick took it from there:
In taking the difference, you forgot that not only has each term changed its value, but also the NUMBER of terms has changed. Let's put in some numbers to make this clear. Let x = 3 and delta(x) = 1. Then:
x^2 = 3 + 3 + 3
(x+delta(x))^2 = 4 + 4 + 4 + 4
delta(x^2) = 1 + 1 + 1 + 4
We still don't have the 2x that you expected; we've got 7 instead of 6. Why is this? You forgot something else. 2x is the DERIVATIVE of x^2 - the limit of delta(x^2)/delta(x) as delta(x) approaches zero. But the function as we have defined it (as a sum of x terms) has meaning only for integer values of x, so delta(x) can't be less than 1. The derivative is not defined. All we can define is a DIFFERENCE, as I have done (with delta(x) = 1, the smallest possible value), and this is not equal to 2x.
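The numbers in that explanation are easy to reproduce (a Python sketch of the integer-only "sum of x copies of x" function):

```python
def f(x):
    """x^2 written as 'x added to itself x times' -- only defined for integer x."""
    return sum(x for _ in range(x))

x, dx = 3, 1                   # dx = 1 is the smallest step this definition allows
print(f(x), f(x + dx))         # 9 16
print(f(x + dx) - f(x))        # 7, not the 6 that the naive '2x' would predict
```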
http://mathforum.org/dr.math/faq/faq.false.proof
At the bottom there is a link to "derivatives," an item in our archives that is directly related to your problem.
The reference at the bottom is to this answer by Doctor Rob:
Derivatives | {
"domain": "themathdoctors.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.983085084750966,
"lm_q1q2_score": 0.8355939871252622,
"lm_q2_score": 0.8499711775577736,
"openwebmath_perplexity": 489.86648390966036,
"openwebmath_score": 0.8776736259460449,
"tags": null,
"url": "https://www.themathdoctors.org/10-calculus-says-so-or-not/"
} |
quantum-mechanics, operators, heisenberg-uncertainty-principle, commutator, observables
Title: Uncertainty Principle: Commutators How are commutators the mathematical basis for the uncertainty principle? What makes one say that commutators imply the uncertainty principle or vice versa? Consider a commutator relation $[\hat{A},\hat{B}]=\hat{C}$ for operators $\hat{A},\hat{B},\hat{C}$. This commutator relation is preserved if you take $\hat{A} \mapsto \hat{A} - \langle\hat{A}\rangle$, $\hat{B} \mapsto \hat{B} - \langle\hat{B}\rangle$, with the expectation value denoted by $\langle\,\cdot\,\rangle$.
$||\hat{C}||=||\hat{A}\hat{B}-\hat{B}\hat{A}||\le ||\hat{A} \hat{B}||+||\hat{B}\hat{A}||$ (triangle inequality).
Now you can set this norm to supremum norm and obtain $||\hat{A}\hat{B}||\le||\hat{A}||||\hat{B}||$.
Then you will arrive at the uncertainty relations if you use the invariance of the commutator relation under the shift of operators shown above.
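As a numerical illustration of where this leads (not the derivation itself), the Robertson bound $\Delta A\,\Delta B \ge \frac12 |\langle[\hat A,\hat B]\rangle|$ can be checked for Pauli matrices, where $[\sigma_x,\sigma_y]=2i\sigma_z$ (the test state is an arbitrary choice):

```python
# Pauli matrices as 2x2 nested lists of Python complex numbers
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expval(m, psi):                  # <psi| m |psi>
    return sum(psi[i].conjugate() * m[i][j] * psi[j]
               for i in range(2) for j in range(2))

def spread(m, psi):                  # Delta m = sqrt(<m^2> - <m>^2)
    return (expval(matmul(m, m), psi) - expval(m, psi) ** 2).real ** 0.5

comm = [[matmul(sx, sy)[i][j] - matmul(sy, sx)[i][j] for j in range(2)]
        for i in range(2)]           # [sigma_x, sigma_y] = 2i sigma_z

psi = [0.6, 0.8j]                    # a normalized test state
lhs = spread(sx, psi) * spread(sy, psi)
rhs = 0.5 * abs(expval(comm, psi))
assert lhs >= rhs - 1e-12            # Robertson bound holds
print(lhs, rhs)                      # this particular state saturates the bound
```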
Set e.g. $\hat{A} = \hat{p},\ \hat{B}=\hat{x},\ \hat{C}=-i\hbar$ and you will get the uncertainty relation for these variables. | {
"domain": "physics.stackexchange",
"id": 42139,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, heisenberg-uncertainty-principle, commutator, observables",
"url": null
} |
dft
From the theoretical DTFT definition, it can be shown that the DTFT $Y(\omega)$ of $y[n]$ is given as:
$$ Y(\omega) = \frac{1}{M} \sum_{m=0}^{M-1} X( \frac{ \omega + 2\pi m}{M} ) ~~~,~~~ -\pi \leq \omega < \pi \tag{3}$$
And we define the $K$-point DFT $Y[k]$ of $y[n]$ as:
$$ Y[k] = Y(\omega)|_{\omega = \frac{2\pi}{K} k } = Y(\frac{2\pi}{K} k) ~~~,~~~ k = 0,1,...,K-1 \tag{4}$$
Note the range of DFT index $k$ for $Y[k]$. Since $y[n]$ is a $K$-point sequence we have defined a $K$-point DFT of it.
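Before finishing the algebra, the aliasing relation it leads to, $Y[k] = \frac1M\sum_{m=0}^{M-1} X[k+mK]$ with $X[j]$ the $N$-point DFT of $x[n]$, can be checked numerically; this sketch uses a naive DFT so it is self-contained:

```python
import cmath

def dft(seq):
    """Naive O(n^2) DFT, enough to check the identity on a short signal."""
    n = len(seq)
    return [sum(seq[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

N, M = 12, 3
K = N // M                           # so N = K*M
x = [complex(t * t % 7, (3 * t) % 5) for t in range(N)]   # arbitrary signal
y = x[::M]                           # decimated sequence y[n] = x[nM]

X, Y = dft(x), dft(y)
for k in range(K):
    alias = sum(X[k + m * K] for m in range(M)) / M       # (1/M) sum_m X[k + mK]
    assert abs(Y[k] - alias) < 1e-9
print("K-point DFT of y matches the aliased N-point DFT of x")
```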
Finally plug Eq(4) into Eq(3)
$$ Y[k] = Y(\frac{2\pi}{K} k) = \frac{1}{M} \sum_{m=0}^{M-1} X( \frac{ \frac{2\pi}{K} k + 2\pi m}{M} ) $$
$$ Y[k] = Y(\frac{2\pi}{K} k) = \frac{1}{M} \sum_{m=0}^{M-1} X( \frac{2\pi}{KM} k + \frac{2\pi}{M} m ) $$
Now $KM = N$ and we replace $\frac{2\pi}{M}$ with $\frac{2\pi}{N}(N/M) = \frac{2\pi}{N}K $ to get | {
"domain": "dsp.stackexchange",
"id": 7186,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "dft",
"url": null
} |
linear-regression, prediction, dummy-variables
Title: How Should Dummy Variables Be Modeled in a Linear Regression Model? I have a cross-sectional model where I want to predict the number of users that take a specific service. I have many variables, two of which are nominal: isWorkday (0 or 1) and weekday (1, 2, 3, ..., 7). Including both variables in the model generates high multicollinearity, so I have to delete one of them. Which is better: many dummies (weekday) or fewer dummies (isWorkday)? Since your task is to predict something, the better variable is the one that gives you a higher prediction accuracy. So you can simply test both and choose the one with which your model performs better.
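A concrete way to see the multicollinearity (a plain-Python sketch; the day numbering and the assumption that days 1-5 are workdays are illustrative):

```python
def weekday_dummies(day):
    """One-hot encode day in 1..7, dropping day 7 as the reference level
    (this avoids the dummy-variable trap when an intercept is present)."""
    return [1 if day == d else 0 for d in range(1, 7)]

def is_workday(day):
    """Assume days 1-5 are workdays (an illustrative convention)."""
    return 1 if day <= 5 else 0

# is_workday is an exact linear combination of the weekday dummies,
# which is precisely the multicollinearity observed when both are included.
for day in range(1, 8):
    assert is_workday(day) == sum(weekday_dummies(day)[:5])
print("is_workday is linearly dependent on the weekday dummies")
```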
However, I would suggest engineering your own feature that incorporates information from both variables. For example, you could create three dummy variables: workday, weekend, and holiday, and include two of them in your model (to prevent falling into the dummy-variable trap). Another option would be to include only the interaction terms between isWorkday and weekday. | {
"domain": "datascience.stackexchange",
"id": 5000,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "linear-regression, prediction, dummy-variables",
"url": null
} |
c++, lookup, c++03
typedef std::map<Key, Value, Compare, Allocator> container;
container table_;
public:
typedef typename container::iterator iterator;
typedef typename container::const_iterator const_iterator;
typedef typename container::size_type size_type;
typedef typename container::reference reference;
typedef typename container::const_reference const_reference;
typedef typename container::pointer pointer;
typedef typename container::const_pointer const_pointer;
typedef typename container::value_type value_type;
typedef Allocator allocator;
typedef Key key_type;
typedef Value mapped_type;
typedef Compare key_compare;
protected:
key_compare cmp_;
//Disallow polymorphic usage through derived pointer
~basic_lookup_table()
{ }
iterator upper_bound(const Key& k)
{
return table_.upper_bound(k);
}
const_iterator upper_bound(const Key& k) const
{
return table_.upper_bound(k);
}
iterator lower_bound(const Key& k)
{
return table_.lower_bound(k);
}
const_iterator lower_bound(const Key& k) const
{
return table_.lower_bound(k);
}
iterator find(const Key& k)
{
return table_.find(k);
}
const_iterator find(const Key& k) const
{
return table_.find(k);
}
public:
void insert(const key_type& key, const mapped_type& value)
{
table_.insert(std::make_pair(key, value));
}
#if __cplusplus >= 201103L
void insert(key_type&& key, mapped_type&& value)
{
table_.insert(std::make_pair(key, value));
}
#endif
bool erase_key(const key_type& k)
{
size_type s = table_.erase(k);
return s != 0;
} | {
"domain": "codereview.stackexchange",
"id": 3146,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, lookup, c++03",
"url": null
} |
algorithm, ruby, graph, set
Merging components
Where the algorithm will spend time is when we merge components together. Here you will have to
update component indexes. What you have to see here is that it will be performed much less often
than the lookup, especially in case of strongly connected components, where first_set and
second_set of the main loop will be equal.
To merge 2 components, I would put the nodes of the smallest component into the largest one, to
avoid updating too many indexes. And then I would update the indexed component so that it contains
all nodes.
def merge_components(component_index, components, first_set, second_set)
to = first_set
from = second_set
to, from = from, to if components[to].size < components[from].size
components[from].each do |i|
component_index[i] = to
end
components[to].merge components[from]
end
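For comparison, the same merge-by-size idea as a runnable Python sketch (dict-based components; component ids and node labels are illustrative):

```python
def merge_components(component_index, components, first_set, second_set):
    """Merge two components, relabelling the smaller one so fewer indexes change."""
    to, frm = first_set, second_set
    if len(components[to]) < len(components[frm]):
        to, frm = frm, to
    for node in components[frm]:
        component_index[node] = to
    components[to] |= components[frm]
    components[frm] = set()          # emptied; `to` now owns every merged node
    return to

# Tiny usage example
components = {0: {0, 1}, 1: {2}, 2: {3, 4, 5}}
component_index = {0: 0, 1: 0, 2: 1, 3: 2, 4: 2, 5: 2}
root = merge_components(component_index, components, 0, 2)
print(root, sorted(components[root]))   # 2 [0, 1, 3, 4, 5]
```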
In case memory is a problem, you probably don't need to store the sets themselves, but only their
size, since you know that they will never overlap, and that the size of the merged set is always
the sum of the sizes of the 2 sets. But by doing so you have to loop through all nodes when
updating components, which can be much longer.
Building Cost
Last thing to change: how we compute the cost of the solution. We will need the final connected
components. This can be extracted like:
final_component_index = component_index.uniq
Then the MST cost would be
roads_cost = c_road * final_component_index.inject(0) { |sum, x| components[x].size - 1 + sum }
And the final cost
final_component_index.size * c_lib + roads_cost | {
"domain": "codereview.stackexchange",
"id": 28971,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm, ruby, graph, set",
"url": null
} |
rotational-dynamics, rotational-kinematics, rigid-body-dynamics, stability, gyroscopes
\end{align}
and the solution to the original system, describing the precession of the angular velocity, is
\begin{align}
&\omega_1 = \omega_1^0\, \cos\Big((k\omega_3^0)\, t\Big) - \omega_2^0\, \sin\Big((k\omega_3^0)\, t\Big)\\
&\omega_2 = \omega_1^0\, \sin\Big((k\omega_3^0)\, t\Big) + \omega_2^0\, \cos\Big((k\omega_3^0)\, t\Big)\\
&\omega_3 = \omega_3^0
\end{align}
If you want to understand the time-evolution of the angular velocity of the coin nearby a major axis, you can simply consider the $x-$axis, due to coin's symmetry. Then, start with an initial angular velocity $(\omega_1^0, \omega_2^0, \omega_3^0)$ with $\omega_2^0 = 0$ and $\omega_3^0 =\varepsilon$ a very small number. Then the time-evolution of the angular velocity is
\begin{align}
&\omega_1 = \omega_1^0\, \cos\big(k\varepsilon\, t\big)\\
&\omega_2 = \omega_1^0\, \sin\big(k\varepsilon\, t\big)\\
&\omega_3 = \varepsilon
\end{align}
which shows that starting from initial angular velocity $(\omega_1^0, 0, \varepsilon)$ after time $t = \frac{\pi}{2\,k\varepsilon}$ the angular velocity will be | {
"domain": "physics.stackexchange",
"id": 59784,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rotational-dynamics, rotational-kinematics, rigid-body-dynamics, stability, gyroscopes",
"url": null
} |
c#, entity-framework, lambda
canalComunicacion => canalComunicacion.ListaControladores2.Select(controlador=>controlador.ListaLectoras.Select(lectoras=>lectoras.ListaDeGruposAccesoLectora.Select(grupoAcceso=>grupoAcceso.GrupoAcceso).Select(grupo=>grupo.ListaDeGrupoAccesoEmpleados.Select(lisEmpleado=> lisEmpleado.Persona.ListaTarjetaPersona.Where(lisTarjeta=>lisTarjeta.FechaDesactivacion==null))))),
canalComunicacion => canalComunicacion.ListaControladores2.Select(controlador=>controlador.ListaLectoras.Select(lectoras=>lectoras.ListaDeGruposAccesoLectora.Select(grupoAcceso=>grupoAcceso.GrupoAcceso).Select(grupo=>grupo.ListaDeGrupoAccesoEmpleados.Select(lisEmpleado=> lisEmpleado.Persona.ListaTarjetaPersona.Select(lisTarjeta=>lisTarjeta.Tarjeta))))),
canalComunicacion => canalComunicacion.ListaControladores2.Select(controlador=>controlador.ListaLectoras.Select(lectoras=>lectoras.ListaDeGruposAccesoLectora.Select(grupoAcceso=>grupoAcceso.GrupoAcceso).Select(grupo=>grupo.ListaDeGrupoAccesoEmpleados.Select(lisEmpleado=> lisEmpleado.Persona.ListaTarjetaPersona.Select(lisTarjeta=>lisTarjeta.Tarjeta.ListaTiposTarjeta.Where(list=>list.FechaDesactivacion==null)))))), | {
"domain": "codereview.stackexchange",
"id": 8375,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, entity-framework, lambda",
"url": null
} |
rosinstall, roboearth, rosmake
Original comments
Comment by salma on 2012-10-12:
it doesn't work
the result of echo $ROS_PACKAGE_PATH is : /opt/ros/fuerte/share:/opt/ros/fuerte/stacks
Comment by Lorenz on 2012-10-12:
This indicates that sourcing setup.bash did not work correctly. Did you get any error messages, either in the rosinstall command or when executing the source command?
Comment by salma on 2012-10-12:
now when i write
$ source ~/ros/setup.bash
bash: /home/salma/ros/setup.bash: No such file or directory
i found :
bash: /home/salma/ros/setup.bash: No such file or directory
Comment by Lorenz on 2012-10-12:
Did the rosinstall command throw an error?
Comment by salma on 2012-10-12:
no
and stack installed but in Home
Comment by Lorenz on 2012-10-12:
Can you please edit your original question and add the exact output of the command? See http://ros.org/wiki/Support Something seems to be messed up in your system and we need to find out what. Also, what's the output of ls ~/ros?
Comment by salma on 2012-10-12:
ok i did :)
Comment by Lorenz on 2012-10-12:
I cannot see the output of ls ~/ros.
Comment by salma on 2012-10-12:
salma@salma-G31M-S2L:~$ ls ~/ros
pkgs setup.bash setup.sh setup.zsh stacks
Comment by salma on 2012-10-12:
edited , the last line
Comment by Lorenz on 2012-10-12:
Then the command source ~/ros/setup.bash definitely must work and must not throw an error.
Comment by salma on 2012-10-12:
so ?? :((
it throw an error :
bash: /home/salma/ros/setup.bash: Permission denied
Comment by Lorenz on 2012-10-12: | {
"domain": "robotics.stackexchange",
"id": 11333,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rosinstall, roboearth, rosmake",
"url": null
} |
which is clearly a proper subring of $\mathbb{Z}[\sqrt{6}]$. On the other hand, \begin{align*} \mathbb{Q}[\sqrt{24}] &= \{a + b\sqrt{24} | a, b \in \mathbb{Q} \} \\ &= \{a + 2b\sqrt{6} | a, b \in \mathbb{Q} \} \\ &= \{a + b'\sqrt{6} | a, b' \in \mathbb{Q}\} \\ &= \mathbb{Q}[\sqrt{6}]. \end{align*}
The point is that you can divide anything in $\mathbb{Q}$ by two, but not anything in $\mathbb{Z}$.
Since $\sqrt{6}\not\in\mathbb{Z} [\sqrt{24}]$.
| {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9883127409715955,
"lm_q1q2_score": 0.8419958126067439,
"lm_q2_score": 0.8519528057272543,
"openwebmath_perplexity": 472.6066426971351,
"openwebmath_score": 1.0000100135803223,
"tags": null,
"url": "http://math.stackexchange.com/questions/312878/why-is-mathbbz-sqrt24-ne-mathbbz-sqrt6"
} |
fluid-dynamics, pressure
Title: Direction of pressure in fluids ok so my friend told me that in a container, the pressure exerted by the walls on the liquid in the container acts in the upward direction. Is he correct? So what I am imagining is a cylindrical container kept on the ground. According to me, the pressure by the wall of the container should act perpendicular to the surface of the wall. Am I going wrong somewhere?
Any help would be appreciated. Pressure at a point in a static fluid is independent of direction.
http://www.southampton.ac.uk/~jps7/Aircraft%20Design%20Resources/Sydney%20aerodynamics%20for%20students/fprops/statics/node4.html
The force exerted by the walls on the liquid will be pointing inwards. Imagine there is a hole in the container and water is leaking out; it is easy to see that you have to apply a force inwards in order to prevent the liquid from escaping. Since the force is proportional to the area of the hole, you'd want a dimensionally equivalent form of pressure in order to eliminate the dependence on area. And that is stress. (Note that stress is neither a scalar nor a vector; it is a tensor.)
http://en.wikipedia.org/wiki/Stress_(mechanics) | {
"domain": "physics.stackexchange",
"id": 17086,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, pressure",
"url": null
} |
java, datetime
count the number of whole days between start and end,
add each of the days to start,
format to desired String representation, and
collect to the desired Set.
Putting it all together (note the comments indicating the above points):
public static Set<String> getUTCDayStringsBetween(Instant startInstant,
Instant endInstant) {
if (endInstant.isBefore(startInstant)) {
throw new IllegalArgumentException("Start date (" + startInstant +
") must be before end date (" + endInstant + ")");
}
ZonedDateTime start = startInstant.atZone(UTC);
ZonedDateTime end = endInstant.atZone(UTC).with(start.toLocalTime());
return LongStream.rangeClosed(0, start.until(end, ChronoUnit.DAYS)) // 1
.mapToObj(start::plusDays) // 2
.map(DateTimeFormatter.ISO_LOCAL_DATE::format) // 3
.collect(Collectors.toCollection(LinkedHashSet::new)); // 4
} | {
"domain": "codereview.stackexchange",
"id": 14447,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, datetime",
"url": null
} |
python, json
response = requests.request(method="POST",
url=URL,
headers=data_headers,
data=payload)
Explanation of my testing methodology
Since running tests in Jupyter notebooks can be tricky, I'll mention my methodology here. Right now, I'm testing the payload variable and the request in separate cells. After a successful record insertion via the API, I don't reload the whole notebook; I simply reload the payload and request cells: 1) change the successful payload cell's type to 'raw'; 2) add a new cell with the modified payload variable for testing; 3) reload the payload and request cells; 4) examine the results; 5) toggle the cell types between 'code' and 'raw' to return to the successful version and verify the underlying code; 6) repeat. What makes this work is toggling the cell type and never mutating variables once assigned: there is no assignment to any variable name assigned in a cell above it. I fail to see a reason for the long way around.
If you already have parsed JSON, why not use it directly?
result = df.to_json(orient="records")
parsed_json = json.loads(result)
je_json = json.dumps({
'BatchId': '1',
'userId': 'myID',
'journalEntries': parsed_json
})
It would be even better to not serialize and deserialize the JSON data twice:
je_json = json.dumps({
'BatchId': '1',
'userId': 'myID',
'journalEntries': df.to_dict(orient="records")
})
On a side-note I find it funny, how pandas' method name to_dict() is absolutely misleading, since it returns a list in the above case.
Also, you don't need to serialize the JSON payload manually, since requests will do it for you:
from requests import post
payload = {
'BatchId': '1',
'userId': 'myID',
'journalEntries': df.to_dict(orient="records")
}
response = post(URL, json=payload) | {
"domain": "codereview.stackexchange",
"id": 42804,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, json",
"url": null
} |
However, it only does the sampling with replacement, as in the example below. Contents (click to skip to that section): Sampling With Replacement; Sampling Without Replacement; Sampling with Replacement. Sampling Distribution of the Mean C. Sampling Distribution of Difference Between Means D. Sampling Distribution of Pearson's r E. Sampling Distribution of a Proportion F. Exercises The concept of a sampling distribution is perhaps the most basic concept in inferential statistics. If random samples of size three are drawn without replacement from the population consisting of four numbers 4, 5, 5, 7. Calculate the mean and standard deviation of this sampling distribution. Find the sample mean $$\bar X$$ for each sample and make a sampling distribution of $$\bar X$$. 3 Sampling algorithms (applicable to any support and any design), ex: sequential algorithms. $\endgroup$ – … Sampling Distributions Prerequisites • none A. Sampling > Sampling with replacement / Sampling without replacement. If it's with replacement we call it multinomial. With the function discrete_distribution, it is possible to sample with replacement.And, with this function, I implemented sampling without replacement in a very rough way: I want to use the uniform_int_distribution in the c++ random library. There is no change at all in the size of the population at any stage. How can I sample without replacement? Indicator for sampling with replacement, specified as the comma-separated pair consisting of 'Replace' and either true or false.. So if you don't have "with replacement" the probabilities change and it's called something else. The application of a particular sampling algorithm on a sampling Whenever a unit is selected, the population contains all the same units, so a unit may be selected more than once. 1 Supports or set of samples (example all the samples with replacement with xed sample size n) 2 Sampling design or multivariate discrete positive distribution. 
Sampling is called with replacement when a unit selected at random from the population is returned to the population and then a second element is selected at random. Sample with replacement if 'Replace' is true, or without replacement if 'Replace' is false. If 'Replace' is false, then k must not be larger than the size of the dimension being sampled. Introduction B. The distribution of the mean of sample of size 4, taken from a population with a standard deviation, has a standard deviation
"domain": "netcup.net",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9532750413739075,
"lm_q1q2_score": 0.8064158082227431,
"lm_q2_score": 0.8459424334245617,
"openwebmath_perplexity": 1030.2916650443512,
"openwebmath_score": 0.40271544456481934,
"tags": null,
"url": "http://hosting126194.a2fe2.netcup.net/kevl1/898acc-sampling-with-replacement-distribution"
} |
recurrence-relation
If we define function $S: \Bbb Z\to\Bbb Z$ by $S(m) = T(m)$, i.e., $S$ is the same function as $T$, then we have, for any $n\in\Bbb Z$ that is a power of 2, i.e., $n=2^m$ for some $m$,
$$S(n) = S(2^m) = T(2^m) = T(2^{m-1}) + 1 = S(2^{m-1}) + 1 = S(2^m/2)+1 =S(n/2)+1. $$
On the other hand, if we define function $S: \Bbb Z\to\Bbb Z$ by $S(m) = T(2^m)$, then substituting $m-1$ for $m$, we have $S(m-1)=T(2^{m-1})$. So,
$$S(m) = T(2^m) = T(2^{m-1}) + 1 = S(m-1) + 1$$
In summary, either one of $S(n) = S(n/2) + 1$ and $S(n) = S(n-1) + 1$ can be correct. Which one is correct depends on how you define $S$. Once you have written down the definition of $S$, it will be clear which one is correct and which one is wrong. In other words, you have to specify what $S$ is first.
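As a quick numeric sanity check (my own sketch, not part of the original answer), the following snippet defines $T$ on powers of two with $T(1)=0$ and verifies that both definitions of $S$ satisfy their respective recurrences, and that both give $m$ at $n=2^m$:

```python
def T(n):
    # T defined on powers of 2, with T(1) = 0 and T(n) = T(n/2) + 1
    return 0 if n == 1 else T(n // 2) + 1

# Definition 1: S(n) = T(n), so S satisfies S(n) = S(n/2) + 1
S1 = T

# Definition 2: S(m) = T(2^m), so S satisfies S(m) = S(m-1) + 1
def S2(m):
    return T(2 ** m)

for m in range(1, 11):
    n = 2 ** m
    assert S1(n) == S1(n // 2) + 1   # first recurrence holds
    assert S2(m) == S2(m - 1) + 1    # second recurrence holds
    assert S1(n) == S2(m) == m       # same underlying values
```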
"domain": "cs.stackexchange",
"id": 12336,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "recurrence-relation",
"url": null
} |
quantum-mechanics, solid-state-physics, electronic-band-theory, plane-wave
$\phi(r)$ is the potential produced by an individual ion/nucleus in the lattice
Any help much appreciated! I do not know if I can add much, but I can try to explain how these matrix elements arise. The integral should indeed be over all space, but the lattice structure manifests itself in that the momentum $K$ only takes discrete values: the reciprocal lattice vectors. If you make a Fourier transform of the Coulomb potential, you indeed encounter a problem of the singularity at $r=0$. This can be addressed by modifying the Coulomb potential by multiplying it by a factor $e^{- \lambda r}$, performing the Fourier transform, and then letting the parameter $\lambda \rightarrow 0$ (other forms of obtaining the result are also available).
The result that you get is the $U(K)$ (Eq. 113) that you quote above, where $K$ is a reciprocal lattice vector. This is well-behaved at all points except $K=0$. You can set the $U(K=0) = 0$ by hand. If this seems rather arbitrary, the $K=0$ value is related to the mean charge of the unit cell, which is indeed zero for physical cases (otherwise the electrostatic energy of the crystal would indeed diverge). | {
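For reference, the screened-Coulomb (Yukawa) Fourier transform described above is the standard result (up to charge and unit-convention prefactors, which depend on the text's Eq. 113):

$$\int e^{-i\mathbf{K}\cdot\mathbf{r}}\,\frac{e^{-\lambda r}}{r}\,\mathrm{d}^3 r \;=\; \frac{4\pi}{K^2+\lambda^2} \;\xrightarrow{\;\lambda\to 0\;}\; \frac{4\pi}{K^2},$$

which is finite for every nonzero reciprocal lattice vector $K$ and singular only at $K=0$, consistent with setting $U(K=0)=0$ by hand.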
"domain": "physics.stackexchange",
"id": 63891,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, solid-state-physics, electronic-band-theory, plane-wave",
"url": null
} |
java, performance, sieve-of-eratosthenes
This is the classical implementation of the sieve. The resulting array isComposite will hold a value of false for prime values.
Now, observe that the first time you ever see a prime, you remove all of its multiples from contention. We can therefore store a separate list containing just these primes. (We also have to loop over the back half of the array to get any primes >= sqrt(n).)
List<Integer> primes = new ArrayList<Integer>();
boolean[] isComposite = new boolean[n];
isComposite[0] = isComposite[1] = true;
int limit = (int) Math.sqrt(n);
for (int i = 2; i <= limit; ++i) {
if (!isComposite[i]) {
primes.add(i);
for (int j = i * i; j < n; j += i) { // step by i (multiples of i), not j *= i
isComposite[j] = true;
}
}
}
for (int i = limit + 1; i < n; ++i) {
if (!isComposite[i]) {
primes.add(i);
}
}
Finally, we can determine the range of primes we want. To do this, perform a binary search on primes to find the lower and upper bound. Then, select a random element from the list.
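The binary-search step can be sketched as follows (my own illustration, in Python rather than Java for brevity; the function name and interface are made up):

```python
import bisect
import random

def random_prime_in_range(primes, lo, hi, rng=random):
    """Pick a uniformly random prime p with lo <= p <= hi.

    `primes` must be sorted ascending, as the sieve produces it.
    """
    left = bisect.bisect_left(primes, lo)    # index of first prime >= lo
    right = bisect.bisect_right(primes, hi)  # one past the last prime <= hi
    if left == right:
        raise ValueError("no primes in [%d, %d]" % (lo, hi))
    return primes[rng.randrange(left, right)]

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
assert random_prime_in_range(primes, 10, 20) in {11, 13, 17, 19}
```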
You may wish to save this list of primes, so that it only need be computed once. In particular, observe that the sieve of Eratosthenes can be "paused" and "resumed" in computation, so that the results of previous computations may be used in future computation. I leave this as an exercise to the reader. | {
"domain": "codereview.stackexchange",
"id": 29036,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, performance, sieve-of-eratosthenes",
"url": null
} |
Just for fun, here's a combinatorial proof as well. Let $A$ be a set of size 128, and let $S$ be the set of all binary strings of length $7$. Let's count the number of bijections from $A$ to $S$.
• On the one hand, $|A| = |S| = 128$, so there are $128!$ bijections from $A$ to $S$.
• On the other hand, to construct a bijection, first we choose which elements of $A$ go to a string beginning in $0$, and then choose which elements of $A$ go to a string beginning in $1$. We can do this in $\binom{128}{64}$ ways. Then, among the elements that we assigned a string starting in $0$, we must split them into strings starting in $00$ and strings starting in $01$; we can do this in $\binom{64}{32}$ ways. Similarly for the elements assigned a string starting in $1$; they either start in $10$ or in $11$. And so on.
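The telescoping product this counting describes can be checked directly (a quick sanity check of my own, not part of the original answer):

```python
from math import comb, factorial, prod

# At level k (k = 1..7) there are 2^(k-1) groups of size 128 / 2^(k-1),
# each split into two equal halves in comb(size, size // 2) ways.
total = prod(comb(128 >> (k - 1), 128 >> k) ** (2 ** (k - 1)) for k in range(1, 8))

# Both ways of counting the bijections from A to S agree:
assert total == factorial(128)
```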
• Thank you for this, but I'm a little confused as to how you distributed the exponents - would you mind elaborating a little bit? – Chris T Jun 24 '16 at 21:24
• somewhat similar to derangements and Stirling numbers. nice answer. – vidyarthi Jun 24 '16 at 21:28
• @ChrisT I added a line, if that helps explain it! I also made a combinatorial proof, but I see Andre Nicolas and joriki have already done essentially the same. – 6005 Jun 24 '16 at 21:31
• Thank you for clarifying, it was helpful to see it this way :) – Chris T Jun 24 '16 at 21:32 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9857180690117799,
"lm_q1q2_score": 0.8126870265446957,
"lm_q2_score": 0.8244619350028204,
"openwebmath_perplexity": 404.8077839921688,
"openwebmath_score": 0.808262288570404,
"tags": null,
"url": "https://math.stackexchange.com/questions/1838603/why-does-128-equal-the-product-of-these-binomial-coefficients-128-bino/1838616"
} |
motor, hardware, dc
Title: Choosing the right DC motor I'm trying to find the optimal component list for a radio-controlled lawn mower I'm trying to build.
The blades will be rotated by a 140 cc engine. I chose this engine because it's already mine. Its weight is 25 kg.
The movement will be electrically powered by two 12 V, 18 Ah batteries.
I will use an Arduino and 2 motor drivers. Each driver can handle 6 V to 27 V and a maximum of 43 A | {
"domain": "robotics.stackexchange",
"id": 2215,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "motor, hardware, dc",
"url": null
} |
waves, simulations, interference
In general, this will happen whenever you drive at a rational multiple of the fundamental frequency. The reason the effect doesn't work when you change the tension setting is because that changes the wave speed and hence the frequencies, so you're no longer driving at a rational multiple of the fundamental. It doesn't violate conservation of energy, because for the last two you will be doing negative work on the rope. The effect may even be observable for simple multiples of the fundamental on a real string.
If you want to give this effect a name, it's simply the usual destructive interference, but with the neat twist that a wave you're putting in now is destructively interfering with a wave you put in earlier. | {
"domain": "physics.stackexchange",
"id": 53863,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, simulations, interference",
"url": null
} |
c#, thread-safety, asynchronous
Title: Running Code Just Once I want some code to run just once (say, in Dispose). WriteOnceBlock<T> from TPL Dataflow could be used here; but then, if we need to check whether it is done (in a non-dataflow-friendly manner), we have to call Receive with a timeout, and so on.
I wrote this and it works, but I am not sure about when the comparison (==) takes place; is it possible for it to be reordered so that something happens out of order and goes wrong?
public class Once
{
const Int32 JobDone = 11011;
const Int32 NotDone = 119;
Int32 _done;
public Once() { _done = NotDone; }
public bool IsDone() { return Interlocked.CompareExchange(ref _done, JobDone, NotDone) == JobDone; }
}
And usage:
readonly Once _stopped = new Once();
public void OnStop()
{
if (_stopped.IsDone()) return;
//...
} There are a few items to make this code better.
Naming. Once is an OK name for the class, but the method name IsDone is a problem. This is an 'atomic' operation that sets values as well as gets them. Renaming the method to something like "Trigger", and the class to a common term like OneShot, will give you semantics like:
private readonly OneShot terminator = new OneShot();
if (terminator.Trigger())
{
... do something if we are the first trigger
}
or, using your semantics, the negated value:
if (!terminator.Trigger()) return;
Your fields should all be private.
private const Int32 JobDone = 11011;
private const Int32 NotDone = 119;
private Int32 _done;
otherwise other code can possibly reset or mess up your trigger.
Why use the bizarre numbers for the constants? What's wrong with 1 and 0? The special numbers make me think there's something especially magical about them.
Apart from that, the Interlocked.CompareExchange is the right tool for the job. It creates an atomic compare-and-set operation that makes it thread safe. | {
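For comparison, the same one-shot pattern can be sketched in Python (my own illustration; Python has no public compare-exchange primitive, so a lock stands in for Interlocked.CompareExchange):

```python
import threading

class OneShot:
    """First call to trigger() returns True; every later call returns False."""

    def __init__(self):
        self._lock = threading.Lock()
        self._fired = False

    def trigger(self):
        # The lock makes test-and-set atomic across threads,
        # mirroring the CompareExchange semantics in the answer.
        with self._lock:
            first = not self._fired
            self._fired = True
            return first

shot = OneShot()
assert shot.trigger() is True    # first caller wins
assert shot.trigger() is False   # everyone else sees "already done"
```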
"domain": "codereview.stackexchange",
"id": 11159,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, thread-safety, asynchronous",
"url": null
} |
The left-most trapezoid has base lengths 0 and 1 and height 1, so the area of the first trapezoid is $\frac{1}{2} (0+1)\cdot 1=\frac{1}{2}\text{.}$ The middle trapezoid has base lengths 1 and 8 and height 1, so the area of the second trapezoid is $\frac{1}{2}(1+8) \cdot 1=\frac{9}{2} \text{.}$ The right-most trapezoid has base lengths 8 and 27 and height 1, so the area of the third trapezoid is $\frac{1}{2}(8+27)\cdot 1=\frac{35}{2}\text{.}$ Therefore, the approximate area under the graph of $f(x)=x^3$ from $0$ to $3$ using the trapezoid rule with 3 subintervals is $T_3= \frac{1}{2} + \frac{9}{2} + \frac{35}{2}=\frac{45}{2}=22.5\text{.}$
The error is $E_{T,3}=20.25-22.5=-2.25 \text{.}$ Using trapezoids creates a smaller error compared to $L_3$ and $R_3 \text{.}$ The magnitude of the error from approximating using $M_3$ is half the magnitude of the error from approximating using $T_3 \text{,}$ but they have opposite signs.
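The numbers above are easy to reproduce programmatically; here is a small sketch (mine, not from the text) of the composite trapezoid rule applied to $f(x)=x^3$ on $[0,3]$:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

T3 = trapezoid(lambda x: x ** 3, 0, 3, 3)
exact = 81 / 4                 # integral of x^3 from 0 to 3 is 20.25
assert T3 == 22.5              # 1/2 + 9/2 + 35/2
assert exact - T3 == -2.25     # the error E_{T,3}
```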
### Subsection: The Trapezoid Rule
So far, we have used the simplest possible quadrilaterals (that is, rectangles) to estimate areas. It is natural, however, to wonder if other familiar shapes might serve us even better. | {
"domain": "unl.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9895109084307415,
"lm_q1q2_score": 0.8525014562618984,
"lm_q2_score": 0.8615382094310355,
"openwebmath_perplexity": 333.4462961604561,
"openwebmath_score": 0.9525200128555298,
"tags": null,
"url": "https://mathbooks.unl.edu/Calculus/sec-5-9-num-int.html"
} |
algorithms, data-compression, coding-theory
Title: How the LZ77 compression algorithm handles the case when the entire look-ahead buffer is matched in the search buffer The LZ77 compression algorithm uses a sliding-window technique, where the window consists of a look-ahead buffer and a search buffer. What I am wondering is how the algorithm handles the case where the match found in the search buffer covers the entire contents of the look-ahead buffer. According to the descriptions I find, the algorithm matches as long as it can, and then outputs the offset, the length of the match, and the next token after the matched portion in the look-ahead buffer; but in case the entire look-ahead buffer is matched, we do not have a next token to output.
I nowhere find this case described; for example, the pseudocode just states "X first char after p in view", but I am asking about the case where there is no char after p in the view, because p is the entire view.
For example, consider a search buffer of size 5 and a look-ahead buffer of size 4 and we read in
|abrar|rarr|ad
then we find a match at offset 3, and the match (which extends beyond the boundary between both buffers, but this is no problem) covers all of rarr; even the next a could be matched. But what should we do now: should we output (3,4, C(a)), where C(a) denotes the code of the a that is not in the look-ahead buffer, or should we just match the first 3 tokens? A simple solution: cap the match length so that your look-ahead buffer is always at least one byte longer than your longest match.
As long as the match starts in the search buffer (a minimum of 1 byte back), the look-ahead buffer will then have that one extra byte available to use as the follow byte
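That capping rule can be sketched as follows (my own Python illustration; the function name and interface are made up): the match length is limited to lookahead - 1, so a literal follow byte always exists, while matches may still run past the buffer boundary as in the example above.

```python
def longest_match(data, pos, window, lookahead):
    """Find (offset, length) of the longest match for data[pos:].

    Length is capped at lookahead - 1 so one follow byte always remains.
    Matches may extend past pos into the look-ahead region (offset < length
    is allowed), as in the "rarr" example.
    """
    best = (0, 0)
    max_len = min(lookahead - 1, len(data) - pos - 1)
    for cand in range(max(0, pos - window), pos):
        length = 0
        while length < max_len and data[cand + length] == data[pos + length]:
            length += 1
        if length > best[1]:
            best = (pos - cand, length)
    return best

# Search buffer "abrar", look-ahead "rarr": match at offset 3, capped at
# length 3, leaving the final look-ahead byte as the literal follow byte.
assert longest_match(b"abrarrarr", 5, 5, 4) == (3, 3)
```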
"domain": "cs.stackexchange",
"id": 9091,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, data-compression, coding-theory",
"url": null
} |
python, beginner, tic-tac-toe, ai
#Calls the functions that draw the window
def updateCanvas():
drawBG()
drawMarks()
tk.update()
canvas = Canvas(tk, width=WIDTH, height=HEIGHT)
canvas.bind("<Button-1>", click)
canvas.bind("<Button-2>", reset)
canvas.bind("<Button-3>", requestMove)
updateCanvas()
canvas.pack()
tk.mainloop() Your program looks pretty good for a beginner. A couple of things could make the code easier to work with though, and thus reduce the risk of bugs and extra maintenance should you ever pick this up again.
State
You set a winner as follows:
def xWon(self):
self.winner = self.State.x
self.gameOver = True
def oWon(self):
self.winner = self.State.o
self.gameOver = True
def draw(self):
self.gameOver = True
winner is a State and the gameOver is a separate variable. gameOver can't be part of State, because State is actually being abused for something it wasn't supposed to do:
#the three states each cell can be in
class State(Enum):
empty = 0
x = 1
o = -1 | {
"domain": "codereview.stackexchange",
"id": 44047,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, tic-tac-toe, ai",
"url": null
} |
quantum-mechanics, special-relativity, heisenberg-uncertainty-principle, observables
Title: Simultaneity and The Uncertainty Principle So, the uncertainty principle states that one cannot measure momentum and position with accuracy simultaneously. However, we know from relativity that simultaneity is frame dependent, so how can we reconcile these two in relativistic quantum mechanics or even QFT?
I know that in QFT we define observables in different points in spacetime such that their Lie bracket is zero, but I'm not really sure how to go from
this to an answer. I think you are conflating two separate issues. Lack of simultaneity in SR is an effect which increases with distance along the direction of travel between two reference frames moving relative to each other. There is nothing to prevent observers from agreeing that two events are simultaneous if they occur in the same spot, as would be the case if you were trying to measure the momentum and position of an electron, say. It is rather like saying that you cannot face north and south simultaneously- that has nothing to do with simultaneity in SR. | {
"domain": "physics.stackexchange",
"id": 97531,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, special-relativity, heisenberg-uncertainty-principle, observables",
"url": null
} |
machine-learning, python, neural-network, logistic-regression
On a scale from 1 to 10, what is your opinion of the temperature?
And then we measured the temperature. We have put all these information into some spreadsheets:
Rating (Move Kindness): 8
Temperature: 18 degrees Celsius
At the end of this survey we asked them to give the move kindness a rating.
So we have this, for example:
Temperature: 8, 18
Light: 7, 300
Humidity: 8, 50
....
Rating (Move Kindness): 8
So my question is, what's the best way to analyse these data for a reliable measurement device using python?
We were thinking of using neural networks, because they can be trained, but logistic regression or some other machine-learning algorithm is also an option. Can anyone give me some direction on this? Okay, so from what I understand, you have a regression problem taking into account a variety of physical features. The reason I say that this is a regression problem, versus a classification problem, is because the scale you are trying to predict is an ordinal scale.
There are a couple of approaches to this. If your features are discriminative and linear enough, a simple least-squares linear regression might work. If you believe your problem is too complicated for linear regressions, try a simple vanilla neural network with a single output. I would recommend using the scikit-learn library in Python for all models that are not neural networks. Here is a link to the generalized linear regression page.
That link has code samples and mathematical explanations. If you decide to use neural networks, and you don't have a great amount of samples or a need to use the GPU, the pyBrain library is great.
I wouldn't recommend using logistic regression (since you mentioned it in your question), simply because logistic regression is a classification method, and I believe you would be better off approaching this from a regression standpoint.
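To make the least-squares suggestion concrete, here is a minimal single-feature sketch (the data and names are entirely hypothetical; a real setup would use scikit-learn with all the features):

```python
# Hypothetical survey rows: (temperature in Celsius, comfort rating 1-10).
temps   = [18.0, 20.0, 22.0, 24.0, 26.0]
ratings = [6.0, 7.0, 8.0, 8.0, 9.0]

n = len(temps)
mean_x = sum(temps) / n
mean_y = sum(ratings) / n

# Closed-form ordinary least squares for one feature.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(temps, ratings)) \
      / sum((x - mean_x) ** 2 for x in temps)
intercept = mean_y - slope * mean_x

def predict(temperature):
    return intercept + slope * temperature

# The fitted line always passes through the point of means.
assert abs(predict(mean_x) - mean_y) < 1e-9
```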
"domain": "datascience.stackexchange",
"id": 571,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, python, neural-network, logistic-regression",
"url": null
} |
complexity-theory, np-complete, np-hard
$P_{i,s,t}$ is true if and only if the tape square $s$ contains symbol $i$ at time step $t$.
$Q_{i,t}$ is true if and only if the machine is in state $i$ at time step $t$.
$S_{s,t}$ is true if and only if tape square $s$ is scanned by the tape head at time step $t$.
Next, we construct formulas which model the actions of $M$ and test whether or not $w$ is accepted. We can do this using only the above proposition symbols, and in conjunctive normal form.
I encourage you to think through the details yourself by working out what the formulas might look like. The ones that Cook used are:
At each time step $t$, one and only one square is scanned.
At each time step $t$ and tape square $s$, there is one and only one symbol.
At each time step $t$, $M$ is in one and only one state.
At time step $1$, $M$ is in its start state and the tape contains exactly $w$ followed by "blank" symbols.
At each time step transition, the $P$, $Q$ and $S$ propositions are updated correctly, according to the transition function of $M$. Remember $M$ is nondeterministic, so you need to include all possible transitions. (If you're playing along at home, use three formulas for this.)
And the final, most important, formula states that:
$M$ enters the "accept" state at some time.
Then the conjunction of all of these formulas is true if and only if $M$ accepts $w$. Solve using your favourite SAT solver, and you're done.
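As an illustration of the "one and only one" formulas (my own sketch; Cook's actual encoding differs in its details), the standard pairwise CNF encoding of an exactly-one constraint over DIMACS-style integer variables looks like this:

```python
from itertools import combinations

def exactly_one(variables):
    """CNF clauses forcing exactly one of `variables` to be true.

    Positive integers are variables, negative integers their negations.
    """
    clauses = [list(variables)]  # at-least-one: (x1 or x2 or ... or xn)
    # at-most-one: (not xi or not xj) for every pair of variables
    clauses += [[-a, -b] for a, b in combinations(variables, 2)]
    return clauses

# e.g. "the head scans exactly one of squares 1..3 at this time step"
assert exactly_one([1, 2, 3]) == [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]
```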
The Complexity of Theorem-Proving Procedures by Stephen A. Cook (1971).
[Cook–Levin theorem](http://en.wikipedia.org/wiki/Cook–Levin_theorem) on Wikipedia.
"domain": "cs.stackexchange",
"id": 2235,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, np-complete, np-hard",
"url": null
} |
bioinformatics
Title: Borrowing bioinformatic methods in other fields Have bioinformatic methods like sequence analysis, genome annotation, and comparative genomics (among many others) been applied to solve problems outside the field of bioinformatics? Of course! The perfect example is Gusfield's book. Many string algorithms came from bioinformatics. For example, sequence-analysis algorithms are used in text editors.
PS: Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology, Dan Gusfield | {
"domain": "cs.stackexchange",
"id": 1854,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bioinformatics",
"url": null
} |
quantum-mechanics, wavefunction, fermions, pauli-exclusion-principle
Title: Help needed to understand whether multiple fermions can occupy same physical space or not As per my understanding:
Multiple fermions cannot have the same quantum state (as per Pauli exclusion principle)
Multiple fermions can occupy the same physical space as long as they have different quantum states (or numbers or properties such as spin)
If both these statements are true, then the qualifier in the second statement, "as long as they have different quantum states (or numbers or properties such as spin)", becomes unnecessary, because the first statement implies that multiple fermions always have different quantum states. Hence, the second statement simply becomes "Multiple fermions can always occupy the same physical space". (For a moment, let's consider only fermions, their quantum states, and the physical space they occupy, and not other factors like electromagnetic repulsion.)
However, at multiple places on the Internet it has been stated (and it seems widely accepted) that: Multiple fermions cannot occupy the same physical space as per the Pauli exclusion principle, and that is why matter structures exist in the universe.
Can someone please help me trying to figure out where am I making mistake? Your first two statements are correct.
The third statement you quote,
Multiple fermions cannot occupy the same physical space as per the Pauli exclusion principle, and that is why matter structures exist in the universe.
isn't terribly wrong as a first approach to the exclusion principle at the level of science communication to the general public, but it is not really correct in the details. If you are advanced enough to understand how the Pauli principle relates to quantum states, then you can completely discard the formulation in the third statement and move on.
(Though, that said, maybe it's relevant to mention that the first two formulations are still oversimplified and short of the full story, which is to do with the effect on the wavefunction of exchanging two indistinguishable particles.) | {
"domain": "physics.stackexchange",
"id": 96693,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, wavefunction, fermions, pauli-exclusion-principle",
"url": null
} |
supersymmetry, higgs, multiverse, beyond-the-standard-model
They list 128 GeV, which is sort of close to the value that was ultimately found, and say (page 26) "a Higgs mass near 128 GeV would provide strong evidence for the multiverse, although not quite as strong as might occur for a value near 141 GeV". In this regard, one should consider a "secretly famous" paper by Shaposhnikov and Wetterich, which actually did predict the right value - 126 GeV - several years in advance, and which didn't use the multiverse or supersymmetry. Instead, they assumed that quantum gravity has the property of "asymptotic safety" at high energies. This is an unfashionable assumption because it seems to contradict standard ideas about black hole entropy... However, my real point is that the right mass for the Higgs boson can possibly be obtained without the use of anthropic effects. And indeed, there are now some string-theory models in which the right value is produced by a physical cause rather than an anthropic tuning. | {
"domain": "physics.stackexchange",
"id": 23873,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "supersymmetry, higgs, multiverse, beyond-the-standard-model",
"url": null
} |
c++11
void SetLoanType(std::string&& loanType) { m_loanType = std::move(loanType); }
};
class Loan {
private:
std::string m_loanType;
std::string m_loanLength;
double m_APR;
double m_maximumLoan;
public:
int m_loanPeriod;
Loan(std::string loanType, std::string loanLength, double APR, double maximumLoan, int loanPeriod)
:m_loanType(loanType), m_loanLength(loanLength),
m_APR(APR), m_maximumLoan(maximumLoan), m_loanPeriod(loanPeriod) {
}
std::string GetLoanType()const { return m_loanType; }
std::string GetLoanLength()const { return m_loanLength; }
double GetAPR()const { return m_APR; }
double GetMaximumLoan() const { return m_maximumLoan; }
void DisplayLoan() const {
std::cout << "Loan Type: " << m_loanType << "\n";
std::cout << "Loan Length: " << m_loanLength << "\n";
std::cout << "APR: " << m_APR * 100 << "%\n";
std::cout << "Maximum Loan: " << m_maximumLoan << "\n";
}
int ChoosePaybackPeriod(Customer& customer) {
int PaybackPeriod = 0;
std::cout << "How long do you wish to pay back your loan?: ";
std::cin >> PaybackPeriod;
if (PaybackPeriod > 0 && PaybackPeriod <= m_loanPeriod)
{
return PaybackPeriod;
} | {
"domain": "codereview.stackexchange",
"id": 40152,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++11",
"url": null
} |
species-identification, ornithology
Title: Identity of strange bird with fish net tangled around one foot
This bird was photographed at a small beach in Rio de Janeiro, Brazil.
What kind of (ocean?) bird is this? This could be an immature black-crowned night heron, Nycticorax nycticorax.
Lines of evidence:
pale yellowish bill
thick neck
hunched back
orange eyes
head feather patterning consistent with "black crown" in adults
See these photos and captions published by the Cornell Lab of Ornithology:
Juvenile Black-crowned Night Heron
Thick-necked heron with a thick bill. Juveniles are brown and streaky overall. Note pale yellowish bill.
© Evan Lipton
Immature Black-crowned Night Heron
Immature birds have a mix of juvenile and adult plumage. This individual has faint streaking on the chest, a dark gray cap, and a nearly complete dark gray back.
© Alex Lamoreaux
Adult Black-crowned Night Heron
Stocky and compact heron. Often tucks neck into its body creating a hunchbacked look. Adults have a black cap and back that contrasts with its whitish to pale gray belly and gray wings.
© Jeff Timmons
Note the fishing line tangled around the adult bird's feet. Must be a common trait! | {
"domain": "biology.stackexchange",
"id": 10636,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "species-identification, ornithology",
"url": null
} |
message-filters, boost, time, timestamp, timesynchronizer
ros::spin();
return 0;
}
There is a huge number of error messages from Boost which are really hard for me to make sense of, but the first one that looked important is the message below. The full error output can be found at http://pastebin.com/SQdT5VhT
/home/ubuntu/catkin_ws/src/mvp_ros/src/synchronizer.cpp:34:89: required from here
/opt/ros/indigo/include/message_filters/sync_policies/approximate_time.h:167:89: error: 'value' is not a member of 'ros::message_traits::TimeStamp<geometry_msgs::Twist_<std::allocator<void> >, void>'
ros::Time msg_time = mt::TimeStamp<typename mpl::at_c<Messages, i>::type>::value(msg);
^
/opt/ros/indigo/include/message_filters/sync_policies/approximate_time.h:177:99: error: 'value' is not a member of 'ros::message_traits::TimeStamp<geometry_msgs::Twist_<std::allocator<void> >, void>'
previous_msg_time = mt::TimeStamp<typename mpl::at_c<Messages, i>::type>::value(previous_msg);
^
/opt/ros/indigo/include/message_filters/sync_policies/approximate_time.h:183:100: error: 'value' is not a member of 'ros::message_traits::TimeStamp<geometry_msgs::Twist_<std::allocator<void> >, void>'
previous_msg_time = mt::TimeStamp<typename mpl::at_c<Messages, i>::type>::value(previous_msg);
^ | {
"domain": "robotics.stackexchange",
"id": 23867,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "message-filters, boost, time, timestamp, timesynchronizer",
"url": null
} |
special-relativity, coordinate-systems, velocity, inertial-frames, speed
Title: Do Lorentz transforms use velocity or speed? Suppose we have the $S'$ frame moving at some velocity $v$ in the $X$ direction relative to the $S$ frame, then it follows
$$x' =\gamma (x+vt), \qquad t'=\gamma\left(t+\frac{vx}{c^2}\right)$$
From my understanding, to get the inverse Lorentz transforms, we say that if $S'$ is moving at $v$ relative to $S$, then $S$ is moving at $-v$ relative to $S'$. It then follows, substituting $-v$ into the above equations and swapping primed and unprimed coordinates (with no generality lost in doing this),
$$x =\gamma (x'-vt'), \qquad t=\gamma\left(t'-\frac{vx'}{c^2}\right)$$
This relies on $v$ being a velocity, not a speed, due to the sign change; however, on Wikipedia and elsewhere, $v$ is referred to as both speed and velocity. Is this derivation flawed (yet leading to the correct answer), or is "speed" being used loosely? Yes, very often in physics, we use "speed" and "velocity" interchangeably out of pure laziness, as "speed" has one syllable while "velocity" has four. Lorentz boosts are always defined with velocities, though of course that is equivalent to defining them with a speed plus a direction, which in the 1D case is just a sign.
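A quick numeric check (in Python, with $c=1$ for convenience) shows that boosting with $v$ and then with $-v$ recovers the original event, which is exactly the sign flip the derivation uses:

```python
import math

def boost(x, t, v, c=1.0):
    """Lorentz boost with velocity v (a signed quantity, not a speed)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x + v * t), gamma * (t + v * x / c**2)

# Boost an event with velocity v, then with -v: the composition is the identity.
x, t, v = 3.0, 5.0, 0.6
xp, tp = boost(x, t, v)     # forward transform
x2, t2 = boost(xp, tp, -v)  # inverse transform via the sign flip
```

With $v = 0.6$ we get $\gamma = 1.25$, so $(x', t') = (7.5, 8.5)$, and the second boost returns $(3, 5)$ exactly.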
But isn't this "wrong"? Sure, but not in a way that matters. In high school and freshman physics, your teachers might spend a lot of time distinguishing speed from velocity, but that's just because it's important to drill in there is a difference. If you're speaking with people who all know that already, there's no point in being totally careful, because what you mean will be clear from context. | {
"domain": "physics.stackexchange",
"id": 67498,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, coordinate-systems, velocity, inertial-frames, speed",
"url": null
} |
quantum-field-theory, mathematical-physics, group-representations, lorentz-symmetry, poincare-symmetry
For the tachyonic case, I'm completely at a loss. The little group is $\mathrm{SU}(1,1)\cong\mathrm{SL}(2,\mathbb{R})$, which has a plethora of rather complicated unitary irreducible representations given by Bargmann's classification. I see no way to connect the representation of the field with any of these representations. Yes it does, once one adds the requirement, stated in the Wightman axioms, that the vacuum state be invariant. That is a general result valid as a consequence of the GNS construction.
Let us start from the fact that the algebra of observables ${\cal A}$ is the (unital) $^*$-algebra of finite linear combinations of finite products of smeared field operators $\phi(f)$.
The action of Poincaré group induces a $^*$-algebra representation $SO(1,3)_+\times \mathbb{R}^4 \ni g \mapsto \alpha_g : {\cal A} \to {\cal A}$ completely defined by requiring that
$$\alpha_g(\phi(f)) := \rho_g \phi(g(f))$$
and extending it to the whole algebra by imposing that it is linear, preserves the adjoint and the products (and the unit),
where $g(f)$ is the standard action of the group on test functions.
Let us indicate the vacuum state by $\Omega$. The above representation has the property that, according to the Wightman axioms, it leaves invariant the $n$-point functions of the vacuum state
$$\langle \Omega, \phi(f_1) \cdots \phi(f_n) \Omega \rangle =
\langle \Omega, \alpha_g(\phi(f_1)) \cdots \alpha_g(\phi(f_n)) \Omega \rangle\:, $$ which extends to
$$\langle \Omega, A\Omega \rangle =
\langle \Omega, \alpha_g(A) \Omega \rangle \tag{1}$$ | {
"domain": "physics.stackexchange",
"id": 79051,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, mathematical-physics, group-representations, lorentz-symmetry, poincare-symmetry",
"url": null
} |
thermodynamics, ideal-gas, heat-engine, heat-conduction
Title: Confusion in first law of thermodynamics Our teacher, while explaining the first law of thermodynamics, showed us an example of a piston inside a chamber. He said that when heat is applied the particles move, i.e. their kinetic energy increases. Due to this velocity the molecules put some pressure on the piston, due to which the piston moves.
He said that some part of the energy supplied (heat) goes into increasing kinetic energy and the rest into doing work, due to which the piston moves.
$$ dQ= dU + dW $$
My doubt is how the heat affects the applied pressure directly. The heat affects the velocity of the gas molecules, which then increases the pressure on the piston, due to which it moves.
He also said that if there were no piston then all the heat would go into increasing kinetic energy, which doesn't make sense: how do the heat or the molecules know when to take some energy and when not to?
But how does heat directly affect the work done? I believe that heat goes fully into vibrating/increasing the kinetic energy of the particles. If no heat were entering or escaping the container, and it had fixed walls, we could assume that the energy of the molecules ($\frac{1}{2} mv^2 = \frac{p^2}{2m}$) after each bounce on the walls was conserved, and the temperature was constant.
If there is an expanding piston doing work outside, the energy comes from the molecules, which lose some of their own after bouncing off the piston surface.
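To make the split concrete, here is a minimal sketch with a monatomic ideal gas heated at constant pressure (piston free to move); the numbers are illustrative:

```python
R = 8.314      # J/(mol*K), gas constant
Cv = 1.5 * R   # molar heat capacity at constant volume, monatomic ideal gas
Cp = Cv + R    # at constant pressure

n, dT = 1.0, 10.0  # 1 mol heated by 10 K with the piston free to move
dQ = n * Cp * dT   # heat supplied
dU = n * Cv * dT   # rise in kinetic (internal) energy
dW = n * R * dT    # work done pushing the piston (P*dV)

assert abs(dQ - (dU + dW)) < 1e-9  # first law: dQ = dU + dW
```

With fixed walls there is no piston to push, so dW = 0 and all of dQ goes into dU; with the piston free, a fixed fraction R/Cp (here 0.4) of the heat goes into work, with no "choice" made by the molecules.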
If there were no heat input, the gas would cool. Depending on the amount of heat flow, it can both push the piston outward and increase the gas temperature. | {
"domain": "physics.stackexchange",
"id": 64678,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, ideal-gas, heat-engine, heat-conduction",
"url": null
} |
That is where I got off. You need to change the x into 5-x.
$$\displaystyle V = \int_{0}^{4} 2\pi (5-x)(4x-x^2)dx$$
You can't get from the first formula to the second simply by replacing $x$ with $5-x$.
The formula you cited in your original post looks like it was meant for revolution about the $y$-axis. I find it easier to not try to use such a formula, but to just look at one element of the entire volume, whether it be a shell, disk or washer. Once you have the elemental volume, then you can add all the elements by integrating.
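As a quick numeric check of the integral above (a midpoint-rule approximation, not part of the original thread):

```python
import math

# Check V = integral from 0 to 4 of 2*pi*(5 - x)*(4x - x^2) dx, expected 64*pi.
n = 100_000
h = 4.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h  # midpoint of each subinterval
    total += 2 * math.pi * (5 - x) * (4 * x - x * x)
V = total * h
```

The exact antiderivative of $(5-x)(4x-x^2) = 20x - 9x^2 + x^3$ is $10x^2 - 3x^3 + \tfrac{x^4}{4}$, which evaluates to $32$ on $[0,4]$, so $V = 64\pi$.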
Another picture:
#### alane1994
Ok, I have arrived at an answer of $$\displaystyle 64\pi$$. | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9833429609670702,
"lm_q1q2_score": 0.841595201005848,
"lm_q2_score": 0.8558511469672594,
"openwebmath_perplexity": 637.6273117862211,
"openwebmath_score": 0.7216432690620422,
"tags": null,
"url": "https://mathhelpboards.com/threads/shell-method-about-the-line-x-5.5809/"
} |
c++, c++11, design-patterns, url
Title: Creating URL with Builder pattern What do you think about this piece of code? How would you design the builder pattern in this situation?
#ifndef CPP_URL
#define CPP_URL
#include<iostream>
#include<string>
#include<vector>
#include<map>
using std::string;
enum protocol {
HTTP,
HTTPS
};
static string map(protocol scheme) {
switch(scheme) {
case HTTP:
return "http";
default:
return "https";
}
}
static int defaultPort(protocol scheme) {
switch (scheme) {
case HTTP:
return 80;
default:
return 443;
}
}
class url final {
private:
using list = std::vector<std::string>;
using pairs = std::map<std::string, std::string>;
const list paths;
const pairs queries;
const protocol scheme;
const string domain;
const string fragment;
const string username;
const string password;
mutable int port;
public:
class builder {
friend url;
private:
protocol scheme;
pairs queries;
list paths;
string domain;
string fragment;
string username;
string password;
int port = -1;
public:
builder& setScheme(protocol);
builder& setFragment(const string&);
builder& setDomain(const string&);
builder& setUsername(const string&);
builder& setPassword(const string&);
builder& setPort(int port);
builder& addQuery(const string&, const string&);
builder& addPath(const string&);
url& build() const;
};
explicit url(const url::builder&);
string build() const;
};
using builder = url::builder;
inline builder& builder::setScheme(protocol scheme) {
this->scheme = scheme;
return *this;
}
inline builder& builder::setUsername(const string& username) {
this->username = username;
return *this;
}
inline builder& builder::setPassword(const string& password) {
this->password = password;
return *this;
} | {
"domain": "codereview.stackexchange",
"id": 30015,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, design-patterns, url",
"url": null
} |
performance, sql, sql-server
I reordered the JOIN clauses to ensure that the product remains as small as possible after each single step. Further, I eliminated the unnecessary join on the User schema.
However, you don't actually need a full INNER JOIN either. If your database system supports it, you can safely replace the 2nd and 3rd INNER JOINs with LEFT SEMI JOIN operators instead.
So much for fixing the inner select. But as a matter of fact, now we don't even need to do it as a subquery any more; we can just handle it as a LEFT JOIN with COUNT and GROUP BY on the outermost query.
Whether this actually gains any performance needs to be tested.
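As an illustration (using a made-up two-table schema in SQLite, not the actual Users/UserClasses schema from the question), the correlated-subquery form and the LEFT JOIN + GROUP BY form return the same counts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Classes (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Results (Id INTEGER PRIMARY KEY, ClassId INTEGER, Score REAL);
INSERT INTO Classes VALUES (1, 'Math'), (2, 'History');
INSERT INTO Results VALUES (1, 1, 80), (2, 1, 90), (3, 2, 70);
""")

# Correlated-subquery form: one COUNT subquery evaluated per outer row.
sub = con.execute("""
SELECT c.Name, (SELECT COUNT(*) FROM Results r WHERE r.ClassId = c.Id)
FROM Classes c ORDER BY c.Id
""").fetchall()

# LEFT JOIN + GROUP BY form: one pass, and classes with zero results still appear.
joined = con.execute("""
SELECT c.Name, COUNT(r.Id)
FROM Classes c LEFT JOIN Results r ON r.ClassId = c.Id
GROUP BY c.Id, c.Name ORDER BY c.Id
""").fetchall()
```

Both queries return `[('Math', 2), ('History', 1)]` here; whether the join form is actually faster still depends on the engine and indexes.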
There are also a couple of flaws in your database scheme:
Take the UserClasses table schema. You are abusing it to describe both the roles of teacher and student for any given class, without distinguishing between these two. I suspect you coded the user role into the Users schema instead, but it would have been better to store different roles in different schemes.
You are apparently storing class names as string literals in multiple schemes, this is an indirect violation of the 2NF, but even worse, it requires string comparisons to match the corresponding columns against each other. This should be refactored ASAP.
There also appears to be a possible design flaw in Results. If the same test is reused by two different classes, and a pupil is enrolled in both, his test results are now shared between both classes. Test results should probably be linked to a specific enrollment in a class, rather than just to the generic test. This also allows this query to be simplified further, as the most expensive part, joining on UserClass to query pupil enrollment, is then obsolete. | {
"domain": "codereview.stackexchange",
"id": 21849,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, sql, sql-server",
"url": null
} |
python, algorithm, python-2.x, complexity, pathfinding
# If we are immediately surrounded by walls/obstacles, search fails
if len(no_obst_poses) == 0:
return None
# If only one choice, take it
elif len(no_obst_poses) == 1:
path.append(no_obst_poses[0])
else:
# Lets check if available_poses are in our memory
# Init empty list of available poses
available_poses = []
# Step through all poses in no_obst_poses
for pose in no_obst_poses:
# Check if pose is in our memory
if pose not in mem:
# If not in memory, append to available_poses
available_poses.append(pose)
# Lets choose one of the available_poses
if len(available_poses) > 0:
choice = np.random.choice(len(available_poses), 1)[0]
path.append(available_poses[choice])
else:
choice = np.random.choice(len(no_obst_poses), 1)[0]
path.append(no_obst_poses[choice])
# Check if we reached our goal_pose
if path[-1] == goal_pose:
return path
return None
if __name__ == "__main__":
# If we run this file, run the provided example and print resultant path
# to the terminal
world_state = np.array([[0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0]])
robot_pose = (2, 0)
goal_pose = (5, 5) # Note: in provided example (6, 6) is not a valid pose
# Set max_step_number
max_step_number = 1000
# Instantiate RandomPlanner class
planner = RandomPlanner(max_step_number)
# Perform path search
path = planner.search(world_state, robot_pose, goal_pose) | {
"domain": "codereview.stackexchange",
"id": 29175,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, algorithm, python-2.x, complexity, pathfinding",
"url": null
} |
telescope, image-processing
Title: What is the size of the image sensor in the largest optical telescopes? What image sensors (imaging electronics) are used in telescopes? Like CCD, is that the best option?
What is the typical physical size and resolution of the photo-sensitive surface of the currently active largest ones? The current largest digital CCD camera is that of the Vera C. Rubin Observatory1 which has a whopping 3.2 gigapixels. The previous largest, on the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS), has a resolution of ~1.4 gigapixels.[1]
Based on the spec sheet provided by the Vera Rubin Observatory, the LSST camera has a resolution of roughly 0.2 arcseconds per 10 $\mu$m pixel. It is about 5 feet (1.52 meters) wide and weighs over 6000 lbs (2721 kilos, $2.6\times10^9$ dyn).[2] The actual photosensitive portion of the camera is ~64 cm ($4\times10^{34}$ Planck lengths) across.
Cameras of this size often have pointing, calibration and recording issues. These sources of error are usually extremely well understood, but it's still interesting to see. Pan-STARRS has a detailed list on their data site[3]. These issues include:
Randomly missing data that gets filled in later
Pointing errors related to astrometric positions of their targets
Registration issues near the celestial pole resulting in poor photometry
1formerly known as the Large Synoptic Sky Survey or LSST. The acronym has now been repurposed: What is the LSST now? Where does LSST end and Vera C. Rubin Observatory begin? | {
"domain": "astronomy.stackexchange",
"id": 5154,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "telescope, image-processing",
"url": null
} |
noise, statistics, stability
I detail this further in this existing post which may further answer this question: https://dsp.stackexchange.com/a/87468/21048
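For reference, here is a minimal sketch (mine, not from the linked post) of the overlapping ADEV computation from phase data $x$ sampled every $\tau_0$ seconds:

```python
import math

def oadev(x, tau0, m):
    """Overlapping Allan deviation at tau = m*tau0 from phase data x."""
    N = len(x)
    # The second difference of the phase is the difference of adjacent
    # fractional-frequency averages -- the band-pass filtering structure.
    terms = [(x[i + 2 * m] - 2 * x[i + m] + x[i]) ** 2 for i in range(N - 2 * m)]
    return math.sqrt(sum(terms) / (2 * (m * tau0) ** 2 * len(terms)))

# Sanity check: a linear phase ramp (constant frequency offset) has zero
# second difference, so its ADEV vanishes at every tau.
ramp = [0.1 * i for i in range(100)]
```

Because only the second differences enter, a constant frequency offset contributes nothing, which is exactly the high-pass behavior described below.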
It may also help to understand that the ADEV computation, for any given averaging interval $\tau$, has a high-pass component as a moving difference, and ultimately has a bandpass frequency response as a filter on the noisy signal: the cascade of a sinc filter (moving average) together with a comb filter (moving difference). ADEV specifically filters the signal as such and then computes the standard deviation of the resulting noise after that filter. My description here applies directly to "overlapping ADEV" or "OADEV", which converges more quickly to the same result as the original block-by-block ADEV and is therefore my preferred approach to computing ADEV. I describe both approaches here with the following graphic for OADEV copied again below: | {
"domain": "dsp.stackexchange",
"id": 12176,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "noise, statistics, stability",
"url": null
} |
c#, unit-testing, nunit, union-find
yield return new TestCaseData(new Parameters<int, int>
{
InitialSize = 11,
PairsToMerge = new[]
{
new IndexPair(10, 0),
new IndexPair(1, 9),
new IndexPair(2, 8),
new IndexPair(3, 7),
new IndexPair(4, 6),
new IndexPair(5, 10),
new IndexPair(5, 1),
new IndexPair(8, 5),
new IndexPair(3, 4)
},
InputOutput = new[]
{
new InputOutput<int, int>(0, 10),
new InputOutput<int, int>(1, 10),
new InputOutput<int, int>(2, 10),
new InputOutput<int, int>(3, 3),
new InputOutput<int, int>(4, 3),
new InputOutput<int, int>(5, 10),
new InputOutput<int, int>(6, 3),
new InputOutput<int, int>(7, 3),
new InputOutput<int, int>(8, 10),
new InputOutput<int, int>(9, 10),
new InputOutput<int, int>(10, 10)
}
}).SetName("Unify elements into two different components, then ensure that they each have the correct parent");
}
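For readers unfamiliar with the structure under test, here is a minimal weighted quick-union sketch in Python (illustrative only; the chosen roots may differ from the C# implementation) replaying the merge list from the test case above:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n  # component size, valid at roots

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:  # attach smaller tree under larger
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def connected(self, a, b):
        return self.find(a) == self.find(b)

# Replaying the merges from the test case yields two components:
# {0, 1, 2, 5, 8, 9, 10} and {3, 4, 6, 7}.
uf = UnionFind(11)
for a, b in [(10, 0), (1, 9), (2, 8), (3, 7), (4, 6),
             (5, 10), (5, 1), (8, 5), (3, 4)]:
    uf.union(a, b)
```

The connectivity matches the expected outputs above (one component rooted around element 10, the other around 3), even though the exact root an implementation reports depends on its union rule.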
private static IEnumerable<TestCaseData> GetComponentSizeTestCaseSource()
{
yield return new TestCaseData(new Parameters<int, int>
{
InitialSize = 4,
PairsToMerge = new[] { new IndexPair(1, 1), new IndexPair(2, 2), new IndexPair(3, 3) },
InputOutput = new[] { new InputOutput<int, int>(1, 1), new InputOutput<int, int>(2, 1), new InputOutput<int, int>(3, 1) }
}).SetName("Get the component size of elements that have not yet been grouped"); | {
"domain": "codereview.stackexchange",
"id": 37668,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, unit-testing, nunit, union-find",
"url": null
} |
c++, numerical-methods
return 0;
} The leapfrog part itself seems correct.
Use a vector math library
Instead of creating your own Vector3D, use a library that implements a similar type for you, along with overloads for all the mathematical operations you want to do on them. That saves you from having to spell out all the operations on the individual components of those vectors. I personally favor GLM (although it is oriented towards graphics, it works fine for 3D vectors), but there are many more.
Use exact calculations where possible
We use numerical integration because it is the only practical thing to do when you have a system of many particles. However, you should always favor exact solutions where possible. For example, instead of using a numerical derivative of the Lennard-Jones potential, you should be able to write an exact version. This will then avoid any issues like the choice of dr, which may or may not be right depending on r itself.
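For example, a sketch of the exact Lennard-Jones force next to a central-difference approximation (parameter values here are placeholders, not the question's):

```python
def lj_potential(r, eps=1.0, sigma=1.0):
    """Lennard-Jones potential V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """Exact F(r) = -dV/dr = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

# Central-difference derivative for comparison; its accuracy depends on dr,
# which is exactly the issue the exact formula avoids.
r, dr = 1.3, 1e-6
numeric = -(lj_potential(r + dr) - lj_potential(r - dr)) / (2 * dr)
```

The exact force also vanishes cleanly at the potential minimum $r = 2^{1/6}\sigma$, with no dependence on a finite-difference step.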
What kind of system is this simulating?
vnp already mentioned in the comments that he doesn't see how the particles interact. Indeed, the acceleration of each particle only depends on its distance to the origin of the system, not on any other particle.
What are the units? I see some comments after the declarations of sigma, epsilon and mass, which is great, and from that I can assume that particle positions are also in ångström? But then the box seems very small compared to the size of the well in the Lennard-Jones potential. Even more important, what about time?
Are particles meant to be kept inside the box? Are there periodic boundary conditions? There are lots of questions here. You should document all this in comments in your source code, and/or refer to a paper or other document describing exactly what you are trying to simulate.
Note that these questions are important for deciding whether your use of the leapfrog algorithm is correct: it is just an approximation, and there will be sources of error. Whether those errors will dominate your results depends on the velocities, forces and the size of your timesteps.
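As a sketch of the scheme itself (a velocity-Verlet form of leapfrog on a unit-mass harmonic oscillator, my toy example rather than the question's system), note that the acceleration is computed once per step and carried over to the next:

```python
import math

def accel(x):
    return -x  # harmonic oscillator with unit mass and spring constant

x, v = 1.0, 0.0
dt = 1e-3
a = accel(x)           # computed once before the loop
for _ in range(1000):  # integrate to t = 1
    x += v * dt + 0.5 * a * dt * dt
    a_new = accel(x)   # the only force evaluation per step
    v += 0.5 * (a + a_new) * dt
    a = a_new          # reused next iteration instead of recomputing
```

With dt small the trajectory tracks the exact solution x(t) = cos t, v(t) = -sin t to second order, and each step costs exactly one force evaluation.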
Performance optimizations
While the leapfrog algorithm is implemented correctly, you are calculating the accelerations twice as often as necessary. Consider that the result of the second call to accel() will be the same as the first call to accel() in the next iteration of the for-loop in main(). | {
"domain": "codereview.stackexchange",
"id": 45004,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, numerical-methods",
"url": null
} |
particle-physics, experimental-physics, accelerator-physics
Title: What technological advance would allow LEP3 to surpass LEP2? I learned that for electron accelerators synchrotron radiation and acceleration are the limiting factors.
This article, that I found in one answer to this question mentions that one would not use the superconducting acceleration elements that were developed in the meanwhile, but rely on regular normal-conduction acceleration.
Then I wonder, at the same radius, what technological advance would allow LEP3 to surpass LEP2 by a significant amount? There is nothing really surprising here. The limit is basically how much energy you can add to the beam in the space left after you have installed all your bending and tuning magnets.
Both magnets and cavities are better now than they were then. Moreover, the size of both items has gotten slightly smaller, allowing more of them to be packed into the same distance. | {
"domain": "physics.stackexchange",
"id": 13933,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, experimental-physics, accelerator-physics",
"url": null
} |
bert, transformer, sentiment-analysis, huggingface
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night "
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night "
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
It shows sentiments of all three labels, positive, neutral and negative.
However, I'm now trying to use Finbert from ProsusAI to do sentiment analysis https://huggingface.co/ProsusAI/finbert. It doesn't give me its usage on its page. So I'm following this tutorial https://towardsdatascience.com/effortless-nlp-using-pre-trained-hugging-face-pipelines-with-just-3-lines-of-code-a4788d95754f.
My code is
from transformers import pipeline
classifier = pipeline('sentiment-analysis', model='ProsusAI/finbert')
classifier('Stocks rallied and the British pound gained.')
However, the result is [{'label': 'positive', 'score': 0.8983612656593323}]. It only shows the score of the most likely label (positive), but I need all three labels' scores (positive, neutral and negative). How should I use it? You can get the scores for all labels as follows:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import scipy | {
"domain": "datascience.stackexchange",
"id": 10939,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bert, transformer, sentiment-analysis, huggingface",
"url": null
} |
ros, ros-kinetic, rqt-graph
PluginManager._load_plugin() could not load plugin "rqt_graph/RosGraph":
Traceback (most recent call last):
File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/plugin_handler.py", line 99, in load
self._load()
File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/plugin_handler_direct.py", line 54, in _load
self._plugin = self._plugin_provider.load(self._instance_id.plugin_id, self._context)
File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context)
File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context)
File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/rqt_gui_py/ros_py_plugin_provider.py", line 60, in load
return super(RosPyPluginProvider, self).load(plugin_id, plugin_context)
File "/home/Giuseppe/ros_catkin_ws/install_isolated/lib/python2.7/site-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context) | {
"domain": "robotics.stackexchange",
"id": 32753,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-kinetic, rqt-graph",
"url": null
} |
which is sixth-order accurate. Clearly the process can be repeated to get eighth-order accuracy and beyond. Doing so goes by the name of Romberg integration, which we will not present in full generality.
## Node doubling¶
Note in (5.6.12) that $$R_f(4n)$$ depends on $$S_f(2n)$$ and $$S_f(4n)$$, which in turn depend on $$T_f(n)$$, $$T_f(2n)$$, and $$T_f(4n)$$. There is a useful benefit realized by doubling of the nodes in each application of the trapezoid formula. As shown in Fig. 5.6.2, when doubling $$n$$, only about half of the nodes are new ones, and previously computed function values at the other nodes can be reused.
Fig. 5.6.2 Dividing the node spacing by half introduces new nodes only at midpoints, allowing the function values at existing nodes to be reused for extrapolation.
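The node-doubling reuse can be sketched as follows (a minimal Python version; the function and variable names are mine, not the text's notation):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule T(n) on n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def double_nodes(T_n, f, a, b, n):
    """T(2n) from T(n): halve the old sum and add only the new midpoint values."""
    h = (b - a) / n
    mids = sum(f(a + (i + 0.5) * h) for i in range(n))
    return 0.5 * T_n + 0.5 * h * mids

f, a, b = math.sin, 0.0, math.pi
T1 = trapezoid(f, a, b, 4)
T2 = double_nodes(T1, f, a, b, 4)  # equals trapezoid(f, a, b, 8), half the new work
S2 = (4 * T2 - T1) / 3             # one extrapolation step (Simpson accuracy)
```

Each doubling evaluates $f$ only at the $n$ new midpoints, and the extrapolated value $S$ already approximates $\int_0^\pi \sin x\,dx = 2$ far better than either trapezoid value.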
Specifically, we have | {
"domain": "tobydriscoll.net",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9877587234469587,
"lm_q1q2_score": 0.8599400531121726,
"lm_q2_score": 0.8705972751232809,
"openwebmath_perplexity": 997.3050622322135,
"openwebmath_score": 0.8504312634468079,
"tags": null,
"url": "https://tobydriscoll.net/fnc-julia/localapprox/integration.html"
} |
organic-chemistry, experimental-chemistry, redox, synthesis, alcohols
References:
The majority of the chemical information here comes from Comprehensive Organic Synthesis, Vol. 7. A whole chapter is dedicated to DMSO based oxidations and their examples in total synthesis: see Lee, T. V. Oxidation Adjacent to Oxygen of Alcohols by Activated DMSO Methods. DOI: 10.1016/B978-0-08-052349-1.00191-8. | {
"domain": "chemistry.stackexchange",
"id": 7941,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, experimental-chemistry, redox, synthesis, alcohols",
"url": null
} |