astrophysics, radiation, thermal-radiation, plasma-physics
Title: Is Sun brighter than what we actually see? I learned that plasma can reflect radiation of frequency less than that of its own oscillations. If so, considering the plasma in the Sun's atmosphere, it should also reflect solar radiation.
That would mean that the radiation emitted from the inner layers of the Sun would be reflected back by the outer layers. So the only radiation coming out should be that generated at the outermost layers, which have no denser layers of plasma surrounding them. And of course, radiation with a higher frequency than the plasma frequency of each layer would come out unscathed.
If this is true, most of the radiation generated by fusion will be trapped inside, and what we observe is only a fraction.
Note that the intensity of observable radiation coming out from stars would now mostly depend on the outermost layer. So, wouldn't it be inappropriate to consider stars as Black bodies while determining their temperature and other properties? Is Sun brighter and hotter than what we see from outside?
Note that the intensity of observable radiation coming out from stars would now mostly depend on the outermost layer.
Because the material in the star is opaque, it completely depends on the outermost layer. Of course the properties of that layer (such as its temperature) are driven by the energy coming from the interior.
So, wouldn't it be inappropriate to consider stars as Black bodies while determining their temperature and other properties?
It is appropriate for an object whose spectrum closely matches a blackbody spectrum. However, the only property that describes is the temperature of the visible layer. It implies nothing about the interior or the processes that produce and distribute energy. You shouldn't read "blackbody" and think that means the (invisible) interior is simple or in some way similar to the exterior.
The sun's interior is much hotter than the exterior (around 15 million Kelvin in the core, compared to the 6000 Kelvin or so at the photosphere). Because it is not visible, I would hesitate to call it "brighter". But you could consider it that way. | {
"domain": "physics.stackexchange",
"id": 57258,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "astrophysics, radiation, thermal-radiation, plasma-physics",
"url": null
} |
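As a quick illustration of what the blackbody fit does determine (a sketch; the temperature and radius figures below are standard reference values, not taken from the text above):

```python
import math

# Wien's law and the Stefan-Boltzmann law applied to the photosphere:
# the fitted blackbody temperature fixes the peak wavelength and, together
# with the radius, the total luminosity of the visible layer.
T = 5778.0                      # photosphere temperature, K (reference value)
b = 2.897771955e-3              # Wien displacement constant, m*K
sigma = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
R = 6.957e8                     # solar radius, m (reference value)

lam_peak = b / T                              # peak wavelength, ~501 nm
L = 4 * math.pi * R**2 * sigma * T**4         # luminosity, ~3.8e26 W
print(f"peak wavelength ~ {lam_peak * 1e9:.0f} nm, luminosity ~ {L:.2e} W")
```

None of this says anything about the 15-million-kelvin core, which is exactly the answer's point.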
# Find the range of values of $x$ which satisfies the inequality.
Find the range of values of $x$ which satisfies the inequality $(2x+1)(3x-1)<14$.
I have done many similar problems and I know how to solve them. I tried this one too, but my answer doesn't match the book's answer.
I did it this way: after rearranging the inequality and factorizing it, I get
$(3x+5)(2x-3)<0$
Is this right?
Anyway, then finding the range of values, I get: $x<-\frac{5}{3}$ or $x>\frac{3}{2}$
But my book says the answer should be $-\frac{5}{3}<x<\frac{3}{2}$.
Did I make a mistake?
• It looks like you did everything right except realize that you are doing $(3x+5)(2x-3)<0$ instead of $(3x+5)(2x-3)\gt0$... Note that the LHS is a parabola facing "up" so it is increasing as $x$ moves away from the center... – abiessu Feb 28 '14 at 19:07
• @abiessu I used the sign $<$....Do I have to change it to $>$? – Kiara Feb 28 '14 at 19:09
• No, I'll try to explain better. – abiessu Feb 28 '14 at 19:10
• @abiessu I still did not understand! – Kiara Feb 28 '14 at 19:19
Consider the inequality
$$ab\lt 0\tag 1$$
This inequality holds exactly when $a\gt 0, b\lt 0$ or when $a\lt 0, b\gt 0$, but not when $a\gt 0, b\gt 0$ or $a\lt 0,b\lt 0$.
In this question, the inequality has been reduced to
$$(3x+5)(2x-3)\lt0$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692264378963,
"lm_q1q2_score": 0.8201423167855797,
"lm_q2_score": 0.8397339596505965,
"openwebmath_perplexity": 135.65575878970603,
"openwebmath_score": 0.8539050817489624,
"tags": null,
"url": "https://math.stackexchange.com/questions/694324/find-the-range-of-values-of-x-which-satisfies-the-inequality"
} |
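A quick numeric sanity check of the book's interval (a sketch added here, not part of the original thread):

```python
import numpy as np

# Sample (2x+1)(3x-1) < 14 on a grid; the x values that satisfy it should
# fill the open interval (-5/3, 3/2), matching the book's answer.
xs = np.linspace(-3, 3, 601)
inside = xs[(2 * xs + 1) * (3 * xs - 1) < 14]
print(inside.min(), inside.max())  # close to -5/3 and 3/2
```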
java, console, math-expression-eval, calculator
private static void plusOrMinus(ArrayList<Double> nums,
        ArrayList<String> op, String plusOrMinus) {
    int index = op.indexOf(plusOrMinus);
    if (plusOrMinus.equals("+")) {
        nums.set(index, nums.get(index) + nums.get(index + 1));
    } else {
        nums.set(index, nums.get(index) - nums.get(index + 1));
    }
    nums.remove(index + 1);
    op.remove(index);
}

private static void multiplyOrDivide(ArrayList<Double> nums,
        ArrayList<String> op, String multiplyOrDivide) {
    int index = op.indexOf(multiplyOrDivide);
    BigDecimal bd = new BigDecimal(nums.get(index).toString());
    BigDecimal bd2 = new BigDecimal(nums.get(index + 1).toString());
    if (multiplyOrDivide.equals("*")) {
        // multiply() already produces scale bd.scale() + bd2.scale();
        // BigDecimal is immutable, so a bare setScale(...) whose result
        // is discarded would be a no-op anyway.
        bd = bd.multiply(bd2);
    } else {
        // A bare divide() throws ArithmeticException for non-terminating
        // decimals such as 1/3, so bound the precision explicitly
        // (requires java.math.MathContext).
        bd = bd.divide(bd2, MathContext.DECIMAL64);
    }
    nums.set(index, bd.doubleValue());
    nums.remove(index + 1);
    op.remove(index);
} | {
"domain": "codereview.stackexchange",
"id": 8765,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, console, math-expression-eval, calculator",
"url": null
} |
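A rough Python sketch of the same reduction scheme may make the index bookkeeping clearer (names like `reduce_op` and `evaluate` are illustrative, not from the original code):

```python
def reduce_op(nums, ops, symbol):
    # Collapse the first occurrence of `symbol` in place, mirroring the
    # Java plusOrMinus/multiplyOrDivide helpers above.
    i = ops.index(symbol)
    if symbol == '+':
        nums[i] = nums[i] + nums[i + 1]
    elif symbol == '-':
        nums[i] = nums[i] - nums[i + 1]
    elif symbol == '*':
        nums[i] = nums[i] * nums[i + 1]
    else:  # '/'
        nums[i] = nums[i] / nums[i + 1]
    del nums[i + 1]
    del ops[i]

def evaluate(nums, ops):
    # Apply * and / first, then + and -, scanning left to right.
    for group in ('*/', '+-'):
        while any(o in group for o in ops):
            sym = next(o for o in ops if o in group)
            reduce_op(nums, ops, sym)
    return nums[0]

print(evaluate([2.0, 3.0, 4.0], ['+', '*']))  # 14.0
```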
newtonian-mechanics, forces, rotational-dynamics, torque
Title: Is there a rotational analog for Newton's laws of motion? In the context of rotational dynamics, my textbook repeatedly states that there always exists a rotational analog for every linear variable, and that the two are related through similar equations. (And in fact I know most of these analogs and the related equations.)
Though up to now I haven't seen or learnt of a rotational analog of Newton's laws of motion. So
Does there exist a rotational analog for Newton's laws of Motion?
If possible, then are they generalizable to non-rigid body?
Edit:
To further clarify the question I am going to write down the rotational analog of Newton's laws of motion. Assuming that there exists a rotational analog of Newton's laws of motion, I suppose they would look like the following:
A body continues in its state of rest or uniform rotation unless acted upon by an external torque.
The torque acting on a body is equal to the rate of change of its angular momentum, i.e.,
$$ \boldsymbol {\tau} = \frac {d\mathbf L}{dt}$$
For every torque there is an equal and opposite torque. | {
"domain": "physics.stackexchange",
"id": 64254,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, rotational-dynamics, torque",
"url": null
} |
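The second of these analogs, $\boldsymbol{\tau} = d\mathbf{L}/dt$, can be checked numerically for a single point mass (a sketch; the trajectory below is arbitrary, not from the question):

```python
import numpy as np

m = 2.0  # mass, arbitrary

def state(t):
    # An arbitrary smooth trajectory r(t) with its exact v(t) and a(t).
    r = np.array([np.cos(t), np.sin(2 * t), t])
    v = np.array([-np.sin(t), 2 * np.cos(2 * t), 1.0])     # dr/dt
    a = np.array([-np.cos(t), -4 * np.sin(2 * t), 0.0])    # dv/dt
    return r, v, a

def ang_mom(t):
    r, v, _ = state(t)
    return np.cross(r, m * v)          # L = r x p

t, dt = 0.7, 1e-5
r, _, a = state(t)
tau = np.cross(r, m * a)               # torque of the net force F = m a
dLdt = (ang_mom(t + dt) - ang_mom(t - dt)) / (2 * dt)  # central difference
print(np.allclose(tau, dLdt, atol=1e-6))  # True
```

The check works because $d\mathbf{L}/dt = \mathbf{v}\times m\mathbf{v} + \mathbf{r}\times m\mathbf{a} = \mathbf{r}\times\mathbf{F}$, the first cross product vanishing identically.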
NOTE
$$\frac{\partial F}{\partial X} = \left(\begin{array}{cc}0 & 1\\ 1+2x&0\end{array}\right)$$
plt = StreamPlot[{y, x^2 + x}, {x, -2, 2}, {y, -2, 2}];
pt1 = Graphics[{Red, Disk[{-1, 0}, 0.04]}];
pt2 = Graphics[{Red, Disk[{0, 0}, 0.04]}];
Show[plt, pt1, pt2]
• What software package did you use to make that vector field? – TSF Feb 21 at 12:02
• @TonyS.F. MATHEMATICA. I attached the script. – Cesareo Feb 21 at 12:07
• You cannot use linearization for the center. Lyapunov's linearization theorem does not allow any conclusion in this case. – MrYouMath Feb 22 at 18:47
• @MrYouMath Thanks for the hint. I had forgotten to include this detail. – Cesareo Feb 22 at 20:04
• @Cesareo: I like the idea with the circle. Do you have any resource that gives more examples of this method? But I find it a little bit strange that this method does neglect higher order terms. – MrYouMath Feb 22 at 21:00
A more elegant way for the center at $$\boldsymbol{x}_\text{eq}=[-1,0]^T$$ is to use the following Lyapunov function candidate
$$V(x,y)= \frac{y^2}{2}+\frac{1}{6}\left[1-x^2(2x+3)\right].$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9732407152622597,
"lm_q1q2_score": 0.8366597651809086,
"lm_q2_score": 0.8596637523076225,
"openwebmath_perplexity": 381.46198143835994,
"openwebmath_score": 0.6213104724884033,
"tags": null,
"url": "https://math.stackexchange.com/questions/3118741/linearisation-and-stability-of-a-system"
} |
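A quick numeric spot-check (a sketch, not part of the original answer) that this candidate $V$ is conserved along the flow, i.e. $\dot V = \nabla V \cdot F = 0$ for the system $\dot x = y$, $\dot y = x^2 + x$:

```python
import numpy as np

def Vdot(x, y):
    # grad V = (-(x^2 + x), y), F = (y, x^2 + x), so Vdot = grad V . F
    return -(x**2 + x) * y + y * (x**2 + x)

# Evaluate Vdot on a grid around the center at (-1, 0).
xs, ys = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
print(np.max(np.abs(Vdot(xs, ys))))  # 0.0
```

Since $\dot V$ vanishes identically, level curves of $V$ near $(-1,0)$ are closed orbits, confirming the center without linearization.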
organic-chemistry, boiling-point
Title: At what temperature will proteins and fats boil in a vacuum? A question was asked on another stackexchange site:
This is not nice perspective, but eventually it will happen. An astronaut falls out of spaceship because of damage caused by collision with other object, or because of suit decontamination. The fluids from the body would evaporate, and if any bacteria would survive, than only as spores. Does it mean the perfect mummification of the body? Or there will be some decay, caused by enzymes from damaged cells, for example?
Given that the water in the body will outgas, taking with it much of the rest of the body, are there any cited sources that specify the pressure at which proteins, fats and bones will break down and become vapour?

You're talking about the human body being exposed to outer space and asking what would happen to its internal structure. Bones, fats and proteins are within the enclosed system that is the human body; they are not subjected to the low pressures of space. The answer to the pressure at which proteins, fats and bones break down and become vapour is not related to what would happen to them if the body is exposed to the vacuum of space. Nevertheless, I'll try to provide both answers.
The answer to the second question (short-term) can be found on NASA's website at "ask an astrophysicist". It's actually quite surprising for someone used to sci-fi movies:
You do not explode. Your blood does not boil. You do not freeze. You do not instantly lose consciousness. | {
"domain": "chemistry.stackexchange",
"id": 535,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, boiling-point",
"url": null
} |
• Thank you, those are useful references! – ShreevatsaR Jan 12 at 16:56
• @ShreevatsaR: You're welcome. – Markus Scheuer Jan 12 at 16:58
• Thanks very much for a very clear and informative answer. I've often heard of the saddle-point method but never got around to learning it (e.g. didn't get that far in the Analytic Combinatorics book); now I have some idea. So if I understand this answer correctly, this gives the asymptotics of the largest coefficient... and we need to appeal to probability arguments etc to show that for the other coefficients, the probability drops off exponentially from this main (largest) term. – ShreevatsaR Jan 15 at 19:25
• @ShreevatsaR: You're welcome and I agree with your considerations. I'd like to point to the introductory paragraph (the first one) in section VIII.9: Saddle-points and probability distributions. This section might go in the directions you are interested in. – Markus Scheuer Jan 15 at 19:36
You can prove that your asymptotic expression is correct using the Edgeworth series. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9790357573468175,
"lm_q1q2_score": 0.8004346508393979,
"lm_q2_score": 0.8175744806385542,
"openwebmath_perplexity": 400.0119019042564,
"openwebmath_score": 0.9821085929870605,
"tags": null,
"url": "https://math.stackexchange.com/questions/3063646/what-is-the-probability-that-the-sum-of-digits-of-a-random-k-digit-number-is"
} |
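The normal approximation discussed in the comments above is easy to illustrate numerically (a sketch; sample size and seed are arbitrary). A random digit is uniform on 0..9 with mean 4.5 and variance 8.25, so the digit sum of a random k-digit string has mean 4.5k and variance 8.25k:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 50
# Digit sums of 200,000 random k-digit strings.
sums = rng.integers(0, 10, size=(200_000, k)).sum(axis=1)
mu, var = 4.5 * k, 8.25 * k
print(sums.mean(), mu, sums.var(), var)  # empirical vs theoretical
```

The saddle-point analysis then sharpens this central-limit picture into asymptotics for individual coefficients.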
Indeed, as bombadillo mentioned in his analysis of mathematicians' thinking, I allow myself this license when brainstorming, but not when writing up proofs. (Since a proof is an attempt to communicate with others, it should leave no essential point in doubt.)
Last edited: Apr 21, 2005
12. Apr 21, 2005
### EvLer
You're right, I sort of re-defined the problem. But the problem does not say that the basis is an eigenbasis; all it says is that the set is orthogonal.
So in an orthogonal set, zero may be included? It's something you mentioned earlier. Doesn't that make it linearly dependent?
13. Apr 21, 2005
### James Jackson
Mathwonk: I have all sorts of fun along the lines you're mentioning. I am a Physicist (final year of my degree in the UK) and as such make assumptions along the lines you mention. However, I'm always careful to prove any such assumptions to myself as although I take them as true, I find it helps understanding on a deeper level than the problem at hand if you fully understand the framework supporting it.
With that in mind, I'm finding it quite interesting taking a 4th year module in Quantum Computing and Quantum Information Theory that is taught by the Mathematics department (we can take this 4th year Maths module in our 3rd year of Physics), as everything is defined very formally. This is different to Physics, where there is a certain element of what seems to be hand waving but is actually saving time by telling you certain things are true. If you want to go and prove these things then that's fine!
A case in point is a post on this sub-forum on orthogonal basis sets. I gave a counterexample to someone's claim that a Physicist would quite happily take, but Hyrkyl added that (in this instance) a certain fundamental property of what I was talking about (i.e. the space $C^2$ having an inner product) was needed to 'formalise' things.
Bloody mathematicians :)
Edit: Please ignore certain gramatical inconsitancies in the post above but I'm slightly less than sober right now...
14. Apr 21, 2005
### mathwonk
{(0,0), (1,0)} is an example of a dependent, but mutually orthogonal set.
15. Apr 21, 2005 | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9697854111860906,
"lm_q1q2_score": 0.8102397396101908,
"lm_q2_score": 0.8354835309589073,
"openwebmath_perplexity": 1367.087035490761,
"openwebmath_score": 0.8208373785018921,
"tags": null,
"url": "https://www.physicsforums.com/threads/orthogonal-basis.72387/"
} |
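mathwonk's example is easy to verify mechanically (a NumPy sketch):

```python
import numpy as np

# {(0,0), (1,0)} is mutually orthogonal (the pairwise dot product is 0)
# yet linearly dependent: any set containing the zero vector is dependent,
# and indeed the matrix with these columns has rank 1 < 2.
v1, v2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
orthogonal = np.dot(v1, v2) == 0.0
dependent = np.linalg.matrix_rank(np.column_stack([v1, v2])) < 2
print(orthogonal, dependent)  # True True
```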
python, numpy, statistics, pandas, numerical-methods
So, using numpy or pandas or both, is there any way I could improve this
in execution time and syntax?
The final idea is to return a numpy array with the simulated p-values to append it to the original dataframe.

As you mentioned, by calling np.vectorize on your monte_carlo function and applying it to your dataset b, you are essentially running a for loop over each element individually.
np.random.noncentral_chisquare takes either a float or an array of floats for its noncentrality parameter, so you could vectorize your function call by generating your entire distribution in 1 call then applying the rest of the function to the entire matrix at once.
def monte_carlo2(x, tot_sample):
    gen_dist = np.random.noncentral_chisquare(df=1, nonc=x, size=(x.shape[0], tot_sample))
    compare = gen_dist > x
    return np.divide(np.sum(compare, axis=1), tot_sample)

x2 = monte_carlo2(b, total_sample)
This passes your f_res data in as a vector, b, instead of 1 element at a time, generates a distribution with nonc= [b0, b1, b2, ..., bN]. This results in a gen_dist matrix of size (625527, 10) or (len(b), total_sample). b is a vertical vector, so the comparison will apply it across each row, then you just need to sum your comparison over each row (axis=1).
You could probably speed it up more by only generating one distribution per unique value of f_res, but depending on what percentage of your values are unique this may not be worth it, and I assume you want a new random sample each time.
With your original function I get a timeit of:
15.2 s ± 125 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
And with the vectorized version I get:
8.84 s ± 50.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | {
"domain": "codereview.stackexchange",
"id": 27262,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, numpy, statistics, pandas, numerical-methods",
"url": null
} |
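For reference, a minimal self-contained version of the vectorized approach (a sketch with made-up data; the modern `Generator` API is used here in place of the legacy `np.random` call, and `b` is a tiny stand-in for the real f_res column):

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo2(x, tot_sample):
    x = x.reshape(-1, 1)                  # column vector, so it broadcasts
    # One noncentral chi-square row of tot_sample draws per noncentrality.
    gen = rng.noncentral_chisquare(df=1, nonc=x, size=(x.shape[0], tot_sample))
    return (gen > x).mean(axis=1)         # one simulated p-value per entry

b = np.array([0.5, 1.0, 4.0])             # stand-in for f_res
p = monte_carlo2(b, 10_000)
print(p.shape)  # (3,)
```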
of that line, or the stack is reduced to the single base point $P_0$. Instead, one just observes that $P_2$ would make a greater angle than $P_1$ if (and only if) $P_2$ lies on the left side of the directed line segment $P_0P_1$, as shown in the following diagram. A triangulation of a set of points in Euclidean space is a simplicial complex that covers the convex hull of the set, and whose vertices belong to the set. If you want a convex hull and you want it now, you could go get a library like MIConvexHull. That library claims to be high-performance compared to a comparable C++ library, but that claim is implausible, especially for the 2D case, since the algorithm relies heavily on heap memory. Construct the convex hull of a set of 2-dimensional points with the brute-force algorithm and the divide-and-conquer algorithm. For 3-D problems, k is a triangulation matrix of size mtri-by-3, where mtri is the number of triangular facets on the boundary. How to check if two given line segments intersect? Graham's scan algorithm will find the corner points of the convex hull. Firstly, the point cloud is segmented into 3D planes via region growing and region merging. Definitions. Let's consider a 2D plane, where we plug pegs at the points mentioned. We consider here a divide-and-conquer algorithm called quickhull because of its resemblance to quicksort. Let $S$ be a set of $n > 1$ points $p_1(x_1, y_1), \ldots$ Lift each point $(x, y)$ to $z = x^2 + y^2$, compute the 3D lower convex hull, and project the 3D facets back to the plane: $s$ lies within the circumcircle of $p, q, r$ iff the lifted point $s'$ lies below the plane through $p'$, $q'$, $r'$. For other dimensions, they are in input order.
After all points have been processed, push onto the stack to complete the lower convex chain. | {
"domain": "com.br",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.95598134762883,
"lm_q1q2_score": 0.874628239582888,
"lm_q2_score": 0.9149009462917594,
"openwebmath_perplexity": 854.4457365825215,
"openwebmath_score": 0.45257630944252014,
"tags": null,
"url": "https://www.nossaciencia.com.br/53mmr2m/0e6777-find-convex-hull-of-points-given-in-a-2d-plane"
} |
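The "push onto the stack to complete the lower convex chain" step above can be sketched as Andrew's monotone-chain variant of Graham's scan (an illustrative implementation, not code from the original page):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o): > 0 means b is left of the
    # directed segment o -> a, the left-turn test described above.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Sort the points, then build the lower and upper chains with a stack.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def chain(seq):
        stack = []
        for p in seq:
            # Pop while the last two stack points and p fail the left turn.
            while len(stack) >= 2 and cross(stack[-2], stack[-1], p) <= 0:
                stack.pop()
            stack.append(p)
        return stack

    lower = chain(pts)
    upper = chain(reversed(pts))
    return lower[:-1] + upper[:-1]   # endpoints shared by both chains

print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
# [(0, 0), (1, 0), (1, 1), (0, 1)] in counter-clockwise order
```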
ros, navigation, gmapping
ERROR [1507272826.100975658, 142.019000000]: Joint 'hand_index_virtual_1_joint' not found in model 'tiago'
ERROR [1507272826.101006027, 142.019000000]: Joint 'hand_index_virtual_2_joint' not found in model 'tiago'
ERROR [1507272826.101036851, 142.019000000]: Joint 'hand_index_virtual_3_joint' not found in model 'tiago'
ERROR [1507272826.101064825, 142.019000000]: Joint 'hand_little_abd_joint' not found in model 'tiago'
ERROR [1507272826.101089220, 142.019000000]: Joint 'hand_little_flex_1_joint' not found in model 'tiago'
ERROR [1507272826.101112872, 142.019000000]: Joint 'hand_little_.......................
How can I fix these errors?
If I ignore all the warnings and errors, I can still start with mapping.
Error 3:
By saving the map with :
droid@M6600:~/tiago_public_ws$ rosservice call /pal_map_manager/save_map "directory: '3XC12'"
I got the following message:
ERROR: Unable to load type [pal_navigation_msgs/SaveMap].
Have you typed 'make' in [pal_navigation_msgs]?
Before starting with mapping, I had already executed 'catkin build', with the result:
[build] Summary: All 82 packages succeeded!
[build] Ignored: 8 packages were skipped or are blacklisted.
[build] Warnings: None.
[build] Abandoned: None.
[build] Failed: None.
[build] Runtime: 32.3 seconds total.
Therefore the type 'pal_navigation_msgs' must be available on my local workstation !?!
Error 4:
Sometimes after rebooting or starting Tiago in web-commander under 'Diagnostics' | {
"domain": "robotics.stackexchange",
"id": 29016,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, navigation, gmapping",
"url": null
} |
python, algorithm
Since k=1, the result should be the single pair [(0, 1)]. But the initialization loop runs over all the million elements of nums1.
This could be avoided by initializing the heap with the single pair (nums1[0], nums2[0]) since we know that this is the pair with the smallest sum. And then when we pop a pair (nums1[i], nums2[j]) from the heap, we have to add two new pairs to the heap: that is, (nums1[i + 1], nums2[j]) and (nums1[i], nums2[j + 1]), as either of these pairs might be next in order after the pair we just processed.
However, we have to be careful to add each pair just once — after processing (nums1[1], nums2[0]) and (nums1[0], nums2[1]) we must not add the pair (nums1[1], nums2[1]) twice. This can be avoided by keeping a set of all the pairs we have added to the heap.
from heapq import heappush, heappop

# Wrapper assumed from context: the discussion concerns the body of this function.
def k_smallest_pairs(nums1, nums2, k):
    solution = []
    heap = []  # Min-heap of (nums1[i] + nums2[j], i, j)
    added = set()  # Set of indexes (i, j) that have been added to heap.

    def add(i, j):
        # Add (nums1[i] + nums2[j], i, j) to the heap if possible.
        if i < len(nums1) and j < len(nums2) and (i, j) not in added:
            added.add((i, j))
            heappush(heap, (nums1[i] + nums2[j], i, j))

    add(0, 0)
    while k and heap:
        k -= 1
        _, i, j = heappop(heap)
        solution.append((nums1[i], nums2[j]))
        add(i + 1, j)
        add(i, j + 1)
    return solution | {
"domain": "codereview.stackexchange",
"id": 30255,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, algorithm",
"url": null
} |
semiconductors
If another atom is brought in line with the first two, a new charge distribution becomes possible that is neither completely bonding nor antibonding. Hence, a third energy level is formed between the two extremes. When $N$ atoms are covalently bonded into a linear chain, $N$ energy levels distributed between the lowest-energy bonding state and the highest-energy antibonding state appear, forming a band of energies. In our linear chain of atoms, spin degeneracy allows all $N$ electrons to fall into the lower half of the energy band, leaving the upper half of the band empty. However in a three-dimensional crystal, the number of energy levels is more generally equated with the number of unit cells, not the number of atoms. In typical semiconductor crystals, there are two atoms per primitive unit cell. Thus, the first atom fills the lower half of the energy band (as with the linear chain), whereas the second atom fills the upper half, such that the energy band is entirely full.
My question relates to the following section:
In our linear chain of atoms, spin degeneracy allows all $N$ electrons to fall into the lower half of the energy band, leaving the upper half of the band empty. However in a three-dimensional crystal, the number of energy levels is more generally equated with the number of unit cells, not the number of atoms. | {
"domain": "chemistry.stackexchange",
"id": 15273,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "semiconductors",
"url": null
} |
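The band formation described above can be illustrated with a toy tight-binding chain (a sketch; the on-site energy 0 and hopping t = -1 are arbitrary choices): diagonalizing an N-site nearest-neighbour Hamiltonian gives N levels spread between the bonding and antibonding extremes.

```python
import numpy as np

N, t = 20, -1.0
# Nearest-neighbour hopping only: H[i, i+1] = H[i+1, i] = t.
H = t * (np.eye(N, k=1) + np.eye(N, k=-1))
levels = np.linalg.eigvalsh(H)              # N levels: the "band"
print(levels.min(), levels.max())           # spread approaches -2|t|..+2|t|
```

With two electrons per level (spin degeneracy), N electrons fill exactly the lower half of this band, as in the linear-chain argument quoted above.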
Your proof seems a bit confusing. You would make it clearer with the initial observation that any linear map from a vector space $$U$$ to a vector space $$V$$ is entirely determined by its values at the vectors $$e_i$$ of a basis of $$U$$. The rest should follow almost immediately. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9773708019443873,
"lm_q1q2_score": 0.8420422837079964,
"lm_q2_score": 0.86153820232079,
"openwebmath_perplexity": 64.75065967517986,
"openwebmath_score": 0.9988114833831787,
"tags": null,
"url": "https://math.stackexchange.com/questions/3494820/spaces-of-linear-maps-and-dual-space"
} |
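The observation can be checked mechanically (a NumPy sketch): stacking the images of the standard basis vectors as columns reconstructs the map's matrix exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))    # some linear map from R^4 to R^3

# Rebuild the map knowing only its values on the basis e_1, ..., e_4.
E = np.eye(4)                       # columns are the basis vectors e_i
B = np.column_stack([A @ E[:, i] for i in range(4)])
print(np.array_equal(A, B))  # True
```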
density-matrix, state-tomography
$$T_{{\bf a, a'}} := Tr(M^{({\bf a})}M^{({\bf a'})}).$$
I would like to know how (1) can be understood from intuitive and mathematical points of view. My main point of confusion is the matrix $T^{-1}_{{\bf a,a'}}$.

Let $\rho$ be an arbitrary state, and $\{\mu_b\}_b$ some POVM, so that the outcome probabilities are $p_b(\rho)=\operatorname{tr}(\mu_b \rho)$. You're asking what are the (Hermitian) operators $M_b$ such that you get the decomposition $\rho=\sum_b p_b(\rho) M_b$ for all $\rho$.
This is completely analogous to the following question: given a finite-dimensional vector space $V$ and a finite subset $\{v_k\}\subset V$ that spans $V$, is there some set $\{w_k\}$ such that for all $v\in V$ we have $v=\sum_k \langle v_k,v\rangle w_k$?
One way to answer this question is to observe this relation is equivalent to $v=WV^\dagger v$, with $V$ and $W$ matrices whose columns are the vectors $\{v_k\}$ and $\{w_k\}$, respectively. If we want this for all $v\in V$, that means we're asking for vectors $w_k$ such that $WV^\dagger =I$.
This is an inhomogeneous linear system, a solution of which can always be written
$$W=(V^\dagger)^+\equiv (V V^\dagger)^{-1} V,$$
assuming $VV^\dagger$ is invertible, that is, $V$ is surjective (which is true iff $\{v_k\}$ span $V$). | {
"domain": "quantumcomputing.stackexchange",
"id": 4887,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "density-matrix, state-tomography",
"url": null
} |
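The finite-dimensional analogue is easy to verify numerically (a sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((3, 5))    # columns v_k span R^3 (almost surely)
# Dual-frame vectors w_k as columns of W = (V V^T)^{-1} V.
W = np.linalg.inv(V @ V.T) @ V

v = rng.standard_normal(3)
recon = W @ (V.T @ v)              # sum_k <v_k, v> w_k
print(np.allclose(v, recon))  # True
```

This works because $W V^T = (V V^T)^{-1} V V^T = I$, exactly the condition derived above.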
python, beginner, game, multithreading, tkinter
self.new_game()
def new_game(self):
self.canvas.delete(ALL)
self.canvas.create_text(
    WIDTH/2, HEIGHT/2 - 50,
    text="Welcome to Snake!"
         "\nPress arrow keys or click in the window"
         " to start moving!",
    tag="welcome_text")
rectWidth = WIDTH/25
#Initialize snake to 3 rectangles
rect1 = self.canvas.create_rectangle(
    WIDTH/2 - rectWidth/2, HEIGHT/2 - rectWidth/2,
    WIDTH/2 + rectWidth/2, HEIGHT/2 + rectWidth/2,
    outline="#dbf", fill="#dbf", tag="rect1")
rect2 = self.canvas.create_rectangle(
    WIDTH/2 - rectWidth/2, HEIGHT/2 - rectWidth/2,
    WIDTH/2 + rectWidth/2, HEIGHT/2 + rectWidth/2,
    outline="#dbf", fill="#dbf", tag="rect2")
rect3 = self.canvas.create_rectangle(
    WIDTH/2 - rectWidth/2, HEIGHT/2 - rectWidth/2,
    WIDTH/2 + rectWidth/2, HEIGHT/2 + rectWidth/2,
    outline="#dbf", fill="#dbf", tag="rect3")
#initialize variables that contribute to smooth gameplay below:
#
#set rectangle width and height variables for use with new rectangles on the canvas
self.rectWidth = rectWidth
#lastDirection recorded because first 2 rectangles always overlap while moving,
#but if user goes right then immediately left the snake should run into itself and
#therefore end the game (See below functions self.check_collide and self.end_game)
self.lastDirection = None
self.direction = None
#Used to force snake to expand out on first move
self.started = False | {
"domain": "codereview.stackexchange",
"id": 15225,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, game, multithreading, tkinter",
"url": null
} |
quantum-mechanics, probability, estimation
Is this valid, and if so, what would be the probability and mechanics behind both the statue waving and the cow jumping over the moon?
Edit#1:
I forgot to also ask whether the "physics" mentioned in the article I linked, denying the possibility of Dawkins' example, is right?

To get a feel for this kind of (im)probability, consider the simplified case where molecules move either left or right with 50% chance. A marble statue hand is maybe 1 kg; the molar mass of marble is conveniently almost exactly 100 g/mol, implying 10 moles, i.e. $10N_A=6.02214076\times 10^{24}$ molecules. The chance that they would all be moving to the left simultaneously is 1 in $2^{10 N_A}=2^{6.02214076\times 10^{24}}\approx10^{1.812845\times10^{24}}$.
Molecules typically change direction fast, so we get a lot of "trials" every second, but we do not have to calculate how many there are, since the above probability is so small that even if they tried once every Planck time it would be far, far longer than the expected time till proton decay or the end of the black hole era.
One can try to refine the calculation with more directions, not every molecule moving in the same direction and so on. But the answer is still the same. This is why thermodynamics can rely on statistical mechanics: since there are so many molecules, macroscopic averaged properties behave in very lawful ways, and fluctuations tend to be tiny. | {
"domain": "physics.stackexchange",
"id": 59876,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, probability, estimation",
"url": null
} |
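The size of this number is easy to reproduce (a sketch): with $P = 2^{-N}$, only the base-10 exponent is representable.

```python
import math

N = 10 * 6.02214076e23           # 10 moles of molecules, as in the text
log10_P = -N * math.log10(2)     # P = 2^{-N} = 10^{log10_P}
print(f"P = 10^({log10_P:.6e})")
```

The exponent comes out near -1.812845e24, matching the figure quoted above.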
thermodynamics, thought-experiment, entropy
Edit:
One of the comments below (which has been deleted now) mentions that the entropy of B does increase but the overall entropy of the system (A+B) decreases.
So the question remains: why does the entropy of the system decrease and not increase?
Or put another way: why is side A more 'dominant' in the overall contribution to the system's entropy?

So the box is hot on the right and cold on the left. Then one can use this temperature separation to run a heat engine by allowing heat to flow from the hot side to the cold side. Another possible action of the demon is to observe the molecules and open the door only if a molecule approaches the trap door from the right. This would result in all the molecules ending up on the left side. Again this setup can be used to run an engine: this time one could place a piston in the partition and allow the gas to flow into the piston chamber, thereby pushing a rod and producing useful mechanical work.
So, remembering this is a paradox: if the demon could do his job, a system that previously could do no work (because entropy was uniform throughout) would now be able to do work, so by definition its entropy has decreased.
But the demon can't do the job, so this is just assuming he could.
Using the expression for the internal energy of an ideal gas, the entropy may be written:
$${{\frac {S}{Nk}}=\ln \left[{\frac {V}{N}}\,\left({\frac {U}{{\hat {c}}_{V}kN}}\right)^{{\hat {c}}_{V}}\,{\frac {1}{\Phi }}\right]}$$
Since this is an expression for entropy in terms of $U$, $V$, and $N$, it is a fundamental equation from which all other properties of the ideal gas may be derived. Note, no explicit $T$ dependence exists, as mentioned in the comments above. | {
"domain": "physics.stackexchange",
"id": 34127,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, thought-experiment, entropy",
"url": null
} |
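One concrete consequence of this formula (a sketch, not part of the original answer): at fixed $U$ and $N$ the unknown constant $\Phi$ cancels in entropy differences, and halving the volume (the demon herding all molecules into one side) changes $S$ by $Nk\ln(1/2)$.

```python
import math

# Ideal-gas entropy per Nk from the fundamental equation above;
# the constants c_V, Phi, k below are arbitrary, since Phi cancels
# in any difference taken at fixed U and N.
def S_over_Nk(V, U, N, c_V=1.5, Phi=1.0, k=1.0):
    return math.log((V / N) * (U / (c_V * k * N)) ** c_V / Phi)

dS = S_over_Nk(V=0.5, U=1.0, N=1.0) - S_over_Nk(V=1.0, U=1.0, N=1.0)
print(dS)  # ln(1/2), about -0.693
```

That $Nk\ln 2$ is exactly the entropy the demon would have to export somewhere, which is where the paradox's resolution lives.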
# Convolving with the hemodynamic response function
import numpy as np
import matplotlib.pyplot as plt
## Making a hemodynamic response function
We will start by making a function that returns an estimate of the blood flow signal at any time $$t$$ after the onset of a neural impulse.
We could use measured data to do this (from data like this paper), but measured data carries noise that we do not want to preserve. Another way, which we follow here, is to construct the function from some useful mathematical curves whose values have a very similar shape to the measured response over time.
## Using scipy
Scipy is a large library of scientific routines that builds on top of numpy.
You can think of numpy as being a subset of MATLAB, and numpy + scipy as being roughly equivalent to MATLAB plus the MATLAB toolboxes.
Scipy has many sub-packages, for doing things like reading MATLAB .mat files (scipy.io) or working with sparse matrices (scipy.sparse). We are going to be using the functions and objects for working with statistical distributions in scipy.stats:
import scipy.stats
scipy.stats contains objects for working with many different distributions. We are going to be working with scipy.stats.gamma, which implements the gamma distribution.
from scipy.stats import gamma
In particular we are interested in the probability density function (PDF) of the gamma distribution.
Because this is a function, we need to pass it an array of values at which to evaluate it.
We can also pass various parameters which change the shape, location and width of the gamma PDF. The most important is the first parameter (after the input array) known as the shape parameter ($k$ in the Wikipedia page on gamma distributions).
First we choose some x values at which to sample from the gamma PDF:
x = np.arange(0, 25, 0.1)
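As a quick preview (a sketch — the shape value 2 here is just an illustrative choice), the PDF can be evaluated over these x values like so:

```python
import numpy as np
from scipy.stats import gamma

x = np.arange(0, 25, 0.1)
y = gamma.pdf(x, 2)      # shape parameter k = 2 (illustrative)

# A PDF should integrate to (roughly) 1 over its support;
# a simple rectangle-rule sum checks this
print(y.sum() * 0.1)
```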
Next we plot the gamma PDF for shape values of 2, 4, 6, 8, 12. | {
"domain": "nipraxis.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9755769056853638,
"lm_q1q2_score": 0.8043260065742788,
"lm_q2_score": 0.8244619177503206,
"openwebmath_perplexity": 1959.5350844538898,
"openwebmath_score": 0.5251843929290771,
"tags": null,
"url": "https://textbook.nipraxis.org/convolution_background.html"
} |
quantum-gate, quantum-state
Alternative:
If the two input qubits are entangled, the above method won't work since you won't be able to represent the input state as a tensor product of the states of the two qubits. So, I'm outlining a more general method here.
When two gates are in parallel, like in your case, you can consider the tensor product of the two gates and apply that on the 2-qubit state vector. You'll end up with the same result.
$\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\\ \end{bmatrix} \otimes \frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\\ \end{bmatrix} = \frac{1}{2}\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1 \end{bmatrix}$
Now, on applying this matrix on the 2-qubit state $\begin{bmatrix}1\\0\\0\\0\end{bmatrix}$ you get:
$$\frac{1}{2}\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1 \end{bmatrix} \begin{bmatrix}1\\0\\0\\0\end{bmatrix}=\begin{bmatrix}1/2\\1/2\\1/2\\1/2\end{bmatrix}$$
which is equivalent to $$\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_B$$
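This matrix calculation is easy to verify numerically; a sketch using NumPy:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)     # Hadamard gate
HH = np.kron(H, H)                       # tensor product acting on 2 qubits

state = np.array([1.0, 0.0, 0.0, 0.0])   # |00> as a state vector
out = HH @ state
print(out)                               # [0.5 0.5 0.5 0.5]
```

The unitarity check `HH† · HH = I` also confirms the Kronecker product of two unitaries is unitary.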
Justification
Tensor product of linear maps: | {
"domain": "quantumcomputing.stackexchange",
"id": 193,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-gate, quantum-state",
"url": null
} |
So one solution to 7x - 10n = 2 is x = 6, n = 4. It is possible to write out the "general solution", but since 6 itself is between 0 and 10, x = 6 satisfies 7(6) ≡ 2 (mod 10) and so 35(6) = 210 ≡ 10 (mod 50).
5. Jan 12, 2013
### knowLittle
According to Wikipedia, Diophantine equations are written as follows:
ax + by = c
The Diophantine equation that you are really writing is this:
35x-50n=10?
I understand everything until you change the equation to $1 = 7 - 2(10-7) = 3(7) - 2(10)$. I understand that $21 - 20 = 1$, but why change from $7 - 2(10-7)$ to $3(7) - 2(10)$?
Also, I am acquainted with Euclid's GCD algorithm:
Euclid(a,b)
if b==0
return a
else return Euclid (b, a mod b)
Is there a way to use it without having to trace it?
Is this all solutions for 35x $\equiv$ 10 mod 50? Also, is it correct that there has to be exactly 5 solutions, since the gcd of 35, 50 is 5?
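As a quick brute-force check of the count (a sketch), one can simply enumerate every residue in one full period:

```python
from math import gcd

# Brute-force all residues x in 0..49 for 35*x ≡ 10 (mod 50)
solutions = [x for x in range(50) if (35 * x) % 50 == 10]
print(solutions)                     # [6, 16, 26, 36, 46]
print(len(solutions), gcd(35, 50))   # 5 solutions, matching gcd(35, 50) = 5
```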
Is my solution correct? | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9796676502191266,
"lm_q1q2_score": 0.9095320627463644,
"lm_q2_score": 0.9284087951081424,
"openwebmath_perplexity": 824.4552962959322,
"openwebmath_score": 0.8410573601722717,
"tags": null,
"url": "https://www.physicsforums.com/threads/solutions-to-congruence-modulo-50.663798/"
} |
# Tossing a matchstick randomly onto an infinite grid
I was given this problem recently in a job interview in which I did not succeed; however, I found the problem itself very interesting and have been trying to solve it since.
Question:
Given a matchstick of length 1, and an infinite grid of similar matchsticks (i.e. an infinite grid of 1x1 squares), if you toss the matchstick randomly onto the grid, what is the probability that the matchstick lands overlapping at least one of the matchsticks in the grid?
My approach:
I decided to first define 3 variables:
• $x =$ horizontal distance between lower end of landing matchstick and nearest grid matchstick to the left
• $y =$ vertical distance between lower end of landing matchstick and nearest grid matchstick below
• $\theta$ = landing angle of matchstick with grid horizontal
By definition:
• $x, y$ ~ $U(0, 1)$
• $\theta$ ~ $U(0, \pi)$
• $x, y, \theta$ are all independent of each other
I then split the events of overlapping the horizontal and the vertical matchsticks in the grid, such that:
$$P(Overlap) = P(OverlapsVertical) + P(OverlapsHorizontal) - P(OverlapsVertical \cap OverlapsHorizontal)$$
• $P(OverlapsVertical) = P(y + sin(\theta) >= 1)$
• $P(OverlapsHorizontal) = P(x + cos(\theta) >= 1 \text{ or } x + cos(\theta) <= 0)$ (the second case covers crossing the grid line to the left, since $\cos(\theta)$ is negative for $\theta > \pi/2$)
For overlapping vertical, because both $y$ and $\theta$ are uniformly distributed, I could calculate $P(OverlapsVertical)$ by:
• Constructing the rectangle of possible values for $y$ and $\theta$
• Integrating to find the proportion of area of this rectangle that satisfied the condition $y + sin(\theta) >= 1$
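That area fraction can be sanity-checked with a short Monte Carlo simulation (a sketch):

```python
import math
import random

random.seed(0)
n = 200_000
# Draw (y, theta) uniformly and count how often y + sin(theta) >= 1
hits = sum(
    1
    for _ in range(n)
    if random.uniform(0, 1) + math.sin(random.uniform(0, math.pi)) >= 1
)
print(hits / n, 2 / math.pi)   # estimate vs. exact value 2/pi ≈ 0.6366
```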
Doing the same for the horizontal case, I found that $P(OverlapsVertical) = P(OverlapsHorizontal) = 2/\pi$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9802808724687407,
"lm_q1q2_score": 0.8618107540743348,
"lm_q2_score": 0.8791467611766711,
"openwebmath_perplexity": 346.62857863666494,
"openwebmath_score": 0.7380909323692322,
"tags": null,
"url": "https://math.stackexchange.com/questions/2589445/tossing-a-matchstick-randomly-onto-an-infinite-grid"
} |
accelerometer, gyroscope, matlab, filter
Vectorize. Vectorizing will always speed up your code. Matlab is terrible, terrible at doing any kind of looping. They have algorithms that run vector operations faster than if you looped over it. In C languages there shouldn't be much of a difference, but Matlab isn't C and it doesn't run in real time.
I clocked your code at about 140ms to execute when I made a fake data set of size randn(3723,8) and used that for your code. When I used vector operations I got that down to 8ms, and that includes the loop required to do the lag filter. If you did just the complementary filter, then you can take that loop out and the time drops to 0.8ms. This means that, even though I'm still running a loop, my vectorized version is 16x faster than your unvectorized version. Without the loop it's 160 times faster.
I wanted to harp on vectorizing a little because I had simulations as a grad student that ran 24 hours per day for up to 9 days. Every millisecond matters when you're on that time scale. You might not ever do anything that intense, but you'll almost certainly work on larger datasets at some point. This looks like a minute of data. | {
"domain": "robotics.stackexchange",
"id": 1185,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "accelerometer, gyroscope, matlab, filter",
"url": null
} |
star, the-sun, stellar-evolution
Title: How do astronomers know when the Sun will die? How is it possible for astronomers to find out when the Sun's life will end? Yes, astronomers could be wrong. Part of being a scientist is always having to be ready to admit that you were wrong.
Astronomers have developed models of how the sun and other stars work. We understand them to be nuclear furnaces. These models may not be perfect, but they are well supported by the evidence and can predict much of what we observe from the sun and other stars. Rob's answer gives some examples of the many tests that these models have passed.
Using these models we can expect the sun to exist for about 10 billion years, and it is currently about 4.6 billion years old. While the model could be wrong, it is probably not very wrong, and the sun will go on shining for a long time yet. | {
"domain": "astronomy.stackexchange",
"id": 6284,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "star, the-sun, stellar-evolution",
"url": null
} |
java, beginner
if (firstPlayerResult > secondPlayerResult) {
return 2;
} else if (firstPlayerResult < secondPlayerResult) {
return 1;
} else {
return 0;
}
}
You are exactly duplicating the code for checking the distance for player 1 and player 2. You could move that code into a separate getDistanceToCarpet function, and have the getWhoIsCloser function call that twice.
If I understand your code correctly, you seem to be using so-called taxicab distance, meaning the total distance is equal to the X distance plus the Y distance (this is perfectly valid, as long as you're doing it on purpose). This is convenient, because it lets you separately calculate the X and Y distances and then just add them together.
Let's look at just the X distance for now. We can see that if X is on the carpet, the distance is 0. If it is less than xCarpet, the distance is equal to xCarpet - xPlayer, and if it is more than xCarpet + size - 1, the distance is equal to xPlayer - (xCarpet + size - 1).
And hey, we're doing the same exact thing for Y. We could just copy-paste it and replace all the xes with ys, or we could break it out into another function. The copy-paste approach can be error-prone, so I'd recommend more functions.
Put all that together, it might look a little something like this:
public static int singleAxisDistance(int playerCoordinate, int carpetCoordinate, int carpetSize) {
if (playerCoordinate < carpetCoordinate) {
return carpetCoordinate - playerCoordinate;
} else if (playerCoordinate < carpetCoordinate + carpetSize - 1) {
return 0;
} else {
return playerCoordinate - (carpetCoordinate + carpetSize - 1);
}
} | {
"domain": "codereview.stackexchange",
"id": 40610,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner",
"url": null
} |
elasticity, continuum-mechanics, stress-strain, statics, structural-beam
Some extra details about the Euler–Bernoulli beam.
In static conditions,
the indefinite equilibrium equation for bending of a beam with distributed moment load $m(x)$ reads $M' = m$, where $M$ is the internal bending moment, $M = EJ \theta' = EJ w''$,
shear equilibrium reads $T' = f$, with distributed force $f(x)$
relation between bending moment and shear force reads $M' = T$
so that you can put everything together to get the equation
\begin{equation}
f = T' = M'' = (EJ w'')'' = EJ w'''' \quad \text{(the last step assuming constant } EJ\text{)} \ .
\end{equation} | {
"domain": "physics.stackexchange",
"id": 99078,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "elasticity, continuum-mechanics, stress-strain, statics, structural-beam",
"url": null
} |
c++, programming-challenge, time-limit-exceeded
Title: Hackerrank - "Sherlock and the Beast" I solved the problem, but for two test cases it gives a timeout error: it takes longer than 2 seconds on their servers. I tried using <ctime> to calculate the run time of the program, and on my system it takes between 1.3-1.6 seconds. It only gives the error for TestCases #9 and #10.
The question is to find the largest N-digit "decent" number, where a decent number consists only of the digits '3' and '5', and the number of threes is divisible by 5, and the number of fives is divisible by 3.
#include <iostream>
#include <string>
std::string decent_number(int, std::string);
int main() {
clock_t start = clock();
int numOfCases;
std::cin >> numOfCases;
int n;
for (int i = 0; i < numOfCases; ++i) {
std::string dn("-1");
std::cin >> n;
for (int j = n; j >= 0; --j) {
if (j % 3 == 0 && (n-j) % 5 == 0) {
dn = decent_number(j, "5");
dn += decent_number(n-j, "3");
break;
}
}
std::cout << dn << std::endl;
}
clock_t ends = clock();
std::cout << "Time: " << static_cast<double>(ends - start)/ CLOCKS_PER_SEC
<< std::endl;
return 0;
}
std::string decent_number(int repeatNum, std::string num) {
std::string dn;
for (int i = 0; i < repeatNum; ++i)
dn = dn + num;
return dn;
} | {
"domain": "codereview.stackexchange",
"id": 17529,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, programming-challenge, time-limit-exceeded",
"url": null
} |
python, object-oriented, python-3.x, role-playing-game, abstract-factory
print("Game state:\n")
game_state.show_characters()
orc.attack(human)
game_state.show_characters()
Main()
My question is:
Is this a good design? Or maybe a complete pain in the ... to work with? Is there something that I could improve?
Of course this code is really far from being finished, but I try to find the best approach possible to design things like this so that I can deepen my knowledge in design patterns and stuff like that.
The CharacterFactory class is created so that I'll be able to handle Orc and Human classes in a more abstract way or so.
You try to stay DRY which is very good, but this idea would be better represented with inheritance.
You could create a class Character() and let both Human and Orc, inherit from the super class like so:
class Character():
def __init__(self, health, attack_damage, name):
self.attack_damage = attack_damage
self.health = health
self.name = name
def attack(self, target):
target.health -= self.attack_damage
def __str__(self):
return f"Name: {self.name}\nDamage: {self.attack_damage}\nHealth: {self.health}\n"
class Human(Character):
def __init__(self, name, health=105, attack_damage=45):
super().__init__(health, attack_damage, name)
class Orc(Character):
def __init__(self, name, health=100, attack_damage=50):
super().__init__(health, attack_damage, name)
def main():
orc = Orc("Karcsi")
human = Human("Nojbejtoo")
print(orc)
print(human)
orc.attack(human)
print(human)
if __name__ == "__main__":
main()
Things I changed: | {
"domain": "codereview.stackexchange",
"id": 31723,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, object-oriented, python-3.x, role-playing-game, abstract-factory",
"url": null
} |
ros
Title: get rotation between two frames
Hi,everyone,
I have two frames, one is global frame("map"), another is local frame ("laser"). And I have the transform between them.
I want to publish a Navigation Goal, and the Goal is obtained from the laserscan. The Goal's position is obtained from a laserscan range float ra = scan_msg->ranges[t], and the Goal's orientation is obtained from an angle float angle = scan_msg->angle_min + t * scan_msg->angle_increment; here t is a constant (specified).
Thank you in advance!
edit 11-03
tf::StampedTransform transform;
geometry_msgs::PoseStamped new_goal;
geometry_msgs::PointStamped position_in, position_out;
int id = 50;
try
{
listener->waitForTransform("/map", "/laser", scan_msg->header.stamp, ros::Duration(10.0));
listener->lookupTransform("/map", "/laser", scan_msg->header.stamp, transform);
}
catch (tf::TransformException& ex)
{
ROS_ERROR("Received an exception trying to transform a point from \"laser\" to \"map\": %s", ex.what());
}
float angle = scan_msg->angle_min + id * scan_msg->angle_increment;
pt.x = ra * cos(angle);
pt.y = ra * sin(angle);
position_in.header = scan_msg->header;
position_in.point.x = pt.x;
position_in.point.y = pt.y;
listener->transformPoint("map", position_in, position_out); | {
"domain": "robotics.stackexchange",
"id": 26120,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
ros-kinetic
[ 0%] Built target _industrial_msgs_generate_messages_check_deps_RobotStatus
[ 0%] Built target trajectory_msgs_generate_messages_py
[ 0%] Built target std_msgs_generate_messages_py
[ 0%] Built target _industrial_msgs_generate_messages_check_deps_SetDrivePower
[ 0%] Built target trajectory_msgs_generate_messages_cpp
[ 0%] Built target std_msgs_generate_messages_cpp
[ 0%] Built target _industrial_msgs_generate_messages_check_deps_ServiceReturnCode
[ 0%] Built target trajectory_msgs_generate_messages_eus
[ 0%] Built target std_msgs_generate_messages_eus
[ 0%] Built target trajectory_msgs_generate_messages_lisp
[ 0%] Built target std_msgs_generate_messages_lisp
[ 0%] Built target geometry_msgs_generate_messages_lisp
[ 0%] Built target geometry_msgs_generate_messages_cpp
[ 0%] Built target geometry_msgs_generate_messages_nodejs
[ 0%] Built target geometry_msgs_generate_messages_py
[ 0%] Built target geometry_msgs_generate_messages_eus
[ 1%] Built target simple_message_dummy
[ 1%] Built target _ardrone_autonomy_generate_messages_check_deps_matrix33
[ 1%] Built target _ardrone_autonomy_generate_messages_check_deps_CamSelect
[ 1%] Built target _ardrone_autonomy_generate_messages_check_deps_navdata_gyros_offsets
[ 1%] Built target _ardrone_autonomy_generate_messages_check_deps_navdata_demo
[ 1%] Built target _ardrone_autonomy_generate_messages_check_deps_navdata_phys_measures
[ 1%] Built target _ardrone_autonomy_generate_messages_check_deps_navdata_magneto
[ 1%] Built target _ardrone_autonomy_generate_messages_check_deps_vector31 | {
"domain": "robotics.stackexchange",
"id": 30724,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-kinetic",
"url": null
} |
reinforcement-learning, q-learning
Could this be a sign of the agent not having explored enough, of being stuck in a local minimum?
Exploration could be an issue. The "local minimum" in that case is probably not an issue with the neural network, but that small variations in policy are all worse than the current policy. As you are learning off-policy, then increasing the exploration rate may help find the better states, at the expense of slower overall learning. Also, methods that explore more widely than randomly on each action could be better - e.g. action selection methods that consistently pick unexplored state/action pairs such as Upper Confidence Bound.
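As an illustration of the UCB idea (a sketch — this is the bandit-style UCB1 rule rather than a full RL implementation, and the exploration constant `c` is a hyperparameter you would tune):

```python
import numpy as np

def ucb_action(q_values, counts, c=2.0):
    """UCB1-style choice: value estimate plus an exploration bonus
    that grows for actions tried relatively rarely."""
    counts = np.asarray(counts, dtype=float)
    if np.any(counts == 0):
        # Try every action at least once before trusting estimates
        return int(np.argmax(counts == 0))
    bonus = c * np.sqrt(np.log(counts.sum()) / counts)
    return int(np.argmax(np.asarray(q_values, dtype=float) + bonus))

# A rarely-tried action gets a large bonus and is picked for exploration,
# even though its value estimate is currently lower
print(ucb_action([1.0, 0.9], [1000, 2]))   # 1
```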
Also a possibility is that the structure of your network generalises well under the current policy, but is not able to cover better policies. In that case, whenever the exploration suggests a better policy, the network will also increase estimates of unrelated action choices - so it would try them, notice they are better, then back off as the new values also cause unwanted policy changes in other situations.
If you know a better policy than the one that is being found, then you could plot a learning curve with the policy fixed, see if the network can learn it. However, usually you will not know this, so you may be stuck with trying some variations of neural network architecture or other hyperparameters.
There are other methods than DQN (e.g. A3C, DDPG), as well as many add-ons and adjustments to DQN that you could try (e.g. eligibility traces, double learning). | {
"domain": "datascience.stackexchange",
"id": 3711,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, q-learning",
"url": null
} |
special-relativity, energy, spacetime, momentum, mass-energy
So I think you've touched on some interesting ideas, but it seems like you might benefit from making your ideas a bit more precise. I hope these facts about the standard understanding of relativity help in that effort. :) | {
"domain": "physics.stackexchange",
"id": 22019,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, energy, spacetime, momentum, mass-energy",
"url": null
} |
programming-languages, type-theory, semantics, dependent-types
Title: Resources for implementing dependent type theory I want to implement Martin Löf's intuitionistic type theory in a functional language such as Haskell, preferably also implementing a lexer/parser for the language. How should I start approaching it? Are there any good papers/blog posts/github repo examples that might be useful? Thanks! This paper is my go-to for implementing dependent types. It starts from the basics, uses bidirectional types, and has accompanying code in Haskell.
If you're at all interested in type inference, this paper is great, and also has accompanying Haskell code.
David Christiansen has a tutorial on dependent type checking with bidirectional types, with accompanying Haskell and Racket code. More generally, I've heard great things about his book The Little Typer, though I haven't had a chance to read it myself.
I've heard great things about smalltt by András Kovács, particularly for it being an efficient implementation.
"domain": "cs.stackexchange",
"id": 17924,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming-languages, type-theory, semantics, dependent-types",
"url": null
} |
electromagnetism, maxwell-equations, spacetime-dimensions
$$\Delta_g A_\mu - {R^\nu}_{\rho} A^\rho = 0$$
with $\Delta_g$ the Laplace-Beltrami operator. It is fairly well known that Cauchy-type initial conditions are generally too constraining to solve elliptic equations, hence they do not really correspond to what we typically expect of physics. An interesting treatment of physics in the Riemannian case can be found here :
https://web.archive.org/web/20170318151343/http://www.gregegan.net/ORTHOGONAL/ORTHOGONAL.html
Note that in general, it's always possible to have closed timelike curves in a Riemannian space (you can just turn around), as can be shown by the fact that it is invariant under the rotation group $O(n)$. Hence we can "boost" to the frame $(x,t) \to (-x, -t)$. This makes for pretty bad things.
The same is true for the case $p = n$, $q = 0$, simply by taking the opposite sign.
$p > 1$, $q > 1$
This is the ultrahyperbolic case, with more than one timelike dimension. Much like in the Riemannian case, since any timelike plane forms a Riemannian submanifold, there are always closed timelike curves in it. The ultrahyperbolic wave equation tends to either have no solution or non-unique solutions, and they are in general unstable. A good review on the topic can be found here :
http://rspa.royalsocietypublishing.org/content/465/2110/3023 | {
"domain": "physics.stackexchange",
"id": 41979,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, maxwell-equations, spacetime-dimensions",
"url": null
} |
electromagnetism, resource-recommendations, material-science, radio-frequency
Title: Transparency of gypsum and concrete for RF waves Is there a database (or any other source) of graphs of average transparency of various materials (cardboard, concrete, gypsum etc.) as a function of wavelength? Studies have been done at a few popular frequencies, but in general this is hard to do with RF. You can get a feel for how it was done at 2Ghz and 5Ghz from this article http://www.ko4bb.com/Manuals/05%29_GPS_Timing/E10589_Propagation_Losses_2_and_5GHz.pdf
They also publish tables of their results which you might be able to scale to other frequencies as a starting point. | {
"domain": "physics.stackexchange",
"id": 10426,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, resource-recommendations, material-science, radio-frequency",
"url": null
} |
astronomy, cosmology, cosmic-microwave-background
These effects are sometimes called cosmic variance. Basically, it means that what we observe may not be truly representative of the entire universe.
Also, we're dealing with statistical data, and flukes happen. We're biased at seeing patterns, even though they may not mean anything. For example, if you throw a die 100 times in a row, then all sorts of apparent patterns may occur. For instance, it could contain the sequence 666666. That specific sequence may seem unlikely and significant, but it is just as likely as any other specific sequence, like e.g. 121212 or 635412.
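A quick numerical illustration of that point (a sketch): every specific sequence of six fair-die rolls has the same probability, "special-looking" or not.

```python
from fractions import Fraction

# Probability of any one particular sequence of six fair-die rolls,
# whether it is 666666 or any other specific sequence of faces
p = Fraction(1, 6) ** 6
print(p)        # 1/46656
```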
When can we expect this controversy to be resolved
The Planck team has yet to release the polarisation data (probably next year), which may shed some light on the situation. But I think these issues will entertain the cosmologists for quite a few years. A promising field of research is the mapping of the large-scale distribution of (dark) matter, using gravitational lensing. This would help calculating the Integrated Sachs–Wolfe effect (which is the gravitational redshift/blueshift of CMB photons as they pass through the potential well of galaxy clusters) more accurately.
the Planck probe data are found to be in error
The data is reliable (the Planck data agrees with WMAP), it's all about the interpretation.
the Standard Model must undergo major revision?
It is possible that the explanation lies in a slight deviation from the Standard Model (but no major revision); after all, the Standard Model is an idealisation. Our universe may not be exactly homogeneous or isotropic at large scales. The Planck team actually tried fitting a non-standard model, a so-called Bianchi Model, with mixed results (Planck 2013 results. XXVI. Background geometry and topology of the Universe). Some others get rather carried away, speculating about the influence of 'other universes'.
As a final note, it is important to stress that, in the overall scheme, the impact of these anomalies is very small. The Standard Model fits the CMB almost perfectly, and is in agreement with studies of clusters of galaxies and supernovae. | {
"domain": "physics.stackexchange",
"id": 7995,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "astronomy, cosmology, cosmic-microwave-background",
"url": null
} |
rosbag
Title: How to filter tfs in a bag file
Hi,
I got a bag file that includes a huge number of a certain tf message. I want to filter out this particular message while leaving all other tf messages untouched. How can I filter out just the tf messages with a certain frame_id and child_frame_id?
Something like
rosbag filter source.bag filtered.bag 'topic!="\tf" or m.transforms[0].header.child_frame_id != "odom"'
does not work and generates an error message (as expected)
AttributeError: '_std_msgs__Header' object has no attribute 'child_frame_id'
Thanks for your help
Poseidonius
Originally posted by Poseidonius on ROS Answers with karma: 427 on 2012-12-09
Post score: 0
Try
rosbag filter source.bag filtered.bag 'topic!="\tf" or m.transforms[0].child_frame_id != "odom"
child_frame_id is not in a header. This should fix the error you get.
Originally posted by mmedvede with karma: 221 on 2012-12-09
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Poseidonius on 2012-12-09:
I should not work at night! Stupid error! Thanks to Mikail!
Comment by Hafez Farazi on 2016-08-29:
shouldn't it be :
rosbag filter source.bag filtered.bag 'topic!="/tf" or m.transforms[0].child_frame_id != "odom" | {
"domain": "robotics.stackexchange",
"id": 12028,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rosbag",
"url": null
} |
cryptography, finance, rust
for i in range(0us,mnemonic.binary_hash.len() / 11) {
let bin_idx = mnemonic.binary_hash.slice(i*11,(i+1)*11);
let idx = std::num::from_str_radix::<isize>(bin_idx, 2).unwrap();
mnem_words.push(words.as_slice().words().nth(idx as usize).unwrap()); //check for better way of doing this
}
let str_mnemonic = format!("{:?}",mnem_words);
println!("mnemonic: {}", str_mnemonic);
let key_value = mnemonic.to_seed(str_mnemonic.as_slice(),str_seed); //to_string() on a Vec<&str>?
println!("key: {}",key_value.as_slice().to_hex());
}
lib.rs:
extern crate crypto;
extern crate "rustc-serialize" as rustc_serialize;
use crypto::pbkdf2::{pbkdf2};
use crypto::sha2::{Sha256, Sha512};
use crypto::hmac::Hmac;
use crypto::digest::Digest;
use std::old_io::File;
use rustc_serialize::hex::{FromHex, ToHex};
use std::iter::repeat;
static EMPTY:&'static str = "00000000"; //'
static PBKDF2_ROUNDS:u32 = 2048;
static PBKDF2_KEY_LEN:usize = 64; | {
"domain": "codereview.stackexchange",
"id": 12003,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cryptography, finance, rust",
"url": null
} |
python, algorithm, strings, unit-testing, reinventing-the-wheel
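The tests below exercise an `index_of(haystack, needle)` helper that returns the index of the first occurrence of `needle`, or `None` when absent. A minimal stand-in implementation (my own sketch, not the reviewed code):

```python
def index_of(haystack, needle):
    """Return the index of the first occurrence of needle in haystack,
    or None if needle does not occur. The empty needle matches at 0."""
    for i in range(len(haystack) - len(needle) + 1):
        if haystack[i:i + len(needle)] == needle:
            return i
    return None

print(index_of('abc', 'bc'))   # 1
print(index_of('abc', 'z'))    # None
```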
class Test(unittest.TestCase):
def test_index_of_with_matching_patterns(self):
assert index_of('abc', '') == 0 # all strings contain empty string
assert index_of('abc', 'a') == 0 # single letters are easy
assert index_of('abc', 'b') == 1
assert index_of('abc', 'c') == 2
assert index_of('abc', 'ab') == 0 # multiple letters are harder
assert index_of('abc', 'bc') == 1
assert index_of('abc', 'abc') == 0 # all strings contain themselves
assert index_of('aaa', 'a') == 0 # multiple occurrences
assert index_of('aaa', 'aa') == 0 # overlapping pattern
assert index_of('thisisabsolutelycrazy', 't') == 0
assert index_of('abcdef', 'abcdef') == 0 # all strings contain themselves
def test_index_of_with_non_matching_patterns(self):
# Negative test cases (counterexamples) with non-matching patterns
assert index_of('abc', 'z') is None # remember to test other letters
assert index_of('abc', 'ac') is None # important to test close cases
assert index_of('abc', 'az') is None # first letter, but not last
assert index_of('abc', 'abz') is None # first 2 letters, but not last | {
"domain": "codereview.stackexchange",
"id": 29850,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, algorithm, strings, unit-testing, reinventing-the-wheel",
"url": null
} |
special-relativity, differential-geometry, tensor-calculus
Title: The integral over a four-dimensional closed curve In Landau-Lifschitz (Volume 2)
The integral over a four-dimensional closed curve is transformed into an integral over the surface spanning it by the substitution:
$$
dx^i \rightarrow df^{ki}\frac{\partial}{\partial x^k}.\tag{6.18}
$$
Thus for the integral of a vector, we have:
$$
\oint A_i dx^i=\int df^{ki}\frac{\partial A_i}{\partial x^k}
=\frac{1}{2}\int df^{ik}\left(\frac{\partial A_k}{\partial x^i}-\frac{\partial A_i}{\partial x^k}\right).\tag{6.19}
$$
which is a generalization of Stokes' theorem.
How is the antisymmetric tensor created at the end? / How is the last part obtained? The answer is that the two-form $df^{ik}$ is antisymmetric, since $ df^{ik} = dx^i dx'^k - dx^k dx'^i$ (in their notation), so $df^{ik} = - df^{ki}$. Your identity follows after using this and relabeling indices. | {
"domain": "physics.stackexchange",
"id": 63657,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, differential-geometry, tensor-calculus",
"url": null
} |
organic-chemistry, mixtures
Title: Would Oxygen Gas and Ozone be a pure substance together? If I have oxygen gas and ozone ($\ce{O2 + O3}$) together would it be considered a pure substance or a mixture?
And would pure substances always have the same molecular structure? Ozone is highly reactive and unstable, while dioxygen is stable. They do not combine to form a compound. So, clearly it is a mixture.
To answer the second part of the question, "And would pure substances always have the same molecular structure?", first a Wikipedia definition on substances, to quote:
A chemical substance is a form of matter having constant chemical composition and characteristic properties.[1][2]...
Chemical substances can be simple substances[4], chemical compounds, or alloys. Chemical elements may or may not be included in the definition, depending on expert viewpoint.[4]
Chemical substances are often called 'pure' to set them apart from mixtures. A common example of a chemical substance is pure water...
However, in practice, no substance is entirely pure, and chemical purity is specified according to the intended use of the chemical.
And further:
A chemical substance may well be defined as "any material with a definite chemical composition" in an introductory general chemistry textbook.[5] According to this definition a chemical substance can either be a pure chemical element or a pure chemical compound. But, there are exceptions to this definition; a pure substance can also be defined as a form of matter that has both definite composition and distinct properties.[6] The chemical substance index published by CAS also includes several alloys of uncertain composition.[7] Non-stoichiometric compounds are a special case (in inorganic chemistry) that violates the law of constant composition, and for them, it is sometimes difficult to draw the line between a mixture and a compound, as in the case of palladium hydride. Broader definitions of chemicals or chemical substances can be found, for example: "the term 'chemical substance' means any organic or inorganic substance of a particular molecular identity, including – (i) any combination of such substances occurring in whole or in part as a result of a chemical reaction or occurring in nature".[8] | {
"domain": "chemistry.stackexchange",
"id": 14527,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, mixtures",
"url": null
} |
• Your argument that $\bigl(f\rvert_U\bigr)^{-1}$ is continuous doesn't really follow from the lemma. The lemma gives you the continuity of $\bigl(f^{-1}\bigr)\rvert_{f[U]}$, and then the observation that $\bigl(f\rvert_U\bigr)^{-1} = \bigl(f^{-1}\bigr)\rvert_{f[U]}$ takes care of the rest. And yes, one doesn't need that $U$ is open to conclude that $f$ induces a homeomorphism $U \to f[U]$. Jun 13, 2018 at 20:32 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9811668668053619,
"lm_q1q2_score": 0.8300106888062836,
"lm_q2_score": 0.8459424353665381,
"openwebmath_perplexity": 52.3789988774898,
"openwebmath_score": 0.9732982516288757,
"tags": null,
"url": "https://math.stackexchange.com/questions/2818706/suppose-that-f-x-to-y-is-a-homeomorphism-and-u-is-an-open-subset-of-x"
} |
electrostatics, rotational-dynamics, potential, integration, dipole
Title: Why is the external torque not taken as -$pE\sin\theta$ in the derivation of the Potential energy due to an external field So in the derivation of the Potential energy of a dipole due to an external field, we consider a dipole with charges $q_1= +q$ and $q_2= -q$ placed in a uniform external electric field. We know that in a uniform electric field, the dipole experiences no net force but it experiences a torque $\tau$ given by
$$\vec\tau = \vec p\times\vec E$$
which will tend to rotate it. Now here's the tricky part. Suppose an external torque $\tau_{ext}$ is applied in such a manner that it just neutralises this torque and rotates the dipole in the plane of the paper from angle $\theta_0$ to angle $\theta_1$ at an infinitesimal angular speed and without angular acceleration. The amount of work done by the external torque is
$$W= \int_{\theta_0}^{\theta_1} \tau\left(\theta\right)d\theta =\int_{\theta_0}^{\theta_1} pE\sin\theta \,d\theta$$
$$W=pE\left(\cos\theta_0 -\cos\theta_1 \right)$$
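As a quick numerical sanity check of this closed form (my own sketch, not part of the original derivation), the torque can be integrated with a simple midpoint rule:

```python
import math

def work_external(p, E, theta0, theta1, steps=100_000):
    """Midpoint-rule integral of tau(theta) = p*E*sin(theta) from theta0 to theta1."""
    h = (theta1 - theta0) / steps
    return sum(p * E * math.sin(theta0 + (i + 0.5) * h) * h for i in range(steps))

# closed form: W = p*E*(cos(theta0) - cos(theta1))
p, E = 2.0, 3.0
numeric = work_external(p, E, math.pi / 6, math.pi / 2)
exact = p * E * (math.cos(math.pi / 6) - math.cos(math.pi / 2))
```

The two values agree to well within the quadrature error of the midpoint rule.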
if $\theta_0=\pi/2$ and $\theta_1=\theta$, then
$$W=-pE\cos\theta$$ | {
"domain": "physics.stackexchange",
"id": 79198,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, rotational-dynamics, potential, integration, dipole",
"url": null
} |
Excluding the empty set and the singleton subsets of $A$, there can be at most $\binom{n}{2}$ subsets of $A$ in $\mathscr{F}$ containing more than one element. This follows by matching unordered pairs in $A$ with sets in $\mathscr{F}$ containing such pairs.
Thus $|\mathscr{F}| \le 1 + n + n(n-1)/2$ and the bound is sharp.
-
1. To say the same thing in different words: consider all the sets in $\mathcal{F}$ of size at least $2$. From each such set we pick two elements, giving an unordered pair $\{a,b\}$ of elements from $A$. All these unordered pairs must be distinct (for the sets' intersection to have size at most $1$), and there are only $\binom{n}{2}$ distinct unordered pairs possible. So $\mathcal{F}$ can have at most $\binom{n}{2}$ sets of size at least $2$. It can also have the empty set and all the $n$ singleton sets, so $|\mathcal{F}| \le 1 + n + \binom{n}{2}$. And we can achieve this bound in an obvious way. – ShreevatsaR Dec 6 '12 at 19:38
Fix $m\in\Bbb N$ with $m\le n$. If $\mathscr{F}\subseteq\wp(A)$ has the property that $|X\cap Y|<m$ for distinct $X,Y\in\mathscr{F}$, then $$|\mathscr{F}|\le\sum_{k=0}^m\binom{n}{k}\;.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232950125848,
"lm_q1q2_score": 0.8058102786119307,
"lm_q2_score": 0.8198933447152497,
"openwebmath_perplexity": 291.8740334124839,
"openwebmath_score": 0.8793835639953613,
"tags": null,
"url": "http://math.stackexchange.com/questions/252431/upper-bound-for-the-size-of-a-maximal-collection-of-subsets-which-pairwise-hav"
} |
observational-astronomy, solar-system, data-analysis, fundamental-astronomy
Tracking at a single-frequency band in the two-way mode has been assumed for each case. Dual-frequency downlinks, which are available from some spacecraft, can be used to reduce the effects of the ionosphere and solar plasma. For example, solar plasma delays exceeding 200 m in S-band Viking Lander range measurements were calibrated to about 8-m accuracy using dual S and X downlinks from the Viking orbiters [86,87]. Today, spacecraft operate primarily with an X-band uplink and downlink. Plasma effects for an X-band two-way link are reduced by a factor of 13 when compared to an S-band link. Future use of Ka-band two-way links would reduce this effect by an additional factor of 14.
For the current system, the random error of 0.03 mm/s for an X-band Doppler measurement made over 60 s is due primarily to fluctuations in solar plasma density along the line of sight. This value varies with proximity of the ray path to the Sun and with the solar cycle. The random error for a range measurement is due primarily to thermal noise. | {
"domain": "astronomy.stackexchange",
"id": 5723,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "observational-astronomy, solar-system, data-analysis, fundamental-astronomy",
"url": null
} |
special-relativity, velocity
\tag{02c}\label{02c}
\end{align}
Note that $u_1,u_2$ are not the positive magnitudes of $\:\mathbf{u}_1,\mathbf{u}_2$. They are real numbers, that is they can have negative values.
The derived equation
\begin{equation}
\mathbf{u} \boldsymbol{=}\dfrac{\mathbf{u}_2\boldsymbol{+}\dfrac{\gamma^2_{1}\left(\mathbf{u}_1\boldsymbol{\cdot}\mathbf{u}_2\right)}{c^2 \left(\gamma_{1}\boldsymbol{+}1\right)}\mathbf{u}_1\boldsymbol{+}\gamma_1 \mathbf{u}_1}{ \gamma_1\left(1\boldsymbol{+}\dfrac{\mathbf{u}_1\boldsymbol{\cdot}\mathbf{u}_2}{c^{2}}\right)}
\tag{03}\label{03}
\end{equation}
besides being the transformation law for 3-velocities, is the law of relativistic addition of 3-velocities; more exactly, it is the relativistic sum of $\:\mathbf{u}_1,\mathbf{u}_2$.
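A numerical sketch (my own, in units with $c=1$) confirms that equation (03) reduces to the familiar collinear formula $(u_1+u_2)/(1+u_1 u_2/c^2)$:

```python
import numpy as np

c = 1.0  # work in units with c = 1

def gamma(u):
    """Lorentz factor of a 3-velocity given as a length-3 array."""
    return 1.0 / np.sqrt(1.0 - np.dot(u, u) / c**2)

def rel_add(u1, u2):
    """Relativistic sum of 3-velocities u1, u2 as in eq. (03)."""
    g1 = gamma(u1)
    dot = np.dot(u1, u2)
    num = u2 + (g1**2 * dot / (c**2 * (g1 + 1.0))) * u1 + g1 * u1
    return num / (g1 * (1.0 + dot / c**2))

u1 = np.array([0.6, 0.0, 0.0])
u2 = np.array([0.5, 0.0, 0.0])
u = rel_add(u1, u2)  # collinear case: (0.6 + 0.5) / (1 + 0.3)
```

The resulting $\gamma$-factor of `u` also matches the product formula quoted below for $\gamma$.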
For the $\gamma-$factors we have
\begin{equation}
\gamma \boldsymbol{=}\gamma_{1}\gamma_{2}\left(1\boldsymbol{+}\dfrac{\mathbf{u}_1\boldsymbol{\cdot}\mathbf{u}_2}{c^2}\right)
\tag{04}\label{04}
\end{equation}
which from the definition of rapidities
\begin{equation} | {
"domain": "physics.stackexchange",
"id": 72392,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, velocity",
"url": null
} |
polarity, adsorption
Title: Thin Layer Chromatography & Stationary Phase
There are two types of stationary phases used in thin layer chromatography. One stationary phase is made of aluminium oxide. Which of the following molecules would be expected to have the smallest $R_f$ using aluminium oxide as the stationary phase on the glass slide?
(a) $\ce{CH3CH2CH2CH2CH2COOH}$
(b) $\ce{CH3CH2CH2CH2CH2OH}$
(c) $\ce{CH3CH2CH2CH2NH2}$
(d) $\ce{CH3CH2CH2CH2CH3}$
$\ce{Al2O3}$ is a non-polar stationary phase, therefore if the molecule is also non-polar it would display high adsorption to the stationary phase, and therefore a smaller $R_f$ value.
The answer is (a).
Isn't (a) polar?
Wouldn't this increase the $R_f$ value, because it doesn't adsorb to the non-polar stationary phase? Can someone please explain, correct me? Aluminum Oxide is more polar than Silica Gel and polar compounds do not migrate far from the starting point.
Thin Layer Chromatographic Analyses
7. Thin-Layer Chromatography | {
"domain": "chemistry.stackexchange",
"id": 1086,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "polarity, adsorption",
"url": null
} |
javascript, object-oriented, cache
Title: LRU Cache in ECMAScript I wrote this for a CodeWars challenge while trying to learn ECMAScript and would really like to have some advice on how it could be improved.
What I don't like about this code myself, but am unsure on how to improve:
I haven't yet found a way to have a class in ECMAScript that mixes public and private methods in a way where the public methods (and only they) can still access private methods and vars.
(Partly caused by the point above) I think the code is way too verbose and bloated. Too much of it is wasted on language specific implementation details. There must be a smarter way to do this - I feel like I could write the same functionality in PHP using 10% of the lines I needed now.
function LRUCache(capacity, init) {
this._capacity = capacity;
this._size = 0;
this._accessMap = [];
this._items = [];
this._frozenMethods = ['cache', 'delete'];
this._functionalProps = ['size', 'capacity'];
this.cache = function(key, value) {
if( this._accessMap.indexOf(key) == -1 ) {
this._addNewItem(key);
}
this._items[key] = this[key] = value;
this._updateAccessMap(key);
return this;
},
this.delete = function(key) {
if(this._frozenMethods.indexOf(key) !== -1 ||
this._functionalProps.indexOf(key) !== -1) {
return false;
}
if( this._accessMap.indexOf(key) === -1 ) {
return true;
}
delete this[key];
delete this._items[key];
this._size--;
this._updateAccessMap(key, true);
return true;
},
this._removeOldest = function() {
var key = this._accessMap[0];
this.delete(key);
},
this._updateAccessMap = function(key, remove) { | {
"domain": "codereview.stackexchange",
"id": 15556,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, object-oriented, cache",
"url": null
} |
homework-and-exercises, optics, reflection
Title: Determining distance from object in concave lens so I was studying for my Yr11 Physics test tomorrow and I came across a question that I got the wrong answer on; all of my friends are getting the same answer.
Question: You wish to project the image of a lamp, magnified three times, onto a screen 5.0m from the lamp. How far from the lamp should the mirror be placed?
I thought that we would just use $M=\frac{-d_i}{d_o}$ where $M=3$ and $d_i=5$, which will work out that $d_o\approx -1.7$ which has a magnitude of 1.7m from the lamp
However, our answers book says that the answer is 2.5m
What have I done wrong? Also, please keep in mind that I am in year 11 and may not understand anything beyond that level. Your approach is correct, but you really need to draw even a crude sketch.
The lamp is $x$ m from the mirror; the image is $5$ m from the lamp, which puts it $(5+x)$ from the mirror. These are the d values for your magnification formula, which is correct... | {
"domain": "physics.stackexchange",
"id": 10596,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, optics, reflection",
"url": null
} |
electromagnetism, lagrangian-formalism, variational-principle, action, point-particles
There is the standard textbook Lagrangian for $N$ charged particles:
$$
L(A^\nu,\partial_\mu A^\nu,\{\mathbf r_a,\mathbf v_a\}_{a=1}^{N}) = \int_V -\frac{1}{4\mu_0}F^{\mu\nu}F_{\mu\nu} d^3 \mathbf x~~~+$$
$$- ~~\sum_a q_a \varphi(\mathbf r_a,t) + \sum_a q_a\mathbf v_a \cdot \mathbf A(\mathbf r_a,t) ~~~+
$$
$$
-~~~\sum_a \sqrt{1-v_a^2/c^2}m_a c^2,
$$
where $\varphi,\mathbf A, F$ refer to total EM field. Thus this is a function of $N$ positions, $N$ velocities, and a functional of the fields $\varphi,\mathbf A$.
This Lagrangian is in textbooks used to "derive" (e.g. in Landau&Lifshitz) both the Maxwell equations for total fields in presence of the current density $\sum_a q_a\mathbf v_a \delta(\mathbf x - \mathbf r_a)$ and charge density $\sum_a q_a\delta(\mathbf x - \mathbf r_a)$, and also to "derive" the equations of motion for all the particles, with each particle experiencing the Lorentz force $q_a\mathbf E(\mathbf r_a,t) + q_a\mathbf v_a\times \mathbf B(\mathbf r_a,t)$. | {
"domain": "physics.stackexchange",
"id": 98844,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, lagrangian-formalism, variational-principle, action, point-particles",
"url": null
} |
regression, prediction, sequence
What would be a good way to approach this problem? Are there problems that are similar?
Edited for clarity. Your biggest issue with the evaluation scheme you have - "success" means within tolerance, "failure" means outside tolerance, plus your constraint on model outputs needing to vary per time step - is that it will be hard to extract gradients in order to train the prediction model directly. This rules out many simple and direct regression models, at least if you want to use "maximise number of scores within tolerance" as your objective function. The constraints on sequential predictions and allowing re-tries are also non-differentiable if taken as-is.
I think you have two top level choices:
1. Soften the loss function, and add the hard function as a metric
Use a differentiable loss function that has best score when predictions are accurate and constraints are met. For example your loss function for a single predicted value could be
$$L(\hat{x}_n, \hat{x}_{n+1}, x_{n+1}) = (\hat{x}_{n+1} - x_{n+1})^2 + \frac{a}{1+e^{s(|\hat{x}_n - \hat{x}_{n+1}| - \epsilon)}}$$
the second constraint part is essentially sigmoid with $a$ controlling the relative weight of meeting constraints with accuracy of the prediction and $s$ controlling the steepness of cutoff around the constraint.
a. The weighting between prediction loss and constraint loss will be a hyper-parameter of the model. So you would need to include $a$ and $s$ amongst parameters to search if you used my suggested loss function.
b. You can use your scoring system, not as an objective function, but as a metric to select the best model on a hyper-parameter search.
c. With this approach you can use many standard sequence learning models, such as LSTM (if you have enough data). Or you could just use a single-step prediction model that you feed the current prediction plus any other features of the sequence that it is allowed to know, and generate sequences from it by calling it repeatedly.
This system should encourage re-tries that get closer to the true value.
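As a concrete sketch of the softened loss above (the parameter names and default values here are illustrative, not from the question):

```python
import numpy as np

def soft_loss(x_hat_n, x_hat_next, x_next, a=1.0, s=10.0, eps=0.1):
    """Squared prediction error plus a sigmoid penalty that activates
    when consecutive predictions differ by less than eps."""
    prediction = (x_hat_next - x_next) ** 2
    constraint = a / (1.0 + np.exp(s * (abs(x_hat_n - x_hat_next) - eps)))
    return prediction + constraint
```

The constraint term approaches $a$ when successive predictions are (too) close, and vanishes once they differ by well over $\epsilon$, which is exactly the differentiable surrogate for the hard per-step variation constraint.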
2. Use your scoring system directly as a learning goal | {
"domain": "datascience.stackexchange",
"id": 5388,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "regression, prediction, sequence",
"url": null
} |
# Sundry Coset Results
## Theorems
Let $G$ be a group and let $H$ be a subgroup of $G$.
Let $x, y \in G$.
Let:
$x H$ denote the left coset of $H$ by $x$;
$H y$ denote the right coset of $H$ by $y$.
Then the following results apply:
### Element in Coset iff Product with Inverse in Subgroup
#### Element in Left Coset iff Product with Inverse in Subgroup
Let $y H$ denote the left coset of $H$ by $y$.
Then:
$x \in y H \iff x^{-1} y \in H$
#### Element in Right Coset iff Product with Inverse in Subgroup
Let $H \circ y$ denote the right coset of $H$ by $y$.
Then:
$x \in H y \iff x y^{-1} \in H$
### Cosets are Equal iff Product with Inverse in Subgroup
#### Left Cosets are Equal iff Product with Inverse in Subgroup
Let $x H$ denote the left coset of $H$ by $x$.
Then:
$x H = y H \iff x^{-1} y \in H$
#### Right Cosets are Equal iff Product with Inverse in Subgroup
Let $H x$ denote the right coset of $H$ by $x$.
Then:
$H x = H y \iff x y^{-1} \in H$
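These equivalences are easy to spot-check computationally. The sketch below (my own illustration, not part of the source page) takes $G = S_3$ with $H$ the subgroup fixing the point $2$, and verifies both coset-equality criteria exhaustively:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))   # the symmetric group S3
H = [p for p in G if p[2] == 2]    # subgroup of permutations fixing 2

def left_coset(x):
    return frozenset(compose(x, h) for h in H)

def right_coset(x):
    return frozenset(compose(h, x) for h in H)

for x in G:
    for y in G:
        assert (left_coset(x) == left_coset(y)) == (compose(inverse(x), y) in H)
        assert (right_coset(x) == right_coset(y)) == (compose(x, inverse(y)) in H)
```

With $|G|=6$ and $|H|=2$ there are exactly three distinct left cosets, as Lagrange's theorem predicts.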
### Cosets are Equal iff Element in Other Coset
#### Left Cosets are Equal iff Element in Other Left Coset
Let $x H$ denote the left coset of $H$ by $x$.
Then:
$x H = y H \iff x \in y H$
#### Right Cosets are Equal iff Element in Other Right Coset
Let $H x$ denote the right coset of $H$ by $x$.
Then:
$H x = H y \iff x \in H y$
### Coset Equals Subgroup iff Element in Subgroup
#### Left Coset Equals Subgroup iff Element in Subgroup | {
"domain": "proofwiki.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9935117315560086,
"lm_q1q2_score": 0.825750364493007,
"lm_q2_score": 0.8311430436757312,
"openwebmath_perplexity": 139.08099428570452,
"openwebmath_score": 0.9981957077980042,
"tags": null,
"url": "https://www.proofwiki.org/wiki/Sundry_Coset_Results"
} |
ngs, long-reads, pacbio
Title: What is a PacBio "movie file"? I came across references to "movie files" from PacBio sequencing in this paper:
https://www.jimmunol.org/content/204/12/3434
Specifically:
Movie files used to generate results presented in this article have been submitted to the National Center for Biotechnology Information BioProject (https://www.ncbi.nlm.nih.gov/bioproject/) under accession number PRJNA389440.
Looking at the SRA entries, it looks like it's just the raw reads as I expected. But what is meant by "movie files"? (This is probably just my ignorance about PacBio as opposed to Illumina or others but I've never seen that term before and can't find much online.) [disclaimer: not a PacBio person, I'm just vaguely aware of the technology]
Movie files are the PacBio raw data format, representing observed signal intensities in each sequencing well over the course of the sequencing run. The movie files can be used to re-call sequences with updated statistical models, which can improve accuracy, or test out hypotheses about how epigenetics alters the rate of nucleotide incorporation (i.e. methylation).
Illumina has a similar static version of this which is an image (or multiple images, depending on chemistry) of the flow cell after each synthesis cycle. Illumina's raw data format consumes tens of terabytes per run, and is of minimal value after base calling, so is almost always discarded after base calling.
Anything uploaded to NCBI SRA must contain FASTQ information. While the original uploaded files can be kept (which may include time/signal information), the files are converted into a FASTQ-like format for NCBI's internal representation. As such, there's a chance that movie files could be uploaded to SRA, but those files would also need to include the called FASTQ information (e.g. in a HDF5 container). In contrast to this, ENA does allow uploading raw data files without called information. | {
"domain": "bioinformatics.stackexchange",
"id": 1745,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ngs, long-reads, pacbio",
"url": null
} |
newtonian-gravity, potential, solar-system, celestial-mechanics
So, in math terms, $U=-\int_{\infty}^{r} -G\frac{Mm}{r^{\prime2}} dr^{\prime}=-G\frac{Mm}{r}$. This result is for one planet in relation to the sun, calculated assuming one dimension of force (which is valid for only 2 bodies). If you have more bodies that you want to account for (like other planets), then you need to complicate that expression to higher dimensions and add all the terms from the gravitational forces due to every object with the proper vectors intact. Luckily, you can use superposition of forces for that so it's not all that messy, but would be a little much here (I think). In practical terms, the gravitational force from the star at the center of a solar system far outweighs that of the individual planets, so you can neglect those terms and come up with a very good approximation of the total gravitational potential. | {
"domain": "physics.stackexchange",
"id": 27383,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-gravity, potential, solar-system, celestial-mechanics",
"url": null
} |
c++, algorithm, stack
cout << "5. EXIT" << endl;
cout << "6. Print" << endl;
cout << "Enter the choice"<<endl;
cin >> ch;
switch (ch)
{
case 1:
cout << "Enter the number to be pushed" << endl;
cin >> num;
s1.push(num);
break;
case 2:
cout << "Get the TOP Element" << endl;
s1.topElement();
break;
case 3:
cout << "Check Empty" << endl;
s1.isEmpty();
break;
case 4:
cout << "POP the element" << endl;
s1.pop();
break;
case 5: exit(0);
case 6:
s1.print();
break;
}
}
system("pause");
} This review is in response to the request for help on "overall coding practice." It does not delve into the syntax or semantics of the C++ language. There are three levels: Computer Science, Architecture, Variable Names.
Computer Science
Technically the implementation is not a stack because:
myStack.push(4);
myStack.pop(); | {
"domain": "codereview.stackexchange",
"id": 11396,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, stack",
"url": null
} |
ros, bagfile, ros-kinetic
Original comments
Comment by gvdhoorn on 2019-12-27:
I would actually add that even when not using TF static, what the OP observes is very much possible when using bag files and not playing them from the start.
TF is a distributed state system, where all participants in the nodegraph maintain their own local "view" on the tree. If a participant does not receive certain messages, it will not "know" about those transforms. When visualising the tree from the point-of-view of such a participant, you'll see missing edges and nodes (ie: transforms and frames).
TF static does not help here, because transforms on that topic are only broadcast once (or at least, it uses latched publishers), but incomplete views/partial state is inherent to the way TF works (but it's also not a problem and can even be used as an advantage).
Comment by Orhan on 2019-12-27:
Thanks. It's more clear now.
Comment by warriorUSP on 2019-12-27:
Thanks a lot for the clarification of latched topics and the working of /tf_static. But I would like to ask: if I want to use some ros package which requires those frames (here, frames below /realsense_link), and want to use the bagfile from the middle, then what would be the ideal way to go?
Should I store those static transforms from the start in some node and publish from there? Then isn't this feature limiting the use of the bagfile, which was possible earlier (though at the expense of computation)?
Comment by Orhan on 2019-12-27: | {
"domain": "robotics.stackexchange",
"id": 34198,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, bagfile, ros-kinetic",
"url": null
} |
python
return correct, letters_remaining
def prompt_for_guess(guessed_letters: list) -> str:
"""
Prompts the user for their next guess. Rejects guesses that are more than a single letter, and guesses which were
already made previously. Returns the (validated) guess.
:param guessed_letters: the list of previously guessed letters
:return: the user's next guess
"""
guess = input("Your guess? ").strip().upper()
if len(guess) > 1:
print("Sorry, you can only guess one letter at a time.")
return prompt_for_guess(guessed_letters)
elif guess in guessed_letters:
print("Sorry, you already guessed that letter.")
return prompt_for_guess(guessed_letters)
return guess
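One observation a reviewer might make (this variant is my own sketch, not part of the original submission): the recursive retry can be replaced by a loop, which avoids growing the call stack on repeated bad input while keeping the same validation rules:

```python
def prompt_for_guess_iterative(guessed_letters: list) -> str:
    """Loop-based variant of prompt_for_guess with identical validation."""
    while True:
        guess = input("Your guess? ").strip().upper()
        if len(guess) > 1:
            print("Sorry, you can only guess one letter at a time.")
        elif guess in guessed_letters:
            print("Sorry, you already guessed that letter.")
        else:
            return guess
```

The behaviour is unchanged: multi-character input and repeated guesses are rejected and the prompt repeats.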
def draw_hangman(number_of_incorrect_guesses: int) -> None:
"""
Draws the appropriate hangman stage, given the number of incorrect guesses. 0 or fewer will draw the empty scaffold.
6 or more will draw the fully hanged man.
:param number_of_incorrect_guesses: the number of incorrect guesses the player has made in the current game
:return: Nothing
"""
if number_of_incorrect_guesses < 0:
number_of_incorrect_guesses = 0
if number_of_incorrect_guesses > 6:
number_of_incorrect_guesses = 6
print(HANGMAN_STAGES[number_of_incorrect_guesses])
def draw_secret_word(secret_word: str, guessed_letters: list) -> None:
"""
Prints the secret word, with underscores representing unknown letters and with any correctly-guessed letters printed
in the appropriate location within the word.
:param secret_word: The secret word
:param guessed_letters: All previous guesses
:return: Nothing
"""
for letter in secret_word:
to_print = letter if letter in guessed_letters else '_'
print(to_print, end=' ')
print("\n") | {
"domain": "codereview.stackexchange",
"id": 36990,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
I hope this helps.
For me, I am remembering just a few formula, and even then most of those are from derivatives. So $$(\sin x)'=\cos x$$ and $$(\cos x)'=-\sin x$$. This allows me to put an integral sign before those and get the formula for integrals. For tangent I use integration by parts. For integrals of rational functions, I know that I need to split into fractions, where the polynomials at the numerator are maximum second order polynomials in $$x$$ or are the type $$x^n$$. Then I complete the square. If I get something like $$\int\frac{ax+b}{(ax+b)^2+c^2}dx$$ then I can change variables and get $$\ln$$. If I get $$\int\frac 1{1+x^2}dx$$ then I know that it's $$\arctan$$. Everything else I can derive | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9859363746096915,
"lm_q1q2_score": 0.8438147806714495,
"lm_q2_score": 0.8558511506439708,
"openwebmath_perplexity": 263.79652839248786,
"openwebmath_score": 0.9035236239433289,
"tags": null,
"url": "https://math.stackexchange.com/questions/3835081/a-nice-way-to-remember-trigonometric-integrals"
} |
php, security, mysqli
header('Location:/FvWithCaptcha/thankyou.php');
}
elseif ($result == FALSE || $result == NULL || $securimage->check($_POST['captcha_code']) == false) {
echo "Error in registration";
}
mysqli_close($conn);
?>
Any tutorials are also appreciated!
register_validation.php
<?php
$unameErr = $nameErr = $fnameErr = $emailErr = $passErr = $mnumErr = $sexErr = $intrsErr = "" ;
$uname =$name = $fname = $email = $pass = $mnum = $sex = $intrs = "" ;
if ($_SERVER["REQUEST_METHOD"] == "POST") {
if(empty($_POST["uname"])) {
$unameErr = "User Name Is Required";
}
else
{
$uname = test_input($_POST["uname"]);
}
if(empty($_POST["name"])) {
$nameErr = "Name Is Required";
}
else
{
$name = test_input($_POST["name"]);
//Condition Check
if(!preg_match("/^[a-zA-Z ]*$/", $name)){
$nameErr = "Only Letters are allowed";
}
} | {
"domain": "codereview.stackexchange",
"id": 13930,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, security, mysqli",
"url": null
} |
quantum-mechanics, scattering-cross-section
$$j(\mathbf r)=-\frac{ie\hbar}{2m}\left(\psi^*(\nabla \psi)-(\nabla \psi^*)\, \psi \right)$$
In order to write the probability current as an operator that acts on $\psi$, I can use the expression given by Charles Francis.
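For a concrete sanity check (my own sketch, in natural units), the current for a plane wave $\psi = e^{ikx}$ should come out constant at $e\hbar k/m$:

```python
import numpy as np

hbar = m = e = 1.0  # natural units, just for the check
k = 2.0
x = np.linspace(0.0, 10.0, 200_001)
psi = np.exp(1j * k * x)

dpsi = np.gradient(psi, x)
# j = -(i e hbar / 2m) * (psi* grad(psi) - grad(psi*) psi)
j = (-1j * e * hbar / (2 * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))
```

Away from the grid edges, `j.real` sits at $e\hbar k/m = 2$ to the accuracy of the finite-difference gradient, and the imaginary part vanishes identically since the bracket is $z-\bar z$.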
All this is ok, but I still have confusion to replace the H operator just by the kinetic energy and that makes the case for free particle but what about a general Hamiltonian contains other types of energy? | {
"domain": "physics.stackexchange",
"id": 66419,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, scattering-cross-section",
"url": null
} |
python, array, bitwise, bit-twiddling
What if somebody uses a key like lambda item: int(item) + 5? Because you convert everything to bool, you won't detect the difference between 5 and 6. And you won't sort the same way as a python list.
You also call key a lot, but since there are only two values, True and False, perhaps you should only call it twice and store the values.
while lo < hi:
if bool(key(self[hi])) is reverse:
break
hi -= 1
else:
break
self[lo], self[hi] = self[hi], self[lo]
This swap operation is actually going to be fairly expensive. You have to jump into another function, do a division, bit twiddling, etc and you have to do it four times for the swap.
lo += 1
I think the whole method could be faster by building a new __data from scratch.
one_bytes, one_bits = divmod(total_ones, 8)
zero_bytes, zero_bits = divmod(total_zeros, 8)
self.__data = [0] * zero_bytes + [255] * one_bytes
# handling the leftover bits is a bit tricky, but hopefully you get the idea | {
"domain": "codereview.stackexchange",
"id": 2052,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, array, bitwise, bit-twiddling",
"url": null
} |
meteorology, atmosphere
Title: Why do cities at high altitude regions have high atmospheric pressures? Why do cities in high-altitude regions, such as Lhasa at 11,995 ft (3,656 m), have high atmospheric pressures such as 30.11 inHg (101.97 kPa) in the weather report? The reason is that the reported value is the pressure reduced to sea level, not the absolute station pressure. If meteorological stations reported the absolute pressure, which depends on elevation, it would be really confusing. So air pressure is always adjusted to sea level.
This is also why your home barometer has to be adjusted to show the correct pressure depending on the altitude where you are.
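The adjustment itself is a standard reduction of station pressure to sea level. A rough sketch using the common barometric formula (the 0.0065 K/m lapse rate and the exponent 5.257 are standard-atmosphere values; the function name is illustrative):

```python
def sea_level_pressure(station_kpa, elevation_m, temp_c):
    """Reduce a station pressure reading to its sea-level equivalent
    using the standard-atmosphere barometric formula."""
    lapse = 0.0065  # temperature lapse rate, K per metre
    t_kelvin = temp_c + lapse * elevation_m + 273.15
    return station_kpa * (1 - lapse * elevation_m / t_kelvin) ** -5.257

# A station at 150 m reading 100.0 kPa at 15 degrees C reports roughly 101.8 kPa.
print(round(sea_level_pressure(100.0, 150, 15), 2))
```

This is why two stations at very different elevations (like Bodalen and Drammen in the linked forecast) can show the same reported pressure.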
Here is a link to the weather close to where I live; it is at 150 m elevation:
https://www.yr.no/place/Norway/Buskerud/R%c3%b8yken/B%c3%b8dalen/hour_by_hour_detailed.html?spr=eng If you search for Drammen you will see the green line showing the same pressure, but the elevation there is close to sea level, 10 m or so. | {
"domain": "earthscience.stackexchange",
"id": 1104,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "meteorology, atmosphere",
"url": null
} |
quantum-mechanics, atomic-physics, hydrogen, orbitals, semiclassical
We may now analyze this result to answer the second part of the question. Clearly, the degeneracy of the orbits with the same $n_r$ and $\ell$ is implied by the allowed energies. There are a couple of other important visual notes regarding the shapes of orbits:
We generally have elliptical orbits; they're circular when $n_r=0$.
$m$ is indicative of the inclination of the orbit in space: the absolute value thereof dictates the angle between the plane of the orbit and the $xy$ plane, and the sign tells you whether the orbit is clockwise or anticlockwise.
Reference
Introduction to Quantum Mechanics by Pauling and Wilson | {
"domain": "physics.stackexchange",
"id": 62826,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, atomic-physics, hydrogen, orbitals, semiclassical",
"url": null
} |
catkin, ros-lunar, roscd, windows10, package
My setup is kind of complicated: My workspace is in a OneDrive-synced folder (Win10 file system), which is then mounted using WSL to have a Linux environment for ROS development.
This is also not a typical setup, and especially the "OneDrive-synced" part is worrying me. What does that do to your file permissions? File permissions on all files need to be correct, or binaries won't run and all sorts of other problems can come up.
I would strongly recommend you try to get a baseline by replicating your setup on a regular Ubuntu installation (perhaps a VM).
If that works while you're following the same workflow, it's more than likely that either WSL and/or your OneDrive setup is interfering.
Originally posted by gvdhoorn with karma: 86574 on 2018-08-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2018-08-16:
Also note: WSL is not a supported platform for ROS, and you are bound to run into issues.
I'm not sure it is a good environment to start learning ROS in.
See #q238646 for a related Q&A.
Comment by totalnewbie on 2018-08-20:
In the end, I set up a VM with Linux, which isn't ideal on a not-so-powerful portable machine for sure, but at least ROS works adequately. Thanks. | {
"domain": "robotics.stackexchange",
"id": 31556,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "catkin, ros-lunar, roscd, windows10, package",
"url": null
} |
astrophysics, sun, stars, complex-systems
Nothing even remotely like that is known to exist in the sun.
Finally, note that solar structure models account well for the observed behavior of the sun with no assumptions at all about short-scale "Sheldrake" structure. If such structure were present, it is unlikely that the models would work properly, since they do not include it.
Sheldrake's implication that astronomical bodies are therefore capable of consciousness and emotions is without any experimental evidence. Note that there are phenomenal amounts of "structure" in a sand dune viewed through a microscope, but nothing that would support anything like consciousness. | {
"domain": "physics.stackexchange",
"id": 79614,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "astrophysics, sun, stars, complex-systems",
"url": null
} |
Our objective is to estimate the regression function, which can be computed from the data. If the points on a graph are not clustered around a straight line, then simple regression is not the appropriate analysis for that data set. Related to the Perceptron and Adaline, a logistic regression model is a linear model for binary classification; its output is a probability, e.g. a 2% probability that an email message is spam. For method comparison, the procedure of Cornbleet and Gochman is used to calculate Deming regression, and the iteratively re-weighted procedure described by Linnet is used to calculate weighted Deming regression. Weighted moving averages (WMAs) can have different weights assigned based on the number of periods used in the calculation. In addition to the slope and y-intercept, the module can return the square of the correlation coefficient (R squared), the Durbin-Watson statistic, the mean squared error, sigma, the t statistics, the variances of the estimates of the slope and y-intercept, the predicted y values, and the residuals. Linear regression has one weakness: it tends to underfit the data, and it works accurately only on data with a linear relationship. A slope of 0.95 in the fitted equation defines how much the dependent variable changes with the independent variable. Regressions include lin-lin, lin-log, log-lin and log-log forms. The regression function can also simply be wrong: maybe it should have some other form (see diagnostics for simple linear regression). Weighted least squares (WLS) regression models are fundamentally different from ordinary least squares; locally weighted linear regression takes the conservative function approximator of linear regression and fits it locally.
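Since the passage contrasts ordinary and weighted least squares, here is a minimal sketch of a weighted linear fit via the closed-form normal equations, assuming NumPy is available (the data and weights are made up for illustration):

```python
import numpy as np

def weighted_linfit(x, y, w):
    """Weighted least-squares fit of y ~ a + b*x.
    Solves the normal equations (X^T W X) beta = X^T W y."""
    X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
    W = np.diag(w)                             # diagonal weight matrix
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # [intercept, slope]

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])   # exactly y = 1 + 2x
w = np.array([1.0, 4.0, 1.0, 0.25])  # e.g. inverse-variance weights
intercept, slope = weighted_linfit(x, y, w)
```

On exactly linear data any positive weights recover the same line; the weights matter when the points scatter with unequal variances, which is precisely the calibration-curve situation described above.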
Linear regression can also be described as a conditional statistical model of a random vector y given a measurement vector x, where y is a linear transformation of x followed by additive noise, typically Gaussian. A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays has been reported. | {
"domain": "data-wizard.de",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9886682471364099,
"lm_q1q2_score": 0.8083099198872961,
"lm_q2_score": 0.817574471748733,
"openwebmath_perplexity": 833.588459763039,
"openwebmath_score": 0.6430838108062744,
"tags": null,
"url": "http://data-wizard.de/weighted-linear-regression.html"
} |
software, astrophotography
Title: When taking a sequence of exposures for stacking/coaddition, what dither patterns are most commonly desired? Why? When taking a sequence of exposures for stacking/coaddition, what dither patterns are most commonly desired? When visiting a telescope, what default dither patterns would a visiting astronomer like to see in the observing software (assuming custom patterns are also supported)? Dithering is as much an art as a science and depends on many factors including, but not limited to:
The type of object being observed (point source, small extended object, large extended object)
Telescope parameters (the field of view of the telescope relative to the size of the object, optical quality, size and type of aberrations, etc.)
The quality of the detector (flat fielding, vignetting, bad pixels, linearity, etc.)
The type of readout electronics (single or multiple channels and where they fall on the chip, gains and biases, etc) | {
"domain": "physics.stackexchange",
"id": 2960,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "software, astrophotography",
"url": null
} |
planetary-atmosphere, jupiter, atmospheric-escape
Title: Does Jupiter lose some atmosphere at all? I've heard something like this: "All planets lose their atmospheres eventually and they become a barren rock if given enough time."
There are three main ways an atmosphere can be lost:
Jeans Escape: Temperature and escape velocity determine which gases are lost and in what amount. Jupiter cannot lose atmosphere via Jeans escape because its escape velocity is far too high compared with the thermal speeds of its atmospheric gases.
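The competition between temperature and escape velocity can be made quantitative with the dimensionless Jeans escape parameter, lambda = GMm/(kTr): roughly, values above a few tens mean thermal escape is negligible. A sketch comparing atomic hydrogen on Jupiter and Earth (the exobase temperature and radii below are rough illustrative values, not precise measurements):

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23  # Boltzmann constant, J/K
M_H = 1.67e-27   # mass of a hydrogen atom, kg

def jeans_parameter(mass_kg, radius_m, temp_k, particle_mass_kg=M_H):
    """Ratio of gravitational binding energy to thermal energy
    for one particle at the exobase."""
    return G * mass_kg * particle_mass_kg / (K_B * temp_k * radius_m)

# Rough exobase values, both taken at ~1000 K for comparison.
jupiter = jeans_parameter(1.90e27, 7.15e7, 1000)  # of order 200: essentially no escape
earth = jeans_parameter(5.97e24, 6.9e6, 1000)     # of order 7: hydrogen leaks away
```

The large value for Jupiter is why even the lightest gas, hydrogen, stays bound, whereas Earth steadily loses its atmospheric hydrogen.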
Charge exchange: Solar radiation creates electrons and positively charged ions in upper atmospheres by tearing electrons off atoms or molecules. Subsequently, charge attraction and repulsion in collisions accelerate ions. Jupiter most likely cannot lose its atmosphere via charge exchange because its magnetic field is strong and extensive, while little or no material escapes via the polar wind.
Vertical atmospheric escape/impact erosion: Energetic objects that strike a planet erode its atmosphere by creating a plume of heated gas. Jupiter does not lose its atmosphere via impact erosion because it has very high gravity and escape velocity, which pull particles ejected by an impact back into its atmosphere. | {
"domain": "astronomy.stackexchange",
"id": 3521,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "planetary-atmosphere, jupiter, atmospheric-escape",
"url": null
} |
quantum-field-theory, particle-physics, research-level, spinors
Spinor bundles are associated vector bundles to a spin bundle, which is a lift of the oriented orthonormal frame bundle. Lifts need neither exist nor, if they do exist, be unique; hence there are manifolds on which you cannot define a spin bundle and manifolds on which you have more than one such bundle. The obstruction to the existence of a spin structure is orientability together with the vanishing of the second Stiefel-Whitney class of the tangent bundle. If a manifold M admits a spin structure, then it may admit more than one: they are classified by $H^1(M,\mathbb{Z}_2)$, which is isomorphic to the set of group homomorphisms from the fundamental group to $\mathbb{Z}_2$. Roughly speaking, this measures how you can consistently assign signs to noncontractible loops.
The circle has fundamental group $\mathbb{Z}$ and since there are two homomorphisms $\mathbb{Z} \to \mathbb{Z}_2$, there are two different spin structures, which in string theory are usually called NS and R, much to the amusement of spin geometers everywhere.
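The same counting extends to the n-torus, whose fundamental group is $\mathbb{Z}^n$: a homomorphism to $\mathbb{Z}_2$ is fixed by a sign choice on each generating loop, giving $2^n$ spin structures. A toy enumeration of those sign assignments (purely illustrative):

```python
from itertools import product

def spin_structures_on_torus(n):
    """Enumerate homomorphisms Z^n -> Z_2 as sign assignments
    (+1 or -1) on each of the n generating loops of the n-torus."""
    return list(product([+1, -1], repeat=n))

# n = 1 is the circle: the two structures are the NS and R sectors.
circle = spin_structures_on_torus(1)
torus = spin_structures_on_torus(2)
```

This is only the counting; which of the structures admits parallel spinors is a separate geometric question.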
Hence the lesson is that before you can even talk about <insert your favourite spinor equation> you need to say what your spinors are; that is, which spinor bundle they are sections of.
A rough analogy (which can be made precise in this case) is that you have equations and then boundary conditions and both are necessary in order to define the problem. The analogue of the boundary conditions is specifying the spinor bundle. This is indeed the case for the circle: where the spinor field will either change by a sign or not as you move along the circle.
For manifolds admitting inequivalent spin structures, it is not uncommon that there are parallel, Killing, ... spinor fields relative to one of the spin structures but not relative to the others. In fact, this is the generic situation.
In summary, the answer to the question in the title is emphatically Yes.
Further remarks
This may answer the OP's question in the comment to an earlier version of this answer. | {
"domain": "physics.stackexchange",
"id": 3377,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, particle-physics, research-level, spinors",
"url": null
} |
ros, gazebo, p3dx, model
Title: Adding noise to P3DX model
Hi Ros users:
I am trying to add gaussian noise to velocities of P3DX. The model that I am using is from this source https://github.com/RafBerkvens/ua_ros_p3dx
I have modified this part of code:
<!-- ground truth -->
<gazebo>
<plugin name="p3d_base_controller" filename="libgazebo_ros_p3d.so">
<alwaysOn>true</alwaysOn>
<updateRate>100.0</updateRate>
<bodyName>base_link</bodyName>
<topicName>${ns}/base_pose_ground_truth</topicName>
<gaussianNoise>0.01</gaussianNoise>
<frameName>map</frameName>
<!-- initialize odometry for fake localization -->
<xyzOffsets>0 0 0</xyzOffsets>
<rpyOffsets>0 0 0</rpyOffsets>
<velocityGaussianNoise>10 10 10</velocityGaussianNoise>
</plugin>
</gazebo>
I changed the parts that say gaussianNoise and velocityGaussianNoise, but it behaves the same way.
Does anybody know how I can add Gaussian noise to the P3DX model?
Thank you so much. | {
"domain": "robotics.stackexchange",
"id": 22189,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo, p3dx, model",
"url": null
} |
quantum-mechanics, operators, hamiltonian-formalism, commutator, phase-space
Title: Quantum canonical transformation This post is very similar in content to this one. I'm looking for a quantum implementation of the transformations
$$ x_i \to x_i + f(p) p_i, $$
$$ p_i \to h(p) p_i. $$
In these, the subindex $i$ denotes components of $\mathbf{x}$ and $\mathbf{p}$; and $f(p)$ and $h(p)$ are scalar functions of $p\equiv|\mathbf{p}|$. So I'm looking for the operator $T$ implementing
$$ T x_i T^{-1} = x_i + f(p) p_i, $$
$$ T p_i T^{-1} = h(p) p_i. $$
I tried guessing an expression for $T$ inspired by the accepted answer to the post mentioned at the beginning, but I couldn't work out the calculations to verify whether I guessed right. When trying to do so, I used the relation
$$ [A, e^B] = \int_0^1 ds e^{(1-s)B}[A,B]e^{sB} $$
discussed here, but got trapped in seemingly never-ending nested, multiple integrals.
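Before wrestling with those nested integrals symbolically, the integral identity itself can be sanity-checked numerically on random matrices. A sketch assuming NumPy and SciPy are available (the matrix size, seed, and midpoint quadrature are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = A @ expm(B) - expm(B) @ A  # [A, e^B]
comm = A @ B - B @ A             # [A, B]

# Midpoint-rule quadrature of  integral_0^1 e^{(1-s)B} [A,B] e^{sB} ds
n = 1000
s = (np.arange(n) + 0.5) / n
rhs = sum(expm((1 - si) * B) @ comm @ expm(si * B) for si in s) / n

print(np.max(np.abs(lhs - rhs)))  # small: the identity holds numerically
```

The identity follows from integrating $\frac{d}{ds}\,e^{(1-s)B} A\, e^{sB} = e^{(1-s)B}[A,B]e^{sB}$ from 0 to 1, which is also a useful sanity check on signs when doing the calculation by hand.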
ADDENDUM
As pointed out in the comments, the transformations above are (classically) canonical only in the trivial case $h=1$. To make my question more pertinent, I rephrase it, removing the required transformation for $x_i$: I ask for the quantum implementation of the transformation $p_i \to h(p) p_i$, letting the transformation of $x_i$ be determined afterwards by requiring canonicity. Hint: to break your problem down into several technical pieces,
if you stuck to one dimension and dismissed QM, so you worked in the p-representation where $\hat x$ is proportional to $d/dp$, and you sought the operator T which scales p by a function thereof,
$$
T p T^{-1}= p ~ h(p),
$$ | {
"domain": "physics.stackexchange",
"id": 85461,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, hamiltonian-formalism, commutator, phase-space",
"url": null
} |
python, web-scraping
for job in listings:
jobT = job.find('a', class_='jobtitle')
jobL = job.find('span', class_='location').text.strip()
jobS = job.find('div', class_='summary').text.strip()
link = jobT['href']
if any(any(subs in s for s in (jobT.text.strip().lower(), jobS.lower())) for subs in (jobName.split('+')[0], jobName[1])):
print('Your job in '+jobL+' as a '+ jobT.text.strip()+
'.\nHere is a quick summary of your job here: '+
jobS+'\nLink for more information and application for the job - https://indeed.com'+link, end='\n\n\n')
What you are doing sounds reasonable overall.
It's good that you are using F-strings:
nmoPag = input(f"There are {noPag} number of pages. If you want to scrape all of them write 'Max' else write number of pages you wish to scrape: ")
But there are many other places where you don't eg:
print('Your job in '+jobL+' as a '+ jobT.text.strip()+
'.\nHere is a quick summary of your job here: '+
jobS+'\nLink for more information and application for the job - https://indeed.com'+link,
So I would upgrade the rest of the code for more consistency and better readability :)
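For instance, the concatenated print above could be rewritten with an f-string like this (the variable values are made up; the scraped fields are assumed to already be stripped strings):

```python
job_location = "Berlin"               # stands in for jobL
job_title = "Data Engineer"           # stands in for jobT.text.strip()
job_summary = "Build data pipelines"  # stands in for jobS
link = "/rc/clk?jk=123"               # stands in for jobT['href']

message = (
    f"Your job in {job_location} as a {job_title}.\n"
    f"Here is a quick summary of your job here: {job_summary}\n"
    f"Link for more information and application for the job - "
    f"https://indeed.com{link}"
)
print(message, end="\n\n\n")
```

Adjacent f-string literals inside parentheses concatenate automatically, so the long message stays readable without any + operators.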
Consistency: you are mixing lower case and upper case in some variable names, e.g. jobName vs place. Remember that variable names are case-sensitive in Python; this practice can be dangerous and confusing. Imagine that you had both jobName and jobname: they are two variables that may be assigned different values.
There is redundancy in the use of functions, for example this bit of code is repeated twice:
jobT.text.strip() | {
"domain": "codereview.stackexchange",
"id": 37762,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, web-scraping",
"url": null
} |
javascript, jquery, html
$(".remove").click(function() {
$(this).parent(".pip").remove();
});
// Remove image code ended here....
};
counter++;
// get the text
}
drawText(data);
}
json(jsonData);
}); // end of document ready
// extempl code - get the text
const fonts = []; // caching duplicate fonts
function drawText(layer) {
if (layer.type === 'image') return;
if (!layer.type || layer.type === 'group') {
return layer.layers.forEach(drawText)
}
if (layer.type === 'text') {
const url = 'http://piccellsapp.com:1337/parse/files/PfAppId/' + layer.src;
if (!fonts.includes(url)) {
fonts.push(url);
$("style").prepend("@font-face {\n" +
"\tfont-family: \"" + layer.font + "\";\n" +
"\tsrc: url(" + url + ") format('truetype');\n" +
"}");
}
$('.container').append(
'<div class="txtContainer" ' +
'style="' +
'text-align: ' + layer.justification + '; ' +
'font-family: ' + layer.font + '; ' +
'left: ' + layer.x + 'px; ' +
'top: ' + layer.y + 'px; ' +
'width:' + layer.width + 'px; ' +
'color: ' + layer.color.replace(/^0x/, '#') + '; ' +
'font-size: ' + layer.size + 'px; ' +
'height:' + layer.height + 'px;' +
'">' +
layer.text +
'</div>');
}
}
// extempl code end | {
"domain": "codereview.stackexchange",
"id": 33966,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, html",
"url": null
} |
thermodynamics, entropy
Title: Is a world with constant/decreasing entropy theoretically impossible? We can imagine many changes to the laws of physics - you could scrap all of electromagnetism, gravity could be an inverse cubed law, even the first law of thermodynamics could hypothetically be broken - we've all imagined perpetual motion machines at one time or another.
However, the second law of thermodynamics seems somehow more 'emergent'. It just springs out of the nature of our universe - the effectively random movement of physical objects over time. Provided you have a Universe whose state is changing over time according to some set of laws, it seems like the second law must be upheld, things must gradually settle down into the state of greatest disorder.
What I'm particularly wondering is whether you can prove this in any sense (perhaps using methods from statistical mechanics), or whether it is possible to construct a set of laws (preferably similar to our own) which would give us a universe that could break the second law. The short answer is that such a universe cannot be envisaged, not with relevance to our known physics.
Entropy as defined in statistical thermodynamics is proportional to the logarithm of the number of microstates of the closed system, the universe in your question.
You would have to devise a universe where the number of microstates diminishes with time.
The great multiplier of microstates in our universe is the photon, which is emitted at every chance it gets, and thus increases the number of microstates. Photons are emitted by electromagnetic interactions and by all bodies consisting of atoms and molecules due to the black body radiation effect. Each emitted (or absorbed, because the state of the atom that absorbed it has changed) photon defines a new microstate to be added to the number of microstates, whose logarithm defines entropy. A universe without electromagnetism would not have atoms.
It is worth noting that all biological systems decrease entropy, as does the crystallization of materials, but this is possible because the systems are open and the energy exchanges create a large number of microstates thus obeying in the closed system the entropy constraint. | {
"domain": "physics.stackexchange",
"id": 9709,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, entropy",
"url": null
} |
general-relativity, differential-geometry, differentiation, vector-fields
Let $\rho$ be a nowhere vanishing scalar density of weight 1 (e.g. of the same type as $\sqrt{-g}$; some people consider this to be weight -1, I guess). Such an object shall be called a volume density.
The covariant derivative of scalar densities of weight 1 (with respect to any connection) is defined as $$ \nabla_\mu\rho=\partial_\mu\rho -\Gamma_{\mu\nu}^\nu\rho. $$
Here $\Gamma$ doesn't denote the Christoffel symbols, but the coefficients of an arbitrary linear connection.
The connection $\nabla$ is said to be volume-preserving if there is such a $\rho$ with $\nabla_\mu\rho=0$. We make some statements regarding volume-preserving connections.
"domain": "physics.stackexchange",
"id": 60104,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, differential-geometry, differentiation, vector-fields",
"url": null
} |
• $\mathcal{N}_f = 8$. e.g. a "distorted" octahedron with vertices at 6 points $\pm \vec{A}, \pm \vec{B}, \pm \vec{C}$ where $$\begin{cases} \vec{A} &= (-1,\alpha,1),\\ \vec{B} &= ( 2,0,1),\\ \vec{C} &= (-1,-\alpha,1) \end{cases} \quad\text{ and }\quad \alpha = \frac{\sqrt{3(3+4\sqrt{5} - \sqrt{49+24\sqrt{5}})}}{2} \approx 1.1657187$$ One can cut this "distorted" octahedron along $\vec{A} \to -\vec{C} \to \vec{B} \to -\vec{A} \to \vec{C} \to -\vec{B} \to \vec{A}$ and unfold the surface into two kites.
• $\mathcal{N}_f = 10.$ e.g. a square antiprism mentioned in the question. A concrete example is the one with following 8 vertices at $$\begin{cases} (\pm 1,\pm1,0),\\ (\pm\sqrt{2},0,\beta),\\ (0,\pm\sqrt{2},\beta) \end{cases} \quad\text{ where }\quad\beta = \sqrt{2(\sqrt{2}-1)} \approx 0.9101797$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9833429609670702,
"lm_q1q2_score": 0.8257464861954458,
"lm_q2_score": 0.8397339676722393,
"openwebmath_perplexity": 420.8618590672783,
"openwebmath_score": 0.7100414633750916,
"tags": null,
"url": "https://math.stackexchange.com/questions/673617/making-a-convex-polyhedron-with-two-sheets-of-paper"
} |
nuclear-physics, vacuum, space, explosions
I appreciate your consideration, and any insightful answers (or even hypothesis) would be fantastic. I don't know much about physics really (outside of the enjoyable what-ifs I read from xkcd!) so I'm at a loss. A space explosion of an atomic or nuclear weapon would be very different from the same explosion on the earth, for the following reasons.
In space, there is no atmosphere in which acoustic and shock waves can propagate. The only matter being "thrown" outward by the blast is that which originated from the bomb, which is very small compared with the energy release in the explosion itself. To experience blast damage from a space explosion would require you to be so close to the detonation point that you'd be incinerated by the fireball before being physically torn to pieces.
In space, there is no large mass of rock and dirt to render radioactive via mechanisms like neutron activation, so the only residual radioactivity would be that carried by the components of the bomb's casing and internal parts (including, in the case of an atomic blast, unfissioned plutonium or uranium) which would be small compared to the energy release. No planetary surface next to the detonation point also means no shock wave reflections and no seismic waves either.
In space, the radiation and stray neutrons produced by the explosion would suffer no absorption or attenuation by air, because there'd be no air in the way. This means that the x-rays, neutrons, and other ionizing radiation would remain lethal out to a larger radius.
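The "larger lethal radius" point is just the absence of exponential attenuation: in air the radiation fluence falls roughly as $e^{-r/\lambda}/r^2$, in vacuum only as $1/r^2$. A sketch with an assumed, purely illustrative attenuation length of 300 m for prompt radiation in sea-level air:

```python
import math

def relative_fluence(r_m, attenuation_length_m=None):
    """Radiation fluence at distance r (relative to 1 m), with an
    optional exponential atmospheric attenuation term."""
    fluence = 1.0 / r_m**2  # inverse-square geometric spreading
    if attenuation_length_m is not None:
        fluence *= math.exp(-r_m / attenuation_length_m)
    return fluence

r = 3000.0  # 3 km from the detonation point
vacuum = relative_fluence(r)
air = relative_fluence(r, attenuation_length_m=300.0)
print(vacuum / air)  # vacuum fluence is larger by a factor exp(3000/300)
```

At a few kilometres the attenuation factor dominates the geometry entirely, which is why the lethal radius of ionizing radiation grows so dramatically in vacuum.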
In an air burst explosion on earth, the air immediately surrounding the detonation point gets heated so hot by the initial blast that it becomes opaque to infrared radiation, which traps a lot of the energy release inside the fireball until such time as the fireball has expanded and cooled enough to become transparent again. With no air around the detonation point, the energy release can stream directly out of the explosion and the time evolution of the fireball and its size, etc. would be very different. | {
"domain": "physics.stackexchange",
"id": 72877,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nuclear-physics, vacuum, space, explosions",
"url": null
} |
python, python-3.x, tkinter, pygame
def get_answer_last_screen(): # Defines get_answer_last_screen
if entry_characters.get() == word:
# Tells program that if the entry of characters = the correct/incorrect
controller.show_frame("WinGameScreen") # if answer is correct, WinGameScreen is shown
else:
controller.show_frame("IncorrectScreen") # if answer is incorrect, IncorrectScreen is shown
def get_answer(): # Defines get_answer
if entry_characters.get() == word: # Tells program that if the entry of characters = the correct/incorrect
controller.show_frame("CorrectScreen") # Show correct screen (if answer is correct)
else:
controller.show_frame("IncorrectScreen") # Show incorrect screen (if answer is wrong)
def get_answer2(event): # Defines get_answer
if entry_characters.get() == word: # Tells program that if the entry of characters = the correct/incorrect
controller.show_frame("CorrectScreen") # Show correct screen (if answer is correct)
else:
controller.show_frame("IncorrectScreen") # Show incorrect screen (if answer is wrong)
def tutorial_show(): # Defines tutorial_show
button_next_screen.configure(command=next_level_two) # Uses the 'next_level_two' function
controller.show_frame("Tutorial") # Shows Tutorial screen
def main_menu(): # Defines main_menu
button_next_screen.configure(command=next_level_two) # Uses the 'next_level_two' function
controller.show_frame("MainMenu") # Shows MainMenu
def difficulty_change(): # Defines difficulty_change
button_next_screen.configure(command=next_level_two) # Uses the 'next_level_two' function
controller.show_frame("Difficulty") # Shows Difficulty screen | {
"domain": "codereview.stackexchange",
"id": 31552,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, tkinter, pygame",
"url": null
} |
thermodynamics, physical-chemistry
So the question is :
Does the chemical reaction $\rm (2KOH + CaCO_3 → Ca(OH)_2 + K_2CO_3)$ occur spontaneously? Yes or no? Why? In modern chemical theory the term 'spontaneous reaction' doesn't make much sense and isn't often used anymore.
Instead the chemical reaction (e.g.)
$$\text{A}+\text{B} \rightleftharpoons\text{C}+\text{D}\tag{1}$$
is considered an equilibrium reaction, so that:
$$K_E=\frac{\alpha_C \alpha_D}{\alpha_A \alpha_B}\tag{2}$$
where $K_E$ is the equilibrium constant of $(1)$ and the $\alpha$ are so-called chemical activities (in simple, very dilute cases these equate to the more traditional concentrations).
If $K_E\gg 1$ the equilibrium is 'right-leaning'; if $K_E\ll 1$ it is called 'left-leaning'.
It is possible that in your reaction the equilibrium is sufficiently right-leaning for some $\text{Ca(OH)}_2$ and $\text{K}_2\text{CO}_{3}$ to form in your conditions. But you need to be very certain of your experimental/analytical conditions.
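For the symmetric case in $(1)$, starting from equal unit amounts of A and B and none of C or D, the equilibrium extent $x$ follows from $K_E = x^2/(1-x)^2$, giving $x = \sqrt{K_E}/(1+\sqrt{K_E})$. A quick sketch (the constants are illustrative, not measured values for this particular reaction):

```python
import math

def equilibrium_extent(K):
    """Fraction of A (and B) converted at equilibrium for A + B <-> C + D,
    starting from equal unit activities of A and B and no products.
    Solves K = x^2 / (1-x)^2 in closed form."""
    s = math.sqrt(K)
    return s / (1 + s)

print(equilibrium_extent(4.0))   # 2/3 converted: right-leaning
print(equilibrium_extent(0.25))  # 1/3 converted: left-leaning
```

Even a modestly right-leaning constant therefore yields appreciable product, which is why some Ca(OH)2 and K2CO3 can form under suitable conditions.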
Yes or no? Why?
So, as so often it's not really a 'yes or no' question. | {
"domain": "physics.stackexchange",
"id": 86365,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, physical-chemistry",
"url": null
} |
python, python-3.x, regex
return files_end_with_one
subject_fastafiles = subject_list_fastafiles()
query_fastafiles = query_list_fastafiles()
subject_files_ending_with_one = filter_files_ending_with_one(subject_fastafiles)
Docstrings
You should include a docstring at the beginning of every function, class, and module you write. This will allow documentation to identify what your code is supposed to do. This also helps other readers understand how your code works. I see that you already have a couple for your functions, but stay consistent.
Parameter Names
Parameter names should be descriptive enough to be able to tell what should be passed. While l might be obvious to some programmers as an iterable, to others it might not be. Since you're passing a list, renaming it to list_ (to avoid shadowing the built-in name list) makes it more obvious what you're passing and accepting.
Constant Variables
When you have a constant in your program, it should be UPPER_CASE to identify it as such.
Code Reduction
You want as little code as possible in your program. So, instead of:
def subject_list_fastafiles():
""" Method Docstring """
subject_fastafiles = sorted_nicely([fastafile for fastafile in os.listdir(subject_path) if os.path.isfile(os.path.join(subject_path, fastafile))])
return subject_fastafiles
def query_list_fastafiles():
""" Method Docstring """
query_fastafiles = sorted_nicely([fastafile for fastafile in os.listdir(query_path) if os.path.isfile(os.path.join(query_path, fastafile))])
return query_fastafiles
def filter_files_ending_with_one(sorted_files):
""" Method Docstring """
files_end_with_one = [name for name in subject_fastafiles if name[-1].isdigit() and not name[-2].isdigit() == 1]
return files_end_with_one | {
"domain": "codereview.stackexchange",
"id": 35923,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, regex",
"url": null
} |
biochemistry, bacteriology
Title: What is the film that covers the tongue? What is the film that covers the tongue in the mornings, even after brushing the teeth and tongue the night before and why does it have color variations? Do the different colors mean anything? It isn't really a film. The tiny bumps that cover your tongue are called papillae, and are normally pink in color. However, they can become inflamed and white when irritated. The appearance of the white "coating" is caused by debris, bacteria and dead cells getting lodged between the papillae.
You may be breathing through your mouth when you sleep, which is drying it out. Bacteria may also still be the cause; you may not be brushing well enough or your toothpaste may not be correctly doing its job.
The color variations may be due to different types or amounts of debris, or the color may vary with different conditions listed here. Tongue color changes also often occur with glossitis (inflammation of the tongue itself). | {
"domain": "biology.stackexchange",
"id": 2115,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "biochemistry, bacteriology",
"url": null
} |
inorganic-chemistry, reaction-mechanism, redox, coordination-compounds
\end{alignat}$$
The difference in redox potential can be explained using the stability constant of $\ce{[Ag(NH3)2]+}$:
$$K_\text{B} = \frac{\left[\ce{[Ag(NH3)2]+}\right]}{\left[\ce{Ag+}\right]\left[\ce{NH3}\right]^2}$$
$$\begin{aligned}
E&=E_{\ce{Ag+}}^\circ+\frac{RT}{F}\cdot\ln\left[\ce{Ag+}\right]\\
&=E_{\ce{Ag+}}^\circ+\frac{RT}{F}\cdot\ln\frac{\left[\ce{[Ag(NH3)2]+}\right]}{K_\text{B}\cdot\left[\ce{NH3}\right]^2} \\
E_{\ce{[Ag(NH3)2]+}}^\circ&=E_{\ce{Ag+}}^\circ+\frac{RT}{F}\cdot\ln\frac{1}{K_\text{B}} \\
&\approx E_{\ce{Ag+}}^\circ-0.059\ \mathrm{V}\cdot\log K_\text{B}
\end{aligned} $$
3.
The complex formation equilibrium slows down the overall reaction. A slow, controlled reaction is important for creating the desired silver mirror. If $\ce{Ag+}$ is reduced too quickly, colloidal silver metal would appear, which would create a black cloudy liquid. | {
"domain": "chemistry.stackexchange",
"id": 3365,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, reaction-mechanism, redox, coordination-compounds",
"url": null
} |
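The derivation above can be made numeric. A minimal sketch, assuming the commonly quoted stability constant $K_\text{B} \approx 1.1\times10^{7}$ for $\ce{[Ag(NH3)2]+}$ and $E^\circ_{\ce{Ag+}} \approx 0.80\ \mathrm{V}$ (both values are assumptions, not stated in the answer):

```python
import math

K_B = 1.1e7    # assumed stability constant of [Ag(NH3)2]+ (literature value)
E0_Ag = 0.80   # assumed standard potential of Ag+/Ag, in volts

# Last line of the derivation: E0(complex) ~ E0(Ag+) - 0.059 V * log10(K_B)
E0_complex = E0_Ag - 0.059 * math.log10(K_B)
```

Complexation thus lowers the standard potential to roughly 0.38 V, which is the quantitative content of the redox-potential difference discussed above.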
ros, rosout
if self._get_time is not None and not self._is_wallclock():
    time_str += ', %f' % self._get_time()
msg = msg.replace('${time:' + time_format + '}', time_str)
return msg, color
And a node script that uses it (which depends on the fork, but the format_msg body could be copied into it):
import logging
import sys
import rospy
from rospy.impl.rosout import _rospy_to_logging_levels
from rosgraph_msgs.msg import Log
from rosgraph.roslogging import (
    _color_reset,
    format_msg,
)

def log_callback(log):
    # logging.getLevelNamesMapping() needs Python 3.11+; it maps level
    # names to numeric levels, so invert it to look names up by level.
    names_to_levels = logging.getLevelNamesMapping()
    levels_to_names = {}
    for key, value in names_to_levels.items():
        levels_to_names[value] = key
    logging_level = _rospy_to_logging_levels[log.level]
    logger_name = levels_to_names[logging_level]
    # print(f"log level {log.level} {logging_level} {logger_name}")
    msg, color = format_msg(log.msg, None, log.name, log.file,
                            log.line, log.function, logger_name, None,
                            log.header.stamp.to_sec())
    if color is None:
        color = ""
    text = color + msg + _color_reset
    if logging_level < logging.WARNING:
        print(text)
    else:
        print(text, file=sys.stderr)

def main():
    rospy.init_node("rosout_to_stdout")
    rospy.Subscriber("rosout_agg", Log, log_callback, queue_size=8)
    rospy.spin()

if __name__ == "__main__":
    main() | {
"domain": "robotics.stackexchange",
"id": 2706,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rosout",
"url": null
} |
c++, array, iterator, collections, vectors
// Querying size ===================================================
constexpr auto empty() const noexcept -> bool;
constexpr auto size() const noexcept -> size_type;
constexpr auto max_size() const noexcept -> size_type;
// Iterators =======================================================
constexpr auto begin() noexcept -> iterator;
constexpr auto begin() const noexcept -> const_iterator;
constexpr auto end() noexcept -> iterator;
constexpr auto end() const noexcept -> const_iterator;
constexpr auto cbegin() const noexcept -> const_iterator;
constexpr auto cend() const noexcept -> const_iterator;
constexpr auto rbegin() noexcept -> reverse_iterator;
constexpr auto rbegin() const noexcept -> const_reverse_iterator;
constexpr auto rend() noexcept -> reverse_iterator;
constexpr auto rend() const noexcept -> const_reverse_iterator;
constexpr auto crbegin() const noexcept -> const_reverse_iterator;
constexpr auto crend() const noexcept -> const_reverse_iterator; | {
"domain": "codereview.stackexchange",
"id": 45374,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, array, iterator, collections, vectors",
"url": null
} |
c++, c++11, tic-tac-toe
{
std::cout << num << " ";
}
}
std::cout << "\n\n";
}
void Board::removeFromValidLocation(int move) {
    // std::remove_if only shifts the kept elements forward; pair it with
    // erase to actually shrink the container (the erase-remove idiom).
    m_board.erase(std::remove_if(m_board.begin(), m_board.end(), [&](auto& number) {
        return number == move;
    }), m_board.end());
} | {
"domain": "codereview.stackexchange",
"id": 40165,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, tic-tac-toe",
"url": null
} |
deep-learning, visualization, word2vec, tsne
Title: Using t-SNE to track progress of a word vector embedding model. Pitfalls? I've been training a word2vec/doc2vec model on a large amount of text. I recently stumbled across the t-SNE package, and am finding it wonderful at finding hidden structure in high-dimensional data.
Can t-SNE be used as a way of tracking the progress of a hard machine learning task like this - where the model's understanding goes from unintelligible nonsense to something with hidden structure?
I have seen examples of the MNIST data set on t-SNE where all the individual numbers cluster well with each other. (as explained in this answer)
As I increase the number of vectors in the doc2vec model and the size of the training set, I start to see clumping (if you squint) in the t-SNE plot. So far, these clumps are mainly associated with posts of very similar wording (one clump is mainly "Good morning/evening!" tweets). (Picture was generated with perplexity of 400)
How much additional clumping can I expect to see as the model is improved? Is this indicative that the model is, in fact, improving and learning deeper connections between words/phrases? Or have these t-SNE plots settled into the form they'll always take?
EDIT: I have since realised that the apparent lack of clumping could be due to the data itself. MNIST separates out cleanly because there are generally no weird glyphs that look like halfway mutations between numbers. My dataset (twitter sentiment, 1.6 million tweets) is, for lack of a better word, filled with unclassifiable drivel, and it seems entirely probable that the homogeneous forest of points in the centre of the plot represents these sorts of tweets. t-SNE is extremely useful for visualizing high-dimensional data in lower-dimensional space. However, t-SNE can have several gotchas, including comparing cluster sizes. The t-SNE algorithm tries to even out cluster sizes by expanding dense clusters and contracting sparse clusters. Thus, it is not straightforward to directly compare clusters across different runs.
"How to Use t-SNE Effectively" goes into greater detail about common pitfalls of using the technique. | {
"domain": "datascience.stackexchange",
"id": 4524,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, visualization, word2vec, tsne",
"url": null
} |
• For the proof of Prop 2, I'm not sure why we have "in either case $x\in I \subseteq J$ for some $I \in \mathscr{I}$"? Could you please elaborate? – twosigma Jun 18 '20 at 20:41
• It was a bit terse. If $c \leqslant x < b,$ then because $b = \sup J,$ there exists $y \in J$ such that $x < y,$ therefore there exists an open interval $I \in \mathscr{I}$ such that $y \in I.$ We then have $c \in I$ and $y \in I$ and $c \leqslant x < y,$ therefore $x \in I \subseteq J.$ The proof for $a < x \leqslant c$ is similar, with the inequality signs going the other way round. (By the way, I'm afraid the proof of Proposition 5 is even more terse, and it may be more easily understood with reference to the proof of Proposition 3 than its statement.) – Calum Gilhooley Jun 18 '20 at 20:52
• Thanks for the detailed answer and additional interesting propositions. For Prop 5, I have seen the proof before so that is ok. – twosigma Jun 18 '20 at 21:46
• Like you, I imagined at first that a quite elaborate proof would be needed to answer the general case of your question, even though @PierreCarre's comment takes care of the case of two intervals without fuss. It was only after a long comfortable soak in the bath last night that I realised there was a simple proof. By that time I had dreamt up most of these other arguments, because my initial more elaborate idea had been to prove Prop. 4 from first principles and deduce Prop. 1. Something like that, anyway. It was only in bed this morning, unable to sleep, that I got it all straight in my head. – Calum Gilhooley Jun 18 '20 at 22:32 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9854964194566753,
"lm_q1q2_score": 0.8471955516074846,
"lm_q2_score": 0.8596637541053281,
"openwebmath_perplexity": 81.7338026084833,
"openwebmath_score": 0.8846967816352844,
"tags": null,
"url": "https://math.stackexchange.com/questions/3723481/an-open-interval-is-not-a-disjoint-union-of-two-or-more-open-intervals"
} |
noise, qpsk, pll
\text{Narrow} & 0.011 \text{ cyc/samp} & 200 \text{ samples} & 1e-4 & 7e-3 & 1.045 \\
\text{Medium} & 0.056 \text{ cyc/samp} & 20 \text{ samples} & 0.01 & 0.07 & 1.257 \\
\text{Wide} & 0.202 \text{ cyc/samp} & 5 \text{ samples} & 0.072 & 0.21 & 2.206 \\
\end{bmatrix}$$
I then created a noise sample of $2^{15}$ complex samples with the following phase noise target values for the PSD:
Freq $1=2\pi$, Phase Noise (dBc):
1e-2 cycles/sample, -15 dBc
1e-1 cycles/sample, -45 dBc
0.5 cycles/sample, -59 dBc
To this phase noise data I added AWGN with a total power of -40 dBc. This created experimental data of complex samples with both AM and PM noise components, with the phase noise contributing PM only with increasing density toward the lower frequencies, and AWGN contributing AM and PM components equally.
The loop performance was characterized by determining the closed loop transfer function from the input of the phase rotator to the output of the phase rotator which is given by:
$$G_{CL}(z) = \frac{1}{1+G_{OL}(z)} $$
It is clear that signal component, here normalized to 1 would not be affected in magnitude by the phase rotator, so we can assess SNR from the noise directly after it has passed through the above transfer function.
The Decision Directed Phase Detector responds equally to small scale AM and PM changes (A small change in amplitude can not be distinguished from a small change in phase), so any AM components would get translated to (uncorrelated) PM noise at the output of the phase rotator as the loop tries to correct phase offsets that don't exist. Therefore the resulting total noise at the output taken at the phase rotator output would be: | {
"domain": "dsp.stackexchange",
"id": 8613,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "noise, qpsk, pll",
"url": null
} |
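The closed-loop expression $G_{CL}(z) = 1/(1+G_{OL}(z))$ is the loop's error function: it suppresses phase noise inside the loop bandwidth and passes it through outside. A minimal numeric sketch, assuming a plain integrator $G_{OL}(z) = K/(1-z^{-1})$ (an illustrative loop filter and gain, not the loops characterized above):

```python
import cmath
import math

def error_response(f_norm, K=0.1):
    """|1/(1 + G_OL)| at normalized frequency f_norm (cycles/sample),
    assuming an integrator open-loop gain G_OL(z) = K / (1 - z^-1)."""
    z = cmath.exp(2j * math.pi * f_norm)
    G_ol = K / (1.0 - 1.0 / z)
    return abs(1.0 / (1.0 + G_ol))

low = error_response(1e-4)   # well inside the loop bandwidth: suppressed
high = error_response(0.4)   # far outside the loop bandwidth: passed through
```

The high-pass shape is what lets the loop track (remove) close-in phase noise while leaving far-out noise untouched.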
lisp, scheme, sicp, numerical-methods
Title: Integral using Simpson's Rule As an answer to this problem:
Exercise 1.29
Simpson's Rule is a more accurate method of numerical integration than the method illustrated above. Using Simpson's Rule, the integral of a function f between a and b is approximated as
(h / 3) * (y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + ... + 2y_(n-2) + 4y_(n-1) + y_n)
where h = (b - a)/n, for some even integer n, and y_k = f(a + kh). (Increasing n increases the accuracy of the approximation.) Define a procedure that takes as arguments f, a, b, and n and returns the value of the integral, computed using Simpson's Rule. Use your procedure to integrate cube between 0 and 1 (with n = 100 and n = 1000), and compare the results to those of the integral procedure shown above.
I wrote the following solution:
(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (+ (term a) result))))
  (iter a 0))
(define (simpsons-rule f a b n)
  (let ((h (/ (- b a) n)))
    (define (y_k k) (f (+ a (* k h))))
    (define (even n) (= (remainder n 2) 0))
    (define (term n) (* (if (even n) 2.0 4.0) (y_k n)))
    (define (next n) (+ n 1))
    (* (/ h 3.0) (+ (y_k 0.0) (sum term 0.0 next (- n 1.0)) (y_k n)))))
(define (cube x) (* x x x)) | {
"domain": "codereview.stackexchange",
"id": 203,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lisp, scheme, sicp, numerical-methods",
"url": null
} |
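As a cross-check on the coefficient pattern (y_0 and y_n weighted once, odd interior terms by 4, even interior terms by 2), here is the rule in Python, as a reference sketch rather than a translation of the reviewed Scheme. Simpson's rule is exact for polynomials up to degree 3, so integrating cube over [0, 1] must return 1/4 up to roundoff:

```python
def simpson(f, a, b, n):
    """Simpson's rule: (h/3)*(y_0 + 4y_1 + 2y_2 + ... + 4y_(n-1) + y_n), n even."""
    h = (b - a) / n
    y = lambda k: f(a + k * h)
    interior = sum((4 if k % 2 else 2) * y(k) for k in range(1, n))
    return h / 3 * (y(0) + interior + y(n))

approx = simpson(lambda x: x ** 3, 0.0, 1.0, 100)
```

Comparing a Scheme implementation against this reference is a quick way to spot an off-by-one in the coefficient bookkeeping.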
python, python-3.x, mathematics
def GetPowerSet(lst):
    result = [[]]
    for x in lst:
        result.extend([subset + [x] for subset in result])
    return result
S = {'a': [one, one, 0, 0, 0, 0, 0], 'b': [0, one, one, 0, 0, 0, 0],
'c': [0, 0, one, one, 0, 0, 0], 'd': [0, 0, 0, one, one, 0, 0],
'e': [0, 0, 0, 0, one, one, 0], 'f': [0, 0, 0, 0, 0, one, one]}
P = GetPowerSet(S)
u = [0, 0, one, 0, 0, one, 0]
v = [0, one, 0, 0, 0, one, 0]
u_0010010 = {y for x in P for y in x if SumVectorList(x, S) == u}
u_0100010 = {y for x in P for y in x if SumVectorList(x, S) == v} Please have a look at PEP 8 giving the Style Guide for Python Code.
Now, in AddVectors (or add_vectors), it seems like you assume that A and B have the same size. You could achieve the same result with zip.
def add_vectors(a, b):
    assert len(a) == len(b)
    return [sum(pair) for pair in zip(a, b)]
I have to go, I'll add some more later :-) | {
"domain": "codereview.stackexchange",
"id": 4202,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, mathematics",
"url": null
} |
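The GetPowerSet construction above doubles the result on every element, so n elements yield 2^n subsets. A quick check (renamed get_power_set per the PEP 8 advice in the answer):

```python
def get_power_set(lst):
    """Power set as a list of lists: the result doubles per element."""
    result = [[]]
    for x in lst:
        result.extend([subset + [x] for subset in result])
    return result

subsets = get_power_set(["a", "b", "c"])
```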
formal-languages, context-free, automata
Title: Why is the language of even-length non-palindromes context-free? We know $L_1=\{w_1 w_2 \in (a+b)^*\mid |w_1|=|w_2|, w_2 \neq w_1^{\;\mathrm{R}}\}$
is a context-free language.
Can anyone help me produce a PDA or give me any hint how I can quickly understand why this is context-free? The language of even-length non-palindromes is given by the following context-free grammar:
$$S \rightarrow 0S0 \mid 1S1 \mid D$$
$$D \rightarrow 1A0 \mid 0A1$$
$$A \rightarrow \lambda \mid 00A \mid 01A \mid 10A \mid 11A$$ | {
"domain": "cs.stackexchange",
"id": 3184,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "formal-languages, context-free, automata",
"url": null
} |
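To see why the grammar captures exactly the even-length non-palindromes, note that $S \Rightarrow^* wDw^{\mathrm{R}}$, $D$ forces one mismatched symmetric pair, and $A$ supplies an arbitrary even-length middle. A brute-force membership check built on that decomposition (an illustrative sketch, not a general CFG parser):

```python
from itertools import product

def in_grammar(s):
    """s = w a u b w^R with a != b and |u| even: taking w up to the first
    mismatched symmetric pair shows membership iff some pair mismatches."""
    n = len(s)
    if n % 2 == 1:
        return False
    for i in range(n // 2):
        if s[i] != s[n - 1 - i]:
            return True   # s = w D w^R with |w| = i
    return False

# The grammar should generate exactly the even-length non-palindromes.
checked = all(
    in_grammar("".join(bits)) == (n % 2 == 0 and list(bits) != list(reversed(bits)))
    for n in range(7)
    for bits in product("01", repeat=n)
)
```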
In conclusion, $g$ can be $2^0 \cdot 5^1, 2^1 \cdot 5^1, 2^2 \cdot 5^1$, and their sum is $\boxed{\mathbf{(C)}35}$.
## Solution B
All the unknown entries can be expressed in terms of $b$. Since $100e = beh = ceg = def$, it follows that $h = \frac{100}{b}, g = \frac{100}{c}$, and $f = \frac{100}{d}$. Comparing rows $1$ and $3$ then gives $50bc = 2 \cdot \frac{100}{b} \cdot \frac{100}{c}$, from which $c = \frac{20}{b}$. Comparing columns $1$ and $3$ gives $50d \cdot \frac{100}{c}= 2c \cdot \frac{100}{d}$, from which $d = \frac{c}{5} = \frac{4}{b}$. Finally, $f = 25b, g = 5b$, and $e = 10$. All the entries are positive integers if and only if $b = 1, 2,$ or $4$. The corresponding values for $g$ are $5, 10,$ and $20$, and their sum is $\boxed{\mathbf{(C)}35}$.
Credit to Solution B goes to http://billingswest.billings.k12.mt.us/math/AMC%201012/AMC%2012%20work%20sheets/2004%20AMC%2012B%20ws-15.pdf, a page with a play-by-play explanation of the solutions to this test's problems. | {
"domain": "artofproblemsolving.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9861513905984457,
"lm_q1q2_score": 0.8342273051727819,
"lm_q2_score": 0.8459424314825853,
"openwebmath_perplexity": 111.24639728587795,
"openwebmath_score": 0.9582446217536926,
"tags": null,
"url": "https://artofproblemsolving.com/wiki/index.php?title=2004_AMC_12B_Problems/Problem_22&oldid=87132"
} |
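Solution B's integrality condition can be checked mechanically: with $e = 10$ and the other entries expressed via $b$, only $b = 1, 2, 4$ keep every entry a positive integer. A sketch using just the formulas derived above (the search range is an arbitrary choice):

```python
valid_bs, valid_gs = [], []
for b in range(1, 101):
    # Entries from Solution B: c = 20/b, d = 4/b, f = 25b, g = 5b, h = 100/b
    entries = (b, 20 / b, 4 / b, 10, 25 * b, 5 * b, 100 / b)
    if all(v > 0 and float(v).is_integer() for v in entries):
        valid_bs.append(b)
        valid_gs.append(5 * b)
```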
ros, python, rostest
[roslaunch][INFO] 2017-11-15 12:16:07,611: runner.stop()
[rostest][INFO] 2017-11-15 12:16:07,611: shutting down processing monitor...
[roslaunch][INFO] 2017-11-15 12:16:07,611: shutting down processing monitor <ProcessMonitor(ProcessMonitor-1, started daemon 139687375009536)>
[roslaunch.pmon][INFO] 2017-11-15 12:16:07,611: ProcessMonitor.shutdown <ProcessMonitor(ProcessMonitor-1, started daemon 139687375009536)>
[roslaunch.pmon][INFO] 2017-11-15 12:16:07,698: ProcessMonitor._post_run <ProcessMonitor(ProcessMonitor-1, started daemon 139687375009536)>
[roslaunch.pmon][INFO] 2017-11-15 12:16:07,698: ProcessMonitor._post_run <ProcessMonitor(ProcessMonitor-1, started daemon 139687375009536)>: remaining procs are []
[roslaunch.pmon][INFO] 2017-11-15 12:16:07,702: ProcessMonitor exit: cleaning up data structures and signals
[roslaunch.pmon][INFO] 2017-11-15 12:16:07,702: ProcessMonitor exit: pmon has shutdown
[rostest][INFO] 2017-11-15 12:16:07,702: ... shutting down processing monitor complete
[roslaunch.pmon][INFO] 2017-11-15 12:16:07,703: ProcessMonitor.shutdown <ProcessMonitor(ProcessMonitor-1, stopped daemon 139687375009536)> | {
"domain": "robotics.stackexchange",
"id": 29369,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, python, rostest",
"url": null
} |
Question
Serving at a speed of 170 km/h, a tennis player hits the ball at a height of 2.5 m and an angle $\theta$ below the horizontal. The baseline from which the ball is served is 11.9 m from the net, which is 0.91 m high. What is the angle $\theta$ such that the ball just crosses the net? Will the ball land in the service box, whose service line is 6.40 m from the net?
$\theta = 6.1^\circ$
Yes, the ball will land within the service box.
Solution Video
# OpenStax College Physics Solution, Chapter 3, Problem 37 (Problems & Exercises) (15:37)
Submitted by elnazassadpour on Sat, 03/16/2019 - 06:42
You are incredibly good at explaining the solutions. Thanks so much! Just a quick question. How come we can't find theta by just doing d = squareroot of dy^2 + dx^2. and then using the equation theta = tan-1 (dy/dx), basically just using distances that we know to find the angle, rather than velocity? There are other questions, where it's just distance that's used.
Submitted by ShaunDychko on Tue, 04/02/2019 - 09:52
Hello, sorry for taking so long to get back to your question. It was spring break here. Thanks for the nice feedback and I'm really glad the solutions are helping with your studies.
Let's keep in mind that theta is an angle to do with velocity. It's the angle between the 'x' and 'y' components of the velocity. It's possible to use distances as a proxy for velocity only when both distance components are proportional to their respective velocity components, in which case the triangles made from resultants and components will be similar. I use the word similar in the technical math sense in that the triangles will have the same angles since they have equal ratios of corresponding sides. However, in this particular question, the y-component of displacement will not be proportional to the y-component of initial velocity since there is vertical acceleration due to gravity. Displacement can be used to find theta because the acceleration interferes with that strategy. | {
"domain": "collegephysicsanswers.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812354689081,
"lm_q1q2_score": 0.8005721843844085,
"lm_q2_score": 0.8267117898012105,
"openwebmath_perplexity": 552.8543585701631,
"openwebmath_score": 0.706351637840271,
"tags": null,
"url": "https://collegephysicsanswers.com/openstax-solutions/serving-speed-170-kmh-tennis-player-hits-ball-height-25-m-and-angle-theta-below"
} |
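The stated answers ($\theta = 6.1^\circ$, ball lands in the box) can be reproduced numerically. A sketch of the projectile solve, using the problem's numbers and $g = 9.8\ \mathrm{m/s^2}$ (the bisection solver is a choice here, not the video's method):

```python
import math

# Problem data: 170 km/h serve from 2.5 m; net 11.9 m away and 0.91 m high;
# service line 6.40 m beyond the net.
v0 = 170 / 3.6
y0, x_net, h_net, box_depth = 2.5, 11.9, 0.91, 6.40
g = 9.8

def height_at_net(theta_deg):
    """Ball height at the net for a launch angle theta below horizontal."""
    th = math.radians(theta_deg)
    t = x_net / (v0 * math.cos(th))
    return y0 - v0 * math.sin(th) * t - 0.5 * g * t * t

# Height at the net decreases as theta grows, so bisect for height == h_net.
lo, hi = 0.0, 45.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if height_at_net(mid) > h_net:
        lo = mid
    else:
        hi = mid
theta = 0.5 * (lo + hi)

# Landing distance: solve y(t) = 0 for the positive root, then x = vx * t.
th = math.radians(theta)
vy = v0 * math.sin(th)
t_land = (-vy + math.sqrt(vy * vy + 2 * g * y0)) / g
x_land = v0 * math.cos(th) * t_land
```

This reproduces θ ≈ 6.1° and a landing point roughly 5.3 m past the net, inside the 6.40 m service box.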
beginner, clojure, game-of-life
Title: Building blocks of Life I'm learning Clojure, and decided to write a Conway's Game of Life clone as my starting project. I ended up dicking around for a bit before diving in, and came up with a few functions that I'd like looked over. Mainly, I'm concerned about writing them more concisely and idiomatically.
I'm planning on using a 1D vector to represent a 2D field. The typical equation to get the index of a vector corresponding to an (x,y) coordinate is y * width + x. Here are my first attempts:
; A 2D point representing a coordinate, or any pair of numbers
(defrecord Point [x y])
; Represents the Game of Life world.
; cells is a vector representing a 2D matrix of cells
; dims is the dimensions of the world as a Point
(defrecord Enviro [cells dims])
(defn index-of [width x y]
(+ (* y width) x))
(defn enviro-index-of [enviro x y]
(let [width (:x (:dims enviro))]
(index-of width x y)))
(defn enviro-index-of-2 [enviro x y]
(let [width (-> enviro :dims :x)]
(index-of width x y)))
index-of is straightforward. My issue is the 2 convenience functions to get an index by supplying the Enviro instead of the width directly. I tried destructuring the record directly in the argument list, but it complained that it didn't recognize the keyword record keys. The work-around was to just "navigate" the records manually, and bind "width" in a let. My first attempt was kind of naïve; using the accessor functions to get the Enviro, then the dimensions Point. Then I remembered the thread macro! For my second attempt, I used -> to find the width. It seems to be much cleaner, but is this as idiomatic as it gets? | {
"domain": "codereview.stackexchange",
"id": 20059,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, clojure, game-of-life",
"url": null
} |
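The row-major formula y * width + x used by index-of has a simple inverse via divmod. A quick Python sanity check (the inverse function is added for illustration and is not part of the post):

```python
def index_of(width, x, y):
    """Row-major index of (x, y) in a 1D vector backing a 2D field."""
    return y * width + x

def coord_of(width, index):
    """Inverse mapping: recover (x, y) from the flat index."""
    y, x = divmod(index, width)
    return x, y

idx = index_of(5, 2, 3)
```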
c++, exception-handling
void fail() {
PROJECT_ASSERT(true > 0);
PROJECT_ASSERT(!!!true);
}
int main() {
try {
fail();
}
catch (BaseError& e) {
print_diagnostic_info(std::cerr, e);
if (std::string const* err_msg_p = boost::get_error_info<err_msg>(e))
std::cerr << "Error: " << *err_msg_p << std::endl;
// Cleanup goes here
}
}
Running this gives
exception.cpp(39): Throw in function void fail()
Dynamic exception type: boost::exception_detail::clone_impl<AssertionError>
std::exception::what: std::exception
[tag_err_info*] = Assertion failed: `!!!true'.
Error: Assertion failed: `!!!true'.
What are the possible problems with this approach, and are there any improvements to be made?
One part I'm not sure about is defining error in the assertion. Seeing as this is an inner scope it should always hide outer values, shouldn't it? Could this lead to unexpected warnings (maybe glue __LINE__ to it?)?
EDIT: With suggestions incorporated:
#include <boost/exception/all.hpp>
#include <boost/exception/diagnostic_information.hpp>
#include <string>
#include <iostream>
#include <ostream>
typedef boost::error_info<struct tag_err_info,std::string> err_msg;
struct BaseError : virtual boost::exception, virtual std::exception {};
struct FatalError : BaseError {};
struct AssertionError : FatalError {
AssertionError(std::string const& err) {
std::string error = "Assertion failed: `";
error += err;
error += "'.";
*this << err_msg(error);
}
}; | {
"domain": "codereview.stackexchange",
"id": 1187,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, exception-handling",
"url": null
} |
mechanical-engineering, design, cad, mechanisms, autodesk-inventor
Title: How can I *more accurately* measure the weight of a hanging bottle? I have a project where I need to be able to measure the volume of remaining liquid in a bottle based on weight. The idea is to use a Force Sensing Resistor:
(model FSR402-Short Tail: http://www.interlinkelectronics.com/FSR402short.php)
(source: interlinkelectronics.com)
My issue is that I can't seem to get an accurate consistent reading on the pressure exerted on the device when any kind of shifting (such as replacing / refilling the bottle, moving the device location, etc) would create a huge variance in the data points. It can vary from 20-150 points easily... which is too large of a margin since 150 points could equate to about 500ml of liquid.
The idea was to loosely hang the bottle & connector to the pressure plate (see second image) using two centrally located screws. The pressure plate would have a set of two rubber feet perpendicular to the screw placements to allow the pressure to be focused on the rubber footing with the FSR.
This produced two main issues that I did not foresee with the CAD design:
over-tightening/under-tightening of the screws
inaccurate or inconsistent readings of the pressure from very minor shifts or bumps.
What can I do to improve my design, shown below?
1) Top perspective view of pivot mechanism.
2) Side perspective view, translucent with descriptions. This is not an appropriate sensor for what you are trying to do. This type of flat pressure sensor is sensitive to localised pressures. For example, from a piece of dirt or an uneven surface. You should use a load cell which is designed for your application. For example, something like: Omega LCMKD-20N would be more suitable.
You can also consider this brief introduction to designing a weighing system. | {
"domain": "engineering.stackexchange",
"id": 1635,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, design, cad, mechanisms, autodesk-inventor",
"url": null
} |
complexity-classes, automata-theory, nfa
$B_i$ has states $q_1, \dots, q_{n+1}$, and $reject$. Initial state is $q_1$. When in state $q_j$ for $j \leq n$: if $x_j$ does not appear in $T_i$, then you go in state $q_{j+1}$ for every value of the next letter. If $x_j$ appears positively in $T_i$, then you go in $q_{j+1}$ only if you read letter $1$. If you read letter $0$, you go in state $reject$. If $x_j$ appears negatively in $T_i$, you do the same by swapping $0$ and $1$. $q_{n+1}$ is the only final state.
Now you build $A$, acyclic, which accepts every word of length $n$ (same construction as before for $T_i$ empty).
It is clear that $L(A) \subseteq L(B)$ iff $D$ is a tautology, which is
coNP-complete. | {
"domain": "cstheory.stackexchange",
"id": 4853,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-classes, automata-theory, nfa",
"url": null
} |
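For small instances the containment test in the reduction can be simulated directly by enumerating all length-$n$ words. A brute-force sketch in which each automaton is replaced by its acceptance predicate (names are illustrative):

```python
from itertools import product

def term_accepts(word, term):
    """B_i acceptance: term maps variable index -> required bit ('1' if the
    variable appears positively, '0' if negatively); unconstrained variables
    accept either letter."""
    return all(word[j] == bit for j, bit in term.items())

def dnf_tautology(n, terms):
    """L(A) subseteq L(B): every length-n word satisfies some term of D."""
    return all(
        any(term_accepts(w, t) for t in terms)
        for w in product("01", repeat=n)
    )

taut = dnf_tautology(2, [{0: "1"}, {0: "0"}])   # x1 OR not-x1: tautology
not_taut = dnf_tautology(2, [{0: "1"}])         # x1 alone: not a tautology
```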
star, size
Title: Relatable comparison of VY Canis Majoris to the Sun? I was trying to describe how vast the largest known star is to someone and felt I wasn't quite able to relate the scale difference. I know it's roughly 1500 times larger than the sun. Anyone know of a good analogy? After some playing around with wolfram alpha and google my best comparison has been
The sun compared to VY Canis Majoris is like a donut compared to the London Eye.
The London Eye is about 120m in diameter, this divided by 1500 is about 8cm which is roughly the diameter of a ring donut. | {
"domain": "astronomy.stackexchange",
"id": 2103,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "star, size",
"url": null
} |
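The arithmetic behind the analogy, as a one-line check:

```python
london_eye_diameter_m = 120.0   # approximate diameter of the London Eye
scale_factor = 1500.0           # rough VY CMa / Sun size ratio from the question

donut_scale_cm = london_eye_diameter_m / scale_factor * 100.0  # centimetres
```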
automata, computation-models, turing-completeness, stacks, linear-bounded-automata
a read-only input tape, surrounded by endmarkers,
a finite state control,
a read-write storage tape of length $S(n)$, where $n$ is the length of the input string, and
a stack
In "Hopcroft/Ullman (1979) Introduction to Automata Theory, Languages, and Computation (1st ed.) we find:
Theorem 14.1 The following are equivalent for $S(n)\geq\log n$.
$L$ is accepted by a deterministic $S(n)$-AuxPDA
$L$ is accepted by a nondeterministic $S(n)$-AuxPDA
$L$ is in $\operatorname{DTIME}(c^{S(n)})$ for some constant $c$. | {
"domain": "cs.stackexchange",
"id": 4615,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "automata, computation-models, turing-completeness, stacks, linear-bounded-automata",
"url": null
} |