evolution, theoretical-biology, evolutionary-game-theory
+ \epsilon [E(\mathbf{p},\mathbf{q}) - E(\mathbf{q},\mathbf{q})] &>0
\end{align*}
where $\epsilon$ is the frequency of individuals playing $\mathbf{q}$. Since $0<\epsilon<1$, if the above inequality holds, conditions 1 and 2 in the second definition must be true. The part I find less clear is how the second definition implies the first; more specifically, that if $\mathbf{p}$ is a strict Nash equilibrium (i.e., $E(\mathbf{p},\mathbf{p})> E(\mathbf{q},\mathbf{p})$), then $W(\mathbf{p}) > W(\mathbf{q})$. We need to show that if $\mathbf{p}$ is a strict Nash equilibrium, $E(\mathbf{p},\mathbf{p})> E(\mathbf{q},\mathbf{p})$, then $W(\mathbf{p}) > W(\mathbf{q})$ for all sufficiently small $\epsilon$. When $E(\mathbf{p},\mathbf{q}) \geq E(\mathbf{q},\mathbf{q})$, $\mathbf{p}$ clearly dominates $\mathbf{q}$. The less obvious case is when $E(\mathbf{p},\mathbf{q}) < E(\mathbf{q},\mathbf{q})$.
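Writing out the fitness difference under the usual mixed-population payoff (a sketch of the standard setup, consistent with the inequality quoted above):

```latex
\begin{align*}
W(\mathbf{p}) - W(\mathbf{q})
  &= (1-\epsilon)\,[E(\mathbf{p},\mathbf{p}) - E(\mathbf{q},\mathbf{p})]
   + \epsilon\,[E(\mathbf{p},\mathbf{q}) - E(\mathbf{q},\mathbf{q})].
\end{align*}
```

The first bracket is strictly positive and does not depend on $\epsilon$, so even when the second bracket is negative, choosing $\epsilon$ small enough keeps the whole right-hand side positive.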
Since $E(\mathbf{p},\mathbf{p}) - E(\mathbf{q},\mathbf{p})>0$, there should be a strictly positive number $k$ such that | {
"domain": "biology.stackexchange",
"id": 4874,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "evolution, theoretical-biology, evolutionary-game-theory",
"url": null
} |
calibration, kinect, openi-tracker
Original comments
Comment by JoeRomano on 2011-08-08:
In main.cpp of NiUserTracker you'll notice several callback functions that occur when a new user is found (line 87), or pose detected (line 106). If you insert the call to LoadCalibration into either of these functions it should work. I would recommend replacing line 110 from PoseDetected with this
Comment by qdocehf on 2011-08-08:
Also, is there a way to make it automatically load the data? From looking at the code, it seems as if I need to type in 'L' every time I want it to apply the saved data to users.
Comment by qdocehf on 2011-08-08:
Do you mean "UserCalibration_CalibrationEnd" or "UserCalibration_CalibrationComplete"? "UserCalibration_CalibrationStart" does not have this line.
Comment by JoeRomano on 2011-08-08:
g_UserGenerator.GetSkeletonCap().SaveCalibrationDataToFile(aUserIDs[i], XN_CALIBRATION_FILE_NAME);
Comment by daaango on 2011-08-08:
To get it to save calibration, all you have to do is paste "g_UserGenerator.GetSkeletonCap().IsCalibrated(aUserIDs[i])" into the XN_CALLBACK_TYPE UserCalibration_CalibrationStart function, specifically after "g_UserGenerator.GetSkeletonCap().StartTracking(nId)" which is around line 130.
Comment by qdocehf on 2011-08-08:
From daaango's update, it seems as though this feature is already in the latest unstable version of OpenNI. All I want to know is what commands I need to use to get this to work.
Comment by qdocehf on 2011-08-08: | {
"domain": "robotics.stackexchange",
"id": 6206,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "calibration, kinect, openi-tracker",
"url": null
} |
graph with the help of a matrix. I already have the methods to check for self-loops and cycles; I need a method to check specifically for connectivity in the adjacency matrix to prove it is a DAG. In an adjacency matrix, the value in the cell at row v and column w is the weight of the edge from vertex v to vertex w; that is, at the ith row and jth column we store the weight of the edge from vertex i to vertex j. The drawback is that we always need O(n^2) elements for storage, and hence we often use adjacency lists to represent graphs instead. One way to check connectivity is by traversal: run a traversal from some vertex and mark all visited vertices v as vis1[v] = true; for a directed graph, run a second traversal on the reversed graph and mark its visited vertices as vis2[v] = true. If any vertex v has vis1[v] = false or vis2[v] = false, then the graph is not connected. The time complexity is the same as depth-first search, i.e. O(V+E) if the graph is represented using adjacency lists. Another option is to look at the graph Laplacian D-A, where D is the diagonal matrix with the corresponding degrees of vertices on the diagonal: zero is always an eigenvalue of D-A (with the all-ones eigenvector), and the graph is connected exactly when the second-smallest eigenvalue is strictly greater than zero.
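A minimal traversal-based connectivity check over an adjacency matrix (undirected case; the function name and input convention are my own) can be sketched as:

```python
from collections import deque

def is_connected(adj):
    """BFS connectivity check for an undirected graph given as an
    adjacency matrix (list of lists; nonzero entry = edge)."""
    n = len(adj)
    if n == 0:
        return True
    seen = [False] * n
    seen[0] = True
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in range(n):
            if adj[v][w] and not seen[w]:
                seen[w] = True
                queue.append(w)
    return all(seen)
```

Scanning each row costs O(n) even for low-degree vertices, which is exactly the O(n^2) penalty of the matrix representation mentioned above.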
| {
"domain": "railwayphotos.net",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9504109798251322,
"lm_q1q2_score": 0.8307325968395357,
"lm_q2_score": 0.874077230244524,
"openwebmath_perplexity": 414.27426330965716,
"openwebmath_score": 0.5063043236732483,
"tags": null,
"url": "http://railwayphotos.net/3unjduhn/c7ab88-check-if-graph-is-connected-adjacency-matrix"
} |
ros, 3dcamera
Title: 3D Camera Selection
I am working on a robot which will pick items from Point A and deliver them to Point B.
For this purpose I planned to use a 3D camera and plan the strategy as follows.
Selection of a 3D camera [stereo, ToF, or structured light; please suggest which one would be the suitable solution]
Generation of dense 3D point cloud data [by fusion of laser scanner data and camera data]
Filtering and feature extraction from the data, obstacle avoidance, and path planning from Point A to Point B.
If there is any tutorial available which matches with my goals it would be great.
Which steps are missing in my strategy ?
Thank you in advance
Originally posted by Jackie_16 on ROS Answers with karma: 7 on 2016-04-19
Post score: 0
Original comments
Comment by jarvisschultz on 2016-04-19:
I'd recommend editing your question to provide a more descriptive title. People aren't likely to click on a question with a generic title.
Comment by Jackie_16 on 2016-04-19:
ok thanks :)
Comment by NEngelhard on 2016-04-19:
How much money do you want to spend?
Comment by Jackie_16 on 2016-04-19:
@NEngelhard Let's say, for the camera part, 300-400$?
The navigation stack (navstack) would be a good place to start. The navstack provides functionality for path planning, obstacle avoidance, localization, mapping, etc. The navstack is designed to run with a variety of sensors. People have used depth cameras (such as a Kinect) and laser scanners of all varieties successfully with the navstack.
If you are looking to create your own solution, then you are definitely on the right track. Be aware that your step 3 will take quite a lot of effort to get everything working well on a real robot. My advice would be to start with an already-available solution (e.g. the navstack), and then figure out the parts of that solution that you aren't happy with or the parts that you're interested in concentrating your efforts on. Then write your implementations to be compatible with the solution you started with.
The navstack has many great tutorials to get you started. | {
"domain": "robotics.stackexchange",
"id": 24400,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, 3dcamera",
"url": null
} |
motor
Title: DC motor - max current I'm looking at the data sheet for a DC motor that states:
Current consumption at nominal torque (mA): 380
I have a power supply that can deliver 500 mA. Can I take the above statement to indicate that the motor will never draw more than 380 mA, or does it mean that it usually uses 380 mA and that I should probably choose a different power supply? Nominal torque is usually specified at some nominal rotation speed, which should be mentioned too.
Maximal torque is usually reached at zero speed (or even higher if you force the motor to rotate backwards), and the maximal current is usually a lot higher than the nominal current at nominal speed and torque.
So it says that if your motor is running at its nominal point, it will draw 380 mA. You can safely assume that on many occasions, even in normal usage, the current will temporarily be a lot higher.
Power supplies usually state how much current they can provide for a long time (nominal current); good power supplies also state maximal current and how long it can provide it (might also be called surge current).
I personally would consider 500 mA highly underrated for full usage of this motor, but if the power supply is over-current resistant (be it by detection and sophisticated construction, or simply a "weak" source with high internal resistance), then it may work well enough; the motor would just sometimes seem "weak", as the voltage would need to drop to stay within the current the supply can provide. And low voltage means low current, which means low torque. In some applications that does not matter and the motor just moves slower; in other applications it may be a critical flaw. It is you who decides what is acceptable and what is not. | {
"domain": "robotics.stackexchange",
"id": 1356,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "motor",
"url": null
} |
biochemistry, metabolism
Glucokinase has a key role in the regulation of blood glucose concentration by the liver. To paraphrase (and edit slightly) Cornish-Bowden and Cardenas: "Mammals have two types of enzymes to catalyse the formation of G6P from glucose. Glucokinase [hexokinase D in their nomenclature] differs significantly from the other type [generally just referred to as hexokinase]. Its abundance varies markedly with hormonal status; it requires much higher glucose concentrations (about 10 mM) for half saturation, and is insensitive to physiological concentrations of G6P. It is thus well adapted to respond to variations in blood-glucose concentrations."
To clarify, 10 mM is in the region of the blood glucose concentration (ca. 5 mM), so the glucokinase reaction will be affected by the relative concentrations of blood glucose and intracellular G6P in a standard mass-action manner, which will determine whether the liver takes up glucose or releases it into the blood. If glucokinase were inhibited allosterically by G6P like hexokinase, this couldn't work.
Now let's turn to G6P and hexokinase in skeletal muscle. Hexokinase has a much higher affinity for glucose than glucokinase and will convert it efficiently to G6P. As long as G6P is then converted to G1P for glycogen synthesis, G6P will not build up. However, when glycogen synthesis stops because the capacity of the muscle to store glycogen is reached, the concentration of G6P will increase and turn hexokinase off. This, in turn, will cause a build-up of intracellular glucose and prevent glucose transport into the muscle. This makes sense, because with full glycogen stores, and in the absence of need for contraction, glucose will not be metabolized by the muscle; it will be left in the blood for the liver to handle.
Thus, the key point about G6P inhibition of hexokinase, and the apparent source of confusion in the question, is that it only occurs at concentrations that are reached when the G6P is not being metabolized within the cell.
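The kinetic contrast can be illustrated with Michaelis-Menten saturation curves. The Km values below are order-of-magnitude assumptions (about 10 mM for glucokinase, per the quote above; about 0.1 mM for hexokinase, an illustrative textbook figure not taken from the text):

```python
def fractional_saturation(s_mM, km_mM):
    # Michaelis-Menten: v / Vmax = [S] / (Km + [S])
    return s_mM / (km_mM + s_mM)

GK_KM = 10.0   # glucokinase half-saturation, mM (from the quote)
HK_KM = 0.1    # hexokinase half-saturation, mM (assumed, illustrative)

# Response to blood glucose rising from 5 mM to 10 mM:
gk_change = fractional_saturation(10, GK_KM) - fractional_saturation(5, GK_KM)
hk_change = fractional_saturation(10, HK_KM) - fractional_saturation(5, HK_KM)
# glucokinase rate changes substantially; hexokinase is already saturated
```

This is the numerical content of "well adapted to respond to variations in blood-glucose concentrations": glucokinase operates near its Km, so its rate tracks blood glucose, while hexokinase barely changes.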
Reference and Footnote | {
"domain": "biology.stackexchange",
"id": 5239,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "biochemistry, metabolism",
"url": null
} |
quantum-mechanics, atomic-physics, orbitals, dipole
$$
[M_z,M_+]=+\hbar M_+,\ \ \ \ [M_z,M_-]=-\hbar M_-.
$$
So if we wanted to investigate the state $M_+|\ell,m_z\rangle$, note that the above commutator implies $M_zM_+=M_+M_z+\hbar M_+$ so
$$
M_z(M_+|\ell,m_z\rangle)=(M_+M_z+\hbar M_+)|\ell,m_z\rangle=\hbar(m_z+1)(M_+|\ell,m_z\rangle).
$$
So the result of applying $M_+$ to the state is a state with one higher $m_z$ value (it's the old value plus one). If we did the same for $M_-$, we would find that the result of applying $M_-$ to the state is a state with $m_z$ value decreased by one.
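As a numerical sanity check (my own construction, with $\hbar=1$ and $\ell=1$), one can build the matrices for $M_z$ and $M_+$ in the $|\ell,m_z\rangle$ basis and verify the commutation relation used above:

```python
import math

def matmul(A, B):
    """Plain matrix product for small square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

l = 1
ms = [1, 0, -1]  # basis states |l, m> ordered by decreasing m_z
Mz = [[(ms[i] if i == j else 0.0) for j in range(3)] for i in range(3)]

# <l, m+1| M_+ |l, m> = sqrt(l(l+1) - m(m+1))   (hbar = 1)
Mp = [[0.0] * 3 for _ in range(3)]
for j, m in enumerate(ms):
    if m < l:
        Mp[j - 1][j] = math.sqrt(l * (l + 1) - m * (m + 1))

# [M_z, M_+] should equal +M_+ (i.e. +hbar * M_+ with hbar = 1)
comm = [[matmul(Mz, Mp)[i][j] - matmul(Mp, Mz)[i][j] for j in range(3)]
        for i in range(3)]
```

Every entry of `comm` matches the corresponding entry of `Mp`, which is the matrix form of $[M_z, M_+] = +\hbar M_+$.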
So in the end, the expressions for $M_+$ and $M_-$ may look a little arbitrary, but there's a good reason why they look that way, and it all comes from the commutation relations between the components of the angular momentum operator that I wrote down at the beginning. | {
"domain": "physics.stackexchange",
"id": 73806,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, atomic-physics, orbitals, dipole",
"url": null
} |
formal-languages, pushdown-automata
Title: Popping a Symbol on a PDA when Input and Stack are Irrelevant Say I have a PDA with an input alphabet {0,1} and a stack alphabet {P,Q,\$}. In the PDA I don't really care what the inputs are at the end and I just want to clear the stack back down to the special character. I could write out the transitions like:
1,P->e
1,Q->e
0,P->e
0,Q->e
e,P->e
e,Q->e
1,\$->\$
0,\$->\$
but that's a bit much. Is it in convention to instead just write:
{0,1},{P,Q}->e
e,{P,Q}->e
{0,1},\$->\$
Note that I'm not trying to pop more than a symbol at a time, just that I don't care about the inputs or stack at this point. With this small alphabet and stack it's not horrible... but if the alphabets were larger that would be a lot of writing for every individual case. I just want to loop on this state to get to a point where the input is e and the stack is \$ so that I can transition to the final accept state with e,\$->e
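For what it's worth, the shorthand can always be expanded mechanically into individual transitions; a sketch (the tuple convention and names are my own):

```python
from itertools import product

input_syms = ['0', '1']
stack_syms = ['P', 'Q']

# {0,1},{P,Q} -> e : pop any non-$ stack symbol on any input symbol
transitions = [(a, s, 'e') for a, s in product(input_syms, stack_syms)]
# e,{P,Q} -> e : pop without consuming input
transitions += [('e', s, 'e') for s in stack_syms]
# {0,1},$ -> $ : at the bottom marker, consume remaining input
transitions += [(a, '$', '$') for a in input_syms]
```

The three shorthand rules expand to exactly the eight explicit transitions written out above, which is why the abbreviation is harmless as long as its meaning is stated.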
I only ask because I have not seen any textbook or material write transitions as such. So this is either a yes or no answer, or if there's some other convention for how to handle larger alphabets, please let me know what it is. This is technically allowed (it's just a shorthand; I believe it's understandable, but if you aren't sure, just explain what it represents).
However, one important thing to note is that $\{0,1\}\times \{P,Q\}\rightarrow \epsilon$ is a totally different transition than $\{\epsilon\}\times\{P,Q\}\rightarrow \epsilon$.
The first one would "eat up" a part of the input, for every symbol you delete from the stack (which also means you will delete a bounded number of elements from the stack) | {
"domain": "cs.stackexchange",
"id": 18545,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "formal-languages, pushdown-automata",
"url": null
} |
algorithms, sorting
Title: Given $k$ sorted lists, $O(n \log k)$ complexity, Mergesort rather than Heapsort I was convinced that my idea for a solution to sort $k$ sorted lists into one list would work with a 'variation' on MergeSort. I was told this would not work and had to use Heapsort, but didn't get any explanation or intuition behind it. (I believe the assumption is that each of the $k$ lists had size $\frac{n}{k}$)
Essentially, my intuition behind using Mergesort was that we have $n$ total elements, but all the steps below height $\log k$ in our recursion tree had already been solved. So at height $\log k$ each list is of size $\frac{n}{k}$, so we perform $\frac{n}{k}$ comparisons and $2\frac{n}{k}$ inserts (to form a new list from the two $\frac{n}{k}$ lists) $k$ times in total, which seems to be on the scale of $O(n)$.
We are now one step up on the recursion tree with $\frac{k}{2}$ lists and we will eventually perform $O(n)$ operations $\log k$ times.
Can anyone provide insight as to why this intuition is wrong? Or if right, how I should go about formally proving it? I would structure a formal proof of the runtime of your strategy as follows (This is, in structure, very similar to what you already wrote, but does not refer to omitted steps of an imaginary run of Mergesort.):
Arrange the $k$ lists as the leaves of an (almost) complete binary tree. Each internal node of the tree represents the merging of the two lists in its children. Clearly this tree has a height of $\lceil\log k\rceil$.
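This tree-of-merges strategy can be sketched as runnable code (helper names are my own); each `while` iteration is one level of the tree:

```python
def merge(a, b):
    """Standard two-way merge of sorted lists, O(len(a) + len(b))."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def merge_k(lists):
    """Tournament merge: ceil(log2 k) rounds, O(n) work per round."""
    lists = [lst for lst in lists if lst] or [[]]
    while len(lists) > 1:
        lists = [merge(lists[i], lists[i + 1]) if i + 1 < len(lists)
                 else lists[i]
                 for i in range(0, len(lists), 2)]
    return lists[0]
```

Each round halves the number of lists while touching every element at most once, giving the $O(n \log k)$ total.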
In this structure, the merges on each level of the tree combined will process each element exactly once.[1] Thus, they take a combined time in $\cal O(n)$. Adding up over all levels of the tree gives a runtime in $\cal O(n\log k)$. | {
"domain": "cs.stackexchange",
"id": 12802,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, sorting",
"url": null
} |
special-relativity, relativity, inertial-frames
By the group composition continuity postulate, this is a continuous function of a real variable, with $\mathrm{sqr}(0)=0$ and, by the monotonicity axiom in that postulate we see that $\mathrm{sqr}(1)>1$. By the intermediate value theorem, therefore, there is a $\eta_{\frac{1}{2}}\in[0,\,1]$ such that $\sigma(\eta_{\frac{1}{2}})\,\sigma(\eta_{\frac{1}{2}}) = \sigma(1)$. That is, $\sigma(1)$ has a square root in the path segment $\sigma([0,\,1])$. But now, by dint of the convergence of the logarithm series, every matrix within the ball defined by $\mathcal{V}=\{\gamma|\,\|\gamma-\mathrm{id}\|<1\}$ has a unique square root inside that ball (Although it may very well have other square roots outside the ball) defined by $\sqrt{\sigma(\eta)} = \exp\left(\frac{1}{2}\log(\sigma(\eta))\right)$ because the logarithm is defined and maps the ball $\mathcal{V}$ into a neighborhood $\mathcal{U}=\log(\mathcal{V})=\{K|\,\exp(K)\in\mathcal{V}\}$ and both the matrix exponential, defined by the universally convergent matrix exponential matrix Taylor series and logarithm are bijective maps between $\mathcal{U}$ and $\mathcal{V}$. Therefore, if there were two square roots $\varsigma_1,\,\varsigma_2$ inside the ball $\mathcal{V}$, then both have logarithms so their squares are $\sigma(1)=\exp(2\,\log\varsigma_1) = | {
"domain": "physics.stackexchange",
"id": 31559,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, relativity, inertial-frames",
"url": null
} |
electromagnetic-radiation, photons, laser, coherence
Title: Why is the photon emitted in the same direction as incoming radiation in Laser? When an atom “lases” it always gives up its energy in the same direction and phase as the incoming light. Why does this happen? How can this be explained?
How does the photon generated because of stimulated emission, know which direction to take?
What are the factors leading to this? The word "stimulated" means that the emission of the photon is "encouraged" by the existence of photons in the same state as the state where the new photon may be added. The "same state" is one that has the same frequency, the same polarization, and the same direction of motion. Such a one-photon state may be described by the wave vector and the polarization vector, e.g. $|\vec k,\lambda\rangle$.
The physical reason why photons like to be emitted in the very same state as other photons is that they are bosons obeying Bose-Einstein statistics. The probability amplitude for a new, $(N+1)$-st photon to be added to a state which already has $N$ photons in it is proportional to the matrix element of the raising operator
$$ \langle N+1| a^\dagger|N\rangle = \sqrt{N+1}$$
of the harmonic oscillator between the $N$ times and $(N+1)$ times excited levels. Because the probability amplitude scales like $\sqrt{N+1}$, the probability for the photon to be emitted into the state goes like the squared amplitude i.e. as $N+1$. Recall that $N$ is the number of photons that were already in that state.
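The $\sqrt{N+1}$ scaling is easy to exhibit explicitly; below is a small construction of $a^\dagger$ in a truncated Fock basis (my own illustrative code, not from the answer):

```python
import math

def raising_operator(dim):
    """a-dagger in the truncated Fock basis |0>, ..., |dim-1>:
    the only nonzero elements are <n+1| a† |n> = sqrt(n+1)."""
    A = [[0.0] * dim for _ in range(dim)]
    for n in range(dim - 1):
        A[n + 1][n] = math.sqrt(n + 1)
    return A

A = raising_operator(6)
# emission probability into a mode holding N photons scales as
# |<N+1| a† |N>|^2 = N + 1: one part spontaneous, N parts stimulated
probs = [A[n + 1][n] ** 2 for n in range(5)]
```

The squared matrix elements come out as 1, 2, 3, ..., which is exactly the $N+1$ enhancement discussed in the text.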
This coefficient $N+1$ may be split into $1$ plus $N$. The term $1$ describes the probability of spontaneous emission – which occurs even if no other photons were present in the state to start with – while the term $N$ is the stimulated emission, whose probability scales with the number of photons that are already present. | {
"domain": "physics.stackexchange",
"id": 49765,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetic-radiation, photons, laser, coherence",
"url": null
} |
-
Where is the terminology "linear application" from? I've never seen it before. – Clive Newstead Sep 25 '12 at 10:26
Maybe it does not exist in English textbooks, haha! I'm sorry, I translated from French... Here's the exact definition of what I call a linear application in my answer: Let $V$ and $W$ be two finite dimensional vector spaces over a field $K$; a linear application $f$ is a group homomorphism from $(V,+)$ to $(W,+)$ that has the following property: $$\forall v \in V, \forall \lambda \in K, f(\lambda v)=\lambda f(v).$$ – mak Sep 25 '12 at 10:51
I guessed you were French ;) In English they're normally called linear maps, linear transformations or linear operators. – Clive Newstead Sep 25 '12 at 11:00
true that haha ! :) – mak Sep 25 '12 at 11:13
I would simply say 'geometry' can be a good motivation. Mention rotations, reflections, similarity transformations, projections onto a subspace... Roughly speaking, they are the geometrical transformations that keep the origin fixed and take lines to lines. I understood matrices much better when I could imagine some geometry behind them.
- | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9728307676766119,
"lm_q1q2_score": 0.8106838077946582,
"lm_q2_score": 0.8333245973817158,
"openwebmath_perplexity": 319.79660816621924,
"openwebmath_score": 0.892182469367981,
"tags": null,
"url": "http://math.stackexchange.com/questions/202107/why-are-linear-transformations-important"
} |
ros, arduino, driver, rosserial
Title: DC Motor with Rosserial help
I am trying to make a controllable Arduino robot. Is there anyone out there who knows how to make a motor turn without a motor shield (is a motor shield really necessary)? Also, I am getting encoders for the DFRobot 4WD robot
DFRobot 4WD Arduino-Compatible Platform w / Encoders
and wanted to write a ROS driver for it. I have an xbee radio I'd like to put to use with it too. I have never written a driver, so, for those with experience, help would be great! And tips, tutorials, or general advice welcome!
thanks,
-Hunter A.
Originally posted by allenh1 on ROS Answers with karma: 3055 on 2012-03-25
Post score: 0
Go to my site:
https://www.sites.google.com/site/shridharshah/projects/ros-arduino-xbee
I have done exactly what you need. I have provided full documentation. Let me know if any questions.
Originally posted by sks with karma: 36 on 2012-07-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by hamzh.albar@gmail.com on 2018-07-19:
Hello @sks , I have questions for you, let me know if you are willing to answer them.
Thank you | {
"domain": "robotics.stackexchange",
"id": 8710,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, arduino, driver, rosserial",
"url": null
} |
php
echo '<td>'.$getallCampaignevents->campaignOpens.'</td>';
echo '<td>'.$getallCampaignevents->campaignClicks.'</td>';
echo '<td>'.$getallCampaignevents->campaignBounces.'</td>';
echo '<td>'.$getallCampaignevents->campaignForwards.'</td>';
echo '<td>'.$getallCampaignevents->campaignOptOuts.'</td>';
echo '<td>'.$getallCampaignevents->campaignSpamReports.'</td>';
echo '</tr>';
// var_dump($getallCampaignevents);
}
}
?>
</table> | {
"domain": "codereview.stackexchange",
"id": 5261,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php",
"url": null
} |
homework-and-exercises, classical-mechanics, lagrangian-formalism, potential-energy, equilibrium
Thank you for your time. Notice that $A$ is a linear operator on $\mathbb R^n$. Suppose that $A$ is singular, namely $\det A= 0$, then the kernel of $A$ is nontrivial. In other words, there exists some nonzero $v\in\mathbb R^n$ for which $Av=0$. It follows that the kinetic energy vanishes for this nonzero $v$.
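A two-dimensional toy instance (my own, purely illustrative) makes the pathology concrete:

```python
# Kinetic-energy quadratic form T = (1/2) v^T A v with a singular A
A = [[1.0, 0.0],
     [0.0, 0.0]]   # det A = 0; kernel spanned by (0, 1)
v = [0.0, 1.0]     # nonzero generalized velocity lying in the kernel

T = 0.5 * sum(v[i] * A[i][j] * v[j] for i in range(2) for j in range(2))
# T comes out exactly zero although v is nonzero
```

Here the second generalized coordinate can move at any speed while contributing no kinetic energy, which is the physically pathological situation described above.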
There's nothing "wrong" with this mathematically speaking, but it's physically pathological because the kinetic energy represents energy due to the magnitude of the motion of the object, and we therefore expect that any state of the object for which $\dot q_i\neq 0$ for some $i$ should be assigned a nonzero kinetic energy. | {
"domain": "physics.stackexchange",
"id": 10811,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, potential-energy, equilibrium",
"url": null
} |
fluid-dynamics, aerodynamics
Title: Boundary Layer in aerofoil I want to know how the top and bottom boundary layer interact at the trailing edge of an aerofoil (zero angle of attack) and what happens to the boundary layer after a small distance from the trailing edge. Does the region at the back of aerofoil have lesser velocity due to the boundary layer? How is it carried by freestream? In every boundary layer (except for exotic hypersonic cases), the speed at the wall is zero. At the trailing edge, the upper and lower layers meet, and if you imagine a plane which extends from the trailing edge backwards and follows the streamlines, the speed at the trailing edge is equally zero. The more you now move away from the trailing edge along this plane, the more the speed increases, as now the inhibiting effect of wall shear is missing, and only the shear of the layers above and below the plane acts upon the air in this plane. If you measure the speed orthogonally to this plane, you will see a speed drop near the plane which gets wider and more shallow the more you move away from the trailing edge.
This speed drop can be measured and gives a very precise value for airfoil drag. See the picture below for a rake of pitot tubes which is used for this kind of measurement. | {
"domain": "physics.stackexchange",
"id": 20603,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, aerodynamics",
"url": null
} |
algorithms, graphs, trees, counting
Title: How many different trees can we form from given graph? I'm trying to practice some combinatorics and I faced this problem, let's say we have given graph with N nodes and M edges. $$N\leq500, M \leq N\cdot(N - 1)/2$$
In this graph I want to count the subsets of edges such that each subset has exactly $N-1$ edges and the edges forming the subset form a connected graph.
Example
Let's say we have the following graph
We can count a total of 3 subsets: {1,2,3,5}, {1,2,3,4}, {1,2,4,5}. Please note that the numbers from 1 to 5 are marking the edges, not the nodes.
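Counts like this can be checked with the Matrix Tree Theorem (also mentioned in the answer below the question): the number of spanning trees equals any cofactor of the Laplacian D-A. A sketch, verified on the complete graph on 4 vertices, which has $4^{4-2}=16$ spanning trees by Cayley's formula:

```python
def determinant(M):
    """Laplace-expansion determinant (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               determinant([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def spanning_tree_count(adj):
    """Matrix Tree Theorem: delete row/column 0 of the Laplacian
    D - A and take the determinant of what remains."""
    n = len(adj)
    lap = [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
           for i in range(n)]
    minor = [row[1:] for row in lap[1:]]
    return determinant(minor)

# complete graph K4: every pair of distinct vertices is adjacent
k4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
```

For $N \le 500$ one would compute the determinant with Gaussian elimination instead of Laplace expansion, but the cofactor idea is the same.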
What I think
Let's say our graph is a complete graph; that is the worst case. If we have 500 nodes and 499 edges incident to each node, I think there could be on the order of $500^{500}$ possible combinations, which is a huge number. There is a simple algebraic algorithm based on the Matrix Tree Theorem. Just form the Laplacian matrix of the graph and compute $N^{-1}$ times the product of its non-zero eigenvalues. | {
"domain": "cs.stackexchange",
"id": 9458,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs, trees, counting",
"url": null
} |
earth, relative-motion, coriolis-effect
For the kind of velocity that an airship can reach, the first term, $2\Omega v_r$, is much larger than the second term, $v_r^2/R$.
So:
Unless the crew of the airship takes countermeasures an airship moving along a local latitude line, with a velocity with respect to the Earth, will veer away from that course. The tendency to veer away is proportional to the velocity with respect to the Earth. The magnitude of the tendency to accelerate sideways will be $2\Omega v_r$. The direction of the veering is as follows: an airship moving west-to-east will swing wide, veering to the outside of the local latitude line. An airship moving east-to-west will veer to the inside of the local latitude line.
Note that for this calculation it isn't necessary to know the mass of the airship. The required centripetal acceleration is provided by the Earth's gravity, and inertial mass and gravitational mass are equivalent.
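As a rough numeric check of the claim that the first term dominates (the airship speed below is my assumption, not from the text):

```python
OMEGA = 7.292e-5      # Earth's rotation rate, rad/s
R_EARTH = 6.371e6     # mean Earth radius, m
v = 30.0              # assumed airship ground speed, m/s (~108 km/h)

coriolis_term = 2 * OMEGA * v        # ~4.4e-3 m/s^2
curvature_term = v ** 2 / R_EARTH    # ~1.4e-4 m/s^2
ratio = coriolis_term / curvature_term
# at this speed the 2*Omega*v term is larger by a factor of about 30
```

Note that the mass of the airship cancels out of both terms, consistent with the remark above.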
Also, at any latitude the direction of the tendency to deflect is parallel to the plane of the equator. But at higher latitudes the Earth surface is at an angle to the plane of the equator. A local calculation must take that into account. | {
"domain": "physics.stackexchange",
"id": 76856,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "earth, relative-motion, coriolis-effect",
"url": null
} |
ros, rosjava-core, rosjava
And since my answer solved your problem, please mark it as correct. Thank you.
These rules are there to indicate to other people looking for an answer, or willing to provide an answer, that this problem is solved (and what the solution was), so please abide by them.
Comment by safzam on 2012-10-31:
Hi, yes, I have marked it as correct by clicking on the check symbol under the answer number on the upper left side. I did not know this before. Thanks, it's done now :-). | {
"domain": "robotics.stackexchange",
"id": 11541,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rosjava-core, rosjava",
"url": null
} |
quantum-mechanics, operators, complex-numbers
Given a linear operator $A : D(A) \to {\cal H}$, where the domain $D(A)\subset {\cal H}$ is a subspace, and a conjugation $K_B : {\cal H} \to {\cal H}$ (depending on the basis $B$), it is possible to define another linear operator $A^{*_{K_B}}$ that we may call the complex conjugated operator of $A$ with respect to $K_B$.
$$A^{*_{K_B}} := K_B A K_B\tag{*}$$
provided $K_B (D(A)) \subset D(A)$. I stress that $K_B$ appears twice in the right-hand side of the definition above. This is because we want $A^{*_{K_B}}$ to be linear, as $A$ is:
A definition like this
$$A^{*_{K_B}} := K_B A\quad \mbox{(wrong),}$$
would instead produce an antilinear operator:
$$(K_BA)(au) = K_B(aA(u))= a^* K_BAu\:.$$
Also observe that, in the absence of issues with the domains of the operators, (*) implies $$(AB)^{*_{K_B}} = A^{*_{K_B}}B^{*_{K_B}}\tag{4}\:.$$
Consider for instance the momentum operator restricted to the subspace of Schwartz' functions ${\cal S}(\mathbb R)$. As is well known,
$$P\psi = -i \frac{d}{dx} \psi \:,\quad \psi \in {\cal S}(\mathbb R)\:.$$
It is immediately proved that, referring to conjugations $K$ and $K'$ discussed in 1,
$$P^{*_K} = +i \frac{d}{dx} = -P\:,\tag{5}$$
whereas | {
"domain": "physics.stackexchange",
"id": 46002,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, complex-numbers",
"url": null
} |
# Project Euler 41 Solution: Pandigital prime
Problem 41
We shall say that an n-digit number is pandigital if it makes use of all the digits 1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
## Solution
I think there are two straightforward approaches. The first is naively generating primes from 2 to 987654321 and finding the largest pandigital one among them. The second approach generates all permutations of the digits 1 to $$k$$, where $$k$$ is between 1 and 9, and tests them for primality.
But we can do a little better by improving the bounds. The order of the digits will be re-arranged by the permutation, but the sum will always be the same. We know that if the digit sum is divisible by three, the whole number is divisible by three and as such not a prime number. Starting with $$1+2+3+4+5+6+7+8+9=45$$, which is divisible by three. Doing the same without the 9 leads to $$1+2+3+4+5+6+7+8=36$$ - which is also divisible by three. We can check the next sum, which is $$1+2+3+4+5+6+7=28$$. Okay, the largest pandigital number has at most 7 digits and as such is 7654321. Generating the primes to be checked with the first approach would require only 0.8% of the previous search space. But we can do a bit better. We first check all lengths we need to take into account, even if chances are high that the resulting number will be a 7 digit number:
s = 0
for n in range(1, 7 + 1):
s+= n
if s % 3 != 0:
print(n) | {
"domain": "xarg.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9728307708274402,
"lm_q1q2_score": 0.8020619313547954,
"lm_q2_score": 0.8244619263765706,
"openwebmath_perplexity": 404.6355936540136,
"openwebmath_score": 0.6460615992546082,
"tags": null,
"url": "https://www.xarg.org/puzzle/project-euler/problem-41/"
} |
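Putting the 7-digit bound above to work end to end, here is a sketch (my own helper names; simple trial division is fast enough at this size) that walks the pandigitals in descending order and stops at the first prime:

```python
from itertools import permutations

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def largest_pandigital_prime():
    # only 4- and 7-digit pandigitals can be prime (digit sums 10 and 28)
    for k in (7, 4):
        digits = "".join(str(d) for d in range(k, 0, -1))
        # permutations of a descending sequence come out in descending order,
        # so the first prime found is the largest
        for p in permutations(digits):
            n = int("".join(p))
            if is_prime(n):
                return n
    return None

print(largest_pandigital_prime())  # 7652413
```

Because the permutations of a descending sequence are emitted in descending order, only a small number of candidates are tested before the answer appears.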
inorganic-chemistry, acid-base
Title: Is it correct to say that H2SO4 is an acid in this reaction? Is it correct to say that $\ce{H2SO_4}$ acts as an acid in the following reaction?
$\ce{2Ag + H2SO_4} \rightarrow \ce{Ag_2SO_4 + 2H_2O + SO_2}$
I know it acts as an oxidizing agent, but is it correct to say it shows acidic properties? Do you get the same reaction with sodium sulfate?
I do not think so, at least under "normal" conditions.
When the sulfuric acid acts as an oxidizing agent giving off $\ce{SO_2}$, the excess oxygen comes off as oxide ions, which must somehow be put into a more stable form. The protons from the sulfuric acid do that by turning them into water.
Getting the reaction to go requires the sulfuric acid to act as both an acid and an oxidizing agent. | {
"domain": "chemistry.stackexchange",
"id": 6158,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, acid-base",
"url": null
} |
java, beginner, simulation, dice
public class DiceSimulation {
private static Random rollDice = new Random();
public static void main(String[] args) {
final int ROLLCOUNT = 10000;
final int SIDES = 6;
int counter, die1, die2;
//an array with elements to keep count
int [] doubleCounts = new int [SIDES];
// display welcome message. no other purpose but trial
welcome ();
/**
* Set counter to start from 1 and go till desired constant number
* Rolling two separate dices and storing values in dice1 & dice 2 respectively
*/
for (counter=1; counter<=ROLLCOUNT;counter++){
die1=roll(SIDES);
die2=roll(SIDES);
if (die1==die2){
doubleCounts[die1-1]++;
}
}
// Display results totals of paired rolls
for (int idx=0; idx<doubleCounts.length; idx++){
System.out.format(" You rolled set of %d %d times\n",(idx+1), doubleCounts[idx]);
}
}
private static int roll(int sides){
return rollDice.nextInt(sides) + 1;
}
public static void welcome () {
System.out.println("welcome to dice world!");
}
} | {
"domain": "codereview.stackexchange",
"id": 20570,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, simulation, dice",
"url": null
} |
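For comparison with the Java program above, here is a compact Python sketch of the same experiment (the names are mine, not from the original); with 10,000 rolls, each double face should appear about 10000/36, roughly 278 times:

```python
import random

ROLL_COUNT = 10_000
SIDES = 6

rng = random.Random(42)          # seeded so the run is reproducible
double_counts = [0] * SIDES
for _ in range(ROLL_COUNT):
    die1 = rng.randint(1, SIDES)
    die2 = rng.randint(1, SIDES)
    if die1 == die2:
        double_counts[die1 - 1] += 1

for face, count in enumerate(double_counts, start=1):
    print(f"You rolled a set of {face} {count} times")
print("total doubles:", sum(double_counts))  # roughly 10000/6, i.e. about 1667
```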
1. Some functions have a two-sided inverse map, another function that is the inverse of the first, both from the left and from the right. For instance, the map given by $\vec{v} \mapsto 2 \cdot \vec{v}$ has the two-sided inverse $\vec{v} \mapsto (1/2) \cdot \vec{v}$. In this subsection we will focus on two-sided inverses. If $$MA = I_n$$, then $$M$$ is called a left inverse of $$A$$. Actually, trying to prove uniqueness of left inverses leads to dramatic failure! Proof. In the proof that a matrix is invertible if and only if it is full-rank, we have shown that the inverse can be constructed column by column, by finding the vectors that solve the corresponding systems, that is, by writing the vectors of the canonical basis as linear combinations of the columns of the matrix. Matrix inverses. Recall the definition: a square matrix A is invertible (or nonsingular) if there exists a matrix B such that AB = I and BA = I. The reason why we have to define the left inverse and the right inverse is because matrix multiplication is not necessarily commutative. Then a matrix $A^-$: n × m is said to be a generalized inverse of A if $AA^-A = A$ holds (see Rao | {
"domain": "wolterskluwerlb.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.965899577232538,
"lm_q1q2_score": 0.8776336478911665,
"lm_q2_score": 0.9086179025005187,
"openwebmath_perplexity": 987.8661969525623,
"openwebmath_score": 0.8560187816619873,
"tags": null,
"url": "http://www.wolterskluwerlb.com/andy-cutting-yhldkzq/unique-left-inverse-62f161"
} |
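To see the failure of uniqueness concretely, here is a small hand-worked sketch (my own example, not from the text above): for the 3x2 matrix $A$ below, $M_1 = (A^\top A)^{-1}A^\top$ is a left inverse, and adding a row vector $v$ with $v^\top A = 0$ to a row of $M_1$ gives a second, different left inverse. Exact fractions avoid float noise:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 0], [0, 1], [1, 1]]                 # 3x2, full column rank
M1 = [[F(2, 3), F(-1, 3), F(1, 3)],          # (A^T A)^{-1} A^T, worked by hand
      [F(-1, 3), F(2, 3), F(1, 3)]]
I2 = [[1, 0], [0, 1]]
v = [1, 1, -1]                               # v^T A = 0, so v changes nothing in M A
M2 = [[M1[0][j] + v[j] for j in range(3)], list(M1[1])]
print(matmul(M1, A) == I2, matmul(M2, A) == I2, M1 == M2)  # True True False
```

Both M1 and M2 satisfy MA = I_2, yet they differ: left inverses of a non-square matrix are not unique.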
$f(x) = \frac{x^2}{x}$
The function is not defined at x = 0, yet clearly everywhere else it is equal to just x, so the Taylor series is x.
A less trivial example is:
$f(x) = \frac{\sin(x)}{x}$
The function is not defined at x=0. However, the taylor series can be obtained by using the series for sin(x), and dividing everything by x.
$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} ...$
$\frac{\sin(x)}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} ...$
Your example is fundamentally the same. Can you get the series for cos(x) and then perform the necessary steps?
Thanks for that expository answer. Please note that I am not having trouble finding the Taylor series, but rather understanding the rationale behind doing it in spite of the function not being defined at x = 0.
Your answer was illuminating indeed, but I wonder why the discontinuity would be a removable one. We are certainly not working with limits (or are we?).
9. Re: Taylor series: (cos(2x)-1)/x²
The original example is the same in principle as finding the taylor series of $f(x) = \frac{x^2}{x}$. The only difference is that it is masked/obfuscated by trigonometric functions. If you understand why this simpler function x^2/x has a taylor series even though its undefined, you should understand your example as well.
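A quick numerical sanity check (my own sketch, standard library only): build the series for $(\cos(2x)-1)/x^2$ directly from the cosine series, as described above, and compare it with the function away from 0; at 0 the series supplies the value $-2$ that fills the removable discontinuity:

```python
import math

def f(x):
    # undefined at x = 0, fine everywhere else
    return (math.cos(2 * x) - 1) / (x * x)

def series(x, terms=5):
    # cos(2x) = sum_{n>=0} (-1)^n (2x)^{2n} / (2n)!; drop the n = 0 term (the 1)
    # and divide by x^2; the n = 1 term alone already gives the limit value -2
    return sum((-1) ** n * (2 * x) ** (2 * n) / math.factorial(2 * n)
               for n in range(1, terms + 1)) / (x * x)

print(abs(f(0.1) - series(0.1)) < 1e-12)  # True: they agree away from 0
print(abs(series(1e-8) + 2) < 1e-9)       # True: the series value near 0 is -2
```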
It does have something to do with limits: if the discontinuity is removable, that means the limit at that point exists, and the value of the limit is precisely what the value of the function is "supposed" to be. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347905312772,
"lm_q1q2_score": 0.8320822999884842,
"lm_q2_score": 0.8539127585282744,
"openwebmath_perplexity": 301.7551369335235,
"openwebmath_score": 0.958280622959137,
"tags": null,
"url": "http://mathhelpforum.com/calculus/211251-taylor-series-cos-2x-1-x.html"
} |
python, beginner, pandas, matplotlib
Title: Extending die roll simulations for complex data science tasks I've developed a Python script that simulates die rolls and analyses the results. I'm now looking to extend and modify this code for more complex data science tasks and simulations.
Is this code simple and readable?
Are there more insightful statistics or visualisations that can be generated from the die roll data?
How could this code be extended or modified for more complex data science tasks or simulations?
import unittest
from random import randint
import matplotlib.pyplot as plt
import pandas as pd
def roll_die() -> int:
"""Simulate rolling a fair six-sided die and return the result"""
return randint(1, 6)
num_rolls = 1000
die_rolls = [roll_die() for _ in range(num_rolls)]
df = pd.DataFrame({"Rolls": die_rolls})
roll_counts = df["Rolls"].value_counts().sort_index()
print(df)
print(roll_counts)
plt.bar(roll_counts.index, roll_counts.values)
plt.xlabel("Die Face")
plt.ylabel("Frequency")
plt.title("Die Roll Distribution")
plt.show()
class TestRollDie(unittest.TestCase):
def test_roll_die(self):
result = roll_die()
self.assertTrue(1 <= result <= 6) | {
"domain": "codereview.stackexchange",
"id": 45144,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, pandas, matplotlib",
"url": null
} |
temperature, everyday-life
Title: Frosty Window Panes There might be an obvious reason for this, but yesterday, while travelling on the bus, it was raining heavily outside and the window panes became frosty and hazy, so I could write a bunch of stuff on them. Why does this happen? That is because water had condensed onto the window pane. This water on the window pane condensed out of air. Prior to condensation, it was in the form of vapor. Now for a given water vapor pressure in air, condensation into liquid water can only occur if the temperature drops sufficiently low (to be precise, lower than the saturation temperature). So rain must have cooled the window pane sufficiently for this to happen. Either that, or rain increased the water vapor content of the air enough to make the corresponding saturation temperature exceed the temperature of the window pane. | {
"domain": "physics.stackexchange",
"id": 42028,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "temperature, everyday-life",
"url": null
} |
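The "saturation temperature" mentioned above is the dew point, which can be estimated with the Magnus approximation (a sketch; the coefficients 6.112 hPa, 17.62 and 243.12 °C are one commonly used parameter set, other references use slightly different constants). The pane fogs once its surface falls below this temperature:

```python
import math

def dew_point_celsius(temp_c, rel_humidity):
    """Magnus approximation; rel_humidity in (0, 1]."""
    gamma = math.log(rel_humidity) + 17.62 * temp_c / (243.12 + temp_c)
    return 243.12 * gamma / (17.62 - gamma)

# e.g. a humid bus interior at 20 C and 80 % RH condenses on any surface
# colder than about 16.4 C
print(round(dew_point_celsius(20.0, 0.80), 1))  # 16.4
```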
resonance, lewis-structure
What is resonance? Is it a physical process? No way! It is just a concept, a word, used to describe | {
"domain": "chemistry.stackexchange",
"id": 2711,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "resonance, lewis-structure",
"url": null
} |
homework-and-exercises, newtonian-mechanics, integration, statics
Combining contact mechanics with thin wall membrane deflection is a supremely complicated subject and I wish you luck in finding a relevant reference for a parabolic shape.
What you are attempting to do is consider only the radial stresses along the membrane, with no consideration of the hoop or shear components. In doing so, the radial stress is perpendicular to the applied force at the center, giving an infinite deformation and hence zero stiffness. Mentally replace the surface with a chain-mail material that has no resistance to bending but does resist pulling: when a force is applied perpendicular to the surface, there is nothing there to support it. You need to do a far more in-depth analysis, and to get a finite answer, you need to replace the point load with a distributed load over a small area.
You might want to look into this BOOK for more details. | {
"domain": "physics.stackexchange",
"id": 13992,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, integration, statics",
"url": null
} |
molecular-biology, proteins, genetics, learning
Title: How do proteins and genes participate in learning? I am a computer scientist that studies biology and bioinformatics.
In the last weeks, I have been trying to study new research directions, and I would like to deepen my knowledge on the role and behavior of genes and proteins in learning.
By learning, I mean the human process: the information I is absent at time T, and present at time T+1.
I would like to study more this problem, and I am wondering: how do proteins and genes behave during learning?
I have read that proteins that participate in learning are called marker proteins. Is it true? Which role do they have?
Where could I find some resources to study this fascinating problem?
Thank you very much! The storage of memories in cells is rarely thought of on the protein level of the cell. Cells are usually given a developmental state, but no memory. A cell may become a liver cell, cancerous, or diabetic, but this is not memory, but a physiological change in the cell which is usually not reversible to a previous state.
For example cancer treatments are entirely focused on identifying the cancerous cells and killing them. Internally the genomes of cancer cells often have deletions and duplications. They are cancerous, they have not learned to be cancerous. Though not as dramatic, it is now thought that cellular differentiation which creates different types of cells is heavily influenced by epigenetic modification of the genome; the DNA is marked by methyl groups which dictates the state of the cell by modifying the gene. This is mediated by proteins for sure, but is quite complex and not well understood at this time. Epigenetic markers can even change gene behavior between generations of offspring as well, though that is not usually called memory.
How is information stored in the brain? This is thought to be reflected in the organization of the neurons in the brain. There are many kinds of neurons. They can be distinguished by the sorts of axons and dendrites that emanate from the cell body. They can also be distinguished by the chemical variety of neurotransmitter they use (there are a score of different molecules). So to a great extent the type of cell and the specific proteins it chooses to use to mediate information is very important. | {
"domain": "biology.stackexchange",
"id": 413,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "molecular-biology, proteins, genetics, learning",
"url": null
} |
r, geospatial
Title: Finding travel distance between airports I have 2 nested for loops which I want to get rid of. Any thoughts?
I am calculating the distance between cities based on their longitude and latitude. There is a custom function earth.dist() that I am using in the loop.
for (i in 1:nrow(dat)) {
#for each other airport
for (j in 1:nrow(dat)) {
#if both airport are different
if (dat[i,3]!=dat[j,3]){
k=k+1
#airport1
airport1[k] <- dat[i,3]
#airport2
airport2[k] <- dat[j,3]
#find travel distance
travdist[k] <- earth.dist(dat[i,5],dat[i,4],dat[j,5],dat[j,4])
}
}
}
function for distance calculation
earth.dist <- function (lon1, lat1, lon2, lat2){
rad <- pi/180
a1 <- lat1 * rad
a2 <- lon1 * rad
b1 <- lat2 * rad
b2 <- lon2 * rad
dlon <- b2 - a2
dlat <- b1 - a1
a <- (sin(dlat/2))^2 + cos(a1) * cos(b1) * (sin(dlon/2))^2
c <- 2 * atan2(sqrt(a), sqrt(1 - a))
R <- 6378.145
d <- R * c
return(d)
} First, let's download some data similar to yours (I assume). This csv available online has almost 7,000 airports:
url <- "https://commondatastorage.googleapis.com/ckannet-storage/2012-07-09T214020/global_airports.csv"
library(RCurl)
txt <- getURL(url)
data <- read.csv(textConnection(txt), stringsAsFactors = FALSE) | {
"domain": "codereview.stackexchange",
"id": 21078,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "r, geospatial",
"url": null
} |
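For reference, here are the same great-circle formula and the all-pairs loop as a Python sketch (my own translation, not part of the R answer); the nested loops collapse into one comprehension over the cross product, and in R itself the analogous move is vectorized arithmetic or outer():

```python
from math import radians, sin, cos, atan2, sqrt, pi

def earth_dist(lon1, lat1, lon2, lat2, radius_km=6378.145):
    # haversine formula, same as the R earth.dist() above
    a1, b1 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    a = sin((b1 - a1) / 2) ** 2 + cos(a1) * cos(b1) * sin(dlon / 2) ** 2
    return 2 * radius_km * atan2(sqrt(a), sqrt(1 - a))

def travel_distances(airports):
    # airports: list of (name, lon, lat); all ordered pairs of distinct airports
    return [(p[0], q[0], earth_dist(p[1], p[2], q[1], q[2]))
            for p in airports for q in airports if p[0] != q[0]]

# sanity check: 90 degrees of longitude at the equator is a quarter circle
print(abs(earth_dist(0, 0, 90, 0) - 6378.145 * pi / 2) < 1e-9)  # True
```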
javascript, object-oriented, functional-programming, formatting, constructor
this.bio = function () {
alert(`This person's interests are: ${this.hobbiesSentence}`)
};
}
As I'm writing this, I could see a case for making the function exist but not necessarily creating/storing the sentence unless the bio() method is called. Any other ideas? Thanks in advance. Performance
Strings are immutable, so using accumulator += stuffToAppend in a loop can traditionally impact performance. The problem is that we're creating a new string every iteration, leading to quadratic time complexity for an operation that should be linear. It turns out that modern browsers optimize this heavily, using an internal array to represent the string parts, making it about as fast as using an explicit array; so this post is focused on style rather than performance.
Design
On first thought, reduce seems like the right function from a semantic standpoint since we want to boil the array of interests down to one string. However, since avoiding string concatenation requires an intermediate array in reduce, we might as well just skip the intermediate array and use map and join. It's pretty common that reduce can be replaced with map or filter, which are more specific and succinct.
Switch statements are also generally not used much in JS (but often used in C...). You can replace many switch statements in JS with an object (particularly if you're choosing between a number of similar functions), or at least an if statement. Either way, the nature of the commas and "and" in this example makes it a bit awkward, so there doesn't seem to be any clear-cut win.
Additionally, this routine of "prettifying" a list is generic and can be moved to a separate function to keep Person clean.
As an aside, instead of switching between "interests", "hobbies" and "bio", it seems best to pick one term and stick with it throughout.
Here's my attempt. This might seem a bit abstract, but it's typical in JS to avoid conditional/switch stuff as much as it is to avoid loops (which is the idea with reduce). If you prefer a more traditional approach, replace the joins array and indexing with an if statement and I'd still endorse it. | {
"domain": "codereview.stackexchange",
"id": 35520,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, object-oriented, functional-programming, formatting, constructor",
"url": null
} |
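To make the "prettifying" idea above concrete, here is a Python sketch of the same generic helper (the JS version would be analogous, built on Array.prototype.join):

```python
def prettify_list(items, pair_sep=", ", last_sep=" and "):
    # "", "a", "a and b", "a, b and c", ...
    if not items:
        return ""
    if len(items) == 1:
        return items[0]
    return pair_sep.join(items[:-1]) + last_sep + items[-1]

print(prettify_list(["reading", "chess", "hiking"]))  # reading, chess and hiking
```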
star, distances
So, the distance between Alpha Centauri AB and Barnard's Star is:
$d = \sqrt{(-1.643 + 0.057)^2 + (-1.366 + 5.938)^2 + (-3.816 - 0.487)^2} \approx\mathbf{6.476\,ly}$
Well, that was certainly tedious - but it's a process that you can standardize to pretty much any star, or really, any two astronomical objects:
First, convert RA and DEC to degrees.
Second, assign R, RA, and DEC to the spherical coordinates $r$, $\theta$, and $\phi$.
Third, convert spherical coordinates to rectangular coordinates.
Lastly, use the distance formula with the two sets of $x$, $y$, and $z$ coordinates.
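The four steps above can be sketched as follows (the conversion helper is generic and assumes the usual RA/DEC convention; math.dist needs Python 3.8+; the two coordinate triples are the ones quoted in the worked example):

```python
import math

def equatorial_to_cartesian(r, ra_deg, dec_deg):
    # steps 1-3: degrees -> radians, then spherical -> rectangular
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (r * math.cos(dec) * math.cos(ra),
            r * math.cos(dec) * math.sin(ra),
            r * math.sin(dec))

# step 4: the distance formula on the two sets of x, y, z (in light-years)
alpha_cen = (-1.643, -1.366, -3.816)
barnard = (-0.057, -5.938, 0.487)
d = math.dist(alpha_cen, barnard)
print(round(d, 3))  # 6.476
```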
Hope this helps. :) | {
"domain": "astronomy.stackexchange",
"id": 5951,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "star, distances",
"url": null
} |
c#, dynamic-programming
if (sequence.Count <= 0)
{
return new List<int>();
}
// generate the value table
List<SequenceValueWithMetaData> valueTable = GenerateValueTableFromSequence(sequence);
if (valueTable == null)
{
// Internal Error
throw new InvalidOperationException("valueTable is null");
}
// find largest length in the valueTable and record it to index
int indexOfLongestLength = 0;
for (int i = 0; i < valueTable.Count; i++)
{
if (valueTable[i].Length > valueTable[indexOfLongestLength].Length)
{
indexOfLongestLength = i;
}
}
// create the longest subsequence by finding the longest length,
// adding the value to the back of the list, then moving to its predecessor's index
int currentIndex = indexOfLongestLength;
int insertionPoint = valueTable[indexOfLongestLength].Length - 1;
int[] longestSubsequence = new int[valueTable[indexOfLongestLength].Length];
do
{
// add the value at index to the end of longestSubsequence
longestSubsequence[insertionPoint] = valueTable[currentIndex].Value;
insertionPoint = insertionPoint - 1;
// check if there is a predecessor
int? predecessorIndexMaybe = valueTable[currentIndex].predecessorIndexMaybe;
// if there is a predecessor, then set currentIndex to the predecessorIndex
if (predecessorIndexMaybe.HasValue)
{
currentIndex = predecessorIndexMaybe.Value;
}
else
{
// no predecessor, so we're done
break;
}
} while (true); // breaks when there is no predecessor
return longestSubsequence.ToList();
}
private static List<SequenceValueWithMetaData> GenerateValueTableFromSequence(List<int> sequence)
{
if (sequence == null)
{
// Internal Error
throw new InvalidOperationException("sequence is null");
} | {
"domain": "codereview.stackexchange",
"id": 22198,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, dynamic-programming",
"url": null
} |
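The value table with predecessor indices above is the classic O(n^2) longest-increasing-subsequence dynamic program; here is a compact Python sketch of the same idea (illustrative names, strictly increasing variant):

```python
def longest_increasing_subsequence(seq):
    """O(n^2) DP mirroring the value table + predecessor indices above."""
    if not seq:
        return []
    length = [1] * len(seq)        # best subsequence length ending at i
    pred = [None] * len(seq)       # predecessor index, None for a head element
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] < seq[i] and length[j] + 1 > length[i]:
                length[i] = length[j] + 1
                pred[i] = j
    # walk back from the index with the longest length
    i = max(range(len(seq)), key=length.__getitem__)
    out = []
    while i is not None:
        out.append(seq[i])
        i = pred[i]
    return out[::-1]

print(longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))  # [3, 4, 5, 9]
```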
java, algorithm, binary-tree
class Solution {
public int widthOfBinaryTree(TreeNode root) {
TreeNode[] levelHi = new TreeNode[3_000];
TreeNode[] levelLo = new TreeNode[3_000];
levelHi[0] = root;
root.num = 0;
int maximumWidth = 1;
int levelLength = 1;
while (true) {
int numberOfChildrenInLoLevel =
getNextDeepestLevel(levelHi, levelLo, levelLength);
if (numberOfChildrenInLoLevel == 0) {
return maximumWidth;
}
int tentativeWidth = levelLo[numberOfChildrenInLoLevel - 1].num -
levelLo[0].num + 1;
maximumWidth = Math.max(maximumWidth, tentativeWidth);
TreeNode[] levelTemp = levelLo;
levelLo = levelHi;
levelHi = levelTemp;
levelLength = numberOfChildrenInLoLevel;
}
}
int getNextDeepestLevel(TreeNode[] levelHi,
TreeNode[] levelLo,
int levelHiLength) {
int levelLoLength = 0;
for (int i = 0; i < levelHiLength; i++) {
TreeNode currentTreeNode = levelHi[i];
TreeNode leftChild = currentTreeNode.left;
TreeNode rightChild = currentTreeNode.right;
if (leftChild != null) {
leftChild.num = currentTreeNode.num * 2;
levelLo[levelLoLength++] = leftChild;
}
if (rightChild != null) {
rightChild.num = currentTreeNode.num * 2 + 1;
levelLo[levelLoLength++] = rightChild;
}
}
return levelLoLength;
}
} | {
"domain": "codereview.stackexchange",
"id": 42785,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, binary-tree",
"url": null
} |
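A breadth-first Python sketch of the same level-numbering idea (my own Node class, not part of the reviewed solution; positions are re-based each level so the indices stay small on deep trees):

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def width_of_binary_tree(root):
    if root is None:
        return 0
    best = 0
    level = deque([(root, 0)])           # (node, position index within its level)
    while level:
        first = level[0][1]
        best = max(best, level[-1][1] - first + 1)
        for _ in range(len(level)):
            node, num = level.popleft()
            num -= first                 # re-base so indices stay small
            if node.left:
                level.append((node.left, 2 * num))
            if node.right:
                level.append((node.right, 2 * num + 1))
    return best

# tree [1, 3, 2, 5, 3, None, 9]: the last level spans positions 0..3
root = Node(1, Node(3, Node(5), Node(3)), Node(2, None, Node(9)))
print(width_of_binary_tree(root))  # 4
```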
The blue terms cancel out, while the red term will vanish during the limit process. We are left with
$$\langle v,Ax\rangle+\langle x,Av\rangle$$
which can be seen as the derivative of $\langle x,Ax\rangle$ in the direction $v$. Your special case of computing the partial derivative $\partial x_1$ is asking to derive $\langle x,Ax\rangle$ in the direction of $e_1$, which is the vector $(1,0,\cdots,0)^\top$. Plug it in to get
$$(*)\qquad\langle e_1,Ax\rangle+\langle x,Ae_1\rangle.$$
Such "axis aligned vectors" like $e_1$ are good at extracting coordinates or rows/columns. So, the first term of $(*)$ gives you the first coordinate of $Ax$. This is what you wrote as $\langle (A^\top)^{(1)},x\rangle$. The second term gives you the inner product of $x$ with the first column of $A$. You wrote this as $\langle A^{(1)},x\rangle$.
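A quick numerical check of $(*)$ (a throwaway 2x2 example of my own, not from the text): the directional derivative along $e_1$ computed by central differences matches $\langle e_1,Ax\rangle+\langle x,Ae_1\rangle$:

```python
A = [[1.0, 2.0], [3.0, 4.0]]

def quad(x):
    """<x, A x> for a 2-vector x."""
    ax0 = A[0][0] * x[0] + A[0][1] * x[1]
    ax1 = A[1][0] * x[0] + A[1][1] * x[1]
    return x[0] * ax0 + x[1] * ax1

x = [0.7, -1.3]
# <e1, A x> + <x, A e1> = (A x)_1 + (A^T x)_1
analytic = (A[0][0] * x[0] + A[0][1] * x[1]) + (A[0][0] * x[0] + A[1][0] * x[1])
h = 1e-6
numeric = (quad([x[0] + h, x[1]]) - quad([x[0] - h, x[1]])) / (2 * h)
print(abs(analytic - numeric) < 1e-6)  # True
```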
The partial derivative with respect to $x_1$ can be computed as a directional derivative : $$\frac{\partial f }{\partial x_1}(x) = \frac{d}{dt}(f(x+te_1))|_{t=0}$$ (where $e_1=(1,0,\dots,0)$.) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9850429147241161,
"lm_q1q2_score": 0.8449383960876129,
"lm_q2_score": 0.8577681068080748,
"openwebmath_perplexity": 118.02447772096212,
"openwebmath_score": 0.9989399313926697,
"tags": null,
"url": "https://math.stackexchange.com/questions/2283230/partial-derivative-of-fx-with-respect-to-x-1?noredirect=1"
} |
• Well, I'd do it by ignoring the floor (getting something like $2187$) and searching near there.
– lulu
Sep 30 '19 at 11:25
• You already have a good formula there. All that's left to do is to do a binary search on it, and you'll have an algorithm that is pretty much as fast as possible Sep 30 '19 at 12:01
• @Sudix As the left-hand side evolves in a roughly linear fashion, one can do a lot better than pure binary search by using the secant method. Sep 30 '19 at 12:08
• I don't understand your equation. For $n=105$, I got 48. Mar 20 at 23:23
• Note that if $$\gcd(a,b)=1$$, then $$\gcd(a+b,b)=1$$
• There are $$48$$ numbers which are less than $$105$$ which are relatively prime to $$105$$, since $$\phi(105)=48$$. Let $$a_i$$ be the $$i$$-th number which is relatively prime to $$105$$. It is clear that $$a_{48}=104$$.
• Also the first $$104$$ numbers which are relatively prime are $$\{1,2,4,8,\ldots,104\}$$. The next $$48$$ numbers are $$\{1+105,2+105,4+105,\ldots,104+105\}$$. Thus you see that $$a_{96}=209$$.
Continue like this.
• In point 3, I think you mean "The first $48$" and "The next $48$", not $104$. Sep 30 '19 at 11:56
• Also, I think you meant $1,2,4,8,\ldots$, not $1,2,4,7,\ldots$. Sep 30 '19 at 12:44
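The counting pattern in the bullets above turns directly into a constant-time lookup (a sketch; the names are mine). The thousandth term comes out as 105*20 + 86 = 2186, consistent with the "near 2187" estimate in the first comment:

```python
from math import gcd

coprime_residues = [a for a in range(1, 106) if gcd(a, 105) == 1]
assert len(coprime_residues) == 48  # phi(105) = phi(3) phi(5) phi(7) = 2 * 4 * 6

def nth_coprime_to_105(n):
    # every block 105q + 1 .. 105(q + 1) contains the same 48 residue offsets
    q, r = divmod(n - 1, 48)
    return 105 * q + coprime_residues[r]

print(nth_coprime_to_105(48))    # 104, matching a_48 above
print(nth_coprime_to_105(96))    # 209, matching a_96 above
print(nth_coprime_to_105(1000))  # 2186
```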
I personally think this problem is easier to approach from a slightly more brute-force angle. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9755769127862449,
"lm_q1q2_score": 0.8043260145325773,
"lm_q2_score": 0.8244619199068831,
"openwebmath_perplexity": 135.5843384160907,
"openwebmath_score": 0.8570600748062134,
"tags": null,
"url": "https://math.stackexchange.com/questions/3375415/find-the-thousandth-number-in-the-sequence-of-numbers-relatively-prime-to-105"
} |
c++, recursion, lambda, c++20
return result;
}
There is no need to explicitly specify the type of x: at best it is the same as the type of the argument it gets passed; at worst you make a mistake that compiles without errors but causes some subtle cast. And since you want to return a value that has the same type as x (so that the result of func() gets cast back to a std::variant), just write -> decltype(x) as the trailing return type. You can do the same for the trailing return type of the lambda passed to std::visit().
Well, that would be true, except the above example is only so compact because you are copying by value, which leads me to:
Avoid unnecessary copies
I missed this in my previous review, but there are more places where you cause a copy to be made: anytime a function takes a parameter by value, it is copied. So to avoid the costly copies of large containers, be sure to pass the inputs as much as possible by const reference, both for the templated function parameters and for the parameters passed to the lambda functions.
Now we need a way to ensure the trailing return types don't become references. To do this, you can use std::remove_reference. It becomes a bit messier, so I would use a using declaration:
template<class T, class Fn> requires is_iterable<T> && is_element_visitable<T>
static inline T recursive_transform(const T &input, Fn func)
{
using value_type = std::remove_reference<decltype(*input.begin())>::type;
T result = input;
std::transform(input.begin(), input.end(), result.begin(),
[func](const auto &x) -> value_type {
return std::visit([func](auto&& arg) -> value_type {
return func(arg);
}, x);
}
);
return result;
} | {
"domain": "codereview.stackexchange",
"id": 39680,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, recursion, lambda, c++20",
"url": null
} |
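For contrast with the C++ template machinery above, the same "apply a function to every leaf of a nested container" operation is a few lines in a dynamic language. A Python sketch, purely illustrative and not a replacement for the reviewed code:

```python
def recursive_transform(data, func):
    # recurse into containers, apply func at the leaves
    if isinstance(data, (list, tuple)):
        return type(data)(recursive_transform(x, func) for x in data)
    return func(data)

print(recursive_transform([1, (2, 3), [4, [5]]], lambda v: v * v))
# [1, (4, 9), [16, [25]]]
```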
javascript, beginner, mathematics, matrix
console.log(" ");
console.log("toRowEchelonForm test: " + compereMatrices(toRowEchelonForm(m), mr));
// answer: http://www.wolframalpha.com/input/?i=solve+row+echelon+form+{{1%2C+2%2C+2%2C+2}%2C{1%2C+3%2C+3%2C+3}%2C+{1%2C+4%2C+16%2C+5}}
m = [
[1, 2, 2, 2],
[1, 3, 3, 3],
[1, 4, 16, 5]
];
mr = [
[1, 0, 0, 0],
[0, 1, 0, 0.9166666666666666],
[0, 0, 1, 0.08333333333333333]
];
console.log(" ");
console.log("toRowEchelonForm test: " + compereMatrices(toRowEchelonForm(m), mr));
// answer: http://www.wolframalpha.com/input/?i=solve+row+echelon+form+{{0%2C+2%2C+-1%2C+-6}%2C{0%2C+3%2C+-2%2C+-16}%2C+{0%2C+0%2C+-3%2C+11}}
m = [
[0, 2, -1, -6],
[0, 3, -2, -16],
[0, 0, -3, 11]
];
mr = [
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
]; | {
"domain": "codereview.stackexchange",
"id": 8091,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner, mathematics, matrix",
"url": null
} |
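As an independent cross-check of the expected matrices in these tests (a sketch of my own, not the JS under review), a small reduced-row-echelon routine reproduces the 0.916666.../0.083333... column for the matrix [[1,2,2,2],[1,3,3,3],[1,4,16,5]] above:

```python
def rref(m):
    """Reduced row echelon form with a simple pivot search."""
    m = [row[:] for row in m]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = next((i for i in range(r, rows) if abs(m[i][c]) > 1e-12), None)
        if pivot is None:
            continue  # no pivot in this column, move right
        m[r], m[pivot] = m[pivot], m[r]
        p = m[r][c]
        m[r] = [v / p for v in m[r]]
        for i in range(rows):
            if i != r:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

for row in rref([[1, 2, 2, 2], [1, 3, 3, 3], [1, 4, 16, 5]]):
    print([round(v, 6) for v in row])
# last column of the non-trivial rows: 0.916667 and 0.083333, as in mr above
```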
thermodynamics
Title: What is the correct expression of pressure-volume work? My book defines work in pressure-volume work as:
$$\mathrm{d}w = - P_\mathrm{ext} \mathrm{d}V$$
However before doing so it mentioned the piston to be massless.
In physics, work is defined as
$$\mathrm{d}w = P_\mathrm{in} \mathrm{d}V$$
What is the correct definition of pressure-volume work in chemistry?
Does the definition of work, $$-P_\mathrm{ext} \mathrm{d}V$$, change if the piston were not massless in chemistry?
"domain": "chemistry.stackexchange",
"id": 14332,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics",
"url": null
} |
python, python-2.x, json, file-system
parser.add_argument("--persistentAlert", action='store_true', help="If this flag is set then the alert to the user is a foreground window that requires pressing OK to dismiss") | {
"domain": "codereview.stackexchange",
"id": 25204,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-2.x, json, file-system",
"url": null
} |
c++, performance
Title: Distance between two different sets of points Before writing this post I looked around for a library that could solve this problem. I didn't find much so I decided to try to write this.
My problem is the following:
I have two sets of points with coordinates x1, y1, and x2, y2. The sets have different number of elements. I want to find what is the average distance between all the elements in set 1 vs all the elements in the set 2 given a certain cutoff. This mean that, if two points (of the two different sets) are further than cutoff they should not be considered.
The easiest solution is to perform \$O(n^2)\$ search and then filter the results based on the distance, but it's inefficient.
I tried to write an algorithm that divides the space of the sets into squares of size "cutoff". For each point I can associate two indexes that tell me to which box the point belongs. Looking at the indexes, I can generate the lists of neighbouring points and calculate the distances only between points that are in adjacent boxes.
#include <vector>
#include <algorithm>
#include <iostream>
#include <cmath>   // for sqrt used in euc()
#include <ctime>
#include <numeric>
using namespace std;
// euclidean distance
double euc(double x, double y) {
return sqrt(x * x + y * y);
}
//calculate the distance vector between two different sets of points
// set 1 of coordinate x1, y1
// set 2 of coordinate x2, y2
vector <double> all_dist(vector <double>& x1, vector <double>& x2, vector <double>& y1, vector <double>& y2) {
vector <double> d(x1.size()*x2.size());
for (size_t i = 0; i < x1.size(); ++i) {
for (size_t j = 0; j < x2.size(); ++j) {
d[i*x2.size()+j]=euc(x1[i] - x2[j], y1[i] - y2[j]);
}
}
return d;
} | {
"domain": "codereview.stackexchange",
"id": 36328,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance",
"url": null
} |
electromagnetic-radiation, integration, complex-numbers, greens-functions, analyticity
Let's first look at the case with $t < 0$. Here, $e^{-i\omega t} \rightarrow 0$ when $\omega \rightarrow +i\infty$. This means that, for $t<0$, we can close the contour C in the upper-half plane as shown in the figure and the extra semi-circle doesn't give rise to any further contribution. But there are no poles in the upper-half plane. This means that, by the Cauchy residue theorem, $G_{ret}(r, t) = 0$ when $t < 0$.
Now I get what the author means: the contour integral itself becomes zero since there is no pole inside. But we are actually dealing with an integral along the real axis, and there are singularities on it.
I have no idea why the integral when t<0 becomes zero at all.
And also, when t>0,
In contrast, when t > 0 we have $e^{−i \omega t} \rightarrow 0$ when
$\omega \rightarrow −i \infty$, which means that we get to close the contour in the lower-half plane. Now we do pick up contributions to the integral from the two poles at $\omega = \pm k$. This time the Cauchy residue theorem gives
$\begin{align}
\oint d \omega \frac{e^{-i \omega t}}{(\omega -k )(\omega +k)} = -2\pi i \bigg[\frac{e^{-ikt}}{2k} - \frac{e^{ikt}}{2k} \bigg]
\end{align}$ | {
"domain": "physics.stackexchange",
"id": 93056,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetic-radiation, integration, complex-numbers, greens-functions, analyticity",
"url": null
} |
magnetic-fields, potential
$$
\mathbf{B} = -\nabla\Psi.
$$
Any idea on this? I was not able to find any reference online or on Electromagnetics books.
Many thanks in advance for any suggestion. The short answer is that the curl of the vector field you found is not zero, even at points where there is no current; so there cannot be a scalar potential for it. This is straightforward enough (if tedious) to verify: assign coordinates to the ends of the wire (I recommend $x = y= 0$ & $z = \pm d$); write out $\cos \alpha_1$ and $\cos \alpha_2$ in terms of $d$ and $\rho$, the distance from the axis (which is the same as your $a$); and take the curl of the resulting expression for $B_\phi$ in cylindrical coordinates. The $\rho$- and $z$-components of the result will be non-vanishing in general because $\partial (\rho B_\phi)/\partial \rho$ and $\partial B_\phi/\partial z$ are not zero.
As to why this happens, this is due to an underappreciated subtlety of the Biot-Savart Law.
For the Biot-Savart Law to yield a magnetic field satisfying Ampere's Law ($\nabla \times \mathbf{B} = \mu_0 \mathbf{J}$), it is necessary that $\nabla \cdot \mathbf{J} = 0$. Specifically, if you take the curl of $\mathbf{B}_\mathrm{BS}$ as defined by the Biot-Savart Law, then after heroic amounts of vector algebra (see, for example, §5.3.2 of Griffiths) you get to the statement that
$$ | {
"domain": "physics.stackexchange",
"id": 82700,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "magnetic-fields, potential",
"url": null
} |
theoretical-chemistry, symmetry
Title: Mathematical definition of a symmetry operator I had a hard time understanding what it means to apply a symmetry operator to a function, so I wondered if there is a formal way to define this? As far as I understand, applying a symmetry operation to a function means the following:
$$ \hat Rf(x) = f(\hat Rx) $$
The result has then to be the same function with a coefficient in front of it. Otherwise it would not be symmetric in regard to the symmetry operator $\hat{R}$.
Is this correct and can it be shown that this is how to apply symmetry operators to functions ? That's not entirely correct. You're making at least an extra assumption here. You're assuming that $\hat{R}$ is a function from $D\mapsto D$ where $D$ is the domain of $f$, but there's no reason why $f$ must operate as a function on $D\mapsto D$. It might just as well be $f: D\mapsto \mathbb{R}$.
I think what you're really asking for is that a symmetry element $\hat{R}$ such that $\hat{R}: D\mapsto D$, and $\forall x\in D$, $f(x) = f(\hat{R}x) = (f\circ \hat{R})x$.
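That invariance condition $f(x) = f(\hat R x)$ for all $x$ can be checked symbolically; a minimal sketch (sympy, with $f = \cos$ and $\hat R: x \mapsto -x$ as an assumed example):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)

# The symmetry operation R: x -> -x, acting on expressions by substitution
R = lambda expr: expr.subs(x, -x)

# f is invariant under R: (f o R)(x) - f(x) simplifies to 0
print(sp.simplify(R(f) - f))
```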
Then for example, for $f(x) = (\cos x, \cos x)$, the transformation $\hat{R}$ mapping $x \rightarrow -x$ would be a valid symmetry element. | {
"domain": "chemistry.stackexchange",
"id": 10231,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "theoretical-chemistry, symmetry",
"url": null
} |
classical-mechanics, hamiltonian-formalism, variational-principle, action, boundary-conditions
For their definitions and how they are related, see e.g. my Phys.SE answer here.
On one hand, the (Dirichlet) on-shell action function (2) and Hamilton's principal function (3) are closely related, cf. this Phys.SE post. Explicit solutions to (2) and (3) are only known in sufficiently simple cases.
On the other hand, the off-shell action functional (1) is the one which is used in the stationary action principle with suitable boundary conditions imposed. The other two (2) and (3) cannot be used in a variational principle.
For a discussion of boundary value vs. initial value problems, see e.g. this Phys.SE post.
--
$^1$ Following e.g. H. Goldstein, CM, the Hamilton's principal function (3) is a type 2 generating function of canonical transformations. The integration constants $\alpha_i$ are identified with the new momenta $P_i$. | {
"domain": "physics.stackexchange",
"id": 20279,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, hamiltonian-formalism, variational-principle, action, boundary-conditions",
"url": null
} |
physical-chemistry, solutions
Title: thinking about osmotic pressure when liquid is replaced by gas An option in my test says:
"The osmotic pressure of a dilute solution is the same as it would exert if it exists as a gas in the same volume of the solution and at same temperature."
I'm not able to see how I should relate a solution and a gaseous mixture. Am I missing something? In both the case of the osmotic pressure of a dilute solution and the case of the pressure exerted by an ideal gas, the solute or gas may be described as composed of non-interacting (ideal) particles, and the mathematical expressions (equations of state) describing the two situations are very similar (in one case $p=cRT$, in the other $\pi = cRT$)$^\dagger$. However, it might be less confusing (and equivalent) to state that in both cases the equations describe similar relationships between the work required to change the volume of the system and the accompanying change in the concentration of gas or solute. In both cases work can be done by the system through an expansion, but in one case the expansion results from pressure exerted by the gas, while in the other it results from pressure exerted by the solvent. In the case of osmotic pressure, since the chemical potential of the solvent is coupled to that of the solute (as described by the Gibbs-Duhem relation), it is possible to relate the osmotic pressure to the solute concentration (in the limit of an ideal solution as described by Henry's Law).
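The parallel between $p = cRT$ and $\pi = cRT$ can be made concrete with numbers (a sketch; the values are illustrative, not taken from the question):

```python
R = 0.082057   # gas constant, L·atm/(mol·K)
c = 0.010      # solute (or gas) concentration, mol/L
T = 298.15     # temperature, K

pi_osmotic = c * R * T   # van 't Hoff equation for a dilute solution
p_gas = c * R * T        # ideal-gas pressure at the same c and T

print(round(pi_osmotic, 4), round(p_gas, 4))   # identical, ~0.2447 atm
```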
$^\dagger$As I commented, the equations are analogous but in my opinion, "The osmotic pressure of a dilute solution is the same as it would exert if it exists as a gas in the same volume of the solution and at same temperature." is a misstatement. The entire solution, if evaporated, would not exert the same pressure. It is more subtle than that. It is the solute that would exert an equivalent pressure if a gas. | {
"domain": "chemistry.stackexchange",
"id": 13810,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, solutions",
"url": null
} |
Note that $\lfloor x \rfloor = x - \{x\}$, where $\{x \}$ (the fractional part of $x$) is periodic with period $1$, with a jump discontinuity at integer points. Let's try to write $\{x\}=f\left(\cot \pi x\right)$, since $\cot \pi x$ has the same property. We need $f(y)\rightarrow 0$ as $y\rightarrow \infty$, $f(y)\rightarrow 1$ as $y\rightarrow -\infty$, and the correct arc-tangent-y interpolation in between. What works is $f(y)=\frac{1}{2}-\frac{1}{\pi}\tan^{-1}y$. Putting things together, $$\lfloor x \rfloor=x-\frac{1}{2} + \frac{1}{\pi}\tan^{-1}(\cot \pi x).$$
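The identity is easy to spot-check numerically (a sketch; note the formula is undefined at integers, where $\cot \pi x$ blows up):

```python
import math

def floor_formula(x):
    # floor(x) = x - 1/2 + (1/pi) * arctan(cot(pi x)), valid for non-integer x
    return x - 0.5 + math.atan(1.0 / math.tan(math.pi * x)) / math.pi

for x in (2.7, -1.25, 0.5):
    print(x, round(floor_formula(x), 9))
```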
• I'm leaning towards your answer as it appears to be the simplest and easiest to understand. However, I have one question. Are you saying that f(y) equals the fractional portion? That's very interesting if that's the case! – The Great Duck May 14 '16 at 6:42
• Yes, since $\{x\}=x - \lfloor x \rfloor$. One caveat here is that we need to fill in the integer values (for both $\{x\}$ and $\lfloor x\rfloor$) by right-continuity, since $\cot k\pi$ ($k\in\mathbb{Z}$) is not defined. – mjqxxxx May 15 '16 at 16:00
• Hmm... Perhaps there is a way to fix that and make it exist? Without piece wise of course. – The Great Duck May 16 '16 at 3:35
I don't think you're going to find an explicit formula for the floor function, and here is why, | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9585377272885904,
"lm_q1q2_score": 0.8240201324766552,
"lm_q2_score": 0.8596637451167997,
"openwebmath_perplexity": 262.11660488060244,
"openwebmath_score": 0.9100489616394043,
"tags": null,
"url": "https://math.stackexchange.com/questions/1117081/explicit-formula-for-floorx/2731687"
} |
ros, ros2, rviz, urdf, pluginlib
if __name__ == '__main__':
main()
I have no clue how it works, but it does.
Problem 3: rviz2 was not loading the urdf despite a robot_state_publisher node being published.
Solution: This was actually a collection of problems instead of one! I wish I was able to publish pictures to explain it, but I do not have enough karma. First rviz2 was throwing an error "Fixed Frame [map] does not exist", but what eluded me was that this frame is supposed to reference the fixed frame in your urdf file. I named mine base_link, so I renamed "map" to "base_link" in the displays menu and that fixed that.
Next, despite there being no visible errors, my urdf was not displaying. What I did not understand was that you need to have a display add-on called "RobotModel" enabled! This is not in the panels menu. In order to enable it, you need to click the "Add" button within the displays panel, scroll down to rviz_default_plugins, click RobotModel, then click OK. RobotModel needs to be enabled in the displays panel in order to render urdf files.
Next, despite RobotModel being enabled, rviz2 was still not loading the model. What I did not know was that rviz2 does not automatically listen to the description topic where your urdf is being published. You need to enter the topic for rviz2 to listen to! According to the answer here, you need to scroll down to the "Description Topic" drop-down menu in RobotModel, double click to the right of it, and enter "robot_description" so that it listens to the topic where your urdf is being published. The name may be different based on what you named it in the robot_state_publisher node.
After solving this set of problems, the urdf of the model finally loaded.
Originally posted by rydb with karma: 125 on 2020-06-23
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 35076,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros2, rviz, urdf, pluginlib",
"url": null
} |
context-free, formal-grammars
Title: Context-free grammar for $\{a^x b^y : x \neq y\}$ I am trying to create a context-free grammar in Extended Backus–Naur form which starts with a non-empty sequence of a's followed by a non-empty sequence of b's, with the special condition that the number of b's has to be unequal to the number of a's.
Thus, the grammar should generate words like:
aaaabbb
aaabb
abbb
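The examples above pin down the language; a quick membership test makes the conditions explicit (a sketch; `in_lang` is a helper name introduced here, not part of the question):

```python
def in_lang(w):
    # non-empty run of a's, then non-empty run of b's, with unequal counts
    x = len(w) - len(w.lstrip('a'))   # leading a's
    y = len(w) - len(w.rstrip('b'))   # trailing b's
    return x >= 1 and y >= 1 and x + y == len(w) and x != y

print([w for w in ['aaaabbb', 'aaabb', 'abbb', 'aabb', 'ab', 'ba'] if in_lang(w)])
# ['aaaabbb', 'aaabb', 'abbb']
```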
So basically I could do something like this:
$\ G=(N,T,P,S)$
$\ N = \{S\}$
$\ T = \{a,b\}$
$\ P = \{S=aa(S|\epsilon)b\}$
But then the words would always have $2n$ a's and $n$ b's:
aab
aaaabb
aaaaaabbb | {
"domain": "cs.stackexchange",
"id": 14877,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "context-free, formal-grammars",
"url": null
} |
7
One simple approach is to use a max-heap. Separately keep track of the minimum element stored in the heap. Then all operations can be done relatively efficiently: loading from an array can be done in $O(n)$ time using Build-Heap. Each of the peek operations takes $O(1)$ time. Decreasing the maximum element can be done in $O(\lg n)$ time. You decrease it ...
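A minimal sketch of this structure in Python, using `heapq` with negated keys to get a max-heap (class and method names are my own):

```python
import heapq

class MaxHeapWithMin:
    """Max-heap (values negated for heapq) that also tracks the minimum."""

    def __init__(self, items):
        self._heap = [-x for x in items]   # O(n) build via heapify
        heapq.heapify(self._heap)
        self._min = min(items)

    def peek_max(self):                    # O(1)
        return -self._heap[0]

    def peek_min(self):                    # O(1)
        return self._min

    def decrease_max(self, new_value):     # O(log n)
        heapq.heapreplace(self._heap, -new_value)
        self._min = min(self._min, new_value)

h = MaxHeapWithMin([5, 1, 9, 3])
print(h.peek_max(), h.peek_min())   # 9 1
h.decrease_max(0)
print(h.peek_max(), h.peek_min())   # 5 0
```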
6
The algorithm you suggest is simply heapify. And indeed - if you increase the value of an element in a min-heap, and then heapify its subtree, then you will end up with a legal min-heap.
6
It takes $\Omega(n)$ time to find the median of a heap in the worst case. The reason is that the lowest levels of the heap (the leaves and their close ancestors) can be very disordered, and they make up the majority of the heap. As a result, if you can find the median on a heap, you can find a median of the $\Omega(n)$ unordered elements near the leaves. ...
6
This is essentially a Segment tree which is a data structure that augments an array with a binary tree as you describe such that: You have fast set and get at any index You have fast "aggregate" queries on ranges You can support fast update queries on ranges, for some combinations of updates and queries The $j$th node at height $k$ in the tree "summarizes" ...
5
If $P$ is the number of processing units available, consider $P$-ary heaps¹. When descending to perform some operation on a set of keys, you can fork whenever keys lead to different children. If the heap is balanced, this results in independent taks for a uniformly chosen set of keys (and sufficiently big $n$) after having descended a constant number of ...
5
The reason that your operation is not listed, is that one is not simply interested in all operations that can be easily implemented using a certain data structure, but rather the other way. Given a set of operations, what is the most efficient way (in terms of space and time) to implement these operations. (But I add more to this later) Binary heaps ...
4 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.965381162156829,
"lm_q1q2_score": 0.8243512912085693,
"lm_q2_score": 0.8539127585282744,
"openwebmath_perplexity": 737.6950532569289,
"openwebmath_score": 0.5009334087371826,
"tags": null,
"url": "https://cs.stackexchange.com/tags/priority-queues/hot"
} |
• this appears to be completely correct. i withdraw my previous dispute with you, Matt. you are correct and i was mistaken. what do you do about $f_p$ when the $Q < \frac12$? and, BTW, that $Q$ (from the cookbook) still is applicable only for the Bilinear Transform mapping of $s$ to $z$. the mapping of $Q$ to $r$ will be different for Impulse Invariant. Nov 13, 2015 at 8:35
• @robertbristow-johnson: I'm glad we finally agree. It would be great if you could also mention that fact in the comments to the original question. Not for me, but for the OP and future users. Then we can also clean up the mess over there ... Nov 13, 2015 at 8:39
• @robertbristow-johnson: Good question about low Q, I'll add that case to my answer as soon as I have the time to do so. Nov 13, 2015 at 9:55
• @robertbristow-johnson: I added some information about $Q<\frac12$. Since we get real-valued poles, the pole angles are of course either $0$ or $\pi$, and $\omega_0$ can still take any value. Nov 13, 2015 at 12:33
• I can confirm your formula for the relationship. I got the same by solving for the zero of the derivative of the magnitude frequency response. Sep 29, 2016 at 18:39 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9740426382811476,
"lm_q1q2_score": 0.8355027115595438,
"lm_q2_score": 0.857768108626046,
"openwebmath_perplexity": 440.46020252245245,
"openwebmath_score": 0.9571929574012756,
"tags": null,
"url": "https://dsp.stackexchange.com/questions/27014/discrete-time-biquad-filter-relation-between-peak-frequency-and-pole-frequency?noredirect=1"
} |
and arrive at the final expression for the Kronecker representation for fcs
kcs[n_, m_] :=
2/π Sum[(1 + 2 i - n) /((1 + 2 i) (2 n - (1 + 2 i)))
KroneckerDelta[1 + 2 i + m - n], {i, -∞, ∞}]
Latex
$$\text{kcs}(\text{n\_},\text{m\_})\text{:=}\frac{2 \sum _{i=-\infty }^{\infty } \frac{(2 i-n+1) \delta _{2 i+m-n+1}}{(2 i+1) (2 n-(2 i+1))}}{\pi }$$
Checking it
Table[kcs[n, m], {m, -3, 3}, {n, -3, 3}] == tcs
True
The case p = 3
Cos x Cos x Cos
fccc = Integrate[Cos[n π x] Cos[m π x] Cos[k π x], {x, 0, 1}]
$$\frac{\frac{\sin (\pi (k-m-n))}{k-m-n}+\frac{\sin (\pi (k+m-n))}{k+m-n}+\frac{\sin (\pi (k-m+n))}{k-m+n}+\frac{\sin (\pi (k+m+n))}{k+m+n}}{4 \pi }$$
We confine ourselves to the KroneckerDelta representation: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9572778036723354,
"lm_q1q2_score": 0.8383714571600891,
"lm_q2_score": 0.8757870013740061,
"openwebmath_perplexity": 5846.096627024306,
"openwebmath_score": 0.6785317063331604,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/126272/solution-to-a-specific-problem-caused-by-generic-simplification"
} |
linear transformation T(X) = AX − CXB where A, C ∈ Mn, B ∈ Ms. 2 Addition, subtraction and scalar multiplication of matrices 7. If your transformation matrix represents a rotation followed by a translation, then treat the components separately. g. Projective Geometry Overview nTools of algebraic geometry nInformal description of projective geometry in a plane nDescriptions of lines and points nPoints at infinity and line at infinity nProjective transformations, projectivity matrix nExample of application nSpecial projectivities: affine transforms, similarities, Euclidean transforms Apply one transformation matrix to an other actor. Course 2. Resizing The other important Transformation is Resizing (also called dilation, contraction, compression, enlargement or even expansion ). First, it creates a translation matrix, M T, then multiplies it with the current matrix object to produce the final transform matrix:Look carefully at the form of each standard 2×2 matrix that describes the given transformation. New Year Special Convoy (2002) Galvatron (2005) Galvatron II (2005) The first true Matrix of Leadership toy was a silver die-cast metal accessory which came with Takara's "New Year Special" reissue of the original Generation 1 Optimus Prime figure. The notes cover applications of matrix diagonalization (Boas 3. Horn ABSTRACT We consider the linear transformation T (X) = AX - CXB where A, C E M , B E Ms. (12) Premultiplication of J by These linear algebra lecture notes are designed to be presented as twenty ve, fty minute lectures suitable for sophomores likely to use the material for applications but still requiring a solid foundation in this fundamental branch To follow up user80's answer, you want to get transformations of the form v --> Av + b, where A is a 3 by 3 matrix (the linear part of transformation) and b is a 3-vector. My favorite is GPS occultation. 
Although we would almost always like to find a basis in which the matrix representation of an operator is If many applications of diagonalization of a matrix have been mentioned, the reduction of quadratic forms is also one important application (you could find some examples in the chapter 6 of the 7. 2 other things passed into that method are the rotation matrix R and matrix I. Generic affine transformations are represented by the Transform class which internaly is a | {
"domain": "kacpergomolski.pl",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9845754506337405,
"lm_q1q2_score": 0.8183230388271673,
"lm_q2_score": 0.831143045767024,
"openwebmath_perplexity": 628.818077638687,
"openwebmath_score": 0.6772881150245667,
"tags": null,
"url": "http://kacpergomolski.pl/rvxm/matrix-transformation-applications.php"
} |
newtonian-mechanics, classical-mechanics, momentum, oscillators
Title: Linear momentum of atoms of a molecule and their frequencies This exerpt on "Normal Modes of a Diatomic Molecule" is from Introduction to Mechanics Kleppner and Kolenkow:
Suppose
we have a polyatomic molecule model with N masses and several
springs coupling them. We now look for special solutions of the
form
$x_i = a_i \sin(\omega t + \phi), \qquad i = 1, \dots, N$
where $a_i$ is the vibration amplitude of the $i$th mass. The phase factor $\phi$ is also the same for each mass. Note that in the
special solution we are looking for, each mass vibrates with the same
angular frequency $\omega$.
We justify the existence of such a solution by arguing that if the masses
were vibrating with different frequencies, it would not be possible to
conserve linear momentum for an isolated molecule."
The bolded line explains that the atoms of polyatomic molecules move with the same frequency; otherwise, their linear momentum wouldn't be conserved. I searched but didn't find any relation between frequency and linear momentum conservation. Could anyone please explain on what basis those lines were written? They are just arguing that if the atoms vibrated with different frequencies, then the center of mass of the molecule would be oscillating as a function of time. Since this could only occur if an external force were acting on the molecule, which there is not, they conclude that the atoms must all have the same vibration frequency. Another way to say this is that the overall momentum of the molecule must be conserved (force is the change in momentum with time).
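This can be checked numerically for a simple 1-D model: two equal masses coupled by one spring. The sketch below (my own construction, not from the book) finds the normal modes and shows that the vibrational mode carries zero total momentum:

```python
import numpy as np

m, k = 1.0, 4.0                      # mass and spring constant (arbitrary values)
K = k * np.array([[1.0, -1.0],
                  [-1.0, 1.0]])      # stiffness matrix for coordinates x1, x2

# Normal modes solve K a = m omega^2 a; eigh returns ascending eigenvalues
omega_sq, modes = np.linalg.eigh(K / m)

print(omega_sq)        # [0, 2k/m]: rigid translation plus one vibration
vib = modes[:, 1]      # vibrational-mode amplitudes
print(vib.sum())       # ~0: amplitudes cancel, so total momentum is conserved
```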
For a concrete example, consider a diatomic molecule like O$_2$. The only way to have one of the atoms oscillating without oscillating the center of mass is to have the other atom moving with the symmetry to exactly counteract the momentum of the first (either with an equal and opposite vibration, or they could be rotating together about the center of mass). | {
"domain": "physics.stackexchange",
"id": 48156,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, classical-mechanics, momentum, oscillators",
"url": null
} |
electromagnetism, magnetic-fields, field-theory, conventions, notation
$$ B^k=\epsilon^{ijk} F^{ij} $$
In this way no summation is required, you simply compute the value of $\epsilon^{ijk}$.
For example, $k=3$, $i$ and $j$ are now free so I write $\epsilon^{ijk}$ giving a value to $i$ and the other to $j$, picking the simpler choice I have $B^3= \epsilon^{123} F^{12}= 1 F^{12}$. The other choice is $B^3= \epsilon^{213} F^{21}= -F^{21}= F^{12}$
So my questions are:
why that way of writing $B^k$ and not the other simpler one?
Why summing over permutation? I mean what tells me that I have to sum over the permutations and not just pick a value for $i$ and $j$ and compute the value of the Levi-Civita tensor? That's not how index notation works. You say that in
$$ B^k = \epsilon^{ijk}F^{ij}\tag{1}$$
you are "free to choose" the $i$ and $j$, but then you only make the two choices (1,2) and (2,1) for which that gives the correct results. What about $i=1,j=3$, or $i=2,j=2$? You chose those specific $i$s and $j$s because you knew what result you wanted, not because it's somehow a choice dictated by the notation (1).
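The difference between hand-picking one index ordering and genuinely summing over repeated indices can be seen numerically. A sketch with numpy's `einsum`: for an antisymmetric $F$ built from a vector $B$, the full sum over $i,j$ returns $2B$ rather than $B$, since each unordered pair contributes twice; this is why a factor $\tfrac12$ is often included in such relations.

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[k, j, i] = -1.0

# Antisymmetric "field strength" built from a vector B
B = np.array([1.0, 2.0, 3.0])
F = np.einsum('ijk,k->ij', eps, B)

# Full summation over BOTH i and j double-counts each unordered pair
recovered = np.einsum('ijk,ij->k', eps, F)
print(recovered)   # 2 * B
```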
To avoid such ambiguous choices, all indices in index notation must occur either on both sides of the equation or be summed over. The general convention is to sum over repeated indices, although in your relativistic setting the index position is actually relevant and the equation should be written
$$ B^k = \epsilon^{ijk} F_{ij}. \tag{2}$$ | {
"domain": "physics.stackexchange",
"id": 36754,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, magnetic-fields, field-theory, conventions, notation",
"url": null
} |
javascript, html, security, math-expression-eval
<html lang="en">
<head>
<title>Calculator</title>
<script>
function sqrt(x)
{
return Math.sqrt(x);
}
function abs(x)
{
return Math.abs(x);
}
function sin(x)
{
return Math.sin(x);
}
function cos(x)
{
return Math.cos(x);
}
function tan(x)
{
return Math.tan(x);
}
function arcsin(x)
{
return Math.asin(x);
}
function arccos(x)
{
return Math.acos(x);
}
function arctan(x)
{
return Math.atan(x);
}
function ln(x)
{
return Math.log(x);
}
function log(x)
{
return Math.log10(x);
}
function cbrt(x)
{
return Math.cbrt(x);
}
function exp(x)
{
return Math.exp(x);
}
function root(index, radicand)
{
return radicand**(1/index);
}
function logrtm(base, argument)
{
return Math.log(argument)/Math.log(base);
}
function dec(x)
{
return parseFloat(x);
}
function int(x)
{
return Math.round(x);
}
function min(stuff)
{
let items = []
if (arguments.length > 1)
{
for (var i = 0; i < arguments.length; i++) items[i] = arguments[i];
}
else if (!isNaN(stuff))
{
items = [stuff]
}
else
{
items = stuff;
}
return Math.min(...items);
}
function max(stuff)
{
let items = []
if (arguments.length > 1)
{
for (var i = 0; i < arguments.length; i++) items[i] = arguments[i];
}
else if (!isNaN(stuff))
{
items = [stuff]
}
else
{
items = stuff;
} | {
"domain": "codereview.stackexchange",
"id": 40911,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, html, security, math-expression-eval",
"url": null
} |
ros, rviz, ros-kinetic, robot, range
#print "ir2: {0}".format(ir1_range.range)
def range_infrared_sensor_3(ir_value):
ir3_range = Range()
ir3_range.header.stamp = rospy.Time.now()
ir3_range.header.frame_id = "/base_link"
ir3_range.radiation_type = 0
ir3_range.field_of_view = 1.0
ir3_range.min_range = min_range
ir3_range.max_range = max_range
ir3_range.range = ir_value
ir3_pub.publish(ir3_range)
#print "ir3: {0}".format(ir1_range.range)
def range_array_infrared_sensor(ir_left, ir_center, ir_right):
ir_array = [ir_left, ir_center, ir_right]
array_ir_range = Range()
array_ir_range.header.stamp = rospy.Time.now()
array_ir_range.header.frame_id = "/base_link"
array_ir_range.radiation_type = 0
array_ir_range.field_of_view = 1.0
array_ir_range.min_range = min_range
array_ir_range.max_range = max_range
array_ir_range.range = ir_array
array_ir_pub.publish(array_ir_range)
#------------------------------------------------------------------
def move_robot():
move.linear.x = 1
move.linear.z = 1
robot_pub.publish(move)
rate.sleep() | {
"domain": "robotics.stackexchange",
"id": 31083,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rviz, ros-kinetic, robot, range",
"url": null
} |
reaction-mechanism, polymers, safety
Title: Do NaHCO3 and cyanoacrylate react? I just saw this video about using baking soda + cyanoacrylate(CA) glue as a substrate+resin.
Neat, but do CA and $\ce{NaHCO3}$ react in any way?
If so, what does the equation look like and what is the resulting structure?
If not at room temperature, do they react at higher temperatures, given that CA curing is exothermic?
Is the CA + $\ce{NaHCO3}$ stable once it dries, is there any residue that would be toxic or irritating?
(not sure what to tag this with, feel free to add tags) "Baking soda"(*) speeds up the cyanoacrylate curing, as its basic anion initiates anionic polymerization, similarly to how benzoyl peroxide initiates radical polymerization of styrene to polystyrene.
$$\ce{HO-CO-O- ->[CH2=CR1R2] \\
HO-CO-O-CH2-CR1R2- ->[CH2=CR1R2] \\
HO-CO-O-CH2-CR1R2-CH2-CR1R2- ... etc}$$
While curing, stronger evaporation of monomeric cyanoacrylate is expected compared to standard curing, due to the faster reaction, which may be irritating. Once cured, the safety concerns are the same as for ordinary cyanoacrylate curing and usage, which has medicinal applications too.
Note that advice about medical safety, acute and especially long-term, is explicitly off-topic on this site, which can comment only on the chemical aspects of safe manipulation and precautions.
Higher temperature is an issue for the structural stability, as the remaining encapsulated baking soda may decompose, forming gaseous carbon dioxide.
(*) - "Baking soda" = sodium bicarbonate, $\ce{NaHCO3}$. Not to be confused with sodium carbonate, $\ce{Na2CO3}$ resp. $\ce{Na2CO3 . 10 H2O}$, aka "washing soda", which is caustic, while baking soda is not. | {
"domain": "chemistry.stackexchange",
"id": 17023,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reaction-mechanism, polymers, safety",
"url": null
} |
javascript, unit-testing
// when
const diff = Calculator.difference(number1, number2)
// then
expect(diff).toEqual(0);
});
...
It would be a totally different story if your difference function was a helper function specific to the calculator. Then the function would not be a part of the interface of the calculator and you should not test it directly, as it is a bad practice to test "private" functions, because you tie the tests to the implementation. This means that whenever you will refactor, the tests will also break. You can make the tests for a function while you develop it, but delete these tests afterwards, as they are not useful anymore. You should test the private functions indirectly, by testing the "publicly visible" interface, which is your main concern in the first place. | {
"domain": "codereview.stackexchange",
"id": 41653,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, unit-testing",
"url": null
} |
fourier-series, gibbs-phenomenon
In the first case, he does the Lanczos smoothing derivation where he averages the function by running a rectangular window through it (convolving with rect). What he shows is, not surprisingly, that the coefficients get multiplied by this term:
(2) $\frac{\sin (k\pi/N)}{k\pi/N}$
which should look very familiar to you, of course, because it is the $sinc$ function. However, what you are missing is that $k$ is discrete! It is actually a sampled version of the $sinc$ function.
Effectively, he convolves a function with a $rect$ and shows that the coefficients of the resulting Fourier series (read loosely as: sampled Fourier transform) is a sampled $sinc$ function. No surprise there. Convolution thm says convolution in time turns into multiplication in frequency. Fourier transform, which you know, of $rect$ is $sinc$, so convolution by $rect$ is multiplication by $sinc$ in frequency space.
In the next section, he does something different. He takes the Fourier series (read: sampled Fourier transform) and removes all the higher frequency coefficients. In effect, he is taking the Fourier transform, multiplies by a $rect$, and then samples it. For simplicity, he sets all the Fourier coefficients that did not get discarded to $1$.
What he's left with is this:
(3) $h(\theta)=\frac{\sin\left((N+1/2)\theta\right)}{\sin(\theta/2)}$
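As a quick numerical sanity check (my addition, not part of the original answer), the closed form in (3) is exactly the partial Fourier sum with all $2N+1$ retained coefficients set to $1$, i.e. the Dirichlet kernel:

```python
import math

def dirichlet_closed_form(theta, N):
    # h(theta) = sin((N + 1/2) * theta) / sin(theta / 2)
    return math.sin((N + 0.5) * theta) / math.sin(theta / 2)

def truncated_series(theta, N):
    # Partial Fourier sum with every retained coefficient set to 1:
    # sum_{k=-N}^{N} e^{i k theta} = 1 + 2 * sum_{k=1}^{N} cos(k theta)
    return 1.0 + 2.0 * sum(math.cos(k * theta) for k in range(1, N + 1))

# The two expressions agree away from theta = 0 (where both tend to 2N + 1).
for theta in (0.3, 1.1, 2.5):
    assert abs(dirichlet_closed_form(theta, 8) - truncated_series(theta, 8)) < 1e-9
```

Away from $\theta = 0$ the two expressions match to machine precision, which is what makes (3) the "truncate in frequency" counterpart of the sinc.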
And you ask, why isn't this a $sinc$ function? Can you answer it now?
The quick answer is because what is applied in frequency domain is not just a truncation, it's a truncation and a sampling operator. What you know is that when you truncate (i.e. multiply by rect) in frequency domain, the time domain gets convolved with a $sinc$ (by Convolution thm and Fourier transform of $rect$), but this is without sampling. | {
"domain": "dsp.stackexchange",
"id": 658,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fourier-series, gibbs-phenomenon",
"url": null
} |
python, python-3.x, numpy, coordinate-system, geospatial
projection = dict(zip(times, points.tolist()))
pprint(projection)
if __name__ == '__main__':
dev_dist()
Output
{'+15min': [32.92161221650089, -98.9548217948532],
'+30min': [32.83322142714634, -98.96961414990395],
'+45min': [32.744827642960495, -98.98437722206711],
'+60min': [32.65643087492691, -98.99911116737137]} | {
"domain": "codereview.stackexchange",
"id": 42999,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, numpy, coordinate-system, geospatial",
"url": null
} |
$$= 1 - [Binomial\_cdf(k-1;n,\frac{1}{m})]^m$$
-
I believe the exponent should be an m, not a k as you have. I have included another step that makes this more clear. I may be missing something, or misunderstanding your question, so please clarify if you still don't agree. – Daniel Johnson Feb 8 '12 at 16:40
To help resolve these differences, I have edited the question to show the value Excel actually returns for the formula as given (with $k$ in the exponent). Putting $m$ in the exponent yields a value of 0.6398; compare this to the simulation results. – whuber Feb 8 '12 at 16:56
So the rest of your question is how to represent this expression in R? I don't use R, so I can't help you with that, but i would guess there is a binomial cdf function that you can use and if not, the summation formulation should be pretty trivial to code. – Daniel Johnson Feb 8 '12 at 19:50
@DanielJohnson: Thanks for your explanation and correction. I can now use R for simulation (as well as computing the theoretical approximation), thanks to whuber. The only part remaining is how not to depend on an approximation that is not so good (it introduces a relative error on p of nearly 1%, and likely much worse in other setups). – fgrieu Feb 8 '12 at 20:21
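For reference, the simulation discussed in the comments takes only a few lines; this is a Python sketch rather than R, and the function name and trial count are my own choices:

```python
import random
from collections import Counter

def p_sim(n, m, k, trials=20000, seed=1):
    # Monte Carlo estimate of p(n, m, k): the probability that some value
    # among m equally likely ones is drawn at least k times in n draws.
    rng = random.Random(seed)
    hits = sum(
        max(Counter(rng.randrange(m) for _ in range(n)).values()) >= k
        for _ in range(trials)
    )
    return hits / trials

# Sanity check: n = 2, m = 2, k = 2 means both draws must match, so p = 1/2 exactly.
assert abs(p_sim(2, 2, 2) - 0.5) < 0.02
```

An estimate like this is a useful cross-check against both the approximation with $m$ in the exponent and the exact multinomial expression below.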
The collection of numbers has a multinomial distribution with $m$ categories and $n$ sample size. Letting $N_i$ be the number of times the $i$th category is chosen/repeated, we have $$(N_1,\dots,N_m)\sim multinomial\left(n;\frac{1}{m},\frac{1}{m},\dots,\frac{1}{m}\right)$$ Now leveraging off of @danieljohnson's answer the probability we are after is
$$p(n,m,k)=1-Pr(N_1<k,\dots,N_m<k)$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9833429575500228,
"lm_q1q2_score": 0.8318515439489159,
"lm_q2_score": 0.8459424431344438,
"openwebmath_perplexity": 613.5908381593049,
"openwebmath_score": 0.817209005355835,
"tags": null,
"url": "http://stats.stackexchange.com/questions/22450/odds-of-drawing-at-least-k-identical-values-among-m-after-n-draws/22772"
} |
java, game, swing
public class ExtraPanel extends JPanel {
/**
*
*/
private static final long serialVersionUID = -4682418373205077458L;
private int frameWidth = 100;
private int delay = 100;
private int actualDelay = 100;
private static final int PREFERRED_HEIGHT = 40;
public boolean timerStopped = true;
public boolean firstClick = true;
private int score = 0;
public ExtraPanel(){
super();
setPreferredSize(new Dimension(2000, PREFERRED_HEIGHT));
draw();
}
private void draw() {
ActionListener drawer = new ActionListener(){
public void actionPerformed(ActionEvent evt){
repaint();
if (!timerStopped){
if (delay > 0){
delay-= 1;
}
else {
timerStopped = true;
firstClick = true;
}
}
}
};
Timer t = new Timer(10, drawer);
t.start();
}
public void paintComponent(Graphics g){
super.paintComponent(g);
setBackground(Color.WHITE);
g.setColor(Color.BLACK);
g.drawString(getScore(), getFrameWidth()/2-g.getFontMetrics().stringWidth(getScore()), getHeight() - 2);
g.setColor(Color.RED);
g.fillRect(0, 0,
getFrameWidth() - (getFrameWidth()-getFrameWidth()*delay/actualDelay),
getHeight()/2);
}
private String getScore() {
return Integer.toString(score);
}
public void setScore(int score){
this.score = score;
}
public void upScore(){
score++;
}
public int getDelay() {
return delay;
}
public void setDelay(int delay) {
this.delay = delay;
this.actualDelay = delay;
} | {
"domain": "codereview.stackexchange",
"id": 18335,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, game, swing",
"url": null
} |
corresponds to the root that you're taking. Radical equations are equations in which the unknown is inside a radical. For problems 1–4, write the expression in exponential form. Neither of 24 and 6 is a square, but what happens if I multiply them inside one radical? In other words, we can use the fact that radicals can be manipulated similarly to powers: there are various ways I can approach this simplification. To indicate some root other than a square root when writing, we use the same radical symbol as for the square root, but we insert a number into the front of the radical, writing the number small and tucking it into the "check mark" part of the radical symbol. In math, a radical is the root of a number. Radicals are the undoing of exponents, and can be multiplied like other quantities. There are certain rules that you follow when you simplify expressions in math. $\left(\sqrt{x-1}\right)^2 = (x-7)^2$. You probably already knew that $12^2 = 144$, so obviously the square root of 144 must be 12. Variables with exponents also count as perfect powers if the exponent is a multiple of the index. The square root of 9 is 3 and the square root of 16 is 4. Before we work an example, let's talk about rationalizing radical fractions.
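The multiplication of $\sqrt{24}$ and $\sqrt{6}$ inside one radical, asked about above, works out to a perfect square:

```latex
\sqrt{24}\,\sqrt{6} \;=\; \sqrt{24 \cdot 6} \;=\; \sqrt{144} \;=\; 12
```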
For instance, 4 is the square of 2, so the square root of 4 contains two copies of the factor 2; thus, we can take a 2 out front, leaving nothing (but an understood 1) inside the radical, which we then drop: Similarly, 49 is the square of 7, so it contains two copies of the factor 7: And 225 is the square of 15, so it contains two copies of the factor 15, so: Note that the value of the simplified radical is positive. This problem is | {
"domain": "m-i-n-d.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.97112909472487,
"lm_q1q2_score": 0.8028438720280265,
"lm_q2_score": 0.8267117898012105,
"openwebmath_perplexity": 1076.595914595042,
"openwebmath_score": 0.7245129346847534,
"tags": null,
"url": "http://m-i-n-d.org/qmpchdy/2c52ec-radicals-math-examples"
} |
The article synthesizes prior works using a unified notation, enabling straightforward application in robotics. The magnetization need not be static; the equations of magnetostatics can be used to predict fast magnetic switching events that occur on short time scales. Magnetic fields (Biot-Savart) summary: for a current loop of radius $R$, at distance $x$ on the loop axis, $B_x = \frac{\mu_0 I R^2}{2(x^2+R^2)^{3/2}}$ and $B_{center} = \frac{\mu_0 I}{2R}$; for a straight wire of finite length, $B = \frac{\mu_0 I}{4\pi a}(\cos\theta_1 - \cos\theta_2)$; for an infinite wire, $B = \frac{\mu_0 I}{2\pi a}$. Using the Biot-Savart law, we find out that the magnetic field is μ0⋅I(t)/2. In using the Biot-Savart law for a finite wire, I am having trouble understanding the angles (cf. electrostatics, the study of electric fields generated by stationary charges). Use the law of Biot and Savart to find the magnitude of the magnetic field at point P due to the 1. The top wire has current 2 A to the right, and the bottom wire has current 3 A to the left. The Biot-Savart law says that if a wire carries a steady current $I$, the magnetic field $d\mathbf{B}$ at a point P associated with an element of the wire $d\mathbf{s}$ has the following properties: the vector $d\mathbf{B}$ is perpendicular both to $d\mathbf{s}$ (which is a vector with units of length, in the direction of the current) and to the unit vector $\hat{r}$ directed from the element to P. There's a bit of an art to setting up the. The Biot-Savart integral is taken over the finite wire length. We have therefore shown that. Thus, this is all about the Biot-Savart law, where $\mu$ is the permeability of the medium surrounding the wire. • When the wire is perpendicular to the plane of the paper, the field is in the plane of the paper. We start by describing the Biot–Savart law since Ampere's law may be derived from the Biot–Savart law. It is clear from these force laws that an observer could say that they were in | {
"domain": "effebitrezzano.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9715639669551474,
"lm_q1q2_score": 0.8370394787112369,
"lm_q2_score": 0.8615382076534742,
"openwebmath_perplexity": 632.2104021153492,
"openwebmath_score": 0.7613797187805176,
"tags": null,
"url": "http://fxks.effebitrezzano.it/biot-savart-law-finite-wire.html"
} |
python, beginner, algorithm, regex, sqlite
Note that this way you also don't need to worry about Python to database type conversions and quotes inside parameters - it will all be handled by the database driver.
Performance
instead of re-connecting to the database multiple times, think about connecting to a database once, processing all the data and then closing the connection afterwards
same idea about the use of requests - you may initialize a Session() and reuse
use lxml instead of html.parser as an underlying parser used by BeautifulSoup
you can use SoupStrainer class to parse only the desired element, which will allow you to then simply get the text and split by space instead of applying a regular expression:
parse_only = SoupStrainer(class_="nb-connect-fofo")
page = BeautifulSoup(page_html, 'lxml', parse_only=parse_only)
return page.get_text().split()[0] | {
"domain": "codereview.stackexchange",
"id": 26255,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, algorithm, regex, sqlite",
"url": null
} |
To achieve $O(n^2)$, we first sort the lengths of line segments ($l_1, l_2,\ldots, l_n$) in non-decreasing order. Let $a = l_i$ and $b = l_j$ with $i < j$, and let $k$ be the position in which $l_k$ is closest to $\sqrt{l_i^2 + l_j^2}$. When $j$ runs from $i+1$ to $n$, $k$ also increases gradually to $n$. Thus finding $k$ has an $O(1)$ amortized run time, i.e., the whole algorithm takes $O(n^2)$.
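For testing either of the fast algorithms, a brute-force $O(n^3)$ baseline is handy; this sketch (my own, not part of the answer) uses Heron's formula:

```python
from itertools import combinations
from math import sqrt

def max_triangle_area(lengths):
    # O(n^3) brute force over all triples, using Heron's formula.
    # Useful as a correctness baseline for faster implementations.
    best = 0.0
    for a, b, c in combinations(lengths, 3):
        s = (a + b + c) / 2.0
        val = s * (s - a) * (s - b) * (s - c)
        if val > 0:  # positive iff the three lengths form a non-degenerate triangle
            best = max(best, sqrt(val))
    return best

# The 3-4-5 right triangle has area 6; no other triple from this set beats it.
assert abs(max_triangle_area([1, 2, 3, 4, 5]) - 6.0) < 1e-9
```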
Update 11/26/2013:
An easier $O(n \log n)$ algorithm is to sort $l_i$ in a non-decreasing order, and find $3\leq k \leq n$ that maximizes the area of the triangle assembled from $l_{k-2}, l_{k-1}$, and $l_{k}$. It follows from the fact that if $i ,j < k$ then $S(l_i, l_j, l_k) \leq S(l_{k-2}, l_{k-1}, l_k)$ (can be proved by taking the derivative of $S$ and that $l_k$ is the length of the longest side). | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9766692284751636,
"lm_q1q2_score": 0.8052265893288558,
"lm_q2_score": 0.8244619220634456,
"openwebmath_perplexity": 246.21505426512917,
"openwebmath_score": 0.7692148685455322,
"tags": null,
"url": "https://cs.stackexchange.com/questions/18459/given-the-set-of-length-of-triangle-find-the-maximum-area-triangle"
} |
javascript, ajax, laravel
javascript:void(0)/onclick
No! This breaks semantic HTML. If you have <a href='javascript:void(0)>, what you actually want is a <button> instead. A link and a button have two very different semantics.
Do not use inline JavaScript like onclick: use event handlers anyway. Any good content security policy (which helps protect against XSS) will disable inline javascript and thus those attributes will not work.
Generating HTML elements
I'd strongly recommend generating HTML elements based on template tags in your HTML body and using Document.cloneNode instead of pulling together JavaScript strings in your code to help with separation of concerns. This would also work very nicely with something like lodash's template elements. | {
"domain": "codereview.stackexchange",
"id": 20874,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, ajax, laravel",
"url": null
} |
cosmology, universe, planets
Title: Age of universe estimates
I was recently involved in a discussion on a sister site (now removed) regarding how tightly coupled physics is with the age of the Universe (and Earth).
I believe that the Earth and the Universe are both billions of years old, but don't know enough on why exactly other than having confidence in peer reviewed science. Moreover, it would be helpful if I knew which parts of physics are tightly coupled with the current age estimate. So,
Are there any notable hypotheses or entire fields of modern physics that both:
do not rely on the age of the Earth for their predictive and explanatory power and
do not predict an old Earth
If so, which fields depend (directly or indirectly) on the age of the Earth, and which do not?
Put differently,
consistent with old Earth = hypotheses that either rely on the age of the Earth for their predictive power or predict an old Earth
$M=$ Modern physics.
$M_0=$ Modern physics consistent with old Earth.
$M_n=$ Hypotheses that both do not rely on the age of the Earth for their predictive power and do not predict an old Earth. | {
"domain": "physics.stackexchange",
"id": 10358,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, universe, planets",
"url": null
} |
organic-chemistry, reaction-mechanism
Title: Mechanism for Bial's test
In biochemistry laboratory courses, it's common for undergraduate students to perform identification of carbohydrates and their functional groups. Bial's test, a method to check if any pentose is present, is one of several ways to do the identification. The overall reaction can be found on the Wikipedia page (https://en.m.wikipedia.org/wiki/Bial's_test).
The conversion of pentose to furfural is known to me (because I've found the paper), but not the second reaction. I guess some organometallic reactions are involved, but I am not sure how to write the mechanism.
I've tried to search for research articles related to the mechanism in Google, Google Scholar, ... or even Scopus, but can't find any paper related to it. This is indeed surprising to me, because I thought this classical test would have been investigated exhaustively, whether by spectrometry or from computational perspectives.
Any thought or idea is appreciated to present the mechanism (even if the mechanism is produced by direct computation). The acid-catalyzed condensation of furfural with orcinol to form structure 1 is explained here for Molisch's test: Clarification in the mechanism for Molisch's test for glucose. Ferric chloride is the oxidant of 1 (see 2 ----> 3). Acid catalyzes the ring closure of quinoid structure 3 to intermediate 4, which rearomatizes to 5 with loss of water. Direct cyclization of 2 with acid is unlikely because it would interrupt the aromaticity of one of the orcinol rings. | {
"domain": "chemistry.stackexchange",
"id": 11420,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, reaction-mechanism",
"url": null
} |
ros, laser-scan-matcher
Title: Using multiple lasers with laser_scan_matcher
Hello,
Has anyone tried using laser_scan_matcher with multiple laser sensors? There is nothing about it in the documentation, but that hasn't been updated since Fuerte. I am using laser_scan_matcher for the odom transform of an omnidirectional robot and I have 2 LRFs, one facing forward and one facing rearward. I'm wondering if having laser_scan_matcher subscribe to the scan topic, where I am publishing the data from both lasers, would work or if that will cause a problem with it trying to match scans that are completely different.
Thanks
Originally posted by Icehawk101 on ROS Answers with karma: 955 on 2015-04-23
Post score: 2
I figured it out. After looking through the node's code I found that it saves the frame id of the first scan it gets, then for subsequent scans it ignores those that don't have the same frame id. So it doesn't use multiple sensors, but you can publish the data from multiple sensors to the /scan topic without having to worry about laser_scan_matcher throwing a fit.
Originally posted by Icehawk101 with karma: 955 on 2015-04-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21509,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, laser-scan-matcher",
"url": null
} |
but remember that when graphing polar equations, standard form is used to determine the equation's general shape. The lemniscate should have two petals and the distance from the origin to the end of the tip of each petal should be 2. The material you see below is borrowed heavily from Yosh's Graphing Polar Equations part 1, part 2, and part 3. A full turn is $2\pi \approx 6.28$, whereas $360^{\circ}$ is, well, 360. How do you graph the lemniscate $r^2=36\cos 2\theta$? They are not spirals; perhaps the name comes from the similar form of the equation for the logarithmic spiral. Area of one arch $=3\pi a^2$. The organizer gives examples of limacons, lemniscates, and polar roses. The polar axis is usually drawn in the direction of the _____ x-axis. This Trigonometry PreCalculus Polar Curves Graphic Organizer Summary is designed for PreCalculus - Trigonometry and can be used as a review for Calculus 2 or AP Calculus BC classes before studying the calculus of polar functions. The fixed points, or foci, are a distance $2c$ apart. The topics covered are: 1) translating from cartesian to polar coordinates and polar to cartesian coordinates, and 2) how to graph some of the more basic (and most used) polar functions. But, with practice, it gets a lot easier, and you'll get much, much faster. Polar equations: the following table summarizes some common polar graphs and forms of their equations. | {
"domain": "sicituradastra.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787864878118,
"lm_q1q2_score": 0.8024453791113411,
"lm_q2_score": 0.8128673246376009,
"openwebmath_perplexity": 2009.7685798538423,
"openwebmath_score": 0.7826905250549316,
"tags": null,
"url": "http://jiri.sicituradastra.it/polar-lemniscates.html"
} |
python, javascript, beginner, algorithm, strings
"Doc, note: I dissent. A fast never prevents a fatness. I diet on cod" is not a palindrome!
--------------------------------------------------
The reverse of "redder" is "redder"
"redder" is a palindrome!
--------------------------------------------------
The reverse of "madam" is "madam"
"madam" is a palindrome!
--------------------------------------------------
The reverse of "1991" is "1991"
"1991" is a palindrome!
--------------------------------------------------
The reverse of "refer" is "refer"
"refer" is a palindrome!
Boolean expression returns
This applies to both your Javascript and Python implementations:
if (reversed_string_1 === original_string && reversed_string_2 === original_string && reversed_string_3 === original_string && reversed_string_4 === original_string) {
return true;
// If the original string is not a palindrome
} else {
return false;
} | {
"domain": "codereview.stackexchange",
"id": 36237,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, javascript, beginner, algorithm, strings",
"url": null
} |
condensed-matter, hamiltonian, second-quantization, majorana-fermions
Title: What are the fermions in the SYK model doing?
The Hamiltonian of the SYK model is
\begin{equation}
H = \mathcal{N}\sum_{ijkl}^N J^{ijkl} \chi_i \chi_j \chi_k \chi_l
\end{equation}
where $\mathcal{N}$ is some normalization to make the energy scale with $N$ and $\chi_i$ is a Majorana operator. Different reviews on the SYK model call the variables $i=1,\dots,N$ sites, others talk about $\chi$ as a vector with $N$ components like it was a spin $N$ particle. In relation to this question, I don't understand whether the number of particles is conserved in this Hamiltonian. In other cases, like the Hubbard model, one gets a term like
\begin{equation}
H = \mathcal{N}\sum_{r,r'} J^{r,r'} a^\dagger_r a_{r'}
\end{equation}
where the interpretation is that the Hamiltonian destroys a particle on the site $r'$ and creates another one at site $r$. On the SYK Hamiltonian, however, since the Majorana fermions are self-adjoint, any operator can work as both creation and annihilation operators. This means that any term in the Hamiltonian could create four particles, destroy four particles or anything in between. So the question is: How should we interpret the SYK Hamiltonian (or any Hamiltonian with Majorana fermions, for that matter)? It is standard to consider the fermions in the SYK model as all of them being at the same `site' but with an all-to-all coupling. SYK model is considered as a 1-dimensional (or 0-dimensional in condensed matter language) model. While I have seen people refer to them being as different sites, I am not really sure if it matters. There are other papers in which various chains of SYK models are coupled together to construct a higher-dimensional model and in that sense the $i$ index should not be confused with spatial sites. | {
"domain": "physics.stackexchange",
"id": 57924,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "condensed-matter, hamiltonian, second-quantization, majorana-fermions",
"url": null
} |
javascript, jquery
// Access to public method:
console.log($.funkyTown('foo_public_method', 'Wha?'));
});
My questions:
Do you see anything out of the ordinary with my above plugin template? If so, how could it be improved?
Related to #1 above: As you can probably see, I'm trying to account for the various needs like passing options, private and public methods and console handling... What am I missing in terms of useful features?
This line $.funkyTown = function(method) { makes the javascript linter throw a warning: "warning: anonymous function does not always return a value"; is there any way for me to fix this? Could I just add return false to the end (right before the closing };)? Update: Looks like I just needed to use a different tool.
Because I'm writing a utility plugin (one that will never be used directly on an element) do I still need to return this in my public methods in order to make things chainable (see the return this; // Is this needed for chaining? line of code above)? Should I even worry about chaining for this type of plugin?
Could you provide any other feedback to help me improve my code?
What's the easiest/best way to pass settings from init to other private and public functions? Normally, I'd use .data() on $(this) to store settings and other stateful vars... Because there's no element, should I just pass settings as an argument to the other methods? Update: Doi! This was an easy one! I simply needed to initialize my settings outside of my public methods object.
UPDATE 1:
I've updated my code (above) to reflect the things I've learned (i.e. the strike-through lines in numeric list above) since posting this question.
I've also added a new feature:
;(function($, window, document, undefined) {
// ...
}(jQuery, window, document)); | {
"domain": "codereview.stackexchange",
"id": 7951,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery",
"url": null
} |
file-formats, uniprot
02/18/2019
For the DE line it is noted
A block of DE lines may further contain multiple Includes: and/or Contains: sections and a separate field Flags: to indicate whether the protein sequence is a precursor or a fragment:
At present this seems to be related to the answer. When I have the file correctly parsed, I can then check for a correspondence as verification.
Thank you for asking this question. "Flag" is actually an obsolete notation which predates the introduction of proper evidence attribution in UniProtKB, and which escaped us in our efforts to update the documentation to use the correct terminology. The correct term would now be "Evidence", and the next version of the user manual will have this corrected (release 2019_06 due on July 3rd, 2019).
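For what it's worth, pulling the Flags: field (the precursor/fragment indicator quoted in the question) out of a DE block needs only a short helper. The DE block below is a hypothetical example written in the flat-file style the manual describes, and the function name is my own:

```python
import re

# A hypothetical DE block in the UniProtKB flat-file style (not a real entry).
de_block = """\
DE   RecName: Full=Insulin;
DE   Contains:
DE     RecName: Full=Insulin B chain;
DE   Contains:
DE     RecName: Full=Insulin A chain;
DE   Flags: Precursor;
"""

def parse_de_flags(block):
    # Collect every value carried by a 'Flags:' field in a DE block.
    flags = []
    for line in block.splitlines():
        m = re.match(r"DE\s+Flags:\s*(.+?);?\s*$", line)
        if m:
            flags.extend(p.strip() for p in m.group(1).split(";") if p.strip())
    return flags

assert parse_de_flags(de_block) == ["Precursor"]
```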
Before the current way of evidencing annotations was introduced in UniProtKB, the following "non-experimental qualifiers" or "flags" were used: By similarity, Potential, Probable. Absence of such a flag would generally imply that the information is based on an experiment.
When in doubt, please don't hesitate to contact the UniProt helpdesk. | {
"domain": "bioinformatics.stackexchange",
"id": 1008,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "file-formats, uniprot",
"url": null
} |
will you make from the moment you are born until you pass away, based on your genetic code and
your DNA? And if so, how close could they really get, and http://dewanonton.inube.com how much random chance would
shake that prediction. Is it impossible to know - or could a person's entire
life be foretold? And having said that, if you could
predict the life of one individual, might you be able to predict the lives of the 7 billion
humans on this planet?
Watching the film Megamind for free also gives us https://groups.google.com/forum/#!topic/dewanonton
room to learn about all the characters in the film. Brad Pitt and Will Ferrell
have brought the characters Metro Man and Megamind to life by lending
their voices. The other important characters in the film are as follows: the voice of Tighten is provided
by Jonah Hill, the voice of Roxanne Ritchi is actually Tina Fey, and the voice of Davis Cross
is behind http://logdown.com/account/posts/2832400-layarkaca21/preview the character Minion. The film's entire plot synopsis is also available for free on online sites.
Football Predictions | {
"domain": "ryanhmckenna.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9852713861502773,
"lm_q1q2_score": 0.8078174370668469,
"lm_q2_score": 0.8198933293122506,
"openwebmath_perplexity": 9639.77198396295,
"openwebmath_score": 0.2637558579444885,
"tags": null,
"url": "http://www.ryanhmckenna.com/2015/01/interpolation-search-explained.html"
} |
# Probability of a Random Walk crossing a straight line
Let $(S_n)_{n=1}^{\infty}$ be a standard random walk with $S_n = \sum_{i=1}^n X_i$ and $\mathbb{P}(X_i = \pm 1) = \frac{1}{2}$. Let $\alpha \in \mathbb{R}$ be some constant. I would like to know the value of
$$\mathcal{P}(\alpha) := \mathbb{P}\left(\exists \ n \in \mathbb{N}: S_n > \alpha n\right)$$
In other words, I am interested in the probability that a random walk $(S_n)_{n=1}^{\infty}$ crosses the straight line through the origin with slope $\alpha$.
Since the standard random walk is recurrent, it follows that $\mathcal{P}(\alpha) = 1$ for $\alpha \leq 0$, while obviously $\mathcal{P}(\alpha) = 0$ for $\alpha \geq 1$. Hence the non-trivial part and the part I am interested in is the region $\alpha \in (0,1)$. For this region we know that $\mathbb{P}(S_1 > \alpha) = \frac{1}{2}$, hence $\mathcal{P}(\alpha) \geq \frac{1}{2}$, but finding an exact value seems difficult.
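A quick Monte Carlo sketch (my own, with arbitrary parameter choices) gives a feel for $\mathcal{P}(\alpha)$ on $(0,1)$; the finite horizon makes it a slight underestimate:

```python
import random

def crossing_probability(alpha, walks=2000, horizon=500, seed=7):
    # Monte Carlo estimate of P(exists n <= horizon with S_n > alpha * n).
    # The finite horizon makes this a slight underestimate of P(alpha).
    rng = random.Random(seed)
    crossed = 0
    for _ in range(walks):
        s = 0
        for n in range(1, horizon + 1):
            s += 1 if rng.random() < 0.5 else -1
            if s > alpha * n:
                crossed += 1
                break
    return crossed / walks

# P(alpha) >= 1/2 on (0, 1), since already P(S_1 = 1 > alpha) = 1/2,
# and P(alpha) = 0 for alpha >= 1 because S_n <= n.
assert crossing_probability(0.5) >= 0.45
assert crossing_probability(1.0) == 0.0
```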
Note: One way to explicitly calculate $\mathcal{P}(0)$ is by
$$\mathcal{P}(0) = \sum_{n=1}^{\infty} \frac{C_n}{2^{2n-1}} = 1$$ | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9852713857177956,
"lm_q1q2_score": 0.8167296945956567,
"lm_q2_score": 0.8289388146603365,
"openwebmath_perplexity": 125.62264444457085,
"openwebmath_score": 0.9337767362594604,
"tags": null,
"url": "https://mathoverflow.net/questions/63789/probability-of-a-random-walk-crossing-a-straight-line"
} |
cc.complexity-theory, oracles, relativization, pspace, structural-complexity
To replace the $n^2$ with $nf(n)$, just let $p_n(x) = x^{f(n)} + f(n)$ instead.
Interestingly, if I'm understanding correctly, I believe this implies that if one could improve the Trevisan-Xue...
...to a pseudodeterministic/Bellagio algorithm (see Andrew Morgan's comment below), one would get that $\mathsf{BPEXP} \not\subseteq \mathsf{P/poly}$; or
...to a nondeterministic algorithm that guessed $polylog(N)$ bits but then ran in $poly(N)$ time, and such that on any accepting path it makes the same output (cf. $\mathsf{NPSV}$), it would imply $\mathsf{NEXP} \not\subseteq \mathsf{P/poly}$; or
... to a deterministic algorithm, one would get $\mathsf{EXP} \not\subseteq \mathsf{P/poly}$.
On the one hand, this suggests why derandomizing the switching lemma further should be hard - an argument which I'm not sure was known before! On the other hand, this strikes me as a kind of interesting take on hardness versus randomness (or is this actually a new thing, oracles versus randomness?). | {
"domain": "cstheory.stackexchange",
"id": 3945,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cc.complexity-theory, oracles, relativization, pspace, structural-complexity",
"url": null
} |
If you think about it, there can only be one function which is its own derivative and which goes through (0, 1). Technically, it's a first order differential equation with one initial condition. You can easily approximate it with arbitrary precision by programming or using a spreadsheet: https://docs.google.com/spreadsheet/ccc?key=0Am_ePpIZW9YMdFI3dFlHOFoxWnpXOTVvWnh5X3FOeGc&hl=en_US
You could also just show that the limit on the right is its own derivative as well.
Alternate method using first principles with $e^x$:
$$\frac{de^x}{dx}=\lim_{h\to 0}\frac{e^{x+h}-e^x}{h}=e^x\lim_{h\to 0}\frac{e^{h}-1}{h}$$
So:
$$\lim_{h\to 0}\frac{e^h-1}{h}=1$$
Rearranging to make $e$ the subject:
$$e=\lim_{h\to 0}(h+1)^{1/h}=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$
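As a quick numerical sanity check of the two limits above (a sketch, not a proof):

```python
import math

# (1 + 1/n)^n approaches e as n grows
approx_e = (1 + 1 / 100000) ** 100000

# and (e^h - 1)/h approaches 1 as h shrinks
h = 1e-6
ratio = (math.exp(h) - 1) / h
```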
- | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787872422175,
"lm_q1q2_score": 0.8000875884060797,
"lm_q2_score": 0.8104789109591832,
"openwebmath_perplexity": 187.33747441139442,
"openwebmath_score": 0.9473238587379456,
"tags": null,
"url": "http://math.stackexchange.com/questions/69806/prove-the-definitions-of-e-to-be-equivalent"
} |
c++, functional-programming, c++14, template-meta-programming
template<typename T>
struct Reverse_<List<>, T>
{
typedef T Type;
};
template<typename T>
using Reverse = typename Reverse_<T, List<>>::Type;
template<typename T, typename U>
struct Merge_
{
typedef Cons<Head<T>, typename Merge_<Tail<T>, U>::Type> Type;
};
template<typename T>
struct Merge_<List<>, T>
{
typedef T Type;
};
template<typename T, typename U>
using Merge = typename Merge_<T, U>::Type;
template<typename, typename T, typename>
struct If_
{
typedef T Type;
};
template<typename T, typename U>
struct If_<Box<false>, T, U>
{
typedef U Type;
};
template<typename T, typename U, typename V>
using If = typename If_<T, U, V>::Type;
template<typename T, template<typename> class U>
struct Filter_
{
typedef If<U<Head<T>>, Cons<Head<T>, typename Filter_<Tail<T>, U>::Type>,
typename Filter_<Tail<T>, U>::Type> Type;
};
template<template<typename> class T>
struct Filter_<List<>, T>
{
typedef List<> Type;
};
template<typename T, template<typename> class U>
using Filter = typename Filter_<T, U>::Type;
template<template<typename, typename> class T>
struct Function2
{
template<typename U>
struct Apply_
{
template<typename V>
using Apply = T<U, V>;
};
template<typename U>
using Apply = Apply_<U>;
}; | {
"domain": "codereview.stackexchange",
"id": 15614,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++14, template-meta-programming",
"url": null
} |
(taking density $\rho$) that of a stack of discs each having mass $m(dz) = \pi r^2 \rho\,dz = \pi (Rz/h)^2 \rho\,dz$ and moment of inertia $I(dz) = \frac{1}{2} m(dz)\, r^2$: $\int_0^h \frac{1}{2}\pi\rho (Rz/h)^4\,dz = \frac{1}{10}\pi\rho R^4 h = \frac{3}{10} M R^2$. If the mass of each connecting rod is negligible, what is the moment of inertia about an axis perpendicular to the paper and passing through …. The shape we worked with was a semicircle with an axis perpendicular to its surface through its …. Therefore, the moment of inertia of a uniform solid sphere is $I = \frac{2}{5} M R^2$. Expression for moment of inertia: 1: Uniform rod, axis perpendicular to the length of the rod and passing through one of its ends: $I=\frac{1}{3}ML^{2}$. 2: Uniform rod, axis perpendicular to the length of the rod and passing through the center of mass of the rod: $I=\frac{1}{12}ML^{2}$. 3: Circle or circular ring. Parallel-axis theorem: $I = I_{cm} + Mh^2$, where $h$ is the perpendicular distance between the two axes. The origin is at the center of the rectangle. The distance $k$ is called the radius of gyration. However, we know how to integrate over space, not over mass. Now suppose we displace the axis parallel to itself by a distance $D$. Similarly, the greater the moment of inertia of a rigid body or system of particles, the greater is its resistance to change in angular velocity about a fixed axis of rotation. Moment of inertia is a measure of resistance to angular acceleration. The radius of gyration of a uniform rod of mass $m$ and length $l$ about an axis perpendicular to its length and at distance $l/4$ from one end will be …. Example 4: A thin uniform rod of length $\ell$ and mass $m$ is rotated about the axis which is perpendicular to the rod and passes through its …. Determine the linear acceleration of the tip of the rod. A uniform disk of mass $m$ is not as hard to set into rotational motion as a "dumbbell" with the same mass and radius. Need help | {
"domain": "billardschule-cronenberg.de",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9835969674838371,
"lm_q1q2_score": 0.8131512157307522,
"lm_q2_score": 0.8267117962054049,
"openwebmath_perplexity": 186.42280334111211,
"openwebmath_score": 0.5442878603935242,
"tags": null,
"url": "http://billardschule-cronenberg.de/the-moment-of-inertia-of-a-uniform-rod-about-an-axis-through-its-center-is.html"
} |
homework-and-exercises, electromagnetism
calculate the electric field "on the line of charge". Or on anything. The field exists at the location of the line of charge. At any point on line of charge you have an electric field produced by the sphere. This is E and not dE. What you sum are the forces on each piece of the line of charge. You have to calculate dF=Edq and sum these contributions. But pay attention to direction of each dF. They are vectors. Calculate component along the line and perpendicular to the line separately. | {
"domain": "physics.stackexchange",
"id": 71768,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, electromagnetism",
"url": null
} |
machine-learning, r, data-mining, logistic-regression
Title: Why didn't I get any significant variables in my logistic model? I decided to apply the logistic regression method to my categorical and quantitative data.
So, I followed these steps:
Eliminating the bad and inconsistent data.
Preparing the target variable (categorical variable).
Testing the dependencies between categorical variables and the
target variable using the chi-squared test, to select the variables
that are well linked with the target variable.
Testing the correlation between quantitative variables to avoid the
choice of two correlated variables at a time.
Crossing some variables to improve their significance.
After all this I do not find any significant variables in my logistic regression model knowing that the base is well coherent and well cleaned.
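As an aside, the chi-squared statistic used in the variable-selection step is easy to compute by hand; here is a minimal sketch on a made-up 2×2 contingency table (the counts are purely illustrative):

```python
# Hypothetical 2x2 table: rows = levels of a categorical predictor,
# columns = target classes.
table = [[30, 10],
         [20, 40]]

row_sums = [sum(r) for r in table]
col_sums = [sum(c) for c in zip(*table)]
total = sum(row_sums)

chi2 = 0.0
for i, row in enumerate(table):
    for j, obs in enumerate(row):
        exp = row_sums[i] * col_sums[j] / total  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
```

With 1 degree of freedom, a value above 3.84 rejects independence at the 5% level; a predictor whose table fails this test is a weak candidate for the model.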
I work with the R language and I used the glm function:
glm (formula, family = familytype (link = linkfunction), data =) | {
"domain": "datascience.stackexchange",
"id": 2277,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, r, data-mining, logistic-regression",
"url": null
} |
thermodynamics, entropy
GOOD NEWS. (3/5/18)
I've finally been successful in integrating Eqn. 11 analytically to obtain $T_1$ as an explicit function of $T_2$. The result is:
$$T_1=\frac{T_2}{\left[1+\left(\frac{T_{20}}{T_{10}}-1\right)\left(\frac{T_{20}}{T_2}\right)^{C_v/R}\right]}\tag{11a}$$
Note from this that, as would be expected, if $T_{10}=T_{20}$, $T_1=T_2$
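Eq. 11a is straightforward to evaluate numerically; a minimal sketch (with an illustrative $C_v/R = 5/2$, i.e. a diatomic ideal gas):

```python
def T1_of_T2(T2, T10, T20, Cv_over_R):
    """Eq. 11a: chamber-1 temperature as an explicit function of T2."""
    return T2 / (1 + (T20 / T10 - 1) * (T20 / T2) ** Cv_over_R)

# Sanity check of the remark above: equal initial temperatures give T1 = T2.
t1 = T1_of_T2(250.0, 300.0, 300.0, 2.5)
```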
CHANGE IN ENTROPY
The change in entropy for this system is given by:
$$\Delta S=(N_1s_1+N_2s_2)-(N_{10}s_{10}+N_{20}s_{20})\tag{12}$$
where the s's are molar entropies:
$$s=C_v\ln{(T/T_{ref})}+R\ln{(v/v_{ref})}\tag{13}$$where $T_{ref}$ and $v_{ref}$ are the temperature and molar volume at some arbitrary reference state; these parameters cancel out of the calculations. The specific volume $v_j$ for chamber j is given by $v_j=V_j/N_j$.
ADDITIONAL ANALYSIS ON 3/5/18
Intuitively, and, based on Eqns. 8 and 9 of the present analysis, we know that the gas remaining in chamber 2 at any time during this process has experienced an adiabatic reversible compression, such that its final entropy per unit mass is equal to its initial entropy per unit mass: $$s_2=s_{20}$$If we substitute this into Eqn. 12, this yields:
$$\Delta S=N_1s_1-N_{10}s_{10}-(N_{20}-N_2)s_{20}\tag{14}$$ | {
"domain": "physics.stackexchange",
"id": 47105,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, entropy",
"url": null
} |
c++, error-handling, api
Title: C interface exception handling with C++ implementation Whilst developing a bigger project, I was in need of having basic error handling inside the context of a C interface.
I came up with the following solution.
// include/interface/error.h
#ifndef SDK_INTERFACE_ERROR_H
#define SDK_INTERFACE_ERROR_H
#ifdef __cplusplus
extern "C" {
#endif //__cplusplus
struct Error;
typedef struct Error Error;
typedef void(*ErrorHandler)( Error const * );
void SetExceptionHandler( ErrorHandler const * inExceptionHandler );
[[ noreturn ]] void ThrowException( Error const * inError );
#ifdef __cplusplus
}
#endif //__cplusplus
#endif //SDK_INTERFACE_ERROR_H
// source/interface/error.cpp
#include "interface/error.h"
#include <shared_mutex>
#include <mutex>
#include <cstdlib>
#include <functional>
namespace
{
std::function< void( Error const * ) > sErrorHandler = []( Error const * ){ std::abort(); };
std::shared_mutex sErrorHandlerMutex;
}
void
SetExceptionHandler( ErrorHandler const * inExceptionHandler )
{
std::unique_lock lock( sErrorHandlerMutex );
sErrorHandler = *inExceptionHandler;
}
void
ThrowException( Error const * inError )
{
std::shared_lock lock( sErrorHandlerMutex );
sErrorHandler( inError );
std::abort();
}
Errors in this context means unrecoverable errors, So error handling is simply a means to provide a user of this interface time to perform a clean exit.
However I still have the following questions | {
"domain": "codereview.stackexchange",
"id": 45062,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, error-handling, api",
"url": null
} |
java, multithreading, datetime, swing, timer
It's really this map that defines your state machine - you could use the same States and the same Triggers, organized in a different way, to produce a different class of machines.
static {
// when we initialize the FSM, we load the transition map
// first, we create empty Trigger maps for each known state
for (State s : State.values()) {
stateTransitions.put(s, new EnumMap<Trigger,State>());
}
// now fill the trigger maps with the supported transitions
stateTransitions.get(State.STARTED).put(Trigger.TOGGLE, State.STOPPED);
stateTransitions.get(State.STOPPED).put(Trigger.TOGGLE, State.STARTED);
// here, we add pause support.
stateTransitions.get(State.STARTED).put(Trigger.PAUSE, State.PAUSED);
stateTransitions.get(State.PAUSED).put(Trigger.PAUSE, State.STARTED);
}
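The same transition map can be sketched outside Java as well; for instance, a minimal Python dictionary version (state and trigger names mirror the enums above):

```python
# Nested dict playing the role of the EnumMap-of-EnumMaps above.
transitions = {
    "STARTED": {"TOGGLE": "STOPPED", "PAUSE": "PAUSED"},
    "STOPPED": {"TOGGLE": "STARTED"},
    "PAUSED":  {"PAUSE": "STARTED"},
}

def fire(state, trigger):
    """Return the next state; unsupported triggers leave the state unchanged."""
    return transitions.get(state, {}).get(trigger, state)
```

Reorganizing the map yields a different machine from the same states and triggers, which is exactly the point made above.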
Stateless4j puts a reasonable fluent interface in front of the state machine creation idioms. Using that library, your state machine creation might look like...
StateMachine<State, Trigger> stopwatch = new StateMachine<State, Trigger> ();
stopwatch.configure(State.STARTED)
.permit(Trigger.TOGGLE, State.STOPPED)
.permit(Trigger.PAUSE, State.PAUSED);
stopwatch.configure(State.STOPPED)
.permit(Trigger.TOGGLE, State.STARTED);
stopwatch.configure(State.PAUSED)
.permit(Trigger.TOGGLE, State.STARTED); | {
"domain": "codereview.stackexchange",
"id": 8550,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, multithreading, datetime, swing, timer",
"url": null
} |
electrons, potential-energy, voltage
Title: Is it possible to determine how many electrons there are in a plate from its voltage? We've just started doing voltage in school, and so I wondered if this is possible :)
If I charge an insulated metal plate using an EHT to something like 5kV, then I've added electrons to the plate. My guess is that the extra electrons 'push more' than what can be countered by the protons, causing a voltage, almost like a spring compressing I guess. (Correct me if I'm wrong!).
So, if I know the voltage of the plate, and I know how many atoms there are in the plate, is it possible to calculate how many extra electrons have been added to cause the voltage?
Thanks! If voltage is all that you know, then the answer is No.
If you know how much charge $Q$ in Coulombs is added, you only have to divide by the charge $e$ on each electron (in Coulombs). Otherwise, if you have a parallel plate capacitor and you know the capacitance $C$, you can work it out from $Q = CV$.
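To make that concrete, a small sketch (the 1 nF capacitance is a made-up illustrative value; the 5 kV is the voltage from the question):

```python
e = 1.602176634e-19  # elementary charge, in coulombs

C = 1e-9     # assumed capacitance, farads (hypothetical)
V = 5000.0   # plate voltage, volts
Q = C * V    # charge from Q = CV
n_electrons = Q / e
```

That works out to roughly 3×10¹³ extra electrons — a tiny fraction of the atoms in any macroscopic plate.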
Your suggestion that the extra electrons 'push more' like a spring is interesting because a capacitor stores energy as a spring does. In a spring the elastic energy stored is $\frac 12 kx^2$ where $k$ is the spring constant (how stiff it is) and $x$ is the distance it is stretched or compressed, whereas in a capacitor the electrical energy stored is $\frac 12 CV^2$. So the formulas are similar. | {
"domain": "physics.stackexchange",
"id": 30448,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrons, potential-energy, voltage",
"url": null
} |
newtonian-mechanics, forces, rotational-dynamics, friction, equilibrium
Title: Nonsensical equation for slipping vs. tipping point on an incline? Question
Imagine a block is resting on a ramp that is inclined at $\theta$. We push the block with a force $F_\text{push}$ directed down the ramp. The magnitude of the force $F_\text{push}$ is selected so that the static friction force is at its maximum:
We slide our finger up and down the height of the block, so that $F_\text{push}$ is applied at a variety of different heights. When the force is applied up near the top of the block, the block tips over. When the force is applied down at the base, the block slides. We move our finger up and down until we find the threshold height $h$: pushing below $h$ causes the block to slide, and pushing above $h$ causes the block to tip.
The formula for the threshold height is:
$$h=\frac{b-V\tan\theta}{\mu -\tan\theta} \text{ }(\text{Eq. 1})$$
where $b$ is half the base of the block, $V$ is the distance from the base up to the center of mass (extending perpendicularly up from the ramp), and $\mu$ is the coefficient of static friction between the block and the ramp surface. (This equation assumes we are pushing on the block with a force directed down the ramp surface.)
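A short numerical sketch of Eq. 1 (using the fixed values quoted below, $b = 2.15$ cm, $V = 2.70$ cm and $\mu = 0.75$) shows both the sensible regime and the trouble spot:

```python
import math

def threshold_height(theta_deg, b=2.15, V=2.70, mu=0.75):
    """Eq. 1: h = (b - V*tan(theta)) / (mu - tan(theta)); lengths in cm."""
    t = math.tan(math.radians(theta_deg))
    return (b - V * t) / (mu - t)

h_flat = threshold_height(0.0)  # on flat ground Eq. 1 reduces to h = b/mu
```

Note the denominator vanishes when $\tan\theta = \mu$ (about $36.9^\circ$ here), which is one place the formula stops making sense.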
But when we start plugging in values, the equation doesn't always make sense. Let's assume fixed values of $b=2.15 \text{ cm}$ and $\mu=0.75$. Here's the graph of $h$ against $\theta$ when we let $V=2.70 \text{ cm}$ and when we let $V=3.70 \text{ cm}$. | {
"domain": "physics.stackexchange",
"id": 46933,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, rotational-dynamics, friction, equilibrium",
"url": null
} |
c++, beginner, game, sdl
if (this->_x + this->getW() < player1.getX() + player1.getW())
{
// vertical collision
this->_speedVector.y = -this->_speedVector.y;
}
else
{
this->move(player1.getX() + player1.getW() - this->_x, 0);
collision(player1);
}
}
}
// Check collision with second player
if (this->_x + this->getW() >= player2.getX() && this->_x <= player2.getX() + player2.getW())
{
if (this->_y + this->getH() >= player2.getY() && this->_y <= player2.getY() + player2.getH())
{
if (this->_x > player2.getX())
{
// vertical collision
this->_speedVector.y = -this->_speedVector.y;
}
else
{
this->move(-(this->_x + this->getW() - player2.getX()), 0);
collision(player2);
}
}
}
// Check collision with screen
if (this->_y < 0 || this->_y + globals::BALL_SIZE > globals::SCREEN_HEIGHT)
{
this->_speedVector.y = -this->_speedVector.y;
}
} | {
"domain": "codereview.stackexchange",
"id": 27826,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, game, sdl",
"url": null
} |
Now note that $$\lim_{N\to\infty} \sum_{k=0}^{N} q_k - q_{k+1} = \lim_{N\to\infty} q_0 - q_N = q_0 = \begin{cases} \left \lfloor x \right \rfloor & \text{if } x\geq 0\\ \left \lfloor x+1 \right \rfloor & \text{if } x<0 \end{cases}$$
Another solution: We will use the identity $\left \lfloor nx\right \rfloor = \sum_{k = 0}^{n - 1} \left \lfloor x + \frac{k}{n} \right \rfloor$,
$\sum_{k=0}^{\infty} \sum_{i=1}^{m-1} \left \lfloor \frac{x+im^k}{m^{k+1}} \right \rfloor = \sum_{k=0}^{\infty} \sum_{i=1}^{m-1} \left \lfloor \frac{x}{m^{k+1}} + \frac{i}{m} \right \rfloor = \sum_{k=0}^{\infty}( \sum_{i=0}^{m-1} \left \lfloor \frac{x}{m^{k+1}} + \frac{i}{m} \right \rfloor - \left \lfloor \frac{x}{m^{k+1}}\right \rfloor)$
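The identity $\left \lfloor nx\right \rfloor = \sum_{k = 0}^{n - 1} \left \lfloor x + \frac{k}{n} \right \rfloor$ is easy to spot-check numerically before relying on it (a quick sketch):

```python
import math

def lhs(x, n):
    return math.floor(n * x)

def rhs(x, n):
    return sum(math.floor(x + k / n) for k in range(n))

# Spot-check Hermite's identity for a few values, including negatives.
ok = all(lhs(x, n) == rhs(x, n)
         for n in (2, 3, 7)
         for x in (-2.5, -0.3, 0.0, 0.9, 3.14))
```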
Now we can apply the identity from above on $\frac{x}{m^{k+1}}$, and we are left with, | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787861106087,
"lm_q1q2_score": 0.837095778583547,
"lm_q2_score": 0.8479677545357568,
"openwebmath_perplexity": 394.26390708909565,
"openwebmath_score": 0.9997522234916687,
"tags": null,
"url": "https://math.stackexchange.com/questions/2309940/how-do-you-solve-sum-k-0-infty-sum-i-1m-1-left-lfloor-fracxim"
} |
ros, gazebo, c++, service
# Set the build type. Options are:
# Coverage : w/ debug symbols, w/o optimization, w/ code-coverage
# Debug : w/ debug symbols, w/o optimization
# Release : w/o debug symbols, w/ optimization
# RelWithDebInfo : w/ debug symbols, w/ optimization
# MinSizeRel : w/o debug symbols, w/ optimization, stripped binaries
#set(ROS_BUILD_TYPE RelWithDebInfo)
rosbuild_init()
#set the default path for built executables to the "bin" directory
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
#set the default path for built libraries to the "lib" directory
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
rosbuild_add_executable(example src/example.cpp)
Next, save the following code to src/example.cpp:
#include <ros/ros.h>
#include <gazebo_msgs/SetModelState.h>
int main (int argc, char** argv)
{
ros::init(argc,argv,"test_node");
ros::NodeHandle n;
ros::ServiceClient client = n.serviceClient<gazebo_msgs::SetModelState>("/gazebo/set_model_state");
gazebo_msgs::SetModelState setmodelstate;
gazebo_msgs::ModelState modelstate;
modelstate.model_name = "drill";
setmodelstate.request.model_state = modelstate;
if (client.call(setmodelstate))
{
ROS_INFO("BRILLIANT!!!");
ROS_INFO("%f",modelstate.pose.position.x);
}
else
{
ROS_ERROR("Failed to call service ");
return 1;
}
return 0;
}
make and run:
make
./bin/example | {
"domain": "robotics.stackexchange",
"id": 8149,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo, c++, service",
"url": null
} |
# Math Help - Simplifying Square Root w/in square root
1. ## Simplifying Square Root w/in square root
I've been asked to simplify and I don't know how.
1/[square root of 1/(square root of x+2) +2]
Wow.
2. Originally Posted by tesla1410
I've been asked to simplify and I don't know how.
1/[square root of 1/(square root of x+2) +2]
Wow.
First we need to get it clear what you have been asked to simplify; is it:
$
\frac{1}{\sqrt{\frac{1}{\sqrt{x+2}}+2}}\ ?
$
Whatever the exact problem turns out to be, the key trick you will need
to employ is:
$\frac{1}{\sqrt{something}}=\frac{\sqrt{something}}{something}$
RonL
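A quick numeric sanity check of that trick applied to this expression (the value $x = 2$ is arbitrary):

```python
import math

x = 2.0
s = 1 / math.sqrt(x + 2) + 2     # the quantity under the outer square root
direct = 1 / math.sqrt(s)
rationalized = math.sqrt(s) / s  # 1/sqrt(s) = sqrt(s)/s
```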
3. Yes, you've stated the problem correctly. What's throwing me is the square root fraction w/in the square root - but then you probably know that's the hang up.
BTW, is there some way I can use the trick symbols that so many others use? It would make writing a problem clearer.
Thanks.
4. Originally Posted by tesla1410
...
BTW, is there some way I can use the trick symbols that so many others use? It would make writing a problem clearer.
The mathematics is being laid out/typeset using a LaTeX system; the details
can be found here.
To see what is going on, left-click on an equation and a window with the
code used to typeset it will pop up.
RonL
5. Originally Posted by tesla1410
Yes, you've stated the problem correctly. What's throwing me is the square root fraction w/in the square root - but then you probably know that's the hang up.
OK, simplify:
$
\frac{1}{\sqrt{\frac{1}{\sqrt{x+2}}+2}}
$ | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9902915223724212,
"lm_q1q2_score": 0.8164576562095532,
"lm_q2_score": 0.8244619263765707,
"openwebmath_perplexity": 1200.2356644680915,
"openwebmath_score": 0.9197702407836914,
"tags": null,
"url": "http://mathhelpforum.com/algebra/5360-simplifying-square-root-w-square-root.html"
} |
vb.net, pdf
''' <summary>
''' Merges multiple PDF files into a single PDF file
''' </summary>
''' <param name="PDFFilePath">The path in which to search for PDF files to merge</param>
''' <param name="OutputFileName">The PDF file to create from the merged PDF files</param>
''' <param name="OverwriteExistingPDF">If the specified PDF file already exists, identifies whether or not to overwrite the existing file</param>
''' <param name="RecurseSubFolders">Identifies whether or not to look in subfolders of the specified path for additional PDF files</param>
''' <returns>A FileInfo object representing the merged PDF if successful. <cref>Nothing</cref> if unsuccessful.</returns>
Public Overloads Function MergeAll(ByVal PDFFilePath As String, ByVal OutputFileName As String, ByVal OverwriteExistingPDF As Boolean, ByVal RecurseSubFolders As Boolean) As System.IO.FileInfo
Return MergeAll(PDFFilePath, OutputFileName, OverwriteExistingPDF, PDFMergeSortOrder.Original, RecurseSubFolders)
End Function | {
"domain": "codereview.stackexchange",
"id": 29351,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vb.net, pdf",
"url": null
} |
• Are you assuming that there are 1017 people in the building? And that all are not born on July 15. Or are you saying there are $n$ number of people in the building, 1017 weren't born on July 15 and $n -1017$ were? – fleablood Oct 6 '18 at 19:29
• "assuming that the year is not a leap year." I'm going to nitpick that whether this year is a leap year has no bearing whatsoever when someones birthday is. My birthday is the same this year as it was in 2016. If I had been born on Feb 29 on a leap year that would still be my birthday no matter what year you asked me..... – fleablood Oct 6 '18 at 19:33
• @fleablood If it was a leap year, I would have 366 days in total - that's all I meant - and the problem mentioned to assume that it was not a leap year if anyone was confused. – geo_freak Oct 6 '18 at 19:36
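The two conventions being debated differ only negligibly, as a quick computation shows (1017 people, as in the question):

```python
n = 1017

# Ignoring leap days: each person misses July 15 with probability 364/365.
p_simple = (364 / 365) ** n

# Leap-aware count: 4*364 + 1 non-July-15 days out of 4*365 + 1 per 4-year cycle.
p_leap = ((4 * 364 + 1) / (4 * 365 + 1)) ** n
```

Both give about a 6% chance that no one is born on July 15, so the complement, the chance that at least one person is, is about 94% either way.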
• ... that said, we can say "for simplicity we can ignore leap days" or we can note that in a 4-year cycle of $4\cdot365+1$ days the probability of not July 15 is $\frac{4\cdot364 + 1}{4\cdot365 + 1}$. But that is close enough to $\frac {364}{365}$ to not be fussy. After all, not every day is equally likely (just look at any census) and the assumption that they are equal is no more inaccurate than the rounding down. – fleablood Oct 6 '18 at 19:38
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9799765546169712,
"lm_q1q2_score": 0.8269974227543971,
"lm_q2_score": 0.8438951104066293,
"openwebmath_perplexity": 271.8601538015679,
"openwebmath_score": 0.5503652691841125,
"tags": null,
"url": "https://math.stackexchange.com/questions/2944843/probability-that-no-one-has-a-specific-birthday-and-at-least-one-person-does"
} |
# plot time to first peak
tp = t[np.where(R==R.max())[0][0]]
plt.plot([tp,tp],[plt.ylim()[0],R.max()],'g--')
plt.text(tp,8,' Time to first peak = {0:.2f}'.format(tp))
# find positive-going zero crossings
idx = np.where(np.diff(np.sign(R-11.2))>0)[0]
t0 = t[idx[0]]
t1 = t[idx[1]]
plt.plot([t0,t0],[9.5,11.2],'g--')
plt.plot([t1,t1],[9.5,11.2],'g--')
plt.plot([t0,t1],[10,10],'g--')
plt.text((t0+t1)/2,10.05,'Period = {0:.2f}'.format(t1-t0), ha='center')
interact(simulation,zeta=(0,1.2,0.001), tau = (0.1,0.5,0.001))
Out[7]:
<function __main__.simulation>
## 3. [10 pts] Modeling and Simulation of Interacting Tanks¶
The following diagram shows a pair of interacting tanks.
Assume the pressure driven flow into and out of the tanks is linearly proportional to tank levels. The steady state flowrate through the tanks is 3 cubic ft per minute, the steady state heights are 7 and 3 feet, respectively, and a constant cross-sectional area 5 sq. ft. The equations are written as | {
"domain": "jupyter.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.971992481016635,
"lm_q1q2_score": 0.8374086616755017,
"lm_q2_score": 0.8615382094310357,
"openwebmath_perplexity": 2269.710197156688,
"openwebmath_score": 0.8749575018882751,
"tags": null,
"url": "http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/Assignment_02/Assignment_02.ipynb"
} |
quantum-field-theory, field-theory, solitons
How can I get the idea of vortex from the above statement? It's a bit hard to be sure without seeing the whole text, but it looks like they're discussing the problem of obtaining finite minimum-energy solutions of a gauge/Higgs system. In 3 space dimensions, for example, for the Georgi-Glashow model, $$ \mathcal{L}= \frac{1}{2}Tr(F^{\mu\nu}F_{\mu\nu})+Tr(D_{\mu}\phi D^{\mu}\phi)-\frac{\lambda}{4}(|\phi|^2-v^2)^2 $$ to minimize the energy you want the curvature to vanish at infinity, so the potential becomes pure gauge.
Moreover $\phi^a \phi^a=v^2$ defines a 2-sphere in internal group space. So looking at the behaviour of the Higgs field on the $S^2$ at infinity we have a map $S^2\rightarrow S^2$. Now there is the boring solution where $A_{\mu}=0$ and $\phi$ is a constant at infinity. This is like your picture (b). But there is also the possibility that $\phi$ follows the direction defined by the polar coordinates $\theta, \psi$ on the $S^2$ at infinity. This is a winding number 1 solution and is depicted in (a) - this is 't Hooft's hedgehog configuration. Things like (a) are monopoles.
Now you can do the same thing in two spatial dimensions instead of three. $\phi$ is just a complex number, and the vacuum manifold in internal group space is an $S^1$ this time. If we work out what the gauge potential must do to make the energy of a two-dimensional hedgehog finite, it turns out that the $A$ field is pointing in a direction tangential to the $S^1$ at infinity. This is the vortex - you can't extend $A$ back towards the origin without hitting singular behaviour.
"domain": "physics.stackexchange",
"id": 7487,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, field-theory, solitons",
"url": null
} |
ros, python, rospy, logging
Title: Can the log level of a (python) ROS node be changed dynamically?
I wonder if it is possible (and if not if it makes sense) to change the log level of a python node during runtime.
The node could connect to a special purpose topic where the log level for the node can be announced. This would allow to have a node running without the debug messages being published to /rosout, but if necessary the log level could be switched to debug if the node is not running as expected.
Of course you can always change the init_node line and re-run the node, but on some occasions a more transparent solution might be useful.
Originally posted by Felix Kaser on ROS Answers with karma: 318 on 2012-10-01
Post score: 4
Although not currently/officially supported by the rospy api, this can be accomplished through a simple function. See the accepted answer for Change python node log level while running.
Originally posted by JeremyRoy with karma: 51 on 2021-12-03
This answer was ACCEPTED on the original site
Post score: 0
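Since rospy logging is built on Python's standard logging module, the workaround boils down to resetting a logger's level at runtime; a minimal sketch using only the stdlib (the logger name here is a stand-in — the real one depends on the node):

```python
import logging

def set_node_log_level(logger_name, level):
    """Change a named logger's level while the process is running."""
    logger = logging.getLogger(logger_name)
    logger.setLevel(level)
    return logger

# e.g. flip a node's logger to DEBUG without restarting it
log = set_node_log_level("rosout.my_node", logging.DEBUG)
```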
Original comments
Comment by 130s on 2021-12-06:
Since this old thread is about ROS1, which is basically frozen for new feature addition, non-complex workaround is probably an acceptable answer and solution. So I marked this as a selected answer. | {
"domain": "robotics.stackexchange",
"id": 11199,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, python, rospy, logging",
"url": null
} |
electromagnetism, geophysics, geomagnetism
Title: Why is modelling the Earth as having geomagnetic poles useful? I'm reading about geomagnetic poles and wondering what their significance is. It seems one (and perhaps the main) purpose of using this type of model is for understanding the aggregation of magnetic particles from outside of Earth. I feel as though I'm missing a step as to why this model is used; surely the magnetic particles from space still experience the irregular magnetic field of Earth?
My guesses from the reading I've done so far are that:
At a great distance, the magnetic particles do actually experience equivalent attraction as though the Earth had a bar magnet that gave it its geomagnetic poles.
Over some time period, and over a number of particles the force experienced by all particles averages to what would be experienced if the Earth had a bar magnet that gave it its geomagnetic poles. | {
"domain": "physics.stackexchange",
"id": 89435,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, geophysics, geomagnetism",
"url": null
} |