text stringlengths 49 10.4k | source dict |
|---|---|
electromagnetism, quantum-field-theory, electric-fields, quantum-electrodynamics, pair-production
To derive the upper limit $|F|\leq\frac{e}{2\epsilon_0}$ for the background electric field $F$ in 1+1D mentioned by Coleman, here is one approach: Consider the energy density
$${\cal E}~=~\frac{\epsilon_0}{2}\left(F + \frac{e}{2\epsilon_0}{\rm sgn}(x-b) - \frac{e}{2\epsilon_0}{\rm sgn}(x-a)\right)^2\tag{2}$$
of the electric field after a pair separation. Let us assume $F,e>0$. Then $0<a<b<L$. (The other cases are similar.) The relevant pieces are the cross-terms (ct)
$$\begin{align} {\cal E}_{\rm ct}~\stackrel{(2)}{=}~& \frac{Fe}{2}\left[{\rm sgn}(x-b)- {\rm sgn}(x-a)\right]\cr &-\frac{e^2}{4\epsilon_0}\underbrace{{\rm sgn}(x-b) {\rm sgn}(x-a)}_{=1+{\rm sgn}(x-b)- {\rm sgn}(x-a)}\cr
~=~&\frac{e}{2}(F-\frac{e}{2\epsilon_0})\left[{\rm sgn}(x-b)- {\rm sgn}(x-a)\right] -\frac{e^2}{4\epsilon_0}.
\end{align}\tag{3}$$
The corresponding energy is the integral | {
"domain": "physics.stackexchange",
"id": 98850,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, quantum-field-theory, electric-fields, quantum-electrodynamics, pair-production",
"url": null
} |
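Completing the step above (a reconstruction from eqs. (2)–(3), under the same assumptions $F,e>0$ and $0<a<b<L$): since ${\rm sgn}(x-b)-{\rm sgn}(x-a)$ equals $-2$ on the interval $(a,b)$ and $0$ elsewhere, the integral evaluates to

$$E_{\rm ct}~=~\int_0^L {\cal E}_{\rm ct}\,dx~=~-e\left(F-\frac{e}{2\epsilon_0}\right)(b-a)-\frac{e^2L}{4\epsilon_0}.$$

The $(b-a)$-dependent term makes ever-larger pair separations energetically favorable precisely when $F>\frac{e}{2\epsilon_0}$, so a stable background field must satisfy $|F|\leq\frac{e}{2\epsilon_0}$.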
c++, game, sfml
t_score.setPosition(550, 534);
t_bestScore.setPosition(551, 576);
if (score > 0 && score < 25)
{
s_ironMedal.setPosition(390, 539);
}
else if (score >= 25 && score < 50)
{
s_bronzeMedal.setPosition(390, 539);
}
else if (score >= 50 && score < 75)
{
s_silverMedal.setPosition(390, 539);
}
else if (score >= 75)
{
s_goldMedal.setPosition(390, 539);
}
}
void GameOverState::handleInput()
{
sf::Event event;
while (this->data->window.pollEvent(event))
{
if (sf::Event::Closed == event.type)
{
this->data->window.close();
}
if (iM2->IsSpriteClicked(event, this->s_retryButton, sf::Mouse::Left))
{
this->data->machine.addState(StateRef(new GameState(data)), true);
}
}
}
void GameOverState::update(float dt)
{
}
void GameOverState::draw(float dt)
{
this->data->window.clear();
this->data->window.draw(this->s_back);
this->data->window.draw(this->s_retryButton);
this->data->window.draw(this->s_gameOverTitle);
this->data->window.draw(this->s_board); | {
"domain": "codereview.stackexchange",
"id": 43754,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, game, sfml",
"url": null
} |
java, game, mvc, swing
gameController.sendTroops(source, target);
}
source = null;
}
});
Game.getInstance().addObserver((source, arg) -> { repaint(); });
}
}
The GameController#sendTroops method just converts the received coordinates into model objects:
class GameController {
// TODO: Should I move this method to the civwar.model.Game class?
private Building getBuildingByCollision(Point2D point) {
for (Building building : Game.getInstance().getBuildings()) {
if (Math.abs(building.getPosition().getX() - point.getX()) <= 10 &&
Math.abs(building.getPosition().getY() - point.getY()) <= 10
) {
return building;
}
}
return null;
}
public void sendTroops(Point2D source, Point2D target) {
Building sourceBuilding = getBuildingByCollision(source);
if (sourceBuilding == null) return;
Building targetBuilding = getBuildingByCollision(target);
if (targetBuilding == null) return;
Game.getInstance().sendTroops(sourceBuilding, targetBuilding);
}
}
The Game model facade and its sendTroops method:
class Game extends Observable {
public void sendTroops(Building source, Building target) {
AttackTroop troop = source.detachTroopTo(target);
if (troop != null) {
addAttackTroop(troop);
setChanged();
}
}
} | {
"domain": "codereview.stackexchange",
"id": 26191,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, game, mvc, swing",
"url": null
} |
computer-architecture
Title: Why do most books say that a 1 bit branch predictor mispredicts on the first loop iteration? I am reading two books, Computer organization and design by David A Patterson, and Digital Design and Computer Architecture by Harris and Harris. These books claim that a 1 bit branch predictor mispredicts on the first and last iterations of every loop.
I don't understand why it mispredicts on the first iteration. I think that the first prediction result depends on the history of the CPU, so we don't know whether the predictor would predict correctly or not.
My question is whether I have to assume that the initial state of the 1 bit predictor is for the branch to be taken. I believe the assumption is that the loop has been encountered previously (i.e., assuming a long-running program with the loop encountered multiple times and assuming no aliasing in the branch predictor), so the first iteration has the predictor set based on the last iteration of the previous encounter (i.e., not-taken) and, of course, the last iteration has the predictor set based on the previous iteration (i.e., taken). This results in two mispredictions.
In general, a one-bit branch predictor will have two mispredictions per instance of unusual branch direction; first for the unusual instance and second for the instance immediately after that unusual instance. By providing some form of hysteresis (as with a 2-bit predictor) or resistance to change (e.g., by randomly dropping misprediction information, possibly with probabilities determined by static branch prediction--so far as I know, a technique that is only theoretical), such unusual branch behavior is less likely to cause mispredictions for the usual behavior of a branch. | {
"domain": "cs.stackexchange",
"id": 1432,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computer-architecture",
"url": null
} |
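The two-misprediction pattern described above is easy to reproduce in a short simulation. This is a sketch under the stated assumption: a single loop branch, with the predictor initialized to "not taken" as if the loop had completed a previous run.

```python
def mispredictions_per_run(n_iterations, initial_prediction=False):
    """Count mispredictions of a 1-bit predictor across one loop execution.

    The loop branch is taken on every iteration except the last, where the
    loop exits. The 1-bit predictor simply predicts the last observed
    outcome of this branch.
    """
    prediction = initial_prediction  # False = "not taken"
    misses = 0
    for i in range(n_iterations):
        taken = i < n_iterations - 1  # final iteration: not taken
        if prediction != taken:
            misses += 1
        prediction = taken  # 1-bit predictor: remember the last outcome
    return misses

# First iteration mispredicts (stale "not taken" from the previous run)
# and the last iteration mispredicts (predictor now says "taken").
print(mispredictions_per_run(10))  # -> 2
```

Starting the predictor at "taken" instead removes the first-iteration miss, which is exactly the initial-state ambiguity the question raises.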
c#, .net, networking, socket
var sendingTask = SendTask(_listener, messageBytes);
await sendingTask;
MessageSubmitted?.Invoke(this, _close);
});
await retryTask;
return true;
}
catch (SocketException ex)
{
return HandleSendException(message, ex);
}
catch (ObjectDisposedException ex)
{
return HandleSendException(message, ex);
}
catch (AggregateException ex)
{
return HandleSendException(message, ex);
}
catch (Exception ex)
{
return HandleSendException(message, ex);
}
}
private bool HandleSendException(IProcessable message, Exception ex)
{
var exMessage = "Send: Error sending message. " + ex.Message;
Logger.Log.Error(exMessage);
IsInstantiated = false;
Close();
ClientChainsContainer.SelfCommandFailsChain.Handle(message, ClientId);
DieOrRestart();
return false;
}
public async Task<bool> SendAlarm()
{
//prevents multiple sending attempts
return await Send(new RaiseAlarmCommand(), true);
}
private async Task<bool> Subscribe()
{
var cmd = new SubscribeCommand { FingerPrint = "Hello" };
return await Send(cmd);
}
private async Task SendTask(Socket listener, byte[] message)
{
await
Task.Factory.FromAsync(
(cb, s) => listener.BeginSend(message, 0, message.Length, SocketFlags.None, cb, null),
ias => listener.EndSend(ias), null);
}
#endregion
#region Event Invocators
public void InvokeMessageReceived(IAsyncClient a, List<byte> msg)
{
MessageReceived?.Invoke(a, msg);
}
//
public void InvokeConnected(IAsyncClient a)
{
Connected?.Invoke(a);
} | {
"domain": "codereview.stackexchange",
"id": 25599,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, .net, networking, socket",
"url": null
} |
You might want to check the concept of vacuous truth.
• Of course, despite $\inf\emptyset>\sup\emptyset$, for any element $x\in\emptyset$ we have $\inf\emptyset < x$ and $x<\sup\emptyset$. Side remark: I wonder if under these conditions it is valid to combine those inequalities to $\inf\emptyset<x<\sup\emptyset$; the latter seems to include the claim that $\inf\emptyset<\sup\emptyset$ which clearly is wrong. – celtschk Aug 6 '16 at 9:27
• @celtschk there is a certain risk of confusion in writing it that way, yet strictly I'd say that $a< b < c$ is nothing but short for $a< b$ and $b <c$. There are also arguments that show non-existence by coming to an "impossible" chain of inequalities, so such things are used. – quid Aug 6 '16 at 9:34
• @quid: Actually thinking more about it, the full statement is actually: $\forall x: x\in\emptyset \implies \inf\emptyset<x<\sup\emptyset$ and thus it doesn't matter if $\inf\emptyset<\sup\emptyset$ is included in the chain, as the statement $\forall x: x\in\emptyset \implies \inf\emptyset<\sup\emptyset$ is vacuously true. – celtschk Aug 6 '16 at 9:54
A subset $S$, of the real numbers, is bounded when there exists some real number $M$ such that $|x| \le M$ for all $x \in S$.
(The same for bounded from above and bounded from below separately.)
According to this definition the empty set is bounded. You can choose $M$ however you like. The assertion is true; it is vacuously true.
Furthermore, every real number is an upper bound and every real number is a lower bound, e.g., $26$ is a lower bound and $-45$ an upper bound of the empty set. |
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9724147169737826,
"lm_q1q2_score": 0.8103370984850335,
"lm_q2_score": 0.8333245932423308,
"openwebmath_perplexity": 272.9995042320769,
"openwebmath_score": 0.8363492488861084,
"tags": null,
"url": "https://math.stackexchange.com/questions/1884094/is-emptyset-bounded-why-then-inf-emptyset-infty-is-reasonable/1884102"
} |
Edit: Actually, we would need f to be continuous, but since it's Riemann integrable it's continuous a.e., so there is no real problem.
3. Originally Posted by Xingyuan
If $f(x)$ is Riemann Integrable in $[a,b]$,and for any integer $n\geq0$, we have $\int_{a}^{b}x^{n}f(x)dx = 0$. can we get the conclusion: $f(x)=0$ a.e.?
(a.e. stands for almost everywhere)
Well you made your question almost trivial allowing n >= 0 ==> taking n = 0 we get that INT{1*f(x)} dx = 0 ==> f(x) = 0 a.e.
Tonio
4. Originally Posted by tonio
Well you made your question almost trivial allowing n >= 0 ==> taking n = 0 we get that INT{1*f(x)} dx = 0 ==> f(x) = 0 a.e.
Tonio
No, in the problem I am not assuming $f(x)\geq0$
5. Originally Posted by Xingyuan
No,In the problem,I am not assuming $f(x)\geq0$
I didn't say anything about f(x) >= 0. I took n = 0, since you wrote n >= 0, and then x^0 =1 and INT{x^0*f(x)} dx = INT fx dx = 0 ==> f(x) = 0 a.e.
Tonio
6. Originally Posted by tonio
Well you made your question almost trivial allowing n >= 0 ==> taking n = 0 we get that INT{1*f(x)} dx = 0 ==> f(x) = 0 a.e.
Tonio
Huh? That doesn't follow. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812327313545,
"lm_q1q2_score": 0.8513492197405764,
"lm_q2_score": 0.8791467564270272,
"openwebmath_perplexity": 508.9791345749555,
"openwebmath_score": 0.936928391456604,
"tags": null,
"url": "http://mathhelpforum.com/differential-geometry/108094-problem-riemann-integrable.html"
} |
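For completeness, the standard route to the full claim (for all $n\geq 0$, not just the $n=0$ case) goes through Weierstrass approximation; a sketch: by linearity, $\int_a^b p(x)f(x)\,dx=0$ for every polynomial $p$, and since

$$\left|\int_a^b (g-p)f\,dx\right|~\leq~\|g-p\|_\infty \int_a^b|f|\,dx$$

with polynomials uniformly dense in $C[a,b]$, it follows that $\int_a^b g(x)f(x)\,dx=0$ for every continuous $g$. If $f(x_0)\neq 0$ at a point of continuity $x_0$, a continuous bump function supported near $x_0$ would give a nonzero integral, a contradiction. By Lebesgue's criterion a Riemann integrable function is continuous almost everywhere, so $f=0$ a.e.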
Kudos [?]: 1 [1] , given: 4
Re: Math: Triangles [#permalink] 24 Nov 2009, 22:18
1
KUDOS
thanks for the Information. People like you make the internet a better place. Printing this out now. Request to the moderator: you could put all the links of subject info like this one into one page. Is there a forum page like that?
Intern
Joined: 22 Dec 2009
Posts: 28
Followers: 0
Kudos [?]: 11 [1] , given: 6
Re: Math: Triangles [#permalink] 27 Dec 2009, 05:51
1
KUDOS
Exceptional... a really well compiled data on triangles and their properties...
the best part was to mention the problems from the official GMAT books.... +2
i think it would be worth mentioning the sine and cosine rules of triangles...
The law of sines (or Sine Rule):
If A, B and C are the angles made by the sides a= BC, b= CA & c= AB at the vertices of a triangle ABC, then according to the sine rule, (a/sin A) = (b/sin B) = (c/sin C)
The law of sines can be used to compute the remaining sides of a triangle when two angles and a side are known.
The law of cosines (or Cosine Rule) :
If A, B and C are the angles made by the sides a= BC, b= CA & c= AB at the vertices of a triangle ABC, then according to the cosine rule,
a^2 = b^2 + c^2 - 2bc(cos A); b^2 = a^2 + c^2 - 2ac(cos B); c^2 = a^2 + b^2 - 2ab(cos C)
The law of cosines is useful for computing the third side of a triangle when two sides and their enclosed angle are known, and in computing the angles of a triangle if all three sides are known.
_________________
Deserve before you Desire
Manager
Joined: 17 Aug 2010
Posts: 90
Followers: 1
Kudos [?]: 5 [1] , given: 22 | {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9763105245649665,
"lm_q1q2_score": 0.8135835707613073,
"lm_q2_score": 0.8333245932423308,
"openwebmath_perplexity": 1582.4772377983493,
"openwebmath_score": 0.7091482877731323,
"tags": null,
"url": "http://gmatclub.com/forum/math-triangles-87197.html?kudos=1"
} |
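A quick numerical check of the cosine rule (an illustrative sketch, using the same side/angle naming convention as above):

```python
import math

def third_side(b, c, angle_a_deg):
    """Law of cosines: a^2 = b^2 + c^2 - 2*b*c*cos(A)."""
    a_rad = math.radians(angle_a_deg)
    return math.sqrt(b * b + c * c - 2 * b * c * math.cos(a_rad))

# A = 60 degrees with b = c = 1 gives an equilateral triangle, so a = 1.
print(third_side(1, 1, 60))   # ~1.0
# A = 90 degrees reduces to Pythagoras: b = 3, c = 4 gives a = 5.
print(third_side(3, 4, 90))   # ~5.0
```

The 90-degree case shows why the cosine rule is often described as a generalization of the Pythagorean theorem.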
A solution of a partial differential equation in some region R of the space of the independent variables is a function that possesses all of the partial derivatives that appear in the equation. By separation of variables, we assume a solution in the form of a product; Euler derived and solved a linear wave equation for the motion of vibrating strings in this way. When using separation of variables for partial differential equations, we assume the solution takes the form u(x,t) = v(x)*g(t). EE 439, time-independent Schroedinger equation: with U independent of time, it becomes possible to use the technique of "separation of variables", in which the wave function is written as the product of two functions, each of which is a function of only one variable. In the following, the radius r, the mass m and the velocity v are functions of the time t; by definition of the density, we have. The wave equation can be written with the aid of a wave operator. These two links review how to determine the Fourier coefficients using the so-called "orthogonality" relations. We consider a string of length l with ends fixed, and rest state coinciding with the x-axis. Check for extra solutions coming from the warning in Step 4. A method for the solution of a certain class of nonlinear partial differential equations by the method of separation of variables is presented. We show that the curved Dirac equation in polar coordinates can be transformed into a Schrodinger-like differential equation for the upper spinor component. Use separation of variables to obtain a series solution of the wave equation $\frac{\partial^2 u}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2}$ subject to the boundary conditions. Heat transfer in tissue as a finite domain was analytically solved by employing separation of variables and Duhamel's superposition integral. Solve this equation using separation of variables to find the steady-state temperature of the block with boundary conditions T(0,y) = T(L,y) = T(x,W) = 0 and T(x,0) = T0.
Separation of Variables in Linear PDE: One-Dimensional Problems. Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). D'Alembert's Method. The generalization to systems of partial differential equations, invariant under multi-parameter groups, is stated and proved. You can also do | {
"domain": "chicweek.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9923043544146899,
"lm_q1q2_score": 0.8434301006175603,
"lm_q2_score": 0.8499711775577736,
"openwebmath_perplexity": 556.7745988916529,
"openwebmath_score": 0.808579683303833,
"tags": null,
"url": "http://zgrk.chicweek.it/solution-of-wave-equation-by-separation-of-variables-pdf.html"
} |
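As a concrete sketch of the recipe described above — all the specifics here (unit length and wave speed, fixed ends, initial shape f(x) = x(1-x), zero initial velocity) are illustrative assumptions — separation of variables gives $u(x,t) = \sum_n b_n \sin(n\pi x)\cos(n\pi t)$, with the Fourier sine coefficients $b_n = 2\int_0^1 f(x)\sin(n\pi x)\,dx$ obtained from the orthogonality relations:

```python
import math

def fourier_sine_coeff(f, n, samples=10000):
    """b_n = 2 * integral_0^1 f(x) sin(n*pi*x) dx, by the midpoint rule."""
    h = 1.0 / samples
    return 2 * h * sum(
        f((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h)
        for i in range(samples))

def u(x, t, f, n_terms=50):
    """Truncated series solution of u_tt = u_xx with u(0,t) = u(1,t) = 0."""
    return sum(
        fourier_sine_coeff(f, n) * math.sin(n * math.pi * x)
        * math.cos(n * math.pi * t)
        for n in range(1, n_terms + 1))

f = lambda x: x * (1 - x)
# At t = 0 the truncated series should reproduce the initial shape:
print(u(0.5, 0.0, f))  # close to f(0.5) = 0.25
```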
3. Jan 1, 2013
### evosy1978
Ok, thanks for taking the time to reply, just to let you know it can sometimes take time for things to "click in my head" lol.. I'm still confused, so can I break it down and show you where I'm at...
I understand the last bit, as you have cancelled out one of the x's from the top and bottom of the fraction.
I'm confused as to where the part in red, 1/(2x), came from?
I also don't understand why the 2x disappears from under the 75-x^2 and then appears under the x^2?
Thanks
Last edited: Jan 1, 2013
4. Jan 1, 2013
### MrWarlock616
It's like this:
$V= x^2 (\frac{75-x^2}{2x})=\frac{1}{2x}x^2(75-x^2)=\frac{x}{2}(75-x^2)$
5. Jan 1, 2013
### I like Serena
Here's a couple of the rules for working with fractions and multiplications:
$$\frac 2 3 = 2 \cdot \frac 1 3$$
$$2 \cdot 3 \cdot 4 = 4 \cdot 2 \cdot 3$$
$$\frac {3 \cdot 2} {5 \cdot 2} = \frac 3 5$$
These are the rules MrWarlock applied. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.980580652952557,
"lm_q1q2_score": 0.8016977137033079,
"lm_q2_score": 0.8175744761936437,
"openwebmath_perplexity": 2756.2955403269766,
"openwebmath_score": 0.7582684755325317,
"tags": null,
"url": "https://www.physicsforums.com/threads/simplifying-an-algebra-question.661759/"
} |
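The simplification in the previous post can be sanity-checked numerically (a throwaway sketch, not from the original thread):

```python
def v_original(x):
    # V = x^2 * (75 - x^2) / (2x)
    return x ** 2 * ((75 - x ** 2) / (2 * x))

def v_simplified(x):
    # V = (x / 2) * (75 - x^2), after cancelling one factor of x
    return (x / 2) * (75 - x ** 2)

# The two forms agree for any nonzero x:
for x in (0.5, 1.0, 3.0, 8.0):
    assert abs(v_original(x) - v_simplified(x)) < 1e-9
print("forms agree")
```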
lo.logic, computability, church-turing-thesis
The gist of the incompleteness theorems can be expressed in an abstract form using provability logic, which is a kind of modal logic. This gives the incompleteness theorems a wide range of applicability well beyond Peano arithmetic and computability. As soon as certain fixed-point principles are satisfied, incompleteness kicks in. These fixed-point principles are satisfied by traditional computability theory, which therefore falls victim to incompleteness, by which I mean existence of inseparable c.e. sets. Because the provable and refutable sentences of Peano arithmetic form inseparable c.e. sets, the traditional Gödel's incompleteness theorems can be seen as a corollary to incompleteness phenomena in computability. (I am being philosophically vague and your head will hurt if you try to understand me as a mathematician.)
I suppose we can take two stands on how all this relates to the informal notion of effectivity ("stuff that can actually be computed"): | {
"domain": "cstheory.stackexchange",
"id": 1800,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lo.logic, computability, church-turing-thesis",
"url": null
} |
reinforcement-learning, dqn, deep-rl, gradient-descent, gradient
&=\\
\mathbb{E} \left[ \nabla_{\theta} { \left(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta) \right)}^2 \right] \label{5}\tag{5}
\end{align}
Recall that the derivative of $f(x)=x^2$ is $f'(x)=2x$ and that the derivative of a constant is zero. Note now that the only term in \ref{5} that contains $\theta$ is $Q(s,a; \theta)$, so all other terms are constant with respect to $\theta$. Hence, \ref{5} can be written as
\begin{align}
\mathbb{E} \left[ \nabla_{\theta} {(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta))}^2 \right]
&=\\
\mathbb{E} \left[ 2 {(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta))} \nabla_{\theta} \left( {r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta)} \right) \right]
&=\\
\mathbb{E} \left[ 2 {(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta))} \left( {\nabla_{\theta} r + \nabla_{\theta} \gamma \max_{a'} Q(s',a',\theta^{-}) - \nabla_{\theta} Q(s,a; \theta)} \right) \right]
&=\\ | {
"domain": "ai.stackexchange",
"id": 1353,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, dqn, deep-rl, gradient-descent, gradient",
"url": null
} |
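In code, the resulting semi-gradient update — where the gradient flows only through $Q(s,a;\theta)$ and the target term with $\theta^{-}$ is treated as a constant — can be sketched for a linear Q-function. Everything here (feature vectors, learning rate, the toy transition) is illustrative, not from the question:

```python
def q(theta, features):
    # Linear action-value: Q(s, a; theta) = theta . phi(s, a)
    return sum(t * f for t, f in zip(theta, features))

def dqn_step(theta, theta_target, phi_sa, phi_next_best, r, gamma, lr):
    """One semi-gradient step on (r + gamma * max_a' Q_target - Q)^2.

    The target uses the frozen parameters theta_target, so the gradient of
    the loss w.r.t. theta is -2 * td_error * phi_sa (only Q(s,a;theta)
    depends on theta), and gradient *descent* adds lr * td_error * phi_sa.
    """
    td_error = r + gamma * q(theta_target, phi_next_best) - q(theta, phi_sa)
    return [t + lr * td_error * f for t, f in zip(theta, phi_sa)]

# Repeated updates on one fixed transition drive the TD error toward zero.
theta = [0.0, 0.0]
theta_target = [0.0, 0.0]
phi = [1.0, 0.5]
for _ in range(200):
    theta = dqn_step(theta, theta_target, phi, [0.0, 0.0],
                     r=1.0, gamma=0.9, lr=0.1)
print(q(theta, phi))  # approaches the target value 1.0
```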
| {
"domain": "org.br",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363517478327,
"lm_q1q2_score": 0.8405373584622082,
"lm_q2_score": 0.8539127473751341,
"openwebmath_perplexity": 518.5773486296423,
"openwebmath_score": 0.9288099408149719,
"tags": null,
"url": "http://felizcidade.org.br/2rf5m4/cardinality-of-surjective-functions-b8ae3d"
} |
vbscript
Dim iIndex: iIndex = CInt(oNode.Value) / 100
sReturnData = sAddresses(iIndex - 1)
Ooops:
'Fall-through is OK as long as you check for an error condition here.
If Err.Number <> 0 Then
sReturnData = vbNullString
'Do some other useful things.
End If | {
"domain": "codereview.stackexchange",
"id": 6057,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vbscript",
"url": null
} |
Here is the template:
Let $$\epsilon > 0$$ be given.
Choose $$N= ....$$ (Here is your choice of $$N$$, which depends on $$\epsilon$$.)
Now, for $$n > N$$, show that your $$N$$ satisfies
$$|a_n - L | < \epsilon$$
Just show your work backwards. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9489172601537141,
"lm_q1q2_score": 0.8244447353365316,
"lm_q2_score": 0.8688267881258485,
"openwebmath_perplexity": 215.51368699877509,
"openwebmath_score": 0.8321181535720825,
"tags": null,
"url": "https://math.stackexchange.com/questions/2937210/proof-verification-lim-n-to-infty-sqrtn21-n-0"
} |
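For instance, applying the template to the sequence from the linked question, $$a_n=\sqrt{n^2+1}-n$$ with $$L=0$$: rationalizing gives

$$\sqrt{n^2+1}-n=\frac{1}{\sqrt{n^2+1}+n}<\frac{1}{2n},$$

so given $$\epsilon>0$$, choose $$N=\frac{1}{2\epsilon}$$; then for $$n>N$$ we have $$|a_n-0|<\frac{1}{2n}<\epsilon$$.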
quantum-field-theory
This is true for integer n. For noninteger n, there are subtleties. If you have exponentials of powers, the resulting distribution for $X(t+\epsilon)$ always has a finite variance, and you always recover the central limit theorem at long distances. But if you make the distribution Levy, you can get new central limit fixed points. These are not exponentials of fractional powers, but the distributions themselves have fractional power tails (the Lagrangian has logarithms of fractional powers).
Since there is no rigorous mathematical theory of quantum fields, these heuristics have to be the guide. You can see them work with lattice models, so they are certainly correct, but proving them is just out of reach of current mathematical methods.
Free field Lagrangians are (free) Central Limit Fixed Points
The central limit points are given by free field Lagrangians. These are quadratic, so they are central-limit stable. If you average a free field at short distances to get a longer distance description, you get the same free field, except that you can recover certain symmetries, like rotational or Lorentz invariance, which might not be there originally. But ignore this. The fundamental renormalization criterion is that you start from a renormalization fixed point, and if you want interactions, you perturb away from this free fixed point by adding nonlinearities.
These fixed points are not all stable to nonlinear interactions, they get perturbed by generic polynomial interactions. If you start with a scalar field
$$\int (\partial\phi)^2 + P(\phi) d^dx$$ | {
"domain": "physics.stackexchange",
"id": 1750,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory",
"url": null
} |
beginner, c, linked-list
printf("------------------------------\n");
}
int main()
{
int i;
reverse_list(); //test function error message
for (i = 3; i > 0; --i)
{
add_to_front(i);
}
for (i= 4; i < 7; ++i)
{
add_to_end(i);
}
add_to_list(4,9); //test function error message
add_to_list(4,1);
print_list();
remove_from_list(3);
print_list();
reverse_list();
print_list();
add_to_list(10,0);
print_list();
getchar();
} Your code is quite good for a beginner and shows a grasp of how pointers and lists work, so that's great! However, there are a few points I'd like to address.
I'll start by briefly listing what for me are less important issues or some things that others are likely to disagree with, then move on to more problematic errors:
General style:
Personally, I find that an if statement with a single-line block introduces too much noise. That is, I prefer
if (condition)
statement;
to
if (condition)
{
statement;
}
Since you typedefed struct node as node, you don't have to write struct node all the time.
malloc() issues:
Don't cast the return value of malloc() unless you're writing C++: see this SO question.
It's better to write ptr = malloc(sizeof(*ptr)); instead of ptr = malloc(sizeof(node)); because that way the line won't need changing if the type of ptr changes.
Bigger problems: | {
"domain": "codereview.stackexchange",
"id": 4331,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, c, linked-list",
"url": null
} |
homework-and-exercises, integration, aerodynamics
Title: How to calculate surface area of an aerofoil? how do I find the surface area of an airfoil? I know the Thickness distribution and camber line equation, but what I don't understand is which function should I consider when calculating the surface area, for instance, how is the surface area of the NACA 2412 airfoil calculated?
For NACA 2412:
Thickness distribution equation is given by:
$$\frac{t}{0.2}\left(0.2969\sqrt{x}-0.126x-0.3516x^{2}+0.2843x^{3}-0.1015x^{4}\right)\left\{0\le x\le1\right\}$$
where $t$ is the maximum thickness as a fraction of the chord. So here $t$ is equal to 12% of the chord length
And the camber line equation given by: $$0.125\left(0.8x-x^{2}\right)\ \left\{0\le x\le0.4\right\}$$
$$\frac{1}{18}\left(0.2+0.8x-x^{2}\right)\left\{0.4\le x\le1\right\}$$ The camber line equation traces a curve between $x = 0$ and $x = 1$. Let us denote it by $y(x)$. The length element along this curve is $dl = \sqrt{(dx)^2 + (dy)^2} = \sqrt{1 + {y^\prime}^2}dx$, where $y^\prime = dy/dx$. The area of the portion of aerofoil at this line element is $2 \times t(x) \times dl$, where $t$ is the thickness function. The factor of $2$ arises because the upper and lower portions each contribute a width $t(x)$. Therefore the area is
\begin{equation}\tag{1}
A = \int_0^1 2t(x)\sqrt{1 + {y^\prime}^2}dx. | {
"domain": "physics.stackexchange",
"id": 96584,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, integration, aerodynamics",
"url": null
} |
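Equation (1) can be evaluated numerically. The sketch below assumes the NACA 2412 functions quoted in the question (unit chord, t = 0.12); the slope of the camber line is taken piecewise from the two camber equations:

```python
import math

def thickness(x, t=0.12):
    """NACA 4-digit thickness distribution t(x) from the question."""
    return (t / 0.2) * (0.2969 * math.sqrt(x) - 0.126 * x
                        - 0.3516 * x ** 2 + 0.2843 * x ** 3
                        - 0.1015 * x ** 4)

def camber_slope(x):
    """y'(x): derivative of the piecewise camber line (break at x = 0.4)."""
    if x <= 0.4:
        return 0.125 * (0.8 - 2 * x)
    return (1 / 18) * (0.8 - 2 * x)

def area(n=100000):
    """Midpoint-rule evaluation of A = integral of 2 t(x) sqrt(1 + y'^2) dx."""
    h = 1.0 / n
    return sum(
        2 * thickness((i + 0.5) * h)
        * math.sqrt(1 + camber_slope((i + 0.5) * h) ** 2) * h
        for i in range(n))

print(area())  # roughly 0.082 in units of chord squared, for these inputs
```

Note that the camber correction $\sqrt{1+{y^\prime}^2}$ barely matters here: the slope stays small, so the result is dominated by $2\int_0^1 t(x)\,dx$.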
javascript, beginner
You could also keep using your listener on the whole bookList, but you'd have to make sure that the clicked element is the X button before removing elements - check if the target .matches('.delete'). (but even then, it's strange for that functionality to be outside of the Book class - it would make more sense for it to be encapsulated)
If you do use the delegation method, use event.target, not event.originalTarget (which is non-standard and only works on Firefox - on any other browser, it will throw an error). (also, best to name the variables appropriately - you're passing the event to deleteBook, not the event.target, so calling the parameter target is confusing)
You can make the submit listener a bit less repetitive by calling e.preventDefault on the first line of the function, that way you don't have to write it twice later, nor do you have to worry about accidentally not calling it in case you add more logical paths to the function. (And if you call preventDefault(), there's no need to return false, nothing needs to be returned)
In getFields, you're implicitly creating a global variable fields. This doesn't do anything useful, and will throw an error in strict mode (which you should be using). Just remove the fields = part:
function getFields(){
return {
'title': document.getElementById('title').value,
'author': document.getElementById('author').value,
'isbn': document.getElementById('isbn').value
};
}
Doesn't matter much here, but if you had more fields, to be less repetitive, you could replace that with:
function getFields() {
return Object.fromEntries(
['title', 'author', 'isbn'].map(
id => [id, document.getElementById(id).value]
)
);
}
It's good to prefer const whenever possible - using let is a warning to future readers of the script (including yourself) that you may be reassigning the variable later. So, you could replace
for(let [key, value] of Object.entries(fields)){ | {
"domain": "codereview.stackexchange",
"id": 37809,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner",
"url": null
} |
beginner, bash
# `find` returns the path plus filename for each file found
# but we just want the directories, for ease of reading
RESULTS=$(dirname $FULLRESULTS)
#-------------------------display results and get the source path
PS3="Which one do you want to copy ? "
select SRCPATH in $RESULTS
do
if [ ! -z "$SRCPATH" ]
then
#-----------------copy the file
CMD="scp $SRCPATH/$FILE root@$IP:$DEST"
echo "Executing: $CMD"
$CMD
fi
# we don't actually want to loop
break
done Thanks for submitting this for a code review. I like the indentation and documentation in your code.
The biggest problem I see with this is dealing with filenames that have spaces in them. Your select line will almost certainly break. The find command will let you get the results null-terminated using -print0, but I'm not sure how to get the select to work with that. Hopefully in your use-case you can avoid filenames with spaces and avoid fixing this.
Small suggestions
Mention the defaults in the usage printout. Maybe move the defaults to variables so you Don't Repeat Yourself.
Use double square brackets around conditionals to avoid unpleasant surprises and subtle bugs.
Move the thens up on the same line as the if with ; then.
Consider checking for whether the find got any results before printing "was found". You could print out one message for no results and only print your current "File ... was found ..." message if something was actually found.
Rather than put the scp command into a variable just so you can display it, you could turn on tracing with set -x and then turn it back off right after with set +x.
Check out shellcheck. | {
"domain": "codereview.stackexchange",
"id": 42430,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, bash",
"url": null
} |
forces, friction, torque
EDIT: I just realized that equations (2) and (3) are only correct if the ball is spinning in the direction it is flying (i.e. if vector $\hat{\mathbf{e}}$ is parallel or anti-parallel to vector $\mathbf{V}_t$). So 2D Pong is OK with equations (2) and (3). However, in the 3D problem, if the ball is spinning sideways, then during contact $\boldsymbol{\omega}$ and $\mathbf{V}_t$ will change in length, and therefore the unit vectors $\hat{\mathbf{e}}$ and $\hat{\mathbf{f}}$ will change direction. This means that to get the ball's speed and spin, we need to integrate in time. It can be done analytically (a nice problem to torture graduate students), and it is not difficult to do numerically. One should choose a contact duration $\tau$, a large integer $N$, and a small time step $\Delta t=\tau/N$. And then calculate
$$\mathbf{V}_t\left(t+\Delta t\right)=\mathbf{V}_t(t)+\hat{\mathbf{e}}\frac{\left|\mathbf{F}_t\right|}{m}\Delta t$$
$$\boldsymbol{\omega}\left(t+\Delta t\right)=\boldsymbol{\omega}(t)+\hat{\mathbf{f}}\frac{\left|\mathbf{F}_t\right|R}{I}\Delta t$$
and do it $N$ times, re-calculating the values of $\hat{\mathbf{e}}$ and $\hat{\mathbf{f}}$ every time step. A fringe benefit of this approach is that it is easy to control the onset of rolling: if any component of vector $\hat{\mathbf{e}}$ changes sign (i.e., goes through 0) at any time step, sliding stops and rolling begins. The answer should not depend on $\tau$ or $N$. | {
"domain": "physics.stackexchange",
"id": 1280,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, friction, torque",
"url": null
} |
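The time-stepping scheme described in the answer above can be sketched for the simple 2D case, where the slip direction reduces to a sign. The mass, radius, force magnitude, and contact duration below are illustrative placeholders, not values from the answer:

```python
import numpy as np

# Placeholder parameters: a small ball in sliding contact with a surface.
m, R = 0.16, 0.033            # mass [kg], radius [m]
I = 0.4 * m * R**2            # solid-sphere moment of inertia
F = 2.0                       # constant tangential force magnitude |F_t| [N]
tau, N = 0.1, 10000           # contact duration and number of steps
dt = tau / N

V, w = 3.0, 0.0               # tangential speed and spin (scalars in 2D)
s0 = np.sign(V - w * R)       # initial slip direction at the contact point
for _ in range(N):
    slip = V - w * R
    if np.sign(slip) != s0:   # slip direction changed sign: sliding stops, rolling begins
        break
    e = -np.sign(slip)        # friction opposes the slip direction
    V += e * (F / m) * dt           # V(t+dt) = V(t) + e |F_t|/m dt
    w += -e * (F * R / I) * dt      # w(t+dt) = w(t) + f |F_t| R/I dt

# At the break the ball is (numerically) rolling: V ~ w R.
```

With these parameters the slip crosses zero during contact, so the loop exits early with the ball rolling, illustrating the sign-change criterion from the answer.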
Null Space and Nullity of a Matrix
Definition of Null Space of a Matrix
The null space of an $m \times n$ matrix $A$ is the set of all the solutions $\mathbf x$ to the homogeneous equation
$A \mathbf x = \mathbf 0$, where $\mathbf x$ is a column vector with $n$ rows and $\mathbf 0$ is the zero column vector with $m$ rows.
The null space of matrix $A$ is denoted as "Null $A$".
Null $A$ is a subspace of $\mathbb{R}^n$ and the vectors $\mathbf x$ are in $\mathbb{R}^n$.
Using set notation we write: Null $A = \{ \mathbf x \in \mathbb{R}^n : A \mathbf x = \mathbf 0 \}$
The nullity of matrix $A$ is the dimension of Null $A$, which is equal to the number of vectors in a basis of Null $A$.
Properties of the Null Space
Let $A$ be an $m \times n$ matrix.
1. The null space of a given matrix $A$ is never empty since $\mathbf x = \mathbf 0$ is a trivial solution to the homogeneous equation $A \mathbf x = \mathbf 0$.
2. Null $A$ is a subspace of $\mathbb{R}^n$
3. All elements of Null $A$ are vectors in $\mathbb{R}^n$.
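To make the definitions concrete, here is a small numerical illustration (a sketch using NumPy, not part of the original page; it also uses the rank-nullity theorem, nullity $A = n - \operatorname{rank} A$):

```python
import numpy as np

# A 2x3 example matrix (m = 2, n = 3); the second row is twice the first,
# so rank A = 1.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)
nullity = n - rank               # rank-nullity theorem: dim Null A = n - rank A

# The trivial solution x = 0 is always in Null A:
assert np.allclose(A @ np.zeros(n), 0)

# A nontrivial vector in Null A for this particular A:
x = np.array([1., 1., -1.])
assert np.allclose(A @ x, 0)
```

Here the nullity is $3 - 1 = 2$, so a basis of Null $A$ contains two vectors.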
Examples with Solutions | {
"domain": "analyzemath.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9956005857929761,
"lm_q1q2_score": 0.8116459484256464,
"lm_q2_score": 0.8152324938410784,
"openwebmath_perplexity": 240.6717309014054,
"openwebmath_score": 0.8552239537239075,
"tags": null,
"url": "https://www.analyzemath.com/linear-algebra/matrices/null-space.html"
} |
type-theory, dependent-types
Title: What are the difference between and consequences of using type parameters and type indexes? In type theories, like Coq's, we can define a type with parameters, like this:
Inductive ListP (Element : Type) : Type
:= NilP : ListP Element
| ConsP : Element -> ListP Element -> ListP Element.
Alternatively, we can define a type with an index, like this:
Inductive ListI : Type -> Type
:= NilI : forall t, ListI t
| ConsI : forall t, t -> ListI t -> ListI t.
My questions are:
Are these fundamentally different or fundamentally the same?
What are the consequences of using one over the other?
When is it preferable to use one over the other? ListP t and ListI t are isomorphic: they have exactly the same constructors.
Coq < Check (NilP, NilI).
(NilP, NilI)
: (forall t : Type, ListP t) *
(forall t : Type, ListI t)
Coq < Check (ConsP, ConsI).
(ConsP, ConsI)
: (forall t : Type, t -> ListP t -> ListP t) *
(forall t : Type, t -> ListI t -> ListI t) | {
"domain": "cs.stackexchange",
"id": 2296,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "type-theory, dependent-types",
"url": null
} |
particle-physics, antimatter
Title: Is nature symmetric between particles and antiparticles? Is nature symmetric with respect to presence of particles? Do we have an antiparticle for every particle thought of? Are there any proven examples where we don't have an antiparticle? And what about antiparticle of a photon (we know it can also behave as a particle)? The anti-particle for any particle is obtained by charge C and parity P conjugation. C is the operation that interchanges positive and negative charges and P is the operation that reflects in a mirror. The combined operation of CP must produce a particle of the same mass. This is a theorem of relativistic quantum field theory due to CPT symmetry. This other particle is either the same particle or an antiparticle with opposite charge and/or chirality.
Some particles, such as photons, gluons, Z bosons, pions, the Higgs, the graviton, etc., do not have anti-particles because they are invariant under the CP transformation. You can say that they are their own anti-particle. This can only happen for particles without electric charge and with no chirality.
In principle the QCD color charge is also reversed for an anti-particle. This suggests that a gluon should not be regarded as its own anti-particle, but since colourless states are never seen the distinction cannot really be made in any operational sense.
All known particles which are their own anti-particle are bosons, but it is also possible for a fermion to be its own anti-particle if it is a Majorana spinor rather than a Dirac spinor. No known fermions are of this type (unless neutrinos are Majorana) but they exist in SUSY models.
Observed elementary particles that do have anti-particles include all the quarks and leptons (except possibly the neutrinos) and the charged W bosons. Any composite particle also has an antiparticle made of the anti-particles of its constituents. This can only be its own anti-particle if all its constituents are (e.g. a glueball), or if it is made of particle/anti-particle combinations as is the case for pions. | {
"domain": "physics.stackexchange",
"id": 1201,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, antimatter",
"url": null
} |
ajax
updateResultsOnChange("input[type=radio][name=data]", "monthlydata");
updateResultsOnChange("input[type=radio][name=contract_length]", "contract_length");
updateResultsOnChange("input[type=radio][name=price]", "price");
updateResultsOnChange("input[type=radio][name=mbprice]", "price");
updateResultsOnChange("input[type=radio][name=mblength]", "contract_length");
updateResultsOnChange("input[type=radio][name=mbdata]", "monthlydata");
Also please check if you need multiple radio boxes that do the same thing, it might be confusing for users. | {
"domain": "codereview.stackexchange",
"id": 44611,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ajax",
"url": null
} |
c#, performance, recursion, breadth-first-search, chess
KnightMovesImplementation.cs
using System;
using System.Collections.Generic;
namespace KnightMoves_CSharp
{
class KnightMovesImplementation {
/*
* This class provides the search for all the paths a Knight on a chess
* board can take from the point of origin to the destination. It
* implements a modified Knights Tour. The classic knights tour problem
* is to visit every location on the chess board without returning to a
* previous location. That is a single path for the knight. This
* implementation returns all possible paths from point a to point b.
*
* The current implementation is a Recursive Breadth First Search. Conceptually
* the algorithm implements a B+ tree with a maximum of 8 possible branches
* at each level. The root of the tree is the point of origin. A particular
* path terminates in a leaf. A leaf is the result of either reaching the
* destination, or reaching a point where there are no more branches to
* traverse.
*
* The public interface CalculatePaths establishes the root and creates
* the first level of branching. The protected interface CalculatePath
* performs the recursive depth first search, however, the
* MoveFilters.GetPossibleMoves() function it calls performs a breadth
* first search of the current level.
*/
KMBoardLocation PointOfOrigin;
KMBoardLocation Destination;
uint SingleSideBoardDimension;
KnightMovesMethodLimitations PathLimitations;
KMOutputData Results;
KMMoveFilters MoveFilter;
KMPath m_Path;
public KnightMovesImplementation(KMBaseData UserInputData)
{
SingleSideBoardDimension = UserInputData.DimensionOneSide;
PathLimitations = UserInputData.LimitationsOnMoves;
InitPointOfOrigin(UserInputData);
InitDestination(UserInputData);
Results = new KMOutputData(PointOfOrigin, Destination, SingleSideBoardDimension, PathLimitations);
MoveFilter = new KMMoveFilters(PointOfOrigin, SingleSideBoardDimension, PathLimitations);
m_Path = new KMPath();
} | {
"domain": "codereview.stackexchange",
"id": 20830,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, recursion, breadth-first-search, chess",
"url": null
} |
newtonian-mechanics, momentum, conservation-laws, collision
Dividing the latter by the former equation yields:
$$ v_1 + u_1 = v_2 + u_2$$
which is the equation you are looking for | {
"domain": "physics.stackexchange",
"id": 57554,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, momentum, conservation-laws, collision",
"url": null
} |
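The relation derived above ($v_1 + u_1 = v_2 + u_2$, with $v$ the initial and $u$ the final velocities) can be sanity-checked numerically using the standard 1D elastic-collision formulas; the masses and velocities below are arbitrary test values:

```python
# Final velocities of a 1D elastic collision (standard textbook formulas):
def elastic_final(m1, m2, v1, v2):
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, m2, v1, v2 = 2.0, 5.0, 3.0, -1.0     # arbitrary masses and initial velocities
u1, u2 = elastic_final(m1, m2, v1, v2)

# Momentum and kinetic energy are conserved...
assert abs(m1*v1 + m2*v2 - (m1*u1 + m2*u2)) < 1e-12
assert abs(m1*v1**2 + m2*v2**2 - (m1*u1**2 + m2*u2**2)) < 1e-12
# ...and the derived relation holds:
assert abs((v1 + u1) - (v2 + u2)) < 1e-12
```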
lagrangian-formalism, momentum, terminology, units, notation
Title: The exact definition of conjugate momentum density After checking various websites, I've seen the conjugate momentum density defined as either:
\begin{align*}
P_r ~=~ \frac{\partial \mathcal{L}}{\partial \dot{A}_r}
\end{align*}
or
\begin{align*}
P_r ~=~ \frac{\partial \mathcal{L}}{\partial(\partial_0 A_r)}.
\end{align*}
When you are working in natural units, there is no difference. However, when you don't take $c=1$ (or if you're working in an exotic metric), the difference is important, because $\partial_0 = \frac{\partial}{\partial (ct)} \neq \frac{\partial}{\partial t}$. It may seem trivial but I think it is worth being sure. I assume you're thinking about Minkowski space, i.e. the metric $\eta_{\mu\nu}=\text{diag}(c^2,-1,-1,-1)$. You should be aware that the dot notation is purely a notational shorthand, and has no other information contained in it. In particular, by definition we have
$$\dot{A}\equiv\partial_0A=\frac{1}{c}\frac{\partial A}{\partial t}$$
Thus, there is no problem (in any metric) because the different notations don't actually differ in content. | {
"domain": "physics.stackexchange",
"id": 12156,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lagrangian-formalism, momentum, terminology, units, notation",
"url": null
} |
adiabatic, invariants
Title: Adiabatic Invariant for nonoscillatory system For oscillatory system (e.g. quantum harmonic oscillation with slowly changing effective spring constant), it is common to define the adiabatic invariant to be
$$I=\frac{H(t)}{\omega(t)}$$ where $H$ is the Hamiltonian of the system and $\omega$ is the angular frequency of oscillation.
What about systems in which there isn't a well-defined oscillatory motion (e.g. 2-state system avoided crossing)? What is the corresponding adiabatic invariant? You seem to be confusing the classical adiabatic theorem with the quantum adiabatic theorem. In the quantum case, the adiabatic theorem says that you stay in an energy eigenstate as the parameters of the Hamiltonian are slowly varied. That makes perfect sense for both the two-state system and the harmonic oscillator.
In the particular case where the quantum system represents oscillatory motion in a potential well, you can then take the classical limit and apply the quantum adiabatic theorem, which tells you that
$$I = \oint p \, dq$$
is a classical adiabatic invariant. (Also, I have no idea where you got the idea that $I = H(t) / \omega(t)$, that isn't true even classically.) | {
"domain": "physics.stackexchange",
"id": 63991,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "adiabatic, invariants",
"url": null
} |
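For the concrete case of a 1D harmonic oscillator, the loop integral above can be evaluated numerically; it comes out as $\oint p\,dq = 2\pi E/\omega$, so the two expressions discussed in the question and answer differ by a factor of $2\pi$. A quick check (a sketch; parameter values are arbitrary):

```python
import numpy as np

m, w, E = 1.3, 2.0, 0.7              # arbitrary mass, angular frequency, energy
A = np.sqrt(2 * E / (m * w**2))      # turning point from E = (1/2) m w^2 A^2

# p(q) on the upper branch of the phase-space ellipse: p = sqrt(2mE - m^2 w^2 q^2)
q = np.linspace(-A, A, 200001)
p = np.sqrt(np.maximum(2 * m * E - (m * w * q)**2, 0.0))

dq = q[1] - q[0]
I = 2 * np.sum((p[:-1] + p[1:]) / 2) * dq   # trapezoid rule; factor 2 for both branches

assert abs(I - 2 * np.pi * E / w) < 1e-4 * I   # loop integral equals 2*pi*E/omega
```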
A. Bezdek and F. Fodor, On convex polygons of maximal width, Archiv der Mathematik, Vol. 74, No. 1, pp. 75–80, 2000. | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9879462211935647,
"lm_q1q2_score": 0.8077196099084856,
"lm_q2_score": 0.817574471748733,
"openwebmath_perplexity": 707.4476587532919,
"openwebmath_score": 0.723906397819519,
"tags": null,
"url": "https://mathoverflow.net/questions/385856/fattest-polygons-based-on-diameter-and-least-width"
} |
quantum-information, quantum-computer
0 & 1 & 0 & 0
\end{pmatrix}
$$
thus -
$$
CNOT1 \cdot CNOT2 \cdot CNOT1 =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0& 1\\
0 & 0 & 1 & 0
\end{pmatrix}
\cdot
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1& 0\\
0 & 1 & 0 & 0
\end{pmatrix}
\cdot
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0& 1\\
0 & 0 & 1 & 0
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0& 0\\
0 & 0 & 0 & 1
\end{pmatrix}
=SWAP
$$ | {
"domain": "physics.stackexchange",
"id": 22918,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-information, quantum-computer",
"url": null
} |
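A quick numerical check of the matrix identity above (a sketch, assuming the standard basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, with CNOT1 controlled on the first qubit and CNOT2 on the second):

```python
import numpy as np

# Matrices exactly as written in the answer above.
CNOT1 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
CNOT2 = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
SWAP  = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

# Three alternating CNOTs implement a SWAP:
assert np.array_equal(CNOT1 @ CNOT2 @ CNOT1, SWAP)
```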
Interior, exterior and boundary. Let $(X,d)$ be a metric space. A set is open if it contains a neighborhood of each of its points; equivalently, a set is open if and only if each of its points is an interior point. A set containing its boundary is closed, and its complement is open. In a metric space, arbitrary intersections and finite unions of closed sets are closed. A metric space $X$ is sequentially compact if every sequence of points in $X$ has a convergent subsequence converging to a point in $X$. We do not develop the theory in detail, and we leave the verifications and proofs as an exercise. | {
"domain": "manufakturaklimatu.pl",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9830850877244272,
"lm_q1q2_score": 0.8394688881622903,
"lm_q2_score": 0.853912747375134,
"openwebmath_perplexity": 290.6943844046739,
"openwebmath_score": 0.825274646282196,
"tags": null,
"url": "http://manufakturaklimatu.pl/05rfy7/b17405-exterior-point-in-metric-space"
} |
$$\frac{s}{c}=\tan \phi= {\left( \frac {b}{a}\right)}^ {\frac13} \tag 3$$
Construct a right triangle with the Pythagorean theorem (even if the dimensions do not tally with linear dimension), resolving $$(s,c)$$ to conveniently plug their values into (1), and then simplify
$$L={\left( a^{\frac23}+ b^{\frac23}\right)}^ {\frac32} \tag 4$$
Actually it is an astroid, the envelope of the sliding ladder, having equation
$$x^{\frac23}+ y^{\frac23}= L^{\frac23}$$
onto which you juxtaposed a sharp corner from the right side.
The 3 lines (two blue, one red) have the same length 19.7313 units. Only the red line touches the corner $$(8,6)$$. The blue lines do not touch the corner; there is considerable gap/clearance.
Now plug in numerical values for symbols
$$a=6,\; b=8,\;L \approx 19.7313 \;ft, \; \phi= 42.2568^{\circ}; \tag 5$$
If corridor widths are changed you know what to do.
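A numerical cross-check of (4) and (5), not part of the original answer: the longest ladder that turns the corner minimises $L(\phi)=a/\sin\phi+b/\cos\phi$ over $0<\phi<\pi/2$, and the minimum reproduces both the length and the angle quoted above.

```python
import numpy as np

a, b = 6.0, 8.0                            # corridor widths from (5)

phi = np.linspace(1e-4, np.pi / 2 - 1e-4, 200001)
L_phi = a / np.sin(phi) + b / np.cos(phi)  # length of a segment touching both walls and the corner
i = np.argmin(L_phi)

L_closed = (a**(2/3) + b**(2/3))**1.5      # equation (4)
assert abs(L_phi[i] - L_closed) < 1e-6     # ~ 19.7313 ft
assert abs(np.degrees(phi[i]) - 42.2568) < 1e-2
```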
• This model assumes the ladder to be in contact with the corner at all times. But the ladder touches the corner only momentarily. Sep 18 at 16:57
• Not at all. The model does not assume contact with corner always but assumes contact only with the wall and floor along which the ladder slides. This is ensured in the formulation. Your misconception can be hopefully cleared by sketch just now added. To drive home this point I mentioned already about the Astroid that touches the corner at a single point. All other positions of the ladder keeps it clear from re-entrant corner without any contact or interference as shown. Sep 18 at 20:58 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9873750525759486,
"lm_q1q2_score": 0.837262213804578,
"lm_q2_score": 0.8479677622198947,
"openwebmath_perplexity": 388.5708878218957,
"openwebmath_score": 0.7111199498176575,
"tags": null,
"url": "https://math.stackexchange.com/questions/4253717/a-simpler-solution-to-an-optimization-problem"
} |
• Yes it does follow the poisson distribution as well, in which case the variance is the same as the expectation. Jun 5, 2015 at 14:32
Let us define $X$ as the random number of customers arriving in a 3 hour interval, so $X \sim \operatorname{Poisson}(\lambda = 60)$ as you noted. Now, for each $i = 1, 2, \ldots, X$, let $T_i$ be the number of transactions that the $i^{\rm th}$ customer to arrive makes, so that $$\Pr[T_i = k] = 0.1 (5 - k), \quad k = 1, 2, 3, 4.$$ We can now calculate $$\operatorname{E}[T_i] = \sum_{k=1}^4 \frac{k(5-k)}{10} = 2, \quad \operatorname{E}[T_i^2] = \sum_{k=1}^4 \frac{k^2(5-k)}{10} = 5,$$ so that $\operatorname{Var}[T_i] = 5-2^2 = 1$. Now we let $$S = \sum_{i=1}^X T_i$$ be the total number of transactions observed in the 3 hour interval. We wish to compute $\operatorname{E}[S]$ and $\operatorname{Var}[S]$. To this end, observe that
\begin{align*} \operatorname{E}[S] &= \operatorname{E}_X[\operatorname{E}[S \mid X]] \\[8pt] &= \operatorname{E}_X\left[\sum_{i=1}^X \operatorname{E}[T_i]\right] \\[8pt] &= \operatorname{E}[X \operatorname{E}[T_i]] \\[8pt] &= \operatorname{E}[X]\operatorname{E}[T_i] \\[8pt] &= 2\lambda = 120. \end{align*} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9658995733060719,
"lm_q1q2_score": 0.8170954448650551,
"lm_q2_score": 0.8459424431344437,
"openwebmath_perplexity": 339.18802956641935,
"openwebmath_score": 0.9834083318710327,
"tags": null,
"url": "https://math.stackexchange.com/questions/1313156/poisson-distribution-where-each-event-can-lead-to-different-outcomes"
} |
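The calculation above can be checked by simulation (a sketch; 20,000 replicates with a fixed seed). The excerpt sets up but does not finish computing $\operatorname{Var}[S]$; by the standard compound-Poisson identity it is $\lambda\operatorname{E}[T^2] = 60 \cdot 5 = 300$, which the simulation also confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 60                                   # E[X]: customers in the 3-hour window
k_vals = np.array([1, 2, 3, 4])
probs = 0.1 * (5 - k_vals)                 # Pr[T = k] = 0.1 (5 - k)
probs = probs / probs.sum()                # guard against float rounding

def total_transactions():
    n = rng.poisson(lam)                              # number of customers X
    return rng.choice(k_vals, p=probs, size=n).sum()  # S = T_1 + ... + T_X

S = np.array([total_transactions() for _ in range(20000)])

assert abs(S.mean() - 120) < 1.0     # E[S] = lambda E[T] = 120
assert abs(S.var() - 300) < 15.0     # Var[S] = lambda E[T^2] = 300
```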
machine-learning, algorithm, prediction
Title: Selecting the right algorithm to predict disease from questions I'm trying to come up with the right algorithm for a system in which the user enters symptoms and we start triggering questions related to those symptoms, and the answers will result in a disease related to the answers given by the user.
Let's assume that the user entered the following input:
Symptom - Deafness
Q1. How long have you had a problem with deafness
A)From few days B)From few weeks to months C)More than month D)Since Birth
Q2. What was the onset of the deafness
A) Sudden B) Gradual
Now we have a knowledge base: if a user selects option 1 from question 1 and option 2 from question 2, then we will give him some disease. But I need an algorithm which will give a % of success in the backend so that I can throw the results of the disease. For example, if a user selects option 2 from question 1 and option 1 from question 2, then when we compare with our knowledge base there will be one set there which has option 1 from question 1 and option 2 from question 2, so it's "SOME" disease. Now if we compare with our knowledge base and we find even 50% of the choices resulting in this disease, we will throw that disease name.
Now I am confused about what algorithm should be used for this approach. There are no defined rules for choosing a machine learning algorithm to learn some type of pattern. However, there are some guidelines to help you better select an algorithm which will yield a higher probability of success.
Some important considerations are:
Number of features: This is the number of questions that each patient had to answer.
Number of instances: This is the number of patients that took your survey.
Number of output classes: This is how specific you want your diagnosis of the disease. Is this a yes/no, or a 5-stage progression. | {
"domain": "ai.stackexchange",
"id": 286,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, algorithm, prediction",
"url": null
} |
machine-learning, regression, rmse, r-squared
Title: Determining which model result is better I am trying to determine which model result is better. Both results are trying to achieve the same objective, the only difference is the exact data that is being used. I used random forest, xgboost, and elastic net for regression. Here is one of the results that has low rmse but not so good r2
model n_rows_test n_rows_train r2 rmse
rf 128144 384429 0.258415240861579 8.44255341472637
xgb 128144 384429 0.103772500839367 9.28116624462333
e-net 128144 384429 0.062460300392487 9.49266713837073
The other model run has a higher r2 but not so good rmse relative to the standard deviation.
n_rows_train n_rows_test metric_col model rmse r2
37500 12500 3 year appreciation e-net 62.3613393228877 0.705221446139843
37500 12500 3 year appreciation rf 52.0034451171835 0.795011617995982
37500 12500 3 year appreciation sgd 1952637950501.17 -2.89007070463773E+020
37500 12500 3 year appreciation xgb 50.3263561914699 0.808019998691306
Which one is better? Another way to approach the problem is to take all of the trained models and compare each of their performances on the same hold-out dataset. This is the most common way to evaluate machine learning models.
Choosing the evaluation metric to use depends on the goal of the project. Most machine learning projects care about predictive ability.
R² is not a useful metric for the predictive ability of a model.
RMSE can be a useful metric of predictive ability. However, since the errors are squared, it is sensitive to the properties of the data. You mention that you are using different data. Those differences in data could impact comparing RMSE across different sources. Comparing different models on the same dataset would be better when using RMSE. | {
"domain": "datascience.stackexchange",
"id": 10119,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, regression, rmse, r-squared",
"url": null
} |
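To follow the suggestion above (evaluate every trained model on the same hold-out set), the two metrics can be computed directly. These are the standard definitions in plain NumPy; the hold-out values and model predictions are made-up illustrations, not the poster's data:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean squared error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# Same hold-out targets, two hypothetical models' predictions:
y_test  = [3.0, 5.0, 7.0, 9.0]
model_a = [3.1, 4.8, 7.2, 8.9]
model_b = [4.0, 4.0, 8.0, 8.0]
```

Because both models are scored on the same `y_test`, their RMSE and R² values are directly comparable, which is the point of the hold-out approach.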
neural-network, classification, regression, convolutional-neural-network, audio-recognition
So the output is divided into 183 classes, being mapped onto HMMs with 3 states for each of 61 phonemes, and the ANN target (as I see it, target = posterior probability) comes from training a monophone HMM with forced alignment. I am not sure I understand this process. If the ANN targets are those the CNN should aim/regress to, and at the end classify the state based on, why then process the input? Why not make a simple DNN that does the regression/classification?
It looks like the improvement lies in the use of forced alignment here, and only on monophones? Where is the improvement?
And again, how am I supposed to link the input shape and the output shape based on this? This would require the audio files to have a certain length; the length of the audio is never specified, so I am assuming that this is not the case. DNN/CNN prediction (training) is done for 1 frame at a time. The output can be any of the 183 output states. Length of the audio files is not a problem, since the input to the DNN/CNN has the same dimension; only the number of inputs changes with audio length.
e.g. 1.wav has 500 features, each 39-dimensional, and 2.wav has 300 features;
the trained model will take a 39 dimensional input and output will be 183 dimensional. So depending on length we'll get different number of outputs.
Since all the frames in an utterance are tested against all 183 possibilities, the output always remains 183-dimensional. There is no need to specify the number of phonemes an utterance can have, as everything is being done at frame level.
Frame concatenation (9-15 frames) is done to leverage contextual properties of speech data. Phone changes are context dependent. For 15 frame context, we change the input of DNN to [7*39 (left_context) 39 7*39(right_context)], a 585 dimensional vector. So now DNN will take 585 dimensional data as input and will output a 183 dimensional vector.
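The frame concatenation described above (7 left-context frames + the current frame + 7 right-context frames, each 39-dimensional, giving 585 inputs) can be sketched in NumPy. Replicating edge frames, as done here, is one common choice for handling utterance boundaries:

```python
import numpy as np

def splice(feats, left=7, right=7):
    """Stack each frame with `left` past and `right` future frames
    (edges replicated), turning (T, d) features into (T, (left+1+right)*d)."""
    padded = np.concatenate([np.repeat(feats[:1], left, axis=0),
                             feats,
                             np.repeat(feats[-1:], right, axis=0)])
    T = len(feats)
    return np.stack([padded[t:t + left + 1 + right].ravel() for t in range(T)])

feats = np.random.rand(500, 39)    # e.g. 1.wav: 500 frames of 39-dim features
X = splice(feats)                  # (500, 585): 7*39 + 39 + 7*39
```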
CNN input | {
"domain": "datascience.stackexchange",
"id": 1584,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-network, classification, regression, convolutional-neural-network, audio-recognition",
"url": null
} |
performance, beginner, c, hash-map
/* test append 14344392 keys */
void hashdb_test_append_rockyou(hashdb *db)
{
size_t x = ROCKYOU_LINES-1;
size_t i;
for(i = 0; i < ROCKYOU_LINES; i++, x--) {
if(hashdb_add(db, test_keys[i], test_keys[x]) != 0) {
fprintf(stderr, "error: hashdb_test_append_rockyou\n");
abort();
}
}
}
/* test get 1000 keys */
void hashdb_test_get_1000_keys(hashdb *db) {
for(size_t i = HASHDB_START_GET_FROM; i < HASHDB_START_GET_FROM + 1000; i++) {
const char *res = hashdb_get(db, test_keys[i]);
if(res == NULL) {
fprintf(stderr, "error: hashdb_test_get_1000_keys\n");
abort();
}
}
}
/* test get single keys */
void hashdb_test_get_key(hashdb *db)
{
const char *res = hashdb_get(db, test_keys[HASHDB_START_GET_FROM]);
if(res == NULL) {
fprintf(stderr, "error: hashdb_test_get_key\n");
abort();
}
}
/* test get all keys */
void hashdb_test_get_rockyou(hashdb *db)
{
for(size_t i = 0; i < ROCKYOU_LINES; i++) {
const char *res = hashdb_get(db, test_keys[i]);
if(res == NULL) {
fprintf(stderr, "error: hashdb_test_get_rockyou\n");
abort();
}
}
} | {
"domain": "codereview.stackexchange",
"id": 44486,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, beginner, c, hash-map",
"url": null
} |
ros
Title: rospy.init_node() has already been called with different arguments
Hi, I'm running indigo on a RPi.
I have a script to use a joystick to run a servo, which I'll paste below. I know the code subscribing to this is working, because when I publish angles to it using rostopic, the servo turns. I was hoping that when I use this script, I could push the joystick up, and the servo would turn more positive. I based my usage of the joy node on this link from the ROS Wiki page. I got the following error.
[ERROR] [WallTime: 1425312842.009290] bad callback: <function joy_talker at 0x224c1b0>
Traceback (most recent call last):
File "/home/pi/catkin_ws/src/ros_comm/rospy/src/rospy/topics.py", line 702, in _invoke_callback
cb(msg)
File "/home/pi/catkin_ws/src/servo/joy.py", line 8, in joy_talker
rospy.init_node('talker', anonymous=True)
File "/home/pi/catkin_ws/src/ros_comm/rospy/src/rospy/client.py", line 274, in init_node
raise rospy.exceptions.ROSException("rospy.init_node() has already been called with different arguments: "+str(_init_node_args))
ROSException: rospy.init_node() has already been called with different arguments: ('servo_joy', ['/home/pi/catkin_ws/src/servo/joy.py'], False, None, False, False)
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32
from sensor_msgs.msg import Joy | {
"domain": "robotics.stackexchange",
"id": 21028,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
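The traceback above arises because rospy.init_node() is being called inside the Joy callback after the node was already initialised (as 'servo_joy'). A node is initialised once per process, so the usual fix is to keep init_node out of the callback. A hedged sketch of that structure (requires a ROS runtime; the topic names and the axis-to-angle mapping are assumptions, not from the original script):

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32
from sensor_msgs.msg import Joy

def joy_callback(msg):
    # No rospy.init_node() here -- the node already exists.
    pub.publish(msg.axes[1] * 90.0)   # axis index and scaling are assumptions

if __name__ == '__main__':
    rospy.init_node('servo_joy')      # called exactly once, at startup
    pub = rospy.Publisher('servo', Float32, queue_size=10)
    rospy.Subscriber('joy', Joy, joy_callback)
    rospy.spin()
```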
assembly, compiler
pauseString db "PAUSE",0
section '.rdata' readable writable
result dd ? ;A variable used internally by the AEC compiler.
numberOfRows dd ?
currentRow dd ?
currentColumn dd ?
numberBeforeTheImmediatelyAboveOne dd ?
numberImmediatelyAbove dd ?
numberToBePrinted dd ?
array dd 30000 DUP(?)
section '.idata' data readable import
library msvcrt,'msvcrt.dll' ;"Microsoft Visual C Runtime Library", available as "C:\Windows\System32\msvcrt.dll" on Windows 98 and newer.
import msvcrt,printf,'printf',system,'system',exit,'exit',scanf,'scanf',clock,'clock'
AsmEnd
I find it very confusing that you consider all of this assembly as inline assembly. I see the start and exit of a regular FASM assembly program, 2 things I don't expect to find in inline assembly.
Am I correct when I say that your high level language only uses single precision float variables? If not then variables like numberOfRows, currentRow, and currentColumn should be treated like dword integers for speed and frankly because that's what they truly are.
numberBeforeTheImmediatelyAboveOne dd ?
numberImmediatelyAbove dd ?
While using descriptive names is encouraged, having source lines that are much longer than the visible screen's width makes reading a lot more difficult. Perhaps you could make use of FASM's line continuation character \ ?
While currentRow < numberOfRows | currentRow = numberOfRows
Why the OR operator? Does your project not have the compound <= operator?
If available then simply write: While currentRow <= numberOfRows.
If not available then you could invert the condition to: While numberOfRows > currentRow.
This is wrong
fld dword [numberToBePrinted]
fstp qword [esp]
Here you convert a single precision float into a double precision float, but you forget to reserve space on the stack!
sub esp, 8 <<<< Making room on the stack
fld dword [numberToBePrinted]
fstp qword [esp] | {
"domain": "codereview.stackexchange",
"id": 38193,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "assembly, compiler",
"url": null
} |
quantum-field-theory, particle-physics, quantum-electrodynamics, scattering, feynman-diagrams
Which QFT vertex will cause an electron to scatter off a spin-0 charged particle electromagnetically?
We cannot write a Lorentz-invariant interaction vertex with one fermion field (the electron) and two boson fields (the spin-0 particle from which the electron scatters and the spin-1 photon mediator). If someone can point this out, I'll highly appreciate it. When an electron scatters off a muon, there is no muon-electron-photon vertex. Instead there are two vertices: one with two electrons and a photon, and one with two muons and a photon.
Similarly if you have an electron scattering off a boson, you wouldn't have a single electron-boson-photon vertex, you would have the usual "two electrons and a photon" vertex and an additional "two bosons and a photon" vertex in the Feynman diagram. | {
"domain": "physics.stackexchange",
"id": 71566,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, particle-physics, quantum-electrodynamics, scattering, feynman-diagrams",
"url": null
} |
zoology, entomology
Title: How do insects know what is edible? What is the current scientific consensus on how insects innately know what is food and not food? If they are introduced to new food sources, do they experiment with eating the new food? Could you teach a praying mantis to eat beef? Insect feeding behaviour is generally triggered by one or more conditions which may include colour, shape, chemical traces or temperature.
Insects generally locate food based on some combination of olfactory, thermal and visual cues (colour and shape). If their minimum criteria are met to a specified tolerance, they will attempt to feed on whatever is nearby using their usual feeding method.
When these conditions appear on the 'wrong' target, they attract insects and trigger feeding attempts. Insects can be triggered to feed on atypical food sources if the relevant aspects of their environment match those of their normal feeding environments. For example, here is a report from a professor of entomology recollecting his observations of being bitten by pea aphids while handling plants, which he assumes is because of the scent on his hands.
We can exploit this in various ways for research. One is for artificial blood-feeding of insects: most systems, like the Hemotek membrane feeding system, warm blood to the body temperature of the host. They do not normally resemble a target host in any other way. Some blood-feeding insects have very specific requirements for temperature (for example they will only feed on blood if it is heated to the body temperature of birds; the same blood heated to mammalian body temperature will be ignored) but we do not need to make the target look or smell like the natural host. Other species may need olfactory cues, which can be provided by researchers rubbing the membranes on their forearms before placing them on the feeding system, or by breathing on cages as you add the food.
A second way we exploit this is for insect traps. Although not all traps work this way, some work by mimicking the host and attracting insects that are looking for a meal. This can be via olfactory/chemical mimicry (for example carbon dioxide baited traps - try Googling "CO2-baited traps") or visual. Different degrees of visual 'deception' may be needed; for instance to attract tsetse flies, colour is important but shape is not: | {
"domain": "biology.stackexchange",
"id": 8621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "zoology, entomology",
"url": null
} |
textbook-and-exercises, quantum-operation, nielsen-and-chuang, quantum-process-tomography
Title: In quantum process tomography, how does $\chi$ characterize a quantum process? I'm working through Nielsen and Chuang and I'm pretty confused by the discussion of quantum process tomography. I'm trying to work through an example of 1-qubit state tomography given by N&C (box 8.5), which provides an algorithm for determining $\chi$ in terms of block matrices and density matrices (determined by state tomography). The process seems pretty straightforward, but how does $\chi$ characterize a quantum process? The linear map $\mathcal{E}$ is what characterizes a quantum process,
$$\rho \rightarrow \mathcal{E}(\rho),$$
but $\mathcal{E}$ can be determined by $\chi$. Using the operator-sum representation,
$$\mathcal{E}(\rho) = \sum_i A_i \rho A^\dagger_i = \sum_{i}\sum_{m}\sum_{n}a_{im}\tilde{A}_m \rho a^*_{in}\tilde{A}^\dagger_n,$$
where the $a_{ij}$ are some set of complex numbers that allow a fixed set of operators $\tilde{A}_{i}$ to form a basis for the unknown set of operators $A_i$ on the state space. Remember, if we can determine the operators $A_i$, then we can completely describe $\mathcal{E}$. Rearranging the above,
$$\mathcal{E}(\rho) = \sum_m \sum_n \tilde{A}_m \rho \tilde{A}^\dagger_n \sum_i a_{im} a^*_{in} = \sum_{mn} \tilde{A}_m \rho \tilde{A}^\dagger_n \chi_{mn},$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 2702,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "textbook-and-exercises, quantum-operation, nielsen-and-chuang, quantum-process-tomography",
"url": null
} |
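The operator-sum/$\chi$-matrix equivalence above can be checked numerically. The following sketch is my own construction (not from N&C): it uses the Pauli matrices as the fixed basis $\tilde A_m$ and a single-qubit bit-flip channel, whose Kraus operators are known in closed form.

```python
import numpy as np

# Fixed operator basis A~_m: Pauli matrices, orthogonal with tr(P_m P_n) = 2 delta_mn
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, X, Y, Z]

# Bit-flip channel: Kraus operators A_0 = sqrt(1-p) I, A_1 = sqrt(p) X
p = 0.3
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]

# Coefficients a_im in A_i = sum_m a_im A~_m (projection onto the basis)
a = np.array([[np.trace(B.conj().T @ K) / 2 for B in basis] for K in kraus])

# chi_mn = sum_i a_im a*_in; for this channel chi = diag(1-p, p, 0, 0)
chi = a.T @ a.conj()

rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)  # an arbitrary test state

E_kraus = sum(K @ rho @ K.conj().T for K in kraus)
E_chi = sum(chi[m, n] * (basis[m] @ rho @ basis[n].conj().T)
            for m in range(4) for n in range(4))

print(np.allclose(E_kraus, E_chi))  # True: both representations give the same map
```

So $\chi$ characterizes the process in the sense that, once the basis $\tilde A_m$ is fixed, the matrix $\chi$ alone determines $\mathcal{E}(\rho)$ for every input $\rho$.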
java, performance, android, batch, jni
int min=insn->detail->groups_count > sz2 ? sz2 : insn->detail->groups_count;
for(int i=0;i<min;++i)
{
data2[i]=insn->detail->groups[i];
}
// Don't forget to release it
env->ReleaseByteArrayElements(*jba2, data2, 0);
env->DeleteLocalRef(job2);
fid = env->GetFieldID(darcls, "groups_count","B");
if (fid == NULL) {
return; /* failed to find the field */
}
env->SetByteField(dar, fid, insn->detail->groups_count);
}
//__android_log_print(ANDROID_LOG_VERBOSE, "Disassembler", "afterdetail");
jobject lvi=env->NewObject(lvicls,ctorLvi,dar);
//__android_log_print(ANDROID_LOG_VERBOSE, "Disassembler", "created lvi");
//jstring element = env->NewStringUTF(s.c_str());
env->CallBooleanMethod(arr, java_util_ArrayList_add, dar);
env->CallVoidMethod(thiz,additem,lvi);
__android_log_print(ANDROID_LOG_VERBOSE, "Disassembler", "added lvi"); | {
"domain": "codereview.stackexchange",
"id": 32178,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, performance, android, batch, jni",
"url": null
} |
harmonic-oscillator, perturbation-theory
Title: Perturbation Theory Applied to the Quantum Harmonic Oscillator I am trying to compare the wave functions obtained by the exact method and by the approximate method.
The potential is
\begin{equation} V(x)=\frac{1}{2}m\omega^2x^2+Ax\end{equation}
I found a solution, but I am wondering what series expansion takes me from the first function to the second. Letting $f(A)=\exp\left(-\frac{m\omega}{2\hbar}\left(x+\frac{A}{m\omega^2}\right)^2\right)$, we expand as the instructions said, in powers of $A$:
$$f(A)=f(0)+\frac{df}{dA}(0)A+\frac{1}{2}\frac{d^2f}{dA^2}(0)A^2+...$$
The text only keeps the first two terms, but it appears to just be a normal Taylor series in $A$. | {
"domain": "physics.stackexchange",
"id": 67774,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "harmonic-oscillator, perturbation-theory",
"url": null
} |
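To make the expansion in the previous excerpt explicit, the first derivative can be computed directly:

$$\frac{df}{dA}=-\frac{1}{\hbar\omega}\left(x+\frac{A}{m\omega^2}\right)f(A),\qquad \left.\frac{df}{dA}\right|_{A=0}=-\frac{x}{\hbar\omega}\,f(0),$$

so keeping the first two terms,

$$f(A)\approx\left(1-\frac{Ax}{\hbar\omega}\right)\exp\left(-\frac{m\omega}{2\hbar}x^2\right).$$

Since $x\,e^{-m\omega x^2/2\hbar}$ is proportional to the first excited oscillator state, this is the unperturbed ground state plus an $O(A)$ admixture of the $n=1$ state, as first-order perturbation theory predicts.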
Sorry, no clue about a physical system.
• This is a good intuitive way to think about it. A formal proof is too heavy for a over-a-cup-of-coffee discussion, this is just right. I'll give you the credit since nobody seems to want to take a shot at the physical application – crasic Sep 29 '10 at 8:23
• Could you please elaborate how the fate of the last person is determined the moment either the first or the last seat is selected? and how any other seat will necessarily be taken by the time the last guy gets to 'choose'.? – Mathgeek Jan 20 '17 at 19:50
• @Mathgeek: Suppose the last guy gets seat X, which is neither the first seat, nor the last seat. What seat did person numbered X take? – Aryabhata Jan 23 '17 at 23:26
• @Mathgeek: Yes, it will be 1. – Aryabhata Jan 25 '17 at 22:11
• This answer is good, but I think fails to make a fundamental insight: at any given time there is only one person who is yet to board and whose seat has been taken. This is because when someone boards either they sit in their seat or they take someone else's seat, but remove themselves from the queue. In both cases the net number of pre-taken seats doesn't change. – Stella Biderman Mar 30 '17 at 14:33
Here is a rephrasing which simplifies the intuition of this nice puzzle.
Suppose whenever someone finds their seat taken, they politely evict the squatter and take their seat. In this case, the first passenger keeps getting evicted (and choosing a new random seat) until, by the time everyone else has boarded, he has been forced by a process of elimination into either his own seat or the last passenger's seat.
This process is the same as the original process except for the identities of the people in the seats, so the probability of the last boarder finding their seat occupied is the same.
When the last boarder boards, the first boarder is either in his own seat or in the last boarder's seat, which have both looked exactly the same (i.e. empty) to the first boarder up to now, so there is no way the poor first boarder could be more likely to choose one than the other. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347905312774,
"lm_q1q2_score": 0.824315749269558,
"lm_q2_score": 0.84594244507642,
"openwebmath_perplexity": 500.10150711242886,
"openwebmath_score": 0.7296741008758545,
"tags": null,
"url": "https://math.stackexchange.com/questions/5595/taking-seats-on-a-plane/5596"
} |
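The resulting probability of $\frac12$ is easy to confirm with a small Monte Carlo simulation of the boarding process (a sketch of mine, with 20 seats and passengers labelled 0..19):

```python
import random

def last_gets_own_seat(n=20):
    """Simulate one boarding; True if the last passenger ends up in seat n-1."""
    seats = [None] * n
    seats[random.randrange(n)] = 0           # passenger 0 sits at random
    for p in range(1, n):
        if seats[p] is None:
            seats[p] = p                     # own seat free: take it
        else:
            free = [s for s in range(n) if seats[s] is None]
            seats[random.choice(free)] = p   # own seat taken: pick a free one
    return seats[n - 1] == n - 1

random.seed(0)
trials = 20000
freq = sum(last_gets_own_seat() for _ in range(trials)) / trials
print(freq)  # close to 0.5
```

The estimate hovers around 0.5 regardless of the number of seats, matching the symmetry argument above.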
beam, ansys
NODE(x,y,z) returns the number of the selected node nearest the x,y,z
location (in the active coordinate system, lowest number for
coincident nodes)
I adapted the vm1 example from the docs. When you know the nodal positions it looks like this:
/PREP7
ANTYPE,STATIC ! STATIC ANALYSIS
ET,1,LINK180
SECTYPE,1,LINK
SECDATA,1 ! CROSS SECTIONAL AREA (ARBITRARY) = 1
MP,EX,1,30E6
N, ,0,0,0
N, ,0,4,0
N, ,0,7,0
N, ,0,10,0
NODE1 = NODE(0,0,0)
NODE2 = NODE(0,4,0)
NODE3 = NODE(0,7,0)
NODE4 = NODE(0,10,0)
E,NODE1 ,NODE2 ! DEFINE ELEMENTS
EGEN,3,1,1 ! Generates elements from an existing pattern
D,NODE1,ALL,,,NODE4,NODE3 ! BOUNDARY CONDITIONS AND LOADING
F,NODE2,FY,-500
F,NODE3,FY,-1000
FINISH
/SOLU
OUTPR,BASIC,1
OUTPR,NLOAD,1
SOLVE
FINISH
If you don't know the exact position of the nodes, you can loop through a selected set:
/PREP7
ANTYPE,STATIC ! STATIC ANALYSIS
ET,1,LINK180
SECTYPE,1,LINK
SECDATA,1 ! CROSS SECTIONAL AREA (ARBITRARY) = 1
MP,EX,1,30E6
N, ,0,0,0
N, ,0,4,0
N, ,0,7,0
N, ,0,10,0 | {
"domain": "engineering.stackexchange",
"id": 4832,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beam, ansys",
"url": null
} |
fluid-dynamics, waves
\partial_t u + u \partial_x u + \frac1\rho \partial _ x p=0.\\
\end{gather}
Here we used the relation $d \rho = a^{-2} d p$, which follows from the definition of the speed of sound.
Dividing the first equation by $\pm \rho a$ and adding it to the second, we obtain:
$$
\partial_t u \pm \frac1{\rho a} \partial_t p + (u \pm a) (\partial_x u \pm \frac1{\rho a}\partial_x p)=0. \tag{1}
$$
Now, for an ideal gas $a^2 = \frac{\gamma p}{\rho}$, so
$$
d a = \frac {\gamma -1}2 \frac { 1}{\rho a} d p,
$$
and (1) could be thus written as
$$
[\partial _t + (u \pm a) \partial _x ] J_\pm=0, \tag{2}
$$
where $J_+$ and $J_-$ are two Riemann invariants:
$$
J_\pm = u \pm \frac2{\gamma -1} a.
$$
To obtain the second order equation for $a$ and $u$ we act with differential operators
$[\partial _t + (u \mp a) \partial _x ]$ (notice the inverted sign) on equations (2), then add and subtract the resulting equations. We thus obtain two equations containing the 2nd-order hyperbolic differential operator from the question, plus a number of terms quadratic in the first derivatives, like $(\partial_t u )^2$. Those quadratic terms are then eliminated using equations (2).
In the end we finally arrive at the equations:
\begin{align*}
&\frac{\partial^2 a}{\partial t^2}+2u\frac{\partial^2 a}{\partial x \partial t}+\left[u^2-a^2\right]\frac{\partial^2 a}{\partial x^2}=0\\ | {
"domain": "physics.stackexchange",
"id": 10752,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, waves",
"url": null
} |
computational-physics, simulations, statistics, data-analysis
I get from both (4) and (5) something identical or very similar to (1) and (2)
when I have a small (<100000) number of samples, but increasing them, while the jackknife average (4) is still very similar to (1), now the jackknife variance (5) gets smaller and smaller compared to (2), soon becoming many orders of magnitude smaller. This happens also for more complicated functions, not just the plain average of something.
I don't know if I misunderstood the algorithm, if it's an error in the implementation or if this is exactly what I should expect from the jackknife variance. In your case, $\bar x_{JK}$ and $\sigma_{JK}^2$ should be exactly
equal to $\bar x$ and $\sigma^2$. You can show that by taking equations
(4) and (5) and plugging in the Jackknife definitions (and the
definition of $\bar x$) and doing some (perhaps ugly) algebra. Of
course $\sigma_{JK}^2$ should become smaller with increasing $N$, but
so should $\sigma^2$.
At first and second glance I couldn't spot any error in your
equations, so maybe you're right and there's a bug in the
implementation. | {
"domain": "physics.stackexchange",
"id": 50371,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computational-physics, simulations, statistics, data-analysis",
"url": null
} |
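For the plain average, the claimed exact equality can be verified numerically. The sketch below is my own, using the standard delete-1 jackknife formulas for the mean (which I take to be the question's (4) and (5)); the jackknife mean reproduces the sample mean and the jackknife variance reproduces the usual error-of-the-mean estimate $s^2/N$ to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
N = len(x)

loo = (x.sum() - x) / (N - 1)        # the N leave-one-out (delete-1) averages

x_jk = loo.mean()                                   # jackknife mean
var_jk = (N - 1) / N * ((loo - x_jk) ** 2).sum()    # jackknife variance

# For the mean these are algebraic identities, not approximations:
print(np.allclose(x_jk, x.mean()))              # True
print(np.allclose(var_jk, x.var(ddof=1) / N))   # True
```

If the comparison quantity (2) in the question were the sample variance $s^2$ rather than $s^2/N$, the two would differ by a factor of $N$; that is one possible source of a discrepancy that grows with sample size.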
quantum-field-theory, research-level, yang-mills, wilson-loop
For an abelian group $G$ one has $F=dA$, and so you can use Stokes theorem to write the exponent as $\int_D F$. Note, that any connected compact abelian group is isomorphic to $U(1)^n$. The only difference for non-abelian groups $G$ is that you cannot reduce the integral of the connection to an integral of the curvature.
To evaluate a Wilson loop, you need $R$ to be a representation of $\mathfrak{g}$, it does not have to exponentiate to $G$. It works as follows: you first use your representation to make $A$ a matrix-valued 1-form and then use the ordinary matrix path-ordered exponential. If the representation exponentiates, it coincides with the usual definition, where you first use the exponential map and then take the trace.
I understand Witten's remark as follows. Since your action is $\frac{1}{e^2}\int F\wedge *F$, to use perturbation series you rescale $A\rightarrow e A$. That means you have to expand the exponential for the Wilson loop in Taylor series. If your connection is not coupled to anything, all your diagrams are photons emitted and absorbed by the Wilson loop.
For example, at order $e^2$ you would have a process like $\langle Tr(AA)\rangle$, which you can think of as a Wilson loop together with a photon propagator attached, where the photon carries the representation indices away. So, half of the Wilson loop "carries" a representation (it corresponds to the trace), while the other half has the zero representation (since you are multiplying $A$'s). I hope it's clear without a picture. | {
"domain": "physics.stackexchange",
"id": 3371,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, research-level, yang-mills, wilson-loop",
"url": null
} |
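For reference, the object discussed above is the Wilson loop in representation $R$; in one standard convention (signs and factors of $i$ vary between texts),

$$W_R(C)=\operatorname{Tr}_R\,\mathcal{P}\exp\left(i\oint_C A_\mu\,dx^\mu\right),$$

and for abelian $G$, where $F=dA$ and no path ordering is needed, Stokes' theorem gives $\oint_C A=\int_D F$ for any surface $D$ with $\partial D=C$, as used in the first paragraph.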
molecular-biology, transcription, polymerase
Historical. The amino acid sequence of the protein (the product of the gene) is central to this convention because knowledge of the genetic code, and hence representation of the region of the mRNA that encodes protein — and by extension the DNA — was the first sequence information to be known.
Logical consistency. Later other sequences features were identified (some of which initially may have just been genetic features), e.g. ribosome binding sites, polyadenylation addition signals, transcription start sites, promoters, transcription factor recognition sites. It was logically consistent to represent them on the same strand as the coding sequence.
Functional agnosticism. In many cases the function of a sequence followed its description, so there was no reason initially to place it on any particular strand. However, even if it were thought that the function of some sequence were to be recognized on the opposite strand (what I would call the anti-sense strand), it would be unwise scientifically to change the representation to indicate this. Science progresses and interpretation changes. Better to separate concrete descriptive features from conclusions about their function. | {
"domain": "biology.stackexchange",
"id": 8241,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "molecular-biology, transcription, polymerase",
"url": null
} |
# Understanding of big-O massively improved when I began thinking of orders as sets. How to apply the same approach to big-Theta?
Today I revisited the topic of runtime complexity orders – big-O and big-$\Theta$. I finally fully understood what the formal definition of big-O meant but more importantly I realised that big-O orders can be considered sets.
For example, $n^3 + 3n + 1$ can be considered an element of the set $O(n^3)$. Moreover, $O(1)$ is a subset of $O(n)$, which is a subset of $O(n^2)$, etc.
This got me thinking about big-Theta, which is also obviously a set. What I found confusing is how the big-Theta orders relate to each other; i.e. I believe that $\Theta(n^3)$ is not a subset of $\Theta(n^4)$. I played around with Desmos (a graph visualiser) for a while and failed to find how each big-Theta order relates to the others. A simple example shows that although $f(n) = 2n$ is in $\Theta(n)$ and $g(n) = 2n^2$ is in $\Theta(n^2)$, the functions in $\Theta(n)$ are obviously not in $\Theta(n^2)$. I kind of understand this visually, if I think about how different graphs and bounds might look, but I am having a hard time finding a solid explanation of why it is the way it is.
So, my questions are: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.967899295134923,
"lm_q1q2_score": 0.830233147728366,
"lm_q2_score": 0.857768108626046,
"openwebmath_perplexity": 234.37748428922316,
"openwebmath_score": 0.8479422330856323,
"tags": null,
"url": "https://cs.stackexchange.com/questions/118671/understanding-of-big-o-massively-improved-when-i-began-thinking-of-orders-as-set/118674"
} |
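As a reference point for the set-based view in the question, the standard definition (as in CLRS; notation varies slightly between texts) makes the relation between $\Theta$-classes precise:

$$\Theta(g)=\left\{f \;\middle|\; \exists\,c_1,c_2>0,\ n_0:\ \forall n\ge n_0,\ c_1 g(n)\le f(n)\le c_2 g(n)\right\}=O(g)\cap\Omega(g).$$

Because "$f\in\Theta(g)$" is an equivalence relation on functions, two $\Theta$-classes are either identical or disjoint; in particular $\Theta(n^3)$ and $\Theta(n^4)$ share no elements, unlike the nested $O$-classes.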
javascript, html, mvc, dom, dice
DiceGame.start = function(num) {
this.dicesInGame = num;
this.totalDrinks = 0;
this.gameWon = false;
};
DiceGame.roll = function(luckyDice) {
this.luckyDice = luckyDice;
this.dices = [];
this.drink = true;
var removedDices = 0;
for (var i = 0; i < this.dicesInGame; i++) {
var fairChoice = Math.floor(Math.random() * 6) + 1;
if (fairChoice === luckyDice) {
this.drink = false;
removedDices++;
}
this.dices.push({
value: fairChoice,
lucky: fairChoice === luckyDice
});
}
this.dicesInGame -= removedDices;
if (this.drink) {
this.totalDrinks++;
}
if (this.dicesInGame === 0) {
this.gameWon = true;
}
};
// View
var View = {
// Public properties
model: null,
onSelectDiceNumber: function() {},
onSelectDice: function() {}
};
View.start = function(el) {
this.rootEl = el;
this.messageEl = el.querySelector('.message');
this.viewStartEl = el.querySelector('.view-start');
this.diceNumberEl = this.viewStartEl.querySelector('.dice-number');
this.selectDiceNumberEl = this.viewStartEl.querySelector('.select-dice-number');
this.viewGameEl = el.querySelector('.view-game');
this.rollResultsEl = this.viewGameEl.querySelector('.roll-results');
this.diceEls = this.viewGameEl.querySelectorAll('.dice-guess');
this.controlsEl = this.viewGameEl.querySelector('.controls'); | {
"domain": "codereview.stackexchange",
"id": 30636,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, html, mvc, dom, dice",
"url": null
} |
In that case, R(A) = B.
## One-to-one and onto mapping (bijection)
When a mapping is both one-to-one and onto it is called a bijection. For example, if $f : A \rightarrow B$ and $f(x) = x^2$ with
$A = \{x \in \mathbb{R}^{} ~|~ x \ge 0\} ~;~~ B = \{x \in \mathbb{R}^{} ~|~ x \ge 0\}$
the map is one-to-one and onto.
On the other hand if
$A = \{x \in \mathbb{R}^{} ~|~ x \ge 0\} ~;~~ B = \{x \in \mathbb{R}^{}\}$
the map is one-to-one but not onto.
If we choose
$A = \{x \in \mathbb{R}^{}\} ~;~~ B = \{x \in \mathbb{R}^{}\}$
the map is neither one-to-one nor onto.
## Image, pre-image, and inverse functions
Suppose we have a function $f : A \rightarrow B$. Let D be a subset of A (see Figure 1). Let us define
$f(D) := \{f(d) \in B ~|~ d \in D\}~.$
Then, $f(D)$ is called the image of $D$.
On the other hand, if C is a subset of B and we define
$f^{-1}(C) := \{a \in A ~|~ f(a) \in C\}~.$
Then, $f^{-1}(C)$ is called the inverse image or pre-image of $C$.
If $f : A \rightarrow B$ is one-to-one and onto, then there is a unique function $f^{-1} : B \rightarrow A$ such that | {
"domain": "thefullwiki.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918485625379,
"lm_q1q2_score": 0.8035127154047443,
"lm_q2_score": 0.8128673155708975,
"openwebmath_perplexity": 349.18091786068663,
"openwebmath_score": 0.8577086329460144,
"tags": null,
"url": "http://www.thefullwiki.org/Continuum_mechanics/Functions"
} |
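A concrete example of these definitions, with $f : \mathbb{R} \rightarrow \mathbb{R}$, $f(x)=x^2$ (not one-to-one, so no inverse function exists, but images and pre-images are still well defined):

$$f([0,2])=[0,4],\qquad f^{-1}([1,4])=[-2,-1]\cup[1,2],\qquad f^{-1}(\{-1\})=\varnothing.$$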
This should not be surprising, because it also follows from $P(Y>X)=\frac{19}{40}$ and the "obvious" fact that $P(Y=X)=\frac1{20}.$
• Please see my answer for a simple proof that the two probabilities are equal (and hence both equal to $19/40$). – Barry Cipra May 11 '17 at 13:50
• I don't know if this counts as a shortcut, but to find the expected value of X faster, you could find the expected value of one dice and just multiply it by three instead of summing all the numbers from 3-18. I feel like that just works better in my head. – Henry Weng May 11 '17 at 14:47
Whatever Bob rolls with the $6$-sided dice has a $1$ in $20$ chance of being matched, for a tie, by Bill's roll of the $20$-sided die. Whatever sum, $S=a+b+c$, Bob rolls, if you turn his dice over, the sum is $(7-a)+(7-b)+(7-c)=21-S$. Similarly, whatever number $T$ Bill rolls, if you turn the $20$-sided die over, the number is $21-T$. Thus for each outcome in which Bob wins, there is an equally likely outcome in which Bill wins, and vice versa. Hence the probability of winning for each of them is the same, namely
$${1\over2}\left(1-{1\over20}\right)={19\over40}$$
Remark: It's not literally necessary that the "complementary" number for each side of a die be the opposite face, just that there be a complementary number somewhere. For $6$-sided dice, having opposite faces sum to $7$ is fairly standard; I believe it's also standard for $20$-sided dice to have opposite faces sum to $21$.
Also, I'd like to credit David K's answer with motivating this one. When I saw from his analysis that the two probabilities were equal, I decided there ought to be simple reason why. As luck would have it, I found one. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9852713887451681,
"lm_q1q2_score": 0.8560301727693086,
"lm_q2_score": 0.8688267847293731,
"openwebmath_perplexity": 423.7090713453807,
"openwebmath_score": 0.9943407773971558,
"tags": null,
"url": "https://math.stackexchange.com/questions/2275616/probability-of-a-number-rolled-of-a-20-sided-dice-being-greater-than-the-sum-of"
} |
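Both numbers, $P(Y=X)=1/20$ and $P(Y>X)=19/40$, can be confirmed by exhaustively enumerating all $6^3\times 20 = 4320$ equally likely outcomes (a quick sketch of mine):

```python
from fractions import Fraction
from itertools import product

wins = ties = total = 0
for a, b, c in product(range(1, 7), repeat=3):   # Bob's three d6
    s = a + b + c
    for y in range(1, 21):                       # Bill's d20
        total += 1
        if y > s:
            wins += 1
        elif y == s:
            ties += 1

print(Fraction(wins, total))   # 19/40
print(Fraction(ties, total))   # 1/20
```

The complementation bijection $(a,b,c,y)\mapsto(7-a,7-b,7-c,21-y)$ maps each win to a loss, which is why the win count is exactly half of the non-tied outcomes.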
quantum-mechanics, hamiltonian
Title: Getting Energies and Probabilities from the Hamiltonian So I need to find the possible energies and the probabilities of these using the eigenvalues of a Hamiltonian.
Once I obtain the eigenvalues, are those the energies E_n in and of themselves?
Or do they simply give me the n values, i.e. n = 1, 2, 3, that I would then plug in to the equation
Or is it both? Do they both yield the same answer? (I am still waiting on the installation of the computer program to use to find the eigenvalues)
Finally, I am completely at a loss as to how to go on to find the probabilities of the energies. I am not given a traditional wavefunction to normalize, so how do I find the probability without the normalization constant?
EDIT: I normally don't like to put problem specifics from my homework on here, but I suppose it's hard to understand what I mean by "I am not given a traditional wave function." As such, the exact problem is stated:
Models describing electrons on a crystal lattice are very important to
understanding various phenomena in solids. Here we consider a model in
which an electron lives on a one-dimensional lattice of N sites. The
sites are labeled by i=1,2, ....,N. The system looks like
o----o----o-- .... --o----o
1 2 3 N-1 N
The state of the electron is then a vector of dimension N. The
Hamiltonian is given by an N by N matrix whose elements are:
/ - 1, if i and j are near-neighbors;
H_{ij} = |
\ 0 , otherwise. | {
"domain": "physics.stackexchange",
"id": 9997,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, hamiltonian",
"url": null
} |
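On the first question in the excerpt above: yes, the eigenvalues of the Hamiltonian matrix are themselves the possible energies, and the probability of measuring $E_n$ in a normalized state $|\psi\rangle$ is $|\langle E_n|\psi\rangle|^2$, with no further normalization needed. A sketch of mine for $N=8$ (the closed-form spectrum $E_k=-2\cos\frac{k\pi}{N+1}$ is the standard result for this tridiagonal matrix):

```python
import numpy as np

N = 8
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0   # H_ij = -1 for near neighbours, 0 otherwise

E, V = np.linalg.eigh(H)               # eigenvalues = possible energies E_n

# Closed-form spectrum of this tridiagonal matrix: E_k = -2 cos(k*pi/(N+1))
E_exact = np.sort(-2 * np.cos(np.arange(1, N + 1) * np.pi / (N + 1)))
print(np.allclose(E, E_exact))         # True

# Probabilities for an electron initially localized on site 1:
psi = np.zeros(N)
psi[0] = 1.0
probs = np.abs(V.conj().T @ psi) ** 2  # |<E_n|psi>|^2
print(np.isclose(probs.sum(), 1.0))    # True
```

Here the columns of `V` are the energy eigenstates, so `probs[n]` is the probability of obtaining `E[n]` when measuring the energy of the localized state.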
javascript, performance, html, css, animation
function repositionFireworks() {
var fireworks = document.getElementsByClassName('fireworkContainer');
//var colors = ['#FFC47A','#FF312D','#5CC1FF','#FF9137','#FFCE1E'];
var colors = ['#001EFF', '#DE0013', '#E2BC00', '#6600FF', '#78DD00', '#E06CBE'];
for (var i = 0; i != fireworks.length; i++) {
var fireworkColor = colors[Math.floor(Math.random() * colors.length)];
fireworks[i].style.opacity = '1';
fireworks[i].lastChild.removeAttribute('style');
var fireworkRays = fireworks[i].getElementsByClassName('fireworkRay');
for (var j = 0; j != fireworkRays.length; j++) {
fireworkRays[j].style.width = '0px';
fireworkRays[j].style.height = '0px';
fireworkRays[j].style.backgroundColor = fireworkColor;
}
fireworks[i].style.backgroundColor = fireworkColor;
fireworks[i].style.left = Math.floor(Math.random() * ((window.innerWidth - fireworks[i].offsetWidth) - fireworks[i].offsetWidth + 1)) + fireworks[i].offsetWidth + 'px';
fireworks[i].style.bottom = '0';
}
for (var i = 0; i != fireworkTimers.length; i++) {
window.clearTimeout(fireworkTimers[i]);
}
fireworkTiming();
} | {
"domain": "codereview.stackexchange",
"id": 17036,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, performance, html, css, animation",
"url": null
} |
We can write $p_n(x)$ as $$p_{n}(x) = n x - (1+2+3+\cdots + n) = nx - \frac{n(n+1)}{2}$$
When we do long division, part of the algorithm is to match the leading coefficient, so here we want to change the leading coefficient of $p_{n-1}$ to match that of $p_n(x)$.
Thus you need to compute $\frac{n}{n-1} \cdot p_{n-1}(x)$, which will have a leading term of $nx$ that matches $p_n(x)$.
After you match leading terms, you subtract $\frac{n}{n-1} \cdot p_{n-1}(x)$ from $p_n(x)$. The result is a constant. This is your remainder.
• I don't understand what you mean by "matching the lead coefficient", why do you multiply by $n/(n-1)$? – Felipe ZC May 16 '15 at 23:34
• You should write out $p_{n-1}$ and try what I suggested. When you subtract the result from $p_n(x)$ it will cancel with $nx$. – Joel May 16 '15 at 23:40
• I am just describing the long division algorithm in words here. – Joel May 16 '15 at 23:40
Hint: Simplify $p_n(x)$: $$p_n(x) = (x-1)+(x-2)+\cdots+(x-n) = nx-{n(n+1)\over2}$$ and $$p_{n-1} = (n-1)x-{(n-1)n\over2}$$
• How would you divide these two expressions? – Felipe ZC May 16 '15 at 23:34 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9706877675527112,
"lm_q1q2_score": 0.8130674483308952,
"lm_q2_score": 0.837619959279793,
"openwebmath_perplexity": 282.9385856257941,
"openwebmath_score": 0.659237802028656,
"tags": null,
"url": "https://math.stackexchange.com/questions/1285565/how-to-find-the-remainder-of-polynomial-division"
} |
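Carrying out the subtraction described in the answers above gives the quotient and remainder explicitly:

$$p_n(x)-\frac{n}{n-1}\,p_{n-1}(x)=\left(nx-\frac{n(n+1)}{2}\right)-\left(nx-\frac{n^2}{2}\right)=-\frac{n}{2},$$

so $p_n(x)=\frac{n}{n-1}\,p_{n-1}(x)-\frac{n}{2}$: dividing $p_n$ by $p_{n-1}$ leaves the constant remainder $-\frac{n}{2}$.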
quantum-field-theory, special-relativity, inertial-frames, stress-energy-momentum-tensor
Title: Show Energy-momentum operator transforms as a tensor under Lorentz transformations I know, from my professor notes, that a general field operator can transform under Lorentz as
$$U(\Lambda)\hat{\mathcal{O}}^r(x)U^\dagger(\Lambda)=\hat{\cal{O}}^{r'} {M(\Lambda)_{r'}}^r$$ where $M(\Lambda_1\Lambda_2)=M(\Lambda_1)M(\Lambda_2)$. Where does this relation come from?
Then I have to show that
$$U(\Lambda)\hat{\tau}_{\mu\nu}U^\dagger(\Lambda)=\hat{\tau}_{\mu'\nu'} {\Lambda^{\mu'}}_{\mu}{\Lambda^{\nu'}}_\nu$$
namely, this transforms as a tensor.
I started at the infinitesimal level where
$${\Lambda^\mu}_\nu=\delta^\mu_\nu-\omega^\mu_\nu$$ and $$U(\Lambda)=\mathbf{1}-\frac{1}{2}i\omega_{\mu\nu} \hat{M}^{\mu\nu}$$
I don't know how to proceed with this calculation. I tried substituting these two relation for the infinitesimal level but I can't reach any good result. In regards to your first question, that is, in fact, a definition. To appreciate it, let's recall what a representation is. Let $G$ be some group. A representation of $G$ is a pair $(V,D)$ where $V$ is some vector space and where $D:{G}\to{\rm GL}(V)$ is a map associating to every group element $g$ one linear transformation $D(g)$ satisfying the condition: $$D(g_1)D(g_2)=D(g_1g_2).$$ | {
"domain": "physics.stackexchange",
"id": 87490,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, special-relativity, inertial-frames, stress-energy-momentum-tensor",
"url": null
} |
• How did you get $174$? The maximum I was able to get was $171$. – Purak Jun 29 '15 at 12:48
• @PuRaK: grin Try again! I assure you it is possible. 173 is also possible. – user21820 Jun 29 '15 at 12:48
• Ha, finally got it. So f(1)=1 , f(2)=6 , f(3)=21 , f(4)=63 , f(5)=174 as of now? – Purak Jun 29 '15 at 13:46
• @PuRaK: Congratulations! Now can you prove that it is optimal? =D – user21820 Jun 29 '15 at 13:58
• $174$ is in fact optimal for $n=5$. There are essentially $9!$ possible plays of the game, but you can save a factor of two by exploiting both the left-right symmetry of the initial setup, and another factor of two since the order of the last two moves doesn't matter. This leaves $9!/4=90720$ possibilities to try, small enough to check directly. Up to the symmetries, there's only one way to achieve $174$. – Tad Jul 11 '15 at 16:58
I'm leaving the intermediate steps here so you can see how the solution developed; to summarize: The result is a hypothesis for a complete solution based on strong numerical evidence but so far without an idea how to prove its optimality. For all $n$, the solution consists of starting at some point up to $3$ slots away from (say, to the right of) the centre (exactly $3$ slots for $n\ge16$), then going outward, always alternating left and right except for sometimes going left twice in a row. The counts of alternating runs before each double left depend subtly on $n$, but they stabilize one after the other, until from $n=216$ on only the last run count changes, with the other $5$ given by $18,17,35,69,139$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363485313248,
"lm_q1q2_score": 0.8202716974398752,
"lm_q2_score": 0.8333246035907933,
"openwebmath_perplexity": 1230.0420260421427,
"openwebmath_score": 0.7285186648368835,
"tags": null,
"url": "https://math.stackexchange.com/questions/1341929/sum-numbers-game"
} |
programming-challenge, scala
def getNumberAsWord(num: String): Option[String] = {
@tailrec
def go(numbers: String)(acc: Option[String]): Option[String] = numbers.toList match {
case Nil => acc
case a :: b :: c :: d :: Nil => {
val rest: String = b.toString + c.toString + d.toString
val thousandWord = ^(convertSingleDigitOnes(a), Some(" thousand"))(_ ++ _)
val newAcc = ^(acc, thousandWord)(_ + _)
go(rest)(newAcc)
}
case b :: c :: d :: Nil => {
val rest: String = c.toString + d.toString
val hundredWord = convertSingleDigitHundred(b)
val newAcc = ^(acc, hundredWord)(_ + _)
go(rest)(newAcc)
}
case '0' :: '0' :: Nil => acc
case c :: d :: Nil => acc match {
case Some("") => {
val twoDigitsWord = convertTwoDigits(c)(d)
val newAcc = ^(acc, twoDigitsWord)(_ + _)
go("")(newAcc)
}
case Some(_: String) => {
val twoDigitsWord: Option[String] = convertTwoDigits(c)(d)
val addingAnd: Option[String] = ^(Some(" and "), twoDigitsWord)(_ + _)
val newAcc = ^(acc, addingAnd)(_ + _)
go("")(newAcc)
}
case None => None
}
case d :: Nil => convertSingleDigitOnes(d)
case _ => None
}
go(num)(Some(""))
} | {
"domain": "codereview.stackexchange",
"id": 9777,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming-challenge, scala",
"url": null
} |
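For comparison, the same digit-by-digit recursion can be sketched in plain Python (a hedged rendering of the Scala structure above for 1-4 digit inputs; the lookup tables and helper names are my own):

```python
ONES = ["zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty",
        "fifty", "sixty", "seventy", "eighty", "ninety"]

def two_digits(c, d):
    # Words for a two-digit group "cd"; empty string for "00"
    if c == '0':
        return ONES[int(d)] if d != '0' else ""
    if c == '1':
        return TEENS[int(d)]
    return TENS[int(c)] + (" " + ONES[int(d)] if d != '0' else "")

def number_to_words(num):
    # None plays the role of the Scala Option's None for bad input
    if not num.isdigit() or not 1 <= len(num) <= 4:
        return None
    parts = []
    if len(num) == 4:
        if num[0] != '0':
            parts.append(ONES[int(num[0])] + " thousand")
        num = num[1:]
    if len(num) == 3:
        if num[0] != '0':
            parts.append(ONES[int(num[0])] + " hundred")
        num = num[1:]
    if len(num) == 2:
        tail = two_digits(num[0], num[1])
        if tail:
            # "and" only appears after a thousand/hundred part
            parts.append("and " + tail if parts else tail)
        num = ""
    if len(num) == 1:
        parts.append(ONES[int(num)])
    return " ".join(parts) if parts else None

assert number_to_words("1234") == "one thousand two hundred and thirty four"
```

This mirrors the Scala version's split into thousand/hundred/two-digit cases, including the "and" that is only inserted when a higher-order part was already emitted.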
python, python-3.x, web-scraping
return {
'Content-length' : '0',
'Content-type' : 'application/json',
'Authorization' : 'Basic ' + signature.decode('utf-8')
}
SIGNATURE = create_signature(USERNAME, PASSWORD)
HEADERS = create_headers(SIGNATURE)
Extracting the error handling for the requests seems to be very easy:
def get(url, **params):
"""Get a URL with authentication, error handling and optional parameters."""
try:
response = requests.get(url, headers=auth.HEADERS, params=params)
response.raise_for_status()
except requests.ConnectionError:
logging.critical("No Internet connection")
return None
except requests.HTTPError:
        logging.warning("An HTTP error occurred.")
return None
return response
Which you can then use like this:
def get_events_odds(events: Events) -> Optional[Odds]:
"""Gets odds for the given events."""
params: RequestParams = {
'sportId': SOCCER_SPORT_ID,
'oddsFormat': ODDS_FORMAT,
'eventIds': ','.join([str(event) for event in events])
}
response = get(GET_ODDS_URL, **params)
if response is not None:
logging.info("Events odds have been retrieved.")
return response.json()
A couple of additional notes:
Use response.raise_for_status() to also catch HTTP errors such as 500: Internal Server Error, or 404: Page not Found. You will probably want to add logging for that as well (which I was too lazy to do in the code above).
Docstrings are usually denoted with triple quotes """docstring""", even if only one line long.
Union[Odds, None] is the same as Optional[Odds], which is slightly more descriptive, IMO.
You should use a requests.Session to make consecutive requests to the same server faster.
The authentication method is not very secure, although at least the password is not transmitted unencrypted due to the API using https. | {
"domain": "codereview.stackexchange",
"id": 38963,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, web-scraping",
"url": null
} |
1985-Spring-CM-U-1. Development of the wave equation! For each case we will look at: → general behaviour. Coupled harmonic oscillators – masses/springs, coupled pendula, RLC circuits. Consider the system of Fig. 4.1 that consists of three identical masses which slide over a frictionless horizontal surface, and are connected by identical light horizontal springs of spring constant. To solve for the motion of the masses using the normal formalism, equate forces (1). Coupled system of 3 masses and 4 springs between two rigid walls. If one of the masses is held at rest at its equilibrium position while the other mass is displaced and then released from rest, the resulting motion is coupled motion. The two outer springs each have force constant k, and the inner spring has force constant. We will study coupled oscillations of a linear chain of identical non-interacting bodies connected to each other. 2) Case of three masses: Let us consider the case of three masses. Each one of these masses has two springs attached to it; this explains the eigenvalue 2. The masses attached to the pendulums were recorded. 02 Coupled Oscillators. Consider only small displacements from equilibrium. 
Suppose that the masses are coupled to their immediate neighbors via identical light springs of unstreched length , and force constant. Coupled Oscillator Dynamics (COD): The coupled harmonic oscillator dynamics governs the system when mass one is not against the stop. Attach the masses to | {
"domain": "chievoveronavalpo.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347905312774,
"lm_q1q2_score": 0.8098947037791748,
"lm_q2_score": 0.8311430499496096,
"openwebmath_perplexity": 1713.5631553741566,
"openwebmath_score": 0.3487705886363983,
"tags": null,
"url": "http://chievoveronavalpo.it/mensaje-a-txox/three-spring-coupled-masses.html"
} |
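The three-mass, four-spring chain mentioned in this excerpt has a standard normal-mode structure: in units of $k/m$ the dynamical matrix has $2$ on the diagonal (each mass touches two springs, matching the "eigenvalue 2" remark) and $-1$ off the diagonal, with eigenvalues $2-\sqrt2,\ 2,\ 2+\sqrt2$. A quick check in plain Python (the matrix is written down by me from the standard fixed-end chain):

```python
import math

# Dynamical matrix (in units of k/m) for three equal masses between two
# rigid walls, coupled by identical springs: x_i'' = -(K x)_i
K = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Claimed normal modes of the fixed-end chain
modes = [
    (2 - math.sqrt(2), [1,  math.sqrt(2), 1]),   # all masses in phase
    (2,                [1,  0,          -1]),    # middle mass at rest
    (2 + math.sqrt(2), [1, -math.sqrt(2), 1]),   # alternating motion
]

for lam, v in modes:
    Kv = matvec(K, v)
    assert all(abs(Kv[i] - lam * v[i]) < 1e-12 for i in range(3))
```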
and a corresponding eigenvalue $\lambda = a - bi$ with eigenvector $\mathbf v$. Let $P=\begin{bmatrix}\operatorname{Re}\mathbf v & \operatorname{Im}\mathbf v\end{bmatrix}$. By the theorem, $A = P\begin{bmatrix}a & -b\\ b & a\end{bmatrix}P^{-1}$. The matrix $C=\begin{bmatrix}a & -b\\ b & a\end{bmatrix}$ can be written as $r\begin{bmatrix}\cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{bmatrix}$, where $r=\sqrt{a^2+b^2}$. Thus $C$ represents a counterclockwise rotation about the origin $(0,0)$ through the angle $\varphi$, followed by a rescaling by the factor $r$. Thus, the criterion in this case is that $X$ and $Y$ have non-positive determinant. In Section 5.4, we saw that a matrix whose characteristic polynomial has distinct real roots is diagonalizable: it is similar to a diagonal matrix, which is much simpler to analyze. In this section, we study matrices whose characteristic polynomial has complex roots. Let . Also, note that, because $X$ and $Y$ commute, each preserves the generalized eigenspaces of the other. An eigenvalue represents the amount of expansion in the corresponding dimension. If the $2\times 2$ matrix $A$ has distinct real eigenvalues $\lambda_1$ and $\lambda_2$, with corresponding eigenvectors $\vec v_1$ and $\vec v_2$, then the system $\vec x\,'(t)=A\vec x(t)$ has general solution predicted by the eigenvalue-eigenvector method of $c_1e^{\lambda_1 t}\vec v_1 + c_2e^{\lambda_2 t}\vec v_2$, where the constants $c_1$ and $c_2$ can be determined from the initial values. A complex-valued square matrix A is normal ... As a special case, for every n × n real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. The diagonalizing matrix can be chosen with orthonormal columns when $A = A^H$. In case A is real and symmetric, its eigenvalues are real by property. Some things to remember about eigenvalues: •Eigenvalues can have zero value •Eigenvalues can be negative •Eigenvalues can be real or complex numbers •An $n\times n$ real matrix can have complex eigenvalues •The eigenvalues of an $n\times n$ matrix are not | {
"domain": "thekandybshow.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9678992932829917,
"lm_q1q2_score": 0.8493091516215903,
"lm_q2_score": 0.8774767762675405,
"openwebmath_perplexity": 326.5004848723243,
"openwebmath_score": 0.852203905582428,
"tags": null,
"url": "https://thekandybshow.com/326qg/can-a-real-matrix-have-complex-eigenvectors-a89235"
} |
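The rotation-scaling factorization described in this excerpt can be checked on a concrete example. Take $A=\begin{pmatrix}2&-5\\1&-2\end{pmatrix}$, which has eigenvalues $\pm i$; for $\lambda=-i$ (so $a=0$, $b=1$) an eigenvector is $v=(2-i,\;1)$, giving $P=[\operatorname{Re}v\ \ \operatorname{Im}v]$. A sketch in Python (all numbers hand-picked by me):

```python
# Check A = P C P^{-1} with P = [Re v, Im v] and C = [[a,-b],[b,a]],
# for the eigenvalue lambda = a - b*i = -i and eigenvector v = (2 - i, 1).
A = [[2, -5], [1, -2]]
P = [[2, -1], [1, 0]]        # columns: Re v = (2,1), Im v = (-1,0)
a, b = 0, 1
C = [[a, -b], [b, a]]        # rotation-scaling matrix

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

assert mul(mul(P, C), inv2(P)) == A   # the factorization holds exactly
```

Here $r=\sqrt{0^2+1^2}=1$, so $A$ acts as a pure rotation in the coordinates given by the columns of $P$.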
circuit-construction, quantum-circuit, hamiltonian-simulation, hamiltonian, trotterization
and accordingly we can trotterize this by doing a little bit of $H_1$, then a little bit of $H_2$, ... then a little bit of $H_m$, and then iterating $t/\delta t$ times.
However, many interesting "made up" Hamiltonians may not be written locally. Nonetheless, there are some more complicated ways to address certain sparse Hamiltonians - these generalize the class of local Hamiltonians to those in which the vast majority of entries in the Hermitian matrix $H$ are zero, even though $H$ itself could be highly non-local. The sparse Hamiltonian is transformed and reduced to a local Hamiltonian, which can then be simulated/trotterized.
So, the short answer to your question "is it possible to [efficiently] implement any random Hamiltonian or made-up Hamiltonians using quantum circuits" is, if the random Hamiltonian is local enough or sparse enough, then yes, we can efficiently simulate the Hamiltonian. If, however, the Hamiltonian is neither local nor sparse, then there might not be an efficient simulation. | {
"domain": "quantumcomputing.stackexchange",
"id": 4489,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "circuit-construction, quantum-circuit, hamiltonian-simulation, hamiltonian, trotterization",
"url": null
} |
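The Trotter splitting described above, approximating $e^{(A+B)t}$ by $(e^{At/n}e^{Bt/n})^n$, can be illustrated numerically on two small non-commuting Hermitian matrices. A self-contained sketch (Pauli matrices and parameters chosen by me; the first-order error should shrink as $n$ grows):

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M, terms=40):
    # Taylor series for the 2x2 matrix exponential (fine for small norms)
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mul(power, M)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

def scale(M, s):
    return [[M[i][j] * s for j in range(2)] for i in range(2)]

A = [[0.0, 1.0], [1.0, 0.0]]    # sigma_x
B = [[1.0, 0.0], [0.0, -1.0]]   # sigma_z; A and B do not commute
t = 1.0

exact = expm([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])

def trotter(n):
    step = mul(expm(scale(A, t / n)), expm(scale(B, t / n)))
    U = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        U = mul(U, step)
    return U

def err(n):
    U = trotter(n)
    return max(abs(U[i][j] - exact[i][j]) for i in range(2) for j in range(2))

assert err(64) < err(8) < err(1)   # finer slicing, better approximation
```

The real exponentials stand in for the unitaries $e^{-iH_kt}$; the convergence behaviour of the splitting is the same.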
Treat $F(x,y)$ as an element of $\Bbb Z_p[[x]][[y]]$, so write it as $$F(x,y)=x +\sum_{m\ge 1}f_m(x)y^m\,.$$ The aim is to show that each $f_m$ is in $\Bbb Z_p\{\{x\}\}$, not just in $\Bbb Z_p[[x]]$. The argument is by induction, starting with $f_1$, which we already know to be $1/L'(x)$, so convergent. We write out the fundamental property of the logarithm: $$L\bigl(F(x,y)\bigr)=L(x)+L(y)\,,$$ and arrange the pieces differently: $$0=\sum_{N\ge0}\Bigl[F(x,y)^{p^{Nh}} - y^{p^{Nh}}\Bigr]\Big/p^N-L(x)\,.$$ In the above, we want to look at the total coefficient-function of $y^s$, knowing inductively that all $f_m(x)$ for $m<s$ are in $\Bbb Z_p\{\{x\}\}$. In this, we’re not interested in the participation of any monomial with $y$-degree greater than $s$, so we may truncate, and again rearrange: $$-(x+\sum_{m=1}^sf_m(x)y^m)\equiv \sum_{N\ge1}\Bigl[(x+\sum_{m=1}^s f_my^m)^{p^{Nh}} - y^{p^{Nh}}\Bigr] - L(x)\pmod{y^{s+1}}\,.$$ Now, when you look at the occurrence of $y^s$ for each piece with $N\ge1$, there’s | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9790357567351324,
"lm_q1q2_score": 0.8004346459875734,
"lm_q2_score": 0.8175744761936437,
"openwebmath_perplexity": 173.26691632764843,
"openwebmath_score": 0.8940473198890686,
"tags": null,
"url": "https://mathoverflow.net/questions/195328/formal-group-law-over-mathbbf-p/196233"
} |
machine-learning, logistic-regression, c, weight-initialization
Title: C++ return array from function I would like to implement a machine learning algorithm in C++ without using any C++ machine learning library. So I'm writing this initializer function for generating zero matrices but can't figure out how I can accomplish this. I'm actually trying to write C++ code for simple logistic regression for now.
float * intializer_zero(int dimension){
// z = wx + b.
float b = 0;
float w[dimension]= { };
return w,b;
}
It's throwing error "cannot convert 'float' to 'float' in return."
How can I write this initializer function in C++? You can use vector from the Standard library to store your matrix in a variable-size way.
#include<vector>
Then you define your function to initialize it to 0
void fill_zero( std::vector<std::vector<float>> &matrix, int row, int column)
{
for(int i = 0; i<row; ++i)
{
std::vector<float> vector(column,0);
matrix.push_back(vector);
}
}
This fills a row x column matrix with 0. As the matrix is passed by reference (&matrix) you need a C++11 compiler (I think). Same for the 'auto' keyword.
Then you just need to create the matrix before calling the function:
std::vector<std::vector<float>> matrix;
fill_zero(matrix, 3, 4); // e.g. fill a 3x4 matrix with zeroes
//this is just for printing the matrix to see if it matches your requirement!
for(auto vector : matrix)
{
for(auto value : vector)
{
std::cout<<value<<" ";
}
std::cout<<std::endl;
}
I didn't test this on my own compiler, but it works here:
C++ Shell compiler with the code above, if you want to test it! (it fills the matrix with ones instead of zeroes, just to be sure that the compiler doesn't initialize the vector's values to 0 by default. That way I am sure that the function works as we want it to)
"domain": "datascience.stackexchange",
"id": 3964,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, logistic-regression, c, weight-initialization",
"url": null
} |
c#, extension-methods, immutability, expression-trees
I'm not a fan of long ternary operators, but if you prefer that you can always switch it. I don't really like KeyValuePair<,> either, but it makes sense to keep things coupled, and adding a separate class seems somewhat redundant as it will be used in only one place.
This is the cleaner approach, but if you really want to squeeze a bit more performance you can avoid the .ToArray() call by creating an object[] instead of IEnumerable<object>:
public static TSource With<TSource, TProperty>(this TSource source,
Expression<Func<TSource, TProperty>> selector, TProperty value)
{
var constructor = typeof(TSource).GetConstructors().Single();
return (TSource) constructor.Invoke(GenerateParameters(source, constructor,
new KeyValuePair<string, object>(selector.Target(), value)));
}
private static object[] GenerateParameters<TSource>(
TSource source, ConstructorInfo ctor,
KeyValuePair<string, object> parameterToModify)
{
var type = typeof(TSource);
var ctorParameters = ctor.GetParameters();
var parameters = new object[ctorParameters.Length];
for (int i = 0; i < parameters.Length; i++)
{
var ctorParameter = ctorParameters[i];
if (string.Equals(ctorParameter.Name, parameterToModify.Key, StringComparison.OrdinalIgnoreCase))
{
parameters[i] = parameterToModify.Value;
}
else
{
parameters[i] = type.GetProperty(ctorParameter.Name,
BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance)?.GetValue(source);
}
}
return parameters;
}
It's a bit longer, but it's not necessarily less readable, you can pick whatever suits you better.
P.S
Perhaps some caching mechanism would be useful, to improve performance of multiple sequential calls. | {
"domain": "codereview.stackexchange",
"id": 30479,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, extension-methods, immutability, expression-trees",
"url": null
} |
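For comparison, the "copy an immutable object with one constructor parameter changed" pattern that the C# extension implements via reflection is built into Python's dataclasses module; a sketch (class and field names invented):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Person:
    name: str
    age: int

p1 = Person("Ada", 36)
p2 = replace(p1, age=37)   # builds a new instance via the constructor

assert p1 == Person("Ada", 36)   # original is untouched
assert p2 == Person("Ada", 37)
```

`dataclasses.replace` does essentially what the `With` extension does: it matches keyword arguments against the generated constructor and copies the remaining fields from the source instance.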
entanglement, quantum-operation, experimental-realization, entanglement-witness, ppt-criterion
Title: How are witness operators physically implemented? Let's take an example of an entanglement witness of the form $W = | \phi \rangle \langle \phi | ^{T_2}$ where $ | \phi \rangle $ is some pure entangled state.
If I wanted to test some state $\rho$, I would have to perform $\mathrm{Tr}(W \rho)$. I assume this is done by measuring $\rho$ multiple times in the eigenbasis of $W$ and finding the expected eigenvalue, and that would be the solution to $\mathrm{Tr}(W \rho)$. | {
"domain": "quantumcomputing.stackexchange",
"id": 745,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "entanglement, quantum-operation, experimental-realization, entanglement-witness, ppt-criterion",
"url": null
} |
c, file, posix
int read_conf(const char *str, struct Conf conf)
{
int Dir;
Beware of reading from file directly into a structure like this - any small change to structure layout (e.g. using a different compiler) will cause data corruption.
This line is hard to read, with its mix of conditionals and side-effects:
if ((mkdir(str, 0755) && !(exist = errno == EEXIST)) || (Dir = open(str, O_DIRECTORY)) == -1) {
I recommend encapsulating this in a function that is similar to open(str, O_CREAT) (but creating a directory rather than a plain file). Even if it's only used once, it makes the code much clearer to understand.
In any case, we need to close Dir once we're finished with it; as written, we're leaking the descriptor.
The if (exist) test seems pointless, as if the directory is newly-created, openat() will quickly fail with ENOENT. Saving a single system-call is unlikely to provide a measurable benefit, so go with the simpler code.
It's usually simpler to deal with error cases first - if we can return early, that reduces indentation in subsequent code, and therefore reduces cognitive load.
There's some more confusing code:
if (r < sizeof(struct Conf))
r != -1 ? (void)memset(conf, 0, r) : perror(".Conf");
Much clearer if we unwrap that:
if (r < 0) {
perror(".Conf");
goto err;
}
const size_t ur = (size_t)r;
if (ur < sizeof conf) {
memset(&conf, 0, ur);
goto err;
}
⋮
int retval;
err:
retval = errno;
close(fd);
return retval; | {
"domain": "codereview.stackexchange",
"id": 43924,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, file, posix",
"url": null
} |
noise, sound
Title: What are possible reasons a sound may be continually clicking (low amplitude, no sudden changes in amplitude) I am making a digital sound mixer and get this wav file (200kb, Wetransfer)
I looked through the waveform using Audacity. It uses 20% of its maximum amplitude and I see no sudden changes in the waveform.
Yet it repeatedly clicks.
It is composed of (not maxed) sine waves, first 1, later 4. But the waveform looks like I avoided constructive interference (by reducing volume).
Interestingly, when combined with other sounds (music, ambience), the clicking is almost removed. There is clearly something wrong, but it's not easy to tell what it is. In order to analyze it, I ran the signal through a bandstop filter, which should ideally eliminate the entire signal.
It can be clearly seen that there are strong spikes in the band-stopped signal, which are a strong indicator of out-of-band energy, i.e. they happen where the clicks are. So the location of the clicks can easily be identified this way, but there isn't anything obviously wrong with the waveform, so further analysis is required.
Without more details on how exactly the signal is generated, this is difficult to do. You would probably have to manually build the individual sine waves and subtract them out, which is rather tedious to do. | {
"domain": "dsp.stackexchange",
"id": 11963,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "noise, sound",
"url": null
} |
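The band-stop trick in the answer can be mimicked with something even cruder: a first difference acts as a rough high-pass filter, so a click (out-of-band energy) produces a spike even when the waveform's amplitude looks tame. A self-contained sketch with a synthetic one-sample glitch (all parameters invented):

```python
import math

sr, f = 8000, 440.0
n = 2000
x = [math.sin(2 * math.pi * f * i / sr) for i in range(n)]
x[500] += 1.0                      # inject a one-sample glitch ("click")

d = [abs(x[i] - x[i - 1]) for i in range(1, n)]   # crude high-pass

# The largest sample-to-sample jump of a clean 440 Hz sine at 8 kHz is
# about 2*pi*440/8000 ~ 0.35, so the glitch dominates the difference
# signal at its location even though the waveform never clips.
peak = max(range(len(d)), key=lambda i: d[i])
assert peak + 1 in (500, 501)
```

A real band-stop filter, as used in the answer, does the same job with far better selectivity, but the principle is identical: clicks live in the residue once the expected sinusoids are removed.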
we simply assume that (a, b) and (b, a) are in the relation, and then we show that a = b. Consider the ≥ relation. As it turns out, the relation 'is divisible by' on the integers is an antisymmetric relation. Partial and total orders are antisymmetric by definition. That is, if a and b are integers, and a is divisible by b and b is divisible by a, it must be the case that a = b. Well-founded if for every set which meets the field of , whose preimage under does not meet .
The number of students in the class is divisible by the number of cookies. Clarifying the definition of antisymmetry (binary relation properties). If we write it out it becomes: Dividing both sides by b gives that 1 = nm. Difference Between Asymmetric & Antisymmetric Relation. Question 1: Which of the following are antisymmetric? Definition 1: A relation R over | {
"domain": "apcremations.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.952574129515172,
"lm_q1q2_score": 0.8019013369143421,
"lm_q2_score": 0.8418256512199033,
"openwebmath_perplexity": 551.4485506636166,
"openwebmath_score": 0.6694983839988708,
"tags": null,
"url": "http://apcremations.com/rivals-for-nzwtjx/eec3c3-antisymmetric-relation-definition"
} |
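The divisibility claim is easy to spot-check exhaustively. Note that over all nonzero integers it actually fails ($3$ divides $-3$ and $-3$ divides $3$, but $3\neq-3$), so the check below restricts to positive integers, where antisymmetry does hold:

```python
def divides(a, b):
    # "a divides b": b is a multiple of a
    return b % a == 0

# Antisymmetry on the positive integers: a|b and b|a force a == b
N = 200
for a in range(1, N + 1):
    for b in range(1, N + 1):
        if divides(a, b) and divides(b, a):
            assert a == b

# Over all nonzero integers the property fails:
assert divides(3, -3) and divides(-3, 3) and 3 != -3
```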
c#, wpf, active-directory
internal string Path { get; set; }
internal string[] Split
{
get
{
var namesOnly = Path.Replace(LdapProtocol, string.Empty)
.Replace(OrganizationalUnitPrefix, string.Empty);
var firstDomainComponentIndex = namesOnly.IndexOf(
DomainComponentPrefix, StringComparison.Ordinal);
namesOnly = namesOnly.Substring(
StringStartIndex, firstDomainComponentIndex);
var split = namesOnly.Split(Comma);
Array.Reverse(split);
return split;
}
}
}
}
I have omitted several classes that provide only extension methods, as I am out of room. Only scratching the surface...
ActiveDirectoryScope
The AddDirectoryScope() method can be improved by storing the value of the organizationalUnit.Split property (which by the way reads more like a method) into a variable.
Right now you are accessing that property at least four times and if an ActiveDirectoryScope with the level isn't contained in parent.Children it will be called one more time.
So changing to
internal void AddDirectoryScope(OrganizationalUnit organizationalUnit)
{
string[] levels = organizationalUnit.Split;
if (levels == null ||
    levels.Length < 1)
{
throw new ArgumentException(
"The organizational units array is null or empty!");
}
var parent = this;
var lastLevel = levels.Length - 1;
foreach (var level in levels)
{
if (parent.Children.Contains(new ActiveDirectoryScope
{
Name = level
}))
{
parent = parent.Children.Find(
item => item.Name.Equals(level));
}
else if (level == levels[lastLevel])
{
parent.Children.Add(new ActiveDirectoryScope
{
Name = level,
Path = organizationalUnit.Path
});
}
}
} | {
"domain": "codereview.stackexchange",
"id": 20538,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, wpf, active-directory",
"url": null
} |
• With conditional probability $\frac{1}{2+1}=\frac{1}{3}$ person B finishes the group task before A finishes the individual task and person A takes an additional expected $\frac{1}{2}$ hours to finish, with an overall probability $\frac{3}{7}\times \frac{1}{3}=\frac{1}{7}$ and an overall expected time of $\frac{2}{7}+\frac{1}{3}+\frac{1}{2}=\frac{47}{42}$ hours
• With conditional probability $\frac{2}{2+1}=\frac{2}{3}$ A finishes the individual task before person B finishes the group task and together they take an additional expected $\frac{1}{2}$ hours to finish the group task, with an overall probability $\frac{3}{7}\times \frac{2}{3}=\frac{2}{7}$ and an overall expected time of $\frac{2}{7}+\frac{1}{3}+\frac{1}{2}=\frac{47}{42}$ hours
As a check, the overall probabilities add up to $1=\frac{8}{35}+\frac{12}{35}+\frac{1}{7}+\frac{2}{7}$.
So the overall expected time is $\frac{8}{35}\times\frac{142}{105} +\frac{12}{35}\times \frac{83}{70}+\frac{1}{7}\times \frac{47}{42}+\frac{2}{7}\times \frac{47}{42} = \frac{251}{210}$ hours. This is about $71$ minutes and $43$ seconds, slightly more than your result. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9865717460476701,
"lm_q1q2_score": 0.8133908379999517,
"lm_q2_score": 0.8244619220634457,
"openwebmath_perplexity": 341.23228260546483,
"openwebmath_score": 0.7568016052246094,
"tags": null,
"url": "https://math.stackexchange.com/questions/1187929/exponential-distribution-and-expected-times"
} |
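The closing arithmetic in this excerpt can be verified exactly with Python's fractions module:

```python
from fractions import Fraction as F

# Branch probabilities and conditional expected times quoted above
probs = [F(8, 35), F(12, 35), F(1, 7), F(2, 7)]
times = [F(142, 105), F(83, 70), F(47, 42), F(47, 42)]

assert sum(probs) == 1                       # the cases are exhaustive

expected = sum(p * t for p, t in zip(probs, times))
assert expected == F(251, 210)               # overall expected time in hours

minutes = expected * 60                      # about 71 minutes 43 seconds
assert 71 < minutes < 72
```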
ros, c++, node, shutdown
Title: Properly Bring Down Active Node?
I'm currently using ros::shutdown() to bring down a node that I need to kill in C++. I'm fairly certain this is bad, as the node exits ungracefully and does not seem to release all of its resources. Is there a better method?
Originally posted by davelkan on ROS Answers with karma: 71 on 2015-11-19
Post score: 0
Likely what you want to do is create a custom SIGINT handler that releases whatever resources are not properly being released. This function will then get called whenever your node receives the SIGINT signal (e.g. when someone presses Ctrl+C) or when you call ros::shutdown(). A simple example is on the roscpp Initialization and Shutdown page. Also check out this question/answer.
Originally posted by jarvisschultz with karma: 9031 on 2015-11-19
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 23030,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, c++, node, shutdown",
"url": null
} |
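The custom SIGINT handler recommended in the answer has a direct analogue in plain Python's signal module; a minimal sketch of the pattern (the handler body is invented; in roscpp you would release your resources and then call ros::shutdown()):

```python
import signal

shutdown_requested = False

def on_sigint(signum, frame):
    # Release sockets, files, hardware handles, etc. here,
    # then let the main loop notice the flag and exit cleanly.
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGINT, on_sigint)

# Simulate the user pressing Ctrl+C (signal.raise_signal needs Python 3.8+)
signal.raise_signal(signal.SIGINT)

assert shutdown_requested  # our handler ran instead of killing the process
```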
adsorption
Title: Is the rate of desorption dependent on pressure? While I was learning about Langmuir's adsorption isotherm in my chemistry class, my teacher talked about the situation in which the rates of adsorption and desorption become equal. However, he mentioned that the rate of adsorption is dependent on pressure but the rate of desorption is not, and I wasn't satisfied with the explanation he gave.
I would imagine that the greater the external pressure on the adsorbent, the lesser will be the tendency of the adsorbate particles to leave the surface. But that isn't what my teacher said. Please explain. Without knowing exactly what you were told it's hard to be exact, but here is a description of the situation.
The Langmuir model makes assumptions (a) adsorption is complete when the surface is filled with one gas molecule per site, (b) all adsorption sites are equivalent (i.e. the same) and (c) adsorption and desorption are separate processes, i.e. one does not depend on the other, i.e. if one site is occupied it does not affect adjacent sites in any way. Note that this is a model and this determines how we look at the problem.
The model is then $$M(g) + S(surface) = MS(surface) $$ where $M$ is the gas molecule and $S$ a surface site, and the rate constant for adsorption is $k_a$ and for desorption $k_d$.
We let $\theta$ be the extent of adsorption, which varies from 0 to 1, i.e. how full the surface is. Then the rate of change of $\theta$ depends on the rate constant for adsorption, the pressure, and the fraction of vacant sites, i.e. $N(1-\theta)$ vacant sites out of $N$ total. Thus for adsorption $$ \frac{d\theta}{dt} = k_aPN(1-\theta) $$
and for desorption $$ \frac{d\theta}{dt} = -k_dN\theta $$ | {
"domain": "chemistry.stackexchange",
"id": 16005,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "adsorption",
"url": null
} |
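Setting the adsorption and desorption rates from the answer equal at equilibrium, $k_aPN(1-\theta)=k_dN\theta$, gives the Langmuir isotherm $\theta=KP/(1+KP)$ with $K=k_a/k_d$: pressure enters only through the adsorption term, which is the point being made. A numeric sketch (rate constants invented):

```python
def theta(P, ka=2.0, kd=5.0):
    # Langmuir coverage from setting adsorption rate = desorption rate
    K = ka / kd
    return K * P / (1 + K * P)

assert theta(0.0) == 0.0                 # empty surface with no gas
assert theta(1.0) < theta(10.0) < 1.0    # coverage grows with pressure...
assert abs(theta(1e9) - 1.0) < 1e-6      # ...and saturates at full coverage

# Equilibrium check: both rates balance at the predicted coverage
P, ka, kd = 3.0, 2.0, 5.0
t = theta(P, ka, kd)
assert abs(ka * P * (1 - t) - kd * t) < 1e-12
```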
javascript, jquery, regex, plugin, scope
Function being called by a sub-plugin which sends data to and receives data from the prototype:
Plugin.Plugins['Link'] = (function () {
var template = {
header: 'Add a link, bro',
body: [
'<div>What the header said</div>',
'<input type="text" value="http://" />'
].join(''),
// closeText: 'NOPE',
confirmText: 'Insert Link'
},
validationErrorMessage = 'You did not enter a valid URL';
return {
'callback': function (input, change) {
var self = this;
this.plugin.openModal(template, function(input){
document.execCommand('createLink', false, input);
}, function(input){
return self.validate(input);
});
},
validate: function(input) {
var isValid = this.validateUrl(input);
if(isValid) {
return {type: 'valid'};
}
return validationErrorMessage;
},
'validateUrl': function (url) {
url = url.replace(/(?:(?:^|\n)\s+|\s+(?:$|\n))/g,'').replace(/\s+/g,' ');
if (url.charAt(0) == "/" ) {
return true;
} else {
var regexp = /(ftp|http|https|gopher|telnet):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?/;
return regexp.test(url);
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 7009,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, regex, plugin, scope",
"url": null
} |
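For reference, the plugin's validation logic ports almost line-for-line to Python's re module; a sketch preserving the behaviour, including the leading-slash shortcut for site-relative links:

```python
import re

# Same scheme/credentials/host/port/path pattern as the JS regex
URL_RE = re.compile(
    r'(ftp|http|https|gopher|telnet)://(\w+:?\w*@)?(\S+)(:[0-9]+)?'
    r'(/|/([\w#!:.?+=&%@!\-/]))?'
)

def validate_url(url):
    # Trim leading/trailing whitespace and collapse internal runs,
    # mirroring the two replace() calls in the JS version
    url = re.sub(r'(?:(?:^|\n)\s+|\s+(?:$|\n))', '', url)
    url = re.sub(r'\s+', ' ', url)
    if url.startswith('/'):
        return True          # site-relative links are accepted as-is
    return bool(URL_RE.search(url))

assert validate_url('  http://example.com/page  ')
assert validate_url('/relative/path')
assert not validate_url('not a url')
```

As with the original, this is a permissive sanity check rather than a full RFC 3986 parser.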
fluid-mechanics, thermodynamics, design, propulsion
I believe this new underwater thruster design should provide silent propulsion for the ship/submarine, although the thrust would be low compared to propeller-driven craft. It can kinda work. Ish.
Your design is based, at its core, on the idea that warm water rises. Warm water only rises in cold water, so I am sceptical of the efficacy of simple thermal gradients. Nevertheless, bubble pumps have been in commercial use for many years, so if you get it just right I suppose it's possible.
However, you have some serious mass flow problems in your design. If I understand correctly, in your design you intend the flow path to be something like this:
Now, this system is pretty hard to analyze simply. As the essential components are the pumping action and the flows, I think it will be easier to see on a simpler system that is effectively the same (the symbol is a generic pump): | {
"domain": "engineering.stackexchange",
"id": 2813,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-mechanics, thermodynamics, design, propulsion",
"url": null
} |
homework-and-exercises, potential, vector-fields
If anyone can provide a resource that goes through a similar problem, that may be all I need to set this problem to rest and come up with my own solution. Too long for a comment:
$$
\mathbf{F}=\begin{pmatrix}x^2\\3xz^2\\-2xz\end{pmatrix}\,,
\quad \mathbf{A}=\int_0^1t\,\mathbf{F}(t\,\mathbf{x})\,dt\times\mathbf{x}
$$
solves $\mathbf{F}=\nabla\times\mathbf{A}$ (albeit not uniquely) and gives
\begin{align}
\boldsymbol{A}&=\int_0^1\begin{pmatrix}t^3x^2\\3t^4xz^2\\-2t^3xz\end{pmatrix}\,dt\times
\begin{pmatrix}x\\y\\z\end{pmatrix}
=\begin{pmatrix}x^2/4\\3xz^2/5\\-xz/2\end{pmatrix}\times
\begin{pmatrix}x\\y\\z\end{pmatrix}=
\begin{pmatrix}3xz^3/5+xyz/2\\-x^2z/2-x^2z/4\\x^2y/4-3x^2z^2/5\end{pmatrix}\\
&=\frac{x}{20}\begin{pmatrix}12z^3+10yz\\-15xz\\5xy-12xz^2\end{pmatrix}\,.
\end{align} | {
"domain": "physics.stackexchange",
"id": 96054,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, potential, vector-fields",
"url": null
} |
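The potential found above can be double-checked numerically: compute $\nabla\times\mathbf A$ by central differences and compare with $\mathbf F$ at an arbitrary point (a sketch; the test point is mine):

```python
def A(x, y, z):
    # A = (x/20) * (12z^3 + 10yz, -15xz, 5xy - 12xz^2) from the answer
    return (x / 20 * (12 * z**3 + 10 * y * z),
            x / 20 * (-15 * x * z),
            x / 20 * (5 * x * y - 12 * x * z**2))

def F(x, y, z):
    return (x**2, 3 * x * z**2, -2 * x * z)

def curl(field, x, y, z, h=1e-5):
    def d(i, axis):
        # Central-difference partial derivative of component i
        dx = [0.0, 0.0, 0.0]
        dx[axis] = h
        plus = field(x + dx[0], y + dx[1], z + dx[2])[i]
        minus = field(x - dx[0], y - dx[1], z - dx[2])[i]
        return (plus - minus) / (2 * h)
    return (d(2, 1) - d(1, 2),    # dAz/dy - dAy/dz
            d(0, 2) - d(2, 0),    # dAx/dz - dAz/dx
            d(1, 0) - d(0, 1))    # dAy/dx - dAx/dy

p = (1.2, 0.7, -0.5)
c, f = curl(A, *p), F(*p)
assert all(abs(c[i] - f[i]) < 1e-6 for i in range(3))
```

Since the vector potential is only determined up to a gradient, agreement of the curl is the whole check; the components of $\mathbf A$ themselves are not unique.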
jenkins, ros-kinetic
Title: Publishing the results of CCCC and XUnit test plugins through jenkinsfile in a Pipeline
I have a freestyle ROS job on Jenkins that I recently converted to a pipeline job. I want to publish test reports, code coverage reports, and the like for the said job in the last stage, after having generated the relevant reports in an earlier stage. All the other post-build actions were easy to implement except for publishing the CCCC report and the XUnit test coverage report. I suspect some plugins like these two are not compatible with Jenkins pipeline (Jenkinsfile). Is there a workaround for me to get these two post-build actions published for my job from the Jenkinsfile? Basic, but I am stuck.
Originally posted by HalfBloodPrince7 on ROS Answers with karma: 1 on 2018-02-20
Post score: 0
Since this question doesn't apply exclusively to ROS builds, I think you'll have more luck asking on the Jenkins users mailing lists. My understanding is that plugins must explicitly add support for pipeline projects, and not all plugins have done so. There is a list of plugins that support pipeline maintained in the pipeline plugin repository.
The xUnit plugin is listed as working with pipeline from 1.100 on and the wiki page includes a pipeline example: https://wiki.jenkins.io/display/JENKINS/xUnit+Plugin#xUnitPlugin-WorkingwithPipelines
CCCC is listed as abandoned and hasn't been updated since 2011 so it is unlikely to ever receive pipeline support.
Originally posted by nuclearsandwich with karma: 906 on 2018-02-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by HalfBloodPrince7 on 2018-02-23:
xUnit plugin worked just fine. Thank you for the link. Yes, CCCC doesn't seem to have any support right now. I'll just work with a downstream job for publishing all my test reports. | {
"domain": "robotics.stackexchange",
"id": 30095,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "jenkins, ros-kinetic",
"url": null
} |
c#, calculator
static Decimal Prod(Decimal n1, Decimal n2)
{
Decimal prod;
prod = n1 * n2;
return prod;
}
static Decimal Div(Decimal n1, Decimal n2)
{
Decimal div;
div = n1 / n2;
return div;
}
static Decimal Pow(Decimal n1, Decimal n2)
{
Decimal pow;
pow = (decimal)Math.Pow((double)n1, (double)n2);
return pow;
}
}
}
Remove the useless functions
There's already a sum function. It's called +
Instead of
Console.WriteLine(Sum(n1, n2));
and
static Decimal Sum(Decimal n1, Decimal n2)
{
Decimal sum;
sum = n1 + n2;
return sum;
}
just write
Console.WriteLine(n1 + n2);
... and the exact same thing applies to all the other calculation functions you wrote.
The same thing in every branch of an if
No matter what op is, you do Console.ReadLine();. So delete it from each if/else if/else and just add it after the end of the if. | {
"domain": "codereview.stackexchange",
"id": 40309,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, calculator",
"url": null
} |
water, electricity
Title: Why is distilled water such a poor conductor of electricity? Water is composed of hydronium ions and hydroxide ions. Since ions do conduct electricity, why is distilled water such a poor conductor of electricity? You are mistaken.
Water is not composed of these ions. There is an ongoing autodissociation equilibrium reaction$$\ce{2 H2O <<=> H3O+ + OH- }$$ shifted strongly to the left, with an ion concentration of $\pu{1e-7 mol/L}$ at $\pu{25 ^{\circ}C}$.
Therefore the conductivity of water itself is very low. | {
"domain": "chemistry.stackexchange",
"id": 13685,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "water, electricity",
"url": null
} |
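The "very low" conductivity in the row above can be made quantitative from the quoted ion concentration. The limiting molar conductivities below are standard 25 °C values assumed by me; they are not stated in the answer.

```python
# Estimate the specific conductivity of pure water at 25 C from the
# autoionization concentration above (1e-7 mol/L of each ion), using
# assumed limiting molar conductivities:
#   lambda(H3O+) ~ 349.8 S cm^2 mol^-1, lambda(OH-) ~ 198.0 S cm^2 mol^-1
c_ion = 1e-7 / 1000.0               # 1e-7 mol/L converted to mol/cm^3
kappa = c_ion * (349.8 + 198.0)     # specific conductivity, S/cm
# kappa ~ 5.5e-8 S/cm (~0.055 uS/cm), orders of magnitude below
# ordinary tap water (typically hundreds of uS/cm)
```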
if exposed to factor 0 if not;and y =. The output varies linearly based upon the input. Multiple regression uses the ordinary least squares solution (as does bi. The whole model F test (test of the useful of the model) tests whether the slopes on all variables in multiple regression are zero, i. Ryan McEwan and Julia Chapman. Simple linear regression involves a single independent variable. 12-1 Multiple Linear Regression Models • Many applications of regression analysis involve situations in which there are more than one regressor variable. Introduction to Linear Regression and Correlation Analysis Fall 2006 – Fundamentals of Business Statistics 2 Chapter Goals To understand the methods for displaying and describing relationship among variables. ppt from AA 1Multiple Regression (Part 3: Variables Selection) 1 Topic Outline Comparing Two Multiple Regression Models Partial F -Test Using Sequential Sum of. It is expected the average operating margin of all sites that fit this category falls within 33% and 41. Chapter 8: Multiple Choice Questions. Here is image from Linear Regression, posted by Ralf Adam, on March 30, 2017, image size: 63kB, width: 1200, height: 1455, Least to Greatest, Where, Least Common. Regression with two or more predictors is called multiple regression Available in all statistical packages Just like correlation, if an explanatory variable is a significant predictor of the dependent variable, it doesn't imply that the explanatory variable is a cause of the dependent variable. It is discussed in Response Surface Methods. Hey I would like to make a scatter plot with p-value and r^2 included for a multiple linear regression. 0 Chapter 6: Multiple Linear Regression Topics Explanatory Modeling Predictive Modeling Example: Prices of Toyota Corolla ToyotaCorolla. Multiple regression is a very advanced statistical too and it is extremely powerful when you are trying to develop a "model" for predicting a wide variety of outcomes. 
Regression is the analysis of the relation between one variable and some other variable(s), assuming a linear relation. Also referred to as least squares regression and ordinary least squares (OLS). Regression when all explanatory variables are categorical is “analysis of variance”. 5 Correlation and Regression Simple regression 1. Notes on Regression Model • It is VERY important to have theory before starting developing any regression model. The Regression df is the number of independent variables in the model. We can have only two models or more | {
"domain": "hoteleuphoria.ro",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.982013792143467,
"lm_q1q2_score": 0.8616942950070836,
"lm_q2_score": 0.8774767746654974,
"openwebmath_perplexity": 937.9905486025981,
"openwebmath_score": 0.3387893736362457,
"tags": null,
"url": "http://backup.hoteleuphoria.ro/t946mkf/8rgsf.php?ff=multiple-regression-ppt"
} |
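The row above describes multiple regression via ordinary least squares. A minimal sketch (synthetic data of my own, not from the document) that fits y = b0 + b1·x1 + b2·x2 by solving the 3x3 normal equations X'X b = X'y with plain Python:

```python
import random

random.seed(0)
n = 200
rows, ys = [], []
for _ in range(n):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    rows.append([1.0, x1, x2])                            # intercept column first
    ys.append(1.0 + 2.0 * x1 - 3.0 * x2 + random.gauss(0, 0.01))

# Build the normal equations X'X b = X'y
k = 3
XtX = [[sum(row[i] * row[j] for row in rows) for j in range(k)] for i in range(k)]
Xty = [sum(row[i] * y for row, y in zip(rows, ys)) for i in range(k)]

# Gauss-Jordan elimination with partial pivoting on the augmented matrix
M = [XtX[i] + [Xty[i]] for i in range(k)]
for col in range(k):
    piv = max(range(col, k), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(k):
        if r != col:
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
beta = [M[i][k] / M[i][i] for i in range(k)]
# beta recovers roughly [1.0, 2.0, -3.0], the coefficients used to generate y
```

In practice one would use a library solver (e.g. a QR or lstsq routine) rather than forming X'X explicitly, which is numerically less stable.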
cosmology
2) When was humanity first detectable in space by something other than visible light? (EM emission, other waves?) I think it was the first radio waves. The first transmissions happened in the 1890s, but large-scale radio waves, with their very special characteristics, began only around 1930. This is 80 years. I want to mention that our radio-wave spectrum has changed significantly over the decades (TV spectrum added, digital radio & television, wifi, etc.). AFAIK, with our technology - radio-wave observatories - it would be easily detectable within this sphere around the Earth, 80 light years across.
3) What products of humanity other than EM waves are there in outer space, and how far away are they?
The most distant man-made object is Voyager 1. It is currently around 125 AU (0.002 light years) away. Its radio signals are currently well detectable with a satellite cluster - but only because we know exactly what to look for.
4) In what parts of the EM-spectrum is humanity clearly detectable against the Sun's background emission and how far away is that possible?
See 2). From far away (i.e., from many AU) it is theoretically impossible to differentiate senders transmitting on the same frequency, but it is quite possible to receive the "sum" of their signals. This doesn't yield comprehensible data, but it has very special characteristics [frequency spectrum] that we never get from any natural source. | {
"domain": "physics.stackexchange",
"id": 14421,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology",
"url": null
} |
strings, random, lua
Title: Generating random strings I've created the following string manipulation function for randomizing my passed string in Lua:
require "string"
require "math"
math.randomseed( os.time() )
function string.random( self )
local tTemporary, tNew = {}, {}
if not self or self:len() < 5 then
return nil
end
self:gsub( "%a", function( cChar )
table.insert( tTemporary, cChar )
end
)
for i = 1, #tTemporary, 1 do
local iRandom = math.random(1, #tTemporary)
tNew[i] = tTemporary[iRandom]
table.remove( tTemporary, iRandom )
end
return table.concat(tNew, " ")
end
Can this be optimized/more-randomised? A couple of points to consider:
Calling your function shuffle instead of random would be a better name since that's really what it's doing.
The extra intermediate tables aren't really necessary to perform the shuffle. A single table is enough and you can perform the shuffle in-place right on that table.
Consider a better separation between the process of shuffling and how the input is stored. For example, if you have your shuffle function take a table instead then that function can be reused on any kind of data and not just strings.
I would do away with the Hungarian notation and just drop the type prefix on the variable names. Remember Lua variables themselves doesn't have a type. If you later change what that variable refers to then the Hungarian notation becomes misleading.
So I would refactor your original code into two functions. Here's one possibility:
function shuffle(self)
if type(self) ~= 'table' then return nil end
for i = #self, 2, -1 do
local randi = math.random(i)
self[i], self[randi] = self[randi], self[i]
end
return self
end
function shuffle_words(str)
local strtable = {}
for each in str:gmatch("%a+") do
table.insert(strtable, each)
end
return table.concat(shuffle(strtable), ' ')
end | {
"domain": "codereview.stackexchange",
"id": 3713,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "strings, random, lua",
"url": null
} |
geology, paleontology, dating, history-of-science
Title: Did geologists determine the age of rocks and fossils before the advent of modern scientific dating methods? Did geologists determine the age of rocks and fossils before the advent of modern scientific dating methods such as radiometric, electron spin resonance and thermoluminescence?
If they did, does anyone know how they went about it? The approach adopted by Charles Lyell (and other writers in a similar timeframe), in his book 'Principles of Geology', first published in the 1830s, was to look at processes in the modern landscape where the rate of change could be determined by observation or from historical evidence, and to assume that similar processes operated at similar rates in the geological past. So, for instance, if you measure the amount of sediment transported by a river today, and you measure the volume of sediment in that river's delta, you can estimate how long that delta took to form. If you see a similar delta in the geological record, you can assume it took a similar time to form. Lyell's estimates of the age of the earth were low, but given that the concept of plate tectonics, with its progressive recycling of rocks through subduction, wasn't yet recognised, his work was remarkably prescient. | {
"domain": "earthscience.stackexchange",
"id": 2306,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "geology, paleontology, dating, history-of-science",
"url": null
} |
general-relativity, spacetime
Now, supposedly in the trace-reversed form of the EFE, in the weak field limit, we can pretend the metric $g_{\mu\nu}$ is equal to the flat space metric $\eta_{\mu\nu}$, so $g_{00} = \eta_{00} = +1$ instead of $g_{00} = 1 + \frac{2\Phi}{c^2}$. From there, solving for $\kappa$ is easy:
$$T = T^\mu_\mu \approx T^0_0 \approx T^{00}g_{00} = \rho c^2 (+1) = \rho c^2$$
$$ R_{00} = \kappa (T_{00} - \frac{1}{2}T g_{00}) = \kappa (\rho c^2 - \frac{1}{2} \rho c^2 (+1)) = \kappa \frac{1}{2}\rho c^2 $$
Assuming Poisson's Equation $\nabla^2 \Phi = 4 \pi G \rho$ holds in the weak field limit, we get
$$ \frac{1}{c^2} \nabla^2 \Phi = \kappa \frac{1}{2}\rho c^2 $$
$$ \frac{1}{c^2} 4 \pi G \rho = \kappa \frac{1}{2}\rho c^2 $$
$$ \frac{8 \pi G}{c^4} = \kappa $$ | {
"domain": "physics.stackexchange",
"id": 97209,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, spacetime",
"url": null
} |
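The last algebraic step in the row above is easy to sanity-check numerically. A small sketch (SI constants assumed by me, not given in the text) verifying that $\kappa = 8\pi G/c^4$ makes both sides of the weak-field equation agree:

```python
import math

# Numeric check of 4*pi*G*rho/c^2 = kappa * rho * c^2 / 2 with assumed constants
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
rho = 1.0            # arbitrary test density, kg/m^3 (it cancels out)

kappa = (8 * math.pi * G) / c**4
lhs = 4 * math.pi * G * rho / c**2
rhs = kappa * rho * c**2 / 2
# lhs == rhs (to rounding) for any rho, confirming kappa = 8 pi G / c^4,
# which is about 2.08e-43 s^2 m^-1 kg^-1 in SI units
```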
thermodynamics
However, for many thermodynamic functions we can turn them into intensive properties by dividing by the total amount of the system. I'll show you the notation that I like. Let $X$ denote the thermodynamic property $Q$, $S$, $G$ or $V$; then $x$ is defined as
\begin{equation}
\boxed{x \equiv \frac{X}{n}} \tag{1}
\end{equation}
where $n$ is the total amount of the system. Here $x$ is called a molar thermodynamic property, and may be $q$, $s$, $g$ or $v$.
For example, $g$ is the molar Gibbs energy (intensive) in $\pu{J mol^-1}$ and $G$ is the Gibbs energy (extensive) in $\pu{J}$. $v$ is the molar volume (intensive) in $\pu{m^3 mol^-1}$ and $V$ is the volume (extensive) in $\pu{m^3}$.
Thus, for the following equations
\begin{equation}
\mathrm{d}S = \frac{\unicode{x0111}Q_\mathrm{rev}}{T} \quad
\mathrm{d}s = \frac{\unicode{x0111}q_\mathrm{rev}}{T} \tag{2,3}
\end{equation}
Eq. (2) denotes entropy $S$ in units of $\pu{J K^-1}$, and heat $Q_\mathrm{rev}$ in units of $\pu{J}$.
Eq. (3) denotes molar entropy $s$ in $\pu{J mol^-1 K^-1}$, and $q_\mathrm{rev}$ denotes molar heat (although this sounds weird) in $\pu{J mol^-1}$. | {
"domain": "chemistry.stackexchange",
"id": 17675,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics",
"url": null
} |
operators, conformal-field-theory, commutator, quantization
We are now ready to answer your question. In the flat metric described above, what is time? Well, the notion of time is determined from our starting point where time was described by the coordinate $t$ (to be precise, what I mean is that I am studying the theory from the perspective of an observer $O$ moving along the worldline with tangent $k=\partial_t$) which Wick rotates to $\tau$ which I coordinate transformed to $r$. Thus, on the final Euclidean plane, the observer $O$ experiences time to be $r$ (to be precise, $O$ travels along a worldline with tangent $k = r \partial_r$). We therefore quantize the theory w.r.t. constant $r$ slices. The final Euclidean plane described above is therefore called the "radial plane".
To summarize, A CFT on $S^{d-1}\times {\mathbb R}$ quantized on equal time slices can be described equivalently in terms of a CFT on ${\mathbb R}^d$ quantized on equal radius slices. | {
"domain": "physics.stackexchange",
"id": 64793,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "operators, conformal-field-theory, commutator, quantization",
"url": null
} |
java
Title: How to organize these several lists of booleans to avoid code duplication? My class Person has a number of boolean values: isMoving, hasEyesOpen, isTired etc. They are all independent and eight in total. These values are updated from elsewhere every second. The Person stores the last hundred historical values. My code seems to contain a lot of duplication. Is there a better way to write this class?
The main reason that I think this is bad code is that any new boolean must be added in a lot of places.
class Person {
protected ArrayList<Boolean> histIsMoving = new ArrayList<Boolean>();
protected ArrayList<Boolean> histHasEyesOpen = new ArrayList<Boolean>();
protected ArrayList<Boolean> histIsTired = new ArrayList<Boolean>();
// and 5 more ...
public boolean getIsMoving() {
return histIsMoving.get( histIsMoving.size()-1 );
}
public boolean getHasEyesOpen() {
return histHasEyesOpen.get( histHasEyesOpen.size()-1 );
}
public boolean getIsTired() {
return histIsTired.get( histIsTired.size()-1 );
}
// and 5 more ...
public ArrayList<Boolean> getHistIsMoving() {
return histIsMoving;
}
public ArrayList<Boolean> getHistHasEyesOpen() {
return histHasEyesOpen;
}
public ArrayList<Boolean> getHistIsTired() {
return histIsTired;
}
// and 5 more ... | {
"domain": "codereview.stackexchange",
"id": 4558,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java",
"url": null
} |
python, programming-challenge, circular-list
You can see that for each doubling of n, the runtime increases by roughly four times, which is what we expect for an \$ O(n^2) \$ algorithm.
4. Making it linear
How can we speed this up? Well, we could avoid the expensive del operation by making a list of the survivors, instead of deleting the deceased. Consider a single trip around the circular firing squad. If there are an even number of people remaining, then the people with indexes 0, 2, 4, and so on, survive. But if there are an odd number of people remaining, then the last survivor shoots the person with index 0, so the survivors are the people with indexes 2, 4, and so on. Putting this into code form:
def survivor2(n):
"""Return the survivor of a circular firing squad of n people."""
persons = list(range(1, n + 1))
while len(persons) > 1:
if len(persons) % 2 == 0:
persons = persons[::2]
else:
persons = persons[2::2]
return persons[0]
(You could shorten this, if you liked, using an expression like persons[(len(persons) % 2) * 2::2], but I don't think the small reduction in code length is worth the loss of clarity.)
Let's check that this is correct, by comparing the results with the original implementation:
>>> all(survivor(i) == survivor2(i) for i in range(1, 1000))
True | {
"domain": "codereview.stackexchange",
"id": 12907,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, programming-challenge, circular-list",
"url": null
} |
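The halving loop reviewed above takes O(log n) passes over shrinking lists; for this step-2 Josephus variant there is also a well-known O(1) closed form (if n = 2^m + l with 0 <= l < 2^m, the survivor is 2l + 1), sketched here for comparison:

```python
def survivor_closed(n):
    """Josephus survivor for step 2: if n = 2**m + l, the answer is 2*l + 1."""
    highest = 1 << (n.bit_length() - 1)   # largest power of two <= n
    return 2 * (n - highest) + 1

def survivor2(n):
    """Reference implementation from the review above."""
    persons = list(range(1, n + 1))
    while len(persons) > 1:
        persons = persons[::2] if len(persons) % 2 == 0 else persons[2::2]
    return persons[0]
# The two functions agree for every n checked
```

Equivalently, the closed form is a one-bit left rotation of n's binary representation, which is the usual statement of the Josephus result for k = 2.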
electrostatics
Title: If glass rod is an insulator, how can it be used for charging by contact? Charging by contact requires conductors. When a positively charged glass rod touches a neutral metal body, the body gets a positive charge. But how is this possible if the glass rod is an insulator? The charge and voltage on an object like the glass rod are related by:
$$ C = \frac{Q}{V} $$
where $C$ is the capacitance. For a glass rod the capacitance is very small so the charge $Q$ will be very small and the voltage $V$ will be very high. This means the current required to redistribute the charge is very small and there is a high voltage available to drive that current, so even if the resistance of the glass is very high the charge will still flow on the timescale of a few seconds. That's why the charge can flow off the glass rod and onto the metal body.
To work out how fast the charge will flow requires you to specify the geometry of the rod and the charge and voltage on it. The resistance of the glass will depend on how clean it is and the relative humidity. A glass surface adsorbs water from the atmosphere to form a (slightly) conducting surface layer. | {
"domain": "physics.stackexchange",
"id": 33382,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics",
"url": null
} |
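The "timescale of a few seconds" in the answer above can be made concrete with an RC estimate. The two numbers below are order-of-magnitude assumptions of mine, not values from the answer:

```python
# Order-of-magnitude discharge timescale for a charged glass rod,
# with assumed values: self-capacitance ~1 pF, and surface resistance
# of slightly humid glass ~1e12 ohm.
C = 1e-12          # farad (assumed)
R = 1e12           # ohm (assumed)
tau = R * C        # RC time constant, in seconds
# tau ~ 1 s, consistent with charge flowing off on a timescale of seconds
```

Cleaner or drier glass pushes R up by several orders of magnitude, which is why the observed transfer time varies so much with humidity.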
python, beginner, pyqt
self.menubar.setGeometry(QtCore.QRect(0, 0, 766, 26))
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
MainWindow.setStatusBar(self.statusbar)
self.actionNew_Game = QtWidgets.QAction(MainWindow)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow) | {
"domain": "codereview.stackexchange",
"id": 37872,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, pyqt",
"url": null
} |
Everything so far has simply been the setup: writing the polynomial in reduced monic form. Next we implement Descartes Factorization method.
Descartes Factorization Method
We must assume that all of the coefficients are real, $p,q,r\in\mathbb{R}$. This is a required condition to make the methodology work. The reason is because now all solutions with non-zero imaginary components come in complex conjugate pairs. Big deal? It allows us to group two solutions together, even if they are purely real, into quadratic factors with real coefficients. We know that all real-valued monic quartics can be factored into: $$(z^2 + mz+n)(z^2+sz+t)=0\;\;\;\;\;\;\;\;\;\;\;\;(2)$$ Where $m,n,s,t\in\mathbb{R}$. Clearly the task at hand is to find the values of these constants. If we can convert (1) into (2) by determining quadratic factors that satisfies the equation, we can find the roots with the quadratic formula applied to those quadratic factors. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9888419667789669,
"lm_q1q2_score": 0.817487318519125,
"lm_q2_score": 0.8267117962054049,
"openwebmath_perplexity": 385.00272215571556,
"openwebmath_score": 0.8640218377113342,
"tags": null,
"url": "https://math.stackexchange.com/questions/785/is-there-a-general-formula-for-solving-4th-degree-equations-quartic"
} |
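Multiplying out form (2) and matching coefficients against (1) is the mechanical heart of the method described above. A small numeric spot-check (arbitrary assumed values for $m,n,s,t$) of that expansion, via polynomial convolution:

```python
# Expanding (z^2 + m z + n)(z^2 + s z + t) gives
#   z^4 + (m+s) z^3 + (n+t+ms) z^2 + (mt+ns) z + nt,
# which is what gets matched against the reduced quartic (1).

def polymul(p, q):
    """Multiply two coefficient lists (highest degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

m, n, s, t = 1.5, -2.0, 0.5, 3.0
product = polymul([1.0, m, n], [1.0, s, t])
expected = [1.0, m + s, n + t + m * s, m * t + n * s, n * t]
# product and expected agree term by term
```

Since the reduced monic form (1) has no $z^3$ term, the matching forces $m + s = 0$, i.e. $s = -m$, which is the usual first substitution in Descartes' method.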
equilibrium
This is the second time I have answered on Stack Exchange, so please let me know if I have made any mistakes. If I had known how to create the diagrams I usually find in other answers, I would have added them. | {
"domain": "chemistry.stackexchange",
"id": 2257,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "equilibrium",
"url": null
} |
in order to sketch their graphs. There are also packets, practice problems, and answers provided on the site. This document is highly rated by CA CPT students and has been viewed 1362 times. Differential calculus is about describing in a precise fashion the ways in which related quantities change. As such, each part was restricted to 8 chapters. Applications of Integral Calculus. This exclusive workshop was by invitation only, and all talks were one hour in length. School Counselor; Parent Presentations; Handbook; Middle School Math Information; Testing; Class Trips/Events; Clubs; Student Council; Scheduling and Program of Studies. See full list on embibe. 1 Graph Quadratic Equations in Standard Form Homework: pages 240-241 (3-39 multiples of 3) Tuesday, November 10 In Class: 4. Such a generalization is not merely a mathematical curiosity but has found applications in various fields of physical sciences. Cost of a commodity depends upon a number of factors. A Gable is a triangle formed by a sloping roof. Examples, practice problems on Calculus. Applications of Differential Equations The Simple Pendulum Theoretical Introduction. One way to do this is to graph the function and determine by inspection what the absolute maximum is. 750 Chapter 11 Limits and an Introduction to Calculus The Limit Concept The notion of a limit is a fundamental concept of calculus. Application of Integral Calculus The ABC Company Ltd. txt) or view presentation slides online. Crisan’s Stochastic Calculus and Applications lectures of 1998; and also much to various books especially those of L. Finding the value of the function between the x values graphically represents the area of the function under the curve within the x limits. Population Problems 4. However, not every student has mastered the basic calculus applications and formulae, that is why they request for our homework help. 
Pupils are given the worksheet in order to see how simple, yet how time consuming and contradictory this theory is. In structural analysis finite element method is very useful for analysis of multi- storey structures, design of rigid pavements for roads, analysis of particle under various kind of forces like seismic force, wind. ppt from FIN 650 at Pace University. The Fundamental Theorem of Calculus. Pilipović, B. calculus: the calculus of optimization 15 Economists in the late 1900s thought that utility might actually be real, some- thing that could be measured using “hedonometers” or “psychogalvanometers”. Differentiate, Derivative, Maximum, Minimum, Turning | {
"domain": "1upload.de",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9890130593124399,
"lm_q1q2_score": 0.8711198220083848,
"lm_q2_score": 0.8807970873650401,
"openwebmath_perplexity": 1150.4617944075264,
"openwebmath_score": 0.49587282538414,
"tags": null,
"url": "http://zveg.1upload.de/application-of-calculus-ppt.html"
} |
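The remark in the row above, that the definite integral graphically represents the area under the curve between the x limits, can be illustrated with a short comparison (function and limits chosen arbitrarily by me):

```python
# Midpoint Riemann sum converging to the antiderivative value,
# illustrating the Fundamental Theorem of Calculus:
# the integral of x^2 from 0 to 3 equals 3^3/3 = 9.

def midpoint_integral(f, a, b, steps=100_000):
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) for i in range(steps)) * dx

approx = midpoint_integral(lambda x: x * x, 0.0, 3.0)
exact = 3.0**3 / 3.0   # antiderivative x^3/3 evaluated between the limits
# approx differs from exact only by the discretization error of the sum
```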
ros-kinetic, pointcloud
Originally posted by DRos with karma: 23 on 2018-07-24
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by PeteBlackerThe3rd on 2018-07-24:
You are correct. The points member is a std::vector object you can either set the size then add the data (fastest) or you can use the push_back method to add elements one by one (slower)
Comment by DRos on 2018-07-24:
Thanks for the reply Pete.
Comment by stevemacenski on 2018-07-24:
Can you mark this as answered?
Comment by DRos on 2018-07-25:
Sure, sorry about that. All done. | {
"domain": "robotics.stackexchange",
"id": 31343,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-kinetic, pointcloud",
"url": null
} |
php
class view_db extends view
{
function __construct($type)
{
parent::__construct();
$this->invoke($type);
}
private function invoke($type)
{
switch ($type)
{
case "bookmarks":
$this->html_bookmarks();
break;
case "tweets":
$this->html_tweets();
break;
default:
throw new Exception('Invalid View Type');
break;
}
}
private function html_bookmarks()
{
$email = $_SESSION['email'];
$query_return = database::query("SELECT * FROM bo WHERE email='$email' ORDER BY name ASC");
$html_string='';
while ($ass_array = mysqli_fetch_assoc($query_return))
{
$fav=$this->favicon($ass_array['url']);
$html_string = $html_string . "<img name='bo_im' class='c' src='$fav' onerror='i_bm_err(this)'><a target='_blank' name='a1' class='b' href = $ass_array[url]>$ass_array[name]</a>";
}
echo $html_string;
}
private function html_tweets()
{
$query_return = database::query("SELECT * FROM tw ORDER BY time DESC LIMIT 7");
$time = time();
$html_string='';
while ($a = mysqli_fetch_assoc($query_return))
{ | {
"domain": "codereview.stackexchange",
"id": 733,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php",
"url": null
} |
error-analysis
Title: Can adding two numbers increase the significant figures? I have a question regarding significant figures when calculating the average of two masses. For example, adding two masses 7 kg and 8 kg gives 15 kg; both inputs 7 and 8 have only one significant figure, but the final answer 15 has two significant figures.
Now if I take the average of the masses, 15/2 gives 7.5. (Note: the number 2 has infinitely many significant figures because it represents an exact count, so it is actually 2.0000...)
Hence the calculated average 7.5 kg has two significant figures, obtained from the sum 15 kg (which has 2 significant figures and determines the significant figures of the result of the division).
Is this logic correct? Can addition (or taking an average) increase the significant figures of the result? Can I write my final average as 7.5 kg, or should I write only 8 kg (after rounding 7.5 to one significant figure because both inputs have only one significant figure)?
Thank you for your help in advance. The question which was asked was Can adding two numbers increase the significant figures?
In this context the question is misleading, in that when values cross a decade boundary ($0.1$, $1$, $10$, and so on) the use of the term significant figures can give a false impression.
The accuracy of the value $9.99$ is not really any different from the accuracy of the value $10.01$ even though the second value is quoted to four figures whilst the first value is quoted to only three figures.
It is not clear if the quoted masses, $7\,\rm kg$ and $8\,\rm kg$, are actually more accurate than the one figure quoted. For example you might expect a hanger mass that is found in the laboratory with a label of $100\,\rm g$ on it to be accurate to at least $\pm 0.1\,\rm g$. | {
"domain": "physics.stackexchange",
"id": 82202,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "error-analysis",
"url": null
} |
c++, algorithm, programming-challenge, time-limit-exceeded
Title: Computing the de Bruijn sequence I'm trying to compute de Bruijn sequences for alphabets whose number of characters is a power of two. So the sequence must contain every possible subsequence of length \$N\$ (every value from \$0\$ to \$2^N - 1\$ written in binary) as a sequence of consecutive characters exactly once.
I need to use only 2 digits (0 and 1) with a length \$N \le 20\$ so \$B(2, N)\$.
The idea is quite simple; I have implemented it, but I have a time restriction of 0.5 sec, and from \$N=16\$ to \$N=20\$ I will get Time Limit Exceeded (I didn't submit the code yet because the programming contest's web evaluator is not working at the moment). Using my computer I get for \$N=15\$ somewhere between \$0.2\$
and \$0.3\$ seconds, but for \$N=16\$ I get \$1.180\$ seconds, which is not good at all.
Prefer Ones rule: Write \$n\$ zeros. Then, always write a one unless it would cause the repetition of an \$n\$-length string; otherwise, write a zero.
This is my code without libraries declaration:
int construct(int n, string ans){
int number=0, poz=0;
for(int i=ans.length()-n; i<ans.length(); ++i, ++poz)
if(ans[i]=='1')
number ^= (1<<poz);
return number;
}
int main ()
{
int n, aux, usedN=1; // usedN counts the combinations
// I created in my sequence (till 2^n)
fin>>n;
aux=n;
bool used[(1<<n)+1];
string ans;
memset(used, 0, sizeof(used));
used[0]=1; // that's why I have usedN = 1
while(aux--)
ans.push_back('0'); | {
"domain": "codereview.stackexchange",
"id": 17817,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, programming-challenge, time-limit-exceeded",
"url": null
} |
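The "Prefer Ones" rule quoted in the question above is easy to prototype as a reference implementation before tuning the C++ version; a sketch:

```python
# Greedy "prefer ones" construction quoted in the question: start with
# n zeros, then repeatedly append 1 unless that would repeat an
# already-seen n-length window, otherwise append 0; stop when neither
# bit extends the sequence.

def de_bruijn_prefer_ones(n):
    seq = [0] * n
    seen = {tuple(seq)}
    while True:
        for bit in (1, 0):
            window = tuple(seq[len(seq) - n + 1:] + [bit])
            if window not in seen:
                seen.add(window)
                seq.append(bit)
                break
        else:
            return ''.join(map(str, seq))

s = de_bruijn_prefer_ones(4)
# written linearly, the result has length 2**4 + 4 - 1 = 19 and
# contains each of the 16 possible 4-bit blocks exactly once
```

For \$B(2, N)\$ this greedy construction yields a linear string of length \$2^N + N - 1\$; checking membership with a hash set, as here, is the same idea as the `used` array in the C++ code, where a bitmask index replaces the tuple key.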