text stringlengths 49 10.4k | source dict |
|---|---|
machine-learning, artificial-intelligence
Title: Clarification about RNN encoder-decoder equation In the paper by Cho et al., section 2.3 details the equations for the modified LSTM cell used in the paper's RNN implementation. The equation in question is:
Here, the output of the reset gate ($\mathbf{r}$) is element-wise multiplied with the previous hidden state $h_{t-1}$, and the result is then matrix-multiplied with the U matrix.
Later on, Appendix section A.1.1 explains the equations for the decoder, specifically:
My question: Is $r'_j$ element-wise multiplied with the corresponding expression or not? $\mathbf{r}$ is a vector; $r'_j$ is a scalar; so this is multiplying a scalar ($r'_j$) by a vector (the part in $[...]$). There is only one way to multiply a scalar by a vector: you multiply each coordinate of the vector by the scalar. | {
"domain": "cs.stackexchange",
"id": 19550,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, artificial-intelligence",
"url": null
} |
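The scalar-times-vector point in the answer above can be checked in a couple of lines. A minimal sketch with made-up numbers (the gate value and hidden state below are illustrative, not from the paper):

```python
import numpy as np

# r'_j is a single scalar gate activation; the bracketed expression is a
# vector, so the product scales every coordinate of that vector.
r_j = 0.5                                # hypothetical scalar gate value
h_prev = np.array([0.2, -1.0, 3.0])      # hypothetical vector in [...]

scaled = r_j * h_prev                    # each coordinate times the scalar
print(scaled)                            # [ 0.1 -0.5  1.5]

# Contrast with the encoder equation, where the whole reset vector r is
# applied elementwise (Hadamard product) before the U matrix multiply:
r = np.array([0.5, 1.0, 0.0])
U = np.eye(3)                            # identity stands in for U here
out = U @ (r * h_prev)
print(out)
```

With `U` as the identity, the second print shows the Hadamard product directly, which makes the scalar-vs-vector distinction easy to see.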
navigation, move-base
Originally posted by Po-Jen Lai with karma: 1371 on 2015-07-07
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Naman on 2015-07-07:
Thanks @Ricky! Do you know of any good resource/paper/forums/links for generating a global plan for wall following in ROS? TIA
Comment by Po-Jen Lai on 2015-07-07:
Forum: http://robotics.stackexchange.com/, As for paper: (I just did a quick survey, you might get more useful info by doing more survey) http://ieeexplore.ieee.org/xpl/abstractCitations.jsp?tp=&arnumber=220250&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D220250 | {
"domain": "robotics.stackexchange",
"id": 22086,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, move-base",
"url": null
} |
python, performance, python-3.x, multiprocessing
digits_list = dates()
else:
while True:
number_digits = input("How many numbers do you want to put"
"(/!\max is 6 numbers!\)")
if number_digits.isdigit() and int(number_digits) <= 6:
number_digits = "9" * int(number_digits)
digits_list = numbers(number_digits)
number_digits = ""
break
else:
                parser.error('A number lower or equal to 6 is required for the length of the numbers')
if args.save:
run = True
while run:
save = input("How do you want to name the save file?")
if save != "":
save = save+".txt"
break
print("\n" + "-Found [" + str(lines_hashfile) + "] hashes in hashlist " +
"\n" + "-Found [" + str(lines_wordfile) + "] words in wordlist")
hashmethod()
input("\n"+"Press <ENTER> to start")
start_time = datetime.datetime.now() # Save the time the program started
main()
print("Scan finished")
try:
os.remove("tmp")
except PermissionError:
print("PermissionError: tmp file couldn't be removed (need administrator permissions)")
You can also find the code with wordlists and a hashlist for testing here:
https://github.com/Thewizy/UltimateBruteforcer
Just a few quick comments:
You can use a dictionary as a mapping
First you check the length of the hash and use a temp variable (1, 2, 3, 4, 5, 6) to know which hashing algorithm to use
length = len(hash)
if length == 32: # MD5
hashmethod_list.append(1)
...
if hashmethod_list[hashline] == 1:
hashedguess = hashlib.md5(bytes(word, "utf-8")).hexdigest() | {
"domain": "codereview.stackexchange",
"id": 33575,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, python-3.x, multiprocessing",
"url": null
} |
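The reviewer's dictionary suggestion can be taken one step further: map the digest length straight to a `hashlib` constructor, so the magic numbers 1–6 disappear entirely. A sketch (the function name and structure are illustrative, not from the reviewed repository; the lengths are the standard hex-digest lengths of each algorithm):

```python
import hashlib

# Map hex-digest length -> hashlib constructor, replacing the 1..6 codes.
HASHERS = {
    32: hashlib.md5,
    40: hashlib.sha1,
    56: hashlib.sha224,
    64: hashlib.sha256,
    96: hashlib.sha384,
    128: hashlib.sha512,
}

def hash_word(word, target_hash):
    """Hash `word` with the algorithm implied by len(target_hash)."""
    hasher = HASHERS.get(len(target_hash))
    if hasher is None:
        raise ValueError(f"unrecognised hash length: {len(target_hash)}")
    return hasher(word.encode("utf-8")).hexdigest()

# Example: check a guess against an MD5 digest (32 hex characters).
target = hashlib.md5(b"secret").hexdigest()
print(hash_word("secret", target) == target)  # True
```

The lookup is O(1) per hash line and keeps the algorithm choice in one place instead of a chain of `if` statements.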
control, ros, ros2, transforms, tf2
Title: Can a ROS 2 TF buffer be queried in time-critical code?
Does anybody know if the ROS2 tf buffer belongs in control loops? What's the latency on calling lookupTransform()?
It would be great if it were realtime safe.
Originally posted by AndyZe on ROS Answers with karma: 2331 on 2021-04-28
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 36379,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "control, ros, ros2, transforms, tf2",
"url": null
} |
signal-analysis, digital-communications, symmetry
So, from an information-theoretic point of view, a non-zero mean means wasting power without transporting information. From a spectrum regulation point of view it means introducing spurs even outside the actual bandwidth of the information signal.
Since these spurs actually stem from a sampling of the pulse shape $G$ at multiples of the symbol rate, this is not as bad for many (especially purely theoretic) digital systems, where you have intense control over the parameters of that pulse shape – but if your pulse shaper is an imperfect analog one, this will come back and bite you. Also note that if you're building a digital transmitter with a rectangular pulse shape with one sample per symbol (as found in many cheap transmitters), you will have a DAC somewhere. That DAC will have a reconstruction filter. You will have to incorporate that filter's imperfections as periodically repeating spurs. Not great.
"domain": "dsp.stackexchange",
"id": 5969,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "signal-analysis, digital-communications, symmetry",
"url": null
} |
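The wasted-power point above is easy to demonstrate numerically. A hedged sketch (rectangular pulse shape, one sample per symbol, arbitrary parameters): a unipolar symbol stream has a large DC spur, while the zero-mean bipolar version of the same bits does not.

```python
import numpy as np

# Compare the spectrum of a non-zero-mean and a zero-mean symbol stream.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 4096)

unipolar = bits.astype(float)      # levels {0, 1}: mean ~ 0.5
bipolar = 2.0 * bits - 1.0         # levels {-1, +1}: mean ~ 0

spec_uni = np.abs(np.fft.rfft(unipolar)) / len(bits)
spec_bi = np.abs(np.fft.rfft(bipolar)) / len(bits)

# Bin 0 is DC: it carries no information, yet it dominates the unipolar
# spectrum; the bipolar spectrum has essentially nothing there.
print(spec_uni[0], spec_bi[0])
```

The information content of the two streams is identical; the DC bin of the unipolar stream is pure transmit power spent on nothing.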
cosmology, astronomy, astrophysics, universe
Title: If there were infinitely many stars If there were infinitely many stars, would the sky always be full of light, so that there is no real night? If there were infinite stars (or a lot of stars), the fact that the night sky is dark indicates that there was not enough time for light from those stars to travel to us. That means the stars started to radiate light a finite time ago. | {
"domain": "physics.stackexchange",
"id": 26877,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, astronomy, astrophysics, universe",
"url": null
} |
You are welcome. I had figured the $4!$ was an unimportant error. But that still leaves an overcount in that part by a factor of $2$. That error is somewhat more structural. – André Nicolas Apr 15 '13 at 9:30 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9911526438717138,
"lm_q1q2_score": 0.8520580062857801,
"lm_q2_score": 0.8596637577007393,
"openwebmath_perplexity": 100.20600061453757,
"openwebmath_score": 0.8496515154838562,
"tags": null,
"url": "http://math.stackexchange.com/questions/362140/probability-taking-cups-from-a-box"
} |
genetics, evolution
in a dioecious species (i.e., one with two sexes), cannibalism means a 50% chance of eating a potential mate and losing the opportunity to pass on your genes
diseases are more likely to transfer between species the more closely related they are. If you are a cannibal you are eating a species with a 99% similarity to you and are more likely to get diseases or parasites. Like kuru/mad cow/chronic wasting disease.
it precludes social behavior and all its benefits. Why engage in mutually profitable behavior when you can just eat those around you? This is thought to be one reason why pack-hunting is rare in some groups of vertebrates like monitor lizards.
on a broad-scale evolutionary perspective, cannibalism means removing members of your own population from the gene pool. It may be good for the individual but it's bad for the species as a whole
cannibalism means you are feeding on an animal that is potentially your own size. That's a bad strategy in general. Most cannibalism in nature tends to be adults eating infants or adults scavenging carrion of their own species. By contrast predators are normally larger than their prey, with a few exceptions like pack-hunting wolves and lions. | {
"domain": "biology.stackexchange",
"id": 11107,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "genetics, evolution",
"url": null
} |
paleontology
Title: Were dinosaurs with feathers common? More and more evidence of fossilized dinosaurs with feathers is appearing. Did many dinosaurs have feathers, and did this change during the Mesozoic? Keep in mind that although I am a paleontologist, I am not a vertebrate paleontologist, so I might not be aware of every single find ever made of feathered dinosaurs.
As far as I can tell, all dinosaur fossils exhibiting feathers belong to the Theropoda. Here is a (probably dated) phylogeny of dinosaurs from Sereno 1999: | {
"domain": "earthscience.stackexchange",
"id": 49,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "paleontology",
"url": null
} |
performance, programming-challenge, ruby, time-limit-exceeded
But even better, Ruby has a negating form of "if" called "unless". Idiomatic Ruby would be:
unless w.disc
Getting the first element of an array or enumeration
root = nodes[0]
This is fine, but this also works:
root = nodes.first
Ruby likes to give you multiple ways to do a thing, a philosophical shift from Python.
Redundant #to_s
temp = gets.strip.to_s
The result of String#strip is a String, so there's no need for .to_s here.
Use Array instead of Queue (for this application)
q = Queue.new
Queue is intended for multi-threaded applications; it includes locking to prevent race conditions. For a single-threaded application, you can and should just use Array. The minor reason is performance (avoid the locking). The major reason is communication: A Ruby programmer seeing Queue is going to start looking for the threading in your program.
Since the methods you are calling on q (<< and pop) are also implemented on Array and have the same meaning, you can do this instead:
q = []
Create empty arrays with []
nodes = Array.new
There's nothing wrong with this, but you will more often see a new, empty array created using the array literal syntax:
nodes = [] | {
"domain": "codereview.stackexchange",
"id": 40549,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, programming-challenge, ruby, time-limit-exceeded",
"url": null
} |
2)}{432}.$$ Note \begin{eqnarray*} \frac{\alpha^3}{(1+\alpha^2)(1-\alpha^3)} &=&-\frac{1}{2}\frac{1+\alpha}{1+\alpha^2}+\frac12\frac{1+\alpha-\alpha^2}{1-\alpha^3} \end{eqnarray*} and hence \begin{eqnarray*} I_2&=&\int_0^1\frac{\alpha^3\ln\alpha}{(1-\alpha)(\alpha^2+1)(\alpha^2+\alpha+1)}d\alpha\\ &=&-\frac{1}{2}\int_0^1\frac{(1+\alpha)\ln\alpha}{1+\alpha^2}d\alpha+\frac{1}{2}\int_0^1\frac{(1+\alpha-\alpha^2)\ln\alpha}{1-\alpha^3}d\alpha \end{eqnarray*} But \begin{eqnarray*} \int_0^1\frac{(1+\alpha)\ln\alpha}{1+\alpha^2}d\alpha&=&\int_0^1\frac{\ln\alpha}{1+\alpha^2}d\alpha+\int_0^1\frac{\alpha\ln\alpha}{1+\alpha^2}d\alpha\\ &=&\sum_{n=0}^\infty\int_0^1(-\alpha^2)^n\ln\alpha d\alpha+\sum_{n=0}^\infty\int_0^1\alpha(-\alpha^2)^n\ln\alpha d\alpha\\ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464471055739,
"lm_q1q2_score": 0.8235964251520956,
"lm_q2_score": 0.8438951005915208,
"openwebmath_perplexity": 3739.340066272802,
"openwebmath_score": 0.9999449253082275,
"tags": null,
"url": "https://math.stackexchange.com/questions/698062/need-help-with-int-0-infty-frac-log1x-left1x2-right-left1x3-ri"
} |
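The partial-fraction identity the derivation above starts from can be verified symbolically; a short sympy check (nothing here beyond the identity itself):

```python
import sympy as sp

# Verify:  a^3 / ((1+a^2)(1-a^3))
#        = -(1/2)(1+a)/(1+a^2) + (1/2)(1+a-a^2)/(1-a^3)
a = sp.symbols('alpha')

lhs = a**3 / ((1 + a**2) * (1 - a**3))
rhs = (-sp.Rational(1, 2) * (1 + a) / (1 + a**2)
       + sp.Rational(1, 2) * (1 + a - a**2) / (1 - a**3))

print(sp.simplify(lhs - rhs))  # 0, so the decomposition is correct
```

Clearing denominators by hand gives the same result: $-(1+\alpha)(1-\alpha^3) + (1+\alpha-\alpha^2)(1+\alpha^2) = 2\alpha^3$, and half of that is the numerator on the left.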
newtonian-mechanics, kinematics, projectile
Title: When I kick a ball at velocity Vo, angle alpha, from the ground... (no air resistance) Is the force acting on the ball equal to F = (Fx, Fy) = (0, -mg)?
I mean, I understand that I have the mg force on the y-axis, pointing down, and I understand that there is no additional force on the x-axis, but shouldn't I also have a force (both on the x-axis and the y-axis) that comes from the fact that I kicked the ball?
I mean shouldn't it be something like F = (Fx, Fy) = (ma(x)+0, -mg + ma(y))?
Thanks. The usual assumption is that the kick changes the initial condition of the ball. Your foot made contact at $t=0$, and exerted a large force $F$ during a short time interval $\Delta t$. If we assume the force exerted was constant, then at the time the ball leaves the foot, it has momentum $p = mv = F\Delta t$, and it has moved a very short distance $d = \frac12 a \Delta t^2 = \frac{F \Delta t^2}{2 m} = \frac{v\Delta t}{2}$; the shorter $\Delta t$, the less far the ball moved during the kick (for the same velocity).
If you ignore that small amount of motion, then you can say the problem starts when the foot stops making contact with the ball; and you start the calculation with the ball at a certain velocity $v$, only subject to the force of gravity. | {
"domain": "physics.stackexchange",
"id": 28416,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, kinematics, projectile",
"url": null
} |
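Putting illustrative numbers on the impulse argument above (the mass and launch speed below are made up, roughly football-sized, not from the question):

```python
# A constant force F over a short contact time dt gives the ball momentum
# m*v = F*dt, while the distance covered during contact, d = v*dt/2,
# shrinks linearly with dt.
m = 0.45   # kg, roughly a football; illustrative only
v = 20.0   # m/s, desired launch speed

for dt in (1e-2, 1e-3, 1e-4):
    F = m * v / dt     # force needed to reach v within contact time dt
    d = v * dt / 2     # distance the ball moves during the kick
    print(f"dt={dt:g} s  F={F:.0f} N  d={d:.4f} m")

# Same launch speed every time, but d -> 0 as dt -> 0: the kick is well
# modelled as an instantaneous change of the initial conditions.
```

This is why the projectile problem starts at the moment the foot loses contact, with only gravity in the force vector.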
photons, quantum-electrodynamics, wavelength, superposition
Title: What would happen if 2 different photons overlap each other while travelling in the same direction? Imagine 2 photons with the same wavelength but from different sources overlapping each other. Since they don't interact with each other, I would like to know if there is any change to their wavelength versus travelling solo. Consider only low energies for simplicity. I was thinking they become brighter, but that can only work if they travel side by side, so the only conclusion is that their momenta add up and there is no change in frequency, right? Your reasoning is correct. As you stated, the two photons will not interact with each other. If overlapping changed their frequencies it would constitute an interaction, and so there is no frequency change. The momentum of the two photons is just the sum of their individual momenta, and the energy of the two is just the sum of their individual energies.
The above is true in a vacuum. However, in a material medium photons can interact in various ways, including changing frequencies, exchanging momenta and energy, and so on. | {
"domain": "physics.stackexchange",
"id": 63591,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "photons, quantum-electrodynamics, wavelength, superposition",
"url": null
} |
5) $\qquad \qquad \qquad La$ ******CPA
6) $\qquad \qquad \qquad Ia$ Simp. 3
7) $\qquad \qquad La \rightarrow Ia$ CP 5-6
8) $\qquad \qquad (Wa \land Sa) \rightarrow \neg La$ MP 4,7
9) $\qquad \qquad \qquad \forall y (Wy \rightarrow Sy)$ CPA
10) $\qquad \qquad \qquad(Wa \rightarrow Sa)$ UI 9
11) $\qquad \qquad \qquad Wa$ Simp. 3
12) $\qquad \qquad \qquad Sa$ MP 10,11
13) $\qquad \qquad \qquad Wa \land Sa$ Conj. 11,12
14) $\qquad \qquad \qquad \neg La$ MP 8,13
15) $\qquad \qquad \forall y (Wy \rightarrow Sy) \rightarrow \neg La$ CP 9-14
16) $\qquad (Wa \land Ia) \rightarrow (\forall y (Wy \rightarrow Sy) \rightarrow \neg La)$ CP 3-15
17) $\forall x (Wx \land Ix) \rightarrow (\forall y (Wy \rightarrow Sy) \rightarrow \neg Lx)$ UG 2-16 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812309063187,
"lm_q1q2_score": 0.8360922317225671,
"lm_q2_score": 0.8633916117313211,
"openwebmath_perplexity": 1496.8116325146198,
"openwebmath_score": 0.5448061227798462,
"tags": null,
"url": "https://math.stackexchange.com/questions/2048459/logic-proof-predicate-calculus"
} |
$-1$. And of course in the real symmetric case $\gamma = 1$.
Dec 13, 2017 ### Pushoam
I missed writing the squares of the elements. The corrected one: Then, for the anti-symmetric matrix, $\mathrm{tr}(A^2) = \mathrm{tr}(-AA^T) = -A_{ij}A_{ij}$ = negative of the sum of the squares of the elements of the matrix. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9740426458162397,
"lm_q1q2_score": 0.8610919669490874,
"lm_q2_score": 0.8840392878563335,
"openwebmath_perplexity": 3132.3418716038977,
"openwebmath_score": 0.9799693822860718,
"tags": null,
"url": "https://www.physicsforums.com/threads/calculating-eigenvalues-help.934324/"
} |
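The corrected identity from the thread is easy to sanity-check numerically; a small numpy sketch:

```python
import numpy as np

# For a real antisymmetric matrix A (A^T = -A):
#   tr(A^2) = tr(-A A^T) = -sum_ij A_ij^2  <= 0
rng = np.random.default_rng(42)
M = rng.standard_normal((4, 4))
A = M - M.T                      # antisymmetric by construction

lhs = np.trace(A @ A)
rhs = -np.sum(A**2)
print(lhs, rhs)                  # equal up to floating-point error
```

The non-positive trace is consistent with the eigenvalues of a real antisymmetric matrix being purely imaginary, so their squares are real and non-positive.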
quantum-mechanics, path-integral, quantization
Title: Uniqueness in the path integral vs canonical quantisation In quantum mechanics it is well known that if you have a Lagrangian $\mathcal{L}$ and you want to quantise it, there is no unique way of doing this. This is because when you construct the Hamiltonian $\mathcal{H}$ and try to promote observables to operators, canonical momenta and position don't commute.
However the other way of doing quantum mechanics is in terms of path integrals. If one has a classical Lagrangian $\mathcal{L}$ then you can write down the propagator as $\int \mathcal{D}[q] e^{i S[q]}$ where $S[q] = \int \mathcal{L} dt$ is the action. It would therefore seem like $\mathcal{L}$ uniquely specifies the appropriate quantum theory, at least for any measurement that only involves propagators.
So my question is: how can one have non-uniqueness for canonical quantisation but apparent uniqueness for path integrals if the two formulations are supposed to be equivalent? Is it that the propagators do not fully determine the theory? Does this then mean that there are some quantities that cannot, even in principle, be calculated with path integrals?
In quantum mechanics it is well known that if you have a Lagrangian L and you want to quantise it, there is no unique way of doing this.
This is correct, however note that you also have the constraint that H needs to be Hermitian.
However the other way of doing quantum mechanics is in terms of path integrals. If one has a classical Lagrangian L then you can write down the propagator as $∫D[q]e^{iS[q]}$ where $S[q]=∫Ldt$ is the action. It would therefore seem like L uniquely specifies the appropriate quantum theory, at least for any measurement that only involves propagators. | {
"domain": "physics.stackexchange",
"id": 58223,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, path-integral, quantization",
"url": null
} |
energy, physical-chemistry, work, biology
Title: Can endergonic reactions occur outside of living organisms? If the Gibbs free energy equation is defined as:
∆G = ∆H - T∆S
And the amount of energy/work released from a reaction is:
-∆G = w_max
Living organisms use exergonic reactions to metabolize fuel where the Gibbs free energy is negative and they (can) occur spontaneously in non-living structures.
Exergonic: ∆G<0
Also in living organisms the energy that is released from the exergonic reactions can be coupled with the endergonic reactions. These do not occur unless work is applied to the reagents.
Endergonic: ∆G>0
Given that, is it possible for an endergonic reaction to happen anywhere outside of a living creature or man-made creation? The Gibbs free energy is not defined as ∆G = ∆H - T∆S. That holds only at constant temperature. It can be defined as $\Delta G = \Delta H - \Delta (TS)$. For a reaction to be spontaneous at constant temperature and pressure, $\Delta G$ should be negative. However, all reactions go to a certain extent, even if only microscopically. Basically a $\Delta G$ of zero means that the equilibrium constant for that reaction is 1. A negative $\Delta G$ means that the equilibrium constant is greater than one, while a positive $\Delta G$ means that the equilibrium constant is less than one. This isn't magic: the relationship is $\Delta G = - RT\ln K$, where $R$ is the gas constant, $T$ the temperature, and $K$ the equilibrium constant.
One way to have a reaction take place to a significant extent with $\Delta G > 0$ is to couple it to another reaction with a large negative $\Delta G$. This is most often done by having one component of the desired reaction also be a component of the "driving" reaction. And this can be done without applying work to the reactants. | {
"domain": "physics.stackexchange",
"id": 5912,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "energy, physical-chemistry, work, biology",
"url": null
} |
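The $\Delta G = -RT\ln K$ relationship and the coupling argument can be put in numbers; a sketch with illustrative free-energy values (the +30 and −50 kJ/mol figures are made up for the example):

```python
import math

R = 8.314    # gas constant, J/(mol K)
T = 298.15   # K, room temperature

def equilibrium_constant(delta_g):
    """K from Delta G via  Delta G = -R T ln K  =>  K = exp(-dG / RT)."""
    return math.exp(-delta_g / (R * T))

# An endergonic step (+30 kJ/mol) alone has K far below 1 ...
print(equilibrium_constant(30e3))

# ... but coupled to a driving step (-50 kJ/mol), the Delta G values add
# and the combined equilibrium constant is large:
print(equilibrium_constant(30e3 - 50e3))
```

Free energies of coupled steps add, so equilibrium constants multiply; that is all the "coupling" amounts to thermodynamically.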
javascript, algorithm, node.js, combinatorics
/* Some complicated code that adds to result */
...
}
return cache[n] = result;
}
}
Complexity, Performance, & Example
TL;DR
The next part of the answer addresses performance and complexity and how both can be improved with example function.
As the example is a completely different approach, it is not considered a review (rewrite); however, some of it can be used in your solution.
Complexity
Your complexity is in the sub-exponential range \$O(n^{m log(n)})\$ where \$m\$ is some value >= 2. This is rather bad. The example reduces complexity by reducing the value of \$m\$.
Performance
Performance is indirectly related to complexity. You can increase performance without changing the complexity. The gain is achieved by using more efficient code, rather than a more efficient algorithm.
Example
The example is a completely different algorithm but some of the techniques can be applied to your solutions, such as the cache and moving the check for found combinations out of the recursing function.
Addressing complexity
I could not modify your algorithm to improve the complexity. This is not due to there not being a less complex algorithm based on your approach, just that I was unable to find one.
Addressing performance
There is a lot of room to improve performance via caching, strings, sorts, and stuff.
Cache
The example uses a cache to reduce calculations. See above Tips regarding cache.
Note the cache is set up to contain the result of n 0 to 2 which is equivalent to your first 3 if statements. | {
"domain": "codereview.stackexchange",
"id": 41007,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, algorithm, node.js, combinatorics",
"url": null
} |
powershell
[Parameter (
Position = 1,
# making this mandatory makes little sense when a default is supplied
#Mandatory,
# does it make any sense to use the following attribute?
ValueFromPipelineByPropertyName
)]
# don't use "First" and "Last" for _position_ info when the same words are used for the name parts
# it's needlessly confusing
#[ValidateSet ('First', 'Last')]
[ValidateSet ('Start', 'End')]
[string]
$LastNamePosition = 'Start'
)
begin {}
process
{
# properly supporting the pipeline requires one to have a "process" block.
# otherwise all code is run in a virtual "end" block ... and that does not correctly support pipeline input
# mixing camelCase and PascalCase for variable names is confusing [*grin*]
# the recommended style for PoSh is PascalCase
#$usernameComponents = $FullName.ToLower().Split(" ")
# the 2nd sample name has a trailing space
# the ".Trim()" removes that
# good practice in PoSh is to avoid using double quotes since that can trigger unwanted expansion of $Vars
$UserNameComponents = $FullName.Trim().ToLower().Split(' ')
switch ($LastNamePosition)
{
'Start' {
# "index -1, index 0" for the $UserNameComponents skips any middle name or initial
$UserName = '{0}.{1}' -f $UserNameComponents[-1], $UserNameComponents[0]
}
'End' {
$UserName = '{0}.{1}' -f $UserNameComponents[0], $UserNameComponents[-1]
}
# this is a binary choice, so the "default" is not needed
#Default { $firstName = $UserNameComponents.Length -1; $lastName = 0 }
} | {
"domain": "codereview.stackexchange",
"id": 39049,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "powershell",
"url": null
} |
java, optimization, algorithm, recursion, tree
HashSet<Node> nodeList2 = new HashSet<Node>();
nodeList1 = find(node1, nodesList1); // node1 = K
nodeList2 = find(node2, nodesList2); // node2 = H
Iterator iterator = nodeList1.iterator();
Node minimalNode = null;
while (iterator.hasNext()){
Node node = (Node) iterator.next();
if(nodeList2.contains(node)){
if (minimalNode == null || node.compareTo(minimalNode) < 0) {
minimalNode = node;
}
}
}
return minimalNode; | {
"domain": "codereview.stackexchange",
"id": 6216,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, optimization, algorithm, recursion, tree",
"url": null
} |
excel
=IF(OR(ISBLANK(X2),ISBLANK(Y2)),0,X2/VLOOKUP(YEAR(Y2),$EU$3:$FH$57,14,FALSE))*$FH$57+IF(OR(ISBLANK(AE2),ISBLANK(AF2)),0,AE2/VLOOKUP(YEAR(AF2),$EU$3:$FH$57,14,FALSE))*$FH$57+IF(OR(ISBLANK(AL2),ISBLANK(AM2)),0,AL2/VLOOKUP(YEAR(AM2),$EU$3:$FH$57,14,FALSE))*$FH$57+IF(OR(ISBLANK(AS2),ISBLANK(AT2)),0,AS2/VLOOKUP(YEAR(AT2),$EU$3:$FH$57,14,FALSE))*$FH$57+IF(OR(ISBLANK(AZ2),ISBLANK(BA2)),0,AZ2/VLOOKUP(YEAR(BA2),$EU$3:$FH$57,14,FALSE))*$FH$57+IF(OR(ISBLANK(BG2),ISBLANK(BH2)),0,BG2/VLOOKUP(YEAR(BH2),$EU$3: | {
"domain": "codereview.stackexchange",
"id": 12137,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "excel",
"url": null
} |
cell-division
Title: Why doesn't cellular, replicative senescence (or the hayflick limit) constrain the normal development of an organism? The wikipedia article on cellular senescence states:
Cellular senescence is the phenomenon by which normal diploid cells cease to divide. In culture, fibroblasts can reach a maximum of 50 cell divisions before becoming senescent. This phenomenon is known as "replicative senescence", or the Hayflick limit. | {
"domain": "biology.stackexchange",
"id": 6433,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cell-division",
"url": null
} |
electromagnetism, homework-and-exercises, special-relativity
and
$$\frac{du^i}{d\tau}=\frac{e}{mc}F^{i\beta}u_\beta=\frac{e}{mc}(F^{i0}u_0 -F^{ij}u_j)=\frac{e}{mc}(\gamma c)\vec{E}$$
since $\vec{B}$ is zero in this frame.
Now I write all the velocities as parallel and perpendicular to the electric field and define $\omega_E=\frac{eE}{mc}$
$$\frac{d(\gamma c)}{d\tau}=\omega_E (\gamma v_{||})$$
$$\frac{d(\gamma v_{||})}{d\tau}=\omega_E (\gamma c)$$
$$\frac{d(\gamma v_{\perp})}{d\tau}=0$$
Then differentiating the second equation by $d/d\tau$ we get
$$\frac{d^2(\gamma v_{||})}{d\tau^2}=\omega_E \frac{d(\gamma c)}{d\tau}=\omega_{E}^2(\gamma v_{||})$$
solutions to this are $(\gamma v_{||})=A\sinh(\omega_E \tau)+B\cosh(\omega_E \tau)$ which implies $(\gamma c)=A\cosh(\omega_E \tau)+B\sinh(\omega_E \tau)$ and $\gamma v_{\perp}=\text{const.}$. Now at $\tau=0$ we know that $v_{\perp}=v_0$ and $v_{||}=0$. Let | {
"domain": "physics.stackexchange",
"id": 2901,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, homework-and-exercises, special-relativity",
"url": null
} |
javascript, jquery, ecmascript-6, html5
let openID = null;
var target;
var imageUrl = "https://i.imgur.com/RzEm1WK.png";
let jsonData = {
"layers" : [
{
"x" : 0,
"height" : 600,
"layers" : [
{
"x" : 20,
"height" : 788,
"src" : "ax0HVTs.png",
"y" : 70,
"width" : 650,
"type" : "image",
"name" : "shape_1"
},
{
"x" : 40,
"height" : 140,
"src" : "iEA642D.jpg",
"y" : 360,
"width" : 430,
"type" : "image",
"name" : "shape_2"
},
{
"font" : "ArchivoNarrow-Bold",
"x" : 19,
"y" : 17,
"width" : 211,
"src" : "611aa93612da8fde1b17d87368355d1f_Font83.otf",
"type" : "text",
"text" : "VALENTINES DAY ",
"size" : 27,
"height" : 22,
"name" : "edit_sale"
}
],
"y" : 0,
"width" : 600,
"type" : "group",
"name" : "fb_post_5"
}
]
};
$(document).ready(function() {
// below code will upload image onclick mask image
$('.container').click(function(e) {
// filtering out non-canvas clicks
if (e.target.tagName !== 'CANVAS') return;
// getting absolute points relative to container
const absX = e.offsetX + e.target.parentNode.offsetLeft + e.currentTarget.offsetLeft;
const absY = e.offsetY + e.target.parentNode.offsetTop + e.currentTarget.offsetTop; | {
"domain": "codereview.stackexchange",
"id": 34119,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, ecmascript-6, html5",
"url": null
} |
Any automorphism $\varphi$ of $\mbox{Gal}(L/\Q)$ will be determined by the pair $\left(\varphi(\sqrt 2),\varphi(\sqrt 3)\right)$ so by permuting roots in the defining polynomials we obtain the following table $$\begin{array}{c|c:c:c:c} \varphi&\ 1\ &\ \varphi_1\ &\ \varphi_2\ &\ \varphi_3\ \\ \hline \varphi(\sqrt 2)&\sqrt 2&-\sqrt 2&-\sqrt 2&\sqrt 2\\ \varphi(\sqrt 3)&\sqrt 3&-\sqrt 3&\sqrt 3&-\sqrt 3 \end{array}$$ so the Galois group is $\{1,\varphi_1,\varphi_2,\varphi_3\}$. The table also shows that $\sqrt 2+\sqrt 3$ maps to four distinct elements so that this must be a primitive element of the extension $L\supset\Q$. It is immediate from the table that $\varphi_i^2=1$ for $i=1,2,3$ so we have $$\mbox{Gal}(L/\Q)\simeq\newcommand{\Z}{\mathbb Z}\Z_2\times\Z_2$$ The subgroups of $\mbox{Gal}(L/\Q)$ appears to be like in this (primitive) diagram $$\begin{matrix} &&\{1\}&&\\ &\swarrow&\downarrow&\searrow\\ \langle \varphi_1\rangle&&\langle\varphi_2\rangle&&\langle\varphi_3\rangle\\ &\searrow&\downarrow&\swarrow\\ &&\mbox{Gal}(L/\Q)&& \end{matrix}$$ and based on that I have found that the intermediate fields are as | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918529698321,
"lm_q1q2_score": 0.817197875303099,
"lm_q2_score": 0.8267117962054049,
"openwebmath_perplexity": 190.22740751905314,
"openwebmath_score": 0.7341055274009705,
"tags": null,
"url": "https://math.stackexchange.com/questions/644131/systematically-describing-the-galois-group-and-intermediate-fields"
} |
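The primitive-element claim above can be double-checked computationally: if $\sqrt 2+\sqrt 3$ generates the degree-4 extension $L=\Q(\sqrt2,\sqrt3)$, its minimal polynomial over $\Q$ must have degree 4. A sympy sketch:

```python
import sympy as sp

# Minimal polynomial of sqrt(2)+sqrt(3) over Q.
x = sp.symbols('x')
p = sp.minimal_polynomial(sp.sqrt(2) + sp.sqrt(3), x)

print(p)              # x**4 - 10*x**2 + 1
print(sp.degree(p, x))  # 4 = [L:Q], so sqrt(2)+sqrt(3) is primitive
```

By hand: $(x^2-5)^2 = (2\sqrt6)^2 = 24$ for $x=\sqrt2+\sqrt3$, which expands to $x^4-10x^2+1=0$, matching the computed polynomial.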
newtonian-mechanics, forces, friction, velocity, drag
Find the general air resistance coefficient, $\beta$, from $P/v = \beta v^2$ $$\beta = \frac{P}{v^3} = \frac{216000}{60^3} = 1$$
Build the mathematical model for acceleration $a$ and engine power provided $P$ $$ a = \frac{1}{m} \left( \frac{P}{v} - \beta\,v^2 \right)$$ and note that the power provided is a function of speed also. You cannot provide full power at zero speed. Based on the gearing or electric motor characteristics, peak power is available only at a few speeds.
You can build a table of aerodynamic resistance at various speeds now:

| Speed [mph] | Speed [m/s] | Air Resistance [N] |
|---|---|---|
| 10 | 4.48 | 20.1 |
| 40 | 17.9 | 321.2 |
| 60 | 26.9 | 722.6 |
| 80 | 35.8 | 1284.6 |
| 100 | 44.8 | 2007.3 |
| 120 | 53.8 | 2890.5 |
In summary, aerodynamic resistance changes a lot with speed. | {
"domain": "physics.stackexchange",
"id": 78733,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, friction, velocity, drag",
"url": null
} |
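The table can be reproduced directly from the answer's model $P/v=\beta v^2$. A minimal sketch, assuming the usual mph-to-m/s factor of 0.44704 (printed values agree with the table up to rounding of the m/s column):

```python
# With P = 216 kW delivered at v = 60 m/s, beta = P / v**3 = 1,
# and aerodynamic resistance = beta * v**2 in this model.
P, v_top = 216_000.0, 60.0      # W, m/s
beta = P / v_top**3             # coefficient in P/v = beta * v^2
assert beta == 1.0

MPH_TO_MS = 0.44704
for mph in (10, 40, 60, 80, 100, 120):
    v = mph * MPH_TO_MS
    resistance = beta * v * v   # N, grows with the square of speed
    print(f"{mph:>3} mph  {v:6.2f} m/s  {resistance:8.1f} N")
```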
fft, dft, real-time
Title: Why are my frequency bins oscillating? I am working on a personal project that maps bass notes to colors in real-time. However, I'm encountering some issues with oscillations in my frequency bins.
I visualized my frequency bins to illustrate the issue:
30 Hz tone
45 Hz tone
75 Hz tone
Here is an outline of my signal processing chain:
Capture 2205 samples from my input device at a sample rate of 44.1 kHz (allowing me to identify frequencies down to 20 Hz)
Apply the Hann function to the samples
Calculate frequency bins from 20 Hz to 150 Hz in 1 Hz increments using the Goertzel algorithm
It seems like the oscillations occur at all frequencies, but become more noticeable the lower I go. I assume this is because given some period, the lower the frequency the fewer the cycles captured in that period. I had expected that to result in more leakage, but not necessarily oscillating values. Increasing the sampling period helps, but necessarily increases perceived latency (because I'm using samples further back in time).
Does my assumption about the sampling period seem correct? Are there ways to mitigate this oscillation without increasing my sampling period significantly?
30 Hz tone graphed from real data: This is indeed spectral leakage and specifically the dependency of the leakage on phase. Here is an example of a 30 Hz sine with a random phase. These are 50 frames, each with a randomized phase.
The "flickering" is easy to see and it's also easy to see why it happens. At 2205 samples the window will include 1.5 periods at 30 Hz. For a phase of $\varphi = 0$ the extra half period will be entirely positive so you will see a very significant DC component. For a phase of $\varphi = -\pi/2$ the extra half period will be symmetric, so the DC component will be 0.
The easy fix is to simply make your frames larger. This way the "extra" fractional periods become a much smaller part of the overall signal and you get better frequency resolution as well since the main lobe of the Hann window becomes much narrower.
Below is the same example but with 4 times the frame size | {
"domain": "dsp.stackexchange",
"id": 12515,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, dft, real-time",
"url": null
} |
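The phase dependence the answer describes is easy to reproduce numerically. A minimal sketch in plain Python (no external libraries): it measures the Hann-windowed DC leakage of a 30 Hz tone at phases $0$ and $-\pi/2$, for the original 2205-sample frame and for a frame four times larger.

```python
import math

def windowed_dc(freq, phase, n, fs=44100.0):
    """|DC component| of a Hann-windowed sine frame (the leakage in question)."""
    acc = 0.0
    for i in range(n):
        w = 0.5 - 0.5 * math.cos(2.0 * math.pi * i / (n - 1))  # Hann window
        acc += w * math.sin(2.0 * math.pi * freq * i / fs + phase)
    return abs(acc) / n

# 2205 samples hold 1.5 periods of a 30 Hz tone, so the DC leakage
# swings between phase 0 (extra half period all positive) and -pi/2.
short = [windowed_dc(30.0, p, 2205) for p in (0.0, -math.pi / 2)]
# With 4x the frame size the fractional period is a small part of the frame.
long_ = [windowed_dc(30.0, p, 4 * 2205) for p in (0.0, -math.pi / 2)]

swing_short = abs(short[0] - short[1])
swing_long = abs(long_[0] - long_[1])
assert swing_long < swing_short / 4  # much less flicker with longer frames
```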
programming-languages
Less formally, you can build many examples as follows. Take $f : \mathbb N \to \mathbb N$ to be a function which does NOT in general satisfy
$$
f(n)=f(m) \implies f(n+1)=f(m+1)
$$
Say, $f$ is a hash function.
Then, we can say $P\equiv Q$ whenever $f(\# vars(P))=f(\# vars(Q))$, where $\#vars$ counts the number of variables.
This is an equivalence (trivially), but not a congruence since if we add another fresh variable to both $P,Q$ we increment their variable count by one, but $f$ does not preserve that value in general. | {
"domain": "cs.stackexchange",
"id": 12384,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming-languages",
"url": null
} |
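A concrete sketch of the construction, using $f(n)=n^2 \bmod 7$ as the "hash" that fails to respect successors:

```python
# f(n) = n^2 mod 7 does NOT satisfy f(n) = f(m) => f(n+1) = f(m+1).
def f(n):
    return n * n % 7

# Programs with 1 and 6 variables are related under P ≡ Q iff
# f(#vars(P)) == f(#vars(Q)) ...
assert f(1) == f(6)
# ...but adding one fresh variable to each program breaks the relation,
# so this equivalence is not a congruence.
assert f(1 + 1) != f(6 + 1)
```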
inorganic-chemistry, acid-base, everyday-chemistry
$$\ce{H2O + CO2(aq) <=> H2CO3}$$
and the protolysis of true $\ce{H2CO3}$
$$\ce{H2CO3 <=> H+ + HCO3-}$$
For a weak acid
$$\begin{align}
\log[\ce{H+}]&\approx\frac12\left(\log K_\mathrm a+\log[\ce{H2CO3^*}]\right)\\
&=\frac12\left(-6.3-5.0\right)\\
&=-5.65\\
\mathrm{pH}&=5.65
\end{align}$$
Thus, pure rain in equilibrium with the atmosphere has about $\mathrm{pH}=5.65$. Any acid rain with lower $\mathrm{pH}$ would be caused by additional acids. | {
"domain": "chemistry.stackexchange",
"id": 12339,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, acid-base, everyday-chemistry",
"url": null
} |
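The arithmetic in the weak-acid approximation can be checked in a couple of lines:

```python
# The answer's estimate: log[H+] ≈ (log Ka + log[H2CO3*]) / 2.
log_Ka = -6.3          # log Ka of H2CO3* (CO2(aq) + H2CO3)
log_C = -5.0           # log of the dissolved CO2 concentration
pH = -0.5 * (log_Ka + log_C)
assert abs(pH - 5.65) < 1e-9
print(f"pH of pure rain in equilibrium with the atmosphere ≈ {pH:.2f}")
```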
c#, state
Title: Cooking steak with State Pattern and without IFs I've been playing around a bit with a case of State Pattern but including as well ranges in order to get into a specific state.
So the definition is as simple as this:
There is a Steak that has a state based on its temperature, the temperature can be modified, which changes the state.
A Steak has 6 cooking states based on its temperature:
Raw (temp <= 100)
Rare (temp > 100 && temp <= 110)
MediumRare (temp > 110 && temp <= 120)
Medium (temp > 120 && temp <= 130)
MediumWell (temp > 130 && temp <= 140)
WellDone (temp > 140)
I am stuck trying to refactor the rules that dictate if the state must change or not. I'm aiming for a fully object-oriented solution, avoiding if branching conditions (if it's possible)
I've got to a solution, without using if branching conditions, but feels a bit complicated and overengineered...
Consuming code:
static void Main(string[] args)
{
var steakRules = new SteakRules();
steakRules.Add(new RawRule());
steakRules.Add(new RareRule());
steakRules.Add(new MediumRareRule());
steakRules.Add(new MediumRule());
steakRules.Add(new MediumWellRule());
steakRules.Add(new WellDoneRule());
var steak = new Steak(steakRules); //Temp 0. This steak is Raw and cannot be eaten.
steak.AddTemperature(50); //Temp 50. This steak is Raw and cannot be eaten.
steak.AddTemperature(55); //Temp 105. This steak is Rare and can be eaten.
steak.AddTemperature(20); //Temp 125. This steak is Medium and can be eaten.
steak.AddTemperature(40); //Temp 165. This steak is Well Done and can be eaten.
} | {
"domain": "codereview.stackexchange",
"id": 34237,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, state",
"url": null
} |
A transparent surface plot of this boundary is shown in the following figure. The points on this boundary correspond to positive semidefinite matrices while the interior of this set is $\mathbb{S}_{++}^{2}$ (this is because the leading principal minor condition for strict positive definiteness gives $xy>z^{2}$ with positive $x,y$). The vertex of this set, shown as a red dot below, is the $2\times 2$ zero matrix. We learnt in Lec. 5, p. 3-4, that this set is, in fact, a convex cone, which confirms the visual intuition gleaned from the plot below. A sample MATLAB code $\texttt{VisualizeSPSD.m}$ to generate the above figure is available in the CANVAS Files section.
# Problem 2 [5 x 6 = 30 points] Vector unit norm balls¶
For any fixed $p$ satisfying $0\leq p \leq \infty$, the vector unit $p$-norm ball is a set
$$\{x\in\mathbb{R}^{n} : \|x\|_{p} \leq 1\} \subset \mathbb{R}^{n}.$$
Clearly, the above set is centered at the origin. For the definition of the vector $p$-norm, see Lec. 3, p. 5.
The following plot shows the two dimensional $p$-norm balls for $p\in\{0.5,1,1.5,2,3.5,\infty\}$ (from left to right, top to bottom).
Use your favorite programs such as MATLAB/Python/Julia to plot the three dimensional $p$-norm balls for the same $p$ as above. Insert the plot in the notebook. Submit your code in the zip file so that we can reproduce your plot.
## Solution for 2:¶
The desired plot is shown below. The Python code $\texttt{Plot3DUnitNormBall.py}$ to generate the above plot is available in the CANVAS Files section.
# Problem 3 [35 points] Schatten $p$-norm of a matrix¶ | {
"domain": "nbviewer.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9777138157595305,
"lm_q1q2_score": 0.8126200387188753,
"lm_q2_score": 0.8311430457670241,
"openwebmath_perplexity": 546.3953806218766,
"openwebmath_score": 0.9689300060272217,
"tags": null,
"url": "https://nbviewer.org/github/abhishekhalder/AM229F20/blob/main/HW1%20Solution.ipynb"
} |
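A small sketch (an illustration added here, not part of the original solution) of why the $p=0.5$ ball in such plots looks "pinched": for $p<1$ the quantity $\|x\|_p$ from the definition above fails the triangle inequality, and the unit ball is non-convex.

```python
def p_norm(x, p):
    # (sum |x_i|^p)^(1/p); for p < 1 this is not a true norm.
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

p = 0.5
a, b = (1.0, 0.0), (0.0, 1.0)   # both lie on the boundary of the p = 1/2 ball
mid = (0.5, 0.5)                # their midpoint

assert abs(p_norm(a, p) - 1.0) < 1e-12
assert abs(p_norm(b, p) - 1.0) < 1e-12
assert p_norm(mid, p) > 1.0     # midpoint is outside: the p = 1/2 ball is non-convex
```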
organic-chemistry, intermolecular-forces, boiling-point
Title: Why is the boiling point of ethyl fluoride lower than that of hydrogen fluoride? The book, Solomons' Organic Chemistry (for JEE Mains and Advance), contains the following question:
Hydrogen fluoride has a dipole moment of $\pu{1.82 D}$; its boiling point is $\pu{19.34 ^{\circ} C}$. Ethyl fluoride ($\ce{CH3CH2F}$) has an almost identical dipole moment and has a larger molecular weight, yet its boiling point is $\pu{-37.7 ^{\circ} C}$. Explain.
Now while explaining the (very) concept of boiling point and why it differs for different substances the book considers:
Dipole-dipole forces
London dispersion forces
Molecular weight
But when I tried to explain the phenomenon stated in the question, I found it hard to explain the lower boiling point. Following is my approach on the explanation:
Since both have the same (similar) dipole moments, there would be a very negligible difference in their boiling points.
London dispersion forces increase in magnitude as the surface area of the molecule increases. So contrary to what was said $\ce{CH3CH2F}$ should have stronger dispersion force interaction (due to larger surface area) and hence higher boiling point as compared to $\ce{HF}$ (given other properties were same).
Given that $\ce{CH3CH2F}$ has higher mass than $\ce{HF}$, therefore it should lead to higher boiling point (as is stated in the question itself).
Now it can be clearly seen that the above stated arguments are against the phenomenon, so I believe that I am missing something in my explanation.
So | {
"domain": "chemistry.stackexchange",
"id": 13450,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, intermolecular-forces, boiling-point",
"url": null
} |
For payoff 2, there are this many ways to toss a HH (or HHH, HHHH and HHHHH). There are {HHHHH, THHHH, HHHHT, THHHT, HTHHH, HHHTH, HHHTT, TTHHH, HHTHH, HHTTT, THHTT, TTHHT, TTTHH, THTHH, THHTH}. Expected payoff 2 = (1/32)·$4 + (2/32)·$3 + (5/32)·$2 + (7/32)·$1 = $27/32. Payoff 2 is better, I guess LOL @Calvin Lin. Total possible outcomes of 32, of which there is one combination [TTTTT] which has no payoff. - 4 years, 7 months ago Check your calculations. The expected payoff of both scenarios is $1. | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9632305318133554,
"lm_q1q2_score": 0.8515336201430932,
"lm_q2_score": 0.8840392741081575,
"openwebmath_perplexity": 1207.6052515524545,
"openwebmath_score": 0.9509485363960266,
"tags": null,
"url": "https://brilliant.org/discussions/thread/which-payoff-do-you-want-to-go-for-2/"
} |
1 = 1
4 = 2^2
6 = 2 * 3
9 = 3^2
13 = 13
16 = 2^4
22 = 2 * 11
24 = 2^3 * 3
25 = 5^2
33 = 3 * 11
36 = 2^2 * 3^2
37 = 37
46 = 2 * 23
49 = 7^2
52 = 2^2 * 13
54 = 2 * 3^3
61 = 61
64 = 2^6
69 = 3 * 23
73 = 73
78 = 2 * 3 * 13
81 = 3^4
88 = 2^3 * 11
94 = 2 * 47
96 = 2^5 * 3
97 = 97
100 = 2^2 * 5^2
109 = 109
117 = 3^2 * 13
118 = 2 * 59
121 = 11^2
132 = 2^2 * 3 * 11
141 = 3 * 47
142 = 2 * 71
144 = 2^4 * 3^2
148 = 2^2 * 37
150 = 2 * 3 * 5^2
157 = 157
166 = 2 * 83
169 = 13^2
177 = 3 * 59
181 = 181
184 = 2^3 * 23
193 = 193
196 = 2^2 * 7^2
198 = 2 * 3^2 * 11
208 = 2^4 * 13
213 = 3 * 71
214 = 2 * 107
216 = 2^3 * 3^3
222 = 2 * 3 * 37
225 = 3^2 * 5^2
229 = 229
241 = 241
244 = 2^2 * 61
249 = 3 * 83
253 = 11 * 23
256 = 2^8
262 = 2 * 131
276 = 2^2 * 3 * 23
277 = 277
286 = 2 * 11 * 13
289 = 17^2
292 = 2^2 * 73
294 = 2 * 3 * 7^2
297 = 3^3 * 11
312 = 2^3 * 3 * 13
313 = 313
321 = 3 * 107
324 = 2^2 * 3^4
325 = 5^2 * 13
333 = 3^2 * 37
334 = 2 * 167
337 = 337
349 = 349
352 = 2^5 * 11
358 = 2 * 179
361 = 19^2
366 = 2 * 3 * 61
373 = 373
376 = 2^3 * 47
382 = 2 * 191
384 = 2^7 * 3
388 = 2^2 * 97
393 = 3 * 131 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9865717472321278,
"lm_q1q2_score": 0.8133908347212846,
"lm_q2_score": 0.8244619177503205,
"openwebmath_perplexity": 66.12765329171974,
"openwebmath_score": 0.5425806641578674,
"tags": null,
"url": "https://math.stackexchange.com/questions/1101206/positive-integers-n-which-can-be-written-as-x2-3y2"
} |
quantum-mechanics, quantum-information
To sum it up, we were looking for the largest $\frac{b}{c}$ such that the inequality holds true. If we took $b$ larger than 1, it would break at $x=0$, so that's one restriction. Then we prove that for $x=\pi$ the largest possible $c$ is $\frac{2}{\pi}$ or else it would break at that point. And finally, that gives us the value of $a$ we can't deviate from without crossing the $1-\frac{2}{\pi}$ line. | {
"domain": "physics.stackexchange",
"id": 26497,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-information",
"url": null
} |
| {
"domain": "com.br",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9890130593124399,
"lm_q1q2_score": 0.8263041290358484,
"lm_q2_score": 0.8354835371034368,
"openwebmath_perplexity": 402.3403455863307,
"openwebmath_score": 0.9638453722000122,
"tags": null,
"url": "https://www.cerenge.com.br/docs/applications-of-eigenvalues-and-eigenvectors-52eeb8"
} |
c#, asynchronous, delegates, wcf, asp.net-web-api
Title: .NET WCF Activator for sync and async calls I decided to re-write my code responsible for WCF calls, as all of my methods had try-catch-finally blocks. I read that it is a bad idea to use a using() statement as it does not close the WCF connection. After digging around, I found a solution based on delegates. Yesterday I decided to add an activator for calls that are made by async controllers in my MVC Web API app.
I am not sure about the code used for async. My doubts are based on this article.
Note - some code removed for brevity.
Here is the code for activators:
using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;
using System.Threading.Tasks;
using System.Web;
namespace MyProject.Concrete
{
public class SvcActivator<T>
{
public delegate void ServiceMethod(T proxy);
private static BasicHttpBinding binding
{
get
{
return new BasicHttpBinding
{
MaxReceivedMessageSize = int.MaxValue,
MaxBufferPoolSize = int.MaxValue
};
}
}
public static void Use(ServiceMethod method, string url)
{
var channelFactory = new ChannelFactory<T>(binding);
var endpoint = new EndpointAddress(url);
IClientChannel proxy = (IClientChannel)channelFactory.CreateChannel(endpoint);
bool success = false;
try
{
method((T)proxy);
proxy.Close();
success = true;
}
catch (Exception)
{
throw;
}
finally
{
if (!success)
{
proxy.Abort();
}
}
}
public static Task UseAsync(ServiceMethod method, string url, HttpBindingBase binding)
{
var channelFactory = new ChannelFactory<T>(binding);
var endpoint = new EndpointAddress(url);
IClientChannel proxy = (IClientChannel)channelFactory.CreateChannel(endpoint);
bool success = false; | {
"domain": "codereview.stackexchange",
"id": 8661,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, asynchronous, delegates, wcf, asp.net-web-api",
"url": null
} |
mathwonk
Homework Helper
Riemann himself proved, in the same paper where he defined his notion of integration, that a bounded function is integrable in his sense if and only if, for every positive d, the set of points where the function oscillates by d or more, can be covered by a finite sequence of intervals whose total length is as small as you wish. In particular a function has oscillation zero at every point where it is continuous. Your function is discontinuous at exactly one point, namely x=3, where the oscillation is 2. Since it is easy to cover one point by an interval as short as you wish, your function is integrable. It follows also that a bounded function which is discontinuous at only a finite number of points, or even only at an infinite sequence of points, is always integrable. There also exist integrable functions which have an uncountable set of discontinuities.
This condition is equivalent to the later one, stated by Lebesgue, that a bounded function is integrable in Riemann's sense, if and only if the set of points where it is discontinuous can be covered by an infinite sequence of intervals, whose total length is as small as you wish. Since Lebesgue's result is famous, but most people have not read Riemann's paper in detail, this condition is usually attributed, incorrectly in my opinion, to Lebesgue. I.e. Lebesgue's condition is merely an equivalent restatement of Riemann's condition.
Kolika28
The Riemann integral is defined to be the limit of the sum of the areas of rectangles whose bases are small intervals and whose heights are any value of the function in that interval ... as the mesh of the partition (the largest size of any of its intervals) approaches 0 ... if that limit is well-defined. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9591542835062628,
"lm_q1q2_score": 0.801357605553656,
"lm_q2_score": 0.8354835289107309,
"openwebmath_perplexity": 353.40040191931644,
"openwebmath_score": 0.9338372945785522,
"tags": null,
"url": "https://www.physicsforums.com/threads/riemann-integrability-with-a-discontinuity.984596/"
} |
# Behaviour of $x^n$, $ln(x)$, and $e^x$ as $x\to \infty$
In the chapter "Limits of a Function", I came across the following property:
As $$x\to \infty$$, $$\ln(x)$$ increases much slower than any positive power of $$x$$ where as $$e^x$$ increases much faster than any positive power of $$x$$.
So the following properties hold good:
$$(1) \lim_{x \to \infty} \frac{\ln(x)}{x}=0$$
$$(2) \lim_{x \to \infty} \frac{(\ln(x))^n}{x}=0$$
$$(3)\lim_{x \to \infty} \frac{x}{e^x}=0$$
$$(4) \lim_{x \to \infty} \frac{x^n}{e^x}=0$$
For verifying the properties $$(1)$$ and $$(3)$$, I used the L'Hospital Rule, and I proved the limits tend to the value $$0$$.
I don't think the other two properties hold good at all conditions, i.e., for all positive integral values of $n$. First of all, I was unable to use the L'Hospital Rule since I felt it would be very lengthy even if we knew the value of $n$. So, I decided to use a graphing calculator to determine their behaviour.
The following graph is for properties (1) and (2). The limit approaches $0$ at lower positive values of $n$, but at a higher value, say $98$ as in the given graph, the limit itself approaches infinity and not zero. I tried to zoom out to see the behaviour, but as far as I tried the limit approaches infinity and not zero. Further, from the graph it is evident the property given in my book is invalid, as the logarithmic function increases faster than the function $x$.
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9693242009478239,
"lm_q1q2_score": 0.8098544139931975,
"lm_q2_score": 0.8354835391516132,
"openwebmath_perplexity": 154.96061313875776,
"openwebmath_score": 0.9456321001052856,
"tags": null,
"url": "https://math.stackexchange.com/questions/3361802/behaviour-of-xn-lnx-and-ex-as-x-to-infty"
} |
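One way around the graphing-window problem is to compare the two functions in log space, where nothing overflows: $\ln\!\big((\ln x)^n/x\big) = n\ln(\ln x)-\ln x$. This sketch shows that for $n=98$ the ratio in property (2) is still astronomically large at $x=10^{10}$ — which is all the calculator can show — yet really does tend to $0$ far beyond any ordinary plotting range.

```python
import math

# ln((ln x)^n / x) = n*ln(ln x) - ln x: work in log space so nothing overflows.
def log_ratio(x, n=98):
    return n * math.log(math.log(x)) - math.log(x)

# Inside any ordinary plotting window the ratio is still astronomically large...
assert log_ratio(1e10) > 0     # (ln x)^98 dwarfs x at x = 1e10
# ...but the limit statement only concerns x -> infinity, and out there x wins:
assert log_ratio(1e300) < 0    # (ln x)^98 / x is already below 1 at x = 1e300
```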
ds.algorithms, gr.group-theory
Since you said in the comments that any canonical form will do, you might also be interested in my paper with Lance Fortnow [3]: in its current generality, I think your question is related to our results. We show that if every equivalence relation decidable in $P$ has a canonical form in $P$, then "bad" consequences result, such as $NP = UP = RP$, which in particular implies that the polynomial hierarchy collapses down to $BPP$. On the other hand, the equivalence relations you're interested in may not be in $P$, but this result suggests that even if they lie in a higher complexity class, other hard problems may still stand in your way.
So I think if you want some better upper bounds you really need the problem to be more specific.
[1] Andreas Blass and Yuri Gurevich. Equivalence relations, invariants, and normal forms. SIAM J. Comput. 13:4 (1984), 24-42.
[2] László Babai and Eugene M. Luks. Canonical labelings of graphs. STOC 1983, 171-183.
[3] Lance Fortnow and Joshua A. Grochow. Complexity classes of equivalence problems revisited. Inform. and Comput. 209:4 (2011), 748-763. Also available as arXiv:0907.4775v2. | {
"domain": "cstheory.stackexchange",
"id": 1229,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ds.algorithms, gr.group-theory",
"url": null
} |
ros-humble
Title: Abrupt cmd_vel change from + to - causes rattle in robot I am using the Carter robot and want to manipulate it back and forth using cmd_vel topic.
While testing with multiple commands, when I publish a Twist message such as x: 0.5 and then x: -0.5, the robot rattles and then goes backward.
I want to add some deceleration functions and want to know where I can find some hints to implement them.
Thanks! This seems more like a robot motor controller gain tuning issue than a cmd_vel one, look in the control loop on your microcontroller/motor controller.
At least on the standard Twist messages sent alongside cmd_vel, there aren't any fields for acceleration. | {
"domain": "robotics.stackexchange",
"id": 38539,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-humble",
"url": null
} |
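A minimal, ROS-agnostic sketch of the deceleration idea: rate-limit the commanded velocity before it reaches the motors. The `limit_step` helper and the numbers here are illustrative assumptions, not part of any ROS API; in practice this logic would sit in a node between `cmd_vel` and the base, or be handled by an off-the-shelf velocity-smoother package.

```python
def limit_step(current, target, max_accel, dt):
    """Move current toward target, changing by at most max_accel * dt."""
    max_delta = max_accel * dt
    delta = target - current
    if delta > max_delta:
        return current + max_delta
    if delta < -max_delta:
        return current - max_delta
    return target

# Command flips from +0.5 to -0.5 m/s; ramp at 1.0 m/s^2 in a 50 Hz loop.
v, dt = 0.5, 0.02
trace = []
for _ in range(60):
    v = limit_step(v, -0.5, max_accel=1.0, dt=dt)
    trace.append(v)

assert abs(trace[-1] - (-0.5)) < 1e-9          # reaches the target...
steps = [abs(b - a) for a, b in zip(trace, trace[1:])]
assert max(steps) <= 1.0 * dt + 1e-12          # ...without any step jump
```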
2) Using differentiation. $y'=4x-4$ then set this equal to zero so $4x-4=0$, $4x=4$ and $x=1$.
• Hey I have a prob in the same topic, would you help me out? – Kiara Feb 25 '14 at 19:12
• @Kiara of course! – The Ref Feb 25 '14 at 19:16
• There is another similar question in my book, $y=2x^2-5x-1$, I did it in the same method, but I get this answer: min value is $-3\frac{1}{2}$, occurs when $x$=$\frac{5}{4}$. But the book has an answer like this: min value is $-4\frac{1}{8}$, occurs when $x$=$\frac{5}{4}$. Which one is right? – Kiara Feb 25 '14 at 19:21
• you're correct that $x=5/4$, but I'm afraid the book is correct about the y value $2(5/4)^2 - 5(5/4) -1=2(25/16)-25/4 -1=25/8 - 25/4 - 1 = 25/8 - 50/8 - 8/8 = -33/8$ – The Ref Feb 25 '14 at 19:29
• Oh got it! Thanks a lot....:D – Kiara Feb 25 '14 at 19:30
$$y=2x^2-4x+7=2\left(x^2-2\cdot x\cdot1+1^2\right)+7-2\cdot1=2(x-1)^2+5$$
or $$2y=4x^2-8x+14=(2x)^2-2\cdot2x\cdot2+2^2+14-4=(2x-2)^2+10$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.983085088220004,
"lm_q1q2_score": 0.8296206971327139,
"lm_q2_score": 0.8438951084436077,
"openwebmath_perplexity": 189.804669874966,
"openwebmath_score": 0.8482280373573303,
"tags": null,
"url": "https://math.stackexchange.com/questions/690157/find-the-maximum-or-minimum-value-of-the-quadratic-function?noredirect=1"
} |
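The arithmetic in the comment thread checks out exactly with rational arithmetic:

```python
from fractions import Fraction

# y = 2x^2 - 5x - 1: vertex at x = -b/(2a), evaluated exactly.
a, b, c = Fraction(2), Fraction(-5), Fraction(-1)
x_min = -b / (2 * a)
y_min = a * x_min**2 + b * x_min + c

assert x_min == Fraction(5, 4)
assert y_min == Fraction(-33, 8)   # = -4 1/8, matching the book's answer
```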
condensed-matter, superconductivity
We introduce the electron operators $\alpha^{+}$
, $\alpha$
by
$$\alpha_{\mathbf{k}}^{+}=c_{\mathbf{k}}^{+}\;;\;\alpha_{\mathbf{k}}=c_{\mathbf{k}}\;\;\text{for}\;\;\varepsilon_{\mathbf{k}}>\varepsilon_{F}$$
and the hole operators $\beta^{+}$
, $\beta$
by
$$\beta_{\mathbf{k}}^{+}=c_{-\mathbf{k}}\;;\;\beta_{\mathbf{k}}=c_{-\mathbf{k}}^{+}\;\;\text{for}\;\;\varepsilon_{\mathbf{k}}<\varepsilon_{F}$$
The $-\mathbf{k}$
introduced for the holes is a convention which gives correctly the net change of wave-vector or momentum: the annihilation $c_{-\mathbf{k}}$
of an electron at $-\mathbf{k}$
leaves the Fermi sea with a momentum $\mathbf{k}$
. Thus $\beta_{\mathbf{k}}^{+}\equiv c_{-\mathbf{k}}$
creates a hole of momentum $\mathbf{k}$
. | {
"domain": "physics.stackexchange",
"id": 23180,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "condensed-matter, superconductivity",
"url": null
} |
Thus giving examples of the first two sentences.
B) Here you started with $\sqrt{x-15} = 3 - \sqrt{x}$. This is false when $x = 16$, however when you apply various rules you will get $x = 16$, which is true when $x = 16$. Here you started with something that is false, and derived something that is true! But there is no contradiction because from a false we can get a true.
You might ask what is the point of manipulating an expression if the end result could be true or false regardless whether the original was true or false.
C) Well, if you start with a true statements, apply various rules, you will never get a false statement. So if you start with a statement that might be true or false, apply various rules, and the end result is false, then the original had to be false as well.
For example if we start with $\sqrt{x-15} = 3 - \sqrt{x}$ and suppose we do not know if it is true or false when $x = 5$. I.e. we do not know whether $\sqrt{5-15} = 3 - \sqrt{5}$ is true. We could square both sides and get $5 = 16$. The final result was false, thus $\sqrt{5-15} = 3 - \sqrt{5}$ was false as well, i.e. $\sqrt{x-15} = 3 - \sqrt{x}$ is false when $x = 5$.
Now you could do this for any $x$ except $16$, showing none of these values work. So when you got $x = 16$, you were not showing that $x$ is $16$, you were showing that no other value could be a solution!
When you substituted $x = 16$ and got $1 = -1$, you were showing $x = 16$ does not work either.
Thus manipulating an equation is useful for eliminating values. Once you eliminated some values, you should substitute all values left over back into the original see whether they work. (Unless you used reversible steps, then I can show that substitution is not needed). | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9648551505674444,
"lm_q1q2_score": 0.8220110472060722,
"lm_q2_score": 0.8519528000888386,
"openwebmath_perplexity": 252.7294801430224,
"openwebmath_score": 0.8836697936058044,
"tags": null,
"url": "https://math.stackexchange.com/questions/1827941/square-root-confusion-why-am-i-getting-an-answer-if-it-doesnt-work/1829163"
} |
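Substituting the surviving candidate back into the original equation, as the answer recommends:

```python
import math

def lhs(x):
    return math.sqrt(x - 15)

def rhs(x):
    return 3 - math.sqrt(x)

# x = 16 survived the algebraic elimination, but fails the ORIGINAL equation:
assert lhs(16) == 1.0
assert rhs(16) == -1.0
assert lhs(16) != rhs(16)  # so sqrt(x - 15) = 3 - sqrt(x) has no real solution
```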
thermodynamics, statistical-mechanics, entropy
Title: If thermodynamics says entropy always increases, how can the universe end in heat death? My understanding of entropy is that it is a measure of the available states a system can take on. In this context, if the total entropy in the universe always increases, how can it lead to a configuration where everything is at absolute zero? This has an entropy of 0, as there is only 1 possible configuration of the universe where this is the case.
the total entropy in the universe always increases
The total entropy of any isolated system increases until equilibrium is obtained. Then entropy stops increasing. If the system is the entire universe, then entropy will increase until the heat death.
how can it lead to a configuration where everything is at absolute zero.
The universe being at maximum entropy does not mean the entire universe is at absolute zero.
This has an entropy of 0, as there is only 1 possible configuration of the universe where this is the case.
Not at all. Maximum entropy means there are a maximal number of configurations.
As a useful analogy, just think of the gas in a box example. The box has a "heat death" when maximum entropy is obtained when there is a uniform gas concentration throughout the box. This doesn't put the box at absolute zero, and it doesn't put the box in a state that only is associated with a single configuration. | {
"domain": "physics.stackexchange",
"id": 62814,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, statistical-mechanics, entropy",
"url": null
} |
gene-expression
Title: What is the difference between differentially expressed genes and deregulated genes? Can anyone explain clearly what is the difference between differentially expressed genes and deregulated genes? Any gene whose gene expression differs significantly from some reference is considered to be differentially expressed. I think the most common representation of differential expression is the volcano plot, where you're plotting the fold change or log2 fold change against the -log10 p value assigned that gene.
This means a couple of things: One, you need a sufficiently large sample size that you can actually get statistics, and two, you need a sufficient reference sample which will act as your baseline.
If a gene is deregulated, however, the expression is aberrant. In order to characterize aberrant expression you need a normal sample or accepted reference sample, and the sample of interest. This might be as simple as tumor vs normal tissue from the same patient.
"domain": "biology.stackexchange",
"id": 7067,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gene-expression",
"url": null
} |
differential-equations
The issue is more that there are an infinite number of possible solutions. For example, in the one-dimensional version of the problem, any function of the form
$u(x,t)=A\exp(-\lambda\alpha{}t)\left(\sin(\sqrt{\lambda}x) + B \cos(\sqrt{\lambda}x)\right)$
will be found to satisfy the heat equation itself.
So we need some additional information to narrow down the solution to one that fits our physical situation. These are the boundary conditions and the initial conditions.
For example, we might have a situation where the left end of a bar (of length L) is connected to a ideal heatsink at 100 C, and the right end of the bar is connected to an ideal heatsink at 0 C. Then our boundary conditions are
$u(0, t) = 100$
$u(L, t) = 0$
And we might have an initial condition from the fact that before the bar was connected to those heat sinks it was heated to a uniform temperature of 25 C.
$u(x, 0) = 25$.
We use Fourier analysis to find the combination of all the possible solutions that satisfy these conditions. | {
"domain": "physics.stackexchange",
"id": 14256,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "differential-equations",
"url": null
} |
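The worked example above can be simulated directly with an explicit finite-difference scheme (a sketch under the stated boundary and initial conditions; the grid size and step count are illustrative choices). The run marches to near-steady state, where the solution is the straight line $u(x)=100(1-x/L)$.

```python
# Bar of length 1 held at 100 C on the left and 0 C on the right,
# starting uniformly at 25 C; FTCS explicit scheme.
nx, alpha = 21, 1.0
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / alpha          # within the stability limit r <= 1/2
r = alpha * dt / (dx * dx)

u = [25.0] * nx
u[0], u[-1] = 100.0, 0.0            # boundary conditions u(0,t)=100, u(L,t)=0

for _ in range(5000):               # march until ~steady state
    u = ([u[0]]
         + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, nx - 1)]
         + [u[-1]])

# Steady state with these BCs is the straight line u(x) = 100*(1 - x);
# check the midpoint of the bar.
assert abs(u[nx // 2] - 50.0) < 0.5
```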
python, object-oriented
self.mort_data = pd.DataFrame([0.0006, 0.000594, 0.000587, 0.000582, 0.000577, 0.000572, 0.000569, 0.000567, 0.000566, 0.000567, 0.00057, 0.000574, 0.00058, 0.00059, 0.000602, 0.000617, 0.000636, 0.00066, 0.000689, 0.000724, 0.000765, 0.000813, 0.00087, 0.000937, 0.001014, 0.001104, 0.001208, 0.001327, 0.001465, 0.001622, 0.001802, 0.002008, 0.002241, 0.002508, 0.002809, 0.003152, 0.003539, 0.003976, 0.004469, 0.005025, 0.00565, 0.006352, 0.00714, 0.008022, 0.009009, 0.010112, 0.011344, 0.012716, 0.014243, 0.01594, 0.017824, 0.019913, 0.022226, 0.024783, 0.027606, 0.030718, 0.034144, 0.037911, 0.042046, 0.046578, 0.051538, 0.056956, 0.062867, 0.069303, 0.0763, 0.083893, 0.092117, 0.101007, 0.1106, 0.120929, 0.132028, 0.143929, 0.15666, 0.170247, 0.184714, 0.200079, 0.216354, 0.233548, 0.251662, | {
"domain": "codereview.stackexchange",
"id": 44783,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, object-oriented",
"url": null
} |
java, array, unit-testing, collections
private boolean listsEqual(List<Integer> list, List<Integer> list2) {
if (list.size() != list2.size()) {
return false;
}
for (int i = 0; i < list.size(); ++i) {
if (!list.get(i).equals(list2.get(i))) {
return false;
}
}
return true;
}
}
Please tell me anything that comes to mind. remove checks the index twice:
@Override
public E remove(int index) {
checkRemovalIndex(index);
E ret = this.get(index);
super.remove((finger + index) % size());
if (finger + index > size()) {
--finger;
}
return ret;
}
Here's what get looks like:
@Override
public E get(int index) {
checkAccessIndex(index);
return super.get((index + finger) % size());
}
Here's remove with the get call inlined:
@Override
public E remove(int index) {
checkRemovalIndex(index);
checkAccessIndex(index);
E ret = super.get((index + finger) % size());
super.remove((finger + index) % size());
if (finger + index > size()) {
--finger;
}
return ret;
}
It checks the index twice!
I think you could do better by directly using the return value of super.remove:
@Override
public E remove(int index) {
checkRemovalIndex(index);
E ret = super.remove((finger + index) % size());
if (finger + index > size()) {
--finger;
}
return ret;
}
addAll with index greater than list size corrupts finger:
@Override
public boolean addAll(int index, Collection<? extends E> coll) {
if (coll.isEmpty()) {
return false;
}
int actualIndex = finger + index;
if (actualIndex >= size()) {
actualIndex %= size();
finger += coll.size();
} | {
"domain": "codereview.stackexchange",
"id": 19175,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, array, unit-testing, collections",
"url": null
} |
quantum-mechanics, quantum-entanglement, measurements
Let's look at entanglement, here the probability that both parties get the same result is given by $\cos^2({\theta})$ where $\theta$ is the "angle" between their bases. In my example with singlets above the angle was $\pi/4$ and so there was a 50% correlation between the parties' results. With an angle of $\pi/8$ we see an 85% correlation. This is the key difference, the correlation is governed by the similarity of the bases, not the frequency of particular outcomes. An outcome that occurs 10% of the time will yield a coincidence with probability $\cos^2({\theta})$, which will generally not be 10%. To emphasize this, let's think about the weighted coin flips. If Alice gets a tails on a 90% heads coin, then even in an optimistic case, where Bob chooses the same coin, she only expects Bob to get a tails 10% of the time. With entangled particles, when Alice sees a tails, even if it only occurs one in a million times, she expects Bob to see a tails with certainty assuming he chose the same basis, or with probability .85 if he chose one $\pi/8$ radians off, and so on.
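The quoted percentages are just $\cos^2\theta$ evaluated at $\theta = \pi/4$ and $\pi/8$; a one-line check:

```python
import math

def same_result_prob(theta):
    """Probability both parties get the same outcome when their bases differ by theta."""
    return math.cos(theta) ** 2

print(same_result_prob(math.pi / 4))   # ~0.5, the 50% correlation
print(same_result_prob(math.pi / 8))   # ~0.854, the 85% correlation
```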
Hope that helps | {
"domain": "physics.stackexchange",
"id": 1073,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-entanglement, measurements",
"url": null
} |
matlab, filters, signal-analysis, audio, estimation
Title: How to calculate the pitch fundamental frequency f(0) in the time domain? I am new to DSP, trying to calculate the fundamental frequency (f(0)) for each segmented frame of an audio file. The methods of F0 estimation can be divided into three categories:
based on the temporal dynamics of the signal (time domain);
based on the frequency structure (frequency domain); and
hybrid methods.
Most of the examples estimate the fundamental frequency from the frequency structure (frequency domain); I am looking for methods based on the temporal dynamics of the signal (time domain).
This article provides some information, but I am still not clear on how to calculate it in the time domain.
https://gist.github.com/endolith/255291
This is the code I have found and used so far:
from numpy import diff, nonzero, argmax
from scipy.signal import correlate
# parabolic() -- three-point peak interpolation -- is defined in the linked gist
def freq_from_autocorr(sig, fs):
"""
Estimate frequency using autocorrelation
"""
# Calculate autocorrelation and throw away the negative lags
corr = correlate(sig, sig, mode='full')
corr = corr[len(corr)//2:]
# Find the first low point
d = diff(corr)
start = nonzero(d > 0)[0][0]
# Find the next peak after the low point (other than 0 lag). This bit is
# not reliable for long signals, due to the desired peak occurring between
# samples, and other peaks appearing higher.
# Should use a weighting function to de-emphasize the peaks at longer lags.
peak = argmax(corr[start:]) + start
px, py = parabolic(corr, peak)
return fs / px | {
"domain": "dsp.stackexchange",
"id": 8756,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, filters, signal-analysis, audio, estimation",
"url": null
} |
c, parsing, linux, status-monitoring, x11
static int
populate_tm_struct(void)
{
time_t tval;
time(&tval);
tm_struct = localtime(&tval);
return 0;
}
static int
loop (Display *dpy, Window root)
{
int weather_return = 0;
while (1) {
// get times
populate_tm_struct();
// run every second
get_time();
get_network_usage();
get_cpu_usage();
if (weather_update == true && wifi_connected == true)
if ((weather_return = get_weather()) < 0)
if (weather_return != -2)
break;
if (portfolio_init == false && wifi_connected == true)
init_portfolio();
// run every five seconds
if (tm_struct->tm_sec % 5 == 0) {
get_TODO();
get_backup_status();
if (get_portfolio_value() == -1)
break;
if (get_wifi_status() < 0)
break;
get_memory();
get_cpu_load();
get_cpu_temp();
get_fan_speed();
get_brightness();
get_volume();
get_battery();
}
// run every minute
if (tm_struct->tm_sec == 0) {
get_log_status();
get_disk_usage();
}
// run every 3 hours
if ((tm_struct->tm_hour + 1) % 3 == 0 && tm_struct->tm_min == 0 && tm_struct->tm_sec == 0)
if (wifi_connected == false)
weather_update = true;
else
if ((weather_return = get_weather()) < 0)
if (weather_return != -2)
break;
format_string(dpy, root);
sleep(1);
}
return -1;
} | {
"domain": "codereview.stackexchange",
"id": 31497,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, parsing, linux, status-monitoring, x11",
"url": null
} |
I am not going to give you the proof, but if you are interested, you should know the Cauchy-Riemann equations first: $w=f(z)=f(x+yi)=u(x,y)+iv(x,y)$ is analytic iff it satisfies both $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$. The proof comes directly from the definition of differentiation. Thus, once you have $u(x,y)$, you can find $v(x,y)$ from the equations above, making $f(z)$ analytic.
To augment Kendra Lynne's answer, what does it mean to say that signal analysis in $\mathbb{R}^2$ isn't as 'clean' as in $\mathbb{C}$?
Fourier series are the decomposition of periodic functions into an infinite sum of 'modes' or single-frequency signals. If a function defined on $\mathbb{R}$ is periodic, say (to make the trigonometry easier) that the period is $2\pi$, we might as well just consider the piece whose domain is $(-\pi, \pi]$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9693241991754918,
"lm_q1q2_score": 0.8119252942777174,
"lm_q2_score": 0.8376199572530448,
"openwebmath_perplexity": 269.2950759844331,
"openwebmath_score": 0.914422869682312,
"tags": null,
"url": "https://math.stackexchange.com/questions/444475/whats-the-difference-between-mathbbr2-and-the-complex-plane/444493"
} |
cosmology, spacetime, coordinate-systems, time, universe
Title: In spacetime time is a coordinate. Does it mean there is a single objective timeline for the Universe? If every event can be defined with x, y, z, t coordinates - does it mean all events with the same t are composing the whole Universe at the moment t? No, the choice of coordinates is not absolute. In flat spacetime, if you and I are moving relative to each other, then our time axes are tilted relative to each other. A plane of constant t in your frame of reference will be a sloping slice through time in mine, with t increasing all along the slope in one direction and decreasing all along it in the other. In curved spacetime, the effects can be much more complicated. In short, the idea of 'now' only makes sense locally - you cannot extrapolate your 'now' over distance and expect it to agree exactly with mine if we are moving relative to each other. At everyday speeds and distances, the magnitude of the effect is negligible, which is why we don't notice it.
The effect is known as the relativity of simultaneity and it is the cause of other effects. Length contraction, for example, arises because you consider either end of a moving object at what you consider to be the same moment. In the rest frame of the object, however, they are two separate moments, and the tail of the object has moved in the period between them to catch up with the head, thus making the object seem shorter.
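To put a number on how negligible the relativity of simultaneity is at everyday speeds: two events that are simultaneous in one frame and separated by $\Delta x$ differ by $\gamma v \Delta x / c^2$ in a frame moving at $v$ along the separation. The 30 m/s speed and ~5,600 km separation below are illustrative values only.

```python
import math

c = 299_792_458.0   # speed of light, m/s

def simultaneity_offset(v, dx):
    """Time between two events simultaneous in one frame, as seen from a
    frame moving at speed v along their spatial separation dx."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * v * dx / c ** 2

# Highway speed over an intercontinental distance: a couple of nanoseconds
offset = simultaneity_offset(30.0, 5.6e6)
print(offset)
```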
Even though it is impossible to allocate a common t coordinate to 'now' everywhere, that does not mean that there isn't a 'now' everywhere- it just means that different observers will need to use different coordinates to refer to it. There is a parallel with spatial dimensions. If you live in New York, say, you will consider the horizontal to be a plane tangential to the Earth in New York, whereas if I live in London, I will pick a plane tangential with the Earth there. Neither of us can use the same z coordinate to measure sea level, since sea level in New York will have negative z value in my frame and vice versa. That doesn't mean that there is not a well-defined sea level- it just means that different observers use different values of z to refer to it. You cannot extrapolate sea level in New York to any distance. Likewise with time. | {
"domain": "physics.stackexchange",
"id": 96503,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, spacetime, coordinate-systems, time, universe",
"url": null
} |
screw-theory
reference https://www.cs.cmu.edu/~baraff/sigcourse/notesd2.pdf
link to [Physics.SE] answer https://physics.stackexchange.com/a/686727/392
But in screw notation, the above becomes much simpler (to remember and to write)
$$ m_{\rm reduced} = \frac{1}{ \boldsymbol{n}^\top (\mathbf{I}_1^{-1} + \mathbf{I}_2^{-1}) \, \boldsymbol{n} } $$
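A quick numerical sketch of this formula (3×3 inertia blocks and made-up values; with isotropic blocks $m\mathbf{1}$ it reduces to the familiar scalar reduced mass, which serves as a sanity check rather than anything from the text above):

```python
import numpy as np

def reduced_mass(I1, I2, n):
    """m_reduced = 1 / (n^T (I1^{-1} + I2^{-1}) n) for inertia matrices I1, I2."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)           # contact normal, unit length
    return 1.0 / (n @ (np.linalg.inv(I1) + np.linalg.inv(I2)) @ n)

# Point masses: inertia blocks m * eye(3); recovers m1*m2/(m1+m2), parallel-resistor style
m1, m2 = 2.0, 3.0
m_red = reduced_mass(m1 * np.eye(3), m2 * np.eye(3), [0.0, 0.0, 1.0])
print(m_red)   # ~1.2
```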
And again, you can see that the two masses are added like resistors in parallel (addition of inverse inertia) and then projected through the contact normal. This kind of insight cannot be achieved with vector notation. | {
"domain": "robotics.stackexchange",
"id": 2481,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "screw-theory",
"url": null
} |
classical-mechanics, conservation-laws
Title: Cat falling from a tree When a cat or any body falls over to the ground, how is momentum conserved?
I was working on a problem of a cat falling on top of a skateboard, and the system travels together with a new velocity. That seemed intuitive enough for me. This is how I was thinking through:
The cat had momentum that became zero after the impact. Should not the skateboard have recoiled in some way, due to the conservation of momentum? After all, the change in momentum for the board should have been something measurable.
I guess there is something wrong with the way I am approaching the problem. Could you please help me identify this?
EDIT: I apologise, but the situation is like this: a skateboard moves on the ground with constant speed, until the cat is dropped from a tree. The cat lands on the skateboard and then proceeds with a new speed. The skateboard does not have to move, especially if the force is perpendicular. The ground (or earth) can absorb the momentum. In other words the earth gains a slight amount of velocity! | {
"domain": "physics.stackexchange",
"id": 53570,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, conservation-laws",
"url": null
} |
c#, performance, collections
BenchmarkDotNet=v0.10.3.0, OS=Microsoft Windows NT 6.2.9200.0
Processor=Intel(R) Core(TM) i5-4570 CPU 3.20GHz, ProcessorCount=4
Frequency=3125012 Hz, Resolution=319.9988 ns, Timer=TSC
[Host] : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1637.0
LegacyJitColdStart : Clr 4.0.30319.42000, 64bit LegacyJIT/clrjit-v4.6.1637.0;compatjit-v4.6.1637.0
LegacyJitMonitoring : Clr 4.0.30319.42000, 64bit LegacyJIT/clrjit-v4.6.1637.0;compatjit-v4.6.1637.0
LegacyJitThroughput : Clr 4.0.30319.42000, 64bit LegacyJIT/clrjit-v4.6.1637.0;compatjit-v4.6.1637.0
RyuJitColdStart : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1637.0
RyuJitMonitoring : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1637.0
RyuJitThroughput : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1637.0
Platform=X64 Runtime=Clr InvocationCount=100000000
LaunchCount=1 TargetCount=10 WarmupCount=3 | {
"domain": "codereview.stackexchange",
"id": 24859,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, collections",
"url": null
} |
thermodynamics, entropy, definition
Title: Intuition behind the formula for macroscopic entropy Wikipedia says that the 'macroscopic' definition of entropy is:
$$ \Delta S = \displaystyle \int \dfrac{dQ_{\rm rev}}{T}$$
Where $T$ is the uniform absolute temperature of a closed system and $dQ_{\rm rev}$ is an incremental reversible transfer of heat into that system.
I've always had trouble with derivatives and integrals in physical equations, and this is no exception.
Why does the integral appear in this formula? I know that entropy has just been defined that way, but why? What is the logic behind dividing $dQ_{\rm rev}$ by $T$ and taking the integral of that? Thermodynamics is full of what are called differentials.
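One way to demystify the integral before unpacking the notation: for a concrete reversible process, heating at constant heat capacity $C$, we have $dQ_{\rm rev} = C\,dT$, and the integral evaluates to $C\ln(T_2/T_1)$. A numerical check (with arbitrary values of $C$, $T_1$, $T_2$):

```python
import math

C = 10.0                 # assumed constant heat capacity, J/K
T1, T2 = 300.0, 400.0    # arbitrary start and end temperatures, K

# Midpoint-rule evaluation of dS = integral of (C dT) / T from T1 to T2
N = 100_000
dT = (T2 - T1) / N
dS_numeric = sum(C * dT / (T1 + (i + 0.5) * dT) for i in range(N))

dS_exact = C * math.log(T2 / T1)   # closed form for this particular process
print(dS_numeric, dS_exact)
```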
When you want to find the slope of a secant of a curve between points $(x_1,y_1)$ and $(x_2,y_2)$, you calculate
$\frac{y_2-y_1}{x_2-x_1}=\frac{\Delta y}{\Delta x}$
where the latter is just shorthand notation. Now, the idea behind derivatives is that you get the slope of the tangent if you let $\Delta x$ and $\Delta y$ tend to really, really small size, and it is then that you call them $dx$ and $dy$. $dx$ is historically called the differential of $x$ and the slope of the tangent is then the differential quotient, or $\frac{dy}{dx}$. This explains the strange notation for derivatives. It is also quite intuitive, e.g. the chain rule becomes $\frac{dz}{dx}=\frac{dz}{dy}\frac{dy}{dx}$, i.e. the quotient is "expanded" with the differential $dy$. | {
"domain": "physics.stackexchange",
"id": 9653,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, entropy, definition",
"url": null
} |
## Now, let's practice
Problem 1
Evaluate 6a + 4b - 6 when a = 1 and b = 3.
## Want to join the conversation?
• On the first question, it says my answer is wrong and I got 18 as my answer.
• At first i got that too, but then i realized there was a -6 at the end of the problem. So the actual answer was 12. :)
• i dont get the 2nd q how do we solve it when its an fraction
• a fraction is division. so 10 divided by 5 can be written as a fraction 10/5, and x/y means x divided by y. does that help? you would do it along with multiplication in the order of operations.
• how do you expand and simplify: 2(x- 1)(x+1)+(2x-1)(2x-1)
• First distribute 3x over (2x+5) and then distribute -2x over (x-6):
3x(2x+5) - 2x(x-6)
= 6x^2 + 15x - 2x^2 + 12x [Note: -2x * -6 = +12x, not -12x]
= 6x^2 - 2x^2 + 15x + 12x [Rewrite with like terms in order.]
= 4x^2 + 27x
• how can i find the answer in the equation 5x - x/y when x=4 and y =2
• 1) Substitute each value into the expression for their respective variable
5(4) - 4/2
-- Multiply: 20 - 4/2
-- Divide: 20 - 2
-- Subtract: 18
Hope this helps.
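The substitute-then-compute recipe used in these answers can be mirrored with a toy helper (eval is acceptable here only because we control the input strings; this is an illustration, not part of the exercise):

```python
def evaluate(expr, **values):
    """Substitute the given variable values into expr, then do the arithmetic."""
    return eval(expr, {"__builtins__": {}}, values)

print(evaluate("6*a + 4*b - 6", a=1, b=3))    # 12
print(evaluate("10 + 2*p - 3*r", p=4, r=5))   # 3
```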
• After coming back from the Holiday BELIEVE ME! I had to go through these problems more than (2)- (3) times. Knowing how much I love math this, probably was one of the toughest days ever. This is how I solved the First and second problems:
10+2p-3r (p=4, r=5)
10 +2*4-3*5
10+8-15
18-15=3 | {
"domain": "khanacademy.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9881308810508339,
"lm_q1q2_score": 0.8101619178610158,
"lm_q2_score": 0.8198933293122507,
"openwebmath_perplexity": 2276.3825564296353,
"openwebmath_score": 0.46755251288414,
"tags": null,
"url": "https://en.khanacademy.org/math/algebra/x2f8bb11595b61c86:foundation-algebra/x2f8bb11595b61c86:substitute-evaluate-expression/a/evaluating-expressions-with-two-variables"
} |
java, beginner, tic-tac-toe, ai
Title: Minimax implementation of tic tac toe I have the following working tic tac toe program. Someone said it's a convoluted mess and I'm looking for pointers on how to clean it up.
import java.util.Scanner;
import java.util.ArrayList;
import java.util.Random;
import java.util.Arrays;
public class Shortver{
private static final int boardRowDim = 3;
private static final int boardColDim = 3;
private String[][] board;
private String playerName;
private String playerMark;
private String computerMark;
private boolean humanGoes;
private boolean winner;
private boolean draw;
private int gameTargetScore;
private boolean output = false;
private boolean toSeed = false;
private ArrayList<Integer> availableMoves;
public Shortver(String name, boolean whoGoesFirst){
availableMoves = new ArrayList<Integer>();
board = new String[boardRowDim][boardColDim];
for (int i = 0; i < board.length; i++){
for(int j = 0; j < board[0].length; j++){
board[i][j] = ((Integer)(double2single(i,j))).toString();
availableMoves.add(double2single(i,j));
}
}
playerName = name;
humanGoes = whoGoesFirst;
playerMark = "X";
computerMark = "O";
gameTargetScore = 15;
if(!humanGoes){
playerMark = "O";
computerMark = "X";
gameTargetScore = -15;
}
winner = false;
draw = false;
}
public static void main(String[] args)throws Exception{
System.out.println("\u000C");
Scanner kboard = new Scanner(System.in);
printHeader();
System.out.print(" Please enter your name ; ");
String name = kboard.next();
name = capitalize(name); | {
"domain": "codereview.stackexchange",
"id": 34785,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, tic-tac-toe, ai",
"url": null
} |
# Domain of the n composed logarithms on x.
As we know, the real logarithm has the domain
$$D_1 = \{x : x \in \mathbb{R}, x > 0\}$$
What is the logarithmic domain of "higher order" logarithms, at index n? For example, it seems that
$$D_2 = \{x : x \in \mathbb{R}, x > 1\}$$
which would be the domain of
$$f(x) = \log(\log(x))$$
$D_3$ is a little hard to imagine. I think my real question deals with formalization, and the extended case of $D_n$ however.
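For the extended case, the thresholds form a power tower: with the natural log, $D_n = \{x : x > t_n\}$ where $t_1 = 0$ and $t_{n+1} = e^{t_n}$, giving $t_2 = 1$, $t_3 = e$, $t_4 = e^e$, and so on. A short computational check (natural log assumed; base 10 would use $10^{t_n}$ instead):

```python
import math

def domain_threshold(n):
    """t_n such that the n-fold composed natural log is defined exactly for x > t_n."""
    t = 0.0                     # log is defined for x > 0
    for _ in range(n - 1):
        t = math.exp(t)         # each extra log lifts the bound: x > e^(previous bound)
    return t

print([domain_threshold(n) for n in range(1, 5)])   # 0, 1, e, e^e
```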
There may be connections to the iterated logarithm, which counts how many times the logarithm must be applied before the result drops to 1 or below in a given base. See the Wikipedia article. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534398277178,
"lm_q1q2_score": 0.8737823421135005,
"lm_q2_score": 0.8902942377652497,
"openwebmath_perplexity": 429.86676581023846,
"openwebmath_score": 0.9099500775337219,
"tags": null,
"url": "https://math.stackexchange.com/questions/1529339/domain-of-the-n-composed-logarithms-on-x"
} |
statistical-mechanics, solid-state-physics, semiconductor-physics
Title: Number density of holes in an intrinsic semiconductor having light and heavy hole band Say the dispersion is isotropic:
$$E=E_0+\frac{\hbar^2k^2}{2m}$$
Then the density of states in the valence bands are given by
$$g_{lh}(E)=\frac{\sqrt{2}}{\pi^2\hbar^3}m_{lh}^{3/2}\sqrt{E_v-E}$$
$$g_{hh}(E)=\frac{\sqrt{2}}{\pi^2\hbar^3}m_{hh}^{3/2}\sqrt{E_v-E}$$
If I want to calculate the total number density of holes without omitting the light holes for their smaller number, which integral is the correct one?
$$p = \int_{-\infty}^{E_v}[1-f(E)][g_{lh}(E)+g_{hh}(E)]dE$$
or
$$p=\int_{-\infty}^{E_v}\left[1-\frac{1}{2}f(E)\right][g_{lh}(E)+g_{hh}(E)]dE$$
I was considering the second form since the possibility of an electron occupying one of the bands at energy between $E$ and $E+dE$ is half of the probability of it occupying the energy between $E$ and $E+dE$. Think of finding the number density for both bands separately, then add them together: | {
"domain": "physics.stackexchange",
"id": 94727,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, solid-state-physics, semiconductor-physics",
"url": null
} |
electromagnetism, waves, polarization, complex-numbers
Title: Is polarization of a wave just a description of its motion in three dimensions? Since a polarization of the wave is described by complex numbers, we can try to give that mathematical formalism geometrical meaning. With having two different axes, one imaginary and other real, it is possible to represent the motion of the wave in three dimensions as the motion of 2 waves in two planes. These two planes are being observed in three-dimensional space from the point in the third plane which is perpendicular to the two other planes. This third plane is actually a complex plane, from whose perspective these two planes are viewed as lines. This also means that the notion of a complex plane just refers to the plane which is an auxiliary construction, on which coordinates of a point in two other planes are projected. The coordinates in each plane correspond to their motion in two dimensions, parametrically described, where one of two parameters is time. This can be mathematically interpreted as a function where y=y(x).
These two equations with two variables for each one, or the lines as viewed geometrically are real and imaginary part of a complex number. Each part is showing the two-dimensional projection of a curve in three dimensions. If we can observe these two projections as independent of each other, it is possible to describe this curve in three dimensions. This is done by introduction of mathematical abstraction, a complex number, where imaginary and real part are independent of each other.
If our attention is focused only on single point in this three-dimensional world, we can just determine its location from three spatial coordinates. In other words, there are two equations sharing the connection with same parameter. Uniqueness of that parameter arises from setting the same beginning of both planar coordinates system at the place where complex plane intersects two other planes. Therefore, the solution of the equation are the coordinates of the point in three-dimensional space (x, y, z). But if we bring in another dimension to an observed object, i.e. if we make that point a line (given algebraically as the function with two variables), we get a new, fourth variable or fourth dimension. That dimension is time, from the viewpoint of a physicist. | {
"domain": "physics.stackexchange",
"id": 13238,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, waves, polarization, complex-numbers",
"url": null
} |
# 99 Consecutive Positive Integers whose sum is a perfect cube?
What is the least possible value of the smallest of 99 consecutive positive integers whose sum is a perfect cube?
• What have you tried? What do you know about the sum of $99$ consecutive integers? If the first is $n$, what is the sum? – Ross Millikan Aug 19 '17 at 3:30
• Instead of 99, try solving the problem for only 9 consecutive numbers. – MJD Aug 19 '17 at 3:50
Hint 1: the sum of an odd number of consecutive integers is easiest described by the middle term. For example the sum of five consecutive integers where the middle term is $x$ is
$$(x-2)+(x-1)+x+(x+1)+(x+2)$$
$(x-2)+(x-1)+x+(x+1)+(x+2)=5x$. More generally, the sum of $n$ consecutive integers where $n$ is odd and $x$ is the middle term is $nx$
Hint 2: In a perfect cube, each prime must occur in the prime factorization a multiple of three number of times (zero is also a multiple of three)
$99=3^2\cdot 11^1$ is missing some factors to be a cube.
Let $\color{Blue}{n=3\cdot 11^2}\color{Red}{\cdot a}\color{Blue}{^3}$ for any arbitrary $\color{Red}{a}$. Only notice that $$\underbrace{ (n-49) + (n-48) + ... + (n-1) + \color{Blue}{n} + (n+1) + ... + (n+48) + (n+49)}_{\text{these are} \ \ 1+2\cdot 49 = 99 \ \ \text{consecutive numbers!}} \\ =99\color{Blue}{n}=99\cdot 3\cdot 11^2\cdot\color{Red}{a}^3=(33\color{Red}{a})^3.$$
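This is easy to confirm by brute force: the sum of 99 consecutive integers with middle term $n$ is $99n$, so one just scans cubes for the first multiple of 99 whose middle term leaves a positive first term (the smallest first term works out to $363 - 49 = 314$):

```python
def smallest_first_term():
    """Smallest positive m with m + (m+1) + ... + (m+98) a perfect cube."""
    k = 1
    while True:
        s = k ** 3
        # The sum equals 99 * (middle term), so s must be a multiple of 99,
        # and the first term, middle - 49, must be at least 1.
        if s % 99 == 0 and s // 99 - 49 >= 1:
            return s // 99 - 49
        k += 1

m = smallest_first_term()
print(m, sum(range(m, m + 99)) == 33 ** 3)   # 314 True
```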
Also one can prove that there are no other solutions! | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9875683509762452,
"lm_q1q2_score": 0.8119679638201553,
"lm_q2_score": 0.8221891305219504,
"openwebmath_perplexity": 210.7155638397361,
"openwebmath_score": 0.8235054016113281,
"tags": null,
"url": "https://math.stackexchange.com/questions/2398748/99-consecutive-positive-integers-whose-sum-is-a-perfect-cube"
} |
quantum-mechanics, quantum-field-theory, hilbert-space, operators
$\mathcal D$ refers to a dense subspace of the Hilbert space $\mathcal H$ on which $\Phi$ reduces to an operator-valued function on spacetime, rather than an operator-valued distribution.
In other words, all operators on the Hilbert space are approximated by expressions of the form $a_1\phi(x_1)\cdots\phi(x_{n_1})+a_2\phi(x_1)\cdots\phi(x_{n_2})+\cdots$. I can see the appeal of such an axiom, but it seems to come out of nowhere. Is there a counterpart of this in "ordinary" (non-relativistic) quantum mechanics (NRQM)?
For example, considering a single non-relativistic free particle in one dimension without spin, can all operators on $L^2(\mathbb R)$ be approximated by linear combinations of products of $\hat x$ and $\hat p$? Is this a fact of a NRQM that has always been there, but just isn't important in that context? If it is new in QFT, is there a conceptual reason why it should hold there but not NRQM? The condition that there shouldn't be any invariant subspace is just the statement that the representation of the algebra of observables is irreducible (hence the reference to Schur's lemma in your quote, which is a statement about equivariant maps between irreducible representations). "Completeness" is just a somewhat unusual choice of name for this property.
It appears in ordinary QM for example when we use the Stone-von Neumann theorem to fix the representation of $x$ and $p$ as multiplication and differentiation on $L^2(\mathbb{R})$ - the statement of the theorem is that this is the unique irreducible representation of the CCR.
Irreducibility of the representation of the algebra of observables is equivalently the statement that there are no superselection sectors, see also this answer of mine. | {
"domain": "physics.stackexchange",
"id": 92372,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-field-theory, hilbert-space, operators",
"url": null
} |
statistical-mechanics, bosons, chemical-potential
Title: Bose-Einstein statistics: meaning of the chemical potential The average number of particles at the $\varepsilon_r$ energy level in the Bose-Einstein statistics is given by
$$\bar n_r=\frac{1}{e^{\beta(\varepsilon_r-\mu)}-1}$$
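Numerically, this occupation blows up as $\varepsilon_r \to \mu^+$, while its Fermi-Dirac and Maxwell-Boltzmann counterparts stay finite; a dimensionless comparison with $x = \beta(\varepsilon_r - \mu)$:

```python
import math

def bose_einstein(x):       # x = beta * (eps - mu), must be > 0
    return 1.0 / (math.exp(x) - 1.0)

def fermi_dirac(x):
    return 1.0 / (math.exp(x) + 1.0)

def maxwell_boltzmann(x):
    return math.exp(-x)

# Only the Bose-Einstein occupancy diverges as x -> 0+:
for x in (2.0, 1.0, 0.1, 0.01):
    print(x, bose_einstein(x), fermi_dirac(x), maxwell_boltzmann(x))
```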
I wonder what significance the chemical potential $\mu$ has in these statistics. I see that $\varepsilon_r$ must be bigger than $\mu$, since $n_r$ would diverge at $\varepsilon_r=\mu$ and would take negative values for $\varepsilon_r<\mu$. This doesn't happen in the Fermi-Dirac or Maxwell-Boltzmann statistics (plotted in red and green in the picture).
I understand that, as bosons don't follow the Pauli exclusion principle, they tend to fill first the lowest energy levels, but what is the meaning of the divergence and negative values of $n_r$ at $\varepsilon_r\leq\mu$? What is the role of the chemical potential? From a statistical-mechanical point of view, the chemical potential $\mu$ is just a parameter of a probability distribution. Together with other parameters, such as the inverse temperature $\beta$, this parameter determines some expectation values of the probability distribution, for example the expectations of the total energy and total number of quanta. These expectations reflect the values that we macroscopically observe on average.
From this point of view the constraint $\mu < \epsilon_r$ is simply necessary for such expectations to have sensible values – positive energy and positive number of quanta. The mathematics behind this is explained in this answer to another question of yours. | {
"domain": "physics.stackexchange",
"id": 75584,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, bosons, chemical-potential",
"url": null
} |
ros, sound-play, rosdep, ros-indigo
Title: how to install sound_play
I'm using ROS Indigo. I tried the command "rosdep install sound_play". It says,
ERROR: Rosdep cannot find all required resources to answer your query
Missing resource sound_play
ROS path [0]=/opt/ros/indigo/share/ros
ROS path [1]=/opt/ros/indigo/share
ROS path [2]=/opt/ros/indigo/stacks
I'm new to Linux.
Originally posted by Veera Ragav on ROS Answers with karma: 240 on 2017-05-03
Post score: 1
The command that you are trying to run installs dependencies of the sound_play package. To figure out what those dependencies are, your system needs to be able to find the package.xml for the sound_play package. So you need to have that package on your machine somewhere. You should be able to type sudo apt-get install ros-indigo-sound-play to install the package from apt-get. Then the above command should work.
Alternatively, you could have a copy of the source code for the sound_play package in a workspace, and have sourced the setup.bash file for that workspace. Then rosdep would use the package.xml located in your workspace instead of the one in /opt/ros/indigo/share.
Note there are many other questions with this same issue on answers.ros.org, these would have been a good place to start. E.g.:
http://answers.ros.org/question/212453/rosdep-cannot-find-all-required-resources/
http://answers.ros.org/question/57801/missing-resource/
http://answers.ros.org/question/75241/install-ros-dependencies-from-apt/
Originally posted by jarvisschultz with karma: 9031 on 2017-05-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 27791,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, sound-play, rosdep, ros-indigo",
"url": null
} |
atmospheric-science, density, air
Title: Why does the composition of the air not change with altitude? Air contains about 78% nitrogen and 21% oxygen independent of altitude (up to 100 km). Why is this? Shouldn't the concentration of nitrogen increase with higher altitudes since nitrogen has a lower density than oxygen?
Shouldn't the concentration of nitrogen increase with higher altitudes since nitrogen has a lower density than oxygen?
No, it shouldn't, at least not up to 100 km or so. Look at your graph, which shows that even argon is well-mixed throughout the lower atmosphere (the troposphere, stratosphere, and mesosphere). Argon atoms are considerably more massive than are carbon dioxide molecules, which in turn are considerably more massive than oxygen and nitrogen molecules, and yet all of these (along with all of the long-lived gases in the atmosphere) are well-mixed throughout the lower atmosphere.
The reason is that the lower atmosphere is dense enough to support turbulence while the upper atmosphere is not. The turbopause marks the somewhat fuzzy boundary below which turbulent mixing dominates over diffusion and above which it's diffusion that dominates. | {
"domain": "physics.stackexchange",
"id": 29725,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "atmospheric-science, density, air",
"url": null
} |
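The diffusion-vs-turbulence argument above can be made quantitative with the scale height $H = kT/(mg)$: in pure diffusive equilibrium each gas would settle with its own e-folding height, with heavier molecules concentrated lower down. A minimal sketch (the temperature and surface gravity below are rough assumed values, not taken from the post):

```python
# Scale height H = k*T / (m*g): the e-folding height of a gas in
# diffusive equilibrium. Heavier molecules get a smaller H, so WITHOUT
# turbulent mixing the composition would change with altitude.
K_B = 1.380649e-23    # Boltzmann constant, J/K
G_SURF = 9.81         # surface gravity, m/s^2 (rough assumption)
T = 250.0             # rough mean atmospheric temperature, K (assumption)
AMU = 1.66053907e-27  # atomic mass unit, kg

def scale_height(mass_amu, temp=T, g=G_SURF):
    """Scale height in km for a gas of the given molecular mass (amu)."""
    return K_B * temp / (mass_amu * AMU * g) / 1000.0

h_n2 = scale_height(28.0)  # nitrogen, N2
h_o2 = scale_height(32.0)  # oxygen, O2
h_ar = scale_height(40.0)  # argon, Ar
print(f"N2: {h_n2:.1f} km, O2: {h_o2:.1f} km, Ar: {h_ar:.1f} km")
```

Below the turbopause, turbulent mixing overwhelms these differing scale heights, which is why the measured mixing ratios stay constant up to roughly 100 km.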
game, unit-testing, objective-c
- (void)setUp {
[super setUp];
_boardEval = [[DMBoardEval alloc]init];
}
#pragma mark - First Bugs Found
-(void) testBugWithTwoMatchesWhenShouldBeOne {
//originally this returned 2 matches because of problems with the wild search algorithm
NSMutableArray *customArray = [[NSMutableArray alloc]init];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeYellow]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeYellow]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeYellow]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeBomb]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeBrown]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeGreen]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeBrown]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeRed]];
[customArray addObject:[[DMOrb alloc]initWithType:DMOrbTypeWild]];
NSMutableArray *matches = [_boardEval matchesForOrbs:customArray];
XCTAssert(matches.count == 1, @"Pass");
}
-(void) testBugWithOneBigMatchInsteadOfTwoSmall {
//this returns one big match of WRWBrBr when it should be multiple
//it returns them all as one big array because when it reaches the brown, wild search is active
//and the previous one was a wild, so it adds the current one | {
"domain": "codereview.stackexchange",
"id": 11771,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "game, unit-testing, objective-c",
"url": null
} |
newtonian-mechanics, newtonian-gravity, equivalence-principle
Title: On Newton’s laws and inertial reference frames Why is $\mathbf F = m \, \mathbf a$ only valid in inertial reference frames?
Earth is not an inertial reference frame, even if we ignored its rotation.
On its surface everything is subjected to a uniform gravity.
According to Einstein’s equivalence principle, being still in a uniform gravitational field is the same as being accelerated in the opposite direction, by means of a constant force.
So we are in an accelerated reference frame, and nevertheless Newton’s laws hold, provided we include the force of gravity, which according to us plays the same role as an apparent force would play in any other non-inertial reference frame.
So something is missing here. Either Newton’s laws hold in (uniformly) accelerated reference frames, or something else.
“Gravity is not a force” sounds a lot like a cheat.
Newton’s laws hold, provided we include the force of gravity, which according to us plays the same role as an apparent force would play in any other non-inertial reference frame | {
"domain": "physics.stackexchange",
"id": 83775,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, newtonian-gravity, equivalence-principle",
"url": null
} |
ros, navigation, rosaria, p3at
Title: problem with robot center of turning in navigation
Hi all
We have modified our P3AT robot so that it moves with just its two front wheels: we have disabled the two rear wheels. Accordingly, we have encountered a malfunction in the robot's turning, because the robot turns with two front wheels instead of all four wheels.
Hence I am going to move the robot's center of turning from the center of the robot to the midpoint of the two front wheels.
I'd appreciate it if anyone could help me find the solution.
Best Regards
Farshid
Originally posted by farshid616 on ROS Answers with karma: 63 on 2012-03-31
Post score: 0
You are going to have to edit your robots urdf. Look here for help understanding urdf better. http://www.ros.org/wiki/urdf/Tutorials
Originally posted by Atom with karma: 458 on 2012-04-01
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8810,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, navigation, rosaria, p3at",
"url": null
} |
other methods. There are 12 possible outfits for the student to wear. G2 = second card is green. Are they dependent or independent events? 1) You roll two dice. When two balls are chosen at random without replacement from bag B, the probability that they are both white is $$\frac{2}{7}$$. Tree diagrams can make some probability problems easier to visualize and solve. Now, for the conditional probability we want to view that 3∕4 as if it were 1 whole, which we achieve by multiplying by its reciprocal, namely 4∕3. Disjoint Events (Revisited); Drawing with and without Replacement; Making a Picture – Venn Diagrams, Probability Tables, and Tree Diagrams. Draw two marbles, one at a time, this time without replacement from the urn. The first step to solving a probability problem is to determine the probability that you want to calculate. (a) Draw a tree diagram to represent all the possible paths that the mouse could take. 7E-19 Three desperados A, B and C play Russian roulette in which they take turns pulling the trigger of a six-cylinder revolver loaded with one bullet. We will then draw two balls from the chosen box, without replacement and with equal probability on those remaining. Let's consider another example: Example 2: What is the probability of drawing a king and a queen consecutively from a deck of 52 cards, without replacement? The probability that it is on time on any day is 0. a Draw a tree diagram showing all outcomes and probabilities. If we select 4 computers at random from the distribution center (with replacement) what is the probability that at least 1 of the computers is a tablet computer? P. A tree diagram is a special type of graph used to determine the outcomes of an experiment. two bills without replacement, determine whether the probability that the bills will total $15 is greater than the probability that the bills will total $2. 
Figure 8: Tree diagram for selecting three sweets randomly, showing First, Second and Third Pick levels, 18 outcomes, and probability values. e) Probability distribution for each flavour if three sweets are. We pick a card, write down what it is, then put it back in the deck and draw again. Probability & Tree Diagrams. There is a total of 3 colour sequences through which we end up with 1 of. | {
"domain": "piratestorm.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9865717436787542,
"lm_q1q2_score": 0.8553929746006267,
"lm_q2_score": 0.8670357529306639,
"openwebmath_perplexity": 564.7005215453598,
"openwebmath_score": 0.6850816011428833,
"tags": null,
"url": "http://piratestorm.it/probability-with-replacement-tree-diagram.html"
} |
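The "Example 2" fragment above (a king, then a queen, from 52 cards without replacement) is exactly the kind of calculation a tree diagram encodes: multiply the branch probabilities along a path. A small exact-arithmetic sketch:

```python
from fractions import Fraction

# Drawing WITHOUT replacement: the second level of the tree uses a deck
# of 51 cards, with all 4 queens still present after a king is removed.
p_king_first = Fraction(4, 52)
p_queen_second = Fraction(4, 51)  # conditional on a king being gone
p_path = p_king_first * p_queen_second
print(p_path)  # 4/663

# WITH replacement the draws are independent, so the tree's second
# level looks exactly like the first.
p_with_replacement = Fraction(4, 52) * Fraction(4, 52)
print(p_with_replacement)  # 1/169
```

The contrast between the two results is the whole point of the "with and without replacement" tree diagrams discussed in the excerpt.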
# Why is $a^{5} \equiv a\pmod 5$ for any positive integer?
Why is $a^{5} \equiv a\pmod 5$ for any positive integer?
I feel like it should be obvious, but I just can't see it. Any help appreciated.
Edit: without Fermat's theorem.
• Isn't this a direct application of Fermat's Little Theorem? Mar 21, 2015 at 1:23
• Fermat's little theorem: en.wikipedia.org/wiki/Fermat%27s_little_theorem Mar 21, 2015 at 1:24
• @Deepak Which is usually a hint that the OP doesn't know the theorem. Mar 21, 2015 at 1:25
• possible duplicate of Purpose of Fermat's Little Theorem Mar 21, 2015 at 1:26
• @Deepak: Thomas got it right, I'd never heard of the theorem before.
– Bob
Mar 21, 2015 at 1:28
OK, without using Fermat's Little Theorem (a far more general and elegant result), here's another easy workaround.
Any integer $a$ can be exactly one of $0, 1, 2, -2, -1 \pmod 5$.
Take the fifth powers of each of those and see them reduce back to the original residue in each case.
One way to prove this is to prove it by induction. If $n^5\equiv n\pmod 5$ show that $(n+1)^5\equiv n+1\pmod 5$.
Note that $$(n+1)^5=n^5+5n^4+10n^3+10n^2+5n+1\equiv n^5+1\equiv n+1\pmod5$$
The general theorem people have mentioned in comments, Fermat's Little Theorem, states that if $p$ is prime and $a$ is any number: $a^p\equiv a\pmod p$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363476123225,
"lm_q1q2_score": 0.8480433778992422,
"lm_q2_score": 0.8615382129861583,
"openwebmath_perplexity": 228.66548349144847,
"openwebmath_score": 0.8377379775047302,
"tags": null,
"url": "https://math.stackexchange.com/questions/1199114/why-is-a5-equiv-a-pmod-5-for-any-positive-integer"
} |
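Both arguments in the thread above — the exhaustive residue check over $\{0, \pm 1, \pm 2\}$ and the binomial-expansion induction step — are easy to confirm numerically. A quick sketch:

```python
# Exhaustive residue check: the fifth power of each residue class
# 0, 1, 2, -2, -1 reduces back to the original residue mod 5.
for r in (0, 1, 2, -2, -1):
    assert (r ** 5 - r) % 5 == 0

# Spot-check a^5 == a (mod 5) for many positive integers,
# and the binomial identity used in the induction step.
assert all(pow(a, 5, 5) == a % 5 for a in range(1, 2000))
n = 7
assert (n + 1) ** 5 == n**5 + 5*n**4 + 10*n**3 + 10*n**2 + 5*n + 1
print("all checks passed")
```

Of course this is evidence rather than proof; the two proofs in the thread (and Fermat's little theorem) supply the general argument.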
x x=−+10000 70 0. Solving Problems Using Quadratic Relations 285 Curious Math 296. Quadratic formula. His overhead cost, in dollars, can be modeled by the equation y = 5x² − 10, where x is the number of hours spent repairing televisions. How to Calculate Break Even. 29x This appears to be a strange-looking radical equation. For example, the equation. 55u + 10 and revenue equation R(u) = 0. So let's calculate the square root of b² − 4ac and store it in variable root_part. We may remove the absolute value bars because the left-hand side is never negative, and neither is the right-hand side. 2: Break-Even Analysis and Market Equilibrium A mathematical model is a mathematical description (often by means of a function or an equation) of a real-world phenomenon. 5 where x is the quantity sold in thousands and p is the price in dollars. Here is a formula for finding the roots of any quadratic. Graph quadratic functions. Probably the best-known examples of quadratic models are for projectiles. Louis Arch An equation of the arch will give the height y of any point on the arch at a given horizontal distance x. Graph the profit function over a domain that includes both break-even points. There are other ways of solving a quadratic equation instead of using the quadratic formula, such as factoring (direct factoring, grouping, AC method), completing the square, graphing, and others. Be able to Solve a System of Equations: Solve for one variable in one equation and substitute into the other equation. Then revenue R(x) = xp(x), and the break-even points are the points where R(x) = C(x). Suppose for example you wanted to plot the relationship between the Fahrenheit and Celsius temperature scales. We should always remember that you should choose one number that will be the target triple. Reasoning: The equation 4x² + bx + 9 = 0 has no real-number solutions. What must be true about b? 44. 
Given a general quadratic equation of the form. Break-even analysis is a tool for evaluating the profit potential of a business model and for evaluating various pricing strategies. We get the values of x that we | {
"domain": "chiesaevangelicapresbiteriana.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787879966233,
"lm_q1q2_score": 0.8047802163424543,
"lm_q2_score": 0.8152324848629214,
"openwebmath_perplexity": 551.1855345766834,
"openwebmath_score": 0.6575508117675781,
"tags": null,
"url": "http://vewb.chiesaevangelicapresbiteriana.it/break-even-quadratic-equation.html"
} |
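The break-even idea in the excerpt above — set revenue equal to cost and solve the resulting quadratic with the quadratic formula, storing √(b² − 4ac) in `root_part` — can be sketched concretely. The cost and revenue functions below are illustrative assumptions, not the (truncated) ones from the excerpt:

```python
import math

# Hypothetical model: cost C(x) = x^2 + 16, revenue R(x) = 10x.
# Break-even where R(x) = C(x), i.e. x^2 - 10x + 16 = 0.
a, b, c = 1.0, -10.0, 16.0

# root_part is sqrt(b^2 - 4ac), as in the excerpt's description.
root_part = math.sqrt(b * b - 4 * a * c)
x1 = (-b - root_part) / (2 * a)
x2 = (-b + root_part) / (2 * a)
print(x1, x2)  # 2.0 8.0
```

Between the two roots the profit R(x) − C(x) is positive; outside them, the business loses money — which is why graphing the profit function over a domain containing both break-even points is instructive.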
quantum-mechanics, general-relativity, epr-experiment
String theory replaces the spacetime metric and quantum fields describing elementary particles with new degrees of freedom. In some limits, these degrees of freedom are described by strings propagating in 10 spacetime dimensions. In other situations, this same theory describes other objects like branes or reduces to a theory of supergravity. The main issue string theory has is connecting to observations. That's true of all quantum gravity approaches, but since string theory introduces some new elements (extra dimensions, additional unobserved light fields, multiple vacua), one challenge is explaining what set of ingredients actually reproduces our world. Additionally, no one knows how to formulate string theory (or "M-theory") precisely in a non-perturbative way, outside of the special case of asymptotically Anti de Sitter spacetimes.
Loop quantum gravity keeps Einstein's equations, but uses a different set of degrees of freedom besides the metric and changes the methods we use to quantize a theory. A major challenge for loop quantum gravity is to show that in an appropriate limit, it reduces to classical GR.
Asymptotic safety proposes that Einstein's equations and the fundamental metric degrees of freedom can be quantized directly, and we "simply" have to learn how to quantize the theory when we can no longer use perturbation theory. More technically, the proposal is that the renormalization group for gravity has a non-trivial UV fixed point. A major issue here is showing that this UV fixed point actually exists (and it is not at all clear that it does).
It is worth noting that the holographic principle seems to imply that quantum gravity should have a degree of "non-localness" to it, which is not present in quantum theory. This is perhaps a reason to be skeptical of the asymptotic safety scenario. | {
"domain": "physics.stackexchange",
"id": 91104,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, general-relativity, epr-experiment",
"url": null
} |
gravity, forces, black-holes, universe
Response to comment:
Suppose you're hovering above a black hole and you let down a rope until it touches the event horizon. The rope will always be torn apart. This happens because all forces, electrostatic, strong and weak, propagate at the speed of light, and at the event horizon even the speed of light is not high enough to escape.
So if this is the experiment you're thinking of, then once your charged particle has got sufficiently close to the event horizon you would not be able to pull it back out with a particle of the opposite charge.
However, this isn't to do with the fundamental strength of the gravitational force. It's because one electron has a hard time resisting the gravitational force of an entire black hole.
Response to response to comment:
If you're asking "Is gravity stronger than the electromagnetic force" then you need to define what you mean. You are quite correct that at a black hole event horizon, the electromagnetic force would not be enough to hold together an object that partially crossed the event horizon. In that sense gravity at the event horizon is stronger than the electromagnetic force, and indeed it's stronger than any force.
But a physicist would not conclude gravity was stronger because you're not comparing like with like. To do a fair comparison you need to take two test particles with both charge and mass, put them some set distance apart and measure first the electrostatic force between them, then the gravitational force between them. Whether you're near a black hole or not, this experiment will conclude that the gravitational force is much much weaker than the electrical force.
If you conclude that gravity is stronger than electromagnetism at an event horizon all you're really saying is that the gravitational force due to the (at least) $10^{31}$ kilograms of a black hole is stronger than the electrostatic force from the $10^{-19}$ coulombs of charge on an electron. Well, yes, that's not terribly surprising! | {
"domain": "physics.stackexchange",
"id": 4760,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gravity, forces, black-holes, universe",
"url": null
} |
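The "like with like" comparison the answer describes — the same two test particles, electrostatic force versus gravitational force — can be sketched for a pair of electrons. Since both forces fall off as 1/r², the separation cancels out of the ratio (CODATA-style constants assumed):

```python
# Coulomb force / Newtonian gravity for two electrons; r cancels:
#   F_e / F_g = k * e^2 / (G * m_e^2)
K_E = 8.9875517873681764e9  # Coulomb constant, N m^2 / C^2
G = 6.67430e-11             # gravitational constant, N m^2 / kg^2
Q_E = 1.602176634e-19       # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

ratio = (K_E * Q_E**2) / (G * M_E**2)
print(f"{ratio:.3e}")  # roughly 4.17e+42
```

That ~10⁴² disparity is what physicists mean when they call gravity the weakest force, even though the aggregate gravity of a ~10³¹ kg black hole easily overwhelms the electrostatic pull on a single charge.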
ros, catkin-make, package.xml
-- Configuring incomplete, errors occurred!
See also "/home/fatima/catkin_ws1/build/CMakeFiles/CMakeOutput.log".
See also "/home/fatima/catkin_ws1/build/CMakeFiles/CMakeError.log".
make: *** [cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed
and this is my CMakeList.txt
cmake_minimum_required(VERSION 2.8.3)
project(beginner2_tutorials)
## Find catkin and any catkin packages
find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs genmsg message_generation)
## Declare ROS messages and services
add_message_files(FILES Num.msg )
add_service_files(FILES AddTwoInts.srv)
## Generate added messages and services
generate_messages(DEPENDENCIES std_msgs)
catkin_package( INCLUDE_DIRS include
LIBRARIES ${beginner2_tutorials} CATKIN_DEPENDS message_runtime roscpp nodelet)
## DEPENDS eigen opencv )
## Declare a catkin package
##catkin_package(CATKIN_DEPENDS message_runtime std_msgs sensor_msgs)
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
INCLUDE_DIRS include
LIBRARIES beginner2_tutorials
CATKIN_DEPENDS message_runtime roscpp rospy std_msgs
DEPENDS system_lib
DEPENDS message_runtime
)
catkin_package()
## Build talker and listener
include_directories(include ${catkin_INCLUDE_DIRS})
add_executable(talker src/talker.cpp)
target_link_libraries(talker ${catkin_LIBRARIES})
add_dependencies(talker beginner2_tutorials_generate_messages_cpp) | {
"domain": "robotics.stackexchange",
"id": 23153,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, catkin-make, package.xml",
"url": null
} |
quantum-mechanics, measurement-problem, quantum-measurements, foundations, born-rule
Does this mean that axioms 1-3 taken together are inconsistent? Is there a flaw in this reasoning? There is no inherent contradiction in the statements you line up, but it appears to be a wrong description of empirical reality. The missing piece is the Born rule, which tells you how to relate quantum states to measurements.
With the Born rule there is indeed a contradiction, since you have stated that all physical systems are completely described by quantum states which evolve unitarily, while the Born rule specifies a special class of physical processes, measurements, where the interaction of two physical systems leads to a non-unitary evolution generally known as the wave function collapse.
This seemingly ad hoc addition of measurement as a special physical process gives us an extremely successful theory for predicting empirical reality, but is considered unsatisfying to many.
Note that in the many-worlds interpretation, the result of the measurement is actually $|\Omega\rangle$, i.e. the measurement yields all outcomes at once, but for some reason your empirical perception chooses a random one according to the Born rule. This school of thought shifts the inconsistency such that the measurement apparatus is not the "unphysical" non-unitary system, but rather your perception is. Whether this is more or less satisfying is up to the individual, with the predictions of the theory remaining unchanged. | {
"domain": "physics.stackexchange",
"id": 99445,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, measurement-problem, quantum-measurements, foundations, born-rule",
"url": null
} |
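The Born rule invoked in the answer assigns each measurement outcome the squared modulus of its amplitude; the non-unitary "collapse" then replaces the state with the observed branch. A minimal numerical illustration (the particular state and basis below are arbitrary assumptions):

```python
# A qubit state a|0> + b|1> with complex amplitudes.
amplitudes = [0.6, 0.8j]

# Born rule: P(outcome k) = |amplitude_k|^2.
probs = [abs(a) ** 2 for a in amplitudes]
print(probs)  # approximately [0.36, 0.64]
assert abs(sum(probs) - 1.0) < 1e-12  # normalization

# "Collapse": after observing outcome k the post-measurement state is
# the k-th basis state -- a many-to-one map, which no single unitary
# (always invertible) can implement for all input states.
def collapse(k):
    return [1.0 if i == k else 0.0 for i in range(len(amplitudes))]

print(collapse(1))  # [0.0, 1.0]
```

The comment about invertibility is the computational face of the tension described in the answer: unitary evolution preserves distinctions between states, while the collapse map destroys them.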
#### caffeinemachine
##### Well-known member
MHB Math Scholar
1. Define a graph.
A graph is a two-dimensional drawing showing a relationship, usually between two sets of numbers, by means of a line, curve, a series of bars, or other symbols. Usually, the independent variable is shown on the horizontal line (X-Axis) and the dependent variable is shown on the vertical line (Y-Axis). When the axes are perpendicular, they intersect at a point called the origin and are calibrated in the units of the quantities represented. A graph usually has four quadrants representing the positive and negative values of the variables. Most of the time only the north-east quadrant is shown on the graph, when the negative values do not exist or are of no interest. A graph can also be defined as a diagram of values, usually shown as lines or bars.
2. What is an ‘edge’ of a graph?
The edge of a graph is the intersection of two planes or faces. The edges can also be called arcs. A simple graph consists of a set of vertices (V) and a set of unordered pairs of distinct elements of V, called edges. Graphs are not all simple. Sometimes a pair of vertices is connected by multiple edges, yielding a multigraph. At times a vertex is connected to itself by an edge called a loop, creating a pseudograph. Edges can also be given a direction, creating a directed graph. An edge joins one vertex to another or starts and ends at the same vertex. Edges can be drawn as straight or curved lines and are also known as arcs.
3. Does it matter if we join two vertices with a straight line or a curved line when we represent a graph by drawing it on paper? | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9728307731905612,
"lm_q1q2_score": 0.8746929380674514,
"lm_q2_score": 0.8991213705121083,
"openwebmath_perplexity": 301.7428207620524,
"openwebmath_score": 0.5917503237724304,
"tags": null,
"url": "https://mathhelpboards.com/threads/5-vertices-denoted-by-k5-part-b.5864/"
} |
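The definitions being asked about come from graph theory — a graph as vertices plus edges — rather than from plotting on axes. A small sketch of a simple graph as a vertex set and a set of unordered vertex pairs, which also shows why drawing edges straight or curved cannot matter (the particular vertices and edges are arbitrary assumptions):

```python
# A simple graph: a set V of vertices and a set E of unordered pairs
# (frozensets) of distinct vertices. The drawing (straight vs. curved
# edges) never appears in the data, so it is irrelevant to the graph.
V = {"a", "b", "c", "d"}
E = {frozenset({"a", "b"}), frozenset({"b", "c"}), frozenset({"a", "c"})}

def degree(v):
    """Number of edges incident to vertex v."""
    return sum(1 for e in E if v in e)

print(sorted(degree(v) for v in V))  # [0, 2, 2, 2]

# Parallel edges (a multigraph) or a self-loop (a pseudograph) would
# need a multiset of pairs, or pairs allowing repeated vertices.
```

Representing edges as unordered pairs is what makes the graph undirected; switching to ordered tuples would give the directed graphs mentioned in the answer.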
php, authentication, laravel, controller
} catch(Exception $e) {
ErrorLogging::logError($e);
return abort('500');
}
}
/**
* activeSessionCheck
* Helper function to check session objects.
*
* @param Sessions $session The session to check.
* @return \Illuminate\Http\RedirectResponse | null
*/
private static function activeSessionCheck(Sessions $session) {
if($session->ip_address !== $_SERVER['REMOTE_ADDR']) {
$session->delete();
\Session::forget('sessionId');
return redirect()->route('login');
}
if($session->authenticated === 1) {
return redirect()->route('authHome');
}
return null;
}
/**
* check
* Validates if the user is authenticated on this IP Address.
*
* @return bool
*/
public static function check() {
if(!\Session::has('sessionId')) {
return false;
}
$cryptor = new Cryptor();
$sessionId = $cryptor->decrypt(\Session::get('sessionId'));
$session = Sessions::where('id', $sessionId)->first();
if($session->ip_address !== $_SERVER['REMOTE_ADDR']) {
$session->delete();
\Session::forget('sessionId');
return false;
}
return true;
}
/**
* adminCheck
* Validates if the user is an authenticated admin user.
*
* @return bool
*/
public static function adminCheck() {
$check = self::check();
if(!$check) {
return $check;
}
$cryptor = new Cryptor();
$sessionId = $cryptor->decrypt(\Session::get('sessionId'));
$session = Sessions::where('id', $sessionId)->first(); | {
"domain": "codereview.stackexchange",
"id": 24357,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, authentication, laravel, controller",
"url": null
} |
navigation, robot-localization, ros-noetic, diff-drive-controller
<collision name='surface'>
<pose>0 0 1 0 -0 0</pose>
<geometry>
<box>
<size>1.5 0.8 0.03</size>
</box>
</geometry>
<surface>
<friction>
<ode>
<mu>0.6</mu>
<mu2>0.6</mu2>
</ode>
<torsional>
<ode/>
</torsional>
</friction>
<contact>
<ode/>
</contact>
<bounce/>
</surface>
<max_contacts>10</max_contacts>
</collision>
<visual name='visual1'>
<pose>0 0 1 0 -0 0</pose>
<geometry>
<box>
<size>1.4 0.8 0.04</size>
</box>
</geometry>
<material>
<script>
<uri>file://media/materials/scripts/gazebo.material</uri>
<name>Gazebo/Wood</name>
</script>
</material>
</visual>
<collision name='front_left_leg'>
<pose>0.68 0.38 0.5 0 -0 0</pose>
<geometry>
<cylinder>
<radius>0.02</radius>
<length>1</length>
</cylinder>
</geometry>
<max_contacts>10</max_contacts>
<surface>
<contact>
<ode/>
</contact>
<bounce/>
<friction>
<torsional>
<ode/>
</torsional>
<ode/>
</friction>
</surface>
</collision>
<visual name='front_left_leg'>
<pose>0.68 0.38 0.5 0 -0 0</pose>
<geometry>
<cylinder>
<radius>0.02</radius>
<length>1</length>
</cylinder>
</geometry>
<material>
<script>
<uri>file://media/materials/scripts/gazebo.material</uri>
<name>Gazebo/Grey</name>
</script>
</material>
</visual> | {
"domain": "robotics.stackexchange",
"id": 38884,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, robot-localization, ros-noetic, diff-drive-controller",
"url": null
} |
beginner, sql, sql-server, t-sql, stackexchange
Title: Tag wikis under a certain length I wrote a SEDE query to list all tag wikis with a body or an excerpt under a given number of characters. I've used this to find empty wikis or wikis with very little information. The tags can be sorted by post count to make choosing which to improve easier.
This is one of the first times I've written anything in SQL, I took ideas from other queries I found and threw them together. The conditional WHERE clause is my biggest concern. It looks really messy, but I'm not sure what the best way to achieve the same effect would be.
I'm aware that it doesn't actually get wikis with a length under the given number, but under or equal to it. I wanted 0 to mean an empty wiki, as that feels more intuitive than 1. I'm not sure about this design decision; what do you think?
-- MaxBodyLength: Max body length "Set to -1 to disable"
DECLARE @max_body_length INT = ##MaxBodyLength:INT?100##;
-- MaxExcerptLength: Max excerpt length "Set to -1 to disable"
DECLARE @max_excerpt_length INT = ##MaxExcerptLength:INT?-1##;
-- Max SE post length (see http://meta.stackexchange.com/a/176447/299387)
DECLARE @LEN_MAX INT = 30000; | {
"domain": "codereview.stackexchange",
"id": 18649,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, sql, sql-server, t-sql, stackexchange",
"url": null
} |
graphs, linear-programming, network-flow
As raised by Willard Zhan, point (*) requires a proof: $N=N_1+N_2$.
Proof:
Consider a maximal solution $(N_1,N_2)$ for the priority problem. Assume it's not a global max flow: $N_1+N_2<N$. Then there is an augmenting path (without cycle) in the residual graph from $s$ to $t$. This path exits only once from the source, providing a way to increase either $N_1$ or $N_2$ and leave the other one unchanged. Both contradict the maximality of $(N_1,N_2)$ for the lexicographical order.
This proof relies on a nice fact about max flow: there is no need to ever decrease the flow on an edge from the source to find a better flow (when there is one).
This solution generalizes easily to any number of priorities. You need to solve as many max flow problems as priorities plus the last one. | {
"domain": "cs.stackexchange",
"id": 9968,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "graphs, linear-programming, network-flow",
"url": null
} |
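The proof above leans on augmenting paths in the residual graph. A compact Edmonds–Karp sketch (BFS for shortest augmenting paths) makes that mechanism concrete; the small capacity matrix at the end is an arbitrary assumed example, not taken from the post:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest s-t path
    in the residual graph until none exists."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS over residual capacities cap[u][v] - flow[u][v].
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:  # no augmenting path left: flow is maximal
            return total
        # Find the bottleneck along the path, then push flow along it.
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # creates the residual (reverse) edge
            v = u
        total += bottleneck

# Vertices: 0 = s, 1 = a, 2 = b, 3 = t.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5
```

For the priority version discussed above, one would run such a solver once per priority level, fixing each commodity's flow before maximizing the next in the residual network.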
It would be very instructive to compare the classic proof that $\sqrt{2}$ is irrational, which starts off very similar to what you have. But there is the crucial additional requirement that $\gcd(p, q) = 1$, that is, they have no prime factors in common. If they do have prime factors in common, divide them out so the fraction is in lowest terms.
Then you square both sides to obtain $$2 = \frac{p^2}{q^2}.$$ Moving stuff around, we get $p^2 = 2q^2$, which means that $p^2$ is even. From this we derive a contradiction of $\gcd(p, q) = 1$.
I think that if you fully understand this proof, you should be able to carry the same concepts over to proving $\sqrt{3}$ is also irrational.
A little housekeeping note: $\sqrt{3} q$ could cause confusion with $\sqrt{3q}$. It would be clearer to write $q\sqrt{3}$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9877587257892506,
"lm_q1q2_score": 0.8075663272333466,
"lm_q2_score": 0.8175744806385542,
"openwebmath_perplexity": 133.71074885048233,
"openwebmath_score": 0.8218138813972473,
"tags": null,
"url": "https://math.stackexchange.com/questions/1496048/is-this-a-valid-proof-that-sqrt-3-is-irrational"
} |
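The contradiction in the proof sketched above can also be probed numerically: if √3 = p/q in lowest terms, then p² = 3q², and no pair of positive integers satisfies that equation at all. A brute-force sketch over a small range:

```python
from math import gcd

# Search for p^2 = 3 q^2 with gcd(p, q) = 1 -- the equation a rational
# sqrt(3) = p/q in lowest terms would have to satisfy.
hits = [(p, q)
        for q in range(1, 500)
        for p in range(1, 1000)
        if p * p == 3 * q * q and gcd(p, q) == 1]
print(hits)  # []

# The engine of the proof: if 3 divides p^2 then 3 divides p
# (because 3 is prime), which is what forces the gcd contradiction.
assert all((p * p) % 3 != 0 or p % 3 == 0 for p in range(1, 1000))
```

As with the √2 case, the finite search is only illustrative; the gcd argument is what rules out solutions for all p and q.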
brushless-motor
Title: How to assemble brushless motors and propellers? I'm building a quadcopter and I've received my motors and propellers.
What's the right way to assemble those together?
I'm not confident with what I've done, as I'm not sure the propeller would stay in place on a clockwise rotating motor.
I mean, if the motor rotates clockwise, will the screw stay tightly in place, even with the prop's inertia pushing counter-clockwise?
Here's what I've done (of course i'll tighten the screw...) : This is correct. Just make sure to insert a lever into the hole at the top to really tighten the nut! | {
"domain": "robotics.stackexchange",
"id": 137,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "brushless-motor",
"url": null
} |
bash, unix, cgi
echo "<br />"
echo "<a href=\"/cgi-bin/passwd.sh?cmd=passform\">Password</a>"
echo "<a href=\"/cgi-bin/passwd.sh?cmd=contactform\">Contact</a>"
echo "<a href=\"/cgi-bin/passwd.sh?cmd=resetform\">Reset Password</a>"
echo "</body>"
}
# START bash_cgi
# Created by Philippe Kehl
# http://oinkzwurgl.org/bash_cgi
# (internal) routine to store POST data
function cgi_get_POST_vars()
{
# check content type
# FIXME: not sure if we could handle uploads with this..
[ "${CONTENT_TYPE}" != "application/x-www-form-urlencoded" ] && \
echo "bash.cgi warning: you should probably use MIME type "\
"application/x-www-form-urlencoded!" 1>&2
# save POST variables (only first time this is called)
[ -z "$QUERY_STRING_POST" \
-a "$REQUEST_METHOD" = "POST" -a ! -z "$CONTENT_LENGTH" ] && \
read -n $CONTENT_LENGTH QUERY_STRING_POST
# prevent shell execution
local t
t=${QUERY_STRING_POST//%60//} # %60 = `
t=${t//\`//}
t=${t//\$(//}
t=${t//%24%28//} # %24 = $, %28 = (
QUERY_STRING_POST=${t}
return
} | {
"domain": "codereview.stackexchange",
"id": 16364,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bash, unix, cgi",
"url": null
} |
javascript, array, tree
var goal = paths.reduce(function(carry, pathEntry){
// On every path entry, resolve using the base object
resolvePath(carry, pathEntry.path);
// Return the base object for succeeding paths, or for our final value
return carry;
// Create our base object
}, createPath());
document.write(JSON.stringify(goal)); | {
"domain": "codereview.stackexchange",
"id": 15785,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, array, tree",
"url": null
} |
terminology, asteroids, meteors, meteorites, meteoroids
Title: How do the terms comet, asteroid, meteoroid, meteor and meteorite differ? These terms are frequently used interchangeably by the uninitiated to mean approximately a “space rock”. In practical terms, how do their meanings differ? At the risk of being snarky (each definition is from wikipedia)...
Comet - A comet is an icy small Solar System body that, when close enough to the Sun, displays a visible coma (a thin, fuzzy, temporary atmosphere) and sometimes also a tail. These phenomena are both due to the effects of solar radiation and the solar wind upon the nucleus of the comet. Comet nuclei range from a few hundred meters to tens of kilometers across and are composed of loose collections of ice, dust, and small rocky particles.
Asteroid - Asteroids (from Greek ἀστήρ 'star' and εἶδος 'like, in form') are a class of small Solar System bodies in orbit around the Sun. They have also been called planetoids, especially the larger ones. These terms have historically been applied to any astronomical object orbiting the Sun that did not show the disk of a planet and was not observed to have the characteristics of an active comet, but as small objects in the outer Solar System were discovered, their volatile-based surfaces were found to more closely resemble comets, and so were often distinguished from traditional asteroids.
Meteor - A meteoroid is a sand- to boulder-sized particle of debris in the Solar System. The visible path of a meteoroid that enters Earth's (or another body's) atmosphere is called a meteor, or colloquially a shooting star or falling star. If a meteoroid reaches the ground and survives impact, then it is called a meteorite. Many meteors appearing seconds or minutes apart are called a meteor shower. The root word meteor comes from the Greek meteōros, meaning "high in the air". | {
"domain": "physics.stackexchange",
"id": 14662,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "terminology, asteroids, meteors, meteorites, meteoroids",
"url": null
} |
python, json, api, flask
matches = []
for repo in repo_list:
repo_info = {}
repo_name = repo["name"]
repo_info["repo"] = repo_name
page = 1
items_exist = True
total_count = 0
while (items_exist and page <= NUM_PAGES):
commits = requests.get( f"https://api.github.com/search/commits?q={include} repo:{org}/{repo_name}&per_page={MAX_PER_PAGE}&page={page}", headers=HEADERS).content
commits = json.loads(commits.decode('utf-8'))
if "message" in commits:
matches.append({"reached_api_limit": True})
return {"matches": matches}
if "items" in commits and len(commits["items"]) > 0 and (page == 1 or commits["total_count"] > (page - 1) * MAX_PER_PAGE):
for commit in commits["items"]:
if word_to_ignore in commit["commit"]["message"]:
continue
repo_info["message"] = commit["commit"]["message"]
matches.append(repo_info)
if commits["total_count"] < MAX_PER_PAGE:
items_exist = False
else:
items_exist = False
page += 1
return {"matches": matches}
@ app.route('/search', methods=['POST'])
def example():
return get_matches()
if __name__ == '__main__':
app.run()

if request.method == 'POST': is redundant; delete it.
Your
your request is missing the 'org', 'include' or 'ignore' parameters
is less than helpful. Why not tell the user exactly what they're missing?
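As a hypothetical sketch (the key names follow the question, but this helper is not from the original answer), the handler could compute and report exactly which keys are absent:

```python
# Hypothetical helper: list the exact missing request keys instead of a
# generic "your request is missing ..." message.
REQUIRED_KEYS = ("org", "include", "ignore")

def validate_payload(payload):
    missing = [key for key in REQUIRED_KEYS if key not in payload]
    if missing:
        return {"error": "your request is missing: " + ", ".join(missing)}
    return None  # payload is complete

print(validate_payload({"org": "octocat"}))
# -> {'error': 'your request is missing: include, ignore'}
```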
This except block:
try:
org = request.get_json()["org"]
include = request.get_json()["include"]
word_to_ignore = request.get_json()["ignore"] | {
"domain": "codereview.stackexchange",
"id": 41170,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, json, api, flask",
"url": null
} |
is an 80% chance that Theory A is true, but a 20% chance that Theory B is true instead”. I absolutely know that if you adopt a sequential analysis perspective you can avoid these errors within the orthodox framework. So the probability that both of these things are true is calculated by multiplying the two. Remember what I said in Section 16.10 about ANOVA being complicated. We shall not often be astray if we draw a conventional line at .05 and consider that [smaller values of $$p$$] indicate a real discrepancy. Unlike frequentist statistics, Bayesian statistics does allow one to talk about the probability that the null hypothesis is true. In real life, the things we actually know how to write down are the priors and the likelihood, so let’s substitute those back into the equation. Orthodox methods cannot tell you that “there is a 95% chance that a real change has occurred”, because this is not the kind of event to which frequentist probabilities may be assigned. That way, anyone reading the paper can multiply the Bayes factor by their own personal prior odds, and they can work out for themselves what the posterior odds would be. Just as we saw with the contingencyTableBF() function, the output is pretty dense. It is a well-written book on elementary Bayesian inference, and the material is easily accessible. Although the bolded passage is the wrong definition of a $$p$$-value, it’s pretty much exactly what a Bayesian means when they say that the posterior probability of the alternative hypothesis is greater than 95%. Book on Bayesian statistics for a "statistician" Close. See Rouder et al. | {
"domain": "kvb.hu",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9732407214714477,
"lm_q1q2_score": 0.8329491877465816,
"lm_q2_score": 0.8558511469672594,
"openwebmath_perplexity": 479.9039466795225,
"openwebmath_score": 0.6960655450820923,
"tags": null,
"url": "http://www.kvb.hu/vf1wj/bayesian-statistics-in-r-book-2097ee"
} |
$$\frac{1}{{{\text{ln}}\,n}} > \frac{1}{n}$$ (for $$n \geqslant 2$$) A1
Note: Award A0 for statements such as $$\sum\limits_{n = 2}^\infty {\frac{1}{{{\text{ln}}\,n}}} > \sum\limits_{n = 2}^\infty {\frac{1}{n}}$$. However condone such a statement if the above A1 has already been awarded.
a correct statement linking $$n$$ and $$n + 2$$ eg,
$$\sum\limits_{n = 0}^\infty {\frac{1}{{{\text{ln}}\left( {n + 2} \right)}}} = \sum\limits_{n = 2}^\infty {\frac{1}{{{\text{ln}}\,n}}}$$ or $$\sum\limits_{n = 0}^\infty {\frac{1}{{n + 2}}} = \sum\limits_{n = 2}^\infty {\frac{1}{n}}$$ A1
Note: Award A0 for $$\sum\limits_{n = 0}^\infty {\frac{1}{n}}$$
$$\sum\limits_{n = 2}^\infty {\frac{1}{n}}$$ (is a harmonic series which) diverges
(which implies that $$\sum\limits_{n = 2}^\infty {\frac{1}{{{\text{ln}}\,n}}}$$ diverges by the comparison test) R1
Note: The R1 is independent of the A1s. | {
"domain": "iitianacademy.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9898303440461105,
"lm_q1q2_score": 0.8311941601432017,
"lm_q2_score": 0.8397339656668286,
"openwebmath_perplexity": 2170.4203858038,
"openwebmath_score": 0.9161812663078308,
"tags": null,
"url": "https://www.iitianacademy.com/ib-dp-maths-topic-9-2-series-that-converge-absolutely-hl-paper-3/"
} |
discrete-signals, frequency-spectrum
$$XCORR = \sum_{n=0}^{N-1}x[n]y^*[n]$$
Where $y[n]=e^{j2\pi n k/N}$, and we index through each $k$, starting with $k=0$ (as DC), with $k=1$ as the first harmonic given as $e^{j2\pi n/N}$, and all higher harmonics up to $k=N-1$. We stated earlier that the first harmonic is $2\pi f_s/N$, which was the actual frequency in Hz for a sampling rate of $f_s$ Hz for the continuous waveform $e^{j2\pi (f_s/N) t}$. If we use units of normalized frequency (by dividing by the sampling rate) to get cycles/sample, the sampling rate becomes $f_s = 1$, and time indexes by $n$. Thus we get $e^{j2\pi n/N}$ as the first harmonic and $e^{j2\pi n k/N}$ for all higher harmonics. This is complex-conjugate multiplied by our unknown signal $x[n]$, which results in, for each $k$, the weight of that harmonic as a component of the signal.
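A small numerical check of this view (a standalone sketch; the 16-point length and the test tone at the 3rd harmonic are arbitrary choices, not from the answer): computing each bin as the zero-lag cross-correlation of $x[n]$ with the conjugated $k$-th harmonic recovers the tone's weight.

```python
import cmath
import math

# Each DFT bin k is the zero-lag cross-correlation of x[n] with the k-th
# harmonic y[n] = e^{j 2*pi*n*k/N}:  X[k] = sum_n x[n] * conj(y[n]).
N = 16
x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]  # test tone at k = 3

def dft_bin(x, k):
    N = len(x)
    return sum(x[n] * cmath.exp(-1j * 2 * math.pi * n * k / N) for n in range(N))

# The correlation peaks at the tone's harmonic (k = 3, and its mirror N - 3),
# each with weight N/2 = 8, and is ~0 at every other k.
print([round(abs(dft_bin(x, k)), 6) for k in range(N)])
```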
Consider in comparison the same form for continuous time signals as the Fourier Series Expansion which is just a cross-correlation as the integration of a complex conjugate product:
$$c_k = \frac{1}{T}\int_0^T x(t)e^{-jk\omega_o t}dt$$ | {
"domain": "dsp.stackexchange",
"id": 11703,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "discrete-signals, frequency-spectrum",
"url": null
} |
For independent random variables $X$ and $Y$ with means $\mu_X$ and $\mu_Y$ respectively, and variances $\sigma_X^2$ and $\sigma_Y^2$ respectively, \begin{align}\require{cancel} \operatorname{var}(X+Y+XY) &= \operatorname{var}(X)+\operatorname{var}(Y)+\operatorname{var}(XY)\\ &\quad +2\cancelto{0}{\operatorname{cov}(X,Y)}+2\operatorname{cov}(X,XY)+2\operatorname{cov}(Y,XY)\\ &=\sigma_X^2+\sigma_Y^2+\big(\sigma_X^2\sigma_Y^2+\sigma_X^2\mu_Y^2+\sigma_Y^2\mu_X^2\big)\\ &\quad +2\operatorname{cov}(X,XY)+2\operatorname{cov}(Y,XY). \end{align} Now, \begin{align} \operatorname{cov}(X,XY) &= E[X\cdot XY] - E[X]E[XY]\\ &=E[X^2Y]-E[X]\big(E[X]E[Y]\big)\\ &= E[X^2]E[Y]-\big(E[X]\big)^2E[Y]\\ &= \sigma_X^2\mu_Y \end{align} and similarly, $\operatorname{cov}(Y,XY) = \sigma_Y^2 \mu_X$. Consequently, \begin{align}\operatorname{var}(X+Y+XY) &=\sigma_X^2+\sigma_Y^2+\sigma_X^2\sigma_Y^2+\sigma_X^2\mu_Y^2+\sigma_Y^2\mu_X^2+2\sigma_X^2\mu_Y+2\sigma_Y^2\mu_X.\end{align} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.971992482639258,
"lm_q1q2_score": 0.8013707967564706,
"lm_q2_score": 0.8244619285331332,
"openwebmath_perplexity": 2379.9094727166885,
"openwebmath_score": 0.999935507774353,
"tags": null,
"url": "https://stats.stackexchange.com/questions/323905/variance-of-xyxy"
} |
c#, wpf
Here is where the value comes in because I can override FindFile in FilePathChecker easily, and I can pass in an instance of that easily into ClientPropertyFactory. So now all the math and resource finding things can go into the ClientPropertyFactory and be tested in a test framework (such as NUnit, or Microsoft's test framework). The underlying issue is why go through all that trouble to make 2 more classes just to return a client? Simply put, for testing purposes. If all that code is testable then you know it works as you expect without having to run the actual application. It also gears your mind more towards the MVVM principle of separating work that you do in the code behind from actual work that you want done. Also, by separating now, when you go back to this code and decide to use the MVVM approach you'll find it almost trivial. (Essentially it would be a matter of setting your button bindings to a method which is just the inside of your current button click events.) I personally think that this type of work is being overshadowed by different approaches such as MVVM. Granted WPF was built with MVVM in mind, but what if someone comes in and says oh MVVM is SOOOOoooo 2015, we use MVMVPM now, or your company decides that MVVM is too strange to work with and they only want the MVC/MVP pattern used? No big deal, because your code that does the actual heavy lifting is extracted and tucked away safely in a blanket of protection. (FYI.. I use MVVM, I like it and would recommend looking into it after you've learned how to separate heavy lifting code from code behind) | {
"domain": "codereview.stackexchange",
"id": 17500,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, wpf",
"url": null
} |
nlp, gpu, language-model, memory
Title: What size language model can you train on a GPU with x GB of memory? I'm trying to figure out what size language model I will be able to train on a GPU with a certain amount of memory. Let's for simplicity say that 1 GB = 10^9 bytes; that means that, for example, on a GPU with 12 GB memory, I can theoretically fit 6 billion parameters, given that I store all parameters as 16-bit floats. However, in order to use a language model, you typically also need space for storing the input text and the activations of the current layer (and maybe also of the previous layer), and if you are going to train the model, you will typically need space to store the activations of all layers in order to be able to do backpropagation, and if you use an optimizer like Adam, you need space to store the running mean of the partial derivatives (of the loss function with respect to the various parameters, or in other words, the gradient), as well as the running mean of the squares of the partial derivatives.
So, given this complication, could someone tell me what size language models (that is, how many parameters) I will be able to train on a GPU with
10 GB of memory (RTX 3080 10 GB)?
12 GB of memory (RTX 3080 12 GB and RTX 3080 Ti)?
16 GB of memory (RTX 4080)?
24 GB of memory (RTX 3090 and RTX 3090 Ti)?
For example, Tim Dettmers mentions in his blog that you should have at least 24 GB of memory if you do research on transformers. I'm guessing this translates roughly to a language model of a certain size.

Tl;dr: a good rule of thumb I've seen is about 14-18x the model size for memory limits, so for a 10 GB card, training your model would max out memory at roughly 540M parameters.
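That rule of thumb can be turned into a quick estimate (a sketch; the 18 bytes/parameter multiplier is the upper end of the rule above, not an exact figure, and activations are ignored):

```python
# Rough estimate: training with Adam needs roughly 14-18 bytes of GPU memory
# per parameter (weights + gradients + optimizer state), before activations.
def max_trainable_params(gpu_mem_gb, bytes_per_param=18):
    return gpu_mem_gb * 10**9 / bytes_per_param

for mem_gb in (10, 12, 16, 24):
    print(f"{mem_gb} GB -> ~{max_trainable_params(mem_gb) / 1e6:.0f}M parameters")
# 10 GB -> ~556M parameters, close to the ~540M quoted above
```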
There is some really good information here:
https://huggingface.co/docs/transformers/perf_train_gpu_one#anatomy-of-models-memory
Note that there are a ton of caveats, depending on framework, mixed precision, model size, batch sizes, gradient checkpointing, and so on.
Just to summarize the above, rough memory requirements are:
Model weights | {
"domain": "datascience.stackexchange",
"id": 11538,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nlp, gpu, language-model, memory",
"url": null
} |
classical-mechanics, differential-geometry, hamiltonian-formalism, phase-space, canonical-conjugation
Title: How does the tautological one-form convert a velocity to a momentum? The Wikipedia page on the "tautological one-form" $\theta$ says that it
is used to create a correspondence between the velocity of a point in a mechanical system and its momentum, thus providing a bridge between Lagrangian mechanics and Hamiltonian mechanics
and that
velocities are appropriate for the Lagrangian formulation of classical mechanics, but in the Hamiltonian formulation, one works with momenta, and not velocities; the tautological one-form is a device that converts velocities $\dot{q}$ into momenta $p$.
I certainly understand why this device is physically useful, but unfortunately, the Wikipedia doesn't explicitly explain how $\theta$ maps $\dot{q}$ to $p$.
In order to make sure that I understand the tautological one-form correctly, I'd like to explain it in very concrete detail with a minimum of mathematical formalism; if anything below is incorrect, then please let me know. As I understand it, $\theta$ is constructed via the following steps: | {
"domain": "physics.stackexchange",
"id": 93437,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, differential-geometry, hamiltonian-formalism, phase-space, canonical-conjugation",
"url": null
} |
Here is some intuition about the last "standard calculation". Suppose you choose a $$k$$-subset $$S$$ uniformly at random from $$[m+k]$$. What's the expectation of $$\max S$$? The $$k$$ chosen elements divide the $$m$$ unchosen elements into $$k+1$$ intervals of total size $$m$$. By a symmetry argument the expected number of elements in each of these intervals is the same, so each interval has size $$m/(k+1)$$ in expectation. In particular, the expected size of the last interval is $$m/(k+1)$$, so the expectation of the largest element is $$m+k - m/(k+1)$$.
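A quick Monte Carlo check of this symmetry argument (a sketch; the values of $$m$$ and $$k$$ are arbitrary):

```python
import random

# For a uniform random k-subset S of {1, ..., m+k}, the symmetry argument
# predicts E[max S] = (m + k) - m / (k + 1).
random.seed(0)
m, k = 20, 5
trials = 200_000
total = sum(max(random.sample(range(1, m + k + 1), k)) for _ in range(trials))
estimate = total / trials
exact = (m + k) - m / (k + 1)  # 25 - 20/6, about 21.67
print(round(estimate, 2), round(exact, 2))
```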
In our case $$m+k = {n \choose 2}$$ and $$k=n-1$$.
I believe this takes $$\Theta(n^2)$$ oracle calls without replacement, and $$\Theta(n^2\log n)$$ oracle calls with replacement.
Lower bound: Assume my list has a single pair $$x[i], x[i+1]$$ out of order. Then I can sort if and only if $$(i,i+1)$$ is chosen as a pair. That means we require $$\Omega(n^2)$$ iterations in expectation without replacement (simple proof: there's a $$1/2$$ probability that $$(i,i+1)$$ is in the last $$\binom{n}{2}/2$$ pairs we pick), and $$\Omega(n^2\log n)$$ iterations with replacement (by classic coupon collector).
Upper bound: The conversation in the comments seems to indicate two versions of the problem. I'll discuss the differences; then solve each below.
In the first, we just get a stream of comparisons; I just keep these comparisons in some data structure for later use (this case is easier as it reduces to the coupon collector's problem directly). | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9838471670723236,
"lm_q1q2_score": 0.8089084490049522,
"lm_q2_score": 0.8221891327004132,
"openwebmath_perplexity": 278.7908703688796,
"openwebmath_score": 0.8513035178184509,
"tags": null,
"url": "https://cstheory.stackexchange.com/questions/51961/expected-number-of-random-comparisons-needed-to-sort-a-list"
} |
ros, ros-melodic, tutorial
'std_msgs/msg/Bool' (ROS 2) <=> 'std_msgs/Bool' (ROS 1)
'std_msgs/msg/Byte' (ROS 2) <=> 'std_msgs/Byte' (ROS 1)
'std_msgs/msg/ByteMultiArray' (ROS 2) <=> 'std_msgs/ByteMultiArray' (ROS 1)
'std_msgs/msg/Char' (ROS 2) <=> 'std_msgs/Char' (ROS 1)
'std_msgs/msg/ColorRGBA' (ROS 2) <=> 'std_msgs/ColorRGBA' (ROS 1)
'std_msgs/msg/Empty' (ROS 2) <=> 'std_msgs/Empty' (ROS 1)
'std_msgs/msg/Float32' (ROS 2) <=> 'std_msgs/Float32' (ROS 1)
'std_msgs/msg/Float32MultiArray' (ROS 2) <=> 'std_msgs/Float32MultiArray' (ROS 1)
'std_msgs/msg/Float64' (ROS 2) <=> 'std_msgs/Float64' (ROS 1)
'std_msgs/msg/Float64MultiArray' (ROS 2) <=> 'std_msgs/Float64MultiArray' (ROS 1)
'std_msgs/msg/Header' (ROS 2) <=> 'std_msgs/Header' (ROS 1)
'std_msgs/msg/Int16' (ROS 2) <=> 'std_msgs/Int16' (ROS 1)
'std_msgs/msg/Int16MultiArray' (ROS 2) <=> 'std_msgs/Int16MultiArray' (ROS 1) | {
"domain": "robotics.stackexchange",
"id": 34070,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-melodic, tutorial",
"url": null
} |
c++, strings, c++11, formatting, boost
template <class... Args>
std::string format(std::string&& fmt, Args&&... args)
{
std::string temp = std::move(fmt);
detail::format_impl<0>(temp, std::forward<Args>(args)...);
return temp;
}
template <class... Args>
void print(const std::string& fmt, Args&&... args)
{
std::cout << format(fmt, std::forward<Args>(args)...);
}
template <class... Args>
void print(std::string&& fmt, Args&&... args)
{
std::cout << format(std::forward<std::string>(fmt), std::forward<Args>(args)...);
} | {
"domain": "codereview.stackexchange",
"id": 8356,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, strings, c++11, formatting, boost",
"url": null
} |
mechanical-engineering, civil-engineering, applied-mechanics, statics, dynamics
Title: Finding downward force of lever bar I am working on a DIY project: I am trying to replicate a bar compressor which for some reason is only sold in Europe, so I thought it would be fun to make my own without having to waste so much material. I simply want to make sure the force going down with the plate somewhat replaces a person jumping in the trash can, somewhere above 80 lbs. Since I am no engineer, I was wondering how one would go about finding the force applied going down to the trash if about 50 lbs is pulling down the lever. I understand that materials are key to this, but as an amateur I just wanted to know how to get started. Sorry for posting here, but I thought this page would suit this question the most. Thank you.

The input force is 18.5+20.5 ft away from the fulcrum and the load is 18.5 ft away from the fulcrum.
This means the force applied to the load is $\frac{(18.5+20.5)}{18.5} = 2.1$ times greater than the input force or for an input force of 50 lbs you get an output force of 105 lbs.
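The arithmetic above as a tiny script (a sketch; the arm lengths and input force are taken from the answer):

```python
# Lever calculation: output force scales with the ratio of the effort arm
# (where you pull) to the load arm (where the plate pushes down).
effort_arm_ft = 18.5 + 20.5   # distance from fulcrum to the pulling point
load_arm_ft = 18.5            # distance from fulcrum to the plate
input_force_lbs = 50

mechanical_advantage = effort_arm_ft / load_arm_ft          # about 2.1
output_force_lbs = input_force_lbs * mechanical_advantage   # about 105 lbs
print(round(mechanical_advantage, 2), round(output_force_lbs, 1))
```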
However this assumes that all forces are applied perpendicular to the line between point and fulcrum. If this is not the case then you need to decompose the forces into tangential and normal components. | {
"domain": "engineering.stackexchange",
"id": 1328,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, civil-engineering, applied-mechanics, statics, dynamics",
"url": null
} |