text | source |
|---|---|
electromagnetism, special-relativity, electricity
Title: How Special Relativity causes magnetism So my physics teacher assigned us an article about how special relativity causes magnetism in a wire with a current, even with the low drift velocities of electrons in a current.
It seemed that the basis of the article was that magnetism is just relativistic electricity, so I was wondering how a permanent magnet worked? It makes sense to me that a moving charged particle attracts unmoving particles of the opposite charge, but how do the orbits of electrons in a magnet cause it to have a magnetic field? There are two phenomena in your question.
(1) Let us first understand how a magnetic field can be considered to "arise" because of relativity. Imagine a frame of reference in which a charge $Q$ is at rest. If another charge $q$ is brought into its vicinity, it will experience only an electrostatic force. Now move to another inertial frame of reference travelling at a velocity $\vec{v}$ with respect to the first one. In this frame of reference, you will observe both charges moving. The static charges of the old reference frame now appear as charges and currents. The electrostatic field of the previous frame now appears as an electric field of different magnitude together with a magnetic field. Since physics is the same in all inertial frames of reference, we are inclined to believe that $\vec{E}$ and $\vec{B}$ are manifestations of a single electromagnetic field.
This is a very "hand-waving" kind of explanation. You may want to refer to Robert Resnick's "Special Theory of Relativity" or Melvin Schwartz's "Principles of Electrodynamics" for greater mathematical detail.
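The statement that the fields mix under a change of frame can be made explicit. For a boost with speed $v$ along the $x$-axis (standard result, SI units, $\gamma = 1/\sqrt{1 - v^2/c^2}$):

$$\begin{aligned}
E'_x &= E_x, & E'_y &= \gamma\,(E_y - v B_z), & E'_z &= \gamma\,(E_z + v B_y),\\
B'_x &= B_x, & B'_y &= \gamma\!\left(B_y + \tfrac{v}{c^2} E_z\right), & B'_z &= \gamma\!\left(B_z - \tfrac{v}{c^2} E_y\right).
\end{aligned}$$

So a purely electrostatic field ($\vec{B} = 0$) in one frame already implies a non-zero $\vec{B}'$ in any frame moving relative to it.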
(2) The first point tries to explain how magnetism due to a current can be considered a relativistic effect. Now let us consider magnetism due to electrons themselves. Apart from charge and mass, electrons also have an intrinsic magnetic moment, which can be explained only through relativistic quantum mechanics. Thus, the magnetism of a bar magnet is also a relativistic effect. Please note that the magnetism of a bar magnet is due to the electrons' spin, not their orbital motion. | {
"domain": "physics.stackexchange",
"id": 27173,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, special-relativity, electricity",
"url": null
} |
c#, .net, excel
const int SixDigitLong = 6;
static bool TryGetFirstPersonIndex(string cell, int row, int column, out (int Row, int Column) firstPersonIndex)
{
var isASixDigitLongInteger = cell.Length == SixDigitLong && int.TryParse(cell, out _);
//+1 is needed to point to "Name" column rather than "Number"
firstPersonIndex = isASixDigitLongInteger ? (row, column + 1) : default;
return isASixDigitLongInteger;
}
With these in hand, the first part of ParseSheet can be achieved like this:
Instead of using two triple tuples, I have declared two more simple variables.
I have used two foreach loops in place of the two for loops.
I also made the cell checks lazy (they evaluate only if we haven't found a value yet).
static (string IndustryName, (int Row, int Column) FirstPersonIndex) GetIndustryAndFirstPersonIndex(DataTable sheet)
{
string industryName = default;
(int Row, int Column) firstPersonIndex = default;
foreach (DataRow row in sheet.Rows)
foreach (DataColumn column in sheet.Columns)
{
if (industryName != default && firstPersonIndex != default)
return (industryName, firstPersonIndex);
var cell = row[column].ToString().Trim();
if (industryName == default
&& TryGetIndustryName(cell, out industryName))
break;
if (firstPersonIndex == default
&& TryGetFirstPersonIndex(cell, sheet.Rows.IndexOf(row), sheet.Columns.IndexOf(column), out firstPersonIndex))
break;
}
throw new Exception("Excel file is not normalized!");
}
The second part of ParseSheet can be achieved with the following LINQ:
Please bear in mind that Enumerable.Range requires start and count parameters, not start and end | {
"domain": "codereview.stackexchange",
"id": 43886,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, .net, excel",
"url": null
} |
javascript, jquery, touch
$nav.on({
touchstart:function(e){
e.stopPropagation();
newActive($(this),touched);
},
mouseenter:function(){
newActive($(this),true);
},
click:function(e){
e.preventDefault();
if(menuActive){
$(this).trigger('trueClick',e.target);
}
},
trueClick:function(e,$target){
$(this).parents('.nav').trigger('mouseleave');
window.location.href = $target;
}
},'li .has_children').on('mouseleave',function(){
removeActive(function(){
menuActive = false;
touched = false;
});
});
$('html').on('touchstart',function(e){
if(menuActive){
$nav.trigger('mouseleave');
}
});
});
This should manage what you want in a much cleaner way:
The menuActive variable defaults to false and is only set to true when the submenu is open (by either touchstart or mouseenter).
The actual click action is prevented, and a check is done to determine whether the menu is appropriately active.
If the menu is active, a custom event is triggered to go to the link's target.
The touch event does not bubble up to html, thanks to e.stopPropagation().
If the user touches somewhere on the screen that is not the submenu, it closes the submenu and sets menuActive to false.
I separated out a couple of actions into reusable functions, and separated events to avoid if checking. This code executes much faster than the original, and more importantly it is bulletproof. There is true separation of touch vs hover events, the show / hide of the menu still leverages CSS, and it accounts for browser inconsistencies (example: the reason for the use of the callback is because Safari 5.1 has a 300ms delay between adding a class and it displaying onscreen).
I know this code is longer than the original, but guaranteed it will run faster and be easier to maintain. Hope it helps. | {
"domain": "codereview.stackexchange",
"id": 5496,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, touch",
"url": null
} |
homework-and-exercises, kinematics, acceleration
Title: Find constant acceleration needed to reach point (possibly related to projectile motion on slopes) Given a particle with position $p_0$ and initial velocity $\vec{v_0}$, what acceleration $\vec{a}$ do we need to reach point $p_1$, and how long until we have reached $p_1$? The magnitude of the acceleration must be $\leq A_{max}$, and the time to reach $p_1$ should be minimal. The acceleration should be constant, i.e. the same acceleration vector should be used for the entire trip.
Our first attempt was to use the basic kinematics formula from Wikipedia, but because both $t$ and $\vec{a}$ are unknown, this proved unfruitful. $$p_1=\frac12\vec{a}t^2+\vec{v_0}t+p_0$$
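One way forward: at the minimal time the acceleration constraint is active. Rearranging the kinematics formula gives $\vec{a}(t) = 2(p_1 - p_0 - \vec{v_0}t)/t^2$, whose magnitude blows up as $t \to 0$ and decays as $t \to \infty$, so the smallest feasible $t$ satisfies $|\vec{a}(t)| = A_{max}$ and can be located numerically. A hedged sketch (not from the original post; the function name and bracketing grid are my own choices):

```python
import numpy as np

def min_time_accel(p0, p1, v0, a_max):
    """Minimal time t and constant acceleration a (|a| <= a_max) taking the
    particle from p0 (velocity v0) to p1, using a(t) = 2*(p1 - p0 - v0*t)/t**2.
    We sample a geometric grid for the first feasible t, then bisect.  The grid
    may miss very narrow feasible windows; refine the grid if that matters."""
    p0, p1, v0 = map(np.asarray, (p0, p1, v0))
    accel = lambda t: 2.0 * (p1 - p0 - v0 * t) / t**2
    excess = lambda t: np.linalg.norm(accel(t)) - a_max
    ts = np.geomspace(1e-6, 1e6, 2000)
    idx = np.argmax([excess(t) <= 0 for t in ts])   # first feasible sample
    lo, hi = ts[idx - 1], ts[idx]                   # excess(lo) > 0 >= excess(hi)
    for _ in range(80):                             # bisection refinement
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    return hi, accel(hi)
```

For example, with $p_0 = (0,0)$, $p_1 = (10,0)$, $\vec{v_0} = 0$ and $A_{max} = 2$, the constraint $|2 \cdot 10 / t^2| = 2$ gives $t = \sqrt{10}$.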
Edit: Reformulation of problem in the form of projectile motion on slopes and attempt at solution using the link provided by sammy gerbil. | {
"domain": "physics.stackexchange",
"id": 30612,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, kinematics, acceleration",
"url": null
} |
python, wavelet, time-frequency, cwt, pywavelets
Title: Why does a signal with constant frequency have spots that changes colors at a specific value of scale (and so frequency) in the scalogram? I am studying the Wavelet transform and I am considering this example that I took from PyWavelets documentation. The signal in time domain has the following shape:
Up to the value of zero on the horizontal axis we have a signal with a constant frequency. So I would expect the scalogram to show something like a constant (both in color and in dimension) horizontal stripe at a specific value of the scale (or period or frequency or whatever you want to put on the y axis of the scalogram) up to the zero value, while instead it has vertical stripes that alternate their colors from the extreme violet color to the extreme green. Why? This is the image:
From a scalogram like this, I would expect a signal that changes frequency, because the changing of colors means a change in the values of the wavelet coefficients and thus a change in the similarity between the wavelet and the input signal (since the operation performed is the convolution between the wavelet and the data). High similarity should mean high coefficients (so green color) while low similarity means low values of the coefficients (so violet colors); if the colors change, the similarity changes, so the shape of the signal changes and thus also the frequency. Is this right? What am I missing?
Any suggestion would be really appreciated. Thanks in advance.
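For what it's worth, alternating coefficients are exactly what the *real part* of a wavelet transform of a steady tone looks like: convolving with a real oscillatory kernel yields coefficients that oscillate at the signal frequency, while the *magnitude* of the convolution with a complex wavelet is flat. A minimal numpy sketch (a hand-rolled Morlet-like wavelet, not the PyWavelets call from the example) illustrates the difference:

```python
import numpy as np

# Constant 5 Hz tone, and a complex Morlet-like wavelet tuned to 5 Hz.
fs = 200.0
t = np.arange(0, 4, 1 / fs)
signal = np.cos(2 * np.pi * 5.0 * t)

tau = np.arange(-1, 1, 1 / fs)                    # wavelet support
morlet = np.exp(2j * np.pi * 5.0 * tau) * np.exp(-(tau / 0.3) ** 2)

complex_coeffs = np.convolve(signal, morlet, mode="same")
real_coeffs = np.convolve(signal, morlet.real, mode="same")

mid = slice(len(t) // 4, 3 * len(t) // 4)         # ignore edge effects
# Real-wavelet coefficients flip sign (alternating colors in a scalogram):
print("real part changes sign:", (np.diff(np.sign(real_coeffs[mid])) != 0).any())
# Magnitude of the complex coefficients is essentially constant:
ripple = np.abs(complex_coeffs[mid])
print("magnitude max/min ratio:", ripple.max() / ripple.min())
```

So plotting the modulus of a complex CWT gives the expected steady horizontal stripe, while a real-valued (or real-part) scalogram shows the alternating pattern described above.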
EDIT: I appreciate the suggestions in the comments below my post, but since there has not been an answer to my post and my question has not been closed, I want to share with you that I found a clear explanation in this nice video, Wavelets: a mathematical microscope. As Ash pointed out, one should plot the magnitude of a complex wavelet, and so one has to consider the convolution with both the real and imaginary part of the wavelet. Hence, by following this procedure, I obtain the plot as I expected it to be, but with still one problem: the red bar should be at a scale value equal to 30 and the distortions (which correspond to the signal changing in frequency) should go from 30 to lower scales; in my case it is inverted. Why? | {
"domain": "dsp.stackexchange",
"id": 11585,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, wavelet, time-frequency, cwt, pywavelets",
"url": null
} |
statistical-mechanics, entropy, harmonic-oscillator
Title: Entropy of harmonic oscillator in 1d, 3d and anisotropic 3d I'm curious about the entropy of a simple harmonic oscillator in a few different scenarios:
1d: particle with mass m moving in one dimension, potential $U = \frac{1}{2} k x^2$
3d isotropic: particle with mass m in three dimensions, $U = \frac{1}{2} k (x^2 + y^2 + z^2)$
3d anisotropic: particle with mass m, $U = \frac{1}{2} (k_x x^2 + k_y y^2 + k_z z^2)$
The oscillator can be assumed to be in thermal equilibrium with a heat bath at some temperature T.
Naively, it seems the entropy in the isotropic 3d case should be 3 times that of 1d, and that the 3d anisotropic problem should converge to the 1d problem if the $k_y$ and $k_z$ force constants become very large; but I can't figure out how to derive that.
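The factor of 3 can be made precise with the canonical partition function; a short sketch using standard results (classical single oscillator first):

$$Z_{1\mathrm{d}}^{\text{cl}} = \frac{1}{h}\int_{-\infty}^{\infty} e^{-\beta p^2/2m}\,dp \int_{-\infty}^{\infty} e^{-\beta k x^2/2}\,dx = \frac{k_B T}{\hbar\omega}, \qquad \omega = \sqrt{k/m}.$$

Because the Hamiltonian separates, the 3d partition function factorizes, $Z_{3\mathrm{d}} = Z_x Z_y Z_z$, so $F = -k_B T \ln Z$ and hence $S = -\partial F/\partial T$ are additive: isotropically $S_{3\mathrm{d}} = 3\,S_{1\mathrm{d}}$, and anisotropically $S = \sum_i S_{1\mathrm{d}}(\omega_i)$. The stiff-spring limit requires the quantum oscillator, $Z = \left[2\sinh(\beta\hbar\omega/2)\right]^{-1}$, whose entropy vanishes as $\omega \to \infty$ (the mode freezes into its ground state); so making $k_y, k_z$ very large leaves $S \to S_{1\mathrm{d}}(\omega_x)$, as anticipated.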
| {
"domain": "physics.stackexchange",
"id": 89246,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, entropy, harmonic-oscillator",
"url": null
} |
More generally, the probability is $$\frac kn$$ when starting from vertex $$v_k$$. We can prove this by verifying that $$p_k = \frac kn$$ satisfies the equations $$p_0 = 0,\quad p_n = 1,\quad p_k = \frac12(p_{k-1} + p_{k+1}) \text{ for } 1 \le k \le n-1.$$
Assuming this lemma, let's play the game until one of players $$6$$ or $$8$$ receives the book. The two cases are symmetric; assume it's player $$6$$, but the exact same argument will work for the other case. Then we know that
• From here on out, if we see player $$8$$ before we see player $$7$$, we know player $$7$$ will be last: if we've seen both of $$7$$'s neighbors, we've seen everyone else.
• If we see player $$7$$ before we see player $$8$$, we know player $$7$$ won't be last.
So the probability that player $$7$$ is last is the probability that we see player $$8$$ before player $$7$$, which is the probability of seeing the end of the path $$7 - 6 - 5 - 4 - 3 - 2 - 1 - 10 - 9 - 8$$ before seeing the start, beginning from vertex $$6$$. That probability is $$\frac19$$.
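The lemma (and the final $1/9$) can be checked mechanically. A minimal sketch that solves the boundary-value recurrence exactly over the rationals with tridiagonal (Thomas) elimination:

```python
from fractions import Fraction

def gamblers_ruin(n):
    """Solve p_0 = 0, p_n = 1, p_k = (p_{k-1} + p_{k+1})/2 exactly over the
    rationals via the Thomas algorithm; returns [p_0, ..., p_n]."""
    m = n - 1                        # unknowns p_1 .. p_{n-1}
    diag = [Fraction(2)] * m         # rows: -p_{k-1} + 2 p_k - p_{k+1} = 0
    rhs = [Fraction(0)] * m
    rhs[m - 1] = Fraction(1)         # boundary p_n = 1 moved to the RHS
    for i in range(1, m):            # forward elimination (off-diagonals = -1)
        f = Fraction(-1) / diag[i - 1]
        diag[i] -= f * Fraction(-1)
        rhs[i] -= f * rhs[i - 1]
    x = [Fraction(0)] * m            # back substitution
    x[m - 1] = rhs[m - 1] / diag[m - 1]
    for i in range(m - 2, -1, -1):
        x[i] = (rhs[i] + x[i + 1]) / diag[i]
    return [Fraction(0)] + x + [Fraction(1)]

p = gamblers_ruin(9)
print(p)   # p_k = k/9
```

Labelling the path $7-6-5-4-3-2-1-10-9-8$ with positions $0$ through $9$, starting at vertex $6$ means $k = 1$, and the computed $p_1 = 1/9$ is the probability of reaching the "8" end first.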
So player $$7$$ has a $$\frac19$$ probability of being the last player. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9861513869353993,
"lm_q1q2_score": 0.82810482280966,
"lm_q2_score": 0.8397339736884712,
"openwebmath_perplexity": 265.42924585192674,
"openwebmath_score": 0.8947685956954956,
"tags": null,
"url": "https://math.stackexchange.com/questions/3387183/probability-of-a-game"
} |
vba, excel
Title: VBA Script to Remove Duplicates I'm curious if there's any way I can substantially improve its speed (it runs through >200k rows). I think I'll most likely need this once, so please bear with me since it probably looks terrible.
Thanks for reading!
Code I would like reviewed:
Option Explicit
Public Sub RemoveDuplicates()
Dim r As Range
Dim rBefore As Range
Dim wsSource As Worksheet
Dim ws As Worksheet
Dim c As Range
Dim c_ As Range
Dim v As Variant
Dim v_ As Variant
Dim dupCount As Long
Dim xlApp As Application
Dim i As Long
Dim size As Long
Set xlApp = Excel.Application
With xlApp
.ScreenUpdating = False
.DisplayStatusBar = True
.StatusBar = "Running Script..."
End With
Set wsSource = xlApp.ThisWorkbook.Worksheets("source")
Set rBefore = wsSource.Range(wsSource.Cells(2, 1), wsSource.Cells(wsSource.UsedRange.Rows.Count, 1))
Set ws = xlApp.ThisWorkbook.Worksheets("testSheet")
Set r = ws.Range(ws.Cells(2, 1), ws.Cells(ws.UsedRange.Rows.Count, 1))
size = r.Count
i = 1
For Each v In r
Set c = v
DoEvents
dupCount = 0
For Each v_ In r
Set c_ = v_
DoEvents
If CStr(c.Value) = CStr(c_.Value) Then
dupCount = dupCount + 1
If dupCount > 1 Then
c_.Rows(c_.Row).EntireRow.Delete
End If
End If
Next
xlApp.StatusBar = Format(i & " out of " & size) & " Rows Complete"
i = i + 1
Next | {
"domain": "codereview.stackexchange",
"id": 31742,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vba, excel",
"url": null
} |
For example, it is clear that if ##k>16##, both ##k## and ##k-16## are positive. So above the line to the right of ##16## put ##+~+##. Now think of ##k## sliding to the left across ##k=16##. That changes the sign of the factor ##k-16## to negative, so above that section of the line put ##+~-##. Now as ##k## moves left across ##k=0## the ##k## factor changes sign so on the left part of the line put ##-~-##. The sign of the product depends on how many minus signs there are and it obviously negative on ##(0,16)##. You just have to look at which factor changes sign. One advantage of doing it this way is it doesn't matter how many factors there are or whether they are in the numerator or denominator. Just remember if a factor is squared, it doesn't change sign when the variable crosses its root. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9706877684006775,
"lm_q1q2_score": 0.8130674490411686,
"lm_q2_score": 0.8376199592797929,
"openwebmath_perplexity": 529.912259690669,
"openwebmath_score": 0.7181357145309448,
"tags": null,
"url": "https://www.physicsforums.com/threads/solving-inequality-with-different-power-variables.720128/"
} |
beginner, objective-c, coordinate-system
NSLog(@"Entity #1: %@",entity1);
NSLog(@"Entity #2: %@",entity2);
}
return 0;
}
Everything works fine and produces the output:
2019-06-07 16:49:51.525097-0600 ObjCProj[57486:829658] Hello, World!
2019-06-07 16:49:51.525371-0600 ObjCProj[57486:829658] Entity #1: Entity with pos: (0, 0, 0) rot:(0, 0, 0)
2019-06-07 16:49:51.525404-0600 ObjCProj[57486:829658] Entity #2: Entity with pos: (10, 10, 10) rot:(0, 0, 0)
2019-06-07 16:49:51.525456-0600 ObjCProj[57486:829658] Entity #1: Entity with pos: (5, 6, 7) rot:(3, 2, 1)
2019-06-07 16:49:51.525479-0600 ObjCProj[57486:829658] Entity #2: Entity with pos: (15, 16, 17) rot:(6, 4, 2)
Program ended with exit code: 0 You asked:
when do I use entity.position vs [entity position]?
The former is syntactic sugar for the latter, used with properties. I personally use . notation when dealing with @property, and otherwise use [...] syntax. But this is a matter of opinion, so use your own good judgment. I’d only suggest consistency throughout your project.
when do I use NSNumber vs double
Use NSNumber when you need an object. A few cases:
if the values might go into a NSArray or other collection;
if you want to use it with NSNumberFormatter;
where a numeric value is not required and you'd like to distinguish between nil and an actual NSNumber, so that you can avoid magical "sentinel" values in your code. | {
"domain": "codereview.stackexchange",
"id": 34828,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, objective-c, coordinate-system",
"url": null
} |
quantum-mechanics
Title: Why doesn't momentum operator's eigenket expected to be a parity eigenket? I am reading the 4th chapter of Sakurai where it says that
The momentum operator anti-commutes with the parity operator, so the momentum eigenket is not expected to be a parity eigenket.
Why is it so? Position operator also anti-commutes with parity operator. Is the above statement true for position operator as well?
Further, it says that
angular momentum is expected to be a parity eigenket because it
commutes with parity operator.
What is the relation between commutation/anti-commutation with the parity operator and being a parity eigenket? Generally, any two hermitian operators that commute with each other (compatible observables) can be simultaneously diagonalized and therefore share a common set of eigenkets.
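This can be made concrete. Suppose $|p\rangle$ were a simultaneous eigenket, with ${\bf p}|p\rangle = p|p\rangle$ and ${\bf \pi}|p\rangle = \lambda|p\rangle$ ($\lambda = \pm 1$). Anticommutation then forces the momentum eigenvalue to vanish:

$$0 = \{{\bf p},{\bf\pi}\}\,|p\rangle = ({\bf p}{\bf\pi} + {\bf\pi}{\bf p})\,|p\rangle = 2\lambda p\,|p\rangle \;\Longrightarrow\; p = 0.$$

So only a zero-momentum state could be a parity eigenket; the same argument applies to the position operator, which also anticommutes with parity.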
Since $\{{\bf p},{\bf \pi}\}=0$, we have ${\bf p}{\bf\pi} = -{\bf\pi}{\bf p}$, and therefore $[{\bf p},{\bf\pi}]\not=0$ (except on zero-momentum states); so $\bf{p}$ and $\bf{\pi}$ do not share the same eigenkets. (${\bf p}$: momentum, ${\bf\pi}$: parity) | {
"domain": "physics.stackexchange",
"id": 56081,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics",
"url": null
} |
physical-chemistry, redox
We can then determine the appropriate half-reactions by the standard procedure. Instead of walking you through it, I will present you with the final result for comparison.
$$\begin{align}\ce{MnO4- + 3e- + 4 H+ &-> MnO2 + 2 H2O}\tag{Red}\\
\ce{2H2O &-> O2 + 4e- + 4 H+}\tag{Ox}\\[1em]\hline
\ce{4 MnO4- + 4 H+ &-> 4 MnO2 + 3O2 + 2 H2O}\tag{Redox}\end{align}$$
It turns out that the final reaction equation formally doesn’t even need water. However, potassium permanganate is a shelf-stable solid under normal laboratory conditions (if light is excluded), so it is reasonable to assume that the reaction will only proceed in aqueous solution.
To determine whether the redox reaction is feasible in aqueous solution, it is easier to work with the two half-reactions we already have; thermodynamically, they must correspond to the same process with the same reaction enthalpy etc. The Nernst equation, together with the cell potential, allows us to estimate whether a process will be spontaneous by mere inspection of the standard potentials.
$$\begin{align}E^0_\text{cell} &= E^0_\text{Red} - E^0_\text{Ox}\\
&= \pu{+1.70V} - (\pu{+1.23V})\\
&= \pu{+0.47V}\\
E^0_\text{cell} &> 0 \\&\Longrightarrow \text{spontaneous process}\end{align}$$
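The standard-state arithmetic above can be pushed to non-standard conditions with the Nernst equation. A hedged numerical sketch (textbook constants; assuming pH 7, $p(\ce{O2}) = \pu{0.2bar}$, permanganate at unit activity, and $n = 12$ electrons for the overall equation as written):

```python
import math

# Overall reaction: 4 MnO4- + 4 H+ -> 4 MnO2 + 3 O2 + 2 H2O  (n = 12 e-).
# Solids and water have activity 1, and MnO4- is taken at unit activity,
# so the reaction quotient is Q = p(O2)^3 / a(H+)^4.
R, F, T = 8.314, 96485.0, 298.15      # J/(mol K), C/mol, K
E0_cell = 1.70 - 1.23                 # V, from the standard potentials above
pH, p_O2 = 7.0, 0.2                   # assumed laboratory conditions
a_H = 10.0 ** (-pH)
Q = p_O2 ** 3 / a_H ** 4
E_cell = E0_cell - (R * T / (12 * F)) * math.log(Q)   # Nernst equation
print(round(E_cell, 3), "V")          # still positive at pH 7
```

The correction lowers the cell potential (the reaction consumes H+, so neutral pH disfavours it) but leaves it positive, i.e. the decomposition remains thermodynamically spontaneous under these assumed conditions.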
Thus, thermodynamics already predicts the decomposition of permanganate in aqueous solution to be spontaneous under standard conditions. As mentioned, the Nernst equation will allow us to estimate whether it is also spontaneous under typical laboratory conditions; I will assume $\mathrm{pH} = 7$ and $p(\ce{O2}) = \pu{0.2bar}$ | {
"domain": "chemistry.stackexchange",
"id": 9151,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, redox",
"url": null
} |
c++, http, socket
std::size_t result = 0;
if (localBuffer != nullptr)
{
result = std::min(bufferRange.totalLength, size);
std::copy(bufferRange.inputStart, bufferRange.inputStart + result, localBuffer);
bufferRange.totalLength -= result;
}
else
{
auto begOfRange = bufferRange.inputStart;
auto endOfRange = bufferRange.inputStart + bufferRange.totalLength;
auto find = std::search(begOfRange, endOfRange, endOfLineSeq, endOfLineSeq + 2);
if (find != endOfRange)
{
bufferRange.inputLength = find + 2 - bufferRange.inputStart;
result = bufferRange.inputLength;
}
else
{
// We found some of a header or the method in the buffer
// But it was not the whole line. So move this fragment to
// the beginning of the buffer and return 0 to indicate
// that not a complete line was read. This will result in
// a call to getMessageDataFromStream()
std::copy(begOfRange, endOfRange, &bufferData[0]);
bufferRange.inputStart = &bufferData[0];
}
}
return result;
}
std::size_t ProtocolHTTP::getMessageDataFromStream(char* localBuffer, std::size_t size)
{
char* buffer = localBuffer ? localBuffer : bufferRange.inputStart;
std::size_t dataRead = localBuffer ? 0 : bufferRange.totalLength;
std::size_t dataMax = localBuffer ? size : bufferSize - (bufferRange.inputStart - &bufferData[0]);
char* lastCheck = buffer + (dataRead ? dataRead - 1 : 0);
BufferRange& br = bufferRange; | {
"domain": "codereview.stackexchange",
"id": 20541,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, http, socket",
"url": null
} |
r, phylogenetics, phylogeny, ggtree
Then use ggplot2-style functions to overlay the presentation; I'm assuming you understand ggplot2. What the code you showed was doing was individually changing every font and the position of every branch in the tree. This is definitely not cool code and I wouldn't consider it further, i.e. it's not even worth bothering with; that isn't how graphs or trees are supposed to be represented.
In any case, overall ggplot2 style code is always fiddly and to be honest I gave up using it, but you increment/or layer the changes you want as the code you supplied describes. Honestly I don't want to get into the fiddly incremental layered changes that ggplot2 graphs require.
ETE3 is good and does do coloured trees, which are described here: http://etetoolkit.org/docs/latest/tutorial/tutorial_drawing.html. The only hassle is that it was written for Python 3.6 - that long ago - but you can access it via conda and set the Python version.
ETE4 will be very cool but its badly delayed in its release date, they did get funding for this so the delay is not cool, maybe never who knows. Getting it up to Python 3.10 would be useful at the very least.
My personal take is ETE3 is useful for automating the tree output so you can manually assess the iterations, its quite critical for e.g. testing and debugging. Whether you are turning that into publication quality - which is universally how graphs are handled - I personally find excessive. The difference is that complex graphs represent loads of information which is not always immediately obvious to a casual observer, therefore gamma (transparency), plot type loads of stuff are relevant. A tree is a tree and 'dressing up a tree' makes not a jot of difference for an experienced observer. If someone wasn't quite clear about a tree, they'd just ask you for the tree file and file it through 'Figtree'. | {
"domain": "bioinformatics.stackexchange",
"id": 2215,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "r, phylogenetics, phylogeny, ggtree",
"url": null
} |
electrostatics, electrons, acceleration, potential, voltage
Title: Accelerated through a potential difference Say an electron is accelerated through a potential difference of 10 V established between two points A and B, 1 metre apart.
Then would I only be able to say that an electron is accelerated through a potential difference of 10 V if the electron travels through that 1 metre to gain 10 joules of kinetic energy, or will it gain 10 joules of kinetic energy just by being in that electric field? A charged particle only gains energy by moving from one point to another at a different potential. If the electron doesn't move, then it doesn't gain any energy. So, the electron actually has to move from point A to point B to gain energy from the potential difference.
The following is a response to saying the electron gains "10 joules" of energy.
The kinetic energy gained by a charged particle when passing through a potential difference is equal to the charge of the particle multiplied by the potential difference.
$$K = qV$$
where $K$ is the kinetic energy, $q$ is the charge of the particle, and $V$ is the voltage difference. An electron would only gain 10 joules of kinetic energy if it had a charge of 1 Coulomb, which is a very large charge. The actual charge of an electron is about 1.602$\times$10$^{-19}$ C. So, an electron passing through a 10V potential difference ends up with 1.602$\times$10$^{-18}$ joules of kinetic energy.
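The arithmetic is a one-liner; a quick sketch with the values quoted above:

```python
# Kinetic energy gained by a charge q accelerated through potential difference V:
q_e = 1.602e-19      # magnitude of the electron charge, in coulombs
V = 10.0             # potential difference, in volts
K = q_e * V          # kinetic energy gained, in joules
print(K)             # 1.602e-18 J, matching the figure quoted above
```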
Because these numbers are so small, a different unit of energy is often used. This unit is called the electron-volt (eV), which is equal to the energy gained by an electron when it passes through a 1-volt potential difference. One electron-volt is equal to 1.602$\times$10$^{-19}$ J. This makes calculations easier because, to use your problem as an example, an electron passing through a 10V potential difference gains 10 eV of energy. | {
"domain": "physics.stackexchange",
"id": 81783,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, electrons, acceleration, potential, voltage",
"url": null
} |
slam, navigation, 3dmapping, rtabmap, rtab
<param name="RGBD/LocalLoopDetectionSpace" type="string" value="false"/>
<param name="RGBD/OptimizeFromGraphEnd" type="string" value="false"/>
<param name="Kp/MaxDepth" type="string" value="8.5"/>
<param name="LccIcp/Type" type="string" value="1"/>
<param name="LccIcp2/CorrespondenceRatio" type="string" value="0.05"/>
<param name="LccBow/MinInliers" type="string" value="10"/>
<param name="LccBow/InlierDistance" type="string" value="0.1"/>
<param name="RGBD/AngularUpdate" type="string" value="0.1"/>
<param name="RGBD/LinearUpdate" type="string" value="0.1"/>
<param name="RGBD/LocalImmunizationRatio" type="string" value="0.50"/>
<param name="Rtabmap/TimeThr" type="string" value="700"/>
<param name="Mem/RehearsalSimilarity" type="string" value="0.30"/>
<!-- localization mode -->
<param if="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="false"/>
<param unless="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="true"/>
<param name="Mem/InitWMWithAllNodes" type="string" value="$(arg localization)"/>
</node> | {
"domain": "robotics.stackexchange",
"id": 23767,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "slam, navigation, 3dmapping, rtabmap, rtab",
"url": null
} |
ros, simulation, stageros, stage, create
Title: Creating robot models in stage under ROS
Hello,
I have found a tutorial, which describes how to create robot models for player/stage. Can I apply this 1:1 to stageros? In particular does the stage wrapper in ros support all sensors? Or are there any limitations?
BR
Originally posted by Eisenhorn on ROS Answers with karma: 82 on 2012-09-29
Post score: 0
Original comments
Comment by SL Remy on 2012-10-01:
Which tutorial did you find? It's difficult to answer your question about the 1:1 applicability if you don't add a little more detail.
The current stageros only supports one laser sensor per robot. People have long wished for a more flexible implementation, but no one has done that project yet.
It is possible to construct your own alternative to stageros with support for other sensors to fit your specific needs.
Originally posted by joq with karma: 25443 on 2012-09-29
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Eisenhorn on 2012-10-03:
Does this mean, that stageros can't be extended to support more than one ultrasonic sensors?
What do you mean by alternative to stageros? Does this comprise a complete alternative from scratch? | {
"domain": "robotics.stackexchange",
"id": 11177,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, simulation, stageros, stage, create",
"url": null
} |
c++, programming-challenge, hash-map
Title: Hackerrank Day 8: Dictionaries and Maps solution in C++ This is a solution for the Day 8 hackerrank.com challenge. The basic idea of the challenge is to create a mapping of names to phone numbers based on the given input, and later look up the phone numbers for certain names. If the entry exists, print it and its number. If it doesn't, print "Not found".
Here's my solution in C++:
#include<iostream>
#include<map>
using namespace std;
int main()
{
int i, n;
cin>>n;
string name, number, key;
map<string, string> phone_dir;
for(i=0; i<n; i++)
{
cin>>name>>number;
phone_dir.insert(pair <string, string> (name, number));
}
while(cin>>key)
{
if (phone_dir.find(key) != phone_dir.end())
{
cout<<key<<"="<<phone_dir.find(key)->second<< endl;
}
else cout<< "Not found"<<endl;
}
return 0;
} | {
"domain": "codereview.stackexchange",
"id": 31112,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, programming-challenge, hash-map",
"url": null
} |
particle-physics, neutrinos, antimatter
If a neutrino had exactly zero mass, this polarization would be complete. However, we now have convincing evidence that at least two flavors of neutrino have finite mass. This means that it's possible, in theory, for a relativistic observer to "outrun" a left-handed neutrino, in which reference frame its north pole would be pointing along its momentum — that observer would consider it a right-handed neutrino. Would a right-handed neutrino act like an antineutrino? That would imply that the neutrino is actually its own antiparticle (an idea credited to Majorana). Would the right-handed neutrino simply refuse to participate in the weak interaction? That would make them good candidates for dark matter (though I think there is other evidence against this).
It's an open experimental question whether there is really a difference between neutrinos and antineutrinos, apart from their spin, and there are several active searches, e.g. for forbidden double-beta decays. | {
"domain": "physics.stackexchange",
"id": 13542,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, neutrinos, antimatter",
"url": null
} |
python, beginner
Get rid of the continue at the bottom of the loop. It is not needed.
Get rid of the various statements getal = getal + 1 in your else branches. By inspection, I can see that this adjustment is always done whenever you loop (since you sys.exit() on success) so just move the statement to the top of the loop, and adjust your incoming variable:
getal -= 1
while True:
getal += 1
# many, many if statements
Perform your multiplication as part of your adjustment to getal, not as a separate part of your if statements:
a = ((getal - 1) / 5) * 4
if a % 5 == 1:
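Put together, the two suggestions above give a loop shaped like this. This is only a sketch: the function name, the search bound, and the stopping condition are my assumptions, and `//` is used to match Python 2's integer `/`; the real program chains several more derived values with the same test.

```python
def first_match(limit=10**6):
    """Find the smallest getal whose derived value leaves remainder 1 mod 5.

    Sketch only: bound and stopping condition are assumed, not from the
    original program.
    """
    getal = 0
    while getal < limit:
        getal += 1                   # single adjustment, at the top of the loop
        a = (getal - 1) // 5 * 4     # multiplication folded into the adjustment
        if a % 5 == 1:
            return getal
    return None
```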
Consider negating your tests. Instead of looking for "good" values, look for "bad" values. Because you are testing for "failure," and because the response to failure is to jump away (continue), you can replace nesting with a series of tests at the same level:
if a % 5 != 1:
continue
b = ...
if b % 5 != 1:
continue
c = ...
if c % 5 != 1:
continue | {
"domain": "codereview.stackexchange",
"id": 41528,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner",
"url": null
} |
ros-kinetic, cv-bridge
here is the Cmakelist.txt:
cmake_minimum_required(VERSION 2.8.3)
project(sample_opencv_pkg)
## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
cv_bridge
opencv2
rospy
sensor_msgs
std_msgs
)
## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)
## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()
################################################
## Declare ROS messages, services and actions ##
################################################
## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
## * add a build_depend tag for "message_generation"
## * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET
## * If MSG_DEP_SET isn't empty the following dependency has been pulled in
## but can be declared for certainty nonetheless: | {
"domain": "robotics.stackexchange",
"id": 31516,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-kinetic, cv-bridge",
"url": null
} |
quantum-mechanics, quantum-field-theory, general-relativity, gravity, virtual-particles
.......
There can be a self-interaction of the electron by exchange of a photon, as sketched in the Feynman diagram.
These virtual electrons never pop in and out of existence, let alone in any sinusoidal fashion; they are a mathematical construct useful for calculations and a help to the intuition, and they are simply part of the exchange diagrams necessary to calculate cross sections of interactions. The only hypothesis in which real electrons and other particles are generated from the virtual pairs in the diagrams involves the gravitational field of black holes — Hawking radiation — where the energy needed for the particle to be on mass shell is taken from the strong gravitational field of the black hole.
Back to the Lamb shift:
This "smears out" the electron position over a range of about 0.1 fermi (Bohr radius = 52,900 fermis). This causes the electron spin g-factor to be slightly different from 2. There is also a slight weakening of the force on the electron when it is very close to the nucleus, causing the 2s electron (which has penetration all the way to the nucleus) to be slightly higher in energy than the 2p(1/2) electron.
Note that the Lamb shift happens because of the smearing out of the virtual electron's position, and the effect appears in the g factor, related to the magnetic moment of the electron, not the mass. This is because off-mass-shell particles have indeterminate mass, as was stated in one of the comments to your question, whereas the quantum numbers that define the particle remain.
So your analogue should be the Zeeman effect. You are asking whether the gravitational field of the sun could split spectral lines the way the magnetic and electric fields do. In astrophysics this effect of the magnetic fields is studied, and that is how one knows of the existence of the magnetic fields. A gravitational field effect might be there, but it would be swamped by the sun's magnetic field:
Feynman diagrams have coupling constants at the vertices. The effect of magnetic fields gives a 1/137 at each vertex. The gravitational field, given that gravitons would have to be exchanged in the appropriate diagrams, gives a 10^-39 at each vertex — an impossible-to-measure difference in the Zeeman effect.
"domain": "physics.stackexchange",
"id": 18022,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-field-theory, general-relativity, gravity, virtual-particles",
"url": null
} |
moveit
Title: cannot roslaunch moveit_planning_execution.launch
Hi all! I'm following the industrial tutorial Create_a_MoveIt_Pkg_for_an_Industrial_Robot. When I got to step 2, Update Configuration Files, and completed moveit_planning_execution.launch, I tried to roslaunch it, but it failed. Something went wrong like this:
redefining global property: pi
when processing file: /home/shantengfei/catkin_ws/src/jaka/urdf/jaka.urdf.xacro
while processing /home/shantengfei/catkin_ws/src/jaka_ur_moveit_config/launch/move_group.launch:
while processing /home/shantengfei/catkin_ws/src/jaka_ur_moveit_config/launch/trajectory_execution.launch.xml:
while processing /home/shantengfei/catkin_ws/src/jaka_ur_moveit_config/launch/jakaUr_moveit_controller_manager.launch.xml:
Invalid roslaunch XML syntax: not well-formed (invalid token): line 8, column 0
The traceback for the exception was written to the log file
I defined pi in the xacro file, so is this wrong? In addition, there is a tip:
"Invalid roslaunch XML syntax: not well-formed (invalid token): line 8, column 0
The traceback for the exception was written to the log file".
Here is the launch file:
<!-- Non-standard joint names:
- Create a file jaka_ur_moveit_config/config/joint_names.yaml
controller_joint_names: [joint_1, joint_2, ... joint_N]
- Update with joint names for your robot (in order expected by rbt controller)
- and uncomment the following line: --> | {
"domain": "robotics.stackexchange",
"id": 29102,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "moveit",
"url": null
} |
special-relativity, metric-tensor, inertial-frames, notation
Title: Different formal definitions of Lorentz Transformations The formal definition for Lorentz Transformation is a matrix $\Lambda$ such that $$\Lambda^\mu_{\ \ \alpha}\Lambda^\nu_{\ \ \beta}\eta_{\mu\nu}=\eta_{\alpha\beta}.$$
In some books I have found a definition that use the transposition: $$(\Lambda^T)\eta\Lambda=\eta.$$
My question is how to link them.
My attempt, so far, is to multiply by the inverse, but I get stuck very soon and I don't know how to reach the second equation. Probably the passages are trivial.
Thanks for any help. Kontle's answer contains the main idea, but let me be a bit more specific about your example: it is all about matrix notation.
Recall in general linear algebra that, given $n \times n$ matrices $X,A \in \mathbb{R}^{n \times n}$ one can write the $(i,j)$-coefficient of the matrix $B= X^TAX$ as follows:
\begin{equation}
(b_{ij})=\sum_{l=1}^n\left(\sum_{k=1}^n x_{ki}a_{kl}\right)x_{lj}
= \sum_{k,l=1}^nx_{ki}a_{kl}x_{lj},
\end{equation}
To check this, try writing out the matrices explicitly and expressing the product in summation notation (remembering that, since you start with $X^T$, multiplying by rows of $X$ is the same as multiplying by columns of $X^T$).
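A quick numerical sanity check of this coefficient formula, written as a sketch with numpy (the matrices here are random placeholders, not a specific Lorentz transformation or metric):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = rng.standard_normal((4, 4))

# b_ij = sum over k, l of x_ki * a_kl * x_lj, written with explicit indices...
B_index = np.einsum('ki,kl,lj->ij', X, A, X)
# ...agrees with the matrix product X^T A X
B_matrix = X.T @ A @ X

assert np.allclose(B_index, B_matrix)
```

The same check with `A` replaced by the Minkowski metric and `X` by a boost matrix reproduces the defining relation $\Lambda^T \eta \Lambda = \eta$.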
More specifically, in your notation and for a $4 \times 4$ matrix, you can write $B = \Lambda^T\eta \Lambda$ as follows:
\begin{equation} | {
"domain": "physics.stackexchange",
"id": 86986,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, metric-tensor, inertial-frames, notation",
"url": null
} |
homework-and-exercises, kinematics, projectile
Step by step for a vertical freefall (1D) with origin on the ground at the vertical of the initial position of the object and y axis toward the object:
$$-g=a_y$$
$$\Rightarrow v(t) = \int^t_0 -g\,dt = -gt + v_{0y}$$
$$\Rightarrow y(t) = \int^t_0 (-gt + v_{0y})\,dt = -\frac{1}{2}gt^2 + v_{0y}t + y_0$$
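Plugging numbers into these formulas gives a quick check. Here $v_{0y}=0$ and $y_0=80\,$m (the values stated in the problem), while $g \approx 9.8\,\mathrm{m/s^2}$ is an assumed value:

```python
import math

g, y0, v0y = 9.8, 80.0, 0.0           # g = 9.8 m/s^2 is an assumption

T_g = math.sqrt(2 * y0 / g)           # y(T_g) = 0  =>  T_g = sqrt(2*y0/g)
v_half = -g * (T_g / 2) + v0y         # velocity at half the fall time

# consistency: y really vanishes at T_g, and the speed halfway is ~19.8 m/s down
assert abs(-0.5 * g * T_g**2 + v0y * T_g + y0) < 1e-9
assert abs(v_half + 19.8) < 0.05
```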
Here $v_{0y}$ is 0 and $y_0 = 80\,\mathrm{m}$. You're interested in $v(T_g/2)$, where $T_g$ is the time $t$ at which $y(t)=0$. | {
"domain": "physics.stackexchange",
"id": 12193,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, kinematics, projectile",
"url": null
} |
mobile-robot, sensors, coverage
Title: Best strategy for area scanning using little sensing bots I'm currently working on a school project about simulating robots scanning an area, but it has been a struggle to find what strategy the robots should use. Here are the details:
I am given a certain number of robots, each with a sensing range of $r$. They spawn one after another. Their task is to scan a rectangular area. They can only communicate with each other when they are within communication range.
I am looking for the best strategy, (i.e. time efficient solution) for this.
Any reply or clue about the strategy will be appreciated. Distributed cooperative coverage for robots sounds like an area of active research. I suggest looking at some academic papers. Here are a few to get you started:
Multirobot Cooperative Model applied to Coverage of Unknown Regions
Cooperative Coverage of Rectilinear Environments | {
"domain": "robotics.stackexchange",
"id": 511,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mobile-robot, sensors, coverage",
"url": null
} |
molecular-biology, botany, perception, receptor, light
Title: Receptors for red and far-red light in plants: Shade avoidance Franklin (2009) describes how plants use the ratio of the red wavelength (660-670nm) over the far-red wavelength (725-735nm) (R:FR) in order to avoid shading.
My question is: which receptor is stimulated by the red and which receptor is stimulated by the far-red?
In his paragraph on Phytochromes (3rd page, right column), K. Franklin seems to say that PhyB is responsible for measuring this ratio, but I am not sure. As far as I know, there are five receptors for red and far-red light: the phytochromes (phyA-phyE). It is all about the ratio between red and far-red light.
Each phytochrome has an inactive (Pr) and an active (Pfr) conformation. phyA is the only phytochrome that is activated by far-red light, so its active state is Pr (this applies only if the ratio between red and far-red light is low). The other phytochromes are activated by red light (a high ratio between red and far-red).
An active phytochrome blocks the COP1/SPA complex. This complex is an E3 ubiquitin ligase that ubiquitinates transcription factors of the light response, such as HY5 or HFR1.
Example:
Under normal light conditions, phyB-phyE are active. They block the COP1/SPA complex, so the transcription factors of the light response are not ubiquitinated. The plant develops a light phenotype.
If a plant grows under another plant, it receives less red light, because the plant above uses it for its photosynthesis. phyB-phyE become inactive, COP1/SPA can ubiquitinate the transcription factors, and the plant develops a low-light phenotype, trying to grow out of the shadow.
The function of phyA is to produce a light phenotype if there is a lot of far-red light but almost no red light; then it is activated and blocks the COP1/SPA complex. Under red light, phyA is not only inactivated, it is also degraded. | {
"domain": "biology.stackexchange",
"id": 3412,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "molecular-biology, botany, perception, receptor, light",
"url": null
} |
robotic-arm, ros, gazebo
and the robot just hangs from the base link, because the base link is the only one whose joint has type=fixed while all the others have type=revolute.
Here is a picture of the hanging robot:
When I lift and release the robot in Gazebo using the mouse, the robot just falls down and the axes shake as if there were no motor or any force keeping them fixed. The reason the robot is lying down is that gravity is not turned off in the URDF file. When I add this part of the code to my URDF, the robot is loaded in some position and does not fall down.
<!-- Gazebo-specific link properties -->
<gazebo reference="${prefix}base_link">
<material>Gazebo/Yellow</material>
<turnGravityOff>true</turnGravityOff>
</gazebo>
<gazebo reference="${prefix}link_1">
<material>Gazebo/Orange</material>
<turnGravityOff>true</turnGravityOff>
</gazebo>
<gazebo reference="${prefix}link_2">
<material>Gazebo/Orange</material>
<turnGravityOff>true</turnGravityOff>
</gazebo>
<gazebo reference="${prefix}link_3">
<material>Gazebo/Orange</material>
<turnGravityOff>true</turnGravityOff>
</gazebo>
<gazebo reference="${prefix}link_4">
<material>Gazebo/Orange</material>
<turnGravityOff>true</turnGravityOff>
</gazebo>
<gazebo reference="${prefix}link_5">
<material>Gazebo/Orange</material>
<turnGravityOff>true</turnGravityOff>
</gazebo>
<gazebo reference="${prefix}link_6">
<material>Gazebo/Black</material>
<turnGravityOff>true</turnGravityOff>
</gazebo>
I am not sure that I want gravity turned off in the simulation. | {
"domain": "robotics.stackexchange",
"id": 2166,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "robotic-arm, ros, gazebo",
"url": null
} |
javascript
this.price -= 0.5
}
if (month === 48) {
this.price -= 0.4
}
if (month === 36) {
this.price -= 0.3
}
if (month === 24) {
this.price -= 0.15
}
if (month === 12) {
this.price += 0.0
}
} If I were doing this, I'd start by putting the data into a two-dimensional array. As raw data, it seems to be pretty much this: | {
"domain": "codereview.stackexchange",
"id": 40409,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript",
"url": null
} |
python, javascript, beginner, html, python-2.x
if __name__ == '__main__':
utils.send_response(metrics_fetcher())
history.py
import utils
def history_fetcher(default_table):
table = utils.REQUEST_ARGS.getvalue('LogType') + 'Log' if 'LogType' in utils.REQUEST_ARGS else default_table
try:
formatter = utils.METRICS_FORMATTER[table]
except KeyError:
table = default_table
formatter = utils.METRICS_FORMATTER[table]
start_date, end_date = utils.retrieve_start_and_end_date()
connection = utils.connect_to_database()
with connection as cursor:
rows = utils.select_from_database(cursor, table, start_date, end_date, minute_grouping=3)
yield 'Logged\tValue'
for row in rows:
yield '{}\t{}'.format(row[0], formatter(row[1]))
connection.close()
if __name__ == '__main__':
utils.send_response(history_fetcher(default_table='TemperatureLog')) | {
"domain": "codereview.stackexchange",
"id": 24550,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, javascript, beginner, html, python-2.x",
"url": null
} |
c, converting
You nailed the first part, but I'm not so sure about the second. I guess it depends on how accurate you need it to be.
int celsius = 5 * (fahr-32) / 9;
This is very likely to be a non-integer number before you cast it. C casts from floating point to integer via truncation, which means the result is always "rounded" toward zero (down, for positive temperatures). I doubt that's what you intended. Use the round() function from <math.h> instead for a more accurate result. Or, even better, return a double.
One other thing, don't abbreviate variable names. I understand why you used fahr instead of fahrenheit, but it's a bad habit to get into. | {
"domain": "codereview.stackexchange",
"id": 15625,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, converting",
"url": null
} |
which we can rewrite as
$\displaystyle \int_0^1 \int_0^1 [\hbox{Ad}_{stx} Z, \hbox{Ad}_{tx} Y]\ t ds dt = - \int_0^1 \int_0^1 [\hbox{Ad}_{(1-st)x} Y, \hbox{Ad}_{(1-t)x} Z]\ t ds dt.$
But by an appropriate change of variables (and the anti-symmetry of the Lie bracket), both sides of this equation can be written as
$\displaystyle \int_{0 \leq a\leq b \leq 1} [\hbox{Ad}_{ax} Z, \hbox{Ad}_{bx} Y]\ da db$
and the claim follows.
Remark 1 The above argument shows that every finite-dimensional Lie algebra ${{\mathfrak g}}$ can be viewed as arising from a local Lie group ${G}$. It is natural to then ask if that local Lie group (or a sufficiently small piece thereof) can in turn be extended to a global Lie group ${\tilde G}$. The answer to this is affirmative, as was first shown by Cartan. I have been unable however to find a proof of this result that does not either use Ado’s theorem, the proof method of Ado’s theorem (in particular, the structural decomposition of Lie algebras into semisimple and solvable factors), or some facts about group cohomology (particularly with regards to central extensions of Lie groups) which are closely related to the structural decompositions just mentioned. (As noted by Serre, though, a certain amount of this sort of difficulty in the proof may in fact be necessary, given that the global form of Lie’s third theorem is known to fail in the infinite-dimensional case.) | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9933071474906947,
"lm_q1q2_score": 0.8277472746473831,
"lm_q2_score": 0.8333245932423308,
"openwebmath_perplexity": 149.96531194708652,
"openwebmath_score": 0.9581230282783508,
"tags": null,
"url": "https://terrytao.wordpress.com/2011/10/29/associativity-of-the-baker-campbell-hausdorff-formula/"
} |
Number of functions from one set to another: let X and Y be two sets having m and n elements respectively. Considering all possibilities of mapping elements of X to elements of Y, there are n^m functions in total. If X has m elements and Y has 2 elements, the number of onto functions is 2^m − 2. For example, with A = {a, b, c} and B = {m, n}, this formula gives 2^3 − 2 = 6 onto functions. My book says the general count of onto functions is the coefficient of x^m in m!(e^x − 1)^n. Misc 10 (Introduction): find the number of all onto functions from the set {1, 2, 3, …, n} to itself. Taking the set {1, 2, 3}: since f is onto, all elements of {1, 2, 3} have a unique pre-image, so the total number of such (one-one and onto) functions is 3 × 2 × 1 = 6.
There are 3 ways of choosing each of the 5 elements = | {
"domain": "sankore.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9869795098861572,
"lm_q1q2_score": 0.8465995456534107,
"lm_q2_score": 0.8577681068080748,
"openwebmath_perplexity": 619.3188419242183,
"openwebmath_score": 0.6051638722419739,
"tags": null,
"url": "https://sankore.com/riz0beq/total-no-of-onto-functions-from-a-to-b-ffb20a"
} |
navigation, rviz, clearpath, sicklms, husky
Originally posted by Joe28965 with karma: 1124 on 2020-01-09
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by emlynw on 2020-01-09:
Yes I've got an IMU connected too. Do you know how can I fix this if this is the problem? I'll try doing the same without the IMU plugged in to see what happens
Comment by Joe28965 on 2020-01-09:
What do you use to create your map? Is it gmapping? I know cartographer will update the map if it has detected the robot has started drifting. However, I'm not sure if using cartographer will fix your problem since I'm by no means an expert when it comes to ROS and have never personally used ROS indigo
Comment by emlynw on 2020-01-09:
I've just tried unplugging the IMU and it seems to solve the problem of the LIDAR points rotating relative to the odom frame so thanks! I've tried calibrating the IMU but it still causes the same error so I'm unsure how to fix that, any ideas? I used gmapping but got the weird round maps. I've now tried using hector_slam and am getting decent maps even with the IMU so I think it must also correct for drift.
Comment by Joe28965 on 2020-01-09:
well slam is Simultaneous Localisation And Mapping, so you should check which part of the package actually does the localisation and see if you can somehow tweak some there, maybe make it rely more on other sensors?
Comment by emlynw on 2020-01-10:
Even when I'm not running slam, in RViz the walls return to a fixed point relative to the fixed odom frame with the IMU unplugged but do not stay fixed to odom with the IMU plugged in | {
"domain": "robotics.stackexchange",
"id": 34243,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, rviz, clearpath, sicklms, husky",
"url": null
} |
java, classes
public int getPoints() {
return points;
}
public int getMoney() {
return money;
}
public int getTime() {
return time;
}
public ArrayList<String> getAccessWords(Boolean forSave) {
if (!forSave){
return accessWords;
} else {
StringBuilder stringAW = new StringBuilder();
for (String s:accessWords){
stringAW.append(s).append("|");
}
return new ArrayList<String>(Arrays.asList(stringAW.toString()));
}
}
public ArrayList<Integer> getLettersCount(boolean forSave) {
if (!forSave){
return lettersCount;
} else {
StringBuilder stringLetterCount = new StringBuilder();
for (Integer i:lettersCount){
stringLetterCount.append(i);
}
int tempInt = 0;
if (stringLetterCount.length() > 0){
tempInt = Integer.parseInt(stringLetterCount.toString());
}
return new ArrayList<Integer>(Arrays.asList(tempInt));
}
}
public void reset() {
tempPoints = points;
tempMoney = money;
tempTime = time;
isDone = false;
isBlock = false;
}
public void decreasePoints(int pPoints) {
if (accessWords.contains("CONTINUOUS")) {
points -= pPoints;
if (points <= 0) {
decreaseRepeat();
}
} else if (difficulty.equals("SPECIAL")) {
tempPoints -= pPoints;
if (tempPoints <= 0 && type != Type.Mazes && !isBlock) {
decreaseRepeat();
isBlock = true;
}
} else if (type == Type.Mazes) {
tempPoints -= pPoints;
if (tempPoints > 0) {
decreaseRepeat();
isBlock = true;
}
} else {
tempPoints -= pPoints;
if (tempPoints <= 0) {
decreaseRepeat();
}
}
} | {
"domain": "codereview.stackexchange",
"id": 8225,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, classes",
"url": null
} |
ros2
if(NOT CMAKE_C_STANDARD)
set(CMAKE_C_STANDARD 99)
endif()
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
find_package(ament_cmake REQUIRED)
find_package(ament_cmake_python REQUIRED)
find_package(rclcpp REQUIRED)
find_package(rclpy REQUIRED)
find_package(urdf REQUIRED)
find_package(xacro REQUIRED)
find_package(robot_state_publisher REQUIRED)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
ament_lint_auto_find_test_dependencies()
endif()
install(
DIRECTORY
launch
DESTINATION
share/${PROJECT_NAME}/
)
ament_python_install_package(${PROJECT_NAME})
install(PROGRAMS
scripts/spawn_helper.py
DESTINATION lib/${PROJECT_NAME}
)
ament_package()
And this is my controller_manager.yaml file:
controller_manager:
ros__parameters:
update_rate: 10 # Hz
joint_state_broadcaster:
type: joint_state_broadcaster/JointStateBroadcaster
forward_position_controller:
type: forward_command_controller/ForwardCommandController
position_trajectory_controller:
type: joint_trajectory_controller/JointTrajectoryController
forward_position_controller:
ros__parameters:
joints:
- joint1
- joint2
interface_name: position
position_trajectory_controller:
ros__parameters:
joints:
- joint1
- joint2
command_interfaces:
- position
state_interfaces:
- position
state_publish_rate: 200.0 # Defaults to 50
action_monitor_rate: 20.0 # Defaults to 20 | {
"domain": "robotics.stackexchange",
"id": 37501,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros2",
"url": null
} |
astrophysics, dark-matter, galaxies, galaxy-rotation-curve
Title: Do counter rotating galaxies have dark matter? Have counter rotating dark matter galaxies been observed?
Counter-rotating galaxies, as you may already know, are galaxies in which some stars or arms rotate in one direction and other stars or arms rotate in the opposite direction, possibly due to the merger of two or more galaxies. As you probably know, the presence of dark matter in galaxies can be inferred from the analysis of their velocity curves. In 1970, Freeman determined the velocity profiles of galaxies using the 21 cm line, and he found that for NGC 300 and M33 there should have been much more gravitational mass outside the last bright point. In the same year, Rubin and Ford (1970) determined the velocity profile for M31: the profile was flat out to 24 kpc, which is much greater than the last photometric radius.
The predicted rotation curve of a galaxy should decrease smoothly, following a Keplerian model, beyond the last luminous radius. As you can see, most studied galaxies instead show velocity curves that are flat outside their last visible point. The most widely accepted resolution of this discrepancy between the observed and predicted curves is the hypothesis of dark matter in the galactic halo. Another important parameter for estimating the presence of dark matter is the mass-to-luminosity ratio. For our Galaxy it has an approximate value of $\sim 50\, M_{\odot}/L_{\odot}$ (Binney and Tremaine 2008). This means that there should be mass that is not visible, perhaps condensed in dark matter, brown dwarfs, or other non-luminous bodies.
To answer your question: counter-rotating galaxies may have similar velocity curves, and they can have gravitational but non-luminous mass in their halo. As you can see in this small paper on the counter-rotating Sa galaxy NGC3539, https://ned.ipac.caltech.edu/level5/March14/Corsini/Corsini2.html, there is a plot at the end which clearly shows the velocity profile: it stays flat outside of the last radius instead of decreasing as predicted. This can be explained by assuming that the halo is filled with dark matter. | {
"domain": "physics.stackexchange",
"id": 85644,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "astrophysics, dark-matter, galaxies, galaxy-rotation-curve",
"url": null
} |
In my opinion ##\ln x^4## and ##4 \ln x## are two same functions but I am confused why they have different domains
Should I just follow the original question? If given as ##f(x)=\ln x^4## then the domain is x ∈ ℝ , x ≠ 0 and if given as ##f(x) = 4 \ln x## the domain is x > 0? So for the determination of domain I can not change the original question from ##\ln x^4## to ##4 \ln x## or vice versa?
Thanks
The solution is: you cheated!
If we write ##g(x)=x^4## then ##f=\ln\circ g## which is only defined if we use absolute values: ##f=\ln\circ \operatorname{abs} \circ g##. So the correct expression is ##f(x)=\ln|x^4|## which equals ##4\cdot \ln|x|##. The fact that you could omit the absolute value is due to your unmentioned knowledge that ##x^4\geq 0## for all ##x##. Hence you used an additional information which was hidden, whereas the camouflage vanished in ##\ln x##.
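A quick numerical illustration of this point, with Python's `math.log` playing the role of ##\ln##:

```python
import math

x = -2.0
lhs = math.log(x**4)            # fine: x**4 = 16 > 0, even though x < 0
rhs = 4 * math.log(abs(x))      # the absolute value is what makes this legal
assert abs(lhs - rhs) < 1e-12   # ln(x^4) = 4 ln|x| for x != 0

# 4*ln(x) itself is undefined for negative x:
try:
    4 * math.log(x)
except ValueError:
    pass                        # math domain error, as expected
```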
Mark44 (Mentor):
For function ##f(x)=\ln x^4## the domain is x ∈ ℝ , x ≠ 0 but if I change it into ##f(x) = 4 \ln x## then the domain will be x > 0
In my opinion ##\ln x^4## and ##4 \ln x## are two same functions but I am confused why they have different domains
The property of logarithms that you used, ##\ln a^b = b\ln a## is valid only for a > 0. ##x^4 > 0## if and only if ##x \ne 0##, but the same is not true for x itself. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9372107878954105,
"lm_q1q2_score": 0.8039095181016838,
"lm_q2_score": 0.8577681013541613,
"openwebmath_perplexity": 2625.6523321814757,
"openwebmath_score": 0.8927342295646667,
"tags": null,
"url": "https://www.physicsforums.com/threads/confusion-about-the-domain-of-this-logarithmic-function.991159/"
} |
c#, performance, json.net
}
private IEnumerable<RecordHolder> GetDefaultConfiguration()
{
// get the default config files already present in default "Records" folder
// and return RecordHolder list back.
}
private IEnumerable<RecordHolder> GetConfigFromServer()
{
// get the config files from the server
// and return RecordHolder list back.
}
private IEnumerable<RecordHolder> GetConfigFromLocalFiles()
{
// get the config files from the secondary location
// and return RecordHolder list back.
}
} | {
"domain": "codereview.stackexchange",
"id": 38901,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, json.net",
"url": null
} |
c++, c++11, tree, template, pointers
template <class BST>
bool BinarySearchTree<BST>::_search(const BST &item, node_t * &parent, node_t * &current) const
{
parent = nullptr;
current = root.get();
while(current)
/*-Loop searches for item in the tree.
-Each iteration traverses and checks one node.*/
{
if(item == (current)->data)
{
return true;
}
else
{
parent = current;
if(item > current->data)
{
current = current->right.get();
}
else
{
current = current->left.get();
}
}
}
return false;
}
template <class BST>
void BinarySearchTree<BST>::remove_with_children(node_t * &node)
{
node_t * min_parent;
min_parent = find_min_r_sub(*node);
if(min_parent)
{ //Node is set with min value and the minimum node deleted
node->data = min_parent->left->data;
min_parent->left.reset();
}
else
{ //min_parent is null, so the min value is the node's right child.
node->data = node->right->data;
node->right.reset();
}
}
template <class BST>
typename BinarySearchTree<BST>::node_t * BinarySearchTree<BST>::find_min_r_sub(const node_t &node) const
{
node_t * parent = nullptr;
node_t * current = node.right.get();
while(current->left)
/*- Loop traverses subtree as far left as possible.
- Each iteration traverses one node.*/
{
parent = current;
current = current->left.get();
}
return parent;
} | {
"domain": "codereview.stackexchange",
"id": 10557,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, tree, template, pointers",
"url": null
} |
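The `_search` loop above translates almost line-for-line into Python. A small sketch with a dict-based node (the node layout here is an assumption for illustration, not the reviewed `node_t`):

```python
def make_node(data):
    return {"data": data, "left": None, "right": None}

def insert(root, data):
    """Recursive insert; duplicates are ignored."""
    if root is None:
        return make_node(data)
    if data > root["data"]:
        root["right"] = insert(root["right"], data)
    elif data < root["data"]:
        root["left"] = insert(root["left"], data)
    return root

def search(root, item):
    """Iterative search mirroring _search: returns (found, parent, current)."""
    parent, current = None, root
    while current:
        # Each iteration traverses and checks one node.
        if item == current["data"]:
            return True, parent, current
        parent = current
        current = current["right"] if item > current["data"] else current["left"]
    return False, parent, None

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)
found, parent, node = search(root, 6)
print(found, parent["data"])  # True 3
```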
homework-and-exercises, newtonian-mechanics, rotational-kinematics
Now to your doubt about why (2) holds: $\alpha$ might be positive or negative, and thus $\omega_i$ could be greater than or less than $\omega_f$.
In the above problem it is given that the wheel covers 90 revolutions in 15 sec. So on average its angular velocity is $\frac{90}{15}=6$ revolutions/sec.
But the final velocity is $10$ revolutions/sec.
This means that at some point in time the velocity is less than $6$ revolutions/sec; only then can the average come out to 6 revolutions/sec.
But we are given that the angular acceleration is constant. So the velocity can only decrease or only increase over that time interval. It can't decrease in one sub-interval and then increase in another, because for that the angular acceleration would have to change sign, contradicting the assumption that it is constant.
Since the angular velocity has to take a value less than $6$ revolutions/sec at some point, the velocity must increase continuously over the whole interval; thus the initial angular velocity is less than the final angular velocity.
So, the take-home message is that the information about the angular acceleration is inherent in equation $(2)$; it might not be evident from the form of the equation, but we can see it from the derivation.
Hope that helps! | {
"domain": "physics.stackexchange",
"id": 78811,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, rotational-kinematics",
"url": null
} |
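The argument above can be checked numerically: with constant $\alpha$, the average angular velocity equals the midpoint of the initial and final values, so $\omega_i = 2\cdot(90/15) - 10 = 2$ rev/s, and $\alpha$ comes out positive as claimed.

```python
revolutions = 90.0      # total revolutions covered
t = 15.0                # seconds
omega_f = 10.0          # rev/s, final angular velocity

omega_avg = revolutions / t            # 6 rev/s
omega_i = 2 * omega_avg - omega_f      # 2 rev/s: average is the midpoint for constant alpha
alpha = (omega_f - omega_i) / t        # 8/15 rev/s^2, positive, so omega increases throughout

# Consistency check: theta = omega_i*t + 0.5*alpha*t^2 reproduces the 90 revolutions
theta = omega_i * t + 0.5 * alpha * t ** 2
print(omega_i, alpha, theta)  # 2.0 0.533... 90.0
```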
vcf, genome, 1000genomes
Title: Where do I get a large reference VCF? I would like to download a large .vcf file containing many (hundreds or thousands) of samples. Ideally, I would download different population-specific .vcf files, but the ability to sort/filter by ancestry group is fine. Where do I get such a file? I prefer GRCh37 for consistency with other files I'm using.
Update: I tried the HGDP file; I'm now trying the 1000 Genomes one, in addition to http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/1000_genomes_project/release/20190312_biallelic_SNV_and_INDEL/.
I downloaded the file, but when I opened it, it seemed to have no samples (instead of having many samples). Here is a full row:
1 10145 . AAC A 151.43 PASS AC=4;AF=0.0127389;AN=314;BaseQRankSum=0.72;ClippingRankSum=0;DP=24144;ExcessHet=160.275;FS=0;InbreedingCoeff=-0.1558;MLEAC=5;MLEAF=0.002604;MQ=1.63;MQRankSum=0.764;NEGATIVE_TRAIN_SITE;QD=3.52;ReadPosRankSum=0.085;SOR=0.458;VQSLOD=-0.9391;culprit=QD;VQSRMODE=INDEL;NS=157;ExcHet=0
This question has also been asked on Biostars Solution: Use https://bochet.gcc.biostat.washington.edu/beagle/1000_Genomes_phase3_v5a/b37.vcf/, then to get the ancestry information (i.e., the ancestry group of each sample, e.g., "HG00566" or "NA19758") use https://www.internationalgenome.org/data-portal/sample. | {
"domain": "bioinformatics.stackexchange",
"id": 2478,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vcf, genome, 1000genomes",
"url": null
} |
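The "no samples" observation can be confirmed mechanically: a VCF data line with genotypes has a FORMAT column plus one column per sample after the 8 fixed columns, so a sites-only VCF row splits into exactly 8 fields. A quick check against the quoted row (INFO abbreviated here, and split on whitespace although real VCFs are tab-delimited):

```python
# Abbreviated version of the quoted data line; only the field count matters.
row = ("1 10145 . AAC A 151.43 PASS "
       "AC=4;AF=0.0127389;AN=314;NS=157;ExcHet=0")
fields = row.split()
fixed = ["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO"]

print(len(fields))  # 8 -> sites-only VCF: no FORMAT or sample columns
n_samples = max(0, len(fields) - 9)  # columns beyond FORMAT are samples
print("samples in this row:", n_samples)
```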
Show Tags
14 Aug 2010, 09:44
As already discussed, the method is [(Last − First)/Increment] + 1
If you are referring to MGMAT by any chance, this is discussed in Chapter 4 of Number Properties
Intern
Joined: 30 Jun 2017
Posts: 16
Location: India
Concentration: Technology, General Management
WE: Consulting (Computer Software)
Re: How many multiples of 10 are there between 1000 and 2000, inclusive? [#permalink]
06 Sep 2017, 22:57
Bunuel wrote:
Baten80 wrote:
How many multiples of 10 are there between 1000 and 2000, inclusive?
Let me know the calculation process.
Ans.101
Hi, and welcome to Gmat Club! Below is a solution for your problem:
$$# \ of \ multiples \ of \ x \ in \ the \ range = \frac{Last \ multiple \ of \ x \ in \ the \ range \ - \ First \ multiple \ of \ x \ in \ the \ range}{x}+1$$.
For our original question we would have: $$\frac{2,000-1,000}{10}+1=101$$.
Check this: http://gmatclub.com/forum/totally-basic ... 20multiple
Hope it helps.
Hi Bunuel, Thanks for the explanation. However, if the question would have been that how many multiples of 5 are there between -7 and 37, inclusive? Will we consider the last number to be 35 and the first number to be -5?
Math Expert
Joined: 02 Sep 2009
Posts: 55236
Re: How many multiples of 10 are there between 1000 and 2000, inclusive? [#permalink]
06 Sep 2017, 23:07
SinhaS wrote:
Bunuel wrote:
Baten80 wrote:
How many multiples of 10 are there between 1000 and 2000, inclusive?
Let me know the calculation process.
Ans.101
Hi, and welcome to Gmat Club! Below is a solution for your problem:
$$# \ of \ multiples \ of \ x \ in \ the \ range = \frac{Last \ multiple \ of \ x \ in \ the \ range \ - \ First \ multiple \ of \ x \ in \ the \ range}{x}+1$$. | {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9284087926320944,
"lm_q1q2_score": 0.8049636277114594,
"lm_q2_score": 0.8670357649558007,
"openwebmath_perplexity": 3760.2497207758547,
"openwebmath_score": 0.7760913968086243,
"tags": null,
"url": "https://gmatclub.com/forum/how-many-multiples-of-10-are-there-between-1000-and-2000-inclusive-98997.html"
} |
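The counting formula from the thread, and the follow-up about multiples of 5 between −7 and 37, can be checked with a short function (the first and last multiples are found by rounding toward the inside of the range, so yes: last = 35 and first = −5, giving 9):

```python
import math

def count_multiples(x, lo, hi):
    """Number of multiples of x in [lo, hi] inclusive."""
    first = math.ceil(lo / x) * x    # smallest multiple >= lo
    last = math.floor(hi / x) * x    # largest multiple <= hi
    if first > last:
        return 0
    return (last - first) // x + 1

print(count_multiples(10, 1000, 2000))  # 101
# Multiples of 5 between -7 and 37: first = -5, last = 35
print(count_multiples(5, -7, 37))       # 9
```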
1. ## Variable Density Problem
Two cones are the same size w/ radius 3in and height 6in. One has a density of 80oz/in^3 at its base and 50oz/in^3 at the vertex. The other has a density of 50oz/in^3 at its base and 80oz/in^3 at its vertex. In both cones, the density varies linearly with the distance from the plane of the base. What are the masses of the cones? How do you solve this? The answers are supposed to be 1350pi and 1035pi, but I cannot get the right answer. One time I got 337.5pi which is 1/4 of 1350pi, but can't see why I would need to multiply by four. Thanks!
2. ## Re: Variable Density Problem
Hey citcat.
Can you show your integral calculations specifically to find these masses given the density functions (and also the volume regions through the limits)?
3. ## Re: Variable Density Problem
For cone one:
P=rho=density
V=volume
dM=PdV
P=-5y+80
y=-4x+6 so x=(y-6)/-4
dV=(pi)(r^2)dy=(pi)((y-6)/-4)^2dy
dM=(-5y+80)(pi)((y-6)/-4)^2dy
M = integral from 0 to 6 of (-5y+80)(pi)((y-6)/-4)^2dy = 326.5pi
326.5pi is a quarter of what the answer should be which is 1350 pi
4. ## Re: Variable Density Problem
The volume doesn't look right: shouldn't it be over the region of the cone which is a dV = dxdydz (with the right limits)?
5. ## Re: Variable Density Problem
I get:
Mass = density * Volume:
$\int_0^6\rho(x)Adx$
$=\int_0^6\rho\pi{r}^2dx$ | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9835969662817623,
"lm_q1q2_score": 0.821779076502457,
"lm_q2_score": 0.8354835411997897,
"openwebmath_perplexity": 1291.7481340762777,
"openwebmath_score": 0.802258312702179,
"tags": null,
"url": "http://mathhelpforum.com/calculus/206923-variable-density-problem.html"
} |
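The setup can be checked numerically. For a cone of radius 3 and height 6, the radius at height y is r(y) = 3(1 − y/6) = 3 − y/2, i.e. x = (6 − y)/2; post #3 used x = (6 − y)/4, which scales r² — and hence the mass — down by a factor of 4, explaining the "quarter of the answer" observation. Composite Simpson's rule is exact for these cubic integrands:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even. Exact for cubics."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

R, H = 3.0, 6.0

def radius(y):
    """Cone profile: r(0) = 3 at the base, r(6) = 0 at the vertex."""
    return R * (1 - y / H)           # i.e. x = (6 - y)/2, not (6 - y)/4

def mass(rho):
    """M = integral of rho(y) * pi * r(y)^2 dy over the height."""
    return simpson(lambda y: rho(y) * math.pi * radius(y) ** 2, 0.0, H)

m1 = mass(lambda y: 80 - 5 * y)      # dense at the base
m2 = mass(lambda y: 50 + 5 * y)      # dense at the vertex
print(m1 / math.pi, m2 / math.pi)    # ~1305.0 and ~1035.0
```

The second cone reproduces the quoted 1035π exactly. The first evaluates to 1305π rather than the quoted 1350π (note 1305/4 ≈ 326.25, matching the 326.5π obtained in post #3 with the factor-of-4 radius error), so the quoted 1350π may itself be worth rechecking.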
c++, memory-management, template, lambda, heap
//***************
// eHeap::eHeap
// move constructor
//***************
template<class type, class lambdaCompare>
inline eHeap<type, lambdaCompare>::eHeap(eHeap<type, lambdaCompare> && other)
: compare(other.compare),
numElements(0),
granularity(DEFAULT_HEAP_GRANULARITY),
heapSize(DEFAULT_HEAP_SIZE),
heap(nullptr)
{
std::swap(numElements, other.numElements);
std::swap(granularity, other.granularity);
std::swap(heapSize, other.heapSize);
std::swap(heap, other.heap);
}
//***************
// eHeap::~eHeap
//***************
template<class type, class lambdaCompare>
inline eHeap<type, lambdaCompare>::~eHeap() {
Free();
}
//***************
// eHeap::operator+
// merge, copies originals
// DEBUG: cannot be const function because the invoked heapify copy constructor is not const
//****************
template<class type, class lambdaCompare>
inline eHeap<type, lambdaCompare> eHeap<type, lambdaCompare>::operator+(const eHeap<type, lambdaCompare> & other) {
int sumElements;
int i, j;
type * newData;
sumElements = numElements + other.numElements;
if (!sumElements)
return eHeap<type, decltype(compare)>(compare); | {
"domain": "codereview.stackexchange",
"id": 23209,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, memory-management, template, lambda, heap",
"url": null
} |
c#, interface
if (!reapplyDelay.Any(id => id.Key == collidedInfo.entityId))
{
reapplyDelay.Enqueue(new KeyValuePair<int, float>(collidedInfo.entityId, conditionInstance.duration));
collider.transform.GetComponentInChildren<EntityInfoServer>().AddConditionServer(Condition.Burning, conditionInstance);
}
}
}
And here's another example:
Storm_Server
...
public float duration;
public Queue<KeyValuePair<int, float>> reapplyDelay = new Queue<KeyValuePair<int, float>>();
...
void Update()
{
duration += Time.deltaTime;
if (Mathf.CeilToInt((duration + Time.deltaTime) / TICK_FREQ) > Mathf.CeilToInt(duration / TICK_FREQ))
{
Activate(stormCollider);
}
KeyValuePair<int, float> id;
while (reapplyDelay.Count > 0)
{
if (reapplyDelay.Peek().Value - STORM_DRENCH_DELAY > duration)
{
id = reapplyDelay.Dequeue();
}
else
{
break;
}
}
}
public void OnTriggerEnter(Collider collider)
{
if (collider.tag == "Player")
{
//Debug.Log("IN STORM");
EntityInfoServer collidedInfo = collider.transform.GetComponentInChildren<EntityInfoServer>();
if (!reapplyDelay.Any(id => id.Key == collidedInfo.entityId))
{
reapplyDelay.Enqueue(new KeyValuePair<int, float>(collidedInfo.entityId, duration));
collider.transform.GetComponentInChildren<EntityInfoServer>().AddConditionServer(Condition.Drenched, stormInfo);
}
}
} | {
"domain": "codereview.stackexchange",
"id": 20756,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, interface",
"url": null
} |
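The `reapplyDelay` queue above implements a per-entity cooldown: a condition is re-applied to an entity only after its earlier queue entry has expired. A minimal Python sketch of that pattern — the entity ids, the 5-second delay, and the class name are all made up for illustration:

```python
from collections import deque

class CooldownQueue:
    """FIFO of (entity_id, enqueue_time); an id stays blocked until
    `delay` seconds after it was enqueued."""

    def __init__(self, delay):
        self.delay = delay
        self.entries = deque()

    def expire(self, now):
        # Entries are enqueued in time order, so expired ones sit at the front.
        while self.entries and now - self.entries[0][1] >= self.delay:
            self.entries.popleft()

    def try_apply(self, entity_id, now):
        self.expire(now)
        if any(eid == entity_id for eid, _ in self.entries):
            return False          # still cooling down
        self.entries.append((entity_id, now))
        return True               # caller applies the condition now

q = CooldownQueue(delay=5.0)
print(q.try_apply(42, now=0.0))   # True: condition applied
print(q.try_apply(42, now=3.0))   # False: still within the 5 s delay
print(q.try_apply(42, now=6.0))   # True: entry expired
```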
c++
int DirentWrap::number_of_folders_in_directory(string file_path)
{
return number_of_entities_in_directory(file_path, S_IFDIR) - 2;
}
Portability.
"\\" as a path separator works for Windows only. Prefer "/", which works for both Linux and Windows (the backslash is only required in cmd.exe).
folders_in_directory assumes that . and .. always appear first. readdir does not guarantee it.
Consider using std::experimental::filesystem library.
Error checking.
stat may fail. readdir may fail (if so it would return NULL, and you need to test errno, rather than blindly breaking a loop).
Similarly, the exception you throw on opendir failure loses the important information, namely why opendir failed. Provide errno or strerror(errno).
Why class?
class DirentWrap does not have any state. There is no reason to have it. Its methods should be made free functions, with dp and ep being their local variables. | {
"domain": "codereview.stackexchange",
"id": 36931,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++",
"url": null
} |
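For contrast, the same task — counting subfolders with explicit error reporting and no `.`/`..` special-casing — sketched in Python rather than translated from the reviewed class:

```python
import os

def count_subdirectories(path):
    """Count directories directly inside `path`.

    os.scandir never yields '.' or '..', so no '-2' adjustment is needed,
    and a failed open raises OSError carrying errno/strerror, so the cause
    of the failure is preserved rather than lost.
    """
    try:
        with os.scandir(path) as entries:
            return sum(1 for entry in entries
                       if entry.is_dir(follow_symlinks=False))
    except OSError as err:
        raise RuntimeError(f"cannot read {path!r}: {err.strerror}") from err

# "/" works as a separator on both Linux and Windows, but os.path.join is
# the portable way to build paths:
print(count_subdirectories(os.curdir))
```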
quantum-field-theory, path-integral
Title: Transition amplitude computed by the path integral I would like to understand the transition amplitude given by the path integral as it is presented in Srednicki's book in formula (8.3):
$<0|0>_J = \int D\phi \exp\left[i \int d^4x \,[L_0 + J\phi]\right]$. (8.3)
Apparently it is not equal to 1 as it is shown only a couple of lines later.
However, only 19 pages before the normalization of ground state stated to be:
$<0|0>=1.$ (5.4).
Unfortunately Srednicki does not clearly say if $|0>$ represents a ground state of an interacting QFT; however, as the normalization of $|0>$ is stated at the beginning of the explanation of LSZ, I strongly assume that $|0>$ represents the ground state of an interacting QFT. Consulting Peskin & Schroeder (P&S) also supports this.
So if this is the case, I consider $L=L_0 + J\phi$ as the Lagrangian of an interacting QFT, so I don't see a reason why $<0|0>_J$ should not be equal 1.
In particular, if the considered QFT is QED, the corresponding current $J^\mu$ has a clear physical meaning as the Dirac current; for this well-known interacting QFT, $<0|0>_J$ should equal $<0|0>=1$ according to (5.4).
Unfortunately, it gets even worse. According to the middle of p. 55 of Srednicki, $<0|0>_{J=0}=1$. So for a free QFT the ground state does seem to be normalized.
However, consulting P&S on the subject, I found the following:
the first = sign according to Srednicki, the rest according to P&S: | {
"domain": "physics.stackexchange",
"id": 51141,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, path-integral",
"url": null
} |
computability, turing-machines
Title: Set of Turing machines $S$ such that any $A \in S$ halts on input the description of any $B \in S$ Does there exist a maximal set of Turing machines $S$ over the alphabet $\{0,1\}$ such that any $A \in S$ halts on input the description of any $B \in S$?
Take S to be the set of deciders. Then S satisfies the property, but is not maximal, because for example we can take:
The set of TMs which are not equal to 0 and which halt on input any string not equal to 0. (Assuming 0 is not a deciding TM.)
Where $A$ is any set of non-deciding Turing machines, the set of TMs which are not in $A$ and halt on input any string not in $A$.
Maybe we can construct such a set $S$ by infinite recursion or by Zorn's lemma, but I haven't been able to see how.
Edit: By maximal, I mean there is no $S' \supsetneq S$ satisfying this property. Let us construct a Turing machine $M$ such that $M$ first generates its own description $\langle M \rangle$ by the recursion theorem. Then for any input $w$ it first checks if $w=\langle M \rangle$; if yes, it halts. If not, it loops forever. Now let $S=\{\langle M \rangle\}$. If you try to add any other Turing machine $T$ to $S$, it cannot be part of $S$, as $M$ won't halt on $\langle T \rangle$. Thus, following your definition of a maximal set, $S=\{\langle M \rangle\}$ is such a set.
"domain": "cs.stackexchange",
"id": 8458,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computability, turing-machines",
"url": null
} |
c++, object-oriented, tetris
I'd prefer something like this instead:
static const char space = '\x0';
static const char border = '\x9';
pField[y+fieldWidth+x] = (x==0 || x == fieldWidth-1 || y == fieldHeight-1) ? border : space;
Separation of Concerns
Right now, your PlayField manually allocates storage for the playing field. And it simulates 2D addressing in linear memory. And it knows about where the borders go in Tetris. And it doesn't do those very well--for example, it has a ctor that allocates memory with new, but there's no code to delete that memory anywhere, so the memory is leaked.
In my opinion, it would be better to use std::vector to manage the raw memory. Then write a simple wrapper to manage 2D addressing on top of that. Finally, add a layer to manage the Tetris border. | {
"domain": "codereview.stackexchange",
"id": 34747,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, object-oriented, tetris",
"url": null
} |
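The ternary above builds the border in one expression. The same construction in a small Python sketch — the dimensions and the `#`/`.` rendering are chosen for illustration, while the flat `field[y*width + x]` addressing mirrors the reviewed code:

```python
SPACE, BORDER = "\x00", "\x09"

def make_field(width, height):
    """Flat buffer with 2D addressing field[y * width + x]; bordered on the
    left, right and bottom edges, open at the top, like a Tetris well."""
    return [
        BORDER if (x == 0 or x == width - 1 or y == height - 1) else SPACE
        for y in range(height)
        for x in range(width)
    ]

field = make_field(width=4, height=3)
rows = ["".join("#" if c == BORDER else "." for c in field[y * 4:(y + 1) * 4])
        for y in range(3)]
print(rows)  # ['#..#', '#..#', '####']
```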
For primes $p$ and $q$ the ratio $r=\log(p)/\log(q)$ is an irrational number and by the Equidistribution theorem the sequence $\{r,2r,3r,4r,\ldots\}$ is asymptotically equidistributed modulo 1.
Specifically, for large $N$ we would expect that $2\epsilon N$ elements of $\{i\cdot r\}_{i=1}^N$ are equivalent mod 1 to a number in the intervals $[0,\epsilon]\cup[1-\epsilon,1)$. This implies that $p^i$ is within a factor of $q^\epsilon$ of a power of $q$.
So if we look at powers of $p$ with exponent up to $N$ and want one to be within about 1 part in 200 of a power of $q$ — approximately, we want $\left|\log\left(p^a/q^b\right)\right|\le 0.005$ — then we would expect to find $2N\log(1.005)/\log(q)$ close pairs.
Below are some charts showing this estimate and the actual number of pairs of exponents that give powers within 0.005 for pairs of primes $2\le p<q \le 29$. The x-axis enumerates the prime pairs, starting at $(2,3)$ and ending at $(23,29)$. The 17th entry showing a count of 2 in the first chart is $(3,17)$.
Note that the above argument does not rely on $p,q$ being primes, only that $\log(p)/\log(q)$ is irrational. Here are tables for $N=100$ and $N=10000$ for a few sets of primes as well as $(2,\pi)$ and $(\zeta(3),\mathrm{e})$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9777138190064203,
"lm_q1q2_score": 0.8104649321383294,
"lm_q2_score": 0.8289388125473629,
"openwebmath_perplexity": 203.50174922911646,
"openwebmath_score": 0.853807806968689,
"tags": null,
"url": "http://math.stackexchange.com/questions/124047/is-the-clustering-of-prime-powers-merely-coincidental"
} |
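The counting described above is easy to reproduce: for each exponent a ≤ N take the nearest power of q and test whether the log-ratio is within 0.005. For (p, q) = (2, 3) and N = 100 the only close pair is 2^84 ≈ 3^53 (84 and 53 come from a continued-fraction convergent of log 2 / log 3), which is consistent with the equidistribution estimate of roughly 0.91 expected pairs:

```python
import math

def close_pairs(p, q, N, tol=0.005):
    """Exponent pairs (a, b) with |log(p**a / q**b)| <= tol, for 1 <= a <= N."""
    r = math.log(p) / math.log(q)
    pairs = []
    for a in range(1, N + 1):
        b = round(a * r)                       # exponent of the nearest power of q
        if b >= 1 and abs(a * math.log(p) - b * math.log(q)) <= tol:
            pairs.append((a, b))
    return pairs

pairs = close_pairs(2, 3, 100)
print(pairs)                                   # [(84, 53)]

# Equidistribution estimate for the expected number of close pairs:
estimate = 2 * 100 * math.log(1.005) / math.log(3)
print(round(estimate, 2))                      # ~0.91, consistent with finding 1
```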
ruby, ruby-on-rails, comparative-review, active-record
One option is simply to do everything in the controller. It's precisely the sort of thing controllers are meant to do: Control access to and modification of data. In Rails you typically end up with a 1:1 structure of controllers and models (e.g. a PostsController for Post records), but that isn't a law. A user account system might have a SessionsController and a SignupsController and so on that all work on the same User model.
So you could easily have your PaymentsController both create a Payment record and update the Loan record. Nothing wrong with that. In fact, your controller is probably already loading the Loan anyway in order to do @loan.payments.create(payment_params).
Last option is to make Downpayment a "model" (just a plain class, not an ActiveRecord model) or a service object. Its responsibility would be the same as the controller in the previous description: Create a Payment record, and update the Loan record. So it's essentially the same logic, just extracted out of the controller.
In either case, you're back to a single responsibility: Pay down a loan. There are two models involved, but conceptually it's one task.
The tradeoff with either, however, is that you could technically create Payment without the Loan being updated by simply not using the controller/service object. But on the other hand you can do that too by just writing to the database without going through Rails.
An example:
def downpay(loan, amount)
loan.transaction do
loan.payments.create!(amount: amount)
loan.update_attributes!(paid: loan.payments.total >= loan.amount)
end
end | {
"domain": "codereview.stackexchange",
"id": 21478,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ruby, ruby-on-rails, comparative-review, active-record",
"url": null
} |
special-relativity, lorentz-symmetry
Now, in order to determine the numerical value and the sign of ${\displaystyle n}$, we have to look at the experiment...
...
... This gives
$$
{\displaystyle n={\frac {1}{c^{2}}}}\qquad\mbox{(25)}$$
And only from that it follows, that ${\displaystyle c}$ is constant for all coordinate systems. At the same time we see that the universal space-time constant ${\displaystyle n}$ is determined by the numerical value of ${\displaystyle c}$.
Now it is clear that optics lost its special position with respect to the relativity principle by the previous derivation of the transformation equations. By that, the relativity principle itself gains more general importance, because it doesn't depend on a special physical phenomenon any more, but on the universal constant ${\displaystyle n}$.
Nevertheless we can grant optics or the electrodynamic equations a special position, though not in respect to the relativity principle, but in respect to the other branches of physics, namely in so far as it is possible to determine the constant ${\displaystyle n}$ from these equations.
For more early papers on Relativity,
this is a useful starting point:
https://en.wikisource.org/wiki/Portal:Relativity
FOOTNOTE:
Here is another reference by Ignatowski. This reference is a more thorough introduction to the idea.
"Das Relativitätsprinzip"
Archiv der Mathematik und Physik 17: 1-24, 18: 17-40 (1910)
https://de.wikisource.org/wiki/Das_Relativit%C3%A4tsprinzip_(Ignatowski)
I had to run it through Google Translate. | {
"domain": "physics.stackexchange",
"id": 61917,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, lorentz-symmetry",
"url": null
} |
synthetic-biology, plasmids, restriction-enzymes, ligation
An exonuclease digested the sticky ends of the vector away, or
The sticky ends simply degraded by chance
resulting in a blunt-ended re-ligation of the vector without insert and consequently loss of the EcoRI site.
I believe these to be quite important things to consider in cloning - in subsequent experiments I have had several colonies growing on negative control plates (i.e. colonies transformed with a control ligation that contained no insert), all of which were probably due to loss of sticky ends and blunt ligations, resulting in functional plasmids that conferred ampicillin resistance despite lacking the desired insert.
"domain": "biology.stackexchange",
"id": 1264,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "synthetic-biology, plasmids, restriction-enzymes, ligation",
"url": null
} |
ros, kinect, odroid, 3d-navigation
Title: on board kinect use for navigation
Hi, the kinect for xbox one or kinect for windows 2, which one is better for uav navigation? I know the Kinect for windows is discontinued right now. I want to use odroid xu4 as onboard computer to connect kinect.
Thanks
Originally posted by crazymumu on ROS Answers with karma: 214 on 2015-09-10
Post score: 0
I am using Kinect 2 for Windows. Using libfreenect2 and iai_kinect2, I can process data successfully on the Odroid XU4 board.
Originally posted by crazymumu with karma: 214 on 2016-01-19
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by breadplex on 2019-09-21:
How many FPS does it produce? Just I read for 30 FPS Kinect need very powerfull board. | {
"domain": "robotics.stackexchange",
"id": 22608,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, kinect, odroid, 3d-navigation",
"url": null
} |
Can you think of a similar proof to show a function $g:A\to B$ is onto iff it is right cancellable? In fancier language, the concept of injective function and monic (i.e. left cancellable) morphism, and the concept of surjective function and epic (i.e. right cancellable) morphism coincide in the category of sets, where objects are sets and arrows are functions.
• Made some corrections. Would (2b) above be an example of a right cancellable map? – St Vincent Dec 14 '14 at 0:31
• @StVincent One direction for the surjectivity iff right cancellable is fine, but the other is lacking. Try assuming $g$ is not surjective and as I did, finding two differemt functions $f_1,f_2$ such that $f_1g=f_2g$. – Pedro Tamaroff Dec 14 '14 at 1:13 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9715639636617014,
"lm_q1q2_score": 0.819898066938187,
"lm_q2_score": 0.8438950986284991,
"openwebmath_perplexity": 52.9929750464434,
"openwebmath_score": 0.9803346991539001,
"tags": null,
"url": "https://math.stackexchange.com/questions/1065739/suppose-that-f-a-to-b-and-g-b-to-c-are-functions"
} |
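A sketch of the direction the comment asks for — if $g$ is not surjective it is not right cancellable — with the standard witnessing functions written out (this fills in the exercise, not anything stated in the original post):

```latex
\textbf{Claim.} If $g:A\to B$ is not surjective, then $g$ is not right
cancellable. Pick $b_0\in B\setminus g(A)$ and define $f_1,f_2:B\to\{0,1\}$ by
\[
  f_1(b)=0 \quad\text{for all } b, \qquad
  f_2(b)=\begin{cases}1 & b=b_0,\\ 0 & \text{otherwise.}\end{cases}
\]
Then $f_1\circ g=f_2\circ g$ (both vanish on $A$, since $g$ never hits $b_0$),
yet $f_1\neq f_2$ because $f_1(b_0)=0\neq 1=f_2(b_0)$. Conversely, if $g$ is
surjective and $f_1\circ g=f_2\circ g$, then every $b\in B$ equals $g(a)$ for
some $a$, so $f_1(b)=f_1(g(a))=f_2(g(a))=f_2(b)$, i.e. $f_1=f_2$.
```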
As we just argued, ${\hat{\mu},\hat{\Sigma}}$, ${\hat{\mu}_i}$ and ${\hat{\Sigma}_i}$ can vary all over the space without any restriction; the supremum in the numerator and denominator thus does not depend on the choice of ${\mu^*}$ and ${\Sigma^*}$ at all. So our theorem is proved. $\Box$
Chain rule of convex function
This post studies a specific chain rule of composition of convex functions. Specifically, we have the following theorem.
Theorem 1 For a continuously differentiable increasing function ${\phi: \mathbb{R} \rightarrow \mathbb{R}}$, a convex function ${h: U \rightarrow \mathbb{R}}$ where ${U\in \mathbb{R}^n}$ is a convex set and an ${x\in U}$, if ${\phi'(h(x))>0}$ or ${x\in \mbox{int}(U)}$, then
\displaystyle \begin{aligned} \partial (\phi \circ h) (x) = \phi' (h(x)) \cdot[ \partial h (x)], \end{aligned} \ \ \ \ \ (1)
where ${\partial }$ is the operator of taking subdifferentials of a function, i.e., ${\partial h (x) = \{ g\mid h(x)\geq h(y) +\langle{g},{y-x}\rangle,\forall y\in U\}}$ for any ${x\in U}$, and ${\mbox{int}(U)}$ is the interior of ${U}$ with respect to the standard topology in ${\mathbb{R}^n}$. | {
"domain": "threesquirrelsdotblog.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.992422758508031,
"lm_q1q2_score": 0.8204475928070726,
"lm_q2_score": 0.8267117876664789,
"openwebmath_perplexity": 11766.24440627605,
"openwebmath_score": 1.0000014305114746,
"tags": null,
"url": "https://threesquirrelsdotblog.com/"
} |
ros, ros-kinetic, android
05-24 15:03:29.734 27146-27174/? E/ExternalAccountType: Unsupported attribute readOnly
05-24 15:03:29.735 27146-27174/? E/CSP_ExceptionCapture: Unsupported attribute readOnly
05-24 15:03:29.811 4771-4771/? E/HwSystemManager: AppCleanUpService:msg is 1
05-24 15:03:29.825 27146-27174/? E/ExternalAccountType: Unsupported attribute readOnly
05-24 15:03:29.825 27146-27174/? E/CSP_ExceptionCapture: Unsupported attribute readOnly
05-24 15:03:29.843 5459-5497/? E/LogCollectService: Level = 256
05-24 15:03:29.871 28290-28303/? E/linker: readlink('/proc/self/fd/21') failed: Permission denied [fd=21]
warning: unable to get realpath for the library "/system/lib/hw/gralloc.hi3630.so". Will use given name.
05-24 15:03:29.874 28290-28303/? E/linker: readlink('/proc/self/fd/21') failed: Permission denied [fd=21]
warning: unable to get realpath for the library "libion.so". Will use given name.
05-24 15:03:29.876 28290-28303/? E/HAL: load: id=gralloc != hmi->id=gralloc
05-24 15:03:29.932 5459-5497/? E/LogCollectService: Level = 256
05-24 15:03:29.957 15993-22694/? E/NetworkScheduler: Unrecognised action provided: android.intent.action.PACKAGE_REMOVED | {
"domain": "robotics.stackexchange",
"id": 30894,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-kinetic, android",
"url": null
} |
general-relativity, resource-recommendations, education
Title: Recommendation on books with problems for general relativity? I am reading Sean Carroll's book on GR and have read the first two chapters, which are on manifolds and differential geometry. However, there are only 12 problems for both chapters. In fact, there seem to be few problems for each chapter throughout the textbook. Hence, I wish for a recommendation on a book on general relativity with lots of problems. The book should be mathematically and conceptually advanced, and have plenty of problems (30-50 problems per chapter). The book preferably should either be free or of low cost, because I am self studying general relativity and don't have too many financial resources. So in brief: what books are there that cover general relativity from the very beginning to cosmology with tons of problems? The "Problem book in relativity and gravitation" is very good, and written by some well-known relativists. It's got a pretty broad variety of questions, along with solutions. It is a little on the old side, but many of the problems are just as relevant today.
In terms of depth and breadth I don't think there's much that can compare to MTW, so that's obviously also quite good. Of course, Wald's book is generally excellent, which includes the questions. | {
"domain": "physics.stackexchange",
"id": 95338,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, resource-recommendations, education",
"url": null
} |
machine-learning, comparison, supervised-learning, self-supervised-learning, representation-learning
This example is similar to the example given in this other answer.
Neural networks
Some neural networks, for example, autoencoders (AE) [7] are sometimes called self-supervised learning tools. In fact, you can train AEs without images that have been manually labeled by a human. More concretely, consider a de-noising AE, whose goal is to reconstruct the original image when given a noisy version of it. During training, you actually have the original image, given that you have a dataset of uncorrupted images and you just corrupt these images with some noise, so you can calculate some kind of distance between the original image and the noisy one, where the original image is the supervisory signal. In this sense, AEs are self-supervised learning tools, but it's more common to say that AEs are unsupervised learning tools, so SSL has also been used to refer to unsupervised learning techniques.
Robotics
In [2], the training data is automatically but approximately labeled by finding and exploiting the relations or correlations between inputs coming from different sensor modalities (and this technique is called SSL by the authors). So, as opposed to representation learning or auto-encoders, in this case, an actual labeled dataset is produced automatically.
Example
Consider a robot that is equipped with a proximity sensor (which is a short-range sensor capable of detecting objects in front of the robot at short distances) and a camera (which is a long-range sensor, but which does not provide a direct way of detecting objects). You can also assume that this robot is capable of performing odometry. An example of such a robot is Mighty Thymio.
Consider now the task of detecting objects in front of the robot at longer ranges than the range the proximity sensor allows. In general, we could train a CNN to achieve that. However, to train such a CNN with supervised learning, we would first need a labelled dataset containing labelled images (or videos), where the labels could be, e.g., "object in the image" or "no object in the image". In supervised learning, this dataset would need to be manually labelled by a human, which would clearly require a lot of work. | {
"domain": "ai.stackexchange",
"id": 1295,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, comparison, supervised-learning, self-supervised-learning, representation-learning",
"url": null
} |
python, python-3.x, algorithm, computational-geometry
Review request
Algorithm improvements.
Functional correctness of implemented algorithm.
Boundary and error cases.
Python-specific feedback.
I'm not really going to talk much about the algorithm in the first part; this is more about Python usage.
line_segment needs to be LineSegment per PEP 8
__start_point should not be double-underscored, and should not be declared at the class level, so delete __start_point = (0, 0)
Add PEP 484 type hints
Don't index [0] and [1] when you actually just mean .x and .y, for which named tuples are well-suited
Convert many (most?) of your class methods to @properties
Do not implement your own binary search; call into bisect (I have not shown this in my reference implementation)
Fix up minor typos such as segmnets, serach
Replace your prints with asserts to magically get actual unit tests
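On the bisect point, a minimal sketch of what calling into the standard library might look like (a hypothetical helper, not part of the reference implementation below):

```python
from bisect import bisect_left

def index_of(sorted_values, target):
    """Index of target in an already-sorted list, or -1 if absent."""
    i = bisect_left(sorted_values, target)
    if i < len(sorted_values) and sorted_values[i] == target:
        return i
    return -1

assert index_of([2, 5, 7, 11], 7) == 2
assert index_of([2, 5, 7, 11], 6) == -1
```

`bisect_left` returns the insertion point, so a membership check against the element at that index is still needed.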
Suggested
# Code for Algorithm-2
from functools import cmp_to_key
from typing import Tuple, Sequence, List, NamedTuple, Callable
class Point(NamedTuple):
x: int
y: int
class LineSegment:
"""
Represents line_segment which is either horizontal or vertical.
"""
def __init__(self, start_point: Point, end_point: Point) -> None:
if start_point.x == end_point.x:
self._start_point = (start_point, end_point)[start_point.y > end_point.y]
self._end_point = (start_point, end_point)[start_point.y < end_point.y]
else:
self._start_point = (start_point, end_point)[start_point.x > end_point.x]
self._end_point = (start_point, end_point)[start_point.x < end_point.x]
def does_intersect(self, target_line_segment: 'LineSegment') -> bool:
is_vertical = self.is_segment_vertical
is_target_vertical = target_line_segment.is_segment_vertical
# Check for parallel segments
if is_vertical and is_target_vertical:
return False | {
"domain": "codereview.stackexchange",
"id": 42017,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, algorithm, computational-geometry",
"url": null
} |
forces, potential-energy, conventions, vector-fields, conservative-field
Of course, this is all a convention -- if you'd like, you can define a "potential schmenergy" function $\tilde{V} = -V$, and then $\vec{F} = + \nabla \tilde{V}$. None of the physics would be changed, except that objects would fall from low to high potential schmenergy, which might go against the grain of your intuition. | {
"domain": "physics.stackexchange",
"id": 78700,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, potential-energy, conventions, vector-fields, conservative-field",
"url": null
} |
measurement, error-correction, stabilizer-code, stabilizer-state
where in the last step we use three facts. First, $g_1$ does not stabilize the post-measurement state so we remove it from the generator list. Second, $(-1)^m\tilde{g}$ becomes a new stabilizer, so we add it to the list. Third, since $\tilde{g}$ and $g_i$ for $i\in\{2,\dots,p\}$ commute, the post-measurement state is stabilized by $g_i$.
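The update rule can be illustrated on a single qubit (a NumPy sketch, not from the original answer): the state $|+\rangle$ is stabilized by $g_1 = X$; measuring $\tilde g = Z$, which anticommutes with $g_1$, drops $g_1$ and installs $(-1)^m Z$ as the new generator. For the $m=0$ outcome:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
assert np.allclose(X @ plus, plus)      # |+> is stabilized by g1 = X

# Project onto the m = 0 outcome of measuring Z and renormalise.
post = (I + Z) / 2 @ plus
post = post / np.linalg.norm(post)

assert np.allclose(Z @ post, post)      # new stabilizer generator: +Z
```

The same projector-and-renormalise computation with $(I - Z)/2$ gives the $m=1$ outcome, stabilized by $-Z$.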
Note that the list of stabilizer generators may only grow if $p<n$. This makes sense since the largest stabilizer group on $n$ qubits has $n$ generators. In this case $p=n$ and the input state is a stabilizer state. This allows us to avoid checking whether $\tilde{g}$ or $-\tilde{g}$ belong to the stabilizer in the second step above, because we know that one of them does. | {
"domain": "quantumcomputing.stackexchange",
"id": 3466,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "measurement, error-correction, stabilizer-code, stabilizer-state",
"url": null
} |
Remark: The history of the criterion is both interesting and instructive. For this, see David A. Cox, Why Eisenstein proved the Eisenstein criterion and why Schönemann discovered it first.
Above is prototypical of transformation-based problem solving. Consider the analogous case of solving quadratic equations. One knows how to solve the simple special case $x^2 = a$ by taking square roots. To solve the general quadratic we look for an invertible transformation that reduces the general quadratic to this special case. The solution, dubbed completing the square, is well-known. For another example, see this proof of the Factor Theorem $x-c \mid p(x)-p(c)$, which reduces to the "obvious" special case $c=0$ via a shift automorphism $x\to x+c$. The problem-solving strategy above is completely analogous. We seek transformations that map polynomials into forms where Eisenstein's criterion applies. But we also require that the transformation preserve innate structure - here multiplicative structure (so that $\sigma f$ irreducible $\Rightarrow$ $f$ irreducible).
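As a concrete instance (the classic one behind the linked question), the shift $x\to x+1$ turns $x^{p-1}+\cdots+x+1$ into $((x+1)^p-1)/x$, whose coefficients are binomials and satisfy Eisenstein's conditions at $p$. A quick check of those conditions, for the arbitrarily chosen prime $p=5$:

```python
from math import comb

# ((x+1)^p - 1)/x has coefficients C(p, k) for k = p..1,
# listed here with the leading coefficient first.
p = 5
coeffs = [comb(p, k) for k in range(p, 0, -1)]

assert coeffs[0] == 1                        # monic
assert all(c % p == 0 for c in coeffs[1:])   # p divides all the rest
assert coeffs[-1] % p**2 != 0                # p^2 misses the constant term
```

So Eisenstein at $p$ applies to the shifted polynomial, and irreducibility transfers back through the shift.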
Employing such transformation-based problem solving strategies has the great advantage that one can transform theorems, tests, criteria, etc, into a simple reduced or "normal" form that is easy to remember or apply, and then use the ambient symmetries or transformations to massage any given example to the required normal form. This strategy is ubiquitous throughout mathematics (and many other sciences). For numerous interesting examples see Zdzislaw A. Melzak's book Bypasses: a simple approach to complexity, 1983, which serves as an excellent companion to Polya's books on mathematical problem-solving.
• The extra comments beyond just the proof are very useful, thank you! The link to David A. Cox's article is not working for me, could you please check it?
– user279515
Apr 22 '19 at 8:58 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9790357585701875,
"lm_q1q2_score": 0.8686050209568464,
"lm_q2_score": 0.8872045922259088,
"openwebmath_perplexity": 369.20979699484616,
"openwebmath_score": 0.9287957549095154,
"tags": null,
"url": "https://math.stackexchange.com/questions/215042/irreducibility-of-xp-1-cdots-x1/215052"
} |
python, algorithm, reinventing-the-wheel, combinatorics
Is there a simpler way to trigger the unfolding of the iterator into a list?
Yes, there is: list slicing.
With list slicing alone, the reversed part could be written a[k + 1::-1],
where the last -1 is the step, and a step of -1 means we traverse the slice in reverse.
This returns a list and not a generator, thus your reverse could be
return a[:k + 1] + a[k + 1::-1]
@user2357112, I feel like a rookie now.
I made a mistake: while typing quickly I intuitively thought reversed slicing would work like list[start:end:step], but it actually works differently, with exclusion.
[first index to include : first index to exclude : step], and it becomes:
return a[:k + 1] + a[:k:-1] | {
"domain": "codereview.stackexchange",
"id": 27543,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, algorithm, reinventing-the-wheel, combinatorics",
"url": null
} |
Suppose then that $1/2\lt s/t\lt1$ and consider $Q(s/t)$. Just as Johan did, we define another random walk $U_n=\sum_{i=1}^nZ_i$ with steps $Z_i=s$ or $Z_i=s-t$ so that $Q(s/t)$ equals the probability that $U_n$ will ever reach $-1$, define $f(j)$ as the probability that $U_n$ will reach $-1$ when it is currently at $j$, and find that $f(j)=f(j+s)/2+f(j+s-t)/2$ for $j\ge0$. The characteristic equation of this recursion is $g(z)=z^t-2z^{t-s}+1=0$. By Lemma 1 and since $f(j)$ tends to $0$ as $j$ tends to infinity, we must have $f(j)=\sum_{i=1}^{t-s}A_ir_i^j$, where $r_1,\dots,r_{t-s}$ are the (necessarily simple) zeroes of $g(z)$ inside the unit circle. Since $f(j)=0$ for $s-t\le j\le -1$, Lemma 2 implies that $Q(s/t)=f(0)=\sum_{i=1}^{t-s}A_i=1-\prod_{i=1}^{t-s}(1-r_i)$. | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9852713857177956,
"lm_q1q2_score": 0.8167296945956567,
"lm_q2_score": 0.8289388146603365,
"openwebmath_perplexity": 125.62264444457085,
"openwebmath_score": 0.9337767362594604,
"tags": null,
"url": "https://mathoverflow.net/questions/63789/probability-of-a-random-walk-crossing-a-straight-line"
} |
python
Title: Generating filesystem paths from a fixed string It's clever, but makes me vomit a little:
file = '0123456789abcdef123'
path = os.sep.join([ file[ x:(x+2) ] for x in range(0,5,2) ])
I have no idea what you're trying to do here, but it looks like you're splitting a string into groups of two a specified number of times. Despite the magic constants, etc., there's really no better way to do it, but I think there's certainly a better way to format it (I'm assuming these are directories, since you're using os.sep):
The below is I think a more clear way to write it:
file = '0123456789abcdef123'
dir_len = 2
path_len = 3
path = os.sep.join(file[x:x + dir_len] for x in range(0, dir_len * path_len - 1, dir_len))
Note that the [] around the list comprehension is gone - it's now a generator. For this example it really doesn't matter which one you use, but since this is Code Review generators are another Python concept you should look at. | {
"domain": "codereview.stackexchange",
"id": 25,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
javascript, performance, beginner, game, canvas
// Following code is a fix for [[obj1, obj3], [obj2, obj4]].
if (alreadyHadCollisions && (index1 > -1 || index2 > -1)) {
for (i4 = 0; i4 < this.collisions[collisionIndex].length; ++i4) {
obj3 = this.collisions[collisionIndex][i4];
if (obj3 !== obj1 && obj3 !== obj2) collision.push(obj3);
}
this.collisions.splice(collisionIndex, 1);
}
if (index1 > -1 || index2 > -1) {
alreadyHadCollisions = true;
collisionIndex = i3;
}
}
if (!alreadyHadCollisions) this.collisions.push([obj1, obj2]);
}
}
}
}
for (i1 = 0; i1 < this.collisions.length; ++i1) {
var targets = this.collisions[i1],
biggestRadius, scaleFactor;
obj1 = targets[0];
biggestRadius = obj1.getRadius();
for (i2 = 1; i2 < targets.length; ++i2) {
obj2 = targets[i2];
var density = Math.max(obj1.density, obj2.density),
area = obj1.getArea() * (obj1.density / density) + obj2.getArea() * (obj2.density / density); | {
"domain": "codereview.stackexchange",
"id": 15652,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, performance, beginner, game, canvas",
"url": null
} |
python, scikit-learn, multilabel-classification, hyperparameter-tuning
Title: How to train multioutput classification with hyperparameter tuning in sklearn? I am working on a simple multioutput classification problem and noticed this error showing up whenever running the below code:
ValueError: Target is multilabel-indicator but average='binary'. Please
choose another average setting, one of [None, 'micro', 'macro', 'weighted', 'samples'].
I understand the problem it is referencing, i.e., when evaluating multilabel models one needs to explicitly set the type of averaging. Nevertheless, I am unable to figure out where this average argument should go; the accuracy_score, precision_score, and recall_score built-in methods have this argument, which I do not use explicitly in my code. MultiOutputClassifier doesn't have such an argument, and neither does RandomizedSearchCV's .fit() method. I also tried passing methods like precision_score(average='micro') directly to the scoring and refit arguments of RandomizedSearchCV, but that didn't solve it either, since methods such as precision_score() require predicted and true y labels as arguments, which I have no access to in the individual K-folds of the randomized search.
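For context, the standard scikit-learn mechanism for binding a metric's keyword arguments is make_scorer, which wraps the metric into a callable with the (estimator, X, y) signature that scoring= expects (a sketch for illustration, not part of the original question):

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, precision_score
from sklearn.multioutput import MultiOutputClassifier

# make_scorer binds keyword arguments such as average=... and returns a
# scorer callable; the CV machinery supplies y_true / y_pred internally.
micro_precision = make_scorer(precision_score, average='micro')

X, Y = make_multilabel_classification(n_samples=100, n_features=4,
                                      n_classes=3, random_state=0)
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
score = micro_precision(clf, X, Y)
assert 0.0 <= score <= 1.0
```

Such a scorer can then be passed as the scoring= argument of RandomizedSearchCV.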
Full code with data:
from sklearn.datasets import make_multilabel_classification
from sklearn.naive_bayes import MultinomialNB
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
X, Y = make_multilabel_classification(
n_samples=1000,
n_features=2,
n_classes=5,
n_labels=2
)
pipe = Pipeline(
steps = [
('scaler', MinMaxScaler()),
('model', MultiOutputClassifier(MultinomialNB()))
]
) | {
"domain": "datascience.stackexchange",
"id": 10534,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, scikit-learn, multilabel-classification, hyperparameter-tuning",
"url": null
} |
atoms, subatomic
Title: Can we use a camera with a frame rate of 1 trillion FPS to observe electrons, protons, and neutrons? Like with this camera from MIT that works at 1 trillion FPS: https://youtu.be/EtsXgODHMWk No. The wavelength of visible light is too long to resolve individual subatomic particles, so even if you had an extremely fast shutter on your camera, the particles would remain invisible.
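To put rough numbers on the scale mismatch (illustrative textbook orders of magnitude, not from the original answer):

```python
# Rough orders of magnitude, for illustration only.
visible_wavelength_m = 4e-7   # ~400 nm, shortest visible light
atomic_radius_m = 1e-10       # ~1 angstrom
proton_radius_m = 1e-15       # ~1 femtometre

# Visible light is thousands of times larger than an atom, and
# hundreds of millions of times larger than a proton.
assert visible_wavelength_m / atomic_radius_m > 1e3
assert visible_wavelength_m / proton_radius_m > 1e8
```

No shutter speed can compensate for a probe whose wavelength dwarfs the object being imaged.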
Particle accelerators use detectors that can capture extremely fast events occurring on extremely small length scales, but they do not use visible light to do so. | {
"domain": "physics.stackexchange",
"id": 54758,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "atoms, subatomic",
"url": null
} |
continuum-mechanics, turbulence, solid-mechanics
Title: Equivalence of turbulence in solid materials The governing equations for a fluid and a solid are effectively the same, and analysis can often be done for a solid using the Navier-Stokes equations with the equation of state and/or the stress tensor computation modified. Because the equations are effectively the same, the phenomena present in the solutions are effectively the same.
What is the equivalence of turbulence in a solid? How do the scaling laws change? The onset of turbulence in fluids is determined by the Reynolds number
$$
\mathrm{Re} = \frac{vL}{\nu},
$$
where $L$ is the characteristic length scale, $v$ the characteristic velocity, and $\nu$ the kinematic viscosity. Turbulence sets in for $\mathrm{Re}$ greater than roughly 1000, depending on geometry.
If we want to see the equivalent to turbulence in solids, we will have to figure out how to make the Reynolds number large enough. The trouble is that to the extent that solids can be treated as fluids, they're fluids with really, really high viscosities. So we have to make $L$ or $v$ or both really really big in order to counter this.
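A quick comparison makes the viscosity obstacle concrete (the figures are rough reference values, not taken from the original answer):

```python
# Illustrative orders of magnitude only.
def reynolds(v, L, nu):
    """Re = v * L / nu, with nu the kinematic viscosity in m^2/s."""
    return v * L / nu

re_water = reynolds(v=1.0, L=1.0, nu=1e-6)   # water: nu ~ 1e-6 m^2/s

# Pitch: dynamic viscosity ~ 2e8 Pa*s and density ~ 1.1e3 kg/m^3,
# so kinematic viscosity nu ~ 2e5 m^2/s.
re_pitch = reynolds(v=1.0, L=1.0, nu=2e5)

assert re_water > 1e3    # comfortably turbulent at everyday scales
assert re_pitch < 1e-4   # nowhere near the onset of turbulence
```

At the same speed and length scale, the "fluid-like" solid is some eleven orders of magnitude short of turbulence.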
We could try to make $v$ high, by making one part of the solid move extremely rapidly relative to another. However, for all the solids I can think of, this will just break the solid. | {
"domain": "physics.stackexchange",
"id": 7171,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "continuum-mechanics, turbulence, solid-mechanics",
"url": null
} |
haskell, integer
instance Ord ℤ where
a `compare` b = normalise a `comp` normalise b
where Zero `comp` Zero = EQ
Zero `comp` (Succ _) = LT
Zero `comp` (Pred _) = GT
(Succ _) `comp` Zero = GT
(Succ _) `comp` (Pred _) = GT
(Pred _) `comp` Zero = LT
(Pred _) `comp` (Succ _) = LT
(Succ n) `comp` (Succ m) = n `comp` m
(Pred n) `comp` (Pred m) = n `comp` m
instance Real ℤ where
toRational n = toInteger' n % 1
instance Integral ℤ where
toInteger = toInteger'
quotRem _ Zero = error "divide by zero"
quotRem Zero _ = (Zero, Zero)
quotRem n d | n == d = (Succ Zero,0)
| n < d = (Zero, n)
| otherwise = (Succ (fst foo), snd foo)
where foo = (n-d) `quotRem` d
This is some good-looking code; there are just a few things that throw me. The first is that you're defining the integers but occasionally referring to them as "Nat"s. The natural numbers are the non-negative integers; don't confuse the two.
toNat = toEnum :: Int -> ℤ
fromNat = fromEnum :: ℤ -> Int
This is a strange construction; give the type signature before the definition of the function, and GHC will figure out which version of toEnum and fromEnum to use. Anything else is unusual and unnecessary.
Your Show instance should really just punt to the instance for Integer. Again, natural numbers aren't integers, so your "Nat: " tag is incorrect; it doesn't really add anything, and it makes writing a Read instance more difficult. Also, there's no need to section infix functions by writing them prefix style. Thus:
instance Show ℤ where
show = show . toInteger
instance Read ℤ where
read = fromInteger . read | {
"domain": "codereview.stackexchange",
"id": 12055,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "haskell, integer",
"url": null
} |
Calculator works. Read Integral Approximations to learn more. If you don't specify the bounds, only the antiderivative will be computed. In doing this, the Integral Calculator has to respect the order of operations. Let's try some problems: $$f(x)=x^2-2x,\qquad g(x)=0$$ $$\int_0^2\left[0-(x^2-2x)\right]dx=-\int_0^2(x^2-2x)\,dx$$ $$f(x)=x^2-5x+6,\qquad g(x)=-x^2+x+6$$ $$\int_0^3\left[(-x^2+x+6)-(x^2-5x+6)\right]dx=\int_0^3(-2x^2+6x)\,dx=\left[-\frac{2}{3}x^3+3x^2\right] | {
"domain": "academiedubiocontrole.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9811668701095653,
"lm_q1q2_score": 0.804452184698937,
"lm_q2_score": 0.8198933425148213,
"openwebmath_perplexity": 1279.330310946827,
"openwebmath_score": 0.9425788521766663,
"tags": null,
"url": "http://academiedubiocontrole.org/blog/pioneer-woman-haa/9brkc.php?page=b74b56-application-calcul-int%C3%A9gral"
} |
Tags cost, find, pens, set
| {
"domain": "mymathforum.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806563575741,
"lm_q1q2_score": 0.8150028035986677,
"lm_q2_score": 0.8311430562234877,
"openwebmath_perplexity": 2815.9058036280285,
"openwebmath_score": 0.6220771074295044,
"tags": null,
"url": "http://mymathforum.com/elementary-math/346233-find-cost-set-pens.html"
} |
star, planet, tidal-forces
Title: How do stars affect the orbits of moons? I previously asked a similar question, but I'm wondering: can a star make a moon move closer to its planet or further away? How? Short answer: yes.
Long answer: it's complicated.
This is a classic three body problem, and a real example would be the Sun-Earth-Moon system.
Unfortunately there is no general solution of the 3-body Newtonian gravity problem, so we have to rely on special methods for special cases and even these turn out to be quite complex.
The best answer I could find to your specific question, applied to the Sun-Earth-Moon system, is this one on Yahoo. As you can see, the result is quite a complex set of periodic effects. The effect is small but not negligible.
You can intuitively visualize the effect in such a system by noting that, typically and to a good approximation, the planet-moon system orbits the star at a distance considerably larger than the planet-moon distance. The planet and moon themselves orbit around their center of mass. The relative difference in the star's gravitational force between the points where the moon is furthest from and closest to the star gives a rough guide to the order of magnitude of the effect. A back-of-the-envelope calculation makes it on the order of 1% for the Sun-Earth-Moon system.
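That back-of-the-envelope estimate can be reproduced in a couple of lines (standard mean orbital distances; the numbers are illustrative, not from the original answer):

```python
# Rough standard values.
r_sun_earth = 1.496e11    # m, mean Earth-Sun distance
r_earth_moon = 3.844e8    # m, mean Earth-Moon distance

# The Sun's pull scales as 1/r^2, so compare the force per unit mass at
# the Moon's closest and furthest points relative to the mean value.
ratio = ((r_sun_earth - r_earth_moon) ** -2
         - (r_sun_earth + r_earth_moon) ** -2) / r_sun_earth ** -2

assert 0.005 < ratio < 0.02   # about 1%, as estimated above
```

The fractional difference comes out near $4a/r \approx 1\%$, where $a$ is the Moon's orbital radius and $r$ the Earth-Sun distance.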
Other systems will be different, but there will certainly be an effect. | {
"domain": "astronomy.stackexchange",
"id": 2176,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "star, planet, tidal-forces",
"url": null
} |
particle-physics, electromagnetic-radiation, radiation, radioactivity, gamma-rays
Title: How to produce ionizing radiation without a radioactive substance? I think ionizing radiation is caused by rays or particles.
My professor told me: "Without a radioactive substance, with only commercial products, it's possible to produce ionizing radiation."
Can anyone give me some thoughts about the methodology? Ultraviolet light is capable of ionizing a variety of substances; this effect is quite strong and allows UV to be used to disinfect drinking water and kill bacteria, as well as giving you a sunburn. | {
"domain": "physics.stackexchange",
"id": 79904,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, electromagnetic-radiation, radiation, radioactivity, gamma-rays",
"url": null
} |
c++, game
{
cout<<"[ ";
bool isNeg = si < 0;//If the starting index (si) is negative, indexes are not displayed
for(int i = 0; i < ac.size(); i++)
{
string ce = (i != (ac.size() - 1)) ? ", " : " ]\n";//Sets ce to a separator; it becomes " ]\n" if it's the last card
string sind = isNeg ? "" : to_string(si) + ") ";//Stores Card Index or Keeps it Empty Based on isNeg
if(getColor(ac[i]) == colors[4])//Checks if the Card Is Wild
{
cout<<SetTextColor(ac[i], (rand() % 4) + 1)<<sind;//Displays The Index With 1 Random Color
for(int j = 0; j < ac[i].size(); j++)//Loops Through Characters of The Card
cout<<SetTextColor(ac[i], (j % 4 + 1))<<ac[i][j];//Displays Each Character With A Color In Order
cout<<SetTextColor("", 5)<<ce;//Displays the separator with the default color
}
else
cout<<SetTextColor(ac[i], 5)<<sind<<ac[i]<<SetTextColor("", 5)<<ce;//Displays the index and card in the card's color, followed by the separator in the default color
si++;
}
}
public:
static vector<string> AllStatus;//Stores Status(i.e number of Cards or uno or won) of All Card Objects
vector<string> AllCards;//Stores All Cards of a Cards Object
static int count;//Counts the number of Cards Objects
Cards()
{
srand(time(0));//Seed the random number generator with the current time
count++;//Increases count every time object is created
AllStatus.push_back("");//creates empty status when an object is created | {
"domain": "codereview.stackexchange",
"id": 43764,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, game",
"url": null
} |
context-free, formal-grammars
Title: Easiest way to write a grammar? When I see a problem like "Write a grammar for a language $L$ if $L = \{..\}$", defining the productions is, for me, a matter of "instinct". For example, given the following exercise:
Let $L$ be a language whose alphabet is $\{x,y,z\}$ and which accepts
strings $w$ in which there are no consecutive $x$'s, no consecutive
$y$'s, and no consecutive $z$'s. | {
"domain": "cs.stackexchange",
"id": 1369,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "context-free, formal-grammars",
"url": null
} |
Then it is easy to prove that an optimal solution cannot contain three consecutive elements in increasing (or decreasing) order $... x_i < x_j < x_k ...$ because it is enough to move the middle element to the end of the array, and the sum cannot decrease after the transformation. So an optimal solution must proceed in a zig-zag style. Then we can prove that the optimal solution cannot contain two consecutive elements from the same side. Suppose that we have somewhere $... x_i, x_j, x_k ...$ with $x_i> x_j < x_k$ and $x_i, x_j, x_k \in B$. Then the remaining sequence must contain three consecutive elements from $A$, $...x'_i,x'_j,x'_k...$, ordered as $x'_i < x'_j > x'_k$. Then $A \ni x'_j < x_j \in B$ can be swapped, preserving the inequalities and strictly increasing the value of the sum.
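The exchange argument can be sanity-checked by brute force on a small instance (a sketch; the values are arbitrary). Since moving the middle element of a monotone triple never decreases the sum, at least one optimal permutation must be a zig-zag:

```python
from itertools import permutations

def score(p):
    """Sum of absolute differences of adjacent elements."""
    return sum(abs(a - b) for a, b in zip(p, p[1:]))

def has_monotone_triple(p):
    """True if p contains three consecutive increasing/decreasing elements."""
    return any(a < b < c or a > b > c for a, b, c in zip(p, p[1:], p[2:]))

values = (1, 3, 4, 8, 9)          # arbitrary test values
best = max(score(p) for p in permutations(values))
winners = [p for p in permutations(values) if score(p) == best]

# The exchange argument guarantees at least one optimum is a zig-zag.
assert any(not has_monotone_triple(p) for p in winners)
```

This only certifies existence of a zig-zag optimum, which is exactly what the move-to-the-end transformation proves.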
For the detailed proof see: Bubble Cup 7DC - Problem C: MaxDiff
(it contains other nice problems) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9688561694652216,
"lm_q1q2_score": 0.8382777038428773,
"lm_q2_score": 0.8652240964782011,
"openwebmath_perplexity": 301.52295231788287,
"openwebmath_score": 0.6606630682945251,
"tags": null,
"url": "https://cstheory.stackexchange.com/questions/27808/finding-a-permutation-p-of-x-1-x-2-dots-x-n-which-maximises-sum-i-1"
} |
organic-chemistry, aromatic-compounds
P.S: It’s not necessary for anyone to interpret these concepts the same way. In fact, it’s a good thing if an artist takes the picture of benzene and interprets it completely differently. If anything, ‘anyone’ should be restricted to the chemical — or maybe the entire scientific — community, but there I can see the interpretation safely, regardless of whether aromaticity is drawn or not. | {
"domain": "chemistry.stackexchange",
"id": 3412,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, aromatic-compounds",
"url": null
} |
c#, beginner, battleship
protected bool IsHit(int shot) {
shipIsHit = shot == shipPosition;
return shipIsHit;
}
The do while loop:
do
{
Console.WriteLine("\n\nEnter a number between {0} and {1}", game.lowLimt, game.highLimit);
Console.WriteLine("Board: \n{0}", game.board);
Console.WriteLine(game.Fire(Console.ReadLine()));
}
while (!game.shipIsHit);
Maybe Fire() should return something that has both a 'is hit' boolean and a user message. Maybe an out parameter will suffice. I really do not like that the driver has to know about an internal state property.
Internal to Battleship: Create a Board class to wrap up all those lose board-specific properties and give them context.
Note the evolving structure
The code is simpler overall! Fire() reads very high level and it's simple too, nice! This is the invisible hand of OO programming at work.
end Edit
Now What?
Where/what do you want this to evolve to? Not much else to say w/out that vision. Nonetheless here's my "first thoughts":
Hit vs Sunk
The do while should end with the ship sunk. This is a good place to start evolving the code because it addresses the core state management of the game. Getting core fundamentals right profoundly affects the entire app code and structure. And "hit vs sunk" necessarily motivates structural changes of the game's objects. My immediate thought is the ship itself.
Firing
Encapsulate the random number generation into a method that exposes the idea of firing the gun.
I think there is no point of a particular ship firing. That may be obvious from actually playing "battleship" but these sorts of things must be explicit.
Ship Class
Will have to know how many hits it can take and how many it has taken. The fact that a ship has 1 or 20 "hit points" should not be exposed in the loop or anywhere else. The ship may know where it is in the ocean (every ship has a navigator doesn't it?) - so it can tell if it got hit. | {
"domain": "codereview.stackexchange",
"id": 29887,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, beginner, battleship",
"url": null
} |
confused where to start…like how to define w and where to go from there. MATLAB Programming. Whenever a plot is drawn, a title and labels for the x axis and y axis are required. This MATLAB function returns the phase angle in the interval [-π,π] for each element of a complex array z. r = randi ( [10 50],1,5) r = 1×5 43 47 15 47 35. To create complex variables z1 = 7+j and z2 = 2e^(jπ) simply enter z1 = 7 + j z2 = 2*exp(j*pi) Table 2 gives an overview of the basic functions for. In linear algebra of MATLAB we call these scalars. It works for many languages including MATLAB, the choice of this class. The size and data type of the output array is the same as. m - Matlab; Visual Complex Function Links; Complex Function Grapher. MATLAB has all the standard scalar arithmetic operators for real and complex numbers: Unless redefined by the user, 'i' and 'j' are special constants referring to the principal square root of -1. The angle must be converted to radians when entering numbers in complex exponential form: >> x = 2*exp(j*45*pi/180). You clicked a link that corresponds to this MATLAB command: Run the command by entering it in the MATLAB Command Window. To derive an expression for the indefinite integral of a function, we write − For example, from our previous example − syms x int(2*x) MATLAB executes the above statement and returns the following result − In this example, let us find the integral of some commonly. The numerical scope of MATLAB is very wide. Doing length(y) is the same as fs*T (where T is the length of the acquisition in time). You can plot your results. Plot the spectrum Suppose I want to plot the spectrum of the first 0. 1 Solving a basic differential equation 15. If you pass multiple complex arguments to plot, such as plot(z1,z2), then MATLAB® ignores the imaginary parts of the inputs and plots the real parts.
Among others, see the Complex Numbers core MATLAB and the Symbolic Math Toolbox Complex Numbers documentation. a = rand + | {
"domain": "freccezena.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9719924802053234,
"lm_q1q2_score": 0.8162151000011526,
"lm_q2_score": 0.8397339656668286,
"openwebmath_perplexity": 660.3944330768784,
"openwebmath_score": 0.5779609084129333,
"tags": null,
"url": "http://aapo.freccezena.it/how-to-plot-complex-numbers-in-matlab.html"
} |
operators, density-operator, time-evolution, open-quantum-systems
Updated Question: How to prove that $e^{-i[H,\bullet]t}=e^{-iHt}\bullet e^{iHt}$?
The background to my question is that the dynamics of a density matrix is given by the von Neumann equation $\dot{\rho} = -i [H, \rho] = \mathcal{H}[\rho]$ with the solution $\rho(t)=\mathcal{U}(t)[\rho(0)]=e^{\mathcal{H}[\bullet]t}\rho(0)$. We can identify the exponential with the time evolution superoperator, i.e. $\mathcal{U}(t)=e^{\mathcal{H}[\bullet]t}$. On the other hand, we know that the solution to the von Neumann equation can be expressed in Hilbert space using the time evolution matrix as $\rho(t)=U(t)\rho(0)U^\dagger(t)$ with $U(t)=e^{-iHt}$. Clearly both ways of looking at the problem should coincide, so that $\mathcal{U}(t)[\bullet] = U(t)\bullet U^\dagger(t)$ or explicitly $e^{-i[H,\bullet]t}=e^{-iHt}\bullet e^{iHt}$. It is this last equation where I do not see its validity. This is an answer to the updated question. As was first pointed out by Mark Mitchison, one can express the action of $e^{-i[H,\bullet]t}$ on an operator as a combination of left- and right-acting matrices. To that end define $\mathcal{L}(A)[\rho] = A\rho$ and $\mathcal{R}(A)[\rho] = \rho A$ as the left (right) acting superoperator. We need three properties: | {
"domain": "physics.stackexchange",
"id": 76445,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "operators, density-operator, time-evolution, open-quantum-systems",
"url": null
} |
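The claimed identity can also be checked numerically by writing $-i[H,\bullet]t$ as a $9\times 9$ matrix acting on the column-stacked $\mathrm{vec}(\rho)$, using the Kronecker identities $\mathrm{vec}(H\rho)=(I\otimes H)\,\mathrm{vec}(\rho)$ and $\mathrm{vec}(\rho H)=(H^T\otimes I)\,\mathrm{vec}(\rho)$. A sketch with a random Hermitian $H$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Random 3x3 Hermitian H and a density-matrix-like rho (Hermitian, trace 1).
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (A + A.conj().T) / 2
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = B @ B.conj().T
rho /= np.trace(rho)

t = 0.7
I = np.eye(3)

# Superoperator -i[H, .]t as a 9x9 matrix on vec(rho), column-stacking:
# vec(H rho) = (I kron H) vec(rho),  vec(rho H) = (H^T kron I) vec(rho).
L = -1j * t * (np.kron(I, H) - np.kron(H.T, I))
lhs = (expm(L) @ rho.reshape(-1, order="F")).reshape(3, 3, order="F")

# e^{-iHt} rho e^{+iHt}
U = expm(-1j * t * H)
rhs = U @ rho @ U.conj().T

print(np.allclose(lhs, rhs))  # True
```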
Let's choose a system with parameters that give us some degree of freedom. An example of a system that gives us four critical points and some degree of freedom is (there are other choices, this is not unique):
$$\tag 1 x' = a x + b x^2 + c x y = x(a + bx + cy)\\ y' = d y + e y^2 + f x y = y(d+ey + fx)$$
First thing we need to do is to find the critical points (recall, we need exactly four of them).
• $x = 0 \rightarrow y = 0$ or $y = -\dfrac{d}{e}$
• $y = 0 \rightarrow x = 0$ or $x = -\dfrac{a}{b}$
• $x \ne 0, y \ne 0 \rightarrow x = \dfrac{cd -ae}{be - cf}, y = \dfrac{af -bd}{be - cf},~~\text{with}~~ c \ne 0, be \ne cf$
Thus, our four critical points are (note the constraints above):
$$(0,0), ~\left(0, -\dfrac{d}{e}\right), ~\left(-\dfrac{a}{b}, 0\right), \left(\dfrac{cd -ae}{be - cf}, \dfrac{af -bd}{be - cf}\right)$$
The Jacobian matrix of $(1)$ is given by:
$$\tag 2 \displaystyle J(x, y) = \begin{bmatrix}\frac{\partial x'}{\partial x} & \frac{\partial x'}{\partial y}\\\frac{\partial y'}{\partial x} & \frac{\partial y'}{\partial y}\end{bmatrix} = \begin{bmatrix}a + 2bx + cy & cx \\ fy & d + 2 e y + fx \end{bmatrix}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9790357561234475,
"lm_q1q2_score": 0.8506124731641058,
"lm_q2_score": 0.868826769445233,
"openwebmath_perplexity": 457.9211581220462,
"openwebmath_score": 0.7520902156829834,
"tags": null,
"url": "https://math.stackexchange.com/questions/458105/constructing-a-non-linear-system-with-prerequisites-about-the-nature-of-its-crit?noredirect=1"
} |
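The four critical points and the Jacobian above can be verified symbolically for one concrete parameter choice (the values below are illustrative, picked so that $c \ne 0$ and $be \ne cf$ hold):

```python
import sympy as sp

x, y = sp.symbols("x y")
# Illustrative parameters chosen so that c != 0 and b*e != c*f hold.
a, b, c, d, e, f = 3, -1, -2, 2, -1, -1

xp = x * (a + b * x + c * y)
yp = y * (d + e * y + f * x)

crit = sp.solve([xp, yp], [x, y], dict=True)
print(crit)  # four points: (0,0), (0,2), (3,0) and the mixed point (1,1)

J = sp.Matrix([xp, yp]).jacobian(sp.Matrix([x, y]))
for pt in crit:
    print(pt, J.subs(pt).tolist())
```

The mixed point $(1,1)$ agrees with the formulas $x=\frac{cd-ae}{be-cf}$, $y=\frac{af-bd}{be-cf}$ for this parameter choice.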
rviz, moveit, pr2
Title: Planning with moveit and collisions with the scene (moveit rviz plugin)
Hi all!
I have installed moveit! and I have now a little experience with the rviz plugin.
My aim is to compare some planning algorithms with two or three scenes, using the PR2 robot. I was able to spawn the PR2 following the tutorials and plan some motions, but after having imported a scene from the "Scene Object" pane, by clicking on the "Import from text" button (like in this tutorial http://moveit.ros.org/wiki/Environment_Representation/Rviz, "Importing from Text Files" paragraph), I am not able to tell the planning algorithms to take into account the collisions with the scene. The movements planned are going through the scene objects...
I didn't find anything in the tutorials, but maybe I wasn't looking the right pages.
Any help about making collisions work also with the environment?
Self-collision checking works perfectly...
Thanks in advance!
Originally posted by kir on ROS Answers with karma: 23 on 2013-07-24
Post score: 2
Hey!
The problem is that the scene you are editing is actually the one in Rviz, not the one for the planning algorithm (which runs in a separate process).
You need to publish the scene you changed (imported geometry to) to the move_group node. This can be easily done by clicking the Publish Planning Scene button in the Context tab of the motion planning plugin frame.
From that point on, planning algorithms should use the updated geometry.
Ioan
Originally posted by isucan with karma: 1055 on 2013-07-26
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by kir on 2013-07-27:
Ok, it worked, thanks! :)
I think you can probably help me with another question about the same topic. Here: http://answers.ros.org/question/69397/moveit-benchmark-log-where-is-it/
Thanks a lot! I am really enjoying your work :)
Comment by Qiang on 2013-10-22:
great, This poster help me alot. I also have another question, how can I extract this obstacle information with C++ code, is there tutorial about that?
Qiang | {
"domain": "robotics.stackexchange",
"id": 15036,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rviz, moveit, pr2",
"url": null
} |
phylogenetics, phylogeny, rdkit, matrix
Title: DNA, molecular descriptors Are there any programs (preferably command line tools) for calculating molecular descriptors of DNA? I am looking for something like Chemopy or RDKit but specifically for DNA.
Thanks in advance! Just found the right solution.
iFeatureOmega does exactly what I was looking for. Sequence based and has CLI.
https://ifeatureomega.erc.monash.edu/iFeature2/index.html
Additionally it supports protein, DNA/RNA and also small molecule inputs. | {
"domain": "bioinformatics.stackexchange",
"id": 2325,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "phylogenetics, phylogeny, rdkit, matrix",
"url": null
} |
Gunderson remark about strong induction: While attempting an inductive proof, in the inductive step one often needs only the truth of $S(n)$ to prove $S(n+1)$; sometimes a little more "power" is needed (such as in the proof that any positive integer $n\geq 2$ is a product of primes--we'll explore why more power is needed in a moment), and often this is made possible by strengthening the inductive hypothesis.
Kenneth Rosen remark in Discrete Mathematics and Its Applications Study Guide: Understanding and constructing proofs by mathematical induction are extremely difficult tasks for most students. Do not be discouraged, and do not give up, because, without doubt, this proof technique is the most important one there is in mathematics and computer science. Pay careful attention to the conventions to be observed in writing down a proof by induction. As with all proofs, remember that a proof by mathematical induction is like an essay--it must have a beginning, a middle, and an end; it must consist of complete sentences, logically and aesthetically arranged; and it must convince the reader. Be sure that your basis step (also called the "base case") is correct (that you have verified the proposition in question for the smallest value or values of $n$), and be sure that your inductive step is correct and complete (that you have derived the proposition for $k+1$, assuming the inductive hypothesis that the proposition is true for $k$--or the slightly stronger hypothesis that it is true for all values less than or equal to $k$, when using strong induction).
Statement of weak induction: Let $S(n)$ denote a statement regarding an integer $n$, and let $k\in\mathbb{Z}$ be fixed. If
• (i) $S(k)$ holds, and
• (ii) for every $m\geq k, S(m)\to S(m+1)$,
then for every $n\geq k$, the statement $S(n)$ holds.
Statement of strong induction: Let $S(n)$ denote a statement regarding an integer $n$. If | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9811668712109664,
"lm_q1q2_score": 0.8154900278997056,
"lm_q2_score": 0.8311430520409024,
"openwebmath_perplexity": 243.289736145629,
"openwebmath_score": 0.9403507113456726,
"tags": null,
"url": "https://math.stackexchange.com/questions/1184541/what-exactly-is-the-difference-between-weak-and-strong-induction"
} |
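The product-of-primes example Gunderson cites shows why the extra power matters: factoring $n$ invokes the hypothesis for an arbitrary divisor $d < n$, not just for $n-1$. The corresponding recursion (a sketch):

```python
def factor(n):
    """Return a list of primes whose product is n (for n >= 2)."""
    for d in range(2, n):
        if n % d == 0:
            # Strong induction in action: the hypothesis is invoked for
            # d and n // d, both smaller than n but not necessarily n - 1.
            return factor(d) + factor(n // d)
    return [n]  # no proper divisor, so n is prime: the base of the recursion

print(factor(360))  # [2, 2, 2, 3, 3, 5]
```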
cosmology, energy-conservation, space-expansion, dark-energy, virtual-particles
So in the far future all observers in the universe will have a cosmological event horizon at around 16 billion light years and they will never be able to see farther into the universe than that. However within this distance everything behaves normally.
There is a (wildly speculative) idea that the dark energy density can increase with time eventually driving the Hubble parameter to infinity. This is called the Big Rip. This will in effect tear everything apart and destroy everything, however there is currently no evidence that this will happen.
A couple of final points. Firstly energy is not conserved in the expansion of the universe. This is the case no matter how the universe is expanding and doesn't require dark energy to do anything weird like a Big Rip. See for example: | {
"domain": "physics.stackexchange",
"id": 46449,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, energy-conservation, space-expansion, dark-energy, virtual-particles",
"url": null
} |
file in #6. Approximate the. They are also called "Euclidean coordinates," but not because Euclid discovered them first. We are supposed to convert this function to Cartesian coordinates. To define polar coordinates, we start by identifying a pole or origin, labeled O, usually taken to be the same as the origin in Cartesian coordinates. Press Change to Cartesian coordinates. Converting between spherical and cartesian coordinates. (-3, 2π/3) 5 EX 4 Plot r = 6 sin θ. Precalculus: Polar Coordinates Practice Problems 3. And, these coordinates are directed horizontal and vertical distances along the x and y axes, as Khan Academy points out. Decimal Degrees (DD) Latitude (-90 to 90) and longitude (-180 to 180). Convert to Polar Coordinates (-3,0) Convert from rectangular coordinates to polar coordinates using the conversion formulas. The Laplacian in Spherical Polar Coordinates C. I've received an assignment to investigate the polar coordinate system compared to the cartesian one. In this video, we take a look at the polar coordinate system and derive the expressions for converting between cartesian coordinates and polar coordinates. Precalculus: Polar Coordinates Concepts: Polar Coordinates, converting between polar and cartesian coordinates, distance in polar coordinates. Use degrees for θ. Recall that Hence, The Jacobian is Correction There is a typo in this last formula for J. $r^2 = x^2 + y^2$, $\theta = \arctan\left(\frac{y}{x}\right)$. New, dedicated functions are available to convert between Cartesian and the two most important non-Cartesian coordinate systems: polar coordinates and spherical coordinates. Remember to consider the quadrant in which the given point is located when determining θ for the point. 
Cartesian and Polar Coordinates The Cartesian coordinates of a point in the xy-plane are Cartesian and Polar Coordinates (a) The Cartesian coordinates of a point in the xy-plane are ( x , y ) = (-3. To specify points in space using spherical-polar coordinates, we first choose two convenient, mutually perpendicular reference directions (i and k in the picture). com interactive, accessed 06/2016. This is a familiar problem; recall. A Cartesian coordinate system (UK: / k ɑː ˈ t
"domain": "clandiw.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9875683484150417,
"lm_q1q2_score": 0.8432972016158118,
"lm_q2_score": 0.853912747375134,
"openwebmath_perplexity": 486.12006461400983,
"openwebmath_score": 0.9364537000656128,
"tags": null,
"url": "http://clandiw.it/fvql/conversion-of-cartesian-coordinates-to-polar-coordinates-pdf.html"
} |
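The conversion formulas invoked throughout the passage are compact enough to state directly; a Python sketch, with `atan2` handling the quadrant caveat mentioned at the end:

```python
import math

def to_polar(x, y):
    # atan2 does the quadrant bookkeeping the text warns about.
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

# The worked example "Convert to Polar Coordinates (-3,0)": r = 3, theta = pi.
r, theta = to_polar(-3.0, 0.0)
print(r, theta)  # 3.0 3.141592653589793
```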
acid-base, redox
Title: In acid/base chemistry, are there amphoteric substances that undergo something that resembles disproportionation in redox chemistry Some substances disproportionate. This means a species with an intermediate oxidation state yields two species with higher and lower oxidation state. For example:
$$\ce{Hg2Cl2 -> Hg + HgCl2}$$
My question is if there are substances that have the analogous acid/base behavior, where an intermediate protonation state yields the conjugate acid and the conjugate base. You could write it like this:
$$\ce{2AH -> A- + AH2+}\tag{1}$$
An example would be water:
$$\ce{2H2O <- OH- + H3O+}$$
What I am looking for, however, is a case where the mixture of conjugate acid and the conjugate base (of the same amphoteric starting material) are the major species, i.e. a reaction of the type (1) with an equilibrium constant larger than 1. This would mean that the pKa values are in the opposite order than usual, i.e. it is "easier" to lose the second proton than it is to lose the first. For disproportionation, there is a similar feature of the reduction potentials (it is easier to accept the second electron than it is to accept the first).
Are there any such examples?
Update: To clarify my question, I am looking for substances that skip a protonation state, i.e. pKa2 < pKa1. If you could isolate the intermediate protonation state, it would yield the higher and lower protonation state. This is similar to species that skip an oxidation number (or have the corresponding species at very low concentration) because the reduction potentials favor disproportionation of the (hypothetical or barely detectable) species with intermediate oxidation state. As I mentioned in the comments, I know of one case where a polyprotic acid has a second proton which is more acidic than the first: the aqueous pervanadyl complex, $\ce{[VO_2(H_2O)_4]^+}$. According to Wikipedia,
$$
\begin{array}{rcl} | {
"domain": "chemistry.stackexchange",
"id": 17627,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "acid-base, redox",
"url": null
} |
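The condition "equilibrium constant larger than 1" can be made quantitative: for $\ce{2AH -> A- + AH2+}$ one finds $K = K_{a2}/K_{a1} = 10^{\,\mathrm{p}K_{a1}-\mathrm{p}K_{a2}}$, so $K>1$ exactly when $\mathrm{p}K_{a2} < \mathrm{p}K_{a1}$, matching the inverted-order criterion in the question. A sketch (the pKa values below are placeholders, not measured data):

```python
def disproportionation_K(pKa1, pKa2):
    # 2AH -> A- + AH2+ combines (AH -> A- + H+, K = Ka2) with
    # (AH + H+ -> AH2+, K = 1/Ka1), so K = Ka2/Ka1 = 10**(pKa1 - pKa2).
    return 10.0 ** (pKa1 - pKa2)

print(disproportionation_K(7.0, 4.0))  # 1000.0 (inverted order pKa2 < pKa1: K >> 1)
print(disproportionation_K(4.0, 7.0))  # ~1e-3  (usual order: K << 1)
```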
rospack
Sorry I don't have enough points to upload a screenshot of the console. This is an empty package with no added code yet but my understanding is that it should still compile. I ran catkin_make and then the second time I did a whitelist to just compile the picar package. I see the folder and get no error message from catkin_make but roscd and rospack both don't see it and I get No Such Package errors. Am I missing a step?
-- +++ processing catkin package: 'picar'
-- ==> add_subdirectory(picar)
-- Configuring done
-- Generating done
-- Build files have been written to: /home/pi/ros_catkin_ws/build
####
#### Running command: "make -j4 -l4" in "/home/pi/ros_catkin_ws/build"
####
pi@raspberrypi:~/ros_catkin_ws $ rospack depends1 picar
[rospack] Error: no such package picar
pi@raspberrypi:~/ros_catkin_ws $ roscd picar
roscd: No such package/stack 'picar'
pi@raspberrypi:~/ros_catkin_ws $ ls
build build_isolated devel devel_isolated noetic-ros_comm-wet.rosinstall src
pi@raspberrypi:~/ros_catkin_ws $ cd src
pi@raspberrypi:~/ros_catkin_ws/src $ ls
catkin CMakeLists.txt gencpp genlisp gennodejs message_generation picar ros ros_comm_msgs roscpp_core roslisp std_msgs
class_loader cmake_modules geneus genmsg genpy message_runtime pluginlib ros_comm rosconsole ros_environment rospack
pi@raspberrypi:~/ros_catkin_ws/src $ cd picar | {
"domain": "robotics.stackexchange",
"id": 35552,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rospack",
"url": null
} |
r
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA), ELTname = c("Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team",
"Company Top Management Team", "Company Top Management Team", | {
"domain": "codereview.stackexchange",
"id": 32530,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "r",
"url": null
} |
mvc, scala, thread-safety
else if (reachEX.contains(twilioEx.getErrorCode))
Ok(Json.obj("response" -> "unreachableNumber"))
else {
Future { models.LogFile.errorLogs(Json.obj("route" -> request.tags, "post" -> request.body.asFormUrlEncoded.toString, "address" -> request.remoteAddress, "error" -> twilioEx).toString) }
Ok(Json.obj("response" -> "error"))
}
case e: Exception =>
println(e)
Future { models.LogFile.errorLogs(Json.obj("route" -> request.tags, "post" -> request.body.asFormUrlEncoded.toString, "address" -> request.remoteAddress, "error" -> e.toString).toString) }
Ok(Json.obj("response" -> "error"))
}
}
As I'm new to Scala, I don't feel I could generate the appropriate code, but I wanted to make a couple of points.
First, your method is doing way too much. I see validation, I see messing with a request object, I see sending the SMS, I see response generation. From my point of view, when looking at this method, you can't tell what is going on without a lot of effort. So my recommendation would be to add more methods. If I were writing this, I would have an outer method that abstracts away the HTTP portions of your code
"domain": "codereview.stackexchange",
"id": 8830,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mvc, scala, thread-safety",
"url": null
} |
image-processing, template-matching
Title: Distance Transform & Chamfer Matching -- find actual edges I have had good success using the Distance Transform and Chamfer Matching to locate a template in an edge image. Is there a recommended technique to then find the actual edges that match the template?
For example, in the image below, the template is shown in blue at the location found by chamfer matching. The image edges are shown in red and green, where the green edges are the edges that I would like to extract because they best match the template.
I have tried techniques involving finding the Euclidean distance from every edge pixel to the template edge and taking the edge with the smallest total distance, but this technique often finds incorrect edges. I think I need something that compares the "shape" of the edges with the shape of the template edges and am wondering if there is a standard technique. Computing the Hausdorff distance (HD) from each template edge to each image edge partially or wholly contained in a bounding box surrounding the matched template and then selecting those edges with an HD less than a threshold extracts the desired edges. | {
"domain": "dsp.stackexchange",
"id": 12524,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "image-processing, template-matching",
"url": null
} |
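The selection rule in the answer (keep an image edge whose Hausdorff distance to the template is below a threshold) can be sketched with SciPy's `directed_hausdorff`; the point sets and threshold below are toy illustrations:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Template edge and two candidate image edges as (N, 2) point sets.
template = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
near_edge = template + np.array([0.05, 0.1])   # tracks the template closely
far_edge = np.array([[0.0, 5.0], [3.0, 5.0]])  # an unrelated edge

threshold = 0.5  # illustrative; would be tuned per application
results = {}
for name, edge in (("near", near_edge), ("far", far_edge)):
    # Symmetric Hausdorff distance = max of the two directed distances.
    hd = max(directed_hausdorff(edge, template)[0],
             directed_hausdorff(template, edge)[0])
    results[name] = hd
    print(name, hd, hd < threshold)
```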
Since $\pi\csc(\pi z)$ has residue $(-1)^n$ at $z=n$ for $n\in\mathbb{Z}$, we will use the contours $$\gamma_\infty=\lim\limits_{R\to\infty}Re^{2\pi i[0,1]}\qquad\text{and}\qquad\gamma_0=\lim\limits_{R\to0}Re^{2\pi i[0,1]}$$ To sum over all $n\in\mathbb{Z}$ except $n=0$, we use the difference of the contours, which circles the non-zero integers once counter-clockwise. \begin{align} 2\sum_{n=1}^\infty\frac{(-1)^n}{n^2} &=\frac1{2\pi i}\left(\int_{\gamma_\infty}\frac{\pi\csc(\pi z)}{z^2}\mathrm{d}z-\int_{\gamma_0}\frac{\pi\csc(\pi z)}{z^2}\mathrm{d}z\right)\\ &=\color{#C00000}{\frac1{2\pi i}\int_{\gamma_\infty}\frac{\pi\csc(\pi z)}{z^2}\mathrm{d}z}-\operatorname*{Res}_{z=0}\left(\color{#00A000}{\frac{\pi\csc(\pi z)}{z^2}}\right)\\ &=\color{#C00000}{0}-\operatorname*{Res}_{z=0}\left(\color{#00A000}{\frac1{z^2}\frac\pi{\pi | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9915543728093005,
"lm_q1q2_score": 0.8781517515825661,
"lm_q2_score": 0.8856314647623016,
"openwebmath_perplexity": 454.77495379795647,
"openwebmath_score": 0.8262123465538025,
"tags": null,
"url": "https://math.stackexchange.com/questions/1074582/evaluating-sums-using-residues-1n-n2"
} |
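The contour argument evaluates $\sum_{n\ge 1}(-1)^n/n^2 = -\pi^2/12$; a quick numerical sanity check of the partial sums:

```python
import math

# Partial sum of sum_{n>=1} (-1)^n / n^2; for an alternating series with
# decreasing terms the error is below the first omitted term, 1/(N+1)^2.
N = 10_000
s = sum((-1) ** n / n**2 for n in range(1, N + 1))

print(s, -math.pi**2 / 12)  # both are approximately -0.822467
assert abs(s - (-math.pi**2 / 12)) < 1.0 / (N + 1) ** 2
```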
Subsequent to the collision, as the gate swings from the vertical, gravity will exert a moment about the reference axis, so angular momentum about it will start to change.
This is what I come up with
am I correct?
haruspex
Science Advisor
Homework Helper
Gold Member
2020 Award
This is what I come up with View attachment 99291 am I correct?
Yes, that's the right diagram. Now you need to use conservation of angular momentum about the pivot. What is the raven's angular momentum about the pivot before impact?
So the two impact forces of raven and gate are not classified as external forces because they are inside the system (gate + raven), and the only external force is the force provided by the pivot, so linear momentum is not conserved. I think I mixed up the conservation of momentum by dividing the system into the gate and raven, so the force of the gate is balanced, therefore linear momentum is conserved. Thank you.
(1.1)(5)(1.5/2)=(1.1)(-2)(1.5/2)+(4.5)(1.5^2)(angular speed^2)/3
haruspex
Science Advisor
Homework Helper
Gold Member
2020 Award
(1.1)(5)(1.5/2)=(1.1)(-2)(1.5/2)+(4.5)(1.5^2)(angular speed^2)/3
Very nearly right. The angular speed should not be squared (check the dimensions). You probably got mixed up with rotational KE, or maybe centripetal acceleration.
(1.1)(5)(1.5/2)=(1.1)(-2)(1.5/2)+(4.5)(1.5^2)(angular speed)/3 I didn't remember that L = moment of inertia * angular velocity, rather than the angular velocity squared. Thank you very much.
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9324533051062237,
"lm_q1q2_score": 0.8447932106931848,
"lm_q2_score": 0.905989829267587,
"openwebmath_perplexity": 566.4237366805205,
"openwebmath_score": 0.8423534035682678,
"tags": null,
"url": "https://www.physicsforums.com/threads/linear-and-angular-momentum-on-a-wooden-gate.867402/"
} |
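Plugging numbers into the corrected balance (angular momentum is $I\omega$, with $\omega$ to the first power) gives roughly $1.71\ \text{rad/s}$:

```python
# Conservation of angular momentum about the hinge: L = I * omega
# (omega to the first power; omega**2 would have the wrong dimensions).
m, v_in, v_out = 1.1, 5.0, -2.0  # raven mass (kg) and velocities (m/s)
M, width = 4.5, 1.5              # gate mass (kg) and width (m)
r = width / 2                    # raven strikes the middle of the gate

I_gate = M * width**2 / 3        # uniform gate rotating about one edge

# m*v_in*r = m*v_out*r + I_gate*omega
omega = (m * v_in * r - m * v_out * r) / I_gate
print(omega)  # approximately 1.711 rad/s
```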
Since the sequence is decreasing it is bounded above by 1, and because $x_n\geq0$ the sequence is bounded below by 0.
The boundedness and monotonicity of the sequence implies that a limit exists:
Let $\lim x_n=x$. Because $\lim x_n=\lim x_{n+1}$, $$x=\frac{x}{x+1}\iff x(x+1)=x\iff x^2=0\implies x=0$$ So $0$ is the limit.
I'm not sure if there are problem in the work that I did and any help would be greatly appreciated.
I don't know if I should start a new question for this, so I've included it here anyway: Since one was able to tell that $\lim x_n=\lim \frac{1}{1+n}$ for the above sequence, should one try to do the same thing with the sequence $x_{n+1}= \frac{(x_n)^2}{x_n+1}$, $x_o=1$?
-
You could also show that your sequence is $x_n=\frac{1}{n}$. – Baby Dragon Mar 16 '13 at 20:45
Also note that your initial condition $x_0$ does not matter as long as $x_0=-1$ is avoided. – Baby Dragon Mar 16 '13 at 20:52
The $x_o=1$ was given to me in the question. Why doesn't it matter? – user66807 Mar 16 '13 at 20:58
@user66807 because the sequence is $x_n = \frac{1}{n+1}$. This sequence converges, irrespective of the choice of $x_0$. All you need is: $x_0 > 0$. (Some definitions of naturals numbers are the positive integers anyhow) . – Rustyn Mar 16 '13 at 21:01
@Rustyn Yazdanpour Oh I see, makes sense now. Thanks. – user66807 Mar 16 '13 at 21:04 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.98409360595565,
"lm_q1q2_score": 0.809111068376698,
"lm_q2_score": 0.8221891327004132,
"openwebmath_perplexity": 291.72007163800174,
"openwebmath_score": 0.9358304142951965,
"tags": null,
"url": "http://math.stackexchange.com/questions/332227/did-i-compute-the-limit-of-of-the-sequence-x-n1-fracx-nx-n1-x-o-1-pr"
} |
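The closed form $x_n = \tfrac{1}{n+1}$ pointed out in the comments is easy to confirm with exact arithmetic:

```python
from fractions import Fraction

# Iterate x_{n+1} = x_n / (x_n + 1) from x_0 = 1 in exact arithmetic.
x = Fraction(1)
for n in range(10):
    assert x == Fraction(1, n + 1)  # the closed form 1/(n+1)
    x = x / (x + 1)

print(x)  # 1/11
```

The inductive step is one line: if $x_n = \tfrac{1}{n+1}$, then $x_{n+1} = \tfrac{1/(n+1)}{1/(n+1)+1} = \tfrac{1}{n+2}$.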
electromagnetism, magnetic-fields, electric-fields, electrons, electric-current
Title: What direction is current for an electron in both $E$ and $B$ fields? According to Fleming's left hand rule:
I am a bit confused however in the case of an electron travelling in a region of both electric and magnetic fields, which direction would the electric current be?
I know that for a current carrying conductor, the current is opposite to electron flow, does that mean that in the case of a single electron the current would be in the opposite direction to the motion of the electron? No matter what fields exist in space, electric current is always in the direction opposite to the direction of motion of e-, because e- are negatively charged particles.
However, current is not defined for a single e-, because current is a continuous flow of charges. But if we have to tell the direction of current anyways in case of a single e-, we would say it is in the opposite direction.
In the image attached, electric current means that if a positively charged particle is moving in that direction, B is in that direction, then Force will be in this direction.
If an e- is moving in the same direction as the positively charged particle was moving, then the direction of the force will be rotated 180 degrees
"domain": "physics.stackexchange",
"id": 73715,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, magnetic-fields, electric-fields, electrons, electric-current",
"url": null
} |
java, sorting
This is a bubblesort algorithm. However, in some places you use the significantly more effective Collections.sort method. I recommend writing Comparators for the different ways you want to sort, and then using Collections.sort.
Another pattern which I see repeated in your code is a loop to add items from one list to another:
for(int i = 0; i<organizableList.size(); i++) {
Something g = organizableList.get(i)....;
if(g.someCondition && g.otherCondition...) {
anotherList.add(organizableList.get(i));
}
}
You can get rid of a lot of code duplication by creating an interface and a method:
public interface FilterInterface<E> {
/**
* @param obj Element to check
* @return True if element should be kept, false otherwise.
*/
boolean shouldKeep(E obj);
}
public static List<Something> addToAnotherList(Collection<Something> list, FilterInterface<Something> filter) {
List<Something> result = new ArrayList<Something>();
for (Something something : list) {
if (filter.shouldKeep(something))
result.add(something);
}
return result;
}
An example usage of this would be:
List<BudgetItem> holder = addToAnotherList(organizableList, new FilterInterface<BudgetItem>() {
@Override
public boolean shouldKeep(BudgetItem item) {
GregorianCalendar g = item.getDateOfTransaction();
return g.equals(dayOne) || g.equals(dayTwo) || (g.after(dayOne) && g.before(dayTwo));
}
});
Many of your methods are both returning and modifying the organizableList array. This feels strange to me. It can be more preferable to create a copy of the list and sort it in a specific way and return that. | {
"domain": "codereview.stackexchange",
"id": 6804,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, sorting",
"url": null
} |
# What are the elements of $k[X,Y]/(X^2-Y^3)$ like?
What are the elements in $k[X,Y]/(X^2-Y^3)$ like, where $k$ is a field?
For example, in $k[X]/(X^2+2X+3)$, all elements have degree lower than $2$. But I can't quite figure out the multi-variable case.
My first guess was that we could treat $Y$ as a constant and ensure that all the elements of $k[X,Y]/(X^2-Y^3)$ had their degree in $X$ less than $2$. $Y$ obviously could then have any degree.
But then again, we could also ensure that all elements had their degree in $Y$ less than $3$, letting $X$ take any degree.
Having two representations for the elements of $k[X,Y]/(X^2-Y^3)$ sounds a little spurious to me.
Any help would be much appreciated. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9857180690117798,
"lm_q1q2_score": 0.8192727222688905,
"lm_q2_score": 0.8311430499496096,
"openwebmath_perplexity": 130.0909261256881,
"openwebmath_score": 0.9203822016716003,
"tags": null,
"url": "https://math.stackexchange.com/questions/658352/what-are-the-elements-of-kx-y-x2-y3-like/658357"
} |
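Both representations are legitimate: they are the normal forms produced by dividing by $X^2-Y^3$ under two different monomial orders (one order eliminates $X^2$, the other $Y^3$). A sketch of the first one, using multivariate division in sympy:

```python
import sympy as sp

X, Y = sp.symbols("X Y")

def normal_form(p):
    # Divide by X^2 - Y^3 with lex order X > Y: the leading term is X^2,
    # so every remainder has X-degree < 2 (the first representation above).
    _, r = sp.reduced(p, [X**2 - Y**3], X, Y, order="lex")
    return sp.expand(r)

print(normal_form(X**2))          # Y**3
print(normal_form(X**3 + X*Y))    # X*Y**3 + X*Y
print(normal_form(X**4))          # Y**6
```

Switching the generator order so that $Y^3$ becomes the leading term yields the other representation; the two normal forms are tied together by the single relation $X^2 = Y^3$.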