phase-transition, density, weight, matter, density-of-states
Title: What happens to weight when ice melts? A block of ice is weighed in a container. Then, it is left out to melt. Would the weight of the water be greater, less than, or equal to the ice?
I know that it has something to do with density and volume, but I'm not sure how. The internal energy of water is 80 calories per gram higher than that of ice, which represents a finite but incredibly small mass increase, as is clear from Einstein's relation E=mc$^2$. Otherwise the mass is constant. The weight depends also on the gravitational field, which you can assume to be constant over the volume of ice and water. All considered, the weight should be the same to a very high accuracy. | {
"domain": "physics.stackexchange",
"id": 70135,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "phase-transition, density, weight, matter, density-of-states",
"url": null
} |
c#
if (category == all && status != all)
{
predicate = model =>
IsPresent(model.Product, product) &&
IsPresent(model.Status, status);
}
else if (category != all && status == all)
{
predicate = model =>
IsPresent(model.Product, product) &&
IsPresent(model.Category, category);
}
else if (category == all && status == all)
{
predicate = model =>
IsPresent(model.Product, product);
}
return concessionList.Where(predicate).ToList();
}
predicate: I've defined a function variable (Func<TInput, TOutput>).
It receives a ConcessionModel (as TInput)
and returns a bool (as TOutput).
So I have defined a function whose implementation varies based on the input parameters.
We use this function as a filter condition (Where) against the data source (concessionList).
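To illustrate the same pattern in a language-agnostic way (sketched in Python rather than C#; all names and sample data here are invented for the example, not taken from the original code), a predicate is just a function-valued variable chosen from the filter parameters and then applied to the data source:

```python
# Illustrative sketch: pick a predicate (a function returning bool) based on
# the filter parameters, then apply it to the data source, like Where(predicate).

ALL = "all"

def is_present(value, wanted):
    """Hypothetical stand-in for the IsPresent helper: match unless 'all'."""
    return wanted == ALL or value == wanted

def build_predicate(product, category, status):
    """Return a function model -> bool, mirroring the branching above."""
    if category == ALL and status != ALL:
        return lambda m: is_present(m["product"], product) and is_present(m["status"], status)
    if category != ALL and status == ALL:
        return lambda m: is_present(m["product"], product) and is_present(m["category"], category)
    return lambda m: is_present(m["product"], product)

concessions = [
    {"product": "coffee", "category": "drink", "status": "open"},
    {"product": "coffee", "category": "drink", "status": "closed"},
]
predicate = build_predicate("coffee", ALL, "open")
filtered = [m for m in concessions if predicate(m)]  # analogue of Where(predicate)
```

The point is the same as in the C# above: the filtering logic is selected once, up front, and the data source is traversed with a single, already-chosen condition.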
Here I have used the short form, but could be written like this:
return concessionList.Where(model => predicate(model)).ToList(); | {
"domain": "codereview.stackexchange",
"id": 40743,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#",
"url": null
} |
# GLM: effect of link function on choice of transformation of covariate
It struck me that if I have data of the form below,
library(data.table)
dt = data.table(xxx = rep(1:50, 10))
dt[, y := rgamma(500, 1.1 * xxx + runif(500)/2)]
with(dt, plot(xxx, y))
, that before I would have thought "ah, this is a linear effect, so I should include a linear effect in my Gamma-regression". But in Gamma-regression one uses the log-link, so any prediction from my model is transformed via the exponential function $$\hat y_i= \exp(\beta_0 + \beta_1 xxx_i),$$
which suggests that the best model would instead have a logarithmic transformation of the covariate $$xxx$$. When I check the Gamma deviance, this seems to be correct:
library(mgcv)
model = mgcv::gam(y ~ xxx, family = Gamma(link = "log"), data = dt)
model2 = mgcv::gam(y ~ log(xxx), family = Gamma(link = "log"), data = dt)
dt[, pred := predict(model, type ="response", newdata=dt)]
dt[, pred2 := predict(model2, type ="response", newdata=dt)]
dt[, dev := (y - pred)/pred - log(y/pred)]
dt[, dev2 := (y - pred2)/pred2 - log(y/pred2)]
dt[, list(sum(dev), sum(dev2))]
Deviance for model1: 43.9819, and for model2: 19.53396.
Am I losing my mind after a long summer break, or am I thinking correctly? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.975576912786245,
"lm_q1q2_score": 0.8212656718570915,
"lm_q2_score": 0.8418256532040708,
"openwebmath_perplexity": 908.1852487230227,
"openwebmath_score": 0.9944055080413818,
"tags": null,
"url": "https://stats.stackexchange.com/questions/481827/glm-effect-of-link-function-on-choice-of-transformation-of-covariate"
} |
javascript, jquery, angular.js
// Function to manage resize up event
var resizeUp = function($event) {
var margin = 50,
lowest = $mouseDown.top + $mouseDown.height - margin,
top = $event.pageY > lowest ? lowest : $event.pageY,
height = $mouseDown.top - top + $mouseDown.height;
$element.css({
top: top + "px",
height: height + "px"
});
};
// Function to manage resize right event
var resizeRight = function($event) {
var margin = 50,
leftest = $element[0].offsetLeft + margin,
width = $event.pageX > leftest ? $event.pageX - $element[0].offsetLeft : margin;
$element.css({
width: width + "px"
});
};
// Function to manage resize down event
var resizeDown = function($event) {
var margin = 50,
uppest = $element[0].offsetTop + margin,
height = $event.pageY > uppest ? $event.pageY - $element[0].offsetTop : margin;
$element.css({
height: height + "px"
});
};
// Function to manage resize left event
var resizeLeft = function($event) {
var margin = 50,
rightest = $mouseDown.left + $mouseDown.width - margin,
left = $event.pageX > rightest ? rightest : $event.pageX,
width = $mouseDown.left - left + $mouseDown.width;
$element.css({
left: left + "px",
width: width + "px"
});
};
var createResizer = function createResizer( className , handlers ){
var newElement = angular.element( '<div class="' + className + '"></div>' );
$element.append(newElement);
newElement.on("mousedown", function($event) {
$document.on("mousemove", mousemove);
$document.on("mouseup", mouseup); | {
"domain": "codereview.stackexchange",
"id": 9507,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, angular.js",
"url": null
} |
quantum-state, measurement
&\geq 0 \tag{3.3}\,.
\end{align}
The measurement outcome is the index $k$ of the state that resulted.
Assume that if you perform the measurement corresponding to $\{ M_1, M_2, \cdots, M_k \}$ on the state $\ket{\psi}$, then there are $k$ different outcomes which can occur. Any of these outcomes occurs at random, with probability given by Eq. $(3.1)$.
\begin{equation}
\ket{\psi}\xrightarrow[]{\{ M_1,M_2, \cdots, M_k \}}\begin{cases}
\ket{\psi_1}, & \text{with probability $p_1$ ----- Getting outcome 1}\\
\ket{\psi_2}, & \text{with probability $p_2$ ----- Getting outcome 2}\\
\vdots\\
\ket{\psi_k}, & \text{with probability $p_k$ ----- Getting outcome $k$}
\end{cases}
\end{equation}
You can think of these outcomes as some pointer device in your laboratory. If you have $k$ different possible outcomes for the measurement, your apparatus in the lab will point to one of the $k$ possible points; you can then say that outcome $k$ has happened and deduce that the state of your system is now $\ket{\psi_k}$.
So, in general, these operators $\{M_k\}$ can be any linear operators, i.e. matrices, which satisfy the condition given in Eq. $(1)$.
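As a small numerical illustration (not from the original answer; the operators and state below are chosen arbitrarily), one can check the completeness condition and compute the outcome probabilities $p_k = \langle\psi|M_k^\dagger M_k|\psi\rangle$ and post-measurement states with NumPy:

```python
import numpy as np

# Two measurement operators on a single qubit: the computational-basis
# projectors. They satisfy the completeness condition sum_k M_k^† M_k = I.
M1 = np.array([[1, 0], [0, 0]], dtype=complex)
M2 = np.array([[0, 0], [0, 1]], dtype=complex)

psi = np.array([3, 4], dtype=complex) / 5.0   # an arbitrary normalized state

completeness = M1.conj().T @ M1 + M2.conj().T @ M2
assert np.allclose(completeness, np.eye(2))

# Outcome probabilities p_k = <psi| M_k^† M_k |psi>
p = [float(np.real(psi.conj() @ Mk.conj().T @ Mk @ psi)) for Mk in (M1, M2)]
assert np.allclose(p, [0.36, 0.64])

# Post-measurement states |psi_k> = M_k |psi> / sqrt(p_k)
post = [Mk @ psi / np.sqrt(pk) for Mk, pk in zip((M1, M2), p)]
```

For this projective example the post-measurement states are just the basis states, but the same three lines work for any set of operators satisfying the completeness condition.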
However, there are special cases of this general mathematical framework of generalized measurements where, in each special case, apart from Eq. $(1)$, there are some additional conditions on the measurement operators. For example, in projective (von Neumann) measurements, the measurement operators form a complete set of orthogonal projectors. | {
"domain": "quantumcomputing.stackexchange",
"id": 5451,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-state, measurement",
"url": null
} |
hydrocarbons, catalysis, carbon-dioxide
Title: Is there any chemical that can catalyze "reverse combustion?" I am trying to build a machine that converts carbon dioxide and water into gasoline (i.e. octane). Because the energy needs are so large, is there any catalyst chemical that can cause "reverse combustion" at room temperature? In principle, a catalyst accelerates the forward and reverse reactions
to the same extent, so the equilibrium with and without the catalyst is the same.
So a catalyst for "reverse combustion" would catalyze normal combustion as well. The equilibrium would still lie far on the side of the combustion products, just as if no catalyst were used.
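A toy numerical illustration of this point (the rate constants and speed-up factor below are invented for the sketch): a catalyst multiplies the forward and reverse rate constants by the same factor, so the equilibrium constant $K = k_f/k_r$ is unchanged:

```python
# Toy illustration: a catalyst speeds up both directions equally,
# leaving the equilibrium constant K = kf / kr untouched.
kf, kr = 2.0e-3, 5.0e-9       # invented forward/reverse rate constants
K_uncatalyzed = kf / kr

speedup = 1.0e6               # hypothetical catalytic acceleration factor
K_catalyzed = (kf * speedup) / (kr * speedup)

assert K_catalyzed == K_uncatalyzed  # equilibrium position is identical
```

The catalyst changes how fast equilibrium is reached, never where it lies.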
If this were not so, one could construct a chemical perpetuum mobile: it would generate energy by shifting the equilibrium forward and backward simply by adding and removing the catalyst. | {
"domain": "chemistry.stackexchange",
"id": 13149,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "hydrocarbons, catalysis, carbon-dioxide",
"url": null
} |
and we only need to find appropriate bounds for the difference $$\sum_{j=0}^{m-1}\cos^k\left(\tau+\frac{2\pi j}{m}\right)-\frac{m}{2}\int_{0}^{2\pi}\cos^k(x)\,dx$$ when $k$ is even and greater than or equal to $m$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9674102552339746,
"lm_q1q2_score": 0.804056310087696,
"lm_q2_score": 0.8311430499496096,
"openwebmath_perplexity": 276.7880429427188,
"openwebmath_score": 0.773747444152832,
"tags": null,
"url": "https://math.stackexchange.com/questions/502529/how-find-the-range-of-m"
} |
programming, qiskit, circuit-construction, quantum-fourier-transform
$$\frac{1-\mathrm{i}}{2}|01\rangle+\frac{1+\mathrm{i}}{2}|11\rangle$$
which explains why we measured these states with equal probability. On the other hand, the textbook version yields the state $|10\rangle$.
Thus, the problem is that you've created the big-endian/textbook version of the QPE algorithm, while qiskit works with little-endian. The fix one can immediately think of is to change the qubits on which you apply the controlled phase gates. In order to do this, you can just replace the line:
exponent = 2**(n-x-1) | {
"domain": "quantumcomputing.stackexchange",
"id": 3107,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming, qiskit, circuit-construction, quantum-fourier-transform",
"url": null
} |
c#, performance
res.Center = pair.Key;
resList.Add(res);
model.Reservations.Add(new Reservation { ResList = resList });
}
}
}
}
}
}
The custom methods I'm using also contain loops. I wonder if LINQ would be faster, but I don't really know how to use it.
In a nutshell, I want to get all the information about Reservations without losing so much time stuck in so many foreach loops.
Here is my Reservation Class :
public sealed class Reservation
{
public List<Res> ResList { get; set; }
public class Res
{
public XYZ Center { get; set; }
public XYZ RoundCenter { get; set; }
public Duct AssociatedDuct { get; set; }
public Wall AssociatedWall { get; set; }
public double WallWidth { get; set; }
public int Radius { get; set; }
}
public Reservation()
{
ResList = new List<Res>();
}
}
A Curve runs right through the center of a Duct (each Duct contains one Curve), and a Face is a side of a Wall (each Wall contains 6 Faces). I hope I have understood this now.
Instead of iterating over all the Duct items and adding the related Curve to curves on each iteration, you should create another class like
public class DuctCurve
{
public Duct TheDuct {get; private set; }
public Curve TheCurve {get; private set; }
public DuctCurve(Duct duct, Curve curve)
{
TheDuct = duct;
TheCurve = curve;
}
}
now we iterate once over all of the Ducts and find the related Curve, which we will add to a List<DuctCurve> like so
List<DuctCurve> ductCurves = new List<DuctCurve>();
foreach (Duct d in ducts)
{
    ductCurves.Add(new DuctCurve(d, FindDuctCurve(d)));
} | {
"domain": "codereview.stackexchange",
"id": 20486,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance",
"url": null
} |
classical-mechanics, mathematical-physics, hamiltonian-formalism, mathematics, poisson-brackets
References:
V.I. Arnold, Mathematical Methods of Classical Mechanics, 2nd ed., 1989; p. 206.
--
$^1$ Note that different authors give different definitions of a canonical transformation, cf. e.g. this Phys.SE post. | {
"domain": "physics.stackexchange",
"id": 13028,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, mathematical-physics, hamiltonian-formalism, mathematics, poisson-brackets",
"url": null
} |
quantum-mechanics, mathematical-physics, quantum-optics, phase-space, wigner-transform
Any multiple of these polynomials will solve this linear equation.
However, it is practical/convenient to simplify their Rodrigues formula
$$L_n(z)=\frac{e^z}{n!}\partial_z^n \left(e^{-z} z^n\right) =\frac{1}{n!} \left( \partial_z -1 \right)^n z^n,$$
and Sheffer sequence recursive formula,
$$
\tag{7} \partial_z L_n = \left ( \partial_z - 1 \right ) L_{n-1},$$ generating function, etc., as you probably learned from the hydrogen atom. So they are all unity at the origin. Recall from the text that these are all ingredients of Wigner functions $f$ normalized to 1, whence the common normalization; trivially checkable for $n=0$,
$$
1=\int\!\! dxdp ~f(z)= \frac{\pi \hbar }{2}\int_0^\infty\!\! dz ~e^{-z/2} L_n (z)\frac{(-)^n}{\pi \hbar} ~.
$$
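The $n$-independence (up to the alternating sign) of this normalization can be checked numerically; the key integral is $\int_0^\infty e^{-z/2} L_n(z)\,dz = 2(-1)^n$. A quick SciPy sketch (not part of the original answer):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

# Check that ∫_0^∞ e^{-z/2} L_n(z) dz = 2 (-1)^n, which is exactly what
# makes the normalization integral above equal to 1 for every n.
for n in range(6):
    val, _ = quad(lambda z: np.exp(-z / 2) * eval_laguerre(n, z), 0, np.inf)
    assert np.isclose(val, 2 * (-1) ** n)
```

This is the special case $p=1/2$ of the Laplace transform $\int_0^\infty e^{-pz}L_n(z)\,dz = (p-1)^n/p^{n+1}$, which follows from the generating function mentioned above.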
But from n=0 and the Sheffer sequence recursion (7), you may readily check the normalization for n=1, through integration by parts to be a mere change of sign. So, recursively, for all n, you show the alternating sign normalizations. | {
"domain": "physics.stackexchange",
"id": 78253,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, mathematical-physics, quantum-optics, phase-space, wigner-transform",
"url": null
} |
matlab, estimation, linear-algebra, least-squares, parameter-estimation
$$m_n = \frac{\displaystyle\sum_{i=0}^{n-1}(x_i-\overline X)(y_i-\overline Y)}{\displaystyle\sum_{i=0}^{n-1}(x_i-\overline X)^2},\quad n \ge 2,\tag{1}$$
where $\overline X$ is the mean of the $x$ values and $\overline Y$ is the mean of the $y$ values. The problem with Eq. 1 is that the sums are actually nested sums:
$$m_n = \frac{\displaystyle\sum_{i=0}^{n-1}\left(x_i-\frac{\sum_{j=0}^{n-1} x_j}{n}\right)\left(y_i-\frac{\sum_{j=0}^{n-1} y_j}{n}\right)}{\displaystyle\sum_{i=0}^{n-1}\left(x_i-\frac{\sum_{j=0}^{n-1} x_j}{n}\right)^2}, \quad n \ge 2.\tag{2}$$
As $n$ is incremented, the inner sums can be updated recursively (iteratively), but by direct inspection the outer sum would need to be recalculated from the beginning. Least squares is an old and well-studied problem, so we don't try to bang our heads but look elsewhere. Converted to zero-based array indexing, Eqs. 16–21 of the Wolfram page on least squares fitting state:
$$\sum_{i=0}^{n-1}(x_i-\overline X)(y_i-\overline Y) = \left(\sum_{i=0}^{n-1} x_iy_i\right) - n\overline X\overline Y,\tag{3}$$ | {
"domain": "dsp.stackexchange",
"id": 7410,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, estimation, linear-algebra, least-squares, parameter-estimation",
"url": null
} |
ros2, compile, colcon, build, fcl
Title: How to compile rmf_core using colcon build in ROS2 eloquent?
On running the build, the error stated that the fcl package was not found while building the rmf_traffic package.
Source from github: https://github.com/osrf/rmf_core
Starting >>> rmf_utils
Starting >>> rmf_dispenser_msgs
Starting >>> rmf_fleet_msgs
Starting >>> rmf_traffic_msgs
Starting >>> rmf_door_msgs
Starting >>> rmf_lift_msgs
Starting >>> cpp_pubsub
Starting >>> rmf_workcell_msgs
Starting >>> turtlesim
Finished <<< cpp_pubsub [0.28s]
Finished <<< turtlesim [0.95s]
Finished <<< rmf_utils [2.69s]
Starting >>> rmf_traffic
--- stderr: rmf_traffic
CMake Error at /usr/share/cmake-3.10/Modules/FindPkgConfig.cmake:415 (message):
A required package was not found
Call Stack (most recent call first):
/usr/share/cmake-3.10/Modules/FindPkgConfig.cmake:593 (_pkg_check_modules_internal)
CMakeLists.txt:26 (pkg_check_modules)
---
Failed <<< rmf_traffic [ Exited with code 1 ]
Aborted <<< rmf_door_msgs
Aborted <<< rmf_lift_msgs
Aborted <<< rmf_dispenser_msgs
Aborted <<< rmf_workcell_msgs
Aborted <<< rmf_fleet_msgs
Aborted <<< rmf_traffic_msgs | {
"domain": "robotics.stackexchange",
"id": 34397,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros2, compile, colcon, build, fcl",
"url": null
} |
There is a mathematical explanation of this effect, so if you're curious, ask an adult to teach you more about the math I wrote in this blog.
Good luck on your science fair!
-Paul
49. Hi, thanks for your reply. I did get the frame and strung the bobs. The frame he made had holes going through blocks of plexiglass to the top. The holes were 5mm large. The kevlar thread is very thin, so I was getting a rocking inside the holes. So I made the holes smaller by gluing a bead cap (half of a bead thing) on the end. I am having a problem fine-tuning. It goes great for the first 45 secs, then you can tell it is out of sync. Do you think it is because the hole is still allowing movement inside, or is it still not tuned? I don't want to glue the string to the pivot point until I know for sure. He made me two of them, so maybe I will string heavy steel bobs in the other and see which one works best.
1. Hi again,
From your description, it seems like the machine is still slightly untuned. However, you should try to set it up so that you can avoid gluing the string to the pivot point. This way, it is easier to fine-tune whenever you need to.
Cheers,
Paul
50. Hi Paul, I'm hoping to make some of these with my physics students, so cool! A quick question though - are the lengths given the length of the string, or the length of the string + ball? Or would it not make any difference as long as we stick with one or the other?
thanks!
51. Hi Paul,
How does the weight of each ball affect the operation- or does it? I don't see a place for mass of the pendulum in the calculations. It's funny the pendulum equation includes the acceleration of gravity but not the mass of the pendulum.
It seems to me that the higher the mass, the greater the momentum of the ball. Does this affect the time it takes pendulum to run down?
I can get metal balls in a variety of materials and diameters which give a good range of weights.
Richard
1. Hi Richard, | {
"domain": "blogspot.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9925393578601738,
"lm_q1q2_score": 0.8313707785260579,
"lm_q2_score": 0.837619961306541,
"openwebmath_perplexity": 880.1129026177096,
"openwebmath_score": 0.627366840839386,
"tags": null,
"url": "http://hippomath.blogspot.com/2011/06/making-your-own-pendulum-wave-machine.html"
} |
human-biology, measurement
These two curves don't move in the same direction nor the opposite one.
From what could the total body water be determined and is it reliable?
The scale has so few values about me, and I don't see how it could measure total body water. Your scale exploits the difference in electrical properties of water and fat to estimate the composition of your body. This method is called bioelectric impedance analysis (BIA), and it is usually used for estimating body composition. There exist several other, more accurate methods, such as weighing a person under water (hydrostatic weighing), dual-energy X-ray absorptiometry (DXA), CT, and MRI – the last two usually regarded as the gold standard1.
The basic principle of BIA is to use a small electric current through your body and measure its electrical response. Roughly speaking, cell membranes can be regarded as electric capacitors, water as a good conductor, and fat as a bad conductor. In the simplest model, human body is therefore represented as a resistor and a capacitor in a series ($RC$ circuit), and electrical impedance (combination of electrical resistance and electrical capacitance) is measured at a single frequency (e.g. $50\;\mathrm{kHz}$).
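As a rough illustration of the series-$RC$ model (the component values below are invented for the sketch, not typical tissue values), the complex impedance the device measures at its single frequency follows from the standard formula $Z = R + 1/(j\omega C)$:

```python
import numpy as np

# Toy series-RC body model; R and C values are invented for illustration.
R = 500.0        # ohms, conduction through body water
C = 3e-9         # farads, cell-membrane capacitance in the lumped model
f = 50e3         # Hz, a typical single-frequency BIA measurement

omega = 2 * np.pi * f
Z = R + 1 / (1j * omega * C)   # complex impedance of the series RC circuit
print(abs(Z))                  # magnitude, ≈ 1173 ohms for these toy values
```

A real device feeds the measured $|Z|$ (or $R$ and the reactance separately) into a calibration formula like the one below.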
The measurement together with your physical dimensions (height and weight), your age, and your gender – all of which you had to disclose to your bathroom scale – are then put into a BIA formula, which the manufacturer has obtained by calibrating its device with one of the more accurate methods. An example of such formula2 is
$$FFM = (0.34 \times10^4) \frac{h^2}{Z} + 15.34 h + 0.273 m - 0.127t + 4.56 S - 12.44,$$
where $FFM$ is fat-free mass, $h$ body height (in meters), $Z$ electrical impedance (in Ohms), $m$ body mass (in kilograms), $t$ age (in years), and $S=0$ (female), $S=1$ (male). | {
"domain": "biology.stackexchange",
"id": 11810,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "human-biology, measurement",
"url": null
} |
entropy, soft-question, elasticity
Title: What's the difference between elastic energy and entropic elasticity? I was reading up some articles on elasticity theory to make an essay about elastic energy in rubber bands, but in the first paragraph of this article it is said that rubber bands do not show elastic potential energy, but entropic elasticity. I've never seen this before, and since its fundamentally a thermodynamics thing, it takes the research to another field completely. What is the actual difference between entropic elasticity and ordinary elasticity? Does it matter when studying the potential energy of an elastic body? We know that all things occur to maximize entropy. (In other words, the Second Law tells us that we more often see outcomes that are more likely to occur—that is, those outcomes with higher entropy).
As outlined here, energy minimization can act as a surrogate for entropy maximization. We can interpret this as favoring both high entropy and strong bonding, as the latter releases energy that provides the entropy benefit of heating the rest of the universe.
This is all encompassed in the Gibbs free energy, which includes both internal energy and entropy terms.
For a standard metal spring, the internal energy term dominates. As typically modeled using a pair potential, the atomic/molecular spacing is shifted slightly from its (minimum) equilibrium position. (This doesn’t affect the entropy much.) The greater energy upon displacement corresponds to a restoring force that imparts springlike behavior, with linear elasticity and a constant stiffness observed for small displacements.
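To make the pair-potential picture concrete (a sketch using a Lennard-Jones potential in reduced units; this specific potential is my choice of example, not from the original answer), the effective spring constant is simply the curvature of the potential at its minimum:

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6**2 - sr6)

r0 = 2 ** (1 / 6)   # analytic location of the LJ minimum (for sigma = 1)
h = 1e-5
# Curvature of the potential at the minimum = effective spring constant,
# estimated by a central second difference.
k = (lj(r0 + h) - 2 * lj(r0) + lj(r0 - h)) / h**2
print(k)            # ≈ 57.15 in these units (analytically 72 * 2**(-1/3))
```

For small displacements the restoring force is $F \approx -k\,(r - r_0)$, which is exactly the linear, constant-stiffness behavior described above.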
The ideal gas can also resist deformation (specifically, compression), but this material model has no capacity to store energy. To understand the bulk modulus of the ideal gas, we have to return to the Second Law: the drive for entropy maximization produces a restoring force toward the equilibrium configuration of pressure equilibration with the surroundings. In contrast to internal-energy-mediated elasticity, the behavior of an entropic spring is immediately nonlinear (e.g., the isothermal stiffness of the ideal gas is just its pressure), and the stiffness is strongly temperature dependent because the entropy term in the Gibbs free energy has the temperature as its coefficient. | {
"domain": "physics.stackexchange",
"id": 98478,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "entropy, soft-question, elasticity",
"url": null
} |
digital-communications, algorithms, signal-detection, complex
Don't invert that matrix
Now, it's practically never desirable to calculate a matrix inverse, so the thing that you actually solve is
$$\left(H^T H\right)\hat x = H^T y\text.$$
How complex the solution to that is depends on the properties of your MIMO channel matrix!
The square / full-rank MIMO case
In the (very special) case that $H$ is normal (which implies, among other things, that it is square, i.e. there are as many receive as transmit antennas, $N_r=N_t=:N$), you do that by applying a QR decomposition to $H^T H$:
\begin{align}
\left(H^T H\right)\hat x &= H^T y\\
QR \hat x &= H^T y\\
R\hat x &= Q^T H^T y\tag{1}\\
&= \nu y\tag{2}
\end{align}
Since $R$ is a triangular matrix, $(1)$ is the point where you just backsubstitute. Luckily, $R$, $Q^T$ and $H^T$ only need to be calculated once ever, so we can estimate many $\hat x$ from many $y$ at fixed complexity, as long as the channel doesn't change.
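A small NumPy sketch of this pipeline (random real-valued data for simplicity; a real MIMO channel would be complex and use conjugate transposes): factor $H^TH = QR$ once, then recover each $\hat x$ by back-substitution.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
N = 4
H = rng.standard_normal((N, N))   # square, full-rank channel matrix
x = rng.standard_normal(N)        # transmitted vector
y = H @ x                          # noiseless received vector

# One-time work per channel realization:
Q, R = np.linalg.qr(H.T @ H)      # QR decomposition of H^T H
nu = Q.T @ H.T                     # the precomputed matrix ν from (2)

# Per received vector: back-substitution on the triangular system R x̂ = ν y
x_hat = solve_triangular(R, nu @ y, lower=False)

assert np.allclose(x_hat, x)       # recovers x exactly in the noiseless case
```

Only the last two lines run per received vector; everything above them is reused until the channel changes.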
Complexity
You do the usual paper trick of ignoring the fact that you need to acquire $H$ first. That can, and quite possibly will, dominate the complexity of this (so don't be a bad scientist – figure out how you get your $H$ and how complex that is, if you're writing some kind of publication).
So, assuming you already have the $H$, the operations you are doing per channel realization (i.e. only once every time $H$ changes, which probably doesn't happen very often): | {
"domain": "dsp.stackexchange",
"id": 7425,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "digital-communications, algorithms, signal-detection, complex",
"url": null
} |
c, windows, logging, portability
The UpperCase column is the same result as a logical XOR.
However, since we're checking the opposite in the following if statement, we'll also include a NOT operator (!),
because NOT(XOR) gives us the LowerCase column results.
*/
if (!(GetAsyncKeyState(VK_SHIFT) ^ isCapsLock())) // Check if letters should be lowercase
{
vkCode += 32; // Un-capitalize letters
}
sprintf(val, "%c", vkCode);
log(val);
}
else // Every other key
{
switch (vkCode)
// Check for other keys
{
case VK_CANCEL:
log("[Cancel]");
break;
case VK_SPACE:
log(" ");
break;
case VK_LCONTROL:
log("[LCtrl]");
break;
case VK_RCONTROL:
log("[RCtrl]");
break;
case VK_LMENU:
log("[LAlt]");
break;
case VK_RMENU:
log("[RAlt]");
break;
case VK_LWIN:
log("[LWindows]");
break;
case VK_RWIN:
log("[RWindows]");
break;
case VK_APPS:
log("[Applications]");
break;
case VK_SNAPSHOT:
log("[PrintScreen]");
break;
case VK_INSERT:
log("[Insert]");
break;
case VK_PAUSE:
log("[Pause]");
break;
case VK_VOLUME_MUTE:
log("[VolumeMute]");
break;
case VK_VOLUME_DOWN:
log("[VolumeDown]");
break;
case VK_VOLUME_UP:
log("[VolumeUp]");
break;
case VK_SELECT:
log("[Select]");
break;
case VK_HELP:
log("[Help]");
break;
case VK_EXECUTE:
log("[Execute]");
break;
case VK_DELETE: | {
"domain": "codereview.stackexchange",
"id": 7025,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, windows, logging, portability",
"url": null
} |
java, object-oriented, role-playing-game
}
public static void battleIntro(Player player, Room room) {
System.out.println("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
+ "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n");
System.out.println("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
+ "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n");
System.out.println("You arrive at Room [" + player.getCurrX() + "]["
+ player.getCurrY() + "]");
System.out.println("You enter the room and look around and see...");
System.out.println(room.getDescription() + "\n\n");
System.out.println("Number of monsters: " + room.getNumOfMonsters());
System.out.println("Your fight with " + room.getMonster().getName()
+ " begins.\n");
} | {
"domain": "codereview.stackexchange",
"id": 37720,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented, role-playing-game",
"url": null
} |
• Good. Notice we don't really get a "canonical" value of $\alpha$; we are just told by the Theorem that it's okay to use $\alpha$ as long as $0<\alpha<\min[ \frac{1}{2b},\frac{b}{1+b^2},a]$. This is the answer to (a): any $\alpha$ in this range works. // Now on to b): does our inequality for $\alpha$ allow it to be more than $\pi/2$? Well, neither $\alpha<\frac{1}{2b}$ nor $\alpha<a$ seem to prohibit such scenario outright, since we don't know what $a$ and $b$ are. But look at $\alpha<\frac{b}{1+b^2}$ ... the right side here is never very large, for any $b$. Try to make this precise. – user53153 Feb 24 '13 at 23:54
• So we can take derivative of $\frac{b}{1+b^2}$. This gives us $\frac{1-b^2}{(b^2+1)^2}$. This implies that the maximum value of this function is at b=1 and min is at b=-1 (when we set derivative = 0). Hence, our $\alpha$ must be between -1 and 1. And $\frac{\pi}{2}$ is greater than 1.5. Is this the correct answer? – user52932 Feb 25 '13 at 0:12
• Not quite. For one thing, $b$ is positive by the logic of the problem (you see $|y-y_0|\le b$ there), so there is no need to check negative values of $b$. More importantly, the relevant number here is the maximum of $\frac{b}{1+b^2}$, not where it is attained. This maximum is $\frac12$. Hence, the conclusion is that $\alpha<\frac12$. – user53153 Feb 25 '13 at 0:15 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232899814556,
"lm_q1q2_score": 0.8448975502534726,
"lm_q2_score": 0.8596637451167997,
"openwebmath_perplexity": 122.09050381567063,
"openwebmath_score": 0.9077156782150269,
"tags": null,
"url": "https://math.stackexchange.com/questions/312703/existence-of-solution-to-differential-equations"
} |
$$\therefore ab^{-1} \in S \iff a^2(b^2)^{-1} \in H$$.
Thus:
\begin{align} (ab^{-1})^2&=(ab^{-1})(ab^{-1}) &\\ &=a(b^{-1}a)b^{-1}& \\ &=a(ab^{-1})b^{-1} & \text{ (since } G\text{ is abelian),}\\ &=a^2b^{-2}& \\ &=a^2(b^2)^{-1}. & \end{align} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9850429103332287,
"lm_q1q2_score": 0.8411406996760988,
"lm_q2_score": 0.8539127492339909,
"openwebmath_perplexity": 226.35676295140252,
"openwebmath_score": 0.9588920474052429,
"tags": null,
"url": "https://math.stackexchange.com/questions/3197806/let-s-xx-in-g-and-x2-in-h-show-that-s-is-a-subgroup-of-g-for"
} |
I have the following questions: is this solution correct? Are there any cookbook style methods for solving problems of this kind? My approach seems a bit ad hoc to me, so I am looking for a more general or straightforward one. Thanks. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9928785699899225,
"lm_q1q2_score": 0.8252241270438305,
"lm_q2_score": 0.831143054132195,
"openwebmath_perplexity": 37.888025366361646,
"openwebmath_score": 0.9918289184570312,
"tags": null,
"url": "https://math.stackexchange.com/questions/1541512/convergence-of-the-sequence-a-n1-fraca-n2a-n-34-a-0-frac12"
} |
random-forest, predictor-importance
What is a proper analysis that can be conducted on the values obtained from the table, in addition to saying which variable is more important than another?
It was suggested that I use something like variable ranking or a cumulative density function, but I am not sure how to begin with that. I would be reluctant to do too much analysis on the table alone, as variable importances can be misleading, but there is something you can do. The idea is to learn the statistical properties of the feature importances through simulation, and then determine how "significant" the observed importances are for each feature. That is, could a large importance for a feature have arisen purely by chance, or is that feature legitimately predictive?
To do this you take the target of your algorithm $y$ and shuffle its values, so that there is no way to do genuine prediction and all of your features are effectively noise. Then fit your chosen model $m$ times, observe the importances of your features for every iteration, and record the "null distribution" for each. This is the distribution of the feature's importance when that feature has no predictive power.
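A minimal sketch of this shuffling procedure (using scikit-learn with invented data; the model choice, $m$, and the percentile threshold are all illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 3 * X[:, 0] + 0.1 * rng.standard_normal(200)  # only feature 0 is predictive

model = RandomForestRegressor(n_estimators=50, random_state=0)
observed = model.fit(X, y).feature_importances_

# Null distribution: refit m times on a shuffled target, where no feature
# can carry genuine information about y.
m = 30
null = np.empty((m, X.shape[1]))
for i in range(m):
    y_shuffled = rng.permutation(y)
    null[i] = model.fit(X, y_shuffled).feature_importances_

# The genuinely predictive feature should exceed a high quantile of its
# null distribution; the pure-noise features should not.
print(observed[0] > np.percentile(null[:, 0], 99))  # True
```

The comparison in the last line is exactly the quantile test described below: an observed importance far out in the tail of its null distribution is unlikely to be a fluke.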
Having obtained these distributions you can compare the importances that you actually observed without shuffling $y$ and start to make meaningful statements about which features are genuinely predictive and which are not. That is, did the importance for a given feature fall into a large quantile (say the 99th percentile) of its null distribution? In that case you can conclude that it contains genuine information about $y$. If on the other hand the importance was somewhere in the middle of the distribution, then you can start to assume that the feature is not useful and perhaps start to do feature selection on these grounds. | {
"domain": "datascience.stackexchange",
"id": 3415,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "random-forest, predictor-importance",
"url": null
} |
terminology-and-notation
Title: Why do we say 'classical computer' to mean 'digital computer'? The term 'classical computer' is always used to mean standard digital computation (Turing model, Boolean circuits or just good old RAM). I have never seen it to mean other models of computation based on classical physics (such as analog computation). This is evident when papers talk about the classical complexity of a problem, when what is meant is the complexity defined given a digital model of computation.
Do we gain anything by 'classical' instead of 'digital', or is it just a shibboleth? I think we do gain a little bit by saying "classical" instead of "digital".
As you point out, you can certainly build classical analog computers, and in the past these were very useful. But I believe that such classical analog computers can all be efficiently simulated by digital computers - not in the sense that a digital circuit can necessarily mimic the exact physical evolution of the computer, but in the sense that they can in principle solve any given problem with a similar asymptotic runtime (possibly up to polynomial speedups or slowdowns). In other words, I think it's generally believed that the extended Church-Turing thesis holds for all physically realizable computers whose behavior does not essentially rely on quantum mechanics. (You might argue that this claim is vague or even circular, but I think that with some work you can make it both true and noncircular. Note that this claim certainly hasn't been rigorously proven, but I think it's generally accepted to be true in our world.)
So I think that referring to these computers by the broad term "classical" usefully conveys the highly nontrivial insight of the extended Church-Turing thesis: that if you just care about "macro" features like the asymptotic runtime, then it doesn't actually matter whether your computer is digital or analog - what matters is whether it can use inherently quantum phenomena like superpositions and entanglement in a controlled fashion. | {
"domain": "quantumcomputing.stackexchange",
"id": 1626,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "terminology-and-notation",
"url": null
} |
condensed-matter, research-level, topological-field-theory, topological-order, topological-phase
In 2012, Xiao-Gang Wen wrote a stackexchange post asking how to characterize symmetry-breaking in quantum systems. You can see that the posts/discussions in that thread led Wen to characterize such phases in terms of their entanglement. Presumably, the notion of 'stochastic evolution' was subsequently developed to deal with the fact that while symmetry-breaking had long-range entanglement in its cat state, it is less entangled than topologically-ordered phases.
In 2015 (with updates over the next years), Zeng, Chen, Zhou and Wen wrote the book that you are referencing. It seems that the figure you are asking about was borrowed from their earlier work (with modifications), and the caption was not updated to include the more subtle notions of unitary-vs-stochastic.
Perhaps you can contact one of the authors so that they can correct the typo in a future version. | {
"domain": "physics.stackexchange",
"id": 68697,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "condensed-matter, research-level, topological-field-theory, topological-order, topological-phase",
"url": null
} |
java, inheritance, interface
IProjectService.java: (9 lines, 228 bytes)
public interface IProjectService extends IDependentListService<Project>{
public List<Project> getAllProjectsByLoggedInUser();
}
IUserService.java: (7 lines, 129 bytes)
public interface IUserService extends IListService<User>{
}
Update (Code):
The code for the IUserService changed a bit, after reviewing the implementation of the CustomerService:
IUserService.java:
public interface IUserService extends IListService<User>{
User getLoggedInUser();
}
Questions
Does this code follow good practices, especially concerning inheritance?
Naming. Should I really prepend I before the Interfaces? (Implementations are named without I). Are the names clear enough?
Everything else you see as problematic ;)
Update (Questions):
I am currently in the process of writing Javadoc for this code after refactoring it. Currently I use the interfaces as the place for the Javadoc; in the implementations I would then just use @see to point to the interface Javadoc. Is it okay to do this, or should I write out the Javadoc for the implementations too and just use @see to point to other implementations and the interfaces? This answer will reflect the naming convention at my work.
Naming Convention
Prefixing I to all your interfaces seems superfluous, since we use a simple name like ContractService. The implementation class has Impl suffixed to the name of the service, e.g. ContractServiceImpl. Normally you will use DI to inject the implementation where you need it, so you will only see the interface name.
Prefixing I seems odd to me when I look at the Java collections API, where you have List, Map, etc.
If you only used ContractService as the name of the implementation, it might not be clear at first read that you are using the implementation directly and not the interface. Using Impl as a suffix makes it clear that it's the implementation, so if you see it in your code, it can signal that something smelly is going on.
Naming of methods | {
"domain": "codereview.stackexchange",
"id": 6893,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, inheritance, interface",
"url": null
} |
python, performance, beginner, algorithm, mathematics
Vermicelli Benchmark: 5.542082810000011
Vermicelli: "[[35, 26, 17, 1, 62, 53, 44], [46, 37, 21, 12, 3, 64, 55], [57, 41, 32, 23, 14, 5, 66], [61, 52, 43, 34, 25, 16, 7], [2, 63, 54, 45, 36, 27, 11], [13, 4, 65, 56, 47, 31, 22], [24, 15, 6, 67, 51, 42, 33]]" is not a magic square.
Vermicelli Benchmark: 5.477112298999998
Vermicelli: "[[60, 53, 44, 37, 4, 13, 20, 29], [3, 14, 19, 30, 59, 54, 43, 38], [58, 55, 42, 39, 2, 15, 18, 31], [1, 16, 17, 32, 57, 56, 41, 40], [61, 52, 45, 36, 5, 12, 21, 28], [6, 11, 22, 27, 62, 51, 46, 35], [63, 50, 47, 34, 7, 10, 23, 26], [8, 9, 24, 25, 64, 49, 48, 33]]" is a magic square.
Vermicelli Benchmark: 5.534445683000001
Vermicelli: "[[22, 47, 16, 41, 10, 35, 4], [5, 23, 48, 17, 42, 11, 29], [30, 6, 24, 49, 18, 36, 12], [13, 31, 7, 25, 43, 19, 37], [38, 14, 32, 1, 26, 44, 20], [21, 39, 8, 33, 2, 27, 45], [46, 15, 40, 9, 34, 3, 28]]" is a magic square.
Vermicelli Benchmark: 5.473650165999999 | {
"domain": "codereview.stackexchange",
"id": 36803,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, beginner, algorithm, mathematics",
"url": null
} |
homework-and-exercises, optics, reflection, geometry
Title: How do you calculate the focal point location of a circular mirror? I'm trying to find the focal point and center of curvature of a concave mirror. Just using the radius for the center doesn't seem to work. I know C = 2f, but I'm not sure how to find f or C given the radius of a perfect circle. Is r = C and I'm just not drawing it right?
The object should be at the same place (but inverted) if it's placed at the center of curvature right?
When I try to use an optics simulator, the rays seem to bounce off something behind the mirror. You are correct in the way that you use the center and radius. You are also mostly correct that $C=2f$; however, this is only true when the size of the mirror is small compared to the radius of the mirror. The relationship $C=2f$ holds best for small angular diameters.
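The small-angle restriction can be seen directly by tracing one ray with plain geometry. Here is a minimal sketch (assumed setup: the mirror is part of a circle of radius $R$ centered at the origin, with the reflecting arc near the vertex $(R, 0)$ and rays arriving parallel to the axis):

```python
import math

def axis_crossing(R, h):
    """Distance from the mirror vertex at which a ray parallel to the axis
    at height h (0 < h < R) crosses the axis after reflecting off a concave
    spherical mirror of radius R (circle centered at the origin, ray
    travelling in the +x direction toward the arc near the vertex (R, 0))."""
    px, py = math.sqrt(R * R - h * h), h      # hit point on the circle
    nx, ny = -px / R, -py / R                 # unit normal on the reflecting side
    dx, dy = 1.0, 0.0                         # incoming ray direction
    dot = dx * nx + dy * ny
    rx, ry = dx - 2 * dot * nx, dy - 2 * dot * ny   # reflected direction
    t = -py / ry                              # parameter where the reflected ray hits y = 0
    x_cross = px + t * rx
    return R - x_cross                        # measured from the vertex at (R, 0)

R = 1.0
print(axis_crossing(R, 1e-4))   # paraxial ray: essentially R/2 = 0.5
print(axis_crossing(R, 0.6))    # marginal ray: 0.375, well short of R/2
```

A ray near the axis crosses very close to $R/2$, while a ray near the edge of a wide mirror crosses noticeably closer to the mirror — there is no single focal point for the whole aperture.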
What you are noticing in your optical simulation is that when the angular diameter of the mirror is too large, the light rays that reflect off the edges of the mirror do not reflect into the focal point. As a result, there is not a single point where all light rays parallel to the principal axis focus anymore. This inconvenience is known as spherical aberration and is part of the reason why better optical systems use parabolic reflectors instead of spherical ones. However, your simulation is probably written such that all parallel rays are programmed to reflect into the focal point, and so the simulation is forced to be unphysical in order for that to happen. Take it as a lesson that simulations are great at modeling physical reality, but they are only as good as their programming. | {
"domain": "physics.stackexchange",
"id": 19045,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, optics, reflection, geometry",
"url": null
} |
python, cassandra
Sorting options:
t, at: Sort by query time or average query time
c: Sort by count
Of these, the -g option is yet to be implemented, since there are some problems in how the queries are logged.
I'm also adding long-form variants of these (--sort, --reverse, etc.) consistently.
Support JSON encoded input, in a streaming fashion. This is for another related patch I'm submitting, where the queries are dumped with JSON encoding for easier parsing by external tools. The JSON-encoded entry will look like:
{
"operation": "SELECT FROM foo.bar WHERE token(id) > token(60bad0b3-551f-46c7-addc-4e3105561a21) LIMIT 100",
"totalTime": 1,
"timeout": 1,
"isCrossNode": false,
"numTimesReported": 1,
"minTime": 1,
"maxTime": 1,
"keyspace": "foo",
"table": "bar"
}
Keep compatibility with Python 2 and 3
The code:
csqldumpslow.py:
#! /usr/bin/env python3
from __future__ import print_function
import re
import sys
import getopt
import json
def usage():
msg = """Usage:
{} [OPTION] ... [FILE] ...
Provide a summary of the slow queries listed in Cassandra debug logs.
Multiple log files can be provided, in which case, the logs are combined.
If no file is specified, logs/debugs.log is assumed. Use - for stdin.
-h, --help Print this message
-s, --sort=type Sort the input
t - total time
at - average time
c - count
-r, --reverse Reverse the sort order
-t, --top=N Print only the top N queries (only useful when sorting)
-j, --json Assume input consists of slow queries encoded in JSON
-o, --output=FILE Save output to FILE
"""
print(msg.format(sys.argv[0])) | {
"domain": "codereview.stackexchange",
"id": 24334,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, cassandra",
"url": null
} |
quantum-mechanics, hilbert-space, operators, quantum-information
Title: Completely positive non-increasing linear map imply non-increasing on hermitian operators? Let $T$ be a completely positive linear map from $L(H)$ to $L(H)$ where $H$ is a Hilbert space.
We have that
$$T(X) = \sum_i A_iXA_i^\dagger$$
where $\{A_i\}_i$ are the Kraus operators, and we know that $\sum_iA_iA_i^\dagger \leq I$, with $I$ the identity.
Can we deduce from this that for $X\in L(H)$ such that $X = X^\dagger$ and $X\geq 0$ then $T(X) \leq X$ ? No, take e.g. $X=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ and $A_0=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ (only one $A_i$). | {
"domain": "physics.stackexchange",
"id": 64157,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, hilbert-space, operators, quantum-information",
"url": null
} |
c++, performance, c++11, recursion, breadth-first-search
enum KnightMovesMethodLimitations
{
DenyByPreviousRowOrColumn,
DenyByPreviousLocation
};
#endif /* KMKMMETHODlIMITATIONS_H_ */
KnightMoves.h
/*
* KnightMoves.h
*
* Created on: Mar 17, 2016
* Author: pacmaninbw
*/
#ifndef KNIGHTMOVES_H_
#define KNIGHTMOVES_H_
#include <string>
#include "KMMethodLimitations.h"
#include "KMBaseData.h"
#endif /* KNIGHTMOVES_H_ */
KnightMoves.cpp
/*
* KnightMoves.cpp
*
* Created on: Mar 17, 2016
* Author: pacmaninbw
*/
#include <iostream>
#include <stdexcept>
#include <chrono>
#include <ctime>
#include <algorithm>
#include <vector>
#include "KnightMoves.h"
#include "KnightMovesImplementation.h"
#include "KMBoardDimensionConstants.h"
double Average(std::vector<double> TestTimes)
{
double AverageTestTime = 0.0;
double SumOfTestTimes = 0.0;
int CountOfTestTimes = 0;
for (auto TestTimesIter : TestTimes)
{
SumOfTestTimes += TestTimesIter;
CountOfTestTimes++;
}
if (CountOfTestTimes) { // Prevent division by zero.
AverageTestTime = SumOfTestTimes / static_cast<double>(CountOfTestTimes);
}
return AverageTestTime;
}
void OutputOverAllStatistics(std::vector<double> TestTimes)
{
if (TestTimes.size() < 1) {
std::cout << "No test times to run statistics on!" << std::endl;
return;
} | {
"domain": "codereview.stackexchange",
"id": 21124,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, c++11, recursion, breadth-first-search",
"url": null
} |
$\sqrt{\frac{12}{x^5}} = \sqrt{\frac{12}{x^5} \cdot \frac{x}{x}} = \sqrt{\frac{12x}{x^6}} \nonumber$
We can now use Property 1 to take the square root of both numerator and denominator.
$\sqrt{\frac{12x}{x^6}} = \frac{\sqrt{12x}}{\sqrt{x^6}} \nonumber$
In the numerator, we factor out a perfect square. In the denominator, absolute value bars would ensure a positive square root. However, we’ve stated that x must be a positive number, so $$x^3$$ is already positive and absolute value bars are not needed.
$\frac{\sqrt{12x}}{\sqrt{x^6}} = \frac{\sqrt{4}\sqrt{3x}}{x^3} = \frac{2\sqrt{3x}}{x^3} \nonumber$
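A quick numerical spot-check of this simplification (an assumed sketch — it just compares both sides of the identity at a few positive values of x):

```python
import math

# For x > 0, sqrt(12/x^5) should equal 2*sqrt(3x)/x^3 exactly.
for x in (0.5, 2.0, 7.3):
    lhs = math.sqrt(12 / x**5)
    rhs = 2 * math.sqrt(3 * x) / x**3
    assert math.isclose(lhs, rhs, rel_tol=1e-12), (x, lhs, rhs)
print("sqrt(12/x^5) == 2*sqrt(3x)/x^3 for the sampled x > 0")
```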
Let’s look at another example.
Example $$\PageIndex{10}$$
Given that x < 0, place $$\sqrt{\frac{27}{x^{10}}}$$ in simple radical form.
Solution
One possible approach would be to factor out a perfect square and write
$\sqrt{\frac{27}{x^{10}}} = \sqrt{\frac{9}{x^{10}} \cdot 3} = \sqrt{\left(\frac{3}{x^5}\right)^2}\sqrt{3} = \left|\frac{3}{x^5}\right|\sqrt{3}.$ | {
"domain": "libretexts.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9911526459624252,
"lm_q1q2_score": 0.8302092449775559,
"lm_q2_score": 0.8376199653600372,
"openwebmath_perplexity": 356.65065799553645,
"openwebmath_score": 0.9020271897315979,
"tags": null,
"url": "https://math.libretexts.org/Bookshelves/Algebra/Intermediate_Algebra_(Arnold)/09%3A_Radical_Functions/9.03%3A_Division_Properties_of_Radicals"
} |
filters, signal-analysis, lowpass-filter, sensor
Title: Filtering an acceleration signal I have a triaxial accelerometer giving me acceleration along x, y, and z. I then combine the 3 axes by calculating magnitude of acceleration over all time points (details in my question here)
I'm wondering about the correct time in the process to low pass filter my signal to eliminate some of the high frequency noise in the signals. I guess I have a few options:
Filter the raw xyz signals, calculate magnitude and use it as is
Filter the raw xyz signals, calculate magnitude, filter the resulting magnitude vector, and then use that
Use the raw xyz signals (no filtering), calculate magnitude, filter the result then use it | {
"domain": "dsp.stackexchange",
"id": 5112,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, signal-analysis, lowpass-filter, sensor",
"url": null
} |
differential-geometry, symmetry, metric-tensor, coordinate-systems, differentiation
The Laplacian looks nice in Cartesian coordinates because the coordinate axes are straight and orthogonal, and hence measure volumes straightforwardly: the volume element is $dV = dx dy dz$ without any extra factors. This can be seen from the general expression for the Laplacian,
$$\nabla^2 f = \frac{1}{\sqrt{g}} \partial_i\left(\sqrt{g}\, \partial^i f\right)$$
where $g$ is the determinant of the metric tensor. The Laplacian only takes the simple form $\partial_i \partial^i f$ when $g$ is constant. | {
"domain": "physics.stackexchange",
"id": 57679,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "differential-geometry, symmetry, metric-tensor, coordinate-systems, differentiation",
"url": null
} |
algorithm-analysis, data-structures, probability-theory, hash-tables
Title: Balanced allocation-Hash table- overflow probability My question is related to this:
Hash-Table in Practice
In [1] page 7, it is said that if we throw $n$ balls into $k$ bins, then each bin contains at most $\frac{n}{k}+O(\sqrt[2]{(\frac{n}{k})\log k}+\log k)$ elements with a high probability.
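To get a feel for this claim, here is a small simulation (an assumed sketch; the constant hidden inside the $O(\cdot)$ is not pinned down, so the check below just uses a generous factor of 3 and natural logarithms):

```python
import math
import random

random.seed(1)
n, k = 10_000, 100
loads = [0] * k
for _ in range(n):
    loads[random.randrange(k)] += 1   # throw each ball into a uniformly random bin

avg = n / k
# Claimed bound shape: n/k + C*(sqrt((n/k)*log k) + log k), with C = 3 assumed here
bound = avg + 3 * (math.sqrt(avg * math.log(k)) + math.log(k))
print(max(loads), bound)   # observed maximum load vs. the claimed bound
```

With these parameters the maximum load typically lands around 125–135, comfortably inside the bound of roughly 178.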
Question 1: Why is $O()$ used in the above estimation?
Question 2: Does it mean the probability that a bin contains more than the above value is negligible?
[1]. http://www.pinkas.net/PAPERS/FNP04.pdf A more formal statement of the claim is as follows. There is a constant $C > 0$ and for each $k$, a function $\epsilon(n)$ satisfying $\lim_{n\to\infty} \epsilon(n) = 0$, such that if you throw $n$ balls into $k$ bins, then with probability at least $1-\epsilon(n)$ the contents of each bin is at most $ \frac{n}{k} + C(\sqrt{(\frac{n}{k})\log k}+\log k)$. | {
"domain": "cs.stackexchange",
"id": 5483,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm-analysis, data-structures, probability-theory, hash-tables",
"url": null
} |
slam, navigation, mapping, rviz, hector
<launch>
<param name="/use_sim_time" value="false"/>
<node pkg="rviz" type="rviz" name="rviz"
args="-d $(find hector_slam_launch)/rviz_cfg/mapping_demo.vcg"/>
<include file="$(find uprobotics)/launch/hector_mapping.launch"/>
<include file="$(find uprobotics)/launch/geotiff_mapper.launch">
<arg name="trajectory_source_frame_name" value="scanmatcher_frame"/>
</include>
</launch>
hector_mapping.launch
<launch>
<node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">
<param name="pub_map_odom_transform" value="true"/>
<param name="map_frame" value="map" />
<param name="base_frame" value="base_link" />
<param name="odom_frame" value="base_link" />
<!-- Map size / start point -->
<param name="map_resolution" value="0.025"/>
<param name="map_size" value="2048"/>
<param name="map_start_x" value="0.5"/>
<param name="map_start_y" value="0.5" />
<param name="laser_z_min_value" value="-2.5" />
<param name="laser_z_max_value" value="7.5" /> | {
"domain": "robotics.stackexchange",
"id": 14485,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "slam, navigation, mapping, rviz, hector",
"url": null
} |
ros, server, wiki
Title: Internal Server Error ROS Wiki
I'm getting an Internal Server Error when editing/trying to access the wiki page: http://ros.org/wiki/amigo_gazebo . I guess it's the same problem as: http://answers.ros.org/question/12315/getting-an-internal-server-error-while-editing-the-roswiki/ . Can somebody perhaps restore this page? Thanks, Rein
Originally posted by reinzor on ROS Answers with karma: 106 on 2013-01-02
Post score: 1
Fixed. You had non-ascii characters inside of a section marked with
{{{
#!clearsilver CS/NodeAPI
which caused the page renderer to throw a stack trace and die. Using the preview button before saving a page should help detect this in the future, so that you can correct it before it becomes permanent.
Originally posted by ahendrix with karma: 47576 on 2013-01-02
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 12246,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, server, wiki",
"url": null
} |
performance, hashcode, pascal
// We need the exact disk size in bytes to know when to stop reading
ExactDiskSize := GetDiskLengthInBytes(hSelectedDisk);
SectorCount := ExactDiskSize DIV 512;
// Now read the disk FROM START TO END and hash it until completion or the user aborts it
try
SHA1Init(ctx);
FileSeek(hSelectedDisk, 0, 0);
repeat
ProgressCounter := ProgressCounter + 1; // We use this to update the progress display occasionally, instead of on every buffer read
TimeStartRead := Now;
// The hashing bit...read the disk in buffers, hash each buffer and then
// finalise the finished hash. If there's a read error, abort. | {
"domain": "codereview.stackexchange",
"id": 8376,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, hashcode, pascal",
"url": null
} |
r, scrnaseq, 10x-genomics
cell8 = c(3.74969491678643, 0.103404975609384, 0.0354753982873036,
0, 0, 0), cell9 = c(1.19084857532713, 3.9213265721495, 0,
0.0341973245272891, 0.0419122921627454, 0), cell10 = c(4.1224255501566,
0.301871669274068, 0.0633536200981225, 0.389959552469879,
0, 0.0405296102106492)), row.names = c("PTPRC", "MHC-II",
"ITGAM", "Ly6C", "Ly6G", "EMR1"), class = "data.frame")
> | {
"domain": "bioinformatics.stackexchange",
"id": 1878,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "r, scrnaseq, 10x-genomics",
"url": null
} |
matlab, filters, dsp-core
I chose 32-bit signed integers as the format for the filter coefficients. If I follow the same implementation I used for floating point, the input value may overflow 32 bits... Perhaps I need to use 16-bit signed integers for the coefficients, or store the results in 64-bit registers?
I tried using the int64_t format, but the filters don't work correctly, and I can't understand why...
Also, in the integer implementation of digital filters, I don't understand how I should work with the final result. For example, how can I tell where the comma would be if it were a float number?
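One common convention that answers the "where is the comma" question is Qm.n fixed point: an integer q stands for the value q / 2^n, so the comma (binary point) sits n bits from the right by agreement, not by anything stored in the word. A minimal assumed sketch, using the kind of coefficients shown below — note that a value like -1.669 does not fit in Q15's [-1, 1) range, so it needs an extra integer bit, e.g. Q1.14:

```python
def to_fixed(x, frac_bits):
    """Quantize a float to a signed fixed-point integer with frac_bits
    fractional bits (the binary point sits frac_bits from the right)."""
    return int(round(x * (1 << frac_bits)))

def fixed_mul(a, b, frac_bits):
    """Multiply two fixed-point numbers; the double-width product carries
    2*frac_bits fractional bits, so shift back down by frac_bits."""
    return (a * b) >> frac_bits

FRAC = 14                        # Q1.14: range [-2, 2), enough for -1.669...
a1 = to_fixed(-1.669203162, FRAC)
b0 = to_fixed(0.01185768284, FRAC)
x  = to_fixed(0.5, FRAC)         # a sample input, also in Q1.14

y = fixed_mul(b0, x, FRAC)       # one multiply step of the filter
print(a1, y / (1 << FRAC))       # divide by 2^FRAC to read the real value back
```

The intermediate product a * b must be held in a wider register (64-bit for 32-bit operands) before the shift — that is where the overflow in the question comes from.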
For digital filters I used single precision floating point coefficients. Here is an example of such coefficients that I obtained in Matlab:
#define MWSPT_NSEC 3
const int NL[MWSPT_NSEC] = { 1,3,1 };
const real32_T NUM[MWSPT_NSEC][3] = {
{
0.01185768284, 0, 0
},
{
1, 2, 1
},
{
1, 0, 0
}
};
const int DL[MWSPT_NSEC] = { 1,3,1 };
const real32_T DEN[MWSPT_NSEC][3] = {
{
1, 0, 0
},
{
1, -1.669203162, 0.7166338563
},
{
1, 0, 0
}
}; | {
"domain": "dsp.stackexchange",
"id": 12371,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, filters, dsp-core",
"url": null
} |
But how does this help me in solving the ODE?
In other cases I simply took the eigenvalues and eigenvectors (if complex, split them into sin and cos) and solved the DE with the exponential function, like in the one-dimensional case: $x'=ax \Rightarrow x(t)=c_1 e^{at}$.
10. Yes, you've correctly identified the diagonal matrix $D$ such that there is an invertible matrix $P$ such that $A=PDP^{-1}.$
The reason you need to diagonalize $A$ is because the solution to the system $\mathbf{x}'=A\mathbf{x}$, as you've hinted at, is the following:
$\mathbf{x}(t)=e^{At}\mathbf{x}_{0}.$
So you have to compute this matrix exponential $e^{At},$ and the best and easiest way to do that is to diagonalize $A.$ Why? Because it's ridiculously easy to compute arbitrary powers of a diagonal matrix (just raise the elements on the diagonal to the desired power!). Also,
$A^{2}=(PDP^{-1})(PDP^{-1})=PDDP^{-1}=PD^{2}P^{-1},$
$A^{3}=(PDP^{-1})(PDP^{-1})(PDP^{-1})=PDDDP^{-1}=PD^{3}P^{-1}.$
In general, you have
$A^{k}=PD^{k}P^{-1}.$
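This is easy to check numerically. Below is a small self-contained sketch (assumed example matrix $A = \begin{pmatrix}2&1\\1&2\end{pmatrix}$ with eigenvalues 3 and 1): it verifies $A^3 = PD^3P^{-1}$ and then compares $Pe^{Dt}P^{-1}$ against a truncated Taylor series for $e^{At}$.

```python
import math

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A    = [[2.0, 1.0], [1.0, 2.0]]          # eigenvalues 3 and 1
P    = [[1.0, 1.0], [1.0, -1.0]]         # columns are the eigenvectors
Pinv = [[0.5, 0.5], [0.5, -0.5]]
D    = [[3.0, 0.0], [0.0, 1.0]]

# A^3 two ways: direct multiplication vs. P D^3 P^-1
A3 = matmul(A, matmul(A, A))
D3 = [[D[0][0] ** 3, 0.0], [0.0, D[1][1] ** 3]]
A3_diag = matmul(P, matmul(D3, Pinv))

# e^{At} two ways: P e^{Dt} P^-1 vs. a truncated Taylor series
t = 0.1
eDt = [[math.exp(D[0][0] * t), 0.0], [0.0, math.exp(D[1][1] * t)]]
eAt_diag = matmul(P, matmul(eDt, Pinv))

eAt_series = [[1.0, 0.0], [0.0, 1.0]]    # identity: the k = 0 term
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 20):
    term = matmul(term, [[A[i][j] * t / k for j in range(2)] for i in range(2)])
    eAt_series = [[eAt_series[i][j] + term[i][j] for j in range(2)] for i in range(2)]

print(A3, A3_diag)                       # both [[14, 13], [13, 14]]
print(eAt_diag[0][0], eAt_series[0][0])  # both 0.5*(e^{0.3} + e^{0.1})
```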
Then you just invoke the Taylor series expansion of the exponential, and you find that
$e^{At}=Pe^{Dt}P^{-1}.$ | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534371153208,
"lm_q1q2_score": 0.8135648342901904,
"lm_q2_score": 0.828938799869521,
"openwebmath_perplexity": 280.1382337723151,
"openwebmath_score": 0.9268473982810974,
"tags": null,
"url": "http://mathhelpforum.com/differential-equations/163743-system-first-order-ode.html"
} |
ros
<parent link="Body_LKP"/>
<child link="Body_LAP"/>
<axis xyz="0 1 0"/>
<limit effort="10.0" lower="-1.29154" upper="1.69297" velocity="1.0"/>
</joint>
<joint name="LAR" type="revolute">
<origin rpy="0 -0 0" xyz="0.0711787 -0.0466006 -1.04e-10"/>
<parent link="Body_LAP"/>
<child link="Body_LAR"/>
<axis xyz="1 0 0"/>
<limit effort="10.0" lower="-0.191986" upper="0.191986" velocity="1.0"/>
</joint>
<gazebo>
<!-- robot model offset -->
<pose>0 0 .66 0 0 0</pose>
</gazebo>
<transmission name="LKP_trans" type="pr2_mechanism_model/SimpleTransmission">
<actuator name="LKP_motor" />
<joint name="LKP" />
<mechanicalReduction>1</mechanicalReduction>
<motorTorqueConstant>1</motorTorqueConstant>
</transmission>
<transmission name="LAP_trans" type="pr2_mechanism_model/SimpleTransmission">
<actuator name="LAP_motor" />
<joint name="LAP" />
<mechanicalReduction>1</mechanicalReduction>
<motorTorqueConstant>1</motorTorqueConstant>
</transmission>
<transmission name="LAR_trans" type="pr2_mechanism_model/SimpleTransmission">
<actuator name="LAR_motor" />
<joint name="LAR" />
<mechanicalReduction>1</mechanicalReduction>
<motorTorqueConstant>1</motorTorqueConstant>
</transmission> | {
"domain": "robotics.stackexchange",
"id": 12862,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
## Truffle
# truffle
sudo npm install -g truffle
At this point line 18 works exactly as documented in the installation section of Truffle at https://github.com/ConsenSys/truffle.
# Cloud9 Environment
Ok, this part took me a while to get working on Cloud9. I had no problems running on a local Ubuntu virtual machine, but I wasn’t so lucky on Cloud9. You can see how I worked out the problem on StackOverflow at Ethereum Test RPC working in Cloud9 with Truffle. Credit for the final answer goes to XMLHttpRequest cannot load cloud 9 io.
In order to make it possible for testrpc to bind to the public IP address of your Cloud9 environment you’ll need to click on the “Share” button in the upper right-hand corner of the Cloud9 IDE and then make the Application link public. The StackOverflow answer indicated that this was due to a recent bug in Cloud9, so this may not be necessary in the future.
# Truffle Project Environment/Configuration
Once you have some code/contracts you want to try out you’ll need to start testrpc first. Do this by executing testrpc -p 8081 -d 0.0.0.0 in a new Cloud9 terminal. The -p 8081 argument tells testrpc to run on port 8081. Cloud9 only opens a few ports for public access according to https://docs.c9.io/docs/multiple-ports so we can’t use the testrpc default of 8545. Port 8080 will be used later by the truffle serve command, so 8081 is the next open port. The -d 0.0.0.0 argument tells testrpc to bind to all available IP addresses, including the public one.
Now that you have testrpc up and running, you’ll need to configure your Truffle project properly. You can do this a few ways but the key is to get this part in your config file:
"rpc": {
"host": "project-user.c9users.io",
"port": 8081
} | {
"domain": "josiahdev.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.988668248138067,
"lm_q1q2_score": 0.8363564198395739,
"lm_q2_score": 0.8459424295406088,
"openwebmath_perplexity": 1418.1227835708082,
"openwebmath_score": 0.6284764409065247,
"tags": null,
"url": "http://josiahdev.com/blog/"
} |
signal-analysis, fourier-transform, fourier-series
Title: What is the difference between the outputs of the Fourier transform and the Fourier series of a periodic square waveform? We can use the Fourier transform for an aperiodic signal and the Fourier series for a periodic signal. But we can also use the Fourier transform formula for a periodic function.
Now, let us consider a periodic square wave with fundamental period $T$. Then I want to ask:
What is the difference between the outputs of the Fourier transform and the Fourier series of a periodic square waveform? The Fourier transform $X(\omega)$ of a $T$-periodic function $x(t)$
$$ x(t+T) = x(t) \quad \quad \forall t $$
having complex Fourier coefficients $c_n$
$$ x(t) = \sum_{n=-\infty}^{\infty} c_n e^{j 2 \pi n t/T} $$
$$c_n = \frac{1}{T} \int_{0}^T x(t) e^{-j 2 \pi n t/T} \ dt \tag{1}$$
can be expressed as a weighted sum of Dirac impulses, where the weights are given by the complex Fourier coefficients:
$$X(\omega) \triangleq \mathcal{F}\left\{ x(t) \right\} = 2 \pi \sum_{n=-\infty}^{\infty} c_n \delta\left(\omega - \frac{2\pi n}{T}\right) \tag{2}$$ | {
"domain": "dsp.stackexchange",
"id": 2857,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "signal-analysis, fourier-transform, fourier-series",
"url": null
} |
general-relativity, spacetime, metric-tensor
Obligatory Remark:
I've barely begun to even scratch the surface here, you should read Hawking and Ellis/Wald for more detailed information (and there's a vast literature on this stuff). | {
"domain": "physics.stackexchange",
"id": 89784,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, spacetime, metric-tensor",
"url": null
} |
c++, polymorphism, cuda
return (0.5 * L_m / sqrt(3.0)) * (x + 0.25 * (exp(2.0*x)-exp(-2.0*x))); /* L */ //0.25 * (exp(2*x)-exp(-2*x)) == sinh(x) * cosh(x) and is faster
}
__host__ __device__ double DipoleB::getLambdaAtS(const double s) const
{// consts: [ ILATDeg, L, L_norm, s_max, ds, errorTolerance ]
double lambda_tmp{ (-ILATDegrees_m / s_max_m) * s + ILATDegrees_m }; //-ILAT / s_max * s + ILAT
double s_tmp{ s_max_m - getSAtLambda(lambda_tmp) };
double dlambda{ 1.0 };
bool over{ 0 };
while (abs((s_tmp - s) / s) > errorTolerance_m) //errorTolerance
{
while (1)
{
over = (s_tmp >= s);
if (over)
{
lambda_tmp += dlambda;
s_tmp = s_max_m - getSAtLambda(lambda_tmp);
if (s_tmp < s)
break;
}
else
{
lambda_tmp -= dlambda;
s_tmp = s_max_m - getSAtLambda(lambda_tmp);
if (s_tmp >= s)
break;
}
}
if (dlambda < errorTolerance_m / 100.0) //errorTolerance
break;
dlambda /= 5.0; //through trial and error, this reduces the number of calculations usually (compared with 2, 2.5, 3, 4, 10)
}
return lambda_tmp;
} | {
"domain": "codereview.stackexchange",
"id": 30404,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, polymorphism, cuda",
"url": null
} |
3. Jan 23, 2014
### CAF123
Hi Dick,
This depends on the combination of right and left steps. The displacement is given by $ka$ where $k$ is an integer. If there are no left steps, then $k = r = N$.
4. Jan 23, 2014
### Ray Vickson
You seem to be missing the point: given N and r, what is the displacement X? You need to figure this out because you are being asked to find the mean and standard deviation of X.
5. Jan 23, 2014
### CAF123
I see, so then $X = ra - \ell a = ra - (N-r)a$. To obtain the mean it is a case of solving the equation $X_{avg} = 2r_{avg} a - Na$. I can see where this method is going and in fact I have already solved it this way. I was wondering if there was a way of solving this explicitly with the distributions I have derived or via conditional probabilities.
6. Jan 23, 2014
### Staff: Mentor
To find the mean, you can simply use the symmetry of the problem. The standard deviation is more interesting.
You can find ravg with the distributions. I don't see where conditional probabilities would occur, as all steps are independent.
7. Jan 23, 2014
### CAF123
Since the drunk is equally likely to go backwards or forwards at each post, the mean should be zero.
$r_{avg}$ is the expected number of right steps. This is just an expectation of a binomial distribution (regard each step as a trial and moving to the right an event - at each post the probability of the event is 1/2).
E[right steps] = r(1/2) + (N-r)(1/2) = N/2. How would I show this using my distributions? | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9901401438236133,
"lm_q1q2_score": 0.8071944075894463,
"lm_q2_score": 0.8152324826183822,
"openwebmath_perplexity": 284.2027022447053,
"openwebmath_score": 0.8103267550468445,
"tags": null,
"url": "https://www.physicsforums.com/threads/the-random-walk-problem.734345/"
} |
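The closing question in the thread above — recovering the mean and variance of $X = (2r - N)a$ from the binomial distribution of right steps — can be checked by simulation. A minimal Python sketch (trial count and seed are arbitrary choices):

```python
import random

def random_walk_stats(N, a, trials=20000, seed=1):
    """Simulate N equally likely left/right steps of length a, many times over."""
    rng = random.Random(seed)
    xs = []
    for _ in range(trials):
        r = sum(rng.random() < 0.5 for _ in range(N))  # right steps ~ Binomial(N, 1/2)
        xs.append((2 * r - N) * a)                     # X = r*a - (N - r)*a
    mean = sum(xs) / trials
    var = sum((x - mean) ** 2 for x in xs) / trials
    return mean, var

N, a = 100, 1.0
mean, var = random_walk_stats(N, a)
# Binomial theory: E[X] = 0 and Var[X] = 4 a^2 Var(r) = 4 a^2 (N/4) = a^2 N.
```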
In other words, the first few entries in the expansion of $f$, $\sum_{i=0}^N \alpha_i b_i$, contain the "smooth parts" of $f$, the parts that contribute least to $f$'s Dirichlet energy. The more terms you add, the more high-frequency details you recover. In the (very common) case that you must approximate a function using only a limited amount of information, and the coarse, smooth behavior of $f$ is most important to preserve, the Fourier basis thus gives you a natural representation for doing so.
The above picture generalizes directly to other settings, such as on the sphere (where the spherical harmonics play the role of the sines and cosines) or other manifolds.
• Is there a way to reconcile this view with littleO's? It seems closely related, as being an eigenfunction of the shift operation seems intimately related to having nice properties under derivation. Is there a similarly simple and deep interpretation of the Laplacian? Apr 13, 2018 at 11:36
• @user6873235 Yes, I do wonder if they are equivalent, or if they are separate notions that happen to coincidentally give the same basis functions in the case of $S^1$... I'm not sure what the shift operator looks like on $S^2$ for instance (though there we do have that the spherical harmonic basis functions relate to the rotation group in a similar way.) Apr 13, 2018 at 18:42
Sine and cosine functions are eigenfunctions of the second derivative operator with negative eigenvalues, $\frac{d^2}{dx^2} \sin(\omega x) = -\omega^2 \sin(\omega x)$, while exponential (and hyperbolic sine/cosine) functions are eigenfunctions with positive eigenvalues. This makes Laplace transforms (for positive eigenvalues) and Fourier transforms (for negative eigenvalues) very useful for second-order differential equations. Force is proportional to the second derivative, so sinusoidal functions are the eigenfunctions in a variety of physical situations: harmonic motion, waves, etc.
Sinusoidal functions are in some sense the "simplest" periodic functions. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9811668714863166,
"lm_q1q2_score": 0.8176304820534713,
"lm_q2_score": 0.8333245911726382,
"openwebmath_perplexity": 328.3058074404362,
"openwebmath_score": 0.8950873017311096,
"tags": null,
"url": "https://math.stackexchange.com/questions/2733613/why-do-we-use-trig-functions-in-fourier-transforms-and-not-other-periodic-funct/2733639"
} |
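The eigenfunction claim in the answer above — that $\frac{d^2}{dx^2}\sin(\omega x) = -\omega^2 \sin(\omega x)$ — is easy to verify with a finite-difference check; a small Python sketch (the step size and test point are arbitrary):

```python
import math

def second_derivative(f, x, h=1e-5):
    # central finite-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

omega = 3.0
f = lambda x: math.sin(omega * x)

x0 = 0.7
lhs = second_derivative(f, x0)
rhs = -omega ** 2 * f(x0)   # eigenvalue -omega^2 times the eigenfunction itself
```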
javascript, strings, formatting
wrapAfterMaxLength(str, maxL) {
if (!this.containsSeparator(str)) {
return str;
}
var p = maxL + 1;
while (p < str.length && !this.containsSeparator(str.charAt(p))) {
p++;
}
return this.wrapTheText(str, p, maxL);
}
containsSeparator(str) {
return _.includes(str, " ") || _.includes(str, "-");
}
wrapTheText(str, p, maxL) {
if (0 < p && p < str.length) {
var left = (str.substring(p, p + 1) === "-") ? str.substring(0, p + 1) : str.substring(0, p);
var right = str.substring(p + 1);
return left + "\n" + this.wrapType(right, maxL);
}
}
} | {
"domain": "codereview.stackexchange",
"id": 26062,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, strings, formatting",
"url": null
} |
+ AB′C c) F = ABC′ + (A′ + B′)C d) G = ABC. For example: (XY) + (YX) + (AB) + (BA) + (EE) COMMUTATIVE → (XY) + (XY) + (AB) + (AB) + (EE) IDEMPOTENT → (XY) + (AB) + (E) ASSOCIATIVE → XY + AB + E. Simplify the following Boolean expressions by using the required laws. A Karnaugh map provides an organized way of simplifying Boolean expressions and helps in constructing the simplest minimized SOP and POS expressions. State the principle of duality in Boolean algebra and give the dual of the Boolean expression: (X + Y). The most practical law is DeMorgan's law: one form explains how to simplify the negation of a conjunction (and), and the other form explains how to simplify the negation of a disjunction (or). A lambda expression describes a block of code (an anonymous function) that can be passed to a construct or method for subsequent execution. Online minimization of Boolean functions. Verilog Module 6. Here are some examples of Boolean algebra simplifications. Karnaugh Map (truth table in two-dimensional space) 4. Alyazji 2 2. Simplify the following Boolean expression AB + A'C + BC 5. Is there any way to simplify a combination of XOR and XNOR gates in the following expression? I have tried multiple Boolean theorems and I have not been able to simplify it any further. Simplify the following Boolean expression. Simplifying variable expressions: simplify each expression. name: nkemdirim chimezirim miracle iti1100c assignment student number: 8869343 demonstrate the validity of the following identities by means of truth tables. One way to analyze, and perhaps simplify, Boolean expressions is to create truth tables. Simplifying logic circuits: obtain the expression of the circuit's function, then try to simplify it. We will look at two methods: algebraic and Karnaugh maps. A Boolean expression evaluates to either true or false. • Some standardized forms | {
"domain": "rallystoryeventi.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9648551535992068,
"lm_q1q2_score": 0.838291986084183,
"lm_q2_score": 0.8688267694452331,
"openwebmath_perplexity": 892.2882055966397,
"openwebmath_score": 0.7118634581565857,
"tags": null,
"url": "http://kmle.rallystoryeventi.it/simplify-the-following-boolean-expression.html"
} |
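One exercise embedded in the text above — simplify AB + A'C + BC — is the consensus theorem: the BC term is redundant. A brute-force truth-table check in Python (the helper names are my own):

```python
from itertools import product

def equivalent(f, g, nvars):
    """Two Boolean functions are equal iff they agree on every truth-table row."""
    return all(bool(f(*row)) == bool(g(*row)) for row in product([0, 1], repeat=nvars))

# Consensus theorem: AB + A'C + BC == AB + A'C
original   = lambda a, b, c: (a and b) or ((not a) and c) or (b and c)
simplified = lambda a, b, c: (a and b) or ((not a) and c)

same = equivalent(original, simplified, 3)
```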
homework-and-exercises, newtonian-gravity, acceleration, coulombs-law, free-fall
How to find the exact solution $r(t)$ of the ODE ?
How to find the exact solution $t(r)$ which is the inverse function of $r(t)$? First of all, define the variable $u(t)=R+r(t)$, so your equation can be put as:
\begin{equation}
{d^2u\over dt^2}={k\over u^2}, \text{ where $k$ is a constant}
\end{equation}
Then, multiply by $du/dt$ both sides of this equation, leaving:
\begin{eqnarray}
&&{du\over dt}{d^2u\over dt^2}={k\over u^2}{du\over dt}\\
&\Rightarrow&{1\over 2}{d\over dt}\left({du\over dt}\right)^2=-{d\over dt}\left({k\over u}\right)\\
&\Rightarrow&{d\over dt}\left[\left({du\over dt}\right)^2+{2k\over u}\right]=0\\
\end{eqnarray}
Then you have:
\begin{equation}
\left({du\over dt}\right)^2+{2k\over u}=C,\text{ where $C$ is a constant}
\end{equation}
Finally:
\begin{equation}
{du\over dt}=\pm\sqrt{C-{2k\over u}},
\end{equation}
which you can solve easily (fifty points to gryffindor!!!) | {
"domain": "physics.stackexchange",
"id": 25569,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-gravity, acceleration, coulombs-law, free-fall",
"url": null
} |
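The conserved quantity derived above, $(du/dt)^2 + 2k/u = C$, can be verified numerically by integrating $u'' = k/u^2$ and checking that $C$ stays constant. A Python sketch using velocity-Verlet (the initial conditions, $k$, and step size are arbitrary choices):

```python
def integrate(u0, v0, k, dt, steps):
    """Velocity-Verlet integration of u'' = k / u**2."""
    u, v = u0, v0
    acc = k / u**2
    for _ in range(steps):
        v_half = v + 0.5 * dt * acc
        u = u + dt * v_half
        acc = k / u**2
        v = v_half + 0.5 * dt * acc
    return u, v

k, u0, v0 = 1.0, 1.0, 0.0
C0 = v0**2 + 2 * k / u0                   # (u')^2 + 2k/u at t = 0
u, v = integrate(u0, v0, k, dt=1e-4, steps=5000)
C1 = v**2 + 2 * k / u                     # same quantity after integrating
```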
$$\displaystyle=126+36\cdot7=?$$
You may reduce it to using the binomial theorem only as follows:
$$\begin{eqnarray*}[x^8](1+x^2-x^3)^{9} & = & [x^8](1+x^2(1-x))^{9} \\ & = & [x^8]\sum_{k=0}^9 \binom{9}{k}x^{2k}(1-x)^k\\ & = & [x^8]\sum_{k=\color{blue}{3}}^{\color{blue}{4}}\binom{9}{k}x^{2k}(1-x)^k\\ & = & \binom{9}{3}[x^2](1-x)^3 + \binom{9}{4}[x^0](1-x)^4\\ & = & 3\binom{9}{3} + \binom{9}{4}\\ & = & 378\\ \end{eqnarray*}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9796676478446031,
"lm_q1q2_score": 0.8009512574986076,
"lm_q2_score": 0.8175744695262777,
"openwebmath_perplexity": 373.0755909871012,
"openwebmath_score": 0.8160020112991333,
"tags": null,
"url": "https://math.stackexchange.com/questions/3238977/coefficient-problem-in-algebra"
} |
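The coefficient extraction above can be double-checked by expanding the polynomial directly; a short Python sketch using naive polynomial multiplication (helper names are my own):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_pow(p, n):
    result = [1]
    for _ in range(n):
        result = poly_mul(result, p)
    return result

# coefficients of 1 + x^2 - x^3, lowest degree first
expansion = poly_pow([1, 0, 1, -1], 9)
coeff_x8 = expansion[8]   # [x^8](1 + x^2 - x^3)^9
```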
So I think that somehow you should probably change all of your equations so that units are matching.
So for segment 1, for example, we found the equation to be: x = 45t2 + C
If you want to keep time in minutes, then because 45 km/h = 0.75 km/minute, the equation should be:
x = 0.75t2 + C (At least that's what I think it should be.)
19. Apr 22, 2017
### Bunny-chan
Hahahahaha. It's OK! That's very confusing anyway!
So, from the first function, we have now that for the second segment $C = 0.1875$, which means that for $t = 2$, the position will be $0.75 \times 2 - 0.1875 = 1.3125$, which seems to match the answer graph. Is that right?
20. Apr 22, 2017
### TomHart
That looks right, based on a very quick calculation. I'm sorry I don't have any more time to spend right now. I would really like to in order to bail myself out. But I have to run my neighbor to the airport right now. Once again, I'm sorry for the bad information I gave you. But I got the same answer of 1.3125 at t = 2. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9637799472560581,
"lm_q1q2_score": 0.8266996972331667,
"lm_q2_score": 0.8577681031721324,
"openwebmath_perplexity": 502.6945397347745,
"openwebmath_score": 0.6502695083618164,
"tags": null,
"url": "https://www.physicsforums.com/threads/motion-graphs-and-their-units.912153/"
} |
codon, orf
Title: Why are there six reading frames if only one strand of DNA is referred to as the ‘coding strand’? We need to consider six reading frames when considering the potential of DNA to encode protein (three frames for each strand). But only one strand is transcribed into RNA — the so-called coding strand. It would therefore seem to me that there are actually only three reading frames to consider. Why, then, do people refer to six?
Another point concerning reading frames is the definition of Open Reading Frame — ORF. One text defines ORF as:
“An ORF is a continuous stretch of codons beginning with a start codon
(usually AUG) and ending with a stop codon”
whereas another text defines it as
“An ORF is a continuous stretch of codons that do not contain a stop
codon (usually UAA, UAG or UGA)”
It seems to me that the first definition is correct. Which is the generally accepted definition for ORF? In my opinion this question reflects two things:
The difficulty students have in appreciating the historical experimental concerns of research workers in an area that is now well understood, and, hence, how it influenced the coining of new technical terms.
The way that the use of terms has changed with time as old concerns disappear and new ones arise. Thus, a term originally used in one sense may have subsequently been adopted to mean something else, even if this does not appear strictly logical. | {
"domain": "biology.stackexchange",
"id": 6652,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "codon, orf",
"url": null
} |
deep-learning, dimensionality-reduction, rbm
Title: How is dimensionality reduction achieved in Deep Belief Networks with Restricted Boltzmann Machines? In neural networks and old classification methods, we usually construct an objective function to achieve dimensionality reduction. But Deep Belief Networks (DBN) with Restricted Boltzmann Machines (RBM) learn the data structure through unsupervised learning. How does it achieve dimensionality reduction without knowing the ground truth and constructing an objective function? As you know, a deep belief network (DBN) is a stack of restricted Boltzmann machines (RBM), so let's look at the RBM: a restricted Boltzmann machine is a generative model, which means it is able to generate samples from the learned probability distribution at the visible units (the input). While training the RBM, you teach it how your input samples are distributed, and the RBM learns how it could generate such samples. It can do so by adjusting the visible and hidden biases, and the weights in between.
The choice of the number of hidden units is completely up to you: if you choose to give it fewer hidden than visible units, the RBM will try to recreate the probability distribution at the input with only the number of hidden units it has. And that is already the objective: $p(\mathbf{v})$, the probability distribution at the visible units, should be as close as possible to the probability distribution of your data $p(\text{data})$.
To do that, we assign an energy function (both equations taken from A Practical Guide to Training RBMs by G. Hinton)
$$E(\mathbf{v},\mathbf{h}) = -\sum_{i \in \text{visible}} a_i v_i - \sum_{j \in \text{hidden}} b_j h_j - \sum_{i,j} v_i h_j w_{ij}$$
to each configuration of visible units $\mathbf{v}$ and hidden units $\mathbf{h}$. Here, $a_i$ and $b_j$ are the biases, and $w_{ij}$ are the weights. Given this energy function, the probability of a visible vector $\mathbf{v}$ is | {
"domain": "datascience.stackexchange",
"id": 1259,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, dimensionality-reduction, rbm",
"url": null
} |
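The energy function quoted from Hinton's guide is straightforward to compute for a toy RBM; a minimal NumPy sketch (the sizes, seed, and zero biases are arbitrary choices). The unnormalised probability of a visible vector sums $e^{-E(\mathbf{v},\mathbf{h})}$ over all hidden configurations:

```python
from itertools import product

import numpy as np

def rbm_energy(v, h, a, b, W):
    """E(v, h) = -a.v - b.h - v^T W h, as in Hinton's practical guide."""
    return -np.dot(a, v) - np.dot(b, h) - np.dot(v, W @ h)

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 2
a = np.zeros(n_visible)            # visible biases
b = np.zeros(n_hidden)             # hidden biases
W = rng.normal(size=(n_visible, n_hidden))

def unnormalised_p_v(v):
    """p(v) up to the partition function: sum over all 2^n_hidden hidden states."""
    return sum(np.exp(-rbm_energy(v, np.array(h), a, b, W))
               for h in product([0, 1], repeat=n_hidden))
```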
php
?>
This code seems to require changes based on whether magic quotes are enabled. Moreover, it's currently assuming that magic quotes are enabled. One should be encouraging the elimination of magic quotes from this earth, not depending on them.
This rather hard-to-decipher code contains no comments. I'm usually against lots of comments, but I'm even more against code whose intent is not self-evident. Pick one or the other. (Allow me to cast my vote here in favor of making the code readable enough that you don't need comments.)
Who told you two-char table names were acceptable? Names should be descriptive. You seem to be aware that there's no two-char limit on name lengths; use that to make life easier for the poor schmuck who ends up working on this code later (which could be you in 6 months). I should not need documentation to look through your stuff and see what it does. Particularly considering you haven't provided any. :P
A post is a database? No. A view is a database? Even more strongly, no. (You've claimed this is MVC; you should know that in MVC, views should never even have direct access to the db. That's the model's job.) Use inheritance to say something "is a" something-else, not just because it's handy to say parent::__construct() to set up the DB.
BTW, that's another thing. The constructor's job is to set up an object. No more, no less. new something() should return me a new something -- that I will use. No more, no less. It should not have side effects outside of setting up the instance in question. Setting some magic global 3 layers up in the inheritance tree? Hell no. You should never have a naked new something(); if you want an object, then you should be using it (storing it, calling functions on it, etc). If you don't, then quit abusing constructors. Hell, if everything is going to be init'ing the database anyway, then just say database::get() at the beginning of the code and spare yourself all the unnecessary inheritance.
"domain": "codereview.stackexchange",
"id": 825,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php",
"url": null
} |
is source free and curls around. From their experimental results, Biot and Savart arrived at a mathematical expression that gives the magnetic field at some point in space in terms of. First, we use the Biot-Savart law to find the direction of the field. The Biot-Savart law is standardly taught to 2nd/3rd year aero-engineers and has been around in that use for over 100 years. The formula is exact for an infinitely long wire. In the case of a long straight conductor, carrying current I, the Biot-Savart law gives: 2 The Magnetic Field of a Steady Current Biot-Savart law The integration is along the current path. It consists of permanent horseshoe magnets, coil, soft iron core, pivoted spring, non-metallic frame, scale, and pointer. The dl vector is a vector pointing in the direction of positive current flow for a differentially small section of wire. Bleeder Valve. We might suppose, from looking at Eqs. Magnetic Field: a magnetic field is the region around a permanent magnet or a current-carrying wire in which a force is experienced. Ampere's law is more suitable for advanced formulations of electromagnetism. For a point vortex at the origin this reduces to the radial velocity field $u(x) = K_{2d}(x)$. a magnet to Paris on September 4, 1820. The wire is presented in the picture below, in red. The Biot-Savart Law (Text section 30. A circular loop of radius R is placed in the xy-plane, centered at the origin, as shown. Effect of pressure (at constant temperature): the velocity of sound is independent of a change of pressure at a given temperature. Find the magnetic field inside a toroid. where μ₀ is the permeability constant. magnetic flux. The Chemistry Glossary contains basic information about basic terms in chemistry, physical quantities, measuring units, classes of compounds and materials, and important theories and laws. 11/14/2004 section 7_3 The Biot-Savart Law blank. Key words: 2D Euler, patch of vorticity, stability. This is at the AP Physics level.
of Physics 1 Magnetic force and field Magnetic force: F = qv × B Currents make magnetic fields: Biot-Savart law dB = (μ₀/4π) I dl × r̂ / r². We note that. | {
"domain": "aoly.pw",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9715639644850629,
"lm_q1q2_score": 0.8075086305677752,
"lm_q2_score": 0.8311430436757312,
"openwebmath_perplexity": 686.7284100631799,
"openwebmath_score": 0.8189038634300232,
"tags": null,
"url": "http://jloe.aoly.pw/biot-savart-law-constant.html"
} |
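As a concrete check of the Biot–Savart law discussed above, summing the contributions of short segments around a circular loop reproduces the textbook field at the centre, $B = \mu_0 I / 2R$. A Python sketch (the current, radius, and segment count are arbitrary choices):

```python
import math

def b_center_circular_loop(I, R, segments=2000):
    """Biot-Savart sum for the field at the centre of a circular loop.

    Each segment dl is perpendicular to the vector to the centre (|r| = R),
    so |dl x r_hat| = dl and every contribution points the same way."""
    mu0 = 4 * math.pi * 1e-7
    B = 0.0
    dphi = 2 * math.pi / segments
    for _ in range(segments):
        dl = R * dphi                              # arc length of one segment
        B += mu0 * I * dl / (4 * math.pi * R**2)   # dB = (mu0 / 4 pi) I dl / R^2
    return B

I, R = 2.0, 0.05
B_numeric = b_center_circular_loop(I, R)
B_exact = 4 * math.pi * 1e-7 * I / (2 * R)         # mu0 I / (2R)
```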
gazebo
This box is over the ground plane.
And I can see the sensor information with the instruction:
gztopic echo /gazebo/default/box/link/my_contact
But I don't understand what it means.
- What does this information mean?
contact {
collision1: "box::link::box_collision"
collision2: "ground_plane::link::collision"
position {
x: -0.500000000462553
y: 0.49999999823938179
z: -1.3116396857526524e-11
}
position {
x: 0.499999999537447
y: 0.49999999830396458
z: -1.3116396855009743e-11
}
normal {
x: 0
y: 0
z: 1
}
normal {
x: 0
y: 0
z: 1
}
depth: 1.3116396857526524e-11
depth: 1.3116396855009743e-11
wrench {
body_1_name: "box::link::box_collision"
body_2_name: "ground_plane::link::collision"
body_1_force {
x: 0.18159452738357906
y: -0.58800230161463551
z: 2.9400153496814356
}
body_2_force {
x: 0
y: 0
z: 0
}
body_1_torque {
x: 1.1760044494554212
y: 1.3792103044659421
z: 0.2032039939184285
}
body_2_torque {
x: 0
y: 0
z: 0
}
}
wrench {
body_1_name: "box::link::box_collision"
body_2_name: "ground_plane::link::collision"
body_1_force {
x: -0.18159452730762679
y: -0.5880023016380882
z: 2.9400153496814232
}
body_2_force { | {
"domain": "robotics.stackexchange",
"id": 3050,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo",
"url": null
} |
ros
Title: integrating a c# dll in ROS
I am forced to integrate a .NET assembly in a ROS node (kinetic).
I have tested the .NET assembly with mono and it works just fine.
What are my chances of creating a successful ROS node purely in C#?
For example, I would need to subscribe to a topic for receiving image data.
Or would you recommend to wrap the .NET assembly in c++ and use c++ for ROS integration?
I have never done any of the two...
Originally posted by knxa on ROS Answers with karma: 811 on 2017-06-29
Post score: 0
I have decided to wrap the dot net assembly in c++.
After a look at how to embed mono in a c++ application it does not look that difficult. I expect the .NET dll to be quite static and the interface to be quite limited.
So there is no need to involve C# in accessing the ROS API, for example worry about how ROS messages are parsed and generated.
Originally posted by knxa with karma: 811 on 2017-06-30
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28252,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
The sum of $0+\cdots + n-1$ is $$\frac12(n-1)n.$$
Here $n$ is the number of users; there are 0 comparisons needed for the first user alone, 1 for the second user (to compare them to the first), 2 for the third user, and so on, up to the $n$th user who must be compared with the $n-1$ previous users.
For example, for $9$ people you are adding up $0+1+2+3+4+5+6+7+8$, which is equal to $$\frac12\cdot 8\cdot 9= \frac{72}{2} = 36$$ and for $10$ people you may compute $$\frac12\cdot9\cdot10 = \frac{90}2 = 45.$$
You want to know how many ways there are to choose $2$ users from a set of $n$ users.
Generally, the number of ways to choose $k$ elements from a set of order $n$ (that is, all elements in the set are distinct) is denoted by $$\binom{n}{k}$$
and is equivalent to $$\frac{n!}{(n-k)!k!}$$
In the case of $k=2$ the latter equals to $$\frac{n!}{(n-2)!2!}=\frac{n(n-1)}{2}$$
which is also the sum of $1+2+...+n-1$.
The following way to getting the solution is beautiful and said to have been found by young Gauss in school. The idea is that the order of adding $1+2+\cdots+n=S_n$ does not change the value of the sum. Therefore:
$$1 + 2 + \ldots + (n-1) + n=S_n$$ $$n + (n-1) + \ldots + 2 + 1=S_n$$
Adding the two equations term by term gives | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.989510906574819,
"lm_q1q2_score": 0.8370692622864396,
"lm_q2_score": 0.8459424314825853,
"openwebmath_perplexity": 292.94499834353053,
"openwebmath_score": 0.7841282486915588,
"tags": null,
"url": "http://math.stackexchange.com/questions/778495/i-have-the-pattern-1-2-3-4-5-6-but-i-need-the-formula-for-it"
} |
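The count $\binom{n}{2} = n(n-1)/2$ derived in these answers can be confirmed by brute force; a tiny Python sketch:

```python
from itertools import combinations

def pairwise_comparisons(n):
    """Count unordered pairs among n users by enumerating them."""
    return sum(1 for _ in combinations(range(n), 2))

counts = {n: pairwise_comparisons(n) for n in (9, 10)}
# Matches n*(n-1)/2: 36 comparisons for 9 users, 45 for 10.
```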
$n\mid m$ means that $n$ divides $m$; $2\mid5$ is false since it leaves a remainder of $1$, thus $2\nmid5$. ($x\nmid y$ stands for that $x$ does not divide $y$)
More examples:
$2\mid6$ since remainder when $6$ is divided by $2$ is zero.
$3\mid 9^n+27n, \forall n\in\mathbb{N}$ (which means that forall $n$ that's a natural number)
• is there any reason to use the divides symbol with the strike through rather than the negation symbol before it, other than preference? Or is the way you have "2∤5" preferable to using "¬(2|5)" due to simplicity? – user2068060 Jul 25 '13 at 18:36
• I guess, it's more like a ... personal preference? I do think though that $2\nmid5$ is simpler – user67258 Jul 25 '13 at 18:40
The rule says that if $a\mid b$, then $b = k \cdot a$ for some integer $k$. Since $2.5\notin\mathbb{Z}$, $2\nmid5$.
• k>=1 is interesting here. Thus (2|0) is false? As is (2|-6)? – user2068060 Jul 25 '13 at 18:24
• Sorry, that was a typo (don't know what I was thinking!). Have corrected the answer. – dotslash Jul 25 '13 at 18:45 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534376578004,
"lm_q1q2_score": 0.8241597892899435,
"lm_q2_score": 0.8397339676722393,
"openwebmath_perplexity": 454.446948027841,
"openwebmath_score": 0.8698151707649231,
"tags": null,
"url": "https://math.stackexchange.com/questions/452146/understanding-divides-aka-as-used-logic/452150"
} |
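The divisibility rule above ($a \mid b$ iff $b = ka$ for an integer $k$) maps directly onto the modulo operator; a small Python sketch, including the $3 \mid 9^n + 27n$ example (spot-checked over a finite range of $n$ rather than proved):

```python
def divides(n, m):
    """n | m  <=>  m leaves remainder 0 when divided by n."""
    return m % n == 0

two_divides_six = divides(2, 6)    # True: 6 = 3 * 2
two_divides_five = divides(2, 5)   # False: remainder 1

# 3 | 9^n + 27n, checked for n = 1..49
claim_holds = all(divides(3, 9**n + 27 * n) for n in range(1, 50))
```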
quantum-gate, quantum-algorithms, nielsen-and-chuang, shors-algorithm, quantum-fourier-transform
If you compare it with what is written in Nielsen and Chuang step 4, you will notice that $|\tilde{l/r}\rangle$ is exactly a quantum state $|l2^t/r\rangle$ for binary representation of integer $l2^t/r$.
But if $l2^t/r$ is not an integer (i.e. if $2^t$ is not an integer multiple of $r$), then
$$\frac{1}{\sqrt{2^t}}\sum_{x=0}^{2^t-1} e^{2\pi i (l2^t/r)\frac{x}{2^t}}|x\rangle \neq FT (|l2^t/r\rangle),$$
since for rational $l2^t/r$ there is no integer binary representation, and thus, no quantum state $|l2^t/r\rangle$. In this case, what we get from $|\tilde{l/r}\rangle$ in step 4 is only an approximation.
Additionally, you asked about period-finding specifically, but the same logic applies to the descriptions of order-finding algorithm and discrete logarithm algorithm in Nielsen and Chuang. | {
"domain": "quantumcomputing.stackexchange",
"id": 2171,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-gate, quantum-algorithms, nielsen-and-chuang, shors-algorithm, quantum-fourier-transform",
"url": null
} |
special-relativity
Title: Finding the velocity of a particle How to find the velocity of a particle which has mass $m$ and energy $E$, considering the non-relativistic and the extreme relativistic limits? Non-relativistic: The kinetic energy of a particle is given by $\frac{mv^2}{2}$. Assuming no rotation, that's it for kinetic energy. $E$ is the kinetic energy plus the potential energy of the object. There is also the rest energy $m_0c^2$; although large, it is a constant, so we can leave it out of $E$ in the non-relativistic limit. So if $E$ is all kinetic, $v = \sqrt{\frac{2E}{m}}$, determined up to a plus or minus sign.
Relativistic: $E$ is given by $m_0\gamma c^2$, where $\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$. The relativistic kinetic energy is $(\gamma - 1)m_0c^2$ ($m_0$ is the object's rest mass). So if you know $E$ and $m_0$, you can calculate $\gamma$ using $E = m_0\gamma c^2$ and find $v$ using the equation for $\gamma$. | {
"domain": "physics.stackexchange",
"id": 27917,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity",
"url": null
} |
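The two recipes in the answer above can be compared numerically: for a slow particle, solving $E = m_0\gamma c^2$ for $v$ should agree with $v = \sqrt{2E_{kin}/m}$. A Python sketch (the electron mass and 1 eV kinetic energy are illustrative choices):

```python
import math

c = 299792458.0                       # speed of light, m/s

def v_nonrelativistic(E_kin, m):
    return math.sqrt(2 * E_kin / m)

def v_relativistic(E_total, m):
    gamma = E_total / (m * c**2)      # from E = gamma m c^2
    return c * math.sqrt(1 - 1 / gamma**2)

m = 9.11e-31                          # electron rest mass, kg
E_kin = 1.602e-19                     # 1 eV of kinetic energy: safely non-relativistic
v1 = v_nonrelativistic(E_kin, m)
v2 = v_relativistic(E_kin + m * c**2, m)   # total energy = kinetic + rest
# In the non-relativistic limit the two expressions agree closely.
```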
$\begin{split} I - T_f(n) = I - \int_a^b p(x)\, dx &= \int_a^b \bigl[f(x)-p(x)\bigr] \, dx \\ &\le (b-a) \max_{x\in[a,b]} |f(x)-p(x)| = O(h^2). \end{split}$
A more thorough statement of the truncation error is known as the Euler–Maclaurin formula,
(5.6.6)$\begin{split}\int_a^b f(x)\, dx &= T_f(n) - \frac{h^2}{12} \left[ f'(b)-f'(a) \right] + \frac{h^4}{720} \left[ f'''(b)-f'''(a) \right] + O(h^6) \\ &= T_f(n) - \sum_{k=1}^\infty \frac{B_{2k}h^{2k}}{(2k)!} \left[ f^{(2k-1)}(b)-f^{(2k-1)}(a) \right],\end{split}$
where the $$B_{2k}$$ are constants known as Bernoulli numbers. Unless we happen to be fortunate enough to have a function with $$f'(b)=f'(a)$$, we should expect truncation error at second order and no better.
Observation 5.6.6
The trapezoid integration formula is second-order accurate.
Demo 5.6.7
We will approximate the integral of the function $$f(x)=e^{\sin 7x}$$ over the interval $$[0,2]$$.
f = x -> exp(sin(7*x));
a = 0; b = 2; | {
"domain": "tobydriscoll.net",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9877587234469587,
"lm_q1q2_score": 0.8599400531121726,
"lm_q2_score": 0.8705972751232809,
"openwebmath_perplexity": 997.3050622322135,
"openwebmath_score": 0.8504312634468079,
"tags": null,
"url": "https://tobydriscoll.net/fnc-julia/localapprox/integration.html"
} |
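The second-order accuracy stated in Observation 5.6.6 can be checked by a convergence experiment: doubling $n$ should cut the trapezoid error by roughly 4. The demo above is Julia; here is an equivalent sketch in Python (the reference grid size is an arbitrary choice):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

f = lambda x: math.exp(math.sin(7 * x))
a, b = 0.0, 2.0
I_ref = trapezoid(f, a, b, 2**16)     # fine-grid stand-in for the exact integral

errors = [abs(trapezoid(f, a, b, n) - I_ref) for n in (100, 200, 400)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
# Second order: each ratio should be close to 4.
```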
Your assumption is correct; if $b\neq 2$, then $8-b^3\neq 0$, and hence the limit would either not exist or be infinite. But you know the limit is $1$.
A shorter way to do the first part is: $\sqrt[3]{8x^3+ax^2}-bx=x(\sqrt[3]{8+\frac{a}{x}}-b)$. The cube root approaches 2 as $x\rightarrow \infty$, so if $b\neq 2$, the product approaches $\pm\infty$ (not $1$ as in the hypothesis).
Wow...your method really is so much shorter and simpler. I'm amazed. But I have a question now: If $8-b^3\neq 0$ then how can we be sure that the limit would either not exist or be infinite. I think I'm missing the concept. – user66807 Apr 23 '13 at 3:08
$8-b^3$ is a nonzero constant in this case, while the denominator approaches 0. This cannot approach 1. If the denominator is always of the same sign, then the fraction will always be of the same sign, and hence approach either $+\infty$ or $-\infty$. However if the denominator changes signs, the limit doesn't exist. – vadim123 Apr 23 '13 at 3:14
Okay, now I have another question. In your method the cube root approaches 2 as $x \to \infty$ but if $b=2$ then the "stuff" in the parenthesis will tend to $0$ and then we will have $\infty * 0$ won't we? – user66807 Apr 23 '13 at 3:22
Correct. The $\infty \cdot 0$ form is indeterminate, so it might equal 1. Any other value for $b$ cannot give a limit of 1. Hence $b$ must be $2$. – vadim123 Apr 23 '13 at 3:23 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787830929849,
"lm_q1q2_score": 0.8390735089739226,
"lm_q2_score": 0.8499711737573762,
"openwebmath_perplexity": 188.6267199063797,
"openwebmath_score": 0.9884944558143616,
"tags": null,
"url": "http://math.stackexchange.com/questions/369958/i-need-to-find-the-value-of-a-b-in-mathbb-r-such-that-the-given-limit-is-tru"
} |
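The conclusion of the exchange above — the limit is finite only for $b = 2$, and the expansion $\sqrt[3]{8x^3+ax^2} \approx 2x + a/12$ then forces $a = 12$ for a limit of 1 — can be checked numerically in Python:

```python
def expr(x, a, b):
    return (8 * x**3 + a * x**2) ** (1.0 / 3.0) - b * x

a, b = 12.0, 2.0
values = [expr(10.0**k, a, b) for k in (3, 5, 7)]   # should approach a/12 = 1

diverging = expr(1e7, 12.0, 1.0)   # with b != 2 the expression grows without bound
```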
quantum-field-theory, conformal-field-theory, ads-cft, representation-theory, higher-spin
Here two things are not being very clear to me,
(1) How does one see the claim that at the IR fixed point the value of $\Delta_{+}$ somehow implies that now $J_{(s)}$ is a conserved current and hence the spin-s field in the boundary is now a gauge field?
(2) Is it also being claimed that at the UV fixed point the value of $\Delta_{-}$ is precisely the same as the dimension of a spin-$s$ gauge field? What theory is this? How do we understand this? I can't wrap my head around the fact that this $J_{s}$ which I thought of as the conserved current spin-s current till now happens to have the same dimension as a gauge field!? On the last question, I am not sure how good you are at the representation theory, but the following fact is true: take so(d,2) (we need so(3,2) for this work), use the conformal base, i.e. Lorentz generators $L_{ab}$, translations $P_a$, conformal boosts $K_a$ and dilatation $D$, $a,b=1..d$. $P$ and $K$ behave as raising/lowering generators with respect to $D$, $[D,P]=+P$, $[D,K]=-K$. Take the vacuum to carry a spin-s representation of the Lorentz algebra and a weight $\Delta$ with respect to $D$, i.e. $|\Delta\rangle^{a_1...a_s}$. When $\Delta=d+s-2$, there is a singular vector, $P_m|\Delta\rangle^{ma_2...a_s}$. This is a standard representation theory: finding raising/lowering operators, defining vacuum, looking for singular vectors. Actually, singular vectors are exactly the conformally-invariant equations one can impose. | {
"domain": "physics.stackexchange",
"id": 9413,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, conformal-field-theory, ads-cft, representation-theory, higher-spin",
"url": null
} |
machine-learning, beginner
Title: Beginning in machine learning I just want to know which books, courses, videos, links, etc. you would recommend to start in machine learning, neural networks, and the languages most commonly used. I want to start from zero, right at the beginning of it all, because I have no experience with these kinds of algorithms, but it's something that calls my attention. Thank you! Coursera is currently offering a course on Machine Learning with collaboration from MIT. Many say it's strongly recommended.
https://www.coursera.org/learn/machine-learning
But I found the below course from Edx more interesting.
https://www.coursera.org/learn/machine-learning
It also provides hands-on experience with Microsoft's exclusive machine learning platform on Azure. | {
"domain": "datascience.stackexchange",
"id": 446,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, beginner",
"url": null
} |
java, algorithm, backtracking
Title: Partition array into 2 sets, such that different of sums in both subsets is minimal
Given a set of \$n\$ integers, divide the set into two subsets of size \$\frac{n}{2}\$
each, such that the difference of the sums of the two subsets is as small
as possible. If \$n\$ is even, then the sizes of the two subsets must be strictly
\$\frac{n}{2}\$ and if \$n\$ is odd, then size of one subset must be \$\frac{n-1}{2}\$ and size of other subset must be \$\frac{n+1}{2}\$.
For example, let a given set be {3, 4, 5, -3, 100, 1, 89, 54, 23, 20},
the size of set is 10. Output for this set should be {4, 100, 1, 23,
20} and {3, 5, -3, 89, 54}. Both output subsets are of size 5 and sum
of elements in both subsets is same (148 and 148). Let us consider
another example where n is odd. Let given set be {23, 45, -34, 12, 0,
98, -99, 4, 189, -1, 4}. The output subsets should be {45, -34, 12,
98, -1} and {23, 0, -99, 4, 189, 4}. The sums of elements in two
subsets are 120 and 121 respectively.
Also verifying complexity:
Time - \$O(2^n)\$
Space - \$O(n)\$
where \$n\$ is the array size.
Looking for code review, optimizations and best practices.
final class DataSet {
private final List<Integer> firstHalf;
private final List<Integer> secondHalf;
public DataSet(List<Integer> firstHalf, List<Integer> secondHalf) {
this.firstHalf = firstHalf;
this.secondHalf = secondHalf;
}
public List<Integer> getFirstHalf() {
return this.firstHalf;
}
public List<Integer> getSecondHalf() {
return this.secondHalf;
} | {
"domain": "codereview.stackexchange",
"id": 8109,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, backtracking",
"url": null
} |
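The exhaustive strategy the entry above asks to review (try every half-sized subset, keep the split with the smallest difference of sums) can be sketched language-agnostically. The Python below is an illustrative sketch, not the poster's Java; it uses `itertools.combinations` in place of explicit recursive backtracking, but it is the same O(2^n)-time search.

```python
from itertools import combinations

def min_diff_partition(nums):
    """Try every subset of size n//2 and keep the split whose two
    halves (sizes n//2 and n - n//2) have the smallest absolute
    difference of sums. Equivalent to the O(2^n) exhaustive search."""
    n, total = len(nums), sum(nums)
    best_diff, best_idx = None, None
    for idx in combinations(range(n), n // 2):
        s = sum(nums[i] for i in idx)
        diff = abs(total - 2 * s)          # |s - (total - s)|
        if best_diff is None or diff < best_diff:
            best_diff, best_idx = diff, set(idx)
    first = [nums[i] for i in best_idx]
    second = [nums[i] for i in range(n) if i not in best_idx]
    return first, second

a, b = min_diff_partition([3, 4, 5, -3, 100, 1, 89, 54, 23, 20])
print(abs(sum(a) - sum(b)))  # 0, matching the 148/148 split in the example
```

For the odd-length example in the problem statement the minimum achievable difference is 1 (120 vs 121), since the total is odd.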
cosmological-constant, higgs
Title: Higgs field requires a large cosmological constant -- does the Zero Point Field balance it? I just read Wolfram's blog post on the Higgs discovery.
Still, there’s another problem. To get the observed particle masses,
the background Higgs field that exists throughout the universe has to
have an incredibly high density of energy and mass. Which one might
expect would have a huge gravitational effect—in fact, enough of an
effect to cause the universe to roll up into a tiny ball. Well, to
avoid this, one has to assume that there’s a parameter (a
“cosmological constant”) built right into the fundamental equations of
gravity that cancels to incredibly high precision the effects of the
energy and mass density associated with the background Higgs field.
Then I recalled that one of the great unsolved problems in physics is why the zero-point energy of the vacuum predicts a very large cosmological constant which is not observed.
The language used to describe these two effects confuses me, but as far as I can tell, Higgs->contraction and ZPF->expansion
Any chance these two effects are in balance? The effective cosmological constant has potential contributions from several sources:
1) Scalar field potentials like from the Higgs field which does not have to have its minimum at V=0.
2) Quantum vacuum fluctuations.
3) A cosmological constant.
Note that all of these contributions can be positive or negative and are potentially many orders of magnitude larger than the observed cosmological constant. So the cosmological constant problem is why all these potentially very large terms almost exactly cancel.
One proposed solution is that on very large scales there is a landscape (multiverse) of different potential minima (perhaps from string theory) and galaxies can only form in those pocket-universes where the different contributions in the landscape give a sufficiently small effective cosmological constant. You can see Weinberg's famous paper on it for more details:
http://rmp.aps.org/abstract/RMP/v61/i1/p1_1 | {
"domain": "physics.stackexchange",
"id": 3951,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmological-constant, higgs",
"url": null
} |
# Calculate probability if events are dependent on each other.
The problem: Three boxes are give: A,B,C. In the box a there are two red marbles. In the box B there are two white marbles and in the box C there are one white and one red marble. One marble is randomly pulled out from one of the boxes. What is the probability that there is red marble left in the box if it's known that the marble that was first pulled out was white?
So, first of all, I am not sure if I get the problem right: I need to calculate the probability that box C is randomly chosen, because otherwise there will be no red marble left (in case the first one pulled out is a white one). So, I calculated the probability, but I am quite sure it is incorrect as it is too easy: First I calculate the probability that a white marble is pulled out: $$3/6=1/2$$ as there are 3 white marbles and 6 marbles in total. Then I calculate the probability that the white marble comes from box C, as that would be the only chance that only a red marble is left in the box: $$1/6$$. Then I subtract the case when both a white marble and the white marble from box C are pulled out, i.e. the intersection: $$P(A)=\frac{1}{2}+\frac{1}{6}-\frac{1}{2\cdot6}=\frac{7}{12}.$$ Please correct me if I have misunderstood the task and/or my solution is wrong.
Your question is a little difficult to understand, but what I understood is that you want to find the probability that, after picking a box at random and taking out a ball, the ball remaining is red, given that the one you took out is white.
The only way in which this could happen is if you picked box $$C$$ and you took a white ball. The probability of that is
$$P(\text{white out},\text{red remaining})=P_{C,white}=1/3*1/2=1/6$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226334351969,
"lm_q1q2_score": 0.8032973809707208,
"lm_q2_score": 0.822189121808099,
"openwebmath_perplexity": 158.05701802752085,
"openwebmath_score": 0.8070855736732483,
"tags": null,
"url": "https://math.stackexchange.com/questions/3392360/calculate-probability-if-events-are-dependent-on-each-other/3392379"
} |
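The sample space in the entry above is small enough to enumerate exactly. A quick sketch with exact fractions; dividing the joint value 1/6 from the answer by P(white) = 1/2 gives the conditional probability 1/3, one step beyond what the quoted answer states explicitly:

```python
from fractions import Fraction

# Box contents from the problem statement.
boxes = {'A': ['red', 'red'], 'B': ['white', 'white'], 'C': ['white', 'red']}

# Pick a box (1/3 each), then one of its two marbles (1/2 each):
# six equally likely (box, marble) outcomes, probability 1/6 apiece.
outcomes = [(b, i) for b in boxes for i in range(2)]

p_white = Fraction(sum(boxes[b][i] == 'white' for b, i in outcomes),
                   len(outcomes))
p_white_and_red_left = Fraction(
    sum(boxes[b][i] == 'white' and boxes[b][1 - i] == 'red'
        for b, i in outcomes),
    len(outcomes))

print(p_white)                         # 1/2
print(p_white_and_red_left)            # 1/6, as in the answer
print(p_white_and_red_left / p_white)  # 1/3 -- the conditional probability
```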
How can I split the r-squared value between the two predictor variables?
• This post provides information on how to partition the $R^{2}$. Jun 4 '13 at 19:08
• This comment can represent, briefly and inadequately, the point of view that this will often prove futile if not dangerous. The success or failure of a model is best regarded as the result of a team effort by the predictors (and their particular functional forms, interaction terms, etc., etc.) and is to be judged as such. Naturally, most of us are interested in relative importance of predictors and it is not nonsense, but attempts to quantify it exactly need to be accompanied with full statements of the technical and philosophical limitations on such an exercise. Jul 2 '13 at 17:37
You can just get the two separate correlations and square them, or run two separate models and get the R^2 values. They will only sum to the total if the predictors are orthogonal.
• By 'orthogonal', do you mean the two predictors should be uncorrelated with each other? Jun 4 '13 at 17:30
• Yes, uncorrelated...it's the only way they sum to the total.
– John
Jun 5 '13 at 4:23
In addition to John's answer, you may wish to obtain the squared semi-partial correlations for each predictor.
• Uncorrelated predictors: If the predictors are orthogonal (i.e., uncorrelated), then the squared semi-partial correlations will be the same as the squared zero-order correlations.
• Correlated predictors: If the predictors are correlated, then the squared semi-partial correlation will represent the unique variance explained by a given predictor. In this case, the sum of squared semi-partial correlations will be less than $R^2$. This remaining explained variance will represent variance explained by more than one variable.
If you are looking for an R function there is spcor() in the ppcor package.
You might also want to consider the broader topic of evaluating variable importance in multiple regression (e.g., see this page about the relaimpo package).
I added the tag to your question. Here is part of its tag wiki: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9425067163548471,
"lm_q1q2_score": 0.8171870155996014,
"lm_q2_score": 0.8670357477770336,
"openwebmath_perplexity": 621.3496573166487,
"openwebmath_score": 0.6708703637123108,
"tags": null,
"url": "https://stats.stackexchange.com/questions/60872/how-to-split-r-squared-between-predictor-variables-in-multiple-regression"
} |
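The squared semi-partial correlations discussed above can also be computed as an R² drop: fit the full model, refit without one predictor, and take the difference as that predictor's unique variance. A small numpy sketch on synthetic data (variable names and the data-generating process here are illustrative assumptions, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)   # deliberately correlated predictors
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def r_squared(y, predictors):
    """Plain OLS R^2 with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / tss

r2_full = r_squared(y, [x1, x2])
sp1 = r2_full - r_squared(y, [x2])   # squared semi-partial corr. of x1
sp2 = r2_full - r_squared(y, [x1])   # squared semi-partial corr. of x2
print(sp1 + sp2 < r2_full)           # True: shared variance is left over
```

With orthogonal predictors the two squared semi-partials would instead sum to the full R², matching the answer above.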
enzymes, enzyme-kinetics, thermodynamics
Title: What does enzymatic equilibrium in % represent? I am studying an enzyme which can catalyse a chemical reaction in both directions. The paper I am looking at is mentioning a thermodynamic equilibrium of 1% in the synthase direction.
What does that mean more intuitively? I am mostly confused by the percent unit as I expected a constant rate without unit...
My guess would be that under these specific concentrations of substrates/products, the enzyme is catalysing 1 synthase reaction and 99 inverse reactions for each 100 reactions it is catalysing. In other words, the equilibrium constant K of the synthase reaction under these conditions would be 0.01.
Is that correct? You are correct in stating the relative rates of the two reactions. What may have you confused is the percent sign in the statement. The percent sign is not a unit in the sense that you are taking it.
What you have in a simple reaction like you are examining are two opposing rates of reaction. The rate of a catalytic reaction, like others, usually depends on the concentrations of at least some of the reactants in the direction under consideration.
At equilibrium, by definition, the concentration of every material involved in the reaction is unvarying. Otherwise, the reaction would not have reached equilibrium yet. What that also means is that the rates of the forward & inverse reactions are equal at equilibrium. So what does the statement signify? It means that thermodynamically, the synthase reaction is the more unfavorable direction. That is, at equilibrium, the product concentration of the synthase direction will be in the minority.
The statement involving the percent sign is just giving the ratio between the [arbitrary] forward direction, here stated as the synthase direction, and the inverse reaction. The percent sign is just a mathematically short way of stating that the number stated is parts per hundred.
The equilibrium constant K is exactly the same ratio, stated as a decimal value rather than an integer per hundred. | {
"domain": "biology.stackexchange",
"id": 8531,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "enzymes, enzyme-kinetics, thermodynamics",
"url": null
} |
blood-circulation, kidney
Title: Why doesn't the glomerulus allow white blood cells to leave? The glomerulus in a nephron is just a ball of capillaries, so why can't it allow the white blood cells to squeeze through the epithelial cells into Bowman's capsule, just like the formation of tissue fluid by filtration in other capillaries? Red blood cells, white blood cells, platelets and proteins with large molecular weight cannot pass through the podocytes and fenestrations in the glomerular capillary, but small molecules like water, salts and sugars are filtered out as part of urine.
As these cells and proteins are too large to cross this filter, they remain in the capillary and create osmotic pressure within the capillary. Bowman's space has an osmotic pressure of approximately zero. So only hydrostatic pressure acts in this state and drives the movement of fluid across the capillary wall.
Via: https://opentextbc.ca/anatomyandphysiology/chapter/25-5-physiology-of-urine-formation/ | {
"domain": "biology.stackexchange",
"id": 10153,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "blood-circulation, kidney",
"url": null
} |
# How to find the probability range/scale for a histogram
I have a histogram showing the distribution of reaction times from $$100$$ trials worth of data. The range in times is measured in ms and ranges from $$70$$ ms to $$420$$ ms. The frequency is displayed on the left y-axis with the max peaking at $$28$$ occurrences in the $$175-210$$ ms bin range. The bin sizes, as you could guess, are in $$35$$ ms sized boxes. I have to add a probability y-axis and a probability density y-axis to the same graph, but I'm not sure how to calculate the probability to see how high in value the axis should go. My lab describes calculating this amount by dividing the scale of the "first axis" by the total number of measurements of my histogram.
I thought it would simply be $$35$$ for the scale of the "first axis", which I'm assuming is the x-axis, divided by $$100$$, the number of trials I conducted, but when I start to calculate the probability density, I have to divide the scale for probability by the interval width.
So basically I have to solve the first to solve the second. The problem is I don't know what is the difference between the scale of the first axis and the interval width.
With the assumption of $$35/100 = 0.35$$ becomes my max for the probability axis, but this doesn't exactly make sense because then the next equation would just be $$0.35/35$$, which means I'm calling the scale of the first axis the same thing as the interval width.
Could anyone provide some clarification on how I should identify what is the first axis, and how do I find its scale? What's the difference from the interval width? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.960951703918909,
"lm_q1q2_score": 0.8349005645087577,
"lm_q2_score": 0.868826769445233,
"openwebmath_perplexity": 572.6011750812833,
"openwebmath_score": 0.8518800735473633,
"tags": null,
"url": "https://math.stackexchange.com/questions/3109556/how-to-find-the-probability-range-scale-for-a-histogram"
} |
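One reading of the lab instructions in the entry above, sketched with numpy (the reaction-time data here is synthetic): the probability axis is the frequency axis divided by the number of trials, so the bin holding 28 counts sits at 0.28; the density axis divides that again by the 35 ms bin width, giving 0.28 / 35 = 0.008 per ms.

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.clip(rng.normal(200, 50, size=100), 70, 420)  # synthetic trials, ms
edges = np.arange(70, 420 + 35, 35)                      # 35 ms wide bins

counts, _ = np.histogram(times, bins=edges)
probability = counts / counts.sum()   # frequency axis -> probability axis
density = probability / 35.0          # probability axis -> density axis (1/ms)

print(probability.sum())  # probabilities over all bins sum to 1.0
```

So the "scale of the first axis" is the frequency (count) scale, not the x-axis, and the "interval width" is the 35 ms bin size.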
drag, differential-equations, free-fall
Title: Can't find coefficient for my IVP In my problem I have to set up an IVP and model free fall with air resistance before the bungee starts being pulled on, with beta being my air-resistance coefficient. I have:
$$ mx'' + \beta x' = f(t) = 0$$
$$ m = \frac{75}{16} \quad and \quad \beta = 0.5$$
Solving for k in my characteristic equation gives me:
$$k^{2} + \frac{8}{75}k = 0$$
$$k_{1}=0 \quad and \quad k_{2}=-\frac{8}{75}$$
Thus my general solution is:
$$x(t) = c_{1}+c_{2}e^{-\frac{8}{75}t}$$
$$x'(t) = -\frac{8}{75}c_{2}e^{-\frac{8}{75}t}$$
But trying to find either c with my initial conditions of x'(0) = 0 and x(0) = 0 (it equals zero as I want down to be considered positive in this scenario) I keep getting that either both c's are equal to zero or just the second c is, which means the velocity function doesn't work. I'm a bit lost on how to proceed. If you think about the initial conditions in terms of physics, and connect that to your specific DE, you will see that when the velocity is zero ($x'(0)=0$) your acceleration is zero as well: $$x''=-\frac{\beta}{m}x'.$$
If the acceleration and the velocity are zero, the system won't change position, based on your DE.
You need a new DE, or a new initial velocity. | {
"domain": "physics.stackexchange",
"id": 57744,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "drag, differential-equations, free-fall",
"url": null
} |
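The answer's point above — that x(0) = x'(0) = 0 forces the trivial solution of the homogeneous DE — is easy to check symbolically, and adding gravity as a forcing term is one way to get the "new DE" it suggests (down taken positive; the value of g here is an assumption for illustration):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
x = sp.Function('x')
m, beta, g = sp.Rational(75, 16), sp.Rational(1, 2), sp.Rational(981, 100)

# Homogeneous case from the question: only x(t) = 0 satisfies both ICs.
trivial = sp.dsolve(m * x(t).diff(t, 2) + beta * x(t).diff(t), x(t),
                    ics={x(0): 0, x(t).diff(t).subs(t, 0): 0})
print(trivial)  # Eq(x(t), 0)

# With gravity as the forcing term the fall is nontrivial and the
# velocity approaches the terminal value m*g/beta.
falling = sp.dsolve(m * x(t).diff(t, 2) + beta * x(t).diff(t) - m * g, x(t),
                    ics={x(0): 0, x(t).diff(t).subs(t, 0): 0})
v_terminal = sp.limit(sp.diff(falling.rhs, t), t, sp.oo)
print(v_terminal == m * g / beta)  # True
```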
time-series, regression, dataset
There are plenty of other norms you could consider, such as the operator norms. Interestingly enough it can be proved that any two matrix norms are equivalent up to a constant scaling factor, so your choice of norm isn't particularly important, and minimising the "sum of squares" under any norm should be more or less the same. | {
"domain": "datascience.stackexchange",
"id": 9071,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "time-series, regression, dataset",
"url": null
} |
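The equivalence claim in the answer above is easy to probe numerically. For instance, the Frobenius and spectral (operator 2-) norms always satisfy ‖A‖₂ ≤ ‖A‖_F ≤ sqrt(min(m, n))·‖A‖₂, a constant that depends only on the matrix shape — a sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# Spectral and Frobenius norms bound each other by a shape-dependent
# constant, illustrating the equivalence of matrix norms.
for _ in range(100):
    A = rng.normal(size=(5, 7))
    spectral = np.linalg.norm(A, 2)
    frobenius = np.linalg.norm(A, 'fro')
    assert spectral <= frobenius <= np.sqrt(min(A.shape)) * spectral + 1e-9
print("bounds hold")
```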
Title: Volume of a torus. Find, by Cavalieri's second principle, the volume of a torus, or anchor ring, formed by revolving a circle of radius r about a line in the plane of the circle at a distance greater than r from the center of the circle. A torus is usually pictured as the solid generated by a circular cross-section rotated on an axis in the same plane: a donut-shaped solid generated by rotating the circle of radius $$r$$ centered at ($$R$$, 0) about the $$y$$-axis. With R > r it is a ring torus. Formula: Surface Area = 4π²Rr, Volume = 2π²Rr², where R = major radius and r = minor radius. A torus is just a cylinder with its ends joined, and the volume of a cylinder of radius $r$ and length $d$ is just $\pi r^2 d$, so all we need is the length of the cylinder: multiply the cross-sectional area by the distance 2πR travelled by its centre and you get the torus volume. Equivalently, the disk x^{2}+y^{2} \\leq a^{2} revolved about the line x=b (b>a) generates a torus. A g-holed toroid can be seen as approximating the surface of a torus having a topological genus, g, of 1 or greater. | {
"domain": "barnabasinstitute.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9719924793940119,
"lm_q1q2_score": 0.8013707982732136,
"lm_q2_score": 0.8244619328462579,
"openwebmath_perplexity": 861.7796938378502,
"openwebmath_score": 0.77825528383255,
"tags": null,
"url": "http://barnabasinstitute.org/qtno0/7e9ae4-volume-of-a-torus"
} |
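The Volume = 2π²Rr² formula quoted in the entry above can be cross-checked numerically with the washer method (the example radii below are assumptions for illustration):

```python
import numpy as np

def torus_volume_numeric(R, r, n=200_001):
    """Washer method: slice the torus horizontally; each slice is an
    annulus with outer radius R + sqrt(r^2 - y^2) and inner radius
    R - sqrt(r^2 - y^2), integrated over the height with the trapezoid rule."""
    y = np.linspace(-r, r, n)
    half_chord = np.sqrt(np.clip(r * r - y * y, 0.0, None))
    washers = np.pi * ((R + half_chord) ** 2 - (R - half_chord) ** 2)
    dy = y[1] - y[0]
    return float(np.sum((washers[:-1] + washers[1:]) * dy / 2))

R, r = 3.0, 1.0
exact = 2 * np.pi ** 2 * R * r ** 2   # Pappus / the formula in the entry
print(abs(torus_volume_numeric(R, r) - exact) < 1e-3)  # True
```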
c++, beginner, algorithm, vectors, stl
constexpr void allocate_and_copy_construct(std::size_t capacity, std::size_t size, const Type& value = Type()) {
allocate(capacity);
construct(size, value);
}
constexpr void construct_init_list(std::initializer_list<Type> values) {
m_size = values.size();
for (size_type index{ 0 }; const auto & currentValue : values)
std::allocator_traits<allocator_type>::construct(m_allocator, m_vector + (index++), currentValue);
}
constexpr void deallocate_and_destruct(std::size_t capacity, std::size_t size) {
destruct(size);
deallocate(capacity);
}
constexpr void deallocate_destruct_keep_size_and_capacity(std::size_t size, std::size_t capacity) {
for (std::size_t index{ 0 }; index < m_size; ++index)
std::allocator_traits<allocator_type>::destroy(m_allocator, m_vector + index);
std::allocator_traits<allocator_type>::deallocate(m_allocator, m_vector, m_capacity);
m_capacity = capacity;
m_size = size;
}
constexpr void uninitialized_alloc_copy(const vector& other) {
m_size = other.m_size;
for (size_type index{ 0 }; index < m_size; ++index) {
std::allocator_traits<allocator_type>::construct(m_allocator, m_vector + index, *(other.m_vector + index));
}
} | {
"domain": "codereview.stackexchange",
"id": 40496,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, algorithm, vectors, stl",
"url": null
} |
stereochemistry, nucleophilic-substitution
Typically, the presence of a stereogenic centre eventually leads to optical activity of the material which then is either dextrorotatory or laevorotatory. However for this question, your focus should be the correct assignment of (R), or (S) in the context of the underlying mechanism.
The effect eventually recorded in the lab depends on multiple parameters; the presence of an additional stereogenic centre does not need to be synergistic. This leads to circular dichroism spectroscopy, which is indeed used for structure elucidation. Among the publications on this topic in the Journal of Chemical Education that are easier to understand are the ones by Urbach, Thomson, and Andrews.
References:
Andrews S. S.; Tretton J. Physical Principles of Circular Dichroism. J. Chem. Educ. 2020, 97, 4370-4376, doi 10.1021/acs.jchemed.0c01061.
Urbach, A. R. Circular Dichroism Spectroscopy in the Undergraduate Curriculum. J. Chem. Educ. 2010, 87, 891–893, doi 10.1021/ed1005954.
Thomson P. I. T. Is That a Polarimeter in Your Pocket? A Zero-Cost, Technology-Enabled Demonstration of Optical Rotation. J. Chem. Educ. 2018, 95, 837–841, doi 10.1021/acs.jchemed.7b00767. | {
"domain": "chemistry.stackexchange",
"id": 16221,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "stereochemistry, nucleophilic-substitution",
"url": null
} |
thermodynamics, fluid-dynamics, rocket-science
Where $d$ is the hydraulic diameter of the coolant tubes running axially along the rocket nozzle, $\dot{m}_C$ is the mass flow rate of the coolant, $\rho$ denotes densities, $\mu$ is the dynamic (absolute) viscosity of the coolant/propellant, $A_C$ is the total cross sectional area of all the coolant tubes in the rocket nozzle, $K$ refers to the thermal conductivity of the propellant/coolant, $Pr$ is the Prandtl number, and $c_2$ / $c_3$ are correction coefficients that aren't super relevant to my upcoming question. The subscripts $C$ refer to bulk coolant/propellant properties, $CF$ to properties of the coolant/propellant "film" (averaged boundary layer conditions), and $CS$ refers to what the paper calls "static" properties, which I assume to mean stagnation properties.
The question I've got is this: how would one go about determining "film" properties (those labelled with the $CF$ subscript)?
The following source, along with my general understanding of fluid mechanics indicates that there is not much variance of pressure over the boundary layer for turbulent gaseous flow (which I believe would be a reasonable first order approximation of the conditions cited by the paper within the original nozzle-design report which states that the maximum expected Mach number within the coolant tubes is 0.5).
https://aip-scitation-org.mines.idm.oclc.org/doi/abs/10.1063/1.1762413
Any suggestions on how I could evaluate $\rho_{CF}$, $K_{CF}$, $\mu_{CF}$, and $Pr_{CF}$? I'm currently assuming that there's not much difference between bulk and film properties, but I doubt the original author of the NASA study would have included those film terms if they were not significant. For the record, I'm most curious about the $\rho_{CF}$ and possibly $\mu_{CF}$. The Prandtl number and the thermal conductivity of the fluid I'm reasonably comfortable assuming as constant to start out with.
Thanks! | {
"domain": "physics.stackexchange",
"id": 68349,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, fluid-dynamics, rocket-science",
"url": null
} |
quantum-mechanics, quantum-entanglement
Title: Why can't waves and particles be used for quantum entanglement communication? I've heard the argument before that you can't know your results match before comparing notes, but I've also heard that the entangled particle is a wave or particle based on whether the other entangled particle is being measured in its own double slit experiment.
If we can make the entangled particle a wave, how come we can't, with enough particles, make a binary code of wave/particle/wave wave/particle to send messages? The wave/particle duality is a duality that needs an accumulation of measurements with the same boundary conditions to be manifested experimentally.
The particle definition is extended from the classical mechanics , there is a specific (x,y,z,t) describing the particle. The wave part comes from the quantum mechanical wave function that predicts the probability to find the particle at (x,y,z,t) .
This double slit experiment referred here with electrons one at a time can give an intuition :
Each electron leaves a particle-like footprint on the screen, an (x,y) dot, seemingly random. It is the accumulation that gives the wave interference.
If we can make the entangled particle a wave or not, how come we can't, with enough particles, make a binary code of wave/particle/wave wave/particle to send messages?
italics mine
Because you cannot control where the electron will end up on the screen, it is controlled by a probability function. So even in this simple set up, a message cannot be encoded; messages need direct control. | {
"domain": "physics.stackexchange",
"id": 83486,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-entanglement",
"url": null
} |
electromagnetism, electric-fields, charge, stress-energy-momentum-tensor
Title: About the derivation of the Maxwell stress tensor In the derivation of the Maxwell stress tensor, we see, like here (or in Jackson's Classical Electrodynamics), the charge density $\rho$ upon which $\textbf{E}$ and $\textbf{B}$ are acting being replaced by $\nabla \cdot \textbf{E}$:
$1$. Starting with the Lorentz force law
$$\mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B})$$
the force per unit volume is
$$\mathbf{f} = \rho\mathbf{E} + \mathbf{J} \times \mathbf{B}$$
$2$. Next, $ρ$ and $J$ can be replaced by the fields $\textbf{E}$ and $\mathbf{B}$, using Gauss's law and Ampère's circuital law:
$$\mathbf{f} =
\epsilon_0\left(\color{red}{\boldsymbol{\nabla} \cdot \mathbf{E}}\right)\mathbf{E} +
\frac{1}{\mu_0}\left(\boldsymbol{\nabla} \times \mathbf{B}\right) \times \mathbf{B} -
\epsilon_0\frac{\partial\mathbf{E}}{\partial t} \times \mathbf{B}\,
$$ | {
"domain": "physics.stackexchange",
"id": 67529,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, electric-fields, charge, stress-energy-momentum-tensor",
"url": null
} |
quantum-gate, bloch-sphere
Similar to the $z$-basis case, when we use points on the Bloch sphere to describe qubit states, an arbitrary qubit state in the $y$ basis can be written as:
$$|\psi\rangle=\cos\frac{\theta}{2}|+\rangle_{y}+\sin\frac{\theta}{2}\exp(i\phi)|-\rangle_{y}$$
Then I apply $R'_{y}(\delta)$ to $|\psi\rangle$ written this way:
$$R'_{y}(\delta)|\psi\rangle=\exp(i\frac{\delta}{2})[\cos\frac{\theta}{2}|+\rangle_{y}+\sin\frac{\theta}{2}\exp[i(\phi-\delta)]|-\rangle_{y}]$$
Here is my question: if it's counterclockwise rotation, it should be ($\phi+\delta$) above. Why it's $(\phi-\delta$)?
Is it because I wrongly use the Bloch interpretation with y basis?
Thank you for your help! It should be $S^{-1}R_yS$ instead of $SR_yS^{-1}$. To change from one basis into another, you found the right matrix $S$ but applied it in the wrong order. For a pedagogical treatment of changing basis (not only in quantum mechanics), you may refer to this link. | {
"domain": "quantumcomputing.stackexchange",
"id": 3372,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-gate, bloch-sphere",
"url": null
} |
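Under the standard convention $R_y(\delta)=\exp(-i\delta Y/2)$ the phases picked up by the $y$-eigenstates can be checked numerically: the relative phase of the $|-\rangle_y$ component grows by $+\delta$, so a rotation producing $\phi-\delta$ (as the question's $R'_y$ does) corresponds to $\exp(+i\delta Y/2)$ — consistent with the answer's point that the conjugation was done in the wrong order. A numpy sketch (this framing is my interpretation, not from the thread):

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
delta = 0.7

# Build R_y(delta) = exp(-i*delta*Y/2) from the eigendecomposition of Y.
vals, vecs = np.linalg.eigh(Y)
R_y = vecs @ np.diag(np.exp(-1j * delta * vals / 2)) @ vecs.conj().T

plus_y = np.array([1, 1j]) / np.sqrt(2)    # |+>_y, eigenvalue +1 of Y
minus_y = np.array([1, -1j]) / np.sqrt(2)  # |->_y, eigenvalue -1 of Y

# |+>_y picks up e^{-i delta/2}, |->_y picks up e^{+i delta/2}, so the
# relative phase of the |->_y component changes by +delta (phi -> phi + delta).
assert np.allclose(R_y @ plus_y, np.exp(-1j * delta / 2) * plus_y)
assert np.allclose(R_y @ minus_y, np.exp(+1j * delta / 2) * minus_y)
print("standard R_y sends phi -> phi + delta")
```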
computer-architecture, cpu-cache
Title: How to calculate the number of tag, index and offset bits of different caches? Specifically:
1) A direct-mapped cache with 4096 blocks/lines in which each block has 8 32-bit words. How many bits are needed for the tag and index fields, assuming a 32-bit address?
2) Same question as 1) but for fully associative cache?
Correct me if I'm wrong, is it:
tag bits = address bit length - exponent of index - exponent of offset?
[Is the offset = 3 due to 2^3 = 8 or is it 5 from 2^5 = 32?] The question as stated is not quite answerable. A word has been defined to be 32-bits. We need to know whether the system is "byte-addressable" (you can access an 8-bit chunk of data) or "word-addressable" (smallest accessible chunk is 32-bits) or even "half-word addressable" (the smallest chunk of data you can access is 16-bits.) You need to know this to know what the lowest-order bit of an address is telling you.
Then you work from the bottom up. Let's assume the system is byte addressable.
Then each cache block contains 8 words*(4 bytes/word) = 32 = 2^5 bytes, so the offset is 5 bits.
The index for a direct-mapped cache selects among the blocks in the cache, so its width is log2 of the number of blocks (12 bits in this case, because 2^12 = 4096).
Then the tag is all the bits that are left, as you have indicated.
As the cache gets more associative but stays the same size there are fewer index bits and more tag bits. | {
"domain": "cs.stackexchange",
"id": 12595,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computer-architecture, cpu-cache",
"url": null
} |
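The arithmetic in the answer above generalizes to a small helper. This is a sketch that assumes byte addressability, as the answer does; treating a fully associative cache as one set of `num_blocks` ways makes the index width zero:

```python
import math

def cache_fields(address_bits, num_blocks, words_per_block,
                 bytes_per_word=4, associativity=1):
    """Return (tag, index, offset) bit counts for a byte-addressable cache.
    associativity=1 is direct-mapped; associativity=num_blocks is fully
    associative (no index bits remain)."""
    offset_bits = int(math.log2(words_per_block * bytes_per_word))
    index_bits = int(math.log2(num_blocks // associativity))  # log2(#sets)
    tag_bits = address_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

print(cache_fields(32, 4096, 8))                      # (15, 12, 5) direct-mapped
print(cache_fields(32, 4096, 8, associativity=4096))  # (27, 0, 5) fully assoc.
```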
visible-light, scattering, thermal-radiation, atmospheric-science, sun
Title: Why is the sky blue and the sun yellow? The blue color of light of the sky is due to Rayleigh scattering.
But the sun itself appears yellow in color whereas the scattered sunlight itself appears blue.
Why does this happen?
Should the sun then not also appear blue in color?
Rayleigh scattering is very weak so the vast majority of the light from the Sun passes through the atmosphere without being scattered. That means when we look at the Sun we see the 99% of the light that isn't scattered, and that light has the original 5,700K colour spectrum.
The only light we see directly from the Sun is the light that travels in a straight line from the Sun to our eye - that's the horizontal yellow line in this diagram. If you consider the upper yellow line we can't see this light ray because it misses our eye. However the Rayleigh scattering due to the air scatters in all directions, so some of this scattered light reaches our eye. That means when we look away from the Sun we only see the scattered light and not the direct sunlight.
The Rayleigh scattering depends on the wavelength and blue light is scattered most. That means the light we see coming from directions away from the Sun has a spectrum weighted towards the blue. NB it isn't pure blue light. It's a spectrum of light enriched in blue compared to the direct sunlight. A spectrum of the scattered light from the blue sky is given in this answer:
(image from Wikipedia)
And that's why the Sun looks yellow and the sky looks blue. | {
"domain": "physics.stackexchange",
"id": 95939,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "visible-light, scattering, thermal-radiation, atmospheric-science, sun",
"url": null
} |
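The wavelength dependence behind the blue-sky answer above can be made concrete: Rayleigh scattering intensity scales as $1/\lambda^4$, so with representative (assumed) wavelengths of 450 nm for blue and 700 nm for red:

```python
# Rayleigh scattering intensity goes as 1/lambda^4, so shorter wavelengths
# are scattered more strongly. The two wavelengths are illustrative choices.
blue_nm, red_nm = 450.0, 700.0
ratio = (red_nm / blue_nm) ** 4
print(f"blue is scattered ~{ratio:.1f}x more than red")  # ~5.9x
```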
java, algorithm, iterator
// fetch the first elem
nextValid = moveToNext();
}
...
An implementation that implicitly represents the stack by a linked List of FlatIterators can have more exact typing, if necessary.
The Iterator interface mandates that the remove method may be unimplemented, in which case it must throw an UnsupportedOperationException (see the docs). Of course, an implementation that simply delegates to the current iterator would be quite simple.
Your class does not consider that the leaf nodes in your input may be Iterable, but should not be iterated. My solution has an explicit type T, so that the recursive flattening can stop at values of these types. The moveToNext should probably look like:
// proceeds to assign the next item to "next"
// returns true if there was a next element
// returns false if no further items are available.
private boolean moveToNext() {
while(!iteratorStack.empty()) {
// remove depleted iterators
if (!iteratorStack.peek().hasNext()) {
iteratorStack.pop();
continue;
}
// now an iterator sits on top
// consume next elem from topmost iterator
final Object peek = iteratorStack.peek().next();
// handle it according to type:
// `peek instanceof T` does not compile under erasure; assume a Class<T> leafType field
if (leafType.isInstance(peek)) {
// yay, we found the next elem!
next = leafType.cast(peek);
return true;
}
else if (peek instanceof Iterable) {
iteratorStack.push(((Iterable<?>) peek).iterator());
continue;
}
// array compatibility hacks would go here
else {
throw new IllegalArgumentException("unsupported element: " + peek);
}
}
// no further elems are available, all iterators are depleted
return false;
}
Notice the absence of recursion: it would improve neither readability, performance, nor elegance when working with a stack.
And hasNext() is simply:
public boolean hasNext() {
// if current element isn't valid, move up
if (!nextValid) nextValid = moveToNext();
return nextValid;
} | {
"domain": "codereview.stackexchange",
"id": 4768,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, iterator",
"url": null
} |
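For comparison, the flattening logic reviewed above can be sketched compactly in Python, where a generator replaces the explicit iterator stack (the names here are illustrative, not from the post):

```python
# Yield leaf values of type `leaf_type`, descend into any other iterable,
# and reject everything else -- the same contract as the Java moveToNext.
def flatten(node, leaf_type):
    if isinstance(node, leaf_type):
        yield node
    elif hasattr(node, "__iter__"):
        for child in node:
            yield from flatten(child, leaf_type)
    else:
        raise TypeError(f"unsupported element: {node!r}")

print(list(flatten([1, [2, [3, 4]], 5], int)))  # [1, 2, 3, 4, 5]
```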
newtonian-mechanics, centripetal-force
To move the device, no further energy would be required, yet we would neither violate the conservation of momentum, nor the conservation of energy, while the center of mass would remain at the same place for this closed system, at all times.
Sounds too good to be true, so what did I get wrong? Or would this "UFO" actually work? Your question is very confusing, so I will first attempt to answer the spirit of your question with a cleaner scenario.
It is possible to "move" from one place to another if there is minimal friction. You can do so with yourself, a large box, and a bag full of baseballs. But it isn't as cool as it sounds (that is why I had to put quotes around move).
If you sit at one end of the box with your baseballs, then in your frame everything is at rest.
If you throw a baseball at the back wall then for a bit both you and the box move in the forward direction and the box will not slow down until after the ball hits the other side. After which point everything can come to rest again. But meanwhile you and the box have moved! And there is no propellant because the baseball stays inside the box.
However the baseball moved in the opposite direction. Eventually you run out of baseballs. And the center of energy doesn't move, not ever, not for an instant, not even a little bit.
That last result is a general theorem: in the absence of gravity or other external forces, the total momentum stays constant and the center of energy moves at a constant velocity. So if you started with everything at rest, the center of energy can't move.
It's called the center of energy theorem.
If you have two people, one at each end, and each with a bag of baseballs, you can shuttle back and forth. If that's what you want. Nothing deep.
So now we can look at your setup. Originally everything was at rest. So the center of energy was stationary. So it must stay stationary.
Therefore as you activate your motors some energy must go one way for some energy to go the other way. And there is simply no way to end up with the center of energy in a new location. In the case with the box, the box could move one way because the baseballs move the other way. | {
"domain": "physics.stackexchange",
"id": 22147,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, centripetal-force",
"url": null
} |
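The box-and-baseballs argument above is easy to check numerically; all masses, speeds, and times below are made-up illustrative values:

```python
# Momentum conservation fixes the box's recoil speed while the ball is in
# flight, and the centre of mass of (ball + box) never moves.
m_ball, m_box = 0.15, 50.0           # kg (person included in m_box)
v_ball = 10.0                        # m/s, thrown toward the back wall
v_box = -m_ball * v_ball / m_box     # recoil keeps total momentum zero
flight_time = 2.0                    # s until the ball hits the far wall

x_ball = v_ball * flight_time
x_box = v_box * flight_time
com = (m_ball * x_ball + m_box * x_box) / (m_ball + m_box)
print(f"box recoil {v_box:.3f} m/s, centre of mass moved {com:.2e} m")
```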
gene-expression, genomics, gene-regulation
Title: Percentage of genome devoted to regulating gene expression Recently I've been studying the p53 tumor suppressor gene as a model for regulation of gene expression. It's amazing how many different post-translational modifications are known to regulate p53 activity, and how many different factors are involved in this regulation.
It is postulated that there are between 20,000 and 30,000 genes in the human genome. Is there an estimate for the percentage of these genes whose primary function is related to regulation of gene expression? Okay, I'll take this out of the comments and put in an answer for all of us to work on.
To directly answer your question:
"Is there an estimate for the percentage of these genes whose primary
function is related to regulation of gene expression?"
It depends on how you define "gene expression." And what cellular processes you want to include in that definition.
Larry's answer is the usual standard response, especially for people (such as me, ha) that have spent significant time studying transcription factors. About 1% of human genes have DNA binding domains and are thought to be directly involved in regulating the transcription of genes into mRNA - these are transcription factors (TFs). Closely related are cofactors, which regulate expression by binding to TFs or RNA polymerase machinery, but not directly to DNA.
Regulation of gene expression could also include modifications at the chromatin level - here you would include chromatin remodelers, histone acetylases, deacetylases, methylases and the histones themselves.
mRNA transcripts can also be regulated by miRNAs: post-transcriptional regulators that bind to complementary sequences on target mRNAs, which leads to translational repression or target degradation and gene silencing. So you would also include the proteins involved in this process, most notably the RNA-induced silencing complex, which includes Dicer.
There are also proteins involved in mRNA stabilization and turnover, which affects gene expression.
I'm not sure if anyone has added up all of the genes above to determine an overall percentage of the genome.
If you include in your definition of "gene regulation" post-transcriptional modification, folding chaperones, intra-cellular transport, extra-cellular and intra-cellular signaling, and so on - then Shigeta is right, you begin approaching 100%. In the most basic sense, life itself is gene regulation. | {
"domain": "biology.stackexchange",
"id": 204,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gene-expression, genomics, gene-regulation",
"url": null
} |
enthalpy, precipitation
Title: Are precipitations exothermic and/or endothermic? Should be an easy one. I'm fumbling a concept. I've read precipitations are exothermic. Is this accurate? Why would there be no endothermic precipitation reactions? No, absolutely not.
Precipitation reactions can be either endothermic or exothermic.
Table 1. Thermodynamic data of precipitation for some salts
\begin{array}{cccc}
\hline
\text{Salt} & \Delta G_\mathrm{ppt}^\circ & \Delta H_\mathrm{ppt}^\circ & -T\Delta S_\mathrm{ppt}^\circ\ (\pu{25 °C}) \\
\hline
\ce{Be(OH)2} & -121 & -31 & -90 \\
\ce{Mg(OH)2} & -63 & -3 & -61 \\
\ce{Ca(OH)2} & -28 & 16 & -44 \\
\ce{Li2CO3} & -17 & 18 & -34 \\
\ce{MgCO3} & -45 & 28 & -74 \\
\ce{CaCO3} & -48 & 10 & -57 \\
\ce{SrCO3} & -52 & 3 & -56 \\
\ce{BaCO3} & -47 & -4 & -43 \\
\ce{FePO4} & -102 & 78 & -180 \\
\hline
\end{array}
It is more than clear that all precipitations are not exothermic.
Also definitely noticeable is the fact that $\Delta S$ is a positive quantity.
But that is unexpected, as an increase in the disorder of the system upon precipitation may seem counterintuitive to many.
What is often overlooked is that several other factors are at work.
Paving the way
Ions can be of two types:
Electrostatic structure breakers
Electrostatic structure makers | {
"domain": "chemistry.stackexchange",
"id": 11771,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "enthalpy, precipitation",
"url": null
} |
organic-chemistry, redox, carbonyl-compounds
activated silyl carboxylates
Corriu, R. J. P.; Lanneau G. F.; Perrot M. The one-pot conversion of carboxylic acids to aldehydes via activated silyl carboxylates. Tetrahedron Letters 1987, 28 (34), 3941–3944.
titanium-catalyzed Grignard
Sato, F.; Jinbo, T.; Sato, M. The Reduction of Carboxylic Acids to Aldehydes by Dichlorobis[π-cyclopentadienyl]titanium-Catalyzed Grignard Reactions. Synthesis 1981, 1981 (11), 871
N,N-dimethylchloromethyleniminium chloride and lithium tri-t-butoxyaluminum hydride
Fujisawa, T.; Mori, T.; Tsuge, S.; Sato, T. Direct and chemoselective conversion of carboxylic acids into aldehydes. Tetrahedron Letters 1983, 24 (14), 1543–1546.
$o\text{-}\ce{HSC6H4OH, POCl3, HClO4/LiAlH4/H2O, HgCl2}$
Costa, L.; Degani, I.; Fochi, R.; Tundo, P. Pentaatomic heteroaromatic cations. Note III. A Convenient Synthesis of Aldehydes from Carboxylic Acids via 2-Substituted 1,3-Benzoxathiolium Perchlorates. J. Heterocyclic Chem. 1974, 11, 943–948.
$o\text{-}\ce{(NH2)C6H4, PPA/NaOEt/MeI/NaBH4}$ or $\ce{LiAlH4/H3O+}$
Craig, J. C.; Ekwurire, N. N.; Fu, C. C.; Walker, K. A. M. Conversion of Carboxylic Acids into Aldehydes and their C-1 or C-2 Deuteriated Derivatives. Synthesis 1981, 1981 (4), 303–305. | {
"domain": "chemistry.stackexchange",
"id": 5309,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, redox, carbonyl-compounds",
"url": null
} |
python, numpy, simulation, physics, matplotlib
# (excerpt - assumes `import time as t` and `import numpy as np` earlier in the file)
# Initialize positions of planets
for planet in planets:
planet.getRelPos(startTime)
planet.log()
start = t.time()
for i, time in enumerate(np.arange(startTime, endTime, step)):
# Every plan_step steps, update the position estimation
if (i % plan_step == 0):
for planet in planets:
planet.getRelPos(time)
planet.log()
# Update craft_vol
for planet in planets:
ship.force_g(planet.mass, planet.pos[0], planet.pos[1], planet.pos[2])
ship.update()
# Log the position of the ship every 1000 steps
if (i % 1000) == 0:
# Append the position to the lists
ship.log() | {
"domain": "codereview.stackexchange",
"id": 17111,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, numpy, simulation, physics, matplotlib",
"url": null
} |
eclipse
Originally posted by raghav.rao32 with karma: 16 on 2015-06-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by hannjaminbutton on 2015-06-12:
Yes, I clicked on the triangle. For my version it only says Code Style, Documentation, File Types, Indexer and Language Mapping.
I also included these paths but its still not working.
But thanks for your answer, I will get a more recent version of eclipse!
Comment by Reiner on 2015-06-16:
http://answers.ros.org/question/52013/catkin-and-eclipse/
The top rated answer will help you.
I battled with the same problem for a week. sourcing seems to be the answer | {
"domain": "robotics.stackexchange",
"id": 21883,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "eclipse",
"url": null
} |
I'll start with the second question, where the link provided by dimebucker91 should be useful. Generally, if you have n students, and want to create a group of k, the general formula is:
$$\frac{n!}{k!(n-k)!}$$
So here, you have 24 students and want to create a group of four, so you are looking for the solution to:
$$\frac{24!}{4!(20)!}$$
The answer to the first question is an extension to the answer of the second question. If you have 24 students, and pick any given four of them to create the first group, you have the above-mentioned result. Then you do the same thing for the next four students, and pick another four from your remaining 16 students, and so on, so your result looks something like:
$$\frac{24!}{4!\,(20)!} \cdot \frac{20!}{4!\,(16)!} \cdot \frac{16!}{4!\,(12)!} \cdots$$
We could multiply these out, but we might also note that many of these terms cancel, so the final result is actually just:
$$\frac{24!}{(4!)^6}$$
which is about $3.246\times10^{15}$ combinations.
The number of different groups of four you can get from 24 students is given by $\binom{24}4=\frac{24!}{(24-4)!\cdot 4!}=10,626$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9927672368385032,
"lm_q1q2_score": 0.8116611580532119,
"lm_q2_score": 0.8175744806385543,
"openwebmath_perplexity": 167.31210295965025,
"openwebmath_score": 0.44332900643348694,
"tags": null,
"url": "https://math.stackexchange.com/questions/1403945/grouping-kids-in-groups-of-4"
} |
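Both counts above can be verified with exact integer arithmetic:

```python
# math.comb(n, k) is the binomial coefficient n-choose-k; integer division
# keeps the large partition count exact.
from math import comb, factorial

groups_of_four = comb(24, 4)
print(groups_of_four)  # 10626

partitions = factorial(24) // factorial(4) ** 6
print(partitions)      # 3246670537110000, about 3.246e15
```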
classical-mechanics, rotational-dynamics, reference-frames, inertial-frames
$$R^T\,d\vec{G}_s=d\vec{G}_b+\tilde{\omega}\,\vec{G}_b\,dt\tag 3$$
$$d(\vec{G})_s=d(\vec{G})_b+d(\vec{G})_{\text{rot}}$$
thus:
$$d(\vec{G})_s=R^T\,d\vec{G}_s$$
$$d(\vec{G})_{\text{rot}}=\tilde{\omega}\,\vec{{G}}_b\,dt=\left(A(\vec{\phi})\,d\vec{\phi}\right)\times \vec{G}_b$$
comment
$\tilde{\omega}\,\vec{G}=\vec{\omega}\,\times\,\vec{G}$
with $\tilde{\omega}$ the skew-symmetric matrix
$\begin{bmatrix}
0 & -\omega_z & \omega_y \\
\omega_z & 0 & -\omega_x \\
-\omega_y & \omega_x & 0 \\
\end{bmatrix}$
From:
$\dot{R}=R(\vec{\phi})\,\tilde{\omega}\quad$
we get
$\quad\vec{\omega}=A(\vec{\phi})\,\vec{\dot{\phi}}$ | {
"domain": "physics.stackexchange",
"id": 57019,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, rotational-dynamics, reference-frames, inertial-frames",
"url": null
} |
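The identity $\tilde{\omega}\,\vec{G}=\vec{\omega}\times\vec{G}$ quoted above is easy to verify numerically (the sample vectors are arbitrary):

```python
# Build the skew-symmetric matrix from the components of omega, exactly as
# written in the text, and compare its action with numpy's cross product.
import numpy as np

def skew(w):
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

omega = np.array([0.3, -1.2, 2.5])
G = np.array([1.0, 0.5, -0.7])
assert np.allclose(skew(omega) @ G, np.cross(omega, G))
print("skew(omega) @ G matches omega x G")
```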
# 2007 USAMO Problems/Problem 2
## Problem
(Gregory Galperin) A square grid on the Euclidean plane consists of all points $(m,n)$, where $m$ and $n$ are integers. Is it possible to cover all grid points by an infinite family of discs with non-overlapping interiors if each disc in the family has radius at least 5?
## Solutions
### Solution 1
Lemma. Among 3 tangent circles with radius greater than or equal to 5, one can always fit a circle with radius greater than $\frac{1}{\sqrt{2}}$ between those 3 circles.
Proof. Descartes' Circle Theorem states that if $a$ is the curvature of a circle ($a=\frac 1{r}$, positive for externally tangent, negative for internally tangent), then we have that $$(a+b+c+d)^2=2(a^2+b^2+c^2+d^2)$$ Solving for $a$, we get $$a=b+c+d+2 \sqrt{bc+cd+db}$$ Take the positive root, as the negative root corresponds to internally tangent circle.
Now clearly, we have $b+c+d \le \frac 35$, and $bc+cd+db\le \frac 3{25}$. Summing/square root/multiplying appropriately shows that $a \le \frac{3 + 2 \sqrt{3}}5$. Incidentally, $\frac{3 + 2\sqrt{3}}5 < \sqrt{2}$, so $a< \sqrt{2}$, $r > \frac 1{\sqrt{2}}$, as desired. $\blacksquare$
"domain": "artofproblemsolving.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.992539358301545,
"lm_q1q2_score": 0.83554508575549,
"lm_q2_score": 0.8418256452674008,
"openwebmath_perplexity": 215.7900868721272,
"openwebmath_score": 0.8789174556732178,
"tags": null,
"url": "http://www.artofproblemsolving.com/wiki/index.php/2007_USAMO_Problems/Problem_2"
} |
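The lemma's Descartes-theorem computation above checks out numerically in the extreme case of three mutually tangent circles of radius exactly 5 (curvature 1/5 each):

```python
# Curvature of the circle nestled between three tangent circles of radius 5,
# via the positive root of Descartes' Circle Theorem.
from math import sqrt

b = c = d = 1 / 5
a = b + c + d + 2 * sqrt(b * c + c * d + d * b)   # positive root
r = 1 / a
print(f"r = {r:.4f} > 1/sqrt(2) = {1/sqrt(2):.4f}")  # 0.7735 > 0.7071
```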
classification, nlp, text
Title: How to calculate Pointwise Mutual Information (PMI) when working with multiple ngrams Pointwise Mutual Information or PMI for short is given as
$$\operatorname{PMI}(x, y) = \log \frac{p(x, y)}{p(x)\,p(y)}$$
Which is the same as:
$$\operatorname{PMI} = \log \frac{\text{BigramOccurrences} \cdot N}{\text{1stWordOccurrences} \cdot \text{2ndWordOccurrences}}$$
Where BigramOccurrences is the number of times the bigram appears as a feature, 1stWordOccurrences is the number of times the 1st word in the bigram appears as a feature and 2ndWordOccurrences is the number of times the 2nd word from the bigram appears as a feature. Finally, N is given as the total number of words.
We can tweak the following formula a bit and get the following:
Now the part that confuses me a bit is the N in the formula. From what I understand it should be a total number of feature occurrences, even though it is described as total number of words. So essentially I wouldn't count total number of words in dataset (as that after some preprocessing doesn't seem like it makes sense to me), but rather I should count the total number of times all bigrams that are features have appeared as well as single words, is this correct?
Finally, one other thing that confuses me a bit is when I work with more than bigrams, so for example trigrams are also part of features. I would then, when calculating PMI for a specific bigram, not consider count of trigrams for N in the given formula? Vice-versa when calculating PMI for a single trigram, the N wouldn't account for number of bigrams, is this correct?
If I misunderstood something about formula, please let me know, as the resources I found online don't make it really clear to me. The application of PMI to text is not so straightforward, there can be different methods.
PMI is originally defined for a standard sample space of joint events, i.e. a set of instances which are either A and B, A and not B, not A and B or not A and not B. In this setting $N$ is the size of the space, of course.
So the question when dealing with text is: what is the sample space? | {
"domain": "datascience.stackexchange",
"id": 10943,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classification, nlp, text",
"url": null
} |
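A minimal count-based PMI sketch following the quantities named in the question (the corpus counts below are made up for illustration):

```python
# PMI = log2( p(w1,w2) / (p(w1) p(w2)) ) with maximum-likelihood estimates
# p = count / n, which algebraically reduces to log2(n * bigram / (c1 * c2)).
from math import log2

def pmi(bigram_count, w1_count, w2_count, n):
    return log2((bigram_count / n) / ((w1_count / n) * (w2_count / n)))

# e.g. "new york" seen 8 times, "new" 20 times, "york" 10 times, n = 1000
print(round(pmi(8, 20, 10, 1000), 3))  # 5.322
```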
php, url-routing
public function html ($str){ // respond with HTML
$this->header('Content-Type', 'text/html');
$this->body = $str;
return $this;
}
public function form ($str) { // Respond with form data
$this->header('Content-Type', 'application/x-www-form-urlencoded');
// TODO: Allow the user to use an array
$this->body = $str;
return $this;
}
public function render ($file){ // Render an HTML file from the templates folder
//TODO: Restrict to templates folder open
//TODO: Add server-side rendering code
$this->body = file_get_contents($file);
return $this;
}
public function sent() { // Check if the request has been sent
return $this->sent;
}
public function send () { // send the request
if($this->sent()){
$this->log->error('Attempted to call send on a sent response!');
exit('Attempted to call send on a sent response!');
}
// Log the response for debugging
$this->log->info($this->headers);
$this->log->info('HTTP Response Code: ' . $this->code);
$this->log->info($this->body);
// Set the headers
foreach($this->headers as $key => $value) {
header($key .': ' . $value);
}
// Set the status code
http_response_code($this->code);
// Send out the body
echo $this->body;
// Set the sent variable
$this->sent = true;
}
} | {
"domain": "codereview.stackexchange",
"id": 25657,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, url-routing",
"url": null
} |
pcl, catkin, ros-groovy
example2.cpp:(.text._ZN3pcl6search17OrganizedNeighborINS_8PointXYZEE13setInputCloudERKN5boost10shared_ptrIKNS_10PointCloudIS2_EEEERKNS5_IKSt6vectorIiSaIiEEEE[pcl::search::OrganizedNeighbor<pcl::PointXYZ>::setInputCloud(boost::shared_ptr<pcl::PointCloud<pcl::PointXYZ> const> const&, boost::shared_ptr<std::vector<int, std::allocator<int> > const> const&)]+0x1ce): undefined reference to `pcl::search::OrganizedNeighbor<pcl::PointXYZ>::estimateProjectionMatrix()'
CMakeFiles/example2.dir/src/example2.cpp.o: In function `pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ>::estimateRigidTransformation(pcl::PointCloud<pcl::PointXYZ> const&, std::vector<int, std::allocator<int> > const&, pcl::PointCloud<pcl::PointXYZ> const&, Eigen::Matrix<float, 4, 4, 0, 4, 4>&)': | {
"domain": "robotics.stackexchange",
"id": 16245,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pcl, catkin, ros-groovy",
"url": null
} |
thermodynamics
Title: Thermal Conductivity - value meaning What does the thermal conductivity value mean for a certain material? For example:
wool (watts, meter, kelvin):
$k = 0.04\,\mathrm{W}\,\mathrm{m}^{-1}\,\mathrm{K}^{-1}$
What does this mean?
If I give the material 0.04 watts, the temperature increases by 1 kelvin? Thermal conductivity has nothing to do with increasing the temperature by applying mechanical work.
Take a slab of the material of thickness $l$ and area $A$. If the temperature difference between the two sides of the slab is $\Delta T$, then the heat flowing through the slab in a time interval $\Delta t$ in the steady state will be:
$$Q = k \frac{A \Delta T} l \Delta t.$$
That is, the thermal conductivity gives the power (in the sense of energy per time) of the heat flow through the material as a function of the temperature difference, in a way that depends only on the material and not on the geometry of the object.
The same idea is behind the definition of the resistivity $\rho$ of a material. We get the resistance of a prismatic component by:
$$R = \rho l / A.$$
Again, given the geometry of the object we can derive its properties using the material constant. | {
"domain": "physics.stackexchange",
"id": 48788,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics",
"url": null
} |
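Plugging the wool value into the formula above makes the meaning concrete (slab geometry, temperatures, and duration are illustrative choices, not from the source):

```python
# Q = k * A * dT / l * dt -- heat conducted through a slab in steady state.
k = 0.04        # W m^-1 K^-1, wool
A = 1.0         # m^2 slab area
dT = 20.0       # K temperature difference across the slab
l = 0.05        # m slab thickness
dt = 3600.0     # s, one hour
Q = k * A * dT / l * dt
print(f"Q = {Q:.0f} J")  # 57600 J, i.e. a steady 16 W heat flow
```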
complexity-theory, reductions, approximation
Title: What does Arora mean by 'computational history'? In Arora's paper, he wrote,
Papadimitriou and Yannakakis also noted that the classical style of reduction (Cook-Levin-Karp [41, 99, 85]) relies on representing a computational history by a combinatorial problem. A computational history is a very non-robust object, since even changing a bit in it can affect its correctness. This nonrobustness lay at the root of the difficulty in proving inapproximability results.
What does he mean by this statement? In particular: what does it mean by 'computational history', and 'representing a computational history by a combinatorial problem'?
I conjecture that by "[it is] a very non-robust object" he means that the computation could contain many errors. So, in the last statement of the paragraph, maybe he means that nonrobustness [the possibility of many errors] lay at the root of the difficulty in proving inapproximability.
Could someone tell me how nonrobustness exists in Karp/Levin reduction? The computational history consists of the contents of the tape of the Turing machine after each step. These are the variables in the SAT instance produced by the Cook-Levin theorem, which is the combinatorial problem which represents the computational history.
This representation is fragile in the sense that even when the SAT instance is unsatisfiable, we may be able to satisfy almost all of its clauses - say, all but a constant number. This is because the reduction in the Cook-Levin theorem is very local: if we flip a single bit in the computational history, then this falsifies only a small number of clauses.
In contrast, the PCP theorem guarantees that the SAT instance is either satisfiable or at most 99% of its clauses can be satisfied, and this implies a hardness of approximation result for MAX-SAT. | {
"domain": "cs.stackexchange",
"id": 11279,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, reductions, approximation",
"url": null
} |