text | source |
|---|---|
analytical-chemistry
Title: Differences in UV/vis absorbance spectrum of tap water and distilled water In my lecture, I was told that the absorbance of a substance can be measured by a colorimeter. By connecting the device to a computer, a graph showing the wavelengths and absorbance values can be shown. I suppose the graph is called a "spectrum".
My lecturer used deionized water in the experiment for calibration.
Then I wonder: are there any differences (even subtle ones) between the spectra of tap water and distilled/deionized water?
I understand that the difference does not have a major effect on the measurement, but what effects are there? This is a bit of a broad question, since it can be rephrased as "what compounds might be found in tap water that can be detected by direct measurement with UV spectroscopy?"
Obviously one answer cannot address all sources of tap water, and the question does not specify a level of purity. At least one recent study (1) suggests UV measurement as a method of detecting some contaminants of concern:
Contaminants in water were studied using ultraviolet absorption with light emitting diode and
deuterium lamp sources, and a thresholding detector. The absorption spectra of potassium hydrogen
phthalate, clothianidin, tryptophan, thiamethoxam, uric acid and metaldehyde were obtained in the
range 200–360 nm. Only metaldehyde was not suitable for detection in this range. For the other
contaminants, and mixtures of pairs of compounds, the transmitted signal could be approximately
described with a simple spectral model of the source–absorption–detector system. Combined
measurements at two wavelengths could allow relative concentrations in certain mixtures to be
determined, and real-time absorption measurements were demonstrated in a flume.
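The "combined measurements at two wavelengths" idea from the abstract can be sketched as a small linear system: by Beer–Lambert, the absorbance at each wavelength is a weighted sum of the component concentrations. A minimal sketch below; the molar absorptivities and wavelengths are illustrative assumptions, not data from the study:

```python
# Beer-Lambert: A(lambda) = eps1(lambda)*c1 + eps2(lambda)*c2 (path length 1 cm).
# Measuring A at two wavelengths gives a 2x2 linear system for (c1, c2).

def concentrations_from_two_wavelengths(A1, A2, eps1, eps2):
    """Solve the 2x2 system by Cramer's rule.
    eps1/eps2 are (absorptivity at lambda1, absorptivity at lambda2)."""
    det = eps1[0] * eps2[1] - eps2[0] * eps1[1]
    if det == 0:
        raise ValueError("spectra are indistinguishable at these wavelengths")
    c1 = (A1 * eps2[1] - eps2[0] * A2) / det
    c2 = (eps1[0] * A2 - A1 * eps1[1]) / det
    return c1, c2

# Hypothetical absorptivities in L/(mol*cm) at 220 nm and 280 nm:
eps_tryptophan = (3000.0, 5500.0)
eps_uric_acid = (8000.0, 1200.0)

# Forward-simulate a mixture, then recover the concentrations:
c_true = (2e-5, 5e-5)
A220 = eps_tryptophan[0] * c_true[0] + eps_uric_acid[0] * c_true[1]
A280 = eps_tryptophan[1] * c_true[0] + eps_uric_acid[1] * c_true[1]
print(concentrations_from_two_wavelengths(A220, A280, eps_tryptophan, eps_uric_acid))
```

This only determines two concentrations from two wavelengths if the spectra differ enough that the system is well-conditioned, which matches the abstract's caveat about "certain mixtures".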
Regarding what effects these particular compounds had on the UV spectrum, obviously detection requires that there is a difference between pure (e.g. distilled) water and the contaminated one. From the results: | {
"domain": "chemistry.stackexchange",
"id": 17827,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "analytical-chemistry",
"url": null
} |
EQUATION IN SPHERICAL COORDINATES. In mathematics, the Pythagorean theorem, also known as Pythagoras' theorem, is a fundamental relation in Euclidean geometry among the three sides of a right triangle. Change of Variables: for problems 4 and 5, find the Jacobian of the transformation. In spherical coordinates, we likewise often view $\rho$ as a function of $\theta$ and $\phi$, thus viewing distance from the origin as a function of two key angles. However, in polar coordinates we have $u(r,\theta) = \frac{r\sin\theta}{r^2} = \frac{\sin\theta}{r}$, so that $u_r = -\frac{\sin\theta}{r^2}$. To write $\nabla^2 f$ (where $f$ is some function of $r$, $\theta$, and $\phi$) in spherical coordinates we go through the same procedure as we did for cylindrical coordinates. If $du$ and $dv$ are sufficiently close to 0, then $T(R)$ is approximately the same as the parallelogram spanned by the transformed edge vectors. A spherical rotation coordinate system for the description of three-dimensional joint rotations. In this video, Krista King from integralCALC Academy shows how to find the Jacobian of the transformation given three equations for x, y and z, all defined in terms of three other variables, u, v and w. My Calc III Grad Student Instructor warned us against using the center of mass formula in coordination with spherical or cylindrical coordinates. And we get a volume of: $$\iiint_E 1\, dV = \int_0^{\pi}\!\int_0^{2\pi}\!\int_0^{a} \rho^2\sin(\phi)\, d\rho\, d\theta\, d\phi = \int_0^{\pi}\sin(\phi)\, d\phi \int_0^{2\pi} d\theta \int_0^{a}\rho^2\, d\rho = (2)(2\pi)\left(\tfrac{1}{3}a^3\right) = \tfrac{4}{3}\pi a^3.$$ Find the Jacobian for the transformation of switching from cylindrical coordinates to spherical coordinates. However, in other curvilinear coordinate systems, such as cylindrical and spherical coordinate systems, some differential changes are not length based, such as $d\theta$ and $d\phi$. Multiple integration extends the power of one-dimensional integration to computing surface area and volume in multiple dimensions.
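The Jacobian this passage keeps referring to can be checked numerically: the determinant of $\partial(x,y,z)/\partial(\rho,\theta,\phi)$ for the spherical map should have absolute value $\rho^2\sin\phi$ (with $\phi$ the polar angle). A minimal sketch using central finite differences, with no symbolic library assumed:

```python
import math

def spherical_to_cartesian(rho, theta, phi):
    # Physics convention here: theta = azimuthal angle, phi = polar angle,
    # matching the rho^2 sin(phi) volume element in the text.
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

def jacobian_det(rho, theta, phi, h=1e-5):
    """Central-difference determinant of d(x,y,z)/d(rho,theta,phi)."""
    def col(i):
        args_p = [rho, theta, phi]
        args_m = [rho, theta, phi]
        args_p[i] += h
        args_m[i] -= h
        p = spherical_to_cartesian(*args_p)
        m = spherical_to_cartesian(*args_m)
        return [(p[k] - m[k]) / (2 * h) for k in range(3)]
    c0, c1, c2 = col(0), col(1), col(2)
    J = [[c0[r], c1[r], c2[r]] for r in range(3)]  # rows x,y,z; cols rho,theta,phi
    return (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
          - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
          + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))

# |det J| should equal rho^2 sin(phi); the sign depends only on column ordering,
# and the volume element uses the absolute value.
rho, theta, phi = 2.0, 0.7, 1.1
print(abs(jacobian_det(rho, theta, phi)), rho**2 * math.sin(phi))
```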
The plane wave solution to the Schrödinger equation is then written $e^{ikz}$, with a normalization of | {
"domain": "ol3roma.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9915543738075945,
"lm_q1q2_score": 0.8174988355515703,
"lm_q2_score": 0.824461932846258,
"openwebmath_perplexity": 585.6846455806344,
"openwebmath_score": 0.8959506750106812,
"tags": null,
"url": "http://ol3roma.it/jacobian-of-spherical-coordinates-proof.html"
} |
c#, html, functional-programming, generics, framework
var indent = markupElement.Parent != null;
var placeOpeningTagOnNewLine = _formatting[markupElement.Name].HasFlag(MarkupFormattingOptions.PlaceOpeningTagOnNewLine);
var placeClosingTagOnNewLine = _formatting[markupElement.Name].HasFlag(MarkupFormattingOptions.PlaceClosingTagOnNewLine);
var hasClosingTag = _formatting[markupElement.Name].HasFlag(MarkupFormattingOptions.IsVoid) == false;
var indentString = IndentString(_formatting.IndentWidth, markupElement.Depth);
var html =
new StringBuilder()
.Append(IndentTag(placeOpeningTagOnNewLine, indent, indentString))
.Append(RenderOpeningTag(markupElement.Name, markupElement.Attributes))
.AppendWhen(() => hasClosingTag, sb => sb
.Append(content)
.Append(IndentTag(placeClosingTagOnNewLine, indent, indentString))
.Append(RenderClosingTag(markupElement.Name)));
return html.ToString();
}
#endregion
private static string IndentTag(bool newLine, bool indent, string indentString)
{
return
new StringBuilder()
.AppendWhen(() => newLine, sb => sb.AppendLine())
.AppendWhen(() => newLine && indent, sb => sb.Append(indentString))
.ToString();
} | {
"domain": "codereview.stackexchange",
"id": 24918,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, html, functional-programming, generics, framework",
"url": null
} |
homework-and-exercises, newtonian-mechanics, momentum, drag, rocket-science
Title: Rocket with Drag force, Exam Question This question showed up on my Practice exam.
A spaceship of mass $M_0$ (ship + fuel) travels horizontally in the earth's atmosphere, which exerts a drag force $F=-bv$. The ship is fitted with a retro-rocket pointed in the forward direction (on the front of the ship; the purpose is to slow the ship down). The retro-rocket burns fuel at a constant rate $-\frac{dm}{dt}=k$, and the exhaust leaves the nozzle at velocity $u$ with respect to the rocket. The spaceship enters the atmosphere at velocity $V_0$. Solve for the velocity as a function of time. | {
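Not part of the exam text, but the setup can be sanity-checked numerically. One common way to write the horizontal equation of motion is $(M_0 - kt)\,\dot v = -bv - ku$ (drag plus retro-thrust), and an integrating factor gives the candidate closed form $v(t) = \left(V_0 + \frac{ku}{b}\right)\left(1 - \frac{kt}{M_0}\right)^{b/k} - \frac{ku}{b}$. A sketch comparing that closed form against a small RK4 integration; all parameter values are made up:

```python
# Assumed illustrative parameters: initial mass, drag coefficient,
# burn rate, exhaust speed, entry velocity.
M0, b, k, u, V0 = 1000.0, 5.0, 2.0, 100.0, 50.0

def dvdt(t, v):
    # (M0 - k t) dv/dt = -b v - k u : drag plus retro-rocket thrust
    return (-b * v - k * u) / (M0 - k * t)

def v_closed(t):
    # Candidate closed-form solution via the integrating factor (M0 - k t)^(-b/k).
    return (V0 + k * u / b) * (1 - k * t / M0) ** (b / k) - k * u / b

def rk4(f, t0, v0, t1, n=10000):
    """Classic 4th-order Runge-Kutta from t0 to t1 with n steps."""
    h = (t1 - t0) / n
    t, v = t0, v0
    for _ in range(n):
        k1 = f(t, v)
        k2 = f(t + h / 2, v + h * k1 / 2)
        k3 = f(t + h / 2, v + h * k2 / 2)
        k4 = f(t + h, v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return v

t_end = 100.0  # fuel remaining: M0 - k*t_end = 800 > 0
print(rk4(dvdt, 0.0, V0, t_end), v_closed(t_end))  # should agree closely
```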
"domain": "physics.stackexchange",
"id": 35716,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, momentum, drag, rocket-science",
"url": null
} |
forces, newtonian-gravity, dimensional-analysis, coulombs-law, models
Title: Proof for property of proportionality used in deriving physical laws like the law of gravitation and Coulomb's law $$\text{F} \propto m_1m_2$$
$$\text{F} \propto \frac{1}{r^2}$$
Therefore $$\text{F} \propto \frac{m_1m_2}{r^2}$$
In physics, if one quantity $\text{F}$ is directly proportional to each of two other quantities ($m_1m_2$ and $\displaystyle \frac{1}{r^2}$), then it is directly proportional to their product: $$\text{F} \propto \frac{m_1m_2}{r^2}$$
This property intuitively makes sense but I have never seen a rigorous proof of it or ever seen it being formally written in a textbook.
Can someone please prove it rigorously?
(The law of gravitation is simply the most popular application of the problem I have asked about. I do not want to know the inner workings of the law of gravitation; I only want to know how proportionalities can be combined. But if further understanding of the law of gravitation is required, then I am open to it.) It boils down to what $\propto$ (proportional to) means.
When we say
$$F(m_1,m_2,r) \propto m_1m_2$$
this means
$$F(m_1,m_2,r) = A(r)m_1m_2 \tag{1}$$
with some unknown function $A$.
Likewise, saying
$$F(m_1,m_2,r) \propto \frac{1}{r^2}$$
means
$$F(m_1,m_2,r) = B(m_1,m_2) \frac{1}{r^2} \tag{2}$$
with some unknown function $B$.
Now the only way to satisfy both (1) and (2) simultaneously is | {
"domain": "physics.stackexchange",
"id": 84313,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, newtonian-gravity, dimensional-analysis, coulombs-law, models",
"url": null
} |
physiology, plant-physiology, photosynthesis, ecophysiology
That's about it - it's a somewhat hairy calculation, but if you have standard weather data and a little knowledge of the plants you are estimating the leaf temperature of, it should be manageable.
This equation and other relevant info are in this book:
Campbell, G.S. and Norman, J.M. An Introduction to Environmental Biophysics. Springer Science, 1998 | {
"domain": "biology.stackexchange",
"id": 433,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physiology, plant-physiology, photosynthesis, ecophysiology",
"url": null
} |
classification, data-cleaning, apache-spark, scala, text
Title: How to down-weight non-words in text classification? Background:
Documents coming in as well as training set have gone through Apache Tika with Tesseract for inline images. This works well, except when it doesn't. Many docs are old, scanned images and what Tika extracts is gibberish.
Using Spark on Hadoop and either ML or MLlib (haven't settled, though I like ML better).
So far I'm getting the best results from a pipeline using Naive Bayes that removes stopwords, tokenizes, and CountVectorizes features (no TF-IDF). Total bag-of-words approach. Next best is using ML to tokenize, apply TF-IDF, and feed into LogisticRegressionWithLBFGS.
Anyway, the thought occurred to me that the model uses many docs that are junk. Literally just strings of gibberish like "mmmmmmmm aaannnammmmrrr hdhhdhhhhhjjj..."
This isn't good, but since I'm operating at scale it's just what happened. Certainly I could pick through 10,000 training docs and remove the bad examples, but there has to be an easier way. Is there?
The title of this question reflects my brainstorm that there might be a way to discount, downweight or outright ignore tokens that aren't discernable by a dictionary. Is there?
Open to any and all advice or approaches to get better precision out of this model.
Thanks Usually, those nonsensical words are not problematic because they appear in only one or two documents, and people usually just filter out words with such low frequencies.
Are you not able to do this? Or do your non-sensical words appear that much?
Anyhow, your suggestion is what people use:
outright ignore tokens that aren't discernable by a dictionary. Is there?
Exactly. There is no magic way to know if a word is English or not. What word processors do is to use a dictionary, as you yourself suggested.
In python, before stemming, you could filter based on pyenchant.
import enchant
d = enchant.Dict("en_US")
d.check('hello') # True
d.check('mmmmmmmm') # False | {
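Building on the pyenchant idea, the filter can be dropped straight into a preprocessing step. A minimal sketch using a plain Python set as the "dictionary"; the toy vocabulary below is an assumption, and in practice you would substitute `enchant.Dict("en_US").check(token)` or a proper word list:

```python
# Toy stand-in for a real dictionary; in practice load a full word list
# or call enchant.Dict("en_US").check(token) instead.
ENGLISH_WORDS = {"the", "quick", "brown", "fox", "hello", "scanned", "image"}

def filter_gibberish(tokens, dictionary=ENGLISH_WORDS):
    """Keep only tokens the dictionary recognises (case-insensitive)."""
    return [t for t in tokens if t.lower() in dictionary]

doc = ["Hello", "mmmmmmmm", "scanned", "aaannnammmmrrr", "image"]
print(filter_gibberish(doc))  # -> ['Hello', 'scanned', 'image']
```

In a Spark pipeline this would run as a map over the token column before CountVectorizer, so gibberish never enters the vocabulary.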
"domain": "datascience.stackexchange",
"id": 991,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classification, data-cleaning, apache-spark, scala, text",
"url": null
} |
Coordinates in 2D. Similar ones. It is formed by the intersection of the medians; their intersection is the centroid. So in general this is true for any isosceles triangle. The centre of gravity of an isosceles triangle with base P and side Q, measured from its base, is ?. The horizontal distance of the centroid of the elemental sector from the origin (more correctly, from the "pole" of the polar coordinate system) is $\frac{2}{3}r\cos\theta$. John F. Ehlers introduced this COG indicator in 2002. Where is the center of a triangle? There are actually thousands of centers! Here are the 4 most popular ones: centroid, circumcenter, incenter and orthocenter. Canard Center of Gravity Calculator: Aerodynamic Center (AC), Mean Aerodynamic Chord (MAC), Center of Gravity (CG), Neutral Point (NP) and Wing Area; Canard Root Chord (A). A classic problem in mechanics is the calculation of the gravity force that would be experienced by a mass m attracted by a uniform spherical shell of mass M. From the formula, the x-component of the centroid can be computed as the ratio shown to the right. Plane figures have only areas but no mass. This indicator was converted to ThinkorSwim by baffled1. Could you find the center of gravity the same way if it were a 3D object of constant thickness and density? If so, then pick two points, and find the midpoint between them. To calculate the center of gravity, divide the total weight-distance moment by the total mass of the system. Since the density is constant, we may take. The centroid of a triangle is the point of intersection of its medians (the lines joining each vertex with the midpoint of the opposite side). All mass exerts a gravitational force, from the smallest. Given a triangle made from a sufficiently rigid and uniform material, the centroid is the point at which that triangle balances. The centroid of a triangle is not just theoretical. 
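The claim that the centroid is the concurrence of the medians is easy to check numerically: the centroid of a triangle is the average of its vertices, and it sits two-thirds of the way along each median. A small sketch:

```python
def centroid(p1, p2, p3):
    """Centroid of a triangle = average of its three vertices."""
    return tuple((a + b + c) / 3 for a, b, c in zip(p1, p2, p3))

def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

A, B, C = (0.0, 0.0), (6.0, 0.0), (0.0, 9.0)
G = centroid(A, B, C)

# The centroid lies 2/3 of the way from each vertex to the opposite midpoint:
M = midpoint(B, C)  # midpoint of the side opposite A
on_median = tuple(a + 2.0 / 3.0 * (m - a) for a, m in zip(A, M))
print(G, on_median)  # both approximately (2.0, 3.0)
```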
Center of Gravity Finding the center of gravity of an irregular plane figure. If a force is applied at the center of mass, this ruler will accelerate the same exact way as would a point mass. (a) concurrence of the medians (b) intersection | {
"domain": "rafbis.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806506825456,
"lm_q1q2_score": 0.8254779448344705,
"lm_q2_score": 0.8418256512199033,
"openwebmath_perplexity": 304.50342906770857,
"openwebmath_score": 0.5220875144004822,
"tags": null,
"url": "http://rafbis.it/azhy/center-of-gravity-formula-for-triangle.html"
} |
gas-laws
Example
The density form of the ideal gas laws goes like:
$\rho = \dfrac{\text{mass}}{\text{volume}} = \dfrac{MW \cdot P}{R\,T}$, i.e. density = molecular weight $\times$ pressure / (gas constant $\times$ temperature).
Therefore for constant temperature and pressure, the specific gravity
is equal to the ratio of molecular weights. This follows because
$\frac{\rho_1}{\rho_2} = \frac{MW_1}{MW_2}$ at constant conditions.
Equations and Solution
$M_{\text{air}} = 28.97$ (I got $28.84$ using the simple $0.21 \times 32 + 0.79 \times 28$, which agrees with this)
$\text{SG}=0.6=\frac{n_{CH_4}MW_{CH_4}+n_{C_2H_6}MW_{C_2H_6}}{MW_{\text{air}}}$
$1=n_{CH_4}+n_{C_2H_6}$
so by the first and second equations the molecular weight of the mixture is $0.6 \times 28.97 = 17.38$. The third equation then holds with $0.9 \times 16 + 0.1 \times 30 = 17.4$, so
methane is 90% and ethane is 10%. | {
"domain": "chemistry.stackexchange",
"id": 2298,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gas-laws",
"url": null
} |
Starting with the upper right block from the comparison $X = (..)$: \begin{align} X_{13} &= C_x (1-c) + C_y s= \beta \\ X_{23} &= -C_x s + C_y(1-c) = \gamma \end{align} and rewriting it as matrix equation with unknowns $C_x$, $C_y$: $$\left( \begin{matrix} 1-c & s \\ -s & 1-c \end{matrix} \right) \left( \begin{matrix} C_x \\ C_y \end{matrix} \right) = \left( \begin{matrix} \beta \\ \gamma \end{matrix} \right)$$ Using matrix inversion I get $$\left( \begin{matrix} C_x \\ C_y \end{matrix} \right) = \left( \begin{matrix} \frac{1}{2} & -\frac{s}{2(1-c)} \\ \frac{s}{2(1-c)} & \frac{1}{2} \end{matrix} \right) \left( \begin{matrix} \beta \\ \gamma \end{matrix} \right) \quad (*)$$ and $$\theta = \arccos c = \arcsin s \quad (**)$$
So a given $X$ gives $c = X_{11}, s = X_{21}, \beta = X_{13}, \gamma = X_{23}$ and can be used with the above equations to yield $C_x, C_y, \theta$, if $c \ne 1$.
The case $c = 1$ is for $X = I$, $\theta = 0$ where we do not need this transformation.
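Not from the original answer, but equations (*) can be verified numerically: build the homogeneous matrix for a rotation by $\theta$ about a point $C$, then recover $C$ and $\theta$ from its entries. (Using `atan2(s, c)` for $\theta$ is a slightly more robust alternative to (**), since it handles all quadrants.)

```python
import math

def rotation_about_point(theta, Cx, Cy):
    """3x3 homogeneous matrix rotating by theta about (Cx, Cy);
    the translation part is (I - R) @ C, matching the beta/gamma entries."""
    c, s = math.cos(theta), math.sin(theta)
    beta = Cx * (1 - c) + Cy * s
    gamma = -Cx * s + Cy * (1 - c)
    return [[c, -s, beta], [s, c, gamma], [0.0, 0.0, 1.0]]

def recover_center_and_angle(X):
    """Invert equations (*): read c, s, beta, gamma off X and solve for C, theta."""
    c, s = X[0][0], X[1][0]
    beta, gamma = X[0][2], X[1][2]
    if abs(1 - c) < 1e-12:
        raise ValueError("theta = 0: the rotation center is not defined")
    k = s / (2 * (1 - c))
    Cx = 0.5 * beta - k * gamma
    Cy = k * beta + 0.5 * gamma
    return Cx, Cy, math.atan2(s, c)

X = rotation_about_point(math.pi / 3, 2.0, 3.0)
print(recover_center_and_angle(X))  # approximately (2.0, 3.0, pi/3)
```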
Example: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534354878827,
"lm_q1q2_score": 0.8069403403573838,
"lm_q2_score": 0.8221891239865619,
"openwebmath_perplexity": 224.78447548845307,
"openwebmath_score": 0.9981078505516052,
"tags": null,
"url": "https://math.stackexchange.com/questions/1188368/getting-translation-and-rotation-from-resultant-matrix"
} |
rosdep, camera
Title: What is this rosdep error?
Trying to get the dependencies for a camera with the following command:
mike@spinach:~/ros_workspace$ rosdep install prosilica_camera
And I get the following error:
ERROR: Rosdep cannot find all required resources to answer your query
Missing resource prosilica_camera
ROS path [0]=/opt/ros/fuerte/share/ros
ROS path [1]=/home/mike/ros_workspace
ROS path [2]=/opt/ros/fuerte/share
ROS path [3]=/opt/ros/fuerte/stacks
What is wrong with my install?
I have run rosdep update.
Originally posted by mikeS on ROS Answers with karma: 81 on 2012-06-05
Post score: 5
Original comments
Comment by joq on 2012-06-05:
What distribution are you using? In Electric, prosilica_camera was part of camera_drivers. In Fuerte, it belongs to a separate prosilica_driver stack.
Comment by mikeS on 2012-06-05:
I'm using Fuerte
Comment by askkvn on 2021-03-20:
Go to the src directory where your package resides (don't step into your package itself) and try to run this command from there. It should work.
Specifically this error means that the package prosilica_camera is not in your ROS_PACKAGE_PATH.
rosdep is a tool for installing system dependencies for source packages. It does not install the source for you; you need to check that out from version control. There are tools like rosinstall which are designed to help with that. And if you're on Ubuntu you can usually use apt to install most packages and skip compiling others' code from source.
Please make sure to go through the ROS tutorials to understand the core ROS concepts.
Originally posted by tfoote with karma: 58457 on 2012-06-05
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 9675,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rosdep, camera",
"url": null
} |
data-structures, efficiency
If you are concerned about memory efficiency you may also want to look into critbit-trees / binary tries. They store only the bits of the key that differ from other keys (more or less). This is reasonably fast and also quite memory efficient, especially with long keys, such as uint256 or arbitrary length. An example implementation is here (my code).
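For intuition, a much simplified binary trie over fixed-width integer keys can be sketched in a few lines. Note this toy version is not a real critbit tree: a critbit tree additionally compresses paths, storing only the index of the differing bit at each internal node, which is where the memory savings for long keys come from.

```python
class BinaryTrie:
    """Toy binary trie over fixed-width integer keys (no path compression,
    so every bit level stores a node, unlike a critbit tree)."""

    def __init__(self, bits=8):
        self.bits = bits
        self.root = {}

    def insert(self, key, value):
        node = self.root
        for i in range(self.bits - 1, -1, -1):  # most significant bit first
            bit = (key >> i) & 1
            node = node.setdefault(bit, {})
        node["value"] = value

    def lookup(self, key):
        node = self.root
        for i in range(self.bits - 1, -1, -1):
            bit = (key >> i) & 1
            if bit not in node:
                return None
            node = node[bit]
        return node.get("value")

t = BinaryTrie(bits=8)
t.insert(42, "a")
t.insert(43, "b")
print(t.lookup(42), t.lookup(43), t.lookup(44))  # a b None
```

Keys sharing a prefix (here 42 and 43) share the upper part of the path; path compression would collapse the shared run into a single node.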
I'm not sure whether these structures fit your requirements, but I think they are worth mentioning. | {
"domain": "cs.stackexchange",
"id": 5674,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "data-structures, efficiency",
"url": null
} |
radioactivity, nuclear-chemistry
Title: Instability due to neutrons Why does an excess of neutrons lead to instability?
For example, both $\ce{^{3}H}$ and $\ce{^{4}H}$ are unstable, with respective half-lives of $3.89\cdot 10^8$ and $1.39\cdot 10^{-22}$ seconds.
By my reasoning, neutron-rich nuclei should be strongly bound by the strong nuclear force, and should therefore be more stable. Additionally, neutrons, unlike protons, are neutral and therefore do not contribute to repulsive electromagnetic forces. The key here is that a system will tend towards the lowest energy state; in other words, for a process to be spontaneous, the final state must have lower energy than the initial state.
The mass of a neutron is slightly higher than the combined mass of the proton, electron and antineutrino that result from beta decay, meaning that the decay is energetically favoured. This might be easier to understand by considering the decay of a free neutron:
$$\text{n}^0 \rightarrow \text{p}^+ + \text{e}^-+\bar{\nu}_\text{e}$$
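The energy bookkeeping for free-neutron decay can be made concrete with rest masses. The values below are rounded reference figures, and the antineutrino mass is negligible:

```python
# Rest energies in MeV (rounded reference values; antineutrino mass ~ 0 neglected)
M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511

# Q-value: energy released = mass difference between initial and final states
q_value = M_NEUTRON - (M_PROTON + M_ELECTRON)
print(f"Q = {q_value:.3f} MeV")  # positive, so free-neutron beta decay is allowed
```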
Whether this process will occur in a nuclear neutron is determined by the $N/Z$ ratio and the atomic number. In small neutron-rich nuclei, the process above will predominate, but as the nucleus gets larger, more neutrons are needed to counteract the electromagnetic repulsion between protons (which has a longer range than the strong nuclear force). Again, the reason is energy: the unbound nucleons collectively have higher energy than the bound nucleus, due to the nuclear binding energy. Hence, for larger nuclei, neutron decay is not energetically favourable. | {
"domain": "chemistry.stackexchange",
"id": 10772,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "radioactivity, nuclear-chemistry",
"url": null
} |
object-oriented, strings, parsing, vba, meta-programming
vbeProcedure
' requires Microsoft Visual Basic for Applications Extensibility 5.3 library
Option Explicit
' error handling values
Private Const BaseErrorNum As Long = 3500
Public Enum vbeProcedureError
vbeObjectNotIntializedError = vbObjectError + BaseErrorNum
vbeReadOnlyPropertyError
vbeInvalidArgError
End Enum
Public Enum MemberType
mt_PropertyGetter
mt_PropertyLetter
mt_PropertySetter
mt_Function
mt_Sub
End Enum
Public Enum MemberAccessibility
ma_Public
ma_Private
ma_Friend
End Enum
Private Const ObjectNotIntializedMsg = "Object Not Initialized"
Private Const ReadOnlyPropertyMsg = "Property is Read-Only after initialization"
' exposed property variables
Private Type TVbeProcedure
ParentModule As CodeModule
Name As String
procKind As vbext_ProcKind
End Type
Private this As TVbeProcedure
' truly private property variables
Private isNameSet As Boolean
Private isParentModSet As Boolean
Public Property Get Name() As String
If isNameSet Then
Name = this.Name
Else
RaiseObjectNotIntializedError
End If
End Property
Public Property Let Name(ByVal vNewValue As String)
If Not isNameSet Then
If vNewValue = vbNullString Then
RaiseInvalidArgError "Name", "The Name property can not be set to an empty string."
End If
this.Name = vNewValue
isNameSet = True
Else
RaiseReadOnlyPropertyError
End If
End Property
Public Property Get ParentModule() As CodeModule
If isParentModSet Then
Set ParentModule = this.ParentModule
Else
RaiseObjectNotIntializedError
End If
End Property | {
"domain": "codereview.stackexchange",
"id": 10126,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "object-oriented, strings, parsing, vba, meta-programming",
"url": null
} |
python-3.x
with
if len(args) == 2:
user, password = args
proxy_string += f":{user}@{password}"
Note that you're doing two tasks inside one function: creating proxy strings that can be formatted as 1) ip:port and 2) ip:port:user:password. It's up to you, but you could extract the logic into two (private) separate functions:
@staticmethod
def load_proxies(proxy_file_path) -> List[Dict[str, str]]:
def _load_ip_proxy(value: str) -> str:
"""
Transform a `{ip}:{port}` string into a `http://{ip}:{port}` string.
"""
ip, port = value.split(":")
return f"http://{ip}:{port}"
def _load_ipup_proxy(value: str) -> str:
"""
Transform a `{ip}:{port}:{user}:{password}` string into a `http://{ip}:{port}:{user}@{password}` string.
"""
ip, port, user, password = value.split(":")
return f"http://{ip}:{port}:{user}@{password}"
with open(proxy_file_path) as proxy_file:
proxies = []
for proxy in proxy_file:
if proxy.count(":") == 1:
proxies.append({'https': _load_ip_proxy(proxy.strip('\n'))})
elif proxy.count(":") == 3:
proxies.append({'https': _load_ipup_proxy(proxy.strip('\n'))})
return proxies
A variation of the above code using Python's "switch statements":
@staticmethod
def load_proxies(proxy_file_path) -> List[Dict[str, str]]:
# ... | {
"domain": "codereview.stackexchange",
"id": 41923,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python-3.x",
"url": null
} |
quantum-mechanics, statistical-mechanics, condensed-matter, computational-physics
&= 2^L \sinh(\beta_k) \prod_{i \neq k}\cosh(\beta_i)
\end{align}
Is this a good way to do this? I have L equations for L variables but they are coupled in a complicated manner. Is there a simpler way to obtain the $\beta_i$? What would be a good numerical routine to calculate this? Any help/comment will be much appreciated. I don't know whether you were using Einstein convention, but either you meant:
$$
\rho = \frac{1}{N}\exp\left(\sum_i \beta_iZ_i\right)
$$
or you meant that at each site you have:
$$
\rho_i = \frac{1}{N_i}\exp\left(\beta_iZ_i\right)
$$
In any case, the $\rho_i$ are the marginals of $\rho$ and conversely (since the $Z_i$ commute):
$$
\rho = \prod_i\rho_i
$$
You either have $L+1$ independent equations for $L+1$ unknowns $\beta_i, N$:
$$
\text{Tr}\rho = 1\\
\text{Tr}[\rho Z_i] = \bar Z_i
$$
or you equivalently have $L$ independent systems of $2$ equations for $2L$ unknowns $\beta_i,N_i$:
$$
\text{Tr}\rho_i = 1\\
\text{Tr}[\rho_i Z_i] = \overline Z_i
$$
In either case the system is easy to solve, using:
$$
\text{Tr}\, e^{\beta_i Z_i} = 2\cosh\beta_i \\
\text{Tr}[e^{\beta_i Z_i} Z_i] = 2\sinh \beta_i
$$
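Dividing the two traces gives $\overline Z_i = \tanh\beta_i$ for each site, so the fit reduces to a closed-form `atanh` per site rather than a coupled solve. A minimal sketch, assuming each $\overline Z_i \in (-1, 1)$:

```python
import math

def betas_from_expectations(z_bars):
    """For a product state rho = prod_i exp(beta_i Z_i)/N_i, the expectation
    <Z_i> = tanh(beta_i), so each beta_i is recovered independently."""
    return [math.atanh(z) for z in z_bars]

z_bars = [0.5, -0.2, 0.9]
betas = betas_from_expectations(z_bars)
# Check: tanh(beta_i) reproduces the target expectations.
print([round(math.tanh(b), 10) for b in betas])
```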
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 97196,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, statistical-mechanics, condensed-matter, computational-physics",
"url": null
} |
c++, calculator
template <typename NUMTYPE>
NUMTYPE applyOperator(const char operation, NUMTYPE& num1, NUMTYPE& num2)
{
if (operation == '+')
return num1 + num2;
else if (operation == '-')
return num1 - num2;
else if (operation == '*')
return num1 * num2;
else if (operation == '/')
{
if (num1 == 0)
throw std::runtime_error("Math Error");
return num2 / num1;
}
else if (operation == '%')
{
if (num1 == 0)
throw std::runtime_error("Math Error");
return (long long) num2 % (long long) num1;
}
else if (operation == '^')
{
if (num1 == 0 && num2 == 0)
throw std::runtime_error("Math Error");
return pow(num2, num1);
}
else
throw std::runtime_error("Unknown Symbol");
}
template <typename NUMTYPE>
void applyFunction(std::string &function, NUMTYPE &argument)
{
if (function == "abs")
argument = fabs(argument);
else if (function == "sqrt")
argument = sqrt(argument);
else if (function == "cbrt")
argument = cbrt(argument); | {
"domain": "codereview.stackexchange",
"id": 37688,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, calculator",
"url": null
} |
An example of an "unjustified step" occurred famously in Andrew Wiles's first announcement that he had proved Fermat's Last Theorem. Someone (actually, I think multiple people) found a mistake in the proof he presented. After he made a considerable additional effort, he was finally able to present a proof without that mistake, and this proof was accepted.
Some comments under the original question raised the issue of how we check the intermediate steps of a proof by contradiction, claiming that it is easier to check the steps in a direct proof since their conclusions are all true.
Things that we want to prove typically have the form $S\implies T,$ for which a direct proof typically involves assuming $S$ and then showing that $T$ follows. In the intermediate steps of the proof, we have some facts that depend on $S,$ which we cannot "check" by simply observing that they are true; we can check them by verifying the logic in every step leading up to that part of the proof, or we can check them by coming up with an alternative proof showing that they follow from $S.$
We may also introduce some known facts (which do not depend on $S$) in the course of the proof, which we can check simply by verifying that they are true facts.
A third possibility is that we derive something from $S$ that we could have known to be true without assuming $S.$ This is wasteful; we could improve the proof by simply introducing these facts as known without showing a logical derivation from $S.$
The same things happen in proof by contradiction. We will have some steps that we can check only by checking every step in the logic leading up to them or by devising an alternative proof, we may have known facts that we can check more easily, and we may even have wasted effort by deriving something from our (false) assumption that we could have simply brought in as a known fact. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9678992969868542,
"lm_q1q2_score": 0.8302331422784414,
"lm_q2_score": 0.8577681013541613,
"openwebmath_perplexity": 267.8752633120478,
"openwebmath_score": 0.7733908295631409,
"tags": null,
"url": "https://math.stackexchange.com/questions/2460945/in-a-proof-by-contradiction-how-do-we-know-the-assumption-is-the-cause-of-the-c"
} |
ros, c++, msg
Originally posted by Phorce on ROS Answers with karma: 97 on 2014-01-20
Post score: 0
It looks like you aren't publishing and receiving the same message types; the message type that you're subscribing to and the message type that is being published must match.
For your example, it looks like you should be subscribing to the leap_motion_sensor/leapros message type (leap_motion_sensor::leapros in C++).
From there, you should be able to access the hand_direction data with msg->hand_direction
Originally posted by ahendrix with karma: 47576 on 2014-01-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16701,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, c++, msg",
"url": null
} |
physical-chemistry, equilibrium
Let's say the first reaction reached equilibrium and then $\ce{N_2O_3}$ started decomposing. This means a greater amount of $\ce{O_2}$ will be produced and the equilibrium of the first reaction will get disturbed
Without the second reaction, the first will attain equilibrium when $a$ is about 1.85 M. The second reaction makes F and uses up B, so the first reaction almost stays at equilibrium, and the concentration of A does not change much.
Will it ever stop?
Due to this the oxygen level will drop and equilibrium for the second reaction will get disturbed and $\ce{N_2O_3}$ will decompose further to form oxygen.
But this process will go on forever. There is definitely something wrong with this. Will this really happen, or is there some other mechanism at work?
If you look at this step-wise (figure out how much one reaction would go, disregarding the other), you are never done. This is similar to Zeno's paradox. However, the corrections get smaller and smaller, and your approximation gets better and better.
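The "corrections get smaller and smaller" point can be illustrated with a toy relaxation: each re-equilibration step disturbs the other reaction by only a fixed fraction of the previous correction, so the total change converges like a geometric series. The damping fraction below is an assumption of the toy model, not chemistry:

```python
# Toy model of alternating re-equilibration: each step undoes a disturbance
# but creates a new one that is a fixed fraction r < 1 of the previous,
# so the total correction converges geometrically (cf. Zeno's paradox).
r = 0.3           # assumed damping fraction per step
correction = 1.0  # initial disturbance (arbitrary units)
total = 0.0
steps = 0
while correction > 1e-12:
    total += correction
    correction *= r
    steps += 1
print(steps, total)  # total approaches 1 / (1 - r) = 1.42857...
```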
In general, reactions approach equilibrium, they don’t reach it. Nothing special about this system. | {
"domain": "chemistry.stackexchange",
"id": 14615,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, equilibrium",
"url": null
} |
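The "corrections get smaller and smaller" point can be made concrete with a toy numerical experiment (illustrative only; the two reactions and equilibrium constants below are invented, not the actual $\ce{N_2O_3}$/$\ce{O_2}$ system):

```python
# Toy numerical experiment: two coupled reactions A <=> B and B <=> C,
# both with equilibrium constant 1. Re-equilibrating them alternately
# disturbs each other, but the corrections shrink geometrically, so the
# step-wise picture converges instead of running forever.
def relax(a=1.0, b=0.0, c=0.0, passes=50):
    shifts = []
    for _ in range(passes):
        x = (a - b) / 2.0        # re-equilibrate A <=> B exactly (K1 = 1)
        a, b = a - x, b + x
        y = (b - c) / 2.0        # re-equilibrate B <=> C exactly (K2 = 1)
        b, c = b - y, c + y
        shifts.append(abs(x) + abs(y))
    return (a, b, c), shifts

conc, shifts = relax()
# concentrations approach 1/3 each; the per-pass shifts decay toward zero
```

Each pass reduces the deviation from the joint equilibrium by roughly a constant factor, which is exactly the Zeno-like behaviour described in the answer.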
deep-learning, reinforcement-learning
Title: Why Do We Store The action In Replay Memory Deep Q-learning According to my understanding, in Deep Q-learning, in order to train the NN, the agent stores experiences in a buffer (Replay Memory), and each experience contains:
e = <s,a,r,s'> | {
"domain": "datascience.stackexchange",
"id": 10433,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, reinforcement-learning",
"url": null
} |
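A minimal sketch of such a buffer (the class and method names are illustrative, not from any particular library). The action $a$ is stored because the Q-learning update targets $Q(s, a)$ for the action actually taken in state $s$:

```python
import random
from collections import deque

# Minimal replay-memory sketch: each experience stores (s, a, r, s') so
# the network can be trained on decorrelated minibatches sampled
# uniformly from the buffer.
class ReplayMemory:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest experiences fall off

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # uniform sampling breaks the temporal correlation of a trajectory
        return random.sample(self.buffer, batch_size)

memory = ReplayMemory(capacity=1000)
for t in range(100):
    memory.push(t, t % 4, 1.0, t + 1)
batch = memory.sample(8)
```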
evolution, life, death
So, to return to your question of "Why does the brain release DMT when dying?", the answer is that there IS no answer. There is no selection pressure for or against that trait, it just happens. As to the mechanism itself, I suspect that there is just a global breakdown of function in the brain, leading to a dump of ALL neurotransmitters. When the reuptake pumps fail, the neurons just dump their contents into the extracellular space. The result is pleasant hallucinations, giving the impression that humans are not "meant" to feel pain when dying. Imposing that kind of rationale, however, is simply creating meaning out of chaos. If you watch TV static long enough, your mind will start to see patterns which aren't there, simply because the human mind is "wired" to find patterns in chaos in order to better understand the environment.
To wrap this up, I suppose the answer to your question is that the brain is finding meaning in meaninglessness, simply because that is what the brain does. We look for patterns to make sense out of our environment, which sometimes leads to spurious conclusions. There is no reason that the brain hallucinates when dying, but there IS a reason that this trait appears to be a built-in mechanism.
I hope I was able to shed some light on the issue. Or at the very least, not confuse you further :D | {
"domain": "biology.stackexchange",
"id": 12206,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "evolution, life, death",
"url": null
} |
performance, c, unit-testing, cyclomatic-complexity
return log_data;
}
// provides common error report for memory allocation error.
void report_create_and_init_test_log_data_memory_failure(char *function_name)
{
fprintf(error_out_file, "In function %s, Memory allocation failed in create_and_init_test_log_data\n", function_name);
}

typedef struct can reuse the name of struct
I see you often do something like:
typedef struct foo_bar {
...
} Foo_Bar;
It's a bit weird to use lower_case for the struct name and Upper_Case for the typedef. You can reuse the same name as the struct:
typedef struct foo_bar {
...
} foo_bar;
It's also common to append _t to the typedef'ed name so it is easier to identify it as a type name instead of a variable or function name, although the _t suffix is reserved by at least POSIX 1003.1.
No need to use extern for function declarations
The keyword extern is only necessary to declare variables without defining them, for function declarations there is no need, you can write for example the following in a header file:
bool init_vm_error_reporting(char* error_log_file_name);
Use const where appropriate
It seems like you avoided using const everywhere. Using it might allow the compiler to better optimize your code, and it will be able to report an error if you ever accidentally write to a variable that shouldn't be changed. So for example:
bool init_vm_error_reporting(const char* error_log_file_name);
You can also use it for struct members:
typedef struct test_log_data
{
const char* function_name;
bool status;
const char* path;
bool stand_alone;
} test_log_data; | {
"domain": "codereview.stackexchange",
"id": 39204,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, c, unit-testing, cyclomatic-complexity",
"url": null
} |
python, regex, file
I divided it into two functions: search, where I check whether the result file already exists and remove it, and step, which is passed to os.path.walk. param is the folder to search in.
How can this code be adjusted to look better? These parameter names are confusing: param and ext.
The step() function suffers from excessive nesting. I would further subdivide your step() function to create a search_path() function that deals with each candidate file. Since both of these helper functions aren't really that useful on their own, I would define them both within the main search() function.
You've hard-coded "search-result.txt" twice. Ideally, it should be parameterized rather than hard-coded. Furthermore, you're reopening it for appending each time you enter a directory, which is problematic because…
Reopening the file handle is wasteful.
You might not even have any search results for that directory.
If you opened it just once in 'w' mode instead of many times in 'a' mode, you wouldn't have to remove the file at all. (Note that removing the file does make a difference, if the file exists and has a second hard-link.)
Also, if all(not filter in path for filter in filters) is inefficient. As explained in the documentation for os.path.walk(), you can avoid entering directories in which you have no interest:
The visit function may modify names to influence the set of directories visited below dirname, e.g. to avoid visiting certain parts of the tree. (The object referred to by names must be modified in place, using del or slice assignment.)
To analyze file extensions, use os.path.splitext().
Idiomatic Python loops rarely need statements like i += 1. What you want to use is enumerate().
The format string for output.write() is easier to read if spread out into several lines.
import os.path
import re
EXCLUDE_DIRS = ['bin', 'build', 'logback', 'test', 'target'] | {
"domain": "codereview.stackexchange",
"id": 13451,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, regex, file",
"url": null
} |
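Putting the review's suggestions together in a sketch (using the modern os.walk, since os.path.walk was removed in Python 3; the find_files name and the demo tree are illustrative):

```python
import os
import tempfile

EXCLUDE_DIRS = ['bin', 'build', 'logback', 'test', 'target']

def find_files(root, extensions):
    """Collect files whose extension is in `extensions`, skipping EXCLUDE_DIRS."""
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        # in-place slice assignment prunes subtrees before os.walk descends
        dirnames[:] = [d for d in dirnames if d not in EXCLUDE_DIRS]
        for name in filenames:
            if os.path.splitext(name)[1] in extensions:
                matches.append(os.path.join(dirpath, name))
    return matches

# tiny demo tree: 'build' should be pruned, 'src' searched
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'build'))
os.makedirs(os.path.join(root, 'src'))
open(os.path.join(root, 'build', 'skip.py'), 'w').close()
open(os.path.join(root, 'src', 'keep.py'), 'w').close()
found = find_files(root, {'.py'})
```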
navigation, laserscan, transform
Title: Get a navigation goal from laserscan
Hi everyone,
For some purpose, I want to publish a navigation goal (type geometry_msgs::PoseStamped). Now I want to get the position and orientation from the laserscan data as follows; how can I do that?
float ra = scan_msg->ranges[t];
float angle = scan_msg->angle_min + i * scan_msg->angle_increment;
the laserscan frame_id = "laser", the map frame_id = "map", and I have the tf tree laser->base_link->odom->map.
Any suggestions?
Thank You In Advance!!!
EDIT
I have solved the problem at this link
Originally posted by chengwei on ROS Answers with karma: 51 on 2016-11-01
Post score: 0
Original comments
Comment by gvdhoorn on 2016-11-03:
Could you please post your last edit as an answer, and then accept your own answer? That shows much more clearly that your question was answered than if you close it.
Thanks.
I'm a little unclear on what your actual question is, but I'll offer some advice.
Obtaining your current location based on laser scan data can be done with something like the amcl package that compares the laser scans to a known map.
You can create a map from laser scan data using gmapping, or manually using GIMP (just export it as a .pgm image).
An easy way to publish a Navigation Goal is to use RVIZ. You can select the 2D Nav goal button at the top which allows you to place an arrow on the screen of where you want the nav goal to be.
Originally posted by shoemakerlevy9 with karma: 545 on 2016-11-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26111,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, laserscan, transform",
"url": null
} |
electromagnetic-radiation, speed-of-light, faster-than-light, causality
Title: Is the speed of causality $c$ or $\frac{c}{\epsilon}$?
Is the speed of causality $c$ or $\frac{c}{\epsilon}$ where $c$ is the speed of light in vacuum and $\epsilon$ is the dielectric constant of the medium?
I searched the net but could not find a good and relevant answer. There was this but it did not address the main concern.
So, will an electron travelling at $0.99c$ in water, where the speed of light is actually $\frac{c}{1.33}$, violate causality? It's always $c$. But in dielectrics the photons interact (scattering etc.) with the medium and sort of zigzag before coming out (that's a simplistic way of saying all those interactions take extra time). So it seems they go slower. Anything that has fewer interactions will go faster, e.g. neutrinos.
Nothing will go faster than c. There is no causality issue. | {
"domain": "physics.stackexchange",
"id": 38538,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetic-radiation, speed-of-light, faster-than-light, causality",
"url": null
} |
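The relation hinted at in the title can be written out explicitly. For a non-magnetic medium the phase velocity is set by the refractive index, which at a given frequency is tied to the relative permittivity:

```latex
v_{\text{phase}} = \frac{c}{n}, \qquad
n = \sqrt{\varepsilon_r \mu_r} \approx \sqrt{\varepsilon_r} \quad (\mu_r \approx 1)
```

Note that $\varepsilon_r$ is frequency-dependent: water's static value is about 80, while at optical frequencies $\varepsilon_r \approx 1.77$, giving $n \approx 1.33$. None of this changes the causal bound, which remains $c$.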
To make the decomposition easier for myself, I chose the "carrier" frequency w0 to be 1 because this can always be arranged by re-defining the time unit.
• As a note, "Solve[c1*Cos[c2*x-c3] + c4*Cos[c2*x-c5] == c6*Cos[c2*x-c7], {c6,c7}, Reals]" does not work. – user1722 Jan 6 '14 at 18:18
Distantly related to Jens's answer: if we write the desired solution in the form x[t0] Cos[w0 (t - t0)], where t0 produces a local extremum, it is a matter of some simple calculus to solve for t0. However, one complication in Mathematica is that it uses ArcCos to find the solutions in this case. Unfortunately, for each value of the cosine that leads to two solutions, one good and one bad:
testparams = {w0 -> 2, b1 -> 3, b2 -> 4, a1 -> 5, a2 -> 6};
x'[t0] /. Solve[x'[t0] == 0, t0] /. testparams // N
(* {-3.55271*10^-15, 14.0814, -14.0814, -3.55271*10^-15} *)
Well, it's a shame in a way, but we're taught in school to check our solutions and select the ones that work. In this case only one of them does. Thus the final answer:
x2 = x[t0] Cos[w0 (t - t0)] /.
First @
Select[
Solve[x'[t0] == 0, t0] /.
ArcCos[e_] :>
ArcTan[Numerator[e],
PowerExpand @ Simplify @ Sqrt[Denominator[e]^2 - Numerator[e]^2]],
Simplify[(x'[t0] == 0) /. #] &, 1] // Simplify | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9787126444811032,
"lm_q1q2_score": 0.8024399771379576,
"lm_q2_score": 0.8198933381139646,
"openwebmath_perplexity": 7308.955454580657,
"openwebmath_score": 0.9465422630310059,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/30389/combining-cosine-or-sine-terms-into-a-single-cosine-or-sine/30396"
} |
python, object-oriented, python-3.x, parsing, game-of-life
name: Gosper glider gun
comments:
['This was the first gun discovered.',
'As its name suggests, it was discovered by Bill Gosper.']
author: Bill Gosper Nov. 1970
size_x: 36
size_y: 9
rule_birth: [3]
rule_survival: [2, 3]
pattern_raw: 24bo$22bobo$12b2o6b2o12b2o$11bo3bo4b2o12b2o$2o8bo5bo3b2o$2o8bo3bob2o4bobo$10bo5bo7bo$11bo3bo$12b2o!
human_friendly_pattern:
........................o...........
......................o.o...........
............oo......oo............oo
...........o...o....oo............oo
oo........o.....o...oo..............
oo........o...o.oo....o.o...........
..........o.....o.......o...........
...........o...o....................
............oo...................... | {
"domain": "codereview.stackexchange",
"id": 23250,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, object-oriented, python-3.x, parsing, game-of-life",
"url": null
} |
How to fit a boundary to a scatter plot
I am playing around with some diffusion simulations using random walks. For example, if I generate many random walks from the same parent distribution (a Gaussian) as:
ManyRandomWalks =
Table[
RandomWalkData = RandomVariate[NormalDistribution[0, 1], 100];
RandomWalk = 1 + Accumulate[RandomWalkData];
RandomWalk,
{i, 1, 200}
]
It will look like this:
One can bound this scatter with the equation: $$f(t) = A + \sqrt{D t} + B t$$
I'd like to make a fit of this to get a more accurate value of $$D$$ -- a diffusion constant. So far the best method I can think of is to bin the data by index or $$x$$-axis, perform a statistic or count on each bin, and then fit that -- much as one would fit a histogram.
The other approach might be to do some MLE like FindDistributionParameters and define my function as a PDF, extracting parameter values that way.
Are there any inbuilt features to achieve what I want? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.974042647323258,
"lm_q1q2_score": 0.8298383681157379,
"lm_q2_score": 0.8519528076067262,
"openwebmath_perplexity": 1749.485003702994,
"openwebmath_score": 0.4375614523887634,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/215638/how-to-fit-a-boundary-to-a-scatter-plot/215660"
} |
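A sketch of the binning idea in Python (pure stdlib, as an illustration of the statistics rather than Mathematica code): for unbiased Gaussian steps of unit variance, the sample variance across walks grows as $D t$, so a least-squares line through the origin recovers $D \approx 1$.

```python
import random

random.seed(0)

# Generate an ensemble of Gaussian random walks, take the sample variance
# across walks at each time index, and fit Var[x(t)] = D * t by least
# squares through the origin. For unit-variance steps the true D is 1.
STEPS, WALKS = 100, 200
walks = []
for _ in range(WALKS):
    pos, path = 1.0, []
    for _ in range(STEPS):
        pos += random.gauss(0, 1)
        path.append(pos)
    walks.append(path)

var = []
for t in range(STEPS):
    col = [w[t] for w in walks]
    mean = sum(col) / WALKS
    var.append(sum((x - mean) ** 2 for x in col) / (WALKS - 1))

ts = [t + 1 for t in range(STEPS)]   # path[t] is the position after t+1 steps
D = sum(t * v for t, v in zip(ts, var)) / sum(t * t for t in ts)
```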
ros, ros-kinetic, stacks, ros-package-path
Title: ROS_PACKAGE_PATH dont include kinetic/stacks
I have Ubuntu 64b 16.04 and ROS Kinectic.
echo $ROS_PACKAGE_PATH
/home/viclinux/catkin_ws/src:/opt/ros/kinetic/share
As you see, I do not have a .../kinetic/stacks folder. Is it important? What is wrong?
Originally posted by marilia15 on ROS Answers with karma: 104 on 2016-12-14
Post score: 0
I don't know when exactly the stacks folder actually vanished, but it does not exist anymore at least since indigo. So it is not needed in the package path anymore.
Originally posted by mgruhler with karma: 12390 on 2016-12-14
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by marilia15 on 2016-12-14:
it would be great to update the tutorial then
http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment
How can we advice the admin?
Comment by mgruhler on 2016-12-14:
You can register and then edit it youself. I've done this for now: http://wiki.ros.org/ROS/Tutorials/catkin/CreateWorkspace
Just to be sure: @William is there still any case where you could have /opt/ros/kinetic/stacks?
Comment by William on 2016-12-20:
I would guess it might have disappeared when we stopped building binaries for rosbuild packages. Perhaps it would exist if you manually created a binary package for stack of rosbuild packages, but I'm not sure. @Dirk Thomas | {
"domain": "robotics.stackexchange",
"id": 26478,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-kinetic, stacks, ros-package-path",
"url": null
} |
[Fragmentary scraped text about the Newton's-rings experiment; recoverable content: Newton described the effect in his 1704 treatise Opticks; the ring radius can be read off by additionally laying millimeter graph paper on the glass; the path difference involves $(m + \tfrac{1}{2})\lambda$.] | {
"domain": "flypack.ro",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9669140225647108,
"lm_q1q2_score": 0.819911912543334,
"lm_q2_score": 0.8479677545357568,
"openwebmath_perplexity": 2083.269237707841,
"openwebmath_score": 0.6355154514312744,
"tags": null,
"url": "https://flypack.ro/body-figure-homqv/newton%27s-ring-experiment-is-performed-with-c66c17"
} |
reinforcement-learning, actor-critic-methods, environment, advantage-actor-critic
There are two main advantages to this approach:
The dataset for training is closer to the independent, identically distributed (i.i.d.) ideal, important for theoretical and practical reasons when training a neural network. Samples taken from a single trajectory are not independent, but instead are correlated due to rules of the environment - so using a single trajectory is furthest from i.i.d. This is a similar motivation to use of experience replay tables for DQN variants of Q-learning. However, experience replay is inherently off-policy, so not a good fit to A2C or A3C that need samples taken when acting under the current policy.
Collecting experience is often a major bottleneck in training RL agents. Being able to do so in parallel in a distributed environment can significantly speed up the training process. | {
"domain": "ai.stackexchange",
"id": 1945,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, actor-critic-methods, environment, advantage-actor-critic",
"url": null
} |
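A toy illustration of the second point's structure (not A2C itself; all the names below are invented): stepping several independent environment copies and interleaving their transitions produces a batch drawn from different trajectories, which is closer to i.i.d. than a single rollout.

```python
import random

random.seed(1)

# N independent copies of an environment are stepped in an interleaved
# fashion, so consecutive entries in the batch come from different
# trajectories.
class ToyEnv:
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action                 # correlated within one trajectory
        return self.state, random.random()   # (next_state, reward)

def collect_batch(envs, policy, steps):
    batch = []
    for _ in range(steps):
        for env in envs:                     # one step per env, interleaved
            s = env.state
            a = policy(s)
            s2, r = env.step(a)
            batch.append((s, a, r, s2))
    return batch

envs = [ToyEnv() for _ in range(4)]
batch = collect_batch(envs, policy=lambda s: 1, steps=5)
```

In a real distributed setup each copy would run in its own process or machine; the round-robin loop here just mimics the interleaving.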
javascript, beginner, jquery, html, css
<ul>
<li><a href="#" class="abutton">show alles</a></li>
<li><a href="#" class="bbutton">show voorbeelden</a></li>
<li><a href="#" class="cbutton">show contact</a></li>
</ul> | {
"domain": "codereview.stackexchange",
"id": 4531,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner, jquery, html, css",
"url": null
} |
javascript, parsing
Title: Quadratic equation solver in JavaScript The task is to implement a solveEquation function, which solves a quadratic equation. Each equation has exactly 2 integer solutions. Return those numbers as an ordered array.
Example:
const solutions = solveEquation('2 * x^2 - 10 * x + 12');
console.log(solutions); // [2, 3]
Solution:
function solveEquation(equation) {
// Your solution here
let a, b, c;
// Using ES6 destructuring
[a, b, c] = getValues(equation);
return getXList(a, b, c);
} | {
"domain": "codereview.stackexchange",
"id": 30216,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, parsing",
"url": null
} |
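A Python sketch of the two helpers the JavaScript solution assumes (counterparts of getValues and getXList; the parsing regex is a guess that only handles the exact "a * x^2 ± b * x ± c" shape from the task statement):

```python
import math
import re

# Hypothetical counterparts of the getValues / getXList helpers.
def get_values(equation):
    # strip spaces and '*' so '2 * x^2 - 10 * x + 12' becomes '2x^2-10x+12'
    cleaned = equation.replace(' ', '').replace('*', '')
    m = re.fullmatch(r'([+-]?\d+)x\^2([+-]\d+)x([+-]\d+)', cleaned)
    return [int(g) for g in m.groups()]

def solve_equation(equation):
    a, b, c = get_values(equation)
    root = math.sqrt(b * b - 4 * a * c)      # task guarantees 2 integer roots
    return sorted([int((-b - root) / (2 * a)), int((-b + root) / (2 * a))])

print(solve_equation('2 * x^2 - 10 * x + 12'))  # -> [2, 3]
```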
quantum-mechanics, waves, wavefunction, schroedinger-equation, complex-numbers
You can now combine the previous two equations to get
$$\Psi(\theta) = 2a_1\cos\omega t - 2b_1\sin\omega t$$
Notice that the solution now has only two real coefficients, $a_1$ and $b_1$, or equivalently, one complex coefficient, $c_1$. The catch is that $c_1$ isn't really being used as a complex number; rather, it's being picked apart and its components used separately, which is why I don't think the way it's explained in the PDF is very useful. | {
"domain": "physics.stackexchange",
"id": 16588,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, waves, wavefunction, schroedinger-equation, complex-numbers",
"url": null
} |
waves
The formula given in the linked notes is,
$$\overbrace{\rho(x)}^{\text{density}}\underbrace{\sqrt{\Delta x^2+\Delta u ^2}}_{\text{length}}u_{tt}=\overbrace{T(x+\Delta x,t)\sin \theta (x+\Delta x,t)-T(x,t)\sin \theta (x,t)}^{\text{sum of tension forces on ends of string}}+\underbrace{F(x,t)\Delta x}_{\text{external forces}}$$
Where $u(x,t)$ is the distance at time $t$ from the equilibrium point to the point at position $x$ on the string and $\theta(x,t)$ is the angle between the horizontal and the tangent line at position $x$ at time $t$. What I just do not understand is why the external forces get represented as $F(x,t)\Delta x$ and not simply as $F(x,t)$. Is it just to avoid problems when dividing by $\Delta x$ and letting $\Delta x \to 0$? Why is this justified, or what is the reasoning behind it? The left-hand side is infinitesimal, right? So is the difference between tension forces on the right-hand side. So if the author had just written $F(x,t)$, then that would have been an infinitesimal quantity too, i.e. $dF(x,t)$ in fact. But necessarily $dF(x,t) \propto \Delta x$, and the author has chosen to denote by $F(x,t)$ that proportionality factor instead. | {
"domain": "physics.stackexchange",
"id": 41728,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves",
"url": null
} |
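To see why the $\Delta x$ matters, divide the force balance by $\Delta x$ (for small slopes $\sqrt{\Delta x^2 + \Delta u^2} = \Delta x\sqrt{1 + (\Delta u/\Delta x)^2} \approx \Delta x$) and let $\Delta x \to 0$:

```latex
\rho(x)\,u_{tt}
  = \frac{T(x+\Delta x,t)\sin\theta(x+\Delta x,t) - T(x,t)\sin\theta(x,t)}{\Delta x}
    + F(x,t)
\;\xrightarrow{\;\Delta x \to 0\;}\;
\rho(x)\,u_{tt} = \frac{\partial}{\partial x}\bigl(T\sin\theta\bigr) + F(x,t)
```

Had the external force been written as a finite $F(x,t)$ rather than $F(x,t)\Delta x$, this division would leave a term $F/\Delta x$ that diverges in the limit; writing it as $F(x,t)\Delta x$ makes $F$ a force per unit length, which survives the limit as a finite density.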
java, beginner, sorting, interview-questions, search
for (int i = 0; i < results.length; i++) {
results[i] = oldestClasses.get(i).getKey();
}
Timer.finish();
return results;
Twice you had repeated code that this does only once. The first time by refactoring the repeated code out of what was effectively an if/else.
The second time, I observed that both for loops filled results. Only the length changed. It's better practice to limit by the length of the array anyway. That way if you changed the 12 to 15, you wouldn't have to do so in multiple places.
I renamed result to results, as it is an array not a single value. I find it to be a good convention to make collections and arrays plural if possible.
I got rid of the sortByValue, as I wanted to work with a List rather than a Map that I'd put in a List. If you want, you could make sortByValue return a List with the new code instead.
Prefer descriptive names like oldestClasses to names like list if there is a sensible name available.
Bug
Isn't your date sort backwards? I think it is sorting oldest first. Don't you want newest first?
Bug 2
Your code wouldn't handle the situation where there were less than twelve classes. Yes, there are supposed to be thousands in the general case. But what about a new IDE? This would be a common unit test in a test harness.
Alternative
If there is no search string, you keep sorting the whole class map to get the twelve most recent classes. Consider storing that instead. Then you could say something like
if (start.isEmpty()) {
return newestClasses;
}
and then continue with the rest of the logic.
The problem statement is ambiguous, but they may have been expecting you to do this. Index once; use many. | {
"domain": "codereview.stackexchange",
"id": 22293,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, sorting, interview-questions, search",
"url": null
} |
quantum-mechanics, heisenberg-uncertainty-principle, commutator, lie-algebra, poisson-brackets
Title: If Poisson Bracket of Momentum and Position is non-zero, why no Uncertainty Principle? In Hamiltonian classical mechanics, we have that the Poisson bracket of position
and momentum satisfies $$\{q_i, p_j\} = \delta_{ij}$$
But this implies that momentum and position 'generate' changes in each other. I.e. as we move along the flow of one, the other changes. Since this holds in classical mechanics as well as in quantum mechanics, why is there no uncertainty principle in classical mechanics? Why is it that non-commutativity under the Lie bracket operator gives rise to fundamental uncertainty in one case and not in the other?
Please answer in terms of Lie theory, if possible.

What one requires for the uncertainty principle to arise is that the relevant observables should not commute, i.e., their commutator is non-zero. The Poisson bracket of two observables is not the same as their commutator. Even if the Poisson bracket of the classical observables is non-zero, they do commute in classical mechanics--they are just real numbers. So, no uncertainty principle arises in classical mechanics.

What happens is that the commutator of observables in quantum mechanics corresponds to the Poisson bracket of the corresponding classical observables (this is the famous canonical quantization scheme up to some ordering ambiguities which do not really affect the question at hand). And thus, if the Poisson bracket of the classical observables is non-zero, the commutator of the corresponding quantum observables will be non-zero. This gives rise to the uncertainty principle in quantum mechanics.

So, in a nutshell, commutators of quantum observables correspond to the Poisson brackets of classical observables, but the commutator of classical observables (which is always zero) is not the same as their Poisson bracket.
"domain": "physics.stackexchange",
"id": 60218,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle, commutator, lie-algebra, poisson-brackets",
"url": null
} |
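The correspondence described above, together with the Robertson uncertainty relation it feeds into, can be summarized as (standard canonical quantization, with $\hbar$ made explicit):

```latex
\{q_i, p_j\} = \delta_{ij}
\;\longmapsto\;
[\hat{q}_i, \hat{p}_j] = i\hbar\,\delta_{ij},
\qquad
\Delta q_i\,\Delta p_i \;\ge\; \tfrac{1}{2}\bigl|\langle[\hat{q}_i,\hat{p}_i]\rangle\bigr| = \frac{\hbar}{2}
```

In the classical theory the observables are real-valued functions on phase space, so the commutator on the left of the Robertson bound is identically zero even though the Poisson bracket is not.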
ros, ros-melodic, performance
Title: Why tensorflow python3 script runs slower in ROS than standalone?
Ubuntu 18.04
ROS Melodic
Python 3.6.8
Tensorflow: 1.13.1
TensorRT: 5.1.5
NVIDIA: GTX1060 using Cudnn 7.6 and Cuda 10.0
When I run an object detection model developed in Python using the TensorFlow framework in ROS, the speed appears to be slower than if I run the same TensorFlow model in a pure Python environment. Is this expected behaviour?
I tried tensorflow object detection model named "ssd_mobilenet_v2_coco_2018_03_29"
python3 tensorflow_inference.py --> takes 16ms approx.
rosrun inference_node inference.py --> takes 22ms approx
Just for information my catkin workspace is build with python3 too.
For example my model stats below
python3 tensorflow_inference.py ---> takes 60ms approximately
But
rosrun inference_node inference.py and subscribing to an image broadcasted by a bag file --> takes 85ms approx.
This seems strange or I am missing some point.
Originally posted by npscars on ROS Answers with karma: 16 on 2019-07-29
Post score: 0
I found out the reason why this was happening: the image size input to python3 directly was a bit smaller than the input to the ROS node. Sorry for the confusion, but I thought it would still be useful for someone who might experience this issue like me.
Originally posted by npscars with karma: 16 on 2019-10-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33547,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-melodic, performance",
"url": null
} |
That is, there can't be two distinct points on the graph of $f(x)$ that have the same slope.
I wonder if perhaps they were thinking something like this when they wrote that: $$\lim_{x \to +\infty} (2ax + b) = (\operatorname{sgn}a)\infty$$ and $$\lim_{x \to -\infty} (2ax + b) = -(\operatorname{sgn}a)\infty,$$ where $\operatorname{sgn}a$ is $1$ if $a > 0$ and $\operatorname{sgn}a$ is $-1$ if $a < 0$. Basically the "limit" of each arm is a vertical line. That's the only thing I can think they'd be getting at, but that's an awful and pointless thing to explain at a precalculus level.
• It's worth noting that slopes are measured with projective real numbers, not extended real numbers; in particular, we should have $(-1) \cdot \infty = \infty$ here. – Hurkyl Sep 6 '16 at 16:22
• The lines can be parallel if they slope 180 degrees opposite to each other as well as if they have identical slopes - i.e. x1 = -x2. – DeadMG Sep 7 '16 at 7:14
• @DeadMG, I know, but that never happens for any $x$ on any parabola. – tilper Sep 7 '16 at 11:01
• @Hurkyl : Yes, with the extended real numbers we get hyperbolic geometry, along with a notion of distance. But we need projective geometry to understand what is going on here, and hence projective real numbers, which also dispenses with any notion of distance. – Daniel Buck Sep 7 '16 at 11:15
The parabola (top picture) and hyperbola (bottom picture), viewed projectively: all lines $y=k$ are parallel to the line at infinity, $L_{\infty}$. Here we see all non-degenerate conic sections are ellipses in the projective plane.
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9715639702485929,
"lm_q1q2_score": 0.8096281563488287,
"lm_q2_score": 0.8333245994514084,
"openwebmath_perplexity": 507.93269766141236,
"openwebmath_score": 0.7551336884498596,
"tags": null,
"url": "https://math.stackexchange.com/questions/1916720/why-do-parabolas-arms-eventually-become-parallel/1916729"
} |
c#, wpf, mvvm, xaml
<Grid>
<views:LoginView x:Name="LoginView" Opacity="0"/>
<views:MainView x:Name="MainView" Opacity="0"/>
<views:ErrorView x:Name="ErrorView" Opacity="0"/>
<Button Command="{Binding ToggleStateCommand}" VerticalAlignment="Bottom" HorizontalAlignment="Center" Width="50" Height="30"/>
</Grid>
</Window>
StateManager Attached Property
public class StateManager : DependencyObject
{
public static string GetVisualState(DependencyObject obj)
{ return (string)obj.GetValue(VisualStateProperty); }
public static void SetVisualState(DependencyObject obj, string value)
{ obj.SetValue(VisualStateProperty, value); }
public static readonly DependencyProperty VisualStateProperty =
DependencyProperty.RegisterAttached(
"VisualState",
typeof(string),
typeof(StateManager),
new PropertyMetadata((s, e) =>
{
var propertyName = (string)e.NewValue;
var ctrl = s as FrameworkElement;
if (ctrl == null)
throw new InvalidOperationException("This attached property only supports types derived from FrameworkElement.");
VisualStateManager.GoToElementState(ctrl, propertyName, true);
}));
} | {
"domain": "codereview.stackexchange",
"id": 8416,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, wpf, mvvm, xaml",
"url": null
} |
java, swing, simulation, gui, awt
public class MyWindow {
public static enum Border {
TOP,
RIGHT,
BOTTOM,
LEFT
}
public final static int MINIMUM_WIDTH = 50;
public final static int MINIMUM_HEIGHT = 50;
private final static int TITLE_BAR_HEIGHT = 30;
private final static Color ACTIVE_TITLE_BAR_BACKGROUND = new Color(255, 150, 100);
private final static Color TITLE_BAR_TEXT_COLOR = Color.WHITE;
private final static Color PASSIVE_TITLE_BAR_BACKGROUND = new Color(200, 200, 200);
private final static Color BORDER_HIGHLIGHT_COLOR = Color.GREEN;
private final static Color BODY_COLOR = new Color(50, 50, 50);
private final static Font TITLE_FONT = new Font("Monospaced", Font.BOLD, 12);
private final static int TITLE_PADDING = 14;
private String title;
private int width;
private int height;
private int x;
private int y;
private boolean active;
private boolean topBorderHightlighted;
private boolean rightBorderHighlighted;
private boolean bottomBorderHightlighted;
private boolean leftBorderHighlighted;
private boolean debug;
public MyWindow(final String title,
final int width,
final int height,
final int x,
final int y) {
this.title = title;
this.width = Math.max(width, MINIMUM_WIDTH);
this.height = Math.max(height, MINIMUM_HEIGHT) + TITLE_BAR_HEIGHT;
this.x = x;
this.y = y;
this.active = false;
}
public void setDebug(final boolean debug) {
this.debug = debug;
}
public boolean active() {
return active;
}
public boolean active(final boolean active) {
final boolean old = this.active;
this.active = active;
return old;
}
public void clearBorderHighlights() {
topBorderHightlighted = false;
rightBorderHighlighted = false;
bottomBorderHightlighted = false;
leftBorderHighlighted = false;
}
public void highlightBorder(Border border) {
switch (border) {
case TOP:
topBorderHightlighted = true;
return; | {
"domain": "codereview.stackexchange",
"id": 12520,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, swing, simulation, gui, awt",
"url": null
} |
# Which matrices are covariances matrices?
Let $V$ be a matrix.
What conditions should we require so that we can find a random vector $X = (X_1, \dots, X_n)$ so that $V = Var(X)$?
Of course necessary conditions are:
• All the elements on the diagonal should be non-negative
• The matrix has to be symmetric
• $|v_{ij}| \le \sqrt{v_{ii}v_{jj}}$ (because $|Cov(X_i, X_j)| \le \sqrt{Var(X_i) Var(X_j)}$, by the Cauchy–Schwarz inequality)
But I am sure these are not sufficient as I have a counterexample.
So what other properties should we require of a matrix so that it can be considered a covariance matrix?
I think I cleared this up sufficiently.
Okay, so
1) If $V$ is not positive semidefinite, then such a vector $X$ does not exist (since all covariance matrices are positive semidefinite).
2) If $V$ is symmetric positive semidefinite, then such an $X$ exists! [0]
This implies that
$$\text{there exists a random vector } X \text{ with } V = \mathrm{Cov}(X) \iff V \text{ is symmetric positive semidefinite}$$
Since we know that the conditions I listed in the question are necessary for $V$, we deduce that all symmetric positive semidefinite matrices have diagonal elements $\ge 0$ and satisfy $|v_{ij}| \le \sqrt{v_{ii}v_{jj}}$.
These conditions are not sufficient for a matrix to be positive semidefinite, though; but sufficient conditions are well known, after all.
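The proof below (see [0]) is constructive, and the construction is easy to sketch numerically — here with NumPy and a randomly generated PSD matrix (the names and sizes are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
V = M @ M.T                    # any M M^T is symmetric positive semidefinite

# Diagonalize: V = Q D Q^T with Q orthogonal and D = diag(eigenvalues) >= 0
eigvals, Q = np.linalg.eigh(V)
eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative round-off

# Take Y with independent components and Var(Y_i) = eigvals[i], so Cov(Y) = D;
# then X = Q Y has Cov(X) = Q D Q^T = V.  Equivalently X = A Z with
# A = Q sqrt(D) and Z i.i.d. unit variance, since then Cov(X) = A A^T.
A = Q @ np.diag(np.sqrt(eigvals))
assert np.allclose(A @ A.T, V)
```

The final assertion checks the key identity exactly, without any sampling noise: the linear map $A$ turns i.i.d. unit-variance noise into a vector with covariance $V$.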
[0] Proof
Since $V$ is symmetric, it is possible to find an orthogonal matrix $Q$ such that $V = QDQ^T$, where $D$ is a diagonal matrix whose entries are the eigenvalues of $V$. If $V$ is positive semidefinite, the entries of $D$ are all $\ge 0$, hence we can find $Y$ such that $D = Cov(Y)$ (just take independent variables with the specified variances). Then $X = QY$ satisfies $Cov(X) = Q\,Cov(Y)\,Q^T = QDQ^T = V$.
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9833429614552197,
"lm_q1q2_score": 0.8490100626472075,
"lm_q2_score": 0.8633916099737806,
"openwebmath_perplexity": 191.93880536483394,
"openwebmath_score": 0.8711594343185425,
"tags": null,
"url": "https://math.stackexchange.com/questions/1265071/which-matrices-are-covariances-matrices"
} |
complexity-theory, graphs, np-complete, colorings
Title: Graph coloring decision problem NP-complete
Given a graph $G = (V, E)$ and a natural number $k$, consider the problem of determining whether there is a way to color the vertices with two colors in such a way that at least $k$ edges are bichromatic (i.e., have endpoints of different colors).
Prove that this problem is NP-complete.
I thought about a reduction from the 3-colorability problem, but I can't figure out a possible transformation that would work.
Any leads?

A simpler reduction is from MAX-CUT. The idea is to represent each edge as a gadget which is a complete bipartite graph having $N$ vertices on each side. By choosing $N$ large enough, we can force any potential coloring to color all vertices on one side using color 1, and all vertices on the other side using color 2. You can now connect these gadgets in such a way that the maximum number of bichromatic edges is closely related to the maximum cut in the original graph.
I'll let you work out the details. | {
"domain": "cs.stackexchange",
"id": 10313,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, graphs, np-complete, colorings",
"url": null
} |
homework-and-exercises, newtonian-mechanics, rotational-dynamics
As an analogy, consider the case when the wheel was massless. Then, the equation from energy considerations would have been: $$mgh=\frac{mv^2}{2}$$
Solving the above, we get, $$v=\sqrt{2gh}$$
When we use the equation $v^2=u^2+2gh$, setting $u=0$ for the system starting from rest, we again get, $v=\sqrt{2gh}$. The case with a massless wheel is a bit boring, because it is just the case without a wheel, i.e., a free falling body.
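If the wheel does have mass (moment of inertia $I$, radius $r$, string not slipping so $v = \omega r$), the same energy bookkeeping gains a rotational term:

$$mgh = \frac{1}{2}mv^{2} + \frac{1}{2}I\omega^{2} = \frac{1}{2}\left(m + \frac{I}{r^{2}}\right)v^{2},$$

which reduces to the free-fall result as $I \to 0$.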
In classical mechanics, there are always multiple routes that lead to the same answer, so you could start from the kinematics equations if you can argue that the angular acceleration $\alpha$ is constant. However, if you don't start from the energy point of view, you have to work from a force point of view. So, what you have to do is draw the forces on your sketch (you have a sketch, right?). Then, what you need is the tension in the string. The tension force $T$ exerted by the string on the mass is the same as the tension force exerted by the string on the wheel (why?).
Since this is a homework question, I'll leave the rest to you (hint: determine $\alpha$ based on what you know about $T$). The force and energy approach will lead to the same answer. However, in some cases, one approach is much easier to analyze than the other. | {
"domain": "physics.stackexchange",
"id": 72990,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics",
"url": null
} |
If you have ax^2 + bx + c = 0 and you want to factor, the first thing I would do is divide both sides by a. Here is an example: 4x^2 + 12x + 9 = 0. Dividing both sides by 4 yields: x^2 + 3x + (9/4) = 0. You might be able to recognize this as a perfect square: The square root of 9/4 is 3/2 and (3/2)x + (3/2)x = (6/2)x = 3x. So we have x^2+3x+(9/4) = (x + 3/2)^2
ok, what if there is a variable in the middle, like x to the second, minus x, minus 20?
You can still factor x^2 - x - 20. The -x would be the same as -1x. I prefer plugging these things into the quadratic formula, because then you get exact answers without the guess-and-check method.
How do you do this if the question is t squared + 8t minus 15?
equals what? you have to complete the square or use the quadratic formula if you wanted to find t
Well, to be fair to teachers, they do have questions asked all the time. . . . but as for the 'taught it better' part, that depends on the teacher. A bit of a redundant statement, even by my standards. And the awesome thing is that we can ask our questions in the comments and know that we won't be shunned by the world for asking a redundant question, or a simple question that many people know but a few just can't grasp.
what if you had something like 2x(x+2)-3(x-2). How would you further simplify it?
You would have to expand it out using the distributive property. Then you could simplify it into a trinomial. That particular problem cannot be factored using real numbers, so a trinomial would be its simplest form.
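One way to automate the "two numbers that multiply to the constant term and add to the middle coefficient" search from this thread is a brute-force divisor scan (a hypothetical helper; it only finds integer pairs, so e.g. t^2 + 8t - 15 correctly comes back empty):

```python
def factor_pair(b, c):
    """Find integers p, q with p + q == b and p * q == c, if any exist."""
    if c == 0:
        return (0, b)              # x^2 + bx = x(x + b)
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0 and p + c // p == b:
            return (p, c // p)
    return None                    # no integer pair: use the quadratic formula

# x^2 - x - 20: b = -1, c = -20 -> (-5, 4), i.e. (x - 5)(x + 4)
print(factor_pair(-1, -20))        # (-5, 4)
# t^2 + 8t - 15 has no integer pair:
print(factor_pair(8, -15))         # None
```

When the helper returns `None`, that is exactly the "complete the square or use the quadratic formula" situation mentioned above.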
Hi everyone, for factoring quadratic expressions how do you know what numbers to put in the parenthesis. I understand that you find numbers that multiply to whatever x and add up to the other number but does anyone have tips for finding out these numbers?? | {
"domain": "appspot.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9674102524151826,
"lm_q1q2_score": 0.8203326975969437,
"lm_q2_score": 0.8479677526147223,
"openwebmath_perplexity": 662.2570905678609,
"openwebmath_score": 0.5798130631446838,
"tags": null,
"url": "https://khan-academy.appspot.com/math/algebra-basics/quadratics-polynomials-topic/factoring-quadratic-expressions-core-algebra/v/factoring-trinomials-with-a-leading-1-coefficient?_escaped_fragment_="
} |
dna, codon
Title: Will a codon result in the same amino acid across organisms? Will all organisms with the same 3-nucleotide sequence in a codon produce the exact same amino acid? I read that the three-nucleotide sequence will code for a particular amino acid. I did not understand if that is the case across organisms. Can someone explain this in simple terms?

Although the answer to this question may be found in Mitochondrial Genetic code, because that answer is primarily about mitochondrial genetic codes, I shall give a more directed answer here.
Because the same genetic code that was elucidated in bacteria was found to apply to higher eukaryotes, it was initially assumed that the genetic code was universal, and was referred to as such. Subsequently it was discovered that mitochondria did not generally employ this ‘universal’ code, which is now usually referred to as the ‘standard’ code — indeed different mitochondrial codes were found in different organisms.
However the question seems to be more concerned with the genetic codes of bacteria and the nuclear genetic codes of eukaryotes. Here also there are deviations from the standard genetic code, which can be found listed either on this Wikipedia page or at NCBI.
Below are some examples from the NCBI list (where references may be found) with the standard coding in parentheses:
Mycoplasma
UGA Trp (Ter)
Ciliates, Dasycladacean and Hexamita
UAA Gln (Ter)
UAG Gln (Ter)
Euplotidae
UGA Cys (Ter)
Candidate Division SR1, Gracilibacteria
UGA Gly (Ter)
Pachysolen tannophilus
CUG Ala (Leu)
Finally, tRNAs for the ‘additional’ amino acids, selenocysteine and pyrrolysine recognize, respectively, the UGA and UAG stop codons in specific contexts. | {
"domain": "biology.stackexchange",
"id": 6178,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "dna, codon",
"url": null
} |
coffeescript
Since tF is only used once, you can get rid of it and use $(this) directly.
Next, extract the input width size calculation to one function and you get something like this.
getNewInputSize = (el, formWidth) ->
inputBorder = el.outerWidth() - el.innerWidth()
inputPadding = parseInt( el.css("padding-left"), 10) + parseInt(el.css("padding-right"), 10)
formWidth - inputBorder - inputPadding
resizeFormInputs = ->
$("form").each ->
formWidth = $(this).width()
$(this).find("input").each ->
$(this).css "width", getNewInputSize($(this), formWidth)
Final Code:
$ ->
getNewInputSize = (el, formWidth) ->
inputBorder = el.outerWidth() - el.innerWidth()
inputPadding = parseInt( el.css("padding-left"), 10) + parseInt(el.css("padding-right"), 10)
formWidth - inputBorder - inputPadding
resizeFormInputs = ->
$("form").each ->
formWidth = $(this).width()
$(this).find("input").each ->
$(this).css "width", getNewInputSize($(this), formWidth)
$(window).resize(resizeFormInputs).triggerHandler "resize" | {
"domain": "codereview.stackexchange",
"id": 2442,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "coffeescript",
"url": null
} |
c#, asp.net
Title: Validating that all strings in an array match a condition I want to validate a list of objects, for example strings, and if one of the objects fails the condition, return false as the validation result. This is the code I use:
public static bool AreValid(string[] strs)
{
foreach (string str in strs)
{
if (str != condition )
{
return false; // does this break the for loop?
// break; // no need for this
}
}
return true;
}
Is this a correct approach?

Yes, return* immediately returns from the method, no matter where in the method you are. You don't need the break.
But there is even easier way to write this code, using the LINQ method All():
strs.All(str => str == condition)
This also returns as soon as a single non-matching element is found, and is more readable.
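For comparison, the same short-circuiting one-liner exists in Python as the built-in `all()` (the `condition` parameter here is a stand-in, just like the placeholder in the original snippet):

```python
def are_valid(strs, condition):
    # all() stops at the first failing element, just like LINQ's All()
    # or the early return inside the loop.
    return all(s == condition for s in strs)

print(are_valid(["ok", "ok", "ok"], "ok"))   # True
print(are_valid(["ok", "bad", "ok"], "ok"))  # False
```

Note that, like `All()`, `all()` on an empty sequence is vacuously `True`.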
* Actually, finally blocks run before you actually return from the method, but that's not relevant here. | {
"domain": "codereview.stackexchange",
"id": 22430,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, asp.net",
"url": null
} |
algorithms, time-complexity
Right? If you ignore the word size then both arrays use $\Theta(n)$ space. Remember that $32n = \Theta(n)$.
If your word has a (non-constant) length of $w$ bits and all the integers in a1 fit in a constant number of words (with no additional assumptions on their values), then a1 still uses $\Theta(n)$ words, while you can represent a2 using $\Theta(\lceil \frac{n}{w} \rceil)$ words (by packing groups of $w$ bits of a2 into a single word).
A common choice of $w$ in the word-RAM model is $w=\Theta(\log n)$.
In your Python example there is no difference between a1 and a2 since both lists are storing integers (using a fixed number of bytes that depends on the implementation and architecture, and assuming that the stored integers fit within the maximum integer representable using these bytes; handling arbitrary-precision integers is another story).
However there might be ad-hoc types specifically designed to handle indexed collection of bits (e.g., bitsets).
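The packing described above (groups of $w$ bits per word) can be sketched in plain Python; `w = 64` here is just an illustrative word size:

```python
def pack_bits(bits, w=64):
    """Pack a list of 0/1 values into ceil(n / w) w-bit words."""
    words = []
    for i in range(0, len(bits), w):
        word = 0
        for j, bit in enumerate(bits[i:i + w]):
            word |= bit << j
        words.append(word)
    return words

def get_bit(words, i, w=64):
    """Read bit i back out of the packed words."""
    return (words[i // w] >> (i % w)) & 1

bits = [1, 0, 1, 1, 0, 0, 1] * 20          # 140 bits
packed = pack_bits(bits)
assert len(packed) == 3                    # ceil(140 / 64) words, not 140 slots
assert [get_bit(packed, i) for i in range(len(bits))] == bits
```

The storage drops from one list slot per bit to $\lceil n/w \rceil$ words, exactly the $\Theta(\lceil n/w \rceil)$ figure above (though a real Python `int` in a list still costs far more than one machine word — this is only a sketch of the idea).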
Also, the language/implementation might optimize the array representation when it knows that it will store bits. This is the case, for example, of std::vector<bool> in C++, which can possibly reduce the space usage by some constant factor by packing bits into integers (as described above). For a fixed words size (e.g., 32 or 64 bits), this does not change the asymptotic space complexity. | {
"domain": "cs.stackexchange",
"id": 16338,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, time-complexity",
"url": null
} |
c++, integer, networking, bitwise, portability
auto a = h.get(&Msg::a);
auto b = h.get(&Msg::b);
}
This eliminates the use of preprocessor macros, and makes a slightly more natural object-access syntax.
Consider modifying the structure members
If we're allowed to re-write Msg, we could make its members be network-endian values:
#include <cstdint>
#include <type_traits>
struct Msg {
NetworkEndian<std::uint32_t> a;
NetworkEndian<std::uint16_t> b;
};
We can declare NetworkEndian like this:
template<typename Integer>
class NetworkEndian
{
Integer value;
public:
NetworkEndian(Integer value)
: value(byte_swap(value))
{}
// default copy, assign, destructor
operator Integer() const
{
return byte_swap(value);
}
private:
static Integer byte_swap(Integer v)
{
using Unsigned = std::make_unsigned_t<Integer>;
Unsigned u = static_cast<Unsigned>(v); // static_cast: reinterpret_cast cannot convert between integer types
// Assume CHAR_BIT == 8 (so that sizeof gives us octets)
static const Unsigned mask = 0xff;
Unsigned result = 0;
for (auto shift = 0u; shift < 8 * sizeof u; shift += 8)
result = (result << 8) + ((u >> shift) & mask);
return static_cast<Integer>(result);
}
};
The conversion operators mean that we can now read the members in a completely natural manner:
void process(const Msg& message)
{
auto a = message.a;
auto b = message.b;
}
Not only that, but by providing a converting constructor, we also get to assign values equally naturally. | {
"domain": "codereview.stackexchange",
"id": 27956,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, integer, networking, bitwise, portability",
"url": null
} |
i of matrix A by each element of column j of matrix B and then adding them. We do this first with simple numerical examples and then using geometric diagrams. Know the definition and basic properties of a fundamental matrix for such a system. Let (A, B, C) be any choice of matrices. Commutative, Associative and Distributive Laws. It's the associative property of matrix multiplication. Matrix Multiplication: Warnings. Properties above are analogous to properties of real numbers. In this page multiplication properties of matrices we are going to see some properties in multiplication. Matrix Multiplication Lesson. For example, the number of walks of length 2 is the number of vertices k such that there is an arc from i to k and an arc from k to j. The first theorem stated that 0v = 0 for all vectors v. The cross product of two vectors a= and b= is given by Although this may seem like a strange definition, its useful properties will soon become evident. OLS in Matrix Form 1 The True Model. Let X be an n × k matrix where we have observations on k independent variables for n observations. For example 4 + 2 = 2 + 4 Associative Property: When three or more numbers are added. Pre‐requisite: Inverse of a matrix, addition, multiplication and transpose of a matrix. 7 Example (Matrix groups). This is the general linear group of 2 by 2 matrices over the reals R. Multiplication of Matrices. 1 (Properties of Matrix Addition and Scalar Multiplication) By (b), sum of multiple matrices are written as A + B + ⋯ + M. jugate, there is some other matrix Q such that Since the associative law holds for matrix multiplication, the theorem is proved in the following way. Matrix multiplication is associative; for example, given 3 matrices A, B and C, the following identity is always true. Formula IA = A; BI = B whenever the products are defined. Math Goals (Standards for. A product of permutation matrices is again a permutation matrix. 
We shall see the reason for this in a little while. Many of the basic properties of expected value of random variables have analogous results for expected value of random matrices, with matrix operation replacing the ordinary ones. This is a consequence of lemma 1. In this case, we have $e^{tA} = Pe^{tD}P^{-1}$
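The associativity and identity claims repeated above can be spot-checked numerically (a NumPy sketch; the matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2))

# Associative law: (AB)C = A(BC), up to floating-point round-off
assert np.allclose((A @ B) @ C, A @ (B @ C))

# Identity law: IA = A and BI = B whenever the products are defined
assert np.allclose(np.eye(3) @ A, A)
assert np.allclose(B @ np.eye(5), B)
```

Commutativity, by contrast, fails in general — for the shapes above, `B @ A` is not even defined.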
"domain": "ihuq.pw",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9869795114181105,
"lm_q1q2_score": 0.857514230470787,
"lm_q2_score": 0.8688267796346599,
"openwebmath_perplexity": 6607.073204715545,
"openwebmath_score": 0.199847012758255,
"tags": null,
"url": "http://nbkx.ihuq.pw/properties-of-matrix-multiplication-proof.html"
} |
type-theory, dependent-types, type-checking, coq
Presumably this will not work because a cannot be typed, but if we unfold its definition, we get a well-typed expression. Do you think the users will love us, or hate us for our design decision?
You need to think carefully what it means to have the "special case". If I have an application e₁ e₂, should I normalize e₁ before I decide whether it is a $\lambda$-abstraction? If yes, this means I will be normalizing ill-typed expressions, and those might cycle. If no, the usability of your proposal seems questionable.
You would also break the fundamental theorem which says that every sub-expression of a well-typed expression is well-typed. That's as sensible as introducing null into Java. | {
"domain": "cs.stackexchange",
"id": 21780,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "type-theory, dependent-types, type-checking, coq",
"url": null
} |
| Color Name | Short Name | RGB Triplet | Hexadecimal Color Code | Appearance |
|---|---|---|---|---|
| 'green' | 'g' | [0 1 0] | '#00FF00' | |
| 'blue' | 'b' | [0 0 1] | '#0000FF' | |
| 'cyan' | 'c' | [0 1 1] | '#00FFFF' | |
| 'magenta' | 'm' | [1 0 1] | '#FF00FF' | |
| 'yellow' | 'y' | [1 1 0] | '#FFFF00' | |
| 'black' | 'k' | [0 0 0] | '#000000' | |
| 'white' | 'w' | [1 1 1] | '#FFFFFF' | |
| 'none' | Not applicable | Not applicable | Not applicable | No color |
Here are the RGB triplets and hexadecimal color codes for the default colors MATLAB® uses in many types of plots.
| RGB Triplet | Hexadecimal Color Code |
|---|---|
| [0 0.4470 0.7410] | '#0072BD' |
| [0.8500 0.3250 0.0980] | '#D95319' |
| [0.9290 0.6940 0.1250] | '#EDB120' |
| [0.4940 0.1840 0.5560] | '#7E2F8E' |
| [0.4660 0.6740 0.1880] | '#77AC30' |
| [0.3010 0.7450 0.9330] | '#4DBEEE' |
| [0.6350 0.0780 0.1840] | '#A2142F' |
Line width, specified as a positive value in points, where 1 point = 1/72 of an inch. If the line has markers, then the line width also affects the marker edges.
The line width cannot be thinner than the width of a pixel. If you set the line width to a value that is less than the width of a pixel on your system, the line displays as one pixel wide.
Marker size, specified as a positive value in points, where 1 point = 1/72 of an inch.
Marker outline color, specified as 'auto', an RGB triplet, a hexadecimal color code, a color name, or a short name. The default value of 'auto' uses the same color as the Color property.
For a custom color, specify an RGB triplet or a hexadecimal color code. | {
"domain": "mathworks.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.982287698703999,
"lm_q1q2_score": 0.8667980397304585,
"lm_q2_score": 0.8824278680004707,
"openwebmath_perplexity": 2821.3270388311807,
"openwebmath_score": 0.5981382131576538,
"tags": null,
"url": "https://au.mathworks.com/help/matlab/ref/plot3.html"
} |
waves, fourier-transform, frequency, interactions, non-linear-systems
$$
(\sin (\omega_0\cdot t) + \sin(\omega_1\cdot t))^2
= \sin(\omega_0\cdot t)^2
+ 2\cdot \sin (\omega_0\cdot t) \cdot\sin(\omega_1\cdot t)
+ \sin(\omega_1\cdot t)^2
$$
The $\sin(\omega_i\cdot t)^2$ terms† are often ignored. The reason is that these can be written in terms of $\sin(2\cdot\omega_i\cdot t)$, and double-frequency is often already present in the signal anyway (real-world signals can very well be periodic at frequency $\tfrac{\omega_i}{2\pi}$, but they won't be exact sinusoidals, meaning they can be interpreted as a Fourier series of integer-multiple frequencies). But
$$
2\cdot \sin (\omega_0\cdot t) \cdot\sin(\omega_1\cdot t)
= \cos ((\omega_0-\omega_1)\cdot t) - \cos ((\omega_0+\omega_1)\cdot t)
$$
...and those sum- and difference frequencies were quite definitely not at all in either of the individual signals.
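This can be checked numerically — squaring the sum of two unit sines (at hypothetically chosen 50 Hz and 80 Hz) and inspecting the spectrum:

```python
import numpy as np

fs = 1024                        # samples over one second -> 1 Hz bins
t = np.arange(fs) / fs
f0, f1 = 50.0, 80.0
y = (np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * f1 * t)) ** 2

spec = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
peaks = set(freqs[spec > 0.1])

# DC, 100 Hz and 160 Hz come from the sin^2 terms; 30 Hz and 130 Hz are the
# difference and sum frequencies from the cross term.
assert peaks == {0.0, 30.0, 100.0, 130.0, 160.0}
```

Neither input contained anything at 30 Hz or 130 Hz; those lines appear purely from the quadratic nonlinearity.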
For a sufficiently pathological nonlinear function, the quadratic approximation will be no good either – you'll get a whole mess of frequencies across the spectrum out of only two sines. But often, what we're interested in (or try to build) are systems that are to good approximation linear, and then the remainder is small and can be modelled by a quadratic. | {
"domain": "physics.stackexchange",
"id": 59042,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, fourier-transform, frequency, interactions, non-linear-systems",
"url": null
} |
quantum-mechanics, angular-momentum, operators, momentum, quantum-spin
$$\frac{d}{dt}\langle \sin\hat{\theta} \rangle = \langle\frac{1}{2}(\hat{\omega}_z \cos\hat{\theta}+ \cos\hat{\theta}\hat{\omega}_z)\rangle$$
The reason why we can't simply use the operator $\hat{\theta}$ is that $\hat{L}_z$ is only a Hermitian operator if its domain is restricted to periodic functions, and $\hat{\theta}$ maps periodic functions to non-periodic functions. So if we want to keep things within the domain of $L_z$ we need to work with an operator $f(\hat{\theta})$ where $f$ is a periodic function. And the simplest periodic functions which make $f(\hat{\theta})$ a Hermitian operator are sine and cosine. (It needs to be Hermitian if we want our Ehrenfest result to be between observable quantities.)
EDIT: The paper also provides a more general result for arbitrary periodic functions $f$ with period $2\pi$:
$$\frac{d}{dt}\langle f(\hat{\theta}) \rangle = \langle \frac{1}{2}(\hat{\omega}_z f'(\hat{\theta})+ f'(\hat{\theta})\hat{\omega}_z)\rangle$$
where again $f(\hat{\theta})$ and $f'(\hat{\theta})$ are defined via Taylor series.
Note that while this formula is true for all such functions $f$, in order it to be a result between observable quantities $f(\hat{\theta})$ needs to be a Hermitian operator. I posted a question here to find out what functions $f$ make $f(\hat{\theta})$ Hermitian. | {
"domain": "physics.stackexchange",
"id": 35211,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, angular-momentum, operators, momentum, quantum-spin",
"url": null
} |
# How to prove this inequality $x^3y+y^3z+z^3x\ge xyz(x+y+z)$
Let $x>0$, $y>0$ and $z>0$. Show that $$x^3y+y^3z+z^3x\ge xyz(x+y+z).$$
I know we can't assume WLOG that $x\ge y\ge z$; if we could, I could use the rearrangement inequality, but otherwise I can't see how. Thanks!
• Suppose for the time being $x\ge y\ge z$. Now observe that, \begin{align}(x^3y-x^2yz)+(y^3z-xy^2z)+(z^3x-xyz^2)&=x^2(xy-yz)+y^2(yz-zx)+z^2(zx-xy)\\&=(x^2-z^2)(xy-yz)+(y^2-z^2)(yz-zx)\\&=y(x^2-z^2)(x-z)+z(y^2-z^2)(y-z)\\&\ge0\end{align}To justify the last step we use the assumption that $x\ge y\ge z$ and the hypothesis that $x,y,z>0$. The other cases may be dealt in a similar manner. – user 170039 Aug 28 '17 at 15:42
You cannot assume $x\geq y\geq z$ because our inequality is cyclic and not symmetric,
but we can say that $(x^2,y^2,z^2)$ and $\left(\frac{1}{x},\frac{1}{y},\frac{1}{z}\right)$ are opposite ordered. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9820137892483239,
"lm_q1q2_score": 0.8096329804120288,
"lm_q2_score": 0.8244619263765707,
"openwebmath_perplexity": 842.8857161706624,
"openwebmath_score": 0.7603166699409485,
"tags": null,
"url": "https://math.stackexchange.com/questions/2408805/how-prove-this-inequality-x3yy3zz3x-ge-xyzxyz"
} |
everyday-chemistry, metal, toxicity
*Spin-off questions:
Is there any substance to my teacher's claim? How come those asses that performed this mundane task survived and now live (seemingly) normal lives without going insane? Were they already insane prior to touching the mercury? Should I just go and touch some mercury, partly to discover if it is indeed possible to touch metal mercury safely (and partly just to let out all the stress that built up while typing out this question)?

Mercury is toxic, but you need to carefully define what you mean by toxic or you draw incorrect conclusions
Toxic is a broad term. It means a lot of different things. The timescale matters. Some toxic things take years to exhibit their effects; others act instantly.
A binary distinction between toxic and not-toxic is pretty meaningless: you need to define the context and the timescale of the toxicity.
Mercury metal and mercury compounds are usually considered toxic. But their effects are varied in time and degree. Mercury metal is pernicious but only if you are exposed to it over a long time period. In fact you could probably drink it with few ill effects. The body just doesn't absorb it quickly. What is dangerous about mercury is not short term exposure to the metal but long term exposure to the vapour. This is why people don't suffer immediate ill effects when handling the metal even without skin protection.
Mercury vapour is readily absorbed in the body and will accumulate in tissue causing a variety of long term effects. This was discovered by mercury miners who often developed long term problems from their exposure. And it was documented for science by some chemists who started to suffer effects after working with the metal over long periods of time and managed to document their own decline (see Stock's work, for example). Mercury metal is often widely used in laboratories to provide a limited overpressure for gas distribution (you allow the gas to bubble through a mercury manometer).
Since the toxicity was recognised, chemists have been a lot more careful and always avoid vapour buildup by working in well ventilated spaces and making sure that manometers containing mercury are vented safely to the outside (via scrubbing filters) along with other potentially toxic vapours.
There is little immediate risk when working with metallic mercury as long as you don't spill it somewhere where it will collect and allow vapour to build up in the atmosphere. | {
"domain": "chemistry.stackexchange",
"id": 7200,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "everyday-chemistry, metal, toxicity",
"url": null
} |
matlab, digital-communications, spread-spectrum, baseband, cdma
Title: Adding Narrowband Interference to Spread Spectrum Signal in MATLAB (Baseband Model) I am interested in adding a narrowband interfering signal to my spread spectrum signal.
In the waveform model, it's straightforward to generate a narrowband signal within the frequency range of the spread spectrum. However, I'm unsure about how to add a narrowband interfering signal to a spread spectrum signal in a baseband model using MATLAB. Could anyone guide me on this?

A single-tone interferer at baseband is given as $e^{j\omega_c t}$ where $\omega_c$ is the radian frequency within the signal's bandwidth.
For example, if we had a spread spectrum signal with bandwidth of 10 MHz centered on a 1 GHz carrier, and we had a single-tone jammer at 1.001 GHz (1 MHz higher than the carrier frequency), then at baseband where the same spread spectrum waveform would be centered on DC (and extending +/-5 MHz), we would create the jammer using $Ke^{j2\pi (1e6) t}$, where $K$ is a real scaling constant.
What may not be clear and leading to the question is that the waveform in the passband is identical to the waveform at complex baseband other than the carrier changing from a real sinusoid to DC. Since a real waveform must have a complex conjugate symmetric spectrum (the negative frequency spectrum is identical to the positive frequency spectrum with opposite phase), then the baseband waveform must be complex in order to have an independent spectrum on each side of the carrier (such as our example of a single tone jammer offset from the carrier). This is demonstrated in the graphic below showing how we frequency translate a waveform from an RF carrier to complex baseband.
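A sketch of that recipe in Python rather than MATLAB (all parameters are hypothetical, and the random ±1 chips merely stand in for a real spread-spectrum baseband signal):

```python
import numpy as np

fs = 20e6                                  # hypothetical baseband sample rate
n = 4096
t = np.arange(n) / fs

# Stand-in for the spread-spectrum baseband signal: random +/-1 chips
rng = np.random.default_rng(0)
chips = rng.choice([-1.0, 1.0], size=n).astype(complex)

# Single-tone jammer 1 MHz above the carrier, scaled by K
K = 0.5
jammer = K * np.exp(1j * 2 * np.pi * 1e6 * t)

rx = chips + jammer                        # received complex baseband signal

# The jammer shows up as the dominant spectral line near +1 MHz
spec = np.abs(np.fft.fftshift(np.fft.fft(rx)))
freqs = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))
peak = freqs[np.argmax(spec)]
assert abs(peak - 1e6) < 10e3
```

The complex exponential places the tone on only one side of DC, which is exactly the asymmetric-spectrum point made above.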
Note too from Euler's formula that a real sinusoid is made up of two individual tones in the Fourier spectrum; one at a positive frequency as $e^{j\omega t}$ and the other at a negative frequency as $e^{-j\omega t}$:
$$\cos(\omega t) = \frac{1}{2}e^{j\omega t} + \frac{1}{2}e^{-j\omega t}$$ | {
"domain": "dsp.stackexchange",
"id": 12421,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, digital-communications, spread-spectrum, baseband, cdma",
"url": null
} |
# Choosing n balls from 2 types
I want to choose n balls from 2 types using generating functions.
Normally I would think to write $$f(x)=(1+x+...+x^n)^2 = \left ( \frac{1-x^{n+1}}{1-x} \right )^2$$ and then look for the coefficient of $x^n$, but I'm thinking that since any coefficient after $x^n$ won't contribute anything I should be able to use the simpler expression $$(1+x+...)^2 = \left ( \frac{1}{1-x} \right )^2$$ Is this correct? Is it something I would need to prove or is the simple explanation above sufficient?
## 1 Answer
It is correct that the simpler expression gives the same answer as the original. As to whether you need to prove it, that would depend on your audience. If you are a 1st-year undergraduate writing a homework assignment, the marker might want to be convinced that you know what you're doing. If you are writing a paper for Inventiones Mathematicae, you can safely assume the reader will fill in the dots.
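For completeness, the coefficient extraction behind both expressions:

$$\left(\frac{1}{1-x}\right)^{2}=\sum_{m\ge 0}(m+1)\,x^{m}, \qquad\text{so}\qquad [x^{n}]\left(\frac{1}{1-x}\right)^{2}=n+1,$$

and multiplying by $(1-x^{n+1})^{2} = 1 - 2x^{n+1} + x^{2n+2}$ only changes coefficients of degree greater than $n$, so $[x^n]$ is unaffected. The value $n+1$ also matches the direct count: the first type can contribute $0, 1, \dots, n$ balls, with the second type taking the rest.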
This is part of a larger 1st year undergrad assignment. Do you think stating that since the coefficients greater than $x^n$ don't contribute they can be ignored and the simpler expression can thus be used would be a sufficient explanation? – Robert S. Barnes Jan 20 '12 at 11:38
I think there is only one person who can answer that question, and she's the one who gave you the assignment. Better ask her. Alternatively, err on the safe side; you'll never get into trouble for giving too much justification. – Gerry Myerson Jan 20 '12 at 11:59 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534338604444,
"lm_q1q2_score": 0.8001127326778507,
"lm_q2_score": 0.8152324960856175,
"openwebmath_perplexity": 251.9416813298656,
"openwebmath_score": 0.8015778064727783,
"tags": null,
"url": "http://math.stackexchange.com/questions/100716/choosing-n-balls-from-2-types"
} |
compilers, strings, exact-string-matching
Title: Failure function of the Fibonacci strings (The Dragon Book Exercise 3.4.9d) Exercise 3.4.9 of The Dragon Book defines Fibonacci strings as follows:
$s_1 = b$.
$s_2 = a$.
$s_k = s_{k-1}s_{k-2}$ for $k>2$.
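These definitions are easy to experiment with in code — the following sketch builds $s_n$, computes its KMP failure function, and (as a spot check, not a proof) verifies the closed form asked for in part (d) below:

```python
def fib_string(n):
    """Fibonacci strings: s_1 = 'b', s_2 = 'a', s_k = s_{k-1} s_{k-2}."""
    prev, cur = "b", "a"
    for _ in range(n - 2):
        prev, cur = cur, cur + prev
    return prev if n == 1 else cur

def failure(s):
    """KMP failure function, 1-indexed: f[j] is the length of the longest
    proper prefix of s that is also a suffix of s[:j]."""
    f = [0] * (len(s) + 1)
    t = 0
    for j in range(2, len(s) + 1):
        while t > 0 and s[j - 1] != s[t]:
            t = f[t]
        if s[j - 1] == s[t]:
            t += 1
        f[j] = t
    return f

n = 8
s = fib_string(n)
f = failure(s)

lengths = {1: 1, 2: 1}                     # the |s_k| are Fibonacci numbers
for k in range(3, n + 1):
    lengths[k] = lengths[k - 1] + lengths[k - 2]

# Check the claimed closed form for 2 < j <= |s_n|
for j in range(3, len(s) + 1):
    k = max(k for k in lengths if lengths[k] <= j + 1)
    assert f[j] == j - lengths[k - 1]
```

For instance $s_6 = \texttt{abaababa}$ has failure values $0,0,1,1,2,3,2,3$, matching the formula at every position past the second.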
Part (d) then poses the following question:
Show that the failure function of $s_n$ can be expressed by $f(1) = f(2) = 0$, and for $2 < j \le |s_n|$, $f(j)$ is $j - |s_{k-1}|$ where $k$ is the largest integer such that $|s_k| \le j + 1$. | {
"domain": "cs.stackexchange",
"id": 20346,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "compilers, strings, exact-string-matching",
"url": null
} |
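The claimed expression in part (d) above can be checked against the standard KMP failure function for small $n$ (a Python sketch; `claimed` implements the exercise's formula, `failure` is the textbook computation):

```python
def fib_string(n):
    """s1 = 'b', s2 = 'a', s_k = s_{k-1} + s_{k-2}."""
    s = {1: 'b', 2: 'a'}
    for k in range(3, n + 1):
        s[k] = s[k - 1] + s[k - 2]
    return s[n]

def failure(s):
    """Standard KMP failure function, 1-indexed: f(j) = length of the
    longest proper prefix of s[:j] that is also a suffix of s[:j]."""
    f = [0] * (len(s) + 1)
    t = 0
    for j in range(2, len(s) + 1):
        while t and s[j - 1] != s[t]:
            t = f[t]
        if s[j - 1] == s[t]:
            t += 1
        f[j] = t
    return f[1:]

def claimed(n):
    """f(1) = f(2) = 0; for j > 2, f(j) = j - |s_{k-1}| with k the
    largest integer such that |s_k| <= j + 1."""
    lengths = {1: 1, 2: 1}
    for k in range(3, n + 1):
        lengths[k] = lengths[k - 1] + lengths[k - 2]
    out = []
    for j in range(1, lengths[n] + 1):
        if j <= 2:
            out.append(0)
        else:
            k = max(k for k in range(1, n + 1) if lengths[k] <= j + 1)
            out.append(j - lengths[k - 1])
    return out

for n in range(3, 9):
    assert failure(fib_string(n)) == claimed(n)
```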
machine-learning, scikit-learn, loss-function, terminology, gbm
Title: Loss function in GradientBoostingRegressor Scikit Learn GradientBoostingRegressor:
I was looking at the scikit-Learn documentation for GradientBoostingRegressor.
Here it says that we can use 'ls' as a loss function which is least squares regression. But I am confused since least squares regression is a method to minimize the SSE loss function.
So shouldn't they mention SSE here? It would seem that you are over-interpreting what is essentially just convenience shorthand names for the model arguments, and not formal terminology; here, "‘ls’ refers to least squares regression" should be interpreted as "'ls' is the loss function used in least-squares regression".
Formally you do have a point of course - sse would be a more appropriate naming convention here; discussions about such naming conventions are not uncommon among the community, see for example the thread loss function name consistency in gradient boosting (which BTW was resolved here). And you would be most welcome opening a relevant issue for the convention used here. | {
"domain": "datascience.stackexchange",
"id": 8484,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, scikit-learn, loss-function, terminology, gbm",
"url": null
} |
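The naming point above ("'ls' is the loss function used in least-squares regression") can be made concrete: with squared-error loss, the negative gradient each boosting stage fits is, up to a constant factor, the residual. A toy Python sketch with made-up numbers:

```python
# With squared-error loss L(y, F) = (y - F)^2 (scikit-learn's 'ls'), the
# negative gradient with respect to the current prediction F is 2 * (y - F):
# up to a constant, each boosting stage fits the residuals y - F.
y = [3.0, -1.0, 2.0]                       # targets (illustrative values)
F = [0.5, 0.5, 0.5]                        # current ensemble predictions
residuals = [yi - fi for yi, fi in zip(y, F)]
neg_grad = [2.0 * (yi - fi) for yi, fi in zip(y, F)]
assert all(abs(g - 2.0 * r) < 1e-12 for g, r in zip(neg_grad, residuals))
```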
# Proof that this set is convex
I need help with proving that a given set is a convex set:
$\{ x \in R^n \mid Ax \leq b, Cx = d \}$
I know the definition of convexity: $X \subseteq R^n$ is a convex set if $\forall \alpha \in R$ with $0 \leq \alpha \leq 1$ and $\forall x,y \in X$ it holds that $\alpha x + (1 - \alpha)y \in X$.
I tried to apply this to my set, but I don't know how to prove that it works... Thanks in advance for any tips.
-
Take two vectors in the set and do convex combination. The result still lies in the set, thus it's convex. – FrenzY DT. Dec 5 '12 at 8:54
Suppose $Ax\leq b,Cx=d$ and $Ay\leq b,Cy=d$. Now, $$A(\alpha x+(1-\alpha )y)=\alpha Ax+(1-\alpha )Ay\leq\alpha b+(1-\alpha )b=b(\alpha +1-\alpha)=b$$ and similarly one can show $C(\alpha x+(1-\alpha )y)=d$.
HINT: Via the definition. Define the set $S$, and let $X_1, X_2 \in S$. Then $$\begin{cases} AX_1\le b, CX_1=d \\ AX_2\le b, CX_2=d \\ \end{cases}$$ The convex combination of $X_1$ and $X_2$ is $X=\alpha X_1 + (1-\alpha) X_2$, where $\alpha\in[0,1]$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9787126463438263,
"lm_q1q2_score": 0.8001704857040839,
"lm_q2_score": 0.8175744828610095,
"openwebmath_perplexity": 180.37316498752645,
"openwebmath_score": 0.9763045907020569,
"tags": null,
"url": "http://math.stackexchange.com/questions/251414/proof-that-this-set-is-convex"
} |
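Not a proof, but the hint above can be sanity-checked numerically for one concrete $A, b, C, d$ (all values here are illustrative): convex combinations of feasible points stay feasible.

```python
import random

# Illustrative system: x1 + x2 <= 4, -x1 <= 0 (so Ax <= b), and x1 - x2 = 0.
A = [[1.0, 1.0], [-1.0, 0.0]]
b = [4.0, 0.0]
C = [[1.0, -1.0]]
d = [0.0]

def feasible(x, tol=1e-9):
    ineq = all(sum(a * xi for a, xi in zip(row, x)) <= bi + tol
               for row, bi in zip(A, b))
    eq = all(abs(sum(c * xi for c, xi in zip(row, x)) - di) <= tol
             for row, di in zip(C, d))
    return ineq and eq

x1, x2 = [1.0, 1.0], [2.0, 2.0]            # two feasible points
assert feasible(x1) and feasible(x2)
for _ in range(100):
    a = random.random()                    # alpha in [0, 1]
    z = [a * u + (1 - a) * v for u, v in zip(x1, x2)]
    assert feasible(z)                     # the combination stays in the set
```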
computability, turing-machines, terminology
Title: A new definition of recursively enumerable set? Given a Turing machine $M$, we associate a partial function $f_M : \Sigma^{\ast} \to \Sigma^{\ast}$ to it (this is called the function computed by the machine), where $\Sigma$ denotes the finite input and output alphabet, defined as
$$
f(u) = v :\Leftrightarrow \mbox{The machine halts on input $u$ with output $v$}.
$$
Then we say an arbitrary partial function $f : \Sigma^{\ast} \to \Sigma^{\ast}$ is called computable iff $f = f_M$ for some Turing machine $M$.
Then we define a language $A \subseteq \Sigma^{\ast}$ to be recursively enumerable iff it is the domain of some computable function. Clearly with the above definition $\operatorname{dom}(f_M) = \{ w \in \Sigma^{\ast} \mid \mbox{The machine halts on input $w$.} \}$, i.e. this is equivalent to say that a language is recursively enumerable iff we can find a machine that halts exactly for the words in the language.
But on other sources I found the following definition, a language $A \subseteq \Sigma^{\ast}$ is recursively enumerable, iff there exists a Turing machine such that
$$
A = \{ w \in \Sigma^{\ast} \mid \mbox{The machine halts in an accepting state} \}
$$
or
$$
A = \{ w \in \Sigma^{\ast} \mid \mbox{The machine halts and outputs a specified output } \}.
$$
Both notions, by special state or special output, are clearly equivalent. But they do not require the machine to run forever if $w \notin A$. This could be fixed by letting the machine enter an endless loop if it enters a non-accepting state after finishing its computation. But this seems quite unnatural to me. | {
"domain": "cs.stackexchange",
"id": 8282,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computability, turing-machines, terminology",
"url": null
} |
ros, gazebo, ros-kinetic, collada
Originally posted by znbrito on ROS Answers with karma: 95 on 2018-04-10
Post score: 0
Hi,
solved the problem. When a white cube appears in Gazebo, it means that the path to the visual mesh is not well specified. I guess I wasn't expecting this error because I was using the package finder, but when I used the full path the tree appeared and everything seemed to be fine. After that I changed back to the package finder and got no errors, so it seems that the first time I inserted the path I must have misspelled something...
Anyway, just in case you guys want to know, when you are working with SolidWorks to build your own models:
The SolidWorks2URDF (link: http://wiki.ros.org/sw_urdf_exporter) plugin works just fine but doesn't export COLLADA models, which means you have to get them through another plugin, which is the SimLab Collada Exporter.
The SimLab Collada Exporter (https://www.simlab-soft.com/3d-plugins/SolidWorks/Collada_exporter_for_SolidWorks-main.aspx) is supported on SolidWorks 2013 and more recent versions. It is paid software, but you can use a free trial license for 30 days that works just fine. Keep in mind that this plugin can't export any appearances to a COLLADA file, only normal colors, i.e., RGB combinations.
(I also discovered that it is not possible to export appearances to a STEP (.stp file), which was something that I also tried when trying to work my problem around).
Combining these 2 SolidWorks plugins allowed me to fully export a SolidWorks model, which is something that I think is not well documented on the Internet, so I hope this helps you all as it helped me.
Cheers!
José Brito
Originally posted by znbrito with karma: 95 on 2018-04-10
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 30598,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo, ros-kinetic, collada",
"url": null
} |
javascript
Title: An if statement which checks access to a specific feature based on specific cases In my React application, I had to add an if statement to check for specific access requirements to a specific feature.
This access is determined by the roles, actions, and features.
I created an if statement that covers my scenarios, but I'm trying to find a better way to write it.
The scenario I'm covering is as described:
We have 4 roles as ['study_manager', 'system_admin', 'sponsor_admin', 'sponsor_user'],
Those who have access to the feature ANALYTICS_FEATURE
The ANALYTICS_FEATURE can be turned on/off from a panel but only if RECRUITMENT_FEATURE is on
Scenario to be covered:
If we have RECRUITMENT_FEATURE active but ANALYTICS_FEATURE not active, only 'study_manager' and 'system_admin' can access it
If we have both active then 'sponsor_admin' and 'sponsor_user' can also see it
If RECRUITMENT_FEATURE is not active, no one sees it
I was able to cover this scenario with the following if statement, but I believe it is very ugly
const accessRequirements = checkAccess => {
if (checkAccess({ features: [RECRUITMENT_FEATURE] })) {
console.log('RECRUITMENT_FEATURE IS ON !!!!!!!!!!!!!!');
if (
checkAccess({
roles: ['study_manager', 'system_admin'],
actions: ['analytics:show'],
})
) {
console.log('SHOWING FOR THE ROLES BUT ANALYTICS is OFF');
return true;
}
if (
checkAccess({
actions: ['analytics:show', 'analytics.candidates:get'],
features: [ANALYTICS_FEATURE],
})
) {
console.log('ANALYTICS_FEATURE IS ON!!!!!!!!!!!!');
return true;
}
}
return false;
}; | {
"domain": "codereview.stackexchange",
"id": 42646,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript",
"url": null
} |
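A hypothetical re-encoding of the three scenarios above as a plain truth-table function (sketched in Python rather than JavaScript, with the role names from the question) can make the intended behavior easier to test in isolation:

```python
def can_see_analytics(role, recruitment_on, analytics_on):
    """Access rules as stated in the question (illustrative sketch)."""
    if not recruitment_on:
        return False                       # no active RECRUITMENT_FEATURE: no one
    if analytics_on:
        return role in {'study_manager', 'system_admin',
                        'sponsor_admin', 'sponsor_user'}
    return role in {'study_manager', 'system_admin'}
```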
2. World War 1 started July 28, 1914. What day of the week was it?
3. Little Boy is the name of the atomic bomb dropped on Hiroshima, Japan on August 6, 1945. What day of the week did that happen?
4. Marie Skłodowska-Curie is the real name of the Polish Marie Curie, the first woman to win a Nobel Prize in both Chemistry and Physics for her work on radioactivity. She was born on November 7, 1867. What day was that?
5. I was born September 22, 1989. What day of the week did that fall on?
### Dan
Blogger and a Math enthusiast. Has no interest in Mathematics until MMC came. Aside from doing math, he also loves to travel and watch movies.
"domain": "techiemathteacher.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9905874109681435,
"lm_q1q2_score": 0.829735786838212,
"lm_q2_score": 0.837619959279793,
"openwebmath_perplexity": 2215.1526731916765,
"openwebmath_score": 0.17191937565803528,
"tags": null,
"url": "http://techiemathteacher.com/2013/11/26/method-in-determining-calendar-dates/"
} |
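The day-of-week exercises above can be checked with the standard library (a Python sketch; `date.weekday()` returns Monday as 0 and uses the proleptic Gregorian calendar):

```python
from datetime import date

NAMES = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
         'Friday', 'Saturday', 'Sunday']

def day_of_week(y, m, d):
    return NAMES[date(y, m, d).weekday()]

answers = {
    'WW1 starts':       day_of_week(1914, 7, 28),
    'Hiroshima':        day_of_week(1945, 8, 6),
    'Marie Curie born': day_of_week(1867, 11, 7),
    'author born':      day_of_week(1989, 9, 22),
}
```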
algorithms, graphs, graph-traversal
$$x_{w,t} \le x_{v,1}+x_{v,2}+\dots + x_{v,t-1}.$$
Also we require that every vertex be colored exactly once: $x_{v,1}+\dots+x_{v,n}=1$ for each $v \in V$, and you can only color one vertex at each time instant: $\sum_{v \in V} x_{v,t}=1$, for each $t=1,\dots,n$. Now the goal is to minimize the objective function
$$\Phi = \sum_{v,t} t C(v) x_{v,t},$$
which is a linear function of the variables. So, this is an ILP instance and can be fed to an off-the-shelf ILP solver.
ILP solvers incorporate a number of clever heuristics. If you're lucky, it's possible that one of them might help solve the problem faster than brute-force enumerating all valid topological sorts of the graph. | {
"domain": "cs.stackexchange",
"id": 4906,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs, graph-traversal",
"url": null
} |
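For comparison with the ILP route above, the brute-force baseline (enumerate all valid topological sorts and minimize $\Phi = \sum_t t\,C(v_t)$) can be sketched on a made-up DAG and cost function $C$; the DAG and costs here are purely illustrative.

```python
from itertools import permutations

edges = [(0, 1), (0, 2), (1, 3), (2, 3)]   # u -> v : u must precede v
C = {0: 5, 1: 1, 2: 4, 3: 2}               # per-vertex cost C(v)
n = 4

def is_topological(order):
    pos = {v: i for i, v in enumerate(order)}
    return all(pos[u] < pos[v] for u, v in edges)

def phi(order):
    # vertex in position t (1-indexed) contributes t * C(v)
    return sum((t + 1) * C[v] for t, v in enumerate(order))

best = min((o for o in permutations(range(n)) if is_topological(o)), key=phi)
```

This is the exponential enumeration that a good ILP solver's heuristics may beat in practice.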
First we choose two values, there are 13 values (2 to A), so $$13\choose2$$.
Then we want to choose two cards of the first value out of four cards, $$4\choose 2$$
Again, we want to choose two cards of the second value out of four cards, $$4\choose 2$$
And finally, choose one card not of the previously selected types (we can’t choose the 4 cards of the first value and the 4 cards of the second value), $${52-8\choose1} = {44\choose1}$$
So we get: $${{{13\choose2}\times{4\choose2}\times{4\choose2}\times{44\choose1} }\over{52\choose5}} = {198\over4165} ≈ 0.0475$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9912886139981473,
"lm_q1q2_score": 0.8010278304310203,
"lm_q2_score": 0.8080672158638528,
"openwebmath_perplexity": 168.05897420903145,
"openwebmath_score": 0.7074486613273621,
"tags": null,
"url": "https://math.stackexchange.com/questions/1528964/probability-that-a-five-card-poker-hand-contains-two-pairs"
} |
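The counting above can be verified exactly with the standard library (the denominator is the total number of 5-card hands, $\binom{52}{5}$):

```python
from math import comb
from fractions import Fraction

# Exactly two pairs: choose 2 values, 2 suits for each, then 1 of the 44
# remaining cards, over all C(52, 5) five-card hands.
two_pair = comb(13, 2) * comb(4, 2) ** 2 * comb(44, 1)
p = Fraction(two_pair, comb(52, 5))
assert p == Fraction(198, 4165)
assert abs(float(p) - 0.0475) < 1e-4
```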
complexity-theory
It looks like you are trying to go in the wrong direction with your question. You are asking about inserting a Clique into a graph, but you only need to concern yourself with a graph that would be sent to a HP solver and modify the inputs so they are appropriate for a HP+Clique solver. The HP part of the candidate problem doesn't require any modification - it just takes an undirected graph G(V,E). A clique takes an undirected graph G(V,E) and a goal, k. So you need to define a k that is in $G_{HP}$. What if you set k=1? This way, the only graphs that your solver will fail to find a solution for are those that don't have a HP, since every node is a clique with itself. This takes O(1) to define k=1.
Now, you just need to modify the output of the HP+Clique so it is the same as the output for the HP problem. e.g. If the candidate problem produces a solution, you would only return the path part and drop the clique part. If a solution is not found, you return NO. | {
"domain": "cs.stackexchange",
"id": 13647,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory",
"url": null
} |
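The reduction described above can be sketched as a wrapper: transform the input in O(1) by setting k=1, then transform the output by dropping the clique part. The `hp_clique_solver` here is a hypothetical black box, and the stub below exists only to exercise the wrapper.

```python
def solve_hp(graph, hp_clique_solver):
    """Solve HP using a hypothetical HP+Clique solver."""
    result = hp_clique_solver(graph, k=1)   # O(1) input transformation
    if result is None:
        return None                         # NO: no Hamiltonian path
    path, _clique = result
    return path                             # keep only the path part

# Stub black box for demonstration: pretend a 'triangle' graph has HP [0, 1, 2].
def stub_solver(graph, k):
    return ([0, 1, 2], [0]) if graph == 'triangle' else None

assert solve_hp('triangle', stub_solver) == [0, 1, 2]
assert solve_hp('no_hp_graph', stub_solver) is None
```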
javascript, object-oriented, constructor
Title: Elegant way of processing an "options" parameter In this constructor function I'm assigning everything to the proper variables. For the third parameter it accepts an options argument that contains the optional settings.
To prevent a "Cannot read property x of undefined" error, I'm always checking this variable in steps, since the options variable can contain both complex objects and primitive data types.
I'm wondering if there is a more elegant way I could write this?
Constructor
function LiveDate(timeUrl, element, options) {
this.timeUrl = timeUrl;
this.element = element;
this.format = options && options.format ? options.format : LiveDate.formats.ISO8601;
this.offset = options && options.offset ? options.offset : 0;
this.weekdayNames = {
long: options && options.weekdayNames && options.weekdayNames.long ? options.weekdayNames.long : [
'Sunday',
'Monday',
'Tuesday',
'Wednesday',
'Thursday',
'Friday',
'Saturday'
],
short: options && options.weekdayNames && options.weekdayNames.short ? options.weekdayNames.short : [
'Sun',
'Mon',
'Tue',
'Wed',
'Thu',
'Fri',
'Sat'
]
};
this.monthNames = {
long: options && options.monthNames && options.monthNames.long ? options.monthNames.long : [
'January',
'February',
'March',
'April',
'May',
'June',
'July',
'August',
'September',
'October',
'November',
'December'
],
short: options && options.monthNames && options.monthNames.short ? options.monthNames.short : [
'Jan',
'Feb',
'Mar',
'Apr',
'May',
'Jun',
'Jul',
'Aug',
'Sep',
'Oct',
'Nov',
'Dec'
]
};
this.start();
} You could do something like this to clean things up a little:
options = options || {}; | {
"domain": "codereview.stackexchange",
"id": 12953,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, object-oriented, constructor",
"url": null
} |
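The defaulting pattern suggested above generalizes to merging the user's options over a defaults object, so each value is looked up once instead of being re-guarded on every path. A language-agnostic sketch (in Python; the keys mirror the question's options):

```python
def deep_merge(defaults, options):
    """Return defaults overridden by options, recursing into nested dicts."""
    out = dict(defaults)
    for key, value in (options or {}).items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

DEFAULTS = {
    'format': 'ISO8601',
    'offset': 0,
    'weekdayNames': {'long': ['Sunday', 'Monday'], 'short': ['Sun', 'Mon']},
}

opts = deep_merge(DEFAULTS, {'offset': 2, 'weekdayNames': {'short': ['So', 'Mo']}})
assert opts['offset'] == 2                       # overridden
assert opts['format'] == 'ISO8601'               # default kept
assert opts['weekdayNames']['long'] == ['Sunday', 'Monday']
```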
deep-learning, activation-functions
$\mathcal{H(x)}=\sum_{i=1}^nH_i(x)$. Substituting in (1):
$\mathcal{H(x)}=\sum_{i=1}^n((\sum_{j=1}^nw_{ij}\cdot x_j)+b_i)$. This can be re-aranged :
$\mathcal{H(x)}=(\sum_{j=1}^n(\sum_{i=1}^nw_{ij})\cdot x_j)+\sum_{i=1}^nb_i$. But this reduces to:
$\mathcal{H(x)}=\sum_{j=1}^n\tilde w_{j}\cdot x_j+\tilde b$. Where $\tilde w_{j},\tilde b$ are scalars.
Thus, without the non-linear activations (3) mathematically reduces to a single neuron. | {
"domain": "ai.stackexchange",
"id": 1421,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, activation-functions",
"url": null
} |
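A numeric illustration of the reduction above: summing $n$ linear neurons $H_i(x)=\sum_j w_{ij}x_j + b_i$ gives a single linear neuron with weights $\tilde w_j = \sum_i w_{ij}$ and bias $\tilde b = \sum_i b_i$ (Python sketch with random values):

```python
import random

random.seed(0)
n = 3
w = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
b = [random.uniform(-1, 1) for _ in range(n)]
x = [random.uniform(-1, 1) for _ in range(n)]

# Sum of n linear neurons:
H = sum(sum(w[i][j] * x[j] for j in range(n)) + b[i] for i in range(n))

# Single linear neuron with summed weights and bias:
w_tilde = [sum(w[i][j] for i in range(n)) for j in range(n)]
b_tilde = sum(b)
H_single = sum(w_tilde[j] * x[j] for j in range(n)) + b_tilde

assert abs(H - H_single) < 1e-12   # the two agree, as the algebra predicts
```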
Asymptotic notations are mathematical tools used to represent the complexities of algorithms for asymptotic analysis; they help to standardize how performance is described. When considering a function f(n), there is a need to describe its properties when n becomes very large, for example an upper bound for f(n) to within a constant factor. An asymptote is a line that draws increasingly nearer to a curve without ever meeting it, and there are three types of asymptotes: horizontal, vertical and oblique. The simplest example: the x-axis and y-axis are asymptotes of the hyperbola xy = 3. Asymptotic analysis helps estimate the efficiency of an algorithm before implementing it in a programming language, and its result values are generally measured in log notations. Different sampling schemes imply different properties of x_i^2 and x_i*u_i, and hence different LLN and CLT; sampling-scheme assumptions include: 1 | {
"domain": "emilykalish.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9859363754361601,
"lm_q1q2_score": 0.8320268825317814,
"lm_q2_score": 0.8438951064805861,
"openwebmath_perplexity": 1816.9921015865223,
"openwebmath_score": 0.718110978603363,
"tags": null,
"url": "http://emilykalish.com/a5n76pcg/07bba8-asymptotic-properties-meaning"
} |
#### mhrob24
Good morning. I think I understand now but I want to make sure ...(getting the right answer without understanding isn't going to help me)
So, in this case we have the plates at 10 cm apart. So when they say that the potential difference at 7.85 cm away from the zero volt plate is 693v, that is just trying to confuse the reader because as long as the plates are held at 10cm apart, then regardless of where you're at in between the plates, the potential difference will remain 693v (as long as the field is assumed to be constant). So then the problem becomes very easy:
E = 693v/.1m = 6930 v/m
....I know this was supposed to be a simple question but I overthought it
#### gleem
then regardless of where you're at in between the plates, the potential difference will remain 693v (as long as the field is assumed to be constant). So then the problem becomes very easy:
No. E is constant; let me use words. E equals the potential difference divided by the distance across which that potential difference is measured. The potential difference is 693 volts and it is across a distance of 0.0785 meters, so E is ...
Got It?
#### mhrob24
Yes, E = 693V/.0785m
However, this just doesn't make any sense to me. Sorry if I'm being annoying but I don't want to just be satisfied getting the correct answer without seeing how it makes sense. This figure in my textbook is what is confusing me: | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9425067195846918,
"lm_q1q2_score": 0.8137523990028145,
"lm_q2_score": 0.8633916152464017,
"openwebmath_perplexity": 331.36394552123016,
"openwebmath_score": 0.8306557536125183,
"tags": null,
"url": "https://www.physicsforums.com/threads/find-the-e-field-between-plates-when-given-only-given-part-of-the-total-voltage.978459/"
} |
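The arithmetic from the exchange above in one short sketch: the 693 V difference is measured across 0.0785 m, not across the full 0.1 m gap (the implied full-gap voltage is my extrapolation, assuming a uniform field):

```python
V = 693.0          # volts, measured 7.85 cm from the zero-volt plate
d = 0.0785         # meters over which that 693 V appears
E = V / d          # uniform field strength, ~8828 V/m
V_total = E * 0.1  # voltage across the full 10 cm gap, ~883 V
```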
quantum-mechanics, heisenberg-uncertainty-principle, quantum-measurements
Title: How in experimental practice does a momentum measurement reduce a state to a momentum eigenfunction? It's easy to think of ways to reduce the state of a particle to a position eigenfunction (or at least a narrow spread in position space), whether by trapping the particle in a potential well or by striking it with a probe photon, and we can calculate the precision $\Delta x$ necessary to produce an experimentally significant $\Delta p$ through the [x,p] uncertainty relation. However, despite how much emphasis there is in QM texts on position and momentum eigenfunctions and the [x,p] uncertainty relation, I have not come across an example of an actual experiment in which a momentum measurement could reduce a state to a momentum eigenfunction, with corresponding spread in the position state in accordance with the uncertainty relation. For example in HEP momentum is measured through sequential position measurements in order to establish radius of curvature, but clearly you cannot produce momentum eigenstates by measuring position! Similarly for time-of-flight measurements, or anything else I can think of. Diffraction could be used to establish the expectation value of wavelength (and thus momentum) with an ensemble of identically prepared states, but you can't measure a diffraction pattern with a single particle!
Can someone point out any experiment that measures particle momentum with the result of leaving the state in a momentum eigenfunction? Use a collimator to filter out particles with wrong direction, and then use a narrow band-pass filter. Such a filter can be made using interference to reflect particles with wavevector outside of the passband. For photons this is typically implemented in photonic crystals, for electrons a superlattice can be made. | {
"domain": "physics.stackexchange",
"id": 55925,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle, quantum-measurements",
"url": null
} |
runtime-analysis, shortest-path, dijkstras-algorithm
def isInMinHeap(self, v):
    return self.pos[v] < self.size
Here's the graph of the runtime against the number of vertices v: Here is a worst-case example of a complete graph where the heap.decreaseKey() operation executes on every edge:
Let the vertices be $V = \{1,2,\dotsc,n\}$. The edge set $E$ is such that for every vertex $i$ and $j$ such that $i<j$, there is an edge of unit weight if $j = i+1$; and there is an edge of weight $2(n-i)$ if $j > i+1$.
Run the heap-based Dijkstra's algorithm on this graph with source vertex $1$.
It will decrease the distance of the vertex $j$ every time it traverses the edge $(i,j)$. Moreover, each call to the heap.decreaseKey() operation takes $\Theta(\log |V|)$ time as per the aggregate analysis. Therefore, the time complexity will be $\Theta((|V|+|E|) \log |V|)$. Compare its performance with the unsorted-array-based approach. You will see the difference.
Note that here, the shortest path tree is $1$ -> $2$ -> $3$ -> $\dotsc$ -> $n$. | {
"domain": "cs.stackexchange",
"id": 18966,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "runtime-analysis, shortest-path, dijkstras-algorithm",
"url": null
} |
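The adversarial construction above can be sketched directly: build the graph, run a binary-heap Dijkstra from vertex 1 (using the standard lazy-deletion `heapq` pattern in place of an explicit decreaseKey), and count how many relaxations actually improve a distance; every edge should trigger one.

```python
import heapq

def worst_case_graph(n):
    """Vertices 1..n; edge (i, i+1) of weight 1; edge (i, j) of weight
    2*(n - i) for j > i + 1, as described in the answer."""
    adj = {i: [] for i in range(1, n + 1)}
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            adj[i].append((j, 1 if j == i + 1 else 2 * (n - i)))
    return adj

def dijkstra_count_decreases(adj, src):
    dist = {v: float('inf') for v in adj}
    dist[src] = 0
    decreases = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                decreases += 1            # would be a decreaseKey in a heap
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist, decreases

n = 8
adj = worst_case_graph(n)
dist, decreases = dijkstra_count_decreases(adj, 1)
num_edges = sum(len(a) for a in adj.values())
assert decreases == num_edges             # every edge triggers a decrease
assert dist[n] == n - 1                   # shortest path is the chain 1->2->...->n
```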
quantum-mechanics, quantum-field-theory, string-theory, quantum-gravity, popular-science
It was a very successful theft indeed, so let me try a theft too. Since Wheeler came up with this theory, we have come up with string theory that create electrons and other elementary particles by string vibrations. Could all strings also have equal properties or interactions, so they can create different particles with equal charge and mass? So if we merge Wheeler’s idea with string theory, we can formulate this into a hypothetical question: Could all strings be one single string which weaves the fabric of the universe? To simplify it even further we can say that a single particle drags the string along and tie the knots in the fabric of the universe together by interactions with itself. So then we get a single string or a single particle universe, which is the ultimate simplicity.
The speed of light can’t be a threshold for such a particle; because the particle itself must travel with infinite speed far beyond C and probably don't even have a velocity we can put any number on, but just call infinite speed. To go past the speed of light the particle must be without mass, and then it has no inertia and is free to go everywhere to interact with its own string which is woven into time, space, particles, mass, charge, magnetism, gravity, me, you and the universe itself. Some of the responses to this question are sour because there are no equations, it speculates in a naive way, etc. However, it has an impressionistic resemblance to some important ideas which really may be part of the final picture in physics.
Specifically, I mean (1) the idea that all physical reality consists of vibrations in a single substance (2) the idea that the history of the universe is a knotted worldline.
(1) was the physical picture of 11-dimensional supergravity, the leading candidate for a theory of everything before string theory, and still an important limit of string theory. In d=11 SUGRA, everything is just excitations of supergeometry. In string theory and M theory, we have strings and branes, but it seems rather plausible that in the end these will turn out to be extended excitations of some "generalized geometry". | {
"domain": "physics.stackexchange",
"id": 8443,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-field-theory, string-theory, quantum-gravity, popular-science",
"url": null
} |
Any help would be appreciated, thank you!!
Edit: I think the maximum # of days is just a random number, which leads me to the expectation of the sum of a random number of geometric random variables, but still I can't go any further beyond that. Thank you!
• For $i=0,1,2,3$, let $E_i$ be the expected number of days until $3$ consecutive rainy days, if the chain is currently in state $i$. We are asked to compute $E_0$. Notice that $E_0=1+.3E_2$. Develop similar equations for the other $E_i$ and solve. Jun 29, 2021 at 17:39
• Developing the equations is where I am stuck at and I am also confused as to whether there is a boundary condition. Could you please explain to me, for instance, how to develop E0? Jun 29, 2021 at 18:44
Let me get you started.
I claim that $$E_0=1+.3E_2$$ as I stated in a comment.
Suppose we are in state $$0$$. We must wait at least $$1$$ more day to see if we'll have three consecutive days of rain. $$70\%$$ of the time, it rains, and we are done, but $$30\%$$ of the time we transition to state $$2$$, and then we must wait, on average, $$E_2$$ days to get $$3$$ consecutive rainy days.
If we are in state $$2$$, similar reasoning gives $$E_2=1+.5E_1+ .5E_3$$
You can read the equations right off your diagram. You should get $$4$$ equations with $$4$$ unknowns and a unique solution. I'm sure you won't have any problem finishing from here. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9763105300791785,
"lm_q1q2_score": 0.8599314529279963,
"lm_q2_score": 0.880797068590724,
"openwebmath_perplexity": 205.81854152687754,
"openwebmath_score": 0.559999406337738,
"tags": null,
"url": "https://math.stackexchange.com/questions/4186288/need-help-with-a-markov-chain-for-weather-conditions"
} |
javascript, beginner, d3.js
],
[
["Arsenal", 0.5440890632512689],
["Chelsea", 0.5231358671618266],
["Liverpool", 0.4787550034617063],
["ManchesterCity", 0.50215325905151],
["MancheserUnited", 0.0],
["Tottenham", 0.5497016431211542]
],
[
["Arsenal", 0.6304670189118691],
["Chelsea", 0.6508134781353688],
["Liverpool", 0.5749363562907429],
["ManchesterCity", 0.5802928689025063],
["MancheserUnited", 0.5497016431211542],
["Tottenham", 0.0]
]
]; | {
"domain": "codereview.stackexchange",
"id": 31448,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner, d3.js",
"url": null
} |
image-processing, python, video-processing
average = Average(10) # average difference between current frame and reference frame
delta = Average(10) # average difference between current frames
class AverageImage:
def __init__(self, shape, size):
self.queue = queue.Queue()
self.size = size
self.blended = np.zeros ( shape, float )
def __call__(self, image):
dark = image.astype(float) * (1 / self.size)
self.queue.put(dark)
self.blended += dark
if self.queue.qsize() > self.size:
self.blended -= self.queue.get()
return self.blended.astype ( np.uint8 )
windowname = 'frame'
cap = cv2.VideoCapture('Plasma_Motion_numkmi.mp4')
frames = cap.get(7) # total frames
print ( "total frames:", frames )
cap.set(1, 400) # frame 400 as reference
ret, frame = cap.read()
reference = cv2.resize ( frame, (0,0), fx=.5, fy=.5 )
blended = AverageImage ( reference.shape, 4 )
cap.set(1, 50) # starting frame
# calculate how much a pixel moved within an area
class PixelShift:
def __init__(self, area=8, sample=0.025, limit=30, color=0):
self.area = area # area (in pixels) to find other matching pixels around a pixel
self.sample = sample # downsampling of input images for better speed
self.limit = limit # color difference at which a pixel fits (0-255)
self.color = color # 0:blue, 1:red, 2:green
dx = dy = self.area # create a sorted list of coordinates with first being [0,0]
self.quad = sorted ( [ (x,y) for y in range(-dy, dy+1) for x in range(-dx, dx+1) ],
key=lambda tup: tup[0]*tup[0] + tup[1]*tup[1] # the shorter, the better
) | {
"domain": "dsp.stackexchange",
"id": 11609,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "image-processing, python, video-processing",
"url": null
} |
java, design-patterns, game, android
public void addTempSprite(float x,float y, Bitmap bmp){
//adds a coin (this is added to a separate list of sprites that are drawn before drawing the units, so they are part of the background)
}
}
Unit.java
public class Unit {
private int x;
private int y;
private Game gameView;
private Bitmap bmp;//sprite
private List<Action> actions;
public Unit(Game gameView, Bitmap bmp){
this.gameView = gameView;
this.bmp = bmp;
//set random x,y,speeds
}
public void setActions(List<Action> actions){
this.actions = actions;
}
private void update(){
//first calculate new position based on directions of speed and current position
//then call the execute method of all related actions.
for(Action a : this.actions){
a.execute();
}
}
public void onDraw(Canvas canvas){
//update the position, perform actions
update();
//draw image based on new coordinates
canvas.drawBitmap(bmp, x, y, null);
}
}
Action.Java
public interface Action {
public void execute();
}
DropGold.Java
public class DropGold implements Action{
private int tick;
private int interval = 50;
private Game game;
private Unit unit;
private int resource;
public DropGold(Game game, Unit unit){
this.game = game;
this.unit = unit;
this.tick = 0;
this.resource = R.drawable.coin2;
}
public void execute(){
tick++;
if (tick == interval){
tick = 0;
game.addTempSprite(unit.getX(), unit.getY(), BitmapFactory.decodeResource(game.getResources(), resource));
}
}
} | {
"domain": "codereview.stackexchange",
"id": 11107,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, design-patterns, game, android",
"url": null
} |
human-biology, genetics, human-genetics, cytogenetics
Title: How can a chromosome translocation in somatic cells lead to disease? Looking at this picture...
(source: nih.gov)
...I get the impression that part of one chromosome is attached to another chromosome, but that it is not mutated. If we assume that all genes in the translocated part are intact and can still make mRNA, my question is:
How can a chromosome translocation in somatic cells lead to abnormality or disease? In Down syndrome, typically, the translocation does not by itself induce (usually; see later) any disease. We call someone carrying the translocation a "balanced carrier". The problem arises afterwards, at the moment of segregation.
When balanced carriers reproduce
Consider someone who has a translocation as you showed. You showed only one chromosome of each types (1N), below are the 2 chromosomes (2N). On the below image, the person with the translocation is assumed to be mating with someone who does not have the translocation.
The balanced carrier is usually healthy. As a consequence of this translocation, a child has probability 1/4 of being "normal", probability 1/4 of carrying the translocation (and therefore of eventually having offspring with trisomy/monosomy), and probability 1/2 of having trisomy/monosomy.
Balanced carrier
Citing from wikipedia:
Most balanced translocation carriers are healthy and do not have any symptoms. But about 6% of them have a range of symptoms that may include autism, intellectual disability, or congenital anomalies. A gene disrupted or dysregulated at the breakpoint of the translocation carrier is likely the cause of these symptoms.
Indeed, even a balanced carrier can be affected by the translocation. Gene regulation networks are often quite complex, and separating a gene from its regulatory region can have an important impact. It is also possible that the break point occurred in the middle of a gene (or of a regulatory sequence).
I think one cannot say much more in general about why a balanced carrier can be sick. One would need to go case by case and ask specifically which genes and regulatory processes are involved in the disease. However, it is easy to see that moving genes around can affect the genetic network and the expression of genes.
"domain": "biology.stackexchange",
"id": 4381,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "human-biology, genetics, human-genetics, cytogenetics",
"url": null
} |
discrete-signals, audio, pitch, real-time, resampling
2) Use offline processing to oversample your sample table by a factor of 2, obviously doubling the memory requirements. Use the same linear interpolation on those oversampled samples.
3) Also double the audio sampling rate of the resampler process, using a halfband lowpass filter and a decimation stage at the very end to go back to the audio playback rate.
4) Like before, but instead of linear interpolation use a short windowed sinc interpolator.
Each step improves the sound quality but also requires more resources. My guess is that you will find it hard to hear a difference between 3) and 4) if you implement them correctly. Go with 3) if it's good enough, and I think it should be. | {
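To make options 1) and 2) concrete, here is a minimal sketch (Python, not from the original answer) of a wavetable lookup with linear interpolation; option 2) simply applies the same lookup to a table that was oversampled offline by 2x:

```python
import numpy as np

def read_table(table, phase):
    """Look up a fractional index 'phase' in a cyclic wavetable
    using linear interpolation between the two nearest samples."""
    i = int(np.floor(phase)) % len(table)
    j = (i + 1) % len(table)          # wrap around the table end
    frac = phase - np.floor(phase)    # fractional part in [0, 1)
    return (1.0 - frac) * table[i] + frac * table[j]

# one cycle of a sine wave as the sample table
table = np.sin(2 * np.pi * np.arange(1024) / 1024)

# resample at an arbitrary pitch ratio (here ~4 semitones up)
ratio = 2 ** (4 / 12)
out = np.array([read_table(table, n * ratio) for n in range(2048)])
```

Option 2) would precompute `table` at twice this length from the original samples, which reduces the interpolation error for the same per-sample cost.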
"domain": "dsp.stackexchange",
"id": 2499,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "discrete-signals, audio, pitch, real-time, resampling",
"url": null
} |
rosmake
Originally posted by kwc with karma: 12244 on 2011-08-04
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by tfoote on 2011-08-09:
Yeah, the current design is to allow symmetry with the --unmark-installed option. In the unmark case the specifics do matter.
Comment by hcostelha on 2011-08-05:
Ok, thanks. Nevertheless, I think that marking (all) successful builds with ROS_NOBUILD when using rosmake would be an interesting/desired feature.
Comment by Daniel Stonier on 2011-08-04:
The python tools in eros' diamondback tag will still work for electric, I've just held off tagging it while electric is beta as there's a fair few things I need to do in other areas.
Comment by hcostelha on 2011-08-04:
Thanks for your answer but I am testing with the electric distribution, and eros_python_tools only as a tag for diamondback. Nevertheless, I think that making the switch recursive would be a better option. | {
"domain": "robotics.stackexchange",
"id": 6338,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rosmake",
"url": null
} |
deep-learning, classification, pytorch, softmax
Title: Is it normal that the values of the LogSoftmax function are very large negative numbers? I have trained a classification network with PyTorch lightning where my training step looks like below:
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
self.log("train_loss", loss, on_epoch=True)
        return loss
When I look at the output logits, almost all of them are very large negative numbers, with one that is usually 0. Is this normal or might be something wrong with my training?
I am just using nn.LogSoftmax() on the outputs and taking the max to make my predictions, but my network is not doing so good when I am running on unseen data, and I want to make sure the problem is just me overfitting. Sounds like it worked to me.
nn.LogSoftmax returns the log of the softmax (duh). The outputs from softmax add up to 1, and form a probability distribution.
0 is the log of 1, meaning that class was predicted at a level of nearly 100%. The other classes, with large negative logs, correspond to probabilities that are vanishingly small.
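The arithmetic can be checked without PyTorch; a small numpy sketch (illustrative, not the asker's code) reproduces the pattern of one log-probability near 0 and the rest large and negative:

```python
import numpy as np

def log_softmax(z):
    z = z - np.max(z)                     # subtract max for numerical stability
    return z - np.log(np.sum(np.exp(z)))

logits = np.array([12.0, -3.0, -5.0, 0.5])  # class 0 confidently predicted
ls = log_softmax(logits)
# ls[0] is close to 0 (probability ~1); the others are large negative numbers
```

Exponentiating `ls` recovers a probability distribution that sums to 1, which is exactly what the near-zero / large-negative pattern encodes.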
"domain": "ai.stackexchange",
"id": 3015,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, classification, pytorch, softmax",
"url": null
} |
the-sun, earth
correction = 0
CASE cosc < -1
correction = pi
CASE ELSE
correction = FNacos(cosc)
END SELECT
utnew = FNrange(utold - (GHA + glong + riset * correction))
LOOP
PRINT " UT : "; FNdegmin$(utnew * degs / 15)
PRINT " zone : "; FNdegmin$(utnew * degs / 15 + zone)
END
'****************************************************************** | {
"domain": "astronomy.stackexchange",
"id": 2655,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "the-sun, earth",
"url": null
} |
ros, ros-kinetic, ubuntu, ubuntu-xenial
A solution to this problem or other solutions to plotting in real time would be appreciated.
Originally posted by haloted on ROS Answers with karma: 32 on 2019-10-17
Post score: 0
I solved my own problem using rqt_plot: I publish the velocity information as a geometry_msgs/Twist message on a dedicated topic and call rqt_plot to visualise it.
Originally posted by haloted with karma: 32 on 2019-10-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33899,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-kinetic, ubuntu, ubuntu-xenial",
"url": null
} |
Therefore we have found that the solution to the system of simultaneous equations
$\begin{bmatrix} 1 & 2 & 4 \\ 3 & 8 & 14 \\ 2 & 6 & 13 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 3 \\ 13 \\ 4 \end{bmatrix} \quad\text{is}\quad X = \begin{bmatrix} 3 \\ 4 \\ -2 \end{bmatrix}.$
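As a quick sanity check (added here, not part of the original text), substituting the stated solution back in, or solving the system directly, confirms the result:

```python
import numpy as np

A = np.array([[1.0, 2.0, 4.0],
              [3.0, 8.0, 14.0],
              [2.0, 6.0, 13.0]])
b = np.array([3.0, 13.0, 4.0])

x = np.linalg.solve(A, b)   # uses an LU factorization internally
# x comes out as [3, 4, -2], matching the solution above
```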
Use the $LU$ decomposition you found earlier in the last Task (page 24) to solve | {
"domain": "github.io",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9907319871663826,
"lm_q1q2_score": 0.8029673842786849,
"lm_q2_score": 0.810478913248044,
"openwebmath_perplexity": 142.24747625613858,
"openwebmath_score": 0.9660990238189697,
"tags": null,
"url": "https://bathmash.github.io/HELM/30_3_lu_decmp-web/30_3_lu_decmp-webse2.html"
} |
## 10.25 Zerodivisors and total rings of fractions
The local ring at a minimal prime has the following properties.
Lemma 10.25.1. Let $\mathfrak p$ be a minimal prime of a ring $R$. Every element of the maximal ideal of $R_{\mathfrak p}$ is nilpotent. If $R$ is reduced then $R_{\mathfrak p}$ is a field.
Proof. If some element $x$ of ${\mathfrak p}R_{\mathfrak p}$ is not nilpotent, then $D(x) \not= \emptyset$, see Lemma 10.17.2. This contradicts the minimality of $\mathfrak p$. If $R$ is reduced, then ${\mathfrak p}R_{\mathfrak p} = 0$ and hence it is a field. $\square$
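A concrete instance may help fix ideas (an illustrative example, not part of the original page):

```latex
\textbf{Example (illustrative).}
Let $R = \mathbf{Z}/4\mathbf{Z}$ and $\mathfrak p = (2)$, its unique (hence
minimal) prime. Every element outside $\mathfrak p$ is a unit, so
$R_{\mathfrak p} = R$, and every element of the maximal ideal
$\mathfrak p R_{\mathfrak p} = (2)$ is nilpotent: $2^2 = 0$.
If instead $R = \mathbf{Z}$ (reduced) with minimal prime $(0)$,
then $R_{(0)} = \mathbf{Q}$ is a field, as the lemma asserts.
```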
Lemma 10.25.2. Let $R$ be a reduced ring. Then
1. $R$ is a subring of a product of fields,
2. $R \to \prod _{\mathfrak p\text{ minimal}} R_{\mathfrak p}$ is an embedding into a product of fields,
3. $\bigcup _{\mathfrak p\text{ minimal}} \mathfrak p$ is the set of zerodivisors of $R$. | {
"domain": "columbia.edu",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.986777180969715,
"lm_q1q2_score": 0.802118924871582,
"lm_q2_score": 0.8128673223709251,
"openwebmath_perplexity": 247.8725097079258,
"openwebmath_score": 0.9583593606948853,
"tags": null,
"url": "https://stacks.math.columbia.edu/tag/02LV"
} |
ros, mavericks, macosx, osx
Run Build Command:"/usr/bin/make" "cmTryCompileExec2392446172/fast"
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/cmTryCompileExec2392446172.dir/build.make CMakeFiles/cmTryCompileExec2392446172.dir/build
/usr/local/Cellar/cmake/3.0.0/bin/cmake -E cmake_progress_report /Users/voladoddi/ros_catkin_ws/build_isolated/catkin/CMakeFiles/CMakeTmp/CMakeFiles 1
Building CXX object CMakeFiles/cmTryCompileExec2392446172.dir/testCXXCompiler.cxx.o
/usr/bin/c++ -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk -o CMakeFiles/cmTryCompileExec2392446172.dir/testCXXCompiler.cxx.o -c /Users/voladoddi/ros_catkin_ws/build_isolated/catkin/CMakeFiles/CMakeTmp/testCXXCompiler.cxx
Linking CXX executable cmTryCompileExec2392446172
/usr/local/Cellar/cmake/3.0.0/bin/cmake -E cmake_link_script CMakeFiles/cmTryCompileExec2392446172.dir/link.txt --verbose=1 | {
"domain": "robotics.stackexchange",
"id": 18571,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, mavericks, macosx, osx",
"url": null
} |
mechanical-engineering, structural-engineering, metal-folding
Title: How to form steel truck fenders I want to set up steel truck fenders production line, but have no idea how to do that. Something like this picture.
Edit1:
I have two plans to form the final curved shape: using a hydraulic press, or using a customized rolling machine.
The final products must have no ripples. Your raw material will come in the form of a large spool of sheet metal containing a strip of steel hundreds of meters long. You will need an unspooler to feed steel off the spool and a straightener to take the curvature out of the steel strip. Then you need either a punch press or a shear to cut the sheet metal blanks to size and trim their corners. To put ripples or corrugations into the cut blanks, you will need a roller die or a rolling mill. To put a folded edge onto the sides of the blank you will need either a bending brake or a set of progressive dies. To bend the fenders into their final curved shape you will need a sheet metal press, and to smooth out any resulting ripples you will need an ironing press.
To attach the mounting brackets you will need an electric spot welder. Finally, you will need either an electrostatic spray booth to paint them or a powder coating rig. | {
"domain": "engineering.stackexchange",
"id": 2921,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, structural-engineering, metal-folding",
"url": null
} |
javascript, array, google-apps-script, google-sheets
{'base':'f', 'letters':'\u0066\u24D5\uFF46\u1E1F\u0192\uA77C'},
{'base':'g', 'letters':'\u0067\u24D6\uFF47\u01F5\u011D\u1E21\u011F\u0121\u01E7\u0123\u01E5\u0260\uA7A1\u1D79\uA77F'},
{'base':'h', 'letters':'\u0068\u24D7\uFF48\u0125\u1E23\u1E27\u021F\u1E25\u1E29\u1E2B\u1E96\u0127\u2C68\u2C76\u0265'},
{'base':'hv','letters':'\u0195'},
{'base':'i', 'letters':'\u0069\u24D8\uFF49\u00EC\u00ED\u00EE\u0129\u012B\u012D\u00EF\u1E2F\u1EC9\u01D0\u0209\u020B\u1ECB\u012F\u1E2D\u0268\u0131\u0456'},
{'base':'j', 'letters':'\u006A\u24D9\uFF4A\u0135\u01F0\u0249'},
{'base':'k', 'letters':'\u006B\u24DA\uFF4B\u1E31\u01E9\u1E33\u0137\u1E35\u0199\u2C6A\uA741\uA743\uA745\uA7A3'}, | {
"domain": "codereview.stackexchange",
"id": 43514,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, array, google-apps-script, google-sheets",
"url": null
} |
mapping. is a pair of parametric equations with parameter t whose graph is identical to that of the function. Finding only those arguments t where the parametric graph passes appropriate intersection points the second time. Functions that have a two-dimensional input and a three-dimensional output can be thought of as drawing a surface in three-dimensional space. It is also possible to do some mathematical calculations on the functions. Bar graph is the histopathological severity score of lung sections. The graph of this is part of a parabola, starting at (0,0,0) and extending to (20,0,400), as shown. Math · Multivariable calculus · Integrating multivariable functions · Surface integral preliminaries (videos) Surface integral preliminaries (videos) This is the currently selected item. x(t) = 3cos(2t), y(t) = sin(2t), z(t) = t/2 (an elliptical helix) proceed as follows: Step 1. Fe Tkgooie Utilities 3dtools Group 3d Examiner For. Instructions for the 3D Bezier curve. Note that the graph is a surface, in other words, a two-dimensional geometric object sitting in three-space. Parametric surfaces: ezmesh, ezsurf [See Section 10. I described a surface as a 2-dimensional object in space. Recommended for you. Parametric curve plotter. Duncan BS(1), Olson AJ. In this grasshopper definition you can model a parametric creased surface by changing the scale of the base curve and also using different graphs in Graph mapper. Here is a list of best free 3D graphing software for Windows. The domain of the parametric equations is the same. The parameterization is. Answer to (a) Show that the parametric equations x = x1 + (x2 – x1)t y = y1 + (y2 – y1) t where 0 < t < 1, describe the line segment that joins the points P1(x1, y1) Toggle navigation Menu Tutors. Better yet, use the script recorder to transform actions performed in Grapher into a script. Look below to see them all. Given an initial curve (called the base curve or generator) on a parametric surface, the goal. 
End Value must be greater than or equal to Start Value and both must be finite. | {
"domain": "lotoblu.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9879462197739625,
"lm_q1q2_score": 0.8337229745151609,
"lm_q2_score": 0.8438951005915208,
"openwebmath_perplexity": 1009.6819380103711,
"openwebmath_score": 0.6727544069290161,
"tags": null,
"url": "http://lotoblu.it/hbdv/parametric-surface-grapher.html"
} |
matlab, power-spectral-density, eeg
Title: What could be causing these humps every 10 Hz on my PSD estimate data? I am working with EEG recordings about 1,000,000 data pts long, recorded at 4,000 Hz.
I'm generating PSD estimates for these recordings using the matlab "periodogram" function like so:
[power,f]=periodogram(testdata,hamming(length(testdata)),[],4000);
and plotted like so:
scatter(f,log(power),1)
When I look at my plotted data (frequency vs. power), I see these strange, camel-like humps with maxima at every 10 Hz. The humps kind of resemble the plot of the absolute value of a sine wave. These humps accompany and are quite distinct from what I believe to be the non-artifactual baseline data.
When I run the periodogram function on smaller portions of the data as opposed to the full 1,000,000 point recording, the humps and the baseline appear to merge together.
Does anyone know what these humps might represent? These are indeed artefacts from the stimulation:
x = zeros(1,10*4000)
x(1) = 1;
x(401) = 1;
[power,f]=periodogram(x,hamming(length(x)),[],4000);
scatter(f,log(power),1);
xlim([0 200]); | {
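For readers without MATLAB, the same comb effect can be reproduced in Python/numpy (a sketch under the same assumption as the answer's demo: two impulses 400 samples, i.e. 0.1 s, apart at fs = 4000 Hz):

```python
import numpy as np

fs = 4000                     # sampling rate in Hz
n = 10 * fs                   # 10 seconds of signal
x = np.zeros(n)
x[0] = 1.0                    # first impulse
x[400] = 1.0                  # second impulse, 0.1 s later

X = np.fft.rfft(x)            # spectrum; bin spacing is fs/n = 0.1 Hz
# |X[k]| = |1 + exp(-2j*pi*k*400/n)| peaks every 10 Hz (k = 100, 200, ...)
# and has nulls at 5, 15, 25, ... Hz (k = 50, 150, ...)
peak = np.abs(X[100])         # bin at 10 Hz
null = np.abs(X[50])          # bin at 5 Hz
```

Any pair of events a fixed 0.1 s apart in the recording interferes this way, which is why the humps repeat every 10 Hz.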
"domain": "dsp.stackexchange",
"id": 3770,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, power-spectral-density, eeg",
"url": null
} |
... + 2 + 1 = n(n - … All pairwise non-isomorphic connected arc-transitive p-elementary abelian covers of the complete graph K 5 are constructed using the techniques developed by Malnič, Marušič and Potočnik. Complete Graph: A complete graph is a graph with N vertices in which every pair of vertices is joined by exactly one edge. Select a sink of the maximum flow. The complete graph K4 is planar K5 and K3,3 are not planar. Click to see full answer. Then, let G be a planar graph corresponding to K5. K8, 2=16. C. Find an isomorphic representation (graph) of K5. Zur Navigation springen Zur Suche springen. The symbol used to denote a complete graph is KN. How many triangles are see in complete K5 graph. The graph K3,3 is non-planar. i Der Quelltext dieser SVG -Datei ist valide . Let ' G − ' be a simple graph with some vertices as that of 'G' and an edge {U, V} is present in ' G − ', if the edge is not present in G.It means, two vertices are adjacent in ' G − ' if the two vertices are not adjacent in G.. K m,n is a complete graph if m = n = 1. Jump to navigation Jump to search. Denote the vertices of G by v₁,v₂,v₃,v₄,v5. Note: There could be exceptions also. In fact, any graph which contains a “topological embedding” of a nonplanar graph is non- planar. A Hamiltonian circuit is a path along a graph that visits every vertex exactly once and returns to the original. Active 3 years, 2 months ago. If n=9, k5, 4 = ⌊ n 2 / 4 ⌋ = ⌊ 9 2 / 4 ⌋ = 20. Definition. This category has only the following subcategory. Similarly K6, 3=18. A complete bipartite graph is a graph whose vertices can be partitioned into two subsets V1 and V2 such that no edge has both endpoints in the same subset, and every possible edge that could connect vertices in different subsets is part of the graph. Ask Question Asked 6 years, 5 months ago. Graph has not Eulerian | {
"domain": "com.au",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9850429129677614,
"lm_q1q2_score": 0.8187115627752205,
"lm_q2_score": 0.8311430415844385,
"openwebmath_perplexity": 1003.8854289814757,
"openwebmath_score": 0.4934229850769043,
"tags": null,
"url": "https://autoconfig.johanson.com.au/hergp/e10754-complete-graph-k5"
} |
Explanation for Step 1. Write $$S_k := \frac{\displaystyle\int_{1/((k+1)\pi)}^{1/(k\pi)}\Bigg|\sin\frac{1}{t}\Bigg|\;dt}{\displaystyle\frac{1}{k\pi} - \frac{1}{(k+1)\pi}}$$ When $$k$$ is even, $$\sin\frac{1}{t} > 0$$ on the interval; when $$k$$ is odd, $$\sin\frac{1}{t} < 0$$ on the interval. We will do the even case; the odd case is similar. Change variables $$s = \frac{1}{t} - 2 k \pi$$ $$S_{2k} = \int_0^\pi\frac{(2k)(2k+1)\pi \sin(s+2 k \pi)}{(s+2 k \pi)^2}\;ds = \int_0^\pi\frac{(2k)(2k+1)\pi \sin(s)}{(s+2 k \pi)^2}\;ds$$ The integrand converges $$\lim_{k \to \infty} \frac{(2k)(2k+1)\pi \sin(s)}{(s+2 k \pi)^2} = \frac{\sin s}{\pi}\;\lim_{k \to \infty}\frac{1+\frac{1}{2k}}{\left(1+\frac{s}{2k\pi}\right)^2} = \frac{\sin s}{\pi}$$ and is dominated by $$\left|\frac{(2k)(2k+1)\pi \sin(s)}{(s+2 k \pi)^2}\right| = \frac{\sin
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9840936082881853,
"lm_q1q2_score": 0.8068517935160164,
"lm_q2_score": 0.8198933381139645,
"openwebmath_perplexity": 231.04785373562763,
"openwebmath_score": 0.9670203924179077,
"tags": null,
"url": "https://math.stackexchange.com/questions/3590854/differentiability-of-an-integral-accumulation-function"
} |
ruby, converting, rspec
And you get
"101010".bin_to_dec #=> 42
"101010".bin_to_dec(false) #=> 21
No memoization, but I'd call that pretty clean.
Took a quick look at your tests, and it's a bit overkill (also, you should use the expect syntax of rspec; the should syntax is old-school).
I'd actually just do something like
number = rand(12345) # can be anything really
string = number.to_s(2)
expect(string.bin_to_dec).to eq(number)
And call it good. You'll note I'm using to_s(2) above, but here it makes sense. We have to assume that Ruby works anyway, so in this case, it's a perfect yardstick. Similarly, to check big endian conversion, we can use to_i(2) with a clear conscience
string = rand(12345).to_s(2).reverse
expect(string.bin_to_dec(false)).to eq(string.to_i(2))
Add a test for the exception raising, and you've tested everything.
On another note, you say you "wanted to use a minimum of functions that come with Ruby by default as that would sort of defeat the purpose of this as a practice program". But I'd recommend you use as much built-in functionality as possible, especially in your practice programs (in this case stopping short of using String#to_i, of course).
Learning any language is usually less about learning the language itself (as in syntax), as it is about learning the conventions, idioms, and the built-in goodies. Intentionally doing things "the hard way" will probably teach you the wrong lessons. And the code you write won't teach you conventions and idioms, because it's unconventional to make things hard for yourself and no idioms exist for it.
Besides, at first, the easy solution will actually be the difficult one, because the easy solution is the one that requires experience. But brute-forcing your way though a problem won't earn you that experience and will actually teach you a lot less than trying to use the language to your advantage. | {
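For comparison, the semantics under test fit in a few lines; this Python sketch (hypothetical, mirroring the Ruby method's described behaviour) converts big-endian by default and little-endian when the flag is false:

```python
def bin_to_dec(s, big_endian=True):
    """Convert a binary string to an integer.

    Mirrors the reviewed Ruby String#bin_to_dec: most-significant
    digit first by default, least-significant first otherwise."""
    if not s or any(c not in "01" for c in s):
        raise ValueError("not a binary string")
    digits = s if big_endian else s[::-1]
    value = 0
    for c in digits:
        value = value * 2 + (1 if c == "1" else 0)
    return value
```

With this, "101010" reads as 42 big-endian and, reversed, as 21, matching the review's examples.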
"domain": "codereview.stackexchange",
"id": 7146,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ruby, converting, rspec",
"url": null
} |
ros, ros-melodic, installation
docutils-common libfreetype6 libjbig0 libjpeg-turbo8 libjpeg8 liblcms2-2 libpaper-utils libpaper1
libpng16-16 libtiff5 libwebp6 libwebpdemux2 libwebpmux3 multiarch-support python3-catkin-pkg-modules
python3-dateutil python3-docutils python3-olefile python3-pil python3-pkg-resources python3-pygments
python3-pyparsing python3-roman python3-six sgml-base tzdata ucf xml-core
0 upgraded, 28 newly installed, 0 to remove and 16 not upgraded.
Need to get 42.0 kB/3179 kB of archives.
After this operation, 15.7 MB of additional disk space will be used.
Err:1 http://packages.ros.org/ros/ubuntu bionic/main amd64 python3-catkin-pkg-modules all 0.4.20-1
404 Not Found [IP: 64.50.236.52 80]
E: Failed to fetch http://packages.ros.org/ros/ubuntu/pool/main/p/python3-catkin-pkg-modules/python3-catkin-pkg-modules_0.4.20-1_all.deb 404 Not Found [IP: 64.50.236.52 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? | {
"domain": "robotics.stackexchange",
"id": 34973,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-melodic, installation",
"url": null
} |
ros
<xacro:macro name="prop_joint" params="name originxyz axis">
<joint name="${name}_joint" type="continuous">
<origin xyz="${originxyz}" rpy="0 0 0" />
<parent link="base_link" />
<child link="${name}" />
<axis xyz="${axis}" />
</joint>
</xacro:macro>
<xacro:macro name="prop_transmission" params="name">
<transmission name="${name}_trans">
<type>transmission_interface/SimpleTransmission</type>
<joint name="${name}_joint">
<hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
</joint>
<actuator name="${name}_motor">
<mechanicalReduction>1</mechanicalReduction>
<hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
</actuator>
</transmission>
</xacro:macro>
</robot>
drone_gazebo.xacro
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro">
<!-- ROS CONTROL -->
<gazebo>
<plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
<robotNamespace>/</robotNamespace>
</plugin>
</gazebo> | {
"domain": "robotics.stackexchange",
"id": 36133,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
ros, rqt-gui, rqt-bag
Title: Context menu in rqt_bag won't appear
When running rqt_bag version 0.2.14 in ROS Groovy, I was trying to right-click the timeline to enable topic publishing, but the menu wouldn't appear. The menu WOULD appear in rxbag. In rqt_bag, however, right-clicks and middle-clicks didn't even seem to dispatch events. The strange part is that on another machine with a different graphics driver, rqt_bag works as expected.
rqt_bag 0.2.14
ROS Groovy
Xubuntu 12.04
Nvidia CUDA display drivers
Originally posted by jbohren on ROS Answers with karma: 5809 on 2013-04-02
Post score: 0
The world is broken, restart your computer, and it will work again.
Originally posted by jbohren with karma: 5809 on 2013-04-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13646,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rqt-gui, rqt-bag",
"url": null
} |