java, ai, connect-four
enum Piece
{
Red,
Blue,
None
}
class Board extends JButton
{
public int i, j;
public Piece piece = Piece.None;
public Board(int i, int j)
{
this.i = i;
this.j = j;
setOpaque(true);
setColor();
}
public void setPiece(Piece piece)
{
this.piece = piece;
setColor();
}
public void setColor()
{
switch(piece)
{
case Red:
setBackground(Color.red);
break;
case Blue:
setBackground(Color.blue);
break;
case None:
setBackground(Color.white);
break;
}
}
}
class Tree // this implements the minimax algorithm
{
public int value;
Board[][] Boards; // this is the board
private ArrayList<Integer> bestMoves;
Board prev = null;
int depth;
static int maxDepth = 4; // this is the max depth I'm going down | {
"domain": "codereview.stackexchange",
"id": 23497,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, ai, connect-four",
"url": null
} |
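Stripped of the Swing details, the search the `Tree` class performs is plain depth-limited minimax. A minimal Python sketch, illustrative rather than the OP's Java: nested lists stand in for the game tree, and numeric leaves stand in for evaluated boards at the depth cut-off.

```python
def minimax(node, maximizing=True):
    # Nested lists are internal nodes; numbers are static evaluations
    # of a board position at the depth limit.
    if not isinstance(node, list):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The maximizer picks the branch whose worst case is best:
best = minimax([[3, 5], [2, 9]])   # min(3,5)=3, min(2,9)=2 -> 3
```

The real implementation would generate child boards from legal Connect Four moves instead of reading a prebuilt list, but the recursion is the same.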
machine-learning, vc-dimension, pac-learning
Title: PAC-learning bound with epsilon-cover of hypothesis class In this video at 43:00, a version of the PAC bound for generalization error $\epsilon$, which I hadn't seen before, is quoted:
$$\epsilon^2 < \frac{\log{|H_\epsilon|} + \log{1/\delta}}{2m}$$
where $m$ is the number of samples, $\delta$ is the confidence parameter, and $|H_\epsilon|$ is the cardinality of an "$\epsilon$-cover" $H_\epsilon$ of the hypothesis class, where he defines an $\epsilon$-cover as a set of subsets of the hypothesis class such that the probability that two hypotheses in the same subset disagree is less than $\epsilon$. | {
"domain": "cstheory.stackexchange",
"id": 4860,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, vc-dimension, pac-learning",
"url": null
} |
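Rearranged for $\epsilon$ at a fixed sample size $m$, the quoted bound reads $\epsilon < \sqrt{(\log|H_\epsilon| + \log(1/\delta))/(2m)}$. A quick numerical illustration; the cover size, $\delta$, and $m$ below are made-up values, not from the video:

```python
import math

def eps_bound(cover_size, delta, m):
    # epsilon < sqrt((log|H_eps| + log(1/delta)) / (2m))
    return math.sqrt((math.log(cover_size) + math.log(1 / delta)) / (2 * m))

eps = eps_bound(cover_size=10**6, delta=0.05, m=10_000)   # ~0.029
```

As expected, quadrupling the sample size halves the bound, since $\epsilon$ scales as $1/\sqrt{m}$.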
There are different approaches to computing the requested auto-correlations, but I would like to provide the simplest one that suits your case, provided that:
• The difference equation is an LCCDE type indicating an LTI system.
• The input to the system $$v[n]$$ is a WSS (wide-sense stationary) random process.
I assume both of these hold in your question. Then I would use the following well-known relation between the input and output auto-correlations of a (real) LTI system driven by a (real) WSS input $$v[n]$$:
$$r_{dd}[k] = h[k] \star h[-k] \star r_{vv}[k]$$
where the auto-correlation of the WSS input $$v[n]$$ is $$r_{vv}[k] = \sigma_v^2 \delta[k] = \delta[k]$$ for an uncorrelated, zero-mean, unit-variance process, and $$h[n]$$ is the impulse response of the LTI system described by the LCCDE.
Then the auto-correlation sequence of the WSS output $$d[n]$$ is $$r_{dd}[k] = h[k] \star h[-k] = \sum_{m=-\infty}^{\infty} h[m]h[-(k-m)]~~~,~~~\text{ for } k = 0,\pm 1, \pm 2,...$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.959762057376384,
"lm_q1q2_score": 0.8018653945429689,
"lm_q2_score": 0.8354835330070839,
"openwebmath_perplexity": 361.9283243712944,
"openwebmath_score": 0.9670982956886292,
"tags": null,
"url": "https://dsp.stackexchange.com/questions/52847/how-to-compute-autocorrelation-of-signal-defined-by-difference-equations"
} |
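For a finite impulse response, the convolution $h[k]\star h[-k]$ reduces to the finite sum above, $r_{dd}[k]=\sum_m h[m]h[m-k]$. A pure-Python sketch; the two-tap $h$ below is an illustrative assumption, e.g. $d[n]=v[n]+0.5\,v[n-1]$:

```python
def autocorr_from_h(h, k):
    # r_dd[k] = sum_m h[m] * h[m - k], the deterministic autocorrelation
    # of the impulse response (valid for unit-variance white WSS input)
    return sum(h[m] * h[m - k]
               for m in range(len(h)) if 0 <= m - k < len(h))

h = [1.0, 0.5]                 # assumed FIR: d[n] = v[n] + 0.5 v[n-1]
r0 = autocorr_from_h(h, 0)     # 1 + 0.25 = 1.25
r1 = autocorr_from_h(h, 1)     # 0.5
```

Note the sequence is symmetric, $r_{dd}[-k]=r_{dd}[k]$, and vanishes for $|k|$ beyond the filter length.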
• I have also tried integration by parts with $u=\log x$, $v'=\frac1{(1+x^2)^2}$, but I believe this leads to a term $$\int_0^\infty\frac{\arctan x}{x}\,dx$$ which again looks troublesome. – John Doe May 21 '17 at 11:20
• Integrate $\log^2(x)/(1+x^2)^2$ over (for example) a keyhole contour in the complex plane. Search the site properly; this was done hundreds of times before on this site (especially user @Mark Viola posted something similar these days). – tired May 21 '17 at 11:28
• btw $\int dx \arctan(x)/x$ is a very good candidate for differentiation under the integral sign.... – tired May 21 '17 at 11:30
• If one can solve (click for solution) $$f(t)=\int_0^\infty\frac{\ln(x)}{t^2+x^2}\ dx$$ then take the derivative and set $t=1$. – Simply Beautiful Art May 21 '17 at 11:37
Take the keyhole contour of $f(z)=\left[\frac{\ln(z)}{1+z^2}\right]^2$ with $\gamma$ the radius of the inner circle and $\Gamma$ the radius of the outer circle to get | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9777138190064204,
"lm_q1q2_score": 0.808287543311331,
"lm_q2_score": 0.8267117919359419,
"openwebmath_perplexity": 918.0222006236005,
"openwebmath_score": 0.9978640675544739,
"tags": null,
"url": "https://math.stackexchange.com/questions/2290308/is-this-definite-integral-possible-to-evaluate-int-0-infty-frac-log-x1x/2290340"
} |
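As a sanity check on the contour result: the integral the comments point at, $\int_0^\infty \ln x/(1+x^2)^2\,dx$, comes out to $-\pi/4$ by the differentiation trick sketched above (differentiate $f(t)$ and set $t=1$). A quick numerical confirmation, substituting $x=e^t$ so the logarithmic endpoint singularity disappears:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# x = e^t turns the integral over (0, inf) into a smooth, rapidly
# decaying integral of t*e^t/(1+e^(2t))^2 over (-inf, inf)
integrand = lambda t: t * math.exp(t) / (1 + math.exp(2 * t)) ** 2
value = simpson(integrand, -40, 40, 4000)   # ~ -pi/4
```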
“e” 2 Transformations of Exponential Functions We will apply the same rules we studied in Section The transformation of functions includes the shifting, stretching, and reflecting of their graph. Transformations of Linear and Exponential Graphs original exponential function? Do the same rules apply as with the What transformation do you observe when Transformations Of Exponential Functions This lecture will cover exponential parent functions, transformations. Homework 4 - Exponential Functions. exponential functions, rules of exponents, 4. How to Graph Logarithms: Transformations and Effects on Graphing Transformations of Logarithmic Functions. Objectives: Given a general exponential function the student will determine the horizontal, Rules of Exponents. Just as with other parent functions, we can apply the four types of transformations—shifts, reflections, stretches, and compressions—to the parent function f ( x ) = b x \displaystyle f\left(x\right)={b}^{x} f(x)=bx without loss of | {
"domain": "rtitb.info",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9820137874059599,
"lm_q1q2_score": 0.8366294032890222,
"lm_q2_score": 0.8519528076067262,
"openwebmath_perplexity": 781.113041914224,
"openwebmath_score": 0.5795921683311462,
"tags": null,
"url": "http://templgvinstructorregister.com.rtitb.info/jhjxwl/exponential-function-transformations-rules.php"
} |
ros, include, package, service
# LIBRARIES rosfond
CATKIN_DEPENDS roscpp rospy std_msgs
# DEPENDS system_lib
)
include_directories(
${catkin_INCLUDE_DIRS}
include
)
add_executable(my_nodes src/my_nodes.cpp)
target_link_libraries(my_nodes
${catkin_LIBRARIES}
) | {
"domain": "robotics.stackexchange",
"id": 25609,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, include, package, service",
"url": null
} |
automata, reference-request
Title: Comprehensive reference for automata I'm looking for a general reference cataloguing types of automata. Any introductory textbook will cover DFAs, NFAs, PDAs, and TMs, but there are countless more automata of practical or theoretical interest — linear bounded, embedded pushdown, nested stack, tagged finite, and so on. Online resources for these tend to be scattered.
Is there any sort of comprehensive reference, whether online or in print? Even incomplete surveys would be helpful. There is no list out on the web, nor even a partial list. I spent a few days looking and found nothing. The best you can do is piece together a list on your own (and post it here! :D). | {
"domain": "cs.stackexchange",
"id": 11474,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "automata, reference-request",
"url": null
} |
lambda-calculus, type-theory, typing, types-and-programming-languages
This contradicts the expectation that $\Sigma(l_1) = \mathrm{Int \to Int}$. But this expectation is there only because we know that $\mu(l_1)$ is a term of type $\mathrm{Int \to Int}$. However, $\mu$ plays no role in the typing of an expression. It only plays a role in the evaluation process.
What is violated here is not the typing of $t$. Indeed, $t$ is straightforwardly typable by following the typing rules and choosing appropriate $\Gamma$ and $\Sigma$. What is violated is the typing of $\mu$. Following the theory and notation of the book, the judgment $\Gamma\,|\,\Sigma\vdash\mu$ does not hold. Because of this, the preservation theorem does not apply, and if you try to evaluate $t$, a run-time error will occur. | {
"domain": "cs.stackexchange",
"id": 14051,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lambda-calculus, type-theory, typing, types-and-programming-languages",
"url": null
} |
```x = 2; p = 1/6; y = geocdf(x,p,"upper")```
```y = 0.5787 ```
The returned value `y` indicates that the probability of failing to roll a 6 within the first three rolls is 0.5787. Note that this probability is equal to the probability of rolling a non-6 value three times.
`probability = (1-p)^3`
```probability = 0.5787 ```
## Input Arguments
collapse all
Values at which to evaluate the cdf, specified as a nonnegative integer scalar or an array of nonnegative integer scalars. | {
"domain": "mathworks.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9861513893774303,
"lm_q1q2_score": 0.8174591617024591,
"lm_q2_score": 0.8289388125473629,
"openwebmath_perplexity": 395.95028006616155,
"openwebmath_score": 0.7795232534408569,
"tags": null,
"url": "https://kr.mathworks.com/help/stats/geocdf.html"
} |
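The same "upper" value can be checked without MATLAB: `geocdf` parameterizes the geometric distribution as the number of failures before the first success, so the upper tail is just $(1-p)^{x+1}$. A pure-Python check of the example above:

```python
# P(X > x) for X = number of failed rolls before the first 6,
# matching geocdf(x, p, "upper")
def geom_sf(x, p):
    return (1 - p) ** (x + 1)

y = geom_sf(2, 1 / 6)   # probability of no 6 in the first three rolls
```

This makes the remark in the text explicit: the result equals $(5/6)^3$, the probability of rolling a non-6 three times in a row.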
The last term times the last term should give -12, i.e. ab = -12, so what products can give -12?
There are -12 and 1, 2 and -6, 3 and -4 (and the same pairs with the signs switched around).
Expanding (2x+a)(x+b) in your head, you'd see that the coefficient of x is a+2b. So looking at your choices, you want a+2b=5. Right away 12 and -1 (or 1 and -12) is eliminated.
2 and -6 is gone since 2+2(-6)≠5 and -2+2(6)≠5,
so you are left with 3 and -4 or -3 and 4.
3+2(-4) = -5. So we are seeing 5, but negative, which means we need to switch the signs. So the choice is -3 and 4,
so it is factored as (2x-3)(x+4).
As you practice, you can quickly do this and eliminate the obvious pairs it can't be.
For ax^2+bx+c:
Another way is to compute b^2-4ac and take its square root. If it is an integer, then use the quadratic formula
$$x_1,x_2=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$$
In your example b^2-4ac=121, so √(b^2-4ac)=11
x_1, x_2 = (-5±11)/(2(2))
x_1 = (-5+11)/4 = 6/4 = 3/2
So one root is x=3/2, i.e. 2x-3=0 | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232924970204,
"lm_q1q2_score": 0.8353714768575792,
"lm_q2_score": 0.8499711832583696,
"openwebmath_perplexity": 1947.432036252018,
"openwebmath_score": 0.6764963865280151,
"tags": null,
"url": "https://www.physicsforums.com/threads/help-with-factoring.333239/"
} |
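The discriminant shortcut described above is easy to script. A small Python check for the example $2x^2+5x-12$:

```python
import math

a, b, c = 2, 5, -12
disc = b * b - 4 * a * c        # 121
root = math.isqrt(disc)
assert root * root == disc      # perfect square -> rational roots
x1 = (-b + root) / (2 * a)      # 1.5, i.e. the factor (2x - 3)
x2 = (-b - root) / (2 * a)      # -4.0, i.e. the factor (x + 4)
```

If the discriminant is not a perfect square, the quadratic does not factor over the integers, which is itself a useful quick test.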
electromagnetism, measurements, experimental-physics, speed-of-light, home-experiment
The minimum round trip time is 108 ms, which would correspond to 10,800 km instead of 5839 km. Off by a factor of 2, but the correct order of magnitude, due to delays in switches etc., which is why we said this is a lower bound.
If one looks more closely at the trajectory of my packets to New York with tracepath:
fred@sanduleak2:~$ tracepath www.columbia.edu | {
"domain": "physics.stackexchange",
"id": 675,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, measurements, experimental-physics, speed-of-light, home-experiment",
"url": null
} |
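The arithmetic behind the 10,800 km figure is one line: half the round-trip time multiplied by the signal speed in fibre. The $2\times10^5$ km/s value below (roughly $2c/3$, light in glass) is my assumption about the speed used:

```python
c_fibre_km_s = 2.0e5          # assumed: light in optical fibre, ~2/3 of c
rtt_s = 0.108                 # measured minimum round-trip time
one_way_km = (rtt_s / 2) * c_fibre_km_s   # ~10,800 km upper-ish bound
```

Since switching and queueing only add delay, the geometric distance can only be shorter than this, consistent with the 5839 km great-circle figure.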
c#, thread-safety, generics, cache
Just use a default argument, and implement the function once:
static internal void Set<T>(string cacheKey, T cacheItem, int minutes = DefaultMinutes) { /* ... */ } | {
"domain": "codereview.stackexchange",
"id": 20465,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, thread-safety, generics, cache",
"url": null
} |
ros, ros-melodic, archlinux
Install version 3.12.3 of cmake; it's the one that I used. It might also work with 3.12.4 or other versions, but I can't confirm that.
Rebuild the package ros-melodic-cpp-common. As I don't know how to just rebuild something, I first removed it with pikaur -Rs ros-melodic-cpp-common and then reinstalled it with pikaur -S ros-melodic-cpp-common.
Now you should be able to install ros-melodic-rostime with pikaur -S ros-melodic-rostime.
I used pikaur as AUR helper, but it should work with any other, like yay for example.
Here is where I obtained all the information.
Originally posted by NelsonCandela with karma: 36 on 2019-01-02
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 32224,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-melodic, archlinux",
"url": null
} |
c#, performance
Type routeTypeWrapper = typeof(RouteTypeWrapper<>).MakeGenericType(new Type[] {
catchallRoute
});
RouteTypeProvider routeTypeProvider = (RouteTypeProvider)Activator.CreateInstance(routeTypeWrapper);
return GetRoute(routeTypeProvider, routeHandler);
}
private static RouteBase GetRoute(RouteTypeProvider routeTypeProvider, IRouteHandler routeHandler)
{
Object[] constructorParameters = new Object[]
{
routeHandler
};
return (RouteBase)Activator.CreateInstance(routeTypeProvider.RouteType, constructorParameters);
}
#region IRouteProvider Members
RouteBase IRouteProvider.GetRoute(IRouteHandler routeHandler)
{
return GetRoute(routeHandler);
}
#endregion
}
} | {
"domain": "codereview.stackexchange",
"id": 15654,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance",
"url": null
} |
# How should I approach using two 8s and two 3s to make the number 24?
Use two $$8$$s, two $$3$$s, and basic arithmetic operators ($$+, -, \times , \div$$, parentheses) to make the number $$24$$.
(You may not join numbers together to form new numbers, like $$8, 3\rightarrow 83$$)
I don't know how to start besides just trying to find the correct answer. Is there a way to build this equation through small steps, or should I just brute-force it?
• see puzzling.stackexchange.com/questions/50259/coppers-make-24 (GM's answer) – JMP Feb 27 '19 at 21:17
• If I am not mistaken, this question is not asking people to solve the puzzle in question, but is instead asking strategies for how to go about solving it beyond just trying things at random. – Lunin Feb 28 '19 at 1:13
• One thing I'd suggest is determining whether any rounding is allowed. Narrows down the number of pieces you have to work with if no, and opens up more options if yes. – Justin Time - Reinstate Monica Feb 28 '19 at 1:38 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9740426428022032,
"lm_q1q2_score": 0.8137965885079417,
"lm_q2_score": 0.8354835330070838,
"openwebmath_perplexity": 308.34475120940135,
"openwebmath_score": 0.6632190942764282,
"tags": null,
"url": "https://puzzling.stackexchange.com/questions/80039/how-should-i-approach-using-two-8s-and-two-3s-to-make-the-number-24?noredirect=1"
} |
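If you do decide to brute-force it, the search space is tiny: permute the four numbers, try every parenthesization and operator choice, and compare using exact rational arithmetic so division never introduces rounding. An illustrative sketch:

```python
from fractions import Fraction
from itertools import permutations
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def expressions(vals):
    # yield (value, string) for every way to combine the ordered
    # (value, string) pairs with binary operators and parentheses
    if len(vals) == 1:
        yield vals[0]
        return
    for i in range(1, len(vals)):
        for lv, ls in expressions(vals[:i]):
            for rv, rs in expressions(vals[i:]):
                for sym, fn in OPS.items():
                    if sym == '/' and rv == 0:
                        continue                    # skip division by zero
                    yield fn(lv, rv), f"({ls} {sym} {rs})"

def solve(numbers, target=24):
    for perm in set(permutations(numbers)):
        for value, expr in expressions([(Fraction(n), str(n)) for n in perm]):
            if value == target:
                return expr
    return None
```

For `[8, 8, 3, 3]` this finds a solution (the known one uses a fractional intermediate value, which is why `Fraction` matters here).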
# Convergence of this series which gives no result in root test
Is the series whose general term is $$\frac{\tan \frac{1}{n}}{\sqrt n}$$ convergent? I have tried the root test, but the limit is 1, so it is inconclusive. How can I check the convergence of this series?
-
Use the Limit Comparison Test, with $\sum 1/n^{3/2}$ (for small $x$, $\tan x\approx x$). – David Mitra Jan 13 '14 at 0:29
sorry my question was not that "edit" changes it. tan (1/n) is in numerator – nothingobvious Jan 13 '14 at 0:29
now it is ok.... – nothingobvious Jan 13 '14 at 0:32
now help me to decide the convergence of the series. – nothingobvious Jan 13 '14 at 0:38 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9825575162853136,
"lm_q1q2_score": 0.8010128055220166,
"lm_q2_score": 0.8152324848629214,
"openwebmath_perplexity": 999.8891748556233,
"openwebmath_score": 0.9409425258636475,
"tags": null,
"url": "http://math.stackexchange.com/questions/636379/convergence-of-this-series-which-gives-no-result-in-root-test"
} |
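The limit comparison suggested in the comment is easy to check numerically: $a_n / n^{-3/2} = n\tan(1/n) \to 1$, so the series converges together with $\sum n^{-3/2}$. A quick sketch of the ratio:

```python
import math

def ratio(n):
    # a_n / b_n with a_n = tan(1/n)/sqrt(n) and b_n = n^(-3/2);
    # this simplifies to n * tan(1/n), which tends to 1
    a_n = math.tan(1 / n) / math.sqrt(n)
    return a_n / n ** -1.5
```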
ros, slam, navigation, rgbd6dslam
Title: Specs To Run RGBD-6D-SLAM?
I've successfully installed the packages, but when I try executing it, my computer lags/freezes when trying to run the code. The computer I've run it on has a 2.4 GHz dual-core processor and 2 GB of memory.
For those who have had success running the tool smoothly, what were the specs of the computer that you ran it on? Specifically the amount of memory and CPU speed? | {
"domain": "robotics.stackexchange",
"id": 5015,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, slam, navigation, rgbd6dslam",
"url": null
} |
black-holes, quantum-gravity, curvature
However, as far back as the 60s there have been suggestions that quantum effects would cause the Schwarzschild geometry to become de Sitter near the singularity of a black hole, and this would mean there was a maximum curvature, i.e. a minimum radius of curvature. As it happens, the recent question How does the friedmon solution to Einstein's equations resolve paradox of bounded infinities? covered some of this ground. You might also want to look at the paper Implementing Markov's Limiting Curvature Hypothesis for some more background information.
These ideas are not based on any fundamental theory of quantum gravity, because no such theory exists, but rather they argue from general principles. So you won't be surprised to learn that there is no precise prediction for the bound, but rather that it is expected to sit at around the Planck scale. More precisely, the principal curvature in any dimension cannot exceed the inverse of the Planck length. | {
"domain": "physics.stackexchange",
"id": 15221,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "black-holes, quantum-gravity, curvature",
"url": null
} |
Abstract We consider the problem of portfolio selection, with transaction costs and constraints on exposure to risk. CVXPY Documentation Release 0. Convex optimization, for everyone. from math import sqrt from cvxopt import matrix from cvxopt. 12)¶ source code # Figure 4. Maths : Statistics and probability, optimization in mathematics, linear algebra, time series, stochastic calculus IT : Applied econometrics on EViews, computer science applied to finance (Python, Matlab, VBA, R), big data and AI in finance, machine learning. Beginner’s Guide to Portfolio Optimization with Python from Scratch. When you purchase life and retirement insurance, you’re buying a promise. Prior to Citadel, I was a data scientist at Uber (Marketplace Optimization team). Learn how to calculate meaningful measures of risk and performance, and how to compile an optimal portfolio for the desired risk and return trade-off. 2018-07-20 python python-2. (Also my first time posting a problem anywhere, so please do | {
"domain": "freccezena.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9632305381464927,
"lm_q1q2_score": 0.8047632511380802,
"lm_q2_score": 0.8354835309589074,
"openwebmath_perplexity": 1894.8592761976015,
"openwebmath_score": 0.26045969128608704,
"tags": null,
"url": "http://nrvt.freccezena.it/python-portfolio-optimization-cvxopt.html"
} |
• @Hubble07 Your answer is simply incorrect. It would be correct only in the case that each sphere was infinitely deformable (i.e., equivalent to fluid volume) where you pour the volume of $n$ smaller spheres into the volume of the larger sphere. The proven answer are here: en.wikipedia.org/wiki/Sphere_packing_in_a_sphere. – David G. Stork Dec 6 '15 at 21:40
• note the question regards packing around a sphere, not inside. – george2079 Dec 7 '15 at 18:40 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9572777975782054,
"lm_q1q2_score": 0.8282597958639311,
"lm_q2_score": 0.865224073888819,
"openwebmath_perplexity": 1764.3662866110817,
"openwebmath_score": 0.7090698480606079,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/101350/packing-of-smaller-spheres-onto-a-bigger-one"
} |
python, beginner
Title: GUI to load/save player information in database I started programming using Python two months ago. My only prior programming experience was VBA. I'm completely self-taught. I wrote the code below for a project I'm creating as a way to learn the language.
I was hoping someone experienced with Python could look over my code quickly and let me know which places could be written better. It all works, but I feel (especially with the variables) there must be best practices I am missing.
Any feedback would be greatly appreciated.
The code is meant to load a window with input boxes where you can save/load player information into a database file. It will be used for a Dungeons and Dragons game. It does what it's supposed to.
I'm more concerned with the code than the functionality. Is there a more elegant way to get the same results? Specifically within the def makeVar(self) section.
from tkinter import * #import tkinter
from tkinter import messagebox as mb
import SQLclass | {
"domain": "codereview.stackexchange",
"id": 39228,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner",
"url": null
} |
java, interview-questions, json, csv
Why not just write…
try {
String divisionId = tokenizer.nextToken();
String teamId = tokenizer.nextToken();
String managerId = tokenizer.nextToken();
String employeeId = tokenizer.nextToken();
String firstName = tokenizer.nextToken();
String lastName = tokenizer.nextToken();
String birthdate = tokenizer.nextToken();
if (tokenizer.hasMoreTokens()) {
throw new MalformedCSVException("Extra field in CSV");
}
buildSurveyData(divisionId, teamId, managerId, employeeId, firstName, lastName, birthdate, data, sortOrderOfDataOrEmployees, sortDirectionOfEmployees);
} catch (NoSuchElementException missingField) {
throw new MalformedCSVException("Missing field in CSV");
}
Note that no extra verification is necessary, and no magic numbers (MIN_TOKENS_PER_LINE) are necessary. The processing just happens naturally, and you throw an exception as you encounter an error. | {
"domain": "codereview.stackexchange",
"id": 32296,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, interview-questions, json, csv",
"url": null
} |
keras
Another example is Google's Inception layer where you have different convolutions that are added back together before getting to the next layer.
To feed multiple inputs to Keras you can pass a list of arrays. In the word/image example you would have two lists:
x_input_image = [image1, image2, image3]
x_input_word = ['Feline', 'Dog', 'TV']
y_output = [1, 0, 0]
Then you can fit as follows:
model.fit(x=[x_input_image, x_input_word], y=y_output) | {
"domain": "datascience.stackexchange",
"id": 1042,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "keras",
"url": null
} |
debye-length
The first one is simply that we do not really know how to deal with the nonlinear problem in general cases, and therefore we linearize. I agree that this is not a good justification, although most of the time it is the hidden reason why people use it.
The Poisson-Boltzmann equation is mostly used in aqueous solutions, and therefore you have a factor 80 (owing to the dielectric constant of bulk water) that appears in all your potentials... this more or less ensures that a monovalent ion does not generate too high a potential energy with other monovalent ions. If you have free ions in solution, it means that the thermal energy was enough to begin with to unbind them from the groups they were bound to | {
"domain": "physics.stackexchange",
"id": 7459,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "debye-length",
"url": null
} |
phase, allpass
$$\cos(\omega_0/2)(a+1) = - \sin(\omega_0/2)(a-1) \tag{7}$$
Separating the trigonometric functions we get
$$ \frac{\sin(\omega_0/2)}{\cos(\omega_0/2)} = \frac{1+a}{1-a} = \tan(\omega_0/2) \tag{8}$$
And now we can finally solve for $a$
$$a = \frac{\tan(\omega_0/2) -1}{\tan(\omega_0/2) +1} \tag{10}$$ | {
"domain": "dsp.stackexchange",
"id": 12381,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "phase, allpass",
"url": null
} |
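Equations (8) and (10) can be cross-checked numerically: plugging the $a$ from (10) back into $(1+a)/(1-a)$ must reproduce $\tan(\omega_0/2)$. The $\omega_0$ below is an arbitrary illustrative value:

```python
import math

w0 = 0.73                                  # arbitrary frequency in (0, pi)
t = math.tan(w0 / 2)
a = (t - 1) / (t + 1)                      # equation (10)
lhs = (1 + a) / (1 - a)                    # middle expression of (8)
# (8) says lhs equals tan(w0/2); the algebra is exact, so any
# discrepancy is pure floating-point rounding
```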
c#
Example usage:
public class Receiver
{
public Receiver(Messenger messenger)
{
messenger.Register<string>(x =>
{
Console.WriteLine(x);
});
messenger.Register<string, string>(x =>
{
if (x == "hello")
{
return "world";
}
return "who are you?";
});
messenger.Register<string, string>(x =>
{
if (x == "world")
{
return "hello";
}
return "what are you?";
});
}
}
public class Sender
{
public Sender(Messenger messenger)
{
messenger.Send<string>("Hello world!");
Console.WriteLine("");
foreach (string result in messenger.Request<string, string>("hello"))
{
Console.WriteLine(result);
}
Console.WriteLine(""); | {
"domain": "codereview.stackexchange",
"id": 10899,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#",
"url": null
} |
fluid-dynamics, friction, everyday-life, water, drag
However, your derivation isn't the whole story. The second condition is that the two surfaces must stay apart. If you use a lubricant with too low a viscosity, the surfaces will come in contact and the original friction will reappear.
So the optimum lubricant is the least viscous liquid that is viscous enough to keep the surfaces apart. Which of them is the optimal one depends on the detailed surfaces and other conditions. For example, there exist situations in which water is a better lubricant than oil – for example when ice slides on ice. Some of the ice melts and the water is why the ice slides so nicely. | {
"domain": "physics.stackexchange",
"id": 31388,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, friction, everyday-life, water, drag",
"url": null
} |
ros, bagfile
But then I got the following error:
ERROR: cannot launch node of type [rosbag/rosbag]: can't locate node [rosbag] in package [rosbag]
I'm using ROS Fuerte and Ubuntu 10.04. Any help?
This is my bag file info
version: 2.0
duration: 1:32s (92s)
start: Apr 01 2011 14:17:41.82 (1301627861.82)
end: Apr 01 2011 14:19:14.15 (1301627954.15)
size: 260.5 MB
messages: 6530
compression: none [284/284 chunks]
types: sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
sensor_msgs/Imu [6a62c6daae103f4ff57a132d6f95cec2]
sensor_msgs/LaserScan [90c7ef2dc6895d81024acba2ac42f369]
topics: /camera/image_raw 1135 msgs : sensor_msgs/Image
/imu/data 1802 msgs : sensor_msgs/Imu
/scan 3593 msgs : sensor_msgs/LaserScan
Thanks
Originally posted by Astronaut on ROS Answers with karma: 330 on 2012-09-28
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 11173,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, bagfile",
"url": null
} |
ros, ros-melodic, dwa-local-planner
I am not sure whether this is a bug in my code or in the API itself, and I have been trying to solve this issue for hours already, so I would like to know whether the steps I took are correct, or if you have any suggestions, please :).
How I Initialize
I start by creating the tf2 Buffer as well as the TransformListener in the main and then passing these to the constructor of the Class in order to populate the costmap (pointer) and the dynamic planner.
Here is the main:
int main(int argc, char **argv)
{
// Create ROS node
ros::init(argc, argv, "planner");
ROS_INFO("Created Planner node...");
// TF2 objects
tf2_ros::Buffer l_buffer(ros::Duration(10));
tf2_ros::TransformListener l_tf(l_buffer);
// Create Planner instance
Planner planner(l_buffer, l_tf);
// Spin ROS
ros::spin();
return 0;
} | {
"domain": "robotics.stackexchange",
"id": 34634,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-melodic, dwa-local-planner",
"url": null
} |
c++, recursion, reinventing-the-wheel, template, c++20
Image<ElementT>& operator-=(const Image<ElementT>& rhs)
{
assert(rhs.width == this->width);
assert(rhs.height == this->height);
std::transform(image_data.cbegin(), image_data.cend(), rhs.image_data.cbegin(),
image_data.begin(), std::minus<>{});
return *this;
}
Image<ElementT>& operator*=(const Image<ElementT>& rhs)
{
assert(rhs.width == this->width);
assert(rhs.height == this->height);
std::transform(image_data.cbegin(), image_data.cend(), rhs.image_data.cbegin(),
image_data.begin(), std::multiplies<>{});
return *this;
} | {
"domain": "codereview.stackexchange",
"id": 42521,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, recursion, reinventing-the-wheel, template, c++20",
"url": null
} |
c++, io
std::transform(std::istreambuf_iterator<char>(data_in), {},
std::ostreambuf_iterator<char>(data_out),
[&](char c) { char ret = *m ^ c; ++m; return ret; }
);
One minor addition: you want to specify std::ios::binary when you open the file of the encoded data, because the data you produce may no longer be a proper text file. Since you can use the same code to either encrypt or decrypt, that means you want to specify it for both the input file and the output file:
std::ifstream data_in(argv[1], std::ios::binary);
std::ofstream data_out(argv[3], std::ios::binary);
then the processing looks something like this:
auto m = make_mod_iterator(key);
std::transform(std::istreambuf_iterator<char>(data_in), {},
std::ostreambuf_iterator<char>(data_out),
[&](char c) { char ret = *m ^ c; ++m; return ret; }
); | {
"domain": "codereview.stackexchange",
"id": 21780,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, io",
"url": null
} |
c++, c++11, library, pointers
// reset pointer
template <typename Px = std::nullptr_t>
void reset( Px px = nullptr ) {
I question this function template. You're saying, reset should accept any function argument at all; but also, if a template argument is provided, then it should default its argument to nullptr? So, like,
myvalueptr.reset(42); // OK, dies deep inside
myvalueptr.reset<int*>(); // OK, compiles
I would think that a much better way to write the desired overload set would be
template<class Px, class = std::enable_if_t<std::is_convertible_v<Px, pointer>>>
void reset(Px px); // template accepting just convertible types
void reset() { reset(nullptr); } // non-template
const_reference operator*() const { return *this->get(); }
reference operator*() { return *this->get(); } | {
"domain": "codereview.stackexchange",
"id": 32813,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, library, pointers",
"url": null
} |
navigation, base-local-planner
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: /laser, data_type: LaserScan, topic: /scan, marking: true, clearing: true, expected_update_rate: 10.0}
local_costmap_params.yaml
local_costmap:
global_frame: /map
robot_base_frame: base_link
update_frequency: 5.0
publish_frequency: 2.0
static_map: false
rolling_window: true
width: 1.0
height: 1.0
resolution: 0.05
global_costmap_params.yaml
global_costmap:
global_frame: /map
robot_base_frame: base_link
update_frequency: 5.0
publish_frequency: 2.5
static_map: true
width: 0.0
height: 0.0
move_base_params.yaml
controller_frequency: 10
controller_patience: 15.0
oscillation_timeout: 10.0
oscillation_distance: 0.5
TrajectoryPlannerROS:
max_vel_x: 0.45
min_vel_x: 0.1
max_rotational_vel: 1.0
min_in_place_rotational_vel: 0.4
acc_lim_th: 1.0
acc_lim_x: 0.5
acc_lim_y: 0.5
path_distance_bias: 50.0
goal_distance_bias: 0.8
holonomic_robot: false | {
"domain": "robotics.stackexchange",
"id": 15274,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, base-local-planner",
"url": null
} |
parsing, math-expression-eval, lua
By the way, since current() always returned a string, current() or ' ' is redundant; the ' ' branch will never be used.
Instead of using while loops for lexing numbers, you could use Lua's built in pattern matching. This will likely run faster, and is a lot less code.
local _, integer_stop = s:find("[0-9]+", i)
local number_stop = integer_stop
if s:sub(integer_stop + 1, integer_stop + 1) == "." then
local _, fraction_stop = s:find("[0-9]+", integer_stop + 2)
if not fraction_stop then
error({ message = 'Expected digit after dot', pos = integer_stop + 2 })
end
number_stop = fraction_stop
end
tokens[#tokens + 1] = {
pos = i,
lexeme = s:sub(i, number_stop),
}
i = number_stop - 1 | {
"domain": "codereview.stackexchange",
"id": 34496,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "parsing, math-expression-eval, lua",
"url": null
} |
# Trouble with filling out a Cayley table
I have trouble with filling this table:
$$\begin{array}{ccccccc} * & \textbf{p} & \textbf{q} & \textbf{r} & \textbf{s} & \textbf{t} & \textbf{u} \\ \textbf{p} & & & q & & & \\ \textbf{q} & r & & & & & \\ \textbf{r} & & & & & r & \\ \textbf{s} & & p & & & & \\ \textbf{t} & & & & & & \\ \textbf{u} & t & & & & & p \end{array}$$
It needs to be filled out so it forms a group. I only figured out that t is the neutral element (because $r ∗ t = r$?). How do I figure out the rest of the table?
Thanks.
-
$u*u = p$. What is $u*u*u$? – Mitch Jul 28 '12 at 18:09
$$(u ∗ u) ∗ u = u ∗ (u ∗ u)$$ $$p ∗ u = u ∗ p = t$$ So these 2 elements are inverse to each other? And this an Abelian group? – Radiant Jul 28 '12 at 18:17 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9752018376422666,
"lm_q1q2_score": 0.8040167919777658,
"lm_q2_score": 0.8244619328462579,
"openwebmath_perplexity": 186.21391592223483,
"openwebmath_score": 0.9214010238647461,
"tags": null,
"url": "http://math.stackexchange.com/questions/176234/trouble-with-filling-out-a-cayley-table"
} |
c++, reinventing-the-wheel, vectors
if (m_size > 0) {
// Why not swap the next two lines?
// This would make the line to call the destructor
// simpler and easier to read as you don't need the -1
data[m_size - 1].~T();
m_size -= 1;
}
Same issue as reserve()
void shrink_to_fit() {
if (m_capacity > m_size) {
pointer new_data = allocate(m_size);
if (m_size > 0) {
std::uninitialized_move(data, data + m_size, new_data);
}
deallocate(data);
data = new_data;
m_capacity = m_size;
}
}
Same solution:
void shrink_to_fit() {
if (m_capacity > m_size) {
Vector temp(m_size); // Vector with reserved capacity
std::uninitialized_move(data, data + m_size, temp.data);
temp.m_size = m_size;
swap(temp);
}
} | {
"domain": "codereview.stackexchange",
"id": 44875,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, reinventing-the-wheel, vectors",
"url": null
} |
harmonic-oscillator, oscillators, resonance, viscosity, dissipation
Title: Modeling a viscoelastic string with a collection of damped spring oscillators? (To replace finite difference model.) How to find $Q$ per harmonic? Background
I have simulated a vibrating viscoelastic string fixed at each end under tension using finite difference modeling. Most simply this can be done using Kelvin-Voigt style mass-spring dampers as the units such as these: | {
"domain": "physics.stackexchange",
"id": 99152,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "harmonic-oscillator, oscillators, resonance, viscosity, dissipation",
"url": null
} |
quantum-field-theory, gauge-theory, path-integral, gauge-invariance
$$
The last equation shows that
$$
DA^{\prime \mu} = UDA^\mu U^\dagger \neq DA^\mu,\ \mbox{since the group is non-abelian}
$$
My problem comes from the fact that my professor said in class that they had to be equal (as an assumption), but I have shown the contrary. What do you think? Since $\det(U) = \det(U^{\dagger})^{-1}$ and the determinant is multiplicative, you get
$$\det \left[U \frac{DA^{\mu}}{DA^{\nu}} U^{\dagger}\right] = \det\left[\frac{DA^{\mu}}{DA^{\nu}}\right]$$ | {
"domain": "physics.stackexchange",
"id": 61980,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, gauge-theory, path-integral, gauge-invariance",
"url": null
} |
c
strncpy(results[*size], &str[start], length); //copy the matching string to the array
results[*size][length] = 0; //end the string
(*size)++; //add 1 to the total size
} while (str[index++]);
results = realloc(results, (*size) * sizeof(char*)); //trim unused bytes
return results;
} | {
"domain": "codereview.stackexchange",
"id": 2422,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c",
"url": null
} |
algorithms, strings, rolling-hash
From some preliminary tests, my hypothesis seems to be empirically accurate, yet I have not seen that written anywhere, so I am left wondering..
Am I missing something? Yes, in practice you can get by fine with just letting the computations overflow. You are effectively working modulo $2^{32}$. It also has the advantage of not requiring an (expensive) modulo computation. However, it lacks some of the theoretical performance guarantees. You need to be very careful with the choice of the base (in this case: $10$) with respect to the modulus.
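To see the failure mode concretely, here is a small Python sketch (the helper name is hypothetical) of a base-10 rolling hash reduced modulo $2^{32}$, which exhibits guaranteed collisions between strings that differ only far from the end:

```python
# Rolling hash working modulo 2**32, i.e. emulating unsigned 32-bit overflow.
MOD = 2 ** 32
BASE = 10

def hash32(s: str) -> int:
    h = 0
    for c in s:
        h = (h * BASE + ord(c)) % MOD  # overflow instead of an explicit prime modulus
    return h

# 10**32 = 2**32 * 5**32, so 10**32 mod 2**32 == 0: characters more than
# 32 positions from the end contribute nothing to the hash value.
assert pow(10, 32, MOD) == 0

a = "X" + "0" * 40
b = "Y" + "0" * 40
print(a != b, hash32(a) == hash32(b))  # True True: a guaranteed collision
```

Any two strings whose last 32 characters agree hash identically here, which is exactly the adversarial input mentioned below.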
In particular, your choice of $10$ is very poor. Note that $10^{32}=2^{32}\cdot 5^{32}$, so $10^{32} \textrm{ mod } 2^{32} = 0$. This means that only the last $32$ characters of the string are taken into account in the hash, so one can construct an input on which your algorithm performs very poorly. | {
"domain": "cs.stackexchange",
"id": 4927,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, strings, rolling-hash",
"url": null
} |
reinforcement-learning, q-learning, convergence, epsilon-greedy-policy, exploration-strategies
The reason that different state-action pairs have longer arrows, i.e. higher Q-values, is simply because the value of being in that state-action pair is higher. An example would be the arrow pointing down right above the trophy -- obviously this has the highest Q-value as the return is 1. For all other states it will be $\gamma^k$ for some $k$ -- to see this remember that a Q-value is defined as
$$Q(s, a) = \mathbb{E}_\pi \left[\sum_{j=0}^\infty \gamma^j R_{t+j+1} |S_t = s, A_t = a \right]\;;$$
so for any state-action pair that is not the block above the trophy with the down arrow $\sum_{j=0}^\infty \gamma^j R_{t+j+1}$ will be a sum of $0$'s plus $\gamma^T$ where $T$ is the time that you finally reach the trophy (assuming you give a reward of 1 for reaching the trophy). | {
"domain": "ai.stackexchange",
"id": 2653,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, q-learning, convergence, epsilon-greedy-policy, exploration-strategies",
"url": null
} |
quantum-field-theory, renormalization
Any help would be appreciated. I think P&S explain their motivation in the book, if I'm not wrong (I haven't opened it in a long time!). I think the choice is motivated from an experimental POV. Suppose you were performing an $2\to2$ scattering experiment in the CM frame at energy $2E$. Then,
$$s=4E^2, \qquad t=-4(E^2-m^2) \sin^2\frac{\theta}{2} , \qquad u=-4(E^2-m^2) \cos^2\frac{\theta}{2}.
$$
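As a quick consistency check on these expressions (using $\sin^2\tfrac{\theta}{2}+\cos^2\tfrac{\theta}{2}=1$), they satisfy the standard Mandelstam constraint for four particles of equal mass $m$:

```latex
s + t + u = 4E^2 - 4(E^2 - m^2)\left(\sin^2\tfrac{\theta}{2} + \cos^2\tfrac{\theta}{2}\right) = 4m^2
```

which is the familiar relation $s+t+u=\sum_i m_i^2$ specialized to equal masses.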
To experimentally test out our theory, we first need to do a couple of "setup" experiments. Note that $\lambda$ and $m$ are parameters in our theory which must be fixed by experiment (i.e. the theory does not by itself predict these values). To fix these, we do one experiment at some energy scale $E_0$ which will fix $\lambda(E_0)$ and $m(E_0)$. Once this is known, the theory predicts the results of all other experiments at all energy scales. Every other experiment then can be used to test out the theory. | {
"domain": "physics.stackexchange",
"id": 85890,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, renormalization",
"url": null
} |
homework-and-exercises, newtonian-mechanics, forces
Title: Tension problem A weight "m" is suspended from the center of a light, taut and originally horizontal rope. After suspending the weight "m", what angle must the rope make with the horizontal if the tension in the rope is to equal the weight "m"?
Here is what I have so far.
I'm not sure what to do from here on out. I'm assuming T1 and T2 will be equal and I'm also assuming that T1 and T2 have about half the mass of M. So is the idea to solve for M and then substitute? When I did that I ended up with something like 2sin(theta)=sin(theta).
I'm assuming T1 and T2 will be equal
What made you assume that? They may or may not.
and I'm also assuming that T1 and T2 have about half the mass of M.
"About" only when the strings are almost equal, not always.
So is the idea to solve for M and then substitute? When I did that I ended up with something like 2sin(theta)=sin(theta).
That is, $2\sin\theta=\sin\theta\implies\theta=0$? That doesn't make sense. | {
"domain": "physics.stackexchange",
"id": 20578,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, forces",
"url": null
} |
quantum-chemistry, polarity, dipole
If we think about these two effects in terms of electrical repulsion, the Fermi hole is a lower energy configuration than the Fermi heap. Because we almost always have both spin-up and spin-down electrons in multi-electron systems, both of these phenomena will take place, and the overall electron density will be some kind of average of all of these. (This is only really a conceptual framework.) If we take a simple case of a homonuclear diatomic, then these heaps and holes will all average out and we will get a symmetric electron density. If, on the other hand, we take a heteronuclear diatomic, then we see that there will be a tendency for these Fermi heaps to form over the atom with the larger nuclear charge just based on Coulombic attraction. | {
"domain": "chemistry.stackexchange",
"id": 9122,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-chemistry, polarity, dipole",
"url": null
} |
java, beginner, object-oriented, interface
}
}
interface Human{
public void sleep();
public void eat();
public void wakeUp();
public void walk();
public void hit(Human name);
public void setHeight(int height);
public int getHeight();
public String getName();
}
class Nate implements Human{
int height = 0;
final String name = "Nate";
public String getName() {
return this.name;
} | {
"domain": "codereview.stackexchange",
"id": 31520,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, object-oriented, interface",
"url": null
} |
quantum-mechanics, particle-physics, photons, virtual-particles, particle-detectors
Title: Can we detect the photons in the interaction of two charged bodies? If the interaction of two charged bodies is through photon exchange:
1) What is the energy of these photons, and how do we calculate it?
2) Can these photons be detected by a photon detector, or simply as electromagnetic fields with a frequency in the visible range? 1) There exists the classical electromagnetic wave as described by Maxwell's equations.
2) The photoelectric effect showed that these electromagnetic waves are composed of photons, with energy E=h*nu , where nu is the frequency of the classical wave. Single photon experiments have been performed by limiting the intensity of the beam to one photon at the time.
3) Charged particles have an electric field associated with them, both classically and quantum mechanically, but not a classical electromagnetic wave.
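For scale, here is a rough Python sketch of $E = h\nu$ from point 2) above; the frequency used is an illustrative value for green light, not one taken from the question:

```python
h = 6.62607015e-34      # Planck constant in J*s (exact by SI definition)
nu = 5.45e14            # ~green light frequency in Hz (illustrative value)
eV = 1.602176634e-19    # joules per electron-volt

E = h * nu              # photon energy in joules
print(E, E / eV)        # roughly 3.6e-19 J, i.e. a couple of eV
```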
If the interaction of two charged bodies is through the virtual photon exchange: | {
"domain": "physics.stackexchange",
"id": 18396,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, particle-physics, photons, virtual-particles, particle-detectors",
"url": null
} |
Further Hints for Exercise 2.C
Prove that if $X$ and $Y$ are $\sigma$-compact, then the product $X \times Y$ is $\sigma$-compact, hence Lindelöf.
Prove that $S$, the Sorgenfrey line, is Lindelöf while its square $S \times S$ is not Lindelöf.
Further Hints for Exercise 2.D
As suggested in the hints given earlier, prove that $X \times Y$ is Lindelöf if $X$ is Lindelöf and $Y$ is compact. As suggested, the Tube lemma is a useful tool.
Further Hints for Exercise 2.E
The product space $(0,1)^\omega$ is a subspace of the product space $[0,1]^\omega$. Since $[0,1]^\omega$ is compact, we can fall back on a Baire category theorem argument to show why $(0,1)^\omega$ cannot be $\sigma$-compact. To this end, we consider the notion of Baire space. A space $X$ is said to be a Baire space if for each countable family $\{ U_1,U_2,U_3,\cdots \}$ of open and dense subsets of $X$, the intersection $\bigcap_{i=1}^\infty U_i$ is a dense subset of $X$. Prove the following results. | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.982013790564298,
"lm_q1q2_score": 0.8204563532136758,
"lm_q2_score": 0.8354835350552604,
"openwebmath_perplexity": 579.0447059325933,
"openwebmath_score": 0.9998471736907959,
"tags": null,
"url": "https://dantopology.wordpress.com/tag/baire-space/"
} |
The left hand diagram shows the three forces acting on the sign: the attractive force due to the Earth (weight), the force exerted by the rope (tension) and the force on the sign due to the bolt.
Because this is a static equilibrium situation the lines of action of the three forces must all meet at a point which is $X$ in the diagram.
The right hand diagram is the vector addition of the three forces which must be zero as the sign is in static equilibrium.
This is sometimes called the "triangle of forces".
The symmetry of the situation shows that the angle that the line of action of the force on the sign due to the bolt relative to the horizontal is $20^\circ$.
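Evaluating the triangle of forces numerically (a sketch assuming $g \approx 9.81\ \mathrm{m/s^2}$ and a 16 kg sign; the interior angles $40^\circ$, $70^\circ$, $70^\circ$ follow from the geometry just described):

```python
import math

g = 9.81          # m/s^2 (assumed)
W = 16 * g        # weight of the sign, N

# Interior angles of the force triangle: 40 deg opposite the weight,
# 70 deg opposite each of the other two (equal) forces; 40 + 70 + 70 = 180.
tension = W * math.sin(math.radians(70)) / math.sin(math.radians(40))
bolt_force = tension  # equal by the symmetry of the triangle
print(round(tension, 1))  # ~229.5 N
```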
I hope that this also shows that this sort of problem can be solved in a few lines using the sine rule.
$\dfrac{16\;g}{\sin 40}= \dfrac{\text{tension}}{\sin 70}= \dfrac{\text{bolt force }}{\sin 70}$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812327313546,
"lm_q1q2_score": 0.8413555380253751,
"lm_q2_score": 0.868826769445233,
"openwebmath_perplexity": 227.5960125496929,
"openwebmath_score": 0.7003765106201172,
"tags": null,
"url": "https://physics.stackexchange.com/questions/295239/rigid-body-equilibrium-word-problem"
} |
in solid heat conduction is through lattice waves induced by atomic motions. The mathematical formulation of chemically reacting, inviscid, unsteady flows with species conservation equations and finite-rate chemistry is described. Equation 2. Mansur WK, Vasconcellos CAB, Zambrozuski NJM, Rottuno Filho OC (2009) Numerical solution for the linear transient heat conduction equation using an explicit Green’s approach. Rectangular Coordinates. 1-4: Heat equation on infinite 1D domain, Fourier transform pairs, Transforming the heat equation, Heat kernel Week 15: Slack time and review Week 16: Finals week: comprehensive final exam. [70] Since v satisfies the diffusion equation, the v terms in the last expression cancel leaving the following relationship between and w. Laplace's equation in spherical coordinates can then be written out fully like this. a new coordinate with respect to an old coordinate. •Simplify composite problems using the ther-mal resistance analogy. How to Solve Laplace's | {
"domain": "ilovebet.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9835969636371975,
"lm_q1q2_score": 0.8041637701453398,
"lm_q2_score": 0.8175744739711883,
"openwebmath_perplexity": 746.7859159697898,
"openwebmath_score": 0.813318133354187,
"tags": null,
"url": "http://koyd.ilovebet.it/1d-heat-equation-in-spherical-coordinates.html"
} |
python, performance, image, object-detection
do not seem accurate, because you include the background (0). You should probably exclude this.
Rather than implicit axis references with plt., prefer explicit axis object references, such as ax.imshow().
Do not hold masks_x as lists; instead hold them as one np.ndarray.
Combine zero and zero_2 into one tuple of two lists, and similar for one and more.
This loop:
for i in range(leng - 1):
though it doesn't say so due to a poor variable name, is actually looping through colours ("items" in your parlance). But why are you using a range? Why not actually iterate through the colour values themselves? This will relieve your code from needing to assume that your colours are contiguous integers.
Don't write not sum(sum(x * y)) == 0. This is really just an np.any() on an np.logical_and over the proper dimensions.
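The equivalence is easy to check; here is a dependency-free sketch of the same overlap test on two made-up toy masks (NumPy's np.any of np.logical_and behaves the same on boolean arrays):

```python
x = [[0, 1, 0],
     [0, 0, 1]]
y = [[0, 1, 0],
     [1, 0, 0]]

# Original style: nested sums of elementwise products, then compare with zero.
overlap_sum = sum(sum(a * b for a, b in zip(rx, ry)) for rx, ry in zip(x, y))

# Clearer intent: "do the masks share at least one set pixel?"
overlap_any = any(a and b for rx, ry in zip(x, y) for a, b in zip(rx, ry))

print(overlap_sum != 0, overlap_any)  # True True
```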
Change this loop:
for m in one.keys(): | {
"domain": "codereview.stackexchange",
"id": 43494,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, image, object-detection",
"url": null
} |
Rather than having to iterate over all of these redundant prime multiples we can skip ahead with some appropriate arithmetic (see below).
Here’s some C# code to sieve a block …
private static void SieveBlock(bool[] isPrime,
uint startIndex,
ulong startNumber,
ulong endNumber)
{
foreach(var p in primes) {
if(p == 2) continue; // 2 is a special case not covered by our working array
// Start eliminating elements of the working array from p^2
var n = p * p;
if(n > endNumber) break;
// In the for loop below, n = p*p + 2*k*p for +ve integers k = 0, 1, 2, ...
// We don't need to do anything until n is >= startNum so fast forward until that is true
if(n < startNumber) {
// Find the smallest k such that n + 2*k*p >= startNum
// k >= (startNum - n) / (2 * p)
// k >= a / b for a = startNum - n and b = 2 * p
// k = a / b if a % b == 0 otherwise k = (a / b) + 1
var a = startNumber - n;
var b = 2 * p;
var k = a % b == 0 ? a / b : (a / b) + 1;
n = n + (2 * k * p);
} | {
"domain": "alandavies.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9886682481380669,
"lm_q1q2_score": 0.8128722894650741,
"lm_q2_score": 0.8221891327004132,
"openwebmath_perplexity": 759.7021361001217,
"openwebmath_score": 0.5288369059562683,
"tags": null,
"url": "https://alandavies.org/blog/2018/02/18/on-primes"
} |
electromagnetism, condensed-matter, superconductivity
If (i) is true, one may conclude that magnetic response for all crystals (neglect interactions) happens in a thin layer near to the surface (a few nanometer) since magnetic field can only penetrate that far. The length scale that electron density drops to zero is a few angstroms, so we are able to neglect the first term in an appropriate region. This is not true--- you made a mistake with the substitution and variation. As I am sure you know, experimentally, it is false, you can get strong bulk magnetic field penetration into any non-superconducting material. | {
"domain": "physics.stackexchange",
"id": 4617,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, condensed-matter, superconductivity",
"url": null
} |
python, object-oriented, sql, playing-cards, postgresql
@lock
class Database:
sql_id: int = attr.ib(default=None)
email: str = attr.ib(default=None)
password: str = attr.ib(default=None)
hashed_pw: str = attr.ib(default=None)
budget: int = attr.ib(default=None)
conn: t.Any = attr.ib(
default=psycopg2.connect(
dbname="blackjack", user="postgres", password="12344321", host="localhost"
)
)
cur: t.Any = attr.ib(default=None)
def check_account(self):
self.cur.execute("SELECT id FROM users WHERE email=%s", (self.email,))
return bool(self.cur.fetchone())
def login(self):
self.cur.execute("SELECT password FROM users WHERE email=%s", (self.email,))
credentials = self.cur.fetchone()
correct_hash = credentials[0].encode("utf8")
if bcrypt.checkpw(self.password, correct_hash):
print("You have successfully logged-in!")
else:
raise Exception("You have failed logging-in!") | {
"domain": "codereview.stackexchange",
"id": 35372,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, object-oriented, sql, playing-cards, postgresql",
"url": null
} |
hash, one-way-functions
If your question is about the cryptographic nature of hashing, then the reason is the same as why any good cipher is "unbreakable". It is not that it is impossible to decrypt it (that surely cannot be the case, because then the receiver of the ciphertext could not do it), but it is just very, very time consuming without knowing the key. You can find out more on that at Cryptography Stack Exchange. | {
"domain": "cs.stackexchange",
"id": 12240,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "hash, one-way-functions",
"url": null
} |
sql, vba, sql-server, excel
End Sub
It then generates SQL like this
BEGIN TRANSACTION
CREATE TABLE #Resources (
lLocaleID int NOT NULL,
txtResourceKey varchar(255) NOT NULL,
memText nvarchar(max) NOT NULL,
txtLastModifiedUsername varchar(255) NULL
);
DECLARE @US_LOCALE int = 0
, @UK_LOCALE int = 1
, @DE_LOCALE int = 2
, @JP_LOCALE int = 3
, @IT_LOCALE int = 4
, @FR_LOCALE int = 5
, @ES_LOCALE int = 6
; | {
"domain": "codereview.stackexchange",
"id": 21120,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "sql, vba, sql-server, excel",
"url": null
} |
homework-and-exercises, newtonian-mechanics, mass, rotational-dynamics
Title: How to calculate the energy required to rotate a planet? How do you calculate the energy required to spin a planet up from a non-rotating state? Say the planet is Venus, with a uniformly distributed mass of $4.8676 \times 10^{24}$ kg, and a desired rate of 1 rotation per 24 hours. The rotational energy of a body is given by:
$$ E = \tfrac{1}{2}I\omega^2 $$
where $I$ is the moment of inertia and $\omega$ is the angular velocity. For a uniform sphere the moment of inertia is related to the mass of the sphere, $m$, and the radius of the sphere, $r$, by:
$$ I = \frac{2}{5}mr^2 $$
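Putting the numbers in (a Python sketch; the mean radius of Venus, roughly $6.052 \times 10^{6}$ m, is the one extra figure you would look up):

```python
import math

m = 4.8676e24            # mass of Venus, kg (given in the question)
r = 6.052e6              # mean radius of Venus, m (looked up)
period = 24 * 3600       # desired rotation period, s

I = 0.4 * m * r**2                   # uniform sphere: I = (2/5) m r^2
omega = 2 * math.pi / period         # angular velocity, rad/s
E = 0.5 * I * omega**2               # rotational kinetic energy, J
print(f"{E:.2e}")                    # on the order of 1.9e29 J
```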
You already have the mass, and you can Google for the radius of Venus. That will give you everything you need to answer the question. | {
"domain": "physics.stackexchange",
"id": 20105,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, mass, rotational-dynamics",
"url": null
} |
evolved in the past have the best browsing on. So they are parallel: 3 9 … distance between the two parallel plane on the plane! Their normal vectors are also parallel if a given point lies inside or outside a polygon planes of page. Defined as the distance between two points in the coordinate plane ; distance planes. Other plane is the distance finding process, what you should not etc the! Specified direction perpendicular should give us the said shortest distance to report any issue with the same result //www.kristakingmath.com/vectors-course! The link here dimension the distance between planes = distance from a point to the lines... Dsa concepts with the above formulae distance between two parallel planes in 3d edit close, link brightness_4 code find distance between two i.e! + ( − 24 ) | √32 +12 + ( − 4 ) 2 = 22 √26 of... Collinear or coincides with their direction vectors that is how to check if two planes are.. 5X+4Y+ 3z= 1 are two equations for planes: 3 x + 2y − z = 4 and +! Is | {
"domain": "com.br",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9539660976007596,
"lm_q1q2_score": 0.8070004001563397,
"lm_q2_score": 0.8459424314825852,
"openwebmath_perplexity": 493.2718904415889,
"openwebmath_score": 0.478618860244751,
"tags": null,
"url": "http://www.laboratoriocitopar.com.br/steelton-products-axlrpk/distance-between-two-parallel-planes-in-3d-0b3737"
} |
ros, ros2, roslaunch, node
#include <rclcpp/rclcpp.hpp>
class MyNode : public rclcpp::Node
{
public:
explicit
MyNode(const std::string & node_name="my_node", const std::string & node_namespace="/")
: rclcpp::Node(node_name, node_namespace)
{}
};
int main(int argc, char const *argv[])
{
rclcpp::init(argc, argv);
rclcpp::executors::MultiThreadedExecutor executor;
auto node1 = std::make_shared<MyNode>("my_node1");
auto node2 = std::make_shared<MyNode>("my_node2");
executor.add_node(node1);
executor.add_node(node2);
executor.spin();
return 0;
}
The above program can be run in any of these ways:
remap a topic my_exec my_node1:chatter:=chatter1 my_node2:chatter:=chatter2
change node namespace my_exec my_node1:__ns:=/ns1 my_node2:__ns:=/ns2
change node name my_exec my_node1:__node:=foo my_node2:__node:=bar | {
"domain": "robotics.stackexchange",
"id": 32545,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros2, roslaunch, node",
"url": null
} |
signal-analysis, hilbert-transform
Title: Noise sensitivity of the (classical) Empirical Mode Decomposition routine I tried to apply a MATLAB Empirical Mode Decomposition routine to denoise a signal, basically retaining only the last IMFs, with a criterion based on the mode energy.
To validate the routine, I have built a synthetic signal with added Gaussian noise + a sinusoidal disturbance. I noticed that the EMD routine (at least mine) seems very sensitive to the noise. In fact, if I launch it twice, the generated noise is of course different and the IMFs are also quite different.
Do you have any suggestions on how to "stabilize" the routine? Indeed, at least in my experience, computing IMFs can be sensitive to borders, impulse signals and noise realizations. As you are interested in wavelets, note that in Empirical mode decomposition as a filter bank a link is made with DWT: | {
"domain": "dsp.stackexchange",
"id": 8977,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "signal-analysis, hilbert-transform",
"url": null
} |
forces, vectors, vector-fields
Title: Difference between projection and component of a vector in product? This is a very basic question about dot products or scalar products:-
If I want to move a block and I apply a force parallel to the displacement, the block will move and some work will be done. So the formula will be $W= F\cdot S$; here we don't use the weight mg of the block but the force we applied (the parallel force).
Now let's say that the force is not parallel and is at some angle from the horizontal
So in this case the work done will be the projection of the force $F_1$ on the $x$ axis times the displacement, because that is how dot products are defined (as projections).
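Both readings give the same number, as a small numerical sketch shows (the force and displacement values are made up for illustration):

```python
import math

F = (10.0, 5.0)   # applied force (N): horizontal and vertical components
S = (3.0, 0.0)    # horizontal displacement (m)

# Work as a dot product of the two vectors ...
W_dot = F[0] * S[0] + F[1] * S[1]

# ... equals |F| |S| cos(theta): only the horizontal component F1*cos(theta)
# does work; the vertical component contributes nothing here.
theta = math.atan2(F[1], F[0])
W_cos = math.hypot(*F) * math.hypot(*S) * math.cos(theta)

print(W_dot, round(W_cos, 10))  # both 30.0
```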
But can't we say that the block moved due to its horizontal component $F_1\cos(\theta)$ and the answer would be same. And obviously we won't count $F_1\sin(\theta)$ as the work is not done by it. | {
"domain": "physics.stackexchange",
"id": 65592,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, vectors, vector-fields",
"url": null
} |
ros-noetic, gazebo-11, gazebo-ignition
Visual plugin loaded to ground plane
I wrote a visual plugin and loaded it to the ground plane. Managed to use the DynamicLines class to render triangles (again, triangulated the whole square and used RENDERING_TRIANGLE_LIST as render operation type for the lines), following this answer. Changed visibility type to GZ_VISIBILITY_ALL instead of GZ_VISIBILITY_GUI to make it visible to cameras. But the behaviour in this case is quite strange. Gazebo renders the visual either in the GUI or in the camera but not both, and this varies randomly during different runs.
What is the best way to simulate this feature in Gazebo?
Setup: Gazebo 11.6 + ROS Noetic on Ubuntu 20.04 | {
"domain": "robotics.stackexchange",
"id": 4607,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-noetic, gazebo-11, gazebo-ignition",
"url": null
} |
c++, c++11
Title: Static Observer Pattern in C++11 This is my first shot at implementing a fully static version of the Observer pattern in C++11. It is static in that all notifications are dispatched via CRTP, and therefore there is no virtual overhead. It also has the benefit of catching unimplemented notifications at compile time, by checking available overloads based on the Subject type. Its primary restriction is that all relevant observers must be known at compile time.
The motivation is to use this on an embedded system to help decouple systems while additionally providing better type safety. It appears that the initial output of this is exactly how I want it (zero cost) and the restriction of needing to know all observers in advance is reasonable given that the systems are well defined, and typically not dynamic. The platforms I work with typically do not allow malloc/new due to their ability to fail.
// static_observer header file
#include <utility>
#include <tuple> | {
"domain": "codereview.stackexchange",
"id": 5632,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11",
"url": null
} |
robotic-arm, servomotor, force, force-sensor
If they're mounted on or near your end effector, you can measure the strain on the link/mount and use that as your force feedback. That gives you direct feedback on the forces applied to the end effector, and aligning the sensors with your primary axes can simplify the math for feeding the measurements back to your controller. Alternatively, you could place one at each joint to provide the necessary feedback. | {
"domain": "robotics.stackexchange",
"id": 2566,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "robotic-arm, servomotor, force, force-sensor",
"url": null
} |
logistic-regression, feature-engineering, feature-construction
Does it make sense to do this in the first place?
If yes, how do I know when is a good time to stop this cycle?
If not, why not? If you can keep adding new data (based on a main concept such as area, i.e. the ZIP code) and the performance of your model improves, then it is of course allowed... assuming you only care about the final result.
There are metrics that will try to guide you with this, such as the Akaike Information Criterion (AIC) or the comparable Bayesian Information Criterion (BIC). These essentially help you pick a model based on its performance, penalizing every additional parameter that is introduced and must be estimated. The AIC looks like this:
$${\displaystyle \mathrm {AIC} =2k-2\ln({\hat {L}})}$$ | {
"domain": "datascience.stackexchange",
"id": 5938,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "logistic-regression, feature-engineering, feature-construction",
"url": null
} |
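A quick illustration of the AIC-based stopping decision described above (my own sketch; the log-likelihood numbers are invented for the example):

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: AIC = 2k - 2*ln(L_hat)."""
    return 2 * k - 2 * log_likelihood

# Two hypothetical fits: the second adds 5 ZIP-code-derived features
# but buys only a small gain in log-likelihood.
aic_small = aic(k=3, log_likelihood=-120.0)   # 246.0
aic_large = aic(k=8, log_likelihood=-119.0)   # 254.0

# Lower AIC wins, so the penalty says to stop adding features here.
print(aic_small, aic_large)  # 246.0 254.0
```

BIC is the same idea with the `2k` penalty replaced by `k*ln(n)`, which punishes extra parameters harder as the sample size grows.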
tidal-forces, earth-like-planet, habitable-zone, antimatter
Title: Help in determining the features of an unusual, fictional star system (Hope this is the right Stack-exchange site for this question)
I'm working on a sci-fi RPG campaign, set on a very atypical location. Since this is a work of fiction, there's enough room for speculation and artistic license. But I'd still like to start from a more astronomically sound perspective.
Background
Consider the following system:
A single moon, with properties roughly comparable to Earth's so that it can support human life (1G gravity, nitrogen-oxygen-carbon atmosphere, soil and liquid water, etc.). This moon is tidally locked with its planet.
A single gas-giant planet, with a size roughly comparable to Saturn's, orbiting its star so that it completes a cycle about every 12 Earth years.
A Star, larger than Sol and emitting enough energy to keep the moon in a comfortably warm temperature. | {
"domain": "astronomy.stackexchange",
"id": 418,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "tidal-forces, earth-like-planet, habitable-zone, antimatter",
"url": null
} |
In all of our examples the cosets of a subgroup $H$ partition the larger group $G\text{.}$ The following theorem proclaims that this will always be the case.
Let $g_1 H$ and $g_2 H$ be two cosets of $H$ in $G\text{.}$ We must show that either $g_1 H \cap g_2 H = \emptyset$ or $g_1 H = g_2 H\text{.}$ Suppose that $g_1 H \cap g_2 H \neq \emptyset$ and $a \in g_1 H \cap g_2 H\text{.}$ Then by the definition of a left coset, $a = g_1 h_1 = g_2 h_2$ for some elements $h_1$ and $h_2$ in $H\text{.}$ Hence, $g_1 = g_2 h_2 h_1^{-1}$ or $g_1 \in g_2 H\text{.}$ By Lemma 7.3, $g_1 H = g_2 H\text{.}$
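As a computational sanity check (my addition, not part of the text), the partition can be verified for a small example such as $G = \mathbb{Z}_{12}$ under addition with $H = \{0, 4, 8\}$:

```python
n = 12
G = set(range(n))
H = {0, 4, 8}  # a subgroup of Z_12 under addition mod 12

# Left cosets g + H; in an abelian group these coincide with right cosets.
cosets = {frozenset((g + h) % n for h in H) for g in G}

# Any two cosets are equal or disjoint, and together they cover G.
for A in cosets:
    for B in cosets:
        assert A == B or A.isdisjoint(B)
assert set().union(*cosets) == G

print(sorted(sorted(c) for c in cosets))
# [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
```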
###### Remark 7.5
There is nothing special in this theorem about left cosets. Right cosets also partition $G\text{;}$ the proof of this fact is exactly the same as the proof for left cosets except that all group multiplications are done on the opposite side of $H\text{.}$ | {
"domain": "matthematics.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9925393552119457,
"lm_q1q2_score": 0.8334690148038711,
"lm_q2_score": 0.8397339716830605,
"openwebmath_perplexity": 321.67886411411246,
"openwebmath_score": 0.9532478451728821,
"tags": null,
"url": "http://matthematics.com/abstract/section-cosets.html"
} |
newtonian-mechanics, gravity, orbital-motion, velocity
When observing from the focus (marked with a pink X), the planet is at a distance $d$ and forms an angle $\varphi$ with the perigee direction (along the x axis).
Given the semi-major axis $a$ and semi-minor axis $b$, the focus is located at $c = \sqrt{a^2-b^2}$ from the center. In terms of the ellipse eccentricity $\epsilon$,
$$ \begin{array}{|c|c|} \hline b = a \sqrt{ 1 - \epsilon^2} & c = a \epsilon \\ \hline \end{array} $$
The distance is given by $$ d = \left( \frac{1 - \epsilon^2}{1 + \epsilon \cos \varphi} \right) a $$
and the velocity components when the speed is $v$ are
$$ \pmatrix{\dot{x} \\ \dot{y}} = v \pmatrix{- \frac{\sin \varphi}{ \sqrt{1+\epsilon^2 + 2 \epsilon \cos \varphi }} \\ \frac{\epsilon + \cos \varphi}{ \sqrt{1+\epsilon^2 + 2 \epsilon \cos \varphi }}} \tag{Ans} $$
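A quick numerical check of these formulas (my own sketch; function and variable names are invented): the bracketed column is a unit vector, so the two components should recombine to the speed $v$.

```python
import math

def orbit_state(a, eps, phi, v):
    """Distance from the focus and velocity components at true anomaly phi."""
    d = a * (1 - eps**2) / (1 + eps * math.cos(phi))
    s = math.sqrt(1 + eps**2 + 2 * eps * math.cos(phi))
    xdot = -v * math.sin(phi) / s
    ydot = v * (eps + math.cos(phi)) / s
    return d, xdot, ydot

d, xdot, ydot = orbit_state(a=1.0, eps=0.3, phi=1.2, v=2.5)
print(math.hypot(xdot, ydot))  # recombines to ~2.5: the direction factor has unit norm
```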
A few more notes on velocity. If the distance and angle changes are observed then
$$ v = \sqrt{ \dot{d}^2 + d^2 \dot{\varphi} ^2 } $$ | {
"domain": "physics.stackexchange",
"id": 82992,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, gravity, orbital-motion, velocity",
"url": null
} |
machine-learning, python, scikit-learn
Why doesn't it make sense to either put both methods in every class, or never let these two occur in the same class together? For example, if KMeans needed to have a method that returns the Euclidean points, why not let it have a separate method? Implementing transform() to achieve this functionality, in my opinion, takes away the clear distinction between the two methods. Similarly, in PLSRegression, I haven't been able to understand the difference between the two methods. Partial Least Squares Regression is a supervised learning technique that also does dimensionality reduction. The PLSRegression class needs both transform and predict interfaces to properly implement the behavior of Partial Least Squares Regression. The transform method is the interface for dimensionality reduction. The predict method is the interface for generating targets from a trained regression model. | {
"domain": "datascience.stackexchange",
"id": 8985,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, python, scikit-learn",
"url": null
} |
python, python-3.x
Instead of:
The causal part of the competency definition is to distinguish between
those many characteristics that may be studied and measured about a
per- son and those aspects that actually do provide a link to relevant
behaviour. Thus having acquired a particular academic qualification
may or may not be correlated with the capacity to perform a particular
job. The qualifica- tion is scarcely likely – talisman-like – to cause
the capacity for perform- ance. However, a tendency to grasp
complexity and to learn new facts and figures may well be part of the
causal chain, and the competency would be expressed in these terms,
not in terms of the possibly related qualification.
Please look at the difference between person and per- son, qualification and qualifica- tion.
To solve this problem, I have written a small command line application.
#!/usr/bin/env python3
import sys
import argparse | {
"domain": "codereview.stackexchange",
"id": 42218,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x",
"url": null
} |
javascript, jquery
//keeping the pdf config options here lets you determine how you want the pdf to look, case by case, every time you use this module. It's how jQuery plugins work anyway.
return function(options) {
$view.setBg();
//uses a closure to keep track of the model outside the callback context
$view.bindToModel($model);
$view.readyDownload(function() {
//setting up the pdf is the meat of your business logic
var pdf = new jsPDF(options.orientation, options.pt, options.size);
$view.update($model, $data);
//I'm not entirely sure what is in your records div
$view.addHtml(pdf);
//In your code I don't know where the doc variable comes from
if (doc)
doc.save(pdf + $config.filename);
else if (pdf)
pdf.save(pdf + $config.filename);
else
alert($err['not defined']);
});
};
})( | {
"domain": "codereview.stackexchange",
"id": 17244,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery",
"url": null
} |
energy, electricity, energy-conservation
I know that James Franck studied synthetic photosynthesis in his later years, and others have investigated the endeavor since. But actually synthesizing, tapping into, or otherwise seriously accruing energy via photosynthesis, synthetic or otherwise, and not counting methods involving the burning of organic materials (secondary or tertiary to the photosynthetic process), seems far from possible now. So what is this plant probably doing, if anything?
I've heard of similar systems, but nothing with any more coherent an explanation. Apparently solar panels can generate about 16 W/square foot, which is a lot more than this plant thing claims to, so outside of the aesthetic is there even any point to a soil-based generator like this (presuming it is real and works)? This question appears to be a pseudo-duplicate on the Skeptics exchange, as pointed out by @CraigGidney. The highlights of the comments here and answer there appear to be that: | {
"domain": "physics.stackexchange",
"id": 30319,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "energy, electricity, energy-conservation",
"url": null
} |
javascript, array, json
Title: Finding an object in a nested object/array structure I have an app with a bunch of questions defined in a yaml file. For various reasons I would like the data to be divided into sections (section_0, section_1, section_2 etc.). When I load this file into my js app I get an object that looks like this:
const questionData = {
"section_0": {
"items": [
{
"key": "0.0",
"title": "Question one",
"type": "integer"
},
{
"key": "0.1",
"title": "Question 0.1",
"type": "integer"
}
]
},
"section_1": {
"items": [
{
"key": "1.0",
"title": "Some title",
"type": "integer",
"options": {
"1": "one",
"2": "two",
"3": "other option"
}
},
{
"key": "1.1",
"title": "Question 1.1",
"description": "Longer description text",
"type": "string"
},
{
"key": "1.2", | {
"domain": "codereview.stackexchange",
"id": 22394,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, array, json",
"url": null
} |
fft, fsk
CFO correction is generally performed as a (cyclic) digital spectrum shift by multiplication with a time-varying complex exponential in the time domain:
$$x_c[n] = x[n]\space e^{j\pi \frac{-f_{offset}}{F_s/2}n}$$
where $f_{offset}$ is the estimated CFO, and $F_s$ is the sample rate.
Although for FSK, if one has already computed the baseband waveform from the FSK-modulated symbols (i.e. the output of an FM discriminator), a simple level shift of the baseband waveform, to center its peaks and troughs about 0, is equivalent to CFO correction.
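As a concrete sketch of the complex-exponential multiplication above (my own example; the sample rate and offset values are assumed):

```python
import numpy as np

fs = 48_000.0        # sample rate (assumed)
f_offset = 1_000.0   # estimated CFO in Hz (assumed)
n = np.arange(4096)

# A test tone sitting exactly f_offset above DC:
x = np.exp(2j * np.pi * (f_offset / fs) * n)

# CFO correction: x_c[n] = x[n] * exp(j*pi * (-f_offset)/(fs/2) * n)
x_c = x * np.exp(1j * np.pi * (-f_offset) / (fs / 2) * n)

# After the shift the tone sits at DC, i.e. it is (numerically) constant:
print(np.allclose(x_c, 1.0))  # True
```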
CFO estimation algorithms can be divided up into two broad classes: Open-loop feed-forward algorithms and closed-loop feedback algorithms. Closed-loop algorithms work a sample at a time, are usually used for fine frequency offset estimation, and usually involve some arrangement of PLLs. Open-loop algorithms work on a block of samples at a time and are used for coarse frequency offset estimation. | {
"domain": "dsp.stackexchange",
"id": 7890,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, fsk",
"url": null
} |
complexity-theory, nondeterminism
From this, $NP=PH$ should hold. Here is a simpler "proof" using the same ideas. I will show that NP=coNP. Indeed, suppose that $L \in \mathsf{coNP}$, and consider the machine which, on input $x$, invokes the NP oracle for $\overline{L}$ and outputs the opposite value.
What goes wrong in this argument? Let us use the "witness" definition of NP: a language $L$ is in NP if there exist a polynomial $p$ and a polytime predicate $P$ such that $$ x \in L \Longleftrightarrow \exists |y| \leq p(|x|) \, P(x,y). $$
Now suppose that $L$ is in coNP, that is,
$$ x \in L \Longleftrightarrow \forall |y| \leq p(|x|) \, \lnot P(x,y). $$
(We get this by applying the previous definition to $\overline{L}$.)
Our nondeterministic machine for deciding $L$ takes input $x$, guesses $y$, and accepts iff $\lnot P(x,y)$. However, this accepts a different language $L'$, given by
$$ x \in L' \Longleftrightarrow \exists |y| \leq p(|x|) \, \lnot P(x,y). $$
The difference is the quantification — existential instead of universal. | {
"domain": "cs.stackexchange",
"id": 10353,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, nondeterminism",
"url": null
} |
jupiter, planetary-atmosphere, astrochemistry
CO2 has a longer half-life in our atmosphere; mostly it isn't destroyed, it needs to be rained out. It's also constantly being cycled: destroyed in photosynthesis and re-created by respiration.
The point of all this is that any trace element in a planet's atmosphere must have a source of creation and a half-life of destruction, if you will.
A curious exception on Earth is Argon. Argon is a byproduct of radioactive decay and as a noble gas, it doesn't react with anything, so it gets created and it just stays in our atmosphere. It gets created slowly but because it's never destroyed, it's currently about 1% of our atmosphere.
GeH4 follows the same basic guideline. It gets created inside Jupiter, and it's stable enough in Jupiter's atmosphere that its equilibrium between creation and destruction leads to about 1 part per billion. I don't know nearly enough to say what its atmospheric half-life is. | {
"domain": "astronomy.stackexchange",
"id": 2320,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "jupiter, planetary-atmosphere, astrochemistry",
"url": null
} |
slam, navigation, kinect, rgbd6dslam, rgbd
Compiling GICP...
make[1]: Entering directory '/home/st-admin/ros/rgbdslam_freiburg/rgbdslam/external/gicp/ann_1.1.2'
cd src ; make linux-g++
make[2]: Entering directory '/home/st-admin/ros/rgbdslam_freiburg/rgbdslam/external/gicp/ann_1.1.2/src'
make targets "ANNLIB = libANN.a" "C++ = g++" "CFLAGS = -O3" "MAKELIB = ar ruv" "RANLIB = true"
make[3]: Entering directory '/home/st-admin/ros/rgbdslam_freiburg/rgbdslam/external/gicp/ann_1.1.2/src'
make[3]: Nothing to be done for 'targets'.
make[3]: Leaving directory '/home/st-admin/ros/rgbdslam_freiburg/rgbdslam/external/gicp/ann_1.1.2/src'
make[2]: Leaving directory '/home/st-admin/ros/rgbdslam_freiburg/rgbdslam/external/gicp/ann_1.1.2/src'
cd test ; make linux-g++
make[2]: Entering directory | {
"domain": "robotics.stackexchange",
"id": 12407,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "slam, navigation, kinect, rgbd6dslam, rgbd",
"url": null
} |
EXAMPLE 11.4.5
1. Obtain Newton's forward interpolating polynomial for the following tabular data and interpolate the value of the function at
x: 0      0.001  0.002   0.003  0.004  0.005
y: 1.121  1.123  1.1255  1.127  1.128  1.1285
Solution: For this data, we have the forward difference table
  x      y       Δy       Δ²y      Δ³y      Δ⁴y      Δ⁵y
0.000  1.121   0.002    0.0005  -0.0015   0.002   -0.0025
0.001  1.123   0.0025  -0.0010   0.0005  -0.0005
0.002  1.1255  0.0015  -0.0005   0.0
0.003  1.127   0.001   -0.0005
0.004  1.128   0.0005
0.005  1.1285
Thus, for where and we get
Thus,
2. Using the following table for approximate its value at Also, find an error estimate (Note ).
x: 0.70     0.72     0.74     0.76     0.78
y: 0.84229  0.87707  0.91309  0.95045  0.98926
Solution: As the point lies towards the initial tabular values, we shall use Newton's Forward formula. The forward difference table is:
  x      y        Δy       Δ²y      Δ³y      Δ⁴y
0.70   0.84229  0.03478  0.00124  0.00010  0.00001
0.72   0.87707  0.03602  0.00134  0.00011
0.74   0.91309  0.03736  0.00145
0.76   0.95045  0.03881
0.78   0.98926 | {
"domain": "ac.in",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9912886168290836,
"lm_q1q2_score": 0.8150267194885549,
"lm_q2_score": 0.8221891239865619,
"openwebmath_perplexity": 647.1405795091325,
"openwebmath_score": 0.9405609369277954,
"tags": null,
"url": "http://nptel.ac.in/courses/Webcourse-contents/IIT-KANPUR/mathematics-2/node109.html"
} |
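The difference tables used in the worked examples above can be built mechanically: each column is the pairwise difference of the previous one. A small sketch of my own (the rounding only suppresses floating-point noise):

```python
def forward_differences(y):
    """Columns of the forward difference table: y, then Δy, Δ²y, ..."""
    cols = [list(y)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([round(prev[i + 1] - prev[i], 10) for i in range(len(prev) - 1)])
    return cols

y = [0.84229, 0.87707, 0.91309, 0.95045, 0.98926]
cols = forward_differences(y)
for col in cols:
    print(col)
# [0.84229, 0.87707, 0.91309, 0.95045, 0.98926]
# [0.03478, 0.03602, 0.03736, 0.03881]
# [0.00124, 0.00134, 0.00145]
# [0.0001, 0.00011]
# [1e-05]
```

The printed columns match the difference table of the second example.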
c#, algorithm
Process temp;
for (int k = 0; k < processes.Count; k++)
{
for (int i = k + 1; i < processes.Count; i++)
{
if (processes[k].arrivalTime > processes[i].arrivalTime || (processes[k].arrivalTime == processes[i].arrivalTime && processes[k].brust > processes[i].brust))
{
temp = processes[i];
processes[i] = processes[k];
processes[k] = temp;
}
}
}
Console.WriteLine("Processes After Sorting");
Console.WriteLine("_________________________________________________________________");
Console.WriteLine("Name\tArrival\tBrust\tPriority");
for (int i = 0; i < processes.Count; i++)
{ | {
"domain": "codereview.stackexchange",
"id": 10829,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, algorithm",
"url": null
} |
electromagnetism, radiation
There is no indication Earth's magnetic field will collapse. Field strength is decreasing at the moment, but that variation is within the normal range. | {
"domain": "engineering.stackexchange",
"id": 3069,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, radiation",
"url": null
} |
# Centripetal force for non-uniform circular motion
For uniform circular motion, centripetal force is given by $$\dfrac{mv^2}{r}.$$ But what will be the centripetal force if the circular motion is non-uniform in the sense that linear velocity is changing its magnitude? Will the above relation still be valid for this case?
If the tangential velocity is changing in magnitude, that implies a tangential acceleration, and thus a tangential force in addition to the centripetal force.
If the motion of the object is in a circle of constant radius, then the instantaneous centripetal force is given by the expression you wrote. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.963779946215714,
"lm_q1q2_score": 0.8229839835296482,
"lm_q2_score": 0.8539127492339909,
"openwebmath_perplexity": 318.4688279774775,
"openwebmath_score": 0.7960334420204163,
"tags": null,
"url": "https://physics.stackexchange.com/questions/148125/centripetal-force-for-non-uniform-circular-motion"
} |
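A numerical illustration of the answer above (my own sketch): for non-uniform circular motion $\theta(t)$ on a circle of constant radius $r$, the inward radial component of the acceleration still equals $v^2/r$, while a nonzero tangential component appears.

```python
import math

r = 2.0

def theta(t):
    return 0.5 * t**2   # non-uniform: angular speed dtheta/dt = t grows with time

def pos(t):
    return (r * math.cos(theta(t)), r * math.sin(theta(t)))

def second_derivative(f, t, h=1e-4):
    """Central finite difference, componentwise."""
    fm, f0, fp = f(t - h), f(t), f(t + h)
    return tuple((m - 2 * z + p) / h**2 for m, z, p in zip(fm, f0, fp))

t = 1.3
ax, ay = second_derivative(pos, t)
ux, uy = pos(t)[0] / r, pos(t)[1] / r   # outward radial unit vector
a_radial = -(ax * ux + ay * uy)         # inward (centripetal) component

v = r * t                               # speed = r * dtheta/dt
print(a_radial, v**2 / r)               # both ~3.38: v^2/r still holds instantaneously
```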
transform
Frames:
Frame: camera_front/camera_front_depth_frame published by /camera_front_base_link Average Delay: -0.00940285 Max Delay: 0
Frame: camera_front/camera_front_link published by /second_kinect_broadcaster Average Delay: -0.0993219 Max Delay: 0
Frame: camera_front/camera_front_rgb_frame published by /camera_front_base_link_1 Average Delay: -0.00940964 Max Delay: 0
Frame: camera_top/camera_top_depth_frame published by /camera_top_base_link Average Delay: -0.00945783 Max Delay: 0
Frame: camera_top/camera_top_link published by /camera_top_link_broadcaster Average Delay: -0.00942012 Max Delay: 0
Frame: camera_top/camera_top_rgb_frame published by /camera_top_base_link_1 Average Delay: -0.00942155 Max Delay: 0 | {
"domain": "robotics.stackexchange",
"id": 19429,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "transform",
"url": null
} |
java, array, binary-search
public void printPartitionBoundaries(List<Integer> array)
{
int f = 0;
int l = array.size(); // Upperbound exclusive
while (f < l) {
int cp = array.get(f); | {
"domain": "codereview.stackexchange",
"id": 28419,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, array, binary-search",
"url": null
} |
quantum-gate, programming, gate-synthesis
However, remember that usually you're trying to calculate the action of a unitary on some state vector. It will be far more memory efficient to calculate that application directly, rather than first calculating the unitary matrix and applying it to the state vector.
To understand where this formula came from, think about the two-qubit version, where the first qubit is the control qubit. You'd normally write the unitary as
$$
|0\rangle\langle 0|\otimes\mathbb{I}+|1\rangle\langle 1|\otimes U.
$$
Let's rewrite this as
$$
(\mathbb{I}-|1\rangle\langle 1|)\otimes\mathbb{I}+|1\rangle\langle 1|\otimes U=\mathbb{I}\otimes\mathbb{I}+|1\rangle\langle 1|\otimes (U-\mathbb{I}).
$$
It can be easier to write things in terms of Pauli matrices, so
$$
|1\rangle\langle 1|=(\mathbb{I}-Z)/2.
$$
To get the same unitary on a different number of qubits, you just need to pad with identity matrices everywhere. | {
"domain": "quantumcomputing.stackexchange",
"id": 317,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-gate, programming, gate-synthesis",
"url": null
} |
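The two-qubit identity above is easy to verify numerically: with $U = X$ it should reproduce the CNOT matrix. A NumPy sketch of my own:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
P1 = np.array([[0., 0.], [0., 1.]])   # |1><1| = (I - Z)/2

# I ⊗ I + |1><1| ⊗ (U - I), with U = X:
controlled_X = np.kron(I2, I2) + np.kron(P1, X - I2)

CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])
print(np.array_equal(controlled_X, CNOT))  # True
```

Padding with more `np.kron(I2, ...)` factors gives the same unitary on a larger register, as the final sentence of the answer notes.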
nuclear-physics, elements
In fact, U-235 is not the only nuclear fuel. For now, it's the one that is mostly used, but there are other fuels, like Pu-239 from breeders, which we would use after constructing enough breeder reactors, and U-233 obtained from Th-232, whose fissile properties are somewhat similar to U-235's but which emits higher levels of radiation compared to Pu-239. It's used as fuel in the Kalpakkam Mini reactor (KAMINI), which is near my area, at Chennai. | {
"domain": "physics.stackexchange",
"id": 4616,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nuclear-physics, elements",
"url": null
} |
# Probability to pick up biased coin
I have two identical looking coins, one is fair and has an equal chance of coming up heads or tails, but the other is weighted and will always land on heads. You pick one of the coins at random, toss it three times and get three heads. Given this, what is the chance that you've picked the weighted coin?
My solution is the following.
With the fair coin, the probability that you get heads triple times is $$1/8$$.
With the biased coin, it is $$1$$. What's the total probability that you get three heads on three tosses? $$1/8 + 1 = 9/8$$
Now, assume you got your three heads. $$\dfrac{\dfrac{1}{8}}{\dfrac{9}{8}} = 0.1111$$ for fair coin and $$0.888$$ for biased one.
So the chance that I picked up the biased coin is $$0.888$$.
Is my solution correct? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226341042414,
"lm_q1q2_score": 0.8361859438072031,
"lm_q2_score": 0.8558511488056151,
"openwebmath_perplexity": 365.0090742043024,
"openwebmath_score": 0.9271571040153503,
"tags": null,
"url": "https://math.stackexchange.com/questions/3292708/probability-to-pick-up-biased-coin"
} |
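One way to check the quoted answer of $0.888\ldots = 8/9$ (my own sketch, not from the original thread): apply Bayes' rule with the $1/2$ prior written out explicitly, and back it up with a simulation.

```python
import random

# Exact posterior via Bayes' rule with the 1/2 prior made explicit:
exact = (0.5 * 1) / (0.5 * 1 + 0.5 * (1 / 8))
print(exact)  # 0.888... = 8/9

# Monte Carlo confirmation:
random.seed(1)
hhh = biased_and_hhh = 0
for _ in range(200_000):
    biased = random.random() < 0.5                                  # pick a coin
    three_heads = biased or all(random.random() < 0.5 for _ in range(3))
    if three_heads:
        hhh += 1
        biased_and_hhh += biased
print(biased_and_hhh / hhh)  # close to 8/9
```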
So I view skew as "tilt" that speaks to symmetry around the mean (mean >? median) and kurtosis as a tail metric. You might find the normal mixture distribution (a distribution that adds n normals together) interesting, because combinations of normals can produce any combination of skew/kurtosis. I hope that helps with the distinction...David
#### sanjose04
##### New Member
Thank you for your thorough explanation. I've got the point clear.
Sincerely,
Donald. | {
"domain": "bionicturtle.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9648551556203815,
"lm_q1q2_score": 0.8220110551377725,
"lm_q2_score": 0.8519528038477825,
"openwebmath_perplexity": 2379.273165181411,
"openwebmath_score": 0.8042216897010803,
"tags": null,
"url": "https://www.bionicturtle.com/forum/threads/kurtosis-and-skewness-in-investment-terms.962/"
} |
python, performance, python-3.x, numpy, simulation
ncalls tottime percall cumtime percall filename:lineno(function)
202 72.331 0.358 135.766 0.672 simulation.py:17(simulation)
2529183 27.246 0.000 27.246 0.000 {method 'reduce' of 'numpy.ufunc' objects}
2456168 20.346 0.000 20.346 0.000 {method 'random_sample' of 'numpy.random.mtrand.RandomState' objects}
10000 2.575 0.000 4.456 0.000 {method 'choice' of 'numpy.random.mtrand.RandomState' objects}
1258084 2.326 0.000 2.326 0.000 {method 'nonzero' of 'numpy.ndarray' objects}
1228747 2.139 0.000 2.139 0.000 {method 'copy' of 'numpy.ndarray' objects}
2486771 2.043 0.000 29.905 0.000 {method 'sum' of 'numpy.ndarray' objects}
1228085 1.420 0.000 1.420 0.000 {built-in method numpy.zeros}
10000 1.354 0.000 1.683 0.000 {method 'binomial' of 'numpy.random.mtrand.RandomState' objects} | {
"domain": "codereview.stackexchange",
"id": 37631,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, python-3.x, numpy, simulation",
"url": null
} |
quantum-field-theory, string-theory, renormalization, yang-mills
The author of the notes is essentially using the background field method (BFM) to calculate the effective action.⁰
Many of his choices follow from his focus on calculating only the one-loop effective potential. The background field method works in all theories with scalar, gauge, fermions, gravity, superfields¹ etc... It also works at all loops, however at higher loops, diagrams are still helpful for organising the calculations.
Two of the seminal papers did not use diagrammatics: Schwinger's On gauge invariance and vacuum polarization and the follow-up two-loop calculation of Ritus. However, since the BFM perturbation theory still uses a propagator-interaction separation, diagrammatics are only natural. | {
"domain": "physics.stackexchange",
"id": 5763,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, string-theory, renormalization, yang-mills",
"url": null
} |
cross-correlation
From $(1b)$: Negative shift | {
"domain": "dsp.stackexchange",
"id": 7208,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cross-correlation",
"url": null
} |
python, python-3.x, flask
if __name__ == '__main__':
app.run(host='0.0.0.0', port=80)
browse.html:
<!DOCTYPE html>
<html lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<title></title>
</head>
<body>
<ul>
{% for item in itemList %}
<li><a href="/browser{{ urlFilePath }}/{{ item }}">{{ item }}</a></li>
{% endfor %}
</ul>
</body>
</html>
file.html:
<!DOCTYPE html>
<html lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<title></title>
</head>
<body>
<p>Current file: {{ currentFile }}</p>
<p>
<table>
{% for key, value in fileProperties.items() %}
<tr>
<td>{{ key }}</td>
<td>{{ value }}</td>
</tr>
{% endfor %}
</table>
</p>
</body>
</html>
The output when browsing looks something like this: | {
"domain": "codereview.stackexchange",
"id": 33694,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, flask",
"url": null
} |
java, unit-testing, junit
cityDao.update(cityEntity);
Assert.assertEquals("Expected different value.", "Bratislava", cityDao.find(3L).getName());
}
@Test
public void delete_NotUsed_Deleted() {
CityEntity cityEntity = cityDao.find(4L);
cityDao.delete(cityEntity);
Assert.assertNull("Expected null value.", cityDao.find(cityEntity.getId()));
}
@Test
public void delete_Used_ExceptionThrown(){
exception.expect(ConstraintViolationException.class);
CityEntity cityEntity = cityDao.find(1L);
cityDao.delete(cityEntity);
cityDao.flush();
}
}
Is there anything you could improve in the test?
EDIT:
To make test conditions more clear: | {
"domain": "codereview.stackexchange",
"id": 7353,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, unit-testing, junit",
"url": null
} |
# High Dimensional Analysis
## High Dimensional Data Analysis
Write a script to do the following:
Hypersphere Volume: Plot the volume of a unit hypersphere as a function of dimension. Plot for $$d=1, \cdots, 50$$.
Hypersphere Radius: What value of radius would one need to maintain a hypersphere volume of 1 with increasing $d$? Plot this value for $$d = 1, \cdots, 100$$.
Nearest Neighbors: Assume we have a unit hypercube centered at $$(0.5, \cdots, 0.5)$$. Generate $n=10000$ uniformly random points in $d$ dimensions, in the range $(0,1)$ in each dimension. Find the ratio of the nearest and farthest point from the center of the space. Also store the actual distance of the nearest $d_n$ and farthest $d_f$ points from the center. Plot these value for $$d = 1, \cdots, 100$$. | {
"domain": "dataminingbook.info",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9879462183543601,
"lm_q1q2_score": 0.8145240380797483,
"lm_q2_score": 0.8244619220634456,
"openwebmath_perplexity": 358.5142767280001,
"openwebmath_score": 0.7848832011222839,
"tags": null,
"url": "https://dataminingbook.info/projects/proj_hda/"
} |
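A sketch for the first two parts of the exercise above (my own; it uses the closed form $V_d(r) = \pi^{d/2} r^d / \Gamma(d/2+1)$ and leaves the plotting out):

```python
import math

def unit_ball_volume(d):
    """Volume of the unit d-ball: pi^(d/2) / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def radius_for_unit_volume(d):
    """Radius giving a d-ball of volume 1, by inverting the formula above."""
    return (math.gamma(d / 2 + 1) / math.pi ** (d / 2)) ** (1 / d)

print(unit_ball_volume(2))   # area of the unit disk, = pi
print(unit_ball_volume(50))  # essentially zero: volume collapses with dimension
print(radius_for_unit_volume(100))  # so the required radius keeps growing
```

The same two functions evaluated over `range(1, 101)` produce the requested plots; the volume curve peaks near $d = 5$ and then decays toward zero.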
#%% Imports (added; the original snippet omitted them)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from scipy.interpolate import bisplrep, bisplev

#%% Define function over 50x50 grid
xgrid = np.linspace(-1, 1, 50)
ygrid = xgrid
x, y = np.meshgrid(xgrid, ygrid)
z = x*y

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(projection='3d')  # added: ax was never defined in the original
ax.plot_surface(x,
y,
z,
rstride=2, cstride=2,
cmap=cm.jet,
alpha=0.7,
linewidth=0.25)
plt.title('Sparsely sampled function')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
#%% Interpolate function over new 70x70 grid
xgrid2 = np.linspace(-1, 1, 70)
ygrid2 = xgrid2
xnew,ynew = np.meshgrid(xgrid2, ygrid2, indexing='ij')
tck = bisplrep(x, y, z, s=0.1) # Build the spline
znew = bisplev(xnew[:,0], ynew[0,:], tck) # Evaluate the spline
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(projection='3d')  # 3D axes needed for plot_surface
ax.plot_surface(xnew,
ynew,
znew,
rstride=2, cstride=2,
cmap=cm.jet,
alpha=0.7,
linewidth=0.25)
plt.title('Interpolated function')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show() | {
"domain": "apmonitor.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9736446525071291,
"lm_q1q2_score": 0.809237981979551,
"lm_q2_score": 0.831143045767024,
"openwebmath_perplexity": 14156.8805554607,
"openwebmath_score": 0.272673100233078,
"tags": null,
"url": "http://apmonitor.com/wiki/index.php/Main/ObjectBspline"
} |
html, css
</span>
</a>
</li>
<li>
<a href="#x">
<img class="small-thumb" src="Picture2" alt="Picture 2" />
<span>
<img class="big-pic" src="big-Picture2.jpg" alt="Big Picture 2" />
</span>
</a>
<a href="#x">
<img class="small-thumb" src="Picture3" alt="Picture 3" />
<span>
<img class="big-pic" src="big-Picture3.jpg" alt="Big Picture 3" />
</span>
</a>
</li>
</ul>
</div> | {
"domain": "codereview.stackexchange",
"id": 12020,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "html, css",
"url": null
} |
homework-and-exercises, statistical-mechanics, approximations, partition-function
Title: Expand the partition fct. of a simple harmonic oscillator I came across an expansion of the partition function of a simple harmonic oscillator $q$ as:
$$q=x^{-1}(1-\frac{x^2}{24}+...) \tag{1}$$
where $x=h\nu/kT$. It’s easy to get $$q=\frac{e^{-x/2}}{1-e^{-x}}=\frac{1}{e^{x/2}-e^{-x/2}}.$$ Expanding the RHS, I got:
$$x^{-1}(1+x^2/24+...)^{-1}. \tag{2}$$
But how can I go from (2) to (1)? When $z$ is small, $(1+z)^{\alpha}=1+\alpha z+{\cal O}(z^{2})$. (This is just the leading term in Newton's binomial theorem.) So the leading term in $x^{-1}(1+x^{2}/4!+\cdots)^{-1}$ is $x^{-1}(1-x^{2}/4!+\cdots)$, obtained by setting $\alpha=-1$ and $z=x^{2}/4!+\cdots$. This approximation omits terms of ${\cal O}(z^{2})$ and higher, but those are all of higher order in $x$ as well. | {
"domain": "physics.stackexchange",
"id": 53843,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, statistical-mechanics, approximations, partition-function",
"url": null
} |
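Since $q=\frac{1}{e^{x/2}-e^{-x/2}}=\frac{1}{2\sinh(x/2)}$, expansion (1) can be checked symbolically. A quick SymPy sketch (illustrative, not part of the original question):

```python
# Verify expansion (1): with q = 1/(2*sinh(x/2)), the Laurent series
# should start x^{-1} - x/24, i.e. q = x^{-1}(1 - x^2/24 + ...).
import sympy as sp

x = sp.symbols('x', positive=True)
q = 1 / (2 * sp.sinh(x / 2))  # equals e^{-x/2}/(1 - e^{-x})
s = sp.series(q, x, 0, 2).removeO()

assert s.coeff(x, -1) == 1                   # leading term x^{-1}
assert s.coeff(x, 1) == sp.Rational(-1, 24)  # the -x^2/24 correction to x^{-1}
```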
c++, algorithm, template, c++20
Image<ElementT>& operator-=(const Image<ElementT>& rhs)
{
assert(rhs.width == this->width);
assert(rhs.height == this->height);
std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data),
std::ranges::begin(image_data), std::minus<>{});
return *this;
}
Image<ElementT>& operator*=(const Image<ElementT>& rhs)
{
assert(rhs.width == this->width);
assert(rhs.height == this->height);
std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data),
std::ranges::begin(image_data), std::multiplies<>{});
return *this;
} | {
"domain": "codereview.stackexchange",
"id": 42637,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, template, c++20",
"url": null
} |
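Both compound-assignment operators above follow the same pattern: assert matching dimensions, then apply a binary elementwise transform in place. A rough NumPy analog of that pattern (illustrative only, not the reviewed class):

```python
# Illustrative NumPy analog of the element-wise operator-= / operator*=
# pattern: check matching shapes, then transform the pixels in place.
import numpy as np

def isub(lhs, rhs):
    assert lhs.shape == rhs.shape, "image sizes must match"
    lhs -= rhs          # elementwise, like std::minus<> over the pixels
    return lhs

def imul(lhs, rhs):
    assert lhs.shape == rhs.shape, "image sizes must match"
    lhs *= rhs          # elementwise, like std::multiplies<>
    return lhs

a = np.array([[5.0, 6.0], [7.0, 8.0]])
b = np.array([[1.0, 2.0], [3.0, 4.0]])
isub(a, b)              # a is now [[4, 4], [4, 4]]
imul(a, b)              # a is now [[4, 8], [12, 16]]
```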
navigation
Option: I could take inspiration from the tutorials about move_base and leave it putting out velocity commands, which I'll later feed into my robot's model.
Pro: exhaustive tutorials about move_base on this website and lots of explanations. Contra: my robot reads goals defined as points and moves accordingly to reach them. Feeding it velocity commands instead of positions will not work.
Option: I feed the goal's position into the model and write an actionlib (server + client) to track the progress to the goals. At each goal it will trigger the next goal. Pro: it sounds much easier and more logical than the above idea. Contra: no idea how to start developing such a server/client application and, most importantly, I could not implement something like the base_local_planner if I need to generate a simple map to avoid obstacles (not necessary so far). | {
"domain": "robotics.stackexchange",
"id": 19576,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation",
"url": null
} |
optics, visible-light, geometric-optics
Title: Why do certain objects shine only in certain light? I understand that wavelength is inversely proportional to index of refraction, which causes dispersion of light (red visible light is deflected less than purple), and total internal reflection.
Is there a way to explain, based on these concepts, why red objects (as seen in white light) absorb most light other than red light, which gets reflected? In your example, red objects are typically red not because of the index of refraction. Usually such an object will absorb light in the yellow-green-blue part of the visible spectrum, leaving the color red.
The index of refraction, strictly speaking, is a complex-valued function: the real part gives rise to things like total internal reflection, while the imaginary part gives rise to absorption. | {
"domain": "physics.stackexchange",
"id": 97408,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "optics, visible-light, geometric-optics",
"url": null
} |
java, csv
innerList.add(innerValue);
}
}
return directorsRatingList;
}
public static void sortList(List<List<Object>> directorsRatingList) {
Integer sortByColNumber = XLS_COLUMNS.indexOf(SORT_BY);
Collections.sort(directorsRatingList, Collections.reverseOrder(Comparator.comparing(innerList ->
(Double) innerList.get(sortByColNumber))));
}
public static void writeToXlsx (List<List<Object>> directorsRatingList) throws IOException,
FileNotFoundException {
Integer rows = directorsRatingList.size();
Integer columns = directorsRatingList.get(0).size();
Workbook workbook = new XSSFWorkbook();
Sheet sheet = workbook.createSheet(SHEET_NAME);
for(int row = 0; row < rows; row++) {
sheet.createRow(row); | {
"domain": "codereview.stackexchange",
"id": 40302,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, csv",
"url": null
} |
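The `sortList` step above — a descending sort of nested lists by one column — has a compact analog in many languages. A Python sketch with made-up sample data (column index and names are assumptions for illustration):

```python
# Illustrative analog of the sortList step: sort rows of a nested list
# in descending order by one column (here column 1, a rating value).
sort_by_col = 1
rows = [["Nolan", 8.6], ["Bay", 6.5], ["Villeneuve", 8.1]]
rows.sort(key=lambda row: row[sort_by_col], reverse=True)
# rows is now ordered Nolan, Villeneuve, Bay
```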
java, performance, programming-challenge, complexity
if (!is_true){
for (int i = 0; i < 26; i++){
returnArray[i] = Math.abs(a1[i] - a2[i]);
}
}
return returnArray;
}
}
Output time: 0.1619sec
How can I further improve the performance of this code in terms of time? Point out repetitions if I've made any somewhere. Also, lambdas elude me. The first thing that pops into my mind is the naming you used. For example, from a method called isAnagramPossible I'd expect to receive a bool and not a string. The same goes for isAnagram.
In the isAnagram method the returnArray variable should be renamed IMO as the name is not so meaningful. The same regarding is_true.
Also, commented code is only confusing. I'd remove it.
In addition:
for (int i = 0, len = a.length(); i < len; i++ ){
if(Character.isAlphabetic(aArray[i])) {
int pos = (int) aArray[i] - (int) 'a';
a1[pos] += 1;
}
} | {
"domain": "codereview.stackexchange",
"id": 16322,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, performance, programming-challenge, complexity",
"url": null
} |
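The loop above builds a 26-slot letter-frequency array per string. For comparison, here is a rough Python sketch of the same frequency-difference idea (function names and the lower-casing policy are assumptions for illustration, not taken from the original Java):

```python
# Sketch of the frequency-difference step discussed above, using
# collections.Counter instead of a manual 26-slot array.
from collections import Counter

def letter_diffs(a, b):
    ca = Counter(ch for ch in a.lower() if ch.isalpha())
    cb = Counter(ch for ch in b.lower() if ch.isalpha())
    letters = "abcdefghijklmnopqrstuvwxyz"
    return [abs(ca[ch] - cb[ch]) for ch in letters]

def is_anagram(a, b):
    # anagrams need zero difference in every letter count
    return not any(letter_diffs(a, b))

assert is_anagram("listen", "silent")
assert not is_anagram("hello", "world")
```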