text stringlengths 1 1.11k | source dict |
|---|---|
special-relativity, reference-frames, acceleration, observers, time-dilation
Also, if you want to make things simpler for yourself to understand and easier for other people to understand you, you really should use the standard form of time synchronisation. That means that when B receives a timing signal of 0 years from A, B should set his own clock to +6 years, to allow for the light travel time. Now the clocks are properly synchronised in B's reference frame. When both clocks are stationary, A sees signals from B as 6 years behind, B sees signals from A as 6 years behind, and things are symmetrical. Both consider each other's clocks to be running at the same rate and reading the same time.
In relativity, an inertial reference frame is not just one clock, but an array of rulers and clocks that potentially stretches to infinity. | {
"domain": "physics.stackexchange",
"id": 99298,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, reference-frames, acceleration, observers, time-dilation",
"url": null
} |
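The synchronisation recipe above is plain arithmetic on the light travel time; here is a minimal sketch (units of years and light-years so that c = 1; the function name is mine, the 6-light-year separation is the example from the text):

```python
# Standard ("Einstein") clock synchronisation as described above:
# on receiving a signal, set your clock to the emitted timestamp plus
# the light travel time from the sender.  Units: years / light-years, c = 1.

def synchronised_setting(emitted_timestamp, distance_ly, c=1.0):
    """Clock reading the receiver should adopt on arrival of the signal."""
    return emitted_timestamp + distance_ly / c

# B receives a signal stamped 0 years from A, 6 light-years away:
print(synchronised_setting(0.0, 6.0))  # -> 6.0
```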
scala
/**
* Called by an importing context at shutdown
*
* @param context - the context that no longer needs this module's services
*/
def removeContext(context: EFLContext) {
context.removeService(eventDispatchService)
}
private def waitForShutdown() {
var iterationCount = 0
while (eventDispatchService.registerCount > 0 && iterationCount < 25 ) {
Thread.sleep(50)
iterationCount += 1
}
logger.trace("[waitForShutdown] waited {} iterations for shutdown.", iterationCount)
}
}
case object EVENT_SHUTDOWN
trait CallableRegistration extends EventRegistration {
def call(event: EFLEvent)
}
private class EventService(context: EFLContext, logger: Logger) extends EventDispatchService {
val actor = new EventActor(context, this)
var registerCount = 0
def shutDown() {actor ! EVENT_SHUTDOWN}
def postEvent(event: EFLEvent) {
logger.trace("[EventService.postEvent] ({}) posting: {}", context.name, event.name, "")
actor ! event
} | {
"domain": "codereview.stackexchange",
"id": 7267,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "scala",
"url": null
} |
predictive-modeling, statistics, data-cleaning, matlab, outlier
Title: What method is recommended after outliers removal? I have data on mouse reaction times. In every session there are some trials in which the mouse "decides to take a break" and responds to those specific trials only after a long time.
I was thinking of applying outlier removal to my data, and the data does look better afterwards (I used a Matlab function which removed all data more than 3 IQRs above or below the median).
After doing that I got a histogram which is more similar to a normal distribution (below an example picture of one of my sessions).
My question is:
After applying my outlier removal, how should I analyze the remaining data?
Should I consider the median (together with the IQR as the measure of spread)?
Or should I consider the mean (together with $ \frac{\sigma}{\sqrt n}$ as the standard error of the mean)? | {
"domain": "datascience.stackexchange",
"id": 6348,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "predictive-modeling, statistics, data-cleaning, matlab, outlier",
"url": null
} |
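The filtering rule described above (drop everything more than 3 IQRs above or below the median) is easy to sketch; this is only an illustration of the rule, not a reproduction of the unnamed MATLAB function:

```python
# Hedged sketch of the outlier rule from the question: remove values more
# than 3 IQRs above or below the median, then compare summaries.
import statistics

def iqr(data):
    q = statistics.quantiles(data, n=4)   # [Q1, median, Q3]
    return q[2] - q[0]

def remove_outliers(data, k=3):
    med = statistics.median(data)
    spread = k * iqr(data)
    return [x for x in data if med - spread <= x <= med + spread]

# Illustrative reaction times in seconds, with one "break" trial:
times = [0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 4.8]
clean = remove_outliers(times)
print(clean)                                   # the 4.8 s trial is gone
print(statistics.median(clean), statistics.mean(clean))
```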
beginner, php, sql
PHP code:
<?php
class Article extends BaseEntity{
private $id, $categoryid, $author, $image, $title, $summary, $content, $created, $modified, $visible;
public function __construct($adapter) {
$table="articles";
parent::__construct($table, $adapter);
}
public function getArticlesAdv($limit, $category, $order){
$sql = "SELECT *
FROM articles
ORDER BY created DESC";
$stmt = $this->db->prepare($sql);
if($stmt->execute()){
$numrows = $stmt->rowCount();
if($numrows > 0){
// fetchAll() returns every remaining row at once, so it must not be called in a loop
$resultSet = $stmt->fetchAll(PDO::FETCH_OBJ);
return $resultSet;
}
else
{
return false;
}
}
else
{
return false;
}
} | {
"domain": "codereview.stackexchange",
"id": 29709,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, php, sql",
"url": null
} |
thermodynamics, work
So that's a laundry list of things that I work on regularly and how we handle the EOS. You can see that for many things, the ideal gas equation of state works perfectly (pun intended) well. We can get fantastic scientific information using it -- not just engineering approximations. And often, when we want an engineering approximation to things, there are so many fudge factors due to uncertainties and safety margins that the errors from the "wrong" EOS are marginal.
However, it really depends on what you want to calculate. If you're doing studies at very high pressures, you need to account for the non-ideal effects. If you're working with rarefied gases, you need to account for non-equilibrium effects. | {
"domain": "physics.stackexchange",
"id": 60523,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, work",
"url": null
} |
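A quick numerical illustration of why the ideal-gas EOS is so serviceable: at around 1 atm and everyday temperatures the computed pressure sits within a fraction of a percent of measurement. The constants below are standard values, not taken from the text:

```python
# Ideal-gas equation of state: p = n R T / V.
R = 8.314462618  # gas constant, J/(mol K)

def ideal_gas_pressure(n_mol, T_kelvin, V_m3):
    """Pressure in pascals for n_mol moles at temperature T in volume V."""
    return n_mol * R * T_kelvin / V_m3

# 1 mol at 273.15 K in 22.414 L comes out very close to 1 atm (101325 Pa):
p = ideal_gas_pressure(1.0, 273.15, 22.414e-3)
print(p)
```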
java, object-oriented, recursion
if (choice == 1) {
    // assign the selection; equalsIgnoreCase() only compares and discards its result
    location = "useast";
} else if (choice == 2) {
    location = "uswest";
} else if (choice == 3) {
    location = "alaska";
} else if (choice == 4) {
    location = "hawaii";
} else if (choice == 5) {
    location = "other";
} else {
    System.out.println("Entered invalid selection.\n" +
            "Please enter valid selection.");
    setCardAccountType();
}
return location;
}
//set the card account type for the createNewCard method to use
public static String setCardAccountType() {
String accountType = "other"; | {
"domain": "codereview.stackexchange",
"id": 39677,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented, recursion",
"url": null
} |
taught. For example, we know that the equation $x^2 + 1 = 0$ has no real solution; with the number $i$, we can define $i$ as a solution of that equation. Example 1 – Determine which of the following is the rectangular form of a complex number. How do you write a complex number in rectangular form? The root of a negative number $\sqrt{-n}$ can be resolved as $\sqrt{-1}\cdot\sqrt{n} = \sqrt{n}\,i$, where $n$ is a positive real number. There are two basic forms of complex number notation: polar and rectangular. Multiplying complex numbers is much like multiplying binomials. Real numbers can be considered a subset of the complex numbers: those that have the form $a + 0i$. The rectangular form of a complex number is written as a single real number $a$ combined with a single imaginary term $bi$, in the form $a+bi$. This way, a complex number is defined as a polynomial with real coefficients in the single indeterminate $i$, for which the relation $i^2 + 1 = 0$ is imposed. | {
"domain": "sintap.pt",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9643214511730025,
"lm_q1q2_score": 0.8117905317056598,
"lm_q2_score": 0.8418256492357359,
"openwebmath_perplexity": 364.2756635850276,
"openwebmath_score": 0.7707651257514954,
"tags": null,
"url": "https://www.sintap.pt/bananagrams-online-euj/multiplying-complex-numbers-in-rectangular-form-f6ee53"
} |
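The "multiply like binomials" rule above expands as $(a+bi)(c+di) = (ac - bd) + (ad + bc)i$, using $i^2 = -1$; a minimal sketch:

```python
# Multiplying two complex numbers in rectangular form, FOIL-style:
# (a + bi)(c + di) = (ac - bd) + (ad + bc)i, since i^2 = -1.
def mul_rect(a, b, c, d):
    """Return (real, imag) of the product (a + bi)(c + di)."""
    return (a * c - b * d, a * d + b * c)

print(mul_rect(1, 2, 3, 4))    # (1+2i)(3+4i) = -5 + 10i -> (-5, 10)
# Cross-check against Python's built-in complex type:
print((1 + 2j) * (3 + 4j))     # (-5+10j)
```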
neurophysiology, literature
Can somebody please point me to the source of that claim, as it is not mentioned in the paper, at least to my knowledge. How about this:
https://www.springerprofessional.de/data-compression-and-data-selection-in-human-vision/3218014
The original link in the Abstract referenced (https://link.springer.com/content/pdf/10.1007%2F978-3-642-04954-5_7.pdf) above isn't available anymore, but I found a repository:
https://webdav.tuebingen.mpg.de/u/zli/prints/ZhaopingNReview2006.pdf
I don't think the review is peer reviewed, but maybe you'll find a reference in there (it's not my field, so I guess you know better where to look ;-))
She's now in Germany:
https://www.researchgate.net/profile/Li_Zhaoping
If you don't find anything, maybe you can ask her on researchgate directly. | {
"domain": "biology.stackexchange",
"id": 10199,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neurophysiology, literature",
"url": null
} |
c#, winforms, event-handling
Func<EventInfo, FieldInfo> ei2fi =
    ei => _webBrowser.GetType().GetField(ei.Name,
                                         BindingFlags.NonPublic |
                                         BindingFlags.Instance |
                                         BindingFlags.GetField);
return from eventInfo1 in new EventInfo[] { GetEventInfo(typeof(System.Windows.Forms.WebBrowser), EVENT_NAME) }
let eventFieldInfo = ei2fi(eventInfo1)
let eventFieldValue =
(System.Delegate)eventFieldInfo.GetValue(_webBrowser)
from subscribedDelegate in eventFieldValue.GetInvocationList()
select subscribedDelegate.Method;
}
There is no need to call GetEventInfo() twice. Just reuse eventInfo. | {
"domain": "codereview.stackexchange",
"id": 16476,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, winforms, event-handling",
"url": null
} |
homework-and-exercises, newtonian-mechanics, kinematics, projectile
Title: Can someone explain this kinematics problem to me?
This is the problem itself. It is from David Morin's Introductory Mechanics.
This is the solution to the problem (or part of it)
$U\sin\phi /g$ is just the time for a thrown ball to reach its highest point.
We are talking about the box.
$Mg\sin\beta$ is the component of gravity along the incline, pointing down the slope.
So $g\sin\beta$ is the acceleration along it.
Then from $a=(v-u)/t$ we get the result.
Read the question carefully.
We are talking about meeting at the maximum height of the block/box, not the ball. | {
"domain": "physics.stackexchange",
"id": 67840,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics, kinematics, projectile",
"url": null
} |
c++, matrix, template
// template operator to multiply a 2D vector with another 2D vector
template <class T>
std::vector<std::vector<T>> operator*(const std::vector<std::vector<T>>& a, const std::vector<std::vector<T>>& b)
{
auto c = Multiply(a, b);
return c;
}
// template operator to multiply a 2D vector by a constant
template <class T>
std::vector<std::vector<T>> operator*(const T & n, const std::vector<std::vector<T>>& a)
{
auto c = Multiply(n, a);
return c;
}
// Function to get cofactor of a[p][q] in b[][]
template <class T, class T2>
void GetCofactor(std::vector<std::vector<T>> &a, std::vector<std::vector<T>> &b, T2 p, T2 q, T2 n)
{
T2 i = 0;
T2 j = 0; | {
"domain": "codereview.stackexchange",
"id": 28585,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, matrix, template",
"url": null
} |
javascript, node.js, ecmascript-6, django, virtual-machine
constant style: most style guides recommend constant names be in all caps so anyone reading it can distinguish variables from constants. So instead of:
const timeout = 1000
use all capitals - e.g.
const TIMEOUT = 1000
repeated require for dependency assertionError: instead of
const AssertionError = require('assert').AssertionError
just use:
const AssertionError = assert.AssertionError
since the previous line already loaded assert
regular expressions class \d can be used in place of [0-9]
scope of variable: is ran declared with a keyword, or is it okay as a global?
arrow function for callback The callback to rawStr.replace() could be simplified to an arrow function
assigning ran in the callback to .map(): the object built in assertionString can be stringified with JSON.stringify() - so instead of:
ran = `{id: ${a.id}, assertion: '${a.assertion}', is_public: ${a.is_public}}`
just do this:
const ran = JSON.stringify(a) | {
"domain": "codereview.stackexchange",
"id": 41197,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, node.js, ecmascript-6, django, virtual-machine",
"url": null
} |
beginner, c, csv
puts(sum_comma_separated_numbers('comma_separated.txt'))
To make the code above comply with the Single Responsibility Principle, I'd make an enumerator that yields floats.
class NumberTokenizer
include Enumerable
def initialize(io, sep=',')
@io = io
@sep = sep
end
def each
@io.each do |line|
line.split(@sep).each { |num| yield num.to_f }
end
end
end
def average(enum)
sum, count = 0.0, 0
enum.each do |num|
sum += num
count += 1
end
return sum / count
end
open('comma_separated.txt') do |f|
puts(average(NumberTokenizer.new(f)))
end
Translating to C
Drawing inspiration from the Ruby example, we want to make a tokenizer that produces doubles, and an average() function that takes a tokenizer.
#include <stdio.h>
typedef struct {
FILE *f;
} Tokenizer;
Tokenizer *tokenizer_init(Tokenizer *t, FILE *f) {
t->f = f;
return t;
} | {
"domain": "codereview.stackexchange",
"id": 6832,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, c, csv",
"url": null
} |
ros2
I am not sure what exactly the problem is with your setup. More information would help here. Our problem with the LGSVL simulator is actually this one https://github.com/lgsvl/simulator/issues/1403.
Originally posted by dejanpan with karma: 1420 on 2021-07-06
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by philiane on 2021-07-07:
Thanks a lot for your reply.
Two instances mean that two users are using the same ade environment at one server simultaneously. When the second user enters the 'ade', the first one is forced to quit automatically now. | {
"domain": "robotics.stackexchange",
"id": 36623,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros2",
"url": null
} |
java, optimization, serialization
Now I am trying to represent all of this in one particular Java class, so that I can just pass in the necessary fields and it makes me a final byte array out of them, with the header first and then the data:
public static void main(String[] args) throws IOException {
// header layout
byte addressed_center = 0;
byte record_version = 1;
// should packCustomerAddress method be in DataFrame class?
// Or we should remove it from there and put it somewhere else?
long address = DataFrame.packCustomerAddress((byte) 12, (short) 13, (byte) 32, (int) 120);
long address_from = DataFrame.packCustomerAddress((byte) 21, (short) 23, (byte) 41, (int) 130);
byte records_partition = 3;
byte already_replicated = 0; | {
"domain": "codereview.stackexchange",
"id": 10411,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, optimization, serialization",
"url": null
} |
big-bang-theory
"By now, one second of time has passed [since the big bang]. The universe has grown a few light years across."
How could the universe have expanded to a few light years in diameter in just one second if nothing can travel faster than light? By the definition of a light year, wouldn't it take more than a year for the universe to get this big? Your question hinges on the misconception that nothing can travel faster than light. No physical object can move faster than light, but plenty of non-physical things can. For example, take the straight line $y = mx + c$. If one varies $c$, the x-intercept of the line also changes. It's possible for this x-intercept to move faster than light for a suitably small $m$. The caveat is that this intercept is a mathematical point, and it cannot be used to transfer information. Relativity isn't violated. | {
"domain": "astronomy.stackexchange",
"id": 2642,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "big-bang-theory",
"url": null
} |
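The x-intercept example above is easy to make quantitative: the intercept sits at $x_0 = -c/m$, so if $c$ changes at rate $dc/dt$ the intercept moves at speed $|dc/dt|/m$, which exceeds the speed of light for small enough $m$. The numbers below are illustrative only:

```python
# Speed of the x-intercept of y = m*x + c when c varies in time.
# Nothing physical moves here, so no information travels faster than light.
c_light = 3.0e8  # speed of light, m/s

def intercept_speed(dc_dt, m):
    """Speed |d(x0)/dt| of the intercept x0 = -c/m as c changes at rate dc_dt."""
    return abs(dc_dt) / m

speed = intercept_speed(1.0, 1e-9)   # modest dc/dt, very shallow slope
print(speed, speed > c_light)        # far above c_light
```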
thermodynamics, statistical-mechanics, entropy
$$
\int e^{-E_P/kT} d^{N} \mathbf{r} = V^N
$$
Combining with the momentum integral we obtain the canonical partition of the ideal gas:
$$
Q^\text{ig} = \frac{V^N}{N!}\left(\frac{2\pi m k T}{h^2}\right)^{3N/2}
\tag{1}
$$
Since the question was about entropy, in the canonical ensemble we calculate it as
$$
\frac{S}{k} = \ln Q - \beta \left(\frac{\partial \ln Q}{\partial\beta}\right)
$$
Applying this to the partition function of the ideal gas, we obtain an entropy that matches the classical entropy of the ideal gas to within an additive constant. The result can be found in any standard textbook. The point of this lengthier answer was to address the question about how to discretize the phase space: there is no discretization involved. | {
"domain": "physics.stackexchange",
"id": 91153,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, statistical-mechanics, entropy",
"url": null
} |
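For completeness, carrying out that derivative on the partition function $(1)$ reproduces the Sackur–Tetrode result; this is my own filling-in of the step the answer leaves to textbooks, using Stirling's approximation $\ln N! \approx N\ln N - N$:
$$
\ln Q^\text{ig} = N\ln V - \ln N! - \frac{3N}{2}\ln\beta + \frac{3N}{2}\ln\frac{2\pi m}{h^2},
\qquad
-\beta\frac{\partial \ln Q^\text{ig}}{\partial\beta} = \frac{3N}{2},
$$
so that
$$
\frac{S}{k} = \ln Q^\text{ig} + \frac{3N}{2} \approx N\left[\ln\frac{V}{N\Lambda^3} + \frac{5}{2}\right],
\qquad \Lambda = \frac{h}{\sqrt{2\pi m k T}}.
$$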
Assume that we are doing the grade school addition of the sum $n_1+n_2+\cdots+n_k=n$. Let the carry at position $j$, $j=0,1,\ldots$, be $c_j$. Because we are dealing with integers, there is no initial carry, so we declare $c_{-1}=0$. The addition algorithm for the digit at position $j$ amounts to the equation $$\sum_{i=1}^kb_{i,j}+c_{j-1}=pc_j+a_j,$$ or, equivalently, to the equation $$\left(\sum_{i=1}^kb_{i,j}\right)-a_j=pc_j-c_{j-1}$$ that holds for all $j\ge0$.
Adding all these equations together shows that the numerator in the formula $(*)$ for $e$ is \begin{aligned} \sum_{i=1}^k\sum_j b_{i,j}-\sum_j a_j&=\sum_{j=0}^{j_{MAX}}(pc_j-c_{j-1})\\ &=pc_{j_{MAX}}+\sum_{j=0}^{j_{MAX}-1}(p-1)c_j-c_{-1}\\ &=(p-1)\sum_jc_j, \end{aligned} because clearly at the most significant digit there will be no further carry, $c_{j_{MAX}}=0$, and because $c_{-1}=0$. Thus we can rewrite formula $(*)$ to read $$e=\sum_j c_j.$$ In other words $e$ equals the total carry $\sum_j c_j$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363499098282,
"lm_q1q2_score": 0.8202716924767953,
"lm_q2_score": 0.8333245973817158,
"openwebmath_perplexity": 293.9364915310255,
"openwebmath_score": 0.8907673954963684,
"tags": null,
"url": "https://math.stackexchange.com/questions/833706/highest-power-of-prime-p-dividing-binommnn"
} |
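The identity derived above — that $e$, computed from digit sums as $\bigl(\sum_i \text{digitsum}_p(n_i) - \text{digitsum}_p(n)\bigr)/(p-1)$, equals the total carry $\sum_j c_j$ — can be checked directly by simulating schoolbook addition:

```python
# Verify: e = (sum of base-p digit sums of the addends - digit sum of the
# total) / (p - 1) equals the sum of the carry digits c_j.
def digit_sum(n, p):
    s = 0
    while n:
        s += n % p
        n //= p
    return s

def total_carry(ns, p):
    """Sum of the carries c_j in the schoolbook addition of ns in base p."""
    carry_sum, carry = 0, 0
    digits_left = list(ns)
    while any(digits_left) or carry:
        column = sum(d % p for d in digits_left) + carry
        carry = column // p          # c_j (may exceed 1 with many addends)
        carry_sum += carry
        digits_left = [d // p for d in digits_left]
    return carry_sum

ns, p = [7, 9], 2                    # 7 + 9 = 16, i.e. 111 + 1001 in binary
e = (sum(digit_sum(x, p) for x in ns) - digit_sum(sum(ns), p)) // (p - 1)
print(e, total_carry(ns, p))         # -> 4 4
```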
java, array, memory-management, combinatorics, integer
return perm;
}
public void nextPermutation()
{
for(int i = 0; i < size; i++) //loop for permutation 10 times
{
for(Integer item : getRandomPermutation())
{
System.out.print(item + " ");
}
System.out.println();
}
}
}
This is my PermutationGeneratorViewer file:
public class PermutationGeneratorViewer
{
public static void main(String[] args)
{
BrutePermutationGenerator brute = new BrutePermutationGenerator();
SmartPermutationGenerator smart = new SmartPermutationGenerator();
System.out.println("\n" + "Random arrays using Brute Force: ");
brute.nextPermutation(); | {
"domain": "codereview.stackexchange",
"id": 10615,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, array, memory-management, combinatorics, integer",
"url": null
} |
ros, ros-melodic, nav-msgs, publisher
Point point;
__uint32_t cost_till_now;
};
struct compare_cost{
bool operator() (Cell &p1, Cell &p2)
{
return p1.cost_till_now < p2.cost_till_now;
}
};
public:
GlobalPlanner();
GlobalPlanner(std::string name, costmap_2d::Costmap2DROS* costmap_ros);
void initialize(std::string name, costmap_2d::Costmap2DROS* costmap_ros);
bool makePlan(const geometry_msgs::PoseStamped& start,
const geometry_msgs::PoseStamped& goal,
std::vector<geometry_msgs::PoseStamped>& plan
); | {
"domain": "robotics.stackexchange",
"id": 36512,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-melodic, nav-msgs, publisher",
"url": null
} |
thermodynamics, heat-engine, carnot-cycle
Title: Why does heat engine efficiency use absolute zero rather than thermal differential? Carnot's theorem calculates efficiency of a heat engine versus absolute zero. I read the question
Why can Carnot efficiency only be 100% at absolute zero?
But I don't feel it addresses it precisely.
One assumption of thermodynamics is that temperatures equalize. So within a system the only heat energy you actually have to work with is the difference between the heat source and the cold sink. It seems to me the efficiency of a heat engine should be how well it handles this energy, since that is all the energy it could ever theoretically use, based on the other laws of thermodynamics.
So basically it seems like, for instance, a 60% efficient engine might actually be using all of the available heat energy as well as possible, with no waste. Obviously this is an ideal engine. | {
"domain": "physics.stackexchange",
"id": 93154,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, heat-engine, carnot-cycle",
"url": null
} |
electrostatics, electric-fields, charge, gauss-law, conductors
A function $\psi$ which satisfies Laplace's equation is said to be
harmonic. A solution to Laplace's equation has the property that the
average value over a spherical surface is equal to the value at the
center of the sphere (Gauss's harmonic function theorem). Solutions
have no local maxima or minima
(emphasis is mine)
Since the potential is constant in the region below the plane, the electric field is necessarily zero. | {
"domain": "physics.stackexchange",
"id": 44471,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, electric-fields, charge, gauss-law, conductors",
"url": null
} |
optimization, artificial-intelligence, search-algorithms, search-trees, search-problem
Let us re-define the cost function as:
$$
c(n,n')=\left\{\begin{array}{l}
2^{g^*(s,n)}, \mbox{in some cases, whichever}\\
2^{g^*(s,n)}+\delta, \mbox{otherwise}\end{array}\right.
$$
where $\delta$ is just a small number, say $\frac{1}{2}$, such that it is smaller than the smallest of all costs $2^0=1$. Now you have a lot of different ways to get different path costs of nodes lying at the same level. In this particular case, the path cost of a node at depth $d$ ranges in the interval $[2^{d+1}-1, 2^{d+1}+\delta\times d-1]$. | {
"domain": "cs.stackexchange",
"id": 12088,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "optimization, artificial-intelligence, search-algorithms, search-trees, search-problem",
"url": null
} |
general-relativity, fluid-dynamics, cosmology, interactions, beyond-the-standard-model
theorem to convert this to an equation of the form $p = G(\rho)$ where $G$ is now a function of a single variable. As a special case of this, we can restrict to linear equations of state, which are then $p = w \rho$ where $w$ is a constant number (called the equation of state parameter, though in practice people often abuse terminology and call $w$ the equation of state itself). | {
"domain": "physics.stackexchange",
"id": 79860,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, fluid-dynamics, cosmology, interactions, beyond-the-standard-model",
"url": null
} |
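The linear (barotropic) relation $p = w\rho$ described above is trivial to evaluate; the $w$ values below are the standard textbook cases (dust, radiation, cosmological constant), not something stated in this excerpt:

```python
# Linear equation of state p = w * rho for a few textbook values of the
# equation-of-state parameter w.  Units are arbitrary; the point is only
# the linear relation itself.
def pressure(rho, w):
    return w * rho

rho = 1.0
for name, w in [("dust", 0.0), ("radiation", 1.0 / 3.0), ("Lambda", -1.0)]:
    print(name, pressure(rho, w))
```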
javascript, node.js, asynchronous
var destStream = fs.createWriteStream(destFile);
destStream.on('error', function(err) {
// Is destStream closed?
// destStream.end();
callback(err); //moveFile() - Could not open writestream
});
sourceStream.pipe(destStream);};
From a once-over:
Your indenting is iffy: var sourceStream = fs.createReadStream(sourceFile); should be indented, along with everything under it within that function
sourceStream is auto-closed because If autoClose is set to true (default behavior), on error or end the file descriptor will be closed automatically.
fs.unlinkSync(sourceFile); <- It is more idiomatic to use the async version
sourceStream.pipe(destStream);}; <- Please use a beautifier or somesuch
This:
destStream.on('error', function(err) {
// Is destStream closed?
// destStream.end();
callback(err); //moveFile() - Could not open writestream
});
can be replaced with
destStream.on('error', callback );
since callback is also a function that can take err as a parameter | {
"domain": "codereview.stackexchange",
"id": 9469,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, node.js, asynchronous",
"url": null
} |
There are six possibilities for the boy who goes first. That leaves five for the boy who goes second, four for the boy who goes third, etc. We are computing the permutations of six objects, and the answer is simply $6\times 5\times 4\times 3\times 2\times 1 = 6!$.
-
What does the phrase "How many couples can perform together?" mean? If it means how many distinct ways the boys and girls can be matched up, then it is $6!$. But if it just means how many pairings of a boy and a girl there can be, then the answer is $6\times 6=36$, because each boy can be matched up with any one of the 6 girls. I think the question is ambiguous.
- | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806529525571,
"lm_q1q2_score": 0.8128413599934694,
"lm_q2_score": 0.8289388104343892,
"openwebmath_perplexity": 220.31096189872764,
"openwebmath_score": 0.7118151187896729,
"tags": null,
"url": "http://math.stackexchange.com/questions/157469/dance-couples-permutation-question"
} |
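The two readings distinguished above give the two counts computed in the text:

```python
# Two interpretations of the dance-couples question:
import math

distinct_matchings = math.factorial(6)  # full pairings of 6 boys with 6 girls
possible_couples = 6 * 6                # ways to pick one boy-girl pair

print(distinct_matchings, possible_couples)  # -> 720 36
```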
reference-request, self-study, equalizer, adaptive-algorithms, lms
Title: How to choose a fixed adaptation step for a decision feedback equalizer How do I choose $\mu$ - the value of the step size for adaptation in a decision feedback equalizer (DFE) with adaptive reference control (ARC)? For a regular adaptive FIR filter the adaptation step depends on the variance of the input, but a DFE gets symbol values as input. Another problem is that the DFE employs a sign-error LMS adaptation algorithm, while the ARC employs plain LMS. So can the $\mu$ values for the ARC and for $h_i(k+1)$ be the same?
The algorithm for the DFE and ARC is the following (based on this M.Sc. thesis by Mirna Hage as a reference).
With $x^{in}_k$ being a $k^{th}$ sample:
\begin{align}
x_k &= x^{in}_k - \sum_{i=1}^N h_i(k) \cdot a_{k-i}\\
a_k &=
\begin{cases}
+3 & \text{if} & x_k > 2 \cdot ARC_{k-1}\\
+1 & \text{if} & 0 \leq x_k \leq 2 \cdot ARC_{k-1}\\
-1 & \text{if} & -2 \cdot ARC_{k-1} \leq x_k < 0\\
-3 & \text{if} & x_k < -2 \cdot ARC_{k-1}
\end{cases}\\
e_k &= x_k - ARC_{k-1} \cdot a_k\\ | {
"domain": "dsp.stackexchange",
"id": 9574,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reference-request, self-study, equalizer, adaptive-algorithms, lms",
"url": null
} |
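The four-level decision rule for $a_k$ above can be sketched directly (with the last threshold read as $-2\cdot ARC_{k-1}$, matching the symmetry of the other three cases):

```python
# 4-level slicer a_k from the DFE/ARC algorithm above: decide which of the
# symbol values {+3, +1, -1, -3} the equalized sample x corresponds to,
# using the adaptive reference arc as the decision scale.
def slicer(x, arc):
    if x > 2 * arc:
        return 3
    elif x >= 0:        # 0 <= x <= 2*arc
        return 1
    elif x >= -2 * arc:  # -2*arc <= x < 0
        return -1
    else:                # x < -2*arc
        return -3

arc = 1.0
print([slicer(x, arc) for x in (3.0, 1.0, -1.0, -3.0)])  # -> [3, 1, -1, -3]
```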
discrete-signals, convolution, finite-impulse-response, z-transform
Assuming that the input signal has no spectral zeros, we can determine the impulse response of an LTI system by observing its response. Note, however, that we must assume that the system is LTI. If it isn't, and if we don't know anything else about the system, then a single input-output pair won't help to characterize the system.
Given that the input signal has no spectral zeros and that we know that the system is LTI, any input-output pair will allow us to completely characterize the system.
Coming to your example, your argument about the support of the impulse response seems convincing. The argument goes as follows: the input and output sequences have the same lengths, hence the impulse response must have length $1$, which is just a scaling. Since the output is not a scaled version of the input, the system can't be LTI. However, this line of thought is wrong. | {
"domain": "dsp.stackexchange",
"id": 12267,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "discrete-signals, convolution, finite-impulse-response, z-transform",
"url": null
} |
type-theory, functional-programming, dependent-types, proof-assistants, agda
Title: Proving transitivity in an intuitionistic type theory without the K rule In Agda, if I disable axiom $\mathbb{K}$ I'm not able to prove
$$
\forall\{A : \textbf{Set}\}\{a\ b : A\}\{p\ q : a \equiv b\} \to p \equiv q,
$$
which I guess is normal since the system does not truncate equalities. However, I'm still able to prove transitivity
$$
\forall \{A : \textbf{Set}\}\{a\ b\ c : A\} \to (b \equiv c) \to (a \equiv b) \to (a \equiv c)
$$
by pattern matching on $\textit{refl}$s, i.e. $\textit{trans }\textit{refl }\textit{refl} = \textit{refl}$. Am I missing something obvious here? What axiom is the system applying to assume the path is the reflexivity path (is it axiom $\mathbb{J}$?)? Thanks in advance. $\newcommand{\J}{\mathsf{J}} \newcommand{\K}{\mathsf{K}} \newcommand{\refl}{\mathsf{refl}}$
Yes, this just uses axiom $\J$. $\K$ is only necessary for translating certain particular cases of pattern matching. | {
"domain": "cs.stackexchange",
"id": 20018,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "type-theory, functional-programming, dependent-types, proof-assistants, agda",
"url": null
} |
ros, librviz, debian, nodelet
I remembered that I also had to export my path, and after doing that (ldconfig -v) I got the same invalid pointer error as above. It seems that all this might be coming from all the OpenCV-related packages. I guess I have to download those, modify them and then rebuild. Christ, building ROS from source, even with little tweaks, is such tedious work...
Originally posted by rbaleksandar on ROS Answers with karma: 299 on 2015-09-16
Post score: 1
Fixed it by doing a clean build with ONLY my custom built OpenCV and PCL inside my catkin workspace and then installing the packages.
Originally posted by rbaleksandar with karma: 299 on 2015-09-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22645,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, librviz, debian, nodelet",
"url": null
} |
c++
Likewise in do_parenthesis() you have one variable named nr. I suppose this is supposed to be a number? If so, just call it number. Then you have both op and oper. Are they operator and operand? No, it turns out they're both operators, but you can't have 2 variables of the same name. So I would rename all the variables like this:
int nextOperand;
vector<double> operands;
char nextOperator{'0'};
vector<char> operators; | {
"domain": "codereview.stackexchange",
"id": 28991,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++",
"url": null
} |
oxidation-state, elements, toxicity
So yes, lead is dangerous in some ways, especially if it enters your body, but for one big solid lump that does not happen accidentally so easily, in contrast to powder, for example, which you could breathe in. You can look at the H and P (hazard and precautionary) statements in the MSDS if you want to get a better impression. But for a proper assessment, you would have to find some competent person you trust. I would say that a piece of lead that you only look at and do not manipulate in any other way is harmless, but would suggest that you make up your own mind.
One note for MSDSs: They read rather drastic usually, and I assume this is on purpose. So if you want to be on the safe side, they are a good indicator.
Just to let you know: Cobalt, one of the elements you already have, reads like this: | {
"domain": "chemistry.stackexchange",
"id": 16190,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "oxidation-state, elements, toxicity",
"url": null
} |
filters, complex, allpass
The results of this approach with a 101 tap FIR filter are plotted below: | {
"domain": "dsp.stackexchange",
"id": 11323,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, complex, allpass",
"url": null
} |
electricity
Title: What is the relationship between AC frequency, volts, amps and watts? In an alternating current, how are frequency, voltage, amperage, and watts related?
For instance, imagining the power as a sine wave, what is amperage if voltage is the amplitude? Is there a better analogy than a sine wave?
EDIT: One of the things I specifically wanted to know is whether frequency and voltage are related. A 60 Hz power signal seems to only be transmitted at 120 or 240 volts - does it have to be a multiple? He didn't mean 'forget' it. He just meant that it's not really relevant. It's easy to understand the various relationships in DC. When you go to AC they are all the same AT ANY GIVEN MOMENT ON THE WAVE. It all boils down to $V=IR$ and $VI=W$, $RI^2=W$, $V^2/R=W$. With DC it's just a constant. With AC they change with time. If you pick any moment in that time and apply the above, you will have your answer. | {
"domain": "physics.stackexchange",
"id": 9535,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electricity",
"url": null
} |
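The "at any given moment" statement above can be checked numerically: with a resistive load, $V=IR$ holds at every instant, and averaging the instantaneous power over a cycle recovers $V^2_{rms}/R$. A small Python sketch (the 60 Hz / 120 V values come from the question; the 10-ohm resistive load is an assumed illustrative value):

```python
import numpy as np

# 60 Hz mains voltage, 120 V RMS, across an assumed 10-ohm resistive load.
f = 60.0                     # Hz
V_rms = 120.0
V_peak = V_rms * np.sqrt(2)  # ~169.7 V
R = 10.0                     # ohms (illustrative value, not from the post)

t = np.linspace(0, 1 / f, 1000, endpoint=False)  # one full cycle
v = V_peak * np.sin(2 * np.pi * f * t)  # instantaneous voltage
i = v / R                               # Ohm's law holds at every instant
p = v * i                               # instantaneous power

# Average power over a whole cycle matches the RMS formula V_rms^2 / R.
print(p.mean())       # ~1440 W
print(V_rms**2 / R)   # 1440 W
```

Note that frequency never enters the power calculation; it only sets how fast the instantaneous values sweep through the cycle, which is why frequency and voltage are independent choices.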
special-relativity
The signal will reach clock $2$ at precisely the time $d/c$ later, so once clock $2$ is engaged, it will be synchronized with clock $1$ for all later times. | {
"domain": "physics.stackexchange",
"id": 11283,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity",
"url": null
} |
gate-synthesis, universal-gates
Title: Are Toffoli gates actually used in designing quantum circuits? In an actual quantum computer, are we designing circuits with Toffoli gates and then using compilers or optimizers to remove redundancies so that we can use fewer qubits than full Toffoli gates would require? Or are multiple gates composed by the compilers into a single quantum circuit?
Are these operations mathematically equivalent if you use Toffoli gates or Fredkin gates, or is one gate easier for optimizers to work with? I guess you assume that you can implement any quantum circuit using Toffoli gates only; this is not true.
The Toffoli gate is classically universal, but not universal for quantum computation. The importance of the Toffoli gate in quantum information science is based on two facts:
The Toffoli gate is classically universal, that is, you can implement any classical circuit using Toffoli gates only;
The Toffoli gate is reversible, so it is also a quantum gate. | {
"domain": "quantumcomputing.stackexchange",
"id": 1525,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gate-synthesis, universal-gates",
"url": null
} |
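Both facts above are easy to check numerically: as a matrix, the Toffoli gate permutes the 8 computational basis states (classical and reversible) and is its own inverse, hence unitary. A quick sketch (the qubit ordering |c1 c2 t> is my own convention):

```python
import numpy as np

# The Toffoli (CCNOT) gate on 3 qubits: flip the target bit iff both
# controls are 1. As a matrix it permutes the 8 computational basis states.
def toffoli_matrix():
    m = np.zeros((8, 8))
    for state in range(8):  # basis state |c1 c2 t>
        c1, c2, t = (state >> 2) & 1, (state >> 1) & 1, state & 1
        if c1 and c2:
            t ^= 1          # flip the target bit
        m[(c1 << 2) | (c2 << 1) | t, state] = 1
    return m

T = toffoli_matrix()

# Every column has exactly one 1: basis states map to basis states,
# so the gate is classical. It is also its own inverse, hence unitary,
# which is what lets a classical gate double as a quantum gate.
assert np.array_equal(T @ T, np.eye(8))
assert np.allclose(T @ T.conj().T, np.eye(8))
```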
matlab
but I still got approximately the same result, not what I expected
Another edit:
I did something else: I had only 50 samples for the signal and took all of them
%% 50 samples taking 50
clc;
clear;
% We are given 4 different frequencies
freqs = [0.08, 0.2, 0.32, 0.4];
periods = 1./ freqs;
% Let the max value of t be 4 times the largest period
t_max = 4 * periods(1);
% Let us have 50 samples over the interval [0, t_max].
t = linspace(0, t_max, 50);
% The synthesized signal
signal = sin(2*pi*0.08*t) + sin(2*pi*0.2*t) + sin(2*pi*0.32*t) + sin(2*pi*0.4*t);
% Determine the first 50 samples of this signal
% signal_first_50_samples= signal (1:50);
% Display its magnitude spectrum.
% ??????????????????????????????????????????????????????????????????????
% I was expecting spikes at each freq
N = 64;
signal_spect = abs (fft(signal,N));
signal_spect = fftshift(signal_spect);
F = [-N/2:N/2-1]/N;
figure
plot (F, signal_spect)
%set(gca, 'XTick',[-0.5:0.05:1])
grid on;
I got this | {
"domain": "dsp.stackexchange",
"id": 2266,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab",
"url": null
} |
electromagnetism, electromagnetic-radiation, electric-fields
is too bound to the concept of electrostatic field to be useful when moving to electrodynamics.
Within electrostatics (or magnetostatics), the concept of field is just a proxy for the concept of force on charges due to other charges (or currents, in the case of magnetic fields). Being in a static condition, time does not enter the description. One can say that the presence of a charge at the position ${\bf r}$ is related to the presence of a force on a test charge at position ${\bf r'}$, or, in an equivalent way, that the force on the test charge is due to the presence of an electric field at the same position, due to the source charge.
Things become more complicated (and interesting) when there is some time variation. | {
"domain": "physics.stackexchange",
"id": 80545,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, electromagnetic-radiation, electric-fields",
"url": null
} |
c++, performance, c++11, queue, collections
template<uint64_t MaxVal>
using MinUint =
Conditional<(std::numeric_limits<uint8_t>::max() >= MaxVal), uint_fast8_t,
Conditional<(std::numeric_limits<uint16_t>::max() >= MaxVal), uint_fast16_t,
Conditional<(std::numeric_limits<uint32_t>::max() >= MaxVal), uint_fast32_t,
uint_fast64_t
>
>
>; | {
"domain": "codereview.stackexchange",
"id": 13038,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, c++11, queue, collections",
"url": null
} |
c#, hash-map
IEnumerator IEnumerable.GetEnumerator() => _dictionary.GetEnumerator();
public IDictionary<TKey, TValue> ToDictionary() => new Dictionary<TKey, TValue>(_dictionary);
}
There is nothing fancy about it so no examples this time. The question is as usual: is there anything (terribly) wrong with this class or maybe there is something missing? Is there anything that could be done better to make debugging even easier? Don't have much to say about the code as it's rather simple and not much can be changed in most places, but I will give my two cents.
Besides the fact that you can add some extra info in the exception message, I don't see much benefit from using this class :/ | {
"domain": "codereview.stackexchange",
"id": 27973,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, hash-map",
"url": null
} |
statistics, pca
Title: When standardizing data, does that imply that the mean and standard deviation will become 0 and 1, respectively? As the title suggests, I've been wondering about how standardization works while trying to understand how Principal Component Analysis (PCA) works from this tutorial https://medium.com/analytics-vidhya/understanding-principle-component-analysis-pca-step-by-step-e7a4bb4031d9 YES
You can prove this by using properties of the mean and variance of a transformation, but the intuition is that subtracting the observed mean gives you a new mean of zero (centers your data) and dividing by the standard deviation compresses or expands the distribution to give a standard deviation of one.
Note that there is no requirement to standardize your data before you run PCA. | {
"domain": "datascience.stackexchange",
"id": 10600,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistics, pca",
"url": null
} |
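The YES above is easy to verify numerically; a quick sketch on arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=10_000)  # arbitrary mean and spread

z = (x - x.mean()) / x.std()  # standardize

# The standardized sample has mean 0 and standard deviation 1 by
# construction, up to floating-point rounding.
print(z.mean())  # ~0
print(z.std())   # ~1
```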
electromagnetism, electricity, magnetic-fields, electric-circuits, history
Title: Clarification in deduction of Ampere's original experiments I am reading Ampere's original work. Therein he describes an experiment called "equilibrium of opposite current experiment".
In this experiment, two immobile straight wires with current flowing in opposite directions are placed near each other. Then a mobile astatic coil is placed near them. It experiences zero torque.
Next he describes an experiment called "equilibrium of bent wire experiment".
In this experiment, an immobile bent wire and straight wire with current flowing in opposite directions are placed near each other. Then a mobile astatic coil is placed near them. It experiences zero torque.
Then he concludes that the magnetic effects of the bent wire in the "equilibrium of opposite current experiment" are equivalent to the magnetic effects of the straight wire in the "equilibrium of bent wire experiment". This is fine. | {
"domain": "physics.stackexchange",
"id": 47982,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, electricity, magnetic-fields, electric-circuits, history",
"url": null
} |
# Number of non-decreasing functions?
Let $A = \{1,2,3,\dots,10\}$ and $B = \{1,2,3,\dots,20\}$.
Find the number of non-decreasing functions from $A$ to $B$.
What I tried:
Number of non-decreasing functions = (Total functions) - (Number of decreasing functions)
Total functions are $20^{10}$. And I think there are ${20 \choose 10}$ decreasing functions. Since you choose any $10$ codomain numbers and there's just one way for them to be arranged so that the resultant is a decreasing function.
However my answer doesn't match. Where am I going wrong?
How can I directly compute the non-decreasing functions like without subtracting from total?
Given a non-decreasing function $f:A\to B$, consider the function $\hat f:A\to\{1,2,\ldots,29\}$ given by
$$\hat f(n)=f(n)+n-1$$
In other words,
$$\hat f(1)=f(1),\ \hat f(2)=f(2)+1,\ \ldots,\ \hat f(10)=f(10)+9$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812345563904,
"lm_q1q2_score": 0.8069758963859587,
"lm_q2_score": 0.8333245911726382,
"openwebmath_perplexity": 238.615742017113,
"openwebmath_score": 0.8892835378646851,
"tags": null,
"url": "https://math.stackexchange.com/questions/1396896/number-of-non-decreasing-functions"
} |
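The shift $\hat f(n)=f(n)+n-1$ turns non-decreasing functions into strictly increasing functions into $\{1,\dots,29\}$, so the count is $\binom{29}{10}$. A brute-force check on smaller instances (my own sketch, not part of the original thread):

```python
from itertools import product
from math import comb

# Count non-decreasing functions from {1..a} to {1..b} by brute force and
# compare with C(a + b - 1, a), the count of strictly increasing functions
# into {1, ..., a + b - 1} given by the shift f_hat(n) = f(n) + n - 1.
def count_nondecreasing(a, b):
    return sum(
        1
        for f in product(range(1, b + 1), repeat=a)
        if all(f[i] <= f[i + 1] for i in range(a - 1))
    )

for a, b in [(3, 4), (2, 5), (4, 3)]:
    assert count_nondecreasing(a, b) == comb(a + b - 1, a)

# The original problem, too large to enumerate directly, follows the same
# formula: non-decreasing functions {1..10} -> {1..20}.
print(comb(29, 10))
```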
electrostatics, electric-fields, charge
Title: How does induction work when a body is charged (ie) not neutral? I know that for a neutral body, if a point charge is placed next to it, induction works something like this:
Consider a uniformly charged conducting body with charge Q on it. We place a positive point charge q1 near it. Do the positive charges on the conducting sphere become denser away from the point charge and less dense near the point charge? Yes
Because the field inside the conducting shell should be zero.
Initially, due to the uniform charge, the net field was zero.
When we introduce a charge outside, to compensate for the change inside, the charge density becomes non-uniform and decreases near the charge | {
"domain": "physics.stackexchange",
"id": 60316,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, electric-fields, charge",
"url": null
} |
c++, template, iterator, collections
Title: Custom implementation to provide an immutable range of a container without copying it I needed to pass a const reference to part of a std::vector without copying. I posted the previous version in my earlier question, thank you for the great support! As far as I know this version supports not only std::vectors but any kind of iterator-based contiguous container!
Some research, particularly this SO question, suggested that it is not something supported by default (at least in the pre-C++20 standard library), so I created my own code for it:
#include <iterator>
#include <vector>
template <typename Iterator = std::vector<double>::const_iterator>
class ConstVectorSubrange{
public:
using T = typename Iterator::value_type;
ConstVectorSubrange(Iterator start_, std::size_t size_)
: start(start_)
, range_size(size_)
{ }
ConstVectorSubrange(Iterator begin_, Iterator end_)
: start(begin_)
, range_size(std::distance(start, end_))
{ } | {
"domain": "codereview.stackexchange",
"id": 42450,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, template, iterator, collections",
"url": null
} |
c++, object-oriented
Careful With Overloads
The ability to overload the basic math operators is a nice feature of C++. However, it can be confusing if the meaning of those overloads isn't obvious from looking at the class definition. It's not immediately obvious to me what it means to add 2 circles. There are places where adding and subtracting shapes makes sense (constructive solid geometry), but the results are very different from what's happening here. I think the overloads you have are likely to confuse future readers of the code. I would make regular methods with names like addRadiuses() and subtractRadiuses() instead.
Avoid using namespace std
It's been said frequently that one should avoid using using namespace std. I'll let the experts explain it.
I also don't think naming a namespace after yourself is too useful. The name of a namespace should tell what it's for and hint at what it might contain. I don't know what corey should contain. | {
"domain": "codereview.stackexchange",
"id": 32923,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, object-oriented",
"url": null
} |
homework, mathematical-models, population-dynamics
Title: How to classify equilibrium points I have the two differential equations:
$$\frac{dN_1}{dt} = N_1(2 - N_1 - 2N_2)$$
$$\frac{dN_2}{dt} = N_2(3 - N_2 - 3N_1).$$
I worked out the equilibrium points to be at $N_1 = 0, \frac{4}{5}$ and $N_2 = 0, \frac{3}{5}$. I then calculated all the Jacobi matrices and worked out the eigenvalues (and eigenvectors). I now have to classify these equilibria. How do I do that? Is there a set of rules I follow to classify them?
Also, the next part says:
Sketch the phase portrait of this system in the biologically sensible region: draw the nullclines of the system and determine the crude directions of trajectories in parts of the phase plane cut by the nullclines, designate the equilibria in the phase plane, and sketch a few typical trajectories. | {
"domain": "biology.stackexchange",
"id": 946,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework, mathematical-models, population-dynamics",
"url": null
} |
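A classification sketch for the system above (my own illustration, not from the question): evaluate the Jacobian at each equilibrium and inspect its eigenvalues — real with opposite signs means a saddle, both negative a stable node, both positive an unstable node, complex a spiral. Note the system also has boundary equilibria at (2, 0) and (0, 3) beyond the ones the question names:

```python
import numpy as np

# Jacobian of dN1/dt = N1(2 - N1 - 2 N2), dN2/dt = N2(3 - N2 - 3 N1).
def jacobian(n1, n2):
    return np.array([
        [2 - 2 * n1 - 2 * n2, -2 * n1],
        [-3 * n2, 3 - 2 * n2 - 3 * n1],
    ])

equilibria = [(0, 0), (2, 0), (0, 3), (4 / 5, 3 / 5)]
for eq in equilibria:
    eigvals = np.linalg.eigvals(jacobian(*eq))
    print(eq, eigvals)

# At (4/5, 3/5) the determinant of the Jacobian is negative, so the
# eigenvalues are real with opposite signs: the coexistence equilibrium
# is a saddle. (0, 0) has both eigenvalues positive (unstable node);
# (2, 0) and (0, 3) have both negative (stable nodes).
```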
c++, linux, io
void
GPIO::SeekToTopOfDirectionFd() {
if (lseek(_direction_fd, 0, SEEK_SET) < 0)
throw "Error seeking to top of GPIO direction.";
}
Source can be found here Prefer the C++ way
As you seem to expect, one of the key issues with your code is that it's not very C++y.
1. Prefer std::string over C-style strings.
std::string is better in almost every way. It takes care of its own memory and grows when needed. The speed is usually comparable to C-style strings (sometimes faster, sometimes slower). The code written using string is often more readable. For example, compare your code:
if (strncmp(data, "in", 2) == 0)
with the std::string variant:
if (data == "in") | {
"domain": "codereview.stackexchange",
"id": 4129,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, linux, io",
"url": null
} |
vba, excel
ActiveCell.Value = DESCRIPCION
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = CODCLR
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = CLR
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = TALLE
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = CANTIDAD
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = CONCATENAR
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = CMVUNIT
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = CMVCANT
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = PVPUNIT | {
"domain": "codereview.stackexchange",
"id": 31899,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vba, excel",
"url": null
} |
regression, xgboost, model-evaluations, outlier, objective-function
Use quartic error, $(y - \hat{y})^4$, instead of quadratic error. This is going to penalize a lot big errors, way more than MSE. The issue is that this is not implemented in xgboost, and you would need to develop a custom loss.
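For the quartic-error idea above, the custom loss needs the first two derivatives of $(\hat{y} - y)^4$ with respect to the prediction: grad $= 4(\hat{y} - y)^3$ and hess $= 12(\hat{y} - y)^2$. A rough sketch (the commented wiring at the end assumes xgboost's Python custom-objective callback, which returns per-sample grad and hess):

```python
import numpy as np

# Quartic loss L = (y_hat - y)^4. A gradient-boosting custom objective
# needs dL/dy_hat and d2L/dy_hat2 for every sample.
def quartic_objective(y_hat, y):
    residual = y_hat - y
    grad = 4 * residual**3
    hess = 12 * residual**2
    return grad, hess

# Sanity-check the gradient against a numerical central difference.
y_hat = np.array([1.0, 2.5, -0.3])
y = np.array([0.5, 2.0, 0.0])
grad, hess = quartic_objective(y_hat, y)

eps = 1e-6
num_grad = (((y_hat + eps) - y)**4 - ((y_hat - eps) - y)**4) / (2 * eps)
assert np.allclose(grad, num_grad, rtol=1e-4)

# With xgboost this would be passed as, roughly,
# xgb.train(params, dtrain, obj=lambda preds, dtrain:
#           quartic_objective(preds, dtrain.get_label()))
```

Since the hessian is always non-negative, the objective stays compatible with xgboost's second-order updates.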
If your target is always positive, you can use the target as training weights. This will give more weight to the outliers. If it is not always positive, you can use the absolute value of the target as weights. If using the target values directly puts too much weight on the outliers, you might want to transform it (e.g. using the log or square root), and if you have samples whose target value is zero, you might want to add some epsilon to all the weights. Note that xgboost can be easily trained using weights.
Try to predict the quantile of the training distribution, then transform your predictions using the training cumulative probability function. | {
"domain": "datascience.stackexchange",
"id": 12060,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "regression, xgboost, model-evaluations, outlier, objective-function",
"url": null
} |
organic-chemistry, nomenclature, hydrocarbons
What if you just reverse the numbering? By reversal I mean, the carbon assigned 1st number should be given the last number, am I violating any rule? If I do that, by numbering the compound in reverse order, we get least number for the methyl chain substituent, so is it necessary that methyl group substituted in the side chain should get the highest possible number?
In short, I mean, is the below given numbering valid or not? For your first structure, I think your numbering is correct. Your choice is between including the 6-alkene or 6-alkyne in the main chain. The rule is "tie goes to the alkene". This link may provide useful examples starting around p. 9. Also see page 5, item 7, "middle" example in the above link for another relevant example. Note how in this example (6-ethyl-6-methylocta-trans-2-cis-4,7-triene) they do not start numbering from the side with the terminal double bond, but rather from the side with the lowest overall double bond numbers. | {
"domain": "chemistry.stackexchange",
"id": 1871,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, nomenclature, hydrocarbons",
"url": null
} |
zoology, brain, growth, allometry
Title: There are some species that grow throughout their lives -- is there evidence that even in adulthood their brains increase in size? One species I have read about is carp, whose growth is limited only by food supply and space. One can actually, because their skin is almost transparent, see their brains, and it sure does look like larger carp indeed have proportionately large brains. If this is so, are there behavioral consequences, and do such carp recover from brain injury? Your question falls into an area of study known as allometry.
It is generally believed that brain size must increase as body size increases [1,2]. This relationship has been demonstrated in fish [3,4], including carp. This phenomenon is also seen in other (non-mammalian) vertebrates, including the Nile crocodile (Crocodylus niloticus) [5].
This continued growth is correlated with the ability to regenerate after brain injury [2].
References:
Jerison, H. (2012). Evolution of the brain and intelligence. Elsevier. | {
"domain": "biology.stackexchange",
"id": 11756,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "zoology, brain, growth, allometry",
"url": null
} |
c, generics, quick-sort
for(i = 0; i < nitems; i++)
{
j = i;
while(j > 0 && cmp(&carray[j * memSize], &carray[(j - 1) * memSize]) < 0)
Consider
for (i = 1; i < nitems; i++)
{
j = i;
while(j > 0 && cmp(&carray[j * memSize], &carray[(j - 1) * memSize]) < 0)
That saves one iteration of the outer loop without changing behavior. This is because the inner loop won't run when i and j are 0.
Don't subtract and multiply when you can just do one
char *carray = (char *)base;
unsigned int i;
int j;
for(i = 0; i < nitems; i++)
{
j = i;
while(j > 0 && cmp(&carray[j * memSize], &carray[(j - 1) * memSize]) < 0)
{
byteSwap(&carray[j * memSize], &carray[(j - 1) * memSize], memSize);
j--;
}
}
Consider
char *start = base;
char *end = start + nitems * memSize;
char *top; | {
"domain": "codereview.stackexchange",
"id": 21830,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, generics, quick-sort",
"url": null
} |
python
)
NON_ASCII = (
[CharacterInterval('\u0100', '\U0010FFFF')],
lambda char: ord(char) - 0x100,
lambda index: chr(index + 0x100)
)
UNICODE = ([CharacterInterval('\x00', '\U0010FFFF')], ord, chr)
class ByteMap(IndexMap[bytes], metaclass = _HasPrebuiltMembers):
ASCII_LOWERCASE = [ByteInterval(b'a', b'z')]
ASCII_UPPERCASE = [ByteInterval(b'A', b'Z')]
ASCII_LETTERS = ASCII_LOWERCASE + ASCII_UPPERCASE
ASCII_DIGITS = [ByteInterval(b'0', b'9')]
LOWERCASE_HEX_DIGITS = ASCII_DIGITS + [ByteInterval(b'a', b'f')]
UPPERCASE_HEX_DIGITS = ASCII_DIGITS + [ByteInterval(b'A', b'F')]
LOWERCASE_BASE_36 = ASCII_DIGITS + ASCII_LOWERCASE
UPPERCASE_BASE_36 = ASCII_DIGITS + ASCII_UPPERCASE
ASCII = (
[ByteInterval(b'\x00', b'\xFF')],
_ascii_index_from_char_or_byte,
_ascii_byte_from_index | {
"domain": "codereview.stackexchange",
"id": 45270,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
linear-systems, classification, causality
Here is why I think that both of these are incorrect.
Contesting the Answers
Is the system $ h[n] = \delta[n+2] + \delta[n+1] + \delta[n] $ a causal system?
The output of a causal system depends only on past and present inputs. If $ h[n] $ is the output of the system, then I believe that this system is causal (is this true?). However, $ h[n] $ is what we have used to denote the impulse response of a system. Is it not true that a necessary condition for a causal system is that the impulse response $ h[n] = 0 $ for all $ n < 0 $? Here, we can see that $ h[n] = 1 $ for $n \in \{ -1,-2 \} $. If $ h[n] $ is indeed the impulse response, then wouldn't the output be $ y[n] = x[n+2] + x[n+1] + x[n] $? Clearly, such an output depends on future values of the input. Thus, I believe that the system is anti-causal.
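One way to make the non-causality concrete (a quick numerical sketch of my own): apply the system to an impulse at $n = 5$ and observe that the output is nonzero at $n = 3$ and $n = 4$, i.e. before the input arrives.

```python
import numpy as np

# h[n] = delta[n+2] + delta[n+1] + delta[n]: nonzero at n = -2, -1, 0.
h_support = [-2, -1, 0]

def apply_system(x):
    # Convolution y[n] = sum_k h[k] x[n-k] = x[n+2] + x[n+1] + x[n]
    n = len(x)
    y = np.zeros(n)
    for i in range(n):
        for k in h_support:
            if 0 <= i - k < n:
                y[i] += x[i - k]
    return y

x = np.zeros(10)
x[5] = 1.0               # impulse applied at n = 5
y = apply_system(x)
print(np.nonzero(y)[0])  # output is nonzero at n = 3, 4, 5
```

The response starting two samples before the input is exactly what $h[n] \ne 0$ for $n < 0$ predicts.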
Is the system $ y[n] = x[n] + 1 $ a linear system?
A linear system must obey the principle of superposition. Let's consider additivity,
Let
$$ | {
"domain": "dsp.stackexchange",
"id": 11677,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "linear-systems, classification, causality",
"url": null
} |
cosmology, energy-conservation, big-bang, vacuum, dark-energy
In some sense, the exact flat slices may also be viewed as an example of fine-tuning - something would be zero without any good reason. Some advanced cosmological considerations could imply that the spatial curvature is positive (we can swim around the Universe in principle, after some time), or negative (curved like the Lobachevsky plane, a saddle if you wish). | {
"domain": "physics.stackexchange",
"id": 928,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, energy-conservation, big-bang, vacuum, dark-energy",
"url": null
} |
electrostatics, nuclear-physics, binding-energy, strong-force
My picture of fission is that the nuclear force is the centripetal force and the electrostatic is the centrifugal one and when some energy helps the electrostatic force the nucleus reacts with fission.
The words "centripetal" and "centrifugal" have connotations of circular motion, which are not correct here. And binding energies relate to things like beta decay and fusion as well, not just fission.
Does more binding energy between nucleons in different elements as they have more nucleons mean the nuclear force between them is stronger?
It's not really clear what you mean by this or how this relates to your question. In the model described above, the strength of the nuclear binding between any pair of nucleons, if they're within range, is constant, not varying. | {
"domain": "physics.stackexchange",
"id": 67068,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, nuclear-physics, binding-energy, strong-force",
"url": null
} |
photons, experimental-physics, nuclear-physics, history, neutrons
Title: How did Chadwick conclude that the particles are neutrons and not photons? James Chadwick conducted an experiment in which he bombarded beryllium with alpha particles from the natural radioactive decay of polonium. How did he conclude that the radiation was made up of neutrons and not photons? The WP article I linked summarizes that | {
"domain": "physics.stackexchange",
"id": 88117,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "photons, experimental-physics, nuclear-physics, history, neutrons",
"url": null
} |
c#, linq
public string Data { get; set; }
public string DiffType { get; set; } // DiffType = Add/Remove/Diff
public string PreVal { get; set; } // preList value corresponding to Data item
public string PostVal { get; set; } // postList value corresponding to Data item
}
To accomplish this, first I extended IEqualityComparer and wrote a couple of comparers:
public class DataItemComparer : IEqualityComparer<DataItem>
{
public bool Equals(DataItem x, DataItem y)
{
return (string.Equals(x.Data, y.Data) && string.Equals(x.Value, y.Value));
}
public int GetHashCode(DataItem obj)
{
return obj.Data.GetHashCode();
}
}
public class DataItemDataComparer : IEqualityComparer<DataItem>
{
public bool Equals(DataItem x, DataItem y)
{
return string.Equals(x.Data, y.Data);
}
public int GetHashCode(DataItem obj)
{
return obj.Data.GetHashCode();
}
} | {
"domain": "codereview.stackexchange",
"id": 26530,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, linq",
"url": null
} |
$P(shortage)=\frac{12}{5}(\frac{4}{5})^{11}(0.9)^{11}+(\frac{4}{5})^{12}[12(0.9)^{11}(0.1)+(0.9)^{12}]=0.105$
The above was edited as a factor was inadvertently omitted.
8. Originally Posted by arze
2.When a boy fires a rifle at a range the probability that he hits the target is p.
(a) Find the probability that, firing 5 shots, he scores at least 4 hits.
(b) Find the probability that, firing n shots $(n\geq2)$, he scores at least two hits.
For (a), there would be the probability of hitting 4 or 5 times. $p^4(1-p)+p^5=p^4$ which is not the answer. The answer is $5p^4-4p^5$.
(b) answer would be 1 minus the probability of missing all and hitting 1.
$1-p(1-p)^{n-1}-(1-p)^n$. The answer is $1-np(1-p)^{n-1}-(1-p)^n$.
For all of these where have i got wrong?
Thanks for any help!
The only thing you are missing is "counting the number of ways the event can happen".
Q2 (a) There are $\binom{5}{4}$ ways to miss once..
on the 1st, 2nd, 3rd, 4th or 5th shot. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9728307621626614,
"lm_q1q2_score": 0.8249291169343809,
"lm_q2_score": 0.8479677545357568,
"openwebmath_perplexity": 656.9453761150505,
"openwebmath_score": 0.5475988984107971,
"tags": null,
"url": "http://mathhelpforum.com/statistics/125683-questions-help.html"
} |
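The corrected formulas (with the binomial coefficients included, as the reply above explains) can be confirmed by brute-force enumeration of all hit/miss patterns; a quick Python sketch of my own:

```python
from itertools import product

# P(at least k hits in n shots): sum the probability of every hit/miss
# pattern with at least k hits. Counting every pattern, not just one
# ordering, is exactly what the binomial coefficient supplies.
def p_at_least(k, n, p):
    total = 0.0
    for shots in product([0, 1], repeat=n):  # 0 = miss, 1 = hit
        hits = sum(shots)
        if hits >= k:
            total += p**hits * (1 - p)**(n - hits)
    return total

p = 0.3
# (a): at least 4 hits in 5 shots equals 5 p^4 - 4 p^5.
assert abs(p_at_least(4, 5, p) - (5 * p**4 - 4 * p**5)) < 1e-12
# (b) with n = 5: at least 2 hits equals 1 - n p (1-p)^(n-1) - (1-p)^n.
assert abs(p_at_least(2, 5, p) - (1 - 5 * p * (1 - p)**4 - (1 - p)**5)) < 1e-12
```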
particle-physics, neutrinos, beyond-the-standard-model, majorana-fermions
Title: About Majorana neutrinos and chirality If neutrinos are Majorana particles it is possible that neutrinoless double decay happens, with a diagram like this one: | {
"domain": "physics.stackexchange",
"id": 92879,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, neutrinos, beyond-the-standard-model, majorana-fermions",
"url": null
} |
Since none are 0 or 1, I'd just put all 2s until the last one, and then see what you need to make up 15.
$2+2+2+2+7 = 15$
Hence
$a = 2$
$b = 2$
$c = 2$
$d = 2$
$e = 7$
$x_1 = 4$
$x_2 = 4$
$x_3 = 4$
$x_4 = 5$
$x_5 = 15$
Is a solution.
4. ## I don't understand something...
I don't understand why you decided that
2a=x1
2b=x2
2c=x3
2d+1=x4
2e+1=x5
You don't know which of the variables is odd and which is even;
you just know that there are 3 even and 2 odd,
so I think that you can't decide that x4 and x5 are odd by giving them the values 2d+1 and 2e+1.
Am I right or not? Why?
Thanks
5. Originally Posted by tukilala
I don't understand why you decided that
2a=x1
2b=x2
2c=x3
2d+1=x4
2e+1=x5
You don't know which of the variables is odd and which is even;
you just know that there are 3 even and 2 odd,
so I think that you can't decide that x4 and x5 are odd by giving them the values 2d+1 and 2e+1.
Am I right or not? Why?
Thanks
Yes. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.983596970368816,
"lm_q1q2_score": 0.8175097817595551,
"lm_q2_score": 0.8311430457670241,
"openwebmath_perplexity": 625.3271526999227,
"openwebmath_score": 0.6949561834335327,
"tags": null,
"url": "http://mathhelpforum.com/discrete-math/70222-math-question-inclusion-exclusion-principle.html"
} |
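The follow-up question (why may we label x4 and x5 as the odd ones?) comes down to symmetry: fixing which two variables are odd counts each parity pattern once, and any of the $\binom{5}{2}$ choices of odd positions contributes the same number of solutions. A brute-force sketch, assuming non-negative solutions and a total of 32 (which is what the quoted solution $(4,4,4,5,15)$ sums to):

```python
from math import comb

TOTAL = 32  # assumed: the quoted solution 4+4+4+5+15 sums to 32

# Solutions with x1, x2, x3 even and x4, x5 odd: substitute
# x = (2a, 2b, 2c, 2d+1, 2e+1), so a+b+c+d+e = (TOTAL-2)/2; stars and bars.
fixed_positions = comb((TOTAL - 2) // 2 + 4, 4)

# Brute force: all non-negative 5-tuples summing to TOTAL with exactly 2 odd
brute = 0
for x1 in range(TOTAL + 1):
    for x2 in range(TOTAL - x1 + 1):
        for x3 in range(TOTAL - x1 - x2 + 1):
            for x4 in range(TOTAL - x1 - x2 - x3 + 1):
                x5 = TOTAL - x1 - x2 - x3 - x4
                if sum(x % 2 for x in (x1, x2, x3, x4, x5)) == 2:
                    brute += 1

# Any 2 of the 5 positions may be the odd ones, hence the C(5,2) factor
assert brute == comb(5, 2) * fixed_positions
```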
c++, statistics, c++20
The insert() function operates in O(1) time, confirmed by varying the number of iterations of the test program's loop between 100,000 and 100,000,000.
That's the average-case complexity, but not the worst-case complexity. Consider that there is a potential non-\$O(1)\$ operation in insert(), and that's the call to std::ranges::lower_bound(). On a std::list, this is \$O(N)\$. That might happen if all elements in the list so far have the same count, and then another insert happens that would increment the count of one of those existing elements.
You might then think that the amortized complexity of finding the \$K\$ most-frequent elements of a vector is still \$O(N)\$, but I think that might not be the case with a worst-case input. Consider that vector having the values \$1 \dots N/2\$, in that order, repeated twice (possibly reversed the second time). That might end up being \$O(N^2)\$. | {
"domain": "codereview.stackexchange",
"id": 44165,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, statistics, c++20",
"url": null
} |
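The quadratic worst case can be demonstrated with a small cost model in Python (a sketch of the reviewed design, not the actual C++: only the linked-list lower_bound walk is costed, and looking up an element's current count is assumed O(1)):

```python
from bisect import bisect_left, insort

def lower_bound_cost(seq):
    """Count the nodes a std::list lower_bound walk would visit. Model
    (assumed): counts kept in ascending order; a hash map supplies each
    value's current count in O(1); the walk starts from the head."""
    counts = []       # sorted multiset of current counts
    count_of = {}     # value -> current count
    walked = 0
    for v in seq:
        c = count_of.get(v, 0)
        if c:
            counts.pop(bisect_left(counts, c))   # bookkeeping, not costed
        new = c + 1
        walked += bisect_left(counts, new)       # nodes walked to the spot
        insort(counts, new)
        count_of[v] = new
    return walked

m = 300
worst = list(range(m)) + list(range(m))   # values 1..N/2 in order, twice
best = list(range(2 * m))                 # all values distinct
assert lower_bound_cost(best) == 0        # every new count-1 lands at the head
assert lower_bound_cost(worst) > m * (m - 1) // 4   # quadratic in input size
```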
everyday-chemistry, gas-laws
An important property of breathed oxygen is its partial pressure. At normal conditions at sea level, the partial pressure of oxygen is about 0.21 atm. This is compatible with the widely known estimate that the atmosphere is about 78% nitrogen, 21% oxygen, and 1% "other". Partial pressures are added to give total pressure; this is Dalton's Law. As long as you don't use toxic gases, you can replace the nitrogen and "other" with other gases, like helium, as long as you keep the partial pressure of oxygen near 0.21, and breathe the resulting mixtures without adverse effects. | {
"domain": "chemistry.stackexchange",
"id": 6647,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "everyday-chemistry, gas-laws",
"url": null
} |
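Dalton's Law makes the mixture arithmetic trivial; a small sketch (the 4 atm figure is an illustrative assumption, roughly the total pressure at 30 m of seawater):

```python
def o2_fraction(total_pressure_atm, target_po2_atm=0.21):
    """O2 mole fraction needed so the partial pressure equals the target
    (Dalton's Law: partial pressure = fraction * total pressure)."""
    return target_po2_atm / total_pressure_atm

assert abs(o2_fraction(1.0) - 0.21) < 1e-12    # sea level: ordinary air works
assert abs(o2_fraction(4.0) - 0.0525) < 1e-12  # at 4 atm, ~5% O2 suffices
```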
java, interview-questions
List<Bid> actualBidsMade = bidTracker.getCurrentAuctionBoardCopy().get(item1);
// asserting that all bids were processed
assertEquals(totalNbBids, actualBidsMade.size() + invalidBidsCount.get());
// asserting that the accepted bids for the item are all ordered by increasing value
assertEquals(actualBidsMade, sortBidListByValue(actualBidsMade));
// asserting that the last bid is for 9999
Bid lastBidMade = actualBidsMade.get(actualBidsMade.size() - 1);
assertEquals(totalNbBids - 1, lastBidMade.getValue());
}
private void shutDownExecutor(ExecutorService executor) {
try {
executor.awaitTermination(2, TimeUnit.SECONDS);
} catch (InterruptedException e) {
System.err.println("tasks interrupted");
} finally {
executor.shutdownNow();
}
}
private List<Bid> sortBidListByValue(List<Bid> bidList) {
return bidList.stream()
.sorted(Comparator.comparing(Bid::getValue))
.collect(Collectors.toList());
} | {
"domain": "codereview.stackexchange",
"id": 35523,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, interview-questions",
"url": null
} |
c++, design-patterns, c++14, type-safety
template< typename T >
struct make_nop< T, true > {
template< typename ... Args > constexpr static T make( Args && ... args ) noexcept( noexcept(T( std::forward< Args >( args ) ... )) ) {
return T( std::forward< Args >( args ) ... );
};
};
template< typename T >
struct make_nop< T, false > {
template< typename ... Args > constexpr static Nop_impl< T > make( Args && ... args ) noexcept {
return Nop_impl< T >( std::forward< Args >( args ) ... );
};
};
}
template< typename T, bool E = !T_NDEBUG > using Nop_t = decltype( detail::make_nop<T,E>::make() );
// This macro defines a function taking exactly one argument of type Nop_t< T, false >, where T is any type
#define NOP_FUNCTION( func ) template< typename T > static inline constexpr \
::detail::Nop_impl< T > func( const ::detail::Nop_impl<T> & ) noexcept { return ::detail::Nop_impl< T >{}; } | {
"domain": "codereview.stackexchange",
"id": 42735,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, design-patterns, c++14, type-safety",
"url": null
} |
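The debug/release dispatch that the two make_nop specializations implement can be sketched in Python (the names and the DEBUG flag are assumptions, standing in for the compile-time !T_NDEBUG):

```python
DEBUG = True  # stand-in for the compile-time !T_NDEBUG flag

class Nop:
    """No-op stand-in: absorbs any constructor arguments and does nothing."""
    def __init__(self, *args, **kwargs):
        pass

def make(cls, *args, debug=DEBUG):
    """Return a real cls when 'debugging' is on, a Nop otherwise,
    mirroring the two make_nop specializations above."""
    return cls(*args) if debug else Nop(*args)

class Logger:
    def __init__(self, name):
        self.name = name

assert isinstance(make(Logger, "app", debug=True), Logger)
assert isinstance(make(Logger, "app", debug=False), Nop)
```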
php, bash, linux, console, awk
Terminal Funkiness
You clear the terminal.... and you play with terminal colour and boldness, etc.
What if you are on a simple terminal that does not support these features? Your program will either fail, or will look really ugly.
A user operating in a terminal is looking for efficiency and usability. The trates you have added are not efficient, and ruin the usability experience the user wants... (otherwise they would be in a GUI...)
Printing out a huge banner is also not useful... what if the user has a small screen... it's ugly,
randomColour
echo Enter settings, or press enter for defaults:
randomColour e
The above implies that you are expecting user input, but then you don't... odd.
Directory Requirements
You expect your user to be inside a specific directory, and this is not 'friendly'. It means a user cannot add the program to their $PATH and expect it to work:
cd ../www/properties/lib/ | {
"domain": "codereview.stackexchange",
"id": 6222,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, bash, linux, console, awk",
"url": null
} |
homework-and-exercises, electromagnetism, newtonian-mechanics, lightning, estimation
Heat capacity of lava roughly 1 kJ/(kg K); latent heat of fusion of rock 400 kJ/kg (source), and boiling point around 2500 K (2230 C for quartz). I could not find the latent heat of vaporization of rock, but based on other silicon based compounds, I will put it at 8000 kJ/kg (somewhere between the value for iron and silicon).
So taking one kg of moon and vaporizing it takes roughly
8,000 + 2,000 * 1 + 400 ~ 10,000 kJ
Update:
According to this reference the specific energy of granite is 26 kJ/cm3, and the density of granite is about 2.6 g/cm3. That makes my estimate of the power required to vaporize rock surprisingly accurate. | {
"domain": "physics.stackexchange",
"id": 20803,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, electromagnetism, newtonian-mechanics, lightning, estimation",
"url": null
} |
python, beginner, python-3.x, linked-list
Now we just need to ensure the next value is not None, as the index would otherwise be out of range, and to ensure we update self.foot.
def delete_node(self, index):
prev = self._index(index)
if prev.next is None:
raise IndexError(f"index, {index}, is out of range")
if prev.next is self.foot:
self.foot = prev
prev.next = prev.next.next
Getting / Updating
There should be no surprise: we just call self._index and handle the node however is needed.
Pythonisms
We can use the container dunder methods:
get_node_data -> __getitem__
update_node -> __setitem__
delete_node -> __delitem__
A common Python idiom is negative numbers mean 'go from the end'.
We can easily change _index to handle negatives rather than error.
You should rename is_empty_list to __bool__ to allow truthy testing your linked list. | {
"domain": "codereview.stackexchange",
"id": 40692,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-3.x, linked-list",
"url": null
} |
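The dunder mapping suggested above can be sketched with a minimal singly linked list (names hypothetical, not the poster's class):

```python
class _Node:
    def __init__(self, data, next_node=None):
        self.data, self.next = data, next_node

class LinkedList:
    """Minimal sketch showing __getitem__/__setitem__, negative indices,
    and __bool__ in place of is_empty_list."""
    def __init__(self, items=()):
        self.head, self.size = None, 0
        for x in reversed(list(items)):
            self.head = _Node(x, self.head)
            self.size += 1

    def __bool__(self):                 # truthy test replaces is_empty_list
        return self.head is not None

    def _index(self, index):
        if index < 0:                   # the 'go from the end' idiom
            index += self.size
        if not 0 <= index < self.size:
            raise IndexError(f"index, {index}, is out of range")
        node = self.head
        for _ in range(index):
            node = node.next
        return node

    def __getitem__(self, index):
        return self._index(index).data

    def __setitem__(self, index, value):
        self._index(index).data = value

ll = LinkedList([10, 20, 30])
assert ll[0] == 10 and ll[-1] == 30 and bool(ll)
ll[1] = 99
assert ll[1] == 99
```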
beginner, parsing, haskell, functional-programming
combine :: (a -> b -> c) -> Parser a -> Parser b -> Parser c
combine f p q s = case p s of
Right (x, s') -> case q s' of
Right (y, s'') -> Right (f x y, s'')
Left e -> Left e
Left e -> Left e | {
"domain": "codereview.stackexchange",
"id": 40175,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, parsing, haskell, functional-programming",
"url": null
} |
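For comparison, the Either-style threading that combine performs can be rendered in Python, with a parser encoded as a function from input to either ("ok", value, rest) or ("err", message) (an assumed encoding, not the poster's types):

```python
def combine(f, p, q):
    """Run p, then q on the remaining input, and join the results with f;
    the first error short-circuits, mirroring the nested cases above."""
    def parser(s):
        r1 = p(s)
        if r1[0] == "err":
            return r1
        _, x, rest = r1
        r2 = q(rest)
        if r2[0] == "err":
            return r2
        _, y, rest2 = r2
        return ("ok", f(x, y), rest2)
    return parser

def char(c):
    def parser(s):
        if s.startswith(c):
            return ("ok", c, s[1:])
        return ("err", f"expected {c!r}")
    return parser

ab = combine(lambda a, b: a + b, char("a"), char("b"))
assert ab("abc") == ("ok", "ab", "c")
assert ab("xbc")[0] == "err"
```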
molecular-biology, dna, immunity
Distinguishing by sequence and structure is self-explanatory. Molecular modifications could include unmethylated CpG sites, which are more common in bacterial and viral genomes and are recognized by TLR9. [3] Mislocalized DNA means any that is found outside the nucleus, which is a sign of damage or infection. Most of the well-studied nucleic-acid-recognizing PRRs are found outside the nucleus: TLRs are constrained to endosomes and the extracellular environment while RLRs are found in the cytoplasm. The DNA sensors identified so far, like DAI, also seem to localize to the cytoplasm. Extranuclear PRRs do not necessarily need to discriminate between endogenous and exogenous DNA: any that they sense is mislocalized. However, at least one dsDNA-recognizing PRR, IFI16, localizes to the nucleus as well as the cytoplasm, where it can recognize viral dsDNA. [4] | {
"domain": "biology.stackexchange",
"id": 4296,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "molecular-biology, dna, immunity",
"url": null
} |
c#, bitwise, serialization, stream
Any comments welcome. A bit of a late answer, but...
First of all, I have some reservations about the Write() methods. Most of them are in the form Write(T value, T min, T max), where the only purpose of min and max is to validate value. In my opinion this validation does not belong in any way to the BitStream class. Yes, you're using those parameters also to minimize the number of required bits, but IMO you're adding too many responsibilities to this class; think about the Stream design: you decorate each layer with its own responsibility.
In this case I'd imagine a plain BitStream class (even if the name is a little misleading, unless you want to derive from Stream) which has only one responsibility: keeping a bit queue, with only one write method (eventually with an overload to omit the numberOfBits parameter):
Write(long bits, int numberOfBits); | {
"domain": "codereview.stackexchange",
"id": 21901,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, bitwise, serialization, stream",
"url": null
} |
beginner, object-oriented, game, vba, excel
Function HandleShipCollisions() As Boolean
Dim ShipObject As ISpaceObject
Dim IncomingSpaceObject As ISpaceObject
Dim ShipObjectIndex As Long
Dim IncomingSpaceObjectIndex As Long
For ShipObjectIndex = CollectionShips.Count To 1 Step -1
Set ShipObject = CollectionShips.Item(ShipObjectIndex)
For IncomingSpaceObjectIndex = CollectionIncomingSpaceObjects.Count To 1 Step -1
Set IncomingSpaceObject = CollectionIncomingSpaceObjects.Item(IncomingSpaceObjectIndex)
If CheckIfCollided(ShipObject, IncomingSpaceObject) Then
HandleShipCollisions = True
Exit For
End If
Next IncomingSpaceObjectIndex
Next ShipObjectIndex
End Function | {
"domain": "codereview.stackexchange",
"id": 32207,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, object-oriented, game, vba, excel",
"url": null
} |
Interpretation of numerical symbols is something we tend to take for granted, because it has been taught to us for many years. However, if you were to try to communicate a quantity of something to a person ignorant of decimal numerals, that person could still understand the simple thermometer chart! The infinitely divisible vs. discrete and precision comparisons are really flip-sides of the same coin. The fact that digital representation is composed of individual, discrete symbols (decimal digits and abacus beads) necessarily means that it will be able to symbolize quantities in precise steps. On the other hand, an analog representation (such as a slide rule’s length) is not composed of individual steps, but rather a continuous range of motion. The ability for a slide rule to characterize a numerical quantity to infinite resolution is a trade-off for imprecision. If a slide rule is bumped, an error will be introduced into the representation of the number that was “entered” into it. | {
"domain": "allaboutcircuits.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9838471647042428,
"lm_q1q2_score": 0.8043683325242291,
"lm_q2_score": 0.817574478416099,
"openwebmath_perplexity": 551.041924173306,
"openwebmath_score": 0.7829985022544861,
"tags": null,
"url": "https://www.allaboutcircuits.com/textbook/digital/chpt-1/numbers-and-symbols"
} |
# Sum of series
## Homework Statement
Find the sum of the series:
$$S = 1~+~\frac{1 + 2^3}{1 + 2}~+~\frac{1 + 2^3 + 3^3}{1 + 2 + 3}~+~\frac{1 + 2^3 + 3^3 + 4^3}{1 + 2 + 3 + 4}~+~...$$
upto 11 terms
## Homework Equations
Sum of first 'n' natural numbers: $$S = \frac{n(n + 1)}{2}$$
Sum of the squares of the first 'n' natural numbers: $$S = \frac{n(n + 1)(2n + 1)}{6}$$
Sum of the cubes of the first 'n' natural numbers: $$S = \left(\frac{n(n+1)}{2}\right)^2$$
## The Attempt at a Solution
In the series, the $n^{th}$ term is given by:
$$T_n = \frac{1 + 2^3 + 3^3 + ... + n^3}{1 + 2 + 3 + ... + n}$$
$$T_n = \frac{\left(\frac{n(n+1)}{2}\right)^2}{\left(\frac{n(n + 1)}{2}\right)}$$
$$T_n = \frac{1}{2}(n^2 + n)$$
Hence,
$$S_n = \frac{1}{2}\left(\frac{n(n + 1)(2n + 1)}{6} + \frac{n(n + 1)}{2}\right)$$
On substituting n = 11, I get:
$$S_n = 286$$ | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9830850897067342,
"lm_q1q2_score": 0.8541306592748014,
"lm_q2_score": 0.8688267864276108,
"openwebmath_perplexity": 1922.9059068215759,
"openwebmath_score": 0.8423073887825012,
"tags": null,
"url": "https://www.physicsforums.com/threads/sum-of-series.230104/"
} |
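The closed form is easy to verify numerically; a quick sketch using exact rational arithmetic:

```python
from fractions import Fraction

# T_n = (1^3 + ... + n^3) / (1 + ... + n); sum the first 11 terms directly
terms = [
    Fraction(sum(k**3 for k in range(1, n + 1)), sum(range(1, n + 1)))
    for n in range(1, 12)
]
assert sum(terms) == 286

# Each term collapses to n(n+1)/2, matching T_n = (n^2 + n)/2
assert all(t == Fraction(n * (n + 1), 2) for n, t in enumerate(terms, start=1))
```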
newtonian-mechanics, angular-momentum, rotational-kinematics
Let $X_i$, $Y_i$ and $Z_i$ denote the coordinates of the $i$-th atom in an arbitrary Cartesian coordinate frame $O$, and let $x_i$, $y_i$ and $z_i$ denote the respective coordinates in the frame $O_{com}$ translated to the center of mass of the molecule.
The coordinates of the center of mass of the molecule in the $O$ frame are:
\begin{equation}
X_{com} = \frac{1}{M}\sum_{i} m_i X_i \\
Y_{com} = \frac{1}{M}\sum_{i} m_i Y_i \\
Z_{com} = \frac{1}{M}\sum_{i} m_i Z_i
\end{equation}
Transformation from $O$ to $O_{com}$ is simply a translation by $[-X_{com}, -Y_{com}, -Z_{com}]$:
\begin{equation}
x_i = X_i - X_{com} = X_i - \frac{1}{M}\sum_{i} m_i X_i\\
y_i = Y_i - Y_{com} = Y_i - \frac{1}{M}\sum_{i} m_i Y_i\\
z_i = Z_i - Z_{com} = Z_i - \frac{1}{M}\sum_{i} m_i Z_i
\end{equation}
Now, take for example the formula for $I_{xx}$ in the $O_{com}$ frame:
\begin{equation}
I_{xx} = \sum_{i} m_i (y_i^2 + z_i^2)
\end{equation} | {
"domain": "physics.stackexchange",
"id": 4147,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, angular-momentum, rotational-kinematics",
"url": null
} |
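The translation to the center of mass can be checked numerically; a minimal sketch (plain Python, atom data hypothetical):

```python
def inertia_xx(masses, coords):
    """I_xx about the center of mass: sum of m_i (y_i^2 + z_i^2), where the
    lowercase coordinates are measured from the center of mass."""
    M = sum(masses)
    Yc = sum(m * r[1] for m, r in zip(masses, coords)) / M
    Zc = sum(m * r[2] for m, r in zip(masses, coords)) / M
    return sum(m * ((r[1] - Yc) ** 2 + (r[2] - Zc) ** 2)
               for m, r in zip(masses, coords))

# Two unit masses on the y-axis: I_xx = 1 + 1 = 2
assert inertia_xx([1.0, 1.0], [(0, 1, 0), (0, -1, 0)]) == 2.0

# Translating every atom by the same vector leaves I_xx unchanged,
# exactly as the X -> x substitution above implies
shifted = [(5, 4, -2), (5, 2, -2)]
assert abs(inertia_xx([1.0, 1.0], shifted) - 2.0) < 1e-12
```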
java, simulation
int contribution = duration - mapAverageWaitTime.get(degree);
contribution *= contribution;
mapWaitTimeDeviation.put(degree,
mapWaitTimeDeviation.get(degree) +
contribution);
}
}
private void computeStandardDeviations() {
for (AcademicDegree degree : groupCounts.keySet()) {
int sum = mapWaitTimeDeviation.get(degree);
mapWaitTimeDeviation.put(degree,
(int) Math.round(
Math.sqrt(sum /
groupCounts
.get(degree))));
}
} | {
"domain": "codereview.stackexchange",
"id": 17525,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, simulation",
"url": null
} |
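The two-pass bookkeeping above (accumulate squared differences, then divide and take the square root) can also be done in a single pass with Welford's algorithm; a Python sketch (note that the Java snippet's integer division in sum / groupCounts.get(degree) silently truncates, which floats avoid):

```python
def welford(values):
    """One-pass mean and population standard deviation (Welford's algorithm)."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return (mean, (m2 / n) ** 0.5) if n else (0.0, 0.0)

mean, sd = welford([2, 4, 4, 4, 5, 5, 7, 9])
assert abs(mean - 5.0) < 1e-12 and abs(sd - 2.0) < 1e-12
```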
## Online Course Discussion Forum
### Why can't there be a negative root for a radical function?
Why can't there be a negative root for a radical function?
This topic is referring to Math Challenge II-A Algebra Lecture 7 Radical Equations. Why can't we also assume that there is a negative solution for a square root when the equation has two distinct real roots? e.g. if the square root of x's absolute value is 3, then why can't the equivalent of the square root of x be negative three? My description might be inaccurate since I can't give the best case I can, but it's just that I'm confounded that sometimes we only solve x for a positive solution. Why is it?
Re: Why can't there be a negative root for a radical function?
This is a very good question. We need to be very clear about the meanings of the definition of square roots and the notation we use for them. | {
"domain": "areteem.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9790357628519819,
"lm_q1q2_score": 0.8397856581385202,
"lm_q2_score": 0.8577681122619883,
"openwebmath_perplexity": 389.53478431016197,
"openwebmath_score": 0.8859012722969055,
"tags": null,
"url": "https://classes.areteem.org/mod/forum/discuss.php?d=976"
} |
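The convention the reply is pointing at can be stated concretely: the radical symbol denotes only the principal (non-negative) root, while the equation x² = 9 genuinely has two solutions. A quick illustration:

```python
import math

# The square-root *function* is single-valued: it returns the principal root
assert math.sqrt(9) == 3.0

# The *equation* x**2 = 9 has two real solutions; solving it means keeping both
solutions = sorted(x for x in range(-10, 11) if x * x == 9)
assert solutions == [-3, 3]
```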
beginner, rust
pub fn read_cell(&mut self) {
self.input.read(&mut self.memory[self.pointer..self.pointer+1]).unwrap();
}
pub fn write_cell(&mut self) {
self.output.write(&self.memory[self.pointer..self.pointer+1]).unwrap();
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_write_cell() {
let input = "".as_bytes();
let mut output = vec![];
{
let mut machine = Machine::new(input, &mut output);
machine.memory[machine.pointer] = 1;
machine.write_cell();
}
assert_eq!(vec![1], output);
}
}
There are two main points in the code that are concerning me: | {
"domain": "codereview.stackexchange",
"id": 13561,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, rust",
"url": null
} |
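The same injectable-stream testing pattern reads naturally in Python, for comparison (a sketch, not the poster's Rust):

```python
import io

class Machine:
    """Python analog (hypothetical) of the Rust Machine: a memory tape,
    a data pointer, and injectable input/output streams for testability."""
    def __init__(self, input_stream, output_stream, size=30000):
        self.memory = bytearray(size)
        self.pointer = 0
        self.input = input_stream
        self.output = output_stream

    def read_cell(self):
        b = self.input.read(1)
        if b:                        # EOF leaves the cell unchanged
            self.memory[self.pointer] = b[0]

    def write_cell(self):
        self.output.write(bytes([self.memory[self.pointer]]))

out = io.BytesIO()
machine = Machine(io.BytesIO(b""), out)
machine.memory[machine.pointer] = 1
machine.write_cell()
assert out.getvalue() == b"\x01"
```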
c#, .net, iterator, rubberduck, com
public TWrapperItem Current
{
get { return (TWrapperItem)Activator.CreateInstance(typeof(TWrapperItem), _internal.Current); }
}
object IEnumerator.Current
{
get { return Current; }
}
}
}
Anything else?
For context, here's the SafeComWrapper<T> abstract base class' constructor:
protected SafeComWrapper(T comObject)
{
_comObject = comObject;
}
Here's how the enumerator is used, here in the CodePanes wrapper:
IEnumerator<CodePane> IEnumerable<CodePane>.GetEnumerator()
{
return new ComWrapperEnumerator<Microsoft.Vbe.Interop.CodePanes, CodePane>(ComObject);
}
IEnumerator IEnumerable.GetEnumerator()
{
return new ComWrapperEnumerator<Microsoft.Vbe.Interop.CodePanes, CodePane>(ComObject);
} | {
"domain": "codereview.stackexchange",
"id": 22257,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, .net, iterator, rubberduck, com",
"url": null
} |
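For comparison, the enumerator's job (walk the inner COM collection, wrapping each item on the fly, as the Activator.CreateInstance call does above) can be sketched in Python:

```python
class ComWrapperEnumerator:
    """Iterate an inner collection, wrapping each item in a wrapper type
    as it is yielded (a sketch of the C# enumerator above)."""
    def __init__(self, inner, wrapper_type):
        self._inner = iter(inner)
        self._wrapper_type = wrapper_type

    def __iter__(self):
        return self

    def __next__(self):
        return self._wrapper_type(next(self._inner))

class CodePane:
    def __init__(self, com_object):
        self.com_object = com_object

panes = list(ComWrapperEnumerator(["paneA", "paneB"], CodePane))
assert [p.com_object for p in panes] == ["paneA", "paneB"]
```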
filters, discrete-signals, homework, math, poles-zeros
Thanks for the answer.
EDIT: Here I add another example from a recent exam. I would like it if someone could give a brief explanation of how to properly draw it and what I should know or notice.
The question is the same, but I lack a proper understanding of it. Your sketch is not bad at all. I don't know why your teacher drew a "minimum" (do you mean a zero?) at $\pi/4$, because there shouldn't be any. The zeros are clearly at $\pm\pi/2$ and at $\pi$, and the main peak should indeed be close to $\pi/4$. By the way, I'm missing a pole. There are $4$ zeros and only $3$ poles. The fourth one should probably be at the origin. Maybe your teacher didn't give this exercise as much thought as he should have ...
EDIT: | {
"domain": "dsp.stackexchange",
"id": 5979,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, discrete-signals, homework, math, poles-zeros",
"url": null
} |
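The qualitative sketch can be checked numerically by evaluating $|H(e^{j\omega})|$ directly from the pole/zero geometry. A Python sketch (the 0.9 pole radius, the pole at the origin, and using three of the four zeros are my assumptions, following the answer's guesses):

```python
import cmath, math

zeros = [cmath.exp(1j * math.pi / 2), cmath.exp(-1j * math.pi / 2), -1.0]
poles = [0.9 * cmath.exp(1j * math.pi / 4), 0.9 * cmath.exp(-1j * math.pi / 4), 0.0]

def mag(w):
    """|H(e^{jw})| up to a gain constant: product of distances to the zeros
    divided by the product of distances to the poles."""
    z = cmath.exp(1j * w)
    num = math.prod(abs(z - q) for q in zeros)
    den = math.prod(abs(z - p) for p in poles)
    return num / den

# Unit-circle zeros force the response to vanish at +-pi/2 and pi
assert mag(math.pi / 2) < 1e-9 and mag(math.pi) < 1e-9
# The poles near pi/4 lift the response there well above the high frequencies
assert mag(math.pi / 4) > 10 * mag(2.5)
```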
python, parsing, xml
if ((i+9) % 10 == 0):
time.append(processtimestring(line[:-1]))
elif ((i+7) % 10 == 0):
lat.append(float(line[:-1]))
elif ((i+5) % 10 == 0):
lng.append(float(line[:-1]))
elif ((i+3) % 10 == 0):
alt.append(float(line[:-1]))
elif ((i+1) % 10 == 0):
dist.append(float(line[:-1]))
return [time, lat, lng, alt, dist] | {
"domain": "codereview.stackexchange",
"id": 25335,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, parsing, xml",
"url": null
} |
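The arithmetic on i can be replaced by reading the file in fixed-size records and unpacking by offset; a sketch (the 10-line record layout and the field offsets are inferred from the modulo tests, so treat them as assumptions):

```python
def parse_records(lines, record_size=10):
    """Collect (time, lat, lng, alt, dist) from every fixed-size record.
    Field offsets within a record are assumed to be 1, 3, 5, 7, 9."""
    out = {"time": [], "lat": [], "lng": [], "alt": [], "dist": []}
    for start in range(0, len(lines) - record_size + 1, record_size):
        rec = [line.rstrip("\n") for line in lines[start:start + record_size]]
        out["time"].append(rec[1])
        out["lat"].append(float(rec[3]))
        out["lng"].append(float(rec[5]))
        out["alt"].append(float(rec[7]))
        out["dist"].append(float(rec[9]))
    return out

sample = ["x\n", "12:00\n", "x\n", "51.5\n", "x\n", "-0.1\n",
          "x\n", "35.0\n", "x\n", "4.2\n"]
parsed = parse_records(sample)
assert parsed["time"] == ["12:00"] and parsed["lat"] == [51.5]
assert parsed["dist"] == [4.2]
```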
electromagnetism, magnetic-fields
In regions of space where the magnetic field has zero curl — that is, in regions without free currents, bound currents, or time-varying electric fields — it's possible to construct a magnetic scalar potential whose gradient is the strength and direction of the magnetic field. Surfaces (not lines) where this potential has constant value are called "equipotential surfaces" and do form a level set in the way that you ask about. Likewise in the absence of time-varying magnetic fields you can use an "electric potential" to describe the electric field; the electric field is the gradient of the potential and points perpendicular to any equipotential surface. | {
"domain": "physics.stackexchange",
"id": 21037,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, magnetic-fields",
"url": null
} |
atoms, orbitals
Is there a relationship between an electron's energy and its distance from the nucleus? | {
"domain": "chemistry.stackexchange",
"id": 4202,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "atoms, orbitals",
"url": null
} |
c#, wpf, mvvm
ColorList_Tab View:
<UserControl x:Class="MVVM_Color_Utilities.ColorsList_Tab.ColorListView"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:local="clr-namespace:MVVM_Color_Utilities.ColorsList_Tab"
xmlns:materialDesign="http://materialdesigninxaml.net/winfx/xaml/themes"
mc:Ignorable="d"
>
<UserControl.InputBindings>
<KeyBinding Key="Enter" Command="{Binding ExecuteCommand}"/>
<KeyBinding Modifiers="Ctrl" Key="D" Command="{Binding DeleteItem}"/>
<KeyBinding Modifiers="Ctrl" Key="A" Command="{Binding AddSwitchCommand}"/>
<KeyBinding Modifiers="Ctrl" Key="E" Command="{Binding EditSwitchCommand}"/> | {
"domain": "codereview.stackexchange",
"id": 35875,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, wpf, mvvm",
"url": null
} |
ros, navigation, odometry, base-odometry
Originally posted by Mike Scheutzow with karma: 4903 on 2021-07-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Yehor on 2021-07-21:
@Mike Scheutzow so If robot_localization publish odom->base_footprint transformation, I shouldn't publish odom->base_footprint from driver?
Am I correct?
Comment by Mike Scheutzow on 2021-07-21:
Correct. Some ros systems have multiple sources of odom->base_footprint, and in that case they must all be combined into a "best guess" before one node publishes the value.
Comment by Yehor on 2021-07-21:
@Mike Scheutzow Thank you!
"domain": "robotics.stackexchange",
"id": 36731,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, navigation, odometry, base-odometry",
"url": null
} |
programming-challenge, haskell, graph
n<2 is subsumed in n<x, the chain length is monotonic in the set size, and do notation feels like it might help with include.
main :: IO ()
main = print $ maximumOn (\x -> include x Set.empty x) [2..lim] where
-- set keeps track of previously seen numbers
include x set n = do
guard $ n <= lim && n >= x
if Set.member n set
then guard (x == n) >> Just (Set.size set)
else include x (Set.insert n set) (sumOfProperDivisors n)
I have a hunch there's a way to decompose include completely, but I don't see it. | {
"domain": "codereview.stackexchange",
"id": 25984,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming-challenge, haskell, graph",
"url": null
} |
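For readers less fluent in Haskell, the same include logic can be rendered in Python (with sumOfProperDivisors re-implemented here by trial division):

```python
def sum_of_proper_divisors(n):
    """Trial-division stand-in for the sumOfProperDivisors used above."""
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def chain_length(x, lim):
    """Length of the chain starting and ending at x; None if the chain
    escapes [x, lim] or closes on a later element, like `include`."""
    seen = []
    n = x
    while True:
        if n > lim or n < x:
            return None
        if n in seen:
            return len(seen) if n == x else None
        seen.append(n)
        n = sum_of_proper_divisors(n)

assert chain_length(28, 1000) == 1      # perfect number: 28 -> 28
assert chain_length(220, 1000) == 2     # amicable pair: 220 -> 284 -> 220
```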
performance, beginner, lisp, scheme, sicp
Title: Replacing elements from a list and its sublists
Write a procedure substitute that takes three arguments: a list, an old word, and a new word. It should return a copy of the list, but with every occurrence of the old word replaced by the new word, even in sublists. For example:
> (substitute ’((lead guitar) (bass guitar) (rhythm guitar) drums) ’guitar ’axe)
((lead axe) (bass axe) (rhythm axe) drums)
source
The argument "l" is the list to be processed; I just don't know what name is appropriate, because the word "list" is already a procedure. subl handles the items that are themselves lists.
Please review my code.
(define (substitute l old new)
(if (null? l) '()
(let ((head (car l))
(tail (cdr l))
(subl (lambda (x) (substitute x old new))))
(cond ((list? head) (cons (subl head) (subl tail)))
((equal? head old) (cons new (subl tail)))
(else (cons head (subl tail))))))) | {
"domain": "codereview.stackexchange",
"id": 17623,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, beginner, lisp, scheme, sicp",
"url": null
} |
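For comparison, the same recursion reads naturally in Python (with isinstance standing in for list?):

```python
def substitute(lst, old, new):
    """Replace every occurrence of `old` with `new`, recursing into
    sublists, mirroring the Scheme procedure above."""
    result = []
    for item in lst:
        if isinstance(item, list):
            result.append(substitute(item, old, new))
        elif item == old:
            result.append(new)
        else:
            result.append(item)
    return result

band = [["lead", "guitar"], ["bass", "guitar"], ["rhythm", "guitar"], "drums"]
assert substitute(band, "guitar", "axe") == [
    ["lead", "axe"], ["bass", "axe"], ["rhythm", "axe"], "drums"]
```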
parallel-computing
Title: Running algorithms in parallel I'm doing a course in Computational Number Theory and am currently looking at algorithms such as Euclid's algorithm, Square-roots modulo a prime, LLL reduction algorithm, McKee's method, Pollard's $p-1$ method, the elliptic curve method and a couple of others.
In an assignment, I'm asked the following question: Which of the algorithms can be run easily using several computers in parallel?
Here are my questions which I hope can shed some light on my above question: | {
"domain": "cs.stackexchange",
"id": 2905,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "parallel-computing",
"url": null
} |
quantum-chemistry, computational-chemistry, ab-initio
This is the difficult case, because the orbital degeneracy means for a proper qualitative description of the molecular orbitals, two determinants must be used, requiring a CAS(2,2) (at a minimum). Because this valence electron degeneracy isn't possible in a single determinant picture, the degenerate MOs will "split" into one lower in energy, the other higher. Requesting a singlet spin multiplicity means the two electrons will now occupy the same orbital; there's no reason for them to occupy different orbitals, at least for the initial guess. Hopefully it's (visually) clear that if the spatial components of $\alpha$ and $\beta$ spins are the same in this case, our guess is equivalent to the RHF guess. This means that for an open-shell singlet, one must break spatial symmetry in the initial guess, just as Szabo and Ostlund states. Since the RHF solution always exists as a local minimum on the "determinental" potential energy surface, whatever you choose as your SCF converger is most | {
"domain": "chemistry.stackexchange",
"id": 4707,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-chemistry, computational-chemistry, ab-initio",
"url": null
} |
c#, performance, linq, entity-framework, asp.net-mvc
Filter SubProjects
private IEnumerable<SubProjectDto> FilterSubProjects(IEnumerable<Guid> projectIds)
{
var sp = UnitOfWork.SubProjectRepository.GetAllIncluding(x => x.SubProject1,
x => x.SubProjectPersons,
x => x.SubProjectPersons.Select(y => y.SubProjectPersonRoles),
x => x.SubProjectTeams)
.Where(y => y.ParentSubProjectID == null && projectIds.Contains(y.ProjectID)).OrderBy(z => z.SortOrder).ToList();
return sp.ToSubProjectDtoEnumerable();
} | {
"domain": "codereview.stackexchange",
"id": 35983,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, linq, entity-framework, asp.net-mvc",
"url": null
} |
newtonian-mechanics, mass
EDIT
Alright, so I guess the issue is in the original impulse equation. If there are forces acting on the mass outside of the control volume, it is not correct, since the LHS does not account for the momentum of this mass. I suppose then that we can only use the above equation for variable-mass systems if there is no force on particles which have left the control volume. Let us consider a scenario where particles are only leaving the control volume. The vector $\mathbf F(t)$ in your derivation is the sum of all external forces acting on the constant set of particles relevant for the interval $[t, t+\Delta t]$: the particles that remain inside plus the particles that leave during this time interval. External forces acting on other particles that left earlier than those are not included in $\mathbf F$. | {
"domain": "physics.stackexchange",
"id": 53301,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, mass",
"url": null
} |
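To make this bookkeeping concrete, here is a minimal sketch of the momentum balance (my addition; it assumes the leaving mass $\Delta m > 0$ departs with ground-frame velocity $\mathbf u$):

```latex
\mathbf F\,\Delta t
  = \underbrace{(m-\Delta m)(\mathbf v+\Delta\mathbf v) + \Delta m\,\mathbf u}_{\text{same particles at } t+\Delta t}
  \;-\; \underbrace{m\,\mathbf v}_{\text{at } t}
\quad\Longrightarrow\quad
m\,\frac{d\mathbf v}{dt} = \mathbf F + (\mathbf u-\mathbf v)\,\frac{dm}{dt}
```

Here $dm/dt = -\Delta m/\Delta t < 0$ is the rate of change of the mass inside the control volume, and the second-order term $\Delta m\,\Delta\mathbf v$ has been dropped. Note that $\mathbf F$ acts on the full particle set of the interval, exactly as stated above.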
fft, amplitude, tone-generation
and this is the result:
The amplitude of my sine is 0.3, so I expect the FFT to show an amplitude of 0.15 at 1 kHz, while I get 0.096 instead. My goal is to extract the amplitude of a tone among different frequencies. Your signal frequency is 1000 Hz while your sampling frequency is 160 kHz. Since there are 160 samples per period, the number of points for the FFT should be a multiple of 160. In this case, I used 1600 points.
Amp = 0.3;
NbPoints = 1600;
freqHz = 1000;
fsHz = 160000;
dt = 1/fsHz;
t = 0:dt:(NbPoints-1)*dt;
sine = Amp * sin(2*pi*freqHz*t);
n = 1 : NbPoints;
subplot(2,2,1)
plot(t,sine)
y1 = sine;
Fs = 160000; % samples per second
N = length(y1); % samples
dF = Fs/N; % hertz per sample
f = -Fs/2:dF:Fs/2-dF + (dF/2)*mod(N,2); % hertz
Y1_fft = fftshift(fft(y1))/N;
subplot(2,2,2)
plot(f , (abs(Y1_fft)) - min((abs(Y1_fft))));
legend(num2str(1))
shg; | {
"domain": "dsp.stackexchange",
"id": 10162,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, amplitude, tone-generation",
"url": null
} |
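The bin-alignment point is easy to verify numerically; here is a small NumPy sketch (Python rather than MATLAB, with the same assumed parameters): when the FFT length is a multiple of the 160 samples per period, the tone falls exactly on an FFT bin and the two-sided, N-normalized spectrum peaks at A/2 = 0.15.

```python
import numpy as np

# Same parameters as the MATLAB snippet above
A, f0, fs, N = 0.3, 1000, 160000, 1600   # N = 1600 is a multiple of fs/f0 = 160
t = np.arange(N) / fs
x = A * np.sin(2 * np.pi * f0 * t)

# Two-sided spectrum, normalized by N, zero frequency centered
X = np.fft.fftshift(np.fft.fft(x)) / N
f = np.fft.fftshift(np.fft.fftfreq(N, d=1/fs))

peak = np.abs(X).max()
print(peak)   # amplitude 0.15 at the +/-1000 Hz bins
```

With a length that is not a multiple of 160, the energy leaks into neighboring bins and the peak drops below 0.15, which is exactly the 0.096 reading described above.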
digital-communications, bpsk
Title: What is the maximum data rate a wireless link can support using BPSK for a given BER and SNR? What is the maximum data rate a wireless link can support using BPSK for a BER of 10^-6, a channel bandwidth of 200 kHz, and an SNR of 21.335 dB?
As per my understanding, Eb/No can be calculated for BPSK at a given BER using BER curves, and the value found is 11.29 dB. After that, we can use the relation
Eb/No = S/N * W/R | {
"domain": "dsp.stackexchange",
"id": 2174,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "digital-communications, bpsk",
"url": null
} |
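Continuing the relation above, solving Eb/No = (S/N) * (W/R) for R is a two-line computation (a sketch with the poster's numbers; the 11.29 dB figure is read off a BPSK BER curve at 10^-6, and whether 200 kHz of bandwidth can physically carry this rate with BPSK is a separate question):

```python
# Rearranging Eb/No = (S/N) * (W/R)  =>  R = W * (S/N) / (Eb/No)
snr_db, ebn0_db, W_hz = 21.335, 11.29, 200e3

snr = 10 ** (snr_db / 10)      # ~136 (linear)
ebn0 = 10 ** (ebn0_db / 10)    # ~13.5 (linear)
R = W_hz * snr / ebn0
print(R)                       # ~2.02e6 bit/s
```

In dB this is simply R = W * 10^((21.335 - 11.29)/10), i.e. about 10 dB of margin over the required Eb/No.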
The root locus crosses the imaginary axis at the frequency $\pm j1.7989$ at the gain $K=\sqrt{80}$.
The second approach is to consider $$1+ \frac{K(s+1)}{s^4+4s^3 +6s^2 + 4s} = 0$$ Let $s=j\omega$ and simplify the above expression, hence: $$(\omega^4-6\omega^2+K) + j (K\omega + 4\omega - 4 \omega^3) = 0$$ The left side is a single complex number, and for this complex number to equal zero we need \begin{align} (\omega^4-6\omega^2+K) &= 0 \implies K = 6\omega^2-\omega^4\\ ((6\omega^2-\omega^4)\omega + 4\omega - 4 \omega^3) &= 0 \implies -\omega^5 + 2 \omega^3 + 4\omega = 0 \\ \omega_{1,2,3,4,5} &= 0,\pm 1.7989,\pm j1.1118 \\ \end{align} Discarding the zero root and $\pm j1.1118$, we end up with the frequency $\pm 1.7989$ at which the root locus intersects the imaginary axis. We can compute the gain $K$ as well, hence:
\begin{align} K &= 6\omega^2-\omega^4\\ &= 6(1.7989)^2 - (1.7989)^4 \\ &= 8.9443 \end{align} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9678992960608886,
"lm_q1q2_score": 0.8392032982466755,
"lm_q2_score": 0.8670357563664174,
"openwebmath_perplexity": 508.96540794921304,
"openwebmath_score": 0.9985009431838989,
"tags": null,
"url": "https://electronics.stackexchange.com/questions/413208/root-locus-and-imaginary-axis"
} |
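The derivation can be cross-checked numerically; this NumPy sketch (not part of the original answer) finds the roots of the imaginary-part polynomial and recovers the gain, which is exactly K = 4*sqrt(5) = sqrt(80):

```python
import numpy as np

# Imaginary part: -w^5 + 2w^3 + 4w = 0, coefficients highest power first
roots = np.roots([-1, 0, 2, 0, 4, 0])

# Keep the real, positive crossing frequency (discard 0 and the +/-j1.1118 pair)
w = max(r.real for r in roots if abs(r.imag) < 1e-6 and r.real > 0)

# Real part gives the gain at the crossing: K = 6w^2 - w^4
K = 6 * w**2 - w**4
print(w, K)   # ~1.7989, ~8.9443 (= sqrt(80))
```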
thermodynamics
http://en.wikipedia.org/wiki/Vapour_pressure_of_water
http://en.wikipedia.org/wiki/Goff%E2%80%93Gratch_equation.
equation from my book:
$$p = 611 \exp\left( \frac{a \theta}{b + \theta} \right)$$
where $\theta$ is temperature in $^\circ$C, $a = 22.44, b=272.44$ for negative temperatures and $a = 17.08, b=234.18$ for positive temperatures.
For specific enthalpy lines (enthalpy per mass of dry air), you can calculate them using the data for the specific latent heat of water at $0^\circ$C and the heat capacities of vapor and dry air.
$$h = (c_{p,a} + x \; c_{p,w}) \theta + x \; l_0$$
where again $\theta$ is temperature in $^\circ$C, $l_0 = 2500$ kJ/kg is the specific latent heat of water at $0^\circ$C, $c_{p,a} = 1.005$ kJ/(kg K) is the heat capacity of dry air at constant pressure, and $c_{p,w} = 1.926$ kJ/(kg K) is the heat capacity of vapor at constant pressure. $x$ is the absolute humidity.
Of course heat capacities are not temperature-independent constants, but the result will be fairly correct. | {
"domain": "physics.stackexchange",
"id": 58034,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics",
"url": null
} |
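As a quick sanity check of both formulas (the function names below are my own), the fit reproduces the tabulated saturation vapour pressure at 20 $^\circ$C (about 2339 Pa) to within a few pascals:

```python
import math

def p_sat(theta_c):
    """Saturation vapour pressure in Pa from the book's fit (theta_c in deg C)."""
    a, b = (17.08, 234.18) if theta_c >= 0 else (22.44, 272.44)
    return 611.0 * math.exp(a * theta_c / (b + theta_c))

def enthalpy(theta_c, x):
    """Specific enthalpy of moist air in kJ per kg of dry air."""
    return (1.005 + x * 1.926) * theta_c + x * 2500.0

print(p_sat(20.0))            # ~2343 Pa (steam tables: ~2339 Pa)
print(enthalpy(20.0, 0.010))  # ~45.5 kJ/kg at 20 degC and x = 10 g/kg
```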
ros, gazebo
Originally posted by jayess with karma: 6155 on 2017-08-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by rmck on 2017-08-16:
There's also a repeated typo that may be affecting it, you've written namesapce rather than namespace
Comment by jayess on 2017-08-16:
@rmck are you referring to me or @balajithevoyager? If you're referring to me I don't see it.
Comment by jayess on 2017-08-16:
@balajithevoyager's use of namesapce seems to be consistent though so that may not be the issue.
Comment by rmck on 2017-08-16:
@jayess, yeah, was referring to the original post.
Comment by balajithevoyager on 2017-08-16:
I have changed /robot_description to /first_pelican/robot_description in the node name of "mybot_spawn", but unfortunately the issue is still present with same message:
gazebo_ros_control plugin is waiting for model URDF in parameter [/robot_description] on the ROS param server. | {
"domain": "robotics.stackexchange",
"id": 28595,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo",
"url": null
} |
c++
float Board::move(const Point& current_loc, const Point& direction,
float prob) {
Point new_loc = current_loc + direction;
// edge cases
if (util::is_in_vector(new_loc, this->obstacles) || !is_inside(new_loc)) {
return prob * best_value[current_loc.x][current_loc.y];
}
if (util::is_in_vector(new_loc, this->end_states)) {
return prob * best_value[new_loc.x][new_loc.y];
}
  // end of edge cases
return prob * this->best_value[new_loc.x][new_loc.y];
} | {
"domain": "codereview.stackexchange",
"id": 35264,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++",
"url": null
} |