text stringlengths 1 1.11k | source dict |
|---|---|
planet, solar-system, orbital-mechanics, 9th-planet, space-probe
2 - Precision of barycenter predictions
There are a number of significant barriers to accurately calculating the barycenter of the solar system, but the most troublesome are the uncertainties regarding the interiors of Jupiter and Saturn: specifically, the behavior of liquid metallic hydrogen at such enormous pressures and (more so for Saturn) our understanding of their gravitational moments (Fortney 2004).
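The underlying calculation is just a mass-weighted average of body positions; a minimal sketch (masses in solar masses and positions in AU are rough placeholder values, not precise ephemeris data):

```python
# Illustrative only: the barycenter is the mass-weighted mean position.
# Masses/positions below are crude placeholders, not ephemeris values.
bodies = {
    "Sun":     (1.0,     (0.0, 0.0)),
    "Jupiter": (9.55e-4, (5.2, 0.0)),
    "Saturn":  (2.86e-4, (0.0, 9.5)),
}

def barycenter(bodies):
    total_mass = sum(m for m, _ in bodies.values())
    x = sum(m * p[0] for m, p in bodies.values()) / total_mass
    y = sum(m * p[1] for m, p in bodies.values()) / total_mass
    return x, y

bx, by = barycenter(bodies)
```

Even with only these three bodies, the barycenter lands of order a solar radius from the Sun's center, which is why small uncertainties in Jupiter's and Saturn's mass distributions matter.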
The issue this raises is that without knowing the centers of mass of Jupiter and Saturn (which together contain about 92% of the planetary mass in the solar system) with sufficient precision, we don't know well enough where the solar-system barycenter should be to determine whether the true barycenter differs enough to indicate the existence of Planet 9. | {
"domain": "astronomy.stackexchange",
"id": 4586,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "planet, solar-system, orbital-mechanics, 9th-planet, space-probe",
"url": null
} |
organic-chemistry, amino-acids
Title: Is selenocysteine C3H6NO2Se or C3H7NO2Se? I'm a beginner in chemistry, and have recently become interested in selenocysteine.
While most websites (Wikipedia, ChemSpider) say it is $\ce{C3H7NO2Se}$, PubChem says it is $\ce{C3H6NO2Se}$.
Which one is correct? Or are both reasonable? Selenocysteine can be either of the following: $\ce{C3H8NO2Se+, C3H7NO2Se, C3H6NO2Se-}$ or $\ce{C3H5NO2Se^2-}$. This is because the neutral form of selenocysteine is both a base and a diprotic acid: the amino group can be protonated and both the carboxylic group and the selenol can be deprotonated.[1] | {
"domain": "chemistry.stackexchange",
"id": 9295,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, amino-acids",
"url": null
} |
c++, beginner, music
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>  // std::find
#include <iterator>   // std::distance
#define astring std::string
#define cvector std::vector
astring rootnote = "";
astring scale = "";
void notation();
int root;
cvector<astring> notes;
cvector<astring> order;
void scaler();
int main()
{
while (rootnote != "null") {
std::cout << "Please enter your root note and scale: " << std::endl;
std::cin >> rootnote >> scale;
std::cout << "\nroot scale: " << rootnote << " " << scale << std::endl;
std::cout << std::endl;
notation();
scaler();
}
return 0;
}
void notation()
{
notes.clear();
cvector<astring> chromatic = { "C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B" };
root = std::distance(chromatic.begin(), std::find(chromatic.begin(), chromatic.end(), rootnote));
for (int i = root; i < chromatic.size(); i++) { notes.push_back(chromatic[i]); }
for (int j = 0; j < root; j++) { notes.push_back(chromatic[j]); }
return;
}
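The two loops in notation() implement a rotation of the chromatic scale so that it starts at the root note. The same logic as a Python sketch:

```python
chromatic = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def rotate_to_root(root_note):
    # Equivalent of the two loops in notation(): take the tail starting
    # at the root, then append the notes that preceded it.
    i = chromatic.index(root_note)   # raises ValueError if absent
    return chromatic[i:] + chromatic[:i]

notes = rotate_to_root("G")
```

Note that unlike `std::find` (which returns `end()` and lets the C++ version silently fall through when the root isn't in the list), `index` raises on an unknown root, which is usually the behavior you want here.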
void scaler() {
order.clear(); | {
"domain": "codereview.stackexchange",
"id": 32085,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, music",
"url": null
} |
php, design-patterns, collections
The separation itself is good.
An exception is meant to be caught "whenever possible", and that is not possible in WorldCollection, since it doesn't know how to handle it. You need to decide what you want your application to do when the data does not validate.
If you want WorldCollection::offsetSet to not do anything when the validation fails, then WorldValidator::validate should simply return true or false, and no exception should be thrown.
If the exception is to be caught later on in the code, then don't do anything in WorldCollection::offsetSet, since other code will take care of the issue.
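The two strategies can be contrasted in a short sketch (Python for brevity; all class and function names here are illustrative, not the original PHP API):

```python
class ValidationError(Exception):
    pass

def validate_bool(value):
    # Strategy 1: report failure via the return value; the caller decides.
    return isinstance(value, str) and value != ""

def validate_throw(value):
    # Strategy 2: always raise on failure; some caller further up catches.
    if not (isinstance(value, str) and value != ""):
        raise ValidationError(f"invalid value: {value!r}")

class Collection:
    # offsetSet-style insert that silently skips invalid data (strategy 1).
    def __init__(self):
        self.items = []

    def add(self, value):
        if validate_bool(value):
            self.items.append(value)

c = Collection()
c.add("world")
c.add("")   # silently ignored under the boolean-return approach
```

With strategy 2, `Collection.add` would contain no check at all, and whichever layer knows how to respond to bad data would wrap the call in a try/except.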
You can easily mix those two approaches in order to catch some of the exceptions at some point, and others later on (which means WorldValidator::validate should always throw an exception and return nothing). | {
"domain": "codereview.stackexchange",
"id": 1434,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, design-patterns, collections",
"url": null
} |
java, console, compression
/**
* Removes the very last bit from this bit string builder.
*/
public void removeLastBit() {
if (size == 0) {
throw new IllegalStateException(
"Removing the last bit from an empty bit string builder.");
}
--size;
}
/**
* Clears the entire bit string builder.
*/
public void clear() {
this.size = 0;
}
/**
* Returns the number of bytes occupied by bits.
*
* @return number of bytes occupied.
*/
public int getNumberOfBytesOccupied() {
return size / Byte.SIZE + ((size % Byte.SIZE == 0) ? 0 : 1);
}
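The byte count here is a ceiling division, which can be written without the modulo test; a quick sketch of the equivalence:

```python
def bytes_occupied(size_bits):
    # (size + 7) // 8 is the usual branch-free form of
    # size/8 + (size % 8 == 0 ? 0 : 1), i.e. ceil(size / 8).
    return (size_bits + 7) // 8
```

Factoring this into one helper would also remove the duplicated formula that currently appears in both getNumberOfBytesOccupied and toByteArray.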
public byte[] toByteArray() {
int numberOfBytes = (size / Byte.SIZE) +
((size % Byte.SIZE == 0) ? 0 : 1);
byte[] byteArray = new byte[numberOfBytes]; | {
"domain": "codereview.stackexchange",
"id": 22994,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, console, compression",
"url": null
} |
c, matrix, opengl
inv[2] = in->m[1] * in->m[6] * in->m[15] -
in->m[1] * in->m[7] * in->m[14] -
in->m[5] * in->m[2] * in->m[15] +
in->m[5] * in->m[3] * in->m[14] +
in->m[13] * in->m[2] * in->m[7] -
in->m[13] * in->m[3] * in->m[6];
inv[6] = -in->m[0] * in->m[6] * in->m[15] +
in->m[0] * in->m[7] * in->m[14] +
in->m[4] * in->m[2] * in->m[15] -
in->m[4] * in->m[3] * in->m[14] -
in->m[12] * in->m[2] * in->m[7] +
in->m[12] * in->m[3] * in->m[6];
inv[10] = in->m[0] * in->m[5] * in->m[15] -
in->m[0] * in->m[7] * in->m[13] -
in->m[4] * in->m[1] * in->m[15] +
in->m[4] * in->m[3] * in->m[13] +
in->m[12] * in->m[1] * in->m[7] -
in->m[12] * in->m[3] * in->m[5]; | {
"domain": "codereview.stackexchange",
"id": 14999,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, matrix, opengl",
"url": null
} |
java, game, hangman
fillLettersToGuess();
// Put first and last letter into lettersGuessed set.
lettersGuessed.add(wordToGuess.charAt(0));
lettersGuessed.add(wordToGuess.charAt(wordToGuess.length() - 1)); | {
"domain": "codereview.stackexchange",
"id": 7126,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, game, hangman",
"url": null
} |
c++, performance, algorithm, matrix, boost
static Matrix IDENTITY(int n);
static Matrix CONSTANT(int r, int c, cpp_int n);
bool is_square()
{
return rows_num == cols_num;
}
bool is_identity();
bool is_symmetric();
bool is_skewSymmetric();
bool is_diagonal();
bool is_null();
bool is_constant();
bool is_orthogonal();
bool is_invertible();
bool is_upperTriangular();
bool is_lowerTriangular();
Matrix transpose();
Fraction determinant();
Matrix inverse();
Matrix gaussJordanElimination();
};
#endif // MATRIX_H_INCLUDED
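The interface above pairs exact Fraction arithmetic with Gauss-Jordan elimination. A minimal Python sketch of that combination, using the standard-library `Fraction` (illustrative only, not the author's implementation; no pivoting refinements, and a singular matrix raises `StopIteration`):

```python
from fractions import Fraction

def inverse(m):
    # Gauss-Jordan elimination on the augmented matrix [m | I],
    # with exact rational arithmetic throughout.
    n = len(m)
    aug = [[Fraction(v) for v in row] +
           [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        # Find a nonzero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row, then eliminate the column elsewhere.
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

inv = inverse([[2, 1], [1, 1]])
```

Because every entry stays a `Fraction`, the result is exact, which is the point of carrying a Fraction type instead of doubles in the C++ class.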
Matrix.cpp
#ifndef MATRIX_H_INCLUDED
#define MATRIX_H_INCLUDED
#include <vector>
#include <ostream>
#include <assert.h>
#include "Fraction.h"
#include <boost/multiprecision/cpp_int.hpp>
using boost::multiprecision::cpp_int;
class Matrix
{
private:
int rows_num;
int cols_num;
std::vector <std::vector<Fraction>> data;
public:
Matrix () = default; | {
"domain": "codereview.stackexchange",
"id": 37299,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, algorithm, matrix, boost",
"url": null
} |
correctness-proof, formal-methods
Title: Define a length function $length : A^{*} \to N$ such that $length(l)$ outputs the length of $l$ Consider the following definitions
LIST:
$\overline{nil} \ \ \ \ \ \frac{l}{a \ l}$ $a \in A$
$A^* = \mu \widehat{LIST}, \ A^{\infty} = \nu \widehat{LIST}$
NAT:
$\overline{0} \ \ \ \ \ \frac{x}{s(x)}$
$N = \mu \widehat{NAT}, \ N^{\infty} = \nu \widehat{NAT}$
Given those definitions I have to define a length function $length : A^{*} \to N$ such that $length(l)$ outputs the length of $l$.
I have tried to do this problem using structural induction on a given $l$ but I don't seem to get any relevant result.
In general, how would you approach this kind of problem? Let F denote a “shape”, as your LIST and NAT shapes for example.
Declaring $I := μF$ is tantamount to saying that $I$ is recursively defined by some $α : F(I) ≅ I$
and moreover for any $β : F(K) → K$ there is a unique $f : I → K$ that respects the “F”-structure. | {
"domain": "cs.stackexchange",
"id": 12362,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "correctness-proof, formal-methods",
"url": null
} |
gazebo, urdf, xacro, ros-kinetic
Title: Texturize robot consisting of STL-files
I created robot links with CATIA, exported them as STL, and constructed my own robot with URDF. Now I'm able to move my little mobile robot around the world in Gazebo. Yeah.
Then I mounted a Universal Robots arm on top of my rover, and there were textures on the arm. I looked in the XACRO description and found out they use DAE files with textures to colorize the robot's links.
Is it possible (and if so, with which software) to import STL files, attach textures to them, and export them as DAE as easily as possible?
How would you texturize my existing robot with STL files?
Thanks for your help
Max
Originally posted by vonunwerth on ROS Answers with karma: 66 on 2018-07-14
Post score: 0
There are many programs that can do this. Personally I'd use Blender. It's free and very powerful, with loads of YouTube tutorials.
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-07-14
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 31268,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo, urdf, xacro, ros-kinetic",
"url": null
} |
ros
Title: Reading from an IMU (single axis)
How do I use the single axis accelerometer (one in the single axis IMU) to measure linear velocity (and hence linear distance covered)? | {
"domain": "robotics.stackexchange",
"id": 13525,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
Consider the element $$\frac{1+\sqrt{21}}2$$. It satisfies the equation $$(x-\frac12)^2=\frac{21}4$$, i.e. $$x^2-x-5=0$$.
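That algebraic identity is easy to spot-check numerically with the standard library alone:

```python
import math

# x = (1 + sqrt(21)) / 2 should satisfy x^2 - x - 5 = 0
# up to floating-point rounding.
x = (1 + math.sqrt(21)) / 2
residual = x**2 - x - 5
```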
Hope this helps.
• Thanks a lot this solved the problem!!! – Ralph John Feb 22 at 8:27
• My other Q about this is there a general method on finding $\mathcal O_K$? – Ralph John Feb 22 at 8:29
• In general, no, but if at least you know the algebraic degree you can narrow down the possibilities. – Mr. Brooks Feb 24 at 22:40
• For that other "Q," you probably have to formally ask it as a separate question. Though it is likely that as you type it up, several suggestions come up. – Bill Thomas Feb 28 at 22:35
Yes, there is a general method of finding the ring of integers in biquadratic number fields $$K=\Bbb Q(\sqrt{m},\sqrt{n})$$ over $$\Bbb Q$$. Arturo's answer is very helpful in explaining this and giving further links - see | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9539660923657093,
"lm_q1q2_score": 0.8478457462566176,
"lm_q2_score": 0.888758786126321,
"openwebmath_perplexity": 329.0236931702972,
"openwebmath_score": 0.7714011073112488,
"tags": null,
"url": "https://math.stackexchange.com/questions/3555842/inclusion-in-the-ring-of-integers/3555890"
} |
fasta, text
Note that the first occurrence of a duplicate name isn't changed, the second will become _2, the third _3 etc.
Explanation
perl -pe : print each input line after applying the script given by -e to it.
++$seen{$_}>1 : increment the current value stored in the hash %seen for this line ($_) by 1 and compare it to 1.
s/$/_$seen{$_}/ if ++$seen{$_}>1 and /^>/ : if the current line starts with a > and the value stored in the hash %seen for this line is greater than 1 (if this isn't the first time we see this line), replace the end of the line ($) with a _ and the current value in the hash
Alternatively, here's the same idea in awk:
$ awk '(/^>/ && s[$0]++){$0=$0"_"s[$0]}1;' file.fa
>1_uniqueGeneName
atgc
>1_anotherUniqueGeneName
atgc
>1_duplicateName
atgc
>1_duplicateName_2
atgc
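The same bookkeeping — count each header line seen so far and append `_N` from the second occurrence on — as a Python sketch:

```python
def dedup_headers(lines):
    # Mirrors the perl/awk logic: count each '>' header seen so far
    # and suffix _N to the second and later occurrences.
    seen = {}
    out = []
    for line in lines:
        if line.startswith(">"):
            seen[line] = seen.get(line, 0) + 1
            if seen[line] > 1:
                line = f"{line}_{seen[line]}"
        out.append(line)
    return out

result = dedup_headers([">dup", "atgc", ">dup", "atgc", ">dup", "atgc"])
```

As in the perl/awk versions, the counter is keyed on the original header, so the first occurrence is left untouched and later ones become `_2`, `_3`, and so on.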
To make the changes in the original file (assuming you are using GNU awk which is the default on most Linux versions), use -i inplace:
awk -iinplace '(/^>/ && s[$0]++){$0=$0"_"s[$0]}1;' file.fa | {
"domain": "bioinformatics.stackexchange",
"id": 140,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fasta, text",
"url": null
} |
If we substitute each term in the definition of A by the above, we obtain:
A = 1/1 - 1/2 + 1/2 - 1/3 + 1/3 - 1/4 + … + 1/99 - 1/100
Note that all of the terms (besides 1/1 = 1 and 1/100) cancel; therefore,
A = 1/1 - 1/100 = 99/100
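The telescoping result can be confirmed directly with exact arithmetic:

```python
from fractions import Fraction

# 1/(1*2) + 1/(2*3) + ... + 1/(99*100); telescoping predicts 1 - 1/100.
total = sum(Fraction(1, n * (n + 1)) for n in range(1, 100))
```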
Intern
Joined: 16 Aug 2016
Posts: 7
Re: What is the value of 1/(1*2)+1/(2*3)+...+1/(99*100)? [#permalink]
### Show Tags
05 Dec 2016, 18:10
Although I can follow the explanations made in this thread I would not be able to come up with them on my own. How can we make sure that we identify the right pattern in such a question?
| {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9621075777163566,
"lm_q1q2_score": 0.8409563425072468,
"lm_q2_score": 0.8740772466456689,
"openwebmath_perplexity": 5716.801454999016,
"openwebmath_score": 0.8207427859306335,
"tags": null,
"url": "https://gmatclub.com/forum/what-is-the-value-of-229663.html"
} |
# Why is a projectile a parabola not a semicircle
1. Dec 9, 2015
### Ben Negus
This sounds like a dumb question. I have come to accept projectiles form parabolas but I need someone to explain why they form this shape
2. Dec 9, 2015
### Columbus
Because the downward gravitational acceleration dominates any deceleration the projectile experiences. In the middle of the flight path, the vertical velocity is at its slowest and the horizontal motion appears to dominate, so that section could be mistaken by the naked eye for a circular arc. But there is no center-pointing balance of forces as in circular motion, so the path is parabolic.
3. Dec 10, 2015
### robphy
The parabola is the shape that arises when "the slope of its tangent (which is related to the velocity) changes linearly with time"... [i.e. constant acceleration].
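The constant-acceleration claim above can be made explicit: with constant horizontal velocity and constant vertical acceleration, eliminating time gives a quadratic in $x$ (a sketch, neglecting air resistance):

```latex
x = v_x t, \qquad y = v_y t - \tfrac{1}{2} g t^2
\quad\Longrightarrow\quad
y = \frac{v_y}{v_x}\,x \;-\; \frac{g}{2 v_x^2}\,x^2 ,
```

which is a parabola. A semicircle would require the slope $dy/dx$ to become infinite at its edges, which a finite, constant $v_x$ can never produce.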
4. Dec 10, 2015
### Staff: Mentor | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.971992476960077,
"lm_q1q2_score": 0.8013707857857145,
"lm_q2_score": 0.8244619220634457,
"openwebmath_perplexity": 655.4098338527929,
"openwebmath_score": 0.6573933362960815,
"tags": null,
"url": "https://www.physicsforums.com/threads/why-is-a-projectile-a-parabola-not-a-semicircle.847492/"
} |
java, strings, programming-challenge
\$1 \le T \le 10\$
\$2 \le\$ length of S \$\le 10000\$
Output Format
For each string, print Funny or Not Funny in separate lines.
This passing solution took me about 20 minutes, so that might be a bit long given the difficulty of the problem. I'm open to critiques on my speed too.
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;
public class Solution {
private static boolean isFunny (String s)
{
String rev = reverse(s);
boolean stillEq = true;
for (int i = 2; i < s.length() && stillEq; ++i)
{
int comp = (int)s.charAt(i) - (int)s.charAt(i-1);
int comp2 = (int)rev.charAt(i) - (int)rev.charAt(i-1);
stillEq = Math.abs(comp) == Math.abs(comp2);
}
return stillEq;
} | {
"domain": "codereview.stackexchange",
"id": 35444,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, strings, programming-challenge",
"url": null
} |
A minimum bottleneck spanning tree (MBST) is a spanning tree that seeks to minimize the most expensive edge in the tree. Show that a graph has a unique minimum spanning tree if, for every cut of the graph, there is a unique cheapest edge crossing the cut. Prove or give a counterexample. A spanning tree is a minimum bottleneck spanning tree (or MBST) if the graph does not contain a spanning tree with a smaller bottleneck edge weight. Sum and bottleneck objective functions are considered, and it is shown that in most cases the problem is NP-hard. More specifically, for a tree T over a graph G, we say that e is a bottleneck edge of T if it is an edge with maximal cost. The minimum bottleneck spanning tree in an undirected graph is a tree whose most expensive edge is as cheap as possible. Solution. A bottleneck edge is the highest-weighted edge in a spanning tree. So in my example: when I create any spanning tree, I have to take an edge with w(e)=3. My Algorithms professor gave us an | {
"domain": "com.br",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9755769106559808,
"lm_q1q2_score": 0.8150784460331487,
"lm_q2_score": 0.8354835350552603,
"openwebmath_perplexity": 940.2938443740902,
"openwebmath_score": 0.3927384912967682,
"tags": null,
"url": "http://julioperes.com.br/blooming-flower-cedsm/minimum-bottleneck-spanning-tree-4451a5"
} |
r
"Human Resources & Employee Development team", "Human Resources & Employee Development team",
"Infrastructure & Technology team", "Infrastructure & Technology team",
"Infrastructure & Technology team", "IT Strategy team", "IT Strategy team",
"IT Strategy team", "IT Strategy team", "IT Strategy team", NA,
"Legal team", "Legal team", "Legal team", "Marketing team", "Marketing team",
"Marketing team", "Government Affairs & Regulatory Compliance team",
"Government Affairs & Regulatory Compliance team"), FTrole = c("DR",
"DR", "TL", "DR", "DR", "TL", "DR", "DR", "DR", "DR", "DR", "DR",
"DR", "TL", "TL", "DR", "DR", "TL", "DR", "DR", "DR", "DR", "DR",
"TL", "DR", "DR", "DR", "DR", "DR", "DR", "DR", "DR", "DR", "DR",
"TL", "DR", "TL", "DR", "DR", "TL", "DR", "DR", "DR", "DR", "DR",
"TL", "TL", "DR", "DR", "DR", "DR", NA, "DR", "TL", "DR", "DR",
"DR", "TL", "DR", "DR"), FTnum = c(NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 7L, NA, NA, | {
"domain": "codereview.stackexchange",
"id": 32530,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "r",
"url": null
} |
a <- 79.5
b <- 80.5
z <- 50
sliceOfDat <- subset(data.frame(dat), X1 > a & X1 < b)  # both bounds in one condition; subset's third argument is 'select'
estimatedMean <- mean(sliceOfDat[,c(2)])
estimatedDev <- sd(sliceOfDat[,c(2)])
estimatedPercentile <- pnorm(z, estimatedMean, estimatedDev)
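The same slice-and-estimate idea can be sketched in Python with only the standard library (simulated data; the parameter values are illustrative, loosely based on the bivariate parameters quoted below):

```python
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
mu = (79.6, 52.3)
sd = (16.4, 27.7)   # roughly the square roots of the variances below
rho = 0.282

# Draw correlated bivariate-normal samples via two independent normals.
pairs = []
for _ in range(20000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x = mu[0] + sd[0] * z1
    y = mu[1] + sd[1] * (rho * z1 + math.sqrt(1 - rho**2) * z2)
    pairs.append((x, y))

# Slice on x, then estimate P(Y < 50 | a < X < b) from the slice.
a, b = 79.5, 80.5
ys = [y for x, y in pairs if a < x < b]
p = NormalDist(mean(ys), stdev(ys)).cdf(50)
```

As in the R version, the answer depends on the sampling noise within the slice, which is exactly why the integration-based solution referenced in the edit below is more reliable.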
### Edit - R implementation of solution based on whuber's answer
Here is an implementation of the accepted solution using integrate, compared against my original idea based on sampling. The accepted solution provides the expected output 0.5, whereas my original idea deviated by a significant amount (0.41). Update - see whuber's edit for a better implementation.
# Bivariate distribution parameters
means <- c(79.55920, 52.29355)
variances <- c(268.8986, 770.0212)
rho <- 0.2821711
# Generate sample data for bivariate distribution
n <- 10000 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.98228770337066,
"lm_q1q2_score": 0.8585953166765454,
"lm_q2_score": 0.8740772318846386,
"openwebmath_perplexity": 3720.9475415620254,
"openwebmath_score": 0.7470393776893616,
"tags": null,
"url": "https://stats.stackexchange.com/questions/213456/how-to-calculate-the-total-probability-inside-a-slice-of-a-bivariate-normal-dist"
} |
organic-chemistry, reaction-mechanism, aromatic-compounds
Since each step in the sulfonation of benzene is an equilibrium, sulfonation is a reversible reaction. According to Le Chatelier's principle, adding $\ce{H+}$ ions to the solution shifts the equilibrium to the left in the last step of sulfonation, where the arenium ion is deprotonated; hence addition of $\ce{HCl(aq)}$, accompanied by heating for kinetic reasons, results in the production of 4-bromobenzenesulfonic acid in the given reaction. This process is called protodesulfonation. | {
"domain": "chemistry.stackexchange",
"id": 9857,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, reaction-mechanism, aromatic-compounds",
"url": null
} |
java, genetic-algorithm, bitset
public void mutateGenePool() {
int totalGeneCount = genePoolSize * chromosomeLength;
System.out.println("Mutating genes:");
for (int i = 0; i < totalGeneCount; i++) {
double prob = Math.random();
if (prob < mutationRate) {
System.out.printf("Chromosome#: %d\tGene#: %d\n", i / chromosomeLength, i % chromosomeLength);
genePool.get(i / chromosomeLength).getGeneAt(i % chromosomeLength).mutate();
}
}
System.out.println("");
}
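The flat-index trick in mutateGenePool — one loop over pool_size × chromosome_length, recovering the chromosome and gene with division and modulo — looks like this in Python (a sketch with 0/1 genes, where mutation flips the bit):

```python
import random

def mutate_pool(pool, mutation_rate, rng):
    # pool is a list of chromosomes, each a list of 0/1 genes.
    # Walk a single flat index and recover (chromosome, gene)
    # with // and %, mirroring the Java version's i / len and i % len.
    length = len(pool[0])
    for i in range(len(pool) * length):
        if rng.random() < mutation_rate:
            pool[i // length][i % length] ^= 1   # flip the bit

pool = [[0] * 8 for _ in range(4)]
mutate_pool(pool, 1.0, random.Random(42))   # rate 1.0: every gene flips
```

Each gene gets its own independent random draw, so the expected number of mutations is mutation_rate × total gene count, just as in the Java loop.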
public int getLeastFitIndex() {
int index = 0;
int min = genePool.get(index).value();
int currentValue;
for (int i = 1; i < genePoolSize; i++) {
currentValue = genePool.get(i).value();
if (currentValue < min) {
index = i;
min = currentValue;
}
}
return index;
} | {
"domain": "codereview.stackexchange",
"id": 19800,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, genetic-algorithm, bitset",
"url": null
} |
It doesn't make sense to me.
Isn't $\sqrt[\frac 1a]{x}$ just $x^{a}$ for every $a\in\mathbb{R}$ (or even $\mathbb{C}$) or am I missing something?
-
If $\alpha\ne 0$, the $\alpha$-th root of a non-negative real certainly exists. One is more likely to use the notation $x^{1/\alpha}$ for it than $\sqrt[\alpha]{x}$. – André Nicolas Mar 19 '14 at 19:25
Is there a way I can show my teacher that she's wrong? Even just citing a theorem based on that – mattecapu Mar 19 '14 at 19:55
Your teacher is undoubtedly aware that $x^\beta$ is defined for positive $x$ and non-zero $\beta$, in particular for $\beta=1/\alpha$. So it would be a question of finding a paper or other publication that used the notation $\sqrt[\alpha]{x}$. I have no immediate example. – André Nicolas Mar 19 '14 at 20:01
Yeah, she knows it. But she denies that $\sqrt[\frac13]{x}=x^3$ while she accepts that $\sqrt[2]{x}=x^{\frac12}$... – mattecapu Mar 20 '14 at 17:08 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9859363750229257,
"lm_q1q2_score": 0.8419036405656215,
"lm_q2_score": 0.8539127492339909,
"openwebmath_perplexity": 499.04684890444605,
"openwebmath_score": 0.9817405939102173,
"tags": null,
"url": "http://math.stackexchange.com/questions/718727/does-nth-roots-with-non-natural-indexes-exists"
} |
c#, .net, multithreading, locking
Title: Loading data async with queue. Proper write-only locking and scheduling I have to store and return data for multithreaded requests. If data is missing, I schedule loading and return null. I'm using some newbie scheduling by saving current jobs in a list. Also, I need data requests to be locked during writing. .NET 4.0
How can I schedule and lock properly?
private ConcurrentDictionary<string, IDataSet> data;
private List<string> loadingNow;
public IDataSet GetData(string dataId)
{
if (loadingNow.Contains(dataId))
return null; // Currently loading. Return null
if (data.ContainsKey(dataId))
return data[dataId]; // Return data
// Schedule loading async. Return null.
loadingNow.Add(dataId);
dataIoAsync.LoadDataAsync(dataParams);
return null;
}
private void DataIoAsync_DataFileLoaded(object sender, DataFileLoadedAsyncEventArgs e)
{
loadingNow.Remove(e.DataId);
data.TryAdd(e.DataId, e.DataSet);
OnDataFileLoaded(e.DataId);
} | {
"domain": "codereview.stackexchange",
"id": 4100,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, .net, multithreading, locking",
"url": null
} |
homework-and-exercises, thermodynamics, everyday-life, estimation, condensation
$$m=\rho V = \rho A_T \epsilon$$
where $\epsilon$ is the thickness of the sheet. I'm getting $m\approx3.758 \times10^{-5}$ kg. The amount of energy released by the condensation of this quantity of water is given by the relation
$$Q=mL_{water}$$
where $L_{water}$ is the specific latent heat of condensation of water, which is $2264.76$ kJ/kg. Then, $Q\approx85.12$ J. In a $330$ mL can, and assuming that the beverage density is close to water's, there's $0.33$ kg of beverage. The ratio of the heat transferred to an object to the resulting increase in its temperature is its heat capacity $mc_p$:
$$Q=mc_p \Delta T$$
where $c_p$ we can also assume close to water's: $4185$ J/(kg⋅K). Then, the resulting increase in the temperature of our beverage is $\Delta T = Q/(mc_p) \approx 0.06$ K. Not a big deal, I'd say. | {
"domain": "physics.stackexchange",
"id": 31630,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, thermodynamics, everyday-life, estimation, condensation",
"url": null
} |
c++, queue
bool empty() const {
return elementsCount == 0;
}
bool full() const {
return elementsCount == MAX_SIZE;
}
int size() const { return elementsCount; }
InsertStatus push(T const& t) {
if (full()) return FailedQueueFull;
array[right] = t;
right = (right+1)%MAX_SIZE;
elementsCount++;
return OK;
}
InsertStatus pop() {
if (empty()) return FailedQueueEmpty;
left = (left+1)%MAX_SIZE;
elementsCount--;
return OK;
}
T top() const { return array[left]; }
void clear() {
left = 0;
right = 0;
elementsCount = 0;
}
~Queue() {
delete[] array;
array = nullptr;
}
};
Advice 1
private:
int MAX_SIZE;
T* array;
int left = 0, right = 0, elementsCount = 0; | {
"domain": "codereview.stackexchange",
"id": 43608,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, queue",
"url": null
} |
ros, frontier-exploration
Title: frontier exploration theory and usage
Hi,
I'm trying to understand how I can use the package frontier_exploration in order to use it for my project. I have studied the paper describing this theory by Yamauchi (1997). I would like to know whether this implementation of frontier exploration is based on the most basic theory (i.e. the greedy approach where the robot chooses its closest frontier) or whether some objective function is being used to decide upon the choice of the frontier to explore.
Thanks.
--EDIT:
When I tried to run the global_map.launch file and provided a polygon boundary + a start point, I got the following result:
[ WARN] [1425309222.899948026]: Please select an initial point for exploration inside the polygon
[ INFO] [1425309234.966230727]: Sending goal
[ INFO] [1425309235.049964691]: Using plugin "static"
[ INFO] [1425309235.207880621]: Requesting the map...
[ INFO] [1425309235.485626067]: Resizing costmap to 4000 X 4000 at 0.050000 m/pix | {
"domain": "robotics.stackexchange",
"id": 20986,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, frontier-exploration",
"url": null
} |
python, performance, numpy, bioinformatics, numba
t0 = time.time()
getAlignmentData(source, sequence, sstart, tresholds)
t1 = time.time()
getAlignmentData(source, sequence, sstart, tresholds)
t2 = time.time()
print(str(t1-t0)+' to first try')
print(str(t2-t1)+' to second try')
When I add the @jit decorator to the two functions, I get slower code. Do I need to do something special, like signatures? Can Numba make this code faster, or do I need to use Cython? After profiling your code as:
import cProfile, pstats, StringIO
pr = cProfile.Profile()
pr.enable()
for it in range(0,10000):
getAlignmentData(source, sequence, sstart, tresholds)
pr.disable()
s = StringIO.StringIO()
sortby = 'cumulative'
ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
ps.print_stats()
print s.getvalue()
You will get:
1370129 function calls (1370122 primitive calls) in 3.249 seconds
Ordered by: cumulative time | {
"domain": "codereview.stackexchange",
"id": 14236,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, numpy, bioinformatics, numba",
"url": null
} |
interpolation
x2 = ifft(y2, N)
x3 = ifft(y3, N)
Also, why does zero-padding at the ends of the spectrum give negative values for the interpolated data but positive values for the original sequence?
x2 =
2.0000 + 0i
2.1303 + 0.0000i
4.0000 - 0.0000i
5.2490 + 0i
5.0000 + 0i
3.6885 - 0.0000i
2.0000 + 0i
0.8029 - 0.0000i
1.0000 + 0.0000i
2.3184 + 0i
3.0000 - 0.0000i
1.9939 + 0.0000i
1.0000 + 0i
2.7335 - 0.0000i
7.0000 - 0.0000i
10.0992 + 0i
9.0000 + 0.0000i
4.9842 + 0.0000i
x3 =
2.0000 + 0i
-2.1303 - 0.0000i
4.0000 - 0.0000i
-5.2490 + 0i
5.0000 + 0i
-3.6885 + 0.0000i
2.0000 + 0i
-0.8029 + 0.0000i
1.0000 + 0.0000i
-2.3184 + 0i
3.0000 - 0.0000i
-1.9939 - 0.0000i
1.0000 + 0i
-2.7335 + 0.0000i
7.0000 - 0.0000i
-10.0992 + 0i
9.0000 + 0.0000i
-4.9842 - 0.0000i
I ... discovered that the original sequence appears in the interpolated data only when (N/M) = 2 | {
"domain": "dsp.stackexchange",
"id": 12406,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "interpolation",
"url": null
} |
quantum-mechanics, wavefunction, measurement-problem, superposition, wavefunction-collapse
Title: Wavefunction Collapse I believe my lecturer and the textbook have contradicted one another. My lecturer gave the example where the spatial part of the wavefunction of a particle is given by
$\psi(x) = c_1\psi_1(x) + c_2\psi_2(x)$
for the infinite square well potential (where $\psi_1$ and $\psi_2$ are the ground and first excited energy eigenfunctions). He stated that if we were to measure an observable, for example the energy of the particle, the wave function will collapse to one of the two energy eigenstates.
Whereas in Griffiths the following is stated: | {
"domain": "physics.stackexchange",
"id": 33790,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, wavefunction, measurement-problem, superposition, wavefunction-collapse",
"url": null
} |
pressure, fluid-statics, buoyancy
If we assume that the gap between the puck and the tube is waterproof, then your derivation is correct. All such pucks will remain in that position.
To think intuitively: imagine you somehow lift the puck by a small height. Since the water above the puck cannot slide through the waterproof gap, it will force the puck back into its initial position. You can't push the puck downwards either (water is almost incompressible), so the puck will remain in its position.
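To put rough numbers on this pressure argument: with a sealed gap there is water pressing on the top face of the puck but none underneath, so the net fluid force is downward. The depth, area, and density below are made-up illustrative values:

```python
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2
P_ATM = 101_325.0    # Pa

h = 0.5    # m of water above the puck (assumed)
A = 0.01   # m^2 face area of the puck (assumed)

p_top = P_ATM + RHO_WATER * G * h   # water column plus atmosphere on the top face
p_bottom = P_ATM                    # only trapped air below the sealed gap
net_down = (p_top - p_bottom) * A   # net fluid force; positive means downward
# net_down = rho*g*h*A, about 49 N pressing the puck down, i.e. no buoyant lift.
```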
If the gap is large enough to allow water to pass, we can assume a thin ring of water around the puck. This thin ring will transmit the hydrostatic pressure ($\mathrm{d}P = \rho g\,\mathrm{d}h$), and thus this situation will be similar to the usual case of buoyancy. | {
"domain": "physics.stackexchange",
"id": 76193,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pressure, fluid-statics, buoyancy",
"url": null
} |
loss-function, optimization
provided an optimum or smaller than optimum learning rate and the training dataset is shuffled
Why
When we take the gradient over the full batch, it points toward the global minimum, so with a controlled LR you will reach it.
With stochastic GD, the individual gradients do not point toward the global minimum; each one points toward the minimum for its own small set of records. The path therefore looks a bit zig-zag, and for the same reason it might miss the exact minimum and bounce around it.
In a theoretical worst case, if the dataset is sorted by class, the updates will move in the direction of one class and then the other, and most likely miss the global minimum.
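The contrast is easy to reproduce on a toy problem: full-batch gradient descent on a one-dimensional least-squares loss walks straight to the minimizer, while single-record SGD with the same learning rate ends up bouncing near it. The data here are synthetic (not from the referenced book):

```python
import random

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(200)]  # toy targets; the optimum is their mean
optimum = sum(data) / len(data)

def full_batch_gd(w, lr, steps):
    """Gradient descent on f(w) = mean((w - x_i)^2): every step points at the minimum."""
    for _ in range(steps):
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

def sgd(w, lr, steps):
    """Single-record SGD: each gradient points at that one record's own minimum."""
    for _ in range(steps):
        x = random.choice(data)      # random (shuffled) sampling of one record
        w -= lr * 2 * (w - x)
    return w

err_gd = abs(full_batch_gd(0.0, 0.1, 200) - optimum)   # essentially zero
err_sgd = abs(sgd(0.0, 0.1, 200) - optimum)            # small, but bounces around
```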
Reference excerpt from Hands-On Machine Learning | {
"domain": "datascience.stackexchange",
"id": 8091,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "loss-function, optimization",
"url": null
} |
everyday-chemistry, home-experiment, decomposition
However, since you are already using stainless steel container, going with iron-based quencher seems like an optimal choice to me.
More historical background and interesting tech details can be found in Wiley's Encyclopedia of Packaging Technology [1, p. 845]: | {
"domain": "chemistry.stackexchange",
"id": 11911,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "everyday-chemistry, home-experiment, decomposition",
"url": null
} |
and
chisq.test(observed, p = prob, correct = F)
Chi-squared test for given probabilities
data: observed
X-squared = 18.2, df = 2, p-value = 0.0001117
These are "omnibus" tests, and post-hoc comparisons are needed. In this regard, this post was useful, and the issue was already indirectly discussed by @Glen_b here. The idea is to compare each one of the proportions to the aggregate of the other two groups with a binomial test, and compensate with the Bonferroni correction.
In this case, since we are doing $3$ comparisons, and with a significance level of $0.05$ the correction is: $0.05/3 = 0.01666667$.
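The same post-hoc procedure can be sketched outside R with an exact one-sided binomial test, $P(X \ge k)$ for $X \sim \mathrm{Bin}(n, 1/3)$. Note the group totals below are hypothetical, since only the first count (26) appears in this excerpt:

```python
from math import comb

def binom_test_greater(k, n, p):
    """Exact one-sided binomial test: P(X >= k) for X ~ Bin(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

alpha_bonferroni = 0.05 / 3   # three comparisons, as in the text

# Hypothetical observed counts (only the 26 is given in this excerpt).
observed = [26, 10, 9]
n = sum(observed)
p_values = [binom_test_greater(k, n, 1 / 3) for k in observed]
significant = [pv < alpha_bonferroni for pv in p_values]
```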
But we really won't need it, because the results are quite obvious after running the first test, which doesn't even come close to being questionable:
binom.test(26, sum(observed), 1/3, alternative="greater")
Exact binomial test | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9763105314577313,
"lm_q1q2_score": 0.8198411103874825,
"lm_q2_score": 0.8397339616560072,
"openwebmath_perplexity": 969.5057596028118,
"openwebmath_score": 0.5782743692398071,
"tags": null,
"url": "https://stats.stackexchange.com/questions/203120/is-chi-squared-test-appropriate"
} |
arrows(0,0,p1[1],p1[2],col="black")
arrows(0,0,p2[1],p2[2],col="red")
abline(0,-1,col="red") # p2 axis (the new y axis)
points(pts,type="l",col="black") # draw original figure
# apply the transformation (mirror over y axis)
stretcher <- matrix(c(-1, 0, 0, 1),2,2)
stretcher.pts <- t(apply(pts,1,function(p) stretcher %*% p))
points(stretcher.pts,type="l",col="green")
# apply the transformation (mirror over p2 axis)
aligner <- matrix(rbind(p1,p2),2,2)
stretcher <- matrix(c(-1, 0, 0, 1),2,2)
hanger <- matrix(cbind(p1,p2),2,2)
product.pts <- t(apply(pts,1,function(p) hanger %*% stretcher %*% aligner %*% p))
points(product.pts,type="l",col="blue")
legend("bottomleft", c("original", "mirror y-axis", "mirror over p2"), col = c("black","green","blue"), lwd=1)
This next example fits an ellipse to sample $n$ points $(x_i,y_i)$, i.e., the ellipse will have the slope of the linear regression, with axes the size of both standard deviations. Notice that we do not use linear regression (minimizing | {
"domain": "ul.pt",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9752018398044143,
"lm_q1q2_score": 0.8018003548766387,
"lm_q2_score": 0.8221891327004132,
"openwebmath_perplexity": 8614.690130796489,
"openwebmath_score": 0.6164302229881287,
"tags": null,
"url": "http://www.di.fc.ul.pt/~jpn/r/svd/svd.html"
} |
homework-and-exercises, thermodynamics, work
Title: Clarify homework results regarding heat, work and the ideal gas law So I have a bunch of homework problems relating to thermodynamics and work done by an ideal gas. I've been having trouble with the isobaric questions, and want to make sure I'm getting the concept down. I went through the following question and got the answers "right", but their answers are significantly different from mine and I'm not sure why (I had to answer part C using Part D because my answer to part C was too high).
The questions are on a website with a pass/fail system and very little feedback if the answer is wrong. If my answer is "close enough" it will accept it then give me their correct answer. In general, I use too many significant digits and let it round, rather than rounding myself, as too many digits isn't graded against me. | {
"domain": "physics.stackexchange",
"id": 24913,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, thermodynamics, work",
"url": null
} |
$$\text{det}(\Phi w_1^{(2)}, w_1^{(2)}, w_1^{(1)}) = \text{det}\underbrace{\begin{pmatrix}-1 & 1 & 1 \\ -1 & 0 & -1 \\ -2 & 0 & 0 \end{pmatrix}}_{:=T^{-1}}\neq 0$$
Finally we have found our basis of $W_1$ and $W_2$ and our basis $T^{-1}$ is exactly $(\Phi w_1^{(2)}, w_1^{(2)}, w_1^{(1)})$ Indeed you can check that
$$TAT^{-1} = \begin{pmatrix}-2&1&0\\0&-2&0\\0&0&-2\end{pmatrix}=J$$
I know it may sound a little confusing, but I have always used this method and it has always worked. If you want to practice more, solve the first exercises of this exercise sheet. You can find the solution here
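The claimed relation $TAT^{-1} = J$ can also be checked mechanically with exact rational arithmetic. The matrix $A$ itself is not shown in this excerpt, so it is reconstructed below as $A = T^{-1}JT$ from the basis given above (an assumption consistent with the stated result):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """3x3 matrix product over exact rationals."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Tinv = [[F(-1), F(1), F(1)],
        [F(-1), F(0), F(-1)],
        [F(-2), F(0), F(0)]]          # columns: Phi*w1^(2), w1^(2), w1^(1)
T = [[F(0), F(0), F(-1, 2)],
     [F(1), F(1), F(-1)],
     [F(0), F(-1), F(1, 2)]]          # inverse of Tinv, computed by hand
J = [[F(-2), F(1), F(0)],
     [F(0), F(-2), F(0)],
     [F(0), F(0), F(-2)]]

I3 = [[F(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(T, Tinv) == I3          # T really is the inverse of Tinv

A = matmul(Tinv, matmul(J, T))        # reconstruct A (not shown in this excerpt)
assert matmul(matmul(T, A), Tinv) == J
```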
• Thank you :) To be honest, What really confused me is that you were using det(A−t1) instead of det(A−λI) and It took me some time to see how you did there in the beginning :P The rest are fine. Thank you again. – user164945 Aug 15 '14 at 7:34
• @user164945 You are welcome, sorry about that, i really didn't thought about it :D – Bman72 Aug 15 '14 at 7:36 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9879462183543601,
"lm_q1q2_score": 0.8100105184958141,
"lm_q2_score": 0.8198933337131076,
"openwebmath_perplexity": 231.59350389810695,
"openwebmath_score": 0.9929654002189636,
"tags": null,
"url": "https://math.stackexchange.com/questions/897943/find-the-jordan-normal-form-j-for-a-and-a-jordan-basis-for-a"
} |
For the way you wrote the Lagrangian equations, a positive lagrange multiplier is a necessary condition for a MAXIMUM, so both of your candidate boundary points satisfy the first-order necessary conditions for a constrained maximum. The second-order *sufficient* conditions for a max are much trickier than you may think, because you have a point on a boundary of an inequality constraint. Basically, you need to project the Hessian of the *Lagrangian* (not the function f!) down to the tangent space of the constraint, and determine if it is negative definite in that subspace.
However, in this case you can bypass all that because you have both possible points, so you can just take the one with the larger f-value.
Last edited: Oct 31, 2015
| {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9799765633889136,
"lm_q1q2_score": 0.8424503243713137,
"lm_q2_score": 0.8596637469145053,
"openwebmath_perplexity": 370.98362497661816,
"openwebmath_score": 0.6260808706283569,
"tags": null,
"url": "https://www.physicsforums.com/threads/finding-maximum-and-minimum-values-of-3-dimensional-function.840688/"
} |
electromagnetism, special-relativity, fluid-dynamics, navier-stokes, magnetohydrodynamics
Weinberg, Gravitation and Cosmology: here you can find the relativistic generalization of Navier-Stokes in the so-called "Eckart frame" (chapter 11).
The problem is that both these "naive" relativistic generalizations of Navier-Stokes do not work: the partial differential equations, despite being written in a covariant fashion, display instabilities and lead to non-causal propagation of signals (they display instabilities both at the computer simulation level, i.e. once discretized, as well as at the exact mathematical level!). This is a theorem based on the analysis of Hiscock and Lindblom (1985). | {
"domain": "physics.stackexchange",
"id": 90901,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, special-relativity, fluid-dynamics, navier-stokes, magnetohydrodynamics",
"url": null
} |
c++, callback
interface.addRequest(REQ_LOGGER, 0, "level", [this](const std::string& ctx, const ReferenceData& pload){
std::cout << "Received on level ctx=" << ctx << "\n";
if (ctx == "set") {
if ((pload.size() == 1)) {
if (auto plevel = std::get_if<std::string_view>(&pload[0])) { // MUST be true with the sGuard.
level = std::atoi(std::string(*plevel).c_str());
}
} else {
return std::make_pair(RequestErrors::TOO_FEW_INPUTS, ConcreteDataContainer{});
}
} else if (ctx != "get") {
return std::make_pair(RequestErrors::NOT_FOUND, ConcreteDataContainer{});
}
return std::make_pair(RequestErrors::OK, ConcreteDataContainer{std::to_string(level)});
}, {&sGuard, &nGuard});
}
};
Service1 svc1;
Logger logger; | {
"domain": "codereview.stackexchange",
"id": 45033,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, callback",
"url": null
} |
oxidation-state, elements, toxicity
Not a hazardous substance or mixture according to Regulation (EC) No 1272/2008.
You can safely conclude that silver will not be dangerous. For lead, the MSDS says:
Reproductive toxicity (Category 1A), H360FD
Effects on or via lactation, H362
Specific target organ toxicity - repeated exposure, Oral (Category 1), Central nervous
system, Blood, Immune system, Kidney, H372
For the full text of the H-Statements mentioned in this Section, see Section 16. | {
"domain": "chemistry.stackexchange",
"id": 16190,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "oxidation-state, elements, toxicity",
"url": null
} |
ros, c++, callback, actionlib, subscribe
Title: Callback of actionlib_msgs::GoalStatusArray fails
Hi,
I am writing a node that subscribes to an actionlib_msgs::GoalStatusArray message. The callback code is:
move_base_status_sub = n.subscribe<actionlib_msgs::GoalStatusArray>("/move_base/status", 50, &rviz_text::move_base_statusCallback, this);
...
void rviz_text::move_base_statusCallback(const actionlib_msgs::GoalStatusArrayConstPtr& msg)
{
actionlib_msgs::GoalStatusArray status_array = *msg;
uint32_t sequence = status_array.header.seq;
// sequence is read successfully
actionlib_msgs::GoalStatus status_list_entry = status_array.status_list[0];
// status_list_entry is read unsuccessfully. The above line causes a seg fault at runtime.
} | {
"domain": "robotics.stackexchange",
"id": 18093,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, c++, callback, actionlib, subscribe",
"url": null
} |
xml, xslt
<!-- Strip out the sheet with RETURN barcode by replacing it with nothing (blank template) -->
<xsl:template match="Page[contains(Fields/Barcode, 'RETURN')]" />
<!-- Check if there is a page containing RETURN in the barcode field.
If yes append 'return' to all barcodes
If no just copy everything -->
<xsl:template match="Barcode">
<xsl:choose>
<xsl:when test="count(../../../Page[Fields/Barcode[contains(text(), 'RETURN')]]) > 0">
<xsl:element name="Barcode">
<xsl:value-of select="concat(ancestor::Page/Fields/Barcode, 'Return')"/>
</xsl:element>
</xsl:when>
<xsl:otherwise>
<xsl:copy>
<xsl:apply-templates select="@* | node()"/>
</xsl:copy>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet> | {
"domain": "codereview.stackexchange",
"id": 2223,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "xml, xslt",
"url": null
} |
performance, c, hash-map, c99
void CGL_hashtable_set(CGL_hashtable* table, const void* key, const void* value, size_t value_size)
{
size_t key_size, hash_table_index;
__CGL_hashtable_get_key_size_and_table_index(table, &key_size, &hash_table_index, key);
CGL_hashtable_entry* entry_ptr = __CGL_hashtable_get_entry_ptr(table, key);
if(entry_ptr)
{
entry_ptr->value_size = value_size;
entry_ptr->value = NULL;
if(value_size > 0)
{
entry_ptr->value = CGL_malloc(value_size);
memcpy(entry_ptr->value, value, value_size);
}
}
else
{
CGL_hashtable_entry entry;
entry.set = true;
entry.next_entry = NULL;
entry.prev_entry = NULL;
memcpy(entry.key, key, key_size);
entry.value_size = value_size;
entry.value = NULL;
if(value_size > 0)
{
entry.value = CGL_malloc(value_size);
memcpy(entry.value, value, value_size);
} | {
"domain": "codereview.stackexchange",
"id": 43880,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, c, hash-map, c99",
"url": null
} |
## Exercise
Write a function random_draw(distribution) that samples a random number from the given distribution, an array where distribution[i] represents the probability of sampling index i.
To test your function on a particular distribution, sample many indices from the distribution and ensure that the proportion of times each index gets sampled matches the corresponding probability in the distribution. Do this for a handful of different distributions.
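A minimal sketch of such a function, using the inverse-CDF (cumulative sum) method, together with the empirical check the exercise asks for:

```python
import random

def random_draw(distribution):
    """Sample an index i with probability distribution[i] (inverse-CDF method)."""
    u = random.random()
    cumulative = 0.0
    for i, p in enumerate(distribution):
        cumulative += p
        if u < cumulative:
            return i
    return len(distribution) - 1  # guard against floating-point round-off

# Empirical check: sampled frequencies should approximate the distribution.
random.seed(42)
dist = [0.1, 0.2, 0.3, 0.4]
trials = 100_000
counts = [0] * len(dist)
for _ in range(trials):
    counts[random_draw(dist)] += 1
freqs = [c / trials for c in counts]
```

Repeating the check for a few different distributions (including degenerate ones like `[0.0, 1.0]`) exercises the edge cases.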
Tags: | {
"domain": "justinmath.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9914225133191064,
"lm_q1q2_score": 0.8522900014867285,
"lm_q2_score": 0.8596637559030338,
"openwebmath_perplexity": 291.4495911781642,
"openwebmath_score": 0.8775347471237183,
"tags": null,
"url": "https://www.justinmath.com/roulette-wheel-selection/"
} |
homework-and-exercises, quantum-field-theory, path-integral
There are 8 terms. Terms 1 and 7 are supposed to stay. Terms 2 and 5 cancel. Terms 4 and 8 cancel. Terms 3 and 6 must cancel but they don't. $A$ can be moved with two integrations by parts:
$$-iD_{xy} \cdot j^*_y \cdot Af_x = -iAD_{xy} \cdot j^*_y \cdot f_x = j^*_x \cdot f_x = j^*f $$
Instead of cancelling, they add. There's a missing minus sign. I've done this calculation at least five times, I can't find it. I'd appreciate if someone could show me where I made a mistake or link me this explicit calculation (it must be solved somewhere, this is a popular exercise). | {
"domain": "physics.stackexchange",
"id": 34481,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, quantum-field-theory, path-integral",
"url": null
} |
laser
Measuring the linewidth can be quite tricky. The usual direct technique is to beat the unknown laser with a known standard reference and study the beat signal. The equipment necessary is very expensive. However, you can make rough estimates from say a simple saturation spectroscopy based frequency locking setup by studying the amplitude fluctuations of the "error signal". Granted that it is crude and limited to the natural linewidth of your atomic system, but it does not need super expensive equipment or complicated electronics. There are other ways to use atomic physics (EIT, CPT resonances etc) to make reasonable estimates of laser linewidth and it all depends on what your goal is. | {
"domain": "physics.stackexchange",
"id": 54891,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "laser",
"url": null
} |
(also called maximum-minimum problems) occur in many fields and contexts in which it is necessary to find the maximum or minimum of a function to solve a problem (solution by calculus). What are the dimensions if the printed area is to be a maximum? 2) A cylindrical container with circular base is to hold 64 cubic centimeters. I plan on working through them in class. It also has its application to commercial problems, such as finding the least dimensions of a carton that is to contain a given volume. So the function has a relative maximum at x=-5. So let's look. One example is the path of an airplane. 50$, with 27,000+300(5) = 28,500 spectators and a revenue of \$ R(5) = 270,750. While they might not actually work out the quadratic function to come up with a precise number, managers at movie theaters do. Maximizing/Minimizing word problem: Minimum and | {
"domain": "associazionetalea.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9896718496130618,
"lm_q1q2_score": 0.8526401097215335,
"lm_q2_score": 0.8615382058759129,
"openwebmath_perplexity": 557.6241134805659,
"openwebmath_score": 0.44554403424263,
"tags": null,
"url": "http://fnpf.associazionetalea.it/maximum-and-minimum-word-problems-calculus.html"
} |
quantum-mechanics, thermodynamics, statistical-mechanics, hamiltonian
the Hamiltonian of the system of interest, $H_S$
the Hamiltonian of the system of interest + bath/thermostat/reservoir, $H_{tot} = H_S + H_B + V_{SB}$
$H_{tot}$ is treated in a microcanonical ensemble framework, and hence we are discussing its eigenstates. Generally it will not commute with $H_S$, since there is some interaction between the system and the bath. The derivation of the canonical ensemble is however based on solid reasoning that the interaction energy is smaller than the energy of the system, and can be neglected in the thermodynamic limit (roughly speaking, the energy of the system is proportional to its volume, whereas the interaction energy is proportional to its surface, i.e., scales as volume to the power $2/3$).
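The volume-versus-surface scaling can be sketched numerically: on an $L \times L \times L$ cubic lattice, only the boundary sites couple to the bath, and their fraction falls off roughly as $L^2/L^3$:

```python
def boundary_fraction(L):
    """Fraction of sites on the surface of an L x L x L cubic lattice."""
    total = L ** 3
    interior = max(L - 2, 0) ** 3
    return (total - interior) / total

fracs = [boundary_fraction(L) for L in (10, 100, 1000)]
# Roughly 49%, 5.9% and 0.6%: the system-bath coupling, living on the
# boundary, becomes negligible next to the bulk energy as the system grows.
```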
Hence, once this logic is accepted and we talk about microcanonical ensemble, the microstates of the system are its eigenstates. | {
"domain": "physics.stackexchange",
"id": 84390,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, thermodynamics, statistical-mechanics, hamiltonian",
"url": null
} |
c++, c++11, parsing, yaml
virtual void getValue(unsigned short int& value) override;
virtual void getValue(unsigned int& value) override;
virtual void getValue(unsigned long int& value) override;
virtual void getValue(unsigned long long int& value) override;
virtual void getValue(float& value) override;
virtual void getValue(double& value) override;
virtual void getValue(long double& value) override;
virtual void getValue(bool& value) override;
virtual void getValue(std::string& value) override;
};
}
}
#endif
#endif
YamlParser.cpp
#include "../../config.h"
#ifdef HAVE_YAML
#include "YamlParser.h"
using namespace ThorsAnvil::Serialize;
extern "C"
{
int thorsanvilYamlStreamReader(void* data, unsigned char* buffer, size_t size, size_t* size_read);
} | {
"domain": "codereview.stackexchange",
"id": 12245,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, c++11, parsing, yaml",
"url": null
} |
quantum-mechanics, hilbert-space, measurement-problem, observables
What the postulate says is that immediately after the measurement the state is $$\frac{P_\lambda \Psi}{\sqrt{(P_\lambda \Psi, P_\lambda \Psi)}}$$
since this projection won't usually be normalized. In summary, what it says is: the system doesn't collapse to an arbitrary element of $P_\lambda \mathscr{H}$ but rather specifically to the part of $\Psi$ lying there, properly normalized, which is given by the above formula. | {
"domain": "physics.stackexchange",
"id": 49093,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, hilbert-space, measurement-problem, observables",
"url": null
} |
nuclear-physics, nuclear-engineering, protons, neutrons, absorption
So, how long does it take until a control rod is completely depleted (no absorbing boron atoms left), that they are useless and have to be exchanged?
Control rods rarely absorb so much that they have lost their effectiveness. This is especially true in PWRs since power control is performed by soluble boron in the coolant and very little of the operating cycle is spent with CRs inserted.
Some plants will still measure CR lifetime in B-10 % remaining, but this is for ease of comparing simulation results since B4C pellet swelling has been correlated to a certain B-10 %.
The reason both BWR and PWR control rods are retired are due to structural issues related to:
absorber-cladding interaction due to absorber fluence (main PWR reason)(see 'why boron' below)
irradiation-assisted stress corrosion cracking
cladding neutron embrittlement
vibration induced wear on the fingertips.
And I also don't know why they use boron for control rods? What is the special with boron? | {
"domain": "physics.stackexchange",
"id": 21360,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nuclear-physics, nuclear-engineering, protons, neutrons, absorption",
"url": null
} |
materials, steel
Title: Will stainless steel with a #3 finish corrode faster than a #4 finish? I understand the difference between a #3 finish on a piece of stainless steel compared to a #4 finish being that the surface roughness is different. Specifically, a #4 finish is classified as having an Ra value of less than 25 micro-inches, whereas a #3 finish has an Ra value of less than 40 micro-inches.
The basis of my question is if I have two cabinets made from 304 SST, one has a #3 finish, one with a #4 finish in the same outdoor environment, which will rust faster? Are there any other concerns between the two cabinets as far as performance of the cabinet (welds, hinges, bends, etc.)? Have there been any studies conducted? Surface Finish
I have read about a couple of corrosion concerns with stainless steel surface finish. I don't have any direct experience with differing rates of corrosion, but below are some considerations: | {
"domain": "engineering.stackexchange",
"id": 201,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "materials, steel",
"url": null
} |
python, performance, python-3.x, sudoku
else:
allowed = False
break
all_permutes.remove(choice) # remove the row from all_permutes
grid[i] = choice # add row to board
return grid
grid = [[0 for i in range(NUM_COLS)] for i in range(NUM_ROWS)] # 9x9 grid of 0's
grid = fill_grid(grid)
Repeated set() construction
allowed = False
while not allowed:
allowed = True
choice = random.choice(all_permutes)
for j in range(len(choice)):
if choice[j] not in set(curr_col_set[j]): # ensure digit isn't in col | {
"domain": "codereview.stackexchange",
"id": 36861,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, python-3.x, sudoku",
"url": null
} |
ds.algorithms, graph-theory, graph-algorithms
(1) Use Borůvka's algorithm, where each vertex finds its minimum-weight outgoing edge, you form trees from selected edges, collapse each tree to a supervertex, and repeat. But modify it so that after the collapse you return to a simple graph rather than a multigraph. To do so, use radix sort to sort the edges by which pair of supervertices they connect, and then scan the sorted list to find blocks of edges connecting the same pair of supervertices and keep only the minimum-weight edge from each block. The time adds in a recurrence like $T(n)=O(n^2)+T(n/2)$ which solves to $O(n^2)$. | {
"domain": "cstheory.stackexchange",
"id": 5306,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ds.algorithms, graph-theory, graph-algorithms",
"url": null
} |
inorganic-chemistry, terminology
The first pure substance containing only the element oxygen to be isolated was dioxygen ($\ce{O2}$), in 1774, though it was called "dephlogisticated air" until 1777 when Lavoisier used the term "oxygen" for the first time. This was some 30 years before John Dalton even proposed the first empirical atomic theory. Back then, we only barely had an understanding of stoichiometry, such that Dalton famously claimed the molecular formula for water was $\ce{HO}$. The fact that dioxygen is a substance made of molecules containing two atoms of oxygen probably wasn't widespread knowledge until at least 1811, with the gas stoichiometry experiments of Amedeo Avogadro.
"domain": "chemistry.stackexchange",
"id": 10972,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, terminology",
"url": null
} |
conservative and it will give us the largest sample size calculation. The distribution of sample proportions appears normal (at least for the examples we have investigated). Let $\hat{p}$ = sample proportion or proportion of successes. AP® STATISTICS 2019 SCORING GUIDELINES Question 1 Intent of Question The primary goals of this question were to assess a student's ability to (1) describe features of a distribution of sample data using information provided by a histogram; (2) identify potential outliers; (3) sketch a boxplot; and (4) comment on an advantage of displaying data as a histogram rather than as a boxplot. • From the sampling distribution, we can calculate the probability of a particular sample mean: chances are that our observed sample mean originates from the middle of the true sampling distribution. In this task you will investigate the sampling distribution and model for the proportion of heads that may show up when a coin is tossed repeatedly. | {
"domain": "muse-deutsch.de",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9848109534209825,
"lm_q1q2_score": 0.8163480264741846,
"lm_q2_score": 0.82893881677331,
"openwebmath_perplexity": 447.9418475551134,
"openwebmath_score": 0.7959654927253723,
"tags": null,
"url": "http://skyf.muse-deutsch.de/sampling-distribution-of-the-sample-proportion-calculator.html"
} |
In the latter form, it is easy to see that the limit for $R\to\infty$ is $2\pi i$.
• Excellent point at the end. I attempted the derivative formula and wasted time trying to make it work; yours is much more elegant. – Darrin Jan 6 '14 at 23:12 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9817357173731318,
"lm_q1q2_score": 0.8324802280365485,
"lm_q2_score": 0.8479677506936878,
"openwebmath_perplexity": 230.5312836304988,
"openwebmath_score": 0.9916712045669556,
"tags": null,
"url": "https://math.stackexchange.com/questions/629570/help-with-a-complex-integral"
} |
general-relativity, cosmology, invariants
As Rennie says below, this is true for all comoving observers in the FLRW universe regardless of position. Then shouldn't any observer (regardless of frame) agree on the interval S? Of course different frames will measure different times for the universe since its inception, but I'm speaking of
the interval.
EDIT
For a chosen set of coordinates, the interval between two points appears to be independent of the path taken. The twin paradox is a perfect example (for flat space).
Consider one observer at “rest”, whilst the other one speeds off. Considering just the inertial (stationary in free space) coordinate system, the (stationary) observer witnesses a (proper) time pass $\Delta\tau$.
Let us denote the interval ($S$) witnessed by the observer to have units of distance. $$S^{2}=c^{2}\Delta\tau^{2}$$ | {
"domain": "physics.stackexchange",
"id": 35937,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, cosmology, invariants",
"url": null
} |
java, object-oriented
/**
* Decrement live by one.
*/
public void decrementLiveByOne() {
lives--;
}
/**
* Turn to play.
*
* @param opponent the opponent
*/
public void turnToPlay(Player opponent) {
System.out.printf("%n%nPlayer %d, Choose coordinates you want to hit (x y) ", id);
Point point = new Point(scanner.nextInt(), scanner.nextInt());
while(targetHistory.get(point) != null) {
System.out.print("This position has already been tried");
point = new Point(scanner.nextInt(), scanner.nextInt());
}
attack(point, opponent);
}
/**
* Attack
*
* @param point
* @param opponent
*/
private void attack(Point point, Player opponent) {
Ship ship = opponent.board.targetShip(point);
boolean isShipHit = (ship != null) ? true : false; | {
"domain": "codereview.stackexchange",
"id": 25362,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented",
"url": null
} |
homework-and-exercises, forces, reference-frames, torque
Title: I need help calculating moment/torque of an object Heyo!
I'll do my best to ask this question as clear as possible, but since English is not my mother tongue it might be hard :D | {
"domain": "physics.stackexchange",
"id": 45742,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, forces, reference-frames, torque",
"url": null
} |
ruby, programming-challenge, math-expression-eval
result = []
first, *rest = *tokens
while not first.nil?
second = rest.first #if there is an operator, second will contain it
if ops.include? second
second, third, *rest = *rest
first = [second, first, third] #[op, arg1, arg2]. This now becomes first and is compared again to the following tokens,
next #which will handle cases like 1+2+3 --> [+ [+ 1 2] 3]
else
result << first
end
first, *rest = *rest
end
result
end | {
"domain": "codereview.stackexchange",
"id": 18434,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ruby, programming-challenge, math-expression-eval",
"url": null
} |
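The left-associative folding in the Ruby snippet above (turning `1+2+3` into `[+ [+ 1 2] 3]`) can be sketched in Python. This is an illustrative re-implementation, not the original code; `fold_ops` and the flat token format are assumptions for the example:

```python
def fold_ops(tokens, ops):
    """Fold a flat token list like [1, '+', 2, '+', 3] into nested
    prefix triples [op, arg1, arg2], associating to the left."""
    result = []
    rest = list(tokens)
    while rest:
        first, rest = rest[0], rest[1:]
        # If the next token is an operator, absorb it and its right operand,
        # then keep comparing the combined node against the following tokens.
        while rest and rest[0] in ops:
            op, right, rest = rest[0], rest[1], rest[2:]
            first = [op, first, right]  # [op, arg1, arg2]
        result.append(first)
    return result
```

The inner `while` plays the role of the `next` in the Ruby loop: the freshly built triple becomes `first` and is compared against the remaining tokens again.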
javascript, beginner, algorithm, vue.js
Building to higher level
After some more refactoring and extraction of what you are doing into functions (notice, the comments are gone):
hasStep(chosenOptions, stepName) {
return chosenOptions.some(option => option.step.name === stepName)
}
getOptionByName(chosenOptions, name) {
    return chosenOptions.find(option => option.step.name === name)
}
deleteFromChosenOptions(chosenOptions, chosenOption) {
    return chosenOptions.filter(chOpt => chOpt !== chosenOption)
}
excludeOptionsWithName(options, name) {
    return options.filter((chOption) => chOption.name !== name)
}
deleteOptionFromStep(chosenOptions, optionName, stepName) {
    const chosenOption = this.getOptionByName(chosenOptions, stepName)
    if (chosenOption.options.length === 1) {
        this.chosenOptions = this.deleteFromChosenOptions(chosenOptions, chosenOption)
    } else {
        chosenOption.options = this.excludeOptionsWithName(chosenOption.options, optionName)
    }
}
} | {
"domain": "codereview.stackexchange",
"id": 41576,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner, algorithm, vue.js",
"url": null
} |
is $2m-2$. $\sum_{d|n} \phi(d) = n$. Then the number of elements of $S$ is just $\sum_{d|n} \phi(d)$. If a function $f : A \to B$ is both one–one and onto, then $f$ is called a bijection from $A$ to $B$. Solution: Using $m = 4$ and $n = 3$, the number of onto functions is computed accordingly. The double counting technique follows the same procedure, except that $S = T$, so the bijection is just the identity function. For example, given a sequence $1,1,-1,-1,1,-1$, connect points $2$ and $3$, then ignore them to get $1,-1,1,-1$. It is easy to prove that this is a bijection: indeed, $f_{n-k}$ is the inverse of $f_k$, because $S - (S - X) = X$. This function will not be one-to-one. A function is bijective if and only if it has an inverse. This gives a function sending the set $S_n$ of ways to connect the set of points to the set $T_n$ of sequences of $2n$ copies of $\pm 1$ with nonnegative | {
"domain": "elespiadigital.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9783846716491915,
"lm_q1q2_score": 0.8482944921199215,
"lm_q2_score": 0.8670357546485407,
"openwebmath_perplexity": 684.3939895842474,
"openwebmath_score": 0.8127384185791016,
"tags": null,
"url": "http://elespiadigital.org/libs/qd9zstny/6y18e.php?tag=number-of-bijective-functions-from-a-to-b-4d2c0c"
} |
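The identity $\sum_{d|n} \phi(d) = n$ used in the excerpt above is easy to check numerically. A minimal sketch (not tied to any particular source):

```python
def phi(n):
    """Euler's totient: count of 1 <= k <= n with gcd(k, n) == 1."""
    from math import gcd
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def divisor_phi_sum(n):
    """Sum phi(d) over all divisors d of n; the identity says this equals n."""
    return sum(phi(d) for d in range(1, n + 1) if n % d == 0)
```

The identity works because every element of $\{1, \dots, n\}$ has a well-defined order $d$ dividing $n$ in the cyclic group, and exactly $\phi(d)$ elements have each order.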
performance, beginner, php, object-oriented, array
"sector_coefficient" => 8,
"selected_tickers" => array(
"PPL" => 0.18,
"PCG" => 0.16,
"SO" => 0.14,
"WEC" => 0.12,
"PEG" => 0.1,
"XEL" => 0.1,
"D" => 0.08,
"NGG" => 0.06,
"NEE" => 0.04,
"PNW" => 0.02,
),
),
array(
"sector" => "Consumer Discretionary",
"directory" => "consumer-discretionary",
"sector_weight" => 0.08,
"sector_coefficient" => 8,
"selected_tickers" => array(
"DIS" => 0.18,
"HD" => 0.16,
"BBY" => 0.14,
"CBS" => 0.12,
"CMG" => 0.1,
"MCD" => 0.1,
"GPS" => 0.08, | {
"domain": "codereview.stackexchange",
"id": 34035,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, beginner, php, object-oriented, array",
"url": null
} |
php, sql, security
while( $row = $query->fetchArray(SQLITE3_ASSOC)){
print( "<pre>" );
print( htmlentities( $row['post'] ) );
print( "</pre><hr />" );
}
}
}
?>
<!DOCTYPE html>
<html>
<head><title>Try This</title></head>
<body>
<form action="post.php" method="post">
<textarea rows="10" cols="80" name="data"></textarea><br />
<input type="submit" value="Post" /><br />
Secret key: <input type="text" name="password" />
</form>
<hr />
<div id="posts">
<?php posts(); ?>
</div>
</body>
</html>
Without the htmlentities function, I know it would have client-side vulnerabilities (running malicious scripts). SQL injection is covered (prepared statements). I don't see how the program can be exploited server side. What (if anything) am I missing? It seems fine to me.
Some small changes: | {
"domain": "codereview.stackexchange",
"id": 1746,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, sql, security",
"url": null
} |
formal-languages, computability, automata, context-free, pushdown-automata
Title: Difference between Counter-machine and stack machine I read from this question that a counter automaton is a pushdown automaton with only one symbol allowed on the stack (plus a fixed bottom symbol).
My question is: does "counter machine" mean a counter coexisting with a stack? I mean, is "DFA $+ 1$ counter" the same thing as "DFA $+ 1$ stack"? For example, to generate this language $\{a^n b^n \mid n\geq 0\}$ we would need a counter + a stack.
I also read from that question that languages recognized by one-counter automata form a proper subset of the context-free languages.
What does it mean? Could you give one example. You can recognize $\{a^nb^n\}$ with just a counter (which is incremented by an $a$ and decremented by a $b$). No additional stack is needed. (If you used a stack, it would only contain $a$s; in general a counter is functionally equivalent to a stack whose alphabet consists of only one symbol.) | {
"domain": "cs.stackexchange",
"id": 19094,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "formal-languages, computability, automata, context-free, pushdown-automata",
"url": null
} |
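The answer's point, that $\{a^n b^n\}$ needs only a counter and no stack, can be illustrated with a small recognizer. This is a hypothetical sketch, not code from the question:

```python
def accepts_anbn(word):
    """One-counter recognizer for { a^n b^n : n >= 0 }.
    Increment on 'a', decrement on 'b'; reject if an 'a' follows a 'b',
    if the counter goes negative, or if it does not end at zero."""
    counter = 0
    seen_b = False
    for ch in word:
        if ch == 'a':
            if seen_b:          # 'a' after 'b' is not in the language
                return False
            counter += 1
        elif ch == 'b':
            seen_b = True
            counter -= 1
            if counter < 0:     # more b's than a's so far
                return False
        else:
            return False        # alphabet is only {a, b}
    return counter == 0
```

The counter here behaves exactly like a stack over a one-symbol alphabet: push on `a`, pop on `b`.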
terminology, genetic-algorithms, fitness-functions
AES here refers to the number of evaluations of the objective function the algorithm required. The reason for this is to provide a standardized amount of computation budget each algorithm used. Imagine if you compared algorithms on the basis of how many generations it took to find a good solution. In that case, I could just increase the population size of my algorithm by 100-fold or 1000-fold, and I'll probably look better. Same number of generations, but I gave my algorithm far more chances to find something good. Suppose I use wall-clock time. Now my algorithm might look better than yours just because I ran it on a much more powerful computer than you did. | {
"domain": "ai.stackexchange",
"id": 614,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "terminology, genetic-algorithms, fitness-functions",
"url": null
} |
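Counting objective-function evaluations, the AES budget described above, is easy to instrument by wrapping the objective. A minimal sketch with a hypothetical objective and a toy optimizer:

```python
def make_counted(f):
    """Wrap an objective function so each call is counted, giving a
    fair 'evaluations used' budget to compare across algorithms."""
    counter = {"evals": 0}
    def counted(x):
        counter["evals"] += 1
        return f(x)
    return counted, counter

def random_search(objective, candidates):
    """Toy optimizer: evaluate each candidate once, keep the best."""
    return min(candidates, key=objective)
```

Whatever the population size or hardware, the `evals` count reflects exactly how many chances the algorithm was given, which is the point of reporting AES.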
java, optimization
// Compute vector of "columns" from the given generator:
int genColumns[] = new int[k];
for (int i = 0; i < k; i++) {
int column = 0;
for (int j = 0; j < n; j++) {
column += generator[j][i] << j;
}
genColumns[i] = column;
}
// For all input strings:
for (int kString = 1; kString < maxKString; kString++) {
// matrix multiplication
int code = 0;
for (int i = 0; i < k; i++) {
if (((kString >>> i) & 1) != 0) {
code ^= genColumns[i];
}
}
int distance = Integer.bitCount(code);
minDistance = Math.min(minDistance, distance);
}
return minDistance;
} | {
"domain": "codereview.stackexchange",
"id": 10160,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, optimization",
"url": null
} |
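The Java loop above, XOR-ing bit-packed generator columns selected by the bits of each message, translates directly to Python. This sketch assumes the same representation, with each column packed into an int (one bit per row):

```python
def min_distance(gen_columns, k):
    """Minimum Hamming weight over all nonzero codewords of a binary
    linear code, where gen_columns[i] is the i-th generator column
    packed into an int."""
    best = None
    for msg in range(1, 1 << k):          # all nonzero k-bit messages
        code = 0
        for i in range(k):
            if (msg >> i) & 1:
                code ^= gen_columns[i]    # GF(2) matrix-vector product
        weight = bin(code).count("1")
        best = weight if best is None else min(best, weight)
    return best
```

For a linear code the minimum distance equals the minimum nonzero codeword weight, which is why scanning messages and taking `Integer.bitCount` (here `bin(code).count("1")`) suffices.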
java, performance, parsing, memory-management
public void setDetails(String detailKey, Object detailValue){
this.detailKey = detailKey;
this.detailValue = (String) detailValue;
}
public Long getRowId() {
return rowId;
}
public void setRowId(Long rowId) {
this.rowId = rowId;
}
public Long getContactId() {
return contactId;
}
public void setContactId(Long contactId) {
this.contactId = contactId;
}
public String getDetailKey() {
return detailKey;
}
public void setDetailKey(String detailKey) {
this.detailKey = detailKey;
}
public String getDetailValue() {
return detailValue;
}
public void setDetailValue(String detailValue) {
this.detailValue = detailValue;
}
public LocalDateTime getDateCreated() {
return dateCreated;
}
public void setDateCreated(LocalDateTime dateCreated) {
this.dateCreated = dateCreated;
}
public LocalDateTime getDateModified() {
return dateModified;
}
public void setDateModified(LocalDateTime dateModified) {
this.dateModified = dateModified;
} | {
"domain": "codereview.stackexchange",
"id": 6386,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, performance, parsing, memory-management",
"url": null
} |
bioinformatics
Title: Hamming distance between two DNA strings By definition from Wikipedia, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. In other words, it is the number of substitutions required to transform one string into another. Given two strings of equal length, compute the Hamming distance.
So to find the how many differences there are between two strings, say we have two DNA strings A and B:
A = TGACCCGTTATGCTCGAGTTCGGTCAGAGCGTCATTGCGAGTAGTCGTTTGCTTTCTCAAACTCC
B = GAGCGATTAAGCGTGACAGCCCCAGGGAACCCACAAAACGTGATCGCAGTCCATCCGATCATACA
Throw them into two arrays A and B. Therefore, if
(= count 0)
(cond
[(symbol=? A[1] B[1])] ;; do nothing, just move on to the next pair of the strings
[else (add1 count)] ;; counter := counter + 1 | {
"domain": "biology.stackexchange",
"id": 2984,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bioinformatics",
"url": null
} |
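In Python (rather than the Racket-style pseudocode in the question), the Hamming distance is a one-line fold over `zip`. A minimal sketch:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("strings must have equal length")
    return sum(1 for x, y in zip(a, b) if x != y)
```

The length check matters because `zip` silently truncates to the shorter input, which would understate the distance.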
php, php5
$result = [];
foreach($matches as $match){
$attrName = $match[1];
//parse the string value into an integer if it's numeric,
// leave it as a string if it's not numeric,
$attrValue = is_numeric($match[2])? (int)$match[2]: trim($match[2]);
$result[$attrName] = $attrValue; //add match to results
}
return $result;
}
This will turn <tagname attr1="value1" attr2='value2'> into [attr1=>value1, attr2=>value2]. Unlike the other answer, this function supports spaces in attribute value and I have casted the value into a numeric type when it's really a number rather than a string | {
"domain": "codereview.stackexchange",
"id": 20537,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, php5",
"url": null
} |
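A Python equivalent of the attribute-parsing idea above (regex match, then numeric cast) might look like this. The regex and function name are illustrative assumptions, not the original PHP API, and the digit-only cast is a simplification of PHP's `is_numeric`:

```python
import re

ATTR_RE = re.compile(r'''(\w+)\s*=\s*["']([^"']*)["']''')

def parse_attrs(tag):
    """Turn '<tagname attr1="value1" attr2=\'value2\'>' into a dict,
    casting purely numeric (digit-only) values to int."""
    result = {}
    for name, value in ATTR_RE.findall(tag):
        value = value.strip()
        result[name] = int(value) if value.isdigit() else value
    return result
```

Like the PHP version, this tolerates spaces inside quoted attribute values; it does not attempt to handle unquoted or mismatched-quote attributes.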
electrostatics
Really PUZZLING(literally!). The bold sentences are my areas of concern. My questions:
1) How can a field really exert force on the charges that created the field?
2) I am not understanding what Mr. Purcell wants to say about Newton's Third Law. What does he mean when saying "the force on $dq$ due to all other charges within the patch itself. This latter force is surely zero. Coulomb repulsion between charges within the patch is just another example of Newton's Third Law: the patch as a whole cannot push on itself. That simplifies our problem, for it allows us to use the entire electric field $E$, including the field due to all the charges in the patch"? Can anyone help me understand this paragraph?
3) Why is the electric field the average of that inside and outside?
How can a field really exert force on the charges that created the field? | {
"domain": "physics.stackexchange",
"id": 22905,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics",
"url": null
} |
quantum-mechanics, energy, schroedinger-equation, discrete
Title: How can energy be discrete when momentum and position are not? In quantum mechanics, it's said that the total energy of a system can only take certain discrete values. That is, the set of all possible energies of a system can be indexed by the natural numbers and is countably infinite. However, $E=T+U$, where $T$ is the kinetic energy and is a function of momentum, and $U$ is the potential energy and is a function of position. The problem is that the domain of $T$ and $U$ is the set of all real numbers (or arguably only positive reals, but the cardinality is equivalent). | {
"domain": "physics.stackexchange",
"id": 96752,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, energy, schroedinger-equation, discrete",
"url": null
} |
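A standard resolution, not stated in the question itself, is that discreteness comes from boundary conditions on bound states rather than from $T$ or $U$ separately: for an infinite square well the allowed energies are $E_n = n^2 \pi^2 \hbar^2 / (2 m L^2)$, indexed by the natural numbers, even though position and momentum remain continuous variables. A sketch with illustrative unit-free constants:

```python
import math

def square_well_energy(n, m=1.0, L=1.0, hbar=1.0):
    """n-th bound-state energy of an infinite square well of width L:
    E_n = n^2 * pi^2 * hbar^2 / (2 m L^2), for n = 1, 2, 3, ..."""
    if n < 1:
        raise ValueError("bound states are indexed by n = 1, 2, 3, ...")
    return (n * math.pi * hbar) ** 2 / (2.0 * m * L * L)
```

The spectrum scales as $n^2$, so the spacing between levels grows with $n$; only the *total* energy of a bound state is quantized, not the instantaneous kinetic or potential contributions.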
high-dimensional spaces is going to prohibit any easy -- or at least robust -- approach. Desmos will allow teachers to create card sorts, which is a nice addition to their Activity Builder that they unveiled last year at TMC15. $\theta=\frac{\pi}{6}$. The general form equation for a line that passes through the pole is. Rotating ellipse. Now we have our formula we can easily calculate other values. This calculator is the "rotation of axes" calculator. An ellipse is basically a circle that has been squished either horizontally or vertically. Now we can do the multiplication $Ar$ to finish. The ellipse is first rotated about the | {
"domain": "snionline.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226260757066,
"lm_q1q2_score": 0.818373664150042,
"lm_q2_score": 0.8376199714402813,
"openwebmath_perplexity": 897.3256986854708,
"openwebmath_score": 0.5953616499900818,
"tags": null,
"url": "http://snionline.it/kfvh/rotating-ellipse-desmos.html"
} |
c++, recursion, lambda, boost, c++20
Test code should be simple, clear, self-contained, idempotent, quick, and with no complicated or hidden logic. If you have to think about what a test is doing, something has gone horribly wrong. (As always, this is a rule of thumb, not a rule. If you need a complicated test… well, then you need it. C’est la vie.)
Test anything that might fail. Your tests check that the transform worked, and that’s good, but that’s not the only thing recursive_transform() does. It also deduces the return type, and can construct a rather complicated structure within the return value, and you never test whether any of that works. Those should be their own test case(s). For recursive_transform(), there are basically three things you need to check: | {
"domain": "codereview.stackexchange",
"id": 40259,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, recursion, lambda, boost, c++20",
"url": null
} |
The following method can be adapted to integrate your leftover region, but I think that is as much work as just applying it to the entire original region of integration, so I will do the entire region. Treating your double integral as an integral over a region in the plane, you want to integrate $f(x-y)$ over the rectangle with vertices at $(0,0)$, $(0,a)$, $(b,a)$, and $(b,0)$. Transforming the plane according to the rule $(x,y) \mapsto (x-y, y)$, your original integral is equal to the integral of $f(x)$ over the parallelogram with vertices at $(0,0)$, $(-a,a)$, $(b-a,a)$, and $(b,0)$. You can rewrite that as three integrals over functions of $x$ alone, or if you like, as one integral over a function of $x$ defined piecewise using $f(x)$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9711290913825541,
"lm_q1q2_score": 0.8348444839104312,
"lm_q2_score": 0.8596637577007394,
"openwebmath_perplexity": 113.65299117390353,
"openwebmath_score": 0.9567599892616272,
"tags": null,
"url": "https://math.stackexchange.com/questions/1894862/how-do-i-convert-the-following-double-integral-to-a-single-integral"
} |
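The change of variables described above can be sanity-checked numerically: with $u = x - y$, for fixed $u$ the variable $y$ ranges over $[\max(0,-u),\, \min(a,\, b-u)]$, so the double integral of $f(x-y)$ over the rectangle equals a single weighted integral of $f$ over $[-a, b]$. A midpoint-rule sketch (illustrative, with a quadratic test integrand):

```python
def double_integral(f, a, b, n=400):
    """Midpoint rule for the double integral of f(x - y)
    over 0 <= x <= b, 0 <= y <= a."""
    hx, hy = b / n, a / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        for j in range(n):
            y = (j + 0.5) * hy
            total += f(x - y)
    return total * hx * hy

def single_integral(f, a, b, n=20000):
    """Equivalent single integral of f(u) * L(u) over -a <= u <= b,
    where L(u) = min(a, b - u) - max(0, -u) is the length of the
    y-segment of the parallelogram at fixed u = x - y."""
    h = (b + a) / n
    total = 0.0
    for i in range(n):
        u = -a + (i + 0.5) * h
        L = min(a, b - u) - max(0.0, -u)
        total += f(u) * max(L, 0.0)
    return total * h
```

For $f(t) = t^2$, $a = 1$, $b = 2$, both forms give $4/3$ exactly, which the midpoint sums reproduce to high accuracy.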
c#, ajax, asp.net-mvc
Could be replaced with :
return Request.IsAjaxRequest() ? PartialView("NavigationBarOverlay", viewModel) : View("PageOverlay", viewModel);
Is it better? That, I don't know, it's a matter of opinion in that case.
PS: You might want to add [HttpGet] attribute above your controller's actions. You don't want people to reach these actions with a Post, Delete or Put, it'd be weird.
Thanks to @Heslacher, there's a way you can cut on some duplicated code by delegating the creation of NavigationPanelViewModel somewhere else. I would recommend creating a second constructor to this ViewModel that would look like this :
public NavigationPanelViewModel(string title, ??? level, bool allowFiltering)
{
Title = title;
NavigationLevel = level;
AllowFiltering = allowFiltering;
} | {
"domain": "codereview.stackexchange",
"id": 16435,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, ajax, asp.net-mvc",
"url": null
} |
7) $\lnot s\qquad\qquad\qquad$(...reasoning above)
7+) $\lnot s \lor \lnot w\quad\;$ (Disjunction Introduction/Addition from (7): that is, if we have established
$\qquad\qquad\qquad\qquad\quad\lnot s$, then we've established $\lnot s \lor$(anything))
5) $s \rightarrow \lnot w\qquad\;$ (Given (7+) and the fact that $\lnot s \lor \lnot w \equiv s \rightarrow \lnot w$).
- | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9688561694652216,
"lm_q1q2_score": 0.8052580757041438,
"lm_q2_score": 0.831143054132195,
"openwebmath_perplexity": 1003.0984306455719,
"openwebmath_score": 0.8878877758979797,
"tags": null,
"url": "http://math.stackexchange.com/questions/46074/if-s-implies-lnot-w-does-lnot-s-implies-w"
} |
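The equivalence $\lnot s \lor \lnot w \equiv s \rightarrow \lnot w$ invoked in the last step can be verified by exhausting the truth table. A small sketch:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def equivalent_everywhere():
    """Check (not s) or (not w)  ==  s -> (not w) over all truth assignments."""
    return all(
        ((not s) or (not w)) == implies(s, not w)
        for s, w in product([True, False], repeat=2)
    )
```

This is just the general pattern $p \lor q \equiv \lnot p \rightarrow q$ with $p = \lnot s$ and $q = \lnot w$.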
newtonian-mechanics, energy, work
For example:
Drop a stone on a mill wheel. The mill wheel blade is hit and set in motion (displaced) by the force resulting from the momentum change of the stone. That the mill wheel blade is moved means that the mill wheel has been rotated - connect a generator to this mill wheel and drop many stones (or put it under a water or air stream) to keep up the rotation, and the generator will convert this rotational kinetic energy into electrical energy.
In this example, work is done every time a force causes displacement. Work is only energy in transit so to speak, or a method of energy conversion from one form to another, so it doesn't appear clearly in the example. But it is a part of the process, since work is causing several of the energy conversion steps, for example the one from kinetic energy of the falling stone to rotational kinetic energy of the mill wheel. | {
"domain": "physics.stackexchange",
"id": 30269,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, energy, work",
"url": null
} |
pharmacology, pharmacokinetics, alcohol, pharmacodynamics, psychoneuropharmacology
Other psychoactive drugs are highly specific and have high affinity to their corresponding receptors. For example, caffeine acts on the adenosine receptors at µM concentrations (Daly et al., 1983). This is because of caffeine's high affinity to these receptors.
Moreover, ethanol is readily metabolized to acetyl-CoA by alcohol dehydrogenase and aldehyde dehydrogenase enzymes which are quite abundant in the body. Some people who have mutations in these enzymes leading to their reduced activity, "get drunk" and experience hangovers even with a low level of alcohol consumption. | {
"domain": "biology.stackexchange",
"id": 5290,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pharmacology, pharmacokinetics, alcohol, pharmacodynamics, psychoneuropharmacology",
"url": null
} |
newtonian-mechanics, forces, work, conservative-field
$^*$Of course, there are other ways to check if a field is conservative besides explicitly checking the work along every possible path. | {
"domain": "physics.stackexchange",
"id": 62620,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, work, conservative-field",
"url": null
} |
electromagnetism, quantum-field-theory, particle-physics, charge, feynman-diagrams
Title: What happens if two different elementary particles with different charges collide? For example, the up type quark and electron, or down type quark and up type quark. What will happen then? What will be absorbed or radiated? In the end, what kind of particles will turn out? Will it annihilate? | {
"domain": "physics.stackexchange",
"id": 88643,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, quantum-field-theory, particle-physics, charge, feynman-diagrams",
"url": null
} |
python, linear-regression
But all the model sees is the resulting Z, NOT that equation which you know created z.
So it attempts to build a model which considers how Z was accomplished without knowing there was noise, while KNOWING that both x and y were in this equation, because you explicitly told it they are in the model.
Based on this data (at least this is what I got without a seed when I ran your data... so it is close, with probably different Z due to different noise):
x y z
1 1 32.824550
2 2 21.382597
3 3 80.615424
4 4 30.958157
5 5 42.192197
6 6 75.649622
7 7 29.815352
8 8 40.167267
9 9 59.752065
10 10 53.402601 | {
"domain": "datascience.stackexchange",
"id": 1534,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, linear-regression",
"url": null
} |
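The fitting step discussed above can be made concrete with a tiny ordinary-least-squares sketch. Note that in the quoted data y equals x exactly, so a two-feature fit would be perfectly collinear; this illustrative pure-Python version therefore regresses z on x alone, with synthetic data rather than the exact noisy z values from the answer:

```python
def ols_fit(xs, zs):
    """Closed-form simple linear regression z ~ c0 + c1 * x."""
    n = len(xs)
    mx = sum(xs) / n
    mz = sum(zs) / n
    c1 = (sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
          / sum((x - mx) ** 2 for x in xs))
    c0 = mz - c1 * mx
    return c0, c1

def sse(xs, zs, c0, c1):
    """Sum of squared residuals for the line z = c0 + c1 * x."""
    return sum((z - (c0 + c1 * x)) ** 2 for x, z in zip(xs, zs))
```

With noise added, the fitted coefficients no longer match the generating equation exactly, which is precisely the answer's point: the model only ever sees the resulting z.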
algorithms, dynamic-programming, probability-theory, game-theory
If Bob plays optimally, what is the expected amount of coins Bob will have?
Can anybody give me an idea or a hint on how to solve it? Let $w(a,b)$ be the expected profit with $a$ positive cards and $b$ negative cards. Then $w(0,0) = 0$ and
$$
w(a,b) = \max\bigl(0, \tfrac{a}{a+b} (w(a-1,b) + 1) + \tfrac{b}{a+b} (w(a,b-1) - 1)\bigr).
$$
Indeed, if we stop the game immediately, the profit is zero. Otherwise, with probability $\tfrac{a}{a+b}$, we pull out a positive card, and with probability $\tfrac{b}{a+b}$, we pull out a negative card.
Using this recurrence, it is easy to compute $w(4999,4999)$. On my laptop it took less than a second. | {
"domain": "cs.stackexchange",
"id": 15536,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, dynamic-programming, probability-theory, game-theory",
"url": null
} |
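The recurrence above translates directly into a memoized computation. A sketch (checking small cases rather than $w(4999,4999)$ to keep it quick; the full table is what the answer computes in under a second):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def w(a, b):
    """Expected profit with a positive and b negative cards remaining,
    when the player may stop at any time (hence the max with 0)."""
    if a == 0 and b == 0:
        return 0.0
    keep_playing = 0.0
    if a > 0:
        keep_playing += a / (a + b) * (w(a - 1, b) + 1)
    if b > 0:
        keep_playing += b / (a + b) * (w(a, b - 1) - 1)
    return max(0.0, keep_playing)
```

For instance $w(1,1) = \max(0, \tfrac12(w(0,1)+1) + \tfrac12(w(1,0)-1)) = \tfrac12$, since $w(0,1)=0$ (stop immediately) and $w(1,0)=1$.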
quantum-mechanics, statistical-mechanics, fermions, pauli-exclusion-principle, identical-particles
$$w(n_i,g_i) = \frac{g_i!}{n_i!(g_i-n_i)!}$$ for all $j$'s.
This is the statistical weight for the $E_i$ energy level. We can now repeat this for all $E_j$'s immediately finding $w(n_j,g_j) = \frac{g_j!}{n_j!(g_j-n_j)!}$.
For example, say $i = 4$ and that there are $n_i = n_4 = 3$ particles with this energy $E_4 = - \frac{1}{2 \cdot 4^2} = - \frac{1}{32}$ so that $n_i E_i = 3 E_4 = - \frac{3}{32}$ is the total energy of the $i=4$'th (sub)system. There are $g_4 = (\frac{1}{2}2+1) \sum_{l=0}^{n-1} \sum_{m=-l}^l 1 = 2 \cdot 3^2 = 18$ different wave functions $\psi_{(i=4,l_4,m_4,s_4)}(r,\theta,\phi)$ where $s_4 = \pm \frac{1}{2}$, $m_4$ goes from $-l_4$ to $l_4$, and $l_4$ goes from $0$ to $4-1 = 3$. Since they all correspond to the same energy eigenvalue they are called degenerate states, and again notice we set $n_4 = 3 \leq g_4 = 18$. Thus $w(n_4 = 3, g_4 = 18) = \frac{18!}{3!\,15!}$. | {
"domain": "physics.stackexchange",
"id": 82643,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, statistical-mechanics, fermions, pauli-exclusion-principle, identical-particles",
"url": null
} |
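The statistical weight $w(n_i, g_i) = \frac{g_i!}{n_i!(g_i - n_i)!}$ above is just the binomial coefficient $\binom{g_i}{n_i}$. A quick numerical sketch:

```python
from math import factorial

def weight(n, g):
    """Number of ways to place n identical fermions into g degenerate
    single-particle states (Pauli: at most one particle per state):
    g! / (n! * (g - n)!)."""
    if not 0 <= n <= g:
        raise ValueError("fermions require 0 <= n <= g")
    return factorial(g) // (factorial(n) * factorial(g - n))
```

The constraint $n \leq g$ encodes the Pauli exclusion principle: you cannot put more fermions into a level than it has degenerate states.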
python, beginner, python-3.x, object-oriented, calculator
@property
def checked_string(self) -> str:
return self.__checked_string
@checked_string.setter
def checked_string(self, value: str):
self.__checked_string = value
@property
def check_result(self):
return self.res | {
"domain": "codereview.stackexchange",
"id": 38466,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-3.x, object-oriented, calculator",
"url": null
} |
javascript, node.js, async-await
for(let i = 0; i < data.length; i++){
let tempLead = data[i];
writtenLeads.push(tempLead.name);
let dataToBeWritten = '{' +
'"name_alias": "' + tempLead.name + '",' +
'"address": "' + tempLead.address + '",' +
'"zip": "' + tempLead.zip + '",' +
'"town": "' + tempLead.city + '",' +
'"phone": "' + tempLead.phone + '",' +
'"email": "' + tempLead.email + '",' +
'"name": "' + tempLead.name + '",' +
'"lastname": "' + tempLead.contact + '",' +
'"firstname": ""' +
'}';
let options = {
method: 'POST',
url: 'http://localhost/dolibarr/api/index.php/thirdparties/',
headers: {
'Content-Type': 'application/json',
DOLAPIKEY: 'MY_API_KE'
},
body: dataToBeWritten
}; | {
"domain": "codereview.stackexchange",
"id": 30556,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, node.js, async-await",
"url": null
} |
spectroscopy
I was thinking of something like this: The traditional spectroscopy you describe is still used; different techniques apply better in different circumstances.
For example, there are chemical bonds whose energy is in the ultraviolet. UV spectroscopy is good if you are dealing with such molecules, perhaps things with lots of $\require{mhchem}\ce{C=C}$ bonds. The information you get can tell you about the connectivity of the molecules (which atoms are connected to which others).
Lower energy transitions in the infrared tend to be between different rotational and/or vibrational states of molecules. Think of a molecule bending and flexing -- the energy scales are not as high as actually breaking bonds or liberating electrons. Thus IR spectroscopy is well suited for studying molecules according to their flexibility.
In between is optical spectroscopy. And of course there is no clear delineation in nature as to what information can be gleaned from what wavelengths.
"domain": "physics.stackexchange",
"id": 16490,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "spectroscopy",
"url": null
} |
filters, infinite-impulse-response, group-delay
Realizable IIR filters cannot have a linear phase. Hence, their group delay is frequency dependent. So there is no way to just estimate the group delay from the filter order. You can of course compute the group delay for any given filter. A corresponding function is usually implemented in signal processing toolboxes. E.g., in Matlab/Octave you have the function grpdelay() which computes the group delay from the filter coefficients. | {
"domain": "dsp.stackexchange",
"id": 11779,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, infinite-impulse-response, group-delay",
"url": null
} |
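Group delay is $-\,d\phi/d\omega$ of the frequency response, and it can be estimated numerically without any toolbox. A pure-Python sketch (illustrative, not the `grpdelay()` algorithm), checked against an FIR pure delay whose group delay is exactly 2 samples:

```python
import cmath

def freq_response(b, a, w):
    """H(e^{jw}) = B(e^{-jw}) / A(e^{-jw}) for filter coefficients b, a."""
    num = sum(bk * cmath.exp(-1j * w * k) for k, bk in enumerate(b))
    den = sum(ak * cmath.exp(-1j * w * k) for k, ak in enumerate(a))
    return num / den

def group_delay(b, a, w, dw=1e-6):
    """Numerical group delay -d(phase)/d(omega) via a central difference.
    Assumes the phase does not wrap within the tiny step dw."""
    p1 = cmath.phase(freq_response(b, a, w - dw))
    p2 = cmath.phase(freq_response(b, a, w + dw))
    return -(p2 - p1) / (2 * dw)
```

For the one-pole IIR filter $H(z) = 1/(1 - 0.5 z^{-1})$, the group delay at $\omega = 0$ is $r/(1-r) = 1$ sample, and it varies with frequency, illustrating the answer's point that realizable IIR filters do not have a constant (linear-phase) delay.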
plasma-physics
Hope that helped!! | {
"domain": "physics.stackexchange",
"id": 17259,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "plasma-physics",
"url": null
} |
energy-efficiency, renewable-energy, nuclear-technology, infrastructure
Energy Density
The twin issues of producing fusion with energy gain while simultaneously achieving a high enough power density are dominant themes of the near future in selecting which energy technology is best suited to produce competitive power plants that communities will want to build for safe and dependable power.
Comparison of fusion and “renewable” energy systems from the standpoint of energy density
Source Joules per cubic meter
Solar Radiation = 0.0000015
Geothermal = 0.05
Wind at 10 mph = 7
Tidal water = 0.5-50
Deuterium in the form of D2O (heavy water) = 6.88 x 10^16 (i.e., 6.88 x 10^10 MJ per cubic meter)
note: about 20% by mass of heavy water or D2O is deuterium
Deuterium is a gas at standard temperature and pressure, but heavy water (D2O) is easily and safely stored as a liquid. D-D fusion is in excess of a million times more energy dense than any renewable energy system. | {
"domain": "engineering.stackexchange",
"id": 1607,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "energy-efficiency, renewable-energy, nuclear-technology, infrastructure",
"url": null
} |
c#, vba, rubberduck, antlr
To avoid storing a state in the visitor, I opted to create a class ExtractMethodValidationResult which importantly has 2 separate lists. ValidContexts gives me all contexts (of one type) that are valid for a user's selection whereas InvalidContexts gives me any other contexts (regardless of type) where there might be a problem. I then pass that into the generic visitor. In a typical sample, a generic visitor will take an IEnumerable of some certain contexts and return that. This is quite different as we take a class and return the results. As a result, the process will create many new instances of the class which are then aggregated. Are there any potential pitfalls with the approach?
I'm not exactly thrilled with the high amount of duplicative code, notably with those that collects invalid contexts. Can this be done better? | {
"domain": "codereview.stackexchange",
"id": 27518,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, vba, rubberduck, antlr",
"url": null
} |
Re: Each of a, b, and c is positive. Does b = root(ac) [#permalink]
29 Aug 2013, 12:58
Each of a, b, and c is positive. Does $$b = \sqrt{ac}$$?
(1) $$\frac{a}{b} = \frac{b}{c}$$. Cross-multiply: $$b^2=ac$$ --> $$b = \sqrt{ac}$$. Sufficient.
(2) $$\frac{1}{b^3} = \frac{1}{abc}$$. Cross-multiply: $$b^3=abc$$ --> reduce by b: $$b^2=ac$$. Sufficient. | {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9579122744874228,
"lm_q1q2_score": 0.8216665964338367,
"lm_q2_score": 0.8577681049901037,
"openwebmath_perplexity": 3819.369964988416,
"openwebmath_score": 0.517752468585968,
"tags": null,
"url": "http://gmatclub.com/forum/each-of-a-b-and-c-is-positive-does-b-root-ac-158866.html"
} |
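Both statements above reduce to $b^2 = ac$, so each alone is sufficient. A quick numeric sanity check of the cross-multiplication (the values of a and c are illustrative):

```python
import math

# Statement (1): a/b = b/c. Cross-multiplying gives b^2 = ac, so for
# positive a, b, c we get b = sqrt(ac).
a, c = 8.0, 2.0
b = math.sqrt(a * c)                     # b chosen so that b^2 = ac
assert math.isclose(a / b, b / c)        # the given ratio holds

# Statement (2): 1/b^3 = 1/(abc)  =>  b^3 = abc  =>  b^2 = ac.
assert math.isclose(b**3, a * b * c)
```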
c#, performance, strings
        return (char)m_bytes.Array[m_bytes.Offset + index];
    }
}

/// <summary>
/// Indicates whether this <see cref="AsciiString"/> instance is empty.
/// </summary>
public bool IsEmpty {
    get {
        return m_bytes.Count == 0;
    }
}

/// <summary>
/// Indicates whether this <see cref="AsciiString"/> instance is empty or consists of only whitespace characters.
/// </summary>
public bool IsEmptyOrWhitespace {
    get {
        return Trim().IsEmpty;
    }
}

/// <summary>
/// The number of characters in this <see cref="AsciiString"/> instance.
/// </summary>
public int Length {
    get {
        return m_bytes.Count;
    }
} | {
"domain": "codereview.stackexchange",
"id": 25248,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, strings",
"url": null
} |
deep-learning, nlp, lstm
Training Epoch: 14/20, Training Loss: 12.026118357521792, Training Accuracy: 0.8491757446561087
Validation Epoch: 14/20, Validation Loss: 59.89944394497038, Validation Accuracy: 0.45497359431776685
Training Epoch: 15/20, Training Loss: 10.785567499923806, Training Accuracy: 0.8628473173326144
Validation Epoch: 15/20, Validation Loss: 61.482036528946125, Validation Accuracy: 0.45541000266481596
Training Epoch: 16/20, Training Loss: 9.373574649788727, Training Accuracy: 0.8767987081840235
Validation Epoch: 16/20, Validation Loss: 62.18386231796834, Validation Accuracy: 0.4580630794998584
Training Epoch: 17/20, Training Loss: 8.5658748998932, Training Accuracy: 0.8878869616990712
Validation Epoch: 17/20, Validation Loss: 63.56435154233743, Validation Accuracy: 0.4606744393166781
Training Epoch: 18/20, Training Loss: 7.807730126944895, Training Accuracy: 0.8960175152587504
Validation Epoch: 18/20, Validation Loss: 63.88373188037895, Validation Accuracy: 0.4606897915210869 | {
"domain": "datascience.stackexchange",
"id": 6832,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, nlp, lstm",
"url": null
} |
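The log above shows the classic overfitting signature: training loss keeps falling while validation loss rises from epoch 14 onward. A framework-agnostic early-stopping helper is one common response (a minimal sketch, fed the validation losses from the log):

```python
class EarlyStopping:
    """Stop when validation loss hasn't improved for `patience` epochs."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float('inf')
        self.bad_epochs = 0

    def step(self, val_loss):
        # Returns True when training should stop.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=3)
val_losses = [59.90, 61.48, 62.18, 63.56, 63.88]  # epochs 14-18 above
stops = [stopper.step(v) for v in val_losses]
```

With patience 3, the third consecutive non-improving epoch (epoch 17 here) triggers the stop.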
newtonian-gravity, work, potential-energy, integration, singularities
Title: Better derivation for the gravitational potential energy I was shown this derivation for the gravitational potential energy, and I'm not happy that it assumes $\frac{1}{\infty} = 0$. Is there a better derivation, either using a completely different method, or a similar one that avoids $\frac{1}{\infty}$?
\begin{align}
\text{work done} &= \int F dx\\
&= \int_{\infty}^{r} F \, dr\\
\text{substitute} \,F &= \frac{G M m}{r^2}\\
\text{work done} &= \int_{\infty}^{r} \left(\frac{G M m}{r^2}\right)\,dr\\
&= G M m \int_{\infty}^{r} \frac{dr}{r^2}\\
&= G M m \int_{\infty}^{r} r^{-2} \, dr\\
&= G M m \left[\frac{r^{-1}}{-1}\right]_{\infty}^r\\
&= - G M m \left[\frac{1}{r} - \frac{1}{\infty}\right]\\
\text{Assuming} \, \frac{1}{\infty} = 0\\
\text{gravitational potential energy} &= -\frac{G M m}{r}
\end{align} You're just having an issue with improper integrals. All you need to do is use limits, since you can prove that $\lim_{r\to\infty} \frac{1}{r} = 0$.
\begin{align} | {
"domain": "physics.stackexchange",
"id": 71422,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-gravity, work, potential-energy, integration, singularities",
"url": null
} |
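The limit-based fix suggested in the answer can be checked numerically: integrate from a finite lower limit $R_0$ instead of $\infty$ and watch the result approach $-GMm/r$ as $R_0$ grows. A minimal sketch (unit values for $G$, $M$, $m$ and $r = 2$ are placeholders):

```python
# Work done moving from R0 to r against F = GMm/r'^2, via the closed-form
# antiderivative -GMm/r'.  As R0 -> infinity this tends to -GMm/r.
G, M, m, r = 1.0, 1.0, 1.0, 2.0

def work_from(R0):
    return (-G * M * m / r) - (-G * M * m / R0)

limit_value = -G * M * m / r            # the claimed potential energy
for R0 in (1e3, 1e6, 1e9):
    # The discrepancy is GMm/R0, which shrinks to zero in the limit.
    assert abs(work_from(R0) - limit_value) < 2.0 * G * M * m / R0
```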
xml, perl, cache
use strict 'vars';
use warnings;
use utf8;
use Encode qw(encode decode);
use MP3::Tag;
# ==============================================================================
# Auxiliary functions for generation of the output
sub pr_header {
    print FH <<EOF;
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Current Version</key><integer>1</integer>
<key>Compatible Version</key><integer>1</integer>
<key>Application</key><string>m3s v0.0</string>
<key>Burner Info</key><string>$_[0]</string>
<key>Disc ID</key><string>$_[1]</string>
<key>Disc Name</key><string>$_[2]</string>
<key>tracks</key>
<array>
EOF
}
sub pr_footer { print FH "\t</array>\n</dict>\n</plist>\n"; } | {
"domain": "codereview.stackexchange",
"id": 4302,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "xml, perl, cache",
"url": null
} |
algorithms, computational-geometry
Title: Runtime of the optimal greedy $2$-approximation algorithm for the $k$-clustering problem We are given a set $P$ of 2-dimensional points with $|P| = n$, and an integer $k$. We must find a collection of $k$ circles that encloses all $n$ points such that the radius of the largest circle is as small as possible. In other words, we must find a set $C = \{ c_1,c_2,\ldots,c_k\}$ of $k$ center points such that the cost function $\text{cost}(C) = \max_i \min_j D(p_i, c_j)$ is minimized, where $D$ denotes the Euclidean distance between an input point $p_i$ and a center point $c_j$. Each point assigns itself to its closest cluster center, grouping the points into $k$ clusters.
The problem is known as the (discrete) $k$-clustering problem, and it is $\text{NP}$-hard. A reduction from the $\text{NP}$-complete dominating set problem shows that if there exists a $\rho$-approximation algorithm for the problem with $\rho < 2$, then $\text{P} = \text{NP}$. | {
"domain": "cs.stackexchange",
"id": 196,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, computational-geometry",
"url": null
} |
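For context, the standard greedy $2$-approximation for this problem is Gonzalez's farthest-first traversal, which runs in $O(nk)$ time: repeatedly pick the point farthest from the centers chosen so far. A minimal sketch (the point set and $k$ below are illustrative):

```python
import math

def farthest_first(points, k):
    centers = [points[0]]                          # arbitrary first center
    # dist[i] = distance from points[i] to its nearest chosen center
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=dist.__getitem__)
        centers.append(points[i])                  # farthest point becomes a center
        dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
    return centers, max(dist)                      # max(dist) = cost(C)

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
centers, radius = farthest_first(pts, 2)
```

The returned radius is at most twice the optimum, matching the $\rho = 2$ bound that the dominating-set reduction shows cannot be beaten unless $\text{P} = \text{NP}$.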
python, unit-testing, functional-programming, regex, validation
def test_delivery_regex():
    assert VALIDATE_REGEX['delivery'].match('') is None
    assert VALIDATE_REGEX['delivery'].match('a') is None
    assert VALIDATE_REGEX['delivery'].match('1') is None
    assert VALIDATE_REGEX['delivery'].match('1.') is None
    assert VALIDATE_REGEX['delivery'].match('1.5') is None
    assert VALIDATE_REGEX['delivery'].match('1,2') is None
    assert VALIDATE_REGEX['delivery'].match('1-2') is None
    assert VALIDATE_REGEX['delivery'].match('1;2') is None
    assert VALIDATE_REGEX['delivery'].match('1:2') is None
    assert VALIDATE_REGEX['delivery'].match('1,2,3') is None
    assert VALIDATE_REGEX['delivery'].match('Y') is None
    assert VALIDATE_REGEX['delivery'].match('N') is None
    assert VALIDATE_REGEX['delivery'].match('YY') is None
    assert VALIDATE_REGEX['delivery'].match('YYY') is None
    assert VALIDATE_REGEX['delivery'].match('YYYY') is None
    assert VALIDATE_REGEX['delivery'].match('YYYYY') is None | {
"domain": "codereview.stackexchange",
"id": 43240,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, unit-testing, functional-programming, regex, validation",
"url": null
} |
kinetics, computational-chemistry
Title: If I want to sample rate constants of chemical reactions, what distribution will be appropriate? For example, I have a (chemical) reaction system, and I want to sample the parameter space. What would be the best, validated distribution for sampling, and why? Since you have accepted my proposal of using a free-energy calculation, you can use that in combination with classical Transition State Theory (think of it as an extension of the Arrhenius equation).
More practically, you can read up on, for example:
GROMACS
NAMD
For more options, see this overview presentation. | {
"domain": "chemistry.stackexchange",
"id": 5179,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "kinetics, computational-chemistry",
"url": null
} |