text stringlengths 1 1.11k | source dict |
|---|---|
comprises two or more linear equations. [Garbled exercise: a matrix $A$ and a vector $b$ are given; find the least-squares solution of $Ax=b$ and simplify your answers.] A Computer Science portal for geeks. Bored with Algebra? Confused by Algebra? Hate Algebra? We can fix that. And after each substantial topic, there is a short practice quiz. Linear Algebra With Applications 9th by Steven J. Questions tagged [linear-algebra] Ask Question A field of mathematics concerned with the study of finite-dimensional vector spaces, including matrices and their manipulation, which are important in statistics. READ: All New People By Ann Lamott Essay. Elementary Linear Algebra with Applications. The second part of the video above. Books 5-7 introduce rational numbers and expressions. Once a week, you meet in small groups for discussion sections with a TA and you will have a chance to ask questions, especially those which concern the discussion problems posted each week. Get smarter | {
"domain": "asdpallavolorossano.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9822876992225169,
"lm_q1q2_score": 0.8076262607866592,
"lm_q2_score": 0.822189121808099,
"openwebmath_perplexity": 1135.8759574371509,
"openwebmath_score": 0.3489063084125519,
"tags": null,
"url": "http://uvym.asdpallavolorossano.it/linear-algebra-quiz-questions.html"
} |
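The matrix entries in the row above are garbled beyond recovery, but the task it describes (the least-squares solution of $Ax=b$) is easy to illustrate. Below is a minimal pure-Python sketch via the normal equations $(A^{\mathsf T}A)x = A^{\mathsf T}b$ for a two-column $A$; the example matrix and vector are invented, since the originals are unreadable.

```python
def lstsq_2col(A, b):
    """Least-squares solution of A x = b for a matrix A with two columns,
    via the normal equations (A^T A) x = A^T b, solved with Cramer's rule."""
    # Entries of the 2x2 normal matrix A^T A and the right-hand side A^T b.
    g11 = sum(row[0] * row[0] for row in A)
    g12 = sum(row[0] * row[1] for row in A)
    g22 = sum(row[1] * row[1] for row in A)
    r1 = sum(row[0] * bi for row, bi in zip(A, b))
    r2 = sum(row[1] * bi for row, bi in zip(A, b))
    det = g11 * g22 - g12 * g12
    return ((r1 * g22 - g12 * r2) / det, (g11 * r2 - g12 * r1) / det)

# Invented 3x2 system: both components of the solution come out to 2/3.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 1.0, 1.0]
x = lstsq_2col(A, b)
```

For real work one would use a library routine such as `numpy.linalg.lstsq`, which also handles rank-deficient systems.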
rosbridge, windows
Title: rosbridge questions
I am kinda new to ROS here. I'm confused as to how rosjs works, it specified in the tutorial that I need to run rosbridge, but since I am currently running on windows, is it possible to get the package via SVN checkout or apt-get repository?
Originally posted by RJ on ROS Answers with karma: 48 on 2012-05-31
Post score: 0
Hi,
Here you will find the SVN address: http://brown-ros-pkg.googlecode.com/svn/trunk/distribution/brown_remotelab/rosbridge
For more information on this package, take a look at http://www.ros.org/wiki/rosbridge
Originally posted by BeuBeu with karma: 56 on 2012-05-31
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rosbridge, windows",
"url": null
} |
python, beginner, python-2.x, json
import argparse

parser = argparse.ArgumentParser(description=
"Currency converter. For instance: to convert 100 USD to RUB, it would be "
"100 USD RUB. If you need only the rate: USD RUB"
)
parser.add_argument("amount", type=int, help="Amount of currency to convert", default=1, nargs='?')
parser.add_argument('convert_from', type=str, help="Currency to convert")
parser.add_argument('convert_to', type=str, help="Currency to convert to")
args = parser.parse_args()
try:
rate = checker(args.convert_from, args.convert_to)
except KeyError:
print "Invalid argument(s)"
else:
print rate*args.amount | {
"domain": "codereview.stackexchange",
"id": 19662,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-2.x, json",
"url": null
} |
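The snippet above calls a `checker` function that is not shown in this excerpt. Here is a stand-in sketch of what it might look like; the rate table and its figures are invented for illustration (the real code presumably fetches JSON rates from a service), but it raises `KeyError` for unknown codes, matching the calling code's `except KeyError` branch.

```python
def checker(convert_from, convert_to, rates=None):
    """Return the conversion rate between two currency codes.

    Stand-in for the question's `checker`: looks both codes up in a
    table of rates relative to USD and raises KeyError for unknown codes.
    """
    if rates is None:
        rates = {"USD": 1.0, "EUR": 0.9, "RUB": 75.0}  # made-up figures
    return rates[convert_to] / rates[convert_from]

rate = checker("USD", "RUB")  # 75.0 with the table above
```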
ros, rosdep-install
ERROR: Rosdep experienced an internal
error: [Errno 13] Permission denied:
'/home/barrybear/.ros/rosdep/sources.cache/index'
Please go to the rosdep page [1] and
file a bug report with the stack trace
below. [1] :
http://www.ros.org/wiki/rosdep
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/rosdep2/main.py", line 116, in rosdep_main
    exit_code = _rosdep_main(args)
  File "/usr/lib/pymodules/python2.7/rosdep2/main.py", line 257, in _rosdep_main
    return _package_args_handler(command, parser, options, args)
  File "/usr/lib/pymodules/python2.7/rosdep2/main.py", line 331, in _package_args_handler
    lookup = _get_default_RosdepLookup(options)
  File "/usr/lib/pymodules/python2.7/rosdep2/main.py", line 107, in _get_default_RosdepLookup
    verbose=options.verbose)
  File "/usr/lib/pymodules/python2.7/rosdep2/sources_list.py", line 501, in create_default
    sources = load_cached_sources_list(sources_cache_dir=sources_cache_dir,
      verbose=verbose)
  File | {
"domain": "robotics.stackexchange",
"id": 12180,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rosdep-install",
"url": null
} |
java, algorithm, computational-geometry
Here is a unit test it fails:
final double DELTA = 1e-12;
public void testMaxLineWithSixPoints() {
Solution solution = new Solution();
ArrayList<Solution.Point> p = new ArrayList<>(Arrays.asList(
solution.new Point(0, 0),
solution.new Point(1, 0),
solution.new Point(2, 0),
solution.new Point(1, 1),
solution.new Point(2, 1),
solution.new Point(3, 1)
));
Solution.Line line = solution.maxLine(p);
assertEquals(1, line.p1.x, DELTA);
assertEquals(1, line.p1.y, DELTA);
assertEquals(3, line.p2.x, DELTA);
assertEquals(1, line.p2.y, DELTA);
}
Your code returns a Line with Point(0, 0) and Point(3, 1), which contains only two points, while the correct answer is a Line with Point(1, 1) and Point(3, 1), which contains three points (or the line through (0, 0) and (2, 0), which also contains three). I would recommend refactoring this piece of code to make it readable and correct: | {
"domain": "codereview.stackexchange",
"id": 12317,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, computational-geometry",
"url": null
} |
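The expected behaviour in the failing test can be pinned down with a tiny brute-force checker (a sketch in Python rather than the `Solution` class under review): for each pair of points, count how many input points are collinear with it, using an exact integer cross-product test.

```python
def max_collinear(points):
    """Brute-force O(n^3): largest number of input points on one line.

    For each pair (p_i, p_j), a point (x, y) lies on their line iff the
    cross product (x2-x1)*(y-y1) - (y2-y1)*(x-x1) is zero (exact for ints).
    """
    n = len(points)
    if n < 3:
        return n
    best = 2
    for i in range(n):
        x1, y1 = points[i]
        for j in range(i + 1, n):
            x2, y2 = points[j]
            count = sum(1 for (x, y) in points
                        if (x2 - x1) * (y - y1) == (y2 - y1) * (x - x1))
            best = max(best, count)
    return best

# The six points from the unit test: the line through (1,1) and (3,1)
# covers three points, while (0,0)-(3,1) covers only two.
pts = [(0, 0), (1, 0), (2, 0), (1, 1), (2, 1), (3, 1)]
```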
$\int_e^\infty \frac{1}{x(\ln x)^p}\,dx$. The Euler integral of the second kind is also known as the gamma function. When the function f(x) is even (i. They come from many sources and are not checked. Improper Integrals: However, areas of unbounded regions also arise in applications and are represented by improper integrals. Integrating using samples. Otherwise, we say the improper integral diverges, which we capture in the following definition. Improper Integrals. Example 1: Evaluate the integral of the given function, $f(x) = 1/x^3$, with the limits of integration $[1, \infty)$. n. a definite integral having one. Review: Improper integrals, Type I and II. $\int_0^1 \frac{dx}{x^{2/3}}$. In Section 2. Give one example each of an improper integral of Type I and an improper integral of Type II. The Geometry of Gaussian Distributions. Provided that you can evaluate the inner integral | {
"domain": "kangalmalakli.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9904406026280905,
"lm_q1q2_score": 0.8097589590957879,
"lm_q2_score": 0.817574478416099,
"openwebmath_perplexity": 604.3811915113284,
"openwebmath_score": 0.9631556272506714,
"tags": null,
"url": "http://kangalmalakli.it/fjie/improper-integral.html"
} |
For $\mathbb{R}^{c}$ and $\mathbb{R}^{\omega_1}$, there is another way to show non-Lindelöfness. For example, both product spaces are not normal. As a result, neither product space can be Lindelöf, since every regular Lindelöf space is normal. Both product spaces contain the product $\omega^{\omega_1}$ as a closed subspace. The non-normality of $\omega^{\omega_1}$ is discussed here.
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9833429619433693,
"lm_q1q2_score": 0.8278033273593683,
"lm_q2_score": 0.8418256492357358,
"openwebmath_perplexity": 239.6477656412658,
"openwebmath_score": 0.995428204536438,
"tags": null,
"url": "https://dantopology.wordpress.com/tag/baire-category-theorem/"
} |
cell-biology, stem-cells
Undifferentiated biological cells that can differentiate into
specialized cells and can divide (through mitosis) to produce more
stem cells
But from my limited understanding of biology, embryonic germ cells can't specialize into any other cells unless they fuse with another gamete, so why would they be considered stem cells? What am I missing here?
The difference in designation is the timing of the foundation of the cell line and the tissue that it was sourced from.
Embryonic stem cells are harvested from the inner cell mass of a blastocyst around day 5 post fertilization. This is the first or second generation of cells to have started to differentiate, but they still have pluripotency, which means that they can differentiate into cells of any of the three germ layers. [1][2]
Those three germ layers are:
Endoderm
Mesoderm
Ectoderm | {
"domain": "biology.stackexchange",
"id": 4622,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cell-biology, stem-cells",
"url": null
} |
inorganic-chemistry, everyday-chemistry, nomenclature, reference-request
What is the significance of "super" in superphosphate? Is it a real chemical name or a trademark name?
Is the same as superoxide?
Are there any other ions containing the name "super"?
The term superphosphate is really old, dating from well before the concept of atoms was proposed by Dalton. Therefore it is difficult to rationalize the choice of this terminology.
In the unabridged version of the Oxford English Dictionary, you can see the earliest usage dates back to 1798
Chemistry. A phosphate containing an excess of phosphoric acid; an
acid phosphate. Now disused except in superphosphate of lime, calcium
superphosphate: cf. sense 2.
1798 Philos. Trans. (Royal Soc.) 88 17 It was..Scheele who discovered, that the urine of healthy persons contains superphosphate,
or acidulous phosphate, of lime.
Further information from the OED on the usage of "super" in chemical names confirms its use since antiquity. See antique examples | {
"domain": "chemistry.stackexchange",
"id": 12429,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, everyday-chemistry, nomenclature, reference-request",
"url": null
} |
• This is a very helpful and simple answer. Can I ask why it is the case that f splits in Fp? Apr 1 '17 at 23:20
• Also, by f splits in Fp, do you mean that Fp is the splitting field of f? Why would Fp being the splitting field of f imply that 0 or 1 is a root? Apr 1 '17 at 23:26
• @P-S.D If $\alpha\in\mathbb{F}_p$, then the $p$ elements $\alpha+i$, $0\le i<p$, are all in $\mathbb{F}_p$ and are all roots of $f$. But $f$ has degree $p$, so it clearly splits in $\mathbb{F}_p$ and thus each of the $p$ elements of that field, including $0$ and $1$, is a root of $f$. Apr 2 '17 at 14:53
Greg Martin's supposition is true: if a polynomial $f$ with $\deg(f)=n$ satisfies the property, then $n\ge p$. For a contradiction argument, just write the expansion with the binomial formula and analyse the coefficient of the $x^{n-1}$ term; you get $\binom n 1a_{n}+a_{n-1}=a_{n-1}$, and if $n\lt p$ this equation is absurd, since then $na_n\ne 0$.
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9780517443658749,
"lm_q1q2_score": 0.8085669164544198,
"lm_q2_score": 0.8267117983401363,
"openwebmath_perplexity": 102.31882911541936,
"openwebmath_score": 0.9561370015144348,
"tags": null,
"url": "https://math.stackexchange.com/questions/81583/how-do-i-prove-that-xp-xa-is-irreducible-in-a-field-with-p-elements-when"
} |
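The argument in the accepted comment admits a quick computational sanity check: by Fermat's little theorem, $x^p \equiv x \pmod p$, so $x^p - x - a$ evaluates to $-a$ at every element of $\mathbb{F}_p$ and has a root there only when $a \equiv 0 \pmod p$. A small sketch:

```python
def has_root_mod_p(a, p):
    """Check by exhaustion whether x^p - x - a has a root in F_p.

    Since pow(x, p, p) == x % p by Fermat's little theorem, the value
    is always -a mod p, so a root exists iff a is 0 mod p.
    """
    return any((pow(x, p, p) - x - a) % p == 0 for x in range(p))

# For p = 5: x^5 - x - 1 has no root in F_5, while x^5 - x does.
```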
Dear PUK, you missed $\theta=0$ because you divided the equation by $\tan\theta$ in the first step which is only possible if $\tan\theta$ is nonzero and finite. That's why $\theta=0$ and $\theta=\pi$ and $\theta=-\pi$ for which $\tan\theta$ vanish must be separately checked and indeed, you will find out that the equation is satisfied because $0=0$.
On the other hand, you added the wrong solutions $\pm 5\pi/6$ because their squared cosine is $3/4$, but the cosine itself has a wrong sign, so your squaring created problems. The squaring was unnecessary, as pointed out by lhf as I was writing this sentence, but if you still want to square it, you have to check all the solutions that they work and you will find out that the $\pm 5\pi/6$ solutions don't. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9585377284730285,
"lm_q1q2_score": 0.8028903311490089,
"lm_q2_score": 0.8376199572530448,
"openwebmath_perplexity": 519.770442419325,
"openwebmath_score": 0.9106977581977844,
"tags": null,
"url": "http://math.stackexchange.com/questions/40077/trig-equation-help-please"
} |
automata-theory, turing-machines, big-picture, ho.history-overview
The creation of Turing Machines: I think it is simplistic to believe that Turing Machines were created only to address the Entscheidungsproblem. The theory of computation was an idea whose time had arrived. There were several equivalent models of computation developed in that period with different motivations, and they were not developed in isolation. Post developed a model of mechanical calculation in addition to the more mathematical models of Church and Kleene. Electro-mechanical calculating devices were also in the background of such developments. Turing may have developed his model motivated by a specific problem, but he did not work in a vacuum, and his historical and intellectual context should not be discounted.
"domain": "cstheory.stackexchange",
"id": 1831,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "automata-theory, turing-machines, big-picture, ho.history-overview",
"url": null
} |
polynomials
Title: Proof of Minsky Papert Symmetrization technique I frequently hear about the Minsky-Papert Symmetrization technique in many papers with a reference to the book of Minsky. I could not locate the book online. Could someone supply me a proof of the symmetrization technique?
For instance, it is used in Lemma $5$ in this paper http://www.csee.usf.edu/~tripathi/Publication/polynomial-degree-conference.pdf
Over $0/1$ inputs we have
$$
\begin{align*}
(y_1+\cdots+y_N)^0 &= 1 \\
(y_1+\cdots+y_N)^1 &= \sum_i y_i \\
(y_1+\cdots+y_N)^2 &= \sum_i y_i+2\sum_{i<j} y_iy_j \\
(y_1+\cdots+y_N)^3 &= \sum_i y_i+6\sum_{i<j} y_iy_j + 6\sum_{i<j<k} y_iy_jy_k
\end{align*}
$$
And so on. It follows that for $0/1$ inputs, $p_{sym}$ can be written as a linear combination of $(y_1+\cdots+y_n)^0,\ldots,(y_1+\cdots+y_n)^d$, where $d$ is its degree. This linear combination can also be viewed as a polynomial $\tilde{p}$ in $y_1+\cdots+y_n$, which is equal to $p_{sym}$ for $0/1$ inputs. | {
"domain": "cs.stackexchange",
"id": 3687,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "polynomials",
"url": null
} |
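The power-sum identities quoted above hold because $y_i^k = y_i$ on $0/1$ inputs, so each power of the sum collapses to a combination of elementary symmetric sums. They are easy to verify exhaustively for small $N$; a brute-force sketch:

```python
from itertools import combinations, product

def check_identities(N):
    """Verify the square and cube expansions over all 0/1 inputs:
    s^2 = e1 + 2*e2 and s^3 = e1 + 6*e2 + 6*e3, where s = y1+...+yN
    and e_k is the k-th elementary symmetric sum (using y_i^2 = y_i)."""
    for ys in product((0, 1), repeat=N):
        s = sum(ys)
        e2 = sum(ys[i] * ys[j] for i, j in combinations(range(N), 2))
        e3 = sum(ys[i] * ys[j] * ys[k]
                 for i, j, k in combinations(range(N), 3))
        if s ** 2 != s + 2 * e2 or s ** 3 != s + 6 * e2 + 6 * e3:
            return False
    return True
```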
• You folks are missing the point of physical laws. The idea is that if the current state is completely known, the system should evolve according to the specific equations, and we should be able to know the state of the system at a later time. Since we know there is only one state at any given time in the universe, it is ridiculous for our physical universe to be in two states. And I'm not just talking classically. Yes, there are different states in the quantum mechanical system, but the state of the system as a whole, that is the spectrum and wavefunctions, are unique. – abnry Aug 11 '16 at 17:21 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9532750387190131,
"lm_q1q2_score": 0.8230496704274267,
"lm_q2_score": 0.8633916099737806,
"openwebmath_perplexity": 266.9762031982337,
"openwebmath_score": 0.6692443490028381,
"tags": null,
"url": "https://math.stackexchange.com/questions/1864047/why-is-uniqueness-important-for-pdes"
} |
organic-chemistry, amino-acids, enolate-chemistry, synthesis
The allyl product is then epoxidised (MCPBA) and the epoxide opened with iodide (procedure here).
Deprotection with acid hydrolysis gives your target molecule. | {
"domain": "chemistry.stackexchange",
"id": 14613,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, amino-acids, enolate-chemistry, synthesis",
"url": null
} |
javascript, sudoku
Unless you need to use the index somewhere other than to reference elements[i], it would make more sense to write DRY and more abstract code with an iterator. It's shorter too:
for (const element of elements) {
element.classList.add('focus');
// do lots of other stuff with element
Aleksei's answer shows how to use array methods with other parts of the code too.
Condense class names
Since the cells will always have exactly 1 of 2 classes, focus or bold, consider using just a single class instead that you toggle on and off - perhaps focus. For example, instead of:
.bold {
font-weight: bold;
font-size: 1em;
}
use
.container > input {
font-weight: bold;
font-size: 1em;
}
whose rules get overridden by a .container > .focus rule below.
Don't reassign a variable to the same object you already have a reference to. That is:
sudoku = addNumber(sudoku);
should be
addNumber(sudoku); | {
"domain": "codereview.stackexchange",
"id": 39979,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, sudoku",
"url": null
} |
earth-rotation, poles
The left-hand side of the figure shows Length of Day, and I think that's chiefly why they are making these refined polar measurements — to keep track of leap seconds, etc.
It's been established, by Fong et al. (1996), among others, that earthquakes could change the earth's rotational axis by redistributing mass on a large scale. Their table 1 shows [the sort of effect that individual earthquakes theoretically have — edited after comment from David Hammen] on the length of day; $\Psi$ is the excitation vector:
The EOC's website lists other geophysical excitations, among them:
Atmospheric angular momentum
Oceanic angular momentum
Hydrological excitation function
Axial angular momentum of the core | {
"domain": "earthscience.stackexchange",
"id": 750,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "earth-rotation, poles",
"url": null
} |
newtonian-mechanics, forces, friction, vectors, textbook-erratum
Title: Why is answer C correct? My professor said it is because the two forces are the same, but I don't think so. Please note that the statement says force A is equal to the component of another force, not to its magnitude.
Let me say this again: a component is not the same thing as a magnitude.
However, even when I asked the department chair of physics, he said the question is asking about magnitudes, and that I should have known it meant magnitude.
I understand that answer (d) is wrong; you don't need to explain it. If it really is my professor's fault that answer (c) is wrong, my grade could go up one level :D However, my professor got very angry when I asked this question. I understand that an answer need not always be true, but that doesn't mean it is always false. Moreover, answer (c) is always false.
The problem is: | {
"domain": "physics.stackexchange",
"id": 42273,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, friction, vectors, textbook-erratum",
"url": null
} |
rostopic
Services:
* /imu/declination_compute/set_logger_level
* /imu/declination_compute/get_loggers
contacting node http://192.168.10.1:36537/ ...
Pid: 2954
Connections:
* topic: /rosout
* to: /rosout
* direction: outbound
* transport: TCPROS
* topic: /imu/declination
* to: /imu_compass
* direction: outbound
* transport: TCPROS
* topic: /navsat/fix
* to: /navsat/nmea_topic_driver (http://192.168.10.1:57719/)
* direction: inbound
* transport: TCPROS
Originally posted by Alsing on ROS Answers with karma: 27 on 2017-01-05
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26646,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rostopic",
"url": null
} |
general-relativity, differential-geometry
Second, we can use it to encode the existing gravitational degrees of freedom. There's a sort of gauge symmetry between curvature and torsion. Gauge fixing torsion to 0, we end up with general relativity, whereas fixing curvature to 0, we end up with its teleparallel equivalent. | {
"domain": "physics.stackexchange",
"id": 12448,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, differential-geometry",
"url": null
} |
thermodynamics, ideal-gas, adiabatic
$$ t_c = \frac{r^2}{\alpha} $$
I get 0.045 seconds for this.
So it looks like conduction in the gas is pretty fast on this scale compared with the rise of the bubble, so I'd say that you can assume the bubble remains at the temperature of the water.
This wouldn't be the case for a 20 mm bubble. That would take something more like 30 seconds to reach the top but would have a characteristic time of 18 seconds; that would get pretty gnarly to deal with.
You'd get something close to adiabatic behaviour with a bubble 100 mm in diameter: 12 s to rise versus a 455 s characteristic time. | {
"domain": "physics.stackexchange",
"id": 20264,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, ideal-gas, adiabatic",
"url": null
} |
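The quoted times are consistent with $t_c = r^2/\alpha$ for a thermal diffusivity of roughly $\alpha \approx 5.5\times10^{-6}\ \mathrm{m^2/s}$. Note this value is back-solved from the numbers in the answer, which never states it, so treat it as an assumption. A quick sketch reproducing the three cases:

```python
ALPHA = 5.5e-6  # m^2/s; assumed thermal diffusivity, back-solved from the text

def characteristic_time(radius_m):
    """Conduction time scale t_c = r^2 / alpha for a bubble of given radius."""
    return radius_m ** 2 / ALPHA

# Radii implied by the text: a ~1 mm, a 20 mm and a 100 mm diameter bubble.
t_small = characteristic_time(0.0005)   # about 0.045 s
t_20mm = characteristic_time(0.010)     # about 18 s
t_100mm = characteristic_time(0.050)    # about 455 s
```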
quantum-mechanics, homework-and-exercises, operators, wavefunction
We can now take the time derivative of the expectation value:
$$\frac{d}{dt}\left< A \right> = \frac{d}{dt}\left< \psi \left| \hat{A} \right| \psi \right>$$
Now by expanding the right hand side:
$$\frac{d}{dt}\left< A \right> =\left< \frac{d}{dt}\psi \left| \hat{A} \right| \psi \right> + \left< \psi \left| \frac{\partial}{\partial t}\hat{A} \right| \psi \right>+ \left< \psi \left| \hat{A} \right| \frac{d}{dt}\psi \right>$$
Now we can see that the first and last terms can be replaced directly by considering the time-dependent Schrödinger equation:
$$i\hbar\frac{\partial}{\partial t}\left| \psi \right>=\hat{H}\left| \psi \right>$$
Now giving:
$$\frac{d}{dt}\left< A \right> =-\frac{1}{i\hbar}\left< \hat{H} \psi \left| \hat{A} \right| \psi \right> + \left< \psi \left| \frac{\partial}{\partial t}\hat{A} \right| \psi \right>+ \frac{1}{i\hbar}\left< \psi \left| \hat{A} \right| \hat{H}\psi \right>$$
By the definition of commutators this can be seen to reduce to: | {
"domain": "physics.stackexchange",
"id": 49402,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, homework-and-exercises, operators, wavefunction",
"url": null
} |
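The final equation is cut off in this excerpt. Combining the first and third terms of the last displayed line into a commutator (a reconstruction consistent with the preceding steps, not the original author's exact wording) gives the standard Ehrenfest result:

```latex
\frac{d}{dt}\left< A \right>
  = \frac{1}{i\hbar}\left< \psi \left| \left[\hat{A},\hat{H}\right] \right| \psi \right>
  + \left< \psi \left| \frac{\partial}{\partial t}\hat{A} \right| \psi \right>
```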
electrons, valence-bond-theory, noble-gases
Oxidation state is a hypothetical number assigned to each atom in a molecular structure (refer this) which assumes all bonds to be purely ionic. So it doesn't mean that that the carbon atoms in $\ce{C_2H_2}$ having an oxidation state of $-1$ have gained $1$ electron each to result in 5 electrons in the valence shell.
If you meant the $\ce{C^{4-}}$ (carbide) ion, carbon actually prefers this because removing 4 electrons is quite endoergic: it requires nearly 14000 kJ/mol (refer this) to form $\ce{C^{4+}}$, basically due to carbon's small size, which means the electrons are closer to the nucleus and can be removed only at the expense of high energy. Thus, $\ce{C^{4-}}$ is comparatively easy to form, as it is easier to add electrons to the valence shell.
"domain": "chemistry.stackexchange",
"id": 9604,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrons, valence-bond-theory, noble-gases",
"url": null
} |
telescope, telescope-lens
The cartoon illustration shown on the Canada under the stars website shows a meniscus with its concave side used as a reflector. I think this is just an error by the illustrator. It wouldn't make sense to show a Gregorian telescope in a paragraph about Laurent Cassegrain, and the primary focus does not fall in front of the secondary.
I've tilted the drawing so that the optical axis is horizontal, and highlighted the incoming and reflected rays of light striking the top edge of the secondary. You can see that the reflected ray is more parallel to the axis, which would only happen if the mirror were convex. The very large distance between the lens of the eyepiece, and the actual location of the exit for one's eye confirms that some artistic license has been taken with the optics here.
For reference, look at the realistic drawing of a Cassegrain optical system from Wikipedia shown below. | {
"domain": "astronomy.stackexchange",
"id": 2343,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "telescope, telescope-lens",
"url": null
} |
organic-chemistry, nomenclature, heterocyclic-compounds
All locants are omitted in compounds or substituent groups in which all substitutable positions are completely substituted or modified, for example, by hydro, in the same way. Thus, in this case, the systematic name is simplified to ‘tetrahydro-2H-pyran’. Furthermore, omission of indicated hydrogen is permitted in general nomenclature if no ambiguity would result. Thus, in this case, a correct name is just ‘tetrahydropyran’.
Anyway, according to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), preferred IUPAC names of saturated heteromonocyclic compounds are formed by the extended Hantzsch–Widman system. Therefore, the preferred IUPAC name of the given compound is ‘oxane’. | {
"domain": "chemistry.stackexchange",
"id": 4239,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, nomenclature, heterocyclic-compounds",
"url": null
} |
cosmology, statistical-mechanics, temperature, standard-model, degrees-of-freedom
$$
s(T)a^3 = \text{constant}.
$$
Also, the temperature of the neutrinos drops off as $T_\nu \sim 1/a$ after they decouple. Combining these results, we find
$$
\left(\frac{T_\text{low}}{T_{\nu,\text{low}}}\right)^3 = \frac{11}{4} \left(\frac{T_\text{high}}{T_{\nu,\text{high}}}\right)^3.
$$
At high temperatures, the neutrinos are still in thermal equilibrium with the photons, i.e. $T_{\nu,\text{high}}=T_\text{high}\,$, so finally we obtain
$$
T_\nu = \left(\frac{4}{11}\right)^{1/3}T
$$
at low temperatures. Therefore,
$$
g(T) = 2 + \frac{7}{8}\cdot 6\left(\frac{4}{11}\right)^{4/3} = 3.36.
$$
A more detailed treatment is given in the same link from where I took the table. | {
"domain": "physics.stackexchange",
"id": 20421,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, statistical-mechanics, temperature, standard-model, degrees-of-freedom",
"url": null
} |
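The closing arithmetic is easy to reproduce: two photon polarizations, plus $\tfrac78\cdot 6$ effective neutrino states weighted by $(T_\nu/T)^4 = (4/11)^{4/3}$. A one-line check:

```python
# g(T) at low temperature: photons contribute 2; the 6 neutrino states
# (3 flavours x nu/nubar) carry the fermionic 7/8 factor and the fourth
# power of the temperature ratio T_nu / T = (4/11)^(1/3).
g_low_T = 2 + (7 / 8) * 6 * (4 / 11) ** (4 / 3)
# rounds to 3.36, the value quoted above
```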
What has not been mentioned thus far is that one can in fact use any number of "auxiliary cubics" in the solution of the quartic equation. Don Herbison-Evans, in this page (Wayback link, as the original page has gone kaput), which was adapted from his technical report, mentions five such possible auxiliary cubics.
Given the quartic equation
$$x^4 + ax^3 + bx^2 + cx + d = 0$$
the five possible auxiliary cubics are referred to in the document as
Christianson-Brown:
$$y^3 +\frac{4a^2b - 4b^2 - 4ac + 16d - \frac34a^4}{a^3 - 4ab + 8c}y^2 + \left(\frac3{16}a^2 - \frac{b}{2}\right)y - \frac{1}{64}(a^3 + 4a b - 8c) = 0$$
Descartes-Euler-Cardano:
$$y^3 + \left(2b - \frac34 a^2\right)y^2 + \left(\frac3{16}a^4 - a^2b + ac + b^2 - 4d\right)y - \frac{1}{64}(a^3 + 4a b - 8c)^2 = 0$$
Ferrari-Lagrange
$$y^3 + by^2 + (ac - 4d)y + a^2d + c^2 - 4bd = 0$$
Neumark
$$y^3 - 2by^2 + (ac + b^2 - 4d)y + a^2d - abc + c^2 = 0$$
Yacoub-Fraidenraich-Brown | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9888419667789669,
"lm_q1q2_score": 0.817487318519125,
"lm_q2_score": 0.8267117962054049,
"openwebmath_perplexity": 385.00272215571556,
"openwebmath_score": 0.8640218377113342,
"tags": null,
"url": "https://math.stackexchange.com/questions/785/is-there-a-general-formula-for-solving-4th-degree-equations-quartic"
} |
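Of these, the Ferrari-Lagrange cubic is the easiest to sanity-check numerically. For a monic quartic with roots $x_1,\dots,x_4$, its roots turn out to be the negated root pairings $-(x_ix_j+x_kx_l)$; the sketch below verifies this for the quartic with roots $1,2,3,4$ (my own check, not taken from the cited report).

```python
def ferrari_lagrange_cubic(a, b, c, d):
    """Coefficients (1, b, ac - 4d, a^2 d + c^2 - 4bd) of the
    Ferrari-Lagrange auxiliary cubic quoted above."""
    return (1, b, a * c - 4 * d, a * a * d + c * c - 4 * b * d)

# Quartic with roots 1, 2, 3, 4: x^4 - 10x^3 + 35x^2 - 50x + 24.
a, b, c, d = -10, 35, -50, 24
_, p, q, r = ferrari_lagrange_cubic(a, b, c, d)

def cubic_value(y):
    return y ** 3 + p * y ** 2 + q * y + r

# The root pairings 1*2 + 3*4, 1*3 + 2*4, 1*4 + 2*3 give 14, 11, 10;
# their negatives are exactly the roots of this cubic.
```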
If the smallest element of $S$ is $3$, the only restriction we have is that $9$ is not in $S$. This leaves us $2^6=64$ such sets.
If the smallest element of $S$ is not $2$ or $3$, then $S$ can be any subset of $\{4, 5, 6, 7, 8, 9, 10\}$, including the empty set. This gives us $2^7=128$ such subsets.
So our answer is $60+64+128=\boxed{252}$.
## Solution 2(PIE)
We cannot have the following pairs or triplets: $\{2, 4\}, \{3, 9\}, \{2, 3, 6\}, \{2, 5, 10\}$. Since there are $512$ subsets ($1$ isn't needed), we have the following: $512-(384-160+40-4) \implies \boxed{252}$. | {
"domain": "artofproblemsolving.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.98615138856342,
"lm_q1q2_score": 0.8062522093357272,
"lm_q2_score": 0.8175744806385542,
"openwebmath_perplexity": 117.12364503621075,
"openwebmath_score": 0.9290510416030884,
"tags": null,
"url": "https://artofproblemsolving.com/wiki/index.php?title=2017_AIME_I_Problems/Problem_12&diff=next&oldid=86197"
} |
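This excerpt does not restate the problem; per the URL it is 2017 AIME I Problem 12, which (as I read it) asks for the number of product-free subsets of $\{1,\dots,10\}$, where product-free means no $a, b, c \in S$ (not necessarily distinct) satisfy $ab=c$. A brute-force check of the answer:

```python
from itertools import combinations

def is_product_free(s):
    """No a, b in s (not necessarily distinct) with a*b also in s."""
    return all(a * b not in s for a in s for b in s)

# Count product-free subsets of {1, ..., 10}. Any set containing 1
# fails via 1*1 = 1, so effectively we range over subsets of {2,...,10}.
count = sum(1 for r in range(11)
            for combo in combinations(range(1, 11), r)
            if is_product_free(set(combo)))
```

This agrees with the inclusion-exclusion count $512-(384-160+40-4)=252$ in Solution 2.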
ros, cpu
Comment by Stefan Kohlbrecher on 2011-07-22:
Sounds like a Geode LX800 board to me. I can recommend using a small Atom based board (for example FitPC2) instead. We did that on our KidSize humanoid robots 2 years ago and are quite happy with the improved performance. FitPC2 definitely works well with ROS.
Comment by Eric Perko on 2011-07-22:
Could you elaborate on "computing the all the autopilot information"? Do you need full 3D (6dof) navigation support w/ obstacle avoidance? Just, say, keep the glider going "straight and level"?
closing this question as it has not had activity in over a month
In general, this question is not answerable by the community, as you are the expert on your own software needs. It is possible to use ROS without adding significant overhead to what you are doing, so it is likely that your own software will be the bottleneck. I.e., if your system can run Linux, it can run ROS.
"domain": "robotics.stackexchange",
"id": 6231,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, cpu",
"url": null
} |
python, beginner, datetime, converting, json
It concerns click statistics of different websites. This data needs to be converted to make a consistent chart with Google visualisations. So first a conversion is done in Python, then the result is converted to JSON to pass to the Google libs.
My current code is this:
# determine min date
mindate = datetime.date.max
for dataSet in sets:
if (dataSet[0][0] < mindate):
mindate = dataSet[0][0];
# fill a dictionary with all dates involved
datedict = {}
for dat in daterange(mindate, today):
datedict[dat] = [dat];
# fill dictionary with rest of data
arrlen = 2
for dataSet in sets:
# first the values
for value in dataSet:
datedict[value[0]] = datedict[value[0]] + [value[1]];
# don't forget the missing values (use 0)
for dat in daterange(mindate, today):
if len(datedict[dat]) < arrlen:
datedict[dat] = datedict[dat] + [0]
arrlen = arrlen + 1
# convert to an array
datearr = []
for dat in daterange(mindate, today): | {
"domain": "codereview.stackexchange",
"id": 11755,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, datetime, converting, json",
"url": null
} |
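The padding logic in the question (fill every date, then patch in 0 for sets that skipped a date) can be done in one pass per set. Below is a hedged sketch of that alternative; the `daterange` helper and the `merge_sets` name are stand-ins for the question's own helpers, and the end-exclusive behaviour of `daterange`, plus the assumption that each data set's first entry holds its earliest date, are guesses about the original code:

```python
from datetime import date, timedelta

def daterange(start, end):
    """Stand-in for the question's daterange helper; assumed end-exclusive."""
    d = start
    while d < end:
        yield d
        d += timedelta(days=1)

def merge_sets(sets, today):
    """Build {date: [date, v1, v2, ...]}, one column per data set,
    inserting 0 for missing dates up front instead of patching later."""
    mindate = min(data_set[0][0] for data_set in sets)
    merged = {d: [d] for d in daterange(mindate, today)}
    for data_set in sets:
        values = dict(data_set)                 # date -> value lookup
        for d in merged:
            merged[d].append(values.get(d, 0))  # 0 pads missing dates
    return merged

sets = [[(date(2024, 1, 1), 5), (date(2024, 1, 3), 7)]]
merged = merge_sets(sets, date(2024, 1, 4))
print(merged[date(2024, 1, 2)])  # [datetime.date(2024, 1, 2), 0]
```

Using a per-set dict lookup avoids both the `arrlen` bookkeeping and the second fix-up loop.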
3. ## Re: Dice probability problem
Originally Posted by Soroban
Hello, ChaosticMoon!
I assume each die has sides numbered from 1 to 100.
There are $100^3\,=\,1,\!000,\!000$ possible outcomes.
Visualize a cube graphed in the first octant of an xyz-system.
One vertex is at the origin; the opposite vertex is at (100, 100, 100).
Consider all the lattice points (those with integer coordinates)
. . from (1,1,1) to (100,100,100).
These represent the 1,000,000 possible outcomes.
The points whose coordinates have a sum exceeding 222
. . are "outside" the triangle with coordinates:
. . (22, 100, 100), (100, 22, 100), (100, 100, 22).
How many lattice points are contained in this tetrahedron?
The tetrahedron has 21 "levels".
Each level contains a triangular number of points.
We want the sum of the first 21 triangular numbers.
Fortunately, there is a formula for this: . $N \:=\:\frac{n(n+1)(n+2)}{6}$
For $n = 21\!:\;\;N \:=\:\frac{(21)(22)(23)}{6} \:=\:1771$ | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9854964232904171,
"lm_q1q2_score": 0.8125042859590913,
"lm_q2_score": 0.824461932846258,
"openwebmath_perplexity": 1670.569355415077,
"openwebmath_score": 0.683907687664032,
"tags": null,
"url": "http://mathhelpforum.com/advanced-statistics/188546-dice-probability-problem.html"
} |
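The closed form used in the quoted solution is easy to verify numerically; this sketch just compares the direct sum of the first $n$ triangular numbers against $n(n+1)(n+2)/6$:

```python
def triangular(k):
    """k-th triangular number."""
    return k * (k + 1) // 2

n = 21
total = sum(triangular(k) for k in range(1, n + 1))
closed = n * (n + 1) * (n + 2) // 6
print(total, closed)  # 1771 1771
```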
polymers, plastics, optical-properties
Title: Suggestions for 200-600 nm range transmitting polymers/materials I am a student of physics who is doing research related to Cherenkov measurements. For that purpose, I need suggestions for some materials that have the following properties:
$80\%$ transparency in the range of $200$-$600$ $\mathrm{nm}$
Low scintillation light produced from electrons
Relatively lower density
A high refractive index
Some materials in this regard that I have tested are PMMA, TPX, NAS, LURAN, TOPAX, and Plexiglass. I want to test more materials to see if something does better; what polymers or other materials might I try to achieve these properties? No plastic of any practically useful thickness$^\dagger$ has $80\%$ transparency down to $200~\mathrm{nm}$. Period.
Below is a UV transmission chart for various materials from a marketing brochure published by BrandTech, a manufacturer of disposable UV/Vis cuvettes (click image to enlarge): | {
"domain": "chemistry.stackexchange",
"id": 7201,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "polymers, plastics, optical-properties",
"url": null
} |
strings, typescript
ProcessFullName(full_name: string): SplitNameData {
const Obj: SplitNameData = {
firstName: '',
middleName: '',
lastName: ''
};
I think it would be cleaner to use null as a marker for non-existing names instead of the empty string, even though the empty string is probably not a valid name anywhere in the world.
What is your function supposed to do with names having more than three parts?
Consider using const wherever possible and let in all other cases. I recommend to never use var. ⇒ const Obj, const splitName.
Use a more descriptive name for Obj, e.g. splitNameData or separatedNames. In this case, I would recommend to rename splitName to something like nameParts as well to avoid confusion.
Use a consistent naming scheme: fullName instead of full_name, firstName instead of FirstName and so on.
Use array destructuring:
case 2:
[Obj.FirstName, Obj.LastName] = splitName;
break;
case 3:
[Obj.FirstName, Obj.MiddleName, Obj.LastName] = splitName;
break; | {
"domain": "codereview.stackexchange",
"id": 29742,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "strings, typescript",
"url": null
} |
javascript, react.js
Title: Import function vs turning it into custom hook I have this ProductSearchContainer.js file that it's getting too long to handle and maintain.
What it's responsible for:
Doing network requests to get a list of products
Filter that list with filters values it gets from URL query string and update state with the new filteredList
It also holds functions to simulate the length of products on each filter before you click it. For example: a color filter with Black(5) indicating that you'll get 5 products if you click on Black.
Along other functions
The file was getting too big (600+ lines) and I decide to move some pieces of logic to other files.
For each filter category (brand, price, rating, etc) I have two functions:
One to apply the active filters (it gets a list and the activeFilters array and returns a filtered list for those filters).
One to simulate the next filter length (like I explained above in the colors example) | {
"domain": "codereview.stackexchange",
"id": 35174,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, react.js",
"url": null
} |
quantum-mechanics, electromagnetism, gauge-theory, quantum-anomalies
Now, consider a spacetime $M$ with some not-necessarily-trivial topology. Consider a quantum model on $M$ that has some discrete group $G$ of global symmetries. Global means that the symmetry acts uniformly throughout spacetime. An 't Hooft anomaly is an obstruction to gauging the symmetry group $G$.
Gauging the symmetry group $G$ means, among other things, adding a corresponding gauge field to the model and summing over all possible configurations of this gauge field in the path integral, which in turn is supposed to define all of the model's quantum correlation functions. We have an 't Hooft anomaly if that sum turns out to be zero, because a model whose correlation functions are all zero is not a model at all. | {
"domain": "physics.stackexchange",
"id": 80138,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, electromagnetism, gauge-theory, quantum-anomalies",
"url": null
} |
neural-network, deep-learning, mathematics, cost-function
Title: Norm type in cost function of ANN I'm reading a tutorial about ANN. They use the following cost function:
As you can see this equation includes a norm. I'm new to the concept of norm.
My question is what kind of norm they use here (There are Absolute-value norm, Euclidean norm, Euclidean norm of a complex number, Taxicab norm, Manhattan norm, p-norm, Maximum norm, infinity norm, uniform norm, supremum norm, Zero norm, etc.) Reading the article they state that
The two vertical lines represent the $L^2$ norm of the error, or what is known as the sum-of-squares error (SSE)
Otherwise known as the Euclidean norm, Euclidean length, $L^2$ distance, $ℓ^2$ distance, $L^2$ norm, $ℓ^2$ norm, $2$-norm, or square norm.
Also speaking from my experience, most of the time when a norm is specified without a subscript then you can generally assume that it is the Euclidean norm. | {
"domain": "datascience.stackexchange",
"id": 7505,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-network, deep-learning, mathematics, cost-function",
"url": null
} |
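One detail worth keeping straight when reading that quote: the sum-of-squares error is the *square* of the $L^2$ norm, not the norm itself. A minimal sketch:

```python
import math

errors = [3.0, -4.0]                 # illustrative prediction errors
sse = sum(e * e for e in errors)     # sum-of-squares error: 25.0
l2 = math.sqrt(sse)                  # Euclidean / L2 norm: 5.0
assert l2 == math.hypot(*errors)     # hypot computes the same L2 norm
print(sse, l2)  # 25.0 5.0
```

Since squaring is monotone on non-negative values, minimizing the SSE and minimizing the $L^2$ norm of the error give the same optimum, which is why the two are used interchangeably in cost functions.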
arduino, wheel, two-wheeled, interrupts
void doEncoderRight(){
if (firstChangeR) delay(1); // if this is the first detection then we wait
// for the bounce to be over
// if the current state is different from the last saved state then:
// a real change happened and it's not part of the bouncing but
// actually the real beginning of the change: the first bounce !
if (digitalRead(2) != right_set) {
right_set = !right_set; // so we change the real state
countR ++; // we also increment the right encoder
// since this was the firstChange the next are part of bouncing, so:
firstChangeR = false;
}
}
void doEncoderLeft(){
if (firstChangeL) delay(1);
if (digitalRead(3) != left_set) {
left_set = !left_set;
countL ++;
firstChangeL = false;
}
}
Tell me what you think about it. Do you think it's reliable, and is there any improvement you can propose? | {
"domain": "robotics.stackexchange",
"id": 947,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "arduino, wheel, two-wheeled, interrupts",
"url": null
} |
quantum-mechanics, quantum-interpretations, wavefunction-collapse, decoherence, quantum-measurements
Title: Is the Copenhagen interpretation still the most widely accepted position? In my undergraduate Quantum Mechanics textbook (Griffiths), of the Copenhagen interpretation it says
"Among physicists it has always been the most widely accepted position".
I'm currently reading another book called "Philosophy of Physics: Quantum Theory" by Tim Maudlin. In the book's introduction, he points out that he does not even mention the Copenhagen interpretation in the rest of his book at all. His reasoning is that
"a physical theory should
clearly and forthrightly address two fundamental questions: what there
is, and what it does", | {
"domain": "physics.stackexchange",
"id": 74328,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-interpretations, wavefunction-collapse, decoherence, quantum-measurements",
"url": null
} |
For instance you can declare a matrix class as an array of, say, 16 contiguous floats. That's fine. Say the coefficients m14, m24, m34 represent the translation part of the matrix (Tx, Ty, Tz); you then assume your "convention" is row-major, even though you are told to use the OpenGL matrix convention, which is said to be column-major. Here the possible confusion comes from the fact that the mapping of the coefficients in memory is different from the mental representation you make yourself of a "column-major" matrix. You code "row" but you were told to use (from a mathematical point of view) "column", hence your difficulty in making sense of whether you are doing things right or wrong. | {
"domain": "scratchapixel.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787846017969,
"lm_q1q2_score": 0.80478021135912,
"lm_q2_score": 0.8152324826183822,
"openwebmath_perplexity": 921.7886747497524,
"openwebmath_score": 0.6234551668167114,
"tags": null,
"url": "http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-4-geometry/conventions-again-row-major-vs-column-major-vector/"
} |
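The point about memory mapping versus mental representation can be made concrete: the same 16 contiguous floats yield transposed matrices depending on which convention you read them with. A sketch using NumPy; the translation values are illustrative:

```python
import numpy as np

# The same 16 contiguous floats, interpreted under both conventions.
# Here (Tx, Ty, Tz) = (5, 6, 7) is the translation part.
data = [1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        5, 6, 7, 1]

row_major = np.array(data, float).reshape(4, 4)             # C order
col_major = np.array(data, float).reshape(4, 4, order="F")  # Fortran order

assert (row_major[3, :3] == [5, 6, 7]).all()  # translation in the last row
assert (col_major[:3, 3] == [5, 6, 7]).all()  # translation in the last column
assert (row_major.T == col_major).all()       # the two views are transposes
```

Either way the bytes in memory are identical; only the row/column labeling (and therefore whether you multiply with row vectors on the left or column vectors on the right) changes.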
complexity-theory, interactive-proof-systems
Title: Number of rounds in interactive proofs - Arora & Barak In the web draft of Arora and Barak, "Computational Complexity: A Modern Approach", the way I understand their definition of a round of interaction is that it consists of either the verifier or the prover sending a message. In other sources on the matter, it seems to me that a round consists of both the verifier and the prover sending a message. Could someone clarify which of the definitions is the one that is usually used? It could be that both definitions are used by different authors. Whenever you use the concept of rounds, make sure to tell the reader what you mean by a round. In any case, the two concepts differ by a factor of $2$. | {
"domain": "cs.stackexchange",
"id": 1621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, interactive-proof-systems",
"url": null
} |
organic-chemistry, grignard-reagent
Title: Reactions of Grignard Reagent: 1,2 vs 1,4 addition So I was reading through Grignard reagent and I came across this answer by jerepierre, which mentions:
In general, Grignard reagents and organolithium reagents add directly to the carbonyl carbon, while organocuprates (organocopper reagents) add to the beta-position of an unsaturated ketone.
Now, I want to know, what is the meaning of 'in general'? Can anyone please provide me some cases where the general rules are not followed.
Also, I came to know that:
give 1,4 addition with $\ce{PhMgBr}$ to give:
(that reference is difficult to provide at the moment).
Can someone please explain why this happened? This is not a 1,4 addition. It is a 1,2 addition across a carbon-carbon double bond. This mode of reaction is unusual for Grignard reagents, but here a highly stabilized "double benzylic" anion is formed and the competitive reaction of adding to the carbonyl group is sterically hindered. | {
"domain": "chemistry.stackexchange",
"id": 10210,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, grignard-reagent",
"url": null
} |
c#, performance, queue
UPDATE: I performed some tests and I noticed that the TryDequeue method of the version proposed above suffers from performance issues when the priority mode is enabled: the Remove method, called on the m_FifoOrder linked list, performs a linear search, which is an O(n) operation. Obviously, the performance degrades further as n grows large.
In order to reduce the latency caused by this method, I created a new version of the priority queue: the FastPriorityQueue class. The inner class ItemInfo simply contains the object to be enqueued and the priority that is assigned during the queuing operation. An ItemInfo object is always inserted at the end of the m_FifoOrder linked list, so that the AddLast method returns a reference to the last added LinkedListNode<ItemInfo>: this reference is enqueued to one of the m_PriorityQueues queues depending on the chosen priority.
public class FastPriorityQueue
{
private class ItemInfo
{
public object Data { get; set; } | {
"domain": "codereview.stackexchange",
"id": 2463,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, queue",
"url": null
} |
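The design change described above (keeping a reference to the linked-list node so removal needs no linear search) is language-agnostic. Below is a minimal Python sketch of the idea, standing in for the C# LinkedListNode<ItemInfo> handle; the class and method names are made up for illustration:

```python
class _Node:
    __slots__ = ("value", "prev", "next")
    def __init__(self, value):
        self.value = value
        self.prev = self.next = None

class LinkedFifo:
    """Doubly linked FIFO whose append returns the node itself, so a
    caller holding that handle can remove it in O(1) with no search,
    mirroring enqueueing the LinkedListNode<ItemInfo> reference."""
    def __init__(self):
        self.head = self.tail = None

    def append(self, value):
        node = _Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev, self.tail.next = self.tail, node
            self.tail = node
        return node                      # keep this handle to unlink later

    def unlink(self, node):              # O(1), unlike a searching Remove()
        if node.prev: node.prev.next = node.next
        else:         self.head = node.next
        if node.next: node.next.prev = node.prev
        else:         self.tail = node.prev

    def to_list(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

fifo = LinkedFifo()
handles = [fifo.append(v) for v in "abc"]
fifo.unlink(handles[1])                  # O(1) removal of "b"
print(fifo.to_list())  # ['a', 'c']
```

This is exactly the trade made in FastPriorityQueue: a little extra indirection per item in exchange for constant-time dequeue from the FIFO order.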
programming-languages, type-theory
Title: Are references of any use without updating? Almost all type-theoretical treatments of references that I've studied introduce references as accompanied with at least three operations (sometimes including the fourth):
Construction (allocation): $ \text{ref } e $
Elimination (dereferencing): $ !e $
Updating: $ e_1 := e_2 $
(not as common) Destruction (deallocation): $ \text{free } e $ | {
"domain": "cs.stackexchange",
"id": 2330,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming-languages, type-theory",
"url": null
} |
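For concreteness, the first three operations can be sketched as a tiny reference cell in Python (deallocation, $\text{free } e$, is left to the garbage collector here). Without the updating operation, such a cell is observationally just an immutable box, which is the intuition behind the question:

```python
class Ref:
    """A reference cell with the first three operations listed above;
    deallocation (free e) is left to Python's garbage collector."""
    def __init__(self, value):   # construction:  ref e
        self._value = value

    def get(self):               # elimination:   !e
        return self._value

    def set(self, value):        # updating:      e1 := e2
        self._value = value

r = Ref(1)                       # ref 1
r.set(r.get() + 41)              # r := !r + 41
print(r.get())  # 42
```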
• @Mathemagician1234: If you meant that I had not considered the case where $A$ and $B$ are empty - the question requires me to solve assuming they are both non-void. – Ishfaaq Feb 17 '14 at 6:08
• The way you worded the question doesn't make that clear. But I think it's worth covering all cases if you're going to write up a full response. Notice since the empty set is open, the implication is true in that case; in effect, this is the trivial case of the answer to your question. Little things like this matter in mathematics. – Mathemagician1234 Feb 17 '14 at 6:36 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9799765546169712,
"lm_q1q2_score": 0.8442872409231954,
"lm_q2_score": 0.8615382040983515,
"openwebmath_perplexity": 189.45590151905301,
"openwebmath_score": 0.9109712839126587,
"tags": null,
"url": "https://math.stackexchange.com/questions/679096/a-times-b-is-an-open-set-in-bbb-r2-implies-a-and-b-are-both-open-in"
} |
The solution set of $$A\overrightarrow{x}=\overrightarrow{0}$$ is $$\operatorname{Span}\{\overrightarrow{v_1},\ldots,\overrightarrow{v_k}\}$$ for some vectors $$\overrightarrow{v_1},\ldots,\overrightarrow{v_k}$$.
The solution set of $$A\overrightarrow{x}=\overrightarrow{b}$$ is $$\{\overrightarrow{p}+\overrightarrow{v}\:|\: A\overrightarrow{v}=\overrightarrow{0}\}$$ where $$A\overrightarrow{p}=\overrightarrow{b}.$$
So a nonhomogeneous solution is a sum of a particular solution and a homogeneous solution. To justify it, let $$\overrightarrow{y}$$ be a solution of $$A\overrightarrow{x}=\overrightarrow{b}$$, i.e., $$A\overrightarrow{y}=\overrightarrow{b}$$. Then $$A(\overrightarrow{y}-\overrightarrow{p})=\overrightarrow{b}-\overrightarrow{b}=\overrightarrow{0}.$$ Then $$(\overrightarrow{y}-\overrightarrow{p})=\overrightarrow{v}$$ where $$A\overrightarrow{v}=\overrightarrow{0}$$. Thus $$\overrightarrow{y}=\overrightarrow{p}+\overrightarrow{v}$$. | {
"domain": "learnmathonline.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9921841109796004,
"lm_q1q2_score": 0.8202503043183149,
"lm_q2_score": 0.8267117919359419,
"openwebmath_perplexity": 75.7955109026582,
"openwebmath_score": 0.9548026919364929,
"tags": null,
"url": "https://learnmathonline.org/LinearAlgebra/GeometryOfSolutionSets.html"
} |
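The decomposition into a particular plus a homogeneous solution can be checked numerically; the matrix and vectors below are illustrative choices, not from the source:

```python
import numpy as np

# Illustrative consistent system with a nontrivial null space.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])
b = np.array([6., 12.])

p = np.array([1., 1., 1.])    # particular solution:  A @ p == b
v = np.array([2., -1., 0.])   # homogeneous solution: A @ v == 0

assert np.allclose(A @ p, b)
assert np.allclose(A @ v, 0)
# p plus ANY multiple of a null-space vector is again a solution:
assert np.allclose(A @ (p + 3.5 * v), b)
```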
par(mfrow=c(1,2))
hist(catch, prob=T, br=60, col="skyblue2")
plot(ecdf(catch))
curve(.4*pexp(x, 1/3)+.6*(pnorm((x-mu)/sg) - pnorm((-x-mu)/sg)), 0, 50,
• So I got the CDF for a rainy day as $1-e^{-\frac{1}{3}x}$ and for the CDF of $|Y|$, I got $P(-y\le Y\le y)=P(z\le \frac{y-\mu}{\sigma})-P(z\le \frac{-y-\mu}{\sigma})=\Phi(\frac{y-\mu}{\sigma})-\Phi(\frac{-y-\mu}{\sigma})$. So then the CDF of the whole thing would just be 0.4(CDF of rainy day)+0.6(CDF of |Y|). Does this look right to you? @BruceET – gofish Mar 13 '17 at 7:22
• Your "CDF" for $|Y|$ doesn't seem to reach 1 toward the right. Maybe add, not multiply? See Addendum. – BruceET Mar 13 '17 at 18:24 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9822876976669629,
"lm_q1q2_score": 0.8206852000999855,
"lm_q2_score": 0.8354835371034368,
"openwebmath_perplexity": 426.64284684382375,
"openwebmath_score": 0.8333212733268738,
"tags": null,
"url": "https://math.stackexchange.com/questions/2184301/probability-pdf-and-cdf-of-a-standard-normal-distribution"
} |
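BruceET's point (add the weighted CDFs, don't multiply them, so the mixture still reaches 1) can be checked numerically. The parameters mu and sg below are assumed purely for illustration:

```python
import math

mu, sg = 10.0, 5.0   # assumed parameters, only for illustration

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def mixture_cdf(x):
    rainy = 1 - math.exp(-x / 3)                        # Exponential, mean 3
    folded = Phi((x - mu) / sg) - Phi((-x - mu) / sg)   # CDF of |Y|, Y ~ N(mu, sg^2)
    return 0.4 * rainy + 0.6 * folded

assert abs(mixture_cdf(0.0)) < 1e-9          # starts at 0
assert abs(mixture_cdf(1e6) - 1.0) < 1e-9    # reaches 1: the weights ADD to 1
assert mixture_cdf(5.0) < mixture_cdf(10.0)  # nondecreasing, as a CDF must be
```

Multiplying the two component CDFs instead would give a function that is too small everywhere and still tends to 1 only because both factors do, which is not the distribution of the mixture.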
java, programming-challenge, interview-questions, dynamic-programming
Deque<int[]> stack = new ArrayDeque<>();
int x = 0;
int y = 0;
while (true) {
if (x == maxX && y == maxY) {
// Found the exit!
return true;
} else if (x + 1 <= maxX && maze[y][x + 1]) {
// Try moving right
stack.push(new int[]{x + 1, y});
x++;
} else if (y + 1 <= maxY && maze[y + 1][x]) {
// Try moving down
stack.push(new int[]{x, y + 1});
y++;
} else if (!stack.isEmpty()) {
// Mark as dead end (so we will not try to reach here again)
maze[y][x] = false;
int[] current = stack.pop();
x = current[0];
y = current[1];
} else {
// No way to go -> impossible to reach the exit
return false;
}
}
} | {
"domain": "codereview.stackexchange",
"id": 31433,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, programming-challenge, interview-questions, dynamic-programming",
"url": null
} |
time-series, data-science-model, methodology, methods
Title: Time-series decomposition to a base level and an effect of another feature I've got time-series data (let's denote it as y) and some feature (let's denote it as x). y is dependent on x, but x is often equal to 0. Even then, y is not 0, so we can assume that there's a base level in y which is independent of x. Additionally, we can observe some seasonality in y. I need to decompose y into base level and an effect of x. And I need some hint about methodology. I have googled and found plenty of methods to decompose time-series data into trend, seasonality and random noise. However, my case is different, because I have an additional feature x and I would like just to extract its effect, and leave trend, seasonality and noise altogether. What I have in mind can be represented on a plot below, where turquoise area represents base level of y and red area represents an effect of x. What method would allow making such a split? I would also appreciate any links or materials. You are | {
"domain": "datascience.stackexchange",
"id": 3737,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "time-series, data-science-model, methodology, methods",
"url": null
} |
three other approaches can be recognized, based on linear approximation, based on multiplicities, or based on transition points. First, have a look at the graph below and observe that the slope of the (red) tangent line at the point A is the same as the y-value of the point B. The calculation of the slope is shown. A secant to the graph: slope of this line = (20 − 8) miles / (35 − 10) min = 0.48 miles per minute. This will change the first point on the secant line, keeping the horizontal distance h between the two points the same. (graph omitted) In the above graph of y = f(x), find the slope of the secant line through the points (−4, f(−4)) and (3, f(3)). Example 1: Identify the x and ∆x for the interval [2, 10]. The average rate of change in f(t) between t = a and t = b is the same as the slope of the secant line between the points (a, f(a)) and (b, f(b)) on the graph of f. Find the slope of the secant line through P and Q; call it m_PQ. (b) Estimate the slope of the tangent line at P by | {
"domain": "otticamuti.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9873750496039277,
"lm_q1q2_score": 0.8291323699645727,
"lm_q2_score": 0.8397339696776499,
"openwebmath_perplexity": 238.17182620148998,
"openwebmath_score": 0.7098960280418396,
"tags": null,
"url": "http://oogo.otticamuti.it/find-the-slope-of-the-secant-line-through-the-points.html"
} |
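The secant-to-tangent idea running through that passage can be stated in a few lines of code (the function f is an illustrative choice):

```python
def secant_slope(f, a, b):
    """Slope of the secant line through (a, f(a)) and (b, f(b)),
    i.e. the average rate of change of f over [a, b]."""
    return (f(b) - f(a)) / (b - a)

f = lambda x: x ** 2
print(secant_slope(f, 1, 3))  # 4.0, from (9 - 1) / (3 - 1)

# As b approaches a, the secant slope approaches the tangent slope f'(1) = 2:
assert abs(secant_slope(f, 1, 1 + 1e-8) - 2) < 1e-6
```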
quantum-mechanics, double-slit-experiment, wave-particle-duality
path information is not known then the interference pattern is present. It's a little complicated, you'll have to study the diagram and the description carefully. But it is an heroic effort to observe "which slit?" | {
"domain": "physics.stackexchange",
"id": 66659,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, double-slit-experiment, wave-particle-duality",
"url": null
} |
c, generics, collections, hash-map, set
} \
\
FMOD SNAME *PFX##_difference(SNAME *_set1_, SNAME *_set2_) \
{ \
V value; \
size_t index; \
SNAME##_iter iter; \
\
SNAME *_set_r_ = PFX##_new(_set1_->capacity, _set1_->load, _set1_->cmp, _set1_->hash); \
\ | {
"domain": "codereview.stackexchange",
"id": 34165,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, generics, collections, hash-map, set",
"url": null
} |
$|v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}}$ Learn more about eig(), eigenvalues, hermitian matrix, complex MATLAB 2. … (b) Eigenvectors for distinct eigenvalues of A are orthogonal. Eigenvectors corresponding to distinct eigenvalues are orthogonal. Inner Products, Lengths, and Distances of 3-Dimensional Real Vectors. 466 CHAPTER 8 COMPLEX VECTOR SPACES. Corollary: there exists a unitary matrix $V$ such that $V^{-1}HV$ is a real diagonal matrix. If A is real-symmetric or Hermitian, its eigendecomposition (eigen) is used to compute the square root. Let $x = a + ib$, where $a, b$ are real numbers, and $i = \sqrt{-1}$. If $H^{*} = H$ (symmetric if real), then all the eigenvalues of H are real. These two proofs are essentially the same. In other words, the matrix A is Hermitian if and only if $A = A^{H}$. Obviously a Hermitian matrix must be square, i.e., it must have dimension $m \times m$ | {
"domain": "fluencyuniversity.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9553191271831558,
"lm_q1q2_score": 0.8445396379518879,
"lm_q2_score": 0.8840392848011834,
"openwebmath_perplexity": 661.7308412130558,
"openwebmath_score": 0.9100313186645508,
"tags": null,
"url": "http://fluencyuniversity.com/m9o4am/b437h.php?cc98e9=eigenvalues-of-hermitian-matrix"
} |
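The claims in that passage (a Hermitian matrix has real eigenvalues and is unitarily diagonalizable) can be checked numerically with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T                        # Hermitian by construction: H* = H

w = np.linalg.eigvals(H)                  # generic solver returns complex values...
assert np.allclose(w.imag, 0)             # ...but the eigenvalues are real

w2, V = np.linalg.eigh(H)                 # Hermitian solver: real w2, unitary V
assert np.allclose(V.conj().T @ V, np.eye(4))
assert np.allclose(V @ np.diag(w2) @ V.conj().T, H)   # V diag(w2) V* = H
```

`eigh` is the solver to prefer for Hermitian input: it exploits the structure, returns the eigenvalues already real and sorted, and is what MATLAB's `eig` effectively does when it detects symmetry.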
java, array, pig-latin
The while(true) loop is bogus too. I had to look for the exit condition. Make it explicit.
You remember to close the Scanner, but do it in a finally block, so a possible Exception doesn't bypass the closing. For extra points use a try-with-resources structure (see my refactored example).
Wait, remember I said the code would work fine with the result of Arrays.asList()? It will, but you don't even need a list to loop over an array. Just use an advanced for loop.
for (String temp : userString.split("\\s")) {
...
} | {
"domain": "codereview.stackexchange",
"id": 16645,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, array, pig-latin",
"url": null
} |
fluid-dynamics, atmospheric-science, vortex
$\phantom{texttexttexttexttexttexttexttexttextte}$
The plot shows the (azimuthally averaged, i.e. averaged along circles) tangential wind profile as a function of radius at a fixed height. Looking at this picture one is naturally reminded of a Rankine vortex, which consists of a forced vortex core surrounded by a free vortex. You said:
[...] a forced vortex has a velocity profile u∝r (r is radial distance from centre of vortex), concluding at some outer boundary r=R to avoid fluid particles travelling at infinite speed. At this outer boundary it requires an external torque to be constantly supplied to keep going. | {
"domain": "physics.stackexchange",
"id": 33163,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, atmospheric-science, vortex",
"url": null
} |
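The Rankine profile described here (a forced-vortex core joined continuously to a free vortex) can be sketched as a piecewise function; the normalization below is one common illustrative choice, not taken from the source:

```python
def rankine_velocity(r, R=1.0, Gamma=1.0):
    """Tangential velocity of a Rankine vortex (R, Gamma illustrative):
    forced-vortex core (u proportional to r) for r <= R, free vortex
    (u proportional to 1/r) outside; the branches match at r = R."""
    if r <= R:
        return Gamma * r / R**2
    return Gamma / r

# Linear rise to a peak at the core radius, then 1/r decay outside:
assert rankine_velocity(0.5) < rankine_velocity(1.0) > rankine_velocity(2.0)
assert abs(rankine_velocity(1.0) - rankine_velocity(1.0 + 1e-9)) < 1e-6
```

The peak of the azimuthally averaged wind at the radius of maximum winds in the quoted plot is exactly what suggests this two-branch structure.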
javascript, jquery, playing-cards
<fieldset id="result">
<legend>Game Result</legend>
<div id="gameResult"></div>
</fieldset>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="BlackJack.js"></script>
This looks like it showcases your skills well, and the comments will help anyone looking back to know that you're a programmer who is interested in doing their research and knowing things.
I tested this in Chrome on my local computer.
Your question included a mention about your timeout function not working. You'll need to make a stackoverflow question for that.
Overall:
This was very straightforward to set up and the game works great! It is a little confusing visually that the cards aren't cleaned up after every game, since the deck is reshuffled after each play. | {
"domain": "codereview.stackexchange",
"id": 35048,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, playing-cards",
"url": null
} |
eyes, human-eye, light, dogs
Though the cause of this effect is wired in the biology of the eye, some cameras can reduce red-eye by sending a few preliminary flashes before the final flash to give the pupils enough time to contract and adapt to the increased-light conditions. Another way to reduce the effect is to avoid looking directly into the camera lens, which will reduce the reflection of light. Finally, if all else fails, modern image editing software, such as Photoshop, can remove the red discoloration. | {
"domain": "biology.stackexchange",
"id": 3439,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "eyes, human-eye, light, dogs",
"url": null
} |
java
//Reading text tag location
System.out.println("From what TXT File are you reading the tags that need changing: ");
String tagChangeText = input.next();
readFromTextFile(tagChangeText);
}
public void changeToDICOMObject(String path)
{
DicomInputStream din = null;
try {
din = new DicomInputStream(new File(path));
dcmObj = din.readDicomObject();
}
catch (IOException e) {
e.printStackTrace();
return;
}
finally {
try {
din.close();
}
catch (IOException ignore) {
}
}
System.out.println("Now reading DCM File");
}
public void readFromTextFile(String path) throws IOException
{
Scanner read = new Scanner(new File(path));
while (read.hasNext())
{
list.add(read.next());
} | {
"domain": "codereview.stackexchange",
"id": 12835,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java",
"url": null
} |
wave-particle-duality, interference, photoelectric-effect, thought-experiment
Which one of the two will happen(Or will something entirely different happen)?
Has this experiment been done before? First, note that the metal "plate" will need to be more like a strip -- it needs to be small enough to fit into one trough (or peak) in the interference pattern.
But this thin strip that interacts with light is just like any other photodetector (including your eye). So, when it is in a trough, no electrons are emitted.
Your intuitive picture of "two photons reaching the interface" is misleading you in this context. The amplitude of the photon wave function is (essentially) zero at the center of the trough. Thus, the rate of electron ejection is (essentially) zero. | {
"domain": "physics.stackexchange",
"id": 10023,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "wave-particle-duality, interference, photoelectric-effect, thought-experiment",
"url": null
} |
javascript, programming-challenge, functional-programming, ecmascript-6
const CumulativeDiffs = (totalSoFar, val) => {
const row = val.split(/\s+/);
const max = Math.max(...row);
const min = Math.min(...row);
return totalSoFar + max - min;
};
const solution = INPUT.split(/\n/)
.reduce(CumulativeDiffs, 0);
console.log("solution ", solution); | {
"domain": "codereview.stackexchange",
"id": 28912,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, programming-challenge, functional-programming, ecmascript-6",
"url": null
} |
forces, electric-circuits, electric-fields, electricity, charge
The charge carriers alternately acquire kinetic energy from the electric field and give up kinetic energy due to collisions with atoms, other electrons, or impurities in the conductor that cause resistance.
Essentially, the positive work done by the electric field on the charge carriers equals the negative work done on the charge carriers by the resistance, for a net work of zero and no change in kinetic energy.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 98239,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "forces, electric-circuits, electric-fields, electricity, charge",
"url": null
} |
java, inheritance, interface
IAddressService.java: (7 lines, 199 bytes)
public interface IAddressService extends IDependentListService<Address>{
public void promoteToMainAddress(long addressId);
}
IContactPersonService.java: (7 lines, 167 bytes)
public interface IContactPersonService extends IDependentListService<ContactPerson>{
}
IContractService.java: (9 lines, 234 bytes)
public interface IContractService extends IDependentListService<Contract> {
public List<Contract> getAllContractsByLoggedInUser();
}
ICustomerService.java: (9 lines, 220 bytes)
public interface ICustomerService extends IListService<Customer> {
}
IProductService.java: (10 lines, 262 bytes)
public interface IProductService extends IListService<Product> {
public List<Product> getAllCurrentProducts();
public List<Product> getAllArchivedProducts();
}
IProjectService.java: (9 lines, 228 bytes)
public interface IProjectService extends IDependentListService<Project>{
public List<Project> getAllProjectsByLoggedInUser();
} | {
"domain": "codereview.stackexchange",
"id": 6893,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, inheritance, interface",
"url": null
} |
c++
// Read lines from a non-blocking reader (e.g. an O_NONBLOCK socket).
// Every complete line is sent to a consumer function, without a newline or null terminator.
// Lines larger than BufSize are silently discarded.
//
// Reader should have approximately this signature:
// ssize_t reader(void *buffer, size_t maxSize);
// Should return 0 on EOF, -1 on error, otherwise the number of read bytes.
// In practice, this should just call read() on a file descriptor.
// Consumer should have approximately this signature:
// void consumer(const char *start, size_t size);
template<size_t BufSize>
class LineBuffer {
std::array<char, BufSize> _buffer;
size_t _bytesBuffered {0};
size_t _bytesConsumed {0};
bool _discardLine {false};
public:
template<typename Reader, typename Consumer>
size_t readLines(const Reader& reader, const Consumer& consumer)
{
char *buf = _buffer.data();
while (true) { | {
"domain": "codereview.stackexchange",
"id": 42291,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++",
"url": null
} |
ros, rosserial
self.target.setParam(rospy.names.get_caller_id(), rospy.names.resolve_name(key), val)
File "/usr/lib/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
File "/usr/lib/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/lib/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
File "/usr/lib/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
File "/usr/lib/python2.7/httplib.py", line 975, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 835, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 797, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 778, in connect | {
"domain": "robotics.stackexchange",
"id": 26267,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rosserial",
"url": null
} |
quantum-mechanics, statistical-mechanics, entropy
PS : I am sure coarse graining and quantisation have entirely different origins, my question here to is to only address their effects rather than their causes. The von Neumann entropy, written in terms of the quantum mechanical density operator, is a constant of the motion if you keep track of everything (including entanglement with the environment) and don't have any collapse events (which, depending on your favorite interpretation of quantum mechanics, might not exist anyway). | {
"domain": "physics.stackexchange",
"id": 50067,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, statistical-mechanics, entropy",
"url": null
} |
c#, validation, silverlight
var validationResults = new Collection<ValidationResult>();
ComponentModelValidator.TryValidateObject(item, new ValidationContext(item), validationResults);
if (item is Entity)
{
foreach (var validationResult in validationResults)
{
((Entity)item).ValidationErrors.Add(validationResult);
}
}
if (item is ComplexObject)
{
foreach (var validationResult in validationResults)
{
((ComplexObject)item).ValidationErrors.Add(validationResult);
}
}
return validationResults;
} | {
"domain": "codereview.stackexchange",
"id": 11767,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, validation, silverlight",
"url": null
} |
c#, entity-framework
/// <summary>
/// Generic base class for all other TimeTracker.BusinessLayer.Facade classes.
/// </summary>
/// <typeparam name="TDbContext">The type of the db context.</typeparam>
/// <typeparam name="TEntity">The type of the entity.</typeparam>
/// <typeparam name="TEntityKey">The type of the entity key.</typeparam>
/// <remarks></remarks>
public abstract class FacadeBase<TDbContext, TEntity, TEntityKey>
where TDbContext : DbContext, new()
where TEntity : class
{
/// <summary>
/// Gets the db context.
/// </summary>
public TDbContext DbContext
{
get
{
if (DbContextManager == null)
{
this.InstantiateDbContextManager();
}
return DbContextManager != null
? DbContextManager.GetDbContext<TDbContext>()
: null;
}
} | {
"domain": "codereview.stackexchange",
"id": 3692,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, entity-framework",
"url": null
} |
c#, interview-questions, interval
// place the first person's meetings as the first result
List<TimePeriod> firstPersonList = Utiliies.GetTimesAvail(people.FirstOrDefault());
List<TimePeriod> result = new List<TimePeriod>();
//intersect the meetings with the others
for (int i = 1; i < people.Count; i++)
{
List<TimePeriod> secondPersonList = Utiliies.GetTimesAvail(people[i]);
foreach (var secondSlot in secondPersonList)
{
foreach (var firstSlot in firstPersonList)
{
if (secondSlot.SameDay(firstSlot))
{
CheckHourIntersections(firstSlot, secondSlot, result);
}
}
}
//copy the result into the first person
firstPersonList.Clear();
foreach (var timeSlot in result)
{ | {
"domain": "codereview.stackexchange",
"id": 35253,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, interview-questions, interval",
"url": null
} |
c#, multithreading, locking
Title: How to correctly get an IDisposable when you need to lock the factory? If I need to create an IDisposable object from a factory, but the factory object is not thread-safe and requires me to lock on it, is this the correct pattern to use?
public void DisposeExample(FactoryClass factoryClass)
{
DispObject dispObject = null;
lock(factoryClass)
{
dispObject = factoryClass.GetDispObject();
}
using(dispObject)
{
dispObject.DoWork();
}
} Yes, assuming the factory is shared between threads and the created object doesn't contain any resource that is shared with other objects from the same factory.
But as was already pointed out in your previous question, a better solution might be to have a separate factory for each thread. Factories usually don't use much resources, so it should be fine to have more of them. | {
"domain": "codereview.stackexchange",
"id": 3053,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, multithreading, locking",
"url": null
} |
c++, object-oriented, c++14, networking, wrapper
void TelnetClient::telnetEvent(telnet_event_t *event) {
switch (event->type) {
// data received
case TELNET_EV_DATA:
mReceivedMsg = std::string(event->data.buffer, event->data.size);
#if DEBUG_MODE
std::cout << "response: [" << mReceivedMsg << "]" << std::endl;
#endif
break;
// data must be sent
case TELNET_EV_SEND:
sendAll(event->data.buffer, event->data.size);
break;
// execute to enable local feature (or receipt)
case TELNET_EV_DO:
throw NotImplemented();
// demand to disable local feature (or receipt)
case TELNET_EV_DONT:
throw NotImplemented();
// respond to TTYPE commands
case TELNET_EV_TTYPE:
throw NotImplemented();
// respond to particular subnegotiations
case TELNET_EV_SUBNEGOTIATION:
throw NotImplemented();
// error
case TELNET_EV_ERROR:
throw std::runtime_error("telnet error: " + std::string(event->error.msg));
default:
// ignore | {
"domain": "codereview.stackexchange",
"id": 43637,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, object-oriented, c++14, networking, wrapper",
"url": null
} |
c++, object-oriented, game, mvvm, sfml
constexpr float play_string_x_offset = 18.f;
constexpr float play_string_y_offset = 28.f;
constexpr float six_string_x_offset = 80.f;
constexpr float eight_string_x_offset = 40.f;
constexpr float twelve_string_x_offset = 15.f;
constexpr float sixteen_string_x_offset = 8.f;
constexpr float pair_string_x_offset = 40.f;
constexpr float pair_string_y_offset = 100.f;
constexpr float game_button_width = 128.f;
constexpr float game_button_height = 64.f;
constexpr float pause_x = 1244.f;
constexpr float reset_x = 1372.f;
constexpr float pause_offset = 12.f;
constexpr float reset_offset = 5.f;
constexpr float display_x = 1244.f;
constexpr float player_one_display_y = 64.f;
constexpr float player_two_display_y = 432.f;
constexpr float display_offset = 20.f;
constexpr float display_width = 256.f;
constexpr float display_height = 368.f; | {
"domain": "codereview.stackexchange",
"id": 34697,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, object-oriented, game, mvvm, sfml",
"url": null
} |
c#, asp.net, entity-framework, asp.net-mvc-3, authentication
return existingRole.Any();
}
public bool IsUserCached(string username)
{
return dbContext.Accounts.Any(x => x.Username == username);
}
public void CacheUser(string username)
{
dbContext.Accounts.Add(new Account
{
Active = true,
Username = username
});
dbContext.SaveChanges();
AssignRoleUser(username);
}
public bool AssignRoleUser(string username)
{
return AssignRole(username, Role.User);
}
public bool AssignRoleComment(string username)
{
return AssignRole(username, Role.Comment);
}
public bool AssignRoleCommentControl(string username)
{
return AssignRole(username, Role.CommentControl);
}
public bool AssignRoleSysAdmin(string username)
{
return AssignRole(username, Role.Sysadmin);
} | {
"domain": "codereview.stackexchange",
"id": 7046,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, asp.net, entity-framework, asp.net-mvc-3, authentication",
"url": null
} |
performance, python-3.x
Are there any other ways I could further optimize the code? Another interesting thing of note is that adding @lru_cache actually increases execution time as measured by timeit, despite the caching mechanism reducing the number of function calls to char_to_int() (from 120612 to 42). Disclaimer: I did not get any noticeable performance improvements but here are a few ideas nonetheless. Also, your code looks really good to me and there is not much to improve.
In calculate_hash, you compute a power in the body of the loop. Based on the way the exponent is computed, we can tell that the successive values will be: BASE ** (length - 1), BASE ** (length - 2), etc. A different option would be to compute the initial value with '**' once and then update the power using division by BASE.
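As a sketch of that idea (assuming a plain integer polynomial hash with no modulus — the original hash body is not shown here, so the constant and names are placeholders), the power is initialized once with `**` and then updated by exact integer division:

```python
BASE = 256  # placeholder base; the original code's constant is not shown

def calculate_hash(s: str) -> int:
    """Polynomial hash: s[0]*B^(n-1) + s[1]*B^(n-2) + ... + s[n-1]*B^0."""
    power = BASE ** (len(s) - 1)   # one '**' instead of one per character
    h = 0
    for ch in s:
        h += ord(ch) * power
        power //= BASE             # update the power by division
    return h
```

Since the powers are exact multiples of each other, integer floor division loses nothing here.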
We could get something like:
def calculate_hash(s: str, length: Optional[int] = None) -> int:
if length is not None:
s = islice(s, 0, length)
else:
length = len(s) | {
"domain": "codereview.stackexchange",
"id": 41269,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, python-3.x",
"url": null
} |
software-engineering, empirical-research, statistics
Sometimes, there are excuses for justifying the hypothesis with such a small sample size.
My question here is thus posed as a student of CS disciplines and an aspirant to learn more about statistics: how do computer scientists approach statistics?
This question might seem like I am asking what I have already explained, but that is only my own impression. I might be wrong, or I might be focusing on one group of practitioners, whereas other groups of CS researchers might be doing something else that follows better practices with respect to statistical rigor.
"domain": "cs.stackexchange",
"id": 2354,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "software-engineering, empirical-research, statistics",
"url": null
} |
neural-network, keras, tensorflow
])
#Compile and fit the model
model.compile(optimizer='adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
model.fit(train_images, train_labels, epochs=5)
#Test the model
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc) | {
"domain": "datascience.stackexchange",
"id": 8022,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-network, keras, tensorflow",
"url": null
} |
thermodynamics, heat
Title: Chemistry Thermodynamics and Sign Convention Can anyone please explain the sign conventions used in chemistry thermodynamics for heat and work, and also how to identify the sign of heat given positive or negative work, and vice versa? There exist two ways of defining the sign of the work, depending on who defined it.
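For contrast, in the physics/engineering convention the work done *by* the system counts as positive, so the first law reads:

```latex
\Delta U = Q - W \qquad \text{(physics convention: } W \text{ is work done \textit{by} the system)}
```

The two conventions describe the same physics; only the sign attached to $W$ differs.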
The definition proposed by the chemists implies that all energies that are added to a system are positive. $\Delta U = Q + W$ | {
"domain": "chemistry.stackexchange",
"id": 17074,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, heat",
"url": null
} |
electromagnetic-radiation, photons, temperature, thermal-radiation, wavelength
Question 2:
Mainly, I think 700 is the power emitted by object at 1cm^3 at 500nm. what does per unit frequency mean in this context then ?
EDIT:
Q1: Is the below already an integral derived from what you said ? and I guess, angle got integrated into π.
Q2: so, I think the above formula (2nd one in the picture) states what you said in the comments. In Wikipedia's formula, if I put 10mm in the formula as a wavelength, I would get the energy emitted per 1cm^3 in the range of 10mm and 11mm wavelengths, but with the formula shown in the picture, I can even go to a shorter range; for example, if I want to know the energy in the range of 10mm and 10.001mm, then I would put 10 for λ and 0.001 for dλ. The same can't be achieved with Wikipedia's general formula, as it only gives you the energy per unit wavelength, which is 1mm in the SI system. Is this all correct?
Mainly, I think 700 is the power emitted by object at 1cm^3 at 500nm. what does per unit frequency mean in this context then ? | {
"domain": "physics.stackexchange",
"id": 94626,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetic-radiation, photons, temperature, thermal-radiation, wavelength",
"url": null
} |
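For reference on the question above: the standard form of Planck's law per unit wavelength is

```latex
B_\lambda(\lambda, T) = \frac{2 h c^2}{\lambda^5} \, \frac{1}{e^{hc/(\lambda k_B T)} - 1}
```

The energy radiated in a small band $[\lambda, \lambda + d\lambda]$ is $B_\lambda \, d\lambda$, which is exactly why the "per unit wavelength" (or "per unit frequency", for the $B_\nu$ form) qualifier is necessary: $B_\lambda$ alone is a density, not an energy.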
game-ai, monte-carlo-tree-search, algorithm-request, minimax, heuristics
I think the main conceptual barrier you have to improvements is how to account for the complex behaviour of probabilities for drawing specific useful cards. There are a few ways to do this, but I think the simplest would be some kind of rollout (simulated look ahead), which might lead to more sophisticated algorithm such as Monte Carlo Tree Search (MCTS).
Here's how a really simple variant might work:
For each possible choice of play in the game that you are currently looking at:
Simulate the remaining deck (shuffle a copy of the known remaining cards)
Play a simulation (a "rollout") to the end of game against the simulated deck using a simple heuristic (your current greedy choice version should be good as long as it is fast enough, but even random choices can work). Take note of the final score.
Repeat 1.1 and 1.2 as many times as you can afford to (given allowed decision time). Average the result and save it as a score for the choice of play being considered. | {
"domain": "ai.stackexchange",
"id": 2391,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "game-ai, monte-carlo-tree-search, algorithm-request, minimax, heuristics",
"url": null
} |
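The rollout loop described in the answer above can be sketched as follows. Note that `legal_plays`, `unseen_cards`, and `greedy_playout` are placeholder names for game-specific code, not an existing API:

```python
import random

def choose_play(state, n_rollouts=200):
    """Pick the play whose average simulated end-of-game score is best."""
    best_play, best_score = None, float("-inf")
    for play in state.legal_plays():                   # each possible choice of play
        total = 0.0
        for _ in range(n_rollouts):
            deck = list(state.unseen_cards())          # known remaining cards
            random.shuffle(deck)                       # simulate the remaining deck
            total += state.greedy_playout(play, deck)  # heuristic rollout to end of game
        avg = total / n_rollouts                       # average score for this play
        if avg > best_score:
            best_play, best_score = play, avg
    return best_play
```

A full MCTS would reuse statistics across a tree of positions instead of restarting each rollout from the root, but this flat version already captures the "shuffle, play out, average" idea.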
algorithms, complexity-theory, time-complexity, search-algorithms
&&&&&&&\\\hline
&&&&&&&\\\hline
\end{array}$$
Otherwise, it must be at one of the following cells.
$$\begin{array}{|c|c|c|c|c|c|c|c|}\hline
&&&&&&&\\\hline
&&&&&&&\\\hline
&&&&&&&\\\hline
&&&&&&&\\\hline
\phantom{t}&\phantom{t}&\phantom{t}&
\phantom{t}&t&t&t&t\\\hline
&&&&t&&&\\\hline
&&&&t&&&\\\hline
&&&&t&&&\\\hline
\end{array}$$
Now query the following cells to determine whether the target is in the top row of that 4-by-4 square, excluding its top-left cell. If yes, we are left with 3 cells on the same row; otherwise we are left with 4 cells on the same column. Now start binary search.
$$\begin{array}{|c|c|c|c|c|c|c|c|}\hline
&&&&&&&\\\hline
&&&&&&&\\\hline
&&&&&&&\\\hline
&&&&&?&?&?\\\hline
&&&&&?&?&?\\\hline
\phantom{t}&\phantom{t}&\phantom{t}&
\phantom{t}&\phantom{t}&?&?&?\\\hline
&&&&&&&\\\hline
&&&&&&&\\\hline
&&&&&&&\\\hline
\end{array}$$ | {
"domain": "cs.stackexchange",
"id": 14138,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, complexity-theory, time-complexity, search-algorithms",
"url": null
} |
python, algorithm, game, dice
if oppClaim > 0:
for myClaim in range(oppClaim):
node = self.responseNodes[myClaim, oppClaim]
actionProb = node.strategy
node.u = 0.0
doubtUtil = 1 if oppClaim > rollAfterAcceptingClaim[myClaim] else -1
regret[self.DOUBT] = doubtUtil
node.u += actionProb[self.DOUBT] * doubtUtil
if oppClaim < self.sides:
nextNode = self.claimNodes[oppClaim, rollAfterAcceptingClaim[oppClaim]]
regret[self.ACCEPT] += nextNode.u
node.u += actionProb[self.ACCEPT] * nextNode.u
for a in range(len(actionProb)):
regret[a] -= node.u
node.regretSum[a] += node.pOpponent * regret[a] | {
"domain": "codereview.stackexchange",
"id": 32980,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, algorithm, game, dice",
"url": null
} |
with each given angle. Reference and Coterminal Angles: find a positive and a negative coterminal angle for each given angle. Negative coterminal angle = angle − 360°. If θ is an angle in standard position, its reference angle is always coterminal with the original angle θ. Find both a positive and a negative angle that is coterminal with each of the following angles. For a positive coterminal angle, add 360°: 112° + 360° = 472°. I understand how to find the reference angle for an angle in degrees: for example, 150° has a reference angle of 30°, because the 2nd quadrant goes from 90° to 180°, so you simply subtract 150° from 180° to come up with 30°. Coterminal angles: 1) 326°: 686° and −34°; 2) 530°: 170° and −190°; 3) −215°: 145° and −575°; 4) −84°: 276° and −444°; 5) 215°: 575° and −145°; 6) 255°: 615° and −105°; 7) −660°: 60° and −300°; 8) −255°. They came up with −340 degrees and 380 degrees. For any angle α, the positive coterminal | {
"domain": "hoerzeitungen.de",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9802808678600415,
"lm_q1q2_score": 0.8104097465911859,
"lm_q2_score": 0.8267117855317474,
"openwebmath_perplexity": 969.8806239304247,
"openwebmath_score": 0.6538034677505493,
"tags": null,
"url": "http://oddf.hoerzeitungen.de/coterminal-and-reference-angles.html"
} |
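The worksheet answers above follow a simple add-or-subtract-360° rule; a small sketch that reproduces them (the function name is ours):

```python
def coterminal_pair(angle):
    """Return one positive and one negative angle coterminal with `angle` (degrees)."""
    pos = angle % 360          # smallest non-negative coterminal angle
    if pos == 0:
        pos = 360
    neg = pos - 360
    if pos == angle:           # a coterminal angle must differ from the original
        pos += 360
    if neg == angle:
        neg -= 360
    return pos, neg
```

For example, `coterminal_pair(112)` gives `(472, -248)`, matching the 112° + 360° = 472° example in the text.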
genetics
Title: What is allelic effect sizes and direction? In a paper, Berkley, C. A., and C. Lexer. 2008. Admixture as the basis for genetic mapping. Trends in Ecology & Evolution 23:686–694, the definition of Genetic architecture is given. It says:
Genetic architecture: the number and genomic location of loci that
contribute to variation in a trait, as well as the allelic effect
sizes and direction, the genotypic effects (additivity and dominance)
and the extent of epistatic interactions among loci. | {
"domain": "biology.stackexchange",
"id": 4995,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "genetics",
"url": null
} |
answered Oct 9 '12 at 15:54. In coordinate geometry there are formulas for the distance between two points, between a point and a plane, and between lines and planes; this article only discusses the distance between two parallel planes (two planes with proportional normal vectors — e.g. both normal to i + 2j − k — are parallel). To find the distance between two parallel planes, pick any point on one plane and compute its distance to the other; the trick is to get a vector that points from one plane to the other (or vice versa), and the shortest distance is the modulus of its projection onto the common normal. The same approach applies in both 2D and 3D coordinate planes. A plane is a flat, two-dimensional | {
"domain": "designtemple.se",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9865717452580316,
"lm_q1q2_score": 0.8284578040603712,
"lm_q2_score": 0.8397339656668287,
"openwebmath_perplexity": 226.09025006307743,
"openwebmath_score": 0.7681993842124939,
"tags": null,
"url": "https://designtemple.se/4dx1qg03/85bcb3-distance-between-two-planes-formula"
} |
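The formula the page above is describing can be sketched as follows: for parallel planes written as $ax+by+cz+d_1=0$ and $ax+by+cz+d_2=0$ (same normal $(a,b,c)$), the separation is $|d_2-d_1|/\sqrt{a^2+b^2+c^2}$:

```python
import math

def plane_distance(a, b, c, d1, d2):
    """Distance between the parallel planes ax+by+cz+d1=0 and ax+by+cz+d2=0."""
    return abs(d2 - d1) / math.sqrt(a * a + b * b + c * c)
```

For example, the planes z = 0 and z = 5 (i.e. z − 5 = 0) are 5 units apart.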
java, graphics
If we look at the remaining while conditions, we see that once these conditions become true for the first time, neither x < xStart nor y > height will be true afterwards. So there is no need for a while loop. To save an additional level of indentation, we also invert the condition into a guard condition. Additionally, we extract the check for generating a random color into a separate method isColorChangeNeeded(), rename the class variable i to colorCheckCounter and the changeColor variable to colorCheckLimit, which should also be final. And finally we will rename the method to assignDiagonalPixelLocation.
public void assignDiagonalPixelLocation(){
x--;
y++;
if (x >= xStart && y <=height) { return; }
xLock++;
if (xLock > width) {
yLock++;
xLock = width;
if(yLock == height){
xLock = xStart;
yLock = yStart;
}
}
x = xLock;
y = yLock;
if(isColorChangeNeeded()){
generateColor();
}
} | {
"domain": "codereview.stackexchange",
"id": 10059,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, graphics",
"url": null
} |
javascript, html, datetime, finance
Title: Prorated refund calculator I wrote this little piece of script for my workplace (insurance, etc.) to help our admins, as a light and simple file I can just email to my colleagues.
The purpose of it is to calculate a prorated refund in case of a policy holder cancelling it. Basically a policy's purchase value "ticks down" gradually after a certain grace period (calculated in days), and any claims paid against the policy are then deducted from the remaining amount.
I tested it for a number of valid values and it works well. I also tested invalid input: it returns Invalid Date if either of the dates can't be reconciled to a valid Date() value, or if the calculations return NaN, which is fine with me, but I'm open to more elegant solutions. | {
"domain": "codereview.stackexchange",
"id": 13768,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, html, datetime, finance",
"url": null
} |
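The post above describes the calculation but doesn't include the script itself, so the following is a hypothetical sketch of the stated rules (full value during a grace period, then a gradual daily tick-down, with claims deducted from what remains). All names, parameters, and the linear-decay assumption are ours, not the author's:

```python
from datetime import date

def prorated_refund(purchase_value, start, cancel, term_days=365,
                    grace_days=30, claims_paid=0.0):
    """Hypothetical sketch: full refund value during the grace period,
    then a linear tick-down over the remaining term; any claims paid
    are deducted from the remaining amount (never below zero)."""
    days_held = (cancel - start).days
    if days_held <= grace_days:
        remaining = purchase_value
    else:
        used = min(days_held - grace_days, term_days - grace_days)
        remaining = purchase_value * (1 - used / (term_days - grace_days))
    return max(remaining - claims_paid, 0.0)
```

A real policy would encode the insurer's actual schedule rather than a linear decay, but the shape of the calculation is the same.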
cell-biology
Title: Is there sufficient evidence that human cells are not intelligent? Being structurally composed of one or more cells, which are the basic units of life.
Yet within a cell, there seems to be the same behaviors that define life:
Regulation of the internal environment to maintain a constant state;
Organization: Don't most cells have organs?
Metabolism.
Growth ~ not sure about this one.
Adaption ~ Think this is accurate, but I'm not well versed in cellular biology at all.
Response to stimuli. Cells interact with external whatnots.
Reproduction - Cells do this, at least many of them.
Most of what lead me to this question came from chapter 5 of Spontaneous Healing by Andrew Weil, M.D.. I'll type the parts relevant to the discussion below - in case he's wrong and I'm basing my thoughts on wrong information. bold sections indicate my interjection | {
"domain": "biology.stackexchange",
"id": 595,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cell-biology",
"url": null
} |
filters, finite-impulse-response, infinite-impulse-response, digital-filters
IIR_Order = 1;
[b, a] = butter(IIR_Order,[Wl, Wh]);
filtered_IIR = filter(b, a, f);
figure(1);
hold on;
plot(f);
plot(filtered_FIR);
plot(filtered_IIR);
hold off; DC offset will only be removed if the filter does not pass DC. If the filter is a low pass filter, then the DC portion of the signal will pass through, scaled by the gain of the filter.
To completely remove DC, the filter would have a zero at $z=1$, which will provide a null at DC. You can expect to see an initial time transient but in the settled state DC will be completely removed if the filter has a zero at $z=1$.
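As a sketch of such a filter (not necessarily the exact one from the posts linked below), a first-order DC nulling filter places a zero at $z=1$ and a pole at $z=r$ just inside the unit circle, $H(z) = (1 - z^{-1})/(1 - r z^{-1})$:

```python
def dc_null(x, r=0.995):
    """First-order DC nulling filter: y[n] = x[n] - x[n-1] + r*y[n-1].
    The zero at z=1 removes DC after an initial transient; r controls
    how fast the transient decays (closer to 1 = slower, narrower notch)."""
    y = []
    prev_x = 0.0
    prev_y = 0.0
    for xn in x:
        yn = xn - prev_x + r * prev_y
        y.append(yn)
        prev_x, prev_y = xn, yn
    return y
```

Feeding it a constant (pure DC) input shows the expected behavior: an initial transient that decays geometrically toward zero.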
Alternatively, to implement DC with a very simple IIR filter consider the DC Nulling Filter demonstrated and detailed in these posts:
What does correcting IQ do?
Transfer function of second order notch filter | {
"domain": "dsp.stackexchange",
"id": 10998,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, finite-impulse-response, infinite-impulse-response, digital-filters",
"url": null
} |
special-relativity, gravity, newtonian-gravity, lagrangian-formalism, causality
What is not clear to me is why we cannot simply put $A_0 = \frac{1}{\rho}$ to include gravity. Here $\rho$ is the distance from the origin in polar/spherical coordinates. | {
"domain": "physics.stackexchange",
"id": 54322,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, gravity, newtonian-gravity, lagrangian-formalism, causality",
"url": null
} |
quantum-circuit, experimental-realization, computational-models
These kinds of loops would allow you to make quantum circuits that, if read literally, required violating the no-cloning theorem. Allowing them would create a lot of problems, so Nielsen and Chuang don't.
Giving a consistent treatment to these kinds of loops isn't impossible. For example, the ZX calculus is a quantum diagram language that allows loops and contains quantum circuits as a subset. But it does cause a lot of hassle. For example, although the meaning of a ZX diagram is unambiguous, it can be expensive to translate a ZX diagram into a series of steps that can be executed on a quantum computer, and that series of steps might be enormous.
Your other reference is referring to "loops" like loops in a programming language, where certain steps are repeated until a condition is met. Those are fine. | {
"domain": "quantumcomputing.stackexchange",
"id": 4508,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-circuit, experimental-realization, computational-models",
"url": null
} |
Step 4:
Finally, multiply 2nd row of the first matrix and the 2st column of the second matrix. The result goes in the position (2, 2)
$\left[ {\begin{array}{*{20}{l}} 1&3&5\\ \color{red}{2}&\color{red}{4}&\color{red}{6} \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{l}} \color{blue}{3}&6\\ \color{blue}{1}&4\\ \color{blue}{5}&2 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 31&28\\ 40&{\color{red}{2} \cdot \color{blue}{6} + \color{red}{4} \cdot \color{blue}{4} + \color{red}{6} \cdot \color{blue}{2}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {31}&{28}\\ {40}&{40} \end{array}} \right]$
So, the result is:
$\left[ {\begin{array}{*{20}{l}} 1&3&5\\ 2&4&6 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{l}} 3&6\\ 1&4\\ 5&2 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {31}&{28}\\ {40}&{40} \end{array}} \right]$
Example 2: Find the product AB where A and B are matrices given by: | {
"domain": "mathportal.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9946150617189832,
"lm_q1q2_score": 0.8266674021630286,
"lm_q2_score": 0.8311430562234877,
"openwebmath_perplexity": 262.4977137983196,
"openwebmath_score": 0.8934695720672607,
"tags": null,
"url": "https://www.mathportal.org/linear-algebra/matrices/matrix-operations.php"
} |
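The worked product above can be verified with a few lines of code — each result entry is the dot product of a row of A with a column of B:

```python
def matmul(A, B):
    """Multiply matrices given as nested lists: result[i][j] = row_i(A) . col_j(B)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 3, 5], [2, 4, 6]]
B = [[3, 6], [1, 4], [5, 2]]
print(matmul(A, B))  # [[31, 28], [40, 40]]
```

This matches the step-by-step result: for example, position (2, 2) is 2·6 + 4·4 + 6·2 = 40.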
c++, algorithm, strings
int main() {
string phrase;
cout << "Please give me some phrase" << endl;
getline(cin, phrase);
vector <string> splitted = split(phrase);
vector<string> permuted = makePermutedIndex(splitted);
for (const auto i : permuted) {
cout << i << endl;
}
return 0;
}
myImplementation.cpp
#include <vector>
#include <sstream>
#include <algorithm>
using namespace std;
vector<string> concatLeftAndRight(const vector<string> &left, const vector<string> &right,
const string::size_type maxLengthOfLeftLine) {
vector<string> ret;
for (vector<string>::size_type i = 0; i != left.size(); ++i) {
std::stringstream ss;
ss << string(maxLengthOfLeftLine - left[i].size(), ' ')
<< left[i] << " " << right[i];
ret.push_back(ss.str());
}
return ret;
}
vector<string> makePermutedIndex(const vector<string> &splitted) {
vector<string> sorted(splitted); | {
"domain": "codereview.stackexchange",
"id": 18889,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, strings",
"url": null
} |
linear-systems, state-space
So we have a system that has to be LTI as it is expressed in state space with constant matrices, but it can't be LTI because it has $x(0)\neq0$.
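One way to make the contradiction concrete: with $x(0)\neq0$ the map from input to output fails homogeneity. A small numerical sketch (scalar first-order system; the coefficients are made up for illustration):

```python
# x[n+1] = a*x[n] + b*u[n], y[n] = x[n], with nonzero initial state.
# If the system were linear, doubling u would double y; it doesn't,
# because the response to x0 is added regardless of the input.
def simulate(u, a=0.5, b=1.0, x0=1.0):
    x, y = x0, []
    for un in u:
        y.append(x)
        x = a * x + b * un
    return y

u = [1.0, 0.0, 0.0, 0.0]
y_single = simulate(u)
y_double = simulate([2.0 * un for un in u])
print(y_double == [2.0 * yn for yn in y_single])  # False: not homogeneous
```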
I can't see the mistake in the reasoning that leads me to this absurd contradiction. Can someone point it out? I am only an undergraduate student so perhaps my answer is a bit naive, but according to Oppenheim it is not just nonzero initial conditions that cause a linear constant coefficient differential/difference equation to be non-LTI. A differential/difference equation with fixed zero initial conditions cannot be LTI either. For a linear constant coefficient differential/difference equation to describe a causal, LTI system, the initial conditions have to satisfy the condition of initial rest: that is, the output does not become nonzero until the input becomes nonzero. | {
"domain": "dsp.stackexchange",
"id": 6221,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "linear-systems, state-space",
"url": null
} |
fluid-dynamics, conservation-laws
Title: Help understanding this derivation of a general balance equation in fluid mechanics I am trying to understand the derivation of a general balance equation for an arbitrary quantity, as given by this textbook (pg. 13). Now, I have a decent knowledge of how to derive the continuity/Navier-Stokes equations in a conventional method, but this book's approach involves deriving a general balance equation which can be quickly transformed into conservation of mass/momentum/energy etc.
First we define $\psi$ as an arbitrary quantity per unit mass, $\rho$ as the fluid density, $\underline{\underline{J}}$ as the surface efflux of $\psi$ (tensor), and $\phi$ as a body source of $\psi$.
Firstly,
$$ \frac{d}{dt} \int_V \rho\psi dV=-\oint_A(\underline{n}\cdot \underline{\underline{J}})dA+\int_V \rho\phi dV
$$
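Applying the Reynolds transport theorem to the left-hand side and the divergence theorem to the surface integral gives the standard local form — a sketch from the definitions above, not quoted from the textbook, with $\underline{u}$ assumed to denote the fluid velocity:
$$ \frac{\partial(\rho\psi)}{\partial t} + \nabla\cdot(\rho\psi\,\underline{u}) = -\nabla\cdot\underline{\underline{J}} + \rho\phi $$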
Using the Reynolds transport theorem and the divergence theorem, we can transform this equation into the following form: | {
"domain": "physics.stackexchange",
"id": 38765,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, conservation-laws",
"url": null
} |
php, object-oriented, mysql, pdo, static
$db = Database::create();
$db->exec(
"CREATE TABLE IF NOT EXISTS `users` (
`uid` int NOT NULL PRIMARY KEY AUTO_INCREMENT,
`username` varchar(100) NOT NULL UNIQUE,
`email` varchar(100) NOT NULL UNIQUE,
`verified` boolean DEFAULT 0,
`hash` varchar(255) NOT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
)"
);
$db->exec(
"CREATE TABLE IF NOT EXISTS `images` (
`id` int NOT NULL PRIMARY KEY AUTO_INCREMENT,
`uid` int NOT NULL,
`image` varchar(255) NOT NULL,
`like_count` int NOT NULL DEFAULT 0
)"
);
$user = User::signUp($db, 'JohnDoe', 'john.doe@sample.org', '12345');
?>
Main question
I'm wondering if the way I'm using static methods throughout the code makes sense. | {
"domain": "codereview.stackexchange",
"id": 40055,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, object-oriented, mysql, pdo, static",
"url": null
} |
quantum-mechanics, quantum-optics, interferometry
As for the resolution, you can think of a quantum state consisting of N entangled photons as a superparticle at N times the energy, and thus 1/N the wavelength. The resolution is therefore limited to roughly the average wavelength/N. See e.g. "N00N states" for a discussion of quantum super-resolution in metrology.
Matter-wave interferometry
Matter wave interferometers use matter instead of EM radiation. Because historically matter was considered corpuscular (rather than having a wave/field like nature that light has), any wave-like properties of matter are automatically considered "quantum". Therefore a simple 2-slit or Mach–Zehnder interferometer made using matter is "quantum" interferometry, while using light is "classical," even though the principles are the same.
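For matter waves the relevant scale is the de Broglie wavelength $\lambda = h/(mv)$. A quick numerical sketch of it, together with the $\lambda/N$ N00N-state scaling mentioned above (the chosen velocity and wavelength are illustrative assumptions):

```python
# de Broglie wavelength lambda = h / (m * v), plus the ~lambda/N
# resolution of an N-photon N00N state.
h = 6.62607015e-34      # Planck constant, J*s (exact SI value)
m_e = 9.1093837015e-31  # electron mass, kg

def de_broglie(mass, velocity):
    return h / (mass * velocity)

lam = de_broglie(m_e, 1.0e6)       # electron at 10^6 m/s
print(lam)                         # ~7.3e-10 m: sub-nanometre scale

def noon_resolution(wavelength, n):
    return wavelength / n          # super-resolution scaling

print(noon_resolution(633e-9, 4))  # 4-photon N00N state at a HeNe-like line
```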
The resolution is again roughly limited to the wavelength (which for matter means the de Broglie wavelength). | {
"domain": "physics.stackexchange",
"id": 21590,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-optics, interferometry",
"url": null
} |
algorithms, optimization, combinatorics, integers
If we assume that the number of elements in the power sum set corresponds to the number of partitions of the largest element in the underlying set then the complexity is around $m\log^3(m)$. Any of the two justifies the initial sorting in order to find the largest element.
Parts of the algorithm assume that we can find the pair of sums in linear time and this requires sorting.
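The "pair of sums in linear time" step (after sorting) is the classic two-pointer scan over a sorted array; a toy sketch (the function name is mine, not from the original answer):

```python
# After sorting, scan from both ends: advance the low pointer when the
# sum is too small, retreat the high pointer when it is too large.
def find_pair(nums, target):
    nums = sorted(nums)
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return nums[lo], nums[hi]
        if s < target:
            lo += 1
        else:
            hi -= 1
    return None

print(find_pair([4, 1, 9, 7, 5], 12))  # (5, 7)
```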
Incorrect start | {
"domain": "cs.stackexchange",
"id": 5906,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, optimization, combinatorics, integers",
"url": null
} |
python, python-3.x, constructor, meta-programming
>>> @autofill('a', 'b', c=3)
... class Foo: pass
>>> sorted(Foo(1, 2).__dict__.items())
[('a', 1), ('b', 2), ('c', 3)]
"""
def init_switcher(cls):
kind = Parameter.POSITIONAL_OR_KEYWORD
signature = Signature(
[Parameter(a, kind) for a in argnames]
+ [Parameter(k, kind, default=v) for k, v in defaults.items()])
original_init = cls.__init__
def init(self, *args, **kwargs):
bound = signature.bind(*args, **kwargs)
bound.apply_defaults()
for k, v in bound.arguments.items():
setattr(self, k, v)
original_init(self)
cls.__init__ = init
return cls
return init_switcher
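For context, the fragment above runs once the `inspect` imports it relies on are restored; this self-contained sketch also instruments the wrapped class to confirm the original `__init__` is still invoked (the `calls` list is my own addition, not part of the reviewed code):

```python
from inspect import Parameter, Signature

def autofill(*argnames, **defaults):
    def init_switcher(cls):
        kind = Parameter.POSITIONAL_OR_KEYWORD
        signature = Signature(
            [Parameter(a, kind) for a in argnames]
            + [Parameter(k, kind, default=v) for k, v in defaults.items()])
        original_init = cls.__init__
        def init(self, *args, **kwargs):
            bound = signature.bind(*args, **kwargs)
            bound.apply_defaults()   # fills in c=3 when omitted
            for k, v in bound.arguments.items():
                setattr(self, k, v)
            original_init(self)      # the original initializer still runs
        cls.__init__ = init
        return cls
    return init_switcher

calls = []

@autofill('a', 'b', c=3)
class Foo:
    def __init__(self):
        calls.append('init')

foo = Foo(1, 2)
print(sorted(foo.__dict__.items()))  # [('a', 1), ('b', 2), ('c', 3)]
print(calls)                         # ['init']
```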
There's no test case that checks that the original __init__ method is called. | {
"domain": "codereview.stackexchange",
"id": 22136,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, constructor, meta-programming",
"url": null
} |
computational-chemistry, cheminformatics
For your particular case, yes, CC1CC=CC(=O)C1 is both a valid SMILES and a valid SMARTS, but as a SMARTS query, it represents not just 5-methyl-2-cyclohexen-1-one, but also 5-propyl-2-cyclohexen-1-one and 3-hydroxy-5-butyl-6-amino-2-cyclohexen-1-one, as well as many others, all of which contain that substructure. The SMARTS viewer you link doesn't depict this explicitly, because it's implicit in the use of SMARTS that it's a substructure pattern for a broader class of compounds. | {
"domain": "chemistry.stackexchange",
"id": 7963,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computational-chemistry, cheminformatics",
"url": null
} |
structural-engineering, beam, stresses, reinforcement
common steel thin-walled beam (25mm x 25mm x 2mm wall thick)
each joint point is welded, we can be simplify and assume that the welds are exactly as strong as the material itself
the suspension points can hold infinite force
and any other possible simplification - this problem isn't rocket science but comes from settling an evening talk with a friend.
As grfrazee said, you won't know for sure until you do a finite element analysis. I was intrigued by this question, as a colleague and I got into a discussion about this. While we both agreed the diagonal bracing would be better at resisting deflection, we wondered by what factor it would be better. | {
"domain": "engineering.stackexchange",
"id": 362,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "structural-engineering, beam, stresses, reinforcement",
"url": null
} |
error-correction, information-theory, decoherence, error-mitigation
$$E_i=e_{i 0} I+e_{i 1} X+e_{i 2} Z+e_{i 3} XZ$$
For the partial trace, the Kraus operators are $E_i=\langle i|$ where the bra is on the register we are tracing out. It's not clear to me how the Pauli decomposition works since the dimension of the Kraus operator is $2\times 1$ while the linear combination of Pauli matrices gives us a $2\times 2$ matrix.
My question is still - how are erasures dealt with in QEC? You should think of error correction as the process of measuring syndromes (i.e. determining the $\pm 1$ values of stabilisers, which is the case I'll exclusively focus on) and then following a lookup table to see what correction to make.
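The syndrome-to-correction lookup can be illustrated with the toy 3-qubit bit-flip code, whose stabilisers $Z_1Z_2$ and $Z_2Z_3$ measure neighbouring parities — a purely classical sketch of my own, not taken from the answer:

```python
# Syndrome = values of the two parity checks; the table maps each
# syndrome to the single-qubit flip assumed to have caused it.
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

LOOKUP = {(0, 0): None,  # no error detected
          (1, 0): 0,     # flip on qubit 0
          (1, 1): 1,     # flip on qubit 1
          (0, 1): 2}     # flip on qubit 2

def correct(bits):
    i = LOOKUP[syndrome(bits)]
    if i is not None:
        bits[i] ^= 1
    return bits

print(correct([0, 1, 1]))  # flip on qubit 0 of codeword 111 -> [1, 1, 1]
```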
For a non-degenerate distance-3 code, for example, under normal circumstances, you are promised that in your lookup table, for every possible syndrome there is at most one weight one error corresponding to it. So, under the assumption that errors are low probability, that's the one you correct. | {
"domain": "quantumcomputing.stackexchange",
"id": 5348,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "error-correction, information-theory, decoherence, error-mitigation",
"url": null
} |
reach the. I know for sure that pilots use the trig. These functions are most conveniently defined in terms of the exponential function, with $\sinh z = \frac{1}{2}(e^{z} - e^{-z})$ and $\cosh z = \frac{1}{2}(e^{z} + e^{-z})$. Scroll down the page for examples and solutions. Sinusoidal functions graph wave forms. Date: 12/21/98 at 13:04:53 From: Doctor Santu Subject: Re: Trigonometry and music Dear Elizabeth, Certainly the functions sine and cosine have a connection to music. 2 to 5 Tuesday 10/22 Graphing Sine and Cosine functions cont'd. Applications of sinusoidal functions Description. Plotting a basic sine wave. Find the period of a sine or cosine function. View Notes - NOTESTrigonometry 3. Let's take a look at navigation. In addition, how do I know if this is the graph of sine or cosine? The midline is the average value. The applications listed below represent a small sample of the applications. 1 4 Sine Law C2. 4 The Sine and Cosine Ratio Learning Goal: Determine the measures of the sides and angles in right | {
"domain": "salesianipinerolo.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.990291521799369,
"lm_q1q2_score": 0.8655912851267503,
"lm_q2_score": 0.8740772450055544,
"openwebmath_perplexity": 489.46033083010536,
"openwebmath_score": 0.754558265209198,
"tags": null,
"url": "http://salesianipinerolo.it/ubdl/real-life-applications-of-sine-and-cosine-functions.html"
} |