slam, navigation, ros-kinetic, stereo-camera, rtabmap-ros

rostopic hz /rtabmap/odom_info
subscribed to [/rtabmap/odom_info]
no new messages
no new messages

rostopic hz /odom
subscribed to [/odom]
no new messages
no new messages
no new messages
{ "domain": "robotics.stackexchange", "id": 31457, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, ros-kinetic, stereo-camera, rtabmap-ros", "url": null }
classical-mechanics, energy, potential-energy This gives us the first puzzle piece of the full picture of what is going on. What we are looking at are not the fundamental formulas, but second-order Taylor approximations. Consider any energy formula $E = E(x)$ as a function of a variable $x$. For small enough $x$, we can take the lowest-order Taylor approximation about $x=0$. If the lowest-order behavior is first-order, then our formula will reduce to a linear function. If the lowest-order behavior is second-order, then our formula will reduce to a quadratic term. Clearly, most of the formulas in the OP are of the latter kind. Since most potential energy formulas are of the latter kind, we'll consider the second-order Taylor approximation to $E = E(x)$: $$ E(x) = c_{0} + c_{1}x + c_{2}x^{2} + c_{3}x^{3} + \cdots \approx c_{0} + c_{1}x + c_{2}x^{2}. $$ This reduces our formula to a quadratic polynomial. Now we have to explain why the formulas you wrote are directly proportional to the square of the input variable. I claim the following.
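The small-angle pendulum is a concrete instance of this. Here is a quick numeric sketch (plain Python; the helper names are my own) comparing the exact potential energy $U = mgl(1-\cos\theta)$ with its second-order Taylor approximation $\tfrac12 mgl\,\theta^2$:

```python
import math

def pendulum_U(theta, m=1.0, g=9.81, l=1.0):
    """Exact pendulum potential energy measured from the bottom of the swing."""
    return m * g * l * (1 - math.cos(theta))

def quadratic_U(theta, m=1.0, g=9.81, l=1.0):
    """Second-order Taylor approximation about theta = 0: U ~ (1/2) m g l theta^2."""
    return 0.5 * m * g * l * theta ** 2

# The relative error shrinks like theta^2 / 12, so the quadratic term
# dominates for small angles:
for theta in (0.3, 0.1, 0.01):
    rel_err = abs(pendulum_U(theta) - quadratic_U(theta)) / pendulum_U(theta)
    print(theta, rel_err)
```

For $\theta = 0.01$ the relative error is already below $10^{-5}$, which is the sense in which the quadratic formula is "the" formula for small displacements.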
{ "domain": "physics.stackexchange", "id": 64699, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, energy, potential-energy", "url": null }
Let's call this the "matrix notation" My issue is that I cannot seem to get the "vector" and "matrix" notations to agree... Let's say we have the vector $\vec{v} = \begin{pmatrix} \sqrt{1/3} \\ \sqrt{1/3} \\ \sqrt{1/3} \end{pmatrix}$ that we would like to rotate to lie on the x-axis: $\vec{v}_\text{rot} = \begin{pmatrix} |\vec{v}| \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$. Then our axis and angle of rotation are: $\vec{k} = \dfrac{\vec{v}\times\vec{v}_\text{rot}}{|\vec{v}\times\vec{v}_\text{rot}|} = \begin{pmatrix} 0 \\ \sqrt{2}/2 \\ -\sqrt{2}/2 \end{pmatrix}\>\>\>\>, \>\>\theta = \cos^{-1}\left( \dfrac{\vec{v}\cdot\vec{v}_\text{rot}}{|\vec{v}||\vec{v}_\text{rot}|}\right) = 0.95532\ \text{rad}$ The "vector notation" statement above does indeed give the expected answer (I've implemented it in Python with NumPy):
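A minimal NumPy sketch of that check (not the asker's actual code; note that the cross product $\vec{v}\times\vec{v}_\text{rot}$ comes out with a negative z-component), building the rotation matrix from the Rodrigues formula $R = I + \sin\theta\,K + (1-\cos\theta)K^2$ and confirming it maps $\vec v$ onto the x-axis:

```python
import numpy as np

v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # unit vector along (1,1,1)
v_rot = np.array([1.0, 0.0, 0.0])             # target: same length, on the x-axis

# Axis and angle of rotation
k = np.cross(v, v_rot)
k /= np.linalg.norm(k)
theta = np.arccos(np.dot(v, v_rot) / (np.linalg.norm(v) * np.linalg.norm(v_rot)))

# Cross-product matrix K (K @ x == k x x), then Rodrigues' rotation matrix
K = np.array([[0.0, -k[2], k[1]],
              [k[2], 0.0, -k[0]],
              [-k[1], k[0], 0.0]])
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

print(np.round(R @ v, 6))  # ≈ [1, 0, 0]
```

Since $\vec k \perp \vec v$, the rotation keeps the full length of $\vec v$ in the plane of rotation, and `R @ v` lands on `v_rot` to machine precision.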
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9511422241476944, "lm_q1q2_score": 0.8140361582201813, "lm_q2_score": 0.8558511414521923, "openwebmath_perplexity": 1301.6684959750332, "openwebmath_score": 0.6865902543067932, "tags": null, "url": "https://math.stackexchange.com/questions/2828802/angle-definition-confusion-in-rodrigues-rotation-matrix" }
c++, stream

    std::ifstream file(fileName);
    if (file.is_open())
        return file;
    else
        // handle error
}
}

The advantage is that instead of having to write:

std::ifstream file;
OpenFile(file);

You can write:

std::ifstream file = OpenFile();

Or even:

auto file = OpenFile();
{ "domain": "codereview.stackexchange", "id": 36216, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, stream", "url": null }
bfs algorithm with codes in C, C++, Java, and Python. If we perform BFS on an unweighted graph, it creates a spanning tree that is also a shortest-path tree from the source; we can detect cycles in a graph using DFS. Depth-first: nextEdge selects a frontier edge whose tree endpoint was most recently discovered. DFS (Depth First Search) is a traversal algorithm that explores the structure to its deepest node before backtracking, while BFS visits all neighbours on the current level before moving to the next level. (Building a loop-free spanning tree is also the main goal of the Spanning Tree Protocol in networking.) For a weighted undirected graph, Dijkstra's algorithm with a binary heap finds shortest paths with time complexity O(E log V). Any traversal (BFS or DFS) of a connected graph produces a spanning tree, and a spanning tree exists in any connected graph.
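A minimal Python sketch of the BFS spanning tree described above (graph and function names are illustrative): the parent pointers recorded at first discovery form the tree, and in an unweighted graph that tree also realises shortest paths from the source.

```python
from collections import deque

def bfs_spanning_tree(adj, source):
    """Return parent pointers of a BFS tree rooted at `source`.

    In an unweighted graph this tree is also a shortest-path tree."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:   # first discovery -> tree edge (u, v)
                parent[v] = u
                queue.append(v)
    return parent

# Small undirected example graph (adjacency lists)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_spanning_tree(adj, 0))  # → {0: None, 1: 0, 2: 0, 3: 1}
```

The tree edges `(parent[v], v)` number exactly `V - 1` for a connected graph, which is the spanning-tree property.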
{ "domain": "com.br", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9591542829224748, "lm_q1q2_score": 0.8190340706698875, "lm_q2_score": 0.8539127492339909, "openwebmath_perplexity": 1203.7922532013886, "openwebmath_score": 0.4700119197368622, "tags": null, "url": "http://thedomnetwork.com.br/the-song-gjuf/c39a03-bfs-spanning-tree" }
of ordered pairs $(a, c)$ where $a \in A$, $c \in C$ and for which there exists an element $b \in B$ such that $(a, b) \in R_1$ and $(b, c) \in R_2$. Relations (Related to Ch. …). The equivalence classes of the relation are {0, 4}, {1, 3}, and {2}. Proof. In other words, a binary relation from A to B is a set of ordered pairs. We can also define a set by its properties, such as {x | x > 0}, which means "the set of all x's, such that x is greater than 0" (see Set-Builder Notation to learn more). Chapter 8 1. For which relations is it the case that "2 is related to -2"? Is R symmetric? It is in fact all three, and I will prove it to be so. Find the transitive… We often use the tilde notation $$a\sim b$$ to denote a relation. This creates every n-bit pattern, with each bit corresponding to one input element. For either set, this operation has a right identity (which is 1), since f(a, 1) = a for all a in the set, which is not a (two-sided) identity since f(1… Is T reflexive? Also write the equivalence class.
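As a sketch (illustrative Python, not from the original text), the reflexive/symmetric/transitive properties and the equivalence classes {0, 4}, {1, 3}, {2} can be checked mechanically for a relation stored as a set of pairs:

```python
def is_reflexive(A, R):
    return all((a, a) in R for a in A)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

def equivalence_classes(A, R):
    """Classes [a] = {b : a R b}; assumes R is an equivalence relation on A."""
    return {frozenset(b for b in A if (a, b) in R) for a in A}

# A relation on {0,1,2,3,4} whose classes come out as {0,4}, {1,3}, {2}:
A = {0, 1, 2, 3, 4}
pairs = [(0, 4), (1, 3)]
R = {(a, a) for a in A} | {p for (a, b) in pairs for p in [(a, b), (b, a)]}

print(is_reflexive(A, R), is_symmetric(R), is_transitive(R))  # → True True True
print(sorted(map(sorted, equivalence_classes(A, R))))         # → [[0, 4], [1, 3], [2]]
```

The relation here is a hypothetical example chosen to reproduce the classes mentioned in the text; any equivalence relation partitions its set in exactly this way.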
{ "domain": "localnews.ie", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9683812327313546, "lm_q1q2_score": 0.8191947783800124, "lm_q2_score": 0.8459424353665381, "openwebmath_perplexity": 500.49743426716816, "openwebmath_score": 0.7657002210617065, "tags": null, "url": "https://localnews.ie/come-over-bezal/find-all-relations-on-the-set-a-0-1-c0da4f" }
c++, io, template, variadic IO::print("[Error] Please enter a valid response:"); std::cin.clear(); std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); std::cin >> response; } else { IO_status_flag = true; } } while (IO_status_flag != true); return response; } } #endif /* _INPUT_VALIDATION_HPP */ Test Code #include <iostream> #include <limits> #include <typeinfo> #include "input-validation.hpp" int main() { IO::print("Please enter username:"); auto username = IO::get<std::string>(); IO::print("Please enter age:"); auto age = IO::get<int>(); IO::print("\nName:", username); IO::print("\nAge:", age); auto array = { "one", "two", "three", "four", "five", "six" }; IO::println("\n"); for(auto str : array) { IO::println(str); } } Output Please enter username: Jose Please enter age: 23 Name: Jose Age: 23
{ "domain": "codereview.stackexchange", "id": 26412, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, io, template, variadic", "url": null }
The adjoint of a matrix (also called the adjugate matrix) is the transpose of its cofactor matrix, obtained by replacing each element of the matrix with its cofactor (the signed minor of the element in the i-th row and j-th column) and then transposing. To calculate the adjoint of a matrix, follow the procedure: (a) calculate the minor for each element, (b) attach the sign (-1)^(i+j) to form the cofactor matrix, (c) transpose the cofactor matrix. The adjoint is only valid for square matrices. The adjoint matrix should not be confused with the conjugate transpose. For some important properties regarding the adjoint of a matrix, and to go through their proofs, click a particular property on this page. Below we are going to see the solution of question 1 based on the topic adjoint of a matrix.
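A small NumPy sketch of the minor/cofactor/transpose procedure (the function name is my own); the identity A · adj(A) = det(A) · I serves as a correctness check:

```python
import numpy as np

def adjugate(A):
    """Adjugate (classical adjoint): transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.empty((n, n), dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor: determinant with row i and column j deleted
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adj(A) = C^T

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(adjugate(A))  # → [[ 4. -2.] [-3.  1.]]
```

For the 2x2 example, A · adj(A) equals det(A) · I = -2 · I, as the defining property requires.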
{ "domain": "republikbola.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.952574129515172, "lm_q1q2_score": 0.8170877007415694, "lm_q2_score": 0.8577680995361899, "openwebmath_perplexity": 674.7701928113153, "openwebmath_score": 0.5679025650024414, "tags": null, "url": "http://republikbola.net/o6ckna/e55c2a-adjoint-of-a-matrix" }
discrete-signals, dft, convolution, wavelet, cwt The reason for left-right padding is programming convenience; various non-zero schemes, like reflect, are easier to implement. The resulting convolution is exactly the same as with right padding, left padding, or any other arbitrary shift - it's inherent in the circularity of the DFT. Why all the same length? Programmatic convenience and speed. However, the latter can be greatly improved with overlap-add convolution - at significant expense of the former. Another reason is to have the same "reference frame" for designing the frequency tiling - but the wavelets can be trimmed afterwards while preserving said tiling.
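A quick NumPy demonstration of that shift-equivalence claim (signal and kernel values here are arbitrary): left-padding and right-padding the kernel give circular convolutions that differ only by a circular shift.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0])
N = 8  # common FFT length

# Pad the kernel on the right vs on the left
h_right = np.concatenate([h, np.zeros(N - len(h))])
h_left = np.concatenate([np.zeros(N - len(h)), h])

x_pad = np.concatenate([x, np.zeros(N - len(x))])
y_right = np.fft.ifft(np.fft.fft(x_pad) * np.fft.fft(h_right)).real
y_left = np.fft.ifft(np.fft.fft(x_pad) * np.fft.fft(h_left)).real

# Same result up to a circular shift of N - len(h) samples
print(np.allclose(np.roll(y_right, N - len(h)), y_left))  # → True
```

Shifting the kernel multiplies its DFT by a pure phase, so the output is the same sequence circularly rotated, exactly as the answer states.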
{ "domain": "dsp.stackexchange", "id": 10571, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, dft, convolution, wavelet, cwt", "url": null }
html, css </div> </div> <ul id="headNews"> <li class="headNewsItem"> <a href="#"> <div class="imageContainer"> <img src="img/headNews1.jpg" width="242" height="124" alt="headNewsImage1"> </div> <p class="textContainer">Lorem ipsum dolor sit amet, consectetur adipisicing elit</p> </a> </li> <li class="headNewsItem"> <a href="#"> <div class="imageContainer"> <img src="img/headNews2.jpg" width="242" height="124" alt="headNewsImage2"> </div> <p class="textContainer">Lorem ipsum dolor sit amet, consectetur adipisicing elit</p> </a> </li>
{ "domain": "codereview.stackexchange", "id": 5565, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "html, css", "url": null }
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. How can I simply prove that the Pearson correlation coefficient is between -1 and 1, i.e. that $\rho^2 \le 1$? Now for coding, it is important to take care of all potential outcomes: for example, if the denominator is zero, you will have to filter that case out or throw an exception. Plug the result $|Cov(X,Y)| \le \sqrt{Var(X)Var(Y)}$ from the Cauchy-Schwarz inequality into the formula for $\rho$, and we get: $$|\rho| = \left|\frac{Cov(X,Y)}{\sqrt{Var(X)Var(Y)}}\right| \le \frac{\sqrt{Var(X)Var(Y)}}{\sqrt{Var(X)Var(Y)}} = 1$$.
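A numeric sketch of both points (the Cauchy-Schwarz bound and the zero-denominator guard); the helper name is my own, not from the question:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation with an explicit zero-variance guard."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sx, sy = x.std(), y.std()
    if sx == 0 or sy == 0:
        raise ValueError("correlation undefined: zero variance")
    return ((x - x.mean()) * (y - y.mean())).mean() / (sx * sy)

# Cauchy-Schwarz guarantees |rho| <= 1 for any data:
rng = np.random.default_rng(0)
for _ in range(200):
    x, y = rng.normal(size=10), rng.normal(size=10)
    assert -1.0 <= pearson_r(x, y) <= 1.0

print(pearson_r([1, 2, 3], [2, 4, 6]))  # perfectly linear data → 1.0
```

The bound is attained exactly when one variable is an affine function of the other, which is why the linear example returns 1.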
{ "domain": "thaiplus.net", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9766692264378963, "lm_q1q2_score": 0.8095990184965997, "lm_q2_score": 0.8289388019824947, "openwebmath_perplexity": 550.7072396013566, "openwebmath_score": 0.6638980507850647, "tags": null, "url": "http://thaiplus.net/9p6gaeax/page.php?id=f1f531-coefficient-of-correlation-lies-between-proof" }
algorithms, algorithm-analysis, runtime-analysis Title: Time Complexity of Algorithm I need help with finding out the time complexity of the following algorithm:

procedure VeryOdd(integer n):
    for i from 1 to n do
        if i is odd then
            for j from i to n do
                x = x + 1
            for j from 1 to i do
                y = y + 1

This is my attempt: $$ \text{Loop1} = \Theta(n)$$ $$ \text{Loop2} = \Theta(n)$$ $$ \text{Loop3} = O(n)$$ And we also know that Loop2 and Loop3 get executed on every second iteration of the outer loop. So: $$T(n) = \Theta(n) \cdot \tfrac{1}{2}(\Theta(n) + O(n)) = \Theta(n^2)$$ Now to the thing I'm not so sure about, namely: is Loop3 really $$O(n)$$, and if yes, is $$\Theta(n) + O(n) = \Theta(n)$$? Thanks in advance. $$ \text{Loop1} = \Theta(n) $$ For any odd $i$, the two inner loops together run $(n - i + 1) + i = n + 1$ times, so $$ \text{Loop2} + \text{Loop3} = \Theta(n) $$ and therefore $$ T(n) = \Theta(n) \cdot \tfrac{1}{2}\,\Theta(n) = \Theta(n^2). $$
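A quick empirical check of the Θ(n²) answer (illustrative Python): count the increments VeryOdd performs and compare against n²/2.

```python
def very_odd_ops(n):
    """Count the x/y increments executed by VeryOdd(n)."""
    ops = 0
    for i in range(1, n + 1):
        if i % 2 == 1:
            ops += n - i + 1  # inner loop: j = i .. n
            ops += i          # inner loop: j = 1 .. i
    return ops

# For odd i the two inner loops together do (n - i + 1) + i = n + 1 steps,
# and about n/2 values of i are odd, so ops ≈ n^2 / 2:
for n in (100, 200, 400):
    print(n, very_odd_ops(n) / n ** 2)  # ratios approaching 0.5
```

For n = 100 the count is exactly 50 · 101 = 5050, matching the closed form (n/2)(n + 1).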
{ "domain": "cs.stackexchange", "id": 1750, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, algorithm-analysis, runtime-analysis", "url": null }
python, interpreter, state-machine, language-design # Cell sizes for the stack CELL_MAX = 255 CELL_MIN = 0 # Parsing rules LINE_SPLIT = ";" ARG_SPLIT = "&" # Errors ERRORS = { "integer_error": ("Invalid integer.", "e01"), "no_cell_error": ("Cell doesn't exist.", "e02"), "bad_value_error": ("Invalid ASCII code.", "e03"), "command_error": ("Invalid command.", "e04"), } def __init__(self, stack_size=256): self.stack_size = stack_size self.reset_output() self.reset_position() self.reset_stack() def add_output_stream(self): """Add the current stack cell to the output.""" self.output_stream += chr(self.stack[self.pos]) def change_cell(self, new_val): """Change the value of the current cell.""" new_val = self.convert_int(new_val) if self.CELL_MIN <= new_val <= self.CELL_MAX: self.stack[self.pos] = new_val else: self.raise_error("bad_value_error")
{ "domain": "codereview.stackexchange", "id": 13010, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, interpreter, state-machine, language-design", "url": null }
java, linked-list int tmpInt = element1.getValueElement().getValue(); element1.getValueElement().setValue(element2.getValueElement().getValue()); element2.getValueElement().setValue(tmpInt); element1 = element1.getSuccessor(); if (element1 == element2) { return; } element2 = element2.getPredecessor(); if (element2 == element1) { return; } } } Overall, your code is pretty clear and well written.
{ "domain": "codereview.stackexchange", "id": 23563, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, linked-list", "url": null }
machine-learning, python, scikit-learn, one-hot-encoding Memory usage is 25.755 MB. Then running the same code above but setting sparse=False like so: OneHotEncoder(handle_unknown='ignore', sparse=False) Memory usage is still 25.755 MB. According to the linked article, which used the sparse option in pandas get_dummies, this should result in reduced memory storage; is this not the same for Scikit-Learn's OneHotEncoder? Based on @BenReiniger's comment, I removed the numeric portion from the ColumnTransformer and ran the following code:
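Whether memory actually shrinks depends on the container holding the result being sparse. A generic sketch of the effect (plain NumPy/SciPy, not sklearn internals; sizes are arbitrary) showing how much a one-hot matrix saves in CSR form:

```python
import numpy as np
from scipy import sparse

# A one-hot matrix: 10,000 rows, 500 categories, one nonzero per row
rng = np.random.default_rng(0)
cats = rng.integers(0, 500, size=10_000)

dense = np.zeros((10_000, 500))
dense[np.arange(10_000), cats] = 1.0
csr = sparse.csr_matrix(dense)

sparse_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes
print(dense.nbytes / 1e6, "MB dense")   # → 40.0 MB
print(sparse_bytes / 1e6, "MB sparse")  # well under 1 MB
```

The CSR form stores only the nonzeros plus index arrays, so a one-hot block with one nonzero per row is hundreds of times smaller than the dense array; if a reported memory figure does not change, the sparse matrix is likely being densified somewhere downstream.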
{ "domain": "datascience.stackexchange", "id": 8119, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, python, scikit-learn, one-hot-encoding", "url": null }
dataset On the DBpediaAsTables web page, you can find out which datasets were used to generate the tables: instance_types_en, labels, short_abstracts_en, mappingbased_properties_en, geo_coordinates_en. Also, I want to clarify that DBpediaAsTables contains all instances from DBpedia 2014, and with "we provide some of the core DBpedia data" we want to say that not all datasets are included in the tables (but only the 5 I stated before). If you want to generate your own tables that will contain custom properties, please refer to the section Generate your own Custom Tables. Cheers, Petar
{ "domain": "datascience.stackexchange", "id": 152, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dataset", "url": null }
c++, beginner, template, classes, polymorphism The implementation of lookup should directly call map::find and check the resulting iterator against the end. Then, check (through that iterator) if it's already checked out; then return that iterator. As for what it throws if there is a problem... If you needed a hierarchy to represent different errors, you would derive from a standard base class and implement what. Don't make unrelated objects and make up your own naming convention for getting the string out of it! But don't do that. You are not adding any additional information, so just provide an error code. Make an enumeration of the different errors, and put the associated strings in an error category structure. Throw the existing std class that holds an error code.
{ "domain": "codereview.stackexchange", "id": 42394, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, beginner, template, classes, polymorphism", "url": null }
16: The reason why the replacement doesn't work is that replacement rules are not mathematical replacements, but pure structural replacements. Therefore the replacement z^2 -> x just looks for occurrences of the pattern z^2 and replaces those with x. Now z^4 doesn't match that pattern. Also note that rules operate on the internal form, which doesn't always ...

15: You can use PolynomialForm: Collect[(1 + x + Cos[s] x^2)^3, x] // PolynomialForm[#, TraditionalOrder -> True] & Cos[s]^3 x^6 + 3 Cos[s]^2 x^5 + (3 Cos[s]^2 + 3 Cos[s]) x^4 + (6 Cos[s] + 1) x^3 + (3 Cos[s] + 3) x^2 + 3 x + 1

14: This question is not as trivial as it would seem, and a detailed discussion could help to understand the issue, especially when we deal with roots of special functions; however, to do the task as simply as possible, this would be the best way: f[x_] := LegendreP[6, x] Reduce[f[x] == 0, x, Reals] == Reduce[f[x] == 0, x] True
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9425067179697694, "lm_q1q2_score": 0.8084521962730208, "lm_q2_score": 0.8577680995361899, "openwebmath_perplexity": 1176.5217944344333, "openwebmath_score": 0.5718528032302856, "tags": null, "url": "http://mathematica.stackexchange.com/tags/polynomials/hot" }
machine-learning, deep-learning For your example, you need to construct a custom dictionary of all possible words and a corpus reflecting the frequency of occurrences for the custom dictionary words. If possible, construct an error model of common mistakes. Peter Norvig goes into greater detail here. Later you can add Deep Learning methods, but it is better to start with a simpler approach that requires less data.
{ "domain": "datascience.stackexchange", "id": 3099, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, deep-learning", "url": null }
java, tree, binary-search private static final class MinMax { private final int min, max; public MinMax(int min, int max) { super(); this.min = min; this.max = max; } } private static final class NotBSTException extends IllegalStateException { private static final long serialVersionUID = 1L; } public static boolean isBST(Node n) { try { checkBST(n); return true; } catch (NotBSTException e) { return false; } } private static MinMax checkBST(final Node n) throws NotBSTException { if (n == null) { return null; } MinMax left = checkBST(n.left); MinMax right = checkBST(n.right); if (left != null && left.max > n.value) { throw new NotBSTException(); } if (right != null && right.min < n.value) { throw new NotBSTException(); } return new MinMax(left == null ? n.value : left.min, right == null ? n.value : right.max); }
{ "domain": "codereview.stackexchange", "id": 5705, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, tree, binary-search", "url": null }
c#, beginner, console, pokemon public List<Pokemon> GetPlayersCollection() { return _playersCollection; } public bool IsCollectionCompleted() { return _playersCollection.Count == _totalNumberOfWildPokemon; } public bool IsCollectionEmpty() { return _playersCollection.Count == 0; } } RandomPokemonGenerator.cs public static class RandomPokemonGenerator { private static Random _random = new Random(); private static Pokemon _randomPokemon = new Pokemon(); private static int _totalPokemons = PokemonGenerator.PokemonInWorld.Count; public static Pokemon GetRandomPokemon() { var randomNum = _random.Next(_totalPokemons); /* Next's upper bound is exclusive, so passing _totalPokemons - 1 would never pick the last Pokemon */ _randomPokemon = PokemonGenerator.PokemonInWorld[randomNum]; return _randomPokemon; } }
{ "domain": "codereview.stackexchange", "id": 26450, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner, console, pokemon", "url": null }
general-relativity, cosmology, curvature I think some of your confusion stems from the fact that it looks like the photon is directed away from us at first, and then somehow changes direction and eventually reaches us. This is a mistake! The photon is always directed towards us, it is just the rapid expansion of space right after the big bang that makes the distance to the photon increase faster than it can approach us.
{ "domain": "physics.stackexchange", "id": 19170, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, cosmology, curvature", "url": null }
beginner, c, homework } Example Input / Output Enter number of inputs: 5 Enter space separated: 23 512 454 94 100 Output: 9 792 0 54 99 Largest Absolute Difference: 297 Notes / Concerns The way I wrote largestAbsoluteDifference seems slimy to me (having to pass 0 to get the result). I've got in the habit of calling functions inside a printf instead of storing the value in a variable and then printing the variable (see lines 56, 59). Is this okay? In generateMagicNumber I had to make sure finding the difference wouldn't result in a negative number (which you can't store in an unsigned int). Is my method the best way to find the absolute value like that? You write unsigned int enough times that it's probably worthwhile to typedef it, in case unsigned int turns out to be too small. (Keep in mind that it is only guaranteed to go up to 65535.) typedef unsigned int number; #define NUMBER_FMT "%u"
{ "domain": "codereview.stackexchange", "id": 18177, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, c, homework", "url": null }
Main ## Basis Spline (bspline) Object Type: Object Data: Input (x,y) vectors and output matrix (z) Inputs: b-spline data or knots / coefficients Outputs: b-spline approximation z Description: Basis spline for 2D nonlinear approximation A basis spline is a nonlinear function constructed of flexible bands that pass through control points to create a smooth curve. The b-spline has continuous first and second derivatives everywhere. The prediction area should be constrained to avoid extrapolation error. Example Usage Create a b-spline from a meshgrid of 50 points in the x-direction and y-direction between -1 and 1 of the function z = x*y. from scipy.interpolate import bisplrep, bisplev import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d.axes3d import Axes3D from matplotlib import cm #%% Define function over 50x50 grid xgrid = np.linspace(-1, 1, 50) ygrid = xgrid x, y = np.meshgrid(xgrid, ygrid) z = x*y
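Continuing the page's example, a sketch of fitting and evaluating the b-spline with bisplrep/bisplev (the same 50x50 grid; the coarse evaluation grid is my own choice, and the plotting imports are dropped):

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

# Rebuild the page's 50x50 grid of z = x*y
xgrid = np.linspace(-1, 1, 50)
x, y = np.meshgrid(xgrid, xgrid)
z = x * y

# Fit the bivariate spline; bisplrep takes flattened coordinate vectors
tck = bisplrep(x.ravel(), y.ravel(), z.ravel())

# Evaluate the approximation on a coarse 5x5 grid
xnew = np.linspace(-1, 1, 5)
znew = bisplev(xnew, xnew, tck)
print(np.round(znew, 3))  # ≈ outer product of xnew with itself, since z = x*y
```

Because z = x*y is itself a low-degree polynomial, the default cubic fit reproduces it essentially exactly inside the grid; as the page warns, evaluation should still be confined to the fitted region to avoid extrapolation error.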
{ "domain": "apmonitor.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9736446525071291, "lm_q1q2_score": 0.809237981979551, "lm_q2_score": 0.831143045767024, "openwebmath_perplexity": 14156.8805554607, "openwebmath_score": 0.272673100233078, "tags": null, "url": "http://apmonitor.com/wiki/index.php/Main/ObjectBspline" }
higgs Title: Does field energy create rest mass in QFT theories? The central importance of the Higgs boson would be that the Higgs mechanism gives rest mass to fundamental particles. It seems like a very natural argument that fundamental particles need to be given rest mass by a field interaction because, as something fundamental (or a perturbation in a field, as it's often described), a particle can't have any properties beyond what a QFT can explain through interactions with a field.
{ "domain": "physics.stackexchange", "id": 5559, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "higgs", "url": null }
fluid-dynamics, waves, newtonian-gravity, speed Title: Speed of waves relating to depth of water Is it true for shallow waves that the speed is related to gravity and depth by $c^2 = gh$? This cannot be true for all $h$, can it? What are the constraints and assumptions of this equation, apart from the obvious conservation of mass and incompressibility of the fluid? Does the size of the container of the fluid make a difference? How can we assign a reasonable range for $h$ given other variables? Is there some other equation to use for great depths? The dispersion relation for gravity waves is given by (https://en.wikipedia.org/wiki/Dispersion_(water_waves)) $$ \omega=\sqrt{gk\tanh (kh)}. $$ Recall that the group velocity is defined by $v_g=\frac{\partial \omega}{\partial k}$. If we assume that $h$ is small compared to the wavelength, i.e. $kh \ll 1$ so that $\tanh(kh) \simeq kh$, then $\omega \simeq k\sqrt{gh}$: the waves are non-dispersive, the phase and group velocities coincide, and $c = \omega/k \simeq \sqrt{gh}$, i.e. $c^2 = gh$. So the formula holds in the shallow-water regime $h \ll \lambda$; at great depths $\tanh(kh) \to 1$ and instead $c \simeq \sqrt{g/k}$, independent of $h$.
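A numeric check of the shallow-water limit (illustrative Python): the phase speed computed from the full dispersion relation approaches sqrt(g h) as kh shrinks.

```python
import math

def phase_speed(k, h, g=9.81):
    """Phase speed c = w/k from the full relation w = sqrt(g k tanh(k h))."""
    return math.sqrt(g * math.tanh(k * h) / k)

h = 10.0  # depth in metres (an arbitrary example value)
for wavelength in (50.0, 500.0, 5000.0):  # k*h decreasing
    k = 2 * math.pi / wavelength
    print(wavelength, phase_speed(k, h), math.sqrt(9.81 * h))
```

For the longest wavelength (kh ≈ 0.013) the full-relation speed agrees with sqrt(g h) ≈ 9.90 m/s to better than a millimetre per second, while the 50 m wave (kh > 1) is clearly slower, showing where the shallow-water approximation breaks down.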
{ "domain": "physics.stackexchange", "id": 40069, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-dynamics, waves, newtonian-gravity, speed", "url": null }
So when you read "$$\mathcal{B}$$ is a basis for $$\tau$$", it means that $$\mathcal{B}$$ is a basis, and $$\tau$$ is the topology generated by the basis $$\mathcal{B}$$. • So if I have a collection of subsets B that generate a topology, then the collection B satisfies the first two conditions given? – ChrisWong Jun 17 '19 at 11:00 • @ChrisWong Yes. Since the whole set is in the topology, the generating set must satisfy (1), and since the topology is closed under binary intersections, the generating set must satisfy (2). – Alex Kruckman Jun 17 '19 at 12:58 There is a difference between being a basis and a basis for a particular topology. A basis is a collection of subsets of a set $$T$$ satisfying certain properties. It generates a topology as you indicate, and we call it a basis for the topology it generates. It's possible that a proof begins "let $$(T,\tau)$$ be a topological space and $$\mathcal{B}$$..."; then there is an implied "for the topology $$\tau$$".
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9539660923657093, "lm_q1q2_score": 0.8218782356832086, "lm_q2_score": 0.8615382058759129, "openwebmath_perplexity": 88.06853563240989, "openwebmath_score": 0.931822657585144, "tags": null, "url": "https://math.stackexchange.com/questions/3264671/if-b-is-a-basis-for-a-topology-t-does-b-necessarily-generate-t" }
black-hole, radio-astronomy, supermassive-black-hole, radio-telescope Is this correct, or is it actually a photo capturing visible light? I mean, scientifically, this is a huge step; however, I have the feeling that the news got "boosted" a little too much in the media, as they just talk about "photos" and "light" surrounding a black hole - not specifying anything else for an inexperienced audience. You are right that the image is not a visible-light image. It was taken using very, very short wavelength microwaves -- the wavelength is 1.3 mm, which is only a bit longer than the far IR! I think you may be reading a bit too much into the word "photo", which, as you say, has the everyday meaning of "taken with a camera using lenses in visible light." I think most astronomers would use the term "image" rather than "photo" -- if you look at the NSF's press release, for example, you'll see that it uses the term "image". (Links to all six of the papers the EHT team has published are in the NSF press release if you want further details.)
{ "domain": "astronomy.stackexchange", "id": 3630, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "black-hole, radio-astronomy, supermassive-black-hole, radio-telescope", "url": null }
algorithms, probability-theory, randomized-algorithms, statistics In order to complete the description of the algorithm, we need to solve the following problem: given $S \subseteq \{1,\ldots,n\}$ and $i$, find the $i$'th smallest element in $\overline{S}$. We can assume that $S$ is stored in a structure (such as a self-balancing binary tree) which can efficiently answer the following type of query: given $x$, how many elements in $S$ are smaller than $x$. We can then find the $i$'th smallest number in $\overline{S}$ using binary search. Overall, this algorithm runs in $\tilde\Theta(m)$ for all values of $m$, where the tilde hides factors logarithmic in $n$. (When $m \ll \sqrt{n}$ we can use your first approach, thus getting rid of this dependence on $n$.)
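A sketch of the described query in Python (using a sorted list plus bisect in place of the self-balancing tree; names are illustrative). The "how many elements of S are smaller than x" query drives a binary search on the answer:

```python
import bisect

def ith_smallest_in_complement(S_sorted, i, n):
    """Find the i-th smallest element (1-indexed) of {1..n} \\ S,
    where S_sorted is a sorted list of distinct elements of {1..n}."""
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi) // 2
        # Count elements of the complement that are <= mid
        in_complement = mid - bisect.bisect_right(S_sorted, mid)
        if in_complement >= i:
            hi = mid
        else:
            lo = mid + 1
    return lo

S = [2, 3, 7]
# The complement of S in {1..8} is [1, 4, 5, 6, 8]
print([ith_smallest_in_complement(S, i, 8) for i in range(1, 6)])
# → [1, 4, 5, 6, 8]
```

Each query costs O(log² n) here (a binary search whose step is itself a bisect); with the tree structure the answer describes, the counting step is O(log n) as well, matching the stated bounds up to logarithmic factors.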
{ "domain": "cs.stackexchange", "id": 16813, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, probability-theory, randomized-algorithms, statistics", "url": null }
general-relativity, differential-geometry, metric-tensor, event-horizon, singularities Title: The meaning of the singularity in Schwarzschild metric I have been told by many lecturers and many books that in the Schwarzschild metric $$ ds^2=-\left(1-\frac{r_s}{r}\right)dt^2 + \left(1-\frac{r_s}{r}\right)^{-1} dr^2 + r^2 d\Omega ^2 $$ the singularity at $r=r_s$ purely comes from the bad choice of coordinate and that there is no physical singularity there. I got really confused up to this point because I have also been told that in a black hole, the surface $r=r_s$ is called the event horizon and nothing can pass through it.
{ "domain": "physics.stackexchange", "id": 68284, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, differential-geometry, metric-tensor, event-horizon, singularities", "url": null }
java, game, guava private void permuteHelper(String prefix, String remaining) { int charsLeft = remaining.length(); if (charsLeft == 0) { Board newBoard = new Board(prefix); allBoards.put(prefix, newBoard); if (newBoard.isSolved()) scoredBoards.add(newBoard); } else { Set<Character> seenCharacters = new HashSet<>(); for (int i = 0; i < charsLeft; i++) { char c = remaining.charAt(i); if (!seenCharacters.contains(c)) { permuteHelper(prefix + c, remaining.substring(0, i) + remaining.substring(i + 1, charsLeft)); seenCharacters.add(c); } } } } private boolean discoveredNewBoard(Set<Board> nextBoardIteration, Board boardAfterMove) { return boardAfterMove != null && !nextBoardIteration.contains(boardAfterMove); }
{ "domain": "codereview.stackexchange", "id": 7678, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, guava", "url": null }
evolution, adaptation, addiction Is there/was there something adaptive about addictive behaviour? Alternatively, how is it that addictive behaviour survived/persists if it is not adaptive? Albeit used in the keywords of the paper, feather pecking is not an addiction, but normal behavior turned pathological in artificial (overcrowded) environments. Addiction typically arises through overstimulation of reward centers in the brain, most notably the dopaminergic pathways involving the nucleus accumbens. Massive dopamine release, e.g. through cocaine or meth intake, results in euphoria (Wise & Bozarth, 1985). When the drug's effects wear off it is followed by the 'crash', characterized by dysphoria and a longing for more drugs (CSAT, 1999). Gambling addiction works much in the same way.
{ "domain": "biology.stackexchange", "id": 6218, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "evolution, adaptation, addiction", "url": null }
navigation, simulation, stage, amcl Replace robot_0 in the last two params to robot_1 for the second robot's amcl node. Originally posted by Dany with karma: 46 on 2016-04-29 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by gavran on 2016-05-01: Thank you! I am accepting this answer as it solves the tf-tree trouble. However, with this change, there is another problem - complaining about robot being out of map. I will have to investigate it further. Comment by gavran on 2016-05-01: However, I managed to solve the initial problem coming from another direction, and the example code is on github. Seems that putting tf_prefix directly to amcl node did the trick (but not sure as the same thing didn't work in initial code)
{ "domain": "robotics.stackexchange", "id": 23929, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, simulation, stage, amcl", "url": null }
java, beginner, ascii-art if (s.length() > 8) { s = s.substring(0,8); } return s; } /** * Prints the number in large digits */ static void printString(String s) { String a = getLine(s, "top"); String b = getLegs(s, "top"); String c = getLine(s, "mid"); String d = getLegs(s, "bot"); String e = getLine(s, "bot"); System.out.println(a); for (int i = 0; i < HEIGHT; i++) { System.out.println(b); } System.out.println(c); for (int i = 0; i < HEIGHT; i++) { System.out.println(d); } System.out.println(e); System.out.println(); } /** * Builds the top, middle and bottom horizontal lines */ static String getLine(String s, String w) {
{ "domain": "codereview.stackexchange", "id": 16237, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner, ascii-art", "url": null }
functional-programming, matrix, f# let isMatrixFeasible = List.exists isColumnFeasible and applying logical equivalences if a then false else b = not a && b, not exists p = forall not p and not(a && b) = not a || not b as well as combining filter x and exists y into exists (x && y): let thereIsNoZeroProbabilityIn = List.forall (snd >> (<>) 0) let isColumnFeasible column = let thereIsNoZeroIn = // not << (List.exists (fun (n, q) -> n = 0 && q < 100)) // List.forall (fun (n, q) -> not(n = 0 && q < 100)) List.forall (fun (n, q) -> n <> 0 || q >= 100) thereIsNoZeroProbabilityIn column && thereIsNoZeroIn column combining the foralls let isColumnFeasible = List.forall (fun (n, q) -> (n <> 0 || q >= 100) && q <> 0) let isMatrixFeasible = List.exists isColumnFeasible
{ "domain": "codereview.stackexchange", "id": 23190, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "functional-programming, matrix, f#", "url": null }
python, numpy, simulation, matplotlib, physics ''' Model the particle motion in the field at each time step (Forward Euler Method) ''' for it in np.arange(0, len(t)-1,1): Bp = bot_field(rp[it,0], rp[it, 1], rp[it,2]) # get the magnetic field at the current particle position Ap = QoverM * np.cross(vp[it,:], Bp) # acceleration of the particle from the magnetic (Lorentz) force vp[it+1] = vp[it] + dt*Ap # update the velocity of the particle based on this acceleration rp[it+1] = rp[it] + dt*vp[it] # update the position of the particle based on this velocity if (np.sqrt(np.sum(rp[it+1]**2)) > 20.0): # If the particle escapes (i.e. exceeds 20 m from origin), cease calculations break
{ "domain": "codereview.stackexchange", "id": 42459, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, numpy, simulation, matplotlib, physics", "url": null }
statistical-mechanics, stochastic-processes, brownian-motion We write $$M{d v\over dt} = -\gamma v+Q\eta(t)$$ where $Q$ is the "strength" of the noise. We go on with the derivation, so I will skip it for now - you seem to be familiar with it - and at the end we set $Q$ such that $\langle v^2 \rangle = k_B T / M$. This is given by general physical principles.
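For reference, the skipped step can be sketched as follows (my own filling-in, under the standard assumption of delta-correlated noise, $\langle\eta(t)\eta(t')\rangle=\delta(t-t')$): solving the linear equation and computing the stationary variance gives

```latex
v(t) = \frac{Q}{M}\int_{-\infty}^{t} e^{-\gamma (t-s)/M}\, \eta(s)\, ds
\quad \Longrightarrow \quad
\langle v^2 \rangle = \frac{Q^2}{M^2}\int_{0}^{\infty} e^{-2\gamma \tau/M}\, d\tau
= \frac{Q^2}{2\gamma M} .
```

Equating this with $k_B T / M$ (equipartition) fixes the noise strength, $Q = \sqrt{2\gamma k_B T}$, which is the fluctuation-dissipation relation in this notation.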
{ "domain": "physics.stackexchange", "id": 76228, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "statistical-mechanics, stochastic-processes, brownian-motion", "url": null }
machine-learning, deep-learning Points (1) and (2) are not new in Reinforcement learning, but improve on the previous AlphaGo software as stated in the comments to your question. It just means they are now using pure Reinforcement Learning starting from randomly initialized weights. This is enabled by better, faster learning algorithms. Their claim here is "Our primary contribution is to demonstrate that superhuman performance can be achieved without human domain knowledge." (p. 22). Points (3) and (4) are novel in the sense that their algorithm is simpler and more general than their previous approach. They also mention that it is an improvement on previous work by Guo et al. Unifying the policy/value network (3) enables them to implement a more efficient variant of Monte-Carlo tree search to search for good moves and simultaneously use the search tree to train the network faster (4). This is very powerful.
{ "domain": "datascience.stackexchange", "id": 2115, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, deep-learning", "url": null }
ros, python, catkin, python2.7, setup.py Both seem ok but, when I run tox on my package root folder I get the following trace from tox: viki@vROS:[~/devel/mona...src/rospy_utils]$ tox GLOB sdist-make: /home/viki/devel/monarch/code/trunk/catkin_ws/src/rospy_utils/setup.py py27 inst-nodeps: /home/viki/devel/monarch/code/trunk/catkin_ws/src/rospy_utils/.tox/dist/rospy_utils-0.2.0.zip ERROR: invocation failed (exit code 1), logfile: /home/viki/devel/monarch/code/trunk/catkin_ws/src/rospy_utils/.tox/py27/log/py27-4.log ERROR: actionid=py27 msg=installpkg cmdargs=[local('/home/viki/devel/monarch/code/trunk/catkin_ws/src/rospy_utils/.tox/py27/bin/pip'), 'install', '--pre', '-U', '--no-deps', '/home/viki/devel/monarch/code/trunk/catkin_ws/src/rospy_utils/.tox/dist/rospy_utils-0.2.0.zip']
{ "domain": "robotics.stackexchange", "id": 20674, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, python, catkin, python2.7, setup.py", "url": null }
java, linked-list /** * Initializes a DoublyLinkedListIterator with the option of choosing * if iteratorNode should start at the first or last element of the list. * * @param setToLast - Set the iteratorNode to the last item of the DoublyLinkedList */ public DoublyLinkedListIterator(boolean setToLast) { if (setToLast) { this.iteratorNode = last; this.index = size(); } else { this.iteratorNode = first; this.index = 0; } } /** * Updates the iteratorNode to the DoublyLinkedList boundary elements last or first. * * @param setToLast - Set the iteratorNode to the last item of the DoublyLinkedList */ public void update(boolean setToLast) { if (setToLast) { this.iteratorNode = last; this.index = size(); } else { this.iteratorNode = first; this.index = 0; } }
{ "domain": "codereview.stackexchange", "id": 22957, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, linked-list", "url": null }
context-free, automata, pumping-lemma If a language L is context-free, then there exists some integer p ≥ 1 (the "pumping length") such that any string s in L with |s| ≥ p can be written as s = uvxyz with substrings u, v, x, y and z, such that |vxy| ≤ p, |vy| ≥ 1, and $uv^nxy^nz$ is in L for all n ≥ 0.
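As a worked illustration (my addition, not part of the quoted statement), the lemma is usually applied contrapositively to show a language is *not* context-free:

```latex
% Claim: L = \{ a^k b^k c^k : k \ge 0 \} is not context-free.
% Take s = a^p b^p c^p. For any split s = uvxyz with |vxy| \le p,
% the window vxy meets at most two of the three letter blocks,
% so pumping up to uv^2 x y^2 z increases at most two of the three
% letter counts. The resulting string has unequal counts of a, b, c,
% hence uv^2 x y^2 z \notin L, contradicting the lemma.
```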
{ "domain": "cs.stackexchange", "id": 1218, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "context-free, automata, pumping-lemma", "url": null }
c++ /* print the time it took */ std::chrono::time_point<std::chrono::high_resolution_clock> const t2 = std::chrono::high_resolution_clock::now(); std::chrono::duration<double> const duration = std::chrono::duration_cast<std::chrono::duration<double>>(t2 - t1); double const time_took = duration.count(); std::printf("It took %f %s", time_took, std::array{ "seconds", "second" } [1.0 == time_took]); /* wait for user input to close */ (void)std::getchar(); }
{ "domain": "codereview.stackexchange", "id": 39563, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++", "url": null }
# Reference book for Artin-Schreier Theory The aim of the question is very simple: I would like to study Artin-Schreier Theory, but I have had embarrassing difficulties in finding a book which could help me in doing that. Specifically, I'm looking for a book which explicitly writes down the main theorem of Artin-Schreier Theory, also has a proof of it (or at least a sketch of it), and maybe also a short introduction to the topic. The closest I have got so far is the book "Cohomology of Number Fields" by Neukirch, which is suggested as a reference by the Wikipedia page on Artin-Schreier Theory. But it doesn't fit the requests I made above. So the question is: what is the name of such a book? Or, if such a book does not exist, do you have any suggestions on how to proceed? Thank you very much!
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9857180669140005, "lm_q1q2_score": 0.8298027438199539, "lm_q2_score": 0.8418256412990657, "openwebmath_perplexity": 237.48664554542185, "openwebmath_score": 0.8723489046096802, "tags": null, "url": "https://math.stackexchange.com/questions/50041/reference-book-for-artin-schreier-theory/50062" }
newtonian-mechanics, forces, everyday-life, continuum-mechanics, stress-strain The mathematics behind this is absolutely fascinating and beautiful while being relatively easy to understand once you have seen it. If you understand it you get some kind of insight into how the minds of people like Gauss worked, which is just lovely. The mathematics and physics behind it turns out to be some of the maths that you need to understand General Relativity, which is a theory all about curvature. So by understanding this properly you are starting on the path to understanding the most beautiful and profound theory of modern physics (I was going to write 'one of the most ...' but no: there's GR and there's everything else). The mathematics and physics behind it also is important in things like engineering: if you want to understand why beams are strong, or why car panels are rigid you need to understand this stuff.
{ "domain": "physics.stackexchange", "id": 58045, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, forces, everyday-life, continuum-mechanics, stress-strain", "url": null }
operational-semantics, small-step-semantics Title: From big-step to small-step semantics Ciobaca: From Small-Step Semantics to Big-Step Semantics, Automatically describes how to go from small-step to big-step semantics. I need to go the other way around: from big-step to small-step. Is there literature for this? What should I be careful of? EXTENSION: I have a big-step operational semantics for a functional language and for my purposes I need to write a small-step operational semantics for it. Can you point to relevant scientific literature on how I should do this, and what are the pitfalls? Here are a couple of articles on the matter: From Big-Step to Small-Step Semantics and Back with Interpreter Specialisation and Automatic Transformation of a Big-Step Skeletal Semantics into Small-Step. Also, if you need to write your own semantics for a functional language, you should have a look at PCF's semantics, for example here. If you feel adventurous, start with a semantics for the lambda calculus and go your own way.
{ "domain": "cs.stackexchange", "id": 21421, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "operational-semantics, small-step-semantics", "url": null }
c++, performance, c++11, mathematics All of this being within main, of course. This is obviously slow so I was looking for optimizations. A couple of my thoughts were to remove the std::abs call because the inside expression must be positive, perhaps I could simply check == 4 rather than account for precision. I couldn't figure out a way to drop a for loop, because you can't isolate a variable. Otherwise I'm not sure what to do. Questions: How can this code be sped up? My goal is lim=1E3 in under a minute. As of now it takes 31.0643 s for lim=251. Can the for-loops be reduced? As of now formatting is not really an issue for me, but can this be simplified down to a single expression and/or loop? Or maybe look a little cleaner, without sacrificing optimization? Update: After some research I'm now aware that the smallest known solution to this has numbers with 81 digits. I'm not concerned with this, just the above questions. Here are some simple optimizations you can make that can push your limit a bit higher:
{ "domain": "codereview.stackexchange", "id": 26041, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, c++11, mathematics", "url": null }
c#, comparative-review class FindDuplicate Class names should be nouns, e.g. DuplicateFinder. public FindDuplicate finderVariantTwo() public FindDuplicate getDuplicates() This is not a user-friendly API. You first need to create an instance of the FindDuplicate class and then you can search for duplicates. These methods could just return the result. If you want to implement multiple algorithms for searching for duplicates you should take a look at the Strategy pattern. this.input.GroupBy(result => result) .Where(whereValueClause => whereValueClause.Count() > 1) .Select(indexValue => indexValue.Key);
{ "domain": "codereview.stackexchange", "id": 24176, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, comparative-review", "url": null }
java, beginner, object-oriented else if (otherMin == max) return true; else if (otherMax == min) return true; else return false; } public String toString() { return "[" + min + "," + max + "]"; } public static void main(String[] args) { double min1 = Double.parseDouble(args[0]); double max1 = Double.parseDouble(args[1]); double min2 = Double.parseDouble(args[2]); double max2 = Double.parseDouble(args[3]); Interval interval1 = new Interval(min1, max1); Interval interval2 = new Interval(min2, max2); System.out.println(interval1.toString()); System.out.println(interval2.toString()); System.out.println(interval1.intersects(interval2)); } }
{ "domain": "codereview.stackexchange", "id": 39370, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner, object-oriented", "url": null }
newtonian-gravity, estimation, solar-system, tidal-effect, milky-way Title: Are there tidal forces between the solar system and the galactic centre? Tidal forces are experienced within a system moving in free fall around a bigger object because of the different strength of gravity over the system. The difference can be calculated between the far side and the near side. Are there measurable tidal forces between the galactic centre and the solar system? The magnitude of the gravitational force, $F$, between two masses, $M$ and $m$, separated by a distance, $r$, is given by: $$ F = \frac{ G \ M \ m }{ r^{2} } \tag{0} $$ where $G$ is the gravitational constant. Suppose I move $m$ by a distance $\Delta r$, then the force changes to: $$ F_{\Delta} = \frac{ G \ M \ m }{ \left( r + \Delta r \right)^{2} } = F \ \left[ 1 + \frac{ \Delta r }{ r } \right]^{-2} . \tag{1} $$ Note that the difference is entirely encompassed in the brackets.
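Carrying the expansion one step further (a standard first-order Taylor step; the equation number continues the source's own numbering as an assumption): for $\Delta r \ll r$,

```latex
F_{\Delta} = F \left[ 1 + \frac{ \Delta r }{ r } \right]^{-2}
\approx F \left( 1 - \frac{ 2 \, \Delta r }{ r } \right)
\quad \Longrightarrow \quad
\Delta F = F_{\Delta} - F \approx - \frac{ 2 \, G \, M \, m }{ r^{3} } \, \Delta r . \tag{2}
```

This $1/r^{3}$ scaling is why tidal forces fall off much faster with distance than gravity itself, which matters for the galactic-centre estimate.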
{ "domain": "physics.stackexchange", "id": 87534, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-gravity, estimation, solar-system, tidal-effect, milky-way", "url": null }
friction, metals Why this seems strange: Let's take a Platinum box weighing $1N$ on a Platinum table. The maximal static friction would be $1.2N$. What happens when we push with $F=1.2N$ and slowly increase the force? By definition it should start moving (speeding up), but $f_k=3N>F$, so instead of accelerating, we are slowing down - from an initial velocity of $0$! If you check the references in the Wikipedia article you linked, you will see that the source for the kinetic coefficient is different from the source for the static coefficient of the metals you mention (platinum, silver and aluminum). That implies that the experimental methods and circumstances used to determine the coefficients could have been radically different for the two sets of coefficients.
{ "domain": "physics.stackexchange", "id": 83724, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "friction, metals", "url": null }
zoology, food, toxicology, dogs The reason for this is the much slower metabolization of theobromine in the animals (there are reports of poisonings of dogs, cats, birds, rabbits and even bear cubs) so that the toxic effect can happen. Depending on the size of the dog, something between 50 and 400g of milk chocolate can be fatal. As mentioned by @anongoodnurse, the cocoa content is lowest in milk chocolate and gets much higher the darker the chocolate is. The poisoning comes from the theobromine itself, which has different mechanisms of action: First, it is a non-selective antagonist of the adenosine receptor, a subclass of G-protein coupled receptors on the cell surface which usually bind adenosine as a ligand. This influences cellular signalling.
{ "domain": "biology.stackexchange", "id": 7286, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "zoology, food, toxicology, dogs", "url": null }
homework-and-exercises, atomic-physics Replacing z and x by D and L respectively, the equation becomes: $$D = -({\mu\over 2v^2}{dB_z\over dz})L^2$$ Finally, rearranging the equation and solving for the gradient ${dB_z\over dz}$. Now I'm stuck with the z-component of the magnetic dipole moment $\mu_z$; I have no idea how to find its numerical value. I know there is an orbital magnetic moment $\mu_l$ and a spin magnetic moment $\mu_s$ of electrons; their z-components are given by: $$\mu_l= \mu_B g_l m_l$$ $$\mu_s= \mu_B g_s m_s$$ Where the g-factor is given by: $g_l = 1$ and $g_s = 2$. $m_l$ and $m_s$ are the usual quantum numbers. The problem is that I don't know how to use these two formulae in this situation of many electrons. So my question is: How do I find $\mu_z$? Is the electronic configuration of the silver atom in the ground state important here? What about the spin magnetic moment of the nucleus? Is it involved? Any hint, idea or comment will be greatly appreciated. The magnetic moment is given by source
{ "domain": "physics.stackexchange", "id": 41593, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, atomic-physics", "url": null }
ros, compile Originally posted by canatan with karma: 41 on 2013-07-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14759, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, compile", "url": null }
performance, primes, assembly, sse 110.691877920 seconds time elapsed Branch Mispredicts This code is being crushed by branch mispredictions: 20% of branches are mispredicted, and there are a lot of branches: about 1 of every 5 instructions¹. Usually you are looking for a value less than 1% to avoid a big impact. Off the top of your head this is going to have an impact of about 9.3 billion misses * 15 cycles = ~140 billion cycles, or about half the total running time. This also explains the poor IPC of 0.87. Toplev.py We can try toplev.py to confirm: $ ../pmu-tools/toplev.py ./likely-primes time ProcessA 697803485 Using level 1. ... ... BAD Bad_Speculation: 63.49 % Slots [100.00%] BN This category represents slots fraction wasted due to incorrect speculations...
{ "domain": "codereview.stackexchange", "id": 26669, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, primes, assembly, sse", "url": null }
java, comparative-review Title: The best way to unwrap the optional in java Instead of using the null checks I tried using Optional and safely unwrapping the optional. Approach 1 My first approach of unwrapping the Optional is using map() followed by orElseThrow(). Approach 2 My second approach of unwrapping the Optional is using orElseThrow() and setting the value to a variable and using it. Service @Inject private UserConfigurationRepository userConfigurationRepository; public UserConfiguration updateConfigUsingOrElseThrow(User user, String value) { UUID configurationId = user.getConfigurationId().orElseThrow(); UserConfiguration userConfiguration = userConfigurationRepository .findById(configurationId).orElseThrow(); return updateUserConfiguration(userConfiguration, value); // Performs update operation }
{ "domain": "codereview.stackexchange", "id": 36086, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, comparative-review", "url": null }
c#, design-patterns, database Title: Database adapters I'm writing adapters for some MS Access database because I really didn't like that automatically generated code that Visual Studio was providing. Right now I just need to get data, not updating anything over there, but that will come later. Here's some code that I'm using (all the adapters look the same, so that is what bothers me if I can fix them up somehow). I'm wondering about moving most of the methods to the base class and just passing kind of Filter class as an argument to form an actual query, but I'd like to see some different approaches on that problem. public abstract class AccessDatabaseAdapter { private readonly OleDbConnection _connection; private OleDbDataAdapter _adapter; public OleDbDataAdapter Adapter { get { if (_adapter == null) { _adapter= new OleDbDataAdapter(String.Empty, _connection); } return _adapter; } }
{ "domain": "codereview.stackexchange", "id": 14387, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, design-patterns, database", "url": null }
{eq}F\ =\ G\cdot \dfrac{Mm}{R^2} {/eq}. We are given: • The weight of the spaceman Speff at the surface of the planet X is {eq}W_s\ =\ 389 \rm N{/eq}. • The weight of the spaceman Speff at a height {eq}h\ =\ 1.86\times 10^4\ \rm km{/eq} from the surface of the planet X is {eq}W_h\ =\ 24.31 \rm N{/eq}. • The mass of the spaceman Speff is {eq}m\ =\ 75.0\ \rm kg{/eq}. We are asked to calculate the mass of Planet X. Therefore, we need to calculate the radius of the planet X to calculate the mass of the planet by using the weight of the spaceman as follows: {eq}\begin{align} W_s\ &=\ mg_s\\[0.3 cm] g_s\ &=\ \dfrac{W_s}{m}\\[0.3 cm] &=\ \dfrac{389\ \rm N}{75.0\ \rm kg}\\[0.3 cm] &\approx \ 5.19\ \rm m/s^2\\[0.3 cm] \end{align} {/eq} Now, we are also given:
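The excerpt cuts off here; as a hedged numeric check of where the arithmetic is heading (my own sketch, assuming the standard inverse-square relation $W_s/W_h = ((R+h)/R)^2$ and the given values):

```python
import math

# Given values from the problem statement.
G = 6.674e-11            # gravitational constant, N m^2 / kg^2
W_s, W_h = 389.0, 24.31  # weights at the surface and at height h, N
m = 75.0                 # Speff's mass, kg
h = 1.86e7               # 1.86e4 km converted to metres

g_s = W_s / m            # surface gravity, ~5.19 m/s^2

# W_s / W_h = ((R + h) / R)^2  =>  R = h / (sqrt(W_s / W_h) - 1)
R = h / (math.sqrt(W_s / W_h) - 1.0)

# g_s = G M / R^2  =>  M = g_s R^2 / G
M = g_s * R**2 / G
print(f"R = {R:.3e} m, M = {M:.3e} kg")
```

With these numbers the weight ratio is almost exactly 16, so $R + h = 4R$, giving $R \approx 6.2\times 10^{6}$ m and a planetary mass of roughly $3\times 10^{24}$ kg.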
{ "domain": "study.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9793540704659682, "lm_q1q2_score": 0.8118245940274381, "lm_q2_score": 0.8289388062084421, "openwebmath_perplexity": 9788.634218913056, "openwebmath_score": 0.9786102175712585, "tags": null, "url": "https://study.com/academy/answer/the-weight-of-spaceman-speff-at-the-surface-of-planet-x-solely-due-to-its-gravitational-pull-is-389-n-if-he-moves-to-a-distance-of-1-86-x-104-km-above-the-planet-s-surface-his-weight-changes-to-24-31-n-what-is-the-mass-of-planet-x-if-speff-s-mass-is.html" }
javascript, beginner, html replaceWithDropdownModel( "model" , unitArray); } return [ unitArray ]; } function finalUnitCalc() { var selectedUnit = document.getElementById("model").value, dropperCapacityUnit = document.getElementById("dropperCapacityUnit"); for ( var i = 0 ; i < models.length ; i++) { if (selectedUnit === models[i].model) { dropperCapacityUnit.innerHTML = models[i].dropperSize; } } } function locationFinder(location , nominalAir, factorNum) { return { location: location, nominalAir: nominalAir, factorNum: factorNum }; } function ductFinder(size, capacity) { return { size: size, capacity: capacity }; } function dropperFinder(size, capacity) { return { size: size, capacity: capacity }; }
{ "domain": "codereview.stackexchange", "id": 10741, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, beginner, html", "url": null }
radar, linear-chirp Some examples! So, in light of that, we seek an alternative transform (preferably linear) that we can use such that we can easily tell when a signal "starts" in the time domain. Enter the concept of pulse compression. Wikipedia has a great long winded explanation of it, but here's some simple Octave/MATLAB code that might clue you in on why pulse compression is really important in SAR image formation: % generate an LFM and a tone f1 = -10; f2 = 10; t = linspace(0,1,500); xc = exp(1i*pi*(f2-f1)*t.^2); % LFM xf = exp(1i*pi*f1*t); % tone
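A NumPy port of that snippet (my own, with an added matched-filter step) shows the point numerically: correlating the LFM chirp against itself compresses all of its energy into one sharp peak, which is exactly what makes the pulse easy to locate in time:

```python
import numpy as np

# Same parameters as the Octave sketch above.
f1, f2 = -10.0, 10.0
t = np.linspace(0.0, 1.0, 500)
xc = np.exp(1j * np.pi * (f2 - f1) * t**2)   # LFM chirp, |xc| = 1 everywhere

# Matched filter: convolve with the time-reversed complex conjugate.
mf = np.convolve(xc, np.conj(xc[::-1]))

# The zero-lag output equals the total pulse energy (500 samples of unit
# magnitude), while all other lags are much smaller: pulse compression.
peak = np.abs(mf).max()
```

The same correlation applied to the tone `xf` would give a broad triangular output, which is why the chirp, not the tone, lets you tell precisely when a return "starts".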
{ "domain": "dsp.stackexchange", "id": 8292, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "radar, linear-chirp", "url": null }
ros-indigo Originally posted by marcoresk on ROS Answers with karma: 76 on 2016-11-06 Post score: 0 I solved it by myself. I started again all tutorials from zero (creating the catkin ws) and I managed to obtain full compilation of my package: maybe I was wrong in some part of it. For the second issue (package not found) I discovered I have to write source ./devel/setup.bash in every new terminal before the roscd command into the workspace Originally posted by marcoresk with karma: 76 on 2016-11-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 26160, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-indigo", "url": null }
ros, ros-kinetic, kdl the src file is: #ifndef OROCOS_LOCALISATION_COMPONENT_HPP #define OROCOS_LOCALISATION_COMPONENT_HPP #include <rtt/RTT.hpp> #include <iostream> #include <geometry_msgs/Pose2D.h> #include <geometry_msgs/TransformStamped.h> #include <kdl/frames.hpp> class Localisation : public RTT::TaskContext { geometry_msgs::TransformStamped tfs; RTT::OutputPort<geometry_msgs::Pose2D> outpose; public: Localisation(std::string const& name) : TaskContext(name, PreOperational) { addPort("pose", outpose).doc("Contains latest Youbot Pose."); } bool configureHook() { /** Check in here that we can call rtt_tf */ return false; }
{ "domain": "robotics.stackexchange", "id": 30279, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-kinetic, kdl", "url": null }
quantum-mechanics, electromagnetism, gauge-invariance When you change gauges, it's important to take care to transform all the relevant observables in the process; failure to do this is probably the most common reason for confusion regarding spurious gauge dependences in results. It's also important to note that none of this is specific to quantum mechanics, and you get exactly the same issues and concepts from the beginnings of lagrangian mechanics, from gauges and gauge changes to the mismatch between canonical and kinetic momentum. For details, see your favourite analytical mechanics textbook.
{ "domain": "physics.stackexchange", "id": 40381, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, electromagnetism, gauge-invariance", "url": null }
big-bang-theory, expansion The newer light is moving slower. The older light is moving faster. So from this it seems that the universe was expanding faster in the past? Light doesn't move slower or faster; it red-shifts when objects are moving away from each other. The faster they move away the greater the red-shift. That's one way relative velocity can be measured.
{ "domain": "astronomy.stackexchange", "id": 987, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "big-bang-theory, expansion", "url": null }
python, ros-kinetic, rospack /usr/lib/python2.7/dist-packages/rosdep2/platforms/cygwin.py /usr/lib/python2.7/dist-packages/rosdep2/platforms/cygwin.pyc /usr/lib/python2.7/dist-packages/rosdep2/platforms/debian.py /usr/lib/python2.7/dist-packages/rosdep2/platforms/debian.pyc /usr/lib/python2.7/dist-packages/rosdep2/platforms/freebsd.py /usr/lib/python2.7/dist-packages/rosdep2/platforms/freebsd.pyc /usr/lib/python2.7/dist-packages/rosdep2/platforms/gem.py /usr/lib/python2.7/dist-packages/rosdep2/platforms/gem.pyc /usr/lib/python2.7/dist-packages/rosdep2/platforms/gentoo.py /usr/lib/python2.7/dist-packages/rosdep2/platforms/gentoo.pyc /usr/lib/python2.7/dist-packages/rosdep2/platforms/opensuse.py /usr/lib/python2.7/dist-packages/rosdep2/platforms/opensuse.pyc /usr/lib/python2.7/dist-packages/rosdep2/platforms/osx.py /usr/lib/python2.7/dist-packages/rosdep2/platforms/osx.pyc /usr/lib/python2.7/dist-packages/rosdep2/platforms/pip.py /usr/lib/python2.7/dist-packages/rosdep2/platforms/pip.pyc
{ "domain": "robotics.stackexchange", "id": 32862, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, ros-kinetic, rospack", "url": null }
Slope of a Secant/Tangent Line (Walter Fendt) The function is in red. Substitute and in the slope equation. Now, we can allow the second point (blue in the image) to approach the first point (black in the image), and we see that the secant lines do approach the tangent line. To find the slope of the secant line above, we divided the total change in s by the total change in t. But observe that we can compute an approximation to m by choosing a nearby point Q(x, 5x) on the graph (as in the figure) and computing the slope $m_{PQ}$ of the secant line $PQ$. Find an equation of the tangent line to the curve at P(3,-7). What is the formula for the slope of a secant line? Precalculus Limits, Motion, and the Tangent Line Definition of the Tangent Line. Determine the slope of a line passing through two points. In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. Assignment 4 -- Secant and Tangent
{ "domain": "otticamuti.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9873750496039277, "lm_q1q2_score": 0.8291323699645727, "lm_q2_score": 0.8397339696776499, "openwebmath_perplexity": 238.17182620148998, "openwebmath_score": 0.7098960280418396, "tags": null, "url": "http://oogo.otticamuti.it/find-the-slope-of-the-secant-line-through-the-points.html" }
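The limiting process described in the snippet above can be checked numerically. A minimal sketch, assuming $f(x) = x^2$ and the point $P = (1, 1)$ (both illustrative choices, not taken from the original page):

```python
# Numerical illustration: secant slopes m_PQ approach the tangent slope
# as Q approaches P. The function f(x) = x**2 and the point P = (1, 1)
# are assumptions for illustration; the tangent slope there is f'(1) = 2.
def f(x):
    return x ** 2

def secant_slope(x0, x1):
    """Slope of the secant line through (x0, f(x0)) and (x1, f(x1))."""
    return (f(x1) - f(x0)) / (x1 - x0)

x0 = 1.0
slopes = [secant_slope(x0, x0 + h) for h in (1.0, 0.1, 0.01, 0.001)]
# slopes -> [3.0, 2.1, 2.01, 2.001], approaching the tangent slope 2
```

As $h$ shrinks, the secant slope $m_{PQ}$ approaches the tangent slope $f'(1) = 2$, which is exactly the picture of the blue point sliding toward the black one.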
stacking To avoid data leakage and ensure that the meta-features reflect the models' predictions on unseen data, it is recommended to create a specific fold that is not used in the training process of the base models. This fold is typically called the meta_train set. The base models are trained on the remaining folds (excluding the meta_train set) and their predictions on the meta_train set are used as meta-features. These meta-features are then combined with the corresponding labels from the meta_train set to create the meta-data. keep in mind it is good practice to create a separate fold (meta_train set) that is not used during the training of base models but is used exclusively for obtaining meta-features. This helps to ensure that the meta-features represent the models' predictions on unseen data, allowing the meta-model to learn general patterns and make accurate predictions on new, unseen data. Hope it will be useful for you.
{ "domain": "datascience.stackexchange", "id": 11754, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "stacking", "url": null }
javascript, jquery, html, datetime } div.widget div.head p, div.widget div.head h1 { display: block; color: #FFF; cursor: default; } div.widget div.head p { font-size: 10px; } div.widget div.head h1 { font-size: 48px; } div.widget div.head p.day { margin-top: 25px; } div.widget div.head p.month { margin-bottom: 25px; } div.widget div.head i.trg { height: 12px; position: absolute; margin: auto auto; top: 0; bottom: 0; color: #FFF; cursor: pointer; -webkit-transition: all ease-in-out 0.15s; -moz-transition: all ease-in-out 0.15s; -o-transition: all ease-in-out 0.15s; transition: all ease-in-out 0.15s; } div.widget div.head i.trg.trg-prev { left: 10px; } div.widget div.head i.trg.trg-prev:hover { left: 9px; } div.widget div.head i.trg.trg-next { right: 10px; } div.widget div.head i.trg.trg-next:hover { right: 9px; } div.widget div.body { display: block; position: relative; background-color: #FFF; border-bottom-left-radius: 2px; border-bottom-right-radius: 2px;
{ "domain": "codereview.stackexchange", "id": 15270, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery, html, datetime", "url": null }
discrete-signals, audio, fourier-transform Correlated sound: $<p_{tot}^2> = \frac{1}{2}(p_1^2+p_2^2)$ Uncorrelated sound: $<p_{tot}^2> = \frac{1}{2}(p_1+p_2)^2$ In my case I'm guessing I should consider the time series $p_1$ and $p_2$ to be correlated since they come from the same source but are measured on separate sides, and this would imply that the total SPL spectrum would be given by averaging the resulting spectra given by welch on the individual channels. Thoughts? The sound pressure level (and derived metrics such as the SPL spectrum) are dependent on the position of the listener, since it is a measure of the pressure of the wavefront moving through space. As such it doesn't make all that much sense to try to combine SPL's from different positions into a single value, except perhaps if the listeners were exactly the same distance from a single omnidirectional audio source and there was no possibility of reflections.
{ "domain": "dsp.stackexchange", "id": 2068, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, audio, fourier-transform", "url": null }
nlp Title: How to expand lists? In lists the main noun is often only mentioned at the end. However, for e.g. NER-tasks, I would like to "expand" them: Outward and return flight -> outward flight and return flight Proboscis, Vervet, and golden snub-nosed monkey -> Proboscis monkey, Vervet monkey, and golden snub-nosed monkey education in mathematics or physics -> education in mathematics or education in physics
{ "domain": "datascience.stackexchange", "id": 9984, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nlp", "url": null }
audio, interpolation, 3d, spatial, hrtf This function returns the index in the database which contains the 3 closest HRIR to the desider point (along with the positions of those points and the requested position). self.sourcePosition is computed by means of self.sourcePositions = self.HRIR_SOFA_file.getVariableValue('SourcePosition') which leverages on the pysofaconventions library to read SOFA datasets. Once I get the desired HRIRs, I compute the VBAP gains using its formula: def compute_gains(self, indexes, interpolated_positions, requested_position): interpolated_positions_cartesian = self.polar2cart(1.47, interpolated_positions[:,0], interpolated_positions[:,1]) requested_position_cartesian = self.polar2cart(1.47, requested_position[0], requested_position[1], array = False)
{ "domain": "dsp.stackexchange", "id": 9739, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "audio, interpolation, 3d, spatial, hrtf", "url": null }
# The first prime number after the 8th Fermat number In this post, we discuss a primality testing exercise involving the eighth Fermat number. A Fermat number is of the form $F_n=2^{2^n}+1$ where $n$ is any nonnegative integer. We search for the first prime number that is greater than $F_8$. The basic idea is to search for the first probable prime base 2 among the odd numbers after $F_8$. Once the first probable prime base 2 is identified, we apply the Miller-Rabin primality test to confirm that it is a prime number. At the outset of this exercise, we did not know how many numbers we had to check before reaching the first prime number.
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759638081522, "lm_q1q2_score": 0.8042136639257234, "lm_q2_score": 0.8198933337131076, "openwebmath_perplexity": 196.70244281521545, "openwebmath_score": 0.8269802331924438, "tags": null, "url": "https://mathcrypto.wordpress.com/tag/square-and-multiply-algorithm/" }
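The search strategy described above — scan odd numbers after the target for a probable prime base 2, then confirm with Miller-Rabin — can be sketched as follows. The demonstration runs on $F_4 = 65537$ rather than the 78-digit $F_8 = 2^{256}+1$, but the same routine applies unchanged:

```python
import random

def is_probable_prime_base2(n):
    """Fermat test base 2: a necessary condition for primality of odd n."""
    return pow(2, n - 1, n) == 1

def miller_rabin(n, rounds=20):
    """Standard probabilistic Miller-Rabin primality test."""
    if n < 2:
        return False
    for p in (2, 3):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime_after(m):
    """First prime > m, using the cheap base-2 screen before Miller-Rabin."""
    n = m + 1 if m % 2 == 0 else m + 2
    while not (is_probable_prime_base2(n) and miller_rabin(n)):
        n += 2
    return n

print(next_prime_after(2 ** 16 + 1))  # -> 65539, the first prime after F_4
```

The base-2 screen is cheap (one modular exponentiation) and discards almost all composites, which is why it pays to run it before the full Miller-Rabin confirmation.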
c#, design-patterns, pdf public IPageNumbersBuilder AddPageNumbers() { //calculate pages and insert // page numbers on each page return this; } } public interface IDocumentBuilder { IDocumentBuilder AddDocument(DocumentInformation documentInfo); } public interface ICustomerBuilder { ICustomerBuilder AddCustomerInformation(CustomerInformation customerInfo); } public interface IProjectInfoBuilder { IProjectInfoBuilder AddProjectInformation(ProjectInformation projectInfo); } public interface IFooterBuilder { IFooterBuilder AddFooter(string footerText); } public interface IPageNumbersBuilder { IPageNumbersBuilder AddPageNumbers(); } public class CustomerInformation { public string Name { get; set; } } public class ProjectInformation { public string Name { get; set; } } public class DocumentInformation { public string Name { get; set; } } Updated example: public interface ICustomerTextDocumentBuilder { Doc Build();
{ "domain": "codereview.stackexchange", "id": 42411, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, design-patterns, pdf", "url": null }
c#, .net, xna public void Update(GameTime gameTime) { foreach (GameScreen screen in gameScreens) { screen.Update(gameTime); } }
{ "domain": "codereview.stackexchange", "id": 5483, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, xna", "url": null }
beginner, c, game, battleship sleep(3); system("cls"); goto set2; } } /*---------function bodies---------*/ //Draws the player board void DRAW_BOARD() { for(I1 = 0; I1<BOARD_SIZE; I1++) { for(I2 = 0; I2<BOARD_SIZE; I2++) { printf("%c ",LAYOUT[I1][I2]); } printf("\n"); } } //Replace all marked characters back to an o void CLEAR_BOARD() { for(I1 = 1; I1<BOARD_SIZE; I1++) { for(I2 = 1; I2<BOARD_SIZE; I2++) { LAYOUT[I1][I2] = 'o'; } printf("\n"); } } //Checks if the random value generated is zero; if it is, it is incremented to 1 int GEN_RND_ZERO() { int temp = rand()%4; //Pure Laziness if(temp == 0) { temp++; } return temp; } Naming conventions There doesn't seem to be one definite naming convention for C, but there's still something to be said here.
{ "domain": "codereview.stackexchange", "id": 42205, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, c, game, battleship", "url": null }
asymptotic bounds informally ===================================== Induction is a good technique for proving $T$ is $O(g)$ once you know $g$, but how do you derive $g$ in the first place? A useful technique is to draw the call tree representing the algorithm, and to determine how much work the algorithm performs at each level. Consider an algorithm with running time $T(n) ≤ c + 3T(n/2)$. This algorithm performs some constant-time work and then makes three recursive calls with inputs of size $n/2$. We can draw the call tree for this algorithm: ![recursion tree](tree.png) The time spent at the top level (level 0) is $c$ plus the time spent in the three recursive calls. At level 1, we spend $c$ time for each of the three calls, plus the time spent in the 9 recursive calls. Thus we spend $3c$ time at level 1. Similarly, we spend $9c$ time at level 2, and $3^ic$ time at level $i$. The total number of levels is $\log n$, because we split the size in half at each step. Thus the total time
{ "domain": "cornell.edu", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.986777175136814, "lm_q1q2_score": 0.8113174676446434, "lm_q2_score": 0.8221891305219504, "openwebmath_perplexity": 234.0993763828404, "openwebmath_score": 0.8676836490631104, "tags": null, "url": "http://www.cs.cornell.edu/courses/cs3110/2015fa/l/24b-recurrences/rec.html" }
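The level-by-level argument can be checked mechanically. A small sketch, taking the constant $c = 1$ (an assumption for concreteness): summing the per-level work $3^i c$ over the $\log_2 n + 1$ levels reproduces the recurrence exactly, and the geometric sum gives the closed form $c(3^{\log_2 n + 1} - 1)/2 = \Theta(3^{\log_2 n}) = \Theta(n^{\log_2 3})$.

```python
import math

C = 1  # constant work per call (assumption: c = 1)

def T(n):
    """Exact cost of T(n) = c + 3*T(n/2), T(1) = c, for n a power of two."""
    if n == 1:
        return C
    return C + 3 * T(n // 2)

def level_sum(n):
    """Sum of per-level work 3**i * c over levels i = 0 .. log2(n)."""
    levels = int(math.log2(n))
    return sum(C * 3 ** i for i in range(levels + 1))

n = 1024
assert T(n) == level_sum(n)   # the call-tree argument matches the recurrence
# Geometric sum: C * (3**(log2(n) + 1) - 1) // 2 == 88573 for n = 1024,
# which grows like n ** log2(3) ~ n ** 1.585.
```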
neural-network, deep-learning, cnn Title: What is the difference between TDNN and CNN? I read about time delay neural network (TDNN) and I am not sure I understood it. From what I read it seems that tdnn works just like one dimensional convolutional neural network (CNN). What are the differences between the architectures, if they exist? I found the answer - there is no difference. According to the paper "Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks": "Time Delay Neural Networks (TDNNs), also known as one dimensional Convolutional Neural Networks (1-d CNNs)..."
{ "domain": "datascience.stackexchange", "id": 8950, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neural-network, deep-learning, cnn", "url": null }
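The equivalence is easy to see in code. A minimal sketch (the weights and input are illustrative assumptions): a TDNN unit applies the same weights to a sliding window of delayed inputs, which is exactly a 1-d convolution (written here in the cross-correlation orientation that deep-learning libraries use):

```python
def tdnn_layer(x, w, b=0.0):
    """A time-delay unit: at each step t the output depends on a fixed
    window of inputs x[t], x[t+1], ..., x[t+k-1], with the same weights w
    shared across time -- i.e. a 1-d convolution in 'valid' mode."""
    k = len(w)
    return [sum(w[j] * x[t + j] for j in range(k)) + b
            for t in range(len(x) - k + 1)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
w = [0.5, 0.25]                 # illustrative weights (an assumption)
out = tdnn_layer(x, w)          # [1.0, 1.75, 2.5, 3.25]
```

The "time delay" in the TDNN name refers to the window of delayed samples; the weight sharing across time steps is the same property that makes a CNN translation-equivariant.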
special-relativity Title: Do relativistic events need to match if accounted for time dilation and length contraction? To explain the question let me give you a short example. In the scenario there are two reference frames A and B. A consists of a x'=1 Ls (light-second) long pole in the positive x direction. At t=0 a flash is generated at its origin. 1s later the flash reaches the end of the pole. B sees A moving with v=0.866c in the positive x direction. Due to length contraction, A's pole appears to only be 0.5 Ls long. In B, 3.731s after A generated the flash the flash reaches the end of the pole, because: $$x-ct=0 \quad \text{with} \quad x=v \cdot t+x'\sqrt{1-v^2/c^2}$$ $$(v \cdot t+x'\sqrt{1-v^2/c^2})-ct=0$$ $$(0.866c \cdot 3.731s+0.5)-3.731s \cdot c=0$$ So the flash reaches the pole's end after 1s in A. But from B's point of view it takes 3.731s. Wouldn't this require a time dilation factor of 3.731? But the actual factor is $$\frac{1}{\sqrt{1-v^2/c^2}}=2$$
{ "domain": "physics.stackexchange", "id": 25003, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity", "url": null }
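The numbers in the question can be reproduced with a short computation in units where $c = 1$. Using the exact speed $v = \sqrt{3}/2 \approx 0.866c$ gives $\gamma = 2$ and $t_B \approx 3.732\,\mathrm{s}$ (the question's 3.731 comes from rounding $v$). The last lines are an illustrative aside: 3.732 s is the Lorentz transform of the whole event (time and position together), not $\gamma$ times 1 s, which is where the apparent mismatch comes from.

```python
import math

# Units: c = 1 (times in seconds, distances in light-seconds).
v = math.sqrt(3) / 2           # 0.866...c, chosen so that gamma = 2 exactly
x_prime = 1.0                  # rest length of the pole in frame A, in Ls

gamma = 1 / math.sqrt(1 - v ** 2)   # time-dilation factor: 2
contracted = x_prime / gamma        # pole length seen from B: 0.5 Ls

# In B the flash (speed 1) chases the receding far end of the pole:
# v*t + x'/gamma = t  =>  t = (x'/gamma) / (1 - v)
t_B = contracted / (1 - v)          # 3.7320... s

# Aside (illustration): t_B equals the Lorentz transform of the event
# (t_A, x_A) = (1 s, 1 Ls), not gamma * t_A -- which is why it isn't 2 s.
t_A, x_A = 1.0, 1.0
t_lorentz = gamma * (t_A + v * x_A)
assert abs(t_B - t_lorentz) < 1e-9
```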
quantum-field-theory, hilbert-space, mathematical-physics, symmetry-breaking, goldstone-mode Finally $$\begin{align}0~\neq~ a~ \stackrel{(4)}{\longleftarrow} ~ {\rm Im} a_V(t) ~ \stackrel{(12)}{\longrightarrow}&~ {\rm Im}\sum_E e^{-iE t} f(E)\cr ~\stackrel{(18)}{=}~&{\rm Im}f(E\!=\!0)\cr &\text{for}\quad V~\to ~\mathbb{R}^3.\end{align} \tag{19} $$ In order to have SSB, we must have $f(E\!=\!0)\neq 0$, i.e. there exists a massless mode $|n \rangle\neq|\Omega \rangle$ with $(E_n,{\bf P}_n)=(0,{\bf 0})$ that couples $c_n\neq 0$ between the current $\hat{J}^0$ and the observable $\hat{A}$. $\Box$ See also this related Phys.SE post. References: C. Itzykson & J.B. Zuber, QFT, 1985, Section 11-2-2, p. 520. S. Weinberg, Quantum Theory of Fields, Vol. 2, 1995; Section 19.2. -- $^1$ Cartoon version of the proof of Goldstone's theorem (ignoring the Fabri–Picasso theorem): $\quad |{\bf 0}\rangle ~:=~\hat{Q}|\Omega\rangle~\neq ~0.$ $\quad\hat{H}|\Omega\rangle~=~ 0.$ $\quad [\hat{H},\hat{Q}]~=~ 0.$
{ "domain": "physics.stackexchange", "id": 50308, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, hilbert-space, mathematical-physics, symmetry-breaking, goldstone-mode", "url": null }
Exercise \ref{exercise-compute-TS} explains why it is necessary to consider the tangent space of $X$ over $S$ to get a good notion. \end{remark} \begin{exercise} \label{exercise-compute-TS-field} Consider the morphism of schemes $$f : X = \Spec(\mathbf{F}_p(t)) \longrightarrow \Spec(\mathbf{F}_p(t^p)) = S$$ Compute the tangent space of $X/S$ at the unique point of $X$. Isn't that weird? What do you think happens if you take the morphism of schemes corresponding to $\mathbf{F}_p[t^p] \to \mathbf{F}_p[t]$? \end{exercise} \begin{exercise} \label{exercise-compute-TS-cusp} Let $k$ be a field. Compute the tangent space of $X/k$ at the point $x = (0, 0)$ where $X = \Spec(k[x, y]/(x^2 - y^3))$. \end{exercise} \begin{exercise} \label{exercise-map-tangent-spaces} Let $f : X \to Y$ be a morphism of schemes over $S$. Let $x \in X$ be a point. Set $y = f(x)$. Assume that the natural map $\kappa(y) \to \kappa(x)$ is bijective. Show, using the definition, that $f$ induces a natural linear map
{ "domain": "github.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9854964220125032, "lm_q1q2_score": 0.8080019446376565, "lm_q2_score": 0.8198933315126792, "openwebmath_perplexity": 120.73245969658541, "openwebmath_score": 0.9982831478118896, "tags": null, "url": "https://github.com/stacks/stacks-project/blob/master/exercises.tex" }
# Does covariance equal to zero implies independence for binary random variables? If $X$ and $Y$ are two random variables that can only take two possible states, how can I show that $Cov(X,Y) = 0$ implies independence? This kind of goes against what I learned back in the day that $Cov(X,Y) = 0$ does not imply independence... The hint says to start with $1$ and $0$ as the possible states and generalize from there. And I can do that and show $E(XY) = E(X)E(Y)$, but this doesn't imply independence??? Kind of confused how to do this mathematically I guess.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9697854103128328, "lm_q1q2_score": 0.8408386417374486, "lm_q2_score": 0.867035771827307, "openwebmath_perplexity": 485.78910135418664, "openwebmath_score": 0.9997827410697937, "tags": null, "url": "https://stats.stackexchange.com/questions/258704/does-covariance-equal-to-zero-implies-independence-for-binary-random-variables" }
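The claim can be made concrete by enumeration. For $\{0,1\}$-valued variables the joint law has four cells determined by the two marginals plus $E[XY] = P(X=1, Y=1)$, so $\operatorname{Cov}(X,Y)=0$ pins every cell down to the product form. A quick sketch (the specific probabilities are illustrative assumptions):

```python
def cov_and_indep(p):
    """p maps (x, y) in {0,1}^2 to probabilities summing to 1.
    Returns (covariance, whether the joint law factorizes)."""
    px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}
    py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}
    cov = p[(1, 1)] - px[1] * py[1]          # E[XY] - E[X]E[Y]
    indep = all(abs(p[(x, y)] - px[x] * py[y]) < 1e-12
                for x in (0, 1) for y in (0, 1))
    return cov, indep

# A dependent example: X = Y, each fair. cov = 0.5 - 0.25 = 0.25 != 0.
dep = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

# With the marginals fixed, setting p(1,1) = P(X=1)P(Y=1) (i.e. cov = 0)
# forces the other three cells to factor as well -- that is the theorem.
ind = {(0, 0): 0.4 * 0.7, (0, 1): 0.4 * 0.3,
       (1, 0): 0.6 * 0.7, (1, 1): 0.6 * 0.3}
```

This is special to binary variables: with only two states, one number ($E[XY]$) is enough extra information to determine the whole joint distribution, which is exactly why zero covariance implies independence here but not in general.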
c++, recursion, lambda, boost, c++20 On my machine, the latter is over 5× faster, and the test executable is over 30× smaller. And that speed difference gets really important when you have more tests, and more complicated test structures. (Actually, when I code, I quite often have a process where a single command triggers a make of the entire project with all unit tests THREE TIMES; once with GCC, once with Clang, and once with Clang using libc++ rather than libstdc++. This all happens lightning fast, generally (and in parallel), so my process is to edit some code, make and run the tests, grab a sip of my drink and a nibble of my snack, and by the time I look at the screen again, the tests are already complete… and, hopefully, have all passed… and I’m ready for the next iteration. This turnaround needs to be fast to work, and that means the tests have to build and run in mere seconds.) You can get the best of both worlds by doing this: // recursive_transform.test.cpp
{ "domain": "codereview.stackexchange", "id": 40259, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, recursion, lambda, boost, c++20", "url": null }
An identity matrix is a square matrix with 1's on the diagonal and zeroes everywhere else. It plays a vital role in linear algebra: if the product of two square matrices is the identity matrix, the two matrices are said to be inverses of each other. In Kronecker delta notation, its entries are $I_{ij} = \delta_{ij}$. An identity matrix is always square, and for each size $n$ there is exactly one $n \times n$ identity matrix, which is why we can speak of "the" (rather than "an") identity matrix of a given size. Identity matrices also come up a lot in programming problems.
{ "domain": "lockxx.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9399133531922388, "lm_q1q2_score": 0.800761807793395, "lm_q2_score": 0.8519527944504227, "openwebmath_perplexity": 979.854458990299, "openwebmath_score": 0.8951935172080994, "tags": null, "url": "https://lockxx.com/wnseq/define-identity-matrix-ed16d3" }
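A small sketch makes the definition concrete (plain Python, so no libraries are assumed; the example matrix is illustrative):

```python
def identity(n):
    """The n x n identity matrix: entries delta_ij = 1 if i == j else 0."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Naive product of a len(A) x len(B) and a len(B) x len(B[0]) matrix."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = identity(3)
A = [[2, 1, 0], [0, 3, 4], [5, 0, 6]]
# Multiplying by the identity leaves any square matrix unchanged: AI = IA = A.
assert matmul(A, I) == A and matmul(I, A) == A
```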
classical-mechanics, statistical-mechanics, hamiltonian-formalism, phase-space Inference 1: From axioms 1. and 2., deduce that any $X\in T_p\,\mathcal{P};\;\forall p\in \mathcal{P}$ expressed in canonical co-ordinates (i.e. ones for which the Hamilton equations hold) that is Lie-dragged by the Hamiltonian flow evolves according to $\dot{X} = A(t)\,X$ where $A(t)\in\mathfrak{sp}(N,\,\mathbb{R})$, thus the symplectic 2-form $\omega(X,\,Y)\stackrel{def}{=} X^T\,\Omega\,Y$ where, for the special case of canonical co-ordinates, $\Omega =\left(\begin{array}{cc}0&-1_N\\1_N&0\end{array}\right)\;\forall p\in\mathcal{P}$ is conserved under the mapping $\mathcal{P}\mapsto \Phi(H,\,t)\,\mathcal{P},\,\forall t\in\mathbb{R}$ induced by the Hamiltonian flow. (Indeed, at any given point $p\in\mathcal{P}$ find $N$ different $C^2$ Hamiltonians such that the tangents to their flows span $T_p\,\mathcal{P}$ to deduce that the Lie derivative of $\omega$ in any direction is nought, thus $\mathrm{d}\omega=0$ from Cartan's formula relating Lie and Exterior derivatives, but this
{ "domain": "physics.stackexchange", "id": 40537, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, statistical-mechanics, hamiltonian-formalism, phase-space", "url": null }
rviz, ros-melodic, tf2, camera-info, sensor-msgs#image Title: how to understand image topic frame orientation? Hello everyone, as the title says, I want to ask how to figure out how the frame orientation of an image is set, any ideas ? Originally posted by Ifx13 on ROS Answers with karma: 54 on 2021-07-12 Post score: 0 Documentation of the orientation of an image message can be found in the message itself with Z forward: http://docs.ros.org/en/api/sensor_msgs/html/msg/Image.html It usually uses the _optical postfix for the frame_id to be clear that it's different than the standard coordinate frames with X forward. https://ros.org/reps/rep-0103.html Originally posted by tfoote with karma: 58457 on 2021-07-12 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 36690, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rviz, ros-melodic, tf2, camera-info, sensor-msgs#image", "url": null }
A point $x$ of $A$ can be of one of two mutually exclusive types: a limit point of $A$ or an isolated point of $A$. If the latter, it means that there exists some open $O$ in $X$ such that $\{x\} = O \cap A$. The negation of this is exactly that every open set $O$ that contains $x$ always intersects points of $A$ unequal to $x$ as well, and this means exactly that $x$ is a limit point of $A$. E.g. $A = (0,1) \cup \{2,3\}$ (usual topology of the reals) has two isolated points $2$ and $3$ (which are not interior points of $A$), and the rest are limit points of $A$ as well as interior points. There are also limit points $0,1$ that are not in $A$ (showing $A$ is not closed). So if $A$ has no isolated point, all of the points of $A$ (in particular all its interior points) are limit points of $A$. So often there will be quite an overlap between interior points and limit points.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.989986431444902, "lm_q1q2_score": 0.8751868983297055, "lm_q2_score": 0.8840392863287585, "openwebmath_perplexity": 79.14143036539244, "openwebmath_score": 0.828119695186615, "tags": null, "url": "https://math.stackexchange.com/questions/1606306/are-interior-points-ever-limit-points-as-well" }
performance, datetime, graphics, memory-optimization, processing Versus the resources it's consuming: The source is in two parts: the main .pde and another containing the 'Gua' (hexagram) class. yijingClock.pde PGraphics mins; PGraphics hrs; float fadeAmount; static final float fadeMax = 1440; //1440 means 1 step per frame takes 1 minute at 24fps void setup() { size(500, 500); colorMode(RGB, 255, 255, 255, fadeMax); background(255); imageMode(CENTER); // All images in the sketch are drawn CENTER-wise frameRate(24); // Affects the smoothness and speed of fade(); mins = createGraphics(width, height); hrs = createGraphics(width, height); noFill(); stroke(100); //polygon(5, width/2, height/2, width / 4 * 3, height / 4 * 3, -PI / 2); // Draw a static pentagon as the background fill(0); fadeAmount = 0; } // end setup()
{ "domain": "codereview.stackexchange", "id": 16509, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, datetime, graphics, memory-optimization, processing", "url": null }
2. By using 5 rectangles, we are asked to compute the area under a curve using the function {eq}f(x) = x {/eq} with boundaries at {eq}[-2,3] {/eq}. If we want to approximate the area under a curve using n=4, that means we will be using 4 rectangles. Show how to calculate the estimated area by finding the sum of the areas of the rectangles. When we use rectangles to compute the area under a curve, the width of each rectangle is $\Delta x = (b-a)/n$. Rectangles drawn below a decreasing curve underestimate its area. Enter your function below. By taking more rectangles, you get a better approximation. To do this we need to find
{ "domain": "centropartenopeo.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759632491111, "lm_q1q2_score": 0.8195057111990796, "lm_q2_score": 0.8354835289107307, "openwebmath_perplexity": 316.55087815325567, "openwebmath_score": 0.8472341895103455, "tags": null, "url": "http://bbdr.centropartenopeo.it/area-under-curve-calculator-with-rectangles.html" }
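The exercise's setup ($f(x) = x$ on $[-2, 3]$ with 5 rectangles) can be worked directly. A sketch, with the exact signed area $\int_{-2}^{3} x\,dx = 2.5$ for comparison:

```python
def riemann_sum(f, a, b, n, rule="mid"):
    """Approximate the signed area under f on [a, b] with n rectangles."""
    width = (b - a) / n
    if rule == "left":
        xs = [a + i * width for i in range(n)]
    elif rule == "right":
        xs = [a + (i + 1) * width for i in range(n)]
    else:  # midpoint
        xs = [a + (i + 0.5) * width for i in range(n)]
    return sum(f(x) * width for x in xs)

f = lambda x: x
# 5 rectangles on [-2, 3], as in the exercise; each has width 1.
left = riemann_sum(f, -2, 3, 5, "left")    # 0.0
right = riemann_sum(f, -2, 3, 5, "right")  # 5.0
mid = riemann_sum(f, -2, 3, 5)             # 2.5 (exact for a linear f)
```

Taking more rectangles squeezes the left and right sums toward the exact value; the midpoint rule happens to be exact already because $f$ is linear.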
java, beginner, object-oriented, simulation, physics do{ solarSim.updateVeolcitiesAndPositions(); }while (solarSim.hasMoreSteps()); solarSim.printResult(); solarSim.closeFile(); } You could take the first part further, and call the methods not dependent on user input (file opening, initialisation of variables, fields, velocities etc.) from the constructor of SolarSim. (Beware, I'm not suggesting to put all the initialisation code into the constructor, because in this way the constructor would become very long. Just call those methods from within the constructor.) Ideas for further improvements The output file could be an argument of the program. I.e., you would invoke the program like this: java SolarSim /path/to/my/customoutput.txt The path to the file can be read from args[0] in this case (i.e., from the args parameter of main). Also the number of steps (which is currently constant 1000), could be a parameter of the simulation.
{ "domain": "codereview.stackexchange", "id": 17166, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner, object-oriented, simulation, physics", "url": null }
u. ${\left({f\,g} \right)^\prime } = f'\,g + f\,g'$ Now, integrate both sides of this. Rearrange to isolate the integral of $v\,du$. Tanzalin Method is commonly used in Indonesia. You have to use a little trick and integration by parts to solve this integral. Integration by parts: $\int x\cdot\cos(x)\,dx$. $(uv)' = u'v + uv'$. Taking the antiderivative of both sides (and replacing $u'\,dx$ and $v'\,dx$ with $du$ and $dv$) yields $uv = \int v\,du + \int u\,dv$. You can use integration by parts while dealing with the multiplication of continuously differentiable functions. The trick to integrating by parts is strategically picking what function is $u$ and what is $dv$. This short-cut is also known as the Tabular Method, the Hindu Method, and th The trick I always use is: let $dv$ be the function that has the cleanest antiderivative such that the order
{ "domain": "goglobalm.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9763105287006255, "lm_q1q2_score": 0.8177771872733866, "lm_q2_score": 0.837619961306541, "openwebmath_perplexity": 449.4239282598786, "openwebmath_score": 0.9342829585075378, "tags": null, "url": "https://admin.goglobalm.com/duxqm/integrating-by-parts-trick.html" }
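The product-rule derivation sketched above, written out cleanly and applied to the $\int x\cos(x)\,dx$ example mentioned in the text:

```latex
(uv)' = u'v + uv'
\;\Longrightarrow\;
uv = \int v\,du + \int u\,dv
\;\Longrightarrow\;
\int u\,dv = uv - \int v\,du.

\text{Choosing } u = x,\; dv = \cos x\,dx
\quad(\text{so } du = dx,\; v = \sin x):

\int x\cos x\,dx = x\sin x - \int \sin x\,dx = x\sin x + \cos x + C.
```

Here $u = x$ is the factor whose derivative simplifies, and $dv = \cos x\,dx$ has the cleanest antiderivative, which is exactly the picking strategy described above.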
naming, stellar-classification Title: Any stellar entity named A1? Is there a spatial entity (moon, planet, star, system, galaxy, cluster,..) that is identified with "A1" (or A{first_term})? The Lowell observatory gives temporary designations in the form "A-number" to objects that it studies. A1 would be a temporary designation that the Lowell observatory could give an object. For example NGC 3603-A1 is likely the most massive binary star ever “weighed” through its orbital mass. [...] We have also produced a light curve for A1 using HST photometry, which provides a more accurate measurement of the system's inclination. Ultimately we hope to refine the masses of A1 with new observations. Source There is also an object designated (1): it is (1) Ceres, the dwarf planet. And 1P is Halley's comet.
{ "domain": "astronomy.stackexchange", "id": 5715, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "naming, stellar-classification", "url": null }
visualization, terminology If you want to compare volume to crop type then you should use a plain column chart, where it is easy to distinguish relative heights. This is analogous to why bar/column charts are preferable to pie charts - humans are better at comparing heights/lengths vs area/volume. If, on the other hand, the point of the graph is to compare crop type irrigation by area, you would want to display a column graph with regions next to each other, but grouped by crop type. As far as a superbly produced duck, my guess is that the Applied Irrigation Water is a duck, but a superb duck, at least when compared to the monstrous duck on the preceding page of the Visual Display of Quantitative Information 2nd Ed.
{ "domain": "datascience.stackexchange", "id": 2723, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "visualization, terminology", "url": null }
discrete-signals, convolution, linear-systems, impulse-response Since the input is 3 samples long and the output is 6 samples long, then my impulse response $h[n]$ has to be 4 samples long. So I discard the last two to finally yield: $$h[n] = [0,1,0,1]$$ Convolving the input signals with the newly found $h[n]$ yields the given output signals. I am however uncomfortable with having to invoke the required length of $h[n]$ to yield its final form by deleting the last two samples. Did I go wrong somewhere? Your instinct about the truncation being fishy is correct. Your solution is indeed wrong: you just got lucky to get the correct result. As @Fat32 has pointed out, $x_1[n]-x_2[n] = \{1,0,0,-1\}$, which is not a delta impulse and is longer than the original sequence. The result is $h[n]-h[n-3]$, and not $h[n]$. You actually only need one set of input and output to solve this. Given the impulse response length, we know that $$y[n]=\sum_{k=0}^{3}h[k] \cdot x[n-k]$$
{ "domain": "dsp.stackexchange", "id": 5982, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, convolution, linear-systems, impulse-response", "url": null }
algorithms, graphs, named-entity-recognition Since each band holds similar records, the similar record sets will be: S1 = {R2, R4}, S2 = {R2, R4, R5}, S3 = {R1, R3}. And the merged similarity structures will be: S1' = {R2, R4, R5}, S2' = {R1, R3}. I want to scan the matrix and map all similarity structures so I have buckets of similar records. The dimensionality of the hash table is large, hence a naive approach is not likely to work. What types of algorithms should I consider to achieve this efficiently? Edit 1: Question updated to clarify the objective. The algorithm you are asking about is very straightforward. What you are doing is looking for connected components in a certain graph whose edges are determined by matching hash values. You can implement this with a modification of the disjoint-set data structure. Your particular variation would be that, apart from keeping track of the vertices in each component, you have to keep track of m (the number of rows) sets of hash values that have been found for each component.
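A minimal sketch of the disjoint-set idea, using the record IDs from the example above (the per-row hash-value bookkeeping is omitted here): records that share a bucket are unioned, and the resulting components reproduce S1' and S2'.

```python
from collections import defaultdict

# Minimal disjoint-set (union-find): records sharing any bucket
# end up in the same connected component.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# The similar-record sets from the example bands
buckets = [{"R2", "R4"}, {"R2", "R4", "R5"}, {"R1", "R3"}]
for bucket in buckets:
    first, *rest = sorted(bucket)
    for r in rest:
        union(first, r)

# Group records by their root to get the merged similarity structures
components = defaultdict(set)
for r in parent:
    components[find(r)].add(r)
print(sorted(map(sorted, components.values())))
# [['R1', 'R3'], ['R2', 'R4', 'R5']]
```

In the full variation you would additionally attach to each root the m sets of hash values seen so far, merging those sets whenever two components are unioned.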
{ "domain": "datascience.stackexchange", "id": 569, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, graphs, named-entity-recognition", "url": null }
formal-languages, context-free, formal-grammars $$ \{a^ib^jc^k\mid i\ne j\}=\{a^ib^jc^k\mid i>j\}\cup\{a^ib^jc^k\mid i<j\} $$ Idiom 4. Separate the problem into concatenated pieces if possible. For this example, we note that the $a^ib^j$ part will have nothing to do with the $c^k$ part, so we might start our grammar with the production $S\rightarrow TU$, where $T$ will generate the $a^ib^j$ part and $U$ will generate the $c^k$ part. The latter is easy, and now we fill in the omission in idiom 1, writing $U\rightarrow cU\mid \epsilon$ (since if the requirement had been, say, $k>0$ we could just use $U\rightarrow cU\mid c$). Idiom 5. Now what to do with the $a^ib^j$ part, where $i>j$? Simple: write it as $a^na^jb^j$ for some $n>0$. To get an arbitrary number of a's in front, we can use $T\rightarrow aT$. Eventually we'll stop generating a's and will switch to the matching a's and b's, using yet another variable $V$, so we'll have $T\rightarrow aT\mid aV$.
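To sanity-check the construction, here is a small sketch that enumerates every string the grammar derives up to length 6 and checks it has the form $a^ib^jc^k$ with $i>j$. I am assuming the matching part is completed with $V\rightarrow aVb\mid\epsilon$, which the text above has not yet written down.

```python
import re

# Grammar assembled from the idioms above; V -> aVb | eps is my
# assumed completion of the "matching a's and b's" part.
grammar = {
    "S": [["T", "U"]],
    "T": [["a", "T"], ["a", "V"]],
    "V": [["a", "V", "b"], []],
    "U": [["c", "U"], []],
}

def generate(max_len):
    """Expand sentential forms (leftmost); collect terminal strings up to max_len."""
    seen, frontier, results = set(), [("S",)], set()
    while frontier:
        form = frontier.pop()
        nts = [i for i, s in enumerate(form) if s in grammar]
        if not nts:
            results.add("".join(form))
            continue
        i = nts[0]
        for prod in grammar[form[i]]:
            new = form[:i] + tuple(prod) + form[i + 1:]
            # Terminal count only ever grows, so pruning on it is safe.
            if sum(1 for s in new if s not in grammar) <= max_len and new not in seen:
                seen.add(new)
                frontier.append(new)
    return results

strings = generate(6)
pat = re.compile(r"^(a*)(b*)(c*)$")
for s in strings:
    m = pat.match(s)
    assert m and len(m.group(1)) > len(m.group(2))
```

Every generated string passes, and strings like "ab" (where $i=j$) are never produced, which is exactly the $i>j$ half of the union.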
{ "domain": "cs.stackexchange", "id": 3617, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-languages, context-free, formal-grammars", "url": null }
meteorology, lightning, electromagnetism, ionizing-radiation AGU Fall Meeting, San Francisco, 3-7 December 2012 According to the American Geophysical Union and reported at phys.org: Dark lightning is a burst of gamma-rays produced during thunderstorms by extremely fast moving electrons colliding with air molecules. Researchers refer to such a burst as a terrestrial gamma-ray flash (TGF). However, things are not quite so simple. There are actually several gamma ray phenomena related to thunderstorms. According to Gamma-ray glow preceding downward terrestrial gamma-ray flash: Two types of high-energy events have been detected from thunderstorms. One is “terrestrial gamma-ray flashes” (TGFs), sub-millisecond emissions coinciding with lightning discharges. The other is minute-lasting “gamma-ray glows”. Although both phenomena are thought to originate from relativistic runaway electron avalanches in strong electric fields, the connection between them is not well understood.
{ "domain": "earthscience.stackexchange", "id": 2429, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "meteorology, lightning, electromagnetism, ionizing-radiation", "url": null }
algorithms, computational-geometry, efficiency, numerical-algorithms At each iteration of the algorithm, we have some value of $x$, and we calculate what is the next largest value of $x$ where something interesting happens. Here, "something interesting" is either (1) two pieces intersect each other at a point, or (2) we reach the end of one of the current pieces. Then we advance $x$ to this next value. At each iteration, we keep track of which pieces are currently active (i.e., $x$ is within the domain of that piece) and the order of the pieces, ordered by their $y$-value for that particular value of $x$. Store them in sorted order, sorted by $y$-value.
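As a minimal sketch of the event computation described above, assume each piece is a line $y = mx + b$ active on an interval $[x_0, x_1]$ (the source does not fix a representation, so this one is my assumption):

```python
from itertools import combinations

def next_event(x, pieces, eps=1e-12):
    """Smallest x' > x where two pieces cross or a piece starts/ends.

    Each piece is a tuple (m, b, x0, x1) for the line y = m*x + b
    active on [x0, x1]. Returns None when no event remains.
    """
    candidates = []
    # (2) piece endpoints strictly to the right of x
    for m, b, x0, x1 in pieces:
        candidates += [t for t in (x0, x1) if t > x + eps]
    # (1) pairwise intersections, restricted to both pieces' domains
    for (m1, b1, a1, c1), (m2, b2, a2, c2) in combinations(pieces, 2):
        if abs(m1 - m2) > eps:
            xi = (b2 - b1) / (m1 - m2)
            if xi > x + eps and max(a1, a2) <= xi <= min(c1, c2):
                candidates.append(xi)
    return min(candidates, default=None)

# Two crossing lines on [0, 10]: y = x and y = 4 - x meet at x = 2
pieces = [(1, 0, 0, 10), (-1, 4, 0, 10)]
print(next_event(0, pieces))   # first event: the intersection at x = 2
```

Recomputing all pairwise intersections at every step is quadratic per event; the sorted-by-$y$ ordering mentioned above is what lets a real sweep restrict the test to adjacent pieces only.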
{ "domain": "cs.stackexchange", "id": 5802, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, computational-geometry, efficiency, numerical-algorithms", "url": null }
civil-engineering, tunnels, positions-measurement Title: How are tunnels dug from two endpoints joined? Let's assume I want to build a subway system. I dig two subway stations and now I start digging a tunnel from both of these endpoints simultaneously. How can I ensure that the two tunnels dug from the different endpoints meet in the middle? I don't believe I can use GPS underground. What methods do professionals use to ensure that the two tunnels meet? Are there some positioning technologies that work underground? The alignment of a tunnel constructed from two ends and meeting somewhere in the middle is achieved via old-fashioned surveying methods. Surveying stations are accurately established on the surface of the Earth at each end of the tunnel. These days GPS surveying techniques would most likely be used for that. Once the surface surveying stations have been established, surveying stations are established underground, as the tunnel progresses from each end.
{ "domain": "engineering.stackexchange", "id": 4100, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "civil-engineering, tunnels, positions-measurement", "url": null }
quantum-field-theory, lagrangian-formalism, supersymmetry, yang-mills, duality C. Auxiliary Question: Explain how one can decompose a SUSY gauge field into its bosonic and fermionic parts. [Background: I am familiar with non-abelian gauge theory and have had a class on SUSY some years back. So I pick up on the jargon but am missing a fuller picture.] A. The action of $N=4$ SYM (Super Yang-Mills theory) in $d=4$ is the simple dimensional reduction of the 9+1-dimensional SYM, the maximal dimensional SYM that exists. The latter is $$S = \int d^{10} x\mbox{ Tr } \left( -\frac{1}{4} F_{\mu\nu}F^{\mu \nu} + \overline{\psi}D_\mu \gamma^\mu \psi \right)$$ where $D$ is the covariant derivative and $\psi$ is a real chiral spinor in 9+1 dimensions which has 16 real components, leading to 8 fermionic on-shell degrees of freedom. The dimensional reduction reduces $d^{10}x$ to $d^4 x$ but it also renames 6 "compactified" spatial components $A_\mu$ as six scalars $\Phi_I$ in $d=4$. The derivatives in the corresponding 6 directions are set to zero.
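Schematically (my own summary; the answer has not spelled this out yet), the reduction splits the ten-dimensional gauge field and field strength as

```latex
A_M \;\longrightarrow\; (A_\mu,\ \Phi_I),
\qquad \mu = 0,\dots,3,\quad I = 1,\dots,6,
\\[4pt]
F_{\mu I} \;\to\; D_\mu \Phi_I,
\qquad
F_{IJ} \;\propto\; [\Phi_I, \Phi_J],
```

so (up to convention-dependent factors) the four-dimensional action acquires scalar kinetic terms $\mathrm{Tr}\,(D_\mu\Phi_I D^\mu\Phi^I)$ and a commutator potential $\sim \mathrm{Tr}\,[\Phi_I,\Phi_J][\Phi^I,\Phi^J]$, whose flat directions parametrize the moduli space.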
{ "domain": "physics.stackexchange", "id": 423, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, lagrangian-formalism, supersymmetry, yang-mills, duality", "url": null }
c++, reinventing-the-wheel

template<typename T>
struct ControlBlock : public ControlBlock_base {
    template<typename Deleter = details::DefaultDeleter<T>>
    ControlBlock(size_t refCounter, Deleter d = {})
        : ControlBlock_base(refCounter), deleter(d) {}

    void deletePtr(void* t) override {
        deleter(static_cast<T*>(t));
    }

    std::function<void(T*)> deleter;
};
}

template<typename T>
class MySharedPtr {
    using ControlBlock_base = details::ControlBlock_base;
    // Note: the alias parameter must not shadow the class's T
    template<typename U>
    using ControlBlock = details::ControlBlock<U>;

    template<typename k>
    friend class MySharedPtr;

public:
    MySharedPtr() = default;

    template<typename Deleter = details::DefaultDeleter<T>>
    MySharedPtr(T* p, Deleter d = {})
        : payload(p) {
        if (p != nullptr) {
            cb = new ControlBlock<T>(1, d);
        }
    }

    MySharedPtr(const MySharedPtr& other)
        : payload(other.payload), cb(other.cb) {
        if (cb) {
            cb->refCounter++;
        }
    }
{ "domain": "codereview.stackexchange", "id": 45120, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, reinventing-the-wheel", "url": null }
immunology, hiv Q2: I expect because merely injecting T cell cytokines would not be sufficient in order to replace the essential function of T cells. Aside from that, WYSIWYG's point about the general response is probably also true: the side effects would be horrendous.
{ "domain": "biology.stackexchange", "id": 2315, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "immunology, hiv", "url": null }