gazebo-7 And this is the resulting view in Gazebo when I spawn this model. It is only one color (the color in the bottom left corner of the PNG file). Curiously, if I change the scale in the material file, the color might change to orange (the color in the middle of the PNG file), but the color stays homogeneous. What is happening, and how do I make Gazebo render the texture correctly? Originally posted by kumpakri on Gazebo Answers with karma: 755 on 2019-08-21 Post score: 0 The problem seems to be in the UV mapping of your OBJ file. We usually use MTL files to set the material of OBJ meshes, see an example here. I would try to export an MTL file together with your OBJ from the modelling software you used. Originally posted by chapulina with karma: 7504 on 2019-08-21 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by kumpakri on 2019-08-22: Thank you! I had to make the UV map for my model inside Blender by hand following this tutorial, and after that the textures applied to the model are showing correctly.
{ "domain": "robotics.stackexchange", "id": 4425, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo-7", "url": null }
ros, build-from-source, debian, ubuntu Title: debian source packages for python-rospkg, python-rosdep, etc For most ROS .deb packages (e.g. for ros-hydro-ros), one can get the source by adding a "deb-src" line into the /etc/apt/sources.list with the same repository as the "deb" line. Then, one can download the source package files with something like: apt-get source ros-hydro-ros This will download the .orig.tar.gz, .dsc, and .debian.tar.gz files. So far so good. (And a great improvement from earlier versions of ROS.) However, there are some packages for which the source cannot be downloaded this way. Where should I look for the source for packages such as python-rospkg, python-rosdep, python-vcstools, python-catkin-pkg, ps-engine and so on? Also, I think it would be great if these were included in the same source repository as the rest of the packages. Originally posted by AndrewStraw on ROS Answers with karma: 171 on 2014-07-26 Post score: 2 The toolchain for building them is available at: https://github.com/ros-infrastructure/ros_release_python If you take a look at the upload logic, I expect that it could be extended to push the sourcedebs as well as the binary debs to the repo. Originally posted by tfoote with karma: 58457 on 2014-07-26 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 18780, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, build-from-source, debian, ubuntu", "url": null }
Consider the set $L$ consisting of those $x\in X$ that have the following form: • Every 1 has a 2 to its right, • every 2 has a 0 to its right, • every 0 has a 0 or a 1 to its right. On $L$, the map $T$ acts as a left-shift. Similarly, let $R$ contain those $x\in X$ where • Every 1 has a 2 to its left, • every 2 has a 0 to its left, • every 0 has a 0 or a 1 to its left. On $R$, the map $T$ acts as the right-shift. The computation of the topological entropy gave $2\ln\rho$, where $\rho$ is the positive root of $x^3-x^2-1$.
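The growth rate $\rho$ can be checked numerically from the transition rules themselves. Below is a small sketch (my own, not from the original post): build the 0/1 transition matrix for the rules "after 1 comes 2, after 2 comes 0, after 0 comes 0 or 1", find the positive root of its characteristic polynomial $x^3 - x^2 - 1$ by bisection, and confirm that the number of admissible words grows like $\rho^n$.

```python
# Sketch: verify the growth rate of the subshift with rules
# 1 -> 2, 2 -> 0, 0 -> 0 or 1 (what may appear to the right of each symbol).
# Expanding det(x*I - M) by hand for the matrix below gives x^3 - x^2 - 1.

# M[i][j] = 1 iff symbol j may appear immediately to the right of symbol i
M = [
    [1, 1, 0],  # after 0: 0 or 1
    [0, 0, 1],  # after 1: 2
    [1, 0, 0],  # after 2: 0
]

def char_poly(x):
    # characteristic polynomial of M
    return x**3 - x**2 - 1

def positive_root(f, lo=1.0, hi=2.0, iters=100):
    # bisection; f(lo) < 0 < f(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rho = positive_root(char_poly)

def count_words(n):
    # number of admissible words of length n, counted by start symbol
    counts = [1, 1, 1]
    for _ in range(n - 1):
        counts = [sum(M[i][j] * counts[j] for j in range(3)) for i in range(3)]
    return sum(counts)

# the ratio of successive word counts approaches rho
ratio = count_words(30) / count_words(29)
```

Here $\rho \approx 1.4656$, so the growth of admissible words and the root of the characteristic polynomial agree, which is exactly what the entropy computation rests on.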
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9850429151632047, "lm_q1q2_score": 0.809891580090722, "lm_q2_score": 0.8221891327004133, "openwebmath_perplexity": 113.32071120682777, "openwebmath_score": 0.9403144121170044, "tags": null, "url": "https://math.stackexchange.com/questions/1273964/how-to-determine-omegat" }
# What does this symbol mean? This is from Discrete Mathematics and its Applications What is the symbol used in 9c, 9d, 9f, 10c, 10f, 10g? I looked through the chapter section and the closest symbol I saw to this is the subset, which is DEFINITION 3. The set $$A$$ is a subset of $$B$$ if and only if every element of $$A$$ is also an element of $$B$$. We use the notation $$A \subseteq B$$ to indicate that $$A$$ is a subset of the set $$B$$. We see that $$A \subseteq B$$ if and only if the quantification $$\forall x(x \in A \to x \in B)$$ is true. Note that to show that $$A$$ is not a subset of $$B$$ we need only find one element $$x \in A$$ with $$x \not\in B$$. Such an $$x$$ is a counterexample to the claim that $$x \in A$$ implies $$x \in B$$. Is it just a typo for subset? That's what I originally thought. However, via my use of an implication (trying to apply what I learn :) ), I came up with the following: if the symbol is a typo, it will be used in a single place, or the supposed actual symbol, the subset, will not be used in the surrounding proximity (page). If I assume the hypothesis to be true, then my conclusion, and hence my implication, is false, because the subset does appear in the near proximity (9g) and this symbol is used in multiple locations (all the ones I described). Therefore the hypothesis is false (we reached a contradiction), and the symbol is not a typo. Is that correct logic? • It should be noted that some authors use $\subset$ for $\subseteq$ and denote $\subsetneq$ for a proper subset. – Hugh Jan 14, 2015 at 7:58
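Definition 3's characterization $\forall x(x \in A \to x \in B)$ is exactly what set comparison operators implement in many languages, which makes the $\subseteq$ vs. proper-subset distinction easy to experiment with (a small illustrative sketch of mine, not from the book):

```python
# Python's set operators mirror the subset notation:
#   A <= B  corresponds to  A ⊆ B  (every element of A is in B)
#   A <  B  corresponds to a *proper* subset (subset and not equal)

A = {1, 2}
B = {1, 2, 3}

subset_by_definition = all(x in B for x in A)  # ∀x (x ∈ A → x ∈ B)
is_subset = A <= B
is_proper = A < B
self_subset = A <= A   # every set is a subset of itself...
self_proper = A < A    # ...but never a proper subset of itself
```

This also illustrates Hugh's comment: authors who write $\subset$ for $\subseteq$ are using `<` where Python would use `<=`, which is why the proper-subset variant needs its own symbol $\subsetneq$.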
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787830929848, "lm_q1q2_score": 0.8467704722654436, "lm_q2_score": 0.8577681031721325, "openwebmath_perplexity": 291.03081895004715, "openwebmath_score": 0.9194430112838745, "tags": null, "url": "https://math.stackexchange.com/questions/1103703/what-does-this-symbol-mean" }
ros, c++, px4, pixhawk, mavros this->cmd_att.pose.position.x = 0.0; this->cmd_att.pose.position.y = 0.0; this->cmd_att.pose.position.z = 0.0; tf::Quaternion mav_orientation = tf::createQuaternionFromRPY(0.523598775, 0.523598775, 0 ); this->cmd_att.pose.orientation.x = mav_orientation.x(); this->cmd_att.pose.orientation.y = mav_orientation.y(); this->cmd_att.pose.orientation.z = mav_orientation.z(); this->cmd_att.pose.orientation.w = mav_orientation.w(); this->cmd_thr.data = 0.9; for(int i=0; i<500; i++){ cout << this->cmd_att.pose.orientation << endl; cout << "-----" << endl; cout << this->cmd_thr.data << endl; cout << "-----" << endl; this->cmd_att.header.stamp = ros::Time::now(); this->cmd_att.header.seq = i; this->mav_att_pub.publish( cmd_att ); this->mav_thr_pub.publish( cmd_thr ); ros::spinOnce(); rate.sleep(); } } Originally posted by MaximeRector on ROS Answers with karma: 60 on 2017-11-30 Post score: 0 I worked with MAVROS more than a year ago, so please don't be too strict if something is wrong.
{ "domain": "robotics.stackexchange", "id": 29485, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, c++, px4, pixhawk, mavros", "url": null }
general-relativity, special-relativity, time, twin-paradox Now, this quantity $\frac{\mathrm{d}\tau}{\mathrm{d}t}$ is the rate at which proper time ($\tau$) elapses relative to coordinate time ($t$). The coordinate time is, again, basically what would be measured by a distant observer. So each time the three observers A,B,C meet up, the meeting takes place at the same coordinate time for all three of them. However, the proper time $\tau$, which is the time each observer measures internally, is not the same for all three. The fact that $\frac{\mathrm{d}\tau}{\mathrm{d}t}$ is larger for observer A means that for a given amount of coordinate time (like, say, the interval between two successive meetings of the three observers), A will experience more time than B or C. So if the observers start with synchronized clocks, when they next meet up, A will find that its clock is a little bit ahead of B's and C's clocks. If you're curious about the numbers: we may not have enough precision to get an accurate result, but I can do this just to show how the calculation would work. Let's plug in the Earth's Schwarzschild radius of $r_s = 8.9\text{ mm}$ and the orbital radius of, say, the International Space Station at $r = 6750\text{ km}$ (rough average). We can also use the ISS's orbital speed of $r\frac{\mathrm{d}\theta}{\mathrm{d}t} = 7.68\ \mathrm{\frac{km}{s}}$ for observers B and C. That gives the following rates: $$\begin{align}\frac{\mathrm{d}\tau_A}{\mathrm{d}t} &= 1 - 1.2027\times 10^{-8} & \frac{\mathrm{d}\tau_{B,C}}{\mathrm{d}t} &= 1 - 1.2355\times 10^{-8}\end{align}$$
{ "domain": "physics.stackexchange", "id": 58750, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, special-relativity, time, twin-paradox", "url": null }
python, escaping Title: Is this escape unescape function pair correct? Is this example of an escape - unescape function correct? ESCAPE = '/' def escape(s): rv = s rv = rv.replace(ESCAPE, ESCAPE + ESCAPE) rv = rv.replace('+', ESCAPE + 'PLUS') rv = rv.replace('-', ESCAPE + 'MINUS') return rv def unescape(s): rv = s rv = rv.replace(ESCAPE + 'MINUS', '-') rv = rv.replace(ESCAPE + 'PLUS', '+') rv = rv.replace(ESCAPE + ESCAPE, ESCAPE) return rv Performing string substitutions using multiple passes is almost always a bad idea, not only for performance, but for correctness. In this case, there is indeed a bug: unescape('//PLUS/PLUS') should produce '/PLUS+', but instead produces '/++'. Therefore, you have to parse the input string in one pass: import re def unescape(s): def replacement(match): return { '/': '/', 'PLUS': '+', 'MINUS': '-', }[match.group(1)] return re.sub('/(/|PLUS|MINUS)', replacement, s) In your original code and in the implementation above, there is a lot of repetition. If there are more symbols being escaped than just + and -, then it makes sense to specify the mapping just once. ESCAPE_CHAR = '/' ESCAPE = { ESCAPE_CHAR: ESCAPE_CHAR, '+': 'PLUS', '-': 'MINUS', } ESCAPE_RE = re.compile('|'.join(map(re.escape, ESCAPE.keys()))) UNESCAPE = dict((v, k) for (k, v) in ESCAPE.items()) UNESCAPE_RE = re.compile( re.escape(ESCAPE_CHAR) + '(' + '|'.join(map(re.escape, UNESCAPE.keys())) + ')')
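The bug described above is easy to reproduce. The sketch below re-implements both versions side by side (helper names `unescape_multipass` / `unescape_onepass` are mine, not from the post) and shows the multi-pass version mangling `'//PLUS/PLUS'` while the one-pass regex version round-trips correctly:

```python
import re

ESCAPE = '/'

def escape(s):
    s = s.replace(ESCAPE, ESCAPE + ESCAPE)
    s = s.replace('+', ESCAPE + 'PLUS')
    s = s.replace('-', ESCAPE + 'MINUS')
    return s

def unescape_multipass(s):
    # the buggy multi-pass version from the question
    s = s.replace(ESCAPE + 'MINUS', '-')
    s = s.replace(ESCAPE + 'PLUS', '+')
    s = s.replace(ESCAPE + ESCAPE, ESCAPE)
    return s

def unescape_onepass(s):
    # one pass: each escape sequence is consumed exactly once
    repl = {'/': '/', 'PLUS': '+', 'MINUS': '-'}
    return re.sub('/(/|PLUS|MINUS)', lambda m: repl[m.group(1)], s)

original = '/PLUS+'
escaped = escape(original)         # '//PLUS/PLUS'
bad = unescape_multipass(escaped)  # '/++': the PLUS pass re-reads the escaped slash
good = unescape_onepass(escaped)   # '/PLUS+': round-trips correctly
```

The failure mode is exactly the one named in the answer: the `'/PLUS'` pass consumes the second half of an already-escaped `'//'`, so later passes see characters that were never meant as escape sequences.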
{ "domain": "codereview.stackexchange", "id": 8534, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, escaping", "url": null }
c, unix Title: A trivial command line utility for trimming whitespace from lines in C - follow-up See the previous iteration: A trivial command line utility for trimming whitespace from lines in C Note: see the next iteration at A trivial command line utility for trimming whitespace from lines in C - follow-up 2 Now my code looks like this: #include <ctype.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #define HELP_MESSAGE "Usage: trim [-h] [-v] [FILE1, [FILE2, [...]]]\n" \ " -h Print the help message and exit.\n" \ " -v Print the version message and exit.\n" \ " If no files specified, reads from standard input.\n" #define VERSION_MESSAGE "trim 1.61\n" \ "By Rodion \"rodde\" Efremov 07.04.2015 Helsinki\n" #define HELP_FLAG "-h" #define VERSION_FLAG "-v" #define NEWLINE_CHAR '\n' /******************************************************************************* * This routine removes all leading and trailing whitespace from a string, * * doing that in-place. * ********************************************************************************/ static char* trim_inplace(char* start) { char* end; for (end = &start[strlen(start) - 1]; isspace(*end); --end) { *end = '\0'; }
{ "domain": "codereview.stackexchange", "id": 12932, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, unix", "url": null }
vba, excel Debug.Print "Union at :" & Timer - tm If Not Grt100Rng Is Nothing Then With Grt100Rng.Borders If GreaterThan100.Value Then .Color = vbBlue .LineStyle = xlContinuous .Weight = xlThick Else .Color = vbBlack .LineStyle = xlNone .Weight = xlThin End If End With With Less0Rng.Borders If LessThan0.Value Then .Color = vbBlue .LineStyle = xlContinuous .Weight = xlThick Else .Color = vbBlack .LineStyle = xlNone .Weight = xlThin End If End With End If Debug.Print Timer - tm End Sub
{ "domain": "codereview.stackexchange", "id": 35621, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel", "url": null }
## 11-3 LDA (Linear Discriminant Analysis) chinese version If we treat each data entry as a point in high-dimensional space, then we shall be able to visualize the dataset if the dimensionality is less than or equal to 3. For instance, the following example plots the dataset (3 dimensions, 300 entries) with 3 class labels using different viewing angles: Example 1 (extraction01.m): DS=prData('random3'); subplot(1,2,1); dsScatterPlot3(DS); view(155, 46); subplot(1,2,2); dsScatterPlot3(DS); view(-44, 22); The left and right plots are based on the same 300 entries, only with different view angles. The left plot seems a bit random, since the view angle tends to overlap the projection onto the 2D space. On the other hand, the right plot gives us much better separation between classes after 2D projection. As you can imagine, the goal of LDA is to find the best projection (or equivalently, the best viewing angle) such that the separation between classes is maximized. Hint In the above example, you can try to click and drag the plot axis to change the viewing angle. Try it and see which viewing angle gives you the best separation between data of different classes. The ultimate goal of feature extraction can be explained in a mathematical manner. Suppose that we have a dataset with features V = {v1 , v2 ,... , vd}. We wish to have a classifier with recognition rate denoted by J(·), which is a function of the transformed features. Therefore the objective of feature extraction is to find the best set of transformed features S such that J(S) ≥ J(T), where both S and T are sets of linearly transformed features obtained from the set V. Let us use LDA to project the dataset used in the previous example to 2D space:
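The projection LDA searches for can be sketched numerically without any toolbox. Below is a minimal two-class Fisher discriminant in plain Python (the data and helper names are my own toy illustration, not the toolbox's): project 2D points onto $w = S_w^{-1}(m_1 - m_2)$, where $S_w$ is the pooled within-class scatter, and check that the classes separate along that direction.

```python
# Toy Fisher LDA for two classes in 2D: w = Sw^{-1} (m1 - m2).

def mean(pts):
    n = len(pts)
    return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

def scatter(pts, m):
    # sum of outer products of deviations from the class mean
    S = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in pts:
        dx, dy = x - m[0], y - m[1]
        S[0][0] += dx * dx; S[0][1] += dx * dy
        S[1][0] += dy * dx; S[1][1] += dy * dy
    return S

def solve2x2(S, b):
    # solve S w = b by Cramer's rule
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [(b[0] * S[1][1] - b[1] * S[0][1]) / det,
            (S[0][0] * b[1] - S[1][0] * b[0]) / det]

class1 = [(0, 0), (1, 1), (2, 0), (1, -1)]
class2 = [(4, 0), (5, 1), (6, 0), (5, -1)]

m1, m2 = mean(class1), mean(class2)
S1, S2 = scatter(class1, m1), scatter(class2, m2)
Sw = [[S1[i][j] + S2[i][j] for j in range(2)] for i in range(2)]

w = solve2x2(Sw, [m1[0] - m2[0], m1[1] - m2[1]])

proj1 = [w[0] * x + w[1] * y for x, y in class1]
proj2 = [w[0] * x + w[1] * y for x, y in class2]
# along w, the two classes occupy disjoint intervals
```

This 1D projection is the "best viewing angle" in miniature: among all directions, $w$ maximizes the between-class separation relative to the within-class spread.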
{ "domain": "mirlab.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9653811621568289, "lm_q1q2_score": 0.80656006010273, "lm_q2_score": 0.8354835289107309, "openwebmath_perplexity": 9815.860437160674, "openwebmath_score": 0.7145505547523499, "tags": null, "url": "http://mirlab.org/jang/books/dcpr/feLda.asp?title=11-3%20LDA%20(%E7%B7%9A%E6%80%A7%E8%AD%98%E5%88%A5%E5%88%86%E6%9E%90)" }
java, beginner, sudoku if ( puzzleIsValid() == false ) { System.out.println("Illegal puzzle."); System.exit(1); } } public void printPuzzle() { for ( byte y = 0; y <=8; y++ ) { for ( byte x = 0; x <=8; x++ ) { System.out.print(myPuzzle[y][x] + " "); if ( x == 2 || x == 5 ) { System.out.print(" "); } } System.out.println(); if ( y == 2 || y == 5 || y == 8 ) { System.out.println(); } } } public void solvePuzzle() { solvePuzzle(false); } public void solvePuzzle(boolean showDetails) { // print puzzle in the console System.out.println("Unsolved puzzle:"); printPuzzle(); byte cyclesElapsed = 1; // iterate through each square in a 9x9 sudoku puzzle and try to solve the square while ( true ) { byte squaresSolved = 0; // We need some way to pause execution if the puzzle either becomes solved or is discovered to be unsolvable. boolean somethingChanged = false; // check each square for ( byte square = 1; square <= 9; square++ ) { // when we do our x-ray checks later we will need to know what row numbers and what column numbers to check byte[] coordinates = getSquareCoordinates(square, (byte) 1); byte checkThisRowFirst = coordinates[1]; byte checkThisColumnFirst = coordinates[0]; // make a boolean array to keep track of whether or not a coordinate in the square can be a solution for that numToCheck // true = possibly a solution, false = not a solution // once we are down to 1 true and 8 falses, we have solved that numToCheck and can update the box in myPuzzle boolean[] possiblyContainsNum = new boolean[] {true, true, true, true, true, true, true, true, true};
{ "domain": "codereview.stackexchange", "id": 11145, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner, sudoku", "url": null }
- To what function are we applying Taylor's theorem? – robjohn Nov 9 '12 at 19:50 @robjohn In this case $e^x$ for $x = 1$, but it works in general. – glebovg Nov 9 '12 at 20:17 Why downvote? Please comment. – glebovg Nov 9 '12 at 20:17 I downvoted because it doesn't address the distinction between the error term and the number of correct digits in the partial sum. (Also consider simplifying or clarifying: the question asks only about $e$ not $e^x$.) – Dan Brumleve Nov 9 '12 at 20:21 @glebovg: Since $f$ was not specified in the question, it would be good to mention it in the answer. – robjohn Nov 9 '12 at 22:49
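To make the point in the comments concrete: for $f(x) = e^x$ at $x = 1$, the Lagrange remainder after the degree-$n$ partial sum is $R_n = e^c/(n+1)!$ for some $0 < c < 1$, hence $0 < R_n < 3/(n+1)!$ since $e < 3$. A short check (my sketch, not from the thread):

```python
# e = sum_{k=0}^{n} 1/k! + R_n, with 0 < R_n < 3/(n+1)!  (since e < 3).

import math

def partial_sum(n):
    return sum(1 / math.factorial(k) for k in range(n + 1))

n = 10
approx = partial_sum(n)
bound = 3 / math.factorial(n + 1)   # Lagrange error bound
error = math.e - approx             # actual remainder
```

Note the distinction Dan Brumleve raises: the bound `3/(n+1)!` controls the size of the error, but the number of correct digits in the partial sum also depends on where the error falls relative to a decimal carry, so "error below $10^{-k}$" does not by itself guarantee $k$ correct digits.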
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9706877717925422, "lm_q1q2_score": 0.853377019906055, "lm_q2_score": 0.8791467706759583, "openwebmath_perplexity": 472.60097472746503, "openwebmath_score": 0.8965113759040833, "tags": null, "url": "http://math.stackexchange.com/questions/233760/calculating-e/233768" }
cell-biology, proteins, transcription, cell-signaling, intracellular-transport Time is in minutes, and zeroed at first contact between the two cells. I've put a red dot on the T-cell and a blue one on the APC in the DIC images (left panes); hopefully that proves more informative than annoying. The right panes show GFP fluorescence and thus CD3 localization. As time progresses, CD3 is re-localized from one part of the membrane to another (the synapse). There is supposedly a video of this is in the supplementary information of the article, though I was unable to open it. The rate and directionality of the movement implies that an active process is occurring, rather than simple diffusion. However, they did not find the actual mechanism for movement and I haven't found any follow-up papers in a brief search (though many subsequent papers implicate the cytoskeleton in this movement). Just to show that movement of transmembrane proteins can, in fact, be actively directed by the cytoskeleton, I refer you to this paper: Grabham PW, Foley M, Umeojiako A, Goldberg DJ. 2000. Nerve growth factor stimulates coupling of beta1 integrin to distinct transport mechanisms in the filopodia of growth cones. J Cell Sci 113:3003-3012. They show that membrane-spanning integrins are moved along actin filaments of the cytoskeleton by myosin motor proteins. Expectedly, the abstract does a good job of summarizing the paper: The cycling of membrane receptors for substrate-bound proteins via their interaction with the actin cytoskeleton at the leading edge of growth cones and other motile cells is important for neurite outgrowth and cell migration. Receptor delivered to the leading edge binds to its ligand, which induces coupling of the receptor to a rearward flowing network of actin filaments. This coupling is thought to facilitate advance... [T]ransport was dependent on an intact actin cytoskeleton and myosin ATPase...
{ "domain": "biology.stackexchange", "id": 8309, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cell-biology, proteins, transcription, cell-signaling, intracellular-transport", "url": null }
javascript, performance, minecraft function doRandomTickEventsForEachPlayer() { const allPlayers = world.getAllPlayers(); for (var p = 0; p < allPlayers.length; p++) { processRhizome(allPlayers[p]); } var nextInterval = 60 + Math.round(Math.random()*40); // Between 3 and 5 seconds system.runTimeout(doRandomTickEventsForEachPlayer, nextInterval); } doRandomTickEventsForEachPlayer(); While you could go about optimizing the detection of rhizome blocks using deltas, caching, and all sorts of smart heuristics, there is also a simpler optimization available that utilizes the probabilistic nature of your actions. Currently, you check every block to test if it's rhizome, and then, if it is, you still ignore its decay 75% of the time and its spreading 95% of the time. What would be better would be to only check 25% of the blocks for rhizome, and decay these (if applicable) 100% of the time (similarly, check 5% of the blocks for rhizome to spread). This alone is a ~4x speedup, but you could also reduce the lag spikes by smoothing out your random ticks using a similar trick. You could check half as many blocks for decay, twice as often, resulting in smaller lag spikes (or any fraction you want). This of course won't improve your overall efficiency, it will just make the server run smoother. In reality, you're probably best off merging these improvements with some of the optimizations shared by J_H, so you get the free 4x speedup on top of some heuristics to generally avoid wasting time searching non-rhizome blocks.
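The equivalence this answer relies on is that thinning the set of blocks you visit with probability p, then acting on every survivor, samples each block exactly as often as visiting every block and acting with probability p. A seeded toy simulation (hypothetical block list, no Minecraft API involved) illustrates it:

```python
# Instead of visiting every block and ignoring 75% of them, visit a random
# 25% and act on all of those. Each block is still acted on independently
# with probability p, so the expected decay count is identical -- but the
# second approach does 4x fewer visits.

import random

p = 0.25
n_blocks = 100_000
rng = random.Random(42)  # fixed seed for reproducibility

# approach 1: check every block, act with probability p
acted_a = sum(1 for _ in range(n_blocks) if rng.random() < p)

# approach 2: pre-filter blocks with probability p, act on all survivors
visited = [i for i in range(n_blocks) if rng.random() < p]
acted_b = len(visited)
```

Both proportions come out near p, which is the whole speedup: the per-block decay distribution is unchanged while the work drops by a factor of 1/p.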
{ "domain": "codereview.stackexchange", "id": 45456, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, performance, minecraft", "url": null }
java, c, functional-programming, lisp, compiler build.bat bash -c "ruby a.rb && cat mjlc.c >> out.c && gcc out.c -omjlc -std=c11 -pedantic -Wall `pkg-config --cflags --libs glib-2.0` -fwhole-program -O3 -s && rm out.c" a.rb def f(dst, src, str) dst.puts("#define #{str} \\") src.readlines.each do |line| dst.puts("\"#{line.chomp.tr("\t", "")}\\n\"\\") end dst.puts() end in_a = File.open("a.java", "rb") in_b = File.open("b.java", "rb") in_c = File.open("c.java", "rb") out = File.open("out.c", "wb") f(out, in_a, "A") f(out, in_b, "B") f(out, in_c, "C") in_a.close in_b.close in_c.close out.close a.java import java.math.BigInteger; class List { Object a; Object b; } class Expression { Object eval() { return null; } } b.java static Object cons(Object a, Object b) { List c = new List(); c.a = a; c.b = b; return c; } static Object list() { return null; } static Object list(Object a) { return cons(a, null); } static Object list(Object a, Object... b) { Object c = list(b[b.length - 1]); for(int i = b.length - 2; i >= 0; i--) { c = cons(b[i], c); } return cons(a, c); }
{ "domain": "codereview.stackexchange", "id": 18391, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, c, functional-programming, lisp, compiler", "url": null }
More generally, if $\rm\:b,c\:$ are coprime then $\rm\: e' = b^{\phi(c)}$ satisfies $\rm\: e'\equiv 0\pmod{b},\ e'\equiv 1\pmod{c}\:.\$ Hence, $\$ using $\rm\:e'\:$ and its complement $\rm\:\ e = 1-e'\:,\$ where $\rm\ \ e\: \equiv 1\pmod{b},\ e\:\equiv 0\pmod{c}\:,\:$ yields the following closed-form for solutions of congruences by the Chinese Remainder Theorem $$\begin{eqnarray}\rm x\ \equiv\ a\pmod{b} \\ \rm x\ \equiv\ d\pmod{c}\end{eqnarray}\rm\ \ \iff\ \ x\ \equiv\ a\ e + d\ e'\pmod{b\:c}$$ The relationship between CRT and the orthogonal idempotents $\rm\:(1,0),\ (0,1)\:$ will become clearer when you study the Peirce decomposition induced by such idempotents. First, using Fermat's theorem, $q^{p-1}\equiv 1 \pmod p$. Also, $p^{q-1}\equiv 0 \pmod p$. Second, using Fermat's theorem, $p^{q-1}\equiv 1 \pmod q$. Also, $q^{p-1}\equiv 0 \pmod q$. Using the first part we get $q^{p-1}+p^{q-1}\equiv1 \pmod p$; using the second part we get $p^{q-1}+q^{p-1}\equiv1 \pmod q$. Since $p$ and $q$ are distinct primes, $\gcd (p,q)=1$
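Both halves of the argument can be checked mechanically (a sketch of mine; Python's three-argument `pow` performs the modular exponentiation):

```python
# Orthogonal idempotents for CRT, plus the Fermat identity from the second answer.
from math import gcd

def crt_via_idempotents(a, b, d, c):
    # solve x = a (mod b), x = d (mod c) for coprime b, c
    assert gcd(b, c) == 1
    phi_c = sum(1 for k in range(1, c) if gcd(k, c) == 1)  # Euler's totient of c
    e_prime = pow(b, phi_c, b * c)   # e' = 0 (mod b), e' = 1 (mod c)
    e = (1 - e_prime) % (b * c)      # e  = 1 (mod b), e  = 0 (mod c)
    return (a * e + d * e_prime) % (b * c)

x = crt_via_idempotents(2, 5, 3, 7)   # x = 2 (mod 5), x = 3 (mod 7)

# Fermat part: for distinct primes p, q,  q^(p-1) + p^(q-1) = 1 (mod pq)
p, q = 3, 5
fermat_sum = (pow(q, p - 1) + pow(p, q - 1)) % (p * q)
```

For b = 5, c = 7 this gives e' = 15 and e = 21, and the closed form a·e + d·e' reproduces the unique solution mod 35, exactly as the displayed congruence system promises.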
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9711290913825541, "lm_q1q2_score": 0.8215193161630903, "lm_q2_score": 0.8459424431344437, "openwebmath_perplexity": 439.6872591471656, "openwebmath_score": 0.7740459442138672, "tags": null, "url": "https://math.stackexchange.com/questions/134106/crt-fermats-little-theorem/134110" }
digital-communications, sampling, modulation, polar Title: How is the normalized power required for unipolar NRZ twice that of polar NRZ? So, the normalized power for the unipolar NRZ scheme is $\frac{1}{2}V^2$. According to my book, the normalized power required for polar NRZ is half that of unipolar NRZ. So, the power should be $\frac{V^2}{4}$. However, the normalized power for polar NRZ seems to me to be $\frac{1}{2}V^2+\frac{1}{2}(-V)^2=V^2$. Can someone please explain this contradiction? Diagrams from Data Communications and Networking, Behrouz A. Forouzan, p. 101-2 If the levels of the polar signal were $\pm V$ you would be right, but then it wouldn't be a fair comparison with the unipolar signal with levels $+V$ and zero. A fair comparison would be to have the same difference between the two levels. So we look at a polar signal with levels $\pm V/2$, which has a power of $V^2/4$.
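The answer's point is about comparing at equal level *separation*. A few lines make the accounting explicit (an illustrative sketch, assuming equiprobable symbols):

```python
# Average normalized power = mean of level^2 over equiprobable levels.
def avg_power(levels):
    return sum(v * v for v in levels) / len(levels)

V = 2.0  # any amplitude; only the ratios matter

unipolar = avg_power([V, 0.0])              # levels {V, 0}: V^2 / 2
polar_same_swing = avg_power([V/2, -V/2])   # same separation V: V^2 / 4
polar_double_swing = avg_power([V, -V])     # separation 2V: V^2 (the asker's calc)
```

At the same peak-to-peak swing V, polar NRZ needs half the power of unipolar NRZ; the asker's $V^2$ figure corresponds to silently doubling the level separation.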
{ "domain": "dsp.stackexchange", "id": 12358, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "digital-communications, sampling, modulation, polar", "url": null }
homework-and-exercises, special-relativity, coordinate-systems, inertial-frames Title: Clarification on the physical mechanism in an exercise question Following is a question from Rindler Prove that at any instant there is just one plane in $S$ on which the clocks of $S$ agree with the clocks of $S'$, and that this plane moves with velocity $\frac{c^2}{v}(1 − 1/\gamma)$.
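A sketch of the standard route to this result (my derivation, not Rindler's printed solution): express the readings of the $S'$ clocks in $S$ coordinates via the Lorentz transformation and set the two readings equal.

```latex
% In S, the S' clocks read
%   t' = \gamma\left(t - \frac{vx}{c^2}\right).
% Agreement with the S clocks means t' = t:
%   t = \gamma t - \frac{\gamma v x}{c^2}
%   \;\Longrightarrow\; x = \frac{c^2 t}{v}\cdot\frac{\gamma - 1}{\gamma}
%     = \frac{c^2}{v}\left(1 - \frac{1}{\gamma}\right) t.
% For each instant t this fixes a single value of x (with y, z free),
% i.e. one plane, and that plane advances at
%   \frac{dx}{dt} = \frac{c^2}{v}\left(1 - \frac{1}{\gamma}\right).
```

As a sanity check, for small $v$ one has $1 - 1/\gamma \approx v^2/2c^2$, so the plane moves at roughly $v/2$, halfway between the two frames, which is what symmetry suggests.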
{ "domain": "physics.stackexchange", "id": 65023, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, special-relativity, coordinate-systems, inertial-frames", "url": null }
ros, catkin, ros-kinetic, gcc, cmake Title: How to know which version of gcc/g++ compiled ros and how to change it for compiling my workspace Hi, I have a problem compiling my workspace: https://answers.ros.org/question/313955/undefined-reference-to-everything/ and I think that this could be because of the version of gcc and g++. I want to follow this answer: https://answers.ros.org/question/291910/linking-problem-with-catkin_libraries/ because I noticed that all of my undefined references involve std::string, like he said. But I don't know how to tell whether this is the problem, which version of gcc/g++ was used to build ROS, or how to choose the version used to compile my workspace (use the default compiler, like the answer said). Thank you in advance. EDIT Here the output of: which gcc /usr/bin/gcc env | grep -E 'CC|CXX' QT_LINUX_ACCESSIBILITY_ALWAYS_ON=1 QT_ACCESSIBILITY=1 gcc --version gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 Copyright (C) 2015 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Originally posted by ipgvl on ROS Answers with karma: 45 on 2019-02-04 Post score: 1 "ROS" (in quotes, as it's really ambiguous to ask these sorts of things like that) is built with whatever the default C++ compiler is on the system it is built on. In the end it's all CMake, so whatever CMake decides should be the compiler, will be the compiler. You can override/configure that using the traditional CC and CXX environment variables (and using something called a toolchain file, but I'll ignore those here).
{ "domain": "robotics.stackexchange", "id": 32403, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, catkin, ros-kinetic, gcc, cmake", "url": null }
c++, image, template, classes, variadic // Enable this function if ElementT = RGB void print(std::string separator = "\t", std::ostream& os = std::cout) const requires(std::same_as<ElementT, RGB>) { for (std::size_t y = 0; y < size[1]; ++y) { for (std::size_t x = 0; x < size[0]; ++x) { os << "( "; for (std::size_t channel_index = 0; channel_index < 3; ++channel_index) { // Ref: https://isocpp.org/wiki/faq/input-output#print-char-or-ptr-as-number os << +at(x, y).channels[channel_index] << separator; } os << ")" << separator; } os << "\n"; } os << "\n"; return; } void setAllValue(const ElementT input) { std::fill(image_data.begin(), image_data.end(), input); } friend std::ostream& operator<<(std::ostream& os, const Image<ElementT>& rhs) { const std::string separator = "\t"; rhs.print(separator, os); return os; } Image<ElementT>& operator+=(const Image<ElementT>& rhs) { check_size_same(rhs, *this); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::plus<>{}); return *this; }
{ "domain": "codereview.stackexchange", "id": 45280, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, image, template, classes, variadic", "url": null }
javascript, jquery, performance, stackexchange, physics Title: Celebratory fireworks animation To celebrate an important event, I hastily cobbled together an HTML canvas-based animation. A few concerns I have include: Performance: Does it run reasonably smoothly on most modern machines? How can I make it more efficient? Portability / Compatibility: Does it work correctly on all modern browsers (excluding old versions of Internet Explorer)? Modelling: Is this a good way to simulate fireworks? Is there anything I could do to enhance the realism? While we normally don't say "thanks" on Stack Exchange questions, I'd like to break that rule right now and say a big Thank you! to all members of the Code Review community. function animate(selector) { var $canvas = $(selector); var width = $canvas.innerWidth(); var height = $canvas.innerHeight();
{ "domain": "codereview.stackexchange", "id": 16347, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery, performance, stackexchange, physics", "url": null }
python, performance, numpy, pandas I've made a few minor changes like o -> i as i is a far more common name for a throwaway variable in a loop. I've also changed how the ending index of a batch is calculated, checking if you can move by the batch size makes more sense to me. Some closing remarks, this is definitely not even close to perfect. There are a couple of loops that could be changed to functions which are then apply-ied for more parallelism. The main matrix we are working with is converted from floats to bools to ints, but it probably could have stayed as bools. I hope there is enough in this that you can actually work on and improve upon yourself, good luck and please post a follow up question!
{ "domain": "codereview.stackexchange", "id": 31545, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, numpy, pandas", "url": null }
mars, phobos According to Geoffrey Landis, Deimos and Phobos could have each been moons of parent asteroids, and they were separated due to tidal forces: This would explain how Phobos and Deimos could survive as moons. I could mention the other hypotheses, but Landis' would answer your question quite well. Right now, there isn't any clear consensus as to how Phobos and Deimos became moons. More research is still necessary, and we may need to modify our models for the evolution of Mars and the asteroid belt.
{ "domain": "astronomy.stackexchange", "id": 1870, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mars, phobos", "url": null }
ruby, ruby-on-rails Title: How would you refactor this case statement? I've got this code in my ApplicationHelper file: def new_button case controller_name when 'cars' "<li class='has-form'><a class='button' href='#{new_car_path}'>New Car</a></li>".html_safe when 'trucks' "<li class='has-form'><a class='button' href='#{new_truck_path}'>New Truck</a></li>".html_safe when 'mopeds' "<li class='has-form'><a class='button' href='#{new_moped_path}'>New Moped</a></li>".html_safe else nil end end So I can just put <%= new_button %> in the view, to display the appropriate button based on the controller_name being accessed. I have about 10 different controllers to select from (and I'm sure that collection will grow), so the code is getting a bit lengthy. Is there a better way to accomplish this? You can use the name of your current controller to dynamically generate your buttons. Controllers names are plural by convention in Rails, so you will want to get the singular version of your controller name. singular = controller_name.singularize # get the controller name & make it singular You can generate path to the "new" action by using send to call a dynamic method name. If you are not using Rails resources, you can use url_for(controller: controller_name, action: "new") to accomplish the same thing. path = send("new_#{singular}_path") # generate the URL for that item's new action Similarly, you can dynamically generate the text for your buttons. title = "New #{singular.titleize}" # generate the text for the link And finally, put it all together to generate your HTML: "<li class='has-form'><a class='button' href='#{path}'>#{title}</a></li>".html_safe # return
{ "domain": "codereview.stackexchange", "id": 8009, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ruby, ruby-on-rails", "url": null }
nuclear-physics, charge, atomic-physics Title: Why does the charge density at the centre of a nucleus decrease with increasing mass number? I have been learning about the Woods-Saxon approximation for charge distribution in the nucleus; it is given by $$\rho_{ch}(r)=\frac{\rho_{ch}^0}{1+e^{(r-a)/b}}$$ The value of $\rho_{ch}^0$ decreases with increasing mass number $A$, but it is not immediately clear to me why; the distribution is subject to a normalisation condition, but it is $$\int \rho_{ch}(\textbf{r}) \text{d}^3\textbf{r}=Ze$$ so it is not that the charge distribution is normalised by dividing by the total charge of the nucleus, as I initially thought. Is there some physical explanation for why $\rho_{ch}^0$ decreases with mass number? For light nuclei, up to, say, neon ($Z=10$) the numbers of neutrons and protons in the stable isotopes are roughly equal. As $Z$ increases the ratio of neutrons to protons increases, reaching about 1.5 for lead ($Z=82$). Treating the nucleons in nuclei naively as close-packed spheres of equal radius, the nuclear volume will be proportional to $A=Z+N$, whereas the charge is proportional to $Z$. The mean nuclear charge density is therefore proportional to $\frac ZA =\frac1{1+\tfrac NZ}$, so as $Z$ increases and $\frac NZ$ increases, the nuclear charge density decreases.
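As a quick numerical illustration of the answer's close-packed-sphere argument (a sketch; the isotopes used here are my own example choices, not from the answer):

```python
# Crude illustration: with nucleons as equal close-packed spheres, the mean
# nuclear charge density scales as Z/A.  Example isotopes are my own choices.
isotopes = {
    "He-4":   (2, 4),     # (Z, A)
    "Ne-20":  (10, 20),
    "Pb-208": (82, 208),
}

for name, (z, a) in isotopes.items():
    print(f"{name}: Z/A = {z / a:.3f}")
# Z/A falls from 0.500 for the light nuclei to ~0.394 for Pb-208,
# mirroring the decrease in central charge density with mass number.
```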
{ "domain": "physics.stackexchange", "id": 86575, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nuclear-physics, charge, atomic-physics", "url": null }
In simple terms, PageRank is an intuitive way of ranking web pages, which formed the basis for Google’s web indexing algorithm during its early phase. In this article, you’ll learn about the intuition behind PageRank and how to implement it in Python. The article is divided into the following sections: • Basic Idea behind Page Rank • Understanding the Page Rank algorithm • Implementing Page Rank from scratch ## Basic Idea behind Page Rank The intuition behind PageRank is based on the idea that the popularity of a web page is determined not only by the number of incoming links but also by the kind of incoming links. Citations from highly ranked pages contribute more than those from lower-ranked web pages; for example, if your website is linked by the Forbes website, it will affect your ranking more than a link from a random website. Taking it further, let’s take an example of calculating the PR of a web page A cited by web page B, shown in Fig. 1: $$PR(A)=(1-d)* (1/N)+ d* P(B,A)*PR(B)$$ PR(A): PR of A PR(B): PR of B P(B,A): Probability of going from B to A (here it is equal to one) N: Total number of web pages (in our case 2). d: the damping factor, which adds some randomness to the equation. Simultaneously, the PR of B is calculated. This process continues until the PR does not change beyond some value. ## Page Rank Algorithm To get a better understanding of how PageRank works, we consider a graph (shown by fig 2) of web pages having links shown by the arrows. Note that if there are web pages with no out-links, then they do not contribute to the page ranking (they are usually referred to as dangling pages). Our aim is now to figure out the PR of the individual web pages. In order to do so, we need to perform the following steps: • Find the probabilities of going from one web page to another (represented using a probability transition matrix) • Apply the page rank algorithm over the web pages until it converges. ### STEP 1
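As a preview, the two-page iteration described by the formula for Fig. 1 can be sketched in plain Python (an illustrative sketch, not the article's own implementation; the damping factor d = 0.85 is an assumed common choice):

```python
# Two-page example of Fig. 1: each page links only to the other,
# so P(B, A) = P(A, B) = 1.  d = 0.85 is an assumed common choice.
d = 0.85
N = 2
pr_a, pr_b = 1.0, 0.0    # deliberately non-uniform starting ranks

for _ in range(100):
    new_a = (1 - d) * (1 / N) + d * 1.0 * pr_b   # PR(A) from the formula
    new_b = (1 - d) * (1 / N) + d * 1.0 * pr_a   # PR(B), computed simultaneously
    if abs(new_a - pr_a) < 1e-12 and abs(new_b - pr_b) < 1e-12:
        break                                    # ranks stopped changing
    pr_a, pr_b = new_a, new_b

print(pr_a, pr_b)   # both ranks converge to 0.5, as expected by symmetry
```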
{ "domain": "github.io", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9857180694313357, "lm_q1q2_score": 0.817099963898094, "lm_q2_score": 0.8289388104343892, "openwebmath_perplexity": 1006.8610830645682, "openwebmath_score": 0.7309039235115051, "tags": null, "url": "https://isarth.github.io/pagerank/" }
python, performance, algorithm, python-3.x Runtime: 40 ms, faster than 81.00% of Python3 online submissions for Two Sum. Memory Usage: 14.3 MB, less than 5.08% of Python3 online submissions for Two Sum. Next challenges: How could I continue to improve the code, and I am curious about the approaches of the top 20%. Don't leave white space at the end of lines. Use enumerate. When comparing with None use is and is not. It'd be cleaner to just use in rather than get. If you want to use nums_d.get then you should use it outside the if so you don't have to use it a second time when you enter the if. This however makes the code messy for not much of a benefit IMO. Unless the site forces you to return lists, returning a tuple would be more Pythonic. Your comments aren't helpful; if anything they make the code harder to read for me. The variable name nums is easier to read than nums_d, the _d is useless. When returning it would be better to either: Raise an exception, as the lookup failed. Return a tuple where both values are None. This is so you can tuple unpack without errors. Getting the code: from typing import List, Tuple def test_peil(nums: List[int], target: int) -> Tuple[int, ...]: lookup = {} for i, v in enumerate(nums): if target - v in lookup: return i, lookup[target - v] lookup[v] = i raise ValueError('Two sums target not in list.')
{ "domain": "codereview.stackexchange", "id": 33936, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, algorithm, python-3.x", "url": null }
sql, sql-server Title: Too many parentheses to format a percentage in SELECT This query seems to have way too many parentheses. SELECT CAST(ISNULL(ROUND(CAST((SUM(product1) + SUM(product2)) AS FLOAT) / CAST(salescalls.visits AS FLOAT) * 100, 2), 0) AS VARCHAR) + '%' AS avrg FROM sales, salescalls WHERE sales.salescallId = salescalls.id There is something you can do about CAST((SUM(product1) + SUM(product2)) AS FLOAT) / CAST(salescalls.visits AS FLOAT) * 100 To perform floating-point division rather than integer division, either the dividend or the divisor needs to be a float. You don't need to make both operands floats. Furthermore, you can make the dividend a float by using floating-point multiplication. That means that you don't need to cast anything to a float. That simplifies it to 100.0 * (SUM(product1) + SUM(product2)) / salescalls.visits Next, you can rewrite the sums as one aggregate by adding the two columns in each row before summing the rows: 100.0 * SUM(product1 + product2) / salescalls.visits I don't think there's much that can be done about ROUND(), ISNULL(), and CAST(… AS VARCHAR), though. Consider doing that formatting in your application-layer code instead of SQL. You've written the query using an old-style join. It would be better to write it using a JOIN keyword: SELECT CAST( ISNULL( ROUND( 100.0 * SUM(product1 + product2) / salescalls.visits, 2 ), 0 ) AS VARCHAR ) + '%' AS avrg FROM sales INNER JOIN salescalls ON sales.salescallId = salescalls.id
{ "domain": "codereview.stackexchange", "id": 9299, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sql, sql-server", "url": null }
ros install(DIRECTORY include/${PROJECT_NAME}/ DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} PATTERN ".svn" EXCLUDE ) This is the package.xml <?xml version="1.0"?> <package> <name>costmap_2d</name> <version>0.0.0</version> <description>The costmap_2d package</description> <!-- One maintainer tag required, multiple allowed, one person per tag --> <!-- Example: --> <!-- <maintainer email="jane.doe@example.com">Jane Doe</maintainer> --> <maintainer email="lempereur@todo.todo">lempereur</maintainer> <!-- One license tag required, multiple allowed, one license per tag --> <!-- Commonly used license strings: --> <!-- BSD, MIT, Boost Software License, GPLv2, GPLv3, LGPLv2.1, LGPLv3 --> <license>TODO</license> <!-- Url tags are optional, but mutiple are allowed, one per tag --> <!-- Optional attribute type can be: website, bugtracker, or repository --> <!-- Example: --> <!-- <url type="website">http://wiki.ros.org/costmap_2d</url> --> <!-- Author tags are optional, mutiple are allowed, one per tag --> <!-- Authors do not have to be maintianers, but could be --> <!-- Example: --> <!-- <author email="jane.doe@example.com">Jane Doe</author> -->
{ "domain": "robotics.stackexchange", "id": 18691, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros", "url": null }
java, programming-challenge class Main { public static void main(final String[] args) { try (final Scanner sc = new Scanner(System.in)) { final int numberOfCases = sc.nextInt(); for (int i = 0; i < numberOfCases; i++) { final int firstNumber = sc.nextInt(); final int secondNumber = sc.nextInt(); System.out.println(addReversedNumbers(firstNumber, secondNumber)); } } } private static int addReversedNumbers(final int firstNumber, final int secondNumber) { return reverseNumber(reverseNumber(firstNumber) + reverseNumber(secondNumber)); } private static int reverseNumber(final int number) { int numberToReverse = number; int reversedNumber = 0; while (numberToReverse > 0) { final int digit = numberToReverse % 10; numberToReverse /= 10; reversedNumber = 10 * reversedNumber + digit; } return reversedNumber; } }
{ "domain": "codereview.stackexchange", "id": 33553, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, programming-challenge", "url": null }
astrophysics, python, planetary-transits, astropy, light-curve However, the minimum point is not the one in the middle of the transit due to noise, so the transit depth may not be accurate. Calculation of both depth and duration is usually done not on the raw data but derived from a fit to the data. In your last three lines of code you also calculate the average / median over all the data, while you should calculate the uneclipsed mean or median flux only for the non-transit time (with the median this possibly has only a tiny influence, yet it might matter). As a first and crude step, I'd de-noise the data by applying a floating average filter over it; you will have to test its width and see what gives you the best results: you don't want to average out features, but you do want to average out noise. The better approach is to not smooth but actually fit a physical model to the data which takes into account the typical light curve behaviour of a transit. For an implementation I can point you to pytransit (reference paper). (Are you sure you are not re-inventing the wheel?). See also this paper by Maxted and Gill for a comparison of a few algorithms.
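A sketch of the floating-average (boxcar) filter suggested above, using NumPy (the window width of 25 is an arbitrary placeholder you would tune against your own light curve):

```python
import numpy as np

def moving_average(flux, width):
    """Boxcar ('floating average') filter; width is the window length.

    mode='same' keeps the output the same length as the input, at the
    cost of edge bias over the first/last width//2 samples.
    """
    kernel = np.ones(width) / width
    return np.convolve(flux, kernel, mode="same")

# Demo on a noisy constant light curve: the filter pulls values back
# toward the true flux level of 1.0.
rng = np.random.default_rng(42)
flux = 1.0 + 0.05 * rng.standard_normal(1000)
smooth = moving_average(flux, width=25)
```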
{ "domain": "astronomy.stackexchange", "id": 4509, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "astrophysics, python, planetary-transits, astropy, light-curve", "url": null }
python, regex, lambda matched = [] unmatched = [] with open(source) as f: for line in f: if pattern.match(line): matched.append(line) else: unmatched.append(line) write_csv(matched_data_file, matched) print("Csv for the good data has been created successfully") write_csv(unmatched_data_file, unmatched) print("Csv for the bad data has been created successfully") if __name__ == "__main__": process(source_filename)
{ "domain": "codereview.stackexchange", "id": 16475, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, regex, lambda", "url": null }
quantum-field-theory, hilbert-space, klein-gordon-equation You have only considered positive-energy wavefunctions; you may wish to look at what happens when one or both are negative-energy instead, since the most general solution of the KG equation superposes both energy sectors. You should find for example that $\left( \psi^\ast,\,\psi^\ast \right)_{KG}=-\left( \psi,\,\psi \right)_{KG}$ for any solution $\psi$.
{ "domain": "physics.stackexchange", "id": 38977, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, hilbert-space, klein-gordon-equation", "url": null }
general-relativity, special-relativity, causality Title: Is simultaneity in SR only a pedagogical tool? In a very recent post here I learned that simultaneity has no meaning in general relativity; I can accept the answer and explanation that was given for that question. But then Harry Johnston replied to my comment saying that the concept of simultaneity also has no place in special relativity - that it's just a pedagogical tool. Is that right - simultaneity - what I take as two observers in separate reference frames not necessarily agreeing on when events happen, or even the order in which they happen - is just a pedagogical tool? To what end? How does that prepare them for understanding general relativity? I only have a cursory understanding of both SR and GR - but now I'm really confused. In most introductions to Special Relativity, students learn a special procedure for setting up coordinates which involves synchronizing clocks with light pulses. This leads to a natural definition of simultaneous events as events which occur at the same coordinate time. The notion of simultaneity basically stems from a preference for a particular set of coordinates. It's a fundamental principle in Relativity that the choice of coordinates is arbitrary (so long as you preserve the spacetime interval). The whole point of the subject is that there isn't a preferred reference frame. There is no coordinate-invariant notion of simultaneity, so it doesn't make sense to talk about it from this perspective, except to emphasize that it's not a thing.
{ "domain": "physics.stackexchange", "id": 22490, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, special-relativity, causality", "url": null }
# Clopen subsets of $A^\Bbb N$ for finite $A$ Let $A$ be a finite set with the discrete topology and let $X = A^\Bbb N$ be the product space. Let $$\pi_n:X\to A^n$$ be the projection map, i.e. $\pi_n(x_1,\dots,x_{n},x_{n+1},\dots) = (x_1,\dots,x_n)$. Each such map induces a partition of $X$ which we denote as $[x]_n := \pi_n^{-1}(\pi_n(x))$. Such sets $[x]_n$ are the cylinders of the product topology on $X$. As it has been pointed out here, for each $x$ and $n$ the cylinder $[x]_n$ is clopen in $X$, and hence so are finite unions of cylinders. My question is the following: is it true that clopen subsets of $X$ are exactly finite unions of cylinders - otherwise I am interested in a counterexample. Does the situation change in case $A$ is countable? I guess this question is also related, but I am not sure whether answers there apply directly here. - Yes. Let $\sigma \in A^{<\mathbb{N}}$, i.e. a finite-length string of elements of $A$. Let $[\sigma] = \{f \in A^{\mathbb{N}} : \sigma \preceq f\}$. $[\sigma]$ is a clopen set. So finite unions of these cylinder sets are clopen.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9711290922181331, "lm_q1q2_score": 0.8028438782480769, "lm_q2_score": 0.8267117983401363, "openwebmath_perplexity": 134.2976818626588, "openwebmath_score": 0.9721150994300842, "tags": null, "url": "http://math.stackexchange.com/questions/249943/clopen-subsets-of-a-bbb-n-for-finite-a" }
universe, cosmology, fate-of-universe Why is this interesting? Because Kant (that great philosopher) said that there were two transcendental elements, space and time. I will skip space, but must say something on time, because someone thinks it might end. Kant states that time is an a priori element of knowledge in pure form. Kant uses five reasons why time is universal and not empirical. They can all be brought back to the fact that we cannot perceive things/phenomena coexisting or occurring successively when there is no concept of time. Therefore, time is an a priori element of pure form. Now, what has this to do with the ending of the universe with respect to time? The theory you mentioned as heat death is such a transcendental theory. The theory states that the universe expands until it is too big to be heated by the energy provided by its matter. This theory models the universe as following the same rules as any thermodynamic system, and therefore at a certain point in time all energy, matter and temperature are evenly distributed in this huge universe. Because everything is evenly distributed, no stars will be created and all processes come to a grinding halt. Since the states before, now and after are the same, scientists determine this as the end of time. The formula-based theories also end up concluding that time will end. The problem with these theories is that time will never end. According to Kant, time is an a priori element of pure reason and will be there regardless of what we observe. That the state of the universe doesn't change doesn't mean that there will be no time. Time has just stopped. Theories that state that time will end do not comply with Kant's critique of pure reason, which states that time is the formal condition of all phenomena whatsoever.
{ "domain": "astronomy.stackexchange", "id": 1766, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "universe, cosmology, fate-of-universe", "url": null }
everyday-chemistry, photochemistry, applied-chemistry Title: Chemiluminescence - Determining Temperature from Luminol Imaging I'm working on an imaging project and I have no background in chemistry or chemiluminescence for that matter. This is a Computer Science project. While most of the details for it have been worked out, I'm having difficulty figuring out the scope and possibility of an important requirement. Given a luminol image, I will develop a heat-map. For now the heat map is built based upon the intensity of the light. However, given the intensity of the light and with the knowledge that luminol is being used, is it possible to determine the approximate temperature of the point based on its intensity and the intensity of the surrounding region? I would assume that this chemical would burn differently at various temperatures and would have a specific signature of light emitted depending on its temperature, but I'm having difficulty pinpointing whether this is true, possible and backed by research (or science). The closest parallel I've found is combustion analysis. The heat release rate is calculated by measuring the amount of light radiated from a flame at various wavelengths, since certain chemicals give off radiation at specific wavelengths. Can anyone confirm this, suggest alternatives or point me in the right direction? However, given the intensity of the light and with the knowledge that luminol is being used, is it possible to determine the approximate temperature of the point based on its intensity and the intensity of the surrounding region? Probably not! The energy released from the excited state of the 3-aminophthalate dianion formed in the oxidation of luminol (or rather its dianion) by hydrogen peroxide in alkaline solution is released in the form of light. There's little to no heat released in this chemiluminescent reaction.
{ "domain": "chemistry.stackexchange", "id": 2807, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "everyday-chemistry, photochemistry, applied-chemistry", "url": null }
python, tkinter #e1IgnoreCase = keywordEntry.get() if currentWorkingLib in notebook: note_var = notebook[currentWorkingLib] if e1Current in note_var: #tags_list=[r"(?:<<)",r"(?:>>)",r"(?:<)",r"(?:>)"] root.text.delete(1.0, "end-1c") root.text.insert("end-1c", note_var[e1Current]) root.text.see("end-1c") else: root.text.delete(1.0, "end-1c") root.text.insert("end-1c", "Not a Keyword") root.text.see("end-1c") else: root.text.delete(1.0, "end-1c") root.text.insert("end-1c", "No Library Selected") root.text.see("end-1c") #~~~~~~~~~~~~~~~~~~~< Preset Themes >~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ baseBGimage=PhotoImage(file="./Colors/pybgbase.png") bgLable = Label(root, image= baseBGimage) bgLable.place(x = 0, y = 0)
{ "domain": "codereview.stackexchange", "id": 25707, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, tkinter", "url": null }
python, algorithm, python-3.x, pathfinding All of this together can look like: from collections import namedtuple from dataclasses import dataclass, field from typing import * # Against best-practice but shhh import math Edge = namedtuple('Edge', 'distance node'.split()) class Node(namedtuple('Node', 'start end edges'.split())): def __str__(self): return f'{self.start} -> {self.end}' @dataclass(order=True) class Path: distance: int current: Node=field(compare=False) previous: Node=field(compare=False) @dataclass class Graph: nodes: List[Node] def shortest_paths(self, start: Node) -> Dict[Node, Path]: if start not in self.nodes: raise ValueError("Graph doesn't contain start node.") paths = {} queue = [] for node in self.nodes: path = Path(float('inf'), node, None) paths[node[:2]] = path queue.append(path) paths[start[:2]].distance = 0 queue.sort(reverse=True) while queue: node = queue.pop() for neighbor in node.current.edges: alt = node.distance + neighbor.distance path = paths[neighbor.node[:2]] if alt < path.distance: path.distance = alt path.previous = node queue.sort(reverse=True) return paths def shortest_path(self, start: Node, end: Node) -> List[Tuple[int, Node]]: if end not in self.nodes: raise ValueError("Graph doesn't contain end node.") paths = self.shortest_paths(start) node = paths[end[:2]] output = [] while node is not None: output.append((node.distance, node.current)) node = node.previous return list(reversed(output))
{ "domain": "codereview.stackexchange", "id": 34873, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, algorithm, python-3.x, pathfinding", "url": null }
meteorology, atmosphere-modelling, earth-rotation, fluid-dynamics, coriolis There is a centrifugal acceleration in those equations. What we call "gravity" in a lay sense is a combination of gravitational acceleration and centrifugal acceleration. In a technical sense, what we call "gravity" is the acceleration of a free-falling body at sea level as observed in a frame of reference fixed with respect to the rotating Earth. Centrifugal acceleration is baked into $g$. This means centrifugal acceleration is present in your equations. In particular, it is in your equation (3) as $g$. If one takes $g$ as a constant, $g_0=9.80665\,\text{m}/\text{s}^2$ (which may be a bad idea), that's the acceleration due to gravitation and centrifugal forces at roughly the latitude of Paris. What we call $g$ varies with latitude and with height above the ellipsoid, plus minor local perturbations. (Gravity near the Himalaya can get quite complex if one wants to be very precise.) A fairly simple approximation that accounts for latitude (but not height) is the Somigliana gravity formula,$$g = g_{\text{eq}}\frac{1+\kappa \sin^2\phi}{\sqrt{1-e^2\sin^2 \phi}}$$ where $g_{\text{eq}} = 9.7803267714\,\text{m}/\text{s}^2$ is the acceleration due to gravity (including centrifugal acceleration) at the equator, $\kappa = 0.00193185138639$, which reflects the observed difference between gravity at the equator versus the poles, $e^2=0.00669437999013$ is the square of the eccentricity of the figure of the Earth, and $\phi$ is the geodetic latitude.
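A short sketch evaluating the Somigliana formula at a few latitudes, using the constants quoted above:

```python
import math

# Constants quoted in the answer (Somigliana / WGS-84-style values).
G_EQ = 9.7803267714        # m/s^2, gravity (incl. centrifugal) at the equator
KAPPA = 0.00193185138639   # equator-vs-pole gravity difference parameter
E2 = 0.00669437999013      # eccentricity squared of the figure of the Earth

def somigliana_g(lat_deg):
    """Sea-level 'gravity' (gravitation + centrifugal) at geodetic latitude."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return G_EQ * (1 + KAPPA * s2) / math.sqrt(1 - E2 * s2)

for lat in (0, 45, 90):
    print(f"{lat:2d} deg: {somigliana_g(lat):.5f} m/s^2")
# ~9.78033 at the equator, ~9.80620 at 45 deg, ~9.83219 at the pole
```

Note how the standard value $g_0 = 9.80665\,\text{m}/\text{s}^2$ falls between the 45° and polar values, consistent with the "latitude of Paris" remark above.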
{ "domain": "earthscience.stackexchange", "id": 2563, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "meteorology, atmosphere-modelling, earth-rotation, fluid-dynamics, coriolis", "url": null }
complexity-theory, time-complexity, algorithm-analysis, asymptotics Title: Finding the Big-O and Big-Omega bounds of a program I am asked to select the bounding Big-O and Big-Omega functions of the following program: void recursiveFunction(int n) { if (n < 2) { return; } recursiveFunction(n - 1); recursiveFunction(n - 2); } From my understanding, this is a Fibonacci sequence, and according to this article here, https://www.geeksforgeeks.org/time-complexity-recursive-fibonacci-program/, the tight upper bound is $1.6180^n$. Thus, I chose all the Big-O bounds >= exp(n) and all the Big-Omega bounds <= exp(n). Below are the choices: O(1) O(log n) O(n) O(n^2) O(exp(n)) Om(1) Om(log n) Om(n) Om(n^2) Om(exp(n)) The answer choices I selected: O(n^2) O(exp(n)) Om(1) Om(log n) Om(n) Om(n^2) Om(exp(n))
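One way to see the $1.6180^n$ growth empirically is to count the calls the recursion makes (a hedged sketch; `count_calls` mirrors the structure of `recursiveFunction`):

```python
PHI = (1 + 5 ** 0.5) / 2   # the golden ratio, ~1.6180

def count_calls(n):
    """Total number of invocations made by recursiveFunction(n)."""
    if n < 2:
        return 1                     # the base case is still one call
    return 1 + count_calls(n - 1) + count_calls(n - 2)

# The ratio of successive call counts approaches phi, so the total work
# grows like phi**n -- exponential, matching the cited analysis.
ratios = [count_calls(n + 1) / count_calls(n) for n in range(15, 20)]
print(ratios)
```

Since $1.618^n$ dominates every polynomial, any polynomial such as $n^2$ can only be a lower bound here, never an upper bound.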
{ "domain": "cs.stackexchange", "id": 17550, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, time-complexity, algorithm-analysis, asymptotics", "url": null }
regular-languages, finite-automata, pumping-lemma Title: Which z should I pick? I'm currently trying to show that the language $L_2=\{0^n \text{ } | \text{ } n=2^k, k\geq 0\}$ is not regular by using the Pumping Lemma (at least I think it is not regular, because I couldn't find any regular expressions or DFA for it). I know all the steps that I need to go through, but I am having a very hard time figuring out which specific $z\in L_2$ I need to use. I tried using $z=0^{2n}=0^{2^{k+1}}$ and $z^{2^n}$, but I had no luck. Do you think I'm doing something wrong and using the wrong z's or are the above two okay to work with, but I'm just not comprehending it? We want to show that the language $L_2 = \{0^n: n=2^k, k \ge 0\}$ is nonregular. By way of contradiction, suppose $L_2$ is regular. Let $p$ be the pumping length. Let $z = 0^{2^p}$. Then, $z \in L_2$ and $|z| \ge p$. Let $z=abc$ be a partition of $z$ satisfying $|ab| \le p$, $|b| \ge 1$. Observe that $2^p < |abbc| \le 2^p + p < 2^p + 2^p = 2^{p+1}$ (since $p < 2^p$), whence $abbc \notin L_2$, a contradiction.
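The inequality $2^p < 2^p + |b| \le 2^p + p < 2^{p+1}$ guarantees that the pumped length is never a power of two; a small numerical sanity check of that claim (not part of the proof itself):

```python
def is_power_of_two(n):
    """True iff n is 2**k for some k >= 0."""
    return n > 0 and n & (n - 1) == 0

# Pumping z = 0**(2**p) once gives length 2**p + |b| with 1 <= |b| <= p,
# and no such length is a power of two.
for p in range(1, 30):
    for b_len in range(1, p + 1):
        assert not is_power_of_two(2 ** p + b_len)
print("no pumped length is a power of two for p < 30")
```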
{ "domain": "cs.stackexchange", "id": 20004, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "regular-languages, finite-automata, pumping-lemma", "url": null }
vba, excel getLoanMessageHTML = Join(list.ToArray, vbNewLine) End Function Function getTR(ParamArray TDValues() As Variant) Dim list As Object Set list = CreateObject("System.Collections.Arraylist") Dim Item As Variant list.Add Space(8) & "<tr>" For Each Item In TDValues list.Add Space(10) & "<td>" & Item & "</td>" Next list.Add Space(8) & "</tr>" getTR = Join(list.ToArray, vbNewLine) End Function
{ "domain": "codereview.stackexchange", "id": 36703, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel", "url": null }
general-relativity, black-holes, solar-system Title: If the Sun were to suddenly become a black hole of the same mass, what would the orbital periods of the planets be? I am interested in theoretical and practical considerations. It would be exactly the same (at least in the Newtonian picture): the gravitational field outside the Sun's original radius would not change. The easiest way to see this, I think, is to use the gravitational analogue of Gauss's law. Since we have spherical symmetry in both cases, $$\oint \mathbf{g} \cdot d\mathbf{A} = g \cdot 4\pi r^2 \propto M,$$ so the field strength $g$ at any given orbital radius is unchanged. See http://en.wikipedia.org/wiki/Gauss's_law
{ "domain": "physics.stackexchange", "id": 754, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, black-holes, solar-system", "url": null }
javascript, jquery, css, image, html5 Then the class name can be added via .addClass() if (i == 0) { $(slides[0]).addClass('active'); bullet.addClass('activeSlide'); That way the inner HTML is only specified once. Shuffle function return value unused There is no need to return slides at the end of shuffle(): return slides; This is because the return value is not assigned to anything (unless you intended for that to be the case): shuffle(slides); Rewrite See the modified code below. It doesn't use the queue or promises at all, and as far as I can tell maintains the same functionality. I also made previousIdx and activeIdx variables outside the functions instead of parameters. Because of this, I wrapped the whole thing in an IIFE to avoid adding those variables to the global scope. $(function() { var $elm = $('.slider'), $slidesContainer = $elm.find('.slides-container'), slides = $slidesContainer.children('a'), slidesCount = slides.length, slideHeight = $(slides[0]).find('img').outerHeight(false), animationspeed = 1500, animationInterval = 7000; var activeIdx = 0; var previousIdx = 0; // First slide var shuffle = function(slides) { var j, x, i; for (i = slides.length - 1; i > 0; i--) { j = Math.floor(Math.random() * (i + 1)); x = slides[i]; slides[i] = slides[j]; slides[j] = x; } return slides; } shuffle(slides); // Set (initial) z-index for each slide var setZindex = function() { for (var i = 0; i < slidesCount; i++) { $(slides[i]).css('z-index', slidesCount - i); } }; setZindex();
{ "domain": "codereview.stackexchange", "id": 31993, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery, css, image, html5", "url": null }
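For reference (not part of the review), the Fisher-Yates shuffle that the JavaScript above implements, sketched in Python:

```python
import random

def fisher_yates(items):
    """In-place Fisher-Yates shuffle: walk from the end, swapping each
    element with a uniformly chosen earlier (or same) position."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)  # 0 <= j <= i
        items[i], items[j] = items[j], items[i]
    return items  # returned for convenience; the shuffle happens in place

deck = list(range(10))
fisher_yates(deck)
print(sorted(deck))  # a permutation of the input: [0, 1, ..., 9]
```

As in the reviewed code, the return value is optional; the caller can simply ignore it.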
SVP
Joined: 26 Mar 2013
Posts: 1904
Re: (1/2 - 1/3) + (1/3 - 1/4) + (1/4 - 1/5) + (1/5 - 1/6) =  [#permalink]

### Show Tags

26 Jun 2018, 04:40
Bunuel wrote:
$$(\frac{1}{2} - \frac{1}{3}) + (\frac{1}{3} - \frac{1}{4}) + (\frac{1}{4} - \frac{1}{5}) + (\frac{1}{5} - \frac{1}{6}) =$$

A. $$-\frac{1}{6}$$

B. 0

C. $$\frac{1}{3}$$

D. $$\frac{1}{2}$$

E. $$\frac{2}{3}$$

NEW question from GMAT® Official Guide 2019 (PS05410)

After canceling all terms, we are left with:

$$\frac{1}{2} - \frac{1}{6} = \frac{1}{3}$$

Answer: C
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.920217986535907, "lm_q1q2_score": 0.8105253136435172, "lm_q2_score": 0.8807970779778824, "openwebmath_perplexity": 4055.4251447114902, "openwebmath_score": 0.646536648273468, "tags": null, "url": "https://gmatclub.com/forum/1-268963.html" }
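A quick check of the telescoping sum with exact rational arithmetic (not part of the original post):

```python
from fractions import Fraction

# The four terms (1/n - 1/(n+1)) for n = 2..5; everything cancels in pairs
# except the very first and very last pieces.
terms = [Fraction(1, n) - Fraction(1, n + 1) for n in range(2, 6)]
total = sum(terms)

assert total == Fraction(1, 2) - Fraction(1, 6)
print(total)  # 1/3
```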
catkin-make, catkin -- Configuring done -- Generating done -- Build files have been written to: /home/jan/roboweld_ws/build #### #### Running command: "make -j4 -l4" in "/home/jan/roboweld_ws/build" #### Scanning dependencies of target planner_OmplPlanner Scanning dependencies of target planner_CeresOpt Scanning dependencies of target planner_graph_plotter Scanning dependencies of target planner_utils [ 5%] Building CXX object roboweld/planner/CMakeFiles/planner_CeresOpt.dir/src/planner/CeresOpt.cpp.o [ 11%] Building CXX object roboweld/planner/CMakeFiles/planner_OmplPlanner.dir/src/planner/OmplPlanner.cpp.o [ 16%] Building CXX object roboweld/planner/CMakeFiles/planner_utils.dir/src/planner/utils.cpp.o [ 22%] Building CXX object roboweld/planner/CMakeFiles/planner_graph_plotter.dir/src/planner/graph_plotter.cpp.o In file included from /home/jan/roboweld_ws/src/roboweld/planner/src/planner/graph_plotter.cpp:6: /home/jan/roboweld_ws/src/roboweld/planner/include/planner/matplotlibcpp.h:5:10: fatal error: Python.h: No such file or directory 5 | #include <Python.h> | ^~~~~~~~~~ compilation terminated. make[2]: *** [roboweld/planner/CMakeFiles/planner_graph_plotter.dir/build.make:63: roboweld/planner/CMakeFiles/planner_graph_plotter.dir/src/planner/graph_plotter.cpp.o] Error 1
{ "domain": "robotics.stackexchange", "id": 36711, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "catkin-make, catkin", "url": null }
orbital-mechanics, star-systems Note that the equation for the center of mass of two bodies of masses $m_1$ and $m_2$ and positions $\mathbf{x}_1$ and $\mathbf{x}_2$ is $$\mathbf{x}_{\text{cm}}=\frac{m_1\mathbf{x}_1+m_2\mathbf{x}_2}{m_1+m_2}$$ and if $m_1\ll m_2$, $$\mathbf{x}_{\text{cm}}=\frac{m_1\mathbf{x}_1}{m_1+m_2}+\frac{m_2\mathbf{x}_2}{m_1+m_2}\approx\frac{m_1}{m_2}\mathbf{x}_1+\frac{m_2}{m_2}\mathbf{x}_2\approx\mathbf{x}_2$$
{ "domain": "astronomy.stackexchange", "id": 2983, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "orbital-mechanics, star-systems", "url": null }
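A numeric sketch (not in the original answer) using rough Sun-Earth values to show how good the approximation x_cm ≈ x_2 is when m_1 ≪ m_2:

```python
# Sun-Earth barycenter, one dimension, Sun at the origin.
m1 = 5.972e24        # Earth mass, kg
m2 = 1.989e30        # Sun mass, kg
x1 = 1.496e11        # Earth position, m
x2 = 0.0             # Sun position, m

x_cm = (m1 * x1 + m2 * x2) / (m1 + m2)
print(x_cm)  # ~4.5e5 m: deep inside the Sun (radius ~7e8 m), so x_cm ~ x2
```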
python, ignition-fortress Title: Trying to start gazebo fortress through pybind, undefined symbol - destructor Since the python bindings for Fortress are quite limited at the moment, I'm trying to write my own for my use case, which is to start a gazebo server, step forward one time-step at a time, pass instructions to my robot and return the state. (For a reinforcement learning project) When I try to import my module, I'm getting "undefined symbol: _ZN8ignition6gazebo2v66ServerD1Ev" I think this due to an issue linking the library, so I'm looking for the library file but I can't find it. I've installed Gazebo Fortress following the install with ros documentation https://gazebosim.org/docs/fortress/ros_installation. I'm on ubuntu 22.04. Where are the library files installed? Do I need to install from source or using a dev version? -- edit -- Ok so I think that symbol that it can't find is the destructor for Server. So I'm not sure it's a problem with getting the library files anymore. It looks like the destructor is defined fine in both Server.hh and in Server.cc when I got the source code. Anyone know why pybind can't find it? -- further edit -- I'm still getting this error for other functions on Server and ServerConfig that are declared in the headers but not given an implementation there, so I'm back to thinking it can't find the libraries to link... Originally posted by Lestes on Gazebo Answers with karma: 3 on 2023-05-26 Post score: 0 Ok Turns out I was just missing a configuration in my setup.py. I had from glob import glob from setuptools import setup from pybind11.setup_helpers import Pybind11Extension, build_ext ext_modules = [Pybind11Extension( "gazebo_rl_sim", sorted(glob("src/*.cpp")) )]
{ "domain": "robotics.stackexchange", "id": 4703, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, ignition-fortress", "url": null }
homework-and-exercises, thermodynamics, temperature, entropy For further reference: supposing that the system and the environment are exchanging energy with each other via heat but are otherwise isolated from their environments, we can write $$\delta Q_{\textrm{env}} = -\delta Q_{\textrm{sys}},$$ in which case $$dS_{\textrm{env}} = \frac{\delta Q_{\textrm{env}}}{T_{\textrm{env}}} = -\frac{\delta Q_{\textrm{sys}}}{T_{\textrm{env}}} \geq -\frac{\delta Q_{\textrm{sys}}}{T_{\textrm{sys}}}. $$ Let's prove the inequality:
{ "domain": "physics.stackexchange", "id": 31645, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, thermodynamics, temperature, entropy", "url": null }
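The excerpt ends just before the proof; the standard argument (my reconstruction of how the text presumably continues, not the original wording) is:

```latex
% Case 1: \delta Q_sys > 0. Heat flows into the system, so the environment
% must be at least as hot: T_env >= T_sys, hence 1/T_env <= 1/T_sys, and
% multiplying by -\delta Q_sys < 0 flips the inequality:
%   -\delta Q_sys / T_env >= -\delta Q_sys / T_sys.
% Case 2: \delta Q_sys < 0. Heat flows out, so T_env <= T_sys, hence
% 1/T_env >= 1/T_sys, and multiplying by the now-positive -\delta Q_sys
% preserves the inequality. Either way,
\[
  -\frac{\delta Q_{\textrm{sys}}}{T_{\textrm{env}}}
  \;\geq\;
  -\frac{\delta Q_{\textrm{sys}}}{T_{\textrm{sys}}},
\]
% which is the claimed lower bound on dS_env.
```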
java, strings, reinventing-the-wheel, integer String digits = "0123456789"; int remainder =0; int result = number ; --length; number = length; String strSeq = ""; String valStr = ""; // loop through the whole integer digit by digit //use modulo operator get the remainder //save it in remainder. then concatenate valStr //returned from returnDigitString() //method with previous String of Digits. Divide the result by 10. Again //repeat the same process. this time the modulo and the //number to be divided will be one digit less at each decremental //iteration of the loop. for(int i = number; i >= 0; --i) { remainder = result % 10; valStr = returnDigitString(remainder); strSeq = valStr + strSeq; result = result / 10; } //Print the string version of the integer System.out.println("The String conversion of " + input + " is: " + strSeq); } } Things I like about your code The idea to calculate the length of a number with the logarithm is really good! In my opinion you are writing good comments. Good variable names Works as intended, without (to my knowledge) any bugs. Criticism returnDigitString() It is considered bad practice to put more than one command into one line. So please make line breaks after every ";". Your solution is pretty long (over 30 lines) in comparison to the complexity of the problem. You could also have done something like that: public static String returnDigitString(int digit) { String res = ""; String[] digits = {"0", "1", "2", "3", "4", "5", "6", "7", "8", "9"}; for(int i = 0; i <= 9; i++) { if(digit == i) { res += digits[i]; break; } } return res; } main()
{ "domain": "codereview.stackexchange", "id": 38102, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, strings, reinventing-the-wheel, integer", "url": null }
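For comparison (not part of the review), the same repeated-division algorithm sketched in Python, where indexing into the digit string replaces the suggested lookup loop entirely:

```python
DIGITS = "0123456789"

def int_to_string(number: int) -> str:
    """Repeated division by 10, building the string right to left,
    mirroring the reviewed Java code's modulo/divide loop."""
    if number == 0:
        return "0"
    sign = "-" if number < 0 else ""
    number = abs(number)
    out = ""
    while number > 0:
        out = DIGITS[number % 10] + out  # direct lookup, no inner loop needed
        number //= 10
    return sign + out

print(int_to_string(4096))  # '4096'
print(int_to_string(-17))   # '-17'
```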
ros, kinect, rosbag, load, openni-launch Title: What is the API to generate a registered point cloud from raw kinect streams As specified on the openni_launch web page, I have been recording live Kinect data by recording the following 4 topics with rosbag record camera/depth_registered/image_raw camera/depth_registered/camera_info camera/rgb/image_raw camera/rgb/camera_info /tf To generate registered pcl::PointXYZRGB point cloud topics from this bag file, I have been playing the bag file at the same time as: roslaunch openni_launch _load_driver:=false I now need to read the bag file in my source code using the rosbag python API. However, I do not know how to produce the registered pcl::PointXYZRGB point cloud from these recorded topics in code. Is there a library function that takes in these 5 image topics and outputs the registered pcl::PointXYZRGB point cloud? Many Thanks Originally posted by Arrakis on ROS Answers with karma: 163 on 2012-09-11 Post score: 2 There is a script provided with the RGBDSLAM benchmark data set from Freiburg that adds a pointcloud to a bagfile given these topics. Take a look here http://vision.in.tum.de/data/datasets/rgbd-dataset/tools#adding_point_clouds_to_ros_bag_files Take a look at the script 'add_pointclouds_to_bagfile.py'. You can either use it on your bagfile or look at the code. Note if you want to use it you will need to change the topic names in the script, as the script was written for the electric openni drivers. EDIT: Sorry I didn't see that you had recorded the raw images. I don't know how to rectify these images, however recording /camera/depth_registered/image and /camera/rgb/image_rect_color instead of the raw images would solve this. Originally posted by RossK with karma: 141 on 2012-09-12 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 10985, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, kinect, rosbag, load, openni-launch", "url": null }
python, object-oriented def _starport_default(self): return roll('2d6') + self._STARPORT_ADJUST[self.population] _TECH_LEVEL_ADJUST = [ # attribute, {value:adjustment} ('size', {0:2, 1:2, 2:1, 3:1, 4:1}), ('atmosphere', {0:1, 1:1, 2:1, 3:1, 10:1, 11:1, 12:1, 13:1, 14:1, 15:1}), ('hydrographics', {0:1, 9:1, 10:2}), ('population', {1:1, 2:1, 3:1, 4:1, 5:1, 8:1, 9:2, 10:4}), ('government', {0:1, 5:1, 7:2, 13:-2, 14:-2}), ('starport', {0:-4, 1:-4, 2:-4, 7:2, 8:2, 9:4, 10:4, 11:6, 12:6, 13:6, 14:6, 15:6}), ] def _tech_level_default(self): if self.population == 0: return 0 tech_level = roll('1d6') for attribute, adjustment in self._TECH_LEVEL_ADJUST: tech_level += adjustment.get(getattr(self, attribute), 0) return tech_level
{ "domain": "codereview.stackexchange", "id": 29210, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, object-oriented", "url": null }
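A self-contained sketch of the table-driven adjustment pattern that `_tech_level_default` uses above; the table here is truncated to two attributes for brevity, and the world dict stands in for the original class's attributes:

```python
import random

# (attribute name, {attribute value: tech-level adjustment}), as in the source.
TECH_LEVEL_ADJUST = [
    ('size',       {0: 2, 1: 2, 2: 1, 3: 1, 4: 1}),
    ('population', {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 8: 1, 9: 2, 10: 4}),
]

def tech_level(world, rng=random):
    level = rng.randint(1, 6)  # 1d6
    for attribute, adjustment in TECH_LEVEL_ADJUST:
        # .get(..., 0): attribute values absent from the table contribute nothing
        level += adjustment.get(world[attribute], 0)
    return level

world = {'size': 2, 'population': 9}
print(tech_level(world))  # 1d6 + 1 (size) + 2 (population): between 4 and 9
```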
c#, .net, cryptography Okay, so keeping this specific code in mind: Depending on how it's stored (in a database at rest?) it might be good to also store the other parameters, like number of iterations (then again, the lengths could just also be stored, it's not a lot of data after all), as part of the data package, that way the parameters can be later increased without having to reprocess all stored data. The exception should probably at least be logged so someone can check what's up in production. new RNGCryptoServiceProvider() gets run too often, should be enough to keep one instance around? The password check is just to make sure there was no coding error, right? Otherwise the minimum length should be a bit more, plus some dictionary checks etc. would be required. But then again, where's this password coming from, is it user-chosen or the global password for the whole database? 1000 iterations seems low, but I also haven't (can't) checked what the actual run time of it is. To prevent attacks based on that it should probably be high enough to be noticeable when doing the password derivation. In any case there's a few better functions for this as mentioned in the linked documents.
{ "domain": "codereview.stackexchange", "id": 36312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, cryptography", "url": null }
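The suggestion to store the KDF parameters alongside the derived key can be sketched with Python's standard library (this mirrors, rather than reproduces, the reviewed C# code; the iteration count is illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> dict:
    """Store the salt and iteration count with the derived key, so the
    parameters can be raised later without reprocessing existing records."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return {'salt': salt, 'iterations': iterations, 'key': key}

def verify_password(password: str, record: dict) -> bool:
    key = hashlib.pbkdf2_hmac('sha256', password.encode(),
                              record['salt'], record['iterations'])
    return hmac.compare_digest(key, record['key'])  # constant-time compare

record = hash_password('correct horse battery staple')
print(verify_password('correct horse battery staple', record))  # True
print(verify_password('wrong guess', record))                   # False
```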
php, trait $create->setContent($user); $user = $create->execute(); So is it a good approach to use traits in this case? Not every endpoint requires parameters, headers and content. Your class names are a little generic, so it is hard for me to get a feel for your intent here. Create seems like an odd name for a class. Maybe UserFactory, UserProvider or similar. If Create is ultimately intended to be a more generic factory type of class that could create any sort of entity depending on what endpoint is used, then perhaps a name like EntityFactory or similar would be more appropriate. Try to name classes, methods, etc. to clearly indicate their purpose. If you are getting advanced enough in your coding to start considering using traits, then you are advanced enough to write proper Doc blocks. Sorry, not trying to shame here, but this is generally well thought out code compared to many samples on this site. I want to see you take it to the next level in terms of being closer to production-ready code in a professional environment. Consider implementing concrete methods (which could be overridden) in your abstract class. You are repeating yourself a lot in the extending classes by implementing the same getters. So do you have a second use case where you are going to use that trait? It probably doesn't make sense to make a trait unless you have other use cases where you need to consume it. For this single use case you show here, I don't know why this functionality would not just be part of the base class. From a coding style standpoint, you are doing a few things that are generally frowned upon. You are putting comments out to the right of code instead of above the code they are applicable to, and you have several lines that go beyond the typical "best practice" of keeping lines of code to an 80 character limit. These make code harder to read. Some more specific comments follow: const API_DOMAIN = 'https://test-api.com/v1/'; Why is this a constant on this class? 
If you want to make this class more flexible, perhaps this value should be passed via the constructor or set through a setter. If this is a very narrow-use class, at least consider moving this to app configuration.
{ "domain": "codereview.stackexchange", "id": 22221, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, trait", "url": null }
c#, design-patterns, unit-testing When I tried to test the logic of this method It failed because it uses an external resource at the line of adding new attachments where it cannot find the related paths on the Hard Drive, what do you people think about it? Implementation concerns should be encapsulated behind abstractions that avoid tight coupling to external dependencies. In this case, when you were testing, the Attachment will try to read the file at the provided path. Since those paths may not exist when testing, you should consider refactoring the current design. Provide an abstraction that would allow the attachment stream to be read in isolation without any adverse behavior. public interface IFileInfo { string Name { get; } string PhysicalPath { get; } Stream CreateReadStream(); } Here is a simple implementation that can be used at run-time public class AttachmentInfo : IFileInfo { private readonly FileInfo innerFile; public AttachmentInfo(string path) { innerFile = new FileInfo(path); } public string Name => innerFile.Name; public string PhysicalPath => innerFile.FullName; public Stream CreateReadStream() => innerFile.OpenRead(); } The email notification can be refactored to use the abstraction for attachments public interface IEmailNotification : INotification { string From { get; } string Subject { get; } bool IsBodyHtml { get; } string CC { get; } string BCC { get; } string ReplyToList { get; } List<IFileInfo> Attachments { get; } } Resulting in the factory method to become public class EmailMailMessageFactory : MailMessageFactory { public EmailMailMessageFactory(string backupBccEmail) : base(backupBccEmail) { } public override MailMessage CreateMailMessage(IEmailNotification emailNotification) { var mailMessage = new MailMessage { From = new MailAddress(emailNotification.From), Subject = emailNotification.Subject, Body = emailNotification.Body, IsBodyHtml = emailNotification.IsBodyHtml }; mailMessage.To.Add(emailNotification.To);
{ "domain": "codereview.stackexchange", "id": 35255, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, design-patterns, unit-testing", "url": null }
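The seam the review describes (reading attachment bytes without touching the filesystem) can be sketched in Python; the class and member names below are illustrative, not taken from the original C#:

```python
import io

class FakeFileInfo:
    """Test double for the IFileInfo idea: in-memory content, no disk access,
    so tests never fail because a path does not exist on the machine."""
    def __init__(self, name: str, content: bytes):
        self.name = name
        self.physical_path = '/fake/' + name
        self._content = content

    def create_read_stream(self):
        return io.BytesIO(self._content)

attachment = FakeFileInfo('report.pdf', b'%PDF-1.4 ...')
with attachment.create_read_stream() as stream:
    print(stream.read(4))  # b'%PDF'
```

At run time a real implementation would wrap an actual file, exactly as the C# AttachmentInfo wraps FileInfo.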
renormalization, quantum-electrodynamics, gauge-theory, unitarity, ward-identity 2) What about scenarios where one doesn't perform the polarization sum? I thought that the polarization sum was only performed when the detector is insensitive to polarization, which is not always the case at hand. I would have thought that one could show that non-transverse polarization are unphysical without having to do a polarization sum. For example, given $\mathcal{M}^\mu$ I would have thought that we could have contracted this with one of the non-traverse polarization vectors, say $\alpha_2^\mu$, and we should find that the amplitude for this process vanishes by itself. 3) Shouldn't we be showing that we can also ignore non-transverse states in all parts of the diagram, not just on external legs? If non-transverse states can run in loops then they need to be included in physical initial and final states, due to the optical theorem, which needs to be avoided. Or are P&S claiming that they've proved that they've shown that non-transverse polarizations can be ignored in the initial and final states, so by the optical theorem they can be ignored in loops, too? (1) The completeness relationship for a basis of vectors orthonormal with respect to $\eta_{\mu\nu}$ is \begin{equation} \eta_{ij}\epsilon^{(i)}_\mu \epsilon^{(j)}_\nu = \eta_{\mu\nu} \end{equation} This normalization convention is picked for Lorentz invariance... I know you said you didn't want that answer but the point is that the normalization of these vectors is a matter of convention and it's best to pick a Lorentz invariant one. One advantage of choosing a L.I. normalization is that we don't need to specify the argument: the $\epsilon$ depend on the momentum, but these normalization conditions do not. The $\eta_{ij}$ provides the minus sign you are missing. Also here you see the basic problem that the gauge symmetry fixes: one of the polarization vectors necessarily has a negative norm.
{ "domain": "physics.stackexchange", "id": 9010, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "renormalization, quantum-electrodynamics, gauge-theory, unitarity, ward-identity", "url": null }
quantum-mechanics, electromagnetism, quantum-field-theory As far as this question is concerned, that's really the only difference between the electron case and the photon case. That's enough of a difference to prevent us from constructing a model for photons that is analogous to non-relativistic quantum mechanics for electrons, but it's not enough of a difference to prevent photon-detection observables from being both localized and reliable for most practical purposes. The larger we allow its localization region to be, the more reliable (less noisy) a photon detector can be. Our definition of how-good-is-good-enough needs to be based on something else besides QEM itself, because QEM doesn't have any characteristic length-scale of its own. That's not an obstacle to having relatively well-localized photon-observables in practice, because there's more to the real world than QEM. Position operators What is a position operator? Nothing that I said above refers to such a thing. Instead, everything I said above was expressed in terms of observables that represent particle detectors (or counters). I did that because the starting point was relativistic QFT, and QFT is expressed in terms of observables that are localized in bounded regions.
{ "domain": "physics.stackexchange", "id": 73359, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, electromagnetism, quantum-field-theory", "url": null }
I tried using even[ff, x_] := (ff[x] + ff[-x])/2 But that doesn't seem to work. What would the correct approach to this be? • even[f_[x_]] := (f[x] + f[-x])/2 and odd[f_[x_]] := (f[x] - f[-x])/2 Feb 18, 2014 at 4:38 • or like this: even[f_] := (f[#] + f[-#])/2 &; odd[f_] := (f[#] - f[-#])/2 & Feb 18, 2014 at 4:39 • Neither approach is working for me at the moment. In the first approach mathematica doesn't evaluate anything, just leaving the even function with whatever its argument is, and in the second approach I have a lot of #'s that are left unevaluated as well. – R R Feb 18, 2014 at 4:46 • When I boot mathematica fresh and enter it in this is what I get:In[28]:= ClearAll[f, ff, even, x] In[29]:= even[f_[x_]] := (f[x] + f[-x])/2 In[30]:= f[x_] := x^2 + x In[35]:= even[f] even[f[x]] even[f[x_]] even[f[#]] Out[35]= even[f] Out[36]= even[x + x^2] Out[37]= even[x_ + x_^2] Out[38]= even[#1 + #1^2] – R R Feb 18, 2014 at 5:12 • Did you SetAttributes like I did? Feb 18, 2014 at 5:23 SetAttributes[{even, odd}, HoldAll]; even[f_[x_]] := (f[x] + f[-x])/2; odd[f_[x_]] := (f[x] - f[-x])/2; Usage
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9465966747198242, "lm_q1q2_score": 0.8353032738219968, "lm_q2_score": 0.8824278556326344, "openwebmath_perplexity": 7118.2226918373235, "openwebmath_score": 0.32384538650512695, "tags": null, "url": "https://mathematica.stackexchange.com/questions/42478/define-a-function-with-a-function-for-an-input" }
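For comparison (not part of the thread), the same even/odd decomposition written in Python, where closures sidestep the evaluation-order issues the Mathematica version runs into:

```python
def even_part(f):
    """Even part of f: (f(x) + f(-x)) / 2."""
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    """Odd part of f: (f(x) - f(-x)) / 2."""
    return lambda x: (f(x) - f(-x)) / 2

f = lambda x: x**2 + x   # the thread's example f(x) = x^2 + x
print(even_part(f)(3.0))  # 9.0  (the x^2 piece)
print(odd_part(f)(3.0))   # 3.0  (the x piece)
```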
homework-and-exercises, cosmology, space-expansion, integration, cosmological-inflation So, how do I now get to the above solution from the current step I'm stuck on? You have $$\ln a + C_1 = -\frac{1}{4}\phi^2 + C_2$$ which gives you $$ \ln a(t) = (C_2 - C_1) - \frac{1}{4} \phi(t)^2 $$ in general. At some initial time $t_i$, let $a(t) = a_i$ and $\phi(t_i) = \phi_i$. This means that at time $t_i$, $$\ln a_i = (C_2 - C_1) - \frac{1}{4}\phi_i^2$$ i.e. $$C_2 - C_1 = \ln a_i + \frac{1}{4}\phi_i^2.$$ Plug this into $$ \ln a(t) = (C_2 - C_1) - \frac{1}{4} \phi(t)^2 $$ to get $$ \ln a(t) = \ln a_i + \frac{1}{4}\phi_i^2 - \frac{1}{4} \phi(t)^2 $$ which gives you $$a(t) = a_i\exp(\frac{1}{4}(\phi_i^2 - \phi(t)^2))$$ which is the expression you want. You just need to solve for the constants.
{ "domain": "physics.stackexchange", "id": 28803, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, cosmology, space-expansion, integration, cosmological-inflation", "url": null }
tensor-calculus, linear-algebra So let's multiply the expression for $v^i$ above by this on the right: $$ \begin{align} v^i\left(T^{-1}\right)_i{}^k &= v'^j T_j{}^i\left(T^{-1}\right)_i{}^k\\ &= v'^j\delta_j{}^k\\ &= v'^k \end{align} $$ or (big fanfare, and renaming indices gratuitously) $$v'^i = v^j \left(T^{-1}\right)_j{}^i$$ And finally, we want to make this look like matrix multiplication on the left, so we need to diddle the indices of $T$: $$v'^i = \left(\left(T^{-1}\right)^T\right)^i{}_j v^j$$ And we're done.
{ "domain": "physics.stackexchange", "id": 30745, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "tensor-calculus, linear-algebra", "url": null }
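A quick numeric check (not in the original answer) of the derived rule v'^i = ((T^{-1})^T)^i_j v^j, with a hand-rolled 2x2 inverse so the example stays self-contained:

```python
def mat_inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

T = [[2.0, 1.0], [1.0, 1.0]]   # an invertible transformation (det = 1)
v = [3.0, 5.0]                 # components in the unprimed system

# The derived rule: v'^i = ((T^-1)^T)^i_j v^j
v_prime = matvec(transpose(mat_inv2(T)), v)

# Consistency with the starting relation v^i = v'^j T_j^i, i.e. v = T^T v'
v_back = matvec(transpose(T), v_prime)
print(v_back)  # [3.0, 5.0] -- we recover the original components
```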
sql, datetime, t-sql where t.FiscalDayOfWeek = 7 -- week ending dates group by t.FiscalYear ,t.FiscalQuarterOfYear ) ,tLastYearSameWeek as ( select t.FiscalYear LYTW_FiscalYear ,t.FiscalQuarterOfYear LYTW_FiscalQuarterOfYear ,t.FiscalMonthOfYear LYTW_FiscalMonthOfYear ,t.FiscalWeekOfYear LYTW_FiscalWeekOfYear ,t.CalendarDate LYTW_WeekEndingDate from dwd.FiscalCalendars t inner join tToday on t.FiscalWeekOfYear = tToday.FiscalWeekOfYear and t.FiscalYear = tToday.FiscalYear - 1 where t.FiscalDayOfWeek = 7 -- week ending dates ) ,tLastYearSameMonth as ( select t.FiscalYear LYTM_FiscalYear ,t.FiscalQuarterOfYear LYTM_FiscalQuarterOfYear ,t.FiscalMonthOfYear LYTM_FiscalMonthOfYear ,m.NameEN LYTM_FiscalMonthNameEN ,m.NameFR LYTM_FiscalMonthNameFR ,max(t.CalendarDate) LYTM_MonthEndingDate from dwd.FiscalCalendars t inner join dbo.MonthNames m on t.FiscalMonthOfYear = m.FiscalMonthOfYear inner join tLastYearSameWeek lw on t.FiscalMonthOfYear = lw.LYTW_FiscalMonthOfYear and t.FiscalYear = lw.LYTW_FiscalYear where t.FiscalDayOfWeek = 7 -- week ending dates group by t.FiscalYear ,t.FiscalQuarterOfYear ,t.FiscalMonthOfYear ,m.NameEN ,m.NameFR ) ,tLastYearSameQuarter as ( select t.FiscalYear LYTQ_FiscalYear ,t.FiscalQuarterOfYear LYTQ_FiscalQuarterOfYear ,max(t.CalendarDate) LYTQ_QuarterEndingDate from dwd.FiscalCalendars t inner join tLastYearSameMonth tm on t.FiscalQuarterOfYear = tm.LYTM_FiscalQuarterOfYear and t.FiscalYear = tm.LYTM_FiscalYear where
{ "domain": "codereview.stackexchange", "id": 17226, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sql, datetime, t-sql", "url": null }
mechanical-engineering, fluid-mechanics, dynamics I tried by finding out the torque due to viscous forces which is $$\tau_1=\mu(2\pi Rh)\frac{R\omega}{a}$$ and the torque due to mass m1 as $$\tau_2=m_1gR$$ Writing into equation $$\tau_2- \tau_1=m_2R^2 \frac{d\omega}{dt}$$ integrating this and using boundary condition $\omega=0$ at $t=0$ I got $$\omega=\frac{m_1ga}{2\pi Rh\mu}[1-exp(\frac{-2\pi \mu htR}{am_2})]$$ . However I am missing $m_1+m_2$ instead of $m_2$ in the exponential part.Any ideas? Thanks. You didn't account for the acceleration of $m_1$. Setting up a free body diagram on the weight shows: $$m_1g - T = m_1a_y$$ Where $T$ is the tension in the rope, $a_y$ is the acceleration of the block. This leads to the following corrections: $$\tau_1=\mu(2\pi Rh)\frac{R\omega}{a}*R$$ (The original had a value for force, whereas we need a torque.) This is because the entire viscous force would operate at a distance R from the origin. $$\tau_2=TR = (m_1g-m_1a_y)R=m_1gR-m_1R^2\frac{d\omega}{dt}$$ To account for the tension in the rope properly, the acceleration of $m_1$ must be considered. Finally: $$\tau_2- \tau_1=m_2R^2 \frac{d\omega}{dt}$$
{ "domain": "engineering.stackexchange", "id": 2265, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mechanical-engineering, fluid-mechanics, dynamics", "url": null }
# What is the VC dimension of the hypothesis class $H=\left\{f_{\theta_{1}, \theta_{2}}: R^{2} \rightarrow\{0,1\} \mid 0<\theta_{1}<\theta_{2}\right\}$? I would like to know what is the VC dimension of the following hypothesis class. $$H=\left\{f_{\theta_{1}, \theta_{2}}: R^{2} \rightarrow\{0,1\} \mid 0<\theta_{1}<\theta_{2}\right\}$$ where $$f_{\theta_{1}, \theta_{2}}(x, y)=1$$ if $$\theta_{1} x \leqslant y \leqslant \theta_{2} x,$$ else $$f_{\theta_{1}, \theta_{2}}(x, y)=0$$. I am not really sure how to prove it. What do you think? The VC-dimension of your hypothesis class $$\mathcal H$$ is 2. To see this, we begin by showing that $$\mathcal H$$ shatters any 2-element set $$\{(a_1 a_2), (b_1, b_2)\}$$ of real numbers where all components of the pairs are positive:
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9883127420043055, "lm_q1q2_score": 0.8721147073724698, "lm_q2_score": 0.8824278695464501, "openwebmath_perplexity": 113.22868080510965, "openwebmath_score": 0.9148173928260803, "tags": null, "url": "https://cs.stackexchange.com/questions/127185/what-is-the-vc-dimension-of-the-hypothesis-class-h-left-f-theta-1-theta/127202" }
# Explanation for Additive Property of Variance? I'm wondering why variance has an additive property, and why this property doesn't extend to standard deviation? The additive property is defined as: Var(A+B) = Var(A) + Var(B) I imagine this as adding two distributions together, which makes sense. But in that case SD should have a similar property as well. Why does variance possess this magical property? • This is only the case if $A$ and $B$ are uncorrelated random variables. If this holds, then $\text{Sd}(A+B)=\sqrt{\text{Var}(A)+\text{Var}(B)}$, which doesn't equal $\text{Sd}(A)+\text{Sd}(B)$ simply because $\sqrt{a+b}\ne \sqrt a+\sqrt b$ in general. Feb 3, 2019 at 20:26 • @StubbornAtom you should make it an answer. – Tim Feb 3, 2019 at 22:17 • @StubbornAtom I see. Why is it Var (A+B) = Var(A)+Var(B) instead of Sd(A+B)² = (Sd(A)+Sd(B))². Of course I understand (Sd(A)+Sd(B))² ≠ Sd(A)² + Sd(B)². In my own uninitiated terms, what is the magical property of variance that standard deviation does not possess? Feb 5, 2019 at 15:34 • @FudgeAruth No magic. By definition, $\text{var}(A+B)=E(A+B-E(A+B))^2=E[(A+B)^2]-[E(A+B)]^2$. Now use the linearity of expectation to arrive at $\text{var}(A+B)=\text{var}(A)+\text{var}(B)+2\text{cov}(A,B)$. More details here. Feb 5, 2019 at 15:45 It doesn't! In general: Var(A+B) = Var(A) + Var(B) + 2 Cov(A, B)
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9759464474553782, "lm_q1q2_score": 0.8046306817953631, "lm_q2_score": 0.8244619199068831, "openwebmath_perplexity": 1037.163406664837, "openwebmath_score": 0.6246182918548584, "tags": null, "url": "https://stats.stackexchange.com/questions/390609/explanation-for-additive-property-of-variance" }
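A numeric illustration (not from the original thread) of both points: for independent variables the variances add, while the standard deviations do not:

```python
import random
import statistics as st

random.seed(1)
n = 100_000
A = [random.gauss(0, 1) for _ in range(n)]   # Var(A) = 1
B = [random.gauss(0, 2) for _ in range(n)]   # Var(B) = 4, independent of A
S = [a + b for a, b in zip(A, B)]

print(st.pvariance(S))   # ~5.0 = Var(A) + Var(B)
print(st.pstdev(S))      # ~2.24 = sqrt(5)
print(st.pstdev(A) + st.pstdev(B))  # ~3.0: standard deviations do NOT add
```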
symmetry, coordinate-systems, noethers-theorem If $\Gamma$ is a manifold, coordinate transformations do not act as active transformations on $\Gamma$ (they are the identity map!) thus they are not dynamical symmetries by definition. However you may sometimes interpret them as active transformations (since $s$ is bijective, regular, and thus transforms coordinate systems into coordinate systems) referring to suitable classes of coordinates, the ones connected by symmetries! With respect to such a pair of coordinate systems, the studied symmetry in coordinates looks like the identity map. This coordinate-minded approach is not the right starting point to understand these notions however, at least for beginners, in my view.
{ "domain": "physics.stackexchange", "id": 33502, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "symmetry, coordinate-systems, noethers-theorem", "url": null }
2. ⌨ For each function in Exercise 1, make side-by-side surface plots of $$f_x$$ and $$f_y$$ using Chebyshev spectral differentiation. 3. ⌨ For each function in Exercise 1, make a contour plot of the mixed derivative $$f_{xy}$$ using Chebyshev spectral differentiation. 4. ⌨ In each case, make a plot of the function given in polar or Cartesian coordinates over the unit disk. (a) $$f(r,\theta) = r^2 - 2r\cos \theta$$ (b) $$f(r,\theta) = e^{-10r^2}$$ (c) $$f(x,y) = xy - 2 \sin (x)$$ 5. ⌨ Plot $$f(x,y,z)=x y - x z - y z$$ as a function on the unit sphere. 6. ⌨ Plot $$f(x,y,z)=x y - x z - y z$$ as a function on the cylinder $$r=1$$ for $$-1\le z \le 2$$.
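These exercises presuppose a Chebyshev differentiation matrix. As a reference point, here is a pure-Python sketch of the standard construction (Trefethen's `cheb`); the function name and the negative-row-sum trick for the diagonal are conventions I am assuming, not anything taken from this exercise set:

```python
import math

def cheb(N):
    """Chebyshev differentiation matrix D and points x_j = cos(j*pi/N)."""
    if N == 0:
        return [[0.0]], [1.0]
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):           # diagonal via negative row sums
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return D, x

# Sanity check: spectral differentiation is exact on polynomials of
# degree <= N, so D applied to x^2 must return 2x.
D, x = cheb(4)
f = [xi ** 2 for xi in x]
df = [sum(D[i][j] * f[j] for j in range(5)) for i in range(5)]
print(max(abs(df[i] - 2 * x[i]) for i in range(5)) < 1e-10)  # True
```

For the surface and contour exercises, one would apply such a matrix along each dimension of the tensor-product grid to get $f_x$, $f_y$, and $f_{xy}$.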
{ "domain": "tobydriscoll.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9869795079712153, "lm_q1q2_score": 0.8389041383666002, "lm_q2_score": 0.849971181358171, "openwebmath_perplexity": 574.4103973533868, "openwebmath_score": 0.9795880913734436, "tags": null, "url": "https://tobydriscoll.net/fnc-julia/twodim/tensorprod.html" }
inorganic-chemistry, molecular-orbital-theory, transition-metals, symmetry Also, my understanding of how to decide on the ligand orbitals' relative energy levels is dubious. Yves Jean's Molecular Orbitals of Transition Metal Complexes says "one must analyze the bonding or antibonding character" (p. 43), but I am not getting it right every time. Can anyone give me some tips? T refers to a triply degenerate set, E refers to a doubly degenerate set, A and B are singly degenerate. The total number of orbitals should add up to the total number of atomic orbitals on your ligands. 1, 2, g, u are symmetry labels (look up Mulliken symbols). One thing to remember when constructing your MO diagram is that only orbitals of similar energy and the exact same type and symmetry can mix. So, you can only mix A with A, B with B, E with E and T with T. Also, only g with g, u with u, 1 with 1 and 2 with 2.
{ "domain": "chemistry.stackexchange", "id": 6535, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, molecular-orbital-theory, transition-metals, symmetry", "url": null }
machine-learning, deep-learning, classification, image-classification, reinforcement-learning Title: Reframing action recognition as a reinforcement learning problem Given the significant advancements in reinforcement learning, I wanted to know whether it is possible to recast problems such as action recognition, object tracking, or image classification into reinforcement learning problems. Given the significant advancements in reinforcement learning Worth noting that many of the recent advancements are due to improvements in neural networks used as function approximators, and understanding how to integrate them with reinforcement learning (RL) to help solve RL challenges involving vision or other complex non-linear mapping from state to best action. So at least some of the current improvements in RL were due to researchers asking the opposite question "Given the significant advancements in neural networks . . ." I wanted to know whether it is possible to recast problems such as action recognition, object tracking, or image classification into reinforcement learning problems.
{ "domain": "datascience.stackexchange", "id": 3215, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, deep-learning, classification, image-classification, reinforcement-learning", "url": null }
&=-\frac{\operatorname{Li}_{2}{\left(-b\right)}}{a}+\frac{1}{a}\int_{\frac{1}{1+a}}^{\frac{1+b}{1+a}}\frac{\ln{\left(z\right)}}{1-z}\,\mathrm{d}z;~~~\small{\left[\frac{1+by}{1+a}=z\right]}\\ &=-\frac{\operatorname{Li}_{2}{\left(-b\right)}}{a}+\frac{1}{a}\int_{\frac{a}{1+a}}^{\frac{a-b}{1+a}}\frac{(-1)\ln{\left(1-t\right)}}{t}\,\mathrm{d}t;~~~\small{\left[1-z=t\right]}\\ &=-\frac{\operatorname{Li}_{2}{\left(-b\right)}}{a}+\frac{\operatorname{Li}_{2}{\left(\frac{a-b}{1+a}\right)}-\operatorname{Li}_{2}{\left(\frac{a}{1+a}\right)}}{a}\\ &=\frac{\operatorname{Li}_{2}{\left(\frac{a-b}{1+a}\right)}-\operatorname{Li}_{2}{\left(\frac{a}{1+a}\right)}-\operatorname{Li}_{2}{\left(-b\right)}}{a}.\blacksquare\\ \end{align}
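The dilogarithm evaluations in the last steps rest on the defining relation $\operatorname{Li}_{2}{\left(x\right)}=-\int_{0}^{x}\frac{\ln{\left(1-t\right)}}{t}\,\mathrm{d}t$, which can be sanity-checked numerically. In this sketch the series truncation, the use of Simpson's rule, and the test point $x=1/2$ are all arbitrary choices of mine:

```python
import math

def li2(x, terms=200):
    """Dilogarithm via its power series sum_{k>=1} x^k / k^2 (|x| <= 1)."""
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

x = 0.5
approx = simpson(lambda t: -math.log(1 - t) / t, 1e-12, x)  # integrand -> 1 at 0
exact = li2(x)            # Li2(1/2) = pi^2/12 - ln(2)^2/2, about 0.5822
print(abs(approx - exact) < 1e-6)  # True
```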
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9884918516137418, "lm_q1q2_score": 0.8081657034110319, "lm_q2_score": 0.817574471748733, "openwebmath_perplexity": 1801.0404679211936, "openwebmath_score": 0.9768937230110168, "tags": null, "url": "https://math.stackexchange.com/questions/1337456/how-to-compute-the-integral-i-leftc-right-int-01-frac-ln1-cx1x" }
navigation, move-base [roslaunch][INFO] 2014-09-03 07:00:28,770: create_master_process: rosmaster, /opt/ros/hydro/share/ros, 11311 [roslaunch][INFO] 2014-09-03 07:00:28,770: process[master]: launching with args [['rosmaster', '--core', '-p', '11311']] [roslaunch.pmon][INFO] 2014-09-03 07:00:28,771: ProcessMonitor.register[master] [roslaunch.pmon][INFO] 2014-09-03 07:00:28,772: ProcessMonitor.register[master] complete [roslaunch][INFO] 2014-09-03 07:00:28,772: process[master]: starting os process [roslaunch][INFO] 2014-09-03 07:00:28,773: process[master]: start w/ args [['rosmaster', '--core', '-p', '11311', '__log:=/home/udoo/.ros/log/fc4df7ea-3337-11e4-b578-7cdd9047677b/master.log']] [roslaunch][INFO] 2014-09-03 07:00:28,774: process[master]: cwd will be [/home/udoo/.ros] [roslaunch][INFO] 2014-09-03 07:00:28,795: process[master]: started with pid [3210] [roslaunch][INFO] 2014-09-03 07:00:28,797: master.is_running[http://localhost:11311] [roslaunch][INFO] 2014-09-03 07:00:28,899: master.is_running[http://localhost:11311]
{ "domain": "robotics.stackexchange", "id": 19275, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, move-base", "url": null }
f#, playing-cards However, the code above did not account for jokers. So I updated the code to the following: type Suit = | Spades | Hearts | Clubs | Diamonds type Face = | Ace | King | Queen | Jack | Ten | Nine | Eight | Seven | Six | Five | Four | Three | Two type Joker = BigJoker | LittleJoker type Standard = { Face:Face; Suit:Suit } and Card = | Card of Standard | Wild of Joker let suits = [Spades; Hearts; Clubs; Diamonds] let faces = [Ace; King; Queen; Jack; Ten; Nine; Eight; Seven; Six; Five; Four; Three; Two] let deck = [for suit in suits do for face in faces do yield Card { Face=face; Suit=suit } ] @ [Wild BigJoker; Wild LittleJoker]
{ "domain": "codereview.stackexchange", "id": 23758, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "f#, playing-cards", "url": null }
soft-question, quantum-entanglement, degrees-of-freedom, particle-detectors, epr-experiment Title: Is it possible to know the efficiency of a particle detector without assuming the truth of Theory (e.g., Quantum Theory) I reorganized the question to clarify exactly what it is that I'm asking. Suppose an experiment is performed where a particle detector records 50 particles per second, on average. Absent any other considerations, it seems easy enough to come up with any number of theories to explain these results. For example, either of the following two theories would seem to be acceptable: assume the emitter produces 100 particles per second, implying a detector efficiency of 50% assume the emitter produces 200 particles per second, implying a detector efficiency of 25% The only constraint in designing a consistent model is that the particle production rate times the detector efficiency must equal the number of particles detected. It seems that I can assume any emission rate that is greater than or equal to my actual measured detection rate. In other words, we seem to be free to multiply the presumed emission rate by any positive factor, as long as we reduce the efficiency of the detector by the same factor. The context of the question is this: detector efficiency is crucial to the argument in every Bell Test experiment I'm familiar with. But as far as I can tell, assumptions about detector efficiency are determined in practice so as not to violate the tenets of Quantum Theory. If that's the case, then the argument becomes circular and Bell tests only provide evidence that QT axioms are consistent with experiment, but don't decide between QT and other possible theories. This line of thought led me to wonder whether the proportionality between macroscopic and sub-atomic energy and mass constants (for example, the Compton wavelength) isn't similarly under-determined.
To be clear, I'm asking a question about detector theory, and am not concerned about accuracy or calibration. Thanks in advance, and let me know if my question can be improved, or if it's ambiguous in any way. assume the emitter produces 100 particles per second, implying a detector efficiency of 50% assume the emitter produces 200 particles per second, implying a detector efficiency of 25% The only constraint in designing a consistent model is that the particle production rate times the detector efficiency must equal the number of particles detected. It seems that I can assume any emission rate that is greater than or equal to my actual measured detection rate.
{ "domain": "physics.stackexchange", "id": 45945, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "soft-question, quantum-entanglement, degrees-of-freedom, particle-detectors, epr-experiment", "url": null }
(This exercise is offered as a modernization of Euclid’s theorem on the infinitude of primes.) Prove that an infinite integral domain with a finite number of units has an infinite number of maximal ideals. I highly recommend Kap’s classic textbook to everyone interested in mastering commutative ring theory. In fact I highly recommend everything by Kaplansky – it is almost always very insightful and elegant. Learn from the masters! For more about Kaplansky see this interesting NAMS paper which includes quotes from many eminent mathematicians (Bass, Eisenbud, Kadison, Lam, Rotman, Swan, etc). I liked the algebraic way of looking at things. I’m additionally fascinated when the algebraic method is applied to infinite objects. –Irving Kaplansky NOTE: The reader familiar with the Jacobson radical may note that it may be employed to describe the relationship between the units in $\rm R$ and $\rm R/J\:$ used in the above proof. Namely: THEOREM. TFAE in ring $\rm\:R\:$ with units $\rm\:U,\:$ ideal $\rm\:J,\:$ and Jacobson radical $\rm\:Jac(R)\:.$ $\rm(1)\quad J \subseteq Jac(R),\quad$ i.e. $\rm\:J\:$ lies in every max ideal $\rm\:M\:$ of $\rm\:R\:.$ $\rm(2)\quad 1+J \subseteq U,\quad$ i.e. $\rm\: 1 + j\:$ is a unit for every $\rm\: j \in J\:.$ $\rm(3)\quad I\neq 1\ \Rightarrow\ I+J \neq 1,\qquad$ i.e. proper ideals survive in $\rm\:R/J\:.$ $\rm(4)\quad M\:$ max $\rm\:\Rightarrow M+J \ne 1,\quad$ i.e. max ideals survive in $\rm\:R/J\:.$
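Since the exercise is billed as a modernization of Euclid, it may help to see Euclid's construction itself in executable form. This sketch is only the special case $\rm R = \Bbb Z$, whose units are $\{\pm 1\}$; the helper name is mine:

```python
def euclid_new_prime(primes):
    """Euclid's construction: N = (product of the given primes) + 1 is
    coprime to each of them, so its smallest factor > 1 is a new prime."""
    N = 1
    for p in primes:
        N *= p
    N += 1
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d        # smallest nontrivial factor, necessarily prime
        d += 1
    return N                # N itself is prime

ps = [2, 3, 5, 7]
q = euclid_new_prime(ps)
print(q, q not in ps)  # 211 True
```

The ring-theoretic exercise generalizes exactly this: with only finitely many units, one can always manufacture an element avoiding any finite list of maximal ideals.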
{ "domain": "bootmath.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9875683513421314, "lm_q1q2_score": 0.8186337343841973, "lm_q2_score": 0.8289388104343892, "openwebmath_perplexity": 473.1220255717864, "openwebmath_score": 0.8989055156707764, "tags": null, "url": "http://bootmath.com/is-there-possibly-a-largest-prime-number.html" }
experimental-chemistry Title: Sonication to purify solid product- the crude product just disappeared I am forming Fmoc-valine-citrulline dipeptide, and below is the literature protocol I am following (from Bioconjugate Chem 2002, 13 (4), 855-869): Formation of Fmoc-Val-Cit Fmoc-Val-OSuc (succinimidyl ester) ($\pu{14.91 mmol}$) in DME ($\pu{40 mL}$) was added to a solution of Cit ($\pu{2.743 g}$, $\pu{1.05 equiv.}$) and NaHCO3 ($\pu{1.315 g}$, $\pu{1.05 equiv.}$) in water ($\pu{40 mL}$). THF ($\pu{20 mL}$) was added to aid solubility, and the mixture was stirred at room temperature for 16 h. Aqueous citric acid (15%, $\pu{75 mL}$) was added, and the mixture was extracted with 10% 2-propanol/ethyl acetate ($\pu{2 \times 100 mL}$). The solid product began to precipitate but remained in the organic layer. The suspension was washed with water ($\pu{2 \times 150 mL}$), and the solvents were evaporated. The resulting white solid was dried in vacuo for 5 h and then treated with ether ($\pu{80 mL}$). After brief sonication and trituration, the white solid product was collected by filtration
{ "domain": "chemistry.stackexchange", "id": 14760, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "experimental-chemistry", "url": null }
mass, terminology, matter Title: What is the meaning of "matter" in physics? What is the meaning of matter in physics? By defining matter in terms of mass and mass in terms of matter in physics, are we not forming circular definitions? Please give a meaning of "matter" in Physics that circumvents this circularity. What is the meaning of "matter" in physics? It doesn't matter. Sometimes matter means "particles with rest mass". Sometimes matter means "anything that contributes to the stress-energy tensor". Sometimes matter means "anything made of fermions". And so on. There's no need to have one official definition of the word "matter", nothing about the physical theories depends on what we call the words. Discussing this any further is just like worrying about whether a tomato is really a fruit or a vegetable. A cook doesn't care.
{ "domain": "physics.stackexchange", "id": 90254, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mass, terminology, matter", "url": null }
civil-engineering, software, rail, transportation The latter may be akin to a named/signed highway junction, or simply a named point along the tracks marked by a sign. These are more commonly used by higher-level people, such as folks in Customer Service, as they don't care as much about what precisely the train is doing at a given station. Communication Based Train Control (CBTC) systems may either use GPS lat/long and then convert it to a track name and milepost based on an internal database of the railroad's layout (this is what Positive Train Control or PTC does), or use beacons set in the track that act as "electronic mileposts" (the European Rail Traffic Management System/ERTMS approach, at least at Level 1). Of course, along with this all, you need to know what route (subdivision) you are on to begin with -- that's akin to the highway number on a road. It doesn't do you any good at all to know you're at milepost 100.000000 on the main track if you don't have the foggiest clue what subdivision you're in!
{ "domain": "engineering.stackexchange", "id": 915, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "civil-engineering, software, rail, transportation", "url": null }
np-complete Title: Complexity of a linear algebra problem Let matrix $M \in \{0,1\}^{r \times s}$ ($s>r$), let function $f:\Bbb Z \rightarrow \pm1$ and let $\alpha \in \Bbb Z \cap (0,s)$ be given. Is it NP-complete to decide if $\exists u \in \{0,1\}^{1 \times r}$ with $v=uM$: $|f(v)| = \sum_{i=1}^{s}f(v_{i}) \geq \alpha$? Yes, it is NP-complete, by reduction from 3SAT. In particular, we will go through an intermediate problem, which I will define as follows: Definition. The 1-to-3 of $k$ problem is as follows: given $n$ sets $S_1,\dots,S_n \subseteq \{1,\dots,m\}$, decide whether there exist $y_1,\dots,y_m \in \{0,1\}$ such that $1 \le \sum_{j\in S_i} y_j \le 3$ holds for all $i$. Theorem 1. The 1-to-3 of $k$ problem is at least as hard as 3SAT. Proof: Suppose we have a formula $\varphi$ with $n$ clauses over $m$ variables, $x_1,\dots,x_m$. Introduce $m'=2m$ variables, $y_1,\dots,y_{2m}$. The intent is that these will correspond to the $2m$ literals $x_1,\dots,x_m,\neg x_1,\dots,\neg x_m$ (e.g., $y_i=1$ if and only if $x_i=\text{True}$; $y_{m+i}=1$ if and only if $x_i=\text{False}$); we will introduce a few sets to enforce this intent. There are $n'=n+m+\lceil m/3\rceil$ sets, as follows:
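For concreteness, the decision problem in the Definition can be checked by brute force on tiny instances. This exponential-time checker is purely illustrative (the reduction's point is that no efficient algorithm is expected); the function name and the test instances are mine:

```python
from itertools import product

def one_to_three_sat(sets, m):
    """Brute-force decision for the '1-to-3 of k' problem: is there a
    y in {0,1}^m with 1 <= sum_{j in S_i} y_j <= 3 for every set S_i?"""
    for y in product((0, 1), repeat=m):
        if all(1 <= sum(y[j] for j in S) <= 3 for S in sets):
            return True
    return False

print(one_to_three_sat([{0, 1}, {1, 2}], 3))  # True, e.g. y = (0, 1, 0)
print(one_to_three_sat([{0}, set()], 1))      # False: an empty set never reaches 1
```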
{ "domain": "cstheory.stackexchange", "id": 2189, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "np-complete", "url": null }
ros, macbook Originally posted by Artem with karma: 709 on 2013-06-02 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 14389, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, macbook", "url": null }
c++, performance, arduino Title: How do I re-write 3 for loops into one function I am building a color sorting machine using a Arduino uno and a TCS3200 color sensor. I have a code that is working perfectly fine however I feel like the code could be a bit more clean seeing that I am using 3 almost identical for loops. Could anyone help me re-write these 3 for loops into one loop. Thank you in advance! Like I mentioned the code works perfectly fine. I only need some help re-writing these 3 for loops void loop() { delay(150); float frequencyR[3]; for(unsigned int i = 0; i < 3; i++) { delay(150); digitalWrite(S2, LOW); digitalWrite(S3, LOW); frequencyR[i] = pulseIn(sensorOut, LOW); delay(150); } float frequencyG[3]; for(unsigned int i = 0; i < 3; i++) { delay(150); digitalWrite(S2, HIGH); digitalWrite(S3, HIGH); frequencyG[i] = pulseIn(sensorOut, LOW); delay(150); } float frequencyB[3]; for(unsigned int i = 0; i < 3; i++) { digitalWrite(S2, LOW); digitalWrite(S3, HIGH); frequencyB[i] = pulseIn(sensorOut, LOW); delay(150); } Pass the pin values as ints and the array as a float pointer. void sample(int pin1, int pin2, float *result){ for(unsigned int i = 0; i < 3; i++) { digitalWrite(S2, pin1); digitalWrite(S3, pin2); result[i] = pulseIn(sensorOut, LOW); delay(150); } } And you call it with: float frequencyR[3]; sample(LOW, LOW, frequencyR); float frequencyG[3]; sample(HIGH, HIGH, frequencyG); float frequencyB[3]; sample(LOW, HIGH, frequencyB);
{ "domain": "codereview.stackexchange", "id": 35679, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, arduino", "url": null }
inorganic-chemistry, redox, oxidation-state Title: When HI reacts with H2SO4, why is the sulphate ion reduced to hydrogen sulphide instead of sulphur dioxide? When $\ce{HI}$ reacts with $\ce{H2SO4}$, it can be represented by the following chemical equation: $$\ce{8HI + H2SO4 -> 4I2 + H2S + 4H2O}$$ However, when $\ce{HBr}$ reacts with $\ce{H2SO4}$, the sulphate ion is reduced to sulphur dioxide instead: $$\ce{2HBr + H2SO4 -> Br2 + SO2 + 2H2O}$$ Could anyone account for the difference here? Are there any more cases where the sulphate ion is reduced to $\ce{H2S}$ instead of $\ce{SO2}$? The book entitled Qualitative Analysis by F. P. Treadwell ($1924$) mentions in the chapter Iodine (p. 323) that the reaction depends on the relative amount of $\ce{H2SO4}$. In the presence of a great excess of $\ce{H2SO4}$, its reaction on $\ce{HI}$ or iodides produces $\ce{SO2}$ apart from $\ce{I2}$. In the presence of an excess of iodides, the reaction produces $\ce{H2S}$, as mentioned by Jelly Qwerty.
{ "domain": "chemistry.stackexchange", "id": 17404, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, redox, oxidation-state", "url": null }
navigation, odometry, rviz Title: displaying the data of /odom (nav_msgs/Odometry) UTM-encoded position in RVIZ Fuerte on Ubuntu 12.04 The goal is to display the odometry of the robot in RVIZ. I am using the Xsens MTi-G GPS/IMU module. I used the cyphy_xsens_mtig package, which publishes /fix, and then the gps_common package, which subscribes to /fix and publishes /odom (nav_msgs/Odometry), which is a UTM-encoded position. I launched RVIZ and selected the /base_imu frame id. The odometry display type in RVIZ shows that the messages are being received, but I am not able to see the GPS location in RVIZ. I have tried all the views possible but it just shows a line which is a reference to TF. I guess that some link is missing between the TF of GPS and UTM. If anyone has any idea, please let me know. Originally posted by sai on ROS Answers with karma: 1935 on 2012-09-02 Post score: 0 Got the answer. The gps_common package does not publish any TF data. So I have added a TF broadcaster and am able to view the TF data in RVIZ. Originally posted by sai with karma: 1935 on 2012-09-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10860, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, odometry, rviz", "url": null }
soil Caliche generally forms when minerals leach from the upper layer of the soil (the A horizon) and accumulate in the next layer (the B horizon), at depths around 3 to 10 feet under the surface. It generally consists of carbonates in semiarid regions—in arid regions, less-soluble minerals form caliche layers after all the carbonates have been leached from the soil. The deposited calcium carbonate accumulates—first forming grains, then small clumps, then a discernible layer, and finally, a thicker, solid bed. As the caliche layer forms, the layer gradually becomes deeper, and eventually moves into the parent material, which lies under the upper soil horizons. However, caliche also forms in other ways. It can form when water rises through capillary action. In an arid region, rainwater sinks into the ground very quickly. Later, as the surface dries out, the water below the surface rises, carrying up dissolved minerals from lower layers. This water movement forms a caliche that tends to grow thinner and branch out as it nears the surface. Plants can contribute to the formation of caliche, as well. Plant roots take up water through transpiration, and leave behind the dissolved calcium carbonate, which precipitates to form caliche. It can also form on outcrops of porous rocks or in rock fissures where water is trapped and evaporates. In general, caliche deposition is a slow process, but if enough moisture is present in an otherwise arid site, it can accumulate fast enough to block a drain pipe. (photo from http://www.naturephoto-cz.com/karst-cave-photo-24442.html)
{ "domain": "earthscience.stackexchange", "id": 1534, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "soil", "url": null }
general-relativity, differential-geometry, metric-tensor, tensor-calculus, notation Finally, the expression $g_{\mu\nu}(x) \textrm{d}x^\mu \otimes \textrm{d}x^\nu$ does employ Einstein's convention. The object $g = g_{\mu\nu}(x) \textrm{d}x^\mu \otimes \textrm{d}x^\nu$ is the metric itself, the tensor, the bilinear map I described above. $g_{\mu\nu}(x)$ are its components on the coordinate chart $x^\alpha$. The idea is pretty similar to defining the vectors $$e_1 = \begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix}, \quad e_2 = \begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix}, \quad e_3 = \begin{pmatrix}0 \\ 0 \\ 1\end{pmatrix}$$ and then writing the vector $$v = \begin{pmatrix}v^1 \\ v^2 \\ v^3\end{pmatrix}$$ as $$v = v^1 e_1 + v^2 e_2 + v^3 e_3 = \sum_{i=1}^{3} v^i e_i = v^i e_i,$$ the only difference being that the basis for the metric is provided by the objects $\textrm{d}x^\mu \otimes \textrm{d}x^\nu$ instead of the $e_i$. The $\textrm{d}x^\mu$ are the basis for the cotangent space at $p$, $T_p^*$, and hence $\textrm{d}x^\mu \otimes \textrm{d}x^\nu$ provide the basis for $T_p^* \otimes T_p^*$ which, as mentioned above, is the space where the metric "lives".
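The component/basis bookkeeping can be mirrored in code: a metric is a bilinear map assembled from its components $g_{\mu\nu}$, with the Einstein sums written out explicitly. The Minkowski components below are an assumed example, not something from the answer above:

```python
def make_metric(g):
    """Turn an n x n component array g[mu][nu] into the bilinear map
    (u, v) -> g_{mu nu} u^mu v^nu, with the Einstein sums spelled out."""
    n = len(g)
    def metric(u, v):
        return sum(g[m][k] * u[m] * v[k] for m in range(n) for k in range(n))
    return metric

eta = [[-1, 0, 0, 0],   # Minkowski components in an inertial chart
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]
g = make_metric(eta)
u = [1, 0, 0, 0]        # components on the coordinate basis, as in v = v^i e_i
v = [2, 1, 0, 0]
print(g(u, u), g(u, v))  # -1 -2
```

The point of the analogy: `eta` plays the role of $g_{\mu\nu}(x)$, the index pairs play the role of the basis $\textrm{d}x^\mu \otimes \textrm{d}x^\nu$, and `g` itself is the coordinate-free bilinear map.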
{ "domain": "physics.stackexchange", "id": 84700, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, differential-geometry, metric-tensor, tensor-calculus, notation", "url": null }
1 - Is the goal legal? 2 - If legal, then how to deal with the resulting form? 3 - If not (1), then I am looking for some alternative approach to handle it. • Do you have the fundamental theorem of algebra at your disposal? This would settle equivalence once you've shown that the formula gives the two roots. (Watch out for $b^2-4ac = 0$, though). – AlexR Sep 12 '14 at 8:44 • At some point you want to assume the coefficient of $x^2$ is not zero. – Gerry Myerson Sep 12 '14 at 9:19 • @GerryMyerson Right I have edited the question thanks for mentioning – Asad Sep 12 '14 at 10:39 The trinomial $$ax^2+bx+c=0$$ may be solved when $a,b,c$ are complex numbers, and it works fine with real $a,b,c$, since $\Bbb R\subset\Bbb C$. However, we'll start with the real case. Real resolution $$ax^2+bx+c=0$$ Where $a\neq0$, and $a,b,c\in\Bbb R$. Then, rewrite the equation $$\left(x+\frac{b}{2a}\right)^2-\frac{b^2-4ac}{4a^2}=0$$ Then, according to the sign of $\Delta=b^2-4ac$, there are three possibilities: • If $\Delta=0$, the equation amounts to $$\left(x+\frac{b}{2a}\right)^2=0$$ It has thus one double root, $x=-\frac{b}{2a}$. But since $\Delta=0$, we can also write $$x=\frac{-b\pm\sqrt{\Delta}}{2a}$$ • If $\Delta>0$, then $\frac{b^2-4ac}{4a^2}$ is positive, so it's the square of the number $\frac{\sqrt{\Delta}}{2a}$, and we can write
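Once $a,b,c$ are allowed to be complex, the case split on the sign of $\Delta$ disappears entirely; a sketch in code, where the function name is mine and `cmath.sqrt` simply fixes one of the two square roots of $\Delta$:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a x^2 + b x + c = 0 for complex (or real) a, b, c."""
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)   # a square root of Delta
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))  # ((2+0j), (1+0j)): Delta > 0
print(quadratic_roots(1, 2, 5))   # ((-1+2j), (-1-2j)): Delta < 0
```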
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9706877692486436, "lm_q1q2_score": 0.8067803930821006, "lm_q2_score": 0.8311430499496096, "openwebmath_perplexity": 225.72422040052518, "openwebmath_score": 0.9341962933540344, "tags": null, "url": "https://math.stackexchange.com/questions/928516/quadratic-formula-for-complex-variable-with-real-coefficients" }
quantum-field-theory, symmetry, fermions, symmetry-breaking, classical-field-theory For the Gross-Neveu model in Dwagg's answer there is already a quartic term in the Lagrangian, and minimizing the potential w.r.t. $\bar{\psi}_a\psi_a$ can give rise to SSB. Such a term is not there in the QCD Lagrangian. First, as already discussed, there are several (very similar) purely fermionic model field theories that exhibit spontaneous symmetry breaking, the models of Gross-Neveu, Thirring, and Nambu-Jona-Lasinio. These theories contain a fundamental four-fermion interaction ${\cal L}\sim G(\bar\psi\psi)^2$. Second, there are interesting and important condensed matter systems that exhibit spontaneous symmetry breaking with a fermion-bilinear order parameter, BCS superconductivity in metals, neutron matter, liquid helium 3, and atomic gases. Fundamentally, these are not purely fermionic theories, but they can often be reduced to effective four-fermion theories. This is already a hint that a fundamental four-fermion interaction is not essential; such an interaction can always appear from integrating out other degrees of freedom. In QCD we can define an effective potential for $\langle\bar\psi\psi\rangle$ in the usual way. Couple an external field to the order parameter (QCD already has such a term, the mass term) and compute the partition function. Then Legendre transform to get the effective action as a function of the order parameter. The static part is the effective potential. Chiral symmetry breaking takes place at strong coupling, so we cannot compute the effective potential reliably (except in certain limiting cases), but we can identify the diagrams that contribute to it. The simplest is a fermion loop with a gluon going across.
If the gluon were heavy (which it is not), we would be able to contract the gluon propagator to a point, and this diagram would be the same four-fermion closed-off-by-two-loops diagram that appears in the Gross-Neveu model (which is why, historically, Gross and Neveu studied it).
{ "domain": "physics.stackexchange", "id": 56988, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, symmetry, fermions, symmetry-breaking, classical-field-theory", "url": null }
navigation, odometry, ros-kinetic, rtabmap-odometry, rosaria Title: How to disable odometry from ROSAria Hi, is there a way to disable wheel odometry (at least the TF) that is published by ROSAria? I want to use odometry from another source like visual odometry. This is how my TF frames look like atm. I have two separate trees because ROSAria still publishes odom to base_link. I use Ubuntu 14.04. with ROS Kinetic. Thank you in advance. Originally posted by Dox on ROS Answers with karma: 36 on 2018-07-31 Post score: 0 It would appear not. If you'd like that feature, it seems like a pretty straightforward change for you to contribute. Originally posted by Tom Moore with karma: 13689 on 2018-08-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 31424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, odometry, ros-kinetic, rtabmap-odometry, rosaria", "url": null }
java, algorithm, sorting System.out.println("Counting sort in " + duration + " milliseconds."); totalMySort += duration; startTime = System.currentTimeMillis(); Insertionsort.sort(array2, fromIndex, toIndex); endTime = System.currentTimeMillis(); duration = endTime - startTime; System.out.println("Insertion sort in " + duration + " milliseconds."); System.out.println(bar()); totalInsertionsort += duration; if (!Arrays.equals(array1, array2)) { throw new RuntimeException("Sorts did not agree."); } } System.out.println(); System.out.println(title("Presorted arrays")); //// PRESORTED ARRAYS //// for (int op = 0; op < OPERATION_COUNT; ++op) { int runAmount = 20 + 20 * op; System.out.println("Run amount: " + runAmount); array1 = getPresortedIntegerArray(LENGTH, runAmount, random); array2 = array1.clone(); int fromIndex = random.nextInt(LENGTH / 20); int toIndex = LENGTH - random.nextInt(LENGTH / 20); long startTime = System.currentTimeMillis(); CountingSort.sort(array1, fromIndex, toIndex); long endTime = System.currentTimeMillis(); long duration = endTime - startTime; System.out.println("Counting sort in " + duration + " milliseconds."); totalMySort += duration; startTime = System.currentTimeMillis(); Insertionsort.sort(array2, fromIndex, toIndex); endTime = System.currentTimeMillis(); duration = endTime - startTime; System.out.println("Insertion sort in " + duration + " milliseconds."); System.out.println(bar()); totalInsertionsort += duration;
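For readers without the reviewed classes at hand, the operation being timed — a counting sort restricted to the subrange [fromIndex, toIndex) — fits in a few lines. This sketch is my guess at the semantics of `CountingSort.sort`, not code from the review:

```python
def counting_sort_range(a, lo, hi):
    """Sort the integer slice a[lo:hi] in place by counting occurrences;
    O(n + k) where k is the key range, vs O(n^2) for insertion sort."""
    if hi - lo < 2:
        return
    mn = min(a[lo:hi])
    counts = [0] * (max(a[lo:hi]) - mn + 1)
    for i in range(lo, hi):
        counts[a[i] - mn] += 1
    i = lo
    for offset, c in enumerate(counts):
        for _ in range(c):
            a[i] = mn + offset
            i += 1

xs = [5, 3, 9, 1, 3, 7, 2]
counting_sort_range(xs, 1, 6)     # sort only indices 1..5
print(xs)  # [5, 1, 3, 3, 7, 9, 2]
```

This also explains the benchmark's presorted-array section: counting sort's cost is insensitive to existing order, while insertion sort benefits enormously from it.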
{ "domain": "codereview.stackexchange", "id": 15211, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, algorithm, sorting", "url": null }
As a general rule, partial fractions will greatly simplify the work required in similar problems. • I found this partial fraction using polynomial long division, yet it didn't appear to me that any remainder appearing there would obviously disappear when taking the derivative. This really is a great hint! – LeonTheProfessional Jun 5 '20 at 17:39 • @hdighfan I think there is a slight typo - it should read that the derivative of $\left(f(x)\right)^2$ is $2f(x)f'(x)$. – Zubin Mukerjee Jun 6 '20 at 3:01 • I have edited it to fix; please revert if not wanted – Zubin Mukerjee Jun 6 '20 at 20:18 • Ah, of course. Thanks for the edit. – hdighfan Jun 6 '20 at 20:19
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9748211582993982, "lm_q1q2_score": 0.8570108625020557, "lm_q2_score": 0.8791467595934563, "openwebmath_perplexity": 383.7864381263352, "openwebmath_score": 0.8510565161705017, "tags": null, "url": "https://math.stackexchange.com/questions/3707227/strategy-to-calculate-fracddx-left-fracx2-6x-92x2x32-right/3707238" }
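The hint above — decompose first, then differentiate term by term — is easy to sanity-check numerically. The toy function below is *not* the one from the question (that expression isn't fully reproduced here); it only illustrates that a partial-fraction form and the original rational form have identical derivatives:

```python
def f_direct(x):
    # a rational function in its original form
    return 1.0 / (x * (x + 1))

def f_partial(x):
    # its partial-fraction decomposition: 1/x - 1/(x+1)
    return 1.0 / x - 1.0 / (x + 1)

def f_partial_deriv(x):
    # differentiating the decomposition term by term is trivial
    return -1.0 / x ** 2 + 1.0 / (x + 1) ** 2

def numeric_deriv(g, x, h=1e-6):
    # central difference, for cross-checking
    return (g(x + h) - g(x - h)) / (2 * h)
```

The central-difference derivative of the undivided form agrees with the term-by-term derivative of the decomposed form to well within floating-point noise.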
newtonian-mechanics, rotational-dynamics, reference-frames The answer that satisfied me is written by Claudio Saspinski below (thank you very much). Rotation is a basic type of motion, like translation, in which one point of the rigid body is (relatively) fixed. So there is no need to "prove" that the motion of a rigid body is rotation; all bodies rotate about some point on them. As stated in the comment, any point on the rigid body can be taken as the centre of rotation of the body. It is proved, by applying the conditions of a rigid body, that no matter what this point is, the angular velocity is the same. So we can take any point other than the centre of mass as the rotation reference. The reason we take the centre of mass as the centre of rotation, in my current understanding, is ease of analysis of the motion. This is because the motion of the CM is just like that of a point object under the applied force and is easy to handle. If we take the centre of rotation to be another point, then to get the configuration of the body at a later time t, we need the position of this point at that time, which is more involved, as this point does not behave like a point body under the applied force. So, in conclusion, my question is not quite well posed: the centre of rotation of a body is any point you choose it to be. The point is not that an unconstrained rigid body rotates around its COM. In reality, for any time $t$, a rigid body (constrained or not) always rotates around any of its points with an angular velocity $\boldsymbol \omega (t)$. Consider $3$ generic points $P_0, P_1, P_2$ that belong to the rigid body, and let $P_0$ be, by hypothesis, the center of rotation. Let $\mathbf {r_1}$ and $\mathbf {r_2}$ be the position vectors of $P_1$ and $P_2$ in a non-rotating frame where $P_0$ is the origin. As the distances between the points don't change, the moduli of the position vectors are constant. That leads to: $\mathbf {r_1.v_1} = \mathbf {r_2.v_2} = 0$
{ "domain": "physics.stackexchange", "id": 84253, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, rotational-dynamics, reference-frames", "url": null }
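The claim above — every pair of points on a rigid body reports the same angular velocity, whichever reference point you pick, and relative velocity is perpendicular to relative position — is easy to check numerically. A small 2D sketch (illustrative numbers, not from the thread):

```python
import math

BODY_POINTS = [(1.0, 0.0), (0.0, 2.0), (-1.5, 0.5)]  # fixed body-frame offsets

def rigid_body_state(t, omega=0.7, vx=1.0, vy=-0.5):
    """World positions/velocities of a 2D rigid body translating at
    (vx, vy) while spinning at angular rate omega."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    cx, cy = vx * t, vy * t
    pos, vel = [], []
    for qx, qy in BODY_POINTS:
        pos.append((cx + c * qx - s * qy, cy + s * qx + c * qy))
        # time derivative of the position expression above
        vel.append((vx + omega * (-s * qx - c * qy),
                    vy + omega * (c * qx - s * qy)))
    return pos, vel

def angular_rate(p1, v1, p2, v2):
    """Recover omega from ANY pair of body points via
    v2 - v1 = omega * J(p2 - p1), where J(r) = (-r_y, r_x)."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    return (rx * dvy - ry * dvx) / (rx * rx + ry * ry)
```

Whichever pair of points is fed to `angular_rate`, the same omega comes back, and the relative velocity dotted with the relative position vanishes — the $\mathbf{r \cdot v} = 0$ condition stated above.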
homework-and-exercises, general-relativity, differential-geometry, metric-tensor, variational-principle that reflects the so-called "projective symmetry" of the Einstein-Hilbert-Palatini action. You can calculate the torsion and the non-metricity in order to check what I said before: $$ T_{\mu \nu}{}^{\rho}= A_{\mu}\delta^{\rho}_{\nu} -A_{\nu}\delta^{\rho}_{\mu} \\ \nabla_{\mu} g_{\nu \rho}= -2A_{\mu}g_{\nu \rho} $$ If either of them vanishes, then $A_\mu = 0$, and then the other one vanishes as well. It is worth pointing out that, even for this general solution, the equation for the metric turns into Einstein's. These and other details tell us that this $A_\mu$ field seems to be undetectable (there is no difference between the Einstein-Hilbert dynamics and the Einstein-Hilbert-Palatini dynamics). Check the following reference: A. N. Bernal et al. Physics Letters B 768 (2017), 280-287. https://arxiv.org/abs/1606.08756
{ "domain": "physics.stackexchange", "id": 44679, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, general-relativity, differential-geometry, metric-tensor, variational-principle", "url": null }
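To make the last claim explicit — that if either the torsion or the non-metricity vanishes then $A_\mu = 0$ — one can simply trace the two expressions given above (assuming, as a working hypothesis, $n = 4$ spacetime dimensions):

```latex
% Trace of the torsion over \rho = \nu, in n = 4 dimensions:
T_{\mu \nu}{}^{\nu} = A_{\mu}\,\delta^{\nu}_{\nu} - A_{\nu}\,\delta^{\nu}_{\mu}
                    = 4A_{\mu} - A_{\mu} = 3A_{\mu},
% so T = 0 forces A_\mu = 0. Tracing the non-metricity likewise:
g^{\nu\rho}\,\nabla_{\mu} g_{\nu \rho} = -2A_{\mu}\,g^{\nu\rho}g_{\nu \rho}
                                       = -8A_{\mu}.
```

Either trace vanishing kills $A_\mu$, and with $A_\mu = 0$ the other tensor vanishes identically by the formulas above.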
# Determine matrix A with respect to the standard basis, $f: U \to \mathbb{R}^2$ I am having trouble understanding my problem and what to calculate. I have been given the subspace $$U=\{x= \begin{pmatrix} x_1\\ x_2\\ x_3\\ \end{pmatrix} \in F^3 \mid x_1 + x_2 + x_3 = 0\} \subset F^3$$ and the linear transformation $$f: U \rightarrow F^2$$ $$f\begin{pmatrix} x_1\\ x_2\\ x_3\\ \end{pmatrix} =\begin{pmatrix} x_1\\ x_2+x_3\\ \end{pmatrix}$$ The question is to determine a matrix A that represents $$f: U \rightarrow F^2$$ with respect to the basis for U and the standard basis $$(e_1,e_2)$$ for $$F^2$$ My attempt: I have calculated the basis $$B=\left\{\begin{pmatrix} 1\\ 0\\ -1\\ \end{pmatrix},\begin{pmatrix} 0\\ 1\\ -1\\ \end{pmatrix}\right\}$$ And my matrix A calculated from the linear transformation $$\begin{pmatrix} 1&0&0\\ 0&1&1\\ \end{pmatrix}$$ I'm aware the standard basis vectors are $$e_1=\begin{pmatrix} 1\\ 0\\ \end{pmatrix} , e_2=\begin{pmatrix} 0\\ 1\\ \end{pmatrix}$$ I'm not sure about the next step. Do I calculate: $$f\begin{pmatrix} 1\\ 0\\ -1\\ \end{pmatrix} =\begin{pmatrix} 1\\ -1\\ \end{pmatrix}$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9863631659211718, "lm_q1q2_score": 0.8017823721530579, "lm_q2_score": 0.8128673087708699, "openwebmath_perplexity": 103.43049189026804, "openwebmath_score": 0.9054194688796997, "tags": null, "url": "https://math.stackexchange.com/questions/3060058/determine-matrix-a-with-respect-to-standard-basis-fu-r2" }
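To finish the computation started above: applying $f$ to each basis vector of $U$ and using the images — already expressed in the standard basis $(e_1, e_2)$ — as the columns of $A$ is exactly the right next step. A quick sketch:

```python
def f(x):
    # the map from the question: (x1, x2, x3) |-> (x1, x2 + x3)
    x1, x2, x3 = x
    return (x1, x2 + x3)

# basis of U found above
basis_U = [(1, 0, -1), (0, 1, -1)]

# the j-th column of A is f(b_j), already written in the
# standard basis (e1, e2) of F^2
cols = [f(b) for b in basis_U]
A = [[cols[j][i] for j in range(len(cols))] for i in range(2)]
```

So $A$ is the $2\times 2$ matrix with columns $(1,-1)$ and $(0,0)$; multiplying $A$ by the coordinate vector of $x$ in the basis $B$ reproduces $f(x)$.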
thermodynamics, statistical-mechanics $$\chi^2_{3N}(E)=\frac{\beta (\beta E)^{\frac{3N}{2}-1}e^{-\beta E}}{\Gamma(\frac{3N}{2})}$$ Which is the distribution you were looking for. P.S. For both of these rescalings I have used the general identity for $y=sx$: $p(y)=sp(sx)$ which is just a change of variables. I have also used some algebra to derive the above so they might not be immediately obvious but they should be readily demonstrable.
{ "domain": "physics.stackexchange", "id": 71191, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, statistical-mechanics", "url": null }
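The density quoted above is a Gamma distribution with shape $3N/2$ and rate $\beta$ — equivalently, $\beta E$ follows Gamma$(3N/2, 1)$. That is easy to check by Monte Carlo with the standard library alone: sample $3N$ unit-variance Gaussian momenta, form $E = \sum_i p_i^2/2$, and compare the sample mean with the analytic mean $3N/(2\beta)$. The parameters below are made up for illustration:

```python
import math
import random

random.seed(0)          # deterministic demo

N = 2                   # particles, so 3N = 6 momentum components
beta = 1.0              # inverse temperature, in units where beta = 1

def sample_energy():
    # E = sum_i p_i^2 / 2 with unit-variance Gaussian momenta;
    # then beta * E ~ Gamma(3N/2, 1), i.e. the density quoted above
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3 * N)) / 2.0

samples = [sample_energy() for _ in range(20000)]
mean_E = sum(samples) / len(samples)
analytic_mean = 3 * N / (2 * beta)   # mean of Gamma(3N/2, rate beta)
```

With 20,000 samples the empirical mean lands close to the analytic value $3N/(2\beta) = 3$.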
quantum-mechanics, quantum-information, quantum-spin, wavefunction-collapse, foundations where the operators $\mathsf{V}$ are unitary! Now the important part: Measurements act almost trivially if we represent the initial state in the basis of the operator that one intends to measure. If both Alice and Bob intend to measure $Z$, we write their shared Bell state in the $Z$ basis. In other words, we have the initial state $$ \left| \Psi_0 \right\rangle = \frac{1}{\sqrt{2}} \left( \left| 0 \right\rangle^{\vphantom{\prime}}_A \otimes \left| 0 \right\rangle^{\vphantom{\prime}}_B + \left| 1 \right\rangle^{\vphantom{\prime}}_A \otimes \left| 1 \right\rangle^{\vphantom{\prime}}_B \right) \otimes \left| 0 \right\rangle^{\vphantom{\prime}}_a \otimes \left| 0 \right\rangle^{\vphantom{\prime}}_b \, ,~$$ and after Alice's measurement, we have
{ "domain": "physics.stackexchange", "id": 95545, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-information, quantum-spin, wavefunction-collapse, foundations", "url": null }
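The "almost trivial" action of a $Z$ measurement on this state can be verified with a few lines of linear algebra. Below is a bare-bones sketch (plain Python lists standing in for the two-qubit state vector; illustrative, not from the thread — the ancilla registers $a, b$ are dropped since they are untouched by the $Z$ measurement itself):

```python
import math

# |Psi> = (|00> + |11>) / sqrt(2), amplitudes indexed as |q_A q_B>
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

def measure_z(state, qubit, outcome):
    """Project a 2-qubit state onto `qubit` (0 = Alice, 1 = Bob) reading
    `outcome` in the Z basis; return (probability, normalized post-state)."""
    kept = []
    for idx, amp in enumerate(state):
        bit = (idx >> (1 - qubit)) & 1   # Alice's qubit is the high bit
        kept.append(amp if bit == outcome else 0.0)
    p = sum(a * a for a in kept)         # amplitudes here are real
    if p == 0.0:
        return 0.0, kept
    norm = math.sqrt(p)
    return p, [a / norm for a in kept]
```

Alice reading $0$ occurs with probability $1/2$ and collapses the pair to $|00\rangle$, after which Bob reads $0$ with certainty — precisely the correlation the basis-aligned representation above makes obvious.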
structural-engineering, civil-engineering, structural-analysis Title: General solution for the most critical pattern of live loading From the ASCE 7-05 code: ASCE 7-05 Section 4.6 states "The full intensity of the appropriately reduced live load applied only to a portion of a structure or member shall be accounted for if it produces a more unfavorable effect than the same intensity applied over the full structure or member." The article then goes on to demonstrate how to calculate the pattern of live loading for a few simple, textbook cases. The problem now is: what if the configuration is not that simple? In real life, the beam configuration and the support conditions can be very different from textbook examples. How do we obtain the most critical pattern of live loading in the most general situation? Is there an algorithm for this? As mentioned in the linked text and in @grfrazee's answer, the secret is influence lines. Or, more generically, influence surfaces. For starters, let's stick to influence lines, since they are far easier to describe. An influence line is a diagram for a given point on an object composed of unidimensional beam elements. It describes the internal force that will occur at that point due to a unit load applied at different points along the entire structure. For instance, a simply supported beam has the following bending-moment influence line for the point at a quarter-span (I'm mostly going to talk about bending-moment influence lines here, but the general gist of things applies to other forces as well):
{ "domain": "engineering.stackexchange", "id": 815, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "structural-engineering, civil-engineering, structural-analysis", "url": null }
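For the simply supported beam mentioned above, the bending-moment influence line has a simple closed form (a standard statics result, not taken from the linked figure): for a unit load at $x$ and a section at distance $a$ from the left support of a span $L$, the ordinate is $x(L-a)/L$ for $x \le a$ and $a(L-x)/L$ for $x \ge a$, peaking at $x = a$. A sketch:

```python
def moment_influence(a, L, x):
    """Bending-moment influence ordinate at a section located `a` from the
    left support of a simply supported beam of span L, for a unit load at x.
    Derived from statics: R_left = (L - x)/L, R_right = x/L."""
    if not (0.0 <= x <= L):
        return 0.0                   # load off the span contributes nothing
    if x <= a:
        return x * (L - a) / L       # moment carried via the right reaction
    return a * (L - x) / L           # moment carried via the left reaction
```

Loading the regions where the ordinate is positive (here, the whole span) maximizes the moment at the section — which is exactly how the ASCE 7-05 patterning rule gets applied once the influence line is known.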
java, algorithm, sorting, mergesort Run dequeue() { Run run = runArray[head]; head = (head + 1) & mask; --size; return run; } int size() { return size; } private static int ceilCapacityToPowerOfTwo(int capacity) { int ret = Integer.highestOneBit(capacity); return ret != capacity ? ret << 1 : ret; } @Override public String toString() { StringBuilder sb = new StringBuilder("["); String separator = ""; for (int i = 0; i < size; ++i) { sb.append(separator).append(runArray[(head + i) & mask]); separator = ", "; } return sb.append("]").toString(); } } private static final class RunLengthQueueBuilder<T extends Comparable<? super T>> { private final RunQueue queue; private final T[] array; private int head; private int left; private int right; private final int last; private boolean previousRunWasDesending; RunLengthQueueBuilder(T[] array) { this.queue = new RunQueue((array.length >>> 1) + 1); this.array = array; this.left = 0; this.right = 1; this.last = array.length - 1; } RunQueue run() { while (left < last) { head = left; if (array[left++].compareTo(array[right++]) <= 0) { scanAscendingRun(); } else { scanDescendingRun(); } ++left; ++right; } if (left == last) { if (array[last - 1].compareTo(array[last]) <= 0) { queue.addToLastRun(1); } else { queue.enqueue(new Run(left, left)); } } return queue; }
{ "domain": "codereview.stackexchange", "id": 27821, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, algorithm, sorting, mergesort", "url": null }
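The RunLengthQueueBuilder above scans the array once, classifying maximal ascending and descending runs before merging. A compact Python sketch of the same idea — descending runs reversed in place so every run ends up ascending, as natural merge sorts such as Timsort do. This is a simplified analogue, not a translation of the Java:

```python
def find_runs(arr):
    """Return inclusive (start, end) index pairs of maximal runs;
    strictly descending runs are reversed in place so every run ascends."""
    runs = []
    i, n = 0, len(arr)
    while i < n:
        j = i + 1
        if j < n and arr[j] < arr[i]:
            # strictly descending run: extend, then reverse it in place
            while j + 1 < n and arr[j + 1] < arr[j]:
                j += 1
            arr[i:j + 1] = arr[i:j + 1][::-1]
        else:
            # non-decreasing run (covers the single-element tail case)
            while j < n and arr[j] >= arr[j - 1]:
                j += 1
            j -= 1
        runs.append((i, j))
        i = j + 1
    return runs
```

A natural merge sort would then enqueue these runs and repeatedly merge adjacent pairs until one run remains.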
ros Title: fatal error: ros/ros.h: No such file or directory I have created a package using catkin_create_pkg on my Raspberry Pi 3, where I had previously installed Ubuntu Xenial and ROS Kinetic. But every time I try to compile the cpp file included in the created package, I get the following error: fatal error: ros/ros.h: No such file or directory compilation terminated. It looks like it does not see any of the header files included in my cpp file. I have everything set up the same way on my other computer running Ubuntu Trusty and ROS Indigo. Not sure why this is happening here. Edit: This is what I added to my package.xml <build_depend>roscpp</build_depend> <build_depend>rospy</build_depend> <build_depend>std_msgs</build_depend> <build_depend>geometry_msgs</build_depend> <build_depend>message_generation</build_depend> <run_depend>roscpp</run_depend> <run_depend>rospy</run_depend> <run_depend>std_msgs</run_depend> <run_depend>geometry_msgs</run_depend> <run_depend>message_runtime</run_depend> Here is the CMakeLists.txt: cmake_minimum_required(VERSION 2.8.3) project(sphero_move) ## Find catkin macros and libraries ## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz) ## is used, also find other catkin packages find_package(catkin REQUIRED) include_directories(${catkin_INCLUDE_DIRS}) ## System dependencies are found with CMake's conventions # find_package(Boost REQUIRED COMPONENTS system) find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs geometry_msgs message_generation )
{ "domain": "robotics.stackexchange", "id": 24990, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros", "url": null }
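A likely cause, judging only from the posted CMakeLists.txt: `include_directories(${catkin_INCLUDE_DIRS})` runs right after the component-less `find_package(catkin REQUIRED)`, so at that point the variable does not yet contain roscpp's include directories (where ros/ros.h lives). A conventional ordering would look like the sketch below — the accepted answer isn't shown, and the executable name and source path here are hypothetical:

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(sphero_move)

# One find_package call, listing the components, BEFORE using
# ${catkin_INCLUDE_DIRS}; a plain find_package(catkin REQUIRED)
# does not pull in roscpp's headers.
find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  geometry_msgs
  message_generation
)

catkin_package()

include_directories(${catkin_INCLUDE_DIRS})

# hypothetical target; adjust to the package's actual source file
add_executable(sphero_move_node src/sphero_move.cpp)
target_link_libraries(sphero_move_node ${catkin_LIBRARIES})
```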
time-complexity, recurrence-relation So I think that the hypotheses are wrong. Probably I am misunderstanding the subtraction of a lower-order term. What are the constraints for applying this technique? Only that $T(k) \gt 0$? In this recurrence the latter seems to hold. Synthesis: https://cs.stackexchange.com/questions/65549/lower-order-term-constraints-and-wrong-guess/65579?noredirect=1#comment139380_65579 First of all, the way you state the induction hypothesis is a bit strange, and it definitely doesn't prove that $T(n) = O(n)$. Assuming that the induction is on $n$, what your induction hypothesis states (more or less) is that for every $n$ there exists $C$ such that $T(k) \leq Ck$ for all $k \leq n$. The function $T(n) = n^2$ satisfies this, with $C = n$. What you really want to do is to fix the parameters $b,c,n_0$ in advance. In other words, you want to prove $$ \exists b,c,n_0 \, \forall n \, \forall n_0 \leq k \leq n\colon T(k) \leq ck - bn. $$ You first instantiate $b,c,n_0$, and then you prove only the following part by induction: $$ \forall n \, \forall n_0 \leq k \leq n\colon T(k) \leq ck - bn. $$ At each step of the induction, you are proving $$ \forall n_0 \leq k \leq n\colon T(k) \leq ck - bn $$ for the current value of $n$. Now for your mistake. Your induction hypothesis states that for all $k \leq n$, $$ T(k) \leq ck - bn. $$ However, in the inductive step, you only prove $$ T(k) \leq ck. $$ Also, you only prove it for $k = n$, despite the form of your actual induction hypothesis.
{ "domain": "cs.stackexchange", "id": 7635, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "time-complexity, recurrence-relation", "url": null }
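To see the subtract-a-lower-order-term technique succeed end to end, the standard illustration (not the recurrence from the question) is $T(n) = 2T(n/2) + 1$: guessing $T(n) \le cn$ fails, because substitution yields $cn + 1$, but guessing $T(n) \le cn - b$ gives $2(c\,n/2 - b) + 1 = cn - 2b + 1 \le cn - b$ whenever $b \ge 1$. The closed form is $T(n) = 2n - 1$ for powers of two, which a few lines of Python confirm:

```python
def T(n):
    """Classic recurrence T(n) = 2*T(n/2) + 1 with T(1) = 1,
    evaluated for n a power of two; closed form is 2n - 1."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + 1
```

The only constraint on the guessed bound is that it be provable for the base cases and preserved by the substitution step with fixed constants, as the answer above emphasizes.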
slam, navigation, amcl, gmapping I did not post any data with regard to AMCL/GMCL because I believe that the issue is probably the same. But please help me find a fix and verify whether it can also work with AMCL/GMCL. I would really love to make this work with 2 LiDARs. Thanks so much for your help. Originally posted by Orl on ROS Answers with karma: 36 on 2022-06-22 Post score: 0 Here's the solution. Don't use the ira_laser_tools package. Use laserscan_merger instead, which is available here: https://github.com/robotics-upo/laserscan_merger Originally posted by Orl with karma: 36 on 2022-06-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 37787, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, amcl, gmapping", "url": null }
php, array Output: $data = [ ['category' => 1, 'categoryname' => 'c1', 'attribute' => 1, 'attributename' => 'a1', 'option' => 1, 'optionname' => 'o1'], ['category' => 1, 'categoryname' => 'c1', 'attribute' => 1, 'attributename' => 'a1', 'option' => 2, 'optionname' => 'o2'], ['category' => 1, 'categoryname' => 'c1', 'attribute' => 2, 'attributename' => 'a2', 'option' => 3, 'optionname' => 'o3'], ['category' => 1, 'categoryname' => 'c1', 'attribute' => 2, 'attributename' => 'a2', 'option' => 4, 'optionname' => 'o4'], ['category' => 2, 'categoryname' => 'c2', 'attribute' => 3, 'attributename' => 'a3', 'option' => 5, 'optionname' => 'o5'], ['category' => 2, 'categoryname' => 'c2', 'attribute' => 3, 'attributename' => 'a3', 'option' => 6, 'optionname' => 'o6'], ['category' => 2, 'categoryname' => 'c2', 'attribute' => 4, 'attributename' => 'a4', 'option' => 7, 'optionname' => 'o7'], ['category' => 2, 'categoryname' => 'c2', 'attribute' => 4, 'attributename' => 'a4', 'option' => 8, 'optionname' => 'o8'], ]; Method I used: $final = [];
{ "domain": "codereview.stackexchange", "id": 40633, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, array", "url": null }
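The question above cuts off before showing the method, but the target shape is clear from the data: group by category, then by attribute, collecting the options under each attribute. Here is a sketch of one common approach — written in Python rather than PHP, with assumed output key names (`attributes`, `options`), since the desired output array isn't shown:

```python
def nest(rows):
    """Group flat rows into categories -> attributes -> options."""
    cats = {}
    for r in rows:
        cat = cats.setdefault(r["category"], {
            "category": r["category"],
            "categoryname": r["categoryname"],
            "attributes": {},
        })
        attr = cat["attributes"].setdefault(r["attribute"], {
            "attribute": r["attribute"],
            "attributename": r["attributename"],
            "options": [],
        })
        attr["options"].append(
            {"option": r["option"], "optionname": r["optionname"]})
    # replace the keyed attribute dicts with plain lists for output
    out = []
    for cat in cats.values():
        cat["attributes"] = list(cat["attributes"].values())
        out.append(cat)
    return out
```

The PHP equivalent would use associative arrays keyed by `category` and `attribute` in the same way, with an `array_values()` pass at the end to flatten the keys.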