Mean value theorem: suppose f is continuous on the closed interval [a,b] and is differentiable on the open interval (a,b), i.e., the derivative of f exists at all points in the open interval (a,b). Then, there exists c in the open interval (a,b) such that f'(c) = (f(b) - f(a))/(b - a). Extreme value theorem: suppose that f(x) is defined on the open interval (a,b) and that f(x) has an absolute max at x=c. Plugging these special values into the original function f(x) yields: The absolute maximum is \answer {17} and it occurs at x = \answer {-2}. The absolute minimum is \answer {-15} and it occurs at x = \answer {2}. In another example, the candidates are the endpoints, x = -1, 2, or the critical number x = -1/3. In that example, the domain is not a closed interval, and Theorem 1 doesn't apply: the function is not defined on a closed interval, so the Extreme Value Theorem does not apply. A continuous function f(x) on the closed interval [a,b] attains an absolute max and an absolute min. This is a good thing, of course: finding the absolute extremes of a continuous function f(x) on a closed interval [a,b] is a routine procedure. However, for a function defined on an open or half-open interval, extrema need not exist. In mathematical analysis, the intermediate value theorem states that if f is a continuous function whose domain contains the interval [a, b], then it takes on any given value between f(a) and f(b) at some point within the interval. On (-2, 2), an open interval, there are no endpoints. On the interval [-1, 3] we see that f(x) has two critical numbers in the interval, namely x = 0 and … Fermat's theorem: if f(x) exists for all values of x in the open interval (a,b) and f has a relative extremum at c, where a < c < b, then if f'(c) exists, f'(c) = 0.
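The closed-interval procedure above (evaluate at endpoints and critical numbers, then compare) can be checked with a short script. The source does not give the function; f(x) = x^3 - 12x + 1 on [-3, 3] is an assumed example whose critical numbers and values match the stated answers (critical numbers x = ±2, maximum 17 at x = -2, minimum -15 at x = 2):

```python
def f(x):
    # Assumed example function; chosen so f(-2) = 17 and f(2) = -15.
    return x**3 - 12*x + 1

endpoints = [-3, 3]    # assumed closed interval [-3, 3]
critical = [-2, 2]     # roots of f'(x) = 3x^2 - 12
candidates = endpoints + critical

values = {x: f(x) for x in candidates}
abs_max_x = max(values, key=values.get)
abs_min_x = min(values, key=values.get)
# abs_max_x == -2 with f(-2) == 17; abs_min_x == 2 with f(2) == -15
```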
{ "domain": "popeyethewelder.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9790357549000773, "lm_q1q2_score": 0.8321521771843075, "lm_q2_score": 0.849971181358171, "openwebmath_perplexity": 321.2138351451554, "openwebmath_score": 0.7721751928329468, "tags": null, "url": "https://popeyethewelder.com/102u0/3ccd82-extreme-value-theorem-open-interval" }
python, optimization, combinatorics Title: Python small brute-forcer I have a set of 20+ keys. The keyset looks like: Key[0] = { "PossibleKey0", "PossibleKey0Another", "AnotherPossibleKey0" } Key[1] = { .... There is an encryption which needs to be broken; it will receive the 20+ keys one after another and applies decryption one by one. So key[0] has 3 different possible keys, same for key[1], key[2], etc. I should try each key with each of its possible keys. I decided to write it like: for i in range(0, len(Key[0])): for j in range(0, len(Key[1])): for k in range(0, len(Key[2])): for l in range(0, len(Key[3])): Decrypt(Key[0][i], Key[1][j], Key[2][k], Key[3][l]) This looks OK for small key sets, but what's a better approach when there are 20+ keys? In general, you can use itertools.product to factor out nested loops, and tuple unpacking with * to neatly deal with multiple arguments: from itertools import product for keys in product(*Key): Decrypt(*keys) A simple example: >>> from itertools import product >>> keys = ["ab", "cd", "ef", "gh"] >>> for k in product(*keys): print k
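The accepted approach scales to any number of key sets. A minimal self-contained sketch; the key sets and the counting stand-in for Decrypt are assumptions for illustration:

```python
from itertools import product

# Hypothetical key sets; each inner list holds the candidate keys for one slot.
Key = [
    ["PossibleKey0", "PossibleKey0Another", "AnotherPossibleKey0"],
    ["K1a", "K1b"],
    ["K2a", "K2b", "K2c"],
]

attempts = []

def decrypt(*keys):
    # Stand-in for the real decryption attempt; just records the combination.
    attempts.append(keys)

# One flat loop replaces arbitrarily deep nesting.
for keys in product(*Key):
    decrypt(*keys)

# Every combination is tried exactly once: 3 * 2 * 3 = 18 attempts.
```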
{ "domain": "codereview.stackexchange", "id": 9509, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, optimization, combinatorics", "url": null }
light, atmospheric-effects Title: Why can't moonlight (reflected sunlight) turn the sky blue? Does turning the colour of the sky blue need more luminous light? Does it depend on luminosity, or are other factors also responsible for this phenomenon? Why can't the moonlight turn the sky blue even a little bit (at least in the area near the disc)? Thanks. The simple answer is that it does, but it's not bright enough to be visible to the naked eye. Earth's atmosphere scatters moonlight just like sunlight. The full moon (like the sun) spans about 1/2 of 1 degree of the sky, the entire sky being 180 degrees across, give or take, so the full moon covers less than 1 part in 100,000 of the night sky; there simply isn't enough scattered blue light to be visible even with the brightest full moon. Our eyes are very good at seeing variations in brightness, but not that good. . . . and, for what it's worth, the night sky has always appeared to have a dark bluish tint to me, but that might just be my brain playing tricks on me because logically I know it's there. I'm not sure whether it's actually visible. With a good-sized telescope, moonlight scattering acts as a form of light pollution. Telescope users know that you get better visuals when there's no moon. Source.
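The 1-in-100,000 figure can be sanity-checked with solid angles rather than linear degrees. A quick sketch, assuming an angular radius of 0.25 degrees for the full moon:

```python
import math

moon_radius_deg = 0.25  # angular radius of the full moon (assumed)
r = math.radians(moon_radius_deg)

# Solid angle subtended by the lunar disc, in steradians.
moon_solid_angle = 2 * math.pi * (1 - math.cos(r))
hemisphere = 2 * math.pi  # the visible half of the sky

fraction = moon_solid_angle / hemisphere
# fraction comes out near 1e-5, i.e. roughly 1 part in 100,000 of the night sky
```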
{ "domain": "astronomy.stackexchange", "id": 1437, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "light, atmospheric-effects", "url": null }
astrophysics, stellar-physics, white-dwarfs Title: What fuel is J005311 burning? J005311 is two white dwarf stars that have merged in such a way that, well, they merged instead of exploding. (Only 7 or so such mergers have been found.) Its stellar wind is blowing at 16,000 km per second and it has reached a surface temperature of 200,000 C. The weird thing is it emits no visible light, and only shines in infrared. When it runs out of material to burn, in a few thousand years it will likely collapse under its own gravity, the electrons and protons fusing into neutrons, turning the Frankenstar into a low-mass neutron star. Scientists believe it is currently large enough to kick-start nuclear fusion again. My question is: what is it burning? Shouldn't the core be iron, which takes more energy to fuse than it releases? Is it burning carbon and oxygen in outer belts above the iron core? Finally, why is it not emitting visible light? The paper being referenced is Gvaramadze et al. 2019; they note that the observations agree with models of super-Chandrasekhar mass remnants of carbon-oxygen white dwarf collisions, carried out by Schwab et al. 2016. Schwab et al.'s models predict that post-collision, slightly off-center carbon fusion will occur in the remnant. This fusion leads to a so-called "carbon flame", a deflagration (note: not a detonation) wave that travels towards the remnant's center, over the course of about $2\times10^4$ years. Once the flame reaches the center, it lifts the preexisting degeneracy, allowing for Kelvin-Helmholtz contraction; if the remnant exceeds $1.35-1.37M_{\odot}$ (as J005311 is believed to), off-center neon fusion will be triggered, leading to a neon-oxygen analog of the carbon flame, also propagating towards the center. Schwab et al. also find that this flame will not fully lift the central degeneracy.
Assuming there is no off-center silicon burning, the remnant will at this point become a silicon-dominated white dwarf; if silicon burning occurs (as it did in some models), it is likely that the resulting iron core would collapse into a neutron star.
{ "domain": "physics.stackexchange", "id": 60478, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "astrophysics, stellar-physics, white-dwarfs", "url": null }
formal-languages, regular-languages, pumping-lemma $xy^iz\in L_q$ for any $i\ge 0$; $|y|>0$; $|xy|\le p$
{ "domain": "cs.stackexchange", "id": 4708, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-languages, regular-languages, pumping-lemma", "url": null }
special-relativity, spacetime, reference-frames, coordinate-systems, observers So, there are two events O (leave the starting wall) and P (arrive at the ending wall). Join these [timelike-related] events by a line-segment. That segment is the worldline of the inertial observer Elle who experienced both O and P, and that observer would assign position coordinates [in her frame] $x^{Elle}_O=0$ and $x^{Elle}_P=0$. According to the lab frame, the velocity of that frame of reference is the slope $v^{Lab}_{OP}=\displaystyle\frac{x^{Lab}_P-x^{Lab}_O}{t^{Lab}_P-t^{Lab}_O}=\frac{20}{25}=4/5$.
{ "domain": "physics.stackexchange", "id": 44626, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, spacetime, reference-frames, coordinate-systems, observers", "url": null }
ros, arduino, rosserial, publisher Title: ImportError: No module named rospkg Hi Guys, I was trying to follow the tutorials under rosserial_arduino from this link: http://www.ros.org/wiki/rosserial_arduino/Tutorials/Hello%20World But running this command: rosrun rosserial_python serial_node.py /dev/ttyUSB0 gave me the following error message: Traceback (most recent call last): File "/home/sanket/ros_workspace/rosserial/rosserial_python/nodes/serial_node.py", line 38, in <module> import roslib; roslib.load_manifest("rosserial_python") File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/__init__.py", line 50, in <module> from roslib.launcher import load_manifest File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 42, in <module> import rospkg ImportError: No module named rospkg Thanks in advance. Originally posted by Sanket_Kumar on ROS Answers with karma: 234 on 2012-09-21 Post score: 0 This specific error should be resolved by installing rospkg. On Ubuntu, install python-rospkg via apt. Originally posted by tfoote with karma: 58457 on 2012-09-24 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 11102, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, arduino, rosserial, publisher", "url": null }
dna, pcr, primer Title: How do primers anneal to ssDNA? In a PCR-type protocol, the high-temperature stage of the cycle will cause dsDNA to denature, as well as any annealed primers. At the lower temperature the denatured dsDNA will remain denatured. As far as I understand, though, the primers are re-annealed. My question is this: are there any enzymes that catalyse this re-annealing, or does this happen by chance? No enzyme is involved in base-pairing (annealing). It does happen by chance, but the chances are governed by stoichiometric considerations. The main such consideration is that the (low-molecular-weight) primers are added at a high concentration (greater than that of the DNA) and therefore have a greater chance of encountering the appropriate region of the DNA than its complementary strand does. (Reannealing of the denatured DNA strands can occur, but not over the time period of PCR.)
{ "domain": "biology.stackexchange", "id": 9762, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dna, pcr, primer", "url": null }
javascript, algorithm, strings, functional-programming, comparative-review let shiftIndex = arrB.indexOf(firstLetterA); if (shiftIndex === -1) { return false } while (shiftIndex < arrB.length) { const strBShifted = `${strB.substring(shiftIndex)}${strB.substring(0, shiftIndex)}`; if (strA === strBShifted) { return true } shiftIndex++; } return false; } console.log(isSameAfterShifting2('abc', 'acb')); Which one is more readable and easier to understand for you? You can check if the String is empty with !str instead of strA.length === 0, since console.log(Boolean('')); // false I think haveSameLength and isSame are extras; you can write strA.length === strB.length and it would still be readable. You can get the first letter with a simpler strA[0] instead of strA.substring(0, 1). Which one is more readable and easier to understand for you? A loop is easier to read and understand than a recursive function. But the whole approach can be made simpler using a for loop and Array.some(); here's what I would suggest: generate an array of combinations moving the letters one index at a time, so for a string abc you would have ['abc', 'bca', 'cab'], and see if one of the resulting array entries equals the second string: const isSameAfterShifting = (str1, str2) => { // check if the strings are empty or have different lengths if (!str1 || !str2 || str1.length !== str2.length) return false; // check if the strings are the same if (str1 === str2) return true;
{ "domain": "codereview.stackexchange", "id": 33492, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, algorithm, strings, functional-programming, comparative-review", "url": null }
algorithm, c, time-limit-exceeded, combinatorics, connect-four char field[length]; { int half = length / 2; memset(field, 1, half); memset(field + half, 0, length - half); } char best_field[length]; int best_score = 0; do { if (is_top_heavy(field, length)) continue; if (is_left_heavy(field, width, height)) continue; int score = 0; /* horizontal */ for (int row = 0; row < height; ++row) score += count_wins(field, width, height, 0, row, 1, 0); /* vertical */ for (int col = 0; col < width; ++col) score += count_wins(field, width, height, col, 0, 0, 1); /* leading diagonal */ for (int row = 0; row < height-4; ++row) score += count_wins(field, width, height, 0, row, 1, 1); for (int col = 0; col < width-4; ++col) score += count_wins(field, width, height, col, 0, 1, 1); /* trailing diagonal */ for (int row = 0; row < height-4; ++row) score += count_wins(field, width, height, width-1, row, 1, -1); for (int col = 4; col < width; ++col) score += count_wins(field, width, height, col, 0, 1, -1); if (score > best_score) { best_score = score; memcpy(best_field, field, length); print_field(score, field, width, height); } } while (advance_board(field, length) && field[0]); } This code completed in under 20 hours on my machine, with the following final result: 1000000 1100000 1110000 1111000 1111100 1111110 Score 37
{ "domain": "codereview.stackexchange", "id": 27055, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm, c, time-limit-exceeded, combinatorics, connect-four", "url": null }
answered Mar 29 at 15:33. The input to cdist is a vector array or a distance matrix; inputs are converted to float. With metric='cityblock' it computes the city block or Manhattan distance between the points, i.e. the sum of the absolute differences of the respective elements of two n-dimensional vectors u and v (see the L_m distance for more detail). Related metrics follow the same pattern: 'seuclidean' (with V=None) uses the variance of the components, the Mahalanobis metric uses the covariance matrix, and the same module also provides the Yule, Sokal-Sneath, Sokal-Michener and cosine distances between two 1-D arrays. The name comes from the grid layout of the streets of the New York borough of Manhattan. There is no need for matrix multiplication here; an efficient vectorized NumPy implementation can rearrange the absolute differences and use less memory with slicing and summations, with which the distances between all combinations of the input points are computed.
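A minimal sketch of what a city-block cdist computes, using plain NumPy broadcasting (equivalent in spirit to scipy.spatial.distance.cdist(XA, XB, metric='cityblock'); the example arrays are made up):

```python
import numpy as np

def cdist_manhattan(XA, XB):
    """Pairwise Manhattan (city block) distances between two point collections."""
    XA = np.asarray(XA, dtype=float)
    XB = np.asarray(XB, dtype=float)
    # Broadcast to shape (len(XA), len(XB), ndim), then sum absolute differences.
    return np.abs(XA[:, None, :] - XB[None, :, :]).sum(axis=-1)

D = cdist_manhattan([[0, 0], [1, 1]], [[3, 4]])
# D[0, 0] == 7 (|0-3| + |0-4|) and D[1, 0] == 5 (|1-3| + |1-4|)
```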
{ "domain": "parkerstreet.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.971129093889291, "lm_q1q2_score": 0.8505022307875183, "lm_q2_score": 0.8757869948899665, "openwebmath_perplexity": 2731.1613752439957, "openwebmath_score": 0.4695153832435608, "tags": null, "url": "https://parkerstreet.org/bin/qtul8gxn/6c2046-cdist-manhattan-distance" }
navigation, turtlebot Title: TurtleBot gmapping and amcl demos FATAL footprint error for move_base (Ticket 5185 submitted.) Using the latest Electric debians for the TurtleBot stack, running either the gmapping or amcl demo throws a FATAL move_base error as follows: [ INFO] [1317172658.105549257]: Received a 320 X 512 map at 0.050000 m/pix [FATAL] [1317172658.745797230]: The footprint must be specified as list of lists on the parameter server, /move_base/global_costmap/footprint was specified as [] terminate called after throwing an instance of 'std::runtime_error' what(): The footprint must be specified as list of lists on the parameter server with at least 3 points eg: [[x1, y1], [x2, y2], ..., [xn, yn]] [move_base-9] process has died [pid 25570, exit code -6]. log files: /home/patrick/.ros/log/63dc7caa-e95e-11e0-a56b-002163a7d4bb/move_base-9*.log Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2011-09-27 Post score: 0
{ "domain": "robotics.stackexchange", "id": 6796, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, turtlebot", "url": null }
python-3.x, datetime if __name__ == "__main__": current_hour = datetime.now().strftime("%H") # if 15 print(get_keys(current_hour)) # Then print ['def', 'pqr'] Basically, if the current hour is 2 PM i.e. 14 then it will return ['abc', 'def']. Well, the program works but I am not happy with the way I have written it. I will be glad if someone can review it and share some pointers with me to improve it. Thank you. time_mapping is data lasagna. Whereas it's good that you've added documentation, it's not enough - the elements are still untyped. The more helpful thing to do is represent it as a simple sequence (immutable tuple) of well-typed, immutable class instances (NamedTuple is easiest). Representing it as a dictionary is not useful because you never actually perform a key lookup. get_keys should probably not accept a current time in hours as an integer, but instead, a datetime.time. It can be further simplified by acting as an iterator instead of materializing a list. Don't hard-code for the two cases with either one or two ranges. Instead, just code for an arbitrary number of ranges. Don't strftime to get the hour from a datetime. Suggested import datetime from typing import NamedTuple, Iterator class TimePair(NamedTuple): key: str hour_ranges: tuple[range, ...] @classmethod def from_hours(cls, key: str, start: int, end: int) -> 'TimePair': """ :param key: Passed verbatim to TimePair.key :param start: Starting hour, inclusive. :param end: Ending hour, inclusive. If the next day, implies two ranges. """ if start <= end: ranges = (range(start, 1 + end), ) else: ranges = (range(start, 24), range(0, 1 + end)) return cls(key, ranges)
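Putting the reviewer's suggestions together, here is a sketch of get_keys as an iterator over the suggested structure. The keys 'abc', 'def', 'pqr' and the hour ranges below are assumptions chosen only to reproduce the examples in the question:

```python
import datetime
from typing import Iterator, NamedTuple

class TimePair(NamedTuple):
    key: str
    hour_ranges: tuple

    @classmethod
    def from_hours(cls, key: str, start: int, end: int) -> 'TimePair':
        if start <= end:
            ranges = (range(start, 1 + end),)
        else:  # window wraps past midnight: split into two ranges
            ranges = (range(start, 24), range(0, 1 + end))
        return cls(key, ranges)

# Assumed mapping, chosen to match the question's examples.
TIME_MAPPING = (
    TimePair.from_hours('abc', 9, 14),
    TimePair.from_hours('def', 14, 20),
    TimePair.from_hours('pqr', 15, 23),
)

def get_keys(now: datetime.time, mapping=TIME_MAPPING) -> Iterator[str]:
    # Yield keys lazily instead of materializing a list.
    for pair in mapping:
        if any(now.hour in r for r in pair.hour_ranges):
            yield pair.key

# list(get_keys(datetime.time(14))) -> ['abc', 'def']
# list(get_keys(datetime.time(15))) -> ['def', 'pqr']
```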
{ "domain": "codereview.stackexchange", "id": 45162, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python-3.x, datetime", "url": null }
quantum-gate, circuit-construction, matrix-representation, linear-algebra, textbook-and-exercises Title: Show that a $CZ$ gate can be implemented using a $CNOT$ gate and Hadamard gates Show that a $CZ$ gate can be implemented using a $CNOT$ gate and Hadamard gates and write down the corresponding circuit. Recall from Quantum Information Theory that $Z=HXH$. As $CNOT$ is a controlled-$X$ operation, we would expect that $CZ= (I \otimes H)CNOT(I\otimes H)$. Why would we expect this form? Where does this come from? Here is the CNOT gate: $$CNOT = |0\rangle \langle 0|\otimes I + |1\rangle \langle 1| \otimes X$$ So: $$(I \otimes H) CNOT (I \otimes H) = |0\rangle \langle 0|\otimes HH + |1\rangle \langle 1| \otimes HXH$$ If we will take into account $HXH = Z$ and $HH = I$, then: $$(I \otimes H) CNOT (I \otimes H) = |0\rangle \langle 0|\otimes I + |1\rangle \langle 1| \otimes Z = CZ$$
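The algebra above can also be verified numerically. A quick NumPy check using only matrix conventions (no quantum library assumed):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT = |0><0| (x) I + |1><1| (x) X, with the control on the first qubit.
CNOT = np.kron(np.diag([1, 0]), I) + np.kron(np.diag([0, 1]), X)
CZ = np.diag([1, 1, 1, -1])

conjugated = np.kron(I, H) @ CNOT @ np.kron(I, H)
# conjugated equals CZ up to floating-point error, confirming
# CZ = (I (x) H) CNOT (I (x) H), and indeed HXH = Z.
```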
{ "domain": "quantumcomputing.stackexchange", "id": 1657, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-gate, circuit-construction, matrix-representation, linear-algebra, textbook-and-exercises", "url": null }
steam Title: Why did steam locomotives not transfer power by cogwheels? Modern cars use cogwheels to transfer power from the engine to the wheels. Steam locomotives used some kind of bars (sorry, I'm not a native speaker) to transfer the power to the wheels. Why did the engineers not use cogwheels? Would steam locomotives have been faster if they had used cogwheels? Steam piston engines can generate a lot of torque from stationary, and the pistons can be physically remote from the boiler, so in most cases it is most convenient to have the pistons directly drive the wheels via a crank. Equally, as trains don't have a steering mechanism as such and have conical-section wheels, you don't need a differential either. In contrast, internal combustion engines need to be turning at a fairly moderate RPM to generate useful torque, and produce most of their torque and power in a fairly narrow rev range, so they need both a means of disengaging drive (clutch or viscous torque converter) and a selectable-ratio gearbox in order to provide useful torque at a wide range of road speeds. Also, IC engines tend to work better with multiple cylinders, as this smooths out the power delivery over the various different stages of the working cycle, and so need a crankshaft with a common output shaft. Steam engines are essentially pneumatic actuators, so you can make the working stroke as long as is convenient and get a reasonably consistent linear force. The external connecting rods on a steam locomotive are a direct analogue of the connecting rods which link the pistons of an IC engine to the crankshaft. The short answer is that the torque characteristic of a steam engine simply means that a gearbox is unnecessary, as torque is more or less independent of RPM over its normal working speed range.
{ "domain": "engineering.stackexchange", "id": 1985, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "steam", "url": null }
python, c# int sizint = (rbyte & 0b11000000) >> 6; int regint = rbyte & 0b00111111; uint rval = registers[regint]; immn <<= sizint_bitshift[sizint]; immn &= ~registerSegments[sizint]; rval ^= immn; registers[regint] = rval; registers[5] += 2 + (uint)sizint_lengths[sizint]; break; } case 0x3F: // XOR [r8 r8] { byte rbyte = bytes[addr + 1]; uint immn = registers[bytes[addr + 2] & 0b00111111]; int sizint = (rbyte & 0b11000000) >> 6; int regint = rbyte & 0b00111111; uint rval = registers[regint]; immn <<= sizint_bitshift[sizint]; immn &= ~registerSegments[sizint]; rval ^= immn; registers[regint] = rval;
{ "domain": "codereview.stackexchange", "id": 41441, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, c#", "url": null }
ros2 Title: How to launch a node with a parameter in ROS2? I am migrating a ROS 1 package to ROS 2 and couldn't figure out how to launch with a parameter in ROS 2. With ROS 1 I have a launch file that refers to a config file, and from the C++ code I use node.getParam. launch file: <launch> <arg name="node_name" default="collector" /> <arg name="config_file" default="" /> <node name="$(arg node_name)" pkg="collector" type="collector" respawn="true"> <rosparam if="$(eval config_file!='')" command="load" file="$(arg config_file)"/> </node> </launch> config file: my_param: 5 cpp code: double my_param = 0; n.getParam("my_param", my_param); My question is how that would translate to ROS 2. Originally posted by ezra on ROS Answers with karma: 51 on 2018-12-24 Post score: 2 The way I do it: from launch import LaunchDescription from launch.substitutions import EnvironmentVariable import os import launch_ros.actions import pathlib parameters_file_name = 'default.yaml' def generate_launch_description(): parameters_file_path = str(pathlib.Path(__file__).parents[1]) # get current path and go one level up parameters_file_path += '/config/' + parameters_file_name print(parameters_file_path) return LaunchDescription([ launch_ros.actions.Node( package='example_pkg', node_executable='example_node', output='screen', parameters=[ parameters_file_path ], ), ])
{ "domain": "robotics.stackexchange", "id": 32205, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2", "url": null }
javascript, jquery, html var createHoverFunction = function( isFrom ) { return function( index, element ) { var $element = $(element); var searches = splitData( $element, isFrom ); $element.hover( createToggleFunction( $element, "addClass", searches, !isFrom ), createToggleFunction( $element, "removeClass", searches, !isFrom ) ); } } $("[data-from]").each( createHoverFunction( true ) ); $("[data-to]").each( createHoverFunction( false ) ); }); Or, if you want code golf, then this is much shorter (and definitely very hard to read): JSFIDDLE $(document).ready(function(){var c=function(h,n){for(x in n)if(h.indexOf(n[x])>=0)return true},w=/\s+/,s=function(f){return f?"from":"to"},p=function(e,f){return e.data(s(f)).toString().split(w)},t=function(e,n,x,f){return function(){e[n]("hilighted" );$("[data-"+s(f)+"]").each(function(_,i){var I=$(i);if (c(p(I,f),x))I[n]("hilighted")})}},H=function(f){return function(_,e) {var E=$(e),s=p(E,f);E.hover(t(E,"addClass",s,!f),t(E,"removeClass",s,!f));}};$("[data-from]").each(H(1));$("[data-to]").each(H(0))});
{ "domain": "codereview.stackexchange", "id": 10652, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery, html", "url": null }
ros, rviz, moveit, interactive-markers, ros-groovy <include file="$(find tx60l_final3)/launch/moveit_rviz.launch"/> </launch> Edit (no gazebo-part in urdf): See here the Terminal-Output. Info: Removing and adding "MotionPlanning" inside Rviz didn't help. Edit (working links): Here you can download my robot-description. Here you can download my urdf-folder. Update: If I set the "fixed_frame" to a link of the robot itself, interactive markers for joint a4 show up. (See picture below.) But I need interactive markers in joint5. If I leave "fixed_frame" set to "base_footprint" or "odom_combined" (as it should be, I think) and "plan and execute" a random query, MoveIt! crashes (output). Update: Now the interactive markers work correctly. Many thanks to all supporters. viovio Originally posted by viovio on ROS Answers with karma: 117 on 2013-03-09 Post score: 4 Original comments Comment by Jeremy Zoss on 2013-03-14: Please mark an answer as correct, so the question is flagged as "answered". Glad it's working for you! Consider looking into http://ros.org/wiki/Industrial, which has several tutorials and templates for controlling your industrial robot through ROS. We'd love to add support for Staubli to ROS-I ! Comment by ROSkinect on 2015-03-03: did you tried to connect that to real robot or the simulator ? To me, it looks like the issue is that your joint_state_publisher is crashing due to a malformed robot URDF. See the following snippet from your error log: xml.parsers.expat.ExpatError: unbound prefix: line 3, column 4 [joint_state_publisher-3] process has died
{ "domain": "robotics.stackexchange", "id": 13272, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, rviz, moveit, interactive-markers, ros-groovy", "url": null }
asteroids, coordinate Title: Transform asteroid rotation to heliocentric ecliptic coordinates I'm working with data from the DAMIT database of asteroid shape models. I'm adding them to a visualization in which the sun is at [0, 0, 0] and the X, Y axes constitute the ecliptic plane of the solar system. Each asteroid model comes with some attributes that define its orientation and spin: λ (ecliptic longitude), β (ecliptic latitude), P (sidereal rotation period), φ0 (initial rotation angle), and JD0 (initial date). I've applied the rotation-matrix formulas suggested by the folks at DAMIT (the formulas themselves were given as images and are not reproduced here).
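As one concrete piece of those formulas, the pole direction (λ, β) converts to a unit spin-axis vector in ecliptic coordinates, and the rotation phase at a Julian date follows from P, φ0, and JD0. A sketch assuming the common conventions (λ, β, φ0 in degrees; P in hours); check these against the DAMIT documentation for your data:

```python
import math

def spin_vector(lam_deg, beta_deg):
    """Unit vector of the spin axis in heliocentric ecliptic coordinates."""
    lam, beta = math.radians(lam_deg), math.radians(beta_deg)
    return (math.cos(beta) * math.cos(lam),
            math.cos(beta) * math.sin(lam),
            math.sin(beta))

def rotation_angle_deg(phi0_deg, period_hours, jd, jd0):
    """Rotation phase at Julian date jd (assuming phi0 in degrees, P in hours)."""
    return (phi0_deg + 360.0 * 24.0 * (jd - jd0) / period_hours) % 360.0
```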
{ "domain": "astronomy.stackexchange", "id": 3469, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "asteroids, coordinate", "url": null }
orbit, the-moon, planet, lagrange-point Title: Is there a Lagrange point between the earth and the moon? Is there a Lagrange point between the earth and moon where a space station could sit forever without orbiting around either? Just curious, but it seems like a place like that would be perfect for either a second space station or maybe like a giant tv that could display international weather or something. Or maybe a giant laser that can shoot down all that space debris. Yes. The Earth-Moon system has a Lagrange point L1, positioned between the Earth and the Moon. It is at about 85% of the distance to the moon (about 320,000 km compared to 380,000 km). A body at L1 would orbit the Earth once every month (it would be in a 1:1 resonance with the moon). L1 is an unstable point, so you would need to use rockets to keep a satellite close to the L1 point for an extended period of time. 320,000 km is much further than geostationary orbit (at about 36,000 km) and much, much further than the Low Earth Orbit of the ISS (about 400 km). There are practical reasons not to put a space station there: it is much harder to get to, with few practical benefits, and it is too far for "tv" or "lasers".
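The "about 85%" figure falls out of the standard first-order approximation for L1, r ≈ R(1 - (m/3M)^(1/3)). A quick check with rounded Earth and Moon values:

```python
M_earth = 5.972e24   # mass of Earth, kg
m_moon = 7.342e22    # mass of Moon, kg
R = 384_400          # mean Earth-Moon distance, km

# First-order (Hill sphere) approximation: L1 sits this fraction of R
# inside the Moon's orbit, measured back from the Moon.
hill_frac = (m_moon / (3 * M_earth)) ** (1 / 3)
r_L1 = R * (1 - hill_frac)
# r_L1 comes out near 323,000 km, roughly 84-85% of the way to the Moon
```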
{ "domain": "astronomy.stackexchange", "id": 6300, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "orbit, the-moon, planet, lagrange-point", "url": null }
Here’s an initial implementation of the brute force primality test in C# … namespace PrimeNumbers { using System; public static class NumberExtensions { public static bool IsPrime(this ulong number) { if(number < 2) return false; // 0 & 1 are not prime if(number < 4) return true; // 2 & 3 are prime if(number % 2 == 0) return false; // 4, 6, 8, 10, 12, 14, ... are composite if(number % 3 == 0) return false; // 9, 15, 21, 27, 33, ... are composite // Now test for factors of the form 6n - 1 and 6n + 1 for n = 1, 2, 3, ... // 6n - 1 : 5, 11, 17, 23, 29, ... // 6n + 1 : 7, 13, 19, 25, 31, ... // ... up through floor(sqrt(number)) var max = (ulong)Math.Sqrt(number); // We will get here for number = 5, 7, 11, 13, 17, 19, 23, 25, 29, 31, 35, ... // Those numbers with floor(sqrt(number)) < 5 will not go through this loop at all // That's OK though since all of those numbers are prime: 5, 7, 11, 13, 17, 19, 23 // 6n - 1 and 6n + 1 for n = 1, 2, 3, ... // is equivalent to n and n + 2 for n = 6m + 5 where m = 0, 1, 2, ... for(ulong n = 5; n <= max; n += 6) { if(number % n == 0) return false; if(number % (n + 2) == 0) return false; } return true; } } } We can also leverage known primes. If N is small then we can simply do a lookup into a list of all known primes up to some value.
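The 6n ± 1 trial division is easy to cross-check outside C#. Here is a small Python sketch mirroring the same logic, which can be compared against a list of known small primes:

```python
import math

def is_prime(n: int) -> bool:
    """Trial division over 6k - 1 and 6k + 1 candidates, mirroring the C# version."""
    if n < 2:
        return False
    if n < 4:
        return True          # 2 and 3 are prime
    if n % 2 == 0 or n % 3 == 0:
        return False
    limit = math.isqrt(n)    # only test factors up through floor(sqrt(n))
    f = 5                    # candidate factors: 5, 7, 11, 13, 17, 19, ...
    while f <= limit:
        if n % f == 0 or n % (f + 2) == 0:
            return False
        f += 6
    return True

primes_to_50 = [n for n in range(51) if is_prime(n)]
```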
{ "domain": "alandavies.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9886682481380669, "lm_q1q2_score": 0.8128722894650741, "lm_q2_score": 0.8221891327004132, "openwebmath_perplexity": 759.7021361001217, "openwebmath_score": 0.5288369059562683, "tags": null, "url": "https://alandavies.org/blog/2018/02/18/on-primes" }
For pin codes, this means that you only need 10,001 people together to guarantee that at least two share the same pin code. You can imagine that if you were handing out pin codes, you would run out at the 10,000th person. At that point, you would have to give the next person the same code as someone else. There are definitely more than 10,000 people at a baseball game, so the movie idea works! I'm going to be a millionaire! Ok, back to math… ## Simulations show you need way fewer than 10,000 people Everything we have talked about so far is based on the idea that pin codes are randomly selected by people. That is, we are assuming that any pin code has the same chance of being used by a person as any other. For now, we will continue that assumption, but as you can imagine, people definitely don't behave this way. ### The question If we randomly assign pin codes, how many assignments (on average) will there be before there is a repeated code? We know by the pigeonhole principle that a repeat is guaranteed after 10,000. But what is the typical number? Surely, through randomness, it often takes fewer than 10,000, right? ### The simulation For the sake of ease with writing the code, we will assign each pin code a whole number 1–10,000. So you can imagine the code 0000 is 1, the code 0001 is 2, and the code 9999 is 10,000. This way, assigning a pin code is really just assigning a random number from 1 to 10,000. In fact, we can now state the problem as: "How many random selections from {1, 2, …, 10000} until there is a repeated value?" Let's look at the code!
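The post's own code isn't included in this excerpt, but a minimal simulation along the lines described (function and variable names are my own) might look like this. The birthday-problem estimate for the expected number of draws until a repeat is roughly sqrt(π/2 · 10000) ≈ 125 — far below the pigeonhole bound of 10,001:

```python
import random

def draws_until_repeat(n_codes: int = 10_000, rng: random.Random = random) -> int:
    """Assign uniformly random codes until one repeats; return how many were assigned."""
    seen = set()
    while True:
        code = rng.randint(1, n_codes)
        if code in seen:
            return len(seen) + 1   # this draw is the first repeat
        seen.add(code)

rng = random.Random(42)            # fixed seed so the run is reproducible
trials = [draws_until_repeat(rng=rng) for _ in range(2_000)]
average = sum(trials) / len(trials)
print(round(average))  # close to the birthday estimate of about 125
```

A single trial can of course finish after just 2 draws or drag on much longer, which is why we average over many trials.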
{ "domain": "jerimiannwalker.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9916842225056003, "lm_q1q2_score": 0.8368774605997831, "lm_q2_score": 0.8438951045175643, "openwebmath_perplexity": 681.4965363218902, "openwebmath_score": 0.7005283236503601, "tags": null, "url": "http://www.jerimiannwalker.com/category/cool-math/" }
As asked, the question is seeking to know the number of subsets of size 4 in a set of size 12. A subsequent accounting for the order in which they are chosen is a change in the nature of the question. So there are $${12\choose 4}$$ ways to do this. Your interpretation is correct. Were order important, you would be asked the number of ordered subsets of size four, or you might be asked to name them something like president, vice-president, secretary and treasurer. This labeling can be done to each subset of size 4 freely, so it boosts the count by a factor of $4! = 24.$ No such secondary labeling is present in the question. Being asked to choose "a group (set) of size 4" implies sampling without replacement. In a word, the book is on rock-solid ground.
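Both counts in the argument above can be checked directly with Python's standard combinatorics helpers (`math.comb`/`math.perm`, Python 3.8+):

```python
import math

groups = math.comb(12, 4)     # unordered: choose a group of 4 from 12 students
ordered = math.perm(12, 4)    # ordered: president, VP, secretary, treasurer

print(groups)                 # 495
print(ordered)                # 11880

# The ordered count is exactly the unordered count boosted by the 4! = 24
# ways to hand out the four labels within each chosen group.
assert ordered == groups * math.factorial(4)
```

So the book's unlabeled answer is 495, and labeling would multiply it by 24 to give 11,880.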
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9715639694252315, "lm_q1q2_score": 0.8258013767035624, "lm_q2_score": 0.8499711832583695, "openwebmath_perplexity": 316.0701972792545, "openwebmath_score": 0.841927170753479, "tags": null, "url": "https://math.stackexchange.com/questions/546239/in-how-many-ways-can-four-students-be-chosen-from-a-group-of-12-students" }
c++, performance, strings, library, c++23

size_t start { str.find_first_not_of( delimiter ) };
size_t end { };

#if METHOD == 1
    for ( auto idx { 0uz }; start != std::string_view::npos && idx < std::size( found_tokens_OUT ); ++idx )
    {
        end = str.find_first_of( delimiter, start );
        found_tokens_OUT[ idx ] = str.substr( start, end - start );
        ++found_tokens_count;
        start = str.find_first_not_of( delimiter, end );
    }
#else
    for ( auto&& token : found_tokens_OUT )
    {
        if ( start == std::basic_string_view<CharT, Traits>::npos ) break;
        end = str.find_first_of( delimiter, start );
        token = str.substr( start, end - start );
        ++found_tokens_count;
        start = str.find_first_not_of( delimiter, end );
    }
#endif

    if ( start == std::basic_string_view<CharT, Traits>::npos )
        return found_tokens_count;
    else
        return found_tokens_count = std::numeric_limits<size_t>::max( );
}

int main( )
{
    using std::string_view_literals::operator""sv;

    const auto str { "1 % "sv };
    constexpr auto delimiter { " \t"sv };
    std::array<std::string_view, 5> tokens;

    const auto token_count { tokenize( str, delimiter, { std::begin( tokens ), 2 } ) };
{ "domain": "codereview.stackexchange", "id": 44688, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, strings, library, c++23", "url": null }
coordination-compounds, electronic-configuration, molecular-structure Notes and references A. Cruz, V. Bertin, M. Castro, Int. J. Quantum Chem. 2000, 80, 298. DOI: 10.1002/1097-461X(2000)80:3<298::AID-QUA3>3.0.CO;2-2. I am not a quantum chemist, and my last calculation efforts date back to a research project during my master’s studies. Those acquainted with methods will have to comment on how good the paper is. N. M. Boag, J. A. K. Howard, J. L. Spencer, F. G. A. Stone, J. Chem. Soc., Dalton Trans. 1981, 1051. DOI: 10.1039/DT9810001051. Two platinum atoms with 18 electrons in a square-planar environment, one with 16 electrons in a trigonal environment. The former coordinated by $\ce{cod}$ and $\ce{cot}$ (each contributing two double bonds), the latter coordinated by two $\ce{cot}$ molecules (each contributing one double bond) and an ethylene. The three platinum centres are too far apart to be considered bonded, according to the authors. The bonding of the two 18-electron platinums to their respective $\ce{cot}$ unit is described as $\ce{cot^2-}$ and $\ce{Pt^2+}$, resulting in oxidation states of $\mathrm{+II}$ and $\mathrm{\pm 0}$ for platinum.
{ "domain": "chemistry.stackexchange", "id": 5508, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "coordination-compounds, electronic-configuration, molecular-structure", "url": null }
javascript, performance, beginner, node.js data = {"messages":[{"type":"flex","altText":`Episode ${epNo} is Now Available`,"contents":{"type":"bubble","size":"giga","hero":{"type":"image","url": banner,"size":"full","aspectRatio":"20:9","aspectMode":"cover","action":{"type":"uri","uri":emLink}},"body":{"type":"box","layout":"vertical","contents":[{"type":"text","text":"New Episode Available","weight":"bold","size":"xl","align":"center"},{"type":"box","layout":"vertical","margin":"lg","spacing":"sm","contents":[{"type":"text","text":`Episode ${epNo} is now up!`,"wrap":true,"align":"center"}]}]},"footer":{"type":"box","layout":"vertical","spacing":"sm","contents":[{"type":"separator","margin":"xs"},{"type":"button","style":"link","height":"sm","action":{"type":"uri","label":"Open Player","uri":emLink},"color":"#007bff"},{"type":"button","style":"link","height":"sm","action":{"type":"uri","label":"Open in Kodi (via custom
{ "domain": "codereview.stackexchange", "id": 40186, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, performance, beginner, node.js", "url": null }
deep-learning, nlp, transformer, text-generation Title: Based on transformer, how to improve the text generation results? If I do not pretrain the text generation model the way BART is pretrained, how can I improve the results of a transformer-based model such as tensor2tensor? What are some ideas for improving transformers on text generation tasks? If you have a lot of data available to train, you should apply the techniques used in large transformer models, like GPT-2: very deep models (48 layers for the 1.5B parameters), modified initialization, pre-normalization, and reversible tokenization. You could also apply GPT-3's locally banded sparse attention patterns. If you have very small training data, you can apply the "unwritten" aggressive techniques described in this tweet, namely data augmentation, discrete embedding dropout, normal dropout and weight decay, and lots of patient training time. Update: I feel like the tweet thread I referred to is important, so here are the most relevant tweets: How can you successfully train transformers on small datasets like PTB and WikiText-2? Are LSTMs better on small datasets? I ran 339 experiments worth 568 GPU hours and came up with some answers. I do not have time to write a blog post, so here a twitter thread instead. To give a bit background: All this came about by my past frustration with replicating Transformer-XL results on PTB and having very poor results on WikiText-2 (WT2). On WT2, my best model after 200+ experiments was 90ish ppl which is far from standard LSTM baselines (65.8 ppl). ... The key insight is the following: In the small dataset regime, it is all about dataset augmentation. The analog in computer vision is that you get much better results, particularly on small datasets, if you do certain dataset augmentations. This also regularizes the model. The most dramatic performance gain comes from discrete embedding dropout: You embed as usual, but now with a probability p you zero the entire word vector. 
This is akin to masked language modeling but the goal is not to predict the mask — just regular LM with uncertain context.
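A sketch of that "discrete embedding dropout" in NumPy (function name and the inverted-dropout rescaling by 1/(1−p) are my own choices, mirroring standard dropout; the thread itself doesn't specify the scaling):

```python
import numpy as np

def embedding_dropout(embeddings: np.ndarray, p: float,
                      rng: np.random.Generator) -> np.ndarray:
    """Zero entire word vectors (rows) with probability p, rescaling survivors.

    embeddings: (vocab_size, dim) embedding matrix.
    """
    if p <= 0.0:
        return embeddings
    keep = rng.random(embeddings.shape[0]) >= p   # one Bernoulli draw per word
    scale = 1.0 / (1.0 - p)                       # inverted-dropout rescaling
    return embeddings * (keep[:, None] * scale)

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 16))
dropped = embedding_dropout(emb, p=0.1, rng=rng)
zero_rows = np.sum(~dropped.any(axis=1))
print(zero_rows)  # about 10% of the 1000 word vectors are fully zeroed
```

The key difference from ordinary dropout is the granularity: whole rows (words) are removed, so every occurrence of a dropped word disappears from the context, not individual vector components.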
{ "domain": "datascience.stackexchange", "id": 8129, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "deep-learning, nlp, transformer, text-generation", "url": null }
beginner, c, bitwise For this task, using "%4x" would be more informative than "%u". Wider testing "I've tested with a number of different scenarios, and this code has passed all of them.". --> Testing with UINT_MAX, UINT_MAX-1, 1, 0 would help check out the "corners" of the function. Document restrictions Code limited to 0 <= p < bit_width and 0 <= n < bit_width. Since code cannot handle all possible input values, stating restrictions lessens incorrect usage. Defining behavior for all possible inputs has the advantage of not needing to list limitations, yet may not be functional needed (and code over-kill). Still good to post limitations. On this last point, perhaps a slight code re-write is in order as one could easily expect n == bit_width to be valid. // This assumes no padding in `unsigned` #define UINT_BIT_WIDTH (CHAR_BIT * sizeof(unsigned)) unsigned setbits(unsigned x, int p, int n, unsigned y) { unsigned new_mask = n >= UINT_BIT_WIDTH ? UINT_MAX : ~(~0u << n); ... }
{ "domain": "codereview.stackexchange", "id": 30910, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, c, bitwise", "url": null }
quantum-field-theory, renormalization, path-integral, effective-field-theory \end{align} Likewise, the functional integral measure factorizes as $$[\mathcal{D}\hat{\phi}]\equiv\prod_{|\mathbf{p}|\leq\Lambda}d\hat{\phi}(\mathbf{p})=\prod_{|\mathbf{p}|\leq s\Lambda}d\hat{\varphi}(\mathbf{p})\prod_{s\Lambda<|\mathbf{p}|\leq\Lambda}d\hat{\chi}(\mathbf{p})=[\mathcal{D}\varphi][\mathcal{D}\chi],$$ where I have used the fact that Fourier transforms are unitary. Performing the functional integral over $\chi$, one has $$e^{-\frac{1}{\hbar}S_{\mathrm{eff}}\,[\varphi;s\Lambda]}=\underset{C^{\infty}\,((s\Lambda,\Lambda])}{\int}[\mathcal{D}\chi]e^{-\frac{1}{\hbar}S[\varphi+\chi;\Lambda]}\quad\mathrm{or}\quad S_{\mathrm{eff}}[\varphi;s\Lambda]\equiv-\hbar\log\left[\,\underset{C^{\infty}\,((s\Lambda,\Lambda])}{\int}[\mathcal{D}\chi]\exp\left(-\frac{1}{\hbar}S[\varphi+\chi;\Lambda]\right)\right]. \tag{9}$$ From (8), the regularized action contains a part $S[\varphi;s\Lambda]$ which is independent of the field $\chi$, and so it can be factored out of the integral over $\chi$. Then,
{ "domain": "physics.stackexchange", "id": 92626, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, renormalization, path-integral, effective-field-theory", "url": null }
c, image, bmp /* Define allowable filters */ static const struct option long_options[] = { { "grayscale", no_argument, NULL, 'g' }, { "reverse", no_argument, NULL, 'r' }, { "sepia", no_argument, NULL, 's' }, { "blur", no_argument, NULL, 'b' }, { "help", no_argument, NULL, 'h' }, { "output", required_argument, NULL, 'o' }, { NULL, 0, NULL, 0 } }; FILE *in_file = stdin; struct flags options = { false, false, false, false, stdout }; int result = EXIT_SUCCESS; parse_options(long_options, "grsbho:", &options, argc, argv); if ((optind + 1) == argc) { in_file = (errno = 0, fopen(argv[optind], "rb")); if (!in_file) { errno ? perror(argv[optind]) : (void) fputs("Error - failed to open input file.", stderr); return EXIT_FAILURE; } } else if (optind > argc) { err_msg(); } if (process_image(&options, in_file, options.out_file) == -1) { result = EXIT_FAILURE; } if (in_file != stdin) { fclose(in_file); } return result; } Should the functions doing the input/output be moved to bmp.c?
{ "domain": "codereview.stackexchange", "id": 45150, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, image, bmp", "url": null }
potential Title: Electric Potential of an Electron Orbiting a Nucleus Based on my understanding, electric potential is $\frac{kg}{r}$. Why is the electric potential felt by an electron orbiting a nucleus is quantitatively described by the equation in image shown below? Source: Quantum Physics by Robert Eisberg This is the potential energy. First of all, $k = 1/(4 \pi \varepsilon_0)$. The potential energy is given by $V = -e\phi$ where $\phi$ is the electrostatic potential, which is the formula that you write (I think). So we have: $$ V(r) = -e\phi(r) = -e(\frac{ke}{r}) = \frac{-ke^2}{r} = \frac{-e^2}{4\pi\varepsilon_0} $$ $Z$ is the atomic number. If our electron is orbiting a nucleus with $Z \neq 1$, then we just multiply the above by $Z$.
{ "domain": "physics.stackexchange", "id": 76766, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "potential", "url": null }
Now, consider F(x) - xF(x) + x^2F(x), by adding like terms together. Since f_n = f_{n-1} - f_{n-2}, almost all terms will cancel out. The only nonzero terms are the constant and linear term. Thus, we have: \begin{aligned} F(x) - xF(x) + x^2F(x) &= f_0 + (f_1 - f_0)x \\\ (1 - x + x^2)F(x) &= f_0 + (f_1 - f_0)x \\\ F(x) &= \frac{f_0 + (f_1 - f_0)x}{1 - x + x^2} \\\ F(x) &= \frac{A-B + Ax}{1 - x + x^2} \end{aligned} Now, let's try to factorize 1 - x + x^2. It has two roots: \dfrac{1 \pm \sqrt{3}i}{2}. Let us use the symbols \psi and \bar{\psi} for \dfrac{1 + \sqrt{3}i}{2} and \dfrac{1 - \sqrt{3}i}{2}, respectively (\bar{x} denotes complex conjugation). Thus, 1 - x + x^2 must be equal to (x - \psi)(x - \bar{\psi}). But since \psi\bar{\psi} = 1, we get: \begin{aligned} 1 - x + x^2 &= (x - \psi)(x - \bar{\psi}) \\\ &= \psi\bar{\psi}(x - \psi)(x - \bar{\psi}) \\\ &= (\bar{\psi}x - \bar{\psi}\psi)(\psi x - \psi\bar{\psi}) \\\ &= (\bar{\psi}x - 1)(\psi x - 1) \\\ &= (1 - \psi x)(1 - \bar{\psi}x) \end{aligned}
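The cancellation can be verified numerically: multiplying the power series of F(x) by (1 - x + x^2) should leave only the constant and linear terms (which work out to f_0 + (f_1 - f_0)x). A small sketch with sample initial values of my choosing — note the sequence is periodic with period 6, as expected since ψ is a primitive sixth root of unity:

```python
def sequence(f0, f1, count):
    """Terms of f_n = f_(n-1) - f_(n-2)."""
    terms = [f0, f1]
    for _ in range(count - 2):
        terms.append(terms[-1] - terms[-2])
    return terms

def poly_mul_truncated(a, b, count):
    """Product of two coefficient lists, truncated to `count` coefficients."""
    out = [0] * count
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < count:
                out[i + j] += ai * bj
    return out

f0, f1 = 3, 5                       # arbitrary sample initial values
terms = sequence(f0, f1, 24)
# Multiplying the series by (1 - x + x^2) kills every coefficient past x^1:
product = poly_mul_truncated(terms, [1, -1, 1], 24)
print(terms[:8])    # [3, 5, 2, -3, -5, -2, 3, 5] -- period 6
print(product[:4])  # [3, 2, 0, 0] -- i.e. f0 + (f1 - f0) x
```

Every coefficient of the product from x^2 onward is f_n - f_{n-1} + f_{n-2} = 0 by the recurrence, which is exactly the cancellation the derivation relies on.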
{ "domain": "codechef.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9840936087546923, "lm_q1q2_score": 0.8441241099143849, "lm_q2_score": 0.8577681049901037, "openwebmath_perplexity": 3470.341465561802, "openwebmath_score": 0.9550616145133972, "tags": null, "url": "https://discuss.codechef.com/t/chn08-editorial/12127" }
You cannot find the geometric mean of negative numbers. Generally, the geometric mean of n numbers is the nth root of their product. A meaningful example: if something grows by a factor of x_i in each of ten periods (i = 1, …, 10), then its total growth is GM(x_i)^10 — multiplying by 10 different numbers gives the same result as multiplying by their geometric mean ten times. For example, say you want to find the geometric mean of the value of an object that increases by 10%, and then falls by 3%.
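That worked example (+10% then −3%) can be computed directly; a minimal sketch (function name is my own):

```python
import math

def geometric_mean(values):
    """nth root of the product of n positive numbers."""
    if any(v <= 0 for v in values):
        raise ValueError("geometric mean is only defined for positive numbers")
    return math.prod(values) ** (1.0 / len(values))

# Growth of +10% then -3%, expressed as multiplicative factors:
factors = [1.10, 0.97]
gm = geometric_mean(factors)
print(f"{gm:.4f}")  # 1.0330 -- the equivalent constant per-step growth factor

# Multiplying by the factors equals multiplying by the geometric mean each time:
assert abs(math.prod(factors) - gm ** len(factors)) < 1e-12
```

So the two uneven changes are equivalent to growing by about 3.30% in each of the two periods — which is why the geometric mean, not the arithmetic mean, is the right average for growth rates.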
{ "domain": "com.sv", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9693241982893258, "lm_q1q2_score": 0.8258184685700949, "lm_q2_score": 0.8519528038477824, "openwebmath_perplexity": 1452.4229376157004, "openwebmath_score": 0.23681902885437012, "tags": null, "url": "http://www.epersonnel.com.sv/o6tbydc5/4a7fa9-how-to-calculate-geometric-mean" }
Complex numbers and quadratic equations. Multiplication of pure imaginary numbers by non-finite numbers might not match MATLAB. That is, we can actually put negative numbers in the domain of this function. Group the real coefficients (3 and 5) and the imaginary terms: $$( \blue{ 3 \cdot 5} ) ( \red{ \sqrt{-6}} \cdot \red{ \sqrt{-2} } )$$ The union of the set of all imaginary numbers and the set of all real numbers is the set of complex numbers. We prove that eigenvalues of a real skew-symmetric matrix are zero or purely imaginary, and that the rank of the matrix is even. The code generator does not specialize multiplication by pure imaginary numbers—it does not eliminate calculations with the zero real part. A complex number is any expression that is a sum of a pure imaginary number and a real number. The square of an imaginary number bi is −b². For example, 5i is an imaginary number, and its square is −25. By definition, zero is considered to be both real and imaginary. The Fourier transform of a real even function is purely real. Complex numbers of the form i{y}, where y
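The grouping example above hides the classic pitfall: √−6 · √−2 is not √12. Rewriting each factor as a pure imaginary number first gives the correct sign, which is easy to check with Python's complex numbers:

```python
import cmath
import math

# Rewrite each square root of a negative number as a pure imaginary number first:
a = 1j * math.sqrt(6)    # sqrt(-6)
b = 1j * math.sqrt(2)    # sqrt(-2)

product = a * b
print(product)           # (-3.464...+0j), i.e. -sqrt(12), NOT +sqrt(12)

# cmath.sqrt takes the principal root and agrees with the manual rewrite:
assert cmath.isclose(cmath.sqrt(-6) * cmath.sqrt(-2), product)

# The square of a pure imaginary number b*i is -b**2:
assert (5j) ** 2 == -25
```

The sign flip comes from i · i = −1; naively applying √x · √y = √(xy) to negative radicands silently drops that factor.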
{ "domain": "livelovelocks.com", "id": null, "lm_label": "1. Yes\n2. Yes\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9736446425653806, "lm_q1q2_score": 0.8332948912377289, "lm_q2_score": 0.855851154320682, "openwebmath_perplexity": 971.1354126985367, "openwebmath_score": 0.5506659150123596, "tags": null, "url": "http://livelovelocks.com/eyemc/products-of-pure-imaginary-numbers-a258e0" }
c, sorting, linked-list, mergesort

Minor

Spelling: fuction --> function.

sizeof(*list) can be coded more simply as sizeof *list. (style issue)

Not a fan of using printf(string) to print strings. printf() expects that first argument to point to a string used as a format - which is a problem should it contain "%". Consider fputs(string, stdout).

Uncertain that const is generally OK in main(). Since it is test code, suggest simplifying.

// int main(int argc, const char * argv[])
int main(int argc, char * argv[])

Add date and your ID (name) to the file as a comment.

Format: The below format is hard to maintain with automatic formatting tools (at least mine). Maybe it does well with yours. Formatting is a pain and should not be maintained manually. Assuming you did not use such a tool, try one.

static linked_list_node* merge(linked_list_node* left_head, linked_list_node* right_head)

[Edit] Simplified merge(), only a while() loop needed. Down-side: need a linked_list_node variable and not just a linked_list_node *.

static linked_list_node *merge(linked_list_node *left, linked_list_node *right) {
    linked_list_node head; // Only use next field
    linked_list_node *tail = &head;
    while (left && right) {
        if (right->value < left->value) {
            tail->next = right;
            tail = right;
            right = right->next;
        } else {
            tail->next = left;
            tail = left;
            left = left->next;
        }
    }
    tail->next = left ? left : right;
    return head.next;
}
{ "domain": "codereview.stackexchange", "id": 21969, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, sorting, linked-list, mergesort", "url": null }
special-relativity, energy, mass-energy $$ \boxed{ E^2 = (m c^2 )^2 = (m_0 c^2)^2 + (pc)^2} $$ In modern day notation, physicists have decided to drop discussion of the relativistic mass $m$ since it is not an absolute constant and depends on the speed of the particle. Nowadays, we only talk about the rest mass, $m_0$. However, in a confusing notational change physicists today decided to use $m$ for the rest mass (which in today's notation is not confusing at all, since we don't talk about relativistic mass, but it is often confusing to students who try to compare Einstein's original papers with books written today). Following modern day notation then, we only have ONE equation, namely $$ \boxed{ E^2 = (m c^2)^2 + (pc)^2 } $$ where in the above equation $m$ is now the rest mass.
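The equivalence of the two boxed forms is easy to verify numerically: with the old-style relativistic mass m = γm₀, the quantity E = mc² also satisfies E² = (m₀c²)² + (pc)². A quick check (speed and mass values are my own illustrative choices):

```python
import math

c = 299_792_458.0        # m/s
m0 = 9.109e-31           # kg, electron rest mass (rounded)
v = 0.6 * c              # some speed below c; gamma = 1.25 here

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
m = gamma * m0           # old-style "relativistic mass"
E = m * c ** 2           # E = m c^2 with relativistic mass
p = m * v                # relativistic momentum = gamma * m0 * v

lhs = E ** 2
rhs = (m0 * c ** 2) ** 2 + (p * c) ** 2
print(abs(lhs - rhs) / lhs)  # tiny: both boxed equations describe the same E
```

Algebraically this is just γ² = 1 + γ²(v/c)², which holds for every v < c.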
{ "domain": "physics.stackexchange", "id": 61266, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, energy, mass-energy", "url": null }
aqueous-solution, concentration, terminology Title: Volume in decimetres cubed dimensional analysis In chemistry class, we were learning about concentration, which is the number of moles of a substance per decimeter cubed of water, expressed as $\mathrm{mol} \cdot \mathrm{dm}^{-3}$. However, I was confused when my teacher wrote decimeters cubed as $\mathrm{dm}^{-3}$ instead of what I presumed would be $\mathrm{dm}^3$. I asked why and he didn't know. Why do chemists write it like this? This touches a very powerful analytical technique called unit analysis (aka dimensional analysis). Moles per decimetre cubed is $\dfrac{\text{moles}}{\text{dm}^3} = \text{moles}\cdot \text{dm}^{-3} \ne \text{moles}\cdot \text{dm}^3$ So the -3 indicates that the unit is in the denominator not the numerator. PS - I learned chemistry when there were only four elements - earth, wind, water and air. I think of liters not $\text{dm}^3$.
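The same unit analysis in a tiny worked example (the numbers are made up for illustration):

```python
# Concentration is amount / volume; the -3 exponent in mol dm^-3 simply
# records that dm^3 sits in the denominator.
moles = 0.50                        # mol of solute
volume_cm3 = 250.0                  # cm^3 of solution
volume_dm3 = volume_cm3 / 1000.0    # 1 dm^3 = 1000 cm^3 (= 1 litre)

concentration = moles / volume_dm3  # units: mol / dm^3 = mol dm^-3
print(concentration)                # 2.0 (mol dm^-3)
```

Writing mol dm⁻³ rather than mol/dm³ is purely notational: negative exponents keep all units on one line, which matters once expressions involve several divided units.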
{ "domain": "chemistry.stackexchange", "id": 7273, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "aqueous-solution, concentration, terminology", "url": null }
forces, newtonian-mechanics, momentum Your force meter measures in what we now call pounds. In other words, a reading of $3$ on the force meter corresponds to the force we would now call $3\ \mathrm{lb_F}$, and similarly for other numbers - but of course, when you take that reading, you just think of it as "$3$". Your momentum meter measures in what we would currently call pound-feet per second, $\mathrm{lb_M}\;\mathrm{ft}\;\mathrm{s}^{-1}$. Your time meter measures in units equivalent to modern seconds.
{ "domain": "physics.stackexchange", "id": 7345, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "forces, newtonian-mechanics, momentum", "url": null }
ros, teb-local-planner, eband-local-planner Title: difference between eband_local_planner and teb_local_planner What exactly is the difference between the teb and eband local planners, and how exactly do they work differently? Originally posted by rajnunes on ROS Answers with karma: 73 on 2016-08-19 Post score: 5

eband_local_planner (classical Elastic Band approach by Quinlan et al.) and teb_local_planner (Timed Elastic Band (TEB) approach) are two completely different planning algorithms. However, the TEB principle is based on the classic elastic band idea.

eband_local_planner (Elastic Band)
- Local path deformation (path: no timing law) based on internal and external forces
- Internal forces contract the path (-> leading to the shortest path between start and goal)
- External forces repel the path from obstacles
- Implementation based on bubbles that represent discrete path points and free-space
- Adaptation of the trajectory length w.r.t. bubbles/free-space (insertion and deletion of discrete points)
- Extension to non-holonomic kinematics (supports differential-drive and omnidirectional robots)
- Subject to local minima (e.g. left or right path around an obstacle, depends on initial path)

teb_local_planner (Timed Elastic Band)
- Local trajectory deformation/optimization (trajectory: includes temporal information)
- Instead of generating and applying forces, an objective/cost function is minimized
- Temporal information is subject to optimization -> time-optimal trajectories (replacement for the internal forces)
- Temporal information allows incorporation of (kino-)dynamic constraints during optimization (no need for a dedicated path-following controller; the teb_local_planner mimics a predictive controller)
- Adaptation of the trajectory length based on the temporal discretization (insertion and deletion of discrete trajectory points)
- Supports differential-drive, car-like and omnidirectional robots
- Explores multiple distinctive topologies for parallel trajectory optimization in order to partially overcome the local minima problem (only in the scope of the local costmap due to limited CPU resources; a global planner is still required)
- Path-following mode (minimize distance to global plan instead of minimizing transition time)
- Bottleneck: very high computational burden (-> limited local costmap size/resolution resp. robot size)
{ "domain": "robotics.stackexchange", "id": 25560, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, teb-local-planner, eband-local-planner", "url": null }
c#, entity-framework As I said, I have many different queries that produce different kinds of data, but all have one common part: the building of a subquery that selects only documents accessible to a certain UserID. Is there a better way to 'inject' this common query part into multiple Entity Framework queries rather than instantiate DocumentPermissionsHelper every time? My own solution seems somewhat clumsy to me. Probably more can be done to clean up that code, but from your snippet I'd first introduce an extension method. You're using an object just to hold some parameters.

static class MyContextExtensions
{
    public static IQueryable<Document> SelectDocumentsAccessibleToUser(this MyDbContext dc, IQueryable<Document> query, Guid userId)
    {
        var permissionsQuery = dc. ... // really big query with multiple JOIN's to determine which user has access to which documents

        IQueryable<Guid> docIdsAccessibleToUserQuery =
            from permissions in permissionsQuery.Where(_ => _.UserId == userId && _.AccessLevel == "Read")
            select permissions.DocumentId;

        return from accessibleDocId in docIdsAccessibleToUserQuery.Distinct()
               from accessibleDoc in query.Where(g => g.Id == accessibleDocId)
               select accessibleDoc;
    }
}

Simply used like this:

public int GetLatestDocsCount(Guid currentUserId)
{
    using (var dc = new MyDbContext(_environment))
    {
        var docQuery = dc.Docs.Where(d => d.CreatedDate > DateTime.Now.AddDays(-1));
        docQuery = dc.SelectDocumentsAccessibleToUser(docQuery, currentUserId);
        int latestDocsCount = docQuery.Count();
        return latestDocsCount;
    }
}

Few other things:
{ "domain": "codereview.stackexchange", "id": 25344, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, entity-framework", "url": null }
ros Title: Error with osm_cartography on ROS indigo I installed osm_cartography using "git clone" into my workspace, and when I execute the example (viz_osm.launch), it shows me the following error: Traceback (most recent call last): File "/home/andres/ros_ws/open_street_map/osm_cartography/scripts/viz_osm", line 65, in import osm_cartography.cfg.VizOSMConfig as Config ImportError: No module named osm_cartography.cfg.VizOSMConfig Please can somebody help with this error? Regards. Originally posted by Fenix on ROS Answers with karma: 5 on 2015-10-07 Post score: 0 Some things to check: Did you build your workspace after cloning the source? Did you source devel/setup.bash from the workspace after building it? UPDATE: this is a catkin package, you don't build it with rosmake. Instead: $ cd ~/ros_workspace $ catkin_make $ source devel/setup.bash After that you can run the program. Originally posted by joq with karma: 25443 on 2015-10-08 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Fenix on 2015-10-08: Hi joq, I made this: cd ~/ros_workspace. git clone //repository/package_name\. cd package_name. rosmake. To update the package later on, run the commands: roscd package_name. git pull. rosmake --pre-clean. Please, How do you install the osm_cartography package? Regards Comment by Fenix on 2015-10-09: Hi again joq, Thanks a lot for your help and time. Problem solved.
{ "domain": "robotics.stackexchange", "id": 22750, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros", "url": null }
c++, event-handling, pointers, callback

template<typename T>
T* hasEventPolled(const std::string &id);

~windowEventManager();
};

Grabbing and setting the pointer

void windowEventManager::pollEvent(sf::Event event)
{
    for (auto &eventCheck : _subscribedEvents) {
        if (eventCheck.event == event.type) {
            switch (event.type) {
            case sf::Event::KeyPressed:
            case sf::Event::KeyReleased:
                eventCheck.data = new sf::Event::KeyEvent(event.key);
                break;
            case sf::Event::MouseButtonPressed:
            case sf::Event::MouseButtonReleased:
                eventCheck.data = new sf::Mouse::Button(event.mouseButton.button);
                break;
            case sf::Event::MouseWheelMoved:
            case sf::Event::MouseWheelScrolled:
                eventCheck.data = new bool(true);
                break;
            case sf::Event::TextEntered:
                eventCheck.data = new sf::String(event.text.unicode);
                break;
            default:
                break;
            }
            eventCheck.polled = true;
        }
    }
}

I clear events before each poll, and in the destructor, so I don't have memory leaks/too much on the heap

void windowEventManager::clearEvents()
{
    for (auto &eventCheck : _subscribedEvents) {
        if (eventCheck.data) {
            delete eventCheck.data;
            eventCheck.data = nullptr;
        }
        eventCheck.polled = false;
    }
}
{ "domain": "codereview.stackexchange", "id": 22483, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, event-handling, pointers, callback", "url": null }
# In a topology, if a third open set is formed by the intersection of two open sets, how is it Hausdorff? A very basic question. If a topology $T$ has open sets $O_1$ and $O_2$, and $O_3$ is their intersection or union, then how can the neighborhoods of $O_3$ and $O_2$, or $O_3$ and $O_1$, be disjoint, given that a neighborhood is a superset of the open set? So how can any topology ever be Hausdorff? "A topological space (X,T) is a Hausdorff space if any two distinct points in X have two disjoint neighborhoods." Please correct me. My understanding has gone haywire. • A space is Hausdorff if distinct points can be separated by disjoint neighbourhoods. I'm not sure what your problem is here. – Ian Coley Feb 13 '14 at 22:05 • Hausdorffness does not say any two open sets are disjoint. It says for any two fixed points $x$ and $y$, there are disjoint open sets $O_x$ and $O_y$ with $x\in O_x$ and $y\in O_y$. – David Mitra Feb 13 '14 at 22:06 • A point is both in open set $O_1$ and open set $O_3$ (from above). So they can never be disjoint, right? And $O_3$ must exist as per the intersection condition of topology. – kosmos Feb 13 '14 at 22:09 • It doesn't say for all pairs of open subsets, it just says there must be at least one pair of open subsets such that the condition is satisfied – Robert Wolfe Feb 13 '14 at 22:12 • Think about the real line. For any two points you can find an infinite number of open balls containing both. However, you can also find two disjoint balls that contain only one of them. – Felipe Jacob Feb 13 '14 at 22:14
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9603611574955211, "lm_q1q2_score": 0.8181823790298391, "lm_q2_score": 0.8519528019683105, "openwebmath_perplexity": 168.55461839013014, "openwebmath_score": 0.885749101638794, "tags": null, "url": "https://math.stackexchange.com/questions/675610/in-a-topology-if-a-third-open-set-is-formed-of-intersection-of-two-open-sets-h" }
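To make the last comment concrete, here is the real-line separation written out (a standard example, not taken from the original thread):

```latex
\text{Take distinct } x < y \text{ in } \mathbb{R} \text{ and set } \varepsilon = \tfrac{y-x}{2}.
\text{Then } O_x = (x-\varepsilon,\,x+\varepsilon) \text{ and } O_y = (y-\varepsilon,\,y+\varepsilon)
\text{ are open, with } x \in O_x,\ y \in O_y, \text{ and } O_x \cap O_y = \emptyset.
```

Many other open sets, such as $(x-1,\,y+1)$, contain both points; Hausdorffness only asks that *some* disjoint pair of neighborhoods exists, not that all of them be disjoint.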
homework-and-exercises, waves, harmonic-oscillator Title: How to find the equation of simple harmonic motion from positional information at 3 different times? Given a particle at three distinct positions $x_1, x_2$ and $x_3$ from the equilibrium position at different times $t_1, t_2$ and $t_3$, how can we find the amplitude, frequency and initial phase? It seems to me that I lack the mathematical knowledge necessary to solve the equations involved. What are such equations called and where can we learn about solving them? $x_1 = A \sin(\omega \times t_1) + B \cos(\omega \times t_1)$ $x_2 = A \sin(\omega \times t_2) + B \cos(\omega \times t_2)$ $x_3 = A \sin(\omega \times t_3) + B \cos(\omega \times t_3)$ where $\omega \times t_1, \ \omega \times t_2, \ \omega \times t_3$ are less than $2\pi$ but greater than $0$. Apologies, I am ignorant of the nomenclature and sources. If you are interested in a quick homing in and won't falter at issues of uniqueness and optimization of solutions, here is a direct way to get you where you might like to go. First rearrange your real coefficients to polar form, $$ A\equiv r \cos \phi , \qquad B\equiv r \sin\phi ,\\ r=\sqrt{A^2+B^2} , \qquad \phi=\arctan (B/A). $$ It follows that your three equations are transcribable as $$ x_i/r=\sin (\omega t_i+\phi), $$ whence $$ y_i\equiv \arcsin (x_i/r)= \omega t_i+\phi ~. $$ One may eliminate the unknowns $\phi,\omega$ among these three equations to consider the (transcendental?) equation $$
{ "domain": "physics.stackexchange", "id": 54584, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, waves, harmonic-oscillator", "url": null }
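A quick numeric check of the polar rewriting above (pure standard library; the variable values are illustrative, and `atan2` is used instead of $\arctan(B/A)$ so the quadrant is right even when $A \le 0$):

```python
import math

# Verify that A*sin(w*t) + B*cos(w*t) == r*sin(w*t + phi)
# with r = sqrt(A^2 + B^2) and phi = atan2(B, A).
A, B, w = 3.0, 4.0, 2.5
r = math.hypot(A, B)      # sqrt(A**2 + B**2) = 5.0 for these values
phi = math.atan2(B, A)    # robust version of arctan(B/A)

for t in (0.1, 0.7, 1.3):
    lhs = A * math.sin(w * t) + B * math.cos(w * t)
    rhs = r * math.sin(w * t + phi)
    assert abs(lhs - rhs) < 1e-12
```

This is only a sanity check of the identity, not a solver for $\omega$; the elimination step in the answer still has to be done numerically.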
logic, first-order-logic, propositional-logic Title: In the Resolution equivalence ($\neg A \implies B, B \implies C \models \neg A \implies C$) must $A$ be negated? The sheet of equivalences given to us in class provides the equivalences \begin{array}{|c|c|c|} \hline \text{Resolution} & A \vee B, \neg B \vee C \models A \vee C & \neg A \implies B, B \implies C \models \neg A \implies C \\ \hline \end{array} I noticed that the $A$ is negated. Is this necessary for proper Resolution? Or is this just an example that $A$ can be negated? To me it makes logical sense that $A \implies B, B \implies C \models A \implies C$, but being somewhat new to the subject matter I would like to ensure that $\neg A$ is not necessary for Resolution. The answer is yes. But the departure point is slightly different. Here it is: \begin{array}{|c|c|} \hline \neg A \vee B, \neg B \vee C \models \neg A \vee C & A \implies B, B \implies C \models A \implies C \\ \hline \end{array} The former statement is just the fact that if you start with $A\vee B$ and turn it into an implication you find $\neg A\implies B$.
{ "domain": "cs.stackexchange", "id": 8427, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "logic, first-order-logic, propositional-logic", "url": null }
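Both entailments on the sheet can be brute-force verified over all eight truth assignments; a small standalone check (the helper names are mine):

```python
from itertools import product

def implies(p, q):
    """Material implication p -> q."""
    return (not p) or q

# Hypothetical syllogism: A->B, B->C |= A->C (no negation on A needed).
entailment_holds = all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product([False, True], repeat=3)
)

# The clause form of the same fact: (A v B), (~B v C) |= (A v C).
resolution_holds = all(
    implies((a or b) and ((not b) or c), a or c)
    for a, b, c in product([False, True], repeat=3)
)
```

Both flags come out `True`: whenever the premises hold, so does the conclusion, with or without the negation on $A$.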
\end{document} Use \intertext to place text between aligned equations. There is rarely any need to manually align material using 'hard' spaces. The amsmath package provides environments that meet all of the most common requirements. \documentclass[fleqn, 12pt]{article} \usepackage{amsmath} \begin{document} From the model, \begin{align*} E( Y_i - \bar{ Y } ) &= E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ] \\ \intertext{Since the first normal equation gives $Y_i = \beta_0 + \beta_1 X_i$ and $\bar{Y} = \beta_0 + \beta_1 \bar{X}$} & = \beta_1( X_i - \bar{X} ) \end{align*} \end{document} • Thanks for the answer. The problem with this is that it removes the indentation of the middle line, which I'd prefer to keep. – The Pointer Jun 3 '18 at 13:59 • @ThePointer You're removing all indentations; why should that line be indented? – egreg Jun 3 '18 at 14:01 • @egreg I want it to be viewed as comment specifically with regards to that part of the equation, rather than something more major, if that makes sense. – The Pointer Jun 3 '18 at 14:03 Another solution is to just use \tag. This seems to be appropriate semantically, seeing as the text pertains to the previous equation-line.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9553191284552529, "lm_q1q2_score": 0.8414422909012761, "lm_q2_score": 0.8807970717197768, "openwebmath_perplexity": 1112.5858445806566, "openwebmath_score": 0.9673343896865845, "tags": null, "url": "https://tex.stackexchange.com/questions/434771/lining-up-nonconsecutive-multi-line-equations/434775" }
ros, node Title: How to structure a node to publish a topic using classes? I am writing a node but I have some doubts of how to structure it if I am using classes. Right now I have written two methods. The first one is: #include "ros/ros.h" #include "std_msgs/String.h" #include <sstream> class node_class { public: node_class(); private: ros::NodeHandle nh_; ros::Publisher pub_; std_msgs::String msg; ros::Rate loop_rate; }; node_class::node_class(): pub_(nh_.advertise<std_msgs::String>("chatter", 10)), loop_rate(1) { msg.data = "hello world"; while(ros::ok()) { pub_.publish(msg); loop_rate.sleep(); } } int main(int argc, char **argv) { ros::init(argc, argv, "node_class"); node_class this_node; while(ros::ok()) ros::spinOnce(); return 0; }
{ "domain": "robotics.stackexchange", "id": 18903, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, node", "url": null }
discrete-signals, phase, time-frequency Title: How does time shift correspond to phase change in a discrete signal? I was watching this video where the presenter remarks: For a discrete signal, time shift corresponds to phase change in a discrete signal but not vice versa.
{ "domain": "dsp.stackexchange", "id": 1726, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, phase, time-frequency", "url": null }
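The claim in the question can be made concrete with a tiny DFT experiment (naive $O(N^2)$ transform, standard library only): a circular time shift by $n_0$ samples multiplies bin $k$ by $e^{-j2\pi k n_0/N}$, changing only the phase, never the magnitude.

```python
import cmath

def dft(x):
    """Naive DFT of a finite sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.5, -1.0, 0.0, 3.0]
N = len(x)
n0 = 2                          # circular shift by two samples
x_shifted = x[-n0:] + x[:-n0]   # x[(n - n0) mod N]

X = dft(x)
X_shifted = dft(x_shifted)

for k in range(N):
    # Shift theorem: X_shifted[k] == exp(-j*2*pi*k*n0/N) * X[k]
    expected = cmath.exp(-2j * cmath.pi * k * n0 / N) * X[k]
    assert abs(X_shifted[k] - expected) < 1e-9
    # Magnitudes are unchanged; only the phase moves.
    assert abs(abs(X_shifted[k]) - abs(X[k])) < 1e-9
```

The converse direction is where the "not vice versa" lives: an arbitrary per-bin phase change need not correspond to any integer-sample circular shift.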
algorithms, linear-programming, clustering Title: Formulate a 2-clustering problem as an LP The problem: Suppose there are $n$ points in the plane, and we want to partition the points into two clusters such that the sum of the diameters of the clusters is minimized. The diameter of a cluster is the maximum distance between any pair of its points. My question is, how can we formulate the above problem as an LP? My attempt: I have thought hard about finding an LP formulation for the above problem, but I have no idea. Let $P$ be the given set of points. Declare a variable $x_p$ for every point $p \in P$ in the plane. Set $x_p = 1$ if point $p$ belongs to cluster $1$ and $x_p = -1$ if point $p$ belongs to cluster $2$. Then, the integer linear program would be as follows: Objective function: minimize $d_1+d_2$ Constraints: $\frac{(x_p + x_{p'})}{2} \cdot d(p,p') \leq d_1$ for every pair $p,p' \in P$ $-\frac{(x_p + x_{p'})}{2} \cdot d(p,p') \leq d_2$ for every pair $p,p' \in P$ $x_p \in \{1,-1 \}$ for every point $p \in P$ $d_1,d_2 \geq 0$ Here, $d(p,p')$ denotes the distance between points $p$ and $p'$.
{ "domain": "cs.stackexchange", "id": 18879, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, linear-programming, clustering", "url": null }
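For tiny instances the objective value that the ILP should reach can be cross-checked by brute force over all $2^n$ assignments (illustrative helper names, not part of the original post):

```python
from itertools import product
import math

def diameter(points):
    """Max pairwise Euclidean distance; 0 for empty or singleton clusters."""
    return max((math.dist(p, q) for p in points for q in points), default=0.0)

def best_two_clustering(points):
    """Enumerate all +/-1 labelings and return the minimum d1 + d2."""
    best = math.inf
    for labels in product([1, -1], repeat=len(points)):
        c1 = [p for p, s in zip(points, labels) if s == 1]
        c2 = [p for p, s in zip(points, labels) if s == -1]
        best = min(best, diameter(c1) + diameter(c2))
    return best

pts = [(0, 0), (1, 0), (10, 0), (11, 0)]
# Optimal split is {(0,0),(1,0)} vs {(10,0),(11,0)} with cost 1 + 1 = 2.
```

Exponential enumeration is only a validation tool, of course; the ILP (or a combinatorial algorithm) is what you would actually solve.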
Use the mean value theorem: $$f'(c_0)=\frac{f(k)-f(0)}{k-0}=\frac{1}{k}\\f'(c_1)=\frac{f(1)-f(k)}{1-k}=\frac{-1}{1-k}$$ Now, there are 3 cases: $\begin{cases}k\in(0,\frac12)\\k=\frac12\\k\in(\frac12,1)\end{cases}$ In case 1, $f'(c_0)>\frac1{\frac12}=2$. In case 3, $f'(c_1)<-\frac1{1-\frac12}=-2\implies |f'(c_1)|>2$. In case 2, both $f'(c_0)$ and $|f'(c_1)|$ are equal to $2$. If every point between $0$ and $k$ had slope $2$ and every point between $k$ and $1$ had slope $-2$, then $f'(k)$ would be undefined, so I know that this cannot happen. But I also know that the average slope is $2$ on one side and $-2$ on the other, because the average value of a function is $$\frac1{b-a}\int_a^b g(x) dx$$ Put $b=\frac12, a=0$ or $b=1, a=\frac12$, and $g(x)=f'(x)$, and you will get $2$ or $-2$. So there exists at least one point between $0$ and $k$ whose slope is greater than $2$, and at least one point between $k$ and $1$ whose slope is less than $-2$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759626900699, "lm_q1q2_score": 0.8375824863115992, "lm_q2_score": 0.8539127455162773, "openwebmath_perplexity": 152.01790370164318, "openwebmath_score": 0.855697751045227, "tags": null, "url": "https://math.stackexchange.com/questions/2523322/prove-that-there-exist-c-in-0-1-such-that-fc2" }
algorithms, efficiency Edit: as pointed out in the comments, this problem can be thought of as finding connected components of a graph. The graph nodes are the points from the data set and two nodes are adjacent if their distance is smaller than the reference value. The problem I see is that establishing the edges seems to be O(n^2), because I don't see a way that doesn't require me to do pairwise checks on all points of the set; is that correct? There is an efficient algorithm whose running time is nearly linear in most cases. Suppose the distance threshold is $d$, i.e., you want two elements at distance $\le d$ to be in the same group. Then you can divide up space into cubes of side length $2d$, and store each element in the cube it is contained in. The cubes can be stored in a hash table for efficient lookup. For instance, given a point at coordinates $(x,y,z)$, it is associated with the cube whose lower-left corner is at coordinates $(\lfloor x/2d \rfloor, \lfloor y/2d \rfloor, \lfloor z/2d \rfloor)$; you can hash those coordinates and store the point in the corresponding bucket of the hashtable. Now, if two points are at distance $\le d$ apart, then they must be in the same or adjacent cubes. So, you can find all points that need to be put into the same group by looking at pairs of adjacent cubes. In particular, the algorithm becomes: For each cube $C$: For each cube $C'$ that is adjacent to $C$: For each point $P$ in $C$ and each point $P'$ in $C'$: If the distance between $P,P'$ is at most $d$, merge them into the same group.
{ "domain": "cs.stackexchange", "id": 10444, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, efficiency", "url": null }
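A compact 2D sketch of this bucketing scheme (the 3D version just adds a coordinate); union-find performs the merging, and all names here are illustrative:

```python
import math
from itertools import product
from collections import defaultdict

def group_points(points, d):
    """Group points so that any pair at distance <= d ends up in one group.
    Buckets points into cells of side 2*d and only compares points in the
    same or adjacent cells, as described above."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        cells[(math.floor(x / (2 * d)), math.floor(y / (2 * d)))].append(idx)

    for (cx, cy), members in cells.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for i in members:
                for j in cells.get((cx + dx, cy + dy), ()):
                    if i < j and math.dist(points[i], points[j]) <= d:
                        union(i, j)

    groups = defaultdict(list)
    for idx in range(len(points)):
        groups[find(idx)].append(idx)
    return list(groups.values())

pts = [(0, 0), (0.5, 0), (10, 10), (10.4, 10)]
# With d = 1 this yields the two groups {0, 1} and {2, 3}.
```

The per-pair work only happens inside the same or neighboring cells, which is what makes the running time nearly linear when points are not pathologically bunched.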
javascript, jquery, image, animation $(window).resize(function() { setContainHeight(); if ($("body").width() < 980 ) { imageResize('.com-background > img', ratio1); // var ratio1 = imageCalc('.com-background > img'); // setImageDims('.com-background > img', '#main-content', ratio1); } else { setImageDims('.com-background > img', '#main-content'); } if ($("body").width() < 770 ) { jQuery('.menu-main-container').css('display', 'none'); } if ($("body").width() > 770 ) { jQuery('.menu-main-container').css('display', 'block'); } }).resize(); I have also created a rudimentary fiddle to illustrate the imageCalc(), setImageDims(), and imageResize() functionality here. There's a lot of code here, so I'll just start with a single function: setImageDims
{ "domain": "codereview.stackexchange", "id": 9993, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery, image, animation", "url": null }
ros, publisher ROS will handle subscribing to a new publisher automatically - no need to call subscribe on every iteration ros::spin() doesn't return until your program is done - probably not what you want The pseudocode for a watchdog should look more like: class Watchdog { private: // timer or last update time member variables public: void callback(your_message & msg); // update/reset timer bool check(); // check timer for timeout }; int main() { // ros node and nodehandle setup Watchdog watchdog; ros::Subscriber sub = nh.subscribe("topic", 10, &Watchdog::callback, &watchdog); ros::Rate rate(100); while(ros::ok()) { if(!watchdog.check()) { // scream loudly and/or throw things } // check subscriber for correct number of publishers ros::spinOnce(); } return 0; } Most of the implementation details are left to the reader. Note that there are two different checks being done here: Check that a message was received recently (the publisher is still publishing) Check that there are still publishers connected Note also that you could move the ros::Subscriber object to be a member of the Watchdog class, and do the subscriber setup in the Watchdog constructor. Originally posted by ahendrix with karma: 47576 on 2015-03-30 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Matth on 2015-04-01: Thank you a lot for your answer, i understand the concept of publishing and subscribing !
{ "domain": "robotics.stackexchange", "id": 21272, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, publisher", "url": null }
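The timer logic itself is easy to prototype outside ROS; here is a minimal, ROS-free sketch in Python using the monotonic clock (the subscriber wiring from the pseudocode above is deliberately left out, and the timeout value is illustrative):

```python
import time

class Watchdog:
    """Tracks the time of the last received message and reports a
    timeout when no message has arrived within `timeout` seconds."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_update = time.monotonic()

    def callback(self, msg):
        # Would be invoked by the subscriber; resets the timer.
        self.last_update = time.monotonic()

    def check(self):
        # True while the publisher is considered alive.
        return (time.monotonic() - self.last_update) < self.timeout

wd = Watchdog(timeout=0.05)
wd.callback("dummy message")
alive_now = wd.check()       # True right after a message
time.sleep(0.1)
alive_later = wd.check()     # False once the timeout has elapsed
```

In the ROS version, `check()` would be polled inside the rate-limited `while(ros::ok())` loop, exactly as in the pseudocode.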
galaxy, cosmology, dark-matter, redshift Redshift range and area Re-reading your question, I think maybe you're interested not in the number density, but in the absolute number in a volume spanned by a redshift range $dz$ and area $dA$. If that is the case, then you simply multiply your $N(>\!\!M_\mathrm{h})$ by the cosmological volume given by $dz$ and $dA$. Let me know if you also want to know how to calculate that.
{ "domain": "astronomy.stackexchange", "id": 3175, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "galaxy, cosmology, dark-matter, redshift", "url": null }
Is my attempt for (2) okay? For (1) I know I have to show that the total variation of $f$ is unbounded, but I don't know how to do that. Any suggestions? Thanks. • 2. looks good. For 1.: did you try the crudest estimate: determine (or find something close to) the local maxima and minima and plug in the definition? I haven't checked if this works, but it would be the first thing I try. – t.b. Nov 2 '11 at 17:07 • In your attempt for part 2 you are OK as long as you are sure that g is indeed monotonic. As far as part 1 is concerned, you may want to look at an example provided by Rudin in his Principles of Mathematical Analysis (the section on functions of bounded variation). I think you can get away with a partition of [0,1] and show that the variation becomes infinite as the length of the intervals in the partition shrinks to 0 (as usual, with the length of the longest interval going to 0). This should give you some thoughts with which to get started. – Chris Leary Nov 2 '11 at 17:28 • Thanks to both of you for your comments. – Nana Nov 2 '11 at 17:41 • @Nana See this Wolfram Alpha plot showing that $g'(x)$ is not positive at $x=0.17$ – Sasha Nov 2 '11 at 19:21 • @Sasha: hehe...thanks. I've fixed it. $g'(x)$ is certainly positive now...:) – Nana Nov 2 '11 at 19:40 Consider $f(x) = x^2 \sin\left( \frac{1}{x}\right)$ for $x\not=0$ first. Due to parity of the function, i.e. $f(-x) = -f(x)$, it is sufficient to determine the finiteness of its total variation on $(0,1)$ interval.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9678992932829917, "lm_q1q2_score": 0.8065742848745885, "lm_q2_score": 0.8333245932423308, "openwebmath_perplexity": 242.92820107059703, "openwebmath_score": 0.9305070042610168, "tags": null, "url": "https://math.stackexchange.com/questions/78238/functions-of-bounded-and-unbounded-variations" }
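For part 1 (presumably $f(x)=x\sin(1/x)$, the classic unbounded-variation companion of the $x^2$ case treated in the answer), the partition hinted at in the comments can be made explicit; this is a sketch, with constants not checked in detail:

```latex
x_k = \frac{2}{(2k+1)\pi}
\;\Longrightarrow\;
f(x_k) = \frac{2(-1)^k}{(2k+1)\pi},
\qquad
\sum_{k=1}^{n} |f(x_k)-f(x_{k+1})|
= \sum_{k=1}^{n} \left( \frac{2}{(2k+1)\pi} + \frac{2}{(2k+3)\pi} \right)
\xrightarrow[\,n\to\infty\,]{} \infty,
```

since the partial sums dominate a harmonic-type series; refining the partition through these alternating extrema makes the variation arbitrarily large, so $f$ is not of bounded variation on $[0,1]$.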
discrete-signals X_0 = mean(x); X_omega = mean(x.*phasor); estimate_MS_biased = 2*abs(X_omega)^2; mean_estimate_MS_biased(s) += estimate_MS_biased; MS_frac_err_estimate_MS_biased(s) += ((estimate_MS_biased-MS_signal(s))/MS_signal(s))^2; estimate_MS_spectral_subtraction = 2*abs(X_omega)^2 - 2/(N-3)*(MS_x - X_0^2 - 2*abs(X_omega)^2); mean_estimate_MS_spectral_subtraction(s) += estimate_MS_spectral_subtraction; MS_frac_err_estimate_MS_spectral_subtraction(s) += ((estimate_MS_spectral_subtraction-MS_signal(s))/MS_signal(s))^2; estimate_MS_clamped_spectral_subtraction = estimate_MS_spectral_subtraction; if estimate_MS_clamped_spectral_subtraction < 0 estimate_MS_clamped_spectral_subtraction = 0; end mean_estimate_MS_clamped_spectral_subtraction(s) += estimate_MS_clamped_spectral_subtraction; MS_frac_err_estimate_MS_clamped_spectral_subtraction(s) += ((estimate_MS_clamped_spectral_subtraction-MS_signal(s))/MS_signal(s))^2; estimate_MS_Simon = MS_x - X_0^2; mean_estimate_MS_Simon(s) += estimate_MS_Simon; MS_frac_err_estimate_MS_Simon(s) += ((estimate_MS_Simon-MS_signal(s))/MS_signal(s))^2; if plot_IEEE1057 x0 = R\(Q.'*x');
{ "domain": "dsp.stackexchange", "id": 7151, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals", "url": null }
audio, filtering, frequency-domain Can't we just remove the railings by copying and pasting the grass from another (and very similar) part of the image and adjust its brightness? Couldn't we even replicate how the green colour varies in the grass portions and just generate some "new" grass for the regions of the railings? Is it going to look odd? Possibly, if it was overdone, but for small regions it may be good enough to fool the eye. This is what the "Replace" mode (and "Attenuate") does in iZotope, but instead of grass and railing there is background noise and local disturbance. It's not so much attenuation (or "setting harmonics to zero") as "masking" or "hiding" the unwanted sound, since it attempts to make up a good enough patch of harmonics to bury it in by "looking at" the surroundings of the disturbance. For more information on interpolation please see this link. For an example of "learning the profile and applying a filter" please see this link and this link. Hope this helps.
{ "domain": "dsp.stackexchange", "id": 3611, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "audio, filtering, frequency-domain", "url": null }
python, beginner, csv, database Any advice would be hugely appreciated! Link to code: https://github.com/hanners999/Natwest-T20-Blast/blob/main/cricket_data.py Link to CSV files (towards bottom of page - look for T20 Blast (CSV new) : https://cricsheet.org/downloads/ Disclaimer: I know nothing whatsoever about the rules of cricket. Useless check Because either info['toss_decision'] equals 'field' or it does not, you do not need to check things twice in: if info['toss_decision'] == 'field': first_innings: object = toss_loser elif info['toss_decision'] != 'field': first_innings = info['toss_winner'] which could be written as: if info['toss_decision'] == 'field': first_innings: object = toss_loser else: first_innings = info['toss_winner'] Not that it really changes anything but you could also write this using the ternary operator: first_innings = toss_loser if info['toss_decision'] == 'field' else info['toss_winner'] Similarly, either info['home team'] and info['toss_winner'] are equal or they are not. We could write: if info['home team'] == info['toss_winner']: toss_loser = info['away team'] else: toss_loser = info['home team'] or even: toss_loser = info['away team'] if info['home team'] == info['toss_winner'] else info['home team'] Going further, one could also imagine writing the retrieval from the structure slightly differently: toss_loser = info['away team' if (info['home team'] == info['toss_winner']) else 'home team']
{ "domain": "codereview.stackexchange", "id": 41862, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, beginner, csv, database", "url": null }
javascript, beginner, algorithm, complexity The function passes all tests, however, I wonder if there is an even more efficient way to write it--would recursion be faster? Can the speed of this algorithm be improved? Bad storage and time Your code solves the problem but is a memory hog. Algorithm complexity You are looping over the digits twice, once to extract each digit, and then once to sum each half. The logic can be improved if you calculate the sums as you extract each digit. This also means you do not have to store each digit in the array for the sum calculations. If you include the two Array.slice calls that is a total of 3 iterations over each digit in n and two stored copies of each digit. JavaScript performance In terms of JavaScript performance, array functions that take callbacks, like Array.reduce, have a high per-iteration overhead when compared to standard loops. When the inner code is small that overhead is a significant part of the overall iteration time. Array.slice should only be used when you need a copy of items in a new array. (Note: items that are references only copy the reference, i.e. a shallow copy.) Arrays in JavaScript are expensive in terms of performance because they do not get space from the heap but rather invoke the memory management system for allocation and disposal. Code style Never create un-delimited blocks. if (n === 10) return false; should be if (n === 10) { return false; } or if (n === 10) { return false } The reduce callback is written in the long form, .reduce(function(a, b) { return a + b }); it would be less noisy to write it as .reduce((a, b) => a + b) Too many comments, and one that conflicts with the code (a lie). Apart from that the code is well formatted, has good naming, and good use of variable declaration type. Your question: would recursion be faster?
{ "domain": "codereview.stackexchange", "id": 31296, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, beginner, algorithm, complexity", "url": null }
signal-analysis, fourier-transform, power-spectral-density, stochastic Title: Why look at power spectral density for stochastic processes? I have been told that for deterministic signals, it makes sense to look at their respective Fourier transforms/spectra. For stochastic processes, on the other hand, I am supposed to work with the power spectral density for qualitative analysis. Why? Because a stochastic process itself doesn't have a Fourier transform. That's really all there is to it. You can only transform signals (i.e. functions over a field isomorphic to $\mathbb R$, for example, functions of time). You can't transform a random variable whose individual realizations are such functions!
{ "domain": "dsp.stackexchange", "id": 6120, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "signal-analysis, fourier-transform, power-spectral-density, stochastic", "url": null }
mysql, bash, configuration # load all allowed arguments into $config array parseargs() { (getopt --test > /dev/null) || true [[ "$?" -gt 4 ]] && die 'I’m sorry, `getopt --test` failed in this environment.' OPTIONS="" LONGOPTS="help,webmaster:,webgroup:,webroot:,domain:,subdomain:,virtualhost:,virtualport:,serveradmin:" ! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@") if [[ ${PIPESTATUS[0]} -ne 0 ]]; then # e.g. return value is 1 # then getopt has complained about wrong arguments to stdout exit 2 fi # read getopt’s output this way to handle the quoting right: eval set -- "$PARSED" while true; do case "$1" in --help) man -P cat ./virtualhost.1 exit 0 ;; --) shift break ;; *) index=${1#--} # limited to LONGOPTS config[$index]=$2 shift 2 ;; esac done } validate_mysql() { for key in adminuser database user;do (LANG=C; if_match "${mysql[$key]}" "^[a-zA-Z][a-zA-Z0-9_-]*$") || die "bad mysql $key" done } escape_mysql() { for key in adminpasswd passwd;do printf -v var "%q" "${mysql[$key]}" mysql[$key]=$var done } virtualhost-yad.sh - GUI tool for CLI scripts #!/usr/bin/env bash set -e cd "${0%/*}"
{ "domain": "codereview.stackexchange", "id": 34133, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mysql, bash, configuration", "url": null }
kinetic-theory, viscosity Please, can someone give an (intuitive) explanation of these points. 1) There are more carefully derived values assuming a distribution of particles (I do not have the reference here), but instead of considering all particles as distributed between different initial positions and velocities, let us take them all equal to the same "typical" particle: the one with the typical velocity, let us say the median velocity $u_x$; the mean instead of the median could have also been used. They differ only in a factor, which is not really important (see answer (2)). 2) The typical particle has moved the typical distance before crossing the boundary. It is about $\lambda_{\mathrm{mfp}}$, or, perhaps better, $\lambda \cos{\theta}$. But this is a very crude approximation. I have seen derivations where it is considered to be $1/2 \lambda$ (assuming that molecules hitting the unit area come from all distances between 0 and $\lambda$, equally distributed), or, calculated in more detail, $1/3 \lambda$ (F. Reif, Statistical and Thermal Physics (McGraw-Hill), Ch. 12: http://physics.bu.edu/~redner/542/refs/reif-chap12.pdf.). Even this last factor is not very accurate because, as the author states: "Our calculation has been very simplified and careless about the exact way various quantities ought to be averaged. Hence the factor 1/3 is not to be trusted too much". 3) You should not use the total momentum, but only the difference. You can see this intuitively by noticing that from the other side of the surface there will be the same number of particles crossing back with momentum $p-\Delta p$. Thus the net transfer is only $\Delta p$
{ "domain": "physics.stackexchange", "id": 27078, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinetic-theory, viscosity", "url": null }
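Combining the three points gives the textbook order-of-magnitude result (as in Reif's Ch. 12, with all the caveats about the prefactor noted above):

```latex
\eta \;\approx\; \tfrac{1}{3}\, n\, m\, \bar{v}\, \lambda,
```

where $n$ is the number density, $m$ the molecular mass, $\bar v$ the mean speed, and $\lambda$ the mean free path; the factor $1/3$ is only as trustworthy as the averaging that produced it.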
ros, roslaunch, machines <node pkg="talker" type="talker" name="talker" machine="slave"/> </launch> Comment by Vinh K on 2015-12-18: I get this error now: [10.42.0.3-0]: ERROR: cannot launch node of type [talker/talker.cpp]: talker ROS path [0]=/opt/ros/indigo/share/ros ROS path [1]=/opt/ros/indigo/share ROS path [2]=/opt/ros/indigo/stacks I am almost there Chrissi
{ "domain": "robotics.stackexchange", "id": 23231, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, roslaunch, machines", "url": null }
gazebo-camera, gazebo-plugin Title: Problem: Depth camera plugin does not show any coloured images Hi everybody, I'm currently trying to attach a camera to a robot. While that works fine, it seems that I can only see the depth images it produces and not the RGB images it should (I think?) also produce. Below is the SDF code I use to define the sensor: <sensor name="downward_cam_camera_sensor" type="depth"> <pose>0 0 0 0 1.57079632679 0</pose> <update_rate>20</update_rate> <always_on>true</always_on> <camera> <horizontal_fov>1.745</horizontal_fov> <image> <width>640</width> <height>480</height> <format>R8G8B8</format> </image> <clip> <near>0.01</near> <far>100</far> </clip> </camera> <plugin name="downward_cam_camera_controller" filename="libgazebo_ros_depth_camera.so"> <cameraName>downward_cam</cameraName> <alwaysOn>true</alwaysOn> <updateRate>20</updateRate> <imageTopicName>camera/image</imageTopicName> <depthImageTopicName>camera/depth_image</depthImageTopicName> <cameraInfoTopicName>camera/camera_info</cameraInfoTopicName> <depthCameraInfoTopicName>camera/depth_camera_info</depthCameraInfoTopicName> <depthImageCameraInfoTopicName>camera/depth_image_camera_info</depthImageCameraInfoTopicName> <frameName>downward_cam_optical_frame</frameName> <interface:camera name="downward_cam_camera_iface"/> </plugin> </sensor>
{ "domain": "robotics.stackexchange", "id": 3095, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo-camera, gazebo-plugin", "url": null }
javascript, angular.js $scope.pageData.people = []; // just a demo. The point of this is just to show the controller creating Person objects // assume that first name and last name will always be the same length; for (var i = 0; i < $scope.pageData.firstName.length; i++) { $scope.pageData.people.push(new Person($scope.pageData.firstName[i], $scope.pageData.lastName[i])); } }); Notice that you still have to instantiate new Person; you can use the builder pattern to avoid that. Check this blog for more information: https://medium.com/opinionated-angularjs/angular-model-objects-with-javascript-classes-2e6a067c73bc
{ "domain": "codereview.stackexchange", "id": 12116, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, angular.js", "url": null }
special-relativity, forces, acceleration Title: Are $m$ in $E=mc^{2}$ and $m$ in $F=ma$ both the relativistic mass? I know that $m$ in $E=mc^{2}$ is the relativistic mass, but can the $m$ in $F=ma$ also be relativistic? If the answer is yes, then can you tell me whether the equation $E=\frac{F}{a}c^{2}$ is valid? If not, can you tell me why it is not valid? Thanks in advance for your help, and please forgive my English, as it is my second language. Relativistic force is defined as $$\vec F = \frac {d} {dt} (\gamma m_o \vec v) = \frac {m_o\gamma^3} {c^2}(\vec a\cdot\vec v)\,\vec v + \gamma m_o\vec a$$ Although generally different, this becomes the same as your expression when $\vec a$ is perpendicular to $\vec v$, giving $\vec a\cdot\vec v = 0$.
{ "domain": "physics.stackexchange", "id": 2486, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, forces, acceleration", "url": null }
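As a sanity check on that force formula, one can differentiate $\gamma m_0 \vec v$ numerically and compare with the right-hand side. The Python sketch below uses natural units ($c=1$) and an arbitrary illustrative velocity history $\vec v(t)$ (not from the original post):

```python
import math

m0, c = 1.0, 1.0  # unit rest mass, natural units

def v(t):
    # an arbitrary smooth velocity history with |v| < c (illustrative)
    return (0.3 * math.sin(t), 0.2 * t / (1 + t * t), 0.1)

def gamma(vv):
    return 1.0 / math.sqrt(1.0 - sum(x * x for x in vv) / c**2)

def momentum(t):
    vv = v(t)
    g = gamma(vv)
    return tuple(g * m0 * x for x in vv)

def num_deriv(f, t, h=1e-6):
    # central difference, component-wise
    return tuple((p - q) / (2 * h) for p, q in zip(f(t + h), f(t - h)))

t = 0.7
vv, a = v(t), num_deriv(v, t)
g = gamma(vv)
a_dot_v = sum(x * y for x, y in zip(a, vv))

# F = (m0 gamma^3 / c^2)(a . v) v + gamma m0 a
F_formula = tuple(m0 * g**3 / c**2 * a_dot_v * vi + g * m0 * ai
                  for vi, ai in zip(vv, a))
F_direct = num_deriv(momentum, t)  # d/dt (gamma m0 v), computed directly
```

The two vectors should agree to within finite-difference error, confirming the expansion of $\frac{d}{dt}(\gamma m_0 \vec v)$.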
homework, system-identification Is the output for the zero signal zero? For a linear system, $T(\vec{0})=\vec{0}$. Is a single output with a zero coefficient zero? This tests whether $T(0\cdot\vec{x})=0\cdot T(\vec{x}) = \vec{0}$. Is a single scaled output linear? This tests whether $T(\alpha\,\vec{x})=\alpha\,T(\vec{x})$. Is a simple addition linear? This tests whether $T(\vec{x}+\vec{y})=T(\vec{x})+T(\vec{y})$. Any of those tests, if not passed, proves that the system is non-linear. And instead of using the generic version, they can show your (clever) professor that you have some intuition about what is going on, and the risk of errors is reduced. I personally appreciate it a lot when students use minimal arguments: they go straight to the point and spend less time on such questions, to focus on more involved ones. Of course, if the system is linear, more is required. Here, your system is non-linear... unless $b=0$.
{ "domain": "dsp.stackexchange", "id": 6983, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework, system-identification", "url": null }
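These checks are easy to mechanize. A minimal Python sketch follows; the affine system $T(x) = a\,x + b$ is a hypothetical example matching the closing remark about $b=0$, and all names are mine:

```python
def T(x, a=2.0, b=1.0):
    """Hypothetical sample-wise system y[n] = a*x[n] + b."""
    return [a * xi + b for xi in x]

def passes_zero_input(sys, n=4):
    # a linear system must map the zero signal to the zero signal
    return all(abs(v) < 1e-12 for v in sys([0.0] * n))

def passes_additivity(sys, x, y):
    # T(x + y) must equal T(x) + T(y)
    lhs = sys([xi + yi for xi, yi in zip(x, y)])
    rhs = [u + w for u, w in zip(sys(x), sys(y))]
    return all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))

print(passes_zero_input(T))                      # False: b != 0 breaks linearity
print(passes_zero_input(lambda x: T(x, b=0.0)))  # True
```

The additivity test fails for the same reason: $a(x+y)+b \neq (ax+b)+(ay+b)$ unless $b=0$.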
The solution is -0.47346580772912616. Since $m^3+4m+2\gt0$ if $m\ge0$, any real roots must be negative. If it has more than one real root, then all three roots are real. But by Vieta's formulas, the sum of the roots is $0$, the coefficient of the (missing) quadratic term, which is not possible for the sum of three negative values. So there is only one real root. Here is an approach that doesn't require calculus at all: $$m^3+4m+2=0$$ Now, substitute $$m=k-\frac{4}{3k}$$ $$\therefore m^3+4m+2=\bigg(k-\frac{4}{3k}\bigg)^3+4\bigg(k-\frac{4}{3k}\bigg)+2$$ $$=\bigg(k^3-3k^2\cdot\frac{4}{3k}+3k\cdot\frac{16}{9k^2}-\frac{64}{27k^3}\bigg)+4k-\frac{16}{3k}+2$$ $$=\bigg(k^3-4k+\frac{16}{3k}-\frac{64}{27k^3}\bigg)+4k-\frac{16}{3k}+2$$ $$=k^3-\frac{64}{27k^3}+2=0\iff \big(k^3\big)^2+2k^3-\frac{64}{27}=0$$ Substituting $$z=k^3$$ $$z^2+2z-\frac{64}{27}=0$$ Thus $$z_{1/2}=-1\pm\sqrt{1+\frac{64}{27}}=-1\pm\sqrt{\frac{91}{27}}$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9732407175907054, "lm_q1q2_score": 0.813126601172507, "lm_q2_score": 0.8354835411997897, "openwebmath_perplexity": 184.31383129228212, "openwebmath_score": 0.7956951260566711, "tags": null, "url": "https://math.stackexchange.com/questions/2793741/showing-that-m34m2-0-has-only-one-real-root/2793783" }
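The resulting closed form can be checked numerically. A quick Python sketch following the substitutions above, taking the root $z_1 = -1+\sqrt{91/27}$ (which is positive, so the real cube root is unproblematic):

```python
z = -1 + (1 + 64 / 27) ** 0.5   # z1 = -1 + sqrt(91/27) > 0
k = z ** (1 / 3)                # real cube root of z
m = k - 4 / (3 * k)             # undo the substitution m = k - 4/(3k)

print(m)                        # ≈ -0.4734658077...
print(m**3 + 4 * m + 2)         # ≈ 0 (floating-point residual)
```

The second root $z_2$ gives the same $m$, since $z_1 z_2 = -64/27$ makes its cube root equal to $-4/(3k)$.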
homework-and-exercises, forces, electrostatics, electric-fields Title: The formula of the force exerted on an electric dipole by a non-uniform electric field When an electric dipole of moment $\mathbf{P}$ is located in a non-uniform electric field $\mathbf{E}$, there is a net force exerted on it. However, the formula for the force reads $\mathbf{F}=\nabla(\mathbf{P}\cdot\mathbf{E})$ in some books, while in other books it is $\mathbf{F}=(\mathbf{P}\cdot\nabla)\mathbf{E}$. Obviously, the two formulas are not the same. So, which one is true? Both formulas are equivalent, if you are in the electrostatic approximation and your dipole vector does not depend on the position $\mathbf{r}$. Let's consider the expression $\mathbf{F}=\nabla_{\mathbf{r}}(\mathbf{p} \cdot \mathbf{E})$, which can be easily obtained from the potential energy function $U=-\mathbf{p} \cdot \mathbf{E}$ and its relation with the force, $\mathbf{F}=-\nabla_\mathbf{r} U$. Now, recall the vector identity $\nabla_\mathbf{r}(\mathbf{a}\cdot \mathbf{b})= (\mathbf{a} \cdot \nabla_\mathbf{r}) \mathbf{b}+(\mathbf{b} \cdot \nabla_\mathbf{r}) \mathbf{a} + \mathbf{a} \times (\nabla_\mathbf{r} \times \mathbf{b})+ \mathbf{b} \times (\nabla_\mathbf{r} \times \mathbf{a})$
{ "domain": "physics.stackexchange", "id": 5358, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, forces, electrostatics, electric-fields", "url": null }
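For a curl-free field and a position-independent dipole moment, the equivalence of the two formulas can be checked numerically. The Python sketch below evaluates both sides by central finite differences; the point-charge field $\mathbf{E}=\mathbf{r}/r^3$, the value of $\mathbf{p}$, and the evaluation point are all arbitrary illustrative choices:

```python
def E(x, y, z):
    # curl-free electrostatic field of a unit point charge: E = r / |r|^3
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

p = (0.3, -0.2, 0.5)   # constant dipole moment (illustrative)
pt = (1.0, 2.0, -1.5)  # evaluation point away from the origin
h = 1e-5               # finite-difference step

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def shifted(point, i, d):
    q = list(point)
    q[i] += d
    return tuple(q)

# F1 = grad(p . E): component i is d/dx_i of the scalar p . E
F1 = [(dot(p, E(*shifted(pt, i, h))) - dot(p, E(*shifted(pt, i, -h)))) / (2 * h)
      for i in range(3)]

# F2 = (p . grad) E: component c is sum_i p_i * dE_c/dx_i
F2 = [sum(p[i] * (E(*shifted(pt, i, h))[c] - E(*shifted(pt, i, -h))[c]) / (2 * h)
          for i in range(3))
      for c in range(3)]
```

With $\nabla\times\mathbf{E}=0$ and constant $\mathbf{p}$, the last three terms of the identity vanish or coincide, so F1 and F2 agree up to discretization error.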
computer-architecture Secondly, why was it so appealing to create a machine without a mode bit? Many of the benefits touted in the book are that customers wanted to run old software. But this doesn't seem to speak against a mode bit, since the whole purpose of using a mode bit is to have backwards compatibility. When AMD extended x86 to 64 bits, at least according to my understanding of the term "mode bit", what they did was exactly to add a mode bit: a special bit that would put the CPU in 64-bit mode, and another bit that would make a process execute in a "sub-mode" of the 64-bit mode (to enable compatibility with 32-bit applications). The essence of the submode is that the CPU interprets the instruction stream as the old 32-bit instructions, but the 32-bit memory accesses made are resolved using the new page-table format (set up by the 64-bit-aware operating system) and eventually mapped to the full physical address space. Also, the 32-bit code can be preempted by 64-bit code. Like the Data General solution, this also allowed a 32-bit program to run alongside 64-bit programs (16-bit vs 32-bit in the DG case). So from a customer point of view there appears to be no difference at all. Hence the only benefit could have been in the implementation, simplifying the design, but the book doesn't make it sound like that is the concern, since the mode bit seemed to be common even at that time (and it seems later architectures have also employed it, as the x64 case shows).
{ "domain": "cs.stackexchange", "id": 17760, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computer-architecture", "url": null }
from here let us consider the change to the new basis $B'$, that is $$y_C=A_Cx_C\implies P_{B'}y_{B'}=A_CP_{B'}x_{B'} \implies y_{B'}=P_{B'}^{-1}A_CP_{B'}x_{B'}=A'x_{B'}$$ therefore $$A'=P_{B'}^{-1}\,P_B\,A\,P_B^{-1}\,P_{B'}$$ • You really don't need to use the canonical basis. It is one way to do it, but there is a way to compute the change of basis matrix between any two bases without going through the standard one. The notation is a little worse, but the computation is not so bad – N8tron Jun 7 '18 at 7:37 • @N8tron Ok, I would say we can use the canonical basis; it is a clear way to obtain the result. About the notation you are right, but I've followed the notation used in the OP. – gimusi Jun 7 '18 at 7:58 • Also it's worth pointing out that though this method is okay for $2 \times 2$ matrices, it scales really poorly: with $n \times n$ matrices, finding each inverse will take roughly the same computational power as finding the single change of coordinates matrix, not to mention the extra matrix multiplications – N8tron Jun 7 '18 at 11:11 • @N8tron Anyway I think it can be useful to know, at least from the theoretical point of view, in order to set up the correct equality to obtain $A'$ from $A$ and then perform the calculation by computational methods. – gimusi Jun 7 '18 at 11:59 • @N8tron It's always nice to have different points of view for any OP. Thanks – gimusi Jun 7 '18 at 12:14
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429595026213, "lm_q1q2_score": 0.8129412243363507, "lm_q2_score": 0.8267117962054049, "openwebmath_perplexity": 156.66128652211037, "openwebmath_score": 0.900635838508606, "tags": null, "url": "https://math.stackexchange.com/questions/2811006/find-the-matrix-a-with-respect-to-the-basis-b-2-1-1-1" }
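The relation $A'=P_{B'}^{-1}\,P_B\,A\,P_B^{-1}\,P_{B'}$ can be verified numerically: since $A'$ is similar to $A$, the trace and determinant must be unchanged. A small Python sketch with exact rational arithmetic; the particular $2\times2$ matrices are arbitrary illustrative choices:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A    = [[F(1), F(2)], [F(0), F(3)]]   # matrix of the map in basis B
P_B  = [[F(2), F(1)], [F(1), F(1)]]   # columns: basis B in canonical coordinates
P_Bp = [[F(1), F(1)], [F(0), F(1)]]   # columns: basis B' in canonical coordinates

A_C = matmul(matmul(P_B, A), inv2(P_B))      # the map in the canonical basis
A_p = matmul(matmul(inv2(P_Bp), A_C), P_Bp)  # A' = P_B'^-1 P_B A P_B^-1 P_B'
```

Here trace(A') = trace(A) = 4 and det(A') = det(A) = 3, as similarity requires.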
kinect, ros-kinetic Title: Kinect 360 not working in Ubuntu 16.04 I am using Kinetic in Ubuntu 16.04. I have a kinect xbox 360 and the appropriate power supply. The kinect works fine in windows 10. In Ubuntu, it will not show up in lsusb. I can see the motor, but the rest of the components do not show. The best I can see is that it is being detected as a usb hub. But at each lsusb it changes IDs. Here is a sample from lsusb: Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 8087:07dc Intel Corp. Bus 001 Device 007: ID 04f9:0331 Brother Industries, Ltd Bus 001 Device 006: ID 0922:0020 Dymo-CoStar Corp. LabelWriter 450 Bus 001 Device 009: ID 15d9:0a33 Trust International B.V. Optical Mouse Bus 001 Device 008: ID 04d9:1702 Holtek Semiconductor, Inc. Keyboard LKS02 Bus 001 Device 005: ID 1a40:0101 Terminus Technology Inc. Hub Bus 001 Device 004: ID 1a40:0101 Terminus Technology Inc. Hub Bus 001 Device 003: ID 1a40:0101 Terminus Technology Inc. Hub Bus 001 Device 099: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor Bus 001 Device 098: ID 0409:005a NEC Corp. HighSpeed Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub running lsusb again gives: Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 8087:07dc Intel Corp. Bus 001 Device 056: ID 04f9:0331 Brother Industries, Ltd Bus 001 Device 055: ID 0922:0020 Dymo-CoStar Corp. LabelWriter 450 Bus 001 Device 059: ID 15d9:0a33 Trust International B.V. Optical Mouse Bus 001 Device 057: ID 04d9:1702 Holtek Semiconductor, Inc. Keyboard LKS02 Bus 001 Device 054: ID 1a40:0101 Terminus Technology Inc. Hub
{ "domain": "robotics.stackexchange", "id": 32108, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinect, ros-kinetic", "url": null }
c++, memory-management, c++17 template <class T, class U> constexpr bool operator!= (const Allocator<T>& lhs, const Allocator<U>& rhs) noexcept { return !operator==(lhs, rhs); } struct Cell; static Allocator<Cell> cellAlloc; struct Cell { using size_type = std::int64_t; constexpr static std::size_t CELLSIZE = sizeof(size_type); explicit Cell() : Cell(0) { } Cell(size_type val) : val_{val} { } static void* operator new ( std::size_t n ) { return std::allocator_traits<Allocator<Cell>>::allocate(cellAlloc, n); } static void* operator new[] ( std::size_t n ) { return operator new(n - sizeof(Cell)); } static void operator delete (void *ptr, std::size_t n = 1) { std::allocator_traits<Allocator<Cell>>::deallocate( cellAlloc, static_cast<Cell*>(ptr), n); } static void operator delete[] (void *ptr, std::size_t n) { operator delete(ptr, n - sizeof(Cell)); } union { size_type val_; std::uint8_t bytes_[CELLSIZE]; }; }; struct Flag; static Allocator<Flag> flagAlloc; struct Flag { Flag(std::uint8_t val) : val_{val} { } static void* operator new ( std::size_t n ) { return std::allocator_traits<Allocator<Flag>>::allocate(flagAlloc, n); } static void* operator new[] ( std::size_t n ) { return operator new(n - sizeof(std::uint8_t)); }
{ "domain": "codereview.stackexchange", "id": 39635, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, memory-management, c++17", "url": null }
rightside = arg_dict['other'][0] for arg in arg_dict['other'][1:]: rightside = rightside + arg print '4th Equation:' Eq(leftside, -1*Sum(factor(rightside), (i,0,n)))
{ "domain": "jupyter.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9770226320971079, "lm_q1q2_score": 0.8077161267716086, "lm_q2_score": 0.8267117876664789, "openwebmath_perplexity": 4965.667367100124, "openwebmath_score": 0.7560567855834961, "tags": null, "url": "http://nbviewer.jupyter.org/gist/anonymous/5688579" }
physical-chemistry, atomic-structure, rare-earth-elements Title: Magnetic moment of trivalent lanthanide cations The effective magnetic moment $\mu_{\mathrm{eff}}$ of tripositive rare earth elements, is calculated by $$\mu_{\mathrm{eff}}=g_J\sqrt{J(J+1)}\mu_\mathrm{B}$$
{ "domain": "chemistry.stackexchange", "id": 13725, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, atomic-structure, rare-earth-elements", "url": null }
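Here $g_J$ is the free-ion Landé g-factor, $g_J = \frac{3}{2}+\frac{S(S+1)-L(L+1)}{2J(J+1)}$. A quick Python sketch evaluating the formula; Gd$^{3+}$ ($S=7/2$, $L=0$, $J=7/2$, half-filled 4f shell) is a standard check, giving $g_J=2$ and $\mu_{\mathrm{eff}}\approx 7.94\,\mu_\mathrm{B}$:

```python
def lande_g(S, L, J):
    # free-ion Lande g-factor
    return 1.5 + (S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

def mu_eff(S, L, J):
    # effective moment in units of the Bohr magneton
    return lande_g(S, L, J) * (J * (J + 1)) ** 0.5

# Gd3+: S = 7/2, L = 0, J = 7/2
print(lande_g(3.5, 0.0, 3.5))   # 2.0
print(mu_eff(3.5, 0.0, 3.5))    # ≈ 7.94
```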
solid-state-physics, molecules The band structure in graphite comes from 2p electrons on the individual carbon atoms, which are aligned perpendicularly to the planes of the carbon sheets. The bonding of these 2p electrons can be seen on the left-hand side in benzene molecules with the hexagonal structure: 6 atomic orbitals combine to make 3 bonding and 3 antibonding molecular orbitals. The bonding orbitals ($\pi$) are filled with 6 electrons (the arrows) and the antibonding levels ($\pi^*$) are empty. Now, on the right-hand side of the diagram, many 2p orbitals from many carbon atoms combine to form the bonding (mostly filled) and antibonding (mostly empty) bands of graphite. Now you could look at the energies of the electron states and say that we have non-integer $n$ values, but I think it makes more sense to think of the electron states as being spread over many, many atoms and having different (integer) numbers of nodes in the wavefunctions over a larger number of atoms.
{ "domain": "physics.stackexchange", "id": 22306, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solid-state-physics, molecules", "url": null }
gravity, energy, geometry, stars, stability The radiation in our part of the galaxy isn't isotropic, so it would eventually push the sphere into the star; gravitational waves can act non-uniformly, and this can lead to other modes of instability (next list) that eventually drive them to collide. Factors that apply if the sun is slightly displaced from the origin: if the sun wasn't dead center, tidal forces from other astronomical bodies would accelerate it toward the side; if the Dyson sphere were non-uniform and the sun was not at the CM, it would be accelerating, and that would be unstable acceleration. However, if we are assuming an advanced civilization built the Dyson sphere, it shouldn't be a difficult task to use controlled reflected radiation from the sun itself to keep it in the center. There are other, much more major, problems with the physicality of such a structure.
{ "domain": "physics.stackexchange", "id": 20953, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gravity, energy, geometry, stars, stability", "url": null }
c, casting, pthreads } } free(threads); return 0; } Align your code better. It probably has tabs in it (replace them with spaces before pasting) that make it hard to read on a website. Please declare one variable per line: int t,ret,numthreads = 0,thread_success = 0,*iptr;
{ "domain": "codereview.stackexchange", "id": 2352, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, casting, pthreads", "url": null }
seurat, single-cell Title: Single cell RNAseq cell cluster (true cluster or sub cluster) I am trying to run Seurat on ~5000 single cells. I am expecting a minimum of 15 cell types to be present in the data. I tried to run it with multiple conditions; I can see there are 27 clusters; I believe many of them might be sub-clusters. Is there a way to know which clusters are actually sub-clusters? As suggested by the members, I have used clustree to check the cluster stability and also changed the clustering parameters. I wouldn't be surprised if the 15 cell types that you are expecting were mostly characterized via "protein-based" methods: antibody staining, flow cytometry using fluorophore-coupled antibodies, ... The data that scRNA-seq produces is RNA-based and should not be expected to provide the same information as protein-based assays. Moreover, scRNA-seq data, especially those from microfluidic systems like 10x, are sparse and might not be enough to resolve closely related cell types/states such as those of T cells. On top of the aforementioned, evaluating clusters or clustering stability is a difficult task and, in my experience, quite some cluster evaluation metrics cannot be easily applied to single cell data simply because of the sheer size of the data. Having asked your question and looked for answers (and I am still doing that), I am doing the following: i) Try to use transcriptomic markers as much as possible. Even then, a lack of a marker would not mean much in terms of scRNA-seq; it might very well not be detected simply due to sensitivity. ii) Use clustree to select the most plausible number of clusters (resolution parameter in Seurat). The package is compatible with Seurat and some other scRNA-seq packages. iii) Use silhouette width as a metric for clustering. This is computationally expensive; however, it should be fine for your 5000 cells.
For larger cell numbers, I use another package that approximates this (will add a link, but first I have to go through my scripts for the name). iv) Check if batch effects account for what you call "sub-clusters". For example, clusters 0, 14, 24 and 25 above might correspond to different samples processed on different days.
{ "domain": "bioinformatics.stackexchange", "id": 1574, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "seurat, single-cell", "url": null }
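For intuition, silhouette width is simple to compute directly: for each cell, $s=(b-a)/\max(a,b)$, where $a$ is the mean distance to its own cluster and $b$ is the smallest mean distance to any other cluster. A naive $O(n^2)$ Python sketch on toy 2-D points (not real expression data; in practice one would use an optimized implementation):

```python
import math

def mean_silhouette(points, labels):
    n = len(points)
    clusters = set(labels)
    widths = []
    for i in range(n):
        # a: mean distance to the point's own cluster (excluding itself)
        own = [math.dist(points[i], points[j])
               for j in range(n) if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own) if own else 0.0
        # b: smallest mean distance to any other cluster
        b = min(
            sum(math.dist(points[i], points[j])
                for j in range(n) if labels[j] == c) / labels.count(c)
            for c in clusters if c != labels[i]
        )
        widths.append((b - a) / max(a, b))
    return sum(widths) / n

# two well-separated toy "clusters": mean silhouette should be close to 1
pts = [(0, 0), (0, 0.1), (0.1, 0), (5, 5), (5, 5.1), (5.1, 5)]
labs = [0, 0, 0, 1, 1, 1]
print(mean_silhouette(pts, labs))
```

Splitting a true cluster in two tends to lower the mean silhouette, which is why it is useful for judging over-clustering.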
programming-languages, operational-semantics Title: Show that if $\langle S_1;S_2, s\rangle \Rightarrow^* s'$, it is not necessarily the case that $\langle S_1, s\rangle \Rightarrow^* s'$ I am studying structural semantics for programming languages and I have come to this proof that I cannot achieve. With these rules plus splitting and no interference, I am trying to get a proof. I tried: s0 in State <S1,s> =>* s0 Stm0 in Stm <Stm0, s0> =>* <S2,s'> And somehow put S2 inside Stm0, but it does not make sense and I cannot get it to work. I am sure it is something related to an intermediate state that makes S2 jump to <S2,s'> from that state. EDIT: Hi, I did the following: S1 = Skip, so <Skip,s> => s, then <Skip;S2,s> => <S2,s>, for s'=s. Can this serve as a proof? The proof you give is a counterexample to the reciprocal. That is, you're giving an example where $\langle S_1, s\rangle \Rightarrow^* s'$ but not $\langle S_1; S_2, s\rangle \Rightarrow^* s'$. To prove the original statement, you would need to find an $s'$ such that $\langle S_1; S_2, s\rangle \Rightarrow^* s'$ but $\langle S_1, s\rangle \Rightarrow^* s'$ does not hold. For example, if $S_1 = \text{Skip}$ that means choosing $s'$ such that $\langle S_2, s\rangle \Rightarrow^* s'$ but $s' \neq s$. I'm not sure about the particulars of your language, but if $s = \{x \mapsto 1, a \mapsto 2\}$, $s' = \{x \mapsto 2, a \mapsto 2\}$ and $S_2 = x := a$ then the required condition holds.
{ "domain": "cs.stackexchange", "id": 21390, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "programming-languages, operational-semantics", "url": null }
c++, interview-questions if (ac < 2) { in = &cin; } else { in = new ifstream(av[1]); } if (!in->good()) return 1; collect_lines(*in, *lines); reorg_by_count(*lines, *lines_by_count); if (in != &cin) { ((ifstream *)in)->close(); delete in; } cout << "=====================\n\n"; multimap<int, string>::reverse_iterator it = lines_by_count->rbegin(); for (; it != lines_by_count->rend(); it++) { cout << it->second << " " << it->first << '\n'; } delete lines; delete lines_by_count; return 0; } // Read the instream line by line, until EOF. // Trim initial space. Empty lines skipped void collect_lines(istream &in, map<string, int> &lines) { string tmp; while (in.good()) { getline(in, tmp); int i = 0; // trim initial space (also skips empty strings) for (i = 0; i < tmp.length() && !isalnum(tmp[i]); i++); if (i >= tmp.length()) continue; tmp = tmp.substr(i); for (i = 0; i < tmp.length(); i++) { if (!isalnum(tmp[i])) { tmp[i] = ' '; } // thus, HoNdA == Honda if (i == 0) { tmp[i] = toupper(tmp[i]); } else { tmp[i] = tolower(tmp[i]); } } // and record the counts if (lines.count(tmp) == 0) { lines[tmp] = 0; }
{ "domain": "codereview.stackexchange", "id": 7768, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, interview-questions", "url": null }
A diagonal matrix is a square matrix in which all off-diagonal elements are zero. A scalar matrix is a diagonal matrix whose main-diagonal elements are all equal to the same constant $k$; multiplying a vector by it has exactly the effect of scalar multiplication, i.e. of multiplying by an ordinary number, and it becomes the identity matrix when $k = 1$. Matrices are usually denoted by capital English letters such as A, B, C; a matrix stores a group of related data in a rectangular format of rows and columns (for instance, a matrix with 3 rows and 3 columns), and these forms are used for solving numerous problems. Given a square matrix, one can also calculate the absolute difference between the sums of its two diagonals.
{ "domain": "christopher-phillips.com", "id": null, "lm_label": "1. Yes\n2. Yes\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9759464506036182, "lm_q1q2_score": 0.9146675521834541, "lm_q2_score": 0.9372107984180245, "openwebmath_perplexity": 491.08047682015183, "openwebmath_score": 0.6788380742073059, "tags": null, "url": "http://christopher-phillips.com/how-to-iiih/difference-between-scalar-matrix-and-diagonal-matrix-ebd627" }
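The definitions above translate directly into code. A short Python sketch with predicates for diagonal and scalar matrices, plus the diagonal-difference computation just mentioned (the sample matrices are arbitrary illustrations):

```python
def is_diagonal(M):
    n = len(M)
    return all(M[i][j] == 0 for i in range(n) for j in range(n) if i != j)

def is_scalar(M):
    # diagonal, with all main-diagonal entries equal to the same constant k
    n = len(M)
    return is_diagonal(M) and len({M[i][i] for i in range(n)}) == 1

def diagonal_difference(M):
    # |sum of primary diagonal - sum of secondary diagonal|
    n = len(M)
    return abs(sum(M[i][i] for i in range(n)) -
               sum(M[i][n - 1 - i] for i in range(n)))

print(is_scalar([[3, 0], [0, 3]]))   # True  (k = 3)
print(is_scalar([[1, 0], [0, 2]]))   # False (diagonal, but not scalar)
print(diagonal_difference([[11, 2, 4], [4, 5, 6], [10, 8, -12]]))  # |4 - 19| = 15
```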
html, css Title: Hearthstone deck list My code will display a Hearthstone deck list. I'm still learning HTML and I'd like help with a few things: • How readable is my code? How can I improve readability? • Have I used any bad practices? What should I do instead? • Is there any redundancy in my code? <!DOCTYPE html> <html> <head> <link href='https://fonts.googleapis.com/css?family=Lato' rel='stylesheet' type='text/css'> <meta charset="UTF-8"> <title>Hearthstone Deck List</title> <style> body { font-family: 'Lato', sans-serif; } img { vertical-align: middle; } * { box-sizing: border-box; } .card-list ul { list-style-type: none; margin: 0px; max-width: 250px; padding: 0px; } .card-list ul li { margin: 1px; position: relative; } a.card-frame { background-color: #191919; display: block; font-size: 12.5px; height: 25px; } a.card-frame:hover { background-color: #646464; } a.card-frame span.card-cost, a.card-frame span.card-count { color: #FFFFFF; } a.card-frame span.card-cost, a.card-frame span.card-name, a.card-frame span.card-count { height: 25px; padding-top: 6.25px; position: absolute; text-align: center; } a.card-frame span.card-cost { background-color: #005580; left: 0px; width: 25px; } a.card-frame span.card-name { font-size: 9.375px; left: 31.25px; z-index: 1000; } a.card-frame span.card-count { background-color: #323232; right: 0px; width: 25px; }
{ "domain": "codereview.stackexchange", "id": 19650, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "html, css", "url": null }
image-processing, dithering Title: Why does this image look so bad when dithered with a certain palette? I'm trying to write software to create a mosaic of one big image made up of a bunch of smaller images. The result is okay, but I think it would be better if I used dithering. My problem is that dithering is giving even worse results and I don't know why. Here's the original: Here's the same image quantized (no dithering) with the following palette (in (R,G,B) format): (162, 143, 66) (128, 148, 168) (120, 99, 100) (31, 39, 97) (126, 116, 103) (203, 35, 9) (57, 81, 43) (104, 101, 98) Not great, but workable. About what I'd expect. Now here's the same image dithered using Floyd-Steinberg: Pretty much unrecognizable. What's going on here? To make sure my dithering algorithm was implemented properly, here's the same image dithered using 2-color and 8-color palettes.
{ "domain": "dsp.stackexchange", "id": 8797, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, dithering", "url": null }
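For reference, Floyd-Steinberg on a single grayscale channel is only a few lines; RGB dithering applies the same error diffusion per channel with a nearest-palette lookup in color space. A minimal Python sketch (grayscale only, illustrative, not the asker's implementation):

```python
def floyd_steinberg(img, palette):
    """img: 2-D list of gray levels 0..255; palette: list of allowed levels."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = min(palette, key=lambda p: abs(p - old))  # nearest palette entry
            out[y][x] = new
            err = old - new
            # diffuse the quantization error to not-yet-processed neighbors
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return out

# a flat 50% gray patch dithers to a roughly even mix of black and white
gray = [[128.0] * 16 for _ in range(16)]
dithered = floyd_steinberg(gray, [0, 255])
```

When results look this wrong with a small color palette, the nearest-color metric is often the culprit: a plain RGB Euclidean distance can diffuse huge errors between perceptually distant palette entries.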
python, python-3.x Title: Nested for-loop to create list of lists - Python I have posted the below also at stackoverflow in this thread, but a poster there recommended I post this here. Thanks for any suggestions. I have the below code, and as someone new to Python, I was hoping for suggestions on making it more Pythonic. Posters in this thread have provided very helpful numpy and list comprehension solutions; however, I'd like to improve my more basic nested loop solution solely for the sake of becoming better with basic Python tasks (please do let me know if I should have posted this in the prior thread instead of starting a new semi-duplicate...apologies if that's what I should have done). Here's the current code (which works as desired): sales_data = [[201703, 'Bob', 3000], [201703, 'Sarah', 6000], [201703, 'Jim', 9000], [201704, 'Bob', 8000], [201704, 'Sarah', 7000], [201704, 'Jim', 12000], [201705, 'Bob', 15000], [201705, 'Sarah', 14000], [201705, 'Jim', 8000], [201706, 'Bob', 10000], [201706, 'Sarah', 18000]] sorted_sales_data = sorted(sales_data, key=lambda x: -x[2]) date_list = [] sales_ranks = [] for i in sales_data: date_list.append(i[0]) sorted_dates = sorted(set(date_list), reverse=True) for i in sorted_dates: tmp_lst = [] tmp_lst.append(i) for j in sorted_sales_data: if j[0] == i: tmp_lst.append(j[1]) sales_ranks.append(tmp_lst) print(sales_ranks)
{ "domain": "codereview.stackexchange", "id": 26219, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x", "url": null }
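For comparison, one idiomatic alternative (my own sketch, not from the linked thread) groups rows by date first and then sorts each group, which removes the nested scan over sorted_sales_data:

```python
from collections import defaultdict

sales_data = [[201703, 'Bob', 3000], [201703, 'Sarah', 6000], [201703, 'Jim', 9000],
              [201704, 'Bob', 8000], [201704, 'Sarah', 7000], [201704, 'Jim', 12000],
              [201705, 'Bob', 15000], [201705, 'Sarah', 14000], [201705, 'Jim', 8000],
              [201706, 'Bob', 10000], [201706, 'Sarah', 18000]]

# bucket (amount, name) pairs by date, then sort each bucket by amount, descending
by_date = defaultdict(list)
for date, name, amount in sales_data:
    by_date[date].append((amount, name))

sales_ranks = [
    [date] + [name for _, name in sorted(by_date[date], reverse=True)]
    for date in sorted(by_date, reverse=True)
]
print(sales_ranks)
```

Each row of sales_data is visited once, so this is O(n log n) overall instead of a scan of the sorted list per date.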
javascript, jquery //From user rights if ($("input[id*='hdnTab2ShowDelete']").val() != "Y") { btnTobeDeleted.hide(); //btnDelete.show(); } }); Use a single selector instead of "find" and "not." Leave the find, this is faster in every browser except Opera (see comments). For the 2nd part of the selector, you can use the "sibling" combinator ~ to grab everything except its first operand, and the :first-child pseudoclass selector to get the first child, giving you the same set of elements without using several jQuery methods. This is faster than using not(':first') in all browsers, and faster than a single selector (e.g. not using find either) in all browsers except Opera (which maintains its native-selector edge). See this test. Note: #someTable tr will also return tr elements from a nested table. You really want to target the direct row descendants of the table. But don't forget about tbody, which is a required element. So this probably should be "#divTab2GridInquiries > tbody > tr:first-child ~ tr". But that is a mouthful... and it's really slow. If you have no nested tables it will work fine as coded below. $.each($("#divTab2GridInquiries").find("tr:first-child ~ tr"), function () { var tr = $(this); Not sure what you're doing here - the selector is using a wildcard match, but val only operates against the first element in a selection set. Can you target this element more specifically? In any event, instead of wildcard matching the id, add a class and select on that. Classes are much faster than substring matching attributes. //var val = tr.find("input[id*='hdnLineStatus']").val(); var val = tr.find(".hdnLineStatus").val();
{ "domain": "codereview.stackexchange", "id": 2100, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery", "url": null }
# For a given value of $x$, find the basis We have the matrix $X = \begin{bmatrix} 2 & 0 & -1 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ -1 & x & 2 & 1 \end{bmatrix}$ We want to find a basis for the row space, column space and null space of $X$ for values of $x \in \mathbb{R}$. What I did is put the matrix in rref, but I had to do it twice: once for $x=0$, once for $x \neq 0$. • $x=0 \implies$ $\text{rref}(X) = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$ • $x \neq 0 \implies$ $\text{rref}(X) = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$ And from there, we can find the row space (the top two rows of the rref if $x=0$ and the top three rows of the rref if $x \neq 0$). Then the column space is columns 1 and 3 of $X$ if $x=0$ and columns 1, 2, 3 of $X$ if $x \neq 0$. I'm wondering if my ideas here are correct. I separated the problem into two cases, $x=0$ and $x \neq 0$, because to get the rref of $X$ I had to divide by $x$ at one point in a row operation. So I had to obtain the rref twice. If this is correct, is this also the best way of solving this problem, or can we do it in an easier way?
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9896718496130619, "lm_q1q2_score": 0.802108169677018, "lm_q2_score": 0.8104789178257654, "openwebmath_perplexity": 109.20088961863134, "openwebmath_score": 0.9446017146110535, "tags": null, "url": "https://math.stackexchange.com/questions/2027754/for-a-given-value-of-x-find-the-basis" }
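The case split can be confirmed by computing the rank of $X$ in the two cases: rank 2 when $x=0$ and rank 3 when $x\neq0$, matching the number of nonzero rows in the two reduced forms. A small Python sketch using exact rational elimination (so no floating-point pivoting issues):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def X(x):
    return [[2, 0, -1, 1], [0, 0, 1, 1], [1, 0, 0, 1], [-1, x, 2, 1]]

print(rank(X(0)))  # 2
print(rank(X(5)))  # 3 (same for any x != 0)
```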
c#, beginner, game, hangman private void MainGame_KeyPress(object sender, KeyPressEventArgs e) { foreach (var letterButton in LetterButtons) { if (e.KeyChar == letterButton.Text.ToLower()[0]) { CheckLetter(e.KeyChar, letterButton); } } } } Here I have a few ugly methods which are performing operations with the Properties.Settings.Default because I can't seem to find a way to pack them in some collection. If something is unclear I will happily answer in the comments. I'm looking for answers concerning the code style and ideas on how to shorten the long methods or overall how to shorten the code. A Single "Difficulty" Event Handler Extend EventArgs so you can pass Difficulty. public class HangEventArgs : EventArgs { public Difficulty challenge {get; set;} } // one handler to rule them all private void Difficulty_Click(object sender, HangEventArgs e) { GameDifficulty = e.challenge; ChapterSelection cs = new ChapterSelection(); cs.ShowDialog(); } More HangEventArgs Goodness Same idea as above. BONUS: new categories use this handler too. public class HangEventArgs : EventArgs { public Difficulty Challenge {get; set;} public Category Jeopardy {get; set;} } private void Category_Click(object sender, HangEventArgs e) { GameWords = Words.Capitals; // this will be dealt with below GameCategory = e.Jeopardy; MainGame mg = new MainGame(); mg.ShowDialog(); UpdateEasyCompletedCategories(); // this will be dealt with below }
{ "domain": "codereview.stackexchange", "id": 19849, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner, game, hangman", "url": null }
Must I use path integrals here? Is the above argumentation incorrect or imprecise? Last edited: Oct 21, 2007 4. Oct 21, 2007 ### quasar987 Ok, now it looks good and complete! (But with path integrals it is very fast: "Let a, b be in U and y be a path in U joining a and b. Then by the FTC for path integrals in the complex plane, the integral of f '(z) along y is both 0 and f(b)-f(a). Therefore f(b)-f(a)=0. QED) 5. Oct 21, 2007 ### littleHilbert Ok, I'll try to go into detail, because I'd like to write it down formally and clearly. So you say that: Constant functions are trivially continuous and holomorphic everywhere in C. U is open, so f' is holomorphic on U. Also the path gamma is continuous and can be chosen smooth. These two statements imply that a primitive of f' on U exists and is determined up to a constant. Clearly, f' has a primitive on gamma. Let F be a primitive. Since f is itself a primitive of f', the two differ only by a constant, so we may take F = f; differentiability of f makes F holomorphic. Now we apply the FTC to any path gamma between any two points a and b to conclude that: $\displaystyle\int^{}_{\gamma} f'(z)\,dz = f(b)-f(a)$ At the same time: $\displaystyle\int^{}_{\gamma} f'(z)\,dz = 0$ by hypothesis. Hence f(b)=f(a) for all a and b in U. Thus it follows that f is constant. Is it OK?
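The FTC step quasar987 invokes can be illustrated numerically for a concrete holomorphic $f$ (a stdlib-only sketch; the midpoint-rule integrator and the example $f(z)=z^2$ along a quarter-circle are mine, not from the thread):

```python
import cmath

def path_integral(g, z_of_t, dz_dt, n=20000):
    """Numerically integrate g(z) dz along the path z(t), t in [0, 1]
    (midpoint rule; illustration only, not a rigorous quadrature)."""
    h = 1.0 / n
    total = 0j
    for i in range(n):
        t = (i + 0.5) * h
        total += g(z_of_t(t)) * dz_dt(t) * h
    return total

# f(z) = z^2, f'(z) = 2z, along the quarter-circle from 1 to i.
f = lambda z: z * z
fp = lambda z: 2 * z
z_of_t = lambda t: cmath.exp(1j * cmath.pi / 2 * t)
dz_dt = lambda t: 1j * cmath.pi / 2 * cmath.exp(1j * cmath.pi / 2 * t)

val = path_integral(fp, z_of_t, dz_dt)
assert abs(val - (f(1j) - f(1))) < 1e-6   # FTC: integral of f' = f(b) - f(a)
```

With $f' \equiv 0$ the same identity forces $f(b) - f(a) = 0$ for every pair of points joined by a path, which is the whole argument.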
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9830850867332735, "lm_q1q2_score": 0.814917384429701, "lm_q2_score": 0.8289388125473628, "openwebmath_perplexity": 495.80882935287246, "openwebmath_score": 0.9544483423233032, "tags": null, "url": "https://www.physicsforums.com/threads/constant-function.192603/" }
solid-state-physics, atomic-physics, terminology, crystals, ion-traps Title: Can two atoms be a crystal? In the physics literature, you can often find the term "two-ion crystal" when talking about two ions that are confined in, e.g., a Paul trap. How is this possible? Shouldn't a crystal be a structure which repeats in space multiple (>2) times? Otherwise, what are the necessary requirements to define something as a crystal? EDIT: one of the first ≈5k results found by Googling "two-ion crystal": https://arxiv.org/abs/1202.2730 Coulomb crystals are the structures formed by ions in a trap when they are sufficiently cold: once they stop jiggling around, they come down to equilibrium positions which need to balance the need to get down to the center of the trap, where the trapping potential is at its minimum, with the mutual repulsion between the ions. This usually results in an orderly stacking of the ions, often with very clear local symmetries in a bunch of places. Here's one example, formed in an elongated ion trap (with experiment on the left and a simulation on the right; the lines are blurry because the whole thing is rigidly rotating about its vertical axis): Image source Within an ion-trapping context, the phrase "two-ion crystal" is a perfectly natural phrase to use for the case where you have Coulomb-crystal dynamics, with a trapping potential and a Coulomb repulsion balancing out to give the equilibrium positions, and you have $N=2$ ions in the structure. If the phrase doesn't make sense to you, then that's just an indication that you're not within that text's intended audience. Now, is the word "crystal" being used correctly here? The real answer is that it doesn't matter, at all: this is unambiguous notation, and lack of ambiguity is the single requirement that we make of notation.
{ "domain": "physics.stackexchange", "id": 55128, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solid-state-physics, atomic-physics, terminology, crystals, ion-traps", "url": null }
c#, .net Title: Laundry - sort socks This is a very artificial problem. Each sock has exactly 1 match. Can only take one random sock out of the Laundry basket. Can only compare to the last random sock. If it matches you put the match aside. It is like you have a stack of socks and you can only match on the top sock. So if the match is down the sock pile you are not going to find a match. But you don't even get to look down the sock pile to even know if the match is in the sock pile. You only get to match to the top sock. You will eventually need to put the sock pile back in the Laundry basket. You can throw the unmatched socks back in the Laundry basket at any time but they come out in random order. On CS there is a guy telling me he has a better algorithm but he cheats and searches the sock pile for a match down the pile. public static void MatchSocks() { Random rand = new Random(); List<int> Laundry = new List<int>(); List<int> Matched = new List<int>(); List<int> Unmatched = new List<int>(); int? LastUnmatched = null; int Sock; int count = 0; for (int i = 0; i < 500; i++) { Laundry.Add(i); Laundry.Add(i); } while (true) { count++; if(Laundry.Count == 0) { if (Unmatched.Count == 0) break; Laundry = new List<int>(Unmatched); Unmatched.Clear(); LastUnmatched = null; } Sock = Laundry.ElementAt(rand.Next(Laundry.Count)); Laundry.Remove(Sock); if (LastUnmatched == null) { Unmatched.Add(Sock); } else { if (Sock == LastUnmatched) { Matched.Add(Sock); Unmatched.Remove(Sock); } else { Unmatched.Add(Sock); } } LastUnmatched = Sock; } Debug.WriteLine(count); } You're using some poor methods when there are better options available to the data types you've picked. A huge bottleneck is hidden in those 2 lines:
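To make the algorithm easier to experiment with, here is a Python sketch of the same idea (not a line-by-line port — e.g. it removes the drawn element by position rather than by value). The default is scaled down from 500 pairs because the linear-time list removals, the very bottleneck the review points at, make large runs slow:

```python
import random

def match_socks(pairs=40, seed=1):
    """Draw random socks one at a time; a sock may only be compared with
    the previously drawn sock.  Non-matching socks pile up, and the pile
    is dumped back into the basket once the basket runs dry.  Returns
    the total number of draws."""
    rng = random.Random(seed)                    # seeded for repeatability
    laundry = [i for i in range(pairs) for _ in range(2)]
    unmatched, matched = [], []
    last = None
    draws = 0
    while True:
        draws += 1
        if not laundry:
            if not unmatched:
                break                            # everything is paired up
            laundry, unmatched, last = unmatched, [], None
        sock = laundry.pop(rng.randrange(len(laundry)))
        if sock == last:
            matched.append(sock)                 # top-of-pile match found
            unmatched.remove(sock)
        else:
            unmatched.append(sock)
        last = sock
    assert sorted(matched) == list(range(pairs))  # every pair was found
    return draws

# With a single pair the run is deterministic: draw, draw-and-match, stop.
assert match_socks(pairs=1) == 3
```

With $k$ pairs left, a random ordering of $2k$ socks has about one adjacent pair on average, so the draw count grows roughly quadratically in the number of pairs — which matches the poor performance being reviewed.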
{ "domain": "codereview.stackexchange", "id": 26063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net", "url": null }
stresses, dynamics, friction Title: Are "interference fit" equations appropriate for calculating barrel stress? I am curious about calculating stress at various points along a barrel as a bullet is being fired through it. The interference fit (press/friction fit) equations immediately come to mind, but I am wondering if these equations are appropriate to use because of: (i) The non-linear pressure curve of the gas behind a fired bullet (ii) The bullet is moving through the barrel (dynamic system) Here is a link to some interference fit equations I'm referring to, http://www.engineersedge.com/calculators/machine-design/press-fit/press-fit-equations.htm Any pointers? Indeed, you can use the same equations for calculating the stresses, as the barrel essentially is a thick walled pipe with internal pressure. Looking at the equations you linked: (7) "Radial Stress Caused by axial force" (8) "Circumferential Stress Caused by Axial force" are the right ones. (There seems to be a mistake though: the correct term for (8) would be "Circumferential Stress Caused by internal pressure".) The nonlinearity of the pressure: the pressure will be the highest at the chamber. Assuming an even wall thickness, check the stresses here and you are OK. In case of changing barrel outer diameter, you should look for benchmark pressure curves, and check the stresses at multiple locations along the barrel. Dynamics: in my opinion this concerns the fatigue life of the barrel. To calculate the lifetime in terms of number of shots fired and survival probability, you will need the Haigh diagram of the chosen material. But I believe the wear will be the limiting factor. edit: this paper seems to be dealing with the same issue: http://www.slideshare.net/JoshuaRicci/design-of-a-rifle-barrel
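The "thick walled pipe with internal pressure" stresses the answer refers to are the Lamé equations. A quick sketch with made-up barrel dimensions (the function names and numbers are illustrative assumptions, not from any real firearm or from the linked page):

```python
def lame_hoop_stress(p_i, r_i, r_o, r):
    """Circumferential (hoop) stress at radius r in a thick-walled
    cylinder loaded by internal pressure p_i only (Lamé equations)."""
    return p_i * r_i**2 / (r_o**2 - r_i**2) * (1 + r_o**2 / r**2)

def lame_radial_stress(p_i, r_i, r_o, r):
    """Radial stress for the same loading; compressive inside the wall."""
    return p_i * r_i**2 / (r_o**2 - r_i**2) * (1 - r_o**2 / r**2)

# Illustrative numbers: 60,000 psi chamber pressure, 0.15 in bore radius,
# 0.5 in outer radius (hypothetical, chosen only to exercise the formulas).
p, ri, ro = 60_000.0, 0.15, 0.5
inner = lame_hoop_stress(p, ri, ro, ri)
outer = lame_hoop_stress(p, ri, ro, ro)
assert inner > outer > 0                                    # hoop stress peaks at the bore
assert abs(lame_radial_stress(p, ri, ro, ri) + p) < 1e-6    # sigma_r = -p_i at the bore
assert abs(lame_radial_stress(p, ri, ro, ro)) < 1e-6        # sigma_r = 0 at the outside
```

The boundary-condition checks in the asserts (radial stress equals minus the internal pressure at the bore and zero at the free outer surface) are a handy way to catch sign mistakes when applying these formulas at several stations along the barrel.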
{ "domain": "engineering.stackexchange", "id": 1029, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "stresses, dynamics, friction", "url": null }
# Two combinatorics problems I have two problems I can't cope with: Problem 1. How many ways are there to divide a convex $n$-gon into triangles using non-intersecting diagonals? Problem 2. We have $3n$ different balls and $n$ different boxes. How many ways are there to put all the balls in boxes so that every box contains at least two balls? I have no ideas for the first one. I'm wondering if it wouldn't be easier, in the second problem, to count the situations in which there exists a box with one or zero balls and subtract them from $n^{3n}$. But I don't know how to count them. - 1 is basically Catalan numbers –  Henry Aug 9 '12 at 10:54 So according to wiki the answer for Problem 1 is ${2(n-2)\choose n-2}$, but unfortunately I completely don't know why. I think to solve this problem I should show that the number of these ways satisfies the recurrence for Catalan numbers (shifted), but I don't know how to justify it. –  ray Aug 9 '12 at 12:40 The number of ways to put $m$ different balls into $n$ different boxes so each box has at least one ball is the number of onto functions from an $m$-set to an $n$-set. It's counted by the Stirling numbers, q.v., and you use inclusion-exclusion to get a formula. The two-ball problem should be similar, but more complicated; once you understand how the onto functions are counted, you'll be on your way. –  Gerry Myerson Aug 9 '12 at 12:49 You should post Problem 2 as a separate question. –  Austin Mohr Aug 14 '12 at 6:16 For the first problem you need to look at Catalan numbers.
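For Problem 1, the recurrence mentioned in the comments can be checked directly: fixing the edge $(1,n)$, every triangulation contains exactly one triangle $(1,k,n)$, which splits the polygon into a $k$-gon and an $(n-k+1)$-gon. A sketch verifying that this count equals the Catalan number $C_{n-2}$ (note the closed form quoted from the comment appears to be missing the Catalan divisor $n-1$, since $C_{n-2} = \binom{2(n-2)}{n-2}/(n-1)$):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def triangulations(n):
    """Number of triangulations of a convex n-gon by non-crossing diagonals.

    The edge (1, n) lies in exactly one triangle (1, k, n), 2 <= k <= n-1,
    splitting the polygon into a k-gon and an (n-k+1)-gon."""
    if n <= 3:
        return 1          # a triangle (or a bare edge, n = 2) counts once
    return sum(triangulations(k) * triangulations(n - k + 1) for k in range(2, n))

def catalan(m):
    return comb(2 * m, m) // (m + 1)

assert [triangulations(n) for n in range(3, 9)] == [1, 2, 5, 14, 42, 132]
assert all(triangulations(n) == catalan(n - 2) for n in range(3, 15))
```

The small cases are easy to check by hand: a square has 2 triangulations and a pentagon has 5, matching $C_2$ and $C_3$.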
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9861513905984457, "lm_q1q2_score": 0.8174591543797587, "lm_q2_score": 0.8289388040954684, "openwebmath_perplexity": 158.0377382801477, "openwebmath_score": 0.8740003108978271, "tags": null, "url": "http://math.stackexchange.com/questions/180601/two-combinatorics-problems" }
lagrangian-formalism, field-theory, gauge-theory, hamiltonian-formalism, constrained-dynamics Title: Can I write the Hamiltonian $H$ in the standard way $p\dot{q}-L$ for a general QFT? I have read some questions (and the Wikipedia article) about the hamiltonian formulation of a QFT, but the only example that seems to be brought up is the scalar case, saying that $$\mathcal{H}_S=\Pi\partial_0\phi-\mathcal{L}_S.$$ Can I write the Hamiltonian for a general theory in the same way? For example, for Yang-Mills theory is the following true? $$\mathcal{H}_{YM}=\pi_\mu^a\partial_0W^{a\mu}-\mathcal{L}_{YM}.$$ What about for an interacting theory like Yang-Mills coupled with a scalar, can I write as follows? $$\mathcal{H}=\pi_\mu^a\partial_0W^{a\mu}+\Pi\partial_0\phi-\mathcal{L}.$$ I don't see why not, after all the two functions should exist for all these theories, and I can't think of another way to find the Hamiltonian knowing the Lagrangian. In general the Legendre transformation$^1$ from the Lagrangian to the Hamiltonian formulation may be singular, which leads to primary constraints. This is e.g. the case for gauge theories like Yang-Mills (YM) theory with or without matter, which OP mentions. However, in case of a singular Legendre transformation, by performing a so-called Dirac-Bergmann analysis (which may lead to secondary constraints), it is still possible in principle to define a corresponding Hamiltonian formulation. Typically, the canonical Hamiltonian $H_0=p\dot{q}-L$ gets amended with terms of the form 'constraint times Lagrange multiplier'. For details, see e.g. Refs. 1 & 2. References: P.A.M. Dirac, Lectures on QM, 1964. M. Henneaux & C. Teitelboim, Quantization of Gauge Systems, 1994.
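For Yang-Mills the singularity of the Legendre transformation is easy to exhibit explicitly. A standard sketch, starting from $\mathcal{L}_{YM} = -\tfrac14 F^a_{\mu\nu}F^{a\mu\nu}$ (signs depend on metric conventions): since $F^a_{\mu\nu}$ contains no time derivative of $W^a_0$, the momentum conjugate to the temporal component vanishes identically,

```latex
\pi^{a\mu} \;=\; \frac{\partial \mathcal{L}_{YM}}{\partial(\partial_0 W^a_\mu)}
\;=\; F^{a\mu 0},
\qquad\Longrightarrow\qquad
\pi^{a0} \;=\; F^{a00} \;=\; 0
\quad\text{(primary constraint)},
```

which is exactly the kind of primary constraint that the Dirac-Bergmann analysis described above then propagates into the secondary (Gauss-law) constraint.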
{ "domain": "physics.stackexchange", "id": 83887, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "lagrangian-formalism, field-theory, gauge-theory, hamiltonian-formalism, constrained-dynamics", "url": null }
sampling, dft, nyquist Title: Cross Domain Equivalent to Nyquist Sampling Theorem? In attempting to answer this question by @Oliver here: What characterizes 'causality' for a finite FFT? I have considered the minimum requirement to avoid time domain aliasing in the Discrete Fourier Transform, or more generally any application where the frequency domain is sampled. Similar to sampling in time at a rate of at least twice the highest frequency to represent the spectrum without the effects of aliasing, I suggest using a time duration that is at least twice the response time of the underlying continuous time signal to represent the continuous time domain signal (in the DFT) without the effects of time aliasing. Or, when the time domain process is restricted to known causal processes, a time duration at least as long as the response time. This is the equivalent of Nyquist's Sampling Theorem in the frequency domain; ultimately "sampling in frequency" such that the duration of the time domain waveform is greater than twice its response time. I understand that the same theory would apply, but the fact that Shannon in his paper provides the Nyquist theorem in the time domain specifically has made me curious whether this property may go by other formally named theorems in other domains. To illustrate this graphically, consider the drawing by RBJ below except replace the frequency axis with the time axis. Public Domain, https://commons.wikimedia.org/w/index.php?curid=1065579 I'd say that this is not only "similar to a cross-domain equivalent to Nyquist's Sampling Theorem", but it simply is the sampling theorem. The sampling theorem does not specify the domains of the signals involved; it is rather a mathematical condition that a function of a continuous variable needs to satisfy such that it is perfectly represented by equidistant samples. It is irrelevant if the independent variable of that continuous function is time, frequency, space or anything else.
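The time-domain aliasing described above can be demonstrated numerically: keeping only every second DFT frequency sample folds the second half of the time record onto the first, so any "response" longer than half the record gets corrupted. A stdlib-only sketch (naive $O(N^2)$ DFT, for illustration only):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A length-8 "time response" whose tail extends past half the record.
x = [8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.0, 0.0]
X = dft(x)

# Subsample the spectrum by 2 (a coarser frequency grid) and transform
# back: the result is the time-aliased signal x[n] + x[n + N/2].
x_alias = idft(X[::2])
expected = [x[n] + x[n + 4] for n in range(4)]
assert all(abs(a - e) < 1e-9 for a, e in zip(x_alias, expected))
```

The tail samples 0.5 and 0.25 fold onto the leading samples, exactly the corruption that requiring the record to be at least twice the response time avoids.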
{ "domain": "dsp.stackexchange", "id": 8795, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sampling, dft, nyquist", "url": null }
quantum-gate, quantum-circuit where $|\psi\rangle$ is a qubit in $\mathbb{C}^2$, $|0\rangle= \begin{pmatrix}1 \\ 0 \end{pmatrix}$, $T= \begin{pmatrix}1 & 0\\ 0 & e^{i\pi/4} \end{pmatrix}$ is the $\pi/8$ gate, $H= \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ 1 & -1 \end{pmatrix}$ is the Hadamard gate, $X= \begin{pmatrix}0 & 1\\ 1 & 0 \end{pmatrix}$ and $P= \begin{pmatrix}1 & 0\\ 0 & i \end{pmatrix}$.
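The matrices quoted above satisfy some standard single-qubit identities, which can be checked numerically. A stdlib-only sketch (the helper functions are mine; the identities themselves — $H^2 = I$, $T^2 = P$, $HXH = Z$ — are textbook facts):

```python
import cmath

# The single-qubit gates quoted above, as 2x2 complex matrices.
T = [[1, 0], [0, cmath.exp(1j * cmath.pi / 4)]]
H = [[1 / 2**0.5, 1 / 2**0.5], [1 / 2**0.5, -1 / 2**0.5]]
X = [[0, 1], [1, 0]]
P = [[1, 0], [0, 1j]]
I = [[1, 0], [0, 1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

assert close(mul(H, H), I)                           # H is its own inverse
assert close(mul(T, T), P)                           # two pi/8 gates make a phase gate
assert close(mul(mul(H, X), H), [[1, 0], [0, -1]])   # HXH = Z
```

The $T^2 = P$ check is why $T$ is sometimes called the "square root" of the phase gate.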
{ "domain": "quantumcomputing.stackexchange", "id": 5178, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-gate, quantum-circuit", "url": null }
quantum-field-theory, special-relativity, vacuum, parity, time-reversal-symmetry Title: QFT: Vacuum invariant, but vacuum correlations aren't Consider a free scalar field theory. My struggle is that vacuum correlation functions of fields are only Lorentz invariant under a subgroup of Lorentz transformations, despite the invariance of the vacuum under the complete group of Lorentz transformations! I expect that I am making suspect assumptions somewhere.
{ "domain": "physics.stackexchange", "id": 72691, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, special-relativity, vacuum, parity, time-reversal-symmetry", "url": null }
Title: Euclidean distance between two vectors In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. In two-dimensional Euclidean geometry, the Euclidean distance between two points $a = (a_x, a_y)$ and $b = (b_x, b_y)$ is defined as $\sqrt{(a_x - b_x)^2 + (a_y - b_y)^2}$. The squared Euclidean distance between any two vectors $a$ and $b$ is simply the sum of the squared component-wise differences, and the length of a vector $a$ can be computed with the Euclidean norm. To compute the Euclidean distance between two vectors in Python, we can use the numpy.linalg.norm function; scipy.spatial.distance.euclidean(u, v) likewise computes the Euclidean distance between two 1-D arrays. Example vectors: $w_1 = \begin{bmatrix} 1 + i \\ 1 - i \\ 0 \end{bmatrix}$, $w_2 = \begin{bmatrix} -i \\ 0 \\ 2 - i \end{bmatrix}$, $w_3 = \begin{bmatrix} 2 + i \\ 1 - 3i \\ 2i \end{bmatrix}$. Even in infinitely many dimensions, any two vectors determine a subspace of dimension at most $2$: therefore the (Euclidean) relationships that hold in two dimensions among pairs of vectors hold entirely without any change at all in any number of higher dimensions, too.
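The text above mentions numpy.linalg.norm and scipy.spatial.distance.euclidean; a dependency-free sketch of the same formula (function name is mine) for real vectors:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length real vectors:
    the square root of the sum of squared component-wise differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

a, b = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
assert euclidean(a, b) == 5.0                      # a 3-4-5 triangle in 3-D
assert euclidean((0, 0), (1, 1)) == math.sqrt(2)   # the 2-D Pythagorean case
```

For complex vectors such as the $w_1, w_2, w_3$ above, the same idea applies with $|a_k - b_k|^2$ in place of the squared real differences.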
{ "domain": "jayvijay.co", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9632305318133554, "lm_q1q2_score": 0.8108721582514072, "lm_q2_score": 0.8418256393148982, "openwebmath_perplexity": 647.780048094751, "openwebmath_score": 0.9422368407249451, "tags": null, "url": "http://jayvijay.co/5eb6g84/euclidean-distance-between-two-vectors-5210d3" }
java, game, console, minesweeper private static Area[][] pickLengthsOfArea(Scanner scanner) { String[] turnXandY; while(true) { System.out.println("Pick x.length and y.length of area(print \"x y\"): "); turnXandY = scanner.nextLine().split(" "); if(turnXandY.length != 2) { System.out.println("print: \"x y\"!"); } else if(!isNumeric(turnXandY[0]) || !isNumeric(turnXandY[1])) { System.out.println("x and y should be numbers!"); } else if(Integer.parseInt(turnXandY[0]) <= 0 || Integer.parseInt(turnXandY[1]) <= 0) { System.out.println("x and y should be >0!"); } else { return new Area[Integer.parseInt(turnXandY[0])][Integer.parseInt(turnXandY[1])]; } } } private static boolean isXandYIn(int turnX,int turnY, Area[][] area) { if(turnX<0 || area[0].length<=turnX) { return false; } if(turnY<0 || area.length<=turnY) { return false; } return true; } public static boolean isNumeric(String strNum) { try { Integer.parseInt(strNum); } catch (NumberFormatException | NullPointerException nfe) { return false; } return true; } } class Area{ private final ValueOfArea valueOfArea; private StatusOfArea statusOfArea;
{ "domain": "codereview.stackexchange", "id": 36424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, console, minesweeper", "url": null }