ros, drone, joy, ros-indigo Originally posted by Mani with karma: 1704 on 2015-09-21 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by gvdhoorn on 2015-09-21: Just to clarify: you can still use rosbuild under Indigo (and even Jade). It's just that there are some restrictions when interacting with catkin workspaces, and you cannot release rosbuild packages anymore. Comment by Haifei on 2015-09-21: Thank you for the response.
{ "domain": "robotics.stackexchange", "id": 22668, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, drone, joy, ros-indigo", "url": null }
from the left to the right half, namely mx2 - mn1. Finding the Maximum and Minimum of a List. If the array has only 1 element, the answer is that element. LECTURE 2: DIVIDE AND CONQUER AND DYNAMIC PROGRAMMING, where we have used that $a^{\log_b n} = n^{\log_b a}$. Write a C++ Program to implement Merge Sort using Divide and Conquer Algorithm. Write("Enter the number of Elements: ") n = CInt(Console. I have to write a C++ program that will let the user enter 10 values into an array. For example, given the array [−2,1,−3,4,−1,2,1,−5,4]. Show a tree of the divide-and-conquer algorithm’s process. Suppose we want to find the maximum and minimum elements of an unsorted list of n items. Recursion works on the concept of divide and conquer. Can use binary search to find the first point where it is possible. Next we rewrite an if-condition expression to use the method Math. This is the base case. Conquer: recursively solve these subproblems. For some algorithms the smaller problems are a fraction of
{ "domain": "rakumanu.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.976310525254243, "lm_q1q2_score": 0.8071274238188559, "lm_q2_score": 0.8267117919359419, "openwebmath_perplexity": 678.0343336001376, "openwebmath_score": 0.3174936771392822, "tags": null, "url": "http://nnwh.rakumanu.it/find-min-and-max-in-array-using-divide-and-conquer.html" }
turing-machines, quantum-computing Is this something above my head and should I delete this post? I don't mean to be presumptuous, I just didn't see a question similar to mine. You're mixing up computability theory (also known as recursion theory) and complexity theory (or computational complexity). Computability theory is a vast mathematical subject which studies the ramifications of the concept of computation. It does not deal with the complexity of computation. As your professor mentions, all (Turing-complete) computation models are the same from the point of view of computability theory. Computability theory, while an interesting mathematical subject, is not a good model for real-world computation for this reason, as you mention.
{ "domain": "cs.stackexchange", "id": 2597, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turing-machines, quantum-computing", "url": null }
c#, performance, asp.net-core, entity-framework-core public Order Order { get; set; } public event PropertyChangedEventHandler PropertyChanged; [NotifyPropertyChangedInvocator] protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); } } Sorry for the late post. I did find a solution. The Foreach loop in my original question was making a database call for each item. It worked but it was painfully slow. My biggest improvement was getting that done in one database call with a group by. var ecItems = await _context.BmaEcItems .Where(e => e.DisplayOnEcommerce == true) .Select(e => e.Itemnmbr) .ToListAsync();
{ "domain": "codereview.stackexchange", "id": 43287, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, performance, asp.net-core, entity-framework-core", "url": null }
quantum-field-theory, special-relativity, mass, heisenberg-uncertainty-principle, virtual-particles Is the article somehow being misleading, or am I misunderstanding something? Yes and no. You are quite correct that virtual particles are a mathematical device used for doing perturbative calculations in QFT. So in that sense virtual particles don't contribute to the mass of the nucleons in ice. But although virtual particles are just a computational device, they are describing something real, i.e. the energy in the quantum field. And that energy does have a mass given by Einstein's famous equation $E = mc^2$. The article is a little sensationalist, but what it says is basically correct. Even before QCD was understood we already knew that energy contributes to mass, due to observations of things like the mass deficit in nuclei. Even leaving aside quantum mechanics, for example in special relativistic collisions we know that mass isn't conserved. See for example Is (rest) mass conserved in special relativity?
{ "domain": "physics.stackexchange", "id": 44581, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, special-relativity, mass, heisenberg-uncertainty-principle, virtual-particles", "url": null }
beginner, bash, linux, git, ssh cleanup () { if [ -n "$1" ]; then echo "Aborted by $1" elif [ $status -ne 0 ]; then echo "Failure (status $status)" else echo "Success" IFS=$IFS_OLD #cd "$HOME" || { echo "cd $HOME failed"; exit 155; } fi } trap 'status=$?; cleanup; exit $status' EXIT trap 'trap - HUP; cleanup SIGHUP; kill -HUP $$' HUP ############################################################################ Sync the local TO the remote ########################################## #{ #{ #...part of script with redirection... #} > file1 2>file2 # ...and others as appropriate... cd /home/kristjan/gitRepo_May2019/ || { echo "Failed to cd to /home/kristjan/gitRepo_May2019/!!!!!!!!"; exit 155; } git add -A || { echo "Failed to git add -A: Sync the local TO the remote!!!!!!!!"; exit 155; } | if ! `grep -r up-to-date` then
{ "domain": "codereview.stackexchange", "id": 34817, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, bash, linux, git, ssh", "url": null }
newtonian-mechanics, fluid-dynamics, pressure, fluid-statics Title: Would a giant solid column float? This question was originally inspired by this one: Why does the air pressure at the surface of the earth exactly equal the weight of the entire air column above it In it, the author mentions a really tall solid column (extending all the way up into space) whose base has an area of $1$ square inch, and which weighs $14.7$ pounds, which is equal to the pressure near the surface of the earth. My question is this: would such a column...float? After all, the weight of the column is $14.7$ pounds, its base has an area of $1$ square inch, and the pressure from the air at its base is $14.7$ pounds per square inch. So, would the pressure from the air at its base balance out the weight of the column?
{ "domain": "physics.stackexchange", "id": 59459, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, fluid-dynamics, pressure, fluid-statics", "url": null }
Note that because $f(x)$ is continuous on $[1,2]$, the function $f(x)$ is bounded on $[1,2]$. Suppose that $|f(x)|\lt B$ for all $x$ in our interval. Let $$g(x)=f(x)-\frac{1}{1-x}-\frac{1}{2-x}.$$ There is an $a$ in $(1,2)$ such that $g(a)$ is positive, and a $b$ such that $g(b)$ is negative, and hence by the Intermediate Value Theorem there is a $c$ between $a$ and $b$ such that $g(c)=0$. Detail: We show that there is indeed an $a$ such that $g(a)$ is positive. In order to have fewer minus signs, note that $$g(x)=f(x)+\frac{1}{x-1}-\frac{1}{2-x}.$$ Note that $\frac{1}{x-1}$ becomes very large positive for $x$ close enough to $1$ but to the right of $1$. The term $\frac{1}{2-x}$ is close to $1$ when $x$ is close to $1$. So by choosing $a$ near $1$ such that $\frac{1}{a-1}\gt B+2$, we can make sure that $g(a)$ is positive. For then the $f(a)-\frac{1}{2-a}$ part cannot be negative enough to make $g(a)$ negative.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9891815497525024, "lm_q1q2_score": 0.8017107783920286, "lm_q2_score": 0.8104789040926008, "openwebmath_perplexity": 119.547216722531, "openwebmath_score": 0.929459810256958, "tags": null, "url": "http://math.stackexchange.com/questions/203255/using-the-intermediate-value-theorem-to-prove-that-there-is-at-least-one-number" }
auroras, magnetosphere Fairfield, D. H., and J. D. Scudder (1985), Polar rain: Solar coronal electrons in the Earth's magnetosphere, J. Geophys. Res., 90(A5), 4055–4068, doi:10.1029/JA090iA05p04055. Zhang, Y., L. J. Paxton, and A. T. Y. Lui (2007), Polar rain aurora, Geophys. Res. Lett., 34, L20114, doi:10.1029/2007GL031602. J. A. Wild et al. (2004), The location of the open-closed magnetic field line boundary in the dawn sector auroral ionosphere, Ann. Geo., 22: 3625–3639, doi:10.5194/angeo-22-3625-2004 J. W. Dungey (1961), Interplanetary Magnetic Field and the Auroral Zones, Phys. Rev. Lett. 6, 47-48, doi:10.1103/PhysRevLett.6.47
{ "domain": "earthscience.stackexchange", "id": 1372, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "auroras, magnetosphere", "url": null }
moveit moveit_msgs::GetMotionPlan mp_srv; mp_srv.request.motion_plan_request = mp_request; mp_srv.response.motion_plan_response = mp_response; if(!mp_client.call(mp_srv)){ std::cout setStateValues(mp_response.trajectory.joint_trajectory.points.back().positions); joint_state_msg.name = joint_state_group->getJointModelGroup()->getJointModelNames(); joint_state_msg.position = mp_response.trajectory.joint_trajectory.points.back().positions; joint_state_publisher.publish(joint_state_msg); ros::spinOnce(); current_joints_ = joint_state_msg; mp_request.start_state.joint_state = current_joints_; ////////Self motion///////////////////////// moveit_msgs::Constraints path_eef_constraints_, joint_constraints_; moveit_msgs::JointConstraint goal_joint_constraint_; moveit_msgs::OrientationConstraint eef_orientation_constraint_; moveit_msgs::PositionConstraint eef_pos_constraint_;
{ "domain": "robotics.stackexchange", "id": 14108, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "moveit", "url": null }
algorithms, algorithm-analysis This should make it a stable implementation and would reduce the extra swaps needed. Or will it break the algorithm somehow? As @YuvalFilmus pointed out, it still won't solve the problem. Even if we can avoid the extra swaps when elements are equal to the pivot, the last swap (A[i+1], A[r]) can result in the order changing, as the pivot will be swapped into its correct place.
{ "domain": "cs.stackexchange", "id": 21129, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, algorithm-analysis", "url": null }
python, file, classes some will use an inconsistent commenting style, with or without reason. Style guides will emerge recommending one writing style over another. Religious wars. Sometimes it's good to have an opinion, and be unburdened by the responsibility of choice. I suggest choosing one commenting style to support. Whichever one. Use a context manager for file operations The recommended idiom to work with files looks like this: with open(path) as fh: ...
{ "domain": "codereview.stackexchange", "id": 43826, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file, classes", "url": null }
oscillators, resonance contact with colder objects. But there isn't any more macroscopic kinetic energy or potential energy available besides $E$; that's all that is available. And that's the state the friction drives the system to. And it never gets perfectly there, but it gets close in time $\tau$, and so in time $\tau$ almost all of $E$ is dissipated. Because of the definition of the two symbols.
{ "domain": "physics.stackexchange", "id": 26017, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "oscillators, resonance", "url": null }
time, frequency, wavelength Title: Is there any naturally occurring light wave with a constant frequency that has a terminating decimal? I was wanting to think of a natural unit of time, if one exists, that is known exactly. I started thinking about it with the Voyager plaque and the 21 cm Hydrogen line (HL). When I looked into the HL, however, there didn't seem to be exact representations. For example, Wikipedia gives 1420.405751786 MHz, but when I plug this into Wolfram Alpha, it tells me this number is "approximately equal to..." Furthermore, this stackexchange answer on the Voyager Golden Record gives the number 1420.40575177 MHz, which is less precise and slightly off from Wikipedia, but nonetheless gets the same response from Wolfram Alpha: that it's an approximation. So, is there any naturally occurring light that is always emitted at an exact frequency, where the frequency is exactly known and is a terminating decimal? If so, what is it? Furthermore, I don't have much faith in my math skills. It took me
{ "domain": "physics.stackexchange", "id": 29266, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "time, frequency, wavelength", "url": null }
charge, atomic-physics, scattering, atoms I know this model is unsatisfactory, but how did Rutherford calculate the radius of the atom to be $10^{-10}\,\mathrm{m}$? Rutherford probably estimated the size of gold atoms as already sketched by @AndrewSteane in his comment. The density of gold is $\rho=19.3\text{ g/cm}^3$. The molar mass of gold was known from chemistry: $m_\text{mol}= 197 \text{ g/mol}$. From this you get the molar volume $$V_\text{mol}=\frac{m_\text{mol}}{\rho}$$ Early estimations of Avogadro's constant (i.e. the number of atoms per mol) were already known from physical experiments before Rutherford's time. Later experiments refined this value: $$N_A=6.02\cdot 10^{23}\text{/mol}$$ Using this you get the volume per atom $$V_\text{atom}=\frac{V_\text{mol}}{N_A}$$ Let us assume the gold atoms form a cubic lattice (this is wrong, but good enough for an estimation). Then each atom occupies a cube of edge length $$d=\sqrt[3]{V_\text{atom}}$$ Doing the calculation we get $$\begin{align} d&=\sqrt[3]{V_\text{atom}}
{ "domain": "physics.stackexchange", "id": 83504, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "charge, atomic-physics, scattering, atoms", "url": null }
navigation, costmap, husky, frontier-exploration, clear-costmap obstacles_laser: observation_sources: laser laser: {data_type: LaserScan, clearing: true, marking: true, topic: scan, inf_is_valid: true} expected_update_rate: 0.07 inflation: inflation_radius: 0.05 range_sensor_layer: expected_update_rate: 0.125 clear_threshold: 0.1 mark_threshold: 0.1 no_readings_timeout: 0.2 topics: ["/ultrasound1"] costmap_exploration.yaml track_unknown_space: true global_frame: map rolling_window: false plugins: - {name: external, type: "costmap_2d::StaticLayer"} - {name: explore_boundary, type: "frontier_exploration::BoundedExploreLayer"} #Can disable sensor layer if gmapping is fast enough to update scans - {name: obstacles_laser, type: "costmap_2d::ObstacleLayer"} - {name: inflation, type: "costmap_2d::InflationLayer"} explore_boundary: resize_to_boundary: false frontier_travel_point: middle #set to false for gmapping, true if re-exploring a known area explore_clear_space: false
{ "domain": "robotics.stackexchange", "id": 22722, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, costmap, husky, frontier-exploration, clear-costmap", "url": null }
newtonian-mechanics, momentum, conservation-laws, rocket-science Title: Conservation of momentum, rocket When we analyze a rocket using conservation of momentum we neglect gravity and air drag. We then consider rate of fuel consumption or exhaust rate and by applying the law of conservation of momentum we find this equation - $RV = Ma$, where $R$ is the mass rate of fuel consumption, $V$ is velocity of exhaust w.r.t rocket. So my question is as follows: Is the mass rate considered here for only that mass which contributes solely in the velocity of the rocket and not in overcoming gravity and air drag? (Because they are neglected, to apply the conservation of momentum, so that system becomes isolated.) I've also uploaded a screenshot of the derivation of above equation from Halliday, Resnick, Walker - fundamentals of physics: The important thing to realise is that in the rocket equation derivation a system has been defined as the rocket, its fuel and the combustion products.
{ "domain": "physics.stackexchange", "id": 37857, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, momentum, conservation-laws, rocket-science", "url": null }
php, beginner, security, email Title: Are there any open vulnerabilities in this mailer script? I made a PHP mailer script that does basic validation of fields and returns errors, and submits if all is good. But it also has a honeypot field that is not required to be filled in (I'm assuming that by hiding it using CSS, a spambot will fill in the field anyway). If the field is not empty, it opens a text file and writes/appends the attempt to it, and it also sends an email alert of the attempt. <?php //print_r($_POST); $error['name'] =""; $error['company']=""; $error['email'] =""; $error['subject'] =""; $error['message'] =""; $error['website'] =""; $success = ""; $thistime = time(); $current_date = date('m/d/Y/T ==> H:i:s'); if(isset($_POST['_save'])) { $name = stripslashes($_POST['name']); $email = stripslashes($_POST['email']); $company = stripslashes($_POST['company']); $message = stripslashes($_POST['message']); $subject = stripslashes($_POST['subject']); $website = stripslashes($_POST['website']);
{ "domain": "codereview.stackexchange", "id": 6339, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, beginner, security, email", "url": null }
ros, roslaunch, rosrun, image-proc However, I can't find any examples of using image_proc in a launch file. I can pipe the color video stream into image_proc and display the mono using: roslaunch mypkg webcam_test.launch ROS_NAMESPACE=usb_cam rosrun image_proc image_proc rosrun image_view image_view image:=/usb_cam/image_mono What would be the equivalent of doing this all in a single launch file? Originally posted by Cerin on ROS Answers with karma: 940 on 2015-05-06 Post score: 1 the node tag in roslaunch xml format allows a ns, i.e. namespace, attribute: wiki.ros.org/roslaunch/XML/node#Attributes i think ROS_NAMESPACE=usb_cam rosrun image_proc image_proc would be something like <node name="image_proc" pkg="image_proc" type="image_proc" ns="usb_cam"/> in a launch file. Originally posted by Wolf with karma: 7555 on 2015-05-07 This answer was ACCEPTED on the original site Post score: 7
{ "domain": "robotics.stackexchange", "id": 21625, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, roslaunch, rosrun, image-proc", "url": null }
graphs, data-structures, computability, reductions In the Edge-Coloring problem, we get as input a graph G = (V, E) and a natural number k, and ask: "Is there a coloring of the edges of G that uses at most k colors?" While in vertex coloring two neighboring vertices must not get the same color, in edge coloring two neighboring edges (i.e., edges sharing a common vertex) must not get the same color. That is, the language is: Edge-Coloring = {<G,k> | the edges of G can be colored using ≤ k colors} Let's look at the reduction Edge-Coloring $\leq _p$ Vertex-Coloring. From the graph G = (V, E), we build a new vertex set: $\widetilde{V}$ = {$x_e \mid e \in E$} We define a new edge between two vertices $x_{e_1}$ and $x_{e_2}$ if the edges $e_1$ and $e_2$ share a common vertex. $\widetilde{E}$ = {$(x_{e_1},x_{e_2}) \mid e_1 \cap e_2 \neq \emptyset $} Finally, we define: $\widetilde{G}$ = ($\widetilde{V}$ , $\widetilde{E}$)
{ "domain": "cs.stackexchange", "id": 18726, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "graphs, data-structures, computability, reductions", "url": null }
arduino, electronics, actuator Title: Electronic circuit for heating nylon fishing line muscle I'm trying to make artificial muscles using nylon fishing lines (see http://io9.com/scientists-just-created-some-of-the-most-powerful-muscl-1526957560 and http://writerofminds.blogspot.com.ar/2014/03/homemade-artificial-muscles-from.html) So far, I've produced a nicely coiled piece of nylon fishing line, but I'm a little confused about how to heat it electrically. I've seen most people say they wrap the muscle in copper wire and the like, pass current through the wire, and the muscle actuates on the heat dissipated by the wire's resistance. I have two questions regarding the heating: 1) isn't copper wire's resistance extremely low, so that it generates very little heat? What metal should I use?
{ "domain": "robotics.stackexchange", "id": 38514, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "arduino, electronics, actuator", "url": null }
Circle is the circle centered at the origin with radius 1 unit (hence, the “unit” circle). Use draw line segment Test whether the trig ratios for two In this Unit Circle worksheet, students explore a method to help remember the coordinates of the points in the first quadrant on the unit circle. Draw the unit circle here, and label the coordinates at the quadrantal (multiples of 90°) angles. NOTE: When doing unit circle practice, working on a unit circle worksheet, or studying for a unit circle quiz, remember that the x-coordinate is the COS value, and the y-coordinate will give you the SIN value. The Unit Circle Written by tutor ShuJen W. Let (x,y) be the point on the circle that is in the first quadrant. To memorize the unit circle, use the acronym "ASAP," which stands for "All, Subtract, Add, Prime." Test Prep Tutors The unit circle is often denoted S^1; the generalization to higher dimensions is the unit sphere. 2|Unit Circle: Sine and Cosine Functions Learning Objectives In this section, you will:
{ "domain": "sri60.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9840936106207202, "lm_q1q2_score": 0.8284352446221337, "lm_q2_score": 0.8418256512199033, "openwebmath_perplexity": 475.8801771295073, "openwebmath_score": 0.5969211459159851, "tags": null, "url": "http://sri60.com/fzagd99/zxqvhik.php?tynundghs=unit-circle-first-quadrant-quiz" }
optics, refraction $$\begin{bmatrix} 1 & 0 \\ 0 & \frac{n'}1\\ \end{bmatrix} \begin{bmatrix} 1 & d \\ 0 & 1\\ \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & \frac1{n'}\\ \end{bmatrix}= \begin{bmatrix} 1 & \frac{d}{n'} \\ 0 & 1\\ \end{bmatrix}$$ This makes sense intuitively as the higher the index of refraction of your glass the less displacement you'll get. If your starting and/or ending medium have index of refractions other than 1 then you'd need to modify the 1 in the corresponding fractions to get the corresponding transfer matrix.
{ "domain": "physics.stackexchange", "id": 25675, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, refraction", "url": null }
scikit-learn, accuracy Title: accuracy at a false positive rate of 1% I need to calculate the accuracy, but at a false positive rate of 1%. I am not sure if it is the normal accuracy that we can calculate with sklearn, or if I need a customized formula. Calculate this by finding the threshold at which the false positive rate is 1%. Your model outputs a probability, not a category. You get a category by seeing on which side of some threshold, probably 0.5, the probability lies. Test some other thresholds. You can use the ROC curve for this. Once you have that threshold, calculate the accuracy. However, be cautious about accuracy (or any threshold-based scoring rule, such as F1 score) as a scoring rule. https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models/312787#312787 https://stats.stackexchange.com/questions/464636/proper-scoring-rule-when-there-is-a-decision-to-make-e-g-spam-vs-ham-email
{ "domain": "datascience.stackexchange", "id": 8512, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "scikit-learn, accuracy", "url": null }
or adjoint matrix of an m-by-n matrix A with complex entries is the n-by-m matrix A * obtained from A by taking the transpose and then taking the complex conjugate of each entry (i.e., negating their imaginary parts but not their real parts). Annihilator. This lecture explains the trace of a matrix, the transpose of a matrix, and the conjugate of a matrix. To understand the properties of the transpose matrix, we will take two matrices A and B which have equal order. For example, if A(3,2) is 1+2i and B = A. Let V be an abstract vector space over a field F. A functional T is a function T:V → F that assigns a number from field F to each vector x ε V. Def. Hermitian conjugate) of a
{ "domain": "thelaunchpadtech.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429629196684, "lm_q1q2_score": 0.8490100725529247, "lm_q2_score": 0.8633916187614823, "openwebmath_perplexity": 308.3740080056074, "openwebmath_score": 0.9081307053565979, "tags": null, "url": "https://www.thelaunchpadtech.com/r52a72/g6p3jm.php?c9f433=conjugate-transpose-of-a-matrix-example" }
evolution, brain 'We ate something' I once read a discussion of how humans might have ingested psychotropic plants or fungi. As you can see, this was 2010. It's not the worst idea, but it's hard to prove. Lots of experiments on cats since the '60s have not produced cats who care to tell us if they are intelligent. Maybe they are just too smart in the first place.
{ "domain": "biology.stackexchange", "id": 9811, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "evolution, brain", "url": null }
orbital-mechanics The values for aphelion, perihelion, semi-major axis, and eccentricity are inconsistent with one another. The values for perihelion, semi-major axis, and eccentricity are unsourced. The values for perihelion, semi-major axis, and eccentricity have far too much precision. Sedna was discovered 17 years ago, corresponding to about 0.15% of Sedna's orbital period. That's far too short an arc to justify five or six places of accuracy. The value for semi-major axis is inconsistent with other sources. The value for aphelion is poorly conducted original research. Footnote 5 (the reference for the aphelion value) states that the source is from osculating orbital elements about the solar system barycenter as retrieved from JPL Horizons. There are two key things wrong with this. One is the use of osculating elements, which can be deceiving. The other is using orbital elements about the solar system barycenter rather than about the Sun. This is just wrong.
{ "domain": "astronomy.stackexchange", "id": 5138, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "orbital-mechanics", "url": null }
c, hashcode Title: Hash function for a key of type structure I need to implement a hash function for a key of type structure. This structure is as follows: typedef struct hash_key_s { void *k1; char *k2; char *k3; } hash_key_t; My proposed hash function is as follows: unsigned long hash_func (void *key) { hash_key_t *k = (hash_key_t *) key; unsigned long h = 0; char c = 0; char *p = k->k2; if (p != NULL) { for (c=0; c=*p; p++) { h = h*31 + c; } } p = k->k3; if (p != NULL) { for (c=0; c=*p; p++) { h = h*31 + c; } } h = h*31 + (((unsigned long) k->k1) >> 2); return h*110351524UL; }
{ "domain": "codereview.stackexchange", "id": 23337, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, hashcode", "url": null }
cc.complexity-theory, reference-request, topological-graph-theory So mixing the Hopcroft-Wong algorithm with some tree isomorphism algorithm should solve your problem in linear time. (this is actually similar to the planar graph isomorphism algorithm, see for example Hopcroft-Tarjan)
{ "domain": "cstheory.stackexchange", "id": 4458, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, reference-request, topological-graph-theory", "url": null }
javascript, html, css, simulation, animation var t2 = document.getElementById("arrbox" + small.toString()); var t3 = t1.id; t1.id = t2.id; t2.id = t3; } } /** * Simulate seelction Sort * The simulate button will call this */ function selection_simulate() { var numbers = $("arrayinput").value; var toBeSorted = numbers.split(","); if (toBeSorted.length >= 1) { var toBeSortedi = new Array(); for (var q = 0; q < toBeSorted.length; q++) { toBeSortedi[q] = parseInt(toBeSorted[q]); } drawarray(toBeSortedi); selectionSort(toBeSortedi); } } function show_selection_sort() { var strVar=""; strVar += "<h2>Selection sort<\/h2>"; strVar += "A{"; strVar += "<input type=\"text\" id=\"arrayinput\" style=\"width:550px;\"\/>"; strVar += "}"; strVar += "<hr style=\"visibility:hidden;\"\/>"; strVar += "<div class=\"dbutton\" onclick=\"selection_simulate();\">"; strVar += "Simulate"; strVar += "<\/div>";
{ "domain": "codereview.stackexchange", "id": 8918, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, html, css, simulation, animation", "url": null }
turbulence, noise, fan Title: Why are ceiling fans so much quieter than others? I used to live in a house that had ceiling fans, which were great for the summer and so wonderfully silent when running. Now however I've moved to a house that has no ceiling fans so I've bought a couple of standing fans. However, they're too noisy to use at night without disrupting sleep. So my question is: what is it about the design of a ceiling fan that allows it to be so quiet? (I'm in Australia, hence thinking about fans at this time of year) The noise is generated to a large degree by the fan blades. The portable fans you mention typically provide much higher flow velocities than ceiling fans, and their blades obviously rotate much, much faster. Two factors are then responsible for their significant noise level:
{ "domain": "physics.stackexchange", "id": 36548, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turbulence, noise, fan", "url": null }
Start by considering the following auxiliary function $$I(a) = \left (\int^a_0 e^{-t^2} \, dt \right )^2 + \int^1_0 \frac{e^{-a^2 (t^2 + 1)}}{1 + t^2} \, dt, \,\, a > 0. \tag1$$ Note the term appearing between the brackets is nothing more than the error function. On differentiating the auxiliary function with respect to $a$ we obtain $$I'(a) = 2 e^{-a^2} \int^a_0 e^{-t^2} \, dt - 2a e^{-a^2} \int^1_0 e^{-a^2 t^2} \, dt.$$ In obtaining this result, Leibniz' rule for differentiating under the integral sign has been used. In the second integral, if a substitution of $u = at$ is made, the result $I'(a) = 0$ quickly follows showing the auxiliary function is indeed constant for all $a > 0$. To find the value for this constant, letting $a \to 0^+$ gives $$I(a) \to \int^1_0 \frac{dt}{1 + t^2} = \frac{\pi}{4},$$ so that $I(a) = \pi/4$ for all $a > 0$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9732407191430024, "lm_q1q2_score": 0.8348148402812042, "lm_q2_score": 0.8577680977182186, "openwebmath_perplexity": 295.9397484241945, "openwebmath_score": 0.9809170365333557, "tags": null, "url": "https://math.stackexchange.com/questions/2526959/gaussian-type-integral-int-infty-infty-frac-mathrme-a2-x21" }
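One standard way to finish the argument — not shown in the excerpt — is to let $a \to \infty$. Since $0 \le e^{-a^2(t^2+1)}/(1+t^2) \le e^{-a^2}$ on $[0,1]$, the second term of $I(a)$ vanishes in the limit, leaving

$$\left(\int^\infty_0 e^{-t^2}\,dt\right)^2 = \frac{\pi}{4} \quad\Longrightarrow\quad \int^\infty_0 e^{-t^2}\,dt = \frac{\sqrt{\pi}}{2}, \qquad \int^\infty_{-\infty} e^{-t^2}\,dt = \sqrt{\pi}.$$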
general-relativity, perturbation-theory Title: Interpreting perturbation theory in general relativity In quantum mechanics, we start with a Hamiltonian $H_0$ for which we know the exact eigenstates and energy eigenvalues. We perturb it by a known term $H$, and then attempt to compute (approximately) the new eigenstates and eigenvalues. In general relativity, my understanding is we start with a metric $g_{\mu \nu}$, and perturb it by a known $h_{\mu \nu}.$ But in my lecture notes (https://arxiv.org/abs/0804.2595), the lecturer shows how to compute $h_{\mu \nu}$. I thought we perturbed a system by a known quantity; can someone clarify the regular procedure of perturbation theory in general relativity, and what typical 'goals' are?
{ "domain": "physics.stackexchange", "id": 12377, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, perturbation-theory", "url": null }
research, social, ethics Domain-specific AI systems. Current AI-based systems lack the ability to understand the variety of domains of human expertise (such as medicine, engineering, law and many more). The systems should be able to perform professional-level tasks such as designing problems and experiments, managing contradictions, negotiating, etc. Data assurance and trust. Current AI-based systems require huge amounts of data, and their behaviour depends directly on the quality of this data, which can be biased, incomplete or compromised. This can be expensive and time-consuming, especially where the system is used for safety-critical applications, where failures can be very dangerous. Radically efficient computing infrastructure. Current AI-based systems impose unprecedented workloads and computing-power demands, which require the development of new computing architectures (such as neuromorphic ones).
{ "domain": "ai.stackexchange", "id": 49, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "research, social, ethics", "url": null }
formal-languages, regular-languages, pumping-lemma Take $w=C^{2N}A^{3N}$ (or $A^{3N}C^{2N}$) where $N$ is the pumping length and $w$ is a string in your language, and then follow the proof along similar lines as in How to prove that a language is not regular?. You should prove by yourself that the above string $w$ cannot be pumped.
{ "domain": "cs.stackexchange", "id": 6304, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-languages, regular-languages, pumping-lemma", "url": null }
solubility, solutions, solvents An acid-base equilibrium is affected by a change in (1) the acidity or basicity of the solvent, (2) the relative permittivity of the solvent, $\epsilon_{r}$, and (3) its ability to solvate the species in the equilibrium. There exist formulas that relate the equilibrium constant and the relative permittivity (Born, Onsager, etc.). In the case of acetic acid, for example, the relative permittivity will have a strong influence on the solvation of the anionic and cationic species of the equilibrium $$\text{CH}_{3}\text{CO}_{2}\text{H}\rightleftarrows\text{CH}_{3}\text{CO}_{2}^{-}+\text{H}^{+}$$
{ "domain": "chemistry.stackexchange", "id": 7521, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solubility, solutions, solvents", "url": null }
python, python-3.x, gui, raspberry-pi And here comes the real kicker, it can be done even shorter: def press(Button): #function to write into entries with buttons of the program we_put_it_in_a_function(Button):
{ "domain": "codereview.stackexchange", "id": 30086, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, gui, raspberry-pi", "url": null }
thermodynamics, heat-conduction I think there is probably something wrong in my way of approaching the problem, but I haven't found anything conclusive on the web about how to do such a thing. I am stuck and I need some help. The equation you've been trying to derive is $$\dot{Q}=2\pi k\frac{(T_1-T_0)}{\ln{(r_0/r_1)}}$$where $\dot{Q}$ is the rate of heat loss per unit length of pipe and k is the thermal conductivity of the pipe. Note that there are two unknowns ($\dot{Q}$ and $T_1$) but only one equation. To provide closure on this, as @Gert has indicated, you need to characterize the rate of heat loss from the pipe to the surrounding air in the room: $$\dot{Q}=2\pi r_0 h(T_0-T_{surr})$$where h is the convective heat transfer coefficient on the outside of the pipe. You can then get both $\dot{Q}$ and $T_1$ by combining these equations (using an estimate of h).
{ "domain": "physics.stackexchange", "id": 56896, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, heat-conduction", "url": null }
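A numerical sketch of combining the two equations above via series thermal resistances. All property values are made-up illustration numbers, and the variable roles follow my reading of the excerpt: the inner-surface temperature is known, the outer-surface temperature is the second unknown.

```python
import math

# Assumed illustration values (not from the excerpt):
k = 0.5                    # pipe-wall conductivity, W/(m K)
h = 10.0                   # outer convective coefficient, W/(m^2 K)
r_i, r_o = 0.02, 0.025     # inner/outer pipe radii, m
T_in, T_surr = 80.0, 20.0  # inner-surface and room-air temperatures, deg C

# Series thermal resistances per unit length of pipe:
R_cond = math.log(r_o / r_i) / (2 * math.pi * k)   # conduction through wall
R_conv = 1.0 / (2 * math.pi * r_o * h)             # convection to room air

Q = (T_in - T_surr) / (R_cond + R_conv)  # heat loss per metre of pipe, W/m
T_s = T_surr + Q * R_conv                # outer-surface temperature, deg C
```

With these numbers the outer surface sits between the room-air and inner-surface temperatures, as it must.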
aerodynamics Title: Why is 55-60 MPH optimal for gas mileage of a passenger car? My driver's education teacher back in high school said 55 MPH is optimal for gas mileage of a passenger car. Just last week, I read an article in a magazine saying 60 MPH is optimal. These numbers are pretty close, so there's some validity in the statement. What's the physics explanation for this 55-60 MPH sweet spot? Although this is an engineering question it's one that I've had great interest in the specific values myself. Wikipedia does an alright job with the question. Looking a little bit deeper into it, this seems to be ripped from some Oak Ridge National Lab report, which as been taken down, but is still on the wayback machine. In fact, a lot of the Wikipedia article looks like a copy of this. Funny, the report references a website that sounds like it has all the credibility in the world, fueleconomy.gov. They give the following image:
{ "domain": "physics.stackexchange", "id": 31696, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "aerodynamics", "url": null }
by the method of integration by parts, in which I will explain each of the steps so that you will better understand the workings of this method. Integration by parts is a technique for solving integrals — a very useful method, second only to substitution. Using the integration-by-parts formula can be broken down into three simple steps, and it starts out somewhat similarly to integrating with u-substitution. The idea traces back to the method of exhaustion (c. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. A typical exercise: evaluate $\int e^x \cos(x)\, dx$. Another method to integrate a given function is integration by substitution; together with substitution, integration by parts will allow you to evaluate a much wider range of integrals.
{ "domain": "schermafisciano.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9744347838494567, "lm_q1q2_score": 0.8055767200404267, "lm_q2_score": 0.8267117855317474, "openwebmath_perplexity": 715.7466028493702, "openwebmath_score": 0.8843280076980591, "tags": null, "url": "http://qvxg.schermafisciano.it/integration-by-parts.html" }
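The formula referred to in the entry above, with one worked example of my own choosing: taking $u = x$ and $dv = \cos(x)\,dx$, so that $du = dx$ and $v = \sin(x)$,

$$\int u\,dv = uv - \int v\,du \quad\Longrightarrow\quad \int x\cos(x)\,dx = x\sin(x) - \int \sin(x)\,dx = x\sin(x) + \cos(x) + C.$$

Differentiating the right-hand side recovers $x\cos(x)$, which confirms the result.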
vba, ms-access 'Now we need to change the font sizes on the controls. 'If this control has a FontSize property, then find the ratio of 'the current height of the control to the form-load height of the control. 'So if form-load height was 1000 (twips) and the current height is 500 (twips) 'then we multiply the original font size * (500/1000), or 50%. 'Then we multiply that by the fontZoom setting in case the user wants to 'increase or decrease the font sizes while viewing the form. Select Case ctl.ControlType Case acLabel, acCommandButton, acTextBox, acComboBox, acListBox, acTabCtl, acToggleButton ctl.FontSize = Round(CDbl(tagArray(ControlTag.OriginalFontSize)) * CDbl(ctl.Height / tagArray(ControlTag.OriginalControlHeight))) * fontZoom End Select End If Next End Sub
{ "domain": "codereview.stackexchange", "id": 23200, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, ms-access", "url": null }
Differentiable vs continuous derivative: a function can be continuous without being differentiable. For example, $f(x) = |x|$ makes a sharp turn as it crosses the y-axis, so at $x = 0$ the left- and right-hand derivatives disagree; differentiability at $a$ requires LHD$(a)$ = RHD$(a)$. Every differentiable function is continuous, but the converse fails: a cusp on the graph of a continuous function destroys differentiability without breaking continuity, and there even exist everywhere continuous, nowhere differentiable functions. Nor does a differentiable function necessarily have a continuous derivative: the class $C^1$ consists of the differentiable functions whose derivative is itself continuous (for instance $f'(x) = 2x$ is continuous), while $C^0$ consists of all continuous functions.
{ "domain": "webwriter.ie", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9688561694652216, "lm_q1q2_score": 0.8094633833780669, "lm_q2_score": 0.8354835411997897, "openwebmath_perplexity": 290.2482914011687, "openwebmath_score": 0.8581441044807434, "tags": null, "url": "http://webwriter.ie/international-trade-tszq/6cca65-differentiable-vs-continuous-derivative" }
electromagnetism, maxwell-equations $$\oint\vec{B}\cdot d\vec{s}= 2\pi rB$$ since the magnetic field is always parallel to the loop (by the right hand rule), and the path length is $2\pi r$ where $r$ is the distance from the wire. We also know that: $$\mu_0I+\mu_0\epsilon_o\frac{d\Phi_e}{dt} = \mu_0I$$ since there is no displacement current in this situation. This allows us to find an equation for the magnitude of $B$: $$B=\frac{\mu_0I}{2\pi r}$$ Now let's consider the case of finding the magnetic field inside a capacitor $(r<R)$ with two circular plates of radius $R$, and with an alternating current in the circuit. Taking the same Amperian loop as before, we also have: $$\oint\vec{B}\cdot d\vec{s}= 2\pi rB$$ However, in this situation, we have: $$\mu_0I+\mu_0\epsilon_o\frac{d\Phi_e}{dt} = \mu_0\epsilon_o\frac{d\Phi_e}{dt}$$ because we know that there is no current through a capacitor.
{ "domain": "physics.stackexchange", "id": 16526, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, maxwell-equations", "url": null }
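A natural way to finish the $r < R$ case in the entry above — assuming, as is standard, a uniform field $E = Q/(\epsilon_0 \pi R^2)$ between the plates, so that the flux through the loop of radius $r$ is $\Phi_e = E\,\pi r^2$:

$$\mu_0\epsilon_0\frac{d\Phi_e}{dt} = \frac{\mu_0 r^2}{R^2}\frac{dQ}{dt} = \frac{\mu_0 I r^2}{R^2} \quad\Longrightarrow\quad B = \frac{\mu_0 I r}{2\pi R^2} \qquad (r < R),$$

which matches the wire result $B = \mu_0 I / 2\pi r$ at the plate edge $r = R$.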
javascript, performance, sqlite /// <summary> /// Delete By ID Statements Factory /// </summary> /// <parameters> /// The actual Delete Statement, /// the ID of elements to delete /// The table name to verify it exists /// </parameters> deleteRowById: function(deleteStatement, dataArray, tableName) { var deferred = q.defer(); var existsQuery = "SELECT * FROM sqlite_master WHERE name = '" + tableName + "' and type='table'"; try { var checkTable = _db.instance.transaction(function(tx) { tx.executeSql(existsQuery, [], function(tx, result) { if (result.rows.length === 0) { //No Such Table Exists deferred.reject(); } else { //Table Exists try Selecting var request = _db.instance.transaction(function(tx) {
{ "domain": "codereview.stackexchange", "id": 21371, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, performance, sqlite", "url": null }
machine-learning, neural-network, time-series, linear-regression Title: Time-series prediction: Model & data assumptions in AI/ML models vs conventional models I was wondering if there was a good paper out there that informs about model and data assumptions in AI/ML approaches. For example, if you look at Time Series Modelling (Estimation or Prediction) with Linear models or (G)ARCH/ARMA processes, there are a lot of data assumptions that have to be satisfied to meet the underlying model assumptions: Linear Regression No Autocorrelation in your observations, often when dealt with level data (--> ACF) Stationarity (Unit-Roots --> Spurious Regressions) Homoscedasticity Assumptions about error term distribution "Normaldist" (mean = 0, and some finite variance) etc. Autoregressive Models stationarity squared error autocorrelation ...
{ "domain": "datascience.stackexchange", "id": 6570, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, neural-network, time-series, linear-regression", "url": null }
kinect, pointcloud Title: How to get a point cloud from kinect Is there an easy way to get a segmented point cloud from the kinect based on one color? Originally posted by toniOliver on ROS Answers with karma: 159 on 2011-10-28 Post score: 0 You can use a Passthrough Filter to exclude points not within a specified range for any of the fields in your pointcloud (including R,G, and B). Originally posted by Dan Lazewatsky with karma: 9115 on 2011-10-29 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 7125, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinect, pointcloud", "url": null }
complexity-theory, time-complexity, randomized-algorithms, p-vs-np, np-intermediate Title: Is following observation on Ladner's theorem correct? Suppose $NP\subseteq DTIME[n^{f(n)}]$ where $f(n)$ is any function satisfying $\omega(1)$ then is it true $P=NP$ holds? Ladner's theorem states infinite time hierarchy between $P$ and $NP$. That is there are $NP$ problems in $DTIME[n^{g(n)}]$ for any $g(n)$ such that $n^c\leq n^{g(n)}\leq 2^{n^{1/c}}$ at some fixed $c\geq1$. If $NP\subseteq DTIME[n^{f(n)}]$ where $f(n)$ is any function satisfying $\omega(1)$ then for every such $g(n)$ we can find a $f(n)\ll g(n)$ and so Ladner's theorem is violated and so $P=NP$ has to hold. And so if $P\neq NP$ there is $g(n)$ with $n^c\leq n^{g(n)}\leq 2^{n^{1/c}}$ such that $NP\subsetneq DTIME[n^{g(n)}]$ and $NP\subseteq DTIME[n^{f(n)}]$ where $f(n)$ is any function satisfying $\omega(1)$ implies $P=NP$. Are these observations correct?
{ "domain": "cs.stackexchange", "id": 7534, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, time-complexity, randomized-algorithms, p-vs-np, np-intermediate", "url": null }
Why is $A$ diagonalizable? If it has two distinct eigenvalues, $1$ and $-1$, then there is nothing to do; we know it is diagonalizable. If it has a repeated eigenvalue, say $1$, but $A-I$ is not the zero matrix, pick $\mathbf{x}\in \mathbb{R}^2$ such that $A\mathbf{x}\neq \mathbf{x}$; then $$\mathbf{0}=(A-I)^2\mathbf{x} = (A^2-2A + I)\mathbf{x} = (2I-2A)\mathbf{x}$$ by the Cayley-Hamilton Theorem. But that means that $2(A-I)\mathbf{x}=\mathbf{0}$, contradicting our choice of $\mathbf{x}$. Thus, $A-I=0$, so $A=I$ and $A$ is diagonalizable. A similar argument shows that if $-1$ is the only eigenvalue, then $A+I=0$. (Hiding behind that paragraph is the fact that if the minimal polynomial is squarefree and splits, then the matrix is diagonalizable; since $p(x)=x^2-1=(x-1)(x+1)$ is a multiple of the minimal polynomial, the matrix must be diagonalizable). So this completes the proof that the trace must be $0$, given that $A\neq I$ and $A\neq -I$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9799765581257485, "lm_q1q2_score": 0.8478993133428055, "lm_q2_score": 0.8652240773641087, "openwebmath_perplexity": 41.315982766130084, "openwebmath_score": 0.9789332747459412, "tags": null, "url": "http://math.stackexchange.com/questions/48123/is-the-converse-of-cayley-hamilton-theorem-true" }
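A quick numerical spot-check of the conclusion above, using a concrete matrix of my own choosing with $A^2 = I$ and $A \neq \pm I$:

```python
# A is a real 2x2 matrix with A*A = I but A != ±I, so by the argument
# above its eigenvalues are 1 and -1 and its trace is 0.
A = [[3.0, -4.0],
     [2.0, -3.0]]

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)          # the identity matrix
trace = A[0][0] + A[1][1]  # 3 + (-3) = 0
```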
c++ } else if (selection == 2) { compareVolume(&shapes[2], &shapes[1]); } else { cout << shapes[2]->getID() << " --"; shapes[2]->print(); cout << shapes[1]->getID() << " --"; shapes[1]->print(); } init2 = true; break; case 3: if (selection == 1) { compareArea(&shapes[2], &shapes[2]); } else if (selection == 2) { compareVolume(&shapes[2], &shapes[2]); } else { cout << shapes[2]->getID() << " --"; shapes[2]->print(); cout << shapes[2]->getID() << " --"; shapes[2]->print();
{ "domain": "codereview.stackexchange", "id": 17301, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++", "url": null }
classical-mechanics, point-particles From conservation of energy, you can write down the following formula: $$ mgR = \frac{1}{2}mR^2\dot{\theta}^2 + m g R (1-\cos \theta) \; .$$ The left hand side represents the potential energy at the top of your semi-circular hill, the right hand side the total energy at any point of the trajectory. (Angle measured from the vertical.) Rearranging, you can write this as $$ \frac{2 g \cos \theta}{R} = \dot{\theta}^2$$ or after some additional work and integrating $$ T = \sqrt{\frac{R}{2g}} \int_0^{\pi/2} \frac{d\theta}{\sqrt{\cos\theta}}$$ A quick check with wolframalpha gives a finite number for the right integral. Therefore the time it takes for the ball to roll up the hill is finite.
{ "domain": "physics.stackexchange", "id": 1624, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, point-particles", "url": null }
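The finiteness of that integral can also be checked without WolframAlpha. The integrand behaves like $(\pi/2-\theta)^{-1/2}$ near the rim, which is integrable; a midpoint-rule sum in plain Python avoids evaluating the integrand exactly at the singular endpoint $\theta = \pi/2$:

```python
import math

# Midpoint rule for the integral of 1/sqrt(cos(theta)) over [0, pi/2].
# Midpoints never land on theta = pi/2, where the integrand diverges
# (integrably, like (pi/2 - theta)^(-1/2)).
n = 400_000
h = (math.pi / 2) / n
total = h * sum(1.0 / math.sqrt(math.cos((i + 0.5) * h)) for i in range(n))
# total comes out near 2.62, so T = sqrt(R/(2g)) * 2.62... is finite.
```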
algorithms, search-algorithms, strings, substrings Title: I'm looking for an algorithm to find unknown patterns in a string I am trying to detect patterns in huge code bases. I managed to filter the entire codebase into a tagged string, as in: ABACBABAABBCBABA The result should be: ABA *3 CBA *2 I'm trying to build / use an algorithm which will find ANY unknown repeating pattern inside the string. The length of the pattern, its composition, and the number of repeats are unknown. To be a pattern it must occur at least twice and have at least 2 items. Once I detect the patterns I can represent them back in their original context. I have tried iterating over each tag. For each tag find the following tag in the string, and continue until adding a tag matches only one repeat - hence no more pattern. I get lost in the implementation (in JS or Python) and I'm hoping there is a better way. Thanks. Here is a simple search in Python: s = "ABACBABAABBCBABA" d={} for sublen in range(MINLEN,int(len(s)/MINCNT)): for i in range(0,len(s)-sublen):
{ "domain": "cs.stackexchange", "id": 16502, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, search-algorithms, strings, substrings", "url": null }
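A completed version of the truncated Python sketch above might look like this — a brute-force $O(n^2)$ pass where `min_len` and `min_count` play the role of the MINLEN/MINCNT constants in the excerpt. Note it reports *every* repeating substring, so the question's desired output (only ABA and CBA) would need a further filtering rule on top:

```python
from collections import Counter

def repeated_patterns(s, min_len=2, min_count=2):
    """Count every substring of length >= min_len and keep those occurring
    at least min_count times (occurrences may overlap). Brute force, but
    fine for short tag strings."""
    counts = Counter()
    n = len(s)
    # A pattern repeating min_count times can be at most n // min_count long.
    for length in range(min_len, n // min_count + 1):
        for i in range(n - length + 1):
            counts[s[i:i + length]] += 1
    return {sub: c for sub, c in counts.items() if c >= min_count}

pats = repeated_patterns("ABACBABAABBCBABA")
```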
electricity, electric-circuits, electric-current, semiconductor-physics If a charge +q flows from the left to right we say that a current is flowing from left to right. If it were flowing from the right to the left a current would be flowing from right to left. For a charge -q a current is said to flow from the left to right if it moves from right to left (transport of a negative charge from right to left can be visualised as a transport of an equal amount of positive charge from left to right) and similarly the other case. When we say that a current I is flowing from the left to the right it is due to a net charge Q=+q-q flowing across the area element from left to right (in the time frame we have chosen). It could be that no negative charge is flowing from the right to left. In such a case the current would be due to the positive charges solely. Similarly it could be that no positive charge flows from left to right. In this case the current would be due to negative charges flowing from right to left only. The more general case assumes the net +q-q
{ "domain": "physics.stackexchange", "id": 38700, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electricity, electric-circuits, electric-current, semiconductor-physics", "url": null }
c++, vectors void clear() { for (size_t i = 0; i < m_Size; i++) m_Data[i].~T(); m_Size = 0; } size_t size() const { return m_Size; } size_t capacity() const { return m_Capacity; } bool empty() const { return m_Size == 0; } const T& operator[](size_t index) const { if (index >= m_Size) throw "Index out of bounds"; return m_Data[index]; } T& operator[](size_t index) { if (index >= m_Size) throw "Index out of bounds"; return m_Data[index]; } Vector<T> operator+(const Vector& other) { if (m_Size != other.size()) throw "Vectors are of different size"; Vector<T> v(m_Size); for (size_t i = 0; i < m_Size; i++) { v.push_back(m_Data[i] + other[i]); } return v; } };
{ "domain": "codereview.stackexchange", "id": 39348, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, vectors", "url": null }
The discrete Fourier transform, or DFT, is the transform that deals with a finite discrete-time signal and a finite (discrete) number of frequencies: it converts a space- or time-domain signal to a signal in the frequency domain. (The Fourier transform can also be defined for more general objects, such as a Borel measure, and Fourier theory is pretty complicated mathematically.) The foundation of most implementations is the fast Fourier transform (FFT), a method for computing the DFT with reduced execution time. In order to compute the FT of a signal with Python we can use the fft function built into SciPy, taking the Fourier transform of the whole signal (or a large interval of samples of the signal).
{ "domain": "flcgorizia.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9838471628097781, "lm_q1q2_score": 0.8322780687569786, "lm_q2_score": 0.845942439250491, "openwebmath_perplexity": 552.3158417307098, "openwebmath_score": 0.6439871788024902, "tags": null, "url": "http://ozad.flcgorizia.it/fourier-transform-python.html" }
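As a small illustration of the passage above, here is a naive pure-Python DFT — the FFT computes exactly this sum, just faster — applied to a 5 Hz sine sampled at 100 Hz for one second (signal and rates are my own choices):

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform, straight from the definition."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

fs = 100                                                     # sampling rate, Hz
x = [math.sin(2 * math.pi * 5 * n / fs) for n in range(fs)]  # 5 Hz sine, 1 s
mags = [abs(c) for c in dft(x)]

# With a 1 s window, bin index equals frequency in Hz; search below Nyquist.
peak_bin = max(range(fs // 2), key=mags.__getitem__)
```

For a pure sine of amplitude 1 over a whole number of periods, the peak bin magnitude is $N/2 = 50$.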
acid-base, equilibrium, ph "$\ce{H3O+}$ has a concentration of $2x$ at equilibrium. $\ce{SO4^2-}$ and $\ce{NH3}$ each have a equilibrium concentrations of $x$" is wrong. When you solve the quadratic equation $$K_\mathrm{a} = \frac{[\ce{H3O+}] [\ce{SO4^2-}]}{[\ce{HSO4^-}]} = \frac{x^2}{c - x}$$ you will get a pH near your text book's value.
{ "domain": "chemistry.stackexchange", "id": 9298, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "acid-base, equilibrium, ph", "url": null }
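A numerical sketch of solving that quadratic. The excerpt gives neither $K_\mathrm{a}$ nor $c$, so both values below are assumptions for illustration ($K_\mathrm{a} \approx 1.2 \times 10^{-2}$ for $\ce{HSO4^-}$, $c = 0.10$ M):

```python
import math

# Solve Ka = x^2 / (c - x) for x = [H3O+], then pH = -log10(x).
Ka, c = 1.2e-2, 0.10          # assumed values, see note above

# Rearranged: x^2 + Ka*x - Ka*c = 0; take the positive root.
x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * c)) / 2
pH = -math.log10(x)           # about 1.5 for these inputs
```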
c++, api, interface, overloading, bigint Which will be surprising for programmers used to the behaviour of the built-in types. for our equality operator, we actually DO want the default, as we want NaN == NaN Are you really sure you want that? Consider a scenario where you did allow division by zero and the square root of a negative number to return a NaN instead of throwing an exception, would you want the following expression to be true? Number(0) / Number(0) == Number(-1).sqrt() There is a good reason why NaNs don't compare equal to themselves, and often this will cause code that was not written specifically to handle NaNs to do the better thing. And as J_H mentioned, programmers who know about IEEE 754 NaNs might rely on your NaNs to behave the same, and will be surprised if they don't. Be careful with implicit constructors The following constructor allows implicit conversion of any integer type to a Number: constexpr Number(std::intmax_t number) : _number(number) {}
{ "domain": "codereview.stackexchange", "id": 44440, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, api, interface, overloading, bigint", "url": null }
machine-learning, classification, machine-learning-model, multilabel-classification, class-imbalance #PCA pca = PCA(random_state=42) #Classifier regularization (SVC). svc = SVC(random_state=42, class_weight= 'balanced') pipe_svc = Pipeline(steps=[('pca', pca), ('svc', svc)]) # Parameters of pipelines can be set using ‘__’ separated parameter names: parameters_svc = [{'pca__n_components': [2, 5, 20, 30, 40, 50, 60, 70, 80, 90, 100, 140, 150]}, {'svc__C':[1, 10, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 250, 300, 400, 500], 'svc__kernel':['rbf', 'linear','poly'], 'svc__gamma': [0.05, 0.06, 0.07, 0.08, 0.09, 0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008,0.009, 0.0001, 0.0002, 0.0003, 0.0004, 0.0005], 'svc__degree': [1, 2, 3, 4, 5, 6], 'svc__gamma': ['auto', 'scale']}] clfsvc = GridSearchCV(pipe_svc, param_grid =parameters_svc, iid=False, cv=10, return_train_score=False) clfsvc.fit(X_balanced, y_balanced)
{ "domain": "datascience.stackexchange", "id": 4770, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, classification, machine-learning-model, multilabel-classification, class-imbalance", "url": null }
Diagonals UK and HS of a rhombus: the diagonals of a rhombus bisect each other at right angles, so each side together with two half-diagonals forms a right triangle, and the side length follows from the Pythagorean theorem: $OK^2 + OS^2 = KS^2$, here $(12\ \mathrm{cm})^2 + (5\ \mathrm{cm})^2 = KS^2$, i.e. $144\ \mathrm{cm}^2 + 25\ \mathrm{cm}^2 = KS^2$, so $KS = 13\ \mathrm{cm}$ (the 24 cm diagonal is halved to 12 cm). Every rhombus has two diagonals connecting pairs of opposite vertices (Theorem 1: opposite sides of a rhombus are parallel and equal). Unlike a square, a rhombus can 'lean over', and this symmetry allows one or more degrees of freedom; the angles formed by the diagonals can be found given the area or the side measure of this two-dimensional shape.
{ "domain": "pp.ua", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9664104962847372, "lm_q1q2_score": 0.8480027823133556, "lm_q2_score": 0.8774767922879693, "openwebmath_perplexity": 890.1609408732782, "openwebmath_score": 0.6349658966064453, "tags": null, "url": "http://doktor-belovodsk.pp.ua/great-value-sckg/diagonals-uk-and-hs-of-a-rhombus-b203cd" }
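The side-length computation described above can be checked in a couple of lines of Python, with half-diagonals of 12 cm and 5 cm taken from the numbers in the text:

```python
import math

# Diagonals of a rhombus bisect each other at right angles, so the side
# is the hypotenuse of a right triangle on the two half-diagonals.
half_d1, half_d2 = 12, 5
side = math.hypot(half_d1, half_d2)  # sqrt(144 + 25) = 13
```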
beginner, datetime, ios, swift var morningAlarmMinutesLessSleepMinutes = 0 bedTimeMinutesArray = [] bedTimeAMorPMArray = [] for minutes in 0...6 { var bedTimeAMorPM = "AM" //needed since you use the next day's morning alarm to calculate the previous day's bedtime var morningMinutes = minutes + 1 if morningMinutes == 7 { morningMinutes = 0 } morningAlarmMinutesLessSleepMinutes = Int(morningAlarmMinutesArray[morningMinutes]) - Int(sleepMinutesArray[minutes]) //TO CODE REVIEW: While writing this question, I realized the "Int(...)" isn't needed above. Could this cause crashing for some users? if morningAlarmMinutesLessSleepMinutes < 0 { morningAlarmMinutesLessSleepMinutes = morningAlarmMinutesLessSleepMinutes + 60 bedTimeHoursArray[minutes] = bedTimeHoursArray[minutes] - 1 }
{ "domain": "codereview.stackexchange", "id": 19480, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, datetime, ios, swift", "url": null }
waves, acoustics, aircraft, shock-waves So why is it that a sonic boom seems to be a digital/discrete/binary occurrence rather than an analog of continuous sound? Bonus question (can post separately if necessary):
{ "domain": "physics.stackexchange", "id": 59252, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "waves, acoustics, aircraft, shock-waves", "url": null }
physical-chemistry, thermodynamics Title: Isochoric heating of a mixture of ideal gases with different heat capacities I was thinking about the following situation: Suppose we have an equimolar mixture of 1 mol argon gas (C=1.5 R) and 1 mol of nitrogen (C=2.5 R) and we heat it isochorically with 10 kJ from room temperature. How can one calculate the final temperature then? I know we need to use the relation dU=C*dT, because the heat energy is converted to internal energy, but also argon should heat more quickly than nitrogen.
{ "domain": "chemistry.stackexchange", "id": 17560, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, thermodynamics", "url": null }
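For the question above, the standard resolution is that the mixture ends at a single common temperature, so the constant-volume heat capacities simply add: $Q = (n_1 C_{V,1} + n_2 C_{V,2})\,\Delta T$. A quick sketch of that calculation — the function name and the 298 K starting point are my assumptions:

```python
R = 8.314  # gas constant, J/(mol*K)

def final_temperature(q_joules, moles_and_cv, t_initial):
    """Isochoric heating of an ideal-gas mixture: Q = (sum n_i * Cv_i) * dT.

    moles_and_cv: list of (n_i, Cv_i) pairs. Both gases end at the same
    temperature because collisions keep the mixture in thermal equilibrium.
    """
    total_heat_capacity = sum(n * cv for n, cv in moles_and_cv)
    return t_initial + q_joules / total_heat_capacity

# 1 mol Ar (Cv = 1.5 R) + 1 mol N2 (Cv = 2.5 R), heated with 10 kJ from 298 K
print(round(final_temperature(10_000, [(1, 1.5 * R), (1, 2.5 * R)], 298.0), 1))  # 598.7
```

Argon does not end up hotter than nitrogen: energy flows between the components until they share one temperature; nitrogen simply absorbs a larger share of the 10 kJ because of its larger $C_V$.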
quantum-field-theory, renormalization, path-integral, qft-in-curved-spacetime, effective-field-theory a finite total action $S^{grav}[g_{\mu\nu}]$. How can I appreciate the differences/resemblances of the treatments in the two cases? In addition, in the gravity case, the relation between the bare and renormalised constants can be determined without computing any specific scattering amplitude, which seems quite odd to me.
{ "domain": "physics.stackexchange", "id": 89606, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, renormalization, path-integral, qft-in-curved-spacetime, effective-field-theory", "url": null }
electromagnetism, hamiltonian-formalism, units, hamiltonian So basically the difference is in the term $1/c$ multiplying $\vec{A}$, present in the second form (which I use in my lectures) but not in the first one (used by Griffiths in Introduction to Quantum Mechanics to treat the Aharonov-Bohm effect). Why does this difference exist and what does it mean? And how does the term $1/c$ affect the dimensional analysis (the units) of the problem? The missing $1/c$ in your first expression is simply a consequence of the units used. The second expression is in Gaussian units while the first one is in either SI units or in natural units. In the latter system of units (natural units) certain constants like $\hbar$ and $c$ have a numerical value of 1, so they can be left out of the equations.$^1$ This is common practice in physics and it doesn't change anything about the dimensional analysis of the problem, as long as you keep in mind that you're working with those natural units.
{ "domain": "physics.stackexchange", "id": 6199, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, hamiltonian-formalism, units, hamiltonian", "url": null }
java, iterator @Test(expected = UnsupportedOperationException.class) public void testRemoveAll() { range(2, 12).removeAll(new HashSet<>()); } @Test(expected = UnsupportedOperationException.class) public void testRetainAll() { range(2, 12).retainAll(new HashSet<>()); } @Test(expected = UnsupportedOperationException.class) public void testClear() { range(5).clear(); } } As always, any critique is much appreciated. Minor point. The step is zero is an odd error message. Wouldn't it be better to actually explain the problem, i.e. The step cannot be zero The user (hopefully) knows what parameters they've supplied; what you need to tell them is that the parameter is invalid. As @200_success pointed out in a comment, Python follows this with its own error messages: Python 2 ValueError: range() step argument must not be zero Python 3 ValueError: range() arg 3 must not be zero
{ "domain": "codereview.stackexchange", "id": 17092, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, iterator", "url": null }
space, geometry An alternative metric I can make is $$ds^2=f(z)^2\cdot(dx^2+dy^2)+dz^2$$ which means the $xy$ plane is scaled by a factor $f(z)$. Note that if we enter new coordinates $u=f(z)\cdot x$, $v=f(z)\cdot y$ and restrict attention to the surface with constant $z$ (i.e. all $dz$ terms disappear), we find the metric on the surface $S_z$ to be $ds^2=f(z)^2\cdot(dx^2+dy^2)=du^2+dv^2$, so this is still a Euclidean plane. However, if the function $f(z)$ is non-constant, the 3-space is curved. What's more (and perhaps less intuitive) is that the $S_z$ planes lie curved inside the 3-space. One way to see this is that the shortest path between two points in $S_z$ will tend to leave $S_z$ and take a short-cut through the side of $S_z$ in 3-space for which $f(z)$ is lower: on this side, distances are shorter, just as the shortest path between two points on a circle in the plane will pass through the interior of the circle rather than move along the circle. Another metric we can make is
{ "domain": "physics.stackexchange", "id": 4333, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "space, geometry", "url": null }
c++, programming-challenge Also for ease of regular expression search and understandability (maintenance) of code it is a good idea to use the same name for variables that contain the same object (you copied a value from one to the other). Ok, that is normally not possible unless you pass them as function argument (you can't write range = range) but you'll be surprised how often it is; for example use range_end for every variable that means "the end of a range". Why am I mentioning this? Well, this is the reason that class member variables often have a prefix (either M_ or m_ or something; I use m_). That way you avoid collisions with local variables with the same name, and member functions with the same name. C++ provides classes for a reason: to write Object Oriented code. Encapsulation is part of that. In almost all cases your class member variables should be private! For (ostream based) debugging output, it is a good idea to make every class writable to an ostream.
{ "domain": "codereview.stackexchange", "id": 36085, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, programming-challenge", "url": null }
thermodynamics, energy, heat, energy-conservation, carnot-cycle The efficiency of such a machine has an upper limit of $(T_{\textrm{hot}} - T_{\textrm{cold}})/T_{\textrm{hot}}$ (as given by the perfect Carnot engine). Given that you are usually well off when you get a cold reservoir of $T_{\textrm{cold}} = 290\textrm{ K}$ on a hot summer day ($T_{\textrm{hot}} = 320\textrm{ K}$), the efficiency of your machine has an upper bound of roughly 10%, which does not yet include losses due to friction, electric resistance, escaping air etc. If you include these, you get (probably) well below 1%. But, for the sake of argument, let us continue with an assumed efficiency of 10%.
{ "domain": "physics.stackexchange", "id": 12322, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, energy, heat, energy-conservation, carnot-cycle", "url": null }
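The Carnot bound invoked here is $\eta_{\max} = 1 - T_{\textrm{cold}}/T_{\textrm{hot}}$; with the summer-day numbers in the answer it comes out just under 10%. A one-function sketch (function name mine):

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on heat-engine efficiency between two reservoirs (kelvin)."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("need t_hot > t_cold > 0")
    return 1.0 - t_cold_k / t_hot_k

# The hot summer day from the answer: about 9.4%, i.e. 'roughly 10%'.
print(f"{carnot_efficiency(320, 290):.1%}")  # 9.4%
```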
computation-models You say you're looking for something other than flow-based models, something suitable for describing user interaction. I've worked with modeling technology for embedded systems that makes this distinction. It has two kinds of communication:
{ "domain": "cs.stackexchange", "id": 12385, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computation-models", "url": null }
c++, error-handling Consider using std::error_code instead of class error Your class error looks very similar to std::error_code. I would use the latter instead of reinventing the wheel. You can create a custom error category (derived from std::error_category) for your own errors. Avoid creating aliases for built-in types It might save a little bit of typing, but I recommend you don't create u64 and i32 aliases. In particular, don't declare those aliases in a header file in the global namespace. Just write std::uint64_t and std::int32_t. If you have a project that includes multiple libraries, it's easy to get lots of different aliases for the same type, which makes grepping harder and adds potential for conflicts. main() returns int, not i32 There is only one valid return type for main(), and that is int. While i32 might be an alias for int on some platforms, there are other platforms where this is not the case, and there it would probably cause a compile error. Don't use const references to hold returned values
{ "domain": "codereview.stackexchange", "id": 44839, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, error-handling", "url": null }
c#, game, animation, unity3d if (Input.GetKeyDown(KeyCode.Alpha5)) { Alpha5Press.SetBool("Alpha5Released", false); playAlpha5(); } if (Input.GetKeyUp(KeyCode.Alpha5)) { Alpha5Press.SetBool("Alpha5Pressed", false); stopAlpha5(); } if (Input.GetKeyDown(KeyCode.Alpha6)) { Alpha6Press.SetBool("Alpha6Released", false); playAlpha6(); } if (Input.GetKeyUp(KeyCode.Alpha6)) { Alpha6Press.SetBool("Alpha6Pressed", false); stopAlpha6(); } if (Input.GetKeyDown(KeyCode.Alpha7)) { Alpha7Press.SetBool("Alpha7Released", false); playAlpha7(); } if (Input.GetKeyUp(KeyCode.Alpha7)) { Alpha7Press.SetBool("Alpha7Pressed", false); stopAlpha7(); } if (Input.GetKeyDown(KeyCode.Alpha8)) { Alpha8Press.SetBool("Alpha8Released", false); playAlpha8(); }
{ "domain": "codereview.stackexchange", "id": 40051, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, game, animation, unity3d", "url": null }
kinematics, definition, dimensional-analysis $$ \vec{v} = \frac{d\vec{s}}{dt} $$ for which we can easily see this has come from an integral: $$ \vec{s} - \vec{s}_0 = \int_{t_0}^t \vec{v}(t)dt. $$ This equation -- as well as $\Delta s = v\Delta t$ -- tells you that the change in position of an object from $\vec s_0$ to $\vec s$ is the result of a velocity $\vec{v}$ that occurs over time period $t-t_0$. During that time period, the velocity $\vec v$ uses every infinitesimal moment $dt$ to change the position by a tiny amount $d\vec s$, until all the little $d\vec s$'s have accumulated up to $\vec{s}-\vec{s}_0$. There are numerous other places where this occurs: $F = -\frac{dU}{dx}$, $I = \frac{dq}{dt}$, $\Delta V = - \frac{d\Phi_B}{dt}$, $P = - \frac{dU}{dV}$, and many more. Another example is Newton's second law -- $F=ma$. If only conservative forces act on your object then you can see it is equivalent to $$ \frac{dU}{dx} = -m\frac{dv}{dt}.$$
{ "domain": "physics.stackexchange", "id": 67815, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinematics, definition, dimensional-analysis", "url": null }
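The accumulation picture in this answer is exactly what a numerical integrator does: chop $[t_0, t]$ into many small $dt$ steps and sum the little $v\,dt$ contributions. A midpoint-rule sketch (function names and sample velocities are mine):

```python
def displacement(v, t0, t1, steps=100_000):
    """Accumulate the little ds = v(t) dt contributions (midpoint rule),
    mirroring s - s0 = integral of v(t) dt from t0 to t1."""
    dt = (t1 - t0) / steps
    return sum(v(t0 + (k + 0.5) * dt) * dt for k in range(steps))

# Constant v = 3 over 2 s: the little ds's accumulate to v * dt_total = 6.
print(round(displacement(lambda t: 3.0, 0.0, 2.0), 6))  # 6.0
```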
python, python-3.x, unit-testing A living copy of the code is available here. I don't think that there is much benefit to using defaultdict; calling .setdefault() is not that cumbersome. Furthermore, if you use defaultdict, then the resulting data structure would also consist of defaultdicts, which might not be desirable. I have no idea why your _parse_to_dict_val() helper function is so complicated. I also don't think that it is smart to have it work recursively — that just makes you have to execute the regex over and over again. With a little bit of renaming in parse_to_dict_vals() and vast simplification of _parse_to_dict_val(), this solution also passes your parse_to_dict_vals() tests: import re
{ "domain": "codereview.stackexchange", "id": 30464, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, unit-testing", "url": null }
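To illustrate the reviewer's point that plain dicts with `.setdefault()` are not cumbersome — and, unlike `defaultdict`, leave ordinary dicts in the result — here is a small sketch; the dotted-key format and sample data are invented for the demo, not taken from the code under review:

```python
def insert_path(tree, dotted_key, value):
    """Insert 'a.b.c' -> value into nested plain dicts, creating levels as needed."""
    *parents, leaf = dotted_key.split(".")
    node = tree
    for part in parents:
        node = node.setdefault(part, {})  # plain dict, no defaultdict required
    node[leaf] = value

cfg = {}
insert_path(cfg, "db.host", "localhost")
insert_path(cfg, "db.port", 5432)
insert_path(cfg, "debug", True)
print(cfg)  # {'db': {'host': 'localhost', 'port': 5432}, 'debug': True}
```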
beginner, c Live up to your promises getInitials says that it returns a char. It doesn't. Be consistent getName prompts the user, outputs what they tell the program, then returns the value. getInitials on the other hand parses a string, outputs the initials and then doesn't return anything. The prefix get really suggests that the methods should be doing a similar level of task. Separate out user interactions getInitials could be written in such a way that it took in a name and returned a string containing the initials, then it could be called from future programs that had different user interfaces. At the moment, it has a printf in it which makes this more challenging. Try to isolate your user interactions (inputs/outputs) from your main logic (string processing in this case). Allocate how much?
{ "domain": "codereview.stackexchange", "id": 23453, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, c", "url": null }
conformal-field-theory, quantization, quantum-anomalies Title: Central charge in a $d=2$ CFT I've always been confused by this very VERY basic and important fact about two-dimensional CFTs. I hope I can get a satisfactory explanation here. In a classical CFT, the generators of the conformal transformation satisfy the Witt algebra $$[ \ell_m, \ell_n ] = (m-n)\ell_{m+n}.$$ In the quantum theory, the same generators satisfy a different algebra $$[ \hat{\ell}_m, \hat{\ell}_n ] = (m-n) \hat{\ell}_{m+n} + \frac{\hbar c}{12} (m^3-m)\delta_{n+m,0}.$$ Why is this? How come we don't see similar things for other algebras? For example, why isn't the Poincare algebra modified when going from a classical to quantum theory?
{ "domain": "physics.stackexchange", "id": 9310, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "conformal-field-theory, quantization, quantum-anomalies", "url": null }
java, optimization, android, pathfinding This is likely to be one of your problems; think about the behavior of this code for a moment -- to figure out if a node is closed, you are iterating through a list of every node you have already closed. As your solution progresses, this list gets longer and longer, which means figuring out if the node is closed keeps getting slower. public Node getChildFromOpen(double row, double col, List<Node> openList) { for (int i = 0; i < openList.size(); ++i) { if (openList.get(i).col == col && openList.get(i).row == row) { return openList.get(i); } } return null; } Same problem here -- as more nodes go into the open list, finding the open node takes longer and longer. Imagine if instead you were to keep track of what is open and closed by coordinates: public boolean isOpen(int row, int col, boolean[][] open) { return open[row][col]; }
{ "domain": "codereview.stackexchange", "id": 8213, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, optimization, android, pathfinding", "url": null }
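The reviewer's point generalizes beyond a boolean grid: any direct-lookup structure (a 2-D array or a hash set of coordinates) replaces the linear scan with constant-time membership. A small Python contrast (coordinates and names are mine):

```python
# Linear scan, like the original closed-list check: cost grows with list size.
def is_closed_list(row, col, closed_list):
    return any(node == (row, col) for node in closed_list)

# Hash-set lookup: average O(1) no matter how many nodes are closed.
def is_closed_set(row, col, closed_set):
    return (row, col) in closed_set

closed = [(0, 0), (1, 2), (3, 4)]
closed_fast = set(closed)
print(is_closed_list(1, 2, closed), is_closed_set(1, 2, closed_fast))  # True True
```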
electromagnetism, lagrangian-formalism, classical-electrodynamics, maxwell-equations, variational-principle Recall, given the spin connection $\omega$, by Cartan's second structure equation, the curvature form is, $$\mathcal{R}=d\omega + \omega \wedge \omega$$ However, the Lie group $U(1)$ is Abelian, and the structure constants vanish, hence the above simplifies, $$\mathcal{R}=d\omega$$ which is completely analogous to the definition of the electromagnetic field strength. Other gauge groups may not possess the same field-strength. For example, in quantum chromodynamics, $SU(3)$ is non-Abelian, and the extra term does not vanish; in tensor form: $$G_{\mu\nu}^a=\partial_\mu A^a_\nu - \partial_\nu A^a_\mu + gf^{a}_{bc}A^{b}_\mu A^{c}_\nu$$
{ "domain": "physics.stackexchange", "id": 14406, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, lagrangian-formalism, classical-electrodynamics, maxwell-equations, variational-principle", "url": null }
newtonian-mechanics Evolution of position vector over time:
{ "domain": "physics.stackexchange", "id": 13409, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics", "url": null }
php, array This code basically works. However, I was wondering if there was an easier, less complex way of doing this. Generally speaking, I think that you are thinking about associative (not "associated") arrays in the wrong manner. Normally, one does not use associative arrays when array element ordering needs to be guaranteed. This is what numerically-indexed arrays are for. It should be a big red flag to you that PHP has no built-in functions to deal with ordering of associative arrays like it does for numerically-indexed arrays. Now, if you need to have key-value combinations like you present here and be able to render them in a given order, perhaps for when they are being iterated for output (which is really the only use case I can think of for why you would want to do this), you might consider one of two options. First, you could create a numerically-indexed array which references the key value array, and use this for iteration. For example:
{ "domain": "codereview.stackexchange", "id": 26151, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, array", "url": null }
javascript, algorithm, programming-challenge, time-limit-exceeded, node.js test('E - Given a string return pairs of anagrams', () => { const s = 'kkkk' const expected = 10 const result = anagrams(s) expect(result).toBe(expected) }) test('F - Given a string return pairs of anagrams', () => { const s = 'cdcd' // c,c d,d cd,cd, cd,dc dc,cd const expected = 5 const result = anagrams(s) expect(result).toBe(expected) }) test('G - Given a string return pairs of anagrams', () => { const s = 'ifailuhkqqhucpoltgtyovarjsnrbfpvmupwjjjfiwwhrlkpekxxnebfrwibylcvkfealgonjkzwlyfhhkefuvgndgdnbelgruel' const expected = 399 const result = anagrams(s) expect(result).toBe(expected) })
{ "domain": "codereview.stackexchange", "id": 34413, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, algorithm, programming-challenge, time-limit-exceeded, node.js", "url": null }
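The tests above follow the classic substring-anagram counting problem: two substrings pair up when one is an anagram of the other. A straightforward solution that satisfies the expected values in these tests groups the O(n²) substrings by a sorted-character signature:

```python
from collections import Counter

def anagram_pairs(s):
    """Count unordered pairs of substrings of s that are anagrams of each other."""
    signatures = Counter()
    for start in range(len(s)):
        for end in range(start + 1, len(s) + 1):
            # Two substrings are anagrams iff their sorted characters match.
            signatures["".join(sorted(s[start:end]))] += 1
    # k substrings sharing a signature contribute k*(k-1)/2 pairs.
    return sum(k * (k - 1) // 2 for k in signatures.values())

print(anagram_pairs("kkkk"), anagram_pairs("cdcd"))  # 10 5
```

Sorting each substring makes this roughly O(n³ log n); for the time-limit concern in the question, switching the signature to a per-prefix character-count difference brings the work down considerably.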
minimize a cost function we just write code which computes the cost $J(\theta)$. Because of these two types of costs, there is an optimal project pace for minimal cost. To demonstrate the minimization function, consider the problem of minimizing the Rosenbrock function of $N$ variables. However, I have found that the goal seek function is set up in such a way that the value in the goal seek cell is set to a certain value. Attempts to minimize cost functions fix the constraint with the highest priority first: Constraint Cost Function $= \sum_{i=1}^{n} w_i \max(\,\cdot\,, 0)$, where $w_i$ is the constraint weight and $n$ is the number of constraints. Chapter 6: Synthesis &
{ "domain": "delleparoleguerriere.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9802808707404786, "lm_q1q2_score": 0.8037257618717885, "lm_q2_score": 0.8198933447152498, "openwebmath_perplexity": 839.5083018859096, "openwebmath_score": 0.5497791171073914, "tags": null, "url": "http://delleparoleguerriere.it/pobw/minimize-a-cost-function.html" }
Note from Brian: According to folklore, an invertible matrix has an LU decomposition if and only if all its leading principal minors are nonzero. I believe this statement can be generalized to LPU decompositions: let an invertible matrix $$A$$ be given; then find the lexicographically smallest permutation $$R$$ such that for each $$k = 1, 2, \ldots, n$$, the $$k \times k$$ matrix given by $$B_{ij} = A_{R(i),R(j)}$$ is invertible. The permutation matrix is then given by $$P$$ as in (c). This therefore gives a characterization of each double coset. But I am too lazy to write up a proof and check whether it's valid. Problem 2.M.12 The problem statement says that $$r, s$$ are positive, but I assume what was meant is that they must be nonnegative.
{ "domain": "brianbi.ca", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9840936059556499, "lm_q1q2_score": 0.8157533851563289, "lm_q2_score": 0.8289388125473629, "openwebmath_perplexity": 73.26678730461299, "openwebmath_score": 0.9586514234542847, "tags": null, "url": "https://brianbi.ca/artin/2.M" }
html, css In addition to your existing viewport meta tag, I suggest adding the width value: <meta name="viewport" content="width=device-width, initial-scale=1.0"> Also, moving the tag before you load your stylesheets and the title helps your performance. MDN on using the viewport meta tag. A header element typically contains headings and other introductory content. The Toggle-link should be part of your navigation as well, because that's what it is. HTML5 Doctor is a great resource, if you want to know more about this. CSS: Generally one should avoid using all in transitions. Using the actual properties you're going to animate is more performant. It would help to have a jsfiddle demo of this to test around.
{ "domain": "codereview.stackexchange", "id": 5833, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "html, css", "url": null }
sequence-alignment, genome conventional alignment on a very powerful computer with lots of memory; break up the genome into smaller fragments and align them individually; some exotic different procedure What possible avenues of attack are there? (I've attempted to use Mafft and Clustal with little success) Whole genome alignment can be done using Progressive Mauve, LAST or Mummer. For bacteria I used Mauve since it also has a very nice visualisation engine. A very new tool is Minimap2, a super fast mapper that is supposed to be able to handle reference vs reference alignment besides read mapping. However, I do not know how its performance compares to the tools mentioned above. If you are interested in a rough idea of the shared genome regions, you can use bevel. Bevel is not really an aligner, it is more like a dot-plot, but it is super fast (even for mammalian sized genomes).
{ "domain": "bioinformatics.stackexchange", "id": 97, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sequence-alignment, genome", "url": null }
nlp, bigdata, topic-model, gensim corpus = [ ["this", "is", "a", "random", "sentence"], ["this", "is", "another", "random", "sentence"], ["and", "another", "one"], ["this", "is", "sparta"], ["just", "joking"] ] dct = Dictionary() dct.add_documents(corpus[:3]) dct.add_documents(corpus[3:]) print(dct == Dictionary(corpus)) # True
{ "domain": "datascience.stackexchange", "id": 11316, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nlp, bigdata, topic-model, gensim", "url": null }
Calculate three evenly spaced quantiles for each row of `X` (`dim` = 2). `y = quantile(X,3,2)` ```y = 6×3 7.0000 8.0000 8.7500 4.2500 6.0000 9.5000 4.0000 8.0000 9.7500 1.0000 2.0000 8.5000 2.7500 5.0000 7.0000 2.5000 9.0000 10.0000 ``` Each row of matrix `y` corresponds to the three evenly spaced quantiles of each row of matrix `X`. For example, the first row of `y` with elements (7, 8, 8.75) has the quantiles for the first row of `X` with elements (9, 3, 10, 8, 7, 8, 7). Calculate the quantiles of a multidimensional array for specified probabilities by using the `'all'` and `vecdim` input arguments. Create a 3-by-5-by-2 array `X`. Specify the vector of probabilities `p`. `X = reshape(1:30,[3 5 2])` ```X = X(:,:,1) = 1 4 7 10 13 2 5 8 11 14 3 6 9 12 15 X(:,:,2) = 16 19 22 25 28 17 20 23 26 29 18 21 24 27 30 ``` `p = [0.25 0.75];` Calculate the 0.25 and 0.75 quantiles of all the elements in `X`. `Yall = quantile(X,p,'all')` ```Yall = 2×1 8 23 ```
{ "domain": "mathworks.cn", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9814534316905262, "lm_q1q2_score": 0.8544506783473793, "lm_q2_score": 0.870597270087091, "openwebmath_perplexity": 1108.302201282896, "openwebmath_score": 0.7601086497306824, "tags": null, "url": "https://ww2.mathworks.cn/help/stats/quantile.html" }
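For comparison from the Python side, the standard library's `statistics.quantiles` computes cut points in a similar spirit, though its interpolation methods differ from MATLAB's quantile definition — which is why the numbers below do not match the `[8, 23]` above:

```python
from statistics import quantiles

# The flattened data behind MATLAB's X = reshape(1:30, [3 5 2]).
data = list(range(1, 31))

# Quartile cut points with linear interpolation ('inclusive' method).
print(quantiles(data, n=4, method="inclusive"))  # [8.25, 15.5, 22.75]
```

MATLAB assigns cumulative probability $(i - 0.5)/N$ to the $i$-th sorted point before interpolating, which for this data gives exactly 8 and 23 at p = 0.25 and 0.75.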
machine-learning, nlp, text-mining, language-model Title: NLP - extract sentence parts related to people Thank you for your help, I appreciate your time. This is not a standard problem but you should be able to roughly do this using two basic kinds of tools that usually go together anyway: Use an NER system to identify people (as opposed to organizations) in sentences. Most systems have a default model that flags people. Use an Open Information Extraction system to get relation triples from sentences like (Sarah, has, brown_hair). See OpenIE for an example. You could also use a dependency parser. So you take the relationships you get from the tools in part 2 and throw out any where the referent/noun isn't a person. If you have to deal with relative clauses like in your first example ("John, who was..."), you'll also have to deal with coreference resolution, which is its own complicated problem.
{ "domain": "datascience.stackexchange", "id": 3275, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, nlp, text-mining, language-model", "url": null }
mathematical-physics, field-theory, education, noethers-theorem, classical-field-theory \dfrac{\partial\mathcal{L}}{\partial\phi_{r}}-\partial_{\nu}\dfrac {\partial\mathcal{L}}{\partial\partial_{\nu}\phi_{r}}\right) \chi_{r} \\+ \int_{\mathbb{\Omega}}d^{D}x~\partial_{\nu}\left[ \dfrac{\partial \mathcal{L}}{\partial\partial_{\nu}\phi_{r}}\chi_{r}-\left( \dfrac {\partial\mathcal{L}}{\partial\partial_{\nu}\phi_{r}}\partial_{\mu}\phi _{r}-\xi^{\nu}\mathcal{L}\right) \xi^{\mu}\right].\tag{III.11}\label{eq11} \end{multline} Here, considering the validity of Euler-Lagrange's equation \begin{equation} \dfrac{\partial\mathcal{L}}{\partial\phi_{r}}-\partial_{\nu}\dfrac {\partial\mathcal{L}}{\partial\partial_{\nu}\phi_{r}}=0, \tag{III.12}\label{eq12} \end{equation} and the applicability of divergence theorem over to third integral (Which now seems to be quite reasonable!) \begin{equation} \int_{\mathbb{\Omega}}d^{D}x~\partial_{\nu}J^{\nu}=\oint_{\partial \mathbb{\Omega}}dS_{\nu}~J^{\nu}=0,\tag{III.13}\label{eq13} \end{equation} with \begin{equation}
{ "domain": "physics.stackexchange", "id": 57862, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mathematical-physics, field-theory, education, noethers-theorem, classical-field-theory", "url": null }
java, programming-challenge /** * Compares two floating-point values for approximate equality. * * @param first floating point number * @param second floating point number * @return true if the difference between first and second is smaller than 0.1% * @throws IllegalArgumentException when first/second out of range (+/-1E+/-150) */ public static boolean isApproximatelyEqual(double first, double second) { return isApproximatelyEqual(first, second, DEFAULT_APPROXIMATELY_FACTOR); }
{ "domain": "codereview.stackexchange", "id": 42949, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, programming-challenge", "url": null }
python, object-oriented, tkinter, opencv def resetCanvas(self): """Resets all canvas elements without advancing forward""" self.image = self.image_feed.returnTKImage() self.canvas.create_image(0, 0, image=self.image, anchor="nw") self.canvas.configure(height = self.image.height(), width = self.image.width()) self.canvas.place(x = 0, y = 0, height = self.image.height(), width = self.image.width()) def reset(self): """Removes all drawings on the canvas so user can start over on same image""" self.corners = [] self.canvas.delete("all") self.resetCanvas() def OnMouseDown(self, event): """Records location of user clicks to establish cropping region""" self.corners.append([event.x, event.y]) if len(self.corners) == 2: self.canvas.create_rectangle(self.corners[0][0], self.corners[0][1], self.corners[1][0], self.corners[1][1], outline ='cyan', width = 2)
{ "domain": "codereview.stackexchange", "id": 15575, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, object-oriented, tkinter, opencv", "url": null }
quantum-field-theory, higgs (Image source: http://en.wikipedia.org/wiki/File:Mecanismo_de_Higgs_PH.png) This corresponds to a precise mathematical expression which is completely fixed by the requirements listed before. What this plot tells you, more or less, is the energy density of the higgs field when it takes on a certain value. It is actually more complicated than that because the higgs interacts with other particles as well, and quantum effects become important. But you can see that if the higgs starts on the hill it has more energy than down in the trough, so it will tend to roll down, releasing energy into other particles as it goes. This is the story of the electroweak phase transition in the early universe! (Cliff notes version!)
{ "domain": "physics.stackexchange", "id": 8615, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, higgs", "url": null }
java, programming-challenge, interview-questions, complexity //Sorting the arraylist according to start time using an inner class //Helps in modularity Collections.sort(ans, new IntervalSort()); return ans; } Note that I also moved the count and inter variables below the base case if check. You don't need them yet before this check. With that out of the way let's look at the actual algorithm.
{ "domain": "codereview.stackexchange", "id": 30062, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, programming-challenge, interview-questions, complexity", "url": null }
Write $f\left(x\right)=\frac{4}{3-\sqrt{4+{x}^{2}}}$ as the composition of two functions.

$\begin{array}{l}g\left(x\right)=\sqrt{4+{x}^{2}}\\ h\left(x\right)=\frac{4}{3-x}\\ f=h\circ g\end{array}$

Access these online resources for additional instruction and practice with composite functions.

## Key equation

Composite function $\left(f\circ g\right)\left(x\right)=f\left(g\left(x\right)\right)$

## Key concepts
{ "domain": "quizover.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9744347823646075, "lm_q1q2_score": 0.8413188189854459, "lm_q2_score": 0.8633916134888614, "openwebmath_perplexity": 462.56427474829644, "openwebmath_score": 0.9333246946334839, "tags": null, "url": "https://www.quizover.com/trigonometry/test/decomposing-a-composite-function-into-its-component-by-openstax" }
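The decomposition can be checked numerically: for any $x$ in the domain, $h(g(x))$ should reproduce $f(x)$ exactly. A quick sketch:

```python
import math

# Check that f = h o g for the decomposition above: h(g(x)) == f(x).
def f(x):
    return 4 / (3 - math.sqrt(4 + x**2))

def g(x):
    return math.sqrt(4 + x**2)

def h(x):
    return 4 / (3 - x)
```

(The check must avoid $x=\pm\sqrt{5}$, where $3-g(x)=0$ and $f$ is undefined.)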
newtonian-gravity, potential-energy, vector-fields $$\Phi(r)=-\frac{GM}{r}, \quad \text{where } r=|\vec R|$$ we can say that $$\vec F_{grav}=m\vec g(r), \quad \text{where } \vec g(r)=-\nabla \Phi$$ So then in the book it was said that $\nabla^2\Phi=4\pi G\mu$. So is it possible to tell me what $\mu$ stands for and how the last formula is derived? The purpose of all this is to calculate and derive the potential equation of the gravitational field. Let us assume that we have a symmetrical spherical object that "generates" a gravitational field. We want to know if the field depends on the object's "homogeneity" (correct me if this is not quite the right term). So from Gauss's theorem (it has different names, but they used that name in the Stanford lectures) we have that the volume integral of the divergence of the gravitational field is equal to the net flux exiting through a spherical surface that surrounds the object. So we have: $$\int_{V} \nabla\cdot\vec g(\vec r)\,dV=\oint_{S}\vec g(\vec r)\cdot d\vec S$$
{ "domain": "physics.stackexchange", "id": 22951, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-gravity, potential-energy, vector-fields", "url": null }
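The surface-integral side of the theorem quoted above is easy to check numerically: for a point mass, $\vec g(r)=-GM/r^2$ radially, so the flux through a sphere of any radius is $g(r)\cdot 4\pi r^2=-4\pi GM$, independent of $r$ (consistent with $\nabla^2\Phi=4\pi G\mu$ when $\mu$ is the mass density, since $\int_V 4\pi G\mu\,dV=4\pi GM$). The values of $G$ and $M$ below are illustrative:

```python
import math

# Numeric check that the gravitational flux through a sphere around a point
# mass is -4*pi*G*M for ANY radius -- the surface-integral side of the
# Gauss theorem quoted above. G and M are arbitrary illustrative values.
G, M = 6.674e-11, 5.0e3

def radial_g(r):
    return -G * M / r**2          # signed radial field (inward)

def flux(r):
    return radial_g(r) * 4 * math.pi * r**2   # field times sphere area

expected = -4 * math.pi * G * M   # independent of the radius chosen
```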
In this case, it is a matter of legibility; the square brackets are the same as round parentheses: $$[5-(6-7(2-6)+2)]+4 = [5-(6-7(-4)+2)]+4 = [5-(6+28+2)]+4 = [5-36]+4 = -31+4 = -27.$$ Note that there are instances where square brackets mean something different, like the nearest integer function. Yes, interval notation must contain a comma; for example, if $a,b\in \mathbb R$, then $$[a,b] = \{x\in \mathbb R:a\leq x\leq b\}.$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9579122672782974, "lm_q1q2_score": 0.8270534146590436, "lm_q2_score": 0.8633916099737806, "openwebmath_perplexity": 668.4973344027322, "openwebmath_score": 0.8858932256698608, "tags": null, "url": "https://math.stackexchange.com/questions/1665212/what-do-the-square-brackets-mean-in-5-6-72-624" }
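Since the square brackets here are plain grouping symbols, the evaluation is easy to machine-check (writing the implied multiplication $7(2-6)$ explicitly):

```python
# The square brackets in [5-(6-7(2-6)+2)]+4 are ordinary grouping parentheses,
# so the whole expression can be evaluated directly:
result = (5 - (6 - 7 * (2 - 6) + 2)) + 4
```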
discrete-signals, frequency, analog On the other hand, the ups and downs of a digital signal have no direct relationship to "real life". Rather, they represent binary digits (hence "digital") in a numeric code used to represent whatever "real life" phenomenon the signal corresponds to. In the diagram below, the bits (each line represents a separate bit stream) might represent characters of ASCII text, or they might represent readings from a heart rate monitor. Looking at the data you can't even tell if it might be a pleasant sound (like the guitar), or a discordant sound (like the cymbal). Also note that the X dimension is not an analog for anything -- an entire symphony might be transmitted as a digital signal in ten seconds, or a high-resolution photograph that was snapped in an instant may be transmitted, bit-by-bit, over a period of seconds or minutes. (The old Mariner space probe would take hours to transmit a single high-resolution image, because the bit rate was so slow at that distance.)
{ "domain": "dsp.stackexchange", "id": 289, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, frequency, analog", "url": null }
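The point that the bits have no direct relationship to "real life" can be illustrated by encoding one value: the same 8-bit pattern reads equally well as an ASCII character or as a raw sensor reading. A small sketch:

```python
# A digital signal's ups and downs encode numbers, not waveforms. The same
# 8-bit pattern can be interpreted as an ASCII character or a sensor value.
def to_bits(byte):
    """Return the 8 bits of a byte, most significant bit first."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

pattern = to_bits(ord("A"))                      # bit stream for the letter 'A'
as_number = int("".join(map(str, pattern)), 2)   # same bits read as a number
```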
ros, moveit, ros-melodic Original comments Comment by MerAARIZOU on 2020-11-10: Thank you for your answer. After looking into the moveit_msgs package, I found out that the message definitions are different from the source. Actually, I have installed ros-melodic-moveit-msgs from the repos; the package installed under the /opt/ros/melodic/... path does contain the moveit_msgs/MoveGroupActionResult message, which is not the case if you look into the source code on GitHub https://github.com/ros-planning/moveit_msgs on the Melodic branch. I was not able to install the package from source because of some errors in building other dependencies. I wonder why the two versions are so different, even though they are both for Melodic? Comment by fvd on 2020-11-10: .action files are generated into Request, Result, and Feedback messages, so the message you see is based on this action. It would look the same if you built it locally. You can also build moveit from source if this doesn't work.
{ "domain": "robotics.stackexchange", "id": 35729, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, moveit, ros-melodic", "url": null }
geology, earth-history, paleogeography, continent, continental-crust There is an ongoing debate about the interpretation of zircon age peaks in igneous rocks and detrital sediments (Rino et al., 2004; Arndt and Davaille, 2013; Hawkesworth et al., 2013). The conventional interpretation is that the age peaks correspond to peaks in production of new continental crust extracted from the mantle (Stein and Hofmann, 1994; Condie, 1998; Albarède, 1998). However, this interpretation has been challenged by advocates of recycling and preservation models; they propose that the peaks record periods of enhanced preservation of crust during the assembly of supercontinents (Condie and Aster, 2009, 2010; Hawkesworth et al., 2010; Voice et al., 2011). In summary, it is a hot research topic which is still very much debated. If someone has the answer here at Stack Exchange, he/she should apply for the Penrose medal! :)
{ "domain": "earthscience.stackexchange", "id": 2599, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "geology, earth-history, paleogeography, continent, continental-crust", "url": null }
image-processing, gaussian, downsampling, r (Note: by upscaling the authors mean aggregation) One of the authors is my supervisor, and I asked him if I can blur my fine-resolution raster and then aggregate it using a common interpolation algorithm (nearest neighbor, bilinear, etc.). This is not the way to go. The aggregation should be done using a Gaussian kernel filter (the point spread function is assumed to be Gaussian). Also, if I blur and then resample, it is like adding an extra PSF effect on top of what my image already has. There is a post on Reddit where a person suggests (without sharing how to do it) that this is a common computer vision task. I share his suggestion: My supervisor told me that the way I should create the aggregated raster is by applying a Gaussian kernel filter to the fine data, but with a very large width. This large width, I think, determines the output pixel size (which, as I said, I want to be 460 m). I say that based on this post.
{ "domain": "dsp.stackexchange", "id": 11513, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, gaussian, downsampling, r", "url": null }
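A one-dimensional sketch of the "blur with a wide Gaussian, then sample" aggregation described above, in pure Python. The sigma and decimation factor are illustrative; for the raster case they would be derived from the 460 m target pixel size:

```python
import math

# 1-D sketch of Gaussian aggregation: convolve with a wide, normalized
# Gaussian kernel (the assumed PSF), then keep every `factor`-th sample.
# sigma and factor are illustrative, not tied to a real sensor's PSF.

def gaussian_kernel(sigma, radius):
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]   # normalize so the kernel preserves the mean

def blur_then_decimate(signal, sigma, factor):
    radius = int(3 * sigma)     # truncate the kernel at ~3 sigma
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    blurred = [
        sum(k[j] * signal[min(max(i + j - radius, 0), n - 1)]  # clamp edges
            for j in range(len(k)))
        for i in range(n)
    ]
    return blurred[::factor]    # keep one sample per coarse pixel
```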
ros, gazebo, urdf, roslaunch, ros-kinetic <joint name="right_wheel_joint" type="continuous"> <axis xyz="0 0 1"/> <parent link="base_link"/> <child link="right_wheel"/> <origin rpy="-1.5708 0 0" xyz="-0.2825 -0.125 -.15"/> </joint> <link name="left_wheel"> <collision> <inertial> <mass value="0.1"/> <inertia ixx="5.1458e-5" iyy="5.1458e-5" izz="6.125e-5" ixy="0" ixz="0" iyz="0"/> </inertial> </collision> <visual> <geometry> <cylinder length="0.05" radius="0.035"/> </geometry> <material name="black"/> </visual> </link> <joint name="left_wheel_joint" type="continuous"> <axis xyz="0 0 1"/> <parent link="base_link"/> <child link="left_wheel"/> <origin rpy="-1.5708 0 0" xyz="-0.2825 0.125 -.15"/> </joint> </robot>
{ "domain": "robotics.stackexchange", "id": 30388, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, gazebo, urdf, roslaunch, ros-kinetic", "url": null }
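The wheel's inertia numbers in the URDF above can be checked against the standard solid-cylinder formulas, $i_{xx}=i_{yy}=m(3r^2+h^2)/12$ about a diameter and $i_{zz}=mr^2/2$ about the axis, using the mass 0.1 kg, radius 0.035 m, and length 0.05 m from the link:

```python
# Check the left_wheel inertia values against the solid-cylinder formulas:
#   ixx = iyy = m * (3*r**2 + h**2) / 12   (about a diameter)
#   izz = m * r**2 / 2                      (about the cylinder axis)
m, r, h = 0.1, 0.035, 0.05   # mass, radius, length from the URDF above

ixx = m * (3 * r**2 + h**2) / 12
izz = m * r**2 / 2
```

Both come out matching the 5.1458e-5 and 6.125e-5 values in the file.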
material-science In the sintering process, the components of the alloy can first be mixed in powder form, ensuring good homogeneity. They are then cold-pressed close to the desired form. Here comes the second advantage: even hard alloys such as tungsten carbide + cobalt can be easily machined after this step (if some binder product is added to give enough mechanical resistance). The part is now close to its final form, but scaled: the ratios between length, width, and height are correct, but all of them are greater, due to the porosities. Finally, the temperature is raised so that the component of the alloy with the lower melting point is allowed to locally melt, closing the micro-porosities and ensuring metallurgical bonding. The material shrinks to its final dimensions in the process. Some manufacturers also use hydrostatic pressure in addition to temperature to help close the porosities.
{ "domain": "physics.stackexchange", "id": 76381, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "material-science", "url": null }
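The "scaled green part shrinks isotropically to final size" step can be quantified: mass is conserved while porosity closes, so if the pressed ("green") part has relative density $\rho_g$ and sinters to relative density $\rho_f$, the linear shrinkage factor is $(\rho_g/\rho_f)^{1/3}$. The 75% green density below is an illustrative assumption, not a quoted figure:

```python
# Isotropic sintering shrinkage: mass is conserved while porosity closes,
# so L_final / L_green = (rho_green / rho_final) ** (1/3).
# The 75% relative green density is an illustrative assumption.
def linear_shrinkage_factor(rho_green, rho_final):
    return (rho_green / rho_final) ** (1.0 / 3.0)

factor = linear_shrinkage_factor(0.75, 1.0)   # roughly 9% linear shrinkage
```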