haskell

Title: UPenn Homework 3: localMaxima :: [Integer] -> [Integer]

Please see this question for the general description.

Exercise 2: localMaxima :: [Integer] -> [Integer]

A local maximum of a list is an element of the list which is strictly greater than both the elements immediately before and after it.

localMaxima :: [Integer] -> [Integer]
localMaxima l =
    let getMaxOf3 x y z
            | x < y && y > z = Just y
            | otherwise      = Nothing
        accumulate l2 acc = case l2 of
            u:v:w:xs -> case getMaxOf3 u v w of
                Just q  -> accumulate (v:w:xs) (q:acc)
                Nothing -> accumulate (v:w:xs) acc
            _ -> acc
    in reverse (accumulate l [])

The implementation can be much simpler. The main insight I can offer is to use map and filter instead of Maybe and writing your own accumulate. In addition,
{ "domain": "codereview.stackexchange", "id": 15506, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "haskell", "url": null }
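The map/filter insight suggested in the review can be sketched as follows (in Python for illustration; the same shape in Haskell would be a list comprehension or `filter` over `zip3 xs (tail xs) (drop 2 xs)`):

```python
def local_maxima(xs):
    # Pair each interior element with its two neighbours, then keep
    # only the strict peaks -- no Maybe, no hand-rolled accumulator.
    triples = zip(xs, xs[1:], xs[2:])
    return [y for x, y, z in triples if x < y and y > z]
```

This also handles lists shorter than three elements for free, since `zip` simply produces no triples.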
c, game, error-handling, memory-management

    return 1;
}

void handle_collision(t_board *board, t_entity *entity, t_position *new_pos)
{
    t_position old_pos = entity->pos;

    if (!check_valid_move(board, new_pos)) {
        *new_pos = old_pos;
        return;
    }
    int collided_with_wall = check_wall(board, new_pos);
    if (entity->type == MONSTER) {
        if (collided_with_wall || get_cell_at(board, new_pos->x, new_pos->y) == 'A') {
            entity->facing_dir = get_opposite_direction(entity->facing_dir);
            *new_pos = old_pos;
            char c = map_direction_to_char(entity->facing_dir);
            set_cell_at(board, new_pos->x, new_pos->y, c);
            return;
        }
    }
    if (entity->type == PLAYER && collided_with_wall) {
        *new_pos = old_pos;
    }
}

int compare_positions(t_position *pos1, t_position *pos2)
{
    if (pos1->y < pos2->y) return -1;
    if (pos1->y > pos2->y) return 2;
    if (pos1->x < pos2->x) return -1;
{ "domain": "codereview.stackexchange", "id": 42016, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, game, error-handling, memory-management", "url": null }
javascript, css, xml, library, tex

    var saveIndentation = state["indentation"];
    state["lineNumber"]++;
    state["indentation"]++;
    var childElements = whileElement.children;

    for (var i = 0; i < childElements.length; ++i) {
        var elementName = childElements[i].tagName.toLowerCase();
        var handlerFunction = Algotype.dispatchTable[elementName];

        if (handlerFunction) {
            htmlText += handlerFunction(childElements[i], state);
        } else {
            throw new Error("Unknown element: '" + elementName + "'.");
        }
    }

    // Reset the indentation counter.
    state["indentation"] = saveIndentation;
    return htmlText;
};

Algotype.typesetRepeatUntil = function(repeatUntilElement, state) {
    var conditionTeX =
        (repeatUntilElement.getAttribute("condition") || "").trim();
    conditionTeX = addTeXDelimeters(conditionTeX);
{ "domain": "codereview.stackexchange", "id": 26692, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, css, xml, library, tex", "url": null }
organic-chemistry, reaction-mechanism, alcohols, catalysis, organosulfur-compounds Title: Why is pyridine used when making tosyl esters from alcohols? Tosyl chloride is used to make a hydroxyl group into a better leaving group. However, when the reaction of tosyl chloride and an alcohol occurs, a weak base such as pyridine should be used. Why? The function of pyridine is actually not so simple and not so easy to notice at first glance. There is a fundamental reason why pyridine is used to promote the acylation reaction, which is that it can act as a catalyst.
{ "domain": "chemistry.stackexchange", "id": 12617, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, reaction-mechanism, alcohols, catalysis, organosulfur-compounds", "url": null }
It's just one of those conventions that people have adopted. This one saves on brackets as we can write $a^{b c}$ for $(a^b)^c$, and $a^{b^c}$ for $a^{(b^c)}$ – user254665 Jan 31 at 1:48 One possible motivation I think for the convention is that $\exp \exp x$ can only reasonably be interpreted as $\exp ( \exp x)$ (where $\exp x$ is a common notation for the ubiquitous $e^x$). Choosing $x^{(y^z)}$ over $(x^y)^z$ would keep to that. – Vandermonde Feb 1 at 0:43
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.950410972802222, "lm_q1q2_score": 0.8039929729313924, "lm_q2_score": 0.8459424353665382, "openwebmath_perplexity": 652.2481654303058, "openwebmath_score": 0.885234534740448, "tags": null, "url": "http://math.stackexchange.com/questions/1633790/what-is-the-order-when-doing-xyz-and-why" }
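The convention discussed above is what most programming languages that follow mathematical usage implement: exponentiation parses right-to-left. A quick check in Python:

```python
# Exponentiation is right-associative in Python (and in the mathematical
# convention the comments above describe), so the tower groups from the top.
a = 2 ** 3 ** 2        # parsed as 2 ** (3 ** 2) = 2 ** 9
b = (2 ** 3) ** 2      # the left-associative reading, equal to 2 ** 6
assert a == 512
assert b == 64
```

The left-associative reading is redundant anyway, since $(a^b)^c$ can always be written without nesting as $a^{bc}$.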
java, game-of-life, simulation

    public int width() {
        return this.cells.length;
    }

    public int height() {
        return this.cells[0].length;
    }

    public Ocean timeStep() {
        Ocean next = new Ocean(width(), height(), getStarveTime());
        for (int x = 0; x < width(); x++) {
            for (int y = 0; y < height(); y++) {
                next.cells[x][y].putOccupant(this.cell(x, y).timeStep());
            }
        }
        return next;
    }

    public int cellContents(int x, int y) {
        return cell(x, y).getOccupant().getType();
    }

    private Cell cell(int x, int y) {
        return this.cells[((x % width()) + width()) % width()]
                         [((y % height()) + height()) % height()];
    }
}
{ "domain": "codereview.stackexchange", "id": 6547, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game-of-life, simulation", "url": null }
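The `((x % width()) + width()) % width()` expression in `cell` is the standard idiom for wrapping any integer index, including negative ones, onto a torus. In Java this double modulo is needed because `%` can return negative values; a small Python sketch of the same trick:

```python
def wrap(i, n):
    # ((i % n) + n) % n maps any integer into 0..n-1.  In Python the plain
    # i % n already does this for positive n, but the double-mod form is
    # what languages like Java and C need, where -1 % 5 == -1.
    return ((i % n) + n) % n
```

So a cell lookup at (-1, -1) on a 5x5 grid lands at (4, 4), which is exactly the toroidal neighbourhood Game-of-Life-style simulations want.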
atmospheric-science, tidal-effect, exoplanets I have tried MIT GCM, which I found quite complicated to compile and run, and EdGCM, which is too focused on Earth's climate and which I didn't manage to modify for other conditions. I have also tried PlaSim, which is perfect (easily compilable and modifiable), but so far it seems rather unstable when used under conditions different from those for which it was written (like a tidally locked planet, which it cannot handle at all). After repeatedly returning to the problem, I finally found a model that satisfies my original requirements. It is called THOR (github); it is developed for Ubuntu, is fairly easy to install, and also works on a single PC using a GPU. Moreover, it supports various exoplanet conditions, like tidal locking and the non-shallow atmospheres of hot Jupiters. I am sharing this for anybody interested in the original question. It works well for me so far.
{ "domain": "physics.stackexchange", "id": 67646, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "atmospheric-science, tidal-effect, exoplanets", "url": null }
javascript, jquery, html, css

/**
 * Removes .selected from the previous element and adds it to target.
 * If the dropdown is open, it also updates .current.
 * @param {jQuery} target - Item to change to .selected
 * @param {jQuery} container - The .hex-selector containing the target
 */
var selectItem = function(target, container) {
    if ($("." + SELECTED, container).length) {
        $("." + SELECTED, container).removeClass(SELECTED);
    }
    target.addClass(SELECTED);
    $("span", container).text(target.text());
    $("span", container).attr("value", target.attr("value"));
    if (container.hasClass(ACTIVE)) {
        updateCurrent(target, container);
    }
}
{ "domain": "codereview.stackexchange", "id": 13119, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery, html, css", "url": null }
Yes, the empty set does have a partition. Let's see what a partition is: given a set $X$, a partition of $X$ is a set $P$ of nonempty subsets of $X$ such that each element of $X$ is contained in exactly one element of $P$. Consider $P = \varnothing$. • Is it a collection of nonempty subsets of $\varnothing$? Yes, all the elements of $P$ are nonempty subsets of $\varnothing$, because $P$ has no elements so the assertion is vacuously true. • Is every element of $\varnothing$ contained in exactly one element of $P$? Again yes, this is vacuously true. Therefore $P = \varnothing$ is indeed a partition of $\varnothing$. However $P = \{ \varnothing \}$, the set with one member, is not a partition of $\varnothing$, because it fails the first requirement. • Thank you, the comment actually clarified it for me, I was reading $P = \{ \emptyset \}$ instead of $P=\emptyset$. – PseudoRandom Mar 19 '17 at 19:00
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9780517469248845, "lm_q1q2_score": 0.8107450454393974, "lm_q2_score": 0.8289388040954684, "openwebmath_perplexity": 139.17206515041238, "openwebmath_score": 0.9341630339622498, "tags": null, "url": "https://math.stackexchange.com/questions/2193873/is-wikipedia-wrong-when-stating-that-emptyset-has-exactly-one-partition-name" }
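The two vacuous-truth checks in the argument above can be made concrete. "For all x in S, P(x)" is true whenever S is empty, because there is no counterexample; Python's `all` behaves the same way:

```python
# The empty partition of the empty set, checked mechanically.
# P = [] plays the role of the partition with no blocks.
P = []

# "Every element of P is a nonempty subset": vacuously true, no elements of P.
assert all(len(block) > 0 for block in P)

# "Every element of the empty set lies in some block of P": vacuously true,
# since we quantify over the elements of [], of which there are none.
assert all(any(x in block for block in P) for x in [])

# By contrast, the partition {∅} fails the first requirement:
P_bad = [set()]
assert not all(len(block) > 0 for block in P_bad)
```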
quantum-mechanics, heisenberg-uncertainty-principle Title: Does Heisenberg Indeterminism have a lower bound in $\Delta \chi$? Take Heisenberg's indeterminism law: $$\Delta \chi \cdot \Delta \rho \geq h/ 2$$ Does the momentum pose a limit so that we cannot measure the position with a precision greater than $$h / (2\cdot \Delta \rho)$$ where $\Delta \rho$ is at maximum $mc$?
{ "domain": "physics.stackexchange", "id": 35041, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, heisenberg-uncertainty-principle", "url": null }
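As a numerical illustration of the bound being asked about (using the $\hbar/2$ form of the uncertainty relation and electron values for concreteness; the question itself writes $h$): capping $\Delta\rho$ at $mc$ gives a minimum $\Delta\chi$ of about half the reduced Compton wavelength.

```python
# Illustrative numbers only: hbar/(2*m*c) for an electron,
# i.e. half the reduced Compton wavelength (~0.19 pm).
hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
c    = 2.99792458e8      # m/s

dx_min = hbar / (2 * m_e * c)
assert 1.9e-13 < dx_min < 2.0e-13
```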
reaction-mechanism, thermodynamics, equilibrium If possible I’ll appreciate a numerical example. After some research I found that when Keq is 1 it means that the concentration of products and reactants is the same. [...] If my reaction is of the type $\ce{A <=> 2B}$ only if their concentration is 1M this is correct, but it is only a special case and not the rule.
{ "domain": "chemistry.stackexchange", "id": 17552, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reaction-mechanism, thermodynamics, equilibrium", "url": null }
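A numerical example of the point being made (illustrative concentrations, not measured data): for $\ce{A <=> 2B}$, $K = [\ce{B}]^2/[\ce{A}]$ can equal 1 without the concentrations being equal.

```python
# A <=> 2B, so K = [B]**2 / [A].
# K = 1 with unequal concentrations:
A, B = 4.0, 2.0
K = B**2 / A
assert K == 1.0 and A != B

# Equal concentrations give K = 1 only in the special case [A] = [B] = 1 M:
A2, B2 = 1.0, 1.0
assert B2**2 / A2 == 1.0
```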
java, file, database

Title: Java program to manage animal data and store them in text files

I made this program as a baby step to creating an animal shelter management system that will be used in a real-life animal shelter. The goal is to move from text files to an online database, and from a console interface to a GUI. Two things I'm yet to learn. I set out with the goal of creating a CRUD program that stores the information in files. The program runs with no compilation errors, and as far as I can see, no logical errors. I am looking for any type of input, whether it's naming, logic, or OOP choices.

Main.java

public class Main {
    public static void main(String[] args) {
        UserTextInterface userTextInterface = new UserTextInterface();
        userTextInterface.run();
    }
}

Animal.java

import java.util.Date;

public class Animal {
    private String name;
    private int ID;
    private String notes;
    private String species;
    private Date dateOfBirth;
{ "domain": "codereview.stackexchange", "id": 35215, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, file, database", "url": null }
php, mysql

while ($check > $x) {
    $userdata[$x] = $query->fetch()->description;
    $query->nextrowset();
    $x++;
}

$user_data['gender'] = $userdata[1];
$user_data['ethnicity'] = $userdata[2];
$user_data['country'] = $userdata[3];
$user_data['lookingfor'] = $userdata[4];
$user_data['type'] = $userdata[5];
$user_data['seeking'] = $userdata[6];
$user_data['intent'] = $userdata[7];
$user_data['longestrelationship'] = $userdata[8];

return $user_data;
}
{ "domain": "codereview.stackexchange", "id": 10518, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, mysql", "url": null }
electromagnetism, waves, radio EDIT: As pointed out by @Alfred Centauri in the comments, the transmitter would be affected if the receiver was in the near field. For amateur radio purposes, the near field ceases to exist well within 200 meters of the transmitter (for the vast majority of cases), thus it is unlikely that anyone "tuning in" to your broadcast would be directly affecting your transmitter.
{ "domain": "physics.stackexchange", "id": 15554, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, waves, radio", "url": null }
& millimeters (mm). Find the surface area of the trapezoidal prism. A net can be helpful when finding surface area. And all of its faces are rectangles. Using the previous example, multiply 3 by 4 to calculate the surface area of one face. In most cases, the box is an enclosed figure, either a rectangle or a square. A manufacturer wants to design an open box having a square base and a surface area of 108 square inches. We will use the formula in the. Write the areas of the base, top, and sides in terms of and b. A label that wraps around a box of golf balls covers 75% of its lateral surface area. (Use the formulas from problem 1 and the approach from problem 2.) Minimum surface area box: of all boxes with a square base and a volume of 100 m³, which one has the minimum surface area? (Give its dimensions.) Multiply this figure by two to cover the opposing side, so you would now have 24 square feet. Calculator online for the surface area of a
{ "domain": "auralicht.de", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9825575193498701, "lm_q1q2_score": 0.8166458533833736, "lm_q2_score": 0.8311430499496096, "openwebmath_perplexity": 333.35130182291016, "openwebmath_score": 0.7742525935173035, "tags": null, "url": "http://kgll.auralicht.de/surface-area-of-a-box-with-a-square-base.html" }
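The "open box with square base and surface area 108 square inches" problem mentioned above has a clean closed-form answer, which we can check numerically. With base side $x$ and height $h$: $S = x^2 + 4xh = 108$, so $h = (108 - x^2)/(4x)$ and $V(x) = x(108 - x^2)/4$; setting $V'(x) = (108 - 3x^2)/4 = 0$ gives $x = 6$, $h = 3$.

```python
# Open box, square base, fixed surface area S = x^2 + 4xh = 108.
# Calculus gives the volume-maximizing dimensions x = 6, h = 3.
x = 6.0
h = (108 - x**2) / (4 * x)
assert h == 3.0
assert x**2 + 4 * x * h == 108.0

def V(x):
    # Volume as a function of base side, with h eliminated via the constraint.
    return x * (108 - x**2) / 4

# Quick numeric sanity check that x = 6 beats nearby widths.
assert V(6) > V(5.9) and V(6) > V(6.1)
```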
forward-error-correction, ldpc Here's the problem I'm running into: I start with $A$ and do Gauss-Jordan elimination in $\mathbb{G_2}$ to get it into reduced row echelon form. Sometimes the matrix I've been given (I'm not designing it, have to use what I'm given) can't be put into that form with solely row operations. It requires actually swapping the order of some columns (is that what he means by "column-pivoting" in the reference above?). Everything in me screams that's wrong, but it turns out if I just do the same thing on the decoding side it works out. But then I'm left with the conundrum that when I go to decode something encoded with this parity check matrix I have to know how the matrix columns were reordered on the transmit side. Basically I have to re-do the generator matrix, which is by far the slowest part of my process (I'm going to make some improvements there though, using the QC-LDPC structure of the parity matrix).
{ "domain": "dsp.stackexchange", "id": 8377, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "forward-error-correction, ldpc", "url": null }
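Yes, swapping columns to reach reduced row echelon form is what "column pivoting" usually refers to, and recording the permutation is exactly the bookkeeping described above. A minimal sketch of GF(2) Gauss-Jordan elimination with column pivoting (illustrative, not the poster's actual code):

```python
def rref_gf2(rows, ncols):
    """Reduced row echelon form over GF(2), recording column swaps.

    rows: list of lists of 0/1 (mutated in place).
    Returns (rows, perm) where perm[i] is the original index of the
    column now sitting in position i -- this permutation is what must
    be replayed on the decoding side.
    """
    perm = list(range(ncols))
    r = 0
    for c in range(ncols):
        if r >= len(rows):
            break
        pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pivot is None:
            # Column pivoting: swap in a later column that has a usable pivot.
            for c2 in range(c + 1, ncols):
                if any(rows[i][c2] for i in range(r, len(rows))):
                    for row in rows:
                        row[c], row[c2] = row[c2], row[c]
                    perm[c], perm[c2] = perm[c2], perm[c]
                    pivot = next(i for i in range(r, len(rows)) if rows[i][c])
                    break
            if pivot is None:
                continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]  # XOR = GF(2) add
        r += 1
    return rows, perm

# Column 1 has no pivot below row 0 after elimination, forcing a swap with column 2.
rows, perm = rref_gf2([[1, 1, 0], [1, 1, 1]], 3)
assert rows == [[1, 0, 1], [0, 1, 0]]
assert perm == [0, 2, 1]
```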
waves, gravity, mass Sure, but then the next question appears: why is growth in value possible, and why can things accumulate into big objects at all? How does something almost zero (0.00000001, for example) sum to 1 or 2, or even 6 and 9? Maybe you have an explanation in terms of wave summation? Help me understand why our material world makes it possible to sum things into large objects, from a speck of dust to Jupiter. In physics, when we study macroscopic systems, we distinguish between what we call extensive and intensive quantities. Extensive quantities count the total amount of something, so they depend on the total amount or volume of whatever macroscopic system you're looking at. Intensive quantities are rates of change that are independent of the size of the system that you're looking at.
{ "domain": "physics.stackexchange", "id": 71381, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "waves, gravity, mass", "url": null }
quantum-mechanics, condensed-matter, gauge-theory, research-level, topological-order which results in a 2nd nearest neighbor coupling between the low-energy $\chi^0$ fermion, $$H_{\text{MF},0}=\text{i}u_a J\sum_{\langle ij\rangle} \chi_i^0\chi_j^0+\text{i}\kappa\sum_{\langle\!\langle ij\rangle\!\rangle}\chi_i^0\chi_j^0,$$ with the coefficient $\kappa$ given by the 3rd order perturbation (see this Wikipedia page for the 3rd order perturbation formula) $$\kappa=\Big(\frac{h_x}{2}\Big)\frac{1}{u_0J}\Big(-\frac{h_z}{2}\Big)\frac{1}{u_0J}\Big(-\frac{h_y}{2}\Big)=\frac{h_xh_yh_z}{8u_0^2J^2}.$$ The 2nd neighbor coupling term $\kappa$ breaks the time-reversal symmetry and gaps out the low-energy fermion $\chi^0$. The gapless Kitaev spin liquid is then driven into the non-Abelian phase with the Ising topological order.
{ "domain": "physics.stackexchange", "id": 39464, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, condensed-matter, gauge-theory, research-level, topological-order", "url": null }
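The arithmetic in the third-order coefficient can be checked mechanically: the two minus signs cancel and the two energy denominators contribute $1/(u_0 J)^2$, giving $\kappa = h_x h_y h_z / (8 u_0^2 J^2)$. A quick numerical sanity check (the values below are arbitrary placeholders, not physical parameters):

```python
# Verify that the product of the three field factors and two denominators
# reduces to h_x*h_y*h_z / (8 * u0**2 * J**2), signs included.
hx, hy, hz, u0, J = 0.3, 0.5, 0.7, 1.0, 2.0
kappa = (hx / 2) * (1 / (u0 * J)) * (-hz / 2) * (1 / (u0 * J)) * (-hy / 2)
assert abs(kappa - hx * hy * hz / (8 * u0**2 * J**2)) < 1e-12
```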
moveit, transform

Frame /r_gripper_l_finger_tip_frame exists with parent /r_gripper_r_finger_tip_link.
Frame /r_gripper_r_finger_tip_link exists with parent /r_gripper_r_finger_link.
Frame /r_gripper_l_finger_link exists with parent /r_gripper_palm_link.
Frame /r_gripper_l_finger_tip_link exists with parent /r_gripper_l_finger_link.
Frame /r_gripper_motor_screw_link exists with parent /r_gripper_motor_slider_link.
Frame /r_gripper_motor_slider_link exists with parent /r_gripper_palm_link.
Frame /r_gripper_r_finger_link exists with parent /r_gripper_palm_link.
Frame /r_shoulder_lift_link exists with parent /r_shoulder_pan_link.
Frame /r_shoulder_pan_link exists with parent /torso_lift_link.
Frame /r_wrist_flex_link exists with parent /r_forearm_link.
Frame /torso_lift_motor_screw_link exists with parent /base_link.
Frame /camera_rgb_frame exists with parent /camera_link.
Frame /camera_link exists with parent NO_PARENT.
Frame /camera_rgb_optical_frame exists with parent /camera_rgb_frame.
{ "domain": "robotics.stackexchange", "id": 16191, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "moveit, transform", "url": null }
python, performance, beginner, physics, scipy

Another rule (appropriately titled "Whitespace in Expressions and Statements - Pet Peeves"), which I personally consider more "essential" than the previous one, is to put a space after you have written a ",", e.g. while listing function arguments. It helps to avoid visual clutter IMHO. E.g.

p = {'mass':80.0, 'stiffness':8200.0, 'resting_length':1.0, 'gravity':9.81,
     'aoa':1/5*np.pi}  # aoa stands for angle_of_attack
x0 = [0, 0.85, 5.5, 0, 0, 0]
x0 = resetLeg(x0,p)
p['total_energy'] = computeTotalEnergy(x0,p)
sol = step(x0,p)
{ "domain": "codereview.stackexchange", "id": 34220, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, beginner, physics, scipy", "url": null }
java, game, swing

    private static final long serialVersionUID = 1L;
    /** The frame that the panel goes in. */
    static JFrame frame = new JFrame();
    /** The enum instance used for switching the state of the game. */
    static final int INTRO = 0, MAIN_MENU = 1, LEVEL_TITLE = 2, LEVEL = 3;
    /** The integer used for the game state. */
    static int gameState = INTRO;
    /** Used for when the instructions should be shown. */
    private boolean showIntro = false;
    /** This is the level that the player is on. */
    static int levelNum = 0;
    /** A player class, used to get information about the player. */
    private Player player = new Player();
    /** The data of the current level. This should be given data in initLevel(). */
    static GameLevel level = new GameLevel();
    /** Controls whether the game has sound or not. */
    static boolean muted = false;
{ "domain": "codereview.stackexchange", "id": 21888, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, swing", "url": null }
electromagnetism, thermodynamics, statistical-mechanics, ising-model, mean-field-theory Title: Product Rule for Partition Sums $Z_N=(Z_1)^N$ For the 1D Ising model with the Hamiltonian $$H=\text{const.}-\mu h' \sum_i S_i^z$$ we can write the canonical partition sum (dropping the constant) as $$Z_N = \sum_{ \{ S_i^z \}_N } e^{\beta \mu h' \sum_i S^z_i} = \sum_{ \{ S_i^z \}_N } \prod_i e^{\beta \mu h' S^z_i}$$ for which we then later used $$Z_N=(Z_1)^N$$ with the single-particle partition sum $Z_1$
{ "domain": "physics.stackexchange", "id": 53887, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, thermodynamics, statistical-mechanics, ising-model, mean-field-theory", "url": null }
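The factorization $Z_N = (Z_1)^N$ holds because the Boltzmann weight of a non-interacting configuration is a product of independent single-spin weights, so the sum over configurations factorizes. This can be verified numerically by brute-force enumeration (a sketch with $s = \pm 1$ and a single coupling constant `b` absorbing $\beta\mu h'$; these are illustrative values, not from the question):

```python
import itertools
import math

b = 0.37   # stands in for beta*mu*h'
N = 5

# Single-spin partition sum: Z_1 = sum over s = +/-1 of e^{b s} = 2 cosh(b)
Z1 = sum(math.exp(b * s) for s in (+1, -1))

# Brute-force N-spin partition sum over all 2^N configurations
ZN = sum(math.exp(b * sum(config))
         for config in itertools.product((+1, -1), repeat=N))

# The sum over products factorizes into a product of sums:
assert abs(ZN - Z1**N) < 1e-9 * Z1**N
```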
The sections "Reduce to the hard case" and "transform to the containing pairs of non-wrapping pairs" follow D.W's answer roughly.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9780517501236461, "lm_q1q2_score": 0.8514891853034326, "lm_q2_score": 0.8705972717658209, "openwebmath_perplexity": 424.69788964303507, "openwebmath_score": 0.7298941612243652, "tags": null, "url": "https://cs.stackexchange.com/questions/136555/how-can-we-find-the-number-of-pairs-of-intersecting-ranges-on-a-circular-number" }
2.5, 0.5, -3.2, -4, 5.2, -2.2, -2.2, 3)

Do not set S4 methods on logb itself. If (S3 or S4) methods are set for log, they will be dispatched. A vector of the same length as x containing the transformed values. log(0) gives -Inf (when available). For complex inputs to the log functions, the value is a complex number. Let's assume you want to use this tool as a log base 2 calculator.

If y = e^x, then x = log_e y. 'e' is NOT a variable -- it's always equal to the same irrational number, which we can approximate to 2.71828. 'e' is also known as the 'natural base', and log_e y is usually written as 'ln(x)'. Stringham was an American, so I have no idea why he would have used the notation "ln", other than perhaps to reflect a common, though mistaken, idea that Napier's log was a base-e log. That is, "ln" might have been meant to stand for "Log of Napier".

The limit of the base b logarithm of x, when x approaches zero, is minus
{ "domain": "oseanz.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9481545362802363, "lm_q1q2_score": 0.8132967249940216, "lm_q2_score": 0.8577681104440172, "openwebmath_perplexity": 1931.4552746631427, "openwebmath_score": 0.6161028742790222, "tags": null, "url": "http://oseanz.com/mx9ryw/archive.php?page=4eacc9-log-base-e-in-r" }
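The "log base 2 calculator" mentioned above only needs the natural log plus the change-of-base identity $\log_b x = \ln x / \ln b$. In Python:

```python
import math

x = 8.0
# Change of base: log_2(x) = ln(x) / ln(2)
assert abs(math.log(x) / math.log(2) - 3.0) < 1e-12
# math.log also takes an optional base argument directly:
assert abs(math.log(x, 2) - 3.0) < 1e-12
```

Consistent with the `log(0) gives -Inf` remark, $\log_b x \to -\infty$ as $x \to 0^+$; Python's `math.log(0.0)` raises a `ValueError` rather than returning `-inf`.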
performance, rust

Title: Boggle solver in Rust - Looking for speedup

I recently made a boggle solver in Rust to compensate for the fact that I'm really bad at boggle. This has also been a good Rust learning experience. The code does what it's supposed to quite well, but it is a bit slow, even when compiling with the -O flag. Since I'm new to Rust, I'm not sure where the slow spots could be. I'm using a hashset to look up the words, so I assume the lookup isn't the bottleneck. Any tips? (Also, if I'm doing any kind of bad practice, you can let me know.) Thanks.

use std::collections::HashSet;
use std::fs;

const MAX_WORD_LEN: usize = 12;

#[derive(Clone, Debug, Copy)]
struct Coordinate {
    row: i64,
    col: i64
}
{ "domain": "codereview.stackexchange", "id": 36942, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, rust", "url": null }
haskell, monads, telegram Title: Telegram bot in Haskell using custom monad transformers Note: I show almost all of my code for completeness, but I really only want the review to focus on Session.hs, Handler.hs, and maybe Controller.hs. I can delete the extra code from the review or collapse it to definitions. The project: I had never heard of monad transformers and monad stacks before, but I decided to learn them while making a real-world Haskell application. This is a Telegram bot that can do various tasks based on the user's commands. The project is meant to teach me about monad stacks and how to use them properly, while also being a useful tool at my own disposal. The scope of the review
{ "domain": "codereview.stackexchange", "id": 39399, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "haskell, monads, telegram", "url": null }
cosmology, energy-conservation, space-expansion, redshift I'm not very familiar with relativity, I may have missed some phenomenon in relativity that explains all this. You have discovered a prediction of general relativity! The prediction is that energy is not actually conserved at a global cosmological scale. It is conserved in its motion through space-time, which we would express with the equation $\nabla_\alpha T^{\alpha\beta} =0$, but the warping of space-time itself changes the ground rules of the universe in a way that does change energies, in this case the energies of red-shifted photons when viewed from a certain perspective inside the space-time. A much more dramatic version of this holds for the general relativistic prediction of dark energy, where the dark energy density remains constant even as the volume of the boxes that contain it increase, so that the total dark energy, if viewed from this perspective, is also increasing. It is just a change that cannot be described as a flow through space-time, so it doesn't have to obey any
{ "domain": "physics.stackexchange", "id": 51742, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cosmology, energy-conservation, space-expansion, redshift", "url": null }
quantum-field-theory, differential-geometry, gauge-theory, topological-field-theory, chern-simons-theory Title: Chern-Simons theory on a plane/sphere with a single charge insertion Consider the pure Chern-Simons theory on the plane $\mathbb{R}^2$ with a single charge insertion in some representation $\rho$ of the group $G$. What does the Hilbert space look like? Is it null or non-null for non-integrable representations $\rho$? Below is my attempt at tackling this problem and an outline of the difficulties that I ran into. The same question has a very nice answer for the case of a sphere $S^2$. Here there's no non-contractible loops, thus the loop around the charge insertion must have holonomy of $1$ (the identity element of $G$). Thus it is restricted to the trivial orbit (which is actually a single point), which means that the phase space is a point if $\rho$ is trivial and it doesn't exist if it isn't. Hence, the Hilbert space is 1-dimensional for $\rho$ a trivial representation and 0-dimensional otherwise.
{ "domain": "physics.stackexchange", "id": 59427, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, differential-geometry, gauge-theory, topological-field-theory, chern-simons-theory", "url": null }
javascript, regex

Title: Regular Expression in Javascript to test a string of 1s and 0s

I have a string of four bits which represent true/false values. There are only seven valid options:

1010
1110
0000
1101
1001
1000
0101

There are three options which could potentially be selected which are not valid and that I want to check for before I proceed with some other code. These are:

0110
0100
0010

I want to do this with as little code as possible, thus having one regex to test all three conditions. My question is whether this is a correct regex to accomplish this test. It seems to work, but I am not a regex expert, and I have to be sure in this case.

if (!/0(10|01|11)0/.test(precode)) {
    //do some code
}

Why not simply test for the invalid strings directly?

if ((/0110|0100|0010/).test(precode))

Seems more readable to me.
{ "domain": "codereview.stackexchange", "id": 3964, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, regex", "url": null }
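Both regexes can be checked exhaustively against the ten listed strings, since there are so few of them. A quick verification (in Python; adding `^...$` anchors so the pattern cannot accidentally match a substring of longer input):

```python
import re

invalid = {"0110", "0100", "0010"}
valid = {"1010", "1110", "0000", "1101", "1001", "1000", "0101"}

# Anchored version of the /0(10|01|11)0/ pattern from the question.
pattern = re.compile(r"^0(10|01|11)0$")

for s in invalid:
    assert pattern.match(s)       # every invalid string is caught
for s in valid:
    assert not pattern.match(s)   # no valid string is rejected
```

So the grouped pattern is correct for four-character input; the plain alternation `0110|0100|0010` is equivalent here and arguably easier to audit.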
# Thread: Tension in a string 1. ## Tension in a string I need to find the tension in the horizontal string. Mass of the bar: $m_1 = 4.7 kg$ Hanging mass: $m_2 = 3.4 kg$ Length of the bar: $l_1 = 2.3 m$ Angle between bar and horizontal: $\theta = 42^{\circ}$ Length between hanging mass and string-bar connection: $l_2 = 0.8 m$ Thanks in advance for the help! 2. So far all I've got is that the tension in the string is going to be equal to the torque on the bar, which should just be the force on the bar in the perpendicular direction. So, I can find the perpendicular components of the force due to gravity on the bar and the force due to the hanging mass. This is where I'm a bit confused. At this point, I would use the formula: $|{\tau}| = rf$ But I'm not exactly sure what to plug in and where. 3. Ok I am positive that I've got it this time, but my answer sheet has a different number. Can anyone check over my work to see if I made any mistakes? $\tau = rf\sin{\theta}$
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9732407168145569, "lm_q1q2_score": 0.8001879429866577, "lm_q2_score": 0.822189134878876, "openwebmath_perplexity": 346.4280028699346, "openwebmath_score": 0.7124019861221313, "tags": null, "url": "http://mathhelpforum.com/math-topics/70642-tension-string.html" }
c++, game

        i = 600;
        while (i < 1100) {
            Beep(i, 50);
            i += 150;
        }
    }
}

/**< game over */
for (i = 0; i < 10; i++) {
    if (kbhit()) {
        if (getch() == 13)
            break;
    }
    Sleep(100);
    system("CLS");
    gotoxy(30, 20 - i);
    cout << "GAMEOVER!!";
    gotoxy(30, 20 - i + 1);
    cout << "SCORE: " << food;
}
system("CLS");
gotoxy(30, 20 - i);
cout << "GAMEOVER!!";
gotoxy(30, 20 - i + 1);
cout << "SCORE: " << food;
gotoxy(30, 20 - i + 2);
cout << "NAME:";
update_highscore();
gotoxy(30, 20 - i + 3);
system("PAUSE");
score_print();
}
};
{ "domain": "codereview.stackexchange", "id": 14137, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, game", "url": null }
java, android, mvp, rx-java

        if (universalResponse.isSessionExpired()) {
            view.onSessionExpired();
            return;
        }
        if (universalResponse.isError()) {
            if (universalResponse.isServerError()) {
                view.showErrorViewerPage(universalResponse.getServerErrorMessage());
            }
            view.showFeaturedFruitsError(universalResponse.getMessage());
            return;
        }
        List<Fruit> fruits = universalResponse.getItems();
        if (fruits != null && fruits.size() > 0) {
            view.showFeaturedFruits(fruits);
        } else {
            view.showNoFeaturedFruits();
        }
    }

    @Override
    public void onError(Throwable e) {
        if (view.isLost()) return;
        view.setProgressVisibility(false);
        view.showFeaturedFruitsError(e.getMessage());
    }

    @Override
    public void onComplete() {
        if (view.isLost()) return;
        view.setProgressVisibility(false);
    }
});
{ "domain": "codereview.stackexchange", "id": 30601, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, android, mvp, rx-java", "url": null }
friction adjustment associated with solid friction brakes. Here the obvious solution for a small motor is to scavenge the fan from a cheap PC cooler. If the fan has an enclosure you can even vary the load by the simple expedient of using duct tape to restrict the flow.
{ "domain": "engineering.stackexchange", "id": 858, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "friction", "url": null }
ros, python, rqt-reconfigure, dynamic-reconfigure, ros-groovy And selecting my node, I get the following error: Traceback (most recent call last): File "/opt/ros/groovy/lib/python2.7/dist-packages/rqt_reconfigure/node_selector_widget.py", line 250, in _selection_changed_slot self._selection_selected(index_current, rosnode_name_selected) File "/opt/ros/groovy/lib/python2.7/dist-packages/rqt_reconfigure/node_selector_widget.py", line 200, in _selection_selected item_widget = item_child.get_dynreconf_widget() File "/opt/ros/groovy/lib/python2.7/dist-packages/rqt_reconfigure/treenode_qstditem.py", line 148, in get_dynreconf_widget self._param_name_raw) File "/opt/ros/groovy/lib/python2.7/dist-packages/rqt_reconfigure/dynreconf_client_widget.py", line 57, in __init__ group_desc, node_name) File "/opt/ros/groovy/lib/python2.7/dist-packages/rqt_reconfigure/param_groups.py", line 153, in __init__ self._create_node_widgets(config)
{ "domain": "robotics.stackexchange", "id": 15734, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, python, rqt-reconfigure, dynamic-reconfigure, ros-groovy", "url": null }
c++, linked-list /** * @brief Insert node to the tail of the LinkedList. * @param item Data to be stored in the node. * @return void. * * The insertTail function inserts a node to the tail location * of the LinkedList. * * If the LinkedList empty, insertTail function uses insertHead function. * * As a result of this function the currPtr points to the newly * inserted tail node. */ template <class T> void LinkedList<T>::insertTail(const T& item) { if(empty()) { insertHead(item); } else { Node<T>* tempPtr = newNode(item); //get a new node prevPtr = tailPtr; tailPtr->insertNext(tempPtr); //insert it after tailPtr currPtr = tailPtr = tempPtr; //arrange currPtr, tailPtr sizeL += 1; //sizeL increments } }
{ "domain": "codereview.stackexchange", "id": 35685, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, linked-list", "url": null }
earth, time, mathematics, clock Title: Pendulum clock correction I'm trying to solve this task: The pendulum clock was transported from the Earth's equator to Antarctica (in the vicinity of the southern geographic pole) for scientific experiments. Estimate the pendulum clock correction over an Earth day in Antarctica (at a temperature of $t = -90\,^{\circ}\text{C}$) if the clock is calibrated at the equator (at a temperature of $t = +50\,^{\circ}\text{C}$). The coefficient of thermal expansion of the pendulum material is $\alpha_h = 2.4 \cdot 10^{-5}\ \text{deg}^{-1}$. The original verified length of the pendulum is $\ell_0 = 300$ mm. How much should the length of the pendulum be changed so that the clock correction per day is no more than 10 seconds?
{ "domain": "astronomy.stackexchange", "id": 6006, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "earth, time, mathematics, clock", "url": null }
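To estimate the size of the effect in the problem above: $T = 2\pi\sqrt{\ell/g}$, so to first order the fractional period change combines the thermal contraction of the pendulum with the change in $g$. The equatorial and polar values of $g$ are not given in the excerpt, so the ones below are textbook approximations.

```python
alpha = 2.4e-5                # deg^-1, from the problem
dT = -90 - 50                 # temperature change, equator -> Antarctica (-140 deg)
g_eq, g_pole = 9.780, 9.832   # m/s^2, assumed textbook values (not in the excerpt)

# T = 2*pi*sqrt(l/g); to first order the fractional period change is
#   dTau/Tau = (1/2)*alpha*dT - (1/2)*(g_pole - g_eq)/g_eq
frac = 0.5 * alpha * dT - 0.5 * (g_pole - g_eq) / g_eq

# The period shrinks (colder pendulum, larger g), so the clock runs fast:
gain_per_day = -frac * 86400  # seconds gained per day
print(round(gain_per_day))    # about 375 s/day under these assumed g values
```

Keeping the correction under 10 s/day then requires lengthening the pendulum enough to cancel almost all of this fractional rate change.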
clojure Title: Adding/removing map values: unsymmetric implementation I am learning Clojure by implementing a simplistic resource reservation system. At its core are the two functions for placing/canceling reservations for a given resource: ; thread-safe map of {resource -> #{reservations}} (def known-reservations (atom {})) (defn place-reservation [new-res] (swap! known-reservations #(merge-with clojure.set/union % {(:resource new-res) #{new-res}}))) (defn cancel-reservation [canceled-res] (swap! known-reservations #(update % (:resource canceled-res) disj canceled-res))) They seem to work correctly, but the lack of symmetry bothers me. Placing and canceling reservations are symmetric operations, but this is not reflected by the above code: merge-with clojure.set/union doesn't even resemble update ,,, disj. I tried the following variant: (defn place-reservation [new-res] (swap! known-reservations #(update % (:resource new-res) conj new-res)))
{ "domain": "codereview.stackexchange", "id": 20496, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "clojure", "url": null }
java, multithreading What should I change in the example to demonstrate the advantage of parallel computation? public class ConcurrencyTest {
{ "domain": "codereview.stackexchange", "id": 28389, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, multithreading", "url": null }
for 1/4 ≤ y ≤ 1. Now, we know that the conditional mean of Y given X = ½ is: $$E(Y\mid X=\tfrac{1}{2})=\dfrac{1+(1/2)^2}{2}=\dfrac{1+(1/4)}{2}=\dfrac{5}{8}$$ If we think again of the expected value as the fulcrum at which the probability mass is balanced, our results here make perfect sense:
{ "domain": "psu.edu", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9915543723101533, "lm_q1q2_score": 0.8060021341197433, "lm_q2_score": 0.8128673087708698, "openwebmath_perplexity": 275.64018313793264, "openwebmath_score": 0.9001290202140808, "tags": null, "url": "https://onlinecourses.science.psu.edu/stat414/book/export/html/117" }
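The arithmetic above can be checked exactly. The conditional density itself is not shown in this excerpt, but a conditional mean of $(1+x^2)/2$ is consistent with $Y \mid X = x$ being uniform on $[x^2, 1]$, whose mean is the interval's midpoint; that assumption is made explicit below.

```python
from fractions import Fraction

# Assumed (not shown in the excerpt): Y | X = x is uniform on [x^2, 1],
# so its mean is the midpoint (x^2 + 1)/2, matching the text's formula.
x = Fraction(1, 2)
cond_mean = (x**2 + 1) / 2
assert cond_mean == Fraction(5, 8)   # E(Y | X = 1/2) = 5/8, as computed above
```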
java, android final TextView textView = createDateTimePicker(); textView.setHint(fName); textView.setId(countIdTextView); countIdTextView++; if (valMap.containsKey(id)) { textView.setText(valMap.get(id)); } linearLayout.addView(textView); ImageView imgView = createImageView(); linearLayout.addView(imgView); formD.addView(linearLayout); valMapId.put(textView.getId(), id); } else if (type.equalsIgnoreCase("Multiple")) { final String fName = name;
{ "domain": "codereview.stackexchange", "id": 20379, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, android", "url": null }
java, performance, file List<QueryLog> queryLogs = new AolQueryLogsProcessor(fileName).getQueryLogs(); // Read logs into a multimap to preserve duplicates multimap.putAll(Multimaps.index(queryLogs, QueryLog::getQueryString)); }); //Put the multimap into the trie. It now also has duplicates. queryTrie.addAll(multimap.asMap()); } catch (IOException e) { e.printStackTrace(); } }
{ "domain": "codereview.stackexchange", "id": 25941, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance, file", "url": null }
star, light, naked-eye, interstellar Title: What fraction of starlight, seen from Earth, is actually reflected light? Thanks to reflected starlight, many planets and comets in the Solar System have been visible to humans since long before the development of modern astronomy. Some of the starlight from outside the Solar System should be reflected light as well. That is, light emitted by one star in the Milky Way may encounter another star, and be reflected toward Earth. Normally, stars are assumed to be perfect blackbodies, but in reality they must reflect some radiation.
{ "domain": "astronomy.stackexchange", "id": 3351, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "star, light, naked-eye, interstellar", "url": null }
mathematics, density-matrix, nielsen-and-chuang Title: Understanding the outer products in density matrices I don't understand a simple property of the outer product when doing density matrices. I am studying Nielsen and Chuang's book. At equation 2.197 they show the density matrix of the state of quantum teleportation before Alice performs her measurements. For $$|\psi\rangle = \frac{1}{2}[|00\rangle(\alpha|0\rangle +\beta |1\rangle)+ |01\rangle(\alpha|1\rangle+\beta|0\rangle)+|10\rangle(\alpha|0\rangle -\beta |1\rangle) + |11\rangle(\alpha|1\rangle -\beta|0\rangle)]$$ The density matrix is just: $$\rho_1= \frac{1}{4}[|00\rangle \langle 00|(\alpha|0\rangle + \beta|1\rangle)(\alpha^*\langle 0|+\beta^*\langle 1|)+\\ |01\rangle\langle01|(\alpha|1\rangle +\beta |0\rangle)(\alpha^*\langle1| +\beta^*\langle0|) +\\ |10\rangle\langle10|(\alpha|0\rangle -\beta |1\rangle)(\alpha^*\langle 0|-\beta^*\langle1|) +\\ |11\rangle\langle11|(\alpha|1\rangle -\beta|0\rangle)(\alpha^*\langle 1| -\beta^*\langle0|) ]$$
{ "domain": "quantumcomputing.stackexchange", "id": 1302, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mathematics, density-matrix, nielsen-and-chuang", "url": null }
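The outer-product mechanics in the question above can be checked numerically. A small sketch (the amplitudes are illustrative, not from the text) verifying that a single-qubit ket-bra $(\alpha|0\rangle+\beta|1\rangle)(\alpha^*\langle0|+\beta^*\langle1|)$ is a Hermitian matrix with unit trace:

```python
# |phi> = alpha|0> + beta|1>, with illustrative amplitudes (|a|^2 + |b|^2 = 1)
alpha, beta = 0.6, 0.8j
v = [alpha, beta]

# |phi><phi| as an explicit outer product: rho[i][j] = v[i] * conj(v[j])
rho = [[vi * vj.conjugate() for vj in v] for vi in v]

trace = rho[0][0] + rho[1][1]
assert abs(trace - 1) < 1e-12                      # Tr(rho) = 1
assert all(abs(rho[i][j] - rho[j][i].conjugate()) < 1e-12
           for i in range(2) for j in range(2))    # rho is Hermitian
```

This is only the mechanics of a single ket-bra term; the full $|\psi\rangle\langle\psi|$ of a four-term superposition would also contain the cross terms between different $|xy\rangle$ components.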
1. The intersection of $\sigma$-algebras is again a $\sigma$-algebra. 2. A set of the form $\lbrace \emptyset, \Omega, A, A^c \rbrace$ is a $\sigma$-algebra. Let $\mathscr{G}$ (particular case $\mathcal{F}_1$) be any set of subsets of $\Omega$. $\mathscr{A}:=\bigcap_{\mathcal{F} \in \mathcal{I}(\mathscr{G})}\mathcal{F}$ is nonempty because the powerset $\mathscr{P}(\Omega)$ has $\mathscr{P}(\Omega) \in \mathcal{I}(\mathscr{G})$. It follows that $\mathscr{G} \subset \mathscr{A}$ and that, because of assumption 1, $\mathscr{A}$ is a $\sigma$-algebra. If $\mathscr{A}'$ is another $\sigma$-algebra with $\mathscr{G} \subset \mathscr{A}'$, then, because $\mathscr{A}'$ is one of the sets over which the intersection is taken, we have $\mathscr{A} \subset \mathscr{A}'$. So $\mathscr{A}$ satisfies the definition of the smallest $\sigma$-algebra generated by $\mathscr{G}$, i.e. $\sigma(\mathscr{G}) = \mathscr{A}$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9825575142422757, "lm_q1q2_score": 0.8501324135954369, "lm_q2_score": 0.865224072151174, "openwebmath_perplexity": 194.6361182394072, "openwebmath_score": 0.9799019694328308, "tags": null, "url": "https://math.stackexchange.com/questions/1667546/sigma-algebra-generated-by-a-subset" }
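For a finite Ω the generated σ-algebra in the argument above can be computed directly, since countable unions collapse to finite ones. A naive fixed-point sketch (the function name is mine):

```python
def sigma_closure(omega, generators):
    """Smallest sigma-algebra on a finite set omega containing the generators.
    On a finite omega, closing under complement and pairwise union suffices."""
    omega = frozenset(omega)
    fam = {frozenset(), omega} | {frozenset(g) for g in generators}
    changed = True
    while changed:              # iterate to a fixed point
        changed = False
        for a in list(fam):
            for s in [omega - a] + [a | b for b in list(fam)]:
                if s not in fam:
                    fam.add(s)
                    changed = True
    return fam

# The {emptyset, Omega, A, A^c} example from the text, with A = {1}:
assert sigma_closure({1, 2, 3}, [{1}]) == \
    {frozenset(), frozenset({1}), frozenset({2, 3}), frozenset({1, 2, 3})}
```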
rviz Title: Add published markers to Display in rviz Currently I am writing a visualization node that publishes marker topics depending on the parameters given in its launch file. After launching rviz I can then manually add the markers to the display and it works fine. However for practicability: Is there a way for the node to add these markers to the rviz display itself? I know that one can specify a config file with the displays for rviz. But then the user would have to edit both the launch and config file each time. Originally posted by odelay on ROS Answers with karma: 103 on 2016-11-14 Post score: 1
{ "domain": "robotics.stackexchange", "id": 26241, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rviz", "url": null }
c, array, statistics for (i = 0; i < len; ++i) { summary_data.mean += calc_array[i]; } summary_data.mean /= len; if (len % 2 == 0) { // is even == return the arithmetic middle of the two middle values summary_data.median = (calc_array[(len - 1) / 2] + calc_array[len / 2]) / 2.0; } else { // is odd == return the middle summary_data.median = calc_array[len / 2]; } free(calc_array); calc_array = NULL; return summary_data; } void print_result(const struct Summary_data* summary_data) { assert(summary_data); printf("smallest: %i\n", summary_data->smallest); printf("largest: %i\n", summary_data->largest); printf("median: %g\n", summary_data->median); printf("mean: %g\n\n", summary_data->mean); } int main() { int test_array[] = { 1,7,3,4,5,6,7,8,9 }; // 9 elements // int test_array[] = { 1,7,3,4,5,6,7,8 }; // 8 elements int len = sizeof(test_array) / sizeof(test_array[0]);
{ "domain": "codereview.stackexchange", "id": 31097, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, array, statistics", "url": null }
c, game while(shipcount!=0 && x_count!=difficulty) { printf("\nEnter coordinates (x,y):"); scanf("%d,%d",&coords.x,&coords.y); buff_clr(); system("cls"); battle(ships,pseudo_gui,n,coords,&shipcount,&x_count); result(pseudo_gui,n); printf("Number of ships to be sunk:%d",shipcount); printf("\nNumber of misses(out of %d): %d\n\n",difficulty,x_count); } if(shipcount==0) { printf("\nWinner!\n\n"); getch(); } else if(x_count==difficulty) { printf("\nYou Lose!\n\n"); getch(); } return 0; } This is what I've made so far, but I'm in doubt whether my code is: Easy to understand Straightforward Could I have done anything in a different, easier, or more convenient way?
{ "domain": "codereview.stackexchange", "id": 1205, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, game", "url": null }
regular-expressions, software-verification, transducers Title: Formally Verify if a Sequence of Regex-Based Modifications is Idempotent I'm performing a sequence of text formatting using regex in Python. I'm curious to know if it's possible to formally verify whether a single (or a sequence of) regex modification(s) is idempotent, which means that for every text, d ∘ d (text) = d (text), where d denotes the resulting total modification. To be precise, by a single regex modification, I refer to the function f_regex_subst: text |-> re.sub(regex, sub, text, 0, re.MULTILINE). When a sequence of such modifications is applied, the resulting function is simply the composition of these functions. The motivation is ensuring the formatting process yields a stable result for any text. This is likely to be a very hard problem, so hard that I don't think you should expect a useful solution that can handle all possible regexes.
{ "domain": "cs.stackexchange", "id": 21551, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "regular-expressions, software-verification, transducers", "url": null }
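Formal verification aside, a cheap empirical check is often useful in practice for the setting above: apply the composed substitution twice on sample texts and compare. This only tests the given samples and proves nothing in general; the function and rule names below are mine.

```python
import re

def apply_rules(text, rules):
    # d(text): one pass of the composed substitutions, as in the question
    for pattern, repl in rules:
        text = re.sub(pattern, repl, text, count=0, flags=re.MULTILINE)
    return text

def looks_idempotent(rules, samples):
    """Empirical only: check d(d(s)) == d(s) on the given samples."""
    return all(apply_rules(apply_rules(s, rules), rules) == apply_rules(s, rules)
               for s in samples)

collapse_ws = [(r"[ \t]+", " ")]   # a typical idempotent formatting rule
doubler     = [(r"a", "aa")]       # clearly not idempotent
assert looks_idempotent(collapse_ws, ["a   b\t\tc", "  x  "])
assert not looks_idempotent(doubler, ["abc"])
```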
java, complexity Title: Finding Pythagorean triplet in array We have an integer array as: private int[] arr = {1, 3, 5, 14, 18, 29, 78}; We have a function which takes three inputs of the array and checks whether: a * a = b * b + c * c If the function returns true then those 3 inputs are stored in an ArrayList: private boolean findTriplet(int a, int b, int c) { int squareA = a * a; int squareB = b * b; int squareC = c * c; int sum = squareB + squareC; if (squareA == sum) { return true; } return false; }
{ "domain": "codereview.stackexchange", "id": 20383, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, complexity", "url": null }
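One way to beat the naive triple loop implied by calling findTriplet on every combination is to hash the squares, which turns the search into an O(n²) pair scan. A Python sketch of the idea (not the poster's Java, just the algorithm):

```python
def find_pythagorean_triplets(arr):
    # Store square -> root so each pair (b, c) is checked in O(1),
    # giving O(n^2) overall instead of the naive O(n^3) triple loop.
    squares = {x * x: x for x in arr}
    triplets = []
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            s = arr[i] * arr[i] + arr[j] * arr[j]
            if s in squares:
                triplets.append((squares[s], arr[i], arr[j]))
    return triplets

assert find_pythagorean_triplets([3, 4, 5, 12, 13]) == [(5, 3, 4), (13, 5, 12)]
# The question's sample array happens to contain no triplet at all:
assert find_pythagorean_triplets([1, 3, 5, 14, 18, 29, 78]) == []
```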
java, optimization, algorithm, strings My program works as well as my function, but it is a nasty and inefficient solution. How can I improve the efficiency of this function? Here is what it looks like you're trying to do: loop through set, loop through each individual word in set, until you find toMerge. When you find that, you then restart the loop through set, and each word in set until you find fromMerge.
{ "domain": "codereview.stackexchange", "id": 7892, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, optimization, algorithm, strings", "url": null }
neuroscience, neurology Title: What decides the position of the node of Ranvier? Oligodendrocytes make the myelin sheath in the CNS and Schwann cells make it in the PNS. What decides where the oligodendrocyte or Schwann cell will attach and start forming the myelin sheath? Is it genetically determined? Is it random? Is there any disease associated with improper positioning of the nodes of Ranvier? A combination of differentiation site, chemical guidance during migration, and signaling cues from a variety of sources.
{ "domain": "biology.stackexchange", "id": 1239, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neuroscience, neurology", "url": null }
Added 2: This answer is generalizable to other ratios $r$ up to the infinite series expression. For the general $r \geq 2$ case, the argument above is easily adapted to produce the recurrence $S(n) = \binom{(r+1)n}{n} - \sum_{k=1}^{n-1} \binom{(r+1)n-(r+1)k}{n-k}S(k)$, with $S(1) = r+1$. The solution to the recurrence is $S(n) = \frac{r}{(r+1) n - 1} \binom{(r+1) n}{n}$ and can be verified easily by using the binomial convolution formula given above. Thus, for the ratio $r$, the probability of stopping has the infinite series expression $$\sum_{n=1}^{\infty} \binom{(r+1)n}{n} \frac{r}{(r+1)n-1} \frac{1}{2^{(r+1)n}}.$$ This can be expressed as a hypergeometric function, but I am not sure how to simplify it any further for general $r$ (and neither does Mathematica). It can also be expressed using the generalized binomial series discussed in Concrete Mathematics (p. 200, 2nd ed.), but I don't see how to simplify it further in that direction, either.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9902915223724212, "lm_q1q2_score": 0.870611589217961, "lm_q2_score": 0.8791467659263148, "openwebmath_perplexity": 286.28621061275317, "openwebmath_score": 0.9498146176338196, "tags": null, "url": "http://math.stackexchange.com/questions/60021/whats-the-probability-that-a-sequence-of-coin-flips-never-has-twice-as-many-hea" }
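As a numerical sanity check on the series above, the partial sums can be computed directly; the terms are positive, so the sums increase monotonically toward a value below 1, consistent with a stopping probability (the function name is mine):

```python
from math import comb

def partial_sum(r, n_terms):
    """Partial sum of  sum_{n>=1} C((r+1)n, n) * r/((r+1)n - 1) * 2^{-(r+1)n}."""
    total = 0.0
    for n in range(1, n_terms + 1):
        m = (r + 1) * n
        total += comb(m, n) * r / (m - 1) / 2.0**m
    return total

p5, p60 = partial_sum(2, 5), partial_sum(2, 60)
assert 0 < p5 < p60 < 1   # positive terms: monotone increasing, bounded by 1
```

For $r = 2$ the first term is $\binom{3}{1}\cdot\frac{2}{2}\cdot\frac{1}{8} = \frac{3}{8}$, and the term ratio approaches $27/32 < 1$, so convergence is geometric.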
c#, multithreading, hash-map, dependency-injection, azure public SubscriptionDescription CreateSubscription(SubscriptionDescription description) { _validationService.Null(description, "description"); try { if(!this.NamespaceManager.SubscriptionExists(description.Path)) { return this.NamespaceManager.CreateSubscription(description); } } catch(MessagingEntityAlreadyExistsException) { // intended because the topic is simultaneously created by all isntances } } public TopicClient GetTopicClient(string topic) { _validationService.StringIsNullOrEmpty(topic, "topic");
{ "domain": "codereview.stackexchange", "id": 14273, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, multithreading, hash-map, dependency-injection, azure", "url": null }
solar-system, star-formation, accretion-discs Title: What is the mass limit in a stellar accretion disc? I became curious about the maximum mass in a star's accretion disc while watching an episode of Star Trek involving a Dyson Sphere. I wondered if some maximum amount of stellar material would limit natural and artificial structures around any system of one or more stars. What affects these limits? Is there a logarithmic/linear/exponential correlation between stellar mass and maximum accretion mass? The accretion disc is formed by material in orbital motion around a central body, which can be a star. The size, mass and other characteristics are usually determined by the central object, in this case the star. In general, the protoplanetary accretion discs are the largest ones (with the largest mass) and as the age of the central star increases, the average size decreases. This is because as the age of the system increases, more mass is drawn into the system and less is left in the disc itself.
{ "domain": "astronomy.stackexchange", "id": 1040, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solar-system, star-formation, accretion-discs", "url": null }
waves, acoustics, doppler-effect Title: Doppler's effect According to the formula to calculate the apparent frequency heard by an observer, the frequency is independent of the distance of the source from the observer. This means that however close the observer is to the source, the frequency is constant as long as the velocity is constant. Let's consider a source moving with a velocity $v$ towards an observer. Let its distance from the observer at an instant before it crosses the observer be $ds$. The frequency heard by the observer is $\nu_1$. When the source is exactly at the observer, the frequency is the original frequency with which the waves were emitted, i.e., $\nu_0$. When the source crosses the observer and is a distance $ds$ on the opposite side of the observer, the frequency becomes $\nu_2$. My question is:
{ "domain": "physics.stackexchange", "id": 33053, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "waves, acoustics, doppler-effect", "url": null }
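The distance-independence noted in the question above is easy to tabulate with the standard moving-source formulas; the numbers (including the speed of sound) are illustrative assumptions:

```python
def doppler_moving_source(f0, v, c=343.0):
    """Observed frequency for a source moving at speed v directly toward (+)
    or away from (-) a stationary observer; the same at any distance ds
    while v is constant."""
    f_approach = f0 * c / (c - v)   # nu_1: heard at every ds before passing
    f_recede   = f0 * c / (c + v)   # nu_2: heard at every ds after passing
    return f_approach, f_recede

f1, f2 = doppler_moving_source(440.0, 34.3)
assert f1 > 440.0 > f2   # the pitch jumps down from nu_1 to nu_2 at the pass
```

With $v = 0.1c_\text{sound}$ the receding frequency is exactly $440/1.1 = 400$ Hz, which makes the discontinuous jump at the moment of passing concrete.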
coq, dependent-type Title: Why does Coq have Prop? Coq has a type Prop of proof-irrelevant propositions which are discarded during extraction. What are the reasons for having this if we use Coq only for proofs? Prop is impredicative, so Prop : Prop; however, Coq automatically infers universe indexes and we can use Type(i) instead everywhere. It seems Prop complicates everything a lot. I read that there are philosophical reasons for separating Set and Prop in Luo's book; however, I didn't find them in the book. What are they? $\mathtt{Prop}$ is very useful for program extraction because it allows us to delete parts of code that are useless. For example, to extract a sorting algorithm we would prove the statement "for every list $\ell$ there is a list $k$ such that $k$ is ordered and $k$ is a permutation of $\ell$". If we write this down in Coq and extract without using $\mathtt{Prop}$, we will get:
{ "domain": "cstheory.stackexchange", "id": 5183, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "coq, dependent-type", "url": null }
homework-and-exercises, quantum-field-theory, fermions, anticommutator \end{align} After explicitly writing indices on everything, we are just dealing with products of (Grassmann) numbers. $\gamma^0_{\alpha\beta}$ commutes with any other element, so it can be taken out. The commutation relations between $\psi$ and $\bar{\psi}\psi$ should be expressed as commutators, because $\psi$ is a fermion and $\bar{\psi}\psi$ is a boson. Using the equation above and $\{\psi_\alpha(\mathbf{x},t),\psi_\beta(\mathbf{x}',t)\}=0$ we get \begin{align} [\psi_{\alpha}({\bf x},t),{\bar \psi}({\bf x'},t) \psi({\bf x'},t)] =& [\psi_{\alpha}({\bf x},t),{\bar \psi}_\beta({\bf x'},t) \psi_\beta({\bf x'},t)] \\ =& \psi_{\alpha}({\bf x},t){\bar \psi}_\beta({\bf x'},t) \psi_\beta({\bf x'},t) - {\bar \psi}_\beta({\bf x'},t)\psi_\beta({\bf x'},t) \psi_{\alpha}({\bf x},t) \\ =& \{\psi_{\alpha}({\bf x},t),{\bar \psi}_\beta({\bf x'},t)\} \psi_\beta({\bf x'},t) - {\bar \psi}_\beta({\bf x'},t)\psi_{\alpha}({\bf x},t) \psi_\beta({\bf x'},t) \\ &-
{ "domain": "physics.stackexchange", "id": 36323, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, quantum-field-theory, fermions, anticommutator", "url": null }
c#, serialization int bSize = preByte - 0x80; MemoryStream ms = new MemoryStream(); using (BinaryWriter b = new BinaryWriter(ms)) { for (int i = 0; i < 4 - bSize; i++) b.Write((byte)0x0); for (int i = 0; i < bSize; i++) b.Write((byte)s.ReadByte()); } byte[] rv = ms.ToArray(); Array.Reverse(rv, 0, rv.Length); return BitConverter.ToInt32(rv, 0); } } Any ideas are really welcome, and if someone's gonna close this on here please do a bit more research before going trigger-happy on me. So, based on your update it looks like you've switched to reading from Stream - good decision. A bit simplified/cleaner approach for reading int is: public static class ASN1Int { public static int Read(Stream inputStream) { const int sizeMarker = 0x80; //TODO: name appropriately int size = inputStream.ReadByte();
{ "domain": "codereview.stackexchange", "id": 3945, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, serialization", "url": null }
homework-and-exercises, forces, energy, work, power $$= 360 W$$ As you can see, the answer is different for different approaches. How is that possible? Where am I going wrong? Only the first method answers the specific question asked. In your first method, you examine the conditions at the moment the box passes the $216$ metre mark. You find the velocity at that moment and multiply by the constant force to determine the instantaneous power at that moment/location, as you are explicitly asked. In your second method, you correctly calculate the total energy transferred by the boy to the box and floor during the entire $12$ second, $216$ m push. You then assume (incorrectly) that this energy is transferred uniformly over the $12$ second exercise, divide by the time, and produce the average power output. The right answer, but not to the question asked.
{ "domain": "physics.stackexchange", "id": 83433, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, forces, energy, work, power", "url": null }
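The factor by which the two methods above differ can be made explicit. Assuming the box starts from rest under a constant force (numbers reconstructed from the stated 216 m in 12 s; the force itself is not given in this excerpt):

```python
d, t = 216.0, 12.0   # m, s (from the problem statement)
a = 2 * d / t**2     # constant acceleration: d = (1/2) a t^2  ->  a = 3 m/s^2
v_final = a * t      # velocity at the 216 m mark: 36 m/s

# For any constant force F: P_inst = F * v_final and P_avg = F * d / t,
# so the ratio is independent of F:
ratio = v_final / (d / t)
assert ratio == 2.0  # instantaneous power at the end is twice the average
```

This is the general result for uniform acceleration from rest: the final velocity is twice the average velocity, so the instantaneous power at the end is twice the average power.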
c++, c++11, pointers, interface int main() { std::cout << "Welcome to Bowser's 9 board game\n"; std::cout << "Start? y(yes) n(no)\n"; char wants_to_play; std::cin >> wants_to_play; if (wants_to_play == 'y' || wants_to_play == 'Y') { GameStateManager game_manager; game_manager.set_state("playing"); std::cout << "Let's a go!\nPress Enter to roll.\n"; std::cin.get(); auto space1 = std::make_shared<Empty>(1); auto space2 = std::make_shared<Empty>(2); auto space3 = std::make_shared<Ladder>(3); auto space4 = std::make_shared<Empty>(4); auto space5 = std::make_shared<Empty>(5); auto space6 = std::make_shared<Empty>(6); auto space7 = std::make_shared<Snake>(7); auto space8 = std::make_shared<Empty>(8); auto space9 = std::make_shared<Empty>(9);
{ "domain": "codereview.stackexchange", "id": 34254, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++11, pointers, interface", "url": null }
must be typeset using LaTeX. $a = 4$; $b = 2$ implies a reference function $g(n) = n^{\log_2 4} = n^2$. When you run the LaTeX file through LaTeX and BibTeX (instructions below), you'll get output for the body of the document that differs from the output when you use te. Similarly, if $x - k$ is a factor of $f(x)$, then the remainder of the Division Algorithm $f(x) = (x-k)q(x) + r$ is 0. Theorem. Introduction Code Beamer Features More LaTeX Disclaimer #1: I am NOT an expert in LaTeX. I am NOT an expert in Beamer. Disclaimer #2: This talk is designed to introduce you to presentations in LaTeX. 1/30/19 CS4102 Algorithms Spring 2019 Warm up: Given any 5 points on the unit square, show there's always a pair distance ≤ … apart. Miscellanea: Defined terms on the margin; Date and time of compilation; Print labels on the margin (equation,
{ "domain": "consumercreditagency.org", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9632305349799241, "lm_q1q2_score": 0.8484106451503881, "lm_q2_score": 0.8807970826714614, "openwebmath_perplexity": 1198.9546450133387, "openwebmath_score": 0.7071961164474487, "tags": null, "url": "http://secure.consumercreditagency.org/ez5a/master-theorem-latex.html" }
c, linked-list, generics void lst_nodePrint( node *head, void(*print)(const void *)) { while (head) { print(head->data); head = head->next; } } void lst_nodeFree(node *head) { node *tmp = head; while (head) { head = head->next; free(tmp->data); free(tmp); tmp = head; } } void print_int(const void *a) { printf("%d\n", *(const int *)a); } void print_string(const void *str) { puts((const char *)str); } Comment on "implemented a generic linked list": this code seems to be focused on adding variant sizes of data to the list, which is good for strings. I would find this a bit error prone or tedious and would prefer to provide the size at linked-list head creation time, rather than providing it each time with lst_nodeAdd(). Major: Invalid standard C code: node is not yet defined when used in node *next; typedef struct node { void *data; // node *next; struct node *next; } node;
{ "domain": "codereview.stackexchange", "id": 30335, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, linked-list, generics", "url": null }
slam, navigation, rtabmap-ros, rtabmap /rtabmap/rtabmap subscribed to (approx sync): /rtabmap/odom, /camera/rgb/image_rect_color, /camera/depth_registered/image_raw, /camera/rgb/camera_info
{ "domain": "robotics.stackexchange", "id": 25993, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, rtabmap-ros, rtabmap", "url": null }
quantum-mechanics, quantum-field-theory, wavefunction, mathematics, second-quantization Edit: Looking at the answer to this question, as was suggested: Why we use fields instead of wave functions? I get the impression that a quantum field is just the superposition of all the relevant wave functions, like, that the electron field is the superposition of the wavefunctions of all electrons. Is that right? The problem connecting QFT and $N$-body quantum mechanics isn't so much with QFT but rather with relativistic field theories. For a non-relativistic theory the connection is actually quite straight forward, if we define the one-particle momentum states by
{ "domain": "physics.stackexchange", "id": 97463, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-field-theory, wavefunction, mathematics, second-quantization", "url": null }
Regarding the rank statement, we discern from Equation 2 that if $\lambda_j > 0$ then $x_{j,k} \in \mathcal{R}(A^T A)$. The union of these vectors indeed constitutes a basis for $\mathcal{R}(A^T A)$, for anything orthogonal to each of these $x_{j,k}$ necessarily lies in the eigenspace corresponding to a zero eigenvalue, i.e., in $\mathcal{N}(A^T A)$. As $\mathcal{R}(A^T A) = \mathcal{R}(A^T)$ it follows that $\dim \mathcal{R}(A^T A) = r$ and hence the $n_j$, for $\lambda_j > 0$, sum to $r$. Let us now gather together some of the separate pieces of the proof. For starters, we order the eigenvalues of $A^T A$ from high to low, $\lambda_1 > \lambda_2 > \cdots > \lambda_h$, and write
{ "domain": "cnx.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9877587236271351, "lm_q1q2_score": 0.8052530053587668, "lm_q2_score": 0.8152324915965392, "openwebmath_perplexity": 1752.2243218756485, "openwebmath_score": 0.6588999032974243, "tags": null, "url": "http://cnx.org/content/m10739/latest/?collection=col10048/latest" }
condensed-matter, many-body, bose-einstein-condensate, cold-atoms Title: Why is the density of a BEC so low? I've just begun reading C. Pethick and H. Smith's textbook "Bose-Einstein condensation in dilute gases" (Cam. Uni. Press). In the Introduction, they contrast the density of atoms at the centre of a BEC cloud to other phases of matter. To quote from the text (pg 1, 2nd ed.): The particle density at the centre of a Bose-Einstein condensed atomic cloud is typically 10^13–10^15 cm^−3. By contrast, the density of molecules in air at room temperature and atmospheric pressure is about 10^19 cm^−3.
{ "domain": "physics.stackexchange", "id": 56068, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "condensed-matter, many-body, bose-einstein-condensate, cold-atoms", "url": null }
electromagnetism, forces, electrostatics, reference-frames Do you guys know what might be the fallacy? There is no fallacy, you're just not being particularly careful. You need to include both the electric and magnetic forces of the right magnitude and a covariant result drops out. (Of course historically it went the other way around: people noticed that frame changes were messed up unless the transformation laws were different and this led to the development of special relativity.) For simplicity let both beams be very thin and have equal uniform linear charge density $\rho$ in the rest frame and suppose they run exactly parallel separated by a distance $l$. Let the velocity of the beams be $v$ in the lab frame. Rest frame: Taking the usual Gaussian pillbox gives the electric field of one beam at the location of the other as $$ \vec{E}_\text{rest} = \frac{\rho}{2\pi\epsilon_0 l} \hat{r}, $$ where $\hat{r}$ is the unit vector directed away from the source beam. Thus the force on a single particle in the second beam (charge $q$) is:
{ "domain": "physics.stackexchange", "id": 8648, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, forces, electrostatics, reference-frames", "url": null }
signal-analysis, fourier-transform, frequency-spectrum, continuous-signals, fourier-series $$h(t) = \delta(t) + \delta(t-T_0) $$ So you add a delayed copy to the original signal. In the frequency domain this looks like $$H(\omega) = 1 + e^{-j\omega T_0}$$ The delay turns into a phase shift, so you add a phase-shifted copy of the spectrum to the original one. If the phase shift at a certain frequency is 0 degrees, they will just add. But if the phase shift at a different frequency is 180 degrees, they will actually cancel. So some frequencies will be amplified, others will be cancelled; that's called a "comb filter" because a graph of the amplitude with these notches looks like the teeth of a comb. The more you repeat the original rectangle, the more "combing" you get, and eventually all frequencies are gone except the ones that line up exactly with the period of the repetition.
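The notch pattern can be seen directly by evaluating $|H(\omega)| = |1 + e^{-j\omega T_0}|$ numerically; a minimal sketch (the delay value is an arbitrary choice):

```python
import numpy as np

T0 = 1.0                              # delay in seconds (arbitrary choice)
w = np.linspace(0, 4 * np.pi, 1000)   # angular frequency axis
mag = np.abs(1 + np.exp(-1j * w * T0))

# In-phase frequencies are doubled, out-of-phase frequencies are notched out:
print(mag[0])       # gain 2 at ω = 0 (copies add)
print(mag.min())    # close to 0 near ω = π/T0 (copies cancel)
```

Plotting `mag` against `w` shows the evenly spaced "teeth" described above.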
{ "domain": "dsp.stackexchange", "id": 10213, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "signal-analysis, fourier-transform, frequency-spectrum, continuous-signals, fourier-series", "url": null }
algorithm, c, game int arr[10]={1,3,2,4,5,6,1,3,2,4}; //this array ensures 40,40,20 percent division of words to horizontal, vertical and diagonal direction int p=0; //used as an index to traverse the above array arr void insertWordInGrid(char *word,int i) //function to insert word in the grid { //i signifies that the i th word is being inserted struct point place; //point where the word is to be inserted in the grid enum direction d; do{ place.x = rand() % grid_size; //set to a random row place.y = rand() % grid_size; //set to a random column d = (arr[(p++)%10]); //get a direction according to the rule specified } while(!check_insert(word,place,d)); //retry random placements until the word can be inserted int j = 0; //loop variable
{ "domain": "codereview.stackexchange", "id": 13400, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm, c, game", "url": null }
cc.complexity-theory, reference-request, computability, computable-analysis The paper I wrote with Jens is very general and maybe less accessible. Another good source to read about this is Peter Hertling's The Real Number Structure is Effectively Categorical. I have nothing intelligent to say about computational complexity.
{ "domain": "cstheory.stackexchange", "id": 2978, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, reference-request, computability, computable-analysis", "url": null }
python, visualization, matplotlib But the plot I am getting is very weird. I was expecting it would be scattered but it is something like this: I also tried this: colors = cm.jet(np.linspace(0, 1, 300)) for d_in, d_out, c in zip(inp_samples, out_samples, colors): plt.scatter(len(inp_samples), d_in, c=c) plt.scatter(len(out_samples), d_out, c=c) Can anyone help in understanding what I am doing wrong? In scatter in Matplotlib, the first two arguments must be the $x$ and $y$ values that you want to combine. In your code, the first value ($x$) is len(inp_samples) (or len(out_samples)) which is the number 300, so all your points lie on the $x=300$ vertical line. In order to combine the inp_samples with the out_samples, I too would recommend using zip, as in your second sample of code. Just replace plt.scatter(len(inp_samples), d_in, c=c) plt.scatter(len(out_samples), d_out, c=c) with plt.scatter(d_in, d_out, c=c) (and see if it works). Edit
{ "domain": "datascience.stackexchange", "id": 3336, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, visualization, matplotlib", "url": null }
c#, performance, hash-map Title: Replacing all keys found in a dictionary for its value in a list I have a file that is written in DOS format and I need to replace some characters with others. I created a dictionary for that purpose, and now I have to read every line from my stream and replace my values as I go. I created this piece of code in C#; however, I highly doubt that this is the way to go, since I figure that C#'s dictionary class must have a simpler way to do this probably common operation. Any feedback is greatly appreciated! public Dictionary<char, string> Chars { get; set; } = new Dictionary<char, string>(); // Called in my constructor public void CreateDictionnary() { Chars.Add('\u001b', " "); Chars.Add('\u0000', " "); // Multiple other characters that I have to replace } public List<string> ReplaceSpecialCharacters(List<string> lines) { var result = new List<string>();
{ "domain": "codereview.stackexchange", "id": 38441, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, performance, hash-map", "url": null }
java, strings, lambda, column // TODO Format better final int totalLength = (totalMaxLengths + columnSeparator.length() * (maxLengths.size() - 1) + System.lineSeparator().length()) * lines.size(); For the actual output building I suggested StringBuilder last time, but I'd probably also use Streams. The repetition of strings in padCell is also interesting (you got it from Stackoverflow didn't you :) ). Since Java 11 however there is String::repeat(). Or at the very least extract the expression into a well-named method.
{ "domain": "codereview.stackexchange", "id": 44504, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, strings, lambda, column", "url": null }
python, python-3.x Thanks in advance! Your first if is complicated. Instead make a function that clearly says what you're testing against. Keep the grouped prices in a prices object. I'd also keep all the prices in a prices object. Use a main function. You can calculate the total cost, and divide by the amount of seniors. You should group people into an object, and add that object to the names_ppl_going. This means that you won't have to multiply your ids by three, and won't have to have drastically different code in your edit-list logic. Make a person_info function that builds the person for you. This makes your code more DRY. Don't add formatting to your values until you want to display them. My name isn't Peilonrayz: it's Peilonrayz. You can remove the need for n if you set proper start and end points on your range. Calculate the total profit at the end. Calculating it throughout adds unneeded complexity, and actually makes the code slower. str.format is your friend, use it.
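Two of these suggestions — a person_info builder and computing the total once at the end — might look like the following sketch; the field names and price table are hypothetical, not taken from the original code:

```python
# Assumed price table; the categories and amounts are made up for illustration.
PRICES = {"senior": 10.0, "adult": 20.0, "child": 15.0}

def person_info(name, category):
    """Build one person record; formatting is left until display time."""
    return {"name": name, "category": category, "price": PRICES[category]}

people = [person_info("Alice", "senior"), person_info("Bob", "adult")]

# Compute the total once, at the end, instead of updating it throughout.
total = sum(p["price"] for p in people)
print(total)  # 30.0
```

Grouping each person into one record removes the need for index arithmetic like "id times three".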
{ "domain": "codereview.stackexchange", "id": 29882, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x", "url": null }
turtlebot [ INFO] [1470897007.128141633]: Centroid at -0.110063 0.131935 0.519000 with 26554 points [ INFO] [1470897008.236953214]: Centroid at -0.110100 0.131929 0.520000 with 26519 points [ INFO] [1470897009.263505487]: Centroid at -0.110249 0.131807 0.519000 with 26521 points [ INFO] [1470897010.283830842]: Centroid at -0.111009 0.131917 0.520000 with 26443 points [ INFO] [1470897011.313652663]: Centroid at -0.109750 0.132123 0.520000 with 26477 points [ INFO] [1470897012.335891665]: Centroid at -0.109882 0.132068 0.520000 with 26448 points [ INFO] [1470897013.347062630]: Centroid at -0.109674 0.132116 0.520000 with 26536 points In addition, the bot works with teleop app with keyboard.
{ "domain": "robotics.stackexchange", "id": 25497, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turtlebot", "url": null }
and $$\lceil a\rceil + \lceil 2a\rceil + \lceil 3a\rceil + \cdots + \lceil na\rceil$$ for any whole number $n$ and $0 < a < 1$ ? Also, provide the proof for the same. - Have you tried calculating any, looking for patterns? –  Gerry Myerson Oct 5 '12 at 7:21 Yes, tried calculating for both things. There is a pattern for particular numbers like 0.2, 0.4, 0.8. But, how to generalize the pattern for any real number like 0.39856, 0.0009843, etc? Of course, finding general patterns helps in providing a formula. There doesn't seem having a general pattern among all real numbers. –  Nonymous NT Oct 5 '12 at 7:45 For rational $a$, your sums come up in some of the early proofs of quadratic reciprocity --- see, e.g., Eisenstein's proof en.wikipedia.org/wiki/Proofs_of_quadratic_reciprocity –  Gerry Myerson Oct 5 '12 at 13:19
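One way to hunt for the patterns mentioned in the comments is simply to compute the ceiling sums for a rational $a$; a small sketch ($a = 2/5 = 0.4$ is one of the values discussed above, and exact Fraction arithmetic avoids floating-point edge cases at integer multiples):

```python
import math
from fractions import Fraction

def ceil_sum(n, a):
    """S(n, a) = ceil(a) + ceil(2a) + ... + ceil(na)."""
    return sum(math.ceil(k * a) for k in range(1, n + 1))

a = Fraction(2, 5)  # a = 0.4
for n in (5, 10, 15):
    print(n, ceil_sum(n, a))
```

Comparing $S(n,a)$ against the un-rounded sum $n(n+1)a/2$ makes the periodic correction term visible when $a$ is rational.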
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9790357567351324, "lm_q1q2_score": 0.8115607293547226, "lm_q2_score": 0.8289388040954684, "openwebmath_perplexity": 304.9833365284996, "openwebmath_score": 0.9605711102485657, "tags": null, "url": "http://math.stackexchange.com/questions/207604/formula-and-proof-for-the-sum-of-floor-and-ceiling-numbers?answertab=active" }
electromagnetism, special-relativity, reference-frames Title: Relativistic interpretation of magnetic effect on a charge due to a perpendicular current-carrying wire Assume that we have a current-carrying conductor and a positive test charge moving along the current. In the test charge's rest-frame, the electrons in the wire are length-expanded, and the positive ions of the metal are length-contracted. This makes the linear charge density net positive, resulting in a repulsive force on the test charge, perpendicular to the wire. This explanation also concurs with the magnetic force $\mathbf{F}_L=q({\bf v} \times {\bf B})$ on the charge in lab-frame. Now, let's assume that the test charge is moving perpendicular to the wire. We know from elementary magnetism that the magnetic force on the test charge is now along the wire. But how do I get the same result via relativistic length-contraction/expansion in the test charge's rest frame?
{ "domain": "physics.stackexchange", "id": 74418, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, special-relativity, reference-frames", "url": null }
cosmology, reference-frames, neutrinos, cosmic-microwave-background The $\mathrm{C}\nu\mathrm{B}$ also defines a reference frame of zero total momentum. Is it expected that this reference frame is coincident with that of CMB? As is discussed in this answer, the "rest frame" of the cosmic neutrino background would be very similar to that defined by the cosmic microwave background if neutrinos were very light (say $<0.1$ eV). The Sun would be moving with respect to this frame at around 370 km/s. But if neutrinos were more massive (say getting on for 1-2 eV) then they are expected to be very non-relativistic with speeds comparable to the escape speed of the Milky Way. In those circumstances they will cluster around the Milky Way and would have a rest frame that was more similar to that of the Milky Way galaxy itself - i.e. they would be orbiting around the Galaxy and the Sun would move on average at about 220 km/s with respect to the average neutrino.
{ "domain": "physics.stackexchange", "id": 32255, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cosmology, reference-frames, neutrinos, cosmic-microwave-background", "url": null }
algorithms, machine-learning, clustering $$\mathrm{cosine}_{\mathrm{simple}}(b_1,b_2) = \sum_{g \in G} f_{g,b_1} f_{g,b_2}$$ This measure, however, would artificially weight books which have a lot of genres listed higher. More common is to use the relative frequency of a genre within a book, called the term frequency: $$\mathrm{tf}(g,b) = \frac{f_{g,b}}{\sum_{g' \in G} f_{g',b}}$$ And then: $$\mathrm{cosine}_{\mathrm{weighted}}(b_1,b_2) = \sum_{g \in G} \mathrm{tf}(g,b_1) \cdot \mathrm{tf}(g,b_2)$$ Next step up in complexity is tf-idf, short for "term frequency, inverse document frequency". The theory behind this one is that some genres (e.g. "fiction") probably shouldn't weigh as highly as others (e.g. "historical"). To weight each genre, we use a little information theory. The probability that a randomly picked book will have the genre $g$ is: $$P(g) = \frac{\left| \left\{ b \in B : f_{g,b} > 0 \right\}\right|}{\left|B\right|}$$
{ "domain": "cs.stackexchange", "id": 19802, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, machine-learning, clustering", "url": null }
mathematical-physics, waves At first sight it might seem that the impulse wave would be spherical, and centered on the origin of coordinates. However, the wave field at a point $z$ will get contributions from all the point sources lying on circular slices of the spherical shell beginning at the time $t = z/c$, see the figures, and will continue to contribute until $t = (z+2R)/c$. So it doesn't seem that one gets a 'cleanly propagating' impulsive spherical wave. The following shows how as time progresses (t0,t1,t2,t3,t4,t5,t6,...) corresponding circular slices of the impulsive spherical source contribute to the wave field at the stationary point z. So the field is stretched out in time. Also note that with the foregoing approach the 'inward' propagating waves are included in the results at point z.
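The arrival window described above is easy to quantify: the first contribution reaches the observation point at $t = z/c$ and the last at $t = (z+2R)/c$, so the impulse is smeared over a duration $2R/c$. A sketch with hypothetical numbers:

```python
c = 343.0   # wave speed, m/s (assuming sound in air for illustration)
R = 0.1     # radius of the spherical source, m (hypothetical)
z = 1.0     # distance from the near side of the sphere, m (hypothetical)

t_first = z / c            # nearest slice of the shell arrives first
t_last = (z + 2 * R) / c   # farthest slice arrives last
smear = t_last - t_first   # total stretching of the impulse, equal to 2R/c

print(t_first, t_last, smear)
```

The smear duration depends only on the source size and the wave speed, not on the distance $z$, which is why the wave is never a "cleanly propagating" impulse however far it travels.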
{ "domain": "physics.stackexchange", "id": 19153, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mathematical-physics, waves", "url": null }
fluid-dynamics, acceleration, measurements Title: Measuring acceleration of a bus using water between two sheets of glass I was riding a bus one day and noticed that the double windows had some water between them. As the bus accelerated, the water collected to the sides, first forming a trapezoid and then a right triangle. I began wondering how it would be possible to measure the acceleration using only the geometry of the shape of the water. Like, assume that at $a=0$ the water has height $h$ and width $w$. As the bus accelerates, at some time $t_1$ the water forms a trapezoid with the shorter side $h_1$ and longer side $h_2$. The bottom has the same width $w$. At time $t_2$ the water has formed a triangle with height $H$ and width $w$ (the last moment it touches the other side of the glass). And finally at time $t_3$ the height is $y$ and the width is $W$.
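One relation that answers the trapezoid case (a standard result, not stated in the question): in steady horizontal acceleration the free surface tilts so that $\tan\theta = a/g$, which gives $a = g\,(h_2 - h_1)/w$. A sketch with hypothetical measurements:

```python
g = 9.81  # m/s^2, gravitational acceleration

def accel_from_trapezoid(h1, h2, w):
    """Surface rises (h2 - h1) over the width w, so tan(theta) = (h2 - h1)/w = a/g."""
    return g * (h2 - h1) / w

# Hypothetical readings off the window, in metres:
a = accel_from_trapezoid(0.02, 0.05, 0.10)
print(round(a, 3))
```

The triangle phases work the same way once the geometry fixes the surface slope.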
{ "domain": "physics.stackexchange", "id": 6555, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-dynamics, acceleration, measurements", "url": null }
filters, transfer-function, fixed-point %% Calculate transfer function %Anonymous Filter Transfer-Function H = @(z, b0, b1, b2, a1, a2) ((b0 .* z.^2 + b1 .* z + b2) ./ (z.^2 - a1 .* z - a2)); f = 0 : 1 : fs / 2 - 1; %Frequency vector [Hz] z = exp(1i * 2 * pi * f ./ fs); %Calculate frequency-corresponding z vector %Calculate double-precision filter gain [-] and phase [°] mag_double = 1; ph_double = 0; for (section = 1 : nCascadedFilterSections) mag_double = mag_double .* abs(H(z, ... b0_double(section), ... b1_double(section), ... b2_double(section), ... a1_double(section), ... a2_double(section))); ph_double = ph_double + rad2deg(unwrap(angle(H(z, ... b0_double(section), ... b1_double(section), ... b2_double(section), ... a1_double(section), ... a2_double(section))))); end
{ "domain": "dsp.stackexchange", "id": 11089, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, transfer-function, fixed-point", "url": null }
physiology, ant Title: Do Ants have a sense of Direction? Do ants understand which way is up or down? Could they differentiate between uphill and downhill? Actually, no. Amazingly, there is this paper published on the motion of ants, which says:
{ "domain": "biology.stackexchange", "id": 2469, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physiology, ant", "url": null }
# Conversion of grams to milliliters using kg/m3 density Convert 100 grams of a solution to milliliters with a density of $681.9\rm~\frac{kg}{m3}$. It also has a density of 5.69 pounds per gallon. Are any or all of these correct? $$\rm100~g \times \frac{1~mL}{681.9 \frac{g}{cm^3}} = 0.146649~mL\quad(\approx 0.15~mL)$$ $$\rm1,000~g \times \frac{1,000~mL}{681.9~\frac{g}{cm^3}} = 1,466.4906~mL\quad(\approx 1,500~mL)$$ $$\rm1~kg \times \frac{1~L}{681.9~\frac{kg}{m^3}} = 1.4664906~L (\approx1.5~L)$$ Or, can it be as simple as: 100 g x 681.9 = 68.190 mL? The unit kg/m3 is what's confusing me. I'm used to g/cm3 or g/mL for densities. Does 681.9 g/cm3 = 681.9 kg/m3, while scaled-up a thousand-fold? Can I consider using the same density with the math involved by substituting kg/m3 for g/cm3 in the equation, or do I have to utilize a conversion within the metric system?
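A quick numerical check of the unit question: the factor between kg/m³ and g/cm³ is exactly 1000, since 1 kg = 1000 g while 1 m³ = 10⁶ cm³.

```python
rho_kg_m3 = 681.9
rho_g_cm3 = rho_kg_m3 * 1000 / 1e6  # kg/m^3 → g/cm^3 (1 cm^3 = 1 mL)
volume_mL = 100 / rho_g_cm3         # volume of 100 g of the liquid

print(round(rho_g_cm3, 4))  # 0.6819
print(round(volume_mL, 1))  # 146.6
```

So 681.9 kg/m³ equals 0.6819 g/cm³ (not 681.9 g/cm³), and 100 g of the solution occupies roughly 146.6 mL; the 0.146 mL answer comes from plugging 681.9 in as if it were already in g/cm³.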
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9504109812297141, "lm_q1q2_score": 0.8188153766816554, "lm_q2_score": 0.861538211208597, "openwebmath_perplexity": 1011.8475433234929, "openwebmath_score": 0.7373126149177551, "tags": null, "url": "https://chemistry.stackexchange.com/questions/46550/conversion-of-grams-to-milliliters-using-kg-m3-density/46689" }
experimental-physics, dark-matter Title: Why do physicists assume that dark matter is weakly interacting? IceCube, XENON, etc, keep yielding negative results. If dark matter exists, it doesn't interact with baryonic matter at the energy ranges they can detect. The response is to build even bigger detectors to search for even fainter energy signatures. Why? Is there evidence that dark matter is supposed to have weak interactions (instead of gravity-only)? Or is it just searching for your keys under the lamp post (i.e. it's the only possibility that we have a way to detect)? The short answer is that they don't assume that. But among all the proposals that remain for what dark matter might be, weakly interacting stuff is the easiest to detect,1 so that is what is getting the money right now.2
{ "domain": "physics.stackexchange", "id": 73155, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "experimental-physics, dark-matter", "url": null }
which is the Cauchy principal value. The main point is that $\int_\mathbb R$ is not defined as a limit $\lim_{R \to \infty} \int_{-R}^R$. For the Riemann integral it is defined as $$\lim_{a \to -\infty} \int_a^0 x dx + \lim_{b \to +\infty} \int_{0}^b x dx$$ and you can already see that neither of those converge. A similar difference is there with the Lebesgue integral; the integral of a function $f(x)$ is defined as $$\int f(x) dx = \int f(x) ^+ dx - \int f(x)^-dx$$ where $f(x)^+ = \max(f(x), 0)$ and $f(x)^- =- \min(f(x), 0)$. The main point is that we want the two separate integrals to exist, and only then (if both exist) we sum them up. In this way there can't be any cancellation like the one happening with the Cauchy principal value.
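The difference is easy to see with the antiderivative $x^2/2$: the symmetric combination always cancels, while each one-sided piece diverges on its own. A small sketch:

```python
def symmetric_integral(R):
    """∫_{-R}^{R} x dx, split at 0: (-R²/2) + (R²/2) — always zero."""
    return (-R**2 / 2) + (R**2 / 2)

def one_sided(b):
    """∫_{0}^{b} x dx = b²/2 — grows without bound as b → ∞."""
    return b**2 / 2

for R in (10, 100, 1000):
    print(R, symmetric_integral(R), one_sided(R))
```

The symmetric limit (the principal value) is 0 for every $R$, but since the one-sided limits diverge, the integral over $\mathbb{R}$ does not exist under either definition.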
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9759464492044004, "lm_q1q2_score": 0.8024125646334138, "lm_q2_score": 0.8221891327004132, "openwebmath_perplexity": 132.3915363015826, "openwebmath_score": 0.9580719470977783, "tags": null, "url": "https://math.stackexchange.com/questions/1772495/the-doubly-infinite-series-sum-infty-infty-n/2069423" }
radiation, isotopes Would it be possible to considerably speed up the decay rate of an isotope? Considerably meaning more than a 1 or 2% increase in decay rate. Spontaneous decay rates are considered to be a kind of intrinsic property and should not depend on anything else; see the answers in the related question. Usually when there are particles interacting with the atom, the decay rate can change, but it is not spontaneous. I want to add some experimental results. There are occasional reports of observed decay rates fluctuating by less than 1%. Recently, an experimental result claimed that the effect of the Sun on beta decay can be large and its variation depends on the hour of the day and the time of year. Details and discussion are given in a blog post.
{ "domain": "physics.stackexchange", "id": 6164, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "radiation, isotopes", "url": null }
noise, modulation, snr, spread-spectrum Title: Conversion between Eb/N0 and SNR in spread-spectrum modulation like LoRa CSS? This question is somewhat related to the one asked here: How is multiplexing achieved in spread-spectrum modulations like CSS? I am looking at the bit error rate (BER) performance of communication systems based on chirp spread spectrum (CSS) like the one implemented by Semtech's LoRa CSS. The authors of the research paper Range and coexistence analysis of long range unlicensed communication give an analytical expression for computing the $BER$ given the spreading factor ($sf$) and the energy per bit to noise ratio ($E_b/N_0$). The expression is as follows: \begin{equation} BER = Q(\frac{log_{12}(sf)}{\sqrt{2}} \frac{E_b}{N_0} ) \end{equation} where $Q(x)$ is the Q-function. The authors also show some results comparing the BER for a LoRa CSS system with a BPSK system. If useful, I have reproduced the results:
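Taking the quoted expression at face value (and implementing $Q$ via the complementary error function, $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$), its qualitative behaviour can be checked with a few lines; this is only a sketch of the formula as written, not a validated LoRa model:

```python
import math

def Q(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def lora_ber(sf, eb_n0):
    """The paper's expression, taken at face value: Q(log_12(sf)/sqrt(2) * Eb/N0)."""
    return Q(math.log(sf, 12) / math.sqrt(2) * eb_n0)

# BER should start at 0.5 and fall as Eb/N0 grows, for a fixed spreading factor:
for eb_n0 in (0.0, 1.0, 2.0, 4.0):
    print(eb_n0, lora_ber(7, eb_n0))
```
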
{ "domain": "dsp.stackexchange", "id": 11158, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "noise, modulation, snr, spread-spectrum", "url": null }
fft, python, numpy ...I was expecting that the real part will represent amplitudes of two sinusoidal signals and the imaginary part will represent a phase shift of them. The real and imaginary parts of the complex Discrete Fourier Transform together form one complex number which carries several pieces of information. Specifically, the magnitude $|X[k]|$ of the $k$-th complex sinusoid returns information about its amplitude. $$|X[k]| = \sqrt{\mathcal{R}(X[k])^2 + \mathcal{I}(X[k])^2}$$ Or in more practical terms Y_amp = np.abs(np.fft.fft(y)) following the signals established in your example. The "phase shift" is the angle $\angle X[k]$ of the $k$-th complex sinusoid. $$\angle X[k] = \arctan\left(\frac{\mathcal{I}(X[k])}{\mathcal{R}(X[k])}\right)$$ Or in more practical terms Y_phs = np.angle(np.fft.fft(y)). Here $\mathcal{I},\mathcal{R}$ denote the imaginary and real part of a complex sinusoid at discrete frequency $k$. However the amplitude 109 doesn't make much sense to me, nor does the phase shift of +/-128.
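Putting the two recipes together on a clean test signal (a single cosine whose period divides the window, so there is no spectral leakage): the magnitude at the peak bin, scaled by $2/N$, recovers the amplitude, and np.angle at that bin recovers the phase offset.

```python
import numpy as np

N = 256
n = np.arange(N)
A, k0, phi = 3.0, 16, 0.5  # amplitude, frequency bin, phase offset (chosen values)
y = A * np.cos(2 * np.pi * k0 * n / N + phi)

Y = np.fft.fft(y)
amp = 2 * np.abs(Y[k0]) / N  # |X[k0]| = A*N/2 for a real cosine, so rescale by 2/N
phase = np.angle(Y[k0])

print(round(amp, 6), round(phase, 6))  # 3.0 0.5
```

The raw magnitudes (like the 109 above) only look odd because they carry the factor $N/2$ from summing over all samples.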
{ "domain": "dsp.stackexchange", "id": 6401, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fft, python, numpy", "url": null }
javascript, security, cryptography how should it be used to protect data communication between client and server side computing? Use HTTPS. At this time, there really isn't an alternative for this. (Well, you could use third-party services such as OpenID.) You definitely need asymmetric encryption, and you should definitely not write your own. Misc either always use camelCase or always snake_case, don't mix them without reason. key256Bits500Iterations is quite descriptive, but I think key would be just fine as well, and easier to read (and what if you change it to 600 iterations? Either your name would be wrong, or you would need to change it). encrypt doesn't just encrypt, it also changes the form. I would do this in a different function and just let encrypt do the encrypting. Wherever you do the form manipulation, you should also set document.loginForm.password1.value to something else right there.
{ "domain": "codereview.stackexchange", "id": 10387, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, security, cryptography", "url": null }
energy, mass, energy-conservation, time-travel Title: Does time travel violate conservation of mass/energy? Imagine I exist at time $t_1$ and my mass is $m$. At time $t_2$ I time travel back to $t_1$. At time $t_1$ there is now a net increase of mass/energy in the universe by $m$. At time $t_3 = t_2 - x$ where $x < t_2 - t_1$, I travel back to $t_1$ again. The net mass in the universe has now increased by $2 \times m$. Properly qualified, I can do this an arbitrary $n$ number of times, increasing the mass in the universe by $n \times m$. This extra mass, of course, can be converted to energy for a net increase in energy. Does this argument show that traveling back in time violates the conservation of mass/energy? Conservation of Energy is a consequence of Time-translational Symmetry of the system. If this symmetry is broken, there'd be no Conservation of Energy.
{ "domain": "physics.stackexchange", "id": 13453, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "energy, mass, energy-conservation, time-travel", "url": null }
linear-programming, constraint-satisfaction, smt-solvers Below are some approaches I've thought about, and the reason why I can't use them directly: (Integer) Linear Programming: here we're not dealing with the full space of Integers, but instead only with the values found in the bag (and tied to the other properties for each item) Stochastic Constraint Satisfaction: it would be easy to stochastically come up with a selection of Items which respects the constraints, but here we're not looking for any solution, instead we're looking for the solution which yields the highest total Value, which would need us to exhaustively list all possible satisfying solutions before picking the one with highest value (which may be a very inefficient approach) SMT solver: I think it could be the way to go, but so far I can't see how to model the link between the constraints, the objective function and the bag of elements
{ "domain": "cs.stackexchange", "id": 10219, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "linear-programming, constraint-satisfaction, smt-solvers", "url": null }
polygon, but they are not congruent. Congruent figures: two figures are called congruent if they have the same shape and the same size. Why should two congruent squares have the same area? Technically speaking, that could almost be the end of the proof [2]. Here's another HUGE idea, which is much more appealing for visual thinkers: if a pair of _____ are congruent, then they have the same area. Therefore, those two areas are equal, and the area and perimeter of the congruent rectangles will also be the same. The converse fails: two rectangles can have the same area with different lengths of sides, so it's very easy for two rectangles to have the same area and different perimeters, or the same perimeter and different areas. Rectangle 1 with length 12 and width 3. Conversely: "if a rectangle's diagonals are equal, then it is a square" is false, because there exists a rectangle that is not a square that has equal diagonals. Remember, these are
{ "domain": "fonda107.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9678992932829917, "lm_q1q2_score": 0.8044627706632974, "lm_q2_score": 0.8311430499496096, "openwebmath_perplexity": 443.9631058696115, "openwebmath_score": 0.7242865562438965, "tags": null, "url": "http://fonda107.com/assassin-s-cycxk/fe9a5b-if-two-rectangles-have-equal-areas%2C-then-they-are-congruent" }
c#, object-oriented, .net, interface, polymorphism private void ProcessErrorMessage(NetIncomingMessage message) { _Logger.LogError("Network error: {0}", message.ReadString()); } private void AddClient(NetConnection client) { if (_ConnectedClients.Add(client)) _Logger.LogInformation("New client discovered: {0}", client.RemoteEndPoint); else _Logger.LogError("Duplicate client discovered: {0}", client.RemoteEndPoint); } private void RemoveClient(NetConnection client) { if (_ConnectedClients.Contains(client)) _ConnectedClients.Remove(client); _Logger.LogInformation("Client disconnected: {0}", client.RemoteEndPoint); }
{ "domain": "codereview.stackexchange", "id": 14653, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, object-oriented, .net, interface, polymorphism", "url": null }
photons Such an event is very rare and most of the pair production photographs were produced when high energy gamma photons passed through a thin sheet of lead as in the photograph below. There is a very good one taken in 1937 with a Getty Images copyright here. The other way in which gamma and X-ray photons were detected in a cloud chamber was by the photons knocking out electrons from atoms either in the chamber itself or in the walls of the chamber and these secondary electrons produced tracks.
{ "domain": "physics.stackexchange", "id": 39226, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "photons", "url": null }
computational-physics, cellular-automaton Title: Is there a physical system that emulates mathematical cellular automata? Theoretical cellular automata have been proposed as models for many physical phenomena from music to quantum mechanics. My question concerns the reverse: Is there a simple physical system that emulates a theoretical cellular automaton? A computer running CA software such as Hashlife can be considered an (extremely complex) physical system that behaves according to the rules of the programmed CA. I am however looking for something much simpler, such as a set of dominoes that topple each other in a fashion conforming to the rules of a certain CA, wave interference patterns that model a CA etc.
{ "domain": "physics.stackexchange", "id": 11108, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computational-physics, cellular-automaton", "url": null }
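For concreteness, here is a minimal sketch of the kind of mathematical cellular automaton the question is asking a physical system to emulate — an elementary one-dimensional CA. Rule 90 is chosen purely as an illustration (the question names no specific rule), with fixed zero boundaries as an assumption:

```python
def step(cells, rule=90):
    """Advance a 1-D elementary CA one generation (zero cells beyond the edges)."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left  = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        # Encode the 3-cell neighborhood as a number 0..7 and look up
        # the corresponding bit of the Wolfram rule number.
        neighborhood = (left << 2) | (cells[i] << 1) | right
        nxt[i] = (rule >> neighborhood) & 1
    return nxt

# A single live cell under Rule 90 unfolds into a Sierpinski-triangle pattern.
row = step([0, 0, 0, 1, 0, 0, 0])
```

A physical emulator in the question's sense would have to realize exactly this update rule — each site's next state determined by its local neighborhood — in dominoes, wave interference, or some other medium.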
uranus, weather Some suggest that since the outer planets are gaseous, there is virtually no surface roughness to act as a drag on winds as there is on Earth, and hence the atmosphere is more fluid-like. Moreover, Uranus is very far from the Sun, so there is less solar energy to drive turbulence in the atmosphere. Some also suggest internal heat as the reason. Uranus only puts out about 6 percent as much heat as it receives from the Sun, so an internal heat source might be involved on some level. For more information, read this excellent paper: Atmospheric confinement of jet streams on Uranus and Neptune Additional source:
{ "domain": "astronomy.stackexchange", "id": 4837, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "uranus, weather", "url": null }
c++, object-oriented, design-patterns, entity-component-system, constrained-templates class ECSManager {
public:
    ECSManager() : context(entityManager, componentManager) { }

    Entity createEntity() {
        return entityManager.createEntity();
    }

    void destroyEntity(Entity entity) {
        componentManager.destroyEntityComponents(entity);
        entityManager.destroyEntity(entity);
    }

    template<typename ComponentType, typename... Args>
    requires std::derived_from<ComponentType, Component>
    void addComponent(Entity entity, Args&&... args) {
        // std::forward must name the pack's type: std::forward<Args>(args)...
        componentManager.addComponent<ComponentType>(entity, std::forward<Args>(args)...);
    }

    template<typename SystemType, typename... Args>
    requires std::derived_from<SystemType, System>
    void addSystem(Args&&... args) {
        systemManager.addSystem<SystemType>(std::forward<Args>(args)...);
    }

    template<typename SystemType>
    requires std::derived_from<SystemType, System>
    void removeSystem() {
        systemManager.removeSystem<SystemType>();
    }
{ "domain": "codereview.stackexchange", "id": 44809, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, object-oriented, design-patterns, entity-component-system, constrained-templates", "url": null }
electromagnetism, maxwell-equations, differentiation $$\begin{array}{c|c} \text{The rest frame} & \text{The moving frame}\\ \hline \vec E = 0 & \vec E' = \vec v \times \vec B \\ \vec B = +B\hat z & \vec B' = \vec B \\ \end{array}$$ $\vec E = 0$ in the rest frame because there is no charge density $\rho$ in this problem and we require $\vec E' = 0$ when $\vec v = 0$. Thus from the moving bar's perspective, it sees a constant $\vec B$ field and $\vec E$ field everywhere; no Faraday's law required. Since $E'_y = vB$, we can recover the voltage $\mathcal{E} = vBL$ as previously found. Note from this calculation we are actually measuring $\vec E'$ in the moving frame. Lastly, the result from applying the Leibniz integral rule: $$\begin{align} \vec \nabla \times ( \vec E + \vec v \times \vec B) &= - {\partial \vec B \over \partial t} \\ \vec \nabla' \times ( \vec E' ) &= - {\partial \vec B' \over \partial t} \\ \end{align}$$ is the same as applying the field transformations for E and B.
{ "domain": "physics.stackexchange", "id": 47005, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, maxwell-equations, differentiation", "url": null }
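As a numerical sanity check of the transformation quoted above ($\vec E' = \vec v \times \vec B$ with $E'_y = vB$, so $\mathcal{E} = vBL$), here is a small sketch. All numerical values ($v$, $B$, $L$) are made up for illustration; only the relationships come from the text:

```python
# With B = B z-hat and the bar moving at v = v x-hat, the low-velocity field
# transformation gives E' = v x B; its y-component has magnitude v*B, and
# integrating along the bar of length L recovers the motional EMF v*B*L.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

v = (2.0, 0.0, 0.0)   # bar velocity in m/s (assumed value)
B = (0.0, 0.0, 3.0)   # magnetic field in T (assumed value)
L = 0.5               # bar length in m (assumed value)

E_prime = cross(v, B)        # (0, -v*B, 0); the sign depends on orientation
emf = abs(E_prime[1]) * L    # |E'_y| * L = v*B*L
```

The uniform $\vec E'$ seen in the moving frame does all the work here, consistent with the text's point that no appeal to Faraday's law is needed in that frame.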
quantum-field-theory, conservation-laws, electric-current Title: Why does current conservation involve an arbitrary function? In section 6.1 of Peskin's quantum field theory introduction, right after equation 6.3, the four current density $j^{\mu}$ is said to be conserved because for any function $f \left( x \right)$ that falls off at infinity, we have $$ \int f \left( x \right) \partial _{\mu} j^{\mu} \left( x \right) \mathrm{d}^4 x = 0 $$ I am just so confused on the fact that there is a function $f \left( x \right)$ involved in this evaluation. Is current conservation not just $\int \partial _{\mu} j^{\mu} \left( x \right) \mathrm{d}^4 x = 0$?
{ "domain": "physics.stackexchange", "id": 52672, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, conservation-laws, electric-current", "url": null }
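A sketch of the standard resolution (the usual smearing argument, not quoted from Peskin's text): the arbitrary test function is what upgrades a single integral identity to a pointwise statement.

```latex
\int f(x)\,\partial_\mu j^\mu(x)\,\mathrm{d}^4x = 0
\quad \text{for every admissible } f
\;\Longrightarrow\;
\partial_\mu j^\mu(x_0) = 0 \quad \text{for every point } x_0 ,
```

since choosing $f$ sharply peaked around an arbitrary $x_0$ isolates the value of the integrand there (the fundamental lemma of the calculus of variations). By contrast, $\int \partial_\mu j^\mu(x)\,\mathrm{d}^4x = 0$ is a single number: it only constrains a global average and would not, by itself, give the local conservation law $\partial_\mu j^\mu = 0$.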