homework-and-exercises, vector-fields, conservative-field Title: Proving that $\vec F$ is a conservative field I need to prove that $\vec F$ is a conservative field:
{ "domain": "physics.stackexchange", "id": 23360, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, vector-fields, conservative-field", "url": null }
particle-physics, experimental-physics, large-hadron-collider Title: Why does LHC use a $pp$ collision and not $p\overline{p}$ collision? Why does the LHC use $pp$ collisions and not $p\overline{p}$ collisions, when you actually only need one ring for $p\overline{p}$ as compared to two for $pp$? Please answer in easy language (I am not much familiar with Bjorken variables and such). It's much easier to supply a source of protons than antiprotons, which annihilate with everything around them unless kept in a high vacuum. The only difference is the slight charge asymmetry that you need to account for, but proton-proton colliders are really mostly gluon colliders anyway, which are the same between protons and anti-protons. As far as the effect of the charge asymmetry, this comes down to the quark content of the protons. For instance, in order to make a Z boson via the Drell-Yan interaction, one needs a quark-antiquark pair (e.g. an up/upbar). But since protons don't have any valence antiquarks, the charge asymmetry requires the anti-quark to be pulled from the "sea quarks." If your collider has low energy, then the sea quarks will generally not have enough momentum to make heavy particles. This is the hit you take when working with proton-proton collisions. In general, the absence of anti-protons means that you need to pull anti-quarks from the vacuum, which, in turn, implies they will tend to have lower energy.
{ "domain": "physics.stackexchange", "id": 39500, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, experimental-physics, large-hadron-collider", "url": null }
inorganic-chemistry, everyday-chemistry, electrochemistry, plastics Title: What is the chemical reaction between coins (copper and nickel) and polyvinyl chloride? I often carry coins in a cheap plastic sandwich bag. A green solid soon appears. Is it copper oxide? How is the polyvinyl chloride (or polyethylene or polypropylene) greatly accelerating the reaction? Or are they? Are there other chemicals being formed? Coin collector websites all say vinyl-containing compounds, and maybe other plastics, damage coins. But they don't say how.... P.S.: Coins are made of copper, nickel and zinc, right? Far, far too broad a question. I don't know what is going on in your situation.
{ "domain": "chemistry.stackexchange", "id": 10386, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, everyday-chemistry, electrochemistry, plastics", "url": null }
homework-and-exercises, newtonian-mechanics, forces Title: Help me understand constant speed on a slope An object is pulled up a slope at constant speed. I'm trying to find the tension in the rope. $$m = 65\,\mathrm{kg} \Rightarrow W=-637$$ $$\mbox{constant speed} \Rightarrow a = 0$$ $$\mbox{angle of slope} = 45^\circ$$ $$T=?$$ I use Newton's first law to determine that there is no acceleration because of constant speed: $$\sum F=0$$ Online I found that I should use this: $$W \sin\theta=T$$ I don't understand how they came up with that equation. I tried using mine like this but the answer is way too large to be correct. My equation: $\sum F_y=-W+\sin\theta*T_y=0$ This is because you're thinking of the wrong problem. What you're thinking is this: That's not what the problem states. Actually, if you assume 0 N in the vertical direction, this would cause it to accelerate horizontally, which is not what we want. The problem is about a ramp: You have to break the weight into components that are "x/y" if we treat the surface of the ramp as "y=0". Then, as you can see, $F_{net \, x}=T-W\sin \theta=0$, which gives you the solution. Also note for the future that the "vertical" component of the weight cancels the normal force. By the way, a tip for the future: know $mg \sin \theta$ well. I remember that it's the component along the slope by thinking: "s" in "sin" for "sliding."
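The force balance along the incline can be checked numerically; a minimal sketch (taking $g = 9.8\ \mathrm{m/s^2}$ and treating $W$ as the weight's magnitude):

```python
import math

# Tension needed to pull a mass up a frictionless slope at constant speed:
# along the ramp, T - W*sin(theta) = 0, so T = m*g*sin(theta).
def tension(m, theta_deg, g=9.8):
    return m * g * math.sin(math.radians(theta_deg))

print(tension(65, 45))  # ≈ 450.4 N, well below the full weight of 637 N
```

This also shows why summing forces along the ramp, rather than vertically, gives a sensible answer: the tension is smaller than the full weight.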
{ "domain": "physics.stackexchange", "id": 13954, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, newtonian-mechanics, forces", "url": null }
homework-and-exercises, electromagnetism Title: Electric dipole oscillation in a uniform electric field We have two charged balls of masses $m_1=m_2=15.0\,g$, and charges $+q_1=-q_2=0.800\,\mu C$, fixed to the ends of a very light rod of length $\ell$. The centre of the rod is mounted on a friction-free pivot, and the whole system is then immersed in a uniform electric field of magnitude $E=450\,\frac NC$ (pointing from left to right). I want to determine the period of oscillation if the system is disturbed from its initial orientation by a small angle, and then the same thing but when $m_2'=2m_1$.
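For the equal-mass case, the small-angle result can be sketched directly: with dipole moment $p=q\ell$ and moment of inertia $I = 2m(\ell/2)^2 = m\ell^2/2$ about the pivot, $T = 2\pi\sqrt{I/(pE)} = 2\pi\sqrt{m\ell/(2qE)}$. The rod length $\ell$ is not given in the problem, so it stays a parameter; the value below is only illustrative.

```python
import math

# Small-angle oscillation of a dipole (two equal masses m at the ends of a
# rod of length l, charges +q/-q) in a uniform field E, pivoted at the centre:
#   restoring torque ≈ -q*l*E*theta,  I = 2*m*(l/2)**2 = m*l**2/2
#   => T = 2*pi*sqrt(m*l / (2*q*E))
def period(m, q, E, l):
    return 2 * math.pi * math.sqrt(m * l / (2 * q * E))

# l = 1.0 m is a made-up value; the problem statement leaves l symbolic.
print(period(0.015, 0.8e-6, 450, 1.0))  # ≈ 28.7 s
```

For the unequal-mass variant only the moment of inertia changes; the restoring torque is the same.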
{ "domain": "physics.stackexchange", "id": 70725, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electromagnetism", "url": null }
turtlebot Title: Is there a minimum laptop specification for the TurtleBot? I would like to know what the minimum requirements are for processing and graphics. I cannot find this info anywhere online. Originally posted by tfoote on ROS Answers with karma: 58457 on 2012-02-11 Post score: 0 The TurtleBot is a flexible system which can work at many levels. Controlling the base can be done by almost any processor. To process the Kinect you need at least an Atom processor from 2011 or newer. Every computer is different and we can't test all of them; we don't use the graphics card much, since we usually keep the laptop lid closed. We recommend the Asus EeePC 1215N, as it can process the Kinect with one core and has a second core available for other processing, and we've tested extensively with it. Other things the laptop will need include: at least 2 USB buses, with 1 USB 2.0 bus to dedicate solely to the Kinect, and a wifi adapter compatible with your network. Originally posted by tfoote with karma: 58457 on 2012-02-11 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 8197, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turtlebot", "url": null }
gazebo Title: GAZEBO plugin subscribe to ROS topic Greetings, I have some problems making a GAZEBO plugin and a ROS node communicate. The ROS node correctly publishes the message on the topic, but the plugin isn't able to access it. This is the code of the plugin: // Initialize node node = transport::NodePtr(new transport::Node()); // Subscribe to the control node node->Init("gazebo/quetzalcoatl"); std::cout << "Subscribing to " << "~/quetzalcoatl/cmdVelocity" << " topic" << std::endl; commandSubscriber = node->Subscribe("~/quetzalcoatl/cmdVelocity", &SkidSteer::updateVelocity, this); std::cout << "Subscribed to " << commandSubscriber << std::endl; // Listen to the update event, this event is broadcast every simulation iteration this->updateConnection = event::Events::ConnectWorldUpdateBegin( boost::bind(&SkidSteer::OnUpdate, this, _1)); This is the code of the ROS node which publishes the message: while (ros::ok()) { ROS_INFO("Node '%s' sending the %d message", ros::this_node::getName().c_str(), count); // Update the velocities linearVelocity = 10 + count / 100; angularVelocity = 0; // Fill the message cmdVelocityMsg.linearVelocity = linearVelocity; cmdVelocityMsg.angularVelocity = angularVelocity; cmdVelocityMsg.moveUp = moveUp; cmdVelocityMsg.moveDown = moveDown; cmdVelocityMsg.moveLeft = moveLeft; cmdVelocityMsg.moveRigth = moveRigth;
{ "domain": "robotics.stackexchange", "id": 3958, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo", "url": null }
c#, beginner, mathematics RngCrypto class: using System; using System.Security.Cryptography; public class RngCrypto { private const int BufferSize = 1024; // must be a multiple of 4 private readonly byte[] _randomBuffer; private int _bufferOffset; private readonly RNGCryptoServiceProvider _rng; public RngCrypto() { _randomBuffer = new byte[BufferSize]; _rng = new RNGCryptoServiceProvider(); _bufferOffset = _randomBuffer.Length; } private void FillBuffer() { _rng.GetBytes(_randomBuffer); _bufferOffset = 0; } public int Next() { if (_bufferOffset >= _randomBuffer.Length) { FillBuffer(); } int val = BitConverter.ToInt32(_randomBuffer, _bufferOffset) & 0x7fffffff; _bufferOffset += sizeof(int); return val; } public int Next(int maxValue) { return Next() % maxValue; } public int Next(int minValue, int maxValue) { if (maxValue < minValue) { throw new ArgumentOutOfRangeException(@"maxValue must be greater than or equal to minValue"); } int range = maxValue - minValue; return minValue + Next(range); } public double NextDouble() { int val = Next(); return (double)val / int.MaxValue; } public void GetBytes(byte[] buff) { _rng.GetBytes(buff); } } I don't know much about making numbers random enough, but I have some other suggestions. Instead of checking if a range of integers contains a number you can just check if the number is >= the beginning of the range and <= the end of the range. So you can replace this: if(criticalHits.Any(criticalHit => criticalHit == hitThatIsCritical)) With this: if(((int)percentChance) <= hitThatIsCritical)
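One thing a reviewer would likely flag in the class above: `Next() % maxValue` is not uniform unless `maxValue` divides 2^31, so small residues are slightly over-represented. A sketch of the usual rejection-sampling fix, written in Python for brevity (`secrets` stands in for the CSPRNG; the function name is mine):

```python
import secrets

def next_below(max_value):
    # `Next() % maxValue` over-represents small residues whenever
    # max_value does not divide 2**31. Rejection sampling discards the
    # biased tail so every residue is equally likely.
    limit = (2**31 // max_value) * max_value
    while True:
        v = secrets.randbits(31)
        if v < limit:
            return v % max_value
```

The rejection loop rarely runs more than once, since the discarded tail is smaller than `max_value` out of 2^31 values.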
{ "domain": "codereview.stackexchange", "id": 20068, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner, mathematics", "url": null }
quantum-mechanics, hilbert-space, operators, notation In Dirac's notation the inner product is denoted by $\langle \cdot | \cdot \rangle$; hence $\langle i | \psi\rangle$ just denotes the inner product of $|i\rangle$ with $|\psi\rangle$, namely $\alpha_i$. More generally, if $|\psi\rangle$ and $|\phi\rangle$ denote two elements of $\mathcal{H}$ with $|\phi\rangle = \sum_i \beta_i |i\rangle$, then $\langle \psi|\phi\rangle$ denotes their inner product, namely $\sum_i \alpha_i^*\beta_i$. Moreover, as a vector space $\mathcal{H}$ also has a dual space $\mathcal{H}^{\dagger}$. Note that $\mathcal{H}^{\dagger}$ is the space of linear maps from $\mathcal{H}$ to $\mathbb{C}$. There is a one-to-one correspondence between $\mathcal{H}$ and $\mathcal{H}^{\dagger}$ that one can define with the help of the inner product. In Dirac's notation, the image of $|\psi\rangle$ under this correspondence is denoted $\langle \psi|$, so that $\langle \psi|$ applied to $|\phi\rangle$ is the inner product $\langle \psi|\phi\rangle$.
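For finite-dimensional $\mathcal{H}$ the correspondence is just the conjugate transpose, and the formula $\langle \psi|\phi\rangle = \sum_i \alpha_i^*\beta_i$ can be checked numerically (NumPy's `vdot` conjugates its first argument, which is exactly this convention; the vectors below are my own example):

```python
import numpy as np

# |psi> = (1/sqrt(2))(|0> + i|1>),  |phi> = |0>
psi = np.array([1, 1j]) / np.sqrt(2)
phi = np.array([1, 0], dtype=complex)

# <psi|phi> = sum_i conj(alpha_i) * beta_i
print(np.vdot(psi, phi))  # ≈ 0.7071 (= 1/sqrt(2))
```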
{ "domain": "physics.stackexchange", "id": 71249, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, hilbert-space, operators, notation", "url": null }
java, programming-challenge Title: Project Euler #12 - Highly divisible triangular number I'm an amateur programmer, new to Java, and while attempting Project Euler problem 12 (highly divisible triangular number) I ran into an extremely long run time, with no result as of yet. Is my code efficient, and what should I do to improve it? Is there a special method to follow when finding factors of numbers? Basically I need to find the first triangle number with over 500 divisors. public class Divisor { public static void main(String[] args) { int f = 0; //divisors int m = 500; //max divisors int j = 1; //current number int z = 0; //sum (last run achieved: 135878572) int a = 1; //current denominator String t = ""; //total divisors while (f<=m) { f = 0; z += j; j++; System.out.println("------"); System.out.println("t: " + z); //Now get factors of each, the first to have over 500 is the answer while (a <= z){ if ((z % a) == 0) { t += (String.valueOf(a) + "|"); f++; } a++; } System.out.println("f: " + t); t=""; System.out.println("d: " + f); a = 1; } System.out.print("Answer: " + z); } } Here is an example of my output (first 3 triangle numbers): ------ t: 1 <--- Triangle Number f: 1| <--- Factors (Divisors) d: 1 <--- Total Factors (Total Divisors) ------ t: 3 f: 1|3| d: 2 ------ t: 6 f: 1|2|3|6| d: 4 ------
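The usual fix for this kind of run time is to count divisors from the prime factorization instead of trial-dividing by every number up to the triangle number: if $n = p_1^{a_1}\cdots p_k^{a_k}$, then $n$ has $(a_1+1)\cdots(a_k+1)$ divisors, which needs only trial division up to $\sqrt n$. A sketch in Python (function names are mine):

```python
def num_divisors(n):
    # Count divisors via prime factorization: product of (exponent + 1).
    count = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            exp = 0
            while n % d == 0:
                n //= d
                exp += 1
            count *= exp + 1
        d += 1
    if n > 1:        # leftover prime factor
        count *= 2
    return count

def first_triangle_over(limit):
    # First triangle number T_k = k(k+1)/2 with more than `limit` divisors.
    k = 1
    while True:
        tri = k * (k + 1) // 2
        if num_divisors(tri) > limit:
            return tri
        k += 1
```

A further refinement uses the fact that $k$ and $k+1$ are coprime, so the divisor counts of $k/2$ (or $(k+1)/2$) and the other factor can be multiplied, but even the version above answers the 500-divisor question quickly.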
{ "domain": "codereview.stackexchange", "id": 16944, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, programming-challenge", "url": null }
# Characterization of $\mathbb{R}^n$? Let $$M$$ be a smooth $$n$$-dimensional manifold with the property that any compact subset $$K \subset M$$ is contained in an $$n$$-dimensional smooth ball $$K \subset B \subset M$$. If $$M$$ is open, does it follow that $$M$$ is diffeomorphic to $$\mathbb{R}^n$$? Note that all of the homotopy groups of $$M$$ must vanish and therefore, by Whitehead's theorem, $$M$$ is contractible. • Do you consider manifolds with boundary? If yes, a smooth ball is a closed smooth ball? – Paul Frost Jan 8 '19 at 9:37 • @PaulFrost - whoops sorry I fixed it - I am just interested in the open case – user101010 Jan 8 '19 at 16:14 • What is an $n$-dimensional smooth ball? Is it simply a ball in $M$ that is diffeomorphic to $\mathbb{R}^n$? And finally isn't an exotic $\mathbb{R}^4$ a counterexample? – freakish Jan 8 '19 at 19:15 • An $n$-dimensional smooth ball is $B^n$ with the standard smooth structure. Is an exotic $\mathbb{R}^4$ a counterexample? I don't see why. If so, is it the only one? – user101010 Jan 8 '19 at 20:19 • Can I assume each ball has compact closure diffeomorphic to a closed ball? – user98602 Jan 8 '19 at 20:58 Yes, this is true. First, orient your $$n$$-manifold $$M$$ (your hypotheses imply that $$M$$ is contractible, so this is possible). Then, by your hypothesis, you obtain an increasing exhaustion $$M_k \subset M$$ of compact sets, diffeomorphic to the $$n$$-ball, so that each $$M_k$$ is contained in the interior of $$M_{k+1}$$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9859363725435202, "lm_q1q2_score": 0.8150852210708571, "lm_q2_score": 0.8267117876664789, "openwebmath_perplexity": 258.3417568614794, "openwebmath_score": 0.9373983144760132, "tags": null, "url": "https://math.stackexchange.com/questions/3065755/characterization-of-mathbbrn" }
java, object-oriented, game } Grid package go; import go.GameBoard.State; /** * Provides game logic. * * */ public class Grid { private final int SIZE; /** * [row][column] */ private Stone[][] stones; public Grid(int size) { SIZE = size; stones = new Stone[SIZE][SIZE]; } /** * Adds Stone to Grid. * * @param row * @param col * @param state */ public void addStone(int row, int col, State state) { Stone newStone = new Stone(row, col, state); stones[row][col] = newStone; // Check neighbors Stone[] neighbors = new Stone[4]; // Don't check outside the board if (row > 0) { neighbors[0] = stones[row - 1][col]; } if (row < SIZE - 1) { neighbors[1] = stones[row + 1][col]; } if (col > 0) { neighbors[2] = stones[row][col - 1]; } if (col < SIZE - 1) { neighbors[3] = stones[row][col + 1]; } // Prepare Chain for this new Stone Chain finalChain = new Chain(newStone.state); for (Stone neighbor : neighbors) { // Do nothing if no adjacent Stone if (neighbor == null) { continue; } newStone.liberties--; neighbor.liberties--; // If it's a different color than newStone, check it if (neighbor.state != newStone.state) { checkStone(neighbor); continue; } if (neighbor.chain != null) { finalChain.join(neighbor.chain); } } finalChain.addStone(newStone); } /** * Check liberties of Stone * * @param stone */ public void checkStone(Stone stone) { // Every Stone is part of a Chain so we check total liberties if (stone.chain.getLiberties() == 0) { for (Stone s : stone.chain.stones) { s.chain = null; stones[s.row][s.col] = null; } } }
{ "domain": "codereview.stackexchange", "id": 14169, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, object-oriented, game", "url": null }
11/9/19 # Evaluating Arbitrary Precision Integer Expressions in Julia using Metaprogramming While watching the Mathologer masterclass on power sums I came across a challenge to evaluate the following sum $1^{10}+2^{10}+\cdots+1000^{10}$ This can be easily evaluated via brute force in Julia to yield function power_cal(n) a=big(0) for i=1:n a+=big(i)^10 end a end julia> power_cal(1000) 91409924241424243424241924242500 Note that I had to use big to make sure we are using BigInt in the computation. Without that, we would quickly be running into an overflow issue and we would get a very wrong number. In the comment section of the video I found a very elegant solution to the above sum, expressed as (1/11) * 1000^11 + (1/2) * 1000^10 + (5/6) * 1000^9 - 1000^7 + 1000^5 - (1/2) * 1000^3 + (5/66) * 1000 = 91409924241424243424241924242500 If I try to plug this into Julia, I get julia> (1/11) * 1000^11 + (1/2) * 1000^10 + (5/6) * 1000^9 - 1000^7 + 1000^5 - (1/2) * 1000^3 + (5/66) * 1000 -6.740310541071357e18 This negative answer is not surprising at all, because we obviously ran into an overflow. We can, of course, go through that expression and replace all instances of Int64 with BigInt by wrapping them in the big function. But that would be cumbersome to do by hand. ## The power of Metaprogramming In Julia, metaprogramming allows you to write code that creates code; the idea here is to manipulate the abstract syntax tree (AST) of a Julia expression. We start by "quoting" our original mathematical expression as a Julia expression. In that form it is not evaluated yet; however, we can always evaluate it via eval.
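The closed form itself is easy to cross-check in any language with exact arithmetic. A quick Python check of the formula against brute force (Python integers are arbitrary precision by default, and `Fraction` keeps the 1/11-type coefficients exact):

```python
from fractions import Fraction as F

# Faulhaber-style closed form for 1^10 + 2^10 + ... + n^10,
# checked against direct summation.
n = 1000
closed_form = (F(1, 11) * n**11 + F(1, 2) * n**10 + F(5, 6) * n**9
               - n**7 + n**5 - F(1, 2) * n**3 + F(5, 66) * n)
brute_force = sum(i**10 for i in range(1, n + 1))
assert closed_form == brute_force
print(brute_force)
```

All the fractional parts cancel, so the closed form lands exactly on an integer.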
{ "domain": "perfectionatic.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9664104924150546, "lm_q1q2_score": 0.8053336304968961, "lm_q2_score": 0.8333245932423309, "openwebmath_perplexity": 5748.005973277766, "openwebmath_score": 0.869595468044281, "tags": null, "url": "https://perfectionatic.org/" }
communication, ros-melodic, rosbridge, network, rosmaster Originally posted by Raza Rizvi with karma: 95 on 2021-09-29 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by osilva on 2021-09-29: Ingenious solution Comment by Raza Rizvi on 2021-09-29: Haha thanks, don't forget to vote for this solution. Comment by gvdhoorn on 2021-09-30: So in the end you just needed a multi-homed host? That doesn't require multiple NICs on most OS. You can assign multiple IPs to the same NIC. Comment by Raza Rizvi on 2021-09-30: No, I wanted to connect to two separate networks inside one PC. So for that, I used two separate network adapters and connected them to separate networks. In doing so, and launching one ROS Master, I was able to get all the ROS topics from both the separate networks into just one ROS Master inside the PC. Comment by gvdhoorn on 2021-09-30: Unless they're physically separated networks, all you need is a router. But if you're happy, we're happy.
{ "domain": "robotics.stackexchange", "id": 36899, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "communication, ros-melodic, rosbridge, network, rosmaster", "url": null }
organic-chemistry, nomenclature, esters Title: How to name an ester with an alkene or alkyne as the R group? More specifically, see the image down below: Would it be pentan-2-ene-3-yl propanoate? Do you just add the 2-ene to specify the position of the double bond? According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), the names of esters are generally formed by placing the alcoholic component in front of the name of the acid component as a separate word (e.g. ethyl acetate). P-65.6.1 General methodology Neutral salts and esters are both named using the name of the anion derived from the name of the acid. Anion names are formed by changing an ‘-ic acid’ ending of an acid name to ‘-ate’ and an ‘-ous acid’ ending of an acid name to ‘-ite’. Then, salts are named using the names of cations, and esters the names of organyl groups, cited as separate words in front of the name of the anion. The organyl group is named using the usual methodology. P-29.2 GENERAL METHODOLOGY FOR NAMING SUBSTITUENT GROUPS The presence of free valences formally derived from the loss of one or more hydrogen atoms from a parent hydride is denoted by suffixes ‘yl’, ‘ylidene’, and ‘ylidyne’, together with multiplying prefixes indicating the number of free valences; lowest locants are assigned to all free valences as a set, then in the order ‘yl’, ‘ylidene’, ‘ylidyne’. In names, the suffixes are cited in the order ‘yl’, ‘ylidene’, ‘ylidyne’. The suffixes ‘ylidene’ and ‘ylidyne’ are used only to indicate the attachment of a substituent to a parent hydride or parent substituent by a double or triple bond, respectively. (…) Systematic names are formed by using the suffixes ‘yl’, ‘ylidene’ and ‘ylidyne’, with elision of the final letter ‘e’ of parent hydrides, when present, according to two methods:
{ "domain": "chemistry.stackexchange", "id": 17889, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, nomenclature, esters", "url": null }
biochemistry, cell-biology, proteins Title: Why is protein turnover necessary or important for cells to function? Cells constantly create new proteins in order to maintain their normal function, this is called protein turnover. Why is that? Do the old molecules wear out as time passes, so that they need a replacement? Biology is an intricate orchestration of chemical reactions and their products. Generally, this feat is accomplished by enzymatic facilitation of certain reactions that would otherwise occur too slowly. However, "unwanted" reactions occur spontaneously all the time, too. One important mechanism for these reactions is the presence of free radicals causing oxidative stress: reactive molecules are present in cells as a consequence of the energy necessary for metabolism, and these can react with proteins and other cellular components and change their structure. Modified proteins may fold incorrectly and lose their function (or gain harmful function). There is very little that can be done to prevent these reactions from occurring except to clean up afterwards. Half-lives vary by protein, but for most proteins they are measured on the scale of hours (Chen et al, 2016) to a couple of days (Boisvert et al, 2012). By constantly degrading and replacing proteins, cells ensure that their proteins are functional and fresh. You might consider this as analogous to performing regular maintenance on a machine like an automobile, using replacement parts as the originals become worn. Both protein degradation (e.g., via the ubiquitin-proteasome system) and synthesis are highly regulated. Boisvert, F. M., Ahmad, Y., Gierliński, M., Charrière, F., Lamont, D., Scott, M., ... & Lamond, A. I. (2012). A quantitative spatial proteomics analysis of proteome turnover in human cells. Molecular & Cellular Proteomics, 11(3). Chen, W., Smeekens, J. M., & Wu, R. (2016). Systematic study of the dynamics and half-lives of newly synthesized proteins in human cells. Chemical Science, 7(2), 1393-1400.
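The cited half-lives translate directly into turnover rates; a small illustrative calculation (the 10 h half-life below is a made-up round number within the ranges cited above):

```python
# Fraction of an original protein pool remaining after time t (hours),
# assuming first-order degradation with half-life t_half (hours).
def remaining(t, t_half):
    return 0.5 ** (t / t_half)

print(remaining(24, 10))  # ≈ 0.19: roughly 80% of the pool replaced in a day
```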
{ "domain": "biology.stackexchange", "id": 11923, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "biochemistry, cell-biology, proteins", "url": null }
python, plugin # Get total number of layers layer_data_count = len(layer_data) try: # Create new list for layer_count but ignore any zeros new_layer_count = [x for x in layer_count if x != 0] except ValueError: pass # List manipulation # Calculate the number of layers in each group and divide 1 by this number value_list = [1 / float(x) for x in new_layer_count] # Format the list to one decimal place formatted_value_list = ['%.1f' % elem for elem in value_list] # Convert the values in list to float formatted_value_list_to_float = [float(x) for x in formatted_value_list] # Create final list containing each layer and the values of their group final_value_list = [x for n,x in zip(new_layer_count,formatted_value_list_to_float) for _ in range(n)] # Set number of rows/columns nb_row = len(layer_data) nb_col = 2 qTable.setRowCount(nb_row) qTable.setColumnCount(nb_col) # Hide row index number qTable.verticalHeader().setVisible(False) # Insert layer names and values for row in range(layer_data_count): for col in [0]: item = QTableWidgetItem(str(layer_data[row])) qTable.setItem(row,col,item) # Make first column non-editable item.setFlags(QtCore.Qt.ItemIsEnabled) for col in [1]: item = QTableWidgetItem(str(final_value_list[row])) qTable.setItem(row,col,item) Your lists manipulations are indeed messy. It seems like you are trying to perform multiple things at once and thus you end up mixing variables that serve different purposes into the same loops. Instead, you should try to extract logical steps to perform, such as:
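The answer's list of extracted steps was cut off in this copy; one possible decomposition (my own sketch of the idea, not necessarily what the reviewer went on to propose) separates "compute the per-group weight" from "expand it per layer", replacing the string round-trip with a plain `round`:

```python
def group_weights(layer_count):
    """Drop empty groups and give each group the weight 1/size,
    rounded to one decimal place (as the original '%.1f' does)."""
    nonzero = [c for c in layer_count if c != 0]
    return nonzero, [round(1.0 / c, 1) for c in nonzero]

def expand_per_layer(counts, weights):
    """Repeat each group's weight once per layer in that group."""
    return [w for c, w in zip(counts, weights) for _ in range(c)]

counts, weights = group_weights([3, 0, 2])
print(expand_per_layer(counts, weights))  # [0.3, 0.3, 0.3, 0.5, 0.5]
```

Each function then does one thing, and the table-filling loop only has to consume the final list.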
{ "domain": "codereview.stackexchange", "id": 24721, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, plugin", "url": null }
machine-learning, deep-learning, cross-validation, preprocessing, competitions Inspired by this, I use a classification algorithm to select those training data which fit my test data best. The plan: (1) filter the training data as above, (2) figure out some nice preprocessing, (3) train nice classification algorithms, (4) build ensembles of them (stacking, ...). The concrete questions: Concerning step 1: Do you have experience with such an approach? Let's say I order the train samples by their probability of belonging to the test set (usually below 0.5) and then take the largest K probabilities. How would you choose K? I tried with 15K, but mainly to have a small training data set in order to speed up training in step 3. Concerning step 2: The data is already on a 0-1 scale. If I apply any (PCA-like) linear transformation then I would break this scale. What would you try in preprocessing if you have such numerical data and no idea what it actually is? PS: I am aware that because numer.ai pays, people discussing this could help me make some money. But as this is public, it would help anybody out there... PPS: Today's leaderboard has an interesting pattern: the top two with logloss of 0.64xx, then number 3 with 0.66xx, and then most of the predictors reach 0.6888x. Thus there seems to be a very small top field and a lot of moderately successful competitors (including me). I've looked at the approach and I'd select K by trying a range, e.g. 5k, 10k, 15k, and then exploring the range in which the best result falls; say the best is 15k, then I might try 13k, 14k, 15k, 16k, 17k, and so on. So far I've not found any pre-processing to be effective. Answering the comment: I've tried LogisticRegression, SVM, Neural Networks, RandomForests, Multinomial NB, and Extra Trees, all except the Neural Networks using the implementations in sklearn, and PyBrain for the NN.
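Step 1 is what is now usually called adversarial validation. A minimal sklearn sketch (function and variable names are mine):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def most_test_like(X_train, X_test, k):
    # Label train rows 0 and test rows 1, fit a classifier, then keep
    # the k train rows the model scores as most "test-like".
    X = np.vstack([X_train, X_test])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_test = clf.predict_proba(X_train)[:, 1]
    return np.argsort(p_test)[-k:]
```

K then becomes an ordinary hyperparameter to sweep, exactly as the answer suggests (5k, 10k, 15k, then refine around the best value).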
{ "domain": "datascience.stackexchange", "id": 955, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, deep-learning, cross-validation, preprocessing, competitions", "url": null }
beginner, rust Checking whether the input is empty at the beginning of the function probably does nothing useful. The function still does the same thing without it, so the best you can hope for is it saves you a handful of instructions (BTreeMap::new does not allocate), at the expense of adding an unavoidable branch at the beginning of every call. I wouldn't put it in unless profiling showed a non-negligible effect. To help myself not lose track of which u32s are data and which are counts, I'm going to add a type alias Count. Finally, this loop in the original offers a chance to use some iterator magic: for _ in 0..(*count as i32) { sorted.push(digit); }
{ "domain": "codereview.stackexchange", "id": 39588, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, rust", "url": null }
dynamics, matlab Title: Dynamic Model of a Manipulator I'm stuck on equation 4.30 of page 176 in http://www.cds.caltech.edu/~murray/books/MLS/pdf/mls94-complete.pdf This equation: $\frac {\partial M_{ij}} {\partial \theta_k} = \sum_{l=\max(i,j)}^n \Bigl( [A_{ki} \xi_i, \xi_k]^T A_{lk}^T {\cal M}_l' A_{lj} \xi_j + \xi_i^T A_{li}^T {\cal M}_l' A_{lk} [A_{kj} \xi_j, \xi_k] \Bigr)$ seems impossible to process because it appears to require adding a 2x1 matrix to a 1x2 matrix (going by ROWSxCOLUMNS notation). Matrices M and A are 6x6 and $\xi$ is 6x1, so how does this addition fit the rules of matrix addition? This must be my mistake, I just don't see how. The Lie bracket is 6x1, not 6x2, so both terms should be 1x1.
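One way to convince yourself of the shapes is to build the bracket explicitly. A NumPy sketch, assuming the MLS twist convention $\xi = (v, \omega) \in \mathbb{R}^6$ with $[\xi_1,\xi_2] = \mathrm{ad}_{\xi_1}\xi_2$ (the identity matrix below is only a stand-in for a transformed link inertia ${\cal M}_l'$):

```python
import numpy as np

def lie_bracket(x1, x2):
    # Twists as 6-vectors (v, w); [x1, x2] = ad_{x1} x2 is again a 6-vector:
    #   ( w1 x v2 + v1 x w2 ,  w1 x w2 )
    v1, w1 = x1[:3], x1[3:]
    v2, w2 = x2[:3], x2[3:]
    return np.concatenate([np.cross(w1, v2) + np.cross(v1, w2),
                           np.cross(w1, w2)])

xi_rot_z = np.array([0., 0., 0., 0., 0., 1.])   # rotation about z
xi_tr_x = np.array([1., 0., 0., 0., 0., 0.])    # translation along x
b = lie_bracket(xi_rot_z, xi_tr_x)
M = np.eye(6)  # stand-in for a 6x6 inertia matrix
print(b.T @ M @ xi_tr_x)  # (1x6)·(6x6)·(6x1) contracts to a scalar
```

So each summand in the equation is a scalar, consistent with the 1x1 observation at the end of the question.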
{ "domain": "robotics.stackexchange", "id": 1048, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dynamics, matlab", "url": null }
electromagnetism, potential, lienard-wiechert where $ \mathbf x(t_{ret}) $ describes the position of the charge and $ \mathbf v(t_{ret}) = \frac{d}{dt}\mathbf x(t)\bigg|_{t=t_{ret}} $ its velocity at the retarded time $ t_{ret}(\mathbf r,t) = t-\frac{|\mathbf r - \mathbf x(t_{ret})|}{c} $, respectively. In the case of uniform motion, we have $ \mathbf x(t) = (vt,0,0)^\intercal $. How do I get now from (2) to (1)? My idea is to actually calculate an explicit expression for the retarded time and plug it into (2), which should yield (1) if I understand it correctly. By asserting that $ c^2(t-t_{ret})^2 = (x-vt_{ret})^2+y^2+z^2 $, $ t_{ret} $ can be found in terms of solving the quadratic equation, leading to the solutions $ t_{ret}^\pm = \gamma\left(\gamma(t-\frac{vx}{c^2})\pm\sqrt{\gamma^2(t-\frac{vx}{c^2})^2-t^2+\frac{r^2}{c^2}}\right) = \gamma\left(\gamma t'\pm\sqrt{\gamma^2t'^2-\tau^2}\right)$ where $ t' $ is the Lorentz transformation of $ t $ and $ \tau = \frac{t}{\gamma} $ looks like some proper time. Plugging this into (2) looks nothing like (1), what am I missing? If you look at Feynman Volume II Section 21-6, he walks through this calculation. Your idea and initial assertion look good; the trick is to manage the algebra to get to the final form you want.
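Before plugging $t_{ret}^\pm$ back in, it is worth checking numerically that the quoted root actually satisfies the defining light-cone condition, and that the $-$ root is the causal one ($t_{ret} < t$). A sketch with $c = 1$ and arbitrary sample values:

```python
import math

def retarded_time(x, y, z, t, v, c=1.0):
    # t_ret^- from the quadratic for a charge moving as x(t) = v*t
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    u = t - v * x / c**2                      # so g*u is the Lorentz-transformed time
    r2 = x * x + y * y + z * z
    return g * (g * u - math.sqrt(g * g * u * u - t * t + r2 / c**2))

x, y, z, t, v = 0.0, 1.0, 0.0, 0.0, 0.5
tr = retarded_time(x, y, z, t, v)
# light-cone check: c^2 (t - t_ret)^2 == (x - v*t_ret)^2 + y^2 + z^2
print((t - tr) ** 2 - ((x - v * tr) ** 2 + y * y + z * z))  # ~0
```

This confirms the quoted expression is a root of the asserted quadratic, so the remaining work is indeed just the algebra of substituting it into (2).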
{ "domain": "physics.stackexchange", "id": 45291, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, potential, lienard-wiechert", "url": null }
beginner, php, chat

} else {
    //Creates and formats the new response
    $newresponse = '['.$command[0][0].', '.$command[0][1].'],';
    //Adds the new response to the responses file
    file_put_contents('responses.php', str_replace('$responses = [', "\$responses = [\n\t$newresponse", file_get_contents('responses.php')));
    $message = 'Added response '.$command[0][0].' -> '.$command[0][1];
    }
} else {
    $message = 'Invalid input';
}
} else if (strpos($text, '/delresponse') !== FALSE) {
    //Checks to see if the arguments are set correctly
    if (isset($command[1][0]) && $command[1][0] !== "" && !isset($command[1][1])) {
        //Checks to see if that response is already in the responses file first
        if (search_array($command[1][0], $responses)) {
            //loads current responses by reading the responses.php file
            $currentresponses = file('responses.php');
            //Reads each line of the responses file (now in array currentresponses)
            foreach ($currentresponses as $linenumber => $line) {
                //Checks to see if the line contains the response that is being deleted
                if (strpos($line, $command[0][0]) !== FALSE) {
                    //Once that response is found, loads the contents of the responses file into new variable newresponses
                    $newresponses = file_get_contents('responses.php');
                    //Deletes appropriate line
                    $newresponses = str_replace($line, '', $newresponses);
                    //Writes the modified version back to the responses.php file
                    file_put_contents('responses.php', $newresponses);
                    $message = 'Deleted response for '.$command[0][0];
{ "domain": "codereview.stackexchange", "id": 26976, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, php, chat", "url": null }
ros2 Title: ros2 nav2 costmap not working So I made my own robot, and I wanted to use nav2 to control it. I have copied the turtlebot3 files and changed them to work with my robot. Everything works; I can even give it commands via rviz. However, the costmap doesn't work. I have tried everything and yet can't seem to get it working. All I changed was the world used by gazebo, the robot itself, and the map; I also made small changes to the yaml file. This is the yaml file that I use:

amcl:
  ros__parameters:
    use_sim_time: True
    alpha1: 0.2
    alpha2: 0.2
    alpha3: 0.2
    alpha4: 0.2
    alpha5: 0.2
    base_frame_id: "base_footprint"
    beam_skip_distance: 0.5
    beam_skip_error_threshold: 0.9
    beam_skip_threshold: 0.3
    do_beamskip: false
    global_frame_id: "map"
    lambda_short: 0.1
    laser_likelihood_max_dist: 2.0
    laser_max_range: 100.0
    laser_min_range: -1.0
    laser_model_type: "likelihood_field"
    max_beams: 60
    max_particles: 2000
    min_particles: 500
    odom_frame_id: "odom"
    pf_err: 0.05
    pf_z: 0.99
    recovery_alpha_fast: 0.0
    recovery_alpha_slow: 0.0
    resample_interval: 1
    robot_model_type: "differential"
    save_pose_rate: 0.5
    sigma_hit: 0.2
    tf_broadcast: true
    transform_tolerance: 1.0
    update_min_a: 0.2
    update_min_d: 0.25
    z_hit: 0.5
    z_max: 0.05
    z_rand: 0.5
    z_short: 0.05

amcl_map_client:
  ros__parameters:
    use_sim_time: True

amcl_rclcpp_node:
  ros__parameters:
    use_sim_time: True
{ "domain": "robotics.stackexchange", "id": 35487, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2", "url": null }
# Math Insight

### The multidimensional differentiability theorem

The question of the differentiability of a multivariable function ends up being quite subtle. Not only is the definition of differentiability in multiple dimensions fairly complicated and difficult to understand, but it turns out that the condition for a function to be differentiable is stronger than one might initially think. Although we view the derivative as the matrix of partial derivatives, the existence of partial derivatives is not sufficient for a function to be differentiable. We can create examples of functions that are not differentiable at a point despite having partial derivatives there. Thankfully, most of the time, there is a way out of this mess. For most nice functions, we can completely sidestep the subtleties of differentiability and know right away that the function is differentiable. This escape from the intricacies of differentiability is due to a theorem that states that continuous partial derivatives are enough to ensure differentiability. Differentiability theorem: If, for a function $\vc{f}: \R^n \to \R^m$, all the entries of its matrix of partial derivatives exist and are continuous in a neighborhood of the point $\vc{x}=\vc{a}$, then $\vc{f}(\vc{x})$ is differentiable at $\vc{x}=\vc{a}$. That's all we were missing in these non-differentiable functions with partial derivatives: the partial derivatives weren't continuous. If we can show the partial derivatives are continuous, then we don't have to worry about any of the subtleties of differentiability. If you zoom in on the function, it is very close to being linear, i.e., it is differentiable. In the examples of calculating derivatives, the differentiability theorem is implicitly invoked to conclude that the functions are differentiable.

#### Tying up loose ends

The differentiability theorem finally puts some order into the chaotic world of multivariate differentiability.
For many practical cases, testing for differentiability is reduced to the relatively simple task of checking for continuity of partial derivatives.
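A standard textbook counterexample (not from this page) is $f(x,y)=xy/(x^2+y^2)$ with $f(0,0)=0$: both partial derivatives exist at the origin, yet $f$ is not even continuous there, so it cannot be differentiable. Consistent with the theorem, its partial derivatives fail to be continuous at the origin. A quick numeric sketch:

```python
def f(x, y):
    # Classic example: partials exist at the origin, but f is not differentiable there.
    return 0.0 if x == 0.0 and y == 0.0 else x * y / (x**2 + y**2)

# Partial derivatives at the origin exist (f vanishes identically on both axes):
h = 1e-6
fx0 = (f(h, 0.0) - f(0.0, 0.0)) / h   # 0.0
fy0 = (f(0.0, h) - f(0.0, 0.0)) / h   # 0.0

# But f is not continuous at the origin: along y = x the value stays at 1/2.
along_diagonal = [f(t, t) for t in (1e-1, 1e-4, 1e-8)]
print(fx0, fy0, along_diagonal)  # 0.0 0.0 [0.5, 0.5, 0.5]
```

Since the limit along the diagonal is $1/2$ while $f(0,0)=0$, the function is discontinuous and hence not differentiable at the origin, even though both partials are $0$ there.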
{ "domain": "mathinsight.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.992988205448513, "lm_q1q2_score": 0.816424102766701, "lm_q2_score": 0.8221891239865619, "openwebmath_perplexity": 188.4591528052775, "openwebmath_score": 0.6865671277046204, "tags": null, "url": "https://mathinsight.org/differentiability_multivariable_theorem" }
newtonian-mechanics, rotational-dynamics, torque I'm wondering: if friction force is strong enough to counteract the component of gravity force parallel to the plane, will the ball even start to roll/slide? The ball is either moving or the imperfections in the ball and plane are keeping the static condition outlined above. In the latter case, the normal force is stopping the ball, not the friction force. Imagine a bicycle with one wheel on a flat stair and another on a higher or lower flat stair - will the bicycle accelerate? Well, since the net force acting on the ball is zero, I think the ball would not roll nor slide down. It won't be totally still, though. There is the torque from the friction force, which would make the ball skid in its place. Does that make any sense? It sounds like you want the ball to start spinning without translating. No, that doesn't make any sense. To me it is the same case of placing a spool on an inclined plane while holding its string. Holding the string gives you a no slip condition on one side and either static (no movement) or kinetic (it translates and rotates) friction on the other. As the gravity force tries to push the spool down the plane, I pull the string to counteract the gravity force. The spool stays in its place, just rotating (in this case, I mimic the friction force by pulling the spool string upwards). If you pull on the string, you're doing something more complicated and unsustainable, as your arm isn't as long as the string. Even if my analysis is correct, I think no practical surface would have a friction coefficient high enough to keep a ball from rolling and/or sliding. Your analysis is not correct. Velcro and glue do a nice job of arresting movement. To solve problems like this, check to see if the ball rolls by using the static friction inequality and then use either a no-slip condition or a kinetic friction force that appears in both the sum of the forces and the sum of the torques.
One might also choose to use the energy equation for the no-slip case.
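To make the "check static friction first" recipe concrete, here is a hedged sketch for the textbook case of a uniform solid sphere on an incline (standard result, not from this thread): pure rolling needs $f = \tfrac{2}{7}mg\sin\theta$, so the surface must supply $\mu_s \ge \tfrac{2}{7}\tan\theta$:

```python
import math

def sphere_on_incline(theta_deg, mu_s, g=9.81):
    """Decide rolling vs. slipping for a uniform solid sphere (I = 2/5 m r^2)."""
    theta = math.radians(theta_deg)
    f_needed_over_mg = (2.0 / 7.0) * math.sin(theta)   # friction required for pure rolling
    f_max_over_mg = mu_s * math.cos(theta)             # static friction ceiling
    if f_needed_over_mg <= f_max_over_mg:
        a = (5.0 / 7.0) * g * math.sin(theta)          # rolls without slipping
        return "rolls", a
    a = g * (math.sin(theta) - mu_s * math.cos(theta)) # slides while spinning up
    return "slips", a

print(sphere_on_incline(30.0, 0.3))   # rolls: needed 0.14 mg < available 0.26 mg
print(sphere_on_incline(80.0, 0.3))   # slips: needed 0.28 mg > available 0.05 mg
```

This mirrors the answer's procedure: test the static friction inequality first, then apply either the no-slip condition or kinetic friction in the force and torque equations.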
{ "domain": "physics.stackexchange", "id": 16262, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, rotational-dynamics, torque", "url": null }
chirality, symmetry, group-theory Part 2 Are there also examples of the vice-versa, where a molecule has a plane of symmetry/center of inversion, but lacks an axis of improper rotation and is thus chiral? An improper rotation $S_n$ is defined by a rotation through $360/n$ degrees, followed by reflection in a plane that is perpendicular to that rotation axis. A plane of symmetry ($S_1$) and an inversion centre ($S_2$) are special cases of an improper rotation ($S_n$). It is easier to convince yourself of the $S_1$ case: according to the definition above, $S_1$ means rotation through $360^\circ$ followed by reflection in a plane. Since rotation through $360^\circ$ obviously does nothing, this is the same as a reflection in a plane. So, the answer to this question is no: if a compound has either a plane of symmetry or an inversion centre, that automatically means it has an improper rotation axis.
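Assuming the standard matrix representation of these operations (rotation about $z$, reflection in the $xy$ plane), a quick numeric check confirms both special cases:

```python
import numpy as np

def rot_z(angle_deg):
    # Proper rotation about the z axis.
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

sigma_xy = np.diag([1.0, 1.0, -1.0])   # reflection in the plane perpendicular to z

S1 = sigma_xy @ rot_z(360.0)           # rotate by 360/1, then reflect
S2 = sigma_xy @ rot_z(180.0)           # rotate by 360/2, then reflect

print(np.allclose(S1, sigma_xy))       # True: S1 is exactly a mirror plane
print(np.allclose(S2, -np.eye(3)))     # True: S2 is exactly the inversion centre
```

So $S_1 = \sigma$ and $S_2 = i$ drop out of the matrix algebra directly, matching the argument above.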
{ "domain": "chemistry.stackexchange", "id": 14667, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "chirality, symmetry, group-theory", "url": null }
cc.complexity-theory, complexity-classes, time-complexity, lower-bounds, polynomial-time Title: Are there more polynomial time problems with complexity lower bounds? I'm looking for more problems in $P$ with classical time complexity lower bounds. Some people might wonder how you could prove such a lower bound. See below. Exponential Lower Bounds: Claim: If you have a problem $X$ that is $EXPTIME$-complete under polynomial reductions, then there is a constant $\alpha \in \mathbb{R}$ such that $X$ is not solvable in $O(2^{n^{\alpha}})$ time. Proof Idea: By the time hierarchy theorem, there is a problem $Y$ in $O(2^n)$ time that is not in $o(\frac{2^n}{n})$ time. Further, there must be a polynomial reduction from $Y$ to $X$. Therefore, there is a constant $c$ such that this reduction takes an instance of size $n$ for $Y$ to an instance of size $n^c$ for $X$. The lower bound for $Y$ of $O(2^{n^{1-\epsilon}})$ time shifts to a lower bound for $X$ of $O(2^{n^{\frac{1-\epsilon}{c}}})$ time. Polynomial Lower Bounds: Some $EXPTIME$-complete problems have nice parameterizations into polynomial time problems. Consider the problem $X$ from before. Suppose we have a parameterization $k$-$X$ for $X$ such that: For each fixed $k$, $k$-$X$ is in polynomial time.
{ "domain": "cstheory.stackexchange", "id": 3443, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, complexity-classes, time-complexity, lower-bounds, polynomial-time", "url": null }
java, beginner

try {
    includedPolished = included.toString().substring(0, included.length() - 2);
} catch (StringIndexOutOfBoundsException e) {
    includedPolished = null;
}
includedExcluded.append(includedPolished).append(excluded.toString().substring(0, excluded.length() - 2));
return includedExcluded.toString();
}
}

There's a lot of repeated code here, so I've gone for an enum and loop approach to try to reduce this.

public enum Genre {
    action(new String[]{"28", "10759"}),
    adventure(new String[]{"12", "10759"}),
    animated(new String[]{"16"});
    //..

    private final String[] ids;

    Genre(String[] id) {
        this.ids = id;
    }

    public String[] getId() {
        return ids;
    }
}

The enum above contains a list of available IDs which will trigger the genre being added to the included list in processGenre. Other than that, it's just a simple enum.

public String processGenre(String GenId){
    StringBuilder included = new StringBuilder();
    StringBuilder excluded = new StringBuilder();
    StringBuilder includedExcluded = new StringBuilder();
    String includedPolished;
    excluded.append("\" data-excluded=\"");
{ "domain": "codereview.stackexchange", "id": 30261, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner", "url": null }
c++, algorithm, game-of-life, sdl gameWindowSurface = SDL_GetWindowSurface(gameWindow); SDL_GetWindowSize(gameWindow, &gameWindowWidth, &gameWindowHeight); gridWidth = 128; gridHeight = 72; gameMenu = new Menu(gameWindowWidth, gameWindowHeight, gridWidth, gridHeight, Sans); CreateGrid(); updatePrevTick = 0; fpsPrevTick = 0; fpsCurrent = 0; fpsFrames = 0; } GameOfLife::~GameOfLife() { isRunning = false; gameWindow = NULL; gameRenderer = NULL; gameWindowSurface = NULL; SDL_DestroyWindow(gameWindow); SDL_DestroyRenderer(gameRenderer); SDL_FreeSurface(gameWindowSurface); DeleteGrid(); TTF_CloseFont(Sans); } int GameOfLife::Play() { SDL_SetRenderDrawColor(gameRenderer, 255, 255, 255, 255); SDL_RenderClear(gameRenderer);
{ "domain": "codereview.stackexchange", "id": 27802, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, algorithm, game-of-life, sdl", "url": null }
$w(k)=\begin{cases} \frac{2(k-1)}{n-1},&\text{$1\leq k\leq \frac{n}{2}$}\\ \frac{2(n-k)}{n-1},&\text{$\frac{n}{2} \leq k \leq n$} \end{cases}$

import numpy as np
import matplotlib.pyplot as plt

N = 42
win = np.bartlett(N)
print(win)
plt.plot(win)

[0. 0.04878049 0.09756098 0.14634146 0.19512195 0.24390244 0.29268293 0.34146341 0.3902439 0.43902439 0.48780488 0.53658537 0.58536585 0.63414634 0.68292683 0.73170732 0.7804878 0.82926829 0.87804878 0.92682927 0.97560976 0.97560976 0.92682927 0.87804878 0.82926829 0.7804878 0.73170732 0.68292683 0.63414634 0.58536585 0.53658537 0.48780488 0.43902439 0.3902439 0.34146341 0.29268293 0.24390244 0.19512195 0.14634146 0.09756098 0.04878049 0. ]

## Blackman window

The time domain expression is as follows:
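As a sanity check, the piecewise formula can be compared directly against np.bartlett (indexing $k = 1, \dots, n$):

```python
import numpy as np

n = 42
# Evaluate the piecewise triangular formula for k = 1..n.
manual = np.array([2 * (k - 1) / (n - 1) if k <= n / 2 else 2 * (n - k) / (n - 1)
                   for k in range(1, n + 1)])
print(np.allclose(manual, np.bartlett(n)))  # True
```

The two branches agree at the midpoint, so the formula reproduces NumPy's Bartlett window exactly.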
{ "domain": "fatalerrors.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9711290930537121, "lm_q1q2_score": 0.8154900922744748, "lm_q2_score": 0.8397339736884712, "openwebmath_perplexity": 3388.48679894393, "openwebmath_score": 0.48512622714042664, "tags": null, "url": "https://www.fatalerrors.org/a/numpy-learning-notes-6-common-functions-5.html" }
can find missing values in the relationship between heat and temperature: heat added or removed, specific heat, mass, initial temperature and final temperature (Timur, 2017-07-09 04:45:21). (force) × (distance) = (pressure) × (volume) The surface energy of the gas bubble is due to the difference between the bubble filled with gas and the bubble filled with liquid. It does this by the use of a counterweight that falls. In older works, power is sometimes called activity. Bullet Kinetic Energy Calculator. (a) Calculate the change in the internal energy of the block-belt system as the block comes to a stop on the belt. We are confronted every day by the notions of Energy and Power: Cars and motors are sold by Horsepower, lightbulbs by Watts, natural gas by Therms, electricity by Kilowatt-Hours, and air conditioners by Tons or BTUs per hour. 9 × 3000 = 27000. 098931 u * 931. Savings are for wattages shown. Energy against Gravity Calculator getcalc. Here are some practice questions that you can try. Your actual savings will vary based on the wattages you purchase. The bungee cord has more potential energy when it is stretched out than when it is slack. Calculate the kinetic energy of a body of mass 0. Your answer should be stated in joules, or J. The reactor physics does not need this fine division of neutron energies. A photon is characterized by either a wavelength, denoted by λ, or equivalently an energy, denoted by E. Conservation of Energy 7 2. Physics Calculators Mechanics Statics and Dynamics -- Statics (stress analysis, beam analysis, resultant of vectors), Dynamics (motion on curved paths, projectile motion), and Fluid Dynamics (ideal flow, shock tube flows, airfoil flows, compressible aerodynamics, thermodynamics of air, boundary layer analysis, heat transfer). (Yes, even Passive House projects.) If you heat a balloon (carefully), the molecules of air in the balloon gain energy and strike the inner walls of the balloon with greater force.
Chapter 4, Chemical Energy: a mole of oxygen atoms would have a mass of 16 grams. This is hard to calculate directly, but for an ideal gas, for example, there is
{ "domain": "namastegroup.it", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9621075733703927, "lm_q1q2_score": 0.847421546851246, "lm_q2_score": 0.880797085800514, "openwebmath_perplexity": 864.7356758736282, "openwebmath_score": 0.5681233406066895, "tags": null, "url": "http://koyk.namastegroup.it/energy-calculator-physics.html" }
attribute-grammars Title: Visit sequence evaluator in attribute grammar I am failing to understand the "visit sequence evaluator in attribute grammar". Is it an order on instantiated tree nodes where on each visit to a node we evaluate a particular node, and we may end up visiting a node more than once? Is that correct? There are two papers that describe the visit sequence evaluator: Kastens' original paper Joost Engelfriet & Gilberto Filè describe a type of attribute grammar where we evaluate the attributes using only one visit to a tree node You don't evaluate nodes in an attribute grammar. You evaluate attributes. At a given node there may be zero or more attributes to evaluate, and they don't necessarily get evaluated on the same visit to the node. So the evaluator may indeed need to visit nodes multiple times. This theoretical structure corresponds to compilers which do multiple passes over the parse tree. But it could also apply to an attribute evaluation sequence in which nodes are visited twice: once before their children (in order to compute inherited attributes) and again after their children (in order to compute synthesized attributes). Or multiple times, if some attribute needs to be synthesized from one child before being inherited by a different one. (Whether you count that as multiple visits or not is somewhat dependent on what you think a "visit" is. But it certainly affects how the evaluation code is ordered in the visitor function.) There is a whole chapter devoted to syntax-directed translation in the dragon book, for what it's worth.
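As a minimal illustration of the multi-visit idea (my own sketch, not taken from either paper), inherited attributes can be computed on the way down and synthesized attributes on the way back up, so each node is effectively visited twice:

```python
class Node:
    def __init__(self, *children):
        self.children = list(children)
        self.depth = None    # inherited attribute (flows from parent to child)
        self.size = None     # synthesized attribute (flows from children to parent)

def evaluate(node, depth=0):
    # First visit: compute the inherited attribute before descending.
    node.depth = depth
    for child in node.children:
        evaluate(child, depth + 1)
    # Second visit: compute the synthesized attribute after the children.
    node.size = 1 + sum(child.size for child in node.children)

tree = Node(Node(), Node(Node()))
evaluate(tree)
print(tree.size, tree.children[1].children[0].depth)  # 4 2
```

A single recursive traversal realizes both "visits" to each node; more complicated dependency patterns (synthesize from one child before inheriting into another) would force additional visits, as the answer notes.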
{ "domain": "cs.stackexchange", "id": 18924, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "attribute-grammars", "url": null }
Notice the algorithm I've used is somewhat different (and easier to implement on the 17Bii) than the one in the Wikipedia article. Nonetheless, I get exactly the same results. Gerson. Edited to fix program line #17 Edited again to fix algorithm (inclusion of an IF-clause) Code: Program CPF; Uses Crt; var d1,d2, i: Byte;     s, t: Integer;        n: Longint; begin   ClrScr;   Read(n);   s:=0;   t:=0;   for i:=1 to 9 do     begin       s:=s+(10-i)*(n Mod 10);       t:=t+i*(n Mod 10);       n:=n div 10     end;   if (s Mod 11)=10 then     t:=t+9;   d1:=(s Mod 11) Mod 10;   d2:=(t Mod 11) Mod 10;   GotoXY(10,1);   WriteLn('-',d1:1,d2:1) end. ------------------------- Run 123456789  123456789-09 Type EXIT to return... As I said, this algorithm is more simple than the official one [ function TestaCPF(strCPF) ]. I have yet to discover why this works. 07-26-2015, 11:42 PM (This post was last modified: 07-27-2015 12:00 AM by Don Shepherd.) Post: #11 Don Shepherd Senior Member Posts: 745 Joined: Dec 2013 RE: checkdigit calculation for HP-17b Thanks Gerson. So everybody who pays taxes in Brazil has a number. Same here in the US, ours is just called our Social Security number, and we all must go to great lengths to make sure nobody knows our SS number except people who need to know it (like banks and employers). If the bad guys know your number, I'm told your life will become miserable. I have to shred some personal financial documents occasionally so my number doesn't become known by the bad guys.
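For readers without a Pascal compiler, here is a hedged line-for-line Python transcription of Gerson's simplified routine; the example input 123456789 from the post yields check digits 0 and 9, matching the program's "-09" output:

```python
def cpf_check_digits(n: int) -> tuple:
    """Gerson's simplified CPF check-digit algorithm, transcribed from the Pascal."""
    s = t = 0
    for i in range(1, 10):          # process the 9 digits, least significant first
        d = n % 10
        s += (10 - i) * d
        t += i * d
        n //= 10
    if s % 11 == 10:                # the IF-clause added in the edit
        t += 9
    d1 = (s % 11) % 10
    d2 = (t % 11) % 10
    return d1, d2

print(cpf_check_digits(123456789))  # (0, 9) -> "123456789-09"
```

This is only a transcription of the posted program, not the official algorithm; it reproduces the "-09" result shown in the run above.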
{ "domain": "hpmuseum.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9802808733328717, "lm_q1q2_score": 0.8370743373913717, "lm_q2_score": 0.8539127510928476, "openwebmath_perplexity": 1543.6137833049002, "openwebmath_score": 0.38739073276519775, "tags": null, "url": "https://www.hpmuseum.org/forum/thread-4432.html" }
python, python-3.x, generator, properties ... class Circle: ... @coroproperty def diameter(self): self.radius = (yield self.radius * 2) / 2 ... Side-effects Wasted time and computation. Setting the diameter, circumference and/or area first involves computing those quantities, which is unnecessary. You could circumvent that by accepting a new value before returning the value. @coroproperty def diameter(self): diameter = yield None if diameter is not None: self.radius = diameter / 2 else: yield self.radius * 2 The getter would need to call next(coro) twice, ignoring the first returned value: None. If you desired a property which could store None, you'd need to create and use your own sentinel value. Assignment to diameter, even if not changing the value, will change the radius. In this example, it is changed from an int to a float: >>> c.radius = 100 >>> c.radius 100 >>> c.diameter = c.diameter >>> c.radius 100.0 Add in sqrt, and multiplication & division by π, and accuracy can be lost: >>> c.radius = 1_000_000_000_000 >>> c.area = c.area >>> c.radius 999999999999.9999
{ "domain": "codereview.stackexchange", "id": 40278, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, generator, properties", "url": null }
c, sieve-of-eratosthenes free(primes); return 0; } Edit: Even less memory usage Pete Kirkham suggested using even less memory by using 2 out of every 6 numbers. In other words, using 1 bit per every 3 numbers instead of 1 bit per every 2 numbers. At first I was skeptical because this required using a division in the inner loop. However, after coding it up, it turned out to be faster. The code is quite a bit trickier however, because the inner loops need to avoid any multiples of 3, because all multiples of 3 are no longer in the primes array: #include <stdio.h> #include <stdint.h> #include <stdlib.h> #include <string.h> #include <math.h> #define N 1000000000 int main(void) { int arraySize = (N/24 + 1); uint32_t *primes = malloc(arraySize); // The bits in primes follow this pattern: // // Bit 0 = 5, bit 1 = 7, bit 2 = 11, bit 3 = 13, bit 4 = 17, etc. // // For even bits, bit n represents 5 + 6*n // For odd bits, bit n represents 1 + 6*n memset(primes , 0xff, arraySize); int sqrt_N = sqrt(N); for(int i = 5; i <= sqrt_N; i += 4) { int iBitNumber = i / 3 - 1; int iIndex = iBitNumber >> 5; int iBit = 1 << (iBitNumber & 31); if ((primes[iIndex] & iBit) != 0) { int increment = i+i; for (int j = i * i; j < N; j += increment) { int jBitNumber = j / 3 - 1; int jIndex = jBitNumber >> 5; int jBit = 1 << (jBitNumber & 31); primes[jIndex] &= ~jBit; j += increment; if (j >= N) break;
{ "domain": "codereview.stackexchange", "id": 17174, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, sieve-of-eratosthenes", "url": null }
ros, tum-simulator, tum-ardrone Originally posted by green96 with karma: 115 on 2014-11-25 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by slowmed on 2014-11-26: actually the command line doesn't work , i tried all the commands listed in the section 3.3 they don't work , and what i want really is to move the drone in the simulator using the python programs in the autonavx_ardrone file
{ "domain": "robotics.stackexchange", "id": 20126, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, tum-simulator, tum-ardrone", "url": null }
terminology, programming-languages (usually in a declaration). But a value can also be obtained by applying operations to other named values. Names can be reused, and there are rules (scoping rules) to determine what is associated with a given identifier, according to context of use. There are also special names, called literals, to name the values of some domains, such as integers (e.g. $612$) or booleans (e.g. true). The association of an unchanging value with an identifier is usually called a constant. Literals are constants in that sense. "Value containers" can also be considered as values, and their association with an identifier is a variable in the usual "naive" sense that you have been using. So you might say that a variable is a "container constant". Now you might wonder what is the difference between associating an identifier with a value (constant declaration) or assigning a value to a variable, i.e. storing the value in the container defined as a container constant. Essentially, declaration may be seen as an operation that defines a notation, one that associates an identifier (a syntactic entity) with some value (a semantic entity). Assignment is a purely semantic operation that modifies a state, i.e. modifies the value of a container. In some sense, declaration is a meta concept with no semantic effect, other than providing a naming (i.e. syntactic) mechanism for semantic entities. Actually, assignments are semantic operations that occur dynamically as the program is executed, while declarations have a more syntactic nature and are usually to be interpreted in the text of the program, independently of execution. This is why static scoping (i.e. textual scoping) is usually the natural way to understand the meaning of identifiers. After all this, I can say that a pointer value is just another name for a container, and a pointer variable is a container variable, i.e.
a container (constant) that can contain another container (with possible limitations on the containing game imposed by some type system). Regarding code, you state [pointers] might indicate the entry point to a section of code and can be used to call that code. Actually this is not quite true. A section of code is often meaningless alone
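The "container constant that contains another container" idea can be sketched even in a language without explicit pointers by modeling each container as a one-cell list (my own illustration, not from the original answer):

```python
# A "variable" modeled as a container: a one-cell list holding a value.
x = [42]          # container named x, currently containing 42

# A "pointer variable": a container whose contained value is another container.
p = [x]           # p contains the container x itself, not the value 42

p[0][0] = 99      # assignment through the "pointer" mutates the shared container
print(x[0])       # 99: x sees the change, because p[0] and x are the same container
```

The point of the sketch is that p and x are both containers; p just happens to contain a container, which is exactly the "container variable" reading of a pointer variable above.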
{ "domain": "cs.stackexchange", "id": 21460, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "terminology, programming-languages", "url": null }
javascript, array

// add your tests here
});

It is inefficient and overengineered because the code:

- loops through the transactions several times when once would be enough, both by the multiple filters and the two separate loops.
- uses more memory than needed, partly due to the above and partly due to the creation of the array "filteredTransactions".
- calls moment() up to twice to convert the timestamp, when once would suffice.

It is less maintainable / understandable by others in the team because some may not commonly use map/filter and most may not commonly use reduce, which is particularly complex. (See also any article on KISS.)

You are overthinking this with filter and reduce and consuming more memory and cpu than needed in the process. If I got this code I would think that the interviewee was trying to impress me that they know about filter and reduce, but the code ends up way longer and more complex than needed. A simpler version would be

let sum = 0
transactions.forEach(t => {
    if (t.category == category) {
        let d = moment(t.transactionDate)
        if (d.isSameOrAfter(start) && d.isBefore(end)) {
            sum += t.amount
        }
    }
})
return sum

This code is half the lines and uses almost no intermediate variables. It does not require the next guy to understand filter, map or reduce. It does not look at any transaction more than once. It does not create 3 intermediate lists of transactions / numbers. See also this.
{ "domain": "codereview.stackexchange", "id": 39159, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, array", "url": null }
vba, excel, hash-map Title: Aggregating hourly location data into daily sublocation data for many columns I start with an hourly table that has around 40 items (example: bread, barley, bagels, beef, chicken). The purpose of my code is to aggregate this hourly table's numbers to daily numbers but broken out by a sublocation or "type". My only way to allocate to type is to use a table that shows the % breakdown of type to location. This table, however, is at the Monthly day/night (Timeframe) granularity. I solved the problem with a dictionary for each item, but extending this out makes me believe I am doing this inefficiently. I could use some opinions on how to better organize my code to handle dozens of items. A forewarning that I have never used collections or classes out of ignorance, but I am open to anything. *note: I converted these tables into markdown format using this site

Hourly Table Example (~700,000 rows, ~40 columns to aggregate) <Pasted into B5>

| Day | Location | Hour | Timeframe | bread | barley | bagels | beef | chicken |
|----------|---|----|-------|----|----|----|----|-----|
| 4/1/2021 | A | 0 | night | 51 | 91 | 12 | 26 | 176 |
| 4/1/2021 | A | 1 | night | 51 | 24 | 4 | 43 | 17 |
| 4/1/2021 | A | 8 | day | 25 | 84 | 5 | 72 | 125 |
| 4/1/2021 | A | 14 | day | 32 | 10 | 7 | 7 | 166 |
| 4/2/2021 | A | 0 | night | 31 | 29 | 11 | 49 | 5 |
| 4/2/2021 | A | 1 | night | 25 | 25 | 3 | 40 | 175 |
| 4/2/2021 | A | 8 | day | 70 | 81 | 6 | 69 | 89 |
| 4/2/2021 | A | 14 | day | 83 | 45 | 2 | 9 | 141 |
| 4/1/2021 | B | 0 | night | 55 | 37 | 8 | 59 | 164 |
| 4/1/2021 | B | 1 | night | 53 | 88 | 12 | 50 | 74 |
| 4/1/2021 | B | 8 | day | 20 | 73 | 1 | 33 | 200 |
| 4/1/2021 | B | 14 | day | 6 | 33 | 7 | 2 | 191 |
| 4/2/2021 | B | 0 | night | 39 | 52 | 4 | 22 | 99 |
| 4/2/2021 | B | 1 | night | 19 | 80 | 6 | 55 | 0 |
| 4/2/2021 | B | 8 | day | 44 | 49 | 10 | 42 | 8 |
| 4/2/2021 | B | 14 | day | 72 | 11 | 3 | 54 | 44 |
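One way to avoid one dictionary per item is a single dictionary keyed by the grouping tuple, holding a list of running sums for all item columns at once. A hedged Python sketch of the idea (names and layout are illustrative; a VBA version would use a single Scripting.Dictionary with a composite key the same way):

```python
from collections import defaultdict

# Each hourly row: (day, location, hour, timeframe, per-item amounts).
rows = [
    ("4/1/2021", "A", 0, "night", [51, 91, 12, 26, 176]),
    ("4/1/2021", "A", 8, "day",   [25, 84,  5, 72, 125]),
    ("4/1/2021", "A", 1, "night", [51, 24,  4, 43,  17]),
]
n_items = 5

# One dictionary for all items: grouping key -> list of per-item running totals.
daily = defaultdict(lambda: [0] * n_items)
for day, loc, hour, timeframe, amounts in rows:
    key = (day, loc, timeframe)
    for i, a in enumerate(amounts):
        daily[key][i] += a

print(daily[("4/1/2021", "A", "night")])  # [102, 115, 16, 69, 193]
```

With the item totals stored as one array per key, adding a 41st item column changes only n_items, not the number of dictionaries.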
{ "domain": "codereview.stackexchange", "id": 40990, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel, hash-map", "url": null }
special-relativity, coordinate-systems, inertial-frames Title: Some nitpicks on Lorentz invariance I'm trying to understand the exact mathematics of Lorentz invariance and I have a question. I hope this is a good place to ask. To prove that $J^\mu dt=\rho dx^\mu$ defines a $4$-vector, my book says that for a small charged particle occupying the volume $dV$ with charge density $\rho$ in a certain reference frame $R$, the quantity $dq=\rho dV$ is evidently a Lorentz scalar, hence, with obvious notations, in a certain reference frame $R'$ we have $\rho dV = \rho'dV'$. I'd like to make sure I understand what this means, formally. Let's forget this particular setting and suppose $\rho$ is just the density of charge in the entire universe, as in the Maxwell equations for instance. Now let $M$ be the Minkowski space-time and $G$ the Lorentz group, acting on $M$. Formally one can think of $\rho$ as a function $M \to \mathbb{R}$. Am I right in saying that what is meant is generally that $g \in G$ acts on such functions $f : M \to \mathbb{R}$ by $(gf)(x)=f(g^{-1}x)$, or, in this particular case, that $\rho'$ is gotten from $\rho$ by precomposing with the inverse of the Lorentz transform going from the basis corresponding to $R$ to that corresponding to $R'$?
{ "domain": "physics.stackexchange", "id": 67652, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, coordinate-systems, inertial-frames", "url": null }
to calculate area, perimeter or volume of a figure. Find the area of the face we are looking directly at. A (rectangular) cuboid is a closed box which comprises 3 pairs of rectangular faces that are parallel to each other and joined at right angles. The surface area of a cuboid is the total of all the areas of its faces added together. Then, total surface area of cuboid = 2(l×b + b×h + h×l) square units. The injury is in the area of a small tarsal bone in the foot, the cuboid bone. Go to Surface Area or Volume. The cuboid bone is within the area of the mid-foot. Total surface area of the cuboid is 1216 sq cm. The volume is found using the formula volume = length × width × height, which is usually shortened to V = l × w × h. It doesn't really matter which one is length, width or height, so long as you multiply all three together. In this video, we will solve the first two problem statements from Class 9 NCERT, Exercise 13. Sol: Total area that can be painted = 9. Irregular prism volume calculator: a prism has the same cross-section all along its length. Write a Python program to calculate the surface area and volume of a cylinder. 8 cubic cm, and the lengths of two edges are 2 cm and 6 cm. Some of the worksheets for this concept are: Common 2-D and 3-D shapes, Geometric nets pack, Volume and surface area work, Surface area of solids, Surface area, Surface areas of prisms, Surface area of 3-D shapes, Module mathematical reasoning. 6 × 3² = 54. So, total surface area of cuboid = 2(lb + bh + hl) = 2(5x × 4x + 4x × 2x + 5x × 2x) = 2(20x² + 8x² + 10x²) = 2(38x²) = 76x². Total surface area of cuboid = 1216 cm², so 76x² = 1216 cm², x² = 16, x = 4. The dimensions of the cuboid are 20 cm, 16 cm and 8 cm. Students learn how to find the surface area of cubes and cuboids using nets and 3D drawings. Submitted by IncludeHelp, on May 08, 2020. Answers included. It is also known as a right rectangular prism. A patient with cuboid
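The worked example in the passage (edges in ratio 5 : 4 : 2, total surface area 1216 cm²) can be checked in a few lines; this is a sketch of the arithmetic, not part of the original worksheet:

```python
import math

# Cuboid with edges in ratio 5 : 4 : 2 and total surface area 1216 cm^2.
ratio = (5, 4, 2)
tsa = 1216

# TSA = 2(lb + bh + hl) = 2(5x*4x + 4x*2x + 2x*5x) = 76 x^2
coeff = 2 * (ratio[0] * ratio[1] + ratio[1] * ratio[2] + ratio[2] * ratio[0])
x = math.sqrt(tsa / coeff)           # 76 x^2 = 1216  =>  x = 4
dims = tuple(r * x for r in ratio)   # (20, 16, 8) cm
print(x, dims)
```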
{ "domain": "caffeparty.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.980280871316566, "lm_q1q2_score": 0.8125928572894333, "lm_q2_score": 0.8289388083214156, "openwebmath_perplexity": 876.1001154440027, "openwebmath_score": 0.6718249917030334, "tags": null, "url": "http://caffeparty.it/qhoc/area-of-cuboid.html" }
perturbation-theory, resonance &= \frac{|\langle m |V_0| n \rangle|^2}{4\hbar^2} T^2 \end{align} So it suggests that the transition probability is larger when the system is exposed to the perturbation for a longer time. However, the probability should never exceed one. So where did the perturbation theory fail? Or did I make a mistake somewhere? There are two situations in which your approximate calculation is going to fail. One situation is at long times. You made use of the approximation that the argument of the sine function was small, so that you could approximate $$\sin\left(\frac{E_{n}-E_{m}-\hbar\omega}{\hbar}T\right)\approx\frac{E_{n}-E_{m}-\hbar\omega}{\hbar}T.$$ However, it should be clear that, no matter how close to resonance (that is, how close to $E_{n}-E_{m}=\hbar\omega$) you get, there will always be times $T$ large enough that the argument of the sine is no longer small. At times for which $$T\gtrsim|\omega_{mn}-\omega|^{-1}=|\delta|^{-1}$$ [where $\omega_{mn}\equiv(E_{n}-E_{m})/\hbar$, and $\delta\equiv\omega-\omega_{mn}$ is called the "detuning"], you need to evaluate the trigonometric function fully. Since $|\sin^{2}x|\leq 1$, the transition probability is never going to exceed $$P_{m\rightarrow n}\leq\left|\frac{\langle m |V_0| n \rangle}{E_n - E_m - \hbar \omega} \right|^{2}.$$
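A quick numerical sketch of the point, in the conventions of the text ($\hbar = 1$, with an assumed matrix element $V$ and detuning $\delta$): the full first-order expression $P(T) = V^2 \sin^2(\delta T/2)/\delta^2$ stays below $(V/\delta)^2$, while the short-time approximation $V^2 T^2/4$ grows without bound:

```python
import numpy as np

# First-order transition probability near resonance (hbar = 1, V = |<m|V0|n>|):
#   exact:      P(T) = V^2 sin^2(delta T / 2) / delta^2
#   short time: P(T) ~ V^2 T^2 / 4
V, delta = 0.1, 0.05
T = np.linspace(0.0, 400.0, 2000)

P_full = V**2 * np.sin(delta * T / 2.0)**2 / delta**2
P_short = V**2 * T**2 / 4.0

print(P_full.max())    # bounded by (V/delta)^2 = 4
print(P_short[-1])     # unbounded growth: 400 at T = 400
```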
{ "domain": "physics.stackexchange", "id": 83412, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "perturbation-theory, resonance", "url": null }
The output file should have each individual measurement on a separate line in a single file. The distance between two points in either the plane or 3-dimensional space measures the length of a segment connecting the two points, as shown in the figure below. In the two-dimensional Euclidean plane or in three-dimensional Euclidean space, the Euclidean distance coincides with the intuitive notion of the distance between two points; a subset of $R^3$ is open provided that each point of it has an ε-neighborhood that is entirely contained in it. In R, the distance matrix between all pairs of rows of a matrix is calculated with the dist function, for which a dist object is needed; the distances function of the proxy package calculates distances between observations in one matrix and returns a distance matrix; and rdist computes the Euclidean distance as one of three main functions. The wrapper function LPDistance simplifies this process. Time series may be zoo or xts objects (see TSDistances, which calculates distance measures, and TSDatabaseDistances); the two series must have the same length. Unlike the Euclidean distance, the Mahalanobis distance uses a covariance matrix; because of that, it works well when two or more variables are highly correlated, which is relevant when it comes to modeling. One can also map the data to a projection that preserves distances and then calculate the distances between points, using the Pythagorean distance. Example 1: distances from one single point to a set of points; we estimate the (shortest) distance from every cell to the nearest source, and the Euclidean distance output raster contains the measured distance from every cell to the nearest source. I am very new to R, so any help would be appreciated.
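For illustration (in Python rather than R, since the surrounding text is fragmentary), the pairwise Euclidean distance matrix that R's dist() returns can be computed by broadcasting:

```python
import numpy as np

# Three points in the plane; pairwise Euclidean distances, as R's dist()
# would compute them for the rows of a matrix.
X = np.array([[0.0, 0.0],
              [3.0, 4.0],
              [6.0, 8.0]])

# Broadcasting: D[i, j] = ||X[i] - X[j]||
diff = X[:, None, :] - X[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))
print(D)
```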
{ "domain": "kalpehli.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9553191284552528, "lm_q1q2_score": 0.833370136858242, "lm_q2_score": 0.8723473779969194, "openwebmath_perplexity": 859.4582796240163, "openwebmath_score": 0.7534364461898804, "tags": null, "url": "http://kalpehli.com/srhuil/0fv6vw9.php?id=61fab9-euclidean-distance-in-r" }
deep-learning, neural-network, image-segmentation Title: what is the largest network used for image recognition/segmentation? What is the largest network (in number of params and layers) considered in the literature for image recognition/segmentation tasks? I am in particular interested in ResNet architectures. Any recommendation for literature is appreciated. For natural language processing, the largest models are on the order of billions of parameters, such as Megatron-LM, or DeepSpeed with ZeRO. Is this also the case for image classification? Unet was published around 2015 and is still regarded as state of the art. Since then, different versions of Unet have appeared. Resnet and Squeezenet blocks can be used instead of normal convolutional blocks in Unet. In addition to that, DeepLab currently holds the top position in most of the benchmarks. But it is not ideal to train from scratch since the model is too big. On average, one epoch of DeepLab v3+ (the newest edition) takes 10 minutes to train in the original implementation on Google Colab. In addition to that, mean IoU increases very slowly, and hence you will have to train for thousands of epochs. But Unet is a very small model and trains very quickly. The best thing about Unet is that it can be trained on even 30 images. I myself have used Unet with 14 images and got very good results.
{ "domain": "datascience.stackexchange", "id": 8767, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "deep-learning, neural-network, image-segmentation", "url": null }
tensor-calculus, notation, differentiation $$\partial_1 x^1 = \frac{\partial}{\partial x} x = 1, \; \partial_1 x^2 = \frac{\partial}{\partial x} y = 0, \; \partial_1 x^3 = \frac{\partial}{\partial x} z = 0, $$ and so on. (You can fill in the rest.) After doing this, you can reflect that the very meaning of these partial derivative symbols is "change of (thing) while the other coordinates are constant". Therefore the result holds for any system of coordinates.
{ "domain": "physics.stackexchange", "id": 53923, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "tensor-calculus, notation, differentiation", "url": null }
special-relativity, group-theory, spacetime-dimensions, lie-algebra Title: Number of Parameters of Lorentz Group We embed the rotation group, $SO(3)$ into the Lorentz group, $O(1,3)$ : $SO(3) \hookrightarrow O(1,3)$ and then determine the six generators of the Lorentz group: $J_x, J_y, J_z, K_x, K_y, K_z$ from the rotation and boost matrices. From the number of the generators we realize that $O(1,3)$ is a six-parameter matrix Lie group. But is there any other way to know the number of parameters of the Lorentz group in the first place? From special relativity we know that a Lorentz transformation: \begin{equation} x'^\mu = \Lambda^\mu {}_\nu x^\nu \end{equation} preserves the distance: \begin{equation} g^{\mu \nu} \Delta x_\mu \Delta x_\nu = g^{\mu \nu} \Delta x_\mu' \Delta x_\nu' \end{equation} The above two equations imply: \begin{equation} g^{\mu \nu} = g^{\rho \sigma}\Lambda_\rho {}^\mu \Lambda_\sigma {}^\nu \end{equation} Now, let us consider an infinitesimal transformation: \begin{equation} \Lambda_\nu {}^\mu = \delta_\nu{}^\mu + \omega_\nu{}^\mu + O(\omega^2) \end{equation} such that we can write: \begin{equation} \begin{aligned} g^{\mu \nu} & = g^{\rho \sigma}\Lambda_\rho {}^\mu \Lambda_\sigma {}^\nu \\& = g^{\rho \sigma} \left( \delta_\rho{}^\mu + \omega_\rho{}^\mu + \cdots \right)\left( \delta_\sigma{}^\nu + \omega_\sigma{}^\nu + \cdots \right) \\&
{ "domain": "physics.stackexchange", "id": 11989, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, group-theory, spacetime-dimensions, lie-algebra", "url": null }
meteorology, tropical-cyclone Title: Minimum sea-level pressure and maximum wind speed intensity relationship in Hurricane Katrina (2005) Hurricane Katrina was one of the deadliest and most destructive hurricanes to hit the US. After looking into the minimum sea-level pressure (MSLP) and maximum wind speed (MWS) intensity data from NOAA's HURDAT database, I am confused about the pattern, or whether there actually is one. For Katrina specifically, is there a pattern in the relationship between MSLP and MWS intensity? What role does a low or high air pressure system play in the velocity vector of the hurricane? Generally, the relationship is: the lower the air pressure, the more intense the hurricane. For example, during the same hurricane season as Katrina (2005), which also happened to be a record-breaking season for the number of hurricanes recorded in the Atlantic basin, Hurricane Wilma was recorded as being the most intense hurricane ever to form in the Atlantic. It reached a minimum air pressure of 882 millibars, which was a record low. The corresponding maximum wind speed during this period of intensification was 185 mph. Hurricane Katrina was also a category 5 hurricane and reached a minimum air pressure of 902 millibars, with maximum wind speeds at this pressure of 175 mph. Katrina, of course, striking where it did, caused far more destruction even though it was the less intense hurricane of the two: http://en.wikipedia.org/wiki/Hurricane_Wilma http://en.wikipedia.org/wiki/Hurricane_Katrina
{ "domain": "earthscience.stackexchange", "id": 1245, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "meteorology, tropical-cyclone", "url": null }
image-processing, image-registration, projection Title: Transforming Non-Uniformly Sampled Image onto Rectangular Grid I have an image and associated latitude and longitude matrices. So I know the latitude and longitude of every pixel. However, due to the camera orientation, the image is skewed and thus the latitude and longitude sampling is non-linear. See the images below. I want to resample this image on rectangular, evenly sampled latitude-longitude grid. I have coded up a way to do this, but it's computationally expensive, and I feel certain that there must be a better way. I've done quite a bit of searching, but I haven't found good hits. Perhaps I just don't know the lingo necessary for the right search terms. The image below is the original image. I want to get a resampled image that looks something like this: I produced this image, by basically moving a square boolean mask across the original image and finding the mean of the values as shown below: Ideally, I think that pixels with mostly empty space should have an opacity value as well, but let's table that for now. Here's the MATLAB code I wrote to do this: %% Create Skewed Image - All of These Are Knowns rows = 1:size(frame, 1); cols = 1:size(frame, 2); % Create non-linearly sampled grid lat_grid = zeros(size(frame)); lon_grid = zeros(size(frame)); for r = 1:numel(rows) for c = 1:numel(cols) lat_grid(r,c) = 0 + .001*rows(r) + 1e-3*cols(c) + 1e-6 * rows(r).^2; lon_grid(r,c) = 0 + .001*cols(c) + 1e-3*rows(r) + 1e-6 * cols(c).^2; end end
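As a vectorised alternative to the moving-mask loop, every source pixel can be binned into its nearest target cell and the bins averaged in one pass. Below is a Python sketch (the lat/lon formulas mirror the MATLAB snippet; the image values are made up). True scattered-data interpolation, e.g. scipy.interpolate.griddata, is the more standard tool for this resampling problem:

```python
import numpy as np

# Toy skewed grid: lat/lon of each pixel, mimicking the question's formulas.
rows, cols = np.mgrid[1:41, 1:41]
lat = 0.001 * rows + 1e-3 * cols + 1e-6 * rows**2
lon = 0.001 * cols + 1e-3 * rows + 1e-6 * cols**2
img = (rows + cols).astype(float)          # stand-in image values

# Target: regular lat/lon grid covering the data.
glat = np.linspace(lat.min(), lat.max(), 40)
glon = np.linspace(lon.min(), lon.max(), 40)

# Bin every source pixel into a target cell and average -- a vectorised
# version of the "moving boolean mask" loop in the question.
i = np.clip(np.searchsorted(glat, lat.ravel()), 0, len(glat) - 1)
j = np.clip(np.searchsorted(glon, lon.ravel()), 0, len(glon) - 1)
flat = i * len(glon) + j
sums = np.bincount(flat, weights=img.ravel(), minlength=len(glat) * len(glon))
cnts = np.bincount(flat, minlength=len(glat) * len(glon))
out = np.where(cnts > 0, sums / np.maximum(cnts, 1), np.nan)
out = out.reshape(len(glat), len(glon))
print(out.shape)
```

Cells that receive no source pixels come out as NaN, which is a natural place to attach the "opacity" idea mentioned in the question.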
{ "domain": "dsp.stackexchange", "id": 6502, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, image-registration, projection", "url": null }
c++, c++11, tree, reinventing-the-wheel template<typename T>
void BST<T>::traversePostorder()
{
    postorder(root);
}

template <class T>
treeNode<T>* BST<T>::inOrderSuccessor(treeNode<T>* node)
{
    if (node->right != nullptr)
        return minHelper(node->right);

    treeNode<T>* succ = nullptr;
    treeNode<T>* curr = root;

    // Start from root and search for successor down the tree
    while (curr != nullptr)
    {
        if (node->key < curr->key)
        {
            succ = curr;
            curr = curr->left;
        }
        else if (node->key > curr->key)
            curr = curr->right;
        else
            break;
    }
    return succ;
}

// Helper that threads the previously visited node through the recursion.
// (A function-local static would persist across calls and across trees,
// so a second call could start with stale state.)
template<typename T>
static bool isBSTHelper(treeNode<T>* node, treeNode<T>*& prev)
{
    // traverse the tree in inorder fashion and keep track of prev node
    if (node)
    {
        if (!isBSTHelper(node->left, prev))
            return false;

        // Allows only distinct valued nodes
        if (prev != nullptr && node->key <= prev->key)
            return false;

        prev = node;

        return isBSTHelper(node->right, prev);
    }
    return true;
}

template<typename T>
bool BST<T>::isBST(treeNode<T>* node) const
{
    treeNode<T>* prev = nullptr;
    return isBSTHelper(node, prev);
}

template<typename T>
bool BST<T>::isBST() const
{
    return isBST(root);
}
#endif // BST_H
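In this style of in-order validity check, a function-local static "previous" pointer is an easy trap: it persists across calls, so a second invocation can start with stale state. Threading the previous key through the recursion avoids that; a Python sketch of the pattern:

```python
# In-order BST check that threads the previous key through the recursion
# instead of using function-static state.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def is_bst(node, prev=None):
    """Return (ok, last_key_seen). Keys must be strictly increasing in-order."""
    if node is None:
        return True, prev
    ok, prev = is_bst(node.left, prev)
    if not ok or (prev is not None and node.key <= prev):
        return False, prev
    return is_bst(node.right, node.key)

good = Node(2, Node(1), Node(3))
bad = Node(2, Node(3), Node(1))
print(is_bst(good)[0], is_bst(bad)[0])   # True False
```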
{ "domain": "codereview.stackexchange", "id": 9358, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++11, tree, reinventing-the-wheel", "url": null }
c++, file-system void del(const std::string &name, EntryType type = EntryType::ANY_TYPE) const { m_currentDirectory->deleteEntry(name, type); } void rmdir(const std::string &name) const { del(name, EntryType::DIRECTORY); } void processCommand(const std::string &command, const std::string &option) { if (command == "ls") return ls(); else if (command == "dir") return dir(); else if (command == "mkdir") return mkdir(option); else if (command == "mkfile") return mkfile(option); else if (command == "cd") return cd(option); else if (command == "rmdir") return rmdir(option); else if (command == "del") return del(option); } public: void readInput(const std::string &fileName) { std::ifstream infile {fileName}; std::string line; while (std::getline(infile, line)) { std::istringstream iss(line); std::string command; std::string option; iss >> command; iss >> option; std::cout << command << " " << option << "\n"; processCommand(command, option); } } }; int main() { FileSystem filesystem; filesystem.readInput("path to the input file");
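The chain of else-if branches in processCommand can be replaced by table-driven dispatch. Here is a minimal Python sketch of the pattern (the class and method names are illustrative, not the original code); the same idea works in C++ with a std::unordered_map from command name to member-function pointer:

```python
# Table-driven dispatch: a dict of command name -> handler replaces the
# if/else-if chain, and unknown commands can be reported instead of ignored.
class Shell:
    def __init__(self):
        self.entries = {}

    def mkdir(self, name):
        self.entries[name] = "dir"

    def mkfile(self, name):
        self.entries[name] = "file"

    def ls(self, _unused=None):
        return sorted(self.entries)

    def process(self, command, option=None):
        handlers = {"mkdir": self.mkdir, "mkfile": self.mkfile, "ls": self.ls}
        handler = handlers.get(command)
        if handler is None:
            raise ValueError(f"unknown command: {command}")
        return handler(option)

sh = Shell()
sh.process("mkdir", "docs")
sh.process("mkfile", "a.txt")
print(sh.process("ls"))
```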
{ "domain": "codereview.stackexchange", "id": 37664, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, file-system", "url": null }
c++, c++11, compression CharCountNode *left; CharCountNode *right; CharCountNode(Byte b, int i); void print(); }; bool isEmptyNode(CharCountNode node); bool operator < (const CharCountNode &lhs, const CharCountNode &rhs); bool operator > (const CharCountNode &lhs, const CharCountNode &rhs); charCountNode.cpp #include <iostream> #include "charCountNode.h" CharCountNode::CharCountNode(Byte b, int i) : byte(b.getChar(), b.getIsTerminator()) { count = i; left = nullptr; right = nullptr; } void CharCountNode::print() { std::cout << "Char:\t" << byte.getPrintable() << "\n"; std::cout << "Count:\t" << count << "\n"; std::cout << "Left:\t" << left << "\n"; std::cout << "Right:\t" << right << "\n"; } bool operator < (const CharCountNode &lhs, const CharCountNode &rhs) { return (lhs.count < rhs.count); } bool operator > (const CharCountNode &lhs, const CharCountNode &rhs) { return (lhs.count > rhs.count); } bool isEmptyNode(CharCountNode node) { return (node.count == 0); }
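The operator< / operator> overloads on counts exist so nodes can be ordered in a priority queue during Huffman-tree construction. A sketch of that use with Python's heapq (the frequency table is made up):

```python
import heapq

# Repeatedly merge the two least-frequent nodes -- the step that the
# count-based comparison operators above make possible.
freqs = {"a": 5, "b": 9, "c": 12, "d": 13, "e": 16, "f": 45}

# Heap entries: (count, tiebreak, symbol-or-subtree); the tiebreak keeps
# comparisons from ever reaching the non-comparable third element.
heap = [(n, i, ch) for i, (ch, n) in enumerate(sorted(freqs.items()))]
heapq.heapify(heap)
tie = len(heap)
while len(heap) > 1:
    c1, _, left = heapq.heappop(heap)
    c2, _, right = heapq.heappop(heap)
    heapq.heappush(heap, (c1 + c2, tie, (left, right)))
    tie += 1

total, _, tree = heap[0]
print(total)   # root count equals the sum of all frequencies
```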
{ "domain": "codereview.stackexchange", "id": 17473, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++11, compression", "url": null }
# Standard Error Of Sample Proportion repeatedly randomly drawn from a population, and the proportion of successes in each
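The quantity being described is the standard deviation of the sampling distribution of the proportion, $SE = \sqrt{\hat{p}(1-\hat{p})/n}$. A quick sketch with made-up numbers:

```python
import math

# Standard error of a sample proportion: SE = sqrt(p_hat * (1 - p_hat) / n)
p_hat, n = 0.4, 100
se = math.sqrt(p_hat * (1 - p_hat) / n)
print(se)   # about 0.049
```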
{ "domain": "winaudit.org", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9848109534209825, "lm_q1q2_score": 0.8466062813019293, "lm_q2_score": 0.8596637541053281, "openwebmath_perplexity": 1203.1106506971157, "openwebmath_score": 0.9443531036376953, "tags": null, "url": "http://winaudit.org/guides/sample-proportion/standard-error-of-sample-proportion.html" }
electromagnetism, induction Title: AC through a pure inductor I've studied the AC circuit for an ideal inductor in many physics books. After deriving the final equation for the current, the integration constant $C$ is assumed to be $0$ without adequate justification. In this question I seek an adequate reason. Suppose an ideal AC voltage source is connected across a pure inductor as shown: The voltage source is $$V=V_0\sin(\omega t).$$ From Kirchhoff’s loop rule, a pure inductor obeys $$V_0 \sin(\omega t)=L\frac{di}{dt},$$ so $$\frac{di}{dt}=\frac{V_0}{L} \sin(\omega t)$$ whose solution is $$i=\frac{-V_0}{\omega L}\cos(\omega t)+C$$ Consider the (hypothetical) case where the voltage source has zero resistance and zero impedance. In most elementary physics books $C$ is taken to be $0$ for the case of an ideal inductor. $$\text{Can we assume that } C \neq 0?$$ (To me this is one of the inadequate reasons.) This integration constant has dimensions of current and is independent of time. Since the source has an emf which oscillates symmetrically about zero, the current it sustains also oscillates symmetrically about zero, so no time-independent component of the current exists. Thus the constant $C=0$. (Boundary condition.) There might exist a finite DC current through a loop of wire having $0$ resistance without any electric field. Hence a DC current is assumed to flow through the closed circuit having an ideal voltage source and an ideal inductor in series if the voltage source is acting like a short circuit, like an AC generator which is not rotating to produce any voltage. When the generator starts, it causes the current through the circuit to oscillate around $C$ in accordance with the above equations. The fact is, in the context of ideal circuit theory, the inductor voltages are equal in the circuits below:
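The role of the constant can be seen numerically: with the initial condition $i(0)=0$, the solution is $i(t)=\frac{V_0}{\omega L}(1-\cos\omega t)$, i.e. $C = V_0/(\omega L) \neq 0$ for that boundary condition. A sketch with arbitrary component values and simple forward-Euler integration:

```python
import math

# Integrate L di/dt = V0 sin(w t) with i(0) = 0 and compare with the closed
# form i(t) = (V0 / (w L)) * (1 - cos(w t)), i.e. C = V0/(w L) here.
V0, L, w = 1.0, 0.5, 2.0
dt, T = 1e-5, 3.0

i, t = 0.0, 0.0
while t < T:
    i += (V0 / L) * math.sin(w * t) * dt   # forward-Euler step
    t += dt

exact = (V0 / (w * L)) * (1.0 - math.cos(w * T))
print(i, exact)
```

This makes the question's point concrete: the value of $C$ is fixed by the boundary condition, not by the differential equation itself.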
{ "domain": "physics.stackexchange", "id": 10339, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, induction", "url": null }
python, python-3.x, api, reddit reply += '\n# Japanese Definitions:' # type of definitions_data: list of dict, each dict is a definition definitions_data = requests.get('https://jisho.org/api/v1/search/words?keyword=' + query).json()['data'] if definitions_data == []: reply += ' no Japanese definitions found\n' else: for defin in definitions_data: try: reply += '\n\nWord: ' + defin['slug'] reply += '\n\nReading: ' + defin['japanese'][0]['reading'] reply += '\n\nEnglish Definition: ' + defin['senses'][0]['english_definitions'][0] except: reply += '\n\nError: Missing information for this definition' reply += '\n\nimprovements to come' print(reply) return reply # main function: so this module can be imported without executing main functionality. def main(): reddit = authenticate() comments_replied_to = get_saved_comments() while True: run_bot(reddit, comments_replied_to) ## end definitions ## begin executions if __name__ == '__main__': main() Indentation The indentation within authenticate is non-standard. Here are two standard alternatives: r = praw.Reddit(username = config.username, password = config.password, client_id = config.client_id, client_secret = config.client_secret, user_agent = "kanjibot") r = praw.Reddit( username = config.username, password = config.password, client_id = config.client_id, client_secret = config.client_secret, user_agent = "kanjibot", ) Comparison to None if summon != None should be if summon is not None
{ "domain": "codereview.stackexchange", "id": 38946, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, api, reddit", "url": null }
optimization, c, objective-c, ios [circle runAction:fade]; if (explosionLenght >= 5) { CCSprite *circle = [[CCSprite alloc]initWithFile:@"Circle.png"]; circle.position = ccp(circle4.position.x, circle4.position.y + tileSize * 4); [self addChild:circle]; id fade = [CCSequence actionOne:[CCMoveTo actionWithDuration:1 position:circle.position] two:[CCScaleTo actionWithDuration:1 scale:0]]; [circle runAction:fade]; if (explosionLenght >= 6) { CCSprite *circle = [[CCSprite alloc]initWithFile:@"Circle.png"]; circle.position = ccp(circle4.position.x, circle4.position.y + tileSize * 5); [self addChild:circle]; id fade = [CCSequence actionOne:[CCMoveTo actionWithDuration:1 position:circle.position] two:[CCScaleTo actionWithDuration:1 scale:0]]; [circle runAction:fade]; if (explosionLenght >= 7) { CCSprite *circle = [[CCSprite alloc]initWithFile:@"Circle.png"]; circle.position = ccp(circle4.position.x , circle4.position.y- tileSize * 6); [self addChild:circle]; id fade = [CCSequence actionOne:[CCMoveTo actionWithDuration:1 position:circle.position] two:[CCScaleTo actionWithDuration:1 scale:0]]; [circle runAction:fade]; if (explosionLenght >= 8) { explosionLenght = 1; } } } } } }
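The nested if blocks differ only in the tile offset, so a single loop bounded by the explosion length expresses the same thing. A language-neutral sketch in Python (all names and values are hypothetical stand-ins for the Objective-C code):

```python
# Replace the nested "explosionLenght >= k" blocks with one loop: the k-th
# circle sits k tiles away from the base position.
tile_size = 32
base = (100, 100)
explosion_length = 6

sprites = []
for k in range(1, explosion_length):
    sprites.append((base[0], base[1] + tile_size * k))

print(len(sprites), sprites[0], sprites[-1])
```

In the original, each iteration would also create the sprite and attach the move/scale action sequence, so the per-circle setup code appears exactly once.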
{ "domain": "codereview.stackexchange", "id": 3846, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optimization, c, objective-c, ios", "url": null }
digital-communications, channelcoding, forward-error-correction Title: Confusion in Channel Encoding and Convolutional Encoding If I'm making an NRZ signal by first encoding 1 input bit into 10 bits multiplied by -V/+V before sending it into a channel, is it called channel encoding? If I then modulate it using the GMSK technique, can it be considered convolutional encoding? Like the other commenters, I don't really understand your question. What I am trying to do is to list the basics so that others can suggest edits, because I find it much easier to write in the answer section :). Please let me know if this makes you less confused. The general communication system is
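The scheme in the question can be sketched in two steps: repeating each input bit 10 times (a rate-1/10 repetition code, which is indeed a simple form of channel coding) and then mapping the coded bits to NRZ levels of −V/+V (line coding, a separate step). A minimal sketch:

```python
import numpy as np

# Each input bit is repeated 10 times (repetition channel code), then mapped
# to an NRZ level of -V/+V before transmission.
V = 1.0
bits = np.array([1, 0, 1])

repeated = np.repeat(bits, 10)              # channel coding (repetition)
nrz = np.where(repeated == 1, +V, -V)       # NRZ level mapping (line coding)

print(nrz[:12])
```

GMSK applied afterwards is modulation, not convolutional encoding, although GMSK's memory is sometimes analysed with trellis tools similar to those used for convolutional codes.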
{ "domain": "dsp.stackexchange", "id": 5169, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "digital-communications, channelcoding, forward-error-correction", "url": null }
c++, performance, c++11, array, template-meta-programming printarr(aa) bb = std::move(aa); cc = bb.clone(); bb[0] = -3; printarr(bb) printarr(cc) const ha_type dd(cc); printarr(dd) } // algorithms { cout << "\nalgorithms\n"; constexpr size_t dimensions = 3; using el_type = double; using ha_type = hyper_array::array<el_type, dimensions>; const ::std::array<size_t, dimensions> lengths{{2,3,4}}; ha_type aa{lengths}; printarr(aa) // uninitialized std::iota(aa.begin(), aa.end(), 1); printarr(aa) ha_type bb{aa.lengths()}; std::copy(aa.begin(), aa.end(), bb.rbegin()); printarr(bb) ha_type cc{aa.lengths()}; std::transform(aa.begin(), aa.end(), bb.begin(), cc.begin(), [](el_type a, el_type b) { return a + b; }); printarr(cc); } // in containers { cout << "\nin containers\n"; using el_type = double; constexpr size_t dims = 2; using ha_type = hyper_array::array<el_type, dims>; vector<ha_type> vv; vv.emplace_back(hyper_array::array<double, dims>{1, 2}); vv.push_back(hyper_array::array2d<double>{3, 4}); vv.push_back(ha_type{5, 6}); vv.push_back({7, 8}); vv.emplace_back(9, 10);
{ "domain": "codereview.stackexchange", "id": 16369, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, c++11, array, template-meta-programming", "url": null }
newtonian-mechanics, everyday-life, collision, spring But if you pull the small block away from the large block slowly, then the large block will follow the small block, while the spring doesn't stretch terribly much. In this case, the low acceleration of the large mass takes place over a longer time, and so it can move more while the force is being exerted on it. This is the equivalent of going over a bump/pothole at low speed; since the wheels move up or down relatively slowly, the frame of the car will follow them. If you go over a bump at low speed, this means that the frame will follow the wheels (which follow the road surface), rather than moving in something resembling a straight line and possibly hitting the road surface.
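The slow-versus-fast pull described above can be checked with a toy simulation: a light block pulled away from a heavy block through a spring. Pulled slowly, the heavy block follows with little stretch; pulled the same distance quickly, the spring stretches by almost the full pull distance. All parameters are hypothetical, and the integration is plain explicit Euler:

```python
# Heavy block of mass m coupled by a spring (stiffness k) to a small block
# that is dragged a fixed distance at constant speed over time pull_time.
def max_stretch(pull_time, pull_dist=1.0, m=10.0, k=50.0, dt=1e-4):
    x = v = 0.0          # heavy block position / velocity
    t = 0.0
    worst = 0.0
    while t < pull_time:
        small = pull_dist * t / pull_time      # small block's position
        stretch = small - x
        worst = max(worst, abs(stretch))
        a = k * stretch / m                    # spring force on heavy block
        v += a * dt
        x += v * dt
        t += dt
    return worst

print(max_stretch(10.0))   # slow pull: small stretch
print(max_stretch(0.1))    # fast pull: stretch near the full pull distance
```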
{ "domain": "physics.stackexchange", "id": 53306, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, everyday-life, collision, spring", "url": null }
distributed-systems, history Title: Who are the legislators of Paxos? In the seminal distributed systems paper The Part Time Parliament (the Paxos protocol), Leslie Lamport names fictional legislators who are involved in the Paxon parliament protocol. According to this writing, he notes that: I gave the Greek legislators the names of computer scientists working in the field, transliterated with Guibas's help into a bogus Greek dialect. Does anyone have any information on the scientists that the legislators are named after? A list of the legislators in the paper and the corresponding computer scientists would be the ideal answer. I think the first legislator mentioned in the paper, "Λινχ∂", is named after Nancy Lynch since it could be pronounced as "Linch". Also, "Λεωνίδας Γκίμπας" from the bibliography is Leo Guibas. I'm completely lost as to who the others are. This is an educated guess of the transliterated names I could find in the Paxos paper. Most of these are people mentioned in the paper's references. Λ˘ινχ∂: Lynch, N. - Legislator Φισ∂ερ: Fischer, M. J. - Legislator Tωυεγ: Toueg, S. - Legislator Ωκι: Oki, B. M. - Legislator ∆ωλεφ: Dolev, D. - Farmer Σκεεν: Skeen, M. D. - Merchant Στωκµε˘ιρ: Stockmeyer, L. - Legislator Στρωνγ: (Strong/Strang?) - Legislator ∆φωρκ: Dwork, C. - President ∆˘ικστρα: Dijkstra, E. W. - Cheese inspector Γωυδα: (Gouda) - Cheese inspector Φρανσεζ: Francez, N. - Wine taster Πνυeλ˘ι: Pnueli, A. - Wine taster Σ∂ν˘ιδερ: Schneider, F. B. - Citizen Γρεες: (Greece) - Citizen Λαµπσων: Lampson, B. - General Λισκωφ: Liskov, B. H. - Merchant Παρνας: Parnas, D. - Elder statesman Γρα˘ι: Gray, C. G. - Priest Λινσε˘ι: Lindsey, B. G. - Priest
{ "domain": "cs.stackexchange", "id": 5769, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "distributed-systems, history", "url": null }
thermodynamics, degrees-of-freedom Title: For a diatomic molecule, what is the specific heat per mole at constant pressure/volume? At high temperatures, the specific heat at constant volume $\text{C}_{v}$ has three degrees of freedom from rotation, two from translation, and two from vibration. That means $\text{C}_{v}=\frac{7}{2}\text{R}$ by the Equipartition Theorem. However, I recall the Mayer formula, which states $\text{C}_{p}=\text{C}_{v}+\text{R}$. The ratio of specific heats for a diatomic molecule is usually $\gamma=\text{C}_{p}/\text{C}_{v}=7/5$. What is then the specific heat at constant pressure? Normally this value is $7/5$ for diatomic molecules? "At high temperatures, the specific heat at constant volume $C_v$ has three degrees of freedom from rotation, two from translation, and two from vibration." I can't understand this line. $C_v$ is a physical quantity, not a dynamical system, so how can it have degrees of freedom? You can say the degrees of freedom of an atom or molecule is something, but it is wrong to say the degrees of freedom of some physical quantity (like temperature, specific heat, etc.) is something. Degrees of freedom is the number of independent coordinates necessary for specifying the position and configuration in space of a dynamical system. Now to answer your question, we know that the energy per mole of the system is $\frac{1}{2} fRT$, where $f$ = degrees of freedom of the gas. $\therefore$ molar heat capacity, $C_v=(\frac{dE}{dT})_v=\frac{d}{dT}(\frac{1}{2}fRT)_v=\frac{1}{2}fR$ Now, $C_p=C_v+R=\frac{1}{2}fR+R=R(1+ \frac{f}{2})$
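The formulas $C_v = \frac{1}{2}fR$, $C_p = C_v + R$ give $\gamma = C_p/C_v = (f+2)/f$; a quick sketch evaluating the standard cases:

```python
# Molar heat capacities from equipartition: Cv = f R / 2, Cp = Cv + R,
# gamma = Cp / Cv = (f + 2) / f.
R = 8.314  # J/(mol K)

def heat_capacities(f):
    cv = 0.5 * f * R
    cp = cv + R
    return cv, cp, cp / cv

# f = 3: monatomic; f = 5: diatomic without vibration (gamma = 7/5);
# f = 7: diatomic with vibration active (gamma = 9/7).
for f in (3, 5, 7):
    cv, cp, g = heat_capacities(f)
    print(f, round(g, 4))
```

So with vibration counted ($f = 7$, $C_v = \frac{7}{2}R$), one gets $C_p = \frac{9}{2}R$ and $\gamma = 9/7$, while the familiar $\gamma = 7/5$ corresponds to $f = 5$ (vibration frozen out).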
{ "domain": "physics.stackexchange", "id": 25610, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, degrees-of-freedom", "url": null }
forces, torque, statics, equilibrium, moment Title: Understanding moments as forces? I was watching this lecture on analysis of stress for mechanics of materials. At time 7:20, the lecturer says that in equilibrium, the sum of forces and "moments" in each direction (x, y, z) must be zero. What exactly is meant by "moments" in this context? Does this mean the "twisting" forces? If so, why are twisting forces counted separately from the other forces in the x, y, and z directions? Some engineering texts use "moment" and "couple" to talk about forces that tend to rotate an assembly (what physicists mean when they say "torque", but the engineers sometimes have a slightly different meaning for that word). A rough translation guide is... A "couple" is a pair of opposite forces whose points of action are not co-linear. A couple is sometimes called a "pure torque" because it imparts a tendency to rotate without imparting a tendency to accelerate, and engineers will occasionally shorten this to just "torque", which is why a physicist needs to be careful in talking about these things with engineers. A "moment" is the tendency to rotate imparted by an off-center force (i.e. it is a "torque" in physicist-speak), but because it has not been paired off in a couple you know that you may also have to worry about the linear acceleration that is implied. In your case the speaker is just saying that the static conditions, $$ \sum \vec{F} = 0 $$ $$ \sum \vec{\tau} = 0 $$ apply.
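A couple's defining property — zero net force, but the same net moment about every point — can be verified directly; a small numeric sketch with made-up forces:

```python
import numpy as np

# A couple: two equal and opposite forces on non-colinear lines of action.
F = np.array([0.0, 10.0, 0.0])          # N
r1 = np.array([1.0, 0.0, 0.0])          # application points (m)
r2 = np.array([-1.0, 0.0, 0.0])

def net_moment(about):
    # Sum of r x F contributions, taken about an arbitrary point.
    return np.cross(r1 - about, F) + np.cross(r2 - about, -F)

print(F + (-F))                               # net force: zero
print(net_moment(np.zeros(3)))                # [0, 0, 20] N m
print(net_moment(np.array([5.0, 3.0, 0.0])))  # same moment about any point
```

This is why the static conditions treat force balance and moment balance as six separate scalar equations: a configuration can satisfy one and violate the other.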
{ "domain": "physics.stackexchange", "id": 4855, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "forces, torque, statics, equilibrium, moment", "url": null }
Taylor Series Error Estimation Example: What is the minimum number of terms of the series one needs to be sure to be within of the true sum? However, it holds also in the sense of the Riemann integral provided the (k+1)th derivative of f is continuous on the closed interval [a, x]. Sometimes you'll see something like N comma a to say it's an Nth degree approximation centered at a. The statement for the integral form of the remainder is more advanced than the previous ones, and requires understanding of Lebesgue integration theory for full generality. Solution: Here are the first few derivatives and the evaluations. Note that, for each j = 0, 1, ..., k−1, $f^{(j)}(a)=P^{(j)}(a)$. These estimates imply that the complex Taylor series $T_f(z) = \sum_{k=0}^{\infty} \frac{f^{(k)}(c)}{k!}(z-c)^k$ converges. In this example we pretend that we only know the following properties of the exponential function: $(*)\quad e^0 = 1, \quad \frac{d}{dx}e^x = e^x$. This kind of behavior is easily understood in the framework of complex analysis. Solution: As with the last example we'll start off in the same manner. So, we get a similar pattern for this one. Let's plug the numbers into the formula. 
Lagrange Error Bound Calculator: the $(N+1)$th derivative of our $N$th degree polynomial.
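The Lagrange error bound mentioned above can be demonstrated concretely. This is my own sketch (not from the page): for $e^x$ on $[0,1]$, every derivative is again $e^x \le e < 3$, so $M = 3$ is a valid bound on the $(n{+}1)$th derivative.

```python
import math

# Lagrange error bound: approximating e^x on [0, 1] by its degree-n
# Maclaurin polynomial, the remainder satisfies
#   |R_n(x)| <= M * |x|**(n+1) / (n+1)!
# where M bounds the (n+1)th derivative on the interval (here M = 3).

def maclaurin_exp(x, n):
    """Degree-n Maclaurin polynomial of e^x evaluated at x."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def lagrange_bound(x, n, M=3.0):
    return M * abs(x)**(n + 1) / math.factorial(n + 1)

x, n = 1.0, 6
actual_error = abs(math.exp(x) - maclaurin_exp(x, n))
bound = lagrange_bound(x, n)
print(actual_error, bound)  # the true error stays below the bound
```

To answer "how many terms do I need to be within some tolerance," one increases $n$ until `lagrange_bound(x, n)` drops below the tolerance.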
{ "domain": "accessdtv.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9615338090839606, "lm_q1q2_score": 0.8530772224704579, "lm_q2_score": 0.8872046041554923, "openwebmath_perplexity": 629.538565100748, "openwebmath_score": 0.821217954158783, "tags": null, "url": "http://accessdtv.com/taylor-series/taylor-series-error-estimation.html" }
• Yes, AC would suffice. For each $x$ the set of suitable $\zeta$ is nonempty, so you can use a Choice function to pick one and only one $\zeta_x$ for each $x$, and thus you get your function. Apr 15, 2021 at 3:17 • You said "If the function $x\mapsto \xi(x)$ were a continuous function, we could conclude that $\lim_{x\to x_0} f'(\xi(x))=\lim_{\xi\to x_0}f'(\xi)$. I thought that this is valid if $f'$ is continuous, not $\xi(x)$. Am I wrong? Could you explain this, please? – ZFR Apr 15, 2021 at 3:23 • @ZFR: The equality would be valid (for different reasons) if either $f'(x)$ or $\xi(x)$ (or both) were continuous at $x_0$. But that doesn't help when neither is. Apr 15, 2021 at 3:29 • @ZFR: Continuity of $f’(x)$ is of course known to imply that you can “pass the limit through” (in fact, that’s one way to define continuity). Continuity of $\xi(x)$ would also suffice because it would imply the “intermediate value property” (IVP): not only does $\xi(x)$ take a value between $x$ and $x_0$, but every value between $\xi(x)$ and $x_0$ is taken somewhere between $x$ and $x_0$ by $\xi$; that would allow you to deduce that the limit exists, because then $\xi$ couldn’t be “skipping over” the points that mess you up. The IVP would suffice, and is weaker than continuity. Apr 15, 2021 at 3:34
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9416541593883189, "lm_q1q2_score": 0.8077209004307669, "lm_q2_score": 0.8577681013541611, "openwebmath_perplexity": 80.96758952931077, "openwebmath_score": 0.9359080791473389, "tags": null, "url": "https://math.stackexchange.com/questions/4102701/erroneous-proof-that-derivative-of-function-is-always-continuous" }
beginner, algorithm, primes, assembly, factors ; Check if p is a prime factor cdq mov eax, x div ebx cmp edx, 0 ; if(x % p == 0) jne noprime ;if yes, update x and result L2: cdq mov eax, x div ebx cmp edx, 0 ; if(x % p == 0) jne endL2 mov x , eax ; x /= p jmp L2 endL2: cdq mov eax, ecx div ebx sub ecx, eax ;result -= result / p noprime: inc ebx jmp L3 endL3: ; If x has a prime factor greater than sqrt(x) cmp DWORD PTR x, 1 jle finish cdq mov eax, ecx div DWORD PTR x sub ecx, eax finish: mov eax, ecx pop edx pop ecx pop ebx pop ebp ret 4 main: push 100 call _phi@4 INVOKE ExitProcess, 0 END main
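For readers not fluent in x86 assembly, here is my own Python sketch of the algorithm the listing appears to implement — Euler's totient via trial division. The function name mirrors the `_phi@4` label; the mapping back to the registers is an interpretation, not part of the original post.

```python
# Python sketch of what the assembly appears to compute: Euler's totient
# phi(x) by trial division. For each prime factor p of x, the running
# result is reduced by result // p; dividing x by p repeatedly ensures
# each prime is handled only once, and a leftover x > 1 after the loop
# is a prime factor larger than sqrt(x) (the endL3/finish branch).
def phi(x):
    result = x
    p = 2
    while p * p <= x:
        if x % p == 0:          # p is a prime factor
            while x % p == 0:   # strip every power of p  (loop L2)
                x //= p
            result -= result // p
        p += 1
    if x > 1:                   # remaining prime factor > sqrt(original x)
        result -= result // x
    return result

print(phi(100))  # 40, matching the `push 100` / `call _phi@4` in main
```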
{ "domain": "codereview.stackexchange", "id": 30055, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, algorithm, primes, assembly, factors", "url": null }
IMHO a good intuition for a closed set is that while staying in the set, you cannot get arbitrarily close to any point outside of that set. Which is actually the exact opposite of a dense set, which gets close to each point of the space. With that intuition it's also immediately clear why the only closed dense set is the space itself: The only way to come close to each point without coming close to any point outside of the set is if there are no points outside of the set. Let the metric space be $\mathbb{R}$. Then $\{0\}$ is closed but not dense in $\mathbb{R}$. While $\mathbb{Q}$ is dense but not closed in $\mathbb{R}$. Loosely speaking: Suppose $X$ is a topological space (with some topology, obviously). If $A \subseteq X$ and we say $A$ is dense in $X$, we mean that for each point $x \in X$, we can keep finding points from $A$ "closer" and "closer" to $x$. If $B \subseteq X$ and we say $B$ is closed in $X$, we mean if you can find points in $X$ that are really really close to elements of $B$, then those points from $X$ are also in $B$. In other words, if $y$ is an element that is not in $B$, then none of the points nearby it are in $B$ either (otherwise, if we could keep finding stuff from $B$ closer and closer to $y$, then $y$ would be in $B$). I'd say the most glaring difference between the two definitions is that dense sets need not contain their limit points. Closed sets necessarily do. There are plenty of examples of "dense not closed" and "closed not dense" scattered throughout this question now.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9744347853343058, "lm_q1q2_score": 0.8358390845671291, "lm_q2_score": 0.8577681104440172, "openwebmath_perplexity": 151.26199863167778, "openwebmath_score": 0.8905569314956665, "tags": null, "url": "https://math.stackexchange.com/questions/1390365/what-is-the-difference-between-dense-and-closed-sets/1390367" }
python, beginner, mysql, excel Finally, have a read through https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-autocommit.html . The way you're using commits now, you're better off just turning on autocommit. Combined inserts Your insert into member_project is inefficient. You should not insert in a loop. Read https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html Add a prompt to your input() Otherwise, the user doesn't know why the program has suddenly hung.
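The combined-insert advice can be sketched concretely. Below is my own illustration using the stdlib `sqlite3` driver so it runs anywhere; the `mysql.connector` cursor exposes the same `executemany` method, and the table and column names here are made up.

```python
import sqlite3

# Sketch of the combined-insert advice: one executemany call instead of a
# Python-level loop of single INSERTs. Table/column names are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE member_project (member_id INTEGER, project_id INTEGER)")

rows = [(1, 10), (1, 11), (2, 10)]

# One round trip for all rows:
cur.executemany("INSERT INTO member_project VALUES (?, ?)", rows)
conn.commit()

cur.execute("SELECT COUNT(*) FROM member_project")
print(cur.fetchone()[0])  # 3
```

The same pattern with `mysql.connector` uses `%s` placeholders instead of `?`, but the structure — build the row list first, then a single `executemany` plus one `commit` — is identical.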
{ "domain": "codereview.stackexchange", "id": 35740, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, beginner, mysql, excel", "url": null }
parsing, file, swift, serialization Title: Parse data into an array of structs I'd had a hard time getting this to work. I’d like to parse a file containing data and copy this data into a struct. The data file (test.dat) looks like this: 1,"Tom","Smith" 2,"Peter","Perth Junior" 3,"Cathy","Johnson" I use the following function. func parseDataFile() { struct person { var id: Int var first: String var last: String } var friends: [person] = [] let filePath = Bundle.main.path(forResource: "test", ofType: "dat") guard filePath != nil else { return } let fileURL = URL(fileURLWithPath: filePath!) do { let file = try String(contentsOf: fileURL, encoding: .utf8) let arrayOfLines = file.split { $0.isNewline } for line in arrayOfLines { let arrayOfItems = line.components(separatedBy: ",") let tempPerson = person(id: Int(arrayOfItems[0])!, first: arrayOfItems[1].replacingOccurrences(of: "\"", with: ""), last: arrayOfItems[2].replacingOccurrences(of: "\"", with: "") ) friends.append(tempPerson) } } catch { print(error) } }
{ "domain": "codereview.stackexchange", "id": 35960, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "parsing, file, swift, serialization", "url": null }
and angle). By … Examples: Suppose we have a complex number expressed in rectangular form and we want to express it in polar form. Converts from Cartesian to polar coordinates. Magnitude of vector, $V = \sqrt{a^2 + b^2}$. This calculator does basic arithmetic on complex numbers and evaluates expressions in the set of complex numbers. It is the distance from the origin to the point. Only this week I discovered the Casio fx83 does rectangular to polar (and vice versa) conversions - very handy for LCHL complex numbers - Shift + / - and enter the number in Cartesian/polar form separated by a comma. Polar and Exponential Forms - Calculator. But when dealing with polar equations, you view these points in their polar form. Rectangular (x, y) and polar (r, θ) coordinate systems are the two ways to determine the position of points in the two-dimensional plane. Free Complex Numbers Calculator - simplify complex expressions using algebraic rules step-by-step. In order to work with complex numbers without drawing vectors, we first need some kind of standard mathematical notation. There are two basic forms of complex number notation: polar and rectangular. NOTE: If you set the calculator to return rectangular form, you can press Enter and the calculator will convert this number to rectangular form. How the Rectangular to Polar Calculator Works: the calculator at the top of this page uses the exact same two equations as listed above. This calculator extracts the square root, calculates the modulus, finds the inverse, finds the conjugate, and transforms the complex number to polar form. The calculator will … Convert to polar coordinates (0, -5): convert from rectangular coordinates to polar coordinates using the conversion formulas. Since the complex number is in QII, we have $180° - 30° = 150°$, so that $-\sqrt{3} + i = 2\operatorname{cis} 150°$.
r - the distance from the origin to the point. To convert from Cartesian to polar: this polar to rectangular form conversion calculator converts a number in polar form to its equivalent value in rectangular form. Examples: the phase is specified in degrees. The given rectangular equation is. Rectangular to Polar Calculator. Where, the rectangular coordinates
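The conversions described above are easy to reproduce without a dedicated calculator. This sketch is mine, using the stdlib `cmath` module; it checks the QII example $-\sqrt{3} + i = 2\operatorname{cis}150°$.

```python
import cmath
import math

# Rectangular <-> polar conversion via the stdlib cmath module.
# cmath.polar returns (r, theta) with theta in radians.
r, theta = cmath.polar(complex(-math.sqrt(3), 1))
print(r, math.degrees(theta))  # 2.0 and 150 degrees: -sqrt(3) + i = 2 cis 150

# And back again:
z = cmath.rect(r, theta)
print(z)  # approximately (-1.732 + 1j)
```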
{ "domain": "gridserver.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.957912274487423, "lm_q1q2_score": 0.8063951261825719, "lm_q2_score": 0.8418256532040707, "openwebmath_perplexity": 987.9640415293046, "openwebmath_score": 0.8588264584541321, "tags": null, "url": "http://s130894.gridserver.com/hxniv2h/ccbd6e-rectangular-to-polar-form-calculator" }
haskell In shuffle , you could replace ys = take (length xs) rands with ys = zipWith const rands xs. This should do the same thing in 1 traversal instead of 2. const is defined as const a b = a, so when you zip the two together you'll only take elements from the first list, rands. zip stops when the shorter list is exhausted so the length you'll be left with is length xs. You can see some similar examples of this here.
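The zip-truncation idea transfers beyond Haskell. Here is my own illustration in Python (not Haskell, so the laziness aspect is lost, but the "zip stops at the shorter list" behaviour is the same): pairing the two sequences and keeping the first element of each pair yields the first `len(xs)` items of `rands` in a single traversal, with no separate length computation.

```python
# Python analogue of `zipWith const rands xs`: zip stops when the shorter
# sequence is exhausted, so the result has length len(xs) and contains
# only elements drawn from rands.
rands = [42, 7, 19, 3, 88, 5]   # stand-in for a longer random stream
xs = ["a", "b", "c"]

ys = [r for r, _ in zip(rands, xs)]
print(ys)  # [42, 7, 19]
```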
{ "domain": "codereview.stackexchange", "id": 36808, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "haskell", "url": null }
python, nlp, pytorch, bert, transformer Title: How do I generate text from ids in Torchtext's sentencepiece_numericalizer? The torchtext sentencepiece_numericalizer() outputs a generator of SentencePiece model indices corresponding to the tokens in the input sentence. From the generator, I can get the ids. My question is how do I get the text back after training? For example >>> sp_id_generator = sentencepiece_numericalizer(sp_model) >>> list_a = ["sentencepiece encode as pieces", "examples to try!"] >>> list(sp_id_generator(list_a)) [[9858, 9249, 1629, 1305, 1809, 53, 842], [2347, 13, 9, 150, 37]] How do I convert the ids back to list_a (i.e. "sentencepiece encode as pieces", "examples to try!")? Torchtext does not implement this, but you can directly use the SentencePiece package, installable from PyPI. import sentencepiece as spm sp = spm.SentencePieceProcessor(model_file='test/test_model.model') sp.decode([9858, 9249, 1629, 1305, 1809, 53, 842])
{ "domain": "datascience.stackexchange", "id": 10772, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, nlp, pytorch, bert, transformer", "url": null }
homework-and-exercises, statistical-mechanics, continuum-mechanics, soft-matter Title: Estimate the persistence length of a rubber band Not much more to say here, it's all in the question. The best, most convincing estimate will be chosen as the correct answer. EDIT: Assume the rubber band is at room temperature, with thickness $t$ and width $w$, is linear (i.e. not circular), and was produced under standard factory conditions. What is the persistence length $P$ as a function of $w$ and $t$? According to this link the persistence length is estimated from $$P = \frac{EI}{k_B T}$$ The second moment of area of a rectangular section with width $w$ and thickness $t$ is given by $$I = \frac{w t^3}{12}$$ The Young's modulus of rubber can be taken as 10 MPa (although it varies a lot... we will ignore that and stick to the lower limit). For a typical rubber band with t = 1 mm, w = 5 mm we obtain $$P \approx 1\cdot 10^{15}\ \mathrm{m}$$ This is not inconsistent with the calculation that was given in the above link for a piece of spaghetti, for which they calculated $10^{18}\mathrm{m}$.
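The estimate is a one-liner to verify. This is my own sketch, plugging in the textbook second moment of area $I = wt^3/12$ for a rectangular cross-section bending about its centroid, with the same material values used in the answer (E = 10 MPa, t = 1 mm, w = 5 mm, room temperature).

```python
# Persistence length estimate P = E*I / (kB*T) for a rubber band with a
# rectangular cross-section, using I = w * t**3 / 12.
kB = 1.380649e-23   # J/K, Boltzmann constant
T = 298.0           # K, room temperature
E = 10e6            # Pa, lower-limit Young's modulus of rubber
t = 1e-3            # m, thickness
w = 5e-3            # m, width

I = w * t**3 / 12           # second moment of area, m^4
P = E * I / (kB * T)        # persistence length, m
print(f"{P:.2e} m")         # on the order of 1e15 m
```

The sheer size of the number (far larger than the band itself) is the point: thermal fluctuations are utterly negligible for bending a macroscopic elastic strip.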
{ "domain": "physics.stackexchange", "id": 19480, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, statistical-mechanics, continuum-mechanics, soft-matter", "url": null }
programming-challenge, functional-programming, f# [<EntryPoint>] let main argv = printfn "Solution to Project Euler 1: %i" ([ 1 .. 999 ] |> Seq.filter (fun x -> x % 3 = 0 || x % 5 = 0) |> Seq.sum) printfn "Solution to Project Euler 2: %i" (1 :: 2 :: (fibsRec 1 2 4000000) |> Seq.filter (fun x -> x % 2 = 0) |> Seq.sum) printfn "Solution to Project Euler 3: %i" (findMaxPrimeFactor 600851475143L) printfn "Solution to Project Euler 4: %i" (findLargestPalindrome [ 100 .. 999 ]) printfn "Solution to Project Euler 5: %i" (findSmallestMultiple [| 1 .. 20 |]) System.Console.ReadLine() |> ignore 0 // return an integer exit code After working for far too long on this, here is a review focusing mainly on performance issues, and possibly introducing some non functional stuff along the way. Let's take it problem by problem. But let me start by stating that your code looks nice and clean, and without knowing any official style guides for F#, it is quite readable once you get a hang of the different F# idioms. One thing I think someone might mention though is to gather the open ... stuff at the beginning, and possibly to use multiline comments here and there. Problem 1: Sum of multiples of 3 and 5 This being a rather simple variant, one could think there isn't much to optimize, but I found two ways to optimize your variant: Instead of using an in-place function as predicate for Seq.filter, use a predefined predicate. This cut the time down to 10% of the original. Instead of using a list, use a sequence to fill the pipe; this reduces the time by a third or so.
{ "domain": "codereview.stackexchange", "id": 17189, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "programming-challenge, functional-programming, f#", "url": null }
python, matplotlib fig, axes = plt.subplots(2, 2) axes = axes.flatten() seasons = zip(axes, [winter, spring, summer, autumn]) for pair in seasons: im = pair[0].contourf(pair[1], cmap=segmented_map, norm=divnorm, vmin=-7, vmax=10) plt.colorbar(im, ax=pair[0]) Figure 2. Diverging colormap with 0 as the center. Not quite, because although you supply contourf() with the overall vmin and vmax and your derived colormap, the levels (i.e. ticks) in the colorbar are not the same for the four plots ("WHY?!?", you scream)! Aha, you find that you need to supply the same levels to contourf() in all four subplots (based on this: https://stackoverflow.com/questions/53641644/set-colorbar-range-with-contourf). But how do you exploit the functionality of contourf, that automatically chooses appropriate contour levels? You think that you could invisibly plot the four individual images, and then extract the color levels from each (im = contourf(), im.levels, https://matplotlib.org/3.3.1/api/contour_api.html#matplotlib.contour.QuadContourSet) and from this create a unique set of levels that combines the maximum and minimum from the extracted color levels. # -1- Create the pseudo-figures for extraction of color levels: fig, axes = plt.subplots(2, 2) axes = axes.flatten() seasons = zip(axes, [winter, spring, summer, autumn]) c_levels = [] for pair in seasons: im = pair[0].contourf(pair[1], cmap=segmented_map, norm=divnorm, vmin=-7, vmax=10) c_levels.append(im.levels) # -1.1- Clear the figure plt.clf()
{ "domain": "codereview.stackexchange", "id": 40123, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, matplotlib", "url": null }
nmr-spectroscopy So the simplified rule would be that couplings along three bonds are visible, couplings along more than three bonds are only visible when there is at least one C=C double bond along the way. The concepts of chemical and magnetic equivalence are essential to understanding how different multiplicities arise. Chemical equivalence means there exists a symmetry operation that exchanges those nuclei. Magnetically equivalent nuclei additionally need to have identical couplings to all other spins in the molecule. Magnetically equivalent nuclei don't couple to each other. In propane the hydrogens of both CH3 groups are three bonds away from the hydrogens of the CH2 group, so the coupling between those is visible in the NMR spectrum. The hydrogens of each CH3 group are magnetically equivalent, due to the fast rotation along the C–C bond, and the two CH3 groups should also be magnetically equivalent. So you have 6 magnetically equivalent hydrogens that couple to your CH2 hydrogens. The result of that is a splitting into a septet (7). The OH-group is an interesting exception, as you would expect it to lead to a visible coupling on hydrogens connected to the same carbon, but you don't observe that under most conditions. The reason is that the OH is acidic enough that the hydrogen exchanges quickly with the solvent, so the hydrogen dissociates and associates quickly. This happens too fast for NMR, so the other nuclei only see the average OH-hydrogen. This eliminates the coupling to the OH, and it is also the reason why the OH-signal is often very broad or even completely gone in NMR spectra.
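The septet arises from the n+1 multiplicity rule, which is easy to tabulate. This is my own sketch (not part of the answer): n magnetically equivalent spin-1/2 neighbours split a signal into n+1 lines with binomial (Pascal's triangle) relative intensities.

```python
import math

# n+1 rule for first-order coupling to n equivalent spin-1/2 nuclei:
# the multiplet has n+1 lines with binomial intensities C(n, k).
def multiplet(n):
    """Return (number of lines, relative intensities) for n equivalent 1H."""
    return n + 1, [math.comb(n, k) for k in range(n + 1)]

lines, intensities = multiplet(6)   # the CH2 of propane: 6 CH3 neighbours
print(lines, intensities)           # 7 [1, 6, 15, 20, 15, 6, 1] -> septet
```

The outermost lines carry only 1/64 of the total intensity here, which is why the edges of high multiplets are often hard to see above the noise.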
{ "domain": "chemistry.stackexchange", "id": 8, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nmr-spectroscopy", "url": null }
human-anatomy Title: Difference between Appendix and the Cecum? What's the difference between an appendix and a cecum, and what are their functions? In herbivores the Cecum is an area that stores plant matter and helps digest it via symbiotic bacteria. Carnivores have smaller Cecums because meat is easier to digest than plant matter. In humans the Cecum is also an anatomical landmark that delineates the change from small intestine (a digesting organ) to the large intestine (mostly a capacity/storage organ). The Appendix is a small, previously thought "superfluous" fleshy worm-shaped organ at the junction between the small and large intestines. Recent research posits that the appendix is sort of a harbor for a person's gut flora that can re-populate the intestines should the existing bacteria die or get removed (diarrhea being the most common cause). It can also become infected, inflamed, and require surgery to remove (Appendicitis).
{ "domain": "biology.stackexchange", "id": 8680, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "human-anatomy", "url": null }
c++, object-oriented std::cout << getElement(i, j) << ", "; } std::cout << "}\n"; } } private: // Computes the inner product between // vec1 and vec2, both vectors of the given length. int innerProduct(int* vec1, int* vec2, int length) { int sum = 0; for (int i = 0; i < length; i++) { sum += vec1[i] * vec2[i]; } return sum; } };
{ "domain": "codereview.stackexchange", "id": 40488, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, object-oriented", "url": null }
# Mechanics Equilibrium Involving a Slope - Faulty Answer? • Jun 5th 2009, 12:47 PM StaryNight Mechanics Equilibrium Involving a Slope - Faulty Answer? I've just completed a question in the 'practice exam paper' section of my textbook, to find that I have the answer wrong. However, I can't see how the answer given can be right. Here's the question: A suitcase of mass 16 kg is placed on a ramp inclined 15 degrees to the horizontal. The coefficient of friction between the suitcase and the ramp is 0.4. Determine whether the suitcase rests in equilibrium on the ramp, and state the magnitude of the frictional force on the suitcase. My working is as follows: Friction = 0.4 * R (normal contact force) Friction = 0.4 * 9.8 * 16 * cos(15) = 60.58N (2dp) Weight parallel to slope = 9.8 * 16 * sin (15) = 40.58N(2dp) The answers in the back of the book state that the case is in equilibrium with a frictional force of 40.6N. I fail to see how this can be so, with a frictional coefficient of 0.4. Do you think the answers are incorrect or is there something I'm missing? Help would be appreciated! • Jun 5th 2009, 01:32 PM skeeter Quote: Originally Posted by StaryNight I've just completed a question in the 'practice exam paper' section of my textbook, to find that I have the answer wrong. However, I can't see how the answer given can be right. Here's the question: A suitcase of mass 16 kg is placed on a ramp inclined 15 degrees to the horizontal. The coefficient of friction between the suitcase and the ramp is 0.4. Determine whether the suitcase rests in equilibrium on the ramp, and state the magnitude of the frictional force on the suitcase. My working is as follows: Friction = 0.4 * R (normal contact force) Friction = 0.4 * 9.8 * 16 * cos(15) = 60.58N (2dp) Weight parallel to slope = 9.8 * 16 * sin (15) = 40.58N(2dp)
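The book's 40.6 N is consistent once static friction is treated as a reaction force capped at μR rather than always equal to μR: the friction only needs to balance the along-slope weight, and it can, since 40.58 N < 60.58 N. A quick numeric check (my own sketch, not from the thread):

```python
import math

# Suitcase on a ramp: equilibrium holds if the along-slope component of
# gravity does not exceed the *maximum* available static friction mu*R.
# The friction actually acting then equals the along-slope weight.
m, g, theta, mu = 16.0, 9.8, math.radians(15), 0.4

along_slope = m * g * math.sin(theta)        # ~40.58 N, force to balance
max_friction = mu * m * g * math.cos(theta)  # ~60.58 N, friction available

in_equilibrium = along_slope <= max_friction
friction_acting = along_slope if in_equilibrium else max_friction
print(in_equilibrium, round(friction_acting, 2))  # True 40.58
```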
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9669140197044659, "lm_q1q2_score": 0.8275344692277266, "lm_q2_score": 0.855851143290548, "openwebmath_perplexity": 560.8544955990451, "openwebmath_score": 0.6576614379882812, "tags": null, "url": "http://mathhelpforum.com/math-topics/91912-mechanics-equlibrium-involving-slope-faulty-answer-print.html" }
ros2, c++ RCLCPP_INFO_STREAM(get_logger(), "[Output PointCloud] width " << cloud_out_ptr->width << " height " << cloud_out_ptr->height); } Originally posted by ravijoshi with karma: 1744 on 2022-09-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dvy on 2022-09-29: @rajivjoshi : Thanks a lot! It worked like a charm. By the way, can you please also explain the reason for the failure and the fix? Comment by ravijoshi on 2022-09-29: I am glad it worked. Notice that you are calling the class directly. Pay attention to the () after PCLPointCloud2. The pcl::PCLPointCloud2::Ptr is a smart pointer, so you need to provide only the class name, i.e., without (). Next, the voxelGrid filter requires a smart pointer as input, so I created it at the beginning and used it inside pcl_conversions::toPCL. Notice the dereferencing of the pointer here.
{ "domain": "robotics.stackexchange", "id": 38000, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2, c++", "url": null }
java, stream @Override public BiConsumer<Map<K, Collection<V>>, Map.Entry<K, V>> accumulator() { return (map, entry) -> { map.computeIfAbsent(entry.getKey(), k -> new HashSet<V>()) .add(entry.getValue()); }; } @Override public BinaryOperator<Map<K, Collection<V>>> combiner() { return (map1, map2) -> { final Map<K, Collection<V>> combined = new ConcurrentHashMap<>(map1); map2.forEach((map2Key, map2Observations) -> { combined.computeIfAbsent(map2Key, k -> new HashSet<V>()) .addAll(map2Observations); }); return combined; }; } @Override public Function<Map<K, Collection<V>>, Map<K, Collection<V>>> finisher() { return m -> m; } @Override public Set<Characteristics> characteristics() { return Set.of(Characteristics.CONCURRENT, Characteristics.UNORDERED); } } and replace the call with .collect(new EntryToMapCollector<Integer, MyObject>()); https://repl.it/@trajano/ForkedSpryRobot Have you looked at .groupingBy() and .toSet()? Map<Integer, Set<MyObject>> collected = Stream.of(...) .collect(Collectors.groupingBy(Map.Entry::getKey, Collectors.mapping(Map.Entry::getValue, Collectors.toSet()))); You possibly want .groupingByConcurrent() instead, though it is not exactly clear why. Not a repl.it, but a JShell log: U:\>jshell | Welcome to JShell -- Version 13 | For an introduction type: /help intro jshell> class MyObject { } | created class MyObject
{ "domain": "codereview.stackexchange", "id": 36858, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, stream", "url": null }
In this case, we know the true model is $y=5+x+\epsilon$, but if we estimate the model without the intercept, the regression line is nowhere near the data! We can also see that the estimates of the coefficient of $x$ are incorrect. The correctly-specified model estimates $x$ very close to 1, while the model omitting the intercept is not at all close to 1. It's hard to discern the effect in the graph, though. model1 Call: lm(formula = y ~ x + 1) Coefficients: (Intercept) x 4.991 1.000 model2 Call: lm(formula = y ~ x + 0) Coefficients: x 1.514 You're correct that in most cases you won't know the value of the intercept ahead of time. In general, the problems associated with omitting a feature is called "omitted variable bias." If the true model had two features $x_1$ and $x_2$, but we omitted one of them in the model, we would similarly have incorrect estimates of the remaining parameters. You can look at the confidence interval of the estimate to get a sense of how strongly the data support the hypothesis of a nonzero intercept. If all variables are transformed by subtracting their mean values, this effectively eliminates the need to estimate an intercept term, because the expectation of $y$ must be $0$ when all the features are also $0$. This comes at the cost of interpretability, though, since you'll be working on a new "scale." For example, if $x$ is temperature in Celsius, the $0$-mean temperature variable $x^\prime$ has moved the zero point to be the mean value of the data, so all of the nice properties about the location of the freezing point of water and so on move along with it. Whether this loss in direct, obvious interpretation is important to you depends on the particular application, of course.
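The bias from omitting the intercept is easy to reproduce numerically. The R session above uses simulated noisy data; here is my own Python sketch using noise-free data from the same true model $y = 5 + x$, so the effect is exact rather than approximate.

```python
# Omitted-intercept bias on exact data from y = 5 + x.
xs = [float(i) for i in range(1, 11)]
ys = [5 + x for x in xs]

# With intercept: ordinary least squares closed form.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Without intercept: the slope minimising sum((y - b*x)^2) is sum(xy)/sum(x^2),
# forcing the fitted line through the origin.
slope_no_int = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(slope, intercept)   # 1.0 5.0 -> recovers the true model
print(slope_no_int)       # ~1.71  -> biased, like the 1.514 in the R output
```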
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9732407160384083, "lm_q1q2_score": 0.8152058568275831, "lm_q2_score": 0.8376199673867852, "openwebmath_perplexity": 648.3445468366037, "openwebmath_score": 0.75254225730896, "tags": null, "url": "https://stats.stackexchange.com/questions/146042/significance-of-intercept-as-portrayed-via-an-r-formula" }
c++, performance, search, simulation Thank you for the nice manifest constants, they are very clear. comments where appropriate bin.first[bin.second++] = boid; It wouldn't hurt to mention to the Gentle reader that .first is a boid and .second is count of boids within a bin. (Similar to the QueryCount comment.) I see why it isn't necessary, but I'm still a little sad that clear() leaves all the .first pointers untouched. edge effect size_t newX = std::clamp(xIndex + dx, 0ul, BINS - 1); size_t newY = std::clamp(yIndex + dy, 0ul, BINS - 1); Hmmm, interesting. No "index out of bounds" errors. But bins at the edge of the screen will be double-counted, which seems undesirable. That is, if Percival the pigeon is flying near the center left edge, then result records a pair of pointers to Percival. And if he is flying near one of the corners, the effect is even worse. squared vs square root We pass in delta_x and delta_y: if (std::hypot(bin.first[i]->position.x - position.x, bin.first[i]->position.y - position.y) < radius) { Finding length of hypotenuse involves a slow sqrt(). But it would suffice to compare sum of squared deltas against radius². And then a good optimizing compiler would hoist it out of the loop, or you could help it out by assigning a radius_squared = radius * radius temp var. no velocity struct Boid { sf::Vector2f position; };
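The sqrt-avoidance point generalises beyond C++; the decision `hypot(dx, dy) < r` is equivalent to `dx² + dy² < r²` for non-negative `r`. Here is my own quick demonstration in Python (the review's codebase is C++, so this is an illustration of the identity, not a drop-in patch):

```python
import math

# Comparing squared distance against radius**2 gives the same boolean
# result as math.hypot against radius, without a square root per check.
def within_radius_hypot(dx, dy, radius):
    return math.hypot(dx, dy) < radius

def within_radius_squared(dx, dy, radius_squared):
    return dx * dx + dy * dy < radius_squared  # radius**2 hoisted by caller

radius = 5.0
r2 = radius * radius  # the radius_squared temp var from the review
samples = [(3.0, 3.9), (3.0, 4.1), (0.0, 5.0), (-2.0, -2.0)]
results = [(within_radius_hypot(dx, dy, radius),
            within_radius_squared(dx, dy, r2)) for dx, dy in samples]
print(all(a == b for a, b in results))  # True: identical decisions
```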
{ "domain": "codereview.stackexchange", "id": 45199, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, search, simulation", "url": null }
catkin-make -- Using CMAKE_PREFIX_PATH: /home/sam/code/ros_hydro/devel;/opt/ros/hydro -- This workspace overlays: /home/sam/code/ros_hydro/devel;/opt/ros/hydro -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Python version: 2.7 -- Using Debian Python package layout -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/sam/code/ros_hydro/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built CMake Warning (dev) at /opt/ros/hydro/share/catkin/cmake/tools/doxygen.cmake:40 (GET_TARGET_PROPERTY): Policy CMP0045 is not set: Error on non-existent target in get_target_property. Run "cmake --help-policy CMP0045" for policy details. Use the cmake_policy command to set the policy and suppress this warning.
{ "domain": "robotics.stackexchange", "id": 23175, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "catkin-make", "url": null }
classical-mechanics, energy, lagrangian-formalism, energy-conservation \end{align} contains a term linear in $\xi$ so you can then rewrite your potential choosing $b$ so $$ U(a)+U'(a)(x-a)+\frac12 U''(a)(x-a)^2\sim V_0+V_2\xi^2\, , $$ eliminating the linear term and leaving again only the quadratic term.
{ "domain": "physics.stackexchange", "id": 96615, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, energy, lagrangian-formalism, energy-conservation", "url": null }
javascript, regex, compiler, language-design

if (oSS[0] == "§") {

Always use === instead of ==. See Does it matter which equals operator (== vs ===) I use in JavaScript comparisons?. To conclude: yes, this code is ugly. You have a lot to learn.
{ "domain": "codereview.stackexchange", "id": 19435, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, regex, compiler, language-design", "url": null }
genetics, population-genetics, phylogenetics Title: Why are the SNPs from the earliest human haplogroups missing in people outside of Africa if they are in fact its descendants? The SNPs M91, P97, M31, P82, M23, M114, P262, M32, M59, P289, P291, P102, M13, M171, M118 from haplogroup A and SNPs M60, M181, P90 from haplogroup B are missing in people without recent African ancestry. Considering these are the earliest humans in Africa, perhaps someone could explain genetically why they don't exist in non-African populations? Sources: Klyosov, A. A. (2011a). The slowest 22 marker haplotype panel (out of the 67 marker panel) and their mutation rate constants employed for calculations timespans to the most ancient common ancestors. Proceedings of the Russian Academy of DNA Genealogy. Cruciani, F., Trombetta, B., Sellitto, D., Massaia, A., Destro-Bisol, G., Watson, E., et al. (2010). Human Y chromosome haplogroup R-V88: A paternal genetic record of early mid Holocene trans-Saharan connections and the spread of Chadic languages. European Journal of Human Genetics, 18, 800-807. Simms, T. M., Martinez, E., Herrera, K. J., Wright, M. R., Perez, O. A., Hernandez, M. et al. (2011). Paternal lineages signal distinct genetic contributions from British Loyalists and continental Africans among different Bahamian islands. American Journal of Physical Anthropology, 146, 594-608. doi:10.1002/ajpa.21616 It looks like the information presented in the question is not reliable. Taking the first two Y SNPs mentioned, M91 and P97, it says these "are missing in people without recent African ancestry." Yet Wikipedia says that M91 and P97 are defining mutations of haplogroup BT and that
{ "domain": "biology.stackexchange", "id": 8934, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "genetics, population-genetics, phylogenetics", "url": null }
keras, tensorflow From a layer, i.e. layer.get_weights(): This is what you used. Here it will return the parameter arrays for a given layer. For example model.layers[1].get_weights() will return the parameter arrays for layer1. If layer1 has biases then this will return two arrays, one for the weights and one for the biases. I took the liberty of changing your code a bit to make this a bit more clear.

import numpy as np
import tensorflow as tf

f = lambda x: 2*x

Xtrain = np.random.rand(400, 5)  # 5 input features
ytrain = f(Xtrain)
Xval = np.random.rand(200, 5)  # 5 input features
yval = f(Xval)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(10, activation='relu'),  # this layer has 5 inputs and 10 outputs
    tf.keras.layers.Dense(1, activation='relu')    # this layer has 10 inputs and 1 output
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.MeanSquaredError())

model.fit(Xtrain, ytrain, epochs=1, verbose=0)

# I will be calling .get_weights() directly from the model,
# so we expect 4 arrays: 2 for each layer.
print('First layer weights:', model.get_weights()[0].shape)
print('First layer biases:', model.get_weights()[1].shape)
print('Second layer weights:', model.get_weights()[2].shape)
print('Second layer biases:', model.get_weights()[3].shape)

The output:

First layer weights: (5, 10)
First layer biases: (10,)
Second layer weights: (10, 1)
Second layer biases: (1,)
{ "domain": "datascience.stackexchange", "id": 7904, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "keras, tensorflow", "url": null }
javascript, recursion, mathematics Extract out Math.log2(n) into a variable. I think ES6+ gets you let and const - use them instead of var. As far as I remember, the former gets you proper scoping and the latter allows you to declare a named constant. I think my point (1) should have a const declaration. Type Coercion is not readable! Don't use implicit Boolean coercion of numbers in code which you claim to be "readable" - it isn't. Use proper comparisons in readable code, and leave the cool weak-typing hackery for the golfed version. How much does writing out the actual condition improve the readability of the code? Everything else seems to be fine - naming, indentation, and all. </NonCritical>

Suggested Code:

function Ackermann(m, n) {
    if (m < 0 || n < 0) {
        throw new TypeError("Arguments to Ackermann must be non-negative");
    }
    if (m == 0) {
        return n + 1;
    } else if (n == 0) {
        return Ackermann(m - 1, 1);
    } else {
        return Ackermann(m - 1, Ackermann(m, n - 1));
    }
}

function inverseAckermann(n) {
    const log2_n = Math.log2(n);
    let i = 0;
    // Taking `m` to be the constant 0 in the definition of the inverse Ackermann function
    do {
        ++i;  // Pre-increment implies fewer side effects
    } while (Ackermann(i, 0) < log2_n);
    return i;
}
{ "domain": "codereview.stackexchange", "id": 25969, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, recursion, mathematics", "url": null }
type-theory, dependent-types, coinduction Title: Do co-inductive and co-recursive types also have recursors? I'm new to type theory, and recently read introductory materials where dependent types are discussed. One of my friends asked me, "Dependent types have recursors and 'inductors' (dependent eliminators), but what about types that are co-inductive/co-recursive?" A related example would be much appreciated. To understand coinduction, it helps to understand the categorical presentation of induction (since, as far as I know, coinduction comes from dualizing it there). The idea behind induction is that we have a functor $F : C → C$, and we consider algebras $(A,\ α : F A → A)$. The operation $α$ specifies how to interpret the structure given by $F$ into $A$. Then, an inductively defined type is an initial algebra $(μF, \mathsf{in} : F μF → μF)$, which means there is a unique homomorphism $\mathsf{fold}_A : μF → A$ to any other algebra. This homomorphism is the recursor, and the uniqueness allows us to define the induction principle in various ways, depending on the setting. Coinduction flips most of these arrows around. Instead of algebras, we have coalgebras, $(A, α : A → F A)$. And instead of being initial, the coinductively defined type is final, so we get a unique $\mathsf{unfold}_A : A → νF$ from any other coalgebra. So, instead of a recursive way of "eliminating", we get a recursive way of "introducing," and the eliminators are non-recursive.
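As a concrete (if informal) illustration of the fold/unfold duality, here is a Python sketch of my own using finite lists — true coinductive types are possibly infinite streams, so take this as an analogy rather than a faithful model:

```python
def fold(step, init, xs):
    # catamorphism: the unique algebra homomorphism out of the list type;
    # consumes (eliminates) structure
    acc = init
    for x in reversed(xs):
        acc = step(x, acc)
    return acc

def unfold(coalg, seed):
    # anamorphism: the unique coalgebra homomorphism into the list type;
    # produces (introduces) structure. coalg(seed) returns None to stop,
    # or a pair (element, next_seed) to keep going.
    out = []
    while True:
        result = coalg(seed)
        if result is None:
            return out
        x, seed = result
        out.append(x)

# unfold introduces structure: count down from a seed
countdown = unfold(lambda n: None if n == 0 else (n, n - 1), 5)
assert countdown == [5, 4, 3, 2, 1]

# fold eliminates structure: sum the list back up
assert fold(lambda x, acc: x + acc, 0, countdown) == 15
```

The asymmetry the answer describes is visible here: `fold` recurses over existing structure to eliminate it, while `unfold` recurses to build structure, and observation of the result (taking the head, say) is non-recursive.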
{ "domain": "cs.stackexchange", "id": 12381, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "type-theory, dependent-types, coinduction", "url": null }
It is well known that if $M + M^T$ is negative semi-definite, then the eigenvalues of $M$ have non-positive real part (see this post, for instance). In your case, we find that $M = A - \epsilon \,e_ie_i^T$ is such that $M + M^T$ is a diagonal matrix with non-positive real entries on the diagonal. We conclude that $M + M^T$ is negative semi-definite, from which we deduce that the eigenvalues have non-positive real part, as you suspected. Notably, there are examples where we still have some purely imaginary eigenvalues in the resulting matrix. In particular: $$\pmatrix{0&0&0\\0&0&-1\\0&1&0} \to \pmatrix{-\epsilon&0&0\\0&0&-1\\0&1&0}$$ has eigenvalues $-\epsilon, \pm i$. I suspect that such an "unlucky" event has probability zero if $A$ is a suitably "random" skew-symmetric matrix. An obvious perturbation to move the eigenvalues over is to consider $A - \epsilon I$ instead (that is, apply the subtraction to every diagonal entry). • Thank you. 1) In line three you write "$M+M^T$ is a diagonal matrix with non-positive real entries on the diagonal", but isn't this matrix the zero matrix with a $-2\epsilon$ added on entry $ii$? Did you use the term "non-positive" to include the zeros on the other diagonal entries? 2) How can one show that $A-\epsilon I$ has eigenvalues with exclusively negative real parts? I'm probably missing something obvious. – Bobson Dugnutt Jun 5 '17 at 17:15
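The counterexample above can be verified without any libraries; this is a small sketch of my own, checking that $-\epsilon$ and $\pm i$ are roots of the characteristic polynomial of the perturbed matrix:

```python
def det3(m):
    # determinant of a 3x3 matrix given as nested lists (entries may be complex)
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def char_poly(m, lam):
    # p(lambda) = det(M - lambda*I); its roots are the eigenvalues of M
    shifted = [[m[r][c] - (lam if r == c else 0) for c in range(3)] for r in range(3)]
    return det3(shifted)

eps = 0.25
M = [[-eps, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

# the claimed eigenvalues -eps, +i, -i all annihilate the characteristic polynomial
for lam in (-eps, 1j, -1j):
    assert abs(char_poly(M, lam)) < 1e-12
```

The $A - \epsilon I$ variant mentioned at the end simply shifts every eigenvalue left by $\epsilon$, so all real parts become strictly negative.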
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9888419703960399, "lm_q1q2_score": 0.8174873257312261, "lm_q2_score": 0.8267118004748678, "openwebmath_perplexity": 319.54415559691324, "openwebmath_score": 0.8878233432769775, "tags": null, "url": "https://math.stackexchange.com/questions/2310841/diagonal-perturbation-of-skew-symmetric-matrix" }
homework-and-exercises, schroedinger-equation, boundary-conditions, parity For $-a<x<a$: $$\frac{-\hbar^{2}}{2m}\frac{d^{2}\psi}{dx^{2}}-V_{0}\psi=E\psi \\ \Rightarrow \frac{d^{2}\psi}{dx^{2}}=\frac{-2m(E+V_{0})}{\hbar^{2}}\psi \\ \Rightarrow \psi=A\cos(\lambda x)+B\sin(\lambda x),\quad \lambda=\sqrt{2m(E+V_{0})}/\hbar$$ Otherwise: $$\frac{-\hbar^{2}}{2m}\frac{d^{2}\psi}{dx^{2}}=E\psi \\ \Rightarrow \frac{d^{2}\psi}{dx^{2}}=\frac{-2mE}{\hbar^{2}}\psi \\ \Rightarrow \psi=Ce^{\mu x}+De^{-\mu x},\quad \mu=\sqrt{-2mE}/\hbar$$ I then used the fact that both the wave function and its derivative must be continuous to get: $$A\cos(\lambda a)+B\sin(\lambda a)=Ce^{\mu a}+De^{-\mu a}\\ A\cos(\lambda a)-B\sin(\lambda a)=Ce^{-\mu a}+De^{\mu a} \\ -A\lambda\sin(\lambda a)+B\lambda\cos(\lambda a)=C\mu e^{\mu a}-D\mu e^{-\mu a} \\ A\lambda\sin(\lambda a)+B\lambda\cos(\lambda a)=C\mu e^{-\mu a}-D\mu e^{\mu a}$$
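Since the potential is even, the eigenfunctions can be chosen with definite parity, which collapses the four matching conditions into two transcendental equations — a standard step, not taken in the post itself; here $D$ denotes the surviving decaying amplitude for $x>a$:

```latex
% Even solutions: \psi = A\cos(\lambda x) inside, D e^{-\mu x} for x > a.
%   Continuity:   A\cos(\lambda a)        = D e^{-\mu a}
%   Derivative:  -A\lambda\sin(\lambda a) = -D\mu e^{-\mu a}
% Dividing the second relation by the first:
\lambda\tan(\lambda a) = \mu
% Odd solutions: \psi = B\sin(\lambda x) inside; the analogous quotient gives:
\lambda\cot(\lambda a) = -\mu
```

These are the usual finite-square-well quantization conditions for the even and odd bound states.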
{ "domain": "physics.stackexchange", "id": 13190, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, schroedinger-equation, boundary-conditions, parity", "url": null }
nginx Title: Nginx config with some conditionals

This config has 3 main features but there seems to be a lot of duplication and I wonder if I could improve it.

1. Detect all pngs and jpgs in static/img and try a webp version if the requesting browser supports it
2. Detect non-ES6-supporting browsers and serve site.babel.js, otherwise serve site.js which is un-babelified
3. Proxy all other requests to a node app running on port 3000

upstream node_upstream {
    server node:3000;
    keepalive 64;
}

#Required since SSL termination is higher up at the AWS load balancer
map $http_x_forwarded_proto $is_https {
    default off;
    https on;
}

map $http_accept $webp_suffix {
    default "";
    "~*webp" ".webp";
}

map $http_user_agent $script_file {
    default "site.js";
    "~MSIE" "site.babel.js";
    "~Trident" "site.babel.js";
    "~Opera.*Version/[0-9]\." "site.babel.js";
    "~Opera.*Version/[0-1][0-9]\." "site.babel.js";
    "~Opera.*Version/2[0-1]\." "site.babel.js";
    "~AppleWebKit.*Version/[0-9]\..*Safari" "site.babel.js";
    "~Chrome/[0-9]\." "site.babel.js";
    "~Chrome/[0-2][0-9]\." "site.babel.js";
    "~Chrome/3[0-3]\." "site.babel.js";
    "~Chrome/4[0-3]\." "site.babel.js";
    "~Edge/1[0-3]\." "site.babel.js";
}

server {
    root /var/www/html/src;
    gzip on;
{ "domain": "codereview.stackexchange", "id": 33703, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nginx", "url": null }
data-structures, search-trees Nievergelt J. and Reingold E.M., "Binary search trees of bounded balance", Proceedings of the fourth annual ACM symposium on Theory of computing, pp 137--142, 1972. Here is a page on weight-balanced trees which seems to have some more information and mentions their usage in functional languages. This paper: On Average number of rebalanced operations in weight balanced trees, seems to be investigating the number of rebalancing operations in weight-balanced trees. I also seem to remember that one of Knuth's Art of ... books had a reference to the above Nievergelt and Reingold paper (perhaps in the exercises).
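To make the "bounded balance" idea concrete, here is a small Python sketch of my own (not from the cited papers) that checks the BB[α] weight invariant on a toy tree; the convention weight(empty) = 1 follows the usual definition:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def weight(node):
    # weight = subtree size + 1 (empty subtrees count as 1)
    if node is None:
        return 1
    return weight(node.left) + weight(node.right)

def is_weight_balanced(node, alpha=0.25):
    # BB[alpha] invariant: each child's weight is between alpha and
    # (1 - alpha) times its parent's weight, at every node
    if node is None:
        return True
    w = weight(node)
    for child in (node.left, node.right):
        if not (alpha <= weight(child) / w <= 1 - alpha):
            return False
    return is_weight_balanced(node.left, alpha) and is_weight_balanced(node.right, alpha)

balanced = Node(2, Node(1), Node(3))
assert is_weight_balanced(balanced)

# a right-leaning chain violates the invariant at the root
chain = Node(1, None, Node(2, None, Node(3, None, Node(4))))
assert not is_weight_balanced(chain)
```

Insertion and deletion in a BB[α] tree restore this invariant by rotations whenever some node's ratio drifts outside [α, 1−α], which is what the cited rebalancing analyses count.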
{ "domain": "cs.stackexchange", "id": 1362, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "data-structures, search-trees", "url": null }
evolution, taxonomy, ornithology Title: Birds and Dinosaurs This came up in an argument with some friends. I know that birds are direct descendants of dinosaurs, shown pretty clearly through the fossil record. However, is it proper to say that birds are dinosaurs, or is there an actual distinction? I bet you'll be interested in the concept of monophyly. Any human-made group of species (or taxon) like birds, dinosaurs, primates, bacteria, angiosperms, reptiles, … is either monophyletic, polyphyletic or paraphyletic. This picture explains the concept. When the taxon is monophyletic it is called a clade. Monophyletic taxa are those groups of species that can be considered objective in the sense that each species in the taxon is more closely related (in terms of time to the common ancestor, not according to genetic similarity) to any other species within the same taxon than to any species outside it. This is obviously not the case for paraphyletic or polyphyletic taxa. Typically, we do not consider a parrot or a deer to be reptiles. Therefore, the usual understanding of "reptiles" makes this taxon paraphyletic. Now, one should not confuse the common understanding (what a reptile is in our everyday life) with the strict definition of the taxon Reptilia, which is a monophyletic taxon (a clade, in other words). Probably the best source for exploring the tree of life is tolweb.org. Here, you will find the clade Reptilia (which includes birds, snakes, turtles and lizards). Note: Mammals are within the Reptiliomorpha, not the Reptilia. It is exactly the same issue with the dinosaurs. When we talk about dinosaurs in everyday life we do not mean birds. But there is a clade called Dinosauria, which includes both dinosaurs and birds. In short, I would say that a bird is a Dinosauria (monophyletic taxon) but is not a dinosaur (paraphyletic taxon).
But this little play on words is not a scientific issue but a question of English usage. You will also find in this post an introduction to phylogeny
{ "domain": "biology.stackexchange", "id": 1780, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "evolution, taxonomy, ornithology", "url": null }
cosmology, energy-conservation, reference-frames, universe, space-expansion Title: How is energy conserved? From what I’ve heard, there is no absolute grid you can put on the universe, and therefore you can only ever talk about relative motion. I’ve also heard that energy is conserved and constant no matter your location or velocity. But how does this work with kinetic energy? Surely if I start walking in a certain direction, all the galaxies in that direction have decelerated, and those behind me will have accelerated by the same amount. But since the equation for kinetic energy is $E = (1/2)mv^2$, this means that the galaxies which have, from my perspective, accelerated, have gained more kinetic energy than those which have, from my perspective, decelerated. How does this work? I’m assuming that the energy changes that happen when I walk are insignificant compared to the extra kinetic energy gained by billions of galaxies. There is no single preferred frame for examining the universe, but that doesn't mean all frames are identical. In fact, for many forms of analysis, we require an inertial frame. The fact that you start walking does not mean that every other object in the universe has accelerated. It means that you are examining them from another frame of reference or you are using a frame that is itself accelerating (non-inertial). Energy is not conserved when translating between different frames of reference. To keep things sane, you need to pick one (inertial) frame of reference and stick to it. You have two obvious ones in your scenario. One frame in which you are at rest before you begin walking, and a frame in which you are at rest after you begin walking. In either frame, the only object that we consider to change speed is you, so your KE changes before and after $t=0$, while the other objects do not.
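A toy non-relativistic calculation (my own numbers, purely illustrative) makes the frame dependence of kinetic energy concrete:

```python
def kinetic_energy(mass, velocity):
    # classical kinetic energy E = (1/2) m v^2
    return 0.5 * mass * velocity ** 2

m = 1.0                 # kg
v1, v2 = 1.0, -1.0      # two equal masses approaching each other at 1 m/s

# total KE in the original rest frame: 0.5 J + 0.5 J
ke_rest = kinetic_energy(m, v1) + kinetic_energy(m, v2)

# the same two masses viewed from a frame boosted by 1 m/s
# (the observer "starts walking"): velocities become 2 and 0 m/s
boost = 1.0
ke_boosted = kinetic_energy(m, v1 + boost) + kinetic_energy(m, v2 + boost)

assert ke_rest == 1.0      # total KE in frame 1
assert ke_boosted == 2.0   # total KE in frame 2 differs
```

Within either frame the total energy is conserved over time; it is only the *value* of the kinetic energy that differs between frames, which is why one must pick a single inertial frame and stick to it.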
{ "domain": "physics.stackexchange", "id": 47922, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cosmology, energy-conservation, reference-frames, universe, space-expansion", "url": null }
ros-hydro, ubuntu-precise, ubuntu Originally posted by l0g1x with karma: 1526 on 2014-08-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by mysteriousmonkey29 on 2014-08-13: Ok, I removed the package and redownloaded it with the modified git clone line you suggested. Then I navigate into gps_common, and rosmake, but receive the following response: [ rosmake ] rosmake starting... [ rosmake ] No package or stack specified. And current directory 'gps_common' is not a package name or stack name. [ rosmake ] Packages requested are: [] [ rosmake ] Logging to directory /home/randy/.ros/rosmake/rosmake_output-20140813-145138 [ rosmake ] Expanded args [] to: [] [ rosmake ] ERROR: No arguments could be parsed into valid package or stack names. Comment by mysteriousmonkey29 on 2014-08-13: I also tried rosmake gps (tab complete), and it filled in gpsd_client, which I then ran. No errors, but I'm not sure if this is what I wanted to do. Then I catkin_made just in case I did anything good. Again, I can find no reference in the build log to anything containing "gps." Comment by mysteriousmonkey29 on 2014-08-13: Although the error I receive when I try to run the node as originally described is slightly different (more complete now)!: [rospack] Error: stack/package gps_common not found [rosrun] Couldn't find executable named utm_odometry_node below Comment by l0g1x on 2014-08-14: manually download the zip, extract just the gps_common folder inside the zip into your own src folder, go to the gps_common folder in your src folder, type rosmake
{ "domain": "robotics.stackexchange", "id": 19041, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-hydro, ubuntu-precise, ubuntu", "url": null }
time, acceleration, differentiation Now here we are trying to find the variation of $\vec v$ in an infinitesimal interval $\mathrm dt$. We don't need the variation of $\vec v$ over an infinitesimal interval $\mathrm dt$; we need the derivative of $\vec v$ with respect to $t$ at each time instant $t$. The acceleration vector of a particle at each time instant is defined as the derivative of the particle's velocity vector at that instant, nothing else.
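In symbols, the definition being invoked is the pointwise limit (standard, added here for reference):

```latex
\vec a(t) \;=\; \frac{d\vec v}{dt}
        \;=\; \lim_{\Delta t \to 0}\frac{\vec v(t+\Delta t)-\vec v(t)}{\Delta t}
```

The "variation over an infinitesimal interval" is only the numerator of this quotient; the acceleration is the limit of the whole ratio.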
{ "domain": "physics.stackexchange", "id": 30959, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "time, acceleration, differentiation", "url": null }
dsp-core, bandpass, digital-filters, fixed-point The term (1<<14) is used to ensure that the result is rounded to the nearest integer after the bit shift operation. Note that >>15 is practically an integer division by 32768, but be aware - it is not completely equivalent! Bit shift always rounds toward minus infinity, while integer division always rounds toward zero. The filter parameter values are as follows: B0=31771, B1=-63509, B2=31769, A1=-63509, A2=30772. This can also be found in the provided MATLAB code. I was really shocked when I realized that the integer variant of the filter doesn't work at all. Please see below the responses of three different filters used to remove a $100\textrm{ Hz}$ component from the input signal. From top to bottom:

1. Notch filter in floating point implementation
2. Notch filter in integer arithmetic implementation using $r=15$
3. A simple moving average filter, with a $10~\text{ms}$ ($200$ samples) window.

Here is how I've implemented the moving average filter in C:

uk200 = window[ind];
window[ind] = uk0;
win_sum = win_sum - uk200 + uk0;
yk0 = (((win_sum+(1<<3))>>4)*5243+(1<<15))>>16;
ind++;
if (ind==200) ind=0;

Question to the community: since I don't have much experience in digital filtering, can someone confirm whether these results are expected?
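The three rounding behaviors mentioned above can be checked directly. In this Python sketch, note that Python's `>>` on negative integers is an arithmetic shift rounding toward minus infinity, which matches the description here (in C, right-shifting a negative signed value is implementation-defined, though most compilers behave the same way):

```python
def shift_round(x, r=15):
    # add half an LSB before shifting: rounds to the nearest integer
    return (x + (1 << (r - 1))) >> r

def shift_trunc(x, r=15):
    # plain arithmetic shift: rounds toward minus infinity
    return x >> r

def int_div(x, r=15):
    # C-style integer division: rounds toward zero
    q = abs(x) // (1 << r)
    return q if x >= 0 else -q

x = -49152  # = -1.5 * 32768, so the "true" quotient is -1.5
assert shift_trunc(x) == -2   # floor(-1.5)
assert int_div(x) == -1       # trunc(-1.5)
assert shift_round(x) == -1   # nearest (ties toward +inf)
```

For positive inputs all three agree except at the half-way ties, which is exactly why the `(1<<14)` pre-add matters only for rounding quality, not for the gross filter failure described above.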
{ "domain": "dsp.stackexchange", "id": 4714, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dsp-core, bandpass, digital-filters, fixed-point", "url": null }
object-oriented, objective-c, singleton

    ABAddressBookRequestAccessWithCompletion(m_addressbook, ^(bool granted, CFErrorRef error) {
        accessGranted = granted;
        dispatch_semaphore_signal(sema);
    });
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    }

    if (accessGranted) {
        // Access has been granted
        self.contacts = (__bridge NSArray *)(ABAddressBookCopyArrayOfAllPeople(m_addressbook));
        return YES;
    } else {
        // Access has not been granted
        return NO;
    }
}

@end

HALContact.h:

#import "HALAddressBook.h"

@interface HALContact : NSObject

#pragma mark - Properties
@property NSArray *phoneNumbers;
@property NSString *mainPhoneNumber;
@property NSString *firstName;
@property ABRecordRef contactRef;

#pragma mark - Instance Methods
- (BOOL)hasMultiplePhoneNumbers;

@end

HALContact.m:

#import "HALContact.h"

@interface HALContact ()
@end

@implementation HALContact

- (BOOL)hasMultiplePhoneNumbers {
    if (self.phoneNumbers.count > 1) {
        return YES;
    } else {
        return NO;
    }
}

@end

HALUserDefaults.h:

@interface HALUserDefaults : NSObject

#pragma mark - Class Methods
+ (NSArray *)retrieveHalfImageMessages;
+ (NSArray *)retrieveFullImageMessages;
+ (void)storeHalfImageMessages:(id)halfImageMessages;
+ (void)storeFullImageMessages:(id)fullImageMessages;
+ (void)storeUsername:(NSString *)username;

@end

HALUserDefaults.m:

#import "HALUserDefaults.h"

@interface HALUserDefaults ()
@end

@implementation HALUserDefaults
{ "domain": "codereview.stackexchange", "id": 8824, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "object-oriented, objective-c, singleton", "url": null }
c#, beginner, homework, checksum

    summeSusumme = sumEvenNumbers + sumOddNumbers;
    Console.WriteLine($"Your checksum: ");
    for (int i = 0; i < numbers.Length; i++)
    {
        Console.Write(numbers[i]);
    }
    Console.Write("-");
    Console.Write(summeSusumme%10);
}

static void Main(string[] args)
{
    JugglingWithNumbers();
    Console.ReadLine();
}
}

You have a lot of very similar code repeated. This is a good hint that you can benefit from introducing a helper method to remove the duplication. For example, you have the need to output a list of numbers to the console multiple times. Start off by just extracting your code:

private static void PrintNumbers(string heading, int[] numbers)
{
    Console.WriteLine(heading);
    for (int i = 0; i < numbers.Length; i++)
    {
        Console.WriteLine(numbers[i]);
    }
}

You can now change the printing in the middle of your code to something much easier to read:

PrintNumbers("Here are your numbers one more time - control output:", numbers);
PrintNumbers("Even numbers", evenNumber);
PrintNumbers("Odd Numbers", oddNumber);

As well as being easier to read, you can now be sure that all the output is consistent. It's worth now refactoring the implementation of PrintNumbers:

private static void PrintNumbers(string heading, IEnumerable<int> numbers)
{
    Console.WriteLine(heading);
    Console.WriteLine(string.Join(Environment.NewLine, numbers));
}
{ "domain": "codereview.stackexchange", "id": 43180, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner, homework, checksum", "url": null }
ros, gazebo, ros-fuerte, overlay

[rosmake-1] Finished <<< kdl ROS_NOBUILD in package kdl
 No Makefile in package kdl
[rosmake-2] Starting >>> eigen_conversions [ make ]
[rosmake-1] Starting >>> tf_conversions [ make ]
[rosmake-0] Starting >>> kdl_parser [ make ]
[rosmake-0] Finished <<< kdl_parser ROS_NOBUILD in package kdl_parser
[rosmake-0] Starting >>> mk [ make ]
[rosmake-2] Finished <<< eigen_conversions ROS_NOBUILD in package eigen_conversions
[rosmake-1] Finished <<< tf_conversions ROS_NOBUILD in package tf_conversions
[rosmake-1] Starting >>> robot_state_publisher [ make ]
[rosmake-0] Finished <<< mk No Makefile in package mk
[rosmake-1] Finished <<< robot_state_publisher ROS_NOBUILD in package robot_state_publisher
[rosmake-3] Finished <<< gazebo_msgs [PASS] [ 32.08 seconds ]
[rosmake-3] Starting >>> gazebo [ make ]
[ rosmake ] Last 40 lineszebo: 906.0 sec ] [ 1 Active 63/68 Complete ]
{-------------------------------------------------------------------------------
touch wiped
mkdir -p build
if [ ! -f gazebo-r54c81cc8dd7c.tgz.md5sum ]; then echo "Error: Couldn't find md5sum file gazebo-r54c81cc8dd7c.tgz.md5sum" && false; fi
{ "domain": "robotics.stackexchange", "id": 12545, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, gazebo, ros-fuerte, overlay", "url": null }