multilabel-classification It is unlikely there is a combinatorial issue, as an 8-dimensional output is not large in the context of machine learning. Also, not all of the $2^8$ possible labels may occur, due to correlation or other inter-relationships between label dimensions (depending on the data). For example, in the extreme, all digits might always be the same, so only two labels ever occur: (0,0,0, ...) and (1,1,1, ...), meaning there is effectively only 1 classification task rather than 8. In your case, maybe certain people always speak at the same time. If any combination is possible, then it really is effectively 8 independent tasks, but that is not necessarily a problem.
{ "domain": "datascience.stackexchange", "id": 9675, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "multilabel-classification", "url": null }
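The correlation point above can be made concrete by counting how many of the $2^8$ possible label combinations actually occur. A minimal sketch with NumPy; the data here is synthetic and invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multilabel data: 8 binary labels, where labels 1-7 are copies of
# label 0 with a little noise, so the label dimensions are highly correlated.
base = rng.integers(0, 2, size=(1000, 1))
noise = rng.random((1000, 7)) < 0.05
labels = np.hstack([base, np.logical_xor(base, noise).astype(int)])

# Number of distinct label combinations that actually occur,
# versus the 2**8 = 256 that are possible in principle.
observed = len(np.unique(labels, axis=0))
print(observed, 2**8)
```

With strong correlation, the observed count stays far below 256, which is the sense in which there are fewer than 8 "effective" tasks.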
The form of the reasoning is: "any solution of the original equations will be a solution of this equation, too." Yes, but then you've only found a superset containing the solution set. To characterize the solution set exactly, you must worry about the converse. When you solve a system of equations (whether by elimination or other means), you cannot discard the individual equations you started with. The reason is that each of these equations carries more information (restrictions, for example) than the one equation you end up with. The same goes for other examples as well. The function $f(x) = \sqrt{x} - \sqrt{x}$ is not the same as the zero function, because any $x < 0$ is not in the domain of $f$. By eliminating $\sqrt{x}$, you have lost a key piece of information about this function (its domain).
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9814534365728416, "lm_q1q2_score": 0.8157282026566773, "lm_q2_score": 0.8311430499496096, "openwebmath_perplexity": 247.20608715192833, "openwebmath_score": 0.8391405940055847, "tags": null, "url": "https://math.stackexchange.com/questions/1662357/can-extraneous-roots-be-introduced-by-elimination" }
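The "superset" point can be checked numerically. A minimal sketch with a hypothetical example equation of my choosing, $\sqrt{x+3} = x-3$: squaring both sides gives $x^2 - 7x + 6 = 0$, with roots 1 and 6, but only candidates that satisfy the original equation are genuine solutions.

```python
import math

# Squaring sqrt(x + 3) = x - 3 yields x**2 - 7*x + 6 = 0, with roots 1 and 6.
# Squaring only guarantees a superset of the solution set, so each candidate
# must be checked against the original equation.
candidates = [1, 6]
solutions = [x for x in candidates
             if math.isclose(math.sqrt(x + 3), x - 3)]
print(solutions)  # only x = 6 survives; x = 1 is extraneous
```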
kalman-filters, estimation, maximum-a-posteriori-estimation, bayesian-estimation Title: General questions on Kalman filter and difference In the Wikipedia Kalman filter article, the state variable $x_k$ takes a continuous value, say a floating point number, but what if the values are integers, say symbols from an alphabet set? Then how does one apply the Kalman filter? I have the following confusions and doubts and would be obliged for an answer with some examples, if any. Doubt 1: The Kalman filter is called a continuous filter because the state values, covariance, etc. all take real values. But these are discrete in time: $k$ varies over $1, 2, \ldots$. Is my understanding correct? Doubt 2: I could not find any information or any worked example for the case when the state variables take discrete values. What if they are symbols from DNA strings? Could somebody please throw some light on this case?
{ "domain": "dsp.stackexchange", "id": 5424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kalman-filters, estimation, maximum-a-posteriori-estimation, bayesian-estimation", "url": null }
ros, c++, rosconsole Title: Logging of ROS_ERROR_STREAM messages? How do I log ROS_ERROR_STREAM messages into a log file? Normally I would just do a >> log.txt, but I only want to store the ROS_ERROR_STREAM output, and nothing else. Originally posted by 215 on ROS Answers with karma: 156 on 2016-05-13 Post score: 0 The ROS logging macros (rosconsole) use log4cxx and are compatible with log4j configuration files. It's probably possible to write a configuration file that does what you want, but the documentation is pretty thin: http://logging.apache.org/log4cxx/usage.html Originally posted by ahendrix with karma: 47576 on 2016-05-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24642, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, c++, rosconsole", "url": null }
beginner, ios, swift The Delegate Pattern This is the pattern I actually think is most appropriate for your scenario. For this pattern, we must start by declaring a protocol that defines the methods we'll be calling on our delegate. Something like this might work: @objc protocol GenerateTokenDelegate { func tokenGeneratorDidSucceed(tokenGenerator: DeviceInfo) func tokenGeneratorDidFail(tokenGenerator: DeviceInfo) }
{ "domain": "codereview.stackexchange", "id": 13071, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, ios, swift", "url": null }
#### math951 @Plato I guess my question is: when do we know which sampling-without-replacement model to use? I have typically seen that if we take the balls out one by one, we use sampling without replacement where order matters. If not, then we use sampling without replacement where order is irrelevant. #### Plato
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787864878115, "lm_q1q2_score": 0.8047802128966605, "lm_q2_score": 0.8152324826183822, "openwebmath_perplexity": 738.2974774050399, "openwebmath_score": 0.7257028222084045, "tags": null, "url": "https://mathhelpforum.com/threads/urn-problem-2-balls-randomly-without-replacement.283123/" }
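The two counting conventions in the question can be compared directly; a small sketch with a hypothetical urn of 10 balls, drawing 2 without replacement:

```python
from math import comb, perm

# Drawing 2 balls from 10 without replacement:
# if order matters, count ordered draws (permutations);
# if order is irrelevant, count unordered draws (combinations).
ordered = perm(10, 2)    # 10 * 9 = 90
unordered = comb(10, 2)  # 90 / 2! = 45
print(ordered, unordered)
```

Either convention gives the same probability for an event, as long as the numerator and denominator are counted under the same convention; the choice is a matter of convenience.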
scikit-learn, random-forest, boosting, bagging And I was able to see considerable improvement in the recall. Is this approach mathematically sound? I used the second layer of Random Forest so that it would be able to correct the errors of the first layer, looking to combine the principle of boosting with the Random Forest bagging technique. Looking for thoughts. The underlying idea is fine, but you've fallen into a common data-leakage trap. By recombining the data and then resplitting, your second model's test set includes some of the first model's training set. The first model knows the labels on those datapoints and, especially if overfit, passes along that information in its predictions. So the score you see for the ensemble is probably optimistically biased. The most common approach to fixing this is to use k-fold cross-validation to produce out-of-fold predictions on the entire training dataset for the second model. Note that sklearn now has such stacked ensembles built in:
{ "domain": "datascience.stackexchange", "id": 7291, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "scikit-learn, random-forest, boosting, bagging", "url": null }
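The built-in referred to above is presumably `sklearn.ensemble.StackingClassifier`; with `cv` set, the final estimator is trained on out-of-fold predictions, which avoids the leakage described in the answer. A minimal sketch on synthetic data; the dataset and hyperparameters are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# cv=5 means the final estimator is fit on out-of-fold predictions of the
# base estimators, so it never sees predictions on their own training data.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))
```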
$$R = \pmatrix{1 &0.3055569 &0.5513377 &0.5100989\\ 0.3055569 &1 &0.1240151 &0.09634469\\ 0.5513377 &0.12401511 &1 &-0.4209064\\ 0.5100989 &0.09634469 &-0.4209064 &1}$$ (as generated randomly at the outset). In the figure the leftmost panel is a histogram of the foregoing Maximum Likelihood estimates; the middle panel is a histogram of estimates using the usual (unbiased) variance estimator; and the right panel is a QQ plot of the two sets of estimates. The slanted line is the line of equality. You can see the usual variance estimator tends to yield more extreme values. It is also biased (due to ignoring the correlation): the mean of the MLEs is 0.986 -- surprisingly close to the true value of $$\sigma^2 =1^2 =1$$ while the mean of the usual estimates is only 0.791. (I write "surprisingly" because it is well-known the usual maximum likelihood estimator of $$\sigma^2,$$ where no correlation is involved, has a bias of order $$1/(nN),$$ which is pretty large in this case.)
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9664104972521579, "lm_q1q2_score": 0.8074200605381242, "lm_q2_score": 0.8354835371034369, "openwebmath_perplexity": 1150.9331620083958, "openwebmath_score": 0.8514554500579834, "tags": null, "url": "https://stats.stackexchange.com/questions/535727/how-to-estimate-the-variance-of-correlated-observations" }
of F, and F is discontinuous at t = 1. Convergence in Probability; Convergence in Quadratic Mean; Convergence in Distribution; let's examine all of them. On the other hand, almost-sure and mean-square convergence do not imply each other. $X_t$ is said to converge to $\mu$ in probability … Convergence in probability is stronger than convergence in distribution. I posted my answer too quickly and made an error in writing the definition of weak convergence. Convergence in distribution tells us something very different and is primarily used for hypothesis testing. Convergence in probability and convergence in distribution. Suppose that $f_n$ is a probability density function for a discrete distribution $P_n$ on a countable set $S \subseteq \mathbb{R}$ for each $n \in \mathbb{N}_+$.
{ "domain": "soho46.com", "id": null, "lm_label": "1. Yes\n2. Yes\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.972414716174355, "lm_q1q2_score": 0.839574695396566, "lm_q2_score": 0.8633915976709976, "openwebmath_perplexity": 478.6392443132913, "openwebmath_score": 0.9117074012756348, "tags": null, "url": "http://www.soho46.com/cole-and-ehte/convergence-in-probability-and-convergence-in-distribution-addb5e" }
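The notion of convergence in probability alluded to above ($X_n \to \mu$ in probability iff $P(|X_n - \mu| > \varepsilon) \to 0$ for every $\varepsilon > 0$) can be illustrated by simulation. This sketch is my own, not from the original page:

```python
import numpy as np

rng = np.random.default_rng(0)

# The sample mean of n Uniform(0,1) draws converges in probability to 0.5:
# P(|mean_n - 0.5| > eps) should shrink as n grows.
def tail_prob(n, eps=0.05, trials=2000):
    means = rng.random((trials, n)).mean(axis=1)
    return np.mean(np.abs(means - 0.5) > eps)

p_small, p_large = tail_prob(10), tail_prob(1000)
print(p_small, p_large)  # the second is much closer to zero
```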
php, optimization, recursion } else { $query_translit = $query; } if (!in_array($query_translit, $this->transliterations)) $this->transliterations[] = $query_translit; } foreach ($this->transliterations as $transliteration) { if (!in_array($transliteration, $this->processed)) { if (!preg_match("/[a-zA-Z]/", $transliteration)) { return; } else { $this->transliterate($transliteration); } } } } Why a recursive function for such a simple task?

$in = 'This is your input';
$map = 'your char translation array here';
$out = '';
for ($i = 0; $i < strlen($in); $i++) {
    $char = $in[$i];
    if (isset($map[$char])) {
        if (is_array($map[$char])) {
            $newchar = $map[$char][0]; // whatever your multi-char selection logic is...
        } else {
            $newchar = $map[$char];
        }
    } else {
        $newchar = $char; // keep characters that have no mapping, instead of repeating the previous one
    }
    $out .= $newchar;
}
{ "domain": "codereview.stackexchange", "id": 3334, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, optimization, recursion", "url": null }
halting-problem, correctness-proof Title: Can we enumerate provably non-terminating functions? In trying to understand the Halting Problem better, I am trying to come up with classes of provably non-terminating programs. For example, any program (including input) which leads to a finite-length infinite loop (with perfectly repeating state+tape) should be detectable. That leaves mostly induction/search problems like "get the first non-Goldbach even number above 4" as candidates. Clearly, some of these halt, and some can be proven non-halting (e.g. we know "find $n$ such that $n\cdot 0=5$" and "find a proof that the Poincaré Conjecture is false" would not halt). More generally, it seems every non-halting problem corresponds to a Diophantine equation which (provably or not) has no solutions. This leads to a seemingly fairly general algorithm to find many non-halting programs:
{ "domain": "cs.stackexchange", "id": 5738, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "halting-problem, correctness-proof", "url": null }
general-relativity Why can the proper time infinitesimal always be written in the form (according to the Wikipedia article "Proper time"): $d\tau = \sqrt{g_{00}(x)}\,dt\,?$ Thank you in advance for your answers. 1) I believe this can always be done, but I am not sure. You need to get rid of the cross terms $g_{0a}$, and you have 4 coordinate transformations at your disposal to get rid of the 3 metric functions while keeping $g_{00}$ positive (assuming the signature is (+,-,-,-)). This looks like a problem that has a solution. 2) It cannot. The proper time is always attached to some worldline. For any worldline whatsoever, the general formula is: $$d\tau=c^{-1}ds=c^{-1}\sqrt{g_{\mu\nu}dx^\mu dx^\nu}$$ This reduces to your formula only along worldlines that keep $x^1$, $x^2$ and $x^3$ constant.
{ "domain": "physics.stackexchange", "id": 73184, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity", "url": null }
electromagnetism, electromagnetic-induction I would like to know why exactly it is able to 'cut' the field, or more precisely, what it does when it cuts the field. (The Lorentz-force-on-a-point-charge explanation is easier, but this one is given in the book that I follow, just so you know.) Imagine there is a stream of water and you are moving a stick in it. When you move the stick parallel to the flow of water, you are not actually cutting it. If you move the stick perpendicular to the flow of water, you are cutting it "the most". If you move the stick at another angle, neither purely parallel nor perpendicular, there will always be perpendicular and parallel components of the motion with respect to the flow of the stream. The magnitude of the induced emf likewise depends upon the angle at which the conductor cuts the magnetic field lines. The direction of the magnetic field lines, from north to south, corresponds to the flow of water, and the conductor corresponds to your stick.
{ "domain": "physics.stackexchange", "id": 66118, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, electromagnetic-induction", "url": null }
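The angle dependence described with the stick-in-water analogy is captured by the standard motional-emf formula for a straight conductor of length $\ell$ moving with speed $v$ at angle $\theta$ to the field $B$:

```latex
\mathcal{E} = B\,\ell\,v\,\sin\theta
```

At $\theta = 0$ (moving parallel to the field lines, i.e. parallel to the stream) no emf is induced; at $\theta = 90^\circ$ (cutting the lines "the most") the emf is maximal.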
quantum-field-theory, general-relativity, classical-mechanics, special-relativity Title: What if a particle falls into the center of a central field? Given a central field $U(r)$ that satisfies $U(r) \rightarrow -\infty$ when $r \rightarrow 0$, what happens if a particle falls into the center of the field? Can you help me analyze this question in classical mechanics, relativistic mechanics and quantum mechanics? And what will happen actually in experiments? Any help or suggestions will be appreciated! If one studies how the theoretical understanding of physics has progressed, we find that when infinities or infinitesimals are encountered within the prevailing mathematical model of the time, the model has reached the limit of its region of validity in describing physics. They used to say that "nature abhors a vacuum". I would say that "nature abhors infinities and absolute zeros".
{ "domain": "physics.stackexchange", "id": 10205, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, general-relativity, classical-mechanics, special-relativity", "url": null }
(10) So, R is a Riemann sum, since it is defined on [a,b] using a partition P and a set S of arbitrary points in each subinterval. (11) Since f(x) is continuous, we know that R, R1, and R2 have limits (that is, their integrals exist; see Theorem, here). (12) Therefore, it follows that: lim (n → ∞) R = lim (n → ∞) R1 + lim (n → ∞) R2, which, using the definition of definite integrals in terms of Riemann sums (see Definition 5, here), gives us: $\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx$. QED Lemma 4: Comparison Property If m ≤ f(x) ≤ M for all x in [a,b], then: $m(b-a) \le \int_a^b f(x)\,dx \le M(b-a)$ Proof: (1) Let m be the minimum of f(x) on [a,b] (see Theorem, here, for proof of the existence of the minimum) (2) Let M be the maximum of f(x) on [a,b] (see Lemma 3, here, for proof of the existence of the maximum) (3) For any partition P on [a,b], m(b-a) ≤ L(P) [since L(P) is built from the minimum of f(x) on each subinterval; see Definition 2, here]
{ "domain": "blogspot.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9946150617189831, "lm_q1q2_score": 0.8177616906320347, "lm_q2_score": 0.8221891283434876, "openwebmath_perplexity": 2423.1762245442274, "openwebmath_score": 0.8980393409729004, "tags": null, "url": "http://mathrefresher.blogspot.com/2006/09/properties-of-integrals.html" }
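The additivity property proved above can be sanity-checked numerically with Riemann sums; the function and split point below are my own example:

```python
# Numerical check of additivity with left-endpoint Riemann sums,
# using f(x) = x**2 on [0, 2] split at c = 1.
def riemann(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: x * x
whole = riemann(f, 0, 2)
parts = riemann(f, 0, 1) + riemann(f, 1, 2)
print(whole, parts)  # both approach the exact value 8/3
```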
cc.complexity-theory, cg.comp-geom, proofs, proof-complexity Title: Document references describing weaknesses for cutting planes and algebraic proof system? Here, Fortnow says (section 4.3): Since then complexity theorists have shown similar weaknesses in a number of other proof systems including cutting planes, algebraic proof systems based on polynomials and (...) I am trying to find references to documents describing weaknesses for cutting planes and algebraic proof system regarding the P vs NP question. Unfortunately, Fortnow's document does not provide any. For each of these proof systems we know that there are some formulas where the shortest proof needs to have exponential length. Some of the earliest examples are an exponential lower bound for the pigeonhole principle in polynomial calculus (Razborov '98, IPS '99), and an exponential lower bound for the clique-colouring formula in cutting planes (Pudlák '99). Nowadays there are a few more examples to choose from.
{ "domain": "cstheory.stackexchange", "id": 4788, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, cg.comp-geom, proofs, proof-complexity", "url": null }
molecular-biology, cell-biology, proteins, cell-membrane For instance integrins (which form a category of CAMs; cf. [Wikipedia, Cell adhesion molecules]): "Integrins are transmembrane receptors that facilitate cell-cell and cell-extracellular matrix (ECM) adhesion. Upon ligand binding, integrins activate signal transduction pathways..." This may lead one to think of CAMs, e.g. integrins, as a form of receptor, since they can bind ligands themselves, and vice versa to categorize enzymatic receptors as CAMs, since in a wider sense CAMs, e.g. integrins, can support enzymatic receptors such as the TYR receptor. Moreover:
{ "domain": "biology.stackexchange", "id": 11800, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "molecular-biology, cell-biology, proteins, cell-membrane", "url": null }
The transpose of a matrix is the matrix obtained by interchanging its rows and columns: the column and row elements swap, so a matrix A with dimensions (2 × 3) has a transpose with dimensions (3 × 2). We put a "T" in the top right-hand corner to mean transpose. To transpose something is to move it from one position to another, or to exchange the positions of two things. The transpose of a product reverses the order: (AB) transpose equals B transpose times A transpose, which is pretty interesting, given how these two operations were defined. For an orthogonal matrix, the transpose is equal to the inverse. (This material, by Duane Q. Nykamp, is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License.)
{ "domain": "ikiwago.info", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9658995752693051, "lm_q1q2_score": 0.8459222754304697, "lm_q2_score": 0.8757869835428965, "openwebmath_perplexity": 681.1274730658739, "openwebmath_score": 0.6436394453048706, "tags": null, "url": "http://forum.ikiwago.info/87616fru/5om3q4q.php?1c5f9e=transpose-matrix-definition" }
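The facts recoverable from the passage above (dimensions reverse under transposition; the transpose of a product reverses the order) can be checked quickly with NumPy. A small sketch of my own, not from the original page:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # shape (2, 3)
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])         # shape (3, 2)

# Transposing swaps rows and columns, so the dimensions reverse.
print(A.T.shape)               # (3, 2)

# The transpose of a product reverses the order: (AB)^T = B^T A^T.
print(np.array_equal((A @ B).T, B.T @ A.T))  # True
```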
beginner, parsing, datetime, formatting, rust fn hour_from_12h_to_24h(hour: u8, period: &str) -> Result<u8> { match period { "AM" => Ok(hour % 12), "PM" => Ok(hour % 12 + 12), _ => Err(Error::new(ErrorKind::InvalidData, format!("invalid period {}", period))), } } pub fn to_24h(&self) -> String { format!("{:02}:{:02}:{:02}", self.hour, self.minute, self.second) } } fn main() { let time_12h = read_12h_time().expect("input error"); let time = Time::from_12h(&time_12h).expect("error"); println!("{}", time.to_24h()); } fn read_12h_time() -> Result<String> { let mut buffer = String::new(); match read_line(&mut buffer) { Ok(10) => Ok(buffer), Ok(length) => { Err(Error::new(ErrorKind::InvalidInput, format!("expected 10 characters, got {}", length))) } Err(e) => Err(e), } } fn read_line(buffer: &mut String) -> Result<usize> { stdin().read_line(buffer) }
{ "domain": "codereview.stackexchange", "id": 23312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, parsing, datetime, formatting, rust", "url": null }
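The AM/PM hour arithmetic in the Rust snippet above can be restated as a quick check of the edge cases (12 AM maps to 0, 12 PM stays 12); this Python restatement is mine, for illustration only:

```python
# Same logic as hour_from_12h_to_24h: hour % 12 handles the 12 o'clock
# special cases, and PM adds 12.
def hour_to_24h(hour, period):
    if period == "AM":
        return hour % 12
    if period == "PM":
        return hour % 12 + 12
    raise ValueError(f"invalid period {period}")

print(hour_to_24h(12, "AM"), hour_to_24h(12, "PM"), hour_to_24h(7, "PM"))  # 0 12 19
```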
ros, navigation, global-planner Title: How to log a global path defined in the "/map" frame Hi, everyone. I'm using the navigation stack and would like to log a global path defined in the "/map" frame. How can I do it? The navigation stack's setup has already been done and it works well. I thought that a global path defined in the "/map" frame could be obtained by subscribing to the "/move_base/TrajectoryPlannerROS/global_plan" topic and transforming it from the "/odom" frame to the "/map" frame with the tf::TransformListener::transformPose() method. However, it didn't work well and I got the following error: Frame id /map does not exist! Frames (1): A result of "rosrun tf view_frames" shows that there is a "/map" frame. However, a result of "rosrun tf tf_monitor" doesn't show a "/map" frame (perhaps because it is not broadcast continuously?). How can I log a global plan (an array of geometry_msgs::PoseStamped) defined in the "/map" frame? Thanks for your attention.
{ "domain": "robotics.stackexchange", "id": 13082, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, navigation, global-planner", "url": null }
programming-languages print a.x; // outputs 3 print b.x; // outputs 3 What is not clear to me is where the object's internal representation should live in the interpreter's non-mutable state. I can store non-mutable values in lexical environment records but I believe I need another level of indirection for storing objects because they are mutable. I have considered having the interpreter state store a record of active objects, with lexical environments then storing a reference to one of these objects. Such references would need to be resolved dynamically against the current state object. How has this been addressed in other works? The basic way of implementing state is to explicitly implement the state monad. In fact, if you want a pure interpreter, you will be forced to do so one way or another. Since you have objects, and very likely recursion on those, you probably cannot get away with a stack, you need a heap (beware of memory leaks). Thus, your interpreter shall carry around three things:
{ "domain": "cs.stackexchange", "id": 1109, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "programming-languages", "url": null }
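One concrete shape for the "record of active objects" idea discussed above, sketched in Python with invented names: environments bind names to heap references, and mutation goes through the heap, so aliased objects observe each other's updates (matching the a.x / b.x example that prints 3 twice).

```python
# Interpreter state: environments map names to heap references; the heap
# maps references to mutable object records.
heap = {}
next_ref = 0

def alloc(fields):
    """Allocate an object record on the heap and return its reference."""
    global next_ref
    ref = next_ref
    next_ref += 1
    heap[ref] = dict(fields)
    return ref

env = {}
env["a"] = alloc({"x": 1})   # a = new object with x = 1
env["b"] = env["a"]          # b aliases the same heap reference
heap[env["b"]]["x"] = 3      # b.x = 3 mutates the shared record

print(heap[env["a"]]["x"], heap[env["b"]]["x"])  # 3 3
```

A purely functional interpreter would thread `(heap, next_ref)` through every evaluation step instead of mutating globals, which is exactly the explicit state monad mentioned in the answer.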
javascript, unit-testing, mocha const mongoose = require('mongoose') const User = mongoose.model('user') chai.use(chaiHttp) And my tests have a lot of duplicate 'expect' statements it('should return error when no user email found', async () => { const result = await chai.request(app) .post('/api/user/login') .send({ email: 'fail@email.com', password: currentUserData.password }) expect(result).to.have.status(401) expect(result.error).to.exist expect(result.error.text).to.contain('No user found') }) it('should return error when password is incorrect', async () => { const result = await chai.request(app) .post('/api/user/login') .send({ email: currentUserData.email, password: '123456' }) expect(result).to.have.status(401) expect(result.error).to.exist expect(result.error.text).to.contain('Incorrect password') })
{ "domain": "codereview.stackexchange", "id": 31574, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, unit-testing, mocha", "url": null }
dark-matter Title: Dark matter and vacuum How different would space have been if there was no dark matter? Can space be considered vacuum even when occupied with abundant dark matter? Non-baryonic dark matter (ie dark matter made of some particle that doesn't interact with light) had an important role in forming the large scale structure of the universe. The early universe was dominated by light radiation, and this would have prevented the early clumps of matter from growing. But dark matter isn't affected by radiation, so it can form clumps, which then attracts the normal matter towards it. Without dark matter, galaxies wouldn't have been able to form. You can read about structure formation on Wikipedia.
{ "domain": "astronomy.stackexchange", "id": 4176, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dark-matter", "url": null }
python, performance, numpy, opencv Timings - # Import the "4" digit image from previous section In [50]: from skimage import io ...: im = io.imread('https://i.stack.imgur.com/tmdiH.png') # @Gareth Rees's solution In [51]: %timeit crop_with_argwhere(im) 1000 loops, best of 3: 1.4 ms per loop In [52]: %timeit crop_image_only_outside(im,tol=0) 10000 loops, best of 3: 81.8 µs per loop The memory efficiency with crop_image_only_outside is noticeable on performance. Extend to generic 2D or 3D image data cases Assuming we are looking to check for ALL matches across all channels along the last dimension/axis, the extension would be simply performing numpy.all reduction along the last axis. Hence, we would have generic solutions to handle both 2D and 3D image data cases like so - def crop_image(img,tol=0): # img is 2D or 3D image data # tol is tolerance mask = img>tol if img.ndim==3: mask = mask.all(2) mask0,mask1 = mask.any(0),mask.any(1) # mask1 selects rows (any along axis 1), mask0 selects columns (any along axis 0) return img[np.ix_(mask1,mask0)]
{ "domain": "codereview.stackexchange", "id": 20616, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, numpy, opencv", "url": null }
quantum-field-theory, symmetry-breaking, effective-field-theory I suspect the physical picture is this: we start with the symmetry intact, spontaneously break it at a high scale $\Lambda$, and then integrate out whatever fields are responsible for the spontaneous symmetry breaking. As a result, the effective theory does not respect the symmetry, not even nonlinearly. However, if we work at a low scale $\Lambda' \ll \Lambda$, the irrelevant terms are too small to be seen, so we only get relevant (soft) terms. Is this the right way of thinking about what's going on? This is a matter of terminology, and the physics behind it is very simple. How do we break a symmetry explicitly? As you know, it is enough to add symmetry-breaking operators to the Lagrangian: relevant, marginal and irrelevant operators all do the job.
{ "domain": "physics.stackexchange", "id": 50842, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, symmetry-breaking, effective-field-theory", "url": null }
digital-communications, homework, quadrature $R_b / W_m = 480\text{k} / 120\text{k} = 4$ 5. How long will the signal transmission last if the band exceedance factor is 1? And this is the only task I have no idea how to do. Any suggestions? I would also be very grateful if you could check the other tasks, whether they were done correctly or whether I made any mistakes. Thank you in advance! Since the bandwidth stays constant at 120 kHz, the symbol rate needs to be reduced: $$R_s = W_m/(1+\alpha) = 120000/2 = 60000 \text{ symbols per second.}$$ Then the bit rate is $R_b = 6R_s = 360{,}000 \text{ bits per second.}$ The transmission duration is then $$T = \frac{48 \times 10^6 \text{ b}}{360 \times 10^3 \text{ b/s}} = 133.3 \text{ s}.$$
{ "domain": "dsp.stackexchange", "id": 7250, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "digital-communications, homework, quadrature", "url": null }
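The arithmetic in the answer above can be reproduced as a quick check; the variable names here are mine:

```python
# Transmission duration with a roll-off (band exceedance) factor of 1.
alpha = 1                      # band exceedance factor
W_m = 120_000                  # channel bandwidth in Hz
bits_per_symbol = 6            # as implied by R_b = 6 * R_s in the answer
payload = 48e6                 # bits to transmit

R_s = W_m / (1 + alpha)        # 60_000 symbols per second
R_b = bits_per_symbol * R_s    # 360_000 bits per second
T = payload / R_b              # about 133.3 seconds
print(R_s, R_b, round(T, 1))
```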
newtonian-mechanics, energy, newtonian-gravity, energy-conservation, potential-energy in the mathematical model (not in reality), the total "available energy" that the gravitational attraction has is theoretically infinite assuming the two objects are point particles (*), because the potential energy does not have a minimum value and can reduce indefinitely. More precisely, two point particles with positive mass, under the gravitational force, are basically an infinite source of energy from the point of view of classical mechanics (point particles have no radius). That is admittedly mind-blowing and weird, but it is just a feature of the too simple mathematical model.
{ "domain": "physics.stackexchange", "id": 97328, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, energy, newtonian-gravity, energy-conservation, potential-energy", "url": null }
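The unboundedness described above comes directly from the Newtonian potential energy of two point masses:

```latex
U(r) = -\frac{G m_1 m_2}{r} \;\longrightarrow\; -\infty \quad \text{as } r \to 0^+
```

Since $U$ has no minimum, the kinetic energy released as $r$ shrinks is unbounded, which is the "infinite source of energy" of the idealized point-particle model.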
ros2 Comment by oferbar on 2022-11-21: I'm having the same issue. I tried to reproduce the exact code in my ROS workspace (Foxy container) without any luck. For some reason, any MultiThreadedExecutor example that I tried to reproduce with Python this week seems to fail. By fail, I mean that it acts like a SingleThreadedExecutor. I'm starting to believe that there is a wider problem here. Edit: After a few more frustrating hours, I managed to successfully reproduce the demos in a Galactic workspace (without a container), while the same code wouldn't work in a Foxy container. Comment by ravijoshi on 2022-11-22: @oferbar: I remember verifying the code. As per my comments, you can notice that it worked fine. I suspect something wrong with the installation or environment. It could be an interesting task to identify in greater detail. Thank you very much for the replies. Since it doesn't seem to be ROS, I built a completely new Docker container. In the new container it works!
{ "domain": "robotics.stackexchange", "id": 38065, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2", "url": null }
space, space-travel, interstellar-travel Space debris is not really a problem yet: even though there is a large number of objects orbiting the Earth, the volume they fly through is so vast that the average density is still very low. But when things like this happen, there is a major impact on many space missions, existing and future ones. If it is not taken seriously over the next ~15 years, space debris might indeed become a real threat, possibly even leading to the Kessler syndrome.
{ "domain": "physics.stackexchange", "id": 39067, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "space, space-travel, interstellar-travel", "url": null }
rotation Title: A closed-form solution of $\textbf{R}\textbf{R}_1=\textbf{R}_2\textbf{R}$ w.r.t $\textbf{R}$ Is there a closed-form solution of $\textbf{R}\textbf{R}_1=\textbf{R}_2\textbf{R}$ with respect to $\textbf{R}\in SO(3)$? $\textbf{R}_1$, $\textbf{R}_2 \in SO(3)$ are given. Added: I tried holmeski's solution but it fails because of rank deficiency in the A matrix (why?). The following code simulates holmeski's solution in MATLAB (please correct me if the code is incorrect): clear all cTl=rotx(rand*100)*roty(rand*100)*rotz(rand*100) lTc = inv(cTl) for k = 1: 9 l1Tl2{k}=rotx(rand*100)*roty(rand*100)*rotz(rand*100) c1Tc2{k}= cTl*l1Tl2{k}*lTc; end R = calib_RR1_R2R_closedform(l1Tl2,c1Tc2) function R = calib_RR1_R2R_closedform(l1Tl2,c1Tc2_klt)
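For what it's worth, one constructive route for a single pair (a sketch in Python with numpy rather than the MATLAB above; all function names are my own): since $RR_1=R_2R$ is equivalent to $RR_1R^\top=R_2$, a conjugation, $R_1$ and $R_2$ must share the same rotation angle, and any $R$ that maps the rotation axis of $R_1$ onto that of $R_2$ solves the equation. The solution is only determined up to a further rotation about the axis of $R_1$, which is exactly the kind of degeneracy that makes a one-pair linear system rank deficient; multiple pairs, as in the code above, are needed to pin $R$ down.

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix about `axis` (need not be unit) by `angle`."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def rotation_axis(R):
    """Unit eigenvector of R for eigenvalue 1 (sign is ambiguous)."""
    w, v = np.linalg.eig(R)
    n = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return n / np.linalg.norm(n)

def align(a, b):
    """A rotation taking unit vector a onto unit vector b."""
    v, c = np.cross(a, b), float(np.dot(a, b))
    s = np.linalg.norm(v)
    if s < 1e-12:                       # parallel or antiparallel
        if c > 0:
            return np.eye(3)
        axis = np.cross(a, [1.0, 0.0, 0.0])   # 180 deg about any normal to a
        if np.linalg.norm(axis) < 1e-6:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        return rodrigues(axis, np.pi)
    return rodrigues(v, np.arctan2(s, c))

def solve_RR1_eq_R2R(R1, R2, tol=1e-8):
    """One particular solution R of R @ R1 = R2 @ R, if one exists."""
    n1, n2 = rotation_axis(R1), rotation_axis(R2)
    for n in (n2, -n2):                 # resolve the axis-sign ambiguity
        R = align(n1, n)
        if np.linalg.norm(R @ R1 - R2 @ R) < tol:
            return R
    raise ValueError("no solution: rotation angles of R1 and R2 differ")
```

To sanity-check, build `R2 = R_true @ R1 @ R_true.T` from random rotations and verify the returned `R` satisfies the equation (it need not equal `R_true`, by the non-uniqueness above).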
{ "domain": "robotics.stackexchange", "id": 1655, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rotation", "url": null }
homework-and-exercises, differentiation Title: How to derive $\frac{d}{d\lambda}=-\mathbb{y}\cdot\nabla$ for $\frac{1}{\lvert\mathbb{x}-\lambda\mathbb{y}\rvert}$? When talking about the multipole expansion of an electromagnetic potential, my professor noted that for the function $$\tag{1}\frac{1}{\lvert\mathbb{x}-\lambda\mathbb{y}\rvert},$$ the two operators $$\tag{2}\frac{d}{d\lambda}$$ and $$\tag{3}-\mathbb{y}\cdot\nabla_\mathbb{x}$$ are the same, i.e. $\frac{d^n}{d\lambda^n}=(-\mathbb{y}\cdot\nabla_\mathbb{x})^n$ for all $n$. I checked that explicitly for $n=1$ and $n=2$. Is there a way to derive it for an arbitrary $n\in\mathbb N$? EDIT: I have to clarify: I did not prove that $\frac{d}{d\lambda}$ and $-\mathbb{y}\cdot\nabla$ are the same operators. I just checked that their action on the function $(1)$ yields the same result if they are applied one or two times. However, this does not prove they are the same operators. Use the fact that
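A chain-rule sketch of the requested derivation (my own, since the quoted hint is cut off): write the function as $f(\mathbb{x}-\lambda\mathbb{y})$ with $f(\mathbb{u})=1/\lvert\mathbb{u}\rvert$; then

```latex
\frac{d}{d\lambda} f(\mathbb{x}-\lambda\mathbb{y})
  = \sum_i \frac{\partial f}{\partial u_i}\bigg|_{\mathbb{u}=\mathbb{x}-\lambda\mathbb{y}}
    \frac{d(x_i-\lambda y_i)}{d\lambda}
  = -\sum_i y_i\,\frac{\partial}{\partial x_i} f(\mathbb{x}-\lambda\mathbb{y})
  = (-\mathbb{y}\cdot\nabla_\mathbb{x})\, f(\mathbb{x}-\lambda\mathbb{y}),
```

where $\partial/\partial u_i$ can be traded for $\partial/\partial x_i$ because $\mathbb{u}$ depends on $\mathbb{x}$ only through a shift. Since the operator $-\mathbb{y}\cdot\nabla_\mathbb{x}$ contains no $\lambda$, repeated application commutes, and induction gives $\frac{d^n}{d\lambda^n}=(-\mathbb{y}\cdot\nabla_\mathbb{x})^n$ on this function for all $n$. This shows equality of the operators' actions on $f$, not that they are equal as abstract operators.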
{ "domain": "physics.stackexchange", "id": 32584, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, differentiation", "url": null }
neural-networks, machine-learning, convolutional-neural-networks, representation-learning In contrast, the later layers depend on a much larger portion of the image and can thus learn high-level features like a nose. It's more a question of what's possible for the layers to learn and not a question of the network self-organizing its resources. And besides just reasoning like that, you can visualize what the kernels in the individual layers respond to. I found this question that also has an image with some further information on that. EDIT In response to the first comment: To further clarify how a CNN works, here is an example, where $X_0$ is the input image and $X_1$ is the output of the first CNN-Layer:
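The growth of what a unit can "see" can be made concrete with the standard receptive-field recurrence (a sketch of my own; the layer stack is made up): for a conv layer with kernel size $k$ and stride $s$, $r_{out} = r_{in} + (k-1)\,j_{in}$ and $j_{out} = j_{in}\,s$, where $j$ is the cumulative stride.

```python
def receptive_fields(layers):
    """Receptive field size after each (kernel, stride) conv layer.

    Uses r_out = r_in + (k - 1) * j_in and j_out = j_in * s,
    where j is the product of all strides so far (the "jump").
    """
    r, j = 1, 1
    sizes = []
    for k, s in layers:
        r += (k - 1) * j
        j *= s
        sizes.append(r)
    return sizes

# Hypothetical stack: three 3x3 convs, the middle one with stride 2.
print(receptive_fields([(3, 1), (3, 2), (3, 1)]))  # [3, 5, 9]
```

Early layers see only a few pixels (enough for edges), while the field grows quickly once strided layers multiply the jump, which is why only deeper layers can respond to object parts.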
{ "domain": "ai.stackexchange", "id": 3408, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neural-networks, machine-learning, convolutional-neural-networks, representation-learning", "url": null }
php, mysql endforeach; endif; endif; As a general concept, I would recommend a few things: "DRY" or "Don't Repeat Yourself" is a good concept to start with, as others have mentioned. If you do something more than once, chances are that it deserves its own function or can be simplified. While typically applied to object-oriented programming, the "single responsibility principle" can be well applied to functions for their maintainability, but you should also avoid creating functions just because you can (function calls require overhead). Eventually, collections of functions often end up in reusable classes. "KISS" or "Keep it simple, stupid" (note: this is better looked at as a self-referencing "stupid", like when someone says, "I'm such an idiot!" when they figure something out) -- Simplify your logic and code whenever you can. "The simplest explanation is usually the correct one." To apply these concepts, here is how I would re-structure (not how I would write) your script:
{ "domain": "codereview.stackexchange", "id": 4081, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, mysql", "url": null }
differential-geometry, geometry, topology Assuming cows are spherical falls under a similar category; for the scales we're interested in for those problems, the exact geometry doesn't matter (see also the answer by @Nickolas Alves about multipole expansions). Note however, that we're not always this cavalier. Imagine you're a big sports company trying to design equipment for your star athlete (where e.g. outcomes of races are decided by milliseconds). Just take a look at some documentaries for how much biometric data they gather in order to come up with tailor-made shoes, sportswear or whatever else they come up with. No one in their situation will make the assumption that athletes are spheres! The bottom line is the level of detail and accuracy you put into your model depends highly on your purpose for the calculation.
{ "domain": "physics.stackexchange", "id": 88738, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "differential-geometry, geometry, topology", "url": null }
visible-light, power, fiber-optics But that’s ok, because your final ratio just depends on the loss ratio: $P_{\rm{out}} = 10^{-\rm{dB}/10} P_{\rm{in}}$ Since your fiber loss is 0.2dB/m, for your distance $d$: $P_{\rm{out}} = 10^{-0.2 d/10} P_{\rm{in}}$ which is what you had except for the minus sign. Luminous intensity, if everything else remains the same, will go like the power in the light. Then $\Phi_{\rm{out}} = 10^{-0.2 d/10} \Phi_{\rm{in}}$
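The same formula in code (a trivial sketch, using the question's 0.2 dB/m figure):

```python
def transmitted_fraction(distance_m, loss_db_per_m=0.2):
    """Fraction of optical power surviving `distance_m` of fiber."""
    return 10 ** (-loss_db_per_m * distance_m / 10)

# At 0.2 dB/m, half the power is gone after ~15 m; after 50 m the
# total loss is 10 dB, i.e. only 10% of the power remains.
print(transmitted_fraction(50))
```

Since luminous flux scales with power here, the same fraction applies to $\Phi$.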
{ "domain": "physics.stackexchange", "id": 48996, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "visible-light, power, fiber-optics", "url": null }
actionlib, rosjava Originally posted by Fast Clutch on ROS Answers with karma: 1 on 2014-03-17 Post score: 0 No plan that I know of, and I'm not exactly sure of the current status, having never tried it, but I know that Damon has said that it really needs some love and just needs someone to step up and do it. As a workaround, if you want to write the client side in Java to connect to a server side in Python/C++, then you can do it fairly simply by just attaching handles to the publishers and subscribers manually. That is, use pubsub in a way that gets what you want out of the action server. It's more work to do it like this than to use a convenient action client front end such as there is for Python and C++, but it works. Originally posted by Daniel Stonier with karma: 3170 on 2014-03-18 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 17312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "actionlib, rosjava", "url": null }
particle-physics, nuclear-physics, standard-model, elementary-particles, subatomic where the dots are quarks and antiquarks and the squiggles gluons in the sea connecting the three constituent quarks (the ones which build up the mess). It has to be kept in mind that this pictorial form mimics the probability of finding a point particle, quark or gluon, within the nucleon. As the elementary particles are point particles, one can say that in any instantaneous snapshot most of the space is empty of particles, and what presents a "solid" form during a scattering event are the interaction forces acting between these point particles and an incoming one (for example a photon).
{ "domain": "physics.stackexchange", "id": 12257, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, nuclear-physics, standard-model, elementary-particles, subatomic", "url": null }
ros, navigation, rosaria, p3at Title: problem with robot center of turning in navigation Hi all We have modified our P3AT robot so that it moves with just its two front wheels; that is, we have disabled the two rear wheels and the robot moves with just the two front wheels. Accordingly, we have encountered a malfunction in the turning of the robot, because the robot now turns with two front wheels instead of all four wheels. Hence I am going to change the robot's center of turning from the center of the robot to the center of the two front wheels. I'd appreciate it if anyone could help me find the solution. Best Regards Farshid Originally posted by farshid616 on ROS Answers with karma: 63 on 2012-03-31 Post score: 0 You are going to have to edit your robot's URDF. Look here for help understanding URDF better. http://www.ros.org/wiki/urdf/Tutorials Originally posted by Atom with karma: 458 on 2012-04-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 8810, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, navigation, rosaria, p3at", "url": null }
Originally posted by GMATPrepNow on 06 Sep 2016, 15:10. Last edited by GMATPrepNow on 16 Apr 2018, 12:57, edited 1 time in total. Intern Joined: 27 Nov 2016 Posts: 1 Re: Two mixtures A and B contain milk and water in the ratios 23 Dec 2016, 09:03 (2x+50)/(5x+40) = 4/6; find x, then don't get into decimals, approx 17.something; then 2(17)+5(17) = approx 122 Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 9558 Location: Pune, India Re: Two mixtures A and B contain milk and water in the ratios 09 Nov 2017, 02:28 bmwhype2 wrote: Two mixtures A and B contain milk and water in the ratios 2:5 and 5:4 respectively. How many gallons of A must be mixed with 90 gallons of B so that the resultant mixture contains 40% milk? A. 144 B. 122.5 C. 105.10 D. 72 E. 134 Responding to a pm:
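Spelled out (my own arithmetic, matching answer choice B): take $x$ as one "part" of mixture A, so A contributes $2x$ milk and $5x$ water, while 90 gallons of B (ratio 5:4) contribute 50 milk and 40 water; 40% milk means milk : water $= 4 : 6$.

```python
from fractions import Fraction

# (2x + 50) / (5x + 40) = 4/6  ->  12x + 300 = 20x + 160  ->  8x = 140
x = Fraction(300 - 160, 20 - 12)      # 17.5 "parts"
gallons_of_A = 7 * x                  # mixture A has 2 + 5 = 7 parts
print(float(gallons_of_A))            # 122.5 gallons -> answer B

# Sanity check: the resulting mixture really is 40% milk.
milk, water = 2 * x + 50, 5 * x + 40
assert milk / (milk + water) == Fraction(2, 5)
```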
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 1, "lm_q1q2_score": 0.8221891283434877, "lm_q2_score": 0.8221891283434877, "openwebmath_perplexity": 3334.6928289862044, "openwebmath_score": 0.7750915884971619, "tags": null, "url": "https://gmatclub.com/forum/two-mixtures-a-and-b-contain-milk-and-water-in-the-ratios-57862-20.html" }
from Rogerson, a student: About my last question, yes I'm sure I worded it correctly. It is desired to calculate the initial oil in place for an oil reservoir having a gas cap as illustrated below. Approximate the area under a curve using left endpoints and right endpoints of a number of rectangles, as I'll show on the board or document camera. Therefore, each rectangle is below the curve. To compare two curves, I need the area under the curve. The area under the curve is actually closer to 2. A few of the other methods are shown in Figure 9. Let the height of each rectangle be given by the value of the function at the right side of the rectangle. This is called a "Riemann sum". Note that y = 1/x does not exist at x = 0. 1) y = x^2/2 + x + 2 on [−5, 3] [graph axes omitted] 2) y = x^2 + 3 on [−3, 1] [graph axes omitted] For each problem, approximate the area under the curve over the given interval. The area of the region bounded by the curve of f(x), the x-axis, and the vertical lines x = a
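The left/right endpoint construction described above, as a short sketch (the function and interval are my own example, not one of the listed problems):

```python
def riemann_sum(f, a, b, n, side="left"):
    """Approximate the area under f on [a, b] with n rectangles.

    side="left" uses the function value at each rectangle's left edge,
    side="right" the value at its right edge.
    """
    dx = (b - a) / n
    offset = 0 if side == "left" else 1
    return sum(f(a + (i + offset) * dx) for i in range(n)) * dx

f = lambda x: x * x                      # exact area on [0, 1] is 1/3
left = riemann_sum(f, 0, 1, 1000, "left")
right = riemann_sum(f, 0, 1, 1000, "right")
# For an increasing f, left endpoints underestimate, right overestimate:
assert left < 1/3 < right
```

Both estimates converge to the exact area as the number of rectangles grows.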
{ "domain": "centropartenopeo.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759632491111, "lm_q1q2_score": 0.8195057111990796, "lm_q2_score": 0.8354835289107307, "openwebmath_perplexity": 316.55087815325567, "openwebmath_score": 0.8472341895103455, "tags": null, "url": "http://bbdr.centropartenopeo.it/area-under-curve-calculator-with-rectangles.html" }
human-biology, cell-biology, cancer, database, pathway Source: http://www.genome.jp/kegg/document/help_pathway.html The pathway is kind of broad, but it provides some mechanisms for tumorigenesis from two standpoints: chromosomal instability and microsatellite instability. The pathways we're seeing promote tumor development by upregulating proliferation signals and downregulating apoptosis pathways. There's bound to be more to it, especially since we see other players involved when we get into cases like metastasis, such as Snail1/2, Met, Src, etc. Below is a survey of apoptotic pathways from Nature, with pro-apoptotic in red and pro-survival in green: Source: http://www.nature.com/reviews/poster/apoptosis/index.html Ignoring the therapeutic targeting jargon, you can see that when growth factors bind, the cell generally pushes towards a survival state. When death factors bind or something goes wrong, the cell pushes toward an apoptotic state. Cancers, however, are intrinsically dysregulated:
{ "domain": "biology.stackexchange", "id": 3694, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "human-biology, cell-biology, cancer, database, pathway", "url": null }
spacetime, relativity, tensor-calculus, metric-tensor Title: How to prove the raising/lowering indices operation? I've read this related question, though it didn't satisfy me; I hope this complements it. I know that if I contract a covariant tensor ${A_{\alpha\beta}}$ with a vector ${B^\beta}$, I get some other covector ${C_\alpha\equiv A_{\alpha\beta}B^\beta}$. So how can I show that if ${A_{\alpha\beta}=g_{\alpha\beta}}$ where ${g_{\alpha\beta}}$ is a metric, then ${C_\alpha=B_\alpha}$? I've tried using the definition of the metric ${g_{\alpha\beta}=\hat{\mathbf{e}}_\alpha\cdot\hat{\mathbf{e}}_\beta}$ where $\{\hat{\mathbf{e}}_\mu\}$ is a basis of the space, or properties like ${g_{\mu\nu}g^{\nu\rho}=\delta_\mu^\rho}$ and ${g_{\alpha\beta}=g_{\beta\alpha}}$, but have not succeeded. Also I've searched for it in books like Carroll's or Lawden's, but it's given pretty much as if it were a definition.
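One way to see it (filling in a standard argument): adopt the component definition $B_\alpha \equiv \mathbf{B}\cdot\hat{\mathbf{e}}_\alpha$, expand $\mathbf{B}=B^\beta\hat{\mathbf{e}}_\beta$ in the basis, and use the metric definition given in the question:

```latex
C_\alpha = g_{\alpha\beta}B^\beta
         = \big(\hat{\mathbf{e}}_\alpha\cdot\hat{\mathbf{e}}_\beta\big)B^\beta
         = \hat{\mathbf{e}}_\alpha\cdot\big(B^\beta\hat{\mathbf{e}}_\beta\big)
         = \hat{\mathbf{e}}_\alpha\cdot\mathbf{B}
         \equiv B_\alpha .
```

So the statement becomes a theorem only once $B_\alpha \equiv \mathbf{B}\cdot\hat{\mathbf{e}}_\alpha$ is taken as the definition of the covariant components, which is precisely why many textbooks present index lowering itself as a definition.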
{ "domain": "physics.stackexchange", "id": 66186, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "spacetime, relativity, tensor-calculus, metric-tensor", "url": null }
symmetry, gauge-theory, gauge-invariance, asymptotics $$ \partial_\mu A^\mu = 0 $$ where the gauge symmetry is $A^\mu\rightarrow A^\mu+\partial^\mu \alpha$. Then, we define the "residual gauge group" as the gauge transformations that leave the gauge-fixing condition invariant. The defining equation of a residual gauge transformation is: $$ \partial_\mu \partial^\mu \alpha = 0 $$ In this case, a possible solution is $\alpha = A_\mu x^\mu+B$. This obviously has non-compact support. As pointed out in the comments, it can be shown that any solution to the above equation is indeed non-compact. Once we have that, we define the "asymptotic symmetry group" as: $$ G = \mbox{Residual gauge symmetries that preserve the boundary conditions} $$
{ "domain": "physics.stackexchange", "id": 90227, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "symmetry, gauge-theory, gauge-invariance, asymptotics", "url": null }
transform Title: compile error with tf transformation Hi everybody, I have written some code as below: #include <ros/ros.h> #include <tf/transform_broadcaster.h> #include <nav_msgs/Odometry.h> int main(int argc,char ** argv){ ros::init(argc,argv,"odometry_publisher"); ros::NodeHandle n; ros::Publisher pubOdom= n.advertise<nav_msgs::Odometry>("odom",50); tf::TransformBroadcaster odomBroadcaster; //some other code ros::Rate r(10); while(n.ok()){ //some other code r.sleep(); } // return 0; } and put rosbuild_add_executable(OdomOFPS1_X64 src/OdomOFPS1.cpp)
{ "domain": "robotics.stackexchange", "id": 22227, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "transform", "url": null }
thermodynamics, dynamics Title: Is this a valid semi-diagnostic equation for Omega? Beginning with the Primitive equations governing atmospheric motion for a dry gas, primarily the ideal gas law, and the conservation of mass and energy, neglecting diffusivity. $$P=\rho R T$$ $$\omega \equiv \frac{DP}{Dt}$$ Therefore $$\omega=R(\rho\frac{DT}{Dt}+T\frac{D\rho}{Dt})$$ Since $$\frac{D\rho}{Dt}=-\rho \nabla\cdot\vec{u}$$ and $$\frac{DT}{Dt}=\frac{\omega\rho}{c_p}+\frac{Q}{c_p}$$ $$\omega=R(\frac{\rho^2\omega}{c_p}+\frac{\rho Q}{c_p}-T\rho\nabla\cdot\vec{u})$$ Distributing $R$ and applying the ideal gas law $$\omega=\frac{R\rho^2\omega}{c_p}+\frac{PQ}{Tc_p}-P\nabla\cdot\vec{u}$$ Separating $\omega$ from the right hand side of the equation yields $$\omega=(1-\frac{R\rho^2}{c_p})^{-1}(\frac{PQ}{Tc_p}-P\nabla\cdot\vec{u})$$ I call this semi-diagnostic, for $\omega$ is still a part of $\nabla\cdot\vec{u}$
{ "domain": "earthscience.stackexchange", "id": 2546, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, dynamics", "url": null }
magnetic-fields, energy-conservation, magnetic-moment Title: How is energy conserved in a series of magnets, progressively moving another magnet upwards? Imagine a tall building with windows aligned one above the other, and a railway going from the bottom of the building to the top of the building (vertically), right where the windows are. Imagine there is a magnet on this railway. Let us consider the following scenario: A person who is holding a magnet opens the first (lowest) window. The magnet on the railway starts going up (because of the electromagnetic force). Right when the magnet on the railway reaches the first window, the person closes this window and throws his magnet to the other side. In the meantime, a person standing next to the second window opens that window and the magnet on the railway continues to accelerate upwards. Obviously no energy is invested (if we replace the people with a frictionless mechanical system), but some mass does gain gravitational potential energy!
{ "domain": "physics.stackexchange", "id": 18891, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "magnetic-fields, energy-conservation, magnetic-moment", "url": null }
organic-chemistry, reaction-mechanism Title: What is the mechanism by which lactones are hydrolyzed during a Wolff-Kishner reaction? I've been away from orgo for a few years, but lately someone asked me a question about lactones during a Wolff-Kishner reduction. I understand the basic mechanism of a Wolff-Kishner but I cannot figure out how a lactone gets hydrolyzed. Can anyone help me with that step? Here is a link to the basic mechanism that mentions lactones. This is a general feature of the hydrazine reagent that is employed in Wolff-Kishner reductions. When attacking an ester's (including a lactone's) carbonyl function, the generated tetrahedral intermediate can break down to form a hydrazide as shown in the scheme below.
{ "domain": "chemistry.stackexchange", "id": 4254, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, reaction-mechanism", "url": null }
ros, executive-smach, smach Originally posted by Wim with karma: 2915 on 2011-05-25 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by phil0stine on 2012-03-31: Could you give a little clarification on this? The way I understand, gotoA and gotoB would have to run in parallel (as well as monitor) to be part of the concurrence container. If monitor state changes, how could the container only preempt one state and transition to the other? Thanks
{ "domain": "robotics.stackexchange", "id": 5548, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, executive-smach, smach", "url": null }
organic-chemistry, melting-point, databases Yet, only CRC Handbook of Chemistry and Physics consistently listed $\pu{-162.90 °C}$ as the melting point of 3-methylpentane since 1994. Thus, I'd take that value at any time. References:
{ "domain": "chemistry.stackexchange", "id": 11296, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, melting-point, databases", "url": null }
electrostatics, electric-fields, potential, capacitance, conductors Title: Grounding system of conducting plates So, I always make mistakes on problems such as this (the grounding part), so I'm hoping someone could really explain to me how the process works. There are $n$ large parallel plate conductors carrying charges $Q_1, Q_2, \ldots, Q_n$ respectively. If the left conductor (conductor $Q_1$) is grounded, then we have to find the magnitude of the charge flowing from the plate to ground. More generally, if any conductor is grounded, we have to find the magnitude of the charge flowing from that plate to ground.
{ "domain": "physics.stackexchange", "id": 37197, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, electric-fields, potential, capacitance, conductors", "url": null }
everyday-life, wetting Title: Why does wet hair keep its shape when it dries? When I wash my hair and go to sleep, my hair is impossible to comb in the morning, stubbornly sticking to the shape it assumed during the night. The only way to get it right is to wet it again and comb it. What's the cause of this memory effect? Hair, like fingernails and animal horn, is made up mostly of a protein called keratin. The strength and hardness of this polymer are caused by three types of chemical bonds: ionic bonds, hydrogen bonds and disulphide bonds. Water can significantly break the first two types (but not the disulphide ones). Wetting hair significantly thus makes it more flexible and softer. But if wet, deformed hair dries, it tends to retain the shape it was in while it was wet. The reformed hydrogen and ionic bonds then lead to a 'permanent' deformation (until you wet it again).
{ "domain": "physics.stackexchange", "id": 31865, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "everyday-life, wetting", "url": null }
performance, c, strings, search, rags-to-riches It finds the location of data to keep, and moves each 'keep' span just once. No data is ever moved more than once. It works by keeping a cursor at the start of the copy-zone and the start of the following match (if any), and then it copies that region onto the end of the previous content (advancing that copyTo variable as needed). It was only after I implemented the solution myself that I realized how similar the routine was to Edward's. I do prefer my naming, though. The significant performance-affecting difference is that I only have to perform the strlen(src) on the final (shortest) span of unmatched content. That strlen(src) is essentially the only difference I can see in the effect of the algorithm, and this difference will become more and more apparent as the input String size increases. When I run it through Edward's harness (I ran without William's code), I get: totaltime syb0rg = 1710000 totaltime 200_success = 1590000 totaltime rolfl = 1070000 totaltime janos = 1830000
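The span-copy idea is easy to state in Python (a re-sketch of the idea only, not the in-place C routine): find each match, copy the preceding "keep" span exactly once, and only ever scan past the final unmatched tail.

```python
def remove_all(src, match):
    """Remove every occurrence of `match` from `src` in one pass.

    Each "keep" span between matches is copied exactly once, which is
    the property that makes the in-place C version fast.
    """
    pieces, copy_from = [], 0
    i = src.find(match)
    while i != -1:
        pieces.append(src[copy_from:i])   # copy the span before the match
        copy_from = i + len(match)        # skip over the match itself
        i = src.find(match, copy_from)
    pieces.append(src[copy_from:])        # final (shortest) tail
    return "".join(pieces)

assert remove_all("hello world hello", "hello") == " world "
```

In C the same loop runs with `strstr` and `memmove`, shifting each keep-span left into the gap left by earlier removals.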
{ "domain": "codereview.stackexchange", "id": 10298, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, c, strings, search, rags-to-riches", "url": null }
python, callback, nodes, publisher, messages -----Position.msg: float64 x float64 y float64 theta -----Params.msg float64 v float64 w Originally posted by thejesseslayer on ROS Answers with karma: 26 on 2017-08-08 Post score: 0 I fixed it. The problem was that the controller takes some time to subscribe to the simulator topic and vice versa. So I put this in my code: con_rate=rospy.Rate(rate) while pub.get_num_connections() == 0: con_rate.sleep() Originally posted by thejesseslayer with karma: 26 on 2017-08-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 28552, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, callback, nodes, publisher, messages", "url": null }
electromagnetism, magnetic-fields, electric-current $$ \vec{E} = \frac{Q_0 - I t}{r^2} \hat{r}. $$ (A time-varying magnetic field may also be present, though I suspect that this is not the case due to the symmetry of the configuration.) But it is also known that electromagnetic fields have a momentum density of $\vec{\mathscr{p}} = \epsilon_0 \vec{E} \times \vec{B}$. This quantity must be integrated over all of space and added to the momentum of the charges to find the total momentum of the system. While the momentum of the charges in this system is constant, the momentum of the fields is (probably) not; I have not done the explicit calculation, but it seems likely that the field momentum is increasing linearly with time, since the electric field will also be increasing linearly with time.
{ "domain": "physics.stackexchange", "id": 89721, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, magnetic-fields, electric-current", "url": null }
c#, interview-questions if(amount>=2) //0 - 1 { for(int j=0;j<amount;j++) { ascending=true; if(j<amount-1) { if((int)Char.GetNumericValue(num[j])>(int)Char.GetNumericValue(num[j+1])) { ascending=false; } if(ascending) less=(int)Char.GetNumericValue(num[j+1])-(int)Char.GetNumericValue(num[j]); else less=(int)Char.GetNumericValue(num[j])-(int)Char.GetNumericValue(num[j+1]); if(less!=1) { step=false; break; } } } if(step) { result.Add(Convert.ToInt32(num)); } } else result.Add(Convert.ToInt32(num)); } return result; } } Code review
{ "domain": "codereview.stackexchange", "id": 23303, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, interview-questions", "url": null }
homework-and-exercises, harmonic-oscillator Title: Initial conditions for shm This is the part of the question from the book that I am studying, "A mass of $0.75\:\mathrm{kg}$ is attached to one end of a horizontal spring of spring constant of $400\:\mathrm{N m^{−1}}$. The other end of the spring is attached to a rigid wall. The mass is pushed so that at time $t = 0$ it is $4.0\:\mathrm{cm}$ closer to the wall than the equilibrium position and is travelling towards the wall with a velocity of $0.50\:\mathrm{m s^{−1}}$." For the initial conditions, I wrote: $$x(0)=0.04\:\mathrm{m}$$ $$v(0)=0.50\:\mathrm{m s^{−1}}$$
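From those initial conditions the standard SHM constants follow (a sketch of my own; with the common convention that displacement toward the wall is negative, both initial values would pick up minus signs, but the angular frequency and amplitude are unaffected since they depend only on magnitudes):

```python
import math

m, k = 0.75, 400.0            # kg, N/m
x0, v0 = 0.04, 0.50           # m, m/s (magnitudes)

omega = math.sqrt(k / m)                    # angular frequency, rad/s
A = math.sqrt(x0**2 + (v0 / omega)**2)      # amplitude of x(t) = A cos(wt + phi)

print(round(omega, 2), "rad/s,", round(A * 100, 2), "cm")
```

This gives roughly 23.1 rad/s and an amplitude of about 4.5 cm, i.e. slightly larger than the initial 4.0 cm displacement because the mass is also moving at t = 0.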
{ "domain": "physics.stackexchange", "id": 25798, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, harmonic-oscillator", "url": null }
java, calculator, community-challenge private boolean endInput(String input) { return input.endsWith(END_INPUT); } private boolean isOperator(String input) { return OPERATORS.containsKey(input); } private boolean acceptOperator(String input) { if (inputs.size() < MIN_VALUES) { System.err.printf("Minimum %d integers before calculation.%n", Integer.valueOf(MIN_VALUES)); return false; } return true; } private void acceptValue(String input) { try { appendValue(Double.valueOf(Integer.parseInt(input))); } catch (NumberFormatException e) { System.err.println("Not an integer, ignored: " + input); } }
{ "domain": "codereview.stackexchange", "id": 13251, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, calculator, community-challenge", "url": null }
electromagnetism, gravity, experimental-physics, home-experiment p-value: < $0.00001$ Therefore the correlation was significant at $p<0.05$. I performed a two-tailed T-Test for 2 Independent Means to find the mean weight difference between: south pole upwards (0) 63,62,62,60,63,61,63,61,60,59,61,60,62,59,61,59,60,60,60,62,60,62,61,62,60,61,61,61,63,61,61,62,61,61,61,60,60,62,60,63,63,59,61,62,61,61,61,61 Mean: $61.04$ Standard deviation: $1.12$ Sample size: $48$ north pole upwards (1) 63,60,63,65,62,62,62,62,61,62,61,64,62,62,64,62,62,62,61,63,62,63,62,63,63,63,62,62,63,62,62,62,62,62,63,63,62,63,62,61,65,62,63,61,63,63,61,62,62,61,62,62 Mean: $62.29$ Standard deviation: $0.96$ Sample size: $52$ The result was: t-value: $-5.97273$ p-value: < $0.00001$ The result was significant at $p<0.05$. I used this calculator to find the effect size $g$. Hedges' g: $1.2$ mg Conventional explanation in terms of interaction with Earth's magnetic field
{ "domain": "physics.stackexchange", "id": 93915, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, gravity, experimental-physics, home-experiment", "url": null }
control-systems Setting the denominator of the closed-loop transfer function, $1+G_{OL}(s)$, to zero gives the characteristic equation, and the roots of this equation (the closed-loop poles) ultimately tell us whether or not the system is stable. A big utility of the open-loop gain alone, and of the Bode plot (and Nyquist plot) for assessing stability, is that it is something we can directly measure (in the case of stable open-loop systems), and we can derive the Bode plot even when an actual transfer function cannot be established, such as in the case of time delays in a continuous-time system and other cases involving transcendental equations that can't be described with polynomials.
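As a toy illustration of that statement (a made-up plant of my own, numpy only): with open-loop $G_{OL}(s)=\dfrac{2}{s(s+1)}$ under unit negative feedback, the characteristic equation $1+G_{OL}(s)=0$ becomes $s^2+s+2=0$, and its roots are the closed-loop poles.

```python
import numpy as np

def closed_loop_poles(num, den):
    """Poles of num/den under unit negative feedback.

    1 + num(s)/den(s) = 0  <=>  den(s) + num(s) = 0,
    with num, den given as polynomial coefficient lists.
    """
    n = max(len(num), len(den))
    p = np.zeros(n)
    p[n - len(den):] += den          # pad shorter polynomial with zeros
    p[n - len(num):] += num
    return np.roots(p)

# G_OL(s) = 2 / (s^2 + s): characteristic equation s^2 + s + 2 = 0
poles = closed_loop_poles([2.0], [1.0, 1.0, 0.0])
assert all(p.real < 0 for p in poles)   # both poles in the left half-plane
```

Here the open-loop system has a pole at the origin, yet the closed loop is stable, which is exactly the kind of conclusion the characteristic-equation roots (or, measurably, the Bode/Nyquist margins) let you draw.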
{ "domain": "dsp.stackexchange", "id": 11094, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "control-systems", "url": null }
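As a toy illustration of how the characteristic equation determines stability (a hypothetical plant, not one from the answer above): take $G_{OL}(s) = K/(s(s+2))$. Then $1+G_{OL}(s)=0$ gives $s^2 + 2s + K = 0$, and the signs of the real parts of its roots decide stability:

```python
import cmath

def closed_loop_poles(K):
    """Roots of s^2 + 2s + K = 0, i.e. of 1 + K/(s(s+2)) = 0."""
    disc = cmath.sqrt(4 - 4 * K)
    return ((-2 + disc) / 2, (-2 - disc) / 2)

# For any K > 0 both poles have negative real part -> stable closed loop
for K in (0.5, 1.0, 5.0):
    p1, p2 = closed_loop_poles(K)
    print(K, p1, p2)
```

For this particular plant the closed loop is stable for every positive gain; real systems with extra phase lag (or time delays, as mentioned above) are not so forgiving, which is exactly what the Bode plot margins quantify.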
python, file Should functions be limited? In my case, my function does perform several different actions; however, they are all part of what I want the function to do: display the contents of the file to the user and allow him/her to make amendments. I am highly confused; am I overthinking it? Should I add error handling to my function? If so, it really will start to become very long indeed. Docstring The purpose and arguments of your functions should be documented using docstrings. Local declaration You have habits from languages requiring variable declarations, but there is no such thing in Python (even in languages requiring it, I consider it better to declare variables as late as possible, in the smallest possible scope). Style Python has a style guide called PEP 8. You have various points that need to be changed to be compliant, mostly related to spacing. On the other hand, you are properly following the naming convention. First draft
{ "domain": "codereview.stackexchange", "id": 18319, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, file", "url": null }
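To illustrate the docstring advice, here is a hypothetical helper (not the OP's code, and not the reviewer's draft) for amending one line of a file's contents:

```python
def replace_line(lines, index, new_text):
    """Return a copy of `lines` with the line at `index` replaced.

    Arguments:
        lines: list of strings representing the file contents.
        index: zero-based position of the line to amend.
        new_text: replacement text for that line.
    """
    amended = list(lines)       # copy, so the caller's list is untouched
    amended[index] = new_text
    return amended

print(replace_line(["first", "second"], 1, "SECOND"))
```

The triple-quoted string directly under the `def` line is what `help(replace_line)` and most editors display, which is why PEP 257 recommends this form over ordinary comments.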
java, performance, parsing, memory-management public LocalDateTime getDateModified() { return dateModified; } public void setDateModified(LocalDateTime dateModified) { this.dateModified = dateModified; } public ContactList getContactList() { return contactList; } public void setContactList(ContactList contactList) { this.contactList = contactList; } @Override public String toString() { return ToStringBuilder.reflectionToString(this); } }
{ "domain": "codereview.stackexchange", "id": 6386, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance, parsing, memory-management", "url": null }
neural-networks, graph-neural-networks [...] i don't understand the idx_base. Something like this is common in GNNs and relates to graph batching. Notice the transpose: (batch_size, num_points, num_dims) -> (batch_size*num_points, num_dims) To get an index that works with this shape, you cannot use the raw node indices, you have to offset them using the batch number and the number of points per sample. This is what idx_base is doing, it offsets the point indices. This DGL-documentation has a nice visualization for what graph-batching means. The same applies here for idx_base.
{ "domain": "ai.stackexchange", "id": 3489, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neural-networks, graph-neural-networks", "url": null }
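A minimal plain-Python sketch of that offsetting (hypothetical numbers; the original presumably builds `idx_base` with `torch.arange` and tensor broadcasting): with `batch_size = 2` and `num_points = 3`, flattening the batch means node `j` of sample `i` lives at global row `i * num_points + j`:

```python
batch_size, num_points = 2, 3

# idx_base: one offset per sample in the batch
idx_base = [i * num_points for i in range(batch_size)]  # [0, 3]

# per-sample neighbour indices (e.g. k-NN results), one row per sample
idx = [[2, 0, 1], [1, 2, 0]]

# add the offset so the indices address the flattened (batch*points, dims) array
flat_idx = [idx[i][j] + idx_base[i]
            for i in range(batch_size)
            for j in range(num_points)]
print(flat_idx)  # rows 0-2 belong to sample 0, rows 3-5 to sample 1
```

Without the offset, sample 1's neighbour indices would wrongly point into sample 0's rows of the flattened array, which is the bug `idx_base` prevents.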
complexity-theory, graphs The question I guess is then whether by some clever representation we can reduce the number of (sub)graphs that we have to check to enumerate all the maximal cliques. In principle I guess this is possible, there's no a priori reason we couldn't decorate a data structure with additional information that would help us to enumerate cliques - the problem is then that this data structure could grow exponentially compared to the size of the original input. So in a sense it would add nothing - we're just hiding computation time in the space the data takes up.
{ "domain": "cs.stackexchange", "id": 731, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, graphs", "url": null }
the-sun, planetary-transits //echo "\$feclong ".$feclong."<br /><br />"; $deltaSol = round(asin(sin($sollat*M_PI/180)*cos($earthincl*M_PI/180) + cos($sollat*M_PI/180)*sin($earthincl*M_PI/180)*sin($sollong*M_PI/180))*180/M_PI*10000)/10000; $alphaSol = round(acos(cos($sollong*M_PI/180)*cos($sollat*M_PI/180) / cos($deltaSol*M_PI/180))*180/M_PI*10000)/10000; if ($sollong>180) { $alphaSol = 360-$alphaSol; } echo "<table><tr><td colspan=2>Sol</td></tr><tr><td>RA: </td><td>".$alphaSol."&deg; (".converttohms($alphaSol).")</td></tr><tr><td>Dec:</td><td>".$deltaSol."&deg; (".converttodms($deltaSol).")</td></tr></table><br />";
{ "domain": "astronomy.stackexchange", "id": 3013, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "the-sun, planetary-transits", "url": null }
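The PHP above implements the standard ecliptic-to-equatorial conversion. Here is a hedged Python transcription of the same formulas; the function name and the 23.44° default obliquity are my assumptions, not part of the original snippet:

```python
import math

def ecliptic_to_equatorial(lon_deg, lat_deg, obliquity_deg=23.44):
    """Convert ecliptic longitude/latitude (degrees) to RA/Dec (degrees)."""
    lon, lat, eps = (math.radians(x) for x in (lon_deg, lat_deg, obliquity_deg))
    dec = math.asin(math.sin(lat) * math.cos(eps)
                    + math.cos(lat) * math.sin(eps) * math.sin(lon))
    ra = math.acos(math.cos(lon) * math.cos(lat) / math.cos(dec))
    ra_deg = math.degrees(ra)
    if lon_deg > 180:          # resolve the acos quadrant ambiguity, as in the PHP
        ra_deg = 360 - ra_deg
    return ra_deg, math.degrees(dec)

# Sanity check: at ecliptic longitude 90 deg the Sun sits at RA 90 deg and
# Dec equal to the obliquity (summer solstice geometry)
print(ecliptic_to_equatorial(90, 0))
```

The `if lon_deg > 180` branch mirrors the PHP's `if ($sollong>180)` correction, since `acos` alone only returns angles in the first two quadrants.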
organic-chemistry, acid-base, esters, stereoelectronics Title: Why are lactones more acidic than esters? Lactones ($\mathrm pK_\mathrm a \sim 25$) tend to be more acidic than esters ($\mathrm pK_\mathrm a \sim 30$). I know that the lactone is fixed in a (E)-conformation, due to the ring: and that in normal esters, the (Z)-conformer is favoured based on an anomeric effect, namely the donation of the $\mathrm{sp^2}$ lone pair on the ester oxygen into the C–O σ* orbital (image taken from Clayden, Greeves & Warren, Organic Chemistry, 2nd ed. (OUP), p 805): Is this effect responsible for the increased acidity? If so, how? Overview The only interesting thing about lactones is that if the ring size is relatively small, they necessarily adopt the (E) conformation. Therefore, the question essentially boils down to: for a generic ester, why is the (E)-conformer 1 more acidic than the (Z)-conformer 2? This is a question that is easier to investigate, since we remove all other variables e.g. molecular structure.
{ "domain": "chemistry.stackexchange", "id": 8725, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, acid-base, esters, stereoelectronics", "url": null }
organic-chemistry Title: How does 1-methylcyclohexene reacting with KMnO4 produce 1-methyl-1,2-cyclohexanediol? I am interested in knowing how diols are produced when you introduce a reagent like potassium permanganate to an alkene. What is the mechanism for these reactions? The hydroxyl substituents happen to be at the 1,2-positions each time. That's a particular case of syn-dihydroxylation, also known as the Wagner reaction. "Syn-dihydroxylation" means that a double bond is cleaved into a single bond and two OH groups are attached, from the same side, to the carbon atoms that were involved in the double bond. Anti-addition is not possible, as the permanganate anion can't attach to the double bond from different sides of the bond for steric reasons. There is a Khan Academy video that describes the mechanism of this particular addition; that video is all about different cases of syn-addition. In brief, the mechanism can also be drawn as follows:
{ "domain": "chemistry.stackexchange", "id": 8871, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry", "url": null }
php, html, datetime, layout Title: Displaying courses in an HTML calendar I've been struggling for a while now with the readability of my code, even after trying to get as much insight as possible (by my standards). I think I understand and use the language all right for my level, but I still have big chunks of mixed HTML/CSS in the presentation. Often I have a moderately complex multi-dimensional array as a return value, and on the actual presentation page I iterate through it but still do a lot of stuff with it. So I'm now looking into template engines like Smarty, but I can't get my head around how I would save any actual code with one in examples like the following, where I iterate and work with the array in the presentation: $courseinfo = new courseinfo($_SESSION['course_short']); $row = $courseinfo->get_all(); $default = $courseinfo->get_default(); $prices = $courseinfo->get_prices(); $month_min_show = 5;
{ "domain": "codereview.stackexchange", "id": 29164, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, html, datetime, layout", "url": null }
binary-trees, heaps Title: The number of nodes in a binary tree If a binary tree is both a max-heap and an AVL tree, what is its largest possible number of nodes, assuming all keys are different? I'm going to say 2. A max-heap is a near-complete binary tree in which any child must have a key less than its parent's key. An AVL tree is a balanced binary search tree, so the left child must have a key less than its parent and the right child must have a key greater than its parent. From these, you can see that there can be no right children in the tree; you can only have left children. Since the max-heap has the near-complete property, the largest tree you can have is the root node with a left child. Hence, the maximum size is 2.
{ "domain": "cs.stackexchange", "id": 8921, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "binary-trees, heaps", "url": null }
roboearth [rosmake-0] Finished <<< comp_ehow [PASS] [ 54.30 seconds ] -- WARNING: 2 compiler warnings [rosmake-0] Starting >>> re_ontology [ make ] [ rosmake ] Last 40 linesldb: 212.8 sec ] [ re_ontology: 39.9 sec ] [ 2 Active 144/168 Complete ] {------------------------------------------------------------------------------- models/kitchen/tableSetting/.svn/tmp/tempfile.5.tmp models/kitchen/tableSetting/.svn/tmp/tempfile.6.tmp models/kitchen/tableSetting/.svn/tmp/tempfile.7.tmp models/kitchen/tableSetting/.svn/tmp/tempfile.8.tmp models/kitchen/tableSetting/.svn/entries models/kitchen/tableSetting/.svn/format models/kitchen/tableSetting/about.txt models/kitchen/tableSetting/meals_any_for_functional_new.learnt.xml.net models/kitchen/tableSetting/lateMorningConsumption.blogdb models/kitchen/tableSetting/blnlearn.config.dat models/kitchen/tableSetting/modelpaper_query1.blogdb models/kitchen/tableSetting/blnquery.config.dat
{ "domain": "robotics.stackexchange", "id": 9088, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "roboearth", "url": null }
electromagnetic-radiation, reflection, absorption Title: Layer of material that transmits light in one direction and absorbs it in the other direction I am looking for a material (or layer of materials) which transmits light coming from one side and absorbs light coming from the other side. The absorption should be as good as possible and the transmission should happen with minimal losses. Furthermore the layer of material should be very thin if possible and it is enough if it works at 1064 nm wavelength! Another requirement is that the light which is incoming has the same polarization before and after transmission.
{ "domain": "physics.stackexchange", "id": 60457, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetic-radiation, reflection, absorption", "url": null }
particle-physics, standard-model, quantum-chromodynamics, physical-constants, color-charge The U(1) numbers are completely crazy. The only sensible explanation is that they come from an SU(5) GUT (or SO(10) or E6 or some higher version of the SU(5) idea). The reduction of charges from SU(5) is explained in this answer: Is there a concise-but-thorough statement of the Standard Model? This gives the 1,2,3,6 ratios of the hypercharge assignments in nature, and completely explains the crazy quark charges. It is also an automatic way of ensuring anomaly cancellation. This, and approximate coupling constant unification, are the two strongest bits of evidence for a GUT at a scale of $10^{16}$ GeV or thereabouts.
{ "domain": "physics.stackexchange", "id": 2666, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, standard-model, quantum-chromodynamics, physical-constants, color-charge", "url": null }
java, validation, generics Title: Adding various types of configurable validators to fields I am currently reworking our entity code generator, that uses JaxB for schema validation, and Stringtemplate for code generation. As we are giving our professional service the possibility to add Validators to the attributes of the entities they create, I need to wrap each parsed Field, in order to add the Validators. The following code works, but I do not feel comfortable with it: @SuppressWarnings( "boxing" ) private Map<String, Object> wrapField( Field f ) { HashMap<String, Object> result = new HashMap<>(); if ( f.isGenerateGetter() ) { DecimalMaxValidator decMax = null; DecimalMinValidator decMin = null; NotBlankValidator notBlank = null; NotNullValidator notNull = null; LengthValidator lengthVal = null;
{ "domain": "codereview.stackexchange", "id": 23918, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, validation, generics", "url": null }
data-structures, red-black-trees Title: Introduction To Algorithms 3rd Edition MIT Press: Red Black Tree insertion error in pseudo-code? I'm implementing the algorithm on page 316 of the book Introduction to Algorithms. When I look at the pseudo-code, I feel that there is something wrong between line 10 to 14. I feel it's missing a check. There is a YouTube video explaining this whole function (and it includes the pseudo-code plus line numbers): https://youtu.be/5IBxA-bZZH8?t=323 The thing is, I think that //case 2 needs its own check. The else if z == z.p.right is both meant for //case 2 and //case 3. However, the code from //case 2 shouldn't always fire. It should only fire when there is a triangle formation according to the YouTube video. In my implementation it always fires, even when it's a line. So I feel the pseudo-code is wrong, it's also weird that it has an indentation, but I see no extra check. Am I missing something? Maybe superfluous, but I also typed the pseudo code given from the book here:
{ "domain": "cs.stackexchange", "id": 13914, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "data-structures, red-black-trees", "url": null }
complexity-theory, complexity-classes, intuition PSPACE is the class of problems that can be solved on a deterministic Turing machine with polynomial space bounds: that is, for each such problem, there is a machine that decides the problem using at most $p(n)$ tape cells when its input has length $n$, for some polynomial $p$. EXP is the class of problems that can be solved on a deterministic Turing machine with exponential time bounds: for each such problem, there is a machine that decides the problem using at most $2^{p(n)}$ steps when its input has length $n$, for some polynomial $p$.
{ "domain": "cs.stackexchange", "id": 3812, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, complexity-classes, intuition", "url": null }
This is a specific case of a more general phenomenon, where one can adapt the Euclidean algorithm to rewrite using automorphisms a word $W\in F(a, b, \ldots)$ such that it has zero exponent sum in all but one of the relators. For example, writing $\sigma_x$ for the exponent sum of the relator word in the letter $x$: \begin{align*} &\langle a, b\mid a^6b^8\rangle&&\sigma_a=6, \sigma_b=8\\ &\cong\langle a, b\mid (ab^{-1})^6b^8\rangle&&\text{by applying}~a\mapsto ab^{-1}, b\mapsto b\\ &=\langle a, b\mid (ab^{-1})^5ab^7\rangle&&\sigma_a=6, \sigma_b=2\\ &\cong\langle a, b\mid (a(ba^{-3})^{-1})^5a(ba^{-3})^7\rangle&&\text{by applying}~a\mapsto a, b\mapsto ba^{-3}\\ &\cong\langle a, b\mid (a^4b^{-1})^5a(ba^{-3})^7\rangle&&\sigma_a=0, \sigma_b=2 \end{align*} You can think of this as a "non-commutative Smith normal form", but it is more useful in this context than the Smith normal form as it gives you more information than just the abelianisation. For example, it is used in the HNN-extension
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759621310288, "lm_q1q2_score": 0.8413641170506816, "lm_q2_score": 0.8577681068080749, "openwebmath_perplexity": 385.91434471406995, "openwebmath_score": 0.8758471608161926, "tags": null, "url": "https://math.stackexchange.com/questions/2888319/what-is-the-abelianization-of-langle-x-y-z-mid-x2-y2z2-rangle/2888542" }
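On the level of exponent sums, each substitution above acts linearly: $a\mapsto ab^{-1}$ sends $(\sigma_a,\sigma_b)$ to $(\sigma_a,\sigma_b-\sigma_a)$, while $b\mapsto ba^{-3}$ sends it to $(\sigma_a-3\sigma_b,\sigma_b)$. A quick check of the chain $(6,8)\to(6,2)\to(0,2)$ from the display (a sketch, tracking only exponent sums, not the words themselves):

```python
def sub_a_to_ab_inv(sa, sb):
    """Effect of the substitution a -> a b^-1 on (sigma_a, sigma_b)."""
    return sa, sb - sa

def sub_b_to_b_a_minus3(sa, sb):
    """Effect of the substitution b -> b a^-3 on (sigma_a, sigma_b)."""
    return sa - 3 * sb, sb

step1 = sub_a_to_ab_inv(6, 8)        # (6, 2)
step2 = sub_b_to_b_a_minus3(*step1)  # (0, 2)
print(step1, step2)
```

This is exactly the Euclidean-algorithm step: each automorphism subtracts a multiple of one exponent sum from the other, driving one of them to zero.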
# Distributing 5 distinct objects into 3 identical boxes such that a box can be empty My approach: I listed the following cases: Case 1) 5 0 0 --> 5c5 = 1 way Case 2) 4 1 0 --> 5c4 * 1c1 = 5 ways Case 3) 3 2 0 --> 5c3 * 2c2 = 10 ways Case 4) 3 1 1 --> 5c3 * 2c1 * 1c1 = 20 ways Case 5) 2 2 1 --> 5c2 * 3c2 * 1c1 = 30 ways Adding all this I get 66 ways, in which all possible combinations of selecting distinct objects are taken care of, and I feel that I don't need to arrange them further in boxes, as all the boxes are identical. According to the solution provided, for case 4 (i.e. 3,1,1) they have counted 10 ways and for case 5 (i.e. 2,2,1) they have counted 15 ways, which makes their total 41. Can someone let me know where exactly I am going wrong with my approach?
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9780517462851323, "lm_q1q2_score": 0.833253922041176, "lm_q2_score": 0.8519527982093666, "openwebmath_perplexity": 380.88270702170183, "openwebmath_score": 0.69223952293396, "tags": null, "url": "https://math.stackexchange.com/questions/4250573/distributing-5-distinct-objects-into-3-identical-boxes-such-that-a-box-can-be-em" }
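A brute-force enumeration confirms the quoted solution's total of 41 rather than 66 (a sketch; identical boxes are handled by recording only the set of nonempty groups, so relabelings of the boxes collapse to one outcome):

```python
from itertools import product

def count_distributions(n_objects=5, n_boxes=3):
    """Count ways to put n distinct objects into identical boxes (empty allowed)."""
    seen = set()
    for labels in product(range(n_boxes), repeat=n_objects):
        # keep only the set of nonempty groups, ignoring box labels
        grouping = frozenset(
            frozenset(i for i, b in enumerate(labels) if b == box)
            for box in range(n_boxes)
            if any(b == box for b in labels)
        )
        seen.add(grouping)
    return len(seen)

print(count_distributions())  # 41
```

Equivalently, 41 is the sum of Stirling numbers S(5,1) + S(5,2) + S(5,3) = 1 + 15 + 25; the asker's cases 4 and 5 overcount because, with identical boxes, choosing which of the two equal-sized groups goes "first" is not a distinct outcome.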
kinect, openni-kinect [ERROR] [1327685950.506497003]: Failed to load nodelet [/camera/rgb/rectify_color] of type [image_proc/rectify]: According to the loaded plugin descriptions the class image_proc/rectify with base class type nodelet::Nodelet does not exist. Declared types are depth_image_proc/convert_metric depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_view/disparity image_view/image openni_camera/OpenNINodelet openni_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer
{ "domain": "robotics.stackexchange", "id": 8018, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinect, openni-kinect", "url": null }
3. Oct 21, 2007 ### mgb_phys Why can't you use the radius you are given? Young's modulus is just stress/strain. You know the stress is just force (weight of spider) / area and you are solving for strain. Just use the initial diameter of the thread. 4. Oct 22, 2007 ### faoltaem Yeah, I was taking the radius as the radius of the web, so I've done this: nb: s=spider p=person a) What is the fractional increase in the thread's length caused by the spider? $Y_s = 4.7\times10^{9}\ \mathrm{N/m^2}$ $m = 0.26\ \mathrm{g} = 2.6\times10^{-4}\ \mathrm{kg}$ $r_s = 9.8\times10^{-6}\ \mathrm{m}$ $$F = Y\left(\frac{A_o}{L_o}\right)\Delta L$$ => $$mg = Y_s\left(\frac{\pi r_s^2}{L_o}\right)\Delta L$$ => $$\frac{\Delta L}{L_o} = \frac{mg}{Y_s \pi r_s^2}$$ $$\frac{\Delta L}{L_o} = \frac{2.6\times10^{-4} \times 9.81}{4.7\times10^{9} \times \pi \times (9.8\times10^{-6})^2}$$ $$\Delta L = 1.799\times10^{-3}\, L_o$$
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9724147153749274, "lm_q1q2_score": 0.8060722994664741, "lm_q2_score": 0.8289388125473628, "openwebmath_perplexity": 691.0735375512343, "openwebmath_score": 0.7214732766151428, "tags": null, "url": "https://www.physicsforums.com/threads/youngs-modulus-of-spider-thread.192971/" }
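As a sanity check of the arithmetic in the thread above (values as quoted there; this snippet is a sketch, not from the original post):

```python
import math

m = 2.6e-4   # spider mass, kg (0.26 g)
g = 9.81     # gravitational acceleration, m/s^2
Y = 4.7e9    # Young's modulus of the thread, N/m^2
r = 9.8e-6   # thread radius, m

# F = Y * (A/L0) * dL  =>  dL/L0 = m g / (Y * pi * r^2)
strain = m * g / (Y * math.pi * r**2)
print(strain)  # ~1.799e-3, the fractional increase in length
```

So the thread stretches by about 0.18% under the spider's weight, matching the poster's result.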
beginner, logging, powershell Conclusion Despite my novel, your code is pretty straightforward. Without knowing how you intend to use the data it's hard to be opinionated on any deeper design changes, so clearly most of my suggestions are general language stuff.
{ "domain": "codereview.stackexchange", "id": 28174, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, logging, powershell", "url": null }
rospy, ros-kinetic Edit: I checked the links you pointed out. In my case, it is not a problem to use an absolute path to open the file. Absolute paths are not the problem. The problem would be to embed paths that are only valid on your own machine in your code. That is never a good idea. But which is the recommended way to do it in ROS applications in general? If you want to keep things relative to package locations, you could use either something like rospkg or the substitution args I mentioned earlier. If you have the option, I'd go for the substitution args (as you avoid adding another dependency to your program). Edit2: I have my 4 files in the path: ~/catkin_ws/src/usb_rs232/scripts in the usb_rs232 package. So the command would be: rosrun usb_rs232 serial_connection.py _arg_name:=$(rospack find usb_rs232)/scripts (here $(rospack find usb_rs232) expands to ~/catkin_ws/src/usb_rs232)
{ "domain": "robotics.stackexchange", "id": 32526, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rospy, ros-kinetic", "url": null }
- Rather than looking at the player, I prefer to explain the paradox from the host's standpoint, as this only involves one step. As the player gets one door, the host gets two. There are 3 possibilities with the same probability: • donkey-donkey => leaves a donkey after a door is open • car-donkey => leaves the car • donkey-car => leaves the car So in two cases out of three the door that the host leaves closed hides the car. - Suppose you and your brother play simultaneously .You and he always choose the SAME door to start. You NEVER switch,so after your initial choice you go out for a coffee and come back after all 3 doors are open. Your chance of winning is unaffected by what happens in between.It's 1/3. Your brother ALWAYS switches. EVERY TIME YOU LOSE,HE WINS. YOU LOSE 2/3 OF THE TIME. -
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9852713870152409, "lm_q1q2_score": 0.8394047170150483, "lm_q2_score": 0.8519528000888387, "openwebmath_perplexity": 414.0117737375531, "openwebmath_score": 0.7395754456520081, "tags": null, "url": "http://math.stackexchange.com/questions/96826/the-monty-hall-problem/96886" }
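The two-brothers argument can be verified by exhaustive enumeration rather than simulation (a sketch: every car position and every first pick is equally likely, and the switcher wins exactly when the stayer loses):

```python
doors = (0, 1, 2)
stay_wins = switch_wins = total = 0

for car in doors:
    for pick in doors:
        total += 1
        if pick == car:
            stay_wins += 1    # the non-switcher wins iff the first pick was right
        else:
            switch_wins += 1  # the host removes the last donkey, so switching wins

print(stay_wins, switch_wins, total)  # 3 6 9 -> 1/3 vs 2/3
```

Out of the 9 equally likely cases, staying wins 3 (probability 1/3) and switching wins the complementary 6 (probability 2/3), just as the answer states.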
newtonian-mechanics, energy, rotational-dynamics, work which can be rewritten as $$\mathrm dW = \int_{\rm body} \mathrm d\vec F\cdot \left( \vec V+\vec \omega \times \vec r' \right)\, \mathrm dt$$ using the velocity equation from above. Note that $\mathrm d\vec F = \vec a \, \mathrm dm$ (Second Law) so we can write the above as, $$\mathrm dW = \int_{\rm body} \vec a \, \mathrm dm\cdot \vec V \, \mathrm dt + \int_{\rm body} \mathrm d \vec F \cdot\vec \omega \times \vec r'\, \mathrm dt$$ where the mixed product can be rewritten as $$\mathrm dW = \int_{\rm body} \vec a \, \mathrm dm\cdot \vec V \, \mathrm dt + \int_{\rm body} \vec \omega \cdot \vec r'\times \mathrm d\vec F\, \mathrm dt.$$ Realize that $\vec r' \times \mathrm d\vec F = \mathrm d\vec \tau$, the net torque acting on a a particle of the body.
{ "domain": "physics.stackexchange", "id": 87705, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, energy, rotational-dynamics, work", "url": null }
reinforcement-learning, deep-rl, alphazero, chess, notation I'm confused about this $\pi^T$ notation. My best guess is that this is a vector of actions sampled from all policies in the $N$ X $(s_t, \pi_t, z_t)$ minibatch, but I'm not sure. (PS the $T$ used in $\pi^T$ is different from the $T$ used to denote a terminal state if you look at the paper. Sorry for the confusion, I don't know how to write two different looking T's) I'm not 100% sure whether or not they added any data for terminal game states, but it's very reasonable to indeed make the choice not to include data for terminal game states. As you rightly pointed out, we don't have any meaningful targets to update the policy head towards in those cases, and this is not really a problem because we would also never actually make use of the policy output in a terminal game state. For the value head we could provide meaningful targets to update towards, but again we would never actually have to make use of such outputs; if we encounter a terminal game state in a tree search, we just back up
{ "domain": "ai.stackexchange", "id": 2496, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reinforcement-learning, deep-rl, alphazero, chess, notation", "url": null }
formal-languages, finite-automata, context-free, formal-grammars, pumping-lemma Title: Existence of a CFL $L$ such that $\sqrt{L}$ is not CFL Does there exist a CFL L such that the language defined as $L' = \sqrt{L} = \{w | ww \in L\}$ is not CFL? I feel that there is no such $L$ but obviously, I am unable to prove it. I am sorry but I have not made any mentionable progress with my attempts on this problem. I would appreciate any hint to the proof or a language $L$ that could satisfy this. There is an example, and $L = \{a^nb^na^{2m}b^ka^k \mid n,m,k \in \mathbb{N}\}$ does the trick. We get that $\sqrt{L} = \{a^nb^na^n \mid n \in \mathbb{N}\}$, which is a standard example of a non-context-free language. To elaborate a bit on how to get there: CFLs can express that two numbers are the same, but not that three numbers are the same. So I want the square-root operation to introduce another equality, as it seems predisposed to do so.
{ "domain": "cs.stackexchange", "id": 18245, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-languages, finite-automata, context-free, formal-grammars, pumping-lemma", "url": null }
• It's surprising to me that the length of the square's sides are an integer multiple of the circle's radius. – BlueRaja - Danny Pflughoeft May 6 '19 at 16:10 • Could you explain the calculation of $CG^2$ and $GD^2$? What principle are you invoking here? You appear to have applied some formula (perhaps some standard formula that applies to trapezia), but it is unknown to me. – Hammerite May 6 '19 at 16:26 • @Hammerite If $H$ is the foot of the perpendicular from $F$ to $EC$, then $CG=FH$, and then apply Pythagorean theorem on $\triangle FHE$ to find $FH$. And the same trick for $GD$. – SMM May 6 '19 at 16:37 • @BlueRaja-DannyPflughoeft Yes, in general $r=a/4$ and $s=a/9$, where $a$ is the side of the square. – SMM May 6 '19 at 16:46 • @SMM: Yes, those values are obvious from this answer, but that gives no intuition as to why the multiple should be an integer. I think Blue's answer below gives that, though. – BlueRaja - Danny Pflughoeft May 6 '19 at 21:17
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9828232894783426, "lm_q1q2_score": 0.8656678723152168, "lm_q2_score": 0.880797068590724, "openwebmath_perplexity": 274.6639717318454, "openwebmath_score": 0.8903278112411499, "tags": null, "url": "https://math.stackexchange.com/questions/3215729/in-the-figure-a-quarter-circle-a-semicircle-and-a-circle-are-mutually-tangent" }
css, plugin div.paginateContainer > ul:first-child > li.paginateDisabled > a, div.paginateContainer > ul:first-child > li.paginateDisabled > a:hover { color : #bbb; background-color: #eee; cursor: default; border: 1px solid #ddd; } Note that I've posted a question to review the JavaScript also, so this post is only about CSS. The JavaScript has no real dependency on the CSS in this case. Your styles are pretty solid; it's mainly your selectors that have issues, I feel. Here are some notes: It's normally best not to include the type of element if it has a class. This is for maintainability and extensibility purposes: you could swap in <ol> for <ul> with less work, for example. Conversely, if all <ul>'s have the same style, they don't even need the same class. You never want to go too deep with your CSS selectors; more specific is less flexible. /* Bad */ div.paginateContainer > ul > li.paginateActive > a /* Good */ .paginateContainer .paginateActive a
{ "domain": "codereview.stackexchange", "id": 3374, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "css, plugin", "url": null }
javascript, functional-programming const enhancedSubscribe = (deviceId, broadcast) => { //Init redis subscriber ... } const enhancedPublish = (deviceId, data) => { redisPublisher.publish(deviceId, data); } return { removeSubscriber, enhancedSubscribe, enhancedPublish } } I attempt to compose the behaviour I want like this: serverFactory.js const server = () => { const broadcast = (data) => { let webSocketBroadcast = websocketServer.broadcast(data); if (config.useRedis) { return webSocketBroadcast(); } else { return webSocketBroadcast(redisServer.enchanceBroadcast); } } const onConnection = (ws) => { let websocketServerOnConnection = websocketServer.onConnection(); if (config.useRedis) { return websocketServerOnConnection() } else } } return { broadcast, onConnection, onMessage } }
{ "domain": "codereview.stackexchange", "id": 25976, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, functional-programming", "url": null }
speed-of-light, vacuum, rocket-science There is drag in space from gas and tiny particles. It is a very small amount, but at relativistic speeds you run into a lot more of them a lot faster and they hit a lot harder And so in practice, there will be an effective drag which will cost you kinetic energy $ \propto v^3$ and is not considered in this calculation. As an aside, we can use a Hall-effect thruster that can get to a $v_e \approx c$ (from a comment by Gyro Gearloose) and find $$\frac{m_0}{m_1} = 14$$ which means that we still would need $14$ times the dry mass to get the rocket to $0.99c$.
{ "domain": "physics.stackexchange", "id": 98792, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "speed-of-light, vacuum, rocket-science", "url": null }
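The quoted $\frac{m_0}{m_1} = 14$ follows from the relativistic rocket equation $\frac{m_0}{m_1} = \left(\frac{1+\beta}{1-\beta}\right)^{c/(2v_e)}$ (my assumption of the calculation being referenced); with $v_e = c$ and $\beta = 0.99$:

```python
def mass_ratio(beta, ve_over_c):
    """Relativistic rocket equation: initial over final (dry) mass."""
    return ((1 + beta) / (1 - beta)) ** (1 / (2 * ve_over_c))

print(round(mass_ratio(0.99, 1.0), 2))  # ~14.11, i.e. about 14 as stated
```

Note how sensitive the ratio is to the exhaust velocity: halving $v_e$ squares the mass ratio, which is why photon-like exhaust ($v_e \approx c$) is the best case.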
c#, generics
public interface ILogger
{
    [Test]
    void Write(string message);
}

public class OnlineLogger : ILogger
{
    public static bool IsOnline()
    {
        // A routine that checks connectivity
        return false;
    }

    public void Write(string message)
    {
        Console.WriteLine("Logger: " + message);
    }
}

public class OfflineLogger : ILogger
{
    public void Write(string message)
    {
        Console.WriteLine("Logger: " + message);
    }
}

[System.Diagnostics.DebuggerStepThroughAttribute()]
public class TestAttribute : HandlerAttribute
{
    public override ICallHandler CreateHandler(IUnityContainer container)
    {
        return new TestHandler();
    }
}
{ "domain": "codereview.stackexchange", "id": 2425, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, generics", "url": null }
c#, object-oriented, .net, interface, polymorphism _Logger.LogInformation("Client disconnected: {0}", client.RemoteEndPoint); } private void ProcessStatusChangeMessage(NetConnection connection) { switch (connection.Status) { case NetConnectionStatus.Disconnected: RemoveClient(connection); break; case NetConnectionStatus.Connected: AddClient(connection); break; default: _Logger.LogWarning("Unhandled StatusChanged: {0} now {1}", connection.RemoteEndPoint, connection.Status); break; } } } Then I added an interface called IDataMessage, which represents a message that contains some data: public interface IDataMessage { ILogger Logger { get; } DataMessageType Type { get; } void WriteNetworkMessage(NetOutgoingMessage message); void ReadNetworkMessage(NetIncomingMessage message); }
{ "domain": "codereview.stackexchange", "id": 14653, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, object-oriented, .net, interface, polymorphism", "url": null }
fluorescence
In the extreme case of some materials, the metastable state can last for millennia, which can be useful for dating the age of an item. For example, when pottery is fired, atoms in a metastable state relax. From then on, energy accumulates from cosmic rays and the radioactive decay of nearby minerals such as thorium or potassium. By heating the ceramic and measuring the amount of light emitted, one can approximate the age of the pottery. Ceramics need to be heated to a few hundred degrees Celsius, but your glass works in just warm water. There are similar infrared detection cards with phosphors that are charged using visible or UV light, and that emit visible light when triggered by low-energy IR.
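In its simplest form, the dating described above is a ratio: the total radiation dose the ceramic has absorbed since firing (measured from the emitted light), divided by the annual dose rate from the surroundings. A minimal sketch, with made-up illustrative values:

```python
# Thermoluminescence dating in its simplest form: age = accumulated dose / dose rate.
# Both values below are made up for illustration, not measured data.
paleodose_gray = 5.0                # total dose recorded since last firing (Gy)
dose_rate_gray_per_year = 2.5e-3    # annual dose from cosmic rays and nearby minerals (Gy/yr)

age_years = paleodose_gray / dose_rate_gray_per_year
print(age_years)  # ~2000 years
```

Real measurements correct for many effects (moisture, fading, dose-rate variation), so this is only the headline arithmetic.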
{ "domain": "chemistry.stackexchange", "id": 10142, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluorescence", "url": null }
beginner, bash, installer # Argument parser initMode=0 finalArgCount=0 getArgValue="no" hitArgListEnd="no" cntr=0 params=( "$@" ) while [ $cntr -le $(($# - 1)) ]; do arg=${params[$cntr]} if [ "$arg" = "-d" ]; then if [ $initMode -eq 0 ]; then vmmode="d" initMode=1 cntr=$(($cntr + 1)) continue else echo "Fatal error: unexpected argument -d" exit fi elif [ "$arg" = "-l" ]; then if [ $initMode -eq 0 ]; then vmmode="l" initMode=1 cntr=$(($cntr + 1)) continue else echo "Fatal error: unexpected argument -l" exit fi elif [ "$arg" = "-b" ]; then if [ $initMode -eq 0 ]; then vmmode="b" initMode=1 cntr=$(($cntr + 1)) continue else echo "Fatal error: unexpected argument -b" exit fi elif [ "$arg" = "-dlonly" ]; then
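The hand-rolled counter loop above can usually be replaced by the shell's built-in `getopts`. Below is a hedged sketch of how the three mutually-exclusive mode flags could be handled that way; the flag letters `-d`, `-l`, `-b` are from the original script, but the function name and structure are illustrative (and `getopts` only covers single-letter flags, so long options like `-dlonly` would still need separate handling).

```shell
# Sketch: mutually-exclusive mode flags via getopts instead of a manual loop.
parse_mode() {
    vmmode=""
    OPTIND=1
    while getopts "dlb" opt "$@"; do
        case "$opt" in
            d|l|b)
                if [ -n "$vmmode" ]; then
                    echo "Fatal error: unexpected argument -$opt" >&2
                    return 1
                fi
                vmmode="$opt"
                ;;
            *) return 1 ;;
        esac
    done
    echo "$vmmode"
}

parse_mode -d    # prints: d
```

Each accepted flag sets the mode exactly once; a second mode flag is rejected with a nonzero exit status, mirroring the original script's "unexpected argument" errors.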
{ "domain": "codereview.stackexchange", "id": 43358, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, bash, installer", "url": null }
homework-and-exercises, electrostatics, energy, capacitance
Title: Electrostatic potential energy of two concentric spherical conductors
Suppose that a small spherical conductor is placed inside a hollow spherical conductor. The inner conductor has a charge $+q$, which induces a charge $-q$ on the inner surface of the hollow conductor, which in turn induces a $+q$ charge on the outer surface of the hollow conductor. I want to calculate the total electrostatic potential energy of the system. My observation is that the system can be seen as a capacitor with two spherical plates. My question is: is it correct to assume that the total electrostatic potential energy of the system is equal to the energy stored between the two plates of the capacitor? Remembering that the energy is stored in the electric field, there is field energy both between the plates and in the field outside the larger conducting shell.
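A quick worked example makes the point concrete. With inner radius $a$, shell inner radius $b$ (shell thickness ignored), and charge $q$, the field energy between the conductors is $\frac{q^2}{8\pi\varepsilon_0}\left(\frac{1}{a}-\frac{1}{b}\right)$ and the field energy outside the shell is $\frac{q^2}{8\pi\varepsilon_0 b}$; their sum is $\frac{q^2}{8\pi\varepsilon_0 a}$, the energy of a lone charged sphere of radius $a$. The numbers below are made up for illustration; the loop checks the "between" term by integrating the energy density $u = \tfrac{1}{2}\varepsilon_0 E^2$ over spherical shells:

```python
import math

# Made-up geometry: inner sphere radius a, shell inner radius b, charge q.
eps0 = 8.854e-12
q, a, b = 1e-9, 0.05, 0.10

coef = q**2 / (8 * math.pi * eps0)
U_between = coef * (1 / a - 1 / b)   # field energy between the conductors
U_outside = coef * (1 / b)           # field energy outside the shell (surface charge +q)
U_total = U_between + U_outside      # = coef / a, energy of a lone sphere of radius a

# Numeric check of the "between" term: integrate u = eps0 * E^2 / 2
# over spherical shells of volume 4*pi*r^2*dr from a to b (midpoint rule).
n, U_num = 100_000, 0.0
dr = (b - a) / n
for i in range(n):
    r = a + (i + 0.5) * dr
    E = q / (4 * math.pi * eps0 * r**2)
    U_num += 0.5 * eps0 * E**2 * 4 * math.pi * r**2 * dr

print(U_total, U_num)
```

So the capacitor energy alone undercounts the total: the field outside the shell carries the remaining $\frac{q^2}{8\pi\varepsilon_0 b}$.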
{ "domain": "physics.stackexchange", "id": 97832, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electrostatics, energy, capacitance", "url": null }
soft-question, career Title: Why would a TCS researcher need funding? I was reading this. It says ... You won't find yourself as starving for funding like Pure Mathematics. (You'll still always find yourself starving for funding.)...
{ "domain": "cstheory.stackexchange", "id": 4954, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "soft-question, career", "url": null }
c++
public:
    // iterators
    using iterator = typename std::vector<multidimensional_array<T, N - 1>>::iterator;
    using const_iterator = typename std::vector<multidimensional_array<T, N - 1>>::const_iterator;

    // constructors
    multidimensional_array() = default;
    virtual ~multidimensional_array() = default;

    template <typename... Sizes>
    multidimensional_array(const Sizes &... sizes)
    {
        resize(sizes...);
    }

    multidimensional_array(
        const std::vector<multidimensional_array<T, N - 1>> &Items)
        : _data(Items)
    {
    }

    // access to data
public:
    std::vector<multidimensional_array<T, N - 1>> data() { return _data; }
    T *raw_data() { return _data.begin()->raw_data(); }
    multidimensional_array<T, N - 1> &operator[](std::uint64_t index)
    {
        return _data.at(index);
    }
{ "domain": "codereview.stackexchange", "id": 38115, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++", "url": null }
mysql, sql
CREATE TABLE IF NOT EXISTS `users` (
  `userid` INT NOT NULL AUTO_INCREMENT ,
  `fname` VARCHAR(45) NOT NULL ,
  `lname` VARCHAR(45) NOT NULL ,
  `usernick` VARCHAR(45) NOT NULL ,
  `useremail` VARCHAR(45) NOT NULL ,
  `visible` INT(1) NOT NULL DEFAULT 1 , -- visible shows whether the id is disabled or not
  PRIMARY KEY (`userid`)
);

CREATE TABLE IF NOT EXISTS `posts` (
  `postid` INT NOT NULL AUTO_INCREMENT ,   -- simple postid index
  `userid` INT NOT NULL ,                  -- userid of the poster
  `streamid` INT NOT NULL ,                -- whose wall the post is posted on
  `status` VARCHAR(5000) NOT NULL ,        -- the post that has been posted
  `date` TIMESTAMP NOT NULL DEFAULT now(), -- when the post was posted
  `visible` INT(1) NOT NULL DEFAULT 1 ,    -- whether the post is deleted or not
  PRIMARY KEY (`postid`)
);
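To experiment with the schema relationship (each post belongs to a user via `userid`) without a MySQL server, a sketch in Python's built-in `sqlite3` works; note the dialect is translated (`AUTO_INCREMENT` becomes `AUTOINCREMENT`, `VARCHAR` becomes `TEXT`), so this is an approximation of the schema above, not the schema itself:

```python
import sqlite3

# SQLite translation of the users/posts schema above, for quick experiments.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (
  userid INTEGER PRIMARY KEY AUTOINCREMENT,
  fname TEXT NOT NULL,
  lname TEXT NOT NULL,
  usernick TEXT NOT NULL,
  useremail TEXT NOT NULL,
  visible INTEGER NOT NULL DEFAULT 1        -- 1 = account enabled
);
CREATE TABLE posts (
  postid INTEGER PRIMARY KEY AUTOINCREMENT,
  userid INTEGER NOT NULL REFERENCES users(userid),  -- poster
  streamid INTEGER NOT NULL,                -- whose wall the post is on
  status TEXT NOT NULL,
  date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  visible INTEGER NOT NULL DEFAULT 1        -- 1 = post not deleted
);
""")

con.execute("INSERT INTO users (fname, lname, usernick, useremail) "
            "VALUES ('A', 'B', 'ab', 'a@b')")
con.execute("INSERT INTO posts (userid, streamid, status) VALUES (1, 1, 'hello')")

row = con.execute(
    "SELECT u.usernick, p.status FROM posts p JOIN users u ON u.userid = p.userid"
).fetchone()
print(row)  # ('ab', 'hello')
```

The join confirms that `posts.userid` resolves back to a row in `users`, which is the core of the design.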
{ "domain": "codereview.stackexchange", "id": 3886, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mysql, sql", "url": null }
odometry, sensor-fusion, diff-drive-controller, ros2-control, ros2-controllers # cmd_vel_timeout: x # publish_limited_velocity: x # velocity_rolling_window_size: x # linear.x.has_velocity_limits: false # linear.x.has_acceleration_limits: false # linear.x.has_jerk_limits: false # linear.x.max_velocity: NAN # linear.x.min_velocity: NAN # linear.x.max_acceleration: NAN # linear.x.min_acceleration: NAN # linear.x.max_jerk: NAN # linear.x.min_jerk: NAN # angular.z.has_velocity_limits: false # angular.z.has_acceleration_limits: false # angular.z.has_jerk_limits: false # angular.z.max_velocity: NAN # angular.z.min_velocity: NAN # angular.z.max_acceleration: NAN # angular.z.min_acceleration: NAN # angular.z.max_jerk: NAN # angular.z.min_jerk: NAN # joint_broad: # ros__parameters: launch file import os from ament_index_python.packages import get_package_share_directory
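Under the hood, the odometry a diff-drive controller publishes boils down to standard differential-drive kinematics: wheel angular velocities become a body twist, which is integrated over time. A hedged sketch with made-up wheel parameters (these are not the values from the config above, which are commented out):

```python
import math

# Differential-drive kinematics sketch. wheel_separation and wheel_radius
# are illustrative values, not taken from the configuration above.
wheel_separation = 0.30   # m, distance between the two drive wheels
wheel_radius = 0.05       # m

def body_twist(omega_left, omega_right):
    """Wheel angular velocities (rad/s) -> (linear v, angular w) of the base."""
    v_left = omega_left * wheel_radius
    v_right = omega_right * wheel_radius
    v = (v_right + v_left) / 2.0
    w = (v_right - v_left) / wheel_separation
    return v, w

def integrate(x, y, theta, v, w, dt):
    """One Euler step of planar odometry."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Both wheels at 10 rad/s: the base drives straight at 0.5 m/s.
v, w = body_twist(10.0, 10.0)
print(v, w)  # 0.5 0.0
```

This is the quantity that ends up on the `/odom` topic (and can later be fused with IMU data); the controller's real implementation also handles covariance, velocity limits, and timeouts.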
{ "domain": "robotics.stackexchange", "id": 38523, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "odometry, sensor-fusion, diff-drive-controller, ros2-control, ros2-controllers", "url": null }
python, opencv
Title: Reinventing BGR to Grayscale OpenCV convert function in Python
For academic purposes I want to reinvent the OpenCV Blue-Green-Red-to-grayscale conversion function in Python. I am new to Python, so I believe my code below can still be optimized.

import cv2
import numpy as np

data = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]],
                 [[0, 0, 0], [128, 128, 128], [255, 255, 255]]], dtype=np.uint8)

rows = len(data)
cols = len(data[0])
grayed = []
for i in range(rows):
    row = []
    for j in range(cols):
        blue, green, red = data[i, j]
        gray = int(0.114 * blue + 0.587 * green + 0.299 * red)
        row.append(gray)
    grayed.append(row)
grayed = np.array(grayed, dtype=np.uint8)

print(data)
print(grayed)

wndData = "data"
wndGrayed = "grayed"
cv2.namedWindow(wndData, cv2.WINDOW_NORMAL)
cv2.imshow(wndData, data)
cv2.namedWindow(wndGrayed, cv2.WINDOW_NORMAL)
cv2.imshow(wndGrayed, grayed)
cv2.waitKey()
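The per-pixel double loop can be collapsed into a single vectorized dot product over the channel axis. This sketch applies the same B/G/R weights as the question's loop (note OpenCV's own `cvtColor` rounds rather than truncates, so values can differ by one from this truncating version):

```python
import numpy as np

# Vectorized grayscale: one dot product per pixel over the BGR channels.
data = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]],
                 [[0, 0, 0], [128, 128, 128], [255, 255, 255]]],
                dtype=np.uint8)

weights = np.array([0.114, 0.587, 0.299])   # B, G, R weights from the loop above
grayed = (data @ weights).astype(np.uint8)  # astype truncates, like int() in the loop
print(grayed)
```

`data @ weights` contracts the last axis of the `(rows, cols, 3)` image against the weight vector, producing a `(rows, cols)` float image in one step.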
{ "domain": "codereview.stackexchange", "id": 41494, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, opencv", "url": null }
set up the bounds on the integral. It appeared on stackexchange [22], and in a slightly less elegant form it appeared much earlier in [18]. The line integral in Eq. (1) is defined as $$\int_C \mathbf{a} \cdot d\mathbf{r} = \lim_{N \to \infty} \sum_{p=1}^{N} \mathbf{a}(x_p, y_p, z_p) \cdot \Delta \mathbf{r}_p$$ where it is assumed that all $|\Delta \mathbf{r}_p| \to 0$. Use a triple integral to determine the volume of the region below $z = 4 - xy$. So then $x^2 + y^2 = r^2$. Multiple integrals use a variant of the standard iterator notation. EXAMPLE 4: Find a vector field whose divergence is the given function $F$. The triple integral in this case is ... Note that we integrated with respect to $x$ first, then $y$, and finally $z$ here, but in fact there is no reason to do the integrals in this order. So let us give here a brief
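The "volume below $z = 4 - xy$" exercise can be sanity-checked numerically. The excerpt does not give the base region, so this sketch assumes the unit square $0 \le x, y \le 1$, for which the exact answer is $\int_0^1 \int_0^1 (4 - xy)\,dy\,dx = 4 - \tfrac{1}{4} = 3.75$:

```python
# Midpoint-rule double integral of z = 4 - x*y over the unit square
# (the region is an assumption; the excerpt omits the bounds).
# Exact value on this region: 4 - 1/4 = 3.75.
n = 400
h = 1.0 / n
volume = 0.0
for i in range(n):
    for j in range(n):
        x = (i + 0.5) * h   # midpoint in x
        y = (j + 0.5) * h   # midpoint in y
        volume += (4.0 - x * y) * h * h
print(round(volume, 6))  # 3.75
```

Because the integrand is a product of terms linear in $x$ and $y$, the midpoint rule is exact here up to floating-point rounding, so the sum lands on 3.75 for any grid size.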
{ "domain": "bollola.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9802808741970027, "lm_q1q2_score": 0.8534298480576168, "lm_q2_score": 0.8705972650509008, "openwebmath_perplexity": 1024.54283541851, "openwebmath_score": 0.9065842032432556, "tags": null, "url": "http://cghb.bollola.it/triple-integral-pdf.html" }