Columns: text (string, lengths 1 to 1.11k); source (dict)
# Which is larger, $70^{71}$ or $71^{70}$? [duplicate] Yet another question of which is larger: $70^{71}$ or $71^{70}$. I solved it by observing that $f(x)=\frac{\ln(x)}{x}$ is decreasing for all $x>e$, since $f'(x)=\frac{1-\ln(x)}{x^2}<0$ for all $x>e$. Then we have \begin{align*} \frac{\ln(70)}{70}>\frac{\ln(71)}{71} &\iff 71\ln(70)>70\ln(71)\\ &\iff \ln(70^{71})>\ln(71^{70})\\ &\iff e^{\ln(70^{71})}>e^{\ln(71^{70})}\\ &\iff 70^{71}>71^{70}. \end{align*} I briefly searched for similar problems along these lines, but didn't really find much in the way of other approaches. So I open this to the group: how else might this be proved?
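Since Python has arbitrary-precision integers, the inequality can also be sanity-checked directly (a trivial numerical check, not another proof technique):

```python
# Exact big-integer comparison of the two numbers in question.
print(70 ** 71 > 71 ** 70)   # → True
```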
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9881308803517763, "lm_q1q2_score": 0.8055563996126603, "lm_q2_score": 0.8152324915965392, "openwebmath_perplexity": 484.36399249490546, "openwebmath_score": 0.9583338499069214, "tags": null, "url": "https://math.stackexchange.com/questions/1824314/which-is-larger-7071-or-7170" }
algorithms, time-complexity, integers Title: Run time of product of polynomially bounded numbers Let $M$ denote a set of $n$ positive integers, each less than $n^c$. What is the runtime of computing $\prod_{m \in M} m$ on a deterministic Turing machine? Computing the product sequentially, we need to perform $n-1$ multiplications of numbers of size $O(C^n n^{cn})$, for some constant $C$ corresponding to the constant in $O(n^c)$. Using algorithms for fast integer multiplication, each of these requires $\tilde{O}(n\log C + cn \log n) = \tilde{O}_c(n)$, so in total the time is $\tilde{O}_c(n^2)$ (the $\tilde{O}$ hides logarithmic factors).
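The sequential scheme is easy to sketch in Python (this illustrates only the $n-1$ multiplications, of course, not the Turing-machine cost model):

```python
import math

def sequential_product(M):
    """Multiply the numbers one at a time, as in the (n-1)-multiplication
    scheme described above. Python's big integers handle the growing
    partial product (CPython switches to subquadratic multiplication
    for large operands)."""
    result = 1
    for m in M:
        result *= m
    return result

n = 50
M = list(range(1, n + 1))                           # toy instance
print(sequential_product(M) == math.factorial(n))   # → True
```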
{ "domain": "cs.stackexchange", "id": 404, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, time-complexity, integers", "url": null }
universe Title: What is the probability of a particle launched from Earth hitting any object in the universe? Imagine I pointed a laser into the night sky. How likely is it that a particular photon will ever hit anything? This question has bothered me for a while. I know very little about astronomy. When I asked my roommate, he laughed and said that the probability must be very small, since the universe is mostly empty. I know that is true, but it seems to me that the volume of the objects in the universe does not matter. Rather, it should be the area of the objects when projected onto a sphere that matters. On one hand, the universe is pretty empty, so hitting something outside the Solar System might be hard. But on the other hand, there are a lot of possible targets. These targets, however, appear smaller as the distance increases.
{ "domain": "astronomy.stackexchange", "id": 2569, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "universe", "url": null }
rust, concurrency pub fn add(&self, value: i32) { self.value.fetch_add(value, Ordering::Relaxed); } } This should be quite a bit faster. Another avenue of improvement is alignment. CPUs generally operate at the cache-line level, which is often 64 bytes. This means that if two values are within the same 64 bytes, then even though they are independent, writing to one requires exclusive access to both. This leads to "ping-ponging" if one core writes to one in a loop while another core writes to the other in a loop in parallel. This is called false sharing. Intel CPUs are worse in this regard, in that they often grab 2 cache lines at a time instead of 1, and thus require values to be in separate 128-byte bins. It can be beneficial to pad structures, or force their alignment, to avoid this. This is as simple as using repr with the align directive: #[derive(Debug, Default)] #[repr(align(128))] pub struct Counter { value: AtomicI32, }
{ "domain": "codereview.stackexchange", "id": 43917, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rust, concurrency", "url": null }
homework-and-exercises, special-relativity, tensor-calculus, conventions $$ Because $\boldsymbol A,\boldsymbol B$ are arbitrary, we get $$ w^l_\alpha w^\beta_l=\delta^\alpha_\beta. $$ Therefore $$ \boldsymbol v^i(\boldsymbol w_l)\boldsymbol w^l(\boldsymbol v_j) =v^i_\alpha v_j^\beta w^l_\beta w_l^\alpha =v^i_\alpha v_j^\beta \delta^\alpha_\beta=v^i_\alpha v_j^\alpha = \delta ^i_j. $$ Everything goes fine. The point is, I think, that a vector $\boldsymbol v$ is a first-order tensor, $v_\alpha$. But when we talk about a set of vectors, $\{\boldsymbol v_i\}$, they form a second-order tensor, $v^j_\alpha$. The product of two vectors is in general a fourth-order tensor, $v^i_\alpha v_j^\beta=\delta^i_j\delta_\alpha^\beta$ in this case. Contracting $i$ with $j$, or $\alpha$ with $\beta$, we get a second-order tensor, and contracting all indices we get a scalar.
{ "domain": "physics.stackexchange", "id": 89762, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, special-relativity, tensor-calculus, conventions", "url": null }
1. Is my approach wrong? A 0.02 difference is a pretty big difference. The two methods are equivalent. The apparent discrepancy is due to roundoff. The textbook finds an odds ratio of $$361:99,$$ which is exact (insofar as the $$1\%$$ and $$95\%$$ are exact). Since this is $$P(D) : P(D^C),$$ the probability is given by $$P(D) = \frac{P(D)}{P(D) + P(D^C)} = \frac{361}{361 + 99} \approx 0.78478,$$ which the text rounds to $$0.78.$$ (Since you asked about this as a separate part of the question, I'll explain in more detail below.)
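The odds-to-probability conversion is easy to reproduce exactly (a quick check, using the odds as given in the text):

```python
from fractions import Fraction

# Convert the textbook's exact posterior odds P(D) : P(D^C) = 361 : 99
# into a probability, reproducing the rounded 0.78.
odds = Fraction(361, 99)
p = odds / (1 + odds)            # P(D) = odds / (1 + odds) = 361/460
print(round(float(p), 5))        # → 0.78478
```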
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9566342012360932, "lm_q1q2_score": 0.8819419580886734, "lm_q2_score": 0.9219218348550491, "openwebmath_perplexity": 311.0663614422447, "openwebmath_score": 0.739641547203064, "tags": null, "url": "https://math.stackexchange.com/questions/2944252/finding-the-probability-that-someone-has-the-disease-given-they-test-positive-o" }
I would really appreciate anyone's input on this! • Perhaps something simplifies if you use the recursive formula for the determinant of a tridiagonal matrix: en.wikipedia.org/wiki/Tridiagonal_matrix#Determinant Oct 3, 2020 at 20:49 • I may have potentially found a nice simplification Oct 3, 2020 at 20:59 • @Buraian did you ever find the nice simplification? Oct 23, 2020 at 19:18 • I did but it was too big to be written as an answer :-) Oct 23, 2020 at 19:25 Your matrix is a general tridiagonal matrix, with $$d_i:=a_i+b_i+c_i$$ along the diagonal. If we denote the determinant of the $$n\times n$$ matrix by $$f_n$$, then we have the recurrence relation $$f_n=d_nf_{n-1}-b_{n-1}c_{n-1}f_{n-2}.$$ Not much more can be said for general sequences $$b_n$$, $$c_n$$ and $$d_n$$. For more information see Wikipedia. I believe I have an explicit solution!
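The recurrence is straightforward to turn into code. Here is a rough Python sketch, using the convention $f_0 = 1$, $f_1 = d_1$, with b and c the super- and subdiagonals:

```python
def tridiag_det(d, b, c):
    """Determinant of a tridiagonal matrix with diagonal d,
    superdiagonal b, and subdiagonal c, via the recurrence
    f_n = d_n * f_{n-1} - b_{n-1} * c_{n-1} * f_{n-2}."""
    f_prev2, f_prev1 = 1.0, float(d[0])      # f_0 = 1, f_1 = d_1
    for n in range(1, len(d)):
        f = d[n] * f_prev1 - b[n - 1] * c[n - 1] * f_prev2
        f_prev2, f_prev1 = f_prev1, f
    return f_prev1

# 3x3 check: det [[2,1,0],[1,2,1],[0,1,2]] = 2*(4-1) - 1*2 = 4
print(tridiag_det([2, 2, 2], [1, 1], [1, 1]))  # → 4.0
```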
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668728630677, "lm_q1q2_score": 0.8044521804795529, "lm_q2_score": 0.8198933359135361, "openwebmath_perplexity": 138.25289405708645, "openwebmath_score": 0.9665737152099609, "tags": null, "url": "https://math.stackexchange.com/questions/3850422/closed-form-expression-for-particular-determinant" }
java has no effect and most likely leads to useless "An error occured" messages at the end of the development cycle. If you don't pass on an exception but catch it, you need to handle it, throw a RuntimeException, or inform an online service like sentry.io about it. try-catch(Exception) is almost never right... Your project has no pom.xml, but IDE-related files in .idea. I'd provide a command-line based, cross-platform, cross-IDE build command as early as possible in order to make it as easy as possible for others to build and run your project. Your questions: I don't see dependency injection in your code. Use a FLOSS framework following the Java specification for it; that's why they're there. public abstract class SetOnOceanBase implements SetOnOcean { ... private final boolean horizontally; //only expose to subclasses' constructors
{ "domain": "codereview.stackexchange", "id": 25751, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java", "url": null }
python, python-3.x, immutability def show_listing(s): for i, l in enumerate(s.split('\n'), 1): print('{num: >3} {text}'.format(num=i, text=l.rstrip())) def unique_list(l): ulist = [] for thing in l: if thing not in ulist: ulist.append(thing) return ulist
{ "domain": "codereview.stackexchange", "id": 12064, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, immutability", "url": null }
neural-network, autoencoder, gaussian "why do we want to feed forward latent matrix with Gaussian distribution?" To be specific, we feed a Gaussian-distributed variable $z$ into the decoder network. This is the basic model assumption of the VAE regarding how samples are generated. Notice that one of the main differences between a VAE and a standard autoencoder is that the VAE is a generative model, in the sense that you can randomly generate samples, instead of using an input $x$ to reconstruct something similar to $x$. If we imagine a VAE trained on the MNIST handwritten digits dataset, we want to randomly sample $z\sim p(z)=\mathcal{N}(0,1)$, feed this $z$ into the decoder network, which gives us the output distribution (which is again a Gaussian), and sampling from that output distribution results in an image of a digit. "Why compare a distribution of Mu and Sigma matrices? "
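The generative pipeline described here (sample $z\sim\mathcal{N}(0,1)$, decode it to an output distribution, then sample an image from that) can be sketched in a few lines of NumPy. The decoder below is a fixed random map, not a trained network; it only illustrates the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in decoder: a fixed random linear map plus sigmoid. This is NOT
# a trained VAE decoder; it is only here to show the sampling pipeline.
latent_dim, image_dim = 8, 784
W = rng.normal(size=(image_dim, latent_dim))

def decode(z):
    """Map latent z to the mean of the output distribution (values in (0,1))."""
    return 1.0 / (1.0 + np.exp(-W @ z))

z = rng.standard_normal(latent_dim)    # z ~ N(0, 1)
mean = decode(z)                       # mean of the decoder's output Gaussian
image = rng.normal(mean, 0.1)          # sample from that output distribution
print(image.shape)                     # → (784,)
```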
{ "domain": "datascience.stackexchange", "id": 3833, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neural-network, autoencoder, gaussian", "url": null }
optics, visible-light, vision, dirac-delta-distributions, luminosity The claim that "white light" is usually described or defined as a uniform mixture of waves is pretty much completely incorrect: this is not how the term "white light" is treated in the literature. The meaning of the term is relatively well captured by this glossary at Plastic Optics: light, white. Radiation having a spectral energy distribution that produces the same color sensation to the average human eye as average noon sunlight. However, the term is not normally taken to have a strict technical meaning, a fact well reflected by the observation that on the first page of a search for "optics glossary" only a single resource has an entry for "white light". The meaning of the term is further complicated because it depends on who is using it:
{ "domain": "physics.stackexchange", "id": 34958, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, visible-light, vision, dirac-delta-distributions, luminosity", "url": null }
homework-and-exercises, newtonian-mechanics, friction, oscillators I nearly completed the question, but found part (b) hard to answer because no initial conditions are given. For part (b), I started by noting $$m\frac{\mathrm{d}^2x}{\mathrm{d}t^2}=F-kx$$ Letting $y = \frac{F}{k}-x$, I get $$m{d^2y \over dt^2}=-ky$$ thus $y = A\cos (\omega t+\phi)$, where $A$ and $\phi$ depend on the initial conditions and $\omega= \sqrt \frac{k}{m}$; hence I deduce $$x= \frac{F}{k}+A\cos(\omega t+\phi)$$ If I assume $x(0) \gt 0$, then $$x= \frac{F}{k}+A\cos(\omega t+\phi)$$ and hence $$x_{\text{max}_1}= \frac{F}{k}+A$$ In the other half cycle, $\dot x \gt 0$, so $$x= -\frac{F}{k}+A\cos(\omega t+\phi)$$ and $$x_{\text{max}_2}= -\frac{F}{k}+A$$ The difference between the previous $x_{\text{max}_1}$ and this $x_{\text{max}_2}$ is $\frac{2F}{k}$. $\square$
{ "domain": "physics.stackexchange", "id": 23775, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, newtonian-mechanics, friction, oscillators", "url": null }
undecidability, halting-problem 0 & \text{if $A(A)$ doesn't halt} \end{cases} \end{split} $$ ($\infty$ indicates an infinite loop. The $0$ could just as well have been a $1$ without affecting the argument.) Because $H\in\mathcal{M}$, so is $\overline{H}$. After all, it only adds conditional branching and an infinite loop. So we can ask whether $\overline{H}(\overline{H})$ halts, yielding a contradiction in either case. $$\tag*{$\Box$}$$ It seems to me that with this argument Turing could just have said, "I don't care how you define computation. Try coming up with the most powerful notion you can conceive of, then run through this argument, and voila, there are still undecidable problems." Well, I would like to ask the same kind of question of you: why did Sebastian Oberhoff have to define a computational model before demonstrating undecidability? I will answer the above question on your behalf below, assuming an imaginary line of case development. Of course, you might answer better than me.
{ "domain": "cs.stackexchange", "id": 12397, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "undecidability, halting-problem", "url": null }
c#, performance, cache, antlr, rubberduck public IParseTree Parse(string code) { var input = new AntlrInputStream(code); var lexer = new VBLexer(input); var tokens = new CommonTokenStream(lexer); var parser = new VBParser(tokens); var result = parser.StartRule(); return result; } public IEnumerable<VBComponentParseResult> Parse(VBProject project) { var modules = project.VBComponents.Cast<VBComponent>(); foreach(var module in modules) { yield return Parse(module); }; } public VBComponentParseResult Parse(VBComponent component) { VBComponentParseResult cachedValue; var name = component.QualifiedName(); if (ParseResultCache.TryGetValue(name, out cachedValue)) { return cachedValue; }
{ "domain": "codereview.stackexchange", "id": 12416, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, performance, cache, antlr, rubberduck", "url": null }
c#, finance public class PaymentShemeChaps : PaymentSchemeResult { public override MakePaymentResult GetResult(Account account) { var result = new MakePaymentResult(); if (account == null) { result.Success = false; } else if (!account.AllowedPaymentSchemes.HasFlag(AllowedPaymentSchemes.Chaps)) { result.Success = false; } else if (account.Status != AccountStatus.Live) { result.Success = false; } return result; } } public class PaymentSchemeBacs : PaymentSchemeResult { public override MakePaymentResult GetResult(Account account) { var result = new MakePaymentResult(); if (account == null) { result.Success = false; } else if (!account.AllowedPaymentSchemes.HasFlag(AllowedPaymentSchemes.Bacs)) { result.Success = false; } return result; } }
{ "domain": "codereview.stackexchange", "id": 29572, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, finance", "url": null }
python, python-3.x, parsing, regex, logging def save_to_file(filename): output_filename = "bytes_" + filename with open(output_filename, 'w+') as fout: fout.write(str(nbr_of_requests) + '\n' + str(sum_of_bytes)) if __name__ == '__main__': for line in parse_file(filename): if line and int(line[0][8]) >= 1000: nbr_of_requests += 1 sum_of_bytes = sum_of_bytes + int(line[0][8])
{ "domain": "codereview.stackexchange", "id": 34437, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, parsing, regex, logging", "url": null }
nxt, rosjava As current rosjava_bootstrap lacks such a file, this obviously fails. How is one supposed to reconstruct the Getting Started Tutorial with current rosjava_core? Best regards, Andreas Originally posted by andreasw on ROS Answers with karma: 61 on 2012-06-27 Post score: 0 The tutorial has been fixed. Please try it now. Originally posted by LawrieGriffiths with karma: 131 on 2012-07-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 9967, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nxt, rosjava", "url": null }
c#, compiler, roslyn public static string Decrypt(string cipherText, string passPhrase) { // Get the complete stream of bytes that represent: // [32 bytes of Salt] + [32 bytes of IV] + [n bytes of CipherText] var cipherTextBytesWithSaltAndIv = Convert.FromBase64String(cipherText); // Get the salt bytes by extracting the first 32 bytes from the supplied cipherText bytes. var saltStringBytes = cipherTextBytesWithSaltAndIv.Take(Keysize / 8).ToArray(); // Get the IV bytes by extracting the next 32 bytes from the supplied cipherText bytes. var ivStringBytes = cipherTextBytesWithSaltAndIv.Skip(Keysize / 8).Take(Keysize / 8).ToArray(); // Get the actual cipher text bytes by removing the first 64 bytes from the cipherText string.
{ "domain": "codereview.stackexchange", "id": 29947, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, compiler, roslyn", "url": null }
thermodynamics, energy, work, adiabatic However, as in many other examples, the technical meaning in physics is sometimes not the same as in everyday language, and the two semantic basins may cause some ambiguity. In some cases, we use the term isolated as a synonym for a system that cannot exchange heat and particles while its walls can move. Moreover, in some contexts, isolated may be a shortening of thermally isolated. Therefore, it is always a good idea in physics to state explicitly the precise meaning of the term.
{ "domain": "physics.stackexchange", "id": 64550, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, energy, work, adiabatic", "url": null }
javascript, node.js, promise, rest, express.js https://stackoverflow.com/questions/26076511/handling-multiple-catches-in-promise-chain However, I don't understand a few things: I don't see how, at least in my example, I can separate the logic from the response. The response will depend on the logic after all! I would like to avoid error sub-classing and hierarchy. First because I don't use bluebird, and I can't subclass the error class the answer suggests, and second because I don't want my code with a billion different error classes with brittle hierarchies that will change in the future. My idea, which I don't really like either With this structure, if I want error differentiation, the only thing I can do is detect that an error occurred, build an object with that information, and then throw it: .then(doc => { if(doc === null) throw {reason: "ObjectNotFound"};
{ "domain": "codereview.stackexchange", "id": 24998, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, node.js, promise, rest, express.js", "url": null }
# Number of relations that are both symmetric and reflexive Consider a non-empty set A containing n objects. How many relations on A are both symmetric and reflexive? The answer to this is $2^p$ where $p=$ $n \choose 2$. However, I don't understand why this is so. Can anyone explain this? - this is not (number-theory); just because it has numbers does not make it number theory. It's about counting, so it's combinatorics. – Arturo Magidin Nov 27 '10 at 23:40 To be reflexive, it must include all pairs $(a,a)$ with $a\in A$. To be symmetric, whenever it includes a pair $(a,b)$, it must include the pair $(b,a)$. So it amounts to choosing which $2$-element subsets of $A$ will correspond to associated pairs. If you pick a subset $\{a,b\}$ with two elements, it corresponds to adding both $(a,b)$ and $(b,a)$ to your relation. How many $2$-element subsets does $A$ have? Since $A$ has $n$ elements, it has exactly $\binom{n}{2}$ subsets of size $2$.
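For small n the count can be verified by brute force against the $2^{\binom{n}{2}}$ formula (a quick Python check; the enumeration runs over all $2^{n^2}$ relations, so keep n tiny):

```python
from itertools import product
from math import comb

def is_reflexive(R, n):
    return all((a, a) in R for a in range(n))

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def brute_count(n):
    """Count reflexive, symmetric relations on {0, ..., n-1} by
    enumerating every subset of the n^2 ordered pairs."""
    pairs = list(product(range(n), repeat=2))
    total = 0
    for bits in product([0, 1], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        if is_reflexive(R, n) and is_symmetric(R):
            total += 1
    return total

for n in range(1, 4):
    print(n, brute_count(n), 2 ** comb(n, 2))   # counts agree
```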
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9877587221857245, "lm_q1q2_score": 0.8121244785039665, "lm_q2_score": 0.8221891239865619, "openwebmath_perplexity": 110.16573179545345, "openwebmath_score": 0.9319228529930115, "tags": null, "url": "http://math.stackexchange.com/questions/12139/number-of-relations-that-are-both-symmetric-and-reflexive/12145" }
multithreading, go type fileData struct { name, bytes string } func getBytesFromFile(file string, dataChan chan fileData) error { bytes, err := openFileAndGetBytes(file) if err == nil { dataChan <- fileData{name: file, bytes: bytes} } return err } func openFileAndGetBytes(file string) (string, error) { if file == "file2" { return "", fmt.Errorf("%s cannot be read", file) } return fmt.Sprintf("these are some bytes for file %s", file), nil }
{ "domain": "codereview.stackexchange", "id": 43037, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "multithreading, go", "url": null }
php, javascript, mysql, sql, ajax //req.open("GET", url, true); //req.send(); req.open("POST", page, true); req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded"); req.send("data=<?php echo $user->id; ?>"); } } timer = setTimeout("toto(page,'scriptoutput')", 1 * 60 * 1000); } function ajaxDone(target) { // only if req is "loaded" if (req.readyState == 4) { // only if "OK" if (req.status == 200 || req.status == 304) { results = req.responseText; document.getElementById(target).innerHTML = results; } else { document.getElementById(target).innerHTML="ajax error:\n" + req.statusText; } } } </script> The PHP code if( isset( $_POST["data"] ) ){ $id = (int) $_POST["data"]; $sql ='INSERT INTO users_timeconnected ( profil_id, last_ajax_call ) VALUES ('. $id .', NOW() ) ON DUPLICATE KEY UPDATE id=id'; $db->query( $sql );
{ "domain": "codereview.stackexchange", "id": 400, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, javascript, mysql, sql, ajax", "url": null }
python, python-3.x, sorting In bubble_sort(), the while last_index > 0 loop would be written more idiomatically as: for last_index in range(len(array) - 1, 0, -1): … In insertion_sort(), the reversed(list(enumerate(sorted_array))) would be making temporary copies of sorted_array. I would therefore consider it an improper implementation of the algorithm.
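For concreteness, here is what the reviewer's suggested loop looks like in a full bubble sort (the original bubble_sort body isn't shown in this excerpt, so this is a generic reconstruction):

```python
def bubble_sort(array):
    """Bubble sort using the suggested for-loop in place of the
    original `while last_index > 0` counter management."""
    for last_index in range(len(array) - 1, 0, -1):
        for i in range(last_index):
            if array[i] > array[i + 1]:
                # swap adjacent out-of-order elements
                array[i], array[i + 1] = array[i + 1], array[i]
    return array

print(bubble_sort([5, 1, 4, 2, 8]))   # → [1, 2, 4, 5, 8]
```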
{ "domain": "codereview.stackexchange", "id": 22686, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, sorting", "url": null }
human-biology, biochemistry In the Discussion section, you will find your answer: Reductions in the plasma concentrations of LDL in human subjects consuming diets rich in long-chain omega-3 fatty acids from fish oils or omega-6 fatty acids from vegetable oils may be due to a reduction in LDL synthesis, an increased fractional rate of catabolism of LDL, or a combination of both... The present study indicates that the hypocholesterolemic effects of long-chain omega-3 fatty acids present in fish oils results from a reduction in the rate of LDL apoprotein B synthesis and that such oils do not stimulate the fractional rate of catabolism of LDL. These observations imply that the incorporation of omega-3 fatty acids into cellular or lipoprotein lipids does not enhance the rate of receptor-mediated catabolism of LDL as has been observed with omega-6 fatty acids in vitro.
{ "domain": "biology.stackexchange", "id": 5398, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "human-biology, biochemistry", "url": null }
Piecewise Function, Unrelated Problems, Tangent Line, Domain and Range etc. We look at average rates of change, which become instantaneous as time intervals become vanishingly small, leading to the notion of a derivative. In this foldable, students compare the formulas for velocity, acceleration, speed, average velocity, average speed, and total distance travelled for both types of problems. 5 sec and (ii) is asking for the average velocity between 2 sec and 2. The average rate of change (or velocity) is equal to the change in distance divided by the change in time. The instantaneous velocity is represented by the first derivative of the position equation. The Physical Concept of the Derivative: this approach was used by Newton in the development of his Classical Mechanics. IB Mathematics HL. Exercises 1–6 give the positions s = f(t) of a body moving on a coordinate line, with s in meters and t in seconds. $\frac{df}{dt}=\lim_{\Delta t\to 0}\frac{\Delta f}{\Delta t}$. This is the neat notation that Leibniz invented: $\Delta f/\Delta t$ approaches $df/dt$.
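The limit $\Delta f/\Delta t \to df/dt$ can be illustrated numerically; the position function $s(t)=t^2$ below is just an assumed example:

```python
def average_velocity(s, t, dt):
    """Average velocity of position function s over [t, t + dt]."""
    return (s(t + dt) - s(t)) / dt

s = lambda t: t ** 2          # an assumed example position function
for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, average_velocity(s, 2.0, dt))
# As dt shrinks, the values approach the instantaneous velocity s'(2) = 4.
```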
{ "domain": "gremium-franconia.de", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9895109081214212, "lm_q1q2_score": 0.8267200795997978, "lm_q2_score": 0.8354835432479663, "openwebmath_perplexity": 382.1881268872426, "openwebmath_score": 0.7783623337745667, "tags": null, "url": "http://tdot.gremium-franconia.de/average-velocity-problems-calculus.html" }
python, kivy Title: Music Tag Editor written in Python Kivy using Mutagen I've been working on a Python script which is an MP3 tag editor. I need someone to go through my script and tell me whether I have written it correctly (following correct coding style) and where I can improve the code in terms of readability, functionality, making it more Pythonic, etc. Since it is a little long and I've already posted it on GitHub's gist, please follow this link: https://gist.github.com/24Naman/d0c293d86e58d5f7a8fd8b536c5343f9 [Edit] The program has been moved to a GitHub repo; you can view it at https://github.com/24Naman/PyMTag #!/usr/bin/python3 """ Python: Created by Naman Jain on 12-01-2018 File: music_file_tag_editor GUI Tag editor for MP3 files. It supports tag editing using the mutagen library, renaming the file based on its ID3 attributes, and changing the album art using the local file system or an Internet search, or removing it completely.
{ "domain": "codereview.stackexchange", "id": 35597, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, kivy", "url": null }
filters, image-processing, image-segmentation, total-variation My question is: could you tell me what is different between the two approaches for noise removal? What is the key idea of TV? Which one is better? Thanks. These are two different concepts that you ask about. First, an MRF gives you a framework for discrete optimization of problems that respect the Markov property, namely that a pixel is conditioned only on the neighboring ones (roughly stated). Typical applications include binary or multi-class labeling problems. Total variation, on the other hand, is generally used as a regularization, by adding the integral of the absolute gradient of the signal/image to the energy functional. This helps to neglect irrelevant detail and focus on important features. You cannot say one is better than the other, as they are not exactly contradictory things. It depends on the application and the energy function you use in the MRF.
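As a concrete illustration of the "integral of the absolute gradient" regularizer, here is a minimal discrete (anisotropic) TV computation in NumPy; the ramp image and noise level are made up for the demo:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute differences between
    neighboring pixels, a discrete form of the integral of |gradient|."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))    # smooth ramp image
noisy = smooth + 0.1 * rng.standard_normal(smooth.shape)

# Noise adds many small oscillations, so it raises the TV penalty;
# minimizing an energy with a TV term therefore discourages noise.
print(total_variation(smooth) < total_variation(noisy))   # → True
```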
{ "domain": "dsp.stackexchange", "id": 3654, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, image-processing, image-segmentation, total-variation", "url": null }
c, linked-list, interview-questions typedef struct llnode { int value; struct llnode *next; } llnode; typedef struct xllist { llnode *head; llnode *tail; size_t length; } xllist; bool create_node(xllist *list, int value) { llnode *node = malloc(sizeof *node); if(!node) { return false; } list->length++; node->value = value; node->next = NULL; if(!list->head) { /* first node: becomes both head and tail */ list->head = node; } else { list->tail->next = node; } list->tail = node; return true; } bool del_element_from_last(xllist *llist, int k) { printf("len:%zu \n", llist->length); //window with 2 pointers, length of k //prev is the prev node to the window llnode *prev = llist->head; llnode *last = llist->head; if((size_t)k > llist->length) { return false; } else if ((size_t)k == llist->length) { llnode *old = llist->head; llist->head = llist->head->next; free(old); llist->length--; return true; }
{ "domain": "codereview.stackexchange", "id": 40262, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, linked-list, interview-questions", "url": null }
java, performance, parsing return null; } databaseName is a pointless local variable: you can simply delete all lines using it. The same goes for sourceFile. Arrline and s are poorly named variables, and s is lowercased twice: String[] Arrline = line.split(" "); for (String s : Arrline) { if (!s.toLowerCase().equals("create") && !s.toLowerCase().equals("database")) { return s; } } Cleaning these up the code becomes: String[] segments = line.split(" "); for (String origSegment : segments) { String segment = origSegment.toLowerCase(); if (!segment.equals("create") && !segment.equals("database")) { return origSegment; } } The current method of extracting the database name is quite awkward and inefficient. Look at these steps: lowercase the line check if it contains a string split the line to tokens for each token if the lowercased token is neither "create" nor "database" then return it
{ "domain": "codereview.stackexchange", "id": 15725, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance, parsing", "url": null }
c++, performance, statistics, r, ffi Title: Moving average function in C++ for use with R I am trying to improve and speed up, in C++ (Rcpp), a (centered) weighted moving average function I coded. An example of what the roll_mean function does. Note that the function works no matter what the size of x is, and adapts to both tails of my data: w=c(1/2,1,1/2) x=c(4,2,6,12) res=c(2,5,7,3) res=c(sum(x[1:2]*w[2:3])/sum(w[2:3]), sum(x[1:3]*w[1:3])/sum(w[1:3]), sum(x[2:4]*w[1:3])/sum(w[1:3]), sum(x[3:4]*w[1:2])/sum(w[1:2])) The file PartialMA.cpp #include <Rcpp.h> using namespace Rcpp; // [[Rcpp::export]] NumericVector roll_mean(const NumericVector& x, const NumericVector& w) { int n = x.size(); int w_size = w.size(); int size = (w_size - 1) / 2; NumericVector res(n); int i, ind_x, ind_w; double tmp_wsum, tmp_xwsum, tmp_w;
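For reference while reviewing, here is my reading of the intended semantics as a plain NumPy sketch (truncate the window at the tails and renormalize the weights); treat it as an assumption about the spec, not a drop-in replacement for the Rcpp version:

```python
import numpy as np

def roll_mean(x, w):
    """Centered weighted moving average with an odd-length weight
    vector w; near the edges the window is truncated and the weights
    renormalized (my reading of the res= example in the question)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    half = (len(w) - 1) // 2
    out = np.empty_like(x)
    for i in range(len(x)):
        lo = max(0, i - half)                  # clip window to the data
        hi = min(len(x), i + half + 1)
        ww = w[lo - i + half : hi - i + half]  # matching slice of weights
        out[i] = (x[lo:hi] * ww).sum() / ww.sum()
    return out

print(roll_mean([4, 2, 6, 12], [0.5, 1, 0.5]))
```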
{ "domain": "codereview.stackexchange", "id": 30119, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, statistics, r, ffi", "url": null }
special-relativity, group-theory, conventions, lie-algebra Why does this $\dfrac{1}{2}$ factor come into the equation? And why do some people define it with the complex number $i$ while others don't? Mathematicians like to define it without the $i$ and physicists like the $i$. If you keep the $i$ in there, then unitarity of a representation $D(\Lambda)$ implies that the corresponding Lie algebra representation $D(M_{\alpha\beta})$ is Hermitian. This is easy to see as \begin{align} D(\Lambda)^\dagger &= D ( e^{\frac{i}{2} \omega^{\alpha\beta} M_{\alpha\beta}} )^\dagger = [ e^{\frac{i}{2} \omega^{\alpha\beta} D ( M_{\alpha\beta} ) } ]^\dagger = e^{- \frac{i}{2} \omega^{\alpha\beta} D ( M_{\alpha\beta} )^\dagger} , \\ D(\Lambda)^{-1} &= e^{- \frac{i}{2} \omega^{\alpha\beta} D ( M_{\alpha\beta} )} \end{align} Then, unitarity $D(\Lambda)^\dagger = D(\Lambda)^{-1}$ implies $D(M_{\alpha\beta})^\dagger = D(M_{\alpha\beta})$, which is the hermiticity condition. In the math convention, $D(M_{\alpha\beta})$ is anti-Hermitian.
{ "domain": "physics.stackexchange", "id": 85825, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, group-theory, conventions, lie-algebra", "url": null }
cosmology, radio-astronomy Now the big question is, according to eq. \ref{b} the dip should be of the order a few tens of mK, but is in fact roughly 0.5 K, i.e. an order of magnitude larger. One possible mechanism that could produce this effect is coupling of the gas with dark matter, something which is not usually considered possible but could happen if the dark matter particle has a very small charge (Barkana et al. 2018). Time evolution of the 21 cm signal The figure below (from a great review by Pritchard & Loeb 2012) shows how the 21 cm signal evolves with time. The dip discussed in this answer is the orange and red part.
{ "domain": "astronomy.stackexchange", "id": 2836, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cosmology, radio-astronomy", "url": null }
python, console, validation Title: UPDATE on Newspaper Bill Calculator CLI with Python (2 of 3, CLI) Code is posted after explanation. Due to the size of the project, this is being posted in three separate posts. This also ensures each post is more focused. Post 1 of 3, Core: UPDATE 1 on Newspaper Bill Calculator CLI with Python (1 of 3, Core) Post 3 of 3, Database: UPDATE 1 on Newspaper Bill Calculator CLI with Python (3 of 3, Database) This is a follow-up to an earlier version of the same project. The feedback from the last round is tracked in an issue.
{ "domain": "codereview.stackexchange", "id": 43351, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, console, validation", "url": null }
quantum-mechanics, wavefunction, quantum-interpretations, observers, bells-inequality If this seems mysterious, a classical analogy can make it less so. Let's imagine that instead of an electron we have a classical object with a definite orientation, e.g. a ping-pong ball with a dot painted on it. If each of us believes that we know, with 100% certainty, the orientation of the ball then we have to agree. If we disagree then one of us must just be wrong, since we can't both be right. But it's possible for one of us not to know the orientation of the ball while the other one does, and in that case one of us can make predictions about measurements while the other one can't. This is all that the density matrix formalism is doing, it's giving us a way to represent this kind of lack of knowledge about quantum systems.
{ "domain": "physics.stackexchange", "id": 35798, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, wavefunction, quantum-interpretations, observers, bells-inequality", "url": null }
computational-chemistry, software N.B.: I cannot believe it, but the pm6/popt approach is actually a halfway decent application of compound jobs. Although I have to admit I have not tried it yet. It works as the following example demonstrates. (I used Gaussian 16 Rev. A.03, but it should work with Gaussian 09 Rev D.01, too. Maybe even earlier.) I set up an initial scan on the pm6 level of theory for $\ce{H3B-NH3}$ and scanned the $\ce{B-N}$ bond: %chk=pm6.scan.chk %nproc=2 %mem=8000MB #P PM6 OPT(MaxCycle=100) SYMMETRY(loose) GEOM(ModRedundant) Volume title 0 1 N 0.000000 0.000000 -0.764994 H 1.019690 0.000000 -1.164250 H -0.509845 -0.883078 -1.164250 H -0.509845 0.883078 -1.164250 B -0.000000 0.000000 0.235006 H 0.509845 0.883078 0.634262 H -1.019691 0.000000 0.634262 H 0.509845 -0.883078 0.634262 B 1 5 S 5 0.2
{ "domain": "chemistry.stackexchange", "id": 9424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computational-chemistry, software", "url": null }
work, spring, integration, calculus Title: Wrong calculation of work done on a spring, how is it wrong? So I would have thought that this would be how you derive the work on a spring: basically the same way you do with gravity and other contexts, use $$W=\vec{F}\cdot \vec{x}.$$ If you displace a spring by $x$, then it exerts a force $-k x$, so $F=-kx$, since the displacement is $x$. So $$W=-kx^2.\qquad \leftarrow\text{ (however, apparently wrong!)}$$ I've seen the correct derivation of work in a spring (with an extra half) and don't doubt that it's correct, but also don't see where my logic fails in this alternate derivation. You may be imagining that if you push with constant force $F$, the spring will compress until the spring has such a resistive force.
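For reference, the flaw is that $W=\vec{F}\cdot \vec{x}$ only holds for a *constant* force; a spring's force grows with displacement, so the work must be accumulated as an integral over the displacement:

$$W = \int_0^x F(x')\,\mathrm{d}x' = \int_0^x kx'\,\mathrm{d}x' = \tfrac{1}{2}kx^2 .$$

Multiplying the endpoint force $kx$ by the full displacement $x$, as in the question, over-counts the work done during the early part of the compression, which is exactly where the missing factor of $\tfrac{1}{2}$ comes from.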
{ "domain": "physics.stackexchange", "id": 82283, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "work, spring, integration, calculus", "url": null }
forces, hydrogen, fusion This is of course why nuclear fusion produces energy. When four hydrogen atoms combine to form a helium atom the mass decreases by about 0.029 amu. That mass is converted to energy and emitted as the kinetic energy of the reaction products.
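The mass deficit translates to energy via $E=mc^2$; using the standard conversion 1 amu ≈ 931.494 MeV, a quick check of the figure quoted above:

```python
AMU_TO_MEV = 931.494   # energy equivalent of 1 atomic mass unit, via E = mc^2

delta_m = 0.029                    # mass lost when 4 H -> He, in amu (from the text)
energy_mev = delta_m * AMU_TO_MEV  # ~27 MeV released per helium nucleus formed
```

This matches the well-known ~26.7 MeV released per helium nucleus in the proton-proton chain.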
{ "domain": "physics.stackexchange", "id": 65588, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "forces, hydrogen, fusion", "url": null }
galaxy, distances Title: DSO carthesian coordinate estimations I'm more in the CS based stack exchange, but this question prompted me to come in here. I've been compiling a large set of ephemerides for solar system, stars with mag > 6.8, exo planets etc, and I am now working on the DSO catalog. The app actually holds celestial objects in 3D space, and I made conversions from RA/DEC/dist(parsec) for various objects, but I am having trouble with DSO objects. I know of a red shift time base formula, but heard the conversion is not very good. Also, I understand that most estimations cannot be counted as precise, but I would like to have these objects within 3D space. Can anyone tell me what features I should look for in a catalog and what formula I can base myself on for this conversion, if any?
{ "domain": "astronomy.stackexchange", "id": 3189, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "galaxy, distances", "url": null }
python, kivy Title: Life Counter for Magic the Gathering This is a life counter that I am using together with my friends for card games (Magic the Gathering). For those who know the game, it is actually designed for 1vs1 Commander. It provides you with the possibility to take notes about life points, commander damage, and also to roll a dice to decide who has to start the game.
{ "domain": "codereview.stackexchange", "id": 26387, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, kivy", "url": null }
genetics, mutations Below are some expectations from a simple model and possible reasons why this expectation may break down under more complicated models. Simple model Under a simple model (panmictic population and a few other simple assumptions), the number of mutations a given individual has in any sequence considered follows a Poisson distribution. Assuming that all mutations occurring have a constant selection coefficient $s$, a constant dominance coefficient $h$, and that the mutation rate for the sequence of interest is $U$, the number of mutations an individual carries comes from a Poisson distribution with mean $\frac{U}{2hs}$ (Crow 1970). This model is simple but is probably a pretty good approximation to reality. Below are three assumptions that are not necessarily true and that would yield a higher variance in the number of mutations (that is, a higher probability for an individual that already carries a mutation to get a second mutation). Population structure
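The Poisson expectation can be checked numerically; a minimal sketch (the parameter values for U, h, s below are made up for illustration and are not from the post), sampling with Knuth's algorithm:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's algorithm: multiply uniforms until the product drops below e^-lam;
    # the number of multiplications before that is Poisson(lam) distributed.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

# Hypothetical parameter values, chosen only for illustration:
U, h, s = 0.1, 0.25, 0.02
mean = U / (2 * h * s)   # mutation-selection-balance mean = U / (2hs) = 10

rng = random.Random(42)
samples = [poisson_sample(mean, rng) for _ in range(20_000)]
est = sum(samples) / len(samples)   # should be close to U / (2hs)
```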
{ "domain": "biology.stackexchange", "id": 6640, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "genetics, mutations", "url": null }
condensed-matter, electronic-band-theory, orbitals, tight-binding The logic of step 2 can differ. You can see in some papers that the energy of overlapping orbitals is computed, for example, using the Slater-Koster approximation. Also, symmetries of the system can say something about this hopping as a function of wavevector $k$, so there are approaches where, using known symmetries, the number of tight-binding parameters is reduced. These parameters are then often fitted to ab initio calculations (often DFT). So finally yes, if you know $H_{ij}$ you can compute the dispersion by just solving the eigenvalue problem. But the tight-binding model is not just about that: it is about the method by which you build the matrix that you will diagonalize.
{ "domain": "physics.stackexchange", "id": 90431, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "condensed-matter, electronic-band-theory, orbitals, tight-binding", "url": null }
navigation, ros-kinetic, robot-localization my ekt_template.yaml frequency: 20 two_d_mode: true print_diagnostics: true debug: false debug_out_file: /path/to/debug/file.txt publish_tf: true #map_frame: map # Defaults to "map" if not specified odom_frame: odom # Defaults to "odom" if not specified base_link_frame: base_footprint # Defaults to "base_link" if not specified world_frame: odom # Defaults to the value of odom_frame if not specified odom0: /odom odom0_config: [false, false, false, false, false, false, true, true, false, false, false, true, false, false, false] odom0_differential: false imu0: /imu_raw imu0_config: [false, false, false, false, false, true, false, false, false, false, false, true, true, false, false] imu0_differential: false
{ "domain": "robotics.stackexchange", "id": 33057, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, ros-kinetic, robot-localization", "url": null }
java, spring if(principal instanceof AUser) { final AUser au = (AUser)principal; return au.getId(); } } } return 0; } } This is a utility method. There's nothing wrong with it being static. Its readability would benefit greatly from (a) using guard clauses instead of having multiple nesting levels, (b) using java bracket style, and (c) using java spacing. You might also want to add logging to indicate why there's no signed-in user, which is presumably a WARN or ERROR, but should at least be a DEBUG. public static int getSignedUpUser() { final SecurityContext ctx = SecurityContextHolder.getContext(); if (ctx == null) { LOGGER.debug("No security context available"); return 0; }
{ "domain": "codereview.stackexchange", "id": 12342, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, spring", "url": null }
it is reflexive, transitive and symmetric. Reflexive: a word has the same number of letters as itself. Thus, a relation is a set of pairs. c) The relation graphed above is NOT a function because at least one vertical line intersects the given graph at two points, as shown below. For an n-element set, we can count with an int from 0 to (2^n)-1. Recall: A binary relation R from A to B is a subset of the Cartesian product. If $(x, y) \in R$, we write xRy and say that x is related to y with respect to R. A relation on the set A is a relation from A to A. R is transitive if, and only if, $\forall x, y, z \in A$, if xRy and yRz then xRz. Is R transitive? Now set the properties on the new relation you created under the Relations node. A relation is any set of ordered pairs. First, reflexive. Equivalence Classes of an Equivalence Relation: The following lemma says that if two elements of A are related by an equivalence relation R, then
{ "domain": "localnews.ie", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9683812327313546, "lm_q1q2_score": 0.8191947783800124, "lm_q2_score": 0.8459424353665381, "openwebmath_perplexity": 500.49743426716816, "openwebmath_score": 0.7657002210617065, "tags": null, "url": "https://localnews.ie/come-over-bezal/find-all-relations-on-the-set-a-0-1-c0da4f" }
natural-language-processing, terminology, books, test-datasets, validation-datasets Sometimes we use a particular test set so often that we implicitly tune to its characteristics. We then need a fresh test set that is truly unseen. In such cases, we call the initial test set the development test set or devset. How do we divide our data into training, development, and test sets? We want our test set to be as large as possible, since a small test set may be accidentally unrepresentative, but we also want as much training data as possible. At the minimum, we would want to pick the smallest test set that gives us enough statistical power to measure a statistically significant difference between two potential models. In practice, we often just divide our data into 80% training, 10% development, and 10% test. Given a large corpus that we want to divide into training and test, test data can either be taken from some continuous sequence of text inside the corpus, or we can remove smaller “stripes” of text from randomly selected parts of our corpus and combine them into a test
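The 80/10/10 split described above can be sketched in a few lines; `split_corpus` is a hypothetical helper, not from the text, and it shuffles rather than holding out a contiguous block (closer to the "stripes" idea than the continuous-sequence one):

```python
import random

def split_corpus(items, train=0.8, dev=0.1, seed=0):
    # Shuffle a copy so the held-out sets are drawn from across the corpus,
    # not from one contiguous region.
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_dev = int(n * dev)
    return (items[:n_train],
            items[n_train:n_train + n_dev],
            items[n_train + n_dev:])

docs = [f"doc{i}" for i in range(100)]
train_set, dev_set, test_set = split_corpus(docs)  # 80 / 10 / 10 documents
```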
{ "domain": "ai.stackexchange", "id": 3793, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "natural-language-processing, terminology, books, test-datasets, validation-datasets", "url": null }
filters, bandpass Title: filtering difference b/w freq and time domain When you want to get a specific frequency (ex. around 1000 Hz), you might: 1) take an FFT, keep the frequency you want, and reduce the power of the other frequencies you do not (ex.: if fs = 16 kHz and fftsize = 1024, keep the power of bin 64 and reduce all other bins); 2) design band-pass filtering (Butterworth or others) to pass 1000 Hz (with stop/transition band also). In short, 1 is in the frequency domain, and 2 is in the time domain. Then my question is: are these two methods the same? (I think yes in terms of "extracting the 1000 Hz signal" (both can do it)) If yes, how would you utilize these two methods? If not, how different are they (and how would you utilize them)? I would appreciate your answers. Thank you. are these two methods the same? no If not, how different are they (and how would you utilize them)?
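The bin-64 figure in the question follows directly from the FFT's frequency resolution; a quick check:

```python
fs = 16_000      # sampling rate from the question, in Hz
fft_size = 1024  # FFT length from the question

bin_hz = fs / fft_size             # frequency resolution: 15.625 Hz per bin
target_bin = round(1000 / bin_hz)  # the bin containing ~1000 Hz -> bin 64
```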
{ "domain": "dsp.stackexchange", "id": 9824, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, bandpass", "url": null }
ros, ros2, rclcpp, rclpy Originally posted by lmiller on ROS Answers with karma: 219 on 2021-09-28 Post score: 0 Original comments Comment by gvdhoorn on 2021-09-28: Just to make sure: ctrl+z suspends, it does not terminate. Are you killing the processes you've suspended? Comment by lmiller on 2021-09-28: Yes, I killed the processes after suspending. Comment by lmiller on 2021-09-28: Found out that UFW blocked something. With UFW disabled everything works fine. Which rule do I need for my UFW with default ROS2 settings? The firewall UFW blocked the discovery of Cyclone DDS, so disable your firewall or, better, add rules for ports 7400 & 7401 (default discovery ports; see this comment) to your ufw Originally posted by lmiller with karma: 219 on 2021-09-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2021-09-28: Please note: DDS can use more ports than just the two you mention.
{ "domain": "robotics.stackexchange", "id": 36959, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros2, rclcpp, rclpy", "url": null }
quantum-field-theory, research-level, quantum-chromodynamics, electroweak, mesons For your second question, if you consider only valence quarks in the mesons, then you are using a tree-level approximation to describe hadronic states. This is a crude approximation, because QCD is non-perturbative at low energies. You can improve it by computing high-order QCD corrections, but you will never have a reliable result.
{ "domain": "physics.stackexchange", "id": 14359, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, research-level, quantum-chromodynamics, electroweak, mesons", "url": null }
kolmogorov-complexity Why would you ever need anything more than the string itself to describe it? The exact value of the Kolmogorov complexity depends on the language chosen to represent strings. This language has to be Turing complete, so representing all strings as themselves isn't an option. By the pigeonhole principle, if there is at least one string of length at most $n$ whose representation is shorter than itself, then there is also at least one string of length at most $n$ whose representation is longer than itself. (The representation is a compression algorithm.) You can have a description language where each string has a representation that's at most one bit longer than itself: start each representation with a bit that indicates either “print literally” or “interpret”. Not all description languages are that simple though.
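Both halves of the argument are easy to make concrete; a toy sketch of the one-bit "print literally"/"interpret" flag language from the text, plus the pigeonhole count:

```python
def literal_encode(s):
    # "Print literally" language: a '0' flag bit followed by the string itself,
    # so every string has a description at most one bit longer than itself.
    return "0" + s

def decode(desc):
    if desc.startswith("0"):
        return desc[1:]
    # A leading '1' would mark an "interpret as a program" description;
    # that half of the language is left unimplemented in this sketch.
    raise NotImplementedError("'1' prefix: interpret as a program")

n = 8
# Pigeonhole: the 2**n binary strings of length n cannot all receive strictly
# shorter descriptions, because there are only 2**n - 1 strings of length < n.
assert sum(2 ** k for k in range(n)) == 2 ** n - 1

s = "10110011"
assert decode(literal_encode(s)) == s
assert len(literal_encode(s)) == len(s) + 1
```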
{ "domain": "cs.stackexchange", "id": 2120, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kolmogorov-complexity", "url": null }
foundations, resource-theories [...] we note that the measurement of $k$ occasionally yields a residual state $\Psi_k$ with more entropy of entanglement than the original state $\Psi$. However, neither the measurement of $k$ nor any other local processing by one or both parties can increase the expected entropy of entanglement between Alice’s and Bob’s subsystems.
{ "domain": "quantumcomputing.stackexchange", "id": 4029, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "foundations, resource-theories", "url": null }
universe, expansion Title: Does space expand? I know this has been asked before, and here are links to at least two instances. The first link below is what I thought was the standard description. What does it mean for space to expand? Now, by accident, while reading a different discussion I noticed this question: How do we know we're not getting bigger? One of the answers points to an article by J.A. Peacock and the other to a "New Scientist" article quoting Steve Weinberg, so I thought I would include the link: https://www.newscientist.com/article/mg13818693-600/ Basically, in a nutshell, I think Weinberg is saying space is not expanding; it is the Copernican Principle and the fact that the universe was smaller in the past. He calls this a "complication". I assume other people might call it space expanding, which he says emphatically is wrong. I always thought space and time were, well, sort of tied together. I wonder how something was getting bigger in the past but not expanding in space?
{ "domain": "astronomy.stackexchange", "id": 6898, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "universe, expansion", "url": null }
example for physicists who might need to plot results of experiments). Because the support for creating graphs in LaTeX itself is limited, it is best to rely on an external graphing program. Using it, you can define values and also perform math operations (formulas, graphs). LaTeX version, Mathematica notebook (graph), postscript picture (court diagram). You can experiment with the tkz-graph package by placing tkz-graph. A solution to use this function for weighted graphs has been taken from the igraph package (Csardi G & Nepusz T, 2006), in which the same function was ported from the SNA package. Graphs with four and five links satisfy several non-trivial relations, which have been proved recently. The two discrete structures that we will cover are graphs and trees. Or with LaTeX you can use the hyperref package. Graph Sketch is an online graphing software created by Andy Schmitz.
{ "domain": "cogoo-epaper.de", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9683812327313545, "lm_q1q2_score": 0.821156065034656, "lm_q2_score": 0.8479677602988602, "openwebmath_perplexity": 843.2423676801535, "openwebmath_score": 0.6379912495613098, "tags": null, "url": "http://nwxu.cogoo-epaper.de/latex-function-graph.html" }
c++, optimization, c++11 Note that pred1 requires all 3 actions, pred2 the last 2 actions, and pred3 only the 3rd action. I was not happy with this code: while efficient in avoiding extra work, it seems overly repetitive. However, the following straightforward try at refactoring is a lot more concise but also less efficient as it computes all 3 predicates for all inputs: // exposition only: will compute pred1, pred2 and pred3 for all inputs void hun(int x) { if (pred1(x)) fun1(x); if (pred2(x)) fun2(x); if (pred3(x)) fun3(x); } So I came up with the idea of caching the predicate values in a std::tuple and dispatch the various actions based on that: using Triple = std::tuple<bool, bool, bool>; auto preds(int x) -> Triple { if (pred1(x)) return Triple{ true, true, true }; if (pred2(x)) return Triple{ false, true, true }; return Triple{ false, false, pred3(x) }; }
{ "domain": "codereview.stackexchange", "id": 8595, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, optimization, c++11", "url": null }
beginner, strings, haskell stringMapReplace (m:ms) s = stringMapReplace ms (strReplace (fst m) (snd m) s) as stringMapReplace :: Dictionary -> String -> String stringMapReplace map s = stringMapReplace' s map where stringMapReplace' s [] = s stringMapReplace' s (m:ms) = stringMapReplace' (strReplace (fst m) (snd m) s) ms Now, we can see that stringMapReplace' fits the pattern for foldl. However, foldr is preferable to foldl. If we perform the substitutions in a different order… stringMapReplace :: Dictionary -> String -> String stringMapReplace map s = stringMapReplace' s map where stringMapReplace' s [] = s stringMapReplace' s (m:ms) = strReplace (fst m) (snd m) (stringMapReplace' s ms) … we can make stringMapReplace' fit the pattern for foldr. stringMapReplace map s = stringMapReplace' s map where stringMapReplace' s map = foldr f s map f m = strReplace (fst m) (snd m)
{ "domain": "codereview.stackexchange", "id": 20424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, strings, haskell", "url": null }
quantum-mechanics, thermodynamics, statistical-mechanics, probability, partition-function Therefore, the energy becomes \begin{align*} U&= \sum_{\epsilon_1} \epsilon_1\frac{e^{-\beta\epsilon_1}}{Z_1} +\sum_{\epsilon_2} \epsilon_2\frac{e^{-\beta\epsilon_2}}{Z_2}\,. \end{align*} Now, the last thing to recognize is that since the collection of energy states available for each particle is the same, this is just \begin{align*} U&= 2\sum_{\epsilon} \epsilon\frac{e^{-\beta\epsilon}}{Z} =\sum_{\epsilon} \frac{2e^{-\beta\epsilon}}{Z}\epsilon =\sum_{\epsilon} n_{\epsilon}\epsilon\,. \end{align*} The generalization to $N$ particles is obvious. Note that this derivation requires the particles to be non-interacting, which I think is necessary for this derivation to make sense. Alternatively, we can think of (bosonic) quasi-particles occupying the quasi-particle states, and I think the same derivation goes through, but we need to be careful about symmetrizing the partition function before computing the energy. Those are things I haven't thought carefully about in a long time, though.
{ "domain": "physics.stackexchange", "id": 84275, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, thermodynamics, statistical-mechanics, probability, partition-function", "url": null }
python, beginner, regex, calculator, iteration if response.upper() == "END": escape() return response This way you won't have to check for END every time (Also, if you never specify to your user they can quit using END, they'll never use it). Finally, replace while 1 == 1 by while True Basically, try to keep your code that interacts with your user away from the code that has logic into it. Having print at the end of functions instead of return is usually not a good sign. You've done a good job and you use most tools properly, there're just a couple of points you need to look after regarding clean code :)
{ "domain": "codereview.stackexchange", "id": 40015, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, beginner, regex, calculator, iteration", "url": null }
operators, string-theory, tachyon The result with integration is the state of string theory, not CFT. String theory states are given by vertex operators integrated over the worldsheet. If you're asking how we can see the last point, I suggest looking into canonical quantization of the string. Diffeomorphism invariance comes from the constraint that arises due to the use of Polyakov action, which is diffeomorphism invariant.
{ "domain": "physics.stackexchange", "id": 70761, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "operators, string-theory, tachyon", "url": null }
# Count arrays with size n, sum k and largest element m I'm trying to solve a pretty complex problem with combinatorics. Namely, we have given three numbers N, K, M. Now we want to count how many different arrays of integers there are with length N, sum K and all the elements in the range [1, M] Constraints: • 1 <= N <= 100 • 1 <= K <= 100 • 1 <= M <= 100 Example Let's say N = 2, K = 5, M = 3. This means that we want to count arrays of integers of size 2 with sum of all elements equal to 5 and elements in range [1, 3]. There are a total of 2 arrays: {2, 3} and {3, 2}. Please note that the order of the elements also matters, {2, 3} is not equal to {3, 2} Second example: N = 4, K = 7, M = 3. We want to count arrays of length 4, sum of 7 and elements in range [1, 3]. There are a total of 16 possible arrays: (1,1,2,3), (1,1,3,2), (2,1,1,3), (3,1,1,2), (2,3,1,1), (3,2,1,1), (1,2,3,1), (1,3,2,1), (1,2,1,3), (1,3,1,2), (2,1,3,1), (3,1,2,1), (1,2,2,2), (2,1,2,2), (2,2,1,2), (2,2,2,1) What I have tried
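One standard approach, sketched below in Python, is dynamic programming over (remaining length, remaining sum), conditioning on the value of the first element; it easily fits the given constraints:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_arrays(n, k, m):
    # Number of length-n integer arrays with sum k and every entry in [1, m].
    if n == 0:
        return 1 if k == 0 else 0
    # Choose the value v of one element, then count the ways to fill the rest.
    return sum(count_arrays(n - 1, k - v, m)
               for v in range(1, m + 1) if v <= k)
```

`count_arrays(2, 5, 3)` gives 2 and `count_arrays(4, 7, 3)` gives 16, matching the examples. With memoization there are at most N·K states, each trying M values, so the running time is O(N·K·M).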
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9843363512883316, "lm_q1q2_score": 0.8603859947919545, "lm_q2_score": 0.8740772335247532, "openwebmath_perplexity": 521.7804452486056, "openwebmath_score": 0.681304931640625, "tags": null, "url": "https://cs.stackexchange.com/questions/79263/count-arrays-with-size-n-sum-k-and-largest-element-m" }
lambda-calculus closed terms $M$ and $N$ are equal if they reduce to the same normal form, closed terms $M$ and $N$ are equal if they are equal in some preferred model, closed terms $M$ and $N$ are equal if they are observationally equivalent, closed terms $M$ and $N$ are equal if they are extensionally equivalent (they give extensionally equivalent results when applied to extensionally equivalent arguments). Parametricity, which was mentioned in the comments, may be helpful with some of the above. But it's difficult to discuss "optimization" until we know what counts as "equal", or else we cannot tell which optimizations are valid.
{ "domain": "cs.stackexchange", "id": 13909, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "lambda-calculus", "url": null }
To demonstrate, I simulated samples of size $$n=50$$ from various bivariate Normal distributions having a range of correlations $$\rho,$$ repeating this $$50,000$$ times to obtain $$50,000$$ sample correlation coefficients for each $$\rho$$. To make these results comparable, I subtracted the Fisher Z transformation of $$\rho$$ from each transformed sample correlation coefficient, calling the result "$$Z,$$" so as to produce distributions that ought to be approximately Normal, all of zero mean, and all with the same standard deviation of $$\sqrt{1/(50-3)} \approx 0.15.$$ For comparison I have overplotted the density function of that Normal distribution on each histogram. You can see that across this wide range of underlying correlations (as extreme as $$-0.95$$), the Fisher-transformed sample correlations indeed look like they have nearly Normal distributions, as promised.
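The quantities used in this simulation are easy to reproduce; a small sketch (the `rho` and `r_sample` values below are made-up illustrative numbers, not results from the text):

```python
import math

def fisher_z(r):
    # Fisher's variance-stabilizing transformation of a correlation coefficient
    return math.atanh(r)

def fisher_se(n):
    # approximate standard deviation of a Fisher-transformed sample correlation
    return math.sqrt(1.0 / (n - 3))

# the sample size quoted in the text: n = 50 gives sd sqrt(1/47) ~ 0.15
se = fisher_se(50)

# centering, as in the simulation: subtract the transform of the true rho
rho, r_sample = -0.95, -0.94        # illustrative values only
z_centered = fisher_z(r_sample) - fisher_z(rho)
```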
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9416541577509315, "lm_q1q2_score": 0.8278522014444369, "lm_q2_score": 0.8791467595934565, "openwebmath_perplexity": 410.0799118580892, "openwebmath_score": 0.832129716873169, "tags": null, "url": "https://stats.stackexchange.com/questions/61456/how-to-test-the-difference-between-2-sets-of-pearson-correlations/365804" }
java, performance, algorithm, strings, programming-challenge Title: Alternating Characters
Online challenge on Hacker Rank.
Shashank likes strings in which consecutive characters are different. For example, he likes ABABA, while he doesn't like ABAA. Given a string containing characters A and B only, he wants to change it into a string he likes. To do this, he is allowed to delete the characters in the string. Your task is to find the minimum number of required deletions.
Input Format
The first line contains an integer T, i.e. the number of test cases. The next T lines contain a string each.
Output Format
For each test case, print the minimum number of deletions required.
    public class Solution {
        private static int countChanges(String text) {
            char[] chars = new char[text.length()];
            int top = -1;
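Independent of the submission above (which is cut off here), the task itself reduces to counting adjacent equal characters, since exactly one character of each run of equal characters can survive. A Python sketch of my own:

```python
def min_deletions(s):
    # Each position whose character equals its predecessor forces one
    # deletion: only one character of any run of equal characters survives.
    return sum(1 for a, b in zip(s, s[1:]) if a == b)
```

For example, `min_deletions("AAABBB")` is 4: keep one A and one B, delete the rest.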
{ "domain": "codereview.stackexchange", "id": 23583, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance, algorithm, strings, programming-challenge", "url": null }
javascript, recursion, palindrome For efficiency I would also recommend avoiding recursion entirely, but I'll assume recursion is the subject of study here...
    function isPalindrome(input) {
        function isPalindromeDetail(first, last) {
            if (last - first < 1) return true
            if (input[first] !== input[last]) return false
            return isPalindromeDetail(first+1, last-1)
        }
        return isPalindromeDetail(0, input.length-1)
    }
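The non-recursive version the first sentence alludes to is the same two-pointer idea in a loop; a Python sketch:

```python
def is_palindrome(text):
    # Two-pointer scan from both ends; no recursion, O(1) extra space.
    first, last = 0, len(text) - 1
    while last - first >= 1:
        if text[first] != text[last]:
            return False
        first += 1
        last -= 1
    return True
```

This avoids the call-stack growth (and potential stack-overflow on long inputs) of the recursive form while keeping the identical comparison order.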
{ "domain": "codereview.stackexchange", "id": 41163, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, recursion, palindrome", "url": null }
meteorology, models, nwp, weather-forecasting, numerical-modelling As you reduce grid spacing (increase resolution) and start to resolve non-hydrostatic features (thunderstorm updrafts, downdrafts, etc), then you need to make sure you are using a non-hydrostatic solver. For high resolution operational regional modeling and research modeling in the mesoscale using WRF-NMM, WRF-ARW, CM1, ARPS you'll find non-hydrostatic solvers. The non-hydrostatic solver is more computationally expensive than a hydrostatic solver, and is often found in higher resolution models which themselves are more computationally expensive than coarse grids.
{ "domain": "earthscience.stackexchange", "id": 366, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "meteorology, models, nwp, weather-forecasting, numerical-modelling", "url": null }
react.js, jsx Non-mutating-state handler. React class-based component state updates are merged in, but nested state needs to have its previous state merged manually. Also, instead of persisting the event for use in the "asynchronous" setState, it is more common to grab the event value and let React do what it needs with the event object (i.e. return it to the pool).
    handleOnChange = key => e => {
        const { value } = e.target;
        this.setState(prevState => ({
            data: {
                ...prevState.data,
                [key]: value,
            },
        }));
    }
Readability
All other comments I'd have on the code are more about readability, i.e. appropriate usage of whitespace, 2-space vs 4-space tabs, etc., but these are largely dev-team driven and tend to be subject to opinion. Common practices though are
{ "domain": "codereview.stackexchange", "id": 38641, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "react.js, jsx", "url": null }
We can graph a set of parametric equations in much the same manner as we plotted graphs of functions like y = f(x): we make a table of values, plot points, then connect these points with a "reasonable" looking curve. For example, we can describe in parametric form the equation of a circle centered at the origin with radius $$R.$$ In this case, the parameter $$t$$ varies from $$0$$ to $$2 \pi.$$ We can also find an expression for the derivative of a parametrically defined function. A circle in 3D is parameterized by six numbers: two for the orientation of its unit normal vector, one for the radius, and three for the circle center. To do this one has to set a fixed value for … We have already seen one possibility, namely how to obtain coordinate lines. Let's define a function by the pair of parametric equations x = x(t), y = y(t), where x(t), y(t) are differentiable functions and x'(t) ≠ 0. Example: Given are the parametric
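For the circle, the parametric form and its derivative work out as follows (a standard computation filling in the steps the fragment above alludes to):

```latex
x(t) = R\cos t, \qquad y(t) = R\sin t, \qquad 0 \le t \le 2\pi,
\qquad
\frac{dy}{dx} \;=\; \frac{dy/dt}{dx/dt} \;=\; \frac{R\cos t}{-R\sin t} \;=\; -\cot t,
\qquad \sin t \neq 0.
```

The condition $x'(t) \neq 0$ stated above is exactly what excludes the points $t = 0, \pi$, where the tangent to the circle is vertical.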
{ "domain": "regency-group.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9828232940063591, "lm_q1q2_score": 0.8103003883837694, "lm_q2_score": 0.8244619285331332, "openwebmath_perplexity": 528.449860012721, "openwebmath_score": 0.8386204242706299, "tags": null, "url": "http://regency-group.com/jlngq0kv/parametric-equation-formula-b2adc2" }
slam, navigation, kinect, rgbd6dslam, bagfiles Comment by Felix Endres on 2012-04-07: Well, interesting. I haven't got that problem with openni_camera. So openni_launch gives you bgr8 with the rgb8 encoding specified? If you don't care about the image colors in the UI, you could also just switch the channels in misc.cpp line 481-483 and skip the conversion in Edit 3 Comment by pinocchio on 2014-03-26: Hi @Yo, I encounter the same problem as discribed in your Edit 2. I run kinect XBOX on ROS hydro, and it works well for unregistered image. But there is no output of registered RGBD image. The detailed information is listed here http://answers.ros.org/question/143871/no-register-data-from-kinect/. Could you please tell me how to fix it?
{ "domain": "robotics.stackexchange", "id": 8681, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, kinect, rgbd6dslam, bagfiles", "url": null }
electromagnetic-radiation, antennas Perhaps the absorbing antennas emit EM-waves that all interfere constructively at the position where the emitting antenna is? Two antennas interfere with each other's operation if they are close to each other. In practice, close means being within the Fraunhofer distance, outside of which they can be treated as essentially independent. Now make a spherical surface and tessellate it with elementary dipoles placed about a $\lambda/2$ distance apart. Match each dipole properly and they will be nearly independent of each other. Now place at the center of the sphere a properly matched radiating antenna and let the radius be larger than its Fraunhofer distance, that is, let the spherical surface be electrically far away. Then almost all, practically all, energy you radiate away will be absorbed by the dipoles on the surface without interfering with the emission, as you were asking in your question.
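To put rough numbers on "electrically far away": the Fraunhofer distance is commonly taken as $d_F = 2D^2/\lambda$ for an antenna of largest dimension $D$. The numbers below (a half-wave dipole at 1 GHz, a 10 m sphere) are my illustrative choices, not from the answer.

```python
import math

def fraunhofer_distance(D, wavelength):
    # Conventional far-field boundary for an antenna of largest dimension D.
    return 2 * D ** 2 / wavelength

def dipole_count(radius, wavelength):
    # Rough number of lambda/2-spaced elementary dipoles tessellating a
    # sphere of the given radius: surface area / (lambda/2)^2.
    return 4 * math.pi * radius ** 2 / (wavelength / 2) ** 2

# A half-wave dipole at 1 GHz: lambda ~ 0.3 m, D ~ 0.15 m, so the far
# field starts only ~0.15 m away; a sphere a few metres across is already
# "electrically far", and carries tens of thousands of absorbing dipoles.
d_f = fraunhofer_distance(0.15, 0.3)
n_dipoles = dipole_count(10.0, 0.3)
```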
{ "domain": "physics.stackexchange", "id": 50943, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetic-radiation, antennas", "url": null }
c++, memory-management
    private:
        struct Pair {
            T data;
            size_t cnt;

            template <typename ... Args>
            Pair(Args && ... args): data(std::forward<Args>(args)...), cnt(1) {}
        };

        Pair *ptr;

    public:
        template <typename ... Args>
        static ThreadUnsafeSharedPtr make(Args && ... args) {
            ThreadUnsafeSharedPtr p;
            p.ptr = new Pair(std::forward<Args>(args)...);
            return p;
        }

        ThreadUnsafeSharedPtr() : ptr(nullptr) {}
{ "domain": "codereview.stackexchange", "id": 36410, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, memory-management", "url": null }
python, hangman Other remarks: In function wordLength, you forgot to account for invalid input. This results in a TypeError. You should print an error message and ask again. Sometimes, at the end when I lose and the program prints the word, it's clearly incorrect. E.g. letters are m--- and it says the word was llama. The function GetRandomWord seems pointless, it's the same as random.choice. If the argument is called words, then GetRandomWord(words) is not much more readable than random.choice(words). Although, one might argue that if you wanted to switch to another implementation of choice, it would be easier. But my guess is that wasn't your intention. Create a main function and check if __name__ == '__main__'. If yes, call that main function, otherwise don't do anything. It might not be necessary in this program, but if someone else wanted to use functions from your module, they would have to import it. And when they do, your game is going to start playing, which is not what they want.
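The last point is the standard main-guard pattern; a minimal sketch of what the reviewer is suggesting (all names here are illustrative, not from the reviewed code):

```python
import random

def get_random_word(words):
    # Thin wrapper kept only for naming; equivalent to random.choice.
    return random.choice(words)

def main():
    words = ["llama", "mambo", "magma"]
    print(get_random_word(words))

# The game only starts when the file is run directly, so importing this
# module (e.g. to reuse its functions elsewhere) has no side effects.
if __name__ == "__main__":
    main()
```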
{ "domain": "codereview.stackexchange", "id": 24934, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, hangman", "url": null }
objective-c, ios, networking Now this method handles the networking asynchronously on a queue that you specify, but what's important to note here is that the return is void. Once again, we never have or need an NSURLConnection object. In this case, we also don't have a delegate. Instead, we send a block for the completionHandler argument, and the code in this block is executed when the request completes. So we have 5 ways to use NSURLConnection. Two of them don't start the connection immediately and require a call to start. The third either does or doesn't start immediately depending on a BOOL argument, but this is sent explicitly by the user, so it's CLEAR what's going on. And the last two do start immediately, but don't actually instantiate an object of this class. Your class doesn't follow any of these patterns.
{ "domain": "codereview.stackexchange", "id": 7375, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "objective-c, ios, networking", "url": null }
c#, game, mvc, winforms, chess
    public struct ChessMove
    {
        public ChessMove(Coords startCoords, Coords endCoords)
        {
            StartCoords = startCoords;
            EndCoords = endCoords;
        }

        public Coords StartCoords { get; }
        public Coords EndCoords { get; }

        public override string ToString()
        {
            return StartCoords.ToString() + " -> " + EndCoords.ToString();
        }
    }

    class Rulebook
    {
        public static Piece MakeMove(Board board, Player player, ChessMove move)
        {
            if (!CheckLegalMove(board, player, move))
            {
                throw new ArgumentException(string.Format("Move {0} is not a valid move", move));
            }
{ "domain": "codereview.stackexchange", "id": 44851, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, game, mvc, winforms, chess", "url": null }
c, linked-list, console, database, edit-distance
        if (!record_list)
        {
            return NULL;
        }

        record_list->head = NULL;
        record_list->tail = NULL;
        record_list->size = 0;
        return record_list;
    }

    /*******************************************************************************
    * Appends the argument telephone book record to the tail of the argument       *
    * telephone book record list.                                                  *
    * ---                                                                          *
    * Returns a zero value if the operation was successful. A non-zero value is    *
    * returned if something fails.                                                 *
    *******************************************************************************/
    int telephone_book_record_list_add_record(telephone_book_record_list* list,
                                              telephone_book_record* record)
    {
        telephone_book_record_list_node* new_node;

        if (!list || !record)
        {
            return 1;
        }
{ "domain": "codereview.stackexchange", "id": 22265, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, linked-list, console, database, edit-distance", "url": null }
Note: This answer is mostly based upon chapter VI from Visual Complex Analysis by T. Needham. • Very nice and clean explanation!! – Error 404 Jun 15 '18 at 10:40 • @Error404: Thanks. :-) – Markus Scheuer Jun 15 '18 at 11:47 • "The concepts of logarithm and square root match in the sense that the infinitely many branches of the logarithm yield precisely the two branches of the square root." I don't see how this makes any sense of the word 'match'. – Al Jebr Sep 12 at 19:50 • @AlJebr: Here we have the situation that the exponential function maps the infinite branches of the logarithm to the two branches of the square root function. You might want to consult the reference, which provides helpful information. – Markus Scheuer Sep 12 at 20:20
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9881308800022472, "lm_q1q2_score": 0.8008592304625846, "lm_q2_score": 0.81047890180374, "openwebmath_perplexity": 158.31567822274167, "openwebmath_score": 0.9843408465385437, "tags": null, "url": "https://math.stackexchange.com/questions/2536235/what-are-the-branches-of-the-square-root-function" }
safety, crystal-structure Title: Is it possible to cut yourself on pyrite? I am planning an RPG adventure where the characters might end up getting a large cubic lump of "gold", which in reality, as you might have guessed, is in fact pyrite. I also wanted to plan for the contingency that the group wouldn't notice that it's not the real deal, so I thought I could maybe let one of the characters cut himself on the crystal. A quick google search yields images such as this:
{ "domain": "chemistry.stackexchange", "id": 824, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "safety, crystal-structure", "url": null }
cc.complexity-theory, reference-request, randomized-algorithms, search-problem [1] Pseudo-Deterministic Proofs, Shafi Goldwasser, Ofer Grossman, and Dhiraj Holden (https://arxiv.org/abs/1706.04641) I’m not sure about the exact definition as given. However, the kind of search problems that has been studied the most in the literature are NP-search problems. In this context, there is no meaningful difference between “BPP-like”, “RP-like”, or “ZPP-like” randomized polynomial-time algorithms, as we can check the correctness of any purported solution in deterministic polynomial time. Thus, while the class of NP-search problems solvable in probabilistic polynomial time has been studied for a long time, it has not been generally called “Search-BPP”. In particular, the class is mentioned in Papadimitriou’s seminal papers [1,2], where it is denoted FZPP. [1] Christos Papadimitriou, On inefficient proofs of existence and complexity classes, Annals of Discrete Mathematics 51 (1992), pp. 245–250, doi: 10.1016/S0167-5060(08)70637-X.
{ "domain": "cstheory.stackexchange", "id": 4223, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, reference-request, randomized-algorithms, search-problem", "url": null }
general-relativity, gauss-law, gravitational-waves, maxwell-equations, linearized-theory In detail:
{ "domain": "physics.stackexchange", "id": 49835, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, gauss-law, gravitational-waves, maxwell-equations, linearized-theory", "url": null }
algorithm, c, tree, compression so I'm also not looking for feedback regarding the above things that I have assumed / intentionally ignored. I have ensured that the code is working properly, with no warnings when compiled with this command (taken from make output)
    gcc -O3 -std=c11 -Wall -Wno-unused-result -o huffman huffman.c
The last option is to suppress the warning about the unused result from fread(3). During my coding process, I run clang-format occasionally and diff the output against my written code to check for potentially bad indentation / styling issues. I am not confident that it can catch everything.
* The link points to my GitHub repo. The code on that page is identical to the code submitted below verbatim.
    // File: huffman.c
    // Author: iBug

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    typedef unsigned char byte;

    typedef struct _HuffNode {
        unsigned data;
        struct _HuffNode *left, *right, *parent;
    } HuffNode;
{ "domain": "codereview.stackexchange", "id": 33057, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm, c, tree, compression", "url": null }
quantum-field-theory, statistical-mechanics, mathematical-physics, path-integral If one replaces $J$ by $iJ$ with $J$ real-valued then the functional $Z$ is the characteristic function of the probability measure (within the totally standard framework of Lebesgue integration theory) one would like to define rigorously. The most convenient space $D$ for the test functions $J$ is Schwartz space $\mathscr{S}(\mathbb{R}^d)$. In this case the space where the wanted probability measure would live is the dual space $\mathscr{S}'(\mathbb{R}^d)$ of temperate Schwartz distributions.
{ "domain": "physics.stackexchange", "id": 70932, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, statistical-mechanics, mathematical-physics, path-integral", "url": null }
quantum-mechanics, general-relativity, spacetime, point-particles Contrast this with a stellar mass black hole, with $M\sim 10^{30}$ kg. The Compton wavelength is $10^{-72}$ m, while the Schwarzschild radius is about $1.5$ km. In contrast to the case of the electron, QFT effects are completely swamped by GR effects - at least until one probes (ludicrously) deep within the event horizon. In between these two extremes lies the mass range $m \sim 10\ \mu$g - the Planck mass scale. This is the mass regime in which the Schwarzschild radius and the Compton wavelength are of the same order of magnitude. For such objects, relativistic quantum effects are of equal importance to spacetime curvature effects. The dynamics of such a particle cannot be adequately described by one without the other, and so this is the scale at which a coherent theory of quantum gravity would be needed.
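The comparison is easy to reproduce numerically (SI constants; I use the reduced Compton wavelength $\hbar/mc$ here):

```python
import math

HBAR = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
C = 2.99792458e8        # m/s

def compton_wavelength(m):
    # Reduced Compton wavelength: the scale below which QFT effects dominate.
    return HBAR / (m * C)

def schwarzschild_radius(m):
    # The scale at which spacetime-curvature (GR) effects dominate.
    return 2 * G * m / C ** 2

m_electron = 9.1093837015e-31       # kg
m_star = 1e30                       # kg, roughly half a solar mass
m_planck = math.sqrt(HBAR * C / G)  # ~2.2e-8 kg, i.e. ~22 micrograms

# Electron: compton_wavelength >> schwarzschild_radius (QFT regime).
# Stellar black hole: schwarzschild_radius >> compton_wavelength (GR regime).
# At the Planck mass the two scales cross (their ratio is exactly 2 here).
```

Note that with the reduced Compton wavelength the crossover mass comes out at $\sqrt{\hbar c/G} \approx 22\ \mu$g, consistent with the "$\sim 10\ \mu$g" order of magnitude quoted above.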
{ "domain": "physics.stackexchange", "id": 68928, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, general-relativity, spacetime, point-particles", "url": null }
polarimetry Title: Why is only monochromatic light used in a polarimeter? While using the polarimeter to determine optical rotation, why can't we use polychromatic light? What would change? The reason is partly historical and partly scientific. Most of the polarimetric data exists for the sodium yellow emission at 589 nm because sodium lamps were among the most convenient sources; remember, electricity is rather new. Early light sources were either flames or sunlight. It was easy to generate sodium emission by introducing salt into the flame. If you are familiar with Raman spectroscopy, it was discovered in the 1920s-30s using sunlight (heliostats), not lasers. Today nobody can think of using sunlight for spectroscopy.
{ "domain": "chemistry.stackexchange", "id": 13147, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "polarimetry", "url": null }
control-engineering, control-theory, transfer-function, noise $$ N(s) = \frac{C(s)}{1+P(s)C(s)} $$ However, deriving $\ N(s) $ produces an improper transfer function, with the degree of the numerator being $\ 3 $ and that of the denominator being $\ 2 $. This leads, for example, to the inability to obtain a step response in order to study the behaviour. Is the definition of the noise sensitivity function wrong, or am I missing something? This is because your controller is also improper. A common way to correct this is to add a low-pass filter to the derivative (effectively making it a high-pass filter). It can be noted that the PD controller could then also be seen as a lead-lag filter. However, if you are only interested in the step response, you could also use the fact that a step is the integral of an impulse. So you could add an integrator to the system, which raises the order of the denominator, and use impulse instead of step on the causal modified system.
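The effect of filtering the derivative can be checked with plain polynomial arithmetic. This sketch uses an example plant $P(s) = 1/(s^2+s)$ and gains $K_p = 2$, $K_d = 1$, $\tau = 0.1$ of my choosing (none of these come from the question); the point is only the numerator/denominator degrees of $N(s)$.

```python
def polymul(a, b):
    # Multiply two polynomials given as coefficient lists (low degree first).
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyadd(a, b):
    # Add two coefficient lists of possibly different lengths.
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def degree(p):
    # Degree of the polynomial, ignoring trailing (near-)zero coefficients.
    d = len(p) - 1
    while d > 0 and abs(p[d]) < 1e-12:
        d -= 1
    return d

def noise_sensitivity_degrees(c_num, c_den, p_num, p_den):
    # N(s) = C/(1 + P C) = (C_num P_den) / (C_den P_den + C_num P_num)
    num = polymul(c_num, p_den)
    den = polyadd(polymul(c_den, p_den), polymul(c_num, p_num))
    return degree(num), degree(den)

p_num, p_den = [1.0], [0.0, 1.0, 1.0]     # P(s) = 1 / (s^2 + s)

# Pure PD: C(s) = 2 + s  ->  N(s) has deg(num)=3 > deg(den)=2: improper.
pd_degs = noise_sensitivity_degrees([2.0, 1.0], [1.0], p_num, p_den)

# Filtered derivative: C(s) = 2 + s/(0.1 s + 1) = (2 + 1.2 s)/(0.1 s + 1)
# ->  N(s) has deg(num)=3 = deg(den)=3: proper, so a step response exists.
pdf_degs = noise_sensitivity_degrees([2.0, 1.2], [1.0, 0.1], p_num, p_den)
```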
{ "domain": "engineering.stackexchange", "id": 3206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "control-engineering, control-theory, transfer-function, noise", "url": null }
You can model your graphics as computing $f_n(x,y)=n\left[\frac{xy}{n}\right]$, where here the square brackets are ad-hoc notation to mean taking the fractional part (or "reduce modulo 1"). This takes $z=xy$ and cuts it at a bunch of horizontal hyperbolae, collapsing the graph like a Fresnel lens. Then, what you are doing is sampling $f_n$ on integer points $\{(i,j)\in\mathbb{Z}^2:1\leq i,j\leq n\}$, but $f_n$ oscillates faster than your sample grid, leading to a Moire pattern. These would be the powers in a discrete Fourier transform matrix: http://en.wikipedia.org/wiki/Discrete_Fourier_transform_(general)
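On the integer sample grid the construction collapses to modular arithmetic: $f_n(i,j) = n\left[\frac{ij}{n}\right] = ij \bmod n$, which is why the picture matches the exponents of a DFT matrix. A small check of that identity (the grid size is my illustrative choice):

```python
def f(n, x, y):
    # n * fractional_part(x*y / n): the "Fresnel-lens" collapse of z = x*y.
    return n * ((x * y / n) % 1.0)

n = 10
# On integer samples 1 <= i, j <= n, f agrees (up to float rounding) with
# (i*j) mod n -- exactly the exponent table of a discrete Fourier matrix.
table = [[(i * j) % n for j in range(1, n + 1)] for i in range(1, n + 1)]
```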
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9783846659768268, "lm_q1q2_score": 0.8664882522134824, "lm_q2_score": 0.8856314723088733, "openwebmath_perplexity": 400.4800235176337, "openwebmath_score": 0.6396054029464722, "tags": null, "url": "https://math.stackexchange.com/questions/1251466/what-is-this-pattern-called/1251501" }
gazebo-ignition Originally posted by chapulina with karma: 7504 on 2020-03-27 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 4487, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo-ignition", "url": null }
rviz, ros-melodic I'm unable to solve this. Any help is appreciated. Thank you. Edit1 - Here are Rviz outputs after keeping /rover1/base_link and /base_link as fixed frames, and the output of rosrun rqt_tf_tree rqt_tf_tree. Edit2 - I have managed to resolve the problem. It was due to a missing tf_prefix in Rviz. Originally posted by Sankeerth on ROS Answers with karma: 13 on 2021-01-30 Post score: 0 There appear to be errors in your RViz Displays panel. You should provide any errors/warnings when posting a question, but I'm guessing you have at least two problems:
{ "domain": "robotics.stackexchange", "id": 36021, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rviz, ros-melodic", "url": null }
thermodynamics, classical-mechanics, electricity Title: Is there a fundamental efficiency limit for generators? I'm currently interested in the theoretical efficiency limit of a generator, i.e. any device transforming kinetic energy to electrical energy. In particular, I'd be interested in turbines, which convert angular motion into electricity. In thermodynamics there is the well-known Carnot limit $1-\frac{T_C}{T_H}$ that describes the fundamental limit for an ideal heat machine in terms of the difference between a hot reservoir's temperature $T_H$ and a cold reservoir's temperature $T_C$. I'm looking for something similar, but for motion-to-electricity conversion. Clearly friction plays a role in practice, but an ideal turbine or generator should not be a heat machine in the Carnot sense. Thus the limit seems inapplicable.
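For comparison, the Carnot bound quoted above is straightforward to evaluate (temperatures in kelvin; the 300 K / 600 K pair is an illustrative choice of mine):

```python
def carnot_efficiency(t_cold, t_hot):
    # Maximum efficiency of an ideal heat engine between two reservoirs,
    # eta = 1 - T_C / T_H.
    if t_hot <= t_cold:
        raise ValueError("hot reservoir must be hotter than cold reservoir")
    return 1.0 - t_cold / t_hot

# A heat engine between ~300 K and ~600 K is capped at 50% efficiency;
# by contrast, an ideal (frictionless, lossless) mechanical-to-electrical
# converter faces no analogous thermodynamic cap.
eta = carnot_efficiency(300.0, 600.0)
```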
{ "domain": "physics.stackexchange", "id": 53984, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, classical-mechanics, electricity", "url": null }
ros, ros-melodic, posestamped, tf2, rospy Title: How to correctly translate a pose along an axis? I have a pose of my robot base_link in the map frame. I'd like to translate this 0.2 m along the robot x-axis so the point is in front of the robot. To do this I've tried the following, but all it seems to do is add my transform's x translation to the pose's position along the global x-axis rather than along the robot's x-axis. How might I fix this?
    transform = TransformStamped()
    transform.transform.translation.x = 0.2
    transformed_pose = tf2_geometry_msgs.do_transform_pose(pose, transform)
transformed_pose:
    header:
      seq: 0
      stamp:
        secs: 0
        nsecs: 0
      frame_id: ''
    pose:
      position:
        x: 0.4
        y: 0.0
        z: 0.0
      orientation:
        x: 0.0
        y: 0.0
        z: 0.5
        w: 0.0
I've tried to troubleshoot this issue in this question, which may be a useful reference. Originally posted by Py on ROS Answers with karma: 501 on 2021-04-23 Post score: 0
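One frame-correct way to get "0.2 m forward" is to rotate the body-frame offset by the pose's orientation before adding it to the position. The quaternion arithmetic can be sketched in plain Python; this is my own sketch of the math, not a tf2 API call, and the pose values are taken from the output above.

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by quaternion q = (x, y, z, w), normalizing q first."""
    qx, qy, qz, qw = q
    n = math.sqrt(qx * qx + qy * qy + qz * qz + qw * qw)
    qx, qy, qz, qw = qx / n, qy / n, qz / n, qw / n
    # v' = v + w*t + u x t, where u is the vector part and t = 2 (u x v).
    ux, uy, uz = qx, qy, qz
    tx = 2 * (uy * v[2] - uz * v[1])
    ty = 2 * (uz * v[0] - ux * v[2])
    tz = 2 * (ux * v[1] - uy * v[0])
    return (v[0] + qw * tx + (uy * tz - uz * ty),
            v[1] + qw * ty + (uz * tx - ux * tz),
            v[2] + qw * tz + (ux * ty - uy * tx))

def offset_pose(position, orientation, body_offset=(0.2, 0.0, 0.0)):
    # Express the body-frame x offset in the map frame, then translate.
    dx, dy, dz = quat_rotate(orientation, body_offset)
    return (position[0] + dx, position[1] + dy, position[2] + dz)

# The pose above has orientation (0, 0, 0.5, 0), which normalizes to a
# 180-degree yaw, so "0.2 m forward" points along the map's -x axis.
new_pos = offset_pose((0.4, 0.0, 0.0), (0.0, 0.0, 0.5, 0.0))
```

In ROS itself, the equivalent fix is to look up the map→base_link transform from a tf buffer (so it carries the robot's rotation) instead of building a bare `TransformStamped` with only a translation.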
{ "domain": "robotics.stackexchange", "id": 36358, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-melodic, posestamped, tf2, rospy", "url": null }
machine-learning, deep-learning, computer-vision, papers Title: How could we estimate the square footage of a room from an image? I wonder if it would be possible to know the size of a room from an image. I don't see anything about this subject; do you have some idea of how it could be done? Welcome to AI.SE Hadrien! A possible approach is:
{ "domain": "ai.stackexchange", "id": 1013, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, deep-learning, computer-vision, papers", "url": null }
scala testing:
    val qx = QTable("bogus.txt")  // (No such file or directory)
    qx.getMaxValueIdx("1st 9 Tbl", List(7, 2, 3))  // None

    val qt = QTable("./q_table_test.txt")
    qt.getMaxValueIdx("1st 9 Tbl", List(7, 2, 3))  // Some(7)
    qt.getMaxValueIdx("1st 9 Tbl", List())         // None
    qt.getMaxValueIdx("Short Tbl", List(2,3,0))    // Some(0)
    qt.getMaxValueIdx("Short Tbl", List(2,3,990))  // Some(3)
    qt.getMaxValueIdx("NoSuchTbl", List(5,0,8))    // None
    qt.getMaxValueIdx("Split Tbl", List(5,0,8))    // Some(5)
    qt.getMaxValueIdx("2nd 9 Tbl", List(7, 2, 3))  // Some(2)
{ "domain": "codereview.stackexchange", "id": 38542, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "scala", "url": null }
python, python-3.x The number of letters and the list of letters must be specified on the command line. The reference dictionary may be specified on the command line. If it is not, then dict.txt is used. The matching of letters is case-insensitive.
    """
    import sys

    if len(sys.argv) < 3:
        print(__doc__, file=sys.stderr)
        sys.exit('Usage: %s letters number_of_letters [dictionary_file]' % sys.argv[0])

    letters = set(sys.argv[1].upper())
    wordsize = int(sys.argv[2])
    dictname = 'dict.txt' if len(sys.argv) <= 3 else sys.argv[3]

    try:
        wordlist = [w.strip() for w in open(dictname).readlines()]
    except IOError:
        sys.exit('''Couldn't find dictionary file "%s"''' % dictname)

    matches = [w for w in wordlist
               if len(w) == wordsize and all(c in letters for c in w.upper())]

    print('Found %d results:' % len(matches))
    print(matches)
{ "domain": "codereview.stackexchange", "id": 3917, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x", "url": null }
quantum-information, terminology, quantum-computer Let $P$ be the orthogonal projector onto $A\otimes B$, and let $\mathcal{E}$ be an arbitrary quantum channel, i.e. a completely positive trace preserving linear map. We say that $\mathcal{E}$ is recoverable if there exists another quantum channel $\mathcal{R}$ such that for all states $\rho_A \otimes \rho_B$, we have $$\mathcal{R}\circ\mathcal{E}(\rho_A \otimes \rho_B) = \rho_A \otimes \rho'_B,$$ where $\rho'_B$ is arbitrary. This says that for any state which is supported on $A\otimes B$ and is initially separable, we can reverse the effects of $\mathcal{E}$ up to a change on system $B$. Fortunately, there are simpler equivalent conditions that one can check instead. For example, an equivalent condition can be stated in terms of the Kraus operators $E_j$ for the channel $\mathcal{E}$. The subsystem $A$ is correctable for $\mathcal{E}(\rho) = \sum_j E_j \rho E_j^\dagger$ if and only if for all $i,j$, there exists a $g^{ij}_B$ on subsystem $B$ such that
{ "domain": "physics.stackexchange", "id": 70247, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-information, terminology, quantum-computer", "url": null }
meteorology, atmosphere-modelling, weather-forecasting, ice-age, winter When you ask about an overlaying thermal cycle (as in heating until the temperature crashes), this is very unlikely to happen unless the temperature reaches a point where a very large part of life gets eradicated. As the temperature on Earth goes up, the ocean can hold less and less CO2, until at some point the oceans start to release more CO2 than they absorb from the atmosphere. This means the heating will not trigger cooling unless life on our planet is significantly reduced. Water can hold fewer dissolved gases when the temperature goes up, so at some point aerobic life in the ocean can no longer survive due to the lack of dissolved oxygen in the water. This is just about how serious it might get if we keep on going as we are now.
{ "domain": "earthscience.stackexchange", "id": 1756, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "meteorology, atmosphere-modelling, weather-forecasting, ice-age, winter", "url": null }
energy, momentum, collision Title: Variation of Veritasium bullet block experiment I'm new here. This is in reference to the video posted by Veritasium on the bullet block experiment. I realise there is already a thread on this, but I want to ask a variation of the question. So basically we have two identical blocks and we shoot identical bullets at them, one at the centre and another off centre. Which one will rise higher? Now the answer assuming the collision is inelastic is already discussed. But what if the collision were perfectly elastic? I know this is practically impossible, but still we have momentum telling us that both blocks must rise to the same height, while since no energy is lost as heat, energy tells us that the rotating one must rise to a lower height. What am I missing? but still we have momentum telling us that both blocks must rise to the same height
{ "domain": "physics.stackexchange", "id": 30385, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "energy, momentum, collision", "url": null }
astrophysics, astronomy, definition, stars, stellar-physics The grey lines show evolutionary tracks for stars with masses from 1 to 10 solar masses, in steps of 1 solar mass, up to core helium ignition (I think). But they all evolve at different rates, which isn't shown by the grey curves. (Roughly speaking, the main-sequence lifetime of a star goes like $M^{-5/2}$.) The black lines are where the tracks have been interpolated at fixed ages, as indicated (roughly uniformly spaced in $\log(\mathrm{age})$). The first, left-most black line is the "zero-age main-sequence" (ZAMS), which is roughly where a star starts burning hydrogen in the core. The next isochrone (30 Myr) is the one closest to the ZAMS, where the most massive stars have already disappeared completely because they live for less than 30 Myr. Stars around 6-7 solar masses have started finishing off the hydrogen in their cores, but lower mass stars have barely budged.
{ "domain": "physics.stackexchange", "id": 49745, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "astrophysics, astronomy, definition, stars, stellar-physics", "url": null }
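The M^(-5/2) scaling quoted in the answer above can be turned into a rough lifetime table. This uses only the crude power law from the answer, normalised to roughly 10 Gyr for the Sun (an assumed normalisation, not a stellar-evolution calculation):

```python
# rough main-sequence lifetime: t_MS ~ 10 Gyr * (M / Msun)**(-5/2)
def ms_lifetime_gyr(mass_solar):
    return 10.0 * mass_solar ** -2.5

# the 1..10 Msun tracks in the figure evolve at wildly different rates
lifetimes = {mass: ms_lifetime_gyr(mass) for mass in range(1, 11)}
# e.g. a 10 Msun star lasts only ~30 Myr, so it is gone (or nearly so) by the
# first non-ZAMS isochrone, while a 1 Msun star has barely budged in 10 Gyr
```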
c++, iterator, c++17 template <typename DataT, bool Reverse> bool ArrayIterator<DataT, Reverse>::operator>(ArrayIterator other) const { if constexpr (Reverse) return other._ptr > _ptr; else return _ptr > other._ptr; } template <typename DataT, bool Reverse> bool ArrayIterator<DataT, Reverse>::operator>=(ArrayIterator other) const { if constexpr (Reverse) return other._ptr >= _ptr; else return _ptr >= other._ptr; } template <typename DataT, bool Reverse> bool ArrayIterator<DataT, Reverse>::operator==(ArrayIterator other) const { return _ptr == other._ptr; } template <typename DataT, bool Reverse> bool ArrayIterator<DataT, Reverse>::operator!=(ArrayIterator other) const { return !(*this == other); } template <typename DataT, bool Reverse> typename ArrayIterator<DataT, Reverse>::pointer ArrayIterator<DataT, Reverse>::operator->() { return _ptr; }
{ "domain": "codereview.stackexchange", "id": 30206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, iterator, c++17", "url": null }
but I have to use C++. The project is in Java and we are using the CERN Colt BLAS. 1 Properties and structure of the algorithm 1. Computer code. The following table summarizes the types of matrix factorizations that have been implemented in Julia. The Cholesky decomposition requires the correlation matrix to be positive definite; if A is not SPD, the algorithm will encounter a zero or negative pivot and fail. Cholesky decomposition is the decomposition of a Hermitian positive-definite matrix into the product of a lower-triangular matrix L and its conjugate transpose, A = L·L^H. The alternative formulation is A = U^H·U, with U upper triangular, which is exactly the same. The long-run impact matrix is the lower-triangular Cholesky decomposition of the above matrix, and the contemporaneous impact matrix is equal to: get_new_position (data, eigv, U, k, Cholesky, Rotation) [source] Obtain a new position in the parameter space from the eigenvalues of the inverse covariance matrix, or from the Cholesky decomposition (original idea by Anthony Lewis, in Efficient sampling
{ "domain": "gorizia-legnami.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9877587243478403, "lm_q1q2_score": 0.812124484585226, "lm_q2_score": 0.8221891283434876, "openwebmath_perplexity": 797.4534706934702, "openwebmath_score": 0.5892771482467651, "tags": null, "url": "http://ydeb.gorizia-legnami.it/cholesky-decomposition-code.html" }
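As a concrete illustration of the remarks above, here is a minimal Cholesky-Banachiewicz sketch (my own, not from any of the quoted sources). It factors a real symmetric positive-definite matrix as A = L·L^T, and it fails exactly as the text describes when A is not positive definite, because a diagonal pivot becomes zero or negative:

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L @ L.T (real symmetric input).

    Raises ValueError when A is not positive definite, since a diagonal
    pivot then becomes zero or negative.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                pivot = A[i][i] - s
                if pivot <= 0.0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = math.sqrt(pivot)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

L = cholesky([[4.0, 2.0], [2.0, 3.0]])   # L = [[2, 0], [1, sqrt(2)]]
```

For the complex Hermitian case, the same loop works with conjugation in the inner sum and A = L·L^H in place of L·L^T.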
quantum-mechanics, hilbert-space, linear-algebra $$ \begin{align*} |e_1\rangle=\left[\begin{array}{c} 1\\0\\\vdots\\0 \end{array}\right],&& |e_2\rangle=\left[\begin{array}{c} 0\\1\\\vdots\\0 \end{array}\right],&& ...,&& |e_n\rangle=\left[\begin{array}{c} 0\\0\\\vdots\\1 \end{array}\right] \end{align*} $$ However in Griffiths, he doesn't make this distinction. He doesn't even qualify if the basis is orthonormal. I have tried to derive it myself without using the standard basis in $\mathbb{R}^n$ and am getting nowhere and just don't see it. Anyone have any advice? Let $\{|e_i\rangle: 1\leq i\leq n\}$ be a basis.
{ "domain": "physics.stackexchange", "id": 25986, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, hilbert-space, linear-algebra", "url": null }
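One way to see that nothing special about the standard basis is being used in the question above: the completeness relation Σ_i |e_i⟩⟨e_i| = I holds for every orthonormal basis, not just the standard one. A quick numerical check (my own sketch, not from Griffiths):

```python
import numpy as np

n = 4

# standard basis |e_i>: the columns of the identity matrix
std = np.eye(n)
P_std = sum(np.outer(std[:, i], std[:, i]) for i in range(n))

# an arbitrary orthonormal basis: Q from a QR factorisation of a random matrix
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(n, n)))
P_q = sum(np.outer(Q[:, i], Q[:, i]) for i in range(n))

# both resolve the identity: sum_i |e_i><e_i| = I
assert np.allclose(P_std, np.eye(n))
assert np.allclose(P_q, np.eye(n))
```

For a basis that is not orthonormal the relation fails in this form, and one needs the dual (reciprocal) basis instead -- which is why the orthonormality assumption matters even when an author leaves it implicit.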
python, performance, pandas import pandas as pd

def run_ewm(df):
    # group the df by person
    grouped = df.groupby('person')

    # create a temporary list to hold frames
    frames = []

    # iterate over the groups and apply exp. weighted moving average
    for group in grouped.groups:
        frame = grouped.get_group(group)
        frame['metric1_emw'] = frame['metric1'].ewm(span=60).mean()
        frames.append(frame)

    # concat the frames for a new dataframe
    df_new = pd.concat(frames)
    return df_new

%timeit df_new = run_ewm(df)

/home/curtis/Program_Files/miniconda2/envs/py35/lib/python3.5/site-packages/ipykernel_launcher.py:15: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy from ipykernel import kernelapp as app
{ "domain": "codereview.stackexchange", "id": 28229, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, pandas", "url": null }
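For reference, the get_group/copy/concat loop in the question above (and its SettingWithCopyWarning) can be avoided entirely with groupby().transform, which aligns the per-group result on the original index. A sketch with made-up sample data standing in for the question's DataFrame:

```python
import numpy as np
import pandas as pd

# hypothetical sample data in place of the question's df
df = pd.DataFrame({
    'person': ['a'] * 5 + ['b'] * 5,
    'metric1': np.arange(10, dtype=float),
})

# transform applies the EWM within each group and returns a Series aligned
# on df's index, so there is no loop, no concat, and no chained assignment
df['metric1_ewm'] = (
    df.groupby('person')['metric1']
      .transform(lambda s: s.ewm(span=60).mean())
)
```

Assigning the transformed Series straight back onto df writes to the original frame, which is exactly what the copy-of-a-slice warning was complaining the loop version did not do.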
c++, performance, algorithm, chess Another potential optimization is how to store positions. Instead of a separate row and column, consider storing a single integer, and enumerate the squares going from left to right first and then continuing on from top to bottom. This also makes it easier to calculate destination positions. For example, a knight might move 2 squares right and 1 square down, but with the above enumeration, it just means adding 10 to the index (2 for going two squares right, plus 8 to go one row down).
{ "domain": "codereview.stackexchange", "id": 39278, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, algorithm, chess", "url": null }
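The single-integer square encoding described above can be sketched as follows (helper names are illustrative, and the edge check is the caveat any real move generator needs on top of the +10 trick):

```python
# squares numbered 0..63: left to right within a row, then top to bottom
def to_index(row, col):
    return row * 8 + col

def knight_right2_down1(idx):
    """2 squares right + 1 square down = +2 + 8 = +10 on the index.

    Returns None when the move would wrap around the board edge: the
    destination must exist AND the start square must be on file a..f.
    """
    col = idx % 8
    if col <= 5 and idx + 10 < 64:
        return idx + 10
    return None
```

The wrap check is why the flat encoding is usually paired with a per-direction offset table rather than raw addition alone.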
visualization, robotics-toolbox Title: Robotics Toolbox: Display all DH link frames in Seriallink.plot() I am using the Robotics Toolbox by Peter Corke to create a plot of a simple four-link robot. For an assignment, I want a figure of the robot with all frames according to the DH-convention. Though even after reading through the documentation and googling for an hour, I couldn't find an option. Seriallink.plot(...) offers the options 'jaxis' and 'jvec', though they only display the joint axes, not the complete xyz-frame per link. How can I get a plot of a robot with all DH frames included?
{ "domain": "robotics.stackexchange", "id": 1806, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "visualization, robotics-toolbox", "url": null }