homework-and-exercises, differential-geometry, symmetry, lie-algebra, vector-fields $$\mathcal{L}_X T = 0$$ If you aren't sure about the terminology above then read this handy intro. The vector field $X$ is often referred to as the generator of the transformations $\phi_t$. We say that $X$ generates a symmetry of a spacetime if the associated $\phi_t$ preserve the metric $g$, or equivalently $$\mathcal{L}_Xg=0$$ This is exactly the condition that $X$ is a Killing vector field for the metric $g$. So the Killing vector fields are exactly those vector fields which generate spacetime symmetries. Back to your example. In a spacetime with spherical symmetry you should be able to identify the Killing vectors above, using spherical polar coordinates. Conversely, if no subset of the Killing vectors of your manifold obeys the Killing algebra of $S^2$, then you may conclude that your manifold doesn't have spherical symmetry. This is because the Killing vectors determine, by definition, the Lie algebra of the maximal symmetry group preserving the metric.
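As a concrete check (my own sketch, not part of the original answer): take the round metric on $S^2$, $g = \mathrm{diag}(1, \sin^2\theta)$, and the rotation generator $X = \cos\phi\,\partial_\theta - \cot\theta\sin\phi\,\partial_\phi$. Computing $(\mathcal{L}_X g)_{ab} = X^c\partial_c g_{ab} + g_{cb}\partial_a X^c + g_{ac}\partial_b X^c$ symbolically gives zero:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
coords = [th, ph]
# round metric on S^2: g = diag(1, sin^2 theta) in coordinates (theta, phi)
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])

def lie_derivative_metric(X, g, coords):
    """(L_X g)_ab = X^c d_c g_ab + g_cb d_a X^c + g_ac d_b X^c."""
    n = len(coords)
    L = sp.zeros(n, n)
    for a in range(n):
        for b in range(n):
            expr = sum(X[c]*sp.diff(g[a, b], coords[c]) for c in range(n))
            expr += sum(g[c, b]*sp.diff(X[c], coords[a]) for c in range(n))
            expr += sum(g[a, c]*sp.diff(X[c], coords[b]) for c in range(n))
            L[a, b] = sp.simplify(expr)
    return L

# rotation generator about the y-axis, components (X^theta, X^phi)
X = [sp.cos(ph), -sp.cot(th)*sp.sin(ph)]
print(lie_derivative_metric(X, g, coords))  # Matrix([[0, 0], [0, 0]])
```

The same check returns zero for the other two rotation generators, confirming the $\mathfrak{so}(3)$ Killing algebra of the sphere.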
{ "domain": "physics.stackexchange", "id": 62206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, differential-geometry, symmetry, lie-algebra, vector-fields", "url": null }
zoology, vision, sensation Title: Do animals exist which have good vision, but see only grayscale? In computer vision the color information is often discarded, as most object recognition tasks seem to work just as well on the greyscale image (even better, because there is less unnecessary information). Do animals exist which have good vision (e.g. see sharp, many details), but only see greyscale / very limited colors? Preamble & Overview. This is a rather unsatisfying answer I'm afraid. I can't seem to find any animal that has exceptional eyesight and sees in monochrome. Bearing in mind the context of humans vs machines: machines cope better with shapes than with colours; however, it generally appears that those animals which rely on exceptional eyesight to survive tend to see in colour and have other neurological systems to ease "data overload". Animals that see in grey-scale.
{ "domain": "biology.stackexchange", "id": 3457, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "zoology, vision, sensation", "url": null }
fourier-transform If what you need is a 382 kHz sinusoid, you will need to bandpass filter the mixed signal. The BPF will have to be pretty narrow with a short transition band.
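As a hedged sketch of that filtering step (the sample rate and passband width below are my assumptions, not given in the post), a narrow FIR bandpass around 382 kHz in Python:

```python
import numpy as np
from scipy import signal

fs = 2_000_000          # assumed sample rate: 2 MHz (not stated in the post)
f0 = 382_000            # tone to keep
bw = 10_000             # assumed passband width

# narrow FIR bandpass centred on 382 kHz; many taps are needed because the
# transition band must be short relative to fs
taps = signal.firwin(1001, [f0 - bw/2, f0 + bw/2], pass_zero=False, fs=fs)

# quick check: response at the tone vs. a neighbour 50 kHz away
w, h = signal.freqz(taps, fs=fs, worN=[f0, f0 + 50_000])
print(abs(h))  # passband gain near 1, stopband gain near 0
```

With a fixed window design, halving the transition width roughly doubles the required number of taps, which is why very narrow bandpass filters get expensive.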
{ "domain": "dsp.stackexchange", "id": 10156, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fourier-transform", "url": null }
newtonian-mechanics, work, potential-energy Title: What definition of change in potential energy applies to a multi-particle system? I read a text: We define change in potential energy of the system corresponding to conservative internal forces as $$\Delta U = -W = -\int \mathbf{F}\cdot d\mathbf{r}$$ as the system goes from one configuration to another. I can't understand what $d\mathbf{r}$ represents when we talk about a multi-particle system. You can use that equation for each particle in the system separately. Then you can add up everything at the end. In other words: $$\Delta U_{system}=\sum_i(\Delta U)_i=-\sum_i\int\mathbf F_i\cdot\text d\mathbf r_i$$ where the sum over $i$ covers all particles of the system.
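A small numerical illustration of that sum (my sketch, with made-up masses and endpoints): two particles in uniform gravity, where each $\Delta U_i = -W_i$ reduces to $m_i g \Delta h_i$:

```python
# Delta U = -sum_i integral F_i . dr_i, with F_i = (0, -m_i g) constant.
import numpy as np

g = 9.81
particles = [
    {"m": 2.0, "r0": np.array([0.0, 5.0]), "r1": np.array([1.0, 2.0])},
    {"m": 0.5, "r0": np.array([3.0, 1.0]), "r1": np.array([3.0, 4.0])},
]

dU = 0.0
for p in particles:
    F = np.array([0.0, -p["m"] * g])    # constant force on this particle
    W_i = F @ (p["r1"] - p["r0"])       # work of a constant force
    dU += -W_i                          # Delta U_i = -W_i

# cross-check against m_i g Delta h_i summed over particles
expected = sum(p["m"] * g * (p["r1"][1] - p["r0"][1]) for p in particles)
print(dU, expected)  # both -44.145
```

Each particle contributes its own path integral; only the sum is the system's potential energy change.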
{ "domain": "physics.stackexchange", "id": 53646, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, work, potential-energy", "url": null }
newtonian-mechanics, forces, acceleration, causality They are the same vector, scaled slightly differently. One doesn't cause the other because they don't exist independently of each other; they aren't really separate entities. That's what it means to be equal to, philosophically. "Cause" is often a misleading and overused term in physics that should be avoided if possible. EDIT: a generic, subscript-less $ \vec{F}(t) $ in the context of Newton's Second Law is universally assumed to be the total force vector, among physicists and those with any formal education in physics at all.
{ "domain": "physics.stackexchange", "id": 97780, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, forces, acceleration, causality", "url": null }
fourier-transform, fourier-series, amplitude Title: Relationship between fourier transform and fourier series Let $$x(t) = A\sin(2 \pi f_0 t + \alpha)$$ Its Fourier transform is given by $$ X(\omega) = \frac{A \pi}{i}(e^{i\alpha}\delta(\omega-2\pi f_0) - e^{-i\alpha}\delta(\omega+2\pi f_0)). $$ The complex Fourier series representation of a $T$-periodic signal is: $$x(t) = \sum_{n=-\infty}^\infty c_n e^{(2 i \pi n/T) t}$$ thus its Fourier transform is $X(\omega) =2 \pi \sum_{n=-\infty}^\infty c_n \delta(\omega - 2 \pi n/T)$. Now here is my question: what is the expression of $c_n$, by identification with the Fourier transform of the first signal above? Here is what I did: since $\exists k = \omega T/(2 \pi)$, the Fourier transform becomes $$X(\omega) = 2 \pi c_k = \frac{A \pi}{i}(e^{i\alpha}\delta(\omega-2\pi f_0) - e^{-i\alpha}\delta(\omega+2\pi f_0)) \iff c_k = \frac{A}{2i}(e^{i\alpha}\delta(\omega-2\pi f_0) - e^{-i\alpha}\delta(\omega+2\pi f_0))$$
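For what it's worth, the coefficients can be checked numerically (my sketch, not part of the original post). For $x(t) = A\sin(2\pi f_0 t + \alpha)$ the only nonzero coefficients come out as $c_1 = \frac{A}{2i}e^{i\alpha}$ and $c_{-1} = -\frac{A}{2i}e^{-i\alpha}$; the $\delta$'s belong in $X(\omega)$, not inside $c_k$:

```python
import numpy as np

A, f0, alpha = 1.5, 3.0, 0.7
T = 1/f0
t = np.arange(0, T, T/4096)              # one period, endpoint excluded
x = A*np.sin(2*np.pi*f0*t + alpha)

def c(n):
    # c_n = (1/T) * integral over one period of x(t) e^{-2*pi*i*n*t/T} dt
    # (rectangle rule, which is extremely accurate for periodic integrands)
    return np.mean(x*np.exp(-2j*np.pi*n*t/T))

print(np.allclose(c(1),  A/(2j)*np.exp(1j*alpha)))    # True
print(np.allclose(c(-1), -A/(2j)*np.exp(-1j*alpha)))  # True
print(abs(c(2)) < 1e-10)                              # other c_n vanish: True
```

This matches the Euler-formula expansion $A\sin(2\pi f_0 t+\alpha) = \frac{A}{2i}\big(e^{i(2\pi f_0 t+\alpha)} - e^{-i(2\pi f_0 t+\alpha)}\big)$.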
{ "domain": "dsp.stackexchange", "id": 12356, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fourier-transform, fourier-series, amplitude", "url": null }
swift, graphics, uikit func midPoint(point1: CGFloat, point2: CGFloat) -> CGFloat { return (point1 + point2) / 2 } This is the first thing I see that bothers me. First of all, this is a "helper function" and has no business belonging in this class. This should be either a global function, or better yet an extension method. But worse, being called midPoint makes it sound like it should take two points and return a point. But it doesn't. I realize that "midpoint" is a pretty common term when talking about finding the halfway point between any two anythings, but in the context of drawing code, it's confusing. And really, all this is is a special case of the average function. So, take the method out of this class, and make it a global function that takes a variadic list of CGFloats: func average(values: CGFloat...) -> CGFloat { return values.reduce(0, combine: +) / CGFloat(values.count) }
{ "domain": "codereview.stackexchange", "id": 14625, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "swift, graphics, uikit", "url": null }
I’d be interested to know whether this approach, describing the radius of a polygon as a periodic function, has any precedent (has anyone else done this, or am I the first)? I’ve been working on this idea for some time (on and off for years), but just recently overcame some stumbling blocks with a little help from a friend and my dad. Most of the legwork was my own, though. The relatively final form(s) appear to be: 1/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[(v*x)/4]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/2)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1) (n-gon, circumradius=1, rotated $-\pi/4$) 1/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(3Pi/4)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1) (n-gon, function centered around unit circle, unrotated) ((Sec[Pi/v]+1)/2)/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/2)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1)
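For comparison, there is a well-known compact polar form for a regular n-gon that this can be checked against (my sketch, not the poster's formula): with circumradius 1, $r(\theta) = \cos(\pi/n)\,/\,\cos\big((\theta \bmod 2\pi/n) - \pi/n\big)$:

```python
import numpy as np

def polygon_radius(theta, n):
    # well-known polar form of a regular n-gon with circumradius 1,
    # with a vertex at theta = 0
    return np.cos(np.pi/n) / np.cos((theta % (2*np.pi/n)) - np.pi/n)

n = 4
theta = np.array([0.0, np.pi/4, np.pi/2])   # vertex, edge midpoint, next vertex
print(polygon_radius(theta, n))             # [1.0, 0.7071..., 1.0]
```

At a vertex the radius is the circumradius (1), and at an edge midpoint it is the apothem $\cos(\pi/n)$, so this does have precedent as a periodic-function description of polygons.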
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9850429103332288, "lm_q1q2_score": 0.8030389883314318, "lm_q2_score": 0.8152324938410783, "openwebmath_perplexity": 781.848302968416, "openwebmath_score": 0.8314317464828491, "tags": null, "url": "http://math.stackexchange.com/questions/41940/is-there-an-equation-to-describe-regular-polygons" }
matlab, signal-analysis, signal-detection, snr, sinad The relationship between SNR and the normalized correlation coefficient is given in this post with the formula to compute the correlation coefficient. I call this the "Rho Tool" when I create one. Here is the detailed process:
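A quick numerical illustration of the standard identity behind such a tool (my sketch; the linked post's exact formula is not reproduced here): for a signal plus independent noise, the correlation coefficient between the clean and noisy signals is $\rho = \sqrt{\mathrm{SNR}/(1+\mathrm{SNR})}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
snr = 4.0                                      # linear power ratio
s = rng.standard_normal(n)                     # unit-power "signal"
x = s + rng.standard_normal(n)/np.sqrt(snr)    # signal + noise at chosen SNR

rho = np.corrcoef(s, x)[0, 1]
print(rho, np.sqrt(snr/(1 + snr)))             # both close to 0.894
```

Inverting the identity gives $\mathrm{SNR} = \rho^2/(1-\rho^2)$, which is what makes a measured correlation usable as an SNR estimate.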
{ "domain": "dsp.stackexchange", "id": 11966, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "matlab, signal-analysis, signal-detection, snr, sinad", "url": null }
c++, beginner, c++14, pointers I will take the chance to say thank you to anyone who answers this / Edit: Thanks for everyone who helped with answering my questions! (And for putting up with all of them as well) Data members I don't see the point of the T** ptrToPtr member variable. Every instance of *ptrToPtr can be replaced by ptr. Remove that member and you have one less thing to delete. Why is refCount a pointer to a long? If you make it just a long then you don't need to delete it. I see what's happening. Every instance that has the same pointer needs to have access to the same refCount. Got it. Now I can see your thinking with ptrToPtr. Remember that pointers only contain addresses. If you copy a pointer, both pointers refer to the exact same data. You don't need to share references to pointers. That's why the member ptrToPtr is not needed. A pointer and a copy of a pointer do just as well pointing to the same data. From now on, consider it removed from the class. Methods Default constructor: rCPtr()
{ "domain": "codereview.stackexchange", "id": 41059, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, beginner, c++14, pointers", "url": null }
cosmology, astrophysics, gravitational-redshift Here's a picture from a Hamburg Sternwarte description of these absorption lines, that illustrates the situation. Absorption lines are formed due to absorption of the quasar continuum by gas clouds (or maybe even much fainter normal galaxies) A, B and C, which are at progressively lower redshifts (often multiple absorption systems can be found and can be used not only to place the distance of the intervening cloud/galaxy but also say something about its chemical composition). Spectrum D shows how the measured spectrum at the telescope will contain a highly redshifted quasar's light with superimposed absorption bands due to the lower redshift clouds.
{ "domain": "physics.stackexchange", "id": 16818, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cosmology, astrophysics, gravitational-redshift", "url": null }
phylogenetics I think what's causing some confusion is that there are no names/years at the circled nodes. My first question is, was there an actual population between nodes (ex.) 2 and 3? I would like to check if my understanding of the tree is correct: Before node 1, there was a population which split into two populations: Porifera and the population between nodes 1 and 2 which we will call population 1-2. This population is unnamed on the diagram, either because we don't have a name for it, or because members of that population no longer exist, or to keep the diagram simplified. Next, after some millions of years, population 1-2 split into the Cnidarians and population 2-3. Etc. Is my understanding correct? Thank you very much. Yes, your understanding is correct.
{ "domain": "biology.stackexchange", "id": 9104, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "phylogenetics", "url": null }
water, bond Title: Why is H₂O V shaped? We know that the molecule of H₂O is V-shaped. This is what makes it a dipole. But why is that? I mean, if the hydrogens have a partial positive charge, then they should try to get away from each other, until they are on diametrically opposite sides of the oxygen atom. But that doesn't happen. Why is this? The oxygen atom satisfies the octet rule by forming two bonds. It shares 1 electron in a covalent bond with each hydrogen and has 4 remaining valence electrons. It is sp$^3$ hybridized and has the 4 non-bonding electrons in two lone pairs. An sp$^3$ hybridized atom has four attachment points spaced approximately 109$^\circ$ apart and has the shape of a pyramid with a triangular base. See this image for an example. You don't typically see the water molecule drawn in 3 dimensional space with the lone pairs of electrons; all you see is a planar molecule with H-O-H bond angle of ~109$^\circ$.
{ "domain": "chemistry.stackexchange", "id": 576, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "water, bond", "url": null }
lo.logic, ct.category-theory, semantics, db.databases, modal-logic So now I suppose if G7 is the top yellow model in the model Poset category above then given that $ ":" : IRIRef \times String \to IRIRef$ is the append operation on the string component of an IRIRef, and we have defined the strings foaf = <http://xmlns.com/foaf/0.1/> timbl = <https://www.w3.org/People/Berners-Lee/card#i>
{ "domain": "cstheory.stackexchange", "id": 4475, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "lo.logic, ct.category-theory, semantics, db.databases, modal-logic", "url": null }
arduino, mobile-robot, localization, mapping, planning for i = 1:nRows for j = 1:nCols currentPos = sub2ind(mapSize,i,j); % left neighbor, if it exists if (j-1)> 0 destPos = sub2ind (mapSize,i,j-1); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % right neighbor, if it exists if (j+1)<=nCols destPos = sub2ind (mapSize,i,j+1); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % top neighbor, if it exists if (i-1)> 0 destPos = sub2ind (mapSize,i-1,j); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % bottom neighbor, if it exists if (i+1)<=nRows destPos = sub2ind (mapSize,i+1,j); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end end end
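For comparison, a hedged Python/NumPy equivalent of the loop above (my sketch, with an assumed small traversability map): build a sparse digraph whose edge (p, q) carries weight Map(p)·Map(q) for 4-connected grid neighbours:

```python
import numpy as np
from scipy import sparse

Map = np.array([[1.0, 0.5, 1.0],
                [0.2, 1.0, 1.0]])          # assumed traversability values
nRows, nCols = Map.shape
N = Map.size
G = sparse.lil_matrix((N, N))

flat = Map.ravel()                          # row-major flattening
idx = lambda i, j: i*nCols + j              # (i, j) -> linear node index
for i in range(nRows):
    for j in range(nCols):
        for di, dj in ((0, -1), (0, 1), (-1, 0), (1, 0)):
            ni, nj = i + di, j + dj
            if 0 <= ni < nRows and 0 <= nj < nCols:
                G[idx(i, j), idx(ni, nj)] = flat[idx(i, j)] * flat[idx(ni, nj)]

# a 2x3 grid has 7 undirected neighbour pairs, hence 14 directed edges
print(G.nnz)
```

The sparse structure matters once the map is large: only O(4N) entries are ever set, versus N² for a dense adjacency matrix.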
{ "domain": "robotics.stackexchange", "id": 717, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "arduino, mobile-robot, localization, mapping, planning", "url": null }
svm Title: prediction for a linear sum I am learning about SVMs, in particular linear SVMs, through many questions here. However, one problem I faced is that there seems to be no in-depth explanation of how a linear SVM works in terms of predicting new data. I understand that the main purpose of an SVM is to find a linear separating hyperplane $w^Tx+b$, and that a linear SVM is actually a set of super long equations. Let's consider a 2 class problem: A and B. Suppose $(w^*,b^*)$ are the minimizing hyperplane parameters for a fixed choice of $\lambda$.
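The prediction step itself is short once $(w^*, b^*)$ are known: a new point is classified by the sign of $w^{*T}x + b^*$. A hedged sketch with made-up trained parameters (the values of `w_star` and `b_star` below are illustrative, not from any dataset):

```python
import numpy as np

w_star = np.array([2.0, -1.0])   # assumed trained weights
b_star = 0.5                     # assumed trained bias

def predict(x):
    # class A if the point lies on the non-negative side of the hyperplane
    return "A" if w_star @ x + b_star >= 0 else "B"

print(predict(np.array([1.0, 1.0])))   # 2 - 1 + 0.5 = 1.5  -> "A"
print(predict(np.array([-1.0, 2.0])))  # -2 - 2 + 0.5 = -3.5 -> "B"
```

All the optimization machinery only determines $(w^*, b^*)$; prediction never needs the training data again (unlike kernel SVMs, where support vectors enter the decision function).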
{ "domain": "datascience.stackexchange", "id": 4125, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "svm", "url": null }
c, linked-list, console, database, edit-distance /******************************************************************************* * Appends the argument telephone book record to the tail of the argument * * telephone book record list. * * --- * * Returns a zero value if the operation was successful. A non-zero value is * * returned if something fails. * *******************************************************************************/ int telephone_book_record_list_add_record(telephone_book_record_list* list, telephone_book_record* record);
{ "domain": "codereview.stackexchange", "id": 22181, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, linked-list, console, database, edit-distance", "url": null }
javascript, datetime, dom this.fillTheRows = function() { let day = 1; //leap year fix if (this.month == 1) { if (this.year % 4 === 0 && this.year % 100 != 0 || this.year % 400 === 0) { monthLength = 29; } } //rows let condition = (startingDay >= 4) ? 6 : 5; for (let i = 0; i < condition; i++) {
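The leap-year condition above implements the Gregorian rule (divisible by 4, except centuries, except multiples of 400). A standalone cross-check of the same condition, in Python against the standard library:

```python
import calendar

def is_leap(year):
    # same rule as the JavaScript condition in the reviewed code
    return year % 4 == 0 and year % 100 != 0 or year % 400 == 0

for y in (1900, 2000, 2020, 2023):
    print(y, is_leap(y), calendar.isleap(y))
# 1900 False, 2000 True, 2020 True, 2023 False
```

Note the operator precedence: `and` binds tighter than `or`, so the expression groups as `(div4 and not div100) or div400`, exactly as in the JavaScript.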
{ "domain": "codereview.stackexchange", "id": 26579, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, datetime, dom", "url": null }
linux Title: robot-brand the PR2 from basestation Hello, I'd like to have some clarification to be sure to understand the Configuring Basestation tutorial. In my understanding : 1.4.3. Contact Your Sysadmin : This step is necessary for the Sysadmin to allow the BaseStation to connect on the Building Network. It also allows (the DNS entries) any user on the Building network to access the BaseStation and the PR2 through the BaseStation from their name (initially basestation, c1 and c2). And the DHCP access for c1 is used to connect PR2 with WAN port on the Building Network. In our situation, we are connected on the Building Network through a router that "masks" BaseStation and PR2 IPs on the rest of the network. Thus, we didn't require the corresponding DNS entries, as we don't want the robot to be accessible from any computer of the lab, but only the BaseStation or a computer in the Primary internal network (i.e. a person physically next to the robot). 1.5. Configure the Basestation Network
{ "domain": "robotics.stackexchange", "id": 17001, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "linux", "url": null }
homework-and-exercises, special-relativity, vectors, conventions, dimensional-analysis So I will: $$P_X=\left(E_X,\,\boldsymbol{p_X}c\right)=(m_Xc^2,0,0,0)$$ $$P_c=\left(E_c,\,\boldsymbol{p_c}c\right)=(E_c,{p_c}^xc,0,0)$$ So $$P_X\cdot P_c=E_cm_Xc^2-0=E_cm_Xc^2$$ substituting this result in $(1)$: $$m_d^2c^2=m_X^2c^2+m_c^2c^2-2E_cm_Xc^2$$ $$\implies E_c\stackrel{\color{red}{{?}}}{=}\frac{m_X^2+m_c^2-m_d^2}{2m_X}$$ Well, this is definitely not the same answer as $(\rm{A})$. So what am I missing? Why is there a $m_X c$ in the first element of a four-vector which (I thought) should have dimensions of energy, not momentum?
{ "domain": "physics.stackexchange", "id": 69681, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, special-relativity, vectors, conventions, dimensional-analysis", "url": null }
algorithms, matrix Title: Coloring book. Finding region by point Let me explain what I want to achieve. I'm working on a coloring book project. On the input, I'm getting transparent images with black borders (like this). Currently, I've created a 2D matrix with the colors at each point, and based on this matrix I want to form an array of regions. Region: class Region { var set = Set<Point>() func contains(_ point: Point) -> Bool { return set.contains(point) } } The region represents points that form bordered parts (e.g. gun barrel, button, hat). My future logic will be this: the user taps; we get the point; we find the region that contains that point; we color all the points in that region.
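The "find the region containing a point" step is a flood fill. A hedged sketch (a point here is just a (row, col) tuple, my simplification of the `Point` type): starting from the tapped pixel, collect every 4-connected pixel that is not a border pixel.

```python
from collections import deque

def region_containing(grid, start, border=1):
    """grid: 2D list with border pixels marked `border`; returns a set of (r, c)."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    if grid[r0][c0] == border:
        return set()                    # tapped on a border line: no region
    seen = {start}
    q = deque([start])
    while q:                            # breadth-first flood fill
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and (nr, nc) not in seen and grid[nr][nc] != border:
                seen.add((nr, nc))
                q.append((nr, nc))
    return seen

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]]                      # a vertical border splits two regions
print(len(region_containing(grid, (0, 0))))  # left column only -> 3
```

Precomputing all regions once (running this fill from every unvisited pixel) turns each later tap into a dictionary lookup instead of a fresh fill.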
{ "domain": "cs.stackexchange", "id": 14599, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, matrix", "url": null }
How to graph a horizontal stretch (Pre-Calculus notes on trigonometric and other graphs). To stretch a graph horizontally by a factor of 2, divide the input by 2: f(x) becomes f(x/2), for example y = 3√(x/2). In general, f(x) is stretched horizontally by a scale factor of a when the input is divided by a, so each x-coordinate of the graph is multiplied by a; multiplying the input by a instead gives a horizontal compression (a stretch by a factor of 1/a). To stretch in the y direction, multiply the output: a vertical stretch multiplies the y-values of the graph, while horizontal transformations leave the y-values unchanged.
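A quick numerical illustration of the rule (my sketch): stretching horizontally by 2 means replacing x with x/2, so every point of the graph moves twice as far from the y-axis while keeping its height:

```python
import numpy as np

f = np.sqrt
g = lambda x: f(x / 2)        # horizontal stretch of f by a factor of 2

# the point (4, f(4)) on f corresponds to (8, same height) on g
print(f(4.0), g(8.0))  # 2.0 2.0
```

The same check with `g = lambda x: f(2*x)` shows the compression case: the matching point is at x = 2 instead of x = 8.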
{ "domain": "tringham.net", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9840936101542133, "lm_q1q2_score": 0.8966861309181959, "lm_q2_score": 0.9111797106148062, "openwebmath_perplexity": 739.5238033094421, "openwebmath_score": 0.5198330879211426, "tags": null, "url": "https://tringham.net/h06rim/5a3ef8-how-to-graph-a-horizontal-stretch" }
biochemistry, photosynthesis, thermodynamics P680+, the photochemically oxidized reaction-center chlorophyll of PSII, is the strongest biological oxidant known. (emphasis mine) And that explains what (thermodynamically) drives O2-evolving complex (still according to Lodish, 2002): The reduction potential of P680+ is more positive than that of water, and thus it can oxidize water to give O2 and H+ ions. (emphasis mine) I hope those terms are simple enough for you. Finally, for better understanding all this, have a look at the concept of reduction potential (which, as a college student, I believe you have to study). Source: Lodish, H. (2002). Molecular cell biology. 4th ed. New York, N.Y.: Freeman.
{ "domain": "biology.stackexchange", "id": 7703, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "biochemistry, photosynthesis, thermodynamics", "url": null }
classical-mechanics, momentum Title: In snooker does margin of error increase or decrease as the target angle changes? There is a perception (widely held) in snooker that a straight shot is more difficult than an angled shot. There are many forum discussions about this, and the reasons are usually accepted to be psychological. But I was wondering: is there a mathematical or mechanical reason for it? Is the margin of error greater when the shot being taken is at an angle? Practical example: if the white ball was 1 degree off target on a straight shot, and one degree off target on an angled shot (same target for both shots), and each shot was hit at the same speed, would the red ball travel off line to the same extent?
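A crude geometric model (my own sketch, ignoring throw, spin and friction) gives one mechanical answer: a fixed 1° aiming error shifts the impact parameter by a roughly fixed amount, but the object ball's direction becomes more sensitive to that shift at larger cut angles, since it always leaves along the line of centres:

```python
import numpy as np

R, D = 0.0525/2, 1.0        # snooker ball radius (diameter 52.5 mm); 1 m apart
delta = np.deg2rad(1.0)     # 1 degree aiming error

errs = []
for cut in (0.0, 30.0, 60.0):                  # intended cut angle, degrees
    b = 2*R*np.sin(np.deg2rad(cut))            # intended impact parameter
    b_err = b + D*delta                        # aim error shifts it by ~D*delta
    ratio = np.clip(b_err/(2*R), -1.0, 1.0)    # ratio > 1 would be a clean miss
    cut_err = np.degrees(np.arcsin(ratio)) - cut
    errs.append(cut_err)
    print(cut, round(cut_err, 1))              # object-ball direction error, deg
```

In this toy model the direction error grows with the cut angle (and the 60° case already clips to a full miss), suggesting angled shots are mechanically less forgiving, so the perceived difficulty of straight shots would indeed have to be psychological.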
{ "domain": "physics.stackexchange", "id": 39802, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, momentum", "url": null }
python The * star makes the meaning of this source code less obvious to the Gentle Reader; it slightly obscures it. Prefer from constants import TARGET_DIR_. Also, wazzup with that trailing _ underscore? Typically we follow that convention for local variables like dir_ or map_, when we want to avoid shadowing a builtin. That's not what's happening here. Plus, the identifier is part of your Public API, so spelling matters, more than for a local variable. docstrings """_summary_ _desc_: returns the extension of a file name. _param_: file_name (str): The name of the file. _returns_: str: The extension of the file name. """ I didn't understand that module-level docstring at all; it's not helping me. OIC, you meant to write a function docstring: def get_extension(file_name: str) -> str: """_summary_ ... """ return "." + file_name.split(".")[-1].lower()
{ "domain": "codereview.stackexchange", "id": 44974, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
Finding the matrix of a linear transformation with respect to arbitrary bases, and the matrix of an inverse linear transformation. As an example, we find a basis of the vector space of polynomials of degree 1 or less so that the matrix of a given linear transformation is diagonal. A matrix representation of the linear transformation relative to a basis of eigenvectors will be a diagonal matrix — an especially nice representation! Though we did not know it at the time, the diagonalizations of Section SD were really about finding especially pleasing matrix representations of linear transformations. So what is the matrix representation of T with respect to bases B and C? We need to solve one equation for each basis vector in the domain V: one for each column of the transformation matrix A.
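A numerical sketch of the eigenvector-basis idea (the map below is my own example, not the source's): take T(a + bx) = (a + b) + 2bx on degree-at-most-1 polynomials, whose standard-basis matrix (columns are the images of {1, x}) is [[1, 1], [0, 2]]; conjugating by a matrix of eigenvectors diagonalizes it:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # columns: images of the basis {1, x}

evals, evecs = np.linalg.eig(A)   # columns of evecs form the eigenvector basis
D = np.linalg.inv(evecs) @ A @ evecs
print(np.round(D, 10))            # diagonal, with eigenvalues 1 and 2
```

The columns of `evecs` are the coordinate vectors of the new basis polynomials; in that basis the matrix of T is simply diag(1, 2).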
{ "domain": "puntoopera.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9891815516660637, "lm_q1q2_score": 0.8155425233563044, "lm_q2_score": 0.8244619220634456, "openwebmath_perplexity": 369.9796243537065, "openwebmath_score": 0.8836857080459595, "tags": null, "url": "http://puntoopera.it/wbjm/matrix-of-linear-transformation-with-respect-to-two-basis.html" }
physical-chemistry, thermodynamics, metallurgy Title: Why is the formation free energy of carbon dioxide almost constant in Ellingham diagrams? While reading about Ellingham diagram in my textbook, I noticed that for: $\ce{C_(_s_) + O2_(_g_) -> CO2_(_g_)}$, the standard Gibbs free energy ($\ce{\Delta_fG^\circ}$) doesn't seem to vary with temperature (I even compared the line's inclination with a horizontal line and they coincided completely!). Going through the Wikipedia page for Ellingham diagram, I was able to find the following information: The formation free energy of carbon dioxide ($\ce{CO2}$) is almost independent of temperature, while that of carbon monoxide (CO) has negative slope and crosses the $\ce{CO2}$ line near $\mathrm{700 ^\circ C}$.
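A rough numerical cross-check of the quoted crossover, using approximate standard-state values per mole of O2 (my numbers, not from the textbook), with the usual linear Ellingham approximation dG = dH - T*dS and dH, dS taken as temperature-independent:

```python
# C + O2 -> CO2 :  dH ~ -394 kJ, dS ~ +3 J/K   (gas moles unchanged, so the
#                                               line is nearly horizontal)
# 2C + O2 -> 2CO:  dH ~ -221 kJ, dS ~ +179 J/K (one extra mole of gas, so the
#                                               line slopes steeply downward)
dH_co2, dS_co2 = -394e3, 3.0
dH_co,  dS_co  = -221e3, 179.0

T_cross = (dH_co - dH_co2) / (dS_co - dS_co2)   # where the two dG lines meet
print(T_cross - 273.15)  # roughly 710 C, close to the ~700 C quoted
```

The tiny dS of the CO2 reaction is exactly why its line looks horizontal: the slope of an Ellingham line is -dS.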
{ "domain": "chemistry.stackexchange", "id": 15578, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, thermodynamics, metallurgy", "url": null }
Conversely, let $\bar{V}$ be a subgroup of $\bar{G}$ of order three. Its preimage in $G$ has order 6. Hence there is $V \le G$ of order three that maps to $\bar{V}$, and by the above $V$ is conjugate to some $U_i$. • @tj: no, $GL_3(\mathbb{Z})$ is not isomorphic to $PGL_3(\mathbb{Z})$, but to $PGL_3(\mathbb{Z})\times\{\pm 1\}$. – YCor Oct 9 '13 at 17:44 • @Yves: Thanks for pointing out that $GL_3(\mathbb{Z}) \not\cong PGL_3(\mathbb{Z})$. I understand $GL_3(\mathbb{Z}) \cong SL_3(\mathbb{Z})\times \lbrace \pm 1\rbrace$. But do we really have $GL_3(\mathbb{Z}) \cong PGL_3(\mathbb{Z}) \times \lbrace \pm 1\rbrace$ ? – tj_ Oct 9 '13 at 19:11 • yes because the obvious homomorphism $SL_3(\mathbb{Z})\to PGL_3(\mathbb{Z})$ is an isomorphism – YCor Oct 9 '13 at 19:44
{ "domain": "mathoverflow.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9770226287518852, "lm_q1q2_score": 0.8204390865880502, "lm_q2_score": 0.8397339656668287, "openwebmath_perplexity": 196.24063858227652, "openwebmath_score": 0.8857816457748413, "tags": null, "url": "https://mathoverflow.net/questions/144375/conjugacy-classes-of-pgl3-z/144398" }
quantum-mechanics, homework-and-exercises, quantum-information, quantum-measurements Title: How to construct a POVM set to distinguish between two-qubit states? Let's say we have a quantum system with 2 qubits, which are in some linear combination of Bell basis. $\vert \psi_{j} \rangle = \alpha_{j} \vert {T_1} \rangle + \beta_{j} \vert {T_2} \rangle + \gamma_{j} \vert {T_3} \rangle + \epsilon_{j} \vert {T_4} \rangle$ for j = 1, 2 Question Given these 2 qubits, how do I know which qubit I have? To me, this sounds like a good example of a case when we cannot distinguish these states unambiguously. My understanding is that in such a scenario, we can try to construct a POVM set that will allow us to distinguish the states without any false identification (although, we will not be able to ascertain which qubit we have at times). But how does one go about constructing such a POVM set? Bell basis:
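A hedged sketch of the standard unambiguous-discrimination construction for two known pure states (the toy two-qubit states below, written in the Bell basis, are my own choice): each conclusive POVM element is proportional to the projector onto the vector orthogonal to the *other* state, so a false identification never occurs, at the price of an inconclusive outcome:

```python
import numpy as np

# toy states |psi1>, |psi2> in the 4-dimensional Bell basis (my coefficients)
psi1 = np.array([1, 0, 0, 0], dtype=complex)
psi2 = np.array([1, 1, 0, 0], dtype=complex)/np.sqrt(2)

def proj(v):
    return np.outer(v, v.conj())

def normalized(v):
    return v/np.linalg.norm(v)

s = abs(psi1.conj() @ psi2)                          # overlap |<psi1|psi2>|
c = 1/(1 + s)                                        # largest scale keeping E0 >= 0
phi2 = normalized(psi1 - (psi2.conj() @ psi1)*psi2)  # orthogonal to psi2
phi1 = normalized(psi2 - (psi1.conj() @ psi2)*psi1)  # orthogonal to psi1

E1 = c*proj(phi2)               # "state was psi1": never fires on psi2
E2 = c*proj(phi1)               # "state was psi2": never fires on psi1
E0 = np.eye(4) - E1 - E2        # inconclusive outcome

print(abs(psi2.conj() @ E1 @ psi2), abs(psi1.conj() @ E2 @ psi1))  # both ~0
print(min(np.linalg.eigvalsh(E0)) > -1e-9)           # E0 is positive: True
```

With this scaling the average success probability for equal priors is $1-|\langle\psi_1|\psi_2\rangle|$, which is known to be optimal for this task.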
{ "domain": "physics.stackexchange", "id": 67003, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, homework-and-exercises, quantum-information, quantum-measurements", "url": null }
moveit, ros-kinetic, ur5 * /klaus/ur_hardware_interface/kinematics/forearm/yaw: -0.000129296667033 * /klaus/ur_hardware_interface/kinematics/forearm/z: 0 * /klaus/ur_hardware_interface/kinematics/hash: calib_72470218391... * /klaus/ur_hardware_interface/kinematics/shoulder/pitch: 0 * /klaus/ur_hardware_interface/kinematics/shoulder/roll: 0 * /klaus/ur_hardware_interface/kinematics/shoulder/x: 0 * /klaus/ur_hardware_interface/kinematics/shoulder/y: 0 * /klaus/ur_hardware_interface/kinematics/shoulder/yaw: 8.77666344309e-05 * /klaus/ur_hardware_interface/kinematics/shoulder/z: 0.0893072092318 * /klaus/ur_hardware_interface/kinematics/upper_arm/pitch: 0 * /klaus/ur_hardware_interface/kinematics/upper_arm/roll: 1.57027963108 * /klaus/ur_hardware_interface/kinematics/upper_arm/x: 8.02597230883e-05 * /klaus/ur_hardware_interface/kinematics/upper_arm/y: 0 * /klaus/ur_hardware_interface/kinematics/upper_arm/yaw: -0.000256378975585 * /klaus/ur_hardware_interface/kinematics/upper_arm/z: 0
{ "domain": "robotics.stackexchange", "id": 34136, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "moveit, ros-kinetic, ur5", "url": null }
wheeled-robot, dynamics, two-wheeled Title: Dynamic model of two wheeled mobile robot I wanted to design a LQR controller for my two wheel mobile robot. How can the equations of a two-wheeled mobile robot be rearranged into a state-space model? My two wheeled mobile robot is a nonlinear system. The outputs of the system are v, the linear velocity, and w, the angular velocity, while the inputs of the system are x, y and theta. At first we should define what a dynamic model is. A dynamic model is like a physics engine: the input value is given to the engine, and the engine calculates where the robot will be in the next timestep. The calculation is done with sine and cosine functions; see, for example, "Dynamic Modeling and Stabilization of Wheeled Mobile Robot". For example: the robot faces north, and both wheels are spinning. Then the robot will drive north. If only the right wheel is spinning, then the robot will not drive north but will veer off at an angle.
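A hedged sketch of the unicycle kinematic model commonly used for such robots (my simplification; a full dynamic model would also include masses, inertias and wheel torques): state (x, y, theta), inputs (v, w), integrated with forward Euler:

```python
import math

def step(state, v, w, dt=0.01):
    # x' = v cos(theta), y' = v sin(theta), theta' = w  (forward Euler)
    x, y, th = state
    return (x + v*math.cos(th)*dt,
            y + v*math.sin(th)*dt,
            th + w*dt)

state = (0.0, 0.0, math.pi/2)        # facing "north" (+y direction)
for _ in range(100):                 # both wheels equal: w = 0, v = 1 m/s
    state = step(state, v=1.0, w=0.0)
print(state)  # approximately (0.0, 1.0, pi/2): the robot drove 1 m north
```

Linearizing these equations about a reference trajectory gives the (A, B) matrices needed for an LQR design; the model is nonlinear, so the linearization is only valid near that trajectory.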
{ "domain": "robotics.stackexchange", "id": 1627, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "wheeled-robot, dynamics, two-wheeled", "url": null }
python, functional-programming
from selenium import webdriver
import pandas as pd

browser = webdriver.Chrome()

class GameData:
    def __init__(self):
        self.dates = []
        self.games = []
        self.scores = []
        self.home_odds = []
        self.draw_odds = []
        self.away_odds = []

def parse_data(url):
    browser.get(url)
    df = pd.read_html(browser.page_source, header=0)[0]
    game_data = GameData()
    game_date = None
    for row in df.itertuples():
        if not isinstance(row[1], str):
            continue
        elif ':' not in row[1]:
            game_date = row[1].split('-')[0]
            continue
        game_data.dates.append(game_date)
        game_data.games.append(row[2])
        game_data.scores.append(row[3])
        game_data.home_odds.append(row[4])
        game_data.draw_odds.append(row[5])
        game_data.away_odds.append(row[6])
    return game_data

if __name__ == '__main__':
{ "domain": "codereview.stackexchange", "id": 40828, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, functional-programming", "url": null }
homework-and-exercises, forces, classical-mechanics, orbital-motion, potential-energy $$ \frac{r}{a} < \frac{-1 + \sqrt{5}}{2} $$ which is a sufficiently tight upper bound on $x$, for all practical purposes.
{ "domain": "physics.stackexchange", "id": 69846, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, forces, classical-mechanics, orbital-motion, potential-energy", "url": null }
# Equivocal Function Transformations The last two posts were about transformations of functions (shift, stretch, reflect) and their effect on a graph, first individually and then in combination. The next thing to look at will be how to determine the transformations when you are given a graph; but before we take that challenge in general, we need to see how transformations can interact with particular base functions, resulting in graphs that can be attained in more than one way. This can make it hard to decide what to call it; or it can actually make things easier, by letting us choose the easier of two ways. ## Square roots: compressed or stretched? We’ll first look at this question from 2009: Square Root Functions and Transformations How can we tell by looking at a graph if a square root function is being horizontally compressed or vertically stretched? For example, y = 2 sqrt(x) is being vertically stretched when graphed, but y = sqrt(2x) is being horizontally compressed.
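The equivalence the question hints at can be checked numerically: since √(2x) = √2·√x, a horizontal compression by 2 gives exactly the same graph as a vertical stretch by √2 (not by 2, which is a different curve). A quick sketch:

```python
import math

xs = [0.5, 1.0, 2.0, 3.0, 4.0]
compressed = [math.sqrt(2 * x) for x in xs]             # y = sqrt(2x): horizontal compression by 2
stretched = [math.sqrt(2) * math.sqrt(x) for x in xs]   # y = sqrt(2)*sqrt(x): vertical stretch by sqrt(2)
doubled = [2 * math.sqrt(x) for x in xs]                # y = 2*sqrt(x): a genuinely different graph
```

The first two lists agree point for point; the third does not, which is why the same picture can be named in more than one way.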
{ "domain": "themathdoctors.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9934102273614378, "lm_q1q2_score": 0.8003232513895994, "lm_q2_score": 0.8056321843145404, "openwebmath_perplexity": 591.5794150520877, "openwebmath_score": 0.7341336607933044, "tags": null, "url": "https://www.themathdoctors.org/equivocal-function-transformations/" }
# Hi, I'm Ataias Reis Programmer, made in Brazil, Alma mater University of Brasilia # 30 days of code in Go: Day 19 - Binomial Distribution II Hi there! Today's problem is very similar to the last day, again using a binomial distribution. ### Question Given that 12% of the pistons of a manufacture are rejected because of incorrect sizing, what is the probability of batch of 10 pistons contain 1. No more than two rejects? 2. At least two rejects? Again, very similar. Now the probability of success for the Bernoulli trial is of the chance of the piston being reject, the same as being incorrectly sized. For the case of no more than two rejects, the probability is given by $$P_{x \le 2} = \sum_{i=0}^2 b(i, 10, 0.12),$$ while for the case of at least two rejects it is $$P_{x \ge 2} = \sum_{i=2}^{10} b(i, 10, 0.12).$$ ### Code My solution is pretty similar to my last one and is presented below. Just the main function was changed. package main import ( "fmt" "math" )
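The two sums above can be evaluated directly. The post's own solution is in Go; the sketch below is a Python equivalent of the same binomial computation:

```python
from math import comb

def b(i, n, p):
    """Binomial pmf: probability of exactly i successes in n Bernoulli trials."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# No more than two rejects out of 10, with p = 0.12 per piston.
p_at_most_2 = sum(b(i, 10, 0.12) for i in range(0, 3))
# At least two rejects.
p_at_least_2 = sum(b(i, 10, 0.12) for i in range(2, 11))
```

The two probabilities come out to roughly 0.891 and 0.342; note they overlap at exactly two rejects, so they do not sum to 1.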
{ "domain": "com.br", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.97482115683641, "lm_q1q2_score": 0.8080670901342489, "lm_q2_score": 0.8289388104343892, "openwebmath_perplexity": 5992.30584688969, "openwebmath_score": 0.3870244026184082, "tags": null, "url": "https://ataias.com.br/2016/10/06/30-days-of-code-in-go-day-19-binomial-distribution-ii/" }
feature-selection Title: what are the solutions for efficient feature selection in a very large feature space? I have a classification dataset (50k observations and 10 features) on which I can't get a good result.. I want to try increasing the number of features.. I plan to automatically generate many feature options from whatever seems reasonable to me for my dataset. When I roughly calculated the number of possible signs, they turned out to be about 100 million or more ... Naturally, such a large number of features cannot be processed at once, and 99.9% of the features will turn out to be unimportant. My plan is this: create a date set of 100/1000 features train the model choose features that are important from the point of view of the classification algorithm save important features transition to point "1" but with new features My question is this:
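The batch-and-prune loop in the plan above can be sketched in plain Python. Here absolute Pearson correlation stands in for model-based feature importance, and all names and data are illustrative, not from the original post:

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation; 0.0 for a constant series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_batchwise(feature_batches, target, keep=100):
    """Score each incoming batch of candidate features, prune to the best
    `keep`, and carry the survivors into the next round."""
    survivors = {}
    for batch in feature_batches:            # step 1: a batch of candidates
        survivors.update(batch)
        scored = sorted(survivors.items(),
                        key=lambda kv: abs(pearson(kv[1], target)),
                        reverse=True)
        survivors = dict(scored[:keep])      # steps 2-5: prune, then loop
    return list(survivors)

random.seed(0)
target = [float(i) for i in range(30)]
batch1 = {f"noise{j}": [random.random() for _ in target] for j in range(5)}
batch2 = {"signal": [2 * t + 1 for t in target],
          **{f"noise{j}": [random.random() for _ in target] for j in range(5, 10)}}
chosen = select_batchwise([batch1, batch2], target, keep=3)
```

The perfectly correlated "signal" feature survives every pruning round while most noise features fall away, which is the behavior the iterative plan relies on.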
{ "domain": "datascience.stackexchange", "id": 11274, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "feature-selection", "url": null }
counted above. Transforms the range [first, last) into the next permutation from the set of all permutations that are lexicographically ordered with respect to operator< or comp. Now, we have all the numbers which can be made by keeping 1 at the first position. As you can see, in a permutation, the order matters. R-permutation of a set of N distinct objects with repetition allowed. If we take one such circular permutation. permutations. How many different two-chip stacks can you make if the bottom chip must be red or blue? n×(n−1)×(n−2)×…×2×1, which number is called "n factorial" and written "n!". n P n is the number of permutations of n different things taken n at a time -- it is the total number of permutations of n things: n!. Then (12)α ≠ (12)β. A permutation g ∈ S_n is in canonical form if g maps the identity element to itself. Permutations are used to calculate the probability of an
{ "domain": "tvniederrhein.de", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9805806472775278, "lm_q1q2_score": 0.8150027796464006, "lm_q2_score": 0.8311430394931456, "openwebmath_perplexity": 409.6601729542495, "openwebmath_score": 0.7891137003898621, "tags": null, "url": "http://glyk.tvniederrhein.de/permutation-of-numbers-from-1-to-n.html" }
c#, wcf public static Address[] ToAddresses(this DecisionRequest request) { return new Address[] { new Address { City = request.City, Country = request.Country, HouseNumber = request.HouseNumber, HouseNumberExtension = request.HouseNumberExtension, Street = request.Street, ZIP = request.ZIP, kind = "MAIN" } }; } } This is how I will use this code public static class Mapper { public static Request FromDecisionRequestToZRequest(DecisionRequest request) { var applicationDetails = request.ToData(); var address = request.ToAddresses();
{ "domain": "codereview.stackexchange", "id": 8965, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, wcf", "url": null }
c, hash-map /*For testing...*/ void displayHashTable (hnode** hashArray, void (*printValue)(void* )){ int i; hnode *curr, *next; printf("%-20s%-20s\n", "hashTable-Key,", "hashTable-Value"); for (i = 0; i < HASHSIZE; i++){ curr = hashArray[i]; while(NULL != curr){ next = curr->next; printf("%-20s,",curr->name); printValue(curr->value); printf("\n"); curr = next; } } } hashTable.h #include <stdio.h> #include <string.h> #include <stdlib.h> #ifndef HASHTABLE_H_INCLUDED #define HASHTABLE_H_INCLUDED #define HASHSIZE 52 /* 26 for a-z and 26 for A-Z*/
{ "domain": "codereview.stackexchange", "id": 26988, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, hash-map", "url": null }
algorithm, strings, pathfinding, matlab, edit-distance
% to do this, we have to find the minimum of the surrounding paths.
% Back to the cost variables!
h_cost = cost_matrix(x,y-1);
v_cost = cost_matrix(x-1,y);
d_cost = cost_matrix(x-1,y-1);
% it's important to keep track of the order, because the index of
% the minimum value of the list will give us the correct direction
% for the point!
min_list_direction = [h_cost, v_cost, d_cost];
% now we find the minimum AND the index. Without the index, we
% don't know which way to go!
minimum = min(min_list_direction);
% now that we have a minimum, which direction is it in?
% we went h, v, d
index = find(min_list_direction == minimum);
if index(1) == 1
    direction_cell_matrix{x,y} = 'h';
elseif index(1) == 2
    direction_cell_matrix{x,y} = 'v';
elseif index(1) == 3
    direction_cell_matrix{x,y} = 'd';
end
end
end
%%Traceback
{ "domain": "codereview.stackexchange", "id": 21116, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm, strings, pathfinding, matlab, edit-distance", "url": null }
evolution, ornithology, mammals Grigg et al. 2004 Hillenius and Ruben 2004 Ruben 1995
{ "domain": "biology.stackexchange", "id": 4746, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "evolution, ornithology, mammals", "url": null }
phylogenetics Title: Demographic model for admixed African Americans I am fairly new to genomics and population genetics, and I am reading the article 'Demographic history and rare allele sharing among human populations': https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3142009/pdf/pnas.201019276.pdf In this article they propose a demographic model in Figure 4 on page 11986. However, I am confused about how African Americans, who are known to have both African and European ancestry, fit into this demographic model. And if I am to consider the African American population, then what is the split time between the two populations, the time they merge into an admixed population, and the admixture proportion?
{ "domain": "bioinformatics.stackexchange", "id": 974, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "phylogenetics", "url": null }
to solve for the area of a segment, solve for the area of the sector, and subtract the area of a triangle. First let's solve for the angle of the sector: we have the radius = 1, and because we're dealing with a symmetric arrangement, we can make a right triangle with the radius as hypotenuse, and half the radius as the short leg. We can solve for half the angle of the sector (call this theta) by taking cos(theta) = half radius/radius. Note that the radius drops out, so we end up with a generalized equation that lets us solve for the angle regardless of what the radius is. (We will need an actual radius later to solve for the specific area, but we use "r" as has been suggested as the variable for that so any quantity can be plugged in later.) Therefore cos(theta) = .5; arccos(.5) = theta = 60 degrees. Two times theta = 120 degrees = the angle of our sector.
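Carrying the computation through for two circles of radius 1 whose centers are one radius apart (the configuration the half-radius right triangle implies), the lens-shaped intersection is two mirror-image segments. A quick numeric check, assuming r = 1:

```python
import math

r = 1.0
theta = math.acos(0.5)                  # half the sector angle: 60 degrees
sector_angle = 2 * theta                # 120 degrees, as derived above
sector_area = (sector_angle / (2 * math.pi)) * math.pi * r**2
triangle_area = 0.5 * r**2 * math.sin(sector_angle)
segment_area = sector_area - triangle_area
lens_area = 2 * segment_area            # the intersection is two such segments

# Cross-check against the closed form for two radius-r circles
# whose centers are d apart, here with d = r:
d = r
closed_form = 2 * r**2 * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r**2 - d**2)
```

Both routes give 2π/3 − √3/2 ≈ 1.228, confirming the sector-minus-triangle reasoning.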
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759621310288, "lm_q1q2_score": 0.8019391553101182, "lm_q2_score": 0.8175744806385543, "openwebmath_perplexity": 247.90578941242111, "openwebmath_score": 0.8896540999412537, "tags": null, "url": "http://mathhelpforum.com/geometry/167320-area-intersection-between-two-circles.html" }
navigation, mapping, knowrob Title: semantic map input Hello, I'm new to the KnowRob package and would like to know what the input and output of the semantic map are. Is a bitmap image an input, and can the semantic map be created from a bitmap image? Or what is the general input (like RGB images or point cloud data) for semantic maps in KnowRob? Any help is appreciated. Thanks
{ "domain": "robotics.stackexchange", "id": 10676, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, mapping, knowrob", "url": null }
light, general-relativity the ball is simply being left behind (which, ironically, checks better with the stresses you can easily detect on every object around you that are not present on the ball, including the feeling you are receiving from your bottom right now).
{ "domain": "astronomy.stackexchange", "id": 6293, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "light, general-relativity", "url": null }
special-relativity, dimensional-analysis Observant readers may now be saying "Hey, that can't be right! The Lorentz factor also slows time... so shouldn't the light in the moving sphere be slower and thus less energetic?" While it is true that time will pass more slowly within the moving sphere, it is not correct to think that this same light will be slower when viewed from the rest frame. For that situation the geometry of the wavelengths wins, and the light looks green. However, a simpler way to think of it is that since the light is being emitted and reflected by an object traveling at $\gamma=2$ (or equivalently $v=\sqrt{\frac{3}{4}}$ c), the ordinary Doppler effect will double its frequency. (@ColinK has correctly noted that the above explanation glosses over some important complications. Please see his excellent comment for more info. I may try to address that soon.) Now it's time to put this all together. The original light and sphere had an eta factor of:
{ "domain": "physics.stackexchange", "id": 6221, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, dimensional-analysis", "url": null }
electromagnetism, special-relativity, differential-geometry, maxwell-equations $$F_{\mu \nu} = \partial_{\mu}A_{\nu} - \partial_{\nu} A_{\mu} $$ That's a rather trivial proof, but it seems that the author (and others that I've seen) appeal to this as a general property of antisymmetric tensors. So, if anyone would want to show me how to prove why this would be a general property of antisymmetric tensors, I would be very grateful. It's not true for a general antisymmetric tensor. The equation you write is a Bianchi identity, $$d(dA)=0$$. This is true only because $F$ is of the form $dA$ (an exterior derivative). It didn't have to be, of course. As an analogy, the Riemann tensor satisfies a Bianchi identity too. It is certainly a special tensor - not any good old tensor will describe curvature. EDIT: In response to comments. Roughly, an exterior derivative is a map between differential forms - it maps a $k$-form to a $(k+1)$-form. It's an extension of the notion of successive differentiation. For smooth functions ($0$-forms) $f$, it is the regular derivative.
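For concreteness, the answer's claim can be checked in components. With $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, the cyclic (Bianchi) combination cancels pairwise by the symmetry of mixed partial derivatives:

```latex
\partial_{\lambda} F_{\mu\nu} + \partial_{\mu} F_{\nu\lambda} + \partial_{\nu} F_{\lambda\mu}
  = \partial_{\lambda}\partial_{\mu} A_{\nu} - \partial_{\lambda}\partial_{\nu} A_{\mu}
  + \partial_{\mu}\partial_{\nu} A_{\lambda} - \partial_{\mu}\partial_{\lambda} A_{\nu}
  + \partial_{\nu}\partial_{\lambda} A_{\mu} - \partial_{\nu}\partial_{\mu} A_{\lambda}
  = 0 .
```

Each mixed second derivative appears twice with opposite signs. For an antisymmetric tensor not of the form $dA$, nothing forces this combination to vanish, which is exactly the answer's point.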
{ "domain": "physics.stackexchange", "id": 66249, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, special-relativity, differential-geometry, maxwell-equations", "url": null }
time, measurements, error-analysis, si-units, metrology However I am puzzled by the fact that all uncertainties have disappeared since I graduated. Following Wikipedia, the "realization" of the meter has a relative uncertainty of about $2×10^{-11}$, but I could not find information about the second and its uncertainty. From what I recall before the redefinition, $\Delta\nu_{{\rm Cs}}$ had a relative uncertainty of O($10^{-15}$), so is that how well we know the duration of the 'realization' of a second? A second is a second long by definition, but if you measure any time in seconds, the number of seconds you infer will be subject to an error of at least $\mathcal O(10^{-15})$ because of the uncertainty of caesium clocks, as you correctly point out. This is true even if you make a higher-precision measurement using new clocks like quantum clocks with uncertainties of $\mathcal O(10^{-18})$. Edit: For completeness, I include this quote from here:
{ "domain": "physics.stackexchange", "id": 82823, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "time, measurements, error-analysis, si-units, metrology", "url": null }
c++, homework, tic-tac-toe for (size_t i = 0; i < 3; ++i) { for (size_t k = 0; k < 3; ++k) { cout << "\n"; char left, right; left = right = ' '; for (size_t j = 0; j < 3; ++j) { if (k == 1) { if (3*i + j + 1 == cur) { left = '>'; right = '<'; } else { left = right = ' '; } } cout << " " << left << " "; for (size_t l = 0; l < 3; ++l) { cout << grid[i][j].get(k, l) << " "; } cout << right; } } cout << "\n\n"; } cout << "\n"; } void Game::input(int& g) {
{ "domain": "codereview.stackexchange", "id": 33947, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, homework, tic-tac-toe", "url": null }
c, linked-list, console, database, edit-distance /******************************************************************************* * Allocates and initializes a new telephone book record. * * --- * * Returns a new telephone book record or NULL if something goes wrong. * *******************************************************************************/ telephone_book_record* telephone_book_record_alloc(const char* last_name, const char* first_name, const char* phone_number, int id);
{ "domain": "codereview.stackexchange", "id": 22181, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, linked-list, console, database, edit-distance", "url": null }
discrete-signals, signal-analysis, power-spectral-density Title: How to calculate the discrete power of a discrete signal? If I have a time series defined by a series like $$ x = [a_1, a_2, a_3, ..., a_n] $$ for time from $t = 1$ to $t = n$, how can I get the power of the signal in this form: $$ P_x = [p_1, p_2, p_3, ..., p_n] $$ Can I use this formula and make it so $N_0 = N_1$ $$ P_x=\frac{1}{N_1-N_0+1}\sum_{n=N_0}^{N_1}\left|x(n)\right|^2 $$ then calculate the $P$ value for every instance in $x(t)$? Thank you. The instantaneous power is simply $p(k) = x^2(k)$. The signal energy is given by $W_x = \sum_{k=0}^{N} x^2(k)$.
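Applied samplewise, the answer's two quantities look like this (a small sketch; the averaging window here is simply the whole signal, and the sample values are made up for illustration):

```python
x = [1.0, 2.0, 3.0]

inst_power = [v**2 for v in x]   # p(k) = x^2(k), one value per sample
energy = sum(inst_power)         # W_x = sum over k of x^2(k)
avg_power = energy / len(x)      # mean power over the chosen window
```

The per-sample list `inst_power` is the vector form the question asks for, while `energy` and `avg_power` are the scalar summaries.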
{ "domain": "dsp.stackexchange", "id": 3446, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, signal-analysis, power-spectral-density", "url": null }
aliasing Title: What happens if an harmonic in a band-limited signal at Nyquist frequency is added to a 90 degrees out-of-phase replica of it? I'm interested about aliasing in digital audio and I wonder if aliasing can be produced by simple mixing of band-limited signals. As I know a band limited signal can contain frequencies up to sample rate/2 frequency, the so-called Nyquist frequency. Is this right? If that is so, what will happen if an harmonic in a band-limited signal at Nyquist frequency is added to a 90 degrees out-of-phase replica of it? Will this create an harmonic with double frequency and could such an addition create aliasing? No. Adding two waves at the same frequency results in another wave of the same frequency. Depending on the phases, there can be anywhere from complete constructive interference to maximum destructive interference. If the amplitudes are the same there can be complete cancellation. This applies to the Nyquist frequency too.
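A quick numeric check of the answer: sampled at exactly the Nyquist rate, a 90-degree-shifted replica of the Nyquist tone lands on the zero crossings (numerically zero), so the mix is just the original tone again at the same frequency, with nothing created at double the frequency:

```python
import math

N = 8
x1 = [math.cos(math.pi * n) for n in range(N)]                # Nyquist tone: +1, -1, +1, ...
x2 = [math.cos(math.pi * n - math.pi / 2) for n in range(N)]  # 90-degree out-of-phase replica
mix = [a + b for a, b in zip(x1, x2)]                         # samplewise addition
```

The shifted replica samples to (numerically) zero at every integer n, so `mix` is indistinguishable from `x1`: same frequency, no new harmonic.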
{ "domain": "dsp.stackexchange", "id": 6171, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "aliasing", "url": null }
general-relativity, ligo The main alternative to this is the class of tensor-scalar theories. In this case, in addition to the tensor field of classical GR, there is also a scalar field. Analogous to a tensor field, this scalar field assigns a scalar to every point in spacetime. This scalar field is equated to $1/G$, where $G$ is the gravitational constant. Since the scalar field varies from place to place, this is equivalent to $G$ being variable. In classical Einsteinian GR, $G$ is constant. The most well-known tensor-scalar theory is Brans-Dicke theory. This theory requires a coupling constant $\omega$ in the field equations, which has a constant value, but that value is unknown and can be altered so as to match experimental observations. Consequently Brans-Dicke theories are hard to falsify, which is often the mark of a bad explanation, and so most physicists stick with Einsteinian GR (for now). What is the best alternative to GR given the detection of these echoes?
{ "domain": "physics.stackexchange", "id": 36884, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, ligo", "url": null }
c++, performance, algorithm, graphics Version B: void bresenprecalcB(int x1, int y1, int x2, int y2) { int dx = x2 - x1, dy = y2 - y1, dxA = sgn(dx), dyA = sgn(dy), dxB = abs(dx), dyB = abs(dy), cx = dxB >= dyB, cy = dyB >= dxB; int lookm[10] = {0,dxB,0,dyB,0,dxA,0,dyA,dyB,dxB}; int qx = lookm[cy], qy = lookm[2+cx], xm = lookm[4+cx], ym = lookm[6+cy], xym = lookm[8+cx], qr = qx != qy, pd = qx + qy, er = pd - (xym / 2), ec; int lookx[2] = {xm,xm + (qr * cy * dxA)}, looky[2] = {ym,ym + (qr * cx * dyA)}, lookd[2] = {qr * pd, qr * (pd - xym)}; //draw_point(x1, y1); for(;;) { ec = er >= 0; x1 += lookx[ec]; y1 += looky[ec]; er += lookd[ec]; //draw_point(x1, y1); if (x2 == x1 && y2 == y1) break; }; };
{ "domain": "codereview.stackexchange", "id": 38933, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, algorithm, graphics", "url": null }
javascript, jquery Title: Personal project for managing my bookmarks I am working on a personal project for managing my bookmarks, which is basically a web page to manage my bookmarks by categories. Managing here means everything - adding, viewing, updating, searching by categories, adding new categories. As can be guessed there is need of AJAX and JavaScript. I have a JavaScript file for managing this interactivity on the webpage. I am a newbie when it comes to JS/jQuery/AJAX so I have doubts about the quality of the code that I have written. I decided to get part of it reviewed. I got an answer due to which I decided to refactor the whole thing. I am placing both files here for review. The first file is largely based on triggers. The second one has functions instead of using triggers. I want to know whether there is any improvement or not in any aspect. Or is the old one better in any aspect. Any suggestions are ok.
{ "domain": "codereview.stackexchange", "id": 7929, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery", "url": null }
performance, sql, sql-server, t-sql, join To your question of whether this can be further optimized, I would answer not really. This structure violates so many principles of data quality, it would take less time to explain what it does do right. Twice in my IT career I have chosen to change jobs rather than put up with the mandate that "Change is not an option". If I were handed a system that was structured like this and told I could not make changes, I would do the best I could while looking for another job. Sorry to be blunt, but the real problem here is a horrible data model. I don't know if adding an index would count as a change, but a simple improvement you could make would be a non-clustered index on the name column of the Place table. That would at least improve the join performance from the logs table. Good luck
{ "domain": "codereview.stackexchange", "id": 29534, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, sql, sql-server, t-sql, join", "url": null }
You have these numbers for successive $n$: 1, 6, 18, 40, 75, 126 Take the differences: 5, 12, 22, 35, 51 Take the differences of those: 7, 10, 13, 16 Take the differences of those: 3, 3, 3 You end up with a constant on the 3rd difference. So the original numbers follow a 3rd-degree polynomial. This algorithm is $O(n^3)$. I could use the difference information to find out exactly what the polynomial is. But the point is that I don't care if I just want to know the complexity. #### andrewkirk Homework Helper Gold Member Here's a way that works for any series in which the n-th term is a polynomial in n. Construct a difference table as follows: First row is the series of numbers s(1), ...., s(n).
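The repeated-differencing procedure both replies describe is easy to mechanize; a small sketch that builds the difference table and reads off the polynomial degree:

```python
def difference_table(seq):
    """Build rows of successive differences until a row becomes constant."""
    rows = [list(seq)]
    while len(rows[-1]) > 1 and len(set(rows[-1])) > 1:
        last = rows[-1]
        rows.append([b - a for a, b in zip(last, last[1:])])
    return rows

rows = difference_table([1, 6, 18, 40, 75, 126])
degree = len(rows) - 1   # constant row first appears at the 3rd difference
```

For the series above this reproduces the rows 5, 12, 22, 35, 51 and 7, 10, 13, 16, bottoming out at the constant row 3, 3, 3, so the terms follow a 3rd-degree polynomial.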
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9591542805873231, "lm_q1q2_score": 0.8281257720155506, "lm_q2_score": 0.863391624034103, "openwebmath_perplexity": 447.1146995006615, "openwebmath_score": 0.6652476787567139, "tags": null, "url": "https://www.physicsforums.com/threads/algorithm-analysis-problem-turned-into-a-math-problem.973321/" }
estimation, kalman-filters, filtering, state-space This is the "just trust the assertions of the math" argument. Second, and this is definitely a loose intuitive argument, any single-point measurement is going to narrow down the possible values of the states in exactly one direction in the state space. I.e., a measurement $y = \begin{bmatrix}1 & -1 \end{bmatrix} \mathbf x$ tells you a lot about the linear combination $x_1 - x_2$, but it tells you nothing about $x_1 + x_2$.
{ "domain": "dsp.stackexchange", "id": 11512, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "estimation, kalman-filters, filtering, state-space", "url": null }
gravity, orbital-mechanics, newtonian-gravity By chance, I selected parameters that happen to make a (at least approximately) closed orbit. That won't happen in general. When I change the angular momentum slightly, I get the following. Here the red points show the starting and ending positions of the orbiting object. The positions and velocities don't quite match up, and if I continue the orbit indefinitely, it fills up the entire annulus between the minimum and maximum radii: Click here for an animation. These kinds of orbits are common in galactic contexts, where things are orbiting inside extended mass distributions. The particular case of a force that is constant with distance corresponds to an orbit inside a density distribution that scales inversely with radius, i.e. $\rho\propto r^{-1}$. That's precisely the structure of the deep interior of the Navarro-Frenk-White density profile.
{ "domain": "astronomy.stackexchange", "id": 6963, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gravity, orbital-mechanics, newtonian-gravity", "url": null }
cc.complexity-theory, np-hardness Title: Simplest proof of NP-completeness The only first-principles "proof" that a problem is NP-complete I encountered is from Introduction to Algorithms, and deals with the circuit-satisfiability problem. According to the authors, many details in the proof are omitted. What is the simplest first-principles proof that a problem is NP-complete that thoroughly presents all the technical details? What about { (M, $1^t$) : M is a Turing machine that, run on a blank tape, accepts within $t$ steps }? The proof of NP-completeness is a simple exercise from the definition.
{ "domain": "cstheory.stackexchange", "id": 1382, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, np-hardness", "url": null }
# Condition for two events to be independent Question: An experiment has 10 equally likely outcomes. Let $A$ and $B$ be two non-empty events of that experiment. If $A$ consists of $4$ outcomes, find the number that $B$ must have in order for $A$ and $B$ to be independent. I really don't understand what the question is asking. For two events to be independent, the probability of one of the events occurring must be independent of the other event occurring. So shouldn't the answer simply be $6$?
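The condition can be checked mechanically. Independence means P(A∩B) = P(A)P(B); with |A| = 4 out of 10 equally likely outcomes, this forces |A∩B| = 4|B|/10 to be a whole number that is also a feasible overlap size. A brute-force enumeration (a sketch to probe the question, not part of the original thread):

```python
# For each candidate size b of B, independence requires an overlap of
# exactly k = 4*b/10 outcomes, and that overlap must satisfy
# max(0, 4 + b - 10) <= k <= min(4, b) to be realizable in a 10-outcome space.
valid = []
for b in range(1, 11):                      # B is non-empty
    k = 4 * b / 10                          # required size of the intersection
    lo, hi = max(0, 4 + b - 10), min(4, b)  # feasible overlap range
    if k.is_integer() and lo <= k <= hi:
        valid.append(b)
```

Only |B| = 5 (with |A∩B| = 2) and the trivial |B| = 10 survive, which suggests the guess of 6 in the question does not satisfy the independence condition.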
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9859363729567545, "lm_q1q2_score": 0.8037673680941626, "lm_q2_score": 0.8152324938410784, "openwebmath_perplexity": 196.8587564529405, "openwebmath_score": 0.8227855563163757, "tags": null, "url": "https://math.stackexchange.com/questions/1783749/condition-for-two-events-to-be-independent" }
c++, performance, reinventing-the-wheel, c++20 #endif // REPORT_WRITER_H UtilityTimer.h
#ifndef CC_UTILITY_TIMER_H
#define CC_UTILITY_TIMER_H
/*
 * Chernick Consulting Utility timer class.
 * Encapsulates execution timing for all or parts of a program.
 */
#include <chrono>
#include <ctime>
#include <iostream>
#include <string>

class UtilityTimer
{
public:
    UtilityTimer() = default;
    ~UtilityTimer() = default;
    void startTimer() noexcept
    {
        start = std::chrono::system_clock::now();
    }
    void stopTimerAndReport(const std::string& whatIsBeingTimed) noexcept
    {
        end = std::chrono::system_clock::now();
        std::chrono::duration<double> elapsed_seconds = end - start;
        std::time_t end_time = std::chrono::system_clock::to_time_t(end);
        double elapsedTimeForOutput = elapsed_seconds.count();
        std::cout << "finished " << whatIsBeingTimed << std::ctime(&end_time)
            << "elapsed time in seconds: " << elapsedTimeForOutput << "\n\n\n";
    }
{ "domain": "codereview.stackexchange", "id": 44065, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, reinventing-the-wheel, c++20", "url": null }
c++, c++11, opengl Title: OpenGL shader class I'm working my way through some basic OpenGL tutorials and have decided to offload the shader loading/compiling/linking to a separate object that I'll use for the remainder of the tutorial material. Ideally, I'd also like it to be useful later, when I get on with working my own OpenGL projects. And, who knows, maybe someone else would find it useful. That said, I'd like someone (everyone?) to offer their feedback on the robustness and correctness of this code. I'm shooting for C++11/14 modes and methods as well as useful errors on something going wrong. One thing I know doesn't work correctly is that if the user tries to create an instance of a Shader object using the constructor that takes file names before they've created an OpenGL context, it simply segfaults. I'm not sure how I can stop OpenGL from doing that, though.
{ "domain": "codereview.stackexchange", "id": 18355, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++11, opengl", "url": null }
coordinate-systems, tensor-calculus, covariance Title: Tensor transformation Formula Proof Ok so basically I am trying to prove that the following expression: Can be written using matrices like this: Any suggestions on how to approach this? To prove that you need to know this: $a_{ij}b_{jn} = (\textbf{a} \cdot \textbf{b})_{in}$, i.e. summing over a repeated adjacent index gives the $(i,n)$ component of the matrix product. Note the position of the index $j$. You have $a'_{mn} = v_{mi} v_{nj} a_{ij}$ and you want to show $\textbf{a}' = \textbf{v} \cdot \textbf{a} \cdot \textbf{v}^T$. So, $$a'_{mn} = v_{mi} v_{nj} a_{ij} = v_{nj} v_{mi} a_{ij} = v_{nj} (\textbf{v} \cdot \textbf{a})_{mj} = (\textbf{v} \cdot \textbf{a})_{mj} v_{nj} = (\textbf{v} \cdot \textbf{a})_{mj} v^T_{jn} = (\textbf{v} \cdot \textbf{a} \cdot \textbf{v}^T)_{mn} \implies \textbf{a}' = \textbf{v} \cdot \textbf{a} \cdot \textbf{v}^T$$
{ "domain": "physics.stackexchange", "id": 74341, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "coordinate-systems, tensor-calculus, covariance", "url": null }
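The index manipulation can be sanity-checked numerically; a minimal sketch (random 3x3 matrices, names mine) comparing the component formula against the matrix form:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal((3, 3))   # transformation matrix v_{mi}
a = rng.standard_normal((3, 3))   # tensor components a_{ij}

# component form: a'_{mn} = v_{mi} v_{nj} a_{ij}, summed over i and j
a_prime_index = np.einsum('mi,nj,ij->mn', v, v, a)

# matrix form: a' = v . a . v^T
a_prime_matrix = v @ a @ v.T

assert np.allclose(a_prime_index, a_prime_matrix)
```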
filter-design, infinite-impulse-response https://www.mathworks.com/help/signal/ug/iir-filter-design.html?lang=en#brbq5qb There is another technique named model order reduction (MOR), which is used to reduce the model order while preserving the model characteristics that are important for the application. In general, working with the resulting lower-order models can simplify analysis and control design relative to higher-order models. https://www.mathworks.com/help/control/ug/about-model-order-reduction.html It is still unclear to me how all these methods for designing IIR filters differ. In brief, what is the difference between them, and when can I use each of these methods?
{ "domain": "dsp.stackexchange", "id": 6554, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filter-design, infinite-impulse-response", "url": null }
c++, tree, c++17 #include "binary_tree_iterator.hpp" #include <ostream> #include <utility> #include <type_traits> #include <memory> namespace shino { template <typename ValueType> class binary_search_tree { constexpr static bool v_copiable = std::is_copy_constructible_v<ValueType>; constexpr static bool v_moveable = std::is_move_constructible_v<ValueType>; struct node { //DO NOT MOVE ELEMENTS AROUND, emplace relies on this order const ValueType value; node* left = nullptr; node* right = nullptr; }; node* root = nullptr; std::size_t element_count = 0; public: using iterator = binary_tree_iterator<node>; binary_search_tree(binary_search_tree&& other) noexcept: root(std::exchange(other.root, nullptr)), element_count(std::exchange(other.element_count, 0)) {}
{ "domain": "codereview.stackexchange", "id": 31077, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, tree, c++17", "url": null }
quantum-mechanics, electromagnetism, photons, quantum-spin, polarization So I'll talk about polarization of classical electromagnetic waves just because you've already seen it. Imagine a wave travelling in the $z$ direction with the electric field always pointing in the same direction, say $\pm x$. This is called a linearly polarized wave. Same if the wave traveled in the $z$ direction and the electric field was in the plus or minus y direction. If those two waves were in phase and had the same magnitude, then their superposition would be a wave that oscillates at the same frequency/wavelength as the previous waves, and is still linearly polarized but this time not in the $x$ or $y$ direction but instead in the direction $45$ degrees (halfway) between them. Basically if the electric field always points in plus or minus the same direction, then that's linear polarization, and it could in theory be in any direction by adjusting the relative magnitude of an $x$ polarized one and a $y$ polarized one (that are in phase with each other).
{ "domain": "physics.stackexchange", "id": 81601, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, electromagnetism, photons, quantum-spin, polarization", "url": null }
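A quick numerical sketch of the superposition argument: equal-amplitude, in-phase x- and y-polarized fields add up to a field that always points along plus or minus the 45 degree direction (variable names are mine):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 1000)   # one period of the oscillation
E0 = 1.0

Ex = E0 * np.cos(t)   # x-polarized wave
Ey = E0 * np.cos(t)   # y-polarized wave, same magnitude and phase

# At every instant the total field lies along plus or minus one fixed
# direction: 45 degrees between x and y (mod 180 folds +/- together).
angle = np.degrees(np.arctan2(Ey, Ex)) % 180.0
nonzero = np.hypot(Ex, Ey) > 1e-6         # skip instants where the field vanishes
assert np.allclose(angle[nonzero], 45.0)
```

Changing the relative amplitude of `Ex` and `Ey` tilts that fixed direction, which is the "any direction by adjusting the relative magnitude" point in the text.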
fft, power-spectral-density, qpsk $$10\log_{10}\bigg(1-\frac{R/20}{R/2}\bigg) = 10\log_{10}(0.9) \approx -0.46\ \text{dB}$$ This can be decreased by lowering the loop bandwidth further (-0.18 dB for $R/50$), but you need to then see if you are adding more noise to this from the increased phase noise that would be added from the local oscillator and other jitter sources (sampling clock) as I show here PLL for Phase Demodulation and Carrier Tracking, or if the dynamics in the system (Doppler for a moving transmitter or receiver for example) are changing faster than this can track.
{ "domain": "dsp.stackexchange", "id": 8574, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fft, power-spectral-density, qpsk", "url": null }
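Both loss figures quoted above follow from the same expression; a short sketch of the arithmetic (function name mine):

```python
import math

def tracking_loss_db(loop_bw, symbol_rate):
    """SNR loss when the loop bandwidth removes a slice of the
    one-sided signal bandwidth symbol_rate/2, per the formula above."""
    return 10.0 * math.log10(1.0 - loop_bw / (symbol_rate / 2.0))

R = 1.0                                   # symbol rate (normalized)
loss_r20 = tracking_loss_db(R / 20, R)    # loop bandwidth of R/20
loss_r50 = tracking_loss_db(R / 50, R)    # loop bandwidth of R/50

print(round(loss_r20, 2), round(loss_r50, 2))   # -0.46 -0.18
```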
python, python-3.x, object-oriented Title: Combinable filters I have an initial pool of subjects, then I need to apply a set of general criteria to retain a smaller subset (SS1) of subjects. Then I need to divide this smaller subset (SS1) into yet finer subsets (SS1-A, SS1-B and the rest). A specific set of criteria will be applied to SS1 to obtain the SS1-A, while another set of specific criteria will be applied to obtain the SS1-B, and the rest will be discarded. The set of criteria/filter will need to be flexible, I would like to add, remove, or combine filters for testing and development, as well as for further clients' requests. I created a small structure code below to help me understand and test the implementation of template method and filter methods. I use a list and some filter instead of actual subject pool, but the idea is similar that the list items can be seen as subjects with different attributes. from abc import ABC, abstractmethod
{ "domain": "codereview.stackexchange", "id": 34312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, object-oriented", "url": null }
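One possible shape for such combinable filters, sketched with integers standing in for subjects (all class and method names are mine, not from the question's code): each criterion is a small object, and `&` composes two criteria into a new one, so filter sets can be added, removed, or combined freely.

```python
from abc import ABC, abstractmethod

class Filter(ABC):
    """A single criterion; filters compose with the & operator."""

    @abstractmethod
    def accepts(self, subject) -> bool:
        ...

    def __and__(self, other):
        return _And(self, other)

    def apply(self, subjects):
        return [s for s in subjects if self.accepts(s)]

class _And(Filter):
    def __init__(self, left, right):
        self.left, self.right = left, right

    def accepts(self, subject):
        return self.left.accepts(subject) and self.right.accepts(subject)

class Positive(Filter):
    def accepts(self, subject):
        return subject > 0

class Even(Filter):
    def accepts(self, subject):
        return subject % 2 == 0

pool = [-4, -1, 0, 1, 2, 3, 8]
ss1 = Positive().apply(pool)                 # general criteria -> SS1
ss1_a = (Positive() & Even()).apply(pool)    # specific criteria on top -> SS1-A
print(ss1, ss1_a)   # [1, 2, 3, 8] [2, 8]
```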
electromagnetism, electrostatics, magnetostatics Follow up question: Thanks; sounds like all the answer is that as Coloumb's law field needs to update with the movement of the particle, the transition mixes old and new fields. All are radial, but not spherically symmetrical, providing a non-zero curl, yet no radiation? They come about in the ordinary course of the retarded update of the field per Coulomb's law...? But $\mathbf \nabla \times \mathbf E = 0$ at $(0, 1, 0)$ both before $t = 1$ and after $t = 1.1$.
{ "domain": "physics.stackexchange", "id": 99374, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, electrostatics, magnetostatics", "url": null }
electromagnetic-radiation, thermal-radiation Now there is a modification for the SB law: if you divide the usual equation by $\pi$ you get the result in "per steradian" (see the Wiki article). This gives me the following code snippet (Python - not my strongest language but it's good for this kind of thing because of the scipy.integrate functions which do a lot of the hard work; the built-in physical constants and plotting routines are nice too): # integrate Planck's Law and compare results with Stefan-Boltzmann
{ "domain": "physics.stackexchange", "id": 23063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetic-radiation, thermal-radiation", "url": null }
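The comparison the comment announces can be sketched without scipy: in dimensionless form, integrating Planck's law reduces to the integral of x^3/(e^x - 1) over (0, infinity), which equals pi^4/15, the constant that produces the T^4 Stefan-Boltzmann law. A pure-Python trapezoidal check (step sizes and cutoff are my choices):

```python
import math

def planck_integrand(x):
    # dimensionless Planck spectrum x^3 / (e^x - 1); behaves like x^2 near 0
    return x**3 / math.expm1(x)   # expm1 avoids cancellation for small x

# trapezoidal rule over [eps, cutoff]; the tail beyond x = 50 is negligible
eps, cutoff, n = 1e-9, 50.0, 200_000
h = (cutoff - eps) / n
total = 0.5 * (planck_integrand(eps) + planck_integrand(cutoff))
total += sum(planck_integrand(eps + i * h) for i in range(1, n))
integral = total * h

expected = math.pi**4 / 15.0   # the Stefan-Boltzmann constant factor
print(integral, expected)
assert abs(integral - expected) < 1e-3
```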
javascript, jquery, validation, form The way it is above, if dob was missed, ONLY the dob error displays (even if they missed others). So that's not a good UX. For example, if the user missed the dob AND sex fields, they'd click "Submit" and get an alert about dob and would choose a dob, then scroll all the way to bottom and click "Submit" again and it would send them back up to fill in Sex. Again, not a good UX. So I know return false; has got to go SOMEWHERE (and likely only once, once the code is written more efficiently), just not sure where. One thing that pops up in my mind when I see code containing repetitions of almost exact copies of structure is how can we achieve reusability? DRY principle is a very powerful thing that helps identifying and fixing code issues, until gets to the point where the other principles are abused. This can be solved with code like one below.
{ "domain": "codereview.stackexchange", "id": 25455, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery, validation, form", "url": null }
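The UX point generalizes beyond jQuery: run every check, collect all failures, and only then report, instead of returning on the first miss. A language-agnostic sketch of that accumulate-errors pattern (in Python; the field names and messages are mine):

```python
def validate(form):
    """Run every check and return ALL error messages, not just the first."""
    checks = [
        ('dob',   lambda f: bool(f.get('dob')),            'Please enter your date of birth.'),
        ('sex',   lambda f: f.get('sex') in ('M', 'F'),    'Please choose a sex.'),
        ('email', lambda f: '@' in f.get('email', ''),     'Please enter a valid email.'),
    ]
    # no early return: one submit surfaces every missing field at once
    return [message for field, ok, message in checks if not ok(form)]

errors = validate({'email': 'a@b.com'})
print(errors)   # both the dob message and the sex message, in one pass
```

With this shape, "return false" happens exactly once, after the loop, whenever the collected list is non-empty.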
thermodynamics, non-equilibrium The authors then compute the work done: $W=-\int_0^{V_f} P_f dV-\int_{V_i}^0P_idV$. In what sense can the nonequilibrium states not be described by thermodynamic coordinates ? We fix the pressure to be $P_i$ on one side of the porous plug and $P_f$ on the other, so the thermodynamic coordinates are known, and the work done can be evaluated. As the passage you cite states, the initial (i.e. pre-throttle) and final (i.e. post-throttle) states are equilibrium states. Therefore, you have no difficulty in describing them in equilibrium thermodynamics language, for example by the pressures $P_{i}$ and $P_f$. They are true states. The difference between nonequilibrium and equilibrium isn't necessarily that state variables cannot be used (for example, you could talk about a variable like pressure in a local sense, $P(x)$ with $x$ along the throttle). It's rather that the name state variable is a misnomer, because they do not describe a thermodynamic state.
{ "domain": "physics.stackexchange", "id": 75262, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, non-equilibrium", "url": null }
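Since $P_i$ and $P_f$ are held constant on their respective sides of the plug, the two integrals evaluate immediately to $W = P_i V_i - P_f V_f$; a quick numeric sketch (the pressure and volume values are mine, purely illustrative):

```python
# Work done on the gas in a throttling process:
# W = -(integral of P_f dV from 0 to V_f) - (integral of P_i dV from V_i to 0).
# With both pressures constant, each integral is just P times the volume change.
P_i, V_i = 2.0e5, 1.0e-3   # upstream pressure (Pa) and volume (m^3)
P_f, V_f = 1.0e5, 1.5e-3   # downstream pressure and volume

W = -(P_f * (V_f - 0.0)) - (P_i * (0.0 - V_i))
print(W)   # equals P_i*V_i - P_f*V_f, the enthalpy-conservation form
assert abs(W - (P_i * V_i - P_f * V_f)) < 1e-9
```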
(a) Determine whether the sequence defined as follows is convergent or divergent: $a_1 = 1$, $a_{n + 1} = 4 - a_n$ for $n \ge 1$. (b) What happens if the first term is $a_1 = 2$? If \$1000 is invested at $6\%$ interest, compounded annually, then after $n$ years the investment is worth $a_n = 1000(1.06)^n$ dollars. (a) Find the first five terms of the sequence $\{a_n\}$. (b) Is the sequence convergent or divergent? Explain. ### Problem 66 If you deposit \$100 at the end of every month into an account that pays $3\%$ interest per year compounded monthly, the amount of interest accumulated after $n$ months is given by the sequence $I_n = 100 \left( \frac {1.0025^n - 1}{0.0025} - n\right)$ (a) Find the first six terms of the sequence. (b) How much interest will you have earned after two years? ### Problem 67
{ "domain": "numerade.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9901401461512077, "lm_q1q2_score": 0.809513317949498, "lm_q2_score": 0.8175744828610095, "openwebmath_perplexity": 455.27363600530987, "openwebmath_score": 0.9319241046905518, "tags": null, "url": "https://www.numerade.com/books/chapter/infinite-sequences-and-series/" }
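Iterating the recurrence makes parts (a) and (b) transparent, and the interest sequence of Problem 66 can be generated the same way (a short sketch; helper name mine):

```python
def iterate(a1, n_terms):
    """Generate a_1, a_2, ... from the recurrence a_{n+1} = 4 - a_n."""
    terms = [a1]
    for _ in range(n_terms - 1):
        terms.append(4 - terms[-1])
    return terms

# (a) a_1 = 1: the terms oscillate between 1 and 3, so the sequence diverges
print(iterate(1, 6))   # [1, 3, 1, 3, 1, 3]

# (b) a_1 = 2: 2 is the fixed point of x = 4 - x, so the sequence is
# constant and converges to 2
print(iterate(2, 6))   # [2, 2, 2, 2, 2, 2]

# Problem 66: accumulated interest I_n = 100*((1.0025**n - 1)/0.0025 - n)
interest = [100 * ((1.0025**n - 1) / 0.0025 - n) for n in range(1, 7)]
```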
PlotDistributions(Uniform=(dist1, 'Using sqrt(rand())'), NonUniform=(dist2, 'Using rand()')) if __name__ == '__main__': main() Additional references: • Originally saw the question here • I got the original idea here and wanted to put more details on how the pdf and cdf’s are calculated.
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9830850872288502, "lm_q1q2_score": 0.8394688932213539, "lm_q2_score": 0.8539127529517043, "openwebmath_perplexity": 790.9059456024086, "openwebmath_score": 0.7553662657737732, "tags": null, "url": "https://meyavuz.wordpress.com/2018/11/15/generate-uniform-random-points-within-a-circle/" }
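For completeness, a self-contained sketch of the sqrt trick itself: with u uniform on [0, 1), taking r = R*sqrt(u) makes the points uniform over the disk's area. One easy check is the mean radius, which for uniform points in a disk of radius R is 2R/3 (the seed and sample size below are my choices):

```python
import math
import random

random.seed(42)   # deterministic, so the check below is reproducible
R = 1.0
N = 200_000

points = []
for _ in range(N):
    r = R * math.sqrt(random.random())      # sqrt(rand()) -> uniform in area
    theta = 2.0 * math.pi * random.random() # uniform angle
    points.append((r * math.cos(theta), r * math.sin(theta)))

mean_radius = sum(math.hypot(x, y) for x, y in points) / N
print(mean_radius)   # close to 2/3 for the unit disk
assert abs(mean_radius - 2.0 / 3.0) < 0.01
```

Using `r = R * random.random()` instead would pile points up near the center, which is exactly the non-uniform case the plot in the post contrasts.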
quantum-mechanics This similarly leads me to wonder how $\langle\mathbf{p}|\hat{\mathbf{r}}|\psi\rangle = i\hbar\nabla_{\mathbf{p}}\bar{\psi}(\mathbf{p})$ for the same reason, since the left seems to be a scalar and the right (due to $\nabla_{\mathbf{p}}$) is a vector. Am I missing something simple? There are two different notions of vector at play here. The kets are vectors in the Hilbert space, and bra-kets are inner products in this vector space. The operator $\hat{\mathbf{r}}$ is a vector in a three-dimensional euclidean space. The matrix elements of such a vector operator are scalars with respect to the Hilbert space, but vectors in the euclidean space.
{ "domain": "physics.stackexchange", "id": 55950, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics", "url": null }
- Though your conclusion is right, your example is not.. $f(u)=1/(b-a)$ but not $1$ when you plug in. – MoonKnight Dec 24 '13 at 0:14 @MoonKnight Thanks for spotting the mistake -the habit of working all the time with $U(0,1)$... Since your answer showed that the inequality does not hold in general, I changed mine to show that it does hold for the normal distribution. – Alecos Papadopoulos Dec 24 '13 at 2:29 Thanks @AlecosPapadopoulos. That really helped. – ubaabd Dec 24 '13 at 5:52 Nice to hear. One can also show that the inequality will hold if the distribution is a uniform $U(a,b)$, $a<0<b,\; b>-a$. – Alecos Papadopoulos Dec 24 '13 at 14:42
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9728307692520259, "lm_q1q2_score": 0.8148624772137432, "lm_q2_score": 0.8376199673867852, "openwebmath_perplexity": 299.8779217628463, "openwebmath_score": 0.9975676536560059, "tags": null, "url": "http://math.stackexchange.com/questions/616833/proving-an-inequality-involving-definite-integrals" }
navigation, ekf, odometry, pose, robot-localization Can you help me with this? I am new in ROS, so maybe I didn't understand some basic concepts. You will find the launch file and the YAML file below, as well as some examples of the topics published. As I said, I am new in ROS, so I can't upload files yet. I'll copy-paste them. I can't upload an image of the output of the 'rosrun rqt_tf_tree rqt_tf_tree' command, but I can tell you that odom is linked to base_link, and the children of base_link are odometry and pozyx_frame. This is the YAML file: frequency: 30 sensor_timeout: 0.1 two_d_mode: true transform_time_offset: 0.0
{ "domain": "robotics.stackexchange", "id": 27478, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, ekf, odometry, pose, robot-localization", "url": null }
A very nice simple proof but, more importantly, a technique that can be and has been used a great many times. For any $x\in [0,1]$ there is an open set $U_x \subset [0,1]$ such that $f$ restricted to $U_x$ is minimal or maximal at $x$. Thus the image of $f$ consists of all the existing minimal and maximal values of $f$ restricted to the $U_x$. Now since $[0,1]$ has a countable base we can choose the $U_x$ in a way that only countably many of them are distinct. This shows that the image of $f$ is countable, but since it is also connected it has to be a point. I found this proof here.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9865717424942961, "lm_q1q2_score": 0.8284578076750131, "lm_q2_score": 0.8397339716830606, "openwebmath_perplexity": 139.99760375600476, "openwebmath_score": 0.882183849811554, "tags": null, "url": "https://math.stackexchange.com/questions/1544495/continuous-functions-that-attain-local-extrema-at-every-point" }
localization, navigation Does anyone have a suggestion for this? Or does an implemented answer for this already exist in ROS? Thanks
{ "domain": "robotics.stackexchange", "id": 10226, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "localization, navigation", "url": null }
Choose two disjoint closed intervals $I_0$ and $I_1$ that are subsets of the interior of $I_{-1}$ such that the lengths of these two intervals are less than $2^{-t}$ and such that $U_0^*=\text{int}(I_0)$ and $U_1^*=\text{int}(I_1)$ satisfy further properties, which are that $U_0=U_0^* \cap X \subset V_{-1}$ and $U_1=U_1^* \cap X \subset V_{-1}$ are open in $X$. Let $U_0$ and $U_1$ be two possible moves by player $\beta$ at the next stage. Then the two possible responses by $\alpha$ are $V_0=\gamma(U_{-1},U_0)$ and $V_1=\gamma(U_{-1},U_1)$. Let $C_1=I_0 \cup I_1$. At the $n^{th}$ step, suppose that for each $f \in 2^n$, disjoint closed intervals $I_f=I_{f(0),\cdots,f(n-1)}$ have been chosen. Then for each $f \in 2^n$, we choose two disjoint closed intervals $I_{f,0}$ and $I_{f,1}$, both subsets of the interior of $I_f$, such that the lengths are less than $2^{-(n+1) t}$, and:
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.982013790564298, "lm_q1q2_score": 0.8204563532136758, "lm_q2_score": 0.8354835350552604, "openwebmath_perplexity": 579.0447059325933, "openwebmath_score": 0.9998471736907959, "tags": null, "url": "https://dantopology.wordpress.com/tag/baire-space/" }
phase-space, wigner-transform $$\tilde{A} = \frac{1}{(2\pi \hbar)^2} \int \exp{\big[\frac{i(-py + p'(x+y/2) - p''(x-y/2))}{\hbar} \big]} \langle p' \vert \hat{A} \vert p'' \rangle dy dp' dp'' $$ Gathering the y terms and integrating over y gives: $$\tilde{A} = \frac{1}{2\pi \hbar} \int \exp{\big[\frac{ix(p' - p'')}{\hbar} \big]} \delta(p - \frac{p'}{2} - \frac{p''}{2}) \langle p' \vert \hat{A} \vert p'' \rangle dp' dp'' \rightarrow $$ $$\tilde{A} = \frac{1}{\pi \hbar} \int \exp{\big[\frac{2ix(p' - p)}{\hbar} \big]} \langle p' \vert \hat{A} \vert 2p - p' \rangle dp' $$ Using $p' = p + \frac{u}{2}$ ($u$ being the variable here), we can write this as: $$\tilde{A} = \frac{1}{h} \int \exp{\big[\frac{ixu}{\hbar} \big]} \langle p + \frac{u}{2} \vert \hat{A} \vert p - \frac{u}{2} \rangle du $$
{ "domain": "physics.stackexchange", "id": 78677, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "phase-space, wigner-transform", "url": null }
java, algorithm, search, graph, pathfinding public double getH() { return h; } public double getF() { return f; } } /** * The graph represents an undirected graph. * * @author SERVICE-NOW\ameya.patil * * @param <T> */ final class GraphAStar<T> implements Iterable<T> { /* * A map from the nodeId to outgoing edge. * An outgoing edge is represented as a tuple of NodeData and the edge length */ private final Map<T, Map<NodeData<T>, Double>> graph; /* * A map of heuristic from a node to each other node in the graph. */ private final Map<T, Map<T, Double>> heuristicMap; /* * A map between nodeId and nodedata. */ private final Map<T, NodeData<T>> nodeIdNodeData;
{ "domain": "codereview.stackexchange", "id": 20833, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, algorithm, search, graph, pathfinding", "url": null }
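The same structure can be shown in compact form: a hypothetical minimal A* over an adjacency map of the kind the `graph` field holds (a Python sketch of the algorithm, not a rewrite of the Java class above; all names are mine):

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """graph: node -> {neighbor: edge_length}; heuristic: node -> estimated
    remaining cost to goal. Returns (cost, path), or (None, None) if the
    goal is unreachable."""
    open_heap = [(heuristic(start), 0.0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if g > best_g.get(node, float('inf')):
            continue   # stale heap entry for a node already improved
        for neighbor, length in graph.get(node, {}).items():
            new_g = g + length
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(open_heap,
                               (new_g + heuristic(neighbor), new_g,
                                neighbor, path + [neighbor]))
    return None, None

graph = {
    'A': {'B': 1.0, 'C': 4.0},
    'B': {'C': 2.0, 'D': 5.0},
    'C': {'D': 1.0},
}
cost, path = a_star(graph, lambda n: 0.0, 'A', 'D')  # zero heuristic = Dijkstra
print(cost, path)   # 4.0 ['A', 'B', 'C', 'D']
```

With an admissible (never overestimating) heuristic in place of the zero one, the search expands fewer nodes but returns the same optimal path.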
measurements, error-analysis Title: Error analysis and how values in references are determined Question 1:Most science textbooks have appendixes that have a value for some physical property of some object. This includes diameter of electrons, viscosity of fluids, boiling points, etc. My question is, are the values presented in such appendixes (or other data bases) averages?
{ "domain": "physics.stackexchange", "id": 12366, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "measurements, error-analysis", "url": null }
botany, species-identification, flowers Title: What is the name of this plant, from Bangladesh? This is a small plant from my backyard. I have found that the flower closes in the evening. This plant belongs to Malvaceae, and the yellow flowers open towards morning to mid-day. This plant is most likely Sida rhombifolia, or maybe Malvastrum tricuspidatum. From the broad leaves it seems not to be Sida acuta. Check the epicalyx (made up of bracteoles)... if there is no epicalyx, then it is Sida rhombifolia. If the epicalyx has 3 bracteoles, it is Malvastrum tricuspidatum. See also https://en.wikipedia.org/wiki/Sida_rhombifolia, https://books.google.co.in/books?id=Roi0lwSXFnUC&lpg=PA258&ots=N7DNvrN4UL&dq=malvastrum%20tricuspidatum%203%20epicalyx&pg=PA258#v=onepage&q=malvastrum%20tricuspidatum%203%20epicalyx&f=false Epicalyx of Malvastrum tricuspidatum: C = Corolla (all petals), K = Calyx (all sepals), E = Epicalyx (the 3 bracteoles)
{ "domain": "biology.stackexchange", "id": 5895, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "botany, species-identification, flowers", "url": null }
Bounds Substituting $$x\mapsto x/\pi$$ in $$(21)$$ from this answer, we get $$\cot(x)=\sum_{k\in\mathbb{Z}}\frac1{k\pi+x}\tag8$$ Subtracting $$(8)$$ from $$\frac1x$$ and taking the derivative, we get $$\frac1{\sin^2(x)}-\frac1{x^2}=\sum_{\substack{k\in\mathbb{Z}\\k\ne0}}\frac1{(k\pi+x)^2}\tag9$$ Evaluating $$(9)$$ at $$x=0$$ gives $$\frac2{\pi^2}\zeta(2)=\frac13$$, which agrees with $$(6)$$. Taking two derivatives of $$(9)$$ gives \begin{align} \frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(\frac1{\sin^2(x)}-\frac1{x^2}\right) &=\sum_{\substack{k\in\mathbb{Z}\\k\ne0}}\frac6{(k\pi+x)^4}\\ &\ge0\tag{10} \end{align} Thus, $$\frac1{\sin^2(x)}-\frac1{x^2}$$ is an even convex function with a minimum of $$\frac13$$ at $$x=0$$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9683812345563902, "lm_q1q2_score": 0.8172121755449993, "lm_q2_score": 0.8438950966654774, "openwebmath_perplexity": 153.501210644107, "openwebmath_score": 0.9736623764038086, "tags": null, "url": "https://math.stackexchange.com/questions/4129806/limit-of-infinite-composition-of-sinx/4129836" }
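The bound is easy to sanity-check numerically: sampling $\frac1{\sin^2(x)}-\frac1{x^2}$ on a grid inside $(0,\pi)$ (away from the poles at multiples of $\pi$) should never dip below $\frac13$, and should approach $\frac13$ near $0$. A short sketch:

```python
import math

def g(x):
    """The even convex function 1/sin^2(x) - 1/x^2 from the text."""
    return 1.0 / math.sin(x)**2 - 1.0 / x**2

# sample (0, pi) away from the endpoints; the minimum is attained near 0
xs = [i * 1e-3 for i in range(1, 3000)]
values = [g(x) for x in xs]

print(min(values))   # approaches 1/3 as x -> 0
assert min(values) >= 1.0 / 3.0 - 1e-6
assert abs(g(1e-3) - 1.0 / 3.0) < 1e-5
```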
nlp Title: Can you detect source language of a translation? Sometimes you read text and you have a strong feeling that it was translated from a certain language. For example, you read Russian text, see «взять автобус» («take bus» instead of Russian «сесть в автобус» (literally «sit on bus»)), and it becomes obvious that the text was originally written in English and then translated by low-qualified translator. Provided you have a long text, can you automatically detect if it is translation or is it originally written in this language, and can you detect the source language? Are there any ready solutions? In the machine translation research community, the translated text that exhibits some traits from the original language is called "translationese". There are multiple lines of research that try to spot translationese (i.e. tell apart text that has been translated, either by human or machine, from text written directly). Here you can see academic articles related to the matter.
{ "domain": "datascience.stackexchange", "id": 10989, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nlp", "url": null }
programming, measurement no measurement has been applied yet. So the computation does not get started. The state you print is exactly the input state you set. Hope everything makes sense now. Let me know if you need more help. We do hope to make Paddle Quantum better together with the community!
{ "domain": "quantumcomputing.stackexchange", "id": 3950, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "programming, measurement", "url": null }
python, keras, tensorflow, regression The fact that loss is not decreasing while being thousands high is alarming. Test model #Generate test data: generate_data(30,prefix='pngs/test_') #Read in pngs: test_pngs = glob.glob('pngs/test_*png') test_ims = {} for png in test_pngs: test_ims[png]=np.array(PIL.Image.open(png)) #Prepare test questions and solutions as before: test_questions = np.array([each for each in test_ims.values()]).astype(np.float32) test_solutions = np.array([float(each.split('_')[-1].split('.')[0])/100 for each in test_ims]).astype(np.float32) #Apply model: test_answers = model.predict(test_questions) test_answers is: print(test_answers) #array([[90.65718 ], # [90.65722 ], # [90.65722 ], # [90.65722 ], # . # . # . # [90.65722 ], # [90.657196], # [90.65721 ], # [90.65723 ]], dtype=float32)
{ "domain": "datascience.stackexchange", "id": 10402, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, keras, tensorflow, regression", "url": null }
ruby, ruby-on-rails, rspec end Provided you sort the active record results (as you have done), comparing to an array works fine. I'm not sure what you mean by 'flexible', but the test you've written looks pretty good (without knowing the internals of your 'sort' function). I'd only suggest adding some more tests to cover reverse sorting, sorting by author.nickname (if that's part of the default scope, or pulled in via delegation), and passing invalid values to the sort function. Being a little pedantic, you don't need to repeat the word "it" in the spec declaration. The word "it" will be prepended to the message in the case of failures. describe "self.sort" do before(:each) do @tee = FactoryGirl.create :author, nickname: "tee jia hen", user: FactoryGirl.create(:user) @jon = FactoryGirl.create :author, nickname: "jon", user: FactoryGirl.create(:user) @tee_article1 = FactoryGirl.create :article, author: @tee, title: "3diablo"
{ "domain": "codereview.stackexchange", "id": 2258, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ruby, ruby-on-rails, rspec", "url": null }
newtonian-mechanics, reference-frames, inertial-frames, earth, approximations Title: Why is the Earth not an inertial frame of reference? From many sources I have found the explanation that the Earth is not an inertial frame of reference because it rotates around its axis. However, nobody mentions the rotation about the Sun. What I thought was, since the Earth rotates around the Sun, there is a centripetal force acting on the Earth, hence an object that is considered to have zero acceleration is actually being accelerated by the Sun. People look at this from the perspective of the rotating frame. And actually, I don't even get why rotation about its axis can be a reason for non inertial frame of reference. Let's look at the definition:
{ "domain": "physics.stackexchange", "id": 63407, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, reference-frames, inertial-frames, earth, approximations", "url": null }
• The TangentVector(C, t) command computes the tangent vector to the curve C that is parameterized by t. Note that this vector is not normalized by default, so it is a scalar multiple of the unit tangent vector to the curve C. Therefore, by default, if C is a curve in ${ℝ}^{3}$, the result is generally different from the output of TNBFrame(C, t, output=['T']). • If n is given as either normalized=true or normalized, then the resulting vector will be normalized before it is returned. As discussed above, the default value is false, so that the result is not normalized. • The curve can be specified as a free or position Vector or a Vector valued procedure. This determines the returned object type. • If t is not specified, the function tries to determine a suitable variable name by using the components of C.  To do this, it checks all of the indeterminates of type name in the components of C and removes the ones which are determined to be constants.
{ "domain": "maplesoft.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9843363522073338, "lm_q1q2_score": 0.8498677444279865, "lm_q2_score": 0.8633916064586998, "openwebmath_perplexity": 652.1686491450349, "openwebmath_score": 0.9521265029907227, "tags": null, "url": "https://de.maplesoft.com/support/help/maple/view.aspx?path=VectorCalculus%2FTangentVector" }
and Sorting Algorithms in Data Structure, you can find MCQs on the binary search algorithm, linear search algorithm, sorting algorithms, the complexity of linear search, merge sort, bubble sort, and partition and exchange sort. Finding the median in a list seems like a trivial problem, but doing so in linear time turns out to be tricky. Linear search has linear-time complexity; binary search has log-time complexity. In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the size of its input. For a recursive algorithm, the time complexity is the sum of time spent in all calls plus some extra preprocessing time.
{ "domain": "marathon42k.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9814534327754854, "lm_q1q2_score": 0.8612026083995186, "lm_q2_score": 0.8774767906859264, "openwebmath_perplexity": 614.0522630942693, "openwebmath_score": 0.5858543515205383, "tags": null, "url": "http://marathon42k.it/zcy/time-complexity-of-linear-search.html" }
in that code, then used the pol2cart function to create Cartesian representations for them, and plotted them in Cartesian space. For instance: consider the polar curves $r_1(\theta) = 1$ and $r_2(\theta) = 2$ (two circles, of course). This reference Area with Polar Coordinates does a very similar exercise. Section 9. We can also use Area of a Region Bounded by a Polar Curve to find the area between two polar curves. The area of a region in polar coordinates defined by the equation $r=f(\theta)$ with $\alpha \le \theta \le \beta$ is given by the integral $$A=\dfrac{1}{2}\int_\alpha^\beta [f(\theta)]^2\,d\theta.$$ This results in the following dialogue box for a polar curve. Area of the right side: $\frac{1}{2}\int_0^{\pi/2} \big[2^2 - (2(1-\sin x))^2\big]\,dx$. It is then somewhat natural to calculate the area of regions defined by polar functions by first approximating with sectors of circles. (b) The curve resembles an arch of the parabola 8 16yx 2. Rotation around the y-axis. Castillo, and K. To
{ "domain": "radiofakenews.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9861513905984457, "lm_q1q2_score": 0.8130442815718196, "lm_q2_score": 0.8244619328462579, "openwebmath_perplexity": 490.98177478940477, "openwebmath_score": 0.8563680648803711, "tags": null, "url": "http://cnew.radiofakenews.it/area-between-polar-curves-calculator.html" }
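The "area of the right side" integral quoted above, $\frac{1}{2}\int_0^{\pi/2}\left[2^2-(2(1-\sin\theta))^2\right]d\theta$, can be checked numerically; by hand it evaluates to $4-\pi/2$. A quick sketch using composite Simpson's rule (pure standard library; the helper name is mine):

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# Integrand: half the difference of squares of r = 2 and r = 2(1 - sin t).
integrand = lambda t: 0.5 * (2**2 - (2 * (1 - math.sin(t)))**2)

area = simpson(integrand, 0, math.pi / 2)
print(area)             # ≈ 2.4292
print(4 - math.pi / 2)  # exact value, ≈ 2.4292
```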
There are important connections between non-trivial set theory and topology. For example, Tychonoff's Theorem (a purely topological statement) is equivalent to the Axiom of Choice. So... don't skip the chapter. If nothing else, do you really think that the authors were just trying to pad the book by adding a useless/unnecessary chapter at the beginning? - There are some important examples in general topology that require at least a basic knowledge of well-orderings, the first two infinite cardinals, $\omega$ and $\omega_1$, and the cardinal $2^\omega=\mathfrak{c}$. A basic understanding of ordinals makes these ideas easier to talk about and use. You could certainly get started in topology without these things, but I'd strongly recommend not skipping them, though you might work on them concurrently with the first parts of the topology proper.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9626731115849662, "lm_q1q2_score": 0.8163157586319821, "lm_q2_score": 0.8479677564567913, "openwebmath_perplexity": 687.3476849995833, "openwebmath_score": 0.7610117197036743, "tags": null, "url": "http://math.stackexchange.com/questions/131160/is-set-theory-important-for-topology" }
\begin{align} P(A \mid B^c) &= 1- P(A^c \mid B^c)\\ &= 1-\frac{P(A^c \cap B^c)}{P(B^c)}\\ &= 1-\frac{P((A \cup B)^c)}{P(B^c)}\\ &= 1-\frac{1-P(A\cup B)}{P(B^c)} \end{align} Now, in regard to your reasoning, the law of total probability tells us that $$P(B) = P(B \cap A^c) + P(B \cap A).$$ From this we get $$P(B \cap A^c) = P(B)-P(B \cap A),$$ which shows that indeed $P(A^c \cap B) \neq P(B)$ in general.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.980280871316566, "lm_q1q2_score": 0.8292611857210855, "lm_q2_score": 0.8459424334245618, "openwebmath_perplexity": 604.5177978700682, "openwebmath_score": 0.999550461769104, "tags": null, "url": "https://math.stackexchange.com/questions/1556932/calculate-conditional-probability" }
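The identity derived above, $P(A\mid B^c)=1-\frac{1-P(A\cup B)}{P(B^c)}$, can be sanity-checked by exact enumeration on a small sample space (the die example below is mine, not from the post):

```python
from fractions import Fraction

omega = set(range(1, 7))   # fair six-sided die
A = {2, 4, 6}              # "even"
B = {5, 6}                 # "at least 5"

def P(event):
    # Exact probability under the uniform distribution on omega.
    return Fraction(len(event), len(omega))

Bc = omega - B
lhs = P(A & Bc) / P(Bc)               # direct P(A | B^c)
rhs = 1 - (1 - P(A | B)) / P(Bc)      # formula from the derivation
print(lhs, rhs)   # 1/2 1/2
```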
php, object-oriented, api

if ( isset( $records ) && !empty( $records ) ) {
    foreach ( $records as $record ) {
        foreach ( $arrFields as $fieldName => $value ) {
            $record->setField( $this->fm_escape_string( $fieldName ), $this->fm_escape_string( $value ) );
        }
    }

    $commit = $record->commit();
    if ( $this->isError( $commit ) === 0 ) {
        $blOut = true;
    } else {
        return $this->isError( $commit );
    }
}

// Housekeeping
unset( $record, $commit, $fieldName, $value );

return $blOut;
}
{ "domain": "codereview.stackexchange", "id": 1418, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, object-oriented, api", "url": null }
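One thing worth noticing in the PHP above: `$record->commit()` sits outside the `foreach` over `$records`, so only the last record is ever committed. A hedged sketch of the per-record pattern, transliterated to Python (`Record`, `set_field`, `commit`, and `is_error` are illustrative stand-ins, not a real API):

```python
class Record:
    """Minimal stand-in for a database record (illustrative only)."""
    def __init__(self):
        self.fields = {}
        self.committed = False
    def set_field(self, name, value):
        self.fields[name] = value
    def commit(self):
        self.committed = True
        return 0                 # 0 = success in this sketch
    def is_error(self, result):
        return result != 0

def update_records(records, fields):
    # Commit each record inside the loop -- unlike the PHP above,
    # where commit() runs only once, after the loop has finished.
    ok = False
    for record in records:
        for name, value in fields.items():
            record.set_field(name, value)
        result = record.commit()
        if record.is_error(result):
            return result        # bail out on the first failure
        ok = True
    return ok

recs = [Record(), Record()]
update_records(recs, {"title": "x"})
print(all(r.committed for r in recs))   # True
```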
- What does xT refer to in this case? – Anderson Green Sep 6 '12 at 2:36 It is the notation for the image of $x$ under the linear map $T$. Usually we write $T(x)$ or $Tx$. – Sigur Sep 6 '12 at 2:38 Always check and make sure you have the right convention for the occasion. Usually $m \times n$ means rows $\times$ columns. I like to remember this as being in REVERSE alphabetical order - Rows by Columns, or R first then C. However, in Boyce & DiPrima's book "Elementary Differential Equations and Boundary Value Problems" an $m \times n$ matrix has $m$ vertical columns and $n$ horizontal rows. However, when addressing elements within a matrix, it's the opposite: the element $a_{i,j}$ references the element in the $i$th row and $j$th column. Lesson? Always check to make sure you have the correct convention! -
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9532750400464604, "lm_q1q2_score": 0.8064158145047498, "lm_q2_score": 0.8459424411924673, "openwebmath_perplexity": 1028.3437828471365, "openwebmath_score": 0.8054248094558716, "tags": null, "url": "http://math.stackexchange.com/questions/191711/how-many-rows-and-columns-are-in-an-m-x-n-matrix" }
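The usual convention discussed above ($m \times n$ = rows $\times$ columns, with element $a_{i,j}$ in row $i$, column $j$) is easy to check with a matrix stored as a nested list:

```python
# A 2 x 3 matrix: 2 rows, 3 columns, stored as a list of rows.
A = [[1, 2, 3],
     [4, 5, 6]]

rows, cols = len(A), len(A[0])
print(rows, cols)   # 2 3

# a_{i,j}: row index first, then column index.
print(A[1][2])      # element in row 1 (second row), column 2 (third column): 6
```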
snakemake

rule blah:
    output: "{sample}.{reference}.txt"
    shell: "touch {output}"

I'm using your cluster.json, so with that let's use a contrived cluster submission program:

snakemake --cluster-config cluster.json --cluster "echo 'rule.wildcards: {rule}.{wildcards} cluster: {cluster} -o {cluster.output} -e {cluster.error} -n {cluster.name}' >> log && " --jobs 1
{ "domain": "bioinformatics.stackexchange", "id": 1318, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "snakemake", "url": null }
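For reference, a `cluster.json` carrying the keys the `--cluster` string above dereferences (`cluster.output`, `cluster.error`, `cluster.name`) might look like the following sketch; the exact paths and values are placeholders of my own, not from the answer:

```json
{
    "__default__": {
        "output": "logs/{rule}.{wildcards}.out",
        "error": "logs/{rule}.{wildcards}.err",
        "name": "{rule}.{wildcards}"
    }
}
```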
javascript, beginner, html, event-handling, ecmascript-6

        for (; i < inputs_length; i++) {
            inputs[i].classList.remove('visible');
            inputs[i].disabled = true;
        }
        document.querySelector("[data-value='" + e.target.value + "']").className += ' visible';
        document.querySelector("[data-value='" + e.target.value + "']").disabled = false;
    }
});
}

var JS_radios = new JavaScript_form();
JS_radios.radio_hide_element();
</script>

Let's start with a dry review:

Underscores in JavaScript hurt the eyes. The de facto naming convention of JavaScript is camelCase. Your underscore notation, while consistent, hurts the eyes of other JavaScript developers.

Using event delegation is great, but since you queried the DOM once when you instantiated the object, you can only use the elements which existed in the DOM at that moment. Boo.

This line this.radio_set = document.querySelector('radio-set'),
{ "domain": "codereview.stackexchange", "id": 9327, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, beginner, html, event-handling, ecmascript-6", "url": null }