c++, reinventing-the-wheel, collections, library

// can we remove elements?
std::cout << "current size of resizing array is: " << ra.size() << " and last element: " << ra[ra.size() - 1] << std::endl;
ra.pop_back();
std::cout << "size of resizing array after pop_back() is: " << ra.size() << " and last element: " << ra[ra.size() - 1] << std::endl;

// copy construction works?
resizing_array<int> ra2(ra);
std::cout << "current size of resizing array 2 is: " << ra2.size() << " and last element: " << ra2[ra2.size() - 1] << std::endl;

// create a new resizing_array from the begin/end iterator constructor
const int raw_array[]{ 1, 2, 3, 4, 5 };
size_t size = sizeof(raw_array) / sizeof(raw_array[0]);
resizing_array<int> ra3(raw_array, raw_array + size);
assert(ra3.size() == 5);
assert(ra3[0] == raw_array[0]);
assert(ra3[1] == raw_array[1]);
assert(ra3[2] == raw_array[2]);
assert(ra3[3] == raw_array[3]);
assert(ra3[4] == raw_array[4]);

// create a new resizing_array from an initializer_list
resizing_array<int> ra4{ 1, 2, 3, 4, 5 };
assert(ra4[0] == 1);
assert(ra4[1] == 2);
assert(ra4[2] == 3);
assert(ra4[3] == 4);
assert(ra4[4] == 5);
{ "domain": "codereview.stackexchange", "id": 42966, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, reinventing-the-wheel, collections, library", "url": null }
ros, rosdep, catkin, dependencies, rospack

Test 7: rosbuild package with <rosdep name="urdf" />

# rosdep check: misses dependency
$ rosdep check example_package
All system dependencies have been satisified

# rospack depends: misses dependency
$ rospack depends example_package
(nothing)

Test 8: rosbuild package with <depend package="urdf" />

# rosdep check: misses dependency and generates error
$ rosdep check example_package
All system dependencies have been satisified
ERROR[example_package]: resource not found [cmake_modules]

# rospack depends: misses dependency
$ rospack depends example_package | grep cmake_modules
(nothing)

Test 9: apt-cache

# ros-hydro-urdf: misses dependency
$ apt-cache depends ros-hydro-urdf | grep cmake-modules
(nothing)

Question 9a: Why doesn't ros-hydro-urdf depend on ros-hydro-cmake-modules?
Question 9b: Do debian dependencies only use <run_depend> entries for catkin packages?
Question 9c: What is the relationship between debian dependencies and the <depend> and <rosdep> entries in rosbuild package manifest.xml files?

Originally posted by leblanc_kevin on ROS Answers with karma: 357 on 2013-09-28
Post score: 14

I want to clarify the terms first:
build dependencies (http://ros.org/reps/rep-0127.html#build-depend-multiple) are things a package needs in order to be built
run dependencies (http://ros.org/reps/rep-0127.html#run-depend-multiple) are on the one hand things a package needs to be executed, and on the other hand transitive dependencies it provides to other packages when they build against this package
{ "domain": "robotics.stackexchange", "id": 15696, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, rosdep, catkin, dependencies, rospack", "url": null }
geophysics, seismology, seismic

Title: Love-Wave Propagation

Love waves cannot exist in a half-space. Layering must be present, and there must also be impedance contrasts associated with the layering. Because layering naturally induces seismic dispersion (in certain frequency ranges) and layering must be present for Love-wave generation, does that - as a corollary - indicate that Love waves are naturally and always dispersive? Note that the motivation for this question primarily comes from the idea that Rayleigh waves - which can exist in a half-space - are not dispersive when layering is not present.

Your assertion is correct! See e.g. Stein & Wysession, An Introduction to Seismology, Earthquakes, and Earth Structure (2003), p. 90, section 2.7.3: Love waves in a layer over a halfspace (https://books.google.ch/books?id=-z80yrwFsqoC&lpg=PP1&hl=de&pg=PA90). In particular, for any particular frequency/period, Love waves can only have certain horizontal apparent velocities/wavenumbers. Hence, different frequencies have different apparent velocities, which is what we refer to as dispersion. See for example also figure 2.8-2 from the book, https://books.google.ch/books?id=-z80yrwFsqoC&lpg=PP1&hl=de&pg=PA96, which shows the dispersion curve for a layer over a halfspace.
{ "domain": "earthscience.stackexchange", "id": 1699, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "geophysics, seismology, seismic", "url": null }
molecular-structure, covalent-compounds

Title: How to know it when I see a covalent network?

This is a well-known (better said: well-discussed) question on the internet. When you look for answers to popular questions, you usually see them with a variable degree of reliability and complexity. Unfortunately, for this one, I have only observed very crude and general rules of thumb. So let's get a real answer:

A network solid or covalent network solid is a chemical compound (or element) in which the atoms are bonded by covalent bonds in a continuous network extending throughout the material. In a network solid there are no individual molecules, and the entire crystal may be considered a macromolecule. Formulas for network solids, like those for ionic compounds, are simple ratios of the component atoms represented by a formula unit.

Covalent network, Wikipedia

Diamond and SiO$_2$ are really great examples of covalent network lattices. So, enough with stories: if you face a new chemical formula, how would you decide whether it's a covalent network (in case it is)? Is it somehow done by drawing the Lewis structure? Is there a rule for this? Or is it only possible to know such a thing with experimental data?
{ "domain": "chemistry.stackexchange", "id": 2768, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "molecular-structure, covalent-compounds", "url": null }
gazebo-camera, docker, gazebo-9

Title: Can you render a camera in headless mode? (version 9.18.0)

I'm trying to run a simulation in a Docker container in which the output of a camera (libgazebo_ros_camera) can be accessed externally. Everything works fine when my display is mounted to the container. I receive this if I do not mount my display:

[Err] [RenderEngine.cc:742] Can't open display:
[Wrn] [RenderEngine.cc:88] Unable to create X window. Rendering will be disabled
[Wrn] [RenderEngine.cc:291] Cannot initialize render engine since render path type is NONE. Ignore this warning if rendering has been turned off on purpose.

This page is pretty clear that cameras won't render in headless mode. So, this may be a dumb question, but I vaguely remember accomplishing this before. Any other ideas would be great!

Originally posted by jdekarske on Gazebo Answers with karma: 21 on 2021-06-08
Post score: 1

Original comments

Comment by m.bahno on 2021-06-09:
Not sure if this is the same case, but some time ago I needed to use the GPU version of a laser scanner (gpu_ray sensor). In order to make it work, I needed to use vcl software to emulate a virtual X window. However, if the camera is part of the visualisation GUI, it may not help...

Figured it out nearly a year later :) I needed to use a dummy X server to convince Gazebo that there was a display to render to:

/usr/bin/Xorg -noreset +extension GLX +extension RANDR +extension RENDER -logfile ./xdummy.log -config /etc/X11/dummy-xorg.conf :1

dummy-xorg.conf

# This xorg configuration file is meant to be used by xpra
# to start a dummy X11 server.
# For details, please see:
# https://xpra.org/Xdummy.html
{ "domain": "robotics.stackexchange", "id": 4605, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo-camera, docker, gazebo-9", "url": null }
c, datetime

void wrapGetEndOfRest(TimeStamp startOfRest, int neededLocalNights){
    if(isNeededLocalNightsInTheRange(neededLocalNights)){
        startOfRest = getEndOfRest(startOfRest, neededLocalNights);
        printf("%s is the end\n\n", asctime(&startOfRest));
    }
    else{
        printf("Error: the neededLocalNights is out of range min 2, max 5 got: %d \n\n", neededLocalNights);
    }
}

Why not just use <stdbool.h> instead of defining your own enum for it? If it has to do with the standard being used, then make sure you're compiling under C99 or GNU99.

It looks like t and t2 are for testing and more could be added at some point. If so, you could have an array of TimeStamps to help make the code more concise.

isNeededLocalNightsInTheRange() can be simplified as such:

    return 2 <= neededLocalNights && neededLocalNights <= 5;

This will automatically return the appropriate condition.

Try to avoid magic numbers by instead making them into constants. This would apply to the 2, 8, 6, 5, and 0 (the -1 indicates an error and doesn't apply here). Comments aren't always helpful enough and don't explain each of these anyway.

Error messages should be printed with fprintf(), with its first argument as stderr, to indicate this. printf() only prints to standard output.
{ "domain": "codereview.stackexchange", "id": 23299, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, datetime", "url": null }
phylogeny, software-recommendation, metagenome, 16rrna

Title: Variation in 16S rRNA between assemblers - how do I know which is real?

I have a low-diversity metagenome (~11 bins > 80% completion). Out of the bins, 3 are of interest to me. None of the lower-completion bins that can be identified are from the group of interest. So let's call the bins of interest 1, 2 and 3. The raw reads had been assembled separately with MEGAHIT and SPAdes (meta). Of these, only the SPAdes assembly was binned/refined. There are 3 16S rRNA in the MEGAHIT assembly (x, y, z) and 3 16S rRNA in the SPAdes assembly (a, b, c).

Bin 1 is associated with sequence 'a', which is identical to 'x'.
Bin 2 is linked with SPAdes sequence 'b', which has no MEGAHIT equivalent.
Bin 3 is not associated with any 16S rRNA (even with MarkerMAG).
Sequences 'c' and 'z' are very similar.
Sequence 'y' is an outlier.

There are small hints from phylogenetics that bin 3 could be associated with sequences c/z. But then, could sequence 'y' just be a misassembly? Or is it that somehow my final bins are missing an entire genome of interest? If the 16S rRNA for 'y' is real, I would expect a genome divergent enough from the others that it wouldn't get binned together with them. This is what I am trying to understand. Would you happen to have any advice/suggestions on what I could do to test this?

Initial approach: assessing scaffold quality with the BBMap stats.sh. It's a bit difficult to tell, since SPAdes has about 10x more contigs: worse N50/N90 but more long contigs and a better 'general assembly score'.

Additional thoughts: the new bins seem too divergent from known genomes to use a reference for misassembly estimation.

Thank you for your time!

This is easy @Laura - I'm a bit surprised by the results; SPAdes meta didn't have a good reputation, but the criteria assessed were not what you are looking for. Anyway, take all outputs and place them in a large multi-species alignment,
{ "domain": "bioinformatics.stackexchange", "id": 2561, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "phylogeny, software-recommendation, metagenome, 16rrna", "url": null }
classical-mechanics, energy-conservation

Title: Confusion regarding the conservation of mechanical energy

If it's always the case that---when all forces are derivable from a potential energy---$$\Delta T = W = -\Delta V$$ so that $$\Delta \left(T + V\right) = 0,$$ why is energy not conserved when $V$ is time-dependent? More precisely, what's wrong with my reasoning?

The work done on a particle over some interval of time $[t_1,t_2]$ is $$ W = \int_{t_1}^{t_2} \vec F_{net} \cdot \frac{d\vec x}{dt} dt$$ If we assume that $\vec F_{net} = - \nabla V(\vec x)$ then $$W = - \int_{t_1}^{t_2} \nabla V(\vec x(t)) \cdot \frac{d\vec x}{dt} dt = - \int_{t_1}^{t_2} \frac{d}{dt}\left(V(\vec x(t))\right)dt$$ $$ = - [V(\vec x(t_2))-V(\vec x(t_1))] \equiv -\Delta V $$

This is a result of the chain rule: $$\frac{d}{dt} V(\vec x(t)) = \frac{\partial V}{\partial x} \frac{dx}{dt} + \frac{\partial V}{\partial y}\frac{dy}{dt} + \frac{\partial V}{\partial z}\frac{dz}{dt} \equiv \nabla V \cdot \frac{d\vec x}{dt}$$

However, if $V$ is an explicit function of time, then
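Completing the thought (my sketch of the standard continuation, not the original poster's text): when $V$ depends explicitly on $t$, the chain rule acquires an extra term,

$$\frac{d}{dt} V(\vec x(t), t) = \nabla V \cdot \frac{d\vec x}{dt} + \frac{\partial V}{\partial t}$$

so the work integral gives

$$W = -\Delta V + \int_{t_1}^{t_2} \frac{\partial V}{\partial t}\, dt \quad\Longrightarrow\quad \Delta(T+V) = \int_{t_1}^{t_2} \frac{\partial V}{\partial t}\, dt,$$

which vanishes only when $V$ has no explicit time dependence.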
{ "domain": "physics.stackexchange", "id": 56807, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, energy-conservation", "url": null }
quantum-field-theory, propagator, lattice-model, bosonization

Title: What is the meaning of propagator in the context of lattice theory?

Say in a $1+1D$ free fermion theory, it is easy to calculate the propagator in the (effective) field theory to be $$\langle \psi^\dagger(z)\psi(z')\rangle = \frac{1}{2\pi}\frac{1}{z-z'}$$ (in the notation of https://arxiv.org/abs/cond-mat/9908262). What is the meaning of this back in the lattice theory? In particular, there is a singularity when $z\rightarrow z'$ which is not present in the lattice theory. How to explain this discrepancy?

I would prefer to answer your general question in a different context, because the example you mention has a difficulty which is special to the chosen QFT and not to the question of the meaning of singularities in the two-point functions. In particular, there is the fermion doubling problem, namely that it is impossible to put a single chiral non-interacting fermion on the lattice. Therefore allow me to discuss your question in the simplest case: a scalar field in $(0+1)$ dimensions.

Our variable is a field $\phi : \mathbb{Z} \rightarrow \mathbb{R},\ x \mapsto \phi(x).$ For simplicity I will set the lattice constant to $1$. This just means I am measuring distances in units of the lattice constant. Now consider the Green's function $$ G(x-y) = \langle \phi(x) \phi(y) \rangle \tag{1} $$ This solves the difference equation $$ 2G(x) - G(x+1) - G(x-1) = \delta_{x,0} \ . $$ This may be solved by a Fourier transform; write $$ G(x) = \int_{-\pi}^\pi \frac{ d p}{2\pi} \hat{G}(p) e^{ i p x} \ . $$ Inserting this into the difference equation gives
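Carrying out that step (my sketch of the standard manipulation): under the Fourier representation each shift $x \to x \pm 1$ becomes a phase $e^{\pm ip}$, so the difference equation turns into

$$\left(2 - e^{ip} - e^{-ip}\right)\hat G(p) = 1 \quad\Longrightarrow\quad \hat G(p) = \frac{1}{2 - 2\cos p} = \frac{1}{4\sin^2(p/2)}.$$

Note that $\hat G(p)$ diverges as $p \to 0$ (the long-distance physics), while the momentum integral is cut off at $|p| = \pi$: this cutoff is why the lattice two-point function has no short-distance singularity.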
{ "domain": "physics.stackexchange", "id": 52715, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, propagator, lattice-model, bosonization", "url": null }
quantum-mechanics, measurements, probability, quantum-statistics Now, atoms in the $\mathsf{A}$ beam have pure quantum state: $$\ket{\psi} = \ket{\uparrow_{x}} = \frac{1}{\sqrt{2}}\left(\ket{\uparrow_{z}} + \ket{\downarrow_{z}}\right)$$ And therefore: $$P(\ket{\psi'}=\ket{\uparrow_{z}})=|\braket{\uparrow_{z}}{\psi}|^{2} = \frac{1}{2}$$ However, for beam $\mathsf{B}$ we have an unpolarized beam and thus we have that the density matrix is given by: $$\mathbf{\rho}=\frac{1}{2}\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}$$ And therefore the probability of measuring $\ket{\uparrow_{z}}$ after passing it through the Stern-Gerlach experiment is the same. Therefore, I do not see how there can be a possibility of distinguishing between the two states after passing them through the Stern-Gerlach apparatus. Yet the phrasing of the question has made me think I am misunderstanding something.
{ "domain": "physics.stackexchange", "id": 63254, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, measurements, probability, quantum-statistics", "url": null }
$$x = 0 \vdash \forall x (x = 0).$$ By use of the Deduction Theorem we can conclude: $$\vdash x = 0 \rightarrow \forall x (x = 0).$$ But this cannot be right, because by soundness of first-order logic (i.e., if $\vdash \alpha$, then $\vDash \alpha$) we expect that only valid formulae are provable, and the above formula is not valid. To show that it is not, consider an interpretation with domain the set $\mathbb N$ of natural numbers and consider an assignment $s$ of values to the free variables [i.e. a function $s : Var \rightarrow \mathbb N$] such that $s(x)=0$. Clearly: $$(x = 0 \rightarrow \forall x (x = 0))[s] = FALSE$$ because $(x = 0)[s]$ is $0 = 0$, which is true, while $\forall x (x = 0)$ is false.

Pretty much nothing. Unless the quantifier is more specific, like $\forall x \in (0, 1)$. Otherwise I would say there is absolutely no difference. $P(x) \implies Q(x)$ makes obvious the fact that the implication is true for each $x$. It means $Q(x)$ is true each time $P(x)$ is true. It makes no sense to put the quantifier at the beginning. Not a fan of doing it.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9458012717045181, "lm_q1q2_score": 0.8200435226094163, "lm_q2_score": 0.8670357580842941, "openwebmath_perplexity": 248.66327700433055, "openwebmath_score": 0.9031214118003845, "tags": null, "url": "https://math.stackexchange.com/questions/801448/difference-between-bound-and-free-variable" }
Now look at your function $$f(X)=\left\{1,2,3,4\right\}-X.$$ In words, $f(X)$ is the complement of $X$, where $X$ is a subset of $\left\{1,2,3,4\right\}$. Again, if you're having difficulties, try to calculate a few random values. For example, let's take a random element $X$ of $A$, say $X=\left\{1,3,4\right\}$. Then $$f(X)=\left\{1,2,3,4\right\}-\left\{1,3,4\right\}=\left\{2\right\}$$ Cool, so $\left\{2\right\}$ is in the image of $f$. In symbols, I will denote the image of $f$ by $Im(f)$, so $$Im(f)=\left\{\{2\},\ldots\right\}$$

You can then keep computing $f(X)$ for all 12 possibilities of $X$; here are a few other random values: $$f(\left\{2\right\})=\left\{1,3,4\right\},\quad f(\left\{1,3\right\})=\left\{2,4\right\},\quad f(\left\{2,4\right\})=\left\{1,3\right\}...$$ so $$Im(f)=\left\{\{2\},\{1,3,4\},\{1,3\},\{2,4\},\ldots\right\}$$

This should give you the whole of $Im(f)$. It is not too much work because $A$ is small. But if $A$ were a little larger, things would get exponentially more time-consuming.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9799765575409524, "lm_q1q2_score": 0.8166385641715835, "lm_q2_score": 0.8333245911726382, "openwebmath_perplexity": 76.98865536803906, "openwebmath_score": 0.961359977722168, "tags": null, "url": "https://math.stackexchange.com/questions/3189827/image-of-a-discrete-function" }
general-relativity, tensor-calculus, representation-theory, spinors, covariance

$$ = (v^{k}\boxdot_{\mathbb{K}}w^{m})\boxdot_{\mathbb{K}}(\delta^{i}_{k}\boxdot_{\mathbb{K}}\delta^{j}_{m})\equiv v^{k}w^{m}\delta^{i}_{k}\delta^{j}_{m} = v^{i}w^{j}$$

Hence, we have properly that a $(0,2)$-tensor can be written as: $$\{T\}[(v,w)] = \{v^{i}w^{j}T_{ij} \}[(v,w)] = \{f^{i} \bar{\otimes} f^{j}T_{ij} \}[(v,w)] \implies $$ $$T = T_{ij} \boxdot _{\mathfrak{Lin'_{2}}} f^{i}\bar{\otimes}f^{j} \equiv$$ $$T = T_{ij}f^{i}\bar{\otimes}f^{j}$$

And after all of this awful text we can say that

i) The tensor product of covariant tensors is indeed: $$V^{*}\otimes V^{*}\equiv \mathfrak{T}^{0}_{2}(\mathfrak{V}) \approx [\mathfrak{Lin_{2}}'(\mathfrak{V},\mathfrak{V};\mathbb{K}),\bar{\otimes}] $$

I wrote $\approx$, not $=$, because if you look at the commutative diagram you will find that there's a linear map $L$. Well, $L$ is an isomorphism. The diagram is then:
{ "domain": "physics.stackexchange", "id": 52714, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, tensor-calculus, representation-theory, spinors, covariance", "url": null }
ros, roslaunch, add-dependencies, sicktoolbox-wrapper, sick

Title: Unable to build dependency with package in stack

Hi and thanks for a great forum! This is my first question; hope I have the correct tags and so forth. So, I am trying to build a launch package connecting several packages. Building dependencies and launching nodes directly under the catkin_ws works fine, but when I try to depend on "sicktoolbox_pls_wrapper" I receive:

CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by "sicktoolbox_pls_wrapper" with any of the following names:
sicktoolbox_pls_wrapperConfig.cmake
sicktoolbox_pls_wrapper-config.cmake

The package is found with roscd and rospack find, and I can also run the nodes in sicktoolbox_pls_wrapper with rosrun. Could the fact that the package is found in a stack be the problem? I have also tried to build with the dependency directed to the stack, without success. Does anyone have an idea? Thanks!

Originally posted by raspet on ROS Answers with karma: 16 on 2017-02-21
Post score: 0

Original comments

Comment by mgruhler on 2017-02-22:
What do you mean by "inside a stack"? Where did you get this package from? My guess currently is that the package is a rosbuild package, whereas your package is a catkin package. You cannot depend from a catkin to a rosbuild package. You would need to migrate the sick package to catkin as well.

Comment by raspet on 2017-02-22:
Spot on! Did not think about that, I will try to migrate it. Thanks! The package was found here link text I can prob look at the similar package (without PLS support) here link text? Best regards!

The problem was indeed that the package was rosbuild; after migrating it to catkin the package is found and can be called within the launch file.

Originally posted by raspet with karma: 16 on 2017-02-22
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 27084, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, roslaunch, add-dependencies, sicktoolbox-wrapper, sick", "url": null }
javascript, html5, community-challenge, simon-says

init();

<!DOCTYPE html>
<html>
<head>
    <title>Simon</title>
</head>
<body>
    <canvas id="simon" width="500" height="500" style="border:1px solid #d3d3d3;">
    Your browser does not support the HTML5 canvas tag. How sad.</canvas>
</body>
</html>

First, bravo! It's very fun and highly nostalgic. I may or may not have forced encouraged my children to play it well past their bedtime.

As to the structure, in 2014, it is absolutely recommended that you encapsulate your game inside a module. This ensures that it's portable, that it doesn't cause conflicts in the global namespace, and (from a self-discipline perspective) it will benefit from a bit more code organization and thinking about things like DRY, scoping, subclasses, etc. So some general pseudocode from a more OO style would be:

SimonGame {
    Oscillator {
        Type
        Frequency
        Start()
        Disconnect()
        Beep()
    }
    Canvas {
        WriteMessage()
        Paint()
        Quadrants
        Quadrant
        Color
        LitColor
        State
    }
    Guess()
    Init(options)
}

That's just a rough start, but the general idea would be to call things like Canvas.Paint() and Oscillator.Beep(), etc. rather than your global functions. You might also want to take a document element or ID in your Init() method arguments so the user can pass in the canvas they want to "simonize".
{ "domain": "codereview.stackexchange", "id": 11224, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, html5, community-challenge, simon-says", "url": null }
beginner, ruby, statistics d1 = [27.5,21.0,19.0,23.6,17.0,17.9,16.9,20.1,21.9,22.6,23.1,19.6,19.0,21.7,21.4] d2 = [27.1,22.0,20.8,23.4,23.4,23.5,25.8,22.0,24.8,20.2,21.9,22.1,22.9,20.5,24.4] d3 = [17.2,20.9,22.6,18.1,21.7,21.4,23.5,24.2,14.7,21.8] d4 = [21.5,22.8,21.0,23.0,21.6,23.6,22.5,20.7,23.4,21.8,20.7,21.7,21.5,22.5,23.6,21.5,22.5,23.5,21.5,21.8] d5 = [19.8,20.4,19.6,17.8,18.5,18.9,18.3,18.9,19.5,22.0] d6 = [28.2,26.6,20.1,23.3,25.2,22.1,17.7,27.6,20.6,13.7,23.2,17.5,20.6,18.0,23.9,21.6,24.3,20.4,24.0,13.2] d7 = [30.02,29.99,30.11,29.97,30.01,29.99] d8 = [29.89,29.93,29.72,29.98,30.02,29.98] x = [3.0,4.0,1.0,2.1] y = [490.2,340.0,433.9] s1 = [1.0/15,10.0/62.0]
{ "domain": "codereview.stackexchange", "id": 35862, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, ruby, statistics", "url": null }
slam, navigation, odometry, robot-pose-ekf, robot-localization

Title: rtabmap odometry source

I have been experimenting with rtabmap_ros lately, and really like the results I am getting. Awesome work Mathieu et al.! First, let me describe my current setup:

Setup
ROS Indigo / Ubuntu 14.04
rtabmap from apt binary (ros-indigo-rtab 0.8.0-0)
Custom robot with two tracks (i.e. non-holonomic)
Custom base-controller node which provides odometry from wheel encoders (tf as /odom-->/base_frame as well as nav_msgs/Odometry messages)
Kinect2 providing registered rgb+depth images
XSens IMU providing sensor_msgs/Imu messages (not used at the moment)
Hokuyo laser scanner providing sensor_msgs/LaserScan messages

Problem Description
The problem I am having is the quality of the odometry from wheel encoders: while translation precision is good, precision of rotation (depending on the ground surface) is pretty bad. So far, I have been using gmapping for SLAM/localization. This has been working well; gmapping subscribes to the /odom-->/base_frame tf from the base_controller as well as laser scan messages. In my experiments, gmapping does not have any problems in indoor environments getting the yaw estimate right.

Using rtabmap's SLAM instead of gmapping works well as long as I don't perform fast rotations or drive on surfaces on which track slippage is high (i.e. odom quality from wheel encoders is poor). This results in rtabmap getting lost. To improve rtabmap performance, I would like to provide it with better odometry information. My ideas are:

Use laser_scan_matcher subscribing to laser scan + imu/data + wheel_odom OR
Use robot_pose_ekf subscribing to imu/data + wheel_odom OR
Use robot_localization subscribing to imu/data + wheel_odom OR
Use gmapping subscribing to tf + laser scans OR
Use hector_mapping subscribing to laser scans
{ "domain": "robotics.stackexchange", "id": 20733, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, odometry, robot-pose-ekf, robot-localization", "url": null }
quantum-information, linear-algebra, trace

What is implicit in this notation is that you leave the part of the operator which acts on the space B untouched. In principle what you do is to multiply the square matrix by rectangular matrices to obtain a smaller matrix: $$ tr_A(L_{AB})=\sum_i [(\langle i|\otimes id)L_{AB}(|i\rangle\otimes id)] $$ If you want to think of matrices, just represent the tensor products via Kronecker products: $$ tr_A(L_{AB})= \left(\array{1&0&0&0\\0&1&0&0}\right)\cdot \left(\array{0&0&1&0\\1&0&0&0\\0&0&0&0\\0&0&0&0} \right)\cdot \left( \array{1&0\\0&1\\0&0\\0&0}\right)=\left(\array{0&0\\1&0} \right) $$ (I just wrote the surviving term, where $i=0$.)
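For completeness (my addition, following the same recipe): the $i=1$ term multiplies by $(\langle 1|\otimes id)$ on the left and $(|1\rangle\otimes id)$ on the right, which picks out the lower-right $2\times 2$ block of $L_{AB}$; here that block is zero, so the sum is unchanged:

$$ (\langle 1|\otimes id)L_{AB}(|1\rangle\otimes id)=\left(\array{0&0\\0&0}\right), \qquad tr_A(L_{AB})=\left(\array{0&0\\1&0}\right) $$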
{ "domain": "physics.stackexchange", "id": 21640, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-information, linear-algebra, trace", "url": null }
inorganic-chemistry, experimental-chemistry, metal, surface-chemistry, nanotechnology Title: Nanocar racing competition and the race tracks used In the Nanocar race competition the molecules are raced on a gold surface. Is there a specific property of gold that it forms a good "racetrack"? On what criteria does it compare with other metal surfaces? While I can't find specific justification for the surface, the first competition involved both gold and silver "racetracks." Drivers gear up for world’s first nanocar race How to build and race a fast nanocar The competition involves propulsion and imaging using STM so the substrate must be conductive. Gold (and silver to a lesser degree) is particularly useful, since it won't easily oxidize and is easy to purify. I've seen several comments about having 'tracks' in the gold surface, which suggests a well-defined crystal face with reconstruction, e.g. Au(100): (Image from Wikipedia)
{ "domain": "chemistry.stackexchange", "id": 12768, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, experimental-chemistry, metal, surface-chemistry, nanotechnology", "url": null }
optics, visible-light, geometric-optics, lenses

Title: Why do the asymptotes appear in the graph of $u$ against $v$ for the thin lens equation and what does this represent?

For the thin lens equation, $$\frac{1}{v} = \frac{1}{u}+\frac{1}{f}$$ when $u$ and $v$ are plotted to determine $f$ you get a graph like this. Why do the asymptotes appear, and why can we use them as a way to find $f$? Can the $1/f$ part of the equation be thought of as translating the graph? Furthermore, what would this then actually represent in physical terms?

Why do the asymptotes appear and why can we use them as a way to find $f$?

First: why do the asymptotes appear? $$\frac{1}{y}=\frac{1}{x}+\frac{1}{f}$$ That's the same thing, i.e. the lens formula. $$\Rightarrow y=\frac{xf}{x+f}, $$ $$\lim_{x\rightarrow -f}y=+\infty$$ or $$\lim_{u\rightarrow -f}v=+\infty$$ What this means is basically that if you set the location of the object at the focal length, you get the image at infinity.

Now, why do we use this? First, it's not the only way to determine the focal length, but it shows you a way to find the focal length by just watching the image and varying the object locations.

Can the $1/f$ part of the equation be thought of as translating the graph?

No! Just plot the graph; it's not translating, rather it's changing the asymptotes.

Furthermore, what would this then actually represent in physical terms?

It's the inverse of the focal length, that's all. You can call it the power of the lens.
{ "domain": "physics.stackexchange", "id": 74430, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, visible-light, geometric-optics, lenses", "url": null }
newtonian-mechanics, rotational-dynamics

Title: Confusion about rotation of a beam due to a couple moment and Newton's 2nd Law

I'm struggling with the following concept. If we have a uniform beam with a single pivot at its centre, and we apply two forces in opposite directions on opposite ends creating a couple moment, then Newton's 2nd Law states that (because the forces cancel out) there will not be any motion (or at least translational motion?) But, of course, there will be rotational motion, as there is a net moment on the beam.

I am struggling to connect the idea of Newton's 2nd Law and the forces together with rotational motion, as my intuition keeps telling me that there should not be any motion whatsoever as the forces cancel out. Does this have to do with the internal forces of the beam? Is it because, in the translational case, we are analysing the beam as if it is a single particle, but in the rotational case, we are analysing it as if it is a "line" of multiple particles, and therefore (if we look at the particles on either end of the beam) the net force is in a direction so as to cause rotation? If this is the case, then what causes the force on the particles on either end that accelerates the particle "inwards" in the normal direction and causes the signature "rotational" motion instead of the particle just accelerating off into space? I hope I've gotten across what I'm confused about! :P

then Newton's 2nd Law states that (because the forces cancel out) there will not be any motion (or at least translational motion?)

For a force couple, there will not be any translational motion of the center of mass, but there will be rotational motion.

But, of course, there will be rotational motion, as there is a net moment on the beam. I am struggling to connect the idea of Newton's 2nd Law and the forces together with rotational motion, as my intuition keeps telling me that there should not be any motion whatsoever as the forces cancel out.
{ "domain": "physics.stackexchange", "id": 73455, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, rotational-dynamics", "url": null }
image-processing, matlab, filters From here, eventually, we got the x3-3 result, which we think is the box filter's result. Is this the correct way to calculate the box filter? I'm not sure how x3-x represents the box filter's result. It seems your question is more about separability than how it is implemented. Try this: if your two-dimensional box filter kernel (the first "code" text in your question) can be written as the product of two one-dimensional box functions: $$ b_2[n,m] = \left \{ \begin{array}{cl} 1 & \mbox{ for } -1 \le n, m \le 1\\ 0 & \mbox{ otherwise } \end{array} \right . = b[n]b[m] $$ where $$ b[p] = \left \{ \begin{array}{cl} 1 & \mbox{ for } -1 \le p \le 1\\ 0 & \mbox{ otherwise } \end{array} \right . $$ Then, our double sum for one particular position $n_0, m_0$ looks like: $$ S[n_0,m_0] = \sum_n \sum_m b_2[n,m] I[n-n_0,m-m_0]\\ = \sum_n \sum_m b[n]b[m] I[n-n_0,m-m_0]\\ = \sum_n b[n] \sum_m b[m] I[n-n_0,m-m_0] $$ which just says you can do the sum in one direction first and then the sum in the other direction because the kernel is separable.
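The separability argument is easy to verify numerically. The following sketch (plain Python rather than MATLAB; function names are my own) applies a 3×3 box kernel once as a direct 2D sum and once as two 1D passes, and checks the results agree:

```python
def box2d(img):
    # Direct 2D sum over each 3x3 neighbourhood (zero padding at the borders).
    h, w = len(img), len(img[0])
    get = lambda i, j: img[i][j] if 0 <= i < h and 0 <= j < w else 0
    return [[sum(get(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1))
             for j in range(w)] for i in range(h)]

def box_separable(img):
    # Same result via two 1D passes: a horizontal box sum, then a vertical one.
    h, w = len(img), len(img[0])
    rows = [[sum(img[i][j + d] for d in (-1, 0, 1) if 0 <= j + d < w)
             for j in range(w)] for i in range(h)]
    return [[sum(rows[i + d][j] for d in (-1, 0, 1) if 0 <= i + d < h)
             for j in range(w)] for i in range(h)]

img = [[5 * i + j for j in range(5)] for i in range(5)]
assert box2d(img) == box_separable(img)
```

The payoff of the separable form is cost: per pixel it does two sums of 3 terms instead of one sum of 9, and the gap grows linearly with kernel size.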
{ "domain": "dsp.stackexchange", "id": 5199, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, matlab, filters", "url": null }
The exponent of the prime $p$ in $n!$ is $$\sum_{k=1}^{\infty} \left\lfloor\frac{n}{p^k}\right\rfloor$$ where $\lfloor x\rfloor$ is the integer part of $x$. Coupled with $$\sum_{i=1}^{n} \left\lfloor x_i\right\rfloor \le \left\lfloor\sum_{i=1}^{n} x_i \right\rfloor$$ we get that if $\sum_{i=1}^{n} a_i = N$ then $$\sum_{i=1}^{n} \left\lfloor\frac{a_i}{p^k}\right\rfloor \le \left\lfloor\frac{N}{p^k}\right\rfloor$$ And so any prime power which divides $(a_1)! \dots (a_n)!$ divides $N!$ and so $\displaystyle \frac{N!}{(a_1)! \dots (a_n)!}$ is an integer. Besides the obvious combinatorial interpretations, one also has the following reduction from multinomial coefficient to products of binomial coefficients. Namely for $n = i+j+k+\cdots + m$ $$\frac{n!}{i!j!k!\cdots m!} = \binom{n}{i} \frac{(n-i)!}{j!k!\cdots m!} = \binom{n}{i}\binom{n-i}{j} \frac{(n-i-j)!}{k!\cdots m!} = \;\cdots$$ For example, $$\frac{(x+y+z)!}{x!y!z!} = \binom{x + y + z}{x + y} \binom{x + y}{x}.$$
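The reduction to a product of binomial coefficients doubles as an algorithm for computing multinomial coefficients in exact integer arithmetic. A quick sketch (Python standard library; the helper name is my own):

```python
from math import comb, factorial

def multinomial(*ks):
    # n! / (k1! k2! ... km!) computed as the product of binomials from the
    # reduction above; every intermediate value is itself an integer.
    n, result = sum(ks), 1
    for k in ks:
        result *= comb(n, k)
        n -= k
    return result

x, y, z = 3, 4, 5
# Agrees with the direct factorial quotient (integer division is exact here):
assert multinomial(x, y, z) == factorial(x + y + z) // (
    factorial(x) * factorial(y) * factorial(z))
# The worked identity from the text:
assert multinomial(x, y, z) == comb(x + y + z, x + y) * comb(x + y, x)
```

That the integer division above is exact is precisely the divisibility fact proved with the floor-function inequality.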
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9748211572021571, "lm_q1q2_score": 0.808067086317888, "lm_q2_score": 0.828938806208442, "openwebmath_perplexity": 187.29081016642436, "openwebmath_score": 0.9205998778343201, "tags": null, "url": "https://math.stackexchange.com/questions/2158/division-of-factorials" }
c++, performance, dynamic-programming #include <iostream> const unsigned long k = 1000000007; const unsigned long w_max = 1001; int main() { unsigned long h, w; std::cin >> h >> w; // We allocate a contiguous array of longs on the stack. // We use long instead of int as long is guaranteed to be at least 32 bits // which can hold the result of k+k without truncating before we take the modulo, // while int is only guaranteed to be 16 bits (although it's 32 on PC). // // Allocating it on the stack as a fixed size avoids a slow memory and page allocation. // Note that we're over-allocating the size here so that we can have an extra zero // padding before the actual DP matrix begins, this allows us to not have to check // for the left edge in the inner loop, avoiding a costly branch as long as this pad // is set to zero which it always will be as we never overwrite it. unsigned long dp[2 * (w_max + 1)] = {0}; // We take two pointers into the above buffer so we can alternate them easily. unsigned long* dp_curr = &dp[0]; unsigned long* dp_prev = &dp[w + 1]; // This is the starting condition. It's stored in dp_curr[1] (remember, // after the pad); it'll then be swapped into dp_prev[1] when entering the loop, // then when computing dp_curr[1] = dp_curr[0] + dp_prev[1] it'll be correctly // transferred to the starting square without additional branching in the inner loop. dp_curr[1] = 1;
{ "domain": "codereview.stackexchange", "id": 40727, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, dynamic-programming", "url": null }
I'll begin by reviewing some definitions and results about functions. Let $A$ and $B$ be sets and let $f: A \to B$ be a function from the set $A$ to the set $B$. The function $f$ is injective (or one-to-one) if $f(x_1) = f(x_2)$ implies $x_1 = x_2$, that is, if distinct elements of the domain are mapped to distinct images in the codomain; it is bijective if it is both injective and surjective. Surjective functions are also called onto functions, and a surjective function is called a surjection. In the category of sets, the surjective functions are precisely the epimorphisms (the prefix epi is derived from the Greek preposition ἐπί meaning over, above, on), though this is not true in general. The functions in Examples 6.12 and 6.13 are not injections, but the function in Example 6.14 is; note that the functions in the three preceding examples all used the same formula to determine the outputs. The cardinality of a set is a measure of the size of the set: comparing a set to another (say, two sets $X$ and $Y$ having $m$ and $n$ elements respectively) allows us to decide its cardinality. This is a more robust definition of cardinality than we saw before, as we can use functions to establish the relative size of sets.
{ "domain": "org.br", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9843363517478327, "lm_q1q2_score": 0.8405373584622082, "lm_q2_score": 0.8539127473751341, "openwebmath_perplexity": 518.5773486296423, "openwebmath_score": 0.9288099408149719, "tags": null, "url": "http://felizcidade.org.br/2rf5m4/cardinality-of-surjective-functions-b8ae3d" }
fluid-dynamics, atmospheric-science, vortex If anyone else has any ideas about the constant vorticity in the cyclone eye I'd be very interested. EDIT 1: Breaking Rossby waves outside the eyewall I forgot to mention in the foregoing answer another interesting dynamical aspect that contributes to the observed "sharpness" of the transition between the regions of the constant vorticity (in the eye) and the varying vorticity outside. The mean swirling flow in a tropical cyclone provides a background vorticity profile on which so-called "vortex Rossby waves (VRWs)" can propagate. Such VRWs can reach large amplitudes and "break" (the term has a specific definition in this context). This breaking of VRWs is very efficient at mixing vorticity (or more precisely "potential vorticity (PV)"). This leads to a steepening of the vorticity jump in the eye wall and a homogenisation just outside of it (in the so-called "surf-zone") because a breaking VRW mixes high PV air from the eye interior with lower PV air outside. EDIT 2: Concentric eyewalls It may be of interest to you that the velocity profile at the beginning of this post is by no means always present in TC. In fact there is a phenomenon called "concentric eyewall" or "secondary eyewall", where a second maximum occurs in the tangential wind profile. This affects the storm structure immensely, both in its horizontal extent and in its intensity. Therefore, it is an important issue to reproduce them correctly in supercomputer simulations (e.g. to make better forecasts). This is just to illustrate that there are many interesting phenomena associated with TCs :). EDIT 3: Barotropic instability and potential vorticity mixing inside the eye A dynamic explanation for why the vorticity is constant. I have learned just recently that, from a dynamical point of view, there is an answer to why vorticity in the TC eye is constant. 
The explanation goes something like this: consider some nonlinear velocity profile close to the origin, let's say $V\sim x^\alpha$, $\alpha\neq 1$.
{ "domain": "physics.stackexchange", "id": 33163, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-dynamics, atmospheric-science, vortex", "url": null }
general-relativity, cosmology, units, stress-energy-momentum-tensor, si-units Could someone please tell me where the mistake is? This extra $c^4$ term in the first EFE solution appears to come directly from the metric tensor. What is the correct value for $T_{00}$ in SI units? Okay, so we are using coordinates $(t,r,\theta,\phi)$, where $t$ and $r$ have dimensions of time and length, and $\theta $ and $\phi$ are dimensionless. We see that $g_{\mu\nu} \mathrm dx^\mu \mathrm dx^\nu$ has consistent units of length squared, as required.
{ "domain": "physics.stackexchange", "id": 85017, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, cosmology, units, stress-energy-momentum-tensor, si-units", "url": null }
organic-chemistry, experimental-chemistry, surface-chemistry Title: How to create different levels of hydrophobicity on glass bead surfaces? I have some glass beads and I need to create different levels of hydrophobicity on their surfaces (various contact angles). I know that this is possible by 'silanization'. However, I have the following questions: Is there any other reasonable method to do that? In silanization, how can I change the level of hydrophobicity that my glass beads achieve? (e.g. by changing coating time, using different solutions or concentrations, etc.)
{ "domain": "chemistry.stackexchange", "id": 4705, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, experimental-chemistry, surface-chemistry", "url": null }
datetime, elixir You should have a @doc attribute and unit tests :-) Remember to document that the range is inclusive (which is not what I would expect). And if you add @doc, consider adding an example which can be run via a doctest in ExUnit. More info: elixir-lang.org/docs/stable/ex_unit/ExUnit.DocTest.html (thanks to @alxndr). Consider making the range not inclusive, but rather [from, to) or (from, to]. The former is common in computer science, the latter is common in daily speech. The _word style is usually used for arguments that aren't used as part of the pattern matching. This is why elixir doesn't warn you if you haven't used an argument whose name starts with an underscore. I would rename to start_date and end_date. I would introduce an anonymous function working as an alias, to make the first line a bit easier on the eye. See example below. Your function would give a MatchError on invalid output. I would probably deliberately check it and raise an ArgumentError instead. Update: The Erlang/Elixir philosophy is let it fail, and therefore this suggestion might be contrary to common E/E style. From the creator himself: This is typically how Elixir/Erlang code is written indeed. If a condition is not met, it fails as MatchError or FunctionClauseError. We typically don't add an extra clause saying what went wrong. The upside is that we simply worry about the happy path (forcing us to write more assertive code). The downside is that it may be cryptic sometimes to find exactly what went wrong. We do include the arguments in the stacktrace though. — José Valim You can neatify the last line to |> Enum.map &:calendar.gregorian_days_to_date/1.
{ "domain": "codereview.stackexchange", "id": 10522, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "datetime, elixir", "url": null }
c, programming-challenge, time-limit-exceeded On such a dataset, I can already see the benefits of the idea described above. The corresponding code looks like: int nb_bacteria_after_beam(coordinate coord[], int N, int xi, int xf) { int nbacterias = 0; assert(xi < xf); for (int j = 0; j< N; j++){ coordinate c = coord[j]; int x1 = c.x1; int x2 = c.x2; if (x1 > x2) { // swap int tmp = x1; x1 = x2; x2 = tmp; } if (x1 < xi) nbacterias++; if (x2 > xf) nbacterias++; } return nbacterias; } When the number of cases gets huge indeed, it might be worth performing some preprocessing to ensure queries can be performed more efficiently. We can easily write a procedure to prepare our datasets in such a way that we can apply the trick described above directly. This leads to much better performance indeed: void prepare_dataset(coordinate coord[], int N) { for (int j = 0; j< N; j++){ if (coord[j].x1 > coord[j].x2){ // swap (to keep things nice, y coord has to be moved too even if it is not useful) int tmp = coord[j].x1; coord[j].x1 = coord[j].x2; coord[j].x2 = tmp; tmp = coord[j].y1; coord[j].y1 = coord[j].y2; coord[j].y2 = tmp; } } } int nb_bacteria_after_beam(coordinate coord[], int N, int xi, int xf) { int nbacterias = 0; assert(xi < xf);
{ "domain": "codereview.stackexchange", "id": 12393, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, programming-challenge, time-limit-exceeded", "url": null }
# The slope of the tangent to the curve at any point is equal to y+2x. If the curve passes through the origin, then find its equation Hint: By using the given slope, we get a linear differential equation of the form $\dfrac{{dy}}{{dx}} + Py = Q$. To solve such equations, first we need to find the integrating factor $I.F = {e^{\int {Pdx} }}$, and the general solution of the linear differential equation is given by $y{e^{\int {Pdx} }} = \int {Q{e^{\int {Pdx} }}\,dx} + c$. As the curve passes through the origin, we can substitute (0,0) in the obtained equation to get the value of c, and this gives the equation of the curve.
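Carrying the hint through for this particular slope (a sketch of the standard working):

```latex
% The slope condition dy/dx = y + 2x rearranges to the linear form
% dy/dx + Py = Q with P = -1 and Q = 2x:
\frac{dy}{dx} - y = 2x,
\qquad I.F. = e^{\int (-1)\,dx} = e^{-x}
% Multiply through by the integrating factor and integrate by parts:
y\,e^{-x} = \int 2x\,e^{-x}\,dx + c = -2x\,e^{-x} - 2e^{-x} + c
\quad\Longrightarrow\quad
y = -2x - 2 + c\,e^{x}
% The curve passes through the origin, so 0 = -2 + c, giving c = 2:
y = 2e^{x} - 2x - 2
\qquad\text{(check: } y + 2x = 2e^{x} - 2 = \frac{dy}{dx}\text{)}
```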
{ "domain": "vedantu.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.98058065352006, "lm_q1q2_score": 0.8039715453045347, "lm_q2_score": 0.8198933381139645, "openwebmath_perplexity": 228.1170894102122, "openwebmath_score": 0.9627687335014343, "tags": null, "url": "https://www.vedantu.com/question-answer/the-slope-of-the-tangent-to-the-curve-at-any-class-12-maths-cbse-5f617fb542336d43bc74efd9" }
notation, electromagnetism, plane-wave Title: Notation of plane waves Consider a monochromatic plane wave (I am using bold to represent vectors) $$ \mathbf{E}(\mathbf{r},t) = \mathbf{E}_0(\mathbf{r})e^{i(\mathbf{k} \cdot \mathbf{r} - \omega t)}, $$ $$ \mathbf{B}(\mathbf{r},t) = \mathbf{B}_0(\mathbf{r})e^{i(\mathbf{k} \cdot \mathbf{r} - \omega t)}. $$ There are a few ways to simplify this notation. We can use the complex field $$ \tilde{\mathbf{E}}(\mathbf{r},t) = \tilde{\mathbf{E}}_0 e^{i(\mathbf{k} \cdot \mathbf{r} - \omega t)} $$ to represent both the electric and magnetic field, where the real part is the electric and the imaginary part is proportional to the magnetic. Often it is useful to just deal with the complex amplitude ($\tilde{\mathbf{E}}_0$) when adding or manipulating fields. However, when you want to coherently add two waves with the same frequency but different propagation directions, you need to take the spatial variation into account, although you can still leave off the time variation. So you are dealing with this quantity: $$ \tilde{\mathbf{E}}_0 e^{i\mathbf{k} \cdot \mathbf{r}} $$ My question is, what is this quantity called? I've been thinking time-averaged complex field, but then again, it's not really time-averaged, is it? Time-independent? Also, what is its notation? $\langle\tilde{\mathbf{E}}\rangle$? Stationary field or monochromatic field. Yes, basically that is the field including the $e^{i\omega t}$ term, but even when it is omitted one still knows what is meant.
{ "domain": "physics.stackexchange", "id": 63964, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "notation, electromagnetism, plane-wave", "url": null }
$$2\pi\cdot\phantom{}_1 F_2\left(1;\frac{1}{2}+\frac{N}{2},1+\frac{N}{2};-\frac{\pi^2}{4}\right)=\color{red}{2\pi-2\pi^2\int_{0}^{1}v^N\sin(\pi v)\,dv}$$ that clearly behaves like $2\pi-O\left(\frac{1}{N^2}\right)$ for large $N$. An extremely good approximation for large $N$ is $2\pi-\frac{2\pi^3}{(N+1)(N+3)}$. • OP did say they didn't want the problem solved, though. – Brian Tung Apr 6 '16 at 17:26
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9796676454700793, "lm_q1q2_score": 0.8009512642663279, "lm_q2_score": 0.817574478416099, "openwebmath_perplexity": 231.87649628274002, "openwebmath_score": 0.8627997040748596, "tags": null, "url": "https://math.stackexchange.com/questions/1730593/average-perimeter-with-n-points-on-the-unit-circle" }
primer, pcr, qpcr, mice Title: PCR primers for detecting hetero vs homozygous and WT mice in our colony My PI has asked me for advice on making a PCR screen to identify mice from a particular Cre line. We have multiple Cre lines in the colony, so the primer can't just be for mice with a copy of Cre in their genome. The insertion site of the mutation is known; is there a way to use BLAST or some other search tool to pull the sequences we should base our primers on? We also want to identify which mice are hetero vs homozygous. Are there particular kinds of PCR we should use or will a well designed primer enable this? Details The strain: https://www.jax.org/strain/004146 I was thinking of using this tool: https://www.ncbi.nlm.nih.gov/tools/primer-blast/index.cgi?LINK_LOC=BlastHome The strain reference from Jax, not sure the recommended cre primers are going to be sufficient for what we want. So I'm trying to find the FASTA file for this strain in the region of interest, then I can throw that into the BLAST tool for primer design I listed above. Not sure where to search for the FASTA for this strain though. Strain information: https://www.informatics.jax.org/reference/J:66884 Jax recommends this protocol so I suggested it: https://www.jax.org/Protocol?stockNumber=004146&protocolID=20627 Primer-blast is okay, but I presume these are lab mice so the variation is low, thus you can just use Primer3. SNPs are detected via TaqMan here, i.e. a separate probe that sits over the mutation site; it will detect heterozygosity via the qPCR signal (half the strength of homozygous).
{ "domain": "bioinformatics.stackexchange", "id": 2628, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "primer, pcr, qpcr, mice", "url": null }
moveit, ros-kinetic } Any help? Thanks Originally posted by Astronaut on ROS Answers with karma: 330 on 2020-10-18 Post score: 0 In your code, the only CollisionObject message you send to the planning scene contains the ADD operation. You can remove the object by sending the message with the REMOVE flag instead. [...] moveit_msgs::CollisionObject collision_object; collision_object.id = "BOX_" + str; collision_object.operation = collision_object.REMOVE; planning_scene_interface.applyCollisionObject(collision_object); On another note, it seems like you are sending a vector with a single element to the planning scene inside your for-loop. It would make more sense to construct the vector of collision objects inside the loop and then apply them to the planning scene once the vector is complete. Also, the line in your code that says bool applyCollisionObject(const moveit_msgs::CollisionObject& collision_objects); does not really make sense (that's a function declaration, but you are not defining the function anywhere). The question where you copied this from was referring to this applyCollisionObject method, which you can use as described in the example code above. Originally posted by fvd with karma: 2180 on 2020-10-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Astronaut on 2020-10-19: Yes. I have a version of the code with // Remove collision_object.operation = collision_object.REMOVE; std::vector<std::string> object_ids; object_ids.push_back(collision_object.id); planning_scene_interface.removeCollisionObjects(object_ids); collision_object.primitives.push_back(primitive); collision_object.primitive_poses.push_back(box_pose); collision_object.operation = collision_object.ADD; std::vector<moveit_msgs::CollisionObject> collision_objects; collision_objects.push_back(collision_object);
{ "domain": "robotics.stackexchange", "id": 35649, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "moveit, ros-kinetic", "url": null }
Let $c$ be the target number somewhere between lower bound $1$ and upper bound $n$, inclusive. We will search at the value $M = \lceil\frac{n}{2}\rceil$. If we’re told that the guess is too high, we will set $n=M-1$ and try the search again. Otherwise, we will set the lower bound to $M+1$ and search again. However, what really matters is $c$’s position relative to the bounds, so this is also the same as starting a new search with lower bound $1$ and upper bound $n-M$ with a new target number $c-M$. This recursive process ends in two possible ways: We either stop when we find the target number, or we stop if the lower and upper bounds are the same (in which case only one search is performed). Let $F(n)$ represent the expected number of searches needed to find the unknown target number, where $F(0) = 0$ and $F(1) = 1$. Overall, there is a $\frac{M-1}{n}$ probability that $c < M$, a $\frac{1}{n}$ probability that $c=M$, and a $\frac{n-M}{n}$ probability that $c>M$. $$F(n) = \left(\frac{M-1}{n}\right)\left(1+F(M-1)\right) + \left(\frac{1}{n}\right)\left(1\right) + \left(\frac{n-M}{n}\right)\left(1+F(n-M)\right)$$ Multiply by $n$, simplify things a bit, and substitute in the full value for $M$:
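The recurrence can be checked directly with a short memoized implementation (Python, written straight from the formula above):

```python
from functools import lru_cache
from math import ceil

@lru_cache(maxsize=None)
def F(n):
    # Expected number of guesses to find a uniform target in {1, ..., n}
    # with midpoint guessing, following the recurrence in the text.
    if n <= 1:
        return float(n)
    M = ceil(n / 2)
    return ((M - 1) / n) * (1 + F(M - 1)) + (1 / n) + ((n - M) / n) * (1 + F(n - M))

assert F(1) == 1.0
assert abs(F(2) - 1.5) < 1e-12    # guess 1 first; half the time one more guess is needed
assert abs(F(3) - 5 / 3) < 1e-12  # M = 2: (1/3)(2) + 1/3 + (1/3)(2)
```

The small cases are easy to confirm by hand, which makes this a useful sanity check before trusting the closed-form manipulations that follow.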
{ "domain": "bootmath.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429624315189, "lm_q1q2_score": 0.8107288415776958, "lm_q2_score": 0.8244619350028204, "openwebmath_perplexity": 305.1263104361031, "openwebmath_score": 0.8512685298919678, "tags": null, "url": "http://bootmath.com/average-number-of-guesses-to-guess-number-between-1-and-1000.html" }
machine-learning, neural-network, q-learning put the elements back in the memory (where you've taken them from) Samples that have high priority are likely to be used in training many times. Reducing the weights on these oft-seen samples basically tells the network, "train on these samples, but without much emphasis; they'll be seen again soon." or throw it out, substituting it with new memories instead. Then keep playing the game, adding say, another 10 examples before doing another weights-correction session after assembling a batch of 64 random elements. I refer to the "Session" meaning a sequence of backprops, where the result is the average gradient used to finally correct the network. EDIT: Another question I have in terms of training a neural network against a neural network, is that do you train it against a completely separate network that trains itself, or do you train it against a previous version of itself. And when training it against the other neural network, do you turn the epsilon greedy down to make the opposing neural network not use any random moves. Consider using just one network. Let's say our memory bank contains several elements: ... {...} {stateFrom, takenActionIndex, takenActionQval, immediateReward, stateNext } <-- a single element {...} {...} {...} ... When using each element in your memory during the Correction session (one element after the other), you need to:
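The memory-bank scheme described above (fixed capacity, old memories displaced by new ones, random batches of 64 drawn for each correction session while the elements stay in the bank) can be sketched as follows; all names are illustrative and the network itself is omitted:

```python
import random
from collections import deque

class ReplayMemory:
    # Fixed-size memory bank: appending beyond capacity silently drops the
    # oldest elements, matching "throw it out, substituting new memories".
    def __init__(self, capacity=10000):
        self.bank = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.bank.append((state, action, reward, next_state))

    def sample(self, batch_size=64):
        # Elements are sampled for a correction session but stay in the bank,
        # so the same transition can be reused across many sessions.
        return random.sample(list(self.bank), min(batch_size, len(self.bank)))

mem = ReplayMemory(capacity=100)
for t in range(150):
    mem.add(t, 0, 0.0, t + 1)   # 150 adds into a 100-slot bank
assert len(mem.bank) == 100     # only the newest 100 transitions survive
assert len(mem.sample(64)) == 64
```

Each correction session would call `sample(64)`, compute targets from the sampled transitions, and average the resulting gradients before updating the weights, as the "Session" description above suggests.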
{ "domain": "datascience.stackexchange", "id": 2564, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, neural-network, q-learning", "url": null }
javascript, image // good for (const image of images) { // define block start loadImage(image.url, image.name, onImageLoaded); // semicolon } // define block end You are not using standard export so that means the whole set of functions could be in global scope. Use a singleton or use the native export. Try to limit your exports; exporting every function is just a pain and will lead to problems. The function getTileCoordinates is out of place and does not belong here. BTW there is a better way to find the row function getTileCoordinates(image, index, width, height){ const tilesAcross = image.width / width | 0; // | 0 floors result return { x : width * (index % tilesAcross), y : (index / tilesAcross | 0) * height }; } Don't repeat. The functions downloadSprites and downloadTiles are repeats. I could not work out why you are returning the result of the load callbacks. There is no consistency as you do it sometimes and other times not. Also you are passing the image as an argument to the callbacks but there is no indication that you use it. For the onload event you can access the image via this if needed. Note you will have to use a standard function declaration for the callback. There is no error checking. What do you do if the connection or a request is lost? A rewrite This is how I would have written your code. It is just a suggestion example and far from ideal. I do not know what you wanted exposed but I only exposed what was not called internally (assuming this is a module) "use strict"; const images = { tiles : {}, }; function loadImage(imageDesc, onLoad) { var image = images[imageDesc.name]; if (image) { onLoad() } else{ image = images[imageDesc.name] = new Image(); image.src = imageDesc.url; image.onload = onLoad; } },
{ "domain": "codereview.stackexchange", "id": 28930, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, image", "url": null }
climate-change, climate In this case, as it is an area that is almost constantly cloudy with high humidity, the temperature varies just a little, and except for the first day of the period, it seems that there is no relationship. In fact, on the second day there was a storm (I am living in Singapore now) and it is reflected in a quick change in temperature (both) and solar radiation. Conclusion: It is not as simple as it seems. Hope it helps!
{ "domain": "earthscience.stackexchange", "id": 2667, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "climate-change, climate", "url": null }
php, object-oriented Title: FileMaker PHP API Interface I have written this piece of code to help the development of FileMaker developers having to use the dog-awful PHP API. <?php require_once ( 'fm_api/FileMaker.php' ); require_once ( 'config/config.php' ); /** * Interface between the FileMaker API and PHP - Written by RichardC */ class FMDB { /** * Setting up the classwide variables */ protected $fm; protected $layout = ''; protected $debugCheck = true; public $lastObj = null; //Filemaker LessThan/Equal to and GreaterThan/Equal to characters - Does not work in all IDEs public $ltet = '≤'; public $gtet = '≥'; /** Constructor of the class */ public function __construct() { $this->fm = new FileMaker( FMDB_NAME, FMDB_IP, FMDB_USERNAME, FMDB_PASSWORD ); } /** * Checks whether there is an error in the resource given. * * @param obj $request_object * * @return int */ public static function isError( $request_object ) { $preg = preg_grep( '/^([^*)]*)error([^*)]*)$/', array_keys( $request_object ) ); if( is_array( $request_object ) && !empty( $preg ) ){ return $preg[0]; } return ( FileMaker::isError( $request_object ) ? $request_object->getCode() : 0 ); } /** * Just a quick debug function that I threw together for testing * * @param string $func * @param array $arrReturn * @param string $type 'file' || 'console' * * @return mixed */ protected function debug( $func, $arrReturn, $type='file' ){ $debugStr = ''; if( $func == '' || empty( $func ) ){ return ''; } switch( $type ){
{ "domain": "codereview.stackexchange", "id": 1214, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, object-oriented", "url": null }
ros, serial, topic, publisher Title: How do I create a ROS topic publisher/subscriber? Hi all! I have a sensor from which I read data via a serial communication (port is of type ttyUSB). Now I am thinking of writing two packages, "publisher_node" and "client". What I want is, publisher_node should read data from the sensor continuously, and publish it as a topic (basically just scream out the values). On the other end, in the client package, I have a specific task, during which I want to poll the device for readings. I think that using a topic rather than a service is much more logical here. Data flow is a unidirectional stream. Its sort of like filling a cup(client) with water from a river(publisher_node). Am I thinking in the right direction? My System: Ubuntu 11.10 64-bit ROS version: ROS electric Here are some of the key questions I have: How do I create a topic for publisher_node? Is there any tutorial online for that? Maybe any step by step procedure..? Once I know how to create a topic, I will do that in publisher_node. Then I will 'rosmake' and 'rosrun publisher_node publisher_node'. After that, I will make sure I have the topic up, by doing 'rostopic list'. All is OK. Now how do I write code/make the client "subscribe" to the topic? Any guidelines here? Also, since this is serial communication, I want a special functionality with the topic. I want the user to have an option of setting the frequency of publishing (i.e how fast the data is screamed out). Is this possible with ROS? Note: Yes, I am using a serial device, but I have not used rosserial in any way. I am implementing my own serial port by using the termios API in Linux. Any help is greatly appreciated. Thanks in advance! Originally posted by Nishant on ROS Answers with karma: 143 on 2012-07-06 Post score: 0
{ "domain": "robotics.stackexchange", "id": 10070, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, serial, topic, publisher", "url": null }
cosmology, universe, space-expansion, big-bang, popular-science Title: What has been proved about the big bang, and what has not? Ok so the universe is in constant expansion, that has been proven, right? And that means that it was smaller in the past.. But what's the smallest size we can be sure the universe has ever had? I just want to know what's the oldest thing we are sure about. Spencer's comment is right: we never "prove" anything in science. This may sound like a minor point, but it's worth being careful about. I might rephrase the question like this: What's the smallest size of the Universe for which we have substantial observational evidence in support of the standard big-bang picture? People can disagree about what constitutes substantial evidence, but I'll nominate the epoch of nucleosynthesis as an answer to this question. This is the time when deuterium, helium, and lithium nuclei were formed via fusion in the early Universe. The observed abundances of those elements match the predictions of the theory, which is evidence that the theory works all the way back to that time. The epoch of nucleosynthesis corresponds to a redshift of about $z=10^9$. The redshift (actually $1+z$) is just the factor by which the Universe has expanded in linear scale since the time in question, so nucleosynthesis occurred when the Universe was about a billion times smaller than it is today. The age of the Universe (according to the standard model) at that time was about one minute. Other people may nominate different epochs for the title of "earliest epoch we are reasonably sure about." Even a hardened skeptic shouldn't go any later than the time of formation of the microwave background ($z=1100$, $t=400,000$ years). In the other direction, even the most credulous person shouldn't go any earlier than the time of electroweak symmetry breaking ($z=10^{15}$, $t=10^{-12}$ s.) 
I vote for the nucleosynthesis epoch because I think it's the earliest period for which we have reliable astrophysical evidence.
{ "domain": "physics.stackexchange", "id": 1231, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cosmology, universe, space-expansion, big-bang, popular-science", "url": null }
$\displaystyle \sum_{n \in E_d} a_n = g(d) X + r_d \ \ \ \ \ (13)$ for some quantity ${X}$ independent of ${d}$, some multiplicative function ${g}$ with ${0 \leq g(p) \leq 1}$, and some remainder term ${r_d}$ whose effect is expected to be negligible on average if ${d}$ is restricted to be small, e.g. less than a threshold ${D}$; note for instance that (5) is of this form if ${D \leq x^{1-\varepsilon}}$ for some fixed ${\varepsilon>0}$ (note from the divisor bound, Lemma 23 of Notes 1, that ${\prod_{p|d} \omega(p) \ll x^{o(1)}}$ if ${d \ll x^{O(1)}}$). We are thus led to the following idealisation of the sieving problem, in which the remainder terms ${r_d}$ are ignored: Problem 10 (Idealised sieving) Let ${z, D \geq 1}$ (we refer to ${z}$ as the sifting level and ${D}$ as the level of distribution), let ${g}$ be a multiplicative function with ${0 \leq g(p) \leq 1}$, and let ${{\mathcal D} := \{ d|P(z): d \leq D \}}$. How small can one make the quantity $\displaystyle \sum_{d \in {\mathcal D}} \lambda^+_d g(d) \ \ \ \ \ (14)$ for a sequence ${(\lambda^+_d)_{d \in {\mathcal D}}}$ of upper bound sieve coefficients, and how large can one make the quantity $\displaystyle \sum_{d \in {\mathcal D}} \lambda^-_d g(d) \ \ \ \ \ (15)$
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9905874104123901, "lm_q1q2_score": 0.8075590360644194, "lm_q2_score": 0.8152324848629214, "openwebmath_perplexity": 4622.61496713103, "openwebmath_score": 1.0000090599060059, "tags": null, "url": "https://terrytao.wordpress.com/category/mathematics/mathnt/page/2/" }
ros-melodic, rosbridge, rospy, rostopic-echo, joint-states Title: Is there a way to extract specific info or lines from a ROS topic? echo /joint_states has the following output (shortened). I made a node (using the rospy tutorial) to subscribe to /joint_states and I want the .py script to print() just the position list for use outside ROS, not the whole topic. How can I extract only that one line from the topic? Any help appreciated! --- header: seq: 1734 stamp: secs: 26 nsecs: 47000000 frame_id: '' name: - L_CROTCH_Y - L_CROTCH_R - L_CROTCH_P - L_KNEE_P - L_ANKLE_R - L_ANKLE_P - CHEST_Y - LC-AI-linear-joint - LC-AO-linear-joint - RC-W-linear-joint - LC-W-linear-joint - RC-E-linear-joint - LC-E-linear-joint
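A minimal sketch of the extraction the asker describes: a subscriber callback that prints only the `position` field. A namedtuple stands in for `sensor_msgs.msg.JointState` here so the snippet is self-contained and does not need a running ROS master; in a real node the callback would be registered with `rospy.Subscriber`.

```python
# Sketch: print only the `position` list from a JointState-like message.
# `JointState` is a stand-in for sensor_msgs.msg.JointState so the
# extraction logic can be shown without ROS installed.
from collections import namedtuple

JointState = namedtuple("JointState", ["name", "position"])

def extract_positions(msg):
    """Return just the position list, e.g. for use outside ROS."""
    return list(msg.position)

def callback(msg):
    # In a real rospy node this would be passed to:
    # rospy.Subscriber("/joint_states", JointState, callback)
    print(extract_positions(msg))

msg = JointState(name=["L_KNEE_P", "L_ANKLE_R"], position=[0.12, -0.05])
callback(msg)  # prints [0.12, -0.05]
```

The key point is that the callback receives the whole message object, but nothing forces you to print all of it — accessing `msg.position` selects just that field.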
{ "domain": "robotics.stackexchange", "id": 36184, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic, rosbridge, rospy, rostopic-echo, joint-states", "url": null }
from A to B, there are b possible images under f for each element of A. If the nested table is empty, the CARDINALITY function returns NULL. Example 5.6.1 … $2^{\aleph_0}$, there is no set whose cardinality is strictly between that of the integers and that of the real numbers. A bijection (one-to-one correspondence), a function that is both one-to-one and onto, is used to show two sets have the same cardinality. The cardinality |A| of a finite set A is simply the number of elements in it. $\mathfrak{c}$ (a lowercase fraktur script "c") is also referred to as the cardinality of the continuum. It is a relative notion. It follows by definition of cardinality that Z+ has the same cardinality … Oracle/PLSQL syntax of the CARDINALITY function. Definition (Rosen p141): A function f: D → C is one-to-one (or injective) means for every a, b in the domain D, if f(a) = f(b) then a = b. Usage: cardinality(w). Arguments: w, a numeric matrix, e.g. CARDINALITY. $\mathfrak{c} = 2^{\aleph_0}$. The cardinality of a set is a measure of a set's size, meaning the number of elements in the set. The CARDINALITY function counts the number of elements that a collection contains. Using our intuition of cardinality we count the number of elements in the set. These curves are not a direct proof that a line has the same number of points as a finite-dimensional space, but they can be used to obtain such a proof. And what we want is the cardinality of hash functions to be the same as the size of
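The NULL-for-empty behaviour of the CARDINALITY function described above can be mimicked outside SQL; this is an illustrative sketch (not Oracle syntax), with `None` standing in for SQL NULL:

```python
def cardinality(collection):
    # Mimics the described CARDINALITY semantics: the count of elements,
    # or None (standing in for SQL NULL) when the collection is empty.
    if collection is None or len(collection) == 0:
        return None
    return len(collection)

print(cardinality([1, 2, 3]))  # 3
print(cardinality([]))         # None
```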
{ "domain": "vedantaworld.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429638959674, "lm_q1q2_score": 0.8236677053345417, "lm_q2_score": 0.8376199714402812, "openwebmath_perplexity": 714.9079948119803, "openwebmath_score": 0.7675915360450745, "tags": null, "url": "https://vedantaworld.org/w77rhy/cardinality-of-a-function-c63172" }
c++, logging [ WARN 2024-01-01 09:42:23] 1: count = 2, What a refreshing sleep! [ WARN 2024-01-01 09:42:23] 2: count = 2, What a refreshing sleep! [ INFO 2024-01-01 09:42:15] roadblock says "What a refreshing sleep!" [TRACE 2024-01-01 09:42:25] This is trace message number 2 [ WARN 2024-01-01 09:42:24] 1: count = 1, What a refreshing sleep! [ WARN 2024-01-01 09:42:24] 2: count = 1, What a refreshing sleep!
{ "domain": "codereview.stackexchange", "id": 45292, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, logging", "url": null }
aqueous-solution, analytical-chemistry, halides Title: Why do we use hydrochloric acid in the test for sulfate ions but nitric acid for halide ions? Recently, I have learnt about testing for ions in qualitative analysis (BBC Bitesize). The link above which I used for reference mentions that for the test for sulfate anions, hydrochloric acid $(\ce{HCl})$ is used to ensure that there is no presence of carbonate ions. For the test for halide ions $(\ce{F-},$ $\ce{Cl-},$ $\ce{Br-},$ $\ce{I-},$ etc), nitric acid $(\ce{HNO3})$ is used instead, for the same purpose. I would like to know why the acids used are different, since they serve the same purpose. Update: $\ce{HCl}$ cannot be used for removal of carbonates in the halide test since $\ce{HCl}$ contains the $\ce{Cl-}$ halide. However, as to why nitric acid cannot be universally applied to all, I am still unsure.
{ "domain": "chemistry.stackexchange", "id": 12292, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "aqueous-solution, analytical-chemistry, halides", "url": null }
ros, pcl, tutorial, camera, ros-fuerte Forgetting to run roscore (very common mistake for beginners). Did not source their files (making ros unable to recognize it as a stack/package) Use input:=/narrow_stereo_textured/points2 even though that input is specific to a certain type of camera which might not be yours. What kind of camera are you using? If it's a kinect for example then you would want to use input:=/camera/depth_registered/points Are you trying the tutorial one for Point Clouds or Point Clouds 2? The one for Point Clouds does not publish images you can view, while the one for Point Clouds 2 publishes a message of type Point Clouds 2 (if you are trying to view it in rviz make sure that's the type you have chosen to view it). If none of these are your problem then please give us some more information so that we may be able to understand your problem better. Originally posted by Loufis with karma: 28 on 2013-08-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Loufis on 2013-08-06: For future problems, a very helpful way of debugging in ros is using rqt_graph, it helps you understand what exactly is happening in the background (is the program subscribed correctly to all the inputs? does it publish everything correctly? if using a series of nodes, where do they stop working? etc...)
{ "domain": "robotics.stackexchange", "id": 15184, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, pcl, tutorial, camera, ros-fuerte", "url": null }
'Y': 0.01842499064721287, 'X': 0.0014029180695847362, 'Z': 0.0006546950991395436}
{ "domain": "grocid.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9848109520836027, "lm_q1q2_score": 0.8484522660367423, "lm_q2_score": 0.8615382112085969, "openwebmath_perplexity": 2297.7335466269597, "openwebmath_score": 0.8094127774238586, "tags": null, "url": "https://grocid.net/page/4/" }
php, formatting This is more readable for two reasons: it's easier to understand the syntax, but more importantly, it's easier to see that you simply want a default value when there's nothing. We don't need the 'else' part which is confusing with the ternary operator. Note: as mentioned by Simon Scarfe and corrected by mseancole, PHP also has some special ternary syntax to do this: $newsItems[0]['image_url'] = $newsItems[0]['image_url'] ?: 'img/cat_placeholder.jpg'; Factorize it! If you're doing this only once or twice, then all is good, but otherwise you'd want to factorize it into a function, since you don't repeat yourself, right? The simple way is: function default_value($var, $default) { return empty($var) ? $default : $var; } $newsItems[0]['image_url'] = default_value($newsItems[0]['image_url'], '/img/cat_placeholder.jpg'); (The ternary operator does make sense here since variable names are short and both branches of the condition are useful.) However, we're looking up $newsItems[0]['image_url'] when calling default_value, and this is possibly not defined, and will raise an error/warning. If that's a concern (it should be), stick to the first version, or look at this other answer that gives a more robust solution at the expense of storing PHP code as a string, which thus cannot be checked syntactically. Still too long If we don't care about the warning/error, can we do better? Yes we can! We're writing the variable name twice, but passing by reference can help us here: function default_value(&$var, $default) { if (empty($var)) { $var = $default; } } default_value($newsItems[0]['image_url'], '/img/cat_placeholder.jpg');
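As an aside (not part of the original answer), the same default-value idea avoids the undefined-index problem entirely in languages where the lookup can happen inside the helper — a sketch, with falsy values treated as "missing" to mirror PHP's `empty()`:

```python
def default_value(mapping, key, default):
    # The lookup happens inside the function, so a missing key never
    # raises; falsy values (None, "", 0) fall back to the default,
    # roughly matching PHP's empty() semantics.
    value = mapping.get(key)
    return default if not value else value

news_item = {"title": "Cats"}  # no 'image_url' key
url = default_value(news_item, "image_url", "/img/cat_placeholder.jpg")
print(url)  # /img/cat_placeholder.jpg
```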
{ "domain": "codereview.stackexchange", "id": 38655, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, formatting", "url": null }
quantum-mechanics satisfies a multicomponent time-independent Schrodinger equation of the form: $$\Bigg[-\frac{\hbar^2}{2}\sum_{n=1}^{N} \frac{1}{m_{n}}\Bigg(I_{K \times K}\vec{\nabla}_{\vec{R}_{n}}+{\vec{\tau}_{\vec{R}_{n}}}_{K \times K}\Bigg)^2+V_{K \times K}(\{R\})\Bigg]\phi(\{R\})=E\phi(\{R\}),$$ where the matrix elements of the above defined '$K \times K$' matrices are respectively $$I_{mn}=\delta_{mn},$$ $${\vec{\tau}_{\vec{R}_{n}}}_{mn}=\langle \psi_{m}(\{r\};\{R\})|\vec{\nabla}_{\vec{R}_{n}}|\psi_{n}(\{r\};\{R\})\rangle$$ and $$V_{mn}(\{R\})=e_{m}(\{R\})\delta_{mn}.$$ For the time-dependent Schrodinger equation case, the same analysis can be carried out to get a more aesthetic looking equation:
{ "domain": "physics.stackexchange", "id": 43836, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics", "url": null }
discrete-signals, audio-processing, pitch (Keith Lent, PSOLA, Hamon) They are all based on the same principle: using the pitch analyzed for each grain, it is possible to alter the pitch while maintaining the formant or not. To do so, adjust the frequency at which the grains are emitted, crossfading between the two grains to prevent clicks. For higher pitches you may need to repeat grains, while for lower pitches you may need to drop some grains. To modify the formant while maintaining the pitch, keep the grain rate constant but alter the grain's playback speed: higher speeds will produce higher formants, while lower speeds will produce lower formants. (You will need a very good pitch track.) How to do it: here Source Filter (Physical modeling) - The human voice can be conceptualized as a source-filter model: the physical model represents the vocal signal as a time-domain glottal signal (with a flat spectral envelope) combined with a vocal tract filter containing formant information, where the source is the vocal cords and the filter is the vocal tract. Physical modeling involves analyzing the vocal signal to separate the glottal information (source) from the vocal tract information (filter), since we can then manipulate the glottal signal and the vocal tract model independently. One way to do it is using LPC; once again you will need a very good pitch track to generate a pulse train (keeping the original pitch or changing it) to resynthesize the signal with the original formant or not. But there is still another way to do this here: we can time-stretch and resample the extracted glottal signal and then apply a filter to the pitch-shifted excitation signal using the original spectral envelope as a reference (here you don't necessarily need a pitch track; just autocorrelation for the time stretch can solve it).
Based in the Frequency Domain - here you can use phase vocoder techniques to pitch shift, extract the spectral envelope from the pitch-shifted signal, and then warp the spectrum to a new envelope or match it with the original signal. You can use cepstrum or peak-peak interpolation to extract the spectral envelope. Math and code for a basic phase vocoder pitch shift (no formant warp; implementation here), and here is how to get the spectral envelope and warp it.
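The resample step mentioned above (for grains or for the extracted glottal signal) can be sketched in a few lines. This is a toy, pure-Python resampler using linear interpolation — no audio I/O and no anti-aliasing, just enough to show that playing the result back at the original rate shifts pitch (and formants) by the chosen factor:

```python
import math

def resample(signal, factor):
    # Naive linear-interpolation resampler: a factor of 2.0 halves the
    # number of samples, so playback at the original rate sounds an
    # octave higher (formants shift along with the pitch).
    n_out = int(len(signal) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor
        lo = int(pos)
        hi = min(lo + 1, len(signal) - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

grain = [math.sin(2 * math.pi * 0.01 * n) for n in range(200)]
up_an_octave = resample(grain, 2.0)   # half as many samples, double pitch
print(len(grain), len(up_an_octave))  # 200 100
```

A real implementation would use a proper band-limited resampler, but the grain-scheduling logic built on top of it is the same.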
{ "domain": "dsp.stackexchange", "id": 12304, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, audio-processing, pitch", "url": null }
resource-recommendations, group-theory, representation-theory, group-representations, lie-algebra Title: Comprehensive book on group theory for physicists? I am looking for a good source on group theory aimed at physicists. I'd prefer one with a good general introduction to group theory, not just focusing on Lie groups or crystal groups but one that covers "all" the basics, and then, in addition, talks about the specific subjects of group theory relevant to physicists, i.e. also some stuff on representations etc. Is Wigner's text a good way to start? I guess it's a "classic", but I fear that its notation might be a bit outdated? There is a book titled "Group theory and Physics" by Sternberg that covers the basics, including crystal groups, Lie groups, representations. I think it's a good introduction to the topic. To quote a review on Amazon (albeit the only one): "This book is an excellent introduction to the use of group theory in physics, especially in crystallography, special relativity and particle physics. Perhaps most importantly, Sternberg includes a highly accessible introduction to representation theory near the beginning of the book. All together, this book is an excellent place to get started in learning to use groups and representations in physics."
{ "domain": "physics.stackexchange", "id": 63801, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "resource-recommendations, group-theory, representation-theory, group-representations, lie-algebra", "url": null }
Since $a^2 + b^2 = c^2$, $\Delta FOG$ is a right triangle with $\angle FOG = 90^\circ$. 4. ## ok I honor both replies but soroban did a fantastic job using the correct math symbols with LaTeX.
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9905874104123902, "lm_q1q2_score": 0.8359518604101135, "lm_q2_score": 0.8438950986284991, "openwebmath_perplexity": 1306.9851889727213, "openwebmath_score": 0.9018391370773315, "tags": null, "url": "http://mathhelpforum.com/pre-calculus/10147-lines.html" }
• Reference topics: Tarski-finite & Dedekind-infinite. – DanielWainfleet Jun 2 '19 at 19:51 The key point is that every infinite subset of $$\mathbb{N}$$ is in bijection with $$\mathbb{N}$$ itself. To see this, just "collapse" the set: if $$A\subseteq\mathbb{N}$$ is infinite, consider the map from $$\mathbb{N}$$ to $$A$$ sending each $$n\in\mathbb{N}$$ to the unique $$a_n\in A$$ such that $$\vert\{b\in A: b < a_n\}\vert = n.$$ Now if $$f:A\rightarrow\mathbb{N}$$ is an injection and $$A$$ is infinite, then $$ran(f)$$ is an infinite subset of $$\mathbb{N}$$; hence by the above point we have a bijection $$b: ran(f)\cong\mathbb{N}$$. Now think about composing $$f$$ and $$b$$ ... • (Minor thing that briefly confused me: this uses the convention $0 \in \mathbb{N}$, which OP also used, but still, it requires a slight adjustment if $\min \mathbb{N}=1$.) – Ian Jun 2 '19 at 17:15 • @Ian Quite right! If we're working without zero just replace "$<$" with "$\le$." – Noah Schweber Jun 2 '19 at 17:20 • This is wrong. The OP asks: "If some set 𝐴 proves to be a countable and infinite set, then is it automatically countably infinite?" (emphasis mine), and the answer to that question is simply yes. (Their definition of "countable" is "admits an injection into $\mathbb{N}$.") Every countable infinite set is indeed countably infinite; the subtlety you bring up is whether every infinite set has a countably infinite subset, which is quite different. – Noah Schweber Jun 2 '19 at 16:52
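The "collapse" map can be made concrete for a computable infinite subset of $\mathbb{N}$. A sketch (with the set of squares as the infinite subset, given as a sorted generator): the bijection sends $n$ to the unique element of $A$ with exactly $n$ smaller members of $A$ below it.

```python
def collapse(members, n):
    """Return a_n: the unique element of A with exactly n smaller members.

    `members` is any sorted iterable enumerating the infinite set A,
    so this realizes the bijection n -> a_n described in the answer.
    """
    for i, a in enumerate(members):
        if i == n:
            return a

def squares():
    # The infinite subset A = {0, 1, 4, 9, 16, ...} of the naturals.
    k = 0
    while True:
        yield k * k
        k += 1

print([collapse(squares(), n) for n in range(5)])  # [0, 1, 4, 9, 16]
```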
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9787126513110865, "lm_q1q2_score": 0.8197892551050185, "lm_q2_score": 0.8376199633332891, "openwebmath_perplexity": 128.40726835490142, "openwebmath_score": 0.9782803058624268, "tags": null, "url": "https://math.stackexchange.com/questions/3248726/are-all-countable-infinite-sets-countably-infinite" }
javascript, security, html5 manages to undo your sanitizer transformation. Consider simplifying the sanitizer signature so it just accepts a single string, putting responsibility for catenating strings on the caller. In software engineering, simplicity is a virtue. Doubly so for security-critical code. Performance: sanitizer() processes a string of arbitrary length, and makes half a dozen scans of the string, looking for half a dozen dangerous characters. This works. Rather than making repeated scans, consider making just a single scan where you examine each character, and append it or its sanitized substitute to an output string.
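The single-scan idea can be sketched as follows. Note the replacement table here is illustrative only — which characters are dangerous and what their substitutes should be depends entirely on the output context (HTML body, attribute, URL, ...):

```python
# Single pass over the input: each character is either copied through
# or replaced via a lookup table, instead of one full scan per
# dangerous character.
SUBSTITUTES = {          # illustrative table, not an exhaustive one
    "<": "&lt;",
    ">": "&gt;",
    "&": "&amp;",
    '"': "&quot;",
    "'": "&#x27;",
}

def sanitize(text):
    return "".join(SUBSTITUTES.get(ch, ch) for ch in text)

print(sanitize('<img src="x">'))  # &lt;img src=&quot;x&quot;&gt;
```

One scan, one output buffer — the cost is linear in the input length regardless of how many characters the table covers.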
{ "domain": "codereview.stackexchange", "id": 44346, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, security, html5", "url": null }
\sin(y)\frac{s}{y^2+s^2}|_{-\infty}^\infty + \int_{-\infty}^\infty \frac{s\cos(y)}{y^2+s^2}dy = I(s)$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.959762052772658, "lm_q1q2_score": 0.8079523131871653, "lm_q2_score": 0.8418256492357358, "openwebmath_perplexity": 259.1933032725024, "openwebmath_score": 0.9764607548713684, "tags": null, "url": "https://math.stackexchange.com/questions/3755533/evaluating-int-infty-infty-frac-cos2xx24-mathrmdx" }
c++, linked-list, c++17, vectors class String: public Item { public: String(...) {...} virtual ~String() override {} }; Consider: Item *item = new String("foo"); delete item; Without the virtual destructor, the base class's destructor would be called, which doesn't clean up String's member variable _s. Consider making some member variables private If a member variable should only ever be modified via a member function, then you should make that variable private. If other code needs to access it anyway, but only for reading, then add a function to get a const pointer or reference to the data. For example, in Scene, you don't want something to remove items directly from _items, because it would bypass the removal of strings from data. So make it private and add const access to it: class Scene { std::vector<std::shared_ptr<Item>> _items; public: ... const std::vector<std::shared_ptr<Item>> &get_items() const { return _items; } }; Maybe the same can be applied to String as well. Avoid reinventing the wheel Perhaps there is a reason for it, but if I just look at the code you posted, I wonder why you create your own class String. If Scene is just a container for strings, then why not have it hold a std::vector<std::string> _items? Use auto where appropriate You can avoid repeating types by using auto in several places. For example, in del() you could write: auto pa = items[n]; auto pb = dynamic_pointer_cast<String>(pa); Use range-based for-loops Whenever you are iterating over the items in a container, use range-based for loops. They are easier to write, and there is less chance of errors. For example: for (auto item: _items) { item->print(); }
{ "domain": "codereview.stackexchange", "id": 36857, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, linked-list, c++17, vectors", "url": null }
molecular-biology, cancer, cell-culture, stem-cells Title: What type of flask should I use to culture NTERA2 embryonic cancer stem cells? I'm just starting my MSc research and I am in the process of making a list of equipment/consumables to order. Is there a specific flask in which I can culture the NTERA2 (NTERA2/D1) cell line? I found a protocol by ATCC for NTERA2 cells, and it didn't mention any specific flask, so any cell culture flask would do. Since ATCC is basically a cell culture bank I trust that their protocol is valid.
{ "domain": "biology.stackexchange", "id": 3601, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "molecular-biology, cancer, cell-culture, stem-cells", "url": null }
rviz [ WARN] [1372654229.088285128]: OGRE EXCEPTION(2:InvalidParametersException): Named constants have not been initialised, perhaps a compile error. in GpuProgramParameters::_findNamedConstantDefinition at /build/buildd/ogre-1.7.4/OgreMain/src/OgreGpuProgramParams.cpp (line 1425)
{ "domain": "robotics.stackexchange", "id": 14760, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rviz", "url": null }
sequence-alignment, python, phylogeny, phylogenetics print('Number of columns with the same amino acid: {}\n' 'Number of columns with at least 2 amino acids (no gaps): {}\n' 'Number of columns with at least one gap: {}' .format(var1, var2, var3)) For your example data, this outputs: Number of columns with the same amino acid: 9 Number of columns with at least 2 amino acids (no gaps): 40 Number of columns with at least one gap: 1
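A sketch of how the three counts might be computed for a toy alignment. The column categories are my reading of the printout above (identical column, column with at least two distinct amino acids and no gaps, column containing a gap), not necessarily the original script's exact rules:

```python
def column_stats(alignment):
    # alignment: list of equal-length strings, '-' marking gaps.
    # Each column falls into exactly one of three categories.
    same = at_least_two = gapped = 0
    for col in zip(*alignment):
        residues = set(col)
        if "-" in residues:
            gapped += 1
        elif len(residues) == 1:
            same += 1
        else:
            at_least_two += 1
    return same, at_least_two, gapped

aln = ["MKV-", "MKL-", "MRLA"]
var1, var2, var3 = column_stats(aln)
print(var1, var2, var3)  # 1 2 1
```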
{ "domain": "bioinformatics.stackexchange", "id": 692, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sequence-alignment, python, phylogeny, phylogenetics", "url": null }
human-biology, biophysics, skin, light, uv Title: Can UV radiation be safe for the skin? It is well known that UV radiation can damage the DNA and generally harm our skin. We also know that UV radiation helps in the production of melanin and Vitamin D. From what I could find, the DNA absorption spectrum goes to almost zero for wavelengths higher than 300 nm. This seems to suggest that we would be safe to use UV radiation between 300 and 340 nm on our skin (as long as the power or exposure is not high or long enough to cause burns), for therapeutic purposes such as the stimulation of Vitamin D production. Is this assumption correct? Is there any evidence that we could use this UV wavelength range safely? You're talking about long-wave UV, or UV-A radiation. In the 80s, experts claimed that this was a safe wavelength. Protection against UV-A was not part of sunscreen in the early days. Consequently, UV-A was (and still is) used in tanning beds due to its perceived safety over UV-B. However, a lot of research has been done since. UV-A is now well understood to also be unsafe in unreasonable amounts. Currently, UV-A protection is a typical feature of sunscreen, and tanning beds are still not a healthy alternative to moderate, healthy doses of sun. Here is a recent review covering some of the aspects comparing the effects of different UV ranges on skin. I really suggest you put a search engine to good use here; it makes little sense for us to expound on the literature when it is so clear and easily available. In summary, UVA certainly contributes to the development of skin cancer. UVA penetrates deeper into the skin than UV-B (which is largely responsible for 'burning' of the topmost layer of skin, without directly affecting the deeper layers). For this reason, UV-B is associated primarily with burning and UV-A is primarily associated with aging and aging diseases like cancer.
It is important to note that 95% of UV light in everyday life is UV-A, because it does not vary seasonally and can penetrate clouds and windows. Therefore, in spite of the fact that short wavelengths carry more energy per photon, the ratios of UV-A and UV-B exposure are far from equal. These are only a few of the explanations as to why we observe an incidence of aging and skin damage and disease upon UV-A exposure.
{ "domain": "biology.stackexchange", "id": 10010, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "human-biology, biophysics, skin, light, uv", "url": null }
transform Title: How to transform a pose I have a pose of a frame that is not correctly positioned (think about it like the /odom frame when doing AMCL navigation, it gets continuously corrected). I calculate this amount of necessary correction and save it as a Pose variable (from geometry_msgs). Now I want to apply this correction to the pose of my frame. How can I do that with tf in Python? What I tried until now is: # Get pose of camera in world frame (true pose from marker observation) pose_from_marker = self.tf_listener.transformPose("/map", pose_camera_marker_frame) # Now get transform between odom_marker and camera_frame_marker (copy of normal odometry) (trans, rot) = self.tf_listener.lookupTransform('camera_frame_marker', 'odom_marker', rospy.Time(0)) print rot R_cam_odom = quaternion_matrix(rot) t_cam_odom = np.zeros((3, 1)) t_cam_odom[0] = trans[0] t_cam_odom[1] = trans[1] t_cam_odom[2] = trans[2] print trans t_true_pose = np.zeros((3, 1)) t_true_pose[0, 0] = pose_from_marker.pose.position.x t_true_pose[1, 0] = pose_from_marker.pose.position.y t_true_pose[2, 0] = pose_from_marker.pose.position.z R_true_pose = quaternion_matrix((pose_from_marker.pose.orientation.x, pose_from_marker.pose.orientation.y, pose_from_marker.pose.orientation.z, pose_from_marker.pose.orientation.w))
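Applying a correction to a pose amounts to composing 4×4 homogeneous transforms. A pure-Python sketch of that composition (in the real node, `tf.transformations.quaternion_matrix` already returns such a 4×4, so `numpy.dot` would do the same multiplication — this version just makes the arithmetic explicit):

```python
def make_transform(R, t):
    # Build a 4x4 homogeneous transform from a 3x3 rotation
    # (nested lists) and a 3-vector translation.
    T = [[R[i][j] for j in range(3)] + [t[i]] for i in range(3)]
    T.append([0.0, 0.0, 0.0, 1.0])
    return T

def compose(A, B):
    # Matrix product A @ B: apply transform B first, then A.
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Toy example: a pure-translation correction applied to a pose.
identity_R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T_correction = make_transform(identity_R, [0.5, 0.0, 0.0])
T_pose = make_transform(identity_R, [1.0, 2.0, 0.0])
corrected = compose(T_correction, T_pose)
print([row[3] for row in corrected[:3]])  # [1.5, 2.0, 0.0]
```

The order matters: `compose(T_correction, T_pose)` expresses the corrected pose in the frame the correction was measured in.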
{ "domain": "robotics.stackexchange", "id": 22424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "transform", "url": null }
dsp-puzzle Title: DSP Puzzle: What network will create this waveform? What network consisting of two unit sample delays, three multipliers (with constant gain coefficients), and three adders can we create to convert a square wave into the waveform as shown below: The waveform is a 1 KHz square wave sampled at 100 KHz. This is a “DSP Puzzle”, please preface your answer with spoiler notation by typing the following two characters first: ">!" A solution with 1 multiply, 2 adds and 2 delays :-) The target waveform is simply the square wave itself with the fundamental removed. Since the frequency is known, I simply implemented a local oscillator with the same frequency, amplitude and phase as the fundamental and then subtracted it out. The most efficient way to create a local oscillator is a simple recursion $$y[n] = 2\cos(\omega_0)\cdot y[n-1] - y[n-2], \quad x_{out}[n] = x_{in}[n]-y[n]$$ The tricky bit is to seed the oscillator states correctly. Assuming we want to implement $$y[n] = A \cdot \cos(\omega_0\cdot n + \varphi)$$ we need to seed the states $y[-2]$ and $y[-1]$ accordingly. This isn't the most stable oscillator and the amplitude may drift over time. An alternative would be to use a rotating phasor instead, but that would take 4 multiplies. Results (after fixing a stupid mistake). Here is the code:
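The recursion above is easy to verify numerically. A sketch (not the answerer's original code) that seeds the two states with $y[-1] = A\cos(-\omega_0 + \varphi)$ and $y[-2] = A\cos(-2\omega_0 + \varphi)$ and checks the output against a reference cosine:

```python
import math

def oscillator(w0, A, phi, n_samples):
    # y[n] = 2*cos(w0)*y[n-1] - y[n-2], seeded so y[n] = A*cos(w0*n + phi)
    y_nm2 = A * math.cos(-2 * w0 + phi)
    y_nm1 = A * math.cos(-1 * w0 + phi)
    out = []
    for _ in range(n_samples):
        y = 2 * math.cos(w0) * y_nm1 - y_nm2
        out.append(y)
        y_nm2, y_nm1 = y_nm1, y
    return out

w0 = 2 * math.pi * 1000 / 100000   # 1 kHz fundamental at 100 kHz sampling
y = oscillator(w0, 1.0, 0.0, 300)
ref = [math.cos(w0 * n) for n in range(300)]
print(max(abs(a - b) for a, b in zip(y, ref)) < 1e-9)  # True
```

Subtracting this oscillator's output from the square wave then removes the fundamental, as the answer describes (the drift caveat still applies over long runs).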
{ "domain": "dsp.stackexchange", "id": 12271, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dsp-puzzle", "url": null }
java, functional-programming to remove every entry from the map where the value is a list that has a size less than 1. This works because values() returns a view of the map, so changes to it reflect through the map. And then removeIf gives us the ability to remove the elements matching the given predicate. So all in all, you can have: public static Map<Character,List<Integer>> charMaps(String str) { Map<Character, List<Integer>> map = IntStream.range(0, str.length()) .boxed() .collect(Collectors.groupingBy(str::charAt, HashMap::new, Collectors.toList())); map.values().removeIf(l -> l.size() <= 1); return map; }
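For comparison (an aside, not part of the original review), the same group-then-prune idea outside Java — build the character-to-indices map, then drop entries whose list has fewer than two positions, mirroring `values().removeIf(...)`:

```python
def char_maps(s):
    # Group indices by character, then drop entries whose list has
    # fewer than two positions.
    positions = {}
    for i, ch in enumerate(s):
        positions.setdefault(ch, []).append(i)
    return {ch: idx for ch, idx in positions.items() if len(idx) > 1}

print(char_maps("hello"))  # {'l': [2, 3]}
```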
{ "domain": "codereview.stackexchange", "id": 21468, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, functional-programming", "url": null }
by $$\left[ a \right].$$ Thus, by definition, $$\left[ a \right] = \left\{ {b \in A \mid aRb} \right\} = \left\{ {b \in A \mid a \sim b} \right\}.$$ R1 ∩ R2 = {(1, 1), (2, 2), (3, 3)}. Example: A = {1, 2, 3}. An equivalence class is defined as a subset of the form {x in X : xRa}, where a is an element of X and the notation "xRy" is used to mean that there is an equivalence relation between x and y. $R^{-1}$ is an equivalence relation. Linear Recurrence Relations with Constant Coefficients. The subsets $\left\{ 2 \right\},\left\{ 1 \right\},\left\{ 5 \right\},\left\{ 3 \right\},\left\{ 0 \right\},\left\{ 4 \right\}$ form a partition of the set $\left\{ {0,1,2,3,4,5} \right\}.$ The set $A = \left\{ {1,2} \right\}$ has $2$ partitions. $\forall\, a,b \in A,\; a \sim b \text{ iff } \left[ a \right] = \left[ b \right]$. Every two equivalence classes $\left[ a \right]$ and $\left[ b \right]$ are either equal or disjoint. The system should handle them equivalently. Given a set A with an equivalence relation R on it, we can break up all elements in A … ${A_i} \cap {A_j} = \varnothing \;\forall \,i \ne j$, $\left\{ {0,1,2} \right\},\left\{ {4,3} \right\},\left\{ {5,4}
{ "domain": "spacebd.biz", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9637799399736476, "lm_q1q2_score": 0.8373578084299698, "lm_q2_score": 0.8688267660487573, "openwebmath_perplexity": 526.3073391422087, "openwebmath_score": 0.5353816151618958, "tags": null, "url": "http://spacebd.biz/1nbqnla/bec524-equivalence-class-examples" }
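The garbled passage above circles one core fact: the equivalence classes $[a] = \{b : a \sim b\}$ partition the set. A minimal Python sketch of that fact (the function name and the mod-3 example are my own, not from the source) that builds the classes and lets you check the partition properties:

```python
def equivalence_classes(elements, related):
    """Group elements into classes [a] = {b : related(a, b)}.

    Assumes `related` is an equivalence relation (reflexive,
    symmetric, transitive), so distinct classes are disjoint.
    """
    classes = []
    for a in elements:
        cls = frozenset(b for b in elements if related(a, b))
        if cls not in classes:
            classes.append(cls)
    return classes

# Example: congruence mod 3 on {0, ..., 5}
A = range(6)
classes = equivalence_classes(A, lambda a, b: a % 3 == b % 3)
```

Since the classes' sizes sum to |A| and their union is A, they are pairwise disjoint, as the text asserts.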
java, algorithm, tree, validation, groovy I have already made a minor change by replacing invalidNodes += nodes with invalidNodes.addAll(nodes). This can be a slight improvement in performance, since invalidNodes grows over time and += creates a new list each time it is invoked, while addAll simply adds items to the existing list. Performance You traverse the tree starting from each node to the top. Therefore there are several nodes that are visited multiple times. Think about a linear tree structure. Map edges = (1..51).collectEntries { [it+1 as String, it as String] } findInvalidNodes(edges) The tree has 52 nodes and is therefore invalid. But since I built the map edges so that the order of entries is worst case, you start with nodes 1, 2, ..., 51 and need 1+2+3+...+51 = 1326 loops to detect that the nodes are invalid. If you started with the last entry of the map it would only take 51 loops. Since the height is limited to a constant amount, I would still consider the method scalable. You could think about saving information for visited nodes and using that information once you find a visited node. Is the method working as intended? Root node missing Due to the while condition while(parent), a child with no parent (and therefore every root of a tree) cannot be treated as invalid. For the same reason a root node will never show up in the nodes list, so effectively you check for tree height > 51 and not tree height > 50. Child of invalid parent not invalid Due to the while condition while ( !invalidNodes.contains(child) ), the loop breaks once the parent of a child is invalid. But the child is never added to the invalidNodes list. Is this really intended? Map edges = (1..101).collectEntries { [it+1 as String, it as String] } assert findInvalidNodes(edges) == [2, 3, ..., 52]
{ "domain": "codereview.stackexchange", "id": 44565, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, algorithm, tree, validation, groovy", "url": null }
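The review's suggestion of "saving information for visited nodes" amounts to memoizing each node's depth. A sketch of that idea in Python (the original code is Groovy; this is my own translation of the suggestion, not the reviewed code), where every depth is computed once, making the scan linear instead of quadratic:

```python
def find_invalid_nodes(edges, max_height=50):
    """edges maps child -> parent; nodes deeper than max_height are invalid.

    Depths are memoized, so each node's parent chain is walked only once.
    """
    depth = {}

    def node_depth(n):
        if n in depth:
            return depth[n]
        parent = edges.get(n)         # roots have no entry in `edges`
        d = 0 if parent is None else node_depth(parent) + 1
        depth[n] = d
        return d

    return [n for n in edges if node_depth(n) > max_height]

# A linear chain "1" <- "2" <- ... <- "102": every node deeper than 50 is invalid.
edges = {str(i + 1): str(i) for i in range(1, 102)}
invalid = find_invalid_nodes(edges)
```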
beginner, scheme, programming-challenge, racket (I've taken the liberty of using (add1 i) instead of (+ i 1) since you're using Racket.) Here's a version that's even more Rackety, using for/sum to accumulate the sum rather than using a manual loop: (define (sum-multiples start end) (for/sum ((i (range start end))) (if (or (zero? (modulo i 3)) (zero? (modulo i 5))) i 0)))
{ "domain": "codereview.stackexchange", "id": 6932, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, scheme, programming-challenge, racket", "url": null }
c++, linked-list This function will leak memory if it fails. Imagine what would happen if you were copying a 100 element stack, and push() failed on element 99. All those other nodes would never be deallocated. If you're not going to use smart pointers (which makes sense in this context), then you'll need to use an explicit try-catch, maybe something like this: template <class T> Stack<T>::Stack(Stack<T> const& value) { try { for(auto loop = value._top; loop != nullptr; loop = loop->next) push(loop->data); } catch (...) { while(_top != nullptr) do_unchecked_pop(); throw; } } Moving on down: template <class T> void Stack<T>::swap(Stack<T> &other) noexcept { using std::swap; swap(_top,other.top); } There appears to be a typo here - it should probably be other._top. If you're going to define a swap() member function, you might as well also define a free swap() function. template <class T> int Stack<T>::size() const { int size = 0; Node* current = _top; while(current != nullptr) { size++; current = current->next; } return size; } You can simplify this quite a bit with a for loop: template <class T> int Stack<T>::size() const { int size = 0; for (auto current = _top; current != nullptr; current = current->next) size++; return size; } On to the push() functions: template <class T> void Stack<T>::push(const T &theData) { Node* newNode = new Node; newNode->data = theData; newNode->next = nullptr; if(_top != nullptr) { newNode->next = _top; } _top = newNode; }
{ "domain": "codereview.stackexchange", "id": 31087, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, linked-list", "url": null }
This is true. If $f\colon A\to B$ is injective, then $S_f(X)=f(X)$ is injective as a function from $\mathcal P(A)$ to $\mathcal P(B)$. To prove this, verify the definition of "injective", namely take $X$ and $Y$ which are different and show that $f(X)\neq f(Y)$. HINT: Suppose that $x\in X$ and $x\notin Y$, can $f(x)$ be an element of $f(Y)$? For the ...
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9702399034724605, "lm_q1q2_score": 0.8126923143375281, "lm_q2_score": 0.8376199653600371, "openwebmath_perplexity": 150.32774871007192, "openwebmath_score": 0.9476624727249146, "tags": null, "url": "http://math.stackexchange.com/tags/elementary-set-theory/hot" }
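The claim can be spot-checked mechanically on a small finite example: if $f$ is injective, the induced map $X \mapsto f(X)$ on power sets sends distinct subsets to distinct images. A sketch (my own construction, assuming the claim as stated above):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def image(f, X):
    """The image f(X) of a subset X under a function given as a dict."""
    return frozenset(f[x] for x in X)

# An injective f: {0, 1, 2} -> {'a', 'b', 'c'}
f = {0: 'a', 1: 'b', 2: 'c'}
subsets = powerset({0, 1, 2})
images = [image(f, X) for X in subsets]
```

All 2³ = 8 images are distinct, which is exactly what injectivity of the induced power-set map means on this example.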
performance, r 202805 712171 596546 84308 247665 389822 34249 565730 862118 898577 739121 264902 587981 129755 677316 414275 178863 995157 19222 934733 801079 940076 219751 330526 723365 203720 849493 82329 718318 591392 410965 56986 712445 687912 686862 396779 193649 459354 872432 334893 865470 444696 45755 809095 507425 348440 768708 808101 674365 38301 694119 426365 636760 970465 180527 641201 545622 316408 396620 182038 806921 862933 725253 670868 456910 216942 281742 979785 738681 554452 864155 142627 445857 263100 518269 504065 281053 984551 171706 367321 723602 8712 366800 800361 59081 593664 156837 63199 936131 500236 531773 383137 687002 101699 468400 614459 738692 443825 800059 631937 367643 243201 289837 431024 430651 653182 621837 139126 963267 834190 931352 601575 229028 670554 721501 562612 249332 51289 158771 556296 319658 346314 963900 242262 366598 426893 152387 380547 262012 211065 301428 856861 217645
{ "domain": "codereview.stackexchange", "id": 35824, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, r", "url": null }
algorithms, graphs, depth-first-search I'm trying to prove the following statement for myself: In every DFS run on $G$, in every step of DFS, the $G_{\pi}$ is a forest. This question comes from a booklet published for studying for the finals (without solutions). I understand the logic behind why it is true (with the help of the statements from the book), but I struggle to write a "formal proof" which shows its correctness. Do I need to use induction to prove it (since I need to show it for every step)? How do I prove this statement formally? For the completeness of the question, $G_\pi=(V,E_\pi)$ is the following graph: $$ E_\pi=\{(\pi[v],v)\,:\,\pi[v]\neq NULL \wedge v\in V\} $$ Yes, use induction. You will assume that $G_\pi$ is a forest, and you want to prove that $G_{\pi'}$ is also a forest, where $\pi'$ is $\pi$ after one step of the DFS algorithm. The key point is that a forest is a graph without cycles. So basically, you want to show that no new cycles were created in the last step. To prove this, you will want to have a statement similar to this: Assume towards contradiction that $G_{\pi'}$ contains a cycle. Hence, the new node $u'$ that was added to $G_{\pi}$ in order to create $G_{\pi'}$ must be a part of the new cycle (since $G_{\pi}$ was a forest). Therefore, there must be some node $u\in G_\pi$ such that $\pi(u)=u'$, but this is impossible since it would mean that $u'$ had already been visited in $G_{\pi}$, which is impossible since $u'$ was visited only after $G_{\pi}$ was constructed.
{ "domain": "cs.stackexchange", "id": 18748, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, graphs, depth-first-search", "url": null }
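The inductive argument can also be observed empirically: run DFS, and after every single assignment $\pi[v] \leftarrow u$ check that $G_\pi$ is still acyclic (walking parent pointers from the new node never revisits a node). A hedged Python sketch of this check (my own, not from the booklet):

```python
def dfs_forest(adj):
    """Run DFS on adjacency dict `adj` and return the predecessor map pi,
    asserting after every step that G_pi is still a forest."""
    pi = {v: None for v in adj}
    visited = set()

    def check_no_cycle(v):
        # Walk parent pointers from v; a repeat would mean a cycle in G_pi.
        seen, cur = set(), v
        while cur is not None:
            assert cur not in seen, "G_pi acquired a cycle"
            seen.add(cur)
            cur = pi[cur]

    def visit(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                pi[v] = u          # one step of DFS: extend G_pi by edge (u, v)
                check_no_cycle(v)  # invariant from the proof
                visit(v)

    for s in adj:
        if s not in visited:
            visit(s)
    return pi

# Even on a graph that itself has a cycle, G_pi stays a forest.
pi = dfs_forest({1: [2], 2: [3], 3: [1]})
```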
antarctica, continent, history Large areas of Antarctica are ice-free, e.g. this valley in Vestfold Hills, East Antarctica. The area was first explored in the mid-to-late 1900s, but sightings of outcrop would assure explorers that they had actually seen the continent they were expecting and not floating ice. Geologists were also able early on to correlate the rare findings from Antarctica with the surrounding continents and started to reconstruct the Gondwana assembly, work that is still ongoing and, due to the lack of outcrops in Antarctica, is to a large extent based on geophysical data. Still, geologists and geophysicists are looking for similarities in Africa, India and Australia to understand the nature of the Antarctic geology under the ice cover. With aviation, it was finally possible to map the hinterland of the continent, and great discoveries were made as late as 1946–47 during Operation Highjump and during the Soviet expeditions in the 50s. (Don't miss this short footage of the discovery of Bunger Hills: Youtube) As far as I know, it was not until the International Geophysical Year (IGY) 1957-58 that an international effort to deploy scientists and collect data firmly defined the physical shape of the continent, which we are still refining today. International collaboration appears to be the key to successful research in Antarctica. Seismic data is indeed very important for understanding the Antarctic lithosphere, but the resolution is low due to the low seismicity and especially the limited number of deployed seismometers. This is, however, improving in recent years, and a number of new studies are using seismic tomography and receiver functions to measure the continental shape of Antarctica. See e.g. An et al (2015a) and An et al (2015b). Seismic data is also used to understand the ice sheets and to derive the heat flux that causes basal melting. Also very important is satellite potential field data. Gravity data, e.g. from ESA's GOCE mission, is an important constraint, but the resolution is low. Magnetic data, especially from airplanes but also from satellites where the flight lines are still sparse, is used to map and understand geological terranes covered by ice. The ICECAP project is collecting high-resolution data from flights to improve our understanding of the ice sheets, glaciers and also the bedrock beneath.
{ "domain": "earthscience.stackexchange", "id": 1106, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "antarctica, continent, history", "url": null }
c++, stream return pos; } else { break; } default: assert(0); } return pos_type(off_type(-1)); } ::std::streamsize xsgetn(char_type* const s, ::std::streamsize const count) final { auto const size(::std::min(egptr() - gptr(), count)); ::std::memcpy(s, gptr(), size); gbump(size); return egptr() == gptr() ? traits_type::eof() : size; } ::std::streamsize xsputn(char_type const* s, ::std::streamsize const count) final { auto const size(::std::min(epptr() - pptr(), count)); ::std::memcpy(pptr(), s, size); pbump(size); return epptr() == pptr() ? traits_type::eof() : size; } }; template <::std::size_t N = 1024> class memstream : public memstreambuf<N>, public ::std::istream, public ::std::ostream { public: memstream() : ::std::istream(this), ::std::ostream(this) { } }; #endif // MEMSTREAMBUF_HPP Assert I personally dislike using asserts. The problem for me is that they do different things in production and debug code. I want the same action in both. pos_type seekpos(pos_type const pos, ::std::ios_base::openmode const which) final { switch (which) { case ::std::ios_base::in: case ::std::ios_base::out: default: assert(0); } return pos_type(off_type(-1)); }
{ "domain": "codereview.stackexchange", "id": 37640, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, stream", "url": null }
java, beginner, object-oriented, array I am looking for advice on whether there are any other possible errors / exceptions I may have missed, in addition to any way of increasing the efficiency / simplicity of my code. You need to make sure you always close resources that can be closed. That means either a try-with-resources block or a try-finally block. You can’t just leave close outside one of these constructs because if an exception gets thrown the close method might not get called. Arguably, parsing the distances is easier to read using the stream API. You might not yet have learned that part of the language, in which case there’s nothing wrong with your loop except that inFile is never closed. Oh, no, it is, but waaaay too late. Keep the scope of the scanner as tight as possible. Likewise, the code for finding the minimum distance can be written as a stream. Credit to @AnkitSont. Variable names are really, really important. They should describe the information being pointed to. l is a terrible variable name because it doesn’t tell us anything. list would be a slight improvement, but distances would be much better. Along the same lines, there’s no reason to cut variable names short. sc instead of scanner doesn’t save you anything, and it makes it harder on the reader because they have to go dig up what an sc is. minDist, diff and minDiff can and should all be expanded out, and minDist might be better named safeDistance. Declare variables where they’re first used, and limit their scope as much as possible. This reduces the cognitive load on the reader. minDist can be declared right before your output. It could arguably also be a constant in your class (private static final double declared at the class level). diff can be declared inside the for loop. Even though the language allows curly braces to be optional for one-line blocks, it is widely regarded best practice that you always include curly braces.
To do otherwise is inviting problems later when your code is modified. Be careful with conditional checks. All of your code will work correctly if there are zero distances in the file until you hit if (l.size() != 1). You really want to check if (l.size() > 1), right?
{ "domain": "codereview.stackexchange", "id": 31564, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner, object-oriented, array", "url": null }
digital-communications Title: Time and sample rate for QAM I am trying to understand how to get time, as I already have the sample rate. I want to draw a graph of sample rate vs time. I have information such as carrier frequency and channel bandwidth. Can that be used to get time? So far, I have understood this: Samples per symbol and number of symbols for QAM. In order to get the time, I would need to do this: T = 1/Samples per symbol. Would this be correct? Consider you have a bandwidth of B. The sampling rate $F_s$ is related to B by $F_s > 2B$. Now the sample time is $\frac{1}{F_s}$. So if you have N samples then the total time is $\frac{N}{F_s}$. Is that what you are looking for?
{ "domain": "dsp.stackexchange", "id": 8650, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "digital-communications", "url": null }
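The arithmetic in the answer is simple enough to sketch: with sampling rate $F_s$, the sample period is $1/F_s$, the $n$-th time stamp is $n/F_s$, and $N$ samples span $N/F_s$ seconds. A minimal Python sketch (my own function name) for building the time axis of such a plot:

```python
def time_axis(n_samples, fs):
    """Time stamps (seconds) for n_samples taken at fs Hz.

    Sample period is 1/fs; the n-th sample occurs at n/fs,
    so the samples span n_samples/fs seconds in total.
    """
    ts = 1.0 / fs                      # sample period, seconds
    return [n * ts for n in range(n_samples)]

# e.g. 8 samples at 4 Hz span 8/4 = 2 seconds (last stamp at 1.75 s)
t = time_axis(8, 4.0)
```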
electricity, definition, voltage Title: What is the relationship between the verbal definition and the mathematical definition of some quantities? I know this is probably an easy question, but it's been a while since I've studied physics and I've started reading some circuit analysis textbooks. I'm finding it hard to understand the relationship between the verbal definition of quantities and the mathematical definitions. For instance, in Sadiku's book "Fundamentals of electric circuits", I've got the following verbal definition for voltage (literal): "Voltage (or potential difference) is the energy required to move a unit charge through an element, measured in volts (V)." And then, it says that this mathematically "means" $$ v_{ab} \triangleq \frac{dw}{dq} $$ I can't understand well the relationship between these two "definitions". Could someone explain the relationship further?
{ "domain": "physics.stackexchange", "id": 12910, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electricity, definition, voltage", "url": null }
Since they are in fact the same function where they are both defined (which is $$-1\lt x\lt 1$$), it is not a surprise they have the same derivative. More generally, two functions have the same derivative exactly when (if and only if) they differ by a constant (this is the Constant Function Theorem). So for example, $$-\arctan(x)$$ and $$\arctan(\frac{1}{x})$$ (for $$x\gt 0$$) have the same derivative: \begin{align*} -\frac{d}{dx}\arctan(x) &= -\frac{1}{1+x^2}\\ \frac{d}{dx}\arctan\left(\frac{1}{x}\right) &= \left(\frac{1}{1+\frac{1}{x^2}}\right)\left(\frac{1}{x}\right)'\\ &= \frac{1}{\frac{x^2+1}{x^2}}\left(-\frac{1}{x^2}\right)\\ &= -\frac{1}{x^2+1}. \end{align*} so $$-\arctan(x)$$ and $$\arctan\left(\frac{1}{x}\right)$$ differ by a constant on $$(0,\infty)$$. You can figure out the value since at $$x=1$$ you get $$-\arctan(1) = -\frac{\pi}{4}$$, and $$\arctan(\frac{1}{1}) = \frac{\pi}{4}$$, so the two functions differ by $$\frac{\pi}{2}$$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9850429107723175, "lm_q1q2_score": 0.8076301181310951, "lm_q2_score": 0.8198933359135361, "openwebmath_perplexity": 98.903858877004, "openwebmath_score": 0.9980387091636658, "tags": null, "url": "https://math.stackexchange.com/questions/4013153/why-is-the-derivative-of-arctan-fracx-sqrt1-x2-the-same-as-the-deriv" }
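The conclusion can be spot-checked numerically: for every $x > 0$, $\arctan(1/x) - (-\arctan x) = \pi/2$. A tiny Python check (my own, just confirming the constant found above):

```python
import math

def gap(x):
    """Difference between arctan(1/x) and -arctan(x); equals pi/2 for x > 0."""
    return math.atan(1.0 / x) - (-math.atan(x))

# The gap is the same constant at every positive x, as the theorem predicts.
checks = [gap(x) for x in (0.1, 1.0, 3.0, 100.0)]
```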
javascript, html function appendImages(container, images) { container.append(...imgGenerator(images)); } function handleIntersect(entries, observer) { entries.forEach((entry) => { if (entry.isIntersecting) { appendImages(imgsContainer, 5); } }); } appendImages(imgsContainer, 5); </script> </body> </html> I think that container and images are being repeated unnecessarily. When you call imgGenerator in appendImages, you already know that you want an array of images. You can use Array.from: the first argument will be an array-like object, that is, an object that is accessed by index and with a length property (which is identified as images in our javascript); the second argument will be the map function (which will be imgGenerator), a function that will be called for every item of the array. Doing this, in imgGenerator you avoid creating an array instance (imgs), pushing into it and running a for loop every time you want this array, and you no longer need to pass images as a parameter. The container is always imgsContainer, so in appendImages you already know where to append. function imgGenerator() { const img = document.createElement("img"); img.src = `http://api.adorable.io/avatars/${randomInteger()}`; img.classList.add('infinite-scroll-img'); return img; } function appendImages(images) { imgsContainer.append( ...Array.from({ length: images }, imgGenerator) ); } function handleIntersect(entries) { entries.forEach(entry => { if (entry.isIntersecting) { appendImages(5); } }); } appendImages(5);
{ "domain": "codereview.stackexchange", "id": 39146, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, html", "url": null }
Then G[16] > 0, Otherwise G[i,j]=-1. the sum of the weights of the edges in the paths is minimized. shortest path algorithm. Variations of the Shortest Path Problem. We present a new all-pairs shortest path algorithm that works with real-weighted graphs in the traditional comparison-addition model. I Therefore, the numbers d 1;d 2; ;d n must include an even number of odd numbers. Your solution should be complete in that it shows the shortest path from all starting vertices to all other vertices. All-Pairs Shortest Paths Problem To find the shortest path between all verticesv 2 V for a graph G =(V,E). This algorithm has numerous applications in network analysis, such as transportation planning. (2018) Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time. Dijkstra’s Shortest Path Algorithm in Java. Fast Paths allows a massive speed-up when calculating shortest paths on a weighted directed graph compared to the standard Dijkstra algorithm. It was conceived by computer scientist Edsger W. Shortest Path in a weighted Graph where weight of an edge is 1 or 2 Given a directed graph where every edge has weight as either 1 or 2, find the shortest path from a given source vertex ‘s’ to a given destination vertex ‘t’. Given an undirected, weighted graph, find the minimum number of edges to travel from node 1 to every other node. be contained in shortest augmenting paths, and the lay-ered network contains all augmenting paths of shortest length. Dating back some 3000 years, and initially consisting mainly of the study of permutations and combinations, its scope has broadened to include topics such as graph theory, partitions of numbers, block designs, design of codes, and latin squares. Simple path is a path with no repeated vertices. Find a TSP solution using state-of-the-art software, and then remove that dummy node (subtracting 2 from the total weight). 
This module covers weighted graphs, where each edge has an associated weight or number. Discuss an efficient algorithm to compute a shortest path from node s to node t in a weighted directed graph G such that the path is of minimum cardinality among all shortest s - t paths in G graph-theory
{ "domain": "chicweek.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.987758723627135, "lm_q1q2_score": 0.8528226068102591, "lm_q2_score": 0.8633916222765627, "openwebmath_perplexity": 383.31142122654893, "openwebmath_score": 0.5556688904762268, "tags": null, "url": "http://isxy.chicweek.it/number-of-shortest-paths-in-a-weighted-graph.html" }
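Buried in the jumbled text above is the standard single-source shortest-path problem on a weighted graph with non-negative edge weights. For reference, a minimal heap-based Dijkstra sketch (my own, not from the quoted page):

```python
import heapq

def dijkstra(graph, source):
    """graph: node -> list of (neighbor, weight) with non-negative weights.
    Returns a dict of shortest distances from source to each reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                     # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# a -> b -> c (cost 3) beats the direct a -> c edge (cost 4)
dist = dijkstra({'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}, 'a')
```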
frequency, string, oscillators, normal-modes The equation is invariant if you exchange $h\leftrightarrow1/h$. It means that if $A_n = h^n A_0$ satisfies the recurrence equation, then $A_n = h^{-n} A_0$ will too. An intuitive way to see this property is by noting that if a translation (to the left) results in an equivalent mode $A_{n+1} = h A_n$, then a translation to the right $A'_n = A_{n-1}$ would also give "the same" normal mode but it holds $A'_n = A_{n-1} = A_{n}/h$ . Moreover, to obtain $\omega>0$ it is required that $h=e^{i \theta}$. This is not a convenience as said in your link, it is a must, because any other real or complex number $h$ will lead to a complex $\omega$, which is not correct
{ "domain": "physics.stackexchange", "id": 83801, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "frequency, string, oscillators, normal-modes", "url": null }
angular-momentum, quantum-spin, time-reversal-symmetry Title: Time-reversal procedure for spin What's the physical reason/explanation for the fact when time is reversed then, in addition to momentum of fermion, spin is also reversed? If the spin is an actual magnetic moment, then its behavior under time reversal is simply similar to that of classical magnetization, which changes sign. Think of magnetic fields and dipoles as generated by electric currents. Under time reversal the currents reverse direction and so do the corresponding magnetic fields or dipoles. At quantum level, spin reversal goes hand in hand with the reversal of orbital and total angular momentum (orbital and total magnetic moments), and with CPT symmetry. This lecture gives a nice presentation: Time reversal
{ "domain": "physics.stackexchange", "id": 24036, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "angular-momentum, quantum-spin, time-reversal-symmetry", "url": null }
algorithms, data-structures, shortest-path, heaps What they probably mean is that $m \in \omega(n)$. Then, $m/n \in \omega(1)$ -- the base of the logarithm grows with $n$, that is the resulting sequence of values grows more slowly than with a fixed base. For dense graphs in the sense that $m \in \Theta(n^2)$, the effect is most pronounced. For the sake of simplicity, say $m = cn^2$ for some $c \in (0,1)$. Then, $m/n = cn$ and therewith $\log_{m/n} n = \log_{cn} n \in O(1)$ -- the logarithmic factor has vanished, the algorithm has running-time in $O(m)$. Note that all of this is discussing only the upper Landau-bound. In which way the true running-time cost is affected by this change is not per se clear, and the Wikipedia article is bold in claiming that it's an "improvement".
{ "domain": "cs.stackexchange", "id": 10640, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, data-structures, shortest-path, heaps", "url": null }
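The answer's point about dense graphs can be illustrated numerically: with $m = cn^2$ the heap arity is $m/n = cn$, and $\log_{cn} n = \ln n / \ln(cn)$ stays bounded (it tends to 1), so the logarithmic factor effectively vanishes. A quick Python sketch (c = 0.5 is an arbitrary choice of mine):

```python
import math

def log_factor(n, c=0.5):
    """log base (m/n) of n, with m = c * n**2, i.e. heap arity m/n = c*n."""
    arity = c * n
    return math.log(n) / math.log(arity)

# The factor decreases toward 1 as n grows, rather than growing like log n.
values = [log_factor(n) for n in (10**2, 10**4, 10**6, 10**8)]
```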
ros, ros2, python3 Title: What is the ROS2 equivalent of rospy.AnyMsg? Currently in ROS I am using rospy.AnyMsg. I would like to convert the ROS code to ROS2. I tried doing an import AnyMsg from rclpy but it does not work. I tried to search for AnyMsg for ROS2 and I could not find anything. Does AnyMsg exist in ROS2? If so, how do I access it? If not, is there an alternative and what is it? rospy.AnyMsg was generally used to subscribe to serialized messages in ROS 1. These would come to the user-specified callback as an array of bytes that the user could do something with. This was helpful if you didn't need to deserialize the message and instead just store it or use it for bandwidth computations (eg rostopic bw) In ROS 2, the correct way to get this behavior is by utilizing the raw flag on Node.create_subscription. You still need a message type for the subscription in this case, but you can construct this at runtime from a string argument: from rosidl_runtime_py.utilities import get_message def callback(data): # 'data' here is just bytes, it is up to the callback implementor # to do something with it. print(len(data)) node.create_subscription( get_message('my_cool_message_type'), topic, callback, qos_profile_sensor_data, raw=True ) If you do not have ready access to the message type as a string or a python object, you can use something like ros2cli uses to figure out the type at runtime: get_message_class This uses graph introspection to determine what available message types are available on a given topic and then returns the corresponding message class.
{ "domain": "robotics.stackexchange", "id": 38890, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros2, python3", "url": null }
ros2 Title: what does ecl stand for? Hey guys! This might be a silly question, but I'm new to ROS. What does ecl actually stand for? I have been unable to find it. Originally posted by Babra on ROS Answers with karma: 3 on 2021-01-05 Post score: 0 Original comments Comment by gvdhoorn on 2021-01-05: wiki/ecl. Embedded Control Libraries Originally posted by stevemacenski with karma: 8272 on 2021-01-05 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Babra on 2021-01-05: Thank you so much! It is greatly appreciated! Comment by tfoote on 2021-01-05: @Babra please use the checkmark to the left of the answer to accept the answer so @stevemacenski get the credit and others know that your answer has a solution.
{ "domain": "robotics.stackexchange", "id": 35929, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2", "url": null }
c#, sqlite, xamarin Title: Generate a bar chart of weekly data from SQLite.NET I am using Xamarin.Forms, Microcharts and SQLite.NET to create a mobile app. The SQLite.NET database stores details about books (book ID and entry date - the date it was entered in the system). The bar chart displays the number of books entered this week on each day - from Monday to Sunday. However, this implementation seems inefficient. Additionally, as DateTime fields in the database don't have an equivalent of DateTime.Date property in .NET the query checks between two dates to get the count for each day. using SkiaSharp; using Xamarin.Forms; using Xamarin.Forms.Xaml; using Microcharts; using ChartEntry = Microcharts.ChartEntry; public GraphDisplayPage() { InitializeComponent(); DrawChart(); } void DrawChart() { var monday = DateTime.Today.AddDays(-(int)DateTime.Today.DayOfWeek + (int)DayOfWeek.Monday); int mondayBookCount = App.Database.GetDailyCount(monday); var tuesday = DateTime.Today.AddDays(-(int)DateTime.Today.DayOfWeek + (int)DayOfWeek.Tuesday); int tuesdayBookCount = App.Database.GetDailyCount(tuesday); var wednesday = DateTime.Today.AddDays(-(int)DateTime.Today.DayOfWeek + (int)DayOfWeek.Wednesday); int wednesdayBookCount = App.Database.GetDailyCount(wednesday); var thursday = DateTime.Today.AddDays(-(int)DateTime.Today.DayOfWeek + (int)DayOfWeek.Thursday); int thursdayBookCount = App.Database.GetDailyCount(thursday); var friday = DateTime.Today.AddDays(-(int)DateTime.Today.DayOfWeek + (int)DayOfWeek.Friday); int fridayBookCount = App.Database.GetDailyCount(friday);
{ "domain": "codereview.stackexchange", "id": 40620, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, sqlite, xamarin", "url": null }
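The inefficiency being asked about is the repeated "Monday-of-this-week plus offset" expression, one copy per weekday. The usual refactor is to compute Monday once and loop over the seven offsets. A Python stand-in for that idea (the original is C#; this is my own sketch of the loop, not the poster's code):

```python
from datetime import date, timedelta

def week_days(today):
    """Dates of Monday..Sunday for the week containing `today`.

    Python's date.weekday() has Monday == 0, so subtracting it
    lands on Monday; each further day is one more offset.
    """
    monday = today - timedelta(days=today.weekday())
    return [monday + timedelta(days=i) for i in range(7)]

# 2024-01-10 is a Wednesday; its week runs 2024-01-08 .. 2024-01-14.
days = week_days(date(2024, 1, 10))
```

Each date can then be passed to a single per-day count query inside the loop instead of seven near-identical statements.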
localization, navigation, ekf, robot-pose-ekf, robot-localization The twist covariances are all set to 10000. The frame_id for this tag based odom message is map. The wheel odometry is very standard; it is based off of the turtlebot odometry. One note is that when the first tag estimate is received, a transform is broadcast which places the wheel odom frame at the initial location of the robot with respect to the map. Using both of the aforementioned packages and setups, and keeping the robot motionless, we observe that the filter estimates become increasingly erratic and tend to infinity in no particular direction. Removing the wheel odometry and keeping only the ar tag odometry yields the same effect. The ar tag odometry messages remain completely constant within three significant figures. Thus we conclude that somehow our consistent tag odometry measurements are causing the Kalman filters to behave erratically and tend to infinity. This happens even when the wheel odometry measurements are being fused as well. Can anyone explain why this might be, and offer any suggestions to fix it? I would be happy to provide any extra information I haven't provided. Side note: As a naive test, we also set all of the covariances to large values and observed that no matter how we moved the robot, the differences in the filter outputs were tiny (+/- 0.01), more likely drift than an actual reading.
{ "domain": "robotics.stackexchange", "id": 21439, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "localization, navigation, ekf, robot-pose-ekf, robot-localization", "url": null }
window-functions, bandpass, bandwidth, window, cosine Title: When can a windowed cosine be considered a band-limited signal? I have an exam in the coming days and I have no clue what the answer to this exam question is: given this signal: $$g(t) = H\left(t+\frac{T}{3}\right) f(t) - H\left(t-\frac{2T}{3}\right)f(t)$$ $H(t)=$ Heaviside step function $f(t) = \cos(\omega_1 t)$ Define a value of $T$ for which the signal $g(t)$ can be considered band limited. This question doesn't make much sense to me. This is a windowed cosine function and its bandwidth depends on the value of $T$; it's always limited! Unless $T$ approaches 0 and it can be seen as an impulse. Am I missing something? $T=0$ or $T=\infty$. Any non-zero signal with finite-length support in one domain has infinite support in the other domain. $\cos(\omega t)$ is only theoretically band-limited if infinite in length.
{ "domain": "dsp.stackexchange", "id": 8021, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "window-functions, bandpass, bandwidth, window, cosine", "url": null }
The goal is to generate data that are multivariate normal with a zero mean vector and a specified covariance matrix. The covariance between two random variables is a measure of how much they change together, and a covariance matrix collects these pairwise values: for a data matrix $X$ holding ordered sets of raw scores — say, the scores of $n$ students on $k$ tests — the diagonal entries are the variances (the average squared deviation from the mean score), so if math test scores are more variable than English test scores the math variance is larger, and the off-diagonal entries are the covariances between tests, which show how the scores tend to covary.
To generate such data: (1) draw a matrix $Z$ of independent standard normal random numbers (using the language's built-in random generator); because these numbers have variance one and are generated independently, $Z$ has zero mean and identity covariance $I$. (2) Factor the target covariance as $A'A$ (for example by a Cholesky decomposition) and compute $X = ZA$; then $X$ has the specified covariance. The result can be checked against the MATLAB ® cov function, e.g. by simulating 100 observations with 4 variables. A correlation matrix can likewise be converted to a covariance matrix by scaling with the standard deviations, and vice versa. Simulated data of this kind can be used for applications such as portfolio construction, risk analysis, and performance attribution.
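The standard recipe for generating multivariate normal data with a specified covariance — factor the covariance as $LL^{T}$ and transform i.i.d. standard normals — can be sketched in NumPy (the particular 2×2 target matrix is an arbitrary example):

```python
import numpy as np

# Target covariance matrix (must be symmetric positive semi-definite).
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Factor sigma = L @ L.T (Cholesky). If Z has identity covariance,
# X = Z @ L.T has covariance L @ L.T = sigma.
L = np.linalg.cholesky(sigma)
rng = np.random.default_rng(0)
Z = rng.standard_normal((100_000, 2))  # zero mean, identity covariance
X = Z @ L.T

print(np.cov(X, rowvar=False))  # close to sigma for large samples
```

The same factor-and-transform structure works for any dimension; only the factorization of the target matrix changes.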
{ "domain": "kpi.ua", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9465966702001758, "lm_q1q2_score": 0.8007662961755292, "lm_q2_score": 0.845942439250491, "openwebmath_perplexity": 630.5306069118313, "openwebmath_score": 0.5929288864135742, "tags": null, "url": "http://cad.kpi.ua/bhu9qiz3/2c6df3-generate-covariance-matrix" }
# How is that always proven? 1. Aug 25, 2013 ### cdux Teacher wanted to prove that for that condition to be satisfied ρ must be zero, but the limits give the idea that it applies only when those limits are satisfied. By the way, x>0 and y>0. Is it a universal proof or only for those limits? 2. Aug 25, 2013 ### haruspex The given condition is that the equality holds for all x, y > 0. If the limits exist and the functions are continuous (e.g. $\lim_{x \to 0} e^x = e^0$) then it must also hold in the limit. 3. Aug 25, 2013 ### LCKurtz I don't know what you mean by "universal proof". You have this proposed identity that looks really unlikely to be true unless $\rho=0$. To refute the identity, all you have to do is find something that implies that $\rho$ must be $0$. That argument surely works. You could undoubtedly find other arguments that would imply $\rho=0$. 4. Aug 26, 2013 ### cdux But didn't that use the restriction that $x \to 0$ and $y \to 1$? How did it prove it in general? 5. Aug 26, 2013 ### cdux I didn't see this reply at first. But if it holds for the limit, how does it prove it in general? 6. Aug 26, 2013 ### D H Staff Emeritus That $\exp(-\rho x y) = 1/((1+\rho x)(1+\rho y))$ is true for all x,y in the case that rho=0 is obvious. That this trivial solution for rho is the only solution in the case that one but not both of x or y is zero is also obvious. And that is all that is needed. The claim is that $\exp(-\rho x y) = 1/((1+\rho x)(1+\rho y))$ for all x,y>0. Finding any particular x,y that restricts rho to 0 suffices to show that rho must necessarily be zero. 7. Aug 26, 2013 ### cdux
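D H's argument in post #6 can be illustrated numerically: a single test point where the identity fails is enough to rule out any nonzero $\rho$. A minimal sketch (the test point $x = y = 1$ is an arbitrary choice):

```python
import math

def lhs(rho, x, y):
    return math.exp(-rho * x * y)

def rhs(rho, x, y):
    return 1.0 / ((1.0 + rho * x) * (1.0 + rho * y))

# rho = 0 satisfies the identity for every x, y > 0 ...
assert math.isclose(lhs(0.0, 1.0, 2.0), rhs(0.0, 1.0, 2.0))

# ... but any nonzero rho already fails at a single test point,
# which suffices to force rho = 0 for the "holds for all x, y" claim.
print(lhs(0.5, 1.0, 1.0), rhs(0.5, 1.0, 1.0))  # exp(-1/2) vs 1/2.25
```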
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.952574122783325, "lm_q1q2_score": 0.8019013255771008, "lm_q2_score": 0.8418256452674008, "openwebmath_perplexity": 644.5358633187326, "openwebmath_score": 0.7939848899841309, "tags": null, "url": "https://www.physicsforums.com/threads/how-is-that-always-proven.707188/" }
operating-systems, memory-management, paging So, with $1024$ bytes in the logical memory, and $4$ pages, then each page is $256$ bytes. Therefore, the size of the physical memory must be $4096$, right? ($256 \times 16$). Then, to calculate the logical address offset: $$1024 \mod 717 = 307$$ Is that how we calculate the offset? And, we can assume that $717$ is in page $2$ ($\frac{1024}{717} = 2.8$)? So, according to the page table, the corresponding frame number is $3$. And so to get the physical address, we multiply the frame number and page size? $$2 \times 256 = 768$$ Then, do we add the offset, like so: $$768 + 307 = 1,075$$ Thank you for taking the time to read. If I don't quite have this correct, would you be able to advise on the correct procedure for calculating this? You are correct in your reasoning that the pages are $256$ bytes and that the physical memory capacity is $4096$ bytes. However, there are errors after that. The offset is the distance (in bytes) relative to the start of the page. I.e., logical_address mod page_size. The bits for this portion of the logical address are not translated (given power of two page size). The logical (virtual) page number is the number of the (virtual) page, counting from zero. I.e., $$\frac{logical\_address}{page\_size}$$ As you noted, the physical page is determined by the translation table, indexed using the logical (virtual) page number. Once the physical page number has been found, the physical address of the start of that page is found by multiplying the physical page number by the page size. The offset is then added to determine the precise physical address. I.e., $$(physical\_page\_number \times page\_size) + offset$$ So a logical address of, e.g., $508$, with $256$ byte pages would have an offset of $$508 \mod 256 = 252$$ The logical/virtual page number would be $$\left\lfloor \frac{508}{256} \right\rfloor = 1$$
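The answer's translation procedure can be sketched in Python (the page table below is hypothetical; only the 256-byte page size and the logical address 508 come from the answer's example):

```python
PAGE_SIZE = 256  # bytes, as in the answer's example

# A hypothetical page table: index = virtual page number, value = physical frame.
page_table = [5, 3, 7, 0]

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # virtual page number
    offset = logical_address % PAGE_SIZE   # distance into the page (untranslated bits)
    frame = page_table[page]               # looked up in the translation table
    return frame * PAGE_SIZE + offset      # physical address

# Logical 508 -> page 1, offset 252 -> frame 3 -> physical 3*256 + 252 = 1020
print(translate(508))
```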
{ "domain": "cs.stackexchange", "id": 7644, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "operating-systems, memory-management, paging", "url": null }
python, django def form_invalid(self, form): return render(self.request, self.template_name, self.get_context_data()) def get_form_kwargs(self): kwargs = super(SendTransfer, self).get_form_kwargs() kwargs['sender'] = BankAccount.objects.get(id=self.kwargs['pk']) kwargs['user'] = self.request.user return kwargs class OutcomeTransfers(DetailView): model = BankAccount template_name = 'bank/report.html' def get_object(self, queryset=None): obj = super(OutcomeTransfers, self).get_object(queryset) if not obj.is_owner(self.request.user.citizen): raise Http404 return obj def _get_queryset(self): return self.model.objects.get(id=self.kwargs['pk']).outcome_transfers.all() def get_context_data(self, **kwargs): data = super(OutcomeTransfers, self).get_context_data() queryset = self._get_queryset() if 'sort' in self.request.GET: queryset = queryset.order_by(self.request.GET['sort']) data['table'] = MoneyTransferTable(queryset) return data class IncomeTransfers(DetailView): model = BankAccount template_name = 'bank/report.html' def get_object(self, queryset=None): obj = super(IncomeTransfers, self).get_object(queryset) if not obj.is_owner(self.request.user.citizen): raise Http404 return obj def _get_queryset(self): return self.model.objects.get(id=self.kwargs['pk']).income_transfers.all()
{ "domain": "codereview.stackexchange", "id": 17913, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, django", "url": null }
physical-chemistry, concentration The mass concentration of $x$ in g/L is $C_x f_x$. The weight fraction of $x$ in grams of $x$ per total gram of solution is thus $w_x = \frac{C_x f_x}{\rho}$. The weight fraction $w_x$ must be a number between 0 and 1. The weight ratio of $x$ in grams of $x$ per gram of water is $\frac{\frac{C_x f_x}{\rho}}{1-\frac{C_x f_x}{\rho}} = \frac{w_x}{1-w_x}$. The molality is simply the weight ratio divided by the formula weight, $\frac{1000}{f_x}\frac{\frac{C_x f_x}{\rho}}{1-\frac{C_x f_x}{\rho}}$. The factor of 1000 is because molality is moles per kg of solvent, not moles per g. That last expression can be written as $\frac{1000}{f_x}\frac{\frac{C_x f_x}{\rho}}{1-\frac{C_x f_x}{\rho}} = \frac{1000}{f_x} \frac{w_x}{1-w_x}$ This last formula shows that molality values will always be higher than molarity values. Because $w_x$ has to be between zero and 1, the molality formula divides by a smaller number than the molarity formula. One molar water "dissolved" in water (say we are dissolving distilled water in tap water, or dissolving isotopically labeled water into regular water) has a molality of around 1.02. My approach is: Convert 1 M into molality and then compare if it is less or more than 1 m. (?) Good approach. The density of water is 1 kg per L.
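The final formula can be checked numerically. A minimal sketch, reproducing the ~1.02 molality of 1 M water quoted above:

```python
def molality(C, f, rho=1000.0):
    """Molality (mol/kg solvent) from molarity C (mol/L), formula
    weight f (g/mol), and solution density rho (g/L)."""
    w = C * f / rho                    # weight fraction of solute
    return (1000.0 / f) * w / (1.0 - w)

# 1 M water "dissolved" in water: f = 18 g/mol, rho = 1000 g/L.
print(round(molality(1.0, 18.0), 3))  # about 1.018, matching the ~1.02 above
```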
{ "domain": "chemistry.stackexchange", "id": 7552, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, concentration", "url": null }
path-integral, greens-functions, correlation-functions, propagator, partition-function Now it seems to feel like the evolution function in the propagator, but how can one deal with the “expectation value” part of the green function definition, which is missing in the propagator definition? I also know that partition function $Z$ could be related to the integral of imaginary time propagator, but couldn’t really get all these fuzzy things in place at once. All right so after days of looking textbooks I finally get a feel of how things are arranged, I’ll try to put all things together to give a clear distinction for the people who are also confused by this. So basically it is the difference between the operator language and the path integral language, and it uses the fact that the real-time green function is defined on zero temperature. In the path integral formulation, we tend to talk about the expectation value, so in this language, we write the green function in terms of expectation value of “pure function” or “correlation function”, there is no operator anymore: $ G( x_1,x_2) = \langle \phi(x_1) \phi(x_2) \rangle $ In the operator formulation, we tend to care how operator operates on the states and what is its outcome. In this language, we write green function in expectation value of operators’ matrix elements. $ G(x_1,x_2) = \langle \mathcal{T} [\phi(x_1,t_1) \phi^{\dagger} (x_2,t_2) ]\rangle $ While doing this expectation value calculation, we actually face two situations, finite temperature or zero-temperature. In the zero-temperature scenario, the ground state contributions dominate and we could write the operators expectation value as: $ G(x_1,x_2) = \langle 0| \mathcal{T} [\phi(x_1, t_1) \phi^{\dagger} (x_2,t_2) ]| 0 \rangle $ And that is what we usually call “propagator”.
{ "domain": "physics.stackexchange", "id": 65455, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "path-integral, greens-functions, correlation-functions, propagator, partition-function", "url": null }
filters, wavelet, frequency-response Title: How can we calculate the frequency response of each filter of a Discrete Wavelet Decomposition filter bank? I am using the Multilevel Wavelet decomposition. The decomposition results in a filter bank such as the following: Specifically, I am using one of the Daubechies wavelets, db7. In reality, Level 2 coefficients are the result of filtering by $g(n)$, then down-sampling and filtering by $h(n)$. Suppose that the whole operation I described above is a single filter, i.e. at each level the coefficients get generated by a single filter. I would like to know how to calculate the frequency response of those filters at each level. Is it correct to feed an impulse, i.e. a Kronecker delta function, to the wavelet decomposition and then compute the Fourier transform of the resulting coefficients? Thanks PS: The same question has been asked here and the response was to get the convolution of the impulse response of $g(n)$ and an up-sampled version of $h(n)$. What puzzles me in that response, though, is why the result of the convolution is divided by $\sqrt2$. Also, it seems like more code compared to just calling the wavelet decomposition routine with a Kronecker delta function. The solution under the link you provided seems good enough. The main idea is: (1) we have the impulse response of each filter, (2) take the impulse response of the filter with the slowest sample rate as the input signal, (3) route it back through the chain, (4) perform spectrum estimation of the resulting signal. Division by $\sqrt2$ is for normalization only. For the given set of db coefficients their sum is $\sqrt2$; see here for an explanation. It has no impact on the sense of the result, I think.
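The cascade construction from the answer — convolve the level-1 filter with an up-sampled copy — can be sketched with NumPy. To keep this self-contained, the Haar (db1) filters are hard-coded instead of db7; the structure of the computation is identical:

```python
import numpy as np

# Haar (db1) analysis filters, standing in for db7 to stay self-contained.
h = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass
g = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass

def upsample(x, factor=2):
    """Insert zeros between samples (noble-identity form of the cascade)."""
    y = np.zeros((len(x) - 1) * factor + 1)
    y[::factor] = x
    return y

# Equivalent single filter for the level-2 approximation path:
# lowpass, downsample, lowpass  ==  convolve h with an upsampled copy of h.
eq2 = np.convolve(h, upsample(h))
print(eq2)  # [0.5 0.5 0.5 0.5] for Haar

# Its magnitude frequency response, which is what the question asks for:
H = np.abs(np.fft.rfft(eq2, 512))
```

Feeding a Kronecker delta through the decomposition routine gives the same responses; the explicit convolution just makes the normalization (each Daubechies lowpass sums to $\sqrt2$, so the two-stage DC gain is 2) visible.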
{ "domain": "dsp.stackexchange", "id": 2304, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, wavelet, frequency-response", "url": null }
To simplify a radical, find the prime factorization of the number under the radical sign and look for perfect-power factors. For a square root, group pairs of identical factors; each circled "nth group" moves outside the radical and becomes a single value. For example, 2 · 2 = 4 is a perfect square, and since 32 = 16 × 2 we can rewrite √32 as √16 × √2 = 4√2. Likewise, the factor of 75 that we can take the square root of is 25, so √75 = 5√3. When a perfect cube sits under a cube-root sign, simply remove the radical sign and write the number that is the cube root of the perfect cube. The last step is to simplify the expression by multiplying the numbers both inside and outside the radical sign — for example, a 5 outside of the square root symbol multiplies whatever else comes out.
Two cautions. First, the product rule $$\sqrt{a} \sqrt{b} = \sqrt{a \cdot b}$$ only works if a > 0 and b > 0; this rule does not apply to negative radicands, where the product of two radicals does not equal the radical of their product because imaginary numbers are involved. Second, when taking an even root of a variable expression, put an absolute value around the variable (e.g., around y) in the answer. When simplifying radical fractions, rewrite the fraction as a series of factors in order to cancel common factors, and distribute (or FOIL) to remove parentheses where needed; in general, you want to take out as much as possible. These rules also connect radical notation to rational exponents.
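The factor-out-the-largest-square procedure can be sketched in Python (a simple trial-division implementation; certainly not the only way to do it):

```python
def simplify_sqrt(n):
    """Write sqrt(n) as coef*sqrt(radicand) by pulling out square factors."""
    coef, radicand = 1, n
    k = 2
    while k * k <= radicand:
        # Pull out each square factor k*k as many times as it divides.
        while radicand % (k * k) == 0:
            radicand //= k * k
            coef *= k
        k += 1
    return coef, radicand

print(simplify_sqrt(32))  # (4, 2): sqrt(32) = 4*sqrt(2)
print(simplify_sqrt(75))  # (5, 3): sqrt(75) = 5*sqrt(3)
```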
{ "domain": "gob.mx", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.981735721648143, "lm_q1q2_score": 0.8116124997470036, "lm_q2_score": 0.8267117940706734, "openwebmath_perplexity": 607.5708475436666, "openwebmath_score": 0.7944364547729492, "tags": null, "url": "http://gob2018.morelia.gob.mx/m879h4w8/viewtopic.php?id=becddd-how-to-simplify-radicals-with-a-number-on-the-outside" }
# If f(x) is a polynomial of degree three with leading coefficient 1 and $f(1)=1$, $f(2)=4$, $f(3)=9$, then prove $f(x)=0$ has a root in the interval $(0,1)$ If $f(x)$ is a polynomial of degree three with leading coefficient 1 such that $f(1)=1$, $f(2)=4$, $f(3)=9$, then prove $f(x)=0$ has a root in the interval $(0,1)$. This is a reframed version of "more than one option correct type" questions. I could identify all the other answers but this one got left out. My Attempt: From the information given in the question, the cubic equation is $$x^3-5x^2+11x-6=0$$ Now I don't know how to prove the fact that one of the roots lies in the interval $(0,1)$. As it is a cubic equation I can't even find the roots directly to prove this.
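One possible route (an assumption about the intended approach, not stated in the question) is the intermediate value theorem: the cubic changes sign between 0 and 1. A quick numeric check:

```python
def f(x):
    # The cubic derived in the question: (x-1)(x-2)(x-3) + x^2.
    return x**3 - 5*x**2 + 11*x - 6

# f is continuous and changes sign on (0, 1), so by the intermediate
# value theorem f has a root strictly between 0 and 1.
print(f(0), f(1))  # -6 and 1
```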
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9820137863531805, "lm_q1q2_score": 0.80286940917512, "lm_q2_score": 0.817574478416099, "openwebmath_perplexity": 115.2366371538814, "openwebmath_score": 0.8772744536399841, "tags": null, "url": "https://math.stackexchange.com/questions/2081627/if-fx-is-a-polynomial-of-degree-three-with-leading-coefficient-1-and-f1-1" }
ros-melodic, cv-bridge, ubuntu, ubuntu-bionic Here are the errors, which state that a number of undefined references were found: [100%] Built target rviz_imu_plugin CMakeFiles/imageConverter.dir/src/imageConverter.cpp.o: In function `ImageConverter::ImageConverter()': /home/emanuele/catkin_ws/src/map_ros/src/imageConverter.cpp:89: undefined reference to `cv::namedWindow(cv::String const&, int)' CMakeFiles/imageConverter.dir/src/imageConverter.cpp.o: In function `ImageConverter::~ImageConverter()': /home/emanuele/catkin_ws/src/map_ros/src/imageConverter.cpp:94: undefined reference to `cv::destroyWindow(cv::String const&)' CMakeFiles/imageConverter.dir/src/imageConverter.cpp.o: In function `ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)': /home/emanuele/catkin_ws/src/map_ros/src/imageConverter.cpp:115: undefined reference to `cv::imshow(cv::String const&, cv::_InputArray const&)' /home/emanuele/catkin_ws/src/map_ros/src/imageConverter.cpp:116: undefined reference to `cv::waitKey(int)' collect2: error: ld returned 1 exit status map_ros/CMakeFiles/imageConverter.dir/build.make:334: recipe for target '/home/emanuele/catkin_ws/devel/lib/map_ros/imageConverter' failed
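Undefined references to `cv::` symbols at link time typically mean the OpenCV libraries were never passed to the linker for that target. A hedged sketch of the CMakeLists.txt lines that would link them (the target name comes from the build log above; the exact `find_package` setup may differ on your system):

```cmake
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

add_executable(imageConverter src/imageConverter.cpp)
target_link_libraries(imageConverter ${catkin_LIBRARIES} ${OpenCV_LIBS})
```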
{ "domain": "robotics.stackexchange", "id": 33446, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic, cv-bridge, ubuntu, ubuntu-bionic", "url": null }
quantum-field-theory, supersymmetry At some point, you seem to be puzzled by the relation that you incorrectly claim to "only hold for 3D matrices" that the Hermitian conjugation is equivalent to (minus) the conjugation by the temporal gamma matrix. This actually holds in any dimension and any (natural!) convention in which the gamma matrices are chosen Hermitian or anti-Hermitian for temporal and spatial directions, respectively. (This choice is needed because the individual matrices must square to plus identity or minus identity matrix, depending on their being spacelike or timelike, because it follows from the anticommutator relations of gamma matrices and the signature of the metric.) It's not hard to see why. The anti-Hermitean ones change the sign by the conjugation, which are exactly those that anti-commute with $\gamma_0$: note that $\gamma_0$ commutes with itself but anticommutes with others, so the conjugation by $\gamma_0$ changes the sign of the "others". Again, you shouldn't study obscure supermultiplets in obscure dimensions if you don't understand what gamma matrices do under the Hermitian conjugation. This fact is a basic component of any quantum field theory course. In fact, it is normally discussed in the context of the (single-particle) Dirac equation even before the students start with quantum field theory. The usual pedagogical treatment may have a $d=4$ bias, for obvious reasons, but it's true that gamma matrices only become worth their name if they can be used above $d=2$ or $d=3$, too: knowing gamma matrices in $d=2$ or $d=3$ only means not to know gamma matrices at all; $d=4$ is already complicated enough so that one could guess how it generalizes to any dimension. You must have missed this whole thing in some way. Again, I recommend you some basic course on quantum field theory because it covers all those questions or, less naturally, an equivalent mathematical introduction to representations of Lie groups. The variation of the fermions
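The claim that Hermitian conjugation is equivalent to conjugation by the temporal gamma matrix, i.e. $\gamma^{\mu\dagger} = \gamma^0 \gamma^\mu \gamma^0$, can be checked directly in the $d=4$ Dirac representation (a numerical illustration only, not tied to any particular supermultiplet):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

g0 = block(I2, Z, Z, -I2)                        # Hermitian, squares to +1
gs = [block(Z, s, -s, Z) for s in (sx, sy, sz)]  # anti-Hermitian, square to -1

for g in [g0] + gs:
    # Hermitian conjugation == conjugation by gamma^0.
    assert np.allclose(g.conj().T, g0 @ g @ g0)
print("gamma^mu^dagger = gamma^0 gamma^mu gamma^0 holds for all mu")
```

The spatial matrices anticommute with $\gamma^0$, so conjugation by $\gamma^0$ flips their sign, exactly matching their anti-Hermiticity — which is the argument made in the text.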
{ "domain": "physics.stackexchange", "id": 623, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, supersymmetry", "url": null }
# Finding the rank of a certain general matrix If I have an $m \times n$ matrix $A$ and an $n \times m$ matrix $B$ such that $AB=I_m$, how do I go about calculating the rank of $A$ and the rank of $B$? Any clues would be much appreciated! If $A$ is of order $m \times n$ and $B$ is of order $n \times m$, and $AB$ is nonsingular of order $m$, then both $A$ and $B$ are of full rank, i.e., $$\operatorname{rank} A = \operatorname{rank} B = m.$$ Furthermore, $m \le n$. If $m = n$, both $A$ and $B$ are nonsingular and, since $AB = I$, we see that $B = A^{-1}$. • Thank you. Can you please explain as to how you got to this solution? – AYR Aug 10 '13 at 15:42 • See here. – Vedran Šego Aug 10 '13 at 15:50 A good way to think about it is to consider the subspace that results after $\mathbb{F}^m$ is transformed first by $B$, and then by $A$. Since $AB = I_m$, we have $AB(\mathbb{F}^m) = \mathbb{F}^m$. At no point in that process can $\mathbb{F}^m$ be multiplied by a matrix that has rank $< m$, because there is no way to get an $m$-dimensional subspace out of a lower dimensional subspace by multiplying by a matrix...once you're down to $m-1$ dimensions, you can't get back up.
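A small numerical illustration of the answer (the particular $A$ is an arbitrary full-rank choice with $AB = I_m$):

```python
import numpy as np

m, n = 2, 3
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # m x n
B = A.T                           # n x m; here AB = I_m

assert np.allclose(A @ B, np.eye(m))
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 2 2
```

Both ranks equal $m$, as the answer states, and necessarily $m \le n$ since an $m \times n$ matrix cannot have rank exceeding $\min(m, n)$.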
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.974434788304004, "lm_q1q2_score": 0.8055767278833667, "lm_q2_score": 0.8267117898012104, "openwebmath_perplexity": 62.514225356130666, "openwebmath_score": 0.959274411201477, "tags": null, "url": "https://math.stackexchange.com/questions/464360/finding-the-rank-of-a-certain-general-matrix?noredirect=1" }