graph-theory, optimization, treewidth \cup S_{i+1}]$, which shows that the graph is a disjoint union of stars, which are bicliques. So the graph $G'$ has the same treewidth as $G$, has size polynomial in that of $G$, and can be constructed in polynomial time. Since computing treewidth is NP-complete, so is its restriction to your special graph class.
{ "domain": "cstheory.stackexchange", "id": 1286, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "graph-theory, optimization, treewidth", "url": null }
particle-physics, particle-detectors, cherenkov-radiation Events that are not contained in the Super-Kamiokande detector are also less useful because their energy cannot be measured. The energy of a muon is determined from how far it travels before it stops. This length is usually determined by measuring the total amount of Cherenkov light observed, and the width of the Cherenkov ring also provides energy information. If the muon doesn't stop before reaching the wall, only a lower bound can be set on the muon energy. Such events are less useful for physics analysis and are less likely to be chosen for public display, since published events are usually the "best" events. Perhaps the most interesting class of uncontained Super-Kamiokande events that produce filled-end circles of light are very high energy upward-going muons, such as seen in this image which actually passes completely through the detector.
{ "domain": "physics.stackexchange", "id": 93321, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, particle-detectors, cherenkov-radiation", "url": null }
entanglement, mathematics, no-cloning-theorem \begin{pmatrix}\alpha\\0\\0\\\beta\end{pmatrix} = \alpha \begin{pmatrix}1\\0\\0\\0\end{pmatrix}+\beta \begin{pmatrix}0\\0\\0\\1\end{pmatrix}=\\ \alpha |00\rangle + \beta |11\rangle $$ So the result is two exactly identical qubits: both have the same probability of being zero and the same probability of being one. Since I am sure that the no-cloning theorem can't be wrong, I am asking what is wrong with my reasoning. The no-cloning theorem requires that the result of cloning is two independent copies of the starting qubit, i.e., the state of the system at the end should be $\big(\alpha |0\rangle + \beta |1\rangle \big) \otimes \big(\alpha |0\rangle + \beta |1\rangle \big)$. This is not the state CNOT will give you. The qubits you get after applying CNOT as you described do not, in fact, have independent measurement statistics: as soon as you measure the first qubit, the measurement result of the second qubit will always match the measurement result of the first qubit! If you were able to actually clone the qubit, the measurement results of the second qubit would not depend on the results of measuring the first one. (You can also browse other questions about the no-cloning theorem to find different explanations.)
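A quick numerical check (a sketch using numpy; the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ with the first qubit as control is assumed) confirms that the CNOT output is the entangled state $\alpha|00\rangle + \beta|11\rangle$, not the product state that true cloning would produce:

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11> (first qubit is the control)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

alpha, beta = 0.6, 0.8                 # an example qubit alpha|0> + beta|1>
psi = np.array([alpha, beta])
zero = np.array([1.0, 0.0])

entangled = CNOT @ np.kron(psi, zero)  # state after CNOT on psi (x) |0>
product = np.kron(psi, psi)            # what genuine cloning would give

assert np.allclose(entangled, [alpha, 0, 0, beta])  # alpha|00> + beta|11>
assert not np.allclose(entangled, product)          # not two independent copies
```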
{ "domain": "quantumcomputing.stackexchange", "id": 898, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "entanglement, mathematics, no-cloning-theorem", "url": null }
c++ Is there an easy way to clean this up? It seems like a lot of code duplication. I have no qualms about using #defines, and I cannot use C++11. The most straightforward way to do this is to introduce a new boolean variable horz that is true in the case that you wish to check horizontals. With that simple addition, both routines can now be expressed as one:

// if (horz) is true, check horizontals, else check verticals
aLimit = horz ? sGridHeight : sGridWidth;
bLimit = horz ? sGridWidth : sGridHeight;
for (int a = 0; a < aLimit; a++) {
    Shape* groupShape = nullptr;
    int groupSize = 0;
    for (int b = 0; b < bLimit; b++) {
        GridItem* gridItem = (horz ? GetItem(b, a) : GetItem(a, b));
        Shape* shape = gridItem->GetShape();
        if (shape == groupShape) {
            groupSize++;
            if (groupSize == m_MatchSize) {
                g_Audio->PlaySound(m_GemDestroyedSound.c_str());
                m_Score += shape->GetScore();
                for (int tb = b - m_MatchSize + 1; tb <= b; tb++) {
                    RANDOMIZE(horz ? GetItem(tb, a) : GetItem(a, tb));
                }
                return;
            }
        } else {
            groupSize = 1;
            groupShape = shape;
        }
    }
}

However, there are a number of things that suggest opportunities for further improvement. In particular, the use of pointers is probably not a good idea. For example, these two lines:

GridItem* gridItem = (horz ? GetItem(b, a) : GetItem(a, b));
Shape* shape = gridItem->GetShape();
{ "domain": "codereview.stackexchange", "id": 7458, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++", "url": null }
lagrangian-formalism, gauge-theory, higgs, symmetry-breaking Title: Can we modify the Standard Model to fully break $SU(2)_L \times U(1)_Y$ symmetry? I'm trying to understand how to see whether a gauge symmetry would be fully or partially broken on Higgsing. Specifically, I am looking at Lagrangians of the form $$ \mathcal L = - \frac{1}{4} {F_a}_{\mu \nu} {F_a}^{\mu \nu} + (D_\mu \phi)_a^*(D^\mu \phi)_a + \mu^2 \phi_a^* \phi_a - \lambda (\phi_a^* \phi_a)^2 $$ where the fields $\phi$ are in the fundamental representation of some group $G$ and $D_\mu = \partial_\mu - i {A_a}_\mu \tau^a$, making the gauge fields $A_\mu$ transform in the adjoint representation of $G$. Explicit computations are given in Rubakov's book on classical fields: section 6.1 for $G=U(1)$, section 6.2 for $G=SU(2)$, and section 6.3 for $G=SU(2) \times U(1)$. In the case of $G=U(1)$ we get one massive vector field, so the gauge symmetry is completely broken (as much as "breaking" means anything in this context). For $G=SU(2)$ we get 3 massive vector fields and again the gauge symmetry is completely broken. However, for $G=SU(2) \times U(1)$ there are 3 massive vector fields and one massless one. This situation corresponds to the Standard Model Higgs. I follow the calculations, but am trying to understand: (a) whether there is a general way to look at the group and say how many gauge fields will acquire mass, i.e. how much symmetry will be broken
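For reference, the counting asked about in (a) has a standard compact form (summarized here as a general rule, not taken from the text): if the vacuum expectation value $\phi_0$ is left invariant by a subgroup $H \subseteq G$, then

```latex
\#(\text{massive vectors}) \;=\; \dim G - \dim H,
\qquad
\#(\text{massless vectors}) \;=\; \dim H .
```

For the fundamental of $SU(2)\times U(1)$ the unbroken subgroup is the $U(1)$ generated by $Q = T^3 + \tfrac{1}{2}Y$, so $\dim G - \dim H = 4 - 1 = 3$ vectors become massive and one stays massless; for the fundamental of $SU(2)$ alone no generator annihilates the vacuum, so all $3$ become massive, matching the explicit computations.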
{ "domain": "physics.stackexchange", "id": 47563, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "lagrangian-formalism, gauge-theory, higgs, symmetry-breaking", "url": null }
thermodynamics, home-experiment, thermal-conductivity Title: A thermally conductive gap filler does not conduct the heat as expected. Where does the heat go? I have a mug warmer, which is a hot plate that I've measured heats up to 56C. It came with a mug with a flat bottom, which gets its contents warmed up to about 53C after some minutes. My understanding of the system is that the 56C of the hot plate is a boundary condition that would force the mug to eventually reach that same temperature if there were no heat lost to the environment. But of course there is some heat loss, which is why the mug stabilizes at about 53C. Now, I have added to the hot plate a 5mm layer of a compressible thermally conductive gap filler, which is normally used to fill the gap between a heat-generating electronic component and a heatsink; my goal being to be able to use mugs with a non-flat bottom, like any off-the-shelf ceramic mug, while still having good thermal contact. The problem is, I found that somehow the gap filler is not transferring the heat as well as I expected even when there is full contact, and I don't understand why. The surface of the gap filler gets as hot as the hot plate: 56C. However, even the flat-bottomed mug on top of that flat gap filler gets only to 41C, no matter how long I wait. As far as I can tell, every surface here is flat and making good contact. So, how to explain the fact that just introducing the gap filler lowers (so much!) the maximum temperature reached by the same mug? It's as if the filler is sinking part of the heat, which makes no sense - right? What am I missing? Adding some detail: The mug is about 10 cm high and 8 cm in diameter, so its side surface is about 251 $cm^2$. It is not double-walled, so it gets hot in every case. The gap filler is a circle, 5 mm high and 5.2 cm in diameter covering the hot plate, so its side surface is about 8 $cm^2$. So, where does the heat "go"?
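One way to sanity-check the situation described above is a simple series-resistance model (a sketch; the conductivity, convection coefficient, and areas below are assumed round numbers, not measured values): heat conducts from the plate through the filler into the mug, while the mug loses heat to the room, and the mug settles where inflow equals outflow:

```python
import math

# Assumed illustrative values (not measurements)
T_plate = 56.0                     # deg C, hot-plate boundary temperature
T_room = 20.0                      # deg C, ambient
k_filler = 3.0                     # W/(m K), typical gap-filler conductivity
L = 0.005                          # m, filler thickness
A_contact = math.pi * 0.026**2     # m^2, 5.2 cm diameter contact disc
h_loss = 12.0                      # W/(m^2 K), rough free-convection coefficient
A_loss = 0.030                     # m^2, exposed mug surface (side + top)

R_filler = L / (k_filler * A_contact)  # conduction resistance of the pad, K/W
R_loss = 1.0 / (h_loss * A_loss)       # convective resistance to the room, K/W

# Steady state: (T_plate - T_mug)/R_filler = (T_mug - T_room)/R_loss
T_mug = (T_plate / R_filler + T_room / R_loss) / (1 / R_filler + 1 / R_loss)
print(round(T_mug, 1))
```

Even with generous numbers, the pad's conduction resistance (plus the hard-to-see contact resistance at its two new interfaces, not modeled here) forms a voltage-divider-like chain with the mug's heat loss, so the mug must settle well below the plate temperature; the heat "goes" out through the mug's large exposed surface.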
{ "domain": "physics.stackexchange", "id": 71121, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, home-experiment, thermal-conductivity", "url": null }
type-theory, type-inference, types-and-programming-languages I think my question is actually whether any of Γ, t, or T can have free type variables (i.e. ones that haven't been introduced by the type-inference rules). $\Gamma$ can have type variables not introduced by the rules. For example, the term $(\lambda y : B.y)$ of the polymorphic type $B \to B$ is typable under some environment like $\Gamma = x:A$ (and to avoid ambiguity, one needs to make sure that the identifiers $x$ and $y$ are distinct). This would be another way to encode constants in a language. For example, instead of having the rule CT-Zero, we could always type a term under the environment $\Gamma_0 = 0 : \mathrm{Nat}$. Regarding the term $t$, this term is given, and therefore so are the type variables that it contains. So, the type variables in $t$ are certainly not introduced by the rules. The rules only manipulate the environment $\Gamma$, the type $T$, the unification variables $\mathcal{X}$ and the constraint set $\mathcal{C}$. The term $t$ is only being read. Regarding the type $T$, it can also contain type variables not mentioned in $\mathcal{X}$. For example, try typing the term $(\lambda f: A \to A. f\,0)$. When you reach rule CT-Var, the type $T$ in the conclusion of the rule will be $A\to A$, where $A$ will not appear in $\mathcal{X}$. Edit 1 I still don't fully understand the question. What do you mean by "standalone" derivations? Anyway, there is something I'd like to add: e.g. I can apply CT-App to two subderivations for which $T_2$ contains type variables in $\mathcal{X}_1$.
{ "domain": "cs.stackexchange", "id": 18475, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "type-theory, type-inference, types-and-programming-languages", "url": null }
climate-change, sea-level Title: "Five of the Solomon Islands disappeared" due to sea level rise, how is this possible so quickly? The text of the introduction to the BBC Podcast Sea Levels Rise; The Compass, Living on the Edge Episode 1 of 4 says: Five of the Solomon Islands have disappeared, many more are becoming uninhabitable. For Kerry and Sally, climate change is not a theory - it is what has made them abandon their island and the graves of their ancestors. They see themselves as lucky - they had family land to move to and the skills to build new homes on stilts - but they are resigned to moving again. Award-winning journalist Didi Akinyelure visits her home city of Lagos to find out the latest solution to sea level rise in West Africa. The glass towers of the new financial district of Eko Atlantic are protected from the waves by state of the art sea defences. The residents of the luxury apartments should keep their feet dry whatever the climate throws at them. That may be small comfort for their unprotected neighbours in the shanty town on the lagoon, Makoko, but they’re experts in survival against the odds. Certainly sea level is rising, on the order of perhaps 15 centimeters in the last century judging from this plot, and the New Scientist article Five Pacific islands vanish from sight as sea levels rise certainly adds credence to this. Answers to the question Sea Level Rise due to Climate Change shed some light on human-induced climate change. Between about 04:00 and 06:00 in the podcast, Simon Albert, a climate change scientist from University of Queensland describes the situation in the Solomons. Here is my best attempt at a transcription of a small part of the podcast:
{ "domain": "earthscience.stackexchange", "id": 1388, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "climate-change, sea-level", "url": null }
javascript, datetime, comparative-review, file-system Title: Generate next available filename based on current date I have a bit of code that I would like your collective opinions on. Currently, my working code has two return statements. I'm not sure how I feel about having multiple points of exit; however, the alternative introduces the inclusion of else, which looks like it will eventually be harder to maintain if more logic is added to the code later on. My current code (feel free to comment on other aspects as well):

function generateFileName() {
    var k = 0;
    var today = new Date();
    today = (today.getMonth() + 1).toString() + '-' + today.getDate().toString() + '-' + today.getFullYear().toString();
    while (true) {
        if (!fs.existsSync('./' + today + '.pdf')) {
            return './' + today + '.pdf';
        }
        if (!fs.existsSync('./' + today + '(' + k + ').pdf')) {
            return './' + today + '(' + k + ').pdf';
        }
        k++;
    }
}
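For comparison, one way to get a single exit point without an else chain is to compute the candidate name in the loop condition (a sketch; the `exists` predicate is injected here instead of calling `fs.existsSync` directly, purely to keep the example self-contained and testable):

```javascript
// Returns the first name in the sequence today.pdf, today(0).pdf, today(1).pdf, ...
// that `exists` reports as free; note the single return statement.
function nextAvailableName(exists, today) {
    var name = './' + today + '.pdf';
    for (var k = 0; exists(name); k++) {
        name = './' + today + '(' + k + ').pdf';
    }
    return name;
}
```

With the real filesystem this would be called as `nextAvailableName(fs.existsSync, today)`.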
{ "domain": "codereview.stackexchange", "id": 25716, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, datetime, comparative-review, file-system", "url": null }
virology, epidemiology, infectious-diseases, coronavirus, influenza Title: Is COVID-19 more deadly than swine flu? I have a question about the novel coronavirus and swine flu. How do the death rates compare between the two diseases? How do the transmission mechanisms and rates of transmission compare? Was a vaccine developed more quickly for swine flu? I ask because I don't recall this level of global disruption during the swine flu outbreak. “Swine flu” is an obsolete name. The official name for the virus that was briefly called “swine flu” is “H1N1pdm09”. H1N1pdm09 has a mortality rate of around 0.01-0.1%. That’s roughly 10- to 20-fold lower than COVID-19. Its R0 was estimated at between 1 and 2, which is roughly half the estimates for SARS-CoV-2 (the virus responsible for COVID-19). A vaccine for H1N1pdm09 was available in the fall of 2009. It was possible to make it that quickly because it’s just another influenza strain, and the normal techniques for making vaccines against influenza strains worked fine. Most importantly, H1N1pdm09 never went away. It is still one of the main influenza strains circulating today, and if you were vaccinated for influenza since 2010 you received a vaccine against it. CDC info here: 2009 H1N1 Pandemic (H1N1pdm09 virus) WHO info here: Evolution of a pandemic: A(H1N1) 2009, April 2009 – August 2010, 2nd edition
{ "domain": "biology.stackexchange", "id": 10392, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "virology, epidemiology, infectious-diseases, coronavirus, influenza", "url": null }
computational-chemistry Ref. [2] does not discuss derivatives, though I assume the same method ought to be possible as in the first part; things might only be complicated by the fact that we need to integrate out a coordinate. Maybe not in the first-derivative case, but in the second-derivative case this would present a problem, as far as I can see.
{ "domain": "chemistry.stackexchange", "id": 8689, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computational-chemistry", "url": null }
statistical-mechanics, temperature By inverse Laplace transform we can get $$ \frac{1}{kT} = \frac{1}{\omega} \frac{\Delta E}{g} (2\pi)^{3N/2} \prod_{i=1}^Nm_i^{3/2} \int \frac{(E-U)^{3N/2-2}}{\Gamma(3N/2-1)} \Theta(E-U) d^{3N}r$$ where $\Theta$ is the Heaviside step function. On the other hand, the ensemble average of the inverse of kinetic energy is $$\langle K^{-1}\rangle = \langle (E-U)^{-1}\rangle = \frac{1}{\omega} \frac{\Delta E}{g} \int \frac{\delta(E-H)}{E-U} d^{3N}pd^{3N}r$$ The Laplace transform of this is \begin{aligned} \mathcal{L}\left[\langle K^{-1}\rangle\right] &= \frac{1}{\omega} \frac{\Delta E}{g} \int \frac{e^{-sH}}{H-U}d^{3N}pd^{3N}r \\&= \frac{1}{\omega} \frac{\Delta E}{g} \int \frac{ \exp(-s\sum_{i=1}^N \frac{p_i^2}{2m_i}) }{ \sum_{i=1}^N \frac{p_i^2}{2m_i} } d^{3N}p \int \exp(-sU) d^{3N}r \end{aligned}
{ "domain": "physics.stackexchange", "id": 85611, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "statistical-mechanics, temperature", "url": null }
radioactivity Title: Reasoning behind alpha and beta decay not occurring at the same time for any element Is it true that alpha decay can occur after beta decay, and that alpha decay and beta decay cannot happen at the same time? If yes, is my reasoning correct: for both decays to occur together, they would need to jointly excite one neutron and one proton to convert into an antineutrino, an alpha particle and a new element Y, which is not favourable. Hence they cannot occur together, but one by one they can; is this also the reason why an element can only be an alpha or a beta emitter but not both? No, this is not correct. Some isotopes can decay both via alpha decay and via beta decay, e.g. many isotopes of Bismuth. In decay chains, an isotope can first decay via an alpha decay, and then the daughter can decay via a beta decay. And vice versa. See e.g. the Radon decay chains. As for an alpha decay happening at the same instant as a beta decay, that probability is vanishingly small; the two processes are mediated by different forces (alpha decay goes via the strong/EM force, beta decay via the weak force) and really have nothing to do with each other.
{ "domain": "physics.stackexchange", "id": 87230, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "radioactivity", "url": null }
php, security
case 'nu':
    $text = $str;
    $striptags = true;
    $search = ["40","41","58","65","66","67","68","69","70",
        "71","72","73","74","75","76","77","78","79","80","81",
        "82","83","84","85","86","87","88","89","90","97","98",
        "99","100","101","102","103","104","105","106","107",
        "108","109","110","111","112","113","114","115","116",
        "117","118","119","120","121","122"
    ];
    $replace = ["(",")",":","a","b","c","d","e","f","g","h",
        "i","j","k","l","m","n","o","p","q","r","s","t","u",
        "v","w","x","y","z","a","b","c","d","e","f","g","h",
        "i","j","k","l","m","n","o","p","q","r","s","t","u",
        "v","w","x","y","z"
    ];
    $entities = count($search);
    for ($i = 0; $i < $entities; $i++) {
{ "domain": "codereview.stackexchange", "id": 2668, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, security", "url": null }
ros, c++, catkin Title: Catkin/ROS “undefined reference to” I'm trying to build a project with ROS, but I keep getting "undefined reference to <Class::Function>" errors, for example:
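The snippet is cut off here, but for context: "undefined reference" in a catkin build is a link-time problem rather than a compile-time one — the translation unit defining the missing symbol is not compiled into, or linked against, the target. A typical fix in CMakeLists.txt looks like this (a sketch; the target and file names are placeholders, not from the question):

```cmake
# Make sure the source file that defines the missing symbol is part of the
# target, and that the target links against the catkin libraries.
add_executable(my_node src/main.cpp src/my_class.cpp)  # my_class.cpp defines Class::Function
target_link_libraries(my_node ${catkin_LIBRARIES})
```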
{ "domain": "robotics.stackexchange", "id": 24781, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, c++, catkin", "url": null }
java, game, server Title: MMO Game Server I've been building an MMO in Java for a game that will have clients built with libGDX. I have already built clients for the browser, desktop, iOS, and Android. To accommodate multiple platforms, websockets are used, and all messages sent back and forth are in the form of strings. The purpose of the scheduled refreshAllClients() method is to try to update all clients at a 100ms delay. I have heard that this is a good approach for an MMO, but I am not totally sure about this. I have removed some of the code (mostly password handling and the saving and loading of game and player data) so that it is more concise. I would like to hear about the readability and overall organization of the code. This is the largest project that I have written in Java so far, and I believe I am still not following all the best practices or taking advantage of all the language features. If you see any architectural failings, those would be great to hear about too. If you would like to see any other methods, please let me know. The main function of the BZLogger class is to log to a file while optionally allowing logging to the console. The websocketServer class has listeners for things like onMessage or onOpen, but everything is forwarded to the Server. I'm using the java_websocket library.

public class Server {
    private final ServerSockets websocketServer;
    private final int maxIpConnectionsThreshold = 10;
    private ArrayList<Client> clients = new ArrayList<Client>();
    private HashMap<String, Integer> recentIpAddresses = new HashMap<String, Integer>();
    private MainGame game;
    public boolean isLoadingOrSaving = false;
{ "domain": "codereview.stackexchange", "id": 13424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, server", "url": null }
The left-hand side of this equation is called the net flux of the magnetic field out of the surface, and Gauss's law for magnetism states that it is always zero. Flux integral example problem: Evaluate $\iint_S \mathbf{F}\cdot\hat{\mathbf{n}}\,dS$ where $\mathbf{F}=x^4\,\hat{\imath}+2y^2\,\hat{\jmath}+z\,\hat{k}$, $S$ is the half of the surface $\frac{1}{4}x^2+\frac{1}{9}y^2+z^2=1$ with $z \ge 0$, and $\hat{\mathbf{n}}$ is the upward unit normal. Introduction: What I want to do tonight is • define the concept of "flux", physically and mathematically • see why an integral is sometimes needed to calculate flux • see why in 8. ...where $C$ is positively oriented. Multivariable calculus 3. Example 1: Let us verify the Divergence Theorem in the case that $\mathbf{F}$ is the vector field $\mathbf{F}(x,y,z)=x^2\,\mathbf{i}+y^2\,\mathbf{j}+z^2\,\mathbf{k}$ and $E$ is the cube that is cut from the first octant by the planes $x=1$, $y=1$ and $z=1$. Since the cube has six faces, we need to compute six surface integrals in order to compute $\iint \mathbf{F}\cdot\mathbf{n}\,dS$. The magnetic flux formula is given by $\Phi_B = B\,A$, where $B$ = magnetic field and $A$ = surface area. The electric flux over a surface $S$ is therefore given by the surface integral: $\Psi_E = \iint_S \mathbf{E}\cdot d\mathbf{S}$, where $\mathbf{E}$ is the electric field and $d\mathbf{S}$ is a differential area on the closed surface $S$ with an outward-facing surface normal defining its direction. After learning about what flux in three dimensions is, here you have the chance to practice with an example. However, we know that this is only part of the truth, because by Faraday's law of induction, if a closed circuit has a changing magnetic flux through it, a circulating current will arise, which means there is a nonzero voltage around the circuit.
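The cube example mentioned above can be checked symbolically. The sketch below (sympy assumed available; the field of Example 1 is taken to be $\mathbf{F} = x^2\mathbf{i} + y^2\mathbf{j} + z^2\mathbf{k}$, reconstructed from the garbled text) compares the volume integral of $\nabla\cdot\mathbf{F}$ with the total outward flux through the six faces:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = (x**2, y**2, z**2)  # Example 1's field (assumed reconstruction)

# Volume integral of div F over the unit cube [0,1]^3
div_F = sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))
lhs = sp.integrate(div_F, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Outward flux through the six faces (normals +/- e_i)
flux = 0
for i, v in enumerate((x, y, z)):
    others = [w for w in (x, y, z) if w != v]
    # face v = 1 has outward normal +e_i; face v = 0 has outward normal -e_i
    flux += sp.integrate(F[i].subs(v, 1), (others[0], 0, 1), (others[1], 0, 1))
    flux -= sp.integrate(F[i].subs(v, 0), (others[0], 0, 1), (others[1], 0, 1))

assert lhs == flux == 3  # both sides of the Divergence Theorem agree
```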
Consider the mass balance in a stream tube by using the integral form of the conservation of mass equation.
{ "domain": "dearbook.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9820137910906878, "lm_q1q2_score": 0.8074010586395752, "lm_q2_score": 0.822189123986562, "openwebmath_perplexity": 939.595995278296, "openwebmath_score": 0.7990542650222778, "tags": null, "url": "http://dearbook.it/izsw/flux-integral-examples.html" }
php Now, for example:

$Class = new Session();
if ($Class->Set("Key", "Value")) {
    echo "Created Value/key within the Session array, and returned true";
}

I understand that people like to validate success/failure with a boolean based on true and false, so I have included the correct return, but my overall question is just to confirm what I believe is correct regarding the returns: is it a perfectly acceptable thing to do, over having no returns, and is it best practice? As for your question, at least Get should not return false if the value is not set; otherwise the caller cannot tell whether the actual value is false or whether the value is not set. Other than that, I would suggest you use PHP_SESSION_DISABLED, PHP_SESSION_NONE and PHP_SESSION_ACTIVE instead of 1, 2 and 3. To have Status_Session call session_status is confusing; maybe call it session_status_string? Also, this should not return false, but maybe "Session status could not be determined". I don't understand this:

if (session_status() === 1){
    $this->Session_Started = true;

If session_status() == 1, then sessions are disabled, so why would you set Session_Started to true? session_start() returns a boolean that indicates whether the session actually started; you should check that boolean instead of assuming success with $this->Session_Started = true;. session_start() can also resume a session. I am not a PHP master, but it seems to me you could replace init() with:

public function init(){
    /* Start or Resume a session */
    $this->Session_Started = session_start();
    return $this->Session_Started;
}

Finally, if you are adamant about checking whether the session is started or not in Set, would it not make sense to start the session for the caller, so that your code works auto-magically?

public function Set($Key = false, $Value){
    if (!isset($_SESSION)) {
        $this->init();
    }
    $_SESSION[$Key] = $Value;
}
{ "domain": "codereview.stackexchange", "id": 4359, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php", "url": null }
c#, strings, html, escaping

// same as `String.StartsWith` but accepts a start index
public static bool StartsWithAt(this string text, int startIndex, string value)
{
    if (text.Length - startIndex < value.Length)
        return false;
    for (int i = 0; i < value.Length; i++)
    {
        if (text[startIndex + i] != value[i])
            return false;
    }
    return true;
}

I didn't test it a lot, but you may.
{ "domain": "codereview.stackexchange", "id": 40930, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, strings, html, escaping", "url": null }
ros, ros-info Title: Use ROS_INFO to output std::vector data If I'm dealing with std::vector data in C++ and I'd like to print it on the output stream, I can do something similar to what is shown below:

std::cout << "Data Retrieved: \n";
std::copy(data.begin(), data.end(), std::ostream_iterator<double>(std::cout, " "));
std::cout << std::endl;

How might I reproduce this behavior using ROS_INFO to display the data on the console as new data is received? Thank you!

Originally posted by rhw0023 on ROS Answers with karma: 3 on 2021-10-28
Post score: 0

It should be easy to use a std::stringstream to accomplish what you want. Moreover, it might be a bit cleaner to use ROS_INFO_STREAM for this particular application. Something like this:

std::stringstream ss;
ss << "Data Retrieved: \n";
std::copy(data.begin(), data.end(), std::ostream_iterator<double>(ss, " "));
ss << std::endl;
ROS_INFO_STREAM(ss.str());

Originally posted by jarvisschultz with karma: 9031 on 2021-10-29
This answer was ACCEPTED on the original site
Post score: 2

Original comments
Comment by rhw0023 on 2021-10-29: This worked out great! Thank you!
{ "domain": "robotics.stackexchange", "id": 37065, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-info", "url": null }
The divergence: we want to discuss a vector field $f$ defined on an open subset of $\mathbb{R}^n$. As we will see, the analogous formula, known as Kirchhoff's formula, can be derived through the following steps. Cylindrical polar coordinates and spherical coordinates. Legendre, a French mathematician who was born in Paris in 1752 and died there in 1833, made major contributions to number theory, elliptic integrals before Abel and Jacobi, and analysis. Note that when $h = 0$ the coordinates... [p. 386, The College Mathematics Journal]. Triple integral in spherical coordinates. Example: find the volume of a sphere of radius $R$. Rectangular coordinates are depicted by 3 values, $(x, y, z)$. Let a triple integral be given in the Cartesian coordinates $x, y, z$ in the region $U$: $$\iiint\limits_U f\left( x,y,z \right)\,dx\,dy\,dz.$$ When converted into spherical coordinates, the new values will be depicted as $(\rho, \theta, \varphi)$. The Jacobian is the determinant of a matrix of partial derivatives. Polar/cylindrical coordinates: $x = r\cos\theta$, $y = r\sin\theta$, $r^2 = x^2 + y^2$, $\tan\theta = y/x$, $dA = r\,dr\,d\theta$, $dV = r\,dr\,d\theta\,dz$. It is easier to calculate triple integrals in spherical coordinates when the region of integration $U$ is a ball (or some portion of it) and/or when the integrand is of the form $f\left( x^2 + y^2 + z^2 \right)$. Laplace's equation in spherical coordinates. In mathematics, the Pythagorean theorem, also known as Pythagoras' theorem, is a fundamental relation in Euclidean geometry among the three sides of a right triangle.
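The sphere-volume example above can be carried out symbolically. A sketch (sympy assumed available) using the spherical volume element $dV = \rho^2 \sin\varphi \, d\rho \, d\varphi \, d\theta$:

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', nonnegative=True)
R = sp.symbols('R', positive=True)

# Volume of a ball of radius R: integrate the spherical Jacobian rho^2 sin(phi)
V = sp.integrate(rho**2 * sp.sin(phi),
                 (rho, 0, R), (phi, 0, sp.pi), (theta, 0, 2 * sp.pi))

# Matches the familiar closed form (4/3) pi R^3
assert sp.simplify(V - sp.Rational(4, 3) * sp.pi * R**3) == 0
```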
{ "domain": "ol3roma.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9915543738075945, "lm_q1q2_score": 0.8174988355515703, "lm_q2_score": 0.824461932846258, "openwebmath_perplexity": 585.6846455806344, "openwebmath_score": 0.8959506750106812, "tags": null, "url": "http://ol3roma.it/jacobian-of-spherical-coordinates-proof.html" }
python, python-3.x
def is_metathesis_pair(word1, word2):
    assert word1 != word2, "Words are the same word."
    assert len(word1) == len(word2), "Words are not equal in length."
    assert sorted_word(word1) == sorted_word(word2), (
        f"Words '{word1}', '{word2}' are not anagrams of eachother.")
    letter_pairs = zip(word1, word2)
    count = 0
    for letter_1, letter_2 in letter_pairs:
        if count > 2:
            return False
        if letter_1 != letter_2:
            count += 1
    if count == 2:
        return True
    elif count > 2:
        return False
    else:
        return "Error."


def find_metathesis_pairs(anagram_dict):
    """
    Takes a dict mapping word families to words, and looks in each to find
    words that are metathesis pairs (ie words that are the same, except for
    a single pair of swapped letters). Returns a list of tuples of these
    pairs.

    anagram_dict: dict
    returns: list
    """
    metathesis_pairs_list = []
    for word_family in anagram_dict.keys():
        # Prevent iterating over a pair more than once by iterating over a word
        # and subsequent words in list.
        for word1_index in range(len(anagram_dict[word_family])):
            word1 = anagram_dict[word_family][word1_index]
            for word2 in anagram_dict[word_family][word1_index+1:]:
                if word1 != word2:
                    if is_metathesis_pair(word1, word2):
                        metathesis_pairs_list.append((word1, word2))
    return metathesis_pairs_list


def metathesis_pairs(input_filename):
    """
    Return list of metathesis pairs from a file (ie all pairs of words that
    differ by swapping two letters).

    input_filename: text file
    returns: list of tuples
    """
    wordlist = make_words_list(input_filename)
    anagram_dict = anagram_sets(wordlist)
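For comparison, a tighter version of the pair test is possible (a hypothetical rewrite, not part of the submission): for two anagrams of equal length, being a metathesis pair is equivalent to differing in exactly two positions, so the running counter and the "Error." branch can go away entirely:

```python
def is_metathesis_pair_simple(word1, word2):
    """True iff word1 and word2 are anagrams differing in exactly two positions.

    For anagrams, two differing positions are necessarily a swapped pair of
    letters, so no further check is needed.
    """
    if len(word1) != len(word2) or sorted(word1) != sorted(word2):
        return False
    diff_count = sum(a != b for a, b in zip(word1, word2))
    return diff_count == 2
```

Returning plain booleans (rather than asserting, or returning the string "Error.") also lets the caller filter candidate pairs without pre-checking them.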
{ "domain": "codereview.stackexchange", "id": 29417, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x", "url": null }
Letting $T^{\ast}:V^{\ast}\to V^{\ast}$ denote the dual homomorphism, we also have $T^{\star}={B_{R}}^{-1}\circ T^{\ast}\circ B_{R}.$ Similarly, we define the left adjoint ${}^{\star}T\in L(V,V)$ by ${}^{\star}T={B_{L}}^{-1}\circ T^{\ast}\circ B_{L}.$ We then have $B(u,Tv)=B({}^{\star}Tu,v),\quad u,v\in V.$ If $B$ is either symmetric or skew-symmetric, then ${}^{\star}T=T^{\star}$, and we simply use $T^{\star}$ to refer to the adjoint homomorphism.

1. If $B$ is a symmetric, non-degenerate bilinear form, then the adjoint operation is represented, relative to an orthogonal basis (if one exists), by the matrix transpose.
2. If $B$ is a symmetric, non-degenerate bilinear form, then $T\in L(V,V)$ is said to be a normal operator (with respect to $B$) if $T$ commutes with its adjoint $T^{\star}$.
3. An $n\times m$ matrix may be regarded as a bilinear form over $K^{n}\times K^{m}$. Two such matrices, $B$ and $C$, are said to be congruent if there exists an invertible $P$ such that $B=P^{T}CP$.
4. The identity matrix $I_{n}$ on $\mathbb{R}^{n}\times\mathbb{R}^{n}$ gives the standard Euclidean inner product on $\mathbb{R}^{n}$.
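For the symmetric case, the adjoint can be checked numerically. In the sketch below (my own illustration, not from the entry), the form $B$ is represented by a symmetric invertible Gram matrix $G$, so that $T^{\star}=G^{-1}T^{T}G$ and $B(u,Tv)=B(T^{\star}u,v)$:

```python
import numpy as np

rng = np.random.default_rng(0)

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # symmetric, non-degenerate Gram matrix for B
T = rng.standard_normal((2, 2))  # an arbitrary linear operator

def B(u, v):
    # The bilinear form B(u, v) = u^T G v.
    return u @ G @ v

# Adjoint of T with respect to B: T* = G^{-1} T^T G.
T_star = np.linalg.inv(G) @ T.T @ G

u = rng.standard_normal(2)
v = rng.standard_normal(2)
assert np.isclose(B(u, T @ v), B(T_star @ u, v))
```

When $G=I$ (an orthonormal basis for the standard inner product), $T^{\star}$ reduces to the matrix transpose, matching remark 1 above.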
{ "domain": "planetmath.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9948603896643566, "lm_q1q2_score": 0.8110425053247646, "lm_q2_score": 0.8152324826183822, "openwebmath_perplexity": 151.42562348333868, "openwebmath_score": 0.9816983342170715, "tags": null, "url": "http://planetmath.org/BilinearForm" }
cc.complexity-theory, np-hardness, big-list, randomness, reductions Title: Problems that are NP-complete under randomized or P/poly reductions. In this question, we appear to have identified a natural problem that is NP-complete under randomized reductions, but possibly not under deterministic reductions (although this depends on which unproven assumptions in number theory are true). Are there any other such problems known? Are there any natural problems that are NP-complete under P/poly reductions, but not known to be under P reductions? Under randomized reduction with probability $\frac{1}{2}$ (known also as $\gamma$-reducibility, on the discussion of randomized reductions see "On Unique Satisfiability and Randomized Reductions") problems Linear divisibility Binary quadratic diophantine equations are NP-complete, but the same is not known for deterministic reductions (as far as I know, for slightly out-dated discussion of this situation see here). $\gamma$-reducibility was introduced in the paper "Reducibility, randomness, and intractibility" by Leonard Adleman and Kenneth Manders (proofs for the problems above were proposed also there). There are other such examples in "A Catalog of Complexity Classes", but I haven't checked what is known about their NP-completeness under deterministic reductions.
{ "domain": "cstheory.stackexchange", "id": 684, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, np-hardness, big-list, randomness, reductions", "url": null }
algorithms, streaming-algorithm, simulation, lower-bounds

Title: Is every linear-time algorithm a streaming algorithm?

Over at this question about inversion counting, I found a paper that proves a lower bound on space complexity for all (exact) streaming algorithms. I have claimed that this bound extends to all linear time algorithms. This is a bit bold as in general, a linear time algorithm can jump around at will (random access), which a streaming algorithm cannot; it has to investigate the elements in order. It may perform multiple passes, but only constantly many (for linear runtime).

Therefore my question:

Can every linear-time algorithm be expressed as a streaming algorithm with constantly many passes?

Random access seems to prevent a (simple) construction proving a positive answer, but I have not been able to come up with a counterexample either. Depending on the machine model, random access may not even be an issue, runtime-wise. I would be interested in answers for these models:
{ "domain": "cs.stackexchange", "id": 426, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, streaming-algorithm, simulation, lower-bounds", "url": null }
• I put mine in the comments because I'm not sure it answers the actual question - namely, is there a convergence test which works for series with definite integral summand? Jun 2 '17 at 2:14 • @MichaelBiro:a definite integral is still a number, so why are you looking for different criteria? They are just the same. And they all derive from elementary inequalities, so when in doubt you may always "go back to the stone age". Jun 2 '17 at 2:16 • Yes, but OP asked the question of whether there is a test specifically designed for summing integrals, not for the answer to that specific sum. Jun 2 '17 at 2:20 • @MichaelBiro: in such a case I guess the answer is just no, because there is no need for such a thing. Definite integrals are just numbers. Jun 2 '17 at 2:21 • Thank you and @MichaelBiro for the critical insight! However, it doesn't really follow from $x \in {[0, 1]}$ that $1 - 1 / (x + 1) \geq 1 / 2$...? It seems to me that $0 \leq 1 - 1 / (x + 1) \leq 1 / 2$ for $x \in {[0, 1]}$. Jun 2 '17 at 2:26
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.951142217223021, "lm_q1q2_score": 0.8006959107548359, "lm_q2_score": 0.8418256452674008, "openwebmath_perplexity": 238.75844467288178, "openwebmath_score": 0.8943539261817932, "tags": null, "url": "https://math.stackexchange.com/questions/2306388/convergence-test-for-series-with-definite-integral-summand" }
botany, plant-physiology, plant-anatomy Title: How do plants grow year after year even though they die? How do plants grow, die, and then grow again? For instance, when my plants die during the winter, how do they grow again next year? Does it have something to do with the root system? Or do they even die? It depends on the type of plant, but basically not all of the plant dies. Plants have evolved a number of strategies for winter* dormancy. These are common ones, but probably not an exhaustive list. Deciduous trees and bushes simply drop their leaves in the fall, and so may look "dead" to the unskilled eye - though with practice, it's usually easy to distinguish between dead and dormant. Then when the weather warms in the spring, new leaves grow. Other perennial plants may lose some or all of their top growth, even dying back to ground level, but the roots will be alive, and will start growing when the ground warms. Still other plants have developed specialized underground structures like bulbs & rhizomes - think daffodils, tulips, irises, and similar. The rest of the plant dies, only to grow again from the bulb when conditions are right. It's worth noting that most, if not all, of these are used for propagation as well, often naturally, and frequently with a bit of human help. Bulbs and rhizomes multiply: the daffodil bulb you planted a few years ago may now be a dozen bulbs, each of which can be moved to grow new ones. Many perennials can be increased by dividing the root mass into pieces, each of which will become a new plant. And cuttings from many trees & bushes can be induced to form new root systems, and become new plants. Or summer, dry season, &c. For simplicity, I'll just say "winter".
{ "domain": "biology.stackexchange", "id": 8491, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "botany, plant-physiology, plant-anatomy", "url": null }
python, beginner, postgresql, cursor # --------------------------------- # A2.msql-table4 > A3.psql-table4 | # --------------------------------- # cur_msql.execute("SELECT l.field1, r.field2, l.field3, l.field4, l.field5, l.field6, l.field7, l.field8 \ FROM msql-table4 l, msql-table0 r \ WHERE l.field2=r.field2") for row in cur_msql: try: cur_psql.execute("INSERT INTO psql-table4(field1, field2, field3, field4, field5, field6, field7, field8, field9) \ VALUES(%(field1)s, %(field2)s, %(field3)s, %(field4)s, %(field5)s, %(field6)s, %(field7)s, %(field8)s, %(field9)s, NULL, DEFAULT)", row) except psycopg2.Error as e: print "cannot execute that query!!", e.pgerror sys.exit("Some problem occured with that query! leaving early this lollapalooza script") # --------------------------------- # A2.msql-table5 > A3.psql-table5 | # --------------------------------- # cur_msql.execute("SELECT field1, field2, field3, field4, field5, field6, field7, field8, field9, field10 FROM msql-table5")
{ "domain": "codereview.stackexchange", "id": 16161, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, beginner, postgresql, cursor", "url": null }
java, object-oriented, interface, immutability

@Override
public StudentType retrieveStatus() {
    return type;
}
}

If there is any new type of status, it can be added by implementing StudentStatus, no need to modify the Student class.

Use:

List<String> documents = new ArrayList<String>();
documents.add("Passport");
documents.add("Drivers License");
StudentStatus domestic = new Domestic();
StudentStatus international = new International(documents);
List<Student> students = new ArrayList<Student>();
students.add(new Student("123456789","Susan","Ceesharp", domestic));
students.add(new Student("987654321","Bill","Finalclass", international));
for(Student display : students) {
    System.out.println(display.getID() + " " + display.retrieveStatus());
}

I'm aware none of my classes have validation, I kept these out to keep my code clear and concise.

Goal: Keep the Student immutable without violating SOLID.

You can shorten / simplify your StudentStatus implementations. Both International and Domestic have a fixed return value for retrieveStatus(). You can leave out the type field and add the value directly in the method. retrieveDocuments() copies what the constructor already copied. You can simply return documents. Domestic students have no documents, so retrieveDocuments() can return the empty list.
public final class International implements StudentStatus {
    private final Collection<String> documents;

    public International(Collection<String> documents) {
        this.documents = Collections.unmodifiableList(new ArrayList<String>(documents));
    }

    @Override
    public Collection<String> retrieveDocuments() {
        return documents;
    }

    @Override
    public StudentType retrieveStatus() {
        return StudentType.International;
    }
}

public final class Domestic implements StudentStatus {
    @Override
    public Collection<String> retrieveDocuments() {
        return Collections.emptyList();
    }

    @Override
    public StudentType retrieveStatus() {
        return StudentType.Domestic;
    }
}

If there is any new type of status, it can be added by implementing StudentStatus, no need to modify the Student class.
{ "domain": "codereview.stackexchange", "id": 27977, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, object-oriented, interface, immutability", "url": null }
fourier-transform, sampling, python, nyquist

# Number of sampled points
N = 50
# Where we sample the signal
pts = np.linspace(0, 2 * np.pi, num = N, endpoint = False )
# A much finer grid, used solely to display results
oversampled = np.linspace( 0, 2 * np.pi, 20 * N, endpoint = False )
# The grid we use for the plotting
grid = oversampled

# The frequency of the sine wave below
freq = 7
# Do the FFT on the sampled signal
f_hat = rfft( f( pts, freq ) )
plt.plot( grid, interpolant( f_hat, grid ) , color = "g" )
plt.plot( grid, f( grid, freq ) , color = "r" )
title1 = str(N) + " samples. Signal frequency is " + str(freq)+" \n"
title2 = "Red is true, green is interpolant. Reconstruction succeeds"
plt.title( title1 + title2 )
plt.show()

# The frequency of the sine wave below
freq = 12.9
# Do the FFT on the sampled signal
f_hat = rfft( f( pts, freq ) )
plt.plot( grid, interpolant( f_hat, grid ) , color = "g" )
plt.plot( grid, f( grid, freq ) , color = "r" )
title1 = str(N) + " samples. Signal frequency is " + str(freq)+" \n"
title2 = "Red is true, green is interpolant. Reconstruction fails"
plt.title( title1 + title2 )
plt.show()

While hotpaw2's answer is correct in general, in your case it is just a bug in your code. The N in your interpolant function is just 25 as far as I can tell, and then you divide it by 2 again.
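Separately from the indexing bug, here is a standalone sketch (mine, not from the post) of the sampling limit at work: on an N-point grid over [0, 2π), the frequencies k and k + N produce literally identical samples, so no reconstruction scheme can tell them apart from the samples alone.

```python
import numpy as np

N = 50
pts = np.linspace(0, 2 * np.pi, num=N, endpoint=False)

low = np.sin(7 * pts)            # frequency 7, below the Nyquist limit (N/2 = 25)
aliased = np.sin((7 + N) * pts)  # frequency 57 aliases onto frequency 7

# At t_n = 2*pi*n/N:  sin((7+N) t_n) = sin(7 t_n + 2*pi*n) = sin(7 t_n)
assert np.allclose(low, aliased)
```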
{ "domain": "dsp.stackexchange", "id": 3046, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fourier-transform, sampling, python, nyquist", "url": null }
## Hero: If a 2x2 matrix has the same two eigenvalues as another 2x2 matrix, what conclusions can you infer about the two matrices? Make some hypotheses and attempt to prove them, or refute them via counterexamples. (asked 2 years ago)

1. malevolence19: $\left[\begin{matrix}\psi & \xi \\ \chi & \zeta\end{matrix}\right] \implies (\psi - \lambda)(\zeta - \lambda) - \xi * \chi = 0$ Take the transpose of that matrix you have: $\left[\begin{matrix}\psi & \xi \\ \chi & \zeta\end{matrix}\right]^T=\left[\begin{matrix}\psi & \chi \\ \xi & \zeta\end{matrix}\right] \implies (\psi - \lambda)(\zeta - \lambda) - \xi * \chi = 0$ They are the same. I'm not sure if you can ALWAYS assume this but it seems pretty logical. I also don't know any theorems to name.
2. Hero: Brilliantly done!
3. eliassaab: No, they are not the same always.
4. Hero: Hmmm
5. Hero: I need a counterexample.
6. Hero: Well, by default they wouldn't be the same matrices
7. eliassaab: $$\left( \begin{array}{cc} 2 & 4 \\ 0 & 3 \end{array} \right), \qquad \left( \begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array} \right)$$ have the same eigenvalues 2 and 3 but they are not the same.
8. Hero: The matrices that malevolence posted are not the same either, are they?
9. eliassaab: Ok. I thought he wanted to conclude that if two matrices have the same eigenvalues then they are the same. This is not true in general. What do you think the answer to your question is?
10. Hero:
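Both claims in the thread can be verified numerically; the sketch below (my own check, not part of the thread) uses the counterexample matrices from the discussion.

```python
import numpy as np

A = np.array([[2.0, 4.0], [0.0, 3.0]])
D = np.array([[2.0, 0.0], [0.0, 3.0]])

# A matrix and its transpose always share eigenvalues, since
# det(A - xI) = det((A - xI)^T) = det(A^T - xI).
assert np.allclose(sorted(np.linalg.eigvals(A)), sorted(np.linalg.eigvals(A.T)))

# eliassaab's counterexample: same eigenvalues {2, 3}, different matrices.
assert np.allclose(sorted(np.linalg.eigvals(A)), [2.0, 3.0])
assert np.allclose(sorted(np.linalg.eigvals(D)), [2.0, 3.0])
assert not np.allclose(A, D)
```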
{ "domain": "openstudy.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9669140254249553, "lm_q1q2_score": 0.8199119168262041, "lm_q2_score": 0.8479677564567912, "openwebmath_perplexity": 2245.3514919326662, "openwebmath_score": 0.9776151776313782, "tags": null, "url": "http://openstudy.com/updates/4fe67752e4b02c91101b265d" }
general-relativity, cosmology, cosmological-inflation, modified-gravity

When we go to higher energies, (a quantum theory of) gravity, as described by the Einstein-Hilbert action, is non-renormalizable and would need all these possible higher order terms as counterterms, indicating again that all these higher order terms might be there in a quantum theory of gravity. This is part of the motivation for studying $f(R)$ theories of gravity. String theory also gives rise to higher order terms in the Einstein-Hilbert action, so most people in that field believe those terms could be there (for the above-mentioned reasons). In addition, one of the more well-known inflation models (using a scalar field with a slow roll potential) known as the Starobinsky model, can be obtained from an extended theory of gravity that includes only the next-to-leading (quadratic) term in $R$.

$$ S = \frac{1}{2} \int d^4 x \sqrt{-g} \left(R + \frac{R^2}{6M^2} \right) $$

This adds to the suspicion that a proper quantum theory of gravity, describing the full quantum mechanical dynamics of spacetime, including cosmic inflation in the early universe, might indeed be one that contains (many) of these higher-order terms. Using $f(R)$ theories in an attempt to explain dark matter is typically more difficult since $f(R)$ will at low curvature reduce to GR (which will again reduce to Newton's law of gravitation). Theories that attempt to explain dark matter by modified theories of gravity should give a different Newton's law of gravitation at low energies and are therefore known as MOND (Modified Newtonian Dynamics) theories.
{ "domain": "physics.stackexchange", "id": 38697, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, cosmology, cosmological-inflation, modified-gravity", "url": null }
homework-and-exercises, lie-algebra Title: Pauli Matrices proof The original question was to prove $$e^{i\sigma_z\phi} \sigma_y e^{-i\sigma_z\phi}=e^{2i\sigma_z\phi} \sigma_y.$$
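The identity is easy to check numerically. In the sketch below (my addition, not part of the original question), the fact that $\sigma_z$ is diagonal is used to take its matrix exponential entrywise:

```python
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])

def exp_i_sigma_z(phi):
    # sigma_z = diag(1, -1) is diagonal, so exp(i*sigma_z*phi)
    # is just the entrywise exponential diag(e^{i phi}, e^{-i phi}).
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

phi = 0.37  # an arbitrary angle
lhs = exp_i_sigma_z(phi) @ sigma_y @ exp_i_sigma_z(-phi)
rhs = exp_i_sigma_z(2 * phi) @ sigma_y
assert np.allclose(lhs, rhs)
```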
{ "domain": "physics.stackexchange", "id": 29235, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, lie-algebra", "url": null }
rotational-dynamics, momentum, conservation-laws, rotational-kinematics, intuition

Title: Intuition behind rotational analogs of motion

Whenever I think of rotational analogs of motion, like angular velocity, angular displacement and mainly angular momentum, something doesn't click with my intuition. I can't understand how they are substituted into the kinematic equations of motion (v = u + at etc. becoming ω = ω₀ + αt etc.). Whenever I intuitively think about this I can't get it. Can someone help me out?

Rotation is nothing but the arc distance traveled by a point, divided by its distance from the rotation axis. Take any kinematic equation: $$v_2=v_1+at$$ and divide by $r$, the distance from the axis of rotation to the point in question. $$\frac {v_2}{r}=\frac {v_1} {r} +\frac a r t$$ $$\omega_2=\omega_1+\alpha t$$ They are the same equation, divided by a constant factor. You are still tracking the same motion of the same point, but changing how you are measuring it. And for rigid bodies it turns out this way of measuring makes the statement also valid for any point on that rigid body. It is that simple. Getting into the dynamics of rotational motion with torques, angular momentum, and cross products is a bit more complex, but that did not seem to be your question.
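The divide-by-$r$ step can be checked with a few lines (illustrative numbers of my own choosing):

```python
# Dividing v2 = v1 + a*t by the radius r gives the angular version
# w2 = w1 + alpha*t for the same point on the rotating body.
r = 2.0                    # distance of the point from the rotation axis (m)
v1, a, t = 3.0, 1.5, 4.0   # initial speed, tangential acceleration, time
v2 = v1 + a * t            # linear kinematics

w1, alpha = v1 / r, a / r  # angular speed and angular acceleration
w2 = w1 + alpha * t        # angular kinematics

assert abs(w2 - v2 / r) < 1e-12
```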
{ "domain": "physics.stackexchange", "id": 93390, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rotational-dynamics, momentum, conservation-laws, rotational-kinematics, intuition", "url": null }
arduino, rosserial Originally posted by nishthapa on ROS Answers with karma: 47 on 2016-03-21 Post score: 2 Original comments Comment by ahendrix on 2016-03-22: It looks like part of the error message is missing; can you edit your question to include the command you're running and the full error message? Comment by nishthapa on 2016-03-23: No. I copy pasted the error message. That's all it says. Nothing more. I checked. Hey guys i solved it. All i had to do was rebuild my workspace with catkin_make install and install the diagnostic_msgs package into a source built ROS distribution. Thank you all for your help. Originally posted by nishthapa with karma: 47 on 2016-03-29 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24191, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "arduino, rosserial", "url": null }
javascript, performance, jquery

In the "improved version", notice that 1) it uses a native array instead of jQuery in the first place, allowing the use of the native array.map and 2) it doesn't call $() in the callback at all. We just avoided ~3100 jQuery objects again!

Now back to your toggling code, I think I see an update you did before my answer by adding/removing classes. That's a good move because 1) styling should be done in CSS and 2) show and hide use inline styles. They are hard to override from a stylesheet. The only way you could is to use !important, which you should generally avoid.

As for your question about removal vs hiding, I'd say go for hiding. As far as I know, removals are slow. This is even worse when there is stuff attached to the elements in question. For instance, removing an element with jQuery handlers attached. jQuery has to clean them up, remove references to data and handlers etc. If removals aren't done properly, you could have a lot of lingering things in memory.
{ "domain": "codereview.stackexchange", "id": 19498, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, performance, jquery", "url": null }
So our resultant is 30.8 meters, 35.8 degrees north of west. What we can see is that if there is an obstacle in the path here, some big rock that you want to avoid, you can get to the same place by taking a different route which is kind of obvious, but now we just sort of demonstrated that that's true using analytical techniques.
{ "domain": "collegephysicsanswers.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9664104953173166, "lm_q1q2_score": 0.8053336349155948, "lm_q2_score": 0.8333245953120233, "openwebmath_perplexity": 400.76962298026945, "openwebmath_score": 0.6436581015586853, "tags": null, "url": "https://collegephysicsanswers.com/openstax-solutions/repeat-exercise-316-using-analytical-techniques-reverse-order-two-legs-walk-and" }
ros, joy-node else if ( b == 1 ) { ROS_INFO("panning servo right publish"); } } /*********************************************************************************** created while sorting out left and right > keeping it around for up/down tilt **********************************************************************************/ void servo2_moveCallback(const sensor_msgs::Joy::ConstPtr &msg) { if (msg->buttons[3] == 1) { ROS_INFO("tilting servo up publish"); }
{ "domain": "robotics.stackexchange", "id": 37891, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, joy-node", "url": null }
homework-and-exercises, newtonian-gravity, mass, estimation Title: Do other people's mass count when measuring the attraction of the earth on me? Suppose that I'm alone on a planet of $M$. From what I learned from school, the gravitation force acting on me is given by $$F=G\frac{mM}{r^2},$$ where $m$ is my mass, $r$ is the distance between me and the planet and $G = 6.67 \cdot 10 ^{-11}$. Suppose that my dog, of mass $\tilde m$, comes to live with me on this planet. Now, is the gravitation force acting on me still $F$ or is it $$\tilde F=G\frac{m(M+\tilde m)}{r^2}~? $$ I mean, technically it would affect F, yes. But the mass of your dog is negligible compared to the mass of the planet. Just going off of Earth, we're talking 6e24 kg compared to 50 kg at most. The gravitational force from the planet is so much more than the gravitational force from the dog - which is why in real life we fall towards the Earth, not towards our pets. EDIT: The effect of the dog would be horizontal, not vertical as Earth's gravitational force is. So OP's (M+m) isn't right, but that doesn't change that we can ignore the dog's mass when calculating gravity.
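To put numbers on it (a rough sketch with illustrative values of my own choosing, not figures from the question):

```python
# Rough, illustrative numbers:
G = 6.67e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.97e24   # kg
r_earth = 6.37e6    # m
m_person = 70.0     # kg
m_dog = 30.0        # kg
d_dog = 1.0         # m, a dog standing next to you

F_planet = G * m_person * M_earth / r_earth**2
F_dog = G * m_person * m_dog / d_dog**2

assert F_planet > 600           # a few hundred newtons, i.e. your weight
assert F_dog < 1e-6             # well under a micronewton
assert F_planet / F_dog > 1e9   # the planet wins by ~nine orders of magnitude
```

And, as the edit notes, the dog's pull acts horizontally, so it doesn't even add to the vertical weight term.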
{ "domain": "physics.stackexchange", "id": 25092, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, newtonian-gravity, mass, estimation", "url": null }
set.seed(1)
n <- 20
X <- sample(150:200, n, replace=TRUE)
X

X = {153, 188, 150, 183, 172, 192, 163, 167, 200, 182, 170, 170, 191, 195, 159, 156, 158, 164, 170, 186}

Numerator $$\overline{X} - \mu_0$$

Our sample is only one possible sample out of the many we could have drawn from our population. It has a mean $$\overline{x}$$ (actual value) and a standard deviation $$s$$ (both written in lower case to denote observed data). If we imagine that we could get a significant number of random samples (of the same size) from our population of interest, we would be able to calculate the mean for each of them. This distribution of sample means is called the sampling distribution of the means. In our t-test, $$\overline{X}$$ denotes the random variable that represents this distribution. The Central Limit Theorem states that, given a sufficiently large sample size, the sampling distribution of the mean for a variable will approximate a normal distribution. We can empirically illustrate that by simulating this sampling distribution of the mean through bootstrapping: Since $$\mu_0$$ is fixed (in our t-test we assume that the null hypothesis is true), $$\overline{X}-\mu_0$$ is also a random variable. Same distribution as above (normal), but centered on the effect (the difference between the population value and the null hypothesis - illustration example below).

Denominator $$S/\sqrt{n}$$

The denominator is actually the standard error of the mean, which measures the variability of sample means in the sampling distribution of means. $$S$$ is the estimate of the standard deviation of the population. It is also a random variable, and $$\left(n-1\right)S^2/\sigma^2$$ follows a chi-square distribution with $$n-1$$ degrees of freedom.

Ratio: t-test $$\frac{\overline{X} - \mu_0}{S/\sqrt{n}}$$
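The bootstrap simulation alluded to above can be sketched as follows (my Python translation of the idea, using the sample printed above; the original post used R). The spread of the bootstrapped means should approximate the standard error $s/\sqrt{n}$:

```python
import numpy as np

# The sample printed above (n = 20 values drawn from 150..200)
X = np.array([153, 188, 150, 183, 172, 192, 163, 167, 200, 182,
              170, 170, 191, 195, 159, 156, 158, 164, 170, 186], dtype=float)
n = len(X)
s = X.std(ddof=1)  # sample standard deviation

rng = np.random.default_rng(0)
boot_means = np.array([rng.choice(X, size=n, replace=True).mean()
                       for _ in range(5000)])

# The standard deviation of the bootstrapped means approximates the
# standard error of the mean, s / sqrt(n).
se = s / np.sqrt(n)
assert abs(boot_means.std() - se) / se < 0.2
```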
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9766692335683307, "lm_q1q2_score": 0.8242063830240131, "lm_q2_score": 0.8438951025545427, "openwebmath_perplexity": 232.16549085151544, "openwebmath_score": 0.82356858253479, "tags": null, "url": "https://stats.stackexchange.com/questions/541363/hypothesis-testing-understanding-test-statistics-and-their-sampling-distributio" }
rviz, laserscan

Title: Unknown tf::filter_failure_reasons in rviz

Hey, I have a really weird bug.. I have a laser scanner (xv11) which is publishing data at approximately 5 Hz. If I visualize this in rviz, the messages seem to be visualized fine if I set the fixed frame equal to the frame of the laserscan message (base_laser_link). However if I change the fixed frame to base_link, I see a delay of approximately 1 second. I also tried sending laserscan messages with another device (asus xtion) at 5 Hz with the same link, which does not appear to be delayed. The transform between base_laser_link and base_link is published by my robot_state_publisher and this is working fine, otherwise the asus xtion would also have problems..

In addition, if I set the queue size for both LaserScan displays to 1, I only see the asus xtion message remain. The laserscanner message claims it has an unknown reason for why the transform failed. Also, if I change the laserscan display in rviz of the xv11 laserscanner topic to that of the asus xtion, I see the same delay as with the xv11! As if that display is in a bugged state or something. So I can have both displays on, both listening to the xtion, one of which is delayed, the other is not. They both read in from the same topic, same queue size, same settings... What on earth could be causing this?

Best regards,
Hans

Originally posted by Hansg91 on ROS Answers with karma: 1909 on 2014-05-02
Post score: 0
{ "domain": "robotics.stackexchange", "id": 17830, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rviz, laserscan", "url": null }
butterworth, hardware-implementation Title: Can't make sense from VHDL butterworth filter implementation I am trying to understand the VHDL implementation of the 3rd order Butterworth filter published on opencores. Supposedly the filter implementation gets away with only division and multiplications by 2 which is cheap when implemented in hardware (only shifting the bits left/right). I've translated the VHDL to MATLAB to simulate a step response. x = int16([zeros(1,500) 512*ones(1,1000)]); % step-input a = zeros(1,3); % accumulator registers w = zeros(1,5); % analog filter state variables s = 4; % scaling parameter to adjust cutoff frequency for t=1:length(x) in = x(t); a1 = a(1) + w(1) - w(3); a2 = a(2) + w(2) - w(4); a3 = a(3) + w(3) - w(5); w1 = in - w(2); w2 = a(1)/power(2,s); w3 = a(2)/power(2,s+1); w4 = a(3)/power(2,s); w5 = w(4); % update registers for next cycle a = [a1 a2 a3]; w = [w1 w2 w3 w4 w5]; y(t) = 2*w(5); end The step response looks quite promising; dotted line is the step input, solid red, orange and blue are the filter response for s=3, 4 and 5 respectively. However, the implementation really does not make much sense to me. Can anyone enlighten me? What are the coefficients of the filter? Which form does the implementation follow? The filter structure is a digital leapfrog and the structure looks like this picture (note: the picture is a different order than the code): These filters are discussed at some length on wikipedia and wikibooks.
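The MATLAB simulation translates directly to Python. The sketch below (my translation, not part of the original post) also checks the DC gain: setting all state increments to zero in the update equations forces $w_2 = \text{in}/2$ and $y = 2w_5 = \text{in}$, i.e. unity gain, so the step response should settle at the input amplitude.

```python
import numpy as np

def leapfrog_step_response(n_steps=20000, amplitude=512.0, s=4):
    # State variables, matching the MATLAB code above:
    # a[0..2] are the accumulator registers, w[0..4] the filter states.
    a = [0.0, 0.0, 0.0]
    w = [0.0, 0.0, 0.0, 0.0, 0.0]
    y = np.zeros(n_steps)
    for t in range(n_steps):
        x = amplitude  # step input
        # All updates use the *old* state, as in the MATLAB version.
        a_new = [a[0] + w[0] - w[2],
                 a[1] + w[1] - w[3],
                 a[2] + w[2] - w[4]]
        w_new = [x - w[1],
                 a[0] / 2**s,        # division by 2^s is a right shift in HW
                 a[1] / 2**(s + 1),
                 a[2] / 2**s,
                 w[3]]
        a, w = a_new, w_new
        y[t] = 2 * w[4]
    return y

y = leapfrog_step_response()
# Unity DC gain: the output settles at the step amplitude.
assert abs(y[-1] - 512.0) < 1.0
```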
{ "domain": "dsp.stackexchange", "id": 4713, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "butterworth, hardware-implementation", "url": null }
Question I hope someone can verify if my solution is correct. Of course, other ideas, comments or solutions are welcome. I like to post problems and my solutions to the forum because I think it's beneficial to the community, and for learners like me. First, I can hardly know if there's a flaw in my own argument. Second, I may get new insights/solutions for my problem. Thank you.

• Sorry, I am not yet clear how the two threads are related? – tom_a2 May 22 '13 at 16:43
• You are absolutely right. I misread your question. – Martin May 22 '13 at 16:45
• It looks good to me. – Julien May 22 '13 at 17:32

Let $B = \{x\in [a,b]: |f(x)|=0\}$. By continuity, this set is closed. Since $f(0) = 0$, the set is non-empty. We show that it is also open. Let $x\in B$, i.e. $f(x) =0$. By continuity, there exists $0<r<(2A)^{-1}$ such that $|f(y)|<\frac 12$ for all $y\in B_r(x)$. Fix such a $y\in B_r(x)$. We want to show that $f(y) =0$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9825575157745541, "lm_q1q2_score": 0.8670361216806681, "lm_q2_score": 0.8824278556326344, "openwebmath_perplexity": 89.07947414986923, "openwebmath_score": 0.9560487270355225, "tags": null, "url": "https://math.stackexchange.com/questions/399394/if-left-fx-right-leq-a-fx-beta-then-f-is-a-constant-function" }
cycsB := [op(cycsB), cyc]; od; rep := pet_cycs2table(cycsB); edgeperm := subs([seq(q=rep[q], q=1..2*n)], edges); cindB := cindB + flat[1]* pet_autom2cycles(edges, edgeperm); od; (cindA+cindB)/2; end; Q := proc(n) option remember; local cind; cind := pet_cycleind_knn(n); subs([seq(a[p]=2, p=1..n*n)], cind); end;
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9777138144607745, "lm_q1q2_score": 0.8562690370894727, "lm_q2_score": 0.8757869884059266, "openwebmath_perplexity": 1273.553374674751, "openwebmath_score": 0.5907314419746399, "tags": null, "url": "https://math.stackexchange.com/questions/1151538/finding-the-spanning-subgraphs-of-a-complete-bipartite-graph/" }
control, ros, microcontroller, ros-kinetic, network Velocity Controllers --------------------------------------- pata1_velocity_controller: type: effort_controllers/JointVelocityController joint: pata_1_to_base_link_joint pid: {p: 3.0, i: 0.0, d: 0.0} required_drive_mode: -1 pata2_velocity_controller: type: effort_controllers/JointVelocityController joint: pata_2_to_base_link_joint pid: {p: 3.0, i: 0.0, d: 0.0} required_drive_mode: -1 pata3_velocity_controller: type: effort_controllers/JointVelocityController joint: pata_3_to_base_link_joint pid: {p: 3.0, i: 0.0, d: 0.0} required_drive_mode: -1 pata4_velocity_controller: type: effort_controllers/JointVelocityController joint: pata_4_to_base_link_joint pid: {p: 3.0, i: 0.0, d: 0.0} required_drive_mode: -1 pata5_velocity_controller: type: effort_controllers/JointVelocityController joint: pata_5_to_base_link_joint pid: {p: 3.0, i: 0.0, d: 0.0} required_drive_mode: -1 pata6_velocity_controller: type: effort_controllers/JointVelocityController joint: pata_6_to_base_link_joint pid: {p: 3.0, i: 0.0, d: 0.0} required_drive_mode: -1
{ "domain": "robotics.stackexchange", "id": 30946, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "control, ros, microcontroller, ros-kinetic, network", "url": null }
java, generics, lambda, statistics /** * sets how many chunks should be run. * * The total amount of how often the given functions should be run when * timed is amountChunks * amountRunsPerChunk. * * @param amountChunks amountChunks */ public void setAmountChunks(int amountChunks) { this.amountChunks = amountChunks; } /** * sets how often the function is run per chunk. * * The total amount of how often the given functions should be run when * timed is amountChunks * amountRunsPerChunk. * * @param amountRunsPerChunk amountRunsPerChunk */ public void setAmountRunsPerChunk(int amountRunsPerChunk) { this.amountRunsPerChunk = amountRunsPerChunk; } /** * performs the actual timing for all given functions. */ public void time() { for (int chunks = 0; chunks < amountChunks; chunks++) { // run a chunk of tests on this timingObject: for (TimingObject timingObject : functionsToTime) { // generate input: ArrayList input = new ArrayList<>(); for (int runs = 0; runs < amountRunsPerChunk; runs++) { input.add(timingObject.inputConverter.apply((chunks * amountRunsPerChunk) + runs)); } // run with input: long[] times = timeRuns(timingObject, input); timingObject.addTimeChunk(times); } Collections.shuffle(functionsToTime); // randomize functions each time } for (TimingObject timingObject : functionsToTime) { timingObject.processTimes(); } }
{ "domain": "codereview.stackexchange", "id": 9864, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, generics, lambda, statistics", "url": null }
algorithms, graphs, algorithm-analysis, runtime-analysis Title: Time Complexity for Creating a Graph from a File Assume that I have a file that consists of pairs of numbers separated by a space. These numbers are the labels for vertices in my graph. For example: 0 5 0 7 2 3 4 5 1 5 . . . I want to create a graph (adjacency list) by reading this file line-by-line. For each line, I will create an edge between the two vertices. Of course, if the vertex doesn't exist yet, then I will add it before creating the edge. I read here of an algorithm that builds a graph with a time complexity of $O(|V| + |E|)$ where $V$ = set of vertices and $E$ = set of edges. That makes sense to me. However, my algorithm doesn't insert the vertices in a loop first and then insert all of the edges in another loop second. My algorithm just adds the vertices as it's looping through the edges. My question is whether my algorithm is $O(|E|)$. It seems like that can't be right, but I read here that when calculating the time complexity you don't take into account if statements. That's exactly what my vertex creation would be -- an if statement that checks if the node exists in the middle of my looping through all the edges. You forget that $O(|E|) \subset O(|E|+|V|)$ and, since every vertex appears in some edge (so $|V| \le 2|E|$), also $O(|E|+|V|) \subset O(|E|)$. Though with a comparison-based structure it's actually $O(|E|\ln|V|)$, because checking whether a vertex has already been inserted is then not $O(1)$ (a hash table brings this back down to expected $O(1)$).
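As an illustration of the answer's point (my own sketch, not code from the question), here is the single-pass build in Python. With a hash-backed dict the "does this vertex exist yet" check is expected O(1), so the whole build is expected O(|E|); a comparison-based map would give the O(|E| ln |V|) bound instead:

```python
def build_graph(lines):
    """Build an adjacency list from lines of 'u v' pairs in a single pass."""
    adj = {}
    for line in lines:
        u, v = line.split()
        # The "if the vertex doesn't exist yet" check is a dict lookup:
        # expected O(1), so the loop is expected O(|E|) overall.
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    return adj

# Same sample edges as in the question.
g = build_graph(["0 5", "0 7", "2 3", "4 5", "1 5"])
```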
{ "domain": "cs.stackexchange", "id": 1735, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, graphs, algorithm-analysis, runtime-analysis", "url": null }
plus the new information about the results of the new measures, write a quadratic Taylor polynomial approximation for your function. 4: Chebyshev Approximation Algorithm in R1 • Objective: Given f(x) defined on [a,b], find its Chebyshev polynomial approximation p(x) • Step 1: Compute the m ≥ n+1 Chebyshev interpolation nodes on [−1,1]: In the last part of this post, we are going to build a plot that shows how the Taylor Series approximation calculated by our func_cos() function compares to Python's cos() function. Give the cubic approximation to the sine, formed at x_0 = 1. Question T1. Apr 30, 2018 · A century ago engineers had very good and robust means of drafting 2D curves using specialized spline sets and curve templates (e. 3 6. mws, and go through it carefully. The Maclaurin series is a Taylor series approximation of a function f(x) centered at x = 0. TAYLOR'S FORMULA FOR FUNCTIONS OF SEVERAL VARIABLES. Your captors say that you can earn your freedom, but only if you can produce an approximate value of 8. – 1 around a = 0, to get linear, quadratic and cubic approximations. 3 Cubic Approximation at x = a. Obtain the cubic spline approximation for the function y=f(x) from the following data, given that y0'' = y3'' = 0. Apr 10, 2018 · The approximation (as opposed to the infinite series) is one instance of Taylor approximation. The formulas also give an infinite spectrum of rational inverse Set the point where to approximate the function using the sliders. 3 Interpolation Problem 1. Example 1. In analogy with the conditions satisfied by T the "best" approximation of its kind for the function f(x) if we look at values of x that are close to 0. The
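The passage above mentions comparing a hand-rolled func_cos() against Python's cos(); a minimal version of that comparison (my own sketch — the original post's code is not reproduced here) could look like:

```python
import math

def func_cos(x, n_terms=10):
    """Maclaurin (Taylor at 0) polynomial for cos(x):
    sum of (-1)**k * x**(2k) / (2k)! for k = 0 .. n_terms - 1."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

# Near 0 the truncated series tracks math.cos very closely.
err = abs(func_cos(1.0) - math.cos(1.0))
```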
{ "domain": "immoplus24.de", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.988841966778967, "lm_q1q2_score": 0.8385061097586799, "lm_q2_score": 0.8479677622198946, "openwebmath_perplexity": 649.3671631794019, "openwebmath_score": 0.8625760078430176, "tags": null, "url": "http://immoplus24.de/tcl-roku/taylor-cubic-approximation-formula.html" }
slam, navigation, octomap, ros-kinetic, rtabmap-ros <arg name="approx_rgbd_sync" default="true"/> <!-- false=exact synchronization --> <arg name="subscribe_rgbd" default="$(arg rgbd_sync)"/> <arg name="rgbd_topic" default="rgbd_image" /> <arg name="depth_scale" default="1.0" /> <arg name="compressed" default="false"/> <!-- If you want to subscribe to compressed image topics --> <arg name="rgb_image_transport" default="compressed"/> <!-- Common types: compressed, theora (see "rosrun image_transport list_transports") --> <arg name="depth_image_transport" default="compressedDepth"/> <!-- Depth compatible types: compressedDepth (see "rosrun image_transport list_transports") --> <arg name="subscribe_scan" default="false"/> <arg name="scan_topic" default="/scan"/> <arg name="subscribe_scan_cloud" default="false"/> <arg name="scan_cloud_topic" default="/scan_cloud"/> <arg name="scan_normal_k" default="0"/> <arg name="visual_odometry" default="true"/> <!-- Launch rtabmap visual odometry node --> <arg name="icp_odometry" default="false"/> <!-- Launch rtabmap icp odometry node --> <arg name="odom_topic" default="odom"/> <!-- Odometry topic name --> <arg name="vo_frame_id" default="$(arg odom_topic)"/> <!-- Visual/Icp odometry frame ID for TF -->
{ "domain": "robotics.stackexchange", "id": 31589, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, octomap, ros-kinetic, rtabmap-ros", "url": null }
want to find the area of any parallelogram, and if you can figure out the height, it is literally you just three corresponding sides that are congruent When this happens, just go back to the drawing board. just 1/2 times base times height. You may have to extend segment AB as you draw the height from C. What this means is that a parallelogram has two pairs of opposite sides that are parallel to each other and are the same length. this parallelogram would literally be 5 times 6. This is possible to create the area of a parallelogram by using any of its diagonals. In this section we will discuss parallelogram and its theorems. It's as if a rectangle had a long, busy … Hence, area of a rhombus is 24 cm 2. over here is a parallelogram. And if we wanted to So this length is It can be shown that the area of this parallelogram (which is the product of base and altitude) is equal to the length of the cross product of these two vectors. And we've proven It is possible to create a tessellation with any parallelogram. the area of a parallelogram. So I'm going to say CBA. ABDC is a parallelogram with a side of length 11 units, and its diagonal lengths are 24 units and 20 units. This would be I drew the altitude outside this length right over here is 5, and if they were to tell So that's one way you The leaning rectangular box is a perfect example of the parallelogram. 
A third way to do the proof is to get that first pair of parallel lines and then show that they’re also congruent — with congruent triangles and CPCTC — and then finish with the fifth parallelogram proof method. times the area of triangle ADC. And a rhombus is Triangles on the same base (or equal bases) and
{ "domain": "trends-magazine.net", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668684574636, "lm_q1q2_score": 0.808934730697203, "lm_q2_score": 0.8244619306896955, "openwebmath_perplexity": 573.0564525336912, "openwebmath_score": 0.6392198801040649, "tags": null, "url": "https://trends-magazine.net/flower-farming-bua/waltbp.php?7cc94a=area-of-parallelogram-proof" }
haskell, programming-challenge Which, for a test input of: 3 5 4 abc bca dac dbc cba (ab)(bc)(ca) abc (abc)(abc)(abc) (zyx)bc Yields the results: Case #1: 2 Case #2: 1 Case #3: 3 Case #4: 0 A good rule of thumb is that you should only ever use foldr when you're really sure that your fold is not an instance of something simpler. In your case, the fold is doing pretty much exactly two things while traversing the pattern list: Keeping track of the "case index" Accumulating the result list The second should be easily recognisable as a map - maybe less obvious is that the first can just be written as a zip with an enumeration: getResult :: [String] -> (Int, String) -> String getResult w (count, pattern) = "Case #" ++ show count ++ ": " ++ show (numberOfMatches pattern w) main :: IO () main = do [...] let results = map (getResult knownWords) $ zip [1..n] patterns Which is much easier to understand than trying to bend the fold to do the right thing. Also, just as a suggestion, here's main implemented in a more "imperative" (read: monadic) style. After all, Haskell is said to be the best imperative language ever invented, so we can do that proudly: main :: IO () main = do [_, d, n] <- fmap (map read . words) getLine knownWords <- replicateM d getLine forM_ [1..n] $ \count -> do pattern <- getLine let matches = numberOfMatches pattern knownWords putStrLn $ concat ["Case #", show count, ": ", show matches]
{ "domain": "codereview.stackexchange", "id": 2649, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "haskell, programming-challenge", "url": null }
c# foreach (var token in tokens) { if (token.Index > 0) { e = Expression.Call(e, appendMethod, Expression.Constant(format.Substring(index, token.Index - index))); } index = token.Index + token.Length; if (token.Value == "$$") { e = Expression.Call(e, appendMethod, dollarSign); } else if (token.Value == "$&") { e = Expression.Call(e, appendMethod, valueVariable); } else if (token.Value == "$`") { e = Expression.Call(e, appendMethod, getBeforeSubstring); } else if (token.Value == "$'") { e = Expression.Call(e, appendMethod, getAfterSubstring); } else { var n = token.Value.Substring(1); if (n.Length == 2 && n[0] == '0') { n = n.Substring(1); } var i = int.Parse(n) - 1; var nConst = Expression.Constant(i); var t = Expression.Constant(token.Value); if (i < 0) { e = Expression.Call(e, appendMethod, t); } else { var cond = Expression.Condition( Expression.GreaterThan( capturesLengthProp, nConst ), Expression.ArrayIndex(capturesProp, nConst), t ); e = Expression.Call(e, appendMethod, cond); } } } e = Expression.Call(e, toStringMethod); var lambda = Expression.Lambda<Func<string, RegExpParser.MatchState, string>>(e, valueVariable, stateVariable); return lambda.Compile(); }
{ "domain": "codereview.stackexchange", "id": 235, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#", "url": null }
Centroid: Centroid of a plane figure is the point at which the whole area of a plane figure is assumed to be concentrated. The coordinates ($\bar{x}$ and $\bar{y}$) define the center of gravity of the plate (or of the rigid body). The cartesian coordinate of its centroid is $\left(\frac{2}{3}r(\theta)\cos\theta, \frac{2}{3}r(\theta)\sin\theta\right)$. Find the coordinates of the centroid of the area bounded by the given curves. How to calculate a centroid. It is also the center of gravity of the triangle. The centroid of a right triangle is 1/3 from the bottom and the right angle. Centroid by Composite Bodies ! The x-centroid would be located at 0 and the y-centroid would be located at $\frac{4r}{3\pi}$. Centroids by Composite Areas Monday, November 12, 2012 Recall that the centroid of a triangle is the point where the triangle's three medians intersect. y=x^{3}, x=0, y=-8 y=2 x, y=0, x=2 It is the point which corresponds to the mean position of all the points in a figure. Next, sum all of the x coordinates ... For example, the centroid location of the semicircular area has the y-axis through the center of the area and the x-axis at the bottom of the area ! Problem Answer: The coordinates of the center of the plane area bounded by the parabola and x-axis are at (0, 1.6). The Find Centroids tool will create point features that represent the geometric center (centroid) for multipoint, line, and area features. Workflow diagram Examples. Find the coordinates of the centroid of the plane area bounded by the parabola y = 4 – x^2 and the x-axis. Center of Mass of a Body Center of mass is a function of density. With this centroid calculator, we're giving you a hand at finding the centroid of many 2D shapes, as
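The stated answer (0, 1.6) for the region under y = 4 − x² is easy to check numerically. This sketch (my own, plain midpoint-rule integration, not from the quoted source) uses x̄ = My/A and ȳ = Mx/A:

```python
def centroid_under_parabola(n=100_000):
    """Centroid of the region between y = 4 - x^2 and the x-axis:
    x_bar = My/A with My = integral of x*y dx,
    y_bar = Mx/A with Mx = integral of y^2/2 dx,
    both over x in [-2, 2], via the composite midpoint rule."""
    a, b = -2.0, 2.0
    h = (b - a) / n
    area = m_x = m_y = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        y = 4.0 - x * x
        area += y * h           # area element y dx
        m_y += x * y * h        # first moment about the y-axis
        m_x += 0.5 * y * y * h  # first moment about the x-axis
    return m_y / area, m_x / area

xbar, ybar = centroid_under_parabola()  # tends to (0, 1.6)
```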
{ "domain": "eliteprotek.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668679067631, "lm_q1q2_score": 0.8658089843659367, "lm_q2_score": 0.8824278649085117, "openwebmath_perplexity": 388.8633420720968, "openwebmath_score": 0.7444528341293335, "tags": null, "url": "https://eliteprotek.com/starbucks-vanilla-gwd/beaa86-determine-the-coordinates-of-the-centroid-of-the-area" }
ros, callback, topic, rqt Traceback (most recent call last): File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback cb(msg) File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback self.sizes.append(buff.len) AttributeError: 'cStringIO.StringO' object has no attribute 'len' [ERROR] [1488471809.729279]: bad callback: <bound method TopicInfo.message_callback of <rqt_topic.topic_info.TopicInfo object at 0x7f2ae4bc63d0>> Traceback (most recent call last): File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback cb(msg) File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback self.sizes.append(buff.len) AttributeError: 'cStringIO.StringO' object has no attribute 'len' [ERROR] [1488471809.736976]: bad callback: <bound method TopicInfo.message_callback of <rqt_topic.topic_info.TopicInfo object at 0x7f2ae4bc63d0>> Traceback (most recent call last): File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback cb(msg) File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback
{ "domain": "robotics.stackexchange", "id": 27184, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, callback, topic, rqt", "url": null }
vba, excel Here are some of my questions/problems: My 2 subs both go through the same loop (through all sheets if visible). I tried to loop before calling the subs, but the subs themselves need to compile (add) data from every sheet and since subs cannot output data, I was forced to loop in both subs... which is not effective but is it really more effective to use function [y1,...,yN] = myfun(x1,...,xM) and re-input my data every loop or compile outside the sub? I couldn't find another way to assign Pass and Loss (Passes sub) data to Workers without using Array, got it to work but it didn't feel logical at first glance. My workbook trigger range is way too big but I couldn't "Union" range in the statement. Reducing the range to my 2 or 3 important rows would most likely trigger it less often, maybe I could split my second statement in 3 different If not intersect? You guys will most likely find some other upgrade, Thanks Remove the Passes and Arrêt parameters and declare them as Global Constants in a public Module. Public Const nombrelignezonecomposant As Long = 25 'Number of lines available for document entry Public Const ligneinitzonecomposant As Long = 18 'First part number entry line Public Const nocolonnetype As Long = 22 'No of the column in which the types of stop are found Public Const nocolonneminute As Long = 21 'No. of the column in which the minutes of stoppages are located Public Const ligneinitzonenoemploye As Long = 5 'First line of operator number entry Public Const nombrelignenoemploye As Long = 4 'Number of operator number entry lines Public Const nocolnoemploye As Long = 3 'Operator no. Column no. Rem Position of the "Total loss" box Public Const colperte As Long = 13 Public Const lignetotperte As Long = 43 Rem Position of the "Passes" box Public Const colpasse As Long = 3 Public Const lignepasse As Long = 10 Dim w, q, Z As Integer Variables must be Typed individually. There is no advantage to using Integer; use Long instead.
{ "domain": "codereview.stackexchange", "id": 38971, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel", "url": null }
gazebo Title: Can the optical frame orientation be set via pose in sensor Update: I can rotate the physical frame using the pose element. I do not want to move the camera, as its image is correct; it is the optical frame that is not oriented correctly in rviz. Do I have to translate manually (in code), or is there a way to normalize these coordinate systems to each other? Hopefully I asked the question correctly this time. The following statement is incorrect, as the pose does not rotate the optical frame. XY pose works fine. I was trying to rotate my depth camera by doing the following: The z rotation takes effect but the XY rotations are ignored. I understand it is normal to create a joint for this but I was wondering why I cannot do it here. <sensor name="camera1" type="depth"> <visualize>1</visualize> <pose>-0.1 0 0.2 -1.57 0 -1.57</pose> <camera name='head'> <horizontal_fov>1.39626</horizontal_fov> <image> <width>800</width> <height>800</height> <format>R8G8B8</format> </image> <clip> <near>0.02</near> <far>300</far> </clip> <save enabled="false"> <path>/tmp</path> </save> <depth_camera> <output>depthImage</output> </depth_camera> <noise> <type>gaussian</type> <mean>0</mean> <stddev>0.007</stddev> </noise> </camera> <plugin name="head_camera_controller" filename="libgazebo_ros_openni_kinect.so"> <baseline>0.2</baseline> <alwaysOn>true</alwaysOn> <updateRate>1.0</updateRate>
{ "domain": "robotics.stackexchange", "id": 3508, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo", "url": null }
$A = P\left(1 + \frac{r}{n}\right)^{nt}$. In this lesson we show several Real Life uses of Exponents, as well as their impact on our understanding of the modern world around us. Applications of Pressure in Daily Life Some of the applications of pressure are given below. Newton's Law of Cooling Newton's Law makes a statement about an instantaneous rate of change of the temperature. Most of these real life patterns were evolved over a long period of time by brilliant people to have efficient systems in the society. This includes such things as plant or population growth or decay such as a bouncing spring. In addition, Logarithmic scales are used in Another application of proportions in real life is in movie screens, because in order to project. In this section, we explore some important applications in more depth, including radioactive isotopes and …. y = ln(x + 1) Write original function. LOGARITHMIC FUNCTIONS (Interest Rate Word Problems) 1. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised, to produce that number x. Students, in turn, can take this valuable experience solving real world problems and building applications to stand out in the interview process as they look for new job opportunities. This course applies principles learned in my course "Introduction to Engineering Mechanics" to analyze real world engineering structures. † Range is (0;1). Carbon dating is based upon the decay of $^{14}$C, a radioactive isotope of carbon with a relatively long half-life (5700 years). f(x) = ln(x). As Pierre-Simon Laplace, a scholar who worked in the fields of mathematics, statistics, physics and astronomy, remarked, "By. Trigonometry is a subject that has lots of practical applications. 
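Compound interest is one of the applications mentioned above; as a quick sketch (the dollar amounts and rate below are my own illustrative numbers, not from the text):

```python
def compound_amount(principal, rate, n, years):
    """A = P * (1 + r/n)**(n*t): the compound-interest formula."""
    return principal * (1 + rate / n) ** (n * years)

# Illustrative: $1000 at 6% a year, compounded monthly, for 10 years.
amount = compound_amount(1000.0, 0.06, 12, 10)  # roughly $1819
```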
The Real-Life Possibilities of Black Panther's Wakanda, According to Urbanists and City Planners The design and infrastructure of Black Panther 's fictional East African nation, Wakanda, has. Seven strangers enter The Real World house ready to enjoy the single life, but they don't know that they'll be sharing the space with
{ "domain": "coroilcontrappunto.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.984093606422157, "lm_q1q2_score": 0.8422376417795499, "lm_q2_score": 0.8558511469672594, "openwebmath_perplexity": 1085.9298730210649, "openwebmath_score": 0.36487436294555664, "tags": null, "url": "http://irmj.coroilcontrappunto.it/application-logarithms-real-life.html" }
c, hashcode As for the hashing itself, I'm sure that the output looks fairly random, but it's actually doing a lot of work for the level of hashing that it supplies. My top concerns: If the input is longer than HASH_LENGTH, the latter part of the input is completely ignored. XOR is a standard building block for this sort of low level stuff, but one of the key properties of XOR is that it undoes its own work. That is to say, because your for loop with j is only doing XOR it will confuse rather less than you'd expect for the work that goes into a nested loop. The space allocated to digest at the end is underused. For example, because it's restricted to that alphabet, the first three bits of every byte will be "011". That has implications for use with, say, hash maps because the hash map will allocate 87.5% of its slots to values this hash function can never fill for any possible input.
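The "XOR undoes its own work" remark is easy to demonstrate; here is a sketch in Python rather than C, since it is just a bit identity (the lowercase alphabet below is an assumption made for illustrating the fixed-top-bits point):

```python
# XOR with the same value twice cancels out: (x ^ k) ^ k == x.
# This is why a pass that only XORs mixes less than the nested loop suggests.
x, k = 0b1011_0110, 0b0101_1100
assert (x ^ k) ^ k == x

# The alphabet-restriction point: if every output byte is an ASCII
# lowercase letter ('a' = 0x61 .. 'z' = 0x7A), its three high bits are
# always 0b011, so most hash-map slots can never be reached.
assert all(b >> 5 == 0b011 for b in b"abcdefghijklmnopqrstuvwxyz")
```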
{ "domain": "codereview.stackexchange", "id": 37341, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, hashcode", "url": null }
Now since $n\ge k$ this simplifies to $$(-1)^{k} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} \frac{1}{1+z} \; dz = (-1)^k (-1)^k = 1.$$ The second case when $k=0$ yields $$\sum_{q=0}^n (-1)^{q} {n+1\choose q+1} = - \sum_{q=1}^{n+1} (-1)^{q} {n+1\choose q} = - ((1-1)^{n+1}-1) = 1.$$
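The $k=0$ case can be sanity-checked numerically (my own quick check, not part of the original answer):

```python
from math import comb

def alt_sum(n):
    """sum_{q=0}^{n} (-1)^q * C(n+1, q+1), the k = 0 case above."""
    return sum((-1) ** q * comb(n + 1, q + 1) for q in range(n + 1))

# The identity says this equals 1 for every n >= 0.
results = [alt_sum(n) for n in range(12)]
```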
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9869795077797211, "lm_q1q2_score": 0.8389041344529213, "lm_q2_score": 0.8499711775577736, "openwebmath_perplexity": 122.00356305063883, "openwebmath_score": 0.9858556389808655, "tags": null, "url": "http://math.stackexchange.com/questions/503694/combinatorial-interpretation-of-an-alternating-binomial-sum" }
physical-chemistry, acid-base, amino-acids \begin{align} \mathrm{p}K_\mathrm{a}(\ce{-COOH}) &= 1.9\\ \mathrm{p}K_\mathrm{a}(\ce{-NH_3+}) &= 8.35\\ \mathrm{p}K_\mathrm{a}(\ce{-SH}) &= 10.5 \end{align} From these values, $\alpha$ can be calculated for each ionizable group at the desired pH and this will give you the net charge of the amino acid. Upon deprotonation, the following changes in charge occur for the ionizable groups: \begin{array}{lclcr} \ce{-COOH} &:& 0 &\rightarrow &-\\ \ce{-NH_3+}&:& + &\rightarrow &0\\ \ce{-SH} &:& 0 &\rightarrow &-\\ \end{array} As an example, let's calculate the charge of cysteine at pH 10. Using the Henderson-Hasselbalch in a spreadsheet yields the following $\alpha$ values: \begin{array}{lcr} \ce{-COOH} &:& 0.9999\\ \ce{-NH_3+} &:& 0.9781\\ \ce{-SH} &:& 0.2403\\ \end{array} This gives the following total charge for cysteine at pH 10: $$(-1)\cdot0.9999+(+1)\cdot(1-0.9781)+(-1)\cdot0.2403=-1.218$$
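The spreadsheet computation described above fits in a few lines of Python (same pKa values as in the text; α here is the deprotonated fraction 1/(1 + 10^(pKa − pH)) from Henderson–Hasselbalch):

```python
def frac_deprotonated(pka, ph):
    """Fraction of a group in its deprotonated form (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def cysteine_net_charge(ph):
    # pKa values from the text above.
    a_cooh = frac_deprotonated(1.9, ph)    # 0 -> -1 on deprotonation
    a_nh3 = frac_deprotonated(8.35, ph)    # +1 -> 0 on deprotonation
    a_sh = frac_deprotonated(10.5, ph)     # 0 -> -1 on deprotonation
    return (-1) * a_cooh + (+1) * (1 - a_nh3) + (-1) * a_sh

charge = cysteine_net_charge(10.0)  # about -1.218, as computed in the text
```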
{ "domain": "chemistry.stackexchange", "id": 853, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, acid-base, amino-acids", "url": null }
noise, sampling, random Title: SD of a function of signals Say $a_x$, $a_y$, $b_x$, $b_y$ are signals with values in $\mathbb{R}$. Say they are independent, normally distributed variables with given SDs $\sigma_{a_x}$, $\sigma_{a_y}$, $\sigma_{b_x}$, $\sigma_{b_y}$. Say $f: \mathbb{R}^4\longrightarrow\mathbb{R}$ is a continuous function of these signals (i.e. $f(a_x, a_y, b_x, b_y)\in \mathbb{R}$ for all $(a_x, a_y, b_x, b_y)\in\mathbb{R}^4$). General question What is the SD of $f(a_x, a_y, b_x, b_y)$? Example/Question
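The question above is open-ended, but the standard first-order ("delta method") approximation — my own addition, not from the post — is $\sigma_f^2 \approx \sum_i (\partial f/\partial x_i)^2 \sigma_i^2$, with the partials evaluated at the means; it holds when $f$ is smooth and the SDs are small. A numerical illustration with an arbitrarily chosen $f$ and parameters:

```python
import math
import random

# Illustrative f (my own choice): the 2-D cross product of (ax,ay), (bx,by).
def f(ax, ay, bx, by):
    return ax * by - ay * bx

means = (1.0, 2.0, 3.0, 4.0)
sigmas = (0.01, 0.01, 0.01, 0.01)

# Partial derivatives of f at the means: (by, -bx, -ay, ax).
grads = (means[3], -means[2], -means[1], means[0])
# First-order propagation: Var(f) ~= sum_i (df/dx_i)^2 * sigma_i^2.
sd_approx = math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, sigmas)))

# Monte Carlo check of the approximation.
random.seed(0)
N = 50_000
samples = [f(*(m + random.gauss(0, s) for m, s in zip(means, sigmas)))
           for _ in range(N)]
mean = sum(samples) / N
sd_mc = math.sqrt(sum((v - mean) ** 2 for v in samples) / (N - 1))
```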
{ "domain": "dsp.stackexchange", "id": 5452, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "noise, sampling, random", "url": null }
Since $A$ is a $\mathbb{Q}$-independent subset, by proposition 5.3 there exists a basis $B$ of $\mathbb{R}$ that contains $A$. Then $A\subseteq B\subseteq\mathbb{R}$ and $|A|=|\mathbb{R}|$ and the Cantor–Bernstein theorem imply $|B|=|\mathbb{R}|$, therefore $[\mathbb{R}:\mathbb{Q}]=\mathrm{dim}_\mathbb{Q}{}\mathbb{R}=|\mathbb{R}|$. $\quad\blacksquare$ • Is this uncountable linearly independent set basis ? – user195218 Jun 5 '15 at 2:53 • @user195218 It's not a basis, as it doesn't span the reals, but it is uncountable and linearly independent. Jul 25 '17 at 19:08 • The only proper answer to the question. Such a shame that it is not the most voted answer. May 1 '19 at 21:42 • Way to nail it down, nice work. Dec 28 '20 at 2:05 No transcendental numbers are needed for this question. Any set of algebraic numbers of unbounded degree spans a vector space of infinite dimension. Explicit examples of linearly independent sets of algebraic numbers are also relatively easy to write down. The set $\{\sqrt{2}, \sqrt{\sqrt{2}}, \dots\} = \{2^{2^{-n}} : n>0\}$ is linearly independent over $\mathbb Q$. (Proof: Any expression of the $n$th iterated square root $a_n$ as a linear combination of earlier terms $a_i, i < n$ of the sequence could also be read as a rational polynomial of degree dividing $2^{n-1}$ with $a_n$ as a root and this contradicts the irreducibility of $X^m - 2$, here with $m=2^n$).
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.987946222258266, "lm_q1q2_score": 0.8397258120193181, "lm_q2_score": 0.8499711756575749, "openwebmath_perplexity": 190.83497961073266, "openwebmath_score": 0.9173307418823242, "tags": null, "url": "https://math.stackexchange.com/questions/6244/is-there-a-quick-proof-as-to-why-the-vector-space-of-mathbbr-over-mathbb/6250" }
python, tree, serialization, constructor BinaryTree('D', BinaryTree('B', BinaryTree('A'), BinaryTree('C')), BinaryTree('F', BinaryTree('E'))) BinaryTree('D', BinaryTree('B', BinaryTree('A', None, None), BinaryTree('C', None, None)), BinaryTree('F', BinaryTree('E', None, None), None)) Sometimes it's best just to write simple code and not worry about a small amount of repetition: def __repr__(self): if self.right is not None: fmt = '{}({value!r}, {left!r}, {right!r})' elif self.left is not None: fmt = '{}({value!r}, {left!r})' else: fmt = '{}({value!r})' return fmt.format(type(self).__name__, **vars(self))
{ "domain": "codereview.stackexchange", "id": 16503, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, tree, serialization, constructor", "url": null }
ros, genmsg, catkin, message-generation Update 2 Here is the repository with the code that is producing this error: https://github.com/raptort3000/ublox_catkin Originally posted by Raptor on ROS Answers with karma: 377 on 2013-06-06 Post score: 4 Original comments Comment by William on 2013-06-06: Can you post the entire output of catkin_make -j1 on a previously unbuilt workspace? Your package looks ok to me, I don't see why you would be getting the out of order build bug (I can pretty confidently confirm that this is that bug since you can resolve it by running multiple times). Comment by Raptor on 2013-06-07: Look at update above. Comment by joq on 2013-06-07: Why is CATKIN_DEPENDS message_runtime commented out? What happens when you use it? Comment by Raptor on 2013-06-07: I do not see a difference if I use it or not, at least during compile time. Comment by joq on 2013-06-07: I think it is needed for other catkin packages that use your messages. Comment by Raptor on 2013-06-07: Alright I will keep that in then. UPDATED to remove incorrect suggestion, see comments below. The currently-recommended variable name for expanding self-defined messages and services is: add_dependencies(uuid_msgs ${${PROJECT_NAME}_EXPORTED_TARGETS}) I wrote that using the uuid_msgs target, because others reading this question may be confused by the fact that your library target has the same name as your package. Originally posted by joq with karma: 25443 on 2013-06-06 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 14449, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, genmsg, catkin, message-generation", "url": null }
# Thread: Differentiation and Need HELP with LATEX!
1. ## Differentiation and Need HELP with LATEX!
g(x) = {-x, x<=0
3x^2, x>0
(a) Evaluate the limit of {g(x+delta x) - g(x)}/{delta x} for x<=0 and x>0 as delta x tends to 0
(b) evaluate the limit of {g(delta x) - g(0)}/{delta x} as delta x tends to 0 from the right and as delta x tends to 0 from the left
(c) sketch the graph of g'(x)
Is g(x) continuous at x=0?
I need help with (b) and (c) only.
Are there any user-friendly programmes to type the equations? I find the syntax of latex very difficult to learn. Btw, how do you type the above equations (using latex)? Thanks!
2. B is simply asking you to compute the separate limits for the left and right side of the equation corresponding to the respective piece of your piece-wise function. You are meant to see whether the limit from the right is the same as the limit from the left.
3. Originally Posted by ANDS!
B is simply asking you to compute the separate limits for the left and right side of the equation corresponding to the respective piece of your piece-wise function. You are meant to see whether the limit from the right is the same as the limit from the left.
Does this mean I must find the limit for $-x$, $x\leq 0$ as delta x approaches 0 from the right and from the left as well as the limit for $3x^2$, $x>0$ as delta x approaches 0 from the right and from the left? Can anyone show me the working??? Thank you.
4. Originally Posted by cyt91
Does this mean I must find the limit for $-x$, $x\leq 0$ as delta x approaches 0 from the right and from the left as well as the limit for $3x^2$, $x>0$ as delta x approaches 0 from the right and from the left?
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9773707973303964, "lm_q1q2_score": 0.8474154201187959, "lm_q2_score": 0.8670357477770336, "openwebmath_perplexity": 1669.8137986344384, "openwebmath_score": 0.9797729849815369, "tags": null, "url": "http://mathhelpforum.com/calculus/130818-differentiation-need-help-latex.html" }
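A sketch of the working for part (b), using the two pieces of $g$ and $g(0)=0$ (this is my reconstruction, not the thread's own solution):

```latex
\lim_{\Delta x \to 0^-} \frac{g(\Delta x) - g(0)}{\Delta x}
  = \lim_{\Delta x \to 0^-} \frac{-\Delta x}{\Delta x} = -1,
\qquad
\lim_{\Delta x \to 0^+} \frac{g(\Delta x) - g(0)}{\Delta x}
  = \lim_{\Delta x \to 0^+} \frac{3(\Delta x)^2}{\Delta x}
  = \lim_{\Delta x \to 0^+} 3\,\Delta x = 0.
```

Since the one-sided limits differ, $g$ is not differentiable at $x=0$; it is, however, continuous there, since both pieces tend to $0$ as $x\to 0$.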
mechanical-engineering, structural-engineering, friction, experimental-physics, building-physics 3) There is probably no advantage in going for a very heavy ball, as more weight will only increase the friction at the bearing surface; having said that, a larger ball gives you a larger range of movement. In my example a 50mm ball in a 48mm beveled hole gives around 45 degrees of motion in all directions from vertical. With this approach you are not relying on the flexibility of a 'string' to provide the range of movement, so you have more options in how you attach the bob to the bearing, which in turn should allow you to use a heavier bob while keeping bearing friction and damping to a minimum. 4) I would guess that the magnetic donut is providing an impulse to the pendulum to compensate for friction losses and keep it swinging indefinitely. This requires a coil and a modified oscillator circuit...but that probably merits a new question.
{ "domain": "engineering.stackexchange", "id": 1122, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mechanical-engineering, structural-engineering, friction, experimental-physics, building-physics", "url": null }
special-relativity, spacetime, reference-frames, inertial-frames Title: What will be the Lorentz transformation formula for 2 frames where they do not cross each other at $t=t'=0$? For the standard Lorentz transformation, we assume that F' crosses F at $t=t'=0$ and is moving to the right i.e. velocity $= +v$. In that case, we use \begin{gathered}x'=\gamma(x-vt), \\t'=\gamma\left(t-x\frac{v}{c^2}\right).\end{gathered} But if we assume that F' is at distance $+d$ (in F frame) away from F at $t=t'=0$ and is moving to the LEFT i.e. velocity $= -v$. In that case, what would be the corresponding Lorentz formula expression for $x'$ and $t'$? Do we just add (or subtract?) $d$ (or $d/\gamma$ or $d\cdot\gamma$?) from the standard expressions? Yes, you just compose a translation before/after the Lorentz transformation. To figure out how much to translate, consider that you want the event $(t,x)=(0,d)$ in $F$ to map to $(t',x')=(0,0)$ in $F'.$ Pure Lorentz transformations being linear and invertible, the only way to get $(0,0)$ from one is to put $(0,0)$ in, so if you choose to translate first just subtract $d$ from $x.$ $$\begin{gathered}x'=\gamma(x-d+vt),\\t'=\gamma\left(t+(x-d)\frac{v}{c^2}\right).\end{gathered}$$
{ "domain": "physics.stackexchange", "id": 81206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, spacetime, reference-frames, inertial-frames", "url": null }
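A quick numerical sanity check of the composed translation-plus-boost (a sketch; the function name and sample values are illustrative, with $c=1$ units):

```python
import math

def boost_with_offset(t, x, v, d, c=1.0):
    """Frame F' moves at -v and sits at x = d in F at t = t' = 0.
    Implements x' = gamma*(x - d + v*t), t' = gamma*(t + (x - d)*v/c**2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    xp = gamma * (x - d + v * t)
    tp = gamma * (t + (x - d) * v / c ** 2)
    return tp, xp

# The spatial origin of F' at t = 0 is the event (t, x) = (0, d) in F;
# by construction it must map to (t', x') = (0, 0).
print(boost_with_offset(0.0, 3.0, v=0.6, d=3.0))  # -> (0.0, 0.0)
```

For $d=0$ this reduces to the standard Lorentz transformation with velocity $-v$, as expected.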
everyday-life, water, physical-chemistry, surface-tension Title: Why do hot water droplets persist in cooler water? I notice this phenomenon typically when mixing hot or warm water with cold water. Basically, tiny droplets of hot water travel inside the body of cooler water and persist. I have included a photo of when I noticed this happening (in a bathtub with hot water coming out of a showerhead) and those little white spheres are what I'm asking about. I can also provide a short video if it would be helpful. Specifically, the conditions to generate these droplets are as follows: Take a US consumer detachable (<2.5 gallon per minute flow rate) shower-head with hot water (near the temperature of my home's hot water heater which is a typical US residential gas-powered water heater with a standard temperature set-point), and angle the water towards the far end of the bathtub. This shower-head has different nozzle settings and the one that I have chosen produces a fine mist of droplets similar to that produced when using a typical plastic spray bottle. This is not a uniform effect as the water is emitted from nozzles that are placed in an annulus. I have noticed that from this shower-head, water is emitted as a cone; droplets and partial streams in the middle of the cone are much hotter than those on the outside of the cone. This middle stream of water is maybe 50-60 degrees Celsius, which I estimate because I can touch the water stream for ~3 seconds before feeling it is too hot. The outer water cone is warm to the touch, so presumably it is between 36 degrees and 45 degrees Celsius. I believe the majority of the water (especially that on the outside), through a combination of evaporative cooling and conductive cooling (due to the fiberglass body of the bathtub) forms a lower temperature body of water. Some droplets, however, retain a much higher temperature (as the shower-head is not able to produce a completely uniform droplet spray) and join the body of water. 
For some reason, these water droplets do not coalesce with the cooler body of water. Is there a name for this phenomenon, and a description for the conditions to reproduce it? I understand there are complaints about image quality. I cannot fix these as I do not have access to photography equipment to take higher quality images. (This photo was taken with a flagship 2023 smartphone).
{ "domain": "physics.stackexchange", "id": 98760, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "everyday-life, water, physical-chemistry, surface-tension", "url": null }
genetics, nomenclature, research-design The lowercase indicates that the mutation is recessive; that is, you need to have two copies of the mutation to see its effect (its phenotype). dt comes from dystonia, another symptom of the mutation. As for the Rgs9-Cre/+;gtROSA/+ mouse, the genotype is a bit more complex. Whenever you see /+ it means that the animal is heterozygous, that is, it only carries one copy of that specific gene. For instance, these are transgenic animals producing the bacterial protein Cre under the Rgs9 promoter. A promoter is a sequence of DNA that controls the expression of a certain gene. In this case the DNA sequence for the promoter of the gene Rgs9 was attached to the DNA sequence for Cre. This new construct was then inserted into the genome of the mouse. The result is a mouse in which the cells that normally transcribe the Rgs9 gene also produce Cre. This mouse line was then crossed with a reporter mouse line called gtROSA26, which expresses a certain protein called $\beta$-galactosidase in a Cre-dependent manner. The gene for the galactosidase is present in all the cells, but only where Cre is present (in our case the Rgs9-positive cells) will it be transcribed. Eventually the crossing leads to the expression of $\beta$-galactosidase in Rgs9-positive cells. $\beta$-galactosidase has the nice property that it can change the colour of certain chemicals to blue, which allows one to easily visualize the cells that produce it. This can help you answer questions like: "where is the Rgs9 gene transcribed?". Wikipedia has a neat page on Gene nomenclature with lots of links to nomenclature guidelines.
{ "domain": "biology.stackexchange", "id": 2377, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "genetics, nomenclature, research-design", "url": null }
8. Angel says: @mathman: The remainder does not need to be accounted for. The division 1/3 is never formally complete if there is a remainder. Therefore, one must keep iterating the operations in long division. As the number of iterations n approaches infinity, the remainder does approach zero. That is what matters here. The number of 3s in the decimal expansion of 1/3 is infinite, and so is the number of 9s in the figure 0.999…
{ "domain": "askamathematician.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.993611678144426, "lm_q1q2_score": 0.8701921728044477, "lm_q2_score": 0.8757869819218865, "openwebmath_perplexity": 579.3539525925751, "openwebmath_score": 0.8159306049346924, "tags": null, "url": "http://www.askamathematician.com/2011/05/q-is-0-9999-repeating-really-equal-to-1/" }
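The shrinking remainder can be checked exactly with rational arithmetic (a small sketch using Python's fractions module, not part of the original discussion):

```python
from fractions import Fraction

# After n digits of the long division 1/3 = 0.333..., the part of 1/3
# not yet accounted for is exactly 1/(3*10**n), which tends to zero.
third = Fraction(1, 3)
for n in (1, 5, 10):
    partial = Fraction(int("3" * n), 10 ** n)     # 0.3, 0.33333, ...
    remainder = third - partial
    print(remainder == Fraction(1, 3 * 10 ** n))  # -> True
```

The same bookkeeping applied to 0.999… gives a remainder of exactly $10^{-n}$ after $n$ nines, which is why the limit is 1.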
What's wrong in my calculation of $\int \frac{\sin x}{1 + \sin x} dx$? I have the following function: $$f: \bigg ( - \dfrac{\pi}{2}, \dfrac{\pi}{2} \bigg ) \rightarrow \mathbb{R} \hspace{2cm} f(x) = \dfrac{\sin x}{1 + \sin x}$$ And I have to find $$\displaystyle\int f(x) dx$$. This is what I did: $$\int \dfrac{\sin x}{1 + \sin x}dx= \int \dfrac{1+ \sin x - 1}{1 + \sin x}dx = \int dx - \int \dfrac{1}{1 + \sin x}dx =$$ $$= x - \int \dfrac{1 - \sin x}{(1 + \sin x)(1 - \sin x)} dx$$ $$= x - \int \dfrac{1 - \sin x}{1 - \sin ^2 x} dx$$ $$= x - \int \dfrac{1 - \sin x}{\cos^2 x} dx$$ $$= x - \int \dfrac{1}{\cos^2x}dx + \int \dfrac{\sin x}{\cos^2 x}dx$$ $$= x - \tan x + \int \dfrac{\sin x}{\cos^2 x}dx$$ Let $$u = \cos x$$ $$du = - \sin x dx$$ $$=x - \tan x - \int \dfrac{1}{u^2}du$$ $$= x - \tan x + \dfrac{1}{u} + C$$ $$= x - \tan x + \dfrac{1}{\cos x} + C$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9648551556203814, "lm_q1q2_score": 0.8815654985151661, "lm_q2_score": 0.9136765175373274, "openwebmath_perplexity": 278.8970099780308, "openwebmath_score": 0.8357511758804321, "tags": null, "url": "https://math.stackexchange.com/questions/3483283/whats-wrong-in-my-calculation-of-int-frac-sin-x1-sin-x-dx" }
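The result can be sanity-checked numerically: on $(-\pi/2,\pi/2)$ the derivative of the candidate antiderivative should equal the integrand (a sketch using a central difference; the sample points are arbitrary):

```python
import math

def F(x):
    """Candidate antiderivative from the working above."""
    return x - math.tan(x) + 1.0 / math.cos(x)

def f(x):
    """The integrand sin(x) / (1 + sin(x))."""
    return math.sin(x) / (1.0 + math.sin(x))

h = 1e-6
for x in (-1.0, -0.3, 0.0, 0.7):
    dF = (F(x + h) - F(x - h)) / (2.0 * h)  # numerical F'(x)
    print(abs(dF - f(x)) < 1e-6)            # -> True each time
```

Agreement of $F'$ with $f$ at several points suggests the calculation is correct up to an additive constant, which is all an antiderivative determines.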
So $(a,b)$ really is in the span of $\{(1,2),(0,3)\}$. As we can choose any $(a,b) \in \mathbb R^2$, we know that those vectors span the whole $\mathbb R^2$. • Thanks for the response. Quick question, how exactly did we get y = b-2a / 3? – user2719875 Jan 16 '16 at 23:46 • I started from the last line=) (Many times in math the proof goes in the other direction compared to the order of the discovery.) Assume there is $x,y$ such that $(a,b) = (x,2x+3y)$.. Can we solve for $(x,y)$ or do we get a contradiction? – flawr Jan 17 '16 at 9:56 For any $(a,b)\in \mathbb{R}^2$, $$(a,b) = a\cdot(1,2)+(-\frac{2a}{3}+\frac{b}{3})\cdot(0,3)$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9857180635575532, "lm_q1q2_score": 0.8338607411849058, "lm_q2_score": 0.8459424373085146, "openwebmath_perplexity": 728.0928822186013, "openwebmath_score": 0.9348577857017517, "tags": null, "url": "https://math.stackexchange.com/questions/1614997/how-does-the-span-of-vectors-1-2-and-0-3-equal-r2" }
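The algebra behind $y=(b-2a)/3$ can be checked in a few lines (a sketch; the test vectors are arbitrary):

```python
def coefficients(a, b):
    """Solve (a, b) = x*(1, 2) + y*(0, 3): x = a, then 2a + 3y = b."""
    return a, (b - 2.0 * a) / 3.0

for a, b in [(1.0, 0.0), (0.0, 1.0), (5.0, -7.0)]:
    x, y = coefficients(a, b)
    recon = (x * 1.0 + y * 0.0, x * 2.0 + y * 3.0)
    print(abs(recon[0] - a) < 1e-12 and abs(recon[1] - b) < 1e-12)  # -> True
```

Since the system has a solution for every $(a,b)$, the two vectors span $\mathbb R^2$, which is exactly the argument above.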
slam, navigation, kinect, openni, ros-fuerte Thank you! Originally posted by rosslam on ROS Answers with karma: 1 on 2013-06-10 Post score: 0 Original comments Comment by Felix Endres on 2013-06-11: Your device is not found. I have seen reports that some drivers have problems with newer versions of the Xtion pro, maybe that holds true for your Kinect? Otherwise, have you tried a different USB port, preferably a USB 2.0 port? Comment by sai on 2013-06-11: This issue arises for Asus xtion pro live devices and the new version of the Microsoft Kinect, "Kinect 4 Windows" Comment by Zayin on 2013-06-12: If you have a Turtlebot and if your camera is draining power from your robot, make sure it's set to full mode on the dashboard. I have to do that every time I want to use openni + Kinect. Hi, You can find the installation steps to get the new kinect 4 windows working from these links http://www.20papercups.net/programming/kinect-on-ubuntu-with-openni/ https://groups.google.com/forum/?fromgroups=#!msg/openni-dev/h0F6kYCNigs/BR6iqqFhSJ8J http://mitchtech.net/ubuntu-kinect-openni-primesense/ Before this installation, remove the old library files from the Ubuntu Software Center by searching for libopenni. After installation, don't do rosdep install openni_launch; directly do rosmake openni_launch and it should work. Originally posted by sai with karma: 1935 on 2013-06-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14497, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, kinect, openni, ros-fuerte", "url": null }
objective-c, memory-management Title: Am I using my data source array correctly? When I want to make some quick tests for my UITableViewControllers, I usually create an NSArray that holds some dummy data. I would like to know if I'm doing anything wrong here: First in MasterViewController.h: #import <UIKit/UIKit.h> @class DetailViewController; @interface MasterViewController : UITableViewController @property (nonatomic, retain) NSArray *dataSourceArray; @end and then in MasterViewController.m: #import "MasterViewController.h" @implementation MasterViewController @synthesize dataSourceArray = _dataSourceArray; - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]; if (self) { self.title = NSLocalizedString(@"Master", @"Master"); _dataSourceArray = [[NSArray alloc] initWithObjects:@"obj 1", @"obj 2", @"obj 3", nil]; } return self; } - (void)dealloc { [_dataSourceArray release]; [super dealloc]; } So, the real question, as long as I'm not assigning _dataSourceArray to something else, I'm safe in terms of memory management here, right? Yes. Everything seems correct here. Like you mention yourself: make sure you use self.dataSourceArray to assign new values, and the synthesized setter will take care of memory management.
{ "domain": "codereview.stackexchange", "id": 1242, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "objective-c, memory-management", "url": null }
python, data Title: How to use glob() output as input to os.listdir() I'm trying to use the output of glob.glob() as the input to os.listdir() in order to get the number of files in the directory. The output of glob() gives the following: f = glob.glob(ct) print(f) ['C:\\Users\\tennant\\Desktop\\RF WAVEFORMS\\SPRING 2018\\RF\\1L22\\2018_05_02\\133258.2'] which, if I try to use as input to listdir() gives the following error test = os.listdir(str(f)) Eventually through enough trial and error, I was able to find a solution. f = glob.glob(ct) fnew = str(f).strip('[]') test = os.listdir(fnew.replace('\'',"")) print(fnew.replace('\'',""), len(test)) C:\\Users\\tennant\\Desktop\\RF WAVEFORMS\\SPRING 2018\\RF\\1L22\\2018_05_02\\133258.2 7 However it's messy and I'm clearly not understanding something more fundamental about the output of glob() or strings in general. Anything that could clean this code up and help my understanding would be greatly appreciated! Assuming you have a string to pass into glob that does wildcard matching, glob will return a list of matches. So you don't need to make a string out of that list and replace the square-brackets and so on. You can just iterate over that list and do something with each of the values, which are already strings. results = glob.glob(your_pattern)
{ "domain": "datascience.stackexchange", "id": 4781, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, data", "url": null }
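A self-contained sketch of that approach (using a temporary directory so the example is reproducible; the directory name mimics the question's):

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    sub = os.path.join(root, "133258.2")
    os.mkdir(sub)
    for name in ("a.txt", "b.txt", "c.txt"):
        open(os.path.join(sub, name), "w").close()

    # glob() already returns a list of plain path strings, so each
    # match can be passed straight to os.listdir() -- no str()/strip().
    for directory in glob.glob(os.path.join(root, "*")):
        print(len(os.listdir(directory)))  # -> 3
```

The original error came from stringifying the whole list (`str(f)`), which embeds the brackets and quotes into the path; indexing or iterating the list avoids that entirely.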
python, object-oriented, python-2.x, cache
            loaded_keys.update(self._F.load('particle', tuple(['ng_' + typesuffix for typesuffix in T.keys()])))
            for typesuffix in T.keys():
                self.pmasks[typesuffix] = self._F['ng_' + typesuffix] == self.fof
        elif self.mask_type == 'aperture':
            loaded_keys.update(self._F.load('group', ('cops', 'vcents')))
            loaded_keys.update(self._F.load('snapshot', ('xyz_g', 'xyz_dm', 'xyz_b2', 'xyz_b3', 'xyz_s', 'xyz_bh', 'Lbox')))
            for typesuffix in T.keys():
                self._F['xyz_' + typesuffix] = self._F['xyz_' + typesuffix] - self._F.cops[self.gmask]
                self._F['xyz_' + typesuffix][self._F['xyz_' + typesuffix] > self._F.Lbox / 2.] -= self._F.Lbox
                self._F['xyz_' + typesuffix][self._F['xyz_' + typesuffix] < self._F.Lbox / 2.] += self._F.Lbox
                cube = (np.abs(self._F['xyz_' + typesuffix]) < self.aperture).all(axis=1)
                self.pmasks[typesuffix] = np.zeros(self._F['xyz_' + typesuffix].shape[0], dtype=np.bool)
{ "domain": "codereview.stackexchange", "id": 23481, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, object-oriented, python-2.x, cache", "url": null }
For infinite sets, you have to add one more clause to the definition: for the free abelian group on a set $X$, you do not take all functions $f : X \to \mathbb{Z}$, you take only those functions which satisfy the property that there is a finite subset $Y \subset X$ such that if $x \in X-Y$ then $f(x)=0$. I presume that here the elements $A,B,C,D$ are distinct. The abelian group free on the set $\{A,B,C,D\}$ is the group $\langle\mathbb Z^4,+\rangle$ where $+:\mathbb Z^4\times\mathbb Z^4\to\mathbb Z^4$ is prescribed by:$$+(\langle n_A,n_B,n_C,n_D\rangle,\langle m_A,m_B,m_C,m_D\rangle)=\langle n_A+m_A,n_B+m_B,n_C+m_C,n_D+m_D\rangle$$ Or more commonly:$$\langle n_A,n_B,n_C,n_D\rangle+\langle m_A,m_B,m_C,m_D\rangle=\langle n_A+m_A,n_B+m_B,n_C+m_C,n_D+m_D\rangle$$ If $\eta:\{A,B,C,D\}\to\mathbb Z^4$ is prescribed by: • $A\mapsto\langle1,0,0,0\rangle$ • $B\mapsto\langle0,1,0,0\rangle$ • $C\mapsto\langle0,0,1,0\rangle$ • $D\mapsto\langle0,0,0,1\rangle$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9715639686018701, "lm_q1q2_score": 0.8138013737871279, "lm_q2_score": 0.837619961306541, "openwebmath_perplexity": 121.59319022332629, "openwebmath_score": 0.9427700638771057, "tags": null, "url": "https://math.stackexchange.com/questions/2629591/what-is-the-definition-of-a-free-abelian-group-generated-by-the-set-x/2629624" }
computer-architecture, cpu-pipelines If we put n as 1 we will get the wrong answer. If we put $n$ as 1 then we get $k*tp$, which seems fine to me. Any less than that would mean the instruction has not made it all the way through the pipeline, and so far there is no concrete reason to say that it will take longer (a more detailed model may reveal such reasons, but then the original formula for the time would also be modified). Consider 5 instructions being executed by a 5-stage pipeline (pictured below). How many cycles does that take? The answer isn't 5: after 5 cycles only the first instruction is done, and 4 instructions are still in the pipeline. It takes another 4 cycles to finish them. Multiply the number of cycles by the length of a cycle, $tp$.
{ "domain": "cs.stackexchange", "id": 20320, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computer-architecture, cpu-pipelines", "url": null }
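The count the answer walks through matches the standard ideal-pipeline formula $(k + n - 1)\,t_p$, which reduces to $k\,t_p$ when $n=1$ (a sketch; the variable names follow the answer):

```python
def pipeline_time(k, n, tp):
    """k pipeline stages, n instructions, tp seconds per stage (ideal pipeline)."""
    return (k + n - 1) * tp

print(pipeline_time(5, 1, 1.0))  # -> 5.0  (n = 1: exactly k cycles)
print(pipeline_time(5, 5, 1.0))  # -> 9.0  (5 cycles + 4 more to drain)
```

The `+ 4` drain term is exactly the "another 4 cycles to finish them" in the worked example above.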
fluid-mechanics, modeling
I found this document: HEC-RAS Output Variables which gives the full names of the abbreviated variable names. But the actual significance of each variable still makes little sense to me. Thank you in advance for any help you can give me.
HEC-RAS stands for Hydrologic Engineering Center - River Analysis System. It's an Army Corps of Engineers program for characterizing flow in river or other large open channel systems. First of all, the output for your question is the channel data at one particular cross section in an open channel. In order from left to right:
Q = total channel flow in cubic meters per second
Min Ch El = elevation of the lowest point of the channel at that point (section)
W.S. Elev. = the elevation in feet of the water surface at that section
Crit W.S. = this is the elevation of the water surface at critical flow; higher than this and the flow is subcritical, lower than this and the flow is supercritical. Think of a slow meandering river (subcritical) and a rapid, shooting river (supercritical). Compare this elevation to the actual WS Elev to determine which type of flow it is.
E.G. Elev = This is the elevation of the Energy Grade Line, which is the sum of the actual water surface elevation and the additional head derived from the flow velocity. This number represents the elevation if the flow velocity were cleanly directed upwards at this point in the channel. It's important for determining factors of safety of nearby facilities should the flow become obstructed at this point.
E.G. Slope = This is just the slope of the above Grade Line. It's related to the slope of the channel bottom and the velocity together.
Vel Chnl = The average flow velocity for the channel in meters per second. This is an average just to get a feel for the overall flow.
Flow Area = Looking at the cross section of the channel, the area of the flow.
Top Width = the width of the flow section measured at the top free surface.
{ "domain": "engineering.stackexchange", "id": 2321, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-mechanics, modeling", "url": null }
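The relation between W.S. Elev, Vel Chnl, and E.G. Elev described above can be sketched numerically (the elevation and velocity values below are hypothetical; metric units, g = 9.81 m/s²):

```python
g = 9.81          # gravitational acceleration, m/s^2
ws_elev = 102.40  # hypothetical water-surface elevation, m
vel_chnl = 1.8    # hypothetical average channel velocity, m/s

# Energy grade line = water surface + velocity head v^2 / (2g)
eg_elev = ws_elev + vel_chnl ** 2 / (2.0 * g)
print(round(eg_elev, 3))  # -> 102.565
```

The velocity head term is small at modest velocities, which is why E.G. Elev typically sits only slightly above W.S. Elev in HEC-RAS output.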
osx, macos-snowleopard, ogre Originally posted by ahendrix on ROS Answers with karma: 47576 on 2012-02-16 Post score: 4 As it turns out, the ogre build scripts weren't able to find the Cg framework during the build process. The Cg installer put Cg.framework in /Library/Frameworks, but the ogre build was looking in /Developer/SDKs/MacOSX10.6.sdk/Library/Frameworks I fixed this by creating a symlink in /Developer/SDKs/MacOSX10.6.sdk/Library/Frameworks to /Library/Frameworks/Cg.framework Originally posted by ahendrix with karma: 47576 on 2012-02-16 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 8273, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "osx, macos-snowleopard, ogre", "url": null }
php, html, mysql, pdo, sql-injection
</li>
</ul>
</div>
</div>
<div class="panel panel-default">
    <div class="panel-heading" role="tab" id="filterBySkills">
        <h4 class="panel-title text-center">
            <a role="button" data-toggle="collapse" data-parent="#filter" href="#filterBySkillsBody" aria-expanded="true" aria-controls="filterBySkillsBody">
                Filter By Skills
            </a>
        </h4>
    </div>
    <div id="filterBySkillsBody" class="panel-collapse collapse" role="tabpanel" aria-labelledby="filterBySkills">
        <div class="panel-body">
            <p>This filter needs more room to be displayed than this sidebar can offer. Please click on the button below to display the skills filter.</p>
            <button class="btn btn-primary btn-block" type="button" data-toggle="collapse" data-target="#filterBySkillsHideAway" aria-expanded="false" aria-controls="filterBySkillsHideAway">
                Display Skills Filter
            </button>
        </div>
    </div>
</div>
</div>
<input type="hidden" name="token" value="<?=Token::generate();?>" />
<button type="submit" class="btn btn-primary btn-block">Filter Results</button>
</form>
{ "domain": "codereview.stackexchange", "id": 18664, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, html, mysql, pdo, sql-injection", "url": null }
robotic-arm, valve Title: Does a vacuum pump have to run constantly in a pick & place system? I'm reading this article about which of vacuum pump or venturi vacuum generator is more efficient for a pick and place system: https://fluidpowerjournal.com/pump-or-venturi/ The example application is as follows: Here’s a typical vacuum-lifting application: a small end-of-arm tool consisting of eight small vacuum cups of Ø40 mm (Ø1.5″). These eight cups are picking up a flat plastic sheet, which is, of course, non porous. Cycle rate is one sheet pick every 15 seconds, or four sheets a minute. The piece is being held for only three seconds during transfer. What’s the most efficient? Pump or venturi? The conclusion appears to be venturi, but I find the argument a bit odd: The pump will run continuously with the vacuum being turned on and off via a vacuum solenoid valve. The venturi will run only when required (during the lift part of the cycle), turned on and off using a compressed air solenoid valve. The vacuum pump uses 0.25 hp all day long during a 24-hour shift. The venturi uses 0.9 hp for every three seconds of a 15-second cycle. Therefore, the venturi uses on average 0.18 hp of compressed air over the same cycle period. Consequently, the venturi is “more efficient” in overall running costs.
{ "domain": "robotics.stackexchange", "id": 2091, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "robotic-arm, valve", "url": null }
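The article's averaging argument reduces to a duty-cycle calculation (a sketch reproducing the numbers quoted above):

```python
pump_hp = 0.25           # vacuum pump runs continuously
venturi_hp = 0.9         # venturi consumes compressed air only while lifting
duty_cycle = 3.0 / 15.0  # piece held for 3 s of every 15 s cycle

venturi_avg_hp = venturi_hp * duty_cycle
print(round(venturi_avg_hp, 2))  # -> 0.18
print(venturi_avg_hp < pump_hp)  # -> True
```

So for this non-porous-part, short-hold application the venturi's average draw (0.18 hp) undercuts the always-on pump (0.25 hp), which is the article's conclusion.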
javascript, ecmascript-6
function listPastTimes() {
    pastTimesDisplay.innerHTML = '<ul class="list-past-times"><li>' + pastTimes.join('</li><li>') + '</li></ul>';
}
Stuff that's just my opinion
I personally would not use the curlies for your if if you're only calling a function. I would put the function on the same line and lose the curlies. Some people are religious about the curlies and it's ok if you are, but in my opinion it's just adding unnecessary lines to the code. I'll defend that to the death. The Apple SSL bug is not a valid argument for curlies because that developer did not put the code on the same line as the if.
This is only gonna work on relatively new browsers (maybe that was intended) - I would have used more widely supported methods.
I really don't like the idea of assigning booleans in an if/else. To me, this feels redundant. I would have done triggerStartStop like this:
function triggerStartStop() {
    intervalId = timeRunning ?
        clearInterval(intervalId) :
        setInterval(increment, 10) ;
    timeRunning = !timeRunning;
}
function increment() {
    currentTime += 10; // Increment
    currentTimeDisplay.textContent = // Write
        (currentTime / 1000).toFixed(2);
}
{ "domain": "codereview.stackexchange", "id": 27379, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, ecmascript-6", "url": null }
# Compute the equivalence classes Define an equivalence relation on $\mathbb{R}^2$ by $\textbf{x}\sim\textbf{y}$ iff $\exists A\in GL_2(\mathbb{R})$ such that $A\mathbf{x}=\mathbf{y}$. Compute the equivalence classes of this equivalence relation. My attempt: Let $\mathbf{x}=\begin{bmatrix} 0 \\ 0 \end{bmatrix}$. $A\mathbf{x}=\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ $\forall A\in GL_2(\mathbb{R})$ So, it seems that the zero vector resides alone in its equivalence class. My hunch is that all the other (nonzero) vectors reside in the other equivalence class, making a total of 2 equivalence classes. But I don't know how to prove this as there doesn't seem to be any obvious way to solve for the matrix $A$ in the equation $A\mathbf{x}=\mathbf{y}$. Can someone please tell me how to proceed? Consider $x=\begin{pmatrix}1\\1\end{pmatrix}$. Consider $A=\begin{pmatrix}\lambda & 0\\0&\mu\end{pmatrix}$, where $\lambda$, $\mu$ are nonzero so that $A$ is invertible and thus in $GL_2(\mathbb{R})$. Then $Ax=\begin{pmatrix}\lambda\\\mu\end{pmatrix}$. So all vectors $\begin{pmatrix}\lambda\\\mu\end{pmatrix}$, with $\lambda,\mu$ both nonzero are in the same class as $\begin{pmatrix}1\\1\end{pmatrix}$. The final question is how about those vectors with one component nonzero? We can see that they are also in the same equivalence class:
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9916842205394514, "lm_q1q2_score": 0.8263948579158017, "lm_q2_score": 0.8333246015211008, "openwebmath_perplexity": 119.95291112148425, "openwebmath_score": 0.9576627612113953, "tags": null, "url": "https://math.stackexchange.com/questions/2029655/compute-the-equivalence-classes" }
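A numerical sketch of the two-class claim: for any $(a,b)\neq(0,0)$ one can write down an explicit invertible matrix sending $(1,1)$ to $(a,b)$ (the particular matrices below are one choice among many, not the answer's own construction):

```python
def witness(a, b):
    """Invertible 2x2 matrix A (nested lists) with A @ (1, 1) == (a, b).
    Assumes (a, b) != (0, 0)."""
    if a != 0:
        return [[a, 0.0], [b - 1.0, 1.0]]  # det = a != 0
    return [[1.0, -1.0], [0.0, b]]         # here b != 0, det = b

def apply(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

for a, b in [(2.0, 5.0), (0.0, 3.0), (-1.0, 0.0)]:
    A = witness(a, b)
    print(apply(A, (1.0, 1.0)) == (a, b), det(A) != 0)  # -> True True
```

Since equivalence to $(1,1)$ covers every nonzero vector and the zero vector is fixed by every matrix, there are exactly two classes: $\{0\}$ and $\mathbb R^2\setminus\{0\}$.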
electrostatics Title: Change in kinetic energy of a charge is zero If I move a positive test charge from infinity through the electric field of a positive charge until it reaches a certain point, and my external work made the change in kinetic energy zero, does that mean the charge will stay stationary at that point? It does not. If there is a nonzero net electric field at that point, then the charge will begin to accelerate.
{ "domain": "physics.stackexchange", "id": 51644, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics", "url": null }
c#, sorting, linq, async-await, extension-methods
public static async IAsyncEnumerable<T> Bottom<T, TValue>(
    this IAsyncEnumerable<T> source,
    int number,
    Func<T, TValue> selector,
    IComparer<TValue> comparer)
{
    // Actual implementation.
}
Binary tree
I think this problem would call for a binary tree, instead of a list that's sorted/ranked continuously. This way you can quickly check whether the item you are iterating would even be in the top-number of items in the collection, without having to add and subsequently having to remove it again. The downside is that C# doesn't have a built-in Binary Tree that isn't hidden within the SortedX collection set. These classes sadly require unique values in their collections, which isn't guaranteed here. Alternatively, if you can handle adding another few lines to your solution, you can check if the index returned by BestIndex is equal to number, and skip adding and removing it from the list if this is the case.
Code style
This needs to be said. Your code is really compact. It took me multiple reads to figure out what on earth BestIndex was actually doing. Docstrings, comments and/or intermediate variables with clear names please. Something as simple as "Returns the rank of value in the list by performing a merge-sort style algorithm." is enough to understand its role in Bottom.
{ "domain": "codereview.stackexchange", "id": 35717, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, sorting, linq, async-await, extension-methods", "url": null }
sql, sql-server, t-sql If @cmd Like '%?%' Begin --Use Cursor to hold list of databases to execute against Declare [DbNames] Cursor Local Forward_Only Static Read_Only For Select QuoteName([name]) From [sys].[databases] Where [state] = 0 --only online databases And [is_read_only] = 0 --only databases that can be executed against And [database_id] > 4 --only user databases Order By [name]; Open [DbNames]; Fetch Next From [DbNames] Into @Database; --Get next database to execute against While @@fetch_status = 0 --when fetch is successful Begin Set @SqlScript = Replace(Replace(Replace(@cmd , '?' , @Database) , '[[' , '[') , ']]' , ']');--Adds the database name and in the case of [[]] --Print @SqlScript; Begin Try --try to execute script Exec(@SqlScript); End Try Begin Catch --if error happens against any db, raise a high level error advising the database and print the script Set @ErrorMessage = 'Script failed against database ' + @Database; Raiserror (@ErrorMessage,13,1); Print @SqlScript; End Catch; Fetch Next From [DbNames] Into @Database;--Get next database to execute against End; Close [DbNames]; Deallocate [DbNames]; End; End; Go /* --Testing Script This test is designed to generate error messages by using a table that may not exist in other databases --Create script Declare @Script NVarchar(2000);
{ "domain": "codereview.stackexchange", "id": 18056, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sql, sql-server, t-sql", "url": null }
general-relativity, spacetime, acceleration, coordinate-systems, projectile Title: A projectile in a rocket with constant proper acceleration I am a beginner in general relativity. I read chapter 9.2 in Relativity Made Relatively Easy, vol. 1, by Andrew Steane. There is a rocket which accelerates upwards with constant proper acceleration. The worldlines of the rocket frame should be hyperbolas. A rocket observer throws a ball in the rocket; the ball should go upwards and then fall down. He then mentioned that the worldline of the thrown ball looks like a vertical straight line. However, I totally do not understand why the worldline of the ball is a vertical straight line. The whole description is like the following picture. The brown one is the vertical worldline of the thrown ball. That is what I am confused about. The red hyperbolas are the worldlines of the rocket frame. The two blue slant lines are asymptotes of rocket frames. The coordinates represented by the axes belong to an inertial frame. For this frame, all the hyperbolas are accelerated frames, which are momentarily at rest at $t=0$, separated by the distances corresponding to the intersection with the x-axis. If an object is freed from its accelerated frame exactly at $t=0$ and without relative velocity with respect to this frame, its worldline is vertical. In a more general situation it is a straight line, with a slope proportional to the initial velocity with respect to the inertial frame in which the chart is made.
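In units with $c=1$, the step the answer leaves implicit can be written out explicitly (a short sketch using the answer's own setup):

```latex
% Rocket worldline with constant proper acceleration a, momentarily
% at rest at t = 0 at x = X_0 = 1/a (one of the red hyperbolas):
x(t) = \sqrt{X_0^2 + t^2}, \qquad x^2 - t^2 = X_0^2 .
% A ball released at t = 0 with zero velocity in the inertial frame
% moves inertially, so its worldline in this chart is the vertical line
x(t) = X_0 .
% Released with initial velocity v (in the inertial frame) instead,
% the worldline is the slanted straight line
x(t) = X_0 + v\,t .
```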
{ "domain": "physics.stackexchange", "id": 87222, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, spacetime, acceleration, coordinate-systems, projectile", "url": null }
optics, refraction, geometric-optics B = \frac {F_\text{s} - F_\text{p}}{2}\\ C = \cos(\delta_\text{s} - \delta_\text{p})\sqrt{F_\text{s} F_\text{p}}\\ S = \sin(\delta_\text{s} - \delta_\text{p})\sqrt{F_\text{s} F_\text{p}}\\ \\ M = \begin{bmatrix} A & B & 0 & 0\\ B & A & 0 & 0\\ 0 & 0 & C & S\\ 0 & 0 & -S & C\\ \end{bmatrix} $$ Where,
{ "domain": "physics.stackexchange", "id": 85283, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, refraction, geometric-optics", "url": null }
(see this answer). Since diagonal matrices commute, the scaling has no effect in this case, so you can parametrise all matrices similar to a diagonal matrix using the shear and rotation parameters $q$ and $\phi$. Alternatively, closer to your own approach, you can note that multiplying $S$ by an invertible diagonal matrix from the left doesn't change $A$, so instead of considering general $S= \begin{bmatrix} e & f \\ g & h \end{bmatrix}$ you can restrict to the cases $e,g\in\{0,1\}$.
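The claim that left-multiplying $S$ by an invertible diagonal matrix leaves $A = S^{-1}\Lambda S$ unchanged is easy to verify numerically. A small sketch (the particular matrices below are arbitrary choices for illustration):

```python
import numpy as np

# A = S^{-1} Lam S for a diagonal Lam.  Replacing S by E @ S, with E
# diagonal and invertible, gives the same A, because diagonal matrices
# commute: (E S)^{-1} Lam (E S) = S^{-1} (E^{-1} Lam E) S = S^{-1} Lam S.
Lam = np.diag([2.0, -3.0])                 # the diagonal matrix
S = np.array([[1.0, 2.0], [3.0, 4.0]])     # arbitrary invertible S
E = np.diag([5.0, -0.5])                   # arbitrary invertible diagonal

A1 = np.linalg.inv(S) @ Lam @ S
A2 = np.linalg.inv(E @ S) @ Lam @ (E @ S)

print(np.allclose(A1, A2))  # → True
```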
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9706877692486438, "lm_q1q2_score": 0.8130674556534608, "lm_q2_score": 0.8376199653600372, "openwebmath_perplexity": 158.78235655930806, "openwebmath_score": 0.8841601014137268, "tags": null, "url": "https://math.stackexchange.com/questions/1802038/find-all-similar-matrices-to-diagonal-matrix" }
c, parsing, linux, assembly uint32_t *ubuf = buf; if (info == CPU_PROC_BRAND_STRING) { ___cpuid(CPU_PROC_BRAND_STRING, &ubuf[0], &ubuf[1], &ubuf[2], &ubuf[3]); ___cpuid(CPU_PROC_BRAND_STRING_INTERNAL0, &ubuf[4], &ubuf[5], &ubuf[6], &ubuf[7]); ___cpuid(CPU_PROC_BRAND_STRING_INTERNAL1, &ubuf[8], &ubuf[9], &ubuf[10], &ubuf[11]); return; } else if (info == CPU_HIGHEST_EXTENDED_FUNCTION_SUPPORTED) { *ubuf = highest_ext_func_supported(); return; } uint32_t eax, ebx, ecx, edx; ___cpuid(info, &eax, &ebx, &ecx, &edx);
{ "domain": "codereview.stackexchange", "id": 5117, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, parsing, linux, assembly", "url": null }
#### evinda ##### Well-known member MHB Site Helper Could you explain to me how you got to this equation? Therefore the error in the next iteration is: $$\varepsilon_{i+1} = \frac 2 3 x \varepsilon_i + \mathcal O(\varepsilon_i^2)$$ Since we're interested in the root $x=4$ (with values of $|\varepsilon| \le 1$), we can substitute $x=4$: $$\varepsilon_{i+1} = \frac 2 3 \cdot 4 \cdot \varepsilon_i + \mathcal O(\varepsilon_i^2)$$ Since $|\frac 2 3 \cdot 4| > 1$, we can tell that the process is numerically unstable around $x=4$. That is, an error in the input will grow iteration by iteration, meaning it diverges from the root. More generally, we're looking at: $$\varepsilon_{i+1} = \varphi(x+\varepsilon_i) - \varphi(x) = \varphi'(x) \varepsilon_i + \mathcal O(\varepsilon_i^2)$$ Do you see what you should do? #### Klaas van Aarsen ##### MHB Seeker Staff member Could you explain to me how you got to this equation? We have the algorithm: $$x_{i+1} = \varphi(x_i)$$ Now suppose that $\varepsilon_i$ is the error in $x_i$ with respect to the actual root $x$. That means that $x_i = x + \varepsilon_i$ and $x_{i+1} = x + \varepsilon_{i+1}$.
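The divergence the thread describes is easy to see numerically. The thread does not give $\varphi$ itself, so the map below, $\varphi(x) = (x^2 - 4)/3$, is an assumption chosen only to match the stated facts ($\varphi'(x) = \frac{2}{3}x$ and a fixed point at $x = 4$):

```python
# phi(x) = (x**2 - 4) / 3 is one map consistent with phi'(x) = (2/3) x
# and phi(4) = 4 (an assumption for illustration, not from the thread).
# Since |phi'(4)| = 8/3 > 1, a small input error grows every iteration.
def phi(x):
    return (x * x - 4) / 3

x = 4 + 1e-6          # start with a tiny error around the root x = 4
errors = []
for _ in range(5):
    x = phi(x)
    errors.append(abs(x - 4))

# Each error is roughly 8/3 times the previous one: the iteration
# moves away from the root instead of converging to it.
print(errors)
```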
{ "domain": "mathhelpboards.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9828232899814557, "lm_q1q2_score": 0.8334024583110201, "lm_q2_score": 0.8479677545357569, "openwebmath_perplexity": 358.68960473524623, "openwebmath_score": 0.8711573481559753, "tags": null, "url": "https://mathhelpboards.com/threads/how-can-i-choose-which-is-the-most-suitable-method.8784/" }
algorithms, combinatorics, matrices Title: Is this an NP-complete problem? Minimum count of distinct values over all matrix columns, given only in-row swap operations I am searching for an algorithm for this! I cannot find anything useful in textbooks so far. Thanks in advance! The input is an $N \times K$ matrix, where $N$ and $K$ are positive integers (usually $N$ is large and $K$ is small). Each row of the matrix contains distinct values, but each column may contain duplicate values. For $i = 1, 2, \cdots, K$, let $m_i$ denote the count of distinct values in the $i$-th column and define $\rho := \min(m_1, m_2, \cdots, m_K)$. If we can swap any two numbers within each row, how do we minimize $\rho$, given that we can perform any number of the allowed operations (swaps)? Can this problem be NP-complete in terms of $N$?
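To pin down what is being optimised, the objective can be brute-forced on tiny instances (the swaps within a row generate every permutation of that row). This is exponential in $N$, so it is only a sanity-check tool, not an answer to the complexity question; the function name is my own:

```python
from itertools import permutations, product

def min_rho(matrix):
    """Brute-force the smallest achievable rho = min over columns of the
    number of distinct values in that column, where each row may be
    rearranged arbitrarily.  Exponential in the number of rows."""
    best = None
    # every row can be permuted independently of the others
    for rows in product(*(permutations(row) for row in matrix)):
        cols = zip(*rows)
        rho = min(len(set(col)) for col in cols)
        best = rho if best is None else min(best, rho)
    return best

# Two rows sharing the value 1: aligning the shared 1s in one column
# gives a column with a single distinct value, so rho = 1.
print(min_rho([[1, 2], [1, 3]]))  # → 1
```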
{ "domain": "cs.stackexchange", "id": 6803, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, combinatorics, matrices", "url": null }
vba, excel Title: Copy a contiguous sub-column of cells This code searches for the word "Disbursements" and copies all of the rows with data below it, including the row with the specified word. After the rows are copied to a new sheet, the unnecessary rows below the contiguous range of data are deleted. In order to search for the last cell, I used a function that finds the last cell with data. Instead of deleting the rows below the contiguous range, should I just copy the contiguous rows to begin with? Is there a better way to delete the rows below the first blank row? Sub Reformat_ZZ_CM_BNREG() Application.ScreenUpdating = False Application.Calculation = xlManual Application.DisplayAlerts = False 'declare variables to search for within bank register Dim searchText As String Dim ws As Worksheet Dim searchCell As Range Dim newWS As Worksheet Dim lastCell As String Dim firstCell As String Dim lastRow As Long 'set variables to sheet name, search text, and searchCell searchText = "Disbursements" Set ws = Sheets("ZZ_CM_BNREG") lastCell = FindLast(3, "ZZ_CM_BNREG") 'Set lastCell = Range(Cells.Find("*", , xlFormulas, , xlRows, xlPrevious, , , False), Cells.Find("*", , xlFormulas, , xlColumns, xlPrevious, , , False)) Set newWS = Sheets.Add(After:=Sheets("ZZ_CM_BNREG")) newWS.Name = "Bank Register" 'Unmerge all cells in Column A & get cell address of searchCell With ws ws.Activate Range("A:A").UnMerge Set searchCell = .Cells.Find(What:="Disbursements", _ SearchFormat:=True) If searchCell Is Nothing Then MsgBox ("Error") Else firstCell = searchCell.Address End If End With
{ "domain": "codereview.stackexchange", "id": 40518, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel", "url": null }
computer-architecture, computation-models Title: Relation between machine code and the von Neumann architecture Since machine language executes instructions using the ALU, CPU registers and memory, is it correct to say that machine code abstracts the von Neumann model? If one exists, what, semantically, is the relation between machine code and the von Neumann architecture? Asking whether machine code abstracts the von Neumann model is a category error – it's a statement that doesn't type-check. The von Neumann model is essentially a system architecture: it's a way of designing computers. Machine code is a sequence of instructions: a way of telling a computer what to do. Those are two completely different things. Machine code arguably doesn't abstract anything. It's the most concrete, specific thing there is: it runs on only one specific kind of CPU (or perhaps a family of very closely related ones).
{ "domain": "cs.stackexchange", "id": 7608, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computer-architecture, computation-models", "url": null }
nuclear-physics, astrophysics, stars, pauli-exclusion-principle, neutron-stars Title: What really supports neutron stars? I have read this question (to Andrew's answer, in the comments): What supports neutron stars is the repulsion provided by the strong nuclear force between closely-packed neutrons. The central pressure in a neutron star is an order of magnitude higher than ideal neutron degeneracy pressure. no, it's not quark degeneracy pressure, it's actual forces due to gluon exchange. Will a neutron star always collapse into a black hole in the future? Now as far as I understand, on this site (and wiki) it is said that neutron stars do not collapse because they are supported by neutron degeneracy pressure. Though, based on the comments, it is at its core a different mechanism: the residual strong force (repulsive at short distances) between neutrons. One of the comments says it is mediated by gluons, but as far as I understand, the residual strong force is mediated by pions between neutrons. Now the distinction is important, because even on this site it is not clarified whether it is neutron degeneracy pressure (which is explained differently, based on QM and the Pauli exclusion principle) or the repulsive (at short distances) residual strong force that actually supports the neutron star against further collapse. So there are two main ideas:
1. it is the neutron degeneracy pressure between neutrons, and how they fill the QM energy levels (PEP)
2. it is the repulsive (at short distances) residual strong force
{ "domain": "physics.stackexchange", "id": 83302, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nuclear-physics, astrophysics, stars, pauli-exclusion-principle, neutron-stars", "url": null }
special-relativity If we can't perform those position measurements simultaneously, then we need to take the rod's velocity into account. If the rod has a high velocity relative to our frame, so that relativistic effects aren't negligible, we need to be very careful that our position sensors record the exact time of their measurements, using clocks that are synchronised in our frame. And if those sensors measure the rod's position from a distance, they also need to compensate for the time it takes light to travel (from the part of the rod that they're measuring) to the sensor.
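The light-travel-time compensation the answer mentions is just a subtraction: a sensor that receives light at clock time $t_\text{recv}$ from a distance $d$ is reporting on an event that happened at $t_\text{recv} - d/c$. A sketch with made-up numbers:

```python
# A sensor at distance d records light from the rod at clock time t_recv.
# The measurement actually refers to the earlier emission time
# t_event = t_recv - d / c.  The numbers below are illustrative.
c = 299_792_458.0   # speed of light, m/s
d = 30.0            # sensor-to-rod distance, m
t_recv = 1.0e-3     # time the light arrives at the sensor, s

t_event = t_recv - d / c
print(t_event)      # slightly earlier than t_recv
```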
{ "domain": "physics.stackexchange", "id": 75746, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity", "url": null }
### Most Acronyms (but Especially FOIL and BDMAS) Why they're bad: My objection is qualified here (but not for FOIL). An acronym can sometimes summarize effectively, but it is not an explanation and does not lead to understanding. In rare cases, understanding may not be critical for long term proficiency, maybe. But an acronym is a shoddy foundation to build on. If you're trying to make good robots, use acronyms exclusively. How it happens: Acronyms can make early work go easier and faster. This makes the initial teaching appear successful—like a fresh coat of paint on rotten wood. Teacher and student are happy until sometime later when the paint starts to peel. Sometimes after the student has sufficient understanding they may continue to use certain acronyms because of an efficiency gain they get from it, which may lead to perpetuating an emphasis on acronyms. Better: Teach students to understand first. Give the student the acronym as a way for them test if they are on the right track when you're not around. Very sparingly use as a means of prompting them to work a problem out for themselves. (My ideal would be never, but realistically, they need to be reminded of their back up strategy when they get stuck.) Never, ever take the risk of appearing to "prove" the validity of operations you or others have performed by an appeal to an acronym (unless it is a postulate or theorem reference)—that's not just bad math, it is illogical. Expansion: Certain acronyms, if you stoop to use them, can possibly be viewed as training wheels. Maybe BDMAS qualifies. But is there a strategy for losing the training wheels or are the students who use the acronym doomed to a life of having nothing else but training wheels to keep from falling over?
{ "domain": "blogspot.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9830850847509661, "lm_q1q2_score": 0.8170843274762134, "lm_q2_score": 0.8311430415844384, "openwebmath_perplexity": 686.6043304835285, "openwebmath_score": 0.5754119753837585, "tags": null, "url": "https://darrenirvine.blogspot.com/" }
qiskit, quantum-gate File ~/git/Deuteron/lib/python3.10/site-packages/qiskit/quantum_info/operators/scalar_op.py:248, in ScalarOp._pad_with_identity(current, other, qargs) 246 if qargs is None: 247 return other --> 248 return ScalarOp(current.input_dims()).compose(other, qargs=qargs) File ~/git/Deuteron/lib/python3.10/site-packages/qiskit/quantum_info/operators/scalar_op.py:130, in ScalarOp.compose(self, other, qargs, front) 121 return self.coeff * ret 123 # For qargs composition we initialize the scalar operator 124 # as an instance of the other BaseOperators subclass. We then 125 # perform subsystem qargs composition using the BaseOperator (...) 128 # not support initialization from a ScalarOp or the ScalarOps 129 # `to_operator` method). --> 130 return other.__class__(self).compose(other, qargs=qargs, front=front) File ~/git/Deuteron/lib/python3.10/site-packages/qiskit/quantum_info/operators/operator.py:278, in Operator.compose(self, other, qargs, front) 275 indices = [num_indices - 1 - qubit for qubit in qargs] 276 final_shape = [np.product(output_dims), np.product(input_dims)] 277 data = np.reshape( --> 278 Operator._einsum_matmul(tensor, mat, indices, shift, right_mul), final_shape 279 ) 280 ret = Operator(data, input_dims, output_dims) 281 ret._op_shape = new_shape
{ "domain": "quantumcomputing.stackexchange", "id": 3609, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "qiskit, quantum-gate", "url": null }
string-theory, standard-model Is there such a case, so that one could study the science behind this claim? Also (iii) is somewhat disappointing, as it means that most of the potential insight an underlying theory could provide must already be assumed: Quite unlike Newton's theory, which is indispensable to derive from a few dozen constants the complete motion of the planets, or quantum chemistry and statistical mechanics, which are indispensable to derive from a few dozen constants all properties of macroscopic materials, string theory does not provide a similar service for elementary particle theory, as quantum field theory already contains all machinery needed to draw the empirical consequences from 'the qualitative structure of the standard model, plus the SUSY, plus', say, 4-decimal-place data on 33 parameters. When Newton's mechanics was new, people expected a theory of the solar system to produce better descriptions for the stuff left unexplained by Ptolemy: the equant distances, the main-cycle periods, and epicycle locations. Newton's theory didn't do much there--- it just traded in the Ptolemy parameters for the orbital parameters of the planets. But the result predicted the distances to the planets (in terms of the astronomical unit), and to the sun, and these distances could be determined by triangulation. Further, the theory explained the much later observation of stellar aberration and gave a value for the speed of light. If your idea of what a theory should predict was blinkered by having been brought up in a Ptolemaic universe, you might have considered Kepler/Newton's theory as observationally inconsequential, since it did not modify the Ptolemaic values for the observed locations of the planets in any deep way. The points you bring up in string theory are similar.
String theory tells you that you must relate the standard model to a microscopic configuration of extra dimensions and geometry, including perhaps some matter branes and almost certainly some orbifolds. These predictions are predicated on knowing the structure of the standard model, much like the Newtonian model is predicated on the structure of the Ptolemaic one. But the result is that you get a complete self-consistent gravitational model with nothing left unknown, so the number of predictions is vastly greater than the number of inputs, even in the worst-case scenario you can imagine.
{ "domain": "physics.stackexchange", "id": 2649, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "string-theory, standard-model", "url": null }