html, css, layout

.wrapper { display: flex; flex-flow: column nowrap; height: 175px; padding: 10px; border-color: red; }
.b, .c { float: right; border-color: green; }
.c { background-color: lightgrey; }
/** irrelevant **/
div { height: 50px; border: 1px solid; }
.a { border-color: blue; }

<div class="wrapper">
  <div class="a">:::::: A :::::::</div>
  <div class="b">
    <div>:::::: B ::::::</div>
    <div class="c">:::::: C ::::::</div>
  </div>
</div>

I bring that up because navigation items are generally top-level children, so we can consider that in circumstances like margin-top: -24px. You can take the following snippet (unchanged from the OP, other than color for effect) and see why it can become problematic fast; worse, it can easily create a domino effect where you have to add margins to a lot of other things just to keep everything aligned:

/*------------------------ Default grid -------------------------*/
.flex { display: flex; flex-flow: row wrap; justify-content: space-between; }
.col { flex: 1; }

/*------------------------ Columns -------------------------*/
.has-2-columns .col { flex: none; width: 49%; }
The first thing I did was find their corresponding Lie algebras: $$\mathfrak{h}=T_1H=\left\{\left( \begin{array}{ccc} 0 & x & z \\ 0 & 0 & y \\ 0 & 0 & 0 \end{array}\right) \Bigg\vert x,y,z \in \mathbb{R} \right\} ~\text{ and }~\mathfrak{z}=T_1Z=\left\{\left( \begin{array}{ccc} 0 & 0 & z \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right) \Bigg\vert z \in \mathbb{R} \right\}$$ Next we need to find all the $2$-dimensional subalgebras of $\mathfrak{h}$ containing $\mathfrak{z}$. Here's where I lose confidence in my solution; I think that for any $a,b \in \mathbb{R}$ the following are all the $2$-dim. subalgebras (containing $\mathfrak{z}$): 1. $\operatorname{span}\left\{ \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right), ~\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right) \right\}$ 2. $\operatorname{span}\left\{ \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right), ~\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right) \right\}$
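Since the bracket computations behind this claim are easy to get wrong by hand, here is a small NumPy sketch (my own addition, not part of the original question) that checks the two proposed spans are closed under the commutator $[A,B]=AB-BA$, i.e. that they really are Lie subalgebras containing $\mathfrak{z}$:

```python
import numpy as np

# My addition (not in the original post): numerically verify that each proposed
# 2-dimensional span is closed under the matrix commutator, hence a Lie
# subalgebra of the strictly upper-triangular matrices, and contains z.

def E(i, j):
    """Elementary 3x3 matrix with a single 1 in row i, column j (0-indexed)."""
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

def bracket(A, B):
    return A @ B - B @ A

def in_span(M, basis):
    """Is M a linear combination of the basis matrices? (least-squares check)"""
    A = np.column_stack([b.ravel() for b in basis])
    coeffs, *_ = np.linalg.lstsq(A, M.ravel(), rcond=None)
    return bool(np.allclose(A @ coeffs, M.ravel()))

sub1 = [E(0, 1), E(0, 2)]   # candidate 1: span of the (1,2)- and (1,3)-entries
sub2 = [E(1, 2), E(0, 2)]   # candidate 2: span of the (2,3)- and (1,3)-entries

closed1 = in_span(bracket(sub1[0], sub1[1]), sub1)   # bracket is 0 here
closed2 = in_span(bracket(sub2[0], sub2[1]), sub2)   # bracket is 0 here
```

Both brackets vanish, so both candidates are abelian subalgebras; the same `in_span` helper can be reused to test any other candidate basis.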
- That's a very nicely written question. Wish more were like this... –  rm -rf Jul 26 '12 at 16:17

An immediate generalization to a system of equations would be

nDLangevin[x0_, f_, covMat_, tf_, n_, m_: 1] :=
 With[{nDim = Length[x0], mean = ConstantArray[0, Length[x0]],
   dt = N[tf/n], xx0 = Table[x0, {m}], nDf = Function[x, f[#] & /@ x]},
  Transpose@NestList[
    # + dt nDf[#] + RandomVariate[MultinormalDistribution[mean, covMat], m] &,
    xx0, n]]

but it's terribly slow. –  b.gatessucks Jul 26 '12 at 16:49

I don't think it can be parallelized, because each step depends on the previous step. Parallelization works well when the steps are independent of each other (e.g. Map, Do, etc.). I think your NestList approach is very clean and efficient. An equivalent formulation would be using memoization and recursive functions, but that is about 2x slower in my tests. All the functions used are compilable, but I'm not sure how to handle the case of arbitrary f. If f is known in advance, then you could easily compile it for that f. –  rm -rf Jul 26 '12 at 17:02

@R.M it can be parallelized if you calculate each realization independently (I ended up doing this when I was working on stochastic processes a few years back) –  acl Jul 26 '12 at 17:17
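For readers without Mathematica, here is a rough NumPy translation of the same idea (names and the explicit sqrt(dt) noise scaling are my choices; the Mathematica version above effectively folds the time step into covMat), advancing all m realizations in one vectorized step, which is also the "parallel over realizations" strategy acl describes:

```python
import numpy as np

# Rough NumPy sketch (mine) of nDLangevin above: Euler-Maruyama for
# dx = f(x) dt + noise, advancing all m realizations in one vectorized step.
# Here cov is the continuous-time noise covariance and the sqrt(dt) scaling
# is explicit.

def langevin(x0, f, cov, tf, n, m=1, seed=None):
    rng = np.random.default_rng(seed)
    d = len(x0)
    dt = tf / n
    L = np.linalg.cholesky(cov)                        # factor of the covariance
    x = np.tile(np.asarray(x0, dtype=float), (m, 1))   # shape (m, d)
    out = np.empty((n + 1, m, d))
    out[0] = x
    for k in range(n):
        noise = rng.standard_normal((m, d)) @ L.T
        x = x + dt * f(x) + np.sqrt(dt) * noise
        out[k + 1] = x
    return out

# Ornstein-Uhlenbeck drift f(x) = -x as a quick smoke test
traj = langevin([1.0, -1.0], lambda x: -x, np.eye(2), tf=5.0, n=500, m=200, seed=0)
```

After tf = 5 the ensemble mean has relaxed close to zero, as expected for an Ornstein-Uhlenbeck process.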
nuclear-physics, mass-energy, binding-energy $$ \tfrac{1}{2}m_ev^2 = k\frac{e^2}{r} \tag{2} $$ So far so good, but here's the problem. In a hydrogen atom the separation of the electron and proton is about a Bohr radius, which is about 0.05 nm. If we let the electron fall to one Bohr radius from the proton, then use equation (2) to calculate the kinetic energy of the electron, we find it is about 27.2 eV, which corresponds to a velocity of about 3 million metres per second. The electron is moving far too fast to "stick" to the proton. It will just flash past the proton and whizz off to infinity again. To make the hydrogen atom we have to slow down the electron, that is, we have to take away some of its kinetic energy (in fact we have to take away about 13.6 eV of its energy). This will slow the electron down enough to make it "stick" (I'm using the term "stick" rather loosely here!) to the proton. Now, we started with an energy: $$ E = m_pc^2 + m_ec^2 $$ and we have to take away 13.6 eV or our hydrogen atom won't form. So the energy of the hydrogen atom is: $$ E_H = m_pc^2 + m_ec^2 - 13.6\ \text{eV} $$ The mass of the hydrogen atom is given by Einstein's famous equation $E = mc^2$, so to get the mass of the hydrogen atom we divide by $c^2$ to get: $$ m_H = m_p + m_e - \frac{13.6\ \text{eV}}{c^2} $$ And you can see immediately that the mass of the hydrogen atom is less than the mass of the proton and electron we used to make it. The mass is less because we started with a proton and an electron, then took something away. In practice, when we combine protons and electrons they form hydrogen by emitting a photon with an energy of 13.6 eV.
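The size of this effect is easy to check numerically. A quick sketch (my addition, with rounded rest energies):

```python
# My addition: the bookkeeping above, in numbers (rounded rest energies in eV).
m_p = 938.272e6    # proton rest energy, eV
m_e = 0.511e6      # electron rest energy, eV
binding = 13.6     # hydrogen ground-state binding energy, eV

m_H = m_p + m_e - binding                    # hydrogen atom rest energy, eV
fractional_deficit = binding / (m_p + m_e)   # about 1.4e-8
```

The deficit is about 14 parts per billion, which is why the mass defect is negligible in chemistry, while the identical bookkeeping dominates nuclear binding energies.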
genetics

Title: Can there be medium-height (neither tall nor short) pea plants in Mendel's experiment?

Can there be medium-height (neither tall nor short) pea plants in Mendel's experiment? All textbooks I have read seem to imply that pea plants have to be either tall or short, nothing in between.

Medium height (like in people) and other traits that seem like a mixture of two extremes are often a result of incomplete dominance. For example, a red flower and a white flower are bred to produce an offspring with pink petals. Mendelian genetics does not include incomplete dominance (which is classified as, surprisingly, non-Mendelian genetics). Basically, Mendel got very lucky with his choice of plant. Pea plant height is strictly dominant, meaning one dominant allele results in tall plants, regardless of the identity of the second inherited allele. This is a consequence of the genetic makeup of pea plants. Had he tried a similar experiment with snapdragon flower color, he would have been very confused. (See https://www.ndsu.edu/pubweb/~mcclean/plsc431/mendel/mendel2.htm for a snapdragon incomplete dominance example.)
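The strict-dominance point can be made concrete with a Punnett-square tally. This little sketch (mine, not from the answer) crosses two Tt heterozygotes and counts phenotypes, giving the classic 3:1 tall:short ratio with no intermediate class:

```python
from collections import Counter
from itertools import product

# My sketch: cross two Tt heterozygotes. With 'T' strictly dominant there are
# only two phenotype classes, tall and short, in a 3:1 ratio - no mediums.

def cross(parent1, parent2):
    offspring = [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]
    phenotype = lambda genotype: 'tall' if 'T' in genotype else 'short'
    return Counter(phenotype(g) for g in offspring)

ratio = cross('Tt', 'Tt')   # Counter({'tall': 3, 'short': 1})
```

An incomplete-dominance trait like snapdragon color would need a third phenotype for the heterozygote, which this model (and Mendel's pea-height data) simply does not have.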
evolution, botany, ecology, plant-physiology, plant-anatomy

Title: Why do some plant species have lobed leaves, while similar species in the same habitat don't?

Some plants have lobed leaves, like the English oak (Quercus robur), while other plants growing in the same deciduous woodland habitats, and very often growing alongside oaks, such as the European beech (Fagus sylvatica), don't have lobes. Here are two leaves side by side for comparison: These two species should be subject to most of the same evolutionary pressures. Why would one evolve lobed leaves, whilst the other has only tiny serrations?

This is a question for which, I think at the moment, we don't have a clear answer. It is important to bear in mind that the leaf plays a number of important roles in the plant (photosynthesis, thermoregulation etc.), so leaf shapes probably evolved through a process of successive trade-offs. This may make it difficult to identify the exact selection processes operating on any one species. In contrast, something like the eye has a well-defined single function, which, in principle at least, makes it easier to understand the link between form and function. From Niklas (1988):

Life history and optimisation theory suggest that the number of phenotypic solutions that allow for different equally successful trait combinations increases as the number of trade-offs increases – a conclusion that applies to traits within the leaf (e.g. for shape) as well as to leaf–branch relationships.

However, there are a number of ideas to explain leaf shape diversity, which include:

Thermoregulation: It has been shown that by adding lobes to leaves, the rate of heat transfer across a leaf is greater than that of an unlobed leaf of the same area (e.g. Gurevitch and Schuepp 1990). So, lobed leaves may be selected for under certain environmental conditions.

Hydromechanical constraints: Lobed leaves may have greater hydraulic efficiency.
For smaller veins, hydraulic pressure increases as they present an increased resistance to water flow. This places stress on the delicate outer leaf tissues. If lobed leaves have relatively less mesophyll tissue than large, highly conductive veins, they may have reduced hydraulic resistance compared with unlobed leaves (Sack and Tyree 2005).
quantum-field-theory, particle-physics, field-theory, renormalization, phase-transition I should mention that there are many other arguments that the IR fixed point is described by free fermions. The usual argument is that this theory should describe the phase transition of the 2D Ising model, and there are many derivations which show that this transition is described by free fermions in the IR. But since you are interested in QFTs which are defined in the UV and the IR I thought taking the CFT approach would be more cautious.
we get $$\begin{eqnarray*} S &:&=\sum_{k=1}^{n}k^{2}=\sum_{k=0}^{n}\left[\binom{k}{1}+2\binom{k}{2}\right] =\sum_{k=1}^{n}\binom{k}{1}+2\sum_{k=1}^{n}\binom{k}{2} \\ &=&\binom{n+1}{2}+2\binom{n+1}{3} \\ &=&\frac{n\left( n+1\right) \left( 2n+1\right) }{6}. \end{eqnarray*}$$ • your $\LaTeX$ isn't rendering? – Guy Mar 13 '14 at 11:55 • Why? Everything is fine for me. I'm currently using Mozilla Firefox 27.0.1. Please see the screen shot I added. – Américo Tavares Mar 13 '14 at 11:58 • Okay, I can read it now (from your screenshot; it's still not rendering here). Better make a bug report on Mathematics Meta. – Guy Mar 13 '14 at 12:03 • The rendering issue was posted here. – Américo Tavares Mar 13 '14 at 12:47 The identity is an application of Worpitzky's identity involving Eulerian numbers. Worpitzky's theorem states: $$x^n=\sum_{k=0}^{n}A(n,k) \binom{x+k}{n}$$ where the Eulerian number $A(n, k)$ is defined to be the number of permutations of the numbers $1$ to $n$ in which exactly $k$ elements are greater than the previous element (permutations with $k$ "ascents"). (Worpitzky's identity is not hard to prove using induction, by the way.)
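Both the per-term identity and the summed closed form are easy to sanity-check numerically. A short sketch (my addition) using Python's exact binomial:

```python
from math import comb

# My addition: exact numerical check of k^2 = C(k,1) + 2*C(k,2), and of the
# hockey-stick summation  sum_{k=1}^{n} k^2 = C(n+1,2) + 2*C(n+1,3).

term_identity = all(k * k == comb(k, 1) + 2 * comb(k, 2) for k in range(100))

def sum_of_squares(n):
    return comb(n + 1, 2) + 2 * comb(n + 1, 3)

closed_form = all(
    sum_of_squares(n) == n * (n + 1) * (2 * n + 1) // 6 for n in range(200)
)
```

For example, sum_of_squares(10) returns 385, matching 10·11·21/6.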
We show that if the set of values of $\mu$ is finite then $\dim L^p(\mu)<\infty$. Let $x_1$ be the smallest positive finite value of the measure and let $\mu(D_1)=x_1$. Then $D_1$ is an atom. Next we take the smallest positive finite value $x_2$ of the measure on $X\setminus D_1$ and a set $D_2 \subset X\setminus D_1$ with $\mu(D_2)=x_2$; it is an atom, and so on. The procedure has to finish after finitely many steps, say $n$ steps. On the set $X \setminus (D_1\cup\dots\cup D_n)$ the measure takes at most two values, zero and infinity, hence each function from $L^p(\mu)$ is zero a.e. on this set. On an arbitrary atom a measurable function is constant a.e. (because this is true for measurable simple functions). Hence an arbitrary function from $L^p(\mu)$ is equal a.e. to a linear combination of the characteristic functions of the atoms $D_1,\dots,D_n$. • Thank you very much for the proof sketch! – PhoemueX Nov 30 '15 at 6:53
java, object-oriented, tic-tac-toe, gui

    scoreBoard.add(tie);
    scoreBoard.add(reset);
    scoreBoard.add(turn);
    timerPanel.add(timer);
    timerPanel.add(newGame);

    mainPanel.setLayout(new BorderLayout());
    mainPanel.add(board, BorderLayout.CENTER);
    mainPanel.add(scoreBoard, BorderLayout.EAST);
    mainPanel.add(timerPanel, BorderLayout.SOUTH);
    add(mainPanel);

    newGame.addActionListener(new BtnListener());
    reset.addActionListener(new BtnListener());

    btn = new BoardButton[3][3];
    board.setLayout(new GridLayout(3, 3));
    timer.setRunning(true);

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            btn[i][j] = new BoardButton(j, i);
            btn[i][j].setFont(new Font("Arial", Font.BOLD, 70));
            btn[i][j].setForeground(Color.blue);
            btn[i][j].addActionListener(new BoardListener());
            board.add(btn[i][j]);
        }
    }
}

// Checks for winner
public void checkWin() {
    int diagSum1 = 0;
    int diagSum2 = 0;
    int colSum = 0;
    int rowSum = 0;
    String winner = "";

    diagSum1 = btn[0][2].getValue() + btn[1][1].getValue() + btn[2][0].getValue();
    diagSum2 = btn[0][0].getValue() + btn[1][1].getValue() + btn[2][2].getValue();
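As a language-neutral aside (my own sketch, in Python rather than Java), the sum-based win test that checkWin is building can be written compactly by encoding X as 1 and O as -1 and looking for any line summing to ±3:

```python
# My Python sketch of the sum-based win test: encode X as 1, O as -1,
# empty as 0; any row, column, or diagonal summing to +3 or -3 is a win
# for X or O respectively.

def check_win(board):
    lines = [row[:] for row in board]                              # rows
    lines += [[board[r][c] for r in range(3)] for c in range(3)]   # columns
    lines.append([board[i][i] for i in range(3)])                  # main diagonal
    lines.append([board[i][2 - i] for i in range(3)])              # anti-diagonal
    for line in lines:
        if sum(line) == 3:
            return 'X'
        if sum(line) == -3:
            return 'O'
    return None
```

Collecting all eight lines first, then testing them in one loop, avoids the separate diagSum/rowSum/colSum bookkeeping in the Java version.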
c++, c++20

constexpr void set_functor_move(scope_fail &&other)
    noexcept(std::is_nothrow_move_constructible_v<EF> ||
             std::is_nothrow_copy_constructible_v<EF>) {
    /* only perform construction if other is active */
    if (!other.m_released) {
        if constexpr (std::is_nothrow_move_constructible_v<EF>) {
            ::new(&m_functor) EF(std::forward<EF>(other.m_functor));
        } else {
            try {
                ::new(&m_functor) EF(other.m_functor);
            } catch (...) {
                m_released = true;
                other.m_functor();
                other.release();
                throw;
            }
        }
        other.release();
    }
}
};

template<typename EF>
scope_fail(EF) -> scope_fail<EF>;
}

namespace turtle {
template<typename EF>
    requires std::invocable<EF> && requires(EF x) { { x() } -> std::same_as<void>; }
struct scope_success {
    constexpr scope_success operator=(const scope_success &) = delete;
    constexpr scope_success operator=(scope_success &&) = delete;
homework-and-exercises, electromagnetism, electric-fields Provided the radial component of a vector field falls off sufficiently rapidly with $r$, the integral of the divergence of the vector field over all of space vanishes. Try to use this fact along with things you know about how the electric field of a finite distribution of charge behaves very far from the distribution.
quantum-field-theory, mathematical-physics, path-integral, regularization

Title: If path integrals aren't well-defined, how can they have any physical meaning?

I am confused about a particular point about the nature of path integration. According to what I've read, what we really mean when we say functional integration is \begin{equation} \int\mathcal{D}\phi= \int_{-\infty}^\infty\prod_x d\phi(x) \end{equation} in the sense that \begin{equation} \int\prod_{i=1}^n dx_i=\underbrace{\int\cdots\int}_n dx_1\cdots dx_n \end{equation} As I understand it, we integrate at each point over all field configurations at that point. But supposedly one never actually does a path integral, because this is not well-defined: $\prod_x$ makes no sense. My Question: Why doesn't $\prod_x$ make any sense? How can the integral have any meaning if it's not well-defined?

When I said we don't actually do path integrals, what I meant to say is that we can do some very specific path integrals, and the way we do them is rather ad hoc. In other words, it's highly nontrivial and not straightforward. To show this, I'll do a very general path integral for you. (I had most of this typed up already for another reason.) The vacuum transition amplitude for a set of quantum fields collectively denoted by $\varphi(x)$ is given by the path integral $$Z[0]:=\langle\text{VAC, out}|\text{VAC, in}\rangle=\int D\varphi\;\exp\left(i\int d^4x \,\mathcal{L}(\varphi)\right)$$ where $D\varphi$ is the path measure $$D\varphi:=\prod_{x,\ell}d\varphi_\ell(x)$$
c++, template-meta-programming, c++17

template <template <typename ...> typename Container, typename T,
          typename ... TArgs, typename Callable>
auto realfmap(Container<T, TArgs...>&& container, Callable&& callable, std::false_type) {
    using parameter_type = decltype(*container.begin());
    using invoke_result = std::result_of_t<Callable(parameter_type)>;

    Container<invoke_result> mapped_container;
    if constexpr (std::is_same_v<decltype(mapped_container), std::vector<invoke_result>>) {
        mapped_container.reserve(container.size());
    }
    std::transform(container.begin(), container.end(),
                   std::back_inserter(mapped_container),
                   std::forward<Callable>(callable));
    return mapped_container;
}
}

template <typename Container, typename Callable>
auto realfmap(Callable&& callable, Container&& container) {
    return detail::realfmap(std::forward<Container>(container),
                            std::forward<Callable>(callable),
                            mutable_in_place<std::decay_t<Container>, Callable>{});
}
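For comparison (my addition), the whole idea of "map a callable over a container and return the same container type" collapses to a couple of lines in a dynamically typed language, which can help when reasoning about what the template machinery above is for:

```python
# My addition, for contrast with the C++ templates above: in Python the
# container type is a first-class value, so rebuilding "the same kind of
# container" from mapped elements is a one-liner.

def fmap(callable_, container):
    return type(container)(map(callable_, container))
```

This works for list, tuple, set and frozenset; a dict would need special-casing, much as the C++ version dispatches on container traits via the tag argument.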
c#, beginner, object-oriented, csv, role-playing-game

public PlayerStat(int rank, int level, int experience)
{
    Rank = rank;
    Level = level;
    Experience = experience;
}
}

Then this (and I'm only using these few lines as an example):

public string Name { get; private set; }
public int Rank { get; private set; }
public int TotalLevel { get; private set; }
public int TotalExperience { get; private set; }
public int AttackRank { get; private set; }
public int AttackLevel { get; private set; }
public int AttackExperience { get; private set; }
public int DefenceRank { get; private set; }
public int DefenceLevel { get; private set; }
public int DefenceExperience { get; private set; }

Goes down to:

public string Name { get; private set; }
public PlayerStat Total { get; private set; }
public PlayerStat Attack { get; private set; }
public PlayerStat Defense { get; private set; }

Then we'll take your parsing:

public Player GetHiscore(string name)
{
    WebRequest _request = HttpWebRequest.Create(string.Format("http://services.runescape.com/m=hiscore/index_lite.ws?player={0}", name));
    _request.Proxy = null;
    _request.AuthenticationLevel = AuthenticationLevel.None;
beginner, regex, perl, tex \tikzstyle{styTODO} = [draw,rectangle,minimum height={110pt},text width=17cm]; \tikzstyle{styCTD} = [draw,rectangle,minimum height={90pt},text width=8.5cm]; \draw [draw,use as bounding box] (0cm,0cm) rectangle (17cm, 25cm);
javascript, game, chat, websocket

This page has a good amount of sending and listening for WebSocket messages from the game server, and it was more complicated to put together. I'm looking for a review of all aspects, but in particular, anti-patterns that would be better written in other ways. The whole repository can be found here on GitHub.

sections/lobby/lobby.html

<div id="lobby" class="table lobby">
  <!-- ROW 1 - Headers -->
  <div id="lobby_headers" class="tableHeading lobbyHeader">
    <div id="lobby_title" class="tableCell lobbyTitle">
      Lobby
    </div>
    <div id="lobby_deck_builder" class="tableCell lobbyDeckBuilder">
      <input id="lobby_deck_builder_btn" type="button" value="Deck Builder" class="btn btn-navbar csh-button" />
    </div>
  </div>
  <!-- ROW 2 - Only show when getting invite request -->
  <div id="lobby_invite_request" style="display: none;" class="tableHeading lobbyInviteRequest">
    <!-- td colspan 2 -->
    <div id="lobby_invite_request_colspan" class="tableCell">
      <div id="lobby_invite">
        <!-- TODO this should be filled in dynamically -->
        Game invite from NAME to play GAME_TYPE!
        <input id="lobby_invite_accept" type="button" value="Accept" class="btn btn-success" />
        <input id="lobby_invite_decline" type="button" value="Decline" class="btn btn-warning" />
        <audio id="invite_ping">
          <source src="../../sounds/ping_sound.mp3" />
        </audio>
      </div>
    </div>
  </div>
  <!-- ROW 3 - Subheaders for Messages and Users -->
c++, algorithm, tree, simulation, physics

        i->pos = i->tpos;
        i->v = i->tv;
        i->a = i->pa;
    }
    delete t4;
}

Would half a minute be normal when simulating this many particles? If not, how could I improve this code to make it run faster?

This looks like a really fascinating project. I've only played with toy particle systems, never anything as useful as this! But I do have some thoughts that might be helpful.

Performance

It's hard to say for sure where the slowdown is, because it's not possible for me to profile the code based just on what's here. But that's something you should do: run it in a profiler and see where the slowdowns are, rather than guessing. That said, I see some things that I frequently find to be performance issues. As mentioned in the comments, the number of allocations that the code does might be an issue. Stack allocations may be faster in some cases than heap allocations, so if you break up the code in move() into 4 different functions, you can allocate t1-t4 on the stack rather than the heap. That might improve the speed, since a stack allocation is generally just a pointer addition. Your accelerate() function is recursive. You might be able to increase the speed by making it loop instead of recursing. But if you want to get real performance gains, I recommend one or more of the following:

Write a SIMD version of the code that operates on more than one particle at a time.
Write a multi-threaded version of the code that does the calculations for multiple particles on different threads. (You can do this along with #1 to get even more speed.)
Run the simulation on the GPU. This is likely to yield even better performance than combining 1 & 2, as GPUs have thousands of cores instead of just dozens.
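To illustrate suggestion #1 in spirit (this is my sketch, not the OP's code): storing particle state as contiguous (N, 3) arrays and updating every particle per step with one vectorized expression is the same data-layout change that makes an explicit SIMD, threaded, or GPU port straightforward. In NumPy:

```python
import numpy as np

# My sketch: particle state lives in contiguous (N, 3) arrays; one vectorized
# expression updates all N particles per step. The uniform-gravity acceleration
# is a stand-in for whatever force calculation the real simulation does.

def step(pos, vel, acc, dt):
    vel = vel + acc * dt    # broadcasts over all N particles at once
    pos = pos + vel * dt    # semi-implicit Euler update
    return pos, vel

rng = np.random.default_rng(0)
N = 100_000
pos = rng.standard_normal((N, 3))
vel = np.zeros((N, 3))
acc = np.array([0.0, 0.0, -9.8])

pos2, vel2 = step(pos, vel, acc, dt=0.01)
```

The same structure-of-arrays layout is what lets compilers auto-vectorize a C++ loop, lets you split the index range across threads, and maps directly onto GPU kernels.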
data-science-model, career

Title: What skills do I need to become a data scientist? And how do I show them?

I have familiarized myself with the recommended most important concepts (Linear Algebra, Analysis, Python, NumPy, Pandas, a bit of Statistics, Linear Regression). For the last two, I don't know how deep it should go. I know what things mean and how to get them working in Python. But the question is: what now? I guess I could argue that this is a starting point and I can apply to a basic data analysis or visualisation position if I learn Tableau and present myself well. But what would I do to even prove what I can do before an interview? Putting a notebook on GitHub where I imported a dataset, cleaned it a bit, did a .describe(), .plot() and a linear regression isn't very impressive nor interesting to anyone. So what would I do instead? Also, this clearly isn't data science territory yet. If I look at Kaggle challenges, I either don't know what to do or think to myself "clean data, linear regression". So what should I take a look at next? Note that I'm taking classes right now, but in Chemistry, not in Data Science.

So you're still on the basics, and William's answer is pretty good; I will list here a bit of stuff to learn, and where to learn it.

1 - You need the basics, and that is already much more than you expected it to be:

Linear Algebra: knowing the best way of inverting a matrix might be useful for a computer scientist, but you're not aiming for that. You need to understand concepts and their meaning and effects, such as:

Matrix rank (For example, the rank of an autocorrelation matrix could tell you that your data is still not enough for things like least squares.)
The meaning of vector spaces and basic linear transformations such as a change of basis
The meaning of eigenvalues and eigenvectors

Calculus: again, focus on the meaning and understanding; computers can do most of the operations, even analytically:

Derivatives and integrals
Optimization
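A tiny example (mine) of the matrix-rank remark above: with fewer observations than dimensions, the sample autocorrelation matrix is rank-deficient, which is exactly what breaks a naive least-squares solve:

```python
import numpy as np

# My example: 3 observations of a 10-dimensional signal give a sample
# autocorrelation matrix of rank at most 3, so it is singular and the
# normal equations of least squares cannot be solved by plain inversion.

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 10))   # 3 samples, 10 dimensions
R = X.T @ X                        # 10x10 sample autocorrelation matrix

rank = np.linalg.matrix_rank(R)    # 3, not 10
eigvals = np.linalg.eigvalsh(R)    # 7 of the 10 eigenvalues are numerically zero
```

Recognizing this situation from the rank (or the eigenvalue spectrum) tells you to collect more data or regularize, which is the kind of conceptual fluency the list above is pointing at.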
beginner, strings, delphi

Additionally, I had to put this after type to make it work this way:

Arrayofstring = array of String;

These functions are working, but I wonder if there is any way to improve them even further. I am new to Delphi, so any help would be appreciated.

Delphi is a nice language in that it is not case sensitive. However, I would still encourage you to stick to one casing. Decide whether you want to use if or If and stick to that. (My personal opinion: use all lowercase.) This also goes for var and for in your code. Your code has a bit of inconsistent spacing. Take a look at this line for example:

Result[NTokens]:=Result[NTokens]+Chain[i];

I would write this as:

Result[NTokens] := Result[NTokens] + Chain[i];

I also think that you can use more spaces in your function header for GetToken (compare it with Validate), and also use a space after each comma (in parameter lists and variable declarations).
quantum-mechanics, atomic-physics

Title: Compton effect for a moving electron

In a problem from Bransden and Joachain's Quantum Mechanics, it is asked to calculate the Compton wavelength shift, but the electron is now moving, with a momentum $P$, in the same direction as the approaching photon. The book tells us that this shift is given by $$\Delta \lambda = 2 \lambda_0 \frac{(p_0 + P) c}{E - Pc} \sin^2(\theta / 2)$$ where $p_0 = h / \lambda_0$ is the momentum of the incident photon, $\lambda_0$ is the original wavelength of the photon (before scattering), $\theta$ is the photon scattering angle, and $E = \sqrt{m^2 c^4 + P^2 c^2}$ is the initial energy of the electron. In proving this, I started in the same way as in the derivation for a stationary electron: conservation of momentum and energy along each axis. Suppose the photon and the electron are both moving initially along the $x$-axis. Then momentum conservation gives, for the $x$- and $y$-axes respectively: $$(h \nu_0 / c) + P = (h \nu / c) \cos\theta + p \cos \phi$$ $$0 = (h \nu / c) \sin \theta - p \sin \phi$$ where $\nu$ is the frequency of the photon after scattering, $p$ is the momentum of the electron after scattering, and $\phi$ is the electron scattering angle. Then, multiplying both equations by $c$, $$p c \cos \phi = h \nu_0 + P c - h \nu \cos \theta$$ $$p c \sin \phi = h \nu \sin \theta$$ Squaring both and adding gives $$p^2 c^2 = (h \nu_0 + P c - h \nu \cos \theta)^2 + (h \nu \sin \theta)^2$$
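One useful sanity check on the quoted formula (my addition) is the limit $P = 0$, where it must reduce to the ordinary Compton shift $2\lambda_C\sin^2(\theta/2)$ with $\lambda_C = h/mc$, since then $E = mc^2$ and $(p_0 + P)c/(E - Pc) = p_0 c / mc^2 = \lambda_C/\lambda_0$:

```python
from math import sin, sqrt, pi

# My addition: numerically confirm the P = 0 limit of the quoted formula.
h = 6.62607015e-34       # Planck constant, J s
m = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
lambda_C = h / (m * c)   # Compton wavelength, about 2.43e-12 m

def delta_lambda(lambda0, P, theta):
    p0 = h / lambda0
    E = sqrt((m * c**2) ** 2 + (P * c) ** 2)
    return 2 * lambda0 * (p0 + P) * c / (E - P * c) * sin(theta / 2) ** 2

# At theta = pi/2, sin^2(pi/4) = 1/2, so the rest-frame shift is exactly lambda_C
shift_at_rest = delta_lambda(1e-10, P=0.0, theta=pi / 2)
```

Note that the $\lambda_0$ dependence cancels entirely at $P = 0$, exactly as in the standard Compton result, while for $P > 0$ the denominator $E - Pc$ shrinks and the shift grows.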
java, memory-management, factory-method

    final SchemaParser schemaParser;

    SchemaFormat(SchemaParser schemaParser) {
        this.schemaParser = schemaParser;
    }
}

And then use them like so:

public static Schema deserializeSchema(String schema, SchemaFormat schemaFormat) {
    return schemaFormat.schemaParser.parse(schema);
}

That way I can just completely delete the Deserializer and the ParserFactory classes. If they are present, however, each Parser is lazy-initialized, and as soon as it's initialized it's kept in memory for the rest of the time. If I have all of the Parsers inside the enum, then for each message an object gets declared and then discarded. But what happens to performance when the application obtains 150 messages of 5 different types within a short period of time? I'm concerned that it would be really slow and the memory would bloat until the garbage collector comes and cleans it up. I also don't have a way to load the schemas at start-up, so I may choose the option of putting the SchemaParsers inside SchemaFormat, just so that I parse them once. But several data providers may choose the XML schemas; in that case I will instantiate the XML parser twice, but that isn't a big deal, because there will not be more than 10 data providers.

Multithreading

You're making multithreading really hard on yourself by using static fields in this class. Consider the following:

public static Schema deserializeSchema(String schema, SchemaFormat format) {
    return ParserFactory.getSchemaParser(format).parse(schema);
}

public static Message deserializeMessage(String msg, Schema schema) {
    return ParserFactory.getMessageParser(schema.getMessageFormat()).parse(msg);
}
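For what it's worth, here is the enum-holds-its-parser idea transcribed into Python (my sketch; names hypothetical, parsers stubbed): each member carries its parser exactly once, so there is no factory class and no per-message allocation, the same property the Java enum-field version has:

```python
from enum import Enum

# My Python transcription of the enum-holds-its-parser idea. The parser
# lambdas are stubs; real ones would wrap an actual schema library. Wrapping
# each lambda in a one-element tuple makes Enum pass it to __init__ rather
# than treating the function as a method.

class SchemaFormat(Enum):
    JSON = (lambda text: ('json-schema', text),)
    XML = (lambda text: ('xml-schema', text),)

    def __init__(self, parser):
        self.parser = parser   # stored once per member, shared by all callers

def deserialize_schema(schema, schema_format):
    return schema_format.parser(schema)
```

Because each member's parser is created once at class definition time, repeated deserialize_schema calls reuse it, which is the "parse them once" behaviour the question is after.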
{ "domain": "codereview.stackexchange", "id": 26797, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, memory-management, factory-method", "url": null }
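The parser-on-the-enum-constant idea from the question is language-agnostic; a Python sketch of the same pattern (all names are illustrative, not from the post) shows that the parsers are built exactly once, when the enum class body runs, and then shared by every caller:

```python
from enum import Enum

class SchemaFormat(Enum):
    # each constant carries its parser; the parsers are created once,
    # when the class body executes, and shared by every caller.
    # The 1-tuple wrapping is needed so Enum treats the lambda as a
    # member value rather than a method.
    JSON = (lambda s: ("json", s),)   # stand-ins for real parsers
    XML = (lambda s: ("xml", s),)

    def __init__(self, parser):
        self.parser = parser

def deserialize_schema(schema, fmt):
    return fmt.parser(schema)

assert deserialize_schema("<a/>", SchemaFormat.XML) == ("xml", "<a/>")
```

Because the parsers are class-level state, there is no per-message allocation and no repeated initialization, which addresses the garbage-collection worry in the question.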
computability, turing-machines, intuition Title: Turing Recognisable => enumerable I get the proof of going from an enumerator to a Turing Machine (keep running the enumerator and see if it matches the input) but I don't see how the other way works. According to my notes and the book (Intro to the Theory of Computation - Sipser), to get a Turing enumerator from a Turing machine, we basically write out all combinations of the alphabet. You then run the TM on this input; if it accepts, print it out, replace it with a new string, and repeat ad infinitum. The problem I am having is surely this requires the language to be decidable. Otherwise it might get stuck on the third word in some infinite loop doomed never to accept or reject and certainly never print out the whole language. What am I missing? What's missing is the way you run the Turing Machine $M$ on strings to get the Enumerator. Rather than generate each string, run $M$, and then output this string if $M$ accepts – which as you identified will not work – you do something like the following, which adopts the strategy of simulating many instances of $M$ on different strings "in parallel". Assume the tape has contents $\langle w_1, S_1\rangle \# \cdots \# \langle w_n, S_n\rangle$, where $w_i$ is some word under consideration and $S_i$ is the current state of $M$ operating on $w_i$. This represents that $n$ copies of $M$ are being simulated. $w_i$ is stored so we know what the original input was. Now run the following loop At the end of the tape, write the next string $w\in\Sigma^*$, along with the initial configuration $S$ of $M$, that is, write $\# \langle w, S\rangle$. Simulate each copy of $M$ on the tape for one step. (Presumably use another tape.) If any of the $M$s enters an accepting state, put the corresponding string onto the output tape. Remove this instance of $M$ from the tape. If any of the $M$s enters a rejecting state, remove that instance of $M$ from the tape. Goto step 1.
{ "domain": "cs.stackexchange", "id": 275, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computability, turing-machines, intuition", "url": null }
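The dovetailing bookkeeping described above can be sketched concretely (my own illustration, not Sipser's): add one fresh simulation per round, advance every live simulation by one step, and emit words as their machines accept. A machine that never halts simply stays in the pool forever without blocking the others.

```python
import itertools

def words(alphabet="ab"):
    """All strings over the alphabet in length-lexicographic order."""
    for n in itertools.count():
        for tup in itertools.product(alphabet, repeat=n):
            yield "".join(tup)

def enumerate_language(step, limit):
    """Dovetailed enumerator.  step(w, state) -> (new_state, verdict),
    where verdict is 'accept', 'reject' or None (still running)."""
    active, gen, out = [], words(), []
    while len(out) < limit:
        active.append((next(gen), 0))      # add one fresh simulation
        survivors = []
        for w, st in active:               # advance each copy one step
            st, verdict = step(w, st)
            if verdict == "accept":
                out.append(w)              # emit and drop this copy
            elif verdict is None:
                survivors.append((w, st))  # 'reject' copies are dropped
        active = survivors
    return out

# Toy machine: accepts even-length words after len(w) steps and runs
# forever on odd-length words (so a plain sequential scan would hang).
def toy_step(w, st):
    if len(w) % 2 == 0 and st >= len(w):
        return st, "accept"
    return st + 1, None

found = enumerate_language(toy_step, 4)
assert found[0] == "" and all(len(w) % 2 == 0 for w in found)
```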
dna, cell-biology, mitochondria, chromosome I've also just read that a mitochondrion stores more than one copy of its DNA. That seems to allow for variation even within a single mitochondrion. EDIT 2: I think it's possible that my question is unclear. There seem to be four levels on which mDNA variety can be considered: a species, an organism, a cell, a mitochondrion. I ask the answerers to focus on the third one. I will be happy to learn about the other levels too and I will consider such information on-topic, but my main question is about what happens in a cell. This link seems to have good information that answers most of your questions. In my mind, there are two types of mitochondria: ones that work and ones that don't. Mitochondria do have DNA but that mDNA is there to encode proteins for their specific functions (e.g. to create ATP). So, although the mDNA may not be uniform for every mitochondrion in your body, it is most likely functional unless you have a mitochondrial disease. If you do have a mitochondrial disease then it suggests that some of your mitochondria are functional and some are not (otherwise you would be inviable). As discussed in the comments, all DNA undergoes mutations so that even identical mDNA genomes may vary after undergoing some mutations. It's possible that only one genetic line of mitochondria was passed to you from your mom. In this case all the mitochondria in your body would be identical (except for minor mutations). It's also possible that your mom inherited and passed on to you more than one genetic line (as is the case in viable mitochondrial disease). Mitochondrial repair mechanisms do exist but they differ in various ways from nuclear DNA repair and may be more prone to damage.
{ "domain": "biology.stackexchange", "id": 513, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dna, cell-biology, mitochondria, chromosome", "url": null }
matlab Title: Designing Butterworth filter in Matlab and obtaining filter [a.b] coefficients as integers for online Verilog HDL code generator I've designed a very simple low-pass Butterworth filter using Matlab. The following code snippet demonstrates what I've done. fs = 2.1e6; flow = 44 * 1000; fNorm = flow / (fs / 2); [b,a] = butter(10, fNorm, 'low'); In [b,a] are stored the filter coefficients. I would like to obtain [b,a] as integers so that I can use an online HDL code generator to generate code in Verilog. The Matlab [b,a] values seem to be too small to use with the online code generator (the server-side Perl script refuses to generate code with the coefficients), and I am wondering if it would be possible to obtain [b,a] in a form that can be used as a proper input. The a coefficients that I get in Matlab are: 1.0000 -9.1585 37.7780 -92.4225 148.5066 -163.7596 125.5009 -66.0030 22.7969 -4.6694 0.4307 The b coefficients that I get in Matlab are: 1.0167e-012 1.0167e-011 4.5752e-011 1.2201e-010 2.1351e-010 2.5621e-010 2.1351e-010 1.2201e-010 4.5752e-011 1.0167e-011 1.0167e-012
{ "domain": "dsp.stackexchange", "id": 197, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "matlab", "url": null }
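One way to see why the `b` coefficients defeat an integer-only code generator (my own illustration, not from the thread): quantizing to fixed point means scaling by $2^N$ and rounding. At any practical word length the `b` taps collapse to zero; it takes roughly 40+ fractional bits before they survive at all, which also hints that a 10th-order IIR realized as a single section is numerically fragile and second-order sections are usually preferred.

```python
def quantize(coeffs, frac_bits):
    """Round floating-point filter taps to integers in Q-format:
    value ~= integer / 2**frac_bits."""
    scale = 1 << frac_bits
    return [round(c * scale) for c in coeffs]

# The b taps reported by butter(10, fNorm, 'low') in the question:
b = [1.0167e-12, 1.0167e-11, 4.5752e-11, 1.2201e-10, 2.1351e-10,
     2.5621e-10, 2.1351e-10, 1.2201e-10, 4.5752e-11, 1.0167e-11,
     1.0167e-12]

# With 16 fractional bits every b tap rounds to zero -- which is why an
# integer-only generator has nothing to work with at that word length.
assert all(q == 0 for q in quantize(b, 16))
assert any(q != 0 for q in quantize(b, 40))
```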
Corollary. Let $V$ be a vector space, $T$ an operator on $V$, and let $w_1,\ldots,w_n$ be vectors in $V$ with corresponding $T$-annihilators $p_1(t),\ldots, p_n(t)$. If the $p_i(t)$ are pairwise relatively prime, then the $T$-annihilator of $w_1+\cdots +w_n$ is $p_1(t)\cdots p_n(t)$. Putting this together, we have: Theorem. Let $V$ be a finite dimensional vector space, let $T$ be an operator on $V$, and let $m(t)$ be the minimal polynomial of $T$. Then $V$ has a $T$-cyclic subspace of dimension $\deg(m(t))$. Proof. Write $m(t)= \phi_1(t)^{k_1}\cdots \phi_r(t)^{k_r}$ as a product of powers of pairwise distinct irreducible polynomials. We know that for each $i$ there is a vector $v_i$ whose $T$-annihilator is $\phi_i(t)^{k_i}$. By the theorem above, the $T$-annihilator of $v=v_1+\cdots+v_r$ is $\phi_1(t)^{k_1}\cdots \phi_r(t)^{k_r}=m(t)$, and in particular the $T$-cyclic subspace generated by $v$ has dimension $\deg(m(t))$, as desired. QED Now it follows that if the minimal polynomial has degree $\dim(V)$, then there exists a vector $v$ such that no polynomial of degree strictly smaller than that of $m(t)$ will $T$-annihilate $v$, so $v$ is a witness to the fact that $V$ is $T$-cyclic (and hence $k[x]$-cyclic under the defined action).
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9817357211137667, "lm_q1q2_score": 0.8383166522872831, "lm_q2_score": 0.8539127529517043, "openwebmath_perplexity": 60.96249448866761, "openwebmath_score": 0.9636895656585693, "tags": null, "url": "https://math.stackexchange.com/questions/44651/kx-module-and-cyclic-module-over-a-finite-dimensional-vector-space" }
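The coprime-annihilator argument can be checked on a toy example (mine, not from the post): take $T$ block-diagonal with a $2\times 2$ nilpotent Jordan block (annihilator $x^2$) and the $1\times 1$ block $(1)$ (annihilator $x-1$); since the two are coprime, the sum of the corresponding vectors should generate all of $\mathbb{R}^3$.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def det3(c0, c1, c2):
    """Determinant of the 3x3 matrix whose columns are c0, c1, c2."""
    (a, d, g), (b, e, h), (c, f, i) = c0, c1, c2
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Nilpotent 2x2 Jordan block next to the scalar block (1).
T = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 1]]

# e2 has T-annihilator x^2, e3 has T-annihilator x - 1 (coprime), so
# v = e2 + e3 should have annihilator x^2 (x - 1), of degree 3 = dim V.
v = [0, 1, 1]
Tv = matvec(T, v)
TTv = matvec(T, Tv)

# v, Tv, T^2 v independent  <=>  v generates a cyclic subspace = R^3.
assert det3(v, Tv, TTv) != 0
```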
electromagnetism, visible-light, acoustics As an aside - sound can also travel "through the wall"; the pressure waves will cause slight motion of the wall which in turn causes waves on the other side (although much reduced in amplitude because of the acoustic mismatch between the air and the wall; this is where the "glass against the wall" so beloved in 50's movies can help). In principle, electromagnetic waves can also travel through walls (which is why your radio works indoors) - but again, the short wavelength of light has two implications. One is that the distance to cross is "many wavelengths", so that even a tiny extinction coefficient is sufficient to cause complete absorption; the other is that "everything" in the wall acts as a scattering point, and the light doesn't stand a chance of making it through in measurable quantities.
{ "domain": "physics.stackexchange", "id": 13715, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, visible-light, acoustics", "url": null }
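The "many wavelengths" point can be made quantitative (the numbers below are assumed for illustration, not from the answer): with complex refractive index $n + ik$, intensity falls as $e^{-\alpha d}$ with $\alpha = 4\pi k/\lambda$, so even $k = 10^{-3}$ annihilates visible light over a 10 cm wall.

```python
import math

def transmitted_fraction(k, thickness, wavelength):
    """Beer-Lambert attenuation through an absorbing slab:
    I/I0 = exp(-alpha d) with alpha = 4 pi k / lambda."""
    alpha = 4 * math.pi * k / wavelength
    return math.exp(-alpha * thickness)

# Illustrative numbers: a 10 cm wall, green light (500 nm), and a
# "tiny" extinction coefficient k = 1e-3.  The exponent is ~2500, so
# the transmitted fraction underflows to zero.
frac = transmitted_fraction(1e-3, 0.10, 500e-9)
assert frac < 1e-300
```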
ros, transform, tf2 // Set the color -- be sure to set alpha to something non-zero! marker.color.r = 0.0f; marker.color.g = 1.0f; marker.color.b = 0.0f; marker.color.a = 1.0; marker.lifetime = ros::Duration(); // Publish the marker while (marker_pub.getNumSubscribers() < 1) { if (!ros::ok()) { return 0; } ROS_WARN_ONCE("Please create a subscriber to the marker"); sleep(1); } marker_pub.publish(marker); // Cycle between different shapes switch (shape) { case visualization_msgs::Marker::CUBE: shape = visualization_msgs::Marker::SPHERE; break; case visualization_msgs::Marker::SPHERE: shape = visualization_msgs::Marker::ARROW; break; case visualization_msgs::Marker::ARROW: shape = visualization_msgs::Marker::CYLINDER; break; case visualization_msgs::Marker::CYLINDER: shape = visualization_msgs::Marker::CUBE; break; } r.sleep(); } } Inside rviz I put /my_frame, then I could see the markers, but at the same time the point cloud can't be visualized. How should I visualize the point cloud along with the markers? Originally posted by dinesh on ROS Answers with karma: 932 on 2016-07-07 Post score: 0 You need to publish a specific marker message. Your code needs to define the type and location of the markers, and publish them to a topic. http://docs.ros.org/jade/api/visualization_msgs/html/msg/Marker.html http://docs.ros.org/jade/api/visualization_msgs/html/msg/MarkerArray.html Originally posted by dcconner with karma: 476 on 2016-07-07 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 25174, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, transform, tf2", "url": null }
If you divide both sides by $k!$ you will get binomial coefficients and you are in fact trying to prove $$\binom kk + \binom{k+1}k + \dots + \binom{k+n-1}k = \binom{k+n}{k+1}.$$ This is precisely the identity from this question. The same argument for $k=3$ was used here. Or you can look at your problem the other way round: If you prove this result about finite sums $$\sum_{j=1}^n j(j+1)\dots(j+k-1)= \frac{n(n+1)\cdots(n+k)}{k+1},$$ you also get a proof of the identity about binomial coefficients. - From (i), (ii) and (iii) it is reasonable to guess that your sum will be $$n(n+1)\cdots(n+k)/(k+1)$$ Try to prove this by induction. - For a fixed non-negative $k$, let $$f(i)=\frac{1}{k+1}i(i+1)\ldots(i+k).$$ Then $$f(i)-f(i-1)=i(i+1)\ldots(i+k-1).$$ By telescoping, $$\sum_{i=1}^ni(i+1)(i+2)\dots(i+k-1)=\sum_{i=1}^n\left(f(i)-f(i-1)\right)=f(n)-f(0)=f(n)$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.984575447918159, "lm_q1q2_score": 0.8536621198769043, "lm_q2_score": 0.867035758084294, "openwebmath_perplexity": 350.9822847302935, "openwebmath_score": 0.9113698601722717, "tags": null, "url": "http://math.stackexchange.com/questions/219693/finding-a-closed-formula-for-1-cdot2-cdot3-cdots-k-dots-nn1n2-cdotsk" }
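A brute-force check of the closed form $\sum_{j=1}^n j(j+1)\cdots(j+k-1) = n(n+1)\cdots(n+k)/(k+1)$ for small $n$ and $k$ (my own verification, not part of the answers):

```python
from math import prod

def rising(j, k):
    """j (j+1) ... (j+k-1), a product of k consecutive integers."""
    return prod(range(j, j + k))

# sum_{j=1}^{n} j(j+1)...(j+k-1) = n(n+1)...(n+k) / (k+1)
for k in range(1, 6):
    for n in range(1, 25):
        lhs = sum(rising(j, k) for j in range(1, n + 1))
        assert lhs == prod(range(n, n + k + 1)) // (k + 1)
```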
It's correct, just a bit too long for my taste, which distracts from the main ideas. These bits would be sufficient: Then as $f \in \mathscr{R}$ on $[0, 1]$ and as $c \in (0, 1)$, so by Theorem 6.12 (c) in Baby Rudin $f \in \mathscr{R}$ on $[0, c]$ and on $[c, 1]$, and $$\int_0^c f(x) \ \mathrm{d} x \ + \ \int_c^1 f(x) \ \mathrm{d} x = \int_0^1 f(x) \ \mathrm{d} x.$$ As $f \in \mathscr{R}$ on $[0, 1]$, so $f$ is also bounded on $[0, 1]$ and hence also on $[0, c]$: an unbounded function can't be Riemann integrable, because one could then construct an unbounded sequence of Riemann sums. Let $M \colon= \sup \{ \ f(x) \ \colon \ 0 \leq x \leq c \ \}$. Then by Theorem 6.12 (d) in Baby Rudin, we have $$\left\lvert \int_0^c f(x) \ \mathrm{d} x \right\rvert \leq M c.$$ It's clear that the latter converges to $0$ as $c\rightarrow0.$ As for b), a simple example would be $f(x)=\frac1x\sin\frac1x$ for $x>0.$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9658995723244553, "lm_q1q2_score": 0.8069931951289984, "lm_q2_score": 0.8354835411997897, "openwebmath_perplexity": 90.09374076831719, "openwebmath_score": 0.8836351633071899, "tags": null, "url": "https://math.stackexchange.com/questions/2375688/prob-7-a-chap-6-in-baby-rudin-if-f-is-integrable-on-c-1-for-every" }
c++, serialization, constrained-templates, c++23 [[nodiscard]] auto whatevs::deserialize(std::istream& i, indi::my_type& t) -> std::istream& { // Or use temporaries to hold the data, and don't touch t // unless all reads succeed. whatevs::deserialize(i, t.x); whatevs::deserialize(i, t.y); return i; } There are ways you can make this easier and more ergonomic (you might want to make the actual (de)serialize function a niebloid, for example), but that’s the gist. Now you’re probably thinking that it sucks that you can’t auto-generate the obvious default serialize/deserialize function. It does, but that capability is coming. It might look like this: namespace whatevs { template <typename T> requires requires { T::default_serializable; } [[nodiscard]] auto serialize(std::ostream& o, T const& t) -> std::ostream& { template for (constexpr auto member : std::meta::nonstatic_data_members_of(^T)) { if (not serialize(o, t.[:member:])) break; } return o; } } // namespace whatevs And you’d opt into it like this: namespace indi { struct my_type { static constexpr auto default_serializable = true; int x; double y; }; } // namespace indi Something like that, basically. But that’s the future… possibly as early as C++26. For now, you’ll have to hand-roll the serialize/de-serialize functions.
{ "domain": "codereview.stackexchange", "id": 45477, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, serialization, constrained-templates, c++23", "url": null }
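The member-by-member (de)serialization pattern is language-agnostic; here is the same gist as a Python round-trip sketch (the function names and the int32/float64 layout are my own choices, not from the answer):

```python
import io
import struct

# Hand-rolled (de)serializers mirroring the C++ gist: each type knows
# how to write and read its members, in order.
def serialize_my_type(stream, x, y):
    stream.write(struct.pack("<id", x, y))   # int32 then float64

def deserialize_my_type(stream):
    return struct.unpack("<id", stream.read(12))  # 4 + 8 bytes

buf = io.BytesIO()
serialize_my_type(buf, 7, 2.5)
buf.seek(0)
assert deserialize_my_type(buf) == (7, 2.5)
```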
beginner, php, converting, number-systems, roman-numerals num2RomanNumeral(4212, $romanNumeralText); echo $romanNumeralText; ?> I would appreciate if you give some tips and some things that would make the code better. For a beginner it seems fine. There are some improvements that can be made. The points below are analysis of the given code. Obviously there are examples of simpler implementations - e.g. see answers to Numbers to Roman Numbers with php on Stack Overflow. While I wouldn't expect a beginner to write a succinct and optimized function they are out there for the reader to compare. Avoid Global variables The first thing I spot is this line near the beginning of getClosestNum() global $NumberRomanNumeral; Global variables have more negative aspects than positives. Since that variable is never re-assigned it can be declared as a constant instead of a variable. Then there is no need to reference a global variable. Coding Style can be changed to be idiomatic I would not expect a beginner to write code that always adheres to a standard coding style, but there are recommended styles that many PHP developers write code to be inline with. One popular style guide is PSR-12. It has many conventions for readability and robust code. There are 54 errors and 1 warning for the code above. Many of the errors concern spacing between operators, tokens, etc. For example, instead of: }elseif($numToFind == $differenceToSum){ Idiomatic PHP code would be spaced like this: } elseif ($numToFind == $differenceToSum) { Array syntax can be simplified There isn't anything wrong with using array() but as of the time of writing, there is currently active support for versions 8.0 and 8.1 of PHP, and since PHP 5.4 arrays can be declared with short array syntax.
So lines like: $differenceArray = array(); can be simplified like this: $differenceArray = []; Variable gets overwritten The seventh line of getClosestNum() is: $numLeft = null; Then towards the end of the function, that same variable is assigned with the difference of two other variables. $numLeft = $numToFind - $closestNum;
{ "domain": "codereview.stackexchange", "id": 43838, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, php, converting, number-systems, roman-numerals", "url": null }
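For comparison with the simpler implementations mentioned in the review, the entire conversion can be a single greedy pass over a value-to-glyph table (a Python sketch of my own, not code from the post):

```python
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    """Greedy conversion: repeatedly take the largest value that fits."""
    out = []
    for value, glyph in ROMAN:
        q, n = divmod(n, value)
        out.append(glyph * q)
    return "".join(out)

# The OP's test input; classical numerals stop at 3999, but the greedy
# scheme simply emits extra M's beyond that.
assert to_roman(4212) == "MMMMCCXII"
```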
optics, waves, interference, diffraction [Now copying from Feynman] Source: http://www.feynmanlectures.caltech.edu/I_30.html Resultant Electric Field $$R=A[\cos(\omega t)+\cos(\omega t+\phi)+\cos(\omega t+2\phi)+\cdots+\cos(\omega t+(n-1)\phi)],$$ where $\phi$ is the phase difference between one oscillator and the next one, as seen in a particular direction. Now we must add all the terms together. We shall do this geometrically. The first one is of length $A$, and it has zero phase. The next is also of length $A$ and it has a phase equal to $\phi$. The next one is again of length $A$ and it has a phase equal to $2\phi$, and so on. So we are evidently going around an equiangular polygon with $n$ sides.
{ "domain": "physics.stackexchange", "id": 18362, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, waves, interference, diffraction", "url": null }
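Summing the arrows around the equiangular polygon gives the standard closed form $|R| = A\,|\sin(n\phi/2)/\sin(\phi/2)|$, which a direct complex-exponential sum confirms (my own check, not part of the excerpt):

```python
import cmath
import math

def phasor_sum(n, phi):
    """|sum_{m=0}^{n-1} e^{i m phi}|, the magnitude of n unit phasors
    with successive phase difference phi (amplitude A factored out)."""
    return abs(sum(cmath.exp(1j * m * phi) for m in range(n)))

n, phi = 7, 0.3
closed = abs(math.sin(n * phi / 2) / math.sin(phi / 2))
assert abs(phasor_sum(n, phi) - closed) < 1e-12
```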
Last edited: Mar 20, 2009 4. ### lanedance 3,307 hmmm.. do you mean $$\int x^a \, dx = \lim_{b\rightarrow a} \frac{x^{b+1}}{b+1}$$ then $$\int \frac{1}{x} \, dx = \lim_{b\rightarrow -1} \frac{x^{b+1}}{b+1} = \ln(x)$$ 5. ### HallsofIvy 40,310 Staff Emeritus It is not clear what you are asking. The obvious answer is what lurflurf said. You cannot use the formula $$\int x^n dx= \frac{1}{n+1} x^{n+1}+ C$$ when n= -1 because then you would be dividing by 0. I'm not sure why you think of that as "normal means". 6. ### confinement 192 Think of it this way, the antiderivative of 1/x is the function whose inverse is exactly equal to its own derivative. Let y(x) be the antiderivative of 1/x. Then we have: $$\frac{dy}{dx} = \frac{1}{x}$$ Inverting the Leibniz notation in the way that he intended yields: $$\frac{dx}{dy} = x$$ The last equation says that x' = x, i.e. the function x(y) is equal to its own derivative. This means that x(y) cannot be a polynomial or rational function, since all of those functions change when you differentiate them. It is this perfect property, that the antiderivative of 1/x is the inverse function of the function that is exactly equal to its own derivative, that puts it in a class of its own, a special case. 7. ### 2^Oscar 45 Hi, Sorry for being unclear. My question was supposed to ask why you cannot do the following: $$\int x^n dx= \frac{1}{n+1} x^{n+1}+ C$$
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9793540674530016, "lm_q1q2_score": 0.8491349862125928, "lm_q2_score": 0.8670357477770337, "openwebmath_perplexity": 799.0612908401829, "openwebmath_score": 0.8864861130714417, "tags": null, "url": "https://www.physicsforums.com/threads/quick-question-about-integral-of-1-x.300986/" }
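The limit suggested in post #4 only works if the integration constant is allowed to depend on $b$; choosing $C = -1/(b+1)$ gives the standard result (added for completeness, this step is mine):

$$\lim_{b\to -1}\frac{x^{b+1}-1}{b+1}=\lim_{b\to -1}\frac{e^{(b+1)\ln x}-1}{b+1}=\ln x,$$

since $e^{u}-1 \sim u$ as $u \to 0$, with $u = (b+1)\ln x$.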
newtonian-mechanics, energy, lagrangian-formalism, metric-tensor, work What about 2D collisions? Tomáš Brauner pointed out that elastic collisions involve kinetic energy conservation. Now we can't discriminate between $e$-kinetic energy and $g$-kinetic energy in 1D collisions, because in 1D $e$- and $g$-kinetic energies differ only by a scalar factor. However, I found out that in 2D elastic collisions, only the $e$-kinetic energy is conserved and not any other. I take this as a valid empirical reason for taking $e$-kinetic energy to be the kinetic energy as opposed to any other $g$-kinetic energy, but this made me wonder why this was the case when Newton's laws made no reference to kinetic energy.
{ "domain": "physics.stackexchange", "id": 85109, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, energy, lagrangian-formalism, metric-tensor, work", "url": null }
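The 2D claim is easy to test numerically (my own construction, with the standard textbook collision formulas): run one elastic collision and evaluate the kinetic energy once with the Euclidean metric $e = \mathrm{diag}(1,1)$ and once with an arbitrary positive-definite $g$; only the former is conserved.

```python
def elastic_2d(m1, m2, v1, v2, normal):
    """Elastic collision of two discs; `normal` is the unit line of
    impact.  The normal velocity components obey the 1-D elastic
    formulas; tangential components are untouched."""
    nx, ny = normal
    u1 = v1[0] * nx + v1[1] * ny
    u2 = v2[0] * nx + v2[1] * ny
    w1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    w2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    v1p = (v1[0] + (w1 - u1) * nx, v1[1] + (w1 - u1) * ny)
    v2p = (v2[0] + (w2 - u2) * nx, v2[1] + (w2 - u2) * ny)
    return v1p, v2p

def ke(metric, m, v):
    a, b, c = metric   # quadratic form a x^2 + 2 b x y + c y^2
    return 0.5 * m * (a * v[0]**2 + 2 * b * v[0] * v[1] + c * v[1]**2)

v1p, v2p = elastic_2d(1.0, 2.0, (3.0, 0.0), (0.0, -1.0), (0.6, 0.8))
euclid, skew = (1, 0, 1), (1, 0.5, 2)   # both positive definite
for g, conserved in ((euclid, True), (skew, False)):
    before = ke(g, 1.0, (3.0, 0.0)) + ke(g, 2.0, (0.0, -1.0))
    after = ke(g, 1.0, v1p) + ke(g, 2.0, v2p)
    assert (abs(after - before) < 1e-9) == conserved
```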
php, object-oriented, file-system It seems like this breaks the single responsibility principle, but creating another class simply to do a str_replace seems pointless. What do you think? README: First off, I'll just focus on making this one class a tad more generic, and expandable and maintainable and... well, generally more OO. Then, though I'm not going to go into too much detail on the matter, I'll just list a few quick tips of how you could go about separating concerns even more and create an entire File namespace. Code-review can be harsh, but I do not intend to be hurtful or patronizing. My intentions are simply to convey, to the best of my abilities, my views on what might be a preferable approach. I base this on experience, as well as personal preference. While I try to be as objective as possible, I'm only human, so it stands to reason that the code I suggest might not be to your liking. The code listed here is untested, written off the top of my head, and only serves to illustrate my point. This class/method definitely could do with a couple of additional methods. For example: I might want to set a path, and then -at various points in time- want to get reflection classes for a given "gateway namespace". Your code, as it now stands would imply calling the collect method with the same $path argument, which will create an all-new RecursiveDirectoryIterator instance, only to get those files I'm after. If, however, your class would look like this: class Collector { protected $iterator = null; public function __construct($path = null) { if ($path) $this->setPath($path); } public function setPath($path) { $this->iterator = new \RecursiveDirectoryIterator( dirname(__FILE__) . $path ); return $this; } public function collect($namespace, $path = null) { $return = array(); if ($path !== null) $this->setPath($path); foreach(new \RecursiveIteratorIterator($this->iterator) as $file) {
{ "domain": "codereview.stackexchange", "id": 5922, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, object-oriented, file-system", "url": null }
c++, variadic template <typename ...Args> std::vector<V> values_at(Args const & ...args) const { std::initializer_list<K> input {args...}; std::vector<V> output; output.reserve(input.size()); for (auto const & e : input) { auto const it = gate.find(e); if (it != gate.end()) { output.emplace_back((*it)->second); continue; } output.emplace_back(); } return output; } }; int main() { core_table<std::string, std::string> ct {{"a", "hello"}, {"b", "world"}}; for (auto const & e : ct.values_at("b", "a")) { cout << e << endl; } return 0; } Yes, it is a good example Variadic templates are associated with recursion by similarity to functional programming, where recursion is a simple method of operating on sequences. Direct expansion is akin to a map or fold, or some other higher-order function. It can be simplified even more If you have a similar function /* find value or return default */ template <typename Arg> V value_at(Arg const & key) const { auto const it = gate.find(key); if (it != gate.end()) { return it->second; } return {}; } Then you can condense this down into a single expression return vector<V>({value_at(args)...}); as that expands the parameter pack into multiple calls to value_at, which goes into vector's std::initializer_list<V> constructor.
{ "domain": "codereview.stackexchange", "id": 24163, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, variadic", "url": null }
ros, c++, header Title: How to modify or change C++ file located in the ROS stack I have the following error message: /home/username/ros_workspace/my_controller_pkg/src/my_controller_file.cpp:52: error: ‘class pr2_mechanism_model::JointState’ has no member named ‘position_1’ /home/username/ros_workspace/my_controller_pkg/src/my_controller_file.cpp:53: error: ‘class pr2_mechanism_model::JointState’ has no member named ‘position_2’ I think it means I have to declare "position_1" and "position_2" in the "joint.h" file, but this header file belongs to the ROS stack, so it should not be changed. Does anybody know how to modify a C++ header file that belongs to the ROS stack? Originally posted by maruchi on ROS Answers with karma: 157 on 2011-12-01 Post score: 0
{ "domain": "robotics.stackexchange", "id": 7487, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, c++, header", "url": null }
# Electric potential at center of circular arc 1. Oct 12, 2006 ### cyberstudent An insulating rod of length l is bent into a circular arc of radius R that subtends an angle theta from the center of the circle. The rod has a charge Q distributed uniformly along its length. Find the electric potential at the center of the circular arc. Struggling with this problem. I know that I have to divide the charge Q into many very small charges, essentially point charges, then sum them up (integration). dV = dq/(4πε₀R) Length of dq = ds dβ = angle subtended by ds dβ = ds/R => ds = R dβ dq = λ ds => dq = λR dβ V = ∫ dq/(4πε₀R) V = ∫ λR dβ/(4πε₀R) Now, this is where it all goes wrong for me. I take out the constants V = λR/(4πε₀R) * ∫ dβ My Rs cancel out, which makes no sense. The radius must be important in the calculation of the potential. Notice also that I did not indicate the limits on the integration. In a similar problem which was done in a previous assignment to calculate the electric field at the center, the upper and lower limits were set to -θ/2 and θ/2, but I am not sure why. Last edited: Oct 12, 2006 2. Oct 12, 2006 ### OlderDan It is. Define λ. The limits would be -theta/2 to +theta/2. You have to integrate over the angle subtended by the arc. You could actually use any pair of limits that differ by theta. 3. Oct 12, 2006 ### cyberstudent λ is the linear charge density. That actually makes sense to me. Thanks. Of course, you would want to integrate over the angle subtended by the arc. Would 0 and theta also be valid limits then? So, I now have my limits, but I still end up with the same absurd problem of losing my R. What is the mistake? Is the equation wrong? Am I not taking out constants? V = ∫ from -θ/2 to θ/2 of λR dβ/(4πε₀R)
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9873750510899382, "lm_q1q2_score": 0.833240964184624, "lm_q2_score": 0.8438950966654774, "openwebmath_perplexity": 2065.8886759553116, "openwebmath_score": 0.8285332322120667, "tags": null, "url": "https://www.physicsforums.com/threads/electric-potential-at-center-of-circular-arc.138075/" }
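The cancellation can also be seen numerically (my own check, SI units assumed): chop the arc into point charges and sum $dq/(4\pi\varepsilon_0 R)$. Because every element sits at the same distance $R$ from the center, the angular position of each piece never enters, and the sum matches $Q/(4\pi\varepsilon_0 R) = \lambda\theta/(4\pi\varepsilon_0)$ for any $\theta$ — the radius survives through $\lambda = Q/(R\theta)$.

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

def arc_potential(Q, R, theta, n=1000):
    """Brute-force sum of point-charge potentials at the arc's centre."""
    dq = Q / n
    # every element lies at the same distance R from the centre, so the
    # angular position of each piece never enters the potential:
    return sum(dq / (4 * math.pi * eps0 * R) for _ in range(n))

Q, R = 2e-9, 0.05
exact = Q / (4 * math.pi * eps0 * R)   # = lambda * theta / (4 pi eps0)
for theta in (math.pi / 6, math.pi):
    assert abs(arc_potential(Q, R, theta) - exact) / exact < 1e-12
```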
c++, parsing return tdefault; } ConfigReader::ConfigReader (const std::string & file) : records() { readfile (file); } ConfigReader::ConfigReader() : records() { } The syntax of the config file is simple: [video] width = 1920; etc; What should I change? Improve? Are there any errors? (Well, there is one in parse.) Use standard algorithms where applicable. For instance, trim could be rewritten: std::string& trim(std::string& s) { auto is_whitespace = [] (char c) -> bool { return c == ' ' || c == '\t'; }; auto first_non_whitespace = std::find_if_not(begin(s), end(s), is_whitespace); s.erase(begin(s), first_non_whitespace); auto last_non_whitespace = std::find_if_not(s.rbegin(), s.rend(), is_whitespace) .base(); s.erase(last_non_whitespace, end(s)); return s; } Likewise, normalize could be written like this: std::string& normalize(std::string& s) { s.erase(std::remove_if(begin(s), end(s), [] (char c) { return c == ' ' || c == '\t'; }), end(s)); std::transform(begin(s), end(s), begin(s), [] (char c) { return std::tolower(c); }); return s; }
{ "domain": "codereview.stackexchange", "id": 4093, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, parsing", "url": null }
condensed-matter Title: Why do edge states emerge in the SSH model In reading Girvin and Yang’s “Modern Condensed Matter Physics”, p. 146, I came across the following argument. In the traditional SSH model, consider a system with open boundary conditions, particle-hole symmetry and an odd number of atoms. Due to particle-hole symmetry it is guaranteed that there must exist one state with exactly zero energy. If the system is dimerized, then there is a gap in the bulk and the zero mode must live on one of the boundaries (and decay exponentially into the bulk). Now my question is why we consider the energy gap to be located in the “bulk”, i.e., the spatial interior of the material. Do we assume that the system will behave differently on the boundary? "There is a gap in the bulk" -- this means that if you consider energy eigenstates that have significant support in the bulk, you will not find such states with energy in the gap. You can also define a local density of states, and then you will find that, at energies inside the gap, this local density of states vanishes in the bulk. You can still have eigenstates of the Hamiltonian with energy in the bulk gap, but their wavefunctions will not extend deep into the bulk. So yes, the system is different on the boundary than in the bulk. The behavior of most solid state systems is dominated by the bulk, so we often neglect boundary effects. For topological insulators this is different.
{ "domain": "physics.stackexchange", "id": 69106, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "condensed-matter", "url": null }
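The exponentially decaying zero mode can be written down in closed form and checked against a finite open chain (my own sketch, using the usual tight-binding conventions): on an odd chain with alternating hoppings $v, w$, the state with amplitude $(-v/w)^j$ on every other site is annihilated by $H$ exactly, while the bulk bands sit at $|E| \geq |w - v|$.

```python
def ssh_hamiltonian(n_sites, v, w):
    """Tight-binding SSH chain with alternating hoppings v, w and
    open boundary conditions."""
    H = [[0.0] * n_sites for _ in range(n_sites)]
    for i in range(n_sites - 1):
        t = v if i % 2 == 0 else w
        H[i][i + 1] = H[i + 1][i] = t
    return H

def matvec(H, psi):
    m = len(psi)
    return [sum(H[i][j] * psi[j] for j in range(m)) for i in range(m)]

n, v, w = 11, 0.3, 1.0             # odd chain, dimerized (v < w)
psi = [0.0] * n
for j in range(0, n, 2):           # support only on every other site,
    psi[j] = (-v / w) ** (j // 2)  # decaying exponentially into the bulk

# Exact zero mode localized at the left edge: H psi = 0.
assert all(abs(x) < 1e-12 for x in matvec(ssh_hamiltonian(n, v, w), psi))
```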
decimal. Facts about Rational Numbers: any number that can be expressed in the form a/b, where a and b are integers and b ≠ 0, is called a rational number. (In Python, the fractions module provides support for rational number arithmetic.) In the positive (right) direction, the real line extends toward +∞ (positive infinity); in the negative (left) direction, it extends toward −∞ (negative infinity). Is a terminating decimal a rational number (can it be written as a fraction)? Yes. Repeating decimals are also rational numbers because they can be represented as a ratio of two integers. In maths, rational numbers are expressed in the form p/q, where q is not equal to zero. Non-terminating, non-repeating decimals are not rational numbers because they cannot be expressed in the form of a common fraction. Non-terminating but repeating decimals are rational numbers and can be represented in the form p/q, where q is not equal to 0. So, from the above, it is clear that terminating decimals and non-terminating repeating decimals are rational numbers. Positive decimals are decimals greater than 0. Every rational number can be written as a fraction a/b, where a and b are integers. That is why the numbers of arithmetic are called the rational numbers. Terminating decimals, which are decimals with a finite number of digits, are always rational.
{ "domain": "co.zm", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9759464513032269, "lm_q1q2_score": 0.808999882142364, "lm_q2_score": 0.8289388019824947, "openwebmath_perplexity": 566.2455819575215, "openwebmath_score": 0.6455246806144714, "tags": null, "url": "http://sign.amzn-manage-89471928.sumasystems.co.zm/nn4va/are-decimals-rational-numbers.html" }
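The ratio-of-integers definition in the chunk above is easy to demonstrate with Python's fractions module (which the text mentions). The decimal values below are illustrative choices, not taken from any particular exercise:

```python
from fractions import Fraction

# A terminating decimal is rational: 0.2684 = 2684/10000 = 671/2500.
t = Fraction("0.2684")
assert t == Fraction(671, 2500)

# A repeating decimal is rational too: x = 0.333... satisfies 10x - x = 3,
# so x = 3/9 = 1/3.  The same "multiply and subtract" trick works in general.
x = Fraction(3, 9)
assert x == Fraction(1, 3)

# A non-terminating, NON-repeating decimal (such as sqrt(2)) has no such
# p/q form, which is exactly why it is irrational.
```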
A decagon has ten vertices, and from each vertex there are seven diagonals (a vertex cannot be joined to itself or to either of its two adjacent vertices, since those joins form sides, not diagonals). Each diagonal joins a vertex to one of the non-adjacent vertices. There are \(C^{10}_2 = 45\) ways to connect two different vertices; subtracting the 10 sides leaves 35 diagonals. The same count follows from substituting n = 10 into the formula for the number of diagonals of an n-gon, n(n - 3) / 2 = 10 · 7 / 2 = 35.
{ "domain": "18hao.net", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9706877658567787, "lm_q1q2_score": 0.8171498509935402, "lm_q2_score": 0.8418256412990658, "openwebmath_perplexity": 1095.6483295119074, "openwebmath_score": 0.3421178460121155, "tags": null, "url": "http://18hao.net/0b9r3hm/7zwfvgh.php?a2b067=how-many-diagonals-does-a-decagon-have" }
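The closed-form count n(n - 3)/2 referenced above can be cross-checked by brute force, enumerating vertex pairs and discarding the ones that form sides; a small sketch:

```python
from itertools import combinations

def diagonals(n):
    """Count diagonals of a convex n-gon by enumerating vertex pairs."""
    count = 0
    for a, b in combinations(range(n), 2):
        # Skip sides: pairs of adjacent vertices (including the wrap-around edge).
        if (b - a) % n in (1, n - 1):
            continue
        count += 1
    return count

# Decagon: C(10, 2) = 45 pairs, minus 10 sides, gives 35 diagonals.
assert diagonals(10) == 10 * (10 - 3) // 2 == 35
```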
forces, lagrangian-formalism, mathematical-physics, potential-energy, velocity $$ {\bf F}~=~{\bf F}({\bf r},{\bf v},{\bf a},t) \tag{1}$$ has a velocity-dependent potential $$U~=~U({\bf r},{\bf v},t),\tag{2}$$ which by definition means that $$ {\bf F}~\stackrel{?}{=}~\frac{d}{dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}}. \tag{3} $$ If we define the potential part of the action as $$ S_p~:=~\int \!dt~U,\tag{4}$$ then the condition (3) can be rewritten with the help of a functional derivative as $$ F_i(t)~\stackrel{(2)+(3)+(4)}{=}~ -\frac{\delta S_p}{\delta x^i(t)}, \qquad i~\in~\{1,\ldots,n\}, \tag{5} $$ where $n$ is the number of spatial dimensions. It follows from eqs. (2) & (3) that in the affirmative case the force ${\bf F}$ must be an affine function in acceleration ${\bf a}$. Since functional derivatives commute $$ \frac{\delta}{\delta x^i(t)} \frac{\delta S_p}{\delta x^j(t^{\prime})} ~=~\frac{\delta}{\delta x^j(t^{\prime})} \frac{\delta S_p}{\delta x^i(t)},\tag{6}$$ we derive the following consistency condition (7) for a force with a velocity dependent potential $$ \frac{\delta F_i(t)}{\delta x^j(t^{\prime})}
{ "domain": "physics.stackexchange", "id": 59551, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "forces, lagrangian-formalism, mathematical-physics, potential-energy, velocity", "url": null }
$\begin{array}{c}\hfill 4\overline{)28}\end{array}$  Answer: 7

$\frac{\text{30}}{5}$

$\frac{\text{16}}{4}$  Answer: 4

$\text{24}÷8$

$\text{10}÷2$  Answer: 5

$\text{21}÷7$

$\text{21}÷3$  Answer: 7

$0÷6$

$8÷0$  Answer: not defined

$\text{12}÷4$

$\begin{array}{c}\hfill 3\overline{)9}\end{array}$  Answer: 3

$\begin{array}{c}\hfill 0\overline{)0}\end{array}$

$\begin{array}{c}\hfill 7\overline{)0}\end{array}$  Answer: 0

$\begin{array}{c}\hfill 6\overline{)48}\end{array}$

$\frac{\text{15}}{3}$  Answer: 5

$\frac{\text{35}}{0}$

$\text{56}÷7$  Answer: 8

$\frac{0}{9}$

$\text{72}÷8$  Answer: 9

Write $\frac{\text{16}}{2}=8$ using three different notations.

Write $\frac{\text{27}}{9}=3$ using three different notations. Answer: $\text{27}÷9=3$ ; $\begin{array}{c}\hfill 9\overline{)27}\end{array}=3$ ; $\frac{\text{27}}{9}=3$

In the statement $\begin{array}{c}\hfill 4\\ \hfill 6\overline{)24}\end{array}$, 6 is called the ______. 24 is called the ______. 4 is called the ______.

In the statement $\text{56}÷8=7$, 7 is called the ______. 8 is called the ______. 56 is called the ______. Answer: 7 is quotient; 8 is divisor; 56 is dividend.

## Exercises for review
{ "domain": "jobilize.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9910145704512984, "lm_q1q2_score": 0.8102282161345035, "lm_q2_score": 0.8175744739711883, "openwebmath_perplexity": 2430.6254852470406, "openwebmath_score": 0.5777667164802551, "tags": null, "url": "https://www.jobilize.com/online/course/2-2-concepts-of-division-of-whole-numbers-by-openstax?qcr=www.quizover.com&page=1" }
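The two "not defined" answers in the exercise set above reflect the rule that division by zero has no value, while zero divided by a nonzero number is simply 0; a quick sketch:

```python
# Zero divided by a nonzero number is 0, and ordinary division works as expected.
assert 0 / 6 == 0
assert 48 / 6 == 8

# Division BY zero, as in 8 ÷ 0 or 35/0, is not defined.
undefined = False
try:
    8 / 0
except ZeroDivisionError:
    undefined = True
assert undefined
```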
c#, unit-testing, async-await, mvvm, constructor Title: Testing async method call from constructor I have a project where I want to build a more sophisticated ToDo list - basically a personal project management system. I'm just starting out with the project, and I'd like some feedback on whether my test methods are OK, as I'm pretty new to TDD. So far, for the main page that lists the daily ToDo items, I've got the following ViewModel: public class MainVM : ViewModelBase { private IEnumerable<ToDoItem> _toDoItems; private IRepository<ToDoItem> _toDoItemRepo; private bool _dataIsLoaded; public bool DataIsLoaded { get { return _dataIsLoaded; } set { Set(ref _dataIsLoaded, value, true); } } public IEnumerable<ToDoItem> ToDoItems { get { return _toDoItems; } set { Set(ref _toDoItems, value); } } public MainVM(IRepository<ToDoItem> toDoItemRepo) { _toDoItemRepo= toDoItemRepo; LoadData().ContinueWith(t => FinishedLoadingData(t)); } private void FinishedLoadingData(Task loadTask) { switch (loadTask.Status) { case TaskStatus.RanToCompletion: DataIsLoaded = true; break; default: DataIsLoaded = false; break; } } public async Task LoadData() { if (!DataIsLoaded) ToDoItems= await _toDoItemRepo.GetAsync(); } } As you can see, I want to load my data when the ViewModel is created. I'm using the Set method from the MVVMLight toolkit to set the DataIsLoaded property- it sets the field behind and raises the PropertyChanged event for me. I have the following test set up to test for a successful load: public void DataIsLoaded_is_true_if_loading_task_ran_to_completion() { AutoResetEvent testTrigger = new AutoResetEvent(false);
{ "domain": "codereview.stackexchange", "id": 29629, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, unit-testing, async-await, mvvm, constructor", "url": null }
r "Drilling & Completion team", "Drilling & Completion team", "Drilling & Completion team", "Drilling & Completion team", "Production Operations & Company Midstream Services team", "Production Operations & Company Midstream Services team", "Production Operations & Company Midstream Services team", "Production Operations & Company Midstream Services team", "EH&S team", "EH&S team", "EH&S team", "EH&S team", "Investor Relations team", "Investor Relations team", "Investor Relations team", "Investor Relations team", "Investor Relations team", "Business Development Company Midstream team", "Business Development Company Midstream team", "Human Resources & Employee Development team", "Human Resources & Employee Development team", "Human Resources & Employee Development team", "Human Resources & Employee Development team", "Human Resources & Employee Development team", "Infrastructure & Technology team", "Infrastructure & Technology team", "Infrastructure & Technology team", "IT Strategy team", "IT Strategy team", "IT Strategy team", "IT Strategy team", "IT Strategy team", NA, "Legal team", "Legal team", "Legal team", "Marketing team", "Marketing team", "Marketing team", "Government Affairs & Regulatory Compliance team", "Government Affairs & Regulatory Compliance team"), FTrole = c("DR", "DR", "TL", "DR", "DR", "TL", "DR", "DR", "DR", "DR", "DR", "DR", "DR", "TL", "TL", "DR", "DR", "TL", "DR", "DR", "DR", "DR", "DR", "TL", "DR", "DR", "DR", "DR", "DR", "DR", "DR", "DR", "DR", "DR", "TL", "DR", "TL", "DR", "DR", "TL", "DR", "DR", "DR", "DR", "DR",
{ "domain": "codereview.stackexchange", "id": 32530, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "r", "url": null }
## A number of people shared a meal, intending to divide the cost evenly among themselves. However, several of the diners

##### This topic has expert replies

Legendary Member
Posts: 2898
Joined: 07 Sep 2017
Thanked: 6 times
Followed by:5 members

### A number of people shared a meal, intending to divide the cost evenly among themselves. However, several of the diners

by Vincen » Sat Nov 27, 2021 4:38 am

A number of people shared a meal, intending to divide the cost evenly among themselves. However, several of the diners left without paying. When the cost was divided evenly among the remaining diners, each remaining person paid \$12 more than he or she would have if all diners had contributed equally. Was the total cost of the meal, in dollars, an integer?

(1) Four people left without paying.
(2) Ten people in total shared the meal.

Source: Veritas Prep

### GMAT/MBA Expert

GMAT Instructor
Posts: 16162
Joined: 08 Dec 2008
Location: Vancouver, BC
Thanked: 5254 times
Followed by:1268 members
GMAT Score:770

### Re: A number of people shared a meal, intending to divide the cost evenly among themselves. However, several of the dine

by [email protected] » Sat Nov 27, 2021 7:31 am

Vincen wrote: Sat Nov 27, 2021 4:38 am
A number of people shared a meal, intending to divide the cost evenly among themselves. However, several of the diners left without paying. When the cost was divided evenly among the remaining diners, each remaining person paid \$12 more than he or she would have if all diners had contributed equally. Was the total cost of the meal, in dollars, an integer?

(1) Four people left without paying.
(2) Ten people in total shared the meal.

Source: Veritas Prep

Target question: Was the total cost of the meal, in dollars, an integer?

This is a great candidate for rephrasing the target question
{ "domain": "beatthegmat.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9848109520836026, "lm_q1q2_score": 0.8330933770937216, "lm_q2_score": 0.8459424373085146, "openwebmath_perplexity": 2921.9206532191765, "openwebmath_score": 0.7196266055107117, "tags": null, "url": "https://www.beatthegmat.com/a-number-of-people-shared-a-meal-intending-to-divide-the-cost-evenly-among-themselves-however-several-of-the-diners-t328306.html" }
Remark: the developments here show that there is a simple integral representation of the MeijerG function, much simpler than the complex integral version. We have

MeijerG[{{}, {3/2, 3/2, 3/2}}, {{1/2, 1, 1}, {}}, x] =
  2/Pi Integrate[
    27/8 ( Sqrt[-((w x)/(-1 + w))] (Sqrt[-1 + w/x] - ArcCos[Sqrt[x/w]]))/w,
    {w, x, 1}]

The difficulties of Mathematica calculating the numerical values close to 1 have been overcome by this representation and the series expansion.

• "using the Mellin transformation" - with the Meijer result, this is what you're implicitly doing anyway, since the $G$-function is effectively an inverse Mellin transform… – J. M. will be back soon Aug 12 '15 at 12:29
• I have been busy doing the calculations and the editing, so I didn't see the results of wolfie. Sorry for that. – Dr. Wolfgang Hintze Aug 12 '15 at 12:30
• @J. M.: that's what I was - cautiously - saying ;-) – Dr. Wolfgang Hintze Aug 12 '15 at 12:34
• The reason why I had mentioned that the Meijer result looked familiar in the other answer is that it resembled some of the Mellin-Barnes representations for elliptic integrals that I remember; your result, which at least involves the complete elliptic integrals, seems to be tantalizing. – J. M. will be back soon Aug 12 '15 at 12:41
• Some nice reference to MeijerG is ams.org/notices/201307/rnoti-p866.pdf pointing out specifically the closure under convolutions. – Dr. Wolfgang Hintze Aug 12 '15 at 13:20
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9766692305124306, "lm_q1q2_score": 0.8649690012069938, "lm_q2_score": 0.8856314647623015, "openwebmath_perplexity": 1678.0706061135265, "openwebmath_score": 0.5556476712226868, "tags": null, "url": "https://mathematica.stackexchange.com/questions/91407/distribution-over-the-product-of-three-or-n-independent-beta-random-variables/91441" }
c++, performance, matrix, complexity, vectors

float res;
res += vec1.x * vec2.x + vec1.y * vec2.y + vec1.z * vec2.z; // ERROR ON THIS LINE.
if (isNearlyEqual(res, 0))
    res = 0;
return res;
}

The variable res is not initialized prior to being used. It is being used because the += operator says "add the following to this variable", which reads the current (indeterminate) value of res. There are two ways to correct this: either change the += to =, or assign zero in the declaration of res.

float res = 0;
{ "domain": "codereview.stackexchange", "id": 35213, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, matrix, complexity, vectors", "url": null }
neural-networks, deep-learning, applications, overfitting, early-stopping

Title: Should I prefer the model with the lowest validation loss or the highest validation accuracy to deploy?

I trained a ResNet20 on Cifar10 and obtained the following learning curves. From the figures, I see that at epoch 52 my validation loss is 0.323 (the lowest) and my validation accuracy is 89.7%. On the other hand, at the end of training (epoch 120), my validation loss is 0.413 and my validation accuracy is 91.3% (the highest). Say I'd like to deploy this model in some real-world application. Should I prefer the snapshotted model at epoch 52, the one with the lowest validation loss, or the model obtained at the end of training, the one with the highest validation accuracy?

Okay, I think it's better if we first distinguish loss from accuracy via Jeremy's answer, and I agree with his point that "low or huge loss is a subjective metric". The loss value is easily affected by noise in the data and can increase significantly because of a few erroneous data points. My advice in this case is to use more evaluation metrics and to understand exactly what you need from your model. For example, with Cifar10, if all you need is as many correct labels as possible, you can trust accuracy. However, if you want your model to be confident that its result is correct, the area under the receiver operating characteristic curve (AUROC) may be the better choice. For example, in a classification problem with 3 classes and correct label y = 1:

Good accuracy, bad AUROC: the output probability from softmax is [0.3, 0.4, 0.3]

Good accuracy, good AUROC: [0.1, 0.8, 0.1]

And with an imbalanced dataset, precision, recall and F1-score will be more suitable.
{ "domain": "ai.stackexchange", "id": 2715, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neural-networks, deep-learning, applications, overfitting, early-stopping", "url": null }
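The [0.3, 0.4, 0.3] versus [0.1, 0.8, 0.1] example above can be made concrete: both predictions are equally *accurate* (argmax picks class 1 either way), but cross-entropy loss, like any confidence-sensitive metric, distinguishes them. A small stdlib-only sketch:

```python
import math

def cross_entropy(probs, true_label):
    """Negative log-likelihood of the true class."""
    return -math.log(probs[true_label])

def is_correct(probs, true_label):
    return max(range(len(probs)), key=probs.__getitem__) == true_label

hesitant = [0.3, 0.4, 0.3]    # good accuracy, low confidence
confident = [0.1, 0.8, 0.1]   # good accuracy, high confidence
y = 1

# Accuracy cannot tell the two apart...
assert is_correct(hesitant, y) and is_correct(confident, y)
# ...but the loss can: the hesitant prediction is penalized more.
assert cross_entropy(hesitant, y) > cross_entropy(confident, y)
```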
electromagnetism, magnetic-fields After a quick calculation with Biot-Savart Law (using the Dirac $\:\delta\:$ function) I found the solution \begin{equation} \mathbf{B}_{_{\mathbf{BS}}}\left(\mathbf{x},t\right) \boldsymbol{=}\dfrac{\mu_{0}q}{4\pi }\dfrac{\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{\bf 3}} \tag{02} \end{equation} which compared with that from the Lienard-Wiechert potentials, see above equation (01b) \begin{equation} \mathbf{B}_{_{\mathbf{LW}}}\left(\mathbf{x},t\right)\boldsymbol{=}\dfrac{\mu_{0}q}{4\pi }\dfrac{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\right)}{\left(1\!\boldsymbol{-}\!\beta^{\bf 2}\sin^{\bf 2}\!\phi\right)^{\boldsymbol{3/2}}}\dfrac{\boldsymbol{\upsilon}\boldsymbol{\times}\mathbf{{r}}}{\:\:\Vert\mathbf{r}\Vert^{\bf 3}} \tag{03} \end{equation} it looks as an approximation for charges whose velocities are small compared to that of light $\:c$ \begin{equation} \mathbf{B}_{_{\mathbf{BS}}}\left(\mathbf{x},t\right)\boldsymbol{=}
{ "domain": "physics.stackexchange", "id": 81616, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, magnetic-fields", "url": null }
concurrency Title: Two-phase locks: why is it better? I'm reading Arpaci's Operating Systems: Three Easy Pieces, the chapter on Locks. At the end of the chapter, they present Two-phase locks (section 28.16). They say A two-phase lock realizes that spinning can be useful, particularly if the lock is about to be released. So in the first phase, the lock spins for a while, hoping that it can acquire the lock. I understand this may only be useful in a multiprocessor environment, is that right? Also, it seems a little arbitrary to me to wait the first time and then go to sleep. I mean, I don't see why this would be such a great improvement over going directly to sleep in the case the lock is being held. Is there anything I'm missing? Thanks in advance! Below is the whole paragraph on Two-phase locks:
{ "domain": "cs.stackexchange", "id": 16964, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "concurrency", "url": null }
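For intuition, a two-phase acquire can be sketched with Python's threading primitives: spin with non-blocking tries for a bounded number of iterations (which only pays off on a multiprocessor, where the holder may release while we spin), then fall back to a blocking acquire that puts the thread to sleep. The spin budget and names below are illustrative:

```python
import threading

def two_phase_acquire(lock, spin_budget=1000):
    """Phase 1: spin with cheap non-blocking tries; phase 2: block (sleep)."""
    for _ in range(spin_budget):
        if lock.acquire(blocking=False):  # no context switch if this succeeds
            return "spin"
    lock.acquire()  # give up spinning and let the OS put us to sleep
    return "sleep"

lock = threading.Lock()

# Uncontended lock: the very first spin succeeds, no sleep needed.
assert two_phase_acquire(lock) == "spin"
lock.release()

# Contended lock: spinning fails, so we fall through to the blocking phase.
# A helper thread releases the lock shortly after we start waiting.
lock.acquire()
threading.Timer(0.05, lock.release).start()
assert two_phase_acquire(lock, spin_budget=10) == "sleep"
lock.release()
```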
general-relativity, metric-tensor, volume Geometrically, the condition $\rho=1$ means that the grid of coordinate lines is such that the parallelepipeds formed by the intersection of the coordinate lines have unit volume.
{ "domain": "physics.stackexchange", "id": 89620, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, metric-tensor, volume", "url": null }
So your two roots $x = \frac{1}{2}(3 \pm \sqrt{5})$ come from one of them being a solution to $x-1 = -\sqrt{x}$ and the other $x-1 =\sqrt{x}$, even if both equations have domain $x \geq 0$. For an answer to your last question, yes, in such cases, you'll need to check that any solution set you get from an equation after squaring it isn't a spurious solution by checking that it satisfies the original equation. To clarify this: it needn't be checked numerically - there are certain conditions you can check; for example, as in law-of-fives' comment, you have $x-1 =\sqrt{x} > 0$, so the only valid solution is the one that satisfies $x>1$.

• It suffices to note that $\sqrt{x}$ must be positive given the original problem and when $x$ is found through the quadratic equation, given your first equation we not only have the criterion $x>0$ but also $x-1>0$. – law-of-fives Apr 29 '17 at 2:28

Let $y=\sqrt x\ge0$ $$\implies y^2=1+y\iff y^2-y-1=0$$ Clearly, the two roots are of opposite signs.

• This doesn't address the OP's question. – mrnovice Apr 29 '17 at 2:34

In general you "slip" the domain restriction at the exact point you square both sides.

$x=1+\sqrt {x}$

$x-1=\sqrt {x}$

!!! Here!! ===> $(x-1)^2=\sqrt {x}^2$<=== !!! Here!!! (so $x-1\ge 0$)

And with that in mind.....
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9828232919939074, "lm_q1q2_score": 0.8232324098030392, "lm_q2_score": 0.8376199633332891, "openwebmath_perplexity": 209.59842578797492, "openwebmath_score": 0.9902226328849792, "tags": null, "url": "https://math.stackexchange.com/questions/2257047/why-does-solving-x-1-sqrtx-give-an-invalid-solution" }
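The extraneous-root check discussed above is easy to automate: take the two roots of the squared equation $x^2 - 3x + 1 = 0$ and test each candidate against the original $x = 1 + \sqrt{x}$. A sketch:

```python
import math

# Squaring x - 1 = sqrt(x) gives x^2 - 3x + 1 = 0, with roots (3 ± sqrt(5))/2.
roots = [(3 + math.sqrt(5)) / 2, (3 - math.sqrt(5)) / 2]

# Keep only the candidates that satisfy the ORIGINAL equation.
valid = [x for x in roots if math.isclose(x, 1 + math.sqrt(x))]
assert valid == [(3 + math.sqrt(5)) / 2]  # only the root with x > 1 survives

# The other root solves x - 1 = -sqrt(x) instead: the spurious branch
# introduced by squaring.
spurious = (3 - math.sqrt(5)) / 2
assert math.isclose(spurious - 1, -math.sqrt(spurious))
```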
electrostatics, electric-fields, potential

Title: Electric field generated by a uniform rod

When you calculate the electric field of a uniform rod of length $L$ and charge density $\lambda$ at a distance $d$ on its axis, you can ignore the effect of the ends provided $L$ is big enough. However, if you take another point, not on the axis (for example, at a distance $a$ from one end), you should consider the effect of the end. How do you do it? I think this effect only adds another component to the electric field, but I'm not sure. Can someone tell me if this is correct?

A uniformly charged rod or needle with finite length ($L=2c$) is the limiting case for a conducting prolate spheroid.
{ "domain": "physics.stackexchange", "id": 48120, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, electric-fields, potential", "url": null }
beginner, programming-challenge, rust

    spacer(coded_no_spacing)
}

You are mixing two different approaches here. Firstly, you make a string out of the input and then modify it. Secondly, you use an iterator over the bytes of the string. This code would be more straightforward if you just iterated over the letters. Here is my approach:

plain
    .chars()
    .filter_map(|c| {
        if c.is_ascii_alphabetic() {
            let letter = c.to_ascii_lowercase() as u8;
            Some(char::from(b'z' - letter + b'a'))
        } else if c.is_ascii_alphanumeric() {
            Some(c)
        } else {
            None
        }
    })
    .collect()

If you haven't seen it before, the filter_map function combines filtering and mapping. The closure can return either None, to remove the element, or Some(x), to provide an element in the output.

/// "Decipher" with the Atbash cipher.
pub fn decode(cipher: &str) -> String {
    let mut out = encode(cipher);
    out.retain(|c| c.is_ascii_alphanumeric());
    out
}

It took me a bit to figure out why you were filtering the chars again. But I see it is to remove the spacing. It would make more sense to split the basic ciphering into its own function so you can call that without adding the spacing. Then you wouldn't have to filter it.

fn spacer(coded_no_spacing: String) -> String {
    let mut coded_no_spacing = coded_no_spacing.chars();
    let mut temp_char = coded_no_spacing.next();
    let mut counter = 0;
    let mut coded_with_spaces = "".to_string();
{ "domain": "codereview.stackexchange", "id": 36321, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, programming-challenge, rust", "url": null }
rna-seq, cell-line, clustering Title: Should the cell sorting marker genes be excluded during clustering? We sort different populations of blood cells using a number of fluorescent flow cytometry markers and then sequence RNA. We want to see what the transcriptome tells us about the similarity and relation between these cells. In my experience on bulk RNA-seq data, there is a very good agreement between flow cytometry and mRNA expression for the markers. It's good to remember we sort using a few markers only, while there are hundreds if not thousands of cell surface proteins. Should we exclude genes of these sorting (CD) marker proteins when we perform PCA or other types of clustering? The argument is that after sorting, these genes may dominate clustering results, even if the rest of the transcriptome would tell otherwise, and thus falsely confirm similarity relations inferred from sorting. Should we at least check the influence of these genes on clustering results? I do not think there is a simple "yes" or "no" answer here. A good starting point would be, as you suggest, use all the genes and assess the results in the light of the marker genes and expected results. This could both serve as as good quality control as well as give you overview of all the processes happening in the cells. Depending on the marker genes effect and biological question you may then want to remove the marker genes with potential ordering effect, or even other genes, e.g. by GO terms or pathways. You most likely want to account for the cell cycle phases as well. And check for other technical factors. I recommend Bioconductor pipeline to get inspired https://www.bioconductor.org/help/workflows/simpleSingleCell/ when it comes to scRNA-seq analyses.
{ "domain": "bioinformatics.stackexchange", "id": 98, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rna-seq, cell-line, clustering", "url": null }
rosnode, ros-kinetic

Title: ros nodes vs ros timers?

Hi, I would like to set up two loops at different frequencies on a system (like a Raspberry Pi 3). I'm trying to decide the optimal way to set up my code with the least latency in data transfer between the two loops and the least computation. Here are two options I thought of:

Two different ros-nodes, each with a different loop rate, using ros-msgs to communicate between these loops. With the ros-nodes option, the loop rates are pretty consistent, but there is latency in the data transfer.

A class with two functions run at different loop rates using ros::Timers, with class variables used to transfer the data between the functions. With the ros::Timers option, even though there is no latency in the data transfer, the loop rates are not quite consistent.

What is the better option, or are there any other better options? Is multi-threading a better option? Also, is it possible to share data between two ros-nodes without using rosmsgs/rosservice, but by sharing variables/pointers between them?

Thank you
prasanth

Originally posted by praskot on ROS Answers with karma: 257 on 2019-07-07

Post score: 1
{ "domain": "robotics.stackexchange", "id": 33356, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rosnode, ros-kinetic", "url": null }
homework-and-exercises, special-relativity, operators, commutator, poincare-symmetry The problem, accoding to me, comes when I try to compare both expressions using again the conmutation relations of Poincare algebra, because I obtain the following expression: $$\eta_{\nu\sigma}M^{\mu\nu}P_{\lambda}P_{\mu}P^{\lambda}+P^{\mu}M_{\lambda\sigma}P_{\mu}P^{\lambda}=\eta_{\nu\sigma}([M^{\mu\nu},P_{\mu}]+P_{\mu}M^{\mu\nu})P_{\lambda}P^{\lambda}+([P^{\mu},M_{\lambda\sigma}]+M_{\lambda\sigma}P^{\mu})P_{\mu}P^{\lambda}=\eta_{\nu\sigma}(i\eta_{\mu\alpha}(\eta^{\nu\alpha}P^{\mu}-\eta^{\mu\alpha}P^{\nu})+P_{\mu}M^{\mu\nu})P_{\lambda}P^{\lambda}+M_{\lambda\sigma}P_{\mu}P^{\mu}P^{\lambda}-i\eta^{\mu\alpha}(\eta_{\sigma\alpha}P_{\lambda}-\eta_{\lambda\alpha}P_{\sigma})P_{\mu}P^{\lambda}=(M_{\mu\sigma}P^{\mu}+\eta_{\nu\sigma}P_{\mu}M^{\mu\nu}-3iP_{\sigma})P_{\lambda}P^{\lambda}$$
{ "domain": "physics.stackexchange", "id": 60721, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, special-relativity, operators, commutator, poincare-symmetry", "url": null }
battery Title: How can I measure the amount of voltage stored in a sealed lead-acid battery? I know that the simple way to measure the voltage stored in a lead-acid battery is to simply measure the positive and negative using a voltmeter. In my case, I think that my battery has a builtin charge controller and it is sealed like the picture below. How can I effectively measure it without opening? If there is no way to measure it, how do I open this thing? I tried to open it using flat screwdriver and it leaves a dented mark. The DC 12v output may or may not be direct from the battery, if it is current limited then probably not - you will need to check the spec sheet. The top is probably screwed down as the pv controller is under there. The screws to get access are hidden under the orange graphic with all the labels, removing it without damage depends on how strong the adhesive is. One way is to rub your thumb over the surface to find the screw holes then just uncover those... but if they fitted plastic hole covers you may not be lucky.
{ "domain": "engineering.stackexchange", "id": 2893, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "battery", "url": null }
ros-industrial Title: ROS Industrial website is often down We are working on a project to explore the possibilities of ROS Industrial for SME companies. However, the ROS Industrial website (rosindustrial.org) is very often not reachable and this is not beneficial for the confidence of companies in the technology. Is it possible to host the ROS Industrial website at a more reliable provider? I have not experienced problems with the ros.org website. Originally posted by Wilco Bonestroo on ROS Answers with karma: 159 on 2017-05-31 Post score: 0 Thank you for the information. Indeed, this is an issue within our organisation. The IP address of rosindustrial.org is blocked by our firewall because the website andyschwinn.com (which doesn't exist anymore) had the same IP address as rosindustrial.org and is reported as a source of cryptolocker malware. Our system administrator has manually removed the block. However, I expect that more organisations will have the same issue. Originally posted by Wilco Bonestroo with karma: 159 on 2017-06-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2017-06-01: Could I please ask you to not post answers, unless you are answering your own question? ROS Answers is not a regular forum, but a Q&A site. Discussion is best done in comments. Thanks.
{ "domain": "robotics.stackexchange", "id": 28025, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-industrial", "url": null }
newtonian-mechanics, forces, energy, work with $W_{1 \to 2}$ the work done by ${\bf F}$ from time $t_1$ to $t_2$, ${\bf r}(t)$ is the trajectory of a point particle. I don't understand why it says that ${\bf F}({\bf r}(t),{\bf\dot{r}}(t),t)\cdot \dot{\bf r}(t)=\frac{d}{dt}f({\bf r}(t)) $ has to be fulfilled. I know that ${\bf F}= m \,\dot{\bf v} $. Where $\dot{\bf v}$ is a derivative with respect to $t$. Does it have to do with this? I am going to take a different approach than the other two answers and say that this is not required. As you can pick up from the other answers, this requirement the text gives is essentially saying that we need our force to be conservative. I am going to argue that these integrals themselves do not require us to use conservative forces.$^*$ We will start with the definition of the work done by a force: $$W=\int_\gamma\mathbf F(\mathbf x)\cdot\text d\mathbf x=\int_{\mathbf x_1}^{\mathbf x_2}\mathbf F(\mathbf x)\cdot\text d\mathbf x$$ where $\mathbf F$ is the force in question and $\text d\mathbf x$ is the infinitesimal displacement along the path $\gamma$. I will drop the $\gamma$ and put the initial and final locations as limits of the integral (the final integral on the right), but just keep in mind that this integral is done over the path the object moves through in space. Now, two things to consider:
{ "domain": "physics.stackexchange", "id": 55742, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, forces, energy, work", "url": null }
Prove/Disprove that if two sets have the same power set then they are the same set I am really sure that if two sets have the same power set, then they are the same set. I am just wondering how exactly one goes about proving/showing this? I'm usually wrong, so if anyone can show me an example where this fails, I'd like that too. The homework just asks for true/false, but I'm wanting to show it if possible. My thoughts are that since the power set is by definition the set of all subsets of a set, if each of the two power sets are identical, we have an identity map between each set, thus it's indistinguishable which power set is a given set's power set. I hope that wasn't verbose. Since a set has only one power set, we can conclude they are in fact the same set. - What do you mean by "same"? – Qiaochu Yuan Sep 19 '11 at 2:37 I think he means "same" in the sense of the axiom of extensionality. $(\forall x)(x \in A \Leftrightarrow x \in B) \Rightarrow (A = B)$ – William Sep 19 '11 at 2:41 "the same" isn't the same, depending on the context! – The Chaz 2.0 Sep 19 '11 at 2:59 Suppose $A \neq B$. Without loss of generality, there exists $x \in A$ such that $x \notin B$. Then $\{x\} \in \mathscr{P}(A)$ whereas $\{x\} \notin \mathscr{P}(B)$. Thus $\mathscr{P}(A) \neq \mathscr{P}(B)$. Conversely, if $\mathscr{P}(A) = \mathscr{P}(B)$, then all their singletons are the same. Thus $A = B$. $A = B$ if and only if $\mathscr{P}(A) = \mathscr{P}(B)$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9759464478051827, "lm_q1q2_score": 0.8426239765339743, "lm_q2_score": 0.8633916117313211, "openwebmath_perplexity": 261.35133308086944, "openwebmath_score": 0.9378816485404968, "tags": null, "url": "http://math.stackexchange.com/questions/65672/prove-disprove-that-if-two-sets-have-the-same-power-set-then-they-are-the-same-s/65706" }
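Both directions of the argument can be sanity-checked by brute force on small finite sets. A sketch (pure Python; small sets only, since the power set has $2^n$ elements):

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as a set of frozensets."""
    items = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r)
                                for r in range(len(items) + 1))}

A, B, C = {1, 2, 3}, {1, 2, 3}, {1, 2}

print(powerset(A) == powerset(B))   # True: equal sets give equal power sets
print(powerset(A) == powerset(C))   # False

# The singleton argument from the answer: A is recoverable from P(A).
recovered = {x for s in powerset(A) if len(s) == 1 for x in s}
print(recovered == A)               # True
```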
gravity, roche-limit The satellite has its own gravity, and normally that is stronger than the tidal force. But if the satellite is sufficiently close to the planet, the tidal force may become greater than the self-gravitation of the satellite. The point at which this occurs is the Roche limit. The basic calculation of the Roche limit is based on finding when the tidal force (caused by the different gravitational and centrifugal forces over the satellite) exceeds the gravitational force. A more subtle calculation can take into account other factors: The tidal force can distort the satellite, the satellite may have significant rotation, there may be significant strength in the materials that form the satellite. These factors can cause a satellite to break up earlier or later than a simple calculation suggests. However, the "centrifugal forces acting on satellite centre and surface facing to the planet, caused by satellite's orbital movement?" are the tidal forces. So these are already accounted for in the simple calculation. The big uncertainty in calculating the break-up of a satellite is the strength of the chemical bonds that hold the satellite together. Most artificial satellites orbit well inside their Roche limit, but they don't break up because they are held together by strong metallic bonds. If a satellite is rigid, then it probably has some tensile strength and will hold together even if tidal forces are tending to break it apart. If a satellite is a "rubble pile" and doesn't have any significant strength, the assumption of it being "rigid" must be questioned. These factors introduce uncertainties that are much greater than any other forces, such as solar tides.
{ "domain": "astronomy.stackexchange", "id": 5890, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gravity, roche-limit", "url": null }
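The answer describes the comparison only qualitatively. The standard rigid-body estimate is $d = R\,(2\rho_M/\rho_m)^{1/3}$; treat the formula and the numbers below as outside assumptions, since neither appears in the text:

```python
def roche_limit_rigid(R_planet, rho_planet, rho_satellite):
    """Rigid-body estimate d = R * (2 * rho_M / rho_m)**(1/3).

    First-order only: it ignores rotation, deformation, and material
    strength, exactly the refinements the answer warns about.
    """
    return R_planet * (2.0 * rho_planet / rho_satellite) ** (1.0 / 3.0)

# Illustrative, approximate values for Earth and a Moon-density satellite
R_earth = 6.371e6      # m
rho_earth = 5514.0     # kg/m^3
rho_moon = 3344.0      # kg/m^3

d = roche_limit_rigid(R_earth, rho_earth, rho_moon)
print(d / R_earth)     # roughly 1.5 Earth radii
```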
Many authors take the existence of $1$ as part of the definition of a ring. In fact, I would disagree with Alessandro's comment and claim that most authors take the existence of $1$ to be part of the definition of a ring. There is another object, often called a rng (pronounced "rung"), which is defined by taking all the axioms that define a ring except you don't require there to be a $1$. Rng's are useful in and of themselves; for example, functions with compact support over a non-compact space do not form a ring, they form a rng. But there is also a theorem that states that every rng is isomorphic to an ideal in some ring. So studying rings and their ideals is sufficient, and this is why it is so popular to include the existence of $1$ as one of the axioms of a ring. So to summarize, there isn't really a reason why it's necessary for rings to have a $1$, it certainly does not follow from the other axioms. It's just a choice of terminology: Do you say rings have a $1$ and if they don't have a $1$ call them rngs, or do you say rings don't need a $1$ and when they do have it call them rings with unity? • For completeness, note that "a rng with a multiplicative identity" is still a different concept from "a ring": the difference being what is required of a homomorphism. A homomorphism between rngs with multiplicative identity is not required to map the identity to the identity, but a homomorphism between rings is required to satisfy $f(1) = 1$. As an example that this matters, if $R$ is a ring, the map $R \to R \times R: x \mapsto (x,0)$ is a rng homomorphism, but not a ring homomorphism. – Hurkyl Sep 2 '15 at 18:03
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9465966717067253, "lm_q1q2_score": 0.8045798913202781, "lm_q2_score": 0.849971181358171, "openwebmath_perplexity": 259.9114490915158, "openwebmath_score": 0.8188384771347046, "tags": null, "url": "https://math.stackexchange.com/questions/1418036/why-is-it-necessary-for-a-ring-to-have-multiplicative-identity/1418046" }
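Hurkyl's closing example can be checked concretely. A sketch over the integers, with componentwise operations standing in for the product ring $R \times R$:

```python
# Hurkyl's map f(x) = (x, 0) from a ring R into R x R, checked over the
# integers with componentwise product-ring operations.
def f(x):
    return (x, 0)

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def mul(p, q):
    return (p[0] * q[0], p[1] * q[1])

samples = range(-5, 6)
preserves_add = all(f(a + b) == add(f(a), f(b)) for a in samples for b in samples)
preserves_mul = all(f(a * b) == mul(f(a), f(b)) for a in samples for b in samples)
print(preserves_add, preserves_mul)   # True True: a rng homomorphism
print(f(1) == (1, 1))                 # False: the identity is not preserved
```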
quantum-mechanics, homework-and-exercises, path-integral, feynman-diagrams, anharmonic-oscillators $$A_2^1 = 72 \frac{1}{2!}\left(-\frac{\lambda}{\hbar}\right)^2 \frac{1}{4!^2} \int_0^{\hbar \beta} d\tau \int_0^{\hbar \beta} d\tau' \left[ \left< x(\tau)x(\tau') \right>_0 \right]^2\left< x^2(\tau)\right>_0 \left< x^2(\tau')\right>_0 $$ $$A_2^2 = 24 \frac{1}{2!}\left(-\frac{\lambda}{\hbar}\right)^2 \frac{1}{4!^2} \int_0^{\hbar \beta} d\tau \int_0^{\hbar \beta} d\tau' \left[ \left< x(\tau)x(\tau') \right>_0 \right]^4$$ which involve double integrals of powers of the propagator. For the harmonic oscillator, I found that it's given by a very complicated expression: $$G(\tau - \tau') = \frac{\cosh(\hbar\beta\omega/2 - \omega |\tau - \tau'|)}{2\omega \sinh (\hbar \omega \beta /2)}$$ All in all, the free energy to second order would be $$F = -kT(1+A_1 + A_2^1 + A_2^2)$$
{ "domain": "physics.stackexchange", "id": 36106, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, homework-and-exercises, path-integral, feynman-diagrams, anharmonic-oscillators", "url": null }
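The double integrals above have no simple closed form, but they are easy to estimate numerically. A rough sketch for the $[G]^4$ integral appearing in $A_2^2$, with $\hbar = \beta = \omega = 1$ chosen purely for illustration and all prefactors omitted:

```python
import math

hbar = beta = omega = 1.0    # illustrative units, not from the post

def G(d):
    """The thermal propagator quoted above, G(tau - tau')."""
    return (math.cosh(hbar * beta * omega / 2 - omega * abs(d))
            / (2 * omega * math.sinh(hbar * beta * omega / 2)))

# Midpoint Riemann sum for I = ∫0^ħβ ∫0^ħβ [G(tau - tau')]^4 dtau dtau'
N = 300
h = hbar * beta / N
I = sum(G((i - j) * h) ** 4 for i in range(N) for j in range(N)) * h * h
print(I)
```

The propagator depends only on the difference of its arguments, so the double sum collapses to differences $(i - j)h$.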
Contents 1 History 2 Background 2.1 Classification of multiple hypothesis tests 3 Definition 4 Controlling procedures 4.1 The Bonferroni procedure 4.2 The Šidák procedure 4.3 Tukey's procedure 4.4 Holm's step-down procedure The most significant test must therefore pass the Bonferroni criterion. Tukey's method, Fisher's least significant difference (LSD), Hsu's multiple comparisons with the best (MCB), and Bonferroni confidence intervals are methods for calculating and controlling the individual and family error rates. For a single comparison, the family error rate is equal to the individual error rate, which is the alpha value. Now suppose you have 1000 tests, and use the Bonferroni method. That means that to reject, we need p < 0.00005. A procedure controls the FWER in the weak sense if the FWER control at level α is guaranteed only when all null hypotheses are true (i.e. when $m_0 = m$, so the global null hypothesis is true). A procedure controls the FWER in the strong sense if the FWER control at level α is guaranteed for any configuration of true and false null hypotheses. To give an extreme example, under perfect positive dependence, there is effectively only one test and thus the FWER is uninflated.
{ "domain": "new-contents.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9532750413739075, "lm_q1q2_score": 0.8315869796887896, "lm_q2_score": 0.8723473746782093, "openwebmath_perplexity": 3044.7905660014203, "openwebmath_score": 0.5219835042953491, "tags": null, "url": "http://new-contents.com/New-York/family-error.html" }
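The 1000-test figure quoted above is just the Bonferroni threshold $\alpha/m$. A minimal sketch (the function name is mine):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject H_i iff p_i < alpha / m (the Bonferroni criterion)."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values], threshold

# 1000 tests at family-wise alpha = 0.05: per-test threshold 0.00005,
# the figure quoted in the text.
pvals = [0.00004] + [0.2] * 999
decisions, thr = bonferroni_reject(pvals)
print(thr, sum(decisions))   # 5e-05 1
```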
where y is the end deflection (m), E is the bending modulus of elasticity (N/m^2), P is the end-point load (N), L is the cantilever beam length (m), I is the area moment of inertia (m^4), b is the base width of the specimen (m), and h is the thickness of the specimen (m). The moment of inertia will be I = (b x h^3)/12, where "b" is the width of the member and "h" is the thickness. Engineers use a structure's area moment of inertia to describe how well it resists load stresses. Theory: the moment of inertia (I) can be understood as the rotational analog of mass. Calculation Example - Cantilever Beam with uniform loading. For a non-prismatic member, the stress varies with the cross section AND the moment. A tapered beam subjected to a tip bending load will be analyzed in order to predict the distributions of stress and displacement in the beam. A fresh study for dynamic behaviour of atomic force microscope cantilever by considering different immersion environments. SFD & BMD for cantilever beam with point load. The maximum deflection occurs where slope is zero. In this method, a load is applied from the end of the notched cantilever beam. Simply supported beam with a uniformly distributed load. Calculation Example – Reinforced Concrete Column at Stress. Area Moment Of Inertia Typical Cross Sections I. Question: The cantilevered beam is subjected to an applied moment M at its free end. MAXIMUM DEFLECTION OF DIFFERENT TYPES OF BEAMS. The purpose of this tutorial is to outline the steps required to do a simple nonlinear analysis of the beam shown below. A cantilever beam AB carrying a concentrated load W at the free end B (Figure 4.
If this doesn't look like the arrangement you are trying to calculate, go back to the beam deflection home page to select a more suitable calculator. Design of Beams - Flexure
{ "domain": "on50mm.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9861513873424044, "lm_q1q2_score": 0.8062522083374559, "lm_q2_score": 0.8175744806385542, "openwebmath_perplexity": 989.1310041200337, "openwebmath_score": 0.6327506303787231, "tags": null, "url": "http://on50mm.it/moment-of-inertia-of-cantilever-beam.html" }
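The variable list above refers to the standard end-load deflection formula, which the page itself lost; assuming the usual small-deflection result $y = PL^3/(3EI)$ together with the stated $I = bh^3/12$, a one-line calculator looks like this (the material and geometry below are made-up illustrative values):

```python
def end_deflection(P, L, E, b, h):
    """Tip deflection of a prismatic cantilever with an end-point load:
        y = P L^3 / (3 E I),  I = b h^3 / 12
    (standard small-deflection formula; variable names follow the text).
    """
    I = b * h ** 3 / 12.0
    return P * L ** 3 / (3.0 * E * I), I

# Illustrative numbers: a 1 m steel strip with a 100 N tip load
P = 100.0       # N
L = 1.0         # m
E = 200e9       # N/m^2
b = 0.05        # m
h = 0.01        # m

y, I = end_deflection(P, L, E, b, h)
print(I)        # about 4.17e-09 m^4
print(y)        # about 0.04 m
```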
ros Originally posted by Dimitri Schachmann with karma: 789 on 2015-10-26 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by ben.gill on 2015-10-26: Thanks Dimitri, It seems that I didn't ask my question that clearly. I had reviewed the msg documentation, and could understand the members that you listed. Where I'm unclear is the usage of the msg.point_step, msg.row_step, msg.data[] & msg.is_dense members.
{ "domain": "robotics.stackexchange", "id": 22834, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros", "url": null }
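To address the comment's actual question: in a sensor_msgs/PointCloud2, point_step is the number of bytes per point, row_step the number of bytes per row, data the flat byte buffer, and is_dense a flag meaning the cloud contains no invalid points. A sketch of the indexing arithmetic, assuming a hypothetical x/y/z float32 layout (in a real message the layout comes from msg.fields):

```python
import struct

# Assumed layout for illustration: three float32 fields x, y, z at byte
# offsets 0, 4, 8 (the real layout is described by msg.fields), so
# point_step = 12; for a dense cloud, row_step = point_step * width.
width, height = 4, 2
point_step = 12
row_step = point_step * width

points = [(float(r), float(c), 0.5) for r in range(height) for c in range(width)]
data = b"".join(struct.pack("<fff", *p) for p in points)  # little-endian

def read_point(buf, row, col):
    """Index into the flat data buffer the way a PointCloud2 consumer would."""
    offset = row * row_step + col * point_step
    return struct.unpack_from("<fff", buf, offset)

print(read_point(data, 1, 2))   # (1.0, 2.0, 0.5)
# is_dense=True asserts that the cloud contains no invalid (NaN) points.
```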
reinforcement-learning Without other matching adjustments, you will break your agent. The problem is how your new action space gets converted back into gradients to update the agent, after it has acted and needs to learn from the results. The NN component of the policy function you are considering is designed to work by balancing a discrete probability distribution. It learns by increasing the probability of actions (in the binary case, the probability of going left vs going right) that score better than a current baseline level. When interpreting the result from going 63.8% left, you have to resolve two things - which action did the agent take, and what changes to your parameters will increase the probability of taking that action. Unfortunately neither of these tasks is simple if you combine the action choices in the way you suggest. Also, you have lost exploration. The combined left/right algorithm will always output a fixed steering amount for each state. Whilst there are algorithms, like DDPG, that can work with this, it is not really possible to adapt PPO to do so. However, PPO already supports continuous action spaces directly. You can have your network output the mean and standard deviation of a distribution for how to steer, and sample from that. Then the action choice taken will directly relate to the output of the network and you can adjust the policy to make that choice more or less probable depending on results from taking it. If you are using a library implementation of PPO, then this option should be available to you.
{ "domain": "ai.stackexchange", "id": 1536, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reinforcement-learning", "url": null }
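The last paragraph can be sketched as follows; the stub policy_head stands in for a real network, and all the numbers are illustrative assumptions:

```python
import math, random

def policy_head(state):
    """Stub for the network: returns (mean, std) of the steering distribution."""
    mean = 0.5 * state       # stand-in for the network's mean output
    log_std = -1.0           # often a learned, state-independent parameter
    return mean, math.exp(log_std)

def sample_action(state):
    mean, std = policy_head(state)
    action = random.gauss(mean, std)   # exploration comes from the sampling
    # log-probability of the chosen action, needed for the PPO ratio
    log_prob = (-((action - mean) ** 2) / (2 * std ** 2)
                - math.log(std) - 0.5 * math.log(2 * math.pi))
    return action, log_prob

random.seed(0)
a, lp = sample_action(1.0)
print(a, lp)
```

The stored log-probability is what lets PPO compare the current policy against the one that generated the data.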
quantum-mechanics, hilbert-space, wavefunction, schroedinger-equation, fourier-transform Title: Why is a wave packet normalizable? I'm doing some reading in Zettili's quantum mechanics book and came across this passage: Now the thing I'm left dumbfounded on is why it can be considered normalizable? I tried finding some explanation in the book but that didn't yield anything other than some vague verbal argument for it. Now I tried just plugging the wave packet into the normalization condition and got some nasty looking integral expression that I'm not sure what to do with. Maybe this is something trivial but as I'm trying to learn this on my own I'd really like to leave nothing unclear in my understanding so if someone could tell me why defining the wave function like this makes it normalizable that would be appreciated. Actually, you're right questioning this. In general, a "wave packet" with unspecified $\psi(x,0)$ doesn't have to be normalizable. E.g. what if you have $\psi(x,0)=(x^2+1)^{-1/4}$? This function decays at infinity, is somewhat localized near $x=0$, but is not normalizable. But usually, wave packets are supposed to be such that they are indeed normalizable, like e.g. Gaussian wave packet. Normally one doesn't call a non-normalizable function a wave packet. So, take the third point in the book to be part of a definition of a wave packet, a constraint on $\psi$.
{ "domain": "physics.stackexchange", "id": 73440, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, hilbert-space, wavefunction, schroedinger-equation, fourier-transform", "url": null }
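Both claims in the answer, that a Gaussian packet is normalizable while $(x^2+1)^{-1/4}$ is not, can be seen numerically by watching the truncated norm integral as the cutoff grows:

```python
import math

def norm_integral(psi_sq, a, N=50_000):
    """Midpoint approximation of the integral of |psi(x)|^2 over [-a, a]."""
    h = 2 * a / N
    return sum(psi_sq(-a + (i + 0.5) * h) for i in range(N)) * h

gaussian = lambda x: math.exp(-x * x)              # |psi|^2 of a Gaussian packet
slow = lambda x: 1.0 / math.sqrt(x * x + 1.0)      # |psi|^2 of (x^2+1)^(-1/4)

results = {a: (norm_integral(gaussian, a), norm_integral(slow, a))
           for a in (10.0, 100.0, 1000.0)}
for a, (g, s) in results.items():
    print(a, g, s)
# The Gaussian column settles at sqrt(pi); the second column keeps growing
# like 2*ln(2a), so that wave function is not normalizable.
```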
$\because \sum_{n=0}^{k} (ar^n) = \frac{a(1-r^{k+1})}{1-r}$
$\therefore \sum_{n=1}^{20} (2^n) - 20 = 2097150 - 20 = 2097130$
Which is the same answer as yours; however, the answer in the book says $2097170$. They must have added the 20 instead of subtracting it. OK thanks
• Sep 5th 2006, 06:53 PM
ThePerfectHacker
Quote:

Originally Posted by chancey
Right, but that's not the answer in the back of the book... I did it this way:
$\sum_{n=1}^{20} (2^n - 1) = \sum_{n=1}^{20} (2^n) - 20$
$\sum_{n=1}^{20} (2^n) = \frac{1-2^{21}}{1-2} - 1 = 2097150$
$\because \sum_{n=0}^{k} (ar^n) = \frac{a(1-r^{k+1})}{1-r}$
$\therefore \sum_{n=1}^{20} (2^n) - 20 = 2097150 - 20 = 2097130$
Which is the same answer as yours; however, the answer in the book says $2097170$. They must have added the 20 instead of subtracting it. OK thanks

Then the book is incorrect.
- Notice, Laplace transform $$L[1]=\int e^{-st}dt=\frac{1}{s}$$ Now, we have $$\int_{0}^{\infty}\frac{e^{-x}-e^{-xt}}{x}dx$$ $$=\int_{0}^{\infty}\frac{e^{-x}}{x}dx-\int_{0}^{\infty}\frac{e^{-xt}}{x}dx$$ $$=\int_{1}^{\infty}L[1]dx-\int_{t}^{\infty}L[1]dx$$ $$=\int_{1}^{\infty}\frac{1}{x}dx-\int_{t}^{\infty}\frac{1}{x}dx$$ $$=[\ln |x|]_{1}^{\infty}-[\ln |x|]_{t}^{\infty}$$ $$=\lim_{x\to \infty}\ln|x|-\ln 1-(\lim_{x\to \infty}\ln|x|-\ln |t|)$$ $$=\lim_{x\to \infty}\ln|x|-0-\lim_{x\to \infty}\ln|x|+\ln |t|$$ $$=\ln |t|$$$$=\ln(t) \ \ \ \ \forall\ \ \ t>0$$ -
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9711290913825542, "lm_q1q2_score": 0.807147192875574, "lm_q2_score": 0.8311430478583168, "openwebmath_perplexity": 462.7604943654641, "openwebmath_score": 0.9808322787284851, "tags": null, "url": "http://math.stackexchange.com/questions/164400/show-int-0-infty-frace-x-e-xtxdx-lnt-for-t-gt-0" }
quantum-mechanics, epr-experiment Title: Completeness of quantum mechanics and the EPR paradox I am reading through the EPR paper and follow most of it. The authors argue that either QM must be incomplete (let's call this statement A), or incompatible observables can not have simultaneous physical reality (statement B). The authors define what they mean by complete and physically real. They go on to show that rejecting A forces us to also reject B. Since one of these statements must be true, it follows that must accept A, i.e. QM must be incomplete. I am having trouble seeing exactly where the authors invoke the !A to demonstrate !B. They construct a joint quantum state and show that by measuring the position or momentum of one particle the position or momentum (respectively) of the other can be known perfectly. It is not clear to me that the assumed completeness of QM is exploited anywhere in this argument. update: In Arthur Fine's analysis of the EPR argument he writes, Indeed what EPR proceed to do is odd. Instead of assuming completeness and on that basis deriving that incompatible quantities can have real values simultaneously, they simply set out to derive the latter assertion without any completeness assumption at all. This “derivation” turns out to be the heart of the paper and its most controversial part.
{ "domain": "physics.stackexchange", "id": 46591, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, epr-experiment", "url": null }
quantum-mechanics, quantum-field-theory, particle-physics, gauge-theory Just looking at how the $A_{\mu}$ are transforming under the group action (the first term), we recognize the adjoint representation. Of course, on the global stage, the fields $\psi$ can be interpreted as bundle sections and the gauge fields as bundle connections. $A_\mu$'s transformation law will be recognisable as a transformation of connection coefficients under the action of the bundle's structure group. A good reference is Nakahara, or this link.
{ "domain": "physics.stackexchange", "id": 1742, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-field-theory, particle-physics, gauge-theory", "url": null }
c#, serial-port switch (responesCode) { case Constants.HANDSHAKE_AND_WRITE_ACK_SUCCESS: if (_counter == 1) { timer.Stop(); connectionTimer.Start(); this.Dispatcher.Invoke(() => { lblConnectionStatus.Content = "Connected"; lblConnectionStatus.Background = Brushes.Green; }); } else if (_counter == 4) { _noOfPackets = packetList.Count; this.PacketToSendInfoCommand(_noOfPackets); } // For Temp Purpose {Will be refactored as per new protocol} else if (_counter == _noOfPackets + 7) { VerifyCommandWithCheckSum(_totalCheckSum); } else if (_counter != 1 || _counter != 4) { // Check the isFileSelected if the file is selected if (isFileSelected && _packetCounter != packetList.Count) { this.Dispatcher.Invoke(() => { double percentage = Math.Round(((double)_packetCounter / _noOfPackets) * 100); lblStatus.Content = "Writing data... " + _packetCounter + "/" + _noOfPackets + " & Perce: " + percentage + "%"; pbProcess.Value = percentage; }); //Thread.Sleep(10); this.SendPacket(packetList[_packetCounter]); } else { isFileSelected = false; } } break; case Constants.ERASE_SUCCESS: this.Dispatcher.Invoke(() => { pbProcess.Value = 50; lblStatus.Content = "Erase successfull!"; }); this.AllowToWriteCommand(); break;
{ "domain": "codereview.stackexchange", "id": 33935, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, serial-port", "url": null }
python, pygame Title: Running a version of Space Invaders I made a program that runs my version of Space Invaders. I recently finished it. I just want to make it more pythonic, and streamline it so it uses less memory and performs faster. The basic outline of the program is at this website. import pygame, sys, random from pygame.locals import * # set up pygame pygame.init() mainClock = pygame.time.Clock() # set up the window width = 800 height = 700 screen = pygame.display.set_mode((width, height), 0, 32) pygame.display.set_caption('caption') # set up movement variables moveLeft = False moveRight = False moveUp = False moveDown = False # set up direction variables DOWNLEFT = 1 DOWNRIGHT = 3 UPLEFT = 7 UPRIGHT = 9 LEFT = 4 RIGHT = 6 UP = 8 DOWN = 2 # set up the colors BLACK = (0, 0, 0) GREEN = (0, 255, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) BLUE = (135, 206, 250) blue1 = (236, 237, 252) blue2 = (195, 197, 240) blue3 = (111, 115, 196) blue4 = (77, 81, 167) blue5 = (111, 115, 196) bg = (152, 155, 221) paddle = (195, 197, 240) MOVESPEED = 11 MOVE = 1 SHOOT = 15 # set up counting score = 0 # set up font font = pygame.font.SysFont('calibri', 50) def makeplayer(): player = pygame.Rect(370, 635, 60, 25) return player def makeinvaders(invaders): y = 0 for i in invaders: x = 0 for j in range(11): invader = pygame.Rect(75+x, 75+y, 50, 20) i.append(invader) x += 60 y += 45 return invaders
{ "domain": "codereview.stackexchange", "id": 13966, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, pygame", "url": null }
The pieces came from trying something and making different rules for the numbers it didn't work for. After finding a few other solutions similar to Fernando's very smooth function, we wondered if all such f operated on these kinds of tricks, and the answer was an emphatic No. We could map pi^e → 2 → -pi^e → -2 → pi^e if we wanted, just as long as each nonzero pair (x,-x) was itself paired with another nonzero (y,-y), and those pairings of pairs could be whatever we liked. That thought led to the attached document. I hope you guys find it interesting and correct!
{ "domain": "mathhelpboards.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9637799420543365, "lm_q1q2_score": 0.8153023475955906, "lm_q2_score": 0.8459424314825853, "openwebmath_perplexity": 1439.6916593054284, "openwebmath_score": 0.9280920624732971, "tags": null, "url": "https://mathhelpboards.com/threads/iterated-functional-equation.3230/" }
electromagnetism, hamiltonian-formalism For a shorter, easier-to-read account, see also R. C. Stabler, A Possible Modification of Classical Electrodynamics, Physics Letters, 8, 3, (1964), p. 185-187. http://dx.doi.org/10.1016/S0031-9163(64)91989-4 It is true Frenkel too proposes half-retarded half-advanced solutions as particularly interesting since they allow for stable motion of hydrogen atom particles, but his formalism actually does not require them; it allows for any EM field that obeys Maxwell's equations. The core idea is that particles act on other particles but never on themselves. The reason for this assumption for Frenkel was that self-action of a point on itself is contradictory and leads nowhere. A particle acts on other particles via an electromagnetic field of its own, so each field acquires an index that indicates which particle the field 'belongs to'. For example, particle $a$ generates an electric field and its value at point $\mathbf r_b$ is $\mathbf E_a(\mathbf r_b)$. This is introduced so we can keep track of which field acts on which particle. The fields obey the Maxwell equations with the owning particle as source:
$$ \nabla \cdot \mathbf E_a = \rho_a/\epsilon_0 $$
$$ \nabla \cdot \mathbf B_a = 0 $$
$$ \nabla \times \mathbf E_a = - \frac{\partial \mathbf B_a}{\partial t} $$
$$ \nabla \times \mathbf B_a = \mu_0 \mathbf j_a + \mu_0\epsilon_0 \frac{\partial \mathbf E_a}{\partial t} $$
Superposition of the elementary fields of all particles still obeys the Maxwell equations (thanks to their linearity), so this superposition is a good candidate for the macroscopic total EM field. The equation of motion of a charged particle $b$ is
$$ m_b \ddot{\mathbf r}_b = q_b \sum_{a \neq b} \left[ \mathbf E_a(\mathbf r_b) + \dot{\mathbf r}_b \times \mathbf B_a(\mathbf r_b) \right] $$
{ "domain": "physics.stackexchange", "id": 46009, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, hamiltonian-formalism", "url": null }
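The "no self-action" bookkeeping can be sketched in the electrostatic limit: each particle owns a field $\mathbf E_a$, and the force on particle $b$ sums every field except its own. The charges, positions, and the static Coulomb form below are simplifications for illustration; the full theory uses the time-dependent fields described in the text:

```python
K = 8.9875517923e9   # Coulomb constant, N m^2 / C^2

def E_field(q_a, r_a, r):
    """Electrostatic field of particle a evaluated at a point r (2D)."""
    dx, dy = r[0] - r_a[0], r[1] - r_a[1]
    d3 = (dx * dx + dy * dy) ** 1.5
    return K * q_a * dx / d3, K * q_a * dy / d3

charges = [1e-9, -1e-9, 1e-9]                     # C (illustrative)
positions = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # m (illustrative)

def force_on(b):
    """F_b = q_b * sum over a != b of E_a(r_b): no self-action."""
    fx = fy = 0.0
    for a, (q_a, r_a) in enumerate(zip(charges, positions)):
        if a == b:
            continue                 # a particle never feels its own field
        ex, ey = E_field(q_a, r_a, positions[b])
        fx += charges[b] * ex
        fy += charges[b] * ey
    return fx, fy

print([force_on(b) for b in range(3)])
# Internal forces cancel pairwise, so the total over all particles is (0, 0).
```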
# Is $e^x$ the only isomorphism between the groups $(\mathbb{R},+)$ and $(\mathbb{R}_{> 0},*)$? If so, how might I be able to prove it? EDIT: OK, thanks to many answers especially spin's and Micah's explanations. All of the answers were extremely helpful in understanding -- I have accepted Micah's because it seems the most complete, but all answers provide helpful additions/perspectives! I have tried to summarize: $\phi$ is an isomorphism between the groups if and only if $\phi(x) = e^{f(x)}$ where $f$ is an isomorphism from $(\mathbb{R},+)$ back to $(\mathbb{R},+)$. Of course there are lots of such $f$, especially when we take the Axiom of Choice. However, it seems from the answers and Micah's link (Cauchy functional equation) that the only "nice" solutions are $f(x) = cx$ for a constant $c$. It seems that all others must be "highly pathological" (in fact $\{(x,f(x))\}$ must be dense in $\mathbb{R}^2$). A remaining question is, how strong is the statement All such isomorphisms have the form $e^{cx}$ for some $c \in \mathbb{R}$ or its negation? Or what is required for each to hold? One answer seems to be that supposing the reals have the Baire property is sufficient to rule out other solutions (as is assuming every subset of the reals is measurable, assuming the Axiom of Determinacy, and it holds in Solovay's model). For more, see this question, this question, and this mathoverflow question.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9770226280828407, "lm_q1q2_score": 0.8342920856746178, "lm_q2_score": 0.8539127566694178, "openwebmath_perplexity": 358.3871202190427, "openwebmath_score": 0.9697872400283813, "tags": null, "url": "http://math.stackexchange.com/questions/302257/is-ex-the-only-isomorphism-between-the-groups-mathbbr-and-mathbb/302267" }
evolution, population-dynamics Title: How many humans have been in my lineage? Is it almost the same for every human currently living? If I were to count my father, my grandfather, my great-grandfather, and so on up till, say, chimps, or the most common ancestor, or whatever suits the most accurate answer, how many humans would there have been in my direct lineage? And would it be almost the same for every human being currently living? A quick back-of-the-envelope answer to the number of generations that have passed since the estimated human-chimp split would be to divide the time since the split, approximately 7 million years ago (Langergraber et al. 2012), by the human generation time. The human generation time can be tricky to estimate, but 20 years is often used. However, the average number is likely to be higher. Research has shown that the great apes (chimps, gorilla, orangutan) have generation times comparable to humans, in the range of 18-29 years (Langergraber et al. 2012). Using 7 million years and 20 years yields an estimated 350000 ancestral generations for each living human. A more conservative estimate, using an average generation time of 28, would result in 250000 generations. However, some have argued that the human-chimp split is closer to 13 million years old, which would mean that approximately 650000 generations have passed (using a generation time of 20 years). The exact number of ancestral generations for each human will naturally differ a bit, and some populations might have higher or lower numbers on average due to chance events or historical reasons (colonization patterns etc.). However, due to the law of large numbers my guess would be that discrepancies are likely to have averaged out. In any case, the current estimates of the human-chimp split and average historical generation times are so uncertain that they will swamp any other effects when trying to calculate the number of ancestral generations.
However, this is only answering the number of ancestral generations. The number of ancestors in your full pedigree is something completely different. Since every ancestor has 2 parents, the number of ancestors will grow exponentially. Theoretically, the full pedigree of ancestors over $g$ generations can be calculated using: $$N = \sum_{i=1}^{g} 2^i = 2^{g+1} - 2$$
{ "domain": "biology.stackexchange", "id": 9707, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "evolution, population-dynamics", "url": null }
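The arithmetic in the answer, and the exponential pedigree growth it mentions at the end, in a few lines:

```python
# Back-of-the-envelope numbers from the answer.
def generations(split_years, generation_time):
    return split_years / generation_time

print(generations(7e6, 20))   # 350000.0
print(generations(7e6, 28))   # 250000.0
print(generations(13e6, 20))  # 650000.0

# Theoretical pedigree size: 2^i ancestors i generations back, so the
# total over g generations is 2^(g+1) - 2, which quickly dwarfs any real
# population -- the reason pedigrees collapse onto shared ancestors.
def pedigree_total(g):
    return 2 ** (g + 1) - 2

print(pedigree_total(10))     # 2046
```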
electromagnetism, electric-fields, fourier-transform First, assume that you have a harmonic function, say, $E_0(t)=A_0 e^{\mathfrak j \omega_0 t}$ for some complex amplitude $A_0$ at the frequency $\omega_0$. This $E_0(t)$ does not have a Fourier transform in the usual sense, since the integral $\hat E(\omega) = \int_{-\infty}^{\infty}A_0 e^{\mathfrak j \omega_0 t} e^{-\mathfrak j \omega t}dt$ does not exist. But of course, we wish to interpret the complex amplitude $A_0$ as being the Fourier transform of $E_0(t)$ in some sense, and we can do so if instead of the Fourier integral of eq. (1) we redefine it to be a true amplitude density as $$\tilde E(\omega) = \lim_{T\to \infty} \frac{1}{2T}\int_{-T}^{T}E(t)e^{-\mathfrak j \omega t}dt \tag{2}.$$ Now the integral $$\tilde E_0(\omega) = \lim_{T\to \infty} \frac{1}{2T}\int_{-T}^{T}A_0 e^{\mathfrak j \omega_0 t}e^{-\mathfrak j \omega t}dt \tag{3}$$ does exist, and in fact $$ \tilde E_0(\omega)= \begin{cases} A_0 &\text{if } \omega = \omega_0 \\ 0 & \text{if } \omega \ne \omega_0 \end{cases}$$
{ "domain": "physics.stackexchange", "id": 97477, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, electric-fields, fourier-transform", "url": null }
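The limiting average in eq. (2) can be checked numerically. Below is a small sketch (the function name and step count are mine): for $\omega = \omega_0$ the average is exactly $\hat A_0$, while for $\omega \ne \omega_0$ it shrinks like $\sin(\Delta\omega T)/(\Delta\omega T)$ as $T$ grows.

```python
import cmath

def time_avg_transform(A0, w0, w, T, steps=200_000):
    # midpoint-rule approximation of
    # (1/2T) * integral_{-T}^{T} A0 * e^{j w0 t} * e^{-j w t} dt
    dt = 2.0 * T / steps
    total = 0j
    for k in range(steps):
        t = -T + (k + 0.5) * dt
        total += A0 * cmath.exp(1j * (w0 - w) * t) * dt
    return total / (2.0 * T)
```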
algorithms Title: How can you find all unbalanced parens in a string in linear time with constant memory? I was given the following problem during an interview: Given a string which contains some mixture of parens (not brackets or braces, only parens) with other alphanumeric characters, identify all parens that have no matching paren. For example, in the string ")(ab))", indices 0 and 5 contain parens that have no matching paren. I put forward a working O(n) solution using O(n) memory, using a stack and going through the string once, adding parens to the stack and removing them from the stack whenever I encountered a closing paren and the top of the stack contained an opening paren. Afterwards, the interviewer noted that the problem could be solved in linear time with constant memory (as in, no additional memory usage besides what's taken up by the input). I asked how, and she said something about going through the string once from the left identifying all open parens, and then a second time from the right identifying all close parens... or maybe it was the other way around. I didn't really understand and didn't want to ask her to hand-hold me through it. Can anyone clarify the solution she suggested? Since this comes from a programming background and not a theoretical computer science exercise, I assume that it takes $O(1)$ memory to store an index into the string. In theoretical computer science, this would mean using the RAM model; with Turing machines you couldn't do this and you'd need $\Theta(\log(n))$ memory to store an index into a string of length $n$. You can keep the basic principle of the algorithm that you used. You missed an opportunity for a memory optimization: using a stack and going through the string once adding parens to the stack and removing them from the stack whenever I encountered a closing paren and the top of the stack contained an opening paren
{ "domain": "cs.stackexchange", "id": 13050, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms", "url": null }
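Here is my reconstruction of the two-pass idea the interviewer hinted at (not her exact solution): the stack in the OP's algorithm only ever holds opening parens, so a depth counter suffices. A left-to-right pass flags unmatched closing parens, and a mirrored right-to-left pass flags unmatched opening parens, so the extra state is O(1) apart from the output itself.

```python
def unmatched_parens(s):
    # indices of parens with no partner; two passes, constant extra state
    result = []
    depth = 0
    for i, ch in enumerate(s):           # left -> right: unmatched ')'
        if ch == '(':
            depth += 1
        elif ch == ')':
            if depth == 0:
                result.append(i)
            else:
                depth -= 1
    depth = 0
    opens = []
    for i in range(len(s) - 1, -1, -1):  # right -> left: unmatched '('
        ch = s[i]
        if ch == ')':
            depth += 1
        elif ch == '(':
            if depth == 0:
                opens.append(i)
            else:
                depth -= 1
    return sorted(result + opens)
```

If marking in place is allowed (e.g. overwriting the unmatched parens in a mutable buffer), even the output list disappears and the memory use is genuinely constant.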
electromagnetism, electrostatics $$ Q\cong 10^{-12}\:\mathrm{C} \approx 10^7e. $$ This means that $e/Q$ is of the order of $10^{-7}$, so the usual expression $W=Q^2/2C$ is accurate to about seven significant figures. Corrections to this order are normally not something that we fret about, so we can usually just drop the term. If we do want 7+ significant figure accuracy on that energy, then we have a number of other things to worry about: a sufficiently accurate value for the capacitance, for instance, as well as a number of residual capacitances and inductances all over the lab, among many other effects that might contribute in any given situation. And, as you'll have guessed by now, these charge quantization effects do become relevant if your circuit is small enough that you care about single-electron effects. However, if you're in that regime then odds are that you need to be doing things quantum mechanically to begin with, and that's a whole other ball game, with the energy itself being replaced by a more complicated object, just for starters.
{ "domain": "physics.stackexchange", "id": 36031, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, electrostatics", "url": null }
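The order-of-magnitude claim is easy to verify; the specific $Q$ below is just the text's ballpark figure:

```python
e = 1.602176634e-19   # elementary charge, in coulombs (exact SI value)
Q = 1e-12             # the text's ballpark charge on a lab capacitor, in coulombs
ratio = e / Q         # relative size of a single-electron correction, ~1.6e-7
n_electrons = Q / e   # how many electrons make up Q, ~10^7
```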
terminology, state Title: Digital, continuous and the state term If a digital system has two or more modes (states), does a continuous system have just one mode (state)? Your first statement a digital system has two or more modes (states) is a bit confusing or misleading; it misses the point! The point is that a digital system can only be in one of a countable set of possible states (and that these are acquired at a countable set of times). The state space of an analog system can be uncountable. Now the problem with your statement is that two or more doesn't say much about the countability of states of digital systems. Uncountably many states, as an analog system might have, is always more than countably many. does a continuous system have just one mode (state)? Um, no; only a constant system (a very boring system) would have one state. Such a system could be either analog or digital. (So, your definition "two or more" is actually wrong...) In general, as mentioned above, an analog system can take one of an uncountably infinite set of states, whereas a digital system can only take one of a countably infinite or finite number of states. That's the usual definition of the difference between digital and analog, together with the time-discreteness of digital systems.
{ "domain": "dsp.stackexchange", "id": 10606, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "terminology, state", "url": null }
c++, median, constrained-templates Overhead of std::is_sorted() While checking if the input is already sorted might speed up things greatly if the range is indeed sorted, it adds overhead in case it isn't. Also consider that for the in-place strategies, not checking it might be as fast as checking it, depending on the implementation of std::ranges::nth_element(). For the copy and external strategies, perhaps checking for sortedness during the copy is cheap enough to always do.
{ "domain": "codereview.stackexchange", "id": 42930, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, median, constrained-templates", "url": null }
python I'd then try to take advantage of sorting the array using an O(N log N) sort like Mergesort / Quicksort, or, depending on your data, an O(N) sort like Counting Sort. If you know your data is going to be ordered, you can skip this step. With sorted data, we don't have to use the sum_set set; we can instead pick an index in the array & determine whether it is the total of two other elements. We know that any index we suspect to be our sum will have to be made up of elements at lower indexes than it in the list, i.e. [1, 2, 3, 4, 5] -> If we start looking at 3, we know we don't need to consider elements 4 & 5, as they will be larger than 3, so couldn't possibly sum to it. Finally, the halfway point for a number is also relevant, i.e. [1, 3, 5, 7, 9, 11, 99, 117]: if we're looking at 99, we first look to add the next lowest index & the first index; however, since 11 < 99/2 we know we won't be able to find a match that adds to 99; on average this should be another speedup, assuming the data isn't perfectly uniform. Finally, since we aren't pushing results into sum_set & only checking once for each total, this will cause some repetition in our search. However, since we can return immediately upon finding a match, our best/average case just got a lot better.

def func2(l):
    # l = filter_inputs(l)
    # l.sort()
    for index in range(2, len(l)):
        i = 0
        j = index - 1
        half_val = l[index] / 2
        while i < j and l[i] <= half_val and l[j] >= half_val:
            if l[index] > l[i] + l[j]:
                i = i + 1
            elif l[index] < l[i] + l[j]:
                j = j - 1
            else:
                return True  # found l[i] + l[j] == l[index]
    return False
{ "domain": "codereview.stackexchange", "id": 34045, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
where $q=1-p$. It's very simple to explain this formula. Let's assume that we consider as a success getting a 6 when rolling a die. Then the probability of getting a success at the first try is $$P(X=1) = p = pq^0= \frac{1}{6}$$ To get a success at the second try, we have to fail once and then get our 6: $$P(X=2)=qp=pq^1=\frac{1}{6}\frac{5}{6}$$ and so on. The expected value of this distribution answers this question: how many tries, on average, do I have to wait before getting my first success? The expected value for the geometric distribution is: $$E(X)=\displaystyle\sum^\infty_{n=1}npq^{n-1}=\frac{1}{p}$$ or, in our example, $6$. Edit: We are assuming multiple independent tries with the same probability, obviously. • Could you please explain how the final summation value came out to be 1/p? – user1993 Oct 8 '17 at 13:46 The probability of the event happening within the first $n$ rolls might be 1/2 for, say, $n = 10$; but one can also imagine a situation where the probability of the event first happening somewhere between roll 1000 and roll 2000 is 1/2. So 1-10 has $P=1/2$ and 1000-2000 has $P=1/2$. Both statements can make sense at once, but you can see that the average is then never going to be 10. +++++++++++++++++++++++++++++++++++++ After your first roll, you either get a 6 and finish in 1 roll (probability 1/6), or you get a non-six and are back in the same position you were in at the start, with an expectation of a further E rolls needed (plus the one you made), with probability 5/6: $$E = \frac{1}{6} + \frac{5}{6}(E + 1)$$ $$\frac{1}{6}E = 1$$ $$E = 6$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9863631615116321, "lm_q1q2_score": 0.8282826432696436, "lm_q2_score": 0.8397339596505965, "openwebmath_perplexity": 273.33526783532335, "openwebmath_score": 0.7793364524841309, "tags": null, "url": "https://math.stackexchange.com/questions/1119872/on-average-how-many-times-must-i-roll-a-dice-until-i-get-a-6/1952238" }
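Both the series $E(X)=\sum npq^{n-1}=1/p$ and the fixed-point argument can be checked numerically (a quick sketch; the function name is mine):

```python
def expected_geometric(p, terms=5000):
    # partial sum of E[X] = sum_{n>=1} n * p * q**(n-1); the tail is
    # negligible once q**terms is tiny
    q = 1.0 - p
    return sum(n * p * q ** (n - 1) for n in range(1, terms + 1))
```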
equilibrium Title: metallic mercury is shaken with a solution of mercury(II) nitrate Hi there, I am reviewing equilibrium. About this question, I wonder if the chemical equation is wrong, because the description says 'a solution of mercury(I) nitrate is formed'. However, in the equation, the product is $\ce{Hg2^{2+}}$? I thought the product should be $\ce{Hg+}$. Here is my answer, but my answer seems to be wrong. Your expression would be correct if the mercury(I) ions were individual, separate (monatomic) ions, as for most metals. But they are actually paired up, forming $\ce{Hg2^{2+}}$ with a covalent bond between the metal atoms. Thus, properly, $\ce{Hg(l) + Hg^{2+} <=> Hg2^{2+}}$ with $K_c$ then equalling $\ce{[Hg2^{2+}]/[Hg^{2+}]}$ as given in the textbook. This behavior of forming diatomic metal(I) ions is actually known with several elements in Group 2 and Group 12 (or, if you are using an older text, Group 2A and Group 2B), but mercury is the one that most commonly has metal(I) ions and not always metal(II). So your textbook (presumably) identifies specifically mercury as forming $\ce{Hg2^{2+}}$.
{ "domain": "chemistry.stackexchange", "id": 17752, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "equilibrium", "url": null }
matlab, compressive-sensing, optimization, linear-algebra, sparse-model
numRows = size(mA, 1);
numCols = size(mA, 2);

numElmBlock = numCols / numBlocks;
if(round(numElmBlock) ~= numElmBlock)
    error('Number of Blocks Doesn''t Match Size of Arrays');
end

vActiveIdx = false([numCols, 1]);
vR = vB;
vX = zeros([numCols, 1]);
activeBlckIdx = [];

for ii = 1:paramK
    maxCorr = 0;
    for jj = 1:numBlocks
        vBlockIdx = (((jj - 1) * numElmBlock) + 1):(jj * numElmBlock);
        currCorr = norm(mA(:, vBlockIdx).' * vR); % block correlation reduced to a scalar (element-wise abs() would give a vector)
        if(currCorr > maxCorr)
            activeBlckIdx = jj;
            maxCorr = currCorr;
        end
    end
    vBlockIdx = (((activeBlckIdx - 1) * numElmBlock) + 1):(activeBlckIdx * numElmBlock);
    vActiveIdx(vBlockIdx) = true();
    vX(vActiveIdx) = mA(:, vActiveIdx) \ vB;
    vR = vB - (mA(:, vActiveIdx) * vX(vActiveIdx));
    resNorm = norm(vR);
    if(resNorm < tolVal)
        break;
    end
end

end

The MATLAB code is available at my StackExchange Signal Processing Q60197 GitHub Repository (Look at the SignalProcessing\Q60197 folder). In the full code I compare the Block implementation to OMP to verify the implementation.
{ "domain": "dsp.stackexchange", "id": 7796, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "matlab, compressive-sensing, optimization, linear-algebra, sparse-model", "url": null }
turing-machines, automata, pushdown-automata As you can see, we won't necessarily use the two stacks as if they stood for the content of the tape at each side of the head of a Turing machine.
{ "domain": "cs.stackexchange", "id": 4547, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turing-machines, automata, pushdown-automata", "url": null }
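For contrast, the "tape halves on two stacks" construction that the sentence alludes to can be sketched like this (class and method names are my own): the left stack holds everything left of the head with its top cell adjacent to the head, and the right stack's top is the cell under the head.

```python
class TwoStackTape:
    # classic simulation of a Turing-machine tape with two stacks
    def __init__(self, content, blank='_'):
        self.blank = blank
        self.left = []                               # cells left of the head
        self.right = list(reversed(content)) or [blank]  # head cell on top

    def read(self):
        return self.right[-1]

    def write(self, sym):
        self.right[-1] = sym

    def move_right(self):
        self.left.append(self.right.pop())
        if not self.right:
            self.right.append(self.blank)  # extend tape to the right

    def move_left(self):
        if self.left:
            self.right.append(self.left.pop())
        else:
            self.right.append(self.blank)  # extend tape to the left
```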
set: y˙ 1 = y. by Ron Kurtus (revised 21 December 2019) The equations for a simple pendulum show how to find the frequency and period of the motion. p is the number of time samples per sine wave period. y = a sin x − h b + k. Using complex numbers, we can write the harmonic wave equation as: i. A sine wave or sinusoid is a mathematical curve that describes a smooth periodic oscillation. 2 Plane Wave A solution to the three-dimensional wave equation 2E 1 c2 2E t2 0 is E x,y,z;t E oCos k r t where the position vector is r xi yj zk. Otherwise it is turned into the low level (0). One important takeaway from this formula is that the series composition of a square wave only uses the odd harmonics. The point closest to the ground is labeled P. These functions measure the contribution of the particular sine and cosine contributions to f(x). If we’re talking about a pure sine wave, then the wave’s amplitude, A, is the highest y value of the wave. Double angle formulas for sine and cosine. The motion of a vibrational system results in velocity and acceleration that is not constant but is in fact modeled by a sinusoidal wave. The solutions to the wave equation ($$u(x,t)$$) are obtained by appropriate integration techniques. If a sine wave is defined as Vm¬ = 150 sin (220t), then find its RMS velocity and frequency and instantaneous velocity of the waveform after a 5 ms of time. 10) all produce the same sequence values with cosine, and with sine may differ by the numeric sign – A generalization to handle both cosine and sine is to con-. That means $$-1 \leq \sin(t) \leq 1$$ for any real number $$t$$. Step 1: a sin (bx +c) Let b=1, c=0, and vary the values of a. Using the wave number, one can write the equation of a stationary wave in a slightly more simple manner: In order to write the equation of a travelling wave, we simply break the boundary between the functions of time and space, mixing them together like chocolate and peanut butter. You can move
{ "domain": "wattonweb.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429604789206, "lm_q1q2_score": 0.8318515387880796, "lm_q2_score": 0.8459424353665382, "openwebmath_perplexity": 596.3331284839677, "openwebmath_score": 0.8392153978347778, "tags": null, "url": "http://wattonweb.it/mvv/sine-wave-equation.html" }
serial, ros-hydro, pioneer-3dx, p2os-driver, p2os Title: p2os setup failed... for Hydro I've been working through the p2os-vanderbilt setup tutorial and have been having many issues. I am currently using Ubuntu 12.04 with ROS Hydro and a Pioneer 3-DX connected to my SlimPro via serial comm port. I have successfully installed and compiled the p2os and the pr2 controllers packages: sudo apt-get install ros-hydro-p2os-driver ros-hydro-p2os-teleop ros-hydro-p2os-launch ros-hydro-p2os-urdf sudo apt-get install ros-groovy-pr2-controllers ros-groovy-joystick-drivers cd ~/catkin_ws/src && git clone https://github.com/allenh1/p2os.git /*note different from tutorial*/ git clone https://github.com/allenh1/vanderbilt-ros-pkg.git /*note different from tutorial*/ source ../devel/setup.bash cd ~/catkin_ws catkin_make When I try: rosrun p2os_driver p2os_driver so I can then enable the motor, I get an error: [ INFO] [1417478313.806461148]: using serial port: [/dev/ttyS0] [ INFO] [1417478313.835717295]: P2OS connection opening serial port /dev/ttyS0... [ERROR] [1417478313.835785839]: P2OS::Setup():open(): [ERROR] [1417478313.835822762]: p2os setup failed...
{ "domain": "robotics.stackexchange", "id": 20211, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "serial, ros-hydro, pioneer-3dx, p2os-driver, p2os", "url": null }
javascript, parsing
if (!(part in _o)){
    if (val) _o[part] = new object;
    else return null;
}
_o = _o[part];
}
}

Here is what's wrong with your code: Do not use variable names like _o. Get an editor with good auto-completion. typeof _o != 'object' does not do what you think it does: typeof([1,2]) // "object". In general, doing those kinds of checks is a code smell. if (!isNaN(parseInt(loc))) loc = parseInt(loc);. Confusing and not needed. JavaScript: ['a', 'b']["1"] // 'b'. The same goes for the other isNaN check. Do not do that check. null is a value, but what you want to return is the lack of value. It is undefined in JavaScript, and it is what will be returned if there is no value. Consider using split instead of indexOf and substring. It is much faster and makes the code more readable. So, here is a neat version for you:

function chained(obj, chain, value){
    var assigning = (value !== undefined);
    // split chain on array and property accessors
    chain = chain.split(/[.\[\]]+/);
    // remove trailing ']' from split
    if (!chain[chain.length - 1]) chain.pop();
    // traverse 1 level less when assigning
    var n = chain.length - assigning;
    for (var i = 0, data = obj; i < n; i++) {
        data = data[chain[i]];
        // if (data === undefined) return; // uncomment to handle bad chain keys
    }
    if (assigning) {
        data[chain[n]] = value;
        return obj;
    } else {
        return data;
    }
}

Blogged: http://glebm.blogspot.com/2011/01/javascript-chained-nested-assignment.html Please come up with further improvements :)
{ "domain": "codereview.stackexchange", "id": 29, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, parsing", "url": null }
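For what it's worth, the same chained get/set idea ports easily to other languages; here is a rough Python analogue (a sketch of mine, not part of the original answer):

```python
import re

def chained(obj, chain, value=None, assigning=False):
    # split "a.b[1]" on '.' / '[' / ']' into path segments, dropping empties
    keys = [k for k in re.split(r"[.\[\]]+", chain) if k]
    node = obj
    # traverse one level less when assigning, so node is the parent container
    for k in (keys[:-1] if assigning else keys):
        node = node[int(k) if k.isdigit() else k]
    if assigning:
        last = keys[-1]
        node[int(last) if last.isdigit() else last] = value
        return obj
    return node
```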
organic-chemistry, biochemistry, aromatic-compounds, toxicity These undergo further reactions, catalysed by various other enzymes, to give a large range of metabolites: In particular, The benzoquinones have been shown to inhibit DNA topoisomerase II, an enzyme that makes temporary cuts in double-stranded DNA in order to "unwind" DNA that has been entangled. It plays a key role in many cellular processes such as DNA replication, DNA repair, and chromosome segregation (during cell division); therefore, inhibition may lead to chromosome breakage or failure to segregate. The same quinones can undergo a process called redox cycling, where they undergo an enzymatic reaction in which a single electron is added to them to form a radical anion. These species are then released, and react with molecular oxygen $\ce{O2}$ to give the superoxide anion, $\ce{O2^-}$... and then the process repeats itself. The buildup of $\ce{O2^-}$ (and other reactive oxygen species) leads to oxidative stress and DNA damage. (E,E)-muconaldehyde has been recently shown to form an adduct with two molecules of deoxyguanosine, i.e. the guanine bases in DNA. Here, R represents the rest of the deoxyribose sugar. Intra- or inter-chain links can be formed via this method, which then lead to inaccurate replication or chromosomal aberrations. A mechanism was proposed in reference 3. It is not reproduced here but it is not difficult to imagine how such a reaction might happen: nitrogen atoms in guanine are nucleophilic, and literally every carbon in muconaldehyde is electrophilic. The literature on the topic contains much more information than I can write in here. Reference 4 is a relatively recent review on the topic, which would be a decent starting point to find further information. Regardless of what mechanism it is, one thing is certain: benzene itself and its molecular properties are not likely to be the cause of its carcinogenicity. 
It is almost certain that multiple metabolites of benzene are the culprits. References
{ "domain": "chemistry.stackexchange", "id": 4325, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, biochemistry, aromatic-compounds, toxicity", "url": null }
with(numtheory);
with(group):
with(combinat):

pet_cycleind_symm :=
proc(n)
option remember;

    if n=0 then return 1; fi;

    expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
end;

pet_varinto_cind :=
proc(poly, ind)
local subs1, subsl, polyvars, indvars, v, pot;

    polyvars := indets(poly);
    indvars := indets(ind);

    subsl := [];

    for v in indvars do
        pot := op(1, v);

        subs1 :=
        [seq(polyvars[k]=polyvars[k]^pot,
             k=1..nops(polyvars))];

        subsl := [op(subsl), v=subs(subs1, poly)];
    od;

    subs(subsl, ind);
end;

pet_flatten_term :=
proc(varp)
local terml, d, cf, v;

    terml := [];

    cf := varp;
    for v in indets(varp) do
        d := degree(varp, v);
        terml := [op(terml), seq(v, k=1..d)];
        cf := cf/v^d;
    od;

    [cf, terml];
end;

pet_cycleind_rel :=
proc(n)
option remember;
local dsjc, flat, p, cyc1, cyc2, l1, l2, res;

    if n=0 then return 1; fi;
    if n=1 then return a[1] fi;

    res := 0;

    for dsjc in pet_cycleind_symm(n) do
        flat := pet_flatten_term(dsjc);
        p := 1;

        for cyc1 in flat[2] do
            l1 := op(1, cyc1);

            for cyc2 in flat[2] do
                l2 := op(1, cyc2);
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9854964186047326, "lm_q1q2_score": 0.8190884949516316, "lm_q2_score": 0.8311430457670241, "openwebmath_perplexity": 505.2541397078272, "openwebmath_score": 0.6827992796897888, "tags": null, "url": "https://math.stackexchange.com/questions/356995/counting-non-isomorphic-relations" }
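The recurrence behind pet_cycleind_symm, namely $Z(S_0)=1$ and $Z(S_n)=\frac{1}{n}\sum_{l=1}^{n} a_l\,Z(S_{n-l})$, can be cross-checked in Python, representing each monomial by its sorted tuple of cycle lengths (a hypothetical port; names are mine):

```python
from fractions import Fraction

def cycle_index_symmetric(n, _memo={}):
    # Z(S_n) as {cycle-type tuple: coefficient}; e.g. (1, 2) stands for a[1]*a[2]
    if n == 0:
        return {(): Fraction(1)}
    if n in _memo:
        return _memo[n]
    total = {}
    for l in range(1, n + 1):                      # multiply by a[l] ...
        for term, coef in cycle_index_symmetric(n - l).items():
            key = tuple(sorted(term + (l,)))
            total[key] = total.get(key, Fraction(0)) + coef
    result = {k: v / n for k, v in total.items()}  # ... and divide by n
    _memo[n] = result
    return result
```

For instance, $Z(S_3) = \frac{1}{6}a_1^3 + \frac{1}{2}a_1 a_2 + \frac{1}{3}a_3$, and the coefficients of any $Z(S_n)$ sum to 1.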
quantum-mechanics, operators, phase-space, wigner-transform, deformation-quantization Edit in response to comment on entangled states. One of the co-inventors of the industry, Groenewold, in his monumental 1946 paper, Section 5.06 on p 459, details exactly how to handle entangled states--in his case for the EPR state. The entanglement and symmetrization is transparent at the level of phase-space parameters (Weyl symbols): the quantum operators in the Wigner map are still oblivious of different modes. What connects/entangles them, indirectly, are the symmetrized δ-function kernels involved, even though this is a can of worms that even stressed Bell's thinking. The clearest "modern" paper on the subject is Johansen 1997, which, through its factorized Wigner function and changed +/- coordinates, reassures you you never have to bother with the quantum operators: the entangling is all in the Wigner function and phase-space, instead! (Illustration: 351884/66086.)
{ "domain": "physics.stackexchange", "id": 57056, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, operators, phase-space, wigner-transform, deformation-quantization", "url": null }
physical-chemistry, computational-chemistry, spectroscopy Any help would be greatly appreciated Link to the data file used to generate this spectrum: https://github.com/tkh97/HCN-2nu2.git Update: In the data given by the OP, the x scale is not evenly spaced, so the discrete convolution approach will not work to generate a spectrum unless your data is evenly spaced! The trick mentioned by EdV is a very nice shortcut. Here, if you really want to play around with convolution via MATLAB, then read below. Solution: You can satisfy yourself as follows: Make sure you have an even spacing of the x-axis. Your uneven axis threw me off, and it took a while to discover why it was not working. I wish you had mentioned it earlier. Your sticks are basically scaled delta functions, as Prof. Ed said. Generate a vector for a single Lorentzian peak whose center is zero, which is also evenly spaced. The x-axis of the Lorentzian would be symmetric: [-length(wavenumber/2) : sampling rate : length(wavenumber/2) - sampling rate]. Perform convolution of your spectrum and this Lorentzian. Let us start with a Dirac delta centered at point $x.$ I don't know how to draw a displaced Dirac delta in MATLAB; I manually made it in an Excel file. If you convolve it with a zero-centered Lorentzian, then you will get a Lorentzian centered at $x.$ This is called the sifting property. Basically, all you have to ensure is that your Lorentzian is centered at zero. First, play with convolution using a single stick. Once you correct the code, so that the line position does not change, apply it to the entire spectrum. A great book is Bracewell's Fourier Transform and Its Applications. It is all there. From MIT 2.14 / 2.140 Analysis and Design of Feedback Control Systems, Spring 2007 handout:
{ "domain": "chemistry.stackexchange", "id": 14224, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, computational-chemistry, spectroscopy", "url": null }
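The stick-to-broadened-spectrum recipe above (even grid, zero-centred Lorentzian, sifting property) can also be sketched outside MATLAB; the function names and line width below are mine:

```python
import math

def lorentzian(x, gamma):
    # unit-area Lorentzian centred at zero with half-width gamma
    return gamma / (math.pi * (x * x + gamma * gamma))

def broaden(positions, intensities, x0, dx, n, gamma):
    # evaluate the stick spectrum convolved with the Lorentzian on an
    # evenly spaced grid x0, x0+dx, ..., x0+(n-1)*dx
    grid = [x0 + i * dx for i in range(n)]
    spec = [
        sum(h * lorentzian(x - p, gamma)
            for p, h in zip(positions, intensities))
        for x in grid
    ]
    return grid, spec
```

By the sifting property, a single stick broadened this way peaks at the stick's position, which is exactly the sanity check suggested in the text.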
pharmacology, receptor Berg, K. A., Maayani, S., Goldfarb, J., Scaramellini, C., Leff, P., & Clarke, W. P. (1998). Effector pathway-dependent relative efficacy at serotonin type 2A and 2C receptors: evidence for agonist-directed trafficking of receptor stimulus. Molecular pharmacology, 54(1), 94-104. González-Maeso, J., Weisstaub, N. V., Zhou, M., Chan, P., Ivic, L., Ang, R., ... & Sealfon, S. C. (2007). Hallucinogens recruit specific cortical 5-HT2A receptor-mediated signaling pathways to affect behavior. Neuron, 53(3), 439-452. Jarpe, M. B., Knall, C., Mitchell, F. M., Buhl, A. M., Duzic, E., & Johnson, G. L. (1998). [D-Arg1, D-Phe5, D-Trp7, 9, Leu11] Substance P acts as a biased agonist toward neuropeptide and chemokine receptors. Journal of Biological Chemistry, 273(5), 3097-3104.
{ "domain": "biology.stackexchange", "id": 10570, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "pharmacology, receptor", "url": null }