c++, linked-list, unit-testing, reinventing-the-wheel

/**
 * Tests the push_front method of the single_l list implementation.
 * @param None
 * @return None
 * @throws None
 */
void test_push_front() {
    single::single_l<int> list1;
    int values[] = {10, 20, 30, 40, 50};
    for (int i = 0; i < sizeof(values) / sizeof(values[0]); i++) {
        list1.push_front(values[i]);
    }
    TEST<bool>::ASSERT_FALSE(list1.empty(), "[list should not be empty]", "PUSH_FRONT {1}");
    TEST<bool>::ASSERT_TRUE(list1[0] == 50, "[first element should be 50]", "PUSH_FRONT {2}");
    TEST<bool>::ASSERT_TRUE(list1[1] == 40, "[second element should be 40]", "PUSH_FRONT {3}");
    TEST<bool>::ASSERT_TRUE(list1[2] == 30, "[third element should be 30]", "PUSH_FRONT {4}");
    TEST<bool>::ASSERT_TRUE(list1[3] == 20, "[fourth element should be 20]", "PUSH_FRONT {5}");
    TEST<bool>::ASSERT_TRUE(list1[4] == 10, "[fifth element should be 10]", "PUSH_FRONT {6}");
    TEST<bool>::ASSERT_TRUE(list1.size() == 5, "[size should be 5]", "PUSH_FRONT {7}");
}
{ "domain": "codereview.stackexchange", "id": 44760, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, linked-list, unit-testing, reinventing-the-wheel", "url": null }
general-relativity, forces, black-holes, tidal-effect, neutron-stars

Title: How does spaghettification work in general relativity?

Imagine a neutron star is being consumed by a black hole, so I think matter from the star is being stretched as it follows the curvature of the spacetime around the black hole. If that is the case, what force is stretching the degenerate matter?

Spaghettification is not just a General Relativistic effect and not just confined to strong-gravity regimes. It is a consequence of different parts of an extended body feeling different gravitational forces, and this produces a differential force acting across that body - a.k.a. a tidal force.

In the specific case of an object of finite size falling towards a source of gravity, there are two such differential gravitational forces to consider, which are of roughly similar magnitude. The first is that gravity is stronger the closer you are to the source of gravity. This produces a differential force characterised by the gradient of the gravitational field multiplied by the radial extent of the falling body, and acts as a stretching force in the radial direction.

The second, less recognised, effect is that the gravitational field points radially inward. For an object of finite width, this means there is a component of the gravitational field that is resolved towards the centre of the body, and that component gets larger as you move towards the left and right hand edges of the body. This creates a compressive force at right angles to the stretching force above, and also has a size roughly equal to the gradient of the gravitational field multiplied by the tangential extent of the body.

Taken together, the radial stretching force and the tangential compressive force have become popularly known as spaghettification.
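To put rough numbers on the two tidal components described above, here is a small Newtonian order-of-magnitude sketch. The mass and distance are illustrative assumptions, not values from the question; the point is only that the radial stretch (gradient of GM/r², i.e. 2GM/r³ per metre) is twice the tangential squeeze (GM/r³ per metre).

```python
# Newtonian order-of-magnitude sketch of the two tidal components:
# radial stretch ~ 2GM/r^3 * dr, tangential squeeze ~ GM/r^3 * dt.
G = 6.674e-11   # m^3 kg^-1 s^-2
M = 2.0e31      # ~10 solar masses (assumed black hole mass)
r = 1.0e5       # 100 km from the centre (assumed distance)
dr = 1.0        # radial extent of the falling body, 1 m
dt = 1.0        # tangential extent, 1 m

radial_stretch = 2 * G * M / r**3 * dr       # m/s^2 of differential pull
tangential_squeeze = G * M / r**3 * dt       # m/s^2, compressive

print(radial_stretch, tangential_squeeze)
```

Across a one-metre body this already amounts to millions of g's of differential acceleration, which is why degenerate matter is no protection.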
{ "domain": "physics.stackexchange", "id": 89228, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, forces, black-holes, tidal-effect, neutron-stars", "url": null }
c++, multithreading, thread-safety

// each thread executes this loop
void ThreadPool::workLoop(std::atomic_bool& result) {
    while (!m_workComplete) {
        std::function<bool()> currentJob;
        bool popped;
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_jobWaitCondition.wait(lock, [this]() {
                return !m_JobQueue.empty() || m_workComplete.load();
            });
            popped = m_JobQueue.pop(currentJob);
        }
        if (popped) {
            result = currentJob() && result;
        }
    }
}

void ThreadPool::addJob(std::function<bool()> newJob) {
    m_JobQueue.push(newJob);
    m_jobWaitCondition.notify_one();
}

void ThreadPool::setComplete() {
    m_workComplete = true;
}

void ThreadPool::waitForCompletion() {
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_workComplete.store(true);
    }
    m_jobWaitCondition.notify_all();
    for (auto& thread : m_threads) {
        thread.join();
    }
    m_threads.clear();
}

ThreadSafeQueue.hpp:

#pragma once
#include <mutex>
#include <queue>
{ "domain": "codereview.stackexchange", "id": 43071, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, multithreading, thread-safety", "url": null }
data-structures, binary-trees

Title: What is the depth of a complete binary tree with $N$ nodes?

This question uses the following definition of a complete binary tree†: A binary tree $T$ with $N$ levels is complete if all levels except possibly the last are completely full, and the last level has all its nodes to the left side.

The following is an excerpt from Algorithms: It ($\log N$) is also the depth of a complete binary tree with $N$ nodes. (More precisely: $⌊\log N⌋$.)

Why is the above excerpt true?

† Originally defined here

Consider how a complete binary tree of height $h$ is constructed: one vertex at the root level, two at the first level below the root, four at the second level below, and so on, until the $h^{th}$ level, which has at least one vertex, but at most twice as many as the previous level. Note that the number of vertices at each level is a power of two (excluding the last, which is a special case). Then we have: $$ 1+\sum_{i=0}^{h-1}2^{i} \leq n \leq \sum_{i=0}^{h}2^{i} $$ Using the identity that the sum of the first $k$ powers of two is $2^{k+1}-1$ we get: $$ 1+2^{h}-1 \leq n \leq 2^{h+1}-1\\ 2^{h} \leq n \leq 2^{h+1}-1 $$ and hence $$ 2^{h} \leq n < 2^{h+1} $$ Taking the base 2 logarithm: $$ h \leq \log n < h+1 $$ So we can conclude $$h = \lfloor\log n\rfloor$$ as $\log n$ is at least $h$ but less than the next integer $h+1$.
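The $\lfloor\log n\rfloor$ claim is easy to check by brute force. A minimal sketch (height counted in edges, root at level 0; the helper name is ours):

```python
import math

def complete_tree_height(n):
    """Height of a complete binary tree with n nodes:
    the largest h with 2**h <= n, i.e. floor(log2 n),
    computed exactly on integers via bit_length."""
    return n.bit_length() - 1

# Matches floor(log2 n) for every node count up to 2^12.
for n in range(1, 2**12):
    assert complete_tree_height(n) == math.floor(math.log2(n))

print(complete_tree_height(5))  # → 2
```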
{ "domain": "cs.stackexchange", "id": 564, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "data-structures, binary-trees", "url": null }
newtonian-mechanics, collision

Note: For point particles, a realistic collision can be simulated without considering the spherical geometry of colliding billiard balls, and without avoiding overlap.
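For completeness, the standard 1D elastic update such a point-particle simulation would use (the closed form that follows from conserving momentum and kinetic energy; the masses and velocities below are made up):

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities of two point particles in a 1D elastic
    collision (standard result of momentum + kinetic-energy conservation)."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# Equal masses simply exchange velocities:
print(elastic_1d(1.0, 3.0, 1.0, -1.0))  # → (-1.0, 3.0)
```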
{ "domain": "physics.stackexchange", "id": 71443, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, collision", "url": null }
quantum-mechanics, quantum-field-theory, condensed-matter, second-quantization, phonons

We can invert the equations above by setting $t = 0$ and looking at \begin{equation} \sum_{n=1}^{N} u_{n} e^{-i k_{l} n a}=\sum_{n} \sum_{l^{\prime}}\left[\alpha_{l^{\prime}} e^{-i\left(k_{l}-k_{l^{\prime}}\right) n a}+\alpha_{l^{\prime}}^{\dagger} e^{-i\left(k_{l}+k_{l^{\prime}}\right) n a}\right]=N\left(\alpha_{l}+\alpha_{-l}^{\dagger}\right) \end{equation} How did the last equality come about and where did $\alpha_{-l}^{\dagger}$ come from? Is it mathematically wrong to just add $u_{n}(t = 0)$ and $p_n(t = 0)$ and rearrange to find $\alpha_{l}$, rather than looking at $\sum_{n=1}^{N} u_{n} e^{-i k_{l} n a}$? If so, why?
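The last equality is the lattice orthogonality relation. Assuming the usual periodic boundary conditions, $k_l = 2\pi l/(Na)$, the standard identities are:

```latex
\sum_{n=1}^{N} e^{-i(k_{l}-k_{l'})na} = N\,\delta_{l,\,l'},
\qquad
\sum_{n=1}^{N} e^{-i(k_{l}+k_{l'})na} = N\,\delta_{l',\,-l},
```

so in the double sum only the $l' = l$ term survives in the first piece and only $l' = -l$ in the second, leaving $N\left(\alpha_{l}+\alpha_{-l}^{\dagger}\right)$. That is where $\alpha_{-l}^{\dagger}$ comes from.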
{ "domain": "physics.stackexchange", "id": 68934, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-field-theory, condensed-matter, second-quantization, phonons", "url": null }
c++, beginner

        std::cout << candles_to_check << " CANDLES SEQUENCE - DONE!" << '\n';
    }
    std::cout << "CLOSING " << symbol[0].filename().string().substr(7, 6) << ".csv" << '\n';
    std::cout << "############################################" << '\n';
    output.close();
    } // #END#
    return 0;
} catch (std::string const reason) {
    std::cout << reason << '\n';
}

// FOR MORE INFO ABOUT THIS FUNCTION SEARCH mktime.cpp IN MSVC
std::time_t mkgmtime(std::tm *const tm) {
    int years{tm->tm_year};
    // CHECK TO PREVENT OVERFLOW
    if (years < 70 || years > 155) {
        std::string error{
            "INVALID DATE : " + std::to_string(years) + "." +
            std::to_string(tm->tm_mon) + "." + std::to_string(tm->tm_mday) +
            " " + std::to_string(tm->tm_hour) + ":" + std::to_string(tm->tm_min)};
        throw error;
    }
    // DAYS OF THE MONTHS BEFORE
    int days_per_months[12] = {-1, 30, 58, 89, 119, 150,
                               180, 211, 242, 272, 303, 333};
    int total_days{(years - 70) * 365};
    int month{tm->tm_mon};
    total_days += days_per_months[month]; // SAME AS tm_yday
    total_days += tm->tm_mday;
{ "domain": "codereview.stackexchange", "id": 28412, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, beginner", "url": null }
# What is the distribution of real numbers with biased digits?

Suppose I have an infinite sequence of biased bits where the probability of $1$ is $2/3$ and the probability of $0$ is $1/3.$ If I view these as the digits in the binary expansion of a real number, then this sequence defines a real number in the interval $[0,1]$. So what kind of distribution does this real number have? Some considerations I have made so far are that the probability between $0.5$ and $1$ should be twice the probability between $0$ and $0.5.$ Similarly the probability between $0.25$ and $0.5$ should be twice the probability between $0$ and $0.25.$ A general way of writing this recursive relationship is $$F(2x) - F(x) = 2F(x).$$ Adding boundary conditions I get the three equations $$F(0)=0\\ F(1)=1\\ F(2x)=3F(x)$$ which, if viewed as a recurrence relation, has the solution $F(x) = x^{\log_2(3)}$. My question is: Is this really airtight? Setting up these equations and using the solution from a recurrence relation felt a little hand wavy. I can easily verify that $x^{\log_2(3)}$ satisfies the above conditions for real numbers in the interval $[0,1]$, but is this solution unique?
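One sanity check that does not presuppose uniqueness: $F(1/2)$ must equal $1/3$ exactly, because $X \le 1/2$ (up to a measure-zero event) precisely when the first digit is $0$. A quick Monte Carlo sketch (sample size and seed are arbitrary choices):

```python
import random

random.seed(0)

def biased_number(bits=40):
    """One draw of X: a 40-bit truncation of the binary expansion
    with P(digit = 1) = 2/3."""
    return sum(2.0 ** -k for k in range(1, bits + 1) if random.random() < 2 / 3)

samples = [biased_number() for _ in range(100_000)]
F_half = sum(x <= 0.5 for x in samples) / len(samples)
print(F_half)  # should sit near 1/3 = P(first digit is 0)
```

Comparing the empirical CDF against $x^{\log_2 3}$ at other points is a good way to probe the conjecture itself, since whether that closed form is the unique solution is exactly what the question asks.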
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9787126525529014, "lm_q1q2_score": 0.8176983027490563, "lm_q2_score": 0.8354835309589074, "openwebmath_perplexity": 123.91246399463388, "openwebmath_score": 0.9581577181816101, "tags": null, "url": "https://math.stackexchange.com/questions/2522278/what-is-the-distribution-of-real-numbers-with-biased-digits" }
power-spectral-density, spatial I hope to execute this analysis for each grid point, and write results to file. In later questions I'll ask what metrics I can pull from FFT analysis; how exactly to tune the input data for FFT (windowing/ zero-padding/ base-2 # of samples); and whether there are best practices for determining how many spectral peaks to record. Please see this response. Radiant Flux should take care of the physical units. The PSD in the spatial domain would represent a "dominant" blob, whose shape is determined by the spectral content of the PSD, underlying the image. (Please note that the PSD is defined over the Fourier Transform of the autocorrelation of a signal). EDIT: So, your pixel size is 5mx5m and the actual length of your red line is (let's say) about 500 meters across and contains 164 samples. Let's assume a square pixel size and call it $P_s$, call the length of the line $L$ and the number of samples across the line $L_s$. You perform an FFT on the profile line ($x_n$) and calculate the amplitude spectrum as the absolute value of the complex spectrum of $x$ (i.e numpy.abs(numpy.fft.fft(myData))). This gives you $|X_k|$. From that, you can see one sinusoid clearly sticking out of the rest at a frequency bin of 12. What does this mean for the ridge? The formula that links the bin to the actual spatial frequency is $f = \frac{k}{L_s} \times \frac{1}{P_s}$ where $k$ is the frequency bin. But, what is $\frac{1}{P_s}$?
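Plugging the numbers from this example into $f = \frac{k}{L_s}\times\frac{1}{P_s}$ makes the roles concrete: $1/P_s$ is the spatial sampling rate (samples per metre), and bin $k$ corresponds to $k$ cycles per record length. A short numeric sketch:

```python
import numpy as np

P_s = 5.0    # pixel size in metres (the sampling interval)
L_s = 164    # samples across the profile line
k = 12       # frequency bin with the dominant peak

f = k / (L_s * P_s)       # spatial frequency in cycles per metre
wavelength = 1.0 / f      # metres per cycle of the dominant ridge pattern
print(f, wavelength)

# np.fft.fftfreq encodes the same bin-to-frequency mapping:
assert np.isclose(np.fft.fftfreq(L_s, d=P_s)[k], f)
```

So a peak at bin 12 over an 820 m record corresponds to a spatial wavelength of roughly 68 m.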
{ "domain": "dsp.stackexchange", "id": 8123, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "power-spectral-density, spatial", "url": null }
ozone

Title: Ozonated oil mixed with water

I wish to emulsify ozonated oil with water and other ingredients to make a cream. Will mixing the ozone oil and water together with an emulsifier lessen the potency of the ozone oil as an oxidant? What if I were to ozonate the water and oil together and add an emulsifier at the end and blend them, with emulsifying wax for example? Thank you!

I've tried this with a sota ozonator and it doesn't work out that well using water. First, when you ozonate oil by itself it gets really thick, which clogs up your aerator for use with anything else, so if you do this you have to use a separate aerator for oil and water. I'm looking at doing something similar with beeswax and colloidal silver and placing the contents into a chapstick-like container; when I figure something out I'll let you know.

Water can be mixed with oil without using an emulsifier. Dr Gerald Pollack from the University of Washington goes into what he calls the Fourth Phase of Water, "EZ water", and this type of water is known to mix with oil-based substances. https://youtu.be/p9UC0chfXcg His work can be found here: http://faculty.washington.edu/ghp/ Oil and water aren't supposed to mix, but they can; see the interview below: https://www.youtube.com/watch?v=4KQwh6j-uOg&feature=youtu.be&t=28m17s
{ "domain": "chemistry.stackexchange", "id": 7611, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ozone", "url": null }
Programming Exercise (HP-15C, 15C LE - and others)

04-15-2014, 04:41 PM (This post was last modified: 04-15-2014 04:49 PM by Gerson W. Barbosa.) Post: #101
Gerson W. Barbosa, Senior Member. Posts: 1,199. Joined: Dec 2013

RE: Programming Exercise (HP-15C, 15C LE - and others)

(04-06-2014 06:24 AM) Thomas Klemm Wrote: Now we have 3 different ways to calculate the same thing: 1. Borwein's formula using tangent numbers 2. Convergence acceleration using Euler's transformation 3. Gerson's method using continued fractions Do you have a proof of your formula or is it still a conjecture?

Congratulations to your discovery! Your method appears to be more efficient than the other two. Isn't this the same method used by Valentin in his original problem, only in continued fraction format? Anyway, the formula appears to hold, even when we have only two terms of the series, albeit converging very slowly in this case:

$\ln (2)= 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots +\frac{1}{n-1}-\frac{1}{n}+\cfrac{1}{2n+1+\cfrac{1^{2}}{2n+1+\cfrac{2^{2}}{2n+1+\cfrac{3^{2}}{2n+1+\cfrac{4^{2}}{2n+1+\cdots}}}}}$
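The formula is easy to check numerically. A quick sketch: sum the alternating harmonic series to the $-1/n$ term ($n$ even), then evaluate the continued-fraction tail bottom-up from an assumed truncation depth:

```python
import math

def ln2_accelerated(n, depth=12):
    """Alternating harmonic partial sum through -1/n (n even), plus the
    continued-fraction tail 1/(2n+1 + 1^2/(2n+1 + 2^2/(2n+1 + ...))),
    truncated at `depth` levels and evaluated bottom-up."""
    s = sum((-1) ** (k + 1) / k for k in range(1, n + 1))
    cf = 0.0
    for j in range(depth, 0, -1):
        cf = j * j / (2 * n + 1 + cf)
    return s + 1 / (2 * n + 1 + cf)

# Even with only two series terms the correction recovers ln 2 well:
print(ln2_accelerated(2), math.log(2))
```

With $n = 2$ and a handful of continued-fraction levels this already agrees with $\ln 2 \approx 0.693147$ to several digits, matching the observation that the correction works even for very short partial sums.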
{ "domain": "hpmuseum.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9706877692486436, "lm_q1q2_score": 0.8250566281136977, "lm_q2_score": 0.8499711794579723, "openwebmath_perplexity": 739.0489867315196, "openwebmath_score": 0.7081336379051208, "tags": null, "url": "https://www.hpmuseum.org/forum/thread-916-post-9447.html" }
python, beginner, python-3.x, object-oriented, tic-tac-toe

    Optional plotz says to frobnicate the bizbaz first.
    """

For one liner docstrings, please keep the closing """ on the same line.

So, for example, this -

    """Receives position and player type ('X' or 'O').
    Returns modified board if position was valid.
    Asks to player try a different position otherwise."""

Should actually be written as -

    """Receives position and player type ('X' or 'O').

    Returns modified board if position was valid.
    Asks to player try a different position otherwise.
    """

Also, since you have descriptively named functions, you don't need those unnecessary comments explaining what your function does. For example, this does not need a comment -

    def printing_board(self):
        """Prints the board."""
        self.board.print_board()

We know you're printing the board; it says in the function itself - def printing_board(self).

Also, good use of the if __name__ == '__main__': guard. Most people don't even attempt to use it.

Note that the trailing \ solutions are not recommended by PEP 8. One reason is that if space is added by mistake after a \ it might not show in your editor, and the code becomes syntactically incorrect. The PEP changed at https://hg.python.org/peps/rev/7a48207aaab6 to explicitly discourage backslashes. The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets, and braces. Long lines can be broken over multiple lines by wrapping expressions in parentheses. These should be used in preference to using a backslash for line continuation.

Another thing is that, here (for example) -
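A minimal illustration of the preferred wrapping style (the variable names here are invented for the example):

```python
first_player_score = 10
second_player_score = 7
bonus_points = 3

# Preferred (PEP 8): implied line continuation inside parentheses.
total_score = (first_player_score
               + second_player_score
               + bonus_points)

# Discouraged: a trailing backslash, which silently breaks if a space
# sneaks in after the '\'.
total_score_backslash = first_player_score + \
    second_player_score + bonus_points

print(total_score)  # → 20
```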
{ "domain": "codereview.stackexchange", "id": 37954, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, beginner, python-3.x, object-oriented, tic-tac-toe", "url": null }
PeroK: Actually, ##\frac{9}{\epsilon}## is probably not an integer since ##\epsilon## is an arbitrary positive real, and adding 1 does not make it an integer. But that is ok. The solution above uses ##[\frac{9}{\epsilon}]##, which I assume is the integer part of ##\frac{9}{\epsilon}##, and then adds ##1## to it. @marksyncm adding ##1## is to ensure that ##n_0 > \frac{9}{\epsilon}##.

marksyncm: Isn't the new term completely different from the original sequence, though? We say that ##\frac{4n+5}{n^2+n+2} < \frac{4n+5n}{n^2} < \epsilon##

PeroK: This is such an important and powerful idea (and actually quite simple) that you really want to make sure you've digested it. For example, suppose you want to prove that the sequence ##2n^3 + 6n^2 + 3n -1## diverges. Your first strategy might be to try to find the point ##n_0## at which this sequence exceeds a given number. So, you would look for a solution to ##2n^3 + 6n^2 + 3n -1 = M##, where ##M## is some (large) number. But it is not easy to solve a cubic equation, so that looks a hard way to do it. Alternatively, you could look for another much simpler sequence that is less than the one you have. There are then lots of ways of doing this. E.g. you could note that ##\forall n: \ 3n - 1 > 0## and ##6n^2 > 0##. That means that: ##\forall n: 2n^3 + 6n^2 + 3n -1 > 2n^3##
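A quick numeric check of the comparison-sequence idea for the original limit: the bound $\frac{4n+5}{n^2+n+2} < \frac{9n}{n^2} = \frac{9}{n}$ holds for every $n \ge 1$, so $n_0 = \left[\frac{9}{\epsilon}\right] + 1$ works:

```python
# Spot-check that (4n+5)/(n^2+n+2) < 9/n for all n >= 1,
# and that n0 = int(9/eps) + 1 pushes the sequence below eps.
for n in range(1, 10_000):
    assert (4 * n + 5) / (n**2 + n + 2) < 9 / n

eps = 0.01
n0 = int(9 / eps) + 1
assert (4 * n0 + 5) / (n0**2 + n0 + 2) < eps
print(n0)  # → 901
```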
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9504109756113863, "lm_q1q2_score": 0.8152322266954338, "lm_q2_score": 0.8577681104440172, "openwebmath_perplexity": 982.2572372567798, "openwebmath_score": 0.9193199872970581, "tags": null, "url": "https://www.physicsforums.com/threads/proving-the-limit-of-a-sequence-from-the-definition-of-limit.959302/" }
evolution, mathematical-models, population-biology, population-dynamics, population-genetics Title: Heterozygosity and overdominance Consider $m$ loci with heterozygote advantage (overdominance) such that the fitness of the two homozygotes is $1-\frac{s}{2}$ and the fitness of the heterozygotes is $1+\frac{s}{2}$, where $s>0$. We'll assume that the fitness of an individual is given by the multiplication of the fitness component on each locus. In consequence, the fitness of the best possible genotype is given by $\left(1+\frac{s}{2}\right)^m$. According to this book, an individual is heterozygote at $j$ of these $m$ loci with probability $${m\choose j}\left(\frac{1}{2}\right)^m$$ and the equilibrium population mean fitness $\hat w$ is $$\hat w = \sum_{j=0}^m {m\choose j}\left(\frac{1}{2}\right)^m \left(1+\frac{s}{2}\right)^j \left(1-\frac{s}{2}\right)^{m-j} = 1$$ I don't understand any of these two equations! Can you help me to understand how they have been calculated?
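Two observations may help, sketched numerically below. The first formula is just the binomial distribution for the number of heterozygous loci, given that each locus is independently heterozygous with probability $1/2$ (the allele frequencies at this symmetric overdominant equilibrium are $1/2$, so heterozygote frequency is $2 \cdot \frac12 \cdot \frac12 = \frac12$). The second then collapses by the binomial theorem, since $\frac{1+s/2}{2} + \frac{1-s/2}{2} = 1$:

```python
from math import comb

def mean_fitness(m, s):
    """Equilibrium mean fitness: binomial average over the number j of
    heterozygous loci, as in the book's formula."""
    return sum(comb(m, j) * 0.5**m * (1 + s / 2)**j * (1 - s / 2)**(m - j)
               for j in range(m + 1))

# Equals ((1+s/2)/2 + (1-s/2)/2)^m = 1^m = 1 for any m and s.
print(mean_fitness(10, 0.1))  # → 1.0 (up to rounding)
```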
{ "domain": "biology.stackexchange", "id": 2102, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "evolution, mathematical-models, population-biology, population-dynamics, population-genetics", "url": null }
c#, object-oriented, game, console, sliding-tile-puzzle

private void ShuffleTiles()
{
    Random random = new Random();
    int n = Arr2d.GetLength(1);
    for (int i = Arr2d.Length - 1; i > 0; i--)
    {
        int i0 = i / n;
        int i1 = i % n;
        int j = random.Next(i + 1);
        int j0 = j / n;
        int j1 = j % n;
        int temp = Arr2d[i0, i1];
        Arr2d[i0, i1] = Arr2d[j0, j1];
        Arr2d[j0, j1] = temp;
    }
}

private int GetInvCount()
{
    //List<int> flattedList = new List<int> { 1, 20, 6, 4, 5 };
    var flattedList = Arr2d.OfType<int>().ToList();
    int inv_count = 0;
    for (int i = 0; i < flattedList.Count - 1; i++)
    {
        for (int j = i + 1; j < flattedList.Count; j++)
        {
            if (flattedList[i] > flattedList[j])
            {
                inv_count++;
            }
        }
    }
    return inv_count + 1;
}

private int GetRowNumberFromBelow(int emptyTilePosition)
{
    var row = emptyTilePosition / Size;
    return Size - row;
}

private bool IsSolvable()
{
    int numberOfInversions = GetInvCount();
    if (Size % 2 != 0)
    {
        return (numberOfInversions % 2 == 0);
    }
    int pos = GetRowNumberFromBelow(IndexZeroPos());
    if (pos % 2 != 0)
    {
        return (numberOfInversions % 2 == 0);
    }
    else
    {
        return (numberOfInversions % 2 != 0);
    }
}
{ "domain": "codereview.stackexchange", "id": 44145, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, object-oriented, game, console, sliding-tile-puzzle", "url": null }
crispyrat: Now cbrt is essentially the same however its the number multiplied 3 in a row. So everything should be adjusted accordingly, for example cbrt(18) would remain this way as 18=3*3*2 and as you can see no number happens 3 times. Also the number inside a cbrt can be a negative as three negatives multiplied will be a negative. For example cbrt(-27)=-3*-3*-3! A list of cubes up to 10(google calculator was used): 1*1*1=1 2*2*2=8 3*3*3=27 4*4*4=64 5*5*5=125 6*6*6=216 7*7*7=343 8*8*8=512 9*9*9=729 10*10*10=1000 crispyrat: @xxquintonxx is this good? should i add anything :) crispyrat: also if u want to request a subject feel free to comment just keep it on topic :) xXQuintonXx: e.e give me a sec to read it all xXQuintonXx: xd crispyrat: cool since u seem to be on this unit u could bookmark this or sumthing e.e xXQuintonXx: yup its all good thank you :) crispyrat: yw :)))) xXQuintonXx: @crispyrat wrote: cool since u seem to be on this unit u could bookmark this or sumthing e.e ehhhh not really im kinda just all over atm bc im doing some review thing rn crispyrat: ah i see is there anything you need help with :)? xXQuintonXx: cuz i did 1 thing for sqrt then it jumped me to coordinate stuff then now I'm in angel stuff XD crispyrat: i see do u need help with coordinates :p was thinking of doing one xXQuintonXx: no im good you have refreshed my mind on how to do em crispyrat: haha cool yw :)))) Extrinix:
{ "domain": "questioncove.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9883127447581985, "lm_q1q2_score": 0.8192507826238922, "lm_q2_score": 0.8289388019824947, "openwebmath_perplexity": 3068.206967899919, "openwebmath_score": 0.5470000505447388, "tags": null, "url": "https://questioncove.com/updates/6094a4d93c88ddaa04563800" }
ros, c++, ros-kinetic, publisher

int main(int argc, char** argv) {
    ros::init(argc, argv, "override_node");
    Override override;
    ROS_INFO("Running");
    ros::spin();
}

I tried putting while(ros::ok()) but I don't know where to insert it.

Originally posted by J. J. on ROS Answers with karma: 60 on 2018-09-10
Post score: 0

Original comments

Comment by Delb on 2018-09-10: You mean that you have the output only when the button is pressed? Normally a joystick is continuously publishing its data, can you tell us what is the publishing rate of the topic joy? (rostopic hz /joy)

Comment by Delb on 2018-09-10: But what you can do is: Create a thread in the constructor of the class boost::thread t = boost::thread(boost::bind(&spinthread)); Use the thread to spin void spinthread() { ros::spin(); } Create a global variable in the callback and publish it in the main

Comment by Delb on 2018-09-10: (publishing inside a while(ros::ok()) loop)

Comment by J. J. on 2018-09-10: I can see joy publishing perfectly with rostopic echo joy. But I want to continuously output the state of the button where it is pressed once (ON state = True) or pressed a second time (OFF state = False).

You can do it like that:

//Your libraries here
#include <boost/bind.hpp>
#include <tile_planner/reach_goals.h>

//Create a thread only for ros spin
void spinthread() {
    ros::spin();
}

//Define a global variable
std_msgs::Bool override_status;

//Constructor
Override::Override() {
    //Define sub and publishers and start the thread
    boost::thread t = boost::thread(boost::bind(&spinthread));
}
{ "domain": "robotics.stackexchange", "id": 31748, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, c++, ros-kinetic, publisher", "url": null }
# Simple example on uniformly convex spaces In the lectures we showed the following result: Theorem: Let $$(E,\|\cdot\|_E)$$ be a uniformly convex space. Consider a sequence $$\{x_n \}\rvert_{n\in\mathbb{N}} \subset E$$ and $$x \in E$$ such that it converges weakly to $$x\in E$$ $$x_n\rightharpoonup x ,$$ and the sequence of the norms converges to the norm of $$x\in E$$, i.e. $$\|x_n\|_E \longrightarrow \|x\|_E.$$ Then the sequence $$\{x_n \}\rvert_{n\in\mathbb{N}} \subset E$$ is strongly convergent $$x_n \longrightarrow x.$$ This means that weak convergence, together with the convergence of the norms, implies strong convergence in uniformly convex spaces. Question: Could you please provide a counterexample on a non uniformly convex space (maybe the sequence space of bounded sequences $$\ell^\infty$$?) where this result does not hold? Concretely: A sequence on a non uniformly convex space such that it is weakly convergent, and the sequence of the norms converges, but the sequence itself is not strongly convergent. I would be grateful to read any possible counterexample. Thanks!
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.985042911650495, "lm_q1q2_score": 0.8468056785302213, "lm_q2_score": 0.8596637451167997, "openwebmath_perplexity": 104.61607453150424, "openwebmath_score": 0.9925654530525208, "tags": null, "url": "https://math.stackexchange.com/questions/3127690/simple-example-on-uniformly-convex-spaces" }
classical-mechanics, tensor-calculus, moment-of-inertia

Title: What does diagonalizing the inertia tensor do?

In the normal case with a scalar moment of inertia, the angular momentum is parallel to the angular velocity $\vec{L} = I\,\vec{\omega}\tag{1}$ But when the object is rotating instantaneously about a point, the moment of inertia becomes a second rank tensor and the angular momentum is no longer always parallel to the angular velocity. So I'm new to tensor stuff, and what is typically done to the inertia tensor is diagonalizing it to find the fixed axes about which the object rotates. But how is it possible? If we have $\vec{L} = I\,\vec{\omega}\tag{1}$ where $I$ is now a diagonalized tensor, the equation says $ L_x = I_{xx} \omega_{x} $ $ L_y = I_{yy} \omega_{y}$ $ L_z = I_{zz} \omega_{z}$ So writing $L$ as a vector $ \vec L = I_{xx} \omega_{x} \hat e_1 + I_{yy} \omega_{y} \hat e_2 + I_{zz} \omega_{z} \hat e_3$ However $ \vec \omega = \omega_{x} \hat e_1 + \omega_{y} \hat e_2 + \omega_{z} \hat e_3$ Since $I_{xx} , I_{yy} , I_{zz}$ are not the same, $ \vec L$ is not always aligned with $ \vec \omega$ in these eigenvector axes, so what did we gain?

Angular momentum and angular velocity are only parallel when the rotation is around one of the three principal axes found by diagonalization. What we gained was knowing what those special axes are, where the dynamics are simple. The general rotational dynamics around an arbitrary axis are still complicated. For an irregularly shaped object, like, say, this comet, it should seem rather surprising that there is any axis around which the rotational dynamics are simple, much less three of them, in orthogonal directions!
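What diagonalization buys can be seen numerically. A small sketch (the tensor values are invented for illustration): for $\vec\omega$ along a principal axis, $\vec L = I\vec\omega$ is parallel to $\vec\omega$; for a generic $\vec\omega$ it is not.

```python
import numpy as np

# A symmetric inertia tensor with a product-of-inertia coupling
# between the x and y axes (numbers are made up).
I = np.array([[3.0, 0.5, 0.0],
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

# Diagonalize: eigenvalues = principal moments, eigenvectors = principal axes.
moments, axes = np.linalg.eigh(I)

# Spin about a principal axis: L is parallel to omega.
omega = axes[:, 0]
L = I @ omega
assert np.allclose(L, moments[0] * omega)

# Spin about a generic axis: L x omega is nonzero, so they are not parallel.
omega_generic = np.array([1.0, 1.0, 1.0])
print(np.cross(I @ omega_generic, omega_generic))
```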
{ "domain": "physics.stackexchange", "id": 60997, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, tensor-calculus, moment-of-inertia", "url": null }
quantum-mechanics, energy, hilbert-space, linear-algebra, superposition $$ \lvert 2 \rangle_{\mathrm{final}} = a\lvert 0 \rangle\lvert\mathrm{photon}_{20}\rangle + b\lvert 1\rangle\lvert\mathrm{photon}_{21}\rangle $$ It is a linear combination of the two alternatives. One of them is the transition from $\lvert 2 \rangle$ to $\lvert 0 \rangle$ creating a $\mathrm{photon}_{20}$ with energy $E_2-E_0$. That happens with the amplitude $a$ (probability $\lvert a\rvert^2$). The other corresponds to the transition from $\lvert 2 \rangle$ to $\lvert 1 \rangle$. Now you can write down the final state $\lvert 3 \rangle_{\mathrm{final}}$, assuming $\lvert 3 \rangle$ has somewhat higher energy than $\lvert 2 \rangle$. Finally, if the initial state is $$ \frac{1}{\sqrt{2}}(\lvert 2 \rangle+\lvert 3 \rangle) $$ then the final state will be $$ \frac{1}{\sqrt{2}}(\lvert 2 \rangle_{\mathrm{final}}+\lvert 3 \rangle_{\mathrm{final}}) $$ If photons are detected and their energy is measured, then an individual photon will be one of the photons in the final state, e.g. $\mathrm{photon}_{31}$. The final state will appear to be collapsed to $\lvert 1\rangle \lvert \mathrm{photon}_{31}\rangle$, which says that the atom is in state $\lvert 1 \rangle$.
{ "domain": "physics.stackexchange", "id": 15524, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, energy, hilbert-space, linear-algebra, superposition", "url": null }
homework-and-exercises, electricity, energy, forces, renewable-energy

Title: Small-scale water power: how do water volume and height convert into electric energy?

I was playing a little bit with the basic physics behind water power production, but I can't get the numbers right. Let's say that I put up a windmill that pumps water into a water tank on the top of my house, then I connect some kind of pipe with a generator and start to drain the water tank. How much electric energy (kWh) can I get out of a water tank with size $X\text{ m}^3$ placed $Y\text{ m}$ above the ground? What do the formulas look like?

Let's put some numbers on this problem, and see where we end up: Let's say the tank is $1\text{ m}^3$, and it is $10\text{ m}$ off the ground, so the water will fall $10\text{ m}$ to the generator. Let's connect the generator with a standard garden hose that has a 1 inch diameter, with an area of $\pi(2.54/2)^2 \approx 5.1\text{ cm}^2$. And then I guess we would get a $10\text{ m}$ column of water pressure, which could be converted with the area into the force the height is putting on the system: the earth's gravity times density times height, $9.82 \times 1000 \times 10 \approx 98\text{ kPa}$ of pressure. And then maybe use that pressure = force/area, but how do I get from pressure to energy?
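Starting from energy rather than pressure is simpler: the stored gravitational potential energy is $E = \rho V g h$. With the numbers from the question:

```python
# Energy stored in the raised tank: E = rho * V * g * h.
rho = 1000.0   # kg/m^3, water
V = 1.0        # m^3 (the 1 m^3 tank)
g = 9.82       # m/s^2 (the value used in the question)
h = 10.0       # m

E_joules = rho * V * g * h     # 98,200 J
E_kwh = E_joules / 3.6e6       # 1 kWh = 3.6e6 J
print(E_joules, E_kwh)
```

So a full drain of this tank yields only about 0.027 kWh even at 100% efficiency. To connect this to pressure, note that power is pressure times volumetric flow rate: $P = \rho g h \cdot Q$, so the hose area matters only through how much flow $Q$ it allows.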
{ "domain": "physics.stackexchange", "id": 1254, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electricity, energy, forces, renewable-energy", "url": null }
electromagnetism, maxwell-equations, vector-fields Title: Maxwell Laws Summary Diagram - Suggestions that I am missing? I have been going through a summary book of Maxwell's equations and hope I have organised this correctly, but I think perhaps there are important prompts I am missing that I could add. Image below. Thanks for your help in advance You can certainly add the column for Magnetostatics or Magnetoquasistatics if they are of any interest to you. Also, you might add charge conservation: $$ \nabla \cdot \mathbf{J}=-\frac{\partial \rho}{\partial t} $$ This one should fit your way of organizing a summary and slicing the knowledge about Maxwell's equations.
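One way to see why charge conservation belongs in such a summary: taking the divergence of the Ampère–Maxwell law $\nabla\times\mathbf{H} = \mathbf{J} + \partial\mathbf{D}/\partial t$ and using $\nabla\cdot\mathbf{D}=\rho$ yields exactly the continuity equation, because $\nabla\cdot(\nabla\times\mathbf{H})=0$ identically. A quick sympy sketch of that vector identity, for arbitrary smooth field components:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Hx, Hy, Hz = [sp.Function(n)(x, y, z) for n in ('Hx', 'Hy', 'Hz')]

# components of curl(H) for an arbitrary smooth vector field H
curl = (sp.diff(Hz, y) - sp.diff(Hy, z),
        sp.diff(Hx, z) - sp.diff(Hz, x),
        sp.diff(Hy, x) - sp.diff(Hx, y))

# div(curl(H)) cancels term by term (equality of mixed partials)
div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
print(sp.simplify(div_curl))  # 0
```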
{ "domain": "physics.stackexchange", "id": 59587, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, maxwell-equations, vector-fields", "url": null }
java, object-oriented, interview-questions, computational-geometry Even though you're skipping the simple symmetry pitfall (AB,BA), there is a similar pitfall for lines with already established points (e.g. ABDE, DE). Let's say you first check line AB. As it turns out, C is not on AB, D and E are both on AB. A few iterations later, you're checking line BD. Based on your earlier calculations, you already know that E is on BD. You don't really need to check that again; but the current code still does it anyway. This is a minor performance thing, but its importance can increase as the collection of points increases. Ideally, whenever you start checking a new line (e.g. DE), you should first check your existing results for a line which contains both these points (e.g. line AB contains points A,B,D,E so it contains information that is relevant to you now). Instead of creating a line DE, you could instead skip the calculation because AB(+DE) is already more complete than the new line DE will ever be (as it won't check A and B again, we already passed those). A similar performance gain can be made by stopping the iteration early. Let's say you have 26 points (A to Z). You're now starting on the calculations for lines where the first point is X. At best, these calculations can only yield a maximum of 3 points (X,Y,Z), since the calculations never check the other points again. Suppose that you already have a line in your collection which has 5 points on it (ABCDE). There's no point in doing further calculations, because even if every remaining point is on the same line (XYZ), it'll still contain fewer points than the existing line with ABCDE. Similarly, if you find a line with more than half of the point list on it (and you've tested all points for compatibility with this line), you can be sure that this is the longest line possible, since it's impossible for two distinct lines to share more than a single point.
Most of these improvements would make the existing code more complex, which would suggest either separating it further, or documenting it better. Or, preferably, both.
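A rough sketch (in Python rather than Java, with a hypothetical `longest_line` name) of how the two early-exit ideas above could look in a brute-force longest-line search; the collinearity test uses cross products to avoid slope divisions:

```python
def longest_line(points):
    """Return the largest set of collinear points (brute force + early exits)."""
    best = []
    n = len(points)
    for i in range(n):
        # Early exit 1: even if every remaining point were collinear,
        # we could not beat the best line found so far.
        if n - i <= len(best):
            break
        for j in range(i + 1, n):
            ax, ay = points[i]
            bx, by = points[j]
            line = [points[i], points[j]]
            for k in range(j + 1, n):
                cx, cy = points[k]
                # C is on line AB iff the cross product (B-A) x (C-A) is zero
                if (bx - ax) * (cy - ay) == (by - ay) * (cx - ax):
                    line.append(points[k])
            if len(line) > len(best):
                best = line
            # Early exit 2: a line holding more than half the points must be
            # the longest, since two distinct lines share at most one point.
            if len(best) > n // 2:
                return best
    return best


print(len(longest_line([(0, 0), (1, 1), (2, 2), (3, 3), (1, 0)])))  # 4
```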
{ "domain": "codereview.stackexchange", "id": 29660, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, object-oriented, interview-questions, computational-geometry", "url": null }
Chet Rectifier Gold Member Take the calculation in steps. How would you calculate the force due to air pressure acting on the outside of the barrel (just the top), assuming that there was no water inside the barrel? ##p = \frac{F}{A} \\ F=pA \\## A is the area of the lid and p is the atmospheric pressure Gold Member Your equation for the fluid pressure should include the atmospheric pressure on the fluid at the top of the glass pipe : $$p=p_{atm}+dgh$$ Chet Attempt 3 b) Donut ##A_L = A_1 - A_2 = \pi R^2 - \pi r^2 = \pi( R^2 - r^2)= \pi (0.20^2 - 0.003^2) ## The pressure is ##p_L = \frac{F_L}{A_L} \\ F_L=p_LA_L = \pi (0.20^2 - 0.003^2)p ## -------> ## p=p_{atm} + dgh ##​ ##F=A_Lp = A_L(dgh+p_{atm}) = \pi (0.20^2 - 0.003^2) (1000 \cdot 9.82 \cdot 12 + 100 \cdot 10^3 ) = 27368.4 ## Is that it ?​ Last edited: Chestermiller Mentor I don't know. The rhs of your equation is not equal to the left side. What SteamKing and I have been trying to point out is that the atmospheric pressure cancels out from the results for the net force on the lid. Chet Rectifier SteamKing Staff Emeritus Homework Helper ##p = \frac{F}{A} \\ F=pA \\## A is the area of the lid and p is the atmospheric pressure I think we have established that you know the basics. The question about whether to include atmospheric pressure in your force calculations does not really require any calculation, but a more subtle thing called analysis. If we look at the drawing you made of the pressure acting on the barrel:
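For what it's worth, the point the mentors are making can be checked numerically with the thread's own values: when atmospheric pressure acts both on top of the lid and (through the open pipe's fluid surface) on the water below it, the $p_{atm}$ terms cancel and only the $dgh$ term survives in the net force. A rough sketch:

```python
import math

# Illustrative numbers from the thread: lid outer radius 0.20 m, pipe bore
# radius 0.003 m, water column h = 12 m in the pipe, p_atm = 100 kPa.
p_atm = 100e3                            # Pa
rho, g, h = 1000.0, 9.82, 12.0
A_lid = math.pi * (0.20**2 - 0.003**2)   # annular lid area, m^2

F_up = (p_atm + rho * g * h) * A_lid     # water pressure pushes the lid up
F_down = p_atm * A_lid                   # atmosphere pushes the lid down
F_net = F_up - F_down                    # p_atm cancels: rho*g*h*A_lid remains
print(round(F_net))  # 14805
```

The 27368 N figure in the thread is the total upward force including atmosphere; the net force on the lid drops the $p_{atm}$ contribution entirely.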
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9615338123908151, "lm_q1q2_score": 0.815349673344664, "lm_q2_score": 0.8479677602988602, "openwebmath_perplexity": 1311.849025554496, "openwebmath_score": 0.9931880235671997, "tags": null, "url": "https://www.physicsforums.com/threads/hydrostatic-pressure-barrel-vs-small-cylinder-of-water.831332/" }
c, delphi unsigned int peek (HANDLE hProc, int data_size, unsigned int addr) { unsigned int val = 0; if (ReadProcessMemory (hProc, (void*)addr, &val, data_size, NULL) == 0) { printf("peek failed\r\n"); } return val; } void print_matches (MEMBLOCK *mb_list) { unsigned int offset; MEMBLOCK *mb = mb_list; while (mb) { for (offset = 0; offset < mb->size; offset+= mb->data_size) { if (IS_IN_SEARCH(mb, offset)) { unsigned int val = peek (mb->hProc, mb->data_size, (unsigned int)mb->addr + offset); printf("0x%08x: 0x%08x (%d) \r\n", mb->addr + offset, val, val); } } mb = mb->next; } } int get_match_count (MEMBLOCK *mb_list) { MEMBLOCK *mb = mb_list; int count = 0; while (mb) { count += mb->matches; mb = mb->next; } return count; } unsigned int str2int (char *s) { int base = 10; if (s[0] == '0' && s[1] == 'x') { base = 16; s += 2; } return strtoul (s, NULL, base); } MEMBLOCK* ui_new_scan(void) { MEMBLOCK *scan = NULL; DWORD pid; int data_size; unsigned int start_val; SEARCH_CONDITION start_cond; char s[20];
{ "domain": "codereview.stackexchange", "id": 1710, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, delphi", "url": null }
I was reading First Course on Probability by Sheldon Ross and I came across a problem which went like this: "A customer visiting the suit department of a certain store will purchase a suit with probability $.22$, a shirt with probability $.30$, and a tie with probability $.28$. The customer will purchase both a suit and a shirt with probability $.11$, both a suit and a tie with probability $.14$, and both a shirt and a tie with probability $.10$. A customer will purchase all $3$ items with probability $.06$. What is the probability that a customer purchases (a) none of these items? (b) exactly $1$ of these items?" Problem a) is easy to solve, what confuses me is part b). Ross solves it in the following way: The probability that two or more items are purchased is $P(AB ∪ AC ∪ BC) = .11 + .14 + .10 − .06 − .06 − .06 + .06 = .23$ Hence, the probability that exactly $1$ item is purchased is $.51 − .23 = .28.$ Intuitively, I understand why he subtracts the probability of buying two or more things from the probability of buying anything at all. What I do not understand is the rigor behind it. Why am I justified in subtracting one probability from the other? What I tried to do in order to justify this was saying $P( \mathrm {buying \ 2 \ or \ more \ things)} + P(\mathrm {buying \ 1 \ thing}) + P(\mathrm {buying \ nothing})=1$ since a customer must buy a shirt, or a tie, or a suit, or nothing, therefore the three terms above must add up to the probability of the sample space which equals one.
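The arithmetic in Ross's solution is easy to sanity-check; this sketch just replays the two inclusion–exclusion computations from the text:

```python
# Replaying the numbers from Ross's solution (a sketch, not new math)
pA, pB, pC = 0.22, 0.30, 0.28          # suit, shirt, tie
pAB, pAC, pBC = 0.11, 0.14, 0.10       # pairwise purchases
pABC = 0.06                            # all three

# P(A ∪ B ∪ C) by inclusion–exclusion
p_any = pA + pB + pC - pAB - pAC - pBC + pABC
# P(AB ∪ AC ∪ BC): the pairwise intersections of AB, AC, BC all equal ABC
p_two_or_more = pAB + pAC + pBC - pABC - pABC - pABC + pABC

p_none = 1 - p_any
p_exactly_one = p_any - p_two_or_more
print(round(p_none, 2), round(p_exactly_one, 2))  # 0.49 0.28
```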
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9603611620335328, "lm_q1q2_score": 0.8044176754900122, "lm_q2_score": 0.8376199572530448, "openwebmath_perplexity": 223.16183186986643, "openwebmath_score": 0.7132712006568909, "tags": null, "url": "https://math.stackexchange.com/questions/1739907/how-do-you-define-a-sample-space-with-rigor" }
So the two coefficients in $(1)$ are, respectively, \begin{align} & [x^{1900}]\left((1-x^{851})^4(1-x)^{-5}\right) \\ &= [x^{1900}](1-x)^{-5} - \binom41 [x^{1900-851}](1-x)^{-5} + \binom42[x^{1900-851\times2}](1-x)^{-5} \\ &= \binom{1900 + 4}{4} - \binom41 \binom{1049 + 4}{4} + \binom42 \binom{198 + 4}{4} \\ &= 342527319476 \end{align} and $$[x^{399}]\left((1-x^{851})^4(1-x)^{-5}\right) = [x^{399}](1-x)^{-5} = \binom{399 + 4}{4} = 1082740100$$ Thus $(1)$ gives the answer to be $$342527319476 - 1082740100 = 341444579376.$$
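The coefficient extraction above can be checked mechanically. This sketch (with a hypothetical helper name `coeff`) expands $(1-x^{851})^4$ by the binomial theorem and uses $[x^m](1-x)^{-5} = \binom{m+4}{4}$:

```python
from math import comb  # Python 3.8+

def coeff(n, k=851, parts=5):
    """[x^n] of (1 - x^k)^(parts-1) * (1 - x)^(-parts)."""
    total = 0
    j = 0
    while j * k <= n and j <= parts - 1:
        total += (-1) ** j * comb(parts - 1, j) * comb(n - j * k + parts - 1, parts - 1)
        j += 1
    return total

# matches the hand computation above: 342527319476 - 1082740100
print(coeff(1900) - coeff(399))  # 341444579376
```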
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9755769110110248, "lm_q1q2_score": 0.8368187617771673, "lm_q2_score": 0.8577681086260461, "openwebmath_perplexity": 779.141379167389, "openwebmath_score": 0.9997104406356812, "tags": null, "url": "https://math.stackexchange.com/questions/749224/combinatorics-generating-function" }
java This is just an MVCE which does not contain user authentication; it trusts that the passed userid is correct. I think that this is not a good implementation, how would you model this? I think that implementations of TeamRole should not know how to do things. Based on the name part: ...Role it should only know if a certain user can perform a certain action on a particular resource. I'm expecting the TeamRole to have a method like: public interface TeamRole { boolean canDelete(Entry entry); } And then change the TeamMember: public class TeamMember implements TeamRole { // need the user to be a part of the class here AbstractUser user; public TeamMember(AbstractUser user) { this.user = user; } @Override public boolean canDelete(Entry entry) { if (entry == null) return false; return user.getUid().equals(entry.getCreator().getUid()); } } You can have an alternative with a static method in TeamRole, so you can avoid the instantiation of the class, and just pass the user (requester) as a parameter: public interface TeamRole { // Java 8+: static interface methods must provide a body static boolean canDelete(AbstractUser requester, Entry entry) { return entry != null && requester.getUid().equals(entry.getCreator().getUid()); } } So your service method will become: public static ResponseEntity<Team> deleteEntry(AbstractUser requester, Team team, Entry e) { TeamRole role = team.findPermissionsForUser(requester); if (role == null) return new ResponseEntity("no role found...", HttpStatus.INTERNAL_SERVER_ERROR);
{ "domain": "codereview.stackexchange", "id": 32206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java", "url": null }
Set $y = f(x)$. Assume that the natural map $\kappa(y) \to \kappa(x)$ is bijective. Show, using the definition, that $f$ induces a natural linear map $$\text{d}f : T_{X/S, x} \longrightarrow T_{Y/S, y}.$$ Match it with what happens on local rings via Exercise \ref{exercise-tangent-space-Zariski} in case $\kappa(x) = \kappa(s)$. \end{exercise} \begin{exercise} \label{exercise-Jacobian} Let $k$ be an algebraically closed field. Let \begin{eqnarray*} f : \mathbf{A}_k^n & \longrightarrow & \mathbf{A}^m_k \\ (x_1, \ldots, x_n) & \longmapsto & (f_1(x_i), \ldots, f_m(x_i)) \end{eqnarray*} be a morphism of schemes over $k$. This is given by $m$ polynomials $f_1, \ldots, f_m$ in $n$ variables. Consider the matrix $$A = \left( \frac{\partial f_j}{\partial x_i} \right)$$ Let $x \in \mathbf{A}^n_k$ be a closed point. Set $y = f(x)$. Show that the map on tangent spaces $T_{\mathbf{A}^n_k/k, x} \to T_{\mathbf{A}^m_k/k, y}$ is given by the value of the matrix $A$ at the point $x$. \end{exercise} \section{Quasi-coherent Sheaves} \label{section-quasi-coherent} \begin{definition} \label{definition-quasi-coherent} Let $X$ be a scheme. A sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules is {\it quasi-coherent} if for every affine open
{ "domain": "github.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9854964220125032, "lm_q1q2_score": 0.8080019446376565, "lm_q2_score": 0.8198933315126792, "openwebmath_perplexity": 120.73245969658541, "openwebmath_score": 0.9982831478118896, "tags": null, "url": "https://github.com/stacks/stacks-project/blob/master/exercises.tex" }
• Like the other answer, why did you only use the upper half semicircle? Why did you not ignore the first term in the second step and evaluate over the whole circle like, $$\int_0^{2\pi} d\phi\ \frac{iRe^{i\phi}}{1+(Re^{i\phi})^4}$$ – sigsegv Apr 23 '17 at 17:13 • @Ayatana We want to evaluate the integral $\int_{-\infty}^\infty \frac{1}{1+x^4}\,dx$. How does integrating over the circle do that? Well, it doesn't; and that integral is $0$. So instead, we evaluate a contour integral, the one described herein, that has the real line integral as part of the contour. We could have taken the semi-circle in the lower-half plane, but since the orientation would be clockwise, we need a $-2\pi i$ factor to multiply the sum of the residues in the lower-half plane. The result will be the same, of course. ;-)) – Mark Viola Apr 23 '17 at 17:15 • Is it necessary for the contour to be a semicircle? Will a contour made of a straight line along the real axis and an arbitrary line joining the ends work? – sigsegv Apr 24 '17 at 4:30 • @Ayatana The residue theorem applies to any closed rectifiable curve. But we are free to choose the closed contour. And by choosing a semicircle the radius of which approaches infinity, we can show quite easily that its contribution approaches $0$ as $R\to \infty$. – Mark Viola Apr 24 '17 at 4:34 You shouldn't need Wolfram Alpha to tell you the answer isn't zero. The integrand is positive, so the integral is positive, and certainly nonzero. You have added up the residues at all the poles. However using the usual semicircle method gives you that the integral is $2\pi i$ times the sum of the residues of the poles in the upper half-plane.
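As a numerical cross-check of the semicircle method: summing the residues in the upper half-plane gives $\int_{-\infty}^{\infty}\frac{dx}{1+x^4} = \frac{\pi}{\sqrt 2}$, and a crude midpoint rule agrees (a sketch, not part of the original answer):

```python
import math

def integrand(x):
    return 1.0 / (1.0 + x**4)

# midpoint rule on [-L, L]; the tails decay like x^-4, so truncation is tiny
N, L = 200_000, 200.0
h = 2 * L / N
approx = sum(integrand(-L + (i + 0.5) * h) for i in range(N)) * h

exact = math.pi / math.sqrt(2)   # upper-half-plane residue result
print(abs(approx - exact) < 1e-4)  # True
```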
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9825575142422757, "lm_q1q2_score": 0.868619431660257, "lm_q2_score": 0.8840392741081575, "openwebmath_perplexity": 214.38352011733284, "openwebmath_score": 0.9597283601760864, "tags": null, "url": "https://math.stackexchange.com/questions/2248259/how-to-calculate-an-integral-using-cauchys-theorem" }
particle-physics, standard-model, antimatter, quarks, mesons Title: Mass of $B$ Meson This is a question from Griffith's Introduction to Elementary Particles, Chapter 2: The upsilon meson, $b\bar b$, is the bottom-quark analog to the $\psi$, $c\bar c$. Its mass is 9460 MeV/$c^2$ and its lifetime is 1.5 X $10^{-20}$ sec. From this information, what can you say about the mass of the $B$ meson $u\bar b$ (The observed mass is 5270 MeV/$c^2$.) All I can think of is that since $b\bar b$ has a mass of 9460 MeV/$c^2$, the individual bottom quark must be having a mass of 4730 MeV/$c^2$. Could someone give a hint as to how to proceed from here? To add more detail to @AjayShanmugaSakthivasan 's answer, such heavy-quark -onia systems such as Υ and J/ψ entail quark-antiquark pairs bound by some QCD effective potential. The heavier the quarks, the deeper the location of the pair in such potentials, and the more the energy required to liberate the quarks and have them subsequently bind (much more weakly) to charmed and bottomed mesons, Ds and Bs, respectively. For the J/ψ, of mass 3.1 GeV in natural units, you need to add a bit of energy to pull the charmed quarks out of the potentials, and then a bit more to create a light quark-antiquark pair to hadronize to some type of constituent form binding (much heavier than the current quarks you'd find in the PDG, the gist of @anna v 's comment) into D mesons (each of 1.9 GeV), so you need to add about 0.6 GeV. For the Υ, you need to add (2×5.28- 9.46) GeV ~ 1.1 GeV to do the same, since the b$\bar b$ pair is deeper in the binding potential.
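The estimate in the last paragraph is just arithmetic on the quoted masses; a small sketch (round PDG-style values in GeV, natural units):

```python
# Energy needed to split each quarkonium into a pair of heavy-light mesons
m_upsilon = 9.46    # Υ(1S)
m_B = 5.28          # B meson (observed 5.27-5.28 GeV)
m_psi = 3.097       # J/ψ
m_D = 1.865         # D meson

extra_b = 2 * m_B - m_upsilon   # to split Υ into B and anti-B
extra_c = 2 * m_D - m_psi       # to split J/ψ into D and anti-D
print(round(extra_b, 2), round(extra_c, 2))  # 1.1 0.63
```

So the bottomonium system sits roughly 1.1 GeV below its open-flavour threshold, versus about 0.6 GeV for charmonium, consistent with the "deeper in the potential" picture above.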
{ "domain": "physics.stackexchange", "id": 95179, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, standard-model, antimatter, quarks, mesons", "url": null }
python, optimization, file-system import hashlib def md5_for_file(path, block_size=256*128, hr=True): ''' Block size should be a multiple of the block size of your filesystem to avoid performance issues. Here 256*128 = 32768 octets, a multiple of the default 4096-octet NTFS cluster size. ''' md5 = hashlib.md5() with open(path, 'rb') as f: for chunk in iter(lambda: f.read(block_size), b''): md5.update(chunk) if hr: return md5.hexdigest() return md5.digest() def summary(folders_created, subfolders_created): print("") print("------------------------------------------------------------------") if not folders_created: print("No new show has been added to your collection master!") print("") else: print("") print("New show(s) added:") for item in folders_created: print(item) print("") print("------------------------------------------------------------------") if not subfolders_created: print("No new season has been added master!") print("") else: print("") print("New season(s) added:") for item in subfolders_created: print(item) def copyThisShit(from_dir, to_dir): i = 0 j = 0 new_fold = [] new_sub_folder = []
{ "domain": "codereview.stackexchange", "id": 5068, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, optimization, file-system", "url": null }
electrostatics, electricity, charge, buoyancy, lift \\\text{Data from various sources.} $$ The work done to charge this balloon can also be approximated easily: $$ \text{Work} \approx F\cdot s = 11.5~\text{N} \cdot 1~\text{m} = 11.5 ~\text{J} \\\text{Ideally it would charge up in} \; 11.5 ~\text{s}\: \text{with a} \; 1~\text{W}\:\text{ power supply!} $$ (Assuming that all the air was displaced from the bottom of the balloon) Edit: It would take much more than 11.5 seconds to charge up with a 1 watt supply due to the factors mentioned in one of the comments. This doesn't matter too much though, because the invested energy can be completely recovered upon deflating the balloon (ideally). If you think that the aluminium foil is too thin to make the approximation realistic, you can scale up the entire balloon to get a much larger upthrust and a satisfying thickness. Essentially the key point of charging the aluminium foil is so that the balloon can be scaled up without us having to change the thickness of the aluminium. Of course it's not realistic because aluminium would have to have a very high charge and this would quickly dissipate into the surroundings by ionizing the air. What I actually tried to do in an attempt to use this effect is:
{ "domain": "physics.stackexchange", "id": 24281, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, electricity, charge, buoyancy, lift", "url": null }
fourier-transform, frequency-spectrum, continuous-signals, bandwidth, voice Is "no physical signal bandlimited"? Depends what "bandlimited" means. Bandlimited = finite range of frequencies. The question then is, what's a frequency? or frequency of what? of infinite duration sinusoids? Then yes, no physical signal is bandlimited. But then, again, why do you care? Asserting this is plain misleading as it suggests every physical process has infinite derivative processes each at their own frequency. Knowing the actual max frequency of a process is a physics endeavor, not transforms'. The DFT does not commit this fallacy (but its weakness is blindness to anything beyond half its sampling rate (which is a non-issue if we know said frequencies don't exist)). Why not favor DFT or CFT? Because the building blocks are inherently limited in kinds of behaviors they can describe 'directly'. Suppose the same pendulum, but now damped. Its FT: $$ s(t) = e^{-t} \cos (25t) u(t)\ \Leftrightarrow\ S(\omega) = \frac{1 + j\omega}{(1 + j\omega)^2 + 625} \tag{2} $$
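The transform pair in $(2)$ can be sanity-checked numerically. This sketch approximates the Fourier integral of the damped signal at the resonance $\omega = 25$ and compares it with the closed form (unit decay rate, natural units, sample rate chosen for illustration):

```python
import numpy as np

fs = 1000.0                        # samples per unit time (assumed)
t = np.arange(0.0, 50.0, 1.0 / fs) # e^{-t} has fully decayed by t = 50
s = np.exp(-t) * np.cos(25.0 * t)

w = 25.0                           # evaluate S(ω) at the resonance
g = s * np.exp(-1j * w * t)
S_num = np.sum((g[:-1] + g[1:]) / 2) / fs       # trapezoid rule for the integral
S_ana = (1 + 1j * w) / ((1 + 1j * w)**2 + 625)  # the closed form in (2)
print(abs(S_num - S_ana) < 1e-3)  # True
```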
{ "domain": "dsp.stackexchange", "id": 10165, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fourier-transform, frequency-spectrum, continuous-signals, bandwidth, voice", "url": null }
conservation-laws, quantum-spin, spin-models, integrable-systems, spin-chains I guess that the generalized Gibbs ensemble in your case is: $$ \rho = \frac{1}{Z}e^{\sum_{k=0}^N h_k S_k^z} $$
{ "domain": "physics.stackexchange", "id": 95355, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "conservation-laws, quantum-spin, spin-models, integrable-systems, spin-chains", "url": null }
statistical-mechanics, entropy, units BTW if you were asking why the right constant is a power of $(2\pi\hbar)$, the answer is that it is the ratio between $\int dx\,dp$ in classical physics (we're supposed to integrate the distribution functions on the phase space) and the trace in quantum mechanics, ${\rm Tr}$. The fact that the ratio is $2\pi\hbar$ for each pair of $x$-$p$ complementary variables may be computed for any Hamiltonian. For example, a particle in the box with periodicity $\Delta x$ has quantized momenta $p=2\pi \hbar n/\Delta x$ with integer $n$, because $\exp(ipx/\hbar)$ has to be single-valued on the circle of circumference $\Delta x$. The momenta are therefore spaced by $2\pi\hbar/\Delta x$, so there's exactly one state per cell of phase space whose area is $2\pi\hbar / \Delta x \cdot \Delta x = 2\pi\hbar$. One can give a similar calculation for the annuli relevant for the harmonic oscillator and any other system. For many degrees of freedom, one has to multiply $2\pi\hbar$ from each $x$-$p$ pair because both trace (of a tensor product Hilbert space) as well as the multi-dimensional integrals are multiplicative in the small traces or lower-dimensional integrals.
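The "one state per area $2\pi\hbar$" bookkeeping is easy to check numerically for the particle on a circle (arbitrary test values, $\hbar=1$); the quantum state count and the classical phase-space area divided by $2\pi\hbar$ agree to the nearest state:

```python
import math

hbar = 1.0
L = 7.3        # circumference of the ring (arbitrary test value)
p_max = 40.0   # count momentum states with |p| <= p_max

# allowed momenta are p_n = 2π ħ n / L, with n an integer
n_max = math.floor(p_max * L / (2 * math.pi * hbar))
n_states = 2 * n_max + 1                 # n = -n_max, ..., n_max

phase_area = 2 * p_max * L               # classical area: Δx · Δp
print(n_states, round(phase_area / (2 * math.pi * hbar)))  # 93 93
```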
{ "domain": "physics.stackexchange", "id": 2260, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "statistical-mechanics, entropy, units", "url": null }
python plt.errorbar(concen,populations_monomer,yerr=monomer_errors,linestyle='None',ecolor='b') plt.errorbar(concen,populations_open,yerr=open_errors,linestyle='None',ecolor='g') plt.errorbar(concen,populations_closed,yerr=closed_errors,linestyle='None',ecolor='r') else: k_value=float(solutions[variable_names.index('k')]) k_error=float(uncertanties[variable_names.index('k')]) for entries in variable_names: if re.search('scaling_factor(\d+_\d+)',entries) is not None: io=convert_concentration(float(((re.search('scaling_factor(\d+_\d+)',entries)).group(1)).replace('_','.'))) if re.search('Open-Closed',sample_name[0]) is not None: fo,fc,fm=open_closed(k_value,io) dk_fo,dk_fc,dk_fm=((open_closed(k_value+h,io)-open_closed(k_value,io))/h) populations_open_errors.append((k_error**2*dk_fo**2)) populations_closed_errors.append((k_error**2*dk_fc**2)) populations_monomer_errors.append((k_error**2*dk_fm**2)) if re.search('Monomer-Closed',sample_name[0]) is not None: fo,fc,fm=monomer_closed(k_value,io) dk_fo,dk_fc,dk_fm=((monomer_closed(k_value+h,io)-monomer_closed(k_value,io))/h)
{ "domain": "codereview.stackexchange", "id": 44935, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
thermodynamics Title: How to find work done by heated gas and what causes the piston to move based on the problem So I had this problem, and I have 2 questions: 1. From the problem written in the picture, what causes the piston to move upwards? If it is because of the pressure, then based on the formula PV=nRT, if the pressure increases then the volume must have decreased, right? But in the problem the volume increases. Also if it's heated, the temperature must have increased too. So, how do I solve this problem? 2. How to find work done based on the problem If you do a force balance on the piston, you get $Mg=(2p_0-p_0)S$. When the gas is heated, the piston mass M doesn't change, the outside pressure $p_0$ doesn't change, and the piston area S doesn't change. So, from the force balance, the gas pressure remains at $2p_0$. The increase in temperature is compensated by the increase in volume. So the force exerted by the gas on the piston during the heating is constant at $2p_0S$. The work the gas does is the force the gas exerts on the piston times the displacement, h/2.
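A numeric sketch with made-up values: since the gas stays at constant pressure $2p_0$ while the piston rises by $h/2$, the work is $W = (2p_0)S(h/2) = p_0 S h$:

```python
# Illustrative values (assumed, not from the original problem statement)
p0 = 1.0e5      # Pa, outside pressure
S = 0.01        # m^2, piston area
h = 0.30        # m

F_gas = 2 * p0 * S        # constant force the gas exerts on the piston
W = F_gas * (h / 2)       # work done over the displacement h/2
print(W)  # 300.0
```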
{ "domain": "physics.stackexchange", "id": 38967, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics", "url": null }
java, homework, combinatorics Title: Permutations Finder Help Number 2 Previous question: Java application for finding permutations efficiently> I have changed quite a bit of code, and need some more reviews. class Permutations { static long factorial(int num){ long factorial = num; for (int forBlockvar = num; forBlockvar > 1; forBlockvar--) { factorial = factorial * forBlockvar; } return factorial / num; } public static void main(String[] args){ long FactNmR; int n = 8; int num = n; int r = 6; int nMr = n - r; long FactN = factorial(num); if (nMr == 2) { FactNmR = 2; } else if (nMr <= 1){ FactNmR = 1; } else if (nMr >= 2) { num = nMr; FactNmR = factorial(num); } long permutations = FactN; permutations = permutations / FactNmR; System.out.println(permutations); } } Code comment: static long factorial(int num){ long factorial = num; for (int forBlockvar = num; forBlockvar > 1; forBlockvar--) { factorial = factorial * forBlockvar; } return factorial / num; } You are starting your factorial = num, then multiplying it by num in the first iteration of the loop, then dividing by num at the end. Starting with factorial = 1, or starting forBlockvar = num - 1 (I prefer the second) will remove the need to divide at the end. You can also return 1 if the input is less than 2. static long factorial(int num){ if(num < 2) { return 1; } long factorial = num; for (int forBlockvar = num - 1; forBlockvar > 1; forBlockvar--) { factorial = factorial * forBlockvar; } return factorial; }
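For comparison, the same quantity $P(n,r)=n!/(n-r)!$ is a standard-library one-liner in Python (3.8+), which is handy for checking the Java program's output for n=8, r=6:

```python
from math import perm  # Python 3.8+

print(perm(8, 6))  # 20160, i.e. 8!/2!
```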
{ "domain": "codereview.stackexchange", "id": 1026, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, homework, combinatorics", "url": null }
Also, $$\lim _{ x\rightarrow \infty }{ \frac { { x }^{ 2 } }{ \sqrt { { x }^{ 4 }+1 } } } =1$$ Hence: $$\lim _{ x\rightarrow \infty }{ \frac { { x }^{ 2 }+{ e }^{ -{ x }^{ 2 }\sin ^{ 2 }{ x } } }{ \sqrt { { x }^{ 4 }+1 } } } =1+0=1$$ I tried to put the limit into wolfram but it gave me a time limit exceeded. Is there a reason for this? Does the limit really not exist? And is there anything wrong in my argument? Thank you 2. Oct 23, 2015 ### BvU I think you're doing just fine. which of the two did wolfie suffocate on ? 3. Oct 23, 2015 ### FaroukYasser Wolfram exceeded time on the original expression without breaking it down. (Although I have no idea why it did)
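A quick numerical spot-check (not a proof) supports the squeeze argument: even at near-multiples of $\pi$, where the exponential term is $\approx 1$, dividing by $\sqrt{x^4+1}$ makes it negligible:

```python
import math

def f(x):
    return (x**2 + math.exp(-x**2 * math.sin(x)**2)) / math.sqrt(x**4 + 1)

# include a near-multiple of π, where exp(-x^2 sin^2 x) is close to 1
samples = [1e3, 1e4, 317 * math.pi]
print([round(f(x), 5) for x in samples])  # [1.0, 1.0, 1.0]
```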
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9637799399736476, "lm_q1q2_score": 0.8356316857823821, "lm_q2_score": 0.8670357735451835, "openwebmath_perplexity": 1191.7873618078888, "openwebmath_score": 0.5874210000038147, "tags": null, "url": "https://www.physicsforums.com/threads/limit-with-trigonometric-and-polynomial-function.839248/" }
computer-architecture, quantum-computing Title: Organisation and Architecture of Quantum Computers What devices and interconnections are used along with quantum processors? Are they compatible with hardware devices like cache, RAM, and disks of current computers? What you describe as current computers is known as the von Neumann architecture. This approach is one of many ways to think about classical computation and there are other classical approaches that might or might not have relevant generalizations to quantum computing. The von Neumann architecture seems to be unlikely to be relevant to quantum computing, due to its difficulty from both the theoretical and implementation side. However, as I mentioned on cstheory there is an article on implementing a quantum von Neumann architecture. They do this via superconducting qubits, of course the implementation is very small, with only 7 quantum parts: two superconducting qubits, a quantum bus, two quantum memories, and two zeroing registers. This allows their quantum CPU to perform one-, two-, and three-qubit gates on qubits, and the memory allows (data) qubits to be written, read out, and zeroed. Implementing a quantum superposition of gates is very difficult, and so the program is stored classically. More likely models of quantum computing to be implemented include: measurement-based, topological, and adiabatic models. Typical implementations of these models look more like physics experiments (which they are!) than computers. Some of the common strategies for implementation include trapped ions, quantum optics, and superconducting circuits. The circuit approach has been placed on chips and in fact D-Wave (a spin-off from UBC in Vancouver) claims to have built quantum-like computers using the adiabatic model to implement quantum simulated annealing. They have managed to sell this computer to Lockheed Martin but their approach has been met with heavy skepticism. Lastly, the NMR approach mentioned by @RanG.
is interesting, but suspected to be not equivalent to full quantum-computing. It is equivalent to the one-clean qubit model (also known as DQC1) and suspected to be strictly weaker than full quantum computing.
{ "domain": "cs.stackexchange", "id": 203, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computer-architecture, quantum-computing", "url": null }
For the best known positive results under mild hypotheses, you might want to look at Armin Rainer, Perturbation theory for normal operators, Trans. A.M.S., Volume 365, Number 10, October 2013, Pages 5545–5577. In this paper by Xuwen Zhu it is shown that, after resolution by radial blow-ups, the eigenvalues can be made to be smooth: https://arxiv.org/abs/1504.07581.
{ "domain": "mathoverflow.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9790357625461396, "lm_q1q2_score": 0.8340922556081543, "lm_q2_score": 0.8519527963298946, "openwebmath_perplexity": 222.73043348661173, "openwebmath_score": 0.9387015104293823, "tags": null, "url": "https://mathoverflow.net/questions/358634/differentiability-of-eigenvalues-of-positive-definite-symmetric-matrices/358637" }
ros, usb-cam, camera, webcam Title: No JPEG data found in image I am currently running ROS on VirtualBox. After running $roslaunch usb_cam usb_cam.launch Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://dd-VirtualBox:46806/ SUMMARY PARAMETERS * /image_view/autosize: True * /rosdistro: jade * /rosversion: 1.11.16 * /usb_cam/camera_frame_id: usb_cam * /usb_cam/image_height: 480 * /usb_cam/image_width: 640 * /usb_cam/io_method: mmap * /usb_cam/pixel_format: mjpeg * /usb_cam/video_device: /dev/video0 NODES / image_view (image_view/image_view) usb_cam (usb_cam/usb_cam_node) ROS_MASTER_URI=http://localhost:11311
{ "domain": "robotics.stackexchange", "id": 24186, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, usb-cam, camera, webcam", "url": null }
general-relativity, gravity, earth, planets $$ ds^2 = -\left[\frac{3}{2}\sqrt{1-\frac{2M}{R}} - \frac{1}{2}\sqrt{1-\frac{2Mr^2}{R^3}}\right]^2dt^2 $$ Since $ds^2 = -d\tau^2$ a simple rearrangement gives: $$ u^t = \frac{dt}{d\tau} = \frac{1}{\frac{3}{2}\sqrt{1-\frac{2M}{R}} - \frac{1}{2}\sqrt{1-\frac{2Mr^2}{R^3}}} $$ Let's just note that this does not go to zero as $r \rightarrow 0$, and move swiftly on to look at the other term $\Gamma^r_{tt}$. Calculating this involves some painful algebra, but Mathematica is good at this sort of thing and Danu helpfully used Mathematica to do the calculation for me. The result is: $$\Gamma_{tt}^r= \frac{r}{2R^6}\left[2M^2r^2+MR^3\left(3\sqrt{1-\frac{2Mr^2}{R^3}}\sqrt{1-\frac{2M}{R}}-1\right)\right] $$ Yet another fiendishly complicated expression, but note that the whole thing is multiplied by $r/2R^6$ and that means if $r = 0$ the whole complicated expression is just zero. And there's our result! When $r = 0$ the Christoffel symbol $\Gamma_{tt}^r$ is zero and that means the radial four-acceleration is zero and that means you're weightless.
{ "domain": "physics.stackexchange", "id": 26555, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, gravity, earth, planets", "url": null }
computability, logic, programming-languages, lambda-calculus, combinatory-logic However, this is not completely correct. Before, $f$ got substituted for $r$ inside $M$. But now we pass $M$ instead. We have to somehow fix all places where we use $r$ so that they reconstruct $f$ from $M$. Actually, this is not difficult at all: Now that we know that $f = M M$, everywhere we use $r$ we simply replace it by $(r r)$. $$f = \underbrace{(\lambda r . (\ldots (rr) \ldots (rr) \ldots))}_{M'} \underbrace{(\lambda r . (\ldots (rr) \ldots (rr) \ldots))}_{M'}$$ This solution is good, but we had to alter $M$ inside. This is not very convenient. We can do this more elegantly without having to modify $M$ by introducing another $\lambda$ that sends to $M$ its argument applied to itself: By expressing $M'$ as $\lambda x.M(xx)$ we get $$f = (\lambda x.\underbrace{(\lambda r . (\ldots r \ldots r \ldots))}_{M}(xx)) (\lambda x.\underbrace{(\lambda r . (\ldots r \ldots r \ldots))}_{M}(xx))$$ This way, when $M$ is substituted for $x$, $MM$ is substituted for $r$, which is by the definition equal to $f$. This gives us a non-recursive definition of $f$, expressed as a valid lambda term! The transition to $Y$ is now easy. We can take an arbitrary lambda term instead of $M$ and perform this procedure on it. So we can factor $M$ out and define $$Y = \lambda m . (\lambda x. m(xx)) (\lambda x.m(xx))$$ Indeed, $Y M$ reduces to $f$ as we defined it.
{ "domain": "cs.stackexchange", "id": 21377, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computability, logic, programming-languages, lambda-calculus, combinatory-logic", "url": null }
is a sequence of lower bound sieve coefficients. One can iterate this procedure to produce an alternating sequence of upper and lower bound sieves. If one iterates out the Buchstab formula completely, one ends up back at the Legendre sieve. However, one can truncate this iteration by using the trivial lower bound sieve of ${0}$ for some of the primes ${p_1}$. For instance, suppose one seeks an upper bound for ${1_{n \not \in \bigcup_{p < z} E_p}}$. Applying (16), we only save some of the summands, say those ${p_1}$ which obey some predicate ${A_1(p_1)}$ to be chosen later. For the remaining summands, we use the trivial lower bound sieve of ${0}$, giving $\displaystyle 1_{n \not \in \bigcup_{p < z} E_p} \leq 1 - \sum_{p_1 < z: A_1(p_1)} 1_{E_{p_1}}(n) 1_{n \not \in \bigcup_{p < p_1} E_p}.$ For the surviving summands, we apply (16) again. With the sign change, the trivial lower bound sieve is not applicable, so we do not discard any further summands and arrive at $\displaystyle 1_{n \not \in \bigcup_{p < z} E_p} \leq 1 - \sum_{p_1 < z: A_1(p_1)} 1_{E_{p_1}}(n)$ $\displaystyle + \sum_{p_2 < p_1 < z: A_1(p_1)} 1_{E_{p_1}}(n) 1_{n \not \in \bigcup_{p < p_2} E_p}.$
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9886682444653242, "lm_q1q2_score": 0.8012947666000314, "lm_q2_score": 0.8104789155369048, "openwebmath_perplexity": 248.19796379327, "openwebmath_score": 0.9931489825248718, "tags": null, "url": "https://terrytao.wordpress.com/2015/01/21/254a-notes-4-some-sieve-theory/" }
reinforcement-learning, function-approximation The original state vector is an $\mathbb{R}^k$ vector. So $k$ refers to the original dimensionality of the state description. Q3- What is the range for index "i"? It is $[1, (n + 1)^k]$ if you are intending to have full coverage of all possible combinations of state features in degree up to $n$ in each feature, as described in the text. It is not the only way to create a set of derived features using polynomials, and you might just as naturally limit the total degree of all original features within a single polynomial to $n$ as allow each one to vary from $0$ to $n$ in all combinations. For instance, the book gives an example with $k = 2, n = 2$ of $x(s) = (1, s_1, s_2, s_1s_2, s_1^2, s_2^2, s_1s_2^2, s_1^2s_2, s_1^2s_2^2)$, but you could instead consider $x(s) = (1, s_1, s_2, s_1s_2, s_1^2, s_2^2)$. Which to use will depend on experimentation and results, no different from any other feature engineering.
{ "domain": "ai.stackexchange", "id": 3286, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reinforcement-learning, function-approximation", "url": null }
(* Get estimates of the mean and standard deviation using the binned data *) μ0 = data[[All, 1]].data[[All, 2]]/Total[data[[All, 2]]] (* 0.045999999999999944 *) σ0 = (((data[[All, 1]]^2).data[[All, 2]])/Total[data[[All, 2]]] - μ0^2)^0.5 (* 1.0017404853553638 *) The maximum likelihood estimates of $\mu$ and $\sigma$ based on the binned data are found by maximizing the likelihood (or equivalently the log of the likelihood): $$\text{Likelihood}=\prod_{i=1}^n \left(\Phi\left(\frac{x_i+\frac{w}{2}-\mu}{\sigma}\right)-\Phi\left(\frac{x_i-\frac{w}{2}-\mu}{\sigma}\right)\right)^{f_i}$$ where $x_i$ is the $i$-th bin midpoint, $f_i$ is the frequency count of the $i$-th bin, $n$ is the number of bins, $w$ is the bin width, and $\Phi(z)$ is the standard normal cumulative distribution function. (* Get maximum likelihood estimates *) Φ[z_?NumericQ] := CDF[NormalDistribution[0, 1], z] w = 0.2; (* Width of histogram bin *) (* Log of the likelihood *) logL = Sum[data[[i, 2]] Log[Φ[(data[[i, 1]] + w/2 - μ)/σ] - Φ[(data[[i, 1]] - w/2 - μ)/σ]], {i, Length[data]}];
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9669140206578809, "lm_q1q2_score": 0.8468107119299221, "lm_q2_score": 0.8757869819218864, "openwebmath_perplexity": 2314.100112086537, "openwebmath_score": 0.40955114364624023, "tags": null, "url": "https://mathematica.stackexchange.com/questions/153520/fitting-of-statistical-data-points-by-normal-distribution/153523" }
orbit, stellar-dynamics To get a solution at any time $\hat{t}$, you need to use numerical methods. Of course numerical methods are inherently flawed. Numerical errors build up the longer you simulate, due to time steps which are not infinitesimal, errors within the numerical algorithms used, and general floating-point errors. In theory, if you had a computer with infinite precision and infinite computing power, you could solve a 3-body (or n-body) system perfectly, but we live in the real world where such things are impossible. To show, though, that you can figure out the state at any time in the past or future for a three-body system, I've written a basic simulation in Python 3. It can run both forwards and backwards from a given start condition and start time. Essentially it puts three nearly identical masses in contrived starting positions and velocities. Below the code are plots of results. import numpy as np from numpy.linalg import norm from matplotlib.pyplot import * from time import time
{ "domain": "astronomy.stackexchange", "id": 2551, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "orbit, stellar-dynamics", "url": null }
python, python-3.x, tkinter def solver(self): self.output.delete(1.0, INSERT) target = self.target_num.get() number1 = self.num1.get() number2 = self.num2.get() number3 = self.num3.get() number4 = self.num4.get() numbers = [number1, number2, number3, number4] operators = ["+","-","*","/"] groups = ['X+X+X+X', 'X+X+(X+X)', 'X+(X+X)+X', '(X+X+X)+X', '(X+X)+X+X', 'X+(X+X+X)', '((X+X)+X)+X', 'X+(X+(X+X))', 'X+((X+X)+X)', '(X+X)+(X+X)', '(X+(X+X))+X'] seen = set() for values in permutations(numbers,len(numbers)): for operCombo in combinations_with_replacement(operators,len(numbers)-1): for oper in permutations(operCombo,len(numbers)-1): formulaKey = "".join(str(oper+values)) if formulaKey in seen: continue # ignore variations on parentheses alone for pattern in groups: formula = "".join(str(o)+str(p) for o,p in zip([""]+list(oper), pattern.split("+"))) formula = "".join(str(v)+str(p) for v,p in zip([""]+list(values),formula.split("X"))) try: if eval(formula) == target: Answer = formula,"=",target seen.add(formulaKey) #insert value in output Textbox self.output.insert(INSERT, Answer)
{ "domain": "codereview.stackexchange", "id": 34665, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, tkinter", "url": null }
universe, cosmology Title: Why aren't other kinds of energies considered in the mass-energy of the universe? Mass-Energy of the Universe: 5% ordinary matter, 27% dark matter, 68% dark energy What about other energies such as thermal energy, potential energy, kinetic energy, chemical energy, and radiant energy? Various missions such as WMAP and the Planck Satellite have measured the mass-energy content of the universe. You tend to see images like the one below produced by these scientific ventures.
{ "domain": "astronomy.stackexchange", "id": 2328, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "universe, cosmology", "url": null }
java, integer, numbers-to-words spellMediumNumber(sb,num); return sb.toString(); } private static void spellBigNumber(StringBuilder sb, long current) { long hundreds = current / HUNDRED.getSize(); if (hundreds > 0) { spellSmallNumber(sb, hundreds); sb.append(HUNDRED.getName()).append(" "); current %= HUNDRED.getSize(); } spellMediumNumber(sb, current); } private static void spellMediumNumber(StringBuilder sb, long num) { if (num >= 20) { sb.append(MULTIPLES_OF_TEN[(int) (num / 10)]).append(" "); num %= 10; } if (num > 0) { spellSmallNumber(sb, num); } } private static void spellSmallNumber(StringBuilder sb, long l) { sb.append(SMALL_NUMBERS[(int) l]).append(" "); } public static void main(String[] args) { int[] candidates = { 2147483647, 147483648, 47483648, 7483648, 483648, 83648, 3648, 648, 48, 8, -2147483648, 2_000_000_000, 2_000_000, 2_000, 200, 20, 14, 1, 0 }; for (int candidate : candidates) { System.out.format("%d - %s%n", candidate, spell(candidate)); } } }
{ "domain": "codereview.stackexchange", "id": 43422, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, integer, numbers-to-words", "url": null }
inorganic-chemistry, catalysis, mole Here we are dealing with zero knowledge as to what the remaining element(s) are, but with a couple of assumptions we actually can come up with an answer from the textbook, and an answer on top of that with a justification. Let's dive in and assume we are dealing with an unknown compound with the following formula: $$\ce{Co_xCl_y}\sum_{i=1}^N \ce{El}_{i,z_i}$$ where we account for $N$ other elements $\ce{El}_i$ with the respective coefficients $z_i.$ The first assumption required to advance with the solution is to treat the compound as stoichiometric, i.e. $x, y, z_i \in\mathbb{N}.$ This would immediately allow us to find the exact values for both $x$ and $y$: $$y : x = \frac{m(\ce{Cl})}{M(\ce{Cl})} : \frac{m(\ce{Co})}{M(\ce{Co})} = \frac{\pu{25.7 g}}{\pu{35.45 g mol-1}} : \frac{\pu{11.65 g}}{\pu{58.93 g mol-1}} = 3.67\tag{1}$$ To satisfy $x, y \in\mathbb{N},$ and taking into account the "hint" decimal part $.67$ (which implies a triple factor to an integer): $$x = 3, 6, 9, \ldots, 3n~(n\in\mathbb{N})\tag{2}$$ To pinpoint the exact allowed value of $x$ (and $y,$ as $y = 3.67x),$ we can use the total molar mass: $$M = x × M(\ce{Co}) + 3.67x × M(\ce{Cl}) + \sum_{i=1}^N z_iM_i\tag{3}$$
{ "domain": "chemistry.stackexchange", "id": 12985, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, catalysis, mole", "url": null }
performance, rust, lexical-analysis pub mod cursor; pub mod lexer; pub use lexer::*; #[cfg(test)] mod tests { use super::*; } main.rs: use scanner_rs::*; macro_rules! time { ($name:literal, $($test:tt)*) => { { let start = std::time::Instant::now(); $($test)* println!("{}: {:?}", $name, start.elapsed()); } } }
{ "domain": "codereview.stackexchange", "id": 44224, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, rust, lexical-analysis", "url": null }
java, algorithm, heap boolean hasLeftChild(int parent) { int leftChildIndex = getLeftChildIndex(parent); if (leftChildIndex > size - 1) { return false; } return true; } boolean hasRightChild(int parent) { int rightChildIndex = getRightChildIndex(parent); if (rightChildIndex > size - 1) { return false; } return true; } boolean hasParent(int index) { if (index > 0) { return true; } else { return false; } } int getLeftChildIndex(int parent) { int leftChildIndex = parent * 2 + 1; return leftChildIndex; } int getRightChildIndex(int parent) { int rightChildIndex = parent * 2 + 2; return rightChildIndex; } int getParentIndex(int index) { int parent = (index -1)/2; return parent; } int getLeftChild(int parent) { return items[getLeftChildIndex(parent)]; } int getRightChild(int parent) { return items[getRightChildIndex(parent)]; } int getParent(int index) { return items[getParentIndex(index)]; } void ensureCapacity() { if (size == capacity) { items = Arrays.copyOf(items, capacity * 2); capacity = capacity * 2; } } void addItem(int item) { ensureCapacity(); items[size] = item; size = size + 1; heapifyUp(); } int poll() { int topItem = items[0]; items[0] = items[size - 1]; size = size - 1; heapifyDown(); return topItem; } int peek() { return items[0]; } abstract void heapifyDown(); abstract void heapifyUp(); void swap(int indexA, int indexB) { int temp = items[indexA]; items[indexA] = items[indexB]; items[indexB] = temp; } }
{ "domain": "codereview.stackexchange", "id": 26889, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, algorithm, heap", "url": null }
c++ // At this point add, subtract and ignore vectors are set // add auto found = false; for (const auto& a : add) { if (tok == a) { count++; found = true; break; } } // subtract if (!found) for (const auto& s : subtract) { if (tok == s) { count--; found = true; break; } } // ignore if (!found) for (const auto& i : ignore) { if (tok == i) { ig_string += ' ' + tok; found = true; break; } } // check for end of sequence if (!found) { // recurse over ignore string if (ig_string.length() > 0) { std::string opt; optimize_sequence_recursion(ig_string, opt); if (opt.length() > 0) out += ' ' + opt; } // the number of moves in any direction must be taken mod 4 count %= 4; if (count > 0) { switch (count) { case 1: out += ' ' + add_string; break; case 2: out += ' ' + add_string; out += ' ' + add_string; break; case 3: // 3 adds == 1 subtract out += ' ' + subtract_string; break; default: break; } } else if (count < 0) { switch (count) { case -1: out += ' ' + subtract_string; break; case -2: // 2 subtracts == 2 adds for simplicity out += ' ' + add_string; out += ' ' + add_string; break; case -3: // 3 subtracts == 1 add out += ' ' + add_string; break; default: break; } } // trigger a new sequence by clearing it search.clear(); // move 1 token backwards index--; } } trim(out); }
{ "domain": "codereview.stackexchange", "id": 38762, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++", "url": null }
The key word is "estimate" ... meaning we're dealing with differentials. We have: . $f(x) \:=\:\frac{x}{x+1} - 3$ Then: . $df \:=\:\left[\frac{(x+1)\!\cdot\!1 - x(1)}{(x+1)^2} - 0\right]\,dx \quad\Rightarrow\quad df \:=\:\frac{dx}{(x+1)^2}$ We are given: . $x = 4,\;\;dx = -0.2$ Therefore: . $df \;=\;\frac{-0.2}{(4+1)^2} \;=\;-0.008$ . (approximate change) ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ We can compare it to the actual change of $f(x).$ $f(4) \:=\:\frac{4}{5} - 3 \;=\;-2.2$ $f(3.8) \:=\:\frac{3.8}{4.8} - 3 \;=\;-2.208333\hdots$ The actual change is: . $\Delta f \;=\;(-2.208333) - (-2.2) \;=\;-0.008333\hdots$
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9865717480217662, "lm_q1q2_score": 0.8305214062111052, "lm_q2_score": 0.8418256532040707, "openwebmath_perplexity": 697.5952513614482, "openwebmath_score": 0.5321648120880127, "tags": null, "url": "http://mathhelpforum.com/calculus/190116-business-calc-question.html" }
fourier-transform \\ &= -\dfrac{A}{4\pi}\int_{0}^{\infty} \dfrac{u}{u - i\left[2+\frac{t}{T_o}\right]}+\dfrac{u}{u - i\left[2-\frac{t}{T_o}\right]}-\dfrac{2u}{u -i\frac{t}{T_o}}-\dfrac{2u}{u+i\frac{t}{T_o}}+\dfrac{u}{u+ i\left[2+\frac{t}{T_o}\right]}+\dfrac{u}{u + i\left[2-\frac{t}{T_o}\right]} \; du \\ &= -\dfrac{A}{2\pi}\int_{0}^{\infty} \dfrac{u^2}{u^2 +\left[2+\frac{t}{T_o}\right]^2}+\dfrac{u^2}{u^2 + \left[2-\frac{t}{T_o}\right]^2}-\dfrac{2u^2}{u^2 +\left[\frac{t}{T_o}\right]^2}\; du \\ \\ &= -\dfrac{A}{2\pi}\int_{0}^{\infty} \dfrac{u^2}{u^2 +\left[\frac{t}{T_o}+2\right]^2}+\dfrac{u^2}{u^2 + \left[\frac{t}{T_o}-2\right]^2}-\dfrac{2u^2}{u^2 +\left[\frac{t}{T_o}\right]^2}\; du \\ \\
{ "domain": "dsp.stackexchange", "id": 9628, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fourier-transform", "url": null }
$$p\mathbb{Z}[\zeta_n] = P_1\dots P_r$$ The field extension is normal and degree $\varphi(n)$, so we have $fr=\varphi(n)$ where $f$ is the inertial degree of the $P_i$. By definition, $f$ is the degree of the field extension $[\mathbb{Z}[\zeta_n]/P_i:\mathbb{Z}/(p)]$. This field extension is cyclic, with Galois group generated by the Frobenius automorphism $\sigma$, which therefore has order $f$. Meanwhile, $Gal(\mathbb{Q}(\zeta_n)/\mathbb{Q})\cong (\mathbb{Z}/n\mathbb{Z})^\times$ has an automorphism $\tau$ mapping $\zeta_n\mapsto \zeta_n^p$, since $p$ is prime to $n$. The order of $\tau$ is the order of $p$ in $(\mathbb{Z}/n\mathbb{Z})^\times$. The problem is to show that $\tau$ and $\sigma$ have the same order.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9845754497285467, "lm_q1q2_score": 0.8407415291059963, "lm_q2_score": 0.8539127492339909, "openwebmath_perplexity": 104.38016924399712, "openwebmath_score": 0.9648061394691467, "tags": null, "url": "http://math.stackexchange.com/questions/210787/where-is-the-calculation-hiding-in-this-proof-about-how-p-splits-in-mathbbq" }
c#, beginner, .net, winforms public BankAccountInfo GetByAccountNumber(string accountNumber) { using (var connection = new SqlConnection(_connectionString)) { var sql = "SELECT ..."; connection.Open(); using (var command = new SqlCommand(sql, connection)) { command.Parameters.AddWithValue(...); using (var reader = command.ExecuteReader()) { if (reader.Read()) { return new BankAccountInfo { EmployeeName = reader[0].ToString(), AccountNumber = reader[1].ToString(), AccountName = reader[2].ToString(), BankName = reader[3].ToString(), AccountType = reader[4].ToString() }; } } } } return null; } } Now you have a class whose role is specifically to interact with the database; its interface gives you a GetByAccountNumber method that lets you fetch everything there is to know about a BankAccountInfo by passing in an account number. It knows nothing of a form, and it doesn't care how the returned object is being used, it's none of its concern. This is where you have a design decision to make: you need to somehow "connect" these pieces together. Let's see: the form really only needs to know about BankAccountInfo. the BankAccountInfo doesn't need to know about BankAccountDataService. the BankAccountDataService needs to know about BankAccountInfo (and any other type it might return). We're missing something else. public class BankAccountPresenter { private BankAccountView _form; private BankAccountDataService _service; public BankAccountPresenter(BankAccountView form, BankAccountDataService service) { _form = form; _form.OnShowAccountInfo += View_OnShowAccountInfo; _service = service; } public void Show() { _form.ShowDialog(); }
{ "domain": "codereview.stackexchange", "id": 7026, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner, .net, winforms", "url": null }
python, performance, pandas, data-visualization, data-mining Note that I also eliminated your approach to only read every first out of two lines, which is far too inflexible. I replaced it with a simple test whether the line contains any non-whitespace character. You have to modify your Playing with Loaded Data: # Populate/map DataFrame with data part then as well though, because you can get each generator item only once. That means you have to perform all analyses once per item instead of all items once per analysis. It could look like this: # Populate/map DataFrame with data for tweet in load_tweets_data(): tweets['text'] = tweet.get('text', None) tweets['lang'] = tweet.get('lang', None) tweets['country'] = None if tweet.get('place', None) is None else tweet.get('place', {}).get('country') Alternatively to the last line of the snippet above, you could also use this (thanks to @oliverpool): try: tweets['country'] = tweet['place']['country'] except KeyError: tweets['country'] = None That's all you need to change to use generators instead of a huge list. Alternatively, you could have placed the code to populate the DataFrame directly in the loop you use to read the file. Oh, and please use a single # to start comments instead of ##.
{ "domain": "codereview.stackexchange", "id": 18172, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, pandas, data-visualization, data-mining", "url": null }
general-relativity, reference-frames, mathematical-physics Title: Why is locally synchronizable defined as $\omega\wedge \mathrm d\omega=0$? I'm reading GTM48: General relativity for mathematicians, and I have a problem about the definition of locally synchronizable, which is defined as $\omega\wedge \mathrm d\omega=0$. Let $(M,g,D)$ be a spacetime. Definition 1 A reference frame $Q$ on a spacetime $M$ is a vector field each of whose integral curves is an observer. Let $\omega$ be the 1-form physically equivalent to $Q$, which means $\omega=g(Q,\cdot)$ Definition 2 $Q$ is called: locally synchronizable iff $\omega\wedge \mathrm d\omega=0$, Why do we define locally synchronizable in this way? At first, I think that since $Q$ has integral curves, $Q$ should be integrable, so the requirement of the definition should have to do with integrability. $\omega\wedge \mathrm d\omega=0$ reminds me of the Frobenius theorem, which tells us that if $\omega\wedge \mathrm d\omega=0$ then $\omega$ is integrable. That is to say, the distribution orthogonal to $Q$ is integrable. But what we want is that $Q$ is integrable! Do I make a mistake anywhere? In Definition 1 the vector field must obviously be timelike. Yes, the point is the Frobenius theorem. That condition, in terms of $Q$, means that locally there is a spacelike co-dimension $1$ surface $\Sigma$ everywhere orthogonal to $Q$. This is the rest space of the observer because the normal tangent space to $Q$ is tangent to $\Sigma$ as well. This condition, in turn, is the mathematical description of Einstein synchronisation: the local value of the light speed measured along lightlike curves is $1(=c)$. As you have noticed, the integrability condition does not refer to the integral lines of $Q$, which obviously do exist! It instead refers to the existence of locally $Q$-orthogonal immersed submanifolds...
{ "domain": "physics.stackexchange", "id": 75527, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, reference-frames, mathematical-physics", "url": null }
Let $T:\mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. If $T$ isn't an isomorphism, find bases of the kernel and image of $T$. Next, we study the space of linear transformations from one vector space to another, and characterize some algebraic properties of linear transformations. The kernel of $T$ consists of all vectors which are annihilated by the transformation; thus, the kernel is the span of all these vectors. Because a composition of linear transformations is itself linear (Theorem th:complinear of LTR-0030), and since the image and kernel of a linear transformation satisfy the subspace properties, the kernel and image of $T$ are subspaces of $\mathbb{R}^n$ and $\mathbb{R}^m$ respectively. Define the transformation $\Omega: L(V,W) \to M_{m \times n} (\mathbb{R})$. Find the kernel of $f$. Let $\mathbb{R}^4$ be endowed with the standard inner product, let $W = \operatorname{Span}\left\{\begin{pmatrix}1\\2\\1\\0\end{pmatrix}, \begin{pmatrix}3\\1\\2\\1\end{pmatrix}\right\}$, and let $P : \mathbb{R}^4 \to \mathbb{R}^4$ be the orthogonal projection in $\mathbb{R}^4$ onto $W$. Suppose a linear transformation is applied to the random variable $X$ to create a new random variable $Y$. Find polynomial(s) $p_i(t)$ that span the kernel of $T$. The Kernel of a Linear Transformation: suppose that $V_1$ and $V_2$ are two vector spaces, and $T:V_1 \to V_2$ is a linear transformation between $V_1$ and $V_2$. A linear functional on $V$ is a function $\vec{u}^* : V \to \mathbb{R}$ that is linear in the sense that $\vec{u}^*(\vec v + \vec w) = \vec{u}^*(\vec v) + \vec{u}^*(\vec w)$.
{ "domain": "tuning-style.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9838471647042428, "lm_q1q2_score": 0.8439127109138213, "lm_q2_score": 0.8577680977182186, "openwebmath_perplexity": 420.58524934978, "openwebmath_score": 0.7735112905502319, "tags": null, "url": "http://tuning-style.it/wdmc/how-to-find-the-kernel-of-a-linear-transformation.html" }
randomized-algorithms swap(s+1,q) # put the pivot in the correct place return s+1 Also, you can set a base case for select(p,q,k) (although this is not critical): def select(p,q,k): if (p==q): return lst[p] #... For more info: Introduction to Algorithms (Cormen, Leiserson, Rivest & Stein)
{ "domain": "cs.stackexchange", "id": 5194, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "randomized-algorithms", "url": null }
homework-and-exercises, electric-circuits Title: Power dissipated in resistor Suppose one has a circuit consisting of an inductor $L$ and resistor $R$ in series where $L$ and $R$ are known, passes an alternating voltage of frequency $\omega$ through it and that one wishes to calculate the mean power dissipated in the resistor. Let the RMS voltage across the series combination be $V_0$. Then the RMS current through the components will be $I=\frac{V_0}{i\omega L + R}$ and the mean power dissipated in $R$ will be $\overline{P}=I^2R$. However, at this point, $I$ involves a complex quantity. How do you calculate the mean power? Do you calculate the magnitude of the complex current? With very many thanks, Froskoy. Okay, this was really cool and I got some help from my physics professors on this one (apparently I won't learn this until next semester): to find the squared magnitude of a complex number you take it times its complex conjugate. So in this case $$\frac{V_0}{i \omega L +R}$$ is multiplied with $$\frac{V_0}{-i \omega L + R}$$ leaving you with simply $$\frac{V_0^2}{\omega^2L^2+R^2}$$, then just take it times your $R$ giving you $$\frac{V_0^2}{\omega^2 L^2+R^2} \times R$$ for the average power. Sorry that my equations aren't very pretty, LaTeX isn't working on my ubuntu install yet..... Hope this helps!!
{ "domain": "physics.stackexchange", "id": 3697, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electric-circuits", "url": null }
dataset, pandas, data-cleaning, correlation Title: How do you Speed up the Calculation of a Correlation Matrix on a Large Dataset in Pandas? I'm using a dataset with roughly 460,000 rows and 1,300 columns. I'd like to reduce the number of columns by seeing which have the largest effect on score using pandas' .corr() function. However, on such a large dataset, calculating the correlation matrix takes about 20 minutes. Is there any way to speed up the calculation? You can use libraries with similar or identical pandas syntax, such as: dask, pandarallel, ray, modin. Each of these libraries allows all processor cores to work; pandas often uses only 1 core. Dask and ray also allow you to work with big data. It is also possible to select only part of the dataset. 460,000 rows is quite a lot; I think if you randomly take half of them, the result will be very similar to taking the entire dataset. Unfortunately, I cannot mathematically estimate how much difference there will be.
{ "domain": "datascience.stackexchange", "id": 11297, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dataset, pandas, data-cleaning, correlation", "url": null }
aircraft-design Title: For a tangent ogive, given a radius, can you determine its length? A tangent ogive body has a specific radius for a given length, so how would you determine the length if only given the radius? Looking at the image below: $L=\sqrt{r^2-x^2}$ $x=r-\left(\frac{D}{2}\right)$ $L=\sqrt{r^2-\left(r-\left(\frac{D}{2}\right)\right)^2}=\frac{1}{2}\sqrt{-D (D - 4 r)}$
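The two forms of the relation can be cross-checked numerically (the function name and the sample values are mine):

```python
from math import sqrt, isclose

def ogive_length(r, D):
    """Length of a tangent ogive with ogive radius r and base diameter D."""
    return 0.5 * sqrt(-D * (D - 4 * r))

# Cross-check against the derivation L = sqrt(r^2 - x^2) with x = r - D/2
r, D = 5.0, 2.0
x = r - D / 2
assert isclose(ogive_length(r, D), sqrt(r * r - x * x))  # both give 3.0
```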
{ "domain": "engineering.stackexchange", "id": 2996, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "aircraft-design", "url": null }
c#, .net, exception-handling, mvp The reason I've caught exceptions in the DAL is that even though I'm not logging errors at the moment, I may have to do so in future. This way I can also differentiate the error messages in the DAL, whether they are SQL-related or general. I've used a messaging service in my presenter when showing error messages. Is this meaningful? Can this be simplified? I can follow the reasoning behind the catch and rethrow even if I might not fully agree with it - without a fuller understanding of what you would like to achieve I will not try to offer an alternative. The only comments I can see at the moment are: HouseKeeping InsertAccount() still has a return value even though it is never used. It can/should be removed. Don't Repeat Yourself (DRY) If you have a lot of calls to the database, you do not want to have to repeat the exception handling (logging) code in each of them. At the simplest level you can have a member in the data service that takes in an Action, executes it and handles any exceptions private void ExecuteNonQuery(Action nonQueryAction) { try { nonQueryAction(); } catch (SqlException) { throw new Exception("DataBase related Exception occurred"); } catch (Exception) { throw new Exception("Exception occurred"); } } public void InsertAccount(IBankAccount ba) { ExecuteNonQuery(()=> { string insertStatement = @"IF NOT EXISTS (SELECT ac_no FROM BankAccount WHERE ac_no=@ac_no) BEGIN INSERT INTO BankAccount (ac_no, emp_id, ac_name, bank_name, ac_type," + "ent_date, ent_by, remarks, file_no, status) VALUES (@ac_no, @emp_id, @ac_name, @bank_name, @ac_type, @ent_date, @ent_by, @remarks, @file_no, @status) END"; //... }); }
{ "domain": "codereview.stackexchange", "id": 7984, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, exception-handling, mvp", "url": null }
I was hoping someone could please come to the rescue and help me differentiate the function $z$ with respect to $C$. I thank all helpers. • In the first derivative, is A supposed to be C? – John Polcari Jul 8 '18 at 16:31 • Does this book have copy right? This not allowed on this website – Cloud JR K Jul 8 '18 at 16:46 • @JohnPolcari : yes fixed it now – kroner Jul 8 '18 at 16:49 • @CloudJR : no, it is OK – kroner Jul 8 '18 at 16:50 • @JohnPolcari : thanks for pointing out – kroner Jul 8 '18 at 16:51 Let's use a convention where uppercase Latin letters represent matrices, lowercase Latin vectors, and Greek letters are scalars. The function you've denoted by $\phi$ is the ReLU function, $r(\alpha)$, whose derivative is the Heaviside step function $$h(\alpha) = \frac{dr(\alpha)}{d\alpha} \implies dr = h\,d\alpha$$ Applying these scalar functions element-wise on a matrix argument $A=CX,$ produces matrix results, which we'll denote as \eqalign{ R &= r(A) \cr H &= h(A) \implies dR = H\odot dA = H\odot(dC\,X) \cr } where $\odot$ is the elementwise/Hadamard product.
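A quick finite-difference check of the relation $dR = H\odot(dC\,X)$ (the sizes and values below are arbitrary, chosen so that no entry of $CX$ sits near zero, where the ReLU kink would invalidate the first-order approximation):

```python
import numpy as np

# Fixed small matrices; C @ X has no entries near zero
C = np.array([[1.0, -2.0], [0.5, 3.0]])
X = np.array([[1.0, -1.0], [2.0, 0.5]])
dC = np.array([[0.3, -0.1], [0.2, 0.4]])
eps = 1e-7

relu = lambda M: np.maximum(M, 0.0)

A = C @ X
H = (A > 0).astype(float)            # elementwise Heaviside h(A)

dR_numeric = relu((C + eps * dC) @ X) - relu(A)
dR_analytic = eps * H * (dC @ X)     # H "Hadamard" (dC X), scaled by the step size

assert np.allclose(dR_numeric, dR_analytic, atol=1e-10)
```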
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9890130573694068, "lm_q1q2_score": 0.8592810313304701, "lm_q2_score": 0.8688267813328976, "openwebmath_perplexity": 425.7984376329643, "openwebmath_score": 0.9998749494552612, "tags": null, "url": "https://math.stackexchange.com/questions/2844744/help-with-matrix-derivative" }
python, python-3.x, object-oriented, random, file-system Ah, that's no good. It's continually appending to the data file. So, let's change: scores = open('rating.txt', 'a') into: scores = open('rating.txt', 'w') and run the tests again a few times - yes, that fixes it - we're only seeing a single line of data in the file, and all the tests still pass successfully. Let's complete the rest of the program into the entry point, remove self.main() and fix game.options into game.choices, as we merged those two earlier: if __name__ == "__main__": game = RockPaperScissors() game.getname() game.getchoices() game.search_for_player() if game.user_found is False: game.create_new_user() while game.running: response = False while response is False: computer = choice(game.choices) human = input() response = game.check_choice(human) if game.running and response: _round = game.check_result(human, computer) game.update_score(_round) My IDE, PyCharm, highlights _round = game.check_result(human, computer) - it says human and computer are being referenced before being used. This is true because of "variable scope" - human and computer are defined and used inside the while loop - but once we leave the while loop - they are "lost". Python is a bit more forgiving than other languages. This code would crash in most other languages. Let's address that later, and test the while loop (later, we'll monkeypatch the input): def test_input_loop(): game = RockPaperScissors() game.name = "myname" game.search_for_player() response = False while response is False: computer = choice(game.choices) human = input() response = game.check_choice(human) Do the tests run successfully? No, we get a failure - _________ test_input_loop _________
{ "domain": "codereview.stackexchange", "id": 38184, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, object-oriented, random, file-system", "url": null }
recurrence-relation, discrete-mathematics Title: Solve the recurrence $a_n - 3a_{n-1} + 2a_{n-2} = 6 \cdot 2^n$ Consider the recurrence $$ a_n - 3a_{n-1} + 2a_{n-2} = 6 \cdot 2^n. $$ I tried to solve this as follows. First, I found the homogeneous solution from the characteristic equation: $$ r^2 - 3r + 2 = 0 \\ (r-2)(r-1) = 0 \\ r = 2,1 \\ a_n^{(h)} = A \cdot 2^n + B $$ While trying to solve for the particular solution (adding $n$ for linear independence), I get an answer which is a false statement: $$ a_n^{(p)} = Cn2^n \\ Cn2^n - 3[C(n-1)2^{n-1}] + 2[C(n-2)2^{n-2}] = 6\cdot 2^n \\ 4Cn - 3[2Cn-2] + 2[Cn-2] = 6 \cdot 2^2 \\ 4Cn - 6Cn + 6 + 2Cn - 4 = 24 \\ 0Cn+2 = 24 \\ 0Cn = 22 $$ I’m not sure where the issue lies. Am I performing the problem incorrectly, or does the problem have an issue? Your calculation is an example of proof by contradiction. You assumed that $a_n = Cn2^n$, and reached a contradiction $0 = 22$. You can conclude that $a_n \neq Cn2^n$. Your calculation shows that the general solution to your equation is $$ a_n = A + 2^n B + n2^n C. $$ For some reason you forgot about the first two terms.
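As a numerical sanity check on that general-solution form, one can generate terms from the recurrence (the initial values $a_0 = a_1 = 0$ below are an arbitrary choice of mine), fit $A$, $B$, $C$ from the first three terms, and confirm that every later term matches:

```python
import numpy as np

# Terms of a_n = 3 a_{n-1} - 2 a_{n-2} + 6*2^n with arbitrary a_0 = a_1 = 0
a = [0, 0]
for n in range(2, 10):
    a.append(3 * a[-1] - 2 * a[-2] + 6 * 2 ** n)

# Fit a_n = A + B*2^n + C*n*2^n from n = 0, 1, 2
M = np.array([[1, 2 ** n, n * 2 ** n] for n in range(3)], dtype=float)
A, B, C = np.linalg.solve(M, np.array(a[:3], dtype=float))

# Every later term matches the closed form
for n in range(10):
    assert abs(A + B * 2 ** n + C * n * 2 ** n - a[n]) < 1e-6
```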
{ "domain": "cs.stackexchange", "id": 13857, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "recurrence-relation, discrete-mathematics", "url": null }
neural-networks, convolutional-neural-networks, training, architecture, hardware Title: How do neural network topologies affect GPU/TPU acceleration? I was thinking about different neural network topologies for some applications. However, I am not sure how this would affect the efficiency of hardware acceleration using GPU/TPU/some other chip. If, instead of layers that would be fully connected, I have layers with neurons connected in some other way (some pairs of neurons connected, others not), how is this going to affect the hardware acceleration? An example of this is the convolutional networks. However, there is still a clear pattern, which perhaps is exploited by the acceleration, which would mean that if there is no such pattern, the acceleration would not work as well? Should this be a concern? If so, is there some rule of thumb for how the connectivity pattern is going to affect the efficiency of hardware acceleration? The topology of a neural network can have a significant impact on the performance of GPU and TPU acceleration. The most important factor is the number of layers and the connectivity between them. Simple topologies require less data movement and can be more easily parallelized. A shallower network with fewer layers will often be faster to train on a GPU or TPU than a deeper network with more layers. This is because each layer in a neural network must be connected to the previous and next layers, and a deep network will have many more connections than a shallow network. Additionally, the activation function used for each layer can also impact performance. ReLU is a common activation function that is often used in networks that are accelerated by GPUs or TPUs. GPU-based acceleration of CNN The GPU is widely used in neural network applications due to its large number of ALUs, which helps in faster data processing (the multiplication and summation operations in a NN), and also the GPU caches, which help in data reuse. 
The GPU is capable of merging multiple data access requests using its controllers, which helps in massively parallel and pipelined processing. The GPU is a temporal architecture paradigm with a large number of ALUs, but the ALUs lack direct data communication; they communicate through direct memory access. A GPU has around 3,000–5,000 ALUs. But the Von Neumann bottleneck still exists in the GPU due to the access to registers and the shared memory for intermediate data storage in every ALU operation.
{ "domain": "ai.stackexchange", "id": 3558, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neural-networks, convolutional-neural-networks, training, architecture, hardware", "url": null }
minimum-spanning-tree I did the following: Consider an edge $e$ in $E_{T^*}$. Delete it from $G$ and find the new lowest weighted edge connecting the resulting components, say $e'$. The upper bound for $I_e$ is $w(e')$. The lower bound would be $-\infty$. Use the algorithm sketched in 1. I do not know what to do here. Could you help me? The solution in short. For each edge $e$ not in the tree, check the path between its endpoints in the tree. The weight of each edge on this path is upper-bounded by the weight of $e$. Keep track of the smallest upper bound for each edge of the tree while iterating over all edges not in the tree. Following is a sketch of the correctness. Let us call an edge in a simple cycle heavy for this cycle if its weight is maximum among all edges in the cycle. We claim that each edge $e$ in $G\setminus T^*$ is heavy for some cycle in $G$. Proof. Adding the edge $e$ to the tree creates a cycle consisting of $e$ and the tree path between its endpoints. If there were an edge in the cycle with strictly heavier weight, we could remove that edge and keep $e$, constructing a lighter tree, which contradicts the assumption that the given tree is minimum. On the other hand, it is not hard to prove that an edge is in an MST of a given graph if it is not heavy for any simple cycle containing it. The correctness of the sketched algorithm follows directly from this statement.
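A minimal sketch of the described algorithm in plain Python (all names are mine; the tree path is found with a simple DFS, so this is the straightforward, not the asymptotically fastest, version):

```python
def tree_edge_upper_bounds(tree_edges, non_tree_edges):
    """For each tree edge, the smallest weight of a non-tree edge whose
    tree path covers it; float('inf') if no non-tree edge covers it."""
    adj = {}
    for u, v, w in tree_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    def tree_path(s, t):
        # DFS for the unique s..t path in the tree
        stack, prev = [s], {s: None}
        while stack:
            x = stack.pop()
            if x == t:
                break
            for y in adj[x]:
                if y not in prev:
                    prev[y] = x
                    stack.append(y)
        path, x = [], t
        while prev[x] is not None:
            path.append(frozenset((x, prev[x])))
            x = prev[x]
        return path

    bound = {frozenset((u, v)): float('inf') for u, v, w in tree_edges}
    for u, v, w in non_tree_edges:
        for e in tree_path(u, v):
            bound[e] = min(bound[e], w)
    return bound

# Triangle example: tree edges (0,1) and (1,2); non-tree edge (0,2) of weight 5
b = tree_edge_upper_bounds([(0, 1, 1), (1, 2, 2)], [(0, 2, 5)])
assert b[frozenset((0, 1))] == 5 and b[frozenset((1, 2))] == 5
```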
{ "domain": "cs.stackexchange", "id": 14970, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "minimum-spanning-tree", "url": null }
c#, language-design, lexical-analysis should be converted to: if (IsEnd) { throw new ExpectedCharacterException(opening); } There are a few properties or methods where you may consider using =>. Example: private bool IsEnd => charIdx >= sourceRaw.Length; You seem to use CurrentChar != ' ' && CurrentChar != ';' frequently. Apparently, these are delimiters between tokens and values. The DRY Principle (Don't Repeat Yourself) suggests this could become its own property: private bool IsDelimiter => CurrentChar == ' ' || CurrentChar == ';'; Elsewhere in code you would replace CurrentChar != ' ' && CurrentChar != ';' with !IsDelimiter. The advantage here, besides readability, is that if you were ever to add a 3rd delimiter in the future, you would only have to change it in one spot.
{ "domain": "codereview.stackexchange", "id": 43246, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, language-design, lexical-analysis", "url": null }
quantum-mechanics, operators, hilbert-space, mathematical-physics, harmonic-oscillator We know that $A$ is a trace-class operator if $\sqrt{|A|}$ is a Hilbert-Schmidt operator or equivalently if $A$ is compact and $$ \sum_{\lambda~\in \text{ sing}(A)} \lambda m_\lambda < \infty\; , $$ where $m_\lambda$ is the multiplicity of $\lambda$. We know that $\lambda\in \text{sing}(\exp(-\beta H))$ is of the form $$ \exp\left(-\beta \left( n+\frac{1}{2}\right)\right) $$ with $m_\lambda=1$ and $n\in \mathbb{N}$. So, using $e^x \geq x^2/2$ for $x \geq 0$, we have $$ \sum_{\lambda\,\in\,\text{sing}(\exp(-\beta H))}\lambda m_\lambda=\sum_{n=0}^{\infty} \exp\left(-\beta \left( n+\frac{1}{2}\right)\right) \leq \sum_{n=0}^{\infty}\frac{2}{\beta^2\left(n+\frac{1}{2}\right)^2} <\infty. $$ Then it only remains to prove that $\exp(-\beta H) $ is compact. I have tried to prove that $$ \sum_{k=0}^n \frac{(-\beta H)^k}{k!} $$ is compact $\forall n$. In this way, using the fact that the space of compact operators is a Banach space, we can conclude. I cannot figure out how to prove this. Observe that, from the spectral decomposition of $e^{-\beta H}$ we have that
{ "domain": "physics.stackexchange", "id": 54596, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, operators, hilbert-space, mathematical-physics, harmonic-oscillator", "url": null }
javascript, mvc, ajax /***/ }), /* app.js */ /***/ (function(module, exports, __webpack_require__) { //const css = require('./app.scss'); const url = './dist/data/shows.json'; const model = __webpack_require__ (0); const view = __webpack_require__ (2); /* Fetch Data */ model.get(url).then(data => { view.renderData(); }).catch(err => { view.handleError(err.statusText); }); /***/ }), /* view.js */ /***/ (function(module, exports, __webpack_require__) { const model = __webpack_require__(0); const showList = document.getElementById("shows"); let filterButtons; document.addEventListener('DOMContentLoaded', DOMLoaded); function DOMLoaded() { filterButtons = document.getElementsByClassName('filter_list_button'); document.addEventListener('click', clickHandler); } function clickHandler(clickEvent) { const target = clickEvent.target; if (target.classList.contains('filter_list_button')) { handleButtonState(target); let filterBy; if (target.dataset.key) { filterBy = {key: target.dataset.key, value: target.dataset.value}; } renderData(filterBy); } } function filterObject(filterBy) { if(typeof filterBy === 'undefined') return model.getData(); return model.getData().filter((item) => { return item[filterBy.key] == filterBy.value; }) }
{ "domain": "codereview.stackexchange", "id": 27680, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, mvc, ajax", "url": null }
How to prove the identity $\prod_{n=1}^{\infty} (1-q^{2n-1}) (1+q^{n}) =1$ for $|q|<1$? Euler's product identity is as follows \begin{align} \prod_{n=1}^{\infty} (1-q^{2n-1}) (1+q^{n}) =1 \end{align} How can one explicitly prove this identity? Note that here $q$ denotes a complex number satisfying $|q|<1$ • what's $q$?.... – voldemort Jan 25 '15 at 15:29 • @voldemort, q is just a variable – phy_math Jan 25 '15 at 15:30 • How do you compute an identity? Do you mean proving that it holds? – Wojowu Jan 25 '15 at 15:31 • @Wojowu, yes, it is known as Euler's identity – phy_math Jan 25 '15 at 15:32 • Rewrite it as: $$\prod(1+x^n)=\prod\frac{1}{1-x^{2n-1}}$$ Then the left side coefficient of $x^N$ is the number of ways to partition $N$ into distinct positive integers, and the right is the number of ways to partition $N$ into odd not-necessarily distinct integers. This reduces it to a combinatorial question, at least. – Thomas Andrews Jan 25 '15 at 15:37 For $\lvert q\rvert < 1$, define $$f(q) = \prod_{n=1}^\infty (1-q^{2n-1})(1+q^n).$$ The product is absolutely and locally uniformly convergent on the open unit disk, hence $f$ is holomorphic there, and the product can be reordered and regrouped as desired. It is clear that $f(0) = 1$, and by reordering and regrouping, we have
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9859363713038173, "lm_q1q2_score": 0.8279242589701172, "lm_q2_score": 0.8397339656668286, "openwebmath_perplexity": 659.7724612678295, "openwebmath_score": 0.996915876865387, "tags": null, "url": "https://math.stackexchange.com/questions/1118993/how-to-prove-the-identity-prod-n-1-infty-1-q2n-1-1qn-1-for" }
convolution, terminology Title: Is linear convolution the same thing as aperiodic convolution? I am reading an online article, and what I understood from there is that linear convolution is aperiodic. Is that right? I have also attached a snapshot and highlighted two instances. What they call noncircular or aperiodic convolution is more commonly called linear convolution. Linear convolution describes the input-output relation of linear time-invariant (LTI) systems. Linear convolution can be computed via circular convolution if we use sufficient zero-padding. Circular convolution is important because it can be computed using fast algorithms (FFT).
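The zero-padding claim is easy to verify numerically; a small NumPy sketch (the sequences are arbitrary examples):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 2.0])

linear = np.convolve(x, h)   # linear (aperiodic) convolution, length 4 + 3 - 1 = 6

# Circular convolution via the FFT, zero-padded to length N >= len(x) + len(h) - 1,
# which makes it agree with the linear convolution
N = len(x) + len(h) - 1
circular = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

assert np.allclose(linear, circular)
```

Without the padding (i.e., using N = 4 here), the tail of the linear convolution would wrap around and the two results would differ.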
{ "domain": "dsp.stackexchange", "id": 9076, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "convolution, terminology", "url": null }
(This entry is a scrambled extraction of a page about generating random positive-definite matrices in R; only fragments are recoverable.) The function genPositiveDefMat implements four methods to generate a random positive-definite covariance matrix, one of which is called "unifcorrmat"; if eigenvalues are not supplied, they will be automatically generated, with options controlling the eigenvalue range and ratio. pdNatural is a constructor for a general positive-definite matrix in natural parametrization, inheriting from class pdMat ("Mixed-Effects Models in S and S-PLUS"; see also pdClasses and coef.pdMat), with an optional data frame in which to evaluate the variables named in value and form. A matrix is positive definite exactly when all of its eigenvalues are positive (no negative and no zero eigenvalues), and for a real symmetric matrix $A$ there is an orthogonal matrix $S$ such that $S^{\top}AS$ is diagonal. Reference: Analysis with High Dimensional Dependence Modelling, Wiley, 2006.
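Whatever the original R snippet said, the standard construction of a random positive-definite matrix is $S = AA^{\top} + \varepsilon I$ for a random $A$; a NumPy sketch (values and the shift $\varepsilon$ are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
S = A @ A.T + 1e-6 * np.eye(n)   # A A^T is symmetric PSD; the eps*I shift makes it safely PD

assert np.allclose(S, S.T)                  # symmetric
assert np.all(np.linalg.eigvalsh(S) > 0)    # all eigenvalues positive
```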
{ "domain": "todogolden.es", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9658995703612219, "lm_q1q2_score": 0.8049078745806677, "lm_q2_score": 0.8333246015211008, "openwebmath_perplexity": 1480.8413721117697, "openwebmath_score": 0.7686029672622681, "tags": null, "url": "http://todogolden.es/don-t-cfmf/generate-positive-definite-matrix-r-b260eb" }
if $$\displaystyle 2k - 4 > 0$$, that is if $$\displaystyle k > 2$$, which is true. But $$\displaystyle 2k^3 - 3k^2 - 5k + 3 = (2k - 1)(k^2 - k - 3)$$. Now $$\displaystyle 2k - 1 > 0$$ for any $$\displaystyle k > 0$$, so we are down to showing $$\displaystyle k^2 - k - 3 > 0$$ whenever $$\displaystyle k > 3$$. Since $$\displaystyle k^2 - k > 3$$ is the same as $$\displaystyle k(k-1) > 3$$, we have: $$\displaystyle k(k-1) \geq 4(3) = 12 > 3$$. Thus we conclude that $$\displaystyle k^2 - k - 3 > 0$$ and so: $$\displaystyle 3^{k+1} = 3(3^k) > 3k^3 > (k+1)^3$$. With all due respect to Zaid, I wanted to give a purely algebraic proof. #### Evgeny.Makarov ##### Well-known member MHB Math Scholar Equivalently, we must show that: $$\displaystyle 3k^3 > k^3 + 3k^2 + 3k + 1$$ Starting from this point, we could continue as follows. We need to show that $3k^2+3k+1<2k^3$. $3k^2+3k+1<3k^2+3k^2+k^2=7k^2$ since $k>1$. Now, $7k^2<2k^3\iff 7<2k$, and the last inequality is true since $k\ge4$. #### Deveno
{ "domain": "mathhelpboards.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9924227591803558, "lm_q1q2_score": 0.901733072947393, "lm_q2_score": 0.9086178895092414, "openwebmath_perplexity": 977.5917597252328, "openwebmath_score": 0.9834364652633667, "tags": null, "url": "https://mathhelpboards.com/threads/proof-critique-induction.6248/" }
c#, programming-challenge, time-limit-exceeded sum = Math.Abs(A[i]-A[j]) + Math.Abs(i-j); if (sum > maxSum) maxSum = sum; } return maxSum; }
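The usual fix for a time limit on this problem is the $O(n)$ identity $\max_{i,j}\bigl(|A[i]-A[j]| + |i-j|\bigr) = \max(\text{spread of } A[i]+i,\ \text{spread of } A[i]-i)$, which follows by expanding the absolute values over all sign combinations. A sketch (in Python rather than C#; function names are mine), checked against the quadratic brute force:

```python
from itertools import combinations

def max_sum_fast(A):
    """O(n): the answer equals the larger of the spreads
    (max - min) of A[i]+i and of A[i]-i over all indices."""
    plus = [a + i for i, a in enumerate(A)]
    minus = [a - i for i, a in enumerate(A)]
    return max(max(plus) - min(plus), max(minus) - min(minus))

def max_sum_brute(A):
    # The O(n^2) version from the question, for comparison
    return max(abs(A[i] - A[j]) + abs(i - j)
               for i, j in combinations(range(len(A)), 2))

A = [1, 3, -1, 7, 2]
assert max_sum_fast(A) == max_sum_brute(A)  # both give 9 (indices 2 and 3)
```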
{ "domain": "codereview.stackexchange", "id": 25777, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, programming-challenge, time-limit-exceeded", "url": null }
quadcopter, control, pid, imu, pwm Proportional error compares your speed to your target speed. Want to be going 60 and you're going 55? Speed up. Going 65? Slow down. This is easy. Integral error compares the integral of target speed to the integral of actual speed. This means that it's comparing a target position to your actual position. Were you supposed to be in the city already? Then you need to speed up. Were you supposed to be in the city and you're still at your house? SPEED THE ENTIRE WAY. Derivative error compares the derivative of the difference between target and actual speeds. Is the person in front of you pulling away? Speed up! Is the person in front of you pushing their brakes? Slow down! As I mentioned above, if your target is to be immediately behind the person in front of you, then proportional and integral will both "tell" you to speed up. Derivative is the only "person" to tell you you're going to rear-end them if you don't start braking. SO, what happens when you force signs on the error terms? Let's suppose you're in a car, trying to follow a friend to a restaurant. Here the friend's speed represents the speed reference and your speed represents your speed feedback. Speed limit on the road is 35mph (55kph). Here's what happens: Your friend begins to move. You are still stationary, so the following errors happen: Proportional error is positive (you want to go 35 and are actually going 0). Integral error is a little positive (your friend is farther from you). Derivative error is large and positive (your friend is quickly pulling away from you). You force signs: force integral error to be positive (already is) and derivative error to be negative. This means that derivative error is "telling" you that the car is getting farther away from you, but you invert that and assume that derivative error meant to say that you are getting closer to your friend. This is wrong. Actions: Proportional - press the gas a moderate amount. 
Integral - Press the gas pedal a little bit. Derivative - Should be press the gas pedal a lot, but you inverted it, so instead you press the brake a lot. Eventually your friend gets far enough away that the proportional and integral error becomes large enough that it overrides your (incorrectly inverted) derivative term. At this point:
{ "domain": "robotics.stackexchange", "id": 1034, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quadcopter, control, pid, imu, pwm", "url": null }
c++, calculator num = 0; operation_flag = false; negative_flag = false; } else if (function == "pi") { num = pi_num; if (negative_flag) num *= -1; if (operation_flag == false) // omitting the '*' in multiplication operation = '*'; calc_units.push_back(std::make_pair(operation, num)); num = 0; operation_flag = false; negative_flag = false; } else function_flag = true; } // parsing numbers and applying functions // the user might use a decimal point without a zero before it to show a number smaller than one // example: 1337 * .42 where the zero in 0.42 is omitted else if ((expression[i] >= '0' && expression[i] <= '9') || expression[i] == '.') { while (expression[i] >= '0' && expression[i] <= '9' && i <= end) { num = 10 * num + (expression[i] - '0'); ++i; } if (expression[i] == '.') // decimal point { ++i; unsigned decimals_count = 0; NUM_TYPE decimals = 0; while (expression[i] >= '0' && expression[i] <= '9' && i <= end) { decimals = 10 * decimals + (expression[i] - '0'); decimals_count++; ++i; } num += decimals / pow(10, decimals_count); decimals = 0; decimals_count = 0; } if (negative_flag) // negative number num *= -1; // applying functions if (function_flag) { applyFunction(function, num); if (error_flag) { error_msg = "Unknown Function"; return 0; } function_flag = false; }
{ "domain": "codereview.stackexchange", "id": 37636, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, calculator", "url": null }
keras, convolutional-neural-network, theano Title: shape of theano tensor variable out of keras Conv2D Being new to theano, please bear with me. I thought the shape of the tensor variable is already well defined out of the Conv2D layer since the input is specified, as follows, from keras.layers import Input, Convolution2D import theano input_img = Input(shape=(1, 28, 28)) x = Convolution2D(16, 3, 3, activation='relu', border_mode='same') (input_img) print type(x) print theano.tensor.shape(x) But the output is, <class 'theano.tensor.var.TensorVariable'> Shape.0 Since I'm taking the default stride of 1, and same border mode here means that padding is added so the output is the same size as the input. Using this information I could calculate by hand what the output shape should be. Did I miss something here? The question is how to get the shape of the output of a convolution layer? You can't get the shape of a theano tensor, because it is not fixed. The output of the convolutional layer is just a symbolic variable and its shape depends on whatever you put into the layer as input. You can get the shape of the output for a specific input by making a theano function for the output of the layer, and feeding a numpy array through the function: import numpy as np input = np.ones(28*28).reshape(1, 1, 28, 28).astype('float32') fn = theano.function([input_img], x) print fn(input).shape >>> (1, 16, 28, 28)
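The by-hand calculation mentioned in the question is the standard formula out = floor((in + 2·pad − kernel) / stride) + 1, applied per spatial axis; a small helper (names are mine):

```python
def conv2d_output_size(in_size, kernel, stride=1, pad=0):
    """Spatial output size of a 2-D convolution along one axis."""
    return (in_size + 2 * pad - kernel) // stride + 1

# 'same' border mode with a 3x3 kernel and stride 1 corresponds to pad=1: 28 -> 28
assert conv2d_output_size(28, kernel=3, stride=1, pad=1) == 28
# 'valid' (no padding) would instead give 26
assert conv2d_output_size(28, kernel=3) == 26
```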
{ "domain": "datascience.stackexchange", "id": 1337, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "keras, convolutional-neural-network, theano", "url": null }
density-matrix The insight behind this is the following. We want to use the Lemma to show that these two matrices are equal. Note that though the lemma statement uses density matrices, the proof actually works with general matrices, so we don't have to show that $\sum_h\mathbb{P}[h]\,\left.\rho^A\middle|h\right.$ is a density matrix in the first place. Thus, we want to show that, for any unitary $\mathbf{U}$, the probability of measuring the basis state $|x\rangle$ after having applied $\mathbf{U}$ is given by: $$\mathrm{tr}\left(|x\rangle\langle x|\mathbf{U}\sum_h\mathbb{P}[h]\left.\rho^A\middle|h\right.\mathbf{U}^\dagger\right)$$ Now, the linearity of the trace gives: $$\mathrm{tr}\left(|x\rangle\langle x|\mathbf{U}\sum_h\mathbb{P}[h]\left.\rho^A\middle|h\right.\mathbf{U}^\dagger\right)=\sum_h\mathbb{P}[h]\mathrm{tr}\left(|x\rangle\langle x|\mathbf{U}\left.\rho^A\middle|h\right.\mathbf{U}^\dagger\right)\,.$$ By definition of $\left.\rho^A\middle|h\right.$, $\mathrm{tr}\left(|x\rangle\langle x|\mathbf{U}\left.\rho^A\middle|h\right.\mathbf{U}^\dagger\right)$ is the probability of the adversary measuring $|x\rangle$ given $h$. Hence, this does give the probability of measuring $|x\rangle$, which concludes the proof. In this case, we would thus have:
{ "domain": "quantumcomputing.stackexchange", "id": 2589, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "density-matrix", "url": null }
In particular, if given $f\in R[x]$ and $g\in S[x]$ we let $$\mathrm{Roots}_R(f) = \{r\in R\mid f(r)=0\}$$ and $$\mathrm{Roots}_S(g) = \{s \in S\mid g(s)=0\},$$ then $$h\left(\mathrm{Roots}_R(f)\right) \subseteq \mathrm{Roots}_S(\overline{h}(f)),$$ but the inclusion may be proper. For examples with proper inclusion in the setting of reduction modulo $p$, consider $f(x)=x^2+1\in\mathbb{Z}[x]$ which has no roots, but whose reduction modulo $p$, with $p=2$ or $p\equiv 1\pmod{4}$, has roots. Or $f(x)=x^p-x$, $p$ an odd prime, which has three roots in $\mathbb{Z}$ but its reduction modulo $p$ has $p$ roots in $\mathbb{F}_p$ (by Fermat's Little Theorem). For examples of proper inclusion with fields and $h$ an inclusion map, take $x^2+1\in\mathbb{R}[x]$, and consider its image in $\mathbb{C}[x]$; $x^2+1$ has no real roots, but its image in $\mathbb{C}[x]$ has roots. Added. When dealing with inclusion, it usually makes more sense to consider the set of roots in the larger field, and just ask which ones lie in the smaller field. So we think of $x^2+1$ as a polynomial with complex coefficients, and then ask what $\mathrm{Roots}_{\mathbb{C}}(x^2+1)\cap\mathbb{R}$ is (this will equal $\mathrm{Roots}_{\mathbb{R}}(x^2+1)$).
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9854964215865321, "lm_q1q2_score": 0.8490428207025734, "lm_q2_score": 0.8615382076534742, "openwebmath_perplexity": 76.4085526447313, "openwebmath_score": 0.9455252885818481, "tags": null, "url": "http://math.stackexchange.com/questions/35136/does-fx-in-mathbbzx-have-the-same-roots-as-fx-in-mathbbf-px" }
rosnode, java Title: Access ROS command-line tools via Java Hi, I am trying to write a tool for ROS which allows the user to combine Eclipse with PyDev. Therefore, I want to create an Eclipse plugin. I have been trying for quite some time to access the command-line tools like "rosnode" in Java, but I am not yet able to accomplish it. Is there anyone who has already accomplished that and could give me some ideas how to manage it? Is there a package or some chunks of code as an example? Until now I have tried it as a direct process, as a shell script, and Jython (but this is not 100% out). Any idea would be great. Thanks a lot! Originally posted by Chakka on ROS Answers with karma: 11 on 2011-03-14 Post score: 1 It's great to hear you are working on an Eclipse plugin. If you access it as a process, have you made sure to pass in ROS_ROOT, ROS_MASTER_URI, and ROS_PACKAGE_PATH? The tools will not function without those set. In the long run, a lot of the information would be easier to get access to by writing XML-RPC clients that talk to the master, which is how tools like rosnode and rosparam get most of their information. Originally posted by kwc with karma: 12244 on 2011-03-15 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 5061, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rosnode, java", "url": null }
acid-base \begin{aligned} \mathrm{pH} &= \log\left(\frac{[\ce{AcO-}]}{[\ce{AcOH}]}\right) + \mathrm{p}K_\mathrm a \\ \mathrm{pH} &= \mathrm{p}K_\mathrm a + \log\left(\frac{[\ce{B}]}{[\ce{A}]}\right) \\ \end{aligned} With buffer systems, you can calculate the $\mathrm{pH}$ of any buffer solution. The Henderson–Hasselbalch equation has several caveats, however.
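As a quick numerical sketch (my own addition; the acetate $\mathrm{p}K_\mathrm a$ is an assumed textbook value, not from the answer), the equation is trivial to evaluate:

```python
import math

def buffer_ph(pka, base, acid):
    """Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])."""
    return pka + math.log10(base / acid)

# Acetate buffer with an assumed pKa of 4.76:
print(buffer_ph(4.76, 0.10, 0.10))  # equal concentrations: pH = pKa
print(buffer_ph(4.76, 0.20, 0.10))  # excess conjugate base raises the pH
```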
{ "domain": "chemistry.stackexchange", "id": 996, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "acid-base", "url": null }
deep-learning, dataset, cnn, image-classification, faster-rcnn Is there any paper that uses studio photos as well as field photos for object detection? Well, there are many. These controlled datasets are pretty easy to come up with and have been used extensively (see BioID, for example); to this day many publications use it for benchmarking on Eye Pupil Detection and Facial Landmark Detection. In-the-wild datasets (usually collected from the web) are usually more challenging and also better suited to real-world applications, but it is not always possible to acquire them easily. Last year CEFET-RJ and UFRJ published papers in IEEE with datasets they created for Aedes aegypti breeding-site detection. This dataset was created using drones and manually annotated by CEFET-RJ and UFRJ undergrad students.
{ "domain": "datascience.stackexchange", "id": 4885, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "deep-learning, dataset, cnn, image-classification, faster-rcnn", "url": null }
• Hey sorry about that, I edited it-hopefully it's right now? – hmmmm Apr 22 '12 at 13:39 • I also edited now my calculation which still differs abit from your new one. Could you show the steps of how you got this answer for $|f(x)-f(y)|$? – T. Eskin Apr 22 '12 at 13:47 • yeah I messed that up quite a bit sorry (I had the wrong power and the wrong delta's) – hmmmm Apr 22 '12 at 13:50 • It should be $|(\frac{\delta}{2}+\frac{1}{\delta})^4-\frac{1}{\delta^4}|$ which would give $|\frac{\delta^4}{16}+\frac{\delta^2}{2}+\frac{2}{\delta}+\frac{3}{2}|$ I think I could conclude a similar thing from here? – hmmmm Apr 22 '12 at 13:52 • Except that is the last $-\frac{1}{\delta^{4}}$ missing from there? Otherwise it looks close to mine. – T. Eskin Apr 22 '12 at 13:57 (1). If $f:(0,\infty)\to \Bbb R$ is differentiable and $\lim_{x\to \infty}f'(x)=\infty$ then $f$ is not uniformly continuous: For any $\delta >0$ and any $x>0$, the MVT implies there exists $y\in (x,x +\delta)$ such that $\frac {f(x+\delta)-f(x)}{\delta}= f'(y).$ Now if $x$ is large enough that $\forall y>x\;(f'(y)>1/\delta)$ then $f(x+\delta)-f(x)=\delta f'(y)>1.$
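A numeric illustration of the MVT argument (my own sketch, not part of the answer), using $f(x)=x^4$: for a fixed $\delta$, the gap $|f(x+\delta)-f(x)|$ grows without bound, so no single $\delta$ can work for every $x$:

```python
# For f(x) = x^4 and fixed delta, the MVT gives
# f(x + delta) - f(x) = delta * f'(y) for some y in (x, x + delta),
# and f'(y) = 4 y^3 -> infinity, so the gap eventually exceeds any epsilon.
def gap(x, delta):
    return abs((x + delta) ** 4 - x ** 4)

delta = 0.01
for x in [1, 10, 100]:
    print(x, gap(x, delta))
# The gap exceeds 1 once x is large, so f is not uniformly continuous.
```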
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9777138164089085, "lm_q1q2_score": 0.8734009472388279, "lm_q2_score": 0.8933094046341532, "openwebmath_perplexity": 210.78347385952645, "openwebmath_score": 0.9865854382514954, "tags": null, "url": "https://math.stackexchange.com/questions/135234/showing-fx-x4-is-not-uniformly-continuous" }
fasta, bedtools, orf Title: Bedtools get fasta and ORF from a blastX run Hello I made a blastX search (a query genome in nucleotide format translated into protein in 6 reading frames against a protein database). And here is the head of the result: IDBA_scaffold_7517 493 YP_009316100 0.360 61 27 1 150 4 126 186 6.775E-03 37 IDBA_scaffold_29149 519 AIG51500 0.428 35 18 1 2792 2896 392 424 2.712E-03 40 I have 2 hits: The first one is the query IDBA_scaffold_7517, which has a hit with a protein sequence YP_009316100 on the (-) strand because the alignment starts at 150 and ends at 4. The second one is the query IDBA_scaffold_29149, which has a hit with a protein sequence AIG51500 on the (+) strand because the alignment starts at 2794 and ends at 2893. So with that I actually created a bed file in order to extract the fasta sequences from my genome (I assume that since there is a hit with a protein in the genome, the coordinates of this hit should lie in a good ORF, without stop codons). Here is the bed_file I made: IDBA_scaffold_7517 4 150 Species 0 - IDBA_scaffold_29149 2794 2893 Species 0 + Then in order to extract the sequences from the coordinates in the Genome I used: bedtools getfasta -fi Genome.fa -bed bed_file -s -fo test
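For converting many hits in bulk, a small helper along these lines may help (my own sketch; the column order is assumed from the sample above, and note that BED starts are 0-based half-open, so a blast start of 4 becomes 3 in BED):

```python
# Assumed blastx tabular columns (from the sample above): qseqid qlen sseqid
# pident length mismatch gapopen qstart qend sstart send evalue bitscore.
def blast_to_bed(line, name="Species"):
    f = line.split()
    qstart, qend = int(f[7]), int(f[8])
    if qstart <= qend:
        start, end, strand = qstart - 1, qend, "+"  # BED: 0-based, half-open
    else:
        start, end, strand = qend - 1, qstart, "-"  # reversed => minus strand
    return "\t".join([f[0], str(start), str(end), name, "0", strand])

hit = "IDBA_scaffold_7517 493 YP_009316100 0.360 61 27 1 150 4 126 186 6.775E-03 37"
print(blast_to_bed(hit))
```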
{ "domain": "bioinformatics.stackexchange", "id": 1173, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fasta, bedtools, orf", "url": null }
assembly If I type: testing 123 hello loader overflow test [press . until it halts] I get the following correct output: At an earlier point in time I thought I would never need to learn a low level language. I was wrong. The blank comments are placeholders for once I learn more about what they do. I am primarily asking for feedback on the accuracy of the comments, then secondarily asking for efficiency of size and correctness of overall flow and layout. Lots of review of the raw mechanics of your code. Not much review of the design. So here are just a few design tips. Use BPBs, because other people will. start: jmp main ; goto main There are two reasons that Volume Boot Record programs do this. The first and foremost is in order to skip over an embedded BIOS Parameter Block. You may not think that you need a BPB in your volume. You almost certainly will. Others have thought that over the years, and later been bitten when they do things like not have a BPB and use tools from operating systems that require that various types of partition have a BPB in the VBR. It bit IBM in the OS/2 Boot Manager. It will bite you. The second is that there are even tools (broken ones, that are fortunately not in wide use any more) that key off the first instruction in a VBR, expecting it to be a jmp. Will your code ever be run on an original 8088 manufactured in 1979? Note that the cli and the sti are super important.
{ "domain": "codereview.stackexchange", "id": 14211, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "assembly", "url": null }
So, we can make $|{a}_{n}-L|<ϵ$ by making $|{c}_{n}-L|<ϵ/3$ and $|{b}_{n}-L|<ϵ/3.$ So, let ${N}_{1}$ be a positive integer such that $|{c}_{n}-L|<ϵ/3$ if $n\ge {N}_{1},$ and let ${N}_{2}$ be a positive integer so that $|{b}_{n}-L|<ϵ/3$ if $n\ge {N}_{2}.$ Then set $N=max\left({N}_{1},{N}_{2}\right).$ Clearly, if $n\ge N,$ then both inequalities $|{c}_{n}-L|<ϵ/3$ and $|{b}_{n}-L|<ϵ/3$ hold, and hence $|{a}_{n}-L|<ϵ.$ This finishes the proof. The next result establishes what are frequently called the “limit theorems.” Basically, these results show how convergence interacts with algebraic operations. Let $\left\{{a}_{n}\right\}$ and $\left\{{b}_{n}\right\}$ be two sequences of complex numbers with $a=lim{a}_{n}$ and $b=lim{b}_{n}.$ Then
{ "domain": "jobilize.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9857180669140007, "lm_q1q2_score": 0.8235512110752977, "lm_q2_score": 0.8354835309589073, "openwebmath_perplexity": 2087.281976108243, "openwebmath_score": 0.9977275133132935, "tags": null, "url": "https://www.jobilize.com/online/course/2-4-properties-of-convergent-sequences-by-openstax?qcr=www.quizover.com" }
ros, arduino, robotino, ros-lib Title: ros_lib arduino IDE issue Rosserial Hi. I'm trying to use ROS with the Arduino, using the following rosserial tutorials. http://www.ros.org/wiki/rosserial_arduino/Tutorials/Arduino%20IDE%20Setup After installing the Arduino IDE on Ubuntu 12.04 and copying the ros_lib folder into the sketchbook/libraries folder, I found that the IDE had not included these new files in the import library or the examples menu. I tried copying the required header files for the hello world tutorial into the same folder as the hello world sketch, but the compiler could not find them. I looked around and found that all the header files in the library need .cpp files, or that's what I understand. So basically I am unable to include any of these ros_lib files. Can anyone help? Originally posted by samarth.rajan on ROS Answers with karma: 56 on 2013-02-19 Post score: 0 Make sure you install the Arduino IDE into /usr/share. It seems that if you install in other places it doesn't construct the links it needs. Originally posted by blueminerals with karma: 16 on 2014-01-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12959, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, arduino, robotino, ros-lib", "url": null }
clojure Title: Idiomatic input parsing in clojure I've been playing around with Clojure for a while now, and one of the tasks I occasionally find awkward is parsing input. For example, I took part in the facebook hacker cup recently, where part of the challenge was to read input like this: 5 4 ..## ..## .... .... 4 ..## ..## #... .... 4 #### #..# #..# #### 5 ##### ##### ##### ##### ..... 5 ##### ##### ##### ##### ##### Where the first line is the number of "cases" to follow and the first line of each case is the number of lines in that case. It seems fairly obvious to me that the output of parsing this should be a list of "cases", but there's no obvious way to know when to split the data without reading part of the data first. (In this particular case I know I could look for lines with numbers vs. lines with #s, but I'm more interested in a general solution) I ended up implementing this like: (defn read-stdin [] (line-seq (java.io.BufferedReader. *in*))) (defn my-iterate "Calls iterate on input and returns the final result" [f n input] (nth (iterate f input) n)) (defn parse-grid "Parses a text grid of .s and #s and returns a set of coord pairs for each # element" [lines] (set (filter identity (for [[row line] (map-indexed vector lines) [col c] (map-indexed vector line)] (if (= c \#) [row col] nil)))))
{ "domain": "codereview.stackexchange", "id": 6763, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "clojure", "url": null }
# Color ball drawer probability question There is a ball drawer. Seven color balls will be drawn with the same probability ($$1/7$$). (black, blue, green, yellow, white, pink, orange) If Anson attempts $$9$$ times, what is the probability that he gets all $$7$$ different color balls? My work: I separate the answer to $$3$$ ways. 1. $$7$$ attempts -> done (get $$7$$ colors) 2. $$8$$ attempts -> done (get $$7$$ colors) 3. $$9$$ attempts -> done (get $$7$$ colors) Therefore, my answer is $$\frac{9C7 + 8C7 + 7C7}{7^7 \cdot (7!)}$$ However, I don't know it is correct or not. • Welcome to MSE. Your question is phrased as an isolated problem, without any further information or context. This does not match many users' quality standards, so it may attract downvotes, or closed. To prevent that, please edit the question. This will help you recognise and resolve the issues. Concretely: please provide context, and include your work and thoughts on the problem. These changes can help in formulating more appropriate answers. Aug 9 at 8:39 • Since nine balls are drawn, either one ball is selected three times and each of the others is drawn once or two balls are drawn twice each and each of the others is drawn once. Aug 9 at 9:37 • @Cycle, can you further explain your intuition behind taking those 3 cases? IMO, there will be two cases: (7 different balls + 2 balls of different colors among the seven) or (7 different balls + 2 balls of same colors among the seven). Aug 9 at 9:43 • @n. f. taussig, i know what u mean, but i don’t how to calculate my thoughts. eg 1- P(3 same color selected) Aug 9 at 10:04 • Are we to assume that the probability of drawing any color is always $1/7$ whatever number of draws we make ? Aug 9 at 10:27
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9895109105959842, "lm_q1q2_score": 0.839073352550247, "lm_q2_score": 0.8479677622198947, "openwebmath_perplexity": 489.87289812105536, "openwebmath_score": 0.5848356485366821, "tags": null, "url": "https://math.stackexchange.com/questions/4220311/color-ball-drawer-probability-question" }
ros, boost, archlinux, ogre Title: building visualization_common (ogre) on archlinux Following this guide I'm trying to get ROS installed on Arch Linux. I've worked my way to the very last step, which frequently gives linker errors related to boost (i.e. my boost libs are 1.5 but I believe ROS is currently expecting 1.4x). For the most part I've been able to get past these by adding rosbuild_link_boost(package system) to all the affected CMakeLists.txt files, but while building ogre in visualization_common this doesn't seem to be the right solution since everything in there is built outside of the ROS environment. So my question is this: Is it possible to have ROS use the default Arch Linux ogre install? This would be the easiest solution since it just works. Barring that, what is the proper way to go about linking in the boost system library for ogre? Originally posted by ryan0270 on ROS Answers with karma: 26 on 2012-10-26 Post score: 1 Well, this probably isn't useful any more, and Groovy is a much better target for Arch. But I got visualization common building with boost 1.5x with a patch to CMake/Dependencies.cmake : -set(OGRE_BOOST_COMPONENTS thread date_time) +set(OGRE_BOOST_COMPONENTS thread system date_time) Then adding the patch filename to the visualization_common/ogre/Makefile: -TARBALL_PATCH = hidden_window.patch hidden_window_glx_fix.patch multi_scene_manager.patch +TARBALL_PATCH = hidden_window.patch hidden_window_glx_fix.patch multi_scene_manager.patch ogre_boost150_system.patch Originally posted by pallegro with karma: 36 on 2012-12-03 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 11531, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, boost, archlinux, ogre", "url": null }
python, python-3.x, sorting, reinventing-the-wheel, iterator Note that the code in the post does not actually use merge, it only uses merge.from_iterable. So the obvious thing to do would be to write merge so that it takes an iterable, and avoid the need for the @from_iterable decorator and its associated confusion. Since headA and headB are not used independently of each other, you could change the calling convention for MergeLists so that it took an arbitrary number of linked lists, like this: def merge_linked_lists(*lists): "Merge multiple sorted linked lists." return merge(map(iter_nodes, lists))
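A minimal sketch of that calling convention (my own illustration: `Node` and `iter_nodes` are stand-ins for the post's linked-list types, and the stdlib's `heapq.merge` plays the role of the post's `merge` here):

```python
import heapq

class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def iter_nodes(head):
    # Walk a singly linked list, yielding each node's value in order.
    while head is not None:
        yield head.value
        head = head.next

def merge_linked_lists(*lists):
    "Merge multiple sorted linked lists into one sorted stream of values."
    return heapq.merge(*(iter_nodes(head) for head in lists))

a = Node(1, Node(4, Node(7)))
b = Node(2, Node(5))
print(list(merge_linked_lists(a, b)))  # → [1, 2, 4, 5, 7]
```

Because `heapq.merge` is lazy, this never materializes the lists; it pulls one value at a time from each iterator.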
{ "domain": "codereview.stackexchange", "id": 29407, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, sorting, reinventing-the-wheel, iterator", "url": null }
graphs, reference-request Title: Partition a bipartite graph into a complete matching and an independent set I am looking for a reference for the following theorem: Let $G$ be a bipartite graph with partitions $X$ and $Y$, each with the same number of vertices ($n$). There is a nonempty subset $Y_1 \subseteq Y$, and a partition of $X$ into disjoint subsets $X_1$ and $X_2$, such that: There is a complete matching between $X_1$ and a subset of $Y_1$; There are no edges between $X_2$ and $Y_1$. Intuitively, $X$ can be seen as a set of men and $Y$ can be seen as a set of women. An edge between $x \in X$ and $y \in Y$ means that "$x$ and $y$ like each other" ("like" is considered a symmetric relation). The goal is to find a subset of the women ($Y_1$) and a subset of the men ($X_1$), such that each man can marry a woman he likes without upsetting any of the other men ($X_2$), because no unmarried man likes any married woman. This sounds similar to Hall's marriage theorem, but the premise is simpler. And I am mainly looking for a reference that I can cite. Some special cases:
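To sanity-check the statement before hunting for a reference, a brute-force search works on small instances (my own sketch, not from the question; note that once $Y_1$ is fixed, taking $X_1$ to be every vertex of $X$ with a neighbour in $Y_1$ makes the no-edges condition on $X_2$ hold automatically):

```python
from itertools import combinations

def has_complete_matching(xs, ys, edges):
    # Backtracking: match each x in xs to a distinct neighbour in ys.
    def go(i, used):
        if i == len(xs):
            return True
        return any(go(i + 1, used | {y}) for y in ys
                   if y not in used and (xs[i], y) in edges)
    return go(0, frozenset())

def find_partition(n, edges):
    # Search nonempty Y1 ⊆ Y; X1 = all x adjacent to Y1, X2 = the rest.
    for r in range(1, n + 1):
        for y1 in combinations(range(n), r):
            x1 = [x for x in range(n) if any((x, y) in edges for y in y1)]
            if has_complete_matching(x1, set(y1), edges):
                return x1, list(y1)
    return None

edges = {(0, 0), (0, 1), (1, 1), (2, 2)}
print(find_partition(3, edges))  # → ([0], [0])
```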
{ "domain": "cs.stackexchange", "id": 5731, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "graphs, reference-request", "url": null }
machine-learning, naive-bayes-classifier \end{align} Let's take the logarithm of the expression above and we want to maximize the following \begin{align}&\sum_{i=1}^m \textbf1(y^{(i)}=1)\log(1-\theta_0)+\sum_{i=1}^m \textbf1(y^{(i)}=0)\log(\theta_0)+\\& \sum_{j=1}^n \sum_{k=1}^s \sum_{i=1}^m\left[ \textbf1(x_j^{(i)}=k, y^{(i)}=1) \log \theta_{j,k|y=1} + \textbf1(x_j^{(i)}=k, y^{(i)}=0) \log \theta_{j,k|y=0}\right]\end{align} subject to the condition that $$\sum_{k=1}^s \theta_{j,k|y=1}=1$$ $$\sum_{k=1}^s \theta_{j,k|y=0}=1$$ as well as the nonnegativity constraints. Let the Lagrange multipliers for the sum-to-$1$ constraints be $\lambda_{j,1}$ and $\lambda_{j,0}$. The Lagrangian that we want to optimize is
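The record breaks off at this point; a standard completion of the derivation (my own sketch of the usual steps, not recovered text) writes the Lagrangian and solves it, recovering the familiar count-based estimates:

```latex
\mathcal{L} = \text{(objective above)}
  + \sum_{j=1}^n \lambda_{j,1}\Bigl(1-\sum_{k=1}^s \theta_{j,k|y=1}\Bigr)
  + \sum_{j=1}^n \lambda_{j,0}\Bigl(1-\sum_{k=1}^s \theta_{j,k|y=0}\Bigr).
% Setting \partial\mathcal{L}/\partial\theta_{j,k|y=1} = 0 gives
\frac{\sum_{i=1}^m \textbf1(x_j^{(i)}=k,\, y^{(i)}=1)}{\theta_{j,k|y=1}} - \lambda_{j,1} = 0
\quad\Longrightarrow\quad
\theta_{j,k|y=1} = \frac{1}{\lambda_{j,1}}\sum_{i=1}^m \textbf1(x_j^{(i)}=k,\, y^{(i)}=1),
% and summing over k, the constraint fixes \lambda_{j,1} = \sum_i \textbf1(y^{(i)}=1), so
\theta_{j,k|y=1} = \frac{\sum_{i=1}^m \textbf1(x_j^{(i)}=k,\, y^{(i)}=1)}{\sum_{i=1}^m \textbf1(y^{(i)}=1)}.
```

The same steps with $y^{(i)}=0$ throughout give $\theta_{j,k|y=0}$: the MLE is the empirical conditional frequency, and the nonnegativity constraints are satisfied automatically since counts are nonnegative.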
{ "domain": "datascience.stackexchange", "id": 5234, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, naive-bayes-classifier", "url": null }