text (string, lengths 1–1.11k) · source (dict)
# How to understand the usage of Inner and Outer figuratively? ## Description: In Mathematica, functions like Thread, Inner, and Outer are very important and frequently used. For Thread: Thread[f[{a, b, c}]] gives {f[a], f[b], f[c]}; Thread[f[{a, b, c}, x]] gives {f[a, x], f[b, x], f[c, x]}; Thread[f[{a, b, c}, {x, y, z}]] gives {f[a, x], f[b, y], f[c, z]}. I understand these three usages easily and use them fluently. However, I have never mastered the usage of Inner and Outer, so I must consult the Mathematica documentation every time I need them. I cannot master them because I do not clearly understand the results they produce; namely, I always forget what construct they generate when executed. Typical usage cases of Inner and Outer are shown below: Inner usage: Inner[f, {a, b}, {x, y}, g] gives g[f[a, x], f[b, y]]; Inner[f, {{a, b}, {c, d}}, {x, y}, g] gives {g[f[a, x], f[b, y]], g[f[c, x], f[d, y]]}
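One way to remember the construct is that Inner is a generalized dot product: f plays the role of Times and g the role of Plus. A minimal Python sketch (hypothetical helper names, not Wolfram code) of the flat-list cases:

```python
def inner(f, u, v, g):
    """Mirrors Mathematica's Inner[f, u, v, g] for flat lists:
    apply f pairwise, then combine all results with g."""
    return g(*[f(a, x) for a, x in zip(u, v)])

def outer(f, u, v):
    """Mirrors Outer[f, u, v] for flat lists: f applied to every pair."""
    return [[f(a, x) for x in v] for a in u]

# With f = multiplication and g = sum, inner is the ordinary dot product.
dot = inner(lambda a, x: a * x, [1, 2], [3, 4], lambda *args: sum(args))
table = outer(lambda a, x: (a, x), [1, 2], [3, 4])
```

Seen this way, Inner contracts the two lists (one result), while Outer takes every combination (a table of results).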
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9362850039701655, "lm_q1q2_score": 0.810096330336868, "lm_q2_score": 0.8652240791017535, "openwebmath_perplexity": 7003.955651286775, "openwebmath_score": 0.17559194564819336, "tags": null, "url": "https://mathematica.stackexchange.com/questions/57760/how-to-understand-the-usage-of-inner-and-outer-figuratively/57769#57769" }
window-functions, window, parametric-eq $$x \in [-1, 1],\\ y \in [-1, 1]$$ yields window functions that are 2D similar. Window functions of that form are not generally separable, but some of them are, for example: $$w(x, y) = \frac{\cos(\pi x - \pi y)}{8} + \frac{\cos(\pi x + \pi y)}{8} + \frac{\cos(\pi x)}{4} + \frac{\cos(\pi y)}{4} + \frac{1}{4}\\ = \left(\frac{\cos\left(\pi x\right)}{2} + \frac{1}{2}\right)\left(\frac{\cos\left(\pi y\right)}{2} + \frac{1}{2}\right)$$
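The separability identity follows from the product-to-sum rule cos A cos B = [cos(A−B) + cos(A+B)]/2, and can be checked numerically; a small sketch using only the standard library (grid resolution chosen arbitrarily):

```python
import math

def w_sum(x, y):
    # Sum-of-cosines form of the 2D window.
    return (math.cos(math.pi * (x - y)) / 8 + math.cos(math.pi * (x + y)) / 8
            + math.cos(math.pi * x) / 4 + math.cos(math.pi * y) / 4 + 0.25)

def w_sep(x, y):
    # Separable (Hann-like) product form.
    return (math.cos(math.pi * x) / 2 + 0.5) * (math.cos(math.pi * y) / 2 + 0.5)

# The two forms agree everywhere on [-1, 1] x [-1, 1].
pts = [i / 10 for i in range(-10, 11)]
max_err = max(abs(w_sum(x, y) - w_sep(x, y)) for x in pts for y in pts)
```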
{ "domain": "dsp.stackexchange", "id": 3097, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "window-functions, window, parametric-eq", "url": null }
I think that it might be related to $^nC_r$ or similar but I don't have the maths to change it from something like pascal's triangle to something that I can work with. • Question: Do you count, say "0990" as an example when $n=2$? It's gonna be much harder if you don't. – Thomas Andrews Jan 27 '16 at 20:56 • @ThomasAndrews Given he says the answer is $10$ when $n=1$ means he must be including $00$ as a two digit number. – Gregory Grant Jan 27 '16 at 21:04 • @ThomasAndrews yes it is counted – Cjen1 Jan 27 '16 at 21:06 • The value you have for $n=3$ is actually the value for $n=4$. The value for $n=3$ is $55252$. – Thomas Andrews Jan 28 '16 at 0:05
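The count of 2n-digit strings (leading zeros allowed, per the comments) whose first-half digit sum equals the second-half digit sum can be brute-forced to check the values discussed; since the two halves are chosen independently, it is the sum of squared counts per digit sum:

```python
from collections import Counter
from itertools import product

def equal_halfsum_count(n):
    """Count 2n-digit strings (leading zeros allowed) whose first n digits
    sum to the same value as the last n digits."""
    sums = Counter(sum(d) for d in product(range(10), repeat=n))
    # Each half is chosen independently, so square the count for each sum.
    return sum(c * c for c in sums.values())
```

This reproduces 10 for n = 1 and, in line with Thomas Andrews' correction, 55252 for n = 3.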
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.983085087724427, "lm_q1q2_score": 0.8505888999545501, "lm_q2_score": 0.8652240895276223, "openwebmath_perplexity": 144.6144611556823, "openwebmath_score": 0.8320608735084534, "tags": null, "url": "https://math.stackexchange.com/questions/1629665/how-many-numbers-are-there-of-2n-digits-that-the-sum-of-the-digits-in-the-first" }
mars, doppler-effect, occultation freq = f_sc * (1 - t1 - t2 - t3 + t4 - t5) # print(f_sc*t1,"\n",f_sc*t2,"\n",f_sc*t3,"\n",f_sc*t4,"\n",f_sc*t5) return freq
{ "domain": "astronomy.stackexchange", "id": 7196, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mars, doppler-effect, occultation", "url": null }
keras, autoencoder Title: Autoencoder for cleaning outliers in a surface I have been looking at autoencoders from the Keras blog here: https://blog.keras.io/building-autoencoders-in-keras.html I was wondering what modifications would be necessary in order to give it different surfaces, i.e. 2-dimensional vectors, some of which have large spikes. For example, here we see a surface that looks clean:
{ "domain": "datascience.stackexchange", "id": 1930, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "keras, autoencoder", "url": null }
To be even more precise with the proof one can use the law of total probability to define the probability of survival as follows $$P(S)=P(U_1)P(S|U_1)+P(U_2)P(S|U_2)$$ where $U_1,U_2$ - events of the respective urn being picked by the king and $S$ - the event of survival. The above translates into $$P(S) = q\frac{n_w}{n_w + n_b} + (1 - q)\frac{n - n_w}{2n - n_w - n_b},$$ where $q=\frac{1}{2}$ - the probability of the king picking the first urn; $n_w, n_b = 0,\ldots,n$ - number of white and black marbles in the first urn respectively and $n = 100$ - total number of marbles of each colour. Maximizing for $n_w$ and $n_b$ gives $$\max_{n_w,n_b}P(S)|_{n=100}=\frac{149}{199}\approx0.7487$$ for either $n_w=1,n_b=0$ or $n_w=n-1,n_b=n$ since the problem is symmetrical. 3D plot of $P(S)|_{n=100}$ with the maxima in the top left and top right corners Interestingly $$\lim_{n\to\infty}\max_{n_w,n_b}P(S)=\frac{3}{4}$$ If you want a proof that your solution is optimal, consider the following:
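The maximization can be confirmed by exhaustive search over every admissible split, using exact rational arithmetic so the maximum 149/199 comes out exactly:

```python
from fractions import Fraction

n = 100  # marbles of each colour
best = Fraction(0)
for nw in range(n + 1):          # white marbles placed in the first urn
    for nb in range(n + 1):      # black marbles placed in the first urn
        if nw + nb == 0 or nw + nb == 2 * n:
            continue  # skip splits that would leave an urn empty
        p = (Fraction(1, 2) * Fraction(nw, nw + nb)
             + Fraction(1, 2) * Fraction(n - nw, 2 * n - nw - nb))
        best = max(best, p)
```

The search confirms the maximum 149/199 ≈ 0.7487, attained at (n_w, n_b) = (1, 0) and its mirror image.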
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9814534327754852, "lm_q1q2_score": 0.8437199336775127, "lm_q2_score": 0.8596637451167995, "openwebmath_perplexity": 381.8466916210627, "openwebmath_score": 0.6243056654930115, "tags": null, "url": "https://math.stackexchange.com/questions/2022884/probability-a-flaw-in-logic-the-emperors-proposition-with-marbles-and-two-urn" }
nuclear-physics, notation, nuclear-engineering, particle-accelerators, ions Title: Charge state in accelerator physics While asking for the calculation of magnetic rigidity for accelerators, I am seeing notations like '238-U-28+' and '197-Au-77+'. Previously I was comfortable seeing charge states like 40-Ca-1+, which would obviously mean a calcium ion with one positive charge, i.e. one that has lost one electron. But here U-28+ seems a little crazy to me.
{ "domain": "physics.stackexchange", "id": 72905, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nuclear-physics, notation, nuclear-engineering, particle-accelerators, ions", "url": null }
/* --- display test results --- */ fprintf(msgfp,"testpoint#%d...\n",ipt+1); prtpoint(" testpoint: ",&pt); /* current test point... */ prtpoint(" rotated: ",&ptrot); /* ...rotated around given axis */ prtpoint(" transpose rot: ",&pttran); /* ...transpose rotation */ prtpoint(" axis=z-axis: ",&pttoz); /* ...coords when axis=z-axis */ } /* --- end-of-for(ipt) --- */
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9674102524151827, "lm_q1q2_score": 0.8040563097680147, "lm_q2_score": 0.8311430520409023, "openwebmath_perplexity": 5552.375857425399, "openwebmath_score": 0.7075979113578796, "tags": null, "url": "https://math.stackexchange.com/questions/542801/rotate-3d-coordinate-system-such-that-z-axis-is-parallel-to-a-given-vector/543538" }
ros-melodic, debian I want to install ros-base pkg [..] on [..] Debian Stretch Image. [..] Firstly there are no melodic nor kinetic packages available after updating. Secondly I get: The following packages have unmet dependencies: catkin : Depends: python-catkin-pkg but it is not going to be installed E: Unable to correct problems, you have held broken packages. debian@beaglebone:~$ sudo apt install python-catkin-pkg Reading package lists... Done Building dependency tree Reading state information... Done python-catkin-pkg is already the newest version (0.4.8-100). So catkin wants python-catkin-pkg, which is said to be installed already. :( What do I have to do? Originally posted by segmentation_fault on ROS Answers with karma: 3 on 2018-08-22 Post score: 0
{ "domain": "robotics.stackexchange", "id": 31608, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic, debian", "url": null }
c#, database, generics, ms-access foreach (var prop in type.GetProperties()) { if (prop.IsDefined(typeof(ColumnAttribute), true)) { var nameAttr = (ColumnAttribute)prop.GetCustomAttributes(typeof(ColumnAttribute), true).FirstOrDefault(); result.AllProperties.Add(new QueryProperty { Property = prop, Name = nameAttr.Name }); } else { result.AllProperties.Add(new QueryProperty { Property = prop, Name = prop.Name }); } } var keys = result.AllProperties.Where(x => x.Property.IsDefined(typeof(KeyAttribute), true)); if (keys.Any()) { foreach (var key in keys) { result.KeyProperties.Add(key); } }
{ "domain": "codereview.stackexchange", "id": 25991, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, database, generics, ms-access", "url": null }
c, multithreading, pthreads /* Change working directory to daemon's */ slog(LOG_DEBUG, "DEBUG: daemonize(): changing working directory to %s\n", ddir); if(chdir(ddir)){ slog(LOG_ERR, "ERROR: daemonize(): chdir(): %s [%d]\n", strerror(errno), errno); return(EXIT_FAILURE); }; /* Close inherited descriptors and standard I/O descriptors */ slog(LOG_DEBUG, "DEBUG: daemonize(): closing file descriptors\n"); int i; for(i = getdtablesize();i>=0; --i){ close(i); }; /* Reopen stdin, stdout, stderr */ slog(LOG_DEBUG, "DEBUG: daemonize(): opening file descriptors\n"); stdin = fopen("/dev/null", "r"); /* fd = 0 */ stdout = fopen("/dev/null", "w+"); /* fd = 1 */ stderr = fopen("/dev/null", "w+"); /* fd = 2 */ if(stdin < 0 || stdout < 0 || stderr < 0){ slog(LOG_ERR, "ERROR: daemonize(): open(): %s [%d]\n", strerror(errno), errno); return(EXIT_FAILURE); };
{ "domain": "codereview.stackexchange", "id": 21326, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, multithreading, pthreads", "url": null }
acoustics, measurements The measurement method was specified by Guinness, and used a two-microphone coherent power measurement technique with two Type 4955 low-noise microphones. It's worth immediately noting that the B&K Type 4955 mics have a broadband inherent noise of 6.5 dBA, far greater than the -20.3 dBA of the room. This is what makes your question an interesting one. So how did they do it? With two very precisely in-phase microphones pointed at each other, the pressure at both microphone surfaces is known simultaneously, which allows a basic description of the pressure gradient between the microphones. Brüel & Kjær describes the calculation of intensity from this (pg. 10):
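As a rough illustration of the two-microphone (p–p) intensity principle — not Brüel & Kjær's exact procedure — the intensity at a single frequency can be estimated from the imaginary part of the cross-spectrum of the two pressure signals divided by ρωΔr. A sketch with a synthetic plane wave (sample rate, tone, and 12 mm spacing are all assumed for illustration):

```python
import numpy as np

fs, f0, N = 48_000, 1_000.0, 4_800       # sample rate, tone (Hz), block size
rho, c, dr = 1.21, 343.0, 0.012          # air density, speed of sound, mic spacing

t = np.arange(N) / fs
k = 2 * np.pi * f0 / c                   # wavenumber

# Plane wave travelling from mic 1 toward mic 2 (unit amplitude).
p1 = np.cos(2 * np.pi * f0 * t)
p2 = np.cos(2 * np.pi * f0 * t - k * dr)

b = int(f0 * N / fs)                     # tone falls on an exact FFT bin
P1, P2 = np.fft.rfft(p1), np.fft.rfft(p2)
G12 = 2 * np.conj(P1[b]) * P2[b] / N**2  # one-sided cross-spectrum estimate

omega = 2 * np.pi * f0
I_est = -G12.imag / (rho * omega * dr)   # p-p intensity estimate
I_ref = 0.5 / (rho * c)                  # plane-wave intensity p_rms^2/(rho c)
```

The finite-difference bias of the method is the factor sin(kΔr)/(kΔr), which for these numbers keeps the estimate within about one percent of the true plane-wave intensity.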
{ "domain": "physics.stackexchange", "id": 47312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "acoustics, measurements", "url": null }
well that there are many possible vectors to use here; we just chose two of the possibilities. The following code shows the DotProduct method. The dot product of two vectors is equal to the product of their lengths times the cosine of the angle between them. (The resultant, by contrast, is obtained when the components of each vector are added together.) In components, v · u = v1 u1 + v2 u2; NOTE that the result of the dot product is a scalar. (The cross product, also called the vector product, instead yields a vector.) Remarks: (a) The triple product of three vectors is a scalar. If you have two vectors written in component form, such as A = (1, 2, 3) and B = (-1, -2, -1), then A·B/|B| is the scalar projection of A onto B (a magnitude). To define multiplication between a matrix $A$ and a vector $\vc{x}$ (i.
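The scalar nature of the dot product and its relation to the angle can be shown in a few lines (the example vectors A and B are taken from the text above):

```python
import math

def dot(u, v):
    # Componentwise products summed: the result is a single scalar.
    return sum(a * b for a, b in zip(u, v))

A = (1, 2, 3)
B = (-1, -2, -1)

d = dot(A, B)                            # 1*(-1) + 2*(-2) + 3*(-1) = -8
norm = lambda v: math.sqrt(dot(v, v))    # |v| = sqrt(v . v)
cos_angle = d / (norm(A) * norm(B))      # from A . B = |A||B| cos(theta)
proj_A_on_B = d / norm(B)                # scalar projection of A onto B
```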
{ "domain": "sabrinamanin.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9881308772060157, "lm_q1q2_score": 0.8032193003064874, "lm_q2_score": 0.8128673223709251, "openwebmath_perplexity": 388.39075359230407, "openwebmath_score": 0.7167556881904602, "tags": null, "url": "http://sabrinamanin.it/c-program-to-find-dot-product-of-two-vectors.html" }
c++, performance, beginner We can also use standard algorithms to ease the job a bit, such as using std::copy_n to copy the last two items in the set to std::cout, assuming we find two or more missing numbers. Use of braces A fair number of people advise using braces around the statement(s) controlled by every if, while, for, etc. Personally I don't like (or usually follow) this advice, but it's pretty widespread, and if you decide to do otherwise, you should (IMO) be consistent about whether to do it or not. Summary Code incorporating all these could look something like this: std::set<int> s;
{ "domain": "codereview.stackexchange", "id": 19954, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, beginner", "url": null }
php, form, authentication Get rid of your unused code, for example the $query and $options variables in login. Leaving extra cruft around in your code is going to make it harder to maintain. "Session" and "login" are not the same thing. I don't understand why you conditionally start a session only upon login. A session is used for server-side storage of data related to the session only and should not be used to convey login state. You do need to understand best practices about regenerating session IDs and destroying session state based on login state. Review http://php.net/manual/en/features.session.security.management.php for a good starting point on managing sessions securely in PHP. Why are you using password_hash() at all in your login script? All you should be doing is getting the user record based on the user name and then using password_verify() to compare the stored hash against the login input.
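The hash-once-at-registration, verify-at-login pattern the reviewer describes looks like this in Python, using stdlib scrypt as a stand-in for PHP's password_hash()/password_verify() (a sketch, not a drop-in replacement):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Registration time: derive a salted hash and store salt + digest."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Login time: re-derive with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Note the login path never hashes from scratch with a fresh salt; it reuses the stored salt and compares, which is exactly why calling password_hash() inside a login script is a red flag.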
{ "domain": "codereview.stackexchange", "id": 23985, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, form, authentication", "url": null }
functions for algebraic variables in algebraic expressions like these (a is a constant): This will be a nifty trick to solve some integrals you probably haven't cracked yet: $$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C \quad (n \neq -1)$$ Integrals of rational functions. Variations of this mnemonic device can be found all around the internet. A useful reduction formula is $$\int \cos^m x \sin^n x\, dx = \frac{ \cos^{m-1} x \sin^{n+1} x }{m+n} + \frac{m-1}{m+n} \int \cos^{m-2} x \sin^n x\, dx$$ Trigonometric integrals (previous lecture): $$\int \sin x \cos x\, dx = \frac{1}{2} \int \sin 2x\, dx$$ Even if you are supplied with a table of integrals in examinations, learn as many as you can, and especially learn the conditions that apply. Sometimes, there are things you need to memorize. Trigonometric functions with $x^n$ and $e^{ax}$. Although we include this table of trigonometric integrals, it makes more sense. NOTE: The letter U means undefined. Download as PDF file. Integrating trigonometric functions. An online printable table of integrals and
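The reduction formula can be spot-checked with SymPy by differentiating its right-hand side for a specific pair of exponents (m = 3, n = 2 chosen arbitrarily) and confirming the original integrand comes back:

```python
from sympy import symbols, sin, cos, integrate, simplify, Rational

x = symbols('x')
m, n = 3, 2  # specific exponents for the spot check

# Right-hand side of the reduction formula for cos^m x sin^n x.
rhs = (cos(x)**(m - 1) * sin(x)**(n + 1) / (m + n)
       + Rational(m - 1, m + n) * integrate(cos(x)**(m - 2) * sin(x)**n, x))

# Differentiating the RHS should recover the integrand cos^m x sin^n x.
residual = simplify(rhs.diff(x) - cos(x)**m * sin(x)**n)
```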
{ "domain": "reverseplays.xyz", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9850429103332288, "lm_q1q2_score": 0.8292343875163289, "lm_q2_score": 0.8418256492357358, "openwebmath_perplexity": 1166.2791979654166, "openwebmath_score": 0.9463605284690857, "tags": null, "url": "https://reverseplays.xyz/7rieqf/trigonometric-integrals-table.php" }
mean. For example, when rainfall data is made available for different days in mm, any absolute measure of dispersion gives the variation in rainfall in mm. There are four absolute measures: the range, variance, absolute deviation, and standard deviation. Assume that the returns realized in example 2 above were sampled from a population comprising 100 returns. Calculate and interpret 1) a range and a mean absolute deviation and 2) the variance and standard deviation of a population and of a sample. In the above cited example, we observe that dispersion tells the variation of the data from one another and gives a clear idea about the distribution of the data. The actual variation or dispersion, determined from the standard deviation or other measures, is called absolute dispersion; relative dispersion, by contrast, is a measurement of the
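The four absolute measures mentioned are quick to compute with the standard library; a sketch with a made-up sample (the data values are illustrative, not from the text's example 2):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical observations

rng = max(data) - min(data)                          # range
mean = statistics.fmean(data)
mad = statistics.fmean(abs(x - mean) for x in data)  # mean absolute deviation
pop_var = statistics.pvariance(data)                 # population variance
pop_std = statistics.pstdev(data)                    # population standard deviation
samp_std = statistics.stdev(data)                    # sample standard deviation (n-1)
```

Note the population/sample distinction the text asks for: pvariance/pstdev divide by n, while variance/stdev divide by n − 1.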
{ "domain": "marymorrissey.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9802808701643914, "lm_q1q2_score": 0.8351530266828452, "lm_q2_score": 0.8519527944504227, "openwebmath_perplexity": 681.8413820680182, "openwebmath_score": 0.7993106245994568, "tags": null, "url": "http://www.marymorrissey.com/davinci-raspberry-rdcxs/1e3f95-measures-of-dispersion-examples" }
filters, z-transform Title: Transfer function of resonant filter with 2 poles, peak at $f_0 = 500\text{ Hz}$, and $\Delta f = 32\text{ Hz}$ This is a contest question. I'd like some help because I can't find any materials related to this topic. https://www.qconcursos.com/questoes-de-concursos/questao/ecd6c966-51 My English translation: Find the transfer function of a resonant filter with 2 poles, peak at $f_0 = 500\text{ Hz}$, $\Delta f = 32\text{ Hz}$, and sampling frequency $f_s = 10 \text{ kHz}$. -> a) $H(z) = \frac{0.062}{1-1.8831z^{-1}+0.09801z^{-2}}$ b) $H(z) = \frac{0.081}{3-2.641z^{-3}+0.09801z^{-2}}$ c) $H(z) = \frac{0.082}{1-1.8831z^{-1}+0.09801z^{-2}}$ d) $H(z) = \frac{0.762}{1-2.8831z^{-1}+0.09801z^{-2}}$ e) $H(z) = \frac{0.262}{2-1.8831z^{-1}+0.09801z^{-2}}$
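One common textbook recipe (not necessarily the one the contest assumes) places the poles at r·e^{±jθ} with r ≈ e^{−πΔf/fs} and θ = 2πf0/fs. Running the numbers reproduces the coefficient 1.8831 from the options and yields a z⁻² coefficient of 0.9801, which suggests the 0.09801 printed in the options is likely a typo:

```python
import cmath
import math

f0, df, fs = 500.0, 32.0, 10_000.0   # peak frequency, bandwidth, sample rate (Hz)

r = math.exp(-math.pi * df / fs)     # pole radius set by the 3 dB bandwidth
theta = 2 * math.pi * f0 / fs        # pole angle set by the peak frequency

a1 = -2 * r * math.cos(theta)        # coefficient of z^-1 in the denominator
a2 = r * r                           # coefficient of z^-2 in the denominator

# Numerator gain that normalizes the response to 1 at the peak frequency.
z = cmath.exp(1j * theta)
b0 = abs(1 + a1 / z + a2 / z**2)
```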
{ "domain": "dsp.stackexchange", "id": 6704, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, z-transform", "url": null }
First it should be specified that eaters can only buy distinct foods, i.e. a customer who buys five foods will buy five different foods. Now, taking the bucket image, we know from the problem statement that the number of tokens in each bucket is strictly more than two fifths of N (the number of customers). In mathematical language, the number of tokens in each bucket is at least ⌊2N/5⌋ + 1. Consequently the total number of tokens is at least 15⌊2N/5⌋ + 15. Now N can take five different values (mod 5), and we see that if N ≡ 2 (mod 5) then the total number of tokens is at least 6N + 3, and not 6N + 6 as stated above. This unfortunately invalidates the above argument. As an example take N = 7. If 6 customers buy four foods and one customer buys five foods, then the total number of tokens is 6×6 + 10 = 46. At the same time each bucket has at least three tokens, which gives a total of 45, which is compatible with the above figure. Therefore the problem should
{ "domain": "purdue.edu", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9811668739644686, "lm_q1q2_score": 0.813327293049016, "lm_q2_score": 0.8289388019824947, "openwebmath_perplexity": 613.3653647588111, "openwebmath_score": 0.32428979873657227, "tags": null, "url": "https://www.math.purdue.edu/pow/discussion/2017/spring/37" }
c, interpreter, brainfuck malloc result You should check the return value of malloc. Currently you'll (probably) just get a segfault in newnode if allocation fails. Code consolidation The case where a file name is provided and the case where stdin is used look very similar to me. You can probably pull that into a consume_stream function that takes a FILE*, and it will make your main a lot more manageable and get rid of code duplication. Depending on how you want to handle the line-based approach compared to file based, this might be a problem though. NULL vs 0 node *newnode() { node *n = malloc(sizeof(node)); n->prev = n->next = 0; n->jump = n->val = 0; return n; }
{ "domain": "codereview.stackexchange", "id": 14223, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, interpreter, brainfuck", "url": null }
parsing, file, bash Some other simplifications are possible too. You don't need the \+ in the pattern here: sed -n -e '/^[[:digit:]]\+/s/\/.*//p' | ... Instead of repeating the tab character like this: awk '{print $1 "\t" $3 "\t" $4 "\t" $NF}' You could set the output field separator to simplify a bit: awk '{OFS="\t"; print $1, $3, $4, $NF}'
{ "domain": "codereview.stackexchange", "id": 18891, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "parsing, file, bash", "url": null }
ros, ros-master-uri Originally posted by felix k with karma: 1650 on 2012-11-11 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 11693, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-master-uri", "url": null }
and "carefully design the function $\phi$". I know that the middle and right parts of the corollary basically represent Markov's inequality, but beyond that I don't understand what this means. I could use Markov's inequality alone to show that the probability of X being greater than 5 is less than or equal to $\frac{2}{5}$, but that is not good enough. How do I use the above hint to prove the probability is smaller than $\frac{1}{5}$? We have $$P(X > 5) = P(X^2 > 25) \leq P(X^2 \geq 25) \leq \frac{\mathsf{E}(X^2)}{25}$$ Moreover, since $\mathsf{E}(X) = 2$ and $\mathsf{Var}(X) = 1$, $$\mathsf{E}(X^2) = \mathsf{E}(X)^2 + \mathsf{Var}(X) = 5$$ • Thank you. This more directly answers the proof that my instructor was looking for and now that I see it in front of me it makes perfect sense. The answer from Proved Maroon Z was not wrong though. Thanks to both of you. – Halpneeded Jun 16 '17 at 14:54
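The bound E(X²)/25 = 5/25 = 1/5 can be sanity-checked empirically against any positive distribution with mean 2 and variance 1. Here a gamma distribution (shape 4, scale 1/2, so mean = 2 and variance = 1) is an assumed example distribution, not part of the original problem:

```python
import random

random.seed(0)
k, scale = 4, 0.5          # gamma(4, 1/2): mean = 4*0.5 = 2, variance = 4*0.25 = 1
N = 100_000
samples = [random.gammavariate(k, scale) for _ in range(N)]

frac_above_5 = sum(x > 5 for x in samples) / N
bound = (2**2 + 1) / 25    # E[X^2]/25 with E[X] = 2, Var[X] = 1
```

For this particular distribution the true tail probability is far below the bound, which is expected: the second-moment Markov bound holds for every such distribution, tight or not.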
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.978712651931994, "lm_q1q2_score": 0.8024399789397189, "lm_q2_score": 0.8198933337131076, "openwebmath_perplexity": 193.17900620471517, "openwebmath_score": 0.7484848499298096, "tags": null, "url": "https://math.stackexchange.com/questions/2324509/given-a-positive-valued-random-variable-with-known-mean-and-variance-find-the-p/2324528" }
logic, satisfiability, 3-sat It is not hard to see that this formula is unsatisfiable.
{ "domain": "cs.stackexchange", "id": 2295, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "logic, satisfiability, 3-sat", "url": null }
aqueous-solution Title: Calculating Ka values from std. state thermodynamic data My professor showed us a sample calculation of the Ka value for a strong acid. He used standard state thermodynamic data. I have an issue with this: 1) Wouldn't this give us an understated Ka value? 2) Standard state conditions start with 1 molar solutions of solute. So in writing the reaction equation of an acid with water and by using that equation to find the delta G std. of reaction and then using the relationship between delta G and K to find K, wouldn't we be running into a sort of "common ion" effect issue? There's already a 1 molar concentration of hydronium ion IN solution under standard state conditions. Finding the K of an acid dissolving in water that has a pH of 0.0 doesn't give us a Ka value (or does it?) EDIT to address Martin: This is what he did: $\ce{HNO3 + H2O -> H3O+ + NO3-}$ Find $\Delta G^\circ_r$ for the above reaction. Use $\Delta G^\circ_r = -RT \ln K$ to solve for K.
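The professor's last step is just K = exp(−ΔG°r/RT); a sketch with a hypothetical ΔG°r (the −40 kJ/mol below is illustrative, not the actual HNO3 value):

```python
import math

R = 8.314          # gas constant, J/(mol K)
T = 298.15         # standard temperature, K
dG = -40_000.0     # hypothetical standard reaction Gibbs energy, J/mol

K = math.exp(-dG / (R * T))   # from dG = -RT ln K
```

Note the direction of the sign: a negative ΔG°r gives K > 1, i.e. a dissociation that proceeds far to the right, which is the expected qualitative behaviour for a strong acid.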
{ "domain": "chemistry.stackexchange", "id": 1584, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "aqueous-solution", "url": null }
blackbody Title: If a black body is a perfect absorber, why does it emit anything? I'm trying to start understanding quantum mechanics, and the first thing I've come across that needs to be understood are black bodies. But I've hit a roadblock at the very first paragraphs. :( According to Wikipedia: A black body (also, blackbody) is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. OK, that's nice. It's an object that absorbs (takes in itself and stores/annihilates forever) any electromagnetic radiation that happens to hit it. An object that always looks totally black, no matter under what light you view it. Good. But then it follows with: A black body in thermal equilibrium (that is, at a constant temperature) emits electromagnetic radiation called black-body radiation.
{ "domain": "physics.stackexchange", "id": 71095, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "blackbody", "url": null }
php, mysql, comparative-review, pdo My suggestion would be to abandon such an approach in favor of writing individual classes for your database entities such as artikelen, or better yet using a proper PHP ORM. Never hard code database or other security credentials into your code. These should be derived from configuration that lies outside your code repository (ideally derived from environmental configuration). You should change your DB password NOW as you just posted all your credentials for the whole world to see. You have unreachable lines of code in updateStock(). When return is called, any lines of code after that call in that code path will never be reached. You should never have unreachable lines in your code. In class B, you should be using parametrized prepared statements vs. concatenating string values into your query strings. You are totally open to SQL injection as you are doing nothing to sanitize your parameters for use in the queries.
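The parametrized-statement advice translates directly to any driver; a minimal Python/sqlite3 sketch (table and column names invented for illustration) shows why binding defeats the injection the reviewer warns about:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE artikelen (id INTEGER PRIMARY KEY, naam TEXT)")
conn.execute("INSERT INTO artikelen (naam) VALUES (?)", ("widget",))

# Bound parameters are passed as data, never spliced into the SQL text,
# so a hostile value cannot change the query's structure.
hostile = "widget' OR '1'='1"
rows = conn.execute("SELECT * FROM artikelen WHERE naam = ?", (hostile,)).fetchall()

ok = conn.execute("SELECT * FROM artikelen WHERE naam = ?", ("widget",)).fetchall()
```

Had the hostile string been concatenated into the query text instead, the `OR '1'='1'` clause would have matched every row.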
{ "domain": "codereview.stackexchange", "id": 24200, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, mysql, comparative-review, pdo", "url": null }
python, symbolic-math, sympy Title: Polynomial calculation in SymPy This program is written in SymPy 1.9 and should find a polynomial g of degree dim for a given polynomial f such that f o g = g o f, as described here, where I already posted a program written in Maxima. This is my first program in SymPy, so I am interested in your feedback on this program, except on the algorithm itself. from sympy import symbols, pprint, expand, Poly, IndexedBase, Idx,Eq,solve from sympy.abc import x,f,g,i fg,gf,d=symbols('fg gf d') f=x**3+3*x # a polynomial g exists #f=x**3+4*x # a polynomial g does not exist dim=5 a=IndexedBase('a') i=Idx('i',dim) # create a polynomial of degree dim # with unknown coefficients, # but the coefficient of the highest power is 1 # and the lowest power is 1 g=a[0]+x**dim for i in range(1,dim): g+=x**i*a[i] # calculate fg = f o g # and gf = g o f fg=f gf=g fg=expand(f.subs(x,g)) gf=expand(g.subs(x,f))
{ "domain": "codereview.stackexchange", "id": 42600, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, symbolic-math, sympy", "url": null }
thermodynamics, evaporation When introducing the liquid into the chamber, it undergoes what is called flash evaporation: because the liquid is now superheated due to the pressure loss, part of it vaporizes. If there is not enough surface area to let this happen quickly enough, gas bubbles will form, just like in a pot of water on the stove. So there's the surface area as a limiting factor. The vapour then expands to fill the vessel at the speed of sound. The process cools both vapour and liquid down to the saturation temperature corresponding to the low pressure, so the next limiting factor would be how quickly heat can be transferred to the liquid to bring it back to the "boil". And then it's the surface area again.
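The energy balance behind flash evaporation gives the vapour fraction directly: the superheat stored in the liquid supplies the latent heat for the fraction that flashes. A sketch with assumed water properties (all numbers illustrative, not from the original post):

```python
cp = 4.18      # liquid water specific heat, kJ/(kg K), approximate
h_fg = 2358.0  # latent heat of vaporization near 60 C, kJ/kg, approximate

T_in = 100.0   # liquid temperature before the pressure drop, C
T_sat = 60.0   # saturation temperature at the reduced chamber pressure, C

# Fraction of the liquid that flashes to vapour from the stored superheat.
x = cp * (T_in - T_sat) / h_fg
```

For these numbers only about 7% of the liquid vaporizes; the rest is cooled to the new saturation temperature, matching the description above.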
{ "domain": "physics.stackexchange", "id": 12186, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, evaporation", "url": null }
quantum-state, haar-distribution &=\frac{1}{d} \operatorname{Tr}\left[ | 0 \rangle \langle 0 | \right] \operatorname{Tr}\left[ \Pi_{j} \right] = \frac{1}{d}, \end{align} where in the second equality, I bring the expectation value inside the trace (since the trace is linear), and in the third equality, I've used the following lemma: $\mathbb{E}_{U} [UXU^{\dagger}] = \operatorname{Tr}\left[ X \right] \frac{\mathbb{I}}{d}$. Notice that this average value does not depend on the choice of either $\psi$ or the basis $\mathbb{B}$. Namely, Haar-uniformity is such a strong assumption that it "coarse-grains" all details about the state of the system.
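The lemma $\mathbb{E}_{U}[UXU^{\dagger}] = \operatorname{Tr}[X]\,\mathbb{I}/d$ can be checked by Monte Carlo with Haar-random unitaries, sampled via the QR trick (Ginibre matrix, then phase-corrected QR); a numpy sketch for d = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
X = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|

def haar_unitary(d):
    # QR of a complex Ginibre matrix, phases fixed so the result is Haar.
    Z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

N = 20_000
avg = sum(U @ X @ U.conj().T for U in (haar_unitary(d) for _ in range(N))) / N

target = np.trace(X) * np.eye(d) / d     # Tr[X] I / d
```

With 20,000 samples the entrywise error is of order 1/√N, comfortably below a few percent.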
{ "domain": "quantumcomputing.stackexchange", "id": 2883, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-state, haar-distribution", "url": null }
control-theory, systems-engineering, systems-design sysd = c2d(sys, T); Ad = sysd.A; Bd = sysd.B; Cd = sysd.C; %extension A_ext = [ Ad [0 0 0]' -Cd 1 ]; B_ext = [Bd 0]; %desired poles p_des = [0.5 0.501 0.502 0.503]; K = -place(A_ext, B_ext, p_des); Kr = K(1:3); Ki = K(4); N = 100; %desired output yd = 0.05; x(:, 1) = [0 0 0]'; u(:, 1) = yd * Ki + Kr * x(:, 1); %(yd - 0) * Ki + Kr * x(:, 1) x(:, 2) = Ad * x(:, 1) + Bd * u(:, 1); y(:, 1) = Cd * x(:, 1); for i=2:N u(:, i) = (yd - y(:, i - 1)) * Ki + Kr * x(:, i); if (i < N) x(:, i + 1) = Ad * x(:,i) + Bd * u(:, i); end y(:, i) = Cd * x(:, i); end k = 1:N; plot(k, x');
{ "domain": "engineering.stackexchange", "id": 4574, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "control-theory, systems-engineering, systems-design", "url": null }
reference-request, lo.logic, proof-complexity, automated-theorem-proving Title: Automated theorem proving via unsupervised approaches This question Where and how did computers help prove a theorem? considers some automated theorem proving successes. However, they seem to be mostly supervised approaches: with the four color theorem, for example, the researcher did the hard conceptual, non-automatable work of narrowing the conjecture to a finite (but large) set of computer-checkable cases. Also, in other cases the computer may have found an "infinite"-type proof, but largely because the researcher closely intervened in or guided the overall theorem proving process. Are there any significant examples of what could be called unsupervised theorem proving successes? This would generally come in two forms:
{ "domain": "cstheory.stackexchange", "id": 2876, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reference-request, lo.logic, proof-complexity, automated-theorem-proving", "url": null }
quantum-mechanics, angular-momentum, operators, quantum-spin, rotation That said, if you have a specific $j$ you want to investigate, there is probably an analogous formula to $(\ast)$, because the $(2j+1)$th and higher powers must be linear combinations of the first $2j$ powers and the identity, so you can again fold the exponential series into $2j+1$ terms, $$ \exp(-i\theta \hat{\mathbf n}\cdot\hat{\mathbf J})=\sum_{k=0}^{2j}f_k(\theta) (\hat{\mathbf n}\cdot\hat{\mathbf J})^k, $$ where the $f_k$ are given by appropriate series. Depending on what comes out, this may or may not be useful, but again it is a lot of work for each specific $j$ you're interested in. To be a bit more explicit, let me show how this works for the simplest nontrivial orbital angular momentum, $j=1$. If you work in the $z$ direction you get, for the different powers of $\hat J_z$, $$ \mathbb 1=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} ,\quad \hat J_z=\hbar\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix} ,\quad \hat J_z^2=\hbar^2\begin{pmatrix}1&0&0\\0&0&0\\0&0&1\end{pmatrix}
{ "domain": "physics.stackexchange", "id": 19847, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, angular-momentum, operators, quantum-spin, rotation", "url": null }
c, array, pointers
#define array_give_fixed(p, ...) \
    do { \
        p = array_give_impl( \
            p, &(__typeof__(*p)[]){__VA_ARGS__}, \
            sizeof((__typeof__(*p)[]){__VA_ARGS__}) \
                / sizeof(__typeof__(*p))); \
    } while(0)

#define array_take_fixed(p, a) \
    array_take(p, a, sizeof(a) / sizeof(*a))

#define array_free(a) do { array_free_impl(a); a = NULL; } while(0)

#endif /* ARRAY_H */

array.c

#include "array.h"

void *array_alloc(size_t szElem, void (*freeElem)(void *))
{
    void *a = malloc(sizeof(Array_Header));
    Array_Header *header = a;
    header->freeElem = freeElem;
    header->szElem = szElem;
    header->ctElem = 0;
    header->cpElem = 0;
    return header + 1;
}
{ "domain": "codereview.stackexchange", "id": 44427, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, array, pointers", "url": null }
c#, asp.net, asp.net-mvc-4
            applicationUser.PictureSmalUrl = "/Scripts/assets/images/uploads/" + fileName;
            applicationUser.PictureUrl = "/Scripts/assets/images/uploads/" + fileName;
        }
    }
}

if (setting.FirstName != null && applicationUser != null)
{
    applicationUser.FirstName = setting.FirstName;
}
{ "domain": "codereview.stackexchange", "id": 9557, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, asp.net, asp.net-mvc-4", "url": null }
quantum-mechanics, operators, angular-momentum, commutator, rotation $$ etc., we find $$ e^{\lambda A} B e^{- \lambda A} = B + \frac{\lambda}{1!} [A,B] + \frac{\lambda^2}{2!} [A,[A,B]] + \frac{\lambda^3}{3!} [A,[A,[A,B]]] + \cdots. $$ Now consider the special case where $$ [A,[A,B]] = \beta B, $$ which is true for the problem you are interested in. This results in a simplification where all terms collapse into terms proportional to either $B$ or $[A,B]$. Explicitly, $$ e^{\lambda A} B e^{- \lambda A} = B + \frac{\lambda}{1!} [A,B] + \frac{\lambda^2}{2!} \beta B + \frac{\lambda^3}{3!} \beta [A,B] + \frac{\lambda^4}{4!} \beta^2 B + \cdots \\ = B \left\{ 1 + \frac{(\lambda \sqrt{\beta})^2}{2!} + \frac{(\lambda \sqrt{\beta})^4}{4!} + \cdots \right\} + \frac{[A,B]}{\sqrt{\beta}} \left\{ \frac{\lambda \sqrt{\beta}}{1!} + \frac{(\lambda \sqrt{\beta})^3}{3!} + \cdots \right\}. $$ Then you can compare this to the Taylor series for the hyperbolic functions, obtaining $$
{ "domain": "physics.stackexchange", "id": 83961, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, operators, angular-momentum, commutator, rotation", "url": null }
mfcc
Title: Cepstral Mean Normalization
Can anyone please explain Cepstral Mean Normalization, and how the equivalence property of convolution affects it? Is it a must to do CMN in MFCC-based speaker recognition? Why is the property of convolution a fundamental need for MFCC? I am very new to signal processing. Please help.

Just to make things clear - this property is not fundamental but important. It is the fundamental difference when it comes to using DCT instead of DFT for spectrum calculation.

Why do we do Cepstral Mean Normalisation

In speaker recognition, we want to remove any channel effects (impulse response of vocal tract, audio path, room, etc.). Providing that the input signal is $x[n]$ and the channel impulse response is given by $h[n]$, the recorded signal is a linear convolution of both:

$$y[n] = x[n] \star h[n]$$

By taking the Fourier Transform we get:

$$Y[f] = X[f]\cdot H[f] $$
{ "domain": "dsp.stackexchange", "id": 2275, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mfcc", "url": null }
observational-astronomy, photography, instruments, infrared, parker-solar-probe Of course anomalous emission of Venus' surface due to unexpected temperature might also be part of the explanation. So I'd like to ask: Question: Did they ever figure out why Parker's WISPR cameras were able to see the surface of Venus? Mischaracterized filter, or unexpected atmospheric window? Or maybe something else? There's actually a paper by Wood et al. (2022) that came out focused on this. It discusses the expected detection given the expected emission (using a model that combines the surface emission and the emission and radiative transfer through the Venusian atmosphere). They do mention checking to see if the detector might have been more sensitive than assumed, but the answer seems to be no:
{ "domain": "astronomy.stackexchange", "id": 6829, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "observational-astronomy, photography, instruments, infrared, parker-solar-probe", "url": null }
qiskit, programming
Title: Converting a dictionary to a statevector in Qiskit
Suppose we have a dictionary of computational basis states and their amplitudes:

{'01':0.5, '10':0.5, '11':0.5, '00':0.5}

How do I convert this (or any arbitrary dictionary) into a Statevector object? In this case the statevector should be
$$ | \psi \rangle = \frac{|00 \rangle + |01 \rangle + |10 \rangle + |11 \rangle}{2} $$

I don't think there is a built-in functionality for this in Qiskit. However, it is easy to implement. The code snippet below shows how to do that:

from qiskit.quantum_info import Statevector
import numpy as np

_state = {'01':0.5, '10':0.5, '11':0.5, '00':0.5}

# Convert dictionary into array:
num_qubits = len(next(iter(_state)))
data = np.zeros(2 ** num_qubits)
for key in _state:
    data[int(key, 2)] = _state[key]

psi = Statevector(data)

# Check:
psi.draw('latex')
{ "domain": "quantumcomputing.stackexchange", "id": 5225, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "qiskit, programming", "url": null }
java, datetime
Title: Calculate the number of days passed from a given year
I have this code for calculating the number of days passed from a given year:

for (int currentYear = localDate.getYear() - 1; currentYear >= STARTING_YEAR; currentYear--) {
    LocalDate date = LocalDate.of(currentYear, 5, 5); //an arbitrary day of year
    jdn += date.lengthOfYear();
}
jdn += localDate.getDayOfYear();

This code is working as expected. I am not comfortable with choosing an arbitrary day to only get the length of the year. Is there any better approach?

Your code scales poorly if there are many years between STARTING_YEAR and localDate. Worse, if localDate occurs before the STARTING_YEAR, then your code behaves as if STARTING_YEAR is the same year as localDate, which is weird behavior. To calculate the number of days between two dates, use DAYS.between().

import java.time.LocalDate;
import static java.time.temporal.ChronoUnit.DAYS;
{ "domain": "codereview.stackexchange", "id": 32798, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, datetime", "url": null }
newtonian-mechanics, gravity, newtonian-gravity, projectile However, especially as the projectile approaches this height, the force of gravity experienced by this object is not constant. Because of this, 13,300 m/s is greater than escape velocity, so it would have no max height. I'm at a loss on how to find the velocity of the projectile. Critically, the height depends on the velocity and the time. The velocity depends on the gravity and the time. The gravity depends on the height. As such, there's a circle of sorts, where I can't seem to figure out one without knowing the others, and since I don't know the others, I can't make any headway.
{ "domain": "physics.stackexchange", "id": 76418, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, gravity, newtonian-gravity, projectile", "url": null }
cc.complexity-theory, np-hardness, factoring
Title: Is integer factorization an NP-complete problem?
Possible Duplicate: What are the consequences of factoring being NP-complete? What notable reference works have covered this?

No, it's not known to be NP-complete, and it would be very surprising if it were. This is because its decision version is known to be in $\text{NP} \cap \text{co-NP}$. (Decision version: Does $n$ have a prime factor $\lt k$?)

It is in NP, because a factor $p \lt k$ such that $p \mid n$ serves as a witness of a yes instance. It is in co-NP because a prime factorization of $n$ with no factors $\lt k$ serves as a witness of a no instance. Prime factorizations are unique, and can be verified in polynomial time because testing for primality is in P.
{ "domain": "cstheory.stackexchange", "id": 14, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, np-hardness, factoring", "url": null }
physical-chemistry, everyday-chemistry, combustion
A word of warning: some ABC foams, such as those based on PFOS, are horrible and harmful. It might be best if you stick to a water spray system. The PFOS foams are considered to be a problem when they enter the environment; if I recall correctly they are very harmful to fish, and the perfluoro compound PFOS only breaks down very slowly in the environment.
{ "domain": "chemistry.stackexchange", "id": 10182, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, everyday-chemistry, combustion", "url": null }
programming-languages, language-design, types
list_induction : (Property : Vector len typ -> Type) ->                               -- a property to show
                 (Property Empty) ->                                                  -- the base case
                 ((w : a) -> (v : Vector n a) -> Property v -> Property (w :: v)) ->  -- the inductive step
                 (u : Vector m b) ->                                                  -- an arbitrary vector
                 Property u                                                           -- the property holds for all vectors
{ "domain": "cs.stackexchange", "id": 7692, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "programming-languages, language-design, types", "url": null }
convolutional-neural-networks, reference-request, convolution, convolutional-layers, 3d-convolution Most CNN models that learn from video data almost always have 3D CNN as a low level feature extractor. In the example you have mentioned above regarding the number 5 - 2D convolutions would probably perform better, as you're treating every channel intensity as an aggregate of the information it holds, meaning the learning would almost be the same as it would on a black and white image. Using 3D convolution for this, on the other hand, would cause learning of relationships between the channels which do not exist in this case! (Also 3D convolutions on an image with depth 3 would require a very uncommon kernel to be used, especially for the use case)
{ "domain": "ai.stackexchange", "id": 1300, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "convolutional-neural-networks, reference-request, convolution, convolutional-layers, 3d-convolution", "url": null }
electromagnetism, magnetic-fields, quantum-spin, electric-current, magnetic-monopoles As you have identified in your point (1), there is indeed a problem with the term magnetic dipole. Let's unpack the term starting with dipole. The simplest system in electromagnetism is the electric point charge at rest, which produces a spherically symmetric $1/r$ potential (it's easier to work with the potential here). This is called a monopole field, and a point charge is a monopole.
{ "domain": "physics.stackexchange", "id": 36430, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, magnetic-fields, quantum-spin, electric-current, magnetic-monopoles", "url": null }
python, multithreading, python-2.x, thread-safety, community-challenge
    def stop(self):
        elevators_mail.remove(self.mailbox)
        self.mailbox.put('shutdown')
        self.join()

class Floor(threading.Thread):
    def __init__(self, number=0):
        threading.Thread.__init__(self)
        self.mailbox = Queue.Queue()
        floors_mail.append(self.mailbox)
        self.number = number

    def run(self):
        while True:
            data = self.mailbox.get()
            if data == 'shutdown':
                sys.stdout.write(str(self) + ' shutting down' + '\n')
                return
            sys.stdout.write(str(self) + ' received: ' + str(data) + '\n')

    def stop(self):
        floors_mail.remove(self.mailbox)
        self.mailbox.put('shutdown')
        self.join()

    def call(self, data):
        banks_mail[0].put((self.number, data))

def broadcast_event(data):
    for q in elevators_mail:
        q.put(data)

b0 = Bank()
b0.start()
t1 = Elevator()
t2 = Elevator()
t3 = Elevator(8, 'DOWN')
t1.start()
t2.start()
t3.start()
{ "domain": "codereview.stackexchange", "id": 15887, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, multithreading, python-2.x, thread-safety, community-challenge", "url": null }
condensed-matter, computational-physics, tight-binding the Peierls phase. This is tremendously convenient, since then we get to use the same material parameters regardless of the magnetic field value, and the corresponding phase is computationally trivial to take into account. For electrons it amounts to replacing the hopping term $t_{ij}$ with $t_{ij}e^{i\frac{e}{\hbar}\int_i^j\mathbf{A}\cdot d\mathbf{l}}$. Finally, note that a beautiful and elucidating explanation for this phase can also be found in Feynman's Lectures (Vol. III, Chapter 21).
{ "domain": "physics.stackexchange", "id": 22764, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "condensed-matter, computational-physics, tight-binding", "url": null }
image-processing, color, java Title: How to Measure the Intensity / Saturation of a Color in an Image? I have a task of developing an application to measure the intensity of a color in an image. I am researching about how to go about it. Intensity refers to the purity of a hue. Intensity is also known as Chroma or Saturation. The highest intensity or purity of a hue is the hue as it appears in the spectrum or on the color wheel. Source
{ "domain": "dsp.stackexchange", "id": 6032, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, color, java", "url": null }
lagrangian-formalism, hamiltonian-formalism, variational-principle, action, non-locality \tag{7}$$ I just assumed this naively (since it would correct the contradiction), is this true, or am I making some mistake in my work? -- [1] This question deals with the Legendre transform for non-local Lagrangian formulations. In this answer we apply the general non-local theory developed in my Phys.SE answer here to OP's non-local example. Let us for simplicity assume that time belongs to the unit interval $[t_i,t_f]=[0,1]$. OP's non-local Lagrangian action functional reads (modulo some sign conventions$^1$) $$ \left. S[q,v]\right|_{v=\dot{q}}, \tag{A} $$ where $$ S[q,v]~:=~\frac{1}{2}\int_{[0,1]^2}\! dt~du~\delta(1\!-\!t\!-\!u)\left\{ v(t)v(u) -q(t)q(u)\right\} .\tag{B} $$ The corresponding Lagrangian eq. of motion reads $$ \ddot{q}~\approx~q,\tag{C} $$ i.e., exponentially increasing/decreasing solutions. The Lagrangian momentum is $$ p(t)~:=~\frac{\delta S[q,v]}{\delta v(t)}~=~v(1\!-\!t) .\tag{D}$$ The Hamiltonian functional becomes
{ "domain": "physics.stackexchange", "id": 20788, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "lagrangian-formalism, hamiltonian-formalism, variational-principle, action, non-locality", "url": null }
A population standard deviation is a fixed value calculated from every individual in the population, while a sample standard deviation is calculated from only some of them. We consider the formulas for both: the population standard deviation divides by the number of data values, whereas the sample standard deviation takes into account one less value than the number of data, i.e., you divide by one less than the number of values (n - 1).
{ "domain": "tokocase.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.975576914916509, "lm_q1q2_score": 0.8043260247044693, "lm_q2_score": 0.8244619285331332, "openwebmath_perplexity": 390.95259407910424, "openwebmath_score": 0.8247133493423462, "tags": null, "url": "https://tokocase.com/61s0qt8/population-standard-deviation-9f9839" }
pl.programming-languages, type-theory If there's a standard reference for this, even better. Benjamin Pierce's "Types and Programming Languages" lists the necessary subtype rules near the end of chapter 21, but doesn't define the join rules. I thought I wanted something more like gasche proposed in his answer, but after talking to some colleagues in my lab I realized I was just using iso-recursive types wrong in my examples, which led to me wanting a larger subtyping relation than iso-recursive types usually have (specifically, allowing types with different numbers of unrollings to be subtypes). After fixing my code to use iso-recursive types properly, I found the standard subtyping based on the Amber rule (Cardelli 1986) would work just fine for my examples. Based on that, this seems to be the corresponding natural rule for join: $\mu X. \tau \sqcup \mu X'. \tau' = \mu X''. \tau''$ where $X''$ is a fresh type variable and $\tau'' = (\tau[X \leftarrow X''] \sqcup \tau'[X' \leftarrow X''])$.
{ "domain": "cstheory.stackexchange", "id": 4008, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "pl.programming-languages, type-theory", "url": null }
visible-light, refraction, dispersion higher (angular) frequencies have a greater deflection than lower frequencies through a glass prism is false - materials have a refractive index that trends lower at higher frequencies (for X rays the refractive index is almost 1.0). But it is true for the limited frequencies that correspond to visible light, for a transparent glass prism (no absorption bands, i.e. colorless glass). Considering a dielectric as made up from different damped harmonic oscillators, as you approach resonance you will initially get an increase in response amplitude and then a decrease, as you pass through resonance. This means that the refractive index as a function of frequency tends to follow a curve like this (after Hecht and Zajak, "Optics" (1973), figure 3.14)
{ "domain": "physics.stackexchange", "id": 42312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "visible-light, refraction, dispersion", "url": null }
Since there should be 4 letters in a code (X-X-X-X) and each letter can take 5 values (A, B, C, D, E) then total # of combinations of the letters only is 5*5*5*5=5^4. Now, we are told that the first and last digit must be a letter digit, so number digit can take any of the three slots between the letters: X-X-X-X, so 3 positions and the digit itself can take 3 values (1, 2, 3). So, total # of codes is 5^4*3*3=5,625.

Similar question to practice: a-4-letter-code-word-consists-of-letters-a-b-and-c-if-the-59065.html

Hope it helps.
_________________
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Location: Pune, India
Re: Maths Question on Combinations
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. Yes\n2. Yes\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 1, "lm_q1q2_score": 0.81047890180374, "lm_q2_score": 0.81047890180374, "openwebmath_perplexity": 1766.9377120838003, "openwebmath_score": 0.6206804513931274, "tags": null, "url": "http://gmatclub.com/forum/a-5-digit-code-consists-of-one-number-digit-chosen-from-132263.html" }
molecular-biology, homework, transcription
I guess your answer might be wrong. The question should be read under another assumption: how does this protein HisP regulate histidine biosynthesis, by positive or negative feedback regulation? Generally, amino acid synthesis is regulated by negative feedback loops so that cells can control the amount of amino acids they want. In this case, the answer should be "more tightly": as HisP functions as a repressor, it should bind the promoter more tightly so as to repress transcription more, which then generates less histidine synthesis enzyme. (I believe this is what your teacher wants you to answer.) In the other case, biosystems sometimes have positive feedback regulation so that they can amplify the sensitivity to environmental noise or generate bistability (phenotypic switching). In that case, the protein would bind the promoter less tightly in order to generate more histidine.
{ "domain": "biology.stackexchange", "id": 271, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "molecular-biology, homework, transcription", "url": null }
java, game, gui, graphics
Refresh rf = new Refresh();
Thread t = new Thread(rf);
t.start();
while (true) {
    try {
        Random r = new Random();
        Thread.sleep(r.nextInt(40));
        double dis = Math.sqrt(xDis*xDis + yDis*yDis);
        double easingAmount = 180/b.size;
        if (dis > 1) {
            b.x += easingAmount*xDis/dis;
            b.y += easingAmount*yDis/dis;
        }
        if (r.nextInt(10) == 5) {
            int randX = r.nextInt(600);
            int randY = r.nextInt(600);
            Dot d = new Dot(randX, randY);
            synchronized (dots) {
                dots.add(d);
            }
            mf.add(d);
            mf.repaint();
            System.out.println(score);
        }
    } catch (Exception e) {
    }
}
}

class Refresh implements Runnable {
{ "domain": "codereview.stackexchange", "id": 16010, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, gui, graphics", "url": null }
discrete-signals, gnuradio $$A[n]e^{j\phi[n]}A[n]e^{-j\phi[n]} = A^2[n]$$ Thus we see we get a real result and specifically the square of the magnitude. However to be very clear, this will only occur reliably when the signal and its conjugate product are synchronized in time. The OP has stated that the conjugate product is the signal with a delayed version of itself, as: $$A[n]e^{j\phi[n]}A[n-1]e^{-j\phi[n-1]} = A[n]A[n-1]e^{j(\phi[n]-\phi[n-1])}$$ This is a classic conjugate product frequency discriminator that can be used to correct for carrier offsets. The result gives us the phase between the two successive samples. It will only be real when there is no phase variation between adjacent samples. The two samples are separated in time, and frequency is given as the derivative of phase versus time (change in phase versus a change in time).
{ "domain": "dsp.stackexchange", "id": 10684, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, gnuradio", "url": null }
solar-system, pluto, neptune (Pluto is invisible but at the center of the cross-- the bright object nearby is 13 Oph, not Pluto). A perhaps more interesting question: what is the closest approach of the two orbits, even if the planets in question aren't occupying that given point in the orbit.
{ "domain": "astronomy.stackexchange", "id": 2438, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solar-system, pluto, neptune", "url": null }
ros, action, rospy, actionlib, action-server
# Add action files
add_action_files(
  DIRECTORY action
  FILES CustomAction.action
)

# Install python scripts using distutils
catkin_python_setup()

# Generate action messages
generate_messages(
  DEPENDENCIES actionlib_msgs
)

## DEPENDS: system dependencies of this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## INCLUDE_DIRS:
## LIBRARIES: libraries you create in this project that dependent projects also need
catkin_package(
  DEPENDS ${SYS_DEPS}
  CATKIN_DEPENDS ${CATKIN_DEPS}
  INCLUDE_DIRS
  LIBRARIES
)

include_directories(include ${catkin_INCLUDE_DIRS})

add_executable(my_node1 src/my_node1.cpp)
target_link_libraries(my_node1 ${catkin_LIBRARIES})

install(TARGETS my_node1
  ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
  LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
  RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
{ "domain": "robotics.stackexchange", "id": 14536, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, action, rospy, actionlib, action-server", "url": null }
lo.logic, time-complexity
Q1. What is the output of $M_{const}( M_{paradox} )$ ?
Update: There was another question Q2 here, but I decided to post it as a new "Part II" question to avoid confusion.

$\def\mc{M_\mathit{const}}\def\mp{M_\mathit{paradox}}$Let me for the record write up the answer to Q1, so that it doesn’t live only in the comments. The reasoning given in steps 1–5 in the question is correct in the real world. Thus, $\mc(\mp)$ outputs NO, and $\mp(x)$ halts in constant time, but there is no short enough proof of this in ZFC. When trying to formalize this argument in ZFC, the problematic step is 2: here, we need to assert, in ZFC itself, the implication
(1) If ZFC proves ‘$\mp$ halts in constant time’ by a proof of length $\le|\mp|^2$, then $\mp$ halts in constant time.
In general, statements of the form
(2) If ZFC proves $\phi$, then $\phi$.
{ "domain": "cstheory.stackexchange", "id": 4316, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "lo.logic, time-complexity", "url": null }
c#, beginner, object-oriented, winforms, playing-cards
        public int RightCard { get; set; }
        public string Name { get; set; }
        public int? Chips { get; set; }
        public int Type { get; set; }
        public bool Turn { get; set; }
        public bool FoldTurn { get; set; }
        public int PreviousCall { get; set; }
        public int LeftCard { get; set; }
        public double Power { get; set; }
        public int EnumCasted { get; set; }
    }
}

Edit Adding GitHub for the project : https://github.com/tempAccount741/PokerReview

Quick glance, going over this from top to bottom:

public enum TableCards
{
    FirstCard = 12,
    SecondCard = 13,
    ThirdCard = 14,
    FourthCard = 15,
    FifthCard = 16
}

Do the assigned indices have any special meaning?? why are they there? Either add a comment explaining wtf they mean, or shred them. I daresay you don't actually need them...

#region Variables
{ "domain": "codereview.stackexchange", "id": 18928, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner, object-oriented, winforms, playing-cards", "url": null }
python, parsing, file, database, django
In the else, you know record == '', and '' + stripped_string is always the same as stripped_string.

if record != '':
    # Dispatch the previous record.
    dispatch_record(record)
    # Reset the used record.
    record = ''
    # Assign the current record.
    record = stripped_string
else:
    record = stripped_string

In both branches, the last line is the same, so we can move it out, and drop the else which is now empty.

if record != '':
    # Dispatch the previous record.
    dispatch_record(record)
    # Reset the used record.
    record = ''
# Assign the current record.
record = stripped_string
{ "domain": "codereview.stackexchange", "id": 19350, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, parsing, file, database, django", "url": null }
beginner, kotlin
Title: Springer book downloader in Kotlin
I recently started with Kotlin and am hoping to improve. I wrote a small app to parse the list of free Springer books and download the books into your chosen local folder. Comments around obvious mistakes, unidiomatic Kotlin, and any other points of improvement will be greatly appreciated. Thank you.

Gradle dependencies:

implementation("org.apache.poi:poi:4.1.2")
implementation("org.apache.poi:poi-ooxml:4.1.2")
implementation("org.jsoup:jsoup:1.13.1")

Kotlin code:

package dev.rayfdj.kotlinutils.springer

import org.apache.poi.ss.usermodel.WorkbookFactory
import org.jsoup.Jsoup
import java.io.File
import java.io.FileOutputStream
import java.net.URL
import java.nio.channels.Channels
import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.Paths
{ "domain": "codereview.stackexchange", "id": 38251, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, kotlin", "url": null }
vba, excel Set columnToParse = GetUserInputRange If columnToParse Is Nothing Then Exit Function If columnToParse.Columns.Count > 1 Then MsgBox "You selected multiple columns. Exiting.." Exit Function End If Dim columnLetter As String columnLetter = ColumnNumberToLetter(columnToParse) Dim result As String result = MsgBox("The column you've selected to parse is column " & columnLetter, vbOKCancel) If result = vbCancel Then MsgBox "Process Cancelled." Exit Function End If lastRow = Cells(Rows.Count, columnToParse.Column).End(xlUp).Row Set UserSelectRange = Range(Cells(2, columnToParse.Column), Cells(lastRow, columnToParse.Column)) End Function
{ "domain": "codereview.stackexchange", "id": 19596, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel", "url": null }
potential, capacitance Title: Potential across the plate of a capacitor So I came across this question: Two capacitors are connected in series of given capacitance $5\ \mu\mathrm F$ and $10\ \mu\mathrm F$. The first plate of the $5\ \mu\mathrm F$ capacitor is given a potential of $100\ \mathrm V$ and the second plate of the $10\ \mu\mathrm F$ capacitor is earthed. Find the potential of the other two plates. So when they say that the 2nd plate is earthed, does it imply that the circuit is completed and also is the potential of each plate nothing but the potential of the capacitor? No, the circuit is not complete. If it were complete, the capacitors would discharge and the first plate would be at 0 potential. Next, the potential of a capacitor and of a plate are different things: a capacitor has a potential difference between its two plates. Since both are connected in series, the net capacitance is $10/3\ \mu\mathrm F$. $$Q = CV = \frac{10}{3}\times 100 = \frac{1000}{3}\ \mu\mathrm C,$$ which is the charge on both capacitors. Then $$Q = C_1 V_1 \implies \frac{1000}{3} = 5\,V_1 \implies V_1 = \frac{200}{3}\ \mathrm V.$$
{ "domain": "physics.stackexchange", "id": 28055, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "potential, capacitance", "url": null }
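The series-capacitor arithmetic in that answer can be checked with exact fractions (a quick sketch; units are µF, µC, and V):

```python
from fractions import Fraction as F

C1, C2 = F(5), F(10)               # capacitances in µF
V_top, V_ground = F(100), F(0)     # potentials of the two outer plates in V

C_series = C1 * C2 / (C1 + C2)     # series combination: 10/3 µF
Q = C_series * (V_top - V_ground)  # same charge on both capacitors: 1000/3 µC

V1 = Q / C1                        # drop across the 5 µF capacitor: 200/3 V
V_mid = V_top - V1                 # potential of the two middle plates: 100/3 V

# Consistency: the drop across the 10 µF capacitor must take us to ground.
assert V_mid - Q / C2 == V_ground
print(C_series, Q, V1, V_mid)
```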
python, etl, project-planning, best-practice Create Python classes/functions (even if they are just simple wrappers in some cases) in separate modules that do each of the steps required, and then use the Jupyter notebooks only to run things as needed. Another advantage of this kind of modularity is that you can run things in parallel, for example if you are doing two different preprocessing routines to create different feature sets. If on the other hand you find yourself running everything most of the time, then the single notebook might be the best way to go, since you save time by not saving/loading for no reason; in my experience, though, that is almost never the case.
{ "domain": "datascience.stackexchange", "id": 11845, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, etl, project-planning, best-practice", "url": null }
bigdata, efficiency, performance For example, consider an enterprise that continuously crawls webpages and parses the data collected. For each sliding window, different data mining algorithms are run on the extracted data. Why would the developers pass on using available libraries/frameworks (be it for crawling, text processing, or data mining)? Using what is already implemented would not only ease the burden of coding the whole process, but also save a lot of time. In a single shot:
{ "domain": "datascience.stackexchange", "id": 27, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "bigdata, efficiency, performance", "url": null }
python, python-3.x, random class RandomIdsGenerator: """ Generate random ids from a specific number of auto-generated unique ids. For instance: you may want to generate 1000 random user ids drawn from 10 unique ids. How it works: it generates a random number in a specific range, hashes that number with md5, and returns its hexdigest. """ __slots__ = ['__n_unique_id', '__start_num', '__end_num']
{ "domain": "codereview.stackexchange", "id": 37186, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, random", "url": null }
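A minimal sketch of what that docstring describes (the class name suffix, the method name `next_id`, and the constructor parameters are my own guesses, not the original class's API):

```python
import hashlib
import random

class RandomIdsGeneratorSketch:
    """Draw a random integer from a small fixed range and return its
    md5 hexdigest, so arbitrarily many ids map onto a few unique ones."""
    def __init__(self, n_unique_ids, start_num=0):
        self._start = start_num
        self._end = start_num + n_unique_ids - 1

    def next_id(self):
        n = random.randint(self._start, self._end)
        return hashlib.md5(str(n).encode()).hexdigest()

gen = RandomIdsGeneratorSketch(10)
ids = {gen.next_id() for _ in range(1000)}
print(len(ids))  # at most 10 distinct ids, however many draws we make
```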
species-identification, entomology Title: Bug on wall identification request I've been finding a few of these on the same wall of my apartment the past month. Size is maybe 3-5 mm. I live in an apartment in Toronto Canada. Thanks in advance! Without clearer photos it is pretty hard to say, however, I think it is likely to be one of the Dermestidae family of beetles. These include a bunch of common pests in the household, including "carpet beetles" (Anthrenus sp.), and the larder beetle (Dermestes lardarius). I think this is most likely to be a carpet beetle, given the light/dark mottled pattern on the shell.
{ "domain": "biology.stackexchange", "id": 11835, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "species-identification, entomology", "url": null }
haskell filterByIngredients :: [Ingredient] -> [Drink] -> [Drink] filterByIngredients ingredients = filter (canMake ingredients) main = do drinks <- readFile "drinks" ingredients <- readFile "ingredients" putStrLn $ case parse drinksFile "drinksFile" drinks of Left error -> "Error parsing drinks file." Right parsedDrinks -> showPossibleDrinks (lines ingredients) parsedDrinks showPossibleDrinks :: [Ingredient] -> [Drink] -> String showPossibleDrinks = showDrinks . (filterByIngredients ingredients) Given the succinctness of your methods, moving towards pointfree form is quite readable and more idiomatic. If we import the <$> operator from Control.Applicative after transposing the function's arguments we can make filterByIngredients completely pointfree: filterByIngredients :: [Ingredient] -> [Drink] -> [Drink] filterByIngredients = filter <$> canMake
{ "domain": "codereview.stackexchange", "id": 7297, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "haskell", "url": null }
quantum-field-theory, particle-physics, standard-model, higgs any thoughts as always very much appreciated! What do you mean exactly by "interact differently with the Higgs"? I can come up with two reasonable definitions: Some difference in the internal structure makes them interact differently. In this case this very difference is called "generation", so nothing new here. The mode of the interaction is somehow chosen spontaneously. I am almost 100% sure that in this case the model is inconsistent with observations. Anyway, this question needs clarification.
{ "domain": "physics.stackexchange", "id": 39162, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, particle-physics, standard-model, higgs", "url": null }
• Welcome to StackExchange! Although we are not a homework answer site, we are more than happy to help if you provide us with exactly what steps you have already taken to solve the problem and we can help guide you the rest of the way. Please edit your question to provide what you have done so far. – rb612 Nov 13 '17 at 5:13 • @rb612 My apologies! This isn't a homework problem. It was an extra credit problem that was given to us, and I wanted to attempt it but I was unsure of where to begin. I've never used this forum before. I've edited my question. – LaylaA312 Nov 13 '17 at 5:26 • @EricTowers I've updated my question! My apologies. – LaylaA312 Nov 13 '17 at 5:29
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.992767234698562, "lm_q1q2_score": 0.8294406807663524, "lm_q2_score": 0.8354835371034368, "openwebmath_perplexity": 70.61997753290618, "openwebmath_score": 0.8322858214378357, "tags": null, "url": "https://math.stackexchange.com/questions/2517748/a-tangent-line-to-y-frac1x2-cuts-the-x-axis-at-a-and-y-at-b-minim" }
c++, algorithm, image, error-handling, c++20 template<class ExPo, class InputT> requires (std::is_execution_policy_v<std::remove_cvref_t<ExPo>>) constexpr static Image<InputT> divides(ExPo execution_policy, const Image<InputT>& input1, const Image<InputT>& input2) { return pixelwiseOperation(execution_policy, std::divides<>{}, input1, input2); } template<arithmetic ElementT = double> constexpr static auto abs(const Image<ElementT>& input) { return pixelwiseOperation([](auto&& element) { return std::abs(element); }, input); } template<class ExPo, arithmetic ElementT = double> requires (std::is_execution_policy_v<std::remove_cvref_t<ExPo>>) constexpr static auto abs(ExPo execution_policy, const Image<ElementT>& input) { return pixelwiseOperation(execution_policy, [](auto&& element) { return std::abs(element); }, input); }
{ "domain": "codereview.stackexchange", "id": 42578, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, algorithm, image, error-handling, c++20", "url": null }
newtonian-mechanics Any help is welcome! Edit: (to make the question more clear) What we know: F, the weight of a person (his mass times g) d, the distance the person is sitting from the fixture Mp, something in N about the bending strength of the wood (we got it from a table) What we want to know: The force that the wood applies on the person. I would guess that you are interested in achieving a comfortable springiness, a comfortable deflection under weight. Calculating it is just a tool to get you there. It may be easier to try different things out until you find the right feel. After all, the calculation you are looking for assumes a person sits at a point. That point is their center of mass. For a standing person that is about at your butt. But for a sitting person, it changes if you lean forward or lean back. Also the calculation assumes they sit still. And if you did calculate it and got an answer of 1 cm, would that tell you if the deflection was right?
{ "domain": "physics.stackexchange", "id": 47228, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics", "url": null }
python, tic-tac-toe def do_turn(board): draw(board) index, mark = get_move(board) board[(index-1)//3][(index-1)%3] = mark def round_winner(): board = initialize_board() turn = 0 while not check_three(board): print("Turn: {}".format(turn)) do_turn(board) turn += 1 draw(board) return check_three(board) def main(): goal_score = get_goal_score() current_score = {"x": 0, "o": 0} round = 1 while continue_game(current_score, goal_score): print("Round: {}".format(round)) current_score[round_winner()] += 1 print("Current score: x: {x}, o: {o}".format(**current_score)) round += 1 winner = "." for key, val in current_score.items(): if val == goal_score: winner = key print("Player {} wins!".format(winner)) if __name__ == "__main__": main()
{ "domain": "codereview.stackexchange", "id": 21426, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, tic-tac-toe", "url": null }
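The board update in do_turn relies on mapping a 1-based cell number 1..9 onto a 3x3 grid; a quick check of that arithmetic:

```python
def cell(index):
    # (index - 1) // 3 gives the row, (index - 1) % 3 the column,
    # exactly as in board[(index-1)//3][(index-1)%3] above
    return (index - 1) // 3, (index - 1) % 3

for index in range(1, 10):
    print(index, cell(index))  # 1 -> (0, 0), 5 -> (1, 1), 9 -> (2, 2)
```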
rna, virus Title: How does a prophage leave the host cell's genome? I understand that, unlike a prophage, a provirus never leaves the genome, but I don't understand how the prophage "leaves". This is explained reasonably well at the Wikipedia entry for phage λ, which is the prototype lysogenic phage. Basically, integration occurs by site-specific recombination between the attP region on the phage genome and the attB region in the host genome. This means that in the prophage form the phage DNA is flanked by direct repeats. Excision involves the reverse process - recombination between these direct repeats. Excision is promoted by the phage-encoded Xis and Int proteins in combination with the host protein Ihf. The regulatory cascade which links cellular stress to this recombination event is detailed at the linked page.
{ "domain": "biology.stackexchange", "id": 1640, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rna, virus", "url": null }
python, gui, pyqt def buildUi(self): self.gridLayout = QtGui.QGridLayout() self.gridLayout.setSpacing(10) for index, (key, values) in enumerate(self._data.iteritems()): getLbl = QtGui.QLabel("Get", self) label = QtGui.QLabel(key, self) chkBox = QtGui.QCheckBox(self._data[key][0], self) chkBox.setToolTip("Click here to get the book") version = QtGui.QSpinBox( self) version.setValue(self._data[key][-1]) version.setRange(self._data[key][-1], 12) self.gridLayout.addWidget(getLbl, index, 0) self.gridLayout.addWidget(label, index, 1) self.gridLayout.addWidget(chkBox, index, 2) self.gridLayout.addWidget(version, index, 3) self.layout = QtGui.QVBoxLayout() self.okBtn = QtGui.QPushButton("OK") self.layout.addLayout(self.gridLayout) self.horLayout = QtGui.QHBoxLayout() self.horLayout.addStretch(1)
{ "domain": "codereview.stackexchange", "id": 5626, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, gui, pyqt", "url": null }
quantum-field-theory, quantum-chromodynamics, topology Title: Why is the QCD $\theta$-term aware of the topology of the space? The QCD Lagrangian without the $\theta$-term $$\mathcal{L}_{QCD}=-\frac{1}{4}G_{\mu\nu}^aG^{\mu\nu a}\tag{1}$$ is not topological. However, the $\theta$-term $$\mathcal{L}_\theta=\frac{\theta}{32\pi^2}G^a_{\mu\nu}\tilde{G}^{\mu\nu a}\tag{2}$$ is topological! What is so special about the $\theta$-term that it knows about the topology of the space? There is nothing special about functions "knowing" about topology. In fact, things like Morse theory rely on the notion that the "typical" smooth function on a manifold always reflects some topological characteristics of the manifold in its various properties.
{ "domain": "physics.stackexchange", "id": 83926, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, quantum-chromodynamics, topology", "url": null }
solid-state-physics, electrical-resistance, tensor-calculus In the general case, this is contained in the relation $\rho_{xy} = -\rho_{yx}$, which effectively says that an electric field in one direction always results in a deflected (perpendicular) current which is in a specific orientation (clockwise or anti-clockwise) from it, with the same magnitude for a given electric field magnitude.
{ "domain": "physics.stackexchange", "id": 21866, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solid-state-physics, electrical-resistance, tensor-calculus", "url": null }
Therefore, the solution to the optimization problem $$v(x) = -x' \tilde{P}x$$ follows the above result by denoting $$\tilde{P} := A'PA - A'PB(Q + B'PB)^{-1}B'PA$$. Footnotes [1] Suppose that $$\|S \| < 1$$. Take any nonzero vector $$x$$, and let $$r := \|x\|$$. We have $$\| Sx \| = r \| S (x/r) \| \leq r \| S \| < r = \| x\|$$. Hence every point is pulled towards the origin. • Share page
{ "domain": "quantecon.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9927672363035179, "lm_q1q2_score": 0.8022226543462614, "lm_q2_score": 0.8080672135527632, "openwebmath_perplexity": 292.48710622759694, "openwebmath_score": 0.9709522724151611, "tags": null, "url": "https://lectures.quantecon.org/jl/linear_algebra.html" }
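Footnote [1] is easy to verify numerically; a sketch with NumPy (assumed available), rescaling a random matrix so its operator norm is 0.9:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = 0.9 * A / np.linalg.norm(A, 2)   # spectral norm of S is now 0.9 < 1

ok = True
for _ in range(100):
    x = rng.standard_normal(3)       # (almost surely) nonzero
    # ||Sx|| <= ||S|| ||x|| < ||x||: every point is pulled toward the origin
    ok = ok and np.linalg.norm(S @ x) < np.linalg.norm(x)
print(ok)
```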
python, python-3.x, postgresql :type use_named_cursor: boolean :param use_named_cursor: If true, then use server side cursor, else client side cursor. """ if use_named_cursor: cursor_name = get_random_cursor_name() with conn.cursor(cursor_name) as cursor: cursor.itersize = 2000 if itersize == -1 else itersize cursor.execute(query) row = cursor.fetchone() header = [desc[0] for desc in cursor.description] Row = namedtuple('Row', header) yield Row(*row) for row in cursor: yield Row(*row) else: with conn.cursor() as cursor: cursor.execute(query) header = [desc[0] for desc in cursor.description] Row = namedtuple('Row', header) if itersize == -1: rows = cursor.fetchall() else: cursor.arraysize = itersize rows = itertools.chain.from_iterable(iter(cursor.fetchmany, []))
{ "domain": "codereview.stackexchange", "id": 17759, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, postgresql", "url": null }
javascript, vue.js <style> </style> // -- Blue.vue ----------------------------------------------------------- <template> <div> <slot name="blue-headline"></slot> <slot name="blue-text"></slot> </div> </template> <script> </script> <style scoped> div { border: 1px solid blue; background-color: lightblue; padding: 30px; margin: 20px auto; text-align: center } </style> // -- Green.vue ---------------------------------------------------------- <template> <div> <slot name="green-headline"></slot> <slot name="green-text"></slot> </div> </template> <script> </script> <style scoped> div { border: 1px solid green; background-color: lightgreen; padding: 30px; margin: 20px auto; text-align: center } </style>
{ "domain": "codereview.stackexchange", "id": 33468, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, vue.js", "url": null }
Denote $h(x,y)=\sum_{i,j\geqslant 0} \binom{i+j}i x^iy^j=\frac1{1-(x+y)}$, $f(x,y)=\sum_{i,j\geqslant 0} \binom{i+j}i^2 x^iy^j$. We want to prove that $2xyf^2(x^2,y^2)$ is an odd (both in $x$ and in $y$) part of the function $h(x,y)$. In other words, we want to prove that $$2xyf^2(x^2,y^2)=\frac14\left(h(x,y)+h(-x,-y)-h(x,-y)-h(-x,y)\right)=\frac{2xy}{1-2(x^2+y^2)+(x^2-y^2)^2}.$$ So, our identity rewrites as $$f(x,y)=(1-2(x+y)+(x-y)^2)^{-1/2}=:f_0(x,y)$$ This is true for $x=0$, both parts become equal to $1/(1-y)$. Next, we find a differential equation in $x$ satisfied by the function $f_0$. It is not a big deal: $$\left(f_0(1-2(x+y)+(x-y)^2)\right)'_x=(x-y-1)f_0.$$ Since the initial value $f_0(0,y)$ and this relation uniquely determine the function $f_0$, it remains to check that this holds for $f(x,y)$, which is a straightforward identity with several binomials. Namely, comparing the coefficients of $x^{i-1}y^j$ we get
{ "domain": "mathoverflow.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9796676436891864, "lm_q1q2_score": 0.8458368241240267, "lm_q2_score": 0.8633916099737807, "openwebmath_perplexity": 217.7272216357863, "openwebmath_score": 0.8715867400169373, "tags": null, "url": "https://mathoverflow.net/questions/283540/combinatorial-identity-sum-i-j-ge-0-binomiji2-binoma-ib-ja/283611" }
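The closed form $f(x,y)=(1-2(x+y)+(x-y)^2)^{-1/2}$ can be spot-checked coefficient by coefficient with a small truncated power-series computation in exact arithmetic (pure Python; `N` bounds the total degree checked):

```python
from fractions import Fraction as F
from math import comb

N = 8  # compare coefficients of x^i y^j for all i + j < N

def mul(a, b):
    # multiply bivariate polynomials stored as {(i, j): coeff} dicts,
    # truncating terms of total degree >= N
    out = {}
    for (i1, j1), c1 in a.items():
        for (i2, j2), c2 in b.items():
            if i1 + i2 + j1 + j2 < N:
                k = (i1 + i2, j1 + j2)
                out[k] = out.get(k, 0) + c1 * c2
    return out

# u = 2x + 2y - x^2 + 2xy - y^2, so f0 = (1 - u)^{-1/2}
u = {(1, 0): F(2), (0, 1): F(2), (2, 0): F(-1), (1, 1): F(2), (0, 2): F(-1)}

# binomial series: (1 - u)^{-1/2} = sum_m binom(2m, m) / 4^m * u^m
f0 = {(0, 0): F(1)}
upow = {(0, 0): F(1)}
for m in range(1, N):
    upow = mul(upow, u)
    for k, c in upow.items():
        f0[k] = f0.get(k, 0) + F(comb(2 * m, m), 4 ** m) * c

ok = all(f0.get((i, j), 0) == comb(i + j, i) ** 2
         for i in range(N) for j in range(N) if i + j < N)
print(ok)  # every checked coefficient matches binom(i+j, i)^2
```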
c#, wpf, asp.net-web-api, windows-phone, json.net [DataContract] and [DataMember] are needed for the DataContractJsonSerializer. ReleaseDateParsed is dirty but acceptable in my case. Finally some code to build up the Uri and download and parse the data: UriQueryBuilder builder = new UriQueryBuilder("http://localhost:61933/api/ProductVersions", "CheckVersion"); builder.Parameters.Add("product", "myproduct"); builder.Parameters.Add("platform", "wpf"); builder.Parameters.Add("version", "1.2.3.4"); CheckVersionResult data = await HttpHelper.DownloadJsonObjectAsync<CheckVersionResult>(builder, null); Quite small and elegant I think. The best thing is that all code works in Wpf, Windows Store and Windows Phone apps.
{ "domain": "codereview.stackexchange", "id": 8626, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, wpf, asp.net-web-api, windows-phone, json.net", "url": null }
java, multithreading, interview-questions for (int i = 0; i < coreCount; i++) { Future<Integer> futureArmstrongCount = futureArmstrongCounts.get(i); countOfArmstrongNumbers = countOfArmstrongNumbers + futureArmstrongCount.get(); } service.shutdown(); long readAndAnalyzeEnd = System.currentTimeMillis(); // Part 3: Printing result System.out.println("Read and analysis done. That took " + (readAndAnalyzeEnd - readAndAnalyzeStart) + " milliseconds."); System.out.println("Prime numbers count: " + countOfPrimeNumbers); System.out.println("Armstrong numbers count: " + countOfArmstrongNumbers); System.out.println("10 most frequently appeared numbers in bar chart form:"); Map<BigInteger, Integer> numbersFreqMap = MapUtils.getSortedFreqMapFromList(numbers); BarChartPrinter printer = new BarChartPrinter(numbersFreqMap); printer.print(); } } BarChartPrinter Class: package ee.raintree.test.numbers;
{ "domain": "codereview.stackexchange", "id": 37001, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, multithreading, interview-questions", "url": null }
cv-bridge ardrone_visualservo: /usr/include/boost/smart_ptr/shared_ptr.hpp:418: T* boost::shared_ptr::operator->() const [with T = cv_bridge::CvImage]: Assertion `px != 0' failed. Aborted (core dumped) If I understood what it tried to say, it's complaining about the -> operator usage with cv_bridge::CvImage (which is the type of my cv_ptr_green object). Any ideas how to fix this? Updating using alternative function cv::mixChannels I found that maybe I can use cv::mixChannels() to extract only the green channel (as I actually need) with // cv_ptr[1] -> cv_ptr_green[1] int from_to[] = {1,1}; cv::mixChannels( cv_ptr, 1, cv_ptr_green, 1, from_to, 1); but it also gives me an error path/to/cpp/file.cpp: In function ‘void imageCallback(const ImageConstPtr&)’: path/to/cpp/file.cpp:62:58: error: no matching function for call to ‘mixChannels(cv_bridge::CvImagePtr&, int, cv_bridge::CvImagePtr&, int, int [2], int)’
{ "domain": "robotics.stackexchange", "id": 10718, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cv-bridge", "url": null }
enumeration $$ \sum_{\ell=1}^{\lfloor k/t \rfloor} \binom{k-\ell t}{\ell - 1}. $$ For example, if $t = \lfloor \sqrt{k}+1 \rfloor$, then we can choose $\ell \approx \sqrt{k}/2$ to get at least $\binom{\Theta(k)}{\Theta(\sqrt{k})}$ many sequences, which is $\exp \Theta(\sqrt{k} \log k)$; this is tight up to the hidden constant.
{ "domain": "cs.stackexchange", "id": 18763, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "enumeration", "url": null }
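A direct transcription of that sum in Python, with the usual convention that $\binom{n}{r}=0$ when $r>n$ or $n<0$:

```python
from math import comb

def count(k, t):
    total = 0
    for l in range(1, k // t + 1):
        n = k - l * t
        if 0 <= l - 1 <= n:          # comb is taken as 0 outside this range
            total += comb(n, l - 1)
    return total

# k = 20, t = 4: C(16,0) + C(12,1) + C(8,2) + C(4,3) = 1 + 12 + 28 + 4
print(count(20, 4))  # 45
```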
ros, dependencies Title: list dependencies of a pre-built package of ros? I have installed the full desktop version of ROS Kinetic instead of building it from the source files. I would like to know the list of dependencies for a pre-built package in ROS (for example, turtlesim). Is there a command that can list all the dependencies of a pre-built package? Any help is much appreciated. Thank you in advance. Originally posted by sam26 on ROS Answers with karma: 231 on 2017-02-17 Post score: 0 found it. rospack depends does the job! Originally posted by sam26 with karma: 231 on 2017-02-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27039, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, dependencies", "url": null }
human-physiology, digestion, stomach The stomach accomplishes much of its function by mechanically breaking down the swallowed food particles and mixing them with acid and enzymes into a sort of slurry. To do this, there are three major layers of muscle surround the stomach - from the outside, the longitudinal layer, the circular layer, and the oblique layer. The stomach also has two holes in it - the gastroesophageal opening, coming from the esophagus with the swallowed food/saliva mix, and the pylorus, where the food/acid/enzyme slurry exits into the duodenum, which is the beginning of the small intestine.
{ "domain": "biology.stackexchange", "id": 6455, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "human-physiology, digestion, stomach", "url": null }
ros, moveit, ros-kinetic, ompl, motion-planners As for you point n°2, I don't think there is any tutorial. But you can study the source code of the OMPL/MoveIt interface (located in ../src/moveit/moveit_planners/ompl/ompl_interface) and from there, create your own plugin for your planning library. I actually attempted to do so for the KrisLibrary, but I'm not good enough to do it. I believe even for an experienced programmer, this would take quite a long time. As for point n°3, I am not aware of any other methods. Hope this helps you. Best, Maxens. Originally posted by mxch_18 with karma: 146 on 2018-07-26 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 31334, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, moveit, ros-kinetic, ompl, motion-planners", "url": null }
classical-mechanics I am sorry that I cannot comment on the Haskell code - it's "not my language". So I wrote a little program in Python that does largely the same thing; note that I borrowed the idea first shown by @Shane to compute the new velocity by looking at the energy gained; this has the advantage that it doesn't lead to a singularity when $v=0$ (which would lead to "infinite force"): # bike power calculation import math import numpy as np import matplotlib.pyplot as plt
{ "domain": "physics.stackexchange", "id": 27311, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics", "url": null }
The Product Rule: b^m · b^n = b^(m+n). When multiplying exponential expressions with the same base, add the exponents. Use this sum as the exponent of the common base. The Power Rule (Powers to Powers): (b^m)^n = b^(mn). When an exponential expression is raised to a power, multiply the exponents. Place the product of the exponents on the base and remove the parentheses. The Quotient Rule: b^m / b^n = b^(m-n). When dividing exponential expressions with the same nonzero base, subtract the exponent in the denominator from the exponent in the numerator. Use this difference as the exponent of the common base. Example: Find the quotient of 4^3 / 4^2. Solution: 4^3 / 4^2 = 4^(3-2) = 4^1 = 4. Products to Powers: (ab)^n = a^n b^n. When a product is raised to a power, raise each factor to the power. Text Example: Simplify (-2y)^4. Solution: (-2y)^4 = (-2)^4 y^4 = 16y^4. A. -16y^4 B. -8y^4 C. 16y^4 D. 8y^4. Quotients to Powers: (a/b)^n = a^n / b^n. When a quotient is raised to a power, raise the numerator to that power and divide by the denominator raised to that power.
{ "domain": "slideplayer.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.977022627413796, "lm_q1q2_score": 0.8162863245840011, "lm_q2_score": 0.8354835411997897, "openwebmath_perplexity": 593.4701726939123, "openwebmath_score": 0.8483084440231323, "tags": null, "url": "https://slideplayer.com/slide/7519221/" }
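These rules are straightforward to spot-check numerically; in Python, `**` is exponentiation:

```python
b, m, n = 3, 4, 2
assert b**m * b**n == b**(m + n)    # product rule
assert (b**m)**n == b**(m * n)      # power rule
assert b**m / b**n == b**(m - n)    # quotient rule
assert 4**3 / 4**2 == 4             # the worked quotient example
a, y = 6, 7
assert (a * b)**n == a**n * b**n    # products to powers
assert (-2 * y)**4 == 16 * y**4     # the worked example: (-2y)^4 = 16y^4
assert (a / b)**n == a**n / b**n    # quotients to powers (6/3 divides evenly)
print("all exponent rules check out")
```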
python alignments = AlignIO.parse("Rfam.seed", "stockholm") for alignment in alignments: print(alignment) It reads some alignments and then throws the following error: Traceback (most recent call last): File "test.py", line 5, in <module> for alignment in alignments: File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Bio/AlignIO/__init__.py", line 394, in parse for a in i: File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Bio/AlignIO/StockholmIO.py", line 408, in __next__ line = handle.readline() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe1 in position 2468: invalid continuation byte
{ "domain": "bioinformatics.stackexchange", "id": 1214, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
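A hedged workaround (not from the original thread): the traceback says the default UTF-8 decoder hit byte 0xe1, which is a valid character ('á') in Latin-1, so opening the file with an explicit single-byte encoding and passing the handle to the parser is one option:

```python
# Stand-in bytes containing 0xe1, which breaks a UTF-8 decode but is
# perfectly valid Latin-1 (it decodes to 'á').
raw = b"ID   test\xe1\n"
text = raw.decode("latin-1")   # every byte value is defined in Latin-1
print(repr(text))

# With Biopython the same idea would look like (not run here, since it
# needs the Rfam.seed file):
# with open("Rfam.seed", encoding="latin-1") as handle:
#     for alignment in AlignIO.parse(handle, "stockholm"):
#         print(alignment)
```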
slam, navigation, camera, rtabmap, stereo Question is, what is the best way of doing this? rtabmap seems to be written to subscribe to only one stereo camera. Should I combine the point clouds generated by stereo_image_proc? Also I'm unsure as to how I should use my wheel odometry in this setup. It looks like rtab can only use one odometry source. Should I just do visual odometry if it works well enough and pipe that into rtab? I'm experimenting right now with wheel odometry data and one stereo camera with rtab and I'm slightly unsure on how I should connect these systems together. Thanks! Originally posted by psammut on ROS Answers with karma: 258 on 2017-08-23 Post score: 2 I see two setups that could be possible:
{ "domain": "robotics.stackexchange", "id": 28684, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, camera, rtabmap, stereo", "url": null }
navigation, ubuntu, ros-fuerte, ubuntu-precise, map-server Original comments Comment by jarvisschultz on 2013-05-09: I am having major problems since the last Fuerte update as well. Every node I try and run that uses pcl is crashing. I'm guessing something went wrong during packaging. I filed a bug report with perception_pcl (wrong place) error here Comment by martimorta on 2013-05-14: Thanks @jarvisschultz, I followed the instructions of the patch there and it worked. Comment by tfoote on 2013-05-27: What does the log file for the move-base node contain? Comment by martimorta on 2013-05-28: Hi Tully, It says log file: /home/chaplin/.ros/log/3a0dfd2e-c790-11e2-9b26-c417fe1f097c/move_base-3*.log but there's not actual log file and the logs don't give other information.
{ "domain": "robotics.stackexchange", "id": 14122, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, ubuntu, ros-fuerte, ubuntu-precise, map-server", "url": null }
python, algorithm, python-2.x With this list of moves, you would select one, and pass a copy of that list of valid moves to minimax with that move removed. Then you would select the next one, and again pass a copy of that list of moves to minimax with this new move removed instead, and so on. The minimax function would do the same: for each move in the list of valid moves, make that tentative move and pass a copy of the valid moves without the current move. For 9 possible moves, this translates to $9 \cdot 8 \cdot 7 \cdot 6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 9! = 362{,}880$ move tests, an improvement of 1000-fold over the exhaustive enumeration of each move and testing for the space. Tie With the valid_moves list, checking for no valid moves remaining is simply len(valid_moves) == 0, or equivalently, not valid_moves. No need to search all 9 cells looking for a space. Obvious moves Similarly, len(valid_moves) == 9 means you’ve got an empty board, and can make a fast good move, like claiming the centre spot, or picking a random corner.
{ "domain": "codereview.stackexchange", "id": 39956, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, algorithm, python-2.x", "url": null }
data-mining, data, association-rules
Second, inspired by the definition of lift in probabilistic terms in your notes, I've conflated the historical frequencies given in the table with probabilities. This is standard practice in much of data science, and certainly in exercises, but it is not completely uncontroversial. Probabilities are about the future, and you have data about the past. The extent to which you can infer future probabilities from past data is philosophically unresolved. But that is not something to worry about in this situation.
{ "domain": "datascience.stackexchange", "id": 10486, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "data-mining, data, association-rules", "url": null }
java, algorithm, tree, data-mining
public static long toLong(int[] ticket) {
    long l = 0;
    for (int i = 0; i < LOTTERY_ROW_LENGTH; i++) {
        l *= LOTTERY_MAXIMUM_NUMBER;
        l += ticket[i];
    }
    return l;
}

public static int[] fromLong(long l) {
    int[] result = new int[LOTTERY_ROW_LENGTH];
    for (int i = LOTTERY_ROW_LENGTH - 1; i >= 0; i--) {
        // Double modulo keeps the digit non-negative even if l were negative.
        result[i] = (int) (((l % LOTTERY_MAXIMUM_NUMBER) + LOTTERY_MAXIMUM_NUMBER) % LOTTERY_MAXIMUM_NUMBER);
        l /= LOTTERY_MAXIMUM_NUMBER;
    }
    return result;
}

private static long[] generateTicketArray(List<int[]> allTickets) {
    System.out.println("Initializing arrays");
    long[] longTickets = new long[OPTIONS];
    for (int i = 0; i < OPTIONS; i++) {
        int[] tic = allTickets.get(i);
        //System.out.println("Generating ticket:" + Arrays.toString(tic));
        longTickets[i] = toLong(tic);
    }
    return longTickets;
}
{ "domain": "codereview.stackexchange", "id": 38101, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, algorithm, tree, data-mining", "url": null }
type-theory, denotational-semantics
Above, $n$ can be zero, resulting in $\vdash e:T$. This means that no assumptions on variables are made. Usually, this means that $e$ is a closed term (without any free variables) having type $T$. Often, the rule you mention is written in a more general form, where there can be more hypotheses than the one mentioned in the question.
$$ \dfrac{ \Gamma, x:T_1 \vdash t : T_2 }{ \Gamma\vdash (\lambda x:T_1. t) : T_1\to T_2 } $$
Here, $\Gamma$ represents any context, and $\Gamma, x:T_1$ represents its extension obtained by appending the additional hypothesis $x:T_1$ to the list $\Gamma$. It is common to require that $x$ does not already appear in $\Gamma$, so that the extension does not "conflict" with a previous assumption.
{ "domain": "cs.stackexchange", "id": 12147, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "type-theory, denotational-semantics", "url": null }
However, in practice it takes less than 16 because of the natural random variation in the rest of a typical sample; it wiggles about the uniform CDF:

The left side is an ECDF of a sample of 100 values from an actual uniform. There's some deviation in the center due to random variation, but nowhere near large enough to reach the 1% significance level.

The right side is an ECDF of the same sample where, in addition, the first 11 values (not the smallest 11, just 11 values from the start of the sample) were replaced by exactly 0*. In this case that's more than enough to exceed the 1% critical value of the statistic. (Here, fewer than 11 would be sufficient, but typically it takes a little more than 11.)

*(given even a single instance of such a value, some other tests would identify non-normality without difficulty)
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9777138118632621, "lm_q1q2_score": 0.8038656688751573, "lm_q2_score": 0.8221891305219504, "openwebmath_perplexity": 1006.0321615928104, "openwebmath_score": 0.6693643927574158, "tags": null, "url": "https://stats.stackexchange.com/questions/129765/understanding-multiple-ks-tests" }