c++, unit-testing

/**
 * @struct IntervalPart
 * @brief Describes part of an interval excluding the direction it lies on
 * @var position_start the absolute starting position of the interval's relevant part
 * @var steps_inside_target the size of the interval's relevant part
 */
struct IntervalPart {
    std::uint32_t position_start;
    std::uint32_t steps_inside_target;
};

/**
 * @brief Tells which parts of the provided range relative to the stored direction are mappable to
 * the buffer range determined by the dimensions and padding
 *
 * @param[in] position  The position the range is taken relative to
 * @param[in] dimension The direction of the range relative to the currently stored position
 * @param[in] delta     The size of the range starting from the stored position
 *
 * @return A vector of the parts of the interval inside the bounds of the object's buffer range:
 *         {position, size}:
 *          |-> absolute position inside the given dimension,
 *          |---------> number of steps still inside the defined ranges in the direction of the given dimension
 */
std::vector<IntervalPart> mappable_parts_of(
    const std::vector<std::uint32_t>& position,
    std::uint32_t dimension,
    std::int32_t delta
) const;

/**
 * @brief Tells which parts of the stored range relative to the stored direction are mappable to
 * the buffer range determined by the dimensions and padding
 *
 * @param[in] dimension The direction of the range relative to the currently stored position
 * @param[in] delta     The size of the range starting from the stored position
 *
 * @return A vector of the parts of the interval inside the bounds of the object's buffer range:
 *         {position, size}:
 *          |-> absolute position inside the given dimension,
 *          |---------> number of steps still inside the defined ranges in the direction of the given dimension
 */
std::vector<IntervalPart> mappable_parts_of(std::uint32_t dimension, std::int32_t delta) const {
    return mappable_parts_of(m_position, dimension, delta);
}
{ "domain": "codereview.stackexchange", "id": 43993, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, unit-testing", "url": null }
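The full clipping logic depends on the class's padding and direction machinery, which isn't shown, but the core of what `mappable_parts_of` appears to compute is the intersection of an interval `[start, start + delta)` with a buffer range. A minimal Python sketch of that idea (the function name and bounds are illustrative, not from the post):

```python
def mappable_part(start, delta, lower, upper):
    """Clip the interval [start, start + delta) to the buffer range
    [lower, upper); returns (position_start, steps_inside) or None."""
    lo = max(start, lower)
    hi = min(start + delta, upper)
    if hi <= lo:
        return None  # the interval lies entirely outside the buffer
    return (lo, hi - lo)

# Interval [3, 13) clipped against a buffer of size 8:
print(mappable_part(3, 10, 0, 8))  # (3, 5)
```

The real method returns a vector of such parts because padding can split the mappable region into several disjoint pieces.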
c, calculator

    /**
     * Tokenise the string and assign the two tokens to variables
     */
    bin[0] = strtok_r(input, &operation, &rest);
    bin[1] = rest;

    /**
     * Convert binary numbers to decimal
     */
    dec[0] = strtol(bin[0], &junk, 2);
    dec[1] = strtol(bin[1], &junk, 2);
    if (dec[1] == 0) {
        printf("\n\nDividing by zero and modulus with zero is impossible. Destroy the Universe elsewhere. Good day.\n\n");
        exit(1);
    }

    /**
     * Perform the correct operation
     */
    dec_result = (operation == '*') ? (dec[0] * dec[1])
               : (operation == '/') ? (dec[0] / dec[1])
               : (dec[0] % dec[1]);

    /**
     * Convert the result back to binary
     */
    *(bin_output + 16) = '\0';
    mask = 0x10 << 1;
    while (mask >>= 1)
        *bin_output++ = !!(mask & dec_result) + '0';
    *dec_output = dec_result;
}

/**
 * This is the main function that calls all other functions
 *
 * @var char line[]   The array for storing the incoming string
 * @var int i
 * @var int dec_output The decimal result to be printed
 * @var int bin_output The binary result to be printed
 *
 * @return int 0
 */
int main(void)
{
    char operation, line[256], bin_output[256];
    unsigned int i = 0, dec_output = 0;

    /**
     * Read the input into 'char line[]'
     */
    printf("Enter two binary integers separated by one of [ * / %% ] and press enter\nCALC: ");
    fgets(line, sizeof(line), stdin);

    /**
     * Remove the newline from the string if present
     */
    i = strlen(line) - 1;
    line[i] = '\0';
{ "domain": "codereview.stackexchange", "id": 1506, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, calculator", "url": null }
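The C code's overall flow (tokenise on the operator, parse both operands as base-2, check for division by zero, compute, convert back to binary) can be sketched compactly in Python; the function name is mine, not from the post:

```python
def binary_calc(expr):
    """Evaluate 'A op B' where A and B are binary literals and op is *, / or %."""
    for op in "*/%":
        if op in expr:
            left, right = expr.split(op)
            a, b = int(left, 2), int(right, 2)  # parse base-2, like strtol(..., 2)
            if op != "*" and b == 0:
                raise ZeroDivisionError("division/modulus by zero")
            result = {"*": a * b, "/": a // b, "%": a % b}[op]
            return format(result, "b")  # back to a binary string
    raise ValueError("no operator found")

print(binary_calc("101*11"))  # 5 * 3 = 15 -> '1111'
```

This also highlights what the C version does by hand: `strtol(s, ..., 2)` is the parsing step, and the mask-and-shift loop is `format(result, "b")` with fixed width.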
If the kernel function is also a covariance function as used in Gaussian processes, then the Gram matrix can also be called a covariance matrix. Leave-one-out Gaussian process prediction: we assume that either a proper kernel function $k_0(x, x')$ that satisfies Mercer's theorem, or a valid Gram matrix $K_0$ (symmetric and positive semi-definite) (Schölkopf & Smola, 2002), for both labeled and unlabeled data is given. Kernel trick: if we use algorithms that only depend on the Gram matrix $G$, then we never have to know (compute) the actual features. This is the crucial point of kernel methods. Definition: a finitely positive semi-definite function is a symmetric function of its arguments for which the matrices formed by restriction to any finite subset of points are positive semi-definite. In this case it is shown that the eigenfunctions $\{\phi_i\}$ obey the equation $\int K(x, y)\,p(x)\,\phi_i(x)\,dx = \lambda_i\,\phi_i(y)$. The input is a matrix of similarities (the kernel matrix or Gram matrix), which should be positive semi-definite and symmetric. For nonlinear SVM, the algorithm forms a Gram matrix using the rows of the predictor data X.
In the following, we describe the pairs of kernels (κ y, κ x) that we used for solving the
{ "domain": "gipad.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9843363512883317, "lm_q1q2_score": 0.8181243089519122, "lm_q2_score": 0.8311430415844384, "openwebmath_perplexity": 1156.4016423735295, "openwebmath_score": 0.7524211406707764, "tags": null, "url": "http://gipad.it/ssmz/gram-matrix-gaussian-kernel.html" }
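A small self-contained sketch of the key property above: the Gram matrix of a Gaussian (RBF) kernel is symmetric and positive semi-definite, which is verified here empirically on random quadratic forms $c^\top K c \ge 0$ (sample points and bandwidth are arbitrary choices):

```python
import math
import random

def rbf_gram(xs, sigma=1.0):
    """Gram matrix K[i][j] = exp(-(x_i - x_j)^2 / (2*sigma^2)) for scalar inputs."""
    return [[math.exp(-(a - b) ** 2 / (2.0 * sigma ** 2)) for b in xs] for a in xs]

xs = [0.0, 0.5, 1.3, 2.0]
K = rbf_gram(xs)
n = len(xs)

# Mercer / finite positive semi-definiteness: K is symmetric and every
# quadratic form c^T K c is non-negative (checked here on random vectors).
assert all(abs(K[i][j] - K[j][i]) < 1e-12 for i in range(n) for j in range(n))
random.seed(0)
for _ in range(200):
    c = [random.uniform(-1.0, 1.0) for _ in xs]
    assert sum(c[i] * K[i][j] * c[j] for i in range(n) for j in range(n)) >= -1e-12
```

An eigenvalue check would be the exact test, but random quadratic forms keep the sketch dependency-free.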
Note: I'm still not sure how to prove that a set of variables is independent under some linear transformation.

• I am not sure what you mean by "discontinuous" in relation to the example in my answer. Both X and Y are continuous random variables, and (X,Y) is a continuous random vector. Perhaps you are thinking of the density function of (X,Y), or perhaps of X_t as a function of t? Neither of those is continuous. Mar 9 '21 at 9:18
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9481545333502202, "lm_q1q2_score": 0.8020841624112836, "lm_q2_score": 0.8459424431344437, "openwebmath_perplexity": 306.0589029996912, "openwebmath_score": 0.8479734659194946, "tags": null, "url": "https://stats.stackexchange.com/questions/511023/definition-of-gaussian-process/511025" }
optics, electromagnetic-radiation, polarization, maxwell-equations

Wrong Guess Update 2: Ruslan then suggested that the round source I chose may generate an unequally widening wave in the x and y directions depending on polarization, i.e. it may not work as expected. If this guess is correct, the simulation result won't show perfect symmetry even if I remove the cone, i.e. do the simulation in empty space. So I simulated the y-polarized light in vacuum without the cone, and the result shows perfect symmetry, so the source seems to work as expected. The following picture shows what the whole domain looks like in this simulation (formed with 120×120×70 grids): Pic 12. Here's the result (taken at the same place as Pic 9): Pic 13. As mentioned by Ruslan, precisely speaking, what one should do to simulate unpolarized light is to take an average of the intensity over all orthogonal polarizations, not just 2 of them. A plane source is a special case because its z-polarized component is quite weak, so it won't hurt even if only an average of the x- and y-polarized components is taken. But wait: in Pic 4, the OP has already taken an average of all 3 components, so why does he still not get full rotational symmetry? The answer is really simple but easy to overlook: the grid isn't dense enough, so it has formed a not-really-round cone. After halving the grid size (with other conditions remaining the same as Pic 9 and Pic 11), we got the following results. y-polarized light (same conditions as Pic 9, except for a denser grid): Pic 14. Supposed unpolarized light (same conditions as Pic 11, except for a denser grid): Pic 15. z-polarized light: Pic 16. The light intensity here is in fact very weak; the simulated passing rate is about 0.08%. BTW, the grid for this simulation still seems not dense enough, but it's not a big deal.
To avoid any possible confusion, here is a more accurate simulation result of the z-polarized case: z-polarized light (with same condition as Pic 16, except for a denser grid, $\Delta x=\Delta y=\Delta z=12.5 nm$): Pic 17
{ "domain": "physics.stackexchange", "id": 13121, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, electromagnetic-radiation, polarization, maxwell-equations", "url": null }
c++, template, callback

    void update(unsigned int price)
    {
        // update model with price
        std::cout << "Received price: " << price / 100.0 << std::endl;
    }

private:
    Pricer* p_;
};

int main(int argc, char* argv[])
{
    Pricer p;
    Strategy s(&p);
    p.receivePrice(105.67);
}

The functionality you are trying to create already exists in std::function<>, using std::bind<> to help.

Comments on the rest of the code:

Here you use emplace_back:

    void attach(CbType cb)
    {
        callbacks_.emplace_back(cb);
    }

emplace_back is usually used when you have the arguments and want to build the object in place using those arguments. By passing the object you are going to just invoke the copy constructor, so there is no benefit from using it (though there is nothing wrong with using it either). Currently I am still working out when to use emplace_back() over push_back(), but this is one situation where I would still use push_back().

Also, because you pass the argument by value, you are copying the object into the function and then using the copy constructor to put it in the array, resulting in another copy. So here I would pass by reference:

    void attach(CbType const& cb)
    {
        callbacks_.push_back(cb);
    }

Don't use unnecessary casts:

    broadcast(static_cast<unsigned int>(price*100));
    // This is easier to read as:
    broadcast(price*100); // double is auto converted to unsigned int

Use standard types:

    typedef Callback<void,unsigned int> CbType;
    // Replace with:
    typedef std::function<void(unsigned int)> CbType;

If you use the standard function, then the equivalent creation becomes much simpler:

    p.attach(std::bind(&Strategy::update, this, _1));

Don't use pointers where a reference is a better choice:

    Strategy(Pricer* p) : p_(p)
{ "domain": "codereview.stackexchange", "id": 4518, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, template, callback", "url": null }
quantum-mechanics, homework-and-exercises, condensed-matter, solid-state-physics, fourier-transform Title: Inverse Fourier Transform of $e^{i\mathbf{k}\cdot\mathbf{R}}$ in Brillouin Zone in Proving Orthogonality of Wannier Functions The relationship between a Wannier function and the Bloch functions is: $$ |\mathbf{R}_n\rangle = \dfrac{V}{(2\pi)^3} \int_{\mathrm{BZ}} |\psi_{n\mathbf{k}}\rangle e^{-i\mathbf{k}\cdot \mathbf{R}}d\mathbf{k} \tag{1} $$ where $V$ represents the volume of a primitive cell of the lattice. Others have proved $\langle \mathbf{R}_n|\mathbf{R}_m'\rangle = \delta_{nm}\delta_{\mathbf{R}\mathbf{R}'}$ (see "Maximally localized Wannier functions"). I am trying to prove this formula: $$ \begin{align} \langle\mathbf{R}_n|\mathbf{R}_m'\rangle =& \bigg{(} \dfrac{V}{(2\pi)^3} \int_{\mathrm{BZ}} \langle\psi_{n\mathbf{k}}|e^{i\mathbf{k}\cdot \mathbf{R}} d\mathbf{k} \bigg{)}\bigg{(} \dfrac{V}{(2\pi)^3} \int_{\mathrm{BZ}} |\psi_{m\mathbf{k}}\rangle e^{-i\mathbf{k}\cdot \mathbf{R}'} d\mathbf{k} \bigg{)} \\
{ "domain": "physics.stackexchange", "id": 90128, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, homework-and-exercises, condensed-matter, solid-state-physics, fourier-transform", "url": null }
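A hedged sketch of how the remaining steps usually go, assuming the common Bloch-state normalization $\langle\psi_{n\mathbf{k}}|\psi_{m\mathbf{k}'}\rangle = \frac{(2\pi)^3}{V}\,\delta_{nm}\,\delta(\mathbf{k}-\mathbf{k}')$ (conventions vary between references): the double integral collapses on the delta function,

$$\langle\mathbf{R}_n|\mathbf{R}_m'\rangle = \left(\frac{V}{(2\pi)^3}\right)^2 \int_{\mathrm{BZ}}\int_{\mathrm{BZ}} \langle\psi_{n\mathbf{k}}|\psi_{m\mathbf{k}'}\rangle\, e^{i\mathbf{k}\cdot\mathbf{R}}\, e^{-i\mathbf{k}'\cdot\mathbf{R}'}\, d\mathbf{k}\, d\mathbf{k}' = \frac{V}{(2\pi)^3}\,\delta_{nm} \int_{\mathrm{BZ}} e^{i\mathbf{k}\cdot(\mathbf{R}-\mathbf{R}')}\, d\mathbf{k}$$

and since $\frac{V}{(2\pi)^3}\int_{\mathrm{BZ}} e^{i\mathbf{k}\cdot(\mathbf{R}-\mathbf{R}')}\,d\mathbf{k} = \delta_{\mathbf{R}\mathbf{R}'}$ for lattice vectors $\mathbf{R},\mathbf{R}'$ (the inverse Fourier transform named in the title), this yields $\langle\mathbf{R}_n|\mathbf{R}_m'\rangle = \delta_{nm}\,\delta_{\mathbf{R}\mathbf{R}'}$.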
xgboost, regularization, lightgbm Finally, since L1 regularization in GBDTs is applied to leaf scores rather than directly to features as in logistic regression, it actually serves to reduce the depth of trees. This in turn will tend to reduce the impact of less-predictive features, but it isn't so dramatic as essentially removing the feature, as happens in logistic regression. You might think of L1 regularization as more aggressive against less-predictive features than L2 regularization. But then it might make sense to use both: some L1 to punish the less-predictive features, but then also some L2 to further punish large leaf scores without being so harsh on the less-predictive features. Toy example: https://github.com/bmreiniger/datascience.stackexchange/blob/master/trees_L1_reg.ipynb Possibly useful: https://github.com/dmlc/xgboost/blob/release_0.90/src/tree/split_evaluator.cc#L118 https://xgboost.readthedocs.io/en/latest/tutorials/model.html#model-complexity
{ "domain": "datascience.stackexchange", "id": 10088, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "xgboost, regularization, lightgbm", "url": null }
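The linked split_evaluator source applies L1 by soft-thresholding the leaf's gradient sum before computing the leaf score. A simplified paraphrase of that formula (this is a sketch of the idea, not XGBoost's exact code path) makes the qualitative claim above concrete: L1 can zero a weak leaf outright, while L2 only shrinks it.

```python
def soft_threshold(g, alpha):
    """Shrink the gradient sum toward zero by alpha (the L1 / reg_alpha term)."""
    if g > alpha:
        return g - alpha
    if g < -alpha:
        return g + alpha
    return 0.0

def leaf_weight(grad_sum, hess_sum, alpha=0.0, lam=1.0):
    """Optimal leaf score under L1 (alpha) and L2 (lambda) regularization."""
    t = soft_threshold(grad_sum, alpha)
    return -t / (hess_sum + lam) if t else 0.0

# A leaf whose gradient sum is small relative to alpha gets score exactly 0,
# whereas L2 merely shrinks the same leaf:
print(leaf_weight(0.3, 1.0, alpha=0.5, lam=0.0))  # 0.0 (L1 zeroes it out)
print(leaf_weight(0.3, 1.0, alpha=0.0, lam=1.0))  # -0.15 (L2 shrinks it)
```

A zero-score leaf contributes nothing, so pruning on gain then removes it, which is the "reduces tree depth" effect described above.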
newtonian-mechanics, centripetal-force, angular-velocity Title: Confusion regarding the answer to a question about $(v_T)^2/r$ In this answer to the question "Intuitive explanation for why centripetal acceleration is $\frac{v^2}{r}$": https://physics.stackexchange.com/a/190532/262601 (excerpt from the answer is attached below), I don't understand why it says that "The only difference between the position and velocity is that we rotated by 90 degrees and multiplied the length by v/r". I understand why it needs to be rotated by 90 degrees, but I don't understand how we come up with the v/r. A point is moving around a circle. It has a blue position vector and a red velocity vector, like this: The position vector stays the same length and rotates around and around in a circle. Because the position vector is changing, it has a derivative. That derivative is the velocity. Because we're always going the same speed, the velocity vector also stays the same length. Because the velocity is always 90 degrees rotated from the position, the velocity is also going around in a circle. In other words, the velocity vector is doing exactly the same thing as the position vector is doing; rotating and staying constant length. The only difference between the position and velocity is that we rotated by 90 degrees and multiplied the length by v/r. You are looking for a relationship between $v$ and $r$. If you start with $r$, and you want to "go" to $v$, then you multiply by $\dfrac vr$ since $r\cdot\dfrac vr=v$.
{ "domain": "physics.stackexchange", "id": 78783, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, centripetal-force, angular-velocity", "url": null }
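The "rotate by 90 degrees and multiply by v/r" claim can be checked numerically: differentiate the position of a point in uniform circular motion twice and compare the acceleration magnitude with $v^2/r$ (the radius and angular speed below are arbitrary test values):

```python
import math

r, omega = 2.0, 1.5  # radius and angular speed (arbitrary)
v = omega * r        # speed

def position(t):
    """Uniform circular motion of radius r."""
    return (r * math.cos(omega * t), r * math.sin(omega * t))

def derivative(f, t, h=1e-5):
    """Central-difference derivative of a 2D path."""
    (x1, y1), (x2, y2) = f(t - h), f(t + h)
    return ((x2 - x1) / (2 * h), (y2 - y1) / (2 * h))

velocity = lambda t: derivative(position, t)
ax, ay = derivative(velocity, 0.7)
a = math.hypot(ax, ay)
print(a, v * v / r)  # both come out ≈ 4.5 = omega^2 * r
```

The velocity vector has length $v = (v/r)\cdot r$, and since it also just rotates at rate $\omega$, its derivative has length $(v/r)\cdot v = v^2/r$, which is exactly what the numeric check shows.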
I'm plotting some Q-Q plots using the qqplot function. However, when I plot a histogram, it is not appearing in my PDF. First, let's look at a boxplot using some data on dogwood. If your data is not homoscedastic, it might look something like the plot below.

> qq <- cumsum(pp) # see how the cumulative sum qq is a list of partial sums from pp

Exponential probability plot. Goal: how to assess whether given data comes from an exponential distribution. wblplot plots each data point in x using plus sign ('+') markers and draws two reference lines that represent the theoretical distribution. Probability Plots: this procedure constructs probability plots for the Normal, Weibull, Chi-squared, Gamma, Uniform, Exponential, Half-Normal, and Log-Normal distributions. The ecdfPlot function has a group argument that can be used to construct multiple ECDF plots in the same graph. Matplotlib supports all kinds of subplots, including 2x1 vertical, 2x1 horizontal, and a 2x2 grid. In the past, when working with R base graphics, I used the layout() function to achieve this [1]. For a left-skewed distribution the QQ-plot is the mirror image along the 45-degree line (arch going upwards and towards the left). Still not sure how to plot a histogram in Python?
If so, I’ll show you the full steps to plot a histogram in Python using
{ "domain": "clubita.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9621075711974103, "lm_q1q2_score": 0.8099268345758419, "lm_q2_score": 0.8418256532040708, "openwebmath_perplexity": 1397.0072810229867, "openwebmath_score": 0.538626492023468, "tags": null, "url": "http://clubita.it/yzob/qq-plot-pdf.html" }
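The idea behind any of these probability plots can be shown without a plotting library: sort the sample and compare it with the theoretical quantiles at plotting positions $p_i = (i - 0.5)/n$. For the exponential case the theoretical quantile is $-\ln(1 - p_i)$; when the data really is exponential, the paired quantiles hug the 45-degree line and correlate near 1. (This is a minimal sketch; real qqplot/wblplot routines add reference lines and different plotting-position conventions.)

```python
import math
import random

random.seed(42)
n = 500
sample = sorted(random.expovariate(1.0) for _ in range(n))

# Theoretical exponential quantiles at plotting positions p_i = (i - 0.5)/n
theo = [-math.log(1 - (i - 0.5) / n) for i in range(1, n + 1)]

def corr(xs, ys):
    """Pearson correlation, the usual numeric summary of Q-Q straightness."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(corr(sample, theo), 3))  # close to 1 for well-matched distributions
```

A sample drawn from a badly mismatched distribution (say, uniform data against exponential quantiles) would bend away from the line and lower this correlation.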
c

    /* set safe defaults */
    switch (type) {
    case 0: (*msg)->s32val = 0; break;
    case 1: (*msg)->u32val = 0; break;
    case 2: (*msg)->u64val = 0; break;
    case 3: *(*msg)->strval = 0; break;
    case 4: *(*msg)->binval = 0; break;
    }
}

    /* add on 5 for type and length */
    sz += 5;
    return sz;
}

void free_encode(unsigned char** outputdata)
{
    free(*outputdata);
}

void free_decode(tlv_msg** msg)
{
    switch ((*msg)->datatype) {
    case DTYPE_STRING: free((*msg)->strval); break;
    case DTYPE_BINARY: free((*msg)->binval); break;
    }
    free(*msg);
}

void printmessage(tlv_msg* msg)
{
    switch (msg->datatype) {
    case DTYPE_S32:    printf("%i\n", msg->s32val); break;
    case DTYPE_U32:    printf("%u\n", msg->u32val); break;
    case DTYPE_U64:    printf("%" PRIu64 "\n", msg->u64val); break; /* %u would truncate; needs <inttypes.h> */
    case DTYPE_STRING: printf("%s\n", msg->strval); break;
    case DTYPE_BINARY: printbytes(msg->binval, msg->bytelen - 5); break;
    default:           printf("unknown data type\n"); break;
    }
}
{ "domain": "codereview.stackexchange", "id": 911, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c", "url": null }
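The "add on 5 for type and length" comment suggests a 1-byte type plus 4-byte length header in front of each payload. A hedged Python sketch of that wire format (the exact header layout and byte order in the original C code aren't shown, so this is one plausible reading):

```python
import struct

def tlv_encode(dtype, payload):
    """1-byte type + 4-byte big-endian length + payload (the 5 extra bytes)."""
    return struct.pack(">BI", dtype, len(payload)) + payload

def tlv_decode(buf):
    """Inverse of tlv_encode: returns (dtype, payload)."""
    dtype, length = struct.unpack_from(">BI", buf)
    return dtype, buf[5:5 + length]

DTYPE_STRING = 3  # illustrative constant, mirroring the C enum
msg = tlv_encode(DTYPE_STRING, b"hello")
assert len(msg) == 5 + len(b"hello")
assert tlv_decode(msg) == (DTYPE_STRING, b"hello")
```

A roundtrip test like this is also a cheap way to unit-test the C encoder/decoder pair: encode, decode, and compare against the original value.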
java, webdriver

    private String attribute;

    ProfileAttribute(String attribute) {
        this.attribute = attribute;
    }

    public String getAttribute() {
        return attribute;
    }
}

private static HtmlElement getProfileAttributeElement(ProfileAttribute profileAttribute) {
    return new LabelElement("Profile Attribute",
            locateByCSSSelector("." + profileAttribute.getAttribute()));
}

public String getAttributeValue(ProfileAttribute profileAttribute) {
    return getProfileAttributeElement(profileAttribute).getText();
}
}

A curtailed version of the TopHeader class which gets to the MyProfile page:

public class TopHeader extends PageObject {
    public MyProfilePage goToManageYourAccount() throws Exception {
        manageYourAccountLink.click();
        return new MyProfilePage(MyProfilePage.ProfileAttribute.DEPARTMENT);
    }
}

This is how a test uses it:

public class MyProfileTest extends SelTestCase {
    LoginUserNavigation loginUserNavigation = new LoginUserNavigation();

    @Test(groups = "shouldDisplayProfileAttributesOnMyProfilePage")
    public void shouldDisplayProfileAttributesOnMyProfilePage() throws Exception {
        MyProfilePage myProfilePage = loginUserNavigation
                .loginAsDefaultUser(getEnvironment())
                .getTopHeader()
                .goToManageYourAccount();
        assertThat("department is wrong",
                myProfilePage.getAttributeValue(MyProfilePage.ProfileAttribute.DEPARTMENT),
                is(equalTo("Technology")));
    }
{ "domain": "codereview.stackexchange", "id": 21163, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, webdriver", "url": null }
- Take your red integral and integrate it by parts again, making sure to let $u$ be the exponential function again. You will get that (the integral you care about) = stuff + (the integral you care about)·(other stuff), and therefore (the integral you care about) = stuff/(1 − other stuff). - In general, if you want to find $$\int e^{ax}\cdot \sin{bx}\cdot dx$$ you can argue as follows: Note that for any $\alpha$ or $\beta$, you have \eqalign{ & \frac{d}{{dx}}\left( {{e^{\alpha x}}\sin \beta x} \right) = \alpha {e^{\alpha x}}\sin \beta x + \beta {e^{\alpha x}}\cos \beta x \cr & \frac{d}{{dx}}\left( {{e^{\alpha x}}\cos \beta x} \right) = \alpha {e^{\alpha x}}\cos \beta x - \beta {e^{\alpha x}}\sin \beta x \cr} so that any integral of the form $$\int e^{\alpha x}\cdot \sin{\beta x}\cdot dx$$ is a linear combination of the former functions. Let's then find $c_1$ and $c_2$ such that $$\frac{d}{{dx}}\left( {{c_1}{e^{\alpha x}}\sin \beta x + {c_2}{e^{\alpha x}}\cos \beta x} \right) = {e^{\alpha x}}\sin \beta x$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9740426443092215, "lm_q1q2_score": 0.8279081752258711, "lm_q2_score": 0.8499711794579723, "openwebmath_perplexity": 1005.4668307075298, "openwebmath_score": 0.9939223527908325, "tags": null, "url": "http://math.stackexchange.com/questions/136595/finding-int-e2x-sin4x-dx" }
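Solving that linear system gives $c_1 = \alpha/(\alpha^2+\beta^2)$ and $c_2 = -\beta/(\alpha^2+\beta^2)$, i.e. $F(x) = e^{\alpha x}(\alpha\sin\beta x - \beta\cos\beta x)/(\alpha^2+\beta^2)$. For the concrete integral in the question ($\alpha = 2$, $\beta = 4$), a quick numerical check that $F' = e^{\alpha x}\sin\beta x$:

```python
import math

a, b = 2.0, 4.0  # the alpha, beta of int e^{2x} sin(4x) dx

def F(x):
    """Candidate antiderivative e^{ax}(a sin bx - b cos bx) / (a^2 + b^2)."""
    return math.exp(a * x) * (a * math.sin(b * x) - b * math.cos(b * x)) / (a * a + b * b)

def f(x):
    """The integrand e^{ax} sin(bx)."""
    return math.exp(a * x) * math.sin(b * x)

# Central-difference check of F'(x) == f(x) at several points:
h = 1e-6
for x in (-1.0, 0.0, 0.3, 1.2):
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - f(x)) < 1e-6
```

Differentiating $F$ symbolically, the cross terms $\mp \alpha\beta e^{\alpha x}\cos\beta x$ cancel and the $\sin$ terms collect a factor $\alpha^2+\beta^2$, confirming the check.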
# How to determine the most probable value of the perimeter of a quadrilateral?

The problem is as follows: The diagonals of a quadrilateral measure $$8\,cm$$ and $$10\,cm$$. Using this information find the most probable perimeter of this quadrilateral indicated as a range. The alternatives given in my book are: $$\begin{array}{ll} 1.&\textrm{Between 14 cm and 16 cm}\\ 2.&\textrm{Between 18 cm and 36 cm}\\ 3.&\textrm{Between 38 cm and 40 cm}\\ 4.&\textrm{Between 42 cm and 50 cm}\\ 5.&\textrm{Between 12 cm and 14 cm}\\ \end{array}$$ In this problem I don't know what sort of strategy can be used. Does there exist a relationship between the diagonals of a quadrilateral? So far what came to my mind was to use the triangle inequality, which states that the sum of two sides of a triangle is greater than the third side. Given a quadrilateral $ABCD$, the diagonals are $AC$ and $BD$. Applying the triangle inequality to the triangles the diagonals cut off gives: $$AC<AB+BC$$ $$BD<BC+CD$$ $$AC<CD+AD$$ $$BD<AB+AD$$ By summing the expressions given: $$2AC+2BD<2AB+2BC+2CD+2AD$$ $$AC+BD<AB+BC+CD+AD$$ So the sum of the diagonals is less than the perimeter; this gives a lower bound of $18\,cm$ on the perimeter. But how about the other bound? Well, for this part I'm using the reverse triangle inequality, which states that any side is greater than the difference of the other two sides, provided one is greater than the other. In this case I'm assuming $$AB>BC$$ and $$BC>CD$$ and $$CD>AD$$ and $$AB>AD$$. Then this would mean: $$AC>AB-BC$$ $$BD>BC-CD$$ $$AC>CD-AD$$ $$BD>AB-AD$$ The thing here is that you may not get the perimeter directly. Summing these:
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9688561712637256, "lm_q1q2_score": 0.8254197352752342, "lm_q2_score": 0.8519528076067262, "openwebmath_perplexity": 285.71180065507934, "openwebmath_score": 0.7358667254447937, "tags": null, "url": "https://math.stackexchange.com/questions/3920313/how-to-determine-the-most-probable-value-of-the-perimeter-of-a-quadrilateral" }
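The two-sided bound that resolves the question can be packaged in a few lines. The lower bound is the diagonal sum derived above; for the upper bound, each side is shorter than the two diagonal segments that span it (e.g. $AB < AO + OB$ with $O$ the intersection of the diagonals), and summing over the four sides counts every segment twice:

```python
def perimeter_range(d1, d2):
    # Lower bound: summing AC < AB+BC, BD < BC+CD, AC < CD+AD, BD < AB+AD
    # gives perimeter > d1 + d2.
    # Upper bound: AB < AO+OB etc. for each side; the four half-diagonal
    # segments are each counted twice, so perimeter < 2*(d1 + d2).
    return d1 + d2, 2 * (d1 + d2)

lo, hi = perimeter_range(8, 10)
print(lo, hi)  # 18 36 -> option 2: between 18 cm and 36 cm
```

So the range (18 cm, 36 cm) matches alternative 2 of the book.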
complexity-theory, time-complexity, polynomial-time, statistics Title: How to determine if a black-box is polynomial or exponential I have a problem which essentially reduces to this: You have a black-box function that accepts inputs of length $n$. You can measure the amount of time the function takes to return the answer, but you can't see exactly how it was calculated. You have to determine whether the time-complexity of this function is polynomial or exponential. The way I did this was by running thousands of random sample inputs of varying lengths through the function, then plotting them on a scatter plot with times on the y-axis and input length on the x-axis. What are some metrics and methods I can use to determine if these points best fit a polynomial curve or an exponential curve? (Similar question asking how to draw polynomial/exponential best-fit lines in Python on Stack Overflow: https://stackoverflow.com/questions/23026267/how-to-determine-if-a-black-box-is-polynomial-or-exponential) Theoretically speaking, this is impossible to accomplish, basically because "polynomial" and "exponential" are asymptotic concepts, and no prefix of the data guarantees anything about the behavior at infinity. Practically speaking, you can try to compute $t(n)^{1/n}$ and see if it approaches a constant bounded away from 1. If so, it is exponential. To test whether it's polynomial, you see whether $\log t/\log n$ approaches a constant. These are only rough tests, of course, but they could be useful in practice.
{ "domain": "cs.stackexchange", "id": 2685, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, time-complexity, polynomial-time, statistics", "url": null }
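A sketch of both diagnostics on synthetic timing data (the running times below are made up to illustrate the two regimes, not real measurements):

```python
import math

def diagnose(times):
    """times: dict n -> t(n). Returns (t^(1/n) sequence, log t / log n sequence)."""
    ns = sorted(times)
    root = [times[n] ** (1.0 / n) for n in ns]                # -> c > 1 if exponential
    loglog = [math.log(times[n]) / math.log(n) for n in ns]   # -> degree if polynomial
    return root, loglog

cubic = {n: 5.0 * n ** 3 for n in range(50, 500, 50)}         # a "polynomial" black box
expo = {n: 1.001 * 2.0 ** (0.1 * n) for n in range(50, 500, 50)}  # an "exponential" one

root_c, loglog_c = diagnose(cubic)
root_e, loglog_e = diagnose(expo)
# Cubic data: t^(1/n) drifts down toward 1 and log t / log n settles near the
# degree 3 (slowly, offset by log 5 / log n). Exponential data: t^(1/n)
# settles near 2^0.1 ≈ 1.072, bounded away from 1.
print(round(loglog_c[-1], 2), round(root_e[-1], 3))
```

As the answer warns, these are heuristics: constant factors and lower-order terms contaminate small-$n$ estimates, so one should look at the trend of the sequence, not any single value.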
• I think Mathematica's floating-point handling can sometimes be strange. I tried all these commands shown in the question and the answers given in Maple 2017, used Maple software floating point (evalf, not hardware floating point evalfh), and never got a complex value to show up. I think one needs a PhD in computer science to really understand Mathematica's floating-point handling :) – Nasser Feb 25 '18 at 1:06 • @Nasser Did you try exp(102*log(-1)) in Maple? In Matlab, exp(..) gives the Mathematica result, but (-1).^102 gives 1. I think in Matlab, and maybe Maple, -1 raised to a floating-point whole number is treated as a special case. OTOH, (-1)^10000000000000001 in Matlab yields 1. -- I rather think in this case Mathematica's handling is predictably consistent and normal: a^b == Exp[b * Log[a]] and Log[-1] = Pi * I. One should expect a rounding error of at most 102*Pi*$MachineEpsilon/2, which is about what we get. IMO, the issue is not floating point, but the definition of (-1)^x. – Michael E2 Feb 25 '18 at 3:32 • For exp(102*log(-1)) Maple gives 1; here is a screenshot: [screenshot] – Nasser Feb 25 '18 at 4:01 • @Nasser I see an imaginary part on the order of 10^-8 on the floating-point input; that seems like a large error. Does Maple use single precision? I mean, Mathematica gives 1 too: i.stack.imgur.com/IoQZv.png – Michael E2 Feb 25 '18 at 4:05 • By default, Maple software floating point uses the Digits setting, which defaults to 10. Here is the same thing after I changed it to 16: [screenshot] Maple software floating point is what is used by default. Now it gives a similar value to Mathematica. To use hardware floating point, there is also evalfh. – Nasser Feb 25 '18 at 4:45
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9324533069832974, "lm_q1q2_score": 0.8117913051086764, "lm_q2_score": 0.870597271765821, "openwebmath_perplexity": 1562.445111263223, "openwebmath_score": 0.4757954776287079, "tags": null, "url": "https://mathematica.stackexchange.com/questions/166566/mathematica-thinks-1n-is-non-real" }
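Python's number types show the same split discussed in the comments: integer powers of -1 are computed exactly, while the `exp(b*log(a))` route inherits the rounding error of `pi`, amplified by the exponent:

```python
import cmath

# Exact integer arithmetic: (-1)^102 is computed by repeated squaring, not logs.
assert (-1) ** 102 == 1

# The a^b == exp(b*log(a)) route, with log(-1) = pi*i rounded to a float:
z = cmath.exp(102 * cmath.log(-1))
# The real part is 1 up to rounding, but a tiny imaginary residue survives --
# roughly 102 times the representation error of pi, on the order of 1e-14,
# consistent with the 102*Pi*$MachineEpsilon/2 estimate in the thread.
print(z.real, abs(z.imag))
```

So the "complex answer" is not a bug in any one system; it is the predictable cost of defining `a^b` through the complex logarithm for negative bases.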
newtonian-mechanics, forces, reference-frames, acceleration, centrifugal-force An object acted on by no net external force remains in equilibrium, i.e. has a constant velocity (which may be zero) and zero acceleration. This is why we must introduce a force that acts on the passenger in order to make the reference frame $B$ satisfy Newton's first and second laws. Since this force does not actually exist, it is called a pseudo-force or fictitious force. To describe this with equations: $$a_{P/A} = a_{P/B} + a_{B/A}$$ where $a_{P/A}$ is the passenger's acceleration in $A$, $a_{P/B}$ is the passenger's acceleration in $B$, and $a_{B/A}$ is the wagon's acceleration in $A$. Since no external force acts on the passenger, $a_{P/A} = 0$, and the above equation becomes $$a_{P/B} = -a_{B/A}$$ Suddenly, the passenger has some acceleration although there is no external force acting on them. Introduce a fictitious force to make Newton's first and second laws work in $B$ and the problem is solved. However, this clearly violates Newton's third law, since the fictitious force has no reaction pair, i.e. there is no force that acts from the passenger on the wagon. This is why non-inertial reference frames are not suitable for Newton's laws.
{ "domain": "physics.stackexchange", "id": 84569, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, forces, reference-frames, acceleration, centrifugal-force", "url": null }
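The acceleration-composition step can be made concrete with numbers (the mass and braking acceleration below are made-up values for illustration):

```python
def passenger_accel_in_wagon_frame(a_P_A, a_B_A):
    """From a_P/A = a_P/B + a_B/A, solve for a_P/B."""
    return a_P_A - a_B_A

m = 70.0       # passenger mass in kg (illustrative)
a_B_A = -4.0   # wagon braking at 4 m/s^2 (illustrative)

# No external horizontal force on the passenger, so a_P/A = 0:
a_P_B = passenger_accel_in_wagon_frame(0.0, a_B_A)

# The force frame B must invent so that F = m*a still holds there:
pseudo_force = m * a_P_B
print(a_P_B, pseudo_force)  # 4.0 280.0 -- the passenger is "thrown forward"
```

The 280 N "force" has no agent and no reaction partner, which is exactly the third-law violation described above.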
vcf, bash, linux

#Control or Normal read, variant, reference information
ReadCountControl=$(echo "$line" | cut -f 8 | sed 's/;/\t/g' | cut -f 14 | sed 's/ReadCountControl=//')
VariantAlleleCountControl=$(echo "$line" | cut -f 8 | sed 's/;/\t/g' | cut -f 27 | sed 's/VariantAlleleCountControl=//')
ReferenceAlleleCountControl=$(echo "$line" | awk -v rcc="$ReadCountControl" -v vacc="$VariantAlleleCountControl" '{print rcc-vacc}')
VAF=$(echo "$line" | cut -f 8 | sed 's/;/\t/g' | cut -f 28 | sed 's/VariantAlleleFrequency=//')

#Print output
echo -e "$outname\t$chrom\t$Pos\t$Ref\t$Alt\t$ReadCount\t$VariantAlleleCount\t$ReferenceAlleleCount\t$ReadCountControl\t$VariantAlleleCountControl\t$ReferenceAlleleCountControl\t$VAF" >> "$outname"

#Remove info tags from VCF
done < <(egrep -v '^#' "$1")
{ "domain": "bioinformatics.stackexchange", "id": 1215, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vcf, bash, linux", "url": null }
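The repeated cut/sed pipelines above select INFO fields by position (14, 27, 28), which silently breaks if a record has a different set of tags. A more robust pattern is to parse the INFO column into a key/value map and look fields up by name; a minimal Python sketch (the sample record below is made up):

```python
def parse_info(info):
    """Split a VCF INFO column 'K1=V1;K2=V2;FLAG' into a dict (flags map to True)."""
    out = {}
    for field in info.split(";"):
        key, _, value = field.partition("=")
        out[key] = value if value else True
    return out

# An illustrative VCF data line (tab-separated, INFO is column 8):
line = ("chr1\t12345\t.\tA\tT\t60\tPASS\t"
        "ReadCountControl=80;VariantAlleleCountControl=5;VariantAlleleFrequency=0.0625")

info = parse_info(line.split("\t")[7])
rcc = int(info["ReadCountControl"])
vacc = int(info["VariantAlleleCountControl"])
print(rcc - vacc, info["VariantAlleleFrequency"])  # 75 0.0625
```

Keying by tag name also makes the reference-allele subtraction (`rcc - vacc`) a plain arithmetic expression instead of an awk subprocess per line.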
$$Pr(\text{2 boxes hold 6 or more balls}) = \frac{30415369655816064000}{30^{14}} \approx 0.063591$$ The reason I'm dividing by $30^{14}$ in these probabilities is because I normalized the counting to begin with one case where a ball is in one box. If we counted that as thirty cases, we'd need to divide by $30^{15}$. So this keeps the totals slightly smaller. Each ball we add increases the total number of cases by a factor of $30$. I wrote a recursive rule to build cases for $n+1$ balls from cases for $n$ balls. The first few cases have the following counts:

/* case(Balls, Partition, LengthOfPartition, Count) */
case(1,[1],1,1).       /* Count is nominally 1 to begin */
case(2,[2],1,1).
case(2,[1,1],2,29).
case(3,[3],1,1).
case(3,[1,2],2,87).
case(3,[1,1,1],3,812). /* check: for Sum = 3, sum of Count is 900 */

The number of cases generated is modest enough for a desktop, daunting to manage by hand. For $n=15$ there are $176$ partitions. It simplified the Prolog code to maintain the partitions as lists in ascending order. • I checked it out with a C# program. I simulated the problem with random numbers and counted how often I find 6 or more balls in only 2 boxes. Over 1,000,000 trials: Pr(2 boxes hold 6 or more balls) = 0.063491. Thanks a lot! – Simon Aug 4 '14 at 13:39 • Simulation is easier to program than the recursive/exact counting, but as briefly mentioned, getting an extra digit of accuracy may well require a hundred times as many trials because of the random-walk phenomenon. – hardmath Aug 4 '14 at 13:42 Solution of Question 1:
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9814534354878828, "lm_q1q2_score": 0.8509552150325912, "lm_q2_score": 0.8670357494949105, "openwebmath_perplexity": 588.1410203704329, "openwebmath_score": 0.7497148513793945, "tags": null, "url": "https://math.stackexchange.com/questions/878115/in-30-boxes-are-15-balls-chance-all-balls-in-10-or-less-boxes" }
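The quoted figure can be sanity-checked the same way the commenter did in C#. A minimal Monte Carlo sketch, reading "2 boxes hold 6 or more balls" as the two fullest boxes together containing at least 6 of the 15 balls (this reading of the event is an assumption, matching the commenter's simulated value):

```python
import random

def two_fullest_hold_six(rng, balls=15, boxes=30):
    # Drop each ball into a uniformly random box, then inspect the two fullest.
    counts = [0] * boxes
    for _ in range(balls):
        counts[rng.randrange(boxes)] += 1
    counts.sort()
    return counts[-1] + counts[-2] >= 6

rng = random.Random(1)          # fixed seed for a reproducible estimate
trials = 200_000
hits = sum(two_fullest_hold_six(rng) for _ in range(trials))
estimate = hits / trials        # should land near the exact 0.063591
```

With 200,000 trials the standard error is about 0.0005, so — as the answer notes — gaining an extra digit of accuracy by simulation costs roughly a hundredfold more trials.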
quantum-mechanics, angular-momentum, operators $$ L^2 L_+\,f= L_+ L^2 f \tag{1} $$ and since by definition $L^2 f=\hbar^2 l(l+1) f$, it follows from (1) that $L_+\,f$ will have the same eigenvalue for $L^2$ as $f$, as will all states of the ladder.
{ "domain": "physics.stackexchange", "id": 57834, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, angular-momentum, operators", "url": null }
non-locality, contextuality, peres-mermin-square for every set of outcomes $\{\tilde{a}_i\}_1^k$. In words, this implies that the marginals of the different contexts over their intersection must coincide. You may already have noticed that statistics arising from Bell scenarios allow for this description: Bell scenarios are a particular case of compatibility scenarios, and the nonsignaling condition is a particular case of the nondisturbing condition. Let us see the particular case of the CHSH scenario, where Alice has two binary-outcome measurements $\{x_1,x_2\}$ and Bob has two binary-outcome measurements $\{y_1,y_2\}$, with the set of outcomes being $\{+1,-1\}$. In this case the contexts are imposed by space-like separation, hence the structure of contexts is $\mathcal{C} := \{\{x_1,y_1\},\{x_1,y_2\},\{x_2,y_1\},\{x_2,y_2\}\}$. Note that this structure may be put into a graph (known as the compatibility graph)
{ "domain": "quantumcomputing.stackexchange", "id": 3686, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "non-locality, contextuality, peres-mermin-square", "url": null }
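As a small illustration of the nondisturbing condition in this CHSH setting, one can check that a behaviour's marginals agree across the contexts that share a measurement. The sketch below uses the PR-box correlations as a hypothetical example behaviour (settings indexed 0 and 1 rather than $x_1,x_2$):

```python
from itertools import product

# PR-box behaviour: p(a, b | x, y) = 1/2 when a*b == (-1)**(x*y), else 0,
# with outcomes a, b in {+1, -1} and settings x, y in {0, 1}.
def p(a, b, x, y):
    return 0.5 if a * b == (-1) ** (x * y) else 0.0

def marginal_alice(a, x, y):
    """Alice's marginal for outcome a of setting x, computed in context (x, y)."""
    return sum(p(a, b, x, y) for b in (+1, -1))

# Nondisturbance/nonsignaling: Alice's marginal must not depend on Bob's setting y.
nonsignaling = all(
    abs(marginal_alice(a, x, 0) - marginal_alice(a, x, 1)) < 1e-12
    for a, x in product((+1, -1), (0, 1))
)
```

The same check with the roles swapped verifies Bob's marginals; together they are exactly the nonsignaling condition for this compatibility scenario.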
transform <launch> <param name="/use_sim_time" value="true"/> <node name="rosplay" pkg="rosbag" type="play" args="/home/Desktop/PowRos/Data/user2/pow.bag --loop --clock"/> <node pkg="tf" type="static_transform_publisher" name="baselink_laser" args="0 0 0 0 0 0 /base_link /laser 10"/> <node pkg="tf" type="static_transform_publisher" name="laser_imu" args="0 0 0 0 0 0 /laser /base_imu 10"/> <node pkg="tf" type="static_transform_publisher" name="baselink_camera" args="0 0 0 0 0 0 /base_link /camera 10"/> <node pkg="tf" type="static_transform_publisher" name="door1" args="1 0 0 0 0 0 /odom /door1 10"/> <!-- Start the map server node and specify the map file (*.pgm) and the map resolution in metres/pixel --> <node name="map_server" pkg="map_server" type="map_server" args="$(find amcl_listener)/maps/pow_real_time.yaml" output="screen"/> <!--Start the Laser_scan_matcher package, to provide odometry from laser data (ICP)--> <node pkg="laser_scan_matcher" type="laser_scan_matcher_node" name="laser_scan_matcher_node" output="screen"> <param name="use_alpha_beta" value="true"/> <param name="max_iterations" value="10"/> </node>
{ "domain": "robotics.stackexchange", "id": 10648, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "transform", "url": null }
Now, given a rank $N$, knowing $\phi(N)$ amounts to positioning $N$ w.r.t. the following sectors $$2n^2+3n<2n^2+3n+1<\cdots <2n^2+5n+2<2n^2+5n+3<\cdots <2(n+1)^2+3(n+1)$$ The positioning can be achieved by solving $2n^2+3n-N=0$ and taking the integer part, here $$n=\lfloor\frac{-3+\sqrt{9+8N}}{4}\rfloor\ .$$ Late edit The above is just to show you in detail where the ideas came from. If you have to compactly answer the ranking question ($B$), you can, for instance, use the piecewise description of Milo (with $T_n=\frac{n(n+1)}{2}$). Having to compute $g(N)$, you first compute $$n=\lfloor\frac{-1+\sqrt{1+8N}}{2}\rfloor\ .$$ You then get $k=N-T_n$ and can apply the dichotomy. If you are chasing a single formula, you can even have one with "commutators". Late late edit Here is, just for fun, my version of commutators; first use the Iverson_bracket to write $g$ and then (I use [[??]] here in order to differentiate it from the maths) convert Iverson brackets as follows $$[[k\mbox{ is odd}]]=k-2\lfloor\frac{k}{2}\rfloor\ ;\ [[k\mbox{ is even}]]=1-k+2\lfloor\frac{k}{2}\rfloor\ .$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9910145733704833, "lm_q1q2_score": 0.8031964167065775, "lm_q2_score": 0.8104789155369047, "openwebmath_perplexity": 220.9275912624634, "openwebmath_score": 0.9267269968986511, "tags": null, "url": "https://math.stackexchange.com/questions/2529501/transforming-a-function-f-mathbb-n2-to-a-to-a-function-f-mathbb-n-to-a" }
aerodynamics, bernoulli-equation, aircraft Title: Why can a helicopter fly upside down? I saw images and video clips of helicopters flying upside down, so it can't just be the Bernoulli principle or the angle of attack of the rotor blades. So how can an upside-down helicopter provide lift? The rotating blades of a helicopter's main rotor are shaped like wings in cross-section. Their angle of attack can be changed by the pilot, with a lever called the collective pitch control, to make the helicopter climb or descend. If you loop a helicopter so it is upside down and then push the collective to descend, the pitch of the blades will move into a position where they will generate lift even though they are upside down. This means that it is at least theoretically possible to fly a helicopter inverted. Note that the main rotor blade hub assembly of some helicopters cannot function properly unless the helicopter is right-side up and gravity is pulling down on it, i.e., the rotor disc is loaded. If you unload the rotor by flying this type of helicopter in a parabolic arc so the occupants experience zero gravity, it's possible to lose control, and then very bad things will quickly happen.
{ "domain": "physics.stackexchange", "id": 64946, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "aerodynamics, bernoulli-equation, aircraft", "url": null }
physical-chemistry, kinetics I don't understand why we have to divide by the partial pressure of hydrogen. In my understanding the rate is $\frac{dp(NH_3)}{dt}=-kp(NH_3)$. I know I have to use the fact that hydrogen binds strongly to the surface, but I'm lost. Can you please explain to me how I should derive this, or just point me in the right direction? I lack a good understanding of this topic, so sorry for this novice question. What I've searched: Why is decomposition of ammonia gas on quartz a 1st order reaction? What is the molecularity of the RDS of a zero order complex reaction? Thank you for your time. Thanks to Poutnik's comment, I arrived at the answer and I would like to share it. $$\ce{2NH_3 ->N_2 + 3H_2}$$ $$\theta_{NH_3}=\frac{K_{NH_3}p(NH_3)}{1+K_{NH_3}p(NH_3)+K_{N_2}p(N_2)+K_{H_2}p(H_2)}$$ $$\theta_{N_2}=\frac{K_{N_2}p(N_2)}{1+K_{NH_3}p(NH_3)+K_{N_2}p(N_2)+K_{H_2}p(H_2)}$$ $$\theta_{H_2}=\frac{K_{H_2}p(H_2)}{1+K_{NH_3}p(NH_3)+K_{N_2}p(N_2)+K_{H_2}p(H_2)}$$
{ "domain": "chemistry.stackexchange", "id": 13548, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, kinetics", "url": null }
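To see where the division by the hydrogen pressure comes from: if the surface reaction rate is taken proportional to $\theta_{NH_3}$ (the usual Langmuir picture), then when hydrogen adsorbs strongly the $K_{H_2}p(H_2)$ term dominates the denominator and the rate reduces to roughly $K_{NH_3}p(NH_3)/K_{H_2}p(H_2)$. A numeric sketch with made-up adsorption constants:

```python
def theta_nh3(p_nh3, p_n2, p_h2, K_nh3, K_n2, K_h2):
    # Langmuir coverage of NH3 with competitive adsorption of N2 and H2.
    denom = 1 + K_nh3 * p_nh3 + K_n2 * p_n2 + K_h2 * p_h2
    return K_nh3 * p_nh3 / denom

# Hypothetical constants; hydrogen adsorbs far more strongly than the rest.
K_nh3, K_n2, K_h2 = 1.0, 1.0, 1.0e6
p_nh3, p_n2, p_h2 = 0.2, 0.1, 0.5

rate = theta_nh3(p_nh3, p_n2, p_h2, K_nh3, K_n2, K_h2)   # rate proportional to theta_NH3
limit = K_nh3 * p_nh3 / (K_h2 * p_h2)                    # strong-adsorption limit
```

With these numbers the full coverage expression and the $p(NH_3)/p(H_2)$ limit agree to a few parts per million, which is exactly the origin of the $1/p(H_2)$ factor in the observed rate law.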
reference-request, ethics, explainable-ai Ecco: explaining transformer-based NLP models using interactive visualizations (homepage, code, article). Recipes for Machine Learning Interpretability in H2O Driverless AI (repo) Reviews & general papers A Survey of Methods for Explaining Black Box Models (2018, ACM Computing Surveys) Definitions, methods, and applications in interpretable machine learning (2019, PNAS) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (2019, Nature Machine Intelligence, preprint) Machine Learning Interpretability: A Survey on Methods and Metrics (2019, Electronics) Principles and Practice of Explainable Machine Learning (2020, preprint) Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges (keynote at 2020 ECML XKDD workshop by Christoph Molnar, video & slides) Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2020, Information Fusion) Counterfactual Explanations for Machine Learning: A Review (2020, preprint, critique by Judea Pearl) Interpretability 2020, an applied research report by Cloudera Fast Forward, updated regularly Interpreting Predictions of NLP Models (EMNLP 2020 tutorial) Explainable NLP Datasets (site, preprint, highlights) Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges eBooks (available online) Interpretable Machine Learning, by Christoph Molnar, with R code available Explanatory Model Analysis, by DALEX creators Przemyslaw Biecek and Tomasz Burzykowski, with both R & Python code snippets An Introduction to Machine Learning Interpretability (2nd ed. 
2019), by H2O Online courses & tutorials Machine Learning Explainability, Kaggle tutorial Explainable AI: Scene Classification and GradCam Visualization, Coursera guided project Explainable Machine Learning with LIME and H2O in R, Coursera guided project Interpretability and Explainability in Machine Learning, Harvard COMPSCI 282BR Other resources explained.ai blog A Twitter thread, linking to several interpretation tools available for R A whole bunch of resources in the Awesome Machine Learning Interpretability repo
{ "domain": "ai.stackexchange", "id": 2426, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reference-request, ethics, explainable-ai", "url": null }
• Dear Teacher, Dear Professor, I'm so sorry for this comment..I asked mathoverflow for help. But the question was downvoted. I had to delete the question. I really need help. But unfortunately, no one helped. If I ask you for help, can you take a look? If you have a few minutes. Then, I will delete this comment..Thank you very much.. mathoverflow.net/q/298997/123863 – Mathematics is Life Apr 29 '18 at 17:40 • @MathematicsIsLife: This seems to be an entirely different question. If you could explain what you are doing and summarize your data, someone might be able to help. For example, your first vector $P_1$ when multiplied by 42 can be summarized as: values 0, 1, 2, 3, 4, 5 with respective frequencies 25, 4, 5, 5, 2, 1 . – BruceET Apr 29 '18 at 18:01 • Teacher, the sum of all values are equal to $1$. For any $P(x).$ Because, this is probabilistic distribution...(English is my second language..Sorry for wrong words) – Mathematics is Life Apr 29 '18 at 18:11 • Teacher, I can not do anything because I do not have a computer and I can not use mathematical softwares..:( – Mathematics is Life Apr 29 '18 at 18:42 This can be phrased as Given that at least one die rolls a 6, what is the probability of rolling a double 6? We can use the conditional probability formula: $$P(A|B) = \frac{P(A\ \mathrm{and}\ B)}{P(B)}$$ This means "the probability of event A given event B, is the probability of A and B divided by the probability of B".
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9658995742876885, "lm_q1q2_score": 0.8374694663072302, "lm_q2_score": 0.8670357546485408, "openwebmath_perplexity": 383.7705486631621, "openwebmath_score": 0.8932090997695923, "tags": null, "url": "https://math.stackexchange.com/questions/2493037/probability-of-rolling-a-double-6-with-two-dice" }
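The conditional-probability formula above can be checked by brute-force enumeration of the 36 equally likely rolls: event B (at least one six) has 11 outcomes, A-and-B (double six) has one, so $P(A|B) = \frac{1/36}{11/36} = \frac{1}{11}$.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))     # all 36 rolls of two dice
b = [o for o in outcomes if 6 in o]                 # B: at least one die shows a 6
a_and_b = [o for o in b if o == (6, 6)]             # A and B: a double 6

p_a_given_b = Fraction(len(a_and_b), len(b))        # P(A|B) = P(A and B) / P(B)
```

Using exact fractions avoids any floating-point ambiguity in the comparison.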
java, linked-list, generics /** * POSTCONDITIONS * return the next current element */ public boolean hasNext() { return this.currentElement != null; } /** * PRECONDITION * currentElement != null * POSTCONDITIONS * return all elements consecutively in the given order */ public T next(){ ListElement<T> next = this.currentElement.getNext(); ListElement<T> returnElement = this.currentElement; this.currentElement = next; return returnElement.getValue(); } /** * PRECONDITION * currentElement != null * POSTCONDITION: The element is removed from the linked list. */ public void remove() { ListElement<T> nextElement = this.currentElement.getNext(); ListElement<T> previousElement = this.currentElement.getPrevious(); previousElement.setNext(nextElement); nextElement.setPrevious(previousElement); this.currentElement = nextElement; } /** * PRECONDITION * builder != null * POSTCONDITIONS * return elements as a String */ public String toString(){ ListIterator<T> iterator = new ListIterator<T>(this.currentElement); StringBuilder builder = new StringBuilder(); builder.append("["); while(iterator.hasNext()){ builder.append(iterator.next()); builder.append(", "); } builder.append("]"); return builder.toString(); } } protected class ListElement<T>{ private T value; private ListElement<T> previous; private ListElement<T> next;
{ "domain": "codereview.stackexchange", "id": 5245, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, linked-list, generics", "url": null }
python my_dic['Model'] = ['Apple', 'Banana', 'Pineapple', 'Melon', 'Orange', 'Grape'] y_list = [y_Apple, y_Banana, y_Pineapple, y_Melon, y_Orange, y_Grape] keys = zip(['AAA', 'BBB', 'CCC', 'DDD', 'EEE'], ['method1', 'method2', 'method3', 'method4', 'method5']) func = lambda F, a, b: eval(F)(a,b) for name, method in keys: my_dic[name] = [ func(method, y, y2) for y2 in y_list]
{ "domain": "datascience.stackexchange", "id": 6035, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
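The `eval`-based dispatch above can be expressed without `eval` by keeping a dict from column name to a callable, which is safer and easier to debug. The two stand-in metric functions below are hypothetical placeholders for `method1`…`method5`:

```python
# Hypothetical stand-ins for the real scoring methods.
def method1(y, y2):                      # mean absolute error
    return sum(abs(a - b) for a, b in zip(y, y2)) / len(y)

def method2(y, y2):                      # mean squared error
    return sum((a - b) ** 2 for a, b in zip(y, y2)) / len(y)

columns = {"AAA": method1, "BBB": method2}   # name -> callable, no eval needed

y = [1.0, 2.0, 3.0]
y_list = [[1.0, 2.0, 4.0], [0.0, 2.0, 3.0]]

my_dic = {name: [fn(y, y2) for y2 in y_list] for name, fn in columns.items()}
```

Mapping names directly to function objects keeps the loop identical in shape while removing the string-to-code step entirely.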
data-mining, social-network-analysis, crawling, scraping Title: LinkedIn web scraping I recently discovered a new R package for connecting to the LinkedIn API. Unfortunately the LinkedIn API seems pretty limited to begin with; for example, you can only get basic data on companies, and this is detached from data on individuals. I'd like to get data on all employees of a given company, which you can do manually on the site but is not possible through the API. import.io would be perfect if it recognised the LinkedIn pagination (see end of page). Does anyone know any web scraping tools or techniques applicable to the current format of the LinkedIn site, or ways of bending the API to carry out more flexible analysis? Preferably in R or web based, but certainly open to other approaches. Beautiful Soup is designed for parsing pages in web-scraping workflows, but it is written for Python, not R
{ "domain": "datascience.stackexchange", "id": 525, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "data-mining, social-network-analysis, crawling, scraping", "url": null }
electrical-engineering, automotive-engineering Title: Remote start on a car with manual transmission I'm making a remote starter for a car using a Raspberry Pi. The problem is that the car has a manual transmission. The driver needs to remember to place the gear shift in neutral when parking, otherwise all kinds of bad stuff could happen when the car is started in-gear. What possible solutions do I have for making an interlock that prevents the car from starting in gear? It's a 2001 Honda Accord, if that helps. I have seen transmissions with sensors (switches) on every gear, but this transmission does not have those. I thought of monitoring the speedometer sensor, but I think that info would come too late. I'm thinking I might have to put one or more switches on the shift lever to detect its position, but this sounds like a lot of work. I can't think of a more elegant way than adding a sensor or sensors either. An inductive sensor may be a good option for a switch if you have enough room below and the stick has metal in it. If not, you could epoxy a magnet to the hidden part of the shifting stick and use a Hall-effect sensor (or a reed switch, but I don't recommend it). A roller-style limit switch may also be an option, but I would recommend going non-contact if possible. http://www.amazon.com/gp/product/B008MU1GEY http://www.mouser.com/ProductDetail/Littelfuse/55100-3H-02-A/?qs=nyo4TFax6Nff4PypTg%2FOjg%3D%3D You may also want to include the parking brake in the interlock system. It would be a little extra insurance if the sensor fails. An accelerometer would also be some good insurance. If it feels a kick when it starts the engine it immediately stops and prevents further input until the car is manually started. Even though this response will be very fast, the car could still creep forward 6 inches or more so it would still be a safety issue and couldn't be the primary interlock system.
And as previously mentioned in the comments you will want to make sure that if you are bypassing the clutch interlock you make sure that it is still enforced when starting the car manually.
{ "domain": "engineering.stackexchange", "id": 5082, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrical-engineering, automotive-engineering", "url": null }
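The layered checks described above — shifter position as the primary interlock, parking brake as backup — amount to a simple guard before energizing the starter relay. A sketch with hypothetical sensor inputs:

```python
def safe_to_remote_start(in_neutral, parking_brake_set):
    """Permit a remote start only when every independent check passes.

    in_neutral:        hypothetical shifter-position sensor (inductive / Hall effect)
    parking_brake_set: backup check in case the position sensor fails
    """
    return in_neutral and parking_brake_set

# The starter relay should close in exactly one combination of sensor states:
allowed = [(n, b) for n in (True, False) for b in (True, False)
           if safe_to_remote_start(n, b)]
```

The accelerometer kill described in the answer would sit outside this guard, as a runtime abort rather than a precondition.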
electromagnetism, electrostatics, potential Title: Electric field calculation gives opposite direction We have a finite line of charge of length l positioned as shown from x to x+l I want to find the electric field at the point 0. I have calculated the electric potential to be the following: $$V=\frac{\lambda}{4\pi\epsilon_0}\ln\frac{x+l}{x}$$ Then by definition $$\vec E=-\nabla V\\=-\frac{\partial V}{\partial x}\hat x=-\frac{\lambda}{4\pi\epsilon_0}\left(\frac{1}{x+l}-\frac{1}{x}\right )\hat x=-\frac{\lambda}{4\pi\epsilon_0}\left(\frac{x-x-l}{x(x+l)}\right)\hat x\\\vec E=\frac{\lambda l}{4\pi\epsilon_0x(x+l)}\hat x$$ However, it is obvious that E at 0 points towards the left, so I should get $-\hat x$. While calculating V I integrated from x to x+l; should it maybe be the other way around? Where have I made the mistake? You differentiated the potential after inserting the $y,z$ coordinates of the point where the field is measured. The correct way is the following (I've relabelled the x-coordinate of the position of the left end of the rod to $x_0$): $$V(\vec{r})=\frac{1}{4\pi\epsilon_0}\int\frac{\rho(\vec{r'})}{|\vec{r}-\vec{r'}|}d^3\vec{r'}$$
{ "domain": "physics.stackexchange", "id": 42065, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, electrostatics, potential", "url": null }
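The algebra in the question does check out numerically: a central-difference derivative of $V$ reproduces $\lambda l/(4\pi\epsilon_0\, x(x+l))$ — the sign puzzle is conceptual (where the derivative is taken), not arithmetic. Setting the prefactor $\lambda/(4\pi\epsilon_0)$ to 1 and picking illustrative values:

```python
import math

k, l = 1.0, 2.0                  # k stands in for lambda / (4*pi*epsilon_0)

def V(x):
    return k * math.log((x + l) / x)

def E_from_potential(x, h=1e-6):
    # E_x = -dV/dx via a central difference
    return -(V(x + h) - V(x - h)) / (2 * h)

x = 3.0
E_formula = k * l / (x * (x + l))    # the magnitude derived in the question
```

At x = 3 both routes give 2/15, confirming that the differentiation itself was done correctly.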
# Curve where torsion and curvature equal arc length I study differential geometry independently in my free time as an undergraduate. I am using the book by Do Carmo. I recently read the section on the local theory of curves and learned about torsion and curvature. My question is, does there exist a curve that has both torsion and curvature equal to arc length? I have tried deriving such a curve, but I’ve failed. I speculate that it must be somewhat helical in nature. The standard unit-speed helix is given by $$\alpha(s)=(a\cos(s/c),a\sin(s/c),bs/c),\qquad c=\sqrt{a^2+b^2}.$$ The curvature of such a curve is $$\kappa(s)=\frac{a}{a^2+b^2}$$ and torsion is $$\tau(s)=\frac{b}{a^2+b^2}$$. Clearly $$a=\frac{1}{2}s^{-1}=b$$. I’m not sure if this is the right approach to take. I feel as though the curve’s normal ought to trace out a curve on a sphere, but it doesn’t. Any help is appreciated. • For any nondegenerate curve $|{\bf N}(s)| = 1$, so ${\bf N}(s)$ traces a curve on the unit sphere. It's true for curves satisfying $\kappa(s) = \tau(s) = s$, however, that ${\bf N}(s)$ has image contained in a great circle on that sphere. See my answer for an explicit solution $\gamma(s)$. – Travis Willse Mar 26 '19 at 4:58 • The planar curve that has curvature proportional to the arc length is known as the clothoid. It resurfaces in the solution by Travis. – Yves Daoust Mar 26 '19 at 9:39 • For future reference, any curve with $\tau/\kappa$ constant is a generalized helix. – Ted Shifrin Mar 27 '19 at 0:18
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9835969645988575, "lm_q1q2_score": 0.8455626573454416, "lm_q2_score": 0.8596637523076225, "openwebmath_perplexity": 250.24061568047625, "openwebmath_score": 0.9725632071495056, "tags": null, "url": "https://math.stackexchange.com/questions/3162558/curve-where-torsion-and-curvature-equal-arc-length" }
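For the unit-speed helix $\alpha(s)=(a\cos(s/c),a\sin(s/c),bs/c)$ with $c=\sqrt{a^2+b^2}$ (the third coordinate must grow linearly in $s$), the stated formulas $\kappa=a/(a^2+b^2)$ and $\tau=b/(a^2+b^2)$ can be checked numerically, using $\kappa=|\alpha''|$ for unit-speed curves and $\tau=(\alpha'\times\alpha'')\cdot\alpha'''/|\alpha'\times\alpha''|^2$:

```python
import math

a, b = 2.0, 1.0
c = math.hypot(a, b)                       # c = sqrt(a^2 + b^2)

def alpha(s):
    return (a * math.cos(s / c), a * math.sin(s / c), b * s / c)

def d(f, s, h=1e-3):                       # central-difference derivative of a curve
    return tuple((p - m) / (2 * h) for p, m in zip(f(s + h), f(s - h)))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

s = 0.7
a1 = d(alpha, s)
a2 = d(lambda t: d(alpha, t), s)
a3 = d(lambda t: d(lambda u: d(alpha, u), t), s)

kappa = math.sqrt(dot(a2, a2))                                    # ~ a / (a^2 + b^2)
tau = dot(cross(a1, a2), a3) / dot(cross(a1, a2), cross(a1, a2))  # ~ b / (a^2 + b^2)
```

Both come out independent of s, which is exactly why a fixed helix cannot satisfy $\kappa(s)=\tau(s)=s$ and a clothoid-like construction is needed instead.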
thermodynamics, radiation, thermal-radiation Here’s my calculation: I used the Stefan-Boltzmann law to calculate the $Q$ radiated by the object due to its temperature, which is $$(Q_{rad})_{obj}=\sigma eAT^4 $$ where $\sigma$ is the Stefan-Boltzmann constant. The surroundings also radiate energy, so $$(Q_{rad})_{surr}=\sigma AT_o^4$$ The object thus absorbs and also emits energy due to the radiation from the surroundings, given by $$Q_{abs}=\sigma aAT_o^4$$ $$Q_{em}=\sigma eAT_o^4$$ Therefore the net energy emitted is given by $$Q_{net}=(Q_{rad})_{obj}-Q_{abs}+Q_{em}$$ which equals $$Q_{net}=\sigma eAT^4-\sigma aAT_o^4+\sigma eAT_o^4$$ Now if this all is correct, how do I simplify it further? The answer is $Q_{net}=\sigma eA(T^4-T_o^4)$, which I have been unable to obtain, so please guide me. Assume first that the temperature of the blackbody environment and the object are the same. Hence the object will radiate the same spectrum as the bb env but reduced by emissivity. From the outside you see this emission plus the reflection. Since e+r=1, this will be the same intensity as the radiation falling on it, consistent with thermal equilibrium. If we raise the temperature of the object it will radiate more energy than it receives by an amount of $e\sigma (T^4 - T_0^4)$. It will still absorb and reflect the same amount of energy, as this is determined solely by the blackbody environment.
{ "domain": "physics.stackexchange", "id": 58035, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, radiation, thermal-radiation", "url": null }
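Following the answer's accounting, the net exchange is simply emission minus absorption; with Kirchhoff's law $a = e$ it collapses to $Q_{net}=\sigma eA(T^4-T_0^4)$ — the extra $Q_{em}$ term in the question double-counts the object's emission. A numeric sketch with illustrative values:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

e = a = 0.85             # Kirchhoff's law: absorptivity equals emissivity
A = 2.0                  # surface area in m^2, illustrative
T, T0 = 350.0, 300.0     # object and surroundings temperatures, K

emitted = SIGMA * e * A * T**4       # what the object radiates at temperature T
absorbed = SIGMA * a * A * T0**4     # what it absorbs from the blackbody surroundings
q_net = emitted - absorbed

closed_form = SIGMA * e * A * (T**4 - T0**4)
```

Because `a == e`, the two-term bookkeeping and the closed form agree to rounding, and `q_net > 0` whenever the object is hotter than its surroundings.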
ros, rviz, laser, 2dlaserscan, pointcloud pcl_assembler_client: #!/usr/bin/env python import rospy from laser_assembler.srv import AssembleScans2 from sensor_msgs.msg import PointCloud2 rospy.init_node("assemble_scans_to_cloud") rospy.wait_for_service("assemble_scans2") assemble_scans=rospy.ServiceProxy('assemble_scans2',AssembleScans2) pub=rospy.Publisher("/point_Cloud",PointCloud2,queue_size=2) r=rospy.Rate(10) while True: try: resp=assemble_scans(rospy.Time(0,0),rospy.get_rostime()) print "Got cloud with %u points" % len(resp.cloud.data) pub.publish(resp.cloud) except rospy.ServiceException, e: print "service call failed: %s" %e r.sleep() my urdf code: <?xml version="1.0"?> <robot name="model1"> <link name="base_link"> <visual> <geometry> <box size="0.6 0.2 0.2"/> </geometry> </visual> </link> <link name="laser"> <visual> <geometry> <box size="0.2 0.2 0.2"/> </geometry> <origin rpy="0 1.57 0" xyz="0.3 0 0.3"/> </visual> </link> <joint name="base_tilt_joint" type="revolute"> <parent link="base_link"/> <child link="laser"/> <axis xyz="0 -1 0"/> <limit effort="3.0" velocity="9.178465545" lower="0.02" upper="3.14" /> </joint> </robot>
{ "domain": "robotics.stackexchange", "id": 29021, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, rviz, laser, 2dlaserscan, pointcloud", "url": null }
vba, api, stack Title: Managing a programmatically accessible stack trace VBA has a call stack... but there's no programmatic way to tap into it, which means in order to get a stack trace for a runtime error, one has to manage it manually. Here's some example code that demonstrates a custom CallStack class in action: Option Explicit Private Const ModuleName As String = "Module1" Sub DoSomething(ByVal value1 As Integer, ByVal value2 As Integer, ByVal value3 As String) CallStack.Push ModuleName, "DoSomething", value1, value2, value3 TestSomethingElse value1 CallStack.Pop End Sub Private Sub TestSomethingElse(ByVal value1 As Integer) CallStack.Push ModuleName, "TestSomethingElse", value1 On Error GoTo CleanFail Debug.Print value1 / 0 CleanExit: CallStack.Pop Exit Sub CleanFail: PrintErrorInfo Resume CleanExit End Sub Public Sub PrintErrorInfo() Debug.Print "Runtime error " & Err.Number & ": " & Err.Description & vbNewLine & CallStack.ToString End Sub Running DoSomething 42, 12, "test" produces the following output: Runtime error 11: Division by zero at Module1.TestSomethingElse({Integer:42}) at Module1.DoSomething({Integer:42},{Integer:12},{String:"test"})
{ "domain": "codereview.stackexchange", "id": 26054, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, api, stack", "url": null }
\frac{\sin(t+2\pi n)}{(t+2\pi n+1)(t+2\pi n+1+\pi)}dt\\ &=\int_{0}^{\pi/2} \frac{\sin(t+2\pi n)}{(t+2\pi n+1)(t+2\pi n+1+\pi)}dt +\int_{0}^{\pi/2} \frac{\sin(t+\pi/2+2\pi n)}{(t+2\pi n+1+\pi/2)(t+2\pi n+1+3\pi/2)}dt\\ &=\int_{0}^{\pi/2} \frac{\sin(t+2\pi n)}{(t+2\pi n+1)(t+2\pi n+1+\pi)}dt +\int_{0}^{\pi/2} \frac{\sin(\pi/2-t+\pi/2+2\pi n)}{(\pi/2-t+2\pi n+1+\pi/2)(\pi/2-t+2\pi n+1+3\pi/2)}dt\\ &=\int_{0}^{\pi/2} \frac{\sin(t+2\pi n)}{(t+2\pi n+1)(t+2\pi n+1+\pi)}dt -\int_{0}^{\pi/2} \frac{\sin(t+\pi/2+2\pi n)}{(\pi/2-t+2\pi n+1+\pi/2)(\pi/2-t+2\pi n+1+3\pi/2)}dt\\ &=\int_{0}^{\pi/2} \sin(t+2\pi n)\left(\frac1{(t+2\pi n+1)(t+2\pi
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.972830769252026, "lm_q1q2_score": 0.8042506684987633, "lm_q2_score": 0.8267117919359419, "openwebmath_perplexity": 86.38482608665306, "openwebmath_score": 0.970745861530304, "tags": null, "url": "https://math.stackexchange.com/questions/3191597/prove-that-a-definite-integral-is-positive" }
probability, dirac-delta-distributions, randomness The first constraint ensures that we have a proper probability density, and the second and third constraints are your observations about the mean and the variance. We can solve this with Lagrange multipliers: $$ -\int dx \, p(x) \log p(x) + \lambda_0 \left[ \int dx\, p(x) - 1 \right] + \lambda_1 \left[ \int dx \, x \, p(x) - \mu \right] + \lambda_2 \left[ \int dx \, (x- \mu)^2 p(x) - \sigma^2 \right] $$ where, taking the variation with respect to $p(x)$, we obtain $$ \int dx\, \delta p(x) \left[ -\log p(x) - 1 + \lambda_0 + \lambda_1 x + \lambda_2 (x- \mu)^2 \right] = 0 $$ which is only satisfied if the bracketed term vanishes everywhere, suggesting: $$ p(x) = \exp \left[ \lambda_0 - 1 + \lambda_1 x + \lambda_2 (x - \mu)^2 \right] $$ The only remaining problem is to determine the $\lambda$s from the constraints, from which we find $$ p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 } } \exp \left( - \frac{(x-\mu)^2 }{ 2\sigma^2 } \right) $$ The standard gaussian. Generalization What is really neat is that this generalizes. If you happen to have a set of functions $f_i$ for which you know the expectations under your probability distribution: $$ \langle f_i (x) \rangle = \alpha_i $$ The probability distribution with the largest entropy consistent with those observations is: $$ p(x) \propto \exp \left[\sum_i \lambda_i f_i(x) \right] $$
{ "domain": "physics.stackexchange", "id": 20493, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "probability, dirac-delta-distributions, randomness", "url": null }
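A quick numeric check that the resulting Gaussian does satisfy the three constraints (normalization, mean $\mu$, variance $\sigma^2$), using a simple composite midpoint rule over $\mu \pm 12\sigma$:

```python
import math

mu, sigma = 1.5, 0.8

def p(x):
    # the maximum-entropy density derived above
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def integrate(f, lo, hi, n=100_000):
    # composite midpoint rule; adequate for a smooth, fast-decaying integrand
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

lo, hi = mu - 12 * sigma, mu + 12 * sigma
total    = integrate(p, lo, hi)                               # constraint 1: equals 1
mean     = integrate(lambda x: x * p(x), lo, hi)              # constraint 2: equals mu
variance = integrate(lambda x: (x - mu) ** 2 * p(x), lo, hi)  # constraint 3: equals sigma^2
```

Truncating at $12\sigma$ is harmless here since the tail mass beyond that point is of order $e^{-72}$.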
java, beginner, game-of-life if(boolBoard[i-1][j-lengthT+1]){ fullspace++; } if(fullspace == 3){ turnOnX.add(i); turnOnY.add(j); } } else if(boolBoard[i][j]){ fullspace = 0; if(boolBoard[i - 1][j-1]){ fullspace++; } if(boolBoard[i - 1][j]){ fullspace++; } if(boolBoard[i][j-1]){ fullspace++; } if(boolBoard[i+1][j-lengthT+1]){ fullspace++; } if(boolBoard[i+1][j]){ fullspace++; } if(boolBoard[i][j-lengthT+1]){ fullspace++; } if(boolBoard[i+1][j-1]){ fullspace++; } if(boolBoard[i-1][j-lengthT+1]){ fullspace++; } if(fullspace < 2){ turnOffX.add(i); turnOffY.add(j); } else if(fullspace > 3){ turnOffX.add(i); turnOffY.add(j); } } } for(int l = 1; l < lengthT - 1; l++){//left bar int i = 0; int j = l; if(!boolBoard[i][j]){//off fullspace = 0; if(boolBoard[i + widthT -1][j-1]){ fullspace++; } if(boolBoard[i + widthT -1][j]){ fullspace++; } if(boolBoard[i][j-1]){ fullspace++; } if(boolBoard[i+1][j+1]){ fullspace++; }
{ "domain": "codereview.stackexchange", "id": 5133, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner, game-of-life", "url": null }
As when proving the Mean Value Theorem, consider an auxiliary function $$F(x) = f(c_2)-f(x) + K(x-c_2),$$ with $$K = \frac{f(c_2)-f(c_1)}{c_2-c_1}< 0.$$ It is easy to verify that $$F(x)$$ satisfies the Lemma hypotheses in $$[c_1,c_2]$$, with $$F(c_1) = F(c_2) = 0$$, and $$$$F'_+(x) = K-f'_+(x).\tag{2}\label{eq:2}$$$$ By the Lemma there is a point $$c\in[c_1,c_2]$$ such that $$F'_+(c) \geq 0,$$ which, by \eqref{eq:2}, yields $$f'_+(c) \leq K < 0,$$ a contradiction. $$\blacksquare$$ • The proof of your lemma requires slight modification. Suppose the maximum is attained at $\beta$ then we have $f(x) \leq f(\beta)$ and there is no need of strict inequality. – Paramanand Singh Feb 4 at 2:25 • Nice proof altogether +1 – Paramanand Singh Feb 4 at 2:27 • @ParamanandSingh thanks! I'll make the correction! – dfnu Feb 4 at 7:52
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9711290938892911, "lm_q1q2_score": 0.8071472010517766, "lm_q2_score": 0.831143054132195, "openwebmath_perplexity": 144.18711519010785, "openwebmath_score": 0.9446426630020142, "tags": null, "url": "https://math.stackexchange.com/questions/2047701/right-derivative-of-continuous-function-nonnegative-implies-increasing-function" }
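The endpoint claims about the auxiliary function above can be checked directly: $F(c_2)=0$ by construction, and $F(c_1)=f(c_2)-f(c_1)+K(c_1-c_2)=0$ since $K=\frac{f(c_2)-f(c_1)}{c_2-c_1}$. A numeric sketch with $f=\cos$ on an interval where it is decreasing (so $K<0$, as the proof requires):

```python
import math

f = math.cos
c1, c2 = 1.0, 2.0                      # cos is decreasing on [1, 2], so K < 0

K = (f(c2) - f(c1)) / (c2 - c1)

def F(x):
    return f(c2) - f(x) + K * (x - c2)

endpoint_values = (F(c1), F(c2))       # both vanish, as used in the proof
```

Any continuous decreasing stretch of any f would serve equally well for this endpoint check.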
rust, circular-list pop_item_that_was_pushed_to_buffer popping_returns_first_pushed_first pop_beyond_write_index_continuing_on_works buffer_wraps_around For further exercises I would recommend: Implement Iterator/IntoIterator/FromIterator Next, implement Debug, which is fairly easy (Hint: Take a look at the implementation of Debug for slice) Make it accept a generic type
{ "domain": "codereview.stackexchange", "id": 33115, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rust, circular-list", "url": null }
optimization, scheduling, heuristics, genetic-algorithms N.B., I believe that I've used the terminology that is used in the books by Coffman and Denning, and by Pinedo. The wikipedia pages are an utter disaster. The wikipedia page on job shop scheduling gives a definition of the open-shop problem that contradicts the wikipedia page on open shop scheduling. The wikipedia page on the flow shop problem is incomprehensible. And wikipedia is confused about the difference between multiprocessor scheduling, which involves $m$ interchangeable resources, and job-shop scheduling. For example, the wikipedia page on the Coffman-Graham list scheduling algorithm calls it a solution to the job-shop scheduling problem, although Coffman and Graham use the term "multiprocessor", not job-shop in the titles of their papers. (And note that the author of the wikipedia page on multiprocessor scheduling seems unaware of the original variant with precedence constraints.)
{ "domain": "cs.stackexchange", "id": 1604, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optimization, scheduling, heuristics, genetic-algorithms", "url": null }
ros, turtlebot, pioneer-3dx Title: starter physical robot base recommendations? I am looking to graduate from a simulator to actual physical hardware with ROS support. Could anyone please recommend popular (and cheaper) options for a base? I want to get a base where I could implement and experiment with slam/navstack. Pioneer P3 DX (USD 4K) and Turtlebot (USD 2K) are on the expensive side for an individual starter. Originally posted by hmrobo on ROS Answers with karma: 1 on 2017-10-15 Post score: 0 Original comments Comment by gvdhoorn on 2017-10-16: I'm not sure, but I feel this is more suitable for some discussion (as there isn't any single answer that is the answer). Perhaps a post on discourse.ros.org would be better? How about the Turtlebot3 Burger? It is about $600 Some useful links: http://www.robotis.us/turtlebot-3/ http://turtlebot3.robotis.com/en/latest/ Originally posted by Martin Peris with karma: 5625 on 2017-10-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by hmrobo on 2017-10-15: Thanks, Martin. That is interesting but still on pre-order/Nov delivery currently. Wondering if that's the cheapest or are there any other options including DIY.
{ "domain": "robotics.stackexchange", "id": 29084, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, turtlebot, pioneer-3dx", "url": null }
# Runner's High (Speed) I find the following mind-boggling. Suppose that runner $$R_1$$ runs distance $$[0,d_1]$$ with average speed $$v_1$$. Runner $$R_2$$ runs $$[0,d_2]$$ with $$d_2>d_1$$ and with average speed $$v_2 > v_1$$. I would have thought that by some application of the intermediate value theorem we can find a subinterval $$I\subseteq [0,d_2]$$ having length $$d_1$$ such that $$R_2$$ had average speed at least $$v_1$$ on $$I$$. This is not necessarily so! Question. What is the smallest value of $$C\in\mathbb{R}$$ with $$C>1$$ and the following property? Whenever $$d_2>d_1$$, and $$R_2$$ runs $$[0,d_2]$$ with average speed $$Cv_1$$, then there is a subinterval $$I\subseteq [0,d_2]$$ having length $$d_1$$ such that $$R_2$$ had average speed at least $$v_1$$ on $$I$$.
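When $d_2$ is an integer multiple $n\,d_1$, the intermediate-value intuition does work, by pigeonhole: of the $n$ consecutive blocks of length $d_1$, at least one is covered in at most the mean block time, hence at average speed at least the overall average. A quick numerical check of that special case (helper names are mine):

```python
import random

def block_speeds(times, d):
    """times[k] is the moment the runner passes position k*d; return the
    average speed on each consecutive block of length d."""
    return [d / (times[k + 1] - times[k]) for k in range(len(times) - 1)]

def demo(n=5, d=1.0, seed=0):
    """Random passage times at positions 0, d, ..., n*d; return the
    (overall average speed, best block average speed) pair."""
    rng = random.Random(seed)
    times = [0.0]
    for _ in range(n):
        times.append(times[-1] + rng.uniform(0.5, 2.0))
    overall = n * d / times[-1]
    return overall, max(block_speeds(times, d))
```

For every random profile, the best block is at least as fast as the overall average — which is exactly what fails to generalize once $d_2/d_1$ is not an integer, hence the question about the constant $C$.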
{ "domain": "mathoverflow.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787879966233, "lm_q1q2_score": 0.8138913209719587, "lm_q2_score": 0.8244619220634456, "openwebmath_perplexity": 166.82903944181908, "openwebmath_score": 0.912818431854248, "tags": null, "url": "https://mathoverflow.net/questions/316954/runners-high-speed/316976" }
condensed-matter, resource-recommendations, superconductivity, many-body, greens-functions Title: Microscopic theory of superconductivity in the language of the vertex function In Chapter 7 of Abrikosov, Gorkov, and Dzyaloshinski (AGD), the authors cover a microscopic overview of superconductivity, with an emphasis on the poles of the vertex function $\Gamma$. Despite the thoroughness of their derivation, I would prefer to look at other such references of similar quality, preferably in a more modern context. Therefore, I am specifically looking for additional sources (original papers, review articles, books, etc.) that cover a microscopic approach to the Cooper instability, with an emphasis on its connection to the vertex function, the phonon propagator, and the corresponding diagrammatics. The more advanced the better, but I would prefer references with excessive detail if possible. Basically, a modern supplement to AGD subsections 7.33, 7.34, and 7.35 at the advanced graduate/postdoc level. EDIT: Clarified question. To see the Cooper instability from the vertex function, you can check the Altland & Simons book. Maybe it is not at postdoc level, but it seems the topic is not so hard. Here I sketch the derivation. Consider a 4-fermion theory with contact attractive interaction $g$. Then consider temperature Green functions and the non-crossing approximation. This means that the vertex function is given by an infinite ladder,
{ "domain": "physics.stackexchange", "id": 64905, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "condensed-matter, resource-recommendations, superconductivity, many-body, greens-functions", "url": null }
python, sqlite print(time.time() - start) Do you have any tricks or ideas I could use to speed up the insert? (I will do very basic SELECTs on the data using the accession primary key) First, some comments on your code: An sqlite3 connection can be used as a context manager. This ensures that the statement is committed if it succeeds and rolled back in case of an exception. Unfortunately, it does not also close the connection afterwards:
with sqlite3.connect("test.sqlite") as connection, open("test_file.map") as f:
    connection.execute("""
        CREATE TABLE IF NOT EXISTS map (
            accession TEXT PRIMARY KEY,
            accession_version TEXT,
            taxid TEXT,
            gi TEXT
        )""")
    next(f)  # ignore header
    connection.executemany("INSERT INTO map VALUES (?, ?, ?, ?)", read_large_file(f))
connection.close()
You should separate your functions from the code calling them. The general layout for Python code is to first define your classes, then your functions, and finally have a main block protected by an if __name__ == "__main__": guard to allow importing from the script without executing all the code. open automatically opens a file in read mode if not specified otherwise. That being said, if you have a billion lines, basically any approach is probably going to be slow. Here is an alternate approach using dask. It may or may not be faster; you will have to test it. The usage is very similar to pandas, except that the computations are only performed on the call to compute(). First, to install dask: pip install dask[dataframe] --upgrade Then for the actual use case you mention, finding a specific gi in the table:
from dask import dataframe
df = dataframe.read_csv("test_file.map", sep="\t")
df[df.gi == 7184].compute()
# accession accession.version taxid gi
# 0 V00184 V00184.1 44689 7184
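The snippet above passes `read_large_file(f)` to `executemany` without showing its definition. A plausible minimal version — an assumption on my part, the original generator may differ — streams one tab-separated record at a time:

```python
def read_large_file(file_handle):
    """Lazily yield one tab-separated record per input line, so that
    executemany never has to materialise the whole file in memory.
    (Hypothetical reconstruction of the helper used above.)"""
    for line in file_handle:
        yield tuple(line.rstrip("\n").split("\t"))
```

Because it only iterates its argument, it works with the open file handle from the `with` block, or with any other iterable of lines.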
{ "domain": "codereview.stackexchange", "id": 34261, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, sqlite", "url": null }
object-oriented, game, objective-c Title: Game Balance and where to store Variables I have been thinking about this aspect of my game for a little while, and I have come up with the following solution. I am sure that there are problems with the way I have implemented it so any kind of feedback would be very helpful. The idea is that there are a number of different variables in the game that together have an effect on the balance of the difficulty as well as the potential "fun" of playing the game. These variables will need to be tweaked repeatedly on the way to finalizing the game balance. Unfortunately, they are scattered around in different classes based on the scope of their effect, so some of them have to be in the Game class, some of them have to be in the Tower class, and some might even be in the Dwarf class or any other class that might be created. I don't see a way around this in an OOP context. At first I was simply declaring constants at the top of the class, so at least they were all in one place inside the file. But trying to remember where I have the value of food and where I have the value of the cost of building a farm can be troubling. I would rather look at a list of all the variables that will potentially change for balance reasons and see/change their values all together in one place. To achieve that I have built a GameBalance class. It provides access to the variables stored in a Settings dictionary that contains other dictionaries for the GameSettings, TowerSettings, RoomCostSettings, and any other settings that might be needed. It loads all of those values from a property list (very much like XML if you don't know Objective-C) and then the objects that need various settings make calls to the GameBalance object asking for whatever setting value they may need. That way, I can make any variable changes directly inside the property list and they will be automatically loaded into the game. 
Because of the number of properties total, I have omitted some of them for this post. DTGameBalance.h #import <Foundation/Foundation.h> #import "DTJobType.h" @interface DTGameBalance : NSObject <NSCoding>
{ "domain": "codereview.stackexchange", "id": 9063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "object-oriented, game, objective-c", "url": null }
quantum-gate, quantum-algorithms, grovers-algorithm If $M$ is known in advance: What happens when more than half the items are solutions to the search problem, that is, $M \geq N/2$? [...] the number of iterations needed by the search algorithm increases with $M$, for $M \geq N/2$. Intuitively, this is a silly property for a search algorithm to have: we expect that it should become easier to find a solution to the problem as the number of solutions increases. There are at least two ways around this problem. If $M$ is known in advance to be larger than $N/2$ then we can just randomly pick an item from the search space, and then check that it is a solution using the oracle. This approach has a success probability at least one-half, and only requires one consultation with the oracle. It has the disadvantage that we may not know the number of solutions $M$ in advance. In the case where it isn’t known whether $M \geq N/2$, another approach can be used. [...] The idea is to double the number of elements in the search space by adding $N$ extra items to the search space, none of which are solutions. As a consequence, less than half the items in the new search space are solutions. This is effected by adding a single qubit $|q \rangle$ to the search index, doubling the number of items to be searched to $2N$.
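The growth of the iteration count with $M$ can be seen from the standard amplitude analysis: with $\sin\theta = \sqrt{M/N}$, the success probability after $k$ Grover iterations is $\sin^2((2k+1)\theta)$. A small numeric sketch of that formula (not a simulation of the full circuit):

```python
import math

def grover_success(N, M, k):
    """Probability of measuring a solution after k Grover iterations on
    N items, M of which are solutions, from the uniform superposition."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * k + 1) * theta) ** 2

def optimal_iterations(N, M):
    """One common choice of iteration count, floor(pi / (4 * theta))."""
    theta = math.asin(math.sqrt(M / N))
    return math.floor(math.pi / (4 * theta))
```

The pathology the quoted passage describes shows up directly: with $M = 3$ of $N = 4$ solutions ($M \ge N/2$), a single iteration drives the success probability to zero, which is exactly what the doubled $2N$-item search space is designed to avoid.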
{ "domain": "quantumcomputing.stackexchange", "id": 1725, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-gate, quantum-algorithms, grovers-algorithm", "url": null }
c++, c++14 /// \brief Returns the binary negated underlying value. /// \return Binary negated underlying value. constexpr auto operator~() const; /// \brief Preincrements the underlying value. /// \return Reference to this number. /// \throw std::domain_error if result is illegal. CheckedNumber& operator++(); /// \brief Postincrements the underlying value. /// \return Copy of this number before increment. /// \throw std::domain_error if result is illegal. CheckedNumber operator++(int); /// \brief Predecrements the underlying value. /// \return Reference to this number. /// \throw std::domain_error if result is illegal. CheckedNumber& operator--(); /// \brief Postdecrements the underlying value. /// \return Copy of this number before decrement. /// \throw std::domain_error if result is illegal. CheckedNumber operator--(int); /// \brief Adds a value to the underlying value. /// \tparam U Type of the value. /// \param[in] val Value to add. /// \return Reference to this number. /// \throw std::domain_error If the sum is illegal. template <typename U> CheckedNumber& operator+=(const U& val); /// \brief Subtracts a value from the underlying value. /// \tparam U Type of the value. /// \param[in] val Value to subtract. /// \return Reference to this number. /// \throw std::domain_error If the difference is illegal. template <typename U> CheckedNumber& operator-=(const U& val);
{ "domain": "codereview.stackexchange", "id": 30098, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++14", "url": null }
python Title: Gigantic class to model a baseball game I'm trying to think of ways to separate things out. I'm open to ideas, or if you see anything blatantly wrong, I'd like to know that too. Generally, I'm happy with this, but the sheer size of the class seems like a code smell. Original Source '''The Game class.''' import game_exceptions from bases import Bases from count import Count from inning import Inning from team import Team # pylint: disable=R0902 # pylint complains about the number of instance attributes. class Game(object): '''Maintain the state of the game. Includes methods to carry out various game related events. ''' def __init__(self, home, away, league="league.db", innings_per_game=9): '''Set up a new game. home and away are the team_id pointing to a team in the league. The league variable is used to play games in different leagues. innings_per_game is the number of innings in a standard game, it can be changed if you want to represent a little league game, or some type of exhibition game. ''' self.count = Count() self.home = Team(league=league, team_id=home) self.away = Team(league=league, team_id=away) self.inning = Inning(innings_per_game) self.bases = Bases() self.winner = None self.game_over = False self.bat_team = self.away self.pitch_team = self.home self.pitch_team_stats = self.pitch_team.stats.pitching self.current_pitcher = self.pitch_team.pitcher().stats.pitching self.bat_team_stats = self.bat_team.stats.batting self.current_batter = self.bat_team.current_batter().stats.batting def __str__(self): '''Output the important game information as a string.''' if self.game_over: return self.__str_finished_game() else: return self.__str_game_in_progress()
{ "domain": "codereview.stackexchange", "id": 29911, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python", "url": null }
lo.logic, pl.programming-languages, computability, type-theory, lambda-calculus Edit: Elaboration on the 4th point: Wadler defines, among other things, the Reynolds embedding from $\lambda_2$ to $\lambda\mathrm{PRED}_2$ with higher-order functions (Figure 4). This embedding sends a type to a proposition (the parametricity theorem) and sends a well-typed term to a proof of parametricity of that same term. The theorem you get implies "type correctness" of the defined function rather trivially. In fact I think Wadler treats this in detail in Section 5, and treats the $\mathrm{add}$ example in the appendix. The only subtle point is that the function is not directly defined over the inductive type; rather, it takes the type and constructors as arguments. At this point I really have to suggest you work through some examples.
{ "domain": "cstheory.stackexchange", "id": 2752, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "lo.logic, pl.programming-languages, computability, type-theory, lambda-calculus", "url": null }
rust, game-of-life -------------------------------------------------------------------- game-of-life new impl, (10x10), random state, 10000 steps time: [13.847 ms 14.175 ms 14.580 ms] Found 10 outliers among 100 measurements (10.00%) 2 (2.00%) high mild 8 (8.00%) high severe --------------------------------------------------------------------- game-of-life original impl, (10x10), random state, 10000 steps time: [33.176 ms 33.209 ms 33.247 ms] Found 9 outliers among 100 measurements (9.00%) 2 (2.00%) low mild 4 (4.00%) high mild 3 (3.00%) high severe
{ "domain": "codereview.stackexchange", "id": 43664, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rust, game-of-life", "url": null }
momentum, structural-beam Title: Beam moment positive/negative sign Could anyone explain to me why the moment of this isostatic beam in $A$ is negative instead of positive? Why isn't it as simple as $+FL-FL/2 = +FL/2$? Because that is what is required for the beam to be static. For the beam to be static, at any point you need $\sum M = 0$; which your setup does not give. If you start by assuming that $\sum M = 0$ at any point on the beam, you should be able to solve for the moment at A and see why it is $\frac {FL}2$. If the sum of moments is not zero, that would mean the beam is rotating.
{ "domain": "physics.stackexchange", "id": 60495, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "momentum, structural-beam", "url": null }
fl.formal-languages, automata-theory, regular-language, nfa Title: What class of languages is recognized by finite-state automata with $k$ heads? A DFA or NFA reads through an input string with a single head, moving left-to-right. It seems natural to wonder about finite-state machines that have multiple heads, each of which moves through the input from left-to-right, but not necessarily at the same place in the input as the others. Let us define a finite state machine with $k$ heads as follows: A k-head NFA is a tuple $(Q, \Sigma, \Delta, q_0, F)$, where: As usual, $Q$ is a finite set of states, $\Sigma$ is a finite alphabet, $q_0$ is an initial state, and $F$ is a set of accepting states. Let $\Sigma_\varepsilon := \Sigma \cup \{\varepsilon\}$ denote the set of characters including the empty string. $\Delta \subseteq Q \times (\Sigma_\varepsilon)^k \times Q$ is the transition relation: a transition $(p, (\sigma_1, \sigma_2, \ldots, \sigma_k), q)$ means that, if the machine is in state $p$, it may read in $(\sigma_1, \sigma_2, \ldots, \sigma_k)$ such that $\sigma_i$ is the next character for head $i$ (or $\varepsilon$ if that head does not move), and then move to state $q$. A run of this kind of machine (any path starting from the start state and ending in an accepting state) results in not one string, but $k$ different strings (formed by concatenating the characters along the run). Then we say that the run is valid if the $k$ strings are identical. The language of the machine is the set of strings $w$ such that there exists a valid run of the machine where the $k$ strings produced along that run are all equal to $w$. What is the class of languages recognized by such machines? Has it been studied?
{ "domain": "cstheory.stackexchange", "id": 4764, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fl.formal-languages, automata-theory, regular-language, nfa", "url": null }
organic-chemistry, reaction-mechanism, regioselectivity Steric effects Electronic effects Stereoelectronic effects Two of these are related to lowering the overall energy of the transition state. The third relates to how productive a collision can be. The one that matters less here (or is already accounted for) is stereoelectronic effects. Notice that you proposed a backside attack for an $\mathrm{S_N2}$ reaction. Therefore, you have considered the proper alignment of orbitals, which is what is meant by stereoelectronic effects. Steric effects govern the repulsion of large groups. In your example, attacking the more substituted side would result in greater steric repulsion since the $\ce{R}$ group is larger than $\ce{H}$. Steric arguments therefore argue for attack at the other carbon. Frequently, the three factors do not say the same thing, and you need to use some intuition and/or experience to figure out which one dominates (in reality, the relative energies will determine the "winner"). In this case, it turns out with a carbocation and a negative nucleophile, perhaps not surprisingly, electronic effects dominate. There are actually two related electronic effects. One is that the partial positive is greater on the more substituted carbon (cf. Markovnikov rule). This carbon is hyperconjugatively stabilized (relative to the other carbon) by greater substitution. In addition, the greater positive charge also implies a weaker $\ce{C-E}$ bond, which further implies a lower energy $\ce{C-E}$ $\sigma^{*}$ orbital. We can then argue that the transition state energy will be lower given that the two interacting orbitals are closer in energy. And indeed, the electronic effects dominate this reaction giving the reactivity shown in your diagram.
{ "domain": "chemistry.stackexchange", "id": 7807, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, reaction-mechanism, regioselectivity", "url": null }
meteorology, snow, radar Also note that winter precipitation adds an extra complication because the particles are lighter in weight and can thus be blown about more by vertical and horizontal winds. Raindrops (and hail) are quite likely to fall unless extreme updrafts exist because they are heavy. But drizzle, snow, and sleet may be blown around quite a bit. Without a time-intensive dual-Doppler analysis, you cannot know the wind motion in the storm thoroughly, and therefore will have varying results at times. And finally, the big wrench is unfortunately inherent to how radars work. They measure the percentage of their sent energy that is reflected back to them. That's great because that's directly connected to the diameter of the item falling (to the 6th power). But unfortunately the grand problem is that in a storm, there is a huge variety of drop/flake sizes mixed together at once... such that we can't extract which combination of particle sizes created it (and thus can't calculate volume to actually know the rain/snow amount that falls). It could be like 6 medium-size flakes causing the 10 dBZ echo... or 2 large flakes and 10 small flakes... and each combination is a different volume/snow total. (to see the nitty-gritty math details on this, read more here.) So we can never know for sure the exact rain/snow falling using just radar. The good news is we've at least done lots of experiments and come up with some fairly useful best-practice formulas for using the Z-R ratio in different scenarios. Good, but not perfect.
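The "best-practice formulas" mentioned at the end are Z-R relations of the form $Z = aR^b$. A sketch that inverts one such relation, using the classic Marshall-Palmer coefficients — one convention among many; operational radars switch the $(a, b)$ pair by precipitation regime:

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert a Marshall-Palmer-style Z = a * R**b relation:
    reflectivity in dBZ -> estimated rain rate R in mm/h.
    The default (a, b) is one common convention, not a universal one."""
    z = 10 ** (dbz / 10.0)       # dBZ -> linear reflectivity factor (mm^6/m^3)
    return (z / a) ** (1.0 / b)
```

For example, 40 dBZ maps to roughly 11-12 mm/h under these coefficients — a single number standing in for the many drop-size distributions that could have produced the same echo, which is exactly the ambiguity described above.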
{ "domain": "earthscience.stackexchange", "id": 1293, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "meteorology, snow, radar", "url": null }
beginner, console, sieve-of-eratosthenes, rust You can use Unified Function Call Syntax (UFCS) to call usize::pow as a function, not just as a method. This is also a built-in method, so you don't need to use an external crate for this: let limit = usize::pow(2, input_num); We can make a similar change for sqrt, and use range syntax (start..end) for easy ranges: let slimit = f64::sqrt(limit as f64) as usize; for i in 2..slimit { if primes[i] { for j in num::iter::range_step(i*i, limit, i) { primes[j] = false; } } } Sadly, range notation with a step size bigger than 1 is still unstable, so we continue using the external crate for this. I would encourage you to move your variable declarations (pc, maxprime) closer to where they are used. This isn't C89, where it's required to put the variables at the top! ^_^ You can also make that code a bit more functional: (0..limit).fold((0, 7), |(count, max), prime| { if primes[prime] { (count + 1, prime as u64) } else { (count, max) } }) Bigger picture the different ways of using pow vs sqrt In bog-standard Rust, they are the same. sqrt and pow are methods on their respective types, and there are versions for each type. Using UFCS, you can call any method as a function — value.pow(5) or u32::pow(value, 5) for example. However, you are using the num crate, which tries to provide an abstraction on top of all the concrete numeric types. the documentation that says that both arrays and vectors are created and filled I'm not sure what could be improved here. The docs for vec! indicate that it can be used this way, and the docs for Vec point out vec!. I don't know that it makes sense to copy-and-paste the docs from every method into every other method...
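For contrast with the unstable Rust step-ranges discussed above, the same sieve structure in Python, where `range` takes a step directly (note the `+ 1` so the square-root bound itself is tested, which the half-open `2..slimit` range in the Rust snippet arguably misses):

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes strictly below `limit`."""
    primes = [True] * limit
    primes[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if primes[i]:
            # range() takes a step directly -- the counterpart of
            # num::iter::range_step(i*i, limit, i) in the Rust code
            for j in range(i * i, limit, i):
                primes[j] = False
    return [n for n, is_p in enumerate(primes) if is_p]
```

This is only an illustration of the loop shape, not a drop-in replacement for the benchmarked Rust version.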
{ "domain": "codereview.stackexchange", "id": 13896, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, console, sieve-of-eratosthenes, rust", "url": null }
1. You are correct to assume $f$ is injective. A function $f(x)$ is injective iff $f(x) = f(y) \implies x = y$. It is true that $f$ returns two values, but the criterion is the same. If $f(x) = f(y)$, in particular the two second coordinates are the same, but like you said, $x + 1 = y + 1 \iff x = y$; 2. You are correct to assert $f$ is not surjective, as there is no $x$ such that $f(x) = (-1, 1)$, for example. An example of a surjective function could be a function that goes around in a spiral (for non-negative $x$) covering all points in $\mathbb{Z}^2$. That would be $f(0) = (0, 0), f(1) = (1,0), f(2) = (1,1), f(3) = (0,1), f(4) = (-1, 1), f(5) = (-1, 0), \cdots$, but I don't know how to write that in a clean way. For $x < 0$ we could just take anything you like. 3. Also right! Defining $f(x) = z+5$ is nonsensical, unless you previously stated that $z$ is some constant. In that case $f$ would be a constant function. • How is your $g$ surjective? For what value of $x$ does $g(x)=(0,0)$? – G Tony Jacobs Oct 17 '17 at 22:14 • @GTonyJacobs For none, I don't know what I was thinking D: Please check my new example. – RGS Oct 17 '17 at 22:15 $$f:\mathbb{Z} \to \mathbb{Z}^2$$
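The spiral enumeration the answer lists value-by-value can be written out programmatically; a brute-force Python version (it walks from the origin on every call, purely for illustration):

```python
def spiral(n):
    """n-th point of a counterclockwise square spiral over Z^2:
    (0,0), (1,0), (1,1), (0,1), (-1,1), (-1,0), ...
    O(n) per call -- fine for demonstrating surjectivity."""
    x = y = 0
    dx, dy = 1, 0              # start heading right
    steps, count, leg = 1, 0, 0
    for _ in range(n):
        x, y = x + dx, y + dy
        count += 1
        if count == steps:     # finished a leg: turn left
            count = 0
            dx, dy = -dy, dx
            leg += 1
            if leg % 2 == 0:   # every second turn, legs get one step longer
                steps += 1
    return (x, y)
```

Restricted to $n \ge 0$ this hits every point of $\mathbb{Z}^2$ exactly once (the first $(2k+1)^2$ values cover the $(2k+1)\times(2k+1)$ square around the origin), which is the surjection the answer describes.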
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9390248191350351, "lm_q1q2_score": 0.8000048310744914, "lm_q2_score": 0.8519528076067262, "openwebmath_perplexity": 121.21035632964012, "openwebmath_score": 0.9118388891220093, "tags": null, "url": "https://math.stackexchange.com/questions/2477453/is-a-function-in-the-form-of-fx-x2-x-1-injective-or-surjective" }
homework-and-exercises, classical-mechanics, torque, moment-of-inertia Now to balance the forces/moments: $$ \begin{align} N_1 - F_2 & = 0 & \mbox{sum of forces in the x-direction}\\ N_2 - m g & = 0 & \mbox{sum of forces in the y-direction}\\ N_2 (a \cos \theta) - N_1 (a \sin \theta)-m g ( \frac{a}{2} ( \cos\theta + \sin \theta)) & =0 &\mbox{sum of torques about the origin} \end{align} $$ These three equations have to be solved for the three unknown forces, $N_1$, $N_2$ and $F_2$. The minimum coefficient of friction has to be at least equal to $\mu = \frac{F_2}{N_2}$ for each angle. I will let you work out the details, but the larger the angle $\theta$ the less the friction needs to be. At $\theta=\frac{\pi}{4}$ friction can be 0 as the contact is directly under the center of mass.
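Working out the details: with $N_2 = mg$, the torque balance gives $N_1 = mg(\cos\theta - \sin\theta)/(2\sin\theta)$ and $F_2 = N_1$, so $\mu = F_2/N_2 = (\cos\theta - \sin\theta)/(2\sin\theta)$. A quick check of the two claims above (mass, gravity, and length cancel in the ratio):

```python
import math

def min_friction_coefficient(theta):
    """mu = F2 / N2 from the three equilibrium equations:
    N2 = m*g, F2 = N1, and the torque balance yields
    N1 = m*g*(cos(theta) - sin(theta)) / (2*sin(theta))."""
    return (math.cos(theta) - math.sin(theta)) / (2 * math.sin(theta))
```

As stated, the required friction decreases with $\theta$ and vanishes at $\theta = \pi/4$, where the contact point sits directly under the center of mass.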
{ "domain": "physics.stackexchange", "id": 39955, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, classical-mechanics, torque, moment-of-inertia", "url": null }
c#, performance, winforms if (words.Count == 0) { resultText.Append(currentLine.TrimEnd() + "\n"); break; } } } return resultText.ToString().TrimEnd().Split('\n'); } Performance measurement shows that MeasureString is the main time consumer here (~90%). Is there any clever way to reduce the number of MeasureString calls? Other than that, I don't see opportunities to make word wrapping significantly faster. First, I'd like to say that your code is written very well, and is quite readable. As for your question - your code runs MeasureString at least once per word in your text. Since a line should be (quite) longer than a single word, I think you can reduce the number of calls substantially. I can think of two strategies: 1. Binary search Check if the whole text can fit in one line - if yes - you are done! If not - take (about) half of the text, and wrap it. Append the last line of the first half to the remaining text (unless it all fit in a single line), and wrap that text This strategy might need a little refining, but it should reduce the number of calls to MeasureString considerably. 2. Approximate line length Use your current solution to find the first line. Note the line length (in characters). Take next X words in the remaining text whose length is at most the previous line's length, and check if it fits in a single line. If it does - continue as before (add a word until it doesn't fit) If it does not repeat - take one word out, and try again. Wash, rinse, repeat In this solution all MeasureString calls should be relevant, as they should be around the actual length of each line.
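Strategy 2 can be sketched language-agnostically. Below is a Python illustration where `fits()` stands in for the expensive `MeasureString` call (the real C# code would compare `MeasureString`'s returned width against the layout width); the function names and the character-count measurer are mine:

```python
def wrap(text, fits):
    """Greedy word wrap seeded by the previous line's word count.
    `fits(s)` returns True if string s fits on one line -- a stand-in
    for a MeasureString-based check."""
    words = text.split()
    lines = []
    guess = 1
    while words:
        n = min(guess, len(words))
        # shrink the seeded guess until it fits (a lone word is always emitted)
        while n > 1 and not fits(" ".join(words[:n])):
            n -= 1
        # then grow word by word while the line still fits
        while n < len(words) and fits(" ".join(words[:n + 1])):
            n += 1
        lines.append(" ".join(words[:n]))
        words = words[n:]
        guess = n   # seed the next line with this line's word count
    return lines
```

Seeding each line with the previous line's word count means most measurements happen near the final break point, which is the intent of the approximate-line-length strategy.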
{ "domain": "codereview.stackexchange", "id": 6970, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, performance, winforms", "url": null }
slam, navigation, turtlebot, slam-gmapping, gmapping [ INFO] [1316405019.341845241]: Still waiting on map... -maxUrange 16 -maxUrange 9.99 -sigma 0.05 -kernelSize 1 -lstep 0.05 -lobsGain 3 -astep 0.05 -srr 0.01 -srt 0.02 -str 0.01 -stt 0.02 -linearUpdate 0.5 -angularUpdate 0.436 -resampleThreshold 0.5 -xmin -1 -xmax 1 -ymin -1 -ymax 1 -delta 0.05 -particles 80 [ INFO] [1316405019.677929669]: Initialization complete update frame 0 update ld=0 ad=0 Laser Pose= 0 0 0 m_count 0 Registering First Scan [ INFO] [1316405020.342255791]: Still waiting on map... [ INFO] [1316405021.376333727]: Received a 256 X 32 map at 0.050000 m/pix [ INFO] [1316405021.710116383]: MAP SIZE: 256, 32 [ INFO] [1316405021.734994211]: Subscribed to Topics: scan [ INFO] [1316405022.221708936]: Sim period is set to 0.20 Originally posted by edgan on ROS Answers with karma: 66 on 2011-09-18 Post score: 0 I wasn't setting ROS_HOSTNAME on the turtlebot. So it was defaulting to the hostname, like ROS_HOSTNAME=turtlebot-0169. My laptop couldn't resolve turtlebot-0169, and so it couldn't pull the data for rviz. Originally posted by edgan with karma: 66 on 2011-09-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6717, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, turtlebot, slam-gmapping, gmapping", "url": null }
ros, node from /home/ubuntu/catkin_ws/src/moon/src/SpanningTree.cpp:4: /usr/include/c++/4.8/bits/stl_tree.h:316:5: note: template<class _Val> bool std::operator!=(const std::_Rb_tree_iterator<_Tp>&, const std::_Rb_tree_const_iterator<_Val>&) operator!=(const _Rb_tree_iterator<_Val>& __x, ^ /usr/include/c++/4.8/bits/stl_tree.h:316:5: note: template argument deduction/substitution failed: /home/ubuntu/catkin_ws/src/moon/src/SpanningTree.cpp:96:46: note: mismatched types ‘const std::_Rb_tree_iterator<_Tp>’ and ‘int’ for (i = adj[j].begin(); i != adj[j].end(); ++i) ^ In file included from /usr/include/c++/4.8/vector:64:0, from /usr/include/boost/format.hpp:17, from /usr/include/boost/math/policies/error_handling.hpp:31, from /usr/include/boost/math/special_functions/round.hpp:14, from /opt/ros/indigo/include/ros/time.h:58, from /opt/ros/indigo/include/ros/ros.h:38, from /home/ubuntu/catkin_ws/src/moon/src/SpanningTree.cpp:4:
{ "domain": "robotics.stackexchange", "id": 24388, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, node", "url": null }
black-hole $$ \frac{c^6}{G^2M^2}\Delta r > T (\Delta r)^2 $$ $$ M < 2\times10^5 \left({T \Delta r}\right)^{-1/2}\ M_{\odot}$$ Diamond has $T\sim 10^{11}$ N/m$^2$. If it were 1 cm in size, then a black hole would need to be more massive than about 6 solar masses for this chunk of diamond to avoid being ripped apart outside the event horizon. Other materials, or larger pieces of material, are easier to break and would not survive outside the horizon of even more massive black holes. E.g. the tensile strength of bone is about $10^{8}$ N/m$^2$, so a 0.1 m piece of bone would be pulled apart before falling through the event horizon if $M< 60M_{\odot}$.
{ "domain": "astronomy.stackexchange", "id": 5630, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "black-hole", "url": null }
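The bound quoted above is easy to evaluate numerically. A quick sketch (the function and variable names here are mine, not from the answer):

```python
def tidal_disruption_mass_limit(tensile_strength, size):
    """Black holes less massive than this limit (in solar masses) tear an
    object of the given tensile strength (N/m^2) and size (m) apart outside
    the horizon, per the bound M < 2e5 * (T * dr)**-0.5 M_sun."""
    return 2e5 * (tensile_strength * size) ** -0.5

diamond_limit = tidal_disruption_mass_limit(1e11, 0.01)  # ~6.3 solar masses
bone_limit = tidal_disruption_mass_limit(1e8, 0.1)       # ~63 solar masses
```

These reproduce the ~6 solar mass figure for a 1 cm diamond and ~60 solar masses for a 0.1 m piece of bone.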
v) = −1/2, so the Jacobian factor in the area element will be 1/2. The goal of this guide is to demonstrate multiple integration. We could have also projected this region onto the xz- or yz-planes. We first recall some even, differentiable functions: $x^2$, $x^4$, $x^{2n}$, $\cos x$. A triple integral is a three-fold multiple integral of the form $\iiint f(x,y,z)\,dx\,dy\,dz$. To integrate discrete data we can approximate the integral $\int_a^b f(x)\,dx$; MATLAB has the commands dblquad and triplequad to compute double and triple integrals, and integral3 calls integral to integrate over $x_{\min} \le x \le x_{\max}$. In polar coordinates the region is $D = \{(r,\theta) \mid \alpha \le \theta \le \beta,\ h_1(\theta) \le r \le h_2(\theta)\}$. Note the ordering of integration: z first, then y, then x. For each of the following solids give a description in rectangular coordinates. Exercises: Double and Triple Integrals Solutions, Math 13, Spring 2010. Chapter: Double and Triple Integrals; cylindrical coordinates. In this chapter, we define the triple integral of a function $f(x,y,z)$ over a bounded region of $\mathbb{R}^3$. At the point $x = 4$ the function becomes $\frac{1}{0}$, which is undefined. Find the volume of the solid bounded above by the plane $z = 4 - x - y$ and below by the rectangle $R = \{(x,y) : 0 \le x \le 1,\ 0 \le y \le 2\}$. Let R be the region bounded by the paraboloid $z = x^2 + y^2$ and the plane $z = 4$. Double integrals in polar coordinates. Series with positive terms.
{ "domain": "e3p-ltd.org", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9852713878802044, "lm_q1q2_score": 0.8294267257756476, "lm_q2_score": 0.8418256492357359, "openwebmath_perplexity": 833.9997947007021, "openwebmath_score": 0.8829783201217651, "tags": null, "url": "http://e3p-ltd.org/7dajins/q7liwnr.php?tynundghs=triple-integral-examples-pdf" }
computational-chemistry, theoretical-chemistry While it is certainly possible to use one Gaussian to approximate the Slater atomic orbitals, the performance would be poor. Here's an example plot that I use in class. Note that near the nucleus, the Gaussian-type function does a poor job of describing the wavefunction, and thus the electron density. Additionally, the single GTO is too high at medium range, and shows the wrong asymptotic behavior - it falls off too quickly at long range. (In other words, poor performance everywhere.) Instead, if we use a best-fit to 3 Gaussian-type functions, we do a much better job at approximating the STO. Pretty much everywhere except very close to the nucleus has a good fit.
{ "domain": "chemistry.stackexchange", "id": 16566, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computational-chemistry, theoretical-chemistry", "url": null }
thermodynamics, temperature, entropy Title: Third principle of thermodynamics and the unattainability of absolute zero Consider an $S$-$T$ diagram (entropy-temperature) and consider cooling a substance by performing a series of successive isothermal and reversible adiabatic processes between two volumes $V_{1}$ and $V_{2}$. Now when cooling the substance from $T_{1}$ to $T_{2}$ in the reversible adiabatic process we can write: $$S(0, V_{1})+\int_{0}^{T_{1}}\frac{C_{V}}{T}dT = S(0, V_{2})+\int_{0}^{T_{2}}\frac{C_{V}}{T}dT$$ letting $T_{2}=0$ will lead to: $$\underbrace{\int_{0}^{T_{1}}\frac{C_{V}}{T}dT}_{>0} = \underbrace{S(0, V_{2})-S(0, V_{1})}_{=0}$$ a contradiction, showing that the third principle of thermodynamics implies that absolute zero cannot be achieved. Is this reasoning correct? Your reasoning proves that we cannot reach absolute zero by a reversible adiabatic process from a nonzero temperature. I prefer to reach this conclusion as follows: your first equation applies to any reversible adiabatic process between states $(T_1,V_1)$ and $(T_2,V_2)$. If we set one temperature to absolute zero, the other must also be zero, because both states must have the same entropy, namely zero. In other words, we cannot connect $T=0$ to any finite temperature via an isentropic path.
{ "domain": "physics.stackexchange", "id": 92679, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, temperature, entropy", "url": null }
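The argument above can be made concrete with an illustrative heat capacity. Assuming (my assumption, not from the exchange) a Debye-like $C_V = aT^3$ with a volume-dependent coefficient $a$, the entropy is $S(T) = aT^3/3$, and equating entropies along a reversible adiabat shows the final temperature is always nonzero for a nonzero start:

```python
def adiabat_final_temperature(T1, a1, a2):
    """Final temperature on a reversible adiabat from (T1, V1) to V2, under
    the illustrative assumption C_V = a * T**3 (a depends on volume) and
    S(0, V) = 0 for every V (third law). Then S(T) = a * T**3 / 3, and
    equating a1*T1**3/3 = a2*T2**3/3 gives T2 = (a1/a2)**(1/3) * T1."""
    return (a1 / a2) ** (1.0 / 3.0) * T1
```

For any finite ratio a1/a2, T2 is strictly positive whenever T1 is, in line with the unattainability statement.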
special-relativity $$ t' = (t-v\cdot x/c^2)/\sqrt{1-v^2/c^2} $$ It's the $v\cdot x$ term in the numerator that causes the mischief here. In the runner's frame the more distant event (larger $x$) happens earlier. The far door is closed first. It opens before she gets there, and the near door closes behind her. Safe again — either way you look at it, provided you remember that simultaneity is relative, not absolute.
{ "domain": "physics.stackexchange", "id": 26534, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity", "url": null }
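The sign of the $v\cdot x$ term can be checked numerically. A sketch (speeds in units of $c$; the specific numbers are mine, chosen for illustration):

```python
import math

def t_prime(t, x, v, c=1.0):
    """Lorentz-transformed time: t' = (t - v*x/c**2) / sqrt(1 - v**2/c**2)."""
    return (t - v * x / c**2) / math.sqrt(1.0 - (v / c) ** 2)

v = 0.8  # runner's speed, units of c

# Two events simultaneous in the ground frame (t = 0), at the near and far doors:
near_door = t_prime(0.0, 0.0, v)    # door at x = 0
far_door = t_prime(0.0, 10.0, v)    # door at x = 10

# In the runner's frame the more distant event gets the earlier (more negative) time.
```

Here `far_door` comes out at $-40/3 \approx -13.3$, earlier than `near_door` at 0, exactly as the answer describes.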
Proving the parallelogram diagonal theorem. There are five ways to prove that a quadrilateral is a parallelogram. Definition: a parallelogram is a quadrilateral with two pairs of opposite sides parallel. Theorem: if the diagonals of a quadrilateral bisect each other, then the quadrilateral is a parallelogram. A parallelogram is a rhombus if and only if each diagonal bisects a pair of opposite angles. Each diagonal of a parallelogram separates it into two triangles of equal area. Given ABCD is a parallelogram, we will show that $\Delta ABD$ and $\Delta CDB$ are congruent, using the two-column method and the triangle congruency criteria, together with facts about lines and angles.
{ "domain": "higgsme.ir", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9553191259110588, "lm_q1q2_score": 0.8100798140638624, "lm_q2_score": 0.8479677545357568, "openwebmath_perplexity": 1001.6566035361559, "openwebmath_score": 0.626719057559967, "tags": null, "url": "https://higgsme.ir/adxb6i1d/cca795-proving-the-parallelogram-diagonal-theorem" }
atoms, atomic-radius Title: Radius with Bohr model $$r_n = \left(\frac{h^2}{4\pi^2me^2}\right)\times\frac{n^2}{Z} $$ Would I be able to calculate the radius of the sodium ion ($\ce{Na+}$) with the help of the above Bohr atomic model formula? The hydrogen atom wavefunctions can be useful for multielectron atoms as a means of looking up their size (by means of a parameterization). The hydrogen atom wavefunctions form one possible basis for the definition of effective nuclear charges, see for instance here, where Hartree–Fock orbitals are used to compute effective charges: $$ Z_{eff} = \frac{\langle r\rangle_H}{\langle r\rangle_Z}$$ Here $\langle r\rangle_H$ and $\langle r\rangle_Z$ are the mean hydrogenic and Hartree–Fock radii (for nuclear charge $Z$). The mean hydrogenic radius for ground-state hydrogen is related to the Bohr radius $a_0$ as $$\langle r\rangle_H = \frac{3}{2}\frac{a_0}{Z}$$ Expressions for $\langle r\rangle_H$ are available for other orbitals, allowing $\langle r\rangle_Z$ to be computed from the tabulated values of $Z_{eff}$ as $$ \langle r\rangle_Z = \frac{\langle r\rangle_H}{Z_{eff}}$$ So using the hydrogen wavefunctions in this way is a little more complicated than using the equation for the Bohr model, but just the s-orbitals can already be useful to give you an idea about the size of atoms.
{ "domain": "chemistry.stackexchange", "id": 11434, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "atoms, atomic-radius", "url": null }
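As a sketch of the parameterization described above, using the standard hydrogenic expectation value $\langle r\rangle_{nl} = (a_0/Z)\,[3n^2 - l(l+1)]/2$. The $Z_{eff} \approx 2.5$ used for the Na 3s orbital is an assumed, illustrative Clementi-style value, not taken from the answer:

```python
A0 = 0.529177  # Bohr radius, in angstroms

def mean_hydrogenic_radius(n, l, Z=1):
    """<r> for a hydrogenic orbital: (a0 / Z) * (3*n**2 - l*(l+1)) / 2.
    For 1s (n=1, l=0) this reduces to the (3/2) * a0 / Z quoted above."""
    return (A0 / Z) * (3 * n**2 - l * (l + 1)) / 2.0

def effective_radius(n, l, z_eff):
    """<r>_Z = <r>_H / Z_eff, following the parameterization in the answer."""
    return mean_hydrogenic_radius(n, l) / z_eff

r_1s = mean_hydrogenic_radius(1, 0)    # ground-state hydrogen, ~0.79 angstrom
r_na_3s = effective_radius(3, 0, 2.5)  # Na 3s with an assumed Z_eff ~ 2.5
```

This gives a rough orbital size, not a precise ionic radius; it is only meant to show how the tabulated $Z_{eff}$ values plug into the formulas.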
mechanical-engineering, civil-engineering, solid-mechanics Title: Neutral axis and 2nd moment of area Does anyone know how I would find the horizontal neutral axis ($y_c$) and 2nd moment of area of this cross section? (R = 210 mm) I know that there needs to be the same amount of area on either side of the neutral axis, but I don't know how I'm supposed to put that into practice here. Any help is much appreciated. Thanks. This problem is most easily handled using the "table" form as shown below. The result can be verified by selecting a more convenient reference line, to eliminate potential mistakes in sign convention, as shown below. Final checks: $y_{c(A)} = y_{c(B)}$. Due to the effect of subtraction, the neutral axis must shift above the centerline of the circle, so $y_c > R$.
{ "domain": "engineering.stackexchange", "id": 4110, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mechanical-engineering, civil-engineering, solid-mechanics", "url": null }
There are several ways to find the area of a hexagon. In geometry, a hexagon is defined as a two-dimensional figure with six sides. The interior angles of a regular hexagon are 120 degrees each, and the sum of all the angles of a hexagon is 720 degrees. The area of a triangle is half the base times the height. Drawing lines from the center to opposite vertices splits a regular hexagon into six congruent equilateral triangles, so if we know the side length of a regular hexagon, we can solve for the area. One method uses the apothem, a segment that joins a regular polygon's center to the midpoint of any side and that is perpendicular to that side. The area of the hexagon is the space confined within the sides of the polygon. For a regular (also known as equilateral) hexagon with side length $a$: $A = \frac{3\sqrt{3}}{2}a^2 = \frac{(\frac{\sqrt{3}}{2}a)(6a)}{2} = \frac{\text{apothem}\times\text{perimeter}}{2}$. Alternatively, the area of any regular polygon can be calculated using the formula $A = \frac{L^2 n}{4\tan(180^\circ/n)}$, where $A$ is the area of the polygon, $L$ the length of a side, and $n$ the number of sides.
{ "domain": "com.pl", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9683812309063186, "lm_q1q2_score": 0.8529473676412579, "lm_q2_score": 0.8807970873650403, "openwebmath_perplexity": 455.577694290356, "openwebmath_score": 0.6481115818023682, "tags": null, "url": "https://hotel-krotosz.com.pl/night-of-bediob/f321b1-area-of-hexagon" }
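The two formulas quoted above agree for $n = 6$, which is easy to verify numerically:

```python
import math

def regular_polygon_area(L, n):
    """General regular-polygon formula: A = (L**2 * n) / (4 * tan(pi / n))."""
    return (L**2 * n) / (4.0 * math.tan(math.pi / n))

def hexagon_area(a):
    """Regular hexagon as six equilateral triangles: A = (3*sqrt(3)/2) * a**2."""
    return 3.0 * math.sqrt(3.0) / 2.0 * a**2
```

For a unit side length both give approximately 2.598, i.e. $3\sqrt{3}/2$.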
Second, given the coordinate-free definition, the fundamental idea of the dot product is that of projection. It gives a single number which indicates the component of a vector in the direction of another vector. Your observation of the dissimilarity between the dot and cross product is correct; however, the dot product can be used to produce a vector as well: it just does it component-by-component. Let's suppose that we have a vector $\mathbf v$ represented by its components in a given coordinate system. Let's further suppose that we have an orthonormal basis defined in that same coordinate system as the set of column vectors $\{\mathbf u_1, \mathbf u_2, \ldots, \mathbf u_n\}$. Finally, suppose that we want to represent $\mathbf v$ in this basis as $\mathbf w$. The question is how do we do that? We use the dot product, of course! So the first component of $\mathbf w$ would then be $w_1 = \mathbf u_1\cdot \mathbf v$, the second component would be $w_2 = \mathbf u_2\cdot \mathbf v$, and so on. (Note that because $\|\mathbf u_i\| = 1$, we have $\mathbf u_i\cdot \mathbf v= \|\mathbf v\|\cos\theta_i$.) If we then think of the vector $\mathbf w$ defined as such we have
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9890130575313262, "lm_q1q2_score": 0.8502186725627057, "lm_q2_score": 0.8596637487122112, "openwebmath_perplexity": 177.29955373726372, "openwebmath_score": 0.9262725710868835, "tags": null, "url": "http://math.stackexchange.com/questions/414776/what-is-the-use-of-the-dot-product-of-two-vectors/414785" }
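The change-of-basis computation described above can be sketched in a few lines (the basis and vector here are my own illustrative choices):

```python
import math

def dot(u, v):
    """Plain dot product of two same-length sequences."""
    return sum(a * b for a, b in zip(u, v))

# An orthonormal basis of R^2, rotated 45 degrees from the standard one:
u1 = (1 / math.sqrt(2), 1 / math.sqrt(2))
u2 = (-1 / math.sqrt(2), 1 / math.sqrt(2))

v = (3.0, 1.0)

# Components of v in the new basis are just dot products with the basis vectors:
w = (dot(u1, v), dot(u2, v))

# Reconstruct v from those components to confirm the change of basis:
v_back = tuple(w[0] * a + w[1] * b for a, b in zip(u1, u2))
```

Reconstructing `v_back` from the projected components recovers the original (3, 1), which is exactly the projection idea at work.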
design-patterns, bash, plugin validateNumeric() { BOOKLENGTH=$1 reg='^[0-9]+$' if ! [[ $BOOKLENGTH =~ $reg ]] ; then echo "Error: Argument not a number, try again:" >&2; read BOOKLENGTH validateNumeric $BOOKLENGTH else validateEven $BOOKLENGTH fi } validateEven() { BOOKLENGTH=$1 echo "Testing if ${BOOKLENGTH} is even now" if [ $((BOOKLENGTH%2)) -eq 0 ] ; then echo "Ok ....... Proceeding" echo "Setting book length = $BOOKLENGTH" return $BOOKLENGTH else echo "Error: Not an even number, try again:" >&2; read BOOKLENGTH validateEven $BOOKLENGTH fi } setupProject() { echo "Setting up $PROJECTNAME now ..." mkdir -p "$1" && cd "$1" && touch README.md license.txt .gitignore && mkdir "trash" "cover" "templates" "images" "manuscript" || return $? echo "# $1" >> README.md cd "templates" && touch template.html head.html template.css template.js && cd ".." } createPages() { PAGES=$1 cd "manuscript" p=0 while [ "$p" -lt "$PAGES" ]; do p=$((p+1)) mkdir -p "page-$p" cd "page-$p" touch "body.html" touch "style.css" echo "body{background:rgba(200, 235, 255, 0.99); margin:0 0; overflow:hidden;}" >> style.css cd ".." done echo "Done!" && cd ".." #Head back to root }
{ "domain": "codereview.stackexchange", "id": 15789, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "design-patterns, bash, plugin", "url": null }
real form then there is only one orbit, i.e. $G_0$ acts transitively on $G/P$. This gives the familiar "compact picture" $G_0/G_0 \cap P$ of the flag variety. On the other hand, if $G_0$ is noncompact then it's relatively uncommon for its action on $G/P$ to be transitive, but it is possible. Joseph Wolf worked out the list of $G_0$ for which this is the case in: Joseph Wolf, Real groups transitive on complex flag manifolds, Proc. Amer. Math. Soc. 129 (2001), 2483-2487 (http://www.ams.org/journals/proc/2001-129-08/S0002-9939-01-05825-7/home.html). Answer by Faisal (2011-11-09) for "A technical problem on the contragredient representation in the context of locally compact totally disconnected groups" (http://mathoverflow.net/questions/80492/a-technical-problem-on-the-contragredient-representation-in-the-context-of-locall/80507#80507): This follows from two facts: 1. The complement $E_1^\perp$ of $E_1$ in $\tilde{E}$ is isomorphic to the contragredient of $E/E_1$. 2. If $V$ is admissible and nonzero then $\tilde{V}$ is nonzero (and admissible). For if $\tilde{V}=0$ then $V = \tilde{\tilde{V}} = 0$.
{ "domain": "mathoverflow.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462215484651, "lm_q1q2_score": 0.8100105276363543, "lm_q2_score": 0.8198933403143929, "openwebmath_perplexity": 598.38865623681, "openwebmath_score": 0.9509128332138062, "tags": null, "url": "http://mathoverflow.net/feeds/user/430" }
r, fuzzy-logic Title: [R] Fuzzy C-Means: difference between ppclust vs e1071? Number of clusters = 10, data points = 6000. library(ppclust); cm <- fcm(x, centers = cen) takes ~10 minutes. library(e1071); cm <- cmeans(x, cen, 1000) takes ~1 minute. The only reason I prefer ppclust is because it allows nstart, meaning I am not stuck with a local optimum. Given both are doing the same thing, why is one taking 10x the time of the other? Look at the source code. The guts of e1071::cmeans are written in C, but ppclust::fcm looks like all R and has some ugly-looking eval stuff which might make it dead slow.
{ "domain": "datascience.stackexchange", "id": 4114, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "r, fuzzy-logic", "url": null }
c++, ros-kinetic Title: use custom interpreter Hi I wrote an interpreter for cpp. how can I use it in ros and write my code for this interpreter? Originally posted by ashkan_abd on ROS Answers with karma: 1 on 2018-04-14 Post score: 0 I assume you mean that you wrote a compiler for C++, since C++ is not an interpreted language. ROS uses whatever compiler the environment says is the current one. How to change the compiler for your environment (which can be done on a shell-by-shell basis, or system wide) is not really ROS specific, but it's also easy enough to do. Set the CC environment variable to the compiler's executable to change the C compiler, and the CXX environment variable to change the C++ compiler. If you are using CMake (which you are for ROS) then you can also do it in a CMakeLists.txt file by setting the CMAKE_C_COMPILER and CMAKE_CXX_COMPILER variables to the path to the compiler executable. I'm not sure how much Catkin will like having this variable changed in one package, though. It might take effect for the whole workspace, or it might just be overridden. Originally posted by Geoff with karma: 4203 on 2018-04-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ashkan_abd on 2018-04-16: Could you give more details on how to do this? Comment by Geoff on 2018-04-17: Setting environment variables is a basic Linux question. I suggest you read up on how to use the shell, and if you have any further Linux-related questions, ask them at a Linux help site.
{ "domain": "robotics.stackexchange", "id": 30635, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, ros-kinetic", "url": null }
c#, bitwise, serialization, stream if (min > 0) { max -= min; } data = Read(BitsRequired(max)) - min; } /// <summary> /// Read bits from the stream and write that information to data. /// WARNING: If you read data in a different order than written, there is a possibility that the actual number written to data is outside of the given range. In such a case, you may want to check the bounds yourself. /// </summary> /// <param name="data">the variable to be written to</param> /// <param name="min">the smallest possible number that could have been written</param> /// <param name="max">the largest possible number that could have been written</param> public void Read(out uint data, uint min, uint max) { ulong tempData; Read(out tempData, min, max); data = (uint) tempData; } /// <summary> /// Read bits from the stream and write that information to data. /// WARNING: If you read data in a different order than written, there is a possibility that the actual number written to data is outside of the given range. In such a case, you may want to check the bounds yourself. /// </summary> /// <param name="data">the variable to be written to</param> /// <param name="min">the smallest possible number that could have been written</param> /// <param name="max">the largest possible number that could have been written</param> public void Read(out ushort data, ushort min, ushort max) { ulong tempData; Read(out tempData, min, max); data = (ushort) tempData; }
{ "domain": "codereview.stackexchange", "id": 21901, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, bitwise, serialization, stream", "url": null }
mechanical-engineering, springs Title: Is the Spring Rate for a Belleville Spring Linear? Belleville disc springs are usually spec'd with a free height, loaded height, and loaded force. From this information you can calculate the average spring rate of a Belleville spring. However, what I would like to know is if the force is actually linear (or nearly linear). I assume it is not, but don't know what curve the force might follow. Is there an equation that can predict the force curve (plot of force versus height) of a Belleville spring given ID, OD, free height, and thickness? The curve is not linear. $s = $ actual deflection (mm) $h_o = $ total possible deflection (mm) $F = $ actual load (N) $F_c = $ designed limit load (N) Here you can see the original archive with the plotted graph from one manufacturer (I usually use this material as a reference for this type of spring). I don't know the exact formula, but my guess at the reasoning is: from 0% force to ~70% force the behaviour should be dominated by the compression between the inner diameter and outer diameter; from ~70% force to 100% it should be dominated by bending.
{ "domain": "engineering.stackexchange", "id": 3032, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mechanical-engineering, springs", "url": null }
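The nonlinear load-deflection curve asked about here is usually modeled with the Almen-Laszlo relation, the form standardized in DIN 2092. A sketch with assumed dimensions and steel properties (none of these values come from the question):

```python
import math

def belleville_force(s, De, Di, t, h0, E=206e9, mu=0.3):
    """Almen-Laszlo load-deflection relation for a Belleville washer
    (the form standardized in DIN 2092). Lengths in metres, E in Pa;
    s is the deflection from the free position; returns force in N."""
    delta = De / Di
    K1 = (1 / math.pi) * ((delta - 1) / delta) ** 2 / (
        (delta + 1) / (delta - 1) - 2 / math.log(delta))
    return (4 * E / (1 - mu**2)) * (t**4 / (K1 * De**2)) * (s / t) * (
        (h0 / t - s / t) * (h0 / t - s / (2 * t)) + 1)

# Illustrative geometry (assumed values): 40/20 mm washer, 1.5 mm thick, 1 mm cone height
De, Di, t, h0 = 0.040, 0.020, 0.0015, 0.001
F_half = belleville_force(h0 / 2, De, Di, t, h0)
F_flat = belleville_force(h0, De, Di, t, h0)
# A linear spring would give F_flat == 2 * F_half; here the rate softens instead.
```

The cubic term in `s` is what produces the softening (and, for high h0/t ratios, even a force plateau or snap-through region).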
electrical-engineering, structural-engineering, infrastructure What I'm wondering is: How effective is LiDAR in practice? Does it enable inspections to be done faster? If so, how much faster? I know that underground inspections are prone to human error and missed information, but there is some information on LiDAR effectiveness that I can't seem to find when looking at research. Say one has a manhole structure where a vault exists. Resolution of photos and the ability to see the 2nd or 3rd cable deep on a rack in the back office is critical to evaluate defects – particularly cables with cracks and no active oil leak. I don't see how a tethered vehicle is going to be able to position itself to find these defects. It makes it even more difficult to give an honest analysis because I haven't ever been inside a manhole myself. If anyone could speak towards their experience with IR/thermography and LiDAR point-cloud scan effectiveness in manhole structures, your advice will be greatly appreciated. I don't think you want LiDAR. From what I have seen (and my experience is limited), LiDAR, when it is referred to as LiDAR at least, is only accurate to a few cm. You aren't going to see cable defects with that. But instead of using putty to mold impressions of your teeth, orthodontists now use a handheld scanner. It has a rotating mirror in it; they run it up and down your teeth, and software stitches the "images" together to make a 3D model of your mouth in the computer. https://www.kerenor.ca/blog/articles/digital-impressions/ https://www.faceandsmile.ca/itero-digital-impressions I have seen several models, and all the ones I have seen have a rotating mirror and emit a red light. You have to get in close though. They are called "virtual orthodontic impressions" or "digital teeth impressions", something like that. It's possible they might be using something in addition to time-of-flight, such as phase.
They are really accurate but require dexterity to scan around the target (if you have watched an orthodontic assistant scanning someone's mouth you will understand what I mean). https://www.youtube.com/watch?v=0xcewGpzdPc
{ "domain": "engineering.stackexchange", "id": 4195, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrical-engineering, structural-engineering, infrastructure", "url": null }
terminology Modularity based on abstraction is the way things are done. Abstraction here means not exposing or requiring unnecessary details. For instance, a dictionary specification in a scripting language (Python for example) should not specify the exact number of bytes required to store each element. This doesn't help users of the module and makes it harder to develop the module. However, the level of abstraction depends on the application. When developing a spacecraft the right level of abstraction might indeed require specifying the amount of bytes used. When people speak about modularity in software development today, they will typically mean the "modularity based on abstraction" Liskov describes. If modules are not abstracted properly, it might even be almost impossible to identify them.
{ "domain": "cs.stackexchange", "id": 18751, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "terminology", "url": null }
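The dictionary example above can be sketched concretely: a module that exposes only an abstract mapping interface, so nothing in its specification promises how many bytes an entry occupies, and the hidden representation can change freely. This is my own illustration, not from the original text:

```python
from collections.abc import Mapping

class FrozenMap(Mapping):
    """A tiny read-only mapping. Clients code against the abstract Mapping
    interface (lookup, iteration, length); that interface says nothing about
    per-entry storage, so the representation below could be swapped for a
    pair of tuples, a trie, etc., without breaking any user of the module."""

    def __init__(self, **pairs):
        self._pairs = dict(pairs)  # hidden implementation detail

    def __getitem__(self, key):
        return self._pairs[key]

    def __iter__(self):
        return iter(self._pairs)

    def __len__(self):
        return len(self._pairs)

m = FrozenMap(a=1, b=2)
```

Because `FrozenMap` inherits from `collections.abc.Mapping`, derived operations like `get`, `items`, and `in` come for free from the three abstract methods, which is itself a small example of modularity based on abstraction.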
quantum-mechanics, two-level-system This is exactly the same as $C_{I}$ in $(9.10)$ except for the factor $\frac{b}{\sqrt2}$. Are you able to just assume that $b=\sqrt2$ to eliminate this factor and make $C_I$ equal to what is shown in $(9.10)$, or is this incorrect? The constant $b$ can be solved as follows. Suppose that at $t=0$ we know the molecule is in state $\lvert{I}\rangle$, then $C_{I}(0)=1$. Now knowing that $a=0$ $C_I(0)=\frac{1}{\sqrt2}[C_1(0)-C_2(0)]=\frac{1}{\sqrt2}[\frac{b}{2}e^{-(i/\hbar)(E_0+A)(0)}+\frac{b}{2}e^{-(i/\hbar)(E_0+A)(0)}]=\frac{b}{\sqrt2}=1$ Therefore, $b={\sqrt2}$
{ "domain": "physics.stackexchange", "id": 77560, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, two-level-system", "url": null }
python, python-3.x, type-hinting Title: Type hints and expected types I'm unsure if this question belongs on Code Review or Stack Overflow. Another developer told me that I should consider adding type hints to my open source project WordHoard. I have never used type hints before and the documentation doesn't seem intuitive, so I need some guidance on how to implement this functionality in my code. I've started looking at adding it to some low-level code pieces first. For example: from typing import AnyStr, Dict def colorized_text(r: int, g: int, b: int, text: str) -> AnyStr: """ This function adds color to error messages. For example: rgb(255, 0, 0) is displayed as the color red rgb(0, 255, 0) is displayed as the color green :param r: red color value :param g: green color value :param b: blue color value :param text: text to colorize :return: string of colorized text """ return f"\033[38;2;{r};{g};{b}m{text}\033[0m" Are type hints implemented correctly in the function colorized_text? Here is another code example. from typing import Optional temporary_dict_antonyms = {} def cache_antonyms(word: str) -> [bool, str]: item_to_check = word if item_to_check in temporary_dict_antonyms.keys(): values = temporary_dict_antonyms.get(item_to_check) return True, list(sorted(set(values))) else: return False, None def insert_word_cache_antonyms(word: str, values: list[str]) -> None: if word in temporary_dict_antonyms: deduplicated_values = set(values) - set(temporary_dict_antonyms.get(word)) temporary_dict_antonyms[word].extend(deduplicated_values) else: values = [value.strip() for value in values] temporary_dict_antonyms[word] = values
{ "domain": "codereview.stackexchange", "id": 44404, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, type-hinting", "url": null }
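For reference, a sketch (mine, not from the question) of how the two hints above might be tightened: `AnyStr` is a `TypeVar` for functions that accept and return `str` or `bytes` consistently, so a function that always returns `str` should be hinted as plain `str`; and a two-element return is a `tuple[...]`, not a `[bool, str]` list literal:

```python
from __future__ import annotations  # lets built-in generics work on older Pythons

from typing import Optional

def colorized_text(r: int, g: int, b: int, text: str) -> str:
    """Always returns str, so the hint is plain `str`, not AnyStr."""
    return f"\033[38;2;{r};{g};{b}m{text}\033[0m"

temporary_dict_antonyms = {}  # maps str -> list[str]

def cache_antonyms(word: str) -> tuple[bool, Optional[list[str]]]:
    """tuple[...] for the pair; Optional marks that the second slot may be None."""
    values = temporary_dict_antonyms.get(word)
    if values is not None:
        return True, sorted(set(values))
    return False, None
```

A type checker such as mypy would flag the original `-> [bool, str]` annotation as invalid, which is one quick way to validate hints while learning them.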
organic-chemistry, nomenclature, heterocyclic-compounds Title: What do the numbers in the preferred IUPAC name for ascorbic acid mean? If I look up the IUPAC name of ascorbic acid, I find this: (5R)-[(1S)-1,2-Dihydroxyethyl]-3,4-dihydroxyfuran-2(5H)-one. I am used to numbering of molecules in a way like the 3,4-dihydroxyfuran part in this molecule name. But now what I want to know is: what do the 5R, 1S and 5H parts signify? For the 5H I came up with this: furan has two double bonds, so the H might signify that this is not the case any more, because if there is an extra H at the fifth position, this would be impossible. This can be seen in the IUPAC name of 2-furanone: 5H-furan-2-one. I don't know, though, if this assumption is correct, also because in 5H-furan-2-one the 5H is before the furan part and in 3,4-dihydroxyfuran-2(5H)-one it is after (shouldn't it be 2(3,4-dihydroxy-5H)-furanone anyway?). Fact is that I can't find an explanation for the 5R, nor the 1S (as I see no sulfur). Could anyone explain these numbering conventions to me? The numbering of (R)-5-[(S)-1,2-dihydroxyethyl]-3,4-dihydroxyfuran-2(5H)-one has four parts: 1) Numbering of the furan ring 2) Numbering of the ethyl side chain 3) Stereodescriptors R and S 4) Italicized element symbols The italic element symbol H denotes indicated or added hydrogen.
{ "domain": "chemistry.stackexchange", "id": 2276, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, nomenclature, heterocyclic-compounds", "url": null }
ros, buildfarm Title: Is code analyzer supported in ros_buildfarm? Hi, I am interested in the "Make ROS package quality visible" work by the Quality Assurance WG. Is it currently supported to execute code analysis on the ros_buildfarm? I found some scripts for code analyzers which are marked "deprecated" in the ros-infrastructure repository. https://github.com/ros-infrastructure/jenkins_scripts/tree/master/code_quality Then I have checked the wiki page of code_quality and the scripts directory in ros_buildfarm but couldn't find the answer. http://wiki.ros.org/code_quality https://github.com/ros-infrastructure/ros_buildfarm/tree/master/scripts Thanks. Originally posted by akihikotsukuda on ROS Answers with karma: 13 on 2019-07-26 Post score: 0 In general we don't add code analyzers / linters to the buildfarm alone since it would make it difficult to reproduce the results locally. Instead, in ROS 2 any kind of linter is integrated on a per-package level. So when you run the tests of a package you also run any kind of linter available (e.g. clang-format, clang-tidy, cppcheck, cpplint, flake8, uncrustify, xmllint). The generally used ones are in this repo: https://github.com/ament/ament_lint All of these produce xUnit-compliant result files to be picked up by Jenkins when running the tests in the buildfarm. For ROS 1 there is no such infrastructure for linters and you would need to integrate each linter manually into each package. Originally posted by Dirk Thomas with karma: 16276 on 2019-07-26 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by akihikotsukuda on 2019-07-28: I understood the current policy for ROS 1 infrastructure. Thank you!
{ "domain": "robotics.stackexchange", "id": 33523, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, buildfarm", "url": null }
python, python-3.x, multithreading, http, network-file-transfer class USession(Session): portassigner = Port_Getter() def __init__(self, *args, **kwargs): super(USession, self).__init__(*args, **kwargs) self.headers.update( {'connection': 'close', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0'}) self.setport() def setport(self): port = USession.portassigner.randomport() self.mount('http://', Adapter(port)) self.mount('https://', Adapter(port)) class Multidown: def __init__(self, dic, id): self.count = 0 self.completed = False self.id = id self.dic = dic self.position = self.getval('position') def getval(self, key): return self.dic[self.id][key] def setval(self, key, val): self.dic[self.id][key] = val
{ "domain": "codereview.stackexchange", "id": 42148, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, multithreading, http, network-file-transfer", "url": null }
complexity-theory, turing-machines, optimization, terminology, logic Title: Turing Machine where branches are resolved via arbitrary operator Alternating Turing Machines output Boolean values and combine the values returned by branches via the any/all operators. Is there a name or theory behind the class of Turing Machines where there is no restriction to the Boolean space and the any/all operators? For example, I want a machine where terminal states output real values and non-terminal states use the min operator to combine the outputs of branches. Additionally, are there subclasses of this class? I imagine operators which have certain properties (associativity, idempotence, and especially properties related to ordering or transitivity) would have interesting guarantees regarding interruptibility in the same way that a machine using only the any operator can terminate as soon as it finds one accepting state. Using any/all (a.k.a. or/and) gives rise to alternating Turing machines. Goldschlager and Parberry (On the construction of parallel computers from various bases of boolean functions, Theoretical Computer Science 48:43–58, 1986) consider the generalization to allowing arbitrary Boolean functions, and they call the resulting machines extended Turing machines. To me, it would make sense to use the same term for what you're proposing. I suggest following the references in Goldschlager and Parberry and looking up who's cited them, to see if the term "extended Turing machine" stuck and if it's been applied to your scenario.
{ "domain": "cs.stackexchange", "id": 13413, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, turing-machines, optimization, terminology, logic", "url": null }
error-correction, stabilizer-code, fault-tolerance The above shows that for these self-orthogonal CSS codes, we can always construct a reasonable basis where a product of physical Hadamards behaves as a product of logical Hadamards up to some logical swaps. This answers my question in the affirmative. As an aside, note that for some self-orthogonal CSS codes, we cannot always construct a reasonable basis where a product of physical Hadamards behaves as a product of logical Hadamards without any swaps. For example, this question noted that the CSS code generated from the classical codes $$C = \langle (111001), (000110) \rangle$$ $$C^\perp = \langle (111001), (000110), (010001), (001001) \rangle $$ suffers from a logical swap after performing $\otimes_{i=1}^n H_i$. I also noted in a comment that the CSS code generated from $$C = \langle(11110)\rangle$$ $$C^\perp = \langle(11110), (11000),(01100), (00001)\rangle$$ with any reasonable basis also suffers from a swap on two of the three logical qubits after performing $\otimes_{i=1}^n H_i$.
{ "domain": "quantumcomputing.stackexchange", "id": 5343, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "error-correction, stabilizer-code, fault-tolerance", "url": null }
linear-regression, encoding, stata Title: How to encode ordinal data before applying linear regression in STATA? I have a data set that has student performance marks (a continuous, dependent variable) and Teacher Qualification (an ordinal, independent variable with categories: Masters, Bachelors, High School). I want to apply regression analysis to check the impact of teacher qualification on students' marks. How can I encode ordinal data before applying linear regression? I think the best way is to dummy-encode teacher qualification, so each level of qualification enters the regression with a separate intercept term. Note that dummy-encoding always works against a contrast level. So when "Master degree" is the base level, you will see the effect of "Bachelor" compared to "Master", etc. You can dummy-encode in Stata by using the i. prefix, e.g. summarize i.size. In a regression you would use reg y i.x. See the Stata docs for details.
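The i. prefix performs exactly the dummy encoding described: one indicator column per non-base level. Outside Stata, the same idea can be sketched in plain Python (the column names and choice of base level here are illustrative):

```python
# One-hot (dummy) encode teacher qualification against a base level.
# Choosing "Masters" as the base level is arbitrary, for illustration only.
LEVELS = ["Masters", "Bachelors", "High School"]
BASE = "Masters"

def dummy_encode(qualification: str) -> dict:
    """Return one indicator column per non-base level."""
    if qualification not in LEVELS:
        raise ValueError(f"unknown level: {qualification!r}")
    return {f"is_{level}": int(qualification == level)
            for level in LEVELS if level != BASE}
```

A "Bachelors" teacher gets is_Bachelors=1, while the base level encodes as all zeros, so its effect is absorbed into the regression intercept.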
{ "domain": "datascience.stackexchange", "id": 7999, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "linear-regression, encoding, stata", "url": null }
theoretical-biology, literature, glucose, insulin This looks very similar to how insulin and glucose interact with each other in the body. Glucose uptake releases insulin, and glucagon offsets the effect of insulin through glycogenolysis. Can the glucose-insulin dynamic be described as Lotka-Volterra? Is the standard Lotka-Volterra (LV) model an exact fit for insulin-glucose (IG) dynamics? No. Can a similar model built on the same principles capture most of the essential features of the IG dynamics? Absolutely. How to capture most of the insulin-glucose dynamics using a slightly modified Lotka-Volterra model We can figure out how to change the LV equations to fit the IG dynamics by figuring out how our assumptions have changed. As Daan mentioned, neither insulin nor glucose undergoes self-reproduction. So we'll drop those terms from the equations, and we'll represent an influx of glucose (as from, say, a meal) as a simple time-dependent linear spike. Your rate equations will now look like: $\frac{dx}{dt} = \alpha[t_{gi} < t < t_{gf}]-\beta x y$ $\frac{dy}{dt} = \delta x-\gamma x y$ where $x$ is glucose concentration, $y$ is insulin concentration, and $[t_{gi} < t < t_{gf}]$ is equal to 1 if the current time $t$ is greater than the time when the glucose spike starts $t_{gi}$ and less than the time when the glucose spike ends $t_{gf}$, and is equal to zero otherwise. Simulation of insulin-glucose dynamics using the above LV model I ran a simulation of the above model in Matlab, and here's what it looks like:
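The rate equations above can be reproduced with a simple forward-Euler loop. The parameter values below are arbitrary, chosen only to show the qualitative shape (glucose spikes during the meal window, insulin follows and pulls it back down), not to be physiological:

```python
# Forward-Euler integration of the modified Lotka-Volterra model above:
#   dx/dt = alpha*[t_gi < t < t_gf] - beta*x*y   (glucose)
#   dy/dt = delta*x - gamma*x*y                  (insulin)
# All parameter values are illustrative, not physiological.
alpha, beta, delta, gamma = 2.0, 0.1, 0.5, 0.2
t_gi, t_gf = 1.0, 3.0          # meal (glucose spike) window
dt, t_end = 0.001, 10.0

x, y = 1.0, 0.5                # initial glucose / insulin levels
trace = []
t = 0.0
while t < t_end:
    pulse = alpha if t_gi < t < t_gf else 0.0
    dx = pulse - beta * x * y
    dy = delta * x - gamma * x * y
    x += dx * dt
    y += dy * dt
    trace.append((t, x, y))
    t += dt
```

Plotting `trace` shows glucose climbing during the meal window and decaying afterwards as insulin rises, which is the qualitative behaviour described in the answer.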
{ "domain": "biology.stackexchange", "id": 4292, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "theoretical-biology, literature, glucose, insulin", "url": null }
microbiology, immunology, immune-system, antibody, antigen Title: How are antibodies specific for a disease detected in the blood if everybody produces a different antibody for the same antigen? To break the title down into parts: There exist serology tests that detect the amount of an antibody (Ab) against a specific pathogen/antigen. Every human produces their own Ab for a specific antigen by (relatively) random combination of different gene segments in their B cells, until one that recognizes the pathogen is found and produced in massive amounts. One portion of every Ab is constant, whereas the portion that recognizes the antigen is variable. Even for antibodies (in different humans) that recognize the same antigen, albeit to a lesser degree. What exactly does an antibody titer/serology test detect in the blood so that it can accurately measure the amount of antibodies against a specific antigen? I am assuming they use antibodies against the antigen-binding/variable portion of an antibody. But if the variable region differs from person to person, how can a universal antibody (to be used in serology tests) against it be generated? Is the variable region not that variable, so that any lab-generated antibody against any functional antibody against a virus works well enough for an accurate measurement of antibodies against that virus? But even so the concept sounds hard to believe since even a single antigen can have multiple epitopes, and one person might have an Ab for epitope 1 of antigen X, while the other would have an Ab for epitope 2 of antigen X. And they would still be immune to the same virus, while a universal Ab titer test for the virus would be unable to detect antibodies from one of them. Do they perhaps employ multiple lab-generated antibodies against most of the possible variations of human-generated antibodies in the tests? Specific antibodies are typically detected using ELISA. 
The way you make a test for an antibody to a particular pathogen is not by using secondary antibodies to the specific part of the target antibody, but by using the antigen. Different kits work differently, but the general idea is that you have antigen stuck to a surface, you put the sample on that surface. If there are antibodies to the antigen, they stick to it. Then you wash everything away and add a tagged antibody that binds to generic human antibodies. See also https://www.sciencemag.org/news/2020/03/new-blood-tests-antibodies-could-show-true-scale-coronavirus-pandemic
{ "domain": "biology.stackexchange", "id": 10395, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "microbiology, immunology, immune-system, antibody, antigen", "url": null }
### (Question) Are the mandelbrot sets generated different in appearance to the "actual" set? • Fractal Fanatic • Posts: 29 #### Are the mandelbrot sets generated different in appearance to the "actual" set? « on: May 15, 2018, 01:37:56 AM » Hey guys, I looked up somewhere that the mandelbrot sets we see on computer are an approximation to the true mandelbrot set. How far off could the difference between the generated and the "true" appearance of the set be? • 3c • Posts: 843 #### Re: Are the mandelbrot sets generated different in appearance to the "actual" set? « Reply #1 on: May 15, 2018, 01:48:52 AM » Not very. I seem to recall there is a theorem that, unless the precision is too low (enough to cause detectable artifacting), the color of a pixel is the correct color for some point inside of that pixel. • 3f • Posts: 1536 #### Re: Are the mandelbrot sets generated different in appearance to the "actual" set? « Reply #2 on: May 15, 2018, 01:57:46 AM » Not very. I seem to recall there is a theorem that, unless the precision is too low (enough to cause detectable artifacting), the color of a pixel is the correct color for some point inside of that pixel. That's "backward stability". AFAIK it is not proven but a conjecture supported numerically. See https://fractalforums.org/fractal-mathematics-and-new-theories/28/perturbation-theory/487/msg2365#msg2365 • 3c • Posts: 891 #### Re: Are the mandelbrot sets generated different in appearance to the "actual" set? « Reply #3 on: May 15, 2018, 04:48:27 AM » It can be quite far off.
{ "domain": "fractalforums.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9828232889752296, "lm_q1q2_score": 0.8080666211371796, "lm_q2_score": 0.8221891261650247, "openwebmath_perplexity": 1838.5187834022888, "openwebmath_score": 0.7593277096748352, "tags": null, "url": "https://fractalforums.org/fractal-mathematics-and-new-theories/28/are-the-mandelbrot-sets-generated-different-in-appearance-to-the-actual-set/1333" }
biochemistry, drugs, proteins, pharmacology Title: Estimating protein binding and dissociation I don't have a background in the area of drugs or pharmacokinetics/pharmacodynamics but I am trying to understand protein binding. I was going through this paper. If
$C_b$ is the concentration of the bound drug,
$C_f$ is the concentration of the free drug,
$C$ is the total drug concentration,
$P$ is the concentration of protein binding sites (bound and free),
$K_d$ is the dissociation constant of the drug-protein complex,
then for a one-compartment model the free drug concentration can be found through: $C_f=\dfrac{-(P+K_d-C)+\sqrt{(P+K_d-C)^2+4 K_d C}}{2}$ The assumptions used in this derivation are: Binding occurs only to plasma proteins and follows simple saturation kinetics. The binding process can be described by a single macroscopic dissociation constant. Binding equilibrium is achieved virtually instantaneously with respect to distribution and elimination. Aside from binding, all other processes (i.e., distribution and elimination) are linear. Distribution and elimination processes operate only on free drug. My question is how to determine the parameter values of $P$ and $K_d$. The article has given certain properties of the drug but not how these parameters are found. The given details on the drug are:
1. The drug has a molecular weight of 150.
2. The drug binds only to serum albumin, whose concentration is 4.4% and whose molecular weight is 67,000.
3. At sufficiently low drug concentrations, the drug is 89% bound.
4. The distribution volume is 50 ml./kg.
5. Free drug has a half-life of 30 min.
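The listed drug properties do pin the two parameters down, under the stated assumptions. The albumin concentration and molecular weight give the molar binding-site concentration $P$ (taking one binding site per albumin molecule, which is an assumption; albumin actually has several drug-binding sites), and the low-concentration bound fraction gives $K_d$, since as $C \to 0$ the bound fraction tends to $P/(P+K_d)$. A sketch of the arithmetic:

```python
import math

# Binding-site concentration from the albumin data. One site per molecule
# is an assumption made for this sketch.
albumin_g_per_L = 44.0             # 4.4 g per 100 mL
albumin_mw = 67000.0               # g/mol
P = albumin_g_per_L / albumin_mw   # ~6.6e-4 mol/L

# At low drug concentration the bound fraction fb = P / (P + Kd) = 0.89,
# so Kd = P * (1 - fb) / fb.
fb = 0.89
Kd = P * (1 - fb) / fb             # ~8.1e-5 mol/L

def free_concentration(C: float) -> float:
    """C_f from the one-compartment expression quoted in the question."""
    a = P + Kd - C
    return (-a + math.sqrt(a * a + 4 * Kd * C)) / 2
```

As a consistency check, at a tiny total concentration the free fraction returned by the quoted formula comes back as 1 − fb = 0.11.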
{ "domain": "chemistry.stackexchange", "id": 13063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "biochemistry, drugs, proteins, pharmacology", "url": null }
performance, php, laravel Title: Laravel controller method that searches jobs using variable criteria I have custom code where I check the request params for existence and query the database through the model public function search(Request $request) { $position = $request->get("position") ?? null; $location = $request->get("location") ?? null; $employment = $request->get("employment") ?? null; $jobsBy = null; $jobs = null; if($position) { $jobs = Job::where('title', 'LIKE', '%'.$position.'%'); if($location) { $locations = []; if(is_array($location)) { foreach ($location as $name) { $locations[] = $name; } } else $locations[] = $location; $jobs = $jobs->whereMetaIn("location", $locations); } if($employment) { $employments = []; if(is_array($employment)) { foreach ($employment as $name) { $employments[] = $name; } } else $employments[] = $employment; $jobs = $jobs->withAnyTag($employments); } if($jobs->get()->count()) $jobsBy = "position"; } if($location && $jobsBy === null) { $locations = []; if(is_array($location)) { foreach ($location as $name) { $locations[] = $name; } } else $locations[] = $location; $jobs = Job::whereMetaIn('location', $locations); if($employment) { $employments = []; if(is_array($employment)) { foreach ($employment as $name) { $employments[] = $name; } } else $employments[] = $employment; $jobs = $jobs->withAnyTag($employments); }
{ "domain": "codereview.stackexchange", "id": 36992, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, php, laravel", "url": null }
c++, object-oriented, game, design-patterns, event-handling EventListenerRegister.h #pragma once #include <vector> #include <memory> #include <algorithm> #include "EventListener.h" class EventListenerRegister { public: using ListenerList = std::vector<std::unique_ptr<BaseEventListener>>; template<typename EventType> int registerListenerFor(const typename EventListener<EventType>::EventCallBackFn& callBack) { static int listenerID = 0; std::unique_ptr<BaseEventListener> listener = std::make_unique<EventListener<EventType>>(callBack, listenerID); listeners.push_back(std::move(listener)); return listenerID++; } void unregisterListener(const int listenerID) { listeners.erase( std::remove_if(listeners.begin(), listeners.end(), [listenerID](const auto& listener) { return listenerID == listener->getID(); }), listeners.end() ); } ListenerList::iterator begin() { return listeners.begin(); } ListenerList::const_iterator begin() const { return listeners.begin(); } ListenerList::iterator end() { return listeners.end(); } ListenerList::const_iterator end() const { return listeners.end(); } private: ListenerList listeners; }; EventQueue.h #pragma once #include <queue> #include <memory> #include "MouseEvent.h" #include "KeyboardEvent.h" #include "WindowEvent.h" class EventQueue { public: void push(const std::shared_ptr<Event>& event) { eventQueue.push(event); }
{ "domain": "codereview.stackexchange", "id": 44795, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, object-oriented, game, design-patterns, event-handling", "url": null }
optics, non-linear-optics Title: Topological phase in Laguerre-Gaussian transverse mode Why is the topological phase in a Laguerre-Gaussian transverse mode the sum of orbital angular momenta per photon, and why is it quantized? I take this question to mean: Why do the Laguerre-Gaussian (LG) modes have an $e^{i\ell\phi}$ dependence on the azimuthal coordinate $\phi$? Why is $\ell$ required to be an integer?
{ "domain": "physics.stackexchange", "id": 14034, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, non-linear-optics", "url": null }
A function that is both One to One and Onto is called a Bijective function. Below is a visual description of Definition 12.4. In mathematics, a bijective function or bijection is a function f : A → B that is both an injection and a surjection. If it crosses more than once it is still a valid curve, but is not a function. Each value of the output set is connected to the input set, and each output value is connected to only one input value. A function f : A -> B is said to be an onto function if the range of f is equal to the co-domain of f. How to Prove a Function is Bijective without Using Arrow Diagram? A function is invertible if and only if it is a bijection. Question 1: A bijective function is both injective and surjective, thus it is (at the very least) injective. My examples have just a few values, but functions usually work on sets with infinitely many elements. So we can calculate the range of the sine function, namely the interval $[-1, 1]$, and then define a third function: $$\sin^*: \big[-\frac{\pi}{2}, \frac{\pi}{2}\big] \to [-1, 1].$$ The function f is called one to one and onto, or a bijective function, if f is both a one to one and an onto function. Stated in concise mathematical notation, a function f: X → Y is bijective if and only if it satisfies the condition for every y in Y there is a unique x in X with y = f(x). Thus, if you tell me that a function is bijective, I know that every element in B is “hit” by some element in A (due to surjectivity), and that it is “hit” by only one element in A (due to injectivity). This is equivalent to the following statement: for every element b in the codomain B, there is exactly one element a in the domain A such that f(a)=b. Another name for bijection
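For functions between finite sets, the definitions above translate directly into code; a small illustrative sketch (the example maps are made up):

```python
def is_injective(f: dict) -> bool:
    """Injective: no two inputs map to the same output."""
    return len(set(f.values())) == len(f)

def is_surjective(f: dict, codomain: set) -> bool:
    """Surjective: every codomain element is hit at least once."""
    return set(f.values()) == codomain

def is_bijective(f: dict, codomain: set) -> bool:
    """Bijective: both injective and surjective."""
    return is_injective(f) and is_surjective(f, codomain)

# f maps {1,2,3} onto {'a','b','c'} with each output hit exactly once;
# g is not injective because 1 and 2 collide on 'a'.
f = {1: 'a', 2: 'b', 3: 'c'}
g = {1: 'a', 2: 'a', 3: 'c'}
```

For `f`, every element of the codomain is "hit" by exactly one input, which is precisely the unique-preimage condition stated above.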
{ "domain": "noqood.co", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9783846640860381, "lm_q1q2_score": 0.8131776198706601, "lm_q2_score": 0.8311430562234877, "openwebmath_perplexity": 652.0293769054215, "openwebmath_score": 0.8542974591255188, "tags": null, "url": "https://staff.noqood.co/gillian-flynn-hwmjkyk/what-is-bijective-function-c0afea" }
# Vector whose inner product is positive with every vector in given basis of $\mathbb{R}^n$ I am trying to solve the following question which I came across when studying root system in euclidean spaces, with positive definite symmetric bilinear form. Statement: Given a basis $\{v_1,\cdots, v_n\}$ of $\mathbb{R}^n$, $\exists$ $v\in\mathbb{R}^n$ such that $(v,v_i)>0$ for all $i$
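One standard construction for the Euclidean inner product, sketched numerically: write the basis vectors as the rows of a matrix $B$ and solve $Bv = (1,\dots,1)^\top$; since the rows form a basis, $B$ is invertible, and then $(v, v_i) = 1 > 0$ for every $i$. The Gaussian-elimination solver and the example basis below are only for illustration:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# An arbitrary basis of R^3 (rows of B); solve B v = (1, 1, 1).
basis = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [-1.0, 2.0, 3.0]]
v = solve(basis, [1.0, 1.0, 1.0])
```

By construction every inner product $(v, v_i)$ equals 1, so in particular each is positive, which is the claim in the statement.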
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9867771778588347, "lm_q1q2_score": 0.8800187951128634, "lm_q2_score": 0.8918110540642805, "openwebmath_perplexity": 133.78975763635015, "openwebmath_score": 0.9290722608566284, "tags": null, "url": "https://math.stackexchange.com/questions/2730864/vector-whose-inner-product-is-positive-with-every-vector-in-given-basis-of-mat" }
# Find derivative of function (matrix) Let $f: \mathbb{R}^3 \to \mathbb{R}^2$ satisfy the condition $f(0)=(1,2)$ and $$Df(0)= \left( {\begin{array}{ccc} 1 & 2 & 3 \\ 0 & 1 & 1 \end{array} } \right)$$ Let $g : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $g(x,y)=(x+2y+1,3xy)$. My question is how can I find $D(g \circ f)(0)$? • Using this notation, the chain rule is really easy: $D(g\circ f) = D(g)\circ D(f)$. Mar 14, 2015 at 10:41 • @Arthur i tried this but I was unsure about the calculations, can you please show me? – Lori Mar 14, 2015 at 13:02 \begin{align} f(x,y,z) &= (u(x,y,z),v(x,y,z))\\ f(0,0,0) &= (1,2)\\ g(u,v) &= (u+2v+1,3uv) \end{align} \begin{align} Dg(u,v) &= \left( {\begin{array}{*{20}{c}} {{\partial _u}(u + 2v + 1)}&{{\partial _v}(u + 2v + 1)}\\ {{\partial _u}(3uv)}&{{\partial _v}(3uv)} \end{array}} \right) =\left( {\begin{array}{*{20}{c}} 1&2\\ {3v}&{3u} \end{array}} \right) \end{align}
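Carrying the computation through numerically: by the chain rule, $D(g \circ f)(0) = Dg(f(0)) \cdot Df(0)$, so evaluate $Dg$ at $f(0) = (1,2)$ and multiply by $Df(0)$:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

u, v = 1, 2                           # f(0) = (1, 2)
Dg_at_f0 = [[1, 2], [3 * v, 3 * u]]   # Dg evaluated at (1, 2): [[1, 2], [6, 3]]
Df0 = [[1, 2, 3], [0, 1, 1]]          # Df(0) as given

D_gf_0 = matmul(Dg_at_f0, Df0)
# D(g∘f)(0) = [[1, 4, 5], [6, 15, 21]]
```

The result is a 2×3 matrix, as it must be for a map $\mathbb{R}^3 \to \mathbb{R}^2$.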
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9888419682586786, "lm_q1q2_score": 0.8130151181062171, "lm_q2_score": 0.8221891305219504, "openwebmath_perplexity": 2716.294398789352, "openwebmath_score": 0.9999755620956421, "tags": null, "url": "https://math.stackexchange.com/questions/1189282/find-derivative-of-function-matrix" }
This will result in an answer that does not violate any of the stated constraints. We have 3 fishes in each tank. We have 7 tanks in sector Gamma. We have 5 sharks in sector Gamma. However, this seems like a bit of a kludge. The proper way to go about it is to represent the number of sharks in each sector as a binary array, with only one value set to 1. # Number of sharks in each sector @variable(m, s[i=1:3,j=1:7], Bin) We will have to modify our constraint block accordingly @constraints m begin # Constraint 2 sharks[i=1:3], sum(s[i,:]) == 1 u_sharks[j=1:7], sum(s[:,j]) <=1 # uniqueness # Constraint 4 sum(nt) <= 13 # Constraint 5 s[1,2] == 1 nt[1] == 4 # Constraint 6 s[2,4] == 1 nt[2] == 2 end We invent a new variable array st to capture the number of sharks in each sector. This is simply obtained by multiplying the binary array by the vector $$[1,2,\ldots,7]^\top$$ @variable(m,st[i=1:3],Int) @constraint(m, st.==s*collect(1:7)) We rewrite our last constraint as # Constraints 1 & 3 @NLconstraint(m, st[1]+st[2]+st[3]+n*(nt[1]+nt[2]+nt[3]) == 50) After the model has been solved, we extract our output for the number of sharks. sharks_in_each_sector=getvalue(st) …and we get the correct output. Using a full-blown mixed-integer non-linear optimizer might have been overkill for this problem. It can be solved by a simple table as shown in the video. However, we might not always find ourselves in such a fortunate position. We could also have used a mixed-integer quadratic programming solver such as Gurobi, which would be more efficient for that sort of problem. Given the small problem size, efficiency hardly matters here. 06/12/17
{ "domain": "perfectionatic.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9664104904802131, "lm_q1q2_score": 0.8512115437020169, "lm_q2_score": 0.8807970858005139, "openwebmath_perplexity": 3075.453846992643, "openwebmath_score": 0.4335174560546875, "tags": null, "url": "https://perfectionatic.org/?tag=julia" }
java, object-oriented @Override public String toString() { return "ComputerPlayActivityTable"; } } Generally, it looks good (model wise). I have a few remarks: Number of students? You have 50 students, but 49 places for tables. Is this intentional? Student I would create a Student class. Experience taught me that every time I used a simple String to represent a domain object, I ended up changing it to a full class later on. Just start out with the basics (String name, hashCode() and equals()) public class Student { private final String name; public Student(String name) { if (name == null) throw new IllegalArgumentException("'name' cannot be 'null'"); this.name = name; } @Override public int hashCode() { return name.hashCode(); } @Override public boolean equals(Object other) { return other instanceof Student && name.equals(((Student) other).name); } } Error / Exception flow There is no way to tell if the assignment went wrong, except from the message that goes to the standard out. You could return a result, for example a simple boolean indicating if the assignment was successful. Alternatively, you could throw a custom Exception when it is impossible to assign the students. It is not the task of the Assigner to do error logging/printing. Overengineering? I think your TableController and RandomTableGetter are a bit overengineered. What is their added value? You can implement a Collection (List, or Set) of ActivityTable and have a getRandomAvailableTable() method in the ActivityTableAssigner. Alternate behaviour can be coded in a different ActivityTableAssigner. The RandomTableGetter is also doing full-table-checks, so a better name might be RandomAvailableTableGetter. Assignment algorithm I would solve the assignment by generating a permutation instead of randomly trying to assign students to tables. You could for example do this: Collections.shuffle(students) The collection of students will be shuffled and you can assign them simply by iterating the collection, because the order is already random. 
Better OO modelling of ActivityTable Consider your constructor: public ActivityTable(int chairs) { this.totalChairs = chairs; }
{ "domain": "codereview.stackexchange", "id": 23248, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, object-oriented", "url": null }