Then we explore additional applications by looking at Half-Life, Compound Interest and Logistic Functions. The basic exponential function is f(x) = b^x, where b is a constant called the base. Graphing Rational Functions 23. 9(C) - write exponential functions in the form f(x) = ab^x (where b is a rational number) to describe problems arising from mathematical. Grade 11 maths Here is a list of all of the maths skills students learn in grade 11! These skills are organised into categories, and you can move your mouse over any skill name to preview the skill. Derivatives of Exponential, Trigonometric, and Logarithmic Functions Exponential, trigonometric, and logarithmic functions are types of transcendental functions; that is, they are non-algebraic and do not follow the typical rules used for differentiation. Exponential and Logarithmic Functions Worksheets October 3, 2019 August 28, 2019 Some of the worksheets below are Exponential and Logarithmic
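The half-life application mentioned above can be sketched in a few lines of code; the numbers below (a half-life of 5730 units, echoing carbon-14) are illustrative assumptions, not taken from the text:

```python
def remaining(initial, half_life, t):
    """Amount left after time t, using the exponential model
    f(t) = a * b**t with base b = (1/2)**(1/half_life)."""
    return initial * 0.5 ** (t / half_life)

# After exactly one half-life, half of the initial amount remains;
# after three half-lives, one eighth remains.
amount_after_one = remaining(100.0, 5730.0, 5730.0)
amount_after_three = remaining(80.0, 2.0, 6.0)
```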
{ "domain": "umood.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.981735720045014, "lm_q1q2_score": 0.8511999843866641, "lm_q2_score": 0.8670357683915538, "openwebmath_perplexity": 944.1857117142524, "openwebmath_score": 0.4596410393714905, "tags": null, "url": "http://pziq.umood.it/exponential-functions-game.html" }
c#, performance, programming-challenge, array Title: Distribute items over array in order to minimize the difference between min and max array values I came across this problem in a programming challenge a few days ago. I came up with the implementation below; however, it resulted in a "time limit exceeded" failure for a few of the test cases. The questions were unfortunately not made available after the challenge was over, but it went something like this:
{ "domain": "codereview.stackexchange", "id": 43137, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, performance, programming-challenge, array", "url": null }
human-biology, molecular-biology, cell-biology Title: What's the mixture of plasma and haemoglobin called? I know of oxyhaemoglobin, but what is the mixture of plasma and haemoglobin in the blood called? You and Dr. James must be friends ;) There is no specific name for the mixture of plasma and haemoglobin; however, when hemoglobin is found in the plasma (i.e. not in blood cells) it is usually referred to as "free hemoglobin." This is a term you are most likely to encounter when dealing with plasma/serum hemoglobin testing. Additionally, when haemoglobin is broken down, it forms bilirubin, which contributes to the color of plasma (there is no specific name for the mixture of bilirubin and plasma either, except perhaps "plasma bilirubin").
{ "domain": "biology.stackexchange", "id": 5156, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "human-biology, molecular-biology, cell-biology", "url": null }
reinforcement-learning, definitions, markov-decision-process, markov-chain, ergodicity This leads me to believe that the second definition (the stricter one) is the most appropriate one, considering the ergodicity definition in an MDP derives from the definition in a Markov chain. As an MDP is basically a Markov chain with choice (actions), ergodicity should mean that independently of the action taken, all states are visited, i.e., all policies ensure ergodicity. Am I correct in assuming these are different definitions? Can both still be called "ergodicity"? If not, which one is the most correct? In short, the relevant class of MDPs that guarantees the existence of a unique stationary state distribution for every deterministic stationary policy is that of unichain MDPs (Puterman 1994, Sect. 8.3). However, the unichain assumption does not mean that every policy will eventually visit every state. I believe your confusion arises from the difference between unichain and the more constrained ergodic MDPs. Puterman defines that an MDP is (emphasis in the following mine):
{ "domain": "ai.stackexchange", "id": 2750, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reinforcement-learning, definitions, markov-decision-process, markov-chain, ergodicity", "url": null }
c#, asynchronous, rubberduck

    Clipboard.SetText(text);
}

private string FormatResultForClipboard(ICodeInspectionResult result)
{
    var module = result.QualifiedSelection.QualifiedName;
    return string.Format(
        "{0}: {1} - {2}.{3}, line {4}",
        result.Severity,
        result.Name,
        module.Project.Name,
        module.Component.Name,
        result.QualifiedSelection.Selection.StartLine);
}

private int _issues;

private void OnIssuesFound(object sender, InspectorIssuesFoundEventArg e)
{
    Interlocked.Add(ref _issues, e.Issues.Count);
    Control.Invoke((MethodInvoker) delegate
    {
        var newCount = _issues;
        Control.SetIssuesStatus(newCount);
    });
}
{ "domain": "codereview.stackexchange", "id": 13587, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, asynchronous, rubberduck", "url": null }
computability, turing-machines explain. Typically, printing a string is a side effect, hence not functional. If instead of printing you were considering just producing the string "Hello world", for whatever purpose, then you would have a functional program meeting a degenerate case of the above schema, because there is no input. The problem reduces to: produce the string "Hello world", and corresponds to the specification $\exists O, O=\text{"Hello world"}$. The lack of input makes the universal quantifier unneeded. But printing is something else, as is network reconfiguration. They are not directly functional operations. Now, it is possible to simulate non-functional structures in a functional context by building appropriate domains of computation. But this often makes things a lot more complex. And the understanding I tried to give is no longer so obvious. Dealing with these issues has been one of the main purposes of denotational semantics. Much of the research in language design is
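To make the distinction concrete, here is a minimal sketch (in Python, purely for illustration): producing the string is functional, while printing it is the side effect.

```python
def hello() -> str:
    # Pure: no input, same output on every call - it meets the
    # degenerate specification "there exists O with O = 'Hello world'".
    return "Hello world"

def print_hello() -> None:
    # Impure: the returned value is uninteresting; the printing
    # (a side effect) is the whole point.
    print(hello())
```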
{ "domain": "cs.stackexchange", "id": 4093, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computability, turing-machines", "url": null }
javascript, dom

    while (currentElement) {
        if (currentElement.classList.contains(className)) {
            return currentElement;
        }
        currentElement = currentElement.parentElement;
    }
    return null;
}

Of course, since you're extending the Element prototype you could be clever and make it recursive. It looks nice, but I'd probably stick to the simpler loop myself:

Element.prototype.getAncestorByClassName = function(className) {
    if (this.classList.contains(className)) {
        return this;
    }
    if (!this.parentElement) {
        return null;
    }
    return this.parentElement.getAncestorByClassName(className);
}
{ "domain": "codereview.stackexchange", "id": 24299, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, dom", "url": null }
gradient-descent, learning-rate Title: Constant Learning Rate for Gradient Descent Suppose we have a learning rate $\alpha_n$ for the $n^{th}$ step of the gradient descent process. What would be the impact of using a constant value for $\alpha_n$ in gradient descent? Intuitively, if $\alpha$ is too large you may "shoot over" your target and end up bouncing around the search space without converging. If $\alpha$ is too small your convergence will be slow and you could end up stuck on a plateau or in a local minimum. That's why most learning rate schemes start with somewhat larger learning rates for quick gains and then reduce the learning rate gradually.
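Both failure modes are easy to demonstrate on a toy objective; the quadratic f(x) = x² and the specific rates below are illustrative choices of mine, not from the question:

```python
def gradient_descent(alpha, steps=100, x0=10.0):
    """Minimize f(x) = x**2 (gradient f'(x) = 2x) with a constant rate."""
    x = x0
    for _ in range(steps):
        x -= alpha * 2 * x
    return x

small_rate = abs(gradient_descent(0.1))   # converges toward the minimum at 0
large_rate = abs(gradient_descent(1.1))   # overshoots each step and diverges
```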
{ "domain": "datascience.stackexchange", "id": 4563, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gradient-descent, learning-rate", "url": null }
- Thanks for the reference - so it was Gauss after all. The Gauss process is basically what I do, although I keep an eye out for any possible shortcuts, as in the example I gave. –  Old John Jul 24 '12 at 15:21 @mathh As the linked post says, Gauss's algorithm requires a prime modulus. Generally, modular fractions make sense only for denominators coprime to the modulus. Thus when scaling fractions we must restrict to scale factors coprime to the modulus; e.g. in your case we can do ${\rm mod}\ 10\!:\ \dfrac{1}{3}\equiv \dfrac{3}{9}\equiv \dfrac{3}{-1} \equiv -3\equiv 7$ –  Bill Dubuque Aug 18 '14 at 16:44 I'm sorry, I deleted the comment before you replied since I found it out. I would add to the above explanations that the denominators can be reduced if and only if the modulus is coprime to them. –  user314 Aug 18 '14 at 16:51
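The fraction manipulation in these comments can be checked mechanically; a small sketch, assuming Python 3.8+, where the three-argument pow(den, -1, n) computes a modular inverse for denominators coprime to the modulus:

```python
from math import gcd

def mod_frac(num, den, n):
    """num/den modulo n, defined only when gcd(den, n) == 1."""
    if gcd(den, n) != 1:
        raise ValueError("denominator must be coprime to the modulus")
    return num * pow(den, -1, n) % n

# mod 10: 1/3 is 7, matching 1/3 = 3/9 = 3/(-1) = -3 = 7 (mod 10)
one_third_mod_10 = mod_frac(1, 3, 10)
```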
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9740426412951847, "lm_q1q2_score": 0.8336355135381395, "lm_q2_score": 0.8558511488056151, "openwebmath_perplexity": 484.1728047110517, "openwebmath_score": 0.8980107307434082, "tags": null, "url": "http://math.stackexchange.com/questions/174676/solving-simple-congruences-by-hand" }
php, mysql

And this next class is simply a little class for my laziness of remembering the DSN of MySQL and the port and all for PDO objects:

class dbCon extends PDO {
    public function __construct($host, $port, $user, $pass, $dbName = null) {
        $pdo_options[PDO::ATTR_ERRMODE] = PDO::ERRMODE_EXCEPTION;
        $dsn = 'mysql:host='.$host.';port='.$port.';';
        if (!is_null($dbName))
            $dsn .= 'dbname='.$dbName;
        try {
            parent::__construct($dsn, $user, $pass, $pdo_options);
        } catch (PDOException $e) {
            die("Error Connecting To Database: ".$e->getMessage());
        }
    }
}

Technically on any page I can access the cart with only 2 lines of code:

Checkout::$dbCon = new dbCon("127.0.0.1", 3306, "UserName", "Password", "DatabaseName");
$Checkout = new Checkout();

Maybe there is some security issue that I'm not thinking about, or perhaps I should do something different. Let me know what you think.

Database
{ "domain": "codereview.stackexchange", "id": 7962, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, mysql", "url": null }
c++, performance, c++17

std::vector<std::vector<int>> SolvePuzzle(const std::vector<int> &clues) {
    return SolvePuzzle(clues, std::vector<std::vector<int>>{}, 0);
}

} // namespace codewarsbacktracking

Use more efficient containers

There are several areas where your code can be improved by changing the containers you are using to hold the data:

Don't use nested std::vectors for 2D arrays

Instead of std::vector<std::vector<Something>>, use a std::vector<Something>. Make sure the size is big enough to hold the same number of elements of course. To look up the element at coordinates x, y of an N * N vector, use [x + y * N] as the array index.

Store positions in a single std::size_t

Instead of passing x and y coordinates separately, consider passing an index into the flattened vector as described above. This saves a variable, and sometimes some calculations as well. For example:

if (x == size) {
    x = 0;
    y++;
}
if (y == size) {
    return true;
}
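The same flattened-index trick is easy to prototype in any language; here is a tiny Python sketch (the 4×4 grid is a hypothetical example) of the [x + y * N] layout:

```python
N = 4                    # side length of a hypothetical N x N grid
grid = [0] * (N * N)     # one flat list instead of a list of lists

def idx(x, y):
    """Row-major index of cell (x, y) in the flattened layout."""
    return x + y * N

grid[idx(2, 3)] = 42     # the same cell a nested structure would call grid[3][2]
```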
{ "domain": "codereview.stackexchange", "id": 40845, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, c++17", "url": null }
neuroscience, brain, electrophysiology, measurement, eeg Title: What is the significance of the amplitude of brain waves? What does the amplitude of brain waves represent, and to what neuronal activities is this amplitude related? For example, in a hypothetical situation, the frequency of brain waves is kept the same, but the wave amplitude is increased or decreased by some means. What would be the effect on the human brain and its activities? Can resonance related to waves affect human brain waves? Brain waves are a colloquial term for EEG recordings. EEG recordings are gross potential recordings; in other words, they represent the responses of thousands of neurons together. Much of the background is stochastically determined, meaning it represents, basically, random activity. In other words, oftentimes neurons fire stochastically and cancel out each other's contributions to the EEG, resulting in random ('white') noise.
{ "domain": "biology.stackexchange", "id": 10569, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neuroscience, brain, electrophysiology, measurement, eeg", "url": null }
polynomial-time, heuristics, packing, greedy-algorithms

EDIT: I implemented the following heuristic and also the optimal version using integer programming. The average ratio $a_{heuristic}/a_{optimal}$ is 1.02 (worst case was 1.23) over 200 runs of the simulation for 20 objects, with a varying number of bins and randomly generated object and bin sizes.

The heuristic is as follows:

$(1)$ Sort objects from largest to smallest using a priority queue. Set $a = 1$. Let $U_i$ be the total size of all objects in $B_i$.

$(2)$ Remove the biggest object and call it $O$.

$(3)$ Over all bins, find the $1 \le i \le m$ for which $(U_i+O)/B_i$ is least. Add $O$ to $B_i$ and set $U_i = U_i + O$.

$(4)$ $a = \max(a, U_i/B_i)$.

$(5)$ Go to $(2)$ if there are more objects.

One simple "bad" input that needs to be considered for worst-case analysis of this problem is as follows. Let $c=(\sqrt{17}-1)/2 \approx 1.56$. There are three objects of size $c$, $1$, and $1$. There are two bins of size $2$ and $c$. Initially $a=1$.
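Steps (1)-(5) transcribe directly into code; a sketch in Python, using a heap for the priority queue (function and variable names are mine):

```python
import heapq

def greedy_pack(objects, bins):
    """Greedy heuristic: repeatedly place the largest remaining object
    into the bin minimizing (U_i + O) / B_i; return the final factor a."""
    load = [0.0] * len(bins)                 # U_i for each bin
    heap = [-o for o in objects]             # (1) max-heap via negation
    heapq.heapify(heap)
    a = 1.0
    while heap:                              # (5) repeat while objects remain
        obj = -heapq.heappop(heap)           # (2) biggest object O
        i = min(range(len(bins)),
                key=lambda j: (load[j] + obj) / bins[j])  # (3) best bin
        load[i] += obj
        a = max(a, load[i] / bins[i])        # (4) track the overload factor
    return a

# The "bad" input from the text: c = (sqrt(17) - 1) / 2
c = (17 ** 0.5 - 1) / 2
a_bad = greedy_pack([c, 1.0, 1.0], [2.0, c])
```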
{ "domain": "cstheory.stackexchange", "id": 3567, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "polynomial-time, heuristics, packing, greedy-algorithms", "url": null }
algorithms, machine-learning, data-mining, statistics Tricky details There are a few things about your specific problem that might be a bit tricky. One tricky aspect is the business about apps, and how to code that into a linear regression framework. One way is to pick a few categories of apps, and then have a separate indicator input variable for each category (0 if that app is not present, 1 if it is present). However, you don't want to have too many input variables, because that will require a lot more data to reliably form a linear model. A few dozen app categories might be fine, but thousands of input variables for the thousands of different possible apps probably would not be.
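The indicator coding described above can be sketched as follows (the category names and helper function are hypothetical):

```python
# A few hypothetical app categories; each contributes one 0/1 input variable.
CATEGORIES = ["games", "finance", "social"]

def encode_apps(installed):
    """Indicator features: 1 if the user has any app in the category, else 0."""
    return [1 if cat in installed else 0 for cat in CATEGORIES]

features = encode_apps({"games", "social"})  # -> [1, 0, 1]
```

With a few dozen categories this stays tractable for a linear model; one column per individual app would not.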
{ "domain": "cs.stackexchange", "id": 2927, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, machine-learning, data-mining, statistics", "url": null }
catkin, ros-lunar, ubuntu, ros-indigo Originally posted by DHCustomPak with karma: 70 on 2018-01-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2018-01-10: Thanks for reporting back. Not adhering to naming conventions should definitely not be a fatal error or cause Catkin to hang. If you can create an MWE and report it here that would be great.
{ "domain": "robotics.stackexchange", "id": 29057, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "catkin, ros-lunar, ubuntu, ros-indigo", "url": null }
Here are three theorems that are duals to the conjectures. Theorem 1 Let $X$ be a space. The product space $X \times Y$ is normal for every discrete space $Y$ if and only if $X$ is normal. Theorem 2 Let $X$ be a space. The product space $X \times Y$ is normal for every metrizable space $Y$ if and only if $X$ is a normal P-space. Theorem 3 Let $X$ be a space. The product space $X \times Y$ is normal for every metrizable $\sigma$-locally compact space $Y$ if and only if $X$ is normal countably paracompact. The key words in red are for emphasis. In each of these three theorems, if we switch the two key words in red, we would obtain the statements for the conjectures. In this sense, the conjectures are called duality conjectures since they are duals of known results.
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9843363499098282, "lm_q1q2_score": 0.8366575264457774, "lm_q2_score": 0.8499711775577736, "openwebmath_perplexity": 148.87961063114636, "openwebmath_score": 0.9994206428527832, "tags": null, "url": "https://dantopology.wordpress.com/tag/point-set-topology/" }
Is this correct? If so, given the context of the exercise, how could I make my answer more acceptable? - +1 for showing your work. –  Arturo Magidin Nov 30 '10 at 20:50

To write that the matrix is the zero matrix, you should write "let $a_{ij}=0$ for all $i$ and $j$", not "$a_{ij}\in 0$". (Nothing is an element of $0$.)

For (b): No, notice that the $n$ is fixed. You are only considering symmetric matrices of a fixed size. If $n=2$, then you only consider $2\times 2$ matrices; if $n=3$, then you only consider $3\times 3$ matrices. You never consider both $2\times 2$ and $3\times 3$ matrices at the same time. $M(n,n,\mathbb{R})$ means:

• Matrices (that's the "$M$");
• with $n$ rows (that's the first $n$);
• with $n$ columns (that's the second $n$);
• and each entry is a real number (that is the $\mathbb{R}$).
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9777138144607744, "lm_q1q2_score": 0.8016210452097711, "lm_q2_score": 0.819893340314393, "openwebmath_perplexity": 132.85686149007262, "openwebmath_score": 0.9307544827461243, "tags": null, "url": "http://math.stackexchange.com/questions/12531/show-that-the-set-of-all-symmetric-real-matrices-is-a-subspace-determine-the-d?answertab=oldest" }
notation For $n=1$, this formulation is the way to go because it can be solved with power series
{ "domain": "physics.stackexchange", "id": 18063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "notation", "url": null }
java, beginner, inheritance, polymorphism

public class AssemblyWorker<P extends Product> {

    private final Function<? super PropertyHelper, ? extends P> productFactory;
    private final Function<? super ProductBox<P>, ? extends Receiver> receiverFactory;
    private final ProductBox<P> productBox = new ProductBox<>();

    public AssemblyWorker(Function<? super PropertyHelper, ? extends P> productFactory,
                          Function<? super ProductBox<P>, ? extends Receiver> receiverFactory) {
        this.productFactory = productFactory;
        this.receiverFactory = receiverFactory;
    }

    void startAssemblyLine(String text) {
        Arrays.stream(text.split("\\n"))           // stream line by line
              .map(LineParser.TO_PROPERTY_HELPER)  // parse the line
              .map(productFactory)                 // create the product
              .forEach(productBox::addProduct);    // add the product to the box
    }

    public Receiver deliverProduct() {
        return receiverFactory.apply(productBox);
    }
}
{ "domain": "codereview.stackexchange", "id": 23699, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner, inheritance, polymorphism", "url": null }
mathematical-models, theoretical-biology, sex-chromosome For opposing selection with arbitrary dominance, Kidwell parameterizes the heterozygous fitness components as $w_{m2} = 1-h_ms_m$, and $w_{f2} = 1-h_fs_f$. Notice that $h$ is modifying the fitness difference $s$ in the heterozygotes. This is due to incomplete dominance of one allele over the other. If both alleles in a heterozygote contribute equally to the phenotype, then $h = 0.5$, and you have the additive fitness described above ($w = 1-.5s$). If one allele is incompletely dominant over the other, the selective difference $s$ will be modified by some amount, $h$. If A$_1$ is only slightly dominant over A$_2$, (say, $h=0.6$), the effect on $w$ won't be much different than if $h=0.5$. Both alleles will affect fitness but the more dominant allele will have a slightly greater effect on fitness than the other allele. If one allele is much more dominant, then $h$ becomes larger and the more dominant allele contributes more to the overall fitness than the other allele.
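Numerically, the near-additive case is easy to see (the selection coefficient s = 0.2 below is an arbitrary illustration, not from the text):

```python
def het_fitness(h, s):
    """Heterozygote fitness w = 1 - h*s in Kidwell's parameterization."""
    return 1 - h * s

w_additive = het_fitness(0.5, 0.2)   # additive case: w = 1 - 0.5*s = 0.90
w_slight = het_fitness(0.6, 0.2)     # slight dominance barely moves w: 0.88
```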
{ "domain": "biology.stackexchange", "id": 2793, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mathematical-models, theoretical-biology, sex-chromosome", "url": null }
c++, algorithm, time-limit-exceeded

if(departureA == -1 || departureB == -1)
    continue;

If the first schedule.lookup(...) for departureA returns -1, then you don't need to do the second lookup.

int departureA = schedule.lookup(trains.give(a).getId(), stations.give(c).getId(), 1);
if(departureA == -1)
    continue;
int departureB = schedule.lookup(trains.give(b).getId(), stations.give(c).getId(), 1);
if(departureB == -1)
    continue;

Do the same in the inner loop for arrivalA and arrivalB.

Even earlier:

for(int b = 0; b < trains.length(); b++){
    for(int a = 0; a < trains.length(); a++){
        if(a == b) {
            continue;
        } // A train can't overtake itself.
{ "domain": "codereview.stackexchange", "id": 23770, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, algorithm, time-limit-exceeded", "url": null }
python, python-3.x, tkinter

    def step_forward(self):
        if self._done:
            return
        if self.activated:
            self.button['text'] = ''
            self.button['bd'] = 0
            self.button['bg'] = '#F0F0F0'
            self.button['state'] = 'disabled'
            if random.randint(0, 1):
                self.label['text'] = 'Accomplished'
                self.label['fg'] = 'green'
            else:
                self.label['text'] = 'Failed'
                self.label['fg'] = 'red'
            self._done = True

def combine(*callables):
    def runner():
        for function in callables:
            function()
    return runner

if __name__ == '__main__':
    root = tk.Tk()
    main = tk.Button(root, text="Show/Hide", bg="white", font="courier 30")
    main.pack()
    frame = tk.Frame(root, pady=10)
    frame.pack()
{ "domain": "codereview.stackexchange", "id": 22752, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, tkinter", "url": null }
I think that $$f$$ satisfies the equation iff $$f(x) = mx + c$$, i.e., $$f$$ is linear, for some $$m,c \in \mathbb{R}$$ Proof If $$x=y$$ then the equation is trivially satisfied, so let $$x = y+a$$ with $$a \neq 0$$. Then the functional equation becomes $$(y+a)f(y+a) - yf(y) = a f(2y+a)$$ Furthermore, if we let $$x=a$$ in the original equation and rearrange, we get $$(y-a)f(y+a) - yf(y) = -af(a)$$ Subtracting the second equation from the first gives $$2af(y+a) = af(2y+a) + af(a)$$ and dividing by $$a$$ and rearranging gives $$f(2y + a) - f(y+a) = f(y+a) - f(a)$$ and this holds for all values of $$y$$ and $$a \neq 0$$. In particular, setting $$y=1$$ gives $$f(a+2) - f(a+1) = f(a+1) - f(a)$$ and it follows that $$f(a+N) - f(a) = N(f(a+1) - f(a))$$ for all integers $$N$$, and hence $$f(a+N) = m(a+N) + c$$ where $$m = f(a+1)-f(a)$$ and $$c = f(a) - a(f(a+1)-f(a))$$.
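A quick numeric sanity check of the claimed solution: with $x = y+a$, the functional equation reads $x f(x) - y f(y) = (x-y) f(x+y)$, and any linear $f(t) = mt + c$ should make the residual vanish (the slope, intercept, and test points below are arbitrary choices of mine):

```python
def f(t, m=3.0, c=-2.0):
    # A linear candidate f(t) = m*t + c with arbitrary m, c.
    return m * t + c

def residual(x, y):
    """x f(x) - y f(y) - (x - y) f(x + y); zero iff the equation holds."""
    return x * f(x) - y * f(y) - (x - y) * f(x + y)

checks = [residual(x, y) for x, y in [(1.0, 2.0), (-3.5, 0.25), (7.0, 7.0)]]
```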
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9802808718926534, "lm_q1q2_score": 0.8312465735194738, "lm_q2_score": 0.8479677583778257, "openwebmath_perplexity": 127.44799670002323, "openwebmath_score": 0.97682785987854, "tags": null, "url": "https://puzzling.stackexchange.com/questions/99019/what-should-you-substitute" }
complexity-theory, time-complexity, np-hard, approximation This allows us to define what an optimal solution is: Let $y_{opt}\in L(x)$ be the optimal solution of an instance $x\in X$ of an optimization-problem $O=(X,L,f,opt)$ with $$f(x,y_{opt})=opt\{f(x,y')\mid y'\in L(x)\}.$$ The optimal solution is often denoted by $y^*$. Now we can define the class NPO: Let $NPO$ be the set of all optimization-problems $O=(X,L,f,opt)$ with:
{ "domain": "cs.stackexchange", "id": 1803, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, time-complexity, np-hard, approximation", "url": null }
quantum-mechanics, forces, nuclear-physics, physical-chemistry Title: Reverse shielding effect? It is known that the shielding effect causes a reduction in the nuclear attraction of electrons; then won't the valence electrons cause a repulsive force on the inner-shell electrons and cause a stronger 'attraction' to the nucleus? The positions of electrons in atoms follow probability distributions. Even the outer-shell electrons have probabilities of being closer to the nucleus than the inner-shell electrons. But in atoms where screening is noticeable, we have a large number of inner electrons that have a probability of being closer to the nucleus than the few valence electrons that are being screened. That is, a great majority of the time we have an inner region of electrons that form a spherically symmetric region of charge, so that any repulsive force from outer electrons is unnoticeable.
{ "domain": "physics.stackexchange", "id": 83459, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, forces, nuclear-physics, physical-chemistry", "url": null }
c#, playing-cards

        player.ClearBet();
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("Player Busts");
        break;
    case RoundResult.PLAYER_BLACKJACK:
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine("Player Wins " + player.WinBet(true) + " chips with Blackjack.");
        break;
    case RoundResult.DEALER_WIN:
        player.ClearBet();
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("Dealer Wins.");
        break;
    case RoundResult.SURRENDER:
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("Player Surrenders " + (player.Bet / 2) + " chips");
        player.Chips += player.Bet / 2;
        player.ClearBet();
        break;
{ "domain": "codereview.stackexchange", "id": 43543, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, playing-cards", "url": null }
field-theory, dirac-equation, complex-numbers, majorana-fermions For the electron field you can write $\Psi_e = \phi_e + i \psi_e$ where $\Psi_e \in {\mathbb C}$ and $\phi_e , \psi_e \in {\mathbb R}$. $\Psi_e$ creates electrons and destroys positrons and $\Psi_e^*$ creates positrons and destroys electrons. To answer your other question about whether we can write $\phi_e + i \psi_\tau$, we have to go a bit deeper. In QFT, we like to describe fields as objects which have some "nice" properties. For instance, the fields $\Psi_e, \phi_e , \psi_e$ all have the same mass $m_e$, which is to say, they all satisfy the same differential equation $( \Box + m_e^2 ) \Psi_e = 0$ (and similarly $( \Box + m_e^2 ) \phi_e = 0$, etc.). We can therefore combine these 3 fields in whichever way we want. On the other hand, $\Psi_\tau$, $\phi_\tau$ and $\psi_\tau$ satisfy a different equation, namely $(\Box + m_\tau^2 ) \Psi_\tau = 0$, etc., and we can also combine these three fields however we want.
{ "domain": "physics.stackexchange", "id": 80458, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "field-theory, dirac-equation, complex-numbers, majorana-fermions", "url": null }
reinforcement-learning, function-approximation, features Title: How can $\nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$ be 1 for $S_{t}$ 's group's component and 0 for the other components? In Sutton's RL:An introduction 2nd edition it says the following(page 203): State aggregation is a simple form of generalizing function approximation in which states are grouped together, with one estimated value (one component of the weight vector w) for each group. The value of a state is estimated as its group's component, and when the state is updated, that component alone is updated. State aggregation is a special case of SGD $(9.7)$ in which the gradient, $\nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$, is 1 for $S_{t}$ 's group's component and 0 for the other components.
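A minimal sketch of that gradient may help (the grouping of ten states into two groups of five is a made-up example, not from Sutton's book):

```python
def v_hat(state, w, group_of):
    """State-aggregation estimate: the state's group's component of w."""
    return w[group_of(state)]

def grad_v_hat(state, w, group_of):
    """Gradient w.r.t. w: 1 for the state's group, 0 for every other component."""
    g = [0.0] * len(w)
    g[group_of(state)] = 1.0
    return g

group_of = lambda s: s // 5          # states 0-4 -> group 0, states 5-9 -> group 1
w = [0.25, -0.5]
g = grad_v_hat(7, w, group_of)       # only the component for group 1 is 1
```

The SGD update (9.7) therefore touches only that single component, which is exactly why "that component alone is updated".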
{ "domain": "ai.stackexchange", "id": 1123, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reinforcement-learning, function-approximation, features", "url": null }
beginner, c, multithreading, time-limit-exceeded, mathematics

Whoops. The if doesn't get checked, and you don't get a warning (in older compiler versions; new ones do warn about possible whitespace issues). If you declare your variables later (e.g. C99-style), errors like that cannot happen (although it introduces possible shadowing):

for (long int a = 1; a < 100000; a++)
    for (long int b = 1; b < 300000; b++)
        for (long int c = 1; c < 500000; c++)
            for (long int d = 1; d < 500000; d++)
                printf("inner loop");

if (prop(a, b, c, d))
    printf("FOUND IT!\na = %ld\nb = %ld\nc = %ld\nd = %ld\n", a, b, c, d);
{ "domain": "codereview.stackexchange", "id": 22728, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, c, multithreading, time-limit-exceeded, mathematics", "url": null }
ros-melodic

    hardware_interface::JointStateHandle state_handle_b("joint2", &pos[1], &vel[1], &eff[1]);
    jnt_state_interface.registerHandle(state_handle_b);

    hardware_interface::JointStateHandle state_handle_c("joint3", &pos[2], &vel[2], &eff[2]);
    jnt_state_interface.registerHandle(state_handle_c);

    registerInterface(&jnt_state_interface);

    // connect and register the joint position interface
    hardware_interface::JointHandle pos_handle_a(jnt_state_interface.getHandle("joint1"), &cmd[0]);
    jnt_pos_interface.registerHandle(pos_handle_a);

    hardware_interface::JointHandle pos_handle_b(jnt_state_interface.getHandle("joint2"), &cmd[1]);
    jnt_pos_interface.registerHandle(pos_handle_b);

    hardware_interface::JointHandle pos_handle_c(jnt_state_interface.getHandle("joint3"), &cmd[2]);
    jnt_pos_interface.registerHandle(pos_handle_c);

    registerInterface(&jnt_pos_interface);
}
{ "domain": "robotics.stackexchange", "id": 34662, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic", "url": null }
python, performance, object-oriented, python-2.x, excel

# 0 1 2 3 4 5
# studentsentries    module name    date,time    IP address    name student    activity type (full)    activity name
# xlout    name student, time    activity name    activity type (short)    time since previous question    xl file colour row
# valid quizzes    name student    activity name    number of question answered in quiz.
# done quizzes    name student    activity name
# done quizzes    name student    activity name
# list of quizzes    activity name
# inactivestudents    name student
{ "domain": "codereview.stackexchange", "id": 8081, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, object-oriented, python-2.x, excel", "url": null }
Also, I stumbled upon this answer according to which $e$ has another unique property: $$e^x\ge x+1,\quad\text{for all }x$$ which again $\pi$ seems to fulfill too. So what exactly am I missing here? • what is "all" $x$? Do you notice that $x^e$ is not necessarily real for $x<0$? – user251257 Dec 30 '16 at 19:06 • @user251257 I really don't know much about all this... Yeah the video says any positive number... My bad I missed it... hold on, I'll fix it. – Farhan Anam Dec 30 '16 at 19:08 • As a counterexample to your claim that this works for $\pi$, $\pi^3 \ngtr 3^{\pi}$. – DooplissForce Dec 30 '16 at 19:09 • @DooplissForce I see... exactly what I was looking for. Thanks. But what exactly makes $e$ special? Just lying between $2$ and $3$ shouldn't make it special, should it? And for that matter, $\pi$ is irrational too... just like $e$. – Farhan Anam Dec 30 '16 at 19:12 [Corrected]
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.980280873044828, "lm_q1q2_score": 0.8104097529701464, "lm_q2_score": 0.8267117876664789, "openwebmath_perplexity": 174.9425276717116, "openwebmath_score": 0.8426945805549622, "tags": null, "url": "https://math.stackexchange.com/questions/2077667/how-is-e-the-only-number-n-for-which-nx-xn-is-satisfied-for-all-values" }
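Both claims in the exchange above — the $\pi$ counterexample and the inequality $e^x \ge x+1$ — are easy to check numerically (a quick sketch; the sample points are arbitrary):

```python
import math

# pi fails the analogous inequality: pi^3 < 3^pi, so pi^x >= x^pi
# does not hold for all x > 0, as DooplissForce's counterexample says
assert math.pi ** 3 < 3 ** math.pi

# e^x >= x + 1 holds for every real x (equality exactly at x = 0)
for x in (-5.0, -1.0, -0.5, 0.0, 0.5, 1.0, 5.0):
    assert math.exp(x) >= x + 1
```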
xgboost, grid-search first_tp_detected = False while first_tp_detected==False: for index in true_predicted_tuples.index: # TRUE POSITIVE condition: if ((true_predicted_tuples.iloc[index]['y_true']==1)&(true_predicted_tuples.iloc[index]['y_predicted']==1)): tp_savings += return_true_positive_savings(true_predicted_tuples.iloc[index]['days_till_slag']) break first_tp_detected = True fp_number = len(true_predicted_tuples[(true_predicted_tuples.y_true==0)&(true_predicted_tuples.y_predicted==1)]) fn_number = len(true_predicted_tuples[(true_predicted_tuples.y_true==1)&(true_predicted_tuples.y_predicted==0)]) final_cost = ((costs_dict['fp_cost'])*fp_number) + ((costs_dict['fn_cost'])*fn_number) - tp_savings score = final_cost_custom_function print('score en evaluate_model_with_slag_days', score) # append scores scores.append(score) histories.append(history) k = k + 1
{ "domain": "datascience.stackexchange", "id": 7535, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "xgboost, grid-search", "url": null }
python, beginner, object-oriented, python-3.x Title: Object-oriented student library 2 Follow up of Object-oriented student library Question: How do you refactor this code so that it is pythonic, follows OOP, reads better and is manageable? How can I write name functions and classes better? How do you know which data structure you need to use so as to manage data effectively? from collections import defaultdict from datetime import datetime, timedelta class StudentDataBaseException(Exception): pass class NoStudent(StudentDataBaseException): pass class NoBook(StudentDataBaseException): pass """To keep a record of students who have yet to return books and their due dates""" class CheckedOut: loan_period = 10 fine_per_day = 2 def __init__(self): self.due_dates = {} def check_in(self, name): due_date = datetime.now() + timedelta(days=self.loan_period) self.due_dates[name] = due_date
{ "domain": "codereview.stackexchange", "id": 32520, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, beginner, object-oriented, python-3.x", "url": null }
• Well every strict subspace is still isomorphic to $\mathbb R ^k$ for some $k<n$ Dec 27 '16 at 0:00 • But they are not of that form. To be able to even make that statement, @Askask, you need to have an abstract notion of vector spaces. Dec 27 '16 at 0:03 • @Mehrdad - that is because you missed the very good insight it gives. In $\Bbb R^3$, every plane through the origin forms a 2-dimensional subspace. There are infinitely many such planes. But only two of them can be directly identified with $\Bbb R^2$. All the others are among the many, many examples of vector spaces without a canonical basis. Dec 27 '16 at 3:18 • @PaulSinclair: The question said "if they are all isomorphic to $\mathbb{R}^n$", not "if they are all $\mathbb{R}^n$"... Dec 27 '16 at 3:20
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9706877684006774, "lm_q1q2_score": 0.8171498512090519, "lm_q2_score": 0.8418256393148981, "openwebmath_perplexity": 186.24420430696242, "openwebmath_score": 0.8404276967048645, "tags": null, "url": "https://math.stackexchange.com/questions/2072977/why-study-finite-dimensional-vector-spaces-in-the-abstract-if-they-are-all-isomo/2073545" }
python, markov Title: What's a good Python HMM library? I've looked at hmmlearn but I'm not sure if it's the best one. SKLearn has an amazing array of HMM implementations, and because the library is very heavily used, odds are you can find tutorials and other StackOverflow comments about it, so definitely a good start. http://scikit-learn.sourceforge.net/stable/modules/hmm.html PS: As of June 2019 it is outdated. As of July 2019 you can use hmmlearn (pip3 install hmmlearn)
{ "domain": "datascience.stackexchange", "id": 1904, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, markov", "url": null }
• thanks for your comment, there are infinite solutions on MOD, -18 is the same as 5 and that would be the same as 28. Regarding your second observation, does that explain the answer to my first question? How do we go from $\sqrt{12}$ to $\sqrt{3}$ – bitoiu Feb 10 '14 at 9:56 • $\sqrt{12} = \sqrt4 \sqrt 3 = 2 \sqrt 3$. – steven gregory Feb 26 '16 at 17:25
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9525741214369554, "lm_q1q2_score": 0.8224445076751875, "lm_q2_score": 0.8633916134888614, "openwebmath_perplexity": 319.80001416219926, "openwebmath_score": 0.7970887422561646, "tags": null, "url": "https://math.stackexchange.com/questions/670006/solving-quadratic-modulo-congruence-review" }
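Both remarks in that exchange can be verified directly; a minimal sketch (the modulus 23 is an assumption inferred from the residues −18, 5 and 28 quoted in the comment):

```python
import math

# -18, 5 and 28 all represent the same residue class modulo 23 (modulus assumed)
assert -18 % 23 == 5 % 23 == 28 % 23 == 5

# sqrt(12) = sqrt(4 * 3) = 2 * sqrt(3)
assert math.isclose(math.sqrt(12), 2 * math.sqrt(3))
```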
reinforcement-learning, deep-rl, policy-gradients, actor-critic-methods, proximal-policy-optimization Is this some form of approximation or augmentation on the data by using the data that was created by an older model for multiple iterations or am I missing something here? If yes, where was this idea first introduced or better described? And where is an (empirical) proof that this still leads to a correct weight updating? Seems this question was asked and answered in openai/baselines github issue. The issue has been closed for a while. Below is an answer provided by @matthiasplappert which has the most "thumbs up":
{ "domain": "ai.stackexchange", "id": 2490, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reinforcement-learning, deep-rl, policy-gradients, actor-critic-methods, proximal-policy-optimization", "url": null }
# Thread: Coin and Die probability 1. ## Coin and Die probability Question: A fair coin is tossed once and a fair die is rolled once. Calculate the probability of observing that the coin is heads or the die is six, P(R=True). NOTE: R in this case is the event that the coin is heads or the die is six. My Solution: Let A be the event that the coin toss results in a head. Let B be the event that the roll of the die results in 6. The probability of at least one of A or B is: P(A∪B)=P(A)+P(B)−P(A∩B) And since A and B are independent, P(A and B)=P(A)⋅P(B) P(A or B)= 1/2 + 1/6 − 1/2 × 1/6 = 2/3 − 1/12 = 7/12 The solution requires formal calculation using mathematical notation and the derivation of every step in the calculation. Since it's a 10-mark question, I believe that my solution is not enough to get full marks in an exam. Is there another approach that I can take to get a better answer? Should I use the Bayes theorem formula for this question? 2. ## Re: Coin and Die probability
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759598948639, "lm_q1q2_score": 0.8109017308471971, "lm_q2_score": 0.8267118004748677, "openwebmath_perplexity": 289.0668947924018, "openwebmath_score": 0.85202956199646, "tags": null, "url": "http://mathhelpforum.com/statistics/282344-coin-die-probability.html" }
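The 7/12 result above can be confirmed by brute-force enumeration of the 12 equally likely (coin, die) outcomes — a minimal check:

```python
from fractions import Fraction

# the 12 equally likely (coin, die) outcomes
outcomes = [(coin, die) for coin in ("H", "T") for die in range(1, 7)]
favourable = [o for o in outcomes if o[0] == "H" or o[1] == 6]
p = Fraction(len(favourable), len(outcomes))

assert p == Fraction(7, 12)
# inclusion-exclusion for independent events gives the same value
assert p == Fraction(1, 2) + Fraction(1, 6) - Fraction(1, 2) * Fraction(1, 6)
```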
homework, thermodynamics Title: Thermochemistry Problem - How did they get that volume? The chemistry problem in my textbook is: Calculate the work done when 2.0 L of methane gas, $\ce{CH4}$ (g), undergoes combustion in excess oxygen at 0ºC and 1.00 bar. Assume the volume of water formed is negligible. I know that we should use the work equation $\mathrm{Work=-Pressure \cdot \Delta Volume}$, and that the pressure is 1.00 bar, which we must convert into pascals. All I need is the volume, which I then convert to $\mathrm{m^3}$. I got the equation $\ce{CH4 + 2O2 -> CO2 + 2H2O}$, but what I don't get is how we find the change in volume. The book says let $\ce{O2}$'s volume = $\mathrm{V_{initial}}$, and that the total initial volume of gases is $\mathrm{V_{i}}$ + 2.0 L. It says the final volume is 2.0 L of $\ce{CO2}$ (which I get because it should be the same number of liters for the carbons), but then it says that the water's volume is $\mathrm{V_{i}}$ - 4.0 L of $\ce{O2}$.
{ "domain": "chemistry.stackexchange", "id": 4296, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework, thermodynamics", "url": null }
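A numerical sketch of the work computation described above, assuming ideal-gas behaviour at constant T and P so that volumes are proportional to moles (2.0 L CH4 consumes 4.0 L O2 and yields 2.0 L CO2, and the liquid water is neglected); with the sign convention w = −PΔV this is the work done on the system:

```python
P = 1.00e5                                      # 1.00 bar, in Pa
V_CH4, V_O2, V_CO2 = 2.0e-3, 4.0e-3, 2.0e-3    # gas volumes in m^3 (2.0 L, 4.0 L, 2.0 L)

delta_V = V_CO2 - (V_CH4 + V_O2)               # net change in gas volume: -4.0 L
w = -P * delta_V                               # w = -P * dV

assert abs(delta_V - (-4.0e-3)) < 1e-15
assert abs(w - 400.0) < 1e-9                   # +400 J: the gas volume shrinks
```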
reference-request, approximation-algorithms, computational-geometry, clustering Title: A counter example for the set mean objective Let $\mathcal{P} = \{P_1, \cdots,P_n\}$ be a family of finite point sets in $\mathbb{R}^d$, each having at most $m$ points. Consider the following objective function \begin{align} cost(\mathcal{P},c) = \sum_{i\in[n]}\max_{p\in P_i}\Vert p-c\Vert^2 \end{align} Let $c^\star$ be the point which minimizes the above objective function and the optimal cost is $opt(\mathcal{P}) = cost(\mathcal{P},c^\star)$. For a point $c\in\mathbb{R}^d$ and $i \in [n]$, let $p_i^{(c)}$ be the point of $P_i$ farthest from $c$, i.e. \begin{align*} \Vert p_i^{(c)} -c\Vert^2 = \max_{p\in P_i}\Vert p-c\Vert^2 \end{align*} Define $c$-mean to be the mean of these farthest points \begin{align*} \mu^{(c)} = \frac{1}{n}\sum_{i=1}^n p_i^{(c)} \end{align*} So we have \begin{align*} cost_2(\mathcal{P},c) = \sum_{i=1}^n \Vert p_i^{(c)} -c\Vert^2 \end{align*} For ease of notation, let $p_i^\star = p_i^{(c^\star)}$ and $\mu^\star = \mu^{(c^\star)}$. So
{ "domain": "cstheory.stackexchange", "id": 5336, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reference-request, approximation-algorithms, computational-geometry, clustering", "url": null }
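A tiny numerical illustration of the definitions above (the point sets and the candidate centre c are arbitrary toy values, not taken from the question):

```python
def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def cost(point_sets, c):
    """Sum over sets of the squared distance from c to that set's farthest point."""
    return sum(max(sq_dist(p, c) for p in P) for P in point_sets)

def c_mean(point_sets, c):
    """Mean of the per-set farthest points from c."""
    far = [max(P, key=lambda p: sq_dist(p, c)) for P in point_sets]
    return tuple(sum(coords) / len(far) for coords in zip(*far))

sets = [[(0.0, 0.0), (3.0, 0.0)], [(0.0, 4.0), (1.0, 0.0)]]
c = (0.0, 0.0)

assert cost(sets, c) == 9.0 + 16.0      # farthest points are (3, 0) and (0, 4)
assert c_mean(sets, c) == (1.5, 2.0)
```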
frequency, modulation, demodulation By use of this approach you can receive the signal at the carrier frequency $f_c+f_\Delta = 500\:$MHz, perform quadrature down-conversion as normal, and then apply the above technique to modify the effective carrier frequency by using e.g. $f_\Delta = 100\:$Hz or whatever. As shown above you do not need to demodulate the signal and the conversion can be applied to any signal and be handled at baseband. Depending on the actual application, which is not known from the question, it may not be allowed to use the above. You can extract the original $I$/$Q$ signals directly from $x(t)$ if this signal is quadrature down-converted with the carrier frequency $f_c$. If you quadrature down-convert $x(t)$ with a locally generated frequency of $f_0$ you extract $I_\mathrm{new}(t)$/$Q_\mathrm{new}(t)$. If you have $I_\mathrm{new}(t)$/$Q_\mathrm{new}(t)$ you can reconstruct $I(t)$/$Q(t)$ from the above equations (solving two equations with two unknowns).
{ "domain": "dsp.stackexchange", "id": 368, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "frequency, modulation, demodulation", "url": null }
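The frequency-translation idea above — multiply the complex baseband signal by exp(j2πfΔt) to shift the effective carrier by fΔ — can be sketched as follows (the sample rate and tone frequencies are illustrative):

```python
import cmath

fs = 1000.0                    # sample rate in Hz (illustrative)
f1, f_delta = 50.0, 100.0      # original tone and desired carrier shift, Hz
N = 256

t = [n / fs for n in range(N)]
x = [cmath.exp(2j * cmath.pi * f1 * tn) for tn in t]            # complex tone at f1
shift = [cmath.exp(2j * cmath.pi * f_delta * tn) for tn in t]   # rotator at f_delta
y = [a * b for a, b in zip(x, shift)]                           # shifted signal

# the product is a pure tone at f1 + f_delta, i.e. the carrier moved by f_delta
expected = [cmath.exp(2j * cmath.pi * (f1 + f_delta) * tn) for tn in t]
assert max(abs(p - q) for p, q in zip(y, expected)) < 1e-9
```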
optics, refraction Title: Intensity of a light beam at material transition The definition of the intensity of light is given as $$I=0.5\varepsilon_0n_0\vert E\vert^2$$ Now, when transitioning from one material with $n_1=1$ to another one with $n_2=2$ at normal incidence, I will have a reflectance of $$R=\left\vert\frac{n_1 - n_2}{n_1 + n_2}\right\vert^2\approx0.11$$ which means that $$I_2=0.89I_1$$ and $$\begin{split}\vert E_2\vert^2&=\frac{2\cdot0.89I_1}{\varepsilon_0n_2}\\ &=\frac{2\cdot0.89}{\varepsilon_0n_2}\cdot\frac{\varepsilon_0n_1\vert E_1\vert^2}{2}\\ &=0.89\,\frac{n_1}{n_2}\vert E_1\vert^2=0.445\vert E_1\vert^2\end{split}$$
{ "domain": "physics.stackexchange", "id": 68745, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, refraction", "url": null }
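With the intensity definition in the question, the field ratio follows from R = ((n1 − n2)/(n1 + n2))² and |E|² = 2I/(ε0 n); a quick numerical check (ε0 cancels):

```python
n1, n2 = 1.0, 2.0
R = ((n1 - n2) / (n1 + n2)) ** 2        # reflectance at normal incidence, ~0.111
T = 1.0 - R                             # transmitted fraction of the intensity, ~0.889

# I = 0.5 * eps0 * n * |E|^2  =>  |E2|^2 / |E1|^2 = (I2 / I1) * (n1 / n2)
ratio = T * n1 / n2

assert abs(R - 1.0 / 9.0) < 1e-12
assert abs(ratio - 4.0 / 9.0) < 1e-12   # ~0.444, i.e. the ~0.445 quoted above
```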
Smaller units like microradians (μrad) and nanoradians (nrad) are used in astronomy, and can also be used to measure the beam quality of lasers with ultra-low divergence. More common is the arcsecond, which is π/648,000 radians (around 4.8481 microradians). Similarly, the prefixes smaller than milli- are potentially useful in measuring extremely small angles. ## Notes and references
{ "domain": "ipfs.io", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9924227578357059, "lm_q1q2_score": 0.8019442857099139, "lm_q2_score": 0.8080672066194945, "openwebmath_perplexity": 1843.8368886165324, "openwebmath_score": 0.9429770112037659, "tags": null, "url": "https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Radian.html" }
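A quick check of the arcsecond-to-microradian conversion quoted above:

```python
import math

arcsecond = math.pi / 648_000              # one arcsecond in radians
microradians = arcsecond * 1e6

assert abs(microradians - 4.8481) < 1e-3   # ~4.8481 urad, as quoted
```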
c#, graph, generics Title: Generic graph implementation in C# I am implementing fundamental data structures in C# while trying to learn techniques in the language to make my code cleaner, more concise, and reusable. I have implemented a generic graph with a few elementary search algorithms. What are some ways to improve my implementation and coding style? /// <summary> /// Implementation of a generic vertex to be used in any graph /// </summary> class Vertex<T> { List<Vertex<T>> neighbors; T value; bool isVisited; public List<Vertex<T>> Neighbors { get { return neighbors; } set { neighbors = value; } } public T Value { get { return value; } set { this.value = value; } } public bool IsVisited { get { return isVisited; } set { isVisited = value; } } public int NeighborsCount { get { return neighbors.Count; } } public Vertex(T value) { this.value = value; isVisited = false; neighbors = new List<Vertex<T>>(); }
{ "domain": "codereview.stackexchange", "id": 20382, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, graph, generics", "url": null }
homework-and-exercises, electrostatics, electric-circuits, capacitance, dielectric Now in $C_1$, after insertion of the dielectric, calculate the change of capacitance. Assume that it has new potential $V_1$ and charge $Q_1$, and assume a potential and charge for the 2nd capacitor too, $V_2$ and $Q_2$. Equate $V_1$ and $V_2$, and also equate $Q_1 + Q_2$ with the initial total charge of the system. This way you can calculate $V$ across the capacitors as well as $Q_1$ and $Q_2$. Now to calculate the polarisation charge, check the electric field $E_1$ (without dielectric), then find the electric field $E_2$ (with dielectric). This difference arises because of the insertion of the dielectric, so the electric field of the dielectric (calculated for a hypothetical parallel-plate capacitor) is equal to the difference in electric fields; this gives you the induced charge. Lastly, since you know the initial and final potentials and charges, apply the formula for the potential energy of capacitors both before and after insertion of the dielectric and take the difference; this is the change in potential energy.
{ "domain": "physics.stackexchange", "id": 11245, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electrostatics, electric-circuits, capacitance, dielectric", "url": null }
comets, photography, kepler I don't really understand this image. It is a GIF assembled from several exposures, but in each frame, individual stars appear as tight clusters of dots for some reason, and the comet moves within a bright, narrow band that crosses the entire image in all frames. Why does this look so strange? Also, what is the importance of Kepler deviating from its observations to concentrate on 67P during this period? What is the scientific importance of this series of observations?
{ "domain": "astronomy.stackexchange", "id": 1885, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "comets, photography, kepler", "url": null }
c++, c++14, windows, opengl I always prefer something like while(running) in combination with running = false over a while(true) in combination with a break, because it is more verbose about what it is doing. In C++ you should not use C-style casts but static_cast, dynamic_cast or reinterpret_cast. What is the difference between static_cast<> and C style casting? If you don't need to target devices that only support OpenGL with immediate mode, then you should think about using modern OpenGL. This will not only reduce the API calls, but will also increase the probability that the driver and GPU can optimise your drawing. This will certainly not have a great effect in your simple hello-world example, but will be noticeable in real-world examples.
{ "domain": "codereview.stackexchange", "id": 21977, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++14, windows, opengl", "url": null }
excel, vba, pdf Private oFullList As Object, oMyList As Object, oFSO As Object Private Const FDR1 = "C:\Test\" Private Const FDR2 = "C:\Test\Merged\" Private Const sAppend1 = " SomeWord.pdf" Private Const sAppend2 = " AnotherWord 2014.pdf" Private oPDF As New PDF_reDirect_v25002.Batch_RC_AXD Private lFiles As Long
{ "domain": "codereview.stackexchange", "id": 11983, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "excel, vba, pdf", "url": null }
ros-melodic libicuuc.so.60 => /usr/lib/x86_64-linux-gnu/libicuuc.so.60 (0x00007f5f91f26000) libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f5f91d1f000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5f91b1b000) libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f5f918e3000) libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f5f916b1000) libicudata.so.60 => /usr/lib/x86_64-linux-gnu/libicudata.so.60 (0x00007f5f8fb08000)
{ "domain": "robotics.stackexchange", "id": 34729, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic", "url": null }
c#, beginner, game, unity3d if (warningTextAlive == true) { if (warningFade < 400F) { warningAlpha = 1F; warningFade++; } else if (warningFade < 500F) { warningAlpha -= 0.01F; warningFade++; } else if (warningFade == 500F) { warningText.text = ""; warningAlpha = 1F; warningTextAlive = false; warningFade = 0F; } }
{ "domain": "codereview.stackexchange", "id": 19897, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner, game, unity3d", "url": null }
particle-physics So, what use are these theories for consumers? Well, once we have a consistent theory of how the Universe operates, technology may improve vastly. A lot of current technology is based on "new" theories. Semiconductor technology (which forms a huge part of our world now--you wouldn't be reading this if not for semiconductor technology) sprouts from quantum mechanics. Quantum mechanics tries to explain stuff at the atomic level--at first glance that sounds useless for the consumer. But it is useful, as you probably know by now. Similarly, a better understanding of how the universe operates can lead to improved technology for the consumer.
{ "domain": "physics.stackexchange", "id": 3930, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics", "url": null }
vb.net, enum I feel like this is not efficient and not secure but I don't know how to improve it. Any suggestions? You query HttpContext.Current.Request.Cookies.Get("lang") twice. Access it once and store it in a variable. You will get a problem if in the future you have to support more than 9 languages. Instead of accessing the last char from langCookie you should Split() langCookie by = and take the second array element. You can then use [Enum].TryParse() to get the enum. Some horizontal spacing (new lines) would help to easier read the code. Your code could look like so Public Function GetLangFromCookie() As Language Dim langCookie = HttpContext.Current.Request.Cookies.Get("lang") If langCookie Is Nothing Then Return Language.English Else Return GetLanguage(langCookie.Value) End If End Function Private Function GetLanguage(languageCookieValue As String) As Language
{ "domain": "codereview.stackexchange", "id": 36365, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vb.net, enum", "url": null }
c++, c++17, fltk const Point Word_query_window::button_show_all_xy{ Point{0, button_with_len_xy.y + button_with_len_size_y} }; // "Window" display order const Point Word_query_window::button_display_back_xy{ Point{ 0 , window_size_y - button_display_back_size_y } }; const Point Word_query_window::button_previous_page_xy{ Point{ window_size_x - button_previous_page_size_x - button_next_page_size_x, window_size_y - button_previous_page_size_y } }; const Point Word_query_window::button_next_page_xy{ Point{ window_size_x - button_next_page_size_x, window_size_y - button_previous_page_size_y } }; const Point Word_query_window::text_display_xy{ Point{0,window_offset_xy.y + text_current_filename_font_size } }; Word_query_window::Word_query_window() :Window{ window_offset_xy, window_size_x, window_size_y, window_label }, // Error text_error{ text_error_xy,"" },
{ "domain": "codereview.stackexchange", "id": 31841, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++17, fltk", "url": null }
organic-chemistry, reaction-mechanism, reference-request, wittig-reactions The fact that Berger was able to do this does, of course, not imply that they are real intermediates along the Wittig pathway, but it was a nice piece of mechanistic work that I thought deserved mention. In addition to this mechanistic work, there are some empirical issues with invoking a betaine intermediate that cannot be easily explained. Namely, all Wittig reactions using non-stabilised ylids should be under kinetic control and irreversible (hence highly (Z)-selective due to the inability of the initial intermediates to reverse and hence equilibrate to the thermodynamic product); however, this is frequently observed not to be the case, and in certain cases high (E)-selectivity can be achieved. Evidence for direct oxaphosphetane formation Vedejs was one of the first to propose direct (irreversible) cycloaddition of the ylid and carbonyl to give rise to the oxaphosphetane, quickly followed by cycloreversion to form the desired alkene and a phosphine oxide.
{ "domain": "chemistry.stackexchange", "id": 8741, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, reaction-mechanism, reference-request, wittig-reactions", "url": null }
homework-and-exercises, kinematics Now using the first equation of motion we get: $$v=u+at$$ Putting this in the integral (with $\tau$ as the integration variable) we get: $$\langle v \rangle =\frac{\displaystyle\int_{t_0}^{t}(u+a\tau)~\mathrm d\tau}{t-t_0}$$ which simplifies to $$\langle v \rangle = u+\frac{1}{2}a(t+t_0) = v-\frac{1}{2}a(t-t_0)\ ,$$ provided $a$ is constant.
{ "domain": "physics.stackexchange", "id": 31624, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, kinematics", "url": null }
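The time-average can also be checked numerically against the closed form u + a(t + t0)/2 (sample values are arbitrary; a midpoint Riemann-sum sketch):

```python
import math

u, a = 3.0, 2.0        # initial velocity and constant acceleration (arbitrary values)
t0, t = 1.0, 5.0

N = 10_000
dt = (t - t0) / N
# midpoint Riemann sum of v(tau) = u + a*tau over [t0, t], divided by (t - t0)
avg = math.fsum(u + a * (t0 + (i + 0.5) * dt) for i in range(N)) * dt / (t - t0)

closed_form = u + a * (t + t0) / 2     # equals v - a*(t - t0)/2 with v = u + a*t
assert abs(avg - closed_form) < 1e-9
```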
Performing row operations on a matrix is the method we use for solving a system of equations. The Gauss-Jordan method is a modification of Gaussian elimination: it differs in eliminating the unknowns in the equations above the main diagonal as well as below the main diagonal. Each of these systems can be solved by the Gauss-Jordan method; indicate the solutions (if any exist). Gauss-Jordan Elimination Step 1: obtain a leading 1 in a row. Step 2: use multiples of the row containing the 1 from step 1 to get zeros in all remaining places in the column containing this 1. The Gauss-Jordan method is a popular process for solving systems of linear equations in linear algebra. Its computational complexity indicates that this method is more efficient than the existing Gauss–Jordan elimination method in the literature for a large class of problems. In fact the Gauss-Jordan elimination algorithm is
{ "domain": "globalbeddingitalia.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9859363717170516, "lm_q1q2_score": 0.8060764148580296, "lm_q2_score": 0.817574478416099, "openwebmath_perplexity": 712.3485932488467, "openwebmath_score": 0.63821941614151, "tags": null, "url": "http://puis.globalbeddingitalia.it/gauss-jordan-method.html" }
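The two steps quoted above — produce a leading 1, then clear the rest of its column — can be sketched in a few lines. A minimal implementation using exact fractions (no pivoting refinements):

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row-echelon form."""
    m = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols - 1):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]            # step 1: leading 1
        for i in range(rows):                          # step 2: zeros above and below
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

# Solve x + y = 3, 2x - y = 0  ->  x = 1, y = 2
result = gauss_jordan([[1, 1, 3], [2, -1, 0]])
assert [row[-1] for row in result] == [1, 2]
```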
by first constructing the truth table for the formula and examining the far right column. The method of truth tables illustrated above is provably correct - the truth table for a tautology will end in a column with only T, while the truth table for a sentence that is not a tautology will contain a row whose final column is F, and the valuation corresponding to that row is a valuation that does not satisfy the sentence being tested. It can be used to test the validity of arguments. The final column of a truth table for a tautology (respectively, a contradiction) is all Ts (respectively, all Fs). Definition 1. • Truth Table - a calculation matrix used to demonstrate all logically possible truth-values of a given proposition. The truth or falsity of a statement built with these connectives depends on the truth or falsity of its components. Decide whether the formula p0 → ¬p0 is a tautology, a contingency, or a contradiction. Contradiction: A propositional formula is contradictory (unsatisfiable) if there
{ "domain": "chiavette-usb-personalizzate.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.970239907775086, "lm_q1q2_score": 0.8545844878495524, "lm_q2_score": 0.8807970904940926, "openwebmath_perplexity": 467.3208727966209, "openwebmath_score": 0.5699777007102966, "tags": null, "url": "http://insd.chiavette-usb-personalizzate.it/contradiction-truth-table.html" }
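The truth-table test described above is easy to mechanize; a minimal sketch that classifies a propositional formula by enumerating all valuations (the exercise p0 → ¬p0 comes out as a contingency):

```python
from itertools import product

def classify(formula, variables):
    """Return 'tautology', 'contradiction' or 'contingency' by brute force."""
    column = [formula(*values)
              for values in product([True, False], repeat=len(variables))]
    if all(column):
        return "tautology"
    if not any(column):
        return "contradiction"
    return "contingency"

implies = lambda p, q: (not p) or q

assert classify(lambda p: implies(p, not p), ["p0"]) == "contingency"
assert classify(lambda p: p or not p, ["p"]) == "tautology"
assert classify(lambda p: p and not p, ["p"]) == "contradiction"
```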
navigation, ros-kinetic, navsat-transform-node, navsat-transform, robot-localization Title: Using robot_localization navsat_transform_node to fuse IMU and GPS measurements from a drone Hi, I'm looking through the robot_localization package and I'm a bit confused on how to use the navsat_transform node to fuse measurements coming from a drone. I have three topics: dji_odom for odometry. dji_imu for Imu data. dji_gps for gps data. which according to the tutorial are the sample inputs. My issue is I'm not quite sure where these inputs are meant to be in the package. I'm using the sample dual_ekf_navsat_example file which looks like: <launch> <rosparam command="load" file="$(find robot_localization)/params/dual_ekf_navsat_example.yaml" /> <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_odom" clear_params="true"/> <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_map" clear_params="true"> <remap from="odometry/filtered" to="odometry/filtered_map"/> </node>
{ "domain": "robotics.stackexchange", "id": 31981, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, ros-kinetic, navsat-transform-node, navsat-transform, robot-localization", "url": null }
navigation, mapping, rviz Title: RVIZ display 2d probabilistic map Hi Everybody. I wonder how RVIZ can display a 2d probabilistic map. I have a 2D matrix where I store the probabilistic values of an occupancy grid map. What can I use to display these values in RVIZ? Regards, Originally posted by acp on ROS Answers with karma: 556 on 2012-02-06 Post score: 2 The rviz map plugin unfortunately currently only supports the three discrete states unknown, occupied and free. See this code snippet from visualization/rviz/src/rviz/default_plugin/map_display.cpp (starting at line 289): for( unsigned int pixel_index = 0; pixel_index < num_pixels_to_copy; pixel_index++ ) { unsigned char val; if(msg->data[ pixel_index ] == 100) val = 0; else if(msg->data[ pixel_index ] == 0) val = 255; else val = 127; pixels[ pixel_index ] = val; }
{ "domain": "robotics.stackexchange", "id": 8131, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "navigation, mapping, rviz", "url": null }
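The thresholding in that C++ snippet can be mirrored in a few lines — occupied (100) → black, free (0) → white, everything else → mid-grey (a sketch of the same mapping, not rviz's actual code path):

```python
def occupancy_to_pixel(value):
    """Mimic rviz's map_display.cpp: 100 -> 0, 0 -> 255, anything else -> 127."""
    if value == 100:
        return 0      # occupied: black
    if value == 0:
        return 255    # free: white
    return 127        # unknown / intermediate probabilistic values: grey

assert occupancy_to_pixel(100) == 0
assert occupancy_to_pixel(0) == 255
assert occupancy_to_pixel(55) == 127
assert occupancy_to_pixel(-1) == 127
```

This is why intermediate probabilities all collapse to the same grey: the plugin only distinguishes the three discrete states.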
deep-learning The results are shown in the graphs. I suspect that I am making an error somewhere, because I have seen that deep learning is able to match complex functions with far fewer epochs. I would appreciate any hints on how to fix this problem and obtain a good fit with the deep learning function. To make this clear, I post the graph's code. rs = [x for x in range(20)] def masas_circulo(x): masas_circulos = [] rs = [r for r in range(x)] for r in rs: masas_circulos.append(model.predict([r])[0][0]) return masas_circulos
{ "domain": "ai.stackexchange", "id": 2990, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "deep-learning", "url": null }
3. Can you perform another set of column operations to difference these three first differences? The final two columns of your matrix will then be filled with the second differences of your original sequences of four consecutive square numbers. Recall the result from (1). What does this tell you about these two columns and hence the determinant? $(*)$ Perhaps $a^2, \dots, (a+3)^2$ may not be "square numbers" in the sense that $a$ may not be a natural number. But this doesn't matter very much; the reason the second differences of the sequence $n^2, \, n\in\mathbb{N}$ are so nice is because of algebra that works just as well on $x^2, \, x\in \mathbb{R}$. In particular, what is $\left((x+2)^2-(x+1)^2\right) - \left((x+1)^2-x^2\right)$? If you brush up a little on finite differences of higher polynomials you might want to have a think about how you could determine the following determinant, where each row has six consecutive fourth powers:
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9736446479186303, "lm_q1q2_score": 0.8388320648592148, "lm_q2_score": 0.8615382076534743, "openwebmath_perplexity": 232.25404886669062, "openwebmath_score": 0.8706985116004944, "tags": null, "url": "https://math.stackexchange.com/questions/1953843/what-will-be-the-value-of-the-following-determinant-without-expanding-it/1953852" }
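The hint's key identity can be checked directly: the second difference of $x^2$ is the constant 2 for any real $x$, integer or not. A small sketch (names are ours):

```python
def second_difference(f, x):
    """Second forward difference f(x+2) - 2*f(x+1) + f(x)."""
    return f(x + 2) - 2 * f(x + 1) + f(x)

def square(x):
    return x ** 2

# The second difference of x^2 is 2 regardless of x.
values = [second_difference(square, x) for x in (0, 5, -3, 2.5)]
```

This is exactly why the column operations in the hint flatten each row of consecutive squares to a constant, forcing two equal columns and a zero determinant.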
mathematical-physics, computational-physics, oscillators, acoustics Title: Numerical computation of the Rayleigh-Lamb curves The Rayleigh-Lamb equations: $$\frac{\tan (pd)}{\tan (qd)}=-\left[\frac{4k^2pq}{\left(k^2-q^2\right)^2}\right]^{\pm 1}$$ (two equations, one with the +1 exponent and the other with the -1 exponent) where $$p^2=\frac{\omega ^2}{c_L^2}-k^2$$ and $$q^2=\frac{\omega ^2}{c_T^2}-k^2$$ show up in physical considerations of the elastic oscillations of solid plates. Here, $d$ is the thickness of an elastic plate, $c_L$ the velocity of longitudinal waves, and $c_T$ the velocity of transverse waves. These equations determine for each positive value of $\omega$ a discrete set of real "eigenvalues" for $k$. My problem is the numerical computation of these eigenvalues and, in particular, to obtain curves displaying these eigenvalues. What sort of numerical method can I use with this problem? Thanks.
{ "domain": "physics.stackexchange", "id": 1466, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mathematical-physics, computational-physics, oscillators, acoustics", "url": null }
string-theory, supersymmetry, mass-energy $$p^-\sim \int \frac{\partial}{\partial_\tau X^-}L_{LC} \sim \int \partial_\tau X^-$$ I just got the same formula as for the bosonic string. I know that I'm using the wrong Virasoro constraint for the Green-Schwarz string but I cannot find that in the book. I read that the vanishing of the ground state mass comes directly from the cancellation between fermions' and bosons' zero-point energies; that was the reason I started this calculation. The supersymmetric string has more gauge symmetries than the bosonic string, namely the kappa symmetry. Gauging this symmetry by fixing half of the fermionic variables to zero $$ (\gamma_+\theta)=0 $$ and doing a field redefinition $$ \theta\rightarrow\sqrt{p^+}\theta $$ you will end up with an action of the form: $$ S=\int d^2z\left(\partial x^i \bar\partial x_i + \theta^\alpha\bar\partial\theta^\alpha\right) $$
{ "domain": "physics.stackexchange", "id": 57998, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "string-theory, supersymmetry, mass-energy", "url": null }
ros2, windows Originally posted by Weidong Chen with karma: 26 on 2020-08-31 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by mirella melo on 2020-09-01: Good to hear, Weidong!
{ "domain": "robotics.stackexchange", "id": 35472, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2, windows", "url": null }
strings, html, json, groovy "type" : "Chinese God" } Notice the \n character in the description field which separates multiple statements which are part of the same description. I wrote a Groovy script to take this data and format it into HTML. I'm heavily using Groovy's GString interpolated expressions to assemble the data and HTML together. I'm using a little bit of RegEx & partial string matching so that card types like "Chinese God", "Chinese Hero" and "Chinese" all get included in one set, for example. Bear in mind, we have not yet decided on and made styles for the tags and such, so you'll see some inline CSS; that is temporary. import groovy.json.JsonSlurper
{ "domain": "codereview.stackexchange", "id": 14698, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "strings, html, json, groovy", "url": null }
condensed-matter, density, metals, matter, states-of-matter At lower densities, the next phase is "nuclear pasta" - very neutron-rich nuclear matter bound into planar or spaghetti-like forms surrounded by a (degenerate) free neutron plus electron fluid. Below a few $10^{15}$ kg/m$^3$, more "normal" nuclei form with pseudo-spherical shapes. These nuclei are still very neutron-rich (n/p ratios of 3 or more and masses of several hundred amu) locked into some sort of solid lattice by Coulomb forces and bathed in a fluid of neutrons and relativistic degenerate electrons. At densities below $4\times 10^{14}$ kg/m$^3$ the lowest energy configuration sees the free neutrons absorbed into the neutron-rich nuclei. The equilibrium nuclei are still very heavy, but not so heavy as at higher densities. At lower densities, we gradually get back to material which has the n/p ratios of the stable elements we are familiar with.
{ "domain": "physics.stackexchange", "id": 64167, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "condensed-matter, density, metals, matter, states-of-matter", "url": null }
python, object-oriented, design-patterns, inheritance Now Main, after inheriting Unifier, will have all fn(self, ...) methods from module_1a, module_2a, module_1b (except def _method_to_omit). get_module_methods source Method class assignment ideas derived from this Q&A
{ "domain": "codereview.stackexchange", "id": 38213, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, object-oriented, design-patterns, inheritance", "url": null }
c#, calculator Function names should denote actions, since functions are entities that are supposed to do stuff. For example, SecondInput, IMO, would be a function that displays a message and returns the input. But it's actually quite unclear what it does. In fact, in your code it does something different from what I would have thought. In this particular example, I would call the function ShowSecondInputMessage. Although it's longer, it expresses the purpose of the function better.
{ "domain": "codereview.stackexchange", "id": 39742, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, calculator", "url": null }
education, linear-algebra, didactics, mathematical-foundations Title: What parts of linear algebra are used in computer science? I've been reading Linear Algebra and its Applications to help understand computer science material (mainly machine learning), but I'm concerned that a lot of the information isn't useful to CS. For example, knowing how to efficiently solve systems of linear equations doesn't seem very useful unless you're trying to program a new equation solver. Additionally, the book has talked a lot about span, linear dependence and independence, when a matrix has an inverse, and the relationships between these, but I can't think of any application of this in CS. So, what parts of linear algebra are used in CS? The parts that you mentioned are basic concepts of linear algebra. You cannot understand the more advanced concepts (say, eigenvalues and eigenvectors) before first understanding the basic concepts. There are no shortcuts in mathematics. Without an intuitive understanding of the concepts of span and linear independence you won't
{ "domain": "cs.stackexchange", "id": 15066, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "education, linear-algebra, didactics, mathematical-foundations", "url": null }
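As a concrete illustration of the "basic" concepts the answer defends: testing whether a set of vectors is linearly independent reduces to computing the rank of the matrix they form, e.g. by Gaussian elimination. A small pure-Python sketch (function names are ours):

```python
def rank(rows, eps=1e-10):
    """Rank of a matrix (given as a list of row vectors)
    via Gaussian elimination with partial pivot search."""
    a = [list(map(float, r)) for r in rows]
    m = len(a)
    n = len(a[0]) if m else 0
    r = 0
    for col in range(n):
        # find a pivot row for this column
        pivot = next((i for i in range(r, m) if abs(a[i][col]) > eps), None)
        if pivot is None:
            continue
        a[r], a[pivot] = a[pivot], a[r]
        # eliminate the column below the pivot
        for i in range(r + 1, m):
            factor = a[i][col] / a[r][col]
            for j in range(col, n):
                a[i][j] -= factor * a[r][j]
        r += 1
    return r

def independent(vectors):
    """Vectors are linearly independent iff rank == number of vectors."""
    return rank(vectors) == len(vectors)
```

The same rank computation answers span questions (does a vector set span a space?) and invertibility questions (a square matrix is invertible iff it has full rank), which is why these concepts keep reappearing in ML code.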
A. $64 B. $70 C. $73 D. $74 E. $85 We can determine the average using the formula average = sum / number: average = (74 + 69 + 64 + 79 + 64 + 84 + 77)/7 = 511/7 = 73 Answer: C
{ "domain": "gmatclub.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.97482115683641, "lm_q1q2_score": 0.834301810542596, "lm_q2_score": 0.8558511524823262, "openwebmath_perplexity": 6035.688192416138, "openwebmath_score": 0.5074110627174377, "tags": null, "url": "https://gmatclub.com/forum/over-the-past-7-weeks-the-smith-family-had-weekly-grocery-bills-of-268908.html?kudos=1" }
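The arithmetic can be checked in two lines:

```python
bills = [74, 69, 64, 79, 64, 84, 77]

# average = sum / number of weeks
total = sum(bills)            # 511
average = total / len(bills)  # 73.0, matching answer choice C
```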
c++, object-oriented, game, playing-cards And once you have that, you can do: template <typename InputRange> void refill(InputRange&& range) { using std::begin; using std::end; refill(begin(range), end(range)); } and that will not only work for std::vector<Card>, it will work for std::set<Card> and even std::istream_iterator<Card> for reading from a file (assuming you implemented operator>> for Card), and more. private: void shuffle(); Making shuffling a private function hidden from users seems like a poor design decision. Why shouldn't users of a deck of cards be the ones who decide when and how the cards get shuffled? Deck.cpp Deck::Deck() { fill(); shuffle(); } As a user of the class, I would be annoyed that the deck doesn't let me decide when to shuffle. I might specifically want an unshuffled deck for predictability or testing. Why wouldn't you give me that choice? const Card& Deck::dealCard() { if (_cards.empty()) throw EmptyDeckException("");
{ "domain": "codereview.stackexchange", "id": 31130, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, object-oriented, game, playing-cards", "url": null }
homework-and-exercises, electromagnetism, waves, electromagnetic-radiation, maxwell-equations In simpler terms $$\Box\textbf{E}=\textbf{C}$$ where $$\textbf{C}=\nabla\rho +\frac{\partial}{\partial t}\textbf{J}$$ Now moving to the case of $\textbf{B}$ $$\nabla \times (\nabla \times \textbf{B})=\nabla \times\left(\textbf{J} + \frac{\partial\textbf{E}}{\partial t}\right)= \nabla \times\textbf{J} + \frac{\partial}{\partial t}(\nabla\times\textbf{E})=\nabla \times\textbf{J} -\frac{\partial^2\textbf{B}}{\partial t^2}$$ as for the LHS we have $$\nabla(\nabla\cdot\textbf{B}) - \nabla^2\textbf{B}=\nabla(0) - \nabla^2\textbf{B}$$ Rearranging the RHS and LHS we get $$\nabla^2\textbf{B}-\frac{\partial^2\textbf{B}}{\partial t^2}=-\nabla \times\textbf{J} $$ in simpler terms $$\Box \textbf{B}=\textbf{F}$$ where $$\textbf{F}=-\nabla \times\textbf{J}$$ So putting in sources has ultimately led to what we call the inhomogeneous wave equation, which is simply $$\Box f(t,\vec{x})=h(t,\vec{x})$$ the same thing as in the case of the Laplacian and the Poisson equation in chapter 3.
{ "domain": "physics.stackexchange", "id": 65590, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electromagnetism, waves, electromagnetic-radiation, maxwell-equations", "url": null }
# Demo Code Implement a class QuadraticEquation (you can start with the code below if you wish to save time) whose constructor receives the coefficients a, b, c of the quadratic equation ax2 + bx + c = 0. Supply methods getSolution1 and getSolution2 that get the solutions, using the quadratic formula. Write a test class QuadraticEquationTester (or use the sample one below if you wish) that constructs a QuadraticEquation object, and prints the two solutions. For example, if you make QuadraticEquation myProblem = new QuadraticEquation(2,5,-3), the solutions should be .5 and -3. Recall that if {$ax^2+bx+c=0$} then {$x= \frac{-b \pm \sqrt{b^2-4ac}}{2a}$} public class QuadraticEquationTester { public static void main(String[] args) { QuadraticEquation myProblem = new QuadraticEquation(2, 5, -3); double x1 = myProblem.getSolution1(); double x2 = myProblem.getSolution2(); System.out.println(myProblem); System.out.println("The solutions are "+x1+" and "+x2); System.out.println("Expected .5 and -3"); } } public class QuadraticEquation {
{ "domain": "mathorama.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9802808741970027, "lm_q1q2_score": 0.8125928638197543, "lm_q2_score": 0.8289388125473629, "openwebmath_perplexity": 4288.58655115846, "openwebmath_score": 0.2352210134267807, "tags": null, "url": "https://mathorama.com/apcs/pmwiki.php?n=Main.Chapter4" }
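The expected behaviour of getSolution1/getSolution2 can be sketched in Python (the Java class itself is left as the exercise); for a=2, b=5, c=-3 the discriminant is 49, giving the stated solutions .5 and -3:

```python
import math

def quadratic_solutions(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula.
    Assumes a != 0 and a non-negative discriminant."""
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

x1, x2 = quadratic_solutions(2, 5, -3)   # the example from the text
```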
galaxies Title: What keeps galaxies united like a solar system? Black holes may be really strong, but they act over a very short range. For example, if the Sun were a black hole of the same mass, it would be dark but we would still be revolving around it. It wouldn't engulf us. Also, I hear that the outer stars in a galaxy rotate around the galaxy with the same speed as the inner stars? This defies the law of gravity. Is this still a mystery? Does anybody know what the pull on a star by a galaxy is? And is this pull uniform throughout the galaxy? Related Question: Is there evidence of dark matter in our galaxy? Dark matter is only the "caboose" on a whole train of answers.
{ "domain": "physics.stackexchange", "id": 3076, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "galaxies", "url": null }
python, python-3.x, google-bigquery # Grab start_time logical_bool = False for line in split_out: if logical_bool == False: if "state: RUNNING" in line: logical_bool = True elif logical_bool == True: start_time = line.replace("stateStartTime:", "").strip(" `'\n") logger.info(f"START TIME AFTER STRIP: {start_time}") start_time = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%S.%fZ").strftime("%Y-%m-%d %H:%M:%S") break status_value = status_value.replace("state:", "").strip() if status_value == "DONE": status_value = "SUCCEEDED" return [status_value, start_time, end_time] except Exception as e: logger.error(e, stack_info=True, exc_info=True)
{ "domain": "codereview.stackexchange", "id": 42917, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, python-3.x, google-bigquery", "url": null }
reaction-mechanism My prof has an argument for the first one; the first one involves fewer steps, so it is probably more favorable. My proposed mechanism has three steps of electron pushing versus two, so okay. However, I can see an argument for the second mechanism - the first one involves a bonding pair acting as a nucleophile rather than a lone pair on chlorine. The bonding pair is stabilized by the chlorine and the iron atom, while the lone pair is stabilized by only the chlorine.
{ "domain": "chemistry.stackexchange", "id": 1937, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reaction-mechanism", "url": null }
localization, kalman-filter, noise Title: What is a good approach for outlier rejection during real time data filtering? I'm trying to finish up a localization pipeline, and the last module I need is a filtering framework for my pose estimates. While a Kalman filter is probably the most popular option, I'm using cameras for my sensing and I wouldn't be able to ensure the kinds of noise profiles a KF is good for; I doubt it would work as well with suddenly appearing outliers in my poses, so I am looking for other options which can work with real-time data and be immune to outliers.
{ "domain": "robotics.stackexchange", "id": 1379, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "localization, kalman-filter, noise", "url": null }
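One common compromise is to keep a Kalman-style filter but gate each measurement before the update: if the innovation (measurement minus prediction) is improbably large relative to its expected spread, the measurement is rejected as an outlier instead of being fused. A 1-D sketch of this validation-gate idea (all numbers and names are illustrative, not from the question):

```python
def gated_update(estimate, est_var, measurement, meas_var, gate_sigmas=3.0):
    """One scalar Kalman update with an innovation validation gate.
    Returns (new_estimate, new_variance, accepted)."""
    innovation = measurement - estimate
    innovation_var = est_var + meas_var
    # reject measurements outside the gate_sigmas gate
    if innovation * innovation > (gate_sigmas ** 2) * innovation_var:
        return estimate, est_var, False
    gain = est_var / innovation_var
    new_estimate = estimate + gain * innovation
    new_var = (1.0 - gain) * est_var
    return new_estimate, new_var, True

# An outlier far outside the gate leaves the estimate untouched:
x, p, ok = gated_update(0.0, 1.0, 50.0, 1.0)
```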
c++, datetime, reinventing-the-wheel // TODO: support full range of double rather than limiting arbitrarily double getUnixTimeStamp( const int year, const int month, const int day, const int hour, const int minute, const int second ) { // Do some bounds checking // Note that double will still not be indefinite, and you should check what the boundaries for double are. // (this also changes per implementation, so you need to check limits.h or something like that) // It would be more logical and correct to limit to the range permitted by double than to limit it arbitrarily. // If your program needs this limit, you should enforce it outside this function, because it doesn't belong here. const int* const DAYS_PER_MONTH = isLeapYear( year ) ? N_DAYS_IN_LEAP_MONTH : N_DAYS_IN_MONTH;
{ "domain": "codereview.stackexchange", "id": 1747, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, datetime, reinventing-the-wheel", "url": null }
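The isLeapYear helper the snippet relies on encodes the Gregorian rule (every 4th year, except centuries unless divisible by 400). A sketch of that rule in Python (the C++ original isn't shown in the excerpt):

```python
def is_leap_year(year):
    """Gregorian leap-year rule: divisible by 4,
    except centuries, which must be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The days-per-month table then follows from the flag,
# as with N_DAYS_IN_LEAP_MONTH in the C++ snippet:
def days_in_month(year, month):
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if month == 2 and is_leap_year(year):
        return 29
    return days[month - 1]
```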
with complex numbers; Matrix divisions. It also has an open programming environment. How to concatenate 3 histograms on the same graph in Matlab. Scilab version 5. Scilab is specialized in handling matrices (basic matrix manipulation, concatenation, transpose, inverse, etc.). String handling: part, length, quote, evstr, execstr, strsubst, strcat, strindex, sci2exp. Environment: who (list of Scilab variables), who_user (list of user variables), exit (end Scilab session), abort (stop computation), pause (pause computation, ask for input). Output: disp(x) displays string or value x; string(x) converts value x to a string; disp(s,t) displays strings as a list; disp(s+t) displays the concatenation of strings. Testing: x==y returns true if x = y; x~=y returns true if x and y differ. "The variable returned by the Scilab function passed as an argument is not valid." Alternatively, if you concatenate two matrices by separating them using semicolons, they are appended vertically.
{ "domain": "artruista.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9433475683211323, "lm_q1q2_score": 0.8073651018048328, "lm_q2_score": 0.8558511506439707, "openwebmath_perplexity": 3828.1249397353417, "openwebmath_score": 0.40898624062538147, "tags": null, "url": "https://artruista.com/chu-mt-upcoming/concatenation-in-scilab.html" }
ros, diff-drive-controller, ros-control, ros-kinetic Those two points should resolve the "controller spawner couldn't ..." problem. However, you may later have a problem loading the controller with a different error report, since I don't know whether you are loading the ROS control plugin and specifying the transmissions. You need to do that in your URDF model if you want the controllers to load properly. Have a look at this video where I show step by step how to modify your files for proper working: https://goo.gl/ZWE5st Originally posted by R. Tellez with karma: 874 on 2018-05-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by javi_tecla on 2018-08-18: Thanks a lot for the answers and especially for the dedicated video. Finally I can launch the control node. You can see in this video one of the tests of the wheelchair working: https://www.youtube.com/watch?v=2xSHQy6WOEo
{ "domain": "robotics.stackexchange", "id": 30727, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, diff-drive-controller, ros-control, ros-kinetic", "url": null }
homework-and-exercises, experimental-physics, kinematics, acceleration Time (s) | Acceleration (m/s^2) 0.75 | -1.7 1.25 | -1.68 1.75 | -1.72 2.25 | -1.68 2.75 | -1.72 3.25 | -1.68 3.75 | -1.72 You are right in that gravity did not change during data collection. You are a victim of uncertainty, which is a very important part of experimental physics. I'm sorry in advance for the "wall of text", and I hope that this clears up some confusion.
{ "domain": "physics.stackexchange", "id": 12346, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, experimental-physics, kinematics, acceleration", "url": null }
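The answer's point about uncertainty shows up directly in the numbers: the readings scatter by about ±0.02 m/s² around a mean of exactly -1.70 m/s². A quick check:

```python
readings = [-1.7, -1.68, -1.72, -1.68, -1.72, -1.68, -1.72]

mean = sum(readings) / len(readings)    # -1.70 m/s^2
spread = max(readings) - min(readings)  # 0.04 m/s^2 peak-to-peak
```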
vba, excel With Processing.Label11 .Caption = SheetData.AppriasalReviewSentText & vbCrLf & _ SheetData.DocumentSentForReviewText & vbCrLf & _ SheetData.HMDALastText & vbCrLf & _ SheetData.BSASavedText .Font.Size = DefaultFontSize .TextAlign = fmTextAlign.fmTextAlignLeft End With With Processing.Label6 .Caption = "In-House Title Work Received:" & vbCrLf & _ "In-House Evaluation Received:" & vbCrLf & _ "Appraisal Received:" & vbCrLf & _ "Title Work Received:" .Font.Size = DefaultFontSize .TextAlign = fmTextAlign.fmTextAlignRight End With
{ "domain": "codereview.stackexchange", "id": 36946, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "vba, excel", "url": null }
distances, jupiter Because Jupiter's visible features do not rotate uniformly at all latitudes, astronomers have defined three different systems for defining the longitude. Rocky planets like Mars do have a well-defined coordinate system, as shown in this map and on this page. These coordinates are used to mark and find interesting places on either the surface (like on Earth) or in the atmosphere of a planet. There are also tables available showing which latitude/longitude is directed towards us at a certain moment, so that you can compute whether a particular region of the planet is visible from Earth or not.
{ "domain": "astronomy.stackexchange", "id": 3021, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "distances, jupiter", "url": null }
kinematics, acceleration, velocity, integration, calculus Title: Why does acceleration need to be constant if integrating? My teacher wrote the following: Constant Acceleration If acceleration is constant, then: $$\vec{v}(t) = \int_0^t \vec{a}(t')dt'\ + \vec{v_0}$$ and $$\vec{x}(t) = \int_0^t \vec{v}(t')dt'\ + \vec{x_0}$$ Why does acceleration need to be constant? I can't see why integration would need a constant acceleration as such. Acceleration does not need to be constant. By definition, $a=dv/dt$. You can still solve for $v(t)$ by integrating $\int a(t) dt$. If acceleration is constant, you will arrive at the common situation of $v(t)=v_0 +at$. If acceleration is not constant, you will have some other (more interesting) result for $v(t)$ since you are now integrating over a function that includes $t$. For example, if $a(t)=\frac{a_0}{t^2}$, then $v(t)=-\frac{a_0}{t}+v_0$.
{ "domain": "physics.stackexchange", "id": 32862, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinematics, acceleration, velocity, integration, calculus", "url": null }
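The answer's example can be checked numerically: for $a(t)=a_0/t^2$, the definite integral from $t=1$ to $t$ is $a_0(1-1/t)$, which is the closed form $-a_0/t + v_0$ with the constant absorbed at the lower limit. A sketch using a simple trapezoid rule (choice of $a_0$ and limits is ours):

```python
def trapezoid(f, a, b, n=100000):
    """Composite trapezoid rule for the definite integral of f on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

a0 = 2.0

def accel(t):
    return a0 / t**2          # a non-constant acceleration

# delta_v = integral of a(t) from 1 to 3 should equal a0 * (1 - 1/3)
delta_v = trapezoid(accel, 1.0, 3.0)
exact = a0 * (1.0 - 1.0 / 3.0)
```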
density, fluid-statics, buoyancy Title: Why is the body sinking in the swamp? Is this a correct explanation of why the body sinks? A swamp is not a liquid but a particle suspension; that is why Archimedes' principle does not work and no upward buoyant force is applied to the body. Hey, here is a text about quicksand, which I assume is a similar problem: http://journals.fcla.edu/cee/article/viewFile/84539/81576 They take the mean density to apply Archimedes' principle. Their conclusion is that one is unlikely to sink totally in the quicksand! In swamps, I assume that the mean density is such that one can actually sink. So the answer would be that Archimedes' principle does apply, but you have to take into account the mean density, and also the viscosity of the swamp, which makes the sinking slower.
{ "domain": "physics.stackexchange", "id": 54143, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "density, fluid-statics, buoyancy", "url": null }
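The mean-density argument can be sketched: with Archimedes' principle applied to the mean density of the suspension, a body sinks only until its weight equals the weight of displaced material, and sinks completely only if its density exceeds the swamp's mean density. A toy calculation (all density values are illustrative):

```python
# Archimedes with mean density: the submerged fraction at equilibrium
# equals (body density) / (fluid mean density), capped at 1 (fully sunk).
def submerged_fraction(body_density, fluid_density):
    return min(1.0, body_density / fluid_density)

# A suspension denser than a human body: one floats, partly submerged.
in_quicksand = submerged_fraction(1000.0, 2000.0)   # half submerged
# A swamp with mean density below the body's: the body sinks fully.
in_thin_swamp = submerged_fraction(1000.0, 900.0)
```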
particle-physics, cosmology, dark-matter But in any case, your question can still be answered, at least in a hand-waving sort of way. There are a number of processes proposed for DM annihilation, here are a few:
{ "domain": "physics.stackexchange", "id": 23381, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, cosmology, dark-matter", "url": null }
performance, vba RemoveBlanksFromStringArray = result End Function On a test of 500,000 items, the original code took 1.1481±0.0001 s and the refactored code took 0.1157±0.0001 s, or ~10 times faster. Addendum Now that I think about it, the original code with copying memory boils down to an \$\mathcal O (n^2)\$ algorithm, the refactored code is \$\mathcal O (n)\$ (here \$ n\$ is the size of the array, \$\mathcal O (n^a)\$ basically means you loop over the array \$ a\$ times). Using Timer for some rough results, you can see this trend: 10,000 elements Old 0 New 0 100,000 elements Old 0.046875 (same order of magnitude as each other) New 0.015625 1,000,000 elements Old 3.171875 (1 order of magnitude slower relative to New) New 0.125 10,000,000 elements Old 321.46875 (2 orders of magnitude slower) New 1.3125
{ "domain": "codereview.stackexchange", "id": 33955, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "performance, vba", "url": null }
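The two complexity classes can be sketched side by side: the slow version shifts the tail of the array for every blank found ($\mathcal O(n^2)$ total work), while the fast one makes a single pass copying non-blank items to a result ($\mathcal O(n)$). A Python sketch (names are ours; the original is VBA):

```python
def remove_blanks_quadratic(items):
    """Delete-in-place: each removal shifts the remaining tail, O(n^2)."""
    items = list(items)
    i = 0
    while i < len(items):
        if items[i] == "":
            del items[i]          # shifts everything after i left by one
        else:
            i += 1
    return items

def remove_blanks_linear(items):
    """Single pass copying the survivors, O(n)."""
    return [x for x in items if x != ""]

data = ["a", "", "b", "", "", "c"]
```

Both produce the same output; only the work done per removal differs.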
differential-geometry, metric-tensor, tensor-calculus, relativity Title: Relativity and components of a 1-form I have a question regarding Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973), Gravitation ISBN 978-0-7167-0344-0. It is a book about Einstein's theory of gravitation. At page 313, exercise 13.2 "Practice with Metric" presents a four-dimensional manifold in spherical coordinates + $v$ that has a line element $$ds^2 = - (1-2 M/r) dv^2 + 2 dv dr+r^2 (d\theta^2 + \sin^2 \theta d\phi^2).$$ The question (b) is: Define a scalar field $t$ by $$t \equiv v - r - 2M \ln((r/2M)-1)$$ What are the covariant and contravariant components of the 1-form $dt$ (denoted $\tilde u$)? What is the squared length $u^2$ of the corresponding vector? Show that $u$ is timelike in the region $r > 2M$.
{ "domain": "physics.stackexchange", "id": 59385, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "differential-geometry, metric-tensor, tensor-calculus, relativity", "url": null }
meteorology, atmosphere-modelling, statistics, climatology, correlation Thank you. Convergent cross mapping (CCM) is a recently developed tool to answer the question you've asked. It's based on tools developed in nonlinear time series analysis and dynamical systems theory. It allows you to: 1) determine if a causal relationship between two variables is present 2) establish the direction of causality 3) do so even in the presence of noise. As for an interesting application, check out the paper Causal feedbacks in climate change [van Nes et al., 2015], where CCM is applied to CO2 and temperature based on the Vostok data sets. EDIT: Below I've added a more detailed explanation of CCM to show the original poster that this technique does indeed answer their question, as well as to show it has a rigorous mathematical underpinning.
{ "domain": "earthscience.stackexchange", "id": 1156, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "meteorology, atmosphere-modelling, statistics, climatology, correlation", "url": null }
thermodynamics, equilibrium, chemical-potential Title: Do chemical potentials stay constant in chemical equilibrium? I am uncertain whether chemical potentials stay constant during chemical equilibrium. Consider a closed container divided into two parts 1 and 2 filled with ideal gas particles. The barrier between parts 1 and 2 allows particle exchange. The Gibbs energies for parts 1 and 2 are $$G_1=N_1\mu_1$$ $$G_2=N_2\mu_2$$ The total Gibbs energy for the container is $$G=G_1+G_2=N_1\mu_1+N_2\mu_2$$ which has differential $$dG=(\mu_1-\mu_2)dN_1+N_1d\mu_1+N_2d\mu_2$$ At equilibrium, $$(\mu_1-\mu_2)dN_1+N_1d\mu_1+N_2d\mu_2=0$$ It seems like the equilibrium conditions are $$\mu_1=\mu_2$$ $$N_1d\mu_1=-N_2d\mu_2$$ (Not sure if it is necessarily true that $N_1d\mu_1=N_2d\mu_2=0$.) What I can conclude from here is that in chemical equilibrium, 1 and 2 exchange the same number of particles, i.e. 1 gains some particles from 2 but gives back the same number of particles to 2, so that the net change is zero?
{ "domain": "physics.stackexchange", "id": 94507, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, equilibrium, chemical-potential", "url": null }
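A hedged supplement (not from the original post): if each part is held at the same constant $T$ and $P$, the Gibbs-Duhem relation applies to each single-component subsystem separately and kills the $N\,d\mu$ terms on its own,

```latex
% Gibbs-Duhem at constant T and P, for each single-component part:
N_1\,d\mu_1 = 0, \qquad N_2\,d\mu_2 = 0 ,
% so the differential of G collapses to
dG = (\mu_1 - \mu_2)\,dN_1 ,
% and dG = 0 for arbitrary particle exchange dN_1 forces
\mu_1 = \mu_2 .
```

so under that assumption it is indeed true that $N_1 d\mu_1 = N_2 d\mu_2 = 0$, and the only surviving equilibrium condition is equality of the chemical potentials.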
digital-communications Title: Noisy and Noiseless Channels I am trying to understand the achievable bit rate in noiseless and noisy channels. Particularly, I would like to solve the following problem: Let a real symbol $0\leq x_i < 1$ be transmitted every $1$ millisecond through the following channels: $I$: Noiseless channel such that the output $y_i=\alpha x_i$ where $\alpha$ is the attenuation. $II$: Noisy channel such that $y_i= x_i + n_i$ where $n_i$ is the noise and $|n_i|<0.0000223$. What are the achievable bit rates for (i) channel $I$ assuming $\alpha$ is known, (ii) channel $I$ assuming $\alpha$ is unknown (iii) channel $II$.
{ "domain": "dsp.stackexchange", "id": 6710, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "digital-communications", "url": null }
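For channel $II$, a standard back-of-the-envelope argument (our reasoning, not from the question): with bounded noise $|n_i| < 0.0000223$, input levels spaced $2 \times 0.0000223$ apart in $[0,1)$ never overlap after noise, giving about $\log_2\!\big(1/(2\cdot 0.0000223)\big) \approx 14.45$ bits per symbol at one symbol per millisecond. Under those assumptions:

```python
import math

noise_bound = 0.0000223   # |n_i| < this, from the problem statement
symbol_rate = 1000.0      # one symbol every 1 ms

# Levels spaced 2*noise_bound apart in [0, 1) stay distinguishable,
# so the number of usable levels is about 1 / (2 * noise_bound).
levels = 1.0 / (2.0 * noise_bound)
bits_per_symbol = math.log2(levels)
bit_rate = bits_per_symbol * symbol_rate   # bits per second
```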
organic-chemistry, reaction-mechanism, synthesis, alcohols Title: What is the function of tosyl chloride in the synthesis of an ether? Considering the following reaction:
{ "domain": "chemistry.stackexchange", "id": 6700, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, reaction-mechanism, synthesis, alcohols", "url": null }
friction, centripetal-force I learnt a formula to calculate the maximum speed possible for the circular motion of a car. I don't understand what it means. What will happen if the driver goes faster than that? Would the static friction force become too high? What are the consequences? Before attempting the turn, there was only kinetic friction force.
{ "domain": "physics.stackexchange", "id": 19206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "friction, centripetal-force", "url": null }
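The formula in question is presumably $v_{max} = \sqrt{\mu_s g r}$, obtained by setting the maximum static friction $\mu_s m g$ equal to the required centripetal force $m v^2 / r$. A numeric sketch (the values of $\mu_s$ and $r$ are made up for illustration):

```python
import math

mu_s = 0.8      # assumed static friction coefficient (illustrative)
g = 9.81        # m/s^2
r = 50.0        # turn radius in metres (illustrative)

# Friction supplies the centripetal force: mu_s * m * g >= m * v^2 / r
v_max = math.sqrt(mu_s * g * r)

# Beyond v_max the needed centripetal force exceeds what friction
# can provide, so the car slides outward instead of following the circle.
needed_force_ratio = (1.1 * v_max) ** 2 / (mu_s * g * r)   # > 1 at 10% over
```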
beginner, haskell, recursion, interpreter, brainfuck For execCode we can do the same transformation: cs is invariant across loops, and initial positions are always 0. Note also that prevBracketIndex can be completely precalculated (replaced by a single array lookup), as cs doesn't change. Applying everything above but case we get:

import qualified Data.Sequence as S
import Data.Char (chr, ord)
import Data.Array
import Data.List

cachePrev cs = listArray (bounds cs) $ snd $ mapAccumL f [] $ assocs cs
  where
    f l (i, c) = case c of
      '[' -> (i : l, Nothing)
      ']' -> (tail l, Just $ head l)
      _   -> (l, Nothing)

cacheNext cs = listArray (bounds cs) $ snd $ mapAccumR f [] $ assocs cs
  where
    f l (i, c) = case c of
      ']' -> (i : l, Nothing)
      '[' -> (tail l, Just $ head l)
      _   -> (l, Just i)

cache arr i = case arr ! i of
  Nothing  -> error "oops!"
  Just idx -> idx
{ "domain": "codereview.stackexchange", "id": 10870, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, haskell, recursion, interpreter, brainfuck", "url": null }
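The bracket-caching idea in the Haskell excerpt above can be sketched in Python as well (my illustration, not part of the review): a single stack pass precomputes, for every bracket, the index of its partner, so the interpreter's loop jumps become O(1) array/dict lookups.

```python
def match_brackets(code):
    # One pass with a stack: '[' pushes its index, ']' pops its partner,
    # and both directions are recorded so jumps work either way.
    match = {}
    stack = []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    return match
```

This is the same precalculation the review describes for prevBracketIndex: since the program text never changes, the table is built once and reused.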
diameter. The moment of inertia of the complete disc about an axis passing through its centre O and perpendicular to its plane is $I_1 = \frac{9}{2}MR^2$. Now, the moment of inertia of the removed portion is $I_2 = \frac{1}{2}M\left(\frac{R}{3}\right)^2 = \frac{1}{18}MR^2$. Therefore, the moment of inertia of the remaining portion of the disc about O is $I = I_1 - I_2 = \frac{9}{2}MR^2 - \frac{1}{18}MR^2 = \frac{40}{9}MR^2$. 34 kg m² for a 60 kg woman in the position designed by Balanchine, as shown below: We have included an exercise for calculating the beginning position for the classical Russian technique in the workbook. Radius of gyration 3. $dA = \frac{1}{2}p\,ds$ or $2\,dA = p\,ds$; torque $T = 2q\,dA$. To make a semicircle, take any diameter of the circle. Polar Moment of Inertia: the second moment of area A with respect to the pole O or the z-axis. Multiply pi over four by the difference between both radii taken to the fourth power. In this video I will find the moment of inertia (and second moment of area) I(C. If you missed Problem Session on Thu. moment of inertia has
{ "domain": "accademiakravmagaitalia.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9828232935032462, "lm_q1q2_score": 0.8334024631854222, "lm_q2_score": 0.8479677564567913, "openwebmath_perplexity": 611.8841540734246, "openwebmath_score": 0.6420783400535583, "tags": null, "url": "http://qupw.accademiakravmagaitalia.it/moment-of-inertia-of-quarter-circle.html" }
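The sentence "multiply pi over four by the difference between both radii taken to the fourth power" in the excerpt above describes the second moment of area of an annulus; a minimal sketch:

```python
import math

def annulus_second_moment(r_outer, r_inner):
    # I = (pi / 4) * (r_outer**4 - r_inner**4) about a diameter:
    # the solid-circle result with the hole's contribution subtracted.
    return math.pi / 4.0 * (r_outer**4 - r_inner**4)
```

The same subtraction pattern (whole shape minus the removed part) is what the disc-with-hole calculation in the excerpt relies on.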
beginner, c, programming-challenge Title: HackerRank "Flowers" challenge I'm fairly new to C and wanted a general review of this code. It's a solution to this problem. You and your \$K-1\$ friends want to buy \$N\$ flowers. Flower number \$i\$ has cost \$c_i\$. Unfortunately the seller does not want just one customer to buy a lot of flowers, so he tries to change the price of flowers for customers who have already bought some flowers. More precisely, if a customer has already bought \$x\$ flowers, he should pay \$(x+1)*c_i\$ dollars to buy flower number \$i\$. You and your \$K-1\$ friends want to buy all \$N\$ flowers in such a way that you spend the least amount of money. You can buy the flowers in any order. Input: The first line of input contains two integers \$N\$ and \$K\$ (\$K \le N\$). The next line contains \$N\$ space separated positive integers \$c_1,c_2,\dotsc,c_N\$. Output: Print the minimum amount of money you (and your friends) have to pay in order to buy all \$N\$ flowers.
{ "domain": "codereview.stackexchange", "id": 14030, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, c, programming-challenge", "url": null }
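The standard greedy solution to the flower problem above (the usual approach to this HackerRank task, not code taken from the excerpt; sketched in Python rather than C for brevity) sorts costs in descending order and gives the $i$-th most expensive flower the multiplier $\lfloor i/K \rfloor + 1$, i.e. purchases are spread round-robin so the expensive flowers get the low multipliers.

```python
def min_total_cost(costs, k):
    # Most expensive flowers first, so they get the smallest multipliers;
    # the (i // k + 1) factor models "x flowers already bought" per person.
    total = 0
    for i, c in enumerate(sorted(costs, reverse=True)):
        total += (i // k + 1) * c
    return total
```

For example, with costs 2, 5, 6 and two buyers, one buyer takes the 6-dollar and 2-dollar flowers (6 + 2·2) and the other takes the 5-dollar flower, for 15 total.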
c++, algorithm, sorting, mergesort, iteration

    // Perform pairwise merges.
    for (; run_index < runs - 1; run_index += 2)
    {
        size_t left_end = (run_index + 1) * run_length;

        std::merge(source + run_index * run_length,
                   source + left_end,
                   source + left_end,
                   source + std::min(range_length, (left_end + run_length)),
                   target + run_index * run_length,
                   compare);
    }

    // Handle the orphan run, which occurs in the end of the range.
    if (run_index < runs)
    {
        std::copy(source + run_index * run_length,
                  source + range_length,
                  target + run_index * run_length);
    }
}
{ "domain": "codereview.stackexchange", "id": 15914, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, algorithm, sorting, mergesort, iteration", "url": null }
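The pairwise-merge pass in the C++ excerpt above can be sketched more compactly (my illustration in Python, using the stable merge from heapq in place of std::merge), including the orphan-run copy at the end:

```python
from heapq import merge  # stable two-way merge of sorted inputs

def merge_pass(source, run_length):
    # One bottom-up pass: merge adjacent sorted runs of run_length items;
    # a trailing orphan run (no right-hand partner) is copied through.
    target = []
    i = 0
    n = len(source)
    while i + run_length < n:          # a right-hand run exists to merge with
        mid = i + run_length
        end = min(n, mid + run_length)
        target.extend(merge(source[i:mid], source[mid:end]))
        i = end
    target.extend(source[i:])          # orphan (or lone) run, unchanged
    return target
```

Repeated passes with a doubling run_length (1, 2, 4, ...) yield a full bottom-up merge sort, which is what the surrounding C++ function appears to implement.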
c#

public class NumberGenerator : INextNumberGenerator
{
    public NumberChainItem GetNext(int input)
    {
        var digits = input.ToString().Select(ch => int.Parse(ch.ToString())).ToArray();
        var descending = DigitsToNumber(digits.OrderByDescending(i => i));
        var ascending = DigitsToNumber(digits.OrderBy(i => i));
        var number = descending - ascending;

        var chainItem = new NumberChainItem
        {
            Number = number,
            GenerationDescription = string.Format("{0} - {1}", descending, ascending)
        };

        return chainItem;
    }

    private int DigitsToNumber(IEnumerable<int> digits)
    {
        var numberString = string.Join("", digits);
        var number = int.Parse(numberString);
        return number;
    }
}

public class ChainGenerator
{
    private readonly INextNumberGenerator _nextGenerator;

    public ChainGenerator()
    {
        _nextGenerator = new NumberGenerator();
    }
{ "domain": "codereview.stackexchange", "id": 21638, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#", "url": null }
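The C# above computes one step of what looks like Kaprekar's routine (descending digits minus ascending digits). A compact Python rendition of the same step, as a sketch (note one assumption of mine: I zero-pad to 4 digits, which the C# version does not):

```python
def next_number(n):
    # Kaprekar step: sort digits, subtract ascending from descending.
    digits = sorted(str(n).zfill(4))          # pad so e.g. 495 -> "0495"
    ascending = int("".join(digits))
    descending = int("".join(reversed(digits)))
    return descending - ascending
```

Iterating this from any 4-digit number with at least two distinct digits reaches the fixed point 6174 within a few steps.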
electrostatics, potential, computational-physics, boundary-conditions, dipole Let me give an example: suppose we have an electrically neutral unit cell that acquires an electric dipole moment during the simulation. This unit cell, and its dipole moment, would be instantly reproduced in all image cells. Consider the electric potential at an arbitrary point: it would be surrounded by an infinite number of shells of dipoles, with shell area increasing as $r^2$ ($r$ is the shell radius) while the dipole potential decreases as $r^{-2}$; the sum does not converge, so we would have an infinite electric potential at every point! Am I missing something? I cannot see how PME or Ewald summation, or any other algorithm, can solve a physical issue, unless those methods in some way impose additional boundary conditions. But I don't see how. Can you help me understand? Thank you in advance.
{ "domain": "physics.stackexchange", "id": 37495, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, potential, computational-physics, boundary-conditions, dipole", "url": null }
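The shell-counting argument in the question above can be checked numerically with a crude sketch (all physical prefactors dropped, cubic lattice assumed): the number of lattice sites near radius $r$ grows like $r^2$ while an $r^{-2}$ potential falls off at the same rate, so each shell contributes a roughly constant amount and the naive sum grows without bound with the cutoff, which is exactly the conditional convergence that Ewald-type resummation is built to handle.

```python
import itertools

def shell_contribution(r, half_width=0.5):
    # Sum a model r**-2 potential over cubic-lattice sites whose distance
    # from the origin lies within half_width of r (prefactors dropped).
    total = 0.0
    rng = range(-r - 1, r + 2)
    for x, y, z in itertools.product(rng, rng, rng):
        d = (x * x + y * y + z * z) ** 0.5
        if d > 0 and abs(d - r) <= half_width:
            total += 1.0 / d ** 2
    return total

# shell_contribution(5) and shell_contribution(10) come out roughly equal,
# so the sum over shells diverges roughly linearly in the cutoff radius.
```

This toy model ignores the angular structure of a real dipole field (whose cancellations make the sum conditionally, rather than absolutely, divergent), but it illustrates the scaling behind the question.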