genetics, human-genetics A very similar event to what you mention has been shown to have occurred when migrants from Persia moved to India with their Zoroastrian religion, forming the Parsee community in India. Clearly, Middle Eastern and Native American populations are much less genetically similar to one another than Indians and Iranians, so it would be correspondingly much easier to distinguish them genetically.
{ "domain": "biology.stackexchange", "id": 11511, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "genetics, human-genetics", "url": null }
c#, functional-programming, linq This approach skips the sorting, and loops through the input list only once. Checking for inclusion in the HashSet is \$O(1)\$, so if I'm not mistaken, this takes the solution from \$O(n\log n)\$ to \$O(n)\$. As the comment by @slepic points out, we need to be careful that sum - item doesn't overflow. If that happens, it automatically means that the complement cannot appear in the array, since it wouldn't fit in our datatype. To account for this, we can do the subtraction in a checked context and catch any OverflowException.
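The same single-pass idea can be sketched in Python (hypothetical function name; Python integers are arbitrary-precision, so the overflow caveat from the C# version doesn't arise here):

```python
def has_pair_with_sum(items, target):
    """Single pass over the input: for each item, check whether its
    complement (target - item) has already been seen. Set membership
    is O(1) expected, so the whole scan is O(n)."""
    seen = set()
    for item in items:
        if target - item in seen:
            return True
        seen.add(item)
    return False
```

Note that, unlike the sorted two-pointer approach, this needs no preprocessing and stops as soon as a matching pair is found.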
{ "domain": "codereview.stackexchange", "id": 36459, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, functional-programming, linq", "url": null }
rosnode

#Goal Action Server
def action_server_for_Point(self):
    self._as = actionlib.SimpleActionServer('Point', Point_ActionAction, execute_cb=self.handle_Point_action, auto_start=False)
    self._as.start()

def handle_Point_action(self, goal):
    r = rospy.Rate(1)
    success = True
    a = 0
    while success:
        print "Hello"
        a = a + 1
        if a > 10:
            success = False
        r.sleep()
    rospy.loginfo('Executing Goal ')
    result = Point_ActionResult()
    if True:
        result.status = a

#Service
def handle_Stop_service(self, req):
    print "Request Received %d" % (req.devNo)
    return (0)

def service_server_for_Stop(self):
    s = rospy.Service('Stop', Stop_Service, self.handle_Stop_service)
    print "Ready Processing for Service."
    rospy.spin()

if __name__ == '__main__':
    # Initialize the node and name it.
    # Go to class functions that do all the heavy lifting. Do error checking.
    try:
        ne = CN_Dish()
        ne.(call all the function defined in the class to start publishing, subscription, services and action server)
        print "Done"
    except rospy.ROSInterruptException:
        pass

Any hint or help would be a great help !!

Originally posted by amarbanerjee23 on ROS Answers with karma: 5 on 2017-06-21
Post score: 0
{ "domain": "robotics.stackexchange", "id": 28166, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rosnode", "url": null }
java, array, memory-management, combinatorics, integer It's more common to use just List on the left:

public List<Integer> getRandomPermutation() {
    List<Integer> availableNumbers = new ArrayList<Integer>();

ArrayList is an implementation detail and your caller doesn't need to know it. I also changed unused to availableNumbers; I think it better represents what the List holds. I'm not sure that I'd call these brute force versus smart force. A brute force algorithm is one that tries every possibility once. A brute force solution would be to generate all \$10!\$ permutations and then randomly pick one. Your algorithm can actually iterate an arbitrary number of times (in fact, it's not guaranteed to finish). I might call it naive rather than brute force.
{ "domain": "codereview.stackexchange", "id": 10615, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, array, memory-management, combinatorics, integer", "url": null }
Graphs of Circles. Question: Sketch the graphs of the limacons: {eq}(a) \ r=2+\cos\theta \\ (b) \ r=1+\cos\theta \\ (c) \ r=1+2\cos\theta {/eq} Limacons: Limacons are a special kind. $r = \sqrt{2} + \sqrt{2}\sin\theta$: this is the graph of a cardioid that is symmetric with respect to the vertical axis. LEMNISCATES: Lemniscates are propeller-shaped graphs, as shown in the figures on the right. If $$r<0$$, the point is $|r|$ units (like a radius) in the opposite direction of the angle. Identify the polar graph (circle, spiral, cardioid, limacon, rose): if a circle, name the center (in polar coordinates) and the radius. The graph was sketched in class. 7 Graphs of Polar Equations. Objective: In this lesson you learned how to graph polar equations. This is cross-posted on CAS Musings. Use point plotting to graph polar equations. You can drag the point on the rectangular graph. Then you're free to explore the beauty of circles, spirals, roses, limacons and more in this polar graphing playground. By performing three tests, we will see how a graph has this type of symmetry. See Figure 8.9. We will also look at many of the standard polar graphs as well as circles and some equations of lines in terms of polar coordinates. Polar Coordinates: Graphs. Graphing Limacons: The graphs of $r=a+b\sin\theta$, $r = a + b\cos\theta$, $r=a-b\sin\theta$, $r = a-b\cos\theta$ (with $a>0$, $b>0$) are called limacons. With the ratio $a/b$: $a/b < 1$ gives an inner loop; $a/b = 1$ gives a heart-shaped cardioid; $1 < a/b < 2$ gives a dimpled limacon with no inner loop; $a/b \ge 2$ gives no dimple and no inner loop. Learn Desmos: Polar Graphing. Convert the coordinate plane to a polar grid with just a pair of clicks (starting with the wrench on the top right).
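The standard limacon classification by the ratio $a/b$ can be captured in a tiny helper (a sketch; it assumes the curve has the form $r = a \pm b\cos\theta$ or $r = a \pm b\sin\theta$ with $a, b \neq 0$):

```python
def limacon_type(a, b):
    """Classify a limacon r = a + b*cos(theta) (or the sin/minus variants)
    by the ratio |a/b|:
      < 1   -> inner loop
      = 1   -> cardioid (heart-shaped)
      (1,2) -> dimpled, no inner loop
      >= 2  -> convex, no dimple and no inner loop
    """
    ratio = abs(a / b)
    if ratio < 1:
        return "inner loop"
    if ratio == 1:
        return "cardioid"
    if ratio < 2:
        return "dimpled"
    return "convex"
```

Applied to the three curves in the question: (a) $r = 2+\cos\theta$ has $a/b = 2$ (convex), (b) $r = 1+\cos\theta$ has $a/b = 1$ (cardioid), and (c) $r = 1+2\cos\theta$ has $a/b = 1/2$ (inner loop).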
{ "domain": "primaopoiarrivo.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9871787857334058, "lm_q1q2_score": 0.8204867909489747, "lm_q2_score": 0.8311430541321951, "openwebmath_perplexity": 1507.055918034725, "openwebmath_score": 0.7386087775230408, "tags": null, "url": "http://gjpy.primaopoiarrivo.it/how-to-graph-limacons.html" }
quantum-mechanics, density-operator, density-functional-theory Now to my question: is the density matrix in the context of DFT as explained above just a tool to ease technical/computational operations or can still any meaning in the 'classical' quantum-mechanical view be given to it? While the Kohn-Sham states $|n^{\rm KS}\rangle$ are really only an auxiliary construct to compute the non-interacting kinetic energy, one often nevertheless goes ahead and interprets the $|n^{\rm KS}\rangle$ as single-particle states and the Kohn-Sham eigenvalues $\epsilon_n$ as quasi-particle energies (e.g. one uses the Kohn-Sham band structure as a first approximation to an experimentally observed quasi-particle band structure with such famous problems of density functional theory typically underestimating band gaps significantly). If one does accept the interpretation of Kohn-Sham states as single-/quasi-particle states, the eigenvalues $\lambda_n$ of the density matrix $\hat \rho = \sum_n \lambda_n |n^{\rm KS}\rangle\langle n^{\rm KS}|$ are the probabilities to find a particle in state $|n^{\rm KS}\rangle$ (or simply the occupation of that state). $\hat \rho$ is therefore called one-particle reduced density matrix: $N-1$ particle degrees of freedom have been traced out. The eigenvectors of one-particle reduced density matrices are called natural orbitals (which in the case of Kohn-Sham density functional theory coincide with the Kohn-Sham states). Since Kohn-Sham density functional theory only uses a single Slater determinant, at $T=0$K the $\lambda_n$ are either $0$ or $1$ (or $0$ or $2$ if spin degeneracy is factored into the occupation numbers); correlation beyond a single Slater determinant would lead to fractional occupation even at $0$K.
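The statement that $\lambda_n \in \{0, 1\}$ for a single Slater determinant at $T=0$ is equivalent to the one-particle reduced density matrix being idempotent, $\hat\rho^2 = \hat\rho$. A small pure-Python check in the natural-orbital basis, where $\hat\rho$ is diagonal (the occupation numbers here are made up purely for illustration):

```python
def matmul(A, B):
    """Plain nested-list matrix product for small square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def diag(vals):
    """Diagonal matrix from a list of eigenvalues (occupations)."""
    n = len(vals)
    return [[vals[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

def is_idempotent(rho, tol=1e-12):
    """rho @ rho == rho holds exactly when every eigenvalue is 0 or 1."""
    rho2 = matmul(rho, rho)
    n = len(rho)
    return all(abs(rho2[i][j] - rho[i][j]) < tol
               for i in range(n) for j in range(n))

# Single determinant at T = 0: occupations 0 or 1 -> idempotent.
single_det = diag([1.0, 1.0, 0.0])
# Correlated state: fractional occupations -> not idempotent.
correlated = diag([0.9, 0.6, 0.5])
```

Fractional eigenvalues fail the check because $\lambda^2 \neq \lambda$ unless $\lambda$ is 0 or 1, which is exactly the correlation signature described above.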
{ "domain": "physics.stackexchange", "id": 58260, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, density-operator, density-functional-theory", "url": null }
haskell, reinventing-the-wheel

Title: Haskell's `group` Function

I implemented the group function:

group' :: (Ord a) => [a] -> [[a]]
group' [] = []
group' xs = takeWhile (== head' xs) (sorted xs) : (group' $ dropWhile (== head' xs) (sorted xs))
    where sorted ys = mergesort' (ys)
          head' (x:_) = x

Note - please assume mergesort is implemented properly. Its signature is: [a] -> [a] where the latter list is sorted. How's it look? Is head''s pattern matching exhaustive? It seems to me it is, since group' xs will only be reached if group''s argument is non-empty, i.e. it'll always have a head.

You have made some uncommon stylistic choices which are not to your benefit, and a few things aren't doing what I think you think they're doing. First, the stylistic elements. Your whitespace is excessive; there's no need to push everything that far to the right and out of line with the beginning of the RHS. I also would move the colon down to begin the next line; this is mostly a matter of personal style. The parentheses around ys are unnecessary and, while harmless noise for the compiler, will distract your human reader. My version would look like this.

group' xs = takeWhile (== head' xs) (sorted xs)
          : group' (dropWhile (== head' xs) (sorted xs))
  where
    sorted ys = mergesort' ys
    head' (x:_) = x
{ "domain": "codereview.stackexchange", "id": 7421, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "haskell, reinventing-the-wheel", "url": null }
c#, .net, mysql, database, wpf

public SqlCmdParameter(SqlCmdParameter original)
{
    _publicName = original._publicName;
    _storedProcedureParameterName = original._storedProcedureParameterName;
    _direction = original._direction;
    _isRequired = original._isRequired;
    _type = original._type;
    _isValueSet = original._isValueSet;
    _value = original._value;
    _valueKey = original._valueKey;
    _valueInt = original._valueInt;
    _valueDouble = original._valueDouble;
    _skipInsertOfPrimaryKey = original._skipInsertOfPrimaryKey;
}

public string PublicName { get { return _publicName; } }
public ParameterDirection Direction { get { return _direction; } set { _direction = value; } }
public bool IsValid { get { return _dataIsValid(); } }
public bool IsRequired { get { return _isRequired; } set { _isRequired = value; } }
public string Value { get { return _value; } set { SetValue(value); } }
public bool BValue { get { return (_valueInt > 0); } set { SetValue(value); } }
public uint KeyValue { get { return _valueKey; } set { _valueKey = value; } }
public MySqlDbType Type { get { return _type; } }

public bool AddParameterToCommand(MySqlCommand cmd)
{
    if (_skipInsertOfPrimaryKey)
    {
        return true;
    }

    // If it is an output variable validity doesn't matter.
    if (_direction != ParameterDirection.Input)
    {
        string IndexByNameValue = _storedProcedureParameterName;
        cmd.Parameters.Add(new MySqlParameter(IndexByNameValue, _type));
        cmd.Parameters[IndexByNameValue].Direction = _direction;
        return true;
    }

    if (!IsValid)
    {
        return IsValid;
    }
{ "domain": "codereview.stackexchange", "id": 35353, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, mysql, database, wpf", "url": null }
interpolation, compressive-sensing, reconstruction, multirate, sparsity

- Position and orientation of the source
- Directional characteristic of the source
- Position and orientation of the receiver
- Directional characteristic of the receiver
- Geometry and acoustic properties of all surfaces, including reflection, absorption, diffraction, resonance, diffusion, etc.

First of all you need to define which of these you want to interpolate. I think you want to interpolate over the receiver location for a fixed source position in a fixed room, but it would help to state that specifically. And what about orientation? Natural sources and receivers are NOT omnidirectional, so orientation matters. I think your best chance of finding any sparsity would be in the perceptual domain. In a reasonably well-behaved room, when moving relatively small distances from A to B, the fine structure of the RIR changes drastically (to the point of being mostly uncorrelated), but the two responses often sound very similar. One potential way here would be to derive relevant perceptual attributes, interpolate those, and then resynthesize an RIR from that. Another approach would be to use a parametric physical model. The RIR can be broken down roughly into three areas:

- Direct sound
- Early reflections
- Late reverberation

The direct sound can often be calculated directly from the locations, orientations and polar patterns of source and receiver. Early reflections are probably the trickiest part: you would need to identify the major reflections (ceiling, floor, etc.) in the measured RIRs and then try to interpolate "matching" reflections. The late reverberation is fairly straightforward: you can use the frequency-dependent decay envelope and just fill the fine structure with white noise. The things to interpolate here are the reverb times and the direct/reflected energy ratio, to get the reverb level correct. Things get much more difficult in less "well-behaved" rooms.
For example, if you move from an area with a direct line of sight to the source to one where the source is obstructed, the RIR changes fairly drastically over a relatively small distance. I don't think this can be interpolated by any reasonable means.
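The late-reverberation suggestion can be sketched as noise shaped by an exponential decay envelope. This is a frequency-independent simplification (a real implementation would apply it per frequency band); the function name and parameters are made up for illustration:

```python
import math
import random

def synth_late_reverb(rt60, duration, fs, seed=0):
    """Sketch: late reverberation as Gaussian white noise shaped by an
    exponential amplitude envelope exp(-t/tau).

    rt60 is the time (in seconds) for the level to decay by 60 dB, so
    20*log10(exp(-rt60/tau)) = -60  =>  tau = rt60 / (3*ln(10)).
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    tau = rt60 / (3.0 * math.log(10.0))
    n = int(duration * fs)
    return [rng.gauss(0.0, 1.0) * math.exp(-i / (tau * fs)) for i in range(n)]
```

To interpolate between two measured positions, one would interpolate the (band-wise) decay times and levels and resynthesize, rather than interpolating the noise-like fine structure sample by sample.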
{ "domain": "dsp.stackexchange", "id": 12424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "interpolation, compressive-sensing, reconstruction, multirate, sparsity", "url": null }
turing-machines, halting-problem Title: Faults in the halting problem reasoning I find the question of the existence of a machine $H$ which, given as input any algorithm $P$, outputs whether $P$ halts or not, very interesting. Alan Turing disproved the existence of such a machine $H$ in the following way: "Assume $H$ exists. The input of $H$ is any algorithm $P$. The output of $H$ is $YES$ if $P$ halts and $NO$ if $P$ doesn't. Create another machine $N$ with input in the set $\{YES,NO\}$ and without output. The machine $N$ does this: if the input is $YES \to \text{loop for ever}$; otherwise, if the input is $NO \to \text{halt}$. Let us call the composition of $H$ followed by $N$ as $X$. We will simply write $X(P) = N(H(P))$. Now $X$ has a machine as input and no output (it either halts or loops forever). What Turing does is to "feed" $X$ with itself ... and reason as follows: if $X(X)$ halts then $X(X) = N(H(X)) = N(YES) = \text{loops for ever}$, from which somehow the contradiction arises ..." but here one should assume (in order to obtain a contradiction) that $X(X) = X$ ... Question
{ "domain": "cs.stackexchange", "id": 17166, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turing-machines, halting-problem", "url": null }
a source node and a destination node in a graph: Using DFS: The idea is to do a Depth First Traversal of the given directed graph. … capacity, cost, demand, traffic frequency, time, etc. There are already comprehensive network analysis packages in R, notably igraph and its tidyverse-compatible interface tidygraph. G is a weighted graph with vertex set $\{0, 1, 2, \dots, n-1\}$ and integer weights represented in the following adjacency matrix form. Geodesic paths are not necessarily unique, but the geodesic distance is well-defined since all geodesic paths have the same length. For example, Figure 1a illustrates a graph G, and Figure 1e shows an augmented graph G∗ constructed from G. The latter only works if the edge weights are non-negative. Args: graph: weighted graph with no negative cycles. Weighted directed graphs may be used to model communication networks, and shortest distances (shortest-path weights) between nodes may be used to suggest routes for messages. Chan, September 30, 2009. Abstract: In the first part of the paper, we reexamine the all-pairs shortest paths (APSP) problem and present a new algorithm with running time $O(n^3 \log^3 \log n / \log^2 n)$, which improves all known algorithms for general real-weighted dense graphs. This module covers weighted graphs, where each edge has an associated weight or number. The problem is to find k directed paths starting at s, such that every node of G lies on at least one of those paths, and such that the sum of the weights of all the edges in the paths is minimized. The one-to-all shortest path problem is the problem of determining the shortest path from node s to all the other nodes in the graph. Finding shortest paths in weighted graphs: In the past two weeks, you've developed a strong understanding of how to design classes to represent a graph and how to use a graph to represent a map. Exercise 3 [Modeling a problem as a shortest path problem in graphs]: Four imprudent walkers are caught in the storm and night. How to use BFS for a weighted graph to find shortest paths?
If your graph is weighted, then BFS may not yield the shortest-weight paths. You are given a weighted directed acyclic graph G, and a start node s in G. It finds a shortest-path tree for a weighted
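For weighted graphs with non-negative edge weights, Dijkstra's algorithm computes the shortest-path weights where plain BFS cannot. A minimal sketch (the adjacency-list format here is an assumption for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path weights from source, for non-negative edge weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns dict mapping each reachable node to its shortest distance.
    """
    dist = {source: 0}
    pq = [(0, source)]            # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue              # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

BFS gives the fewest-edge path, which coincides with the shortest weighted path only when all weights are equal; Dijkstra's greedy relaxation handles the general non-negative case.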
{ "domain": "chicweek.it", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.987758723627135, "lm_q1q2_score": 0.8528226068102591, "lm_q2_score": 0.8633916222765627, "openwebmath_perplexity": 383.31142122654893, "openwebmath_score": 0.5556688904762268, "tags": null, "url": "http://isxy.chicweek.it/number-of-shortest-paths-in-a-weighted-graph.html" }
nlp, bert

# compare output with using the huggingface model directly
bert_self_attn = transformers.models.bert.modeling_bert.BertSelfAttention(torch_model.config)
bert_self_attn.load_state_dict(torch_model.encoder.layer[0].attention.self.state_dict())
bert_self_attn.eval()  # <------ set model in inference mode
output_self_attention2 = bert_self_attn(output_embedding)[0]
output_self_attention != output_self_attention2  # all entries False -- the tensors are equal
{ "domain": "datascience.stackexchange", "id": 9488, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nlp, bert", "url": null }
java, game, mvc, swing Testing I did not see a single unit test. Since you mentioned you are in university, and my experience is that people do not learn how to write proper unit tests, nor how to grow an application with unit tests, during their education - at least I did not, nor did my apprentices: Please write unit tests. Not only will it help you verify and document your application, but it will also help you with your design. There are a few places in your code where testing will be hard or even not possible, and this is always a sign of bad design. And: in the real world, testing is a crucial part of software development. Other / smaller stuff
{ "domain": "codereview.stackexchange", "id": 26191, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, mvc, swing", "url": null }
physical-chemistry, kinetics Title: Rate law of reactions, what about the proportions? Consider the reaction $$\ce{A + B + C -> D + E}$$ and $\mathrm{rate} = k[\ce{A}][\ce{B}]^2.$ My book said if we double the amount of $\ce{A},$ then the reaction rate will double. I understand this, but what about the law of definite proportions? [OP, in a comment] oh ok, so if we double the amount of only one substance, that substance will get left? Yes. The rate law tells you how fast product is being made. The change in the different species is still given by the chemical reaction equation (which tells you in which proportion the amounts of species will change, not in which proportion they are present at the beginning or at the end of the reaction). You can see that when you look at the definition of rate: $$\mathrm{rate} = \frac{1}{\nu_i}\frac{\mathrm{d}[X_i]}{\mathrm{d}t}$$ where $\nu_i$ is the (signed, i.e. negative for reactants) stoichiometric factor of the $i$-th species. Dividing by the stoichiometric factor ensures that the law of definite proportions is followed. [OP in another comment] we can't simply put more of a substance to get more of the product, the reactants must have a fixed ratio. The law of definite proportions is not about the amounts present, but about the change in those amounts. I can set up a reaction with 1000 times more elemental oxygen than hydrogen, but they will still react in a 1:2 ratio if the reaction yields water.
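The "excess reactant is left over" point can be illustrated numerically. A rough Euler-integration sketch (the rate constant, step size, and initial concentrations are made-up values for illustration): all three reactants fall at the same rate because their stoichiometric factors are all 1, so only the relative amounts at the start decide what remains.

```python
def simulate(a0, b0, c0, k=1.0, dt=1e-3, steps=20000):
    """Euler integration of rate = k*[A]*[B]**2 for A + B + C -> D + E.

    Each reactant decreases by rate*dt per step (stoichiometric factor 1),
    and D increases by the same amount. Whatever reactant is in excess
    at t=0 is left over when the limiting reactants are consumed.
    """
    a, b, c, d = a0, b0, c0, 0.0
    for _ in range(steps):
        r = k * a * b * b
        a -= r * dt
        b -= r * dt
        c -= r * dt
        d += r * dt
    return a, b, c, d
```

Starting with [A] = 2, [B] = [C] = 1, the difference [A] - [B] stays fixed at 1 for the whole run, so roughly one unit of A survives after B and C are (nearly) used up, exactly as the law of definite proportions requires of the *changes*.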
{ "domain": "chemistry.stackexchange", "id": 13800, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, kinetics", "url": null }
I was showing that, when you use the JIT-version with the compilation to C, the first call for arguments of a given type takes a while since we have to include the time to compile to C. All subsequent calls are instant, because the compiled function is cached. – Leonid Shifrin Jun 17 '12 at 11:11

Here's another approach.

mergeLists[lista_, listb_, crit_: LessEqual] :=
 Module[{merge},
  merge[list1_, list2_] /; crit[First[list1], First[list2]] :=
   With[{part = TakeWhile[list1, crit[#, First[list2]] &]},
    Sow[part];
    If[Length[part] == Length[list1], Sow[list2],
     merge[list1[[Length[part] + 1 ;;]], list2]]];
  merge[list2_, list1_] /; crit[First[list1], First[list2]] :=
   merge[list1, list2];
  merge[list1_, list2_] :=
   With[{part = TakeWhile[list1, Not[crit[First[list2], #]] &]},
    Sow[part];
    If[Length[part] == Length[list1], Sow[list2],
     merge[list1[[Length[part] + 1 ;;]], list2]]];
  Flatten[Reap[merge[lista, listb];][[2]]]]

It does give slightly different results from Leonid's code though. For example, for

list1 = {1, 4, 3}; list2 = {2, 3, 4};

I get with my code

mergeLists[{1, 4, 3}, {2, 3, 4}, LessEqual]
(* out: {1, 2, 3, 4, 4, 3} *)

whereas with Leonid's code I get

Block[{$IterationLimit = Infinity}, merge[{1, 4, 3}, {2, 3, 4}, LessEqual]]
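For comparison, the basic merge step with a comparison predicate can be sketched in Python (a hedged analogue, not a port: on *unsorted* input it simply consumes whichever head the predicate prefers, so its output on the {1, 4, 3} example may differ from both Mathematica versions above):

```python
def merge_lists(a, b, crit=lambda x, y: x <= y):
    """Merge two lists, repeatedly taking the head that crit prefers.
    If both inputs are sorted under crit, the output is sorted too."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if crit(a[i], b[j]):
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])   # at most one of these
    out.extend(b[j:])   # extends is non-empty
    return out
```

This is the linear-time merge at the heart of merge sort; the Mathematica versions above additionally use TakeWhile to Sow whole runs at a time instead of single elements.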
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9173026573249612, "lm_q1q2_score": 0.8298383392089704, "lm_q2_score": 0.9046505344582187, "openwebmath_perplexity": 3240.3428661326166, "openwebmath_score": 0.23184773325920105, "tags": null, "url": "http://mathematica.stackexchange.com/questions/6931/implementing-a-function-which-generalizes-the-merging-step-in-merge-sort?answertab=votes" }
c++, hash-map

class Database {
    std::multimap<std::string, std::shared_ptr<Student>> mapByName;
    std::map<long, std::shared_ptr<Student>> mapByStudentNumber;
public:
    void insertStudent (const std::shared_ptr<Student>& p) {
        mapByName.emplace (getName(p), p);
        mapByStudentNumber.emplace (p->getStudentNumber(), p);  // 2 insertions needed.
    }
    inline void removeStudentByStudentNumber();
    inline void removeStudentByName();
private:
    void remove (const std::shared_ptr<Student>& p) {
        mapByName.erase(getName(p));
        mapByStudentNumber.erase(p->getStudentNumber());  // 2 removals needed.
    }
    std::string getName (const std::shared_ptr<Student>& p) const {return p->fullName();}
};

inline void Database::removeStudentByStudentNumber() {
    long studentNumber;
    std::cout << "Student number? ";
    std::cin >> studentNumber;
    const auto it = mapByStudentNumber.find(studentNumber);
    if (it == mapByStudentNumber.end()) {
        std::cout << "No student with that student number exists.\n\n";
        return;
    }
    remove(it->second);
}

inline void Database::removeStudentByName() {
    // Similar to above
}

int main() {}

Algorithmic time analysis: If there are to be k different search criteria later on, then \$k \cdot O(\log N)\$ is still better than \$O(N)\$ up to, say, k = 20 or so. Agree?
{ "domain": "codereview.stackexchange", "id": 18362, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, hash-map", "url": null }
Now $25^{-1}=48$, since $25 \cdot 48 = 1200 \equiv 1 \pmod{109}$. So we have - $x = 48 \cdot 3 \equiv 35 \pmod{109}$ - Hi thanks for the reply, can you please explain how you got "Now $25^{-1}=48$, since $25 \cdot 48 = 1200 \equiv 1 \pmod{109}$"? –  KaliKelly Aug 22 '10 at 6:34 To calculate a modular inverse you need to use the extended Euclidean algorithm. (The procedure is there in the Wikipedia link.) –  KalEl Aug 22 '10 at 6:38 Let me know if you have trouble understanding why $48=25^{-1}$. (The reason is that $25 \cdot 48 = 109 \cdot 11 + 1$.) –  KalEl Aug 22 '10 at 7:29 Okay I've used the EEA, I got 25(48) - 109(11) = 1. I googled how to do it following this: mast.queensu.ca/~math418/m418oh/m418oh04.pdf Which was what you got... in one line, while I took a dozen lines. BTW, how do I use the fancy math formatting in my posts? –  KaliKelly Aug 22 '10 at 7:40 The fancy formatting is done by a component called MathJax, which is basically a TeX formatter using Javascript. So for example 25^{−1} in-between two dollar signs looks like $25^{-1}$ automatically when you post. You can google TeX formatting to learn how it works, and for anything on this site which interests you, you can right-click the mathematical equation to "view source". –  KalEl Aug 22 '10 at 9:30 I meant this as a comment to the discussion after Student's answer but it seems that I don't have the option (reputation too low?) so I'll post it as an answer. Sorry.
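The extended Euclidean algorithm mentioned in the comments can be sketched in a few lines (helper names are illustrative):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Inverse of a modulo m, when gcd(a, m) == 1."""
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no inverse: gcd(a, m) != 1")
    return x % m
```

Running it on the example above recovers the "one line" computation: the Bezout identity 25(48) − 109(11) = 1 gives $25^{-1} \equiv 48 \pmod{109}$.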
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9820137868795701, "lm_q1q2_score": 0.8074010551772399, "lm_q2_score": 0.8221891239865619, "openwebmath_perplexity": 665.7038812563881, "openwebmath_score": 0.9431530237197876, "tags": null, "url": "http://math.stackexchange.com/questions/2991/not-understanding-simple-modulus-congruency/2993" }
quantum-mechanics, angular-momentum, hilbert-space A trivial example of this would be a spin half up and a spin half down. Without antisymmetry, you would have $$|\uparrow \downarrow\rangle = \frac{1}{2}\left [ |\uparrow \downarrow\rangle + |\downarrow \uparrow\rangle \right ] +\frac{1}{2}\left [|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle \right ].$$ The first term is the $S=1$ part, the second the $S=0$ part. So you have both values. If the state is required to be antisymmetric, only the second, $S=0$, term survives. A way to see that the closed shell gives zero angular momentum is to calculate $L_x$, $L_y$, and $L_z$. As you say, $L_z$ is zero since the $m$ values add to zero. If you write $L_x = \sum_i L_{ix}$, that is, the sum of the single-particle angular momenta in the $x$ direction, and then remember that $L_x = (L^++L^-)/2$, then, for a closed shell, every raising operator either gives zero if it acts on an $m=\ell$ state, or it makes an orbital have the same angular momentum as another. In the Slater determinant wave function, this gives zero since two particles would be in the same orbital. The lowering operator acts similarly: it gives zero if $m=-\ell$ or it doubly occupies an orbital. So $L_x=0$. $L_y=(L^+-L^-)/2i$, so again it is zero. Since $L_x=L_y=L_z=0$, $L^2=0$.
{ "domain": "physics.stackexchange", "id": 50231, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, angular-momentum, hilbert-space", "url": null }
machine-learning, deep-learning, neural-network, data-mining, machine-learning-model Title: Auto-ML for only fixed estimator I am working on a binary classification problem with 1000 rows and 28 columns. I would like to use an auto-ML solution to try out different combinations of hyperparameters etc., but the algorithm should only be logistic regression. I don't wish to use other algorithms because of their lack of interpretability. So, I would like my auto-ML solution to stick to logistic regression and try out different values for its hyperparameters. Of course, I might use fixed estimators like decision trees, random forests etc. as well. Is there any auto-ML solution that can use a fixed estimator? I read about TPOT, EvalML, AutoML etc., but they all try multiple algorithms and finally output the best one (which may not be logistic regression). How can I restrict my auto-ML solution to only use logistic regression? I found that we can do this using TPOT's config_dict and pass it as input to the classifier function, as shown below:

tpot_config = {
    'sklearn.linear_model.LogisticRegression': {
        'penalty': ["l1", "l2"],
        'C': [1e-4, 1e-3, 1e-2, 1e-1, 0.5, 1., 5., 10., 15., 20., 25.],
        'dual': [True, False]
    },
}
tpot = TPOTClassifier(max_time_mins=10, verbosity=2, config_dict=tpot_config, scoring='f1')
tpot.fit(ord_train_t, y_train)

This ensures that TPOT searches for the best pipeline based on the configs provided by config_dict. However, if there are other auto-ML tools that support this, I am interested to hear about them from others here as well.
{ "domain": "datascience.stackexchange", "id": 10516, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, deep-learning, neural-network, data-mining, machine-learning-model", "url": null }
homework-and-exercises, classical-mechanics, kinematics, rotational-kinematics K, fixed to the ground; K$'$, rotating with the carousel. In K, B is steady (velocity $\vec v_B=0$) and A's velocity is $$\vec v_A = \vec\omega\times\vec r_A,$$ always tangent to the circle. There are two positions of A where $\vec v_A$ points directly at B: the one drawn, where A is approaching B, and the one symmetric wrt CB, where A is receding from B. In K$'$, $\vec v_A'=0$ whereas $$\vec v_B' = -\vec\omega \times \vec r_B.\tag1$$ This lies on a straight line from B orthogonal to CB and never touches the circumference. So B's velocity is never directed towards A. The right drawing shows that: B's velocity has a component towards A, of $1\,\rm m\,s^{-1}$, but this is not the full velocity of B as seen from the K$'$ frame. That is drawn as the downward $2\,\rm m\,s^{-1}$ vector. This has nothing to do with A's location. Edit: In order to better understand the meaning of both components of $\vec v_B'$, let's decompose $\vec r_B$ in eq. (1): $$\vec r_B = \overrightarrow{CA} + \overrightarrow{AB}.$$ Then $$\vec v_B' = -\vec\omega \times \overrightarrow{CA} - \vec\omega \times \overrightarrow{AB}.\tag2$$ Eq. (2) shows that $\vec v_B'$ is the (vector) sum of two contributions. The former is the one pointing towards A, the latter is perpendicular. The latter arises because what you called "A's frame" is rotating with angular velocity $\vec\omega$, so that B appears to rotate around A in the opposite direction.
{ "domain": "physics.stackexchange", "id": 58838, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, classical-mechanics, kinematics, rotational-kinematics", "url": null }
algorithm-analysis Title: Time complexity of this nested loop should be $O(n^{48})$ instead of $O(n^6)$ I was asked to solve this code on a piece of paper while having a debate in class. We are all new to data structures/algorithms. I came up with the answer $O(n^{48})$, which was proved wrong. I am not convinced by the answer they gave me, which was $O(n^6)$.

void main() {
    int i;
    for(i=1, i<= n^2; i++)
        for(i=1, i<= n^4; i++)
            for(i=1, i<= n^6; i++)
                printf("0");
}

What is the no. of times '0' is printed? Following is my computation for calculating complexity: for loop: $O(n^2)$, then for loop: $O(n^4)$, then for loop: $O(n^6)$, making a total of $$O(n^2) \cdot{} O(n^4) \cdot{} O(n^6) = O(n^{48})$$

Sorry to say, but you are wrong! I am no expert, but I have tried to break it down for you. This question tests not only your algorithm analysis skills but also your language skills. You can learn in detail about algorithmic complexity and how to translate your program to mathematics here: Is there a system behind the magic of algorithm analysis? Save this link or bookmark it; it will be helpful to you later. Here is your program. IMP: You need to pay attention to the variable 'i' and how it plays its role in all three loops. This problem checks your syntax knowledge, your understanding of variable scope, and your complexity knowledge all in one go.

void main() {
    int i;..............................statement 1 [executed once]
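The role of the shared loop variable can be checked by direct simulation (a sketch that treats ^ as exponentiation, as the question intends; in real C it is XOR). Because all three loops use the same i, the inner loop drives i past every bound after a single pass, so '0' is printed exactly n^6 times:

```python
def count_prints(n):
    """Emulate the C loops with one shared variable i.

    Each for-loop re-initializes i to 1 on entry; the inner loop then
    runs i up to n**6, so after it finishes the middle and outer loop
    conditions both fail on their next check. Total prints: n**6.
    """
    count = 0
    i = 1                      # outer init
    while i <= n**2:
        i = 1                  # middle init
        while i <= n**4:
            i = 1              # inner init
            while i <= n**6:
                count += 1     # the printf("0")
                i += 1
            i += 1             # middle increment
        i += 1                 # outer increment
    return count
```

So the count is $n^6$, matching the claimed $O(n^6)$ answer; multiplying the three bounds (let alone multiplying the exponents) would only be valid if each loop had its own independent variable.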
{ "domain": "cs.stackexchange", "id": 10522, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm-analysis", "url": null }
python, keras, tensorflow, bayesian Title: What is the num_initial_points argument for Bayesian Optimization with Keras Tuner? I've implemented the following code to run Keras-Tuner with Bayesian Optimization: def model_builder(hp): NormLayer = Normalization() NormLayer.adapt(X_train) model = Sequential() model.add(Input(shape=X_train.shape[1:])) model.add(NormLayer) for i in range(hp.Int('conv_layers',2,4)): model.add(Conv1D(hp.Choice(f'kernel_{i}_nr',values=[16,32,64]), hp.Choice(f'kernel_{i}_size',values=[3,6,12]), strides=hp.Choice(f'kernel_{i}_strides',values=[1,2,3]), padding="same")) model.add(BatchNormalization(renorm=True)) model.add(Activation('relu')) model.add(MaxPooling1D(2,strides=2, padding="valid")) model.add(Flatten()) model.add(Dropout(hp.Choice('dropout_flatten',values=[0.0,0.25,0.5]))) for i in range(hp.Int('dense_layers',1,2)): model.add(Dense(hp.Choice(f'dense_{i}_size',values=[500,1000]))) model.add(Activation('relu')) model.add(Dropout(hp.Choice(f'dropout_{i}_others',values=[0.0,0.25,0.5]))) model.add(Dense(hp.Choice('dense_size_last',values=[100,200]))) model.add(Activation('relu'))
{ "domain": "datascience.stackexchange", "id": 9082, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, keras, tensorflow, bayesian", "url": null }
inorganic-chemistry, metal, metallurgy, extraction Title: Why do we use coke instead of coal in order to reduce FeO as the basic reaction in the ironmaking process? Why is coke used as a reducing agent to reduce FeO to produce iron, instead of coal? I admit that coke is carbonaceous, but what is it that compels us to use coke instead of the naturally available coal? The single most important factor is strength (mechanical, compressive); coal is heated to make coke, and the resulting coke is stronger than the original coal. Also, coke helps to make the charge of iron oxides and limestone more porous, to permit gas flow up and droplets of liquid iron and slag down. The coke-oven heating drives off volatiles from the coal, which are a valuable source of chemicals, and contain much of the undesirable $\ce{S}$ and $\ce{P}$, which would otherwise go into the pig iron. Minerals like silica (ash) remain in the coke, but they are easily collected by the slag. A bit off topic, but powdered coke and gases are blown into the bottom of the blast furnace to provide more $\ce{C}$, which makes modern blast furnaces much more productive than 60+ years ago.
{ "domain": "chemistry.stackexchange", "id": 12395, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, metal, metallurgy, extraction", "url": null }
c, wordle
To build the game:
Clone this repository: Wordle50
cd into the repository.
Run: make wordle50

Lack of comments
At least <"Wordle50" is a command-line-based adaptation of ... the target word> deserves to be a code comment, perhaps as part of a command line option "-help". Longer functions deserve a line or two describing the function goal.

Non-standard code
\e --> \033
getline(), strcasecmp(), ssize_t are not part of the C standard as of C17.

Data validation
I'd expect get_guess() to validate characters read, perhaps to letters.

Correct file?
fscanf(wlist, "%s", options[i]); stands on shaky ground as 1) return value not checked, 2) string size read may exceed options[i] size, 3) if the file is longer than expected, no notice is given. I'd expect the number of words to be driven by the file and not a program constant here. Consider a command line option to select the files' directory.

More informative prompt
When while (strlen(guess) != wlen) is true, consider a prompt to inform the user of the invalid input.

Better random initialization
srand(time(NULL)); initializes to the same state if called in the same second, as time_t usually has second resolution. Consider xor'ing in another value to differentiate runs if the code is called multiple times in the same second - which may happen in automated usage.

// srand(time(NULL))
unsigned r = (unsigned) (time(NULL) ^ getpid_or_the_like());
srand(r);

For testing purposes, consider a command line option to override and assign r.

Close when done
fclose(wlist); deserves to be right after the for (size_t i = 0; i < LIST_SIZE; ++i) loop. No need to leave the file open while running the game.

Avoid magic numbers
As is, code has to update LIST_SIZE and a comment should a value other than 1000 get used.

// From
/* Each of our text files contains 1000 words. */
// To
/* Each of our text files contains LIST_SIZE words. */
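On the fscanf point, a width-limited, checked read addresses both the overflow and the silent-failure issues. A sketch — sscanf on a string is used here so the snippet is self-contained; with the real word list it would be fscanf(wlist, "%5s", ...) plus the same return-value check:

```c
#include <stdio.h>

#define WORD_LEN 5  /* wordle words are 5 letters */

/* The %5s width keeps the scan from writing past dst[WORD_LEN],
   and the return value tells the caller whether a word was read.
   Returns 1 on success, 0 on failure (empty input, EOF, ...). */
int read_word(const char *src, char dst[WORD_LEN + 1])
{
    return sscanf(src, "%5s", dst) == 1;
}
```

An overlong token is truncated instead of overflowing, and an empty source reports failure instead of leaving dst indeterminate.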
{ "domain": "codereview.stackexchange", "id": 45109, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, wordle", "url": null }
limit definition of derivative
The quotient rule gives the derivative of a ratio of two functions; apply the quotient rule with the step-by-step explanations, or learn the quotient rule by watching this harder derivative tutorial. Well, maybe the quotient rule: we first apply the quotient rule and see what happens. Example: $8 \int z\,dz + 4 \int z^2\,dz$. 1) Differentiate using the rule. Let's take a look at an example of how these two derivative rules would be used. Tag Archives: derivative quotient rule (J A Rossiter, Slides). There is a similar rule for differentiating problems where one function is divided by another.
{ "domain": "laurentbompard.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9591542829224748, "lm_q1q2_score": 0.80135761292398, "lm_q2_score": 0.8354835371034369, "openwebmath_perplexity": 793.3395182882508, "openwebmath_score": 0.899865448474884, "tags": null, "url": "http://laurentbompard.com/50msm/quotient-rule-examples-b8097c" }
orbit, earth, satellite, orbital-mechanics, artificial-satellite There are a few details that may affect the answer: you should consider if the motion of the person will affect the answer to part (b) (and if not, why not). You should know that the Earth is not spherical. You should consider whether the optical refraction of light by the Earth's atmosphere should be considered. However, in a homework problem, such issues can often be ignored, as their effects are small.
{ "domain": "astronomy.stackexchange", "id": 2199, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "orbit, earth, satellite, orbital-mechanics, artificial-satellite", "url": null }
potential, computational-physics Furthermore, the quality of the computational results hugely depends on the kind of pseudopotentials we use (and exchange-correlation functionals...). A lot of scientists have worked to develop more feasible exchange-correlation functionals, from mere LDA and GGA to hybrid functionals. Surely they have also developed more plausible pseudopotentials, but the worry about poor-performance pseudopotentials can be completely overcome if we just use the giant basis set I mentioned above. This makes computational costs skyrocket, but why don't we use so-called supercomputers? Any poor property of the pseudopotentials will be erased, and hopefully many unsolved problems of theoretical divisions (such as the special properties of transition metal oxides) would be resolved. Is the size of the required basis set too large to be dealt with by even world-level supercomputers? Firstly, if you have vast computational resources and accuracy is really important then you'd more likely use a method like quantum Monte Carlo rather than DFT. Nevertheless, there is no theoretical reason why you can't perform DFT without a pseudopotential (indeed, pseudopotentials were introduced some years after DFT). But, and here is the crucial point: pseudopotentials generally introduce errors that are negligible in comparison to the errors introduced by the exchange-correlation functional (which you have no choice but to use). So you might as well use them given that doing so may speed up your simulations by orders of magnitude and introduce negligible error. As a side note, your understanding of why pseudopotentials are used is correct: they allow for smaller cut-offs, since they reduce the width of the Fourier spectrum, and they demand that fewer eigenstates be constructed.
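To see concretely why an all-electron calculation (hard potentials, wide Fourier spectra, hence large cut-offs) explodes the basis, one can count the plane waves below a kinetic-energy cutoff; the count grows roughly like $E_{\rm cut}^{3/2}$. A back-of-the-envelope sketch with a cubic cell and hypothetical numbers:

```python
import itertools
import math

def n_plane_waves(a, ecut):
    """Count plane waves G = (2*pi/a)*(n1, n2, n3) with kinetic energy
    |G|^2 / 2 <= ecut, for a cubic cell of side a (Hartree atomic units).
    This is the basis-set size a plane-wave calculation needs at a
    given cutoff."""
    g0 = 2.0 * math.pi / a
    nmax = int(math.sqrt(2.0 * ecut) / g0) + 1
    count = 0
    for n1, n2, n3 in itertools.product(range(-nmax, nmax + 1), repeat=3):
        g2 = g0 * g0 * (n1 * n1 + n2 * n2 + n3 * n3)
        if g2 / 2.0 <= ecut:
            count += 1
    return count
```

Raising the cutoff by a factor of 8 multiplies the basis size by roughly $8^{3/2}\approx 23$, which is why the cut-off reductions pseudopotentials buy are worth orders of magnitude in cost.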
{ "domain": "physics.stackexchange", "id": 39014, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "potential, computational-physics", "url": null }
c#, performance, beginner, error-handling, tcp if (TcpReaderThread != null) { TcpReaderThread.Abort(); TcpReaderThread = null; } TcpReaderThread = new Thread(ReadData) { IsBackground = true }; TcpReaderThread.Start(); successFlag = true; } catch { } } return successFlag; } catch { return false; } } public bool Disconnect() { try { lock (syncLock) { try { if (TcpReaderThread != null) { TcpReaderThread.Abort(); TcpReaderThread = null; } if (client != null) { client.Client.Close(); client.Close(); client = null; } if (ReceivedStringQueue.Count > 0) { ReceivedStringQueue.Clear(); } } catch { } } return true; } catch { return false; } } public bool Send(string sendString) { try { bool successFlag = false; lock (syncLock) { try { client.Client.Send(ASCIIEncoding.ASCII.GetBytes(sendString)); successFlag = true; } catch { } } return successFlag; } catch { return false; } } public string GetReceivedString() { try { string returnString = ""; lock (ReceivedStringQueue.SyncRoot) { try { if (ReceivedStringQueue.Count > 0) { returnString = ReceivedStringQueue.Dequeue().ToString(); } } catch { } } return returnString; } catch { return ""; } } public bool Listen(int port) { try { IPEndPoint ipLocalEndPoint = new IPEndPoint(IPAddress.Any, port); listener = new TcpListener(ipLocalEndPoint); listener.Start(port);
{ "domain": "codereview.stackexchange", "id": 35487, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, performance, beginner, error-handling, tcp", "url": null }
ros, ros-indigo W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-backports/InRelease W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-security/InRelease W: Failed to fetch http://archive.canonical.com/ubuntu/dists/trusty/InRelease W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/trusty/InRelease W: Failed to fetch http://packages.ros.org/ros/ubuntu/dists/trusty/InRelease W: Failed to fetch http://packages.ros.org/ros/ubuntu/dists/trusty/Release.gpg Could not resolve 'packages.ros.org' W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg Could not resolve 'archive.ubuntu.com' W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/trusty/Release.gpg Could not resolve 'extras.ubuntu.com' W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg Could not resolve 'archive.ubuntu.com' W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-backports/Release.gpg Could not resolve 'archive.ubuntu.com' W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg Could not resolve 'archive.ubuntu.com' W: Failed to fetch http://archive.canonical.com/ubuntu/dists/trusty/Release.gpg Could not resolve 'archive.canonical.com'
{ "domain": "robotics.stackexchange", "id": 23422, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-indigo", "url": null }
But I was expecting a more general solution such as $$V = C + Ee^{-t / \tau}$$ without having to "guess" it, can somebody please give me some direction? • When integrating $\int e^{{t} / {\tau}} \frac{E_0}{\tau} \ dt$ be sure to put in an arbitrary constant. (This is one of the reasons the integration constant $+C$ is drummed into us early in our calculus education!) – Simon S Jun 8 '15 at 18:35 • Wow that was such a school boy error. Thank you @SimonS . – HBeel Jun 8 '15 at 18:38 • I assume that adding a constant when working out $v(t)$ won't add anything to the general solution and just make the calculation longer, so I can let the constant of integration $0$ there ? – HBeel Jun 8 '15 at 18:39 • I'm not following you. See below. – Simon S Jun 8 '15 at 18:41 $$V(t) =e^{{-t} / {\tau}} \int e^{{t} / {\tau}} \frac{E_0}{\tau} \ dt = E_0 e^{-t/\tau} \left( e^{t/\tau} + C' \right) = E_0 + Ce^{-t/\tau}$$ (where $C = E_0C'$)
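As a numerical cross-check, assume the underlying ODE has the usual RC form $\tau\,\dfrac{dV}{dt} + V = E_0$ (consistent with the integrating factor $e^{t/\tau}$ used above); the general solution $V = E_0 + Ce^{-t/\tau}$ can then be verified against a direct numerical integration:

```python
import math

def v_exact(t, tau, E0, C):
    """General solution V(t) = E0 + C*exp(-t/tau)."""
    return E0 + C * math.exp(-t / tau)

def v_euler(t, tau, E0, v0, steps=100000):
    """Integrate dV/dt = (E0 - V)/tau forward from V(0) = v0."""
    dt = t / steps
    v = v0
    for _ in range(steps):
        v += dt * (E0 - v) / tau
    return v
```

The initial condition fixes $C = V(0) - E_0$, and the two computations agree to within the Euler discretization error.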
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9783846628255123, "lm_q1q2_score": 0.8066409038504134, "lm_q2_score": 0.8244619263765707, "openwebmath_perplexity": 406.0072925357363, "openwebmath_score": 0.9952335953712463, "tags": null, "url": "https://math.stackexchange.com/questions/1317429/finding-a-general-solution-to-a-first-order-linear-differential-equation-using" }
complexity-theory, context-free, formal-grammars These two basic algorithms are useful to simplify the raw results of some grammar construction techniques, such as those used for the intersection of a context-free language and a regular set. In particular, this is useful in cleaning up the results of general CF parsers. Removal of useless non-terminal symbols is necessary in the context of solving the question asked, as the rules using them cannot be "invoked" (i.e. used in a derivation) by any string of the language. Building a set of strings that invoke every rule (We are not looking yet for minimal strings.) Now answering specifically the question, one must indeed remove all useless symbols, whether unreachable symbols or unproductive non-terminal symbols, as well as useless rules having such useless non-terminals as LHS. They have no chance of ever being invoked usefully while parsing a terminal string (though some may well waste the processing time of a parser when they are not removed; which ones may waste time depends on the parser technology). We now consider, for each (useful) rule, the production of a terminal string that invokes it, i.e. that may be generated by using this rule. This is essentially what is done by the two algorithms above, though they do not keep the information, as they are satisfied with proving the existence of these strings to ensure that non-terminals are both reachable and productive. We modify the first algorithm (lemma 4.1) by keeping with each non-terminal $U$ in the set $Prod$ a terminal string $\sigma(U)$ that it derives: $U\overset{*}{\Longrightarrow}\sigma(U)$. For every terminal we define $\sigma$ as the identity mapping. When $U$ is added to the set $Prod$ because a rule $U\rightarrow\gamma$ has all its RHS symbols in $Prod$, then we define $\sigma(U)=\sigma(\gamma)$, extending $\sigma$ as a homomorphism on strings, and we remove all $U$-rules, that is all rules with $U$ as LHS.
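The first modified algorithm (lemma 4.1 with witness strings) can be sketched directly; as noted above, it yields some witness per productive non-terminal, not necessarily a minimal one. A sketch, assuming the grammar is given as a list of (lhs, rhs-tuple) rules, where non-terminals are exactly the symbols occurring as some LHS:

```python
def productive_witnesses(rules):
    """For every productive non-terminal U, compute a terminal string
    sigma(U) with U =>* sigma(U), by a fixed-point iteration: a rule
    U -> gamma fires once every symbol of gamma has a witness, and
    then sigma(U) = sigma(gamma) extended homomorphically."""
    nonterminals = {lhs for lhs, _ in rules}
    sigma = {}  # non-terminal -> witness terminal string
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in sigma:
                continue  # one witness per non-terminal is enough
            if all(s in sigma or s not in nonterminals for s in rhs):
                # terminals are their own sigma (identity mapping)
                sigma[lhs] = "".join(sigma.get(s, s) for s in rhs)
                changed = True
    return sigma
```

Non-terminals absent from the result (such as a left-recursive `C` with only the rule `C -> Cc`) are unproductive and can be removed together with their rules.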
We modify the second algorithm (lemma 4.2) by keeping with each non-terminal symbol $U$ added to $Reach$ the path used to reach it
{ "domain": "cs.stackexchange", "id": 3078, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory, context-free, formal-grammars", "url": null }
javascript, object-oriented, node.js, serial-port Title: Serial port for Node.js I am trying to create a class for a serial port using Node.js, but I want to know if there is a better way to write my data class code. In my sample code below, in the getPortName prototype in the forEach loop, I declare a port object to store the serial port information before pushing into an array. Is this a good way of declaring this port object? I think it is not good because it is difficult for users to see what the objects' members contain. Is there any way to clearly expose the port object's members? Or is there a better way to improve this code structure? 'use strict'; var serialPort = require("serialport"); function Serial() { this.ComName = ""; } Serial.prototype.getPortName = function() { var availablePorts = []; serialPort.list(function (err, ports) { ports.forEach(function(port) { var portInfo = { comName: "", manufacturer: "", pnpId: "" } portInfo.comName = port.comName; portInfo.manufacturer = port.manufacturer; portInfo.pnpId = port.pnpId; availablePorts.push(portInfo); }); }); return availablePorts; }; Serial.prototype.setCOMName = function(name) { this.ComName = name; } module.exports = Serial; First off, you should probably allow a default ComName to be passed into the constructor, and the default should probably be null instead of an empty string (unless you have good reason to do otherwise). function Serial(name) { this.ComName = name !== undefined ? name : null; } Next, getPortName is strange. You set it on Serial.prototype, which indicates it will be used as an instance method. In fact, however, it doesn't use instance state at all! As your code is currently laid-out, you would need to perform the following to call getPortName: new Serial().getPortName() In classical OOP parlance, this is a static function. It doesn't need instance state, so declare it on Serial itself, not Serial.prototype. Serial.getPortName = function() { ... }
{ "domain": "codereview.stackexchange", "id": 11852, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, object-oriented, node.js, serial-port", "url": null }
c, parsing
            case 3: return symbol->func3(vals[0], vals[1], vals[2]);
            }
          }
          else if (*(*str)++ == ',') continue;
          break;
        }
      }
    }
  }
  else switch (*(*str)++) {
  case '+': return term(str);
  case '-': return -term(str);
  case '(': {
      const double val = expr(str, 0);
      if (*(*str)++ == ')') return val;
    }
  }
  return NAN;
}

#include <stdio.h>

int main(int argc, char **argv) {
  for (int arg = 1; arg < argc; ++arg)
    if (printf("%g\n", expr(&(const char *){argv[arg]}, 0)) < 0)
      return EXIT_FAILURE;
  return EXIT_SUCCESS;
}

Bug
memcmp(str, name, len); lets memcmp() compare the first len bytes of the data pointed to by str and name, yet some of those are pointers to a string less than len in size. What code needs is strncmp() (or perhaps even strcmp()) to not compare past the end of the string.

Bug
The *(const char **)strp += len relies on bsearch() to stop calling cmp() once a return value of 0 occurs. Although this is common, bsearch() is not specified that way. Instead the calling code should advance the pointer, not the compare function.

Pedantic Bug
str[len]-name[len] in compar() should be an unsigned char compare to well sort symbols that have a character outside the [0...CHAR_MAX] range. Yet since, presently, symbols[] only employs ASCII characters, it does not make a difference.

Non-standard macros
M_PI and its 13 friends are not part of the C standard. See Using M_PI. Consider #ifndef M_PI
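Completing that last suggestion, a hedged sketch of the M_PI guard, together with a strncmp-based symbol match that, unlike memcmp(str, name, len), can never compare past a name's terminating NUL:

```c
#include <string.h>

/* M_PI and friends are POSIX, not ISO C: guard with a fallback. */
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* strncmp stops at a terminating NUL in either string, so a short
   symbol name such as "pi" is never read past its end.  Returns 1
   when `name` is a prefix of `input`, else 0. */
int symbol_matches(const char *input, const char *name)
{
    return strncmp(input, name, strlen(name)) == 0;
}
```

The calling code, not the comparator, should then advance the input pointer by strlen(name) after a successful match.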
{ "domain": "codereview.stackexchange", "id": 45092, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, parsing", "url": null }
observational-astronomy, the-moon, telescope, jupiter, refractor-telescope Title: Can I use 75mm Plano convex lens to build telescope I want to see Jupiter with moon. So am planning to build telescope. Can I use 75mm Plano convex lens with 1000mm focal length? Can I use 75mm Plano convex lens with 1000mm focal length tl;dr: Go for it! First see this answer to Will these simple 2 convex lens arrangement telescope see the moon clearly? but your question is quite different, not a duplicate and I think you can have some success! You will have some chromatic aberration but at f/12.5 it won't be so strong, and you certainly won't have a problem with spherical aberration. See How does making a refracting telescope very long reduce the chromatic aberration of an uncorrected lens? In fact it's probably time that get answered so I've added a bounty, and if nobody answers it after that then I will! Yes, please carry out your experiment! With a 75mm plano convex lens with a 1000 mm focal length, you will have a low quality image due to chromatic aberration but with a good eyepiece and stable way to hold it at the objective lens' focal plane (that's not easy!) you will be able to catch a glimpse of Jupiter's Galilean moons and probably see some bands of clouds. They will not be clear, the image won't be stunning, but it will be very exciting to DIY yourself all the way to Jupiter for a few moments! Go for it!
{ "domain": "astronomy.stackexchange", "id": 6068, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "observational-astronomy, the-moon, telescope, jupiter, refractor-telescope", "url": null }
javascript, design-patterns, ajax, modules XHR.open("post", url, true); XHR.setRequestHeader("content-type", "application/x-www-form-urlencoded"); XHR.setRequestHeader('X-Requested-With', 'XMLHttpRequest'); XHR.onreadystatechange = function () { if (XHR.readyState === 4 && XHR.status === 200) { promise.keep(XHR.responseText); } }; XHR.send(parameters); return promise; } }
{ "domain": "codereview.stackexchange", "id": 2709, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, design-patterns, ajax, modules", "url": null }
imu // read a packet from FIFO mpu.getFIFOBytes(fifoBuffer, packetSize); // track FIFO count here in case there is > 1 packet available // (this lets us immediately read more without waiting for an interrupt) fifoCount -= packetSize; // display Euler angles in degrees mpu.dmpGetQuaternion(&q, fifoBuffer); mpu.dmpGetGravity(&gravity, &q); mpu.dmpGetYawPitchRoll(ypr, &q, &gravity); Serial.print("ypr\t"); Serial.print(ypr[0] * 180/M_PI); ul_msg.data = int(ypr[0]); pub_ul.publish(&ul_msg); Serial.print("\t"); Serial.print(ypr[1] * 180/M_PI); ul_msg.data = int(ypr[1]); pub_ul.publish(&ul_msg); Serial.print("\t"); Serial.println(ypr[2] * 180/M_PI); ul_msg.data = int(ypr[2]); pub_ul.publish(&ul_msg); // blink LED to indicate activity blinkState = !blinkState; digitalWrite(LED_PIN, blinkState); } nh.spinOnce(); } It uploads on to the ardunio board . But than it does not show the output in seial moitor. And when I try to echo the topic in ros it shows version mismatch between audrino ROS and the node I am using ROS Indigo Originally posted by ganesh on ROS Answers with karma: 21 on 2015-12-20 Post score: 1
{ "domain": "robotics.stackexchange", "id": 23266, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "imu", "url": null }
Hint $\ \ f\bar f = (a\!-\!x)(a\!+\!x)\ \Rightarrow\ \dfrac{a-x}f \,=\, \dfrac{\bar f}{a+x}\ \$ where $\ \ \bar f,\,f\, =\, 5\pm \sqrt{x^2+16}$ Remark $\$ This is a special case of rationalizing the denominator, which is often handy. • Does that link imply those three accounts all belong to you? – TMM Feb 28 '14 at 15:49
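Applied to the limit in the linked title (the case $a=3$), the hint rationalizes the denominator and gives:

```latex
\lim_{x\to 3}\frac{3-x}{5-\sqrt{x^2+16}}
  \;=\; \lim_{x\to 3}\frac{5+\sqrt{x^2+16}}{3+x}
  \;=\; \frac{5+\sqrt{25}}{6} \;=\; \frac{10}{6} \;=\; \frac{5}{3}.
```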
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9852713889614089, "lm_q1q2_score": 0.8542655154914718, "lm_q2_score": 0.8670357477770337, "openwebmath_perplexity": 569.7749495678605, "openwebmath_score": 0.9157493114471436, "tags": null, "url": "https://math.stackexchange.com/questions/694098/solve-algebraically-lim-limits-x-to-3-frac3-x5-sqrtx216" }
quantum-field-theory, hilbert-space, scattering, vacuum, s-matrix-theory Alternative construction Just to illustrate my point further, here's an alternative construction that seems more reasonable to me (but also way too OP). Since the asymptotic operators $(a_p^{\pm\infty})^\dagger$ are time-independent anyway, we can just define operators in the Schrödinger picture by $A_p^\dagger=(a_p^\infty)^\dagger$ and create momentum eigenstates from the vacuum at asymptotic times (which still works because the vacuum is translation-invariant). In fact, it seems like this should work at whatever time we want, not just $t=\pm\infty$, then the $S$-matrix elements are \begin{align} \langle f|S|i\rangle&=\langle p_1\cdots p_n|U(t_f,t_i)|k_1\cdots k_m\rangle=\langle\Omega|A_{p_1}\cdots A_{p_n}U(t_f,t_i)A_{k_1}^\dagger\cdots A_{k_m}^\dagger|\Omega\rangle\tag*{(Schrödinger)}\\ &={_{\rm out}}\langle p_1\cdots p_n|k_1\cdots k_m\rangle_{\rm in}=\langle\Omega|A_{p_1}(t_f)\cdots A_{p_n}(t_f)A_{k_1}^\dagger(t_i)\cdots A_{k_m}^\dagger(t_i)|\Omega\rangle\tag*{(Heisenberg)} \end{align}
{ "domain": "physics.stackexchange", "id": 96695, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, hilbert-space, scattering, vacuum, s-matrix-theory", "url": null }
interpolation, compressive-sensing, reconstruction, multirate, sparsity Title: Room Impulse Response Domain of Sparsity I have been studying the problem of room impulse responses (RIRs) interpolation for a couple of months. I am trying to use compressed sensing to reconstruct (at best) the sound field in the room with a better resolution, or (at least) get an RIR at some desired location where it was not measured before. "A better resolution" here means that by having only 20-30 measured RIRs in the room, I want to get the RIRs of, say, one million locations. That is, think of it as a better sampling of the sound field in the room, given the three dimensions of the room. The problem is, in most RIR interpolation using CS papers, the domain of sparsity is not really described well. Most of the CS algorithms use some pre-defined domain of sparsity. When I think about it, RIRs are not sparse in time (generally around 3 seconds filled with data), neither are they sparse in frequency (an impulse utilizes all the frequencies from $-\infty$ to $\infty$). Does anyone have an idea of the sparsity of RIRs? Or, do you have any suggestions on how to tackle this problem using common and well-studied CS algorithms? I want to give you my intuition. I believe that RIRs are sparse in some domain inspired by the spatial characteristics of the room. That is, one RIR is not sparse itself, and this is not what we are after. We are after the sparsity of the sound field in some domain. I do not know how to represent such a domain mathematically, but it can be thought of as follows: only a couple of samples of the sound field (a couple of RIRs) are enough to carry all the information of the sound field in the room (low entropy of the sound field), and hence projecting them onto that certain domain that inscribes this property can ease the task of CS. 
Then we can go back to the original domain (which is a 4D domain of 3 spatial dimensions and 1 temporal dimension) by taking the inverse of the transform that took us to the sparsity domain. Any help is appreciated, especially if MATLAB is involved. That's tricky.
{ "domain": "dsp.stackexchange", "id": 12424, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "interpolation, compressive-sensing, reconstruction, multirate, sparsity", "url": null }
quantum-mechanics, contextuality, quantum-foundations (i) $v(A+B)= v(A)+v(B)$ if $AB=BA$, (ii) $v(AB) = v(A)v(B)$ if $AB=BA$. SKETCH OF PROOF Notice that orthogonal projectors are selfadjoint operators $P:{\cal H}\to {\cal H}$ such that $PP=P$. We can restrict $v$ to the space (lattice) of orthogonal projectors. From the hypotheses of additivity and multiplicativity, taking $PP=P$ into account, it is easy to conclude that (a) $v(P) \in \{0,1\}$ for every orthogonal projector $P$ and that (b) $v(P_1+\cdots+P_k)= v(P_1)+\cdots+v(P_k)$ if $P_hP_k=0$ when $h\neq k$. Furthermore, $v(I)=1$, otherwise $v$ is the trivial map $v(P)=0$ for all orthogonal projectors: the spectral theorem would imply that $v(A)=0$ for every $A\in B_{sa}({\cal H})$, and this is not permitted. $\dim{\cal H}>2$, (a), and (b), through the Gleason theorem (a part of the proof), imply that there exists a unique mixed state $\rho$ such that $v(P) = \operatorname{tr}(P\rho)$ for every orthogonal projector $P\in B_{sa}({\cal H})$. Let us restrict this map to the set $S$ of the one-dimensional orthogonal projectors (that is, the rays $p= |\psi\rangle \langle \psi|$). $$S \ni |\psi\rangle \langle \psi| \mapsto \langle \psi, \rho\,\psi \rangle \in \{0,1\}$$
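Step (a) is the one-line computation behind "it is easy to conclude": apply (ii) with $A=B=P$ (which trivially commute) and use $PP=P$,

```latex
v(P) \;=\; v(PP) \;=\; v(P)\,v(P)
\quad\Longrightarrow\quad
v(P)\bigl(v(P)-1\bigr)=0
\quad\Longrightarrow\quad
v(P)\in\{0,1\}.
```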
{ "domain": "physics.stackexchange", "id": 97161, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, contextuality, quantum-foundations", "url": null }
electromagnetism, maxwell-equations Title: Questions about the Biot–Savart law and Ampère's law A textbook I'm studying describes finding the vector magnetic potential $\vec{\text{A}}$ from the Biot–Savart law as below. $\vec{\text{H}_2}=\int_{\text{vol}}\frac{\vec{\text{J}_1}\times\hat{\text{a}_{\text{R}12}}}{4\pi\text{R}_{12}^2}dv_1$ $=\frac{1}{4\pi}\int_{\text{vol}}\vec{\text{J}_1}\times(-\nabla_2\frac{1}{\text{R}_{12}})dv_1$ $=\frac{1}{4\pi}\int_{\text{vol}}[(\nabla_2\times\frac{\vec{\text{J}_1}}{{\text{R}_{12}}})-\frac{1}{\text{R}_{12}}(\nabla_2\times\vec{\text{J}_1})]dv_1$ $=\frac{1}{4\pi}\int_{\text{vol}}(\nabla_2\times\frac{\vec{\text{J}_1}}{{\text{R}_{12}}})dv_1$ ($\because\nabla_2\times\vec{\text{J}_1}=0$)
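Since $\nabla_2$ differentiates with respect to the field point while the integration runs over source points, the curl can be taken outside the integral; this is the step that identifies the vector magnetic potential (in free space, with $\vec{\text{B}}=\mu\vec{\text{H}}$ — the standard conclusion of this textbook derivation):

```latex
\vec{\text{H}_2}=\nabla_2\times\frac{1}{4\pi}\int_{\text{vol}}\frac{\vec{\text{J}_1}}{\text{R}_{12}}\,dv_1
\qquad\Longrightarrow\qquad
\vec{\text{B}_2}=\nabla_2\times\vec{\text{A}},\qquad
\vec{\text{A}}=\frac{\mu}{4\pi}\int_{\text{vol}}\frac{\vec{\text{J}_1}}{\text{R}_{12}}\,dv_1 .
```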
{ "domain": "physics.stackexchange", "id": 29513, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, maxwell-equations", "url": null }
Problem Calling remez() in remez.cpp using the same specs, however, does not return the same coefficients. /******************** * remez *======= * Calculates the optimal (in the Chebyshev/minimax sense) * FIR filter impulse response given a set of band edges, * the desired response on those bands, and the weight given to * the error in those bands. * * INPUT: * ------ * int numtaps - Number of filter coefficients * int numband - Number of bands in filter specification * double bands[] - User-specified band edges [2 * numband] * double des[] - User-specified band responses [numband] * double weight[] - User-specified error weights [numband] * int type - Type of filter * * OUTPUT: * ------- * double h[] - Impulse response of final filter [numtaps] * returns - true on success, false on failure to converge ********************/ int numOrder = 5; std::vector<double> h(numOrder + 1); std::vector<double> bandsEdges = {0, 0.4, 0.5, 0.5}; std::vector<double> desiredAmps = {1, 0}; std::vector<double> weights = {1, 1}; remez(h.data(),6,2,bandsEdges.data(),desiredAmps.data(),weights.data(),1,16) // Pseudo code // h = [0.0154523, 0.115447, 0.343533, 0.343533, 0.115447, 0.0154523]; I've also tried in Matlab using the following command, and I also couldn't get the same coefficients as Python ones.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9230391685381604, "lm_q1q2_score": 0.8052107953994214, "lm_q2_score": 0.8723473746782093, "openwebmath_perplexity": 2583.9934775013994, "openwebmath_score": 0.2598196566104889, "tags": null, "url": "https://dsp.stackexchange.com/questions/78778/find-the-equivalent-of-this-python-remez-specs-in-c-remez-or-matlab-firpm" }
optics, geometric-optics, sensor Are the pupil conjugates referring to the conjugates of A) the real pupil (iris), B) the effective pupil, or C) the exit pupil of the eye (which I suppose is just a conjugate of the "real" pupil)? My own understanding is that we must image the "effective pupil", as this will include aberrations after the cornea; if we successfully image the "real pupil" on the Shack-Hartmann, this will not include aberrations from the cornea. As in Virens's answer, "A conjugate to B" simply means A is an image surface when B is an object surface and contrariwise (I use the word "surface" here because it is almost always curved to some degree, even though to first order we model these surfaces as planes). There is no way that this setup can measure anything other than the total aberration of the whole eye system, including cornea, lens and aqueous humour. It is quite analogous to a double-pass interferometric test of a lens system as wontedly done with a Fizeau or Twyman-Green interferometer. The light passes through all the eye's components and there is no way of telling which part of the total wavefront aberration is induced by which part of the eye. If you think about the above in terms of what I describe below, it should become clear that you need to image the effective pupil as drawn in your diagram at the plane of the lenslets. In fact, strictly speaking, the talk of pupils in your document is not quite right and what you need to do is image the principal plane of the eye's optics onto the lenslet plane. However, this principal plane is almost always very near to the pupil and, as I discuss at the end, small errors in any of this discussion will not make a great deal of difference.
{ "domain": "physics.stackexchange", "id": 17412, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optics, geometric-optics, sensor", "url": null }
Let f(x) = x³ − 3x + 1. If f(a) = f(b), then there is at least one point c in (a, b) where f′(c) = 0. Rolle's theorem says that if a function is continuous on a closed interval [a, b], differentiable on the open interval (a, b), and f(a) = f(b), then there exists a number c in the open interval (a, b) such that f′(c) = 0. ("There exists a number" means that there is at least one such point.) Rolle's theorem is a special case of the Mean Value Theorem in which the endpoint values are equal. The square bracket at b means that the point b is included in the closed interval. What does this mean visually? Somewhere in the open interval between a and b there is a point where the instantaneous rate of change, the slope of the tangent line, is the same as the slope of the secant line, which is the average rate of change over the interval. (The tangent to the graph of f where the derivative vanishes is parallel to the x-axis, and so is the line joining the two "end" points (a, f(a)) and (b, f(b)) on the graph, since f(a) = f(b).) If f is constantly equal to zero, there is nothing to prove. There is one type of problem in this exercise: Find the absolute extremum. This problem provides a function that has an extreme value.
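As a sketch, Rolle's theorem can be checked numerically for f(x) = x³ − 3x + 1: on [0, √3] we have f(0) = f(√3) = 1 (the interval is chosen here so the endpoint values match; it is not part of the original exercise), and f′(c) = 3c² − 3 vanishes at c = 1 inside the interval.

```python
import math

def f(x):
    return x**3 - 3*x + 1

def fprime(x):
    return 3*x**2 - 3

# Rolle's hypotheses on [0, sqrt(3)]: f is a polynomial (so continuous
# and differentiable everywhere) and the endpoint values agree.
a, b = 0.0, math.sqrt(3)
assert abs(f(a) - f(b)) < 1e-12   # f(0) = f(sqrt(3)) = 1

# Find c in (a, b) with f'(c) = 0 by bisection; f' changes sign on (a, b).
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(lo) * fprime(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2
print(c)   # approximately 1, and indeed f'(1) = 3 - 3 = 0
```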
{ "domain": "campingcostabrava.nl", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9770226290864075, "lm_q1q2_score": 0.8471135541970389, "lm_q2_score": 0.8670357563664174, "openwebmath_perplexity": 409.60202241971086, "openwebmath_score": 0.7660291194915771, "tags": null, "url": "http://campingcostabrava.nl/cz8rs490/9ab558-rolle%27s-theorem-khan-academy" }
algorithms, approximation It seems intuitive to use the 2-approximation on each coordinate of the given vectors $x_1,...,x_n\in [0,1]^d$ and obtain $d$ partitions into at most $2k$ bins (where $k$ is the minimal number of bins for the vector partition). However, I can't seem to combine those bins into a partition of size at most $2d\cdot k$. For any vector $x_i\in [0,1]^d$, let $b_i\in \left\{1,...,2k\right\}^d$ be the vector whose $j$-th entry is the bin in which $x_i^j$ was placed (after running the approximation on all the $j$-th components). Since I can't allow two vectors $x_i,x_j$, whose corresponding vectors $b_i,b_j$ differ in at least one component, to go into the same bin, it seems I need at least $(2k)^d$ bins. What am I missing? Create an instance $y_1,\ldots,y_n$ of bin packing by taking $y_i = \max(x_i^1,\ldots,x_i^d)$, and solve it using the given approximation algorithm. The solution is feasible for the vector bin packing problem. On the other hand, given a solution for the vector bin packing problem, you can construct a solution for the "max" bin packing problem using at most a factor $d$ more bins by splitting each bin into $d$ bins according to the largest coordinate.
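The reduction in the answer can be sketched as follows. This is an illustrative sketch, not the answer's exact construction: first-fit stands in for "the given approximation algorithm" for scalar bin packing, and the helper names are made up.

```python
def first_fit(sizes, capacity=1.0):
    """First-fit bin packing: a classic approximation algorithm
    (guaranteed within a small constant factor of optimal)."""
    bins = []          # bins[i] = list of item indices
    loads = []         # loads[i] = total size in bin i
    for idx, s in enumerate(sizes):
        for i, load in enumerate(loads):
            if load + s <= capacity + 1e-12:
                bins[i].append(idx)
                loads[i] += s
                break
        else:
            bins.append([idx])
            loads.append(s)
    return bins

def vector_bin_packing(vectors):
    """Pack d-dimensional vectors by reducing to scalar bin packing
    on y_i = max_j x_i^j, as described in the answer above."""
    y = [max(v) for v in vectors]
    return first_fit(y)

vectors = [(0.6, 0.2), (0.3, 0.5), (0.4, 0.4), (0.5, 0.1)]
packing = vector_bin_packing(vectors)
print(packing)

# Feasibility: within every bin, every coordinate sums to at most 1,
# because the max coordinates already do.
for b in packing:
    for j in range(2):
        assert sum(vectors[i][j] for i in b) <= 1.0
```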
{ "domain": "cs.stackexchange", "id": 5957, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, approximation", "url": null }
ros, ros2, transforms, tf2-ros, rclpy #include <chrono>
#include <functional>
#include <memory>
#include <string>

#include "rclcpp/rclcpp.hpp"
#include "tf2_ros/buffer.h"
#include "tf2_ros/transform_listener.h"

using namespace std::chrono_literals;
using namespace std;

class MyNode : public rclcpp::Node
{
public:
  MyNode(const string& node_name, const string& nm) : Node(node_name, nm)
  {
    _buffer_tf2 = std::make_unique<tf2_ros::Buffer>(this->get_clock());
    _listener_tf2 = std::make_shared<tf2_ros::TransformListener>(*_buffer_tf2);

    timer_ = this->create_wall_timer(
        500ms, std::bind(&MyNode::timer_callback, this));
  }

private:
  void timer_callback()
  {
    if (_buffer_tf2->canTransform("camera_color_optical_frame", "map", tf2::TimePointZero))
      RCLCPP_INFO(this->get_logger(), "Can transform");
  }

  rclcpp::TimerBase::SharedPtr timer_;
  std::unique_ptr<tf2_ros::Buffer> _buffer_tf2;
  std::shared_ptr<tf2_ros::TransformListener> _listener_tf2;
};

int main(int argc, char* argv[])
{
  rclcpp::init(argc, argv);
  auto tf_transforms_node = make_shared<MyNode>("listener_transforms_node", "tf_transforms");
  rclcpp::spin(tf_transforms_node);
  rclcpp::shutdown();
  return 0;
}
{ "domain": "robotics.stackexchange", "id": 36602, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros2, transforms, tf2-ros, rclpy", "url": null }
c#, winforms if (valueIsAnInteger)
{
    ValidateCellEdit(e.RowIndex, int.Parse(e.FormattedValue.ToString()));
}
else
{
    //Value Entered Is A Non Number
    ResetCellAndAddErrorMessage("A non number has been entered as the Replenish Amount");
}
}
}

/// <summary>
/// Checks whether the new cell value, added to the existing replenish amount
/// total for that row's part, is less than or equal to that part's available
/// stock. If not, undoes the edit and reports an error.
/// </summary>
/// <param name="rowIndex">Row number</param>
/// <param name="newValue">Edited cell's new value</param>
private void ValidateCellEdit(int rowIndex, int newValue)
{
    bool replenishTotalIsEqualToOrLessThanAvailableStock = IsReplenishTotalEqualOrLessToAvailableStock(rowIndex);

    if (replenishTotalIsEqualToOrLessThanAvailableStock)
    {
        lbStatusBar.ForeColor = Color.Black;
        lbStatusBar.Text = string.Empty;
    }
    else
    {
        //Value Entered Is More Than Free Stock
        ResetCellAndAddErrorMessage("Replenish Amount entered is more than the available free stock");
    }
}

/// <summary>
/// Works out whether the running total of replenish amounts
/// is less than or equal to the available stock for that part.
/// </summary>
/// <param name="editedRowIndex">Row number of the edited cell</param>
/// <returns>True or false</returns>
private bool IsReplenishTotalEqualOrLessToAvailableStock(int editedRowIndex)
{
    var rows = GetRowsIndexAndValues(ProductColumnIndex.ReplenishAmount.ToString());
{ "domain": "codereview.stackexchange", "id": 13936, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, winforms", "url": null }
filters, discrete-signals, lowpass-filter, laplace-transform, c++ Substituting $s=\frac{1-z^{-1}}{T}$:

$\text{Numerator} = \frac{-2 R_g (-C L z^{-2} + 2 C L z^{-1} - C L + C R_d T z^{-1} - C R_d T - T^{2})}{T^{2}}$

$\text{Denominator} = \frac{2 C R_g L z^{-2} - 4 C R_g L z^{-1} + 2 C R_g L + 2 C R_g T R_d - 2 C R_g T R_d z^{-1} + C L R_d - 2 C L R_d z^{-1} + C L R_d z^{-2} + 2 R_g T^2 - L T z^{-1} + L T}{T^2}$

Canceling the $1/T^{2}$:

$\text{Numerator} = -2 R_g (-C L z^{-2} + 2 C L z^{-1} - C L + C R_d T z^{-1} - C R_d T - T^{2})$

$\text{Denominator} = 2 C R_g L z^{-2} - 4 C R_g L z^{-1} + 2 C R_g L + 2 C R_g T R_d - 2 C R_g T R_d z^{-1} + C L R_d - 2 C L R_d z^{-1} + C L R_d z^{-2} + 2 R_g T^2 - L T z^{-1} + L T$

Cross Multiplying:
{ "domain": "dsp.stackexchange", "id": 8227, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, discrete-signals, lowpass-filter, laplace-transform, c++", "url": null }
wavelet, signal-energy, dwt, parseval You can verify this indirectly, looking at approximation coefficients. At each level, their number of samples is halved, and their amplitudes grow by around a $1.4$ scale factor, which is just $\sqrt{2}$. This feature is used for instance to estimate the Gaussian noise power from wavelet coefficients: $$ \hat{\sigma} = \textrm{median} (|w_i|)/0.6745$$ A little further, there is a notion that generalizes (orthonormal) bases: frames. A set of functions $(\phi_i)_{i\in \mathcal{I}}$ ($\mathcal{I}$ is a finite or infinite index set) is a frame if for all vectors $x$: $$ C_\flat\|x\|^2 \le \sum_{i\in \mathcal{I}} |\langle x,\phi_i\rangle|^2\le C_\sharp\|x\|^2$$ with $0<C_\flat,C_\sharp < \infty$. This is a more general Parseval-Plancherel-like result used for general wavelets. In other words, it "approximately preserves energy" under projection (inner products). If the constants $ C_\flat$ and $C_\sharp $ are equal, the frame is said to be tight. Orthonormal bases are non-redundant sets of vectors with $ C_\flat=C_\sharp = 1 $. For those using Matlab, you should care about the native border extension, which is obtained by dwtmode('status'). Some modes add tails to the data to help inversion with few border artifacts. With a periodic mode dwtmode('per') and a number of samples divisible by $2^L$, where $L$ is the wavelet level, you can get a good match in energy, with only tiny differences:
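Both observations can be sketched with a one-level orthonormal Haar transform in pure Python (a minimal sketch, not the answer's Matlab setup): the transform preserves energy exactly, and the approximation coefficients of a constant signal are scaled by $\sqrt{2}$ per level.

```python
import math

def haar_level1(x):
    """One level of the orthonormal Haar DWT (len(x) must be even)."""
    s = math.sqrt(2)
    approx = [(x[2*k] + x[2*k + 1]) / s for k in range(len(x) // 2)]
    detail = [(x[2*k] - x[2*k + 1]) / s for k in range(len(x) // 2)]
    return approx, detail

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
a, d = haar_level1(x)

# Parseval: the orthonormal transform preserves energy.
e_signal = sum(v*v for v in x)
e_coeffs = sum(v*v for v in a) + sum(v*v for v in d)
print(e_signal, e_coeffs)   # equal up to rounding

# A constant signal's approximation coefficients gain a sqrt(2) factor
# per level, matching the ~1.4 amplitude growth mentioned above.
c, _ = haar_level1([1.0] * 8)
print(c)
```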
{ "domain": "dsp.stackexchange", "id": 11293, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "wavelet, signal-energy, dwt, parseval", "url": null }
inorganic-chemistry, ions, coordination-compounds, spectrophotometry Title: Why do ligands have such a small effect on overall absorption of a complexed ion? When a metal cation is complexed, there is strong UV-Vis absorption due to the splitting of its $d$ orbitals, thereby allowing electronic transitions. My understanding is that ligands contribute very little to overall observed absorption. This is evident, for example, in copper solutions. Copper(II) chloride, copper(II) sulfate, and copper(II) nitrate – all solutions of complexed ions – are all very similar in color and have similar absorption spectra. I think the simple answer is that transition metals have the capacity to allow for electronic transitions while most ligands do not, but I don't believe it is this simple. Also, it is curious that sulfate, chloride, and nitrate are all colorless when dissolved in solution. My question is: why do ligands have such a small effect on overall absorption of a complexed metal cation? This is absolutely not true. Many ligands can strongly dictate the color of transition metal solutions. Your example picks three simple salts and then questions whether the color of the solutions (dictated largely by $\ce{[Cu(H2O)6]^{2+}}$) is very different. No, because the resulting majority complex in aqueous solution is likely identical. Even taking a simple ammonia complex, e.g. $\ce{[Cu(NH3)4(H2O)2]^{2+}}$ you can see a substantial color change. Many copper acetonitrile compounds are weakly colored or colorless, with UV/Vis optical absorption occurring near the edge of the red to near-IR. When you get into octahedral complexes, you can find substantial changes in color due to MLCT and LMCT absorptions, as well as significant modulation of the d-d transitions due to the ligand (i.e., high-field and low-field ligands).
{ "domain": "chemistry.stackexchange", "id": 6919, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, ions, coordination-compounds, spectrophotometry", "url": null }
quantum-field-theory, differential-geometry, gauge-theory, topological-field-theory, chern-simons-theory Title: Chern-Simons theory on a plane/sphere with a single charge insertion Consider the pure Chern-Simons theory on the plane $\mathbb{R}^2$ with a single charge insertion in some representation $\rho$ of the group $G$. What does the Hilbert space look like? Is it null or non-null for non-integrable representations $\rho$? Below is my attempt at tackling this problem and an outline of the difficulties that I ran into. The same question has a very nice answer for the case of a sphere $S^2$. Here there's no non-contractible loops, thus the loop around the charge insertion must have holonomy of $1$ (the identity element of $G$). Thus it is restricted to the trivial orbit (which is actually a single point), which means that the phase space is a point if $\rho$ is trivial and it doesn't exist if it isn't. Hence, the Hilbert space is 1-dimensional for $\rho$ a trivial representation and 0-dimensional otherwise. Now for $\mathbb{R}^2$ we don't have any restriction on the holonomy around the charge insertion, besides the fact that it must lie on an orbit which belongs to the discrete series (for consistent quantization of the orbit using Kirillov's method). But there's also the gauge invariance – we have to factor out by the gauge group $\mathcal{G}$. It is equivalent to saying that the group $G$ acts on the holonomy by conjugation: $h \rightarrow g h g^{-1}$. This constraint is saying that all points on the orbit are gauge-equivalent, so essentially the entire orbit reduces to just one point on the moduli space. Hence, the Hilbert space must be 1-dimensional for any $\rho$.
{ "domain": "physics.stackexchange", "id": 59427, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, differential-geometry, gauge-theory, topological-field-theory, chern-simons-theory", "url": null }
image-processing, fft, signal-analysis, fourier-transform I thought about it some more and I guess what you would have is aliasing for each pixel equal to the frame rate, and each pixel will have a different amount of phase shift equal to the pixel clock. I feel that the question needs a little bit more clarifying but there probably is enough material here to provide some sort of response and adjust it later if needed. To cut a long story short: Rolling Shutter distortion. This concept is key here (irrespectively of whether or not you have a rolling or global shutter camera that is unsynchronised to the interferometer). Now, I will treat $v, R$ as independent from time here for a minute. Also, $F_{FR}$ will stand for frame rate. Your "problem" seems to be that due to $v$, each pixel row might actually be sampling spectral lines from two radically different wavelengths. If this is true then each one of your frames will acquire a slant whose angle will be proportional to $\frac{v}{F_{FR}}$. In the time dependent version, the frames are capturing a "curve" whose curvature is proportional to $\frac{v(t)}{F_{FR}}$ and...good luck with that. Since you are dealing with interferometry, you are only really imaging ONE single focal plane, therefore, you could apply a simple (anti)-skew transform to counteract the skew caused by the rolling shutter. This, of course, is within limits. In any case, the rolling shutter effect of the camera inserts a phase error in the image which, to an extent, can be counteracted but in the end it all comes down to controlling $v$. Assuming that your camera can maintain a constant $F_{FR}$ for the duration of the scan, then essentially, you don't have a 2D camera. I would like us to imagine it as one long linear $M \times N$ camera (because of $v$ and interferometry) only it is wrapped around with a "wrap" factor that depends on $v$.
{ "domain": "dsp.stackexchange", "id": 3991, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "image-processing, fft, signal-analysis, fourier-transform", "url": null }
nlp, data-mining, word-embeddings, language-model, gpt Title: Can I use LLM to explain codebase? I am a Data Engineer, and I am currently assigned a task to refactor outdated code and rectify any bugs present. However, I am unable to comprehend the code written in the existing codebase. Furthermore, the developers who worked on this codebase did not provide any documentation. Consequently, I am inquiring if there is a feasible method to convert the entire codebase into an extensive text document. Subsequently, I would like to utilize ChatGPT to translate the codebase into a comprehensive document (very long text, with a folder structure tree and the code inside src) that I can use for embedding. I do not require an in-depth explanation of the code; rather, I am seeking a more abstract-level understanding, such as the purpose of specific files, the functionality of particular folders, etc. Sure, many people have done that. You can also ask it to add comments or try to find bugs. Just take into account that LLMs are known for generating bullshit, so the explanations could be mere fabrications and the generated code may not work (in evident or subtle ways). I myself have tried ChatGPT for generating code, but I had to iterate a few times until I got it working. I suggest you prepare some unit tests and integration tests to ensure that everything is working as before ChatGPT's suggested changes. Take into account that the amount of text/code an LLM can use as context is not that large, so you may need to ask multiple times regarding different parts of the code base. There may also be privacy concerns regarding the fact that you are basically sending the source code of your company to a third party, which is something many employers would frown upon.
{ "domain": "datascience.stackexchange", "id": 11605, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nlp, data-mining, word-embeddings, language-model, gpt", "url": null }
python, performance, numpy, simulation, computational-geometry delta_mat = [delta2, delta3] try: part_a = np.linalg.solve(dyad_a, delta_mat) part_b = np.linalg.solve(dyad_b, delta_mat) ground = [complex(pp1[0], pp1[1]) - part_a[0] - part_a[1], complex(pp1[0], pp1[1]) - part_b[0] - part_b[1]] # stored in list as ground_a, wa, za, wb, zb, ground_b; then changes to those four_bar_list = [[ground[0], part_a[0], part_a[1], part_b[0], part_b[1], ground[1]], [ground[0], part_a[0] * np.exp(complex(0, i)), part_a[1] * np.exp(complex(0, alpha2)), part_b[0] * np.exp(complex(0, k)), part_b[1] * np.exp(complex(0, i)), ground[1]], [ground[0], part_a[0] * np.exp(complex(0, j)), part_a[1] * np.exp(complex(0, alpha3)), part_b[0] * np.exp(complex(0, l)), part_b[1] * np.exp(complex(0, j)), ground[1]]] check_lengths(four_bar_list) solutions.append(four_bar_list) except: pass
{ "domain": "codereview.stackexchange", "id": 25040, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, numpy, simulation, computational-geometry", "url": null }
ros, tf2-ros, tf2, transform Originally posted by tfoote with karma: 58457 on 2015-03-05 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Karsten on 2015-03-06: Hi @tfoote thanks for the response that clears things up. I was simply unsure what's the correct way to go. Regarding the datatypes - is there a best practice whether to prefer KDL over TF2 ? Comment by tfoote on 2015-03-06: It's a trade-off. If you only need what currently exists in the tf2 data types it's ok to use them, but they're not expected to evolve or expand. The KDL datatypes have notably more features, but does bring in another dependency. Comment by Dragonslayer on 2020-12-08: Iam at the same point today, as the poster was 5 years ago. Has there been anything done in the past 5 years to make this more convenient?
{ "domain": "robotics.stackexchange", "id": 21054, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, tf2-ros, tf2, transform", "url": null }
java, algorithm, bitwise, vectors, bitset if (rank3 != actualRank) { System.out.printf( "ERROR: i = %d, actual rank = %d, rank1 = %d, " + "rank2 = %d, rank3 = %d.\n", i, actualRank, rank1, rank2, rank3); } assertEquals(actualRank, rank1); assertEquals(actualRank, rank2); assertEquals(actualRank, rank3); assertEquals(actualSelect, select1); } } private static RankSelectBitVector getRandomBitVector(Random random) { RankSelectBitVector bv = new RankSelectBitVector(5973); for (int i = 0; i < bv.getNumberOfSupportedBits(); i++) { if (random.nextDouble() < 0.3) { bv.writeBitOn(i); } } return bv; } private static BruteForceBitVector copy(RankSelectBitVector bv) { BruteForceBitVector referenceBv = new BruteForceBitVector(bv.getNumberOfSupportedBits()); for (int i = 0; i < bv.getNumberOfSupportedBits(); i++) { if (bv.readBit(i)) { referenceBv.writeBitOn(i); } } return referenceBv; } @Test public void toInteger() { RankSelectBitVector bitVector = new RankSelectBitVector(31); assertEquals(0, bitVector.toInteger(20)); bitVector.writeBit(1, true); assertEquals(2, bitVector.toInteger(20)); bitVector.writeBit(2, true); assertEquals(6, bitVector.toInteger(20)); bitVector.writeBit(4, true); assertEquals(22, bitVector.toInteger(20)); } @Test public void readWriteBit() {
{ "domain": "codereview.stackexchange", "id": 45358, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, algorithm, bitwise, vectors, bitset", "url": null }
motors, electromagnetism Title: Physics behind a realistic and theoretical DC motor I am learning about theoretical DC motors where the rotational motion is produced due to torque on either side of a coil. However, after researching real DC motors it seems to me that the winding pattern of the 3 coils would cause torque on each side to approximately cancel out, since the 2 ends of the coil are not on opposite sides of the circle, but rather wound about 1/3 of the armature (see image). Instead, the motion is produced due to the 3 sections of the armature being magnetised as N and S poles to produce an attraction and/or repulsion from the stator poles. However, N and S poles are just an abstraction which apply equivalently to a theoretical DC motor, where one side of the coil is a N pole and the other is a S pole (e.g. in the image below the bottom of the coil is a N pole).
{ "domain": "engineering.stackexchange", "id": 5094, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "motors, electromagnetism", "url": null }
- If possible, can you also edit your answer to include the case $-c \le x_n \le 0$ (if not already sufficient)? – Aryabhata Mar 2 '12 at 1:48 @Aryabhata: That case never arises: we’re given that $c$ and $x_1$ are positive. – Brian M. Scott Mar 2 '12 at 1:54 I know. I just want the answer to cater to the case when $x_1 \lt 0$. I am trying to get this to be one of the abstract parents. See my edits to the question. So it would be great if you can do it (if not, I will edit your answer later). (It is a simple noting that $x_2 \gt 0$ and rest of the argument works, I suppose) – Aryabhata Mar 2 '12 at 1:56 @Aryabhata: Now I understand. Done. – Brian M. Scott Mar 2 '12 at 2:18 Thank you......! – Aryabhata Mar 2 '12 at 2:27 Let $k$ be the positive root to your polynomial. Note that $y=x^2-x-c$ is an upward opening parabola with its vertex below the $x$-axis and an initial downward slope. This implies that positive $x$-values less than $k$ produce negative output, while $x$-values greater than $k$ produce positive output. Note also that all $x_n$ are positive, so it will be acceptable to preserve equalities and inequalities involving $x_n^2$ after taking a square root. If $x_0=k$, then $x_1^2=c+k=k^2$, so $x_1=k$. The sequence continues like this, and is constant.
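The fixed-point behaviour discussed above is easy to check numerically (a sketch; here $k = (1+\sqrt{1+4c})/2$ is the positive root of $x^2 - x - c$):

```python
import math

def iterate_sqrt(c, x0, steps=60):
    """Iterate x_{n+1} = sqrt(c + x_n), the recurrence behind
    sqrt(c + sqrt(c + sqrt(c + ...)))."""
    x = x0
    for _ in range(steps):
        x = math.sqrt(c + x)
    return x

c = 2.0
k = (1 + math.sqrt(1 + 4*c)) / 2    # positive root of x^2 - x - c = 0
print(k)                             # 2.0 for c = 2

# Starting below or above k, the sequence converges to k;
# starting exactly at k, it stays there (the constant sequence).
print(iterate_sqrt(c, 0.5))
print(iterate_sqrt(c, 10.0))
print(iterate_sqrt(c, k))
```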
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.975201841245846, "lm_q1q2_score": 0.8040167928456962, "lm_q2_score": 0.8244619306896955, "openwebmath_perplexity": 105.13754031163828, "openwebmath_score": 0.9563726782798767, "tags": null, "url": "http://math.stackexchange.com/questions/115501/sqrtc-sqrtc-sqrtc-cdots-or-the-limit-of-the-sequence-x-n1-sq" }
game-theory, statistics, monte-carlo The state of the game at any point can be summarized by the pair $(g,r)$, where $g$ is the set of letters that have been guessed so far, and $r$ is the responses (i.e., the sequence of blanks and letters from $g$ that is visible to the player). The order of past guesses doesn't matter (which is why it suffices to have a set $g$ of past guesses). We'll say that a word $w$ is consistent with $(g,r)$ if it remains possible, i.e., if the opponent's word is $w$ and you make the guesses in $g$, then you'd get the response $r$. Let $p(g,r)=1$ if it is possible to win from here, if starting from the state $(g,r)$. That means that there exists a strategy to win: where no matter which word the opponent is thinking of (as long as it is consistent with $(g,r)$), the number of wrong guesses you've made so far, plus the number you make in the future with this strategy, won't exceed the upper limit. Otherwise, define $p(g,r)=0$. Now you can compute $p(g,r)$ with dynamic programming, using the recurrence relation $$p(g,r) = \bigvee_a \bigwedge_{(g',r')} p(g',r'),$$ where $a$ ranges over all letters not in $g$ (i.e., all possibilities for which letter to guess next), and $(g',r')$ ranges over all possible responses if you guess $a$ next (i.e., $g'=g\cup \{a\}$, and we range over all words $w$ that are consistent with $(g,r)$ and compute the response $r'$ to guesses $g'$ if the word is $w$).
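The recurrence above can be sketched as a memoized search (an illustrative sketch with hypothetical helper names; terminal conditions for a fully revealed word and an exhausted wrong-guess budget are added, since the recurrence only covers interior states):

```python
from functools import lru_cache

def hangman_winnable(words, max_wrong):
    """True iff the guesser has a strategy that never makes more than
    max_wrong wrong guesses, whichever consistent word the opponent holds.
    Implements p(g, r) = OR over next letters of AND over responses."""
    words = tuple(words)
    alphabet = sorted(set("".join(words)))

    def reveal(word, guessed):
        return tuple(c if c in guessed else "_" for c in word)

    @lru_cache(maxsize=None)
    def p(guessed, resp, wrong):
        if "_" not in resp:
            return True                  # word fully revealed: win
        if wrong > max_wrong:
            return False                 # budget exceeded: loss
        cands = [w for w in words if reveal(w, guessed) == resp]
        for a in alphabet:
            if a in guessed:
                continue
            g2 = tuple(sorted(set(guessed) | {a}))
            # Guessing `a` partitions the candidates by their response;
            # the branch whose response is unchanged was a wrong guess.
            branches = {reveal(w, g2) for w in cands}
            if all(p(g2, r2, wrong + (r2 == resp)) for r2 in branches):
                return True
        return False

    return p((), ("_",) * len(words[0]), 0)
```

For example, with candidates "cat", "car", "can", any strategy reveals "ca_" for free, and distinguishing the last letter costs at most two wrong guesses in the worst case, so the game is winnable with a budget of 2 wrong guesses but not with 1.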
{ "domain": "cs.stackexchange", "id": 16647, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "game-theory, statistics, monte-carlo", "url": null }
kinematics, inverse, ikfast, openrave, robot Any suggestions as to how I could fix this issue ? SETUP informations: Ubuntu 12.04 LTS 64 bits, ROS hydro, Openrave 0.8 Best regards, Pascal Fortin Originally posted by pascal.fortin on ROS Answers with karma: 16 on 2014-11-04 Post score: 0 Edit (2017-09-26): I've written an updated version of this in #q263925. That also uses a Docker image to avoid having to install OpenRave on the ROS machine (which is non-trivial on current versions of Ubuntu). For some of my 5dof manipulators, I've run into the same issue. What worked for me was to wrap the Collada file describing your manipulator in an OpenRAVE robot definition file (that is not official terminology). This provides OpenRAVE IKFast with enough information to be able to generate a plugin for your robot. This also requires passing different parameters to be passed to the openrave.py script (I used version 0.8): openrave0.8.py --database inversekinematics --robot=/path/to/collada_file_with_manipulator.xml --iktype=translationdirection5d --iktests=100 The iktests parameter value was just a default, you can make it larger or smaller. Unfortunately I cannot find my collada_file_with_manipulator.xml right now, so I cannot provide it to you, but I used something like: <robot file="/path/to/converted.urdf.dae"> <Manipulator name="YOUR_NAME"> ... </Manipulator> </robot> Note that you don't need to manually edit the Collada file you got by converting your urdf, you can reference it in your wrapper model definition using the file attribute. I used the following pages for information: OpenRAVE Custom XML Format, in particular the Defining Manipulators section Translation3D failed to find a variable to solve Originally posted by gvdhoorn with karma: 86574 on 2014-11-12 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 19956, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinematics, inverse, ikfast, openrave, robot", "url": null }
string-theory, mathematical-physics, conformal-field-theory, vector-fields, lie-algebra Title: Why do we consider the Witt algebra to be the symmetry algebra of a classical conformal field theory? In standard physics textbooks, it is usually stated that the Witt algebra is the symmetry algebra of classical conformal field theories in two dimensions. Following M. Schottenloher, A Mathematical Introduction to Conformal Field Theory and this Phys.SE post, we note that a more precise form of the preceding statement is: In Euclidean spacetime, the Lie algebroid of locally defined conformal Killing vector fields, or equivalently, the Lie algebroid of locally defined holomorphic vector fields in the Riemann sphere contains a complex Witt algebra. Why do we use the complex Witt algebra to describe classical symmetries of a ${\rm CFT}_2$? Why not ${\rm LocConfVec}(\mathbb{S}^2)$ or any other Lie subalgebra contained in the Lie algebroid? Well, this is likely because in physics textbooks on CFT in 2+0D (especially in string theory) we are rarely studying the conformal compactification = the Riemann-sphere $\mathbb{S}^2$ per se, but typically a double-punctured Riemann-sphere $\mathbb{S}^2\backslash \{0,\infty\}\cong \mathbb{S}\times \mathbb{R}=$ a cylinder, where the 2 punctures $z=0$ and $z=\infty$ are temporal infinities (= distant past & future). A locally defined holomorphic vector field on $\mathbb{S}^2\backslash \{0,\infty\}$ is then expanded as a (possibly formal) Laurent series $$ \sum_{n\in\mathbb{Z}} a_nz^n \partial ,\qquad a_n~\in~\mathbb{C}. $$ This leads to the complex Witt algebra $L_n = -z^{n+1}\partial$.
{ "domain": "physics.stackexchange", "id": 81188, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "string-theory, mathematical-physics, conformal-field-theory, vector-fields, lie-algebra", "url": null }
algorithms, linear-algebra If the matrix is invertible and complex-valued, then it's just the inverse. Finding the inverse takes $O(n^\omega)$ time, where $\omega$ is the matrix multiplication constant. It is Theorem 28.2 in Introduction to Algorithms 3rd Edition. If the matrix $A$ is complex-valued and has linearly independent rows or columns, then the pseudoinverse matrix can be computed with $A^*(A A^*)^{-1}$ or $(A^* A)^{-1}A^*$ respectively, where $A^*$ is the conjugate transpose of $A$. In particular, this implies an $O(n^\omega)$ time for finding the pseudoinverse of $A$. For a general matrix, the algorithms I have seen use QR decomposition or SVD, which seem to take $O(n^3)$ arithmetic operations in the worst case. Are there algorithms that use fewer operations? First of all, people tend to forget that $\omega$ is an infimum. Whenever we write $O(n^\omega)$, we actually mean for all $\gamma > \omega$, there is an algorithm running in time at most $C_\gamma n^\gamma$, where $C_\gamma$ is a constant depending on $\gamma$ (possibly $C_\gamma \to \infty$ as $\gamma \to \omega$). Keller-Gehrig showed (among other things) how to present a matrix $A$ in rank normal form in time $O(n^\omega)$. If $A$ has rank $r$, then a rank normal form of $A$ is $$ S \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} T $$ for some invertible $S,T$ of the appropriate dimensions; see also Algebraic Complexity Theory, Proposition 16.13 on page 435.
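As a sanity check of the full-row-rank formula (a pure-Python sketch added here; the matrix and helper names are mine, and exact rational arithmetic is used to avoid any numerical ambiguity), $A^+ = A^*(AA^*)^{-1}$ is indeed a right inverse of a small real matrix with independent rows:

```python
from fractions import Fraction

def matmul(X, Y):
    # Plain triple-loop matrix product over exact rationals.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2x2(M):
    # Exact inverse of a 2x2 matrix.
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# A has linearly independent rows (rank 2), so A+ = A^T (A A^T)^{-1}
# (the conjugate transpose reduces to the transpose for a real matrix).
A = [[Fraction(1), Fraction(0), Fraction(1)],
     [Fraction(0), Fraction(1), Fraction(1)]]

At = transpose(A)
pinv = matmul(At, inv2x2(matmul(A, At)))   # the 3x2 pseudoinverse

# Right-inverse property: A @ A+ = I (2x2).
print(matmul(A, pinv))
```

For the independent-columns case one would use $(A^*A)^{-1}A^*$ instead, which is a left inverse by the same argument.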
{ "domain": "cs.stackexchange", "id": 2738, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, linear-algebra", "url": null }
programming-challenge, haskell, functional-programming gridGet :: (Int, Int) -> Int gridGet (row, col) = listifyGrid !! row !! col This is just rearranging your code, except that I've used [Int] instead of (Int,Int,Int,Int) as the type of each number line. I'm not an experienced Haskeller, but I think that, here, the list is considerably easier to deal with than the tuple, at the expense of giving up an explicit length guarantee. Using a list also makes it easy to adapt to other lengths than 4, since it's just a take n. The tuple code, at least structured as you have it right now, would grow linearly. I also get the sense that I don't use function composition (.) enough, and instead use $ for most nested function calls. Is this bad practice? How could I rewrite some of the functions in this using composition? Again, keeping in mind that I haven't done a whole lot of Haskell: I don't see anywhere in your code where $ looks out of place. One place where composition is nice, though, is with map and nested lists. This: listifyGrid :: [[Int]] listifyGrid = map (map read) doubleListOfStrings where doubleListOfStrings = map (S.split " ") $ lines gridOfNumbers could use composition if you want, as well as the words function: listifyGrid = (map . map) read $ map words $ lines gridOfNumbers (map . map) is so elegant that the equivalent in Python, for example, just seems obtuse. Oh, Haskell, you charmer. Anyway, the main problem_11 function: problem_11 :: Direction -> Int problem_11 dir = maximum $ map (\(x1,x2,x3,x4) -> product $ x1:x2:x3:x4:[]) $ numberPairs dir can also be defined as a pointfree composition, if that's your style, especially now that we can use map directly without unwrapping the tuple: problem_11 = maximum . (map product) . quads
{ "domain": "codereview.stackexchange", "id": 17815, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "programming-challenge, haskell, functional-programming", "url": null }
# Recursive solution to the extended Josephus problem [duplicate] The Josephus Problem is described here, with the extension of killing every $$k$$th person. In the simple case where every other person is killed, we can also use the binary trick. w[n_] := FromDigits[RotateLeft[IntegerDigits[n, 2]], 2] The code works well. This page gives good simulation with different values of $$n$$ and $$k$$. I have coded the answers in a recursive way, ClearAll[win]; Table[win[1, i] = 1, {i, 2, 12}]; win[n_, k_: 2] := win[n, k] = Block[{$RecursionLimit = Infinity}, If[Mod[win[n - 1, k] + k, n] == 0, n, Mod[win[n - 1, k] + k, n]]] With $RecursionLimit = Infinity, it still works well up to a certain number, like win[9000] But it won't work for win[50000] And the kernel just quits. I am wondering 1. is there a way to improve the code? 2. is there a way to formulate the generic problem in an easier way like in binary which works for $$k=2$$? ### Update I can't work out win[50000] straight away. But if I start small, it still works and the kernel won't quit, like executing these in order. win[10000] win[20000] win[30000] win[40000] win[50000] works fine. • Related: 69286, 33595 – C. E. Aug 7 '19 at 16:26 • @C.E. Thanks. Looks interesting. Both were approached from a crude way, where numbers just got dropped. Is there any other Mathematical ways like the Binary method? – CasperYC Aug 7 '19 at 22:51 • The implicit answer to that question is no, or people would have used it. – C. E. Aug 8 '19 at 4:07
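The same recurrence can also be run bottom-up, which sidesteps the recursion limit entirely. A sketch in Python (my own addition; the iteration is language-independent and `win` below returns the same 1-indexed survivor as the memoized Mathematica definition):

```python
def win(n, k=2):
    # Bottom-up Josephus: f(1) = 0 (0-indexed), f(m) = (f(m-1) + k) mod m.
    pos = 0
    for m in range(2, n + 1):
        pos = (pos + k) % m
    return pos + 1  # convert to a 1-indexed seat number

def win_binary(n):
    # k = 2 special case: rotate the leading bit of n to the end.
    b = bin(n)[2:]
    return int(b[1:] + b[0], 2)

print(win(41, 3))   # the classic 41-person, every-3rd story: survivor 31
print(win(50000))   # no recursion depth issues at this size
```

The iterative form uses O(1) memory per query, so `win[50000]` (and far beyond) is unproblematic; the binary rotation reproduces it exactly for k = 2.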
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9711290955604489, "lm_q1q2_score": 0.8273561540356756, "lm_q2_score": 0.8519528019683106, "openwebmath_perplexity": 1589.5867406272118, "openwebmath_score": 0.5120452642440796, "tags": null, "url": "https://mathematica.stackexchange.com/questions/203429/recursive-solution-to-the-extended-josephus-problem" }
quantum-field-theory, perturbation-theory, unitarity You define a QFT through its perturbative series (say, in the causal approach if you want to be mathematically rigorous). Here, and when regarded as a formal power series, the perturbative expansion is well-defined regardless of any analytic properties of the Hamiltonian, so there are essentially no conditions on the operators. You analyse the problem in standard QM (one-dimensional QFT, if you will: the only spacetime coordinate is time), and assume that the same formalism should hold in QFT, provided we eventually find a good formulation. A canonical reference for rigorous perturbation theory in QM is Kato's Perturbation Theory for Linear Operators. It is a tough route, so have fun if you want to go there; no guarantee you will find what you're looking for, but it is hard to imagine you will find anything more explicit than this. Some very specific (lower-dimensional) QFTs have been constructed rigorously, from which the perturbative expansion can be derived. The canonical example is Glimm & Jaffe's Quantum Physics: A Functional Integral Point of View. Here the authors deal with two-dimensional (Euclidean) $\phi^4$ theory, which has the key property that normal-ordering is all you need to render it finite. Therefore, you cannot really hope to draw general conclusions from this example but, sadly, we don't have many more rigorous (interacting) QFTs that can be analysed explicitly. Finally, let me mention that a heuristic reason the conditions in the OP are usually assumed is the so-called Gell-Mann and Low theorem, which is sometimes used to justify perturbation theory. This theorem does require the spectrum to be bounded from below, and that interactions are switched on and off adiabatically.
{ "domain": "physics.stackexchange", "id": 58147, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, perturbation-theory, unitarity", "url": null }
newtonian-mechanics, forces, energy, work, free-body-diagram Title: Pushing off another object — why does the other object do work on you without expending any energy? Suppose you are in space (or on a frictionless surface) next to another object of the same mass $m$ as you. Take the reference frame of the center of mass of you and the object (so there is zero velocity initially). Now suppose you push on the object with a constant force $F$ for some time $\Delta t$ to "jump off" the object. During the time $\Delta t$, there is an acceleration $a = F/m$ for both you and the object in opposite directions. The final velocities of you and the object are $v = a\Delta t$. The object has final kinetic energy $KE = \frac{1}{2}ma^{2}\Delta t^{2}$ and you have final kinetic energy $KE = \frac{1}{2}ma^{2}\Delta t^{2}$. Together, there is total kinetic energy $$ KE = ma^{2}\Delta t^{2}. $$ The work done by you on the object is $W = F\cdot \frac{1}{2}a\Delta t^{2} = \frac{1}{2}ma^{2}\Delta t^{2}$, so this is how much potential/chemical energy you've transferred from yourself to the other object in the form of kinetic energy. Now the issue is, this only accounts for half of the total kinetic energy. There is also work done by the object on you, but the issue is, that other object didn't have any potential energy. So how can the other object do work on you, without transferring any energy to you? What is the right way to think about this situation? There is also work done by the object on you, but the issue is, that other object didn't have any potential energy. So how can the other object do work on you, without transferring any energy to you?
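The bookkeeping in the question can be restated numerically (a sketch with arbitrary illustrative values for m, F, and Δt; exact arithmetic so the equalities are literal): the work done by each of the two equal-and-opposite contact forces accounts for exactly one body's kinetic energy, so together they account for the total.

```python
from fractions import Fraction as Fr

m, F, dt = Fr(50), Fr(100), Fr(2)    # arbitrary illustrative values
a = F / m                            # each body's acceleration magnitude
v = a * dt                           # final speed of each body
d = Fr(1, 2) * a * dt**2             # distance each body moves during the push

ke_each = Fr(1, 2) * m * v**2
ke_total = 2 * ke_each               # = m a^2 dt^2, as in the question

w_on_object = F * d                  # work you do on the object
w_on_you = F * d                     # work the reaction force does on you

assert w_on_object == ke_each        # each force accounts for one body's KE
assert w_on_object + w_on_you == ke_total
print(ke_total, w_on_object)
```

This makes the puzzle sharper rather than resolving it: both works are positive and equal, so the question of which energy store the "work on you" draws from is exactly the point the question raises.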
{ "domain": "physics.stackexchange", "id": 100077, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, forces, energy, work, free-body-diagram", "url": null }
electromagnetism, potential, magnetic-fields Title: Linear dependence of magnetic potential on current density I'm a mathematician learning physics to provide some background for my mathematical work (especially pde's!). I have been reading through Jackson's Classical Electrodynamics (3rd edition), and I was puzzled by an assumption he makes. On page 214, he derives the equation $W=\frac12 \int_{V_1} \mathbf{J} \cdot \mathbf{A}$ under the assumption that the magnetic potential $\mathbf{A}$ and the current density $\mathbf{J}$ are related linearly. However, this seems to be a very strict condition, since they are related in each coordinate by the Poisson equation. It seems like only eigenfunctions of the Laplacian could satisfy the linearity condition (after diagonalizing the linear relationship). And yet on the very next page, he uses the above formula for work in the very general setting of a system of $N$ arbitrary circuits. So, my question is, how common is a linear relationship between the vector magnetic potential and the current density? And do Jackson's results hold in the settings he uses them in? The essential physics this encodes is the superposition principle, which is at the heart of classical electromagnetic theory. What this states is that the field from a collection of sources is the vector sum of the fields created by each different source. In particular this means that twice the current generates twice the vector potential and twice the magnetic field, and so on, which boils down to a linear relation between the potentials and fields and the sources that generate them. There is plenty of relatively direct experimental evidence for the superposition principle, but I think the consensus is that it is such a basic element of the theory that it is an essential postulate of its own, and that its validity should be shown in the overall success of the theory. Electromagnetism simply couldn't exist in its current form without it.
Jackson discusses this in detail in pp. 9-13 along with the circumstances (pretty extreme in macroscopic terms) in which it could break.
{ "domain": "physics.stackexchange", "id": 5839, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, potential, magnetic-fields", "url": null }
ros-fuerte, ubuntu-precise, ubuntu or Utilize a virtual machine such as VirtualBox. There is a tool called "vagrant" that allows you to run headless VirtualBox instances from the command line. Hope that helps! Edit: Regarding your second comment: Yes, you cannot simply move binaries built against 32 bit libraries onto a machine with the 64 bit libraries installed. Yes, if the 32 bit versions of the libraries were installed, it would not be a problem. However, as I had stated earlier...it's not very easy to have both versions of the libraries installed side by side when using prepackaged debians. The most common trick is to use a "chroot" environment. "chroot" stands for change root and allows you to turn some folder in your file system into a fake root that is jailed/sandboxed from the rest of the system. Using the "chroot" command, you can enter the jailed root environment. In there you can install 32 bit versions of all the debians you need. Check out your code within the chroot environment, build it there, and most importantly...RUN IT FROM THERE. The other option is using virtualization. You'll incur a little bit of overhead for your application, but it's negligible on modern processors. Here's a great Ubuntu answer explaining chroot a little further: http://askubuntu.com/questions/29665/how-do-i-apt-get-a-32-bit-package-on-a-64-bit-installation Originally posted by mirzashah with karma: 1209 on 2013-08-16 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 15280, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-fuerte, ubuntu-precise, ubuntu", "url": null }
waves, oscillators, string Is the principle the same as for the transverse waves? There are a few reasons why I can't really see that working: the rotational waves in this case are sometimes so large that the flat part goes vertical (i.e. against the wind) or even rotates by multiple revolutions. The most important one: Why is the wavelength of the rotation waves (i.e. the separation of the nodes as can be seen in the picture) so much lower than for the transverse waves? This is an observation that is not obvious from the picture, but usually there are only like 3 or 4 oscillation nodes for the transverse ones and like 30 for the rotational ones. The question will get too broad if I ask more questions about this, so let's focus on the above. I am generally interested in this phenomenon and there are other questions I can't quite answer (e.g. when it is not very windy there aren't any oscillations, and sometimes when the wind is weak oscillations come and go. So why is there a "critical wind speed" at which they start resonating up?). I'm having a problem visualizing the transverse waves with 3 or 4 nodes you mention. All videos I've found show standing waves with only two nodes, the slackline moving up and down between the anchor points (or between an anchor point and the person walking the slackline). The rotation waves I saw (maybe torsion oscillation is a better name) also had only two nodes. The "nodes" in the picture aren't actual nodes, they are points where the twist angle corresponds to the viewing angle. If the middle of the line makes (a bit more than) seven complete rotations, you'll have 14 positions on the left and on the right where the rotation is a multiple of 180 degrees, giving you 28 visual points. Their number indicates the amplitude of the torsional oscillation rather than the number of nodes.
{ "domain": "physics.stackexchange", "id": 77814, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "waves, oscillators, string", "url": null }
database-theory, databases Title: Is a relation with the following characteristic already in 2NF, 3NF and BCNF? The definitions of 2NF, 3NF and BCNF go like this: 2 Normal Form (NF) definition A relation is in second normal form if and only if it is in first normal form and all the non-key attributes are fully functionally dependent on the candidate key. 3 NF - definition A relation schema $R$ is in third normal form (3NF) if, whenever a nontrivial functional dependency $α→β$ holds in $R$, either $α$ is a superkey of $R$, or $β$ is a prime attribute of $R$. BCNF (Boyce–Codd normal form) - definition A relation schema $R$ is in BCNF, if whenever a nontrivial functional dependency $α→β$ holds in $R$, $α$ is a superkey in $R$. Based upon these definitions, I want to know if a relation with the characteristic below is already in some normal form. All candidate keys consist of a single attribute: such a relation is definitely in 2NF, as there cannot be any partial dependency, only full functional dependency on the candidate key. I am more confused about 3NF and BCNF. I feel it's not necessarily in 3NF or BCNF, since there can exist $\alpha \rightarrow \beta$ such that both $\alpha$ and $\beta$ are non-key, violating the 3NF and BCNF definitions. Am I right about this? Consider a relation schema R(A, B, C), with the functional dependencies: A → B B → C In this schema there is only one candidate key (A), and it has a single attribute. But the schema is not in BCNF, since B is not a superkey, nor in 3NF, since C is not a prime attribute. So your intuition is correct.
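The definitions above can be turned into a small closure-based checker (a sketch added here; attribute sets are written as strings of single letters, and the function names are mine). It reproduces the R(A, B, C) verdict from the answer:

```python
from itertools import combinations

def closure(attrs, fds):
    # Attribute closure of `attrs` under the functional dependencies `fds`.
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def candidate_keys(schema, fds):
    # Minimal attribute sets whose closure is the whole schema.
    keys = []
    for r in range(1, len(schema) + 1):
        for combo in combinations(sorted(schema), r):
            if closure(combo, fds) >= schema and \
               not any(k <= set(combo) for k in keys):
                keys.append(set(combo))
    return keys

def is_bcnf(schema, fds):
    # Every nontrivial FD must have a superkey on the left.
    return all(closure(lhs, fds) >= schema
               for lhs, rhs in fds if not set(rhs) <= set(lhs))

def is_3nf(schema, fds):
    # Superkey on the left, or the extra right-hand attributes are prime.
    prime = set().union(*candidate_keys(schema, fds))
    return all(closure(lhs, fds) >= schema or set(rhs) <= set(lhs) | prime
               for lhs, rhs in fds)

schema = set("ABC")
fds = [("A", "B"), ("B", "C")]
print(candidate_keys(schema, fds))                 # [{'A'}]
print(is_bcnf(schema, fds), is_3nf(schema, fds))   # False False
```

As in the answer, the only candidate key is {A} (a single attribute), yet B → C fails both tests: B is not a superkey and C is not prime.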
{ "domain": "cs.stackexchange", "id": 11416, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "database-theory, databases", "url": null }
quantum-mechanics, angular-momentum, quantum-spin, group-theory, representation-theory Title: Is spin 1 described by $SO(3)$ or $SU(2)$? What spin is described by which rotation group? I always only find information about spin-1/2. Quantum spin in nonrelativistic quantum mechanics is generally associated either with the projective unitary representations of the rotation group SO(3) or with the vector unitary representations of the special unitary group SU(2). To be more precise, spin comes naturally from the projective unitary representations of the full 3D Galilei group, but for angular momentum/rotation symmetry purposes it is enough to restrict to a subgroup of it isomorphic to SO(3). Therefore, spin 1 can be described, in a proper (rigged) Hilbert space environment, by the linear unitary representations of the SU(2) group, which are in 1-1 correspondence with the representations of the su(2) Lie algebra by (essentially) self-adjoint operators.
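For concreteness (standard material, added here for illustration), the three-dimensional spin-1 representation of su(2) can be written down explicitly in the $S_z$ eigenbasis:

```latex
S_x = \frac{\hbar}{\sqrt{2}}\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix},\qquad
S_y = \frac{\hbar}{\sqrt{2}}\begin{pmatrix}0&-i&0\\ i&0&-i\\ 0&i&0\end{pmatrix},\qquad
S_z = \hbar\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix},
\qquad [S_i, S_j] = i\hbar\,\epsilon_{ijk} S_k .
```

Because the spin is an integer, this representation of SU(2) is trivial on the center and therefore also descends to a genuine (not merely projective) representation of SO(3), which is why both groups can legitimately be said to describe spin 1.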
{ "domain": "physics.stackexchange", "id": 54004, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, angular-momentum, quantum-spin, group-theory, representation-theory", "url": null }
general-relativity, cosmology, big-bang, black-hole-thermodynamics if we average over large scales the universe looks the same in all directions, that is, it is isotropic if we average over large scales the universe is the same everywhere, that is, it is homogeneous
{ "domain": "physics.stackexchange", "id": 97902, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, cosmology, big-bang, black-hole-thermodynamics", "url": null }
ros, callback, spinonce Originally posted by Jeremy Zoss with karma: 4976 on 2013-06-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by anthonyli on 2013-06-01: Yes, at first I tried to implement the image processing in the callback function, but there is a problem: the image processing (lasting maybe 50 ms) will be disturbed by the next image, and the data will be refreshed earlier than I want. Comment by anthonyli on 2013-06-01: As I described in the beginning of the question, I need to grab an image and then do processing, not be triggered passively by each image. Comment by anthonyli on 2013-06-01: So now my method is getting the cvImage in the callback function and processing it in another function. And there is still the question I asked. Comment by anthonyli on 2013-06-01: To be honest, I had the same thought as you described in the answer, but when I use this method (http://www.ros.org/wiki/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages), the image will always be refreshed. Comment by anthonyli on 2013-06-01: Now I am trying to not use a class (http://www.ros.org/wiki/image_transport/Tutorials/SubscribingToImages); I hope it will work. Thank you! Comment by anthonyli on 2013-06-02: Fine, I finally solved the problem! Thank you ~~
{ "domain": "robotics.stackexchange", "id": 14387, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, callback, spinonce", "url": null }
You get: $$P(B\mid A)=(\frac {(^{13}C_4 \cdot 3+ ^{9}C_4)\cdot ^{13}C_4}{^{52}C_8 })\cdot(\frac{^{52}C_4}{^{13}C_4})$$ • To obtain $P(B \mid A)$, type $P(B \mid A)$. Also, check your last line. – N. F. Taussig Nov 19 '18 at 10:40 • @N.F.Taussig thanks for that, but what's wrong in the last line? – idea Nov 19 '18 at 11:02 • Your probability is greater than $1$. – N. F. Taussig Nov 19 '18 at 11:04 • @N.F.Taussig Yeah...$C(52,8)$. And I later added $C(4,1)$ to select a suit (hearts); but we don't need to, it seems... – idea Nov 19 '18 at 12:20 • @N.F.Taussig And thanks for taking me through this. Got to learn something new. – idea Nov 19 '18 at 12:41 Letting $$X$$ be the event where Bob has $$4$$ cards of the same suit and letting $$H$$ be the event where Ann has four hearts, $$P(X|H)=\frac{P(X\cap H)}{P(H)}$$ where $$P(H)=\frac{\binom{13}{4}}{\binom{52}{4}}$$ and $$P(X \cap H)=\frac{\binom{13}{4}\times \left(\binom{3}{1}\times\binom{13}{4}+\binom{9}{4} \right)}{\binom{52}{4}\times\binom{48}{4}}$$ Substituting into the conditional probability gets the result that I originally posted in my solution.
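The cancellation in the accepted approach can be verified exactly with Python's `math.comb` (a sketch added here, not part of the original answers; exact rationals via `Fraction` so the equality is literal):

```python
from math import comb as C
from fractions import Fraction

# P(H): Ann's 4 cards are all hearts.
p_h = Fraction(C(13, 4), C(52, 4))

# P(X ∩ H): Ann has 4 hearts AND Bob's 4 cards share a suit
# (only 9 hearts remain available for Bob).
p_xh = Fraction(C(13, 4) * (3 * C(13, 4) + C(9, 4)),
                C(52, 4) * C(48, 4))

p_x_given_h = p_xh / p_h
# The C(52,4) and C(13,4) factors cancel, leaving:
assert p_x_given_h == Fraction(3 * C(13, 4) + C(9, 4), C(48, 4))
print(float(p_x_given_h))   # ≈ 0.0117
```

This also makes the comment thread concrete: unlike the first attempt, the result is a legitimate probability strictly between 0 and 1.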
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.984336353585837, "lm_q1q2_score": 0.8047682764936583, "lm_q2_score": 0.8175744739711883, "openwebmath_perplexity": 538.2194609667971, "openwebmath_score": 0.6292099356651306, "tags": null, "url": "https://math.stackexchange.com/questions/3004697/conditional-probability-on-cards-of-the-same-suit" }
c#, expression-trees var methodTypes = method.GetParameters().Select( m => m.ParameterType ); var delegateTypes = delegateInfo.GetParameters().Select( d => d.ParameterType ); // Convert the arguments from the delegate argument type // to the method argument type when necessary. var arguments = methodTypes.Zip( delegateTypes, ( methodType, delegateType ) => { ParameterExpression delegateArgument = Expression.Parameter( delegateType ); return new { DelegateArgument = delegateArgument, ConvertedArgument = methodType != delegateType ? (Expression)Expression.Convert( delegateArgument, methodType ) : delegateArgument }; } ).ToArray(); // Create method call.; MethodCallExpression methodCall = Expression.Call( instance == null ? null : Expression.Constant( instance ), method, arguments.Select( a => a.ConvertedArgument ) ); // Convert return type when necessary. Expression convertedMethodCall = delegateInfo.ReturnType == method.ReturnType ? (Expression)methodCall : Expression.Convert( methodCall, delegateInfo.ReturnType ); return Expression.Lambda<T>( convertedMethodCall, arguments.Select( a => a.DelegateArgument ) ).Compile(); } At first I got the dreaded "variable .. of type ... referenced from scope '' but it is not defined." exception, but after some pondering I realized I had to add ToArray() after the Zip() statement to make sure the delegate arguments would already be defined. Powerful stuff this deferred execution, but apparently also a source for errors. I got the same exception when running smartcaveman's latest update, which is perhaps due to a similar mistake. Writing a custom Zip which takes three arguments removes the need of the anonymous type. Writing the Zip is really easy, as documented by Jon Skeet. public static T CreateCompatibleDelegate<T>( object instance, MethodInfo method ) { MethodInfo delegateInfo = MethodInfoFromDelegateType( typeof( T ) );
{ "domain": "codereview.stackexchange", "id": 220, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, expression-trees", "url": null }
special-relativity, inertial-frames \end{equation} Since we must have \begin{equation} S\mathbf{v}_{0}=\mathbf{v} \tag{B-10} \end{equation} or \begin{equation} \begin{bmatrix} &s_{11}&s_{12}&s_{13}&\\ &s_{21}&s_{22}&s_{23}&\\ &s_{31}&s_{32}&s_{33}& \end{bmatrix} \begin{bmatrix} &1&\\ &0&\\ &0& \end{bmatrix} = \begin{bmatrix} &n_1&\\ &n_2&\\ &n_3& \end{bmatrix} \tag{B-11} \end{equation} then \begin{equation} \begin{bmatrix} &s_{11}&\\ &s_{21}&\\ &s_{31}& \end{bmatrix} = \begin{bmatrix} &n_1&\\ &n_2&\\ &n_3& \end{bmatrix} \tag{B-12} \end{equation} The rows or columns of $\;S\;$ constitute a right orthonormal system, so \begin{equation} SS^{\rm{T}}=I=S^{\rm{T}}S \tag{B-13} \end{equation} and \begin{equation} S^{-1}=S^{\rm{T}} \tag{B-14} \end{equation} The $4\times4$ matrix is in block form \begin{equation} \Bbb{S}\ =\ \begin{bmatrix} & S &\mathbf{0}&\\ &&&\\ &\mathbf{0}^{\rm{T}}&\ \ 1\ \ \ &\\ \end{bmatrix} \tag{B-15} \end{equation} where, as in definitions (A-05) \begin{equation}
{ "domain": "physics.stackexchange", "id": 24147, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity, inertial-frames", "url": null }
homework-and-exercises, electricity, electrical-resistance Here is what I did: $$V=IR=I\cdot\frac{\rho l}{A}=I\cdot\frac{\rho_{0}l}{a}\cdot\frac{l}{\pi a^2}=I\cdot\frac{\rho_0l^2}{\pi a^3}\Rightarrow I=\frac{V\pi a^3}{\rho_0l^2}$$ However, the provided answer is: $$I=\frac{4V\pi a^3}{3\rho_0l^2}$$ So I think that I am missing something integral (no pun intended) or extremely basic with this question, and any help would be appreciated. Your working assumes the resistivity of the material is constant, when it actually varies with r and z. You can consider the cylinder as being made of concentric cylindrical shells, each with resistance $\int_{0}^{l}\frac{{\rho}_0z}{r(2{\pi}r{\delta}r)}dz=\frac{{\rho}_0l^2}{4{\pi}r^2{\delta}r}$. As each of these shells is in parallel with each other, the effective resistance, $R_T$, can be found from $\frac{1}{R_T}=\int_{0}^{a}\frac{4{\pi}r^2dr}{{\rho}_0l^2}=\frac{4{\pi}a^3}{3{\rho}_0l^2}$. Hence for $I=\frac{V}{R_T}$, $I=\frac{4V{\pi}a^3}{3{\rho}_0l^2}$ as required.
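The parallel-shell integral can be sanity-checked numerically (a sketch with made-up values for $a$, $l$, $\rho_0$; since the thin shells are in parallel, their conductances simply add, and a midpoint sum over shells should converge to the closed form):

```python
import math

rho0, a, l, V = 2.0, 0.03, 0.5, 10.0   # arbitrary illustrative values
N = 10_000                             # number of thin shells

# A shell at radius r with thickness dr has resistance rho0*l^2/(4*pi*r^2*dr),
# i.e. conductance 4*pi*r^2*dr/(rho0*l^2); sum conductances over shells.
dr = a / N
G = sum(4 * math.pi * ((i + 0.5) * dr) ** 2 * dr / (rho0 * l ** 2)
        for i in range(N))

G_exact = 4 * math.pi * a ** 3 / (3 * rho0 * l ** 2)
print(G, G_exact)          # the two agree to high relative accuracy
I = V * G_exact            # I = 4 V pi a^3 / (3 rho0 l^2)
print(I)
```

The midpoint sum converges quadratically in 1/N here, so even modest N reproduces the $4\pi a^3/(3\rho_0 l^2)$ result essentially exactly.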
{ "domain": "physics.stackexchange", "id": 40825, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electricity, electrical-resistance", "url": null }
machine-learning, neural-network, classification, training, multilabel-classification Title: How do machine learning models (e.g. neural networks) get better over time from new data? I'm a complete newbie to the world of machine learning, and I'm currently working on implementing a model that will need to incorporate feedback, as well as changes to the data set (both feature & label changes over time). The frequency of change isn't yet entirely known, but for simplicity could probably be rolled into a batch every day or so. I'm aware of how I can build a training & test set, and get a classifier up and running. My primary issue is that it's probably not going to be ideal to run a completely fresh training every time there's a change. Users will be interacting with the system via "this was helpful / not helpful" type feedback, which I want to use to strengthen / weaken its association model. I'm absolutely in the dark as to how, once you have the model from the initial data, you can then get it to refine over time from this sort of feedback, and how to update (i.e. add/remove features & labels) without starting from scratch. tl;dr: What sort of classifier is best suited to this sort of refinement-over-time problem? I'll also add that the model needs to support multi-label classification, so any caveats / gotchas / information on how to do this in the broader context of my question would be helpful too. If you only want to add more examples, you can keep training the machine learning algorithm you had the day before. But if you want to add new features, training it from the beginning is needed. You could train a new ML algorithm using only the new features and mix the outputs, but that is not a good solution; retrain the whole ML algorithm instead. I would use a neural network, which is very intuitive for your case: you calculate a set of weights and save them. When you get new data, you load your old network and calculated set of weights and tune it using the new examples you have.
NNs natively support multilabel classification, and if one day you decide that you want to add a new label you don't need to retrain the whole NN: you can remove the last layer, add a new one, and train only that last layer's weights.
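The "freeze everything, train only the new output layer" idea can be sketched in pure Python (a toy added here, not a real framework: a fixed random tanh layer plays the frozen network, a logistic output layer is trained by gradient descent, and only the output weights ever change):

```python
import math, random

random.seed(0)

def make_net(n_in, n_hidden):
    # Hidden-layer weights are random and stay frozen; only w2 will train.
    W1 = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [0.0] * n_hidden
    return W1, w2

def hidden(W1, x):
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]

def predict(W1, w2, x):
    z = sum(w * h for w, h in zip(w2, hidden(W1, x)))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid output

def loss(W1, w2, data):
    eps = 1e-12
    return -sum(y * math.log(predict(W1, w2, x) + eps)
                + (1 - y) * math.log(1 - predict(W1, w2, x) + eps)
                for x, y in data) / len(data)

def train_output_layer(W1, w2, data, lr=0.1, epochs=300):
    # Plain gradient descent on the cross-entropy, output weights only.
    for _ in range(epochs):
        grad = [0.0] * len(w2)
        for x, y in data:
            h = hidden(W1, x)
            err = predict(W1, w2, x) - y
            for j in range(len(w2)):
                grad[j] += err * h[j]
        w2 = [w - lr * g / len(data) for w, g in zip(w2, grad)]
    return w2

# Toy XOR data; the trailing 1 acts as a bias input.
data = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 0)]
W1, w2 = make_net(3, 20)
before = loss(W1, w2, data)
w2 = train_output_layer(W1, w2, data)
after = loss(W1, w2, data)
print(before, after)   # the loss drops while W1 never changes
```

Because the features are fixed, the problem seen by the trainable layer is convex, which is exactly why retraining only the head of a network is cheap and stable compared with retraining everything.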
{ "domain": "datascience.stackexchange", "id": 2847, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, neural-network, classification, training, multilabel-classification", "url": null }
coordinate, data-analysis, pulsar My understanding is that these variables should be related by the following equations:
tan(G_l) = YY/XX
tan(G_b) = XX/ZZ
However, when I test this assumption my calculated values are very different. I have tried exploring the possibility that the x,y,z coordinate system may be oriented differently than I expect, but I can find no orientation that yields similar results for G_l or G_b. Where could I have gone wrong? I feel like I am losing my mind not being able to convert these with simple trig. The ATNF Pulsar Catalogue's galactic longitude and latitude are heliocentric, but the origin of their rectangular coordinates is near the center of our galaxy. The catalogue documentation, section 6. Distances, says: The Galactocentric coordinate system (XX, YY, ZZ) is right-handed with the Sun at (0.0, 8.5 kpc, 0.0) and the ZZ axis directed toward the north Galactic pole. From a heliocentric point of view (U, V, W) = (8.5 - YY, XX, ZZ). Then $\tan{l} = V / U$ and $\tan{b} = W / \sqrt{U^2 + V^2}$ as you'd expect.
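Putting the answer's formulas into code (a sketch; the function name and test points are mine), using `atan2` so the longitude lands in the correct quadrant:

```python
import math

R0 = 8.5  # kpc, the Sun's Galactocentric distance in this catalogue

def galactic_lb(xx, yy, zz):
    # Heliocentric view of the catalogue's Galactocentric (XX, YY, ZZ).
    u, v, w = R0 - yy, xx, zz
    l = math.degrees(math.atan2(v, u)) % 360.0
    b = math.degrees(math.atan2(w, math.hypot(u, v)))
    return l, b

# Sanity check: the Galactic centre (0, 0, 0) should sit at l = 0, b = 0.
print(galactic_lb(0.0, 0.0, 0.0))

# A point at the Sun's YY with positive XX should sit at l = 90 degrees.
print(galactic_lb(5.0, 8.5, 0.0))
```

This also shows where the original attempt went wrong: the axes are not what `tan(G_l) = YY/XX` assumes, and latitude involves the in-plane distance $\sqrt{U^2+V^2}$, not a single coordinate.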
{ "domain": "astronomy.stackexchange", "id": 2829, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "coordinate, data-analysis, pulsar", "url": null }
search, comparison, efficiency Title: Why is informed search more efficient than uninformed search? Why does informed search find a solution more efficiently than uninformed search? There are several informed and uninformed search algorithms. They do not all have the same time and space complexity (which also depends on the specific implementation). I could come up with an informed search algorithm that is highly inefficient in terms of time or space complexity. So, in general, informed search algorithms are not more efficient than uninformed ones, in terms of space and time complexity. However, given that informed search algorithms use "domain knowledge" (that is, a heuristic function that estimates the distance to the goal nodes), in practice, they tend to find the goal node more rapidly, given a more informed heuristic (which needs to be admissible in order to find the optimal solution). For example, in theory, A* has exponential time and space complexities (with respect to the branching factor and the depth of the tree), but, in practice, it tends to perform decently well. It tends to have a quite "small" effective branching factor (that is, the branching factor for specific problem instances) on several problems. What is a more informed heuristic? Intuitively, it is a heuristic that more rapidly focuses on the promising parts of the search space. Let's denote by $h$ the heuristic function. If $h(n)=0$, for all nodes $n$, then this is an admissible heuristic, because it always underestimates the distance to the goal (that is, it always returns $0$). However, it is a quite uninformed heuristic: whether you are at the start or the goal node, the estimate is always the same (so you cannot distinguish the start and goal nodes in terms of estimates). Given two admissible heuristics $h_1$ and $h_2$, $h_2$ is more informed than $h_1$ if $h_1(n) \leq h_2(n)$, for all nodes $n$.
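The contrast can be made concrete with a toy experiment (my own sketch, not from the original answer): on an empty grid, A* with the Manhattan heuristic and uniform-cost search ($h = 0$, the uninformed case) find the same optimal path cost, but A* expands far fewer nodes. Ties on $f$ are broken toward larger $g$ so the heuristic's guidance is visible.

```python
import heapq

def search(start, goal, size, heuristic):
    # Best-first search with f = g + h; heuristic = 0 gives uniform-cost
    # (uninformed) search, an admissible heuristic gives A*.
    frontier = [(heuristic(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        _, neg_g, node = heapq.heappop(frontier)
        g = -neg_g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry
        expanded += 1
        if node == goal:
            return g, expanded
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + heuristic(nxt), -ng, nxt))
    return None, expanded

size = 30
start, goal = (0, 0), (size - 1, size - 1)
manhattan = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])

cost_astar, n_astar = search(start, goal, size, manhattan)
cost_ucs, n_ucs = search(start, goal, size, lambda n: 0)

print(cost_astar, cost_ucs)   # both optimal: 58
print(n_astar, n_ucs)         # A* expands far fewer nodes
```

Here Manhattan distance is maximally informed (it equals the true distance on an obstacle-free grid), which is the extreme case of the $h_1 \leq h_2$ ordering described above; with obstacles the counts move closer together but the ranking persists.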
{ "domain": "ai.stackexchange", "id": 1078, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "search, comparison, efficiency", "url": null }
• I finally understand, thanks to both you and Mark for the quick answers! – Jack May 10 '14 at 14:18 • @Jack: You're welcome. May 10 '14 at 14:20 Taking the elements coprime with n means those that have no common prime factor with n. This means we take only those that have an inverse (and are not zero divisors, which is saying the same thing dealing with finite groups). In the first example with $$\mathbb{Z}/8$$ , we keep the class representatives 1, 3, 5, 7 and make them into an abelian multiplicative group named $$\mathbb{Z}^*_8$$ . Some use an *, others like Andrew Baker from University of Glasgow use a ×. He would write $$(\mathbb{Z}/8)^×$$. There is a group isomorphism between $$\mathbb{Z}^*_8$$ and $$\mathbb{Z}/2$$ x $$\mathbb{Z}/2$$. This shows the isomorphism. • Your last statement is false; the group $(\Bbb{Z}/8\Bbb{Z})^{\times}$ is not cyclic. Also, non-zero divisors are not necessarily units in general; in finite rings they are though. Jun 18 '19 at 12:32 • You are right I will correct this isomorphism thing. Also, I was taking into account math.n00b previous post's explanation about coprime and zero divisors. I took into account that we were talking about finite groups. Should have made it clear. Thanks for the fast reaction. Jun 18 '19 at 13:00 This is a notational device. The $^*$ is being used to show that the group operation is multiplication, and the elements of the group are the elements of $\mathbb Z_8$ which are coprime to $8$. The identity is $1$.
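Both claims — that the units of $\mathbb{Z}/8$ are exactly $\{1,3,5,7\}$, and that this group is Klein-four rather than cyclic — are easy to brute-force:

```python
from math import gcd

n = 8
units = [a for a in range(1, n) if gcd(a, n) == 1]
print(units)  # [1, 3, 5, 7] — the classes coprime to 8

# Closed under multiplication mod 8, and every element is its own inverse:
closed = all((a * b) % n in units for a in units for b in units)
self_inverse = all((a * a) % n == 1 for a in units)
print(closed, self_inverse)  # True True
```

Since every non-identity element has order 2, the group cannot be cyclic of order 4, which is exactly why it is $\mathbb{Z}/2 \times \mathbb{Z}/2$.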
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.975576912786245, "lm_q1q2_score": 0.8252819189625015, "lm_q2_score": 0.84594244507642, "openwebmath_perplexity": 118.57512977664304, "openwebmath_score": 0.9081557989120483, "tags": null, "url": "https://math.stackexchange.com/questions/789073/what-is-this-notation-cyclic-group-mathbbz-8" }
quantum-field-theory, particle-physics, terminology Title: To what extent does the concept of "particle" in QFT correspond to the concept of "particle" in experimental physics? My understanding of the "particle" concept in Quantum Field Theory is that it describes something

- infinite in extent in space (and also time?)
- having no concept of trajectory (in absolutely any sense)
- (possibly) effectively immutable or having no temporal extent, like mathematical concepts / global statements about a model / coordinates, or possibly a minimal existence, for example just binary existence or cardinality, but certainly no properties at an individual particle level

In experimental physics, notably including areas dealing with the phenomena intimately tied to Q.F.T., there is a concept of a "particle" which follows conventional English usage, as nicely mentioned in an answer to this question:

- localised in space
- having a definite trajectory (under appropriate conditions)
- having definite properties (up to some uncertainty) that vary with time (position, momentum, energy, etc.)
- observed in bubble chambers, as a concrete example
{ "domain": "physics.stackexchange", "id": 46899, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, particle-physics, terminology", "url": null }
kinect, rviz Title: Polygonal Map not working in rviz I've been trying to get PolygonalMapDisplay to work in rviz [specifically with the kinect demos] In rviz the polygonal map display says "plugin from package [mapping_rviz_plugin] not loaded for display class [mapping_rviz_plugin::PolygonalMapDisplay]) I've looked around in the file system and can't find any Polygonal Map stuff. Does anyone know where I might be able to find this plugin/how to get polygonal maps to work in rviz? Thanks! Originally posted by Perchik on ROS Answers with karma: 13 on 2011-06-03 Post score: 0 I had the same problem with the kinect. I solved the problem in this way: "rosmake mapping_rviz_plugin" Regards Originally posted by JosèP with karma: 36 on 2011-06-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Perchik on 2011-06-08: yep, that did it. Thanks!
{ "domain": "robotics.stackexchange", "id": 5745, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "kinect, rviz", "url": null }
Example in 2 dimensions: consider the case where your true region of interest is x+y > 0. Now if your random forest just gets to see the two raw variables, it will grow decision trees like:

if (x < 1):
    if (y > 1):
        if (x < -1):
            ...

Basically, if you plot the resulting decision boundary, it comes out as a step function. However, had a feature x+y been created, the classifier would have been simple (a one-split decision tree).
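The point can be made concrete with a tiny, dependency-free experiment (the threshold search below is a stand-in for what a single decision-tree split does; the variable names and sample sizes are mine):

```python
import random

random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
labels = [x + y > 0 for x, y in pts]

def stump_accuracy(values, labels):
    """Best accuracy achievable by a single threshold split on one feature."""
    best = 0.0
    for t in sorted(set(values)):
        right = sum((v > t) == lab for v, lab in zip(values, labels))
        best = max(best, right / len(labels), 1 - right / len(labels))
    return best

# Best single axis-aligned split vs. a single split on the engineered feature:
axis_best = max(stump_accuracy([x for x, _ in pts], labels),
                stump_accuracy([y for _, y in pts], labels))
engineered = stump_accuracy([x + y for x, y in pts], labels)
print(axis_best, engineered)  # the engineered feature separates perfectly
```

One split on x+y classifies every point correctly, while the best possible single split on x or y alone hovers around 75% accuracy, which is why the forest has to stack many axis-aligned splits into a staircase.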
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9621075744568837, "lm_q1q2_score": 0.8177637079983707, "lm_q2_score": 0.8499711775577736, "openwebmath_perplexity": 2127.0216975947324, "openwebmath_score": 0.4680294096469879, "tags": null, "url": "https://stats.stackexchange.com/questions/187593/does-a-linear-recombination-of-features-affect-random-forest" }
quantum-mechanics, quantum-field-theory, conformal-field-theory, topological-field-theory, topological-order The toric code with dislocations does not host exactly Ising anyons. Dislocations behave like Ising anyons in many ways, but they are extrinsic defects, not true anyonic quasiparticle excitations of the system. Your question is not really about calculating the quantum dimensions (that is just a mathematical definition); the significance of the quantum dimension comes (partly) from its appearance in the topological entanglement entropy (TEE). In http://arxiv.org/abs/1303.4455 they looked at the TEE of a disk with a single twist defect. From the general TQFT argument, we expect the TEE to be $\ln (D/d_\sigma)=\frac{1}{2}\ln 2$, which is what they find. To be more precise, one should think about the general structure of defects. These defects are symmetry defects: for example, the dislocation corresponds to a symmetry of the model that exchanges $e$ and $m$, and we can consider the symmetry group to be $G=\mathbb{Z}_2$. Now for each $g\in G$, we can collect all the "defects" associated with the action of $g$; call them $a_g$, where $a$ labels the different types of $g$ defects (not to be confused with the anyon labels). In the toric code case there are actually two distinct types of dislocations, each of which is Ising-like (see Bombin, for example). Now for each $g$ one should define a total quantum dimension $D_g=\sqrt{\sum_{a_g}d_{a_g}^2}$. However, one can actually prove that $D_g$ is always equal to the total quantum dimension of the anyon model, $D$. I may be too brief, but the punch line is this: consider a region with a single $g$-defect $a_g$ in it. The TEE should be $\ln (D_g/d_{a_g})$. For more details on the general structure you can look at http://arxiv.org/abs/1410.4540.
{ "domain": "physics.stackexchange", "id": 25039, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-field-theory, conformal-field-theory, topological-field-theory, topological-order", "url": null }
For $a > 0$, let $b = \frac12 + \frac1a$ and $I(a)$ be the integral $$I(a) = \int_0^1 \frac{\log(a+x)}{\sqrt{x(1-x)(2-x)}}dx$$ Substituting $x \mapsto \frac{1}{p+\frac12}$, it is easy to check that we can rewrite $I(a)$ as $$I(a) = -\sqrt{2}\int_\infty^{\frac12}\frac{\log\left[a (p + b)/(p + \frac12)\right]}{\sqrt{4p^3 - p}} dp$$ Let $\wp(z), \zeta(z)$ and $\ldots$

Let's take a closer look at your calculations: $$x^6\geq x^8 \Leftrightarrow \ln(x^6)\geq \ln(x^8)$$ Here we must have $x\neq 0$, because $\ln(0)$ is not well-defined. So you have to check whether the inequality holds for $x=0$ separately. (It does hold.) Now you want to use $$\ln(x^6)=6\cdot\ln(x)\geq 8\cdot\ln(x)=\ln(x^8).$$ This only holds for $x>0$, as $\ldots$

Observe that: \begin{align*} \log_5 7 &= \dfrac{3}{3}\log_5 7 \\ &= \dfrac{1}{3}\log_5 7^3 \\ &= \dfrac{1}{3}\log_5 343 \\ &< \dfrac{1}{3}\log_5 625\\ &= \dfrac{1}{3}\log_5 5^4\\ &= \dfrac{4}{3} \end{align*}
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9532750440288019, "lm_q1q2_score": 0.812145346598456, "lm_q2_score": 0.8519528038477825, "openwebmath_perplexity": 628.8438922650452, "openwebmath_score": 0.9998056292533875, "tags": null, "url": "https://math.stackexchange.com/tags/logarithms/hot?filter=all" }
entropy, correlation-functions, molecular-dynamics Title: Two-body correlation function computation TL;DR Summary: How to compute a correlation function from MD data? I'm studying how to compute the excess entropy in molecular dynamics (MD). I've found that one needs to compute the two-body correlation function (neglecting higher-order terms); the details can be found, for example, in this article. So the definition of the correlation function (CF for short) is $$C(t,\vec r, t',\vec r')=\langle X(t,\vec r)Y(t', \vec r')\rangle$$ where the angle brackets mean averaging. First question: is the averaging performed over time or over the ensemble (i.e., over all the atoms in the system)? Second: to compute the CF, do I need a stationary process? I mean, do I need to simulate a steady-state system in MD (presumably to perform time averaging), or can the CF be found from a single time point (using the atom coordinates and velocities at a specific moment)? Third: if, for example, I want to compute the CF for the relative distance $\vec r=\vec r_2-\vec r_1$, where $\vec r_1, \vec r_2$ are the absolute positions of two atoms, what will $X$ and $Y$ be? I'm sorry if I've written something unclear; I'm always ready to clarify the question, and I'd be happy for any help.
{ "domain": "physics.stackexchange", "id": 98206, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "entropy, correlation-functions, molecular-dynamics", "url": null }
relativity Title: Does a light cone look the same from all reference frames? If there were a light cone centered at some point $P$, and you were to look at that light cone from different reference frames, would it change its shape? I know that points inside and outside the light cone would remain inside/outside of the light cone in every frame, but does the light cone itself shift? If it does, how would it shift? No, the light cone does not depend on the frame in which it is viewed. The light cone is the collection of events that are lightlike-separated from $P$. This collection of points is the same in all reference frames because in special relativity the interval is invariant. If you swept out a light cone from $P$ by having a source at $P$ emit a spherical electromagnetic wave and noting where the wavefront was at future times, you would get the same result in all reference frames: the edge of the wavefront is a sphere centered on $P$ with radius $ct$, with $t$ the time since the pulse was emitted. This is true even if the source at $P$ is moving at the time the wavefront is emitted.
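The invariance argument can be written in one line. Define the interval of an event relative to $P$; Lorentz transformations preserve it, so its zero set (the light cone) is frame-independent:

$$\Delta s^2 \equiv c^2\,\Delta t^2-\left|\Delta\vec r\right|^2,\qquad \Delta s'^2=\Delta s^2\quad\Longrightarrow\quad \left(\Delta s^2=0 \iff \Delta s'^2=0\right)$$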
{ "domain": "physics.stackexchange", "id": 1070, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "relativity", "url": null }
javascript, php, jquery, ecmascript-6 <main class="w3-container w3-row" > <section class="canvas w3-container w3-col l6 m9 s12" > <canvas></canvas> </section> <section class="controls w3-container w3-col l3 m3 s6" > <h2 >controls</h2> <nobr > <button id=moveForward >F &#x2191;</button> <button id=Slink >S</button> <input id=step type=text size=1 value=20 onFocus=this.select()> <button id=moveBack >B &#x2193;</button> </nobr> <nobr > <button id=turnLeft >L &#x21b6;</button> <button id=Alink >A</button> <input id=angle type=text size=1 value=90 onFocus=this.select()> <button id=turnRight >R &#x21b7;</button> </nobr> <p> <nobr > <button id=Tlink style="display:none" >Turtle</button> <button id=Nlink >No Turtle</button> </nobr> <p> </section> <section class="info w3-container w3-col l3 m3 s6" > <input id=doodleName type=text size=10 onFocus=this.select()> <button align=right id=save >Save</button> <button align=right id=new >New</button> <h2 ><span align=left>info</span> <span align=right> <button id=undo align=right >Undo</button> </span> </h2> <p><div class=doodle ></div></p> <h2 align=right > <button id=redo >Redo</button> </h2> <p align=right><div class=undo align=right></div></p> </section>
{ "domain": "codereview.stackexchange", "id": 36096, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, php, jquery, ecmascript-6", "url": null }
compressed-sensing Title: Proof of non-existence of the universal archiver? Does anybody know a proof that no algorithm $A$ exists that can reversibly transform every possible finite sequence $S$ into a sequence $C$ of smaller size? Here I assume $S$ and $C$ to be finite bit sequences (or, more generally, finite sequences of elements from some finite set); the algorithm should run in finite time for each sequence $S$ and use finite memory. The same constraints apply to the reverse algorithm $A^{-1}$: it should consume finite memory and "unpack" each sequence in finite time. I guess such a proof would be a trivial one, but I forgot how the formal proof is done. Assume there is a program that maps every sequence of $n$ bits to a sequence of $n-1$ bits. There are $2^n$ sequences with $n$ bits, but only $2^{n-1}$ sequences with $n-1$ bits. Hence there are two sequences $S,S'$ that get mapped to the same sequence $C$. Therefore there can be no algorithm that reverses the transformation.
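The counting behind the pigeonhole argument can be checked mechanically. Below, for $n = 8$, we count all strictly shorter bit strings (a slight generalization of the fixed-length argument above), and then watch a concrete "compressor" — the arbitrary choice of dropping the last bit — collide:

```python
from itertools import product

def bitstrings(length):
    return ["".join(b) for b in product("01", repeat=length)]

n = 8
inputs = bitstrings(n)                                    # 2**n strings of length n
shorter = [s for k in range(n) for s in bitstrings(k)]    # every string of length < n
print(len(inputs), len(shorter))                          # 256 255: one pigeonhole short

# Any candidate compressor must therefore collide, e.g. dropping the last bit:
images = [s[:-1] for s in inputs]
print(len(images) - len(set(images)))                     # 128 inputs become unrecoverable
```

No matter how the map is chosen, 256 inputs cannot fit injectively into 255 possible shorter outputs, so some pair of inputs shares an output and cannot be unpacked.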
{ "domain": "cstheory.stackexchange", "id": 1924, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "compressed-sensing", "url": null }
homework-and-exercises, electrostatics, electric-fields But the correct answer does not multiply by $2\pi$ Correct: $$E=\frac {1}{4 \pi \epsilon _o}\frac {2\pi r \lambda z}{[r^2+z^2]^{3/2}}$$ Why was I wrong? Where did I slip up? Thanks! The length element should be $r d\theta$, not $2\pi r d\theta$. So the charge element is $$dq=\lambda r d\theta$$ but not $$dq=\lambda 2\pi r d\theta.$$
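With $dq=\lambda r\,d\theta$, nothing left in the integrand depends on $\theta$, so integrating around the ring just contributes the factor $2\pi$ and reproduces the quoted answer (writing $E$ for the component on the ring's axis):

$$E=\frac{1}{4\pi\epsilon_0}\int_0^{2\pi}\frac{z\,\lambda r\,d\theta}{\left(r^2+z^2\right)^{3/2}}=\frac{1}{4\pi\epsilon_0}\,\frac{2\pi r\lambda z}{\left(r^2+z^2\right)^{3/2}}$$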
{ "domain": "physics.stackexchange", "id": 34742, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electrostatics, electric-fields", "url": null }
analyst), while the former focuses, in addition to analysis, on building tools of implementation for the models. Applied mathematics is the application of mathematical methods by different fields such as science, engineering, business, computer science, and industry. It is nothing but the process or technique of expressing a system by a set of mathematical equations (algebraic or differential in nature). Mathematical Applications, Modeling and Technology — Michael de Villiers, School of Science, Mathematics & Technology Education. Statistics can be defined as a type of mathematical analysis which involves the method of collecting and analyzing data and then summing up the data into a numerical form for a given set of factual data or real-world observations. Math modeling. Part I offers in-depth coverage of the applications of contemporary conjugate heat transfer models in various industrial and technological processes, from aerospace and nuclear reactors to drying and food processing. Mathematical Modeling: Models, Analysis and Applications covers modeling with all kinds of differential equations, namely ordinary, partial, delay, and stochastic. One of the most amazing things about mathematics is the people who do math aren't usually interested in application, because mathematics itself is truly a beautiful art form. This supports the notion that the TEKS should be learned in a way that integrates the mathematical process standards in an effort to develop fluency. Examples related to the applications of mathematics in physics and engineering such as the projectile problem, distance-time-rate problems and the cycloid are included. Mathematical Programming is one of a number of OR techniques.
Applications of Physics and Geometry to Finance, by Jaehyung Choi, Doctor of Philosophy in Physics, Stony Brook University, 2014. Market anomalies in finance are the most interesting topics to academics and practitioners. This lesson will help you understand mathematical models and how they are used in the context of business. The heavily regulated cell renewal cycle in the colonic crypt provides a good example of how modeling can be used to find out key features. Many everyday activities require the use of mathematical models, perhaps unconsciously.
{ "domain": "charus.de", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.983342958526322, "lm_q1q2_score": 0.8396890891550979, "lm_q2_score": 0.8539127492339909, "openwebmath_perplexity": 1384.1810447561586, "openwebmath_score": 0.29184573888778687, "tags": null, "url": "http://mrlz.charus.de/applications-of-mathematical-modelling.html" }
machine-learning, python, neural-network, keras, tensorflow Title: Using "Demon Adam" as optimizer in Tensorflow I am working with a simple neural network in Google Colab using Python with TensorFlow, where I've only tried the optimizers already available in Keras, such as Nadam, Adam, Adadelta, Adagrad, etc. The best results so far were achieved with Adam. I found an interesting paper, "Demon: Improved Neural Network Training with Momentum Decay", and I thought I'd try it to see if my results can get even better. The first line in the source code reads

class DemonAdam(optimizer.Optimizer):
    def __init__(self, iterations, learning_rate=0.0001, momentum=0.9, rho=0.999,
                 use_locking=False, epsilon=1e-8, name="DemonAdam"):

When changing my optimizer from 'adam' to DemonAdam(250), where 250 = iterations (I'm not sure whether iterations is the same as the number of epochs, but anyway):

model.compile(loss='mse', optimizer = DemonAdam(250), metrics=[tf.keras.metrics.RootMeanSquaredError()])

I get an error in my final line, which runs the NN:

hist = run.fit(X_train_normalized, y_train_normalized, batch_size=100, validation_data=(X_test_normalized, y_test_normalized), epochs=250, verbose=2, callbacks = [learning_decay])

I get this error message:

NotImplementedError: in user code:
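Independent of the Keras wiring issue, the core idea of the paper — decaying the momentum over the iteration budget — is easy to state on its own. Here is a minimal sketch of the decay schedule as I read it from the paper (double-check the exact formula against the source before relying on it): the parameter is chosen so that $\beta_t/(1-\beta_t)$ shrinks linearly from $\beta_{\text{init}}/(1-\beta_{\text{init}})$ to zero over $T$ total steps.

```python
def demon_beta(t, total_steps, beta_init=0.9):
    """Decaying momentum a la Demon (my reading of the paper):
    beta_t satisfies beta_t / (1 - beta_t) = (1 - t/T) * beta_init / (1 - beta_init),
    which makes beta_t fall smoothly from beta_init at t = 0 to 0 at t = T."""
    remaining = 1.0 - t / total_steps
    return beta_init * remaining / ((1.0 - beta_init) + beta_init * remaining)

# The "iterations" argument in DemonAdam(250) plays the role of T here:
schedule = [demon_beta(t, 250) for t in range(251)]
```

Plugging such a schedule into any momentum-based update (SGD with momentum, or the first-moment coefficient of Adam) is what the paper's optimizer does internally.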
{ "domain": "datascience.stackexchange", "id": 10854, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, python, neural-network, keras, tensorflow", "url": null }
solid-state-physics, group-theory, representation-theory, crystals Title: How do you observe "silent" quantum vibrations? In the theory of quantum vibrations (aka phonons) it is useful to divide up the vibrational normal modes of a crystal based on their representation within the symmetry group of the crystal. The representations signify how the phonon will transform under symmetry operations like reflection, rotation, inversion, etc. For example, a particular phonon might have the $A_{1u}$ representation in a cubic crystal group $O_h$, and the subscript $u$ would tell you the phonon is antisymmetric under inversion. Based on the representation, one can usually assign phonons as infrared- or Raman-active based on their symmetry. In a nutshell, the former requires a mode that is antisymmetric (odd) under inversion, while the latter requires one that is symmetric (even). This assignment is useful in actual experiments that use infrared absorption or Raman scattering to predict which phonons should be visible. However, not all representations can be classified as infrared- or Raman-active. In crystals without inversion symmetry, some representations are both infrared- and Raman-active, while others are neither and are classified as silent modes (see Chapter 8.8 of Group Theory by Dresselhaus, page 160). My question is the following: is there a general way, using light, to observe silent phonons? If there is no such method using light, how can these silent modes be observed? I do want to emphasize the word "general" in my question, as it might be possible to observe some silent modes in special cases. What I am interested in is a systematic method for routinely observing all these so-called silent modes. One of the techniques that allows you to probe silent modes is hyper-Raman scattering.
This method is quite similar to the usual Raman one, but it involves a three-photon process: two photons with energy $\omega_i$ excite the system, and one photon with energy $2\omega_i \pm \omega_{phonon}$ is emitted. It is inherently nonlinear and involves a quadratic term in the expansion of the induced polarization of the crystal.
{ "domain": "physics.stackexchange", "id": 86551, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "solid-state-physics, group-theory, representation-theory, crystals", "url": null }
optimization, dynamic-programming, linear-algebra Edit: The idea outlined in my previous edit is still somewhat nonoptimal. The exponentiation by squaring algorithm actually evaluates expressions of the form $K A^n$ or $A^n K$, where $K$ isn't necessarily the identity matrix. But my algorithm doesn't consider the possibility of using the exponentiation by squaring algorithm with $K$ not equal to the identity matrix. Disclaimer: The following method has not been rigorously proven to be optimal. An informal proof is provided. The problem reduces to finding the most efficient ordering when considering the square of the product. For example, when looking at $(ABC)^{50}$, we only need to optimally solve $(ABC)^2$ since this expands to $ABCABC$. No useful ordering information is added by concatenating $ABC$ again. The intuition here is that since the problem of optimal ordering can be solved bottom-up, higher orderings consisting of more elements using the same matrices are irrelevant. Finding the best ordering of $ABCABC$ reduces to the Matrix Chain Multiplication problem. After finding an optimal ordering, apply exponentiation to the triplet (n-tuple generally) in the ordering. For example, if the optimal ordering for the square is $A(B(CA))BC$, the solution to the initial problem is $A(B(CA))^{49}BC$. In summary: 1) The first step in solving $(A_1 A_2 \cdots A_n)^m$ is to solve $(A_1 A_2 \cdots A_n)^2$. 2) Solving $(A_1 A_2 \cdots A_n)^2$ is best approached as an instance of the Matrix Chain Multiplication problem. 3) Using the n-tuple ordering $G$ from the solution in (2) will give us the solution to (1) as some flavor of $A_1 \cdot G^{m-1} \cdot A_2 \cdots A_n$ (note that any other groupings from solving (2) should be applied as well). Informal proof
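For step (2), the standard $O(n^3)$ dynamic program over split points looks like this (a generic sketch, not tied to any particular matrix library):

```python
def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications to compute A_1 * ... * A_n,
    where A_i has shape dims[i-1] x dims[i]. Classic O(n^3) dynamic program:
    cost[i][j] is the optimal cost of multiplying the subchain A_i .. A_j."""
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # subchain lengths, shortest first
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)          # k is the split point
            )
    return cost[1][n]

print(matrix_chain_cost([10, 30, 5, 60]))  # 4500: (A1 A2) A3 beats A1 (A2 A3)
```

Remembering the argmin split $k$ at each cell (instead of just the cost) recovers the parenthesization $G$ used in step (3).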
{ "domain": "cs.stackexchange", "id": 9058, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "optimization, dynamic-programming, linear-algebra", "url": null }
ros-melodic ******************************************************** * MoveGroup using: * - ApplyPlanningSceneService * - ClearOctomapService * - CartesianPathService * - ExecuteTrajectoryAction * - GetPlanningSceneService * - KinematicsService * - MoveAction * - PickPlaceAction * - MotionPlanService * - QueryPlannersService * - StateValidationService ******************************************************** [ INFO] [1581661415.640506792]: MoveGroup context using planning plugin ompl_interface/OMPLPlanner [ INFO] [1581661415.640540185]: MoveGroup context initialization complete There is also an error message that follows this: [ INFO] [1581661418.260400434]: Loading robot model 'myworkcell'... [ WARN] [1581661418.272970984]: Kinematics solver doesn't support #attempts anymore, but only a timeout. Please remove the parameter '/rviz_adriels_computer_11767_3564696681073316907/manipulator/kinematics_solver_attempts' from your configuration. Originally posted by AHJL001 on ROS Answers with karma: 27 on 2020-02-14 Post score: 0 Original comments Comment by gvdhoorn on 2020-02-14: Just a comment: <include file="$(find ur_modern_driver)/launch/ur_common.launch" >
{ "domain": "robotics.stackexchange", "id": 34434, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros-melodic", "url": null }
c++, opencl Pixel indices for a 1D kernel emulating a 2D kernel, for an 8k x 8k example with 64 local threads: unsigned ix = (get_group_id(0) % 1024) * 8 + get_local_id(0) % 8; unsigned iy = (get_group_id(0) / 1024) * 8 + get_local_id(0) / 8; After the data-locality problem is solved, you can optimize buffer copies to see actual compute performance instead of the PCIe bottleneck.
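A cheap sanity check for index arithmetic like this is to emulate it on a scaled-down analog in plain Python — here a 32x32 image with 4x4 tiles and 16 local threads standing in for 8192x8192 / 8x8 / 64 — and confirm every pixel is covered exactly once:

```python
width = 32                    # image is width x width (stand-in for 8192)
tile = 4                      # tile side (stand-in for 8); local size = tile * tile
groups_per_row = width // tile

seen = set()
num_groups = (width * width) // (tile * tile)
for group_id in range(num_groups):
    for local_id in range(tile * tile):
        # same arithmetic as the OpenCL kernel, with 1024 -> groups_per_row, 8 -> tile
        ix = (group_id % groups_per_row) * tile + local_id % tile
        iy = (group_id // groups_per_row) * tile + local_id // tile
        seen.add((ix, iy))

print(len(seen) == width * width)  # True: the mapping is a bijection onto the image
```

If the constants (1024 groups per row, tile side 8) ever change, rerunning this kind of check catches off-by-one and overlap bugs before they hit the GPU.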
{ "domain": "codereview.stackexchange", "id": 24796, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, opencl", "url": null }
nlp Title: Configure BRAT so that if no `.ann` file is found, then an empty `.ann` file is created I am annotating a new corpus with BRAT. I have a set of .txt files to annotate. Is it possible to configure BRAT so that if no .ann file is found, then an empty .ann file is created? Or am I supposed to provide the empty .ann files myself? I run into the same problem with BRAT from time to time. The way I solved it for a folder with .txt files is to run this Bash one-liner directly in the shell when inside the directory with these files: for f in *.txt; do touch "$(basename "$f" .txt).ann"; done What it does is the following: with a for-loop, it iterates over every .txt file and, using touch, creates an empty file with the same basename but a .ann extension instead of a .txt extension.
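If you prefer to stay in Python (e.g. on Windows without a POSIX shell), the same thing can be done with pathlib; the helper name below is mine, not part of BRAT:

```python
from pathlib import Path

def make_empty_ann_files(folder):
    """For every .txt file in `folder`, create an empty sibling .ann file.

    Existing .ann files keep their contents; touch only updates their mtime.
    """
    for txt in sorted(Path(folder).glob("*.txt")):
        txt.with_suffix(".ann").touch(exist_ok=True)
```

Run it once on the corpus directory before opening it in BRAT, e.g. `make_empty_ann_files("my_corpus/")`.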
{ "domain": "datascience.stackexchange", "id": 1776, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nlp", "url": null }
First, note there is a terminology problem in your title: "the exponential family" seems to imply one exponential family. You should say "an exponential family"; there are many exponential families. Well, one consequence of your definition: $$p(\mathbf x|\boldsymbol \eta) = h(\mathbf x) g(\boldsymbol \eta) \exp \{\boldsymbol \eta^\mathrm T \mathbf u(\mathbf x)\}$$ is that the support of the distribution family indexed by the parameter $$\eta$$ does not depend on $$\eta$$. (The support of a probability distribution is the (closure of) the least set with probability one, or in other words, where the distribution lives.) So it is enough to give a counterexample of a distribution family with support depending on the parameter; the simplest example is the following family of uniform distributions: $$\text{U}(0, \eta), \quad \eta > 0$$. (The other answer by @Chaconne gives a more sophisticated counterexample.) Another, unrelated reason that not all distributions belong to an exponential family is that an exponential family distribution always has a moment generating function. Not all distributions have an mgf. Consider the non-central Laplace distribution $$f(x; \mu, \sigma) \propto \exp \left(-| x - \mu | / \sigma \right).$$ Unless $\mu = 0$ you won't be able to write $|x - \mu|$ as an inner product of a function of $\mu$ with a function of $x$. The exponential family does include the vast majority of the nice named distributions that we commonly encounter, so at first it may seem like it has everything of interest, but it is by no means exhaustive. Both the existing answers are good, but just to try to add a bit of intuition about what is going on here.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9511422241476943, "lm_q1q2_score": 0.8212082145492495, "lm_q2_score": 0.8633916082162403, "openwebmath_perplexity": 237.25269586975656, "openwebmath_score": 0.8464807271957397, "tags": null, "url": "https://stats.stackexchange.com/questions/295363/why-doesnt-the-exponential-family-include-all-distributions" }
c#, sql, sql-server, benchmarking namespace AEDemo { class Program { static void Main(string[] args) { using (SqlConnection con1 = new SqlConnection()) { Console.WriteLine(DateTime.UtcNow.ToString("hh:mm:ss.fffffff")); string name; string EmptyString = ""; string conString = ConfigurationManager.ConnectionStrings[args[0]].ToString(); int salary; int i = 1; while (i <= 100000) { con1.ConnectionString = conString; using (SqlCommand cmd1 = new SqlCommand("dbo.GenerateNameAndSalary", con1)) { cmd1.CommandType = CommandType.StoredProcedure; SqlParameter n = new SqlParameter("@Name", SqlDbType.NVarChar, 32) { Direction = ParameterDirection.Output }; SqlParameter s = new SqlParameter("@Salary", SqlDbType.Int) { Direction = ParameterDirection.Output }; cmd1.Parameters.Add(n); cmd1.Parameters.Add(s); con1.Open(); cmd1.ExecuteNonQuery(); name = n.Value.ToString(); salary = Convert.ToInt32(s.Value); con1.Close(); }
{ "domain": "codereview.stackexchange", "id": 15169, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, sql, sql-server, benchmarking", "url": null }